Crawling csv files from a url with dropdown list?

I am trying to scrape monthly data (CSV files) from Weather Canada.

Normally you have to select the year/month/day from dropdown lists, click "Go", and then click the "Download Data" button to get the selected month and year, as shown below. I would like to download the data files in CSV format for all available month/year combinations from Python (BeautifulSoup 4).

I tried to modify some code from another question, but without success. Please help.

from bs4 import BeautifulSoup  # Python 3.x
from urllib.request import urlopen, urlretrieve

# Removed the trailing / from the URL
urlJan2020 = '''https://climate.weather.gc.ca/climate_data/hourly_data_e.html?hlyRange=2004-09-24%7C2020-03-03&dlyRange=2018-05-14%7C2020-03-03&mlyRange=%7C&StationID=43403&Prov=NS&urlExtension=_e.html&searchType=stnProx&optLimit=yearRange&StartYear=1840&EndYear=2020&selRowPerPage=25&Line=0&txtRadius=50&optProxType=city&selCity=44%7C40%7C63%7C36%7CHalifax&selPark=&txtCentralLatDeg=&txtCentralLatMin=0&txtCentralLatSec=0&txtCentralLongDeg=&txtCentralLongMin=0&txtCentralLongSec=0&txtLatDecDeg=&txtLongDecDeg=&timeframe=1&Year=2020&Month=1&Day=1#'''
u = urlopen(urlJan2020)
try:
    html = u.read().decode('utf-8')
finally:
    u.close()

soup = BeautifulSoup(html, "html.parser")

# Select all A elements that have an href attribute, starting with http://
for link in soup.select('a[href^="http://"]'):
    href = link.get('href')
    if not any(href.endswith(x) for x in ['.csv','.xls','.xlsx']):
        continue

    filename = href.rsplit('/', 1)[-1]

    # You don't need to join + quote as URLs in the HTML are absolute.
    # However, we need a https:// URL (in spite of what the link says: check request in your web browser's developer tools)
    href = href.replace('http://','https://')

    print("Downloading %s to %s..." % (href, filename) )
    urlretrieve(href, filename)
    print("Done.")
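For what it's worth, the loop above never matches anything on that station page: the CSVs are served through a form submission, not plain `<a href="http://…">` links, so the selector comes back empty and the script finishes silently. A minimal check (using a small inline HTML snippet as a stand-in for the real page) shows what the selector does and does not match:

```python
from bs4 import BeautifulSoup

# Stand-in snippet: one relative link, one absolute http:// link.
html = '<a href="/relative">a</a> <a href="http://example.test/data.csv">b</a>'
soup = BeautifulSoup(html, "html.parser")

# Only anchors whose href literally starts with "http://" are selected;
# relative links (and https:// links) are skipped.
matches = [a["href"] for a in soup.select('a[href^="http://"]')]
print(matches)
```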
from bs4 import BeautifulSoup
import requests


def main():
    with requests.Session() as req:
        # Hourly data for station 43403, every month of 2019 and 2020.
        for year in range(2019, 2021):
            for month in range(1, 13):
                r = req.post(
                    f"https://climate.weather.gc.ca/climate_data/bulk_data_e.html?format=csv&stationID=43403&Year={year}&Month={month}&Day=1&timeframe=1&submit=Download+Data")
                # The CSV filename is the tail of the Content-Disposition
                # header, minus its closing quote.
                name = r.headers.get(
                    "Content-Disposition").split("_", 5)[-1][:-1]
                with open(name, 'w') as f:
                    f.write(r.text)
                print(f"Saved {name}")


main()
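Note that the `split("_", 5)[-1][:-1]` trick above depends on the exact shape of the Content-Disposition value; if the filename pattern ever changes, it will mangle the name. A less brittle sketch (the helper name `filename_from_disposition` is mine, not part of the answer) pulls the whole filename parameter out with a regex:

```python
import re

def filename_from_disposition(header, default="download.csv"):
    """Return the filename="..." parameter of a Content-Disposition
    header, or a default when the parameter is absent."""
    match = re.search(r'filename="?([^";]+)"?', header or "")
    return match.group(1) if match else default

# A header of the shape the weather site appears to send back:
header = 'attachment; filename="en_climate_hourly_NS_8202251_01-2020_P1H.csv"'
print(filename_from_disposition(header))
# → en_climate_hourly_NS_8202251_01-2020_P1H.csv
```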