BeautifulSoup: Merge tables and export to .csv
I have been trying to download data from different URLs and then save it to a CSV file.
The idea is to extract the annual/quarterly data from:
https://www.marketwatch.com/investing/stock/MMM/financials/
Annual:
https://www.marketwatch.com/investing/stock/MMM/financials/cash-flow
Quarterly:
https://www.marketwatch.com/investing/stock/MMM/financials/cash-flow/quarter
Using the following code:
import requests
import pandas as pd

urls = ['https://www.marketwatch.com/investing/stock/AAPL/financials/cash-flow',
        'https://www.marketwatch.com/investing/stock/MMM/financials/cash-flow']

def main(urls):
    with requests.Session() as req:
        goal = []
        for url in urls:
            r = req.get(url)
            df = pd.read_html(
                r.content, match="Cash Dividends Paid - Total")[0].iloc[[0], 0:3]
            goal.append(df)
        new = pd.concat(goal)
        print(new)

main(urls)
Output:
I can extract the information I need (for the 2 companies in the 2015 and 2016 example), but only for one set (quarterly or annual).
I would like to merge the annual and quarterly tables.
For that, I came up with this code:
import requests
import pandas as pd
from urllib.request import urlopen
from bs4 import BeautifulSoup
import csv

html = urlopen('https://www.marketwatch.com/investing/stock/MMM/financials/')
soup = BeautifulSoup(html, 'html.parser')
ids = ['cash-flow', 'cash-flow/quarter']

with open("news.csv", "w", newline="", encoding='utf-8') as f_news:
    csv_news = csv.writer(f_news)
    csv_news.writerow(["A"])
    for id in ids:
        a = soup.find("Cash Dividends Paid - Total", id=id)
        csv_news.writerow([a.text])
But I get the following error.

The error says the element has no attribute text; the method to use is get_text():

csv_news.writerow([a.get_text()])

https://www.crummy.com/software/BeautifulSoup/bs4/doc/#get-text

But here it really means that your soup.find() did not find the element you wanted: a is None.
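The failure can be reproduced without scraping anything: the first argument of soup.find() is a tag name, so searching for a tag literally named "Cash Dividends Paid - Total" matches nothing and returns None, and accessing .text on that None raises the AttributeError. A minimal sketch:

```python
# soup.find() returns None when no tag matches; the AttributeError
# comes from then accessing .text on that None.
a = None  # what soup.find("Cash Dividends Paid - Total", id=...) yields here
try:
    a.text
except AttributeError as exc:
    msg = str(exc)

print(msg)  # e.g. 'NoneType' object has no attribute 'text'
```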
Why do you need an id at all? I looked at the annual page (dated May 19), and there is no need to use an id.
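To merge the annual and quarterly tables, it is simpler to stay with pandas from the first snippet: read each page with pd.read_html, label each resulting frame with its period, and pd.concat them before writing the CSV. The sketch below shows only the merge-and-export step, using hand-made frames with invented values in place of the scraped ones:

```python
import pandas as pd

# Stand-ins for the frames pd.read_html would return from the annual
# and quarterly cash-flow pages (the values here are invented).
annual = pd.DataFrame({"Item": ["Cash Dividends Paid - Total"],
                       "2015": ["-10.9B"], "2016": ["-12.15B"]})
quarterly = pd.DataFrame({"Item": ["Cash Dividends Paid - Total"],
                          "2015": ["-2.9B"], "2016": ["-3.1B"]})

# Tag each frame with its period so rows stay distinguishable after the merge.
annual["Period"] = "annual"
quarterly["Period"] = "quarter"

# Stack the two tables into one and export it, as the question asks.
merged = pd.concat([annual, quarterly], ignore_index=True)
merged.to_csv("merged.csv", index=False)
print(merged)
```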