How to scrape multiple webpages without overwriting the results?
I've just started out trying to scrape multiple webpages from Transfermarkt without overwriting the previous results.
I know this question has been asked before, but I can't get it to work for my case.
from bs4 import BeautifulSoup as bs
import requests
import re
import pandas as pd
import itertools

headers = {'User-Agent' : 'Mozilla/5.0'}

df_headers = ['position_number' , 'position_description' , 'name' , 'dob' , 'nationality' , 'height' , 'foot' , 'joined' , 'signed_from' , 'contract_until']

urls = ['https://www.transfermarkt.com/fc-bayern-munich-u17/kader/verein/21058/saison_id/2018/plus/1', 'https://www.transfermarkt.com/fc-hennef-05-u17/kader/verein/48776/saison_id/2018/plus/1']

for url in urls:
    r = requests.get(url, headers = headers)
    soup = bs(r.content, 'html.parser')

    position_number = [item.text for item in soup.select('.items .rn_nummer')]
    position_description = [item.text for item in soup.select('.items td:not([class])')]
    name = [item.text for item in soup.select('.hide-for-small .spielprofil_tooltip')]
    dob = [item.text for item in soup.select('.zentriert:nth-of-type(3):not([id])')]
    nationality = ['/'.join([i['title'] for i in item.select('[title]')]) for item in soup.select('.zentriert:nth-of-type(4):not([id])')]
    height = [item.text for item in soup.select('.zentriert:nth-of-type(5):not([id])')]
    foot = [item.text for item in soup.select('.zentriert:nth-of-type(6):not([id])')]
    joined = [item.text for item in soup.select('.zentriert:nth-of-type(7):not([id])')]
    signed_from = ['/'.join([item.find('img')['title'].lstrip(': '), item.find('img')['alt']]) if item.find('a') else ''
                   for item in soup.select('.zentriert:nth-of-type(8):not([id])')]
    contract_until = [item.text for item in soup.select('.zentriert:nth-of-type(9):not([id])')]

df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality, height, foot, joined, signed_from, contract_until)), columns = df_headers)

print(df)

df.to_csv(r'Uljanas-MacBook-Air-2:~ uljanadufour$\bayern-munich123.csv')
Once scraped, it would also be helpful to be able to tell the webpages apart.
Any help would be much appreciated.
Two possible approaches:
You can add a timestamp to the filename so that each run of your script creates a different CSV file:
from datetime import datetime

timestamp = datetime.now().strftime("%Y-%m-%d %H.%M.%S")
df.to_csv(rf'Uljanas-MacBook-Air-2:~ uljanadufour$\{timestamp} bayern-munich123.csv')
This will give you files in the format:
"2019-05-08 10.39.05 bayern-munich123.csv"
By using a year-month-day format, your files will automatically sort into chronological order.
Alternatively, you can use append mode to add to your existing CSV file:
df.to_csv(r'Uljanas-MacBook-Air-2:~ uljanadufour$\bayern-munich123.csv', mode='a')
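Note that pandas writes a header row on every to_csv() call, so plain append mode will repeat the header each time the script runs. A minimal sketch of one way to guard against that (assuming a plain local output path, here a hypothetical bayern-munich123.csv in the working directory):

import os
import pandas as pd

out_path = 'bayern-munich123.csv'  # hypothetical local path used for illustration

df = pd.DataFrame({'name': ['Example Player'], 'dob': ['Jan 1, 2002']})  # stand-in for the scraped frame

# Write the header only when the file does not exist yet, so repeated
# appends do not insert duplicate header rows into the CSV.
df.to_csv(out_path, mode='a', header=not os.path.exists(out_path), index=False)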
Finally, your current code only saves the last URL. If you want to save each URL to a different file, you need to move the final lines inside the loop (indent them). You can add a number to the filename to distinguish each URL, e.g. 1 or 2, as below. Python's enumerate() function can be used to provide a number for each URL:
from datetime import datetime
from bs4 import BeautifulSoup as bs
import requests
import re
import pandas as pd
import itertools

headers = {'User-Agent' : 'Mozilla/5.0'}

df_headers = ['position_number' , 'position_description' , 'name' , 'dob' , 'nationality' , 'height' , 'foot' , 'joined' , 'signed_from' , 'contract_until']

urls = [
    'https://www.transfermarkt.com/fc-bayern-munich-u17/kader/verein/21058/saison_id/2018/plus/1',
    'https://www.transfermarkt.com/fc-hennef-05-u17/kader/verein/48776/saison_id/2018/plus/1'
]

for index, url in enumerate(urls, start=1):
    r = requests.get(url, headers=headers)
    soup = bs(r.content, 'html.parser')

    position_number = [item.text for item in soup.select('.items .rn_nummer')]
    position_description = [item.text for item in soup.select('.items td:not([class])')]
    name = [item.text for item in soup.select('.hide-for-small .spielprofil_tooltip')]
    dob = [item.text for item in soup.select('.zentriert:nth-of-type(3):not([id])')]
    nationality = ['/'.join([i['title'] for i in item.select('[title]')]) for item in soup.select('.zentriert:nth-of-type(4):not([id])')]
    height = [item.text for item in soup.select('.zentriert:nth-of-type(5):not([id])')]
    foot = [item.text for item in soup.select('.zentriert:nth-of-type(6):not([id])')]
    joined = [item.text for item in soup.select('.zentriert:nth-of-type(7):not([id])')]
    signed_from = ['/'.join([item.find('img')['title'].lstrip(': '), item.find('img')['alt']]) if item.find('a') else ''
                   for item in soup.select('.zentriert:nth-of-type(8):not([id])')]
    contract_until = [item.text for item in soup.select('.zentriert:nth-of-type(9):not([id])')]

    df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality, height, foot, joined, signed_from, contract_until)), columns = df_headers)

    timestamp = datetime.now().strftime("%Y-%m-%d %H.%M.%S")
    df.to_csv(rf'{timestamp} bayern-munich123_{index}.csv')
This would give you filenames such as:
"2019-05-08 11.44.38 bayern-munich123_1.csv"
Your code above scrapes the data for each URL, parses it without putting it into a dataframe, and then moves on to the next URL. Since your call to pd.DataFrame() occurs outside the loop, you are building a dataframe from the page data of the last URL in urls only.
You need to create a dataframe outside the for-loop, and then append the incoming data for each URL to this dataframe.
from bs4 import BeautifulSoup as bs
import requests
import re
import pandas as pd
import itertools

headers = {'User-Agent' : 'Mozilla/5.0'}

df_headers = ['position_number' , 'position_description' , 'name' , 'dob' , 'nationality' , 'height' , 'foot' , 'joined' , 'signed_from' , 'contract_until']

urls = ['https://www.transfermarkt.com/fc-bayern-munich-u17/kader/verein/21058/saison_id/2018/plus/1', 'https://www.transfermarkt.com/fc-hennef-05-u17/kader/verein/48776/saison_id/2018/plus/1']

#### Add this before for-loop. ####
# Create empty dataframe with expected column names.
df_full = pd.DataFrame(columns = df_headers)

for url in urls:
    r = requests.get(url, headers = headers)
    soup = bs(r.content, 'html.parser')

    position_number = [item.text for item in soup.select('.items .rn_nummer')]
    position_description = [item.text for item in soup.select('.items td:not([class])')]
    name = [item.text for item in soup.select('.hide-for-small .spielprofil_tooltip')]
    dob = [item.text for item in soup.select('.zentriert:nth-of-type(3):not([id])')]
    nationality = ['/'.join([i['title'] for i in item.select('[title]')]) for item in soup.select('.zentriert:nth-of-type(4):not([id])')]
    height = [item.text for item in soup.select('.zentriert:nth-of-type(5):not([id])')]
    foot = [item.text for item in soup.select('.zentriert:nth-of-type(6):not([id])')]
    joined = [item.text for item in soup.select('.zentriert:nth-of-type(7):not([id])')]
    signed_from = ['/'.join([item.find('img')['title'].lstrip(': '), item.find('img')['alt']]) if item.find('a') else ''
                   for item in soup.select('.zentriert:nth-of-type(8):not([id])')]
    contract_until = [item.text for item in soup.select('.zentriert:nth-of-type(9):not([id])')]

    #### Add this to for-loop. ####
    # Create a dataframe for page data.
    df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality, height, foot, joined, signed_from, contract_until)), columns = df_headers)

    # Add page URL to index of page data.
    df.index = [url] * len(df)

    # Append page data to full data.
    df_full = df_full.append(df)

print(df_full)
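A note beyond the original answer: DataFrame.append was deprecated and then removed in pandas 2.0, so on a current pandas install the line df_full = df_full.append(df) raises an AttributeError. The usual replacement is to collect the per-page frames in a list and call pd.concat once after the loop; a rough sketch of how the tail of the loop above would change:

frames = []

for url in urls:
    # ... same scraping and parsing code as in the loop above ...

    df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality, height, foot, joined, signed_from, contract_until)), columns = df_headers)
    df.index = [url] * len(df)
    frames.append(df)   # plain list append inside the loop

# One concatenation after the loop replaces the repeated DataFrame.append calls.
df_full = pd.concat(frames)
print(df_full)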