Avoid copying some content while scraping through pages

I am having some difficulty saving the results I am scraping. Please refer to this code (it has been changed slightly for my specific case):

import bs4, requests
import pandas as pd
import re
import time

headline=[]
corpus=[]
dates=[]
tag=[]  

start=1
url="https://www.imolaoggi.it/category/cron/"

while True:
    r = requests.get(url)
    soup = bs4.BeautifulSoup(r.text, 'html')


    headlines=soup.find_all('h3')
    corpora=soup.find_all('p') 
    dates=soup.find_all('time', attrs={'class':'entry-date published updated'}) 
    tags=soup.find_all('span', attrs={'class':'cat-links'})
    for t in headlines:
        headline.append(t.text)
    
    for s in corpora:
        corpus.append(s.text)
        
    for d in date:
        dates.append(d.text)
    
    for c in tags:
        tag.append(c.text)
    if soup.find_all('a', attrs={'class':'page-numbers'}):
      url = f"https://www.imolaoggi.it/category/cron/page/{page}"
      page +=1
    else:
      break

Create the dataframe:

df = pd.DataFrame(list(zip(date, headline, tag, corpus)), 
               columns =['Date', 'Headlines', 'Tags', 'Corpus']) 
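One pitfall with this pattern is that `zip` stops at the shortest input, so if any of the four lists picked up extra or missing items, rows are silently dropped or shifted instead of raising an error. A minimal sketch with made-up data:

```python
# zip() truncates to the shortest input, so a missing headline silently
# drops a row instead of failing loudly (dummy data, not from the site).
dates_demo = ['30 Ottobre 2020', '29 Ottobre 2020', '28 Ottobre 2020']
headlines_demo = ['Title A', 'Title B']  # one headline was not scraped

rows = list(zip(dates_demo, headlines_demo))
print(rows)  # only two rows survive; the third date is lost without warning
```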

I would like to save all the pages from this link. The code works, but it seems to write two identical sentences for the corpus every time (i.e. on every page):

I think it is because of the tag I chose:

corpora=soup.find_all('p') 

This causes misaligned rows in my dataframe, because the data are saved in lists, and the corpus only starts being scraped correctly later on compared to the other fields.

I hope you can help me understand how to fix it.

import requests
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor
import pandas as pd


def main(req, num):
    r = req.get("https://www.imolaoggi.it/category/cron/page/{}/".format(num))
    soup = BeautifulSoup(r.content, 'html.parser')
    goal = [(x.time.text, x.h3.a.text, x.select_one("span.cat-links").get_text(strip=True), x.p.get_text(strip=True))
            for x in soup.select("div.entry-content")]
    return goal


with ThreadPoolExecutor(max_workers=30) as executor:
    with requests.Session() as req:
        fs = [executor.submit(main, req, num) for num in range(1, 2937)]
        allin = []
        for f in fs:
            allin.extend(f.result())
        df = pd.DataFrame.from_records(
            allin, columns=["Date", "Title", "Tags", "Content"])
        print(df)
        df.to_csv("result.csv", index=False)
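Extracting all four fields row by row from each `div.entry-content`, as above, is what keeps the columns aligned: every record is built from a single post. One caveat is that if a post happens to lack one of the elements, attribute access like `x.p` returns `None` and `.get_text()` raises `AttributeError`. A hedged sketch (hypothetical HTML, same structure assumed as in the code above) showing per-field guards:

```python
from bs4 import BeautifulSoup

# A post that is missing its <p> paragraph, to show the failure mode.
html = """
<div class="entry-content">
  <time>30 Ottobre 2020</time>
  <h3><a>Some title</a></h3>
  <span class="cat-links">CRONACA</span>
</div>
"""

soup = BeautifulSoup(html, 'html.parser')
records = []
for x in soup.select("div.entry-content"):
    # Guard every field so a missing element yields None instead of
    # raising AttributeError and killing the whole page's worth of rows.
    cat = x.select_one("span.cat-links")
    records.append((
        x.time.get_text(strip=True) if x.time else None,
        x.h3.a.get_text(strip=True) if x.h3 and x.h3.a else None,
        cat.get_text(strip=True) if cat else None,
        x.p.get_text(strip=True) if x.p else None,
    ))

print(records)  # [('30 Ottobre 2020', 'Some title', 'CRONACA', None)]
```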

You were close, but your selectors were off and you mis-named some of your variables.

I would use CSS selectors like this:

headline=[]
corpus=[]
date_list=[]
tag_list=[]  


headlines=soup.select('h3.entry-title')
corpora=soup.select('div.entry-meta + p') 
dates=soup.select('div.entry-meta  span.posted-on')
tags=soup.select('span.cat-links')

for t in headlines:
    headline.append(t.text)

for s in corpora:
    corpus.append(s.text.strip())

for d in dates:
    date_list.append(d.text)

for c in tags:
    tag_list.append(c.text)

df = pd.DataFrame(list(zip(date_list, headline, tag_list, corpus)),
                  columns=['Date', 'Headlines', 'Tags', 'Corpus'])
df

Output:

    Date    Headlines   Tags    Corpus
0   30 Ottobre 2020     Roma: con spranga di ferro danneggia 50 auto i...   CRONACA, NEWS   Notte di vandalismi a Colli Albani dove un uom...
1   30 Ottobre 2020\n30 Ottobre 2020    Aggressione con machete: grave un 28enne, arre...   CRONACA, NEWS   Roma - Ha impugnato il suo machete e lo ha agi...
2   30 Ottobre 2020\n30 Ottobre 2020    Deep State e globalismo, Mons. Viganò scrive a...   CRONACA, NEWS   LETTERA APERTA\r\nAL PRESIDENTE DEGLI STATI UN...
3   30 Ottobre 2020     Meluzzi e Scandurra: “Sacrificare libertà per ...   CRONACA, NEWS   "Sacrificare la libertà per la sicurezza è un ...
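If you keep this list-based approach, it also helps to verify that all four lists ended up the same length before zipping, since a mismatch is exactly what produces shifted rows. A small sketch (the helper name `check_aligned` is hypothetical, not part of the code above):

```python
def check_aligned(**columns):
    """Raise if the scraped column lists differ in length, which would
    make zip() silently truncate and misalign the dataframe rows."""
    lengths = {name: len(col) for name, col in columns.items()}
    if len(set(lengths.values())) > 1:
        raise ValueError(f"misaligned columns: {lengths}")
    return lengths

# Dummy lists standing in for the scraped results:
ok = check_aligned(dates=[1, 2], headlines=['a', 'b'])
print(ok)  # {'dates': 2, 'headlines': 2}

try:
    check_aligned(dates=[1, 2], headlines=['a'])
except ValueError as e:
    print(e)  # misaligned columns: {'dates': 2, 'headlines': 1}
```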