Save scraping results one by one into an Excel or CSV file in Python

I have a crawler script like this:

import requests
import pandas as pd
from bs4 import BeautifulSoup

def crawl(id):
    try:
        url = 'https://www.china0001.com.cn/project/{0:06d}.html'.format(id)
        print(url)
        content = requests.get(url).text
        soup = BeautifulSoup(content, 'lxml')
        tbody = soup.find("table", attrs={"id":"mse_new"}).find("tbody", attrs={"class":"jg"})
        tr = tbody.find_all("tr")
        rows = []
        for i in tr[1:]:
            rows.append([j.text.strip() for j in i.find_all("td")])
        # split each cell on the first colon only, so values containing ':' survive
        out = dict(map(str.strip, y.split(':', 1)) for x in rows for y in x)
        return out

    except AttributeError:
        return False

data = list()
for id in range(699998, 700010):
    print(id)
    res = crawl(id)
    if res:
        data.append(res)

if len(data) > 0:
    df = pd.DataFrame(data)
    df.to_excel('test.xlsx', index = False)

In this code, the resulting dataframe df is written to the Excel file only after the entire scraping run has finished.

Now I would like to save each result to the Excel or CSV file one by one, as it is scraped. How should I modify the code above?

Thanks.

Update:

from concurrent import futures

MAX_WORKERS = 30
ids = range(700000, 700050)
workers = min(MAX_WORKERS, len(ids))

with futures.ThreadPoolExecutor(workers) as executor:
    res = executor.map(crawl, sorted(ids))

data = list(res)

if len(data) > 0:
    df = pd.DataFrame(data)
    df.to_csv('test.csv', mode = 'a', header = True, index = False)
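
One thing to watch in this update: executor.map collects all results before anything is written, and mode='a' with header=True repeats the header row on every run. A sketch of how the two pieces could be combined so each row is appended as soon as its worker finishes (crawl_stub, FIELDNAMES, and the file path are placeholders for illustration, not the real crawler):

```python
import csv
import os
import threading
from concurrent import futures

CSV_PATH = 'test.csv'
FIELDNAMES = ['id', 'value']      # assumed columns; replace with your real keys
write_lock = threading.Lock()     # serialize writes coming from worker threads

def crawl_stub(id):
    # stand-in for the real crawl(); returns a dict (or None on failure)
    return {'id': id, 'value': id * 2}

def save_row(row):
    # append one row; write the header only when the file is new or empty
    with write_lock:
        new_file = not os.path.exists(CSV_PATH) or os.path.getsize(CSV_PATH) == 0
        with open(CSV_PATH, 'a', newline='') as f:
            writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
            if new_file:
                writer.writeheader()
            writer.writerow(row)

ids = range(700000, 700010)
with futures.ThreadPoolExecutor(min(30, len(ids))) as executor:
    tasks = [executor.submit(crawl_stub, i) for i in ids]
    for fut in futures.as_completed(tasks):   # save each result as soon as it finishes
        res = fut.result()
        if res:
            save_row(res)
```

With futures.as_completed the rows land in completion order, not id order, so sort the file afterwards if ordering matters.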

I suggest taking a look at my question here: . I recommend the answer about the daily table; apply it and modify it to fit your program.

Try using to_csv with mode='a', header=False, index=False

For example:

for id in range(699998, 700010):
    res = crawl(id)
    if res:
        df = pd.DataFrame([res])
        df.to_csv('test.csv', mode='a', header=False, index=False)
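
With header=False the appended file never gets column names. If you want them exactly once, one option is to emit the header only when the file does not exist yet (a sketch; append_row and the file path are my own names for illustration):

```python
import os.path
import pandas as pd

def append_row(row, path='out.csv'):
    # append one result dict as a single-row DataFrame;
    # write the header only if the file does not already exist
    df = pd.DataFrame([row])
    df.to_csv(path, mode='a', header=not os.path.isfile(path), index=False)

append_row({'a': 1, 'b': 2})
append_row({'a': 3, 'b': 4})   # second call appends without repeating the header
```

Note this assumes a fresh file per run; delete the old file first if you rerun the scrape, or stale rows will accumulate.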