Scraping a list of URLs from a CSV with Beautiful Soup

I have a list of URLs in urls.csv:
http://www.indianngos.org/ngo_detail.aspx?nprof=292241149
http://www.indianngos.org/ngo_detail.aspx?nprof=9986241242
http://www.indianngos.org/ngo_detail.aspx?nprof=319824125

My code is as follows:

import requests
from bs4 import BeautifulSoup
import csv

with open('urls.csv', 'r') as csv_file:
    csv_reader = csv.reader(csv_file)

    for line in csv_reader:
        r = requests.get(line[0]).text

        soup = BeautifulSoup(r, 'lxml')

        csv_file = open('output.csv', 'w')

        csv_writer = csv.writer(csv_file)
        csv_writer.writerow(['Ngoname', 'CEO', 'City', 'Address', 'Phone', 'Mobile', 'E-mail'])
        # print(soup.prettify())

        ngoname = soup.find('h1')
        print('NGO Name :', ngoname.text)

        ceo = soup.find('h2', class_='').text
        ceo_name = ceo.split(':')
        print('CeoName:', ceo_name[1])

        city = soup.find_all('span')
        print('City :', city[5].text)

        address = soup.find_all('span')
        print('Address :', address[6].text)

        phone = soup.find_all('span')
        print('Phone :', phone[7].text)

        mobile = soup.find_all('span')
        print('Mobile :', mobile[8].text)

        email = soup.find_all('span')
        print('Email_id :', email[9].text)

        csv_writer.writerow([ngoname.text, ceo_name[1], city[5].text, address[6].text, phone[7].text, mobile[8].text, email[9].text])

csv_file.close()

I only get data for the last URL from this scraper. How can I get the data for every URL, written one below the other, in the output CSV?

You need to keep the output file open while you process all three of your URLs. At the moment you reopen it in write mode inside the loop, so each iteration overwrites the previous one:

import requests
from bs4 import BeautifulSoup
import csv

with open('urls.csv', newline='') as f_urls, open('output.csv', 'w', newline='') as f_output:
    csv_urls = csv.reader(f_urls)
    csv_output = csv.writer(f_output)
    csv_output.writerow(['Ngoname', 'CEO', 'City', 'Address', 'Phone', 'Mobile', 'E-mail'])

    for line in csv_urls:
        r = requests.get(line[0]).text
        soup = BeautifulSoup(r, 'lxml')

        ngoname = soup.find('h1')
        print('NGO Name :', ngoname.text)

        ceo = soup.find('h2', class_='').text
        ceo_name = ceo.split(':')
        print('CeoName:', ceo_name[1])

        city = soup.find_all('span')
        print('City :', city[5].text)

        address = soup.find_all('span')
        print('Address :', address[6].text)

        phone = soup.find_all('span')
        print('Phone :', phone[7].text)

        mobile = soup.find_all('span')
        print('Mobile :', mobile[8].text)

        email = soup.find_all('span')
        print('Email_id :', email[9].text)

        csv_output.writerow([ngoname.text, ceo_name[1], city[5].text, address[6].text, phone[7].text, mobile[8].text, email[9].text])

This approach will give you an output file containing:

Ngoname,CEO,City,Address,Phone,Mobile,E-mail
A CONSUMER WELFARE SOCIETY, Bhanu Pratap,Delhi,201 Vardhman Grand Market Sec 3 Dwarka New Delhi 110075,011-20078086,,admin@consumercourt.in
AADARSH MAHILA KALYAN SAMITI, Kusum Lata,Delhi,"G-61, Jai Vihar Extn. Baprola, Najafgarh, New Delhi-110043",011-28012307,9953574659,snehayadav96@yahoo.com
AAJ KI AWAJ JAN KALYAN SOCIETY, YASHPAL SINGH BALIYAN,Delhi,HEAD OFFICE,011-43029251,9899668750,aajkiawaj@gmail.com
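
As an optional extra (not part of the answer above, and assuming the ngo_detail.aspx pages keep the same h1/h2/span layout), you can skip URLs that fail to download and pages that are missing fields instead of letting one bad page stop the whole run. A rough sketch:

import csv
import requests
from bs4 import BeautifulSoup

fields = ['Ngoname', 'CEO', 'City', 'Address', 'Phone', 'Mobile', 'E-mail']

with open('urls.csv', newline='') as f_urls, open('output.csv', 'w', newline='') as f_output:
    csv_output = csv.writer(f_output)
    csv_output.writerow(fields)

    for line in csv.reader(f_urls):
        # Skip URLs that time out or return an HTTP error instead of raising
        try:
            r = requests.get(line[0], timeout=10)
            r.raise_for_status()
        except requests.RequestException as e:
            print('Skipping', line[0], '-', e)
            continue

        soup = BeautifulSoup(r.text, 'lxml')
        spans = soup.find_all('span')   # grab the <span> tags once and reuse them
        h1 = soup.find('h1')
        h2 = soup.find('h2', class_='')

        # Fall back to empty strings when a tag or field is missing on the page
        ngoname = h1.text if h1 else ''
        ceo_name = h2.text.split(':', 1)[1].strip() if h2 and ':' in h2.text else ''
        details = [spans[i].text if len(spans) > i else '' for i in range(5, 10)]

        csv_output.writerow([ngoname, ceo_name] + details)

The span indices (5 to 9) come straight from the question, so if the site changes its layout they will need to be updated.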