How to iterate through URLs in a CSV file and call each with Python

I have a file (urls.csv) containing more than 1 million URLs. Each line is a new URL, for example:

  1. https://example.com/1
  2. https://example.com/2 and so on...

I want to fetch the JSON returned by each URL and save it as a separate JSON file per URL, with the filenames numbered sequentially 1, 2, 3, ..., n.

Here is what I have so far:

import requests
import csv

url = []

with open('urls.csv') as csvfile:    
    csvReader = csv.reader(csvfile)    
    for row in csvReader:        
        url.append(row[0])

headers = {'Accept': 'application/json'}

response = requests.get(url, headers=headers)

with open('outputfile.json', 'wb') as outf:
    outf.write(response.content)

How can I fix this?

Try this:

import requests
import csv

urls = []

# Read every URL (first column of each row) from the CSV file
with open('urls.csv') as csvfile:
    csvReader = csv.reader(csvfile)
    for row in csvReader:
        urls.append(row[0])

headers = {'Accept': 'application/json'}

# Request each URL individually and write its response to its own file
for url in urls:
    response = requests.get(url, headers=headers)
    # Use the last path segment of the URL (e.g. "3") as the filename
    filename = url.split('/')[-1]
    with open(f'{filename}.json', 'wb') as outf:
        outf.write(response.content)

Assuming your 3rd URL is https://example.com/3, the code will save the corresponding response in a file named 3.json.
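
If you would rather number the output files 1, 2, 3, ..., n by their position in the CSV (so the naming does not depend on the URL format), here is a minimal sketch using enumerate. The response.ok check and the "skip on failure" behavior are assumptions about how you might want to handle bad requests, not part of the original answer:

import requests
import csv

headers = {'Accept': 'application/json'}

with open('urls.csv') as csvfile:
    csvReader = csv.reader(csvfile)
    # enumerate provides a running counter, so filenames follow the row order
    for i, row in enumerate(csvReader, start=1):
        url = row[0]
        response = requests.get(url, headers=headers)
        # Assumption: skip URLs that do not return a successful status code
        if not response.ok:
            print(f'Skipping {url}: HTTP {response.status_code}')
            continue
        with open(f'{i}.json', 'wb') as outf:
            outf.write(response.content)

With over a million URLs you may also want to reuse connections via requests.Session() and pass a timeout to each request, but that depends on your setup.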