Python: parse XML and save requests in batches of 100
I have 10,000 SKUs in a products.xml file. How can I, for example, process 300 SKUs (300 requests), store the data to an XML file, and then resume from SKU 301? I want to split the 10,000 SKUs into smaller parts written to multiple files, because the loop cannot run for that long in one go.
```python
import xml.etree.ElementTree as ET
import requests
import time
from time import sleep

tree = ET.parse('products.xml')
root = tree.getroot()
for id in root.iter('SKU'):
    sku = id.text
    response = requests.get('http://example.com/Items&ID=' + sku)
    with open("_Temp.xml", "w") as f:
        f.write(response.text)
    print(response.text)
```
You can use the itertools grouper recipe:
```python
from itertools import zip_longest

# Process the SKUs in chunks of 300; zip_longest pads the last,
# shorter chunk with None, so skip those padding entries.
for i, group in enumerate(zip_longest(*[iter(root.iter('SKU'))] * 300), 1):
    with open(f"{i}_Temp.xml", "w") as f:
        for id in group:
            if id is not None:
                sku = id.text
                response = requests.get(f'http://example.com/Items&ID={sku}')
                f.write(response.text)
```

Note the check must be `if id is not None`, not `if group != None`: `group` is the whole 300-element tuple and is never `None`; it is the individual elements that `zip_longest` pads with `None` in the final, shorter chunk.
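To see why the recipe batches correctly, here is a minimal standalone sketch of the same `zip_longest` chunking idiom, using plain integers in place of SKU elements and a chunk size of 3 instead of 300:

```python
from itertools import zip_longest

items = list(range(10))  # stand-in for the 10,000 SKUs

# zip_longest(*[iter(items)] * 3) pulls 3 items at a time from the
# SAME iterator, yielding fixed-size chunks padded with None at the end.
chunks = [
    [x for x in group if x is not None]
    for group in zip_longest(*[iter(items)] * 3)
]
print(chunks)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

The trick is that `[iter(items)] * 3` is a list of three references to one iterator, so each tuple produced by `zip_longest` advances it three steps; the `None` filter drops the padding in the last chunk.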