How do I extract all results from a GET request that spans multiple pages?

I have successfully written code that calls the API and then converts the results into a DataFrame:

import json
import math
import requests
import pandas as pd

wax_wallet = "zqsfm.wam"

# Get Assets from AtomicHub API
response1 = requests.get(
    "https://wax.api.atomicassets.io/atomicassets/v1/assets?"
    f"owner={wax_wallet}"
    "&collection_whitelist=nftdraft2121"
    "&page=1"
    "&limit=1000"
    "&order=asc"
    "&sort=name")

# Save Response as JSON
json_assets = response1.json()

# Convert JSON to DataFrame
df = pd.json_normalize(json_assets['data'])

The API returns at most 1,000 items per page, so I need to loop over however many pages are required and collect all of the results into a single DataFrame.

I tried to solve it with the code below, but without success:

asset_count = 2500
pages = int(math.ceil(asset_count / 1000))

# Get Assets from AtomicHub API
all_assets = []
for page in range(1, pages):
    url = f'https://wax.api.atomicassets.io/atomicassets/v1/assets?owner={wax_wallet}' \
          f'&collection_whitelist=nftdraft2121&page={page}&limit=1000&order=asc&sort=name'
    response = requests.get(url)
    all_assets.append(json.loads(response.text))["response"]

Thanks in advance for your help!

You can convert each page to a DataFrame and then concatenate the individual frames into the final result:

import requests
import pandas as pd

def get_page(page_num):
    wax_wallet = "zqsfm.wam"

    response = requests.get(
        "https://wax.api.atomicassets.io/atomicassets/v1/assets",
        params={
            "owner": wax_wallet,
            "collection_whitelist": "nftdraft2121",
            "page": page_num,
            "limit": "1000",
            "order": "asc",
            "sort": "name"
        }
    )

    json_assets = response.json()
    return pd.json_normalize(json_assets['data'])

# The number of pages you want
number_of_pages_requested = 10

# Get all pages as dataframes
pages = [get_page(n + 1) for n in range(number_of_pages_requested)]

# Combine pages to single dataframe
df = pd.concat(pages)
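
Note that by default pd.concat keeps each page's original row index, so the combined frame will contain repeated index values (0-999 per page). Passing ignore_index=True, a standard pandas option, gives the result one continuous index:

df = pd.concat(pages, ignore_index=True)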

Edit: updated to use request params, per Olvin Roght's comment.

Edit 2: fixed an indexing error.

I think this should help:

import requests

all_assets = []
URL = 'https://wax.api.atomicassets.io/atomicassets/v1/assets'
params = {
    'owner': 'zqsfm.wam',
    'collection_whitelist': 'nftdraft2121',
    'page': 1,
    'order': 'asc',
    'sort': 'name',
    'limit': 1000
}
with requests.Session() as session:
    while True:
        print(f"Getting page {params['page']}")
        response = session.get(URL, params=params)
        response.raise_for_status()  # fail fast on HTTP errors
        data = response.json()['data']
        if len(data) > 0:
            all_assets.append(data)  # collect this page's list of assets
            params['page'] += 1      # advance to the next page
        else:
            break  # an empty page means all assets have been fetched
print('Done')
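
If you then want the single DataFrame the question asks for, the collected pages can be flattened and normalized afterwards; a minimal sketch using pd.json_normalize as in the question (all_assets here is a list of pages, each a list of asset dicts):

import pandas as pd

# Flatten the list of pages into one list of assets, then normalize
flat = [asset for page in all_assets for asset in page]
df = pd.json_normalize(flat)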