Problem with requests.get() inside a loop. "No connection adapters found"

So I am trying to scrape several pages through their JSON API. When I run the code for a single URL (the first part of the attached code) I do get the desired output. However, when I try to do the same for several URLs inside a for loop, requests raises "No connection adapters found", which doesn't make much sense to me, since the same URL works outside the for loop.

# Import package
import requests
from pandas import json_normalize
import pandas as pd

# Assign URL to variable: url
url = 'https://www.olx.com.gt/api/relevance/search?category=367&facet_limit=100&location=4168811&location_facet_limit=20&page=1&sorting=desc-creation&user=16c20011d0fx61aada41'

# Package the request, send the request and catch the response: r
r = requests.get(url)

# Decode the JSON data into a dictionary: json_data
json_data = r.json()

# Extract data from the Json file
json_data_2 = json_data['data']

#normalize json data into a dataframe
df = json_normalize(json_data_2)
df.head()

With this script everything runs smoothly. The next one is where I get the error.

%%time

n_paginas = 0

all_urls = pd.DataFrame()

for paginas in range(0,20):
    n_paginas += 1
    olx_url = 'https://www.olx.com.gt/api/relevance/search?category=367&facet_limit=100&location=4168811&location_facet_limit=20&page=%s&sorting=desc-creation&user=16c20011d0fx61aada41'
    start_urls = [olx_url % n_paginas]
    r = requests.get(start_urls)
    #json_data = r.json()
    #json_data_2 = json_data['data']
    #df = json_normalize(json_data_2)
    #all_urls.apped(df)

Here is the traceback:

---------------------------------------------------------------------------
InvalidSchema                             Traceback (most recent call last)
<timed exec> in <module>

~/anaconda3/lib/python3.7/site-packages/requests/api.py in get(url, params, **kwargs)
     74 
     75     kwargs.setdefault('allow_redirects', True)
---> 76     return request('get', url, params=params, **kwargs)
     77 
     78 

~/anaconda3/lib/python3.7/site-packages/requests/api.py in request(method, url, **kwargs)
     59     # cases, and look like a memory leak in others.
     60     with sessions.Session() as session:
---> 61         return session.request(method=method, url=url, **kwargs)
     62 
     63 

~/anaconda3/lib/python3.7/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
    528         }
    529         send_kwargs.update(settings)
--> 530         resp = self.send(prep, **send_kwargs)
    531 
    532         return resp

~/anaconda3/lib/python3.7/site-packages/requests/sessions.py in send(self, request, **kwargs)
    635 
    636         # Get the appropriate adapter to use
--> 637         adapter = self.get_adapter(url=request.url)
    638 
    639         # Start time (approximately) of the request

~/anaconda3/lib/python3.7/site-packages/requests/sessions.py in get_adapter(self, url)
    726 
    727         # Nothing matches :-/
--> 728         raise InvalidSchema("No connection adapters were found for {!r}".format(url))
    729 
    730     def close(self):

InvalidSchema: No connection adapters were found for "['https://www.olx.com.gt/api/relevance/search?category=367&facet_limit=100&location=4168811&location_facet_limit=20&page=1&sorting=desc-creation&user=16c20011d0fx61aada41']"

The new URLs based on the page number are being generated correctly, and if I paste any of them into the first script above, it does work.

Any help would be appreciated.

Thanks in advance.

You probably don't need the start_urls = [olx_url % n_paginas] part. Either way, this slight modification of the for loop seems to get results.
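The root cause is visible in the last line of your traceback: the value in quotes is a Python list, not a URL string. requests.get() expects a single URL string; when it receives a list, no connection adapter matches str(list) and InvalidSchema is raised. A minimal reproduction (no network access is needed, since the exception is raised before any connection attempt):

```python
import requests

url = 'https://www.olx.com.gt/api/relevance/search?page=1'

try:
    # Passing a list instead of a string reproduces the error
    requests.get([url])
except requests.exceptions.InvalidSchema as exc:
    print(exc)  # No connection adapters were found for "['https://...']"
```

So the fix is simply to pass the formatted URL string itself, not a one-element list wrapping it.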

# Import package
import requests
from pandas import json_normalize
import pandas as pd

# Assign URL to variable: url
url = 'https://www.olx.com.gt/api/relevance/search?category=367&facet_limit=100&location=4168811&location_facet_limit=20&page=1&sorting=desc-creation&user=16c20011d0fx61aada41'

# Package the request, send the request and catch the response: r
r = requests.get(url)

# Decode the JSON data into a dictionary: json_data
json_data = r.json()

# Extract data from the Json file
json_data_2 = json_data['data']

#normalize json data into a dataframe
df = json_normalize(json_data_2)
df.head()

frames = []

for pagina in range(1, 21):
    olx_url = 'https://www.olx.com.gt/api/relevance/search?category=367&facet_limit=100&location=4168811&location_facet_limit=20&page={}&sorting=desc-creation&user=16c20011d0fx61aada41'.format(pagina)
    r = requests.get(olx_url)
    # json_normalize already returns a DataFrame; collect the pages in a list
    frames.append(json_normalize(r.json()['data']))

# DataFrame.append was removed in pandas 2.0, so concatenate once at the end
all_urls = pd.concat(frames, ignore_index=True)

all_urls.shape

(400, 60)
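As a side note, requests can assemble the query string for you via the params argument, which avoids formatting the page number into the URL by hand. A sketch against the same endpoint (preparing the request shows the final URL without actually sending it):

```python
import requests

base_url = 'https://www.olx.com.gt/api/relevance/search'
params = {
    'category': 367,
    'facet_limit': 100,
    'location': 4168811,
    'location_facet_limit': 20,
    'page': 1,
    'sorting': 'desc-creation',
    'user': '16c20011d0fx61aada41',
}

# requests URL-encodes and appends the params for you
prepared = requests.Request('GET', base_url, params=params).prepare()
print(prepared.url)

# To actually fetch a page: r = requests.get(base_url, params=params)
```

In the loop you would then only update params['page'] on each iteration.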