Keep-alive within python-requests module
I have a question about the python-requests module. According to the documentation:
thanks to urllib3, keep-alive is 100% automatic within a session! Any requests that you make within a session will automatically reuse the appropriate connection!
My example code looks like this:
def make_double_get_request():
    response = requests.get(url=API_URL, headers=headers, timeout=10)
    print(response.text)
    response = requests.get(url=API_URL, headers=headers, timeout=10)
    print(response.text)
But the logs I get tell me that every request starts a new HTTP connection:
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): url
DEBUG:requests.packages.urllib3.connectionpool:"GET url HTTP/1.1" 200 None
response text goes here
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): url
DEBUG:requests.packages.urllib3.connectionpool:"GET url HTTP/1.1" 200 None
response text goes here
Am I doing something wrong? Looking at the packets in Wireshark, it seems they do in fact have the keep-alive setting set.
def make_double_get_request():
    session = requests.Session()
    response = session.get(url=API_URL, headers=headers, timeout=10)
    print(response.text)
    response = session.get(url=API_URL, headers=headers, timeout=10)
    print(response.text)
The top-level `requests` HTTP method functions are a convenience API: they create a new Session object on every call, which prevents connections from being reused.
From the documentation:
The Session object allows you to persist certain parameters across requests. It also persists cookies across all requests made from the Session instance, and will use urllib3's connection pooling.
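As a self-contained illustration of the answer (a minimal sketch: the throwaway local server, the handler class, and the URL are my own additions, not part of the original question), the two GETs below share one Session and therefore one connection pool:

```python
import http.server
import threading

import requests  # third-party: pip install requests

# A tiny local HTTP server so the example needs no network access.
class _Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps connections alive by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the server's own request logging quiet

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), _Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

# Using the Session as a context manager closes the pooled connections
# when the block exits; both GETs below go through the same pool.
with requests.Session() as session:
    r1 = session.get(url, timeout=10)
    print(r1.text)
    r2 = session.get(url, timeout=10)
    print(r2.text)

server.shutdown()
```

With DEBUG logging enabled for urllib3 (note that current requests logs under `urllib3.connectionpool`, not the old `requests.packages.urllib3.connectionpool` name from the question), the "Starting new HTTP connection" line should appear only once for the pair of requests.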