Is it feasible to set the memory usage for PyODBC?

I am working on a project that connects to a SQL Server instance to execute thousands of stored procedures and retrieve their result sets.

I set fast_executemany = True and use executemany to run through all the sprocs quickly. But when retrieving the result sets, I find that while the first 30-40 come back fast, retrieving the remaining ones gets progressively slower.

Is it because the first 30-40 sets are cached in memory, while for the rest the cursor object has to go back to the remote database to fetch the data? If that is the case, can I increase/control PyODBC's memory usage so that all result sets are cached, as long as total RAM allows?
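For context, a minimal sketch of the setup described above. The procedure name `dbo.my_sproc`, its parameter shape, and the connection string are placeholders, not from the original question; the helper just builds the parameterized EXEC statement handed to `executemany`.

```python
def build_sproc_batch(sproc_name, param_rows):
    """Build a parameterized EXEC statement plus the parameter rows
    for cursor.executemany(). sproc_name and the row shape are
    hypothetical; adapt them to your own procedures."""
    placeholders = ", ".join("?" for _ in param_rows[0])
    sql = f"EXEC {sproc_name} {placeholders}"
    return sql, param_rows

# Intended use with pyodbc (needs a live SQL Server, so shown commented out):
# import pyodbc
# cnxn = pyodbc.connect(conn_str)          # conn_str is a placeholder
# crsr = cnxn.cursor()
# crsr.fast_executemany = True             # bulk-send the parameter arrays
# sql, rows = build_sproc_batch("dbo.my_sproc", [(1, "a"), (2, "b")])
# crsr.executemany(sql, rows)
```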

"Is it because the first 30-40 sets are cached in memory but for the rest, the cursor object is going back to the remote database to fetch the data?"

Yes, but they are cached on the server, not on the client. When you start running the batch representing your "thousands of Stored Procedures", you also start generating result sets. Those result sets are buffered on the server until the client retrieves them. That buffer is of limited size, and if it fills up, control returns to the client and the batch is paused until the client retrieves some of the result sets to free up space.

As described in the Microsoft support article

https://support.microsoft.com/en-ca/help/827575/sql-server-does-not-finish-execution-of-a-large-batch-of-sql-statement

If you are executing a large batch with multiple result sets, SQL Server fills that output buffer until it hits an internal limit and cannot continue to process more result sets. At that point, control returns to the client. When the client starts to consume the result sets, SQL Server starts to execute the batch again because there is now available memory in the output buffer.
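The practical consequence is that the client should drain each result set promptly so the server's output buffer frees up and the batch can keep running. A minimal sketch of that consumption loop, written against the standard DB-API cursor methods pyodbc provides (`fetchall`, `nextset`, `description`); the connection details are placeholders:

```python
def drain_result_sets(cursor):
    """Fetch every result set produced by a batch, advancing with
    cursor.nextset() so the server's output buffer is freed as soon
    as possible instead of stalling the batch."""
    all_sets = []
    while True:
        if cursor.description is not None:   # current statement returned rows
            all_sets.append(cursor.fetchall())
        if not cursor.nextset():             # no more result sets pending
            break
    return all_sets

# Intended use with pyodbc (needs a live SQL Server, so shown commented out):
# import pyodbc
# cnxn = pyodbc.connect(conn_str)           # conn_str is a placeholder
# crsr = cnxn.cursor()
# crsr.execute(batch_sql)                   # batch producing many result sets
# sets = drain_result_sets(crsr)
```

The key point is simply not to let fetched-but-unread result sets pile up on the server: fetch each one as soon as it is available, then move on with `nextset()`.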