Export 2GB+ SELECT to CSV with Python (out of memory)

I'm trying to export a large file from Netezza (using the Netezza ODBC driver + pyodbc). The solution below raises a MemoryError, and if I loop without the "list" it is very slow. Is there some middle-ground solution that won't kill my server/Python process but runs faster?

cursorNZ.execute(sql)
archi = open("c:\test.csv", "w")
lista = list(cursorNZ.fetchall())
for fila in lista:
    registro = ''
    for campo in fila:
        campo = str(campo)
        registro = registro+str(campo)+";"
    registro = registro[:-1]
    registro = registro.replace('None','NULL')
    registro = registro.replace("'NULL'","NULL")
    archi.write(registro+"\n")

---- EDIT ----

Thanks, I'm trying this, where "sql" is the query and cursorNZ is:

connMy = pyodbc.connect(DRIVER=.....)
cursorNZ = connMy.cursor()

chunk = 10 ** 5  # tweak this
chunks = pandas.read_sql(sql, cursorNZ, chunksize=chunk)
with open('C:/test.csv', 'a') as output:
    for n, df in enumerate(chunks):
        write_header = n == 0
        df.to_csv(output, sep=';', header=write_header, na_rep='NULL')

I get this: AttributeError: 'pyodbc.Cursor' object has no attribute 'cursor'. Any ideas?

Don't use cursorNZ.fetchall().

Instead, iterate over the cursor directly:

with open("c:/test.csv", "w") as archi:  # note the fixed '/'
    cursorNZ.execute(sql)
    for fila in cursorNZ:
        registro = ''
        for campo in fila:
            campo = str(campo)
            registro = registro+str(campo)+";"
        registro = registro[:-1]
        registro = registro.replace('None','NULL')
        registro = registro.replace("'NULL'","NULL")
        archi.write(registro+"\n")

Personally, I would use pandas:

import pyodbc
import pandas

cnn = pyodbc.connect(DRIVER=.....)
chunksize = 10 ** 5  # tweak this
chunks = pandas.read_sql(sql, cnn, chunksize=chunksize)

with open('C:/test.csv', 'a') as output:
    for n, df in enumerate(chunks):
        write_header = n == 0
        df.to_csv(output, sep=';', header=write_header, na_rep='NULL')
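
Two notes on the snippet from your edit: pandas.read_sql wants the connection object, not the cursor, i.e. pandas.read_sql(sql, connMy, chunksize=chunk) rather than passing cursorNZ; the AttributeError comes from pandas trying to call .cursor() on the cursor you handed it. And if you don't want the DataFrame index written as an extra first column, add index=False to the to_csv call.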