Upload a dataframe as a zipped CSV directly to S3 without saving it on the local machine

How can I upload a dataframe to an S3 bucket as a zipped CSV without first saving it on my local machine?

I have already connected to the bucket using:

self.s3_output = S3(bucket_name='test-bucket', bucket_subfolder='')

We can create a file-like object using BytesIO and zipfile from the standard library.

# Python 3.7+
from io import BytesIO
import zipfile

# .to_csv returns a string when called with no args
s = df.to_csv()

buffer = BytesIO()
with zipfile.ZipFile(buffer, mode="w") as z:
    z.writestr("df.csv", s)
buffer.seek(0)  # rewind so the upload starts at the beginning of the buffer

Refer to upload_fileobj to customize the upload behavior. Note that the buffer, not the ZipFile object, is what gets uploaded, and only after the ZipFile has been closed:

yourclass.s3_output.upload_fileobj(buffer, ...)
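If you use boto3 directly instead of a wrapper class, the whole flow can be sketched as below. The bucket and key names are placeholders, and the actual `upload_fileobj` call is commented out so the snippet runs without S3 credentials; the round-trip read at the end just confirms the archive is well-formed.

```python
from io import BytesIO
import zipfile

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# Write the CSV into an in-memory zip archive
buffer = BytesIO()
with zipfile.ZipFile(buffer, mode="w", compression=zipfile.ZIP_DEFLATED) as z:
    z.writestr("df.csv", df.to_csv(index=False))
buffer.seek(0)  # rewind so upload_fileobj reads from the start

# Hypothetical upload (placeholder bucket/key names):
# import boto3
# s3 = boto3.client("s3")
# s3.upload_fileobj(buffer, "test-bucket", "df.zip")

# Sanity check: the archive round-trips back into an identical frame
with zipfile.ZipFile(buffer) as z:
    restored = pd.read_csv(z.open("df.csv"))
print(restored.equals(df))
```

`upload_fileobj` reads the object from its current position, which is why the `seek(0)` before the upload matters.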

The same approach works for gzip as well as zip:

import boto3
import gzip
import pandas as pd
from io import BytesIO, TextIOWrapper


s3_client = boto3.client(
    service_name="s3",
    endpoint_url=your_endpoint_url,
    aws_access_key_id=your_access_key,
    aws_secret_access_key=your_secret_key,
)

# Your file name inside the gzip stream
your_filename = "test.csv"

# Use a .gz key since the body is gzip, not zip
s3_path = "path/to/your/s3/compressed/file/test.csv.gz"

bucket = "your_bucket"

df = your_df

gz_buffer = BytesIO()

with gzip.GzipFile(
    filename=your_filename,
    mode="w",
    fileobj=gz_buffer,
) as gz_file:
    df.to_csv(TextIOWrapper(gz_file, "utf8"), index=False)

# Upload only after the GzipFile is closed, so the stream is finalized
s3_client.put_object(
    Bucket=bucket, Key=s3_path, Body=gz_buffer.getvalue()
)
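As a sanity check that needs no S3 at all, the gzip buffer produced this way can be read straight back by pandas. In this sketch `df` is a small hypothetical frame, and `writer.detach()` flushes the text layer explicitly instead of relying on CPython's refcounting to do it:

```python
import gzip
from io import BytesIO, TextIOWrapper

import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3], "y": ["a", "b", "c"]})

gz_buffer = BytesIO()
with gzip.GzipFile(filename="test.csv", mode="w", fileobj=gz_buffer) as gz_file:
    writer = TextIOWrapper(gz_file, "utf8")
    df.to_csv(writer, index=False)
    writer.detach()  # flush buffered text into gz_file without closing it

# pandas decompresses the buffer transparently
restored = pd.read_csv(BytesIO(gz_buffer.getvalue()), compression="gzip")
print(restored.equals(df))
```

If the frames match, the gzip trailer was written correctly, which is exactly what breaks when `getvalue()` is called before the `GzipFile` is closed.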