AWS lambda put object multiple images at once
I am trying to resize a source image into multiple dimensions + extensions.
For example: when I upload a source image, say abc.jpg, I need to resize it to .jpg and .webp in different dimensions, e.g. abc_320.jpg, abc_320.webp, abc_640.jpg, abc_640.webp, using an S3 event trigger. With my current Python Lambda handler I can do this with multiple put_object
calls to the destination bucket, but I want to make it more optimized, because in the future my dimensions + extensions may grow. So how can I store all the resized images in the destination bucket with a single call?
Current Lambda handler:
import json
import boto3
from os import path
from io import BytesIO
from PIL import Image

# boto3 S3 initialization
s3_client = boto3.client("s3")

def lambda_handler(event, context):
    destination_bucket_name = 'destination-bucket'
    # event contains all information about the uploaded object
    print("Event :", event)
    # Bucket where the file was uploaded
    source_bucket_name = event['Records'][0]['s3']['bucket']['name']
    # Key of the object (with path)
    dest_bucket_prefix = 'resized'
    file_key_name = event['Records'][0]['s3']['object']['key']
    image_obj = s3_client.get_object(Bucket=source_bucket_name, Key=file_key_name)
    image_obj = image_obj.get('Body').read()
    img = Image.open(BytesIO(image_obj))
    dimensions = [320, 640]
    # Map the source extension to a Pillow format name
    img_extension = path.splitext(file_key_name)[1].lower()
    extension_dict = {".jpg": "JPEG", ".png": "PNG", ".jpeg": "JPEG"}
    extensions = ["WebP"]
    if img_extension in extension_dict:
        extensions.append(extension_dict[img_extension])
    for dimension in dimensions:
        width = height = dimension
        for extension in extensions:
            resized_img = img.resize((width, height))
            buffer = BytesIO()
            resized_img.save(buffer, extension)
            buffer.seek(0)
            # Each variant needs its own key, e.g. upload/abc.jpg -> resized/abc_320.jpeg
            dest_key = file_key_name.replace("upload", dest_bucket_prefix, 1)
            dest_key = "{}_{}.{}".format(path.splitext(dest_key)[0], dimension, extension.lower())
            # I don't want to use this put_object in a loop <<<---
            s3_client.put_object(Bucket=destination_bucket_name, Key=dest_key, Body=buffer)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from S3 events Lambda!')
    }
As you can see, I need to call put_object
on every iteration of dimension + extension, which is quite expensive. I have also considered multithreading and zipping solutions, but I'm looking for other possible thoughts/solutions.
Amazon S3 API calls only allow one object to be uploaded per call.
However, you could modify your program to use multithreading and upload the objects in parallel.
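A minimal sketch of that approach using concurrent.futures.ThreadPoolExecutor. The variant list, the key pattern, and the put_object callable here are illustrative assumptions, not your exact code; inside the Lambda, put_object would be a small wrapper around s3_client.put_object (boto3 clients are safe to share across threads):

```python
from concurrent.futures import ThreadPoolExecutor
from io import BytesIO

from PIL import Image

def make_variant(img, dimension, fmt):
    """Resize to dimension x dimension and encode as fmt, returning raw bytes."""
    buffer = BytesIO()
    img.resize((dimension, dimension)).save(buffer, fmt)
    return buffer.getvalue()

def upload_variants(img, variants, put_object, key_prefix="resized/abc"):
    """Encode and upload each (dimension, format) variant on its own thread.

    put_object is any callable (key, body) -> None; in the Lambda it would
    wrap s3_client.put_object with the destination bucket filled in.
    """
    def work(dim, fmt):
        key = "{}_{}.{}".format(key_prefix, dim, fmt.lower())
        put_object(key, make_variant(img, dim, fmt))

    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(work, dim, fmt) for dim, fmt in variants]
        for f in futures:
            f.result()  # re-raise any exception from the worker threads
```

Each PutObject request still uploads exactly one object, but the requests overlap in time, so total latency approaches that of the slowest single upload rather than the sum of all of them.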