at::Tensor to UIImage

I have a PyTorch model that I am trying to run on iOS. Here is my code:

        // Wrap the raw float buffer as a 1x1x256x256 tensor (no copy is made)
        at::Tensor tensor2 = torch::from_blob(imageBuffer2, {1, 1, 256, 256}, at::kFloat);
        // Run the model without autograd bookkeeping
        c10::InferenceMode guard;
        auto output = _impl.forward({tensor1, tensor2});
        // The model returns a tuple; the first element is the image tensor
        torch::Tensor tensor_img = output.toTuple()->elements()[0].toTensor();

My question is: how do I convert tensor_img into a UIImage?

I found this function in the PyTorch documentation:

static void releaseRGBABuffer(void* info, const void* data, size_t size) {
    // Free the RGBA buffer once CoreGraphics is done with it; passing NULL as
    // the provider's release callback would leak this allocation.
    free((void*)data);
}

- (UIImage*)convertRGBBufferToUIImage:(unsigned char*)buffer
                            withWidth:(int)width
                           withHeight:(int)height {
    // Expand packed RGB (3 bytes per pixel) into RGBA (4 bytes per pixel)
    char* rgba = (char*)malloc(width * height * 4);
    for (int i = 0; i < width * height; ++i) {
        rgba[4 * i] = buffer[3 * i];
        rgba[4 * i + 1] = buffer[3 * i + 1];
        rgba[4 * i + 2] = buffer[3 * i + 2];
        rgba[4 * i + 3] = 255;
    }

    size_t bufferLength = width * height * 4;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgba, bufferLength, releaseRGBABuffer);
    size_t bitsPerComponent = 8;
    size_t bitsPerPixel = 32;
    size_t bytesPerRow = 4 * width;

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    if (colorSpaceRef == NULL) {
        NSLog(@"Error allocating color space");
        CGDataProviderRelease(provider);
        return nil;
    }

    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    CGImageRef iref = CGImageCreate(width,
        height,
        bitsPerComponent,
        bitsPerPixel,
        bytesPerRow,
        colorSpaceRef,
        bitmapInfo,
        provider,
        NULL,
        YES,
        renderingIntent);

    uint32_t* pixels = (uint32_t*)malloc(bufferLength);

    if (pixels == NULL) {
        NSLog(@"Error: Memory not allocated for bitmap");
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        CGImageRelease(iref);
        return nil;
    }

    CGContextRef context = CGBitmapContextCreate(pixels,
        width,
        height,
        bitsPerComponent,
        bytesPerRow,
        colorSpaceRef,
        bitmapInfo);

    if (context == NULL) {
        NSLog(@"Error context not created");
        free(pixels);
        pixels = NULL; // prevent the double free in the cleanup below
    }

    UIImage* image = nil;
    if (context) {
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
        CGImageRef imageRef = CGBitmapContextCreateImage(context);
        if ([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
            float scale = [[UIScreen mainScreen] scale];
            image = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
        } else {
            image = [UIImage imageWithCGImage:imageRef];
        }

        CGImageRelease(imageRef);
        CGContextRelease(context);
    }

    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);

    if (pixels) {
        free(pixels);
    }
    return image;
}

@end

If I'm reading it correctly, that function converts an unsigned char* buffer into a UIImage. So I think I need to turn my tensor_img into an unsigned char*, but I don't know how to do that.
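
For reference, here is a minimal sketch of that conversion. It assumes the output is a {1, 3, 256, 256} float tensor with values in [0, 1], and that the helper above is available on the same class (both assumptions on my part, since neither is shown above):

    // Assumptions: tensor_img is {1, 3, H, W}, float, values in [0, 1]
    at::Tensor chw = tensor_img.squeeze(0);                        // {3, H, W}
    at::Tensor hwc = chw.permute({1, 2, 0}).contiguous();          // {H, W, 3}, interleaved RGB
    at::Tensor bytes = hwc.mul(255.0).clamp(0, 255).to(at::kByte); // scale to 0..255, uint8
    // The pointer below is only valid while `bytes` stays alive
    unsigned char* rgbBuffer = bytes.data_ptr<unsigned char>();
    UIImage* image = [self convertRGBBufferToUIImage:rgbBuffer
                                            withWidth:256
                                           withHeight:256];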

The first snippet is the torch bridge and the second is the UIImage helper; I call them from Swift. Anyway, I solved the problem, so we can close this. Code sample:

// Assumption: floatBuffer points at the tensor's contiguous float data,
// e.g. float* floatBuffer = tensor_img.data_ptr<float>();
NSMutableArray* results = [[NSMutableArray alloc] init];
for (int i = 0; i < 3 * width * height; i++) {
  [results addObject:@(floatBuffer[i])];
}

// Copy the values into an NSMutableData-backed float buffer
NSMutableData* data = [NSMutableData dataWithLength:sizeof(float) * 3 * width * height];
float* buffer = (float*)[data mutableBytes];
for (int j = 0; j < 3 * width * height; j++) {
  buffer[j] = [results[j] floatValue];
}
return buffer;
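
Two caveats worth noting (my additions, not part of the original fix). First, buffer points into the NSMutableData's internal storage, so it is only valid while data is kept alive; if the caller holds on to the pointer past the current autorelease pool, returning a copy the caller owns is safer:

    // Hypothetical safer variant: return memory the caller owns and must free()
    size_t count = 3 * (size_t)width * (size_t)height;
    float* owned = (float*)malloc(count * sizeof(float));
    memcpy(owned, floatBuffer, count * sizeof(float));
    return owned;

Second, these are still floats; to feed convertRGBBufferToUIImage: they have to be scaled and cast to bytes first, as in the sketch earlier in this post.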