Extending Pytorch: Python vs. C++ vs. CUDA

I have been trying to implement a custom Conv2d module in which grad_input (dx) and grad_weight (dw) are computed using differently transformed grad_output (dy) values. I did this by extending torch.autograd, following the Pytorch tutorial.

However, I am confused by the information in this link.

Here is my custom function:

import torch
import torch.nn.functional as F


class myCustomConv2d(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, w, bias=None, stride=1, padding=0, dilation=1, groups=1):
        # Save tensors and hyperparameters for the backward pass.
        ctx.save_for_backward(x, w, bias)
        ctx.stride = stride
        ctx.padding = padding
        ctx.dilation = dilation
        ctx.groups = groups
        out = F.conv2d(x, w, bias, stride, padding, dilation, groups)
        return out

    @staticmethod
    def backward(ctx, grad_output):
        input, weight, bias = ctx.saved_tensors
        stride = ctx.stride
        padding = ctx.padding
        dilation = ctx.dilation
        groups = ctx.groups
        grad_input = grad_weight = grad_bias = None

        # Use differently transformed copies of grad_output for the two gradients.
        dy_for_inputs = myspecialfunction1(grad_output)
        dy_for_weights = myspecialfunction2(grad_output)

        grad_input = torch.nn.grad.conv2d_input(
            input.shape, weight, dy_for_inputs, stride, padding, dilation, groups)
        grad_weight = torch.nn.grad.conv2d_weight(
            input, weight.shape, dy_for_weights, stride, padding, dilation, groups)

        if bias is not None and ctx.needs_input_grad[2]:
            grad_bias = dy_for_weights.sum((0, 2, 3)).squeeze(0)

        # One gradient per forward argument; None for the non-tensor hyperparameters.
        return grad_input, grad_weight, grad_bias, None, None, None, None

Is extending the autograd.Function not enough?

It is enough if your code reuses Pytorch components that are wrapped in the Python interface (which seems to be the case here). The gradients are composed automatically.
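
In practice, you would typically wrap the Function in an nn.Module that owns the parameters and invokes it through .apply. A minimal sketch, assuming the myCustomConv2d class above (and the myspecialfunction1/2 helpers it uses) is defined; the SpecialConv2d name and the shapes are only illustrative:

import torch
import torch.nn as nn


class SpecialConv2d(nn.Module):
    """Thin nn.Module wrapper around the custom autograd Function (illustrative)."""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_channels, in_channels,
                                               kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_channels))
        self.stride = stride
        self.padding = padding

    def forward(self, x):
        # Call through .apply so autograd records the custom backward.
        return myCustomConv2d.apply(x, self.weight, self.bias,
                                    self.stride, self.padding, 1, 1)


layer = SpecialConv2d(3, 16, 3, padding=1)
out = layer(torch.randn(2, 3, 32, 32))
out.mean().backward()            # runs the custom backward defined above
print(layer.weight.grad.shape)   # torch.Size([16, 3, 3, 3])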

What is the difference between writing a new autograd function in Python vs C++?

Performance. The more custom your operation is (and the harder it is to compose from existing Pytorch operations), the bigger the performance gain you can get.
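
If you do go the C++ route, one low-friction way to experiment is torch.utils.cpp_extension.load_inline, which JIT-compiles a C++ snippet and exposes it to Python (it requires a working C++ toolchain). A rough sketch; the scale_grad function is a made-up placeholder, not part of the original code:

import torch
from torch.utils.cpp_extension import load_inline

# JIT-compile a tiny C++ operator; <torch/extension.h> is prepended automatically.
cpp_source = """
torch::Tensor scale_grad(torch::Tensor grad, double factor) {
    return grad * factor;
}
"""

ext = load_inline(name="my_grad_ops",
                  cpp_sources=cpp_source,
                  functions=["scale_grad"])   # auto-generates the pybind11 binding

dy = torch.randn(4, 8, 16, 16)
print(ext.scale_grad(dy, 0.5).shape)          # torch.Size([4, 8, 16, 16])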

How about the CUDA implementations in /torch/nn/blob/master/lib/THNN/generic/SpatialConvolutionMM.c, where dx and dw are calculated? Should I change them too?

That is not needed, unless you want to create specialized operations for CUDA.
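
For illustration, the same Python-level Function already runs on the GPU, because F.conv2d and torch.nn.grad.conv2d_input / conv2d_weight dispatch to the CUDA/cuDNN kernels whenever the tensors live there. A small sketch, assuming a CUDA device is available and the class (with its helper functions) is defined:

import torch

# No custom CUDA code needed: the underlying conv kernels are chosen
# based on the device of the tensors.
x = torch.randn(8, 3, 32, 32, device="cuda", requires_grad=True)
w = torch.randn(16, 3, 3, 3, device="cuda", requires_grad=True)

out = myCustomConv2d.apply(x, w, None, 1, 1, 1, 1)  # bias, stride, padding, dilation, groups
out.sum().backward()                                # custom backward also runs on the GPU
print(x.grad.device, w.grad.device)                 # cuda:0 cuda:0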