Split features, preprocess some of them, then join them back together. (hangs forever)
I'm trying to feed all of the features except the first one through some layers (nn.Linear + nn.LeakyReLU), take the output, then rebuild the original data structure and feed it to the last layers. But the training process hangs forever and I never get any output.

To be clear, the code works fine without this step; I'm trying to improve the results by preprocessing some of the features before feeding them (together with the unprocessed first feature) to the last layers.

Any help would be appreciated.
Here is my code:
```
def forward(self, x):
    # save the residual for the skip connection
    res = x[:, :, 0:self.skip]
    xSignal = np.zeros((len(x), len(x[0]), 1))
    xParams = np.zeros((len(x), len(x[0]), len(x[0][0]) - 1))
    # separate data
    for b in range(len(x)):
        for c in range(len(x[b])):
            for d in range(len(x[b][c])):
                if d == 0:
                    xSignal[b][c][d] = x[b][c][d]
                else:
                    xParams[b][c][d - 1] = x[b][c][d]
    # pass parameters through first network
    xParams = torch.from_numpy(xParams).cuda().float()
    xParams = self.paramsLinear(xParams)
    xParams = self.paramsLeakyRelu(xParams)
    # make new array with output and the signal
    xConcat = np.zeros((len(x), len(x[0]), len(x[0][0])))
    for b in range(len(x)):
        for c in range(len(x[b])):
            for d in range(len(x[b][c])):
                if d == 0:
                    xConcat[b][c][d] = xSignal[b][c][d]
                else:
                    xConcat[b][c][d] = xParams[b][c][d - 1]
    # convert to tensor
    xConcat = torch.from_numpy(xConcat).cuda().float()
    # pass it through the recurrent part
    xConcat, self.hidden = self.rec(xConcat, self.hidden)
    # then the linear part and return
    return self.lin(xConcat) + res
```
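The layer definitions aren't included in the post; for context, a minimal sketch of what the module might look like is below. The recurrent cell type (LSTM) and the sizes are assumptions, not taken from the original:

```
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, inputSize, hiddenSize, skip):
        super().__init__()
        self.skip = skip
        # first layers: applied only to the non-signal features
        self.paramsLinear = nn.Linear(inputSize - 1, inputSize - 1)
        self.paramsLeakyRelu = nn.LeakyReLU()
        # last layers: recurrent part followed by a linear readout
        self.rec = nn.LSTM(inputSize, hiddenSize, batch_first=True)
        self.lin = nn.Linear(hiddenSize, skip)
        self.hidden = None

    # the forward method shown in the question would go here
```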
It turns out slicing is much faster and easier than iterating, and torch.cat puts everything back into one tensor. The original forward wasn't actually hanging: the triple nested Python loops touch every element one at a time, and each per-element read of a CUDA tensor forces a host/device transfer, so it was just far too slow. Copying the layer output back through a NumPy array also detaches it from the autograd graph, so the first layers would not have received gradients anyway.
```
def forward(self, x):
    # save the residual for the skip connection
    res = x[:, :, 0:self.skip]
    # split features
    xSignal = x[:, :, 0:1]
    xParams = x[:, :, 1:]
    # pass only some features through first layers
    xParams = self.paramsLinear(xParams)
    xParams = self.paramsLeakyRelu(xParams)
    # put everything back together
    x = torch.cat((xSignal, xParams), 2)
    # pass it through the last layers
    x, self.hidden = self.rec(x, self.hidden)
    # then the linear part and return
    return self.lin(x) + res
```
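A quick way to convince yourself that the split-and-cat pattern preserves the feature layout (a standalone sketch with made-up shapes, not from the original post):

```
import torch

x = torch.randn(4, 16, 8)       # (batch, sequence, features) - made-up shape
xSignal = x[:, :, 0:1]          # first feature only
xParams = x[:, :, 1:]           # remaining features
rebuilt = torch.cat((xSignal, xParams), dim=2)

print(torch.equal(rebuilt, x))  # True: slicing + cat round-trips exactly
print(rebuilt.shape)            # torch.Size([4, 16, 8])
```

Unlike the NumPy round trip, the slices and torch.cat stay on the original device and inside the autograd graph, so gradients can flow back into paramsLinear.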
It's now training as expected :)