Finding several means in a large tensor
I have a large tensor (~10k values). Here is a sample with 200 values:
sample_tensor = tensor([ 0.6676, 0.0917, 0.6083, 0.4536, 1.1882, 0.6672, 0.6058, -0.1615,
0.5254, 1.1642, 0.1994, -0.2274, 0.0511, 0.3707, 0.3675, -0.1629,
-0.0638, -0.0118, 0.2668, 0.8586, 0.7027, 0.3018, -0.2930, 1.2613,
0.9374, 0.3154, 1.0396, -0.0263, 0.2012, 1.5710, -0.4640, -0.1657,
-0.2670, 0.5783, 0.7420, 0.1886, -1.1255, 0.3682, 0.2597, 0.3697,
0.1404, -0.0289, 0.5903, 0.0461, 0.2288, -0.0414, 0.9736, 0.4891,
-0.0593, 0.1694, 0.2426, -0.0339, 0.1683, 0.2374, 0.1349, 0.1672,
0.4174, 0.8038, 1.4121, -0.1046, 0.1169, 0.6447, -0.1168, 0.7392,
0.0578, -0.1398, 0.8974, 1.0977, 0.7102, 1.4012, 0.8541, 0.3314,
-0.2045, 0.1540, 0.2779, -0.3912, 0.4068, -0.1868, 0.1796, 0.0318,
0.1354, -0.9689, 0.3460, 0.3762, 0.8637, -0.4735, 0.8413, 0.5261,
0.8362, -0.2226, -0.2772, -0.2757, 0.2079, 0.0895, 0.4352, 0.8868,
0.3707, 0.8412, 0.3026, 0.1568, 0.4442, 0.0789, 0.5050, 0.0102,
0.6944, 0.1852, 0.5215, -0.7028, -0.7591, 0.2139, 0.7411, 0.3830,
0.8048, -0.7532, 0.7710, 0.8526, 1.1322, 0.0939, -0.3318, 1.1003,
0.3066, 1.6501, 1.1300, 0.0062, 0.2600, 0.2605, -0.2236, 0.2516,
0.4460, 0.6813, 0.1876, -0.4710, -0.5939, 0.4144, 0.0783, 0.4282,
0.1744, 0.0569, 0.1043, 0.3329, 0.3561, 0.1618, -0.1184, 0.4183,
0.5722, -0.4459, 0.3354, 0.3373, 0.2290, 1.0164, -0.5191, 0.0992,
0.9188, -0.3634, 1.2128, 0.0457, 0.1028, -0.2206, 0.9355, 0.6074,
0.3834, 0.0802, 0.7016, 0.8777, 0.2769, -0.7512, 0.8667, -0.1056,
0.5435, 1.4568, -0.3943, 0.5740, 0.6328, 0.4063, -0.7712, 0.5113,
0.1578, 0.4571, 1.0314, 0.2863, -0.1470, 1.0763, -0.0019, 0.9103,
1.0114, -0.1229, -0.3118, 0.5383, 0.5566, 0.2280, 0.9320, 0.6770,
0.0908, 0.5056, 0.0445, -0.0810, 0.2611, 0.1223, -0.0108, 0.0611])
I also have an input value that specifies how many means I need to compute from this tensor:
sampler_number_of_means = 10
What is an efficient way to get a tensor of 10 means from this tensor, where each mean is taken over a distinct group of values of size len(sample_tensor)/sampler_number_of_means? That is, in this example, the first mean would be over the first 20 values, the second mean over the next 20 values, and so on.
I am currently iterating over the tensor to split it into equally sized lists, then iterating over each list to compute its mean. But this is slow for large tensors.
You can reshape the tensor and then take the mean along the second dimension.
import torch
sample_tensor = torch.tensor([ 0.6676, 0.0917, 0.6083, 0.4536, 1.1882, 0.6672, 0.6058, -0.1615,
0.5254, 1.1642, 0.1994, -0.2274, 0.0511, 0.3707, 0.3675, -0.1629,
-0.0638, -0.0118, 0.2668, 0.8586, 0.7027, 0.3018, -0.2930, 1.2613,
0.9374, 0.3154, 1.0396, -0.0263, 0.2012, 1.5710, -0.4640, -0.1657,
-0.2670, 0.5783, 0.7420, 0.1886, -1.1255, 0.3682, 0.2597, 0.3697,
0.1404, -0.0289, 0.5903, 0.0461, 0.2288, -0.0414, 0.9736, 0.4891,
-0.0593, 0.1694, 0.2426, -0.0339, 0.1683, 0.2374, 0.1349, 0.1672,
0.4174, 0.8038, 1.4121, -0.1046, 0.1169, 0.6447, -0.1168, 0.7392,
0.0578, -0.1398, 0.8974, 1.0977, 0.7102, 1.4012, 0.8541, 0.3314,
-0.2045, 0.1540, 0.2779, -0.3912, 0.4068, -0.1868, 0.1796, 0.0318,
0.1354, -0.9689, 0.3460, 0.3762, 0.8637, -0.4735, 0.8413, 0.5261,
0.8362, -0.2226, -0.2772, -0.2757, 0.2079, 0.0895, 0.4352, 0.8868,
0.3707, 0.8412, 0.3026, 0.1568, 0.4442, 0.0789, 0.5050, 0.0102,
0.6944, 0.1852, 0.5215, -0.7028, -0.7591, 0.2139, 0.7411, 0.3830,
0.8048, -0.7532, 0.7710, 0.8526, 1.1322, 0.0939, -0.3318, 1.1003,
0.3066, 1.6501, 1.1300, 0.0062, 0.2600, 0.2605, -0.2236, 0.2516,
0.4460, 0.6813, 0.1876, -0.4710, -0.5939, 0.4144, 0.0783, 0.4282,
0.1744, 0.0569, 0.1043, 0.3329, 0.3561, 0.1618, -0.1184, 0.4183,
0.5722, -0.4459, 0.3354, 0.3373, 0.2290, 1.0164, -0.5191, 0.0992,
0.9188, -0.3634, 1.2128, 0.0457, 0.1028, -0.2206, 0.9355, 0.6074,
0.3834, 0.0802, 0.7016, 0.8777, 0.2769, -0.7512, 0.8667, -0.1056,
0.5435, 1.4568, -0.3943, 0.5740, 0.6328, 0.4063, -0.7712, 0.5113,
0.1578, 0.4571, 1.0314, 0.2863, -0.1470, 1.0763, -0.0019, 0.9103,
1.0114, -0.1229, -0.3118, 0.5383, 0.5566, 0.2280, 0.9320, 0.6770,
0.0908, 0.5056, 0.0445, -0.0810, 0.2611, 0.1223, -0.0108, 0.0611])
sampler_number_of_means = 10
sample_tensor.reshape(sampler_number_of_means, -1).mean(dim=1)
Output
tensor([0.3729, 0.3248, 0.2977, 0.3431, 0.2499, 0.2993, 0.2740, 0.2841, 0.3611,
0.3170])
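If the tensor length is not always divisible by the number of means, reshape will raise an error. A minimal sketch (using a synthetic torch.arange tensor rather than the data above, and a trim-the-tail fallback as one possible convention) could look like this:

```python
import torch

sampler_number_of_means = 10

# Divisible case: reshape(n, -1) lets PyTorch infer the group size.
sample_tensor = torch.arange(200, dtype=torch.float32)
means = sample_tensor.reshape(sampler_number_of_means, -1).mean(dim=1)
print(means.shape)  # torch.Size([10])

# Non-divisible case: drop the trailing remainder before reshaping,
# so each mean is still over an equally sized group.
t = torch.arange(205, dtype=torch.float32)
usable = (t.shape[0] // sampler_number_of_means) * sampler_number_of_means
trimmed_means = t[:usable].reshape(sampler_number_of_means, -1).mean(dim=1)
print(trimmed_means.shape)  # torch.Size([10])
```

Whether to trim, pad, or fold the remainder into the last group is a design choice; the key point is that a single reshape plus mean replaces the Python-level loops and runs as one vectorized operation.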