index 11513 is out of bounds for axis 0 with size 10000
I'm working through a simple MNIST example and I get the error above. I don't understand what "index 11513" means.
Here is the complete code.
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

np.random.seed(3)
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_val = x_train[50000:]
y_val = y_train[50000:]
x_train = x_train[:50000]
y_train = y_train[:50000]
x_train = x_train.reshape(50000, 784).astype('float32') / 255.0
x_val = x_val.reshape(10000, 784).astype('float32') / 255.0
x_test = x_test.reshape(10000, 784).astype('float32') / 255.0
train_rand_idxs = np.random.choice(50000, 700)
val_rand_idxs = np.random.choice(10000, 300)
x_train = x_train[train_rand_idxs]
y_train = y_train[train_rand_idxs]
x_val = x_val[train_rand_idxs]  # ***This is where the error occurred***
y_val = y_val[train_rand_idxs]
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
y_val = np_utils.to_categorical(y_val)
model = Sequential()
model.add(Dense(units=2, input_dim=28*28, activation='relu'))
model.add(Dense(units=10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
hist = model.fit(x_train, y_train, epochs=1000, batch_size=10, validation_data=(x_val, y_val))
Your x_val has only 10000 rows after reshaping:

x_val = x_val.reshape(10000, 784).astype('float32') / 255.0

but train_rand_idxs contains index values as high as 50000:

train_rand_idxs = np.random.choice(50000, 700)

So when you subset x_val with the train indices:

x_val = x_val[train_rand_idxs]

you get the error because some of the indices sampled from [0, 50000) fall outside x_val's valid index range of [0, 10000).

Try sampling x_val with x_val[val_rand_idxs] instead.
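A minimal sketch of the same failure and fix, using a zero-filled stand-in array instead of the real MNIST data (so it runs without Keras): indices drawn from [0, 50000) can exceed a 10000-row array's bounds and raise IndexError, while indices drawn from [0, 10000) always fit.

```python
import numpy as np

np.random.seed(3)
# Stand-in for the reshaped x_val: 10000 rows of 784 features.
x_val = np.zeros((10000, 784), dtype='float32')

# Indices sampled from [0, 50000) can exceed x_val's 10000 rows.
train_rand_idxs = np.random.choice(50000, 700)
try:
    x_val[train_rand_idxs]
except IndexError as e:
    print(e)  # "index ... is out of bounds for axis 0 with size 10000"

# Indices sampled from [0, 10000) always fit, so this subset succeeds.
val_rand_idxs = np.random.choice(10000, 300)
subset = x_val[val_rand_idxs]
print(subset.shape)  # (300, 784)
```

Note that y_val is subset with train_rand_idxs in the original code as well, so it needs the same change to val_rand_idxs.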