Neural network simulation error
I am applying the neural network described below to the training dataset "two4" (referenced in the code below). The dataset has 150370 rows.
from keras.models import Sequential
from keras.layers import Dense
from sklearn.cross_validation import train_test_split
import numpy
from sklearn.preprocessing import StandardScaler
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
dataset = numpy.loadtxt("two4.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:22]
scaler = StandardScaler()
X = scaler.fit_transform(X)
Y = dataset[:,22]
# split into 67% for train and 33% for test
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33,random_state=seed)
# create model
model = Sequential()
model.add(Dense(12, input_dim=22, init='uniform', activation='relu'))
model.add(Dense(12, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X_train, y_train, validation_data=(X_test,y_test), nb_epoch=30, batch_size=10)
After I run the simulation, it keeps crashing, and the error I get looks like this:
30810/100747 [========>.....................]Traceback (most recent call last):.9989
File "<ipython-input-1-adb3fdf3bae0>", line 1, in <module>
runfile('C:/Users/Dimitris/Desktop/seventh experiment configuration/feedforward_net.py', wdir='C:/Users/Dimitris/Desktop/seventh experiment configuration')
File "C:\Users\Dimitris\Anaconda2\envs\keras_env\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile
execfile(filename, namespace)
File "C:\Users\Dimitris\Anaconda2\envs\keras_env\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "C:/Users/Dimitris/Desktop/seventh experiment configuration/feedforward_net.py", line 26, in <module>
model.fit(X_train, y_train, validation_data=(X_test,y_test), nb_epoch=30, batch_size=10)
File "C:\Users\Dimitris\Anaconda2\envs\keras_env\lib\site-packages\keras\models.py", line 432, in fit
sample_weight=sample_weight)
File "C:\Users\Dimitris\Anaconda2\envs\keras_env\lib\site-packages\keras\engine\training.py", line 1106, in fit
callback_metrics=callback_metrics)
File "C:\Users\Dimitris\Anaconda2\envs\keras_env\lib\site-packages\keras\engine\training.py", line 830, in _fit_loop
callbacks.on_batch_end(batch_index, batch_logs)
File "C:\Users\Dimitris\Anaconda2\envs\keras_env\lib\site-packages\keras\callbacks.py", line 60, in on_batch_end
callback.on_batch_end(batch, logs)
File "C:\Users\Dimitris\Anaconda2\envs\keras_env\lib\site-packages\keras\callbacks.py", line 188, in on_batch_end
self.progbar.update(self.seen, self.log_values)
File "C:\Users\Dimitris\Anaconda2\envs\keras_env\lib\site-packages\keras\utils\generic_utils.py", line 119, in update
sys.stdout.write(info)
File "C:\Users\Dimitris\Anaconda2\envs\keras_env\lib\site-packages\ipykernel\iostream.py", line 317, in write
self._buffer.write(string)
ValueError: I/O operation on closed file
Do you have any idea what might be causing this error?
Your problem comes from sending a large amount of data to the standard IO stream in Spyder, which ends up closing it. Try setting:
history = model.fit(X_train, y_train, validation_data=(X_test,y_test), nb_epoch=30, batch_size=10, verbose=0)
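If you still want some feedback in the console, verbose=2 should print only a single summary line per epoch instead of the per-batch progress bar, which sends far less data to Spyder's output stream (a sketch, using the same arguments as above):
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=30, batch_size=10, verbose=2)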
You can still retrieve the per-epoch metric values afterwards, for example:
epoch_loss = history.history["loss"]
The history.history dictionary stores all the training statistics recorded for each epoch.
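As a minimal sketch of reading those statistics back (the exact key names, e.g. 'acc' vs 'accuracy', depend on the Keras version and the metrics you compiled with):
# Inspect which metrics were recorded for this model
print(history.history.keys())
# Each entry holds one value per epoch
epoch_loss = history.history["loss"]
epoch_val_loss = history.history["val_loss"]
for epoch, (tr_loss, val_loss) in enumerate(zip(epoch_loss, epoch_val_loss), start=1):
    print("epoch %d: loss=%.4f  val_loss=%.4f" % (epoch, tr_loss, val_loss))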