LSTM + Nvidia GPU causes NotImplementedError
I have Python 3.9.7, TensorFlow == 2.5, NVIDIA-SMI 497.09, Driver Version: 497.09, CUDA Version: 11.5. I try to define an LSTM model as follows:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

n_steps = 500
n_features = 1
# Univariate multi-step time series prediction
model = Sequential()
model.add(LSTM(units=50, activation='relu', return_sequences=True, input_shape=(n_steps, n_features)))
First this gives a warning:
WARNING:tensorflow:Layer lstm_4 will not use cuDNN kernels since it
doesn't meet the criteria. It will use a generic GPU kernel as
fallback when running on GPU.
Then:
NotImplementedError: Cannot convert a symbolic Tensor
(lstm_4/strided_slice:0) to a numpy array. This error may indicate
that you're trying to pass a Tensor to a NumPy call, which is not
supported
Changing the activation to 'tanh' does not change anything.
Check your numpy version. Is it above 1.20? This is issue #14687 with the same problem, and they point to issue #47691; it was fixed in TensorFlow 2.6.
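A quick way to confirm you are on the broken combination (a minimal check; both packages expose a __version__ attribute):

import numpy as np
import tensorflow as tf

print(np.__version__)  # 1.20 or higher triggers the bug on TF 2.5
print(tf.__version__)  # fixed from 2.6 onwards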
So either downgrade to numpy 1.19.2:
pip install numpy==1.19.2
or upgrade TensorFlow:
pip install --upgrade tensorflow
That should hopefully fix the problem!
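After switching versions, a minimal sanity check is to restart the interpreter and rebuild the exact layer from the question. Constructing the LSTM is where the broken combination raised NotImplementedError, so reaching the final print means the environment is fine (a sketch, reusing the shapes from the question):

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

model = Sequential()
model.add(LSTM(units=50, activation='relu', return_sequences=True, input_shape=(500, 1)))
# One forward pass on random data to confirm the generic GPU kernel runs.
out = model(np.random.rand(2, 500, 1).astype('float32'))
print(out.shape)  # (2, 500, 50)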