tf.matmul doesn't work as expected

I'm trying to implement AND (the logical operation) in TensorFlow. There are two inputs and two weights; multiplying them gives one number, and that number is added to a bias. My problem is with matmul: I pass X (the inputs) and W (the weights) to the method with shapes [[1], [1]] for X (a column) and [0.49900547, 0.49900547] for W (a row), expecting a single number as the result, but it gives me two numbers. How can I make the multiplication work correctly? Here is my code:

import tensorflow as tf
import numpy
rng = numpy.random

# Parameters
learning_rate = 0.01
training_epochs = 2000
display_step = 50

# Training Data
train_X = numpy.asarray([[[1.0],[1.0]],[[1.0],[0.0]],[[0.0],[1.0]],[[0.0],[0.0]]])
train_Y = numpy.asarray([1.0,0.0,0.0,0.0])
n_samples = train_X.shape[0]

# tf Graph Input
X = tf.placeholder("float",[2,1],name="inputarr")
Y = tf.placeholder("float",name = "outputarr")

# Create Model

# Set model weights
W = tf.Variable(tf.zeros([1,2]), name="weight")
b = tf.Variable(rng.randn(), name="bias")

# Construct a linear model
activation = tf.add(tf.matmul(X,W), b)
mulres = tf.matmul(X,W)

# Minimize the squared errors
cost = tf.reduce_sum(tf.pow(activation-Y, 2))/(2*n_samples) #L2 loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) #Gradient descent

# Initializing the variables
init = tf.initialize_all_variables()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})

        #Display logs per epoch step
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch+1),  \
                "W=", sess.run(W), "b=", sess.run(b) , "x= ",x," y =", y," result :",sess.run(mulres,feed_dict={X: x})

    print "Optimization Finished!"
    print  "W=", sess.run(W), "b=", sess.run(b), '\n'


    # Testing example, as requested (Issue #2)
    test_X = numpy.asarray([[1.0,0.0]])
    test_Y = numpy.asarray([0])

    for x, y in zip(train_X, train_Y):
        print "x: ",x,"y: ",y
        print "Testing... (L2 loss Comparison)","result :",sess.run(mulres, feed_dict={X: x})
        print sess.run(tf.matmul(X, W),feed_dict={X: x})
        print "result :"
        predict = sess.run(activation,feed_dict={X: x})
        print predict

matmul operates directly on the tensors, and in your case the tensor has 2 rows and 1 column.

matmul has arguments that let you transpose either operand, for example:

matmul(X, W, transpose_a=True)

You can check the documentation here: docs
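
For example, here is a minimal standalone sketch (with illustrative tensors that are not from the question) of how transpose_a turns the product of two column vectors into a single number:

import tensorflow as tf

a = tf.constant([[1.0], [2.0]])   # shape [2, 1]
b = tf.constant([[3.0], [4.0]])   # shape [2, 1]

# a is used as its transpose, shape [1, 2], so the product has shape [1, 1]
prod = tf.matmul(a, b, transpose_a=True)

with tf.Session() as sess:
    print(sess.run(prod))         # [[ 11.]]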

As in standard matrix multiplication, if A has shape [m, k] and B has shape [k, n], then tf.matmul(A, B) has shape [m, n] (m rows, n columns, in the ordering TensorFlow uses).

In your program, you are computing tf.matmul(X, W). X is defined as a placeholder of shape [2, 1], and W is defined as a variable initialized to a [1, 2] matrix of zeros. As a result, mulres = tf.matmul(X, W) has shape [2, 2], which is what gets printed (result: ...) when I run your code locally.
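
To make the shapes concrete, here is a small standalone sketch (the weight values are copied from the question purely for illustration):

import tensorflow as tf

x = tf.constant([[1.0], [1.0]])                # shape [2, 1], like the X placeholder
w = tf.constant([[0.49900547, 0.49900547]])    # shape [1, 2], like the trained W

with tf.Session() as sess:
    print(sess.run(tf.matmul(x, w)))
    # [[ 0.49900547  0.49900547]
    #  [ 0.49900547  0.49900547]]   <- a [2, 2] outer product, not a single number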

If you want to define a hidden layer with a single output, the change is simple:

W = tf.Variable(tf.zeros([1,2]), name="weight")

...should be replaced with:

W = tf.Variable(tf.zeros([2, 1]), name="weight")

(In practice, initializing the weights to tf.zeros will prevent them from training, because every input element receives the same gradient during backpropagation. Instead, you should initialize them randomly, for example with:

W = tf.Variable(tf.truncated_normal([2, 1], stddev=0.5), name="weight")

This will allow the network to learn a different value for each weight component.)
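
Putting the pieces together, here is a minimal sketch of the corrected weight definition; how the multiplication is wired is my assumption (the answer above only changes W), and with X kept as a [2, 1] placeholder it also needs the transpose_a argument mentioned earlier to produce a single number:

import tensorflow as tf

# Sketch only: [2, 1] weights plus transpose_a, so the activation is a single value.
X = tf.placeholder("float", [2, 1], name="inputarr")
W = tf.Variable(tf.truncated_normal([2, 1], stddev=0.5), name="weight")
b = tf.Variable(0.0, name="bias")

# X is used as its transpose, shape [1, 2]; [1, 2] x [2, 1] gives a [1, 1] result.
activation = tf.add(tf.matmul(X, W, transpose_a=True), b)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(activation, feed_dict={X: [[1.0], [1.0]]}))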