How to write this equation in TensorFlow 2? (4x + 2 = 0)
This code doesn't run. What would the equivalent code be in TensorFlow 2?
import tensorflow as tf
# the equation is : 4x+2 = 0
unknownvalue = tf.Variable(0.0)
a = tf.constant(4.0)
b = tf.constant(2.0)
c = tf.multiply(unknownvalue,a) # 4x
equation = tf.add(c,b) # 4x+2
zerovalue = tf.constant(0.0)
diff = tf.square(equation-zerovalue) # difference is : 4x+2 - 0
solving = tf.train.GradientDescentOptimizer(0.01).minimize(diff)
init = tf.global_variables_initializer()
TF2 changed the way training loops are built compared to TF1.
I encourage you to read the guide Writing a training loop from scratch to learn more about how to do this.
Here is a direct implementation of your TF1 code in TF2:
import tensorflow as tf

x = tf.Variable(0.0)
optimizer = tf.optimizers.SGD(1e-2)

for idx in range(1, 26):
    with tf.GradientTape() as tape:
        # the equation is : 4x+2 = 0
        equation = x * 4 + 2  # 4x+2
        loss = tf.square(equation)
    grad = tape.gradient(loss, [x])
    optimizer.apply_gradients(zip(grad, [x]))
    if not idx % 5:
        tf.print(f"Iteration:{idx},loss:{loss.numpy():.4f},x:{x.numpy():.4f}")
The output is:
Iteration:5,loss:0.1829,x:-0.4273
Iteration:10,loss:0.0039,x:-0.4894
Iteration:15,loss:0.0001,x:-0.4985
Iteration:20,loss:0.0000,x:-0.4998
Iteration:25,loss:0.0000,x:-0.5000
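As a sanity check, the analytic solution of 4x + 2 = 0 is x = -1/2, and the gradient of the loss (4x + 2)^2 is 8(4x + 2), so the optimizer's trajectory can be reproduced with plain Python gradient descent. A minimal sketch, independent of TensorFlow:

# hand-rolled gradient descent for loss(x) = (4x + 2)**2
# d(loss)/dx = 2 * (4x + 2) * 4 = 8 * (4x + 2)
x = 0.0
lr = 0.01
for step in range(1, 26):
    grad = 8 * (4 * x + 2)
    x -= lr * grad
print(round(x, 4))  # -0.5, matching the TF2 run above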
The TensorFlow 2 code is much simpler and more intuitive (except perhaps for the GradientTape part, if you're not used to it):
import tensorflow as tf

# the equation is : 4x+2 = 0
unknownvalue = tf.Variable(0.0)

with tf.GradientTape() as tape:
    diff = (4 * unknownvalue + 2 - 0) ** 2

grads = tape.gradient(diff, [unknownvalue])
tf.optimizers.SGD(0.01).apply_gradients(zip(grads, [unknownvalue]))
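Note that this last snippet applies only a single gradient step, so unknownvalue only moves from 0.0 to -0.16; you still need a loop to converge to -0.5. If you want something closer in spirit to the TF1 minimize call, TF2 optimizers also accept a zero-argument callable as the loss. A minimal sketch, assuming tf.optimizers.SGD.minimize supports the callable-loss signature in your TF2 version:

import tensorflow as tf

x = tf.Variable(0.0)
optimizer = tf.optimizers.SGD(0.01)

for _ in range(25):
    # minimize() computes gradients of the callable loss and applies them
    optimizer.minimize(lambda: tf.square(4 * x + 2), var_list=[x])

tf.print(x)  # ~ -0.5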