Fixing NaN problems during TensorFlow training
Training a deep network is a process of iteratively updating its parameters. One situation to watch for: when the input data has not been normalized, the forward pass may already saturate to an output like [0, 1, 0] while the true label is, say, [1, 0, 0]. Such an output is no longer a probability distribution but a hard, incorrect estimate; backpropagation then drives the weights and biases toward infinity, the values overflow, and NaNs appear.

Fixes:
1. Normalize the input data, e.g. divide image pixel values by 255 to map them into [0, 1].
2. When the network has many layers, apply batch normalization in each layer (a minimal sketch follows this list).
3. Draw the weights from a truncated normal distribution with zero mean and a small standard deviation, e.g. tf.truncated_normal([3, 3, 64], mean=0, stddev=0.01).
4. Use tanh as the activation function.
5. Reduce the learning rate lr.
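For fix #2, here is a minimal sketch of batch normalization in TF 1.x, assuming the tf.layers API; the layer sizes (784 -> 256 -> 10) are illustrative and not taken from the article:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)  # BN uses batch statistics in training, moving averages at test time

h = tf.layers.dense(x, 256)                                 # linear layer; activation comes after BN
h = tf.layers.batch_normalization(h, training=is_training)  # fix #2: normalize the pre-activations
h = tf.nn.tanh(h)                                           # fix #4: tanh activation
logits = tf.layers.dense(h, 10)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))

# batch_normalization maintains moving-average statistics that must be updated on every train step
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

A complete example: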
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('data', one_hot=True)

def add_layer(input_data, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    Biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.add(tf.matmul(input_data, Weights), Biases)
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return {'outdata': outputs, 'w': Weights}

def get_accuracy(t_y):
    global prediction
    accu = tf.reduce_mean(tf.cast(
        tf.equal(tf.argmax(prediction['outdata'], 1), tf.argmax(t_y, 1)),
        dtype=tf.float32))
    return accu

X = tf.placeholder(tf.float32, [None, 784])
Y = tf.placeholder(tf.float32, [None, 10])

l1 = add_layer(X, 784, 1024, tf.nn.relu)                        # hidden layer
prediction = add_layer(l1['outdata'], 1024, 10, tf.nn.softmax)  # output layer

cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(Y * tf.log(prediction['outdata']), reduction_indices=[1]))
optimizer = tf.train.GradientDescentOptimizer(0.000001)  # fix #5: a small learning rate
train = optimizer.minimize(cross_entropy)

# A second, untrained readout on top of l1, used only for comparison prints
newW = tf.Variable(tf.random_normal([1024, 10]))
newOut = tf.matmul(l1['outdata'], newW)
newSoftMax = tf.nn.softmax(newOut)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for i in range(2):
        X_train, y_train = mnist.train.next_batch(1)
        X_train = X_train / 255  # fix #1: normalize the input
        print(sess.run(prediction['outdata'], feed_dict={X: X_train, Y: y_train}))
        print(sess.run(newOut, feed_dict={X: X_train}))
        print(sess.run(newSoftMax, feed_dict={X: X_train}))
        print(y_train)
        sess.run(train, feed_dict={X: X_train, Y: y_train})
        if i % 100 == 0:
            accuracy = get_accuracy(mnist.test.labels)
            print(sess.run(accuracy, feed_dict={X: mnist.test.images}))
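One more detail worth knowing: the hand-written cross entropy above is itself a NaN source, because tf.log returns -inf whenever a softmax output reaches exactly 0. Two common guards (not from the original article) are clipping the probabilities before the log, or computing the loss from the pre-softmax logits with TensorFlow's numerically stable fused op:

# Option 1: clip the probabilities before taking the log
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(Y * tf.log(tf.clip_by_value(prediction['outdata'], 1e-10, 1.0)),
                   reduction_indices=[1]))

# Option 2: work from the logits (the layer output before softmax); add_layer
# would need to return Wx_plus_b for this, so 'logits' here is hypothetical
# cross_entropy = tf.reduce_mean(
#     tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=logits))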
That is all for this article; I hope it helps with your study.
