Python Neural Networks: Building and Training an RNN with Keras

Mathilda · Updated 2024-11-13

Table of Contents

Key functions for building an RNN in Keras

1. SimpleRNN

2. model.train_on_batch

Full code

Key functions for building an RNN in Keras

1. SimpleRNN

SimpleRNN is used to build a plain, simple RNN layer in Keras; it needs to be imported before use.

from keras.layers import SimpleRNN

When actually using it, a few parameters are required.

model.add(SimpleRNN(
    batch_input_shape = (BATCH_SIZE, TIME_STEPS, INPUT_SIZE),
    output_dim = CELL_SIZE,
))

Here batch_input_shape is the shape of the RNN's input data: BATCH_SIZE is the number of samples used in each training step, TIME_STEPS is the number of time steps the RNN reads in sequence, and INPUT_SIZE is the size of the input at each time step.
CELL_SIZE is the number of neurons (hidden units) used to process each time step.
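To make these dimensions concrete for the MNIST example used later in this article (each 28x28 image is read row by row), here is a minimal shape check; the dummy_batch array is purely illustrative and not part of the original code:

import numpy as np

BATCH_SIZE = 50
TIME_STEPS = 28   ## one image row per time step
INPUT_SIZE = 28   ## 28 pixel values fed in at each time step

## a random stand-in for one batch of 50 MNIST images
dummy_batch = np.random.rand(BATCH_SIZE, TIME_STEPS, INPUT_SIZE)
print(dummy_batch.shape)   ## (50, 28, 28), matching batch_input_shape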

2. model.train_on_batch

Unlike the CNN and plain classification networks trained earlier, this RNN has its batch_input_shape fixed when the model is built, so training data must also be fed in fixed-size batches.
Before calling model.train_on_batch, the data therefore need some preparation: slice a training batch of BATCH_SIZE samples, as in the snippet below (a small helper that wraps this bookkeeping is sketched right after it).

X_batch = X_train[index_start:index_start + BATCH_SIZE, :, :]
Y_batch = Y_train[index_start:index_start + BATCH_SIZE, :]
index_start += BATCH_SIZE
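If you prefer to keep this bookkeeping in one place, the slicing and wrap-around logic can be collected into a small helper; next_batch below is a hypothetical name, not part of Keras or of the original code:

def next_batch(X, Y, index_start, batch_size):
    ## slice one contiguous batch out of the training data
    X_batch = X[index_start:index_start + batch_size, :, :]
    Y_batch = Y[index_start:index_start + batch_size, :]
    index_start += batch_size
    ## wrap around once the end of the training set is reached
    if index_start >= X.shape[0]:
        index_start = 0
    return X_batch, Y_batch, index_start

## usage inside the training loop:
## X_batch, Y_batch, index_start = next_batch(X_train, Y_train, index_start, BATCH_SIZE)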

The full training loop looks like this:

for i in range(500):
    X_batch = X_train[index_start:index_start + BATCH_SIZE, :, :]
    Y_batch = Y_train[index_start:index_start + BATCH_SIZE, :]
    index_start += BATCH_SIZE
    cost = model.train_on_batch(X_batch, Y_batch)
    if index_start >= X_train.shape[0]:
        index_start = 0
    if i % 100 == 0:
        ## evaluate accuracy on the test set
        cost, accuracy = model.evaluate(X_test, Y_test, batch_size=50)
        ## W, b = model.layers[0].get_weights()
        print("accuracy:", accuracy)

x = X_test[1].reshape(1, 28, 28)  ## reshape one test image to (1, 28, 28)

Full code

This is a complete example of an RNN used to recognize handwritten digits (the MNIST dataset).

import numpy as np
from keras.models import Sequential
from keras.layers import SimpleRNN, Activation, Dense  ## Dense = fully connected layer
from keras.datasets import mnist
from keras.utils import np_utils
from keras.optimizers import Adam

TIME_STEPS = 28
INPUT_SIZE = 28
BATCH_SIZE = 50
index_start = 0
OUTPUT_SIZE = 10
CELL_SIZE = 75
LR = 1e-3

(X_train, Y_train), (X_test, Y_test) = mnist.load_data()

X_train = X_train.reshape(-1, 28, 28) / 255
X_test = X_test.reshape(-1, 28, 28) / 255

Y_train = np_utils.to_categorical(Y_train, num_classes=10)
Y_test = np_utils.to_categorical(Y_test, num_classes=10)

model = Sequential()

## RNN layer
model.add(SimpleRNN(
    batch_input_shape = (BATCH_SIZE, TIME_STEPS, INPUT_SIZE),
    output_dim = CELL_SIZE,
))
model.add(Dense(OUTPUT_SIZE))
model.add(Activation("softmax"))

adam = Adam(LR)

## compile
model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy'])

## train
for i in range(500):
    X_batch = X_train[index_start:index_start + BATCH_SIZE, :, :]
    Y_batch = Y_train[index_start:index_start + BATCH_SIZE, :]
    index_start += BATCH_SIZE
    cost = model.train_on_batch(X_batch, Y_batch)
    if index_start >= X_train.shape[0]:
        index_start = 0
    if i % 100 == 0:
        ## evaluate accuracy on the test set
        cost, accuracy = model.evaluate(X_test, Y_test, batch_size=50)
        ## W, b = model.layers[0].get_weights()
        print("accuracy:", accuracy)
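Since batch_input_shape fixes the batch dimension at BATCH_SIZE, it is simplest to run predictions on batches of exactly that size as well. A minimal sketch, assuming it runs right after the training code above (the names probs, predicted_digits and true_digits are illustrative):

## predict the first BATCH_SIZE test digits in one fixed-size batch
probs = model.predict(X_test[:BATCH_SIZE], batch_size=BATCH_SIZE)
predicted_digits = np.argmax(probs, axis=1)           ## most likely class per sample
true_digits = np.argmax(Y_test[:BATCH_SIZE], axis=1)  ## undo the one-hot encoding
print("predicted:", predicted_digits[:10])
print("true:     ", true_digits[:10])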

The experimental results:

10000/10000 [==============================] - 1s 147us/step
accuracy: 0.09329999938607215
...
10000/10000 [==============================] - 1s 112us/step
accuracy: 0.9395000022649765
10000/10000 [==============================] - 1s 109us/step
accuracy: 0.9422999995946885
10000/10000 [==============================] - 1s 114us/step
accuracy: 0.9534000000357628
10000/10000 [==============================] - 1s 112us/step
accuracy: 0.9566000008583069
10000/10000 [==============================] - 1s 113us/step
accuracy: 0.950799999833107
10000/10000 [==============================] - 1s 116us/step
10000/10000 [==============================] - 1s 112us/step
accuracy: 0.9474999988079071
10000/10000 [==============================] - 1s 111us/step
accuracy: 0.9515000003576278
10000/10000 [==============================] - 1s 114us/step
accuracy: 0.9288999977707862
10000/10000 [==============================] - 1s 115us/step
accuracy: 0.9487999993562698

That is the detailed content on building and training an RNN with Keras in Python. For more material on building and training RNNs with Keras, please follow the other related articles on 软件开发网!


