
 

Many to One RNN with Fixed Sequence Length:

In this tutorial we implement the many-to-one RNN structure shown in Fig. 1: the network reads a fixed-length sequence of values, one per time step, and produces a single output after the last time step.


Fig1. Unfolded representation of the implemented RNN structure

 

0. Import the required libraries:

We will start with importing the required libraries to our Python environment.

In [ ]:
# imports
import tensorflow as tf
import numpy as np
 
 

1. Generate some data

For this tutorial we generate a simple toy dataset: each sample is a sequence of 4 random integers (each between 0 and 9), and the target is the sum of the values in the sequence.

1.1. Data dimension

Here, we specify the dimensions of the data samples that will be used in the code. Defining these as variables makes it easier to modify them later (compared with using hard-coded numbers throughout the code). Ideally these would be inferred from the data that has been read, but here we simply write the numbers.

In [ ]:
# Data Dimensions
input_dim = 1           # input dimension
seq_max_len = 4         # sequence maximum length
out_dim = 1             # output dimension
 
 

1.2. Generate data and display the sizes

Now we generate the toy training and test sets described above and display their sizes:

In [ ]:
# ==========
#  TOY DATA
# ==========
x_train = np.random.randint(0, 10, size=(100, seq_max_len, input_dim))
y_train = np.sum(x_train, axis=1)

x_test = np.random.randint(0, 10, size=(5, seq_max_len, input_dim))
y_test = np.sum(x_test, axis=1)

print("Size of:")
print("- Training-set size:\t\t{}".format(len(y_train)))
print("- Test-set size:\t{}".format(len(y_test)))
 
 

To fetch random batches of samples during training, we define a simple helper function:

In [ ]:
def next_batch(x, y, batch_size):
    """Return a random batch of `batch_size` samples and their labels."""
    N = x.shape[0]
    # pick batch_size distinct random sample indices
    batch_indices = np.random.permutation(N)[:batch_size]
    x_batch = x[batch_indices]
    y_batch = y[batch_indices]
    return x_batch, y_batch
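
As a quick sanity check (assuming x_train and y_train from above are in scope), we can draw one batch and verify its shape:

In [ ]:
# Draw a single random batch and check the array shapes
x_batch, y_batch = next_batch(x_train, y_train, batch_size=10)
print("x_batch shape: {}".format(x_batch.shape))   # expected: (10, 4, 1)
print("y_batch shape: {}".format(y_batch.shape))   # expected: (10, 1)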
 
 

2. Hyperparameters

In [ ]:
# Parameters
learning_rate = 0.01    # The optimization initial learning rate
training_steps = 10000  # Total number of training steps
batch_size = 10         # batch size
display_freq = 1000     # Frequency of displaying the training results
 
 

3. Network configuration

In [ ]:
num_hidden_units = 10   # number of hidden units
 
 

4. Create network helper functions

4.1. Helper functions for creating new variables

In [ ]:
# weight and bias wrappers
def weight_variable(shape):
    """
    Create a weight variable with appropriate initialization
    :param shape: weight shape
    :return: initialized weight variable
    """
    initer = tf.truncated_normal_initializer(stddev=0.01)
    return tf.get_variable('W',
                           dtype=tf.float32,
                           shape=shape,
                           initializer=initer)

def bias_variable(shape):
    """
    Create a bias variable with appropriate initialization
    :param shape: bias variable shape
    :return: initialized bias variable
    """
    initial = tf.constant(0., shape=shape, dtype=tf.float32)
    return tf.get_variable('b',
                           dtype=tf.float32,
                           initializer=initial)
 
 

4.2. Helper function for creating an RNN

In [ ]:
def RNN(x, weights, biases, num_hidden):
    """
    :param x: inputs of size [batch_size, max_time, input_dim]
    :param weights: matrix of fully-connected output layer weights
    :param biases: vector of fully-connected output layer biases
    :param num_hidden: number of hidden units
    """
    # create a vanilla RNN cell with num_hidden units
    cell = tf.nn.rnn_cell.BasicRNNCell(num_hidden)
    # unroll the cell over time; outputs has shape [batch_size, max_time, num_hidden]
    outputs, states = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)
    # take the output of the last time step and pass it through the fully-connected layer
    out = tf.matmul(outputs[:, -1, :], weights) + biases
    return out
 
 

5. Create the network graph

5.1. Placeholders for the inputs (x) and corresponding labels (y)

In [ ]:
# Placeholders for inputs (x) and outputs (y)
x = tf.placeholder(tf.float32, [None, seq_max_len, input_dim])
y = tf.placeholder(tf.float32, [None, 1])
 
 

5.2. Define the network

In [ ]:
# create weight matrix initialized randomly from a truncated normal distribution (stddev=0.01)
W = weight_variable(shape=[num_hidden_units, out_dim])

# create bias vector initialized as zero
b = bias_variable(shape=[out_dim])

# Network predictions
pred_out = RNN(x, W, b, num_hidden_units)
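
If you want to confirm the static shape of the prediction tensor (a quick optional check), you can print it before building the rest of the graph; the batch dimension is unknown at graph-construction time, so it shows up as "?":

In [ ]:
# The prediction tensor has shape (batch_size, out_dim); batch_size is not fixed yet
print(pred_out.get_shape())   # expected: (?, 1)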
 
 

5.3. Define the loss function and optimizer

In [ ]:
# Define the loss function (i.e. mean-squared error loss) and optimizer
cost = tf.reduce_mean(tf.square(pred_out - y))
train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
 
 

5.4. Initialize all variables

In [ ]:
# Creating the op for initializing all variables
init = tf.global_variables_initializer()
 
 

6. Train

In [ ]:
sess = tf.InteractiveSession()
sess.run(init)
for i in range(training_steps):
    x_batch, y_batch = next_batch(x_train, y_train, batch_size)
    _, mse = sess.run([train_op, cost], feed_dict={x: x_batch, y: y_batch})
    if i % display_freq == 0:
        print('Step {}, MSE={}'.format(i, mse))
 
 

7. Test

In [ ]:
# Test
y_pred = sess.run(pred_out, feed_dict={x: x_test})

for i in range(len(y_test)):
    print("When the ground-truth output is {}, the model predicts {}"
          .format(y_test[i], y_pred[i]))
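
As an additional check (reusing the cost op and the still-open session), we can also report the mean-squared error over the whole test set:

In [ ]:
# Evaluate the mean-squared error on the test set
test_mse = sess.run(cost, feed_dict={x: x_test, y: y_test})
print("Test MSE: {}".format(test_mse))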
 
 

After we have finished testing, we close the session to free the resources.

In [ ]:
sess.close()
 
 

Thanks for reading! If you have any questions or doubts, feel free to leave a comment on our website.

© 2018 Easy-TensorFlow team. All Rights Reserved.