Stock Price Prediction with SimpleRNN

Please see the previous articles for the underlying principles.

1. Data source

  SH600519.csv contains the daily K-line data of sh600519 (Guizhou Maotai), downloaded with the tushare module. In this example only its column C data, the opening price, is used (as shown in the figure):
  the opening prices of 60 consecutive days are used to predict the opening price on the 61st day.
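
  To make the windowing concrete, here is a minimal illustration (made-up prices, and a 3-day window instead of the real 60-day window used below):

# Illustration only: a 3-day window of opening prices predicting day 4
prices = [10.0, 10.5, 10.2, 10.8]
x, y = prices[0:3], prices[3]  # x = [10.0, 10.5, 10.2] (features), y = 10.8 (label)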

2. Code implementation

   Follow the six-step method: import the relevant modules -> read the daily K-line data of Guizhou Maotai into the variable maotai; take the opening prices of the first 2126 days in maotai as the training data and the opening prices of the last 300 days as the test data; then normalize the opening prices so that the data fed into the neural network is distributed between 0 and 1;
   next, create four empty lists to receive the training-set input features, training-set labels, test-set input features, and test-set labels;

  continue by constructing the data. A for loop traverses the whole training set: the data of every 60 consecutive days is used as the input feature x_train, and the data of the 61st day is used as the corresponding label y_train, generating 2066 groups of training data in total; the training data is then shuffled, converted to array format, and reshaped into the dimensions required by the RNN input;
   similarly, a for loop traverses the whole test set and generates 240 groups of test data. The test set does not need to be shuffled, but it must also be converted to array format and reshaped into the dimensions required by the RNN input. (A vectorized alternative to the explicit loops is sketched below.)
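
   As an aside, TensorFlow 2.3+ ships tf.keras.utils.timeseries_dataset_from_array, which can build the same sliding windows without explicit loops. A minimal sketch, assuming training_set_scaled is the (2126, 1) scaled array built in the full code below:

import tensorflow as tf

# Sketch only (assumes TF >= 2.3): window i covers rows [i, i+60); its label is row i+60
train_ds = tf.keras.utils.timeseries_dataset_from_array(
    data=training_set_scaled[:-1],     # the last row can only ever be a label, never an input
    targets=training_set_scaled[60:],  # target i is the row right after window i
    sequence_length=60,
    batch_size=64,
    shuffle=True, seed=7)
# train_ds yields the same 2066 (window, label) pairs in batches of ((64, 60, 1), (64, 1)),
# which model.fit(train_ds, ...) can consume directly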

   Build the neural network with Sequential:
   the first recurrent layer has 80 memory units and pushes h_t to the next layer at every time step (return_sequences=True), followed by Dropout of 0.2;
   the second recurrent layer has 100 memory units and pushes h_t to the next layer only at the last time step, followed by Dropout of 0.2;
   since the output is a single number, the opening price on the 61st day, the fully connected Dense layer has 1 unit -> compile: training uses the Adam optimizer and the mean-squared-error loss function. In the stock-prediction code only the loss is observed and printed during training, so metrics does not need to be assigned -> set up checkpoint-based resumable training; fit executes the training process -> summary prints the network structure and parameter statistics (a worked count follows below).
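
   For reference, the parameter statistics that summary prints can be checked by hand: a SimpleRNN layer has units × (input_dim + units + 1) parameters (input kernel, recurrent kernel, bias), and a Dense layer has units × input_dim + units. With the layer sizes above:

   SimpleRNN-1: 80 × (1 + 80 + 1) = 6,560
   SimpleRNN-2: 100 × (80 + 100 + 1) = 18,100
   Dense: 100 × 1 + 1 = 101
   Total trainable parameters: 24,761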

  Perform loss visualization and parameter extraction (the trainable variables are written to a text file).

  Make the stock forecast. Predict on the test-set data with predict, then inverse-transform both the predicted values and the real values from the normalized range back to real prices. Finally, draw the real-value curve with a red line and the predicted-value curve with a blue line.

  To evaluate the model, three evaluation metrics are given: mean squared error, root mean squared error, and mean absolute error. The smaller these errors are, the closer the predicted values are to the real values.
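
  For completeness, with y_i the real value, ŷ_i the predicted value, and n the number of test samples (240 here):

  MSE = (1/n) Σ (ŷ_i − y_i)²
  RMSE = √MSE
  MAE = (1/n) Σ |ŷ_i − y_i|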

  Loss curve of the RNN stock forecast:

  RNN stock forecast curve:

  RNN stock forecast and evaluation metrics:

  Model summary:

3. Complete code

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dropout, Dense, SimpleRNN
import matplotlib.pyplot as plt
import os
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error
import math
# Read stock file
maotai = pd.read_csv('./SH600519.csv')
# The opening prices of the first (2426-300 = 2126) days are used as the training set. Columns count from 0, and 2:3 extracts the left-closed, right-open interval [2:3), i.e. column C, the opening price
training_set = maotai.iloc[0:2426 - 300, 2:3].values
# The opening prices of the last 300 days are used as the test set
test_set = maotai.iloc[2426 - 300:, 2:3].values

# normalization
sc = MinMaxScaler(feature_range=(0, 1))  # Define the scaler: normalize to (0, 1)
training_set_scaled = sc.fit_transform(training_set)  # fit_transform learns the training set's maximum and minimum (attributes inherent to the training set) and normalizes the training set with them
test_set = sc.transform(test_set)  # The test set is normalized using the training set's attributes
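# Note (added remark): after fit_transform, training_set_scaled spans exactly [0, 1];
# because the test set reuses the training set's min and max, its scaled values can fall
# slightly outside (0, 1) if the test period reaches new highs or lows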

x_train = []
y_train = []

x_test = []
y_test = []

# Training set: data of the first 2426-300 = 2126 days in the csv table
# Traverse the whole training set: the opening prices of 60 consecutive days are extracted as the input feature x_train, and the data of the 61st day is used as the label y_train; the for loop constructs 2426-300-60 = 2066 groups of data.
for i in range(60, len(training_set_scaled)):
    x_train.append(training_set_scaled[i - 60:i, 0])
    y_train.append(training_set_scaled[i, 0])
# Disrupt the training set
np.random.seed(7)
np.random.shuffle(x_train)
np.random.seed(7)
np.random.shuffle(y_train)
tf.random.set_seed(7)
# Change the training set from list format to array format
x_train, y_train = np.array(x_train), np.array(y_train)

# Make x_train meet the RNN input requirements: [number of samples, number of time steps the recurrent kernel unrolls, number of input features per time step].
# Here the whole data set is fed in, so the number of samples is x_train.shape[0], i.e. 2066 groups of data; 60 opening prices are input to predict the opening price on the 61st day, so the number of time steps is 60; the input feature of each time step is a single day's opening price, so the number of input features per time step is 1
x_train = np.reshape(x_train, (x_train.shape[0], 60, 1))
# Test set: data of the last 300 days in csv table
# Use the for loop to traverse the whole test set, extracting the opening prices of 60 consecutive days as the input feature x_test and the data of the 61st day as the label y_test; the for loop constructs 300-60 = 240 groups of data.
for i in range(60, len(test_set)):
    x_test.append(test_set[i - 60:i, 0])
    y_test.append(test_set[i, 0])
# Convert the test set to array format and reshape to meet the RNN input requirements: [number of samples, number of time steps the recurrent kernel unrolls, number of input features per time step]
x_test, y_test = np.array(x_test), np.array(y_test)
x_test = np.reshape(x_test, (x_test.shape[0], 60, 1))
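# Sanity check (added remark): with 2426 rows in the csv, the shapes at this point should be
# x_train: (2066, 60, 1), y_train: (2066,), x_test: (240, 60, 1), y_test: (240,)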

model = tf.keras.Sequential([
    SimpleRNN(80, return_sequences=True),  # First recurrent layer: 80 memory units; return_sequences=True pushes ht to the next layer at every time step
    Dropout(0.2),  # Dropout of 0.2
    SimpleRNN(100),  # Second recurrent layer: 100 memory units; only the last time step's ht is passed on
    Dropout(0.2),  # Dropout of 0.2
    Dense(1)  # The output is a single number, the opening price on the 61st day, so the Dense layer has 1 unit
])

model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss='mean_squared_error')  # Mean square error for loss function
# Only the loss is observed here, not accuracy, so the metrics option is omitted and each epoch prints the loss value only

checkpoint_save_path = "./checkpoint/rnn_stock.ckpt"

if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,
                                                 save_weights_only=True,
                                                 save_best_only=True,
                                                 monitor='val_loss')

history = model.fit(x_train, y_train, batch_size=64, epochs=50, validation_data=(x_test, y_test), validation_freq=1,
                    callbacks=[cp_callback])

model.summary()

file = open('./weights.txt', 'w')  # Parameter extraction
for v in model.trainable_variables:
    file.write(str(v.name) + '\n')
    file.write(str(v.shape) + '\n')
    file.write(str(v.numpy()) + '\n')
file.close()

loss = history.history['loss']
val_loss = history.history['val_loss']

plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.show()

################## predict ######################
# Test set input model for prediction
predicted_stock_price = model.predict(x_test)
# Restore the prediction data --- from (0, 1) inverse normalization to the original range
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
# Restore real data --- from (0, 1) inverse normalization to original range
real_stock_price = sc.inverse_transform(test_set[60:])
# Draw the comparison curve between the real data and the predicted data
plt.plot(real_stock_price, color='red', label='MaoTai Stock Price')
plt.plot(predicted_stock_price, color='blue', label='Predicted MaoTai Stock Price')
plt.title('MaoTai Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel('MaoTai Stock Price')
plt.legend()
plt.show()

##########evaluate##############
# Calculate MSE (mean squared error) --> E[(predicted value - real value)^2], the mean of the squared differences
mse = mean_squared_error(predicted_stock_price, real_stock_price)
# Calculate RMSE (root mean squared error) --> sqrt(MSE)
rmse = math.sqrt(mean_squared_error(predicted_stock_price, real_stock_price))
# Calculate MAE (mean absolute error) --> E[|predicted value - real value|], the mean of the absolute differences
mae = mean_absolute_error(predicted_stock_price, real_stock_price)
print('Mean square error: %.6f' % mse)
print('Root mean square error: %.6f' % rmse)
print('Mean absolute error: %.6f' % mae)
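
# Possible follow-up (a sketch, not part of the original code): use the most recent 60
# scaled opening prices to forecast the opening price of the next, unseen day
last_window = test_set[-60:].reshape(1, 60, 1)  # test_set is already scaled at this point
next_open = sc.inverse_transform(model.predict(last_window))
print('Predicted next opening price: %.2f' % next_open[0, 0])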
