Training a neural network using the Nelder-Mead algorithm on TI Nspire

In this installment the Nelder-Mead method is used to train a simple neural network on the XOR problem. The network is fully connected, with two inputs, two hidden neurons, and one output. In mainstream practice, backpropagation and various evolutionary algorithms are far more popular for training neural networks on real-world problems. Nelder-Mead is used here purely out of curiosity, to see how this general-purpose optimization routine performs in a neural network setting on the TI Nspire.
[Image: neural-network-xor1]

The sigmoid function is declared as a TI Nspire function.
[Image: neural-network-xor2]
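The same function expressed in Python (a sketch; the original is written in TI Nspire Basic on the calculator) would be:

```python
import math

def sigmoid(x):
    """Logistic sigmoid: squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))
```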

For the XOR problem, the inputs are defined as two lists, and the expected outputs in a third.
[Image: neural-network-xor3]
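In Python, the equivalent of those list definitions is just the XOR truth table (variable names here are illustrative, not the ones used on the calculator):

```python
# XOR truth table: two input lists and the expected output list.
x1       = [0, 0, 1, 1]
x2       = [0, 1, 0, 1]
expected = [0, 1, 1, 0]
```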

The activation functions for each neuron are declared.
[Image: neural-network-xor4]
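A Python sketch of the forward pass, assuming a 2-2-1 topology with a sigmoid at every neuron and the nine parameters (six hidden weights/biases, three output weights/biases) packed into one tuple. The parameter names are assumptions for illustration, not the calculator's variable names:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(p, a, b):
    """Forward pass for a fully connected 2-2-1 network.

    p packs the 9 trainable parameters:
    (w1, w2, b1) feed hidden neuron 1, (w3, w4, b2) feed hidden
    neuron 2, and (w5, w6, b3) feed the output neuron.
    """
    w1, w2, b1, w3, w4, b2, w5, w6, b3 = p
    h1 = sigmoid(w1 * a + w2 * b + b1)      # hidden neuron 1
    h2 = sigmoid(w3 * a + w4 * b + b2)      # hidden neuron 2
    return sigmoid(w5 * h1 + w6 * h2 + b3)  # output neuron
```

With hand-picked weights (e.g. one hidden neuron acting like OR, the other like AND, and the output subtracting the two), this network reproduces XOR exactly; training is about finding such weights automatically.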

To train the network, the sum of squared errors over the four XOR patterns is used as the objective function fed into the Nelder-Mead algorithm for minimization. Random numbers are used as the initial parameters.
[Image: neural-network-xor6]
[Image: neural-network-xor7]
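The same training step can be sketched in Python. Here SciPy's built-in Nelder-Mead implementation stands in for the TI Nspire Nelder-Mead program, and the 2-2-1 parameter packing is the same assumption as above:

```python
import math
import random
from scipy.optimize import minimize  # SciPy's Nelder-Mead, standing in for the TI program

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(p, a, b):
    """Forward pass for a fully connected 2-2-1 network (9 parameters)."""
    w1, w2, b1, w3, w4, b2, w5, w6, b3 = p
    h1 = sigmoid(w1 * a + w2 * b + b1)
    h2 = sigmoid(w3 * a + w4 * b + b2)
    return sigmoid(w5 * h1 + w6 * h2 + b3)

# The four XOR patterns and their targets.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 1, 1, 0]

def sse(p):
    """Sum of squared errors over the four XOR patterns."""
    return sum((forward(p, a, b) - t) ** 2 for (a, b), t in zip(X, T))

# Random initial parameters, as in the post (seed fixed for repeatability).
random.seed(1)
p0 = [random.uniform(-1, 1) for _ in range(9)]

res = minimize(sse, p0, method='Nelder-Mead',
               options={'maxiter': 5000, 'xatol': 1e-8, 'fatol': 1e-10})
```

Note that Nelder-Mead is a local, derivative-free method: depending on the random start it can settle in a local minimum rather than a perfect XOR fit, which is one reason restarting from a new random simplex is sometimes needed.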

Finally, the resulting weights and biases are obtained by running the Nelder-Mead program.
[Image: neural-network-xor8]

The graph below compares the output of the Nelder-Mead-trained XOR network against the expected values.
[Image: neural-network-xor9]
