Tag Archives: Back propagation

Training a neural network using the Nelder-Mead algorithm on TI Nspire

In this installment, the Nelder-Mead method is used to train a simple neural network on the XOR problem. The network has 2 inputs, 1 output, and 2 hidden layers, and is fully connected. In mainstream practice, back propagation and evolutionary algorithms are far more popular for training neural networks on real-world problems. Nelder-Mead is used here purely out of curiosity, to see how this general-purpose optimization routine performs in a neural network setting on the TI Nspire.
[Screenshot: neural-network-xor1]

The sigmoid function is declared as a TI Nspire function.
[Screenshot: neural-network-xor2]
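The screenshot itself is not reproduced here. As a rough sketch of the same idea, the logistic sigmoid written in R (rather than TI Nspire Basic) might look like this; the name sigmoid is my own choice, not necessarily the one used on the calculator:

# Logistic sigmoid, used as the activation function throughout this example
sigmoid <- function(x) {
  1 / (1 + exp(-x))
}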

For the XOR problem, the inputs are defined as two lists, and the expected outputs in a third.
[Screenshot: neural-network-xor3]
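In R terms, the training data would be something like the following (the variable names are illustrative, not taken from the screenshot):

# XOR training data: two input lists and the expected output
x1     <- c(0, 0, 1, 1)
x2     <- c(0, 1, 0, 1)
target <- c(0, 1, 1, 0)   # expected XOR output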

The activation functions for each neuron are declared.
[Screenshot: neural-network-xor4]
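Since the screenshot is not shown, here is a stand-in forward pass in R, assuming for illustration two hidden neurons feeding one output neuron (nine parameters in total); the actual topology and names on the Nspire may differ:

# Forward pass: w holds 6 weights and 3 biases
forward <- function(w, x1, x2) {
  h1 <- sigmoid(w[1] * x1 + w[2] * x2 + w[3])   # hidden neuron 1
  h2 <- sigmoid(w[4] * x1 + w[5] * x2 + w[6])   # hidden neuron 2
  sigmoid(w[7] * h1 + w[8] * h2 + w[9])         # output neuron
}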

To train the network, the sum of squared errors over the four XOR patterns is used as the objective function fed into the Nelder-Mead algorithm for minimization. Random numbers are used as the initial parameters.
[Screenshot: neural-network-xor6]
[Screenshot: neural-network-xor7]
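The original post calls a Nelder-Mead program written for the Nspire; as a stand-in sketch, R's built-in optim() with method = "Nelder-Mead" minimizes the same kind of sum-of-squared-errors objective from a random starting point:

# Objective: sum of squared errors over the four XOR patterns
sse <- function(w) sum((forward(w, x1, x2) - target)^2)

set.seed(1)
w0  <- runif(9, -1, 1)          # random initial parameters
fit <- optim(w0, sse, method = "Nelder-Mead",
             control = list(maxit = 5000, reltol = 1e-12))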

Finally, the resulting weights and biases are obtained by running the Nelder-Mead program.
[Screenshot: neural-network-xor8]
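With the optim() stand-in above, the trained parameters and the remaining error would be read off as:

fit$par     # trained weights and biases
fit$value   # final sum of squared errors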

Below is a comparison graph of the output of the Nelder-Mead-trained XOR network against the expected values.
[Screenshot: neural-network-xor9]
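A comparison along the same lines can be sketched in R by plotting the trained network's output next to the targets (again using the illustrative functions defined above, not the Nspire code):

# Compare trained network output with the expected XOR values
pred <- forward(fit$par, x1, x2)
plot(target, type = "b", pch = 19, ylim = c(0, 1),
     xlab = "XOR pattern", ylab = "output")
lines(pred, type = "b", pch = 1, lty = 2)
legend("top", legend = c("expected", "Nelder-Mead trained"),
       pch = c(19, 1), lty = c(1, 2))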

 


Stochastic Gradient Descent in R

Stochastic Gradient Descent (SGD) is an optimization method commonly used in machine learning, especially for neural networks. As the name implies, it is aimed at minimizing a function, taking gradient steps computed from randomly sampled observations rather than from the full data set.
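As a small illustration (not from the original post), here is a bare-bones SGD loop in R for a least-squares problem, updating the parameters from one randomly drawn observation at a time:

# Minimal SGD for least squares: theta is nudged by the gradient
# of the squared error of a single random observation per step
set.seed(42)
n <- 1000
x <- cbind(1, rnorm(n))                     # intercept and one predictor
y <- as.numeric(x %*% c(2, -3)) + rnorm(n, sd = 0.5)

theta <- c(0, 0)                            # start from zero
lr <- 0.01                                  # learning rate
for (step in 1:10000) {
  i <- sample(n, 1)                         # pick one observation
  grad <- 2 * (sum(x[i, ] * theta) - y[i]) * x[i, ]
  theta <- theta - lr * grad
}
theta                                       # should end up near c(2, -3)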

In R, there is an sgd package for this purpose. As a warm-up for a freshly upgraded R and RStudio installation, it is taken out for a test drive.

[Screenshot: R-sgd1]
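The screenshot presumably shows the package being installed and loaded; in R that amounts to:

# install.packages("sgd")   # CRAN package
library(sgd)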

Running the documentation example.
[Screenshot: R-sgd2]
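The example itself appears only as a screenshot; in the spirit of the package's linear-regression example, and assuming the sgd(formula, data, model = ...) interface, it would look roughly like this:

# Linear model fitted by stochastic gradient descent (sketch)
set.seed(123)
N <- 1e4
X <- matrix(rnorm(N * 5), ncol = 5)
theta <- rep(5, 6)                               # true coefficients
y <- as.numeric(cbind(1, X) %*% theta) + rnorm(N)
dat <- data.frame(y = y, x = X)

fit <- sgd(y ~ ., data = dat, model = "lm")
fit$coefficients                                 # estimates close to 5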

Running the included demo for logistic regression.
[Screenshot: R-sgd3]
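I have not reproduced the demo output; the demos shipped with the package can be listed (and then run by name) with base R's demo() function:

demo(package = "sgd")   # list the demos included with the sgd package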