In this installment, the Nelder-Mead method is used to train a simple neural network on the XOR problem. The network has two inputs, one output, and two hidden layers, and is fully connected. In mainstream practice, backpropagation and evolutionary algorithms are far more popular choices for training neural networks on real-world problems. Nelder-Mead is used here purely out of curiosity, to see how this general-purpose optimization routine performs in a neural network setting on the TI Nspire.
The sigmoid function is declared as a TI Nspire function.
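The original listing is in TI-Nspire BASIC and is not reproduced here; a Python sketch of the same function (the name `sigmoid` is the conventional one, not necessarily the article's) might look like this:

```python
import math

def sigmoid(x):
    # Logistic sigmoid: squashes any real number into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))
```

The sigmoid is a natural fit here because XOR's expected outputs are 0 and 1, which are the function's two asymptotes.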
For the XOR problem, the inputs are defined as two lists, and the expected outputs in another.
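In Python, the same list-based setup (variable names are illustrative, not taken from the article) could be written as:

```python
# XOR truth table stored as parallel lists, mirroring the calculator's list setup
x1       = [0, 0, 1, 1]   # first input
x2       = [0, 1, 0, 1]   # second input
expected = [0, 1, 1, 0]   # expected XOR output
```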
The activation functions for each neuron are declared.
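As a concrete illustration, here is a forward pass in Python. For brevity this sketch uses a single hidden layer of two neurons (the minimal topology that can solve XOR) rather than the article's two hidden layers; the parameter packing and helper names are assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(p, a, b):
    # p packs 9 parameters: two hidden neurons (weight, weight, bias each)
    # followed by one output neuron (weight, weight, bias).
    h1 = sigmoid(p[0] * a + p[1] * b + p[2])
    h2 = sigmoid(p[3] * a + p[4] * b + p[5])
    return sigmoid(p[6] * h1 + p[7] * h2 + p[8])
```

Writing each neuron's activation as its own expression keeps the structure close to the per-neuron function definitions on the calculator.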
To train the network, the sum-of-squared-errors function over the four XOR cases is fed into the Nelder-Mead algorithm for minimization. Random numbers are used as the initial parameters.
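A sketch of that objective in Python, continuing the assumed 9-parameter, single-hidden-layer network from above (names and packing are illustrative):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(p, a, b):
    h1 = sigmoid(p[0] * a + p[1] * b + p[2])
    h2 = sigmoid(p[3] * a + p[4] * b + p[5])
    return sigmoid(p[6] * h1 + p[7] * h2 + p[8])

# The four XOR training cases and their targets
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sse(p):
    # Sum of squared errors over all four XOR cases
    return sum((forward(p, a, b) - t) ** 2 for (a, b), t in CASES)

# Random initial parameters, as in the article
p0 = [random.uniform(-1, 1) for _ in range(9)]
```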
Finally, the resulting weights and biases are obtained by running the Nelder-Mead program.
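The article runs its own Nelder-Mead program on the calculator. For readers who want to reproduce the experiment off-device, here is a compact, simplified pure-Python Nelder-Mead (standard reflection/expansion/contraction/shrink coefficients 1, 2, 0.5, 0.5; no outside contraction) applied to the assumed 9-parameter network:

```python
import math
import random

def sigmoid(x):
    if x < -60.0:
        return 0.0  # guard: math.exp would overflow for large negative x
    return 1.0 / (1.0 + math.exp(-x))

def forward(p, a, b):
    h1 = sigmoid(p[0] * a + p[1] * b + p[2])
    h2 = sigmoid(p[3] * a + p[4] * b + p[5])
    return sigmoid(p[6] * h1 + p[7] * h2 + p[8])

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sse(p):
    return sum((forward(p, a, b) - t) ** 2 for (a, b), t in CASES)

def nelder_mead(f, x0, step=0.5, iters=2000):
    n = len(x0)
    # Initial simplex: x0 plus one point perturbed along each dimension
    simplex = [list(x0)]
    for i in range(n):
        pt = list(x0)
        pt[i] += step
        simplex.append(pt)
    for _ in range(iters):
        simplex.sort(key=f)                       # best first, worst last
        best, second, worst = simplex[0], simplex[-2], simplex[-1]
        # Centroid of all vertices except the worst
        c = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [c[i] + (c[i] - worst[i]) for i in range(n)]
        if f(refl) < f(best):
            # Reflection is the new best: try expanding further
            exp = [c[i] + 2 * (c[i] - worst[i]) for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(second):
            simplex[-1] = refl
        else:
            # Contract toward the worst vertex
            contr = [c[i] + 0.5 * (worst[i] - c[i]) for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:
                # Shrink the whole simplex toward the best vertex
                simplex = [best] + [
                    [(best[i] + q[i]) / 2 for i in range(n)] for q in simplex[1:]
                ]
    return min(simplex, key=f)

random.seed(1)
p0 = [random.uniform(-1, 1) for _ in range(9)]
p = nelder_mead(sse, p0)
print(sse(p), [round(w, 3) for w in p])
```

Like any Nelder-Mead run on a non-convex error surface, this can stall in a local minimum, so restarting from a few different random initializations is worthwhile.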
The graph below compares the outputs of the Nelder-Mead-trained XOR neural network against the expected values.