Supply chain optimization is a classic example in linear programming, and the Microsoft Solver Foundation ships with an Excel example that solves it.
The data are categorized and neatly laid out in one worksheet, while the Model pane provides an interface for setting constraints. The modelling is a bit more complex than in the standard Excel Solver, but the add-in helps by presenting an organized and consistent user interface inside Excel.
The Microsoft Solver Foundation (MSF) provides a rich optimization library and a modelling framework, all behind an easy-to-use GUI in Excel. To make it developer friendly, MSF also exposes the library through APIs in several programming languages.
Using MSF in Excel is very straightforward, thanks to its well-designed user interface. For example, in the classic shipping optimization example, the model inputs are grouped into parameters, constraints, and goals, and presented in side tabs.
A comparison of the Microsoft Solver Foundation and the built-in Excel Solver can be found in a previous installment.
Excel is able to solve optimization problems. Two commonly available tools are the built-in Solver add-in and the Excel plugin for the Microsoft Solver Foundation (MSF). The former ships with Excel but is not enabled by default; it can easily be turned on through the Excel Options menu. The latter is a separate download available from Microsoft.
For a simple performance comparison of the two, the non-linear data fitting example from MSF is used as the benchmark.
MSF provides an additional menu pane within Excel for complex optimization operations.
Optimization results and log from this benchmark run of the non-linear data fitting sample from MSF, which is based on an NIST sample.
Goals setting screen.
The built-in Solver, on the other hand, offers a simpler interface but still provides detailed reports, including answer, sensitivity, and limits reports in separate worksheets.
In short, the built-in Excel Solver offers an easy-to-use interface, while the Microsoft Solver Foundation is more capable for complex problems and modelling.
The NelderMead solver is selected by MSF in this benchmark. Check out a previous installment for details of running Nelder-Mead on the TI Nspire. Running the same data set on the Nspire with Nelder-Mead produced the following results.
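For readers without Excel or an Nspire at hand, the same kind of Nelder-Mead data fit can be sketched in a few lines of Python with SciPy. This is not the NIST dataset used in the benchmark; the data here are generated from an assumed exponential-decay model purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data from y = a * exp(-b * x) with a = 5, b = 0.5
# (illustrative stand-in, not the NIST benchmark data)
x = np.linspace(0.0, 10.0, 50)
y = 5.0 * np.exp(-0.5 * x)

def sse(params):
    # Sum of squared errors between the model and the data
    a, b = params
    return np.sum((y - a * np.exp(-b * x)) ** 2)

# Nelder-Mead needs no derivatives, only function evaluations
result = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x)  # converges toward the true parameters [5, 0.5]
```

The derivative-free nature of Nelder-Mead is exactly what makes it portable to calculators like the Nspire, where automatic differentiation of an arbitrary fitness expression is impractical.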
The genetic algorithm is one of the more popular evolutionary algorithms, with a wide range of uses including optimization. While there are many implementations of this technique, including one available as an option in the Excel Solver, building one yourself is a very good way to understand the underlying process.
The TI Nspire provides a rich set of matrix operations that can be used to model the data structures required by a genetic algorithm, for example creating an initial population of arbitrary size made up of binary, integer, or real numbers.
The crossover operation can be modelled as extracting parts of a matrix using the augment function.
The fitness function can be dynamically defined using the expr function available in the Nspire environment.
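The three building blocks above (random population as a matrix, crossover as matrix splicing, and a pluggable fitness function) can be sketched in Python with NumPy. The fitness used here is the classic OneMax toy problem (count the 1-bits), chosen only as a simple stand-in; the population sizes and rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
POP, GENES, GENERATIONS = 20, 16, 60

# Initial population: a POP x GENES matrix of random bits
pop = rng.integers(0, 2, size=(POP, GENES))

def fitness(pop):
    # OneMax: number of 1-bits per individual (stand-in for a real fitness)
    return pop.sum(axis=1)

def crossover(a, b, point):
    # Splice the head of one parent onto the tail of the other,
    # the slicing analogue of building a row with Nspire's augment()
    return np.concatenate([a[:point], b[point:]])

for _ in range(GENERATIONS):
    f = fitness(pop)
    elite = pop[f.argmax()].copy()          # keep the best individual
    # Roulette-wheel selection: pick parents proportionally to fitness
    parents = pop[rng.choice(POP, size=POP, p=f / f.sum())]
    children = np.empty_like(pop)
    for i in range(0, POP, 2):
        point = rng.integers(1, GENES)      # random single cut point
        children[i] = crossover(parents[i], parents[i + 1], point)
        children[i + 1] = crossover(parents[i + 1], parents[i], point)
    # Mutation: flip each bit with small probability
    mask = rng.random(children.shape) < 0.01
    pop = np.where(mask, 1 - children, children)
    pop[0] = elite                          # elitism: carry the best forward

print(fitness(pop).max())  # best fitness typically approaches GENES
```

Swapping in a different problem only means replacing `fitness` (and the gene encoding), which mirrors how `expr` lets the Nspire version accept an arbitrary fitness expression.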
A much more practical approach than calculating GARCH parameters on a calculator is to do it in R. Not only are ready-made packages available, retrieving financial data to experiment with is also a piece of cake, as the built-in facilities offer convenient access to historical data.
To use GARCH in R, the library must be installed first.
To test the library, data are imported using the tseries package.
A plot of the log return.
Before running the GARCH model, a Q-Q plot is reviewed.
Finally, the GARCH model is created using the command below.
With trace = FALSE, a clean model summary can be printed after the fit.
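To see what the fitted model actually computes, the GARCH(1,1) recursion behind it can be simulated in a few lines. This Python/NumPy sketch uses illustrative parameter values (omega, alpha, beta are not fitted values from any real series); the conditional variance follows sigma2[t] = omega + alpha * eps[t-1]^2 + beta * sigma2[t-1].

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative GARCH(1,1) parameters (not fitted values)
omega, alpha, beta = 0.1, 0.1, 0.8

n = 1000
eps = np.zeros(n)       # simulated returns
sigma2 = np.zeros(n)    # conditional variances
# Start at the unconditional variance (requires alpha + beta < 1)
sigma2[0] = omega / (1 - alpha - beta)

for t in range(1, n):
    # Today's variance depends on yesterday's shock and yesterday's variance
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Sample variance should sit near omega / (1 - alpha - beta) = 1.0
print(eps.var())
```

The persistence alpha + beta = 0.9 is what produces the volatility clustering that makes GARCH a standard model for financial returns.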
In this installment the Nelder-Mead method is used to train a simple neural network on the XOR problem. The network consists of 2 inputs, 1 output, and 2 hidden layers, and is fully connected. In mainstream practice, back-propagation and evolutionary algorithms are far more popular for training neural networks on real-world problems. Nelder-Mead is used here purely out of curiosity, to see how this general optimization routine performs in a neural network setting on the TI Nspire.
The sigmoid function is declared as a TI Nspire function.
For the XOR problem, the inputs are defined as two lists, and the expected outputs in another.
The activation functions for each neuron are declared.
To train the network, the sum-of-squared-errors function is fed into the Nelder-Mead algorithm for minimization. Random numbers are used for the initial parameters.
Finally, the resulting weights and biases are obtained by running the Nelder-Mead program.
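The same experiment is easy to reproduce off-calculator. The Python sketch below trains a minimal 2-2-1 sigmoid network (one hidden layer of two neurons, a common minimal XOR topology; it does not mirror the Nspire program's exact architecture) by handing the sum-of-squared-errors to SciPy's Nelder-Mead, with random restarts since the simplex method can stall in local minima.

```python
import numpy as np
from scipy.optimize import minimize

# XOR training data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(params, X):
    # Unpack a 2-2-1 network: hidden weights (2x2), hidden biases (2),
    # output weights (2), output bias (1) -- nine parameters in total
    W1 = params[0:4].reshape(2, 2)
    b1 = params[4:6]
    W2 = params[6:8]
    b2 = params[8]
    h = sigmoid(X @ W1 + b1)
    return sigmoid(h @ W2 + b2)

def sse(params):
    # The objective handed to Nelder-Mead: sum of squared errors
    return np.sum((forward(params, X) - y) ** 2)

rng = np.random.default_rng(1)
best = None
for _ in range(10):  # random restarts guard against local minima
    x0 = rng.uniform(-1, 1, size=9)
    res = minimize(sse, x0, method="Nelder-Mead",
                   options={"maxiter": 5000, "adaptive": True})
    if best is None or res.fun < best.fun:
        best = res

print(best.fun)                       # final sum of squared errors
print(np.round(forward(best.x, X)))  # ideally [0, 1, 1, 0]
```

Treating training as generic function minimization over nine parameters is exactly the trick the Nspire program uses; it scales poorly to large networks, which is why back-propagation dominates in practice.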
A comparison graph of the Nelder-Mead-trained XOR network's outputs against the expected values.
As shown in a previous installment on running Nelder-Mead optimization of the Rosenbrock function on the TI-84, its hardware is not up to the challenge for problems of this kind. The program took a considerable amount of time, on the order of minutes, where most TI-84 programs complete in seconds, which makes it an interesting case for learning techniques for writing optimized code.
The problem used in this setting is the Rosenbrock function: f(x, y) = (1 − x)² + 100(y − x²)².
Two techniques were tried to optimize the code. The first is to inline the function (in this case the Rosenbrock function). In the original version the function is stored in a string and evaluated whenever necessary with the TI-84 expr() function. The idea of inlining is borrowed from mainstream compilers to see if it helps on the TI-84: as shown in line 44 below, the Rosenbrock function replaces the original expr() call.
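The intuition behind the inlining attempt can be illustrated in Python, where `eval` on a stored string plays the role of the TI-84's `expr()`. This is only an analogy: as the timing results later show, the measured gain on the actual TI-84 turned out to be nil, whereas in Python the string version clearly pays a re-parsing cost on every call.

```python
import timeit

# The Rosenbrock function stored as a string, evaluated on demand
# (the analogue of TI-84's expr() on a stored expression)
expr_src = "(1 - x) ** 2 + 100 * (y - x * x) ** 2"

def rosenbrock_expr(x, y):
    return eval(expr_src)  # the string is re-parsed on every call

def rosenbrock_inline(x, y):
    return (1 - x) ** 2 + 100 * (y - x * x) ** 2  # inlined expression

# Both versions compute the same value...
print(rosenbrock_expr(0.5, 0.5), rosenbrock_inline(0.5, 0.5))

# ...but the eval version carries parsing overhead per call
t_expr = timeit.timeit(lambda: rosenbrock_expr(0.5, 0.5), number=10000)
t_inline = timeit.timeit(lambda: rosenbrock_inline(0.5, 0.5), number=10000)
print(t_expr > t_inline)
```

Storing the objective as a string keeps the optimizer generic (any function can be plugged in), which is the trade-off the original TI-84 program made.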
The second technique is to remove unnecessary comments. For most modern languages this would never be a concern, but on a simple calculator it is plausible that valuable processing time is spent parsing comments that contribute nothing to the algorithm.
Both programs returned the same results. To compare the speed gain, the same set of initial parameters and tolerance was used in both the original and the code-optimized program. Surprisingly, the inlining yielded no gain, while removing all those ":" characters used for indentation did improve the speed: the original program averaged 183 seconds, while the cleaned-up version took only 174 seconds, around 95% of the original run time.