# Constructing matrix in TI Nspire for Markov chain with Poisson distribution

In calculations for Markov chains, matrix setup can be complicated even with the user-friendly interface on the TI Nspire. Built-in functions like identity() and constructMat() are useful in specific cases.

For Markov chain calculations, the matrix formulation of the model sometimes gets quite complicated and demands more than the built-in functions mentioned above, which are geared towards common linear algebra.

The example below attempts to simplify the construction of the Markov model by using piece-wise functions when building the matrix. The problem to solve is a stock resupply problem, where an imaginary shop keeps a stock of 1 to 3 units and replenishes each day only when all stock is exhausted. A Poisson distribution is assumed on the demand side.

Firstly, a function called markov_prob is created, which takes the demand as a variable and returns the corresponding Poisson probability. Note how the function handles probabilities for demand greater than 3. A short-hand function mkp() is created afterwards to ease reading the matrix.

Once the function is defined, the Markov model can be formulated in a new 3 x 3 matrix created using the template. It is sometimes not easy to define the state transition function, as in this example problem, so the shorthand function just created helps in filling in the states as matrix elements. For example, P11 means the stock the next day (Sx+1) equals the stock today (Sx), which is 1; that means the demand is zero, and thus mkp(0). There are some curve balls like P12, where Sx+1 > Sx, which is impossible since it would mean the shop has more stock the next day without replenishing (i.e. negative demand), so that probability is zero.
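For readers without an Nspire at hand, the two steps above, the piece-wise probability function and the matrix fill, might look like this in Python. The mean demand LAM is an assumed value, as the post does not state the one used:

```python
from math import exp, factorial

LAM = 1.0  # assumed mean daily demand; the post does not state the value used

def mkp(d):
    """Poisson probability of demand d, with the tail P(D >= 3) lumped
    at d = 3, mirroring the piece-wise markov_prob function described."""
    if d < 3:
        return LAM**d * exp(-LAM) / factorial(d)
    return 1.0 - sum(LAM**k * exp(-LAM) / factorial(k) for k in range(3))

# Transition matrix: rows = stock today (1..3), columns = stock tomorrow.
# If demand meets or exceeds today's stock, the shop sells out and
# replenishes to 3; otherwise tomorrow's stock is today's minus demand.
P = [[0.0] * 3 for _ in range(3)]
for s in range(1, 4):
    for d in range(4):                  # demand 0, 1, 2, and the lumped d >= 3
        nxt = s - d if d < s else 3
        P[s - 1][nxt - 1] += mkp(d)
```

The hand-filled entries described above fall out of the same rule: P11 comes to mkp(0), and P12 stays zero.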

After setting up the Markov model, the equilibrium distribution can be deduced. Below is one method, using the Nelder-Mead algorithm.
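The post derives the equilibrium with the Nelder-Mead program; as a cross-check, plain power iteration also yields the equilibrium of any row-stochastic matrix. The 2x2 matrix below is only a stand-in to show the call:

```python
def stationary(P, iters=1000):
    """Equilibrium distribution of a row-stochastic matrix, found by
    repeatedly applying pi <- pi * P from a uniform starting vector."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# stand-in 2x2 chain; its equilibrium is (5/6, 1/6)
pi = stationary([[0.9, 0.1], [0.5, 0.5]])
```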

Based on this model, more figures can be derived for decision making when fine-tuning the stock-keeping strategy.

# Solving linear programming problem with Nelder-Mead method

For solving linear programming problems, the simplex method is often applied to search for a solution, while the Nelder-Mead method is mostly applied as a non-linear search technique. It would be interesting to see how well the latter does on a linear programming problem previously solved using the simplex method on the TI-84.

The Nelder-Mead method is run on the TI Nspire CX CAS with an NM program written in TI-Basic. The program accepts arguments including the name of the function to maximize (as a string) and a list of initial parameters for the Nelder-Mead algorithm. The objective function itself is declared as a piece-wise function that returns the function to maximize while applying a penalty to values that violate any constraint (the inequalities of the standard simplex formulation).
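The same penalty idea can be sketched in Python with SciPy's Nelder-Mead implementation. The LP here (maximize x + 2y subject to x + y <= 600, y <= 400, x, y >= 0, optimum at (200, 400)) is a hypothetical stand-in, not the original TI-84 problem, which the post does not restate:

```python
from scipy.optimize import minimize

def neg_objective(v):
    """Negative of the LP objective, with a steep penalty for any
    violated constraint -- the piece-wise bounding described above."""
    x, y = v
    value = x + 2 * y
    for g in (x + y - 600, y - 400, -x, -y):   # g > 0 means violated
        if g > 0:
            value -= 1e6 * g
    return -value

opts = {"maxiter": 20000, "xatol": 1e-9, "fatol": 1e-9}
res = minimize(neg_objective, [1.0, 1.0], method="Nelder-Mead", options=opts)
for _ in range(3):   # restart to polish the vertex solution
    res = minimize(neg_objective, res.x, method="Nelder-Mead", options=opts)
```

The restarts matter because Nelder-Mead can stall on the non-smooth penalty walls; reseeding the simplex lets it slide into the vertex.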

The previous TI-84 program using the simplex method obtained {200,400} as the solution. The Nelder-Mead program returned a solution very close to it.

# Data input for ANOVA in TI nspire and R

In the TI nspire CX, the Lists & Spreadsheet application provides a convenient Excel-like interface for data input.

Columns of data can also be named and then recalled from the Calculator application, where statistical functions can be applied. Using a sample from the classic TI-89 statistics guidebook on determining the interaction between two factors using two-way ANOVA, the same output is obtained on the TI nspire CX.

In R, data are usually imported from CSV files using the read.csv() command; other formats, including SPSS and Excel, are also supported. For more casual data entry where command-line input suffices, raw data are usually stored in a list variable using the c() command. Working with ANOVA on data entered this way is not as straightforward, because dimension information is required for the analysis of data stored in a list variable.

To accomplish the ANOVA, factor data types are used in conjunction with the list variable. Below is the same TI example completed in R. First we define the list variable ordered by club (c1 = driver, c2 = five iron) and then brand (b1-, b2-, b3-, with the last digit as the sample number), i.e.
{c1,b1-1}; {c1,b1-2}; {c1,b1-3}; {c1,b1-4};
{c1,b2-1}; {c1,b2-2};…
{c2,b1-1}; {c2,b1-2};…

Two factor variables are then created: one for club (twelve 1’s followed by twelve 2’s), and another for brand (1 to 3, each repeated four times for the four samples, with the whole sequence then repeated once more).
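In R these two index vectors are typically produced with gl(2, 12) and gl(3, 4, 24). A Python sketch of the same layout (24 observations: 2 clubs x 3 brands x 4 samples):

```python
# club: twelve 1's then twelve 2's
club = [1] * 12 + [2] * 12
# brand: 1..3 each repeated four times, whole pattern repeated per club
brand = ([1] * 4 + [2] * 4 + [3] * 4) * 2
```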

These two factor variables essentially represent the position (or index, in array terms) of the nth data value with respect to the factor it belongs to, and can be better visualized in the following table.

Finally, the 2-way ANOVA can be performed using the following commands.

Interaction plot in R.

# Conversion of system mode settings from Nspire to TI-89

Mode settings can be done on both the Nspire and the TI-89 using the `setMode()` call. After finding out that the parameters and values differ significantly when porting TI-BASIC programs from the Titanium to the Nspire, a chart showing the corresponding TI-89 values comes in handy. Note that the Titanium requires string-type arguments in the `setMode()` call.

Some mode setting options available in the Titanium seem to be absent in the Nspire.

# Estimating GARCH parameters from S&P500 index data in Nspire

Yes, this could be done easily even in Excel today with a modern PC, so why abuse a calculator that is probably running on 3.3V? But dare I ask, why not? Calculators today, especially top-of-the-line products like the Nspire, are almost on par in computing power with the PCs from my college days. It is fun in itself to program these little machines for calculations so complex that owners of early-generation calculators could never have dreamed of them.

One of the uses of GARCH in finance is the pricing of options. The gist of the model is that returns follow a normal distribution with volatility that depends on time. Like most models, good parameters are crucial to its success, and GARCH is no exception. To try out how well the Nspire does at parameter estimation, which requires processing arrays of numerical data, logarithms, and optimization, a list of consecutive daily S&P 500 index figures is loaded into the Spreadsheet application, and the continuously compounded returns are then calculated in the next column by the formula

`R_t = ln(S_t / S_{t-1})`

where S_t denotes the price of the underlying asset at time t.
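The same column formula, as a quick sketch (the prices are stand-ins, not the actual S&P closes):

```python
from math import log

prices = [2500.0, 2512.3, 2498.7, 2520.1]   # stand-in closes, not real data
# continuously compounded return for each consecutive pair of prices
log_returns = [log(b / a) for a, b in zip(prices, prices[1:])]
```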

There are 1000 data points in this test set, but the Nspire handles them decently in the Spreadsheet application, with help from the “Fill” function that works exactly as in Excel, so the row index is worked out automatically (t and t-1 correspond to row indices). My only complaint is that, unlike Excel, the Nspire has nothing similar to the Shift+End+Arrow shortcut that instantly jumps to the last data row with selection.

The simplest form of GARCH, GARCH(1,1), is targeted in this test for parameter estimation. The three parameters in the model, namely alpha, beta, and omega, are estimated using the popular maximum likelihood method. In this particular case, i.e. when p=1 and q=1, the GARCH model can be represented by

`σ_{t+1}² = ω + α·r_t² + β·σ_t²`

The coefficients alpha and beta measure the persistence of volatility: the closer their sum is to 1, the more persistent the volatility, while the closer to zero, the quicker the reversal to the long-term variance. A function is written in the Nspire to do this calculation, taking the log returns of the 1000 S&P data points stored in the Spreadsheet application as the variable log_return. The actual maximization of the likelihood is done via the Nelder-Mead algorithm, using the following kernel function with the constant (2π) term removed for calculation speed while remaining proportional to the log-likelihood.
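A Python sketch of that kernel follows. Seeding the variance recursion at the sample variance is an assumption on my part (the post does not say how σ₁² is initialized), and the penalty branch plays the role of the piece-wise bounding in the Nspire function:

```python
from math import log

def garch_kernel(params, returns):
    """Quasi log-likelihood kernel for GARCH(1,1), constants dropped:
    sum over t of -(ln(sigma_t^2) + r_t^2 / sigma_t^2).  Larger is better."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return -1e18                     # penalty for an invalid model
    # assumed seed: start the recursion at the sample variance
    var = sum(r * r for r in returns) / len(returns)
    total = 0.0
    for r in returns:
        total += -(log(var) + r * r / var)
        var = omega + alpha * r * r + beta * var   # sigma_{t+1}^2 recursion
    return total

rets = [0.010, -0.020, 0.015, -0.005, 0.008]   # stand-in log returns
score = garch_kernel((1e-5, 0.1, 0.85), rets)
```

Maximizing this over (omega, alpha, beta), e.g. with Nelder-Mead, gives the parameter estimates.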

`Σ ( -ln(σ_t²) - r_t²/σ_t² ), t = 1..n`

It took the real Nspire more than two hours to finish the computation. I think I am going back to Excel 😉 Nevertheless, redoing all this from scratch on the Nspire is a good refresher on how these algorithms, models and formulae work.

# Coordinate conversion from HK80 grid to Google Map and vice versa using Nspire

At work there is a need to deal with data from a GIS system that uses its own local coordinate system, different from the usual one used by GPS and Google Maps. Transformation between these two coordinate systems is possible and is well documented in this article (link). A practical application with Google Maps and public CCTV images also requires such conversion.

For casual usage, a handy transformation function is coded on the Nspire to convert XY values between the two systems. Since the calculation is fairly simple, a built-in function script is enough for the task, so there is no need to summon the power of Lua.

A web tool from the local authority was used to verify the results. Due to rounding or differences in calculation methods, a small discrepancy is observed, but it is well within tolerable margins.

Since we already have X,Y, why not utilize the nice color screen of the Nspire by adding a map with the result plotted on it? The Google Maps API is not feasible as the Nspire is not geared for the Internet, but if we can settle for something simpler, forgoing zoom and satellite view, a simple map in the background does the trick as well. After all, full-blown map software is not something this calculator excels at. Still, plotting the approximate location helps shed some light on the results from arguably their most used perspective – on the map.

# Comparing bug prediction methods by logistic growth and Gompertz curve in Nspire

Analysis can be performed on a sample set of data with cumulative bug counts collected over 12 days to obtain parameters to fit models for future prediction. Columns A and B are the data, with the standard Nspire logistic regression function executed on columns C and D to obtain the parameters a, b, c. Column E holds the values of the logistic function, but not the one built into the Nspire; instead, the parameters are obtained separately using the Nelder-Mead program from the previous post.

There are other models besides logistic regression for prediction, one being a sigmoid function called the Gompertz function, which is applied to the same data set to obtain parameters for comparison with the more common logistic function. Since the parameters are obtained in a similar fashion to the logistic function, i.e. by minimizing the sum of squared errors, the Nelder-Mead program can be reused. After obtaining the parameters, the function values on the data set are calculated and shown in Column F.

The application of the Nelder-Mead program to obtain the parameters of the logistic regression is shown below. First the logi function is declared, then the sum of squared errors is declared in the numfunc_logi function, which in turn is passed to the nm function to obtain the minimum by the Nelder-Mead algorithm. As shown below, the results are exactly the same as those from the Nspire built-in logistic regression function (a=64.003, b=9.0317, c=0.33644, albeit the Nspire formula names a, b, c differently).
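The same pipeline can be sketched in Python: declare the logistic model, declare the sum of squared errors, and hand it to Nelder-Mead. The data here are synthetic points generated from known parameters close to the post's results so the fit can be checked; they are not the post's bug counts:

```python
from math import exp
from scipy.optimize import minimize

def logi(x, a, b, c):
    """Logistic curve in the a / (1 + b*e^(-c*x)) form."""
    return a / (1 + b * exp(-c * x))

# synthetic 12-day data from known parameters (not the post's data set)
days = list(range(1, 13))
bugs = [logi(t, 64.0, 9.0, 0.34) for t in days]

def sse(p):
    """Sum of squared errors -- the numfunc_logi role in the post."""
    a, b, c = p
    return sum((y - logi(t, a, b, c)) ** 2 for t, y in zip(days, bugs))

res = minimize(sse, [60.0, 8.0, 0.4], method="Nelder-Mead",
               options={"xatol": 1e-12, "fatol": 1e-12, "maxiter": 20000})
```

On this noiseless data the minimizer recovers the generating parameters.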

The application of the Nelder-Mead program to obtain the parameters of the Gompertz function is similar.

The bug counts and the fitted values for both functions are plotted in the graph below alongside the logistic regression curve. It is hard to tell which of the two functions is better.

It turns out some guesses are better than others. As the calculation of the Ru value below shows, the Gompertz function provides a slightly better fit in this bug prediction case. To calculate it, obtain the one-variable stats from the bug data (only the sum of squared deviations, stat.SSX, is needed), and then plug in the other values accordingly. Similar to the R coefficient in regression analysis, the larger the value, the better the prediction. In this case, 0.9248 from Gompertz outperformed 0.9107 from logistic.
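Reading the Ru computation as the usual coefficient of determination, 1 - SSE/SST (my interpretation of the steps above), the calculation is a one-liner:

```python
def r_u(y, yhat):
    """1 - SSE/SST; SST is the sum of squared deviations,
    i.e. stat.SSX from the Nspire one-var stats."""
    mean = sum(y) / len(y)
    sst = sum((v - mean) ** 2 for v in y)
    sse = sum((v - h) ** 2 for v, h in zip(y, yhat))
    return 1 - sse / sst
```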
Eduguesstimate is what I’d call this conclusion 😉

# Usage of the Nelder-Mead program for Logistic Regression in Nspire

The Nelder-Mead program from the previous post is handy for solving a couple of problems, one being finding the parameters for logistic regression. The TI Nspire comes with a logistic regression function, but I can’t seem to find a way to have it solve equations with more than one categorical dependent variable. Since finding the parameters for the logistic curve requires maximum likelihood, the Nelder-Mead program can help.

A sample problem goes like this: given the daily variables (i) temperature and (ii) whether the day is a Wednesday, Saturday, or Sunday, the sale of a particular product is recorded. Note that “sale of a particular product” refers to a binary value: either it is sold that day or not. The number of sales does not matter.

Sample data:

The calculations:
The first equation, l, defines part of the likelihood function and serves as a shorthand so the resulting function is less clumsy. It is defined as a piece-wise function taking the day-of-week and temperature variables with their coefficients, as well as the sold-or-not value, which governs which branch is returned (p or 1-p). The second equation defines the maximum likelihood function (the next one is an alternative). Set an initial guess into p, plug it into the nm function, and the result is obtained in resultsmatrix. The logistic function is thus y=1/(1+e^-(2.44*x1 + 0.54*x2 – 15.20)) and is ready for predicting values.
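The same maximum-likelihood step can be sketched in Python with SciPy's Nelder-Mead. The data set here is hypothetical (temperature, special-day flag, sold-or-not triples invented for illustration), not the post's sample data:

```python
import math
from scipy.optimize import minimize

def nll(beta, X, y):
    """Negative log-likelihood of a logistic model; minimizing it is
    the maximum-likelihood step the post performs with the nm program."""
    total = 0.0
    for xi, yi in zip(X, y):
        z = beta[0] + sum(b * v for b, v in zip(beta[1:], xi))
        p = 1.0 / (1.0 + math.exp(-z))
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard the logs
        total -= math.log(p) if yi else math.log(1.0 - p)
    return total

# hypothetical (temperature, special-day flag) -> sold data
X = [(20, 0), (25, 1), (30, 1), (18, 0), (28, 0), (32, 1), (22, 1), (26, 0)]
y = [0, 1, 1, 0, 1, 1, 0, 0]
res = minimize(nll, [0.0, 0.0, 0.0], args=(X, y), method="Nelder-Mead",
               options={"maxiter": 5000})
```

The fitted vector res.x plays the role of the coefficients in the logistic function above.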

# Implementing the Nelder-Mead method on the TI Nspire

Implementing the Nelder-Mead method on the Nspire is pretty straightforward. This came after nothing turned up in a search for such a library on the Internet. A sample below uses the Rosenbrock function as a test target: f(x,y) = (a-x)^2 + b(y-x^2)^2, with a=1 and b=100.

p is the initial parameter set with {10,10}.

nm(“nmfunc”, “p”, p) invokes the Nelder-Mead program, specifying “nmfunc” as the name of the objective function (defined earlier), “p” as the name of the parameter set, and p as the actual parameters.

Results are stored in the resultsmatrix set. The 1st element is the function value; the 2nd and 3rd are the x and y results respectively.
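For comparison, the same test via SciPy's Nelder-Mead implementation, with the post's a=1, b=100 and start (10, 10):

```python
from scipy.optimize import minimize

def rosenbrock(v, a=1.0, b=100.0):
    """f(x, y) = (a - x)^2 + b*(y - x^2)^2, minimum at (a, a^2)."""
    x, y = v
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

res = minimize(rosenbrock, [10.0, 10.0], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 20000})
```

res.x corresponds to the 2nd and 3rd elements of resultsmatrix, and res.fun to the 1st.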

It took 22 seconds on a real Nspire for this test set.

This program also helped determine the parameters for a logistic regression problem on another occasion, where the built-in Nspire logistic regression function could not be used since it supports only an x-list and a y-list while the problem involved 3 variables.