# LU decomposition in TI-84

As an extension to a previous entry on LU decomposition in the Nspire and R, the TI-84 is covered here. Unlike the Nspire, the TI-84 has no built-in function for this, but many programs are available online, most of them employing a simple Doolittle algorithm without pivoting.

The non-pivoting program described here for the TI-84 series comes with a twist: no separate L and U matrix variables are used. The calculations are done in place on the original input matrix A, and the final L and U are both stored in that same matrix. This is possible because the L and U factors in this decomposition are triangular, so they can share one square matrix (the unit diagonal of L is implied and need not be stored). At the small price of some mental interpretation of the output, the program takes up less memory and runs a little faster than most simple LU decomposition programs available online for this class of calculator. In a simple benchmark with a 5 x 5 matrix, this program took 2 seconds while another standard program took 2.7 seconds.
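The in-place update at the heart of the program can be sketched in Python. This is an illustrative translation of the idea, not the TI-84 program itself:

```python
def lu_inplace(a):
    """Doolittle LU decomposition without pivoting, done in place.

    On return, the upper triangle of `a` (including the diagonal) holds U,
    and the strict lower triangle holds the multipliers of L, whose unit
    diagonal is implied and not stored -- mirroring the TI-84 scheme.
    """
    n = len(a)
    for k in range(n):
        for i in range(k + 1, n):
            a[i][k] /= a[k][k]                # L multiplier, stored in place
            for j in range(k + 1, n):
                a[i][j] -= a[i][k] * a[k][j]  # update the trailing submatrix
    return a

a = [[4.0, 3.0, 2.0],
     [8.0, 7.0, 3.0],
     [4.0, 5.0, 5.0]]
lu_inplace(a)

# "Mental interpretation" of the result: read L and U out of the one matrix.
n = len(a)
L = [[1.0 if i == j else (a[i][j] if i > j else 0.0) for j in range(n)]
     for i in range(n)]
U = [[a[i][j] if i <= j else 0.0 for j in range(n)] for i in range(n)]
```

Multiplying the extracted L and U back together reproduces the original matrix, which is the same verification step shown on the calculator screen.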

The input matrix A.

Results are stored in the same matrix.

L, U, and verification.

# Fast Fourier transform in TI Nspire

The FFT is not available as a built-in function in the TI Nspire, but it is trivial to write a program for this calculation. Instead of the usual TI Basic, Lua scripting is attempted this time. Unlike with TI Basic programs, variables are not shared directly, and things get complicated when working with lists and matrices. However, the Nspire Lua scripting environment provides utility functions that make it possible to exchange data with the Calculator page. The example below shows the FFT results from the Lua script in the Table page.
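The recursion such a script implements is the standard radix-2 Cooley-Tukey algorithm, sketched here in Python for reference (an illustrative sketch, not the Nspire Lua code itself):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])           # transform of even-indexed samples
    odd = fft(x[1::2])            # transform of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

spectrum = fft([1, 2, 3, 4])
```

For the input [1, 2, 3, 4] the first bin is the sum of the samples (10), which is a quick sanity check when comparing against the Table page output.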

The HP Prime, by contrast, provides a built-in function for the FFT.

# Optimizing optimization code in TI-84 program

As shown in a previous installment on running the Nelder-Mead optimization of the Rosenbrock function on the TI-84, its hardware is not up to the challenge for problems of this kind. It is nevertheless an interesting case to learn from: a program that takes minutes, when most TI-84 programs complete in seconds, is a good subject for practicing code optimization techniques.

The problem used in this setting is the Rosenbrock function, f(x, y) = (1 - x)² + 100(y - x²)², whose global minimum of 0 lies at (1, 1).
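For reference, the function and a Nelder-Mead run can be sketched in Python with SciPy. This is a desktop stand-in for the TI-84 program, not the program itself, and the classic starting point (-1.2, 1) is an assumption:

```python
from scipy.optimize import minimize

def rosenbrock(p):
    """Rosenbrock banana function; minimum of 0 at (1, 1)."""
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

# Nelder-Mead from the classic starting point (-1.2, 1).
res = minimize(rosenbrock, [-1.2, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
```

On a desktop this converges to (1, 1) almost instantly, which puts the TI-84's minutes-long run time in perspective.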

Two techniques were used to try to optimize the code. The first is to inline the function (in this case the Rosenbrock function). In the original version, the function is stored in a string variable and evaluated whenever necessary with the TI-84 expr() function. The idea of inlining is borrowed from mainstream compilers to see if it works on the TI-84. As shown in line 44 below, the Rosenbrock expression replaces the original expr() call.

Another technique is to remove unnecessary comments. For most modern languages this is never a concern, but on simple calculators it is plausible that valuable processing time is spent parsing comments that contribute nothing to the algorithm.

Both programs returned the same results. To compare the gain in speed, the same set of initial parameters and tolerance was used in both the original and the code-optimized program. Surprisingly, the inlining method did not yield any gain, while removing all those ":" used for indentation did improve the speed. The original program averaged 183 seconds, while the cleaned-up program took only 174 seconds, around 95% of the original run time.

# Coefficient of determination for Multiple linear regression in TI-84 Plus

After determining the parameters of a multiple linear regression on the TI-84 (which has no direct built-in support for this calculation), the coefficient of determination can also be easily calculated using the rich set of list functions the TI-84 supports. Following the previous example, the dependent variable is in the Sales list, and the two independent variables are in the Size and Dist lists.

The Yhat list is prepared first. This list stores the predicted values computed from the regression parameters determined in the previous installment.

Next, the means of Y and Yhat are calculated and stored in a handy list S.

Then three lists, SYY, SYhYh, and SYYh, are calculated.

The result is obtained by the formula below.
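The same list arithmetic can be sketched in Python, using the identity R² = corr(y, ŷ)², which is what the three lists support. The data below is hypothetical; the actual Sales, Size, and Dist values from the previous installment are not reproduced here:

```python
import numpy as np

# Hypothetical data standing in for the Sales (y), Size, and Dist lists.
size = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
dist = [5.0, 4.0, 6.0, 3.0, 2.0, 1.0]
sales = [10.2, 12.1, 13.9, 16.3, 18.0, 20.1]

# Fit b0, b1, b2 by least squares (the regression parameters), then Yhat.
X = np.column_stack([np.ones(len(sales)), size, dist])
b, *_ = np.linalg.lstsq(X, np.array(sales), rcond=None)
yhat = X @ b

# The three lists of the TI-84 approach, summed.
ybar = sum(sales) / len(sales)
yhbar = yhat.mean()
syy = sum((y - ybar) ** 2 for y in sales)        # SYY terms
syhyh = sum((p - yhbar) ** 2 for p in yhat)      # SYhYh terms
syyh = sum((y - ybar) * (p - yhbar)
           for y, p in zip(sales, yhat))         # SYYh terms

r2 = syyh ** 2 / (syy * syhyh)                   # coefficient of determination
```

For least squares with an intercept, this squared correlation equals the usual 1 - SSE/SST definition of R², so either route gives the same number.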

# Graphical visualization of data distribution in TI-84 and R

For visualizing a data distribution, the TI-84 Stat Plot can provide some insight. Using the same data set as in the previous installment on the Shapiro-Wilk test, it is a quick and convenient tool.

In R, the command qqnorm() will show the following plot for the same data.
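The idea behind qqnorm() is easy to replicate: plot the sorted sample against normal quantiles evaluated at plotting positions such as (i - 0.375)/(n + 0.25). A minimal Python sketch with illustrative data:

```python
from statistics import NormalDist

sample = sorted([4.1, 5.2, 4.8, 5.9, 5.0, 4.5, 5.5, 4.9, 5.1, 4.7])
n = len(sample)

# Theoretical normal quantiles at the Blom plotting positions.
theoretical = [NormalDist().inv_cdf((i - 0.375) / (n + 0.25))
               for i in range(1, n + 1)]

# For roughly normal data the points (theoretical[i], sample[i]) fall
# near a straight line; the correlation quantifies that straightness.
mx = sum(theoretical) / n
my = sum(sample) / n
sxy = sum((t - s_m) * 0 for t, s_m in [])  # placeholder removed below
sxy = sum((t - mx) * (s - my) for t, s in zip(theoretical, sample))
sxx = sum((t - mx) ** 2 for t in theoretical)
syy = sum((s - my) ** 2 for s in sample)
r = sxy / (sxx * syy) ** 0.5
```

A correlation near 1 indicates the points hug the line, which is exactly what the Stat Plot and qqnorm() views let you judge by eye.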

# Performing Shapiro-Wilk test on TI-84

On the TI-84, although the Shapiro-Wilk test is not available as a built-in function, it is possible to calculate the W statistic using the calculator's rich set of list and statistics functions. It is not trivial, but doable.

First, set up a list of sample values. Using the example from the original Shapiro-Wilk paper, the samples are entered into a list called W, while the variable N stores the sample size and U stores the inverse of its square root. The data in W must be sorted in ascending order.

Next, generate the M list based on W. This list holds inverse normal distribution values computed from each index and the dimension of the W list. Doing this on the TI-84 is easy with the help of the seq() function. The complete expression is:
`seq(invNorm( (I-0.375) / (N+0.25), 0, 1), I, 1, N, 1)`

Now that list M is ready, derive its sum of squares and store it in the variable m (not to be confused with the list M).

Since an approximation algorithm is used instead of the lookup table proposed in the original paper, the next step is to prepare list A. This list is arranged so that the first two and last two elements are calculated differently from all the others. The last two values (indices N-1 and N) of list A are prepared with the Y5 and Y6 equations, which are approximating equations. The first two elements are the negations of these two, in reverse order.

One more variable, ε, has to be calculated at this point before we generate List A.
`(m - 2*M(N)² - 2*M(N-1)²) / (1 - 2*Y6² - 2*Y5²)`

Next, generate list A. As explained, four elements, namely the first, second, (N-1)-th, and N-th, are calculated differently from all the others. The screen below covers all except those four special elements.

The remaining steps for the four elements of this List A are then carried out as below.

Finally, all three lists are ready for calculation.

To calculate the W statistic, use the TI-84's linear regression function on W and A as below. The squared correlation coefficient r² is the W statistic, as verified in R. See the next installment for the Z statistic and P value calculations from this W statistic, also on the TI-84.
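The whole procedure can be sketched in Python, following the same steps. The Y5/Y6 approximating equations are taken here to be Royston's (1992) polynomial approximations in U = 1/√N, which is what such approximations usually are; treat those coefficients as an assumption, not a transcription of the calculator's equations:

```python
from statistics import NormalDist

def shapiro_w(x):
    """W statistic via the approximation steps described above (n > 5)."""
    x = sorted(x)                                   # list W must be sorted
    n = len(x)
    u = n ** -0.5                                   # the U variable

    # List M: inverse normal at the Blom plotting positions.
    m = [NormalDist().inv_cdf((i - 0.375) / (n + 0.25))
         for i in range(1, n + 1)]
    mm = sum(v * v for v in m)                      # the scalar m

    # Assumed Y5/Y6: Royston's polynomial approximations for the last
    # two coefficients of list A.
    c_n = m[-1] / mm ** 0.5
    c_n1 = m[-2] / mm ** 0.5
    a_n = (c_n + 0.221157 * u - 0.147981 * u**2 - 2.071190 * u**3
           + 4.434685 * u**4 - 2.706056 * u**5)
    a_n1 = (c_n1 + 0.042981 * u - 0.293762 * u**2 - 1.752461 * u**3
            + 5.682633 * u**4 - 3.582633 * u**5)

    # The epsilon variable, then list A: middle elements scaled by 1/sqrt(eps),
    # with the four special elements patched in (first two are negations).
    eps = (mm - 2 * m[-1]**2 - 2 * m[-2]**2) / (1 - 2 * a_n**2 - 2 * a_n1**2)
    a = [v / eps ** 0.5 for v in m]
    a[-1], a[-2] = a_n, a_n1
    a[0], a[1] = -a_n, -a_n1

    # W is the squared correlation between the sorted data and list A.
    xbar = sum(x) / n
    w = (sum(ai * xi for ai, xi in zip(a, x)) ** 2
         / sum((xi - xbar) ** 2 for xi in x))
    return w, a

w, a = shapiro_w([148, 154, 158, 160, 161, 162, 166, 170, 182, 195, 236])
```

By construction the A coefficients have unit sum of squares, and W always lands in (0, 1], which makes for a quick sanity check of a calculator run.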

# Extracting Black-Scholes implied volatility in Nspire

The Black-Scholes model is an important pricing model for options. In its formula for European call option, the following parameters are required to create a function in the TI Nspire CX:

• s – spot price
• k – strike price
• r – annual risk free interest rate
• q – dividend yield
• t – time to maturity
• v – volatility

Now that the standard Black-Scholes formula is ready, a common method to derive the implied volatility numerically is to determine a value such that the squared loss between the observed price and the calculated Black-Scholes price is zero. To derive this volatility, a root-finding method like Newton-Raphson can easily be implemented on advanced calculators like the TI Nspire. In fact, there is no need to code this feature at all: the built-in zeros() function of the CX CAS can be used. This function solves for the selected variable so that the input expression is zero, which is exactly what is needed here. The function implied_vola() below codes the squared loss, which is then passed to the CAS built-in zeros() function. Note that a "Questionable accuracy" warning is reported for the zeros() call.

Implementing the Newton-Raphson method is simple enough with the Nspire Program Editor. The program bs_call_vnewtrap() below takes a list of Black-Scholes parameters (the s, k, r, q, t, v above), an initial guess for the implied volatility, and the observed call price from the standard Black-Scholes formula. The last parameter is a precision control (a value of zero defaults to a preset value). The last two calls to this function shown below depict the effect of the precision control in this custom root-finding program.
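The same Newton-Raphson iteration can be sketched in Python, self-contained. This is an illustrative re-implementation, not the bs_call_vnewtrap() program, and it uses the closed-form vega as the derivative:

```python
from math import log, sqrt, exp, erf, pi

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bs_call(s, k, r, q, t, v):
    """European call price under Black-Scholes with dividend yield q."""
    d1 = (log(s / k) + (r - q + 0.5 * v * v) * t) / (v * sqrt(t))
    d2 = d1 - v * sqrt(t)
    return s * exp(-q * t) * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

def implied_vol(price, s, k, r, q, t, v0=0.3, tol=1e-8, max_iter=100):
    """Newton-Raphson on f(v) = bs_call(v) - price, derivative = vega."""
    v = v0
    for _ in range(max_iter):
        d1 = (log(s / k) + (r - q + 0.5 * v * v) * t) / (v * sqrt(t))
        vega = s * exp(-q * t) * norm_pdf(d1) * sqrt(t)
        diff = bs_call(s, k, r, q, t, v) - price
        if abs(diff) < tol:
            return v
        v -= diff / vega          # Newton step
    return v

vol = implied_vol(10.4506, 100, 100, 0.05, 0.0, 1.0)
```

Note the sketch drives the raw price difference to zero rather than the squared loss: the derivative of a squared loss vanishes at the root, which makes Newton's method ill-behaved there.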

# Overclocking Casio fx-9860gII to the max with portable power bank

As discovered previously, the overclocking program ftune performed better with a USB cable plugged in, but it was unclear back then whether the data link or the extra power supply contributed to the performance boost. It is now confirmed that the power supply alone will do the trick.

The test is carried out using a Casio Basic program that searches for the parameters of a logistic regression equation using the Nelder-Mead algorithm. A portable power bank is plugged into the USB port of the Casio fx-9860gII running ftune.

The tested portable USB power bank has a 4000 mAh capacity and 1 A output, and is fully charged. There are four blue LED indicator lights.

The power bank is turned on.

The test result shows that the program run with the power bank finished in 46 seconds, while the one without finished in 93 seconds, roughly a 2x speedup.

# Real estate refinancing – example from HP 12C to TI Nspire

From the “HP 12C Platinum Solutions Handbook”, an example is given (on page 7) on calculating the refinancing of an existing mortgage. Since the HP 12C is a special breed specializing in financial calculations, many of the steps are optimized and differ from using the financial functions available on higher-end calculators like the TI Nspire. In the following rework of the same example, the Finance Solver is called from within the Calculator page and the Vars are recalled for calculations.

Calculation of the monthly payment on the existing mortgage received by the lender.

Calculation of the monthly payment on the new mortgage.

Calculation of the net monthly payment to the lender, and the present value of the net monthly payments.
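The payment steps above all reduce to the ordinary-annuity time-value-of-money formula. A Python sketch with hypothetical figures (the handbook's actual numbers are not reproduced here):

```python
def monthly_payment(pv, annual_rate, years):
    """Ordinary-annuity payment: PMT = PV * i / (1 - (1 + i)**-n)."""
    i = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return pv * i / (1 - (1 + i) ** -n)

# Hypothetical existing mortgage: $100,000 at 6% annual over 30 years.
pmt = monthly_payment(100_000, 0.06, 30)
```

For these hypothetical figures the payment works out to about $599.55 per month, the classic 30-year/6% textbook value, which matches what any TVM solver (12C or Nspire) returns for the same inputs.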

# How old are they?

In a local newspaper yesterday, this question was reported to be controversial for a primary school examination because it is too difficult. It goes like this: there are two people, and the older one says:

“When I was at your age, you were only 5.
When you become as old as I am now, I will become 71.”
The question is, how old are these two people?

It turns out, according to the newspaper, that the question is not aimed at simultaneous equations at all, but at solving the problem graphically by dividing line segments in proportion.

To solve it in an overkill fashion with the Nspire:

And this is also a refresher on how to solve simultaneous equations on the TI-84 series using the rref() function (reduced row echelon form). Define the equations in matrix form (as [A] below), then run rref() on the matrix. The resulting matrix contains the solution to the problem.
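Reading the riddle as equations: with older age x and younger age y, the age gap is x - y, so "you were only 5" gives y - (x - y) = 5 and "I will become 71" gives x + (x - y) = 71, i.e. -x + 2y = 5 and 2x - y = 71. The rref approach can be sketched in Python (an illustrative Gauss-Jordan routine mirroring the TI-84 [A] matrix):

```python
from fractions import Fraction

def rref(m):
    """Reduced row echelon form by Gauss-Jordan elimination."""
    m = [[Fraction(v) for v in row] for row in m]
    rows, cols = len(m), len(m[0])
    r = 0
    for lead in range(cols):
        if r >= rows:
            break
        pivot = next((i for i in range(r, rows) if m[i][lead] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]   # swap pivot row up
        m[r] = [v / m[r][lead] for v in m[r]]
        for i in range(rows):
            if i != r and m[i][lead] != 0:
                f = m[i][lead]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

# [A] = augmented matrix for -x + 2y = 5 and 2x - y = 71.
solution = rref([[-1, 2, 5], [2, -1, 71]])
older, younger = solution[0][2], solution[1][2]
```

The last column of the reduced matrix reads off the ages: the older is 49 and the younger 27, consistent with a 22-year gap (27 - 22 = 5 and 49 + 22 = 71).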

And just in case Excel is the only tool available, no worries: its Solver has you covered (!).