Tag Archives: calculator

Training neural network using Nelder-Mead algorithm on TI Nspire

In this installment the Nelder-Mead method is used to train a simple neural network for the XOR problem. The network has 2 inputs, 1 output, and 2 hidden layers, and is fully connected. In mainstream practice, back propagation and evolutionary algorithms are much more popular for training neural networks on real-world problems. Nelder-Mead is used here just out of curiosity, to see how this general optimization routine performs in a neural network setting on the TI Nspire.
neural-network-xor1

The sigmoid function is declared as a TI Nspire function.
neural-network-xor2

For the XOR problem, the inputs are defined as two lists, and the expected outputs as another.
neural-network-xor3

The activation functions for each neuron are declared.
neural-network-xor4

To train the network, the sum of squared errors is fed into the Nelder-Mead algorithm for minimization. Random numbers are used as the initial parameters.
neural-network-xor6
neural-network-xor7
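
For readers without the handheld, the same idea can be sketched in Python with scipy's Nelder-Mead implementation standing in for the Nspire program. The two-hidden-layer topology, the parameter layout, and the stopping options below are illustrative assumptions, not a transcription of the calculator code.

import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR training data: the two input lists and the expected outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(params, x):
    # Unpack a flat vector into two 2-neuron hidden layers plus one
    # output neuron (weights and biases), 15 parameters in total
    W1 = params[0:4].reshape(2, 2); b1 = params[4:6]
    W2 = params[6:10].reshape(2, 2); b2 = params[10:12]
    w3 = params[12:14];             b3 = params[14]
    h1 = sigmoid(x @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    return sigmoid(h2 @ w3 + b3)

def sse(params):
    # Sum of squared errors over the four XOR patterns
    return np.sum((forward(params, X) - y) ** 2)

# Random initial parameters; Nelder-Mead can stall in a local minimum,
# so a different seed (or a few restarts) may be needed
rng = np.random.default_rng(0)
res = minimize(sse, rng.uniform(-1, 1, 15), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-8})
print(res.fun)                          # final sum of squared errors
print(np.round(forward(res.x, X), 3))   # compare against [0, 1, 1, 0]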

Finally, the resulting weights and biases are obtained by running the Nelder-Mead program.
neural-network-xor8

A comparison graph of the output of the Nelder-Mead trained XOR neural network against the expected values is shown below.
neural-network-xor9

 

P value of Shapiro-Wilk test on TI-84

In a previous installment, the Shapiro-Wilk test was performed step by step on the TI-84.

To calculate the p-value from the W statistic obtained, the following steps can also be done on the TI-84 using standard statistics functions. Note that the approximation below is for the case 4 ≤ n ≤ 11.

The mean and standard deviation are derived using the equations below.
shapiro84-pvalue1

The W statistic is assigned from the correlation calculation results, and another variable V is calculated for the transformed W.
shapiro84-pvalue2

Finally, the standardized Z statistic and the p-value are calculated using the mean, standard deviation, and the transformed W value.
shapiro84-pvalue3 shapiro84-pvalue4
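
For reference, the same p-value calculation can be sketched in Python. The polynomial coefficients below are from the widely used small-sample (4 ≤ n ≤ 11) approximation attributed to Royston; treat the exact constants as an assumption rather than a readout of the TI-84 screens, and the W and n in the example call are placeholders.

import math
from scipy.stats import norm

def shapiro_wilk_pvalue(w, n):
    if not 4 <= n <= 11:
        raise ValueError("this approximation covers 4 <= n <= 11 only")
    gamma = -2.273 + 0.459 * n
    mu    = 0.5440 - 0.39978 * n + 0.025054 * n**2 - 0.0006714 * n**3
    sigma = math.exp(1.3822 - 0.77857 * n + 0.062767 * n**2 - 0.0020322 * n**3)
    g = -math.log(gamma - math.log(1.0 - w))   # transformed W
    z = (g - mu) / sigma                       # standardized Z statistic
    return 1.0 - norm.cdf(z)                   # upper-tail p-value

print(shapiro_wilk_pvalue(0.95, 8))            # placeholder W and n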

Overclocking Casio fx-9860gII to the max with portable power bank

As discovered previously, the overclocking program ftune performed better when a USB cable was plugged in, but it was not clear back then whether the data link or the extra power supply contributed to the performance boost. It is now confirmed that the power supply alone will do the trick.

The test is carried out using a Casio Basic program that runs a parameter search for a logistic regression equation using the Nelder-Mead algorithm. A portable power bank is plugged into the USB port of the Casio fx-9860gII running ftune.

The tested portable USB power bank comes with 4000mAh capacity and 1A output, and is fully charged. There are 4 blue LED indicator lights.
casiousbpoweroverclock1

The power bank is turned on.
casiousbpoweroverclock2

The test results showed that the program run with the power bank finished in 46 seconds, while the run without it finished in 93 seconds.

Real estate refinancing – example from HP 12C to TI Nspire

The “HP 12C Platinum Solutions Handbook” gives an example on the calculation of refinancing an existing mortgage (on page 7). Since the HP 12C is a special breed specializing in financial calculations, many of the steps are optimized and differ from using the financial functions available on other higher-end calculators like the TI Nspire. In the following re-work of the same example, the Finance Solver is called from within the Calculator page and the Vars are recalled for calculations.

refinance0

Calculation of the monthly payment received by the lender on the existing mortgage.
refinance1

Calculation of the monthly payment on the new mortgage.
refinance2

Calculation of the net monthly payment to the lender, and the present value of the net monthly payments.
refinance3
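
The same sequence of TVM steps can be sketched in Python. The loan amount, rates, and terms below are hypothetical placeholders, not the figures from the handbook example; the point is only to show the payment and present-value formulas behind the Finance Solver entries.

def pmt(pv, annual_rate, years):
    # Monthly payment of a fully amortizing loan (end-of-period payments)
    i = annual_rate / 12.0
    n = years * 12
    return pv * i / (1.0 - (1.0 + i) ** -n)

def pv_of_payments(payment, annual_rate, years):
    # Present value of a level monthly payment stream
    i = annual_rate / 12.0
    n = years * 12
    return payment * (1.0 - (1.0 + i) ** -n) / i

old_pmt = pmt(100_000, 0.09, 25)          # payment on the existing mortgage
new_pmt = pmt(100_000, 0.06, 25)          # payment on the new mortgage
net_pmt = old_pmt - new_pmt               # net monthly payment
print(old_pmt, new_pmt, net_pmt)
print(pv_of_payments(net_pmt, 0.06, 25))  # present value of the net payments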

Bonferroni procedure

The Bonferroni procedure can be used in multiple comparisons to determine which means differ. Using the sample data set below with three groups of equal size, the ANOVA 2-way function is calculated and the results stored in stat1. For convenience, the standard ANOVA is also calculated and the results stored in stat6.

bonferroni1

The original significance level is set to 0.05. To obtain the corrected value, 0.05 is divided by the number of groups, and then by 2 for a 2-tailed test. The new critical t-value is then determined. The mean for each group is available in the stat6 variable set (from ANOVA), while the pooled standard deviation s can be obtained from the stat1 variable set (from ANOVA 2-way).

bonferroni2

Then, for each combination of groups, calculate its t-value, i.e. GA-GC, GB-GC, and GA-GB respectively.

bonferroni3

As shown above, the t-value for the GA-GB combination is less than the critical value of 2.7178, meaning we fail to reject H0 and can therefore conclude that μA = μB.
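
A Python sketch of the whole procedure is shown below; the three groups of sample values are placeholders standing in for the data behind stat1 and stat6, so the numbers will not match the screenshots.

import math
from itertools import combinations
from scipy import stats

groups = {
    "GA": [3.1, 2.9, 3.4, 3.0, 3.2],
    "GB": [3.3, 3.1, 3.5, 3.2, 3.4],
    "GC": [4.0, 4.2, 3.9, 4.1, 4.3],
}
k = len(groups)
n_total = sum(len(v) for v in groups.values())
df_error = n_total - k

# Pooled standard deviation from the within-group (error) sum of squares
ss_error = sum(sum((x - sum(v) / len(v)) ** 2 for x in v) for v in groups.values())
s_pooled = math.sqrt(ss_error / df_error)

# Bonferroni correction: split alpha across the pairwise comparisons,
# then halve it for a two-tailed test, and find the critical t
alpha = 0.05
m = k * (k - 1) // 2
t_crit = stats.t.ppf(1 - alpha / (2 * m), df_error)

for (na, va), (nb, vb) in combinations(groups.items(), 2):
    mean_a, mean_b = sum(va) / len(va), sum(vb) / len(vb)
    t = abs(mean_a - mean_b) / (s_pooled * math.sqrt(1 / len(va) + 1 / len(vb)))
    verdict = "differ" if t > t_crit else "do not differ"
    print(f"{na}-{nb}: t = {t:.3f} vs critical {t_crit:.4f} -> means {verdict}")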

Working with Mahalanobis distance in TI Nspire CX

The Mahalanobis distance is an important measure in statistical analysis. It differs in thinking from the common Euclidean distance in that it accounts for the spread (standard deviation) along each dimension of the data. In the TI Nspire, there is no built-in function for the Mahalanobis distance. However, it can easily be calculated using the available matrix operations.

mahalanobis1

Independent variables x1, x2 and dependent variable y are used. First, the covariance matrix is obtained either by the first inverse matrix equation above, or by the next one, where d is defined row-wise as x1, x2, and a last row of 1’s.

mahalanobis2

Once the covariance matrix is determined, the Mahalanobis distance for x1, x2 can be computed by the above equation, which is a summation of distances times the number of observations minus one. The use of a sum function on the matrix is just for convenience of input and display, as the written-out summation can be very long.
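
For comparison, the same calculation can be sketched with numpy matrix operations; the x1 and x2 values below are placeholders, not the data in the screenshots.

import numpy as np

x1 = np.array([2.0, 4.0, 4.0, 6.0, 8.0])
x2 = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
X = np.column_stack([x1, x2])      # n x 2 data matrix

mean = X.mean(axis=0)
cov = np.cov(X, rowvar=False)      # 2 x 2 sample covariance (divides by n - 1)
cov_inv = np.linalg.inv(cov)

# Squared Mahalanobis distance of each observation from the sample mean
d = X - mean
md2 = np.einsum("ij,jk,ik->i", d, cov_inv, d)
print(np.sqrt(md2))                # Mahalanobis distance per observation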

Approximating the value of Pi by Monte Carlo method in Nspire and NVIDIA GPU

montecarlopi8a
The value of pi can be approximated by the Monte Carlo method. This can easily be done on anything from a programmable calculator like the Nspire to, much more efficiently, parallel computing devices like GPUs.

Writing the code on the Nspire is straightforward. Nspire Basic provides random number functions like RandSeed(), Rand(), RandBin(), RandNorm(), etc.
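
The method itself fits in a few lines; here is a plain Python restatement (a stand-in for the Nspire Basic program, not a transcription of it): draw uniform points in the unit square and count how many fall inside the quarter circle.

import random

def monte_carlo_pi(samples, seed=1):
    random.seed(seed)                  # analogous to seeding with RandSeed
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:       # point lies inside the quarter circle
            inside += 1
    return 4.0 * inside / samples      # area ratio times 4 approximates pi

print(monte_carlo_pi(100_000))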

To implement the Monte Carlo method on the GPU, random numbers are not generated in the CUDA kernel functions but in the main program using the CuRand host APIs. On the NVIDIA GPU, CuRand is an API for random number generation, with 9 different types of random number generators available. At first I picked one from the Mersenne Twister family (CURAND_RNG_PSEUDO_MT19937) as the generator, but unfortunately this returned a segmentation fault. Tracing the call to curandCreateGenerator revealed that the status value returned is 204, which means CURAND_STATUS_ARCH_MISMATCH. It turns out this particular generator is supported only on the host API and architecture sm_35 or above. I eventually settled on CURAND_RNG_PSEUDO_MTGP32. The performance of this RNG is closely tied to the thread and block count, and the most efficient use is to generate a multiple of 16384 samples (64 blocks × 256 threads).
montecarlopi4

On a side note, to use the CuRand APIs in Visual Studio, the CuRand library must be added to the project dependencies manually. Otherwise there will be errors at the linking stage, since CuRand is dynamically linked.

montecarlopi2

On the Nspire, using 100,000 samples, it took an overclocked CX CAS 790 seconds to complete the simulation. The same Nspire program finishes in 8 seconds in the PC version running on a Core i5.
montecarlopi1

On the GPU side, CUDA on a GeForce 610M finished in a blink of an eye for the same number of iterations.
montecarlopi3

To get a better measurement of the performance, the number of iterations is increased to 10,000,000 and the result is compared against a separately coded Monte Carlo program for the same purpose that runs serially instead of in parallel with CUDA. The GPU version took 296 ms, while the plain vanilla C version ran for 1014 ms.

Multiple linear regression in TI-84 Plus

Advanced features like multiple linear regression are not included in the TI-84 Plus SE. However, obtaining the regression parameters needs nothing more than some built-in matrix operations, and the steps are also very easy. For a simple example, consider two independent variables x1 and x2 for a multiple regression analysis.

Firstly, the values are input into lists and later turned into matrices. L1 and L2 are x1 and x2, and L3 is the dependent variable.
ti84+multireg1

Convert the lists into matrices using the List>matr() function. L1 thru L3 are converted to Matrix C thru E.
ti84+multireg2

Create a matrix of all 1s with the same dimensions as L1 / L2. Then use the augment() function to create a matrix such that the first column is L1 (Matrix C), the second column is L2 (Matrix D), and the third column is the all-1s matrix. In this example we will store the result in matrix F. Notice that since augment() takes only two arguments at a time, we have to chain the function calls.
ti84+multireg3

The result of F and its transpose look like below.
ti84+multireg4 ti84+multireg5 ti84+multireg6

Finally, the following formula is used to obtain the parameters for the multiple regression

([F]ᵀ * [F])⁻¹ * [F]ᵀ * [E]

ti84+multireg7

The parameters are expressed in the result matrix and therefore the multiple regression equation is

y = 41.51x1 - 0.34x2 + 65.32
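
The same normal-equation arithmetic can be cross-checked in Python; the x1, x2, and y values below are placeholders rather than the data from the screenshots, so the coefficients will differ from the equation above.

import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0])
y  = np.array([5.0, 7.0, 10.0, 12.0, 15.0])

# Design matrix F: columns are x1, x2, and a column of 1s, mirroring the
# augment() construction on the TI-84
F = np.column_stack([x1, x2, np.ones_like(x1)])

# beta = (F^T F)^-1 F^T y, ordered as [slope of x1, slope of x2, intercept]
beta = np.linalg.inv(F.T @ F) @ F.T @ y
print(beta)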

See also this installment on determining the coefficient of determination in a multiple linear regression setting, also using the TI-84.

TI-84 Plus Pocket SE and the Simplex Algorithm

The TI-84+ Pocket SE is the little brother of the TI-84 Plus. They are almost identical in terms of screen resolution, processor architecture and speed, and the OS. The Pocket version measures only 160 x 80 x 21 mm and weighs 142 g, making it considerably more compact than the classic version.

TI84+PSE1

This little critter is full of features from the 2.55 MP OS. With TI’s built-in advanced functions like ANOVA, it will easily blow away mainstream Casio models of the same size like the fx991 and even the fx5800P. Nevertheless, the TI-84 Plus series is still considered a stripped-down version of the TI Nspire and TI-89 Titanium, and as such I personally do not expect or intend to run on it sophisticated calculations or programs like the Nelder-Mead algorithm that fit comfortably on the Nspire or Titanium.

Having said that, many complex calculations can easily be accomplished with the rich set of advanced features available out of the box in the TI-84, even without programming. One such example is linear programming solved with the simplex algorithm. Consider the following example: in order to maximize profit, the number of each product to produce given a set of constraints can be determined by linear programming. The constraints can be expressed as the following system of inequalities:

8x + 7y ≤ 4400   :Raw Material P
2x + 7y ≤ 3200   :Raw Material Q
3x + y ≤ 1400    :Raw Material R
x,y ≥ 0

In the above, the two products are represented by variables x and y, for product A and B respectively. Each of the first three inequalities denotes the raw material requirements to manufacture each product, for materials P, Q, and R. Specifically, product A requires 8 units of raw material P, 2 units of raw material Q, and 3 units of raw material R. The total available units of these three raw materials in a production run are 4400, 3200, and 1400 units respectively. When using the Simplex Algorithm to maximize the function

P = 16x + 20y

which represents profits of $16 and $20 for each unit of products A and B respectively, the tableau below is initially set up with slack variables as

  8   7 1 0 0 0 4400
  2   7 0 1 0 0 3200
  3   1 0 0 1 0 1400
-16 -20 0 0 0 1    0

TI84+PSE3

Using the *row() and *row+() matrix functions, the Simplex Algorithm can be carried out without even one line of code. The final answer is obtained as 11200, which is the maximum profit, achieved by producing 200 of Product A and 400 of Product B.

TI84+PSE4
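
As a cross-check of the tableau result, the same linear program can be sketched with scipy's linprog (a different solver, not the *row()/*row+() steps above); it should land on the same corner point.

from scipy.optimize import linprog

# Maximize P = 16x + 20y  <=>  minimize -16x - 20y
c = [-16, -20]
A_ub = [[8, 7],    # raw material P
        [2, 7],    # raw material Q
        [3, 1]]    # raw material R
b_ub = [4400, 3200, 1400]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)       # expected x = 200, y = 400
print(-res.fun)    # maximum profit, expected 11200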