# Experiencing Deep Learning with Jupyter and Anaconda

Most of the time, my work with deep learning is done at the command line with Python and TensorFlow. The clean and efficient syntax of the Python language and the package design of TensorFlow have all but eliminated the need for a complex Integrated Development Environment (IDE). But after trying out the free Google Colab service, which provides a web-based Jupyter interface, I decided to set one up on my desktop, which sports an Nvidia RTX 2060 GPU.

Installation is easy, but be sure to run the Anaconda console as Administrator on Windows. To create an environment running TensorFlow with GPU support:

```
conda create -n tensorflow_gpuenv tensorflow-gpu
conda activate tensorflow_gpuenv
```

Managing multiple packages is much easier with Anaconda, as it separates configurations into environments that can be customized. On my development machine, I can simply create a TensorFlow environment with GPU support and then install Jupyter to enjoy its graphical interface.
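Once the environment is activated, a quick sanity check (a minimal sketch, assuming the TensorFlow 2.x API) confirms that TensorFlow can actually see the GPU before launching anything heavier:

```python
import tensorflow as tf

# List GPUs visible to TensorFlow; an empty list means the GPU build
# or the CUDA drivers are not set up correctly.
gpus = tf.config.list_physical_devices("GPU")
print(gpus)
```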

Finally, to launch Jupyter:

```
jupyter notebook
```

To see how flexible Anaconda with Jupyter is on the same machine, I compared a simple image pattern recognition program running under Jupyter with and without GPU support.

# The birthday paradox riddle with TI Nspire

In probability theory, the birthday paradox is an interesting problem: it is an easy vehicle for grasping several important statistical concepts, like likelihood and combinatorics, and for the surprising conclusion it arrives at.
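The surprising conclusion (a room of just 23 people gives a better-than-even chance of a shared birthday) can be checked numerically. A minimal Python sketch, assuming a 365-day year and uniformly distributed birthdays:

```python
def shared_birthday_prob(n, days=365):
    """Probability that at least two of n people share a birthday."""
    p_distinct = 1.0
    for k in range(n):
        # Multiply in the chance that person k+1 avoids all earlier birthdays.
        p_distinct *= (days - k) / days
    return 1.0 - p_distinct

print(shared_birthday_prob(23))  # just over 0.5
```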

The birthday problem is simple: in a room with n people, what is the chance that two of them share the same birthday? It turns out, using the equation p(n) = 1 − 365! / (365ⁿ · (365 − n)!), it only takes 23 people to reach a 50% probability of two people sharing a birthday.

# Visualizing operating characteristic curve with TI Nspire

In the study of quality control, sampling is an important technique for assessing the overall quality level of a production lot. The operating characteristic (OC) curve is a great tool for understanding the quality profile of an acceptance sampling plan.
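Outside the calculator, the same OC curve can be sketched in Python. This is a minimal sketch assuming a binomial sampling plan with sample size n and a hypothetical acceptance number c (the maximum number of defectives tolerated in the sample, which the original example does not specify):

```python
from math import comb

def oc_accept_prob(p, n, c):
    """P(accept lot): probability of at most c defectives in a sample
    of n items, each defective with probability p (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Example: 10% failure rate, sample size 20, acceptance number 2.
print(oc_accept_prob(0.1, 20, 2))
```

Plotting `oc_accept_prob` against p traces the OC curve for the chosen plan.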

In TI Nspire, the OC curve can be defined as follows, using the binomial distribution as an alternative to the hypergeometric distribution. With the function defined, a 10% failure rate with a sample size of 20 can be visualized by graphing this function.

Interesting results from a recent paper presented at the 25th ACM Conference on Computer and Communications Security show advances in Generative Adversarial Networks (GANs). In particular, the paper focused on tackling Captchas with GANs. GANs take a game-theoretic approach to training: during the deep learning process, two networks compete in a game in which one tries to fool the other while the other strives not to be fooled.

To compare the performance of machine learning models over probability distributions, divergence measures are usually used as benchmark metrics. One commonly used measure is the Jensen–Shannon divergence, and a generalization to n distributions with weights πᵢ can be given as JSD(P₁, …, Pₙ) = H(Σᵢ πᵢPᵢ) − Σᵢ πᵢH(Pᵢ), where H is the Shannon entropy.

# Visualizing Volatility Sensitivity in Delta hedged gains with TI Nspire

The TI Nspire calculator is a great platform for visualizing data via interactive graphs. Built-in facilities like the input slider for adjusting variable values allow dynamic visualization of complex equations, such as the volatility sensitivity of delta-hedged gains used in financial investment. Since this strategy involves a single call option, the volatility exposure equals the vega of the option.
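For reference alongside the calculator setup, the Black-Scholes vega can be computed directly. A minimal Python sketch, assuming a European option on a non-dividend-paying underlying:

```python
from math import log, sqrt, exp, pi

def bs_vega(S, K, T, r, sigma):
    """Black-Scholes vega: sensitivity of the option price to volatility.

    S: spot price, K: strike, T: time to expiry (years),
    r: risk-free rate, sigma: volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    phi_d1 = exp(-0.5 * d1**2) / sqrt(2 * pi)  # standard normal pdf at d1
    return S * phi_d1 * sqrt(T)

print(bs_vega(100, 100, 1.0, 0.05, 0.2))
```

Evaluating this over a range of spot prices reproduces the values stored in the Nspire spreadsheet described below.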

The following setup on the Nspire provides the functions to calculate the vega values. A spreadsheet input screen stores the spot prices and the calculated Black-Scholes vega values. Finally, the data plotting screen completes the graph of the volatility sensitivity of delta-hedged gains. A slider control can easily be added to adjust an offset variable, so as to visualize scenarios under different spot prices.

# Variance Inflation Factors in R

The Variance Inflation Factor (VIF) function is available in R for detecting multicollinearity. For the jth predictor, the VIF is given by VIFⱼ = 1 / (1 − R²ⱼ), where R²ⱼ is the R² from regressing the jth predictor on the remaining predictors. To use this function in R (provided by the car package):

```
vif(fit)
sqrt(vif(fit))
```
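The same calculation can be cross-checked outside R. A minimal Python sketch with NumPy, regressing each column of the design matrix on the others to obtain R²ⱼ and hence VIFⱼ = 1 / (1 − R²ⱼ):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of design matrix X."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        # Least-squares regression of column j on the other columns
        # (with an intercept term).
        A = np.column_stack([np.ones(n), others])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return out
```

Columns that are nearly linear combinations of the others produce large VIF values; independent columns stay close to 1.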

# Autocorrelation function

The autocorrelation function is the ratio of the kth sample autocovariance to the sample variance, i.e., rₖ = cₖ / c₀, where cₖ = (1/n) Σₜ (xₜ − x̄)(xₜ₊ₖ − x̄). A plot of rₖ against lag k is examined for discernible patterns, relationships, and absolute values (e.g., values close to zero).
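A minimal Python sketch of this definition with NumPy (the function name `acf` is my own):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation r_k = c_k / c_0 for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xm = x - x.mean()
    c0 = (xm @ xm) / n  # sample variance (lag-0 autocovariance)
    return [float((xm[: n - k] @ xm[k:]) / n / c0) for k in range(max_lag + 1)]
```

For example, a strictly alternating series yields r₁ close to −1, while white noise yields values near zero at every nonzero lag.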