Author Archives: gmgolem

The Eagle has landed

To celebrate the 50th anniversary of Apollo 11, besides watching recordings of the live TV broadcast, there is no better time to teach my notebook computer, with power far surpassing the Apollo Guidance Computer, how to land on the Moon. Suffice it to say that even the Apple ][ came after Apollo made history by landing mankind on the Moon.

[Image: eagle2]

With OpenAI Gym, a simulated environment recreates the tense moments of the lunar module's descent. Training with TensorFlow, letting the modern computer practice and learn how to land, eventually accomplished the mission: the Eagle has landed.

 


Experiencing Deep Learning with Jupyter and Anaconda

Most of my deep learning work is done at the command line with Python and TensorFlow. The clean, efficient syntax of the Python language and the package design of TensorFlow almost eliminate the need for a complex Integrated Development Environment (IDE). But after trying out the free Google Colab service, which provides a web-based Jupyter interface, I decided to set one up on my desktop, which sports an Nvidia RTX 2060 GPU.

Installation is easy, but be sure to run the Anaconda console as Administrator on Windows. To set up TensorFlow with GPU support:

conda create -n tensorflow_gpuenv tensorflow-gpu
conda activate tensorflow_gpuenv

Managing multiple packages is much easier with Anaconda, as it separates configurations into environments that can be customized independently. On my development machine, I can simply create a TensorFlow environment with GPU support and then install Jupyter to enjoy its graphical interface.

Finally, to start Jupyter:

jupyter notebook

[Image: jupyterconsole]

To see how flexible Anaconda with Jupyter is on the same machine, below is a comparison of a simple image pattern recognition program running under Jupyter with and without GPU support.

[Image: jupytergpu]
[Image: jupytercpu]
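A comparison along these lines can be reproduced in a Jupyter cell by pinning the same computation to each device. This is only a sketch, assuming TensorFlow 2.x with a visible GPU; the matrix size is arbitrary:

```python
import time
import tensorflow as tf

def time_matmul(device):
    # Pin the computation to the given device and time repeated matrix multiplies.
    with tf.device(device):
        x = tf.random.uniform((2000, 2000))
        start = time.time()
        for _ in range(10):
            tf.linalg.matmul(x, x)
        return time.time() - start

print("CPU seconds:", time_matmul("/CPU:0"))
# Only meaningful if TensorFlow actually sees the GPU:
if tf.config.list_physical_devices("GPU"):
    print("GPU seconds:", time_matmul("/GPU:0"))
```

The gap widens quickly as the matrices grow, which is what the two screenshots illustrate.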

Machine learning with a classic game Doom and TensorFlow

I was busy with work this weekend. During those irregular shifts, I decided to let my GeForce RTX 2060 play some games while I was away from home.

VizDoom is a port of the classic first-person shooter Doom to the machine learning arena. With this specialised port, compatible with Python, it is easy to train a neural network with TensorFlow to play this human game. Over the weekend, my GPU played countless episodes to learn how to win on its own while I was working in the office.

[Image: doom2]

 

Visualizing TensorFlow with TensorBoard

TensorBoard is a tool for visualizing graphs and various metrics from TensorFlow session runs. With a few lines of additional code, TensorBoard gathers and reports run statistics in a nice graphical interface.

[Image: tensorboard1]

First of all, a few lines are added around the TensorFlow session, as below:

writer = tf.summary.FileWriter("output", sess.graph)  # log the graph for TensorBoard
print(sess.run(GH, options=options, run_metadata=run_metadata))
writer.close()  # flush the event file to the output directory

Before running the session, start TensorBoard using the following command.
[Image: tensorboard2]
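The command in the screenshot is not selectable text; for a FileWriter logging to "output" as above, it is typically:

```shell
# Point TensorBoard at the directory the FileWriter wrote to,
# then browse to the printed URL (usually http://localhost:6006).
tensorboard --logdir output
```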

Finally, run the TensorFlow session and point the browser at TensorBoard.
[Image: tensorboard3]

Profiling machine learning applications in TensorFlow

TensorFlow provides the timeline package, imported from tensorflow.python.client:

from tensorflow.python.client import timeline

This is useful for performance profiling TensorFlow applications, with graphical visualization similar to the graphs generated by the CUDA Visual Profiler. With a little tweak to the machine learning code, TensorFlow applications can store and report performance metrics of the learning process.
[Image: tfprofile3]

The design of the timeline package makes it easy to add profiling, simply by adding the code below.

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata() 

The model must also be compiled with the profiling options:

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'],
              options=run_options,
              run_metadata=run_metadata)

With the sample MNIST digit classifier for TensorFlow, the output shows that the Keras history is saved and can later be retrieved to generate reports.
[Image: tfprofile2]
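The step that actually writes the trace file is not shown above. A self-contained sketch of it, using TF 1.x-style sessions via tf.compat.v1 and a toy graph standing in for the classifier, might look like this (the timeline.json filename is only an example):

```python
import tensorflow.compat.v1 as tf
from tensorflow.python.client import timeline

tf.disable_eager_execution()

# A tiny graph just to produce step stats for the trace.
a = tf.random.uniform((100, 100))
b = tf.linalg.matmul(a, a)

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

with tf.Session() as sess:
    sess.run(b, options=run_options, run_metadata=run_metadata)

# Convert the collected step stats into Chrome's trace format and persist it.
chrome_trace = timeline.Timeline(run_metadata.step_stats).generate_chrome_trace_format()
with open("timeline.json", "w") as f:
    f.write(chrome_trace)  # this JSON file is what chrome://tracing/ opens
```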

Finally, using the Chrome tracing page (chrome://tracing/), the performance metrics persisted on the file system can be opened for verification.
[Image: tfprofile1]

 

TensorFlow and Keras on RTX2060 for pattern recognition

The MNIST database is a catalog of handwritten digits for image processing. With TensorFlow and Keras, training a neural network classifier on the Nvidia RTX 2060 GPU is a walk in the park.
[Image: mnist2]

Using the default import of the MNIST dataset via tf.keras, which comprises 60,000 handwritten digit images of 28 x 28 pixels each, training a neural network to classify them can be accomplished in a matter of seconds, depending on the target accuracy. The same training on an ordinary CPU is not as quick as on the GPU because of architectural differences. In this sample run, the digit “eight” is correctly identified by the neural network.
[Image: mnist4]
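The whole setup fits in a handful of lines; this sketch follows the standard tf.keras quickstart, so the layer sizes are the tutorial defaults rather than the exact model used for the run above:

```python
import tensorflow as tf

# Load the 60,000 training images (28 x 28 pixels) plus 10,000 test images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=1)  # seconds per epoch on a GPU
print("Test accuracy:", model.evaluate(x_test, y_test)[1])
```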

A simple comparison of MNIST training results on my RTX 2060 with varying numbers of training samples shows slight differences in the final accuracy.
[Image: mnist1]

 

More test driving of TensorFlow with the Nvidia RTX 2060

By following the TensorFlow guide, it is easy to see how TensorFlow harnesses the power of my new Nvidia RTX 2060.

The first one is image recognition. Similar to the technology used in a previous installment on neural network training with traffic images captured from CCTV, a sample data set of classified fashion images is learnt using TensorFlow. In that previous installment, an Amazon Web Services cloud instance with a K520 GPU was used for the model training. In this blog post, the training actually takes place on the Nvidia RTX 2060.

[Image: tfimg1]
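The fashion data set loads the same way as the digit set; a brief sketch of the starting point, with the class names as listed in the TensorFlow guide:

```python
import tensorflow as tf

# Fashion-MNIST drops in as a replacement for the digit set: the same
# 28 x 28 greyscale format, but ten clothing categories instead of digits.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()

class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

print(x_train.shape)            # 60,000 training images
print(class_names[y_train[0]])  # label of the first training image
```

From here, the same classifier architecture as for the digits can be trained unchanged.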

Another classic sample is regression analysis, on the Auto MPG data set. With a few lines of code, TensorFlow cleans up the data set to remove unsuitable values and converts categorical values to numeric ones for the model.

[Image: tfimg2]