Category Archives: TensorFlow

Visualizing a MLP Neural Network with TensorBoard

The Multi-Layer Perceptron (MLP) model can be built in Keras as a Sequential model container stacking its predefined Dense layer type. For visualizing the training results, TensorBoard is handy, with only a few lines of code to add to the Python program.

import datetime
import tensorflow as tf

log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

Finally, add the callback to the corresponding model fitting command to collect model information.

history = model.fit(X_train, Y_train, validation_split=0.2,
                    epochs=100, batch_size=10,
                    callbacks=[tensorboard_callback])
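Putting the fragments above together, a minimal runnable sketch might look like the following; the X_train/Y_train arrays here are a random stand-in for a real dataset, and the epoch count is reduced for brevity.

```python
import datetime

import numpy as np
import tensorflow as tf

# stand-in data: 100 samples with 8 features, binary labels
X_train = np.random.rand(100, 8)
Y_train = np.random.randint(0, 2, size=(100,))

# a small MLP as a Sequential stack of Dense layers
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

history = model.fit(X_train, Y_train, validation_split=0.2,
                    epochs=5, batch_size=10,
                    callbacks=[tensorboard_callback])
```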

tfb1

Once the training is completed, start TensorBoard and point the browser to the designated port number (6006 by default).
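Assuming the log directory used by the callback above, the launch amounts to a single command:

```shell
tensorboard --logdir logs/fit
```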

Click on the Graph tab to see a detailed visualization of the model.
tfb2

Click on the Distributions tab to check the layer output.
tfb3

Click on the Histograms tab for a 3D visualization of the dense layers.
tfb4

 

 


Experiencing Deep Learning with Jupyter and Anaconda

Most of the time my work with deep learning is done in a command line interface with Python and TensorFlow. The clean and efficient syntax of the Python language and the package design of TensorFlow have almost eliminated the need for a complex Integrated Development Environment (IDE). But after trying out the free Google Colab service, which provides a web-based interface in Jupyter, I am going to set one up on my desktop, which sports an Nvidia RTX 2060 GPU.

Installation is easy, but be sure to run the Anaconda console as Administrator on the Windows platform. To create and activate an environment for running TensorFlow with GPU support:

conda create -n tensorflow_gpuenv tensorflow-gpu
conda activate tensorflow_gpuenv

Managing multiple packages is much easier with Anaconda as it separates configurations into environments that can be customized. On my development machine, I can simply create a TensorFlow environment with GPU support and then install Jupyter to enjoy its graphical interface.
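One detail worth spelling out: Jupyter has to be installed inside the environment before it can be launched there. Assuming the tensorflow_gpuenv environment created above:

```shell
conda activate tensorflow_gpuenv
conda install jupyter
```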

Finally, to launch Jupyter:

jupyter notebook

jupyterconsole.PNG

To see how flexible Anaconda with Jupyter is on the same machine, here is a comparison of a simple image pattern recognition program run under Jupyter with and without GPU support.

jupytergpu
jupytercpu
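A quick way to confirm which device TensorFlow will use inside a given notebook kernel is to list the visible GPUs (tf.config.list_physical_devices requires TensorFlow 2.1 or later):

```python
import tensorflow as tf

# an empty list means this kernel will fall back to the CPU
gpus = tf.config.list_physical_devices('GPU')
print("GPUs visible:", gpus)
```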

Visualizing TensorFlow with TensorBoard

TensorBoard is a tool for visualizing graphs and various metrics from TensorFlow session runs. With a few lines of additional code, TensorBoard gathers and reports run statistics in a nice graphical interface.
tensorboard1

First of all, a few lines are added around the TensorFlow session, as below:

# write the session graph for TensorBoard, run the graph, then close the writer
writer = tf.summary.FileWriter("output", sess.graph)
print(sess.run(GH, options=options, run_metadata=run_metadata))
writer.close()

Before running the session, start TensorBoard using the following command.
tensorboard2
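The command in the capture above amounts to pointing TensorBoard at the directory the FileWriter writes to:

```shell
tensorboard --logdir output
```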

Finally run the TensorFlow session and point the browser to the TensorBoard.
tensorboard3

Profiling machine learning applications in TensorFlow

TensorFlow provides the timeline package, imported from tensorflow.python.client:

from tensorflow.python.client import timeline

This is useful for profiling the performance of TensorFlow applications, with graphical visualization similar to the graphs generated by the CUDA Visual Profiler. With a little tweak to the machine learning code, TensorFlow applications can store and report performance metrics of the learning process.
tfprofile3

The design of the timeline package makes it easy to add profiling, simply by adding the code below.

# request a full execution trace and a container for the collected metadata
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

It is also required to instruct the model to compile with the profiling options:

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'],
              options=run_options,
              run_metadata=run_metadata)

With the sample MNIST digit classifier for TensorFlow, the Keras history shown in the output is saved and can later be retrieved to generate reports.
tfprofile2
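For completeness, here is a self-contained sketch of persisting the collected trace to a Chrome-readable JSON file. It uses the tf.compat.v1 session API, since timeline profiling predates eager execution, and the tiny constant graph is just a stand-in for a real model run:

```python
import tensorflow as tf
from tensorflow.python.client import timeline

tf.compat.v1.disable_eager_execution()

# request a full execution trace and a container for the collected metadata
run_options = tf.compat.v1.RunOptions(trace_level=tf.compat.v1.RunOptions.FULL_TRACE)
run_metadata = tf.compat.v1.RunMetadata()

a = tf.constant([1.0, 2.0, 3.0])
b = a * 2.0

with tf.compat.v1.Session() as sess:
    sess.run(b, options=run_options, run_metadata=run_metadata)

# convert the collected step stats into Chrome trace format for chrome://tracing
tl = timeline.Timeline(run_metadata.step_stats)
with open('timeline.json', 'w') as f:
    f.write(tl.generate_chrome_trace_format())
```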

Finally, using the Chrome tracing page ( chrome://tracing/ ), the performance metrics persisted on the file system can be opened for verification.
tfprofile1

 

TensorFlow and Keras on RTX2060 for pattern recognition

The MNIST database is a catalog of handwritten digits for image processing. With TensorFlow and Keras, training a neural network classifier using the Nvidia RTX 2060 GPU is a walk in the park.
mnist2

Using the default import of the MNIST dataset via tf.keras, which comprises 60,000 handwritten digit images of 28 x 28 pixels, training a neural network to classify them can be accomplished in a matter of seconds, depending on the desired accuracy. The same training done on an ordinary CPU is not as quick as on the GPU, owing to architectural differences. In this sample run, the digit “eight” is correctly identified by the neural network.
mnist4.PNG
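A minimal version of such a classifier, along the lines of the standard tf.keras MNIST example (epochs kept low for brevity; the layer sizes are illustrative, not those from the run above):

```python
import tensorflow as tf

# load the 60,000-image training set and 10,000-image test set
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1)
```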

A simple comparison of the training result of the MNIST database on my RTX2060 with varying training samples depicts slight differences in the final accuracy.
mnist1

 

More test driving of TensorFlow with Nvidia RTX 2060

By following the TensorFlow guide, it is easy to see how TensorFlow harnesses the power of my new Nvidia RTX 2060.

The first one is image recognition. Similar to the technology used in a previous installment on neural network training with traffic images captured from CCTV, a sample data set of fashion object images with classifications is learnt using TensorFlow. In that previous installment, an Amazon Web Services cloud instance with a K520 GPU was used for the model training. In this blog post, the training actually takes place on the Nvidia RTX 2060.

tfimg1

Another classic sample is regression analysis, on the Auto MPG data set. With a few lines of code, the data set is cleaned up to remove unsuitable values and convert categorical values to numeric ones for the model.

tfimg2
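The clean-up step boils down to a couple of pandas calls; here is a sketch on a hypothetical three-row slice of the data ('?' is how the raw Auto MPG file marks missing horsepower):

```python
import pandas as pd

# hypothetical slice of the Auto MPG data
df = pd.DataFrame({
    'mpg': [18.0, 15.0, 16.0],
    'horsepower': ['130.0', '?', '150.0'],
    'origin': [1, 3, 2],  # categorical country code: 1=USA, 2=Europe, 3=Japan
})

# remove unsuitable values: coerce '?' to NaN, then drop those rows
df['horsepower'] = pd.to_numeric(df['horsepower'], errors='coerce')
df = df.dropna()

# convert the categorical origin code into one-hot numeric columns
df = pd.get_dummies(df, columns=['origin'], prefix='origin')
```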

Monte Carlo methods in TensorFlow

The Markov Chain Monte Carlo (MCMC) is a sampling method to sample from a probability distribution when direct sampling is not feasible.

The implementation of Monte Carlo methods in the TensorFlow Probability package includes a sample running Hamiltonian MCMC, a variant that uses Hamiltonian dynamics to avoid slow exploration of the state space.

While running the samples, my Python installation reported errors on importing tensorflow_probability:
tfp1

The problem was resolved by uninstalling and re-installing the tensorflow-estimator package:


pip uninstall tensorflow_estimator
pip install tensorflow_estimator

Finally the samples run fine with expected results.

tfp3.PNG

The results from the sample run.
tfp7
tfp4