Category Archives: Pattern recognition

Performance testing the Movidius Neural Compute Stick

The Movidius Neural Compute Stick is great for visual recognition projects with low power consumption and small form factor requirements. A basic introduction was covered in the previous installment, and now it is time for some field tests of its performance.

The first test involves calculators: a Texas Instruments TI-84 Plus Pocket SE and a Casio fx-4500PA. Both are recognized as a hand-held computer with fairly high confidence.

The second test is a photo of a luxury watch, recognized as an analog clock.

The final test is a feather. At one point the stick returned 100% confidence, and the result is correct.

This neural computing module did a good job, delivering impressive results for a development kit, and it is also great for hobby projects. As hardware AI acceleration matures, we may one day see it integrated into CPUs and become ubiquitous in products on the market.


Deep Learning with the Movidius Neural Compute Stick


Deep learning is a breakthrough in artificial intelligence. Rooted in neural networks, it has been given new possibilities by advances in modern computing hardware and sophisticated integrated circuit technology.

Deep learning is a branch of machine learning, an exciting area of AI; leading development frameworks include TensorFlow and Caffe. Pattern recognition is a practical application of machine learning in which photos or videos are analysed by machine to produce usable output, as if a human had done the analysis. The GPU has been a favourite choice for this work: its specialized architecture delivers supreme processing power not only in graphics processing but has also made it popular in the neural network community. A previous installment covered how to deploy an Amazon Web Services GPU instance to analyse real-time traffic camera images using Caffe.

To bring this kind of machine learning power to IoT, Intel shrank and packaged a specialized Vision Processing Unit into the form factor of a USB thumb drive in the Movidius™ Neural Compute Stick.

It sports an ultra-low-power Vision Processing Unit (VPU) inside an aluminium casing and weighs only 30 g (without the cap). Support for the Raspberry Pi 3 Model B makes it a very attractive add-on for development projects involving AI applications on that platform.

In the form factor of a USB thumb drive, the specialized VPU geared for machine learning lets the Movidius act as an AI accelerator for the host computer.

To put this neural compute stick into action, an SDK provided by Movidius (available via git) is required. The SDK runs on Ubuntu, but Windows users with VirtualBox can easily install it in an Ubuntu 16.04 VM.

While the SDK comes with many examples and the setup is a walk in the park, running these examples is not so straightforward, especially in a VM. There are points to note, from making the stick available to the VM (including USB 3.0 and filter settings in VirtualBox) to the actual execution of the provided sample scripts. Some examples require two sticks to run. Developers should be comfortable with Python, Unix make and git commands, and installing packages in Ubuntu.
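To give a flavour of the Python involved, below is a minimal classification sketch based on the NCSDK v1 Python API; the compiled 'graph' file (produced by the SDK's mvNCCompile tool) and the dummy input image are stand-ins for what the sample scripts actually supply.

```python
# Minimal inference sketch for the Neural Compute Stick (NCSDK v1 API).
# A real example would load and preprocess a photo; a zero-filled
# placeholder image is used here to keep the sketch self-contained.
import numpy
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()              # list attached sticks
if not devices:
    raise SystemExit('No Neural Compute Stick found')

device = mvnc.Device(devices[0])
device.OpenDevice()

with open('graph', 'rb') as f:                 # compiled network blob
    graph = device.AllocateGraph(f.read())     # load it onto the stick

image = numpy.zeros((224, 224, 3), numpy.float16)  # placeholder input
graph.LoadTensor(image, 'user object')         # start inference on the VPU
output, _ = graph.GetResult()                  # blocks until results arrive
print('top class:', output.argmax(), 'confidence:', float(output.max()))

graph.DeallocateGraph()
device.CloseDevice()
```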

The results from the examples in the SDK alone are quite convincing, considering the form factor of the stick and its electrical power consumption. This neural computing stick "kept its cool" literally throughout the test drive, unlike the FPGA stick I occasionally use for bitcoin mining, which turns really hot.

Analysis of traffic CCTV camera images from data.gov.hk with GPU deep learning on the Amazon cloud

In this installment, modern computing technologies, including the Internet, artificial intelligence, deep learning, GPUs, and the cloud, are utilized to solve a practical problem.

Traffic jams are very common in metropolitan cities. Authorities responsible for road traffic management often install CCTV cameras for monitoring. The data published on the Internet at the data.gov.hk site for big data analysis is one such example. The following are two sample images from the same camera, one showing heavy traffic and the other very light traffic.

For modern pattern recognition technology, distinguishing these two distinct conditions from the images above is a piece of cake. The frontier in this field is no doubt deep learning networks running on GPUs, and Amazon AWS provides GPU-equipped computing resources for such a task.

The first step is to fire up a GPU instance on Amazon AWS. An HVM AMI is selected, since HVM virtualization is required for GPU instances.

The next step is to choose a GPU instance type. As the problem is simple enough, only g2.2xlarge is used. The same launch can also be scripted, as sketched below.
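For those who prefer code to the console, here is a minimal boto3 sketch of that launch; the region, AMI ID, and key pair name are placeholders rather than the actual values used in this post.

```python
# Launch a single g2.2xlarge GPU instance with boto3.
# ImageId and KeyName are placeholders to replace with your own.
import boto3

ec2 = boto3.resource('ec2', region_name='us-east-1')  # region assumed
instances = ec2.create_instances(
    ImageId='ami-xxxxxxxx',     # placeholder: an HVM AMI
    InstanceType='g2.2xlarge',  # the GPU instance type used in this post
    KeyName='my-key-pair',      # placeholder key pair for SSH access
    MinCount=1,
    MaxCount=1,
)
print('launched', instances[0].id)
```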

After logging in to the terminal, set up CUDA 7, cuDNN, Caffe, and DIGITS; the steps are described in their respective official documentation. A device query test confirmed successful installation of CUDA. The whole process may take an hour to complete if installed from scratch, though there may be pre-built images out there.

Note that an NVIDIA Accelerated Computing Developer Program account may be required to download some of these packages. A make test confirmed the complete setup of Caffe.
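As a quick sanity check that the whole GPU path works, pycaffe can be switched to GPU mode from an interactive session; a minimal sketch, assuming the Caffe Python bindings are on the PYTHONPATH.

```python
# Verify the Caffe Python bindings and select the GPU.
import caffe

caffe.set_device(0)   # the single GPU on a g2.2xlarge
caffe.set_mode_gpu()  # nets created from here on run via CUDA/cuDNN
print('pycaffe loaded; GPU mode enabled')
```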

Finally, after installing DIGITS, the login page defaults to port 5000. A network rule to open this port can easily be set up in the AWS console. Alternatively, for a more secure connection, SSH tunneling can be used instead; for example, ssh -L 8500:localhost:5000 forwards local port 8500 to DIGITS on the instance.

Now it is time to start training. A new image classification dataset is to be created. As stated above, the source image set is obtained from data.gov.hk, where traffic cameras installed at strategic points on the road network publish JPG images on the web for public access; the images are refreshed every 2 minutes. A simple script is prepared to fetch the images and build the dataset, as sketched below, and DIGITS then configures the classification training from it.
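The fetcher used for this post was a shell script; the following is an equivalent sketch in Python, with a placeholder URL standing in for the actual data.gov.hk camera endpoint.

```python
# Periodically fetch a traffic camera image to build the training set.
# CAMERA_URL is a placeholder; data.gov.hk lists the real endpoint for
# each camera, and the published images refresh every 2 minutes.
import time
import urllib.request

CAMERA_URL = 'https://example.org/traffic-camera.jpg'  # placeholder URL
INTERVAL = 120  # seconds between fetches, matching the refresh cycle

while True:
    stamp = time.strftime('%Y%m%d-%H%M%S')
    urllib.request.urlretrieve(CAMERA_URL, 'traffic-%s.jpg' % stamp)
    time.sleep(INTERVAL)
```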

Since our sample dataset is small, the training completed in no time.


Next, a model is defined. GoogLeNet is selected in this example.

Model training in progress. The charts update in real time.


When the model is complete, some tests can be carried out. In this example, the model is trained to determine whether a camera image indicates a traffic jam or not; a pycaffe sketch for querying the trained model follows the two samples below.

A traffic jam sample. Prediction: free=-64.23%, jam=89.32%

The opposite. Prediction: free=116.62%, jam=-133.61%
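The tests above are run in the DIGITS web interface, but the trained model can also be queried directly with pycaffe; a sketch with placeholder file names, assuming the deploy.prototxt and a .caffemodel snapshot have been downloaded from DIGITS.

```python
# Classify a camera image with the trained model, outside DIGITS.
# File names and label order are assumptions for illustration.
import caffe

net = caffe.Classifier(
    'deploy.prototxt',        # placeholder: downloaded network definition
    'snapshot.caffemodel',    # placeholder: downloaded trained weights
    channel_swap=(2, 1, 0),   # Caffe models expect BGR channel order
    raw_scale=255,            # load_image returns [0,1]; model expects [0,255]
)

image = caffe.io.load_image('camera.jpg')  # placeholder test image
scores = net.predict([image])[0]
labels = ['free', 'jam']                   # assumed label order
print(dict(zip(labels, scores)))
```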

With the Amazon cloud, the ability to deploy cutting-edge AI technology on GPUs is no longer limited to researchers or those rich in resources. The general public can now benefit from these easy-to-access computing resources to explore the limitless possibilities of the era of big data.