
Bringing AI to your favorite dev board with HuskyLens

The HuskyLens is a compact AI machine vision sensor board designed to add machine vision to robot and IoT applications. It sports everything a developer could wish for to bring machine vision capability to their favorite board, at an affordable price point.

The board is equipped with a Kendryte K210 vision processor, an OV2640 camera, and a 2.0″ screen with 320 x 240 resolution. Two control buttons let end users navigate its menu. A four-wire interface is provided (a connecting wire is bundled in the package) for either UART or I2C connections. Firmware and source code are available for download from the manufacturer’s wiki site.

For a quick out-of-the-box test on the Raspberry Pi platform, just download the library and configure I2C on the Pi.
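
To enable I2C, one minimal route is raspi-config plus the i2c-tools utilities; the commands below are a sketch of that setup (the address reported on the bus depends on how the HuskyLens is configured).

sudo raspi-config          # Interfacing Options -> I2C -> Enable
sudo apt install -y i2c-tools
i2cdetect -y 1             # the HuskyLens should show up at its I2C address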


Connect the HuskyLens to the Pi with the four-wire interface. Configure the HuskyLens to communicate over I2C, and choose a function such as face recognition or object learning.

Run the sample Python script, which presents a simple command line menu interface, and see how easy it is to add machine vision.
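
For reference, a minimal Python sketch of the I2C test is shown below. The module, class, and method names are assumptions based on the downloaded wrapper and may need adjusting to the library version at hand.

# Minimal I2C test (module/class/method names are assumptions; adapt to the
# HuskyLens Python library actually downloaded from the wiki).
from huskylib import HuskyLensLibrary

husky = HuskyLensLibrary("I2C", "", address=0x32)   # 0x32 is the commonly documented default
print(husky.knock())                                # handshake to confirm the Pi can reach the board
husky.algorthim("ALGORITHM_FACE_RECOGNITION")       # switch the board to face recognition
print(husky.blocks())                               # read back detected faces as blocks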

Returning to a journey with IoT

Although I am not sure about the definition to this day, six years ago I tinkered with my first IoT device, which, like most prototypes back then, was equipped with environmental sensors and connectivity to the Internet.

It looks clumsy, built from a Texas Instruments MSP430F5529 microcontroller development board, TI’s TMP006 non-contact infrared temperature sensor, an OpenWRT-flashed TP Link NR-703N WiFi router, and a 16×2 LED display.

This little toy has been sitting next to my computer since it first reported a temperature to the Internet via Exosite, which partnered with TI to offer a free trial of its IoT services to users of TI’s development boards.

Unfortunately, this free tier is about to end in March 2020. Migration to Exosite’s paid services should be seamless, but I decided to roll my own on another cloud platform where I can still enjoy limited free services: Amazon Web Services.

The plan is to open up a custom RESTful API on an Apache server to capture the temperature data for future visualization. The code change is easy, since the Lua script running on the OpenWRT router already handles the serial data communication with the lower-level microcontroller and is perfectly capable of acting as a high-level web client.
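
As a rough illustration of the capture side, below is a minimal sketch of such an endpoint written as a Python CGI script behind Apache; the "temperature" field name and the CSV file location are assumptions for illustration only.

#!/usr/bin/env python3
# Minimal CGI endpoint sketch: appends a posted temperature reading to a CSV file.
# The field name and file path are illustrative assumptions, not the final design.
import cgi
import datetime

form = cgi.FieldStorage()
temperature = form.getfirst("temperature", "")

with open("/var/www/data/temperature.csv", "a") as log:
    log.write("{},{}\n".format(datetime.datetime.utcnow().isoformat(), temperature))

print("Content-Type: text/plain")
print()
print("OK")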

 

Calculating COVID-19 infection statistics using Nspire

The recent worldwide outbreak of COVID-19 is alarming. Using data published by the Johns Hopkins CSSE, we can perform some basic regression analysis on the Nspire calculator to get a rough picture of the outbreak.

The graph below shows data of daily infections outside of China.

The data show exponential rather than linear growth: using the statistics functions built into the Nspire, the r-squared value for the exponential regression (0.91) is noticeably higher than that of the linear regression (0.81).
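
The same comparison can be reproduced away from the calculator; here is a minimal Python sketch of a linear versus exponential fit, using placeholder daily counts rather than the actual Johns Hopkins figures.

# Linear vs. exponential fit on a short series of daily case counts.
# The numbers are placeholders for illustration, not the actual JHU data.
import numpy as np

days = np.arange(1, 11)
cases = np.array([8, 11, 15, 22, 30, 43, 60, 85, 120, 170])

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Linear model: cases = a*day + b
a, b = np.polyfit(days, cases, 1)
r2_linear = r_squared(cases, a * days + b)

# Exponential model: cases = A*exp(k*day), fitted as a line on log(cases)
k, log_A = np.polyfit(days, np.log(cases), 1)
r2_exp = r_squared(cases, np.exp(log_A) * np.exp(k * days))

print("linear r^2:", round(r2_linear, 2), "exponential r^2:", round(r2_exp, 2))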

For epidemiological analysis there are well-established mathematical models; when fed with accurate data, they can produce better descriptions and even reliable predictions. One of the indices from these models is the basic reproduction number, known as the R0 value, which indicates how many uninfected individuals one infected individual will in turn infect. So far the estimates for COVID-19 range from 1.4 to 6.6.

Adding a compact OLED display to Arduino Yun

Got my hands on some neat little 0.96-inch OLED display boards with 128×64 pixels. Previous versions were monochrome; the upgraded version is a clear and crisp display in yellow and blue. Adding a little display to development boards has always been on my to-do list as a replacement for deciphering LED signals, yet due to board space limitations very few development boards come with one.

This OLED display runs in I2C mode. On the Arduino Yun the SDA/SCL pins are next to the RJ45 jack. Fortunately these pins line up with the OLED board’s pins in the same order, so the OLED board fits “inside” the Arduino rather than the other way round.

The next problem is feeding Vcc and GND for power. The space around the RJ45 jack is so tight that I had to resort to soldering a short pair of power cables and running them to the power supply pins.

After confirming the OLED display is working properly, it is time to show something on it.

With the WiFi capability of the Arduino Yun, there is a lot of information to show. The sketch is modified from the WifiStatus example in the Arduino IDE. To fit the information on a tiny OLED display, the Lua script that this example code uses is also modified so that all lines fit nicely.

Web service call with Azure Sphere and curl

The libcurl library is included in the Azure Sphere SDK. For IoT applications, web service calls are almost a prerequisite for connecting everyday objects to the Internet, so being able to invoke a web service as conveniently as on the Azure Sphere is definitely an advantage.

A sample application is available on GitHub. Open the HTTPS_Curl_Easy solution with Visual Studio. This project defaults to opening example.com. To change it to the desired web service, first update the app_manifest.json file to allow the host.

{
    "SchemaVersion": 1,
    "Name": "HTTPS_Curl_Easy",
    "ComponentId": "20191001-0000-0000-0000-000000000000",
    "EntryPoint": "/bin/app",
    "CmdArgs": [],
    "Capabilities": {
        "AllowedConnections": [ "example.com", "your.webserver.com" ]
    },
    "ApplicationType": "Default"
}

Then open the main.c file and point the Azure Sphere at your web service.

if ((res = curl_easy_setopt(curlHandle, CURLOPT_URL, "http://your.webserver.com/")) != CURLE_OK) {
    LogCurlError("curl_easy_setopt CURLOPT_URL", res);
    goto cleanupLabel;
}

Check the log on the web server to confirm the request arrived.


Road testing Azure Sphere Starter Kit with Visual Studio and Azure IoT Hub

The Avnet Azure Sphere Starter Kit is a development board featuring the Azure Sphere module with an MT3620 processor. It is designed for end-to-end IoT with security in mind and is tightly integrated with the Azure cloud service.

To try out developing IoT solutions with this kit, Visual Studio 2017 or 2019 is required. The Azure Sphere SDK can be added to Visual Studio. An Azure account is needed to create an Azure Active Directory user in the cloud. For details of these preparations, Microsoft provides step-by-step instructions.

Out of the box, the kit has to be connected to a PC with Internet access via a USB cable (one is included in the kit). The driver should install itself. Once connected, open the Azure Sphere Developer Command Prompt. Each kit has to be registered to the Azure cloud before it can function. The following outlines the basic commands to complete the registration.

azsphere login

azsphere tenant create --name sphere01-tenant

azsphere device claim

azsphere device show-ota-status

azsphere device recover

azsphere device wifi show-status

azsphere device wifi add --ssid  --key

After completing the basic registration and WiFi setup, issue the command below to ready the Azure Sphere to work with Visual Studio in debug mode.

azsphere device prep-debug

At this point, open Visual Studio and pull a sample project from GitHub, for example the demo project at https://github.com/CloudConnectKits/Azure_Sphere_SK_ADC_RTApp. Compile and debug the project.

Observe the Output window to see the data fetched from the Azure Sphere. In Sphere terms, this is called side-loading a program.

Once the debugger exits, the Sphere will no longer run the program. To deploy the program in a more permanent manner, use the following commands to do an Over the Air (OTA) deployment.

azsphere feed list
--> [3369f0e1-dedf-49ec-a602-2aa98669fd61] 'Retail Azure Sphere OS'
azsphere device prep-field --newdevicegroupname  --newskuname 

azsphere device link-feed --dependentfeedid 3369f0e1-dedf-49ec-a602-2aa98669fd61 --imagepath "C:\Users\dennis\source\repos\AvnetAzureSphereStarterKitReferenceDesign1\AvnetStarterKitReferenceDesign\bin\ARM\Debug\AvnetStarterKitReferenceDesign.imagepackage" --newfeedname sphere01-test-avnet --force
Adding feed with ID 'e9243998-58b1-42c5-a7a3-7d76e55e5603' to device group with ID '193d1734-f1e3-4af1-a42e-e4e0a99f585c'.
Creating new image set with name 'ImageSet-Avnet-Starter-Kit-reference-V1.-2019.09.21-12.17.22+08:00' for images with these IDs: 4ae124ed-503a-4cf2-acf9-198c3decd55d.

(reboot)

azsphere device image list-installed

azsphere device prep-field --devicegroupid 193d1734-f1e3-4af1-a42e-e4e0a99f585c

OpenVINO on Raspberry

OpenVINO is short for the Open Visual Inference and Neural network Optimization toolkit. There is a port for the Raspberry Pi platform running Raspbian OS.

To set it up on a Raspberry Pi, download the latest archive from OpenVINO and run the commands below.

sudo mkdir -p /opt/intel/openvino
sudo tar -xf l_openvino_toolkit_runtime_raspbian_p_2019.2.242.tgz --strip 1 -C /opt/intel/openvino
sudo apt install cmake
source /opt/intel/openvino/bin/setupvars.sh
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc
sudo usermod -a -G users "$(whoami)"
sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples
make -j2 object_detection_sample_ssd

 

Once the installation is completed, download the pre-built model for face detection. The following test takes an input image and returns an output image with the detected face marked.

wget --no-check-certificate https://download.01.org/opencv/2019/open_model_zoo/R1/models_bin/face-detection-adas-0001/FP16/face-detection-adas-0001.bin
wget --no-check-certificate https://download.01.org/opencv/2019/open_model_zoo/R1/models_bin/face-detection-adas-0001/FP16/face-detection-adas-0001.xml

./armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i barack-obama-12782369-1-402.jpg 


Visualizing a MLP Neural Network with TensorBoard

The Multi-Layer Perceptron (MLP) model is supported in Keras in the form of the Sequential model container with its predefined Dense layer type. For visualization of the training results, TensorBoard is handy, with only a few lines of code to add to the Python program.

import datetime
import tensorflow as tf
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
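
For context, the model being visualized is an ordinary Sequential MLP. A minimal placeholder version might look like the sketch below; the input dimension, layer widths, and output size are illustrative, not the actual network from this post.

# Placeholder MLP built with the Keras Sequential container (layer sizes are illustrative).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_dim=8),   # hidden layer 1
    tf.keras.layers.Dense(32, activation="relu"),                # hidden layer 2
    tf.keras.layers.Dense(1, activation="sigmoid"),              # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])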

Finally, pass the callback to the corresponding model fit call to collect model information.

history = model.fit(X_train, Y_train, validation_split=0.2,
                    epochs=100, batch_size=10,
                    callbacks=[tensorboard_callback])


Once the training is completed, start TensorBoard and point the browser to the designated port number.
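
For example, assuming the log directory created above, TensorBoard can be started with:

tensorboard --logdir logs/fit

and the browser pointed to http://localhost:6006 (the default port).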

Click on the Graph tab to see a detailed visualization of the model.

Click on the Distributions tab to check the layer output.

Click on the Histograms tab for a 3D visualization of the dense layers.

The Eagle has landed

To celebrate Apollo’s 50th anniversary, besides watching recordings of the live TV broadcast of Apollo 11, there is no better time to teach my notebook computer, whose power far surpasses the Apollo Guidance Computer, how to land on the moon. Suffice it to say that even the Apple ][ came after Apollo made history by landing mankind on the moon.


With OpenAI Gym, a simulated environment recreates the tense moments of the lunar module’s landing on the moon. Training with TensorFlow let the modern computer practice and learn how to land, and it eventually accomplished the mission: the Eagle has landed.
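
For reference, below is a minimal sketch of the simulation loop with a random policy under the classic Gym API; the TensorFlow training that actually learns to land is not shown here.

# Minimal LunarLander loop with a random policy (classic Gym API, Box2D required).
# The random action is only a stand-in for a trained TensorFlow policy.
import gym

env = gym.make("LunarLander-v2")
observation = env.reset()
total_reward = 0.0

for _ in range(1000):
    env.render()                          # draw the lander
    action = env.action_space.sample()    # random action instead of a learned one
    observation, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break

env.close()
print("episode reward:", total_reward)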

 

Experiencing Deep Learning with Jupyter and Anaconda

Most of the time my deep learning work is done at the command line with Python and TensorFlow. The clean and efficient syntax of the Python language and the package design of TensorFlow almost eliminate the need for a complex Integrated Development Environment (IDE). But after trying out the free Google Colab service, which provides a web-based Jupyter interface, I am going to set one up on my desktop, which sports an Nvidia RTX 2060 GPU.

Installation is easy, but be sure to run the Anaconda console as Administrator on the Windows platform. To run TensorFlow with GPU support:

conda create -n tensorflow_gpuenv tensorflow-gpu
conda activate tensorflow_gpuenv

Managing multiple packages is much easier with Anaconda, as it separates configurations into environments that can be customized. On my development machine, I can simply create a TensorFlow environment with GPU support and then install Jupyter to enjoy its graphical interface.
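
For example, with the GPU environment created above still active, Jupyter can be added to it with:

conda install jupyter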

Finally, to launch Jupyter:

jupyter notebook


To see how flexible Anaconda with Jupyter is on the same machine, a simple image pattern recognition program is run under Jupyter with and without GPU support for comparison.
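
A quick way to confirm from inside a notebook whether the GPU is visible (assuming a TensorFlow 2.x environment) is:

import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))   # lists the RTX 2060 when GPU support is active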
