Monthly Archives: September 2019

Road testing Azure Sphere Starter Kit with Visual Studio and Azure IoT Hub

The Avnet Azure Sphere Starter Kit is a development board featuring the Azure Sphere module with an MT3620 processor. It is designed for end-to-end IoT with security in mind and is tightly integrated with the Azure cloud service.

To try out developing IoT solutions using this kit, Visual Studio 2017 or 2019 is required, and the Azure Sphere SDK has to be added to Visual Studio. An Azure account is needed to create an Azure Active Directory user in the cloud. For details of these preparations, Microsoft provides step-by-step instructions.

Out of the box, the kit has to be connected to a PC with Internet access via a USB cable (one is included in the kit). The driver should install itself. Once connected, open the Azure Sphere Developer Command Prompt. Each kit has to be registered to the Azure cloud before it can function. The following commands outline the basic registration.

azsphere login

azsphere tenant create --name sphere01-tenant

azsphere device claim

azsphere device show-ota-status

azsphere device recover

azsphere device wifi show-status

azsphere device wifi add --ssid  --key
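For example, with a hypothetical SSID and Wi-Fi key (replace both with your own network credentials):

azsphere device wifi add --ssid MyHomeAP --key MyWiFiPassword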

After completing the basic registration and Wi-Fi setup, issue the command below to ready the Azure Sphere to work with Visual Studio in debug mode.

azsphere device prep-debug

At this point, open Visual Studio and pull a sample project from GitHub, for example the demo project at https://github.com/CloudConnectKits/Azure_Sphere_SK_ADC_RTApp. Compile and debug the project.

Observe the Output window to see the data fetched from the Azure Sphere. In Sphere’s terms, this is called side-loading a program.
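The same image package can also be side-loaded manually from the Azure Sphere Developer Command Prompt, without the debugger attached. The sketch below assumes the 2019-era azsphere CLI syntax and reuses the Visual Studio build output path shown in the link-feed command later in this post:

azsphere device sideload deploy --imagepackage "C:\Users\dennis\source\repos\AvnetAzureSphereStarterKitReferenceDesign1\AvnetStarterKitReferenceDesign\bin\ARM\Debug\AvnetStarterKitReferenceDesign.imagepackage"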

Once the debugger exits, the Sphere will no longer run the program. To deploy the program in a more permanent manner, use the following commands to perform an over-the-air (OTA) deployment.

azsphere feed list
--> [3369f0e1-dedf-49ec-a602-2aa98669fd61] 'Retail Azure Sphere OS'
azsphere device prep-field --newdevicegroupname  --newskuname 

azsphere device link-feed --dependentfeedid 3369f0e1-dedf-49ec-a602-2aa98669fd61 --imagepath "C:\Users\dennis\source\repos\AvnetAzureSphereStarterKitReferenceDesign1\AvnetStarterKitReferenceDesign\bin\ARM\Debug\AvnetStarterKitReferenceDesign.imagepackage" --newfeedname sphere01-test-avnet --force
Adding feed with ID 'e9243998-58b1-42c5-a7a3-7d76e55e5603' to device group with ID '193d1734-f1e3-4af1-a42e-e4e0a99f585c'.
Creating new image set with name 'ImageSet-Avnet-Starter-Kit-reference-V1.-2019.09.21-12.17.22+08:00' for images with these IDs: 4ae124ed-503a-4cf2-acf9-198c3decd55d.

(reboot)

azsphere device image list-installed

azsphere device prep-field --devicegroupid 193d1734-f1e3-4af1-a42e-e4e0a99f585c


OpenVINO on Raspberry Pi

OpenVINO is short for the Open Visual Inference and Neural network Optimization toolkit. There is a port to the Raspberry Pi platform running Raspbian OS.

To set it up on a Raspberry Pi, download the latest runtime archive from the OpenVINO download page and run the commands below.

# create the install folder and unpack the runtime archive
sudo mkdir -p /opt/intel/openvino
sudo tar -xf l_openvino_toolkit_runtime_raspbian_p_2019.2.242.tgz --strip 1 -C /opt/intel/openvino
sudo apt install cmake
# set up the environment for the current shell and for future logins
source /opt/intel/openvino/bin/setupvars.sh
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc
# add USB rules for the Intel Neural Compute Stick
sudo usermod -a -G users "$(whoami)"
sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh
# build the object detection sample
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples
make -j2 object_detection_sample_ssd

 

Once the installation is completed, download the pre-built model for face detection. The following test takes an input image and returns an output image with the detected face marked, running inference on the Intel Neural Compute Stick (the MYRIAD device).

wget --no-check-certificate https://download.01.org/opencv/2019/open_model_zoo/R1/models_bin/face-detection-adas-0001/FP16/face-detection-adas-0001.bin
wget --no-check-certificate https://download.01.org/opencv/2019/open_model_zoo/R1/models_bin/face-detection-adas-0001/FP16/face-detection-adas-0001.xml

./armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i barack-obama-12782369-1-402.jpg 


Visualizing an MLP Neural Network with TensorBoard

The Multi-Layer Perceptron (MLP) model is supported in Keras as a Sequential model container stacking Dense layers. For visualization of the training results, TensorBoard is handy, with only a few lines of code to add to the Python program.
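For reference, a minimal sketch of such a model is shown below; the layer sizes, input dimension, and the placeholder training data (X_train, Y_train) are assumptions for illustration only, not taken from the original project.

import datetime
import numpy as np
import tensorflow as tf

# placeholder training data: 8 input features, binary label (shapes assumed for illustration)
X_train = np.random.rand(768, 8)
Y_train = np.random.randint(0, 2, size=(768, 1))

# a simple MLP built as a Sequential stack of Dense layers
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(12, input_dim=8, activation='relu'),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

The few extra lines needed for TensorBoard are then just a log directory and a callback: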

log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

Finally, add the callback to the corresponding model-fitting command to collect the model information.

history = model.fit(X_train, Y_train, validation_split=0.2,
                    epochs=100, batch_size=10,
                    callbacks=[tensorboard_callback])


Once the training is completed, start TensorBoard and point the browser to the designated port number.
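Assuming the log directory used in the snippet above, TensorBoard can be started from the command line and viewed at http://localhost:6006:

tensorboard --logdir logs/fit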

Click on the Graph tab to see a detailed visualization of the model.

Click on the Distributions tab to check the layer output.

Click on the Histograms tab for a 3D visualization of the dense layers.