
AMB21/22/23 TensorFlow Lite - Hello World

This project demonstrates how to deploy a simple machine learning model trained with Google TensorFlow onto the AMB21/22/23 board

This project demonstrates how the TensorFlow Lite (TFL) Hello World example works with the Realtek AMB21/22/23 EVB.
The Hello World example trains a simple, lightweight neural network for microcontrollers using the Google TensorFlow training platform.

Please check the detailed information in the "INSTRUCTIONS" section below.

  • 1 × AmebaD [AMB21/AMB22/AMB23]

  • 1
    Background of Model Training

    The TensorFlow (TF) Lite workflow is quite simple:

    1. First, train a deep learning model with TensorFlow on Google Colab.
    2. Quantize the trained model and convert it into a C source file that TF Lite for Microcontrollers can use.
    3. The converted model is lightweight and small enough to be computed by TF Lite running on a microcontroller.
    4. Lastly, TF Lite loads this "model.c" file and performs the real-time on-board computation.

    This Hello World example uses a rather simple model.
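Step 2 above is usually done with `xxd -i model.tflite > model.c` in the official TF Lite Micro guides. As an illustration (an assumption, not this project's exact tooling), a minimal Python equivalent that renders raw model bytes as a C array looks like this:

```python
# Sketch of step 2: turning a quantized .tflite flatbuffer into a C source
# file that TF Lite for Microcontrollers can compile in. The variable name
# "g_model" is a common convention, assumed here for illustration.

def tflite_to_c_array(model_bytes: bytes, var_name: str = "g_model") -> str:
    """Render raw model bytes as a C unsigned-char array."""
    lines = [f"const unsigned char {var_name}[] = {{"]
    for i in range(0, len(model_bytes), 12):          # 12 bytes per line
        chunk = model_bytes[i:i + 12]
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")
    lines.append("};")
    lines.append(f"const int {var_name}_len = {len(model_bytes)};")
    return "\n".join(lines)

# Example with dummy bytes standing in for a real model file:
print(tflite_to_c_array(bytes([0x54, 0x46, 0x4c, 0x33])))
```

The generated file is what the on-board TF Lite interpreter loads at startup instead of reading a model from a filesystem.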

  • 2
    Hello World Model Explanation
    1. The model takes a value between 0 and 2π as input.
    2. The input is fed into a neural network model, which can be treated as a magic “black box”.
    3. The model then outputs a value between -1 and +1.

    In other words, this NN will be trained to model data generated by a sine function. The result is a model that can take a value x and predict its sine, y.
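The training data behind this can be sketched in a few lines. This is a simplified assumption based on the linked Colab notebook (the real notebook also adds noise to the labels, omitted here): random x values in [0, 2π] paired with sin(x) labels.

```python
import math
import random

def make_sine_dataset(n: int, seed: int = 42):
    """Generate n (x, sin(x)) training pairs with x drawn from [0, 2*pi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    ys = [math.sin(x) for x in xs]
    return xs, ys

xs, ys = make_sine_dataset(1000)
# Every input stays within 0..2*pi and every label within -1..+1,
# matching the model's input/output ranges described above.
assert all(0 <= x <= 2 * math.pi for x in xs)
assert all(-1 <= y <= 1 for y in ys)
```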

    I have put the link to a detailed explanation of how to train the NN model for this example in the description below, feel free to check it out and train your own model: https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/train/train_hello_world_model.ipynb#scrollTo=aCZBFzjClURz

  • 3
    Model Comparison

    Deep learning networks learn to model patterns in the underlying data.

    Both the number of hidden layers and the number of neurons per layer affect the prediction results.

    The 2 images above show 2 different NN structures: the model on the left uses 1 hidden layer with 8 neurons inside.

    The model on the right uses 2 hidden layers, each containing 16 neurons.

    A comparison of predicted and actual values is shown in the 2 images below: the actual values are shown as blue dots, while the TF predictions are shown as red curves.

    Clearly, the model with more hidden layers and more hidden neurons gives a better prediction here. This is not always true in general, but it fits our case, so we will use the model on the right for the Hello World example.

    PS: The model on the right is the better choice, but still not good enough: as you can see, the prediction between 4.6 and 6 is almost linear. This means the model could still be modified into a better version.
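Size matters on a microcontroller, so it is worth checking how many parameters each architecture carries. A small sketch, assuming fully connected (Dense) layers with one input and one output as described above:

```python
def dense_param_count(layer_sizes):
    """Total weights + biases for a fully connected network given
    [inputs, hidden..., outputs] layer sizes: each layer contributes
    inputs*units weights plus units biases."""
    return sum(layer_sizes[i] * layer_sizes[i + 1] + layer_sizes[i + 1]
               for i in range(len(layer_sizes) - 1))

left = dense_param_count([1, 8, 1])        # 1 hidden layer, 8 neurons
right = dense_param_count([1, 16, 16, 1])  # 2 hidden layers, 16 neurons each
print(left, right)  # → 25 321
```

Even the larger model has only a few hundred parameters, which is why it remains small enough for on-board inference after quantization.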

View all 8 instructions
