BBAI-64: building TensorFlow Lite custom model artifacts for libtidl_tfl_delegate

Hi there. Has anyone managed to build and run a custom tflite model on the BBAI-64?
In particular, I'm interested in using TI's libtidl_tfl_delegate.so. It seems to require an artifacts folder with specific files I don't know how to generate.
Does anyone know where I can find working scripts/examples of how to do this?


Hello @Illia_Pikin,
You can find those instructions here.
Refer to this page for the folder structure.

Could you please guide me a bit further?
I have my own .tflite model, and I just want to generate those artifacts for it. That's it.
Is there some tool/script I need to pass my model to?
Because right now the only thing I can see in the manuals is: "It can be done with Edge AI TIDL Tools, I just need to use those awesome tools, blah blah blah" :slight_smile:
But no clear example so far…

Anyway, I've managed to adjust/fix/hack the code of the edgeai-tidl-tools repo and build these artifacts for my model.
The inference time dropped by 40-50x with libtidl_tfl_delegate, which is great. What is not great is that the model itself stopped working :slight_smile:
My guess is that it somehow changed the input format, and I can't figure out what I need to pass to the input tensor to make it work…

Could you post a step-by-step solution for generating these artifacts, so that others facing this issue can get some help?

Great!!

Could you explain in detail where it stopped working, and after doing what?

NB: I'm messing with this bbai64-emmc-flasher-debian-11.8-xfce-edgeai-arm64-2023-10-07-10gb snapshot on my BeagleBone AI-64 from here

  • Installed the dependencies and ran the edgeai-tidl-tools setup:
sudo apt-get install libyaml-cpp-dev
sudo apt-get install cmake
export SOC=am68pa
source ./setup.sh
  • Added my model to models_configs in common_utils.py:
'my_model' : {
	'model_path' : os.path.join(models_base_path, 'saved_model.tflite'),
	'source' : {'model_url': '/path_to_my_model/saved_model.tflite', 'opt': True},
	'mean': [0, 0, 0],
	'scale' : [1, 1, 1],
	'num_images' : numImages,
	'num_classes': 4,
	'session_name' : 'tflitert',
	'model_type': 'classification'
},
  • Installed python requirements:
pip install -r requirements_pc.txt
  • Dealt with some of the usual python "we never heard of backward compatibility" stuff.
  • Fixed the models value in examples/osrt_python/tfl/tflrt_delegate.py with:
models = ['my_model']
  • And executed this python script with the -c (compile) flag (see the sketch after this list):
cd examples/osrt_python/tfl/
python3 tflrt_delegate.py -c
  • Got the my_model artifacts in the model-artifacts folder:
177_tidl_io_1.bin
177_tidl_net.bin
allowedNode.txt
param.yaml
saved_model.tflite
  • Added those files to my repo.
  • Modified my initModel function in main.go with paths to all those files.
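For anyone wondering what that -c (compile) run actually does: it loads the TIDL import delegate from tidl_tools and pushes a few calibration inferences through the model on the host, which is what writes the artifact files. A minimal sketch, assuming the edgeai-tidl-tools environment from setup.sh; the option names are taken from the repo's examples and may need adjusting:

import os
import tflite_runtime.interpreter as tflite

# Delegate options as used by the edgeai-tidl-tools examples (assumption).
delegate_options = {
    'tidl_tools_path': os.environ['TIDL_TOOLS_PATH'],   # exported by setup.sh
    'artifacts_folder': 'model-artifacts/my_model',
    'tensor_bits': 8,
}

# The import/compile delegate lives in tidl_tools and writes the artifacts
# while calibration inferences run on the host.
import_delegate = tflite.load_delegate(
    os.path.join(delegate_options['tidl_tools_path'], 'tidl_model_import_tflite.so'),
    delegate_options)
interpreter = tflite.Interpreter(model_path='saved_model.tflite',
                                 experimental_delegates=[import_delegate])
interpreter.allocate_tensors()
# Feeding a handful of representative calibration images through
# interpreter.invoke() here is what produces the *_tidl_io_*.bin / *_tidl_net.bin files.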

I can add a simple "load png image, make inference" example, if needed.
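Something along those lines (a minimal sketch, assuming tflite_runtime, numpy and Pillow are installed on the board; the paths and delegate options here are illustrative and follow the edgeai-tidl-tools examples):

import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

# Illustrative locations: the compiled model and its artifacts folder.
MODEL_PATH = 'model/saved_model.tflite'
ARTIFACTS_FOLDER = 'artifacts'

# Load the TIDL delegate and point it at the generated artifacts.
delegate = tflite.load_delegate('libtidl_tfl_delegate.so',
                                {'artifacts_folder': ARTIFACTS_FOLDER})
interpreter = tflite.Interpreter(model_path=MODEL_PATH,
                                 experimental_delegates=[delegate])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Load a PNG, resize to the 224x224 input and add the batch dimension (NHWC).
img = Image.open('test.png').convert('RGB').resize((224, 224))
x = np.expand_dims(np.asarray(img), axis=0).astype(inp['dtype'])

interpreter.set_tensor(inp['index'], x)
interpreter.invoke()
scores = interpreter.get_tensor(out['index'])[0]
print('predicted class:', int(np.argmax(scores)))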

Anyway… I've updated my model's param.yaml file with:

postprocess:
  data_layout: NHWC
preprocess:
  crop:
  - 224
  - 224
  data_layout: NHWC
  mean:
  - 0
  - 0
  - 0
  resize:
  - 224
  - 224
  scale:
  - 1
  - 1
  - 1
  reverse_channels: false
input_dataset:
  name: coins
session:
  model_folder: model
  model_path: saved_model.tflite
  session_name: tflitert
  artifacts_folder: artifacts
  input_mean: null
  input_optimization: true
  input_scale: null
target_device: pc
task_type: classification
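
Side note on mean/scale in the preprocess block: as far as I can tell the pipeline normalizes each channel as (pixel - mean) * scale before filling the input tensor, so with mean 0 and scale 1 the model receives raw 0..255 values. A tiny sketch of that arithmetic (illustrative, not the actual pipeline code):

import numpy as np

# One RGB pixel straight from a decoded image, values in 0..255.
pixel = np.array([128.0, 64.0, 255.0])
mean = np.array([0.0, 0.0, 0.0])
scale = np.array([1.0, 1.0, 1.0])

# With mean 0 / scale 1 the values pass through unchanged; a model trained
# on 0..1 inputs would need scale = 1/255 here instead.
print((pixel - mean) * scale)   # -> [128.  64. 255.]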

I have added the labels to classnames.py:

coins = {
    0: "no coin",
    1: "50",
    2: "10",
    3: "1",
}

And ran app_edgeai.py, but got the same result:
The model is not working… It doesn't distinguish anything, it just returns "no coin" with 99% confidence…


I am not smart at times, e.g. running an inference on the BBAI-64 in CPU only mode…

Outside of that idea, tensorflow is an interesting subject. Lots of datasets and tools to use.

On Debian, luckily, tensorflow is sort of easy to install with python3, but…

  1. I ran out of free system memory.
  2. Should I build it on another platform instead?
  3. Once built, say on WSL2 Debian Bookworm, would sftp be the way to move the build over so that the inference can take place on the BBAI-64?

I have been able to port tensorflow to the BBAI in the past and wanted to try again with build specifics…

I may try random things, like if-this-then-that…

Like when this means a specific thing, then that needs to transpire.

For example, when something noticeable is detected by a build, then my servo can adjust itself from the source given in advance. Things like this idea are what I am getting at currently.

Seth

P.S. Have you already tried tensorflow with the CPU or are you gearing your findings towards the GPU on board?

I've managed to train (on a PC) and run my image classification model on the BBAI-64 CPU with both Tensorflow and Tensorflow Lite.
But the minimum inference time I got for a 224x224 RGB input was about 80-90 ms, which is not so great.
So I've been trying to figure out how I can use TI's tflite delegate libtidl_tfl_delegate.so, with no luck so far…


Okay…

I remember a few pointers about armhf and porting it to the BBAI, i.e. not the BBAI-64.

Back then, installing everything from scratch was mandatory for building tensorflow-lite with bazel.

Luckily, there was a last-ditch effort at the time to make bazel usable on armhf. Anyway, that is no longer the case from what I understand.

On aarch64, it is a different story. pip3 is the way to install it supposedly. I could not figure out why in the world the errors kept coming. I listened to the documentation and followed precisely what was demanded to build from source.

Anyway, to those out there wondering:

sudo apt install python3 python3-pip python3-venv python3-dev

python3 -m venv SOME_DIR

source SOME_DIR/bin/activate

python3 -m pip install tensorflow

That got me a CPU-only, and very CPU-intensive, build to run keras and tensorflow source.

If you are trying with tensorflow-lite, maybe it will work the same for you, i.e. instead of building tensorflow, one could build tensorflow-lite.

Seth

P.S. And… the TI stuff. Yikes. I was racking my brain trying to figure it out with all the libs and ideas floating around. I almost found myself eating keyboards. I am sure there is a trick, some unforeseen adaptation of source needing to be commanded in a specific manner, but I just do not know it…

When the GPU is Open Source’d, one may find it easier to use the TI source a bit more.

Just for reference, I tried to build from source for onnx, tensorflow, tensorflow-lite, and a bunch of other TI recommended ideas of source needed for compilation.

Phew… I found myself cross-compiling this and that. I even tried directly on the BBAI-64 with 128GB of build space in the form of a microSD card.

@Illia_Pikin ,

There is actually some command to install it. Let me research it.

Seth

P.S. I do not know how far along you are currently…

As part of the Linux systemd startup, /opt/edge_ai_apps/init_script.sh is executed, which does the following.

This kills the weston compositor, which holds the display pipe. This step will make the wallpaper on the display disappear and come back.

The display pipe can now be used by the 'kmssink' GStreamer element while running the demo applications.

The script can also be used to set up proxies if connected behind a firewall.

That is taken right from the beagleboard.org docs.

It seems there have been updates and improvements to the documentation so far. I will try later to run some commands to get the edge_ai_apps in working order and report back.

https://docs.beagleboard.org/latest/boards/beaglebone/ai-64/edge_ai_apps/getting_started.html#software-setup

If you are testing right from the TI git repo, I am out of luck. I have not kept up with what is happening with their business or organization of repo builders.

Seth

And… it is working… finally!
With ≈3.2 ms inference time!
Anyway… I had messed up the model input, and I had messed up the training.
To keep things short and not mislead anyone: I have made this compile.py script, based on TI's tflrt_delegate.py, which does the trick.


Hello @Illia_Pikin

So, you didn't use the provided tflrt_delegate.py script from the tidl-tools. Instead, you created your own compile.py script and ran that.

Am I correct?

Hi. I just tried to clean up all the mess in TI's tflrt_delegate.py script and turn it into a useful utility.
So yes, I used my own version at the end of the day.

The easiest way, if you have already trained your model and exported it to .tflite, is:

  • To create a folder. Then, inside this folder:
wget https://raw.githubusercontent.com/Hypnotriod/bbai64/master/python/osrt_tfl/compile.py
wget https://raw.githubusercontent.com/Hypnotriod/bbai64/master/edgeai-tidl-tools-08_02_00_05.Dockerfile
docker build -t edgeai-tidl-tools-08_02_00_05 -f edgeai-tidl-tools-08_02_00_05.Dockerfile .

It will take a while to build this docker container…
  • Create a config.json file in this folder (next to your saved_model.tflite and the calibration images it references), for example:

{
    "model_name": "classification_model",
    "model_path" : "saved_model.tflite",
    "calibration_images": [
        "0.jpg",
        "1.jpg",
        "2.jpg",
        "3.jpg"
    ],
    "calibration_iterations": 15,
    "tensor_bits": 16,
    "artifacts_folder": "artifacts",
    "mean": [0, 0, 0],
    "scale" : [0.003921568627, 0.003921568627, 0.003921568627],
    "session_name" : "tflitert",
    "model_type": "classification"
}

mean and std_dev relationship:

range (0, 255):  mean = 0      std_dev = 1      scale = 1
range (-1, 1):   mean = 127.5  std_dev = 127.5  scale = 1 / 127.5 = 0.0078431373
range (0, 1):    mean = 0      std_dev = 255    scale = 1 / 255 = 0.0039215686
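
In other words, mean and std_dev describe how the pixels were normalized during training, and scale is just 1 / std_dev. A worked example, assuming the model was trained on images divided by 255 (the (0, 1) row above):

# Training-time preprocessing: x = pixel / 255.0  ->  input range (0, 1)
# Hence mean = 0, std_dev = 255, and for the compile config:
mean = [0, 0, 0]
scale = [1 / 255, 1 / 255, 1 / 255]   # = 0.0039215686..., matching config.json above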

  • Run the docker container with:
docker run -it -v "$(pwd):/workspace" edgeai-tidl-tools-08_02_00_05
  • Run the compilation inside the container:
cd /workspace
python3 compile.py -c config.json
exit

Alright!
Thanks
I will try it out.

Hello @Illia_Pikin,
Could you please explain the formula for calculating the mean and standard deviation for a given range of input data values?

For example, after scaling down and standardizing, my data values range from -2.04 to 3.16. How can I determine the corresponding mean and standard deviation for this range? Is manual calculation necessary for this?

Thank you.