BB-AI64 Compile and Benchmark Model help

Hi everyone,
I am reaching out because I am struggling with the TI tools.
I have already trained my model (edgeai-yolov5) and I tried to compile/benchmark it to deploy on the BB-AI64 board.
At first, I used kaelinl's tooling to compile my model. However, as I wanted to use the TI interface (or apps_cpp), I needed to use the TI compilation tools for it to be compatible.

Unfortunately, I can't get anywhere with the TI git tutorials (cf. edgeai-tidl-tools and edgeai-benchmark). Their Docker setup doesn't work (tried on a PC with WSL and on macOS). I also set up the environment, as recommended, to run their notebook, yet still nothing.

So I'd like to know if someone here has managed to get anywhere with the TI repos and could guide me.

Thanks to anyone passing by.

Please post a link to the exact tools you are trying to use, and myself and others will take a look at them.

Here is the link for the TI tools: edgeai-tidl-tools.
Here is the link for the benchmark: edgeai-benchmark.

Thank you.

First, possible problems might be related to this: make sure you have the exact processor that it will work with.
Second, they mention a TI evaluation board… that is not what you have. The device tree and other details are not the same as on the AI64, so this could be another issue.

I don't understand what you are trying to tell me. I am trying to compile, and I have to do that on an x86 Windows machine. You specify the processor for later, as it is important for the inference; the processor for the BBAI64 is the am68pa.
However, I can use neither the Docker container nor the setup on my PC, and the tooling is not meant to be run on the board.

The device tree for the evaluation board is different from what is used on the AI64. If you want to use that software, you will have to use the “EVM board” from TI, or modify that code to work on your AI64, which might not be too hard to do. I could be all wrong on this one, and hopefully someone who has worked with that package will be able to get you up and running.

Hi, I am doing the same thing as you. I followed the TIDL git guide and successfully compiled the model and got the artifacts, but unfortunately, when running on the AI-64, it outputs this error:

  Number of subgraphs:1 , 34 nodes delegated out of 34 nodes

APP: Init ... !!!
MEM: Init ... !!!
MEM: Initialized DMA HEAP (fd=5) !!!
MEM: Init ... Done !!!
IPC: Init ... !!!
IPC: Init ... Done !!!
REMOTE_SERVICE: Init ... !!!
REMOTE_SERVICE: Init ... Done !!!
 18010.627620 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
 18010.627673 s:  VX_ZONE_INIT:Enabled
 18010.627676 s:  VX_ZONE_ERROR:Enabled
 18010.627678 s:  VX_ZONE_WARNING:Enabled
 18010.628219 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
 18010.628332 s:  VX_ZONE_INIT:[tivxHostInitLocal:86] Initialization Done for HOST !!!
 18010.652669 s:  VX_ZONE_ERROR:[ownContextSendCmd:815] Command ack message returned failure cmd_status: -1
 18010.652678 s:  VX_ZONE_ERROR:[ownContextSendCmd:851] tivxEventWait() failed.
 18010.652684 s:  VX_ZONE_ERROR:[ownNodeKernelInit:538] Target kernel, TIVX_CMD_NODE_CREATE failed for node TIDLNode
 18010.652687 s:  VX_ZONE_ERROR:[ownNodeKernelInit:539] Please be sure the target callbacks have been registered for this core
 18010.652690 s:  VX_ZONE_ERROR:[ownNodeKernelInit:540] If the target callbacks have been registered, please ensure no errors are occurring within the create callback of this kernel
 18010.652696 s:  VX_ZONE_ERROR:[ownGraphNodeKernelInit:583] kernel init for node 0, kernel com.ti.tidl ... failed !!!
 18010.652710 s:  VX_ZONE_ERROR:[vxVerifyGraph:2055] Node kernel init failed
 18010.652713 s:  VX_ZONE_ERROR:[vxVerifyGraph:2109] Graph verify failed
TIDL_RT_OVX: ERROR: Verifying TIDL graph ... Failed !!!
TIDL_RT_OVX: ERROR: Verify OpenVX graph failed
 18010.716686 s:  VX_ZONE_ERROR:[ownContextSendCmd:815] Command ack message returned failure cmd_status: -1
 18010.716694 s:  VX_ZONE_ERROR:[ownContextSendCmd:851] tivxEventWait() failed.

The TIDL tag I use is 08_06_00_03, but I am not sure of the version of the BBAI-64 Processor SDK and RTOS.

What did you use to compile the model? Because the Docker container doesn’t work for me.
And I’d like to know if your PC is x86 or x64.

Otherwise, for your issue I have no idea why it doesn't work; maybe try the latest version, 08_06_00_05, or 08_05_00_11.

  • Make sure you are running your app as root (sudo) – TI maps /dev/mem so you’ll need permissions for it

  • Versions 8.05 and 8.06 of TIDL won't work on the BeagleBoard OS image. Currently the BeagleBoard OS images only support version 8.02.

  • If TIDL is failing, check the logs:

    sudo /opt/vision_apps/vx_app_arm_remote_log.out
  • I haven’t personally even attempted to get TI’s official scripts to work (they are inscrutable), so I can’t provide proper support.
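The first two checks in the list above (root privileges and usable artifacts) can be automated with a small preflight sketch. This is illustrative only: the artifacts directory name is an assumption, not something from TI's docs.

```python
import os

def tidl_preflight(artifacts_dir="model-artifacts"):
    """Collect likely problems before launching a TIDL-accelerated app.
    Sketch only; the artifacts_dir default is a hypothetical path."""
    problems = []
    # TI maps /dev/mem, so the app normally needs root privileges (sudo).
    if os.geteuid() != 0:
        problems.append("not running as root (try sudo)")
    # The compiled artifacts directory must exist and be non-empty.
    if not os.path.isdir(artifacts_dir) or not os.listdir(artifacts_dir):
        problems.append(f"missing or empty artifacts dir: {artifacts_dir}")
    return problems

if __name__ == "__main__":
    for p in tidl_preflight():
        print("WARNING:", p)
```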

I use an x86 PC, and Docker doesn't work for me either. I am still trying to figure out this problem, and I have found that it is a compilation problem with the binary files. I ran a test using the same model from the model_zoo, which does achieve hardware acceleration on the BBAI-64. When I compiled it myself with tidl-tools, the output files 163_tidl_io_1.bin and 163_tidl_net.bin are different from the originals from the model_zoo; even the number of model layers in the file is wrong: with 08_06_00_03 it detected 65, but the correct layer count is 64. That is why it can't work. I will try to use the 08_06_00_03 to check whether it is OK or not.
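For anyone wanting to reproduce this comparison, diffing self-compiled artifacts against the model_zoo reference ones can be sketched with file hashes. The file names are the ones from my post; this is a rough sketch, not a TI tool.

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Hash one compiled artifact so two compilations can be compared."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def differing_artifacts(mine_dir, reference_dir,
                        names=("163_tidl_io_1.bin", "163_tidl_net.bin")):
    """Return the artifact file names whose contents differ between the
    self-compiled directory and the model_zoo reference directory."""
    return [n for n in names
            if sha256_of(Path(mine_dir) / n) != sha256_of(Path(reference_dir) / n)]
```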


What do you mean by 8.02? Is it the 08_06_00_02 tag?

I think he is talking about the 08_02_00_XX versions.

Yes, I have tried it. Great, it works on 08_02_00_05. Thanks @kaelinl!

Would you mind giving me the name of your x86 PC?
I can't find one where I am, so I might need to buy one to compile.

My PC is an HP EliteBook 850 (i5) with Ubuntu 22.04.
I have successfully compiled the model from the model_zoo and run it on the BBAI-64 with hardware acceleration, but I want to compile my speech wake-word model. Unfortunately, it still gives an error when using the delegate.


Have you successfully compiled your models with edgeai-tidl-tools? If yes, could you share your accuracy results after using hardware acceleration? For me, with both my image classification and speech recognition models, the accuracy is super bad; it looks like the model is doing nothing.


Truthfully, I still can't compile anything. I can't set up the environment right. Nothing compiles, and Python libraries are reported as “missing” even though they are installed. I've been following TI's instructions, but nothing seems to work for me.
If you can tell me how you did it, I might find a solution.

Otherwise, maybe this can help you: I tried kaelinl's compilation file and I've also found that the accuracy drops. It wasn't high from the start, but after compilation it got worse. So there is a chance the compilation has something to do with losing accuracy.
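One quick way to check whether the compilation step itself is what degrades the model is to run the same input through the original float model and the compiled one, then compare the raw outputs. A minimal sketch (the metric choice is mine, not from TI):

```python
import numpy as np

def output_drift(ref, accel):
    """Cosine similarity between the original (CPU/float) model's output
    and the compiled/accelerated model's output on the same input.
    Values near 1.0 suggest compilation preserved behaviour; values near
    0 suggest the accuracy loss comes from compilation/quantization."""
    ref = np.ravel(np.asarray(ref, dtype=np.float32))
    accel = np.ravel(np.asarray(accel, dtype=np.float32))
    denom = float(np.linalg.norm(ref)) * float(np.linalg.norm(accel)) + 1e-12
    return float(np.dot(ref, accel) / denom)
```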

Okay, here are the steps you need to follow:

  1. Start by cloning the repository from this link: GitHub - TexasInstruments/edgeai-tidl-tools.
  2. Once you have cloned the repository, use conda to create a Python 3.6 environment.
  3. Inside the cloned folder, create a virtual environment (venv) based on the conda environment you just created.
  4. Upgrade pip to ensure you have the latest version.
  5. Finally, you can run the script.
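The steps above can be sketched as a small script. The repo URL is the official one; the environment name and the exact `conda run` invocation are my assumptions, so by default it only prints the commands as a dry run.

```python
import subprocess

def setup_commands(env_name="tidl"):
    """Commands matching the steps above. The env name "tidl" and the
    .venv location are arbitrary choices, not from TI's instructions."""
    return [
        ["git", "clone",
         "https://github.com/TexasInstruments/edgeai-tidl-tools.git"],
        ["conda", "create", "-n", env_name, "python=3.6", "-y"],
        # venv inside the cloned folder, based on the conda env's python:
        ["conda", "run", "-n", env_name,
         "python", "-m", "venv", "edgeai-tidl-tools/.venv"],
        ["edgeai-tidl-tools/.venv/bin/pip", "install", "--upgrade", "pip"],
    ]

def run_setup(execute=False):
    # Dry run by default; pass execute=True to actually run the commands
    # (requires git and conda on PATH).
    for cmd in setup_commands():
        print(" ".join(cmd))
        if execute:
            subprocess.run(cmd, check=True)
```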

Best regards.


I followed your instructions and confirmed that the Python example, ./scripts/, runs well. Thank you.

My goal is to compile the depth_estimation model from the edgeai-modelzoo to use it on the BB-AI64.

However, I’m not sure what to use from the edgeai-tidl-tools to compile this depth_estimation model.

If you have tried something similar, I would appreciate it if you could share your experience!

Hey brother, can you please tell me how to compile my custom ti_lite ONNX model to produce artifacts?
