tivxAlgiVisionCreate returned NULL

I compiled my custom AI model using edgeai-tidl-tools (do I also need edgeai-benchmark, or is getting the artifact folder this way enough?) and obtained the artifact files. I installed edgeai-tidl-tools 08-02 via Docker, and both compilation and inference work inside the container on my laptop. But when I try to run my custom model with my python3 inference code on the BeagleBone AI-64 I'm currently using (image bbai64-debian-11.5-xfce-edgeai-arm64-2022-11-01-10gb), I get:

[C7x_1 ] 19998.745662 s: VX_ZONE_ERROR:[tivxKernelTIDLCreate:659] tivxAlgiVisionCreate returned NULL
[C7x_1 ] 19999.163745 s: VX_ZONE_ERROR:[tivxAlgiVisionCreate:316] Calling ialg.algInit failed with status = -1111
[C7x_1 ] 19999.903443 s: VX_ZONE_ERROR:[tivxAlgiVisionCreate:316] Calling ialg.algInit failed with status = -1111
[C7x_1 ] 19999.903617 s: VX_ZONE_ERROR:[tivxKernelTIDLCreate:659] tivxAlgiVisionCreate returned NULL
[C7x_1 ] 20000.076527 s: VX_ZONE_ERROR:[tivxAlgiVisionCreate:316] Calling ialg.algInit failed with status = -1111
[C7x_1 ] 20000.076748 s: VX_ZONE_ERROR:[tivxKernelTIDLCreate:659] tivxAlgiVisionCreate returned NULL
on the BeagleBoard. Can someone help me?
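For context, my on-board inference code roughly follows the edgeai-tidl-tools ONNX runtime examples. This is only a sketch: the artifact path, option keys, and provider names are assumptions based on the 08_02 examples, not my exact script.

```python
# Sketch of on-board ONNX Runtime + TIDL inference, modeled on the
# edgeai-tidl-tools 08_02 examples. Paths and option keys are assumptions.

def tidl_provider_options(artifacts_dir):
    """Options handed to the TIDL execution provider, which loads the
    compiled artifacts produced on the x86 host."""
    return {
        "artifacts_folder": artifacts_dir,  # folder created at compile time
        "debug_level": 0,
    }

def run_on_board(model_path, artifacts_dir, input_tensor):
    # onnxruntime is imported lazily so the sketch can be read on
    # machines without the TIDL-enabled wheel installed.
    import onnxruntime as rt
    sess = rt.InferenceSession(
        model_path,
        providers=["TIDLExecutionProvider", "CPUExecutionProvider"],
        provider_options=[tidl_provider_options(artifacts_dir), {}],
    )
    input_name = sess.get_inputs()[0].name
    return sess.run(None, {input_name: input_tensor})
```

The errors above appear during session creation, before any input is fed.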

Hi. My radical advice would be to try the latest ARM64 Debian 11.x (Bullseye) 2023-10-07 TI EDGEAI BBAI-64 image.
It sounds like TI's delegate creation procedure crashed. (The delegate is TI's proprietary component that is supposed to accelerate inference.)
If you are doing custom image classification with TensorFlow Lite, I can confirm it is achievable on the latest OS image.
Custom object detection with TI's TFLite delegate doesn't work so well: precision is low when the delegate is used, and as far as I can see TI is not going to fix that. So you should consider the ONNX runtime.


Hello, thanks for the response. I gave the 11.5 and the latest Edge AI OS images a shot, but neither image can run the model. Specifically, I am trying to run a head pose estimation model on the BeagleBone AI-64. Is there any restriction on which models can run on the board? My model is a converted ONNX model, and it gives this error. Maybe there are compilation errors causing this, but I don't fully understand how I managed to get a result from the container. I sincerely await your advice; I'm a newbie with this board. I forgot to say that I boot from SD card when I use the 11.5 Debian Edge AI image; could the error be related to that?

Unfortunately I can't help you with the ONNX model at all because of my lack of experience with it…
The problem here is that TI's AI ecosystem sucks. It is an overcomplicated, badly written mess: more like a "proof of concept" for showcasing than a reliable framework to build your product on.
I hope STMicroelectronics will have such powerful devices in the future.


Thanks for the response. I will give it a shot, and if I succeed I will explain in detail.

I figured out that during the model conversion step I hadn't noticed that the same error appeared in the log file, and that inference in the Docker container was just giving me dummy output. When I added the unsupported layers to the compile option deny_list, the error was solved.
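For anyone hitting the same thing, the fix looks roughly like this in the compile options passed at model-conversion time on the x86 host. The paths are placeholders, and the deny_list entries below are examples only; list whatever layer types TIDL reports as unsupported in your own compile log.

```python
# Compile-time options for TIDL model compilation (x86 host side).
# Paths and deny_list entries are placeholders for illustration.
compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",   # assumption: your tools path
    "artifacts_folder": "/path/to/artifacts",   # artifacts copied to the board
    "deny_list": "Resize, Slice",               # placeholder layer names
}
```

With the unsupported layers denied, they fall back to the CPU instead of crashing the C7x offload, so the artifacts load on the board.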

Do you have experience with TFLite SSD models on the AI-64?
My problem is that I can't make the compiled model work properly with the TFLite delegate (without it everything works fine). It looks like only the first class and its boxes work properly; the rest are broken, and the boxes are all over the place. I've tried different TIDL compilation options, but the result is always the same…
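For reference, I load the model with and without TI's delegate roughly like this. The delegate library name and option key are assumptions based on the edgeai-tidl-tools TFLite examples, so treat this as a sketch rather than exact code.

```python
def make_interpreter(model_path, artifacts_dir=None):
    # tflite_runtime is imported lazily; the TIDL delegate is only
    # available on the target image, not on a development host.
    import tflite_runtime.interpreter as tflite

    delegates = []
    if artifacts_dir is not None:
        # Assumption: delegate .so name and option key follow the
        # edgeai-tidl-tools TFLite examples.
        delegates = [tflite.load_delegate(
            "libtidl_tfl_delegate.so",
            {"artifacts_folder": artifacts_dir})]
    return tflite.Interpreter(model_path=model_path,
                              experimental_delegates=delegates)
```

Calling it with `artifacts_dir=None` gives the CPU-only interpreter where everything works; passing the artifacts folder enables the delegate, which is where the boxes break for me.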

I haven't tried TFLite on the BBAI-64. I work on image 11.5 and use the edgeai-tidl-tools 08_02_xx_xx branch; it works fine with the ONNX runtime. Maybe increasing the number of calibration images and iterations during quantization can fix that, but I am not sure. I will try TFLite too, and if I can run the model on the BBAI-64 I will try to explain how I did it.
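If it helps, the quantization knobs I mean are the calibration options in the compile step. The keys below are taken from the edgeai-tidl-tools examples; the values are only a guess and may need tuning for your model.

```python
# Calibration-related compile options (edgeai-tidl-tools style keys).
# Values are guesses for illustration; tune them for your own model.
calibration_options = {
    "tensor_bits": 8,                               # 8-bit quantization
    "advanced_options:calibration_frames": 50,      # more calibration images
    "advanced_options:calibration_iterations": 50,  # more calibration passes
}
```

More frames and iterations generally give the quantizer better activation statistics, at the cost of a longer compile.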

I spent some time adding support for TensorFlow v1, and yeah, it seems to work OK on 08_02_xx_xx.
At least with ssd_mobilenet_v2_coco_2018_03_29.
Here are my object detection training scripts so far.