TIDL/EdgeAI benchmarks on the AI-64

Initial signs of life running inference with TIDL. This software is a complete mess – if you look at it funny, it prints uninitialized memory contents to the terminal or segfaults when you return from main. It also segfaults any time it fails to open a file, dislikes the format of a file, or just feels like it for the entertainment value. Their training code also crashes when run under Docker, for reasons I haven't identified.

Repo with minimal compilation+inference code: [WasabiFan/tidl-yolov5-custom-model-demo](https://github.com/WasabiFan/tidl-yolov5-custom-model-demo) – a walkthrough and personal notes on running YOLOv5 models with TIDL.

I haven't actually verified that the quantized model produces correct outputs, but at least it runs and returns values. Inference on a ResNet-50 quantized to 8-bit precision takes around 7 ms, as measured from a Python caller.
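For reference, a minimal sketch of how latency can be measured from the Python side. `run_inference` here is a hypothetical stand-in for the actual TIDL-backed inference call (not TI's API), and the warmup/iteration counts are arbitrary choices:

```python
import statistics
import time

def run_inference(input_data):
    # Hypothetical stand-in for the real TIDL-delegated inference call.
    return [x * 2 for x in input_data]

def time_inference(fn, input_data, warmup=3, iters=20):
    """Return the median wall-clock latency of fn(input_data), in milliseconds."""
    # Warm up first: initial runs often include one-time setup cost.
    for _ in range(warmup):
        fn(input_data)
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn(input_data)
        samples.append((time.perf_counter() - start) * 1000.0)
    # Median is less sensitive to scheduling hiccups than the mean.
    return statistics.median(samples)

median_ms = time_inference(run_inference, list(range(1000)))
print(f"median latency: {median_ms:.3f} ms")
```

Using the median over a handful of iterations, after a warmup, avoids counting one-time model-load and cache-warming costs in the reported number.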


Thanks for the tip. On my first install, the rootfs was still small even after rebooting a few times, but perhaps that was an anomaly or transient error. The second time I flashed a card, I didn't wait before doing the above. I'll keep this in mind the next time I set up a card.
