Beaglebone AI 64 showing 2GB of RAM in htop

“Killer” sounds like it might be the OOM killer detecting an out-of-memory condition. Regardless, as Robert suggested, I wouldn’t expect the coprocessors to be functional without being given firmware or memory (which is what the device tree does). So even if this particular error is something else I wouldn’t expect inference to get very far.
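On the original “2GB in htop” observation: a quick, htop-independent way to check how much RAM the kernel actually sees (memory carved out by the device tree for the coprocessors never reaches Linux) is to read `/proc/meminfo`. A minimal sketch, nothing board-specific:

```python
def mem_total_kib(meminfo_text: str) -> int:
    """Return the MemTotal value (in kiB) from /proc/meminfo text."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            # Line looks like: "MemTotal:        2097152 kB"
            return int(line.split()[1])
    raise ValueError("MemTotal not found in /proc/meminfo")

if __name__ == "__main__":
    with open("/proc/meminfo") as f:
        kib = mem_total_kib(f.read())
    print(f"Kernel-visible RAM: {kib / (1024 * 1024):.2f} GiB")
```

If the reported total is well below the board’s physical RAM, that’s consistent with reserved-memory carveouts (firmware regions, shared-memory pools) rather than anything htop is doing wrong.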

Have you already “compiled” the model for TIDL, or is it just running un-accelerated inference? I’ve been working with the PyTorch/ONNX APIs, so I don’t know what it looks like to compile a tflite model with their tools. I think you’ve already found the thread where I ran their benchmarks and then compiled a custom ONNX model (TIDL/EdgeAI benchmarks on the AI-64 - #13 by kaelinl), but it might be at least mildly helpful.

Fully agreed on the code quality. It’s a bit striking that a company could willingly go to market with a software ecosystem in this state… There is absolutely no documentation, and all they have for examples are poorly written omni-tools. And since their forum support strategy is to guide people into hiding their bugs via workarounds, none of it gets fixed. I’m continuing to play with my BBAI-64, but I don’t think I’ll be able to recommend it, through no fault of the hardware. Nonetheless, I’m still trying to get a custom model to run and demonstrate the performance :laughing:

Re: “j7”, it’s referring to “j721e”, which is some flavor of the same TDA4VM SoC. I haven’t figured out exactly what the name means, but I’ve yet to find a case where they weren’t equivalent, so I guess that’s good enough. In other words, “j7” is what you want. The alternative “am6x” series is a different SoC line with its own (older) hardware.