YOLOv5 object detection on BB AI-64: end-to-end walkthrough

Hi @kaelinl,

I was able to train a YOLO model, export it with your code, and run inference from the ONNX model as you described, but I have a question I couldn't find an answer to anywhere.
Is there a way to have the inference apply the confidence threshold itself, or does it need to be done manually? In TensorFlow I always pass the confidence to the inference call, which actually filters the results, but with ONNX it simply returns the detected objects plus a lot of -1 entries: the predictions array has a fixed size of 300 elements, of which (in my case) around 295 are simply useless.

Best regards.

Answering my own question.

I was able to filter the detections at the numpy level:

    detection = detection[detection[:, 4] >= conf_threshold]

Yeah, this is TI’s chosen output format: they produce a fixed-length tensor with enough entries for every anchor, but use a negative confidence to mark anchors that didn’t meet the threshold. It’s strange, but it can be worked around, and what you did is reasonable. Watch out for poor performance, though; I’d recommend verifying that numpy isn’t spending too much time on the lookup.
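
For reference, here is a minimal sketch of a fuller decode step under that assumption. It assumes (hypothetically) that each row of the 300-entry output is laid out as [x1, y1, x2, y2, confidence, class_id] and that padded rows carry a negative confidence; check the actual column order of your export before relying on it.

    import numpy as np

    def decode_ti_detections(detections, conf_threshold=0.25):
        # Drop padded rows (negative/low confidence), then split what remains.
        # The column layout [x1, y1, x2, y2, conf, class_id] is an assumption.
        valid = detections[detections[:, 4] >= conf_threshold]
        boxes = valid[:, :4]
        scores = valid[:, 4]
        class_ids = valid[:, 5].astype(int)
        return boxes, scores, class_ids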

From what I recall, the TI postprocessing seems to put all the non-empty anchors at the top of the list, so if you want to depend on that behavior you could iterate to find the first empty (padded) entry and then slice up to that point, as sketched below.
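
A rough sketch of that slice-at-first-padded-entry approach, again assuming padded rows are marked by a negative confidence in column 4:

    import numpy as np

    def trim_padding(detections):
        # Assumes valid rows are packed at the top and padding uses a
        # negative confidence in column 4.
        padded = detections[:, 4] < 0
        if not padded.any():
            return detections               # nothing to trim
        first_pad = int(np.argmax(padded))  # index of the first padded row
        return detections[:first_pad]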

How can I edit the file (run_inference_video.py) to display the stream over HDMI in real time instead of writing the video to a file?
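
I haven’t checked what run_inference_video.py does internally, but if it saves frames with OpenCV’s VideoWriter, a common change is to display each annotated frame in a window instead, which appears on the HDMI-attached display when you’re running a desktop session. A rough, hypothetical sketch:

    import cv2

    cap = cv2.VideoCapture(0)                    # camera index or a video file
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # ... run inference and draw boxes on `frame` here ...
        cv2.imshow("detections", frame)          # instead of writer.write(frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):    # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()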