I’m stuck compiling a torch/ONNX model for the BBAI64.
I’m not using a model supported by TI or edge-AI. When I run run_custom_pc.sh, outputs such as model, param, and result are generated, but not the artifacts: only the folder is created, and none of the files that should be under that directory (.bin etc…) show up. When I check the log, only success messages appear, so I can’t figure out what the problem is.
How can I generate the artifact files?
Maybe custom models can’t use TIDL compilation through the ONNX Runtime session provider, and that could be the root of my problem, because I changed the provider to CPUExecutionProvider since it was the only option.
You have to use edge-tidl-tools to generate the artifacts folder. For an ONNX model, for example, they provide an example at ./examples/osrt_python/ort/onnxrt_ep.py; the compile flag is -c.
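To make the difference concrete, here is a rough sketch of what that compile step does in Python, loosely based on the onnxrt_ep.py example. The provider names assume TI’s onnxruntime build that ships with the tools, and the paths and option keys below are placeholders you would adapt to your own setup:

```python
import os
import numpy as np
import onnxruntime as ort

model_path = "model.onnx"          # placeholder: your custom ONNX model
artifacts_dir = "model-artifacts"  # the compiled .bin files should end up here
os.makedirs(artifacts_dir, exist_ok=True)

# Option keys as used in the edgeai-tidl-tools examples (assumed; adapt as needed).
compile_options = {
    "tidl_tools_path": os.environ.get("TIDL_TOOLS_PATH", ""),
    "artifacts_folder": artifacts_dir,
}

# The compilation provider is what actually writes the artifacts.
# With only CPUExecutionProvider, the model runs but nothing gets compiled,
# which matches the empty artifacts folder described above.
sess = ort.InferenceSession(
    model_path,
    providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
    provider_options=[compile_options, {}],
)

# Push at least one (ideally several) calibration inputs through the session
# so the compile/calibration step runs. A dummy float input is used here.
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
sess.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
```

In the repo itself you would normally just run the provided script with the -c flag from examples/osrt_python/ort after setting up the environment, rather than writing this by hand.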
Have you succeeded in compiling your models with edge-tidl-tools? If yes, could you share your accuracy results after using the hardware acceleration? For me, both my image classification and my speech recognition models have very poor accuracy after acceleration; it looks like they aren’t doing anything at all.
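A quick way to check whether the drop comes from the accelerated path itself is to run the same input through a plain CPU session and through the TIDL session and compare the raw outputs. The sketch below assumes TI’s onnxruntime build; the model path, input shape, and artifacts folder name are placeholders:

```python
import numpy as np
import onnxruntime as ort

model_path = "model.onnx"                               # placeholder
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input shape

# Reference: pure CPU, float32, no TIDL involved.
cpu_sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

# Accelerated: uses the artifacts produced during compilation.
tidl_sess = ort.InferenceSession(
    model_path,
    providers=["TIDLExecutionProvider", "CPUExecutionProvider"],
    provider_options=[{"artifacts_folder": "model-artifacts"}, {}],
)

name = cpu_sess.get_inputs()[0].name
ref = cpu_sess.run(None, {name: x})[0]
out = tidl_sess.run(None, {name: x})[0]

print("max abs diff:", float(np.abs(ref - out).max()))
print("cosine sim  :", float(np.dot(ref.ravel(), out.ravel()) /
                             (np.linalg.norm(ref) * np.linalg.norm(out) + 1e-12)))
```

If the gap is large, it may be worth revisiting the compile options (for example more calibration frames, or 16-bit instead of 8-bit tensors) before blaming the model itself.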
Sorry, I’ve only just seen your reply. I failed with a custom model, which isn’t currently supported, so no; I’m now training a supported model instead. It will take about 5 more days, so I can share the results then.
I ran a segmentation model. The result log is as follows: SUCCESS:20230629-065903: benchmark results - {'infer_path': 'ss-8720', 'accuracy_mean_iou%': 82.431027, 'num_subgraphs': 1, 'infer_time_core_ms': 4238.464705, 'infer_time_subgraph_ms': 4238.379096, 'ddr_transfer_mb': 0.0, 'perfsim_time_ms': 0.0, 'perfsim_ddr_transfer_mb': 0.0, 'perfsim_gmacs': 0.0}. Inference results on pictures and video were also checked and were satisfactory.