After I run the above code, I get model artifacts in my folder.
Now, how do I do model inference on the BB AI-64 using these model artifacts, and how do I utilize the C7x DSPs and the MMA accelerator present on the BBAI-64 for inference?
I’m not sure that -2.04 to 3.16 are the right values for the input data. I mean, each pixel color is 1 byte with a 0 to 255 range, while the model input tensor is usually float32, either 0 to 1 or -1 to +1, for each pixel color.
The best approach is to normalize/convert (somehow) the model input tensor to uint8 with a 0 to 255 range and retrain the model, so you don’t need to spend processor time converting the raw input into float32.
To run the accelerated inference you need to add the delegate to TFLite Interpreter Options.
Here is an example of how I instantiated the TfLiteDelegate in my Go wrapper. If you are using the C TFLite library API, you can try something like this:
TfLiteDelegate* TiTflDelegateCreate(const char* delegate_so, const char* artifacts_folder)
{
    // Build external-delegate options pointing at the TIDL delegate shared library
    TfLiteExternalDelegateOptions options = TfLiteExternalDelegateOptionsDefault(delegate_so);
    // "import: no" = the model is already compiled; run inference only
    TfLiteExternalDelegateOptionsInsert(&options, "tidl_tools_path", "null");
    TfLiteExternalDelegateOptionsInsert(&options, "import", "no");
    // Folder containing the TIDL artifacts generated at compile time
    TfLiteExternalDelegateOptionsInsert(&options, "artifacts_folder", artifacts_folder);
    return TfLiteExternalDelegateCreate(&options);
}
...
TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
TfLiteDelegate* delegate = TiTflDelegateCreate("/usr/lib/libtidl_tfl_delegate.so", "artifacts");
TfLiteInterpreterOptionsAddDelegate(options, delegate);
Hello @Illia_Pikin, I am using edgeai-tidl-tools-09_01_06_00 and following your same compile.py along with config.json, but I'm facing an illegal instruction error and the model artifacts are not getting generated. Could you please let me know if your provided solution will work with the 09_01_06_00 version?
Hi @Vimal_Ramesh. Honestly, I've only tried this with v08_02_00_05. Post your error here; maybe someone can spot something.