BBAI-64 and the Barrel Jack!

Hello,

I was wondering something. Should plugging in the barrel jack automatically power on the BBAI-64?

Seth

P.S. The reason I am asking is b/c I noticed my 5V @ 3A power adapter does not power on the board, but USB-C to USB 3.0 works just fine. Anyway, I will keep testing and waiting to see if it just takes a bit.

Also,

On the BBAI-64, when I am connected to a remote server, I get an error after a couple of minutes stating that my remote server has been disconnected remotely.

Seth

P.S. Is there anyone having an issue w/ remotely connecting via Ethernet? I thought it may have been some “stale” server settings. I installed xcb and was able to see the Camera! But then…

The “remote server disconnected remotely” error happened again. Oh well. If anyone knows the cause of my woes, please do reply.

[screenshot: Server_settings]

SCRATCH THAT

The power cord and wall adapter now turn on the board!

Hello,

Okay! The power adapter definitely needs to be plugged in for USB cameras to work well; it comes down to power consumption. Anyway, something funny about this program…

If you come in on the side of the frame, it calls me a dog! A DOG! Ha. Anyway, here is the photo:

[photo: dog]

And…it thinks it is 66% right. Ha.

Seth

P.S. Enjoy! CSI-2 next, i.e. I can feel it!

It does on mine. The wall power supplies I had didn’t seem to provide quite enough current despite being rated at 3A (the board seemed to brown out under load occasionally), but the board did turn on when power was applied.


What app are you running to do object detection – is it based on TIDL? I’ve been trying to get custom model compilation working and would love to hear what you’ve gotten going.

I got their official benchmarks to run (TIDL/EdgeAI benchmarks on the AI-64), but the custom model compilation has been even more finicky.

And I’d love to see CSI working!


Hello,

@kaelinl , no. It is not. I am actually sorry to say it. I have been trying w/ an updated version of edgeai-tidl-tools but I cannot get it to work just yet.

DepthAI w/ a cam. is the lib. I am currently using. Me too about wanting the CSI-2 cams. to run well.

I have everything installed on my BBAI-64 for tidl-tools, but I cannot figure out what to do or when to do it. So, I tried an older-model cam. based around the DepthAI libs.

Seth

P.S. I will get back to testing the tidl-tools soon. The benchmarks work, yes, but after installing I have six or seven extra libs. available on my BBAI-64 that all need tending. I kept trying to install via cmake under a /build/ dir., but the build keeps stopping partway through specific components, e.g. opencv, armnn, etc.

Hello,

You can see I am having trouble w/ the tidl-tools:

debian@BeagleBone:~/TENsor$ ./dataOne.py
/home/debian/.local/lib/python3.9/site-packages/tensorflow_io/python/ops/__init__.py:98: UserWarning: unable to load libtensorflow_io_plugins.so: unable to open file: libtensorflow_io_plugins.so, from paths: ['/home/debian/.local/lib/python3.9/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so']
caused by: ["[Errno 2] The file to load file system plugin from does not exist.: '/home/debian/.local/lib/python3.9/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so'"]
  warnings.warn(f"unable to load libtensorflow_io_plugins.so: {e}")
/home/debian/.local/lib/python3.9/site-packages/tensorflow_io/python/ops/__init__.py:104: UserWarning: file system plugins are not loaded: unable to open file: libtensorflow_io.so, from paths: ['/home/debian/.local/lib/python3.9/site-packages/tensorflow_io/python/ops/libtensorflow_io.so']
caused by: ['/home/debian/.local/lib/python3.9/site-packages/tensorflow_io/python/ops/libtensorflow_io.so: cannot open shared object file: No such file or directory']
  warnings.warn(f"file system plugins are not loaded: {e}")
TensorFlow version: 2.10.0-rc2
2022-10-10 22:06:03.415256: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 188160000 exceeds 10% of free system memory.
Epoch 1/5
1875/1875 [==============================] - 19s 7ms/step - loss: 0.2975 - accuracy: 0.9133
Epoch 2/5
1875/1875 [==============================] - 13s 7ms/step - loss: 0.1434 - accuracy: 0.9574
Epoch 3/5
1875/1875 [==============================] - 13s 7ms/step - loss: 0.1065 - accuracy: 0.9675
Epoch 4/5
1875/1875 [==============================] - 13s 7ms/step - loss: 0.0879 - accuracy: 0.9729
Epoch 5/5
1875/1875 [==============================] - 13s 7ms/step - loss: 0.0763 - accuracy: 0.9761
313/313 - 3s - loss: 0.0744 - accuracy: 0.9779 - 3s/epoch - 9ms/step

That is from this source:

#!/usr/bin/python3
# Adapted from TensorFlow's beginner quickstart on tensorflow.org
# (the shebang has to be the first line for the script to run directly).

import tensorflow as tf
print("TensorFlow version:", tf.__version__)

# Load MNIST and scale the pixel values from [0, 255] down to [0.0, 1.0].
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier; the final layer outputs raw logits.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])

# Sanity checks on one untrained sample. These were separate notebook cells
# in the tutorial; as plain statements in a script they just compute and
# discard their results.
predictions = model(x_train[:1]).numpy()
tf.nn.softmax(predictions).numpy()

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_fn(y_train[:1], predictions).numpy()

model.compile(optimizer='adam',
    loss=loss_fn,
    metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)

# Wrap the trained model so it outputs probabilities instead of logits.
probability_model = tf.keras.Sequential([
    model,
    tf.keras.layers.Softmax()
])
probability_model(x_test[:5])

So, the .so file is missing but it still runs somehow?
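
My guess: those warnings come from the optional tensorflow_io package failing to find its native plugin .so files, and the script above only uses core TensorFlow, so training runs anyway. A minimal sketch to check what actually made it onto disk (assuming tensorflow_io was pip-installed as in the paths above):

import importlib.util
import os

# Locate the installed tensorflow_io package without importing it
# (importing it is what triggers the UserWarnings above).
spec = importlib.util.find_spec("tensorflow_io")
ops_dir = os.path.join(os.path.dirname(spec.origin), "python", "ops")

# List whichever native plugin .so files actually shipped in the wheel.
print([f for f in os.listdir(ops_dir) if f.endswith(".so")])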

Seth

P.S. Please view this photo to see what files are listed in my /home/debian/ directory.

[screenshot: bazel]

I am not sure if it is easy to see, but there are many libs. on the machine so far that will not build correctly. There…it should be a bit easier to read now.

See edge_ai_apps and onnxruntime. They both stop somewhere between about 65% and 100%, depending on the compile.

It should be as easy as running make sdk for the vision apps, but it is not when building from scratch.

Wow, interesting. I’m not sure what’s going on there. Although I don’t think TIDL is intended to be used during training (they expect you to train and compile/quantize it on a PC). Is that one of the scripts provided by TI or is that something homegrown?

For reference, I’ve just pushed what I’m working on here: GitHub - WasabiFan/tidl-custom-model-demo: WIP. I’m trying to get their tools to compile an ONNX model (exported from PyTorch). I’m running it on an x86 host PC (Linux Docker container). My goal is to distill the important bit of the “tools” repo into a self-contained demo since their code is a complete mess.
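
For context, the export itself is the uncomplicated half. A rough sketch of the PyTorch-to-ONNX step (the model and input shape here are placeholders, not my actual network); the TIDL tools then consume the resulting .onnx file:

import torch
import torchvision

# Placeholder model: any ONNX-exportable network would do here.
model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example NCHW input

# TIDL's compiler takes the resulting model.onnx file as its input.
torch.onnx.export(model, dummy_input, "model.onnx",
                  opset_version=11,
                  input_names=["input"], output_names=["output"])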

I’m currently getting this garbage:

<output snipped>
WARNING: [TIDL_E_DATAFLOW_INFO_NULL] ti_cnnperfsim.out fails to allocate memory in MSMC. Please look into perfsim log. This model can only be used on PC emulation, it will get fault on target.
<output snipped>
 0.22272s:  VX_ZONE_INIT:[tivxInit:178] Initialization Done !!!

Thread 1 "python3.6" received signal SIGBUS, Bus error.
__memset_avx2_erms () at ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:151
151     ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: No such file or directory.
(gdb) disas
Dump of assembler code for function __memset_avx2_erms:
   0x00007f4d409dfb30 <+0>:     endbr64 
   0x00007f4d409dfb34 <+4>:     vzeroupper 
   0x00007f4d409dfb37 <+7>:     mov    %rdx,%rcx
   0x00007f4d409dfb3a <+10>:    movzbl %sil,%eax
   0x00007f4d409dfb3e <+14>:    mov    %rdi,%rdx
=> 0x00007f4d409dfb41 <+17>:    rep stos %al,%es:(%rdi)
   0x00007f4d409dfb43 <+19>:    mov    %rdx,%rax
   0x00007f4d409dfb46 <+22>:    retq   
End of assembler dump.
(gdb) i r
rax            0x0                 0
rbx            0x0                 0
rcx            0x9214              37396
rdx            0x7f4d3ba51000      139969689882624
rsi            0x0                 0
rdi            0x7f4d3ba51000      139969689882624
rbp            0x7f4cddd77e00      0x7f4cddd77e00 <g_context_obj>
rsp            0x7fff66fb9e48      0x7fff66fb9e48
r8             0x0                 0
r9             0x7f4d40a40b80      139969773702016
r10            0x0                 0
r11            0x7                 7
r12            0x9214              37396
r13            0x7f4d3ba51000      139969689882624
r14            0x0                 0
r15            0x7f4ce8733b40      139968294107968
rip            0x7f4d409dfb41      0x7f4d409dfb41 <__memset_avx2_erms+17>
eflags         0x10206             [ PF IF RF ]
cs             0x33                51
ss             0x2b                43
ds             0x0                 0
es             0x0                 0
fs             0x0                 0
gs             0x0                 0
(gdb) bt
#0  __memset_avx2_erms () at ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:151
#1  0x00007f4cdcfda1fd in vxCreateUserDataObject () from /tidl_tools/libvx_tidl_rt.so
#2  0x00007f4cdcfd8712 in TIDLRT_create () from /tidl_tools/libvx_tidl_rt.so
#3  0x00007f4ce8c8e36e in TIDL_subgraphRtCreate () from /tidl_tools//tidl_model_import_onnx.so
#4  0x00007f4ce8c788af in TIDL_computeImportFunc () from /tidl_tools//tidl_model_import_onnx.so
#5  0x00007f4d3f6b691a in std::_Function_handler<onnxruntime::common::Status (void*, OrtApi const*, OrtKernelContext*), onnxruntime::TidlExecutionProvider::Compile(std::vector<onnxruntime::Node*, std::allocator<onnxruntime::Node*> > const&, std::vector<onnxruntime::NodeComputeInfo, std::allocator<onnxruntime::NodeComputeInfo> >&)::{lambda(void*, OrtApi const*, OrtKernelContext*)#3}>::_M_invoke(std::_Any_data const&, void*&&, OrtApi const*&&, OrtKernelContext*&&) ()
   from /usr/local/lib/python3.6/dist-packages/onnxruntime/capi/onnxruntime_pybind11_state.so

I might try doing the compilation on-device instead and see if that improves things.

Glad to hear there are others working in the same area! Hopefully we can figure out how this giant pile of weirdness is supposed to work.

Compilation on the BBAI-64 is very difficult for whatever reason…

tiovx is the lib. that is stopping me so far. I see you are ahead of me. I installed the tiovx.h file, but I think I need to install it via Debian and apt to make it work. So, libvx? Maybe.

Seth

P.S. A lot of the files want to compile on the BBAI-64, luckily, but none are completely compiling. This has put a damper on my vx installation so far. I noticed that TI got the Vx stuff from the Khronos people: TIOVX User Guide: Overview .

It is like one would have to have the RTOS working and installed, or nothing is available to install.

And yes, the training is supposed to be done on the PC side, assuming the data is too large to handle on the target. But I have been trying anyway to see what I can compile and what does not compile.

I am not trying to get you to move to ARM on this subject, but some of their headaches are ones I turned into my own headaches: Documentation – Arm Developer .

They have specifics on building particular libs. from scratch, which is something I think TI should provide for people like me who want to build and then change the build instructions to fit personal needs. But? Who knows? I will keep trying.

In your lib. repo. on GitHub, I think there are a few details that may need tending. For instance, at this file and location: tidl-custom-model-demo/compile_model.py at 74e8c9c4a524749fc85bcd39574bc84fffc08236 · WasabiFan/tidl-custom-model-demo · GitHub. Are you basically letting that line use the .so file, or is there a particular location where that file must live?

Yeah, getting the ARM64 versions of some of these libraries seems easier. It depends on which ones. I do believe that ArmNN is only required for some combinations of the tools (TFLite, at a minimum) but isn’t required for ONNX inference, so hopefully I can avoid it.

Yeah. The fact that you have to pass a path to their libraries as a configuration parameter is completely nuts, but apparently that’s how they decided to do it. In addition to that, you also seem to need to put their libs on your LD_LIBRARY_PATH or otherwise make them available to the dynamic linker, which I did in my Docker container environment defaults (tidl-yolov5-custom-model-demo/docker/Dockerfile at 74e8c9c4a524749fc85bcd39574bc84fffc08236 · WasabiFan/tidl-yolov5-custom-model-demo · GitHub). I derived this from their examples here: edgeai-tidl-tools/examples/osrt_python at master · TexasInstruments/edgeai-tidl-tools · GitHub. Note that this is for the ONNX runtime, and the TensorFlow Lite and other runtimes might be a bit different.
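
For anyone reading along, the session setup in their ONNX examples boils down to roughly this (a sketch under my reading of their osrt_python code; the two paths are placeholders you point at your own checkout and output directory):

import onnxruntime as rt

# Placeholder paths: the tidl_tools checkout and a directory where the
# compiled artifacts should land.
compile_options = {
    "tidl_tools_path": "/tidl_tools",
    "artifacts_folder": "./model-artifacts",
}

# TIDL's provider compiles the subgraphs it supports; everything else
# falls back to the stock CPU execution provider.
session = rt.InferenceSession(
    "model.onnx",
    providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
    provider_options=[compile_options, {}],
)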

ti-edgeai-tiovx-modules-8.2 # can be installed via apt

1. Here is a list of libs. online that can/cannot be installed from TI:
a. https://github.com/TexasInstruments/edgeai-tidl-tools
* so, in the setup.sh file, some changes are needed.
* "unknown" is reported back from "uname -p" on my board...
* so, change the required parts in that file to handle "unknown" (see the sketch below this list). This should work as a starter...
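
For what it is worth, the same check from Python, a minimal sketch: platform.machine() mirrors uname -m, which reports aarch64 on the BBAI-64 even while uname -p reports unknown.

import platform

# `uname -p` prints "unknown" on this image, so key off the machine type
# instead; platform.machine() mirrors `uname -m`.
if platform.machine() == "aarch64":
    print("running on the BBAI-64 (arm64 target)")
else:
    print("assuming an x86_64 host")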

This is one lib. that can be installed via apt. In the future, I will come back here to list other libs. they have available, for using the .so files in other projects.

Seth

P.S. Did you install TVM yourself or use the install from the edgeai-tidl-tools repo? Anyway, I have a ton of dependency issues so far w/ installing many libs. via the edgeai-tidl-tools repo.

I will keep you updated.

Okay…

Jupyter Notebooks was simple enough! Phew. Anyway, here is a screenshot after the build:

[screenshot: Jupyter]

It is neat so far. I can do stuff again. I guess the earlier failures were b/c of uname -p. Who knows?

So…I am not sure if this is correct or not, yet. I think, from the repo online (edgeai-tidl-tools), it builds an Ubuntu image and you flash it to the BBAI-64 as the PSDK, then use the tools that way. I am not 100% sure yet, i.e. I have not used their toolchain and tried to flash it to the BBAI-64 (yet). Updates!

update here…

I think I was incorrect about building Ubuntu 18.04/20.04 and the PSDK on x86_64 for cross-compilation of one or both.

Anyway, I spent a good five-hour block of testing to find that things are not what they seem, but the Jupyter Notebooks work on the BBAI-64 so far. I can run the notebooks directly on the BBAI-64’s server and reach them via the browser address bar.

Anyway, I will keep trying as I get more interested in it.

Seth

P.S. Something I did notice:

a. uname -p always outputs unknown on my end, i.e. whether it is target or host.
b. there are some missing files.

Update:
Power input is very important, as is cooling the board properly.
Use the barrel jack or USB-C power in; just be sure about the current provided by the AC wall adapter - it should be >2.5A.
Also, I suggest using a 5V fan on top of the heatsink - otherwise, you will have overheating issues.


Unrelated to the thread’s original purpose, but following up on the tangent… I managed to coerce TIDL into compiling and running inference on a PyTorch model. The process (using ONNX) is different from what they seem to recommend for TFLite, so your applications might want something different. Cross-post: TIDL/EdgeAI benchmarks on the AI-64 - #13 by kaelinl


I’ve been powering up my BBAI-64 with the barrel jack for months. But today I can only power it up from the USB-C connector. I tried a different power supply for the barrel jack but it still won’t power up that way. I tested the output of the supply and it looks good.

I reflashed the eMMC from the uSD card (using the USB-C for power). No change. I can no longer use the barrel jack for power.

Any ideas?

Hello @Kendall_Auel and @kaelinl ,

I gave in, as I did not want to follow along w/ what TI wanted to do w/ or w/out my assistance. So, in the dark again. I may prompt a new build soon. I will check out your builds, @kaelinl, first to see what has transpired.

Seth

P.S. @Kendall_Auel , I am not sure exactly what could be going wrong. Do you have a powerful fan dissipating the heat? This is all I can think of currently. If you are building Yocto builds, I am out of that too; I could not get the fan, or USB power for the fan, working so far.

Thank you both for input and ideas. I will try again since I stopped for so long and am wondering what has changed since the stop-cease-quit to this day!

@Kendall_Auel Are you measuring the DC jack voltage when plugged in?
It is possibly reading fine with no load, but it might drop low enough to prevent the board from powering up.

Looking at the schematic, there is a power mux chip that will switch off if the DC input voltage is above 5.57V or below 3.99V.

You can get to the positive jack terminal on the back of the socket. The USB socket case will get you the ground. What voltage do you read when powered up?

If it is inside the above range, it is probably an issue with chip U1.

@benedict.hewson Thanks, that’s likely what was happening. The board magically started booting again from the barrel jack. It could be my cheap-o supply or it could be the board was getting too hot, hard to know exactly. But the USB-C power source seems reliable so I have a workaround if the supply fails again.

They should use 12V power, not 5V; this chip uses too much power when everything is functioning.

Thanks, that was what happened to me. (I thought the board was broken somehow.)

My power supply measured 5.54V; however, the board would not turn on.
I did some experiments with a programmable power supply → anything bigger than 5.48V won’t turn the board on.
The figure is slightly different from the designed threshold of 5.57V due to either measurement error or component tolerance, I guess.

So, I think it will probably be fine if the power supply outputs somewhere around 5.2-5.4V. Higher than this, it might not be very stable, especially when the supply has high ripple & overshoot → sometimes it works, sometimes it doesn’t.