I have a YOLO model that I converted to TensorFlow Lite, and I want to run it on a BeagleBone AI-64. It runs, but I want to run it on the GPU so there is less CPU load on the BeagleBone board. How can I check the GPU load, and how do I know whether it’s running on the GPU or not?
Kindly, can anyone help me out?
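A first sanity check you can run from a shell on the board: if no GPU kernel driver is loaded at all, TFLite cannot be using a GPU delegate, and the model is necessarily on the CPU. This is only a sketch; the `pvr` module-name pattern and the `/dev/dri/renderD128` render node are my assumptions about the PowerVR stack on this board.

```shell
# Is any PowerVR-looking kernel module loaded? (assumed name pattern: pvr*)
GPU_PRESENT=no
if command -v lsmod >/dev/null 2>&1 && lsmod | grep -qi pvr; then
    GPU_PRESENT=yes
fi
# A DRM render node is what a userspace GPU driver would open (assumed path)
[ -e /dev/dri/renderD128 ] && GPU_PRESENT=yes
echo "GPU driver present: $GPU_PRESENT"

# To see the CPU cost, watch per-core load while inference is running;
# if every A72 core is pegged, the model is clearly running on the CPU.
command -v top >/dev/null 2>&1 && top -bn1 | head -n 12 || true
```

If `GPU_PRESENT` comes back `no`, the delegate question is already answered: there is nothing for TFLite to offload to.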
You are S.O.L. with the TI stuff and the GPU. It’s been a long time since I touched that board, and it’s a combination of many negatives. If you need to use a GPU, the Jetson/Tegra is the only SBC that has enough compute power and open access to the CUDA cores and GPU.
The big value this board holds is in all these higher-performing real-time cores working with Linux, in combination with a lot of fancy IO. If you, for example, want to make an industrial machine that needs tight, demanding real-time controls, this board is a reasonable choice.
The GPU is useless and a waste; it’s annoying to pay for. There is no support from what I can see. I believe there might be some kind of Linux driver now, but it’s not fully working, or it’s some garbage… The AI demo stuff, IIRC, works in some overly complicated way, making use of the DSP and R5 cores.
The checkmarks this board has filled for me have been very hard to find elsewhere. Almost nobody is using this board, the community support is painful, and many, like @foxquirrel, have given up. I have repeatedly looked for better alternative platforms for my use case: platforms that would cost me less headbanging, be better documented, and sit in around the same price range. I found nothing that gave me the confidence to make the switch. For all this, the current best alternative I’ve found for my use case is the PocketBeagle2… if only it had a network port…
I dream of a BeagleBoneBlack2 with a TI AM6442, and some actual initial support and documentation.
Anyway, if you want to do fancy AI stuff, use the latest Jetson. You’ll get 20x the performance and much better support. The strength of the AI64 is in industrial controls. If you need both fancy AI in combination with industrial controls, use both boards simultaneously. Have them talk.
After two years of battling, and much learning, I am now finally approaching the point of using this board in production. To help others use this board for controls, I have been putting together this example application: Heterogeneous_App_Example
Thank you very much for putting that together. I have not tried it out to verify, but it seems to touch on plenty of the key issues.
The very strong positive about the board is the fact that it makes a rock-solid server. Rip out the desktop and run it headless with NVMe. We have one in production on our local network. It hangs out with our big Dells, no issues at all.
I am thinking of buying this board, and I don’t have much experience with TI.
I thought TI was open with respect to their documentation. Is it still hard to work with the C7x DSP for AI applications? Also, with AI, is there not much hope?
Can you please give some specific examples of the issues faced with AI applications?
Get the BeagleY-AI over the BBAI64… A driver called “thames” is in development: https://www.phoronix.com/news/Thames-Accelerator-Driver
Regards,
Impressive.
I wonder how possible it would be, in the future, to configure this driver to work with the C7x cores on the BBAI64.
You may now be able to get a TI SDK Yocto build working for the BBAI64 by generally following this guide.
In short, follow steps 1 and 2 from here.
For your layer config try configs/processor-sdk-analytics/processor-sdk-analytics-11.02.00-config.txt. The source is here.
For your machine config, use beaglebone-ai64. The config file for that machine config is here.
So anyway, after getting your container set up, for step 2, do as follows.
tisdk@9b297a000db9:~$ pwd
/home/tisdk
# Give user tisdk permission to write in /home/tisdk
tisdk@9b297a000db9:~$ sudo chown -R tisdk /home/tisdk
# Clone oe-layersetup (IMPORTANT: things break if you do not name the folder tisdk)
tisdk@9b297a000db9:~$ git clone https://git.ti.com/git/arago-project/oe-layersetup.git tisdk
tisdk@9b297a000db9:~$ cd tisdk
tisdk@9b297a000db9:~/tisdk$ ./oe-layertool-setup.sh -f configs/processor-sdk-analytics/processor-sdk-analytics-11.02.00-config.txt
tisdk@9b297a000db9:~/tisdk$ cd build
tisdk@9b297a000db9:~/tisdk/build$ . conf/setenv
# Build the tisdk-default-image for the BBAI64
tisdk@9b297a000db9:~/tisdk/build$ MACHINE=beaglebone-ai64 bitbake -k tisdk-default-image
If the tisdk-default-image build works, next attempt the tisdk-edgeai-image build.
tisdk@9b297a000db9:~/tisdk/build$ MACHINE=beaglebone-ai64 bitbake -k tisdk-edgeai-image
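Once a build completes, the images should land in bitbake’s deploy directory. A sketch for locating them; the `arago-tmp-*` directory name is an assumption (the exact tmp directory name varies with the toolchain/libc configuration), and the `~/tisdk/build` path matches the setup above.

```shell
# Find the deploy directory for the beaglebone-ai64 machine (assumed layout)
DEPLOY=$(ls -d "$HOME"/tisdk/build/arago-tmp-*/deploy/images/beaglebone-ai64 2>/dev/null | head -n 1)
if [ -n "$DEPLOY" ]; then
    # Flashable SD-card image(s), if the image recipe produced .wic output
    ls -lh "$DEPLOY"/*.wic* 2>/dev/null || true
else
    echo "deploy dir not found - has a build completed?"
fi
```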
I am currently attempting a build with the latest SDK config; tomorrow I will know whether it works. On my computer, the full tisdk-edgeai-image build takes about 14 hours. I have a Ryzen 7840U with 64 GB of RAM.
Last time I attempted Yocto for this board, the tisdk-default-image build worked fine; the tisdk-edgeai-image build did not have the AI stuff working, but it otherwise booted and ran. I vaguely remember seeing a commit that may have fixed the edgeai build. Hopefully I will have a good update tomorrow.
For my attempt from maybe half a year ago, I used the 11.01 SDK layer config. Today I am attempting the 11.02 SDK layer config. There is a reasonable chance the AI demo now works; fingers crossed…
14 hours!!! I never thought it would take so long to compile. Eagerly awaiting your further updates.
The same thames driver should work on the C7x. What worries me is that we don’t have the Mesa “open” firmware for the GPU on the BBAI64.
Why do we need GPU Mesa drivers for AI? I thought we would use the C7x cores.
It’s an open-source way to do AI acceleration with the C7x that currently isn’t used…
So I understand the following. I am a beginner; sorry for the dumb questions/points.
- The BB AI-64 also has a C7x, but as it doesn’t have Mesa drivers, we won’t be able to run AI models using Vulkan/Mesa.
- But we can still write custom (kernel) drivers for the C7x and do AI inference.
I thought the GPU was used for AI inference, and as it’s closed, it would be very hard to implement drivers for it. But if it’s the C7x, I see things are better, i.e., it’s possible to write C7x drivers.
The major advantage I see is that the BB AI-64 has an 8 TOPS C7x, while the BeagleY-AI has only 4 TOPS.
Nothing is in main yet; the C7x stack is in development…
I found the closed out-of-tree drivers for the GPU and the Video Accelerator IP.
GPU driver:
Build guide, and general explanation of the parts
GPU open source side (k6.12)
GPU raw firmware files (closed source part)
Mesa shim (I am not sure which branch to use for kernel 6.12)
Video accelerator D5520/VXE384 driver:
kernel 6.12 branch
From what I gather, these drivers may only work with the TI Linux kernel branches. They depend on source that will never go upstream; thus, these drivers may not work with mainline kernels.
The links above I believe are for the ti-linux-6.12.y kernel.
The TI Yocto tisdk-default-image build for this board should have these drivers set up and working. I have not yet checked myself whether the drivers are actually present and working.
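To check on a booted image whether these pieces actually made it in, something like the following should work from a shell on the board. A sketch: the `rogue` firmware naming and the `pvr` module naming are my assumptions for the closed PowerVR blobs and their kernel driver.

```shell
# Closed firmware blob installed? (assumed to contain "rogue" in its name)
FW_FOUND=no
if ls /lib/firmware 2>/dev/null | grep -qi rogue; then
    FW_FOUND=yes
fi
echo "Rogue GPU firmware present: $FW_FOUND"

# Kernel module built for the running kernel? (assumed pvr* naming)
find "/lib/modules/$(uname -r)" -name 'pvr*' 2>/dev/null | head -n 3 || true

# Did the driver actually probe at boot?
dmesg 2>/dev/null | grep -i pvr | head -n 5 || true
```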
I asked Gemini about the GPU driver, and the reply is below. Can you confirm whether it’s correct? I was wondering whether it’s possible to get Vulkan running for AI inference.
I also found a BB AI-64 discussion post related to the GPU.
Response from Gemini regarding Linux drivers for Imagination PowerVR 8XE GE8430 GPU:
Thanks to Imagination Technologies finally opening up their “Rogue” architecture, true open-source drivers do exist now, but they require you to be on the bleeding edge of Linux development.
The Kernel Driver (powervr): Imagination officially merged their open-source DRM (Direct Rendering Manager) driver for the Rogue architecture into the mainline Linux kernel starting in version 6.8.
The User-Space Driver (pvr): The open-source Vulkan driver for the Rogue architecture was merged into mainline Mesa.
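The two preconditions that answer names are easy to sanity-check on whatever kernel you are running. A sketch; `vulkaninfo` comes from the vulkan-tools package, and the version parsing assumes a conventional `uname -r` string:

```shell
# The mainline powervr DRM driver landed in kernel 6.8; check the running kernel
KVER=$(uname -r | cut -d. -f1-2)
MAJ=${KVER%%.*}
MIN=${KVER#*.}
if [ "$MAJ" -gt 6 ] || { [ "$MAJ" -eq 6 ] && [ "$MIN" -ge 8 ]; }; then
    echo "kernel $KVER: new enough for the mainline powervr driver"
else
    echo "kernel $KVER: too old for mainline powervr (needs >= 6.8)"
fi

# If Mesa's pvr Vulkan driver is installed, vulkaninfo should list the GPU
command -v vulkaninfo >/dev/null 2>&1 && vulkaninfo --summary 2>/dev/null | head -n 20 \
    || echo "vulkaninfo not installed (vulkan-tools package)"
```

Even with both in place, whether the pvr Vulkan driver supports this particular GE8430 well enough for inference workloads is a separate question.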