Greetings all! With the release of the BeagleBone AI, I see that it has two DSP cores.
→ Is it possible to use Linux on the Arm cores (as per “normal” operation), but use the two DSP cores for a real-time application (such as motor control sampled at 100 kHz)? How is this done, in terms of the software needed?
→ Is it possible to have the two cores interact? For instance, let’s say I do have a 100kHz sampling algorithm running on a DSP core. In a background task, I want the DSP core to pass live data (let’s say voltage and current measurements) to an Arm core, so that the user can view the data in regular Linux space. Is this possible?
I think you could use the DSPs as you described, but you might be better off with the Programmable Real-Time Units (PRUs). All the Beagles have PRUs (not just the AI and the X15), and I have some instructions on using them here: https://markayoder.github.io/PRUCookbook/
At boot, Linux searches the /lib/firmware directory for binaries that it loads onto the DSP, IPU, PRU, and EVE cores. The default binaries, for example, wake the DSP cores to run OpenCL acceleration and then put them back to sleep when not in use. You can build your own binaries for the cores mentioned above using the SDK examples provided below.
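To make the loading mechanism concrete: the kernel's remoteproc framework exposes each loadable core under /sys/class/remoteproc/, and you can stop a core, point it at your own binary in /lib/firmware, and restart it from the shell. A sketch, assuming the stock remoteproc sysfs interface; the remoteprocN index, the node name, and the firmware filename (my-dsp-app.xe66) are all hypothetical, so check each node's "name" file on your board first. The script falls back to a mock directory so the sequence can be dry-run on any machine.

```shell
#!/bin/sh
RPROC=/sys/class/remoteproc/remoteproc0
if [ ! -d "$RPROC" ]; then
    RPROC=$(mktemp -d)                # mock node for an off-target dry run
    echo dra7-dsp1-fw > "$RPROC/name"
    echo offline > "$RPROC/state"
fi
NAME=$(cat "$RPROC/name")             # which core this instance drives
echo stop > "$RPROC/state"            # halt the core
echo my-dsp-app.xe66 > "$RPROC/firmware"  # file must exist in /lib/firmware
echo start > "$RPROC/state"           # load and boot the new binary
STATE=$(cat "$RPROC/state")           # reads "running" on real hardware
echo "$NAME: $STATE"
```

On a BeagleBone AI there are several remoteproc instances (DSPs, IPUs, PRUs), so the index that maps to a C66x DSP is not fixed; listing `/sys/class/remoteproc/*/name` tells you which is which.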
I now have a BeagleBone AI, and I’m just getting into using the GPIO pins.
I understand about the PRUs, but for my application (motor control/power electronics), it would be really nice to have floating-point operations available, which is why I’m looking at the on-board DSPs (if the PRUs are good, then the Cortex-M4s are better, and the DSPs would be awesome…).
Now, I’m reading through the examples to figure out how to use the high-performance DSPs for rudimentary tasks such as reading GPIO pins and handling interrupts (i.e. a PWM timer that triggers a task every 10 µs).