Real Time experience on Beagle?

Hello,

I’m presenting an overview of Beagle projects on October 31 at the Real Time Summit in Lyon, France.

I’d appreciate any feedback if you’ve tried Xenomai or the PREEMPT_RT kernel.

Thanks
Drew

Hi Drew,

I hope you’re well!

I recently experimented briefly with both. Here are the steps I used to install Xenomai (Rob Nelson helped me find the pre-built kernels; the link to them is below).
Not all Xenomai APIs are enabled in the kernel, but I did get reduced jitter using one of the API sets, called Alchemy, which is probably the easiest to code with. In a nutshell, you use the API to create a task thread and do your low-latency stuff there.

For my experiment, I used this content in my makefile:

XENO_CONFIG := /usr/xenomai/bin/xeno-config

CFLAGS := $(shell $(XENO_CONFIG) --posix --alchemy --cflags)
LDFLAGS := $(shell $(XENO_CONFIG) --posix --alchemy --ldflags)

EXECUTABLE := atest

all: $(EXECUTABLE)

%: %.c
	$(CC) -o $@ $< $(CFLAGS) $(LDFLAGS)

clean:
	rm -f $(EXECUTABLE)

and the code needs to look like this:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <stdlib.h>
#include <alchemy/task.h>

RT_TASK hello_task;

// function to be executed by the task
// this is your stuff for which you want low jitter
void helloWorld(void *arg)
{
    RT_TASK_INFO curtaskinfo;

    printf("Hello World!\n");

    // inquire about the current task
    rt_task_inquire(NULL, &curtaskinfo);

    // print the task name
    printf("Task name: %s\n", curtaskinfo.name);

    while (1) {
        // do your stuff here in a forever loop if you like

        // use this sleep call if you need any sleep; it has low jitter
        // (the argument is in nanoseconds, so 50000 = 50 usec)
        rt_task_sleep(50000);
    }
}

int main(int argc, char *argv[])
{
    char str[10];

    printf("start task\n");
    sprintf(str, "hello");

    /* Create the task.
     * Arguments: &task, name, stack size (0=default), priority,
     * mode (FPU, start suspended, ...)
     */
    rt_task_create(&hello_task, str, 0, 99, 0);

    /* Start the task.
     * Arguments: &task, task function, function argument
     */
    rt_task_start(&hello_task, &helloWorld, 0);

    while (1) {
        sleep(10);
    }
}
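
Incidentally, if you want the loop to run at a fixed rate rather than chaining sleeps, Alchemy also has rt_task_set_periodic() and rt_task_wait_period(). I haven’t measured this path; the following is only a rough sketch of the pattern, and the task name and the 100 usec period are arbitrary values for illustration:

#include <stdio.h>
#include <unistd.h>
#include <errno.h>
#include <alchemy/task.h>
#include <alchemy/timer.h>

RT_TASK periodic_task;

/* Runs at a fixed 100 usec rate; put the low-jitter work where indicated. */
void periodicWork(void *arg)
{
    unsigned long overruns;

    /* First release point is now; period is 100000 ns = 100 usec. */
    rt_task_set_periodic(NULL, TM_NOW, 100000);

    while (1) {
        /* Block until the next release point. */
        if (rt_task_wait_period(&overruns) == -ETIMEDOUT)
            printf("missed %lu period(s)\n", overruns);

        // do your low-jitter stuff here
    }
}

int main(int argc, char *argv[])
{
    rt_task_create(&periodic_task, "periodic", 0, 99, 0);
    rt_task_start(&periodic_task, &periodicWork, NULL);

    pause();   /* keep the parent process alive */
    return 0;
}

The same makefile builds this, and because each wake-up is scheduled from the previous release point, the loop doesn’t drift the way back-to-back sleeps can.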

To test latency I ran this:
cyclictest -n -p 90 -i 1000

and the result was:
T: 0 ( 2914) P:90 I:1000 C: 31719 Min: 6 Act: 19 Avg: 18 Max: 51

That Max value was about ten times lower than what I measured with PREEMPT_RT.
It was also far lower than x86 Linux running a standard Ubuntu kernel (a virtual machine on ESXi on an Intel NUC).
The latency was all over the place there, especially if I opened another terminal to do something. With Xenomai, it was stable.
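
For context, cyclictest essentially schedules a periodic wake-up at the given priority and interval, and reports how late each wake-up actually arrives. A stripped-down sketch of that idea in plain POSIX C (just to show the principle, not the real tool, which does much more) looks roughly like this:

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <sched.h>

int main(void)
{
    /* SCHED_FIFO priority 90 and a 1 ms interval mirror "-p 90 -i 1000". */
    struct sched_param sp = { .sched_priority = 90 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp))
        perror("sched_setscheduler");   /* needs root */

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    long max_us = 0;
    for (int i = 0; i < 30000; i++) {
        /* Next wake-up is 1 ms after the previous target time. */
        next.tv_nsec += 1000000;
        if (next.tv_nsec >= 1000000000) {
            next.tv_nsec -= 1000000000;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        /* Latency = how late we actually woke up, in usec. */
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        long lat_us = (now.tv_sec - next.tv_sec) * 1000000L +
                      (now.tv_nsec - next.tv_nsec) / 1000;
        if (lat_us > max_us)
            max_us = lat_us;
    }
    printf("max latency: %ld usec\n", max_us);
    return 0;
}

The Xenomai-enabled cyclictest does the same kind of measurement, but through the Xenomai real-time services instead of the stock kernel timers.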
In summary, provided one is willing to code for Xenomai, the jitter difference is large. It’s still nowhere near as good as the PRU or a microcontroller of course, but fantastic for Linux.
Also, it seems that the pre-built Machinekit images use PREEMPT_RT, not Xenomai : ( I’ve no idea if Machinekit is coded to support Xenomai; I haven’t really investigated that far yet.

Installing the pre-built Xenomai kernel:

https://github.com/beagleboard/linux/releases

cd /opt/scripts/tools/

git pull

As root user:

./update_kernel.sh --ti-xenomai-channel --lts-4_14

As non-root user:

cd development

mkdir xenomai

cd xenomai

wget https://xenomai.org/downloads/xenomai/stable/latest/xenomai-3.0.9.tar.bz2

bunzip2 xenomai-3.0.9.tar.bz2

tar xvf xenomai-3.0.9.tar

cd xenomai-3.0.9

./configure --enable-smp CFLAGS="-march=armv7-a -mfpu=vfp3" LDFLAGS="-march=armv7-a -mfpu=vfp3"

make

As root user:

make install

Testing it:

/usr/xenomai/bin/xeno-test

cd Development/xtest

make -f Makefile-a

As root user:

export LD_LIBRARY_PATH=/usr/lib:/usr/xenomai/lib

./atest

You can simplify your makefile: keep the XENO_CONFIG, CFLAGS and LDFLAGS lines and just use:

atest:

clean:
	rm -f atest

The %: %.c stuff is a built-in rule, and CC defaults to your local compiler.

jm2c,

re,
wh

Can you use Bela (https://bela.io/about)?

Thanks for that! : )
I’m terrible at makefiles : )

By the way,

I’ve put an oscilloscope trace of a Xenomai’d program on the BBB here in case you’d like to show it for your presentation:
https://app.box.com/s/nfwlud613c7zoz7gu6rn9arfttticvvc

For that trace, the BBB is just toggling a GPIO pin repeatedly in a Xenomai’d thread. I left it running for several minutes, and the statistics that were collected are at the bottom of the screenshot.
The delta between the Max (66.34 usec) and Min (60.76 usec) values indicates that the jitter was under 6 usec (66.34 - 60.76 = 5.58 usec).

The code that produced that was the same code I pasted earlier, but in the real-time thread (i.e. in the helloWorld function) I added some code toggling a GPIO pin, using the I/O library I wrote a while back (an updated version is documented here):
https://www.element14.com/community/community/designcenter/single-board-computers/blog/2019/08/15/beaglebone-black-bbb-io-gpio-spi-and-i2c-library-for-c-2019-edition

The code I had in that function was something like:
while(1) {
    pin_high(8,12);
    rt_task_sleep(50000);
    pin_low(8,12);
    rt_task_sleep(50000);
}
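
If you don’t have a scope to hand, one rough way to sanity-check the timing in software is with Alchemy’s rt_timer_read(), which returns the current time in nanoseconds. This is only an untested sketch; the pin_high()/pin_low() declarations below are placeholders for the I/O library linked above (include its real header instead):

#include <stdio.h>
#include <alchemy/task.h>
#include <alchemy/timer.h>

/* Placeholder declarations for the GPIO calls from the I/O library above;
 * the real header from that library should be included instead. */
extern void pin_high(int header, int pin);
extern void pin_low(int header, int pin);

/* Toggle P8_12 as before, but report the average period every 1000 cycles.
 * Each period should come out near 100 usec (two 50 usec sleeps) plus the
 * GPIO and loop overhead. */
void toggleAndMeasure(void *arg)
{
    RTIME start = rt_timer_read();

    for (unsigned long n = 1; ; n++) {
        pin_high(8, 12);
        rt_task_sleep(50000);
        pin_low(8, 12);
        rt_task_sleep(50000);

        if (n % 1000 == 0) {
            RTIME now = rt_timer_read();
            /* Printing only occasionally keeps the overhead out of most cycles. */
            printf("avg period: %llu ns\n",
                   (unsigned long long)((now - start) / n));
        }
    }
}

You’d start it with rt_task_create()/rt_task_start() exactly as in the earlier example. The scope is still the better measurement, since it shows individual worst-case cycles rather than an average.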

Thanks very much for these instructions and your results. I will give it a try.

Did you run anything to put load on the system while you were collecting stats?

thanks,
drew

Hi Drew,

Without the real-time kernel (on x86 in a VM), I didn’t need to add any load to see that the jitter was large. As soon as I did any activity, such as just opening another terminal, the jitter shot even higher, so I didn’t deliberately add any further load; it was already clear I couldn’t do much machine control that way.

I’ve just now repeated it on the BBB and recorded a couple of videos (each is an MP4 file under 2 Mbytes). They are here to download:
https://app.box.com/s/hcax6malowe43ctxdg1x64r83yruklyg

In the no-xenomai-gcc.mp4 video, you can see that I ran cyclictest, which shows the Min/Actual (i.e. current)/Avg/Max latency values in usec.
I ran gcc as a real-world load. You can see that the latency shoots up to 591 usec, i.e. the jitter is higher than 500 usec.

In the with-xenomai-gcc.mp4 file, I repeat the same thing, but this time using a Xenomai-enabled cyclictest.
The video shows that the latency during the gcc load didn’t exceed 62 usec, so almost 10 times better : )

There’s a third video there too, titled stress-xenomai.mp4. In that one, I ran a stress command and also displayed the running processes with top. Afterwards, I stopped cyclictest, and you can see the difference in the top output too.
I don’t know how useful this video is, because I don’t know how good that stress command is; I copied it from some Pi stress-test document.

Thanks,

Shabaz.

Thank you very much. Those findings are very interesting and the
videos make it easy to see the difference.