High CPU usage after loading a cape on Jessie

I am experimenting with getting Machinekit running on Debian Jessie,
and have run into an issue with loading capes.

After I manually load a cape:

$ SLOTS=/sys/devices/bone_capemgr.*/slots
$ sudo -A su -c "echo cape-bebopr-brdg:R2 > $SLOTS"

...CPU usage maxes out and I have eight systemd-udevd tasks running
that are each taking a good chunk of the CPU. These typically go away
after approx. 17 seconds of CPU time (each), or about 2-1/2 minutes, but
I'm wondering what in the world is going on.

Is this a known issue? Any ideas how to tell what the systemd-udevd
processes are doing?
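
(A generic way to at least spot the busy workers, nothing board-specific; on this system the top entries would presumably be the systemd-udevd workers:)

```shell
# Sort all processes by CPU usage; the header plus the four hungriest tasks
ps -eo pid,pcpu,etime,comm --sort=-pcpu | head -n 5
```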

The kernel is 3.8.13-xenomai-r78, which works fine under Wheezy.

Here's an example from top shortly after loading the cape:

I get the same results with a "stock" Debian Jessie image:

debian@beaglebone:~$ cat /etc/dogtag
BeagleBoard.org Debian Image 2016-03-27

...using the 3.8.13-bone79 kernel. The 4.1.18-ti-r55 kernel provided
with the Jessie image has a cape manager (although the slots file is
in a different location), but trying to load the cape-bebopr-brdg:R2
cape fails.

Any hints as to how to debug would be very welcome!

It looks like this is related to the PRU. When the CPU-gobbling
systemd-udevd processes go away, this appears in the syslog:

Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1940] /devices/ocp.3/4a300000.pruss/uio/uio3 timeout; kill it
Apr 10 20:29:00 beaglebone rsyslogd-2007: action 'action 17' suspended, next retry is Sun Apr 10 20:29:30 2016 [try http://www.rsyslog.com/e/2007 ]
Apr 10 20:29:00 beaglebone systemd-udevd[179]: seq 2037 '/devices/ocp.3/4a300000.pruss/uio/uio3' killed
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1941] /devices/ocp.3/4a300000.pruss/uio/uio4 timeout; kill it
Apr 10 20:29:00 beaglebone systemd-udevd[179]: seq 2038 '/devices/ocp.3/4a300000.pruss/uio/uio4' killed
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1942] /devices/ocp.3/4a300000.pruss/uio/uio0 timeout; kill it
Apr 10 20:29:00 beaglebone systemd-udevd[179]: seq 2034 '/devices/ocp.3/4a300000.pruss/uio/uio0' killed
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1943] /devices/ocp.3/4a300000.pruss/uio/uio1 timeout; kill it
Apr 10 20:29:00 beaglebone systemd-udevd[179]: seq 2035 '/devices/ocp.3/4a300000.pruss/uio/uio1' killed
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1944] /devices/ocp.3/4a300000.pruss/uio/uio2 timeout; kill it
Apr 10 20:29:00 beaglebone systemd-udevd[179]: seq 2036 '/devices/ocp.3/4a300000.pruss/uio/uio2' killed
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1949] /devices/ocp.3/4a300000.pruss/uio/uio5 timeout; kill it
Apr 10 20:29:00 beaglebone systemd-udevd[179]: seq 2039 '/devices/ocp.3/4a300000.pruss/uio/uio5' killed
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1969] /devices/ocp.3/4a300000.pruss/uio/uio6 timeout; kill it
Apr 10 20:29:00 beaglebone systemd-udevd[179]: seq 2040 '/devices/ocp.3/4a300000.pruss/uio/uio6' killed
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1970] /devices/ocp.3/4a300000.pruss/uio/uio7 timeout; kill it
Apr 10 20:29:00 beaglebone systemd-udevd[179]: seq 2041 '/devices/ocp.3/4a300000.pruss/uio/uio7' killed
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1941] terminated by signal 9 (Killed)
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1969] terminated by signal 9 (Killed)
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1943] terminated by signal 9 (Killed)
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1940] terminated by signal 9 (Killed)
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1942] terminated by signal 9 (Killed)
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1944] terminated by signal 9 (Killed)
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1949] terminated by signal 9 (Killed)
Apr 10 20:29:00 beaglebone systemd-udevd[179]: worker [1970] terminated by signal 9 (Killed)

If it helps, the output from udevadm monitor while this is happening:

Any hints as to how to debug would be very welcome!

I wonder if running strace and piping stdout to a file would provide any useful information on the subject. I'm pretty sure this would have to be run explicitly as root (not sudo).

Not really useful in the least...

OK, this doesn't appear to be a PRU issue; it's a fundamental problem
with udev. The systemd-udevd processes that are chewing up CPU cycles
are endlessly trying to create a symlink due to the contents of
/etc/udev/rules.d/uio.rules, which contains:

SUBSYSTEM=="uio", SYMLINK+="uio/%s{device/of_node/uio-alias}"
SUBSYSTEM=="uio", GROUP="users", MODE="0660"
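
(For anyone puzzling over that first rule: the `%s{...}` substitution tells udev to read a sysfs attribute of the matched device and splice it into the symlink name. A rough simulation of that expansion against a throwaway directory, with a made-up `pruss-evt0` alias value; if the attribute is missing, the substitution presumably comes out empty, which would explain the bare `/dev/uio/` target in the strace output below:)

```shell
# Simulate udev's %s{device/of_node/uio-alias} lookup against a fake sysfs tree
tmp=$(mktemp -d)
mkdir -p "$tmp/uio0/device/of_node"
printf 'pruss-evt0' > "$tmp/uio0/device/of_node/uio-alias"  # hypothetical alias value
alias_val=$(cat "$tmp/uio0/device/of_node/uio-alias")
echo "SYMLINK target would be: uio/$alias_val"              # -> uio/pruss-evt0
rm -rf "$tmp"
```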

When you strace one of the 'stuck' systemd-udevd processes, you get an
endless (until the process is killed) repeating stream of:

mkdir("/dev", 0755) = -1 EEXIST (File exists)
symlink("../uio1", "/dev/uio/") = -1 ENOENT (No such file or directory)
stat64("/dev/uio", 0xbeffb430) = -1 ENOENT (No such file or directory)
mkdir("/dev", 0755) = -1 EEXIST (File exists)
symlink("../uio1", "/dev/uio/") = -1 ENOENT (No such file or directory)
stat64("/dev/uio", 0xbeffb430) = -1 ENOENT (No such file or directory)
...

The symlink fails because there is no /dev/uio directory, which the
systemd-udevd process seems to think is supposed to exist.
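
(The failure mode is plain POSIX behavior: `symlink()` does not create missing parent directories. A minimal reproduction of the loop above, in a scratch directory rather than /dev:)

```shell
tmp=$(mktemp -d)
# symlink() into a nonexistent parent fails with ENOENT, as in the strace loop
ln -s ../uio1 "$tmp/uio/uio1" 2>/dev/null \
    || echo "symlink failed: No such file or directory"
mkdir -p "$tmp/uio"          # the workaround: create the parent directory first
ln -s ../uio1 "$tmp/uio/uio1" && echo "symlink created"
rm -rf "$tmp"
```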

Any uio and/or udev gurus know what's going on?

And of course, I never added who pinged me on this when I pushed the
change...

https://github.com/RobertCNelson/omap-image-builder/commit/47f982cf2896664dacd21126cf4381b078b94d15

Regards,

Well things are _mostly_ working. The /dev/uio[0-7] entries are
created; they apparently just don't get symlinked to wherever the
first line in the uio.rules file is trying to put them.

Any ideas what this was all about?

Is it OK to just comment out the SYMLINK line?

After further testing, if you create a /dev/uio directory before
trying to load a uio driver (like the PRU driver), everything works
fine. Interestingly, there are *NO* symlinks actually generated in
the /dev/uio directory, but the simple fact that it exists seems to be
enough to keep the systemd-udevd processes from chewing up tons of CPU
until they get killed.

So the solution seems to be one of:

* Create a /dev/uio directory

* Remove the SYMLINK line from /etc/udev/rules.d/uio.rules

I'm not enough of a udev guru to know which is the better option, but
removing or commenting the SYMLINK line in uio.rules seems like the
better choice, since there aren't any symlinks generated anyway.
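
(If commenting the rule is the route taken, a sed one-liner does it; shown here against a scratch copy of the rules from earlier rather than the real /etc/udev/rules.d/uio.rules, and back up the file first:)

```shell
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
SUBSYSTEM=="uio", SYMLINK+="uio/%s{device/of_node/uio-alias}"
SUBSYSTEM=="uio", GROUP="users", MODE="0660"
EOF
# Comment out only the SYMLINK+= rule; the GROUP/MODE rule stays active
sed -i '/SYMLINK+=/s/^/#/' "$tmp"
cat "$tmp"
rm -f "$tmp"
```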

I’m not an expert Charles, but I think the symlink should be removed from the udev rules file, and when various drivers that require uio are installed, the “vendor” should include their own *.rules file. That’s how it’s done already in many cases.

Creating a directory structure in the /dev/ tree probably is not a good idea if it never gets used, or if the relevant driver just isn’t installed, because it could be a potential cause of confusion.

Does the uio_pruss still work if we just nuke that whole udev rule??

Regards,

Testing....

Robert, Charles,

One thing that confused me was why the uio_pruss driver adds 8 mmap()-able file objects (/dev/uio0 through /dev/uio7). They all seem to point to the same address in memory, too.

Also, according to older exact-step how-tos from around 2013-2014, this has somehow changed from a single file object to 8. Perhaps that udev line is the cause?

>>
>> Does the uio_pruss still work if we just nuke that whole udev rule??
>
> Testing....

Thanks for testing Charles!

Commenting the first line (with SYMLINK+=):

No excessive CPU usage when loading a PRU device-tree overlay.

Commenting both lines (same as not having a uio.rules file):

No excessive CPU usage when loading a PRU device-tree overlay

/dev/uio* permissions change from 0660 to 0600

In both instances, I can use the PRU as expected (Machinekit loads and
moves motors).
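
(For completeness, the 0660 vs. 0600 difference is just what the `MODE=` assignment in the second rule applies; sketched here on an ordinary file, since poking the real device nodes needs root:)

```shell
tmp=$(mktemp -d)
touch "$tmp/uio0"           # stand-in for /dev/uio0
chmod 0660 "$tmp/uio0"      # what the MODE="0660" rule applies
stat -c '%a' "$tmp/uio0"    # -> 660: owner and group can read/write
chmod 0600 "$tmp/uio0"      # default with no udev rule
stat -c '%a' "$tmp/uio0"    # -> 600: owner only
rm -rf "$tmp"
```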

I would recommend leaving the permissions rule (the second line) but
commenting out the first line (the SYMLINK+= line). Honestly, IMHO
there should be a udev rule that creates a device node that has "pru"
or "pruss" in it somewhere, but since that's not how it was done
previously... :wink:

I'll push the change commenting out the first line..

Regards,

@Robert,

I think I found the answer to your question. From a text file based on something another person on these forums was discussing with me...

put in /etc/modprobe.d/uio.conf

Anyway, if you have a “compatible=uio” definition for any node in any device tree, it seems that udev rule will hammer the crap out of the system until it can actually do what it wants to.

Also, oddly enough, that method for loading a uio driver is broken, at least for the demonstration uio driver for the ADC. Exact-step instructions I’ve created and tested personally no longer work. At least on my BBB with:

william@beaglebone:~$ uname -r
4.4.7-bone-rt-r9
william@beaglebone:~$ cat /etc/dogtag
BeagleBoard.org Debian Image 2015-03-01
william@beaglebone:~$ dtc -v
Version: DTC 1.4.1-g1e75ebc9

So it would seem that udev rule is no longer needed anyway..