PX4 Autopilot running on Zephyr OS (BB-AI64, BeagleY-AI)

I think that both boards made by BeagleBoard, the BeagleBone AI-64 and the BeagleY-AI, are very capable and powerful. But there is a problem: the absence of good software infrastructure. Some time ago I tried to port the PX4 software on top of TI PSDK. I didn’t really like it, because the FreeRTOS API feels somewhat outdated and is difficult to work with. I have been watching Zephyr OS for a while and finally decided to try it. PX4 Autopilot is a huge project, and it would be a good test for my integration efforts to see how Zephyr OS handles such a big piece of code. Also, PX4 is mostly C++ code, and the Zephyr documentation says its C++ support is not very well tested, so it would be interesting to see how it behaves. Everything described below relates to the R5F cores; the Linux host OS was left almost unchanged.

  1. Before even starting with PX4 Autopilot, I had to prepare a new toolchain for my project. I decided to use the “native” TI toolchain based on the Clang 15 compiler, linker, and accompanying binary tools. The main motivation for this choice was that I also wanted to use TI PDK (in the BeagleBone AI-64 case), and I didn’t want to test TI PDK compilation with another toolchain. TI PDK is a huge piece of software by itself, and it could hide a lot of “surprises.”

  2. Why use TI PDK? Because it contains dozens of low-level drivers and APIs that help interact with the TI SOC hardware. Another important part is the SCI client, which is responsible for SOC-wide configuration via the Device Manager SOC server. For such a versatile system, it is very important that all components be centrally configurable and manageable.

  3. I used Zephyr drivers where they exist and work for TI.

  4. As for the memory layout, I ran into some issues and had to slightly modify the Zephyr CMake files. The Zephyr documentation says its memory model assumes two types of memory: FLASH and RAM. I am not sure I understood it correctly, but I had to isolate FLASH from my CMake files and add separate memory regions for TCM and DRAM.
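As an illustration, a Zephyr board devicetree can declare separate TCM and DRAM regions roughly as below. This is a hypothetical sketch: the node names, base addresses, and sizes are placeholders, not my project’s actual memory map.

```dts
/* Hypothetical sketch -- addresses and sizes are placeholders. */
/ {
	atcm: memory@0 {
		device_type = "memory";
		reg = <0x00000000 0x8000>;	/* 32 KiB tightly-coupled memory */
	};

	ddr: memory@a4000000 {
		device_type = "memory";
		reg = <0xa4000000 0x1000000>;	/* 16 MiB DRAM carve-out */
	};

	chosen {
		zephyr,sram = &ddr;	/* place the image in the DRAM region */
	};
};
```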

  5. As for the MPU (Memory Protection Unit), I had to add an MPU configuration because I used memory beyond the TCM, which is the only region with access enabled by default. I took the configuration from the Xilinx SOC/FPGA port because it has a similar R5 processor. Zephyr OS has everything needed for configuring the MPU; no low-level code is required. The only small configuration part went into the corresponding SOC section.
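For reference, a Zephyr ARM MPU configuration for a Cortex-R SOC is just a static region table like the sketch below (the Xilinx R5 port follows the same pattern). The region names, base addresses, and sizes here are placeholders, and the exact include path for `arm_mpu.h` varies between Zephyr versions.

```c
#include <zephyr/arch/arm/mpu/arm_mpu.h>

/* Hypothetical sketch: addresses and sizes are placeholders. */
static const struct arm_mpu_region mpu_regions[] = {
	/* Tightly-coupled memory: normal RAM attributes, read/write. */
	MPU_REGION_ENTRY("ATCM", 0x00000000,
			 REGION_RAM_ATTR(REGION_32K)),
	/* DRAM carve-out used by the R5 application. */
	MPU_REGION_ENTRY("DDR", 0xa4000000,
			 REGION_RAM_ATTR(REGION_16M)),
};

/* Zephyr's MPU driver picks this table up at boot. */
const struct arm_mpu_config mpu_config = {
	.num_regions = ARRAY_SIZE(mpu_regions),
	.mpu_regions = mpu_regions,
};
```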

  6. I made some changes in the Zephyr Device Tree in order to accommodate additional R5 cores. In my experiment I used the MCU3_1 core, but I also hope to utilize MCU3_0 for different tasks.

  7. As for RPMSG: a TI SOC has many processor cores and DSPs. They need to communicate with the HLOS (High-Level Operating System) and among themselves, and RPMSG is used for that. Here it consists of two major parts: an in-memory ring buffer and mailboxes. The TI SOC has an entire network of mailboxes. When one processor core wants to send data to another core, it writes a word to the corresponding mailbox location; the other core gets an interrupt and reads the data from the ring buffer.
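That flow can be pictured with the rough sketch below. It is illustrative only, not TI’s actual register layout: the “doorbell” here is a plain flag standing in for a memory-mapped mailbox register that would raise an interrupt on the peer core.

```c
#include <assert.h>
#include <stdint.h>

#define RING_SLOTS 8

/* A shared-memory ring plus a mailbox-style doorbell (sketch). */
struct vring_sketch {
	uint32_t buf[RING_SLOTS];
	volatile uint32_t head;     /* written by the producer core */
	volatile uint32_t tail;     /* written by the consumer core */
	volatile uint32_t doorbell; /* stand-in for a mailbox register */
};

/* Producer side: place a word in shared memory, then ring the mailbox. */
static int ring_send(struct vring_sketch *r, uint32_t word)
{
	uint32_t next = (r->head + 1) % RING_SLOTS;

	if (next == r->tail)
		return -1;       /* ring full */
	r->buf[r->head] = word;
	r->head = next;
	r->doorbell = 1;         /* would trigger the peer's mailbox IRQ */
	return 0;
}

/* Consumer side: on the mailbox interrupt, drain the ring. */
static int ring_recv(struct vring_sketch *r, uint32_t *word)
{
	if (r->tail == r->head)
		return -1;       /* ring empty */
	*word = r->buf[r->tail];
	r->tail = (r->tail + 1) % RING_SLOTS;
	return 0;
}
```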

  8. I added file operations support over RPMSG. I had to make some minor changes in Zephyr OS because not everything was implemented there. An R5 processor-based application can make regular file operation calls, and they will be executed on the HL OS side over RPMSG. There is a service on the HL OS side that accepts and executes the file requests. At the moment this service is built as part of TI PSDK, i.e., it is part of its source code tree.
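To make the idea concrete, the request that travels over RPMSG can be pictured as a small fixed header plus a payload. The structure, field names, and sizes below are hypothetical, for illustration only; the project’s real wire format may differ.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical wire format for file operations forwarded over RPMSG. */
enum fio_op { FIO_OPEN = 1, FIO_READ, FIO_WRITE, FIO_CLOSE };

struct fio_req {
	uint32_t op;          /* one of enum fio_op */
	int32_t  fd;          /* remote fd, -1 for FIO_OPEN */
	uint32_t len;         /* number of valid payload bytes */
	char     payload[64]; /* path for open, data for write */
};

/* R5 side: wrap a regular open() call into a request that the
 * Linux-side service would execute and answer over RPMSG. */
static struct fio_req fio_make_open(const char *path)
{
	struct fio_req req = { .op = FIO_OPEN, .fd = -1 };
	size_t n = strlen(path);

	if (n > sizeof(req.payload))
		n = sizeof(req.payload); /* truncate over-long paths */
	req.len = (uint32_t)n;
	memcpy(req.payload, path, n);
	return req;
}
```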

  9. In order to interact with an application running on an R5 core, there is an application that is also part of the TI PSDK source tree. It can show log messages from the corresponding R5 processor, and it can also process rudimentary terminal input, send it to the R5 over RPMSG, and execute the command there. This is in addition to the Zephyr shell, which runs separately and communicates over a UART console.

  10. There is one more shell application for HL OS (Linux): a modified Dash interpreter that has a built-in RPMSG bridge and can interact with an application on an R5 core and execute commands there. These commands can be used within regular Dash scripts on the HL side.

  11. The next part is more specific to PX4 Autopilot: the HL OS Mavlink service, which interacts with R5-based applications over the RPMSG protocol. I am going to rework this part and make it one additional protocol endpoint for the mavlink-router application, which is available from the Mavlink repository.

My current project is based on TI PSDK ti-processor-sdk-rtos-j721e-evm-10_01_00_04. I copied the PDK part and put it in the Zephyr modules/hal directory, so the build process can access the required source headers and libraries there. The TI toolchain is located at its regular location: ~/ti/ti-cgt-armllvm_3.2.2.LTS.

The CMake projects for Zephyr and PX4 Autopilot are fused together: PX4 is built as part of the Zephyr application, but it still has its own Kconfig tree.
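In sketch form, the fused top-level CMakeLists.txt follows the usual Zephyr application pattern. The directory and target names below are placeholders, not my project’s actual layout.

```cmake
# Hypothetical sketch -- directory and target names are placeholders.
cmake_minimum_required(VERSION 3.20.0)

# Pull in the Zephyr build system; this creates the 'app' target.
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(px4zephyr)

# PX4 sources build into their own library, governed by their own
# Kconfig tree, and then link into the Zephyr application.
add_subdirectory(px4)
target_link_libraries(app PRIVATE px4_lib)
```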

At this moment the project state can be described as “work in progress.” It is not a production state yet, not even close.

My GitLab repos with the code described above:

  1. Zephyr_J7 / px4zephyr · GitLab

  2. Zephyr_J7 / zephyr-j721e · GitLab

  3. Zephyr_J7 / zephyr_px4 · GitLab

Note that repo 3 is a submodule of repo 1.

Thanks,

Oleg


Your work is awesome. Thank you for sharing!

Great work man

Could you share more about how this works? How do you run this service as part of the TI PSDK?

Zephyr OS implements the RPMSG protocol via third-party software called OpenAMP. RPMSG is a standard protocol. OpenAMP supports a number of low-level communication mechanisms; the resource table worked when I tested it.
There are more examples in the zephyr/samples directory.
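For the curious, the resource table itself is just a small header followed by resource entries, per the Linux remoteproc convention that OpenAMP follows. A minimal sketch of the header is below; the single-entry layout is for illustration, and real tables carry vdev/vring entries after the header.

```c
#include <assert.h>
#include <stdint.h>

/* Resource-table header as defined by the remoteproc convention:
 * a version, an entry count, and byte offsets to each entry,
 * measured from the start of the table. */
struct rsc_table_hdr {
	uint32_t ver;         /* must be 1 */
	uint32_t num;         /* number of resource entries */
	uint32_t reserved[2]; /* must be zero */
	uint32_t offset[1];   /* offsets of entries (single-entry sketch) */
};
```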

On Linux, I used this library: