Open Source Wildlife Camera Based On BeagleY-AI

We’re working on designing an open source wildlife camera for our work on conservation of endangered dormouse (family Gliridae), bat (family Vespertilionidae) and squirrel (family Sciuridae) species.

For this to be possible on the limited solar energy available in forests, Linux needs to stay in suspend-to-RAM, then wake and start processing video (from a MIPI CSI-2 camera, ideally through the ISP and encoder) within a few hundred milliseconds when a PIR detects an animal approaching.
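
To make the intended duty cycle concrete, here's a minimal sketch of the flow we have in mind (Python, run as root). The PIR is assumed to be wired to a GPIO marked as a wakeup source in the device tree, and the device path and GStreamer element names are placeholders rather than anything verified on the AM67A:

```python
#!/usr/bin/env python3
# Rough sketch of the intended duty cycle. Assumptions: the PIR is wired to a
# GPIO marked as a wakeup source in the device tree, the camera appears as
# /dev/video0, and an H.264 encoder is exposed as v4l2h264enc - the real
# pipeline will depend on the SDK image and ISP setup. Run as root.
import subprocess
import time

RECORD_SECONDS = 30  # length of each clip (assumes ~30 fps below)

def record_clip(path: str) -> None:
    # Placeholder pipeline - element names and caps must match the AM67A image.
    pipeline = (
        "v4l2src device=/dev/video0 num-buffers={n} ! "
        "video/x-raw,width=1920,height=1080 ! "
        "v4l2h264enc ! h264parse ! mp4mux ! filesink location={path}"
    ).format(n=RECORD_SECONDS * 30, path=path)
    subprocess.run(["gst-launch-1.0", "-e"] + pipeline.split(), check=True)

while True:
    # Writing "mem" requests suspend-to-RAM; the write blocks until a wake
    # event (ideally the PIR edge) resumes the system.
    with open("/sys/power/state", "w") as f:
        f.write("mem")
    woke_at = time.monotonic()
    record_clip("/media/sd/clip_%d.mp4" % int(time.time()))
    print("recorded clip, %.2f s after resume" % (time.monotonic() - woke_at))
```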

Would this be feasible with the BeagleY-AI (TI AM67A)?

Our ecologists would like video rather than still images, so the two MIPI CSI-2 interfaces, good vision pipeline and h.264 encoder on the BeagleY-AI will be very valuable. We hope to offer users a choice of low-cost general purpose MIPI CSI-2 camera modules with integrated optics and special purpose camera modules with mounts for custom optics, and we’ve been able to integrate DT (Device Tree) and kernel drivers for some ST, Sony and OmniVision image sensors with embedded Linux.

We’d like to be able to do simple CNN (Convolutional Neural Network) inference (for example, to determine whether an animal is present in the video or whether a PIR activation was caused by a branch moving in the wind), so the 4 TOPS MMA is a valuable feature.
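
As an illustration of the kind of check we have in mind, something along these lines - the tflite_runtime calls are standard, but the TIDL delegate name and options are assumptions based on TI's edgeai-tidl-tools and would need checking against the real SDK:

```python
# Sketch of an "is there really an animal?" check on a single RGB frame.
# The TIDL delegate path and options below are assumptions - without the
# delegate the same model simply runs on the CPU.
import cv2
import numpy as np
import tflite_runtime.interpreter as tflite

def load_interpreter(model_path: str, use_mma: bool = False) -> tflite.Interpreter:
    delegates = []
    if use_mma:
        # Hypothetical delegate name/options - check the TI Edge AI docs.
        delegates = [tflite.load_delegate(
            "libtidl_tfl_delegate.so",
            {"artifacts_folder": "/opt/model_artifacts"})]
    interp = tflite.Interpreter(model_path=model_path,
                                experimental_delegates=delegates)
    interp.allocate_tensors()
    return interp

def animal_probability(interp: tflite.Interpreter, frame_rgb: np.ndarray) -> float:
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]
    h, w = int(inp["shape"][1]), int(inp["shape"][2])
    # Assumes a float32 RGB input scaled to [0, 1]; real preprocessing must
    # match how the network was trained.
    x = cv2.resize(frame_rgb, (w, h)).astype(np.float32) / 255.0
    interp.set_tensor(inp["index"], x[np.newaxis, ...])
    interp.invoke()
    return float(interp.get_tensor(out["index"]).ravel()[0])
```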

At the moment we’re working on preparing 1.1 TB of training and test data. Because NN inference tends to be energy intensive, video could also be encoded and stored for later analysis when more solar energy is available. We’re planning to use solar panels and a rechargeable battery - likely LiFePO4 chemistry.

We’re also looking at a possible low power MCU implementation for situations where solar power is very limited and the full flexibility of Linux isn’t needed.

My background is in user space software engineering and work as a climbing arborist, so my knowledge of embedded systems is limited.

More about our conservation here:

and WildCamera:

We asked about current driver support for suspend-to-RAM

based on this original question in 2024:

Thank you very much!

Will

2 Likes

This fellow might know something… whoever that person is now?

Check this out: Using Edge AI — BeagleBoard Documentation

It is a bit lengthy but should appease you.

Seth

1 Like

Sounds like a great project!

The AM67A is a good candidate in terms of hardware. In terms of software it can likely do what you want, but it will be nowhere near as simple as developing for iOS/Android platforms. TI + embedded Linux tooling is far behind in terms of ease of getting started. For example, you may need to compile additional libraries yourself, figure out TI’s tools for porting models, and port your own camera driver if using CSI instead of USB.

The “Using Edge AI” link above will give you a sense of where things are at with BeagleY-AI.

I wouldn’t expect object detection algorithms to be a huge power drain if done efficiently. It’s probably marginal beyond operating the camera + OS. I’d imagine you want to do some sort of detection before bothering to save useless video to the SD card.

So if you’re excited about the challenge of embedded linux dev, I’d say go for it. But if you’re more focused on getting something working quickly I’d consider iOS/Android or at least another platform with more easy-to-use tooling for a camera app.

1 Like

Sorry I forgot to mention this project - it’s actually fairly close to what we need except for NN inference:

Hi Seth,

Thank you very much - I hadn’t seen that - that’s very helpful - the IMX219 is similar to some of the other Sony image sensors we’re planning to use.

Will

1 Like

Thank you very much!

Yes - integrating a MIPI CSI-2 camera into embedded Linux is very difficult - we’ve learned how to adapt device tree and kernel drivers and then rebuild the kernel with them, but it’s been a very steep learning curve with a lot to learn.

USB is an option, but as far as we could work out it tends to have higher power consumption than MIPI CSI-2? (The forest canopy absorbs c. 90% of solar energy, leaving only 10% available to power a device on the forest floor, so energy budgets are limited.)

In the past a limitation with USB in embedded devices was that additional hardware had to be added to power cycle the USB port if the USB bus crashed, but that seems to have been overcome now that some modern USB chipsets integrate that functionality. Another reason I hesitated about USB is that I wasn’t sure how long it would take from Linux waking from suspend-to-RAM to USB communication with the camera being established - the time budget there is fairly tight (ideally the ecologists would like something of the order of 200 milliseconds).

Requiring USB instead of MIPI CSI-2 also seems to push up the cost and limit the choice of image sensors and camera modules.
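
Once we have hardware, I'm planning to measure that wake latency with something as crude as the sketch below - it assumes the camera is usable as a plain V4L2 device at /dev/video0 after resume, which on a CSI + ISP path may not hold, in which case the open step would be a GStreamer pipeline instead:

```python
# Sketch for measuring resume-to-first-frame latency. Assumes the camera is
# usable as a plain V4L2 capture device at /dev/video0 after resume; on a
# CSI + ISP path this would likely need a GStreamer pipeline instead.
import time
import cv2

with open("/sys/power/state", "w") as f:
    f.write("mem")          # blocks in suspend-to-RAM until the PIR wakes us
t_resume = time.monotonic()

cap = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)
t_open = time.monotonic()
ok, frame = cap.read()
t_frame = time.monotonic()

print("device open: %.0f ms, first frame: %.0f ms, read ok: %s"
      % ((t_open - t_resume) * 1e3, (t_frame - t_resume) * 1e3, ok))
```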

I’d thought about Android early on, but I heard that building an Android image can be extremely difficult. Are there existing Android images for the AM67A that we could use?

I wouldn’t expect object detection algorithms to be a huge power drain if done efficiently. It’s probably marginal beyond operating the camera + OS.

Yes - we’ve got a prototype running on an i.MX 8M Plus which can do object detection on live video at over 30 fps on c. 4 Watts of power, so it does seem possible. I think the MMA on the AM67A is newer than the NPU on the i.MX 8M Plus, so I’m guessing it will likely also be more energy efficient?

The snag with the i.MX 8M Plus is that the vision pipeline seems to be complex and fairly difficult to configure - I get the feeling that the silicon, drivers and documentation have all accumulated complexity over the years as functionality has been added, resulting in a stack which is very powerful and flexible but also difficult to use. NXP seem to be planning a fresh start for the silicon, drivers and documentation with the pre-production i.MX 95.

When energy budgets are tight (for example, at night), video could be recorded to SD storage and NN inference (and possibly even h.264 encoding) delayed until the next day, with video that doesn’t contain animals then deleted. This might be necessary anyway: if the NN detects an animal in any frame of a video, the whole video would be kept - for our ecologists it’s very important to be able to watch the whole video to extract as much information about animal behavior as possible.
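
To make that concrete, the daytime clean-up pass could be as simple as the sketch below - sampling roughly one frame per second from each stored clip and deleting clips where the classifier never fires. is_animal() is a placeholder for whatever NN we end up with:

```python
# Sketch of the deferred clean-up pass: sample roughly one frame per second
# from each stored clip, keep the whole clip if the classifier fires on any
# sampled frame, otherwise delete it. is_animal() is a placeholder for
# whatever NN we end up with.
import glob
import os
import cv2

SAMPLE_EVERY_S = 1.0

def clip_contains_animal(path: str, is_animal) -> bool:
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * SAMPLE_EVERY_S))
    idx, found = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0 and is_animal(frame):
            found = True
            break
        idx += 1
    cap.release()
    return found

def prune_clips(clip_dir: str, is_animal) -> None:
    for clip in glob.glob(os.path.join(clip_dir, "clip_*.mp4")):
        if not clip_contains_animal(clip, is_animal):
            os.remove(clip)  # nothing detected in any sampled frame

# Usage would look something like:
#   prune_clips("/media/sd", lambda f: animal_probability(interp, f[:, :, ::-1]) > 0.5)
```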

or at least another platform with more easy-to-use tooling for a camera app.

What other platforms should we look at? At the moment we’re looking at an MCU implementation on the STM32N6570-DK evaluation board and an embedded Linux implementation, but if there’s an easier platform we could maybe use that for initial field studies while we work on the embedded Linux implementation.

Will

In the past a limitation with USB in embedded devices was that additional hardware had to be added to power cycle the USB port if the USB bus crashed, but that seems to have been overcome now that some modern USB chipsets integrate that functionality

Unfortunately USB is also still flaky on BeagleY-AI right now. My project doesn’t use USB so I haven’t looked into it, but it occasionally isn’t functional on boot. Not sure if it is caused by some hardware defect or just a bug in the driver. I assume there’s a workaround that could be added.

The one nice thing about USB cameras is they typically come with an integrated ISP. I’m going through the process of porting the IMX678 right now and while it’s fun to learn about tuning a raw CSI camera it’s also a lot of work 🙂

As a side note, the IMX678 may work much better for your low light conditions (larger + newer sensor, back illuminated, WDR).

I’d thought about Android early on, but I heard that building an Android image can be extremely difficult. Are there existing Android images for the AM67A that we could use?

By suggesting iOS/Android I meant an iPhone/Android phone. It sounds like that is far outside your power budget, but I mainly wanted to contrast how far ahead the software side of those platforms is. You’re generally dealing with a platform where the camera + AI inference is already integrated and the software development workflow is way simpler.

Hopefully someday some embedded platforms will get close to that but we’ve got a ways to go…

Yes - we’ve got a prototype running on an i.MX 8M Plus which can do object detection on live video at over 30 fps on c. 4 Watts of power, so it does seem possible.

FWIW my BeagleY-AI uses around 7 W when running inference, but I haven’t focused on minimizing that. This includes powering a fan, etc.

The snag with the i.MX 8M Plus is that the vision pipeline seems to be complex and fairly difficult to configure - I get the feeling that the silicon, drivers and documentation have all accumulated complexity over the years as functionality has been added, resulting in a stack which is very powerful and flexible but also difficult to use.

I would say it’s also complex for BeagleY-AI, but similarly powerful and with good documentation. TI clearly has a few customers with decent engineering resources using AM6xA for automotive applications.

What other platforms should we look at?

There may not be much better in terms of options. If low power prevents you from using iOS/Android I haven’t found any super easy-to-use embedded vision platforms.

Raspberry Pi has a few more CSI cameras already integrated and perhaps a few more case studies to work from. But I’m not sure how much power it would draw, especially if you needed to add one of those Hailo AI HATs.

STM32N6 could perhaps run at lower power but I bought one of those and found it difficult to use as well.

Using an RTOS instead of Linux may lower the software complexity and make it easier to run at lower power, but it reduces your flexibility to run more complicated applications.

1 Like

Yes - I think that sums it up exactly - having routine things done by an RTOS with Linux booting for more flexibility and advanced functionality would be ideal.

Another big limitation is that my background is in server-side software, electronics and work as a climbing arborist, so my knowledge of the kernel and embedded systems is very limited - in that situation, the simpler the better…

Yes - the Sony IMX678 is very much on our radar for exactly the reasons you described.

Have you had a look at the ST image sensors? They were launched over the summer and have some very clever features (microlenses on the chip, etc.) to get good image quality out of only 1.5 Mp (5 Mp sensors are under development) - ST also provide mainline drivers, which is a big advantage. The ST image sensors have some ISP functionality on-chip, but not all of it.

Can I join that via Element?

I see “framos” peeking out from the top of your camera - framos seem to provide very good open source drivers and device tree for their cameras!

Thank you very much - that’s valuable to know - we’re planning to use MIPI CSI-2 for the camera but USB can be very helpful for peripherals so good to know that USB support isn’t 100% yet.

What’s the quality of the documentation and drivers like? One difficulty we found with the i.MX 8M Plus is the variability of the documentation - some of it is very good and some isn’t.

Thank you very much - we received an STM32N6570-DK for evaluation yesterday - let us know if you have any advice. It’s a very advanced and powerful chip, but we haven’t started learning to configure it yet.

That sounds about right compared to our 4 Watts for NN inference on the i.MX 8M Plus. We’ll put a lot of thought into trying to make the NNs as simple and efficient as possible and put them through an optimisation step.

I did think about trying to find an open source Android smartphone that I could adapt but couldn’t find one suitable.

The RPi 5 has no hardware video encoder or NPU and none of the RPis support suspend-to-RAM so I used an RPi 4B for the first prototype then started the search for something more energy efficient.

Will

I am not sure what your power budget looks like; for most projects we are using the Jetson Orin Nano. It is one mean chunk of silicon - you can light up the GPU and it leaves everything else in the dust. You have the most options for implementing bleeding edge code for your project when using that one for the core.

1 Like

I have not taken a close look at those yet

Another big limitation is that my background is in server-side software, electronics and work as a climbing arborist

You seem to be pretty familiar with embedded dev so don’t sell yourself short 🙂

Can I join that via Element?

Haven’t heard of that, but some folks are active on Discord in #edge-ai – I’m active there with my project on the BeagleY-AI

I see “framos” peeking out from the top of your camera - framos seem to provide very good open source drivers and device tree for their cameras!

Framos is great. I picked them specifically because of the high quality docs + standard Raspberry Pi / BeagleY-AI CSI connector + open drivers

What’s the quality of the documentation and drivers like?

TI docs + Yocto support is solid. Unfortunately the software stack itself is extremely complex and fragmented, but I’ve generally found sufficient docs to at least explain it.

1 Like

The solar energy available on European forest floors is c. 10% of the solar energy available in the open - from our measurements, a nominally 450 Watt, 1.8 x 1.1 m solar panel gives c. 20 Watts during daylight in good weather from spring to autumn. In overcast weather much less solar energy is available, so there aren’t any exact power budgets.

A 1.8 x 1.1 m solar panel and large battery are no problem for me to carry around the forest but would be too bulky and too heavy for most ecologists to carry safely.

For a Linux implementation the power consumption when active isn’t as important as the power consumption in suspend-to-RAM - let me know if you have any figures on the power consumption of the Jetson Orin Nano in suspend-to-RAM - I couldn’t find any specifications for this.
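
To show why I keep asking about suspend power, here's the back-of-envelope arithmetic we use - every number except the c. 20 Watt harvest figure above is an assumption rather than a measurement:

```python
# Back-of-envelope energy budget. Apart from the c. 20 W harvest figure
# above, every number here is an assumption, not a measurement.
HARVEST_W = 20.0          # from the 450 W panel under canopy, good weather
DAYLIGHT_H = 8.0          # usable charging hours per day
SUSPEND_W = 0.05          # target suspend-to-RAM draw (assumption)
ACTIVE_W = 5.0            # capture + encode + inference (assumption)
EVENTS_PER_DAY = 40       # PIR triggers per day (assumption)
ACTIVE_S_PER_EVENT = 60   # seconds awake per trigger (assumption)

harvest_wh = HARVEST_W * DAYLIGHT_H
suspend_wh = SUSPEND_W * 24
active_wh = ACTIVE_W * EVENTS_PER_DAY * ACTIVE_S_PER_EVENT / 3600

print("harvested: %.0f Wh/day" % harvest_wh)
print("suspend: %.1f Wh/day, active: %.1f Wh/day" % (suspend_wh, active_wh))
# With these numbers the camera only uses a few Wh/day, which a modest
# LiFePO4 pack can ride out through overcast spells - but an idle draw of
# 1-2 W instead of tens of mW would dominate the whole budget.
```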

They’re well worth a look - particularly for Edge AI applications where the useful information in each pixel is more important than the pixel count (to process high pixel count images efficiently on edge NPUs it usually seems necessary to dramatically reduce the pixel count before the NPU).
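
For what it's worth, that pre-NPU reduction is usually just a resize/letterbox along these lines - the 256 x 256 target below is an assumption and has to match whatever the network was trained on:

```python
# Typical pre-NPU downscale: letterbox a full-resolution frame down to the
# small square input most edge classifiers/detectors expect. The 256x256
# target is an assumption - it must match the network's training setup.
import cv2
import numpy as np

def letterbox(frame_bgr: np.ndarray, size: int = 256) -> np.ndarray:
    h, w = frame_bgr.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(frame_bgr, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.zeros((size, size, 3), dtype=resized.dtype)
    y0 = (size - resized.shape[0]) // 2
    x0 = (size - resized.shape[1]) // 2
    canvas[y0:y0 + resized.shape[0], x0:x0 + resized.shape[1]] = resized
    return canvas
```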

The drivers and full data sheets for the ST image sensors are also fully open source, which is a big help.

Yes!

Thank you very much - the software stacks for vision pipelines seem to have developed like that on some SoCs - with more and more functionality being added over many years and many generations of chips - very high quality drivers and documentation seem to be key to navigating that.

I’ll try to get onto Discord as well as here - remind me if I don’t appear there.

A family of young hazel dormice (Muscardinus avellanarius) whose mother moved them into a carved nest hole in our Belgian field study.

2 Likes

I don’t use that functionality; most of the stuff we work on is in full power mode and not concerned with suspend-to-RAM.

With that power budget the BeagleY-AI might be your best solution. You have an extremely cool project.

1 Like

Hi @babldev @foxsquirrel @silver2row

People working on WildCamera are on Matrix on #wildcamera:chab.is or on IRC on #WildCamera - if you’d like to join that would be wonderful! 🙂 🙂 🙂

At the moment we’re looking at the BeagleY-AI as an alternative to the i.MX 8M Plus and the advantages and disadvantages of an MCU implementation on the STM32N6570-DK (MCU with integrated ISP, h.264 encoder and hardware NPU - not as flexible as an embedded Linux implementation but c. 1/10 of the power consumption).

@babldev Would you be interested in collaborating? Your work on integrating the BeagleY-AI and IMX678 is very interesting.

We’ve got a large volume of training and test data from the ecologists running the field studies for neural network training. Later on, we’ll hopefully be able to start evaluating different camera modules in the darkroom to see how image quality compares in both daylight and at night (IR LED illumination).

@babldev Could you suggest any links for the best documents for us to read to understand how to configure the vision pipeline from MIPI CSI-2 through the ISP to the h.264 encoder and neural network accelerator on the AM67A?

I found this:

and this for example but wasn’t sure if it was the best place to start:

Thank you very much for all your help!

Will

1 Like

@Will_Robertson Your project seems cool – wish I could help more but I’m focusing my efforts on my vision project on the AM67A + BeagleY-AI platform. I hope to share more resources/tools as I make progress, like that docs page above. E.g. I’m almost done adding IMX678 support and may do a quick write-up on that.

Other important docs I’d suggest – the RTOS SDK which includes important vision components used in the Edge AI image:

ISP tuning guide goes over adding new cameras
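
To give you a feel for what those docs are describing, the capture side of a pipeline on the Edge AI image looks very roughly like this. The tiovxisp/tiovxmultiscaler elements come from TI's edgeai-gst-plugins, but treat every property, path and sensor name here as a placeholder to check against the SDK examples:

```python
# Very rough shape of the capture path on the Edge AI image, written as a
# gst-launch string for readability. Element names (tiovxisp,
# tiovxmultiscaler) come from TI's edgeai-gst-plugins; every property,
# caps value and file path below is a placeholder to check against the
# SDK examples for your sensor.
import subprocess

pipeline = (
    "v4l2src device=/dev/video-imx219-cam0 ! "
    "video/x-bayer,width=1920,height=1080,format=rggb ! "
    "tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI "
    "dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin ! "
    "video/x-raw,format=NV12 ! "
    "tiovxmultiscaler ! video/x-raw,width=1280,height=720 ! "
    "v4l2h264enc ! h264parse ! mp4mux ! filesink location=/tmp/test.mp4"
)
subprocess.run(["gst-launch-1.0", "-e"] + pipeline.split(), check=True)
```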

1 Like

Hi Brady,

That’s what I was thinking - we could introduce the BeagleY-AI (TI AM67A) as a candidate platform for the Linux WildCamera implementation and make use of your work, while contributing our own work on integrating candidate image sensors and NN training?

For the IMX678, I can also put you in contact with someone who I think may have access to the full Sony data sheets, if that would be of any help?

I asked TI for more details on suspend-to-RAM support for the ISP and MMA on the AM67A - I can understand why it isn’t fully supported, so I also asked about other options - hoping to hear back over the next few days:

Thank you very much - those are an enormous help.

I don’t know if I fully understand “J722S” in the first document - is the TI AM67A a chip in the J722S family?

Does this mean that TI support running both an RTOS and Linux on the same AM67A silicon?

Thank you very much for your help!

Will

Had a very good discussion with the folks at TI about some important aspects of implementing this 🙂🙂🙂

Will