Hi,
I’m very sorry to take up people’s time with such a basic question.
What’s the best way to configure the vision pipeline from the MIPI CSI-2 interface, through the ISP, and out to the MMA and the H.264 encoder on the BeagleY-AI (TI AM67A)?
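For concreteness, the end state I’m imagining is something like the pipeline sketched below. This is only a guess based on TI’s AM62A edge-AI examples: I’m assuming the `tiovxisp` element from edgeai-gst-plugins and a `v4l2h264enc` encoder element are available on the AM67A image, and the device node, sensor name, caps, and DCC file path are placeholders rather than values I’ve verified. I’ve also left out the inference branch to the C7x/MMA, which I presume would tee off from the ISP output.

```bash
# Sketch only: raw Bayer frames from the CSI-2 RX are processed by the
# VISS (ISP) via tiovxisp, then hardware-encoded to H.264 and muxed to MP4.
# Device node, sensor name, caps, and DCC file path are placeholders.
gst-launch-1.0 \
  v4l2src device=/dev/video0 io-mode=dmabuf ! \
  video/x-bayer,width=1920,height=1080,format=rggb ! \
  tiovxisp sensor-name=SENSOR_SONY_IMX219_RPI \
           dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin ! \
  video/x-raw,format=NV12 ! \
  v4l2h264enc ! h264parse ! mp4mux ! \
  filesink location=/tmp/capture.mp4
```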
Am I right in thinking that the first choice we have to make is between the TI Yocto build and the beagleboard.org Yocto build for this board, and that from there I can start working on configuring the vision pipeline?
It’s for an open-source wildlife camera for our work on the conservation of endangered dormice (https://new-homes-for-old-friends.cairnwater.com/), so the main priorities are that we can configure the vision pipeline efficiently and take the system into suspend-to-RAM to save energy (as demonstrated in the BeaglePlay smart energy-efficient video doorbell example in the BeagleBoard documentation).
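On the suspend-to-RAM side, I’ve only got as far as the generic Linux interfaces; this is the kind of test I had in mind, assuming the kernel on the AM67A advertises a "deep" (suspend-to-RAM) state the way the doorbell demo’s platform does:

```bash
# Check which mem-sleep states the kernel advertises ("deep" = suspend-to-RAM).
cat /sys/power/mem_sleep

# For testing: suspend to RAM and have the RTC wake the board after 30 seconds.
sudo rtcwake -m mem -s 30
```

In the finished camera we’d presumably want a PIR sensor or similar as the wake source rather than the RTC, but that’s a later problem.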
I can understand that, because the BeagleY-AI (TI AM67A) has a very advanced ISP that can handle an RGB-IR (Red, Green, Blue, InfraRed) video stream rather than just the plain RGB stream that most ISPs process, configuring the vision pipeline is more complex than it would be for other ISPs.
We’ve shortlisted a handful of image sensors and got DT (Device Tree) and kernel drivers for them working with a 6.12 kernel.
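In case it helps, this is roughly how I’ve been sanity-checking each sensor once its DT overlay and driver are loaded (the media and video device nodes are just what they happen to be on my build):

```bash
# Print the media controller graph: the sensor, CSI-2 RX, and ISP entities
# and the links between them should all show up here.
media-ctl -p -d /dev/media0

# List the pixel formats and frame sizes the sensor's video node exposes.
v4l2-ctl --list-formats-ext -d /dev/video0
```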
This is one of the endangered dormice (Eliomys quercinus) moving nesting material into one of our carved nest holes, caught this time on one of the proprietary wildlife cameras:
We’ve got the following documentation:
- Adding new image sensor to PSDK RTOS
- the TI J721E Imaging User Guide
- the AM6xA ISP Tuning Guide
My background is as a climbing arborist and server-side software engineer, so unfortunately my knowledge of embedded systems is very limited.
Thank you very much for your help!
Will