Step #4 of the BeagleY-AI Demos, under the tensorflow_lite header

Hello,

Is anyone else finding that this file is no longer available?

pip install https://github.com/google-coral/pycoral/releases/download/v2.0.0/tflite_runtime-2.5.0.post1-cp39-cp39-linux_aarch64.whl

For whatever reason, I am getting a 404.

I am in the Demos section for the BeagleY-AI, located here: TensorFlow Lite Object Detection — BeagleBoard Documentation

Seth

1 Like

Whelp…I stand corrected. I can download the file but not use pip3 to install it. Off to try another route.
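
For anyone else who lands here: one route that may work now is the tflite-runtime package on PyPI, which carries aarch64 wheels for recent Python versions (pip3 install tflite-runtime), followed by a quick import check:

    # sanity check that the runtime installed and the Interpreter class is importable
    from tflite_runtime.interpreter import Interpreter
    print(Interpreter)  # should print the class; an ImportError means the wheel did not match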

Seth

1 Like

It works!

1 Like

What camera and display panel do you use?

1 Like

I have an older-model USB camera from Microsoft. I cannot remember the model name.

The panel is listed below; I went back through the purchase details:

Hosyond is the brand: a 5-inch IPS LCD capacitive touchscreen, 800×480, with an HDMI input (the Amazon listing targets the Raspberry Pi 5/4/3, BeagleBone Black, and Windows). It was purchased on Amazon in 2023.

2 Likes

Nice.
The HDMI interface works out of the box, and the panel is cheap.

I will buy one with rear mounting holes.

1 Like

Kernel 6.1.x, using the bb-imager-rs image that ships with kernel 6.1.x.

I found that combination works well for HDMI output.

Seth

2 Likes

That’s wonderful! :+1::+1::+1:

We’ve been working with a similar TFLite model (now renamed LiteRT, but I still call it TFLite), though with a MIPI CSI-2 camera rather than a USB camera, and the model we have doesn’t give a % confidence estimate. It looks like the BeagleY-AI is a very good platform for this?
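
For reference, a minimal sketch of where the confidence usually lives, assuming a standard SSD-style detection .tflite with the common postprocess outputs. The model path here is hypothetical, and the output ordering varies by model, so check get_output_details():

    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="detect.tflite")  # hypothetical detection model file
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    # feed one dummy frame with the input's expected shape and dtype
    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    interpreter.invoke()

    outs = interpreter.get_output_details()
    # common SSD postprocess layout: boxes, classes, scores, count (order can differ)
    scores = interpreter.get_tensor(outs[2]["index"])
    print(scores[0])  # per-detection confidence, 0.0 to 1.0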

1 Like

The BeagleY-AI is a good platform even without PRUs; if you do not need the fast motor access PRUs give for camera control, there is the R5FSS instead.

I have been reading what you are accomplishing. This is a feat.

Wildlife preservation is good. I enjoy wildlife in the wilderness, and seeing specific mammals, birds, and other animals always gets a chuckle out of me.

Unless they are predators looking to eat me, wildlife is a good subject to review and research. I am not that into it lately; the outdoor machinery I use to handle landscaping has lately taught me to hate the outdoors. Blah. On a different note, and not discussing mowing any longer, I did see that TFLite is now called LiteRT.

tflite-micro/tensorflow/lite/micro/examples/person_detection at main · tensorflow/tflite-micro · GitHub

Even though it was meant for Arduino, it can be ported to other architectures.

Now, how long before beagleboard.org can harness LiteRT in a build? No clue. I have seen the work you have listed about the models, like that fellow you were discussing cameras and sensors with for the mouse.

Taking 1,000 photographs of the dormouse will surely get the models noticing the mammal when it is present.

That is good work. A simple scroller of sorts that pays attention to beds, undergrowth, and the space under canopies will surely work well. A scroller in this sense is the hardware that follows the animal in question around by sensing that it really is that specific animal.

If LiteRT can do it with ordinary electrical hardware, good. Now that the BeagleY-AI is around for ISP use cases, yes, I think it is a good addition for anyone looking to get at specifics around the ISP.

I also read that the ISP is not 100% open source, even though the datasheet, TRM, etc. are available.

All that sales pitching, and I am just learning how to handle it too. I have used M4 cores on TI LaunchPads before today, and I am still learning how to use them.

It is never ending how much work goes into building, maintaining, and using these chips and boards.

Have you scoured the web for every angle of dormouse photos, or are you using live feeds and pulling stills from them for your models?

OLDI is also available as a display interface on it.

See here:

OLDI (LVDS) with touchscreen support
MIPI-DSI with touchscreen support (muxed with MIPI-CSI)

(taken from the BeagleY-AI page on the website dedicated to this board)

So, whatever you do, the BeagleY-AI webpage shows support for both DSI and CSI.

I went to MIPI online. Yikes. A lot of data.

Seth

P.S. I think you will need additional data first: for instance, which pixel formats the camera produces, which of those the CSI port can carry, and whether the AM67A can support it all.
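
A quick way to sanity-check the first two of those from userspace. This is a sketch assuming the camera enumerates as /dev/video0 and OpenCV is installed (v4l2-ctl --list-formats-ext reports the same thing from the shell):

    import cv2

    cap = cv2.VideoCapture(0)                  # assumes the camera shows up as /dev/video0
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)    # request a mode...
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
    # ...then read back what the driver and CSI pipeline actually granted
    print(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    cap.release()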

For instance, I use motors at times, purely for enjoyment for now. Silly, I know. Anyway, using one of the R5F cores may prove valuable for outside-of-Linux support.

1 Like

Hi Seth,

:+1: The quality of the drivers, documentation, Yocto build and community are most important for us - we don’t need fast motor control here (which is good - motors of any size would eat into the power budget).

I think TFLite models can sometimes be run directly on a device, but often they’re first put through an optimisation step, to optimise the model’s efficiency on a specific device, and compiled ahead of time.
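
The generic version of that optimisation step looks roughly like this: a sketch using TensorFlow's post-training quantization, with a hypothetical SavedModel path. Device-specific ahead-of-time compilation for the AM67A's accelerator would be a separate step on top of this:

    import tensorflow as tf

    # convert a trained SavedModel to TFLite with default post-training quantization
    converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/")  # hypothetical path
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

    with open("model_quant.tflite", "wb") as f:
        f.write(tflite_model)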

We’ve got 1.1 TB of test and training data at the moment. Because of the limited electrical energy available, we’re only aiming for the model to categorise whether a bird, a mammal, a moving branch, a distant vehicle, or a distant person caused the activation. That alone will make an enormous difference for ecologists; more advanced categorisation, like species identification, can be done by much deeper neural networks on more powerful hardware where the energy supply isn’t so limited.

There are existing open source data sets and NNs that have become very good at species recognition, so it’s better to use them for species recognition, and contribute to them, than it is for us to try to develop the deep CNNs (Convolutional Neural Networks) needed for that ourselves.

With the existing 1.1 TB of test and training data, we’ll run into problems with the amount of human time it takes to categorise all the data and the amount of computer time it takes to train the NN. We’ll see how things go with that amount of data before adding more.

Yes - a lot of very detailed information about the image sensor and its configuration is needed - this is usually specified in the device tree and driver for the image sensor.

Will

1 Like

Right…

After training, the model can be run on the BeagleY-AI.
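
Roughly like this, as a sketch: grab a frame, resize it to the model's input, invoke, and take the top class. The model file name is hypothetical, and a classifier-style output (one score per class) is assumed:

    import cv2
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="classifier.tflite")  # hypothetical trained model
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    _, height, width, _ = inp["shape"]

    cap = cv2.VideoCapture(0)  # assumes the camera shows up as /dev/video0
    ok, frame = cap.read()
    if ok:
        resized = cv2.resize(frame, (width, height))
        batch = np.expand_dims(resized, axis=0).astype(inp["dtype"])
        interpreter.set_tensor(inp["index"], batch)
        interpreter.invoke()
        scores = interpreter.get_tensor(out["index"])[0]
        print("top class index:", int(np.argmax(scores)))
    cap.release()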

I thought you said earlier that the ISP is not fully open source yet. I may have read it wrong, or read it from another source on this forum.

TI personnel do a good job with their TRMs and datasheets, and have for as long as I have been referencing them.

The beagleboard.org people do a really good job at promotion and at making things work with their hardware. Just for the record, I am not affiliated with their business; I am just an average person learning about things.

The DTS and driver work is up to the user at times, e.g. depending on whether one can get up to speed on what the beagleboard.org people are doing with specific images and kernels.

I am not handy with DTS; there will be others to assist you in that field. DTS is an ongoing effort on my part, i.e. how I learn it is up to me.

And A-okay about motors and movement. I figured you would use some type of scroller to scan the canopy and, once attached to a target, build upon that target’s data, if indeed it is, for instance, a dormouse or another mammal.

Just for instance, I am outdoors more often than most people and in all types of climates. I have done this outdoor work in fields and around brush, tall grass, and tree lines for years upon years.

I see some wildlife. From kites to king snakes and moccasins to marsupials, and even roadrunners, I see my slew of the animal kingdom.

I know it is highly irregular to propose tasks from my daily life. So, the only things I can offer are these facts about animals in general:

  • They move a lot.
  • If you want to catch/see/study them in their natural habitat, you have to move as much as them.

Like that old joke from that movie: “Be the ball.” In reality, no person can be a ball, or an animal in this context, so the closest thing is to mimic them and their movements. Silly. Yeah.

You would be surprised at all the wildlife that comes out of the woodwork, so to speak, when you move steadily and only stop when you have to collapse from exhaustion.

Outside of that idea and those tidbits, no matter how silly it reads, I think the project to study wildlife is top-notch.

Anyway, good luck. I hope you can post the project to this forum, whether its completion or a small section of the completed work.

Oh. TensorFlow. aarch64. Four-core. AM67A. R5FSS.

Things are changing fast in the Linux field with cost and efficiency (kernel stuff too).

Every time I turn on the ole Ubuntu Noble OS, I see kernel.org has a new kernel. From 6.1.x to mainline to next. They are pushing the envelope now.

I think kernel.org now has 6.17.x. Amazing.

Seth

P.S. For the NNs and CNNs, the datasets, and the ports from suppliers to targets, they keep making it easier somehow. Dormouse. Hmm. Tiny little mammals. Rodents! We have nutria out here. Man, they get big.

1 Like

@Will_Robertson ,

About the nutria “rats,” 20 lbs. is not uncommon. They had a couple take over the most populated park in the city. People kept feeding them. They kept coming back.

Seth

P.S. Anyway, good luck with your projects and I hope you make it back to display the finale. I found another project about conservation: Olympic Marten Project - Woodland Park Zoo Seattle WA

It is about martens, but still interesting.

1 Like