We are not able to capture VGA (640 x 480) video or frames with the BBB.
We’ve experimented with mplayer, luvcview, guvcview, cheese, opencv, simplecv, v4l, etc. and the behaviour seems consistent.
Lower resolutions like 320 x 240 work, but 640 x 480 doesn’t.
We’ve tested with Ubuntu, Fedora and Arch Linux BBB images.
We also tested with a Logitech camera capable of up to 1280 x 1024; oddly, 1280 x 1024 works, but 640 x 480 still does not.
The cameras work OK in other computers.
Could this be a bug in the hardware or the software?
João,
I am in the process of writing this up on my blog; however, I hope this information helps you.
I have recently run into the same issue on the BBB with a PS3Eye and this is what I have found.
Initially I was unable to capture any images at 640x480, but I could capture at 320x240.
I found this project v4l2grab and made changes to it to allow me to set the frame rate.
With the ability to set the frame rate, I discovered I could capture images at 640x480 if the frame rate was no more than 15 fps.
I could also capture 320x240 at up to 60 fps.
Doing some quick calculation, we see that 15 * 640 * 480 * 24 = 60 * 320 * 240 * 24,
where 24 is the bits per pixel (8 * 3) in the uncompressed image from the PS3Eye.
Both work out to just about 13.2 MB/s.
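A quick sanity check of that arithmetic in Python (the constants are the ones quoted above; MiB is used since 13,824,000 bytes / 1024² ≈ 13.2):

```python
def bandwidth_mib_per_s(width, height, fps, bits_per_pixel=24):
    """Bandwidth needed for uncompressed video at 24 bits per pixel (8 * 3)."""
    bits_per_second = width * height * fps * bits_per_pixel
    return bits_per_second / 8 / (1024 * 1024)  # bytes -> MiB

vga = bandwidth_mib_per_s(640, 480, 15)    # 640x480 at 15 fps
qvga = bandwidth_mib_per_s(320, 240, 60)   # 320x240 at 60 fps

print(f"{vga:.1f} MiB/s")  # both come out to about 13.2 MiB/s
assert abs(vga - qvga) < 1e-9
```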
I unloaded the usb audio driver for the PS3Eye, made sure no other usb devices were connected, and plugged the PS3Eye directly into the board so it would have the maximum bandwidth available. Long story short, it seems 13.2 MB/s is all it will handle.
I did some further testing with the same program and running off a Bodhi live cd and found that a laptop I had with an Intel 5 series 3400 usb controller could support a max of 30 fps at 640x480 while one with an Intel 6 series c200 was able to achieve the full 60 fps at 640x480 that the PS3eye is capable of.
If your camera supports compression, enabling it should allow you to capture larger images and/or at a faster frame rate.
You guys should check out Derek Molloy’s blog. He was able to capture high quality images at reasonable frame rates by using a camera that did H264 encoding internally.
He was able to stream at 1920x1080, 25fps with the BBB @ 12% CPU utilization.
You need to put a little more money in the camera to do this.
http://derekmolloy.ie/beaglebone-images-video-and-opencv/
http://derekmolloy.ie/streaming-video-using-rtp-on-the-beaglebone-black/
Don
Don,
Thanks for the links. As I said, if your camera supports compression, enabling it should allow you to capture large images and/or at a faster frame rate. I was simply detailing why cameras weren’t working so people can understand why they need to spend more money. I didn’t want to just say spend more money and it will work.
There is something interesting in the OP: the fact that with his camera, 1280x1024 actually works, but 640x480 does not.
Do you suppose that his Logitech camera is H264 capable, but maybe it doesn’t use H264 for the lower resolutions?
The command v4l2-ctl --list-formats-ext will list all available pixel formats and resolutions. When I execute that with my Logitech C920, it shows it is capable of H264, YUYV, and MJPG. I can pull single images up to 1920x1080 off of it using either H264, MJPG, or YUYV (uncompressed) when the frame rate is 30fps. I have successfully (in the sense that I got video with some dropped frames) captured video at 1920x1080 in both compressed formats as well, but I get select timeouts when uncompressed.
The PS3Eye plays a little differently. It transfers data in bulk mode as opposed to isochronous like most other webcams, and it seems to really tax the bus. Bulk guarantees delivery but not speed; isochronous is the opposite. Cameras using bulk transfers need to use compressed images even at low frame rates, or the data takes too long. Judging by the dropped frames with isochronous, relatively large captures can be sent; frames just get lost.
For the OP, I would suspect the camera is sending YUYV at 640x480. I have noticed when using OpenCV with Python, it doesn’t matter what pixel format I set using v4l2-ctl, OpenCV will switch to YUYV when I start a capture.
Just wanted to clarify on OpenCV and YUYV: I gave it as an example of a program that changes the pixel format to YUYV without being explicitly asked to. I didn’t mean to suggest the other programs the OP listed do the same. One interesting side effect I have noticed is that after I do a capture with OpenCV, if I open another video capture program, it will be set to YUYV as well, since v4l2 is used to control the pixel format and the last set pixel format (resolution and fps as well) persists across programs.
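For reference, the pixel formats v4l2 and OpenCV pass around are plain FOURCC codes packed into a 32-bit integer. A small sketch (the packing below is the standard little-endian layout; the cv2 lines in the comments show the usual way to request a format before the first read, assuming a camera at index 0):

```python
def fourcc(code):
    """Pack a four-character pixel-format code (e.g. 'YUYV', 'MJPG')
    into the little-endian 32-bit integer used by V4L2 and OpenCV."""
    return sum(ord(c) << (8 * i) for i, c in enumerate(code))

print(hex(fourcc("YUYV")))  # 0x56595559
print(hex(fourcc("MJPG")))  # 0x47504a4d

# With OpenCV, requesting a format before the first read looks like:
#   cap = cv2.VideoCapture(0)
#   cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
# Otherwise OpenCV may silently switch the camera back to YUYV.
```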
So will my YUYV cam always be slow?
I’m running the moustache placer software from here: http://beagleboard.org/project/stache
It produces a window that is probably 640x320, but I get what looks like a 320 feed and two underneath, one smaller than the other…
There are some settings in the file I haven’t managed to get to work to lower the resolution yet. It runs at 100% cpu, so all progress is slow.
But if I could benefit from a camera with compression, I’d be happy to get one, except I only need 320x240 resolution… so that the OpenCV processing can keep up.
Btw cheese works extremely well with my cam at 640x320; it drops a few frames, but you can move your hand in front of it and see it move in near real time, with a 1-2 sec delay. It’s not bad.
Did you register anything on CPU usage? I know the moustache-placing cv code runs at 100% CPU with a standard webcam. Is preprocessing less cpu intensive, or will cv code always push the BB to the max?
I didn’t think I’d rush to buy an h264 hardware-encoding cam if it’s the software, and running at 320 is just as efficient as pre-encoded HD video.
James,
I have not done any cpu testing other than while saving single frames. In MJPEG format, capturing a single frame at 1920x1080 30 fps and saving it took about 6% cpu. Doing the same in YUYV at a reduced resolution and converting to jpeg took about 70%. I would not rush out to buy an h264 camera. First, I am not sure OpenCV decodes h264 streams. Second, if it did, I am not sure you would see much improvement because of the need to decode the h264 stream. If the BBB does this in hardware, it likely won’t be bad, but if it is a software implementation we are just shifting the burden. I am currently working through several tradeoffs. YUYV should give the truest image, free of compression artifacts, but frames will be larger and require more resources to capture and process. MJPEG may suffer from compression artifacts, but images can be captured more quickly and at higher resolutions. One has to decide what combination of frame rate, resolution, and fidelity is required. I am currently working through the permutations to see what suits my application.
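To put rough numbers on that tradeoff (back-of-the-envelope only: YUYV is exactly 2 bytes per pixel, while MJPEG compression varies widely by scene; the 1:10 ratio below is an illustrative assumption, not a measured value):

```python
def yuyv_frame_bytes(width, height):
    # YUYV (YUV 4:2:2) stores exactly 2 bytes per pixel, uncompressed.
    return width * height * 2

def mjpeg_frame_bytes(width, height, ratio=10):
    # MJPEG size depends heavily on scene content; a 1:10 ratio against
    # the raw 4:2:2 frame is only an illustrative guess.
    return yuyv_frame_bytes(width, height) // ratio

raw = yuyv_frame_bytes(1920, 1080)            # 4,147,200 bytes per frame
print(raw * 30 / (1024 * 1024))               # ~118.7 MiB/s at 30 fps: far past the bus limit
print(mjpeg_frame_bytes(1920, 1080) * 30 / (1024 * 1024))  # ~11.9 MiB/s at the assumed ratio
```

Which is consistent with YUYV timing out at 1920x1080 while MJPEG keeps up.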
An option I have open is to send the captured images over a wifi link to a computer that runs all the OpenCV code and pipes back the data I am after. This leaves the BBB mostly free to do other work. I will update as I progress.
Matthew Witherwax
Matthew,
It reminds me of something on Derek Molloy’s site: he adds code to deal with the h264 codec for taking stills with opencv. I hesitated at the idea. Interestingly, even using an additional BBB or Raspberry Pi might not help. I’m looking at camera tracking of objects/object recognition and servo control.
I might find that by not displaying the video on the monitor, the tracking and servo control can run at less than 100% CPU.
I have yet to implement the cv code for that; it’s a real struggle to find good code that doesn’t throw errors. I have managed servo control through Python with manually entered numbers, so it’s ultimately possible.
I’m not sure if my camera does anything other than yuyv. I’d be interested in connecting an Ethernet cam to the Ethernet port and reading camera data from there, but you’d probably run into the same problem. Ethernet cam to a PC processing the cv code, then wifi to the BeagleBone for servo control might be good, but it’s probably a slow process and too complicated for me.
Anyway good luck with your project.
James
Getting another BBB or Raspberry Pi probably won’t help, but a U2 from HardKernel here http://www.hardkernel.com/renewal_2011/products/prdt_info.php?g_code=G135341370451 probably would. You can use the BBB to do all the IO and use the processing power of the U2 to handle compute-intensive tasks. That is actually my end goal. The cpu use I stated earlier was without showing the image. Displaying the image will increase cpu use.
I am also working on a tracking application with the webcam mounted on a servo. For my purposes, I do not need to see the image on the BBB and can push it over wifi to my laptop if I want to view it.
As far as cv code, what are you looking for? I have posted a tool on my blog here http://blog.lemoneerlabs.com/post/shades-of-red to help you find HSV values from images so you can threshold target colors. I will post another one on using HSV ranges to threshold an image to isolate things like a red colored ball. I have been implementing OpenCV code in Python right now for experimentation, but you should be able to translate it to C++ or your language of choice.
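As a dependency-free sketch of the thresholding idea (Matthew’s tool uses OpenCV; the stdlib `colorsys` version below only illustrates the HSV range test, and the bounds chosen for “red” are illustrative, not his values):

```python
import colorsys

def in_hsv_range(rgb, h_lo, h_hi, s_min=0.5, v_min=0.3):
    """Return True if an (r, g, b) pixel (0-255 each) falls inside the
    given hue range (hue in [0, 1)) with minimum saturation and value."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    if s < s_min or v < v_min:
        return False
    if h_lo <= h_hi:
        return h_lo <= h <= h_hi
    return h >= h_lo or h <= h_hi  # hue range wrapping around 0 (red does this)

# Red wraps around hue 0, which is why two bounds straddling 0 are needed.
print(in_hsv_range((200, 30, 30), 0.95, 0.05))  # bright red -> True
print(in_hsv_range((30, 200, 30), 0.95, 0.05))  # green -> False
```

In OpenCV the same test is done in bulk over a whole frame with `cv2.inRange`.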
The command v4l2-ctl --list-formats-ext will tell you the pixel formats and resolutions supported by your camera.
Hope this helps,
Matthew Witherwax
Thanks Matthew,
I’ll recheck the camera to see what it produces.
I have seen the U2 before. I was hoping not to spend much more… I might. The development is for a PC-less version of http://projectsentrygun.rudolphlabs.com/
Indeed I won’t need to see the video stream directly; similarly, a wifi module would be nice for capturing images of tracked objects. I’m only planning on using water pistols though, nothing as beefy as some people.
The red ball tracking would be a reasonable start. I’m not well versed in opencv. The PC/Java version uses a particular form of tracking. I wouldn’t mind adding a colour detector to avoid my cat, or whatever colour collar she has on.
Indeed, your colour detector code might be just the thing to detect a particular colour range on a specific item.
Cheers
James
James,
I have posted the application and write up for thresholding a target color here http://blog.lemoneerlabs.com/post/hsv-threshold
Capture code to follow soon.
Matthew Witherwax
Thanks Matthew,
I did send you a message through the site. I don’t know if that works; it’s basically a question as to whether I have opencv set up correctly and the PYTHONPATH configured. It might actually be well worth posting on the site so anyone can have a go.
Regards,
James
Keep up the good work. I look forward to giving the code a whirl.
James,
I received your email. To install OpenCV and Python, I first installed Python then OpenCV using Arch’s package manager. What distro are you using?
Ah,
Just read things through a bit more. I’ll probably give Arch Linux a go rather than Angstrom 3.8.13.
I’m no longer sure exactly how I installed python and opencv. It’s Python 2.7.
I can run C++ programs, but Python eludes me apart from the Adafruit setup, and they have a load of setup files…
James
James,
I wanted to let you know I posted the code I am using to capture images and time webcams on my BBB here http://blog.lemoneerlabs.com/post/BBB-webcams
I will take a look at what is required to run OpenCV with Python on Angstrom when I have a free moment.