I read out a USB webcam with my own C++ program. For this I use the
v4l2 API and the v4l2_read() function. Everything works fine at a
resolution of 160x120, but when I set the resolution to 320x240 or
640x480 the CPU load jumps to 98%. With the smaller resolution the
load is 4%.
I have no display manager on my beagle, only the console image, so I
can't use gstreamer. This is the error I get with gstreamer:
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/
GstXvImageSink:xvimagesink0: Could not initialise Xv output
Additional debug info:
xvimagesink.c(1668): gst_xvimagesink_xcontext_get (): /
GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
Could not open display
Setting pipeline to NULL ...
FREEING pipeline ...
I read the images in a single thread (without displaying them) and
count the frames in another thread, where I wait for the next image.
At 160x120 I get 30fps @ 4% CPU load, at 320x240 17fps @ 99%, and at
640x480 4fps @ 99%. Also at 640x480 the main output of my program is:
libv4lconvert: Error decompressing JPEG: fill_nbits error: need 9 more
bits
libv4lconvert: Error decompressing JPEG: fill_nbits error: need 7 more
bits
.....
libv4lconvert: Error decompressing JPEG: unknown huffman code:
0000ffd9
My webcam is a Philips SPC1030NC which uses the uvc driver.
Hi,
I have not done any webcam work without a GUI desktop. I will need to
eventually, so I need to learn more about that combination. Really I
wanted to reassure you that far better performance can be had. Higher
resolutions do top out, though: 320x240 tops out at about 20fps and
640x480 at about 10-15fps.
I think the problem is the pixel format conversion by libv4l2. When I
set the pixel format to YUYV in my program, I have a CPU load of 17%
at 30fps with 640x480. But I can't work with this pixel format, so I
have to convert it to RGB, and then the CPU load is at 99%. At 320x240
the load is 2%, and with conversion to RGB it is 33%. You can see it
very clearly when the pixel format is MJPEG: then I have a load of <1%
at 30fps and 640x480.
How do I change the pixel format? I am facing the same kind of issue
with my webcam program. The default in my program is YUYV; how can I
change to different pixel formats and try them out?
You can use a struct of type "v4l2_format" to set some camera
settings. When you have a struct of type "v4l2_format" called
"format", you can set the width, height and pixel format like this:
format.fmt.pix.width = width;
format.fmt.pix.height = height;
format.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB24;
The possible values are described in the v4l2 API. Then you have to
send the structure to the camera:
v4l2_ioctl(fd, VIDIOC_S_FMT, &format);
Does any uvc camera support V4L2_PIX_FMT_RGB24?
Logitech UVC cameras (at least the ones I have tried) only support:
V4L2_PIX_FMT_MJPEG
V4L2_PIX_FMT_YUYV
(see http://forums.quickcamteam.net/index.php)
Both lead to very slow processing on the Beagleboard.
I don't know, but I think libv4l2 converts automatically from MJPEG/
YUYV to RGB24. And the conversion creates a high cpu load.
Is there a simple way to let the DSP decode the JPEG data or convert
from YUYV to RGB?
Are there any cameras that transfer raw RGB data over USB? I don't
think so...