Hello Vladimir,
I have seen a lot of your posts and your support in the BeagleBoard group. I actually
asked my question in the Google Group, but I couldn't get the help I expected.
I hope you can help me out.
Still, the mailing list is the place to ask.
I am using a Logitech HD 910 webcam, and I have installed OpenCV on both
Angstrom and Ubuntu. My board is a BeagleBoard-xM rev C.
A simple webcam streaming program runs successfully on both OSes,
but there is a lot of delay in the stream. Based on what I read in
forums, I planned to use the on-chip DSP for this operation.
Why do you need OpenCV at all if all you want is to stream video?
GStreamer and VLC should both be able to do that for you.
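For example, a minimal sketch of both options (untested; assumes a UVC camera at /dev/video0 and the GStreamer 0.10 tooling used elsewhere in this thread):

```shell
# Show the webcam on screen without OpenCV (GStreamer 0.10 syntax).
# /dev/video0 is an assumption -- substitute your actual camera node.
gst-launch v4l2src device=/dev/video0 ! ffmpegcolorspace ! ximagesink

# Or let VLC handle the capture and display directly:
vlc v4l2:///dev/video0
```

ffmpegcolorspace is there so the sink does not have to accept the camera's native pixel format directly.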
Babu, if you don't want to do the research yourself, then you need something like "opkg install opencv". This instruction is probably wrong, but I did not research it either.
Thanks for the reply.
I want to do real-time color detection and object detection.
I am not aware of how to do it using the DSP.
1) ignore OpenCV and your camera for now
2) look into DSPLink and Codec Engine (CE), or maybe C6Run
2a) use Angstrom, because it already has DSPLink and CE
3) run the DSPLink/Codec Engine/C6Run examples
4) write your own simple DSP algorithm (or codec)
5) successfully run your simple codec/algorithm inside DSPLink/CE/C6Run
6) implement your "color detection and object detection" for the DSP
Beware that making a really fast DSP algorithm is not simple;
in most cases you will end up with something that could run
at a similar speed on the ARM, unless you put a lot of effort into it.
Of course you can use the DSP to offload work and keep the
ARM free for other tasks, but that means you need a software
framework on the ARM that can schedule DSP work in parallel
with ARM work.
and only once that works:
7) integrate your DSP algorithm into OpenCV.
Recommended reading material (list is not exhaustive):
Thanks a lot for your great help!!
I just want to know: does anybody use the on-chip DSP on any BeagleBoard or BeagleBoard-xM? I am asking because they may know the practical challenges of an actual implementation, or some tricks to make it happen.
Speaking of GStreamer: the following page contains a great set of
examples of how to use GStreamer with a webcam:
Ask a concrete question once you get to one; asking meta-questions like that
won't get you far.
Hello Vladimir,
I am trying to follow your instructions to use the DSP. I went through all the references and docs, but most of the info seems outdated. I already have a card with Angstrom running; do I need to make a new one to use the DSP?
Can I use the Narcissus online tool to create a new Angstrom image that includes the DSP-related files?
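Before building a new image, it may be worth checking whether the Angstrom feeds already carry the DSP bits. A sketch; the package names below are guesses, not verified:

```shell
# Search the Angstrom package feeds for DSP-related packages.
# The grep patterns and example names are assumptions -- install
# only whatever the search actually returns.
opkg update
opkg list | grep -i -e dsplink -e codec-engine -e c6run
# e.g. (hypothetical names):
# opkg install ti-dsplink ti-codec-engine
```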
I have run into a problem while trying to get webcam streaming working on my BB-xM, which has an LI-5M03 (mt9p031) camera attached.
I am currently trying to view the camera images on the screen. I am able to do that using mplayer as follows:
# sudo mplayer tv:// -tv driver=v4l2:device=/dev/video6:outfmt=uyvy:width=320:height=240
However, I get an error when I try something similar using GStreamer:
# sudo gst-launch -e v4l2src device=/dev/video6 ! filesink location=./test.data
I have tried various combinations of v4l2src and sink elements, but the error I get is:
"v4l2src: Couldn't negotiate format"
It seems the problem is related to the output format of the LI-5M03 camera, which is UYVY, and the GStreamer elements are not expecting/supporting UYVY. Does anyone have experience that could confirm my findings?
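To see which pixel formats the capture driver actually advertises, v4l2-ctl (from v4l-utils, if it is installed on the image) can list them; a sketch:

```shell
# List the pixel formats offered by the capture driver on /dev/video6.
# If UYVY is the only entry, the caps negotiation failure above
# would be consistent with that.
v4l2-ctl --device=/dev/video6 --list-formats
```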
To check your hypothesis, you can try to insert ffmpegcolorspace
between source and sink.
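A sketch of what that pipeline could look like (untested; assumes GStreamer 0.10 with the ffmpegcolorspace element available):

```shell
# ffmpegcolorspace converts from the camera's native format (UYVY here)
# to whatever the downstream sink can accept.
gst-launch -e v4l2src device=/dev/video6 ! ffmpegcolorspace ! ximagesink
```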
I also tried a basic example:
# gst-launch -e v4l2src device=/dev/video6 ! dspmp4venc ! filesink location=./test.mp4
But I got an error:
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Device '/dev/video6' cannot capture in the specified format
Additional debug info:
gstv4l2object.c(2218): gst_v4l2_object_set_format (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Tried to capture in YU12, but device returned format UYVY
Setting pipeline to NULL ...
Freeing pipeline ...
I have tried lots of combinations trying to get the camera image onto the screen on the BB-xM, but with no success so far.
ximagesink gives a different error than filesink; any comment/feedback on the reason?
root@arm:# gst-launch -e v4l2src device=/dev/video6 ! ximagesink
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Could not negotiate format
Additional debug info:
gstbasesrc.c(2830): gst_base_src_start (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Check your filtered caps, if any
Setting pipeline to NULL ...
Freeing pipeline ...
root@arm:# gst-launch -e v4l2src device=/dev/video6 ! filesink location=./test.data
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Device '/dev/video6' cannot capture in the specified format
Additional debug info:
gstv4l2object.c(2218): gst_v4l2_object_set_format (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Tried to capture in YU12, but device returned format UYVY
Setting pipeline to NULL ...
Freeing pipeline ...
I also tried ffmpegcolorspace, colorspace, and autoconvert, but with no success; same error as copied below.
root@arm:# gst-launch -e v4l2src device=/dev/video6 ! video/x-raw-yuv,format=(fourcc)YU12 ! ffmpegcolorspace ! filesink location=./test.data
WARNING: erroneous pipeline: could not link v4l2src0 to ffmpegcsp0
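Since the earlier log showed "Tried to capture in YU12, but device returned format UYVY", one more thing worth trying is to request the camera's native UYVY explicitly in the caps filter and convert after the source instead. A sketch, untested (width/height values are assumptions; caps syntax per GStreamer 0.10):

```shell
# Ask v4l2src for the format the driver actually produces (UYVY),
# then let ffmpegcolorspace convert it for the display sink.
gst-launch -e v4l2src device=/dev/video6 \
  ! 'video/x-raw-yuv,format=(fourcc)UYVY,width=320,height=240' \
  ! ffmpegcolorspace \
  ! ximagesink
```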