Analog Video Processing

1. Is this supposed to be the official forum for the beagle board?

2. Is there some way to make the beagle board like an arduino that
accepts analog video input (and doesn't require an OS)? I already know
that using a USB camera is supposed to be the best way to do things
but I have very specific requirements. A USB camera is not going to
cut it. I don't want to mess with Linux or WinCE (because they are
massive time vampires). I shouldn't need them. I'm not making a media
center like everyone else. I don't need any of the other inputs/
outputs on the BB. Just analog video in (which doesn't even exist on
it.)

I just want each frame from any single analog video source converted
to something like 720x576px and stored in a 2d array. I can handle the
rest from there. I've been searching and searching for something that
meets these requirements with no luck. Any tips would be greatly
appreciated.

Hi awefwrs,

1. Probably this forum is not in the US Government Forums Register
yet...

2. Please see "AM/DM37x Multimedia Device Technical Reference Manual"
section "6. Camera Image Signal Processor". Also see "BB-xM schematic"
page 8 "BeagleBoard-xM uSD, CAMERA, EXPANSION, & UART" (pay attention
to P10 "Camera Connector") and "BeagleBoard-xM System Reference
Manual" section "8.20 CAMERA PORT".

Cheers,
Max.

why don’t you go for some dedicated video processing controller if you don’t want to use the other peripherals?
is it possible for you to work on this beagleboard without an OS like linux/win-ce?
if yes, how?

I just want each frame from any single analog video source converted
to something like 720x576px and stored in a 2d array. I can handle the
rest from there. I've been searching and searching for something that
meets these requirements with no luck. Any tips would be greatly
appreciated.

I think the simplest way would be just to use a ~$10 USB video grabber.
Something like this one:
http://www.amazon.com/EasyCAP-DC60-Creator-Capture-High-quality/dp/B002H3BSCM/ref=sr_1_5?ie=UTF8&qid=1298370442&sr=8-5
. This one returns the "right" colorspace for DSP-based processing.

Regarding whether to use Linux or not - I think having Linux is a big
advantage. With the V4L2 API and the grabber I mentioned above, you can *very*
easily get what you need - a 2D pixel array. I guess it would take you
much more time to achieve the same without an OS (unless you are an
assembler and hardware ninja with all 3000 pages of the OMAP manual in
your head :slight_smile: ). Depending on what you want to do with the raw frame, you
can continue to benefit from further libraries such as, for example,
GStreamer, which offers a lot of DSP-based video processing algorithms.
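As a rough illustration (the 720x576 dimensions and the YUYV pixel format are assumptions here, not something the grabber guarantees), once V4L2 hands you a frame buffer, getting the 2D pixel array you asked for is just a few lines:

```c
/* Sketch: unpack the luma plane of a 720x576 YUYV frame into a 2D array.
 * Dimensions and pixel format are illustrative assumptions. */
#include <stddef.h>

#define WIDTH  720
#define HEIGHT 576

/* In YUYV, bytes go Y0 U Y1 V, so luma is every second byte. */
static void frame_to_2d(const unsigned char *frame,
                        unsigned char out[HEIGHT][WIDTH])
{
    for (size_t y = 0; y < HEIGHT; y++)
        for (size_t x = 0; x < WIDTH; x++)
            out[y][x] = frame[2 * (y * WIDTH + x)];
}
```

From there you can index `out[y][x]` like any other 2D array and do whatever processing you like.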

yeah, all your algorithms will work easily on an OS with the help of libraries…

without an OS you will have to write a very large amount of C/assembly code… which will take millennia… :smiley:

show me some! wtf. I'd love to find a dedicated video processing
controller. where are they??

I read up on that and actually found a site that sells camera boards
for the BB: http://shop.leopardimaging.com/category.sc?categoryId=9
Are there other sites that sell camera boards for the BB that are made
specifically to plug into the 26-pin interface (whatever it is
called)?

can you give me an example where I would need libraries?
I'm not making a media center like everyone else. I couldn't care less
about all of the other ports on the BB. All I need is power, video in
and PWM out. That's it. I'm not doing anything standard like blurring,
sharpening, color tracking, or any of that nonsense. The problem I've
found with linux is the countless hours you have to spend toiling away
just to get the simplest of things working. It's horribly inefficient.
I'd like to avoid all of that if I can.

This is the only source.

Gerald

no problem. where can I find example code where people are capturing
frames from one of these cameras without using linux (or with linux if
that isn't available)?

look at cmucam3. Just google it. I’ll sell you mine if you want one.

Mark

hmm… what do you want to do with the captured video? see it on TV?
why don’t you get a wireless transmitter and send it to your home…
why do you want a beagleboard for that??

can you give me an example where I would need libraries?
I'm not making a media center like everyone else. I couldn't care less
about all of the other ports on the BB. All I need is power, video in
and PWM out. That's it. I'm not doing anything standard like blurring,
sharpening, color tracking, or any of that nonsense.

Even if you do not need libraries, you need to properly initialize
your hardware, access storage media to load your executable, etc.
Do you think that the guys who, for example, work on bootloaders have
written this whole bunch of code just because they are crazy
technocrats having fun writing unnecessarily overcomplicated code?
The answer is definitely no. It is simply necessary for proper
initialization. You can use it or try to write it on your own. But
then be prepared to write the same amount of code (or maybe just a little bit
less) and, what is more important, to understand the underlying hardware
very, very deeply. I doubt that you would save any time going this route.

The problem I've
found with linux is the countless hours you have to spend toiling away
just to get the simplest of things working. It's horribly inefficient.
I'd like to avoid all of that if I can.

I cannot understand how you can make faster progress than, for example:
- go to the Narcissus website and generate a working image for the BB
- write it to an SD card
- google for "video4linux grabbing frame example"
- download, compile and run the example.

Doing it on your own, you will spend *much* more time just
preparing a reasonable cross-compiling toolset, not to mention all
the problems that follow.

image processing. that's what. for the sake of simplification just
think of it as motion detection.

maybe you're right. that Narcissus site was a great tip (Angstrom was
what I was going to use anyway if it turned out I had to use linux).
1. I'm a linux newb though. what do i do with the Video4Linux
tarball?
2. Do you know of a more detailed tutorial that is as close as
possible to what I'm trying to do, which I can just refer to instead of
asking you guys a million basic linux questions?

1. I'm a linux newb though. what do i do with the Video4Linux
tarball?

Nothing. You do not need the tarball. V4L is just a library which abstracts
access to video hardware. Correspondingly, what you need are the header
files and compiled libraries to use in your program. Fortunately,
as I remember, they will be included in the Narcissus image if you ask
(check) for the native SDK.

2. Do you know of a more detailed tutorial that is as close as
possible to what I'm trying to do, which I can just refer to instead of
asking you guys a million basic linux questions?

Maybe I just overlooked something in the previous posts, but most of
the time there were statements about what you do *not* want to do (no
linux, no libraries, etc.). To give reasonable advice, I have to
better understand what you are trying to achieve. So maybe you can
restate the problem and the goals you want to achieve? Not *how* (without
linux, etc.) but *what*.

Just as a guess, maybe the following links could be helpful for you:

- http://v4l2spec.bytesex.org/spec/capture-example.html - an example of how
to capture raw frames with the V4L2 API.

- http://v4l2spec.bytesex.org/spec/ - Video for Linux Two API Specification

- www.google.com - useful search engine :slight_smile:

The finished product will be an SBC that takes input from a camera,
performs my own CPU-intensive version of motion detection, and outputs
servo positions. I'm trying to find the path of least resistance. I
already have a product that I sell that runs on windows machines.
What would be awesome is something just like an arduino (but a whole
lot more powerful) where a camera was a digital input that returned a
2d array of the current frame when requested. I guess life can't be
that simple though.
From the info I've gathered so far it looks like I'm going to have to
toil away with linux. That Narcissus tip was very helpful. Say writing
the boot image to the SD card was step one and trying out that V4L
example code you referenced was the last step. Could you detail the
steps in between? (My background is in C#. Messing around with header
files and manually importing libraries is not my forte.)
1. buy BB, other hardware and camera
2. get boot image from Narcissus
3. write Angstrom image to SD card
4. insert card and power up the BB
5. ?
6. ?
7. ?
8. test V4L sample code

The finished product will be an SBC that takes input from a camera,
performs my own CPU-intensive version of motion detection

As it was mentioned recently in this forum, there is a good *DSP
accelerated* image processing library which might be useful for you:

The following article could be also interesting for you:
http://www.hbrobotics.org/wiki/index.php5/Beagle_Board#Using_the_BeagleBoard.27s_DSP_for_vision_processing

The reason I am referencing these sites is because you said "my own
CPU intensive version". The ARM CPU is not very powerful. However, there
is a DSP which can dramatically improve the performance of image
processing algorithms. TI's library mentioned above provides typical
image processing building blocks which are DSP-optimized.

and outputs servo positions.

How is the position set? PWM? If yes, then consider the following. There
are, in general, three obvious ways to generate PWM signals on the BB:

1. Use the three available hardware PWM generators. Here you can find more
information: BeagleBoardPWM - eLinux.org

2. If you need more than three PWMs, use GPIO pins to generate the PWM
programmatically (a loop with sleep). Taking into account that you will
also be running a calculation-intensive algorithm on the CPU, this could
negatively influence the PWM timing to the level where the servos start
doing strange things. A solution for this could be either xenomai.org or
https://rt.wiki.kernel.org/index.php/Main_Page . I personally have not
managed to get either of them running on the BB so far. But taking into
account that you were going to boot the BB manually without any OS, maybe
you will manage to get these real-time extensions running on the BB :wink:

3. Use additional hardware (for example an Arduino) to generate the PWMs. You
can communicate over a serial port or I2C between the micro-controller and
the BB.

I'm trying to find the path of least resistance. I
already have a product that I sell that runs on windows machines.
What would be awesome is something just like an arduino (but a whole
lot more powerful) where a camera was a digital input that returned a
2d array of the current frame when requested. I guess life can't be
that simple though.

Well... almost :slight_smile:

BTW, you can take a look at Gumstix
(http://www.gumstix.com/store/catalog/index.php) which is a small form
factor embedded computer with the same processor as BeagleBoard.

From the info I've gathered so far it looks like I'm going to have to
toil away with linux. That Narcissus tip was very helpful. Say writing
the boot image to the SD card was step one and trying out that V4L
example code you referneced was the last step. Could you detail the
steps in between? (My background is in C#. Messing around with header
files and manually importing libraries are not my forte.)
1. buy BB, other hardware and camera

It is also important not to forget a serial-to-USB cable. This is the
simplest way to communicate with the BB from a PC (using minicom on Unix or
HyperTerminal on Windows).

2. get boot image from Narcissus
3. write Angstrom image to SD card

Not only the Angstrom image, but also MLO, u-boot and uImage (the kernel) on
the first FAT partition of the SD card. Consider reading the installation
instructions on the Angstrom web-site.

4. insert card and power up the BB
5. ?

Connect ethernet cable to BB and make sure that your network is up and running.

6. ?

Use scp (or pscp from PuTTY if you are on Windows) to copy the V4L example to the BB.

7. ?

Log into the BB with ssh, or using just the terminal (with the serial-to-USB cable
I mentioned above), and compile the V4L example which I mentioned in my
previous post: gcc example.c -o example -lv4l2 (or something similar)

8. test V4L sample code

Make sure that you can see dots output on the terminal. Each dot
corresponds to the "processed" frame.

9. Reimplement

static void process_image(const void *p)
{
  fputc ('.', stdout);
  fflush (stdout);
}

with your image processing algorithm (p is the pointer to the frame).

10. Contribute back to the community by documenting problems you were
facing and posting corresponding solutions to this forum, the BB Wiki
or any other places which might be appropriate.

Man, I really, really appreciate your help so far. I'll summarize the
steps so far and then I have questions about some of them.

To summarize:
1. buy BB, serial-to-usb cable, usb camera (logitec pro 9000)
2. get boot image from Narcissus
3. write Angstrom image, MLO, u-boot and uImage (kernel) on the first
FAT partition on SD card.
4. insert card and power up the BB
5. Connect ethernet cable to BB and make sure that your network is up
and running.
6. Use pscp from putty (if you are on windows) to copy V4L example to
BB
7. Log into BB with ssh or using just terminal (with serial to usb
cable
I mentioned above) and compile V4L example which I mentioned in my
previous post: gcc example.c -o example -lv4l2 (or something similar)
8. test V4L sample code
Make sure that you can see dots output on the terminal. Each dot
corresponds to the "processed" frame.
9. Reimplement
static void process_image(const void *p)
{
  fputc ('.', stdout);
  fflush (stdout);
}

1. why do I need a serial to usb cable, or a network cable? can't
everything be done over USB instead?
3. so there are 4 separate files? (Angstrom image, MLO, u-boot and
uImage) I thought there were 3 (MLO, u-boot and uImage).
5. why can't the usb cable be used for that instead? it's not a big
deal either way, I'm just curious.
6. where exactly do I copy the V4L example to on the BB? just anywhere
on the SD card or does it have to go into a specific folder?
7. ok so we're compiling code on the BB itself? I thought we compile it
on the connected PC and then copy the executable over to the BB. Can
you be more specific?

Man, I really, really appreciate your help so far.

You are welcome.
This is also the reason why you should not forget the step 10 I
mentioned in my previous email:

10. Contribute back to the community by documenting problems you were
facing and posting the corresponding solutions to this forum, the BB Wiki
or any other places which might be appropriate. Your experience could
also be helpful for someone else.

To summarize:
1. buy BB, serial-to-usb cable, usb camera (logitec pro 9000)

If you are not going to use the DSP for image processing, the Logitech
9000 Pro is OK. The problem with this camera is the "wrong" color space
supported - YUYV instead of UYVY, which seems to be the preferred
format for TI's DSP algorithms. As a result, a color-space conversion
will be required for each frame, which could be a rather expensive step.
That is why I was pointing to the EasyCap video grabber, which I know
returns the "right" colorspace.

The big advantage of the 9000Pro vs. typical analog cameras/grabbers
is the superior image quality.

2. get boot image from Narcissus
3. write Angstrom image, MLO, u-boot and uImage (kernel) on the first
FAT partition on SD card.

Not exactly. You need two partitions on the SD card - FAT and ext2. MLO,
u-boot and uImage go on the FAT partition, and the Angstrom image should be
unpacked onto the ext2 one.

I *strongly recommend* that you read the installation guide on the Angstrom web-site:
http://www.angstrom-distribution.org/demo/beagleboard/

1. why do I need a serial to usb cable, or a network cable? can't
everything be done over USB instead?

No. Later, when your BB is set up and running, you can use the USB cable as a
network interface. But for setup, I would recommend the serial-to-USB cable.

As an alternative, you can consider connecting a normal monitor to the DVI
output and a keyboard over a powered USB 2.0 hub. I've never tried this
myself, so I am not sure if the output will go to the monitor "by
default".

3. so there are 4 separate files? (Angstrom image, MLO, u-boot and
uImage) I thought there were 3 (MLO, u-boot and uImage).

Please read explanations here:
http://www.angstrom-distribution.org/demo/beagleboard/

5. why can't the usb cable be used for that instead? it's not a big
deal either way, I'm just curious.

Because when booting, the kernel output goes to the *serial* console.
That is why you need a serial cable (serial-to-USB) to see it, as well
as to log in to the BB.

6. where exactly do I copy the V4L example to on the BB? just anywhere
on the SD card or does it have to go into a specific folder?

Anywhere you like. I would suggest developing in your home directory
and then placing the executable in /usr/local/bin.

7. ok so we're compiling code on the BB itself? I thought we compile it
on the connected PC and then copy the executable over to the BB. Can
you be more specific?

Both are possible. However, I thought that for a small program, and as a
first step, native compilation would be much easier to set up. Later on,
if you like, you can also set up a cross-compilation environment.