The finished product will be an SBC that takes input from a camera,
performs my own CPU-intensive version of motion detection
As was mentioned recently in this forum, there is a good *DSP
accelerated* image processing library which might be useful for you:
The following article could also be interesting for you:
http://www.hbrobotics.org/wiki/index.php5/Beagle_Board#Using_the_BeagleBoard.27s_DSP_for_vision_processing
The reason I am referencing these sites is that you said "my own
CPU intensive version". The ARM CPU is not very powerful. However, there
is a DSP which can dramatically improve the performance of an image
processing algorithm. TI's library mentioned above provides typical
image processing building blocks which are DSP-optimized.
and outputs servo positions.
How is the position set? With PWM? If so, then consider the following.
There are, in general, three obvious ways to generate PWM signals on the BB:
1. Use the three available hardware PWM generators. You can find more
information here: BeagleBoardPWM - eLinux.org
2. If you need more than three PWMs, use GPIO pins to generate the PWM
programmatically (a loop with sleeps); see the sketch after this list.
Taking into account that you will also be running a computation-intensive
algorithm on the CPU, this could degrade the PWM timing to the point
where the servos start doing strange things. A solution could be either
xenomai.org or https://rt.wiki.kernel.org/index.php/Main_Page . I
personally have not managed to get either of them running on the BB so
far. But considering that you were going to boot the BB manually without
any OS, maybe you will manage to get these real-time extensions running
on it.
3. Use additional hardware (for example an Arduino) to generate the
PWMs. You can communicate between the microcontroller and the BB over a
serial port or I2C; a minimal I2C sketch follows below as well.
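
To illustrate option 2, here is a minimal sketch of bit-banging one PWM
channel from user space via the sysfs GPIO interface. The GPIO number,
the pulse widths, and the assumption that the pin was already exported
and configured as an output are all placeholders to adapt; and without a
real-time kernel the timing will jitter:

#include <fcntl.h>
#include <time.h>
#include <unistd.h>

#define GPIO_VALUE "/sys/class/gpio/gpio139/value"  /* placeholder pin */
#define PERIOD_NS  20000000L                        /* 20 ms servo frame */

int main(void)
{
        long pulse_ns = 1500000L;   /* 1.5 ms pulse = servo center */
        struct timespec high = { 0, pulse_ns };
        struct timespec low  = { 0, PERIOD_NS - pulse_ns };
        int fd = open(GPIO_VALUE, O_WRONLY);

        if (fd < 0)
                return 1;

        for (;;) {
                write(fd, "1", 1);      /* drive the pin high... */
                nanosleep(&high, NULL);
                write(fd, "0", 1);      /* ...then low for the rest of the frame */
                nanosleep(&low, NULL);
        }
}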
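
And for option 3, sending a servo command to a microcontroller over I2C
from Linux user space looks roughly like this. The bus device /dev/i2c-2,
the slave address 0x10, and the two-byte "index, position" protocol are
all assumptions; you would define the real protocol on the Arduino side:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c-dev.h>

int main(void)
{
        int fd = open("/dev/i2c-2", O_RDWR);   /* placeholder bus */
        unsigned char msg[2] = { 0, 90 };      /* servo index, position */

        if (fd < 0)
                return 1;
        if (ioctl(fd, I2C_SLAVE, 0x10) < 0)    /* placeholder address */
                return 1;
        write(fd, msg, 2);                     /* "set servo 0 to 90" */
        close(fd);
        return 0;
}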
I'm trying to find the path of least resistance. I
already have a product that I sell that runs on Windows machines.
What would be awesome is something just like an Arduino (but a whole
lot more powerful) where a camera was a digital input that returned a
2D array of the current frame when requested. I guess life can't be
that simple though.
Well... almost 
BTW, you can take a look at Gumstix
(http://www.gumstix.com/store/catalog/index.php), which is a small
form-factor embedded computer with the same processor as the BeagleBoard.
From the info I've gathered so far it looks like I'm going to have to
toil away with linux. That Narcissus tip was very helpful. Say writing
the boot image to the SD card was step one and trying out that V4L
example code you referenced was the last step. Could you detail the
steps in between? (My background is in C#. Messing around with header
files and manually importing libraries are not my forte.)
1. buy BB, other hardware and camera
It is also important not to forget a serial-to-USB cable. This is the
simplest way to communicate with the BB from a PC (using minicom on Unix
or HyperTerminal on Windows).
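For example, on Linux something like "minicom -b 115200 -D /dev/ttyUSB0"
should work, assuming the adapter shows up as /dev/ttyUSB0 (the BB's
serial console runs at 115200 8N1).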
2. get boot image from Narcissus
3. write Angstrom image to SD card
Not only the Angstrom image, but also MLO, u-boot, and uImage (the
kernel) need to go on the first FAT partition of the SD card. Consider
reading the installation instructions on the Angstrom web site.
4. insert card and power up the BB
5. ?
Connect an Ethernet cable to the BB and make sure that your network is up and running.
6. ?
Use scp (or pscp from PuTTY if you are on Windows) to copy the V4L example to the BB.
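Something like "scp example.c root@<BB-IP-address>:" should do it;
substitute your board's actual IP address.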
7. ?
Log into the BB with ssh, or use just a terminal (with the serial-to-USB
cable I mentioned above), and compile the V4L example which I mentioned
in my previous post: gcc example.c -o example -lv4l2 (or something similar)
8. test V4L sample code
Make sure that you can see dots being printed on the terminal. Each dot
corresponds to one "processed" frame.
9. Reimplement
static void process_image(const void *p)
{
        fputc('.', stdout);
        fflush(stdout);
}

with your image processing algorithm (p is the pointer to the frame data).
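
To give you an idea, here is a minimal sketch of brute-force frame
differencing as a replacement for process_image(), assuming the camera
delivers YUYV frames; WIDTH, HEIGHT, and THRESHOLD are placeholder
values to adapt, and your real motion detection algorithm would go here
instead:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define WIDTH     320  /* placeholder frame size */
#define HEIGHT    240
#define THRESHOLD 15   /* luma difference counted as "motion" */

static unsigned char prev[WIDTH * HEIGHT];   /* previous frame's luma */

static void process_image(const void *p)
{
        const unsigned char *frame = p;
        static unsigned char cur[WIDTH * HEIGHT];
        long changed = 0;
        int i;

        /* In a YUYV frame every second byte is a luma (Y) sample. */
        for (i = 0; i < WIDTH * HEIGHT; i++)
                cur[i] = frame[2 * i];

        /* Count pixels whose brightness changed noticeably. */
        for (i = 0; i < WIDTH * HEIGHT; i++)
                if (abs(cur[i] - prev[i]) > THRESHOLD)
                        changed++;

        memcpy(prev, cur, sizeof(prev));

        /* Print the motion measure; derive your servo positions from it. */
        printf("%ld\n", changed);
        fflush(stdout);
}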
10. Contribute back to the community by documenting the problems you
faced and posting the corresponding solutions to this forum, the BB
Wiki, or any other place which might be appropriate.