Human presence detection project

My post was not showing up in the emails, so I am reposting my project idea.

Hi,
I am Mohammad T Rahman, currently in the final year of my PhD. My research interests include image processing, algorithms for the digital image pipeline, and human-device interfaces.
I would like to propose a project on the BeagleBoard using both its ARM and DSP sides. The title of the project could be something like: Human presence detection in a secure environment. A USB camera attached to the BeagleBoard will cover the whole room or an ROI within it. If any person enters the room without prior authentication, the system shall detect the position of the person, take a snapshot (perhaps a zoomed shot of that position; if optical zoom is not available on the camera, we can do some digital zooming), and send that information to another computer or mobile device along with a warning message. For human detection in the scene, a skin-color-based approach will be used. If the lighting condition of the room is known, a predefined model can be used; if not, an adaptive model will be used.
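As a rough sketch of the predefined-model case, using the OpenCV C API and commonly used Cr/Cb skin ranges (the exact bounds are only illustrative; the adaptive model would update them from the scene):

#include <opencv/cv.h>

/* Mark skin-colored pixels with a fixed chrominance model.
 * The Cr/Cb bounds below are common starting values and would be
 * tuned (or adapted on-line) to the actual room lighting. */
IplImage* skin_mask(const IplImage* frame_bgr)
{
    IplImage* ycrcb = cvCreateImage(cvGetSize(frame_bgr), IPL_DEPTH_8U, 3);
    IplImage* mask  = cvCreateImage(cvGetSize(frame_bgr), IPL_DEPTH_8U, 1);

    cvCvtColor(frame_bgr, ycrcb, CV_BGR2YCrCb);

    /* Accept any luma (Y); threshold only the chrominance channels. */
    cvInRangeS(ycrcb, cvScalar(0, 133, 77, 0), cvScalar(256, 174, 128, 0), mask);

    cvReleaseImage(&ycrcb);
    return mask;  /* non-zero pixels are skin candidates */
}

Connected regions in the mask would then give the position used for the snapshot.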

This is just a high-level overview; I will post more details soon. Let me know your ideas and comments on this.

Mohammad Rahman
PhD student
University of Texas at Dallas

This sounds like established technology. What particular value will your implementation add? Who will want to use your version, and why?

Thank you for your reply.
I haven't seen any full solution of this on the BeagleBoard platform. What I am proposing is a cheap, software-based solution. Solutions for face detection or human detection already exist, but when it comes to robustness, and to implementing them on a resource-limited platform without any dedicated hardware engine, things might not be that simple.

Previously we worked on face detection and pose estimation on mobile platforms. The same argument came up there: is there any value, is there anything new? We found that when it comes to real-time operation, very few robust solutions are out there. So I am looking for a solution that will be:
a) robust,
b) real-time,
c) without any dedicated hardware support, and
d) better than existing solutions in terms of speed and robustness.

Rahman

I believe any of the listed tasks requires DSP processing if you need real-time. I think none of the existing ARM-based platforms can do this without the help of the DSP. So we return to "dedicated hardware support", which means that the DSP usually requires specific instructions.

Hi Rahman,
Would you be able to port an existing OpenCV algorithm to the DSP and use it for this application?
BR,
Leo

There is an OpenCV algorithm for face detection. I need to look into it and find out whether they have something closer to this.
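For reference, the stock Haar-cascade face detector in the OpenCV C API can be driven roughly like this (just a sketch; the cascade file and parameters are illustrative and would need tuning):

#include <opencv/cv.h>

/* Count faces in a grayscale frame with OpenCV's stock Haar cascade.
 * The cascade is loaded once elsewhere, e.g.
 *   cascade = (CvHaarClassifierCascade*)cvLoad("haarcascade_frontalface_alt.xml", 0, 0, 0);
 */
int detect_faces(IplImage* gray, CvHaarClassifierCascade* cascade,
                 CvMemStorage* storage)
{
    cvClearMemStorage(storage);
    CvSeq* faces = cvHaarDetectObjects(gray, cascade, storage,
                                       1.1,  /* scale step of the image pyramid */
                                       3,    /* min neighbors to accept a hit   */
                                       CV_HAAR_DO_CANNY_PRUNING,
                                       cvSize(30, 30)); /* smallest face searched */
    return faces ? faces->total : 0;
}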

Rahman

Maxim Podbereznyy wrote:

I believe any of the listed tasks requires DSP processing if you need real-time.

Well, it is a question of definition.

You don't need a DSP to be realtime. Realtime doesn't mean "fast response"; it means that the maximum latency between an event and its detection (in the case of a surveillance system) has a defined upper bound. In other words, a realtime system may have a latency of an hour or more.

So, how much delay can we accept between event and detection? If we are talking about a hard X-ray system that must detect your finger so it can shut down within a millisecond (to prevent further harm), the Linux side may not cut it. If, on the other hand, you just want to detect "someone in the room", a quarter of a second should be more than okay, and that can be done without any DSP.

No need to complicate things. Imho everything can be done on the ARM,
even using an interpreted language.

Cheers,
  Nils

Estevez, Leonardo wrote:

Hi Rahman,
Would you be able to port an existing OpenCV algorithm to the DSP and use it for this application?
BR,
Leo
  

I'd like to add that it takes some serious low-level hardware DMA/IRQ hacking to get better performance out of the DSP than the ARM can deliver.

If someone wants proof: just try to add 1 to each byte of an image (a four-liner in C, sketched below) and compare the result against native ARM code.
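For concreteness, that baseline is just something like this (plain C; a contiguous 8-bit buffer is assumed):

#include <stddef.h>
#include <stdint.h>

/* The "four-liner" benchmark: add 1 to every byte of an image buffer.
 * Time this on the ARM, then compare against a DSP port of the same loop. */
void add_one(uint8_t *pixels, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        pixels[i] = (uint8_t)(pixels[i] + 1);
}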

I don't want to discourage anyone, but it takes a lot more than "compile the thing on the DSP and add an intrinsic or two".

Cheers,
      Nils

Hi Nils,

  I don't think you should expect the same acceleration for any code compiled on the DSP versus the ARM. The Cortex architecture has a dual instruction pipeline, which makes it especially adept at executing flow-control code (lots of if/then). It is also especially good at the bit manipulation and logical functions commonly found in flow-control code. The DSP is a VLIW vector processor, which makes it good at crunching a lot of matrix operations. The ARM Neon coprocessor is a SIMD coprocessor, which also makes it good at crunching some common matrix operations.
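  As a rough illustration of the SIMD point, your byte-increment benchmark written with Neon intrinsics might look something like this (a sketch; it assumes a contiguous buffer whose length is a multiple of 16):

#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

/* The byte-increment loop vectorized with Neon intrinsics:
 * 16 pixels are loaded, incremented, and stored per iteration.
 * Assumes n is a multiple of 16; a scalar tail loop would handle the rest. */
void add_one_neon(uint8_t *pixels, size_t n)
{
    const uint8x16_t one = vdupq_n_u8(1);
    for (size_t i = 0; i < n; i += 16) {
        uint8x16_t v = vld1q_u8(pixels + i); /* load 16 bytes           */
        v = vaddq_u8(v, one);                /* add 1 to each lane      */
        vst1q_u8(pixels + i, v);             /* store the 16 bytes back */
    }
}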

  If your algorithm/kernel is executing a diversified series of matrix operations in a recursive loop, C compiled DSP code may provide the best performance improvement.

  If you are running a combination of flow control code and recursive matrix operations using cached data, the ARM/DSP combination may provide better performance than ARM/Neon simply because Neon will share the ARM L2 cache when you use cached data (the DSP has an independent memory architecture).

  Handling DDR memory interactions well (as you point out) is critical to the development of real-time systems. When we're talking about real-time video systems, there is often a tradeoff between quality and real-time execution. That is, if you want to analyze video at a higher resolution and frame rate, you need to be able to conduct your interframe processing at a higher speed. Some applications aren't realizable until you get to a certain resolution and frame rate, simply because the video you are processing is not at a high enough temporal/spatial resolution to reliably determine whether the information you are looking for is in the scene (e.g., whether a specific person or object of interest is moving through the scene).

BR,
Leo