My name is Kartik Nighania. I am an undergraduate in Electronics and Communication Engineering at the National Institute of Technology Surat (India), presently in my 2nd year
of college, and a member of Drishti (the student robotics chapter of our college).
I would love to contribute to the BeagleBoard community and participate in GSoC. Even if I am not selected, I would like to get guidance from the mentors to continue my project.
My college has recently acquired a hexacopter with a Pixhawk autopilot running ArduPilot, with a BeagleBone as the brain. My task is to mount a servo-driven camera module
for aerial recording, and working on this project has given me hands-on experience with the BeagleBone Black.
I am very excited to work on this project from the ideas page:
"Process Sensor Data in Real-Time"
Port/implement MAV (drone) optical flow/stereo image processing to PRUs, use BBIO Ardupilot platform
Over the past 2 years I have gained
exposure to ARM Cortex-M4F based TIVA 32-bit microcontrollers and
the AVR family of microcontrollers (ATmega32, ATmega128, ATmega2560, Arduino Uno)
through the projects we do in Drishti.
I have done various image-processing projects using the OpenCV library in
Qt 5.1, MATLAB R2015a, and Visual Studio 2010.
I am proficient in C/C++ and Java, but new to Python, and I installed Linux only 3 months ago.
I have started using the BeagleBone Black and am getting more familiar with it.
I am new to the PRU, for which I am referring to the documentation and studying code written by contributors,
and I will soon be comfortable using it.
Could I please get a more detailed explanation of what has to be done in the project
and what the final outcome demands?
A big thank you.
Congratulations to BeagleBoard.org for GSoC 2016!
Can someone please help me clear my doubts?
From the project I mentioned earlier, I have a few doubts:
- What camera specifications does the project require?
There are only 12 kB of shared memory between the PRU and the ARM core, which is an issue that must be taken care of later.
So maybe the Linux host app I will be making will have to transfer the images in parts.
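As a rough sketch of that chunking (plain Python; the 12 kB figure is from above, but the QVGA grayscale frame size is just my assumption for illustration):

```python
# Rough estimate: how many transfers one frame would need if it has to
# pass through the 12 kB PRU shared RAM in pieces.
# The frame size (320x240 QVGA, 8-bit grayscale) is assumed for illustration.

SHARED_RAM_BYTES = 12 * 1024      # 12 kB PRU shared memory
FRAME_BYTES = 320 * 240 * 1       # QVGA, 1 byte per pixel

chunks = -(-FRAME_BYTES // SHARED_RAM_BYTES)  # ceiling division
print(FRAME_BYTES, "bytes per frame ->", chunks, "chunks")  # 76800 -> 7
```

So even a small grayscale frame needs several round trips, which is why the host app has to stream the image in pieces.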
The app will use the OpenCV image-processing library, with functions like cv2.calcOpticalFlowPyrLK() and various filters, to get the
optical-flow vectors.
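To build intuition for what the flow computation does (separately from cv2.calcOpticalFlowPyrLK, which I would use in practice), here is a tiny pure-Python sketch that tracks one patch between two frames by brute-force SAD search; the function name, frame sizes, and search radius are all my own illustration:

```python
def track_patch(prev, curr, y, x, size=3, radius=2):
    """Find the (dy, dx) shift of a size x size patch at (y, x) in `prev`
    that best matches `curr`, by brute-force sum-of-absolute-differences.
    Frames are plain 2D lists of 8-bit values (illustration only)."""
    def sad(dy, dx):
        return sum(abs(prev[y + i][x + j] - curr[y + dy + i][x + dx + j])
                   for i in range(size) for j in range(size))
    return min(((dy, dx) for dy in range(-radius, radius + 1)
                         for dx in range(-radius, radius + 1)),
               key=lambda d: sad(*d))

# Toy frames: a bright 3x3 block that moves right by 1 pixel.
prev = [[0] * 10 for _ in range(10)]
curr = [[0] * 10 for _ in range(10)]
for i in range(3):
    for j in range(3):
        prev[4 + i][3 + j] = 255
        curr[4 + i][4 + j] = 255

print(track_patch(prev, curr, 4, 3))  # flow vector of the patch -> (0, 1)
```

The real pyramidal Lucas-Kanade in OpenCV is far more refined, but the output has the same meaning: one displacement vector per tracked point.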
I am looking for a mentor for this project.
Maybe mentors like Alex Hiam and Kumar Abhishek can help me with this…
I am currently studying stereo/optical-flow image processing and will implement it with OpenCV.
Under the ideas page:
Process Sensor Data in Real-Time
Port/implement MAV (drone) optical flow or stereo image processing to PRUs, use “Blue” or Black (via BBIO cape) as Ardupilot platform
In discussion on the IRC channel I was told that the input images will be taken over USB and not by the PRU unit.
For a micro aerial vehicle (MAV), the BeagleBone-as-ArduPilot platform is BeaglePilot, right? So then
what is the role of the PRUs in the processing part?
Since I am using the OpenCV library for the image processing, all the I/O and processing is done on Linux itself,
so the output of the OpenCV algorithm can be provided to BeaglePilot on the software side itself.
Also, the hardware implementation of BeaglePilot is done in Erle-Brain using a PixHawk cape, which luckily we have.
Moreover, mentor Alexander Hiam gave me the task of
implementing stereo imaging with dual webcams and a USB host using Python.
It will be completed within the next 3-5 days, as it took time to meet the hardware requirements.
Eagerly waiting for a reply.
What we’ve been discussing in IRC is using the PRU for the stereo processing - i.e. send the two images from the webcams to the PRU and get a depth map back. So that means a fixed-point (no floating point on the PRU) implementation of something like this: http://arxiv.org/pdf/1412.6153.pdf on the PRU, and probably a kernel driver to configure it and pass the images back and forth.
I suggested you look at how this would be done in Python with OpenCV because that combination provides a really quick way to prototype that kind of thing - so the goal of the project would be to hopefully greatly reduce the latency of that stereo image processing.
With the depth map generation offloaded to the PRU, it would free up more resources for other sorts of OpenCV processing as well, such as object detection, which could then be overlaid on the PRU-generated depth map.
Finally everything is crystal clear to me. Thanks for your time.
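To make sure I understood, here is the kind of integer-only block matching I plan to prototype in Python before porting it to the PRU. This is just a minimal sketch of SAD scanline matching with toy images (all names and sizes are my own illustration, not the algorithm from the linked paper):

```python
def disparity_row(left, right, y, block=3, max_disp=4):
    """Integer-only SAD block matching along one scanline.
    For each x in the left image, pick the disparity d whose block
    in the right image (shifted left by d pixels) matches best."""
    width = len(left[0])
    out = []
    for x in range(max_disp, width - block + 1):
        def sad(d):
            return sum(abs(left[y + i][x + j] - right[y + i][x - d + j])
                       for i in range(block) for j in range(block))
        out.append(min(range(max_disp + 1), key=sad))
    return out

# Toy stereo pair: a bright block appears 2 pixels further left
# in the right image, so its disparity should come out as 2.
left = [[0] * 12 for _ in range(5)]
right = [[0] * 12 for _ in range(5)]
for i in range(3):
    for j in range(3):
        left[1 + i][5 + j] = 255
        right[1 + i][3 + j] = 255

print(disparity_row(left, right, 1)[1])  # disparity at the block -> 2
```

Since everything here is integer arithmetic (absolute differences, sums, comparisons), the same structure should map onto the PRU's fixed-point-only instruction set.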
Work is in progress on the PRU coding: taking two low-resolution images and performing an image addition on the PRU. It would indeed be awesome to compare the result against the one done by OpenCV and see what performance boost the PRU gives.
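For that comparison, the reference behaviour I am checking against is OpenCV-style saturating 8-bit addition (cv2.add clamps at 255 rather than wrapping around). A plain-Python model of what the PRU code should reproduce (function name and toy images are mine):

```python
def add_u8_saturate(a, b):
    """Element-wise 8-bit image addition with saturation at 255,
    matching OpenCV's cv2.add() behaviour on uint8 images."""
    return [[min(pa + pb, 255) for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# Toy 2x2 images: 250 + 100 saturates to 255 instead of wrapping to 94.
img1 = [[100, 200], [250, 10]]
img2 = [[100, 100], [100, 10]]
print(add_u8_saturate(img1, img2))  # -> [[200, 255], [255, 20]]
```

Getting the saturation right matters, because plain 8-bit addition on the PRU would wrap around and the two outputs would disagree on bright pixels.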
I have written my application and request the mentors to give their feedback so I can improve it. I will try my level best.
Implementation of stereo image processing using the PRU in a MAV running ArduPilot on the BBB
After carefully examining your views, I have made the necessary amendments.
I request all the mentors to kindly give their feedback.
Thanks again, Steven Arnold.