Greetings!
My name is Ryan Done. I’m a graduate student at the University of California, San Diego, working on my Master’s in Computer Music. I also hold a B.Sc. in Computer Science from Simon Fraser University, BC, Canada.
I’m very interested in contributing to a Google Summer of Code project this year. In my last semester at SFU, I participated in UCOSP, contributing to the Review Board project, and in another course worked with BeagleBone Black hardware on an embedded music visualizer controlling addressable LED lights.
Given my current academic focus on computer music, audio, and DSP, I’m particularly interested in delving into the Low Latency Multi-Channel audio project(s) suggested on the Wiki.
As I understand it, there are a few different possible ideas listed on the wiki page for audio, so I have some initial questions and comments. Hopefully through this discussion I can flesh out expectations for a project with a potential mentor.
Regarding:
Extend driver architecture to Beagle Board X15 (more computational power for more DSP capabilities), including performance test at CPU load conditions, add DSP library to make use of X15’s DSPs
My understanding (which someone could perhaps correct) is that this would involve porting the relevant parts of the CTAG face2|4 project code directly to the BeagleBoard X15 (no cape needed), and then enhancing the interface to further leverage the X15’s DSPs.
Regarding:
Create USB Audio Class 1 and/or 2 Gadget Kernel Module, and optimizing throughput latency to allow cape to be used as independent PC soundcard
Would this be a separate task altogether, or something that could be developed on the X15 using the improved driver architecture from the first bullet point?
The mention of the cape leads me to believe this would be more applicable to the BeagleBone Black or Green. I do own a BeagleBone Black, and would then only need to pick up a corresponding audio cape to get started on this.
Regarding:
Further optimize available driver for BBG for latency, with focus on ASOC driver
Could this also be developed on the BeagleBone Black, or is it specific to the Green? Would it also require an audio cape?
Regarding:
Make a real-time audio processor box on beaglebone. Needs HD audio cape, could use PRUs for real-time sound processing (ie, guitar input) and second midi source using alsa or hardware cape. Also like to have pitch/envelope-following synth for analog instrument/mic input.
I have built all kinds of software instruments, and building something “playable” with hardware and I/O would be taking that to the next level. I can also imagine a bunch of smaller possible tasks here that could be tacked on to the end of a larger project, such as cross-compiling Pure Data to run in -nogui mode on the device, à la the Critter & Guitari Organelle.
Overall, I’m imagining taking on one of the driver improvement tasks, and then developing an instrument or USB soundcard that leverages those improvements. Does that seem like a reasonable scope for a GSoC project, or is it perhaps too much or too little?
Based on the mentor list on the wiki, I see that Robert Manzke, Vladimir Pantelic and Andrew Bradford all have listed DSP as a focus and would be useful mentors to talk to. If there are other mentors who would be interested in mentoring an audio focused project, I’d love to chat.
I’m currently idling in the IRC channel under “rdone”, and am free to chat there (provided I am at my desk), or in this thread of course.
Presumably the next step would be to get started on cross-compiling the hello world example, which I will get on right away!
Thank you for your consideration,
Ryan Done