GSoC 2016 Introduction

Greetings!

My name is Ryan Done, and I’m a graduate student at the University of California, San Diego, working on my Master’s in Computer Music. I also hold a B.Sc. in Computer Science from Simon Fraser University in BC, Canada.
I’m very interested in contributing to a Google Summer of Code project this year. In my last semester at SFU, I participated in UCOSP, contributing to the Review Board project, and in another course worked with the BeagleBone Black on an embedded music visualizer controlling addressable LED lights.

Given my current academic focus on computer music, audio and DSP, I’m particularly interested in delving into the low-latency multi-channel audio project(s) suggested on the wiki.

http://elinux.org/BeagleBoard/GSoC/Ideas#Improving_the_BeagleBone_low-latency_multi-channel_audio_system

As I understand it, there are a few different ideas listed on the wiki page for audio, so I have some initial questions and comments. Hopefully this discussion will help me flesh out expectations with a potential mentor for a project.

Regarding:
Extend driver architecture to Beagle Board X15 (more computational power for more DSP capabilities), including performance test at CPU load conditions, add DSP library to make use of X15’s DSPs

My understanding (which someone could perhaps correct) is that this would involve implementing the relevant parts of the CTAG face2|4 project code directly on the BeagleBoard-X15 (no need for a cape), and then enhancing the interface to further leverage the X15’s DSPs.

Regarding:
Create USB Audio Class 1 and/or 2 Gadget Kernel Module, and optimizing throughput latency to allow cape to be used as independent PC soundcard

Would this be a separate task altogether, or something that could be developed on the X15 using the improved driver architecture from the first bullet point?

The mention of the cape leads me to believe this would be more applicable to the BeagleBone Black or Green. I do own a BeagleBone Black, and would then only need to pick up a corresponding audio cape to get started on this.

Regarding:
Further optimize available driver for BBG for latency, with focus on ASOC driver

Could this also be developed on the BeagleBone Black, or is it specific to the Green? Would it also require an audio cape?

Regarding:
Make a real-time audio processor box on beaglebone. Needs HD audio cape, could use PRUs for real-time sound processing (ie, guitar input) and second midi source using alsa or hardware cape. Also like to have pitch/envelope-following synth for analog instrument/mic input.

I have built all kinds of software instruments, and building something “playable” with hardware and I/O would take that to the next level. I can also imagine a bunch of smaller tasks that could be tacked onto the end of a larger project, such as cross-compiling Pure Data to run in -nogui mode on the device, à la the Critter & Guitari Organelle.

Overall, I’m imagining taking on one of the driver improvement tasks and then developing an instrument or USB soundcard to leverage those improvements. Does that seem like a reasonable scope for a GSoC project? Or is it perhaps too much or too little?

Based on the mentor list on the wiki, I see that Robert Manzke, Vladimir Pantelic and Andrew Bradford all list DSP as a focus and would be useful mentors to talk to. If there are other mentors who would be interested in mentoring an audio-focused project, I’d love to chat.

I’m currently idling in the IRC channel under “rdone”, and am free to chat there (provided I am at my desk), or in this thread of course.

Presumably the next step would also be to get started on cross compiling the hello world example, which I will get on right away!

Thank you for your consideration,

Ryan Done

Regarding:
Extend driver architecture to Beagle Board X15 (more computational power for more DSP capabilities), including performance test at CPU load conditions, add DSP library to make use of X15’s DSPs

My understanding (which someone could perhaps correct) is that this would involve implementing the relevant parts of the CTAG face2|4 project code directly on the BeagleBoard-X15 (no need for a cape), and then enhancing the interface to further leverage the X15’s DSPs.

Seems like that to me. I’d say any proposal should consider what it would take to help move some of the relevant CTAG face2|4 project code into upstream projects or create a sustainable project boundary. One-off code sucks.

Regarding:
Create USB Audio Class 1 and/or 2 Gadget Kernel Module, and optimizing throughput latency to allow cape to be used as independent PC soundcard

Would this be a separate task altogether, or something that could be developed on the X15 using the improved driver architecture from the first bullet point?

Vladimir added the idea, so he should be able to best clarify, but it seems pretty clear to me that each of these bullets is a separate idea, just with a common theme (improving the low-latency audio code).

The mention of the cape leads me to believe this would be more applicable to the BeagleBone Black or Green. I do own a BeagleBone Black, and would then only need to pick up a corresponding audio cape to get started on this.

Regarding:
Further optimize available driver for BBG for latency, with focus on ASOC driver

Could this also be developed on the BeagleBone Black, or is it specific to the Green? Would it also require an audio cape?

I’d suspect an audio cape or USB audio dongle could be used. Clearly they’d have different latency issues. I don’t see the significance of using Green or Black in this case (though Black does have HDMI audio) and any Beagle needed would be provided by me anyway.

Regarding:
Make a real-time audio processor box on beaglebone. Needs HD audio cape, could use PRUs for real-time sound processing (ie, guitar input) and second midi source using alsa or hardware cape. Also like to have pitch/envelope-following synth for analog instrument/mic input.

I have built all kinds of software instruments, and building something “playable” with hardware and I/O would take that to the next level. I can also imagine a bunch of smaller tasks that could be tacked onto the end of a larger project, such as cross-compiling Pure Data to run in -nogui mode on the device, à la the Critter & Guitari Organelle.

Overall, I’m imagining taking on one of the driver improvement tasks and then developing an instrument or USB soundcard to leverage those improvements. Does that seem like a reasonable scope for a GSoC project? Or is it perhaps too much or too little?

Can you break it down into 11 weekly milestones that individually seem almost trivial and yet overall seem to accomplish something notably useful? If not, the scope is wrong.

Based on the mentor list on the wiki, I see that Robert Manzke, Vladimir Pantelic and Andrew Bradford all have listed DSP as a focus and would be useful mentors to talk to. If there are other mentors who would be interested in mentoring an audio focused project, I’d love to chat.

I’m currently idling in the IRC channel under “rdone”, and am free to chat there (provided I am at my desk), or in this thread of course.

I’ll try to say “hi” some time.

Presumably the next step would also be to get started on cross-compiling the hello world example, which I will get on right away!

Yeah, that task ought to seem pretty trivial to you. We do use it as a screen for people unable or unwilling to learn a bit of the vocabulary of embedded Linux developers.

Thanks for the quick reply Jason!

I’ve created a pull request for the hello world example at https://github.com/jadonk/gsoc-application/pull/48.

I’ll start investigating each of these ideas and break down a plan of week-to-week tasks. I’m sure more questions will come up as I figure things out.

Thanks again,
Ryan

Perhaps an ignorant question, but where in the kernel code might I find the existing audio driver code for the X15?

Thanks,
Ryan

The audio on the X15 goes through a TLV320AIC3104 audio codec IC that is connected to one of the McASPs of the AM5728 SoC used in the X15, so I suppose you’d need to look at the McASP code in the kernel tree, and also read a bit about the actual codec being used. Also look at the ctag-2-4 kernel tree and see the new driver that was added there.

Source: X15 schematics (http://www.elinux.org/Beagleboard:BeagleBoard-X15)

~Abhishek

Looping in Robert Manzke who is behind:

http://www.creative-technologies.de/linux-based-low-latency-multichannel-audio-system-2/

Robert,

can you comment?

Regards,

Vladimir

Hi all,

The on-board audio of the X15 doesn’t support multi-channel to the extent the face2|4 does.

For GSoC we can provide a face2|4 cape for development purposes.

Once adapted drivers are up, in combination with the X15’s DSPs, the GSoC project could be extended to create a powerful multi-channel low-latency music box template solution (or, looking at the kernel gadget items, to create a UAC 2 interface, which I’d say is lower priority).

This way, more upstream projects based on the X15 and/or BeagleBone in combination with the face2|4 cape could be created, and the driver backbone code may become part of the mainline kernel, helping to create a more sustainable project.

Optimizing the driver architecture could also be something just for the Bone Black/Green, but I think it doesn’t have high priority, since we have already spent quite a bit of time on that task.

Using an external USB audio board instead of the face2|4 would not be of interest due to issues with latency etc. and no (/little) prospect for mainline kernel outcomes.

Hope I have helped answer some questions. @Ryan, I think your findings were pretty much on target.

Best,
Robert

Hi everyone,

Congratulations to BeagleBoard on getting accepted to GSoC 2016!

I would absolutely be interested in focusing on projects with upstream potential, such as the face2|4-to-X15 driver migration. After that, it seems like the next most valuable task would be a music box template.

I’m going to look a bit more into the points that Kumar mentioned about the current state of the face2|4 and X15 drivers. (Thank you, Kumar, for following up on my question so quickly.)

Making use of the DSPs on the X15 would be a new code contribution after migrating the face2|4 driver, correct?

Are there any precedents, examples, or resources available on how the DSPs might be used? Put another way, could the goal here be described in more detail than “performance increase”? Maybe a general idea of what the ‘DSP library’ should provide? Perhaps I just need to read up a bit more on programming for DSPs. :)

The wiki suggests performance testing: CPU loads, and perhaps latency. Should my proposal include detailed plans for those tests? (Once again, are there precedents for how these tests are performed?)

Does it seem possible to tackle both the X15 driver and a music box template over 11/12 weeks, or should I just focus on the driver migration and DSPs?

Keeping in line with making things reusable, should the music box template simply be large enough in scope to provide examples of many common/useful techniques (FFTs, audio I/O, multi-channel audio, MIDI, audio files)? Alternatively, should I look at designing and implementing interfaces that could be reused for the tasks above? The latter seems like it could have both benefits and drawbacks in terms of reusability, simply because creating abstractions encodes certain assumptions about how someone might make a musical device with a BeagleBoard.

Also, sorry about missing your message in IRC earlier Robert. I’ll try to be more diligent about checking IRC in the mornings.

Thanks,
Ryan

I don’t think it advisable to lump these two things together for one GSoC project. Time always runs out faster than one imagines.

Figured I’d bump this. Still researching and planning a proposal for the X15 audio driver. Regarding potential mentors: should we be aiming to have mentors arranged before we submit the proposal (it seems difficult to get a commitment), or just be aware of projects that are compatible with the available mentors listed?

Thanks,
Ryan

(Sorry about posting from the wrong email before; I was logged into the wrong account.)