GSoC Project: a real-time audio effects box

Hello everyone,
I would like to participate in GSoC 2016 and have the idea of creating a nice audio tool platform on the BeagleBoard. I would really like to hear what you think of this idea and whether it would be helpful for anybody else out there.

Here is my abstract:

In my project I’m trying to build the foundation for using a BeagleBoard device as a sound processor. I would like to implement the effects in a modular way: each module should do just one small thing, with open routing between modules. It should be similar to Native Instruments’ Reaktor (see: NI’s Reaktor Homepage). A graphical user interface, however, might be a stretch goal; maybe DSPatcher could be used for this purpose (see: DSPatcher Sourceforge). The effects might be implemented using the DSPatch C++ library (see: DSPatch Homepage).

Possible uses of such an effects box include a transportable mixing desk (when adding MIDI controllers, for example), a guitar effects box, a synthesizer/sampler instrument, a music studio effect, an acoustic measurement box, room/speaker correction, or an HRTF-based 3D sound headphone system (when adding a head tracking sensor and a headphone amplifier). It targets musicians, audio engineers and audio enthusiasts. It shall be based on the work already done by Henrik Langer and Robert Manzke (see: Linux-Based Low-Latency Multichannel Audio System (CTAG face2|4)). The project shall also assess whether the PRUs of the BBB/BBG or the DSPs of the X15 could be used to provide better performance for real-time audio.

The minimum set of implemented effects is an equalizer (consisting of lowpass, highpass, bandpass and bandstop filters), dynamics effects (compressor/limiter/expander) and a convolution engine (using FFTW if possible). As a stretch goal there could be generators (VCOs controlled via MIDI and/or OSC, a sample player), envelope generators (ADSR, also controlled via MIDI), distortion (as a guitar effect, for example), modulation effects (like chorus, flanger, phaser), reverberation (based on parameters; an IR-based reverb should be possible with the convolution engine) or pitch/time shifters.
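To give an idea of the convolution engine, here is a minimal, non-real-time sketch of FFT-based convolution using the FFTW (fftw3) double-precision API. A real engine would need a partitioned overlap-add scheme on top of this; the function name and block handling are only illustrative, not a final design.

// Convolve one signal block with an impulse response: zero-pad both to a
// common length, multiply the spectra and transform back.
#include <algorithm>
#include <vector>
#include <fftw3.h>

std::vector<double> convolve(const std::vector<double>& x, const std::vector<double>& h)
{
    const size_t n = x.size() + h.size() - 1;        // full convolution length
    std::vector<double> a(n, 0.0), b(n, 0.0), y(n, 0.0);
    std::copy(x.begin(), x.end(), a.begin());
    std::copy(h.begin(), h.end(), b.begin());

    const size_t nc = n / 2 + 1;                      // r2c output size
    fftw_complex* A = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * nc);
    fftw_complex* B = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * nc);

    fftw_plan pa = fftw_plan_dft_r2c_1d((int)n, a.data(), A, FFTW_ESTIMATE);
    fftw_plan pb = fftw_plan_dft_r2c_1d((int)n, b.data(), B, FFTW_ESTIMATE);
    fftw_execute(pa);
    fftw_execute(pb);

    // Pointwise complex multiplication in the frequency domain.
    for (size_t k = 0; k < nc; ++k) {
        const double re = A[k][0] * B[k][0] - A[k][1] * B[k][1];
        const double im = A[k][0] * B[k][1] + A[k][1] * B[k][0];
        A[k][0] = re;
        A[k][1] = im;
    }

    fftw_plan py = fftw_plan_dft_c2r_1d((int)n, A, y.data(), FFTW_ESTIMATE);
    fftw_execute(py);
    for (double& v : y) v /= (double)n;               // FFTW transforms are unnormalized

    fftw_destroy_plan(pa);
    fftw_destroy_plan(pb);
    fftw_destroy_plan(py);
    fftw_free(A);
    fftw_free(B);
    return y;
}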

Even though I pointed out complex effects, these should be built from submodules which are connected to make the effects work. For example, a biquad filter would consist of sample delays, gain modules and summing modules. The equalizer would then consist of several of these filters, plus modules that calculate the right filter coefficients for the desired filter (given a frequency, (optional) gain, (optional) Q and filter type). The module which calculates the filter coefficients could itself consist of submodules for highpass, lowpass, bandpass and bandstop calculations.
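To make the submodule idea more concrete, here is a small standalone C++ sketch (plain C++, not yet written as DSPatch components) of a Direct Form I biquad built from exactly those primitives: two sample delays on the input, two on the output, five gains and a summing point, together with a lowpass coefficient calculator following the well-known Audio EQ Cookbook formulas. All names and parameters are only illustrative.

#include <cmath>

struct BiquadCoeffs { double b0, b1, b2, a1, a2; };

// Coefficient module: computes lowpass coefficients from cutoff frequency and Q.
BiquadCoeffs lowpassCoeffs(double sampleRate, double freq, double q)
{
    const double pi    = 3.141592653589793;
    const double w0    = 2.0 * pi * freq / sampleRate;
    const double alpha = std::sin(w0) / (2.0 * q);
    const double cosw0 = std::cos(w0);
    const double a0    = 1.0 + alpha;

    BiquadCoeffs c;
    c.b0 = (1.0 - cosw0) / 2.0 / a0;
    c.b1 = (1.0 - cosw0) / a0;
    c.b2 = c.b0;
    c.a1 = (-2.0 * cosw0) / a0;
    c.a2 = (1.0 - alpha) / a0;
    return c;
}

// Processing module: Direct Form I, i.e. two sample delays on the input,
// two on the output, five gain stages and one summing point.
class Biquad
{
public:
    explicit Biquad(const BiquadCoeffs& c) : c_(c) {}

    float process(float x)
    {
        const double y = c_.b0 * x + c_.b1 * x1_ + c_.b2 * x2_
                       - c_.a1 * y1_ - c_.a2 * y2_;
        x2_ = x1_; x1_ = x;      // input delay line
        y2_ = y1_; y1_ = y;      // output delay line
        return static_cast<float>(y);
    }

private:
    BiquadCoeffs c_;
    double x1_ = 0.0, x2_ = 0.0, y1_ = 0.0, y2_ = 0.0;
};

An equalizer band would then simply be one Biquad fed by one coefficient module, and the full equalizer a chain of such bands.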

MIDI could be done with the RtMidi library.
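As a rough idea of what that would look like, here is a minimal sketch that opens the first MIDI input port with RtMidi and prints note-on and control change messages from a callback. It assumes RtMidi is installed (on Linux it is typically built against ALSA); port selection and message handling are placeholders.

#include <cstdio>
#include <vector>
#include "RtMidi.h"

static void onMidi(double /*timeStamp*/, std::vector<unsigned char>* message, void* /*userData*/)
{
    if (message->size() >= 3) {
        const unsigned char status = (*message)[0];
        if ((status & 0xF0) == 0x90)                        // note-on
            std::printf("note %u velocity %u\n", (*message)[1], (*message)[2]);
        else if ((status & 0xF0) == 0xB0)                   // control change
            std::printf("cc %u value %u\n", (*message)[1], (*message)[2]);
    }
}

int main()
{
    try {
        RtMidiIn midiIn;
        if (midiIn.getPortCount() == 0) {
            std::printf("no MIDI input ports found\n");
            return 0;
        }
        midiIn.openPort(0);                                 // first available port
        midiIn.setCallback(&onMidi);
        midiIn.ignoreTypes(true, true, true);               // ignore sysex, timing, active sensing
        std::printf("listening on %s, press enter to quit\n", midiIn.getPortName(0).c_str());
        std::getchar();
    } catch (RtMidiError& e) {
        e.printMessage();
        return 1;
    }
    return 0;
}

In the effects box, the callback would of course not print but update module parameters (filter frequencies, VCO pitch, ADSR gates and so on).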

Best regards,
Johannes

Well I’d love to see that! The “acoustic measurement box” especially…

I have done similar things already with a Raspberry Pi, if you need help.

Regards

Hi,
thank you for your reply!

Well I’d love to see that! The “acoustic measurement box” especially…

If there are more people interested in this, I could easily try to focus the project around that. I have quite some experience with that topic, since I wrote my bachelor thesis about acoustic measurements!

I have done similar things already with a Raspberry Pi, if you need help.

Interesting! What did you use in terms of hardware/software?

Regards,
Johannes

“bachelor thesis about acoustic measurements”. Cool, can I read it?

I’ve built a sound level monitoring station with a Raspberry Pi and a Wolfson audio card (with added IEPE bias).

Then it’s all the ALSA API for capture and Qt for the rest on the software side (IIR filters, network stuff, etc.).

I didn’t implement playback though.
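Roughly, the capture side with the plain ALSA API looks like this minimal sketch (device name, sample rate and buffer size are placeholders, not my exact setup; link with -lasound):

#include <cstdio>
#include <vector>
#include <alsa/asoundlib.h>

int main()
{
    snd_pcm_t* pcm = nullptr;
    int err = snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0);
    if (err < 0) {
        std::fprintf(stderr, "snd_pcm_open: %s\n", snd_strerror(err));
        return 1;
    }

    // 2 channels, 48 kHz, 16-bit interleaved, ~50 ms of latency.
    err = snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE, SND_PCM_ACCESS_RW_INTERLEAVED,
                             2, 48000, 1, 50000);
    if (err < 0) {
        std::fprintf(stderr, "snd_pcm_set_params: %s\n", snd_strerror(err));
        snd_pcm_close(pcm);
        return 1;
    }

    const snd_pcm_uframes_t frames = 1024;
    std::vector<short> buffer(frames * 2);            // interleaved stereo

    for (int block = 0; block < 100; ++block) {       // capture a few blocks
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buffer.data(), frames);
        if (n < 0)
            n = snd_pcm_recover(pcm, (int)n, 0);      // try to recover from xruns
        if (n < 0) {
            std::fprintf(stderr, "snd_pcm_readi: %s\n", snd_strerror((int)n));
            break;
        }
        // ... feed the n captured frames into IIR filters / level metering here ...
    }

    snd_pcm_close(pcm);
    return 0;
}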

Actually I wanted to do it with the BBB first, but the audio cape was nowhere to be found in stock!

“bachelor thesis about acoustic measurements”. Cool, can I read it?

Of course! But it is in German, so I don’t know how much it will help you: https://johanneswegener.de/BA.pdf

I’ve built a sound level monitoring station with a Raspberry Pi and a Wolfson audio card (with added IEPE bias).

Then it’s all the ALSA API for capture and Qt for the rest on the software side (IIR filters, network stuff, etc.).

I didn’t implement playback though.

Interesting - is the code on GitHub or something?

Not yet! I still have to clean up some specific stuff (you know… FTP passwords, stupid comments, etc.) but I am planning to!