Maxim codec and the BBB, a different way.

I am updating our project page for an advanced remotely controlled audio system using the BBB and a Maxim MAX98089 codec. We take a different approach to alsa-soc than most. Our DTB driver uses only I2S; no I2C control mechanism is included in it. We think this makes the I2S routine cleaner and less error-prone. We then use FTDI over USB to send our commands to the codec. We will be releasing the driver to the BBB community in a few days, but you can learn a lot about what we are doing now at our project page.
Perhaps as other developers learn more, alsa-soc may one day need only platform specifics and allow any I2S codec to be used, with the only thing to change being the routines that issue I2C commands, or in our case FTDI commands, from user space.
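Just to give a feel for the user-space side (this is a rough sketch, not our released driver), here is roughly what one codec register write over an FTDI USB-to-I2C bridge can look like with the pyftdi package. The FTDI URL, the 7-bit codec address, and the register/value pair below are placeholders you would replace from your own wiring and the MAX98089 datasheet:

    from pyftdi.i2c import I2cController

    # Open the FTDI device in I2C mode (the URL depends on your adapter).
    i2c = I2cController()
    i2c.configure('ftdi://ftdi:232h/1')

    # 0x10 is a placeholder 7-bit codec address; check the ADDR pin strapping.
    codec = i2c.get_port(0x10)

    # Write one byte to one register, then read it back to verify.
    REG = 0x27          # placeholder register number
    codec.write_to(REG, b'\x80')
    print(hex(codec.read_from(REG, 1)[0]))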

https://sites.google.com/site/hdpoint1/home

Jay

Many, myself included, don't understand all the complexities of the way alsa-soc is developed, and only a few engineers really have a grasp of it. It's so complex that TI themselves said mixing I2C and I2S into the same code bundle makes it unnecessarily hard to develop a routine for each SoC board and each codec. Here I will state some of the things that, as I understand it, happen in alsa-soc.
ALSA states there should be a definition for each platform, a definition for each codec, and, oddly, something called the machine, which is how they intercommunicate, I believe.
What this means is that for each platform, such as a BeagleBone Black and a BeagleBoard-xM, the code could be different. On top of that you then have to have definitions for each codec. So just because you have definitions for a codec from the codec maker, it doesn't mean you're going to compile that into the kernel and voila! your platform is now talking to the codec; it's just not that simple. All of this was done so that the I2C and I2S specifics for each codec/platform combination work from one package.
But after looking at this, realizing I was on the short end of a long stick, and trying to find a programmer who could even understand this stuff, it became evident that I2C was the villain, just as TI said. Looking at the disadvantages beyond the complexity, here is what happens in this prescribed platform/codec/machine scheme…
I2C, being a timed command-and-response protocol, uses completely different wires than I2S, and even a different clock.
I2S is audio only: it knows nothing but sending and receiving audio packets. It doesn't care to whom, and whatever codec is on the other end should work just fine as long as it uses the same timings. What happens if an I2C command hangs in the middle of the routine sending I2S signals? Audio drops, and possibly worse. A badly coded alsa-soc package can play hell not only with audio, but may cause kernel dumps and who knows what else.
I2S routines should be free from any hindrance; let the kernel handle them, because they are a very timing-critical task.
I2C commands, though, should happen from user space. They aren't so timing critical, and if an I2C send/response sequence hangs in user space, the kernel and the I2S routines probably won't even care.
I2C from user space has the advantage of enhanced command error checking and of being writable in many languages that have access to I2C calls, such as Python, C++, Node.js, and Go.
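And for boards where you still drive the codec from the SoC's own I2C pins, the same user-space idea works through /dev/i2c-N. Here is a minimal sketch with the smbus2 package; the bus number, codec address, and register below are assumptions for illustration, not values from our project:

    from smbus2 import SMBus

    CODEC_ADDR = 0x10   # placeholder 7-bit codec address
    REG = 0x27          # placeholder register number

    try:
        # /dev/i2c-2 is just an example; use the bus the codec is wired to.
        with SMBus(2) as bus:
            bus.write_byte_data(CODEC_ADDR, REG, 0x80)
            print(hex(bus.read_byte_data(CODEC_ADDR, REG)))
    except OSError as err:
        # A hung or NAKed transfer is just a user-space error we can retry
        # or log; the kernel's I2S path never sees it.
        print('codec register access failed:', err)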

In this regard, an I2S driver with configurable parameters could be written for every SoC platform, and only once! And an I2C routine could be universal across all SoC platforms if the hooks to the actual pins are abstracted. I will be presenting this thesis to the ALSA consortium at a future date.
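As a rough illustration of that thesis (again a sketch, not our driver), the user-space control path can hide behind one tiny interface so that only the transport backend changes between a board using its native I2C pins and one using an FTDI bridge. The class names and register numbers here are invented for the example:

    from smbus2 import SMBus
    from pyftdi.i2c import I2cController

    class NativeI2cControl:
        """Codec register access over the SoC's own /dev/i2c-N bus."""
        def __init__(self, bus_num, addr):
            self._bus = SMBus(bus_num)
            self._addr = addr
        def write_reg(self, reg, value):
            self._bus.write_byte_data(self._addr, reg, value)
        def read_reg(self, reg):
            return self._bus.read_byte_data(self._addr, reg)

    class FtdiI2cControl:
        """Codec register access over an FTDI USB bridge instead of SoC pins."""
        def __init__(self, url, addr):
            ctrl = I2cController()
            ctrl.configure(url)
            self._port = ctrl.get_port(addr)
        def write_reg(self, reg, value):
            self._port.write_to(reg, bytes([value]))
        def read_reg(self, reg):
            return self._port.read_from(reg, 1)[0]

    # The configuration code only ever sees write_reg()/read_reg(), so the
    # kernel's I2S side never needs to know which transport is in use, e.g.:
    # ctl = FtdiI2cControl('ftdi://ftdi:232h/1', 0x10)  # or NativeI2cControl(2, 0x10)
    # ctl.write_reg(0x27, 0x80)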

Jay Steele

I agree. I've been trying to get a simple audio codec to work for over a year. It worked in 3.1.x, but now in 4.4.x it's marginal at best (one channel works, the other doesn't). I've been a bit too lazy to check what the working 3.1.x version did differently.

But it's really complicated, very poorly documented, and the people who would be able to clear it up either aren't on the lists (beaglebone and alsa) or don't care to answer.