I2C i2cget/i2cset execution time

Hi,
I am using a BeagleBoard C4 to interface with custom hardware via an MCP23017 I2C expander. I am running Angstrom and using i2c-tools for this.

My first task is to read the data arriving on the MCP23017 pins and store it for further processing.

Data from the external hardware arrives at the MCP23017 I/O pins at 4800 baud (a bit change every 208 usec).

I had assumed that I2C running in fast mode at 400 kHz would be fast enough to read this data without missing anything. However, it missed a lot of data.
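
(My rough arithmetic: a single register read is an address+register write followed by an address+data read, roughly 4 bytes x 9 clocks = 36 clock periods plus start/stop overhead, which is on the order of 100 usec at 400 kHz, comfortably inside one 208 usec bit period.)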

So I did a trial run to measure the time required to read one byte over I2C, using the following script.

```
# Note: %s%N gives seconds+nanoseconds; %N alone wraps at each second
# boundary and can make END-START negative.
START=$(date +%s%N)
i2cget -y 2 0x21 0x12
END=$(date +%s%N)
echo $((END-START))
```

The result of the above script is:

```
root@beagleboard:~# sh testrdtime.sh
0xfc
10864254
root@beagleboard:~#
```

This indicates that a single read took about 10 ms to execute (the difference is in nanoseconds).

This looks strange to me. Is it because the kernel is busy with some other process and not allocating enough time to the I2C transfer? Or is there some other reason?

P.S.: On the oscilloscope the I2C clock speed looks correct (400 kHz), but there is a gap of about 10 ms between two bursts. I am not able to decode the I2C transactions on the DSO, but this supports the script result above.

If this result is typical, what is the remedy? Writing a custom I2C driver with blocking functions?

The MCP23017 can run at 1.7 MHz (high-speed I2C), which could be tested with a rebuilt kernel running the bus at a higher speed. Has anyone done that before?

Any other inputs?

Thank you very much!

If you're making I2C calls from user space (as it seems you are),
there's the cost of spawning a new i2cget process for every read
(fork/exec is not free), the syscall overhead (not 0), any buffering
the kernel does (not 0), scheduling the I2C operation (not 0), doing
the I2C operation itself (your scope plot), and returning the data.
Plus, fetching the start and end times has non-zero execution time too.

There might also be measurement error in your start and stop times,
depending on the smallest unit of time your system can actually
measure.
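
To see how much of that 10 ms is process startup rather than the
transfer itself, you could keep the device open and loop inside one
process. Here's a rough, untested sketch using the plain /dev/i2c
interface with your bus 2 and address 0x21 (the loop count is just a
placeholder):

```c
/* Untested sketch: time many MCP23017 register reads from user space
 * without re-spawning i2cget for each one.
 * Build: gcc -o i2ctime i2ctime.c (older toolchains may need -lrt). */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int i, fd;
    unsigned char reg = 0x12, val = 0;   /* 0x12 = GPIOA on the MCP23017 */
    struct timespec t0, t1;
    long long ns;

    fd = open("/dev/i2c-2", O_RDWR);     /* bus 2, as in your script */
    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, I2C_SLAVE, 0x21) < 0) { /* expander at address 0x21 */
        perror("I2C_SLAVE"); return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < 1000; i++) {
        /* set the register pointer, then read one byte back */
        if (write(fd, &reg, 1) != 1 || read(fd, &val, 1) != 1) {
            perror("transfer"); return 1;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL
       + (t1.tv_nsec - t0.tv_nsec);
    printf("last value 0x%02x, avg %lld ns per read\n", val, ns / 1000);
    close(fd);
    return 0;
}
```

Dividing the total by the iteration count should get you much closer
to the on-the-wire time you see on the scope.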

If you work at the kernel level, there will still be some delays, but
they should be significantly less than for user-space operations. To
remove as much delay as possible, either write some bare-metal code
(i.e., don't use Linux) or dig into the i2c subsystem in the kernel
and tweak it by hand to do what you want. Also be aware that the
scheduler might still introduce some delays.

If you're looking for bare-metal latency (like you'd get on a
microcontroller), that's rather difficult to get in Linux or any other
general-purpose OS.

-Andrew

Thank you very much Andrew for your inputs.

Yes, I understood your points. This means a user-space program isn't useful for my application. So what I will do now is test the read/write timings over I2C using a kernel module. If that still doesn't give the expected result, I am thinking of using a PREEMPT_RT-patched Linux, and going further to Xenomai if required.
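
Something along these lines is what I plan to try first (a rough, untested sketch; bus 2 and address 0x21 as before, and the i2c_new_device/ktime calls are my assumption of the right API for this kernel):

```c
/* Untested sketch: time one i2c_smbus_read_byte_data() call from
 * inside the kernel, to compare against the ~10 ms seen from the shell. */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/i2c.h>
#include <linux/ktime.h>

static struct i2c_client *client;

static int __init i2ctime_init(void)
{
    /* "i2ctime" is a placeholder device name so no existing driver binds */
    struct i2c_board_info info = { I2C_BOARD_INFO("i2ctime", 0x21) };
    struct i2c_adapter *adap = i2c_get_adapter(2);   /* bus 2 */
    ktime_t t0, t1;
    s32 val;

    if (!adap)
        return -ENODEV;
    client = i2c_new_device(adap, &info);
    i2c_put_adapter(adap);
    if (!client)
        return -ENODEV;

    t0 = ktime_get();
    val = i2c_smbus_read_byte_data(client, 0x12);    /* 0x12 = GPIOA */
    t1 = ktime_get();

    pr_info("i2ctime: read 0x%02x in %lld ns\n",
            val, (long long)ktime_to_ns(ktime_sub(t1, t0)));
    return 0;
}

static void __exit i2ctime_exit(void)
{
    i2c_unregister_device(client);
}

module_init(i2ctime_init);
module_exit(i2ctime_exit);
MODULE_LICENSE("GPL");
```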

Realtime Linux may not mean what you think it means if you expect
lower latency. In general, realtime operating systems guarantee
latency maximums, typically 5, 10, or more milliseconds, for
operations at the kernel level. They generally don't give you lower
latency, just a guarantee that the maximum won't be longer than X.
You usually give up a little throughput to obtain the latency
guarantees.

See [1].

[1]: http://en.wikipedia.org/wiki/Real-time_operating_system

PREEMPT may help, but it's not a cure-all. If you're looking for
latency under 1 ms, a general-purpose OS may not be your solution. Or
it may be, but with a bit of work.

-Andrew

Well, I am going through this document, and it seems they have achieved latencies in the microsecond range. I am still not sure how it will apply to my application; the best way, I have decided, is to run a trial and find out. If it doesn't work at all, I will use a dedicated MCU with a buffer, so that it keeps reading/writing between my custom hardware and the BeagleBoard through that buffer.
As far as possible, I want to avoid using bare metal for the end application.

That paper's doing GPIO with interrupts. At the kernel level, you'll
get pretty good response even without realtime patches (as their
results show). I hadn't realized that some of the realtime patch sets
allow alternate paths that avoid some of the overhead in the normal
kernel; that would make a difference for interrupts. Good to know.
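
For reference, the GPIO-with-interrupts approach in that paper boils
down to something like the sketch below (untested; the pin number is
just a placeholder, and printing from the ISR is only for the demo):

```c
/* Untested sketch: timestamp each edge on a GPIO line from a kernel
 * interrupt handler.  TEST_GPIO is a placeholder pin number. */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/ktime.h>

#define TEST_GPIO 139   /* placeholder: pick a free pin on your board */

static int irq;

static irqreturn_t edge_isr(int irqno, void *dev_id)
{
    /* printk in an ISR is heavy; fine for a demo, not for 4800 baud */
    pr_info("edge at %lld ns\n", (long long)ktime_to_ns(ktime_get()));
    return IRQ_HANDLED;
}

static int __init gpioirq_init(void)
{
    int ret = gpio_request(TEST_GPIO, "edge-test");
    if (ret)
        return ret;
    gpio_direction_input(TEST_GPIO);
    irq = gpio_to_irq(TEST_GPIO);
    ret = request_irq(irq, edge_isr,
                      IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
                      "edge-test", NULL);
    if (ret)
        gpio_free(TEST_GPIO);
    return ret;
}

static void __exit gpioirq_exit(void)
{
    free_irq(irq, NULL);
    gpio_free(TEST_GPIO);
}

module_init(gpioirq_init);
module_exit(gpioirq_exit);
MODULE_LICENSE("GPL");
```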

But if you're trying to do I2C, that's a little different from GPIO.
You might get some improvements, based on that paper, by going to a
realtime patch set. Try it and see how it works.

If you see improvements, please do let us know. I'm interested in your
results.

Thanks,
Andrew

Thank you, Andrew, for your thoughts. Yes, I2C itself will add some overhead. If that overhead isn't acceptable, I can use GPIOs directly and may switch to a BeagleBone, which has more I/Os, as required. That's a little further down the road; right now I have to prove the concept of developing my app on Linux without a dedicated MCU or bare-metal code.

I will surely keep this group posted about the progress, but it may take a little time since this is 'homework' for now…

Thanks,
Gopal