Hello all,
I'm still pretty new to I2C. I understand that it commonly clocks along at either 100 kHz or 400 kHz. I have pored over the software I2C include file trying to determine what our GCB version does. So, a straightforward question:
"What speed are the GCB software I2C routines clocking at?"
Thanks,
Thomas Henry
The I2C software routines are clocked by the time delay constants. They are shown in the Help File.
The 100/400 options relate to the I2C hardware module within the PIC and are therefore not applicable to the I2C software routines.
We had a question the other day on the hardware I2C. I am hoping someone can validate the functionality and help us document the supported set of commands - which includes.... the 100/400 options.
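For reference, a software I2C setup in Great Cow BASIC looks roughly like the sketch below. The pin assignments are just examples, and the three timing constants and their defaults are my reading of the include file - treat them as assumptions and verify against your copy of i2c.h and the Help File:

    'Software I2C setup (sketch - names and defaults assumed, check i2c.h)
    #chip 16F88, 8

    #define I2C_MODE Master
    #define I2C_DATA PORTB.4      'SDA pin (example)
    #define I2C_CLOCK PORTB.6     'SCL pin (example)

    'Assumed default timing constants. Overriding these is what changes
    'the effective bus speed of the software routines:
    #define I2C_BIT_DELAY 2 us
    #define I2C_CLOCK_DELAY 1 us
    #define I2C_END_DELAY 1 us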
Thanks. I guess I didn't make my question clear enough. I understand that the software I2C is just that, doing things "manually" as it were, in software. What I'm curious about is what the effective clocking rate is. I don't want to change it, just know it. Since I2C is still pretty new to me, the include file is still pretty mysterious. As you say, the rate is determined by the delay times (the period), but can that be translated to an effective frequency?
So, let me rephrase the question:
"What is the effective rate that software I2C is clocking at with the default delay times?"
This made me smile. What a great question. :-) I don't know. But...
I2C fast mode, which the chip also supports, defines transfer rates up to 400 kbit/s, whilst the first I2C specification, dated 1982, had a limit of 100 kbit/s (also supported within the chip). To accomplish 400 kbit/s, the timing requirements were defined more strictly within the industry standard.
Clock Generation
The SCL clock is always generated by the I2C master. The specification requires minimum periods for the low and high phases of the clock signal. Hence, the actual clock rate may be lower than the nominal clock rate, e.g. on I2C buses with long rise times due to high capacitance.
Clock Stretching
I2C devices can slow down communication by stretching SCL: during an SCL low phase, any I2C device on the bus may additionally hold SCL down to prevent it from rising high again, enabling it to slow down the SCL clock rate or to stop I2C communication for a while. This is also referred to as clock synchronization.
Note: The I2C specification does not specify any timeout conditions for clock stretching, i.e. any device can hold down SCL as long as it likes.
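As an aside, a software master honours clock stretching by releasing SCL and then polling the line until it actually reads high before timing the high phase. A minimal illustrative sketch (this is not the GCB routine itself; I2C_CLOCK and the delay values are the assumptions from above):

    'Clock-stretching-aware SCL pulse (illustrative only).
    'SCL is driven open-drain style: output low, or input to let the
    'pull-up take the line high.
    sub SWI2CClockPulse
      dir I2C_CLOCK in          'release SCL
      do while I2C_CLOCK = 0    'a slave stretching the clock holds us here
      loop
      wait 1 us                 'assumed high-phase delay
      dir I2C_CLOCK out
      set I2C_CLOCK off         'drive SCL low again
      wait 2 us                 'assumed low-phase delay
    end sub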
So, that's the background. We really need Perry to get his analyser out to take some samples - that would be the best answer - but I think someone can do the maths whilst I sleep on the unit conversion problem to come up with an estimate.
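Until then, here is a back-of-envelope attempt. Assuming the default delays above (I2C_BIT_DELAY = 2 us, I2C_CLOCK_DELAY = 1 us - again, assumptions to verify), one SCL cycle is roughly I2C_BIT_DELAY + 2 x I2C_CLOCK_DELAY = 4 us, plus a few instruction cycles of overhead per bit, so the effective rate should come out somewhere below 1 / 4 us = 250 kHz - nearer standard-mode 100 kHz than fast-mode 400 kHz. The little test program below wiggles a spare pin with the same delays, so the true period, including loop overhead, can be read straight off a scope or analyser; the chip and pin are just examples:

    'Toggle a spare pin with the software-I2C delay values so the real
    'period, including instruction overhead, can be measured directly.
    #chip 16F88, 8
    #define TEST_PIN PORTB.0

    dir TEST_PIN out
    do
      set TEST_PIN on
      wait 1 us        'stands in for I2C_CLOCK_DELAY (high phase)
      set TEST_PIN off
      wait 2 us        'stands in for I2C_BIT_DELAY (low phase / data setup)
    loop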