linux-i2c.vger.kernel.org archive mirror
* ddc/ci over i2c
@ 2014-07-08 10:35 Sanford Rockowitz
       [not found] ` <53BBC97B.5020805-9+fK6rGKj7RBDgjK7y7TUQ@public.gmane.org>
  0 siblings, 1 reply; 8+ messages in thread
From: Sanford Rockowitz @ 2014-07-08 10:35 UTC (permalink / raw)
  To: linux-i2c-u79uwXL29TY76Z2rM5mHXA; +Cc: Jean Delvare

I've been trying to use i2c-dev to read and control monitor settings, 
using the DDC/CI protocol to communicate with address 0x37 on device 
/dev/i2c-n.  Things are sort of working (details below), but it's not 
clear to me to what extent I'm hitting real limits, or if I just don't 
know what I'm doing.  Perhaps I shouldn't even be trying to use i2c-dev 
for this application.  Advice appreciated.  And if there's a more 
appropriate place to post this question, I'd appreciate hearing that as 
well.

To review: A monitor is accessed via device /dev/i2c-n, created by the 
video device driver.  EDID data is found by reading address 0x50.  The 
monitor's settings are read and written at address 0x37.
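As a sketch of that access pattern (the device path, helper names, and header-check shortcut here are my own illustration, not a standard API), reading the first EDID block through i2c-dev might look like:

```c
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

/* An EDID block begins with the fixed 8-byte header
 * 00 FF FF FF FF FF FF 00; checking it is a cheap sanity test. */
static int edid_header_ok(const unsigned char *edid)
{
    static const unsigned char magic[8] =
        {0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00};
    for (int i = 0; i < 8; i++)
        if (edid[i] != magic[i])
            return 0;
    return 1;
}

/* Read the first 128-byte EDID block from address 0x50 on the given bus. */
static int read_edid(const char *dev, unsigned char edid[128])
{
    int fh = open(dev, O_RDWR);
    if (fh < 0)
        return -1;
    ioctl(fh, I2C_SLAVE, 0x50);     /* EDID lives at address 0x50 */
    unsigned char offset = 0x00;
    write(fh, &offset, 1);          /* set the read offset to 0   */
    int n = read(fh, edid, 128);
    close(fh);
    return (n == 128 && edid_header_ok(edid)) ? 0 : -1;
}
```

Real code would of course check the ioctl and write return values as well.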

I can communicate with the monitor if either the open-source nouveau or 
radeon drivers are loaded.  Both support i2c_smbus_read_i2c_block_data() 
and i2c_smbus_write_i2c_block_data(), which put the right bytes on the 
wire and handle the ack and nack bits as per the DDC specification. (See 
documentation file i2c/smbus-protocol.)

For the nouveau driver running on Fedora 20, "lsmod | grep i2c" reports:

i2c_piix4              22155  0
i2c_algo_bit           13257  1 nouveau
i2c_dev                14027  0
i2c_core               38656  6 drm,i2c_dev,i2c_piix4,drm_kms_helper,i2c_algo_bit,nouveau


Here's a simplified example (minimal error checking) of how I'm reading 
the monitor's brightness setting:

    int fh = open("/dev/i2c-0", O_NONBLOCK|O_RDWR);
    ioctl(fh, I2C_SLAVE, 0x37);

    unsigned char zeroByte = 0x00;
    write(fh, &zeroByte, 1);   // seems to be necessary to reset monitor state

    unsigned char ddc_cmd_bytes[] = {
       0x6e,              // address 0x37, shifted left 1 bit
       0x51,              // source address
       0x02 | 0x80,       // number of DDC data bytes, with high bit set
       0x01,              // DDC Get Feature command
       0x10,              // feature: Luminosity
       0x00,              // checksum, to be set
    };
    ddc_cmd_bytes[5] = ddc_checksum(ddc_cmd_bytes, 5);    // calculate DDC checksum on all preceding bytes
    i2c_smbus_write_i2c_block_data(fh, ddc_cmd_bytes[1], sizeof(ddc_cmd_bytes)-2, ddc_cmd_bytes+2);
    // alt: write(fh, ddc_cmd_bytes+1, sizeof(ddc_cmd_bytes)-1);   // see below
    usleep(5000);

    unsigned char ddc_response_bytes[12];
    unsigned char cmd_byte = 0x00;   // apparently ignored, can be anything
    i2c_smbus_read_i2c_block_data(fh, cmd_byte, 11, ddc_response_bytes+1);
    // alt: read(fh, ddc_response_bytes+1, 11);   // see below
    ddc_response_bytes[0] = 0x50;    // for proper checksum calculation
    int calculated_checksum = ddc_checksum(ddc_response_bytes, 11);
    assert(ddc_response_bytes[11] == calculated_checksum);
    int response_len = ddc_response_bytes[2] & 0x7f;   // always 8 for a DDC Get Feature response
    // now parse the response data
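For completeness, the ddc_checksum() helper used above isn't shown; a minimal sketch, assuming the straight XOR definition given in the DDC/CI spec:

```c
/* DDC/CI checksum: the XOR of all preceding bytes (including the
 * implicit destination-address byte, which is why the example stores
 * 0x50 into byte 0 of the response buffer before checksumming). */
unsigned char ddc_checksum(const unsigned char *bytes, int len)
{
    unsigned char sum = 0;
    for (int i = 0; i < len; i++)
        sum ^= bytes[i];
    return sum;
}
```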


When issuing the DDC Get Feature command (code 0x01), a fixed block of 
12 data bytes is returned as shown above (as counted from the 
i2c_smbus_read_i2c_block_data() perspective).  However, the DDC Get 
Capabilities request (0xf3) can return up to 39 bytes (fixed DDC data 
of 7 bytes plus a "fragment" of up to 32 bytes, depending on the monitor 
being communicated with).  This is greater than the 32-byte maximum data 
size supported by i2c_smbus_read_i2c_block_data() (constant 
I2C_SMBUS_I2C_BLOCK_MAX in i2c-dev.h).  And indeed, I've seen 
i2c_smbus_read_i2c_block_data() return truncated responses.

Now things get interesting.

Simply using write() and read() seems to work, when the 
i2c_smbus_..._i2c_block_data() calls in the above code are replaced by 
the commented-out write() and read() lines.  So apparently 
write() and read() are handling the algorithm bits (start, stop, ack, 
nack) properly.


Now come the key questions:

Am I just getting lucky here, or is the i2c_dev driver (or one of the 
drivers it calls) really doing the right thing for managing the 
algorithm bits for the i2c DDC protocol?

Is there a better way to use lower level services in i2c-dev?

Should I really be writing a device driver?


Finally, the proprietary video drivers.

The proprietary nvidia driver creates the /dev/i2c-n devices.  I can 
read the monitor EDID information on bus address 0x50 and get the 
functionality flags using ioctl I2C_FUNCS.  Functions 
i2c_smbus_read_i2c_block_data() and i2c_smbus_write_i2c_block_data() are 
not supported (flags I2C_FUNC_SMBUS_READ_I2C_BLOCK and 
I2C_FUNC_SMBUS_WRITE_I2C_BLOCK are not set).  Attempting to call these 
functions fails with errno=22 (EINVAL - invalid argument) if module 
i2c_algo_bit has not been loaded, and errno=5 (EIO - I/O error) if it has.

Trying to use write() and read() as described above also fails. write() 
appears to succeed, but read() returns invalid data.

Appendix F of the Nvidia driver README discusses i2c support.  It 
appears to be quite dated, referring to the 2.4 and 2.6 kernels. 
According to this document, only the following functionality is 
supported, which would be consistent with what I've seen: I2C_FUNC_I2C, 
I2C_FUNC_SMBUS_QUICK, I2C_FUNC_SMBUS_BYTE, I2C_FUNC_SMBUS_BYTE_DATA, 
I2C_FUNC_SMBUS_WORD_DATA


As for the proprietary fglrx driver, it doesn't even create the 
/dev/i2c-n devices.   End of story.

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: ddc/ci over i2c
       [not found] ` <53BBC97B.5020805-9+fK6rGKj7RBDgjK7y7TUQ@public.gmane.org>
@ 2014-07-08 22:20   ` Jean Delvare
       [not found]     ` <20140709002017.4c2b332e-R0o5gVi9kd7kN2dkZ6Wm7A@public.gmane.org>
  0 siblings, 1 reply; 8+ messages in thread
From: Jean Delvare @ 2014-07-08 22:20 UTC (permalink / raw)
  To: Sanford Rockowitz; +Cc: linux-i2c-u79uwXL29TY76Z2rM5mHXA

Hi Sanford,

On Tue, 08 Jul 2014 03:35:39 -0700, Sanford Rockowitz wrote:
> I've been trying to use i2c-dev to read and control monitor settings, 
> using the DDC/CI protocol to communicate with address 0x37 on device 
> /dev/i2c-n.  Things are sort of working (details below), but it's not 
> clear to me to what extent I'm hitting real limits, or if I just don't 
> know what I'm doing.  Perhaps I shouldn't even be trying to use i2c-dev 
> for this application.

i2c-dev should work fine for what you're doing. 

> Advice appreciated. And if there's a more 
> appropriate place to post this question, I'd appreciate hearing that as 
> well.
> 
> To review: A monitor is accessed via device /dev/i2c-n, created by the 
> video device driver.  EDID data is found by reading address 0x50.  The 
> monitor's settings are read and written at address 0x37.
> 
> I can communicate with the monitor if either the open-source nouveau or 
> radeon drivers are loaded.  Both support i2c_smbus_read_i2c_block_data() 
> and i2c_smbus_write_i2c_block_data(), which put the right bytes on the 
> wire and handle the ack and nack bits as per the DDC specification. (See 
> documentation file i2c/smbus-protocol.)
> 
> For the nouveau driver running on Fedora 20, "lsmod | grep i2c" reports:
> 
> i2c_piix4              22155  0
> i2c_algo_bit           13257  1 nouveau
> i2c_dev                14027  0
> i2c_core               38656  6 
> drm,i2c_dev,i2c_piix4,drm_kms_helper,i2c_algo_bit,nouveau
> 
> 
> Here's a simplified example (minimal error checking) of how I'm reading 
> the monitor's brightness setting:
> 
>     int fh = open("/dev/i2c-0",  O_NONBLOCK|O_RDWR);

Out of curiosity, why O_NONBLOCK?

>     ioctl(fh, I2C_SLAVE, 0x37);
> 
>     unsigned char zeroByte = 0x00;
>     write(fh, &zeroByte, 1);   // seems to be necessary to reset monitor state
> 
>     unsigned char ddc_cmd_bytes[] = {
>        0x6e,              // address 0x37, shifted left 1 bit
>        0x51,              // source address
>        0x02 | 0x80,       // number of DDC data bytes, with high bit set
>        0x01,              // DDC Get Feature Command
>        0x10,              // Feature, Luminosity
>        0x00,              // checksum, to be set
>     };
>     ddc_cmd_bytes[5] = ddc_checksum(ddc_cmd_bytes, 5);    // calculate DDC checksum on all bytes
>     i2c_smbus_write_i2c_block_data(fh, ddc_cmd_bytes[1], sizeof(ddc_cmd_bytes)-2, ddc_cmd_bytes+2);
>     // alt: write(fh, ddc_cmd_bytes+1, sizeof(ddc_cmd_bytes)-1); // see  below

Yes, that would be equivalent, because there is no direction change for
I2C block writes.

>     usleep(5000);
> 
>     unsigned char ddc_response_bytes[12];
>     unsigned char cmd_byte = 0x00;   // apparently ignored, can be anything
>     i2c_smbus_read_i2c_block_data(fh, cmd_byte, 11, ddc_response_bytes+1);
>     // alt read(fh, ddc_response_bytes+1, 11);   // see below

I don't know the DDC/CI protocol so I can't tell how you are supposed
to read the response, but I can tell you that
i2c_smbus_read_i2c_block_data and read are not equivalent.
i2c_smbus_read_i2c_block_data first writes a command byte, then
there's a direction change (a.k.a. a restart condition in I2C dialect)
and then data is read from the chip. read, OTOH, reads data from the
chip directly.

But the fact that the command byte is ignored would be compatible with
your theory that both are equivalent.

>     ddc_response_bytes[0] = 0x50;     // for proper checksum calculation
>     int calculated_checksum = ddc_checksum(ddc_response_bytes, 11);
>     assert(ddc_response_bytes[11] == calculated_checksum);
>     int response_len = ddc_response_bytes[2] & 0x7f;       // always 8 for DDC Get Value response
>     // now parse the response data
> 
> 
> When issuing the DDC get feature command (code 0x01), a fixed data block 
> of 12 data bytes is returned as shown above (as counted from the 
> i2c_smbus_read_i2c_block_data() perspective).

Your code reads 11 data bytes, not 12.

> However, the DDC Get 
> Capabilities request (0xf3), can return up to 39 bytes (fixed DDC data 
> of 7 bytes plus a "fragment" of up to 32 bytes, depending on the monitor 
> being communicated with).   This is greater than the 32 byte max data 
> size supported by i2c_smbus_read_i2c_block_data() (constant 
> I2C_SMBUS_I2C_BLOCK_MAX in i2c-dev.h).  And indeed, I've seen 
> i2c_smbus_read_i2c_block_data() return truncated responses.

The 32-byte limit is inherited from SMBus block limits. In practice,
i2c_smbus_read/write_i2c_block_data is useful with SMBus controllers
which can't do full I2C but implement these functions. If you need to
go beyond this limit then your code won't work on simple SMBus
controllers anyway, so you shouldn't use i2c_smbus_read_i2c_block_data;
instead you should use the I2C_RDWR ioctl. See eepromer/eeprom.c in
i2c-tools for an example.
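A sketch of how that could look for the DDC exchange, assuming the adapter reports I2C_FUNC_I2C (the ddc_transfer helper and the two-transfer-with-delay structure are illustrative guesses at mapping DDC/CI onto I2C_RDWR, not code from eeprom.c):

```c
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

/* Send a DDC request and fetch the (possibly >32 byte) reply through
 * the I2C_RDWR ioctl instead of the SMBus block calls.  The two
 * transfers are issued separately with a delay in between, since
 * DDC/CI gives the display time to prepare its answer. */
static int ddc_transfer(int fh, unsigned char *request, int req_len,
                        unsigned char *response, int resp_len)
{
    struct i2c_msg wr = { .addr = 0x37, .flags = 0,
                          .len = req_len, .buf = request };
    struct i2c_msg rd = { .addr = 0x37, .flags = I2C_M_RD,
                          .len = resp_len, .buf = response };
    struct i2c_rdwr_ioctl_data xfer = { .msgs = &wr, .nmsgs = 1 };

    if (ioctl(fh, I2C_RDWR, &xfer) < 0)
        return -1;
    usleep(40 * 1000);              /* give the display time to respond */
    xfer.msgs = &rd;
    return ioctl(fh, I2C_RDWR, &xfer) < 0 ? -1 : 0;
}
```

Since DDC/CI uses two separate bus transactions rather than a repeated start, nothing here is limited to 32 bytes.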

> Now things get interesting.
> 
> Simply using write() and read() seems to work, when the 
> i2c_smbus_..._i2c_block_data() calls in the above code are replaced by 
> the commented-out write() and read() lines.  So apparently 
> write() and read() are handling the algorithm bits (start, stop, ack, 
> nack) properly.

Yes, write and read are fully I2C compliant. What they can't do is
direction changes in the middle of a message. Maybe DDC/CI doesn't
require that, I don't know. I could never get my hands on the
specification.

> Now come the key questions:
> 
> Am I just getting lucky here, or is the i2c_dev driver (or one of the 
> drivers it calls) really doing the right thing for managing the 
> algorithm bits for the i2c DDC protocol?

Well your code looks mostly sane. It's hard for me to tell if it is
fully correct without seeing the DDC/CI specification.

> Is there a better way to use lower level services in i2c-dev?

If DDC/CI needs direction changes inside messages, then you should use
the I2C_RDWR ioctl. If not then you're OK with read and write.

> Should I really be writing a device driver?

You could, but then you would have to define a user interface, which
might be good anyway. i2c-dev requires root capabilities, while ideally
regular users should be able to change the contrast and brightness of
the display. But presumably this should all go through X11 anyway.

> Finally, the proprietary video drivers.
> 
> The proprietary nvidia driver creates the /dev/i2c-n devices.  I can 
> read the monitor EDID information on bus address 0x50. and get the 
> functionality flags using ioctl I2C_FUNCS.   Functions 
> i2c_smbus_read_i2c_block_data() and i2c_smbus_write_i2c_block_data() are 
> not supported (flags I2C_FUNC_SMBUS_READ_I2C_BLOCK and 
> I2C_FUNC_SMBUS_WRITE_I2C_BLOCK are not set).  Attempting to call these 
> functions fails with errno=22 (EINVAL - invalid argument) if module 
> i2c_algo_bit has not been loaded, and errno=5 (EIO - IO Error) if it has.

Hard to comment without seeing the code, and I don't want to see it.
But I suppose that the binary driver is taking advantage of hardware I2C
controllers inside the graphics chipset, while the open source drivers
are doing everything in software using i2c-algo-bit over GPIOs. Hardware
implementations are faster and more reliable, but possibly less
flexible. It is also possible that the hardware can do more than the
driver allows but Nvidia did not bother implementing full support.

> Trying to use write() and read() as described above also fails. write() 
> appears to succeed, but read() returns invalid data.

Again, can't comment on that without seeing the driver code.

> Appendix F of the Nvidia driver README discusses i2c support.  It 
> appears to be quite dated, referring to the 2.4 and 2.6 kernels. 

That doesn't necessarily mean much, 2.6 == 3.x in many respects.

> According to this document, only the following functionality is 
> supported, which would be consistent with what I've seen: I2C_FUNC_I2C, 
> I2C_FUNC_SMBUS_QUICK, I2C_FUNC_SMBUS_BYTE, I2C_FUNC_SMBUS_BYTE_DATA, 
> I2C_FUNC_SMBUS_WORD_DATA

I would trust the output of i2cdetect -F more than the documentation.

This suggests that the driver authors were lazy and/or did something
stupid. A driver that can do I2C_FUNC_I2C can do most SMBus transfers,
including I2C block transfers (which despite the name are close to
SMBus transfers).

OTOH, if the driver really supports I2C_FUNC_I2C then the I2C_RDWR
ioctl should do the trick. But I wouldn't hope for too much, because then
read and write should also work, whereas your experiments apparently show
they do not.

> As for the proprietary fglrx driver, it doesn't even create the 
> /dev/i2c-n devices.   End of story.

Correct :(

-- 
Jean Delvare
SUSE L3 Support


* Re: ddc/ci over i2c
       [not found]     ` <20140709002017.4c2b332e-R0o5gVi9kd7kN2dkZ6Wm7A@public.gmane.org>
@ 2014-07-09 16:31       ` Sanford Rockowitz
       [not found]         ` <53BD6E61.8070101-9+fK6rGKj7RBDgjK7y7TUQ@public.gmane.org>
  0 siblings, 1 reply; 8+ messages in thread
From: Sanford Rockowitz @ 2014-07-09 16:31 UTC (permalink / raw)
  To: linux-i2c-u79uwXL29TY76Z2rM5mHXA; +Cc: Jean Delvare

On 07/08/2014 03:20 PM, Jean Delvare wrote:
> Hi Sanford,
>
> On Tue, 08 Jul 2014 03:35:39 -0700, Sanford Rockowitz wrote:
>> I've been trying to use i2c-dev to read and control monitor settings,
>> using the DDC/CI protocol to communicate with address 0x37 on device
>> /dev/i2c-n.  Things are sort of working (details below), but it's not
>> clear to me to what extent I'm hitting real limits, or if I just don't
>> know what I'm doing.  Perhaps I shouldn't even be trying to use i2c-dev
>> for this application.
> i2c-dev should work fine for what you're doing.
Jean,

Thank you for your detailed reply.

First, a description of the DDC/CI protocol, which will probably be 
clearer than splitting the information up into snippets on individual 
comments.

The DDC/CI protocol is defined in VESA document DDC/CI Standard 1.1, 
file name ddcciv1r1.pdf.  An earlier version 1.0, file name ddcciv1.pdf, 
is less clear, but it does have a number of comments that clarify 
behaviour of the protocol.  The actual commands that are transmitted 
over DDC/CI are detailed in the VESA Monitor Control Command Set (MCCS) 
document, file name mccsV3.pdf.  These files can be a bit hard to 
locate, but googling the file names finds them readily.

The I2C bus notation used in the DDC/CI specification is as follows:

    S       Start bit             Generated by the master to start communication (bus becomes busy)
    XX      Data byte, hex        Made of 8 data bits; may be sent or received by the master
    a       Acknowledge bit       Sent in the opposite direction of the data bits
    n       Non-acknowledge bit   Signals the end of the data transfer; a stop bit should follow to free the bus
    P       Stop bit              Signals the end of the communication; bus becomes free
    CHK     Checksum              XOR of preceding bytes
    CHK'    Checksum              Checksum computed using the 0x50 virtual address
    VCP     VCP code              Virtual Control Panel feature code (1 byte, e.g. 0x10 for Luminosity)

DDC/CI Get VCP Feature and VCP Feature Reply is defined as follows.

The following sequence is written by the host:

S-6Ea-51a-82a-01a-CPa-CHKa-P

Where:
    6E     Destination address
    51     Source address
    82     number of data bytes following (not including checksum) XOR 0x80
    01     Get VCP Feature opcode
    CP     VCP Feature code (10h for luminosity)

The display responds with:

S-6Fa-6Ea-88a-02a-RCa-CPa-TPa-MHa-MLa-SHa-SLa-CHK'n-P

Where:
    6F    Destination address
    6E    Source address
    88    Number of data bytes following (not including checksum) XOR 0x80
    02    VCP Feature reply opcode
    RC    Result code, 00 = no error
    CP    VCP Feature code from the Feature Request message (10h for luminosity)
    TP    VCP type code (will be 00h for Luminosity)
    MH    Max value high byte
    ML    Max value low byte
    SH    Present value high byte
    SL    Present value low byte
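A hedged sketch of decoding that reply as the bytes arrive through read() (the leading 6F byte is implicit and never appears in the buffer; the struct and function names are mine, for illustration only):

```c
struct vcp_value {
    int result_code;   /* RC: 0x00 = no error          */
    int feature;       /* CP: feature code echoed back */
    int type;          /* TP: VCP type code            */
    int max;           /* MH/ML combined               */
    int cur;           /* SH/SL combined               */
};

/* buf holds the 11 bytes read from the display:
 * 6E 88 02 RC CP TP MH ML SH SL CHK'  */
static int parse_vcp_reply(const unsigned char *buf, struct vcp_value *v)
{
    if (buf[0] != 0x6e || buf[2] != 0x02)   /* source address, reply opcode */
        return -1;
    if ((buf[1] & 0x7f) != 8)               /* data length, always 8 here   */
        return -1;
    v->result_code = buf[3];
    v->feature     = buf[4];
    v->type        = buf[5];
    v->max = (buf[6] << 8) | buf[7];
    v->cur = (buf[8] << 8) | buf[9];
    return 0;
}
```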
>> Advice appreciated. And if there's a more
>> appropriate place to post this question, I'd appreciate hearing that as
>> well.
>>
>> To review: A monitor is accessed via device /dev/i2c-n, created by the
>> video device driver.  EDID data is found by reading address 0x50.  The
>> monitor's settings are read and written at address 0x37.
>>
>> I can communicate with the monitor if either the open-source nouveau or
>> radeon drivers are loaded.  Both support i2c_smbus_read_i2c_block_data()
>> and i2c_smbus_write_i2c_block_data(), which put the right bytes on the
>> wire and handle the ack and nack bits as per the DDC specification. (See
>> documentation file i2c/smbus-protocol.)
>>
>> For the nouveau driver running on Fedora 20, "lsmod | grep i2c" reports:
>>
>> i2c_piix4              22155  0
>> i2c_algo_bit           13257  1 nouveau
>> i2c_dev                14027  0
>> i2c_core               38656  6
>> drm,i2c_dev,i2c_piix4,drm_kms_helper,i2c_algo_bit,nouveau
>>
>>
>> Here's a simplified example (minimal error checking) of how I'm reading
>> the monitor's brightness setting:
>>
>>      int fh = open("/dev/i2c-0",  O_NONBLOCK|O_RDWR);
> Out of curiosity, why O_NONBLOCK?
No good reason.  O_NONBLOCK is the result of copy and paste from some 
blog post when I was first trying to get things to work.  I've 
eliminated the flag, and program execution is unaffected.

>>      ioctl(fh, I2C_SLAVE, 0x37);
>>
>>      unsigned char zeroByte = 0x00;
>>      write(fh, &zeroByte, 1);   // seems to be necessary to reset monitor state
>>
>>      unsigned char ddc_cmd_bytes[] = {
>>         0x6e,              // address 0x37, shifted left 1 bit
>>         0x51,              // source address
>>         0x02 | 0x80,       // number of DDC data bytes, with high bit set
>>         0x01,              // DDC Get Feature Command
>>         0x10,              // Feature, Luminosity
>>         0x00,              // checksum, to be set
>>      };
>>      ddc_cmd_bytes[5] = ddc_checksum(ddc_cmd_bytes, 5);    // calculate DDC checksum on all bytes
>>      i2c_smbus_write_i2c_block_data(fh, ddc_cmd_bytes[1], sizeof(ddc_cmd_bytes)-2, ddc_cmd_bytes+2);
>>      // alt: write(fh, ddc_cmd_bytes+1, sizeof(ddc_cmd_bytes)-1); // see  below
> Yes, that would be equivalent, because there is no direction change for
> I2C block writes.
>
>>      usleep(5000);
>>
>>      unsigned char ddc_response_bytes[12];
>>      unsigned char cmd_byte = 0x00;   // apparently ignored, can be anything
>>      i2c_smbus_read_i2c_block_data(fh, cmd_byte, 11, ddc_response_bytes+1);
>>      // alt read(fh, ddc_response_bytes+1, 11);   // see below
> I don't know the DDC/CI protocol so I can't tell how you are supposed
> to read the response, but I can tell you that
> i2c_smbus_read_i2c_block_data and read are not equivalent.
> i2c_smbus_read_i2c_block_data first writes a command bytes, then
> there's a direction change (a.k.a. restart condition in I2C dialect)
> and then data is read from the chip. read, OTOH, reads data from the
> chip directly.
>
> But the fact that the command byte is ignored, would be compatible with
> your theory that both are equivalent.

As described in the DDC spec above, a DDC exchange is a write of several 
bytes, followed by a read of several bytes.  There's no direction change 
within the read.

I didn't include code for parsing the response because doing it right 
would just add detail irrelevant to getting the I2C calls right.
>>      ddc_response_bytes[0] = 0x50;     // for proper checksum calculation
>>      int calculated_checksum = ddc_checksum(ddc_response_bytes, 11);
>>      assert(ddc_response_bytes[11] == calculated_checksum);
>>      int response_len = ddc_response_bytes[2] & 0x7f;       // always 8 for DDC Get Value response
>>      // now parse the response data
>>
>>
>> When issuing the DDC get feature command (code 0x01), a fixed data block
>> of 12 data bytes is returned as shown above (as counted from the
>> i2c_smbus_read_i2c_block_data() perspective).
> Your code reads 11 data bytes, not 12.

Correct.  A bit of sloppy exposition.  I had to stand on my head three 
times over and look sideways to properly munge the DDC spec into the 
i2c-dev calls.  From the DDC perspective, one more byte is written and 
read than is written and read using write(), read(), 
i2c_smbus_write_i2c_block_data(), etc.  Byte 0 of a DDC request or 
response is implicit (the result of ioctl(fh, I2C_SLAVE, 0x37)), but 
from a DDC perspective you need to think of it as being there for the 
checksums to be properly calculated.
>
>> However, the DDC Get
>> Capabilities request (0xf3), can return up to 39 bytes (fixed DDC data
>> of 7 bytes plus a "fragment" of up to 32 bytes, depending on the monitor
>> being communicated with).   This is greater than the 32 byte max data
>> size supported by i2c_smbus_read_i2c_block_data() (constant
>> I2C_SMBUS_I2C_BLOCK_MAX in i2c-dev.h).  And indeed, I've seen
>> i2c_smbus_read_i2c_block_data() return truncated responses.
> The 32 bytes limit is inherited from SMBus block limits. In practice,
> i2c_smbus_read/write_i2c_block_data is useful with SMBus controllers
> which can't do full I2C but implement these functions. If you need to
> go beyond this limit then your code won't work on simple SMBus
> controllers so you shouldn't use i2c_smbus_read_i2c_block_data, instead
> you should use the I2C_RDRW ioctl. See eepromer/eeprom.c in i2c-tools
> for an example.
>
>> Now things get interesting.
>>
>> Simply using write() and read() seems to work, when the
>> i2c_smbus_..._i2c_block_data() calls in the above code are replaced by
>> the commented-out write() and read() lines.  So apparently
>> write() and read() are handling the algorithm bits (start, stop, ack,
>> nack) properly.
> Yes, write and read are fully I2C compliant. What they can't do is
> direction changes in the middle of a message. Maybe DDC/CI doesn't
> require that, I don't know. I could never get my hands on the
> specification.

As noted, a DDC exchange is a write/read sequence of two separate 
messages (or in some cases a simple write).  Each write or read begins 
with a start bit and ends with a stop bit.  So if I understand your 
terminology correctly, there's no direction change within a message.

Spec location noted above.
>
>> Now come the key questions:
>>
>> Am I just getting lucky here, or is the i2c_dev driver (or one of the
>> drivers it calls) really doing the right thing for managing the
>> algorithm bits for the i2c DDC protocol?
> Well your code looks mostly sane. It's hard for me to tell if it is
> fully correct without seeing the DDC/CI specification.
>
>> Is there a better way to use lower level services in i2c-dev?
> If DDC/CI needs direction changes inside messages, then you should use
> the I2C_RDWR ioctl. If not then you're OK with read and write.
>
>> Should I really be writing a device driver?
> You could but then you would have to define a user interface. Which
> might be good anyway. i2c-dev requires root capabilities while ideally
> regular users should be able to change the contrast and brightness of
> the display. But presumably this should all go through X11 anyway.

Some context of why I'm doing this may be appropriate here.  This 
project came about because of my interest in monitor profiling and 
calibration.  A monitor calibration is relative to the current monitor 
settings for luminosity, contrast, color temperature, etc.  Setting a 
profile only makes sense if you know that the monitor settings are 
unchanged from when you made the profile.  Currently in Linux, the only 
way to do this is leave the buttons on the monitor untouched.

None of the major profiling tools - dispcal, dispcalGUI, colord, 
oyranos - record or restore the monitor settings.  At most they can 
give you advice on how to adjust the physical monitor controls to 
obtain the desired color temperature.  There once was a program to 
manage monitor settings, ddccontrol, but it hasn't been maintained and 
has been dropped from distros.  When I enquired about this on the color 
lists, the gist of the responses was that it was too hard to do because 
the DDC spec was inconsistently implemented across monitors.

I decided to try to come up with a solution that works for at least MY 
set of monitors.  Basically, at the time of monitor calibration, the 
program records whatever color related settings exist for the monitor, 
and can then restore those settings when the profile is set.  
Interestingly, the variation among monitor implementations has proven 
quite manageable.  Almost all the effort has gone into getting DDC over 
I2C right.  At this point, it appears that the biggest obstacle to a 
general solution comes from the proprietary drivers.
>> Finally, the proprietary video drivers.
>>
>> The proprietary nvidia driver creates the /dev/i2c-n devices.  I can
>> read the monitor EDID information on bus address 0x50. and get the
>> functionality flags using ioctl I2C_FUNCS.   Functions
>> i2c_smbus_read_i2c_block_data() and i2c_smbus_write_i2c_block_data() are
>> not supported (flags I2C_FUNC_SMBUS_READ_I2C_BLOCK and
>> I2C_FUNC_SMBUS_WRITE_I2C_BLOCK are not set).  Attempting to call these
>> functions fails with errno=22 (EINVAL - invalid argument) if module
>> i2c_algo_bit has not been loaded, and errno=5 (EIO - IO Error) if it has.
> Hard to comment without seeing the code, and I don't want to see it.
> But I suppose that the binary driver is taking advantage of hardware I2C
> controllers inside the graphics chipset, while the open source drivers
> are doing everything in software using i2c-algo-bit over GPIOs. Hardware
> implementations are faster and more reliable, but possibly less
> flexible. It is also possible that the hardware can do more than the
> driver allows but Nvidia did not bother implementing full support.
>
>> Trying to use write() and read() as described above also fails. write()
>> appears to succeed, but read() returns invalid data.
> Again, can't comment on that without seeing the driver code.
>
>> Appendix F of the Nvidia driver README discusses i2c support.  It
>> appears to be quite dated, referring to the 2.4 and 2.6 kernels.
> That doesn't necessarily mean much, 2.6 == 3.x in many respects.
>
>> According to this document, only the following functionality is
>> supported, which would be consistent with what I've seen: I2C_FUNC_I2C,
>> I2C_FUNC_SMBUS_QUICK, I2C_FUNC_SMBUS_BYTE, I2C_FUNC_SMBUS_BYTE_DATA,
>> I2C_FUNC_SMBUS_WORD_DATA
> I would trust the output of i2cdetect -F more than the documentation.

With the proprietary nvidia driver loaded,  i2cdetect -F 0 reports:

Functionalities implemented by /dev/i2c-0:
I2C                              yes
SMBus Quick Command              yes
SMBus Send Byte                  yes
SMBus Receive Byte               yes
SMBus Write Byte                 yes
SMBus Read Byte                  yes
SMBus Write Word                 yes
SMBus Read Word                  yes
SMBus Process Call               no
SMBus Block Write                no
SMBus Block Read                 no
SMBus Block Process Call         no
SMBus PEC                        no
I2C Block Write                  no
I2C Block Read                   no


This is (unsurprisingly) consistent with what I get calling ioctl I2C_FUNCS.
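For reference, the same check can be made from a program; a minimal sketch (the helper name is mine; the flag constants come from linux/i2c.h):

```c
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

/* Print the subset of adapter functionality relevant to DDC/CI,
 * mirroring part of what i2cdetect -F reports. */
static int print_ddc_funcs(int fh)
{
    unsigned long funcs = 0;
    if (ioctl(fh, I2C_FUNCS, &funcs) < 0)
        return -1;
    printf("I2C              %s\n", (funcs & I2C_FUNC_I2C) ? "yes" : "no");
    printf("I2C Block Read   %s\n",
           (funcs & I2C_FUNC_SMBUS_READ_I2C_BLOCK) ? "yes" : "no");
    printf("I2C Block Write  %s\n",
           (funcs & I2C_FUNC_SMBUS_WRITE_I2C_BLOCK) ? "yes" : "no");
    return 0;
}
```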

>
> This suggests that the driver authors were lazy and/or did something
> stupid. A driver that can do I2C_FUNC_I2C, can do most SMBus transfers,
> including I2C block transfers (which despite the name are close to
> SMBus transfers.)
>
> OTOH, if the driver really supports I2C_FUNC_I2C then the I2C_RDWR
> ioctl should do the trick. But I wouldn't hope too much, because then
> read and write should also work, whereas your experiments apparently show
> they do not.
As a first test case, I just replaced the write() call with ioctl 
I2C_RDWR.  It works for nouveau, and fails with errno=5 (EIO) for the 
proprietary nvidia driver.  I don't think there's any reason to believe 
other uses of ioctl I2C_RDWR will fare better.

>> As for the proprietary fglrx driver, it doesn't even create the
>> /dev/i2c-n devices.   End of story.

> Correct :(
>

So I think the bottom line from this exchange is as follows:

1) Use write() and read().  They really are doing the right thing. 
Forget about i2c_smbus_write/read_i2c_block_data().

2) It's time to head over to an nvidia forum to explore why the code 
doesn't work with the proprietary driver.  Maybe there's a workaround, 
or maybe I can light a fire under the nvidia developers.

Thanks again,
Sanford


* Re: ddc/ci over i2c
       [not found]         ` <53BD6E61.8070101-9+fK6rGKj7RBDgjK7y7TUQ@public.gmane.org>
@ 2014-07-28 10:52           ` Jean Delvare
       [not found]             ` <20140728125248.5bbc0f89-R0o5gVi9kd7kN2dkZ6Wm7A@public.gmane.org>
  0 siblings, 1 reply; 8+ messages in thread
From: Jean Delvare @ 2014-07-28 10:52 UTC (permalink / raw)
  To: Sanford Rockowitz; +Cc: linux-i2c-u79uwXL29TY76Z2rM5mHXA

Hi Sanford,

Sorry for the late reply and thanks for the detailed explanation.

On Wed, 09 Jul 2014 09:31:29 -0700, Sanford Rockowitz wrote:
> > Your code reads 11 data bytes, not 12.
> 
> Correct.  A bit of sloppy exposition.  I had to stand on my head three 
> times over and look sideways to properly munge the DDC spec into the 
> i2c-dev calls.  From the DDC perspective, one more byte is written and 
> read than is written and read using write(), read(), 
> i2c_smbus_write_i2c_block_data(), etc.  Byte 0 of a DDC request 
> or response is implicit (the result of ioctl(fh, I2C_SLAVE, 0x37)), but 
> from a DDC perspective you need to think of it as being there for the 
> checksums to be properly calculated.

OK, I understand.

> (...)
> As noted, a DDC exchange is a write/read sequence of two separate messages 
> (or in some cases a simple write). Each write or read begins with a 
> start bit and ends with a stop bit.  So if I understand your terminology 
> correctly, there's no direction change within a message.

Correct. I think this is because the protocol allows for a delay of up
to 40 ms for the display to prepare the answer message. In theory
this could still be done with a single transfer and a direction change,
with the slave (display) holding SCL low until it is ready. This would
have avoided unneeded delays when the display is faster. But
apparently they did not want to implement it that way.

BTW your test program is only waiting for 5 ms instead of the 40 ms
suggested in the specification. Could this explain your problems?
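The full write/wait/read exchange then looks roughly like this. An
untested sketch: the bus number and delay are illustrative, and real
code needs retries plus validation of the reply's length byte and
checksum:

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fh = open("/dev/i2c-0", O_RDWR);           /* bus number varies */
    if (fh < 0 || ioctl(fh, I2C_SLAVE, 0x37) < 0)  /* DDC/CI address */
        return 1;

    /* Get VCP Feature request for brightness (0x10); the final byte is
     * the checksum, which also covers the implicit 0x6E address byte. */
    unsigned char req[] = { 0x51, 0x82, 0x01, 0x10, 0xAC };
    if (write(fh, req, sizeof req) != (ssize_t)sizeof req)
        return 1;

    usleep(50 * 1000);   /* spec suggests 40 ms; 50 ms adds margin */

    unsigned char resp[11];                /* Get VCP reply via read() */
    if (read(fh, resp, sizeof resp) != (ssize_t)sizeof resp)
        return 1;
    /* resp[0] should be 0x6E; check the length byte and checksum here. */
    close(fh);
    return 0;
}
```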

> (...)
> Some context of why I'm doing this may be appropriate here.  This 
> project came about because of my interest in monitor profiling and 
> calibration.  A monitor calibration is relative to the current monitor 
> settings for luminosity, contrast, color temperature, etc. Setting a 
> profile only makes sense if you know that the monitor settings are 
> unchanged from when you made the profile.  Currently in Linux, the only 
> way to do this is leave the buttons on the monitor untouched.   None of 
> the major profiling tools - dispcal, dispcalGUI, colord, oyranos - 
> record or restore the monitor settings.  At most they can give you 
> advice on how to adjust the physical monitor controls to obtain the 
> desired color temperature. There once was a program to manage monitor 
> settings, ddccontrol, but it hasn't been maintained and has been dropped 
> from distros.

I took a quick look at this and while it looked promising, it indeed
seems to be unmaintained for almost a decade. From a design
perspective, I seem to understand that the tool is trying to access the
DDC channels directly instead of relying on the i2c buses exposed by
the graphics drivers. That's very bad so I'm not surprised that
distributions ended up dropping this tool.

> When I enquired about this on the color lists, the gist 
> of the responses was that it was too hard to do because the DDC spec was 
> inconsistently implemented across monitors.  I decided to try to come up 
> with a solution that works for at least MY set of monitors.

I agree. Just because we can't support all monitors out there is no 
good reason not to try to support at least some of them.

> Basically, 
> at the time of monitor calibration, the program records whatever color 
> related settings exist for the monitor, and can then restore those 
> settings when the profile is set.   Interestingly, the variation among 
> monitor implementations has proven quite manageable.  Almost all the 
> effort has gone into getting DDC over I2C right.  At this point, it 
> appears that the biggest obstacle to a general solution comes from the 
> proprietary drivers.

Well, if your tool is good and gains popularity, it might be a good
incentive for driver authors to fix the problems.

Pardon my ignorance but if color profile managers don't currently set
monitor settings using DDC/CI, then what are they doing? Are they only
doing software color correction? Or asking the graphics card to do so?

> >> (....)
> >> According to this document, only the following functionality is
> >> supported, which would be consistent with what I've seen: I2C_FUNC_I2C,
> >> I2C_FUNC_SMBUS_QUICK, I2C_FUNC_SMBUS_BYTE, I2C_FUNC_SMBUS_BYTE_DATA,
> >> I2C_FUNC_SMBUS_WORD_DATA
> > I would trust the output of i2cdetect -F more than the documentation.
> 
> With the proprietary nvidia driver loaded,  i2cdetect -F 0 reports:
> 
> Functionalities implemented by /dev/i2c-0:
> I2C                              yes
> SMBus Quick Command              yes
> SMBus Send Byte                  yes
> SMBus Receive Byte               yes
> SMBus Write Byte                 yes
> SMBus Read Byte                  yes
> SMBus Write Word                 yes
> SMBus Read Word                  yes
> SMBus Process Call               no
> SMBus Block Write                no
> SMBus Block Read                 no
> SMBus Block Process Call         no
> SMBus PEC                        no
> I2C Block Write                  no
> I2C Block Read                   no
> 
> 
> This is (unsurprisingly) consistent with what I get calling ioctl I2C_FUNCS.

Of course, as i2cdetect -F is simply calling ioctl I2C_FUNCS :-) The
output above points at a buggy driver implementation. If the driver can
do I2C then it can definitely do I2C Block Write and I2C Block Read.
That being said, for DDC/CI you don't actually care; all you need is 
I2C.
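From userspace, that one capability can be checked directly with the
same ioctl that i2cdetect -F uses; a minimal sketch (bus number is
illustrative):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fh = open("/dev/i2c-0", O_RDWR);  /* bus number varies */
    if (fh < 0)
        return 1;

    unsigned long funcs = 0;
    if (ioctl(fh, I2C_FUNCS, &funcs) < 0)
        return 1;

    /* Plain I2C messaging is all that DDC/CI requires. */
    printf("raw I2C: %s\n", (funcs & I2C_FUNC_I2C) ? "yes" : "no");
    close(fh);
    return 0;
}
```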

> > This suggests that the driver authors were lazy and/or did something
> > stupid. A driver that can do I2C_FUNC_I2C, can do most SMBus transfers,
> > including I2C block transfers (which despite the name are close to
> > SMBus transfers.)
> >
> > OTOH, if the driver really supports I2C_FUNC_I2C then the I2C_RDWR
> > ioctl should do the trick. But I wouldn't hope too much, because then
> > read and write should also work, when your experiments apparently show
> > they do not.
> As a first test case, I just replaced the write() call with ioctl 
> I2C_RDWR.  It works for nouveau, and fails with errno=5 (EIO) for the 
> proprietary nvidia driver.  I don't think there's any reason to believe 
> other uses of ioctl I2C_RDWR will fare better.

Indeed not :-(
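For reference, the I2C_RDWR form of that write looks roughly like this;
a sketch, untested against either driver:

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fh = open("/dev/i2c-0", O_RDWR);  /* bus number varies */
    if (fh < 0)
        return 1;

    /* Same brightness request as with write(), but sent as a raw I2C
     * message; the slave address goes in the message, not via I2C_SLAVE. */
    unsigned char req[] = { 0x51, 0x82, 0x01, 0x10, 0xAC };
    struct i2c_msg msg = {
        .addr  = 0x37,       /* DDC/CI address */
        .flags = 0,          /* 0 = write */
        .len   = sizeof req,
        .buf   = req,
    };
    struct i2c_rdwr_ioctl_data xfer = { .msgs = &msg, .nmsgs = 1 };
    if (ioctl(fh, I2C_RDWR, &xfer) < 0)
        return 1;            /* the proprietary driver fails here (EIO) */
    close(fh);
    return 0;
}
```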

> As for the proprietary fglrx driver, it doesn't even create the
> /dev/i2c-n devices.   End of story.

I've seen that, yes, that's sad. Well AFAIK the open-source radeon
driver is better and more popular than the nouveau driver so the impact
is limited, but it would still be good if the fglrx driver would expose
the i2c buses.

Or alternatively, the X11 stack could provide a DDC/CI API which
drivers would implement and color profile tools would use. I personally
don't care if this happens in X11 or through /dev/i2c* nodes, I'm not
familiar enough with the area to tell which approach is the best.

> (...)
> So I think the bottom line from this exchange is as follows:
> 
> 1) Use write() and read().  They really are doing the right thing. 
> Forget about i2c_smbus_write/read_block_data().

I agree.

> 2) It's time to head over to an nvidia forum to explore why the code 
> doesn't work with the proprietary driver.  Maybe there's a workaround, 
> or maybe I can light a fire under the nvidia developers.

Good idea, I hope you get somewhere with that. Odds are that this part
of the code saw little testing so far, because almost all device
drivers use the SMBus command set rather than raw I2C messages.

-- 
Jean Delvare
SUSE L3 Support


* Re: ddc/ci over i2c
       [not found]             ` <20140728125248.5bbc0f89-R0o5gVi9kd7kN2dkZ6Wm7A@public.gmane.org>
@ 2014-08-09 21:18               ` Sanford Rockowitz
       [not found]                 ` <53E6902B.9000104-9+fK6rGKj7RBDgjK7y7TUQ@public.gmane.org>
  0 siblings, 1 reply; 8+ messages in thread
From: Sanford Rockowitz @ 2014-08-09 21:18 UTC (permalink / raw)
  To: Jean Delvare; +Cc: linux-i2c-u79uwXL29TY76Z2rM5mHXA

Hi Jean,

Well, now it's my turn to be late in replying.    There's been progress.

On 07/28/2014 03:52 AM, Jean Delvare wrote:
> Hi Sanford,
>
> Sorry for the late reply and thanks for the detailed explanation.
>
>
>
> BTW your test program is only waiting for 5 ms instead of the 40 ms
> suggested in the specification. Could this explain your problems?
Just sloppiness in preparing the simplified demo code.  I'm using 50 ms 
in the actual code whenever the spec calls for 40 ms, and I'm generous 
in adding delays for retries.  The problem with the nvidia driver is 
unrelated to delays (see below).
> Pardon my ignorance but if color profile managers don't currently set 
> monitor settings using DDC/CI, then what are they doing? Are they only 
> doing software color correction? Or asking the graphics card to do so? 
Basic answer:  the calibration process creates a lookup table that is 
loaded into the graphics card.  The lookup table translates the color 
input values into output values that will precisely show the expected 
color on the screen.

Long-winded answer:  What is loosely referred to as "profiling" consists 
of two pieces, "calibration" and "characterization". Calibration is the 
process of bringing a monitor into a precisely known state.   That is 
the function of the lookup table.  Characterization is the process of 
describing a monitor's capabilities, e.g. what colors it is capable of 
displaying.  The characterization tables are used by color managed 
applications, for example, to appropriately render an image whose colors 
are beyond the range of what the monitor can show.   Both pieces of 
information are stored in the profile file.

Very high end monitors, such as the HP Dreamcolor and various EIZOs 
expose a LUT that can be used for color translation.  While the DDC 
protocol allows for manipulating the monitor LUT, some of these monitors 
instead use a special connection, such as USB, and a proprietary 
protocol.  Very high end LCD monitors will also have separate red, 
green, and blue LEDs or CCFL tubes, so that the color temperature can be 
precisely adjusted.  But most flat panel monitors have just one 
type of "white" LED or CCFL tube.

Linux calibration programs, such as Argyllcms, simply take as given 
whatever state the on screen controls have put the monitor into, and 
build the lookup table for that state.   dispcalGUI has an optional 
pre-calibration step which displays the measured color temperature of 
the display and suggests which physical monitor controls the user should 
change to achieve the desired color temperature.

There's a school of thought that holds that unless you have a high end 
monitor, you're actually best off setting all the controls (excepting 
brightness) on an LCD monitor to the factory defaults.  Graeme Gill, 
creator of Argyllcms, has a well-considered discussion of this question 
(http://www.argyllcms.com/monitor-controls.html).

>> As for the proprietary fglrx driver, it doesn't even create the
>> /dev/i2c-n devices.   End of story.
> I've seen that, yes, that's sad. Well AFAIK the open-source radeon
> driver is better and more popular than the nouveau driver so the impact
> is limited, but it would still be good if the fglrx driver would expose
> the i2c buses.

Well, it turns out that while fglrx doesn't expose /dev/i2c devices, it 
has a group of APIs for accessing the I2C bus, executing the DDC 
protocol, reading the EDID, etc., notably:
    ADL_Display_WriteAndReadI2C()
    ADL_Display_DDCBlockAccess_Get()
    ADL_Display_EdidData_Get()

After more munging, I'm able to use this API (called ADL) to communicate 
with monitors using DDC.  The code is actually simpler than the 
/dev/i2c version, because the API knows something about the DDC 
protocol, handles retries, etc.  (Interestingly, there's ifdef'd code 
in ddccontrol to use ADL.  Whether it actually works I don't know.)


>
> Or alternatively, the X11 stack could provide a DDC/CI API which
> drivers would implement and color profile tools would use. I personally
> don't care if this happens in X11 or through /dev/i2c* nodes, I'm not
> familiar enough with the area to tell which approach is the best.

From an overall architecture perspective, it seems to me that X11 is 
the desirable place for a DDC/CI API.  I'm also sure that there are 
excellent reasons, far beyond my pay grade, why this has not been 
implemented.  However, there does seem to be discussion of putting 
support into Wayland.   See, for example, this comment by Richard Hughes 
(author of colord) on the Wayland list: 
http://lists.freedesktop.org/archives/wayland-devel/2013-April/008406.html

>> 2) It's time to head over to an nvidia forum to explore why the code
>> doesn't work with the proprietary driver.  Maybe there's a workaround,
>> or maybe I can light a fire under the nvidia developers.
> Good idea, I hope you get somewhere with that. Odds are that this part
> of the code saw little testing so far, because almost all device
> drivers use the SMBus command set rather than raw I2C messages.
>

I was able to narrow the nvidia driver problem to just the write() call, 
so the delay between write() and read() plays no role. Moreover, I only 
see the problem on my (relatively) new GTX660Ti. write() calls of 1 or 2 
bytes succeed.   Calling write() for 3 or more bytes fails.   The 
problem does not occur on an older GT220 card.  So it appears that the 
driver was not properly upgraded to handle newer cards.   I posted a 
report along with sample code on the Nvidia developer website ( 
https://devtalk.nvidia.com/default/topic/760798/linux/ddc-ci-over-i2c-fails-gtx660ti/ 
).  Other than a comment from one other user who believed that this was 
the same issue as the known problem of ddccontrol not working for more 
recent cards, there's been no response.

Lastly, a question.  I have one monitor, a relatively recent 
"professional" quality Dell P2411H, which has a propensity to return 
duplicated bytes on read(), e.g. 0x01020203 instead of 0x010203.  The 
DDC protocol works, but has high retry counts.   Any thoughts on what I 
might do here as a workaround?

Regards,
Sanford


* Re: ddc/ci over i2c
       [not found]                 ` <53E6902B.9000104-9+fK6rGKj7RBDgjK7y7TUQ@public.gmane.org>
@ 2014-08-10  9:05                   ` Jean Delvare
       [not found]                     ` <20140810110528.1b5a825f-R0o5gVi9kd7kN2dkZ6Wm7A@public.gmane.org>
  0 siblings, 1 reply; 8+ messages in thread
From: Jean Delvare @ 2014-08-10  9:05 UTC (permalink / raw)
  To: Sanford Rockowitz; +Cc: linux-i2c-u79uwXL29TY76Z2rM5mHXA

Hi Sanford,

Thanks for all the explanations.

On Sat, 09 Aug 2014 14:18:35 -0700, Sanford Rockowitz wrote:
> Lastly, a question.  I have one monitor, a relatively recent 
> "professional" quality Dell P2411H, which has a propensity to return 
> duplicated bytes on read(), e.g. 0x01020203 instead of 0x010203. The 
> DDC protocol works, but has high retry counts.   Any thoughts on what I 
> might do here as a workaround?

I'm a bit confused by the question, as I don't understand how you could
read more bytes than you actually decided. Or do you mean that, when
reading for example six bytes, you get "1, 2, 2, 3, 4, 5" instead of
the expected "1, 2, 3, 4, 5, 6", i.e. the byte count is good but you're
missing the last one (and you have one duplicate)?

I have two Dell monitors here, U2312HM and P2414H, I can try to
reproduce the problem if you give me the code to run.

I can't remember seeing this before, so I have no immediate explanation.
However some I2C chips use an internal pointer register to select which
register is being read. Whenever you read a byte, the pointer gets
increased to point to the next register. I can imagine that, if the
master reads faster than the slave is able to process, then maybe the
next read could happen before the pointer register is actually
increased. That being said, with such a race condition, I'd expect the
pointer to eventually catch up so you'd get something like "1, 2, 2, 4,
5, 6". Is the duplicate byte always early in the sequence, or can it be
anywhere?

I suggest that you verify that the master properly handles clock
stretching (to give the slave the opportunity to slow down the
transfer.) Or simply lower the DDC bus clock speed and see if it helps.

-- 
Jean Delvare
SUSE L3 Support


* Re: ddc/ci over i2c
       [not found]                     ` <20140810110528.1b5a825f-R0o5gVi9kd7kN2dkZ6Wm7A@public.gmane.org>
@ 2014-08-10 10:08                       ` Sanford Rockowitz
       [not found]                         ` <53E74487.8020809-9+fK6rGKj7RBDgjK7y7TUQ@public.gmane.org>
  0 siblings, 1 reply; 8+ messages in thread
From: Sanford Rockowitz @ 2014-08-10 10:08 UTC (permalink / raw)
  To: Jean Delvare; +Cc: linux-i2c-u79uwXL29TY76Z2rM5mHXA

On 08/10/2014 02:05 AM, Jean Delvare wrote:
> Hi Sanford,
>
> Thanks for all the explanations.
>
> On Sat, 09 Aug 2014 14:18:35 -0700, Sanford Rockowitz wrote:
>> Lastly, a question.  I have one monitor, a relatively recent
>> "professional" quality Dell P2411H, which has a propensity to return
>> duplicated bytes on read(), e.g. 0x01020203 instead of 0x010203. The
>> DDC protocol works, but has high retry counts.   Any thoughts on what I
>> might do here as a workaround?
> I'm a bit confused by the question, as I don't understand how you could
> read more bytes than you actually decided. Or do you mean that, when
> reading for example six bytes, you get "1, 2, 2, 3, 4, 5" instead of
> the expected "1, 2, 3, 4, 5, 6", i.e. the byte count is good but you're
> missing the last one (and you have one duplicate)?

The byte count is correct.   There's a duplicate and so I'm missing the 
last byte.  The duplicate can be anywhere in the sequence. I detect the 
problem when some field value is invalid, or more commonly the checksum 
calculation fails.
>
> I have two Dell monitors here, U2312HM and P2414H, I can try to
> reproduce the problem if you give me the code to run.

I'll have to build a simple test case.   Won't be able to get to it for 
a few days.
>
> I can't remember seeing this before, so I have no immediate explanation.
> However some I2C chips use an internal pointer register to select which
> register is being read. Whenever you read a byte, the pointer gets
> increased to point to the next register. I can imagine that, if the
> master reads faster than the slave is able to process, then maybe the
> next read could happen before the pointer register is actually
> increased. That being said, with such a race condition, I'd expect the
> pointer to eventually catch up so you'd get something like "1, 2, 2, 4,
> 5, 6". Is the duplicate byte always early in the sequence, or can it be
> anywhere?

As noted, the duplicate can be anywhere in the sequence.   I'll have to 
study the traces to see if there's a corresponding dropped byte.

> I suggest that you verify that the master properly handles clock
> stretching (to give the slave the opportunity to slow down the
> transfer.) Or simply lower the DDC bus clock speed and see if it helps.
>
I'm using write() and read() on /dev/i2c-n to communicate with the 
monitor.  Is there some way to control clock stretching or DDC bus clock 
speed from userspace?

I just built a custom kernel (first time in years), with the following 
lines in config-local:

CONFIG_I2C_DEBUG_CORE=y
CONFIG_I2C_DEBUG_BUS=y
CONFIG_I2C_DEBUG_ALGO=y

I was puzzling over why I wasn't seeing any output in /var/log/messages 
when your email came in.  I would have thought I'd  be overwhelmed by 
output.  But when communicating with the P2411H, I do see repeated lines 
of the form:

Aug 10 02:55:34 banner kernel: [ 5006.692794] i2c i2c-0: sendbytes: NAK 
bailout.

Sanford


* Re: ddc/ci over i2c
       [not found]                         ` <53E74487.8020809-9+fK6rGKj7RBDgjK7y7TUQ@public.gmane.org>
@ 2014-08-10 19:13                           ` Jean Delvare
  0 siblings, 0 replies; 8+ messages in thread
From: Jean Delvare @ 2014-08-10 19:13 UTC (permalink / raw)
  To: Sanford Rockowitz; +Cc: linux-i2c-u79uwXL29TY76Z2rM5mHXA

On Sun, 10 Aug 2014 03:08:07 -0700, Sanford Rockowitz wrote:
> On 08/10/2014 02:05 AM, Jean Delvare wrote:
> > I suggest that you verify that the master properly handles clock
> > stretching (to give the slave the opportunity to slow down the
> > transfer.) Or simply lower the DDC bus clock speed and see if it helps.
>
> I'm using write() and read() on /dev/i2c-n to communicate with the 
> monitor.  Is there some way to control clock stretching or DDC bus clock 
> speed from userspace?

No, it is handled by the I2C bus driver in the kernel (or i2c-algo-bit
for bit-banged buses which rely on it.) I2C bus controllers and drivers
should always handle clock stretching as it is a fundamental part of
the specification, that's not something you need to be able to switch
on and off.

> I just built a custom kernel (first time in years), with the following 
> lines in config-local:
> 
> CONFIG_I2C_DEBUG_CORE=y
> CONFIG_I2C_DEBUG_BUS=y
> CONFIG_I2C_DEBUG_ALGO=y
> 
> I was puzzling over why I wasn't seeing any output in /var/log/messages 
> when your email came in.  I would have thought I'd  be overwhelmed by 
> output.

Not all code hot paths are verbose in debug mode, most likely you're
only walking paths where no debugging takes place. Also note that
CONFIG_I2C_DEBUG_BUS only affects drivers under drivers/i2c/busses and
drivers/i2c/muxes, so I'm certain it has no effect for anything
DDC-related.

>  But when communicating with the P2411H, I do see repeated lines 
> of the form:
> 
> Aug 10 02:55:34 banner kernel: [ 5006.692794] i2c i2c-0: sendbytes: NAK 
> bailout.

That one comes from i2c-algo-bit, so your driver is using software
bit-banging, which can be unreliable if the system is under stress. The
message above is an error message, not a debug message. A "NAK" from a
slave in the middle of a write means that somehow the slave did not
like what it received. It could be that the data was out of the
expected range, or too long, inconsistent, bad CRC, whatever. But of
course it could come from some data corruption on the bus, causing the
display to receive something different from what you sent.

-- 
Jean Delvare
SUSE L3 Support


end of thread, other threads:[~2014-08-10 19:13 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-07-08 10:35 ddc/ci over i2c Sanford Rockowitz
     [not found] ` <53BBC97B.5020805-9+fK6rGKj7RBDgjK7y7TUQ@public.gmane.org>
2014-07-08 22:20   ` Jean Delvare
     [not found]     ` <20140709002017.4c2b332e-R0o5gVi9kd7kN2dkZ6Wm7A@public.gmane.org>
2014-07-09 16:31       ` Sanford Rockowitz
     [not found]         ` <53BD6E61.8070101-9+fK6rGKj7RBDgjK7y7TUQ@public.gmane.org>
2014-07-28 10:52           ` Jean Delvare
     [not found]             ` <20140728125248.5bbc0f89-R0o5gVi9kd7kN2dkZ6Wm7A@public.gmane.org>
2014-08-09 21:18               ` Sanford Rockowitz
     [not found]                 ` <53E6902B.9000104-9+fK6rGKj7RBDgjK7y7TUQ@public.gmane.org>
2014-08-10  9:05                   ` Jean Delvare
     [not found]                     ` <20140810110528.1b5a825f-R0o5gVi9kd7kN2dkZ6Wm7A@public.gmane.org>
2014-08-10 10:08                       ` Sanford Rockowitz
     [not found]                         ` <53E74487.8020809-9+fK6rGKj7RBDgjK7y7TUQ@public.gmane.org>
2014-08-10 19:13                           ` Jean Delvare

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).