* Re: working with IIO
       [not found] <0423FED8EB79934F939F077EAF96DBD717D8025F@HASMSX105.ger.corp.intel.com>
@ 2013-08-21 21:00 ` Jonathan Cameron
  2013-08-22 11:30   ` Drubin, Daniel
  0 siblings, 1 reply; 20+ messages in thread
From: Jonathan Cameron @ 2013-08-21 21:00 UTC (permalink / raw)
  To: Yuniverg, Michael, linux-iio@vger.kernel.org
  Cc: Drubin, Daniel, Haimovich, Yoav
"Yuniverg, Michael" <michael.yuniverg@intel.com> wrote:
>Hi Jonathan, guys,
>
>My name is Michael Yuniverg and I'm working for Intel.
>You surely noted that starting with kernel 3.7 some Intel-supported
>sensors were exposed to user space via the iio subsystem.
>
>My colleagues and I (in CC) are working to expose a new generation of
>Intel-supported sensors and naturally we'd like to keep using iio.
>However we've seen the limitation of just single user-mode client
>allowed to work with a particular iio device:
>/**
>* iio_chrdev_open() - chrdev file open for buffer access and ioctls
>**/
>static int iio_chrdev_open(struct inode *inode, struct file *filp)
>{
>        struct iio_dev *indio_dev = container_of(inode->i_cdev,
>                                               struct iio_dev, chrdev);
>
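>        /* Any second open() bails out here with -EBUSY. */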
>        if (test_and_set_bit(IIO_BUSY_BIT_POS, &indio_dev->flags))
>                return -EBUSY;
>
>        filp->private_data = indio_dev;
>
>        return 0;
>}
>
>This limitation is really painful for our design that is striving to
>achieve better performance moving some of sensors logic to Kernel mode.
>Now please Jonathan, could you explain the rationale behind this
>limitation?
Simplicity and hence speed plus where all this evolved from which was data logging. Technically there is nothing stopping us having a separate buffer per user as we already allow multiple buffers to be fed from the same device for different purposes. This is how the bridge to input functions (not yet in mainline)
What is your application which needs multiple simultaneous buffered users? 
As a quick thought I would not necessarily be against having the ability to request additional chrdevs each with their own buffers. A bit fiddly to retrofit though as not breaking abi would mean we need to keep the existing interfaces as they are.
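As a sketch only (struct iio_client and iio_buffer_alloc() are hypothetical here, not actual mainline code), the open() path for such per-client chrdevs might look like:

struct iio_client {
        struct iio_dev *indio_dev;
        struct iio_buffer *buf;         /* this client's own buffer */
};

static int iio_chrdev_open_multi(struct inode *inode, struct file *filp)
{
        struct iio_dev *indio_dev = container_of(inode->i_cdev,
                                                 struct iio_dev, chrdev);
        struct iio_client *client;

        client = kzalloc(sizeof(*client), GFP_KERNEL);
        if (!client)
                return -ENOMEM;

        client->buf = iio_buffer_alloc(indio_dev);      /* hypothetical */
        if (!client->buf) {
                kfree(client);
                return -ENOMEM;
        }
        client->indio_dev = indio_dev;

        /* Per-open state replaces the global busy bit. */
        filp->private_data = client;
        return 0;
}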
>And in general - could you share with us your plans of future
>modifications to iio
Lots though honestly most of the interesting bits are not my ideas or plans but rather those of others.
Personally I am still working through a load of core changes that are not going to change anything fundamental.
Of course feel free to propose any changes you would like!
>
>Thanks in advance,
>Michael
>
>
>---------------------------------------------------------------------
>Intel Israel (74) Limited
>
>This e-mail and any attachments may contain confidential material for
>the sole use of the intended recipient(s). Any review or distribution
>by others is strictly prohibited. If you are not the intended
>recipient, please contact the sender and delete all copies.
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
^ permalink raw reply	[flat|nested] 20+ messages in thread
* RE: working with IIO
  2013-08-21 21:00 ` working with IIO Jonathan Cameron
@ 2013-08-22 11:30   ` Drubin, Daniel
  2013-08-22 13:16     ` Lars-Peter Clausen
  0 siblings, 1 reply; 20+ messages in thread
From: Drubin, Daniel @ 2013-08-22 11:30 UTC (permalink / raw)
  To: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org
  Cc: Haimovich, Yoav
Hi Jonathan,

I am Daniel and I work with Michael.

[...]
> >This limitation is really painful for our design that is striving to
> >achieve better performance moving some of sensors logic to Kernel mode.
> >Now please Jonathan, could you explain the rationale behind this
> >limitation?
> 
> Simplicity and hence speed plus where all this evolved from which was data
> logging. Technically there is nothing stopping us having a separate buffer per
> user as we already allow multiple buffers to be fed from the same device for
> different purposes. This is how the bridge to input functions (not yet in
> mainline)
> 
> What is your application which needs multiple simultaneous buffered users?

We need multiple higher level clients to be able to configure the same sensor for different rates and receive data at different rates, independently of each other. I.e. device will sense at maximum of configured rates, and clients will poll data at their configured rates. This is mainly for Android, where there are multiple sensor frameworks independent of each other.

> As a quick thought I would not necessarily be against having the ability to
> request additional chrdevs each with their own buffers. A bit fiddly to retrofit
> though as not breaking abi would mean we need to keep the existing
> interfaces as they are.

We actually thought about registering multiple "virtual" sensors for the same physical device in order to stick it into existing IIO, but that has limited use for us (in particular, we will have to only pre-register statically those multiple virtual sensors).

> >And in general - could you share with us your plans of future
> >modifications to iio
[...]
> Of course feel free to propose any changes you would like!

As of now we would like to propose an option for IIO device to allow multiple open()s. Without that additional option devices will work as today, so that, for example, existing sensor drivers will not have to be modified for reentrancy; but those drivers that need it will signal with that option that they will handle reentrancy themselves.

Best regards,
Daniel
---------------------------------------------------------------------
Intel Israel (74) Limited

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
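To make the arbitration described above concrete, a self-contained toy model
(hypothetical names throughout; plain userspace C rather than driver code):

#include <stdio.h>

/* Hypothetical per-client record: each client asks for its own rate. */
struct client {
        int pid;
        int rate_hz;
};

/* The device runs at the maximum of all requested rates... */
static int device_rate(const struct client *c, int n)
{
        int max = 0;
        for (int i = 0; i < n; i++)
                if (c[i].rate_hz > max)
                        max = c[i].rate_hz;
        return max;
}

/* ...and each client keeps one sample in every (device_rate / own_rate). */
static int decimation(const struct client *c, int n, int i)
{
        return device_rate(c, n) / c[i].rate_hz;
}

int main(void)
{
        struct client clients[] = { { 100, 400 }, { 101, 50 }, { 102, 100 } };

        printf("device samples at %d Hz\n", device_rate(clients, 3));
        for (int i = 0; i < 3; i++)
                printf("pid %d: keep 1 of every %d samples\n",
                       clients[i].pid, decimation(clients, 3, i));
        return 0;
}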
^ permalink raw reply	[flat|nested] 20+ messages in thread
* Re: working with IIO
  2013-08-22 11:30   ` Drubin, Daniel
@ 2013-08-22 13:16     ` Lars-Peter Clausen
  2013-08-22 13:39       ` Drubin, Daniel
  0 siblings, 1 reply; 20+ messages in thread
From: Lars-Peter Clausen @ 2013-08-22 13:16 UTC (permalink / raw)
  To: Drubin, Daniel
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
On 08/22/2013 01:30 PM, Drubin, Daniel wrote:
> Hi Jonathan,
> 
> I am Daniel and I work with Michael.
> 
> [...]
>>> This limitation is really painful for our design that is striving to
>>> achieve better performance moving some of sensors logic to Kernel mode.
>>> Now please Jonathan, could you explain the rationale behind this
>>> limitation?
>>
>> Simplicity and hence speed plus where all this evolved from which was data
>> logging. Technically there is nothing stopping us having a separate buffer per
>> user as we already allow multiple buffers to be fed from the same device for
>> different purposes. This is how the bridge to input functions (not yet in
>> mainline)
>>
>> What is your application which needs multiple simultaneous buffered users?
> 
> We need multiple higher level clients to be able to configure the same sensor for different rates and receive data at different rates, independently of each other. I.e. device will sense at maximum of configured rates, and clients will poll data at their configured rates. This is mainly for Android, where there are multiple sensor frameworks independent of each other.
> 
>> As a quick thought I would not necessarily be against having the ability to
>> request additional chrdevs each with their own buffers. A bit fiddly to retrofit
>> though as not breaking abi would mean we need to keep the existing
>> interfaces as they are.
> 
> We actually thought about registering multiple "virtual" sensors for the same physical device in order to stick it into existing IIO, but that has limited use for us (in particular, we will have to only pre-register statically those multiple virtual sensors).
> 
>>> And in general - could you share with us your plans of future
>>> modifications to iio
> [...]
>> Of course feel free to propose any changes you would like!
> 
> As of now we would like to propose an option for IIO device to allow multiple open()s. Without that additional option devices will work as today, so that, for example, existing sensor drivers will not have to be modified for reentrancy; but those drivers that need it will signal with that option that they will handle reentrancy themselves.
How about implementing a userspace daemon which does the arbitration between
multiple users? Doing this in kernel space can get tricky, especially if you
want to allow concurrent users with different settings, e.g. sample rate.
This is in part due to the majority of the IIO ABI being stateless and this
doesn't really mix well with concurrent users. Having a daemon will allow
you to implement a stateful API on top of the stateless IIO userspace ABI.
If you go the kernel route though I'm pretty sure you'll run into lots of
problems without an immediate solution.
- Lars
^ permalink raw reply	[flat|nested] 20+ messages in thread
* RE: working with IIO
  2013-08-22 13:16     ` Lars-Peter Clausen
@ 2013-08-22 13:39       ` Drubin, Daniel
  2013-08-22 14:16         ` Lars-Peter Clausen
  0 siblings, 1 reply; 20+ messages in thread
From: Drubin, Daniel @ 2013-08-22 13:39 UTC (permalink / raw)
  To: Lars-Peter Clausen
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
[...]
> > As of now we would like to propose an option for IIO device to allow
> multiple open()s. Without that additional option devices will work as today,
> so that, for example, existing sensor drivers will not have to be modified for
> reentrancy; but those drivers that need it will signal with that option that
> they will handle reentrancy themselves.
> 
> How about implementing a userspace daemon which does the arbitration
> between multiple users? Doing this in kernel space can get tricky, especially if
> you want to allow concurrent users with different settings, e.g. sample rate.
> This is in part due to the majority of the IIO ABI being stateless and this
> doesn't really mix well with concurrent users. Having a daemon will allow you
> to implement a stateful API on top of the stateless IIO userspace ABI.
> If you go the kernel route though I'm pretty sure you'll run into lots of
> problems without an immediate solution.

That's the direction in which we are currently advancing. Not because we are afraid of kernel-mode problems - after all they are very similar to what kernel-mode filesystem driver faces when serving multiple processes accessing the same FS, just less complicated (e.g. no writers); but mainly because we want to use existing IIO as framework.

The main drawback that we see in user-mode daemon is performance. Consider sequence of events between the daemon and the caller process:

- Caller process invokes some sort of RPC via socket/pipe/message queue [system call, context switch]
- Daemon receives request message [system call]
- Daemon pushes sample data through IPC [system call, data copy, context switch]
- Caller pops data off IPC [system call, data copy]

I.e. there are 4 system calls, 2 context switches and 2 data copies added solely for the purpose of arbitration for EACH client, even for sensors not currently shared.

BTW (not directly related), I've read somewhere that on some system IIO did up to 200M samples per second. Is it true? If yes, how was such a data rate achieved?

Best regards,
Daniel

---------------------------------------------------------------------
Intel Israel (74) Limited

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
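The round trip above, reduced to its system calls (a toy over socketpair() in
a single process, so the context switches don't show, but the calls and the
data copies do):

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int sv[2];
        char req = 'R', sample[16];

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
                return 1;

        write(sv[0], &req, 1);               /* caller: RPC request [syscall] */
        read(sv[1], &req, 1);                /* daemon: receive     [syscall] */
        write(sv[1], "sample-data", 12);     /* daemon: push sample [syscall, copy] */
        read(sv[0], sample, sizeof(sample)); /* caller: pop data    [syscall, copy] */

        printf("caller got: %s\n", sample);
        return 0;
}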
^ permalink raw reply	[flat|nested] 20+ messages in thread
* Re: working with IIO
  2013-08-22 13:39       ` Drubin, Daniel
@ 2013-08-22 14:16         ` Lars-Peter Clausen
  2013-08-22 14:45           ` Drubin, Daniel
  0 siblings, 1 reply; 20+ messages in thread
From: Lars-Peter Clausen @ 2013-08-22 14:16 UTC (permalink / raw)
  To: Drubin, Daniel
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
On 08/22/2013 03:39 PM, Drubin, Daniel wrote:
> [...]
>>> As of now we would like to propose an option for IIO device to allow
>> multiple open()s. Without that additional option devices will work as today,
>> so that, for example, existing sensor drivers will not have to be modified for
>> reentrancy; but those drivers that need it will signal with that option that
>> they will handle reentrancy themselves.
>>
>> How about implementing a userspace daemon which does the arbitration
>> between multiple users? Doing this in kernel space can get tricky, especially if
>> you want to allow concurrent users with different settings, e.g. sample rate.
>> This is in part due to the majority of the IIO ABI being stateless and this
>> doesn't really mix well with concurrent users. Having a daemon will allow you
>> to implement a stateful API on top of the stateless IIO userspace ABI.
>> If you go the kernel route though I'm pretty sure you'll run into lots of
>> problems without an immediate solution.
> 
> That's the direction in which we are currently advancing. Not because we are afraid of kernel-mode problems - after all they are very similar to what kernel-mode filesystem driver faces when serving multiple processes accessing the same FS, just less complicated (e.g. no writers); but mainly because we want to use existing IIO as framework.
> 
The problem is that the IIO ABI is stateless, so either you need to add some
very crude hacks on top of it to allow this or you'd have to throw away the
current ABI and develop an IIOv2. The userspace daemon is in my opinion
preferable to both cases.
> The main drawback that we see in user-mode daemon is performance. Consider sequence of events between the daemon and the caller process:
> 
> - Caller process invokes some sort of RPC via socket/pipe/message queue [system call, context switch]
> - Daemon receives request message [system call]
> - Daemon pushes sample data through IPC [system call, data copy, context switch]
> - Caller pops data off IPC [system call, data copy]
> 
> I.e. there are 4 system calls, 2 context switches and 2 data copies added solely for the purpose of arbitration for EACH client, even for sensors not currently shared.
If done right the overhead should hopefully be negligible. E.g. right now we
do not have mmap support for IIO, but this is something that will be
implemented sooner or later (probably sooner than later). I think we should
take a look at how ALSA does these things. There we also have no in-kernel
multiplexing or mixing and things are handled by a userspace daemon.
> 
> BTW (not directly related), I've read somewhere that on some system IIO did up to 200M samples per second. Is it true? If yes, how was such a data rate achieved?
Yes, but not continuous streaming. This is implemented by sampling at 200
MHz, putting the data into a buffer, stopping sampling, and then letting
userspace read the buffer; after that it starts again.
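In terms of the existing ABI that burst pattern is roughly the following
(a sketch only; the device number, buffer length and exact draining order
are device-specific):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void sysfs_write(const char *attr, const char *val)
{
        char path[128];
        int fd;

        snprintf(path, sizeof(path),
                 "/sys/bus/iio/devices/iio:device0/%s", attr);
        fd = open(path, O_WRONLY);
        if (fd >= 0) {
                write(fd, val, strlen(val));
                close(fd);
        }
}

int main(void)
{
        char data[4096];
        int fd = open("/dev/iio:device0", O_RDONLY);

        sysfs_write("buffer/length", "4096");   /* one burst worth of samples */
        sysfs_write("buffer/enable", "1");      /* capture at full rate */
        if (fd >= 0) {
                read(fd, data, sizeof(data));   /* drain the captured burst */
                close(fd);
        }
        sysfs_write("buffer/enable", "0");      /* stop until the next burst */
        return 0;
}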
- Lars
^ permalink raw reply	[flat|nested] 20+ messages in thread
* RE: working with IIO
  2013-08-22 14:16         ` Lars-Peter Clausen
@ 2013-08-22 14:45           ` Drubin, Daniel
  2013-08-22 14:52             ` Lars-Peter Clausen
  0 siblings, 1 reply; 20+ messages in thread
From: Drubin, Daniel @ 2013-08-22 14:45 UTC (permalink / raw)
  To: Lars-Peter Clausen
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
> -----Original Message-----
> From: Lars-Peter Clausen [mailto:lars@metafoo.de]
> Sent: Thursday, August 22, 2013 5:17 PM
> To: Drubin, Daniel
> Cc: Jonathan Cameron; Yuniverg, Michael; linux-iio@vger.kernel.org;
> Haimovich, Yoav
> Subject: Re: working with IIO
> 
> On 08/22/2013 03:39 PM, Drubin, Daniel wrote:
> > [...]
> >>> As of now we would like to propose an option for IIO device to allow
> >> multiple open()s. Without that additional option devices will work
> >> as today, so that, for example, existing sensor drivers will not have
> >> to be modified for reentrancy; but those drivers that need it will
> >> signal with that option that they will handle reentrancy themselves.
> >>
> >> How about implementing a userspace daemon which does the arbitration
> >> between multiple users? Doing this in kernel space can get tricky,
> >> especially if you want to allow concurrent users with different settings,
> e.g. sample rate.
> >> This is in part due to the majority of the IIO ABI being stateless
> >> and this doesn't really mix well with concurrent users. Having a
> >> daemon will allow you to implement a stateful API on top of the stateless
> IIO userspace ABI.
> >> If you go the kernel route though I'm pretty sure you'll run into
> >> lots of problems without an immediate solution.
> >
> > That's the direction in which we are currently advancing. Not because we
> are afraid of kernel-mode problems - after all they are very similar to what
> kernel-mode filesystem driver faces when serving multiple processes
> accessing the same FS, just less complicated (e.g. no writers); but mainly
> because we want to use existing IIO as framework.
> >
> 
> The problem is that the IIO ABI is stateless, so either you need to add some
> very crude hacks on top of it to allow this or you'd have to throw away the
> current ABI and develop an IIOv2. The userspace daemon is in my opinion
> preferable to both cases.

From practical POV we don't have much choice (timeline), since we have to reuse driver that is bound to IIO. From principle standpoint I somehow fail to see a problem. It seems to me that all state handling that an IIO driver needs to do is to keep associations of PIDs to sensor rates, configure sensor to the highest rate in the list and replicate shared data at rates requested by the clients. When a file descriptor is closed (due to process termination or another reasons), the actual sensor is re-configured with next-highest rate among the open FDs.

> > The main drawback that we see in user-mode daemon is performance.
> Consider sequence of events between the daemon and the caller process:
> >
> > - Caller process invokes some sort of RPC via socket/pipe/message
> > queue [system call, context switch]
> > - Daemon receives request message [system call]
> > - Daemon pushes sample data through IPC [system call, data copy,
> > context switch]
> > - Caller pops data off IPC [system call, data copy]
> >
> > I.e. there are 4 system calls, 2 context switches and 2 data copies added
> solely for the purpose of arbitration for EACH client, even for sensors not
> currently shared.
> 
> If done right the overhead should hopefully be negligible. E.g. right now we
> do not have mmap support for IIO, but this is something that will be
> implemented sooner or later (probably sooner than later). I think we should
> take a look at how ALSA does these things. There we also have no in-kernel
> multiplexing or mixing and things are handled by a userspace daemon.

It's not that negligible. We are developing a sensors hub, not individual sensor. So we are required to withstand about 10 sensors simultaneously, most at approx. 400 samples/s. Linux is not extremely good at context switching latencies, so we are afraid that this overhead multiplied by number of concurrently used sensors may become a real obstacle.

I don't know ALSA too well, and audio sampling may be too low-rate to be relevant in this context. I can suggest looking at V4L(2) instead - its concise interface capable of handling large amounts of timely data is over-decade proven.

Best regards,
Daniel
---------------------------------------------------------------------
Intel Israel (74) Limited

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
^ permalink raw reply	[flat|nested] 20+ messages in thread
* Re: working with IIO
  2013-08-22 14:45           ` Drubin, Daniel
@ 2013-08-22 14:52             ` Lars-Peter Clausen
  2013-08-22 15:08               ` Jonathan Cameron
  2013-08-22 15:16               ` Drubin, Daniel
  0 siblings, 2 replies; 20+ messages in thread
From: Lars-Peter Clausen @ 2013-08-22 14:52 UTC (permalink / raw)
  To: Drubin, Daniel
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
On 08/22/2013 04:45 PM, Drubin, Daniel wrote:
> 
> 
>> -----Original Message-----
>> From: Lars-Peter Clausen [mailto:lars@metafoo.de]
>> Sent: Thursday, August 22, 2013 5:17 PM
>> To: Drubin, Daniel
>> Cc: Jonathan Cameron; Yuniverg, Michael; linux-iio@vger.kernel.org;
>> Haimovich, Yoav
>> Subject: Re: working with IIO
>>
>> On 08/22/2013 03:39 PM, Drubin, Daniel wrote:
>>> [...]
>>>>> As of now we would like to propose an option for IIO device to allow
>>>> multiple open()s. Without that additional option devices will work
>>>> as today, so that, for example, existing sensor drivers will not have
>>>> to be modified for reentrancy; but those drivers that need it will
>>>> signal with that option that they will handle reentrancy themselves.
>>>>
>>>> How about implementing a userspace daemon which does the arbitration
>>>> between multiple users? Doing this in kernel space can get tricky,
>>>> especially if you want to allow concurrent users with different settings,
>> e.g. sample rate.
>>>> This is in part due to the majority of the IIO ABI being stateless
>>>> and this doesn't really mix well with concurrent users. Having a
>>>> daemon will allow you to implement a stateful API on top of the stateless
>> IIO userspace ABI.
>>>> If you go the kernel route though I'm pretty sure you'll run into
>>>> lots of problems without an immediate solution.
>>>
>>> That's the direction in which we are currently advancing. Not because we
>> are afraid of kernel-mode problems - after all they are very similar to what
>> kernel-mode filesystem driver faces when serving multiple processes
>> accessing the same FS, just less complicated (e.g. no writers); but mainly
>> because we want to use existing IIO as framework.
>>>
>>
>> The problem is that the IIO ABI is stateless, so either you need to add some
>> very crude hacks on top of it to allow this or you'd have to throw away the
>> current ABI and develop an IIOv2. The userspace daemon is in my opinion
>> preferable to both cases.
> 
> From practical POV we don't have much choice (timeline), since we have to reuse driver that is bound to IIO. From principle standpoint I somehow fail to see a problem. It seems to me that all state handling that an IIO driver needs to do is to keep associations of PIDs to sensor rates, configure sensor to the highest rate in the list and replicate shared data at rates requested by the clients. When a file descriptor is closed (due to process termination or another reasons), the actual sensor is re-configured with next-highest rate among the open FDs.
But you can't track the configured rate per PID with the current API. That's
why I keep saying that the API is stateless. You can not track state per
application without inventing a new API.
- Lars
^ permalink raw reply	[flat|nested] 20+ messages in thread
* Re: working with IIO
  2013-08-22 14:52             ` Lars-Peter Clausen
@ 2013-08-22 15:08               ` Jonathan Cameron
  2013-08-22 15:33                 ` Drubin, Daniel
  2013-08-22 15:16               ` Drubin, Daniel
  1 sibling, 1 reply; 20+ messages in thread
From: Jonathan Cameron @ 2013-08-22 15:08 UTC (permalink / raw)
  To: Lars-Peter Clausen
  Cc: Drubin, Daniel, Jonathan Cameron, Yuniverg, Michael,
	linux-iio@vger.kernel.org, Haimovich, Yoav
On 22/08/13 15:52, Lars-Peter Clausen wrote:
> On 08/22/2013 04:45 PM, Drubin, Daniel wrote:
>>
>>
>>> -----Original Message-----
>>> From: Lars-Peter Clausen [mailto:lars@metafoo.de]
>>> Sent: Thursday, August 22, 2013 5:17 PM
>>> To: Drubin, Daniel
>>> Cc: Jonathan Cameron; Yuniverg, Michael; linux-iio@vger.kernel.org;
>>> Haimovich, Yoav
>>> Subject: Re: working with IIO
>>>
>>> On 08/22/2013 03:39 PM, Drubin, Daniel wrote:
>>>> [...]
>>>>>> As of now we would like to propose an option for IIO device to allow
>>>>> multiple open()s. Without that additional option devices will work
>>>>> as today, so that, for example, existing sensor drivers will not have
>>>>> to be modified for reentrancy; but those drivers that need it will
>>>>> signal with that option that they will handle reentrancy themselves.
>>>>>
>>>>> How about implementing a userspace daemon which does the arbitration
>>>>> between multiple users? Doing this in kernel space can get tricky,
>>>>> especially if you want to allow concurrent users with different settings,
>>> e.g. sample rate.
>>>>> This is in part due to the majority of the IIO ABI being stateless
>>>>> and this doesn't really mix well with concurrent users. Having a
>>>>> daemon will allow you to implement a stateful API on top of the stateless
>>> IIO userspace ABI.
>>>>> If you go the kernel route though I'm pretty sure you'll run into
>>>>> lots of problems without an immediate solution.
>>>>
>>>> That's the direction in which we are currently advancing. Not because we
>>> are afraid of kernel-mode problems - after all they are very similar to what
>>> kernel-mode filesystem driver faces when serving multiple processes
>>> accessing the same FS, just less complicated (e.g. no writers); but mainly
>>> because we want to use existing IIO as framework.
>>>>
>>>
>>> The problem is that the IIO ABI is stateless, so either you need to add some
>>> very crude hacks on top of it to allow this or you'd have to throw away the
>>> current ABI and develop an IIOv2. The userspace daemon is in my opinion
>>> preferable to both cases.
>>
>>  From practical POV we don't have much choice (timeline), since we have to reuse driver that is bound to IIO. From principle standpoint I somehow fail to see a problem. It seems to me that all state handling that an IIO driver needs to do is to keep associations of PIDs to sensor rates, configure sensor to the highest rate in the list and replicate shared data at rates requested by the clients. When a file descriptor is closed (due to process termination or another reasons), the actual sensor is re-configured with next-highest rate among the open FDs.
>
> But you can't track the configured rate per PID with the current API. That's
> why I keep saying that the API is stateless. You can not track state per
> application without inventing a new API.
The fastest way of doing this that I can see would be to allow 
initialization of multiple buffers per device (not actually that
hard to do given we have the multiple stream demux there for other 
reasons) and allow subsampling of the datastream in the demux.  Note 
that there are no assumptions that the triggering will be fixed
frequency, hence it could only be done as a filter on whatever comes up 
rather than explicit frequency requests.
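Per destination buffer that filter could be as simple as (a sketch with
hypothetical names):

/* Pass every Nth sample that comes up, whatever the (possibly
 * irregular) trigger rate happens to be. */
struct iio_subsample {
        unsigned int ratio;     /* keep one sample in 'ratio' */
        unsigned int count;
};

static bool iio_subsample_keep(struct iio_subsample *s)
{
        if (++s->count < s->ratio)
                return false;
        s->count = 0;
        return true;
}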
Does v4l handle multiple video streams to userspace at different
frame rates?  That is what we are talking about here effectively.
The few times this has come up before we have always concluded that
it is better done in userspace, but there is some infrastructure in
place now that would make it 'not truly horrible' to do some of the
work in kernel.
Jonathan
^ permalink raw reply	[flat|nested] 20+ messages in thread
* RE: working with IIO
  2013-08-22 14:52             ` Lars-Peter Clausen
  2013-08-22 15:08               ` Jonathan Cameron
@ 2013-08-22 15:16               ` Drubin, Daniel
  2013-08-22 15:41                 ` Lars-Peter Clausen
  1 sibling, 1 reply; 20+ messages in thread
From: Drubin, Daniel @ 2013-08-22 15:16 UTC (permalink / raw)
  To: Lars-Peter Clausen
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
[...]
> > From practical POV we don't have much choice (timeline), since we have to
> reuse driver that is bound to IIO. From principle standpoint I somehow fail to
> see a problem. It seems to me that all state handling that an IIO driver needs
> to do is to keep associations of PIDs to sensor rates, configure sensor to the
> highest rate in the list and replicate shared data at rates requested by the
> clients. When a file descriptor is closed (due to process termination or
> another reasons), the actual sensor is re-configured with next-highest rate
> among the open FDs.
> 
> But you can't track the configured rate per PID with the current API. That's
> why I keep saying that the API is stateless. You can not track state per
> application without inventing a new API.

Why can't I keep a list of PIDs that currently use a sensor and record current->pid together with "default" rate during the first sampling request that doesn't have a matching PID, and in the write_raw() handler that updates rate, match that current->pid against the list of recorded PIDs? I didn't see a possibility that sensor driver's handler may get called in a different context than IIO core fops handler.

Best regards,
Daniel
---------------------------------------------------------------------
Intel Israel (74) Limited

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
^ permalink raw reply	[flat|nested] 20+ messages in thread
* RE: working with IIO
  2013-08-22 15:08               ` Jonathan Cameron
@ 2013-08-22 15:33                 ` Drubin, Daniel
  2013-08-22 16:15                   ` Jonathan Cameron
  0 siblings, 1 reply; 20+ messages in thread
From: Drubin, Daniel @ 2013-08-22 15:33 UTC (permalink / raw)
  To: Jonathan Cameron, Lars-Peter Clausen
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
[...]
> Does v4l handle multiple video streams to userspace at different frame
> rates?  That is what we are talking about here effectively.

I actually mentioned v4l in context of mmap(). Not for replicating the same mmap()ed data for multiple clients of course, but IIRC yes, v4l allows multiple channels to be opened. E.g. for peeking full video, preview and teletext or for switching between video sources on the same grabber. Making the whole major node singleton is a bit harsh restriction IMHO.

Best regards,
Daniel
---------------------------------------------------------------------
Intel Israel (74) Limited

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
^ permalink raw reply	[flat|nested] 20+ messages in thread
* Re: working with IIO
  2013-08-22 15:16               ` Drubin, Daniel
@ 2013-08-22 15:41                 ` Lars-Peter Clausen
  2013-08-22 15:48                   ` Drubin, Daniel
  0 siblings, 1 reply; 20+ messages in thread
From: Lars-Peter Clausen @ 2013-08-22 15:41 UTC (permalink / raw)
  To: Drubin, Daniel
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
On 08/22/2013 05:16 PM, Drubin, Daniel wrote:
> [...]
>>>  From practical POV we don't have much choice (timeline), since we have to
>> reuse driver that is bound to IIO. From principle standpoint I somehow fail to
>> see a problem. It seems to me that all state handling that an IIO driver needs
>> to do is to keep associations of PIDs to sensor rates, configure sensor to the
>> highest rate in the list and replicate shared data at rates requested by the
>> clients. When a file descriptor is closed (due to process termination or
>> another reasons), the actual sensor is re-configured with next-highest rate
>> among the open FDs.
>>
>> But you can't track the configured rate per PID with the current API. That's
>> why I keep saying that the API is stateless. You can not track state per
>> application without inventing a new API.
>
> Why can't I keep a list of PIDs that currently use a sensor and record current->pid together with "default" rate during the first sampling request that doesn't have a matching PID, and in the write_raw() handler that updates rate, match that current->pid against the list of recorded PIDs? I didn't see a possibility that sensor driver's handler may get called in a different context than IIO core fops handler.
So each time a process writes to an IIO sysfs file you want to record which 
value that application wrote. So when I run `for i in `seq 0 100000`; do echo 
$i > sampling_frequency; done` I'd end up with a list with one million entries 
which will stay in the list forever.
- Lars
^ permalink raw reply	[flat|nested] 20+ messages in thread
* RE: working with IIO
  2013-08-22 15:41                 ` Lars-Peter Clausen
@ 2013-08-22 15:48                   ` Drubin, Daniel
  2013-08-22 16:00                     ` Lars-Peter Clausen
  0 siblings, 1 reply; 20+ messages in thread
From: Drubin, Daniel @ 2013-08-22 15:48 UTC (permalink / raw)
  To: Lars-Peter Clausen
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
> -----Original Message-----
> From: Lars-Peter Clausen [mailto:lars@metafoo.de]
> Sent: Thursday, August 22, 2013 6:42 PM
> To: Drubin, Daniel
> Cc: Jonathan Cameron; Yuniverg, Michael; linux-iio@vger.kernel.org;
> Haimovich, Yoav
> Subject: Re: working with IIO
> 
> On 08/22/2013 05:16 PM, Drubin, Daniel wrote:
> > [...]
> >>> From practical POV we don't have much choice (timeline), since we
> >>> have to
> >> reuse driver that is bound to IIO. From principle standpoint I
> >> somehow fail to see a problem. It seems to me that all state handling
> >> that an IIO driver needs to do is to keep associations of PIDs to
> >> sensor rates, configure sensor to the highest rate in the list and
> >> replicate shared data at rates requested by the clients. When a file
> >> descriptor is closed (due to process termination or another reasons),
> >> the actual sensor is re-configured with next-highest rate among the open
> FDs.
> >>
> >> But you can't track the configured rate per PID with the current API.
> >> That's why I keep saying that the API is stateless. You can not track
> >> state per application without inventing a new API.
> >
> > Why can't I keep a list of PIDs that currently use a sensor and record
> current->pid together with "default" rate during the first sampling request
> that doesn't have a matching PID, and in the write_raw() handler that updates
> rate, match that current->pid against the list of recorded PIDs? I didn't see a
> possibility that sensor driver's handler may get called in a different context
> than IIO core fops handler.
> 
> So each time a process writes to an IIO sysfs file you want to record which
> value that application wrote. So when I run `for i in `seq 0 100000`; do echo
> $i > sampling_frequency; done` I'd end up with a list with one million entries
> which will stay in the list forever.

No, there is only one entry per PID. Next value that the same process writes will replace the previous one, not create a new entry. An entry will be created only if the write request arrived from a PID currently not in the list.

Best regards,
Daniel
---------------------------------------------------------------------
Intel Israel (74) Limited

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
^ permalink raw reply	[flat|nested] 20+ messages in thread
* Re: working with IIO
  2013-08-22 15:48                   ` Drubin, Daniel
@ 2013-08-22 16:00                     ` Lars-Peter Clausen
  2013-08-22 16:26                       ` Drubin, Daniel
  0 siblings, 1 reply; 20+ messages in thread
From: Lars-Peter Clausen @ 2013-08-22 16:00 UTC (permalink / raw)
  To: Drubin, Daniel
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
On 08/22/2013 05:48 PM, Drubin, Daniel wrote:
>
>
>> -----Original Message-----
>> From: Lars-Peter Clausen [mailto:lars@metafoo.de]
>> Sent: Thursday, August 22, 2013 6:42 PM
>> To: Drubin, Daniel
>> Cc: Jonathan Cameron; Yuniverg, Michael; linux-iio@vger.kernel.org;
>> Haimovich, Yoav
>> Subject: Re: working with IIO
>>
>> On 08/22/2013 05:16 PM, Drubin, Daniel wrote:
>>> [...]
>>>>>   From practical POV we don't have much choice (timeline), since we
>>>>> have to
>>>> reuse driver that is bound to IIO. From principle standpoint I
>>>> somehow fail to see a problem. It seems to me that all state handling
>>>> that an IIO driver needs to do is to keep associations of PIDs to
>>>> sensor rates, configure sensor to the highest rate in the list and
>>>> replicate shared data at rates requested by the clients. When a file
>>>> descriptor is closed (due to process termination or another reasons),
>>>> the actual sensor is re-configured with next-highest rate among the open
>> FDs.
>>>>
>>>> But you can't track the configured rate per PID with the current API.
>>>> That's why I keep saying that the API is stateless. You can not track
>>>> state per application without inventing a new API.
>>>
>>> Why can't I during keep a list of PIDs that currently use a sensor and record
>> current->pid together with "default" rate during the first sampling request
>> that doesn't have a matching PID, and in write_raw() handler that updates
>> rate match that current->pid against list of recorded PIDs? I didn't see a
>> possibility that sensor driver's handler may get called in a different context
>> than IIO core fops handler.
>>
>> So each time a process writes to a IIO sysfs file you want to record which
>> value that application wrote. So when I run `for i in `seq 0 100000`; do echo $i
>>> sampling_frequency; done` I'd end up with a list with one million entries
>> which will stay in the list forever.
>
> No, there is only one entry per PID. Next value that the same process writes will replace the previous one, not create a new entry. An entry will be created only if the write request arrived from a PID currently not in the list.
>
Assume that echo is a /bin/echo, not a shell built-in command.
- Lars
^ permalink raw reply	[flat|nested] 20+ messages in thread
* RE: working with IIO
  2013-08-22 15:33                 ` Drubin, Daniel
@ 2013-08-22 16:15                   ` Jonathan Cameron
  2013-08-22 16:35                     ` Drubin, Daniel
  0 siblings, 1 reply; 20+ messages in thread
From: Jonathan Cameron @ 2013-08-22 16:15 UTC (permalink / raw)
  To: Drubin, Daniel, Lars-Peter Clausen
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
"Drubin, Daniel" <daniel.drubin@intel.com> wrote:
>[...]
>> Does v4l handle multiple video streams to userspace at different frame
>> rates?  That is what we are talking about here effectively.
>
>I actually mentioned v4l in context of mmap().
Fair enough. Indeed mmap support is a great feature as is splice support for other uses.
Honestly I was originally waiting for the tracing guys to produce the magic lockless ring buffer that does all of this.  Been a long time since I picked up on anything about that though!
>Not for replicating the
>same mmap()ed data for multiple clients of course, but IIRC yes, v4l
>allows multiple channels to be opened. E.g. for peeking full video,
>preview and teletext or for switching between video sources on the same
>grabber. Making the whole major node singleton is a bit harsh
>restriction IMHO.
Sure to that but here equivalent is opening main stream and pulling out different frame rates. Equivalent of your example is a multiple sample rate hardware device. Those are handled using multiple instances of iio_dev.
>
>Best regards,
>Daniel
>---------------------------------------------------------------------
>Intel Israel (74) Limited
>
>This e-mail and any attachments may contain confidential material for
>the sole use of the intended recipient(s). Any review or distribution
>by others is strictly prohibited. If you are not the intended
>recipient, please contact the sender and delete all copies.
^ permalink raw reply	[flat|nested] 20+ messages in thread
* RE: working with IIO
  2013-08-22 16:00                     ` Lars-Peter Clausen
@ 2013-08-22 16:26                       ` Drubin, Daniel
  2013-08-22 16:56                         ` Lars-Peter Clausen
  2013-08-28 12:56                         ` Alexander Holler
  0 siblings, 2 replies; 20+ messages in thread
From: Drubin, Daniel @ 2013-08-22 16:26 UTC (permalink / raw)
  To: Lars-Peter Clausen
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
> -----Original Message-----
> From: Lars-Peter Clausen [mailto:lars@metafoo.de]
> Sent: Thursday, August 22, 2013 7:00 PM
> To: Drubin, Daniel
> Cc: Jonathan Cameron; Yuniverg, Michael; linux-iio@vger.kernel.org;
> Haimovich, Yoav
> Subject: Re: working with IIO
> 
> On 08/22/2013 05:48 PM, Drubin, Daniel wrote:
> >
> >> -----Original Message-----
> >> From: Lars-Peter Clausen [mailto:lars@metafoo.de]
> >> Sent: Thursday, August 22, 2013 6:42 PM
> >> To: Drubin, Daniel
> >> Cc: Jonathan Cameron; Yuniverg, Michael; linux-iio@vger.kernel.org;
> >> Haimovich, Yoav
> >> Subject: Re: working with IIO
> >>
> >> On 08/22/2013 05:16 PM, Drubin, Daniel wrote:
> >>> [...]
> >>>>> From practical POV we don't have much choice (timeline), since
> >>>>> we have to
> >>>> reuse driver that is bound to IIO. From principle standpoint I
> >>>> somehow fail to see a problem. It seems to me that all state
> >>>> handling that an IIO driver needs to do is to keep associations of
> >>>> PIDs to sensor rates, configure sensor to the highest rate in the
> >>>> list and replicate shared data at rates requested by the clients.
> >>>> When a file descriptor is closed (due to process termination or
> >>>> another reasons), the actual sensor is re-configured with
> >>>> next-highest rate among the open
> >> FDs.
> >>>>
> >>>> But you can't track the configured rate per PID with the current API.
> >>>> That's why I keep saying that the API is stateless. You can not
> >>>> track state per application without inventing a new API.
> >>>
> >>> Why can't I keep a list of PIDs that currently use a sensor
> >>> and record
> >> current->pid together with "default" rate during the first sampling
> >> request that doesn't have a matching PID, and in the write_raw() handler
> >> that updates rate, match that current->pid against the list of recorded
> >> PIDs? I didn't see a possibility that sensor driver's handler may get called
> >> in a different context than IIO core fops handler.
> >>
> >> So each time a process writes to an IIO sysfs file you want to record
> >> which value that application wrote. So when I run `for i in `seq 0
> >> 100000`; do echo $i > sampling_frequency; done` I'd end up with a list
> >> with one million entries which will stay in the list forever.
> >
> > No, there is only one entry per PID. Next value that the same process
> writes will replace the previous one, not create a new entry. An entry will be
> created only if the write request arrived from a PID currently not in the list.
> >
> 
> Assume that echo is a /bin/echo, not a shell built-in command.

Then indeed a new entry will be created 100000 times. But before creating a new instance of /bin/echo, the previous one will terminate, closing all file descriptors. A device driver would miss this event, and thus the ability to remove a PID from the list, only if the framework for some reason chose to ban it from knowing.

Best regards,
Daniel

---------------------------------------------------------------------
Intel Israel (74) Limited

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
^ permalink raw reply	[flat|nested] 20+ messages in thread
* RE: working with IIO
  2013-08-22 16:15                   ` Jonathan Cameron
@ 2013-08-22 16:35                     ` Drubin, Daniel
  2013-08-23 16:23                       ` Jonathan Cameron
  0 siblings, 1 reply; 20+ messages in thread
From: Drubin, Daniel @ 2013-08-22 16:35 UTC (permalink / raw)
  To: Jonathan Cameron, Lars-Peter Clausen
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
[...]
>  Not for replicating the
> >same mmap()ed data for multiple clients of course, but IIRC yes, v4l
> >allows multiple channels to be opened. E.g. for peeking full video,
> >preview and teletext or for switching between video sources on the same
> >grabber. Making the whole major node singleton is a bit harsh
> >restriction IMHO.
> Sure to that but here equivalent is opening main stream and pulling out
> different frame rates. Equivalent of your example is a multiple sample rate
> hardware device. Those are handled using multiple instances of iio_dev.

We actually considered creating a separate iio_dev per "virtual" sensor (pair of {sensor, rate}). The problem is, they can pop up dynamically and even if we opened a backdoor interface of silently creating new chardevs, we can't eat up all major numbers in the system. And we can get dangerously close to that :-(

Best regards,
Daniel

---------------------------------------------------------------------
Intel Israel (74) Limited

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
^ permalink raw reply	[flat|nested] 20+ messages in thread
* Re: working with IIO
  2013-08-22 16:26                       ` Drubin, Daniel
@ 2013-08-22 16:56                         ` Lars-Peter Clausen
  2013-08-28 12:56                         ` Alexander Holler
  1 sibling, 0 replies; 20+ messages in thread
From: Lars-Peter Clausen @ 2013-08-22 16:56 UTC (permalink / raw)
  To: Drubin, Daniel
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
On 08/22/2013 06:26 PM, Drubin, Daniel wrote:
> 
> 
>> -----Original Message-----
>> From: Lars-Peter Clausen [mailto:lars@metafoo.de]
>> Sent: Thursday, August 22, 2013 7:00 PM
>> To: Drubin, Daniel
>> Cc: Jonathan Cameron; Yuniverg, Michael; linux-iio@vger.kernel.org;
>> Haimovich, Yoav
>> Subject: Re: working with IIO
>>
>> On 08/22/2013 05:48 PM, Drubin, Daniel wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Lars-Peter Clausen [mailto:lars@metafoo.de]
>>>> Sent: Thursday, August 22, 2013 6:42 PM
>>>> To: Drubin, Daniel
>>>> Cc: Jonathan Cameron; Yuniverg, Michael; linux-iio@vger.kernel.org;
>>>> Haimovich, Yoav
>>>> Subject: Re: working with IIO
>>>>
>>>> On 08/22/2013 05:16 PM, Drubin, Daniel wrote:
>>>>> [...]
>>>>>>>   From practical POV we don't have much choice (timeline), since
>>>>>>> we have to
>>>>>> reuse driver that is bound to IIO. From principle standpoint I
>>>>>> somehow fail to see a problem. It seems to me that all state
>>>>>> handling that an IIO driver needs to do is to keep associations of
>>>>>> PIDs to sensor rates, configure sensor to the highest rate in the
>>>>>> list and replicate shared data at rates requested by the clients.
>>>>>> When a file descriptor is closed (due to process termination or
>>>>>> another reasons), the actual sensor is re-configured with
>>>>>> next-highest rate among the open
>>>> FDs.
>>>>>>
>>>>>> But you can't track the configured rate per PID with the current API.
>>>>>> That's why I keep saying that the API is stateless. You can not
>>>>>> track state per application without inventing a new API.
>>>>>
>>>>> Why can't I keep a list of PIDs that currently use a sensor
>>>>> and record
>>>> current->pid together with "default" rate during the first sampling
>>>> request that doesn't have a matching PID, and in the write_raw()
>>>> handler that updates rate, match that current->pid against the list of
>>>> recorded PIDs? I didn't see a possibility that sensor driver's handler
>>>> may get called in a different context than IIO core fops handler.
>>>>
>>>> So each time a process writes to an IIO sysfs file you want to record
>>>> which value that application wrote. So when I run `for i in `seq 0
>>>> 100000`; do echo $i > sampling_frequency; done` I'd end up with a list
>>>> with one million entries which will stay in the list forever.
>>>
>>> No, there is only one entry per PID. Next value that the same process
>> writes will replace the previous one, not create a new entry. An entry will be
>> created only if the write request arrived from a PID currently not in the list.
>>>
>>
>> Assume that echo is a /bin/echo, not a shell built-in command.
> 
> Then indeed a new entry will be created 100000 times. But before creating a new instance of /bin/echo, the previous one will terminate, closing all file descriptors. A device driver would miss this event, and thus the ability to remove a PID from the list, only if the framework for some reason chose to ban it from knowing.
Which file descriptor? That of the sampling_frequency sysfs file? Please
try to think this through. This approach is in my opinion a bad idea; if
it is ever to work at all, its implementation will be really ugly and its
semantics will probably be bogus. If you are on a tight deadline, don't
run down a dead end first, but rather take the proper road (a userspace
daemon).
- Lars
^ permalink raw reply	[flat|nested] 20+ messages in thread
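For concreteness, a minimal sketch (kernel-style C, all names invented --
this is not mainline IIO code) of the per-PID rate list proposed in the
message above. It needs <linux/list.h>, <linux/mutex.h> and <linux/slab.h>,
and it deliberately shows the hole Lars points at: a sysfs write carries no
file descriptor whose ->release() could drop the entry, so stale PIDs
accumulate.

struct pid_rate {
	struct list_head node;
	pid_t pid;
	int rate_hz;
};

static LIST_HEAD(pid_rates);
static DEFINE_MUTEX(pid_rates_lock);

/* Called from a write_raw()-style handler: record or update current->pid. */
static void pid_rate_update(pid_t pid, int rate_hz)
{
	struct pid_rate *pr;

	mutex_lock(&pid_rates_lock);
	list_for_each_entry(pr, &pid_rates, node) {
		if (pr->pid == pid) {
			pr->rate_hz = rate_hz;
			goto out;
		}
	}
	pr = kmalloc(sizeof(*pr), GFP_KERNEL);
	if (pr) {
		pr->pid = pid;
		pr->rate_hz = rate_hz;
		list_add(&pr->node, &pid_rates);
	}
out:
	mutex_unlock(&pid_rates_lock);
}

/* The hardware is programmed with the highest rate any client asked for. */
static int pid_rate_highest(void)
{
	struct pid_rate *pr;
	int best = 0;

	mutex_lock(&pid_rates_lock);
	list_for_each_entry(pr, &pid_rates, node)
		if (pr->rate_hz > best)
			best = pr->rate_hz;
	mutex_unlock(&pid_rates_lock);
	return best;
}

/* Missing piece: nothing ever calls a pid_rate_remove(). With /bin/echo
 * there is one short-lived PID per write, and the driver never learns
 * that any of them exited -- which is exactly the objection raised above. */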
* RE: working with IIO
  2013-08-22 16:35                     ` Drubin, Daniel
@ 2013-08-23 16:23                       ` Jonathan Cameron
  2013-08-23 18:37                         ` Jonathan Cameron
  0 siblings, 1 reply; 20+ messages in thread
From: Jonathan Cameron @ 2013-08-23 16:23 UTC (permalink / raw)
  To: Drubin, Daniel, Jonathan Cameron, Lars-Peter Clausen
  Cc: Jonathan Cameron, Yuniverg, Michael, linux-iio@vger.kernel.org,
	Haimovich, Yoav
"Drubin, Daniel" <daniel.drubin@intel.com> wrote:
>[...]
>>> Not for replicating the same mmap()ed data for multiple clients of
>>> course, but IIRC yes, v4l allows multiple channels to be opened. E.g.
>>> for peeking full video, preview and teletext, or for switching
>>> between video sources on the same grabber. Making the whole major
>>> node a singleton is a bit of a harsh restriction IMHO.
>> Sure to that, but here the equivalent is opening the main stream and
>> pulling out different frame rates. The equivalent of your example is
>> a multiple sample rate hardware device. Those are handled using
>> multiple instances of iio_dev.
>
>We actually considered creating a separate iio_dev per "virtual" sensor
>(pair of {sensor, rate}). The problem is, they can pop up dynamically,
>and even if we opened a backdoor interface for silently creating new
>chardevs, we can't eat up all the major numbers in the system. And we
>can get dangerously close to that :-(
Not a problem, as we can create anonymous ones like we already do for events.
Now, how you control which channels go where is harder, as normally we do this via sysfs, and I would be very dubious about changing that. It could obviously be done, but I can't think of a clean way of doing it.
Just how many readers are we talking about?
>
>Best regards,
>Daniel
>
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
^ permalink raw reply	[flat|nested] 20+ messages in thread
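Jonathan's "anonymous ones like we already do for events" refers to
anon_inode_getfd(), which the IIO event interface uses to hand userspace an
extra fd via an ioctl without consuming chrdev device numbers. Below is a
hedged sketch of issuing a per-reader buffer fd the same way; the ioctl,
the struct and the fops names are invented for illustration, not an
existing IIO interface.

#include <linux/anon_inodes.h>
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/iio/iio.h>

/* Hypothetical per-client state; real code would add a ring and a waitqueue. */
struct client_buf {
	struct iio_dev *indio_dev;
};

static ssize_t client_buf_read(struct file *filp, char __user *buf,
			       size_t len, loff_t *off)
{
	return -EAGAIN;	/* stub: real code would copy buffered samples out */
}

static int client_buf_release(struct inode *inode, struct file *filp)
{
	/* Unlike a sysfs write, an fd reliably reports when its user goes
	 * away, so per-client rate teardown has a natural hook here. */
	kfree(filp->private_data);
	return 0;
}

static const struct file_operations client_buf_fops = {
	.owner = THIS_MODULE,
	.read = client_buf_read,
	.release = client_buf_release,
	.llseek = noop_llseek,
};

/* Called from a hypothetical IIO_GET_BUFFER_FD ioctl on the main chrdev. */
static int client_buf_getfd(struct iio_dev *indio_dev)
{
	struct client_buf *cb = kzalloc(sizeof(*cb), GFP_KERNEL);

	if (!cb)
		return -ENOMEM;
	cb->indio_dev = indio_dev;
	/* anon_inode_getfd() stores cb in filp->private_data and consumes
	 * no new device numbers -- the fd is anonymous. */
	return anon_inode_getfd("iio:buffer", &client_buf_fops, cb, O_RDONLY);
}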
* RE: working with IIO
  2013-08-23 16:23                       ` Jonathan Cameron
@ 2013-08-23 18:37                         ` Jonathan Cameron
  0 siblings, 0 replies; 20+ messages in thread
From: Jonathan Cameron @ 2013-08-23 18:37 UTC (permalink / raw)
  To: Jonathan Cameron, Drubin, Daniel, Jonathan Cameron,
	Lars-Peter Clausen
  Cc: Yuniverg, Michael, linux-iio@vger.kernel.org, Haimovich, Yoav
Jonathan Cameron <jic23@kernel.org> wrote:
>
>
>"Drubin, Daniel" <daniel.drubin@intel.com> wrote:
>>[...]
>>>> Not for replicating the same mmap()ed data for multiple clients of
>>>> course, but IIRC yes, v4l allows multiple channels to be opened. E.g.
>>>> for peeking full video, preview and teletext, or for switching
>>>> between video sources on the same grabber. Making the whole major
>>>> node a singleton is a bit of a harsh restriction IMHO.
>>> Sure to that, but here the equivalent is opening the main stream and
>>> pulling out different frame rates. The equivalent of your example is
>>> a multiple sample rate hardware device. Those are handled using
>>> multiple instances of iio_dev.
>>
>>We actually considered creating a separate iio_dev per "virtual" sensor
>>(pair of {sensor, rate}). The problem is, they can pop up dynamically,
>>and even if we opened a backdoor interface for silently creating new
>>chardevs, we can't eat up all the major numbers in the system. And we
>>can get dangerously close to that :-(
>
>Not a problem, as we can create anonymous ones like we already do for
>events.
>
>Now, how you control which channels go where is harder, as normally we
>do this via sysfs, and I would be very dubious about changing that. It
>could obviously be done, but I can't think of a clean way of doing it.
>
>Just how many readers are we talking about?
Note I agree with Lars that this is probably better done in user space. I just find the idea of doing it in kernel space interesting!
>>
>>Best regards,
>>Daniel
>>
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
^ permalink raw reply	[flat|nested] 20+ messages in thread
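Since both Lars and Jonathan end up pointing at a userspace daemon, here is
a rough sketch of that road: one process owns the device node, runs it at
the highest requested rate, and decimates per client. The device path, the
scan size and the two hard-coded clients are assumptions for illustration;
a real daemon would accept clients on a socket and reprogram
sampling_frequency as they come and go.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define SCAN_SZ 16	/* assumed bytes per scan; depends on enabled channels */

struct client {
	int fd;
	unsigned int decim;	/* deliver every decim-th scan */
	unsigned int phase;
};

int main(void)
{
	struct client clients[2] = {
		{ .fd = STDOUT_FILENO, .decim = 1 },	/* wants the full rate */
		{ .fd = STDERR_FILENO, .decim = 10 },	/* wants a tenth of it */
	};
	char scan[SCAN_SZ];
	int i, dev = open("/dev/iio:device0", O_RDONLY);

	if (dev < 0) {
		perror("open");
		return 1;
	}
	while (read(dev, scan, sizeof(scan)) == sizeof(scan)) {
		for (i = 0; i < 2; i++) {
			struct client *c = &clients[i];

			if (c->phase++ % c->decim == 0)
				(void)write(c->fd, scan, sizeof(scan));
		}
	}
	return 0;
}

The property that matters for this thread: when a client disconnects, the
daemon gets a reliable close/EOF event on its socket and can drop that
client's rate -- exactly the teardown notification the in-kernel per-PID
scheme cannot get from sysfs writes.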
* Re: working with IIO
  2013-08-22 16:26                       ` Drubin, Daniel
  2013-08-22 16:56                         ` Lars-Peter Clausen
@ 2013-08-28 12:56                         ` Alexander Holler
  1 sibling, 0 replies; 20+ messages in thread
From: Alexander Holler @ 2013-08-28 12:56 UTC (permalink / raw)
  To: Drubin, Daniel
  Cc: Lars-Peter Clausen, Jonathan Cameron, Yuniverg, Michael,
	linux-iio@vger.kernel.org, Haimovich, Yoav
On 22.08.2013 18:26, Drubin, Daniel wrote:
>>>> So each time a process writes to an IIO sysfs file you want to
>>>> record which value that application wrote. So when I run `for i in
>>>> `seq 0 100000`; do echo $i > sampling_frequency; done` I'd end up
>>>> with a list with a hundred thousand entries which will stay in the
>>>> list forever.
>>>
>>> No, there is only one entry per PID. The next value that the same
>>> process writes will replace the previous one, not create a new entry.
>>> An entry will be created only if the write request arrived from a PID
>>> currently not in the list.
>>>
>>
>> Assume that echo is /bin/echo, not a shell built-in command.
> 
> Then indeed a new entry will be created 100000 times. But before a new
> instance of /bin/echo is created, the previous one will terminate,
> closing all of its file descriptors. A device driver would miss this
> event, and thus the ability to remove the PID from the list, only if
> the framework for some reason chose to keep it from knowing.
Try
sysctl kernel.pid_max
So the list would likely be smaller, and reused PIDs will end up with
funny behaviour. ;)
Regards,
Alexander Holler
^ permalink raw reply	[flat|nested] 20+ messages in thread
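For scale on Alexander's point (the value is system-dependent; 32768 has
long been the default, output shown for illustration):

$ sysctl kernel.pid_max
kernel.pid_max = 32768

Once PIDs wrap around, a newly started process can be handed the PID of a
long-dead client and would silently inherit that client's recorded rate --
the "funny behaviour" above.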
end of thread, other threads:[~2013-08-28 12:56 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <0423FED8EB79934F939F077EAF96DBD717D8025F@HASMSX105.ger.corp.intel.com>
2013-08-21 21:00 ` working with IIO Jonathan Cameron
2013-08-22 11:30   ` Drubin, Daniel
2013-08-22 13:16     ` Lars-Peter Clausen
2013-08-22 13:39       ` Drubin, Daniel
2013-08-22 14:16         ` Lars-Peter Clausen
2013-08-22 14:45           ` Drubin, Daniel
2013-08-22 14:52             ` Lars-Peter Clausen
2013-08-22 15:08               ` Jonathan Cameron
2013-08-22 15:33                 ` Drubin, Daniel
2013-08-22 16:15                   ` Jonathan Cameron
2013-08-22 16:35                     ` Drubin, Daniel
2013-08-23 16:23                       ` Jonathan Cameron
2013-08-23 18:37                         ` Jonathan Cameron
2013-08-22 15:16               ` Drubin, Daniel
2013-08-22 15:41                 ` Lars-Peter Clausen
2013-08-22 15:48                   ` Drubin, Daniel
2013-08-22 16:00                     ` Lars-Peter Clausen
2013-08-22 16:26                       ` Drubin, Daniel
2013-08-22 16:56                         ` Lars-Peter Clausen
2013-08-28 12:56                         ` Alexander Holler