linux-iio.vger.kernel.org archive mirror
* [RFC PATCH] Proposal for a kthread tight loop based trigger.
@ 2016-02-27 12:05 Jonathan Cameron
  2016-02-27 12:05 ` [RFC PATCH] iio:trigger: Experimental kthread tight loop trigger (thread only) Jonathan Cameron
  2016-03-01 11:51 ` [RFC PATCH] Proposal for a kthread tight loop based trigger Daniel Baluta
  0 siblings, 2 replies; 4+ messages in thread
From: Jonathan Cameron @ 2016-02-27 12:05 UTC (permalink / raw)
  To: linux-iio; +Cc: gregor.boirie, lars, pmeerw, daniel.baluta, Jonathan Cameron

Hi Gregor and all,

This patch was motivated by a proposal from Gregor to put a kthread in the
ms5611 driver to basically spin and grab data as fast as possible.
I can see the use case is realistic but was unhappy with the per-driver
code change approach, so I decided to test out another approach over
a couple of hours this morning.

Hence in brief the use case is:
1) Read from a 'capture on demand' device as quickly as possible.  For tests
I used a max1363 on an ancient stargate2 because I have a few lying around and
it is the right sort of device.
2) Do it in a fashion that allows higher priority code to slow it down if
  needed
3) Allow lots of devices to run in the same fashion but without each one
having to wait for them all to finish.

As a quick aside, configfs (or at least iio-trig-hrtimer) isn't working
on my platform right now so I'll need to follow that up when I get a bit more
time. Hence I based this off the sysfs trigger (which is an old friend :)

So first some background on our 'non hardware' triggers.

1) First there were hardware triggers or things that looked very much like
  hardware triggers (e.g. the periodic rtc driver that is on its way out -
  finally).  These call 'real' interrupt handlers - they predated the
  threaded interrupt support, so everything used to have to have a top half
  anyway to schedule the bottom half (which later became a thread)
2) It made sense in some devices - dataready triggered ones typically - to
   grab timestamps and sometimes some other state in the top half.
3) Threaded interrupts came along getting rid of most of the top half code but
  typically leaving some timestamping etc in there.
4) Somewhere in this process we had the sysfs trigger come along as a really
   handy test tool (initially).  This first of all simply didn't run top halves
   but later jumped through some hoops to get into interrupt context to call
   them so it looked like a hardware interrupt.  This is now nicely wrapped up
   in IRQ_WORK etc.  One major advantage of this is that if we have multiple
   devices triggering off one interrupt, they will run in their own threads.

Anyhow, what we have here goes back to the bottom half (now thread) element
only, as we can do that quick and dirty (as an aside, iio_trigger_poll_chained
is really badly named - I suspect I either messed this up or the naming used
in the irq subsystem was clarified sometime later)

So here we launch a kthread when a buffer attached to this trigger is brought
up.  That then calls every attached device's poll func (thread part) in that
thread, one after another.

So what are the issues with this:
1) Multiple devices connected to this trigger will not have their thread
   handlers potentially run in parallel. (do we care? - the use case wouldn't
   put more than one on a given trigger anyway)
2) We have no current way of identifying if a device needs a top half called
   (say to get a timestamp).  This should be easy enough to add as a flag in
   struct iio_dev and enforce with the validate callbacks.
3) For those devices that do something that doesn't need to be in the top half
   (from a not hanging point of view - if not from a get the best answer point
    of view) we have no way of knowing this or calling the top half if we did.

I think the last two can be worked around (3 might be tricky!)
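
For 2, I'm thinking of something along the lines of the sketch below.
validate_device is the existing callback in struct iio_trigger_ops;
INDIO_NEEDS_TOP_HALF is a made-up flag name for illustration only:

```c
/* Hypothetical: refuse any device that must have its top half run.
 * INDIO_NEEDS_TOP_HALF does not exist (yet). */
static int iio_loop_trigger_validate_device(struct iio_trigger *trig,
					    struct iio_dev *indio_dev)
{
	if (indio_dev->flags & INDIO_NEEDS_TOP_HALF)
		return -EINVAL;

	return 0;
}
```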

Anyhow what do people think?

For some numbers I ran this on a pxa270 with a max1363 hanging off the i2c
bus.  generic_buffer into a ramdisk, watermark at 64, buffer length 128. Ran
happily at upwards of a kHz, though with some 5 ms pauses (presumably something
higher priority came along).
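
For reference, driving it looks like this with the sysfs interface as posted
(the iio:device0 path is from my setup and will differ elsewhere):

```sh
# create looptrig0; the directory comes from the iio_loop_trigger device
echo 0 > /sys/bus/iio/devices/iio_loop_trigger/add_trigger

# hook the device up to it and start the buffer - the kthread spins
# from buffer enable until disable
echo looptrig0 > /sys/bus/iio/devices/iio:device0/trigger/current_trigger
echo 128 > /sys/bus/iio/devices/iio:device0/buffer/length
echo 1 > /sys/bus/iio/devices/iio:device0/buffer/enable
```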

Jonathan

Jonathan Cameron (1):
  iio:trigger: Experimental kthread tight loop trigger (thread only)

 drivers/iio/trigger/Kconfig         |   5 +
 drivers/iio/trigger/Makefile        |   1 +
 drivers/iio/trigger/iio-trig-loop.c | 245 ++++++++++++++++++++++++++++++++++++
 3 files changed, 251 insertions(+)
 create mode 100644 drivers/iio/trigger/iio-trig-loop.c

-- 
2.7.1



* [RFC PATCH] iio:trigger: Experimental kthread tight loop trigger (thread only)
  2016-02-27 12:05 [RFC PATCH] Proposal for a kthread tight loop based trigger Jonathan Cameron
@ 2016-02-27 12:05 ` Jonathan Cameron
  2016-03-01 11:51 ` [RFC PATCH] Proposal for a kthread tight loop based trigger Daniel Baluta
  1 sibling, 0 replies; 4+ messages in thread
From: Jonathan Cameron @ 2016-02-27 12:05 UTC (permalink / raw)
  To: linux-iio; +Cc: gregor.boirie, lars, pmeerw, daniel.baluta, Jonathan Cameron

This patch is in response to one from
Gregor Boirie <gregor.boirie@parrot.com>,
who proposed using a tight kthread within a device driver (albeit with the
support factored out into a helper library) in order to basically spin as
fast as possible.

It is meant as a talking point rather than a formal proposal of the code.
It also gives people some working code to mess around with.

I proposed that this could be done with a trigger with a few constraints,
and this is the proof (albeit an ugly one) of that.

There are some constraints, some of which we would want to relax
if this were to move forward.

* Will only run the thread part of the registered pollfunc.  This is to
  avoid the overhead of jumping in and out of interrupt context.  Is the
  overhead significant?  Not certain but feels like it should be!

* This limitation precludes any device that 'must' do some work in
  interrupt context.  However, that is true of few if any drivers and
  I suspect that any that do will be restricted to using triggers they
  provide themselves.  Usually we have a top half mainly to grab a
  timestamp as soon after the dataready type signal as possible.
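
For context, the usual shape of that is below; iio_pollfunc_store_time is the
stock top half and everything else is a placeholder:

```c
/* Typical triggered buffer setup: the top half just stamps the time,
 * the thread half does the bus transfers.  This trigger currently only
 * runs the thread half, so the store_time step never happens. */
static irqreturn_t my_trigger_handler(int irq, void *p)
{
	struct iio_poll_func *pf = p;
	struct iio_dev *indio_dev = pf->indio_dev;

	/* read channels, iio_push_to_buffers_with_timestamp(), ... */

	iio_trigger_notify_done(indio_dev->trig);
	return IRQ_HANDLED;
}

/* in probe() */
ret = iio_triggered_buffer_setup(indio_dev, iio_pollfunc_store_time,
				 my_trigger_handler, NULL);
```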

Anyhow, I'm not signing this as even if we go with the approach there are
some hideous corners.

Not-Signed-off-by: Jonathan Cameron <jic23@kernel.org>
---
 drivers/iio/trigger/Kconfig         |   5 +
 drivers/iio/trigger/Makefile        |   1 +
 drivers/iio/trigger/iio-trig-loop.c | 245 ++++++++++++++++++++++++++++++++++++
 3 files changed, 251 insertions(+)

diff --git a/drivers/iio/trigger/Kconfig b/drivers/iio/trigger/Kconfig
index 519e6772f6f5..bc6e2ec67a75 100644
--- a/drivers/iio/trigger/Kconfig
+++ b/drivers/iio/trigger/Kconfig
@@ -5,6 +5,11 @@
 
 menu "Triggers - standalone"
 
+config IIO_TIGHTLOOP_TRIGGER
+	tristate "A kthread based hammering loop trigger"
+	help
+	  A horrible bodge as an experiment.
+
 config IIO_HRTIMER_TRIGGER
 	tristate "High resolution timer trigger"
 	depends on IIO_SW_TRIGGER
diff --git a/drivers/iio/trigger/Makefile b/drivers/iio/trigger/Makefile
index fe06eb564367..aab4dc23303d 100644
--- a/drivers/iio/trigger/Makefile
+++ b/drivers/iio/trigger/Makefile
@@ -7,3 +7,4 @@
 obj-$(CONFIG_IIO_HRTIMER_TRIGGER) += iio-trig-hrtimer.o
 obj-$(CONFIG_IIO_INTERRUPT_TRIGGER) += iio-trig-interrupt.o
 obj-$(CONFIG_IIO_SYSFS_TRIGGER) += iio-trig-sysfs.o
+obj-$(CONFIG_IIO_TIGHTLOOP_TRIGGER) += iio-trig-loop.o
diff --git a/drivers/iio/trigger/iio-trig-loop.c b/drivers/iio/trigger/iio-trig-loop.c
new file mode 100644
index 000000000000..deeea23df8aa
--- /dev/null
+++ b/drivers/iio/trigger/iio-trig-loop.c
@@ -0,0 +1,245 @@
+/*
+ * Copyright 2016 Jonathan Cameron <jic23@kernel.org>
+ *
+ * Licensed under the GPL-2.
+ *
+ * Based on a mashup of the sysfs trigger of 
+ * "Michael Hennerich <hennerich@blackfin.uclinux.org>"
+ * and continuous sampling proposal of
+ * Gregor Boirie <gregor.boirie@parrot.com>
+ *
+ * Note this is ugly and may eat babies.
+ *
+ * Todo
+ * * configfs interface rather than sysfs creation - right now configfs
+ *   is broken on my dev platform (which is rather aged and not much used
+ *   so I need to chase that down)
+ * * Protect against connection of devices that 'need' the top half
+ *   handler.
+ * * Work out how to run top half handlers in this context if it is
+ *   safe to do so (timestamp grabbing for example)
+ *
+ * Tested against a max1363. Used about 33% cpu for the thread and 20%
+ * for generic_buffer piping to /dev/null. Watermark set at 64 on a 128
+ * element kfifo buffer.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/irq_work.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
+
+#include <linux/iio/iio.h>
+#include <linux/iio/trigger.h>
+
+struct iio_loop_trig {
+	struct iio_trigger *trig;
+	struct task_struct *task;
+	int id;
+	struct list_head l;
+};
+
+static LIST_HEAD(iio_loop_trig_list);
+static DEFINE_MUTEX(iio_loop_trig_list_mut);
+
+static int iio_loop_trigger_probe(int id);
+static ssize_t iio_loop_trig_add(struct device *dev,
+				  struct device_attribute *attr,
+				  const char *buf,
+				  size_t len)
+{
+	int ret;
+	unsigned long input;
+
+	ret = kstrtoul(buf, 10, &input);
+	if (ret)
+		return ret;
+	ret = iio_loop_trigger_probe(input);
+	if (ret)
+		return ret;
+	return len;
+}
+static DEVICE_ATTR(add_trigger, S_IWUSR, NULL, &iio_loop_trig_add);
+
+static int iio_loop_trigger_remove(int id);
+static ssize_t iio_loop_trig_remove(struct device *dev,
+				     struct device_attribute *attr,
+				     const char *buf,
+				     size_t len)
+{
+	int ret;
+	unsigned long input;
+
+	ret = kstrtoul(buf, 10, &input);
+	if (ret)
+		return ret;
+	ret = iio_loop_trigger_remove(input);
+	if (ret)
+		return ret;
+	return len;
+}
+
+static DEVICE_ATTR(remove_trigger, S_IWUSR, NULL, &iio_loop_trig_remove);
+
+static struct attribute *iio_loop_trig_attrs[] = {
+	&dev_attr_add_trigger.attr,
+	&dev_attr_remove_trigger.attr,
+	NULL,
+};
+
+static const struct attribute_group iio_loop_trig_group = {
+	.attrs = iio_loop_trig_attrs,
+};
+
+static const struct attribute_group *iio_loop_trig_groups[] = {
+	&iio_loop_trig_group,
+	NULL
+};
+
+/* Nothing to actually do upon release */
+static void iio_trigger_loop_release(struct device *dev)
+{
+}
+
+static struct device iio_loop_trig_dev = {
+	.bus = &iio_bus_type,
+	.groups = iio_loop_trig_groups,
+	.release = &iio_trigger_loop_release,
+};
+
+static int iio_loop_thread(void *data)
+{
+	struct iio_trigger *trig = data;
+
+	set_freezable();
+
+	do {
+		iio_trigger_poll_chained(trig);
+	} while (likely(!kthread_freezable_should_stop(NULL)));
+
+	return 0;
+}
+
+static int iio_loop_trigger_set_state(struct iio_trigger *trig, bool state)
+{
+	struct iio_loop_trig *loop_trig = iio_trigger_get_drvdata(trig);
+
+	if (state) {
+		loop_trig->task = kthread_run(iio_loop_thread, trig, trig->name);
+		if (unlikely(IS_ERR(loop_trig->task))) {
+			dev_err(&trig->dev,
+				"failed to create trigger loop thread\n");
+			return PTR_ERR(loop_trig->task);
+		}
+	} else {
+		kthread_stop(loop_trig->task);
+	}
+	
+	return 0;
+}
+
+static const struct iio_trigger_ops iio_loop_trigger_ops = {
+	.set_trigger_state = iio_loop_trigger_set_state,
+	.owner = THIS_MODULE,
+};
+
+static int iio_loop_trigger_probe(int id)
+{
+	struct iio_loop_trig *t;
+	int ret;
+	bool foundit = false;
+
+	mutex_lock(&iio_loop_trig_list_mut);
+	list_for_each_entry(t, &iio_loop_trig_list, l)
+		if (id == t->id) {
+			foundit = true;
+			break;
+		}
+	if (foundit) {
+		ret = -EINVAL;
+		goto out1;
+	}
+	t = kmalloc(sizeof(*t), GFP_KERNEL);
+	if (t == NULL) {
+		ret = -ENOMEM;
+		goto out1;
+	}
+	t->id = id;
+	t->trig = iio_trigger_alloc("looptrig%d", id);
+	if (!t->trig) {
+		ret = -ENOMEM;
+		goto free_t;
+	}
+
+	t->trig->ops = &iio_loop_trigger_ops;
+	t->trig->dev.parent = &iio_loop_trig_dev;
+	iio_trigger_set_drvdata(t->trig, t);
+
+	ret = iio_trigger_register(t->trig);
+	if (ret)
+		goto out2;
+	list_add(&t->l, &iio_loop_trig_list);
+	__module_get(THIS_MODULE);
+	mutex_unlock(&iio_loop_trig_list_mut);
+	
+
+	return 0;
+
+out2:
+	iio_trigger_put(t->trig);
+free_t:
+	kfree(t);
+out1:
+	mutex_unlock(&iio_loop_trig_list_mut);
+	return ret;
+}
+
+static int iio_loop_trigger_remove(int id)
+{
+	bool foundit = false;
+	struct iio_loop_trig *t;
+
+	mutex_lock(&iio_loop_trig_list_mut);
+	list_for_each_entry(t, &iio_loop_trig_list, l)
+		if (id == t->id) {
+			foundit = true;
+			break;
+		}
+	if (!foundit) {
+		mutex_unlock(&iio_loop_trig_list_mut);
+		return -EINVAL;
+	}
+
+	iio_trigger_unregister(t->trig);
+	iio_trigger_free(t->trig);
+
+	list_del(&t->l);
+	kfree(t);
+	module_put(THIS_MODULE);
+	mutex_unlock(&iio_loop_trig_list_mut);
+
+	return 0;
+}
+
+static int __init iio_loop_trig_init(void)
+{
+	device_initialize(&iio_loop_trig_dev);
+	dev_set_name(&iio_loop_trig_dev, "iio_loop_trigger");
+	return device_add(&iio_loop_trig_dev);
+}
+module_init(iio_loop_trig_init);
+
+static void __exit iio_loop_trig_exit(void)
+{
+	device_unregister(&iio_loop_trig_dev);
+}
+module_exit(iio_loop_trig_exit);
+
+MODULE_AUTHOR("Jonathan Cameron <jic23@kernel.org>");
+MODULE_DESCRIPTION("Loop based trigger for the iio subsystem");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("platform:iio-trig-loop");
-- 
2.7.1


* Re: [RFC PATCH] Proposal for a kthread tight loop based trigger.
  2016-02-27 12:05 [RFC PATCH] Proposal for a kthread tight loop based trigger Jonathan Cameron
  2016-02-27 12:05 ` [RFC PATCH] iio:trigger: Experimental kthread tight loop trigger (thread only) Jonathan Cameron
@ 2016-03-01 11:51 ` Daniel Baluta
  2016-03-01 17:27   ` Jonathan Cameron
  1 sibling, 1 reply; 4+ messages in thread
From: Daniel Baluta @ 2016-03-01 11:51 UTC (permalink / raw)
  To: Jonathan Cameron
  Cc: linux-iio@vger.kernel.org, gregor.boirie, Lars-Peter Clausen,
	Peter Meerwald-Stadler, Daniel Baluta

On Sat, Feb 27, 2016 at 2:05 PM, Jonathan Cameron <jic23@kernel.org> wrote:
> Hi Gregor and all,
>
> This patch was motivated by a proposal from Gregor to put a kthread in the
> ms5611 driver to basically spin and grab data as fast as possible.
> I can see the use case is realistic but was unhappy with the per-driver
> code change approach, so I decided to test out another approach over
> a couple of hours this morning.

This is a good idea.

>
> Hence in brief the use case is:
> 1) Read from a 'capture on demand' device as quickly as possible.  For tests
> I used a max1363 on an ancient stargate2 because I have a few lying around and
> it is the right sort of device.
> 2) Do it in a fashion that allows higher priority code to slow it down if
>   needed
> 3) Allow lots of devices to run in the same fashion but without each one
> having to wait for them all to finish.
>
> As a quick aside, configfs (or at least iio-trig-hrtimer) isn't working
> on my platform right now so I'll need to follow that up when I get a bit more
> time. Hence I based this off the sysfs trigger (which is an old friend :)
>

Which part of configfs support didn't work :).

> So first some background on our 'non hardware' triggers.
>
> 1) First there were hardware triggers or things that looked very much like
>   hardware triggers (e.g. the periodic rtc driver that is on its way out -
>   finally).  These call 'real' interrupt handlers - they predated the
>   threaded interrupt support, so everything used to have to have a top half
>   anyway to schedule the bottom half (which later became a thread)
> 2) It made sense in some devices - dataready triggered ones typically - to
>    grab timestamps and sometimes some other state in the top half.
> 3) Threaded interrupts came along getting rid of most of the top half code but
>   typically leaving some timestamping etc in there.
> 4) Somewhere in this process we had the sysfs trigger come along as a really
>    handy test tool (initially).  This first of all simply didn't run top halves
>    but later jumped through some hoops to get into interrupt context to call
>    them so it looked like a hardware interrupt.  This is now nicely wrapped up
>    in IRQ_WORK etc.  One major advantage of this is that if we have multiple
>    devices triggering off one interrupt, they will run in their own threads.
>
> Anyhow, what we have here goes back to the bottom half (now thread) element
> only, as we can do that quick and dirty (as an aside, iio_trigger_poll_chained
> is really badly named - I suspect I either messed this up or the naming used
> in the irq subsystem was clarified sometime later)
>
> So here we launch a kthread when a buffer attached to this trigger is brought
> up.  That then calls every attached device's poll func (thread part) in that
> thread, one after another.
>
> So what are the issues with this:
> 1) Multiple devices connected to this trigger will not have their thread
>    handlers potentially run in parallel. (do we care? - the use case wouldn't
>    put more than one on a given trigger anyway)
> 2) We have no current way of identifying if a device needs a top half called
>    (say to get a timestamp).  This should be easy enough to add as a flag in
>    struct iio_dev and enforce with the validate callbacks.
> 3) For those devices that do something that doesn't need to be in the top half
>    (from a not hanging point of view - if not from a get the best answer point
>     of view) we have no way of knowing this or calling the top half if we did.
>
> I think the last two can be worked around (3 might be tricky!)
>
> Anyhow what do people think?
>
> For some numbers I ran this on a pxa270 with a max1363 hanging off the i2c
> bus.  generic_buffer into a ramdisk, watermark at 64, buffer length 128. Ran
> happily at upwards of a kHz, though with some 5 ms pauses (presumably something
> higher priority came along).
>
> Jonathan
>
> Jonathan Cameron (1):
>   iio:trigger: Experimental kthread tight loop trigger (thread only)
>
>  drivers/iio/trigger/Kconfig         |   5 +
>  drivers/iio/trigger/Makefile        |   1 +
>  drivers/iio/trigger/iio-trig-loop.c | 245 ++++++++++++++++++++++++++++++++++++
>  3 files changed, 251 insertions(+)
>  create mode 100644 drivers/iio/trigger/iio-trig-loop.c
>
> --
> 2.7.1
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-iio" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: [RFC PATCH] Proposal for a kthread tight loop based trigger.
  2016-03-01 11:51 ` [RFC PATCH] Proposal for a kthread tight loop based trigger Daniel Baluta
@ 2016-03-01 17:27   ` Jonathan Cameron
  0 siblings, 0 replies; 4+ messages in thread
From: Jonathan Cameron @ 2016-03-01 17:27 UTC (permalink / raw)
  To: Daniel Baluta, Jonathan Cameron
  Cc: linux-iio@vger.kernel.org, gregor.boirie, Lars-Peter Clausen,
	Peter Meerwald-Stadler



On 1 March 2016 11:51:12 GMT+00:00, Daniel Baluta <daniel.baluta@intel.com> wrote:
>On Sat, Feb 27, 2016 at 2:05 PM, Jonathan Cameron <jic23@kernel.org> wrote:
>> Hi Gregor and all,
>>
>> This patch was motivated by a proposal from Gregor to put a kthread in the
>> ms5611 driver to basically spin and grab data as fast as possible.
>> I can see the use case is realistic but was unhappy with the per-driver
>> code change approach, so I decided to test out another approach over
>> a couple of hours this morning.
>
>This is a good idea.
>
>>
>> Hence in brief the use case is:
>> 1) Read from a 'capture on demand' device as quickly as possible.  For tests
>> I used a max1363 on an ancient stargate2 because I have a few lying around and
>> it is the right sort of device.
>> 2) Do it in a fashion that allows higher priority code to slow it down if
>>   needed
>> 3) Allow lots of devices to run in the same fashion but without each one
>> having to wait for them all to finish.
>>
>> As a quick aside, configfs (or at least iio-trig-hrtimer) isn't working
>> on my platform right now so I'll need to follow that up when I get a bit more
>> time. Hence I based this off the sysfs trigger (which is an old friend :)
>>
>
>Which part of configfs support didn't work :).

mkdir in the hrtimer directory. It gave a useless looking stack trace. I
haven't looked at it properly yet.

>
>> So first some background on our 'non hardware' triggers.
>>
>> 1) First there were hardware triggers or things that looked very much like
>>   hardware triggers (e.g. the periodic rtc driver that is on its way out -
>>   finally).  These call 'real' interrupt handlers - they predated the
>>   threaded interrupt support, so everything used to have to have a top half
>>   anyway to schedule the bottom half (which later became a thread)
>> 2) It made sense in some devices - dataready triggered ones typically - to
>>    grab timestamps and sometimes some other state in the top half.
>> 3) Threaded interrupts came along getting rid of most of the top half code but
>>   typically leaving some timestamping etc in there.
>> 4) Somewhere in this process we had the sysfs trigger come along as a really
>>    handy test tool (initially).  This first of all simply didn't run top halves
>>    but later jumped through some hoops to get into interrupt context to call
>>    them so it looked like a hardware interrupt.  This is now nicely wrapped up
>>    in IRQ_WORK etc.  One major advantage of this is that if we have multiple
>>    devices triggering off one interrupt, they will run in their own threads.
>>
>> Anyhow, what we have here goes back to the bottom half (now thread) element
>> only, as we can do that quick and dirty (as an aside, iio_trigger_poll_chained
>> is really badly named - I suspect I either messed this up or the naming used
>> in the irq subsystem was clarified sometime later)
>>
>> So here we launch a kthread when a buffer attached to this trigger is brought
>> up.  That then calls every attached device's poll func (thread part) in that
>> thread, one after another.
>>
>> So what are the issues with this:
>> 1) Multiple devices connected to this trigger will not have their thread
>>    handlers potentially run in parallel. (do we care? - the use case wouldn't
>>    put more than one on a given trigger anyway)
>> 2) We have no current way of identifying if a device needs a top half called
>>    (say to get a timestamp).  This should be easy enough to add as a flag in
>>    struct iio_dev and enforce with the validate callbacks.
>> 3) For those devices that do something that doesn't need to be in the top half
>>    (from a not hanging point of view - if not from a get the best answer point
>>     of view) we have no way of knowing this or calling the top half if we did.
>>
>> I think the last two can be worked around (3 might be tricky!)
>>
>> Anyhow what do people think?
>>
>> For some numbers I ran this on a pxa270 with a max1363 hanging off the i2c
>> bus.  generic_buffer into a ramdisk, watermark at 64, buffer length 128. Ran
>> happily at upwards of a kHz, though with some 5 ms pauses (presumably something
>> higher priority came along).
>>
>> Jonathan
>>
>> Jonathan Cameron (1):
>>   iio:trigger: Experimental kthread tight loop trigger (thread only)
>>
>>  drivers/iio/trigger/Kconfig         |   5 +
>>  drivers/iio/trigger/Makefile        |   1 +
>>  drivers/iio/trigger/iio-trig-loop.c | 245 ++++++++++++++++++++++++++++++++++++
>>  3 files changed, 251 insertions(+)
>>  create mode 100644 drivers/iio/trigger/iio-trig-loop.c
>>
>> --
>> 2.7.1

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

