linux-iio.vger.kernel.org archive mirror
* [PATCH 0/8] iio: Refactor __iio_update_buffers()
@ 2015-05-13 14:04 Lars-Peter Clausen
  2015-05-13 14:04 ` [PATCH 1/8] iio: Replace printk in __iio_update_buffers with dev_dbg Lars-Peter Clausen
                   ` (7 more replies)
  0 siblings, 8 replies; 18+ messages in thread
From: Lars-Peter Clausen @ 2015-05-13 14:04 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald
  Cc: linux-iio, Lars-Peter Clausen

__iio_update_buffers() is quite the beast: it does a number of different
things and, as a result, has several separate error paths. This series
refactors the function, starting by moving the configuration verification
out of the configuration path itself. This means we can verify that the
configuration will be accepted before we make any changes to the device and
don't have to implement rollback functionality. The function is also broken
down into a couple of smaller helper functions, each with only a single
error path. This should hopefully make it easier to comprehend what is
going on. The last step of the refactoring is to make sure that we always
leave the device in a sane and consistent state even if we encounter a
fatal error. This allows us to potentially recover from the fatal error and
also makes sure that we don't trigger unexpected behavior in drivers.
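
For orientation, here is a simplified sketch of the shape
__iio_update_buffers() has at the end of the series (the helper names are
the ones introduced by the individual patches; locking and some details are
omitted):

static int __iio_update_buffers(struct iio_dev *indio_dev,
		struct iio_buffer *insert_buffer,
		struct iio_buffer *remove_buffer)
{
	struct iio_device_config new_config;
	int ret;

	/* 1) Check that the new configuration can work, touching nothing */
	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
		&new_config);
	if (ret)
		return ret;

	/* 2) Let a newly inserted buffer update its own parameters */
	if (insert_buffer) {
		ret = iio_buffer_request_update(indio_dev, insert_buffer);
		if (ret)
			goto err_free_config;
	}

	/* 3) Wind down the currently active configuration */
	ret = iio_disable_buffers(indio_dev);
	if (ret)
		goto err_deactivate_all;

	/* 4) Update the buffer list */
	if (remove_buffer)
		iio_buffer_deactivate(remove_buffer);
	if (insert_buffer)
		iio_buffer_activate(indio_dev, insert_buffer);

	if (list_empty(&indio_dev->buffer_list))
		return 0;

	/* 5) Apply the configuration that was verified in step 1 */
	ret = iio_enable_buffers(indio_dev, &new_config);
	if (ret)
		goto err_deactivate_all;

	return 0;

err_deactivate_all:
	iio_buffer_deactivate_all(indio_dev);
err_free_config:
	iio_free_scan_mask(indio_dev, new_config.scan_mask);
	return ret;
}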

The main motivation for this series is to get __iio_update_buffers() ready
for future extensions, such as those needed for DMA buffer support as well
as better support for hardware buffers.

The last patch in this series is not meant to be applied; it implements
some simple random error injection into the various error paths of
__iio_update_buffers(). This was used to test the patch series for
regressions and to make sure that we can indeed recover from fatal errors
now.

- Lars

Lars-Peter Clausen (8):
  iio: Replace printk in __iio_update_buffers with dev_dbg
  iio: __iio_update_buffers: Slightly refactor scan mask memory
    management
  iio: __iio_update_buffers: Perform request_update() only for new
    buffers
  iio: __iio_update_buffers: Update mode before preenable/after
    postdisable
  iio: __iio_update_buffers: Verify configuration before starting to
    apply it
  iio: __iio_update_buffers: Split enable and disable path into helper
    functions
  iio: __iio_update_buffers: Leave device in sane state on error
  DO NOT APPLY! __iio_update_buffers error path testing

 drivers/iio/industrialio-buffer.c | 364 ++++++++++++++++++++++++--------------
 1 file changed, 229 insertions(+), 135 deletions(-)

-- 
1.8.0



* [PATCH 1/8] iio: Replace printk in __iio_update_buffers with dev_dbg
  2015-05-13 14:04 [PATCH 0/8] iio: Refactor __iio_update_buffers() Lars-Peter Clausen
@ 2015-05-13 14:04 ` Lars-Peter Clausen
  2015-05-17  8:42   ` Jonathan Cameron
  2015-05-13 14:04 ` [PATCH 2/8] iio: __iio_update_buffers: Slightly refactor scan mask memory management Lars-Peter Clausen
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: Lars-Peter Clausen @ 2015-05-13 14:04 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald
  Cc: linux-iio, Lars-Peter Clausen

While more verbose error messages are useful for debugging, when running in
non-debug mode we should not put them into the kernel log for normal errors
that are already reported to the application via the error code.

Otherwise application authors might expect this to be part of the ABI and
scan the kernel log to get the error, which would itself be rather error
prone since there is no direct mapping between an operation and the error
message, so it is impossible to tell which error message belongs to which
error.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
---
 drivers/iio/industrialio-buffer.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index df919f4..1f91031 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -663,7 +663,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 	if (indio_dev->setup_ops->preenable) {
 		ret = indio_dev->setup_ops->preenable(indio_dev);
 		if (ret) {
-			printk(KERN_ERR
+			dev_dbg(&indio_dev->dev,
 			       "Buffer not started: buffer preenable failed (%d)\n", ret);
 			goto error_remove_inserted;
 		}
@@ -677,7 +677,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		if (buffer->access->request_update) {
 			ret = buffer->access->request_update(buffer);
 			if (ret) {
-				printk(KERN_INFO
+				dev_dbg(&indio_dev->dev,
 				       "Buffer not started: buffer parameter update failed (%d)\n", ret);
 				goto error_run_postdisable;
 			}
@@ -688,7 +688,9 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 			->update_scan_mode(indio_dev,
 					   indio_dev->active_scan_mask);
 		if (ret < 0) {
-			printk(KERN_INFO "Buffer not started: update scan mode failed (%d)\n", ret);
+			dev_dbg(&indio_dev->dev,
+				"Buffer not started: update scan mode failed (%d)\n",
+				ret);
 			goto error_run_postdisable;
 		}
 	}
@@ -702,7 +704,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 	} else { /* Should never be reached */
 		/* Can only occur on first buffer */
 		if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
-			pr_info("Buffer not started: no trigger\n");
+			dev_dbg(&indio_dev->dev, "Buffer not started: no trigger\n");
 		ret = -EINVAL;
 		goto error_run_postdisable;
 	}
@@ -710,7 +712,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 	if (indio_dev->setup_ops->postenable) {
 		ret = indio_dev->setup_ops->postenable(indio_dev);
 		if (ret) {
-			printk(KERN_INFO
+			dev_dbg(&indio_dev->dev,
 			       "Buffer not started: postenable failed (%d)\n", ret);
 			indio_dev->currentmode = INDIO_DIRECT_MODE;
 			if (indio_dev->setup_ops->postdisable)
-- 
1.8.0



* [PATCH 2/8] iio: __iio_update_buffers: Slightly refactor scan mask memory management
  2015-05-13 14:04 [PATCH 0/8] iio: Refactor __iio_update_buffers() Lars-Peter Clausen
  2015-05-13 14:04 ` [PATCH 1/8] iio: Replace printk in __iio_update_buffers with dev_dbg Lars-Peter Clausen
@ 2015-05-13 14:04 ` Lars-Peter Clausen
  2015-05-17  8:48   ` Jonathan Cameron
  2015-05-13 14:04 ` [PATCH 3/8] iio: __iio_update_buffers: Perform request_update() only for new buffers Lars-Peter Clausen
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: Lars-Peter Clausen @ 2015-05-13 14:04 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald
  Cc: linux-iio, Lars-Peter Clausen

Add a small helper function iio_free_scan_mask() that takes a mask and
frees its memory if the scan masks for the device are dynamically
allocated, otherwise does nothing. This means we don't have to open-code
the same check over and over again in __iio_update_buffers.

Also free compound_mask as soon as we are done using it. This constrains its
usage to a specific region of the function, which will make further
refactoring and splitting the function into smaller sub-parts easier.
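
The ownership rule the helper encapsulates is the one already present in
the function; as a sketch of the two existing code paths:

/* Sketch of the two cases iio_free_scan_mask() has to distinguish */
if (indio_dev->available_scan_masks) {
	/* Mask points into the driver-owned array, must never be freed */
	indio_dev->active_scan_mask =
		iio_scan_mask_match(indio_dev->available_scan_masks,
				    indio_dev->masklength, compound_mask);
	kfree(compound_mask);	/* scratch mask, no longer needed */
} else {
	/* Mask is the kcalloc()ed scratch mask, freed later by the core */
	indio_dev->active_scan_mask = compound_mask;
}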

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
---
 drivers/iio/industrialio-buffer.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index 1f91031..2afe3db 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -575,6 +575,14 @@ static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev,
 	buffer->access->set_bytes_per_datum(buffer, bytes);
 }
 
+static void iio_free_scan_mask(struct iio_dev *indio_dev,
+	const unsigned long *mask)
+{
+	/* If the mask is dynamically allocated free it, otherwise do nothing */
+	if (!indio_dev->available_scan_masks)
+		kfree(mask);
+}
+
 static int __iio_update_buffers(struct iio_dev *indio_dev,
 		       struct iio_buffer *insert_buffer,
 		       struct iio_buffer *remove_buffer)
@@ -612,8 +620,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 	/* If no buffers in list, we are done */
 	if (list_empty(&indio_dev->buffer_list)) {
 		indio_dev->currentmode = INDIO_DIRECT_MODE;
-		if (indio_dev->available_scan_masks == NULL)
-			kfree(old_mask);
+		iio_free_scan_mask(indio_dev, old_mask);
 		return 0;
 	}
 
@@ -621,8 +628,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 	compound_mask = kcalloc(BITS_TO_LONGS(indio_dev->masklength),
 				sizeof(long), GFP_KERNEL);
 	if (compound_mask == NULL) {
-		if (indio_dev->available_scan_masks == NULL)
-			kfree(old_mask);
+		iio_free_scan_mask(indio_dev, old_mask);
 		return -ENOMEM;
 	}
 	indio_dev->scan_timestamp = 0;
@@ -637,6 +643,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 			iio_scan_mask_match(indio_dev->available_scan_masks,
 					    indio_dev->masklength,
 					    compound_mask);
+		kfree(compound_mask);
 		if (indio_dev->active_scan_mask == NULL) {
 			/*
 			 * Roll back.
@@ -648,7 +655,6 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 				success = -EINVAL;
 			}
 			else {
-				kfree(compound_mask);
 				ret = -EINVAL;
 				return ret;
 			}
@@ -721,10 +727,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		}
 	}
 
-	if (indio_dev->available_scan_masks)
-		kfree(compound_mask);
-	else
-		kfree(old_mask);
+	iio_free_scan_mask(indio_dev, old_mask);
 
 	return success;
 
@@ -736,8 +739,8 @@ error_run_postdisable:
 error_remove_inserted:
 	if (insert_buffer)
 		iio_buffer_deactivate(insert_buffer);
+	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
 	indio_dev->active_scan_mask = old_mask;
-	kfree(compound_mask);
 	return ret;
 }
 
-- 
1.8.0



* [PATCH 3/8] iio: __iio_update_buffers: Perform request_update() only for new buffers
  2015-05-13 14:04 [PATCH 0/8] iio: Refactor __iio_update_buffers() Lars-Peter Clausen
  2015-05-13 14:04 ` [PATCH 1/8] iio: Replace printk in __iio_update_buffers with dev_dbg Lars-Peter Clausen
  2015-05-13 14:04 ` [PATCH 2/8] iio: __iio_update_buffers: Slightly refactor scan mask memory management Lars-Peter Clausen
@ 2015-05-13 14:04 ` Lars-Peter Clausen
  2015-05-17  9:01   ` Jonathan Cameron
  2015-05-13 14:04 ` [PATCH 4/8] iio: __iio_update_buffers: Update mode before preenable/after postdisable Lars-Peter Clausen
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: Lars-Peter Clausen @ 2015-05-13 14:04 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald
  Cc: linux-iio, Lars-Peter Clausen

We only have to call the request_update() callback for a newly inserted
buffer. The configuration of the already previously active buffers will not
have changed.

This also allows us to move the request_update() call to the beginning of
__iio_update_buffers(), before any currently active buffers are stopped.
This makes the error handling a lot easier since no changes were made to
the buffer list and no rollback needs to be performed.
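
In other words, a failure of request_update() is now completely free of
side effects (sketch):

/* New: runs before anything is torn down or modified */
if (insert_buffer) {
	ret = iio_buffer_request_update(indio_dev, insert_buffer);
	if (ret)
		return ret;	/* nothing to undo */
}
/*
 * Old: request_update() ran for every buffer after the active buffers
 * had already been stopped, so a failure had to go through the
 * error_run_postdisable rollback path.
 */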

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
---
 drivers/iio/industrialio-buffer.c | 36 +++++++++++++++++++++++++-----------
 1 file changed, 25 insertions(+), 11 deletions(-)

diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index 2afe3db..21ed316 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -575,6 +575,25 @@ static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev,
 	buffer->access->set_bytes_per_datum(buffer, bytes);
 }
 
+static int iio_buffer_request_update(struct iio_dev *indio_dev,
+	struct iio_buffer *buffer)
+{
+	int ret;
+
+	iio_buffer_update_bytes_per_datum(indio_dev, buffer);
+	if (buffer->access->request_update) {
+		ret = buffer->access->request_update(buffer);
+		if (ret) {
+			dev_dbg(&indio_dev->dev,
+			       "Buffer not started: buffer parameter update failed (%d)\n",
+				ret);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
 static void iio_free_scan_mask(struct iio_dev *indio_dev,
 	const unsigned long *mask)
 {
@@ -593,6 +612,12 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 	unsigned long *compound_mask;
 	const unsigned long *old_mask;
 
+	if (insert_buffer) {
+		ret = iio_buffer_request_update(indio_dev, insert_buffer);
+		if (ret)
+			return ret;
+	}
+
 	/* Wind down existing buffers - iff there are any */
 	if (!list_empty(&indio_dev->buffer_list)) {
 		if (indio_dev->setup_ops->predisable) {
@@ -678,17 +703,6 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		iio_compute_scan_bytes(indio_dev,
 				       indio_dev->active_scan_mask,
 				       indio_dev->scan_timestamp);
-	list_for_each_entry(buffer, &indio_dev->buffer_list, buffer_list) {
-		iio_buffer_update_bytes_per_datum(indio_dev, buffer);
-		if (buffer->access->request_update) {
-			ret = buffer->access->request_update(buffer);
-			if (ret) {
-				dev_dbg(&indio_dev->dev,
-				       "Buffer not started: buffer parameter update failed (%d)\n", ret);
-				goto error_run_postdisable;
-			}
-		}
-	}
 	if (indio_dev->info->update_scan_mode) {
 		ret = indio_dev->info
 			->update_scan_mode(indio_dev,
-- 
1.8.0



* [PATCH 4/8] iio: __iio_update_buffers: Update mode before preenable/after postdisable
  2015-05-13 14:04 [PATCH 0/8] iio: Refactor __iio_update_buffers() Lars-Peter Clausen
                   ` (2 preceding siblings ...)
  2015-05-13 14:04 ` [PATCH 3/8] iio: __iio_update_buffers: Perform request_update() only for new buffers Lars-Peter Clausen
@ 2015-05-13 14:04 ` Lars-Peter Clausen
  2015-05-17  9:10   ` Jonathan Cameron
  2015-05-13 14:04 ` [PATCH 5/8] iio: __iio_update_buffers: Verify configuration before starting to apply it Lars-Peter Clausen
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: Lars-Peter Clausen @ 2015-05-13 14:04 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald
  Cc: linux-iio, Lars-Peter Clausen

It is clear that we transition to INDIO_DIRECT_MODE when disabling the
buffer(s) and it is also clear that we transition from INDIO_DIRECT_MODE
when enabling the buffer(s). So leaving the currentmode field
INDIO_DIRECT_MODE until after the preenable() callback and updating it to
INDIO_DIRECT_MODE before the postdisable() callback doesn't add additional
value. On the other hand some drivers will need to perform different
actions depending on which mode the device is going to operate in/was
operating in.

Moving the update of currentmode before preenable() and after postdisable()
enables us to have drivers which perform mode dependent actions in those
callbacks.
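
For illustration only, a hypothetical driver (the foo_* names below are
made up) could then do something like this in its preenable() callback:

/* Hypothetical driver callback, made possible by the new ordering */
static int foo_buffer_preenable(struct iio_dev *indio_dev)
{
	/* currentmode already holds the mode we are about to enter */
	switch (indio_dev->currentmode) {
	case INDIO_BUFFER_TRIGGERED:
		return foo_enable_trigger_capture(indio_dev);
	case INDIO_BUFFER_HARDWARE:
		return foo_enable_hardware_fifo(indio_dev);
	default:
		return -EINVAL;
	}
}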

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
---
 drivers/iio/industrialio-buffer.c | 44 +++++++++++++++++++--------------------
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index 21ed316..8ecba2f 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -553,10 +553,11 @@ void iio_disable_all_buffers(struct iio_dev *indio_dev)
 			&indio_dev->buffer_list, buffer_list)
 		iio_buffer_deactivate(buffer);
 
-	indio_dev->currentmode = INDIO_DIRECT_MODE;
 	if (indio_dev->setup_ops->postdisable)
 		indio_dev->setup_ops->postdisable(indio_dev);
 
+	indio_dev->currentmode = INDIO_DIRECT_MODE;
+
 	if (indio_dev->available_scan_masks == NULL)
 		kfree(indio_dev->active_scan_mask);
 }
@@ -625,12 +626,14 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 			if (ret)
 				return ret;
 		}
-		indio_dev->currentmode = INDIO_DIRECT_MODE;
+
 		if (indio_dev->setup_ops->postdisable) {
 			ret = indio_dev->setup_ops->postdisable(indio_dev);
 			if (ret)
 				return ret;
 		}
+
+		indio_dev->currentmode = INDIO_DIRECT_MODE;
 	}
 	/* Keep a copy of current setup to allow roll back */
 	old_mask = indio_dev->active_scan_mask;
@@ -688,6 +691,21 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		indio_dev->active_scan_mask = compound_mask;
 	}
 
+	/* Definitely possible for devices to support both of these. */
+	if ((indio_dev->modes & INDIO_BUFFER_TRIGGERED) && indio_dev->trig) {
+		indio_dev->currentmode = INDIO_BUFFER_TRIGGERED;
+	} else if (indio_dev->modes & INDIO_BUFFER_HARDWARE) {
+		indio_dev->currentmode = INDIO_BUFFER_HARDWARE;
+	} else if (indio_dev->modes & INDIO_BUFFER_SOFTWARE) {
+		indio_dev->currentmode = INDIO_BUFFER_SOFTWARE;
+	} else { /* Should never be reached */
+		/* Can only occur on first buffer */
+		if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
+			dev_dbg(&indio_dev->dev, "Buffer not started: no trigger\n");
+		ret = -EINVAL;
+		goto error_remove_inserted;
+	}
+
 	iio_update_demux(indio_dev);
 
 	/* Wind up again */
@@ -714,30 +732,13 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 			goto error_run_postdisable;
 		}
 	}
-	/* Definitely possible for devices to support both of these. */
-	if ((indio_dev->modes & INDIO_BUFFER_TRIGGERED) && indio_dev->trig) {
-		indio_dev->currentmode = INDIO_BUFFER_TRIGGERED;
-	} else if (indio_dev->modes & INDIO_BUFFER_HARDWARE) {
-		indio_dev->currentmode = INDIO_BUFFER_HARDWARE;
-	} else if (indio_dev->modes & INDIO_BUFFER_SOFTWARE) {
-		indio_dev->currentmode = INDIO_BUFFER_SOFTWARE;
-	} else { /* Should never be reached */
-		/* Can only occur on first buffer */
-		if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
-			dev_dbg(&indio_dev->dev, "Buffer not started: no trigger\n");
-		ret = -EINVAL;
-		goto error_run_postdisable;
-	}
 
 	if (indio_dev->setup_ops->postenable) {
 		ret = indio_dev->setup_ops->postenable(indio_dev);
 		if (ret) {
 			dev_dbg(&indio_dev->dev,
 			       "Buffer not started: postenable failed (%d)\n", ret);
-			indio_dev->currentmode = INDIO_DIRECT_MODE;
-			if (indio_dev->setup_ops->postdisable)
-				indio_dev->setup_ops->postdisable(indio_dev);
-			goto error_disable_all_buffers;
+			goto error_run_postdisable;
 		}
 	}
 
@@ -745,12 +746,11 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 
 	return success;
 
-error_disable_all_buffers:
-	indio_dev->currentmode = INDIO_DIRECT_MODE;
 error_run_postdisable:
 	if (indio_dev->setup_ops->postdisable)
 		indio_dev->setup_ops->postdisable(indio_dev);
 error_remove_inserted:
+	indio_dev->currentmode = INDIO_DIRECT_MODE;
 	if (insert_buffer)
 		iio_buffer_deactivate(insert_buffer);
 	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
-- 
1.8.0



* [PATCH 5/8] iio: __iio_update_buffers: Verify configuration before starting to apply it
  2015-05-13 14:04 [PATCH 0/8] iio: Refactor __iio_update_buffers() Lars-Peter Clausen
                   ` (3 preceding siblings ...)
  2015-05-13 14:04 ` [PATCH 4/8] iio: __iio_update_buffers: Update mode before preenable/after postdisable Lars-Peter Clausen
@ 2015-05-13 14:04 ` Lars-Peter Clausen
  2015-05-17  9:14   ` Jonathan Cameron
  2015-05-13 14:04 ` [PATCH 6/8] iio: __iio_update_buffers: Split enable and disable path into helper functions Lars-Peter Clausen
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: Lars-Peter Clausen @ 2015-05-13 14:04 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald
  Cc: linux-iio, Lars-Peter Clausen

Currently __iio_update_buffers() verifies whether the new configuration
will work in the middle of the update sequence. This means that if the new
configuration is invalid we need to roll back the changes already made. This
patch moves the validation of the new configuration to the beginning of
__iio_update_buffers() so that no changes are made at all if the new
configuration is invalid.
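
For context, this is the kind of data iio_verify_update() works with (the
foo_* names below are made up): the compound mask is the OR of all
requested scan masks, and for drivers with fixed mask combinations the
first entry of the zero-terminated available_scan_masks array that is a
superset of the compound mask is picked, otherwise the update is now
rejected with -EINVAL before anything has been changed.

/* Hypothetical driver restricting the hardware to fixed scan masks */
static const unsigned long foo_scan_masks[] = { 0x3, 0xf, 0 };

/* in the driver's probe(): */
indio_dev->available_scan_masks = foo_scan_masks;

/*
 * Example: buffers requesting masks 0x1 and 0x2 yield a compound mask
 * of 0x3, which matches foo_scan_masks[0]. A request for bit 4 (0x10)
 * matches neither entry, so iio_verify_update() fails up front.
 */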

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
---
 drivers/iio/industrialio-buffer.c | 164 +++++++++++++++++++++++---------------
 1 file changed, 100 insertions(+), 64 deletions(-)

diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index 8ecba2f..68e2669 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -603,20 +603,104 @@ static void iio_free_scan_mask(struct iio_dev *indio_dev,
 		kfree(mask);
 }
 
+struct iio_device_config {
+	unsigned int mode;
+	const unsigned long *scan_mask;
+	unsigned int scan_bytes;
+	bool scan_timestamp;
+};
+
+static int iio_verify_update(struct iio_dev *indio_dev,
+	struct iio_buffer *insert_buffer, struct iio_buffer *remove_buffer,
+	struct iio_device_config *config)
+{
+	unsigned long *compound_mask;
+	const unsigned long *scan_mask;
+	struct iio_buffer *buffer;
+	bool scan_timestamp;
+
+	memset(config, 0, sizeof(*config));
+
+	/*
+	 * If there is just one buffer and we are removing it there is nothing
+	 * to verify.
+	 */
+	if (remove_buffer && !insert_buffer &&
+		list_is_singular(&indio_dev->buffer_list))
+			return 0;
+
+	/* Definitely possible for devices to support both of these. */
+	if ((indio_dev->modes & INDIO_BUFFER_TRIGGERED) && indio_dev->trig) {
+		config->mode = INDIO_BUFFER_TRIGGERED;
+	} else if (indio_dev->modes & INDIO_BUFFER_HARDWARE) {
+		config->mode = INDIO_BUFFER_HARDWARE;
+	} else if (indio_dev->modes & INDIO_BUFFER_SOFTWARE) {
+		config->mode = INDIO_BUFFER_SOFTWARE;
+	} else {
+		/* Can only occur on first buffer */
+		if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
+			dev_dbg(&indio_dev->dev, "Buffer not started: no trigger\n");
+		return -EINVAL;
+	}
+
+	/* What scan mask do we actually have? */
+	compound_mask = kcalloc(BITS_TO_LONGS(indio_dev->masklength),
+				sizeof(long), GFP_KERNEL);
+	if (compound_mask == NULL)
+		return -ENOMEM;
+
+	scan_timestamp = false;
+
+	list_for_each_entry(buffer, &indio_dev->buffer_list, buffer_list) {
+		if (buffer == remove_buffer)
+			continue;
+		bitmap_or(compound_mask, compound_mask, buffer->scan_mask,
+			  indio_dev->masklength);
+		scan_timestamp |= buffer->scan_timestamp;
+	}
+
+	if (insert_buffer) {
+		bitmap_or(compound_mask, compound_mask,
+			  insert_buffer->scan_mask, indio_dev->masklength);
+		scan_timestamp |= insert_buffer->scan_timestamp;
+	}
+
+	if (indio_dev->available_scan_masks) {
+		scan_mask = iio_scan_mask_match(indio_dev->available_scan_masks,
+				    indio_dev->masklength,
+				    compound_mask);
+		kfree(compound_mask);
+		if (scan_mask == NULL)
+			return -EINVAL;
+	} else {
+	    scan_mask = compound_mask;
+	}
+
+	config->scan_bytes = iio_compute_scan_bytes(indio_dev,
+				    scan_mask, scan_timestamp);
+	config->scan_mask = scan_mask;
+	config->scan_timestamp = scan_timestamp;
+
+	return 0;
+}
+
 static int __iio_update_buffers(struct iio_dev *indio_dev,
 		       struct iio_buffer *insert_buffer,
 		       struct iio_buffer *remove_buffer)
 {
 	int ret;
-	int success = 0;
-	struct iio_buffer *buffer;
-	unsigned long *compound_mask;
 	const unsigned long *old_mask;
+	struct iio_device_config new_config;
+
+	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
+		&new_config);
+	if (ret)
+		return ret;
 
 	if (insert_buffer) {
 		ret = iio_buffer_request_update(indio_dev, insert_buffer);
 		if (ret)
-			return ret;
+			goto err_free_config;
 	}
 
 	/* Wind down existing buffers - iff there are any */
@@ -624,13 +708,13 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		if (indio_dev->setup_ops->predisable) {
 			ret = indio_dev->setup_ops->predisable(indio_dev);
 			if (ret)
-				return ret;
+				goto err_free_config;
 		}
 
 		if (indio_dev->setup_ops->postdisable) {
 			ret = indio_dev->setup_ops->postdisable(indio_dev);
 			if (ret)
-				return ret;
+				goto err_free_config;
 		}
 
 		indio_dev->currentmode = INDIO_DIRECT_MODE;
@@ -652,59 +736,10 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		return 0;
 	}
 
-	/* What scan mask do we actually have? */
-	compound_mask = kcalloc(BITS_TO_LONGS(indio_dev->masklength),
-				sizeof(long), GFP_KERNEL);
-	if (compound_mask == NULL) {
-		iio_free_scan_mask(indio_dev, old_mask);
-		return -ENOMEM;
-	}
-	indio_dev->scan_timestamp = 0;
-
-	list_for_each_entry(buffer, &indio_dev->buffer_list, buffer_list) {
-		bitmap_or(compound_mask, compound_mask, buffer->scan_mask,
-			  indio_dev->masklength);
-		indio_dev->scan_timestamp |= buffer->scan_timestamp;
-	}
-	if (indio_dev->available_scan_masks) {
-		indio_dev->active_scan_mask =
-			iio_scan_mask_match(indio_dev->available_scan_masks,
-					    indio_dev->masklength,
-					    compound_mask);
-		kfree(compound_mask);
-		if (indio_dev->active_scan_mask == NULL) {
-			/*
-			 * Roll back.
-			 * Note can only occur when adding a buffer.
-			 */
-			iio_buffer_deactivate(insert_buffer);
-			if (old_mask) {
-				indio_dev->active_scan_mask = old_mask;
-				success = -EINVAL;
-			}
-			else {
-				ret = -EINVAL;
-				return ret;
-			}
-		}
-	} else {
-		indio_dev->active_scan_mask = compound_mask;
-	}
-
-	/* Definitely possible for devices to support both of these. */
-	if ((indio_dev->modes & INDIO_BUFFER_TRIGGERED) && indio_dev->trig) {
-		indio_dev->currentmode = INDIO_BUFFER_TRIGGERED;
-	} else if (indio_dev->modes & INDIO_BUFFER_HARDWARE) {
-		indio_dev->currentmode = INDIO_BUFFER_HARDWARE;
-	} else if (indio_dev->modes & INDIO_BUFFER_SOFTWARE) {
-		indio_dev->currentmode = INDIO_BUFFER_SOFTWARE;
-	} else { /* Should never be reached */
-		/* Can only occur on first buffer */
-		if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
-			dev_dbg(&indio_dev->dev, "Buffer not started: no trigger\n");
-		ret = -EINVAL;
-		goto error_remove_inserted;
-	}
+	indio_dev->currentmode = new_config.mode;
+	indio_dev->active_scan_mask = new_config.scan_mask;
+	indio_dev->scan_timestamp = new_config.scan_timestamp;
+	indio_dev->scan_bytes = new_config.scan_bytes;
 
 	iio_update_demux(indio_dev);
 
@@ -717,10 +752,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 			goto error_remove_inserted;
 		}
 	}
-	indio_dev->scan_bytes =
-		iio_compute_scan_bytes(indio_dev,
-				       indio_dev->active_scan_mask,
-				       indio_dev->scan_timestamp);
+
 	if (indio_dev->info->update_scan_mode) {
 		ret = indio_dev->info
 			->update_scan_mode(indio_dev,
@@ -744,7 +776,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 
 	iio_free_scan_mask(indio_dev, old_mask);
 
-	return success;
+	return 0;
 
 error_run_postdisable:
 	if (indio_dev->setup_ops->postdisable)
@@ -756,6 +788,10 @@ error_remove_inserted:
 	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
 	indio_dev->active_scan_mask = old_mask;
 	return ret;
+
+err_free_config:
+	iio_free_scan_mask(indio_dev, new_config.scan_mask);
+	return ret;
 }
 
 int iio_update_buffers(struct iio_dev *indio_dev,
-- 
1.8.0



* [PATCH 6/8] iio: __iio_update_buffers: Split enable and disable path into helper functions
  2015-05-13 14:04 [PATCH 0/8] iio: Refactor __iio_update_buffers() Lars-Peter Clausen
                   ` (4 preceding siblings ...)
  2015-05-13 14:04 ` [PATCH 5/8] iio: __iio_update_buffers: Verify configuration before starting to apply it Lars-Peter Clausen
@ 2015-05-13 14:04 ` Lars-Peter Clausen
  2015-05-17  9:17   ` Jonathan Cameron
  2015-05-13 14:04 ` [PATCH 7/8] iio: __iio_update_buffers: Leave device in sane state on error Lars-Peter Clausen
  2015-05-13 14:04 ` [PATCH 8/8] DO NOT APPLY! __iio_update_buffers error path testing Lars-Peter Clausen
  7 siblings, 1 reply; 18+ messages in thread
From: Lars-Peter Clausen @ 2015-05-13 14:04 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald
  Cc: linux-iio, Lars-Peter Clausen

__iio_update_buffers is already a rather large function with many different
error paths and it is going to get even larger. This patch factors out the
device enable and device disable paths into separate helper functions.

The patch also re-implements iio_disable_all_buffers() using the new
iio_disable_buffers() function, removing a fair bit of redundant code.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
---
 drivers/iio/industrialio-buffer.c | 174 +++++++++++++++++++++-----------------
 1 file changed, 95 insertions(+), 79 deletions(-)

diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index 68e2669..dd04e35 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -539,29 +539,6 @@ static void iio_buffer_deactivate(struct iio_buffer *buffer)
 	iio_buffer_put(buffer);
 }
 
-void iio_disable_all_buffers(struct iio_dev *indio_dev)
-{
-	struct iio_buffer *buffer, *_buffer;
-
-	if (list_empty(&indio_dev->buffer_list))
-		return;
-
-	if (indio_dev->setup_ops->predisable)
-		indio_dev->setup_ops->predisable(indio_dev);
-
-	list_for_each_entry_safe(buffer, _buffer,
-			&indio_dev->buffer_list, buffer_list)
-		iio_buffer_deactivate(buffer);
-
-	if (indio_dev->setup_ops->postdisable)
-		indio_dev->setup_ops->postdisable(indio_dev);
-
-	indio_dev->currentmode = INDIO_DIRECT_MODE;
-
-	if (indio_dev->available_scan_masks == NULL)
-		kfree(indio_dev->active_scan_mask);
-}
-
 static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev,
 	struct iio_buffer *buffer)
 {
@@ -684,62 +661,40 @@ static int iio_verify_update(struct iio_dev *indio_dev,
 	return 0;
 }
 
-static int __iio_update_buffers(struct iio_dev *indio_dev,
-		       struct iio_buffer *insert_buffer,
-		       struct iio_buffer *remove_buffer)
+static int iio_disable_buffers(struct iio_dev *indio_dev)
 {
 	int ret;
-	const unsigned long *old_mask;
-	struct iio_device_config new_config;
 
-	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
-		&new_config);
-	if (ret)
-		return ret;
+	/* Wind down existing buffers - iff there are any */
+	if (list_empty(&indio_dev->buffer_list))
+		return 0;
 
-	if (insert_buffer) {
-		ret = iio_buffer_request_update(indio_dev, insert_buffer);
+	if (indio_dev->setup_ops->predisable) {
+		ret = indio_dev->setup_ops->predisable(indio_dev);
 		if (ret)
-			goto err_free_config;
+			return ret;
 	}
 
-	/* Wind down existing buffers - iff there are any */
-	if (!list_empty(&indio_dev->buffer_list)) {
-		if (indio_dev->setup_ops->predisable) {
-			ret = indio_dev->setup_ops->predisable(indio_dev);
-			if (ret)
-				goto err_free_config;
-		}
-
-		if (indio_dev->setup_ops->postdisable) {
-			ret = indio_dev->setup_ops->postdisable(indio_dev);
-			if (ret)
-				goto err_free_config;
-		}
-
-		indio_dev->currentmode = INDIO_DIRECT_MODE;
+	if (indio_dev->setup_ops->postdisable) {
+		ret = indio_dev->setup_ops->postdisable(indio_dev);
+		if (ret)
+			return ret;
 	}
-	/* Keep a copy of current setup to allow roll back */
-	old_mask = indio_dev->active_scan_mask;
-	if (!indio_dev->available_scan_masks)
-		indio_dev->active_scan_mask = NULL;
 
-	if (remove_buffer)
-		iio_buffer_deactivate(remove_buffer);
-	if (insert_buffer)
-		iio_buffer_activate(indio_dev, insert_buffer);
+	indio_dev->currentmode = INDIO_DIRECT_MODE;
 
-	/* If no buffers in list, we are done */
-	if (list_empty(&indio_dev->buffer_list)) {
-		indio_dev->currentmode = INDIO_DIRECT_MODE;
-		iio_free_scan_mask(indio_dev, old_mask);
-		return 0;
-	}
+	return 0;
+}
 
-	indio_dev->currentmode = new_config.mode;
-	indio_dev->active_scan_mask = new_config.scan_mask;
-	indio_dev->scan_timestamp = new_config.scan_timestamp;
-	indio_dev->scan_bytes = new_config.scan_bytes;
+static int iio_enable_buffers(struct iio_dev *indio_dev,
+	struct iio_device_config *config)
+{
+	int ret;
+
+	indio_dev->currentmode = config->mode;
+	indio_dev->active_scan_mask = config->scan_mask;
+	indio_dev->scan_timestamp = config->scan_timestamp;
+	indio_dev->scan_bytes = config->scan_bytes;
 
 	iio_update_demux(indio_dev);
 
@@ -749,7 +704,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		if (ret) {
 			dev_dbg(&indio_dev->dev,
 			       "Buffer not started: buffer preenable failed (%d)\n", ret);
-			goto error_remove_inserted;
+			goto err_undo_config;
 		}
 	}
 
@@ -761,7 +716,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 			dev_dbg(&indio_dev->dev,
 				"Buffer not started: update scan mode failed (%d)\n",
 				ret);
-			goto error_run_postdisable;
+			goto err_run_postdisable;
 		}
 	}
 
@@ -770,24 +725,72 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		if (ret) {
 			dev_dbg(&indio_dev->dev,
 			       "Buffer not started: postenable failed (%d)\n", ret);
-			goto error_run_postdisable;
+			goto err_run_postdisable;
 		}
 	}
 
-	iio_free_scan_mask(indio_dev, old_mask);
-
 	return 0;
 
-error_run_postdisable:
+err_run_postdisable:
 	if (indio_dev->setup_ops->postdisable)
 		indio_dev->setup_ops->postdisable(indio_dev);
-error_remove_inserted:
+err_undo_config:
 	indio_dev->currentmode = INDIO_DIRECT_MODE;
-	if (insert_buffer)
-		iio_buffer_deactivate(insert_buffer);
-	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
-	indio_dev->active_scan_mask = old_mask;
+	indio_dev->active_scan_mask = NULL;
+
 	return ret;
+}
+
+static int __iio_update_buffers(struct iio_dev *indio_dev,
+		       struct iio_buffer *insert_buffer,
+		       struct iio_buffer *remove_buffer)
+{
+	int ret;
+	const unsigned long *old_mask;
+	struct iio_device_config new_config;
+
+	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
+		&new_config);
+	if (ret)
+		return ret;
+
+	if (insert_buffer) {
+		ret = iio_buffer_request_update(indio_dev, insert_buffer);
+		if (ret)
+			goto err_free_config;
+	}
+
+	/* Keep a copy of current setup to allow roll back */
+	old_mask = indio_dev->active_scan_mask;
+	indio_dev->active_scan_mask = NULL;
+
+	ret = iio_disable_buffers(indio_dev);
+	if (ret) {
+		iio_free_scan_mask(indio_dev, old_mask);
+		goto err_free_config;
+	}
+
+	if (remove_buffer)
+		iio_buffer_deactivate(remove_buffer);
+	if (insert_buffer)
+		iio_buffer_activate(indio_dev, insert_buffer);
+
+	/* If no buffers in list, we are done */
+	if (list_empty(&indio_dev->buffer_list)) {
+		iio_free_scan_mask(indio_dev, old_mask);
+		return 0;
+	}
+
+	ret = iio_enable_buffers(indio_dev, &new_config);
+	if (ret) {
+		if (insert_buffer)
+			iio_buffer_deactivate(insert_buffer);
+		indio_dev->active_scan_mask = old_mask;
+		goto err_free_config;
+	}
+
+	iio_free_scan_mask(indio_dev, old_mask);
+	return 0;
 
 err_free_config:
 	iio_free_scan_mask(indio_dev, new_config.scan_mask);
@@ -832,6 +835,19 @@ out_unlock:
 }
 EXPORT_SYMBOL_GPL(iio_update_buffers);
 
+void iio_disable_all_buffers(struct iio_dev *indio_dev)
+{
+	struct iio_buffer *buffer, *_buffer;
+
+	iio_disable_buffers(indio_dev);
+	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
+	indio_dev->active_scan_mask = NULL;
+
+	list_for_each_entry_safe(buffer, _buffer,
+			&indio_dev->buffer_list, buffer_list)
+		iio_buffer_deactivate(buffer);
+}
+
 static ssize_t iio_buffer_store_enable(struct device *dev,
 				       struct device_attribute *attr,
 				       const char *buf,
-- 
1.8.0



* [PATCH 7/8] iio: __iio_update_buffers: Leave device in sane state on error
  2015-05-13 14:04 [PATCH 0/8] iio: Refactor __iio_update_buffers() Lars-Peter Clausen
                   ` (5 preceding siblings ...)
  2015-05-13 14:04 ` [PATCH 6/8] iio: __iio_update_buffers: Split enable and disable path into helper functions Lars-Peter Clausen
@ 2015-05-13 14:04 ` Lars-Peter Clausen
  2015-05-17  9:22   ` Jonathan Cameron
  2015-05-13 14:04 ` [PATCH 8/8] DO NOT APPLY! __iio_update_buffers error path testing Lars-Peter Clausen
  7 siblings, 1 reply; 18+ messages in thread
From: Lars-Peter Clausen @ 2015-05-13 14:04 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald
  Cc: linux-iio, Lars-Peter Clausen

Currently, when something goes wrong at some step while disabling the
buffers, we immediately abort. This has the effect that the enable/disable
calls are no longer balanced. So make sure that even if one step in the
disable sequence fails the other steps are still executed.

The other issue is that when either enable or disable fails, buffers that
were active at that time stay active while the device itself is disabled.
This leaves things in an inconsistent state and can cause unbalanced
enable/disable calls. Furthermore, when enable fails we restore the old scan
mask but still keep things disabled.

Given that the configuration was verified earlier and is valid at the point
where we try to enable/disable, the most likely reason for failure is a
communication failure with the device or maybe an out-of-memory situation.
There is not really a good recovery strategy in such a case, so it makes
sense to leave the device disabled, but we should still leave it in a
consistent state.

What the patch does if disable/enable fails is deactivate all buffers and
make sure that the device is in the same state as if all buffers had been
manually disabled.
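
The disable path then follows a simple "run every step, remember the first
error" pattern. A consolidated sketch of the intended logic of
iio_disable_buffers() after this patch:

static int iio_disable_buffers(struct iio_dev *indio_dev)
{
	int ret = 0;
	int ret2;

	if (list_empty(&indio_dev->buffer_list))
		return 0;

	/* Run every teardown step even if an earlier one failed */
	if (indio_dev->setup_ops->predisable) {
		ret2 = indio_dev->setup_ops->predisable(indio_dev);
		if (ret2 && !ret)
			ret = ret2;
	}

	if (indio_dev->setup_ops->postdisable) {
		ret2 = indio_dev->setup_ops->postdisable(indio_dev);
		if (ret2 && !ret)
			ret = ret2;
	}

	indio_dev->currentmode = INDIO_DIRECT_MODE;
	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
	indio_dev->active_scan_mask = NULL;

	/* First error encountered, or 0 if everything went fine */
	return ret;
}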

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
---
 drivers/iio/industrialio-buffer.c | 79 ++++++++++++++++++++++-----------------
 1 file changed, 44 insertions(+), 35 deletions(-)

diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index dd04e35..3a11a2a 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -539,6 +539,15 @@ static void iio_buffer_deactivate(struct iio_buffer *buffer)
 	iio_buffer_put(buffer);
 }
 
+static void iio_buffer_deactivate_all(struct iio_dev *indio_dev)
+{
+	struct iio_buffer *buffer, *_buffer;
+
+	list_for_each_entry_safe(buffer, _buffer,
+			&indio_dev->buffer_list, buffer_list)
+		iio_buffer_deactivate(buffer);
+}
+
 static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev,
 	struct iio_buffer *buffer)
 {
@@ -663,27 +672,37 @@ static int iio_verify_update(struct iio_dev *indio_dev,
 
 static int iio_disable_buffers(struct iio_dev *indio_dev)
 {
-	int ret;
+	int ret = 0;
+	int ret2;
 
 	/* Wind down existing buffers - iff there are any */
 	if (list_empty(&indio_dev->buffer_list))
 		return 0;
 
+	/*
+	 * If things go wrong at some step in disable we still need to continue
+	 * to perform the other steps, otherwise we leave the device in a
+	 * inconsistent state. We return the error code for the first error we
+	 * encountered.
+	 */
+
 	if (indio_dev->setup_ops->predisable) {
-		ret = indio_dev->setup_ops->predisable(indio_dev);
-		if (ret)
-			return ret;
+		ret2 = indio_dev->setup_ops->predisable(indio_dev);
+		if (ret2 && !ret)
+			ret = ret2;
 	}
 
 	if (indio_dev->setup_ops->postdisable) {
 		ret = indio_dev->setup_ops->postdisable(indio_dev);
-		if (ret)
-			return ret;
+		if (ret2 && !ret)
+			ret = ret2;
 	}
 
 	indio_dev->currentmode = INDIO_DIRECT_MODE;
+	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
+	indio_dev->active_scan_mask = NULL;
 
-	return 0;
+	return ret;
 }
 
 static int iio_enable_buffers(struct iio_dev *indio_dev,
@@ -745,9 +764,8 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		       struct iio_buffer *insert_buffer,
 		       struct iio_buffer *remove_buffer)
 {
-	int ret;
-	const unsigned long *old_mask;
 	struct iio_device_config new_config;
+	int ret;
 
 	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
 		&new_config);
@@ -760,15 +778,9 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 			goto err_free_config;
 	}
 
-	/* Keep a copy of current setup to allow roll back */
-	old_mask = indio_dev->active_scan_mask;
-	indio_dev->active_scan_mask = NULL;
-
 	ret = iio_disable_buffers(indio_dev);
-	if (ret) {
-		iio_free_scan_mask(indio_dev, old_mask);
-		goto err_free_config;
-	}
+	if (ret)
+		goto err_deactivate_all;
 
 	if (remove_buffer)
 		iio_buffer_deactivate(remove_buffer);
@@ -776,22 +788,26 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 		iio_buffer_activate(indio_dev, insert_buffer);
 
 	/* If no buffers in list, we are done */
-	if (list_empty(&indio_dev->buffer_list)) {
-		iio_free_scan_mask(indio_dev, old_mask);
+	if (list_empty(&indio_dev->buffer_list))
 		return 0;
-	}
 
 	ret = iio_enable_buffers(indio_dev, &new_config);
-	if (ret) {
-		if (insert_buffer)
-			iio_buffer_deactivate(insert_buffer);
-		indio_dev->active_scan_mask = old_mask;
-		goto err_free_config;
-	}
+	if (ret)
+		goto err_deactivate_all;
 
-	iio_free_scan_mask(indio_dev, old_mask);
 	return 0;
 
+err_deactivate_all:
+	/*
+	 * We've already verified that the config is valid earlier. If things go
+	 * wrong in either enable or disable the most likely reason is an IO
+	 * error from the device. In this case there is no good recovery
+	 * strategy. Just make sure to disable everything and leave the device
+	 * in a sane state.  With a bit of luck the device might come back to
+	 * life again later and userspace can try again.
+	 */
+	iio_buffer_deactivate_all(indio_dev);
+
 err_free_config:
 	iio_free_scan_mask(indio_dev, new_config.scan_mask);
 	return ret;
@@ -837,15 +853,8 @@ EXPORT_SYMBOL_GPL(iio_update_buffers);
 
 void iio_disable_all_buffers(struct iio_dev *indio_dev)
 {
-	struct iio_buffer *buffer, *_buffer;
-
 	iio_disable_buffers(indio_dev);
-	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
-	indio_dev->active_scan_mask = NULL;
-
-	list_for_each_entry_safe(buffer, _buffer,
-			&indio_dev->buffer_list, buffer_list)
-		iio_buffer_deactivate(buffer);
+	iio_buffer_deactivate_all(indio_dev);
 }
 
 static ssize_t iio_buffer_store_enable(struct device *dev,
-- 
1.8.0



* [PATCH 8/8] DO NOT APPLY! __iio_update_buffers error path testing
  2015-05-13 14:04 [PATCH 0/8] iio: Refactor __iio_update_buffers() Lars-Peter Clausen
                   ` (6 preceding siblings ...)
  2015-05-13 14:04 ` [PATCH 7/8] iio: __iio_update_buffers: Leave device in sane state on error Lars-Peter Clausen
@ 2015-05-13 14:04 ` Lars-Peter Clausen
  7 siblings, 0 replies; 18+ messages in thread
From: Lars-Peter Clausen @ 2015-05-13 14:04 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald
  Cc: linux-iio, Lars-Peter Clausen

This patch is not meant to be applied. This is what I've been using to test
the error paths. I've included it in case somebody wants to re-test things
or future changes to __iio_update_buffers.

For testing I've basically kept the following script running for about 30
minutes:
	while true; do
		echo 1 > buffer/enable
		sleep 1
		echo 0 > buffer/enable
		sleep 0.1
	done;

After that I set test_err_path to 0 and things worked again as if
nothing happened. kmemleak didn't complain either.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
---
 drivers/iio/industrialio-buffer.c | 56 ++++++++++++++++++++++++---------------
 1 file changed, 35 insertions(+), 21 deletions(-)

diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
index 3a11a2a..13c45f6 100644
--- a/drivers/iio/industrialio-buffer.c
+++ b/drivers/iio/industrialio-buffer.c
@@ -13,6 +13,9 @@
  * - Better memory allocation techniques?
  * - Alternative access techniques?
  */
+
+#define DEBUG
+
 #include <linux/kernel.h>
 #include <linux/export.h>
 #include <linux/device.h>
@@ -27,6 +30,29 @@
 #include <linux/iio/sysfs.h>
 #include <linux/iio/buffer.h>
 
+#include <linux/module.h>
+#include <linux/random.h>
+
+static int test_err_path = 1;
+module_param(test_err_path, int, 0644);
+MODULE_PARM_DESC(test_err_path, "test");
+
+static int _random_err(const char *func)
+{
+	if (test_err_path && (prandom_u32() & 0xf) == 0) {
+		pr_info("Injected error for %s\n", func);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+#define random_err(x) (_random_err(#x) ?: (x))
+#define random_err_callback(x, ...) \
+	(_random_err(#x) ?: (x ? x(__VA_ARGS__) : 0))
+#define random_err_callback2(x, ...) \
+	((x ? (x(__VA_ARGS__) ?: _random_err(#x)) : _random_err(#x)))
+
 static const char * const iio_endian_prefix[] = {
 	[IIO_BE] = "be",
 	[IIO_LE] = "le",
@@ -568,15 +594,13 @@ static int iio_buffer_request_update(struct iio_dev *indio_dev,
 	int ret;
 
 	iio_buffer_update_bytes_per_datum(indio_dev, buffer);
-	if (buffer->access->request_update) {
-		ret = buffer->access->request_update(buffer);
+		ret = random_err_callback(buffer->access->request_update, buffer);
 		if (ret) {
 			dev_dbg(&indio_dev->dev,
 			       "Buffer not started: buffer parameter update failed (%d)\n",
 				ret);
 			return ret;
 		}
-	}
 
 	return 0;
 }
@@ -686,17 +710,13 @@ static int iio_disable_buffers(struct iio_dev *indio_dev)
 	 * encountered.
 	 */
 
-	if (indio_dev->setup_ops->predisable) {
-		ret2 = indio_dev->setup_ops->predisable(indio_dev);
+		ret2 = random_err_callback2(indio_dev->setup_ops->predisable, indio_dev);
 		if (ret2 && !ret)
 			ret = ret2;
-	}
 
-	if (indio_dev->setup_ops->postdisable) {
-		ret = indio_dev->setup_ops->postdisable(indio_dev);
+		ret2 = random_err_callback2(indio_dev->setup_ops->postdisable, indio_dev);
 		if (ret2 && !ret)
 			ret = ret2;
-	}
 
 	indio_dev->currentmode = INDIO_DIRECT_MODE;
 	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
@@ -718,18 +738,15 @@ static int iio_enable_buffers(struct iio_dev *indio_dev,
 	iio_update_demux(indio_dev);
 
 	/* Wind up again */
-	if (indio_dev->setup_ops->preenable) {
-		ret = indio_dev->setup_ops->preenable(indio_dev);
+		ret = random_err_callback(indio_dev->setup_ops->preenable, indio_dev);
 		if (ret) {
 			dev_dbg(&indio_dev->dev,
 			       "Buffer not started: buffer preenable failed (%d)\n", ret);
 			goto err_undo_config;
 		}
-	}
 
-	if (indio_dev->info->update_scan_mode) {
-		ret = indio_dev->info
-			->update_scan_mode(indio_dev,
+		ret = random_err_callback(indio_dev->info
+			->update_scan_mode, indio_dev,
 					   indio_dev->active_scan_mask);
 		if (ret < 0) {
 			dev_dbg(&indio_dev->dev,
@@ -737,16 +754,13 @@ static int iio_enable_buffers(struct iio_dev *indio_dev,
 				ret);
 			goto err_run_postdisable;
 		}
-	}
 
-	if (indio_dev->setup_ops->postenable) {
-		ret = indio_dev->setup_ops->postenable(indio_dev);
+		ret = random_err_callback(indio_dev->setup_ops->postenable, indio_dev);
 		if (ret) {
 			dev_dbg(&indio_dev->dev,
 			       "Buffer not started: postenable failed (%d)\n", ret);
 			goto err_run_postdisable;
 		}
-	}
 
 	return 0;
 
@@ -767,8 +781,8 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
 	struct iio_device_config new_config;
 	int ret;
 
-	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
-		&new_config);
+	ret = random_err(iio_verify_update(indio_dev, insert_buffer, remove_buffer,
+		&new_config));
 	if (ret)
 		return ret;
 
-- 
1.8.0



* Re: [PATCH 1/8] iio: Replace printk in __iio_update_buffers with dev_dbg
  2015-05-13 14:04 ` [PATCH 1/8] iio: Replace printk in __iio_update_buffers with dev_dbg Lars-Peter Clausen
@ 2015-05-17  8:42   ` Jonathan Cameron
  0 siblings, 0 replies; 18+ messages in thread
From: Jonathan Cameron @ 2015-05-17  8:42 UTC (permalink / raw)
  To: Lars-Peter Clausen, Hartmut Knaack, Peter Meerwald; +Cc: linux-iio

On 13/05/15 15:04, Lars-Peter Clausen wrote:
> While more verbose error messages are useful for debugging, when running in
> non-debug mode we should not put them into the kernel log for normal errors
> that are already reported to the application via the error code.
> 
> Otherwise application authors might expect this to be part of the ABI and
> scan the kernel log to get the error, which would itself be rather error
> prone since there is no direct mapping between an operation and the error
> message, so it is impossible to tell which error message belongs to which
> error.
Let's just hope no one is using these already!
> 
> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Applied to the togreg branch of iio.git, initially pushed out as testing (etc etc).

> ---
>  drivers/iio/industrialio-buffer.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
> index df919f4..1f91031 100644
> --- a/drivers/iio/industrialio-buffer.c
> +++ b/drivers/iio/industrialio-buffer.c
> @@ -663,7 +663,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  	if (indio_dev->setup_ops->preenable) {
>  		ret = indio_dev->setup_ops->preenable(indio_dev);
>  		if (ret) {
> -			printk(KERN_ERR
> +			dev_dbg(&indio_dev->dev,
>  			       "Buffer not started: buffer preenable failed (%d)\n", ret);
>  			goto error_remove_inserted;
>  		}
> @@ -677,7 +677,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		if (buffer->access->request_update) {
>  			ret = buffer->access->request_update(buffer);
>  			if (ret) {
> -				printk(KERN_INFO
> +				dev_dbg(&indio_dev->dev,
>  				       "Buffer not started: buffer parameter update failed (%d)\n", ret);
>  				goto error_run_postdisable;
>  			}
> @@ -688,7 +688,9 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  			->update_scan_mode(indio_dev,
>  					   indio_dev->active_scan_mask);
>  		if (ret < 0) {
> -			printk(KERN_INFO "Buffer not started: update scan mode failed (%d)\n", ret);
> +			dev_dbg(&indio_dev->dev,
> +				"Buffer not started: update scan mode failed (%d)\n",
> +				ret);
>  			goto error_run_postdisable;
>  		}
>  	}
> @@ -702,7 +704,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  	} else { /* Should never be reached */
>  		/* Can only occur on first buffer */
>  		if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
> -			pr_info("Buffer not started: no trigger\n");
> +			dev_dbg(&indio_dev->dev, "Buffer not started: no trigger\n");
>  		ret = -EINVAL;
>  		goto error_run_postdisable;
>  	}
> @@ -710,7 +712,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  	if (indio_dev->setup_ops->postenable) {
>  		ret = indio_dev->setup_ops->postenable(indio_dev);
>  		if (ret) {
> -			printk(KERN_INFO
> +			dev_dbg(&indio_dev->dev,
>  			       "Buffer not started: postenable failed (%d)\n", ret);
>  			indio_dev->currentmode = INDIO_DIRECT_MODE;
>  			if (indio_dev->setup_ops->postdisable)
> 



* Re: [PATCH 2/8] iio: __iio_update_buffers: Slightly refactor scan mask memory management
  2015-05-13 14:04 ` [PATCH 2/8] iio: __iio_update_buffers: Slightly refactor scan mask memory management Lars-Peter Clausen
@ 2015-05-17  8:48   ` Jonathan Cameron
  0 siblings, 0 replies; 18+ messages in thread
From: Jonathan Cameron @ 2015-05-17  8:48 UTC (permalink / raw)
  To: Lars-Peter Clausen, Hartmut Knaack, Peter Meerwald; +Cc: linux-iio

On 13/05/15 15:04, Lars-Peter Clausen wrote:
> Add a small helper function iio_free_scan_mask() that takes a mask and
> frees its memory if the scan masks for the device are dynamically
> allocated, otherwise does nothing. This means we don't have to open-code
> the same check over and over again in __iio_update_buffers.
> 
> Also free compound_mask as soon as we are done using it. This constrains its
> usage to a specific region of the function, which will make further
> refactoring and splitting the function into smaller sub-parts easier.
> 
> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Gah, even this little tidy up gave me a headache chasing compound_mask through
the various paths.  Looks good though and illustrates just how horrible this
function is!

Applied to the togreg branch of iio.git - initially pushed out as testing
for the autobuilders to play with it.

Thanks,

Jonathan

> ---
>  drivers/iio/industrialio-buffer.c | 23 +++++++++++++----------
>  1 file changed, 13 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
> index 1f91031..2afe3db 100644
> --- a/drivers/iio/industrialio-buffer.c
> +++ b/drivers/iio/industrialio-buffer.c
> @@ -575,6 +575,14 @@ static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev,
>  	buffer->access->set_bytes_per_datum(buffer, bytes);
>  }
>  
> +static void iio_free_scan_mask(struct iio_dev *indio_dev,
> +	const unsigned long *mask)
> +{
> +	/* If the mask is dynamically allocated free it, otherwise do nothing */
> +	if (!indio_dev->available_scan_masks)
> +		kfree(mask);
> +}
> +
>  static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		       struct iio_buffer *insert_buffer,
>  		       struct iio_buffer *remove_buffer)
> @@ -612,8 +620,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  	/* If no buffers in list, we are done */
>  	if (list_empty(&indio_dev->buffer_list)) {
>  		indio_dev->currentmode = INDIO_DIRECT_MODE;
> -		if (indio_dev->available_scan_masks == NULL)
> -			kfree(old_mask);
> +		iio_free_scan_mask(indio_dev, old_mask);
>  		return 0;
>  	}
>  
> @@ -621,8 +628,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  	compound_mask = kcalloc(BITS_TO_LONGS(indio_dev->masklength),
>  				sizeof(long), GFP_KERNEL);
>  	if (compound_mask == NULL) {
> -		if (indio_dev->available_scan_masks == NULL)
> -			kfree(old_mask);
> +		iio_free_scan_mask(indio_dev, old_mask);
>  		return -ENOMEM;
>  	}
>  	indio_dev->scan_timestamp = 0;
> @@ -637,6 +643,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  			iio_scan_mask_match(indio_dev->available_scan_masks,
>  					    indio_dev->masklength,
>  					    compound_mask);
> +		kfree(compound_mask);
>  		if (indio_dev->active_scan_mask == NULL) {
>  			/*
>  			 * Roll back.
> @@ -648,7 +655,6 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  				success = -EINVAL;
>  			}
>  			else {
> -				kfree(compound_mask);
>  				ret = -EINVAL;
>  				return ret;
>  			}
> @@ -721,10 +727,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		}
>  	}
>  
> -	if (indio_dev->available_scan_masks)
> -		kfree(compound_mask);
> -	else
> -		kfree(old_mask);
> +	iio_free_scan_mask(indio_dev, old_mask);
>  
>  	return success;
>  
> @@ -736,8 +739,8 @@ error_run_postdisable:
>  error_remove_inserted:
>  	if (insert_buffer)
>  		iio_buffer_deactivate(insert_buffer);
> +	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
>  	indio_dev->active_scan_mask = old_mask;
> -	kfree(compound_mask);
>  	return ret;
>  }
>  
> 



* Re: [PATCH 3/8] iio: __iio_update_buffers: Perform request_update() only for new buffers
  2015-05-13 14:04 ` [PATCH 3/8] iio: __iio_update_buffers: Perform request_update() only for new buffers Lars-Peter Clausen
@ 2015-05-17  9:01   ` Jonathan Cameron
  0 siblings, 0 replies; 18+ messages in thread
From: Jonathan Cameron @ 2015-05-17  9:01 UTC (permalink / raw)
  To: Lars-Peter Clausen, Hartmut Knaack, Peter Meerwald; +Cc: linux-iio

On 13/05/15 15:04, Lars-Peter Clausen wrote:
> We only have to call the request_update() callback for a newly inserted
> buffer. The configuration of the already previously active buffers will not
> have changed.
> 
> This also allows us to move the request_update() call to the beginning of
> __iio_update_buffers(), before any currently active buffers are stopped.
> This makes the error handling a lot easier since no changes were made to
> the buffer list and no rollback needs to be performed.
> 
> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Good spot. If I were guessing I would think this probably dates back to early stages
of implementing the demux when it was possible for buffers to change as a result
of others being added.  Mind you that was probably fixed before I even posted
this beast ;)

Whilst looking at this I noticed that currently the call to request_update, in
the event of not changing the size of the kfifo, calls kfifo_reset_out, which
seems undesirable if we are not actually modifying this buffer (your change
here prevents this anyway).

Anyhow, I think this side effect is desirable and I doubt anyone will notice given
the rather small set of systems actually using this code in the way that would hit
it.

Hence applied to the togreg branch of iio.git.

Thanks,

Jonathan
> ---
>  drivers/iio/industrialio-buffer.c | 36 +++++++++++++++++++++++++-----------
>  1 file changed, 25 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
> index 2afe3db..21ed316 100644
> --- a/drivers/iio/industrialio-buffer.c
> +++ b/drivers/iio/industrialio-buffer.c
> @@ -575,6 +575,25 @@ static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev,
>  	buffer->access->set_bytes_per_datum(buffer, bytes);
>  }
>  
> +static int iio_buffer_request_update(struct iio_dev *indio_dev,
> +	struct iio_buffer *buffer)
> +{
> +	int ret;
> +
> +	iio_buffer_update_bytes_per_datum(indio_dev, buffer);
> +	if (buffer->access->request_update) {
> +		ret = buffer->access->request_update(buffer);
> +		if (ret) {
> +			dev_dbg(&indio_dev->dev,
> +			       "Buffer not started: buffer parameter update failed (%d)\n",
> +				ret);
> +			return ret;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
>  static void iio_free_scan_mask(struct iio_dev *indio_dev,
>  	const unsigned long *mask)
>  {
> @@ -593,6 +612,12 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  	unsigned long *compound_mask;
>  	const unsigned long *old_mask;
>  
> +	if (insert_buffer) {
> +		ret = iio_buffer_request_update(indio_dev, insert_buffer);
> +		if (ret)
> +			return ret;
> +	}
> +
>  	/* Wind down existing buffers - iff there are any */
>  	if (!list_empty(&indio_dev->buffer_list)) {
>  		if (indio_dev->setup_ops->predisable) {
> @@ -678,17 +703,6 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		iio_compute_scan_bytes(indio_dev,
>  				       indio_dev->active_scan_mask,
>  				       indio_dev->scan_timestamp);
> -	list_for_each_entry(buffer, &indio_dev->buffer_list, buffer_list) {
> -		iio_buffer_update_bytes_per_datum(indio_dev, buffer);
> -		if (buffer->access->request_update) {
> -			ret = buffer->access->request_update(buffer);
> -			if (ret) {
> -				dev_dbg(&indio_dev->dev,
> -				       "Buffer not started: buffer parameter update failed (%d)\n", ret);
> -				goto error_run_postdisable;
> -			}
> -		}
> -	}
>  	if (indio_dev->info->update_scan_mode) {
>  		ret = indio_dev->info
>  			->update_scan_mode(indio_dev,
> 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 4/8] iio: __iio_update_buffers: Update mode before preenable/after postdisable
  2015-05-13 14:04 ` [PATCH 4/8] iio: __iio_update_buffers: Update mode before preenable/after postdisable Lars-Peter Clausen
@ 2015-05-17  9:10   ` Jonathan Cameron
  2015-05-17 11:38     ` Lars-Peter Clausen
  0 siblings, 1 reply; 18+ messages in thread
From: Jonathan Cameron @ 2015-05-17  9:10 UTC (permalink / raw)
  To: Lars-Peter Clausen, Hartmut Knaack, Peter Meerwald; +Cc: linux-iio

On 13/05/15 15:04, Lars-Peter Clausen wrote:
> It is clear that we transition to INDIO_DIRECT_MODE when disabling the
> buffer(s) and it is also clear that we transition from INDIO_DIRECT_MODE
> when enabling the buffer(s). So leaving the currentmode field
> INDIO_DIRECT_MODE until after the preenable() callback and updating it to
> INDIO_DIRECT_MODE before the postdisable() callback doesn't add additional
> value. On the other hand some drivers will need to perform different
> actions depending on which mode the device is going to operate in/was
> operating in.
> 
> Moving the update of currentmode before preenable() and after postdisable()
> enables us to have drivers which perform mode dependent actions in those
> callbacks.
I'm not sure that the argument that drivers might do something that requires
knowledge of this state in preenable or post disable is terribly relevant
(as inherently the driver always knows what is going on in these callbacks!).

However, more interesting is the race conditions with drivers that cannot do
direct reads when in buffered mode. There are presumably still race conditions
in this region whatever we do here so it's up to the drivers to take appropriate
locks - I suspect many do not and hence might generate garbage readings if
a sysfs read is going on during the transition...

I suppose adding additional states to say we are transitioning modes might be
useful.. Worth the bother?

> 
> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
> ---
>  drivers/iio/industrialio-buffer.c | 44 +++++++++++++++++++--------------------
>  1 file changed, 22 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
> index 21ed316..8ecba2f 100644
> --- a/drivers/iio/industrialio-buffer.c
> +++ b/drivers/iio/industrialio-buffer.c
> @@ -553,10 +553,11 @@ void iio_disable_all_buffers(struct iio_dev *indio_dev)
>  			&indio_dev->buffer_list, buffer_list)
>  		iio_buffer_deactivate(buffer);
>  
> -	indio_dev->currentmode = INDIO_DIRECT_MODE;
>  	if (indio_dev->setup_ops->postdisable)
>  		indio_dev->setup_ops->postdisable(indio_dev);
>  
> +	indio_dev->currentmode = INDIO_DIRECT_MODE;
> +
>  	if (indio_dev->available_scan_masks == NULL)
>  		kfree(indio_dev->active_scan_mask);
>  }
> @@ -625,12 +626,14 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  			if (ret)
>  				return ret;
>  		}
> -		indio_dev->currentmode = INDIO_DIRECT_MODE;
> +
>  		if (indio_dev->setup_ops->postdisable) {
>  			ret = indio_dev->setup_ops->postdisable(indio_dev);
>  			if (ret)
>  				return ret;
>  		}
> +
> +		indio_dev->currentmode = INDIO_DIRECT_MODE;
>  	}
>  	/* Keep a copy of current setup to allow roll back */
>  	old_mask = indio_dev->active_scan_mask;
> @@ -688,6 +691,21 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		indio_dev->active_scan_mask = compound_mask;
>  	}
>  
> +	/* Definitely possible for devices to support both of these. */
> +	if ((indio_dev->modes & INDIO_BUFFER_TRIGGERED) && indio_dev->trig) {
> +		indio_dev->currentmode = INDIO_BUFFER_TRIGGERED;
> +	} else if (indio_dev->modes & INDIO_BUFFER_HARDWARE) {
> +		indio_dev->currentmode = INDIO_BUFFER_HARDWARE;
> +	} else if (indio_dev->modes & INDIO_BUFFER_SOFTWARE) {
> +		indio_dev->currentmode = INDIO_BUFFER_SOFTWARE;
> +	} else { /* Should never be reached */
> +		/* Can only occur on first buffer */
> +		if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
> +			dev_dbg(&indio_dev->dev, "Buffer not started: no trigger\n");
> +		ret = -EINVAL;
> +		goto error_remove_inserted;
> +	}
> +
>  	iio_update_demux(indio_dev);
>  
>  	/* Wind up again */
> @@ -714,30 +732,13 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  			goto error_run_postdisable;
>  		}
>  	}
> -	/* Definitely possible for devices to support both of these. */
> -	if ((indio_dev->modes & INDIO_BUFFER_TRIGGERED) && indio_dev->trig) {
> -		indio_dev->currentmode = INDIO_BUFFER_TRIGGERED;
> -	} else if (indio_dev->modes & INDIO_BUFFER_HARDWARE) {
> -		indio_dev->currentmode = INDIO_BUFFER_HARDWARE;
> -	} else if (indio_dev->modes & INDIO_BUFFER_SOFTWARE) {
> -		indio_dev->currentmode = INDIO_BUFFER_SOFTWARE;
> -	} else { /* Should never be reached */
> -		/* Can only occur on first buffer */
> -		if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
> -			dev_dbg(&indio_dev->dev, "Buffer not started: no trigger\n");
> -		ret = -EINVAL;
> -		goto error_run_postdisable;
> -	}
>  
>  	if (indio_dev->setup_ops->postenable) {
>  		ret = indio_dev->setup_ops->postenable(indio_dev);
>  		if (ret) {
>  			dev_dbg(&indio_dev->dev,
>  			       "Buffer not started: postenable failed (%d)\n", ret);
> -			indio_dev->currentmode = INDIO_DIRECT_MODE;
> -			if (indio_dev->setup_ops->postdisable)
> -				indio_dev->setup_ops->postdisable(indio_dev);
> -			goto error_disable_all_buffers;
> +			goto error_run_postdisable;
>  		}
>  	}
>  
> @@ -745,12 +746,11 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  
>  	return success;
>  
> -error_disable_all_buffers:
> -	indio_dev->currentmode = INDIO_DIRECT_MODE;
>  error_run_postdisable:
>  	if (indio_dev->setup_ops->postdisable)
>  		indio_dev->setup_ops->postdisable(indio_dev);
>  error_remove_inserted:
> +	indio_dev->currentmode = INDIO_DIRECT_MODE;
>  	if (insert_buffer)
>  		iio_buffer_deactivate(insert_buffer);
>  	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
> 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 5/8] iio: __iio_update_buffers: Verify configuration before starting to apply it
  2015-05-13 14:04 ` [PATCH 5/8] iio: __iio_update_buffers: Verify configuration before starting to apply it Lars-Peter Clausen
@ 2015-05-17  9:14   ` Jonathan Cameron
  0 siblings, 0 replies; 18+ messages in thread
From: Jonathan Cameron @ 2015-05-17  9:14 UTC (permalink / raw)
  To: Lars-Peter Clausen, Hartmut Knaack, Peter Meerwald; +Cc: linux-iio

On 13/05/15 15:04, Lars-Peter Clausen wrote:
> Currently __iio_update_buffers() verifies whether the new configuration
> will work in the middle of the update sequence. This means if the new
> configuration is invalid we need to rollback the changes already made. This
> patch moves the validation of the new configuration at the beginning of
> __iio_update_buffers() and will not start to make any changes if the new
> configuration is invalid.
It's funny how code evolves sometimes.  It never occurred to me that
we could do this before actually unwinding the buffers but of course
we can. :)
> 
> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>

This one looks good, will wait on responses to the previous before taking
it however.
> ---
>  drivers/iio/industrialio-buffer.c | 164 +++++++++++++++++++++++---------------
>  1 file changed, 100 insertions(+), 64 deletions(-)
> 
> diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
> index 8ecba2f..68e2669 100644
> --- a/drivers/iio/industrialio-buffer.c
> +++ b/drivers/iio/industrialio-buffer.c
> @@ -603,20 +603,104 @@ static void iio_free_scan_mask(struct iio_dev *indio_dev,
>  		kfree(mask);
>  }
>  
> +struct iio_device_config {
> +	unsigned int mode;
> +	const unsigned long *scan_mask;
> +	unsigned int scan_bytes;
> +	bool scan_timestamp;
> +};
> +
> +static int iio_verify_update(struct iio_dev *indio_dev,
> +	struct iio_buffer *insert_buffer, struct iio_buffer *remove_buffer,
> +	struct iio_device_config *config)
> +{
> +	unsigned long *compound_mask;
> +	const unsigned long *scan_mask;
> +	struct iio_buffer *buffer;
> +	bool scan_timestamp;
> +
> +	memset(config, 0, sizeof(*config));
> +
> +	/*
> +	 * If there is just one buffer and we are removing it there is nothing
> +	 * to verify.
> +	 */
> +	if (remove_buffer && !insert_buffer &&
> +		list_is_singular(&indio_dev->buffer_list))
> +			return 0;
> +
> +	/* Definitely possible for devices to support both of these. */
> +	if ((indio_dev->modes & INDIO_BUFFER_TRIGGERED) && indio_dev->trig) {
> +		config->mode = INDIO_BUFFER_TRIGGERED;
> +	} else if (indio_dev->modes & INDIO_BUFFER_HARDWARE) {
> +		config->mode = INDIO_BUFFER_HARDWARE;
> +	} else if (indio_dev->modes & INDIO_BUFFER_SOFTWARE) {
> +		config->mode = INDIO_BUFFER_SOFTWARE;
> +	} else {
> +		/* Can only occur on first buffer */
> +		if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
> +			dev_dbg(&indio_dev->dev, "Buffer not started: no trigger\n");
> +		return -EINVAL;
> +	}
> +
> +	/* What scan mask do we actually have? */
> +	compound_mask = kcalloc(BITS_TO_LONGS(indio_dev->masklength),
> +				sizeof(long), GFP_KERNEL);
> +	if (compound_mask == NULL)
> +		return -ENOMEM;
> +
> +	scan_timestamp = false;
> +
> +	list_for_each_entry(buffer, &indio_dev->buffer_list, buffer_list) {
> +		if (buffer == remove_buffer)
> +			continue;
> +		bitmap_or(compound_mask, compound_mask, buffer->scan_mask,
> +			  indio_dev->masklength);
> +		scan_timestamp |= buffer->scan_timestamp;
> +	}
> +
> +	if (insert_buffer) {
> +		bitmap_or(compound_mask, compound_mask,
> +			  insert_buffer->scan_mask, indio_dev->masklength);
> +		scan_timestamp |= insert_buffer->scan_timestamp;
> +	}
> +
> +	if (indio_dev->available_scan_masks) {
> +		scan_mask = iio_scan_mask_match(indio_dev->available_scan_masks,
> +				    indio_dev->masklength,
> +				    compound_mask);
> +		kfree(compound_mask);
> +		if (scan_mask == NULL)
> +			return -EINVAL;
> +	} else {
> +	    scan_mask = compound_mask;
> +	}
> +
> +	config->scan_bytes = iio_compute_scan_bytes(indio_dev,
> +				    scan_mask, scan_timestamp);
> +	config->scan_mask = scan_mask;
> +	config->scan_timestamp = scan_timestamp;
> +
> +	return 0;
> +}
> +
>  static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		       struct iio_buffer *insert_buffer,
>  		       struct iio_buffer *remove_buffer)
>  {
>  	int ret;
> -	int success = 0;
> -	struct iio_buffer *buffer;
> -	unsigned long *compound_mask;
>  	const unsigned long *old_mask;
> +	struct iio_device_config new_config;
> +
> +	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
> +		&new_config);
> +	if (ret)
> +		return ret;
>  
>  	if (insert_buffer) {
>  		ret = iio_buffer_request_update(indio_dev, insert_buffer);
>  		if (ret)
> -			return ret;
> +			goto err_free_config;
>  	}
>  
>  	/* Wind down existing buffers - iff there are any */
> @@ -624,13 +708,13 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		if (indio_dev->setup_ops->predisable) {
>  			ret = indio_dev->setup_ops->predisable(indio_dev);
>  			if (ret)
> -				return ret;
> +				goto err_free_config;
>  		}
>  
>  		if (indio_dev->setup_ops->postdisable) {
>  			ret = indio_dev->setup_ops->postdisable(indio_dev);
>  			if (ret)
> -				return ret;
> +				goto err_free_config;
>  		}
>  
>  		indio_dev->currentmode = INDIO_DIRECT_MODE;
> @@ -652,59 +736,10 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		return 0;
>  	}
>  
> -	/* What scan mask do we actually have? */
> -	compound_mask = kcalloc(BITS_TO_LONGS(indio_dev->masklength),
> -				sizeof(long), GFP_KERNEL);
> -	if (compound_mask == NULL) {
> -		iio_free_scan_mask(indio_dev, old_mask);
> -		return -ENOMEM;
> -	}
> -	indio_dev->scan_timestamp = 0;
> -
> -	list_for_each_entry(buffer, &indio_dev->buffer_list, buffer_list) {
> -		bitmap_or(compound_mask, compound_mask, buffer->scan_mask,
> -			  indio_dev->masklength);
> -		indio_dev->scan_timestamp |= buffer->scan_timestamp;
> -	}
> -	if (indio_dev->available_scan_masks) {
> -		indio_dev->active_scan_mask =
> -			iio_scan_mask_match(indio_dev->available_scan_masks,
> -					    indio_dev->masklength,
> -					    compound_mask);
> -		kfree(compound_mask);
> -		if (indio_dev->active_scan_mask == NULL) {
> -			/*
> -			 * Roll back.
> -			 * Note can only occur when adding a buffer.
> -			 */
> -			iio_buffer_deactivate(insert_buffer);
> -			if (old_mask) {
> -				indio_dev->active_scan_mask = old_mask;
> -				success = -EINVAL;
> -			}
> -			else {
> -				ret = -EINVAL;
> -				return ret;
> -			}
> -		}
> -	} else {
> -		indio_dev->active_scan_mask = compound_mask;
> -	}
> -
> -	/* Definitely possible for devices to support both of these. */
> -	if ((indio_dev->modes & INDIO_BUFFER_TRIGGERED) && indio_dev->trig) {
> -		indio_dev->currentmode = INDIO_BUFFER_TRIGGERED;
> -	} else if (indio_dev->modes & INDIO_BUFFER_HARDWARE) {
> -		indio_dev->currentmode = INDIO_BUFFER_HARDWARE;
> -	} else if (indio_dev->modes & INDIO_BUFFER_SOFTWARE) {
> -		indio_dev->currentmode = INDIO_BUFFER_SOFTWARE;
> -	} else { /* Should never be reached */
> -		/* Can only occur on first buffer */
> -		if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
> -			dev_dbg(&indio_dev->dev, "Buffer not started: no trigger\n");
> -		ret = -EINVAL;
> -		goto error_remove_inserted;
> -	}
> +	indio_dev->currentmode = new_config.mode;
> +	indio_dev->active_scan_mask = new_config.scan_mask;
> +	indio_dev->scan_timestamp = new_config.scan_timestamp;
> +	indio_dev->scan_bytes = new_config.scan_bytes;
>  
>  	iio_update_demux(indio_dev);
>  
> @@ -717,10 +752,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  			goto error_remove_inserted;
>  		}
>  	}
> -	indio_dev->scan_bytes =
> -		iio_compute_scan_bytes(indio_dev,
> -				       indio_dev->active_scan_mask,
> -				       indio_dev->scan_timestamp);
> +
>  	if (indio_dev->info->update_scan_mode) {
>  		ret = indio_dev->info
>  			->update_scan_mode(indio_dev,
> @@ -744,7 +776,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  
>  	iio_free_scan_mask(indio_dev, old_mask);
>  
> -	return success;
> +	return 0;
>  
>  error_run_postdisable:
>  	if (indio_dev->setup_ops->postdisable)
> @@ -756,6 +788,10 @@ error_remove_inserted:
>  	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
>  	indio_dev->active_scan_mask = old_mask;
>  	return ret;
> +
> +err_free_config:
> +	iio_free_scan_mask(indio_dev, new_config.scan_mask);
> +	return ret;
>  }
>  
>  int iio_update_buffers(struct iio_dev *indio_dev,
> 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 6/8] iio: __iio_update_buffers: Split enable and disable path into helper functions
  2015-05-13 14:04 ` [PATCH 6/8] iio: __iio_update_buffers: Split enable and disable path into helper functions Lars-Peter Clausen
@ 2015-05-17  9:17   ` Jonathan Cameron
  0 siblings, 0 replies; 18+ messages in thread
From: Jonathan Cameron @ 2015-05-17  9:17 UTC (permalink / raw)
  To: Lars-Peter Clausen, Hartmut Knaack, Peter Meerwald; +Cc: linux-iio

On 13/05/15 15:04, Lars-Peter Clausen wrote:
> __iio_update_buffers is already a rather large function with many different
> error paths and it is going to get even larger. This patch factors out the
> device enable and device disable paths into separate helper functions.
> 
> The patch also re-implements iio_disable_all_buffers() using the new
> iio_disable_buffers() function removing a fair bit of redundant code.
> 
> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Also looks fine to me.
> ---
>  drivers/iio/industrialio-buffer.c | 174 +++++++++++++++++++++-----------------
>  1 file changed, 95 insertions(+), 79 deletions(-)
> 
> diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
> index 68e2669..dd04e35 100644
> --- a/drivers/iio/industrialio-buffer.c
> +++ b/drivers/iio/industrialio-buffer.c
> @@ -539,29 +539,6 @@ static void iio_buffer_deactivate(struct iio_buffer *buffer)
>  	iio_buffer_put(buffer);
>  }
>  
> -void iio_disable_all_buffers(struct iio_dev *indio_dev)
> -{
> -	struct iio_buffer *buffer, *_buffer;
> -
> -	if (list_empty(&indio_dev->buffer_list))
> -		return;
> -
> -	if (indio_dev->setup_ops->predisable)
> -		indio_dev->setup_ops->predisable(indio_dev);
> -
> -	list_for_each_entry_safe(buffer, _buffer,
> -			&indio_dev->buffer_list, buffer_list)
> -		iio_buffer_deactivate(buffer);
> -
> -	if (indio_dev->setup_ops->postdisable)
> -		indio_dev->setup_ops->postdisable(indio_dev);
> -
> -	indio_dev->currentmode = INDIO_DIRECT_MODE;
> -
> -	if (indio_dev->available_scan_masks == NULL)
> -		kfree(indio_dev->active_scan_mask);
> -}
> -
>  static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev,
>  	struct iio_buffer *buffer)
>  {
> @@ -684,62 +661,40 @@ static int iio_verify_update(struct iio_dev *indio_dev,
>  	return 0;
>  }
>  
> -static int __iio_update_buffers(struct iio_dev *indio_dev,
> -		       struct iio_buffer *insert_buffer,
> -		       struct iio_buffer *remove_buffer)
> +static int iio_disable_buffers(struct iio_dev *indio_dev)
>  {
>  	int ret;
> -	const unsigned long *old_mask;
> -	struct iio_device_config new_config;
>  
> -	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
> -		&new_config);
> -	if (ret)
> -		return ret;
> +	/* Wind down existing buffers - iff there are any */
> +	if (list_empty(&indio_dev->buffer_list))
> +		return 0;
>  
> -	if (insert_buffer) {
> -		ret = iio_buffer_request_update(indio_dev, insert_buffer);
> +	if (indio_dev->setup_ops->predisable) {
> +		ret = indio_dev->setup_ops->predisable(indio_dev);
>  		if (ret)
> -			goto err_free_config;
> +			return ret;
>  	}
>  
> -	/* Wind down existing buffers - iff there are any */
> -	if (!list_empty(&indio_dev->buffer_list)) {
> -		if (indio_dev->setup_ops->predisable) {
> -			ret = indio_dev->setup_ops->predisable(indio_dev);
> -			if (ret)
> -				goto err_free_config;
> -		}
> -
> -		if (indio_dev->setup_ops->postdisable) {
> -			ret = indio_dev->setup_ops->postdisable(indio_dev);
> -			if (ret)
> -				goto err_free_config;
> -		}
> -
> -		indio_dev->currentmode = INDIO_DIRECT_MODE;
> +	if (indio_dev->setup_ops->postdisable) {
> +		ret = indio_dev->setup_ops->postdisable(indio_dev);
> +		if (ret)
> +			return ret;
>  	}
> -	/* Keep a copy of current setup to allow roll back */
> -	old_mask = indio_dev->active_scan_mask;
> -	if (!indio_dev->available_scan_masks)
> -		indio_dev->active_scan_mask = NULL;
>  
> -	if (remove_buffer)
> -		iio_buffer_deactivate(remove_buffer);
> -	if (insert_buffer)
> -		iio_buffer_activate(indio_dev, insert_buffer);
> +	indio_dev->currentmode = INDIO_DIRECT_MODE;
>  
> -	/* If no buffers in list, we are done */
> -	if (list_empty(&indio_dev->buffer_list)) {
> -		indio_dev->currentmode = INDIO_DIRECT_MODE;
> -		iio_free_scan_mask(indio_dev, old_mask);
> -		return 0;
> -	}
> +	return 0;
> +}
>  
> -	indio_dev->currentmode = new_config.mode;
> -	indio_dev->active_scan_mask = new_config.scan_mask;
> -	indio_dev->scan_timestamp = new_config.scan_timestamp;
> -	indio_dev->scan_bytes = new_config.scan_bytes;
> +static int iio_enable_buffers(struct iio_dev *indio_dev,
> +	struct iio_device_config *config)
> +{
> +	int ret;
> +
> +	indio_dev->currentmode = config->mode;
> +	indio_dev->active_scan_mask = config->scan_mask;
> +	indio_dev->scan_timestamp = config->scan_timestamp;
> +	indio_dev->scan_bytes = config->scan_bytes;
>  
>  	iio_update_demux(indio_dev);
>  
> @@ -749,7 +704,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		if (ret) {
>  			dev_dbg(&indio_dev->dev,
>  			       "Buffer not started: buffer preenable failed (%d)\n", ret);
> -			goto error_remove_inserted;
> +			goto err_undo_config;
>  		}
>  	}
>  
> @@ -761,7 +716,7 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  			dev_dbg(&indio_dev->dev,
>  				"Buffer not started: update scan mode failed (%d)\n",
>  				ret);
> -			goto error_run_postdisable;
> +			goto err_run_postdisable;
>  		}
>  	}
>  
> @@ -770,24 +725,72 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		if (ret) {
>  			dev_dbg(&indio_dev->dev,
>  			       "Buffer not started: postenable failed (%d)\n", ret);
> -			goto error_run_postdisable;
> +			goto err_run_postdisable;
>  		}
>  	}
>  
> -	iio_free_scan_mask(indio_dev, old_mask);
> -
>  	return 0;
>  
> -error_run_postdisable:
> +err_run_postdisable:
>  	if (indio_dev->setup_ops->postdisable)
>  		indio_dev->setup_ops->postdisable(indio_dev);
> -error_remove_inserted:
> +err_undo_config:
>  	indio_dev->currentmode = INDIO_DIRECT_MODE;
> -	if (insert_buffer)
> -		iio_buffer_deactivate(insert_buffer);
> -	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
> -	indio_dev->active_scan_mask = old_mask;
> +	indio_dev->active_scan_mask = NULL;
> +
>  	return ret;
> +}
> +
> +static int __iio_update_buffers(struct iio_dev *indio_dev,
> +		       struct iio_buffer *insert_buffer,
> +		       struct iio_buffer *remove_buffer)
> +{
> +	int ret;
> +	const unsigned long *old_mask;
> +	struct iio_device_config new_config;
> +
> +	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
> +		&new_config);
> +	if (ret)
> +		return ret;
> +
> +	if (insert_buffer) {
> +		ret = iio_buffer_request_update(indio_dev, insert_buffer);
> +		if (ret)
> +			goto err_free_config;
> +	}
> +
> +	/* Keep a copy of current setup to allow roll back */
> +	old_mask = indio_dev->active_scan_mask;
> +	indio_dev->active_scan_mask = NULL;
> +
> +	ret = iio_disable_buffers(indio_dev);
> +	if (ret) {
> +		iio_free_scan_mask(indio_dev, old_mask);
> +		goto err_free_config;
> +	}
> +
> +	if (remove_buffer)
> +		iio_buffer_deactivate(remove_buffer);
> +	if (insert_buffer)
> +		iio_buffer_activate(indio_dev, insert_buffer);
> +
> +	/* If no buffers in list, we are done */
> +	if (list_empty(&indio_dev->buffer_list)) {
> +		iio_free_scan_mask(indio_dev, old_mask);
> +		return 0;
> +	}
> +
> +	ret = iio_enable_buffers(indio_dev, &new_config);
> +	if (ret) {
> +		if (insert_buffer)
> +			iio_buffer_deactivate(insert_buffer);
> +		indio_dev->active_scan_mask = old_mask;
> +		goto err_free_config;
> +	}
> +
> +	iio_free_scan_mask(indio_dev, old_mask);
> +	return 0;
>  
>  err_free_config:
>  	iio_free_scan_mask(indio_dev, new_config.scan_mask);
> @@ -832,6 +835,19 @@ out_unlock:
>  }
>  EXPORT_SYMBOL_GPL(iio_update_buffers);
>  
> +void iio_disable_all_buffers(struct iio_dev *indio_dev)
> +{
> +	struct iio_buffer *buffer, *_buffer;
> +
> +	iio_disable_buffers(indio_dev);
> +	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
> +	indio_dev->active_scan_mask = NULL;
> +
> +	list_for_each_entry_safe(buffer, _buffer,
> +			&indio_dev->buffer_list, buffer_list)
> +		iio_buffer_deactivate(buffer);
> +}
> +
>  static ssize_t iio_buffer_store_enable(struct device *dev,
>  				       struct device_attribute *attr,
>  				       const char *buf,
> 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 7/8] iio: __iio_update_buffers: Leave device in sane state on error
  2015-05-13 14:04 ` [PATCH 7/8] iio: __iio_update_buffers: Leave device in sane state on error Lars-Peter Clausen
@ 2015-05-17  9:22   ` Jonathan Cameron
  0 siblings, 0 replies; 18+ messages in thread
From: Jonathan Cameron @ 2015-05-17  9:22 UTC (permalink / raw)
  To: Lars-Peter Clausen, Hartmut Knaack, Peter Meerwald; +Cc: linux-iio

On 13/05/15 15:04, Lars-Peter Clausen wrote:
> Currently when something goes wrong at some step when disabling the buffers
> we immediately abort. This has the effect that the enable/disable calls are
> no longer balanced. So make sure that even if one step in the disable
> sequence fails the other steps are still executed.
> 
> The other issue is that when either enable or disable fails buffers that
> were active at that time stay active while the device itself is disabled.
> This leaves things in an inconsistent state and can cause unbalanced
> enable/disable calls. Furthermore, when enable fails we restore the old scan
> mask, but still keep things disabled.
> 
> Given that verification of the configuration was performed earlier and it
> is valid at the point where we try to enable/disable, the most likely reason
> for failure is a communication failure with the device or maybe an
> out-of-memory situation. There is not really a good recovery strategy in
> such a case, so it makes sense to leave the device disabled, but we should
> still leave it in a consistent state.
> 
> What the patch does if disable/enable fails is to deactivate all buffers
> and make sure that the device will be in the same state as if all buffers
> had been manually disabled.
> 
This is always a fun corner, as you are quite right in observing that there
is no right answer. Now that you have the incorrect-setup cases separated
out, all the roll back stuff makes no sense, as you observe.

So I guess this is the best option.  Though, a userspace app is going to
have to take a very dim view of hardware reliability to actually bother
retrying the buffer bring-ups. It'll have to notice that stuff that was
previously running fine is now disabled, which is nasty.

Jonathan
> Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
> ---
>  drivers/iio/industrialio-buffer.c | 79 ++++++++++++++++++++++-----------------
>  1 file changed, 44 insertions(+), 35 deletions(-)
> 
> diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
> index dd04e35..3a11a2a 100644
> --- a/drivers/iio/industrialio-buffer.c
> +++ b/drivers/iio/industrialio-buffer.c
> @@ -539,6 +539,15 @@ static void iio_buffer_deactivate(struct iio_buffer *buffer)
>  	iio_buffer_put(buffer);
>  }
>  
> +static void iio_buffer_deactivate_all(struct iio_dev *indio_dev)
> +{
> +	struct iio_buffer *buffer, *_buffer;
> +
> +	list_for_each_entry_safe(buffer, _buffer,
> +			&indio_dev->buffer_list, buffer_list)
> +		iio_buffer_deactivate(buffer);
> +}
> +
>  static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev,
>  	struct iio_buffer *buffer)
>  {
> @@ -663,27 +672,37 @@ static int iio_verify_update(struct iio_dev *indio_dev,
>  
>  static int iio_disable_buffers(struct iio_dev *indio_dev)
>  {
> -	int ret;
> +	int ret = 0;
> +	int ret2;
>  
>  	/* Wind down existing buffers - iff there are any */
>  	if (list_empty(&indio_dev->buffer_list))
>  		return 0;
>  
> +	/*
> +	 * If things go wrong at some step in disable we still need to continue
> +	 * to perform the other steps, otherwise we leave the device in an
> +	 * inconsistent state. We return the error code for the first error we
> +	 * encountered.
> +	 */
> +
>  	if (indio_dev->setup_ops->predisable) {
> -		ret = indio_dev->setup_ops->predisable(indio_dev);
> -		if (ret)
> -			return ret;
> +		ret2 = indio_dev->setup_ops->predisable(indio_dev);
> +		if (ret2 && !ret)
> +			ret = ret2;
>  	}
>  
>  	if (indio_dev->setup_ops->postdisable) {
>  		ret = indio_dev->setup_ops->postdisable(indio_dev);
> -		if (ret)
> -			return ret;
> +		if (ret2 && !ret)
> +			ret = ret2;
>  	}
>  
>  	indio_dev->currentmode = INDIO_DIRECT_MODE;
> +	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
> +	indio_dev->active_scan_mask = NULL;
>  
> -	return 0;
> +	return ret;
>  }
>  
>  static int iio_enable_buffers(struct iio_dev *indio_dev,
> @@ -745,9 +764,8 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		       struct iio_buffer *insert_buffer,
>  		       struct iio_buffer *remove_buffer)
>  {
> -	int ret;
> -	const unsigned long *old_mask;
>  	struct iio_device_config new_config;
> +	int ret;
>  
>  	ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
>  		&new_config);
> @@ -760,15 +778,9 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  			goto err_free_config;
>  	}
>  
> -	/* Keep a copy of current setup to allow roll back */
> -	old_mask = indio_dev->active_scan_mask;
> -	indio_dev->active_scan_mask = NULL;
> -
>  	ret = iio_disable_buffers(indio_dev);
> -	if (ret) {
> -		iio_free_scan_mask(indio_dev, old_mask);
> -		goto err_free_config;
> -	}
> +	if (ret)
> +		goto err_deactivate_all;
>  
>  	if (remove_buffer)
>  		iio_buffer_deactivate(remove_buffer);
> @@ -776,22 +788,26 @@ static int __iio_update_buffers(struct iio_dev *indio_dev,
>  		iio_buffer_activate(indio_dev, insert_buffer);
>  
>  	/* If no buffers in list, we are done */
> -	if (list_empty(&indio_dev->buffer_list)) {
> -		iio_free_scan_mask(indio_dev, old_mask);
> +	if (list_empty(&indio_dev->buffer_list))
>  		return 0;
> -	}
>  
>  	ret = iio_enable_buffers(indio_dev, &new_config);
> -	if (ret) {
> -		if (insert_buffer)
> -			iio_buffer_deactivate(insert_buffer);
> -		indio_dev->active_scan_mask = old_mask;
> -		goto err_free_config;
> -	}
> +	if (ret)
> +		goto err_deactivate_all;
>  
> -	iio_free_scan_mask(indio_dev, old_mask);
>  	return 0;
>  
> +err_deactivate_all:
> +	/*
> +	 * We've already verified that the config is valid earlier. If things go
> +	 * wrong in either enable or disable the most likely reason is an IO
> +	 * error from the device. In this case there is no good recovery
> +	 * strategy. Just make sure to disable everything and leave the device
> +	 * in a sane state.  With a bit of luck the device might come back to
> +	 * life again later and userspace can try again.
> +	 */
> +	iio_buffer_deactivate_all(indio_dev);
> +
>  err_free_config:
>  	iio_free_scan_mask(indio_dev, new_config.scan_mask);
>  	return ret;
> @@ -837,15 +853,8 @@ EXPORT_SYMBOL_GPL(iio_update_buffers);
>  
>  void iio_disable_all_buffers(struct iio_dev *indio_dev)
>  {
> -	struct iio_buffer *buffer, *_buffer;
> -
>  	iio_disable_buffers(indio_dev);
> -	iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask);
> -	indio_dev->active_scan_mask = NULL;
> -
> -	list_for_each_entry_safe(buffer, _buffer,
> -			&indio_dev->buffer_list, buffer_list)
> -		iio_buffer_deactivate(buffer);
> +	iio_buffer_deactivate_all(indio_dev);
>  }
>  
>  static ssize_t iio_buffer_store_enable(struct device *dev,
> 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 4/8] iio: __iio_update_buffers: Update mode before preenable/after postdisable
  2015-05-17  9:10   ` Jonathan Cameron
@ 2015-05-17 11:38     ` Lars-Peter Clausen
  2015-05-17 11:44       ` Jonathan Cameron
  0 siblings, 1 reply; 18+ messages in thread
From: Lars-Peter Clausen @ 2015-05-17 11:38 UTC (permalink / raw)
  To: Jonathan Cameron, Hartmut Knaack, Peter Meerwald; +Cc: linux-iio

On 05/17/2015 11:10 AM, Jonathan Cameron wrote:
> On 13/05/15 15:04, Lars-Peter Clausen wrote:
>> It is clear that we transition to INDIO_DIRECT_MODE when disabling the
>> buffer(s) and it is also clear that we transition from INDIO_DIRECT_MODE
>> when enabling the buffer(s). So leaving the currentmode field
>> INDIO_DIRECT_MODE until after the preenable() callback and updating it to
>> INDIO_DIRECT_MODE before the postdisable() callback doesn't add additional
>> value. On the other hand some drivers will need to perform different
>> actions depending on which mode the device is going to operate in/was
>> operating in.
>>
>> Moving the update of currentmode before preenable() and after postdisable()
>> enables us to have drivers which perform mode dependent actions in those
>> callbacks.
> I'm not sure that the argument that drivers might do something that requires
> knowledge of this state in preenable or post disable is terribly relevant
> (as inherently the driver always knows what is going on in these callbacks!).

I have a driver which supports multiple different modes. And the selected 
mode depends on which type of buffer is attached. Depending on which mode 
got selected the driver has to do slightly different things in the callbacks.

But yea, this patch is a bit out of place in this series. I initially moved 
it up here since it made some of the other refactoring a bit easier. But I 
think that refactoring has been refactored, so this patch is no longer 
necessary. I'll drop it for now and resend it later in a context where it 
makes more sense.

>
> However, more interesting is the race conditions with drivers that cannot do
> direct reads when in buffered mode. There are presumably still race conditions
> in this region whatever we do here so it's up to the drivers to take appropriate
> locks - I suspect many do not and hence might generate garbage readings if
> a sysfs read is going on during the transition...
 >
 > I suppose adding additional states to say we are transitioning modes might be
 > useful.. Worth the bother?

We do hold the device lock for the whole duration of __iio_update_buffers(). 
So things should be ok as long as the driver holds the same lock when doing 
a direct conversion. If the driver does not then even adding additional 
states will not help.

Unfortunately most drivers currently do something like

	if (iio_buffer_enabled(indio_dev))
		return -EBUSY;
	do_direct_conversion(...)

So this is clearly broken at the moment. For one, preenable() can already 
have been called and iio_buffer_enabled() still returns false. So that's 
something you could avoid by having extra states. But there is nothing that 
prevents buffered mode from getting enabled right after the 
iio_buffer_enabled() check or even in the middle of do_direct_conversion(). 
So additional modes won't help there, only proper locking will do.

A few other drivers do something like:

	mutex_lock(&indio_dev->mlock);
	if (indio_dev->currentmode == INDIO_BUFFER_TRIGGERED) {
		ret = -EBUSY;
	} else {
		ret = do_direct_conversion(...);
	}
	mutex_unlock(&indio_dev->mlock);

This should be safe. Although we might want to add some helpers in the core 
that make this a bit more straightforward and a bit more semantically 
expressive, maybe something like:

	if (!iio_device_claim_direct_mode(indio_dev))
		return -EBUSY;
	do_direct_conversion(...)
	iio_device_release_direct_mode(indio_dev);

iio_device_claim_direct_mode() returns false if the device is in buffered 
mode. If it returns true it is guaranteed that the device stays in direct 
mode until iio_device_release_direct_mode() is called. This makes sure that 
the driver can do a direct conversion without having to worry that the 
device might suddenly switch to buffered mode.
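
A minimal sketch of what I have in mind, just to illustrate the locking
(untested, and simply reusing mlock here is an assumption, a dedicated lock
might turn out to be nicer):

	bool iio_device_claim_direct_mode(struct iio_dev *indio_dev)
	{
		mutex_lock(&indio_dev->mlock);

		if (iio_buffer_enabled(indio_dev)) {
			/* buffered mode already active, claim fails */
			mutex_unlock(&indio_dev->mlock);
			return false;
		}

		/* keep mlock held so the mode cannot change under us */
		return true;
	}

	void iio_device_release_direct_mode(struct iio_dev *indio_dev)
	{
		mutex_unlock(&indio_dev->mlock);
	}

Since __iio_update_buffers() already runs with mlock held, holding it here
is enough to guarantee that the device stays in direct mode between claim
and release.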

And while we should fix that, this is not necessarily part of the scope of 
this series.

- Lars

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 4/8] iio: __iio_update_buffers: Update mode before preenable/after postdisable
  2015-05-17 11:38     ` Lars-Peter Clausen
@ 2015-05-17 11:44       ` Jonathan Cameron
  0 siblings, 0 replies; 18+ messages in thread
From: Jonathan Cameron @ 2015-05-17 11:44 UTC (permalink / raw)
  To: Lars-Peter Clausen, Hartmut Knaack, Peter Meerwald; +Cc: linux-iio

On 17/05/15 12:38, Lars-Peter Clausen wrote:
> On 05/17/2015 11:10 AM, Jonathan Cameron wrote:
>> On 13/05/15 15:04, Lars-Peter Clausen wrote:
>>> It is clear that we transition to INDIO_DIRECT_MODE when disabling the
>>> buffer(s) and it is also clear that we transition from INDIO_DIRECT_MODE
>>> when enabling the buffer(s). So leaving the currentmode field
>>> INDIO_DIRECT_MODE until after the preenable() callback and updating it to
>>> INDIO_DIRECT_MODE before the postdisable() callback doesn't add additional
>>> value. On the other hand some drivers will need to perform different
>>> actions depending on which mode the device is going to operate in/was
>>> operating in.
>>>
>>> Moving the update of currentmode before preenable() and after postdisable()
>>> enables us to have drivers which perform mode dependent actions in those
>>> callbacks.
>> I'm not sure that the argument that drivers might do something that requires
>> knowledge of this state in preenable or post disable is terribly relevant
>> (as inherently the driver always knows what is going on in these
>> callbacks!).
> 
> I have a driver which supports multiple different modes. And the
> selected mode depends on which type of buffer is attached. Depending
> on which mode got selected the driver has to do slightly different
> things in the callbacks.
> 
> But yea, this patch is a bit out of place in this series. I initially
> moved it up here since it made some of the other refactoring a bit
> easier. But I think that refactoring has been refactored, so this
> patch is no longer necessary. I'll drop it for now and resend it
> later in a context where it makes more sense.
>>
>> However, more interesting is the race conditions with drivers that cannot do
>> direct reads when in buffered mode. There are presumably still race conditions
>> in this region whatever we do here so it's up to the drivers to take appropriate
>> locks - I suspect many do not and hence might generate garbage readings if
>> a sysfs read is going on during the transition...
>>
>> I suppose adding additional states to say we are transitioning modes might be
>> useful.. Worth the bother?
> 
> We do hold the device lock for the whole duration of
> __iio_update_buffers(). So things should be ok as long as the driver
> holds the same lock when doing a direct conversion. If the driver
> does not then even adding additional states will not help.
> Unfortunately most drivers currently do something like
> 
>     if (iio_buffer_enabled(indio_dev))
>         return -EBUSY;
>     do_direct_conversion(...)
Exactly what I meant (my fault as I did it first, but that was a long time ago!)

> 
> So this is clearly broken at the moment. For one, preenable() can
> already have been called and iio_buffer_enabled() still returns
> false. So that's something you could avoid by having extra states.
> But there is nothing that prevents buffered mode from getting
> enabled right after the iio_buffer_enabled() check or even in the
> middle of do_direct_conversion(). So additional modes won't help
> there, only proper locking will do.
Agreed.
> A few other drivers do something like:
> 
>     mutex_lock(&indio_dev->mlock);
>     if (indio_dev->currentmode == INDIO_BUFFER_TRIGGERED) {
>         ret = -EBUSY;
>     } else {
>         ret = do_direct_conversion(...);
>     }
>     mutex_unlock(&indio_dev->mlock);
> 
> This should be safe. Although we might want to add some helpers in
> the core that make this a bit more straightforward and a bit more
> semantically expressive, maybe something like:
> 
>     if (!iio_device_claim_direct_mode(indio_dev))
>         return -EBUSY;
>     do_direct_conversion(...)
>     iio_device_release_direct_mode(indio_dev);
> 
> iio_device_claim_direct_mode() returns false if the device is in
> buffered mode. If it returns true it is guaranteed that the device
> stays in direct mode until iio_device_release_direct_mode() is
> called. This makes sure that the driver can do a direct conversion
> without having to worry that the device might suddenly switch to
> buffered mode.
Makes sense.
> 
> And while we should fix that, this is not necessarily part of the
> scope of this series.
Agreed.
> 
> - Lars
> 


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2015-05-17 11:44 UTC | newest]

Thread overview: 18+ messages
2015-05-13 14:04 [PATCH 0/8] iio: Refactor __iio_update_buffers() Lars-Peter Clausen
2015-05-13 14:04 ` [PATCH 1/8] iio: Replace printk in __iio_update_buffers with dev_dbg Lars-Peter Clausen
2015-05-17  8:42   ` Jonathan Cameron
2015-05-13 14:04 ` [PATCH 2/8] iio: __iio_update_buffers: Slightly refactor scan mask memory management Lars-Peter Clausen
2015-05-17  8:48   ` Jonathan Cameron
2015-05-13 14:04 ` [PATCH 3/8] iio: __iio_update_buffers: Perform request_update() only for new buffers Lars-Peter Clausen
2015-05-17  9:01   ` Jonathan Cameron
2015-05-13 14:04 ` [PATCH 4/8] iio: __iio_update_buffers: Update mode before preenable/after postdisable Lars-Peter Clausen
2015-05-17  9:10   ` Jonathan Cameron
2015-05-17 11:38     ` Lars-Peter Clausen
2015-05-17 11:44       ` Jonathan Cameron
2015-05-13 14:04 ` [PATCH 5/8] iio: __iio_update_buffers: Verify configuration before starting to apply it Lars-Peter Clausen
2015-05-17  9:14   ` Jonathan Cameron
2015-05-13 14:04 ` [PATCH 6/8] iio: __iio_update_buffers: Split enable and disable path into helper functions Lars-Peter Clausen
2015-05-17  9:17   ` Jonathan Cameron
2015-05-13 14:04 ` [PATCH 7/8] iio: __iio_update_buffers: Leave device in sane state on error Lars-Peter Clausen
2015-05-17  9:22   ` Jonathan Cameron
2015-05-13 14:04 ` [PATCH 8/8] DO NOT APPLY! __iio_update_buffers error path testing Lars-Peter Clausen

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).