linux-raid.vger.kernel.org archive mirror
* [PATCH 0/2] md: Add consistency check feature for level-1 RAID
@ 2014-03-17 15:00 Ralph Mueck
  2014-03-17 15:00 ` [PATCH 1/2] md: Add configurability for consistency check feature Ralph Mueck
  2014-03-17 15:00 ` [PATCH 2/2] md: Add support for RAID-1 " Ralph Mueck
  0 siblings, 2 replies; 5+ messages in thread
From: Ralph Mueck @ 2014-03-17 15:00 UTC (permalink / raw)
  To: linux-kernel; +Cc: i4passt, neilb, linux-raid, linux-kernel, Matthias Oefelein

This patch series introduces an online consistency check for RAID-1
md arrays.
When enabled, the feature compares each block that is read with its
counterparts on the other array members to detect silent data
corruption (a.k.a. bit rot).
Unfortunately, the feature is not fully functional at this point, as we
have reached a dead end (see below).
We still want to submit the patches for reference; maybe you can help
us, as we plan to keep working on them.

The following issues remain:

- Partitions cannot be mounted while safe_read is enabled
  (we suspect something goes wrong when duplicating the bio structure)
- Bad sectors encountered during a read cause a crash
- There may be synchronization issues
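
The compare-on-read idea described above can be modeled in userspace as a
small sketch (hypothetical helper name and signature, not part of the
patches): given the same logical block as read from each of n mirrors,
succeed only if every copy is bit-identical.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Userspace model of the safe_read check: compare every mirror's copy
 * of a block against the first one; any mismatch means silent
 * corruption on some member and the data must not be handed over. */
static int safe_read_verify(const unsigned char **copies, int n, size_t len)
{
	int i;

	for (i = 1; i < n; i++)
		if (memcmp(copies[0], copies[i], len) != 0)
			return -EIO;	/* mirrors disagree: report a read error */
	return 0;			/* all copies match */
}
```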

The following patches apply to the linux-next tree
(rev ed87ead565a6130174fc27a46af65169cbff7677).

Signed-off-by: Ralph Mueck <linux-kernel@rmueck.de>
Signed-off-by: Matthias Oefelein <ma.oefelein@arcor.de>

-- 
1.8.3.2



* [PATCH 1/2] md: Add configurability for consistency check feature
  2014-03-17 15:00 [PATCH 0/2] md: Add consistency check feature for level-1 RAID Ralph Mueck
@ 2014-03-17 15:00 ` Ralph Mueck
  2014-03-17 22:54   ` NeilBrown
  2014-03-17 15:00 ` [PATCH 2/2] md: Add support for RAID-1 " Ralph Mueck
  1 sibling, 1 reply; 5+ messages in thread
From: Ralph Mueck @ 2014-03-17 15:00 UTC (permalink / raw)
  To: linux-kernel; +Cc: i4passt, neilb, linux-raid, linux-kernel, Matthias Oefelein

This patch adds sysfs configurability for the md level-1 RAID
consistency check.
It introduces a new sysfs attribute named "safe_read".
To toggle the consistency check on or off, write the string "safe_read"
to /sys/block/md*/md/safe_read, e.g.
  echo safe_read > /sys/block/md0/md/safe_read

Signed-off-by: Ralph Mueck <linux-kernel@rmueck.de>
Signed-off-by: Matthias Oefelein <ma.oefelein@arcor.de>

---
 drivers/md/md.c | 27 +++++++++++++++++++++++++++
 drivers/md/md.h |  3 +++
 2 files changed, 30 insertions(+)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 4ad5cc4..5cc9a00 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -4652,6 +4652,32 @@ static struct md_sysfs_entry md_array_size =
 __ATTR(array_size, S_IRUGO|S_IWUSR, array_size_show,
        array_size_store);
 
+static ssize_t
+safe_read_show(struct mddev *mddev, char *page)
+{
+	if(mddev->safe_read)
+		return sprintf(page, "enabled\n");
+	else
+		return sprintf(page, "disabled\n");
+}
+
+static ssize_t
+safe_read_store(struct mddev *mddev, const char *buf, size_t len)
+{
+	if(mddev->pers->level != 1) {
+		printk(KERN_NOTICE "RAID level not supported!\n");
+		return len;
+	}
+	if (strncmp(buf, "safe_read", 9) == 0) {
+		mddev->safe_read = !mddev->safe_read;
+	}
+	return len;
+}
+
+static struct md_sysfs_entry md_safe_read =
+__ATTR(safe_read, S_IRUGO|S_IWUSR, safe_read_show,
+       safe_read_store);
+
 static struct attribute *md_default_attrs[] = {
 	&md_level.attr,
 	&md_layout.attr,
@@ -4667,6 +4693,7 @@ static struct attribute *md_default_attrs[] = {
 	&md_reshape_direction.attr,
 	&md_array_size.attr,
 	&max_corr_read_errors.attr,
+	&md_safe_read.attr,
 	NULL,
 };
 
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 07bba96..7e59cf1 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -346,6 +346,9 @@ struct mddev {
 	 */
 	int				recovery_disabled;
 
+	/* Set to 1 if the user desires a safe read (check for bitrot) */
+	int				safe_read;
+
 	int				in_sync;	/* know to not need resync */
 	/* 'open_mutex' avoids races between 'md_open' and 'do_md_stop', so
 	 * that we are never stopping an array while it is open.
-- 
1.8.3.2


* [PATCH 2/2] md: Add support for RAID-1 consistency check feature
  2014-03-17 15:00 [PATCH 0/2] md: Add consistency check feature for level-1 RAID Ralph Mueck
  2014-03-17 15:00 ` [PATCH 1/2] md: Add configurability for consistency check feature Ralph Mueck
@ 2014-03-17 15:00 ` Ralph Mueck
  2014-03-17 23:09   ` NeilBrown
  1 sibling, 1 reply; 5+ messages in thread
From: Ralph Mueck @ 2014-03-17 15:00 UTC (permalink / raw)
  To: linux-kernel; +Cc: i4passt, neilb, linux-raid, linux-kernel, Matthias Oefelein

This patch introduces a consistency check feature for level-1 RAID
arrays that have been created with the md driver.
When enabled, every read request is duplicated and initiated for each
member of the RAID array. All read blocks are compared with their
corresponding blocks on the other array members. If the check fails for
a block, the block is not handed over, but an error code is returned
instead.
As mentioned in the cover letter, the implementation still has some 
unresolved issues.
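
The duplicated-read path below counts outstanding per-mirror reads in
r1_bio->remaining: do_safe_read() primes the counter to 1, increments it
once per issued sub-read, and raid_end_bio_io() runs only when the final
decrement (via r1_bio_read_done()) reaches zero. A minimal userspace
model of that pattern, with hypothetical names standing in for the
r1_bio machinery:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Model of the r1_bio->remaining completion pattern. */
struct safe_read_ctx {
	atomic_int remaining;
	bool completed;
};

static void ctx_init(struct safe_read_ctx *ctx)
{
	atomic_init(&ctx->remaining, 1);	/* bias: the issuer holds a reference */
	ctx->completed = false;
}

static void ctx_issue(struct safe_read_ctx *ctx)
{
	atomic_fetch_add(&ctx->remaining, 1);	/* one per duplicated read */
}

/* Called once per finished sub-read, and once by the issuer after it is
 * done submitting; only the last caller performs the completion. */
static void ctx_put(struct safe_read_ctx *ctx)
{
	if (atomic_fetch_sub(&ctx->remaining, 1) == 1)
		ctx->completed = true;		/* stands in for raid_end_bio_io() */
}
```

The initial bias to 1 is what keeps the completion from firing while the
issuer is still in the middle of submitting reads to the mirrors.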

Signed-off-by: Ralph Mueck <linux-kernel@rmueck.de>
Signed-off-by: Matthias Oefelein <ma.oefelein@arcor.de>

---
 drivers/md/raid1.c | 252 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 250 insertions(+), 2 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 4a6ca1c..8c64f9a 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -37,6 +37,7 @@
 #include <linux/module.h>
 #include <linux/seq_file.h>
 #include <linux/ratelimit.h>
+#include <linux/gfp.h>
 #include "md.h"
 #include "raid1.h"
 #include "bitmap.h"
@@ -257,6 +258,109 @@ static void call_bio_endio(struct r1bio *r1_bio)
 	}
 }
 
+/* The safe_read version of the raid_end_bio_io() function */
+/* On a read request, we issue requests to all available disks.
+ * Data is returned only if all discs contain the same data
+ */
+static void safe_read_call_bio_endio(struct r1bio *r1_bio)
+{
+	struct bio *bio = r1_bio->master_bio;
+	int done;
+	struct r1conf *conf = r1_bio->mddev->private;
+	sector_t start_next_window = r1_bio->start_next_window;
+	sector_t bi_sector = bio->bi_iter.bi_sector;
+	int disk;
+	struct md_rdev *rdev;
+	int i;
+	struct page *dragptr = NULL;
+	int already_copied = 0;	/* we want to copy the data only once */
+
+	for (disk = 0 ; disk < conf->raid_disks * 2 ; disk++) {
+		struct bio *p = NULL;
+		struct bio *s = NULL;
+		
+		rcu_read_lock();
+		rdev = rcu_dereference(conf->mirrors[disk].rdev);
+		rcu_read_unlock();
+
+		if (r1_bio->bios[disk] == IO_BLOCKED
+			|| rdev == NULL
+			|| test_bit(Unmerged, &rdev->flags)
+			|| test_bit(Faulty, &rdev->flags)) {
+			continue;
+		}
+
+		/* bio_for_each_segment is broken. at least here.. */
+		/* iterate over linked bios */
+		for (p = r1_bio->master_bio, s = r1_bio->bios[disk];
+		     (p != NULL) && (s != NULL);
+		     p = p->bi_next, s = s->bi_next) {
+			/* compare the pages read */
+			for (i = 0; i < r1_bio->bios[disk]->bi_vcnt; i++) {
+				if (dragptr) { /* dragptr points to the previous page */
+					if(memcmp(page_address(r1_bio->bios[disk]->bi_io_vec[0].bv_page),
+						page_address(dragptr),
+						(r1_bio->bios[disk]->bi_io_vec[0].bv_len))) {
+						set_bit(R1BIO_ReadError, &r1_bio->state);
+						clear_bit(R1BIO_Uptodate, &r1_bio->state);
+					}
+				}
+				dragptr = r1_bio->bios[disk]->bi_io_vec[0].bv_page;
+			}
+		}
+	}
+
+	for (disk = 0 ; disk < conf->raid_disks * 2 ; disk++) {
+		rcu_read_lock();
+		rdev = rcu_dereference(conf->mirrors[disk].rdev);
+		rcu_read_unlock();
+		if (r1_bio->bios[disk] == IO_BLOCKED	//stolen from read_balance - documentation? HA! Look there!
+			|| rdev == NULL
+			|| test_bit(Unmerged, &rdev->flags)
+			|| test_bit(Faulty, &rdev->flags)) {
+			continue;
+		}
+
+ 		for (i = 0; i < r1_bio->bios[disk]->bi_vcnt; i++) {
+			if(!already_copied) {
+				if (r1_bio->bios[disk]->bi_io_vec[i].bv_page) {
+					memcpy(page_address(r1_bio->master_bio->bi_io_vec[i].bv_page),
+					       page_address(r1_bio->bios[disk]->bi_io_vec[i].bv_page),
+					       (r1_bio->bios[disk]->bi_io_vec[i].bv_len));
+				}
+			}
+
+			put_page(r1_bio->bios[disk]->bi_io_vec[i].bv_page);
+		}
+		already_copied = 1;
+	}
+
+	if (bio->bi_phys_segments) {
+		unsigned long flags;
+		spin_lock_irqsave(&conf->device_lock, flags);
+		bio->bi_phys_segments--;
+		done = (bio->bi_phys_segments == 0);
+		spin_unlock_irqrestore(&conf->device_lock, flags);
+		/*
+		 * make_request() might be waiting for
+		 * bi_phys_segments to decrease
+		 */
+		wake_up(&conf->wait_barrier);
+	} else
+		done = 1;
+
+	if (!test_bit(R1BIO_Uptodate, &r1_bio->state))
+		clear_bit(BIO_UPTODATE, &bio->bi_flags);
+	if (done) {
+		bio_endio(bio, 0);
+		/*
+		 * Wake up any possible resync thread that waits for the device
+		 * to go idle.
+		 */
+		allow_barrier(conf, start_next_window, bi_sector);
+	}
+}
+
 static void raid_end_bio_io(struct r1bio *r1_bio)
 {
 	struct bio *bio = r1_bio->master_bio;
@@ -268,8 +372,12 @@ static void raid_end_bio_io(struct r1bio *r1_bio)
 			 (unsigned long long) bio->bi_iter.bi_sector,
 			 (unsigned long long) bio_end_sector(bio) - 1);
 
-		call_bio_endio(r1_bio);
+		if (r1_bio->mddev->safe_read && bio_data_dir(bio) == READ)
+			safe_read_call_bio_endio(r1_bio);
+		else
+			call_bio_endio(r1_bio);
 	}
+
 	free_r1bio(r1_bio);
 }
 
@@ -303,6 +411,14 @@ static int find_bio_disk(struct r1bio *r1_bio, struct bio *bio)
 	return mirror;
 }
 
+static void r1_bio_read_done(struct r1bio *r1_bio)
+{
+	if(r1_bio->mddev->safe_read)
+		if (!atomic_dec_and_test(&r1_bio->remaining))
+			return;
+	raid_end_bio_io(r1_bio);
+}
+
 static void raid1_end_read_request(struct bio *bio, int error)
 {
 	int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags);
@@ -333,7 +449,7 @@ static void raid1_end_read_request(struct bio *bio, int error)
 	}
 
 	if (uptodate) {
-		raid_end_bio_io(r1_bio);
+		r1_bio_read_done(r1_bio);
 		rdev_dec_pending(conf->mirrors[mirror].rdev, conf->mddev);
 	} else {
 		/*
@@ -1073,6 +1189,133 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
 	kfree(plug);
 }
 
+/* This function creates a "deep copy" of a bio (own pages, own bvecs) */
+static struct bio *copy_bio(struct bio *source, struct mddev *mddev) {
+	struct bio *temp;
+
+	temp = bio_clone_mddev(source, GFP_NOIO, mddev);
+	BUG_ON(!temp);
+
+	bio_alloc_pages(temp, GFP_NOIO | __GFP_HIGHMEM);
+	temp->bi_flags = source->bi_flags;
+	temp->bi_flags = (temp->bi_flags | BIO_OWNS_VEC);
+
+	temp->bi_rw = source->bi_rw;
+	temp->bi_iter.bi_sector = source->bi_iter.bi_sector;
+	temp->bi_iter.bi_size = source->bi_iter.bi_size;
+	temp->bi_phys_segments = source->bi_phys_segments;
+	temp->bi_end_io = source->bi_end_io;
+	temp->bi_private = source->bi_private;
+
+	return temp;
+}
+
+/* Duplicate the read command in order to read from every available disk */
+static void do_safe_read(struct mddev *mddev, struct bio * bio, struct r1bio *r1_bio) {
+	struct r1conf *conf = mddev->private;
+	struct raid1_info *mirror;
+	struct bitmap *bitmap;
+	struct bio *read_bio;
+	struct md_rdev *rdev;
+
+	int rdisk;
+	int max_sectors = r1_bio->sectors;
+	const unsigned long do_sync = (bio->bi_rw & REQ_SYNC);
+	int sectors_handled;
+
+	int disk;
+	bitmap = mddev->bitmap;
+
+	/* set the atomic counter */
+	atomic_set(&r1_bio->remaining, 1);
+
+	/* iterate over the disks */
+	for (disk = 0 ; disk < conf->raid_disks * 2 ; disk++) {
+d_s_read_again:
+		rcu_read_lock();
+		rdev = rcu_dereference(conf->mirrors[disk].rdev);
+		rcu_read_unlock();
+
+		/* check if disk is valid */
+		if (r1_bio->bios[disk] == IO_BLOCKED
+			|| rdev == NULL
+			|| test_bit(Unmerged, &rdev->flags)
+			|| test_bit(Faulty, &rdev->flags)) {
+			continue;
+		}
+
+		rdisk = disk;
+
+		mirror = conf->mirrors + rdisk;
+
+		if (test_bit(WriteMostly, &mirror->rdev->flags) &&
+			bitmap) {
+			/* Reading from a write-mostly device must
+				* take care not to over-take any writes
+				* that are 'behind'
+				*/
+			wait_event(bitmap->behind_wait,
+					atomic_read(&bitmap->behind_writes) == 0);
+		}
+		r1_bio->read_disk = rdisk;
+
+		/* try to copy the bio */
+		read_bio = copy_bio(bio, mddev);
+		if(!read_bio)
+			return;
+		bio_trim(read_bio, r1_bio->sector - bio->bi_iter.bi_sector,
+				max_sectors);
+
+		r1_bio->bios[rdisk] = read_bio;
+
+		read_bio->bi_iter.bi_sector = r1_bio->sector +
+			mirror->rdev->data_offset;
+		read_bio->bi_bdev = mirror->rdev->bdev;
+		read_bio->bi_end_io = raid1_end_read_request;
+		read_bio->bi_rw = READ | do_sync;
+		read_bio->bi_private = r1_bio;
+
+		if (max_sectors < r1_bio->sectors) {
+			/* could not read all from this device, so we will
+				* need another r1_bio.
+				*/
+
+			sectors_handled = (r1_bio->sector + max_sectors
+						- bio->bi_iter.bi_sector);
+			r1_bio->sectors = max_sectors;
+			spin_lock_irq(&conf->device_lock);
+			if (bio->bi_phys_segments == 0)
+				bio->bi_phys_segments = 2;
+			else
+				bio->bi_phys_segments++;
+			spin_unlock_irq(&conf->device_lock);
+			/* Cannot call generic_make_request directly
+				* as that will be queued in __make_request
+				* and subsequent mempool_alloc might block waiting
+				* for it.  So hand bio over to raid1d.
+				*/
+			reschedule_retry(r1_bio);
+
+			r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);
+
+			r1_bio->master_bio = bio;
+			r1_bio->sectors = bio_sectors(bio) - sectors_handled;
+			r1_bio->state = 0;
+			r1_bio->mddev = mddev;
+			r1_bio->sector = bio->bi_iter.bi_sector +
+				sectors_handled;
+			goto d_s_read_again;
+		} else {
+			
+			atomic_inc(&r1_bio->remaining);
+			generic_make_request(read_bio);
+		}
+		
+	}
+	r1_bio_read_done(r1_bio);	/* decrement atomic counter */
+	return;
+}
+
 static void make_request(struct mddev *mddev, struct bio * bio)
 {
 	struct r1conf *conf = mddev->private;
@@ -1157,6 +1400,11 @@ static void make_request(struct mddev *mddev, struct bio * bio)
 		 */
 		int rdisk;
 
+		if(mddev->safe_read) {
+			do_safe_read(mddev, bio, r1_bio);
+			return;
+		}
+
 read_again:
 		rdisk = read_balance(conf, r1_bio, &max_sectors);
 
-- 
1.8.3.2


* Re: [PATCH 1/2] md: Add configurability for consistency check feature
  2014-03-17 15:00 ` [PATCH 1/2] md: Add configurability for consistency check feature Ralph Mueck
@ 2014-03-17 22:54   ` NeilBrown
  0 siblings, 0 replies; 5+ messages in thread
From: NeilBrown @ 2014-03-17 22:54 UTC (permalink / raw)
  To: Ralph Mueck; +Cc: i4passt, linux-raid, linux-kernel, Matthias Oefelein


On Mon, 17 Mar 2014 16:00:04 +0100 Ralph Mueck <linux-kernel@rmueck.de> wrote:

> This patch adds sysfs configurability for the md level-1 RAID
> consistency check.
> The feature introduces a new attribute in sysfs named "safe_read".
> To toggle consistency checks on/off, simply echo safe_read in
> /sys/block/md*/md/safe_read.
> 
> Signed-off-by: Ralph Mueck <linux-kernel@rmueck.de>
> Signed-off-by: Matthias Oefelein <ma.oefelein@arcor.de>
> 
> ---
>  drivers/md/md.c | 27 +++++++++++++++++++++++++++
>  drivers/md/md.h |  3 +++
>  2 files changed, 30 insertions(+)
> 
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 4ad5cc4..5cc9a00 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -4652,6 +4652,32 @@ static struct md_sysfs_entry md_array_size =
>  __ATTR(array_size, S_IRUGO|S_IWUSR, array_size_show,
>         array_size_store);
>  
> +static ssize_t
> +safe_read_show(struct mddev *mddev, char *page)
> +{
> +	if(mddev->safe_read)
> +		return sprintf(page, "enabled\n");
> +	else
> +		return sprintf(page, "disabled\n");
> +}
> +
> +static ssize_t
> +safe_read_store(struct mddev *mddev, const char *buf, size_t len)
> +{
> +	if(mddev->pers->level != 1) {
> +		printk(KERN_NOTICE "RAID level not supported!\n");
> +		return len;
> +	}
> +	if (strncmp(buf, "safe_read", 9) == 0) {
> +		mddev->safe_read = !mddev->safe_read;
> +	}
> +	return len;
> +}
> +

So let me get this straight....

There is a sysfs file called "safe_read".
When you read from that file it reports either "enabled" or "disabled".

However you cannot write "enabled" or "disabled" to the file.  Rather you
write "safe_read".  And when you do, it toggles the status.

Seriously?

Any chance you could use device_show_bool / device_store_bool ??
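
A store handler along the lines suggested here might accept the same
strings the show side emits. As a hedged userspace sketch (hypothetical
function names; streq_nl() is a stand-in for the kernel's sysfs_streq(),
which tolerates the trailing newline that `echo` appends):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Compare buf against token, allowing one trailing newline. */
static int streq_nl(const char *buf, const char *token)
{
	size_t n = strlen(token);

	if (strncmp(buf, token, n) != 0)
		return 0;
	return buf[n] == '\0' || (buf[n] == '\n' && buf[n + 1] == '\0');
}

/* Parse an explicit enable/disable request instead of toggling on a
 * magic token; unknown input is rejected rather than silently ignored. */
static int parse_safe_read(const char *buf, int *safe_read)
{
	if (streq_nl(buf, "enabled") || streq_nl(buf, "1"))
		*safe_read = 1;
	else if (streq_nl(buf, "disabled") || streq_nl(buf, "0"))
		*safe_read = 0;
	else
		return -EINVAL;
	return 0;
}
```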

NeilBrown





* Re: [PATCH 2/2] md: Add support for RAID-1 consistency check feature
  2014-03-17 15:00 ` [PATCH 2/2] md: Add support for RAID-1 " Ralph Mueck
@ 2014-03-17 23:09   ` NeilBrown
  0 siblings, 0 replies; 5+ messages in thread
From: NeilBrown @ 2014-03-17 23:09 UTC (permalink / raw)
  To: Ralph Mueck; +Cc: i4passt, linux-raid, linux-kernel, Matthias Oefelein


On Mon, 17 Mar 2014 16:00:05 +0100 Ralph Mueck <linux-kernel@rmueck.de> wrote:

> This patch introduces a consistency check feature for level-1 RAID
> arrays that have been created with the md driver.
> When enabled, every read request is duplicated and initiated for each
> member of the RAID array. All read blocks are compared with their
> corresponding blocks on the other array members. If the check fails for
> a block, the block is not handed over, but an error code is returned
> instead.
> As mentioned in the cover letter, the implementation still has some 
> unresolved issues.
> 
> Signed-off-by: Ralph Mueck <linux-kernel@rmueck.de>
> Signed-off-by: Matthias Oefelein <ma.oefelein@arcor.de>
> 
> ---
>  drivers/md/raid1.c | 252 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 250 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index 4a6ca1c..8c64f9a 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -37,6 +37,7 @@
>  #include <linux/module.h>
>  #include <linux/seq_file.h>
>  #include <linux/ratelimit.h>
> +#include <linux/gfp.h>
>  #include "md.h"
>  #include "raid1.h"
>  #include "bitmap.h"
> @@ -257,6 +258,109 @@ static void call_bio_endio(struct r1bio *r1_bio)
>  	}
>  }
>  
> +/* The safe_read version of the raid_end_bio_io() function */
> +/* On a read request, we issue requests to all available disks.
> + * Data is returned only if all discs contain the same data
> + */
> +static void safe_read_call_bio_endio(struct r1bio *r1_bio)
> +{
> +	struct bio *bio = r1_bio->master_bio;
> +	int done;
> +	struct r1conf *conf = r1_bio->mddev->private;
> +	sector_t start_next_window = r1_bio->start_next_window;
> +	sector_t bi_sector = bio->bi_iter.bi_sector;
> +	int disk;
> +	struct md_rdev *rdev;
> +	int i;
> +	struct page *dragptr = NULL;
> +	int already_copied = 0;	/* we want to copy the data only once */
> +
> +	for (disk = 0 ; disk < conf->raid_disks * 2 ; disk++) {
> +		struct bio *p = NULL;
> +		struct bio *s = NULL;
> +		
> +		rcu_read_lock();
> +		rdev = rcu_dereference(conf->mirrors[disk].rdev);
> +		rcu_read_unlock();

You cannot drop rcu_read_lock until you take a reference to rdev, or stop
using it.


> +
> +		if (r1_bio->bios[disk] == IO_BLOCKED
> +			|| rdev == NULL
> +			|| test_bit(Unmerged, &rdev->flags)
> +			|| test_bit(Faulty, &rdev->flags)) {
> +			continue;
> +		}
> +
> +		/* bio_for_each_segment is broken. at least here.. */
> +		/* iterate over linked bios */
> +		for (p = r1_bio->master_bio, s = r1_bio->bios[disk];
> +		     (p != NULL) && (s != NULL);
> +		     p = p->bi_next, s = s->bi_next) {
> +			/* compare the pages read */
> +			for (i = 0; i < r1_bio->bios[disk]->bi_vcnt; i++) {
> +				if (dragptr) { /* dragptr points to the previous page */
> +					if(memcmp(page_address(r1_bio->bios[disk]->bi_io_vec[0].bv_page),
> +						page_address(dragptr),
> +						(r1_bio->bios[disk]->bi_io_vec[0].bv_len))) {
> +						set_bit(R1BIO_ReadError, &r1_bio->state);
> +						clear_bit(R1BIO_Uptodate, &r1_bio->state);
> +					}
> +				}
> +				dragptr = r1_bio->bios[disk]->bi_io_vec[0].bv_page;
> +			}

This doesn't make any sense to me at all.  You use 'i' to loop bi_vnt times,
but never use 'i' or change any other variable in that loop (except dragptr
which is always set to the same value).

And you use "bi_next", but never set up any linkage through bi_next.

Confused.
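
What the loop presumably intends is to compare segment i of one mirror's
read against segment i of the other on every iteration, rather than
indexing bi_io_vec[0] bi_vcnt times. A userspace sketch with
hypothetical types (struct seg stands in for a bio_vec):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for a bio_vec segment. */
struct seg {
	const unsigned char *data;
	size_t len;
};

/* Compare two reads segment by segment, actually using the index i;
 * any length or content mismatch means the mirrors diverge. */
static int segments_equal(const struct seg *a, const struct seg *b, int nsegs)
{
	int i;

	for (i = 0; i < nsegs; i++) {
		if (a[i].len != b[i].len)
			return 0;
		if (memcmp(a[i].data, b[i].data, a[i].len) != 0)
			return 0;	/* mismatch in segment i */
	}
	return 1;
}
```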

NeilBrown




