linux-raid.vger.kernel.org archive mirror
* [PATCH 00/12] Multiple PPL support and PPL bugfixes
@ 2017-09-28 12:41 Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 01/12] Don't abort starting the array if kernel does not support ppl Pawel Baldysiak
                   ` (12 more replies)
  0 siblings, 13 replies; 14+ messages in thread
From: Pawel Baldysiak @ 2017-09-28 12:41 UTC (permalink / raw)
  To: jes.sorensen; +Cc: linux-raid, Pawel Baldysiak

This patchset introduces userspace support for multiple PPLs written in a
circular buffer. It also contains a few general bug fixes for PPL and a
couple of IMSM-specific compatibility patches related to the new PPL.
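
For illustration only, the area is used roughly as in the sketch below: each
PPL write appends a 4 KiB header plus its partial parity data to a fixed 1 MB
region per member drive, wrapping around once the region is full. The
constants, the wrap rule and next_ppl_offset() are simplified stand-ins for
this description, not code from the patches:

#include <stdio.h>

#define PPL_HEADER_SIZE		4096		/* one header per PPL write */
#define MULTIPLE_PPL_AREA_SIZE	(1024 * 1024)	/* whole reserved area */

/*
 * Return the byte offset of the next PPL inside the reserved area,
 * wrapping back to the start when another header would no longer fit.
 * pp_size is the amount of partial parity data that followed the
 * current header.
 */
static unsigned long long next_ppl_offset(unsigned long long cur,
					  unsigned long long pp_size)
{
	unsigned long long next = cur + PPL_HEADER_SIZE + pp_size;

	if (next + PPL_HEADER_SIZE > MULTIPLE_PPL_AREA_SIZE)
		next = 0;	/* circular buffer: overwrite the oldest PPL */
	return next;
}

int main(void)
{
	unsigned long long off = 0;
	int i;

	for (i = 0; i < 7; i++) {
		printf("PPL %d written at offset %llu\n", i, off);
		off = next_ppl_offset(off, 192 * 1024);	/* arbitrary pp size */
	}
	return 0;
}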

Artur Paszkiewicz (5):
  Don't abort starting the array if kernel does not support ppl
  imsm: don't skip resync when an invalid ppl header is found
  imsm: always do ppl recovery when starting a rebuilding array
  imsm: use correct map when validating ppl
  imsm: write initial ppl on a disk added for rebuild

Pawel Baldysiak (7):
  super1: Add support for multiple-ppls
  imsm: Add support for multiple ppls
  imsm: validate multiple ppls during assemble
  Zeroout whole ppl space during creation/force assemble
  imsm: switch to multiple ppls automatically during assemble
  Grow: fix switching on PPL during recovery
  imsm: Write empty PPL header if assembling regular clean array.

 Grow.c        |   3 -
 managemon.c   |  11 +++-
 mdadm.h       |   1 +
 super-intel.c | 181 ++++++++++++++++++++++++++++++++++++++++++++--------------
 super1.c      |  70 ++++++++++++++---------
 sysfs.c       |   6 +-
 util.c        |  49 ++++++++++++++++
 7 files changed, 241 insertions(+), 80 deletions(-)

-- 
2.13.5


* [PATCH 01/12] Don't abort starting the array if kernel does not support ppl
  2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
@ 2017-09-28 12:41 ` Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 02/12] super1: Add support for multiple-ppls Pawel Baldysiak
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Pawel Baldysiak @ 2017-09-28 12:41 UTC (permalink / raw)
  To: jes.sorensen; +Cc: linux-raid, Artur Paszkiewicz

From: Artur Paszkiewicz <artur.paszkiewicz@intel.com>

Change the behavior of assemble and create for consistency-policy=ppl
for external metadata arrays. If the kernel does not support ppl, don't
abort but print a warning and start the array without ppl
(consistency-policy=resync). There is no change for native md arrays,
because the kernel will not allow starting the array if it finds an
unsupported feature bit in the superblock.

In sysfs_add_disk(), check consistency_policy in the mdinfo structure
that represents the array, not the disk, and read the current consistency
policy from sysfs in mdmon's manage_member(). This is necessary to make
sysfs_add_disk() honor the actual consistency policy and not what is in
the metadata. Also remove all the places where consistency_policy is set
for a disk's mdinfo - it is a property of the array, not the disk.
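
The resulting fallback can be pictured with the standalone sketch below. It
writes to the md sysfs attribute directly instead of going through the mdadm
helpers; the device name is made up, while consistency_policy and the
"ppl"/"resync" values are the ones md exposes:

#include <stdio.h>

/* Write a value to an md sysfs attribute; returns 0 on success. */
static int md_sysfs_set(const char *devnm, const char *attr, const char *val)
{
	char path[256];
	FILE *f;
	int ok;

	snprintf(path, sizeof(path), "/sys/block/%s/md/%s", devnm, attr);
	f = fopen(path, "w");
	if (!f)
		return -1;
	ok = fputs(val, f) >= 0;
	if (fclose(f) != 0)	/* sysfs may report the error only on close */
		ok = 0;
	return ok ? 0 : -1;
}

int main(void)
{
	/*
	 * Ask for PPL; on a kernel without PPL support the write is
	 * rejected, and instead of aborting we warn and keep the default
	 * resync behavior.
	 */
	if (md_sysfs_set("md127", "consistency_policy", "ppl"))
		fprintf(stderr,
			"This kernel does not support PPL, falling back to resync\n");
	return 0;
}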

Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
---
 managemon.c   | 11 ++++++++---
 super-intel.c |  4 +---
 sysfs.c       |  6 +++---
 3 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/managemon.c b/managemon.c
index 68f0c2d3..cc3c6f10 100644
--- a/managemon.c
+++ b/managemon.c
@@ -477,7 +477,7 @@ static void manage_member(struct mdstat_ent *mdstat,
 	char buf[64];
 	int frozen;
 	struct supertype *container = a->container;
-	unsigned long long int component_size = 0;
+	struct mdinfo *mdi;
 
 	if (container == NULL)
 		/* Raced with something */
@@ -489,8 +489,13 @@ static void manage_member(struct mdstat_ent *mdstat,
 		// MORE
 	}
 
-	if (sysfs_get_ll(&a->info, NULL, "component_size", &component_size) >= 0)
-		a->info.component_size = component_size << 1;
+	mdi = sysfs_read(-1, mdstat->devnm,
+			 GET_COMPONENT|GET_CONSISTENCY_POLICY);
+	if (mdi) {
+		a->info.component_size = mdi->component_size;
+		a->info.consistency_policy = mdi->consistency_policy;
+		sysfs_free(mdi);
+	}
 
 	/* honor 'frozen' */
 	if (sysfs_get_str(&a->info, NULL, "metadata_version", buf, sizeof(buf)) > 0)
diff --git a/super-intel.c b/super-intel.c
index 125c3a98..2f378dea 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -7669,7 +7669,6 @@ static struct mdinfo *container_content_imsm(struct supertype *st, char *subarra
 			} else {
 				info_d->component_size = blocks_per_member(map);
 			}
-			info_d->consistency_policy = this->consistency_policy;
 
 			info_d->bb.supported = 1;
 			get_volume_badblocks(super->bbm_log, ord_to_idx(ord),
@@ -8758,8 +8757,7 @@ static struct mdinfo *imsm_activate_spare(struct active_array *a,
 		di->component_size = a->info.component_size;
 		di->container_member = inst;
 		di->bb.supported = 1;
-		if (dev->rwh_policy == RWH_DISTRIBUTED) {
-			di->consistency_policy = CONSISTENCY_POLICY_PPL;
+		if (a->info.consistency_policy == CONSISTENCY_POLICY_PPL) {
 			di->ppl_sector = get_ppl_sector(super, inst);
 			di->ppl_size = (PPL_HEADER_SIZE + PPL_ENTRY_SPACE) >> 9;
 		}
diff --git a/sysfs.c b/sysfs.c
index 78d2b526..8ea7ba2a 100644
--- a/sysfs.c
+++ b/sysfs.c
@@ -709,8 +709,8 @@ int sysfs_set_array(struct mdinfo *info, int vers)
 		if (sysfs_set_str(info, NULL, "consistency_policy",
 				  map_num(consistency_policies,
 					  info->consistency_policy))) {
-			pr_err("This kernel does not support PPL\n");
-			return 1;
+			pr_err("This kernel does not support PPL. Falling back to consistency-policy=resync.\n");
+			info->consistency_policy = CONSISTENCY_POLICY_RESYNC;
 		}
 	}
 
@@ -745,7 +745,7 @@ int sysfs_add_disk(struct mdinfo *sra, struct mdinfo *sd, int resume)
 	rv = sysfs_set_num(sra, sd, "offset", sd->data_offset);
 	rv |= sysfs_set_num(sra, sd, "size", (sd->component_size+1) / 2);
 	if (sra->array.level != LEVEL_CONTAINER) {
-		if (sd->consistency_policy == CONSISTENCY_POLICY_PPL) {
+		if (sra->consistency_policy == CONSISTENCY_POLICY_PPL) {
 			rv |= sysfs_set_num(sra, sd, "ppl_sector", sd->ppl_sector);
 			rv |= sysfs_set_num(sra, sd, "ppl_size", sd->ppl_size);
 		}
-- 
2.13.5


* [PATCH 02/12] super1: Add support for multiple-ppls
  2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 01/12] Don't abort starting the array if kernel does not support ppl Pawel Baldysiak
@ 2017-09-28 12:41 ` Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 03/12] imsm: Add support for multiple ppls Pawel Baldysiak
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Pawel Baldysiak @ 2017-09-28 12:41 UTC (permalink / raw)
  To: jes.sorensen; +Cc: linux-raid, Pawel Baldysiak, Artur Paszkiewicz

Add support for super1 with multiple PPLs. Extend the PPL area size to 1MB
and use it as the default during creation. Always start the array with a
single PPL - if the kernel is capable of multiple PPLs and there is enough
space reserved, it will switch the policy during the first metadata update.
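
The resulting space selection can be summarized with the sketch below.
choose_ppl_space() mirrors the helper of the same name in super1.c; the rest
is illustrative only and all sizes are in 512-byte sectors:

#include <stdint.h>
#include <stdio.h>

#define PPL_HEADER_SIZE			4096
#define MULTIPLE_PPL_AREA_SIZE_SUPER1	(1024 * 1024)

/* Same heuristic as super1.c: room for one header plus at least one
 * chunk (minimum 128 KiB) of partial parity, in sectors. */
static unsigned int choose_ppl_space(int chunk)
{
	return (PPL_HEADER_SIZE >> 9) + (chunk > 128 * 2 ? chunk : 128 * 2);
}

/* Given the free sectors next to the superblock, decide how large the
 * PPL area becomes: the full 1 MB if it fits (multiple PPLs possible),
 * otherwise the single-PPL sizing capped at the 16-bit ppl.size field. */
static unsigned int ppl_sectors(unsigned int space, int chunk)
{
	unsigned int optimal;

	if (space >= (MULTIPLE_PPL_AREA_SIZE_SUPER1 >> 9))
		return MULTIPLE_PPL_AREA_SIZE_SUPER1 >> 9;

	optimal = choose_ppl_space(chunk);
	if (space > optimal)
		space = optimal;
	if (space > UINT16_MAX)
		space = UINT16_MAX;
	return space;
}

int main(void)
{
	printf("%u sectors\n", ppl_sectors(4096, 256));	/* 1 MB area fits: 2048 */
	printf("%u sectors\n", ppl_sectors(1024, 256));	/* small gap: 264 */
	return 0;
}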

Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
---
 super1.c | 65 ++++++++++++++++++++++++++++++++++++----------------------------
 1 file changed, 37 insertions(+), 28 deletions(-)

diff --git a/super1.c b/super1.c
index f6a10450..b31f8450 100644
--- a/super1.c
+++ b/super1.c
@@ -121,6 +121,9 @@ struct misc_dev_info {
 	__u64 device_size;
 };
 
+#define MULTIPLE_PPL_AREA_SIZE_SUPER1 (1024 * 1024) /* Size of the whole
+						     * multiple PPL area
+						     */
 /* feature_map bits */
 #define MD_FEATURE_BITMAP_OFFSET	1
 #define	MD_FEATURE_RECOVERY_OFFSET	2 /* recovery_offset is present and
@@ -140,6 +143,7 @@ struct misc_dev_info {
 #define	MD_FEATURE_BITMAP_VERSIONED	256 /* bitmap version number checked properly */
 #define	MD_FEATURE_JOURNAL		512 /* support write journal */
 #define	MD_FEATURE_PPL			1024 /* support PPL */
+#define	MD_FEATURE_MUTLIPLE_PPLS	2048 /* support for multiple PPLs */
 #define	MD_FEATURE_ALL			(MD_FEATURE_BITMAP_OFFSET	\
 					|MD_FEATURE_RECOVERY_OFFSET	\
 					|MD_FEATURE_RESHAPE_ACTIVE	\
@@ -150,6 +154,7 @@ struct misc_dev_info {
 					|MD_FEATURE_BITMAP_VERSIONED	\
 					|MD_FEATURE_JOURNAL		\
 					|MD_FEATURE_PPL			\
 					|MD_FEATURE_MUTLIPLE_PPLS	\
 					)
 
 static int role_from_sb(struct mdp_superblock_1 *sb)
@@ -298,6 +303,12 @@ static int awrite(struct align_fd *afd, void *buf, int len)
 	return len;
 }
 
+static inline unsigned int md_feature_any_ppl_on(__u32 feature_map)
+{
+	return ((__cpu_to_le32(feature_map) &
+	    (MD_FEATURE_PPL | MD_FEATURE_MUTLIPLE_PPLS)));
+}
+
 static inline unsigned int choose_ppl_space(int chunk)
 {
 	return (PPL_HEADER_SIZE >> 9) + (chunk > 128*2 ? chunk : 128*2);
@@ -409,7 +420,7 @@ static void examine_super1(struct supertype *st, char *homehost)
 	if (sb->feature_map & __cpu_to_le32(MD_FEATURE_BITMAP_OFFSET)) {
 		printf("Internal Bitmap : %ld sectors from superblock\n",
 		       (long)(int32_t)__le32_to_cpu(sb->bitmap_offset));
-	} else if (sb->feature_map & __cpu_to_le32(MD_FEATURE_PPL)) {
+	} else if (md_feature_any_ppl_on(sb->feature_map)) {
 		printf("            PPL : %u sectors at offset %d sectors from superblock\n",
 		       __le16_to_cpu(sb->ppl.size),
 		       __le16_to_cpu(sb->ppl.offset));
@@ -985,7 +996,7 @@ static void getinfo_super1(struct supertype *st, struct mdinfo *info, char *map)
 		info->bitmap_offset = (int32_t)__le32_to_cpu(sb->bitmap_offset);
 		if (__le32_to_cpu(bsb->nodes) > 1)
 			info->array.state |= (1 << MD_SB_CLUSTERED);
-	} else if (sb->feature_map & __le32_to_cpu(MD_FEATURE_PPL)) {
+	} else if (md_feature_any_ppl_on(sb->feature_map)) {
 		info->ppl_offset = __le16_to_cpu(sb->ppl.offset);
 		info->ppl_size = __le16_to_cpu(sb->ppl.size);
 		info->ppl_sector = super_offset + info->ppl_offset;
@@ -1140,7 +1151,7 @@ static void getinfo_super1(struct supertype *st, struct mdinfo *info, char *map)
 	if (sb->feature_map & __le32_to_cpu(MD_FEATURE_JOURNAL)) {
 		info->journal_device_required = 1;
 		info->consistency_policy = CONSISTENCY_POLICY_JOURNAL;
-	} else if (sb->feature_map & __le32_to_cpu(MD_FEATURE_PPL)) {
+	} else if (md_feature_any_ppl_on(sb->feature_map)) {
 		info->consistency_policy = CONSISTENCY_POLICY_PPL;
 	} else if (sb->feature_map & __le32_to_cpu(MD_FEATURE_BITMAP_OFFSET)) {
 		info->consistency_policy = CONSISTENCY_POLICY_BITMAP;
@@ -1324,7 +1335,7 @@ static int update_super1(struct supertype *st, struct mdinfo *info,
 		if (sb->feature_map & __cpu_to_le32(MD_FEATURE_BITMAP_OFFSET)) {
 			bitmap_offset = (long)__le32_to_cpu(sb->bitmap_offset);
 			bm_sectors = calc_bitmap_size(bms, 4096) >> 9;
-		} else if (sb->feature_map & __cpu_to_le32(MD_FEATURE_PPL)) {
+		} else if (md_feature_any_ppl_on(sb->feature_map)) {
 			bitmap_offset = (long)__le16_to_cpu(sb->ppl.offset);
 			bm_sectors = (long)__le16_to_cpu(sb->ppl.size);
 		}
@@ -1377,7 +1388,6 @@ static int update_super1(struct supertype *st, struct mdinfo *info,
 		unsigned long long data_size = __le64_to_cpu(sb->data_size);
 		long bb_offset = __le32_to_cpu(sb->bblog_offset);
 		int space;
-		int optimal_space;
 		int offset;
 
 		if (sb->feature_map & __cpu_to_le32(MD_FEATURE_BITMAP_OFFSET)) {
@@ -1408,18 +1418,23 @@ static int update_super1(struct supertype *st, struct mdinfo *info,
 			return -2;
 		}
 
-		optimal_space = choose_ppl_space(__le32_to_cpu(sb->chunksize));
-
-		if (space > optimal_space)
-			space = optimal_space;
-		if (space > UINT16_MAX)
-			space = UINT16_MAX;
+		if (space >= (MULTIPLE_PPL_AREA_SIZE_SUPER1 >> 9)) {
+			space = (MULTIPLE_PPL_AREA_SIZE_SUPER1 >> 9);
+		} else {
+			int optimal_space = choose_ppl_space(
+						__le32_to_cpu(sb->chunksize));
+			if (space > optimal_space)
+				space = optimal_space;
+			if (space > UINT16_MAX)
+				space = UINT16_MAX;
+		}
 
 		sb->ppl.offset = __cpu_to_le16(offset);
 		sb->ppl.size = __cpu_to_le16(space);
 		sb->feature_map |= __cpu_to_le32(MD_FEATURE_PPL);
 	} else if (strcmp(update, "no-ppl") == 0) {
-		sb->feature_map &= ~ __cpu_to_le32(MD_FEATURE_PPL);
+		sb->feature_map &= ~__cpu_to_le32(MD_FEATURE_PPL |
+						   MD_FEATURE_MUTLIPLE_PPLS);
 	} else if (strcmp(update, "name") == 0) {
 		if (info->name[0] == 0)
 			sprintf(info->name, "%d", info->array.md_minor);
@@ -1974,20 +1989,12 @@ static int write_init_super1(struct supertype *st)
 					(((char *)sb) + MAX_SB_SIZE);
 			bm_space = calc_bitmap_size(bms, 4096) >> 9;
 			bm_offset = (long)__le32_to_cpu(sb->bitmap_offset);
-		} else if (sb->feature_map & __cpu_to_le32(MD_FEATURE_PPL)) {
-			bm_space =
-			  choose_ppl_space(__le32_to_cpu(sb->chunksize));
-			if (bm_space > UINT16_MAX)
-				bm_space = UINT16_MAX;
-			if (st->minor_version == 0) {
+		} else if (md_feature_any_ppl_on(sb->feature_map)) {
+			bm_space = MULTIPLE_PPL_AREA_SIZE_SUPER1 >> 9;
+			if (st->minor_version == 0)
 				bm_offset = -bm_space - 8;
-				if (bm_offset < INT16_MIN) {
-					bm_offset = INT16_MIN;
-					bm_space = -bm_offset - 8;
-				}
-			} else {
+			else
 				bm_offset = 8;
-			}
 			sb->ppl.offset = __cpu_to_le16(bm_offset);
 			sb->ppl.size = __cpu_to_le16(bm_space);
 		} else {
@@ -2069,7 +2076,7 @@ static int write_init_super1(struct supertype *st)
 		     MD_FEATURE_BITMAP_OFFSET)) {
 			rv = st->ss->write_bitmap(st, di->fd, NodeNumUpdate);
 		} else if (rv == 0 &&
-			 (__le32_to_cpu(sb->feature_map) & MD_FEATURE_PPL)) {
+		    md_feature_any_ppl_on(sb->feature_map)) {
 			struct mdinfo info;
 
 			st->ss->getinfo_super(st, &info, NULL);
@@ -2345,7 +2352,7 @@ static __u64 avail_size1(struct supertype *st, __u64 devsize,
 		struct bitmap_super_s *bsb;
 		bsb = (struct bitmap_super_s *)(((char*)super)+MAX_SB_SIZE);
 		bmspace = calc_bitmap_size(bsb, 4096) >> 9;
-	} else if (__le32_to_cpu(super->feature_map) & MD_FEATURE_PPL) {
+	} else if (md_feature_any_ppl_on(super->feature_map)) {
 		bmspace = __le16_to_cpu(super->ppl.size);
 	}
 
@@ -2769,8 +2776,10 @@ static int validate_geometry1(struct supertype *st, int level,
 	}
 
 	/* creating:  allow suitable space for bitmap or PPL */
-	bmspace = consistency_policy == CONSISTENCY_POLICY_PPL ?
-		  choose_ppl_space((*chunk)*2) : choose_bm_space(devsize);
+	if (consistency_policy == CONSISTENCY_POLICY_PPL)
+		bmspace = MULTIPLE_PPL_AREA_SIZE_SUPER1 >> 9;
+	else
+		bmspace = choose_bm_space(devsize);
 
 	if (data_offset == INVALID_SECTORS)
 		data_offset = st->data_offset;
-- 
2.13.5


* [PATCH 03/12] imsm: Add support for multiple ppls
  2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 01/12] Don't abort starting the array if kernel does not support ppl Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 02/12] super1: Add support for multiple-ppls Pawel Baldysiak
@ 2017-09-28 12:41 ` Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 04/12] imsm: validate multiple ppls during assemble Pawel Baldysiak
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Pawel Baldysiak @ 2017-09-28 12:41 UTC (permalink / raw)
  To: jes.sorensen; +Cc: linux-raid, Pawel Baldysiak, Artur Paszkiewicz

Add interpretation of the new rwh_policy bits and set the PPL size to 1MB.
When a new array with PPL is created, use the new PPL implementation by
default.

Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
---
 super-intel.c | 37 +++++++++++++++++++++++++++----------
 1 file changed, 27 insertions(+), 10 deletions(-)

diff --git a/super-intel.c b/super-intel.c
index 2f378dea..21eb048a 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -92,6 +92,9 @@
 #define NUM_BLOCKS_DIRTY_STRIPE_REGION 2056
 #define SECT_PER_MB_SHIFT 11
 #define MAX_SECTOR_SIZE 4096
+#define MULTIPLE_PPL_AREA_SIZE_IMSM (1024 * 1024) /* Size of the whole
+						   * multiple PPL area
+						   */
 
 /* Disk configuration info. */
 #define IMSM_MAX_DEVICES 255
@@ -207,6 +210,9 @@ struct imsm_dev {
 #define RWH_OFF 0
 #define RWH_DISTRIBUTED 1
 #define RWH_JOURNALING_DRIVE 2
+#define RWH_MULTIPLE_DISTRIBUTED 3
+#define RWH_MULTIPLE_PPLS_JOURNALING_DRIVE 4
+#define RWH_MULTIPLE_OFF 5
 	__u8  rwh_policy; /* Raid Write Hole Policy */
 	__u8  jd_serial[MAX_RAID_SERIAL_LEN]; /* Journal Drive serial number */
 	__u8  filler1;
@@ -284,7 +290,7 @@ static char *map_state_str[] = { "normal", "uninitialized", "degraded", "failed"
 				 *  already been migrated and must
 				 *  be recovered from checkpoint area */
 
-#define PPL_ENTRY_SPACE (128 * 1024) /* Size of the PPL, without the header */
+#define PPL_ENTRY_SPACE (128 * 1024) /* Size of single PPL, without the header */
 
 struct migr_record {
 	__u32 rec_status;	    /* Status used to determine how to restart
@@ -1539,12 +1545,16 @@ static void print_imsm_dev(struct intel_super *super,
 	printf("    Dirty State : %s\n", (dev->vol.dirty & RAIDVOL_DIRTY) ?
 					 "dirty" : "clean");
 	printf("     RWH Policy : ");
-	if (dev->rwh_policy == RWH_OFF)
+	if (dev->rwh_policy == RWH_OFF || dev->rwh_policy == RWH_MULTIPLE_OFF)
 		printf("off\n");
 	else if (dev->rwh_policy == RWH_DISTRIBUTED)
 		printf("PPL distributed\n");
 	else if (dev->rwh_policy == RWH_JOURNALING_DRIVE)
 		printf("PPL journaling drive\n");
+	else if (dev->rwh_policy == RWH_MULTIPLE_DISTRIBUTED)
+		printf("Multiple distributed PPLs\n");
+	else if (dev->rwh_policy == RWH_MULTIPLE_PPLS_JOURNALING_DRIVE)
+		printf("Multiple PPLs on journaling drive\n");
 	else
 		printf("<unknown:%d>\n", dev->rwh_policy);
 }
@@ -3294,10 +3304,16 @@ static void getinfo_super_imsm_volume(struct supertype *st, struct mdinfo *info,
 	memset(info->uuid, 0, sizeof(info->uuid));
 	info->recovery_start = MaxSector;
 
-	if (info->array.level == 5 && dev->rwh_policy == RWH_DISTRIBUTED) {
+	if (info->array.level == 5 &&
+	    (dev->rwh_policy == RWH_DISTRIBUTED ||
+	     dev->rwh_policy == RWH_MULTIPLE_DISTRIBUTED)) {
 		info->consistency_policy = CONSISTENCY_POLICY_PPL;
 		info->ppl_sector = get_ppl_sector(super, super->current_vol);
-		info->ppl_size = (PPL_HEADER_SIZE + PPL_ENTRY_SPACE) >> 9;
+		if (dev->rwh_policy == RWH_MULTIPLE_DISTRIBUTED)
+			info->ppl_size = MULTIPLE_PPL_AREA_SIZE_IMSM >> 9;
+		else
+			info->ppl_size = (PPL_HEADER_SIZE + PPL_ENTRY_SPACE)
+					  >> 9;
 	} else if (info->array.level <= 0) {
 		info->consistency_policy = CONSISTENCY_POLICY_NONE;
 	} else {
@@ -5390,9 +5406,9 @@ static int init_super_imsm_volume(struct supertype *st, mdu_array_info_t *info,
 	dev->my_vol_raid_dev_num = mpb->num_raid_devs_created;
 
 	if (s->consistency_policy <= CONSISTENCY_POLICY_RESYNC) {
-		dev->rwh_policy = RWH_OFF;
+		dev->rwh_policy = RWH_MULTIPLE_OFF;
 	} else if (s->consistency_policy == CONSISTENCY_POLICY_PPL) {
-		dev->rwh_policy = RWH_DISTRIBUTED;
+		dev->rwh_policy = RWH_MULTIPLE_DISTRIBUTED;
 	} else {
 		free(dev);
 		free(dv);
@@ -7403,9 +7419,9 @@ static int update_subarray_imsm(struct supertype *st, char *subarray,
 			return 2;
 
 		if (strcmp(update, "ppl") == 0)
-			new_policy = RWH_DISTRIBUTED;
+			new_policy = RWH_MULTIPLE_DISTRIBUTED;
 		else
-			new_policy = RWH_OFF;
+			new_policy = RWH_MULTIPLE_OFF;
 
 		if (st->update_tail) {
 			struct imsm_update_rwh_policy *u = xmalloc(sizeof(*u));
@@ -8205,7 +8221,8 @@ skip_mark_checkpoint:
 			dev->vol.dirty = RAIDVOL_CLEAN;
 		} else {
 			dev->vol.dirty = RAIDVOL_DIRTY;
-			if (dev->rwh_policy == RWH_DISTRIBUTED)
+			if (dev->rwh_policy == RWH_DISTRIBUTED ||
+			    dev->rwh_policy == RWH_MULTIPLE_DISTRIBUTED)
 				dev->vol.dirty |= RAIDVOL_DSRECORD_VALID;
 		}
 		super->updates_pending++;
@@ -8759,7 +8776,7 @@ static struct mdinfo *imsm_activate_spare(struct active_array *a,
 		di->bb.supported = 1;
 		if (a->info.consistency_policy == CONSISTENCY_POLICY_PPL) {
 			di->ppl_sector = get_ppl_sector(super, inst);
-			di->ppl_size = (PPL_HEADER_SIZE + PPL_ENTRY_SPACE) >> 9;
+			di->ppl_size = MULTIPLE_PPL_AREA_SIZE_IMSM >> 9;
 		}
 		super->random = random32();
 		di->next = rv;
-- 
2.13.5


* [PATCH 04/12] imsm: validate multiple ppls during assemble
  2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
                   ` (2 preceding siblings ...)
  2017-09-28 12:41 ` [PATCH 03/12] imsm: Add support for multiple ppls Pawel Baldysiak
@ 2017-09-28 12:41 ` Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 05/12] Zeroout whole ppl space during creation/force assemble Pawel Baldysiak
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Pawel Baldysiak @ 2017-09-28 12:41 UTC (permalink / raw)
  To: jes.sorensen; +Cc: linux-raid, Pawel Baldysiak

Change the validation algorithm to check the validity of multiple PPLs
stored in the PPL area.

If a read error occurs during validation, treat all PPLs as invalid -
there is no guarantee that the unreadable one was not the latest. If a
header CRC is incorrect, assume that there are no further PPLs in the
PPL area.

If the whole PPL area has been written at least once, an old PPL (with a
lower generation number) may follow the most recent one (with the highest
generation number). Compare the generation numbers to determine which PPL
is the latest.
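
The decision logic can be restated as the standalone sketch below. It works
on headers that have already been read, so the I/O and crc32c details of the
real validate_ppl_imsm() are folded into a precomputed crc_ok flag; only the
return convention (0 - valid PPL found, 1 - no valid PPL) matches the patch,
everything else is made up for illustration:

#include <stdint.h>
#include <stdio.h>

/* The fields of struct ppl_header that the walk actually looks at. */
struct ppl_hdr_view {
	int	 crc_ok;	/* checksum over the 4 KiB header was correct */
	uint32_t signature;	/* must match the IMSM orig_family_num */
	uint64_t generation;	/* increases with every PPL written */
};

/*
 * Walk the headers laid out one after another in the 1 MB area.
 * Returns 0 if a valid, current PPL was found and 1 otherwise.
 */
static int validate_ppl_area(const struct ppl_hdr_view *hdr, int nr,
			     uint32_t family_num)
{
	uint64_t prev_gen = 0;
	int ret = 1, i;

	for (i = 0; i < nr; i++) {
		if (!hdr[i].crc_ok)
			break;		/* bad CRC: no further PPLs in the area */
		if (prev_gen > hdr[i].generation)
			break;		/* older PPL follows: the previous one was newest */
		if (hdr[i].signature != family_num)
			return 1;	/* PPL left over from a different array */
		ret = 0;		/* newest valid PPL so far */
		prev_gen = hdr[i].generation;
	}
	return ret;
}

int main(void)
{
	struct ppl_hdr_view area[] = {
		{ 1, 0xabcd, 41 },
		{ 1, 0xabcd, 42 },
		{ 1, 0xabcd, 17 },	/* wrapped-around, older PPL */
	};

	printf("PPL area is %s\n",
	       validate_ppl_area(area, 3, 0xabcd) ? "invalid" : "valid");
	return 0;
}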

Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
---
 super-intel.c | 71 +++++++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 47 insertions(+), 24 deletions(-)

diff --git a/super-intel.c b/super-intel.c
index 21eb048a..d11ddae4 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -6106,11 +6106,14 @@ static int validate_ppl_imsm(struct supertype *st, struct mdinfo *info,
 	struct imsm_dev *dev;
 	struct imsm_map *map;
 	__u32 idx;
+	unsigned int i;
+	unsigned long long ppl_offset = 0;
+	unsigned long long prev_gen_num = 0;
 
 	if (disk->disk.raid_disk < 0)
 		return 0;
 
-	if (posix_memalign(&buf, 4096, PPL_HEADER_SIZE)) {
+	if (posix_memalign(&buf, MAX_SECTOR_SIZE, PPL_HEADER_SIZE)) {
 		pr_err("Failed to allocate PPL header buffer\n");
 		return -1;
 	}
@@ -6123,34 +6126,54 @@ static int validate_ppl_imsm(struct supertype *st, struct mdinfo *info,
 	if (!d || d->index < 0 || is_failed(&d->disk))
 		goto out;
 
-	if (lseek64(d->fd, info->ppl_sector * 512, SEEK_SET) < 0) {
-		perror("Failed to seek to PPL header location");
-		ret = -1;
-		goto out;
-	}
+	ret = 1;
+	while (ppl_offset < MULTIPLE_PPL_AREA_SIZE_IMSM) {
+		dprintf("Checking potential PPL at offset: %llu\n", ppl_offset);
 
-	if (read(d->fd, buf, PPL_HEADER_SIZE) != PPL_HEADER_SIZE) {
-		perror("Read PPL header failed");
-		ret = -1;
-		goto out;
-	}
+		if (lseek64(d->fd, info->ppl_sector * 512 + ppl_offset,
+			    SEEK_SET) < 0) {
+			perror("Failed to seek to PPL header location");
+			ret = -1;
+			goto out;
+		}
 
-	ppl_hdr = buf;
+		if (read(d->fd, buf, PPL_HEADER_SIZE) != PPL_HEADER_SIZE) {
+			perror("Read PPL header failed");
+			ret = -1;
+			goto out;
+		}
 
-	crc = __le32_to_cpu(ppl_hdr->checksum);
-	ppl_hdr->checksum = 0;
+		ppl_hdr = buf;
 
-	if (crc != ~crc32c_le(~0, buf, PPL_HEADER_SIZE)) {
-		dprintf("Wrong PPL header checksum on %s\n",
-			d->devname);
-		ret = 1;
-	}
+		crc = __le32_to_cpu(ppl_hdr->checksum);
+		ppl_hdr->checksum = 0;
+
+		if (crc != ~crc32c_le(~0, buf, PPL_HEADER_SIZE)) {
+			dprintf("Wrong PPL header checksum on %s\n",
+				d->devname);
+			goto out;
+		}
+
+		if (prev_gen_num > __le64_to_cpu(ppl_hdr->generation)) {
+			/* previous was newest, it was already checked */
+			goto out;
+		}
+
+		if ((__le32_to_cpu(ppl_hdr->signature) !=
+			      super->anchor->orig_family_num)) {
+			dprintf("Wrong PPL header signature on %s\n",
+				d->devname);
+			ret = 1;
+			goto out;
+		}
+
+		ret = 0;
+		prev_gen_num = __le64_to_cpu(ppl_hdr->generation);
 
-	if (!ret && (__le32_to_cpu(ppl_hdr->signature) !=
-		      super->anchor->orig_family_num)) {
-		dprintf("Wrong PPL header signature on %s\n",
-			d->devname);
-		ret = 1;
+		ppl_offset += PPL_HEADER_SIZE;
+		for (i = 0; i < __le32_to_cpu(ppl_hdr->entries_count); i++)
+			ppl_offset +=
+				   __le32_to_cpu(ppl_hdr->entries[i].pp_size);
 	}
 
 out:
-- 
2.13.5


* [PATCH 05/12] Zeroout whole ppl space during creation/force assemble
  2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
                   ` (3 preceding siblings ...)
  2017-09-28 12:41 ` [PATCH 04/12] imsm: validate multiple ppls during assemble Pawel Baldysiak
@ 2017-09-28 12:41 ` Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 06/12] imsm: switch to multiple ppls automatically during assemble Pawel Baldysiak
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Pawel Baldysiak @ 2017-09-28 12:41 UTC (permalink / raw)
  To: jes.sorensen; +Cc: linux-raid, Pawel Baldysiak

The PPL area should be cleared before creation/force assemble. If the
drive was used in another RAID array, it might contain a PPL from it.
There is a risk that mdadm recognizes those PPLs and refuses to assemble
the RAID due to a PPL conflict with the created array.

Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
---
 mdadm.h       |  1 +
 super-intel.c |  7 ++++++-
 super1.c      |  5 +++++
 util.c        | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 61 insertions(+), 1 deletion(-)

diff --git a/mdadm.h b/mdadm.h
index 191ae8f7..8f802e50 100644
--- a/mdadm.h
+++ b/mdadm.h
@@ -687,6 +687,7 @@ extern int sysfs_unique_holder(char *devnm, long rdev);
 extern int sysfs_freeze_array(struct mdinfo *sra);
 extern int sysfs_wait(int fd, int *msec);
 extern int load_sys(char *path, char *buf, int len);
+extern int zero_disk_range(int fd, unsigned long long sector, size_t count);
 extern int reshape_prepare_fdlist(char *devname,
 				  struct mdinfo *sra,
 				  int raid_disks,
diff --git a/super-intel.c b/super-intel.c
index d11ddae4..e1862d4a 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -6065,7 +6065,12 @@ static int write_init_ppl_imsm(struct supertype *st, struct mdinfo *info, int fd
 	struct ppl_header *ppl_hdr;
 	int ret;
 
-	ret = posix_memalign(&buf, 4096, PPL_HEADER_SIZE);
+	/* first clear entire ppl space */
+	ret = zero_disk_range(fd, info->ppl_sector, info->ppl_size);
+	if (ret)
+		return ret;
+
+	ret = posix_memalign(&buf, MAX_SECTOR_SIZE, PPL_HEADER_SIZE);
 	if (ret) {
 		pr_err("Failed to allocate PPL header buffer\n");
 		return ret;
diff --git a/super1.c b/super1.c
index b31f8450..6b821525 100644
--- a/super1.c
+++ b/super1.c
@@ -1823,6 +1823,11 @@ static int write_init_ppl1(struct supertype *st, struct mdinfo *info, int fd)
 	struct ppl_header *ppl_hdr;
 	int ret;
 
+	/* first clear entire ppl space */
+	ret = zero_disk_range(fd, info->ppl_sector, info->ppl_size);
+	if (ret)
+		return ret;
+
 	ret = posix_memalign(&buf, 4096, PPL_HEADER_SIZE);
 	if (ret) {
 		pr_err("Failed to allocate PPL header buffer\n");
diff --git a/util.c b/util.c
index c1c85095..bab7d6c1 100644
--- a/util.c
+++ b/util.c
@@ -30,6 +30,7 @@
 #include	<sys/un.h>
 #include	<sys/resource.h>
 #include	<sys/vfs.h>
+#include	<sys/mman.h>
 #include	<linux/magic.h>
 #include	<poll.h>
 #include	<ctype.h>
@@ -2325,3 +2326,51 @@ void set_hooks(void)
 	set_dlm_hooks();
 	set_cmap_hooks();
 }
+
+int zero_disk_range(int fd, unsigned long long sector, size_t count)
+{
+	int ret = 0;
+	int fd_zero;
+	void *addr = NULL;
+	size_t written = 0;
+	size_t len = count * 512;
+	ssize_t n;
+
+	fd_zero = open("/dev/zero", O_RDONLY);
+	if (fd_zero < 0) {
+		pr_err("Cannot open /dev/zero\n");
+		return -1;
+	}
+
+	if (lseek64(fd, sector * 512, SEEK_SET) < 0) {
+		ret = -errno;
+		pr_err("Failed to seek offset for zeroing\n");
+		goto out;
+	}
+
+	addr = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd_zero, 0);
+
+	if (addr == MAP_FAILED) {
+		ret = -errno;
+		pr_err("Mapping /dev/zero failed\n");
+		goto out;
+	}
+
+	do {
+		n = write(fd, addr + written, len - written);
+		if (n < 0) {
+			if (errno == EINTR)
+				continue;
+			ret = -errno;
+			pr_err("Zeroing disk range failed\n");
+			break;
+		}
+		written += n;
+	} while (written != len);
+
+	munmap(addr, len);
+
+out:
+	close(fd_zero);
+	return ret;
+}
-- 
2.13.5


* [PATCH 06/12] imsm: switch to multiple ppls automatically during assemble
  2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
                   ` (4 preceding siblings ...)
  2017-09-28 12:41 ` [PATCH 05/12] Zeroout whole ppl space during creation/force assemble Pawel Baldysiak
@ 2017-09-28 12:41 ` Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 07/12] Grow: fix switching on PPL during recovery Pawel Baldysiak
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Pawel Baldysiak @ 2017-09-28 12:41 UTC (permalink / raw)
  To: jes.sorensen; +Cc: linux-raid, Pawel Baldysiak, Artur Paszkiewicz

If the user has an array with a single PPL, update the metadata to use
multiple PPLs.

Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
---
 super-intel.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/super-intel.c b/super-intel.c
index e1862d4a..6c48725d 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -6184,6 +6184,36 @@ static int validate_ppl_imsm(struct supertype *st, struct mdinfo *info,
 out:
 	free(buf);
 
+	/*
+	 * Update metadata to use multiple PPLs area (1MB).
+	 * This is done once for all RAID members
+	 */
+	if (info->consistency_policy == CONSISTENCY_POLICY_PPL &&
+	    info->ppl_size != (MULTIPLE_PPL_AREA_SIZE_IMSM >> 9)) {
+		char subarray[20];
+		struct mdinfo *member_dev;
+
+		sprintf(subarray, "%d", info->container_member);
+
+		if (mdmon_running(st->container_devnm))
+			st->update_tail = &st->updates;
+
+		if (st->ss->update_subarray(st, subarray, "ppl", NULL)) {
+			pr_err("Failed to update subarray %s\n",
+			      subarray);
+		} else {
+			if (st->update_tail)
+				flush_metadata_updates(st);
+			else
+				st->ss->sync_metadata(st);
+			info->ppl_size = (MULTIPLE_PPL_AREA_SIZE_IMSM >> 9);
+			for (member_dev = info->devs; member_dev;
+			     member_dev = member_dev->next)
+				member_dev->ppl_size =
+				    (MULTIPLE_PPL_AREA_SIZE_IMSM >> 9);
+		}
+	}
+
 	if (ret == 1 && map->map_state == IMSM_T_STATE_UNINITIALIZED)
 		return st->ss->write_init_ppl(st, info, d->fd);
 
-- 
2.13.5


* [PATCH 07/12] Grow: fix switching on PPL during recovery
  2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
                   ` (5 preceding siblings ...)
  2017-09-28 12:41 ` [PATCH 06/12] imsm: switch to multiple ppls automatically during assemble Pawel Baldysiak
@ 2017-09-28 12:41 ` Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 08/12] imsm: don't skip resync when an invalid ppl header is found Pawel Baldysiak
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Pawel Baldysiak @ 2017-09-28 12:41 UTC (permalink / raw)
  To: jes.sorensen; +Cc: linux-raid, Pawel Baldysiak

If a RAID member is not in sync, it is skipped while enabling PPL.
This is not correct, since the drive that we are currently recovering to
does not get ppl_size and ppl_sector properly set in sysfs.
Remove this skipping, so all drives are updated when turning on the PPL.

Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
---
 Grow.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/Grow.c b/Grow.c
index 534ba801..1d33a4eb 100644
--- a/Grow.c
+++ b/Grow.c
@@ -637,9 +637,6 @@ int Grow_consistency_policy(char *devname, int fd, struct context *c, struct sha
 			int dfd;
 			char *devpath;
 
-			if ((sd->disk.state & (1 << MD_DISK_SYNC)) == 0)
-				continue;
-
 			devpath = map_dev(sd->disk.major, sd->disk.minor, 0);
 			dfd = dev_open(devpath, O_RDWR);
 			if (dfd < 0) {
-- 
2.13.5


* [PATCH 08/12] imsm: don't skip resync when an invalid ppl header is found
  2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
                   ` (6 preceding siblings ...)
  2017-09-28 12:41 ` [PATCH 07/12] Grow: fix switching on PPL during recovery Pawel Baldysiak
@ 2017-09-28 12:41 ` Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 09/12] imsm: Write empty PPL header if assembling regular clean array Pawel Baldysiak
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Pawel Baldysiak @ 2017-09-28 12:41 UTC (permalink / raw)
  To: jes.sorensen; +Cc: linux-raid, Artur Paszkiewicz

From: Artur Paszkiewicz <artur.paszkiewicz@intel.com>

If validate_ppl_imsm() detects an invalid ppl header, it will be
overwritten with a valid, empty ppl header. But if we are assembling an
array after an unclean shutdown, this will cause the kernel to skip the
resync after ppl recovery. We don't want that: if there was an invalid
ppl, it is best to assume that ppl recovery alone is not enough to make
the array consistent, and a full resync should be performed. So when
overwriting the invalid ppl, add one ppl_header_entry with a wrong
checksum. This will prevent the kernel from skipping the resync after
ppl recovery.

Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
---
 super-intel.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/super-intel.c b/super-intel.c
index 6c48725d..016a6028 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -6080,6 +6080,16 @@ static int write_init_ppl_imsm(struct supertype *st, struct mdinfo *info, int fd
 	ppl_hdr = buf;
 	memset(ppl_hdr->reserved, 0xff, PPL_HDR_RESERVED);
 	ppl_hdr->signature = __cpu_to_le32(super->anchor->orig_family_num);
+
+	if (info->mismatch_cnt) {
+		/*
+		 * We are overwriting an invalid ppl. Make one entry with wrong
+		 * checksum to prevent the kernel from skipping resync.
+		 */
+		ppl_hdr->entries_count = __cpu_to_le32(1);
+		ppl_hdr->entries[0].checksum = ~0;
+	}
+
 	ppl_hdr->checksum = __cpu_to_le32(~crc32c_le(~0, buf, PPL_HEADER_SIZE));
 
 	if (lseek64(fd, info->ppl_sector * 512, SEEK_SET) < 0) {
@@ -6214,8 +6224,12 @@ out:
 		}
 	}
 
-	if (ret == 1 && map->map_state == IMSM_T_STATE_UNINITIALIZED)
-		return st->ss->write_init_ppl(st, info, d->fd);
+	if (ret == 1) {
+		if (map->map_state == IMSM_T_STATE_UNINITIALIZED)
+			ret = st->ss->write_init_ppl(st, info, d->fd);
+		else
+			info->mismatch_cnt++;
+	}
 
 	return ret;
 }
-- 
2.13.5


* [PATCH 09/12] imsm: Write empty PPL header if assembling regular clean array.
  2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
                   ` (7 preceding siblings ...)
  2017-09-28 12:41 ` [PATCH 08/12] imsm: don't skip resync when an invalid ppl header is found Pawel Baldysiak
@ 2017-09-28 12:41 ` Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 10/12] imsm: always do ppl recovery when starting a rebuilding array Pawel Baldysiak
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Pawel Baldysiak @ 2017-09-28 12:41 UTC (permalink / raw)
  To: jes.sorensen; +Cc: linux-raid, Pawel Baldysiak

If the array was initially assembled with a kernel without PPL support,
the initial header was never written to the drive.
If the initial resync has completed and the system is rebooted into a
kernel with PPL support, mdadm refuses to assemble a normal clean array
due to the lack of a valid PPL.
Write an empty header when assembling a normal clean array, so its
assembly is no longer blocked.

Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
---
 super-intel.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/super-intel.c b/super-intel.c
index 016a6028..2e8eb0d4 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -6225,7 +6225,9 @@ out:
 	}
 
 	if (ret == 1) {
-		if (map->map_state == IMSM_T_STATE_UNINITIALIZED)
+		if (map->map_state == IMSM_T_STATE_UNINITIALIZED ||
+		   (map->map_state == IMSM_T_STATE_NORMAL &&
+		   !(dev->vol.dirty & RAIDVOL_DIRTY)))
 			ret = st->ss->write_init_ppl(st, info, d->fd);
 		else
 			info->mismatch_cnt++;
-- 
2.13.5


* [PATCH 10/12] imsm: always do ppl recovery when starting a rebuilding array
  2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
                   ` (8 preceding siblings ...)
  2017-09-28 12:41 ` [PATCH 09/12] imsm: Write empty PPL header if assembling regular clean array Pawel Baldysiak
@ 2017-09-28 12:41 ` Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 11/12] imsm: use correct map when validating ppl Pawel Baldysiak
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Pawel Baldysiak @ 2017-09-28 12:41 UTC (permalink / raw)
  To: jes.sorensen; +Cc: linux-raid, Artur Paszkiewicz

From: Artur Paszkiewicz <artur.paszkiewicz@intel.com>

Set resync_start to 0 when starting a rebuilding array to make the
kernel perform ppl recovery before the rebuild.

Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
---
 super-intel.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/super-intel.c b/super-intel.c
index 2e8eb0d4..56c60423 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -7756,6 +7756,9 @@ static struct mdinfo *container_content_imsm(struct supertype *st, char *subarra
 						map->blocks_per_strip;
 				info_d->ppl_sector = this->ppl_sector;
 				info_d->ppl_size = this->ppl_size;
+				if (this->consistency_policy == CONSISTENCY_POLICY_PPL &&
+				    recovery_start == 0)
+					this->resync_start = 0;
 			} else {
 				info_d->component_size = blocks_per_member(map);
 			}
-- 
2.13.5


* [PATCH 11/12] imsm: use correct map when validating ppl
  2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
                   ` (9 preceding siblings ...)
  2017-09-28 12:41 ` [PATCH 10/12] imsm: always do ppl recovery when starting a rebuilding array Pawel Baldysiak
@ 2017-09-28 12:41 ` Pawel Baldysiak
  2017-09-28 12:41 ` [PATCH 12/12] imsm: write initial ppl on a disk added for rebuild Pawel Baldysiak
  2017-10-02 20:16 ` [PATCH 00/12] Multiple PPL support and PPL bugfixes Jes Sorensen
  12 siblings, 0 replies; 14+ messages in thread
From: Pawel Baldysiak @ 2017-09-28 12:41 UTC (permalink / raw)
  To: jes.sorensen; +Cc: linux-raid, Artur Paszkiewicz

From: Artur Paszkiewicz <artur.paszkiewicz@intel.com>

Use the first map to get the correct disk when rebuilding and not the
failed disk from the second map.

Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
---
 super-intel.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/super-intel.c b/super-intel.c
index 56c60423..8f89930a 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -6119,7 +6119,6 @@ static int validate_ppl_imsm(struct supertype *st, struct mdinfo *info,
 	struct ppl_header *ppl_hdr;
 	__u32 crc;
 	struct imsm_dev *dev;
-	struct imsm_map *map;
 	__u32 idx;
 	unsigned int i;
 	unsigned long long ppl_offset = 0;
@@ -6134,8 +6133,7 @@ static int validate_ppl_imsm(struct supertype *st, struct mdinfo *info,
 	}
 
 	dev = get_imsm_dev(super, info->container_member);
-	map = get_imsm_map(dev, MAP_X);
-	idx = get_imsm_disk_idx(dev, disk->disk.raid_disk, MAP_X);
+	idx = get_imsm_disk_idx(dev, disk->disk.raid_disk, MAP_0);
 	d = get_imsm_dl_disk(super, idx);
 
 	if (!d || d->index < 0 || is_failed(&d->disk))
@@ -6225,6 +6223,8 @@ out:
 	}
 
 	if (ret == 1) {
+		struct imsm_map *map = get_imsm_map(dev, MAP_X);
+
 		if (map->map_state == IMSM_T_STATE_UNINITIALIZED ||
 		   (map->map_state == IMSM_T_STATE_NORMAL &&
 		   !(dev->vol.dirty & RAIDVOL_DIRTY)))
-- 
2.13.5


* [PATCH 12/12] imsm: write initial ppl on a disk added for rebuild
  2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
                   ` (10 preceding siblings ...)
  2017-09-28 12:41 ` [PATCH 11/12] imsm: use correct map when validating ppl Pawel Baldysiak
@ 2017-09-28 12:41 ` Pawel Baldysiak
  2017-10-02 20:16 ` [PATCH 00/12] Multiple PPL support and PPL bugfixes Jes Sorensen
  12 siblings, 0 replies; 14+ messages in thread
From: Pawel Baldysiak @ 2017-09-28 12:41 UTC (permalink / raw)
  To: jes.sorensen; +Cc: linux-raid, Artur Paszkiewicz

From: Artur Paszkiewicz <artur.paszkiewicz@intel.com>

When a rebuild is initiated by the UEFI driver, it is possible that the
new disk will not contain a valid ppl header. Just write the initial ppl
and don't abort the assembly.

Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
---
 super-intel.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/super-intel.c b/super-intel.c
index 8f89930a..53385018 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -6227,7 +6227,10 @@ out:
 
 		if (map->map_state == IMSM_T_STATE_UNINITIALIZED ||
 		   (map->map_state == IMSM_T_STATE_NORMAL &&
-		   !(dev->vol.dirty & RAIDVOL_DIRTY)))
+		   !(dev->vol.dirty & RAIDVOL_DIRTY)) ||
+		   (dev->vol.migr_state == MIGR_REBUILD &&
+		    dev->vol.curr_migr_unit == 0 &&
+		    get_imsm_disk_idx(dev, disk->disk.raid_disk, MAP_1) != idx))
 			ret = st->ss->write_init_ppl(st, info, d->fd);
 		else
 			info->mismatch_cnt++;
-- 
2.13.5


* Re: [PATCH 00/12] Multiple PPL support and PPL bugfixes
  2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
                   ` (11 preceding siblings ...)
  2017-09-28 12:41 ` [PATCH 12/12] imsm: write initial ppl on a disk added for rebuild Pawel Baldysiak
@ 2017-10-02 20:16 ` Jes Sorensen
  12 siblings, 0 replies; 14+ messages in thread
From: Jes Sorensen @ 2017-10-02 20:16 UTC (permalink / raw)
  To: Pawel Baldysiak; +Cc: linux-raid

On 09/28/2017 08:41 AM, Pawel Baldysiak wrote:
> This patchset introduces userspace support for multiple PPLs written in a
> circular buffer. It also contains a few general bug fixes for PPL and a
> couple of IMSM-specific compatibility patches related to the new PPL.
> 
> Artur Paszkiewicz (5):
>    Don't abort starting the array if kernel does not support ppl
>    imsm: don't skip resync when an invalid ppl header is found
>    imsm: always do ppl recovery when starting a rebuilding array
>    imsm: use correct map when validating ppl
>    imsm: write initial ppl on a disk added for rebuild
> 
> Pawel Baldysiak (7):
>    super1: Add support for multiple-ppls
>    imsm: Add support for multiple ppls
>    imsm: validate multiple ppls during assemble
>    Zeroout whole ppl space during creation/force assemble
>    imsm: switch to multiple ppls automatically during assemble
>    Grow: fix switching on PPL during recovery
>    imsm: Write empty PPL header if assembling regular clean array.
> 
>   Grow.c        |   3 -
>   managemon.c   |  11 +++-
>   mdadm.h       |   1 +
>   super-intel.c | 181 ++++++++++++++++++++++++++++++++++++++++++++--------------
>   super1.c      |  70 ++++++++++++++---------
>   sysfs.c       |   6 +-
>   util.c        |  49 ++++++++++++++++
>   7 files changed, 241 insertions(+), 80 deletions(-)

Applied!

Thanks,
Jes


end of thread

Thread overview: 14+ messages
2017-09-28 12:41 [PATCH 00/12] Multiple PPL support and PPL bugfixes Pawel Baldysiak
2017-09-28 12:41 ` [PATCH 01/12] Don't abort starting the array if kernel does not support ppl Pawel Baldysiak
2017-09-28 12:41 ` [PATCH 02/12] super1: Add support for multiple-ppls Pawel Baldysiak
2017-09-28 12:41 ` [PATCH 03/12] imsm: Add support for multiple ppls Pawel Baldysiak
2017-09-28 12:41 ` [PATCH 04/12] imsm: validate multiple ppls during assemble Pawel Baldysiak
2017-09-28 12:41 ` [PATCH 05/12] Zeroout whole ppl space during creation/force assemble Pawel Baldysiak
2017-09-28 12:41 ` [PATCH 06/12] imsm: switch to multiple ppls automatically during assemble Pawel Baldysiak
2017-09-28 12:41 ` [PATCH 07/12] Grow: fix switching on PPL during recovery Pawel Baldysiak
2017-09-28 12:41 ` [PATCH 08/12] imsm: don't skip resync when an invalid ppl header is found Pawel Baldysiak
2017-09-28 12:41 ` [PATCH 09/12] imsm: Write empty PPL header if assembling regular clean array Pawel Baldysiak
2017-09-28 12:41 ` [PATCH 10/12] imsm: always do ppl recovery when starting a rebuilding array Pawel Baldysiak
2017-09-28 12:41 ` [PATCH 11/12] imsm: use correct map when validating ppl Pawel Baldysiak
2017-09-28 12:41 ` [PATCH 12/12] imsm: write initial ppl on a disk added for rebuild Pawel Baldysiak
2017-10-02 20:16 ` [PATCH 00/12] Multiple PPL support and PPL bugfixes Jes Sorensen
