public inbox for linux-btrfs@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH v5 0/3] New RAID library supporting up to six parities
@ 2014-02-24 21:15 Andrea Mazzoleni
  2014-02-24 21:15 ` [PATCH v5 2/3] fs: btrfs: Adds new par3456 modes to support " Andrea Mazzoleni
  2014-02-24 21:15 ` [PATCH v5 3/3] btrfs-progs: " Andrea Mazzoleni
  0 siblings, 2 replies; 3+ messages in thread
From: Andrea Mazzoleni @ 2014-02-24 21:15 UTC (permalink / raw)
  To: clm, jbacik, neilb; +Cc: linux-kernel, linux-raid, linux-btrfs, amadvance

Hi,

Here is a new version of the RAID library, finally with *working* btrfs support!

It includes patches for both the kernel and btrfs-progs to add the new parity
modes "par3", "par4", "par5" and "par6", working similarly to the existing
"raid5" and "raid6" ones.

The patches apply cleanly to kernel v3.14-rc3 and btrfs-progs v3.12.

If you are willing to test it, you can do something like this:

mkfs.btrfs -d par3 -m par3 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mount /dev/sdb1 /mnt/tmp
...copy something to /mnt/tmp...
md5deep -r /mnt/tmp > test_before.hash
umount /mnt/tmp
dd if=/dev/urandom of=/dev/sdc1 count=...
dd if=/dev/urandom of=/dev/sdd1 count=...
dd if=/dev/urandom of=/dev/sde1 count=...
mount -o degraded /dev/sdb1 /mnt/tmp
md5deep -r /mnt/tmp > test_after.hash
umount /mnt/tmp
diff -u test_before.hash test_after.hash && echo OK

I ran various tests like this, and everything seems to work.

The first patch is the new RAID library for the kernel, supporting up to six
parities. It's verified with automated tests that reach 99.3% code coverage.
It also passes clang and valgrind checks with no errors.

It applies cleanly to kernel v3.14-rc3, but it should work with any other
version because it consists only of new files. The only change to existing
kernel code is the new CONFIG_RAID_CAUCHY option in the "lib" configuration
section.

For review I recommend starting from include/raid/raid.h, which describes
the new generic raid interface, and then continuing with lib/raid/raid.c, where
the interface is implemented. You can start by reading the documentation about
the RAID mathematics used, keeping in mind that its correctness is proven both
mathematically and by brute force by the test programs.
You can then review raid_gen() and raid_rec(), the high-level forwarders
to the generic and optimized asm functions that generate parity
and recover data. Their internal structure is very similar to the functions
in the existing RAID6 code; the main difference is the use of a generic matrix
of parity coefficients.
All these functions are verified by the test programs with full line and
branch coverage, so you can concentrate the review on their
structure rather than on the computation and asm details.
Finally, you can review the test programs in lib/raid/test to ensure that
everything is really tested; the coverage report can help you with that.

The second patch contains the kernel btrfs modifications. Besides adding the
new parity modes, it also removes a lot of code about RAID details that are
now handled by the new raid library.

It applies cleanly to kernel v3.14-rc3. You can also use it with previous
kernels, with an obvious adjustment in fs/btrfs/ctree.h.

For review you can start from the diff and check it chunk by chunk.
Likely the two most complex changes are where the new raid_gen() and raid_rec()
calls replace big chunks of code. The rest is mostly straightforward,
as I just extended all the checks about RAID5 and RAID6 to six parities.
It certainly needs a more careful review, though, as my knowledge of btrfs
internals is very limited.

The third patch contains the btrfs-progs modifications. They just match
the kernel changes, and the same considerations apply.

It applies cleanly to btrfs-progs v3.12.

Please let me know what you think, and whether it can be considered for
inclusion or something more is required.

If some patch is missing due to the mailing list size limit, you can download them all from:

  http://snapraid.sourceforge.net/linux/v5/

You can see the code coverage analysis generated by lcov at:

  http://snapraid.sourceforge.net/linux/v5/coverage/

Changes from v4 to v5:
 - Adds more comments to the libraid patch.
 - Reviews and completes the btrfs patch. The previous patch was not
   really working due to some missing pieces.
 - Adds a new patch for btrfs-progs to extend the mkfs.btrfs
   functionality to create filesystems with up to six parity levels.
 - Removes the async_tx patch, as it is not yet ready for inclusion.

Changes from v3 to v4:
 - Adds a code coverage test.
 - Adds a matrix inversion test.
 - Everything updated to kernel 3.13.

Changes from v2 to v3:
 - Adds a new patch changing async_tx to use the new raid library
   for synchronous cases and to export a similar interface.
   Also modified md/raid5.c to use the new async_tx interface.
   This is just example code not meant for inclusion!
 - Renamed raid_par() to raid_gen() to better match existing naming.
 - Removed raid_sort() and replaced it with raid_insert(), which
   builds the vector already in order instead of sorting it later.
   This function is declared in the new raid/helper.h.
 - Better documentation in the raid.h/c files. Start from raid.h
   to see the documentation of the new interface.

Changes from v1 to v2:
 - Adds a patch to btrfs to extend its support to more than double parity.
   This is just example code not meant for inclusion!
 - Changes the main raid_rec() interface to merge the failed data
   and parity index vectors. This better matches kernel usage.
 - Uses alloc_pages_exact() instead of __get_free_pages().
 - Removes unnecessary register loads from par1_sse().
 - Converts the asm_begin/end() macros to inlined functions.
 - Fixes some more checkpatch.pl warnings.
 - Other minor style/comment changes.

Andrea Mazzoleni (2):
  lib: raid: New RAID library supporting up to six parities
  fs: btrfs: Adds new par3456 modes to support up to six parities

 fs/btrfs/Kconfig             |    1 +
 fs/btrfs/ctree.h             |   50 +-
 fs/btrfs/disk-io.c           |    7 +-
 fs/btrfs/extent-tree.c       |   67 +-
 fs/btrfs/inode.c             |    3 +-
 fs/btrfs/raid56.c            |  273 +++-----
 fs/btrfs/raid56.h            |   19 +-
 fs/btrfs/scrub.c             |    3 +-
 fs/btrfs/volumes.c           |  144 ++--
 include/linux/raid/helper.h  |   32 +
 include/linux/raid/raid.h    |   87 +++
 include/trace/events/btrfs.h |   16 +-
 include/uapi/linux/btrfs.h   |   19 +-
 lib/Kconfig                  |   17 +
 lib/Makefile                 |    1 +
 lib/raid/.gitignore          |    3 +
 lib/raid/Makefile            |   14 +
 lib/raid/cpu.h               |   44 ++
 lib/raid/gf.h                |  109 +++
 lib/raid/helper.c            |   38 +
 lib/raid/int.c               |  567 +++++++++++++++
 lib/raid/internal.h          |  148 ++++
 lib/raid/mktables.c          |  383 +++++++++++
 lib/raid/module.c            |  458 ++++++++++++
 lib/raid/raid.c              |  492 +++++++++++++
 lib/raid/test/Makefile       |   72 ++
 lib/raid/test/combo.h        |  155 +++++
 lib/raid/test/fulltest.c     |   79 +++
 lib/raid/test/invtest.c      |  172 +++++
 lib/raid/test/memory.c       |   79 +++
 lib/raid/test/memory.h       |   78 +++
 lib/raid/test/selftest.c     |   44 ++
 lib/raid/test/speedtest.c    |  578 ++++++++++++++++
 lib/raid/test/test.c         |  314 +++++++++
 lib/raid/test/test.h         |   59 ++
 lib/raid/test/usermode.h     |   95 +++
 lib/raid/test/xor.c          |   41 ++
 lib/raid/x86.c               | 1565 ++++++++++++++++++++++++++++++++++++++++++
 38 files changed, 6037 insertions(+), 289 deletions(-)
 create mode 100644 include/linux/raid/helper.h
 create mode 100644 include/linux/raid/raid.h
 create mode 100644 lib/raid/.gitignore
 create mode 100644 lib/raid/Makefile
 create mode 100644 lib/raid/cpu.h
 create mode 100644 lib/raid/gf.h
 create mode 100644 lib/raid/helper.c
 create mode 100644 lib/raid/int.c
 create mode 100644 lib/raid/internal.h
 create mode 100644 lib/raid/mktables.c
 create mode 100644 lib/raid/module.c
 create mode 100644 lib/raid/raid.c
 create mode 100644 lib/raid/test/Makefile
 create mode 100644 lib/raid/test/combo.h
 create mode 100644 lib/raid/test/fulltest.c
 create mode 100644 lib/raid/test/invtest.c
 create mode 100644 lib/raid/test/memory.c
 create mode 100644 lib/raid/test/memory.h
 create mode 100644 lib/raid/test/selftest.c
 create mode 100644 lib/raid/test/speedtest.c
 create mode 100644 lib/raid/test/test.c
 create mode 100644 lib/raid/test/test.h
 create mode 100644 lib/raid/test/usermode.h
 create mode 100644 lib/raid/test/xor.c
 create mode 100644 lib/raid/x86.c

-- 
1.7.12.1


^ permalink raw reply	[flat|nested] 3+ messages in thread

* [PATCH v5 2/3] fs: btrfs: Adds new par3456 modes to support up to six parities
  2014-02-24 21:15 [PATCH v5 0/3] New RAID library supporting up to six parities Andrea Mazzoleni
@ 2014-02-24 21:15 ` Andrea Mazzoleni
  2014-02-24 21:15 ` [PATCH v5 3/3] btrfs-progs: " Andrea Mazzoleni
  1 sibling, 0 replies; 3+ messages in thread
From: Andrea Mazzoleni @ 2014-02-24 21:15 UTC (permalink / raw)
  To: clm, jbacik, neilb; +Cc: linux-kernel, linux-raid, linux-btrfs, amadvance

Removes the RAID logic now handled by the new raid_gen() and raid_rec() calls,
which hide all the details.
Replaces the faila/failb failure indexes with a fail[] vector that keeps
track of up to six failures.
Replaces the existing BLOCK_GROUP_RAID5/6 flags with new PAR1/2/3/4/5/6 ones
that handle up to six parities, and updates all the code to use them.

Signed-off-by: Andrea Mazzoleni <amadvance@gmail.com>
---
 fs/btrfs/Kconfig             |   1 +
 fs/btrfs/ctree.h             |  50 ++++++--
 fs/btrfs/disk-io.c           |   7 +-
 fs/btrfs/extent-tree.c       |  67 +++++++----
 fs/btrfs/inode.c             |   3 +-
 fs/btrfs/raid56.c            | 273 ++++++++++++++-----------------------------
 fs/btrfs/raid56.h            |  19 ++-
 fs/btrfs/scrub.c             |   3 +-
 fs/btrfs/volumes.c           | 144 +++++++++++++++--------
 include/trace/events/btrfs.h |  16 ++-
 include/uapi/linux/btrfs.h   |  19 ++-
 11 files changed, 313 insertions(+), 289 deletions(-)

diff --git a/fs/btrfs/Kconfig b/fs/btrfs/Kconfig
index a66768e..fb011b8 100644
--- a/fs/btrfs/Kconfig
+++ b/fs/btrfs/Kconfig
@@ -6,6 +6,7 @@ config BTRFS_FS
 	select ZLIB_DEFLATE
 	select LZO_COMPRESS
 	select LZO_DECOMPRESS
+	select RAID_CAUCHY
 	select RAID6_PQ
 	select XOR_BLOCKS
 
diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 2c1a42c..7e6d2bf 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -522,6 +522,7 @@ struct btrfs_super_block {
 #define BTRFS_FEATURE_INCOMPAT_RAID56		(1ULL << 7)
 #define BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA	(1ULL << 8)
 #define BTRFS_FEATURE_INCOMPAT_NO_HOLES		(1ULL << 9)
+#define BTRFS_FEATURE_INCOMPAT_PAR3456		(1ULL << 10)
 
 #define BTRFS_FEATURE_COMPAT_SUPP		0ULL
 #define BTRFS_FEATURE_COMPAT_SAFE_SET		0ULL
@@ -539,7 +540,8 @@ struct btrfs_super_block {
 	 BTRFS_FEATURE_INCOMPAT_RAID56 |		\
 	 BTRFS_FEATURE_INCOMPAT_EXTENDED_IREF |		\
 	 BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA |	\
-	 BTRFS_FEATURE_INCOMPAT_NO_HOLES)
+	 BTRFS_FEATURE_INCOMPAT_NO_HOLES |		\
+	 BTRFS_FEATURE_INCOMPAT_PAR3456)
 
 #define BTRFS_FEATURE_INCOMPAT_SAFE_SET			\
 	(BTRFS_FEATURE_INCOMPAT_EXTENDED_IREF)
@@ -983,8 +985,39 @@ struct btrfs_dev_replace_item {
 #define BTRFS_BLOCK_GROUP_RAID1		(1ULL << 4)
 #define BTRFS_BLOCK_GROUP_DUP		(1ULL << 5)
 #define BTRFS_BLOCK_GROUP_RAID10	(1ULL << 6)
-#define BTRFS_BLOCK_GROUP_RAID5         (1ULL << 7)
-#define BTRFS_BLOCK_GROUP_RAID6         (1ULL << 8)
+#define BTRFS_BLOCK_GROUP_PAR1     (1ULL << 7)
+#define BTRFS_BLOCK_GROUP_PAR2     (1ULL << 8)
+#define BTRFS_BLOCK_GROUP_PAR3     (1ULL << 9)
+#define BTRFS_BLOCK_GROUP_PAR4     (1ULL << 10)
+#define BTRFS_BLOCK_GROUP_PAR5     (1ULL << 11)
+#define BTRFS_BLOCK_GROUP_PAR6     (1ULL << 12)
+
+/* tags for all the parity groups */
+#define BTRFS_BLOCK_GROUP_PARX (BTRFS_BLOCK_GROUP_PAR1 | \
+				BTRFS_BLOCK_GROUP_PAR2 | \
+				BTRFS_BLOCK_GROUP_PAR3 | \
+				BTRFS_BLOCK_GROUP_PAR4 | \
+				BTRFS_BLOCK_GROUP_PAR5 | \
+				BTRFS_BLOCK_GROUP_PAR6)
+
+/* gets the parity number from the parity group */
+static inline int btrfs_flags_par(unsigned group)
+{
+	switch (group & BTRFS_BLOCK_GROUP_PARX) {
+	case BTRFS_BLOCK_GROUP_PAR1: return 1;
+	case BTRFS_BLOCK_GROUP_PAR2: return 2;
+	case BTRFS_BLOCK_GROUP_PAR3: return 3;
+	case BTRFS_BLOCK_GROUP_PAR4: return 4;
+	case BTRFS_BLOCK_GROUP_PAR5: return 5;
+	case BTRFS_BLOCK_GROUP_PAR6: return 6;
+	}
+
+	/* ensures that no multiple groups are defined */
+	BUG_ON(group & BTRFS_BLOCK_GROUP_PARX);
+
+	return 0;
+}
+
 #define BTRFS_BLOCK_GROUP_RESERVED	BTRFS_AVAIL_ALLOC_BIT_SINGLE
 
 enum btrfs_raid_types {
@@ -993,8 +1026,12 @@ enum btrfs_raid_types {
 	BTRFS_RAID_DUP,
 	BTRFS_RAID_RAID0,
 	BTRFS_RAID_SINGLE,
-	BTRFS_RAID_RAID5,
-	BTRFS_RAID_RAID6,
+	BTRFS_RAID_PAR1,
+	BTRFS_RAID_PAR2,
+	BTRFS_RAID_PAR3,
+	BTRFS_RAID_PAR4,
+	BTRFS_RAID_PAR5,
+	BTRFS_RAID_PAR6,
 	BTRFS_NR_RAID_TYPES
 };
 
@@ -1004,8 +1041,7 @@ enum btrfs_raid_types {
 
 #define BTRFS_BLOCK_GROUP_PROFILE_MASK	(BTRFS_BLOCK_GROUP_RAID0 |   \
 					 BTRFS_BLOCK_GROUP_RAID1 |   \
-					 BTRFS_BLOCK_GROUP_RAID5 |   \
-					 BTRFS_BLOCK_GROUP_RAID6 |   \
+					 BTRFS_BLOCK_GROUP_PARX |    \
 					 BTRFS_BLOCK_GROUP_DUP |     \
 					 BTRFS_BLOCK_GROUP_RAID10)
 /*
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 81ea553..9931cf3 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -3337,12 +3337,11 @@ int btrfs_calc_num_tolerated_disk_barrier_failures(
 					num_tolerated_disk_barrier_failures = 0;
 				else if (num_tolerated_disk_barrier_failures > 1) {
 					if (flags & (BTRFS_BLOCK_GROUP_RAID1 |
-					    BTRFS_BLOCK_GROUP_RAID5 |
 					    BTRFS_BLOCK_GROUP_RAID10)) {
 						num_tolerated_disk_barrier_failures = 1;
-					} else if (flags &
-						   BTRFS_BLOCK_GROUP_RAID6) {
-						num_tolerated_disk_barrier_failures = 2;
+					} else if (flags & BTRFS_BLOCK_GROUP_PARX) {
+						num_tolerated_disk_barrier_failures
+						 = btrfs_flags_par(flags);
 					}
 				}
 			}
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 32312e0..a5d1f9d 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -3516,21 +3516,35 @@ static u64 btrfs_reduce_alloc_profile(struct btrfs_root *root, u64 flags)
 	/* First, mask out the RAID levels which aren't possible */
 	if (num_devices == 1)
 		flags &= ~(BTRFS_BLOCK_GROUP_RAID1 | BTRFS_BLOCK_GROUP_RAID0 |
-			   BTRFS_BLOCK_GROUP_RAID5);
+			   BTRFS_BLOCK_GROUP_PAR1);
 	if (num_devices < 3)
-		flags &= ~BTRFS_BLOCK_GROUP_RAID6;
+		flags &= ~BTRFS_BLOCK_GROUP_PAR2;
 	if (num_devices < 4)
-		flags &= ~BTRFS_BLOCK_GROUP_RAID10;
+		flags &= ~(BTRFS_BLOCK_GROUP_RAID10 | BTRFS_BLOCK_GROUP_PAR3);
+	if (num_devices < 5)
+		flags &= ~BTRFS_BLOCK_GROUP_PAR4;
+	if (num_devices < 6)
+		flags &= ~BTRFS_BLOCK_GROUP_PAR5;
+	if (num_devices < 7)
+		flags &= ~BTRFS_BLOCK_GROUP_PAR6;
 
 	tmp = flags & (BTRFS_BLOCK_GROUP_DUP | BTRFS_BLOCK_GROUP_RAID0 |
-		       BTRFS_BLOCK_GROUP_RAID1 | BTRFS_BLOCK_GROUP_RAID5 |
-		       BTRFS_BLOCK_GROUP_RAID6 | BTRFS_BLOCK_GROUP_RAID10);
+		       BTRFS_BLOCK_GROUP_RAID1 | BTRFS_BLOCK_GROUP_PARX |
+		       BTRFS_BLOCK_GROUP_RAID10);
 	flags &= ~tmp;
 
-	if (tmp & BTRFS_BLOCK_GROUP_RAID6)
-		tmp = BTRFS_BLOCK_GROUP_RAID6;
-	else if (tmp & BTRFS_BLOCK_GROUP_RAID5)
-		tmp = BTRFS_BLOCK_GROUP_RAID5;
+	if (tmp & BTRFS_BLOCK_GROUP_PAR6)
+		tmp = BTRFS_BLOCK_GROUP_PAR6;
+	else if (tmp & BTRFS_BLOCK_GROUP_PAR5)
+		tmp = BTRFS_BLOCK_GROUP_PAR5;
+	else if (tmp & BTRFS_BLOCK_GROUP_PAR4)
+		tmp = BTRFS_BLOCK_GROUP_PAR4;
+	else if (tmp & BTRFS_BLOCK_GROUP_PAR3)
+		tmp = BTRFS_BLOCK_GROUP_PAR3;
+	else if (tmp & BTRFS_BLOCK_GROUP_PAR2)
+		tmp = BTRFS_BLOCK_GROUP_PAR2;
+	else if (tmp & BTRFS_BLOCK_GROUP_PAR1)
+		tmp = BTRFS_BLOCK_GROUP_PAR1;
 	else if (tmp & BTRFS_BLOCK_GROUP_RAID10)
 		tmp = BTRFS_BLOCK_GROUP_RAID10;
 	else if (tmp & BTRFS_BLOCK_GROUP_RAID1)
@@ -3769,8 +3783,7 @@ static u64 get_system_chunk_thresh(struct btrfs_root *root, u64 type)
 
 	if (type & (BTRFS_BLOCK_GROUP_RAID10 |
 		    BTRFS_BLOCK_GROUP_RAID0 |
-		    BTRFS_BLOCK_GROUP_RAID5 |
-		    BTRFS_BLOCK_GROUP_RAID6))
+		    BTRFS_BLOCK_GROUP_PARX))
 		num_dev = root->fs_info->fs_devices->rw_devices;
 	else if (type & BTRFS_BLOCK_GROUP_RAID1)
 		num_dev = 2;
@@ -6104,10 +6117,18 @@ int __get_raid_index(u64 flags)
 		return BTRFS_RAID_DUP;
 	else if (flags & BTRFS_BLOCK_GROUP_RAID0)
 		return BTRFS_RAID_RAID0;
-	else if (flags & BTRFS_BLOCK_GROUP_RAID5)
-		return BTRFS_RAID_RAID5;
-	else if (flags & BTRFS_BLOCK_GROUP_RAID6)
-		return BTRFS_RAID_RAID6;
+	else if (flags & BTRFS_BLOCK_GROUP_PAR1)
+		return BTRFS_RAID_PAR1;
+	else if (flags & BTRFS_BLOCK_GROUP_PAR2)
+		return BTRFS_RAID_PAR2;
+	else if (flags & BTRFS_BLOCK_GROUP_PAR3)
+		return BTRFS_RAID_PAR3;
+	else if (flags & BTRFS_BLOCK_GROUP_PAR4)
+		return BTRFS_RAID_PAR4;
+	else if (flags & BTRFS_BLOCK_GROUP_PAR5)
+		return BTRFS_RAID_PAR5;
+	else if (flags & BTRFS_BLOCK_GROUP_PAR6)
+		return BTRFS_RAID_PAR6;
 
 	return BTRFS_RAID_SINGLE; /* BTRFS_BLOCK_GROUP_SINGLE */
 }
@@ -6123,8 +6144,12 @@ static const char *btrfs_raid_type_names[BTRFS_NR_RAID_TYPES] = {
 	[BTRFS_RAID_DUP]	= "dup",
 	[BTRFS_RAID_RAID0]	= "raid0",
 	[BTRFS_RAID_SINGLE]	= "single",
-	[BTRFS_RAID_RAID5]	= "raid5",
-	[BTRFS_RAID_RAID6]	= "raid6",
+	[BTRFS_RAID_PAR1]	= "raid5",
+	[BTRFS_RAID_PAR2]	= "raid6",
+	[BTRFS_RAID_PAR3]	= "par3",
+	[BTRFS_RAID_PAR4]	= "par4",
+	[BTRFS_RAID_PAR5]	= "par5",
+	[BTRFS_RAID_PAR6]	= "par6",
 };
 
 static const char *get_raid_name(enum btrfs_raid_types type)
@@ -6269,8 +6294,7 @@ search:
 		if (!block_group_bits(block_group, flags)) {
 		    u64 extra = BTRFS_BLOCK_GROUP_DUP |
 				BTRFS_BLOCK_GROUP_RAID1 |
-				BTRFS_BLOCK_GROUP_RAID5 |
-				BTRFS_BLOCK_GROUP_RAID6 |
+				BTRFS_BLOCK_GROUP_PARX |
 				BTRFS_BLOCK_GROUP_RAID10;
 
 			/*
@@ -7856,7 +7880,7 @@ static u64 update_block_group_flags(struct btrfs_root *root, u64 flags)
 		root->fs_info->fs_devices->missing_devices;
 
 	stripped = BTRFS_BLOCK_GROUP_RAID0 |
-		BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6 |
+		BTRFS_BLOCK_GROUP_PARX |
 		BTRFS_BLOCK_GROUP_RAID1 | BTRFS_BLOCK_GROUP_RAID10;
 
 	if (num_devices == 1) {
@@ -8539,8 +8563,7 @@ int btrfs_read_block_groups(struct btrfs_root *root)
 		if (!(get_alloc_profile(root, space_info->flags) &
 		      (BTRFS_BLOCK_GROUP_RAID10 |
 		       BTRFS_BLOCK_GROUP_RAID1 |
-		       BTRFS_BLOCK_GROUP_RAID5 |
-		       BTRFS_BLOCK_GROUP_RAID6 |
+		       BTRFS_BLOCK_GROUP_PARX |
 		       BTRFS_BLOCK_GROUP_DUP)))
 			continue;
 		/*
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index d3d4448..46b4b49 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7184,8 +7184,7 @@ static int btrfs_submit_direct_hook(int rw, struct btrfs_dio_private *dip,
 	}
 
 	/* async crcs make it difficult to collect full stripe writes. */
-	if (btrfs_get_alloc_profile(root, 1) &
-	    (BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6))
+	if (btrfs_get_alloc_profile(root, 1) & BTRFS_BLOCK_GROUP_PARX)
 		async_submit = 0;
 	else
 		async_submit = 1;
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index 9af0b25..c7573dc 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -27,10 +27,10 @@
 #include <linux/capability.h>
 #include <linux/ratelimit.h>
 #include <linux/kthread.h>
-#include <linux/raid/pq.h>
+#include <linux/raid/raid.h>
+#include <linux/raid/helper.h>
 #include <linux/hash.h>
 #include <linux/list_sort.h>
-#include <linux/raid/xor.h>
 #include <linux/vmalloc.h>
 #include <asm/div64.h>
 #include "ctree.h"
@@ -125,11 +125,11 @@ struct btrfs_raid_bio {
 	 */
 	int read_rebuild;
 
-	/* first bad stripe */
-	int faila;
+	/* bad stripes */
+	int fail[RAID_PARITY_MAX];
 
-	/* second bad stripe (for raid6 use) */
-	int failb;
+	/* number of bad stripes in fail[] */
+	int nr_fail;
 
 	/*
 	 * number of pages needed to represent the full
@@ -496,26 +496,6 @@ static void cache_rbio(struct btrfs_raid_bio *rbio)
 }
 
 /*
- * helper function to run the xor_blocks api.  It is only
- * able to do MAX_XOR_BLOCKS at a time, so we need to
- * loop through.
- */
-static void run_xor(void **pages, int src_cnt, ssize_t len)
-{
-	int src_off = 0;
-	int xor_src_cnt = 0;
-	void *dest = pages[src_cnt];
-
-	while(src_cnt > 0) {
-		xor_src_cnt = min(src_cnt, MAX_XOR_BLOCKS);
-		xor_blocks(xor_src_cnt, len, dest, pages + src_off);
-
-		src_cnt -= xor_src_cnt;
-		src_off += xor_src_cnt;
-	}
-}
-
-/*
  * returns true if the bio list inside this rbio
  * covers an entire stripe (no rmw required).
  * Must be called with the bio list lock held, or
@@ -587,25 +567,18 @@ static int rbio_can_merge(struct btrfs_raid_bio *last,
 }
 
 /*
- * helper to index into the pstripe
- */
-static struct page *rbio_pstripe_page(struct btrfs_raid_bio *rbio, int index)
-{
-	index += (rbio->nr_data * rbio->stripe_len) >> PAGE_CACHE_SHIFT;
-	return rbio->stripe_pages[index];
-}
-
-/*
- * helper to index into the qstripe, returns null
- * if there is no qstripe
+ * helper to index into the parity stripe
+ * returns null if there is no stripe
  */
-static struct page *rbio_qstripe_page(struct btrfs_raid_bio *rbio, int index)
+static struct page *rbio_pstripe_page(struct btrfs_raid_bio *rbio,
+	int index, int parity)
 {
-	if (rbio->nr_data + 1 == rbio->bbio->num_stripes)
+	if (rbio->nr_data + parity >= rbio->bbio->num_stripes)
 		return NULL;
 
-	index += ((rbio->nr_data + 1) * rbio->stripe_len) >>
-		PAGE_CACHE_SHIFT;
+	index += ((rbio->nr_data + parity) * rbio->stripe_len)
+		>> PAGE_CACHE_SHIFT;
+
 	return rbio->stripe_pages[index];
 }
 
@@ -946,8 +919,7 @@ static struct btrfs_raid_bio *alloc_rbio(struct btrfs_root *root,
 	rbio->fs_info = root->fs_info;
 	rbio->stripe_len = stripe_len;
 	rbio->nr_pages = num_pages;
-	rbio->faila = -1;
-	rbio->failb = -1;
+	rbio->nr_fail = 0;
 	atomic_set(&rbio->refs, 1);
 
 	/*
@@ -958,10 +930,10 @@ static struct btrfs_raid_bio *alloc_rbio(struct btrfs_root *root,
 	rbio->stripe_pages = p;
 	rbio->bio_pages = p + sizeof(struct page *) * num_pages;
 
-	if (raid_map[bbio->num_stripes - 1] == RAID6_Q_STRIPE)
-		nr_data = bbio->num_stripes - 2;
-	else
-		nr_data = bbio->num_stripes - 1;
+	/* get the number of data stripes removing all the parities */
+	nr_data = bbio->num_stripes;
+	while (nr_data > 0 && is_parity_stripe(raid_map[nr_data - 1]))
+		--nr_data;
 
 	rbio->nr_data = nr_data;
 	return rbio;
@@ -1072,8 +1044,7 @@ static int rbio_add_io_page(struct btrfs_raid_bio *rbio,
  */
 static void validate_rbio_for_rmw(struct btrfs_raid_bio *rbio)
 {
-	if (rbio->faila >= 0 || rbio->failb >= 0) {
-		BUG_ON(rbio->faila == rbio->bbio->num_stripes - 1);
+	if (rbio->nr_fail > 0) {
 		__raid56_parity_recover(rbio);
 	} else {
 		finish_rmw(rbio);
@@ -1137,10 +1108,10 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 	void *pointers[bbio->num_stripes];
 	int stripe_len = rbio->stripe_len;
 	int nr_data = rbio->nr_data;
+	int nr_parity;
+	int parity;
 	int stripe;
 	int pagenr;
-	int p_stripe = -1;
-	int q_stripe = -1;
 	struct bio_list bio_list;
 	struct bio *bio;
 	int pages_per_stripe = stripe_len >> PAGE_CACHE_SHIFT;
@@ -1148,14 +1119,7 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 
 	bio_list_init(&bio_list);
 
-	if (bbio->num_stripes - rbio->nr_data == 1) {
-		p_stripe = bbio->num_stripes - 1;
-	} else if (bbio->num_stripes - rbio->nr_data == 2) {
-		p_stripe = bbio->num_stripes - 2;
-		q_stripe = bbio->num_stripes - 1;
-	} else {
-		BUG();
-	}
+	nr_parity = bbio->num_stripes - rbio->nr_data;
 
 	/* at this point we either have a full stripe,
 	 * or we've read the full stripe from the drive.
@@ -1194,29 +1158,15 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 			pointers[stripe] = kmap(p);
 		}
 
-		/* then add the parity stripe */
-		p = rbio_pstripe_page(rbio, pagenr);
-		SetPageUptodate(p);
-		pointers[stripe++] = kmap(p);
-
-		if (q_stripe != -1) {
-
-			/*
-			 * raid6, add the qstripe and call the
-			 * library function to fill in our p/q
-			 */
-			p = rbio_qstripe_page(rbio, pagenr);
+		/* then add the parity stripes */
+		for (parity = 0; parity < nr_parity; ++parity) {
+			p = rbio_pstripe_page(rbio, pagenr, parity);
 			SetPageUptodate(p);
 			pointers[stripe++] = kmap(p);
-
-			raid6_call.gen_syndrome(bbio->num_stripes, PAGE_SIZE,
-						pointers);
-		} else {
-			/* raid5 */
-			memcpy(pointers[nr_data], pointers[0], PAGE_SIZE);
-			run_xor(pointers + 1, nr_data - 1, PAGE_CACHE_SIZE);
 		}
 
+		/* compute the parity */
+		raid_gen(rbio->nr_data, nr_parity, PAGE_SIZE, pointers);
 
 		for (stripe = 0; stripe < bbio->num_stripes; stripe++)
 			kunmap(page_in_rbio(rbio, stripe, pagenr, 0));
@@ -1321,24 +1271,25 @@ static int fail_rbio_index(struct btrfs_raid_bio *rbio, int failed)
 {
 	unsigned long flags;
 	int ret = 0;
+	int i;
 
 	spin_lock_irqsave(&rbio->bio_list_lock, flags);
 
 	/* we already know this stripe is bad, move on */
-	if (rbio->faila == failed || rbio->failb == failed)
-		goto out;
+	for (i = 0; i < rbio->nr_fail; ++i)
+		if (rbio->fail[i] == failed)
+			goto out;
 
-	if (rbio->faila == -1) {
-		/* first failure on this rbio */
-		rbio->faila = failed;
-		atomic_inc(&rbio->bbio->error);
-	} else if (rbio->failb == -1) {
-		/* second failure on this rbio */
-		rbio->failb = failed;
-		atomic_inc(&rbio->bbio->error);
-	} else {
+	if (rbio->nr_fail == RAID_PARITY_MAX) {
 		ret = -EIO;
+		goto out;
 	}
+
+	/* new failure on this rbio */
+	raid_insert(rbio->nr_fail, rbio->fail, failed);
+	++rbio->nr_fail;
+	atomic_inc(&rbio->bbio->error);
+
 out:
 	spin_unlock_irqrestore(&rbio->bio_list_lock, flags);
 
@@ -1724,8 +1675,10 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 {
 	int pagenr, stripe;
 	void **pointers;
-	int faila = -1, failb = -1;
+	int ifail;
 	int nr_pages = (rbio->stripe_len + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
+	int nr_parity;
+	int nr_fail;
 	struct page *page;
 	int err;
 	int i;
@@ -1737,8 +1690,8 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 		goto cleanup_io;
 	}
 
-	faila = rbio->faila;
-	failb = rbio->failb;
+	nr_parity = rbio->bbio->num_stripes - rbio->nr_data;
+	nr_fail = rbio->nr_fail;
 
 	if (rbio->read_rebuild) {
 		spin_lock_irq(&rbio->bio_list_lock);
@@ -1752,98 +1705,30 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 		/* setup our array of pointers with pages
 		 * from each stripe
 		 */
+		ifail = 0;
 		for (stripe = 0; stripe < rbio->bbio->num_stripes; stripe++) {
 			/*
 			 * if we're rebuilding a read, we have to use
 			 * pages from the bio list
 			 */
 			if (rbio->read_rebuild &&
-			    (stripe == faila || stripe == failb)) {
+			    rbio->fail[ifail] == stripe) {
 				page = page_in_rbio(rbio, stripe, pagenr, 0);
+				++ifail;
 			} else {
 				page = rbio_stripe_page(rbio, stripe, pagenr);
 			}
 			pointers[stripe] = kmap(page);
 		}
 
-		/* all raid6 handling here */
-		if (rbio->raid_map[rbio->bbio->num_stripes - 1] ==
-		    RAID6_Q_STRIPE) {
-
-			/*
-			 * single failure, rebuild from parity raid5
-			 * style
-			 */
-			if (failb < 0) {
-				if (faila == rbio->nr_data) {
-					/*
-					 * Just the P stripe has failed, without
-					 * a bad data or Q stripe.
-					 * TODO, we should redo the xor here.
-					 */
-					err = -EIO;
-					goto cleanup;
-				}
-				/*
-				 * a single failure in raid6 is rebuilt
-				 * in the pstripe code below
-				 */
-				goto pstripe;
-			}
-
-			/* make sure our ps and qs are in order */
-			if (faila > failb) {
-				int tmp = failb;
-				failb = faila;
-				faila = tmp;
-			}
-
-			/* if the q stripe is failed, do a pstripe reconstruction
-			 * from the xors.
-			 * If both the q stripe and the P stripe are failed, we're
-			 * here due to a crc mismatch and we can't give them the
-			 * data they want
-			 */
-			if (rbio->raid_map[failb] == RAID6_Q_STRIPE) {
-				if (rbio->raid_map[faila] == RAID5_P_STRIPE) {
-					err = -EIO;
-					goto cleanup;
-				}
-				/*
-				 * otherwise we have one bad data stripe and
-				 * a good P stripe.  raid5!
-				 */
-				goto pstripe;
-			}
-
-			if (rbio->raid_map[failb] == RAID5_P_STRIPE) {
-				raid6_datap_recov(rbio->bbio->num_stripes,
-						  PAGE_SIZE, faila, pointers);
-			} else {
-				raid6_2data_recov(rbio->bbio->num_stripes,
-						  PAGE_SIZE, faila, failb,
-						  pointers);
-			}
-		} else {
-			void *p;
-
-			/* rebuild from P stripe here (raid5 or raid6) */
-			BUG_ON(failb != -1);
-pstripe:
-			/* Copy parity block into failed block to start with */
-			memcpy(pointers[faila],
-			       pointers[rbio->nr_data],
-			       PAGE_CACHE_SIZE);
-
-			/* rearrange the pointer array */
-			p = pointers[faila];
-			for (stripe = faila; stripe < rbio->nr_data - 1; stripe++)
-				pointers[stripe] = pointers[stripe + 1];
-			pointers[rbio->nr_data - 1] = p;
-
-			/* xor in the rest */
-			run_xor(pointers, rbio->nr_data - 1, PAGE_CACHE_SIZE);
+		/* if we have too many failures */
+		if (nr_fail > nr_parity) {
+			err = -EIO;
+			goto cleanup;
 		}
+		raid_rec(nr_fail, rbio->fail, rbio->nr_data, nr_parity,
+			PAGE_SIZE, pointers);
+
 		/* if we're doing this rebuild as part of an rmw, go through
 		 * and set all of our private rbio pages in the
 		 * failed stripes as uptodate.  This way finish_rmw will
@@ -1852,24 +1737,23 @@ pstripe:
 		 */
 		if (!rbio->read_rebuild) {
 			for (i = 0;  i < nr_pages; i++) {
-				if (faila != -1) {
-					page = rbio_stripe_page(rbio, faila, i);
-					SetPageUptodate(page);
-				}
-				if (failb != -1) {
-					page = rbio_stripe_page(rbio, failb, i);
+				for (ifail = 0; ifail < nr_fail; ++ifail) {
+					int sfail = rbio->fail[ifail];
+					page = rbio_stripe_page(rbio, sfail, i);
 					SetPageUptodate(page);
 				}
 			}
 		}
+		ifail = 0;
 		for (stripe = 0; stripe < rbio->bbio->num_stripes; stripe++) {
 			/*
 			 * if we're rebuilding a read, we have to use
 			 * pages from the bio list
 			 */
 			if (rbio->read_rebuild &&
-			    (stripe == faila || stripe == failb)) {
+			    rbio->fail[ifail] == stripe) {
 				page = page_in_rbio(rbio, stripe, pagenr, 0);
+				++ifail;
 			} else {
 				page = rbio_stripe_page(rbio, stripe, pagenr);
 			}
@@ -1891,8 +1775,7 @@ cleanup_io:
 
 		rbio_orig_end_io(rbio, err, err == 0);
 	} else if (err == 0) {
-		rbio->faila = -1;
-		rbio->failb = -1;
+		rbio->nr_fail = 0;
 		finish_rmw(rbio);
 	} else {
 		rbio_orig_end_io(rbio, err, 0);
@@ -1939,6 +1822,7 @@ static int __raid56_parity_recover(struct btrfs_raid_bio *rbio)
 	int bios_to_read = 0;
 	struct btrfs_bio *bbio = rbio->bbio;
 	struct bio_list bio_list;
+	int ifail;
 	int ret;
 	int nr_pages = (rbio->stripe_len + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
 	int pagenr;
@@ -1958,10 +1842,12 @@ static int __raid56_parity_recover(struct btrfs_raid_bio *rbio)
 	 * stripe cache, it is possible that some or all of these
 	 * pages are going to be uptodate.
 	 */
+	ifail = 0;
 	for (stripe = 0; stripe < bbio->num_stripes; stripe++) {
-		if (rbio->faila == stripe ||
-		    rbio->failb == stripe)
+		if (rbio->fail[ifail] == stripe) {
+			++ifail;
 			continue;
+		}
 
 		for (pagenr = 0; pagenr < nr_pages; pagenr++) {
 			struct page *p;
@@ -2037,6 +1923,7 @@ int raid56_parity_recover(struct btrfs_root *root, struct bio *bio,
 {
 	struct btrfs_raid_bio *rbio;
 	int ret;
+	int i;
 
 	rbio = alloc_rbio(root, bbio, raid_map, stripe_len);
 	if (IS_ERR(rbio))
@@ -2046,21 +1933,33 @@ int raid56_parity_recover(struct btrfs_root *root, struct bio *bio,
 	bio_list_add(&rbio->bio_list, bio);
 	rbio->bio_list_bytes = bio->bi_iter.bi_size;
 
-	rbio->faila = find_logical_bio_stripe(rbio, bio);
-	if (rbio->faila == -1) {
+	rbio->fail[0] = find_logical_bio_stripe(rbio, bio);
+	if (rbio->fail[0] == -1) {
 		BUG();
 		kfree(raid_map);
 		kfree(bbio);
 		kfree(rbio);
 		return -EIO;
 	}
+	rbio->nr_fail = 1;
 
 	/*
-	 * reconstruct from the q stripe if they are
-	 * asking for mirror 3
+	 * Reconstruct from a different parity stripe if the caller
+	 * asks for a higher mirror number.
+	 * For each extra mirror we mark one more parity stripe as
+	 * failed, forcing a different recovery path.
+	 * With mirror_num == 2 we fail nothing and reconstruct from
+	 * the first parity; with mirror_num == 3 we fail the first
+	 * parity and reconstruct from the second, and so on, up to
+	 * mirror_num == 7, where we fail the first five parities and
+	 * recover from the sixth.
 	 */
-	if (mirror_num == 3)
-		rbio->failb = bbio->num_stripes - 2;
+	if (mirror_num > 2 && mirror_num - 2 < RAID_PARITY_MAX) {
+		for (i = 0; i < mirror_num - 2; ++i) {
+			raid_insert(rbio->nr_fail, rbio->fail, rbio->nr_data + i);
+			++rbio->nr_fail;
+		}
+	}
 
 	ret = lock_stripe_add(rbio);
 
diff --git a/fs/btrfs/raid56.h b/fs/btrfs/raid56.h
index ea5d73b..b1082b6 100644
--- a/fs/btrfs/raid56.h
+++ b/fs/btrfs/raid56.h
@@ -21,23 +21,22 @@
 #define __BTRFS_RAID56__
 static inline int nr_parity_stripes(struct map_lookup *map)
 {
-	if (map->type & BTRFS_BLOCK_GROUP_RAID5)
-		return 1;
-	else if (map->type & BTRFS_BLOCK_GROUP_RAID6)
-		return 2;
-	else
-		return 0;
+	return btrfs_flags_par(map->type);
 }
 
 static inline int nr_data_stripes(struct map_lookup *map)
 {
 	return map->num_stripes - nr_parity_stripes(map);
 }
-#define RAID5_P_STRIPE ((u64)-2)
-#define RAID6_Q_STRIPE ((u64)-1)
 
-#define is_parity_stripe(x) (((x) == RAID5_P_STRIPE) ||		\
-			     ((x) == RAID6_Q_STRIPE))
+#define BTRFS_RAID_PAR1_STRIPE ((u64)-6)
+#define BTRFS_RAID_PAR2_STRIPE ((u64)-5)
+#define BTRFS_RAID_PAR3_STRIPE ((u64)-4)
+#define BTRFS_RAID_PAR4_STRIPE ((u64)-3)
+#define BTRFS_RAID_PAR5_STRIPE ((u64)-2)
+#define BTRFS_RAID_PAR6_STRIPE ((u64)-1)
+
+#define is_parity_stripe(x) (((u64)(x) >= BTRFS_RAID_PAR1_STRIPE))
 
 int raid56_parity_recover(struct btrfs_root *root, struct bio *bio,
 				 struct btrfs_bio *bbio, u64 *raid_map,
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index efba5d1..495c13e 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -2259,8 +2259,7 @@ static noinline_for_stack int scrub_stripe(struct scrub_ctx *sctx,
 	int extent_mirror_num;
 	int stop_loop;
 
-	if (map->type & (BTRFS_BLOCK_GROUP_RAID5 |
-			 BTRFS_BLOCK_GROUP_RAID6)) {
+	if (map->type & BTRFS_BLOCK_GROUP_PARX) {
 		if (num >= nr_data_stripes(map)) {
 			return 0;
 		}
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index bab0b84..acafb50 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -1525,17 +1525,41 @@ int btrfs_rm_device(struct btrfs_root *root, char *device_path)
 		goto out;
 	}
 
-	if ((all_avail & BTRFS_BLOCK_GROUP_RAID5) &&
+	if ((all_avail & BTRFS_BLOCK_GROUP_PAR1) &&
 	    root->fs_info->fs_devices->rw_devices <= 2) {
 		ret = BTRFS_ERROR_DEV_RAID5_MIN_NOT_MET;
 		goto out;
 	}
-	if ((all_avail & BTRFS_BLOCK_GROUP_RAID6) &&
+
+	if ((all_avail & BTRFS_BLOCK_GROUP_PAR2) &&
 	    root->fs_info->fs_devices->rw_devices <= 3) {
 		ret = BTRFS_ERROR_DEV_RAID6_MIN_NOT_MET;
 		goto out;
 	}
 
+	if ((all_avail & BTRFS_BLOCK_GROUP_PAR3) &&
+	    root->fs_info->fs_devices->rw_devices <= 4) {
+		ret = BTRFS_ERROR_DEV_PAR3_MIN_NOT_MET;
+		goto out;
+	}
+
+	if ((all_avail & BTRFS_BLOCK_GROUP_PAR4) &&
+	    root->fs_info->fs_devices->rw_devices <= 5) {
+		ret = BTRFS_ERROR_DEV_PAR4_MIN_NOT_MET;
+		goto out;
+	}
+
+	if ((all_avail & BTRFS_BLOCK_GROUP_PAR5) &&
+	    root->fs_info->fs_devices->rw_devices <= 6) {
+		ret = BTRFS_ERROR_DEV_PAR5_MIN_NOT_MET;
+		goto out;
+	}
+
+	if ((all_avail & BTRFS_BLOCK_GROUP_PAR6) &&
+	    root->fs_info->fs_devices->rw_devices <= 7) {
+		ret = BTRFS_ERROR_DEV_PAR6_MIN_NOT_MET;
+		goto out;
+	}
 	if (strcmp(device_path, "missing") == 0) {
 		struct list_head *devices;
 		struct btrfs_device *tmp;
@@ -2797,10 +2821,8 @@ static int chunk_drange_filter(struct extent_buffer *leaf,
 	if (btrfs_chunk_type(leaf, chunk) & (BTRFS_BLOCK_GROUP_DUP |
 	     BTRFS_BLOCK_GROUP_RAID1 | BTRFS_BLOCK_GROUP_RAID10)) {
 		factor = num_stripes / 2;
-	} else if (btrfs_chunk_type(leaf, chunk) & BTRFS_BLOCK_GROUP_RAID5) {
-		factor = num_stripes - 1;
-	} else if (btrfs_chunk_type(leaf, chunk) & BTRFS_BLOCK_GROUP_RAID6) {
-		factor = num_stripes - 2;
+	} else if (btrfs_chunk_type(leaf, chunk) & BTRFS_BLOCK_GROUP_PARX) {
+		factor = num_stripes - btrfs_flags_par(btrfs_chunk_type(leaf, chunk));
 	} else {
 		factor = num_stripes;
 	}
@@ -3158,10 +3180,18 @@ int btrfs_balance(struct btrfs_balance_control *bctl,
 	else if (num_devices > 1)
 		allowed |= (BTRFS_BLOCK_GROUP_RAID0 | BTRFS_BLOCK_GROUP_RAID1);
 	if (num_devices > 2)
-		allowed |= BTRFS_BLOCK_GROUP_RAID5;
+		allowed |= BTRFS_BLOCK_GROUP_PAR1;
 	if (num_devices > 3)
 		allowed |= (BTRFS_BLOCK_GROUP_RAID10 |
-			    BTRFS_BLOCK_GROUP_RAID6);
+			    BTRFS_BLOCK_GROUP_PAR2);
+	if (num_devices > 4)
+		allowed |= BTRFS_BLOCK_GROUP_PAR3;
+	if (num_devices > 5)
+		allowed |= BTRFS_BLOCK_GROUP_PAR4;
+	if (num_devices > 6)
+		allowed |= BTRFS_BLOCK_GROUP_PAR5;
+	if (num_devices > 7)
+		allowed |= BTRFS_BLOCK_GROUP_PAR6;
 	if ((bctl->data.flags & BTRFS_BALANCE_ARGS_CONVERT) &&
 	    (!alloc_profile_is_valid(bctl->data.target, 1) ||
 	     (bctl->data.target & ~allowed))) {
@@ -3201,8 +3231,7 @@ int btrfs_balance(struct btrfs_balance_control *bctl,
 	/* allow to reduce meta or sys integrity only if force set */
 	allowed = BTRFS_BLOCK_GROUP_DUP | BTRFS_BLOCK_GROUP_RAID1 |
 			BTRFS_BLOCK_GROUP_RAID10 |
-			BTRFS_BLOCK_GROUP_RAID5 |
-			BTRFS_BLOCK_GROUP_RAID6;
+			BTRFS_BLOCK_GROUP_PARX;
 	do {
 		seq = read_seqbegin(&fs_info->profiles_lock);
 
@@ -3940,7 +3969,7 @@ static struct btrfs_raid_attr btrfs_raid_array[BTRFS_NR_RAID_TYPES] = {
 		.devs_increment	= 1,
 		.ncopies	= 1,
 	},
-	[BTRFS_RAID_RAID5] = {
+	[BTRFS_RAID_PAR1] = {
 		.sub_stripes	= 1,
 		.dev_stripes	= 1,
 		.devs_max	= 0,
@@ -3948,7 +3977,7 @@ static struct btrfs_raid_attr btrfs_raid_array[BTRFS_NR_RAID_TYPES] = {
 		.devs_increment	= 1,
 		.ncopies	= 2,
 	},
-	[BTRFS_RAID_RAID6] = {
+	[BTRFS_RAID_PAR2] = {
 		.sub_stripes	= 1,
 		.dev_stripes	= 1,
 		.devs_max	= 0,
@@ -3956,6 +3985,38 @@ static struct btrfs_raid_attr btrfs_raid_array[BTRFS_NR_RAID_TYPES] = {
 		.devs_increment	= 1,
 		.ncopies	= 3,
 	},
+	[BTRFS_RAID_PAR3] = {
+		.sub_stripes	= 1,
+		.dev_stripes	= 1,
+		.devs_max	= 0,
+		.devs_min	= 4,
+		.devs_increment	= 1,
+		.ncopies	= 4,
+	},
+	[BTRFS_RAID_PAR4] = {
+		.sub_stripes	= 1,
+		.dev_stripes	= 1,
+		.devs_max	= 0,
+		.devs_min	= 5,
+		.devs_increment	= 1,
+		.ncopies	= 5,
+	},
+	[BTRFS_RAID_PAR5] = {
+		.sub_stripes	= 1,
+		.dev_stripes	= 1,
+		.devs_max	= 0,
+		.devs_min	= 6,
+		.devs_increment	= 1,
+		.ncopies	= 6,
+	},
+	[BTRFS_RAID_PAR6] = {
+		.sub_stripes	= 1,
+		.dev_stripes	= 1,
+		.devs_max	= 0,
+		.devs_min	= 7,
+		.devs_increment	= 1,
+		.ncopies	= 7,
+	},
 };
 
 static u32 find_raid56_stripe_len(u32 data_devices, u32 dev_stripe_target)
@@ -3966,7 +4027,7 @@ static u32 find_raid56_stripe_len(u32 data_devices, u32 dev_stripe_target)
 
 static void check_raid56_incompat_flag(struct btrfs_fs_info *info, u64 type)
 {
-	if (!(type & (BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6)))
+	if (!(type & BTRFS_BLOCK_GROUP_PARX))
 		return;
 
 	btrfs_set_fs_incompat(info, RAID56);
@@ -4134,15 +4195,11 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
 	 */
 	data_stripes = num_stripes / ncopies;
 
-	if (type & BTRFS_BLOCK_GROUP_RAID5) {
-		raid_stripe_len = find_raid56_stripe_len(ndevs - 1,
+	if (type & BTRFS_BLOCK_GROUP_PARX) {
+		int nr_par = btrfs_flags_par(type);
+		raid_stripe_len = find_raid56_stripe_len(ndevs - nr_par,
 				 btrfs_super_stripesize(info->super_copy));
-		data_stripes = num_stripes - 1;
-	}
-	if (type & BTRFS_BLOCK_GROUP_RAID6) {
-		raid_stripe_len = find_raid56_stripe_len(ndevs - 2,
-				 btrfs_super_stripesize(info->super_copy));
-		data_stripes = num_stripes - 2;
+		data_stripes = num_stripes - nr_par;
 	}
 
 	/*
@@ -4500,10 +4557,8 @@ int btrfs_num_copies(struct btrfs_fs_info *fs_info, u64 logical, u64 len)
 		ret = map->num_stripes;
 	else if (map->type & BTRFS_BLOCK_GROUP_RAID10)
 		ret = map->sub_stripes;
-	else if (map->type & BTRFS_BLOCK_GROUP_RAID5)
-		ret = 2;
-	else if (map->type & BTRFS_BLOCK_GROUP_RAID6)
-		ret = 3;
+	else if (map->type & BTRFS_BLOCK_GROUP_PARX)
+		ret = 1 + btrfs_flags_par(map->type);
 	else
 		ret = 1;
 	free_extent_map(em);
@@ -4532,10 +4587,9 @@ unsigned long btrfs_full_stripe_len(struct btrfs_root *root,
 
 	BUG_ON(em->start > logical || em->start + em->len < logical);
 	map = (struct map_lookup *)em->bdev;
-	if (map->type & (BTRFS_BLOCK_GROUP_RAID5 |
-			 BTRFS_BLOCK_GROUP_RAID6)) {
+	if (map->type & BTRFS_BLOCK_GROUP_PARX)
 		len = map->stripe_len * nr_data_stripes(map);
-	}
+
 	free_extent_map(em);
 	return len;
 }
@@ -4555,8 +4609,7 @@ int btrfs_is_parity_mirror(struct btrfs_mapping_tree *map_tree,
 
 	BUG_ON(em->start > logical || em->start + em->len < logical);
 	map = (struct map_lookup *)em->bdev;
-	if (map->type & (BTRFS_BLOCK_GROUP_RAID5 |
-			 BTRFS_BLOCK_GROUP_RAID6))
+	if (map->type & BTRFS_BLOCK_GROUP_PARX)
 		ret = 1;
 	free_extent_map(em);
 	return ret;
@@ -4694,7 +4747,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
 	stripe_offset = offset - stripe_offset;
 
 	/* if we're here for raid56, we need to know the stripe aligned start */
-	if (map->type & (BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6)) {
+	if (map->type & BTRFS_BLOCK_GROUP_PARX) {
 		unsigned long full_stripe_len = stripe_len * nr_data_stripes(map);
 		raid56_full_stripe_start = offset;
 
@@ -4707,8 +4760,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
 
 	if (rw & REQ_DISCARD) {
 		/* we don't discard raid56 yet */
-		if (map->type &
-		    (BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6)) {
+		if (map->type & BTRFS_BLOCK_GROUP_PARX) {
 			ret = -EOPNOTSUPP;
 			goto out;
 		}
@@ -4718,7 +4770,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
 		/* For writes to RAID[56], allow a full stripeset across all disks.
 		   For other RAID types and for RAID[56] reads, just allow a single
 		   stripe (on a single disk). */
-		if (map->type & (BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6) &&
+		if (map->type & BTRFS_BLOCK_GROUP_PARX &&
 		    (rw & REQ_WRITE)) {
 			max_len = stripe_len * nr_data_stripes(map) -
 				(offset - raid56_full_stripe_start);
@@ -4882,13 +4934,12 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
 			mirror_num = stripe_index - old_stripe_index + 1;
 		}
 
-	} else if (map->type & (BTRFS_BLOCK_GROUP_RAID5 |
-				BTRFS_BLOCK_GROUP_RAID6)) {
+	} else if (map->type & BTRFS_BLOCK_GROUP_PARX) {
 		u64 tmp;
 
 		if (bbio_ret && ((rw & REQ_WRITE) || mirror_num > 1)
 		    && raid_map_ret) {
-			int i, rot;
+			int i, j, rot;
 
 			/* push stripe_nr back to the start of the full stripe */
 			stripe_nr = raid56_full_stripe_start;
@@ -4917,10 +4968,8 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
 				raid_map[(i+rot) % num_stripes] =
 					em->start + (tmp + i) * map->stripe_len;
 
-			raid_map[(i+rot) % map->num_stripes] = RAID5_P_STRIPE;
-			if (map->type & BTRFS_BLOCK_GROUP_RAID6)
-				raid_map[(i+rot+1) % num_stripes] =
-					RAID6_Q_STRIPE;
+			for (j = 0; j < btrfs_flags_par(map->type); j++)
+				raid_map[(i+rot+j) % num_stripes] = BTRFS_RAID_PAR1_STRIPE + j;
 
 			*length = map->stripe_len;
 			stripe_index = 0;
@@ -4928,8 +4977,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
 		} else {
 			/*
 			 * Mirror #0 or #1 means the original data block.
-			 * Mirror #2 is RAID5 parity block.
-			 * Mirror #3 is RAID6 Q block.
+			 * Mirror #2 is the RAID5/PAR1 P block.
+			 * Mirror #3 is the RAID6/PAR2 Q block.
+			 * ... and so on, up to mirror #7 for PAR6.
 			 */
 			stripe_index = do_div(stripe_nr, nr_data_stripes(map));
 			if (mirror_num > 1)
@@ -5049,11 +5099,10 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, int rw,
 	if (rw & (REQ_WRITE | REQ_GET_READ_MIRRORS)) {
 		if (map->type & (BTRFS_BLOCK_GROUP_RAID1 |
 				 BTRFS_BLOCK_GROUP_RAID10 |
-				 BTRFS_BLOCK_GROUP_RAID5 |
 				 BTRFS_BLOCK_GROUP_DUP)) {
 			max_errors = 1;
-		} else if (map->type & BTRFS_BLOCK_GROUP_RAID6) {
-			max_errors = 2;
+		} else if (map->type & BTRFS_BLOCK_GROUP_PARX) {
+			max_errors = btrfs_flags_par(map->type);
 		}
 	}
 
@@ -5212,8 +5261,7 @@ int btrfs_rmap_block(struct btrfs_mapping_tree *map_tree,
 		do_div(length, map->num_stripes / map->sub_stripes);
 	else if (map->type & BTRFS_BLOCK_GROUP_RAID0)
 		do_div(length, map->num_stripes);
-	else if (map->type & (BTRFS_BLOCK_GROUP_RAID5 |
-			      BTRFS_BLOCK_GROUP_RAID6)) {
+	else if (map->type & BTRFS_BLOCK_GROUP_PARX) {
 		do_div(length, nr_data_stripes(map));
 		rmap_len = map->stripe_len * nr_data_stripes(map);
 	}
diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
index 3176cdc..98a9c78 100644
--- a/include/trace/events/btrfs.h
+++ b/include/trace/events/btrfs.h
@@ -58,8 +58,12 @@ struct extent_buffer;
 	{ BTRFS_BLOCK_GROUP_RAID1,	"RAID1"}, 	\
 	{ BTRFS_BLOCK_GROUP_DUP,	"DUP"}, 	\
 	{ BTRFS_BLOCK_GROUP_RAID10,	"RAID10"}, 	\
-	{ BTRFS_BLOCK_GROUP_RAID5,	"RAID5"},	\
-	{ BTRFS_BLOCK_GROUP_RAID6,	"RAID6"}
+	{ BTRFS_BLOCK_GROUP_PAR1,	"RAID5"},	\
+	{ BTRFS_BLOCK_GROUP_PAR2,	"RAID6"},	\
+	{ BTRFS_BLOCK_GROUP_PAR3,	"PAR3"},	\
+	{ BTRFS_BLOCK_GROUP_PAR4,	"PAR4"},	\
+	{ BTRFS_BLOCK_GROUP_PAR5,	"PAR5"},	\
+	{ BTRFS_BLOCK_GROUP_PAR6,	"PAR6"}
 
 #define BTRFS_UUID_SIZE 16
 
@@ -623,8 +627,12 @@ DEFINE_EVENT(btrfs_delayed_ref_head,  run_delayed_ref_head,
 		{ BTRFS_BLOCK_GROUP_RAID1, 	"RAID1" },	\
 		{ BTRFS_BLOCK_GROUP_DUP, 	"DUP"	},	\
 		{ BTRFS_BLOCK_GROUP_RAID10, 	"RAID10"},	\
-		{ BTRFS_BLOCK_GROUP_RAID5, 	"RAID5"	},	\
-		{ BTRFS_BLOCK_GROUP_RAID6, 	"RAID6"	})
+		{ BTRFS_BLOCK_GROUP_PAR1,	"RAID5"	},	\
+		{ BTRFS_BLOCK_GROUP_PAR2,	"RAID6"	},	\
+		{ BTRFS_BLOCK_GROUP_PAR3,	"PAR3"	},	\
+		{ BTRFS_BLOCK_GROUP_PAR4,	"PAR4"	},	\
+		{ BTRFS_BLOCK_GROUP_PAR5,	"PAR5"	},	\
+		{ BTRFS_BLOCK_GROUP_PAR6,	"PAR6"	})
 
 DECLARE_EVENT_CLASS(btrfs__chunk,
 
diff --git a/include/uapi/linux/btrfs.h b/include/uapi/linux/btrfs.h
index b4d6909..ba120ba 100644
--- a/include/uapi/linux/btrfs.h
+++ b/include/uapi/linux/btrfs.h
@@ -488,8 +488,13 @@ enum btrfs_err_code {
 	BTRFS_ERROR_DEV_TGT_REPLACE,
 	BTRFS_ERROR_DEV_MISSING_NOT_FOUND,
 	BTRFS_ERROR_DEV_ONLY_WRITABLE,
-	BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS
+	BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS,
+	BTRFS_ERROR_DEV_PAR3_MIN_NOT_MET,
+	BTRFS_ERROR_DEV_PAR4_MIN_NOT_MET,
+	BTRFS_ERROR_DEV_PAR5_MIN_NOT_MET,
+	BTRFS_ERROR_DEV_PAR6_MIN_NOT_MET
 };
+
 /* An error code to error string mapping for the kernel
 *  error codes
 */
@@ -501,9 +506,9 @@ static inline char *btrfs_err_str(enum btrfs_err_code err_code)
 		case BTRFS_ERROR_DEV_RAID10_MIN_NOT_MET:
 			return "unable to go below four devices on raid10";
 		case BTRFS_ERROR_DEV_RAID5_MIN_NOT_MET:
-			return "unable to go below two devices on raid5";
+			return "unable to go below two devices on raid5/par1";
 		case BTRFS_ERROR_DEV_RAID6_MIN_NOT_MET:
-			return "unable to go below three devices on raid6";
+			return "unable to go below three devices on raid6/par2";
 		case BTRFS_ERROR_DEV_TGT_REPLACE:
 			return "unable to remove the dev_replace target dev";
 		case BTRFS_ERROR_DEV_MISSING_NOT_FOUND:
@@ -513,6 +518,14 @@ static inline char *btrfs_err_str(enum btrfs_err_code err_code)
 		case BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS:
 			return "add/delete/balance/replace/resize operation "\
 				"in progress";
+		case BTRFS_ERROR_DEV_PAR3_MIN_NOT_MET:
+			return "unable to go below four devices on par3";
+		case BTRFS_ERROR_DEV_PAR4_MIN_NOT_MET:
+			return "unable to go below five devices on par4";
+		case BTRFS_ERROR_DEV_PAR5_MIN_NOT_MET:
+			return "unable to go below six devices on par5";
+		case BTRFS_ERROR_DEV_PAR6_MIN_NOT_MET:
+			return "unable to go below seven devices on par6";
 		default:
 			return NULL;
 	}
-- 
1.7.12.1



* [PATCH v5 3/3] btrfs-progs: Adds new par3456 modes to support up to six parities
  2014-02-24 21:15 [PATCH v5 0/3] New RAID library supporting up to six parities Andrea Mazzoleni
  2014-02-24 21:15 ` [PATCH v5 2/3] fs: btrfs: Adds new par3456 modes to support " Andrea Mazzoleni
@ 2014-02-24 21:15 ` Andrea Mazzoleni
  1 sibling, 0 replies; 3+ messages in thread
From: Andrea Mazzoleni @ 2014-02-24 21:15 UTC (permalink / raw)
  To: clm, jbacik, neilb; +Cc: linux-kernel, linux-raid, linux-btrfs, amadvance

Extends mkfs.btrfs to support the new par1/2/3/4/5/6 modes to create
filesystems with up to six parities.
Replaces the raid6 code with a new reference function able to compute
up to six parities.
Replaces the existing BLOCK_GROUP_RAID5/6 flags with new PAR1/2/3/4/5/6
ones that handle up to six parities, and updates all the code to use them.

Signed-off-by: Andrea Mazzoleni <amadvance@gmail.com>
---
 Makefile            |  14 ++-
 chunk-recover.c     |  18 +---
 cmds-balance.c      |  20 +++-
 cmds-check.c        |   7 +-
 cmds-chunk.c        |  18 +---
 cmds-filesystem.c   |  12 ++-
 ctree.h             |  42 ++++++++-
 disk-io.h           |   2 -
 extent-tree.c       |   3 +-
 ioctl.h             |  18 +++-
 man/mkfs.btrfs.8.in |   4 +-
 mkfs.c              |  28 +++++-
 mktables.c          | 256 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 raid.c              |  44 +++++++++
 raid.h              |  34 +++++++
 raid6.c             | 101 ---------------------
 utils.c             |  12 ++-
 volumes.c           | 112 ++++++++++-------------
 volumes.h           |  12 ++-
 19 files changed, 530 insertions(+), 227 deletions(-)
 create mode 100644 mktables.c
 create mode 100644 raid.c
 create mode 100644 raid.h
 delete mode 100644 raid6.c

diff --git a/Makefile b/Makefile
index 0874a41..72c5c01 100644
--- a/Makefile
+++ b/Makefile
@@ -9,7 +9,7 @@ CFLAGS = -g -O1 -fno-strict-aliasing
 objects = ctree.o disk-io.o radix-tree.o extent-tree.o print-tree.o \
 	  root-tree.o dir-item.o file-item.o inode-item.o inode-map.o \
 	  extent-cache.o extent_io.o volumes.o utils.o repair.o \
-	  qgroup.o raid6.o free-space-cache.o list_sort.o
+	  qgroup.o raid.o tables.o free-space-cache.o list_sort.o
 cmds_objects = cmds-subvolume.o cmds-filesystem.o cmds-device.o cmds-scrub.o \
 	       cmds-inspect.o cmds-balance.o cmds-send.o cmds-receive.o \
 	       cmds-quota.o cmds-qgroup.o cmds-replace.o cmds-check.o \
@@ -140,6 +140,10 @@ version.h:
 	@echo "    [SH]     $@"
 	$(Q)bash version.sh
 
+tables.c: mktables
+	@echo "    [MK]     $@"
+	$(Q)./mktables > tables.c
+
 $(libs_shared): $(libbtrfs_objects) $(lib_links) send.h
 	@echo "    [LD]     $@"
 	$(Q)$(CC) $(CFLAGS) $(libbtrfs_objects) $(LDFLAGS) $(lib_LIBS) \
@@ -193,6 +197,10 @@ mkfs.btrfs: $(objects) $(libs) mkfs.o
 	@echo "    [LD]     $@"
 	$(Q)$(CC) $(CFLAGS) -o mkfs.btrfs $(objects) mkfs.o $(LDFLAGS) $(LIBS)
 
+mktables: $(libs) mktables.o
+	@echo "    [LD]     $@"
+	$(Q)$(CC) $(CFLAGS) -o mktables mktables.o $(LDFLAGS) $(LIBS)
+
 mkfs.btrfs.static: $(static_objects) mkfs.static.o $(static_libbtrfs_objects)
 	@echo "    [LD]     $@"
 	$(Q)$(CC) $(STATIC_CFLAGS) -o mkfs.btrfs.static mkfs.static.o $(static_objects) \
@@ -225,8 +233,8 @@ clean: $(CLEANDIRS)
 	@echo "Cleaning"
 	$(Q)rm -f $(progs) cscope.out *.o *.o.d btrfs-convert btrfs-image btrfs-select-super \
 	      btrfs-zero-log btrfstune dir-test ioctl-test quick-test send-test btrfsck \
-	      btrfs.static mkfs.btrfs.static btrfs-calc-size \
-	      version.h $(check_defs) \
+	      btrfs.static mkfs.btrfs.static btrfs-calc-size mktables \
+	      version.h tables.c $(check_defs) \
 	      $(libs) $(lib_links)
 
 $(CLEANDIRS):
diff --git a/chunk-recover.c b/chunk-recover.c
index bcde39e..cec14cd 100644
--- a/chunk-recover.c
+++ b/chunk-recover.c
@@ -1327,8 +1327,7 @@ static int calc_num_stripes(u64 type)
 {
 	if (type & (BTRFS_BLOCK_GROUP_RAID0 |
 		    BTRFS_BLOCK_GROUP_RAID10 |
-		    BTRFS_BLOCK_GROUP_RAID5 |
-		    BTRFS_BLOCK_GROUP_RAID6))
+		    BTRFS_BLOCK_GROUP_PARX))
 		return 0;
 	else if (type & (BTRFS_BLOCK_GROUP_RAID1 |
 			 BTRFS_BLOCK_GROUP_DUP))
@@ -1404,13 +1403,8 @@ static int btrfs_calc_stripe_index(struct chunk_record *chunk, u64 logical)
 	} else if (chunk->type_flags & BTRFS_BLOCK_GROUP_RAID10) {
 		index = stripe_nr % (chunk->num_stripes / chunk->sub_stripes);
 		index *= chunk->sub_stripes;
-	} else if (chunk->type_flags & BTRFS_BLOCK_GROUP_RAID5) {
-		nr_data_stripes = chunk->num_stripes - 1;
-		index = stripe_nr % nr_data_stripes;
-		stripe_nr /= nr_data_stripes;
-		index = (index + stripe_nr) % chunk->num_stripes;
-	} else if (chunk->type_flags & BTRFS_BLOCK_GROUP_RAID6) {
-		nr_data_stripes = chunk->num_stripes - 2;
+	} else if (chunk->type_flags & BTRFS_BLOCK_GROUP_PARX) {
+		nr_data_stripes = chunk->num_stripes - btrfs_flags_par(chunk->type_flags);
 		index = stripe_nr % nr_data_stripes;
 		stripe_nr /= nr_data_stripes;
 		index = (index + stripe_nr) % chunk->num_stripes;
@@ -1503,8 +1497,7 @@ no_extent_record:
 	if (list_empty(&devexts))
 		return 0;
 
-	if (chunk->type_flags & (BTRFS_BLOCK_GROUP_RAID5 |
-				 BTRFS_BLOCK_GROUP_RAID6)) {
+	if (chunk->type_flags & BTRFS_BLOCK_GROUP_PARX) {
 		/* Fixme: try to recover the order by the parity block. */
 		list_splice_tail(&devexts, &chunk->dextents);
 		return -EINVAL;
@@ -1540,8 +1533,7 @@ no_extent_record:
 
 #define BTRFS_ORDERED_RAID	(BTRFS_BLOCK_GROUP_RAID0 |	\
 				 BTRFS_BLOCK_GROUP_RAID10 |	\
-				 BTRFS_BLOCK_GROUP_RAID5 |	\
-				 BTRFS_BLOCK_GROUP_RAID6)
+				 BTRFS_BLOCK_GROUP_PARX)
 
 static int btrfs_rebuild_chunk_stripes(struct recover_control *rc,
 				       struct chunk_record *chunk)
diff --git a/cmds-balance.c b/cmds-balance.c
index a151475..7d116bb 100644
--- a/cmds-balance.c
+++ b/cmds-balance.c
@@ -48,10 +48,22 @@ static int parse_one_profile(const char *profile, u64 *flags)
 		*flags |= BTRFS_BLOCK_GROUP_RAID1;
 	} else if (!strcmp(profile, "raid10")) {
 		*flags |= BTRFS_BLOCK_GROUP_RAID10;
-	} else if (!strcmp(profile, "raid5")) {
-		*flags |= BTRFS_BLOCK_GROUP_RAID5;
-	} else if (!strcmp(profile, "raid6")) {
-		*flags |= BTRFS_BLOCK_GROUP_RAID6;
+	} else if (!strcmp(profile, "raid5")) { /* synonym of "par1" */
+		*flags |= BTRFS_BLOCK_GROUP_PAR1;
+	} else if (!strcmp(profile, "raid6")) { /* synonym of "par2" */
+		*flags |= BTRFS_BLOCK_GROUP_PAR2;
+	} else if (!strcmp(profile, "par1")) {
+		*flags |= BTRFS_BLOCK_GROUP_PAR1;
+	} else if (!strcmp(profile, "par2")) {
+		*flags |= BTRFS_BLOCK_GROUP_PAR2;
+	} else if (!strcmp(profile, "par3")) {
+		*flags |= BTRFS_BLOCK_GROUP_PAR3;
+	} else if (!strcmp(profile, "par4")) {
+		*flags |= BTRFS_BLOCK_GROUP_PAR4;
+	} else if (!strcmp(profile, "par5")) {
+		*flags |= BTRFS_BLOCK_GROUP_PAR5;
+	} else if (!strcmp(profile, "par6")) {
+		*flags |= BTRFS_BLOCK_GROUP_PAR6;
 	} else if (!strcmp(profile, "dup")) {
 		*flags |= BTRFS_BLOCK_GROUP_DUP;
 	} else if (!strcmp(profile, "single")) {
diff --git a/cmds-check.c b/cmds-check.c
index a65670e..46e1a83 100644
--- a/cmds-check.c
+++ b/cmds-check.c
@@ -5189,12 +5189,9 @@ u64 calc_stripe_length(u64 type, u64 length, int num_stripes)
 	} else if (type & BTRFS_BLOCK_GROUP_RAID10) {
 		stripe_size = length * 2;
 		stripe_size /= num_stripes;
-	} else if (type & BTRFS_BLOCK_GROUP_RAID5) {
+	} else if (type & BTRFS_BLOCK_GROUP_PARX) {
 		stripe_size = length;
-		stripe_size /= (num_stripes - 1);
-	} else if (type & BTRFS_BLOCK_GROUP_RAID6) {
-		stripe_size = length;
-		stripe_size /= (num_stripes - 2);
+		stripe_size /= num_stripes - btrfs_flags_par(type);
 	} else {
 		stripe_size = length;
 	}
diff --git a/cmds-chunk.c b/cmds-chunk.c
index 4d7fce0..b4c067d 100644
--- a/cmds-chunk.c
+++ b/cmds-chunk.c
@@ -1347,8 +1347,7 @@ static int calc_num_stripes(u64 type)
 {
 	if (type & (BTRFS_BLOCK_GROUP_RAID0 |
 		    BTRFS_BLOCK_GROUP_RAID10 |
-		    BTRFS_BLOCK_GROUP_RAID5 |
-		    BTRFS_BLOCK_GROUP_RAID6))
+		    BTRFS_BLOCK_GROUP_PARX))
 		return 0;
 	else if (type & (BTRFS_BLOCK_GROUP_RAID1 |
 			 BTRFS_BLOCK_GROUP_DUP))
@@ -1424,13 +1423,8 @@ static int btrfs_calc_stripe_index(struct chunk_record *chunk, u64 logical)
 	} else if (chunk->type_flags & BTRFS_BLOCK_GROUP_RAID10) {
 		index = stripe_nr % (chunk->num_stripes / chunk->sub_stripes);
 		index *= chunk->sub_stripes;
-	} else if (chunk->type_flags & BTRFS_BLOCK_GROUP_RAID5) {
-		nr_data_stripes = chunk->num_stripes - 1;
-		index = stripe_nr % nr_data_stripes;
-		stripe_nr /= nr_data_stripes;
-		index = (index + stripe_nr) % chunk->num_stripes;
-	} else if (chunk->type_flags & BTRFS_BLOCK_GROUP_RAID6) {
-		nr_data_stripes = chunk->num_stripes - 2;
+	} else if (chunk->type_flags & BTRFS_BLOCK_GROUP_PARX) {
+		nr_data_stripes = chunk->num_stripes - btrfs_flags_par(chunk->type_flags);
 		index = stripe_nr % nr_data_stripes;
 		stripe_nr /= nr_data_stripes;
 		index = (index + stripe_nr) % chunk->num_stripes;
@@ -1523,8 +1517,7 @@ no_extent_record:
 	if (list_empty(&devexts))
 		return 0;
 
-	if (chunk->type_flags & (BTRFS_BLOCK_GROUP_RAID5 |
-				 BTRFS_BLOCK_GROUP_RAID6)) {
+	if (chunk->type_flags & BTRFS_BLOCK_GROUP_PARX) {
 		/* Fixme: try to recover the order by the parity block. */
 		list_splice_tail(&devexts, &chunk->dextents);
 		return -EINVAL;
@@ -1560,8 +1553,7 @@ no_extent_record:
 
 #define BTRFS_ORDERED_RAID	(BTRFS_BLOCK_GROUP_RAID0 |	\
 				 BTRFS_BLOCK_GROUP_RAID10 |	\
-				 BTRFS_BLOCK_GROUP_RAID5 |	\
-				 BTRFS_BLOCK_GROUP_RAID6)
+				 BTRFS_BLOCK_GROUP_PARX)
 
 static int btrfs_rebuild_chunk_stripes(struct recover_control *rc,
 				       struct chunk_record *chunk)
diff --git a/cmds-filesystem.c b/cmds-filesystem.c
index 1c1926b..861cbb3 100644
--- a/cmds-filesystem.c
+++ b/cmds-filesystem.c
@@ -142,10 +142,18 @@ static char *group_profile_str(u64 flag)
 		return "RAID0";
 	case BTRFS_BLOCK_GROUP_RAID1:
 		return "RAID1";
-	case BTRFS_BLOCK_GROUP_RAID5:
+	case BTRFS_BLOCK_GROUP_PAR1:
 		return "RAID5";
-	case BTRFS_BLOCK_GROUP_RAID6:
+	case BTRFS_BLOCK_GROUP_PAR2:
 		return "RAID6";
+	case BTRFS_BLOCK_GROUP_PAR3:
+		return "PAR3";
+	case BTRFS_BLOCK_GROUP_PAR4:
+		return "PAR4";
+	case BTRFS_BLOCK_GROUP_PAR5:
+		return "PAR5";
+	case BTRFS_BLOCK_GROUP_PAR6:
+		return "PAR6";
 	case BTRFS_BLOCK_GROUP_DUP:
 		return "DUP";
 	case BTRFS_BLOCK_GROUP_RAID10:
diff --git a/ctree.h b/ctree.h
index 2117374..4d2d1b6 100644
--- a/ctree.h
+++ b/ctree.h
@@ -470,6 +470,7 @@ struct btrfs_super_block {
 #define BTRFS_FEATURE_INCOMPAT_EXTENDED_IREF	(1ULL << 6)
 #define BTRFS_FEATURE_INCOMPAT_RAID56		(1ULL << 7)
 #define BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA	(1ULL << 8)
+#define BTRFS_FEATURE_INCOMPAT_PAR3456		(1ULL << 10)
 
 
 #define BTRFS_FEATURE_COMPAT_SUPP		0ULL
@@ -482,7 +483,8 @@ struct btrfs_super_block {
 	 BTRFS_FEATURE_INCOMPAT_EXTENDED_IREF |		\
 	 BTRFS_FEATURE_INCOMPAT_RAID56 |		\
 	 BTRFS_FEATURE_INCOMPAT_MIXED_GROUPS |		\
-	 BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA)
+	 BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA |	\
+	 BTRFS_FEATURE_INCOMPAT_PAR3456)
 
 /*
  * A leaf is full of items. offset and size tell us where to find
@@ -830,8 +832,39 @@ struct btrfs_csum_item {
 #define BTRFS_BLOCK_GROUP_RAID1		(1ULL << 4)
 #define BTRFS_BLOCK_GROUP_DUP		(1ULL << 5)
 #define BTRFS_BLOCK_GROUP_RAID10	(1ULL << 6)
-#define BTRFS_BLOCK_GROUP_RAID5    (1ULL << 7)
-#define BTRFS_BLOCK_GROUP_RAID6    (1ULL << 8)
+#define BTRFS_BLOCK_GROUP_PAR1     (1ULL << 7)
+#define BTRFS_BLOCK_GROUP_PAR2     (1ULL << 8)
+#define BTRFS_BLOCK_GROUP_PAR3     (1ULL << 9)
+#define BTRFS_BLOCK_GROUP_PAR4     (1ULL << 10)
+#define BTRFS_BLOCK_GROUP_PAR5     (1ULL << 11)
+#define BTRFS_BLOCK_GROUP_PAR6     (1ULL << 12)
+
+/* mask covering all the parity group flags */
+#define BTRFS_BLOCK_GROUP_PARX (BTRFS_BLOCK_GROUP_PAR1 | \
+				BTRFS_BLOCK_GROUP_PAR2 | \
+				BTRFS_BLOCK_GROUP_PAR3 | \
+				BTRFS_BLOCK_GROUP_PAR4 | \
+				BTRFS_BLOCK_GROUP_PAR5 | \
+				BTRFS_BLOCK_GROUP_PAR6)
+
+/* gets the parity number from the parity group */
+static inline int btrfs_flags_par(unsigned group)
+{
+	switch (group & BTRFS_BLOCK_GROUP_PARX) {
+	case BTRFS_BLOCK_GROUP_PAR1: return 1;
+	case BTRFS_BLOCK_GROUP_PAR2: return 2;
+	case BTRFS_BLOCK_GROUP_PAR3: return 3;
+	case BTRFS_BLOCK_GROUP_PAR4: return 4;
+	case BTRFS_BLOCK_GROUP_PAR5: return 5;
+	case BTRFS_BLOCK_GROUP_PAR6: return 6;
+	}
+
+	/* catch flags with more than one parity group set */
+	BUG_ON(group & BTRFS_BLOCK_GROUP_PARX);
+
+	return 0;
+}
+
 #define BTRFS_BLOCK_GROUP_RESERVED	BTRFS_AVAIL_ALLOC_BIT_SINGLE
 
 #define BTRFS_BLOCK_GROUP_TYPE_MASK	(BTRFS_BLOCK_GROUP_DATA |    \
@@ -840,8 +873,7 @@ struct btrfs_csum_item {
 
 #define BTRFS_BLOCK_GROUP_PROFILE_MASK	(BTRFS_BLOCK_GROUP_RAID0 |   \
 					 BTRFS_BLOCK_GROUP_RAID1 |   \
-					 BTRFS_BLOCK_GROUP_RAID5 |   \
-					 BTRFS_BLOCK_GROUP_RAID6 |   \
+					 BTRFS_BLOCK_GROUP_PARX |   \
 					 BTRFS_BLOCK_GROUP_DUP |     \
 					 BTRFS_BLOCK_GROUP_RAID10)
 
diff --git a/disk-io.h b/disk-io.h
index ca6af2d..27e3dc4 100644
--- a/disk-io.h
+++ b/disk-io.h
@@ -110,5 +110,3 @@ int write_and_map_eb(struct btrfs_trans_handle *trans, struct btrfs_root *root,
 		     struct extent_buffer *eb);
 #endif
 
-/* raid6.c */
-void raid6_gen_syndrome(int disks, size_t bytes, void **ptrs);
diff --git a/extent-tree.c b/extent-tree.c
index 7860d1d..98a8cb4 100644
--- a/extent-tree.c
+++ b/extent-tree.c
@@ -1862,8 +1862,7 @@ static void set_avail_alloc_bits(struct btrfs_fs_info *fs_info, u64 flags)
 	u64 extra_flags = flags & (BTRFS_BLOCK_GROUP_RAID0 |
 				   BTRFS_BLOCK_GROUP_RAID1 |
 				   BTRFS_BLOCK_GROUP_RAID10 |
-				   BTRFS_BLOCK_GROUP_RAID5 |
-				   BTRFS_BLOCK_GROUP_RAID6 |
+				   BTRFS_BLOCK_GROUP_PARX |
 				   BTRFS_BLOCK_GROUP_DUP);
 	if (extra_flags) {
 		if (flags & BTRFS_BLOCK_GROUP_DATA)
diff --git a/ioctl.h b/ioctl.h
index a589cd7..f798d22 100644
--- a/ioctl.h
+++ b/ioctl.h
@@ -466,7 +466,11 @@ enum btrfs_err_code {
 	BTRFS_ERROR_DEV_TGT_REPLACE,
 	BTRFS_ERROR_DEV_MISSING_NOT_FOUND,
 	BTRFS_ERROR_DEV_ONLY_WRITABLE,
-	BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS
+	BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS,
+	BTRFS_ERROR_DEV_PAR3_MIN_NOT_MET,
+	BTRFS_ERROR_DEV_PAR4_MIN_NOT_MET,
+	BTRFS_ERROR_DEV_PAR5_MIN_NOT_MET,
+	BTRFS_ERROR_DEV_PAR6_MIN_NOT_MET
 };
 
 /* An error code to error string mapping for the kernel
@@ -480,9 +484,9 @@ static inline char *btrfs_err_str(enum btrfs_err_code err_code)
 		case BTRFS_ERROR_DEV_RAID10_MIN_NOT_MET:
 			return "unable to go below four devices on raid10";
 		case BTRFS_ERROR_DEV_RAID5_MIN_NOT_MET:
-			return "unable to go below three devices on raid5";
+			return "unable to go below two devices on raid5/par1";
 		case BTRFS_ERROR_DEV_RAID6_MIN_NOT_MET:
-			return "unable to go below four devices on raid6";
+			return "unable to go below three devices on raid6/par2";
 		case BTRFS_ERROR_DEV_TGT_REPLACE:
 			return "unable to remove the dev_replace target dev";
 		case BTRFS_ERROR_DEV_MISSING_NOT_FOUND:
@@ -492,6 +496,14 @@ static inline char *btrfs_err_str(enum btrfs_err_code err_code)
 		case BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS:
 			return "add/delete/balance/replace/resize operation "
 				"in progress";
+		case BTRFS_ERROR_DEV_PAR3_MIN_NOT_MET:
+			return "unable to go below four devices on par3";
+		case BTRFS_ERROR_DEV_PAR4_MIN_NOT_MET:
+			return "unable to go below five devices on par4";
+		case BTRFS_ERROR_DEV_PAR5_MIN_NOT_MET:
+			return "unable to go below six devices on par5";
+		case BTRFS_ERROR_DEV_PAR6_MIN_NOT_MET:
+			return "unable to go below seven devices on par6";
 		default:
 			return NULL;
 	}
diff --git a/man/mkfs.btrfs.8.in b/man/mkfs.btrfs.8.in
index b54e935..e3f4ec7 100644
--- a/man/mkfs.btrfs.8.in
+++ b/man/mkfs.btrfs.8.in
@@ -38,7 +38,9 @@ mkfs.btrfs uses all the available storage for the filesystem.
 .TP
 \fB\-d\fR, \fB\-\-data \fItype\fR
 Specify how the data must be spanned across the devices specified. Valid
-values are raid0, raid1, raid5, raid6, raid10 or single.
+values are raid0, raid1, raid5, raid6, raid10, par1, par2, par3, par4, par5,
+par6 or single. The parX values enable RAID with up to six parity levels.
+Note that raid5 and raid6 are synonyms for par1 and par2.
 .TP
 \fB\-f\fR, \fB\-\-force\fR
 Force overwrite when an existing filesystem is detected on the device.
diff --git a/mkfs.c b/mkfs.c
index 33369f9..661e59f 100644
--- a/mkfs.c
+++ b/mkfs.c
@@ -276,7 +276,7 @@ static void print_usage(void)
 	fprintf(stderr, "options:\n");
 	fprintf(stderr, "\t -A --alloc-start the offset to start the FS\n");
 	fprintf(stderr, "\t -b --byte-count total number of bytes in the FS\n");
-	fprintf(stderr, "\t -d --data data profile, raid0, raid1, raid5, raid6, raid10, dup or single\n");
+	fprintf(stderr, "\t -d --data data profile, raid0, raid1, raid5, raid6, par[1,2,3,4,5,6], raid10, dup or single\n");
 	fprintf(stderr, "\t -f --force force overwrite of existing filesystem\n");
 	fprintf(stderr, "\t -l --leafsize size of btree leaves\n");
 	fprintf(stderr, "\t -L --label set a label\n");
@@ -306,9 +306,21 @@ static u64 parse_profile(char *s)
 	} else if (strcmp(s, "raid1") == 0) {
 		return BTRFS_BLOCK_GROUP_RAID1;
 	} else if (strcmp(s, "raid5") == 0) {
-		return BTRFS_BLOCK_GROUP_RAID5;
+		return BTRFS_BLOCK_GROUP_PAR1;
 	} else if (strcmp(s, "raid6") == 0) {
-		return BTRFS_BLOCK_GROUP_RAID6;
+		return BTRFS_BLOCK_GROUP_PAR2;
+	} else if (strcmp(s, "par1") == 0) {
+		return BTRFS_BLOCK_GROUP_PAR1;
+	} else if (strcmp(s, "par2") == 0) {
+		return BTRFS_BLOCK_GROUP_PAR2;
+	} else if (strcmp(s, "par3") == 0) {
+		return BTRFS_BLOCK_GROUP_PAR3;
+	} else if (strcmp(s, "par4") == 0) {
+		return BTRFS_BLOCK_GROUP_PAR4;
+	} else if (strcmp(s, "par5") == 0) {
+		return BTRFS_BLOCK_GROUP_PAR5;
+	} else if (strcmp(s, "par6") == 0) {
+		return BTRFS_BLOCK_GROUP_PAR6;
 	} else if (strcmp(s, "raid10") == 0) {
 		return BTRFS_BLOCK_GROUP_RAID10;
 	} else if (strcmp(s, "dup") == 0) {
@@ -1147,6 +1159,8 @@ static const struct btrfs_fs_feature {
 		"raid56 extended format" },
 	{ "skinny-metadata", BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA,
 		"reduced-size metadata extent refs" },
+	{ "par3456", BTRFS_FEATURE_INCOMPAT_PAR3456,
+		"raid support with up to six parities" },
 	/* Keep this one last */
 	{ "list-all", BTRFS_FEATURE_LIST_ALL, NULL }
 };
@@ -1491,10 +1505,16 @@ int main(int ac, char **av)
 		features |= BTRFS_FEATURE_INCOMPAT_MIXED_GROUPS;
 
 	if ((data_profile | metadata_profile) &
-	    (BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6)) {
+		(BTRFS_BLOCK_GROUP_PAR1 | BTRFS_BLOCK_GROUP_PAR2)) {
 		features |= BTRFS_FEATURE_INCOMPAT_RAID56;
 	}
 
+	if ((data_profile | metadata_profile) &
+		(BTRFS_BLOCK_GROUP_PAR3 | BTRFS_BLOCK_GROUP_PAR4
+		 | BTRFS_BLOCK_GROUP_PAR5 | BTRFS_BLOCK_GROUP_PAR6)) {
+		features |= BTRFS_FEATURE_INCOMPAT_PAR3456;
+	}
+
 	process_fs_features(features);
 
 	ret = make_btrfs(fd, file, label, blocks, dev_block_count,
diff --git a/mktables.c b/mktables.c
new file mode 100644
index 0000000..21c0222
--- /dev/null
+++ b/mktables.c
@@ -0,0 +1,256 @@
+/*
+ * Copyright (C) 2013 Andrea Mazzoleni
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+
+/**
+ * Multiplication a*b in GF(2^8).
+ */
+static uint8_t gfmul(uint8_t a, uint8_t b)
+{
+	uint8_t v;
+
+	v = 0;
+	while (b)  {
+		if ((b & 1) != 0)
+			v ^= a;
+
+		if ((a & 0x80) != 0) {
+			a <<= 1;
+			a ^= 0x1d;
+		} else {
+			a <<= 1;
+		}
+
+		b >>= 1;
+	}
+
+	return v;
+}
+
+/**
+ * Inversion (1/a) in GF(2^8).
+ */
+uint8_t gfinv[256];
+
+/**
+ * Number of parities.
+ * This is the number of rows of the generator matrix.
+ */
+#define PARITY 6
+
+/**
+ * Number of disks.
+ * This is the number of columns of the generator matrix.
+ */
+#define DISK (257-PARITY)
+
+/**
+ * Setup the Cauchy matrix used to generate the parity.
+ */
+static void set_cauchy(uint8_t *matrix)
+{
+	int i, j;
+	uint8_t inv_x, y;
+
+	/*
+	 * The first row of the generator matrix is formed by all 1.
+	 *
+	 * The generator matrix is an Extended Cauchy matrix built from
+	 * a Cauchy matrix adding at the top a row of all 1.
+	 *
+	 * Extending a Cauchy matrix in this way maintains the MDS property
+	 * of the matrix.
+	 *
+	 * For example, considering a generator matrix of 4x6 we have now:
+	 *
+	 *   1   1   1   1   1   1
+	 *   -   -   -   -   -   -
+	 *   -   -   -   -   -   -
+	 *   -   -   -   -   -   -
+	 */
+	for (i = 0; i < DISK; ++i)
+		matrix[0*DISK+i] = 1;
+
+	/*
+	 * Second row is formed with powers 2^i, and it's the first
+	 * row of the Cauchy matrix.
+	 *
+	 * Each element of the Cauchy matrix is in the form 1/(x_i + y_j)
+	 * where all x_i and y_j must be different for any i and j.
+	 *
+	 * For the first row with j=0, we choose x_i = 2^-i and y_0 = 0
+	 * and we obtain a first row formed as:
+	 *
+	 * 1/(x_i + y_0) = 1/(2^-i + 0) = 2^i
+	 *
+	 * with 2^-i != 0 for any i
+	 *
+	 * In the example we get:
+	 *
+	 * x_0 = 1
+	 * x_1 = 142
+	 * x_2 = 71
+	 * x_3 = 173
+	 * x_4 = 216
+	 * x_5 = 108
+	 * y_0 = 0
+	 *
+	 * with the matrix:
+	 *
+	 *   1   1   1   1   1   1
+	 *   1   2   4   8  16  32
+	 *   -   -   -   -   -   -
+	 *   -   -   -   -   -   -
+	 */
+	inv_x = 1;
+	for (i = 0; i < DISK; ++i) {
+		matrix[1*DISK+i] = inv_x;
+		inv_x = gfmul(2, inv_x);
+	}
+
+	/*
+	 * The rest of the Cauchy matrix is formed choosing for each row j
+	 * a new y_j = 2^j and reusing the x_i already assigned in the first
+	 * row, obtaining:
+	 *
+	 * 1/(x_i + y_j) = 1/(2^-i + 2^j)
+	 *
+	 * with 2^-i + 2^j != 0 for any i,j with i>=0,j>=1,i+j<255
+	 *
+	 * In the example we get:
+	 *
+	 * y_1 = 2
+	 * y_2 = 4
+	 *
+	 * with the matrix:
+	 *
+	 *   1   1   1   1   1   1
+	 *   1   2   4   8  16  32
+	 * 244  83  78 183 118  47
+	 * 167  39 213  59 153  82
+	 */
+	y = 2;
+	for (j = 0; j < PARITY-2; ++j) {
+		inv_x = 1;
+		for (i = 0; i < DISK; ++i) {
+			uint8_t x = gfinv[inv_x];
+			matrix[(j+2)*DISK+i] = gfinv[y ^ x];
+			inv_x = gfmul(2, inv_x);
+		}
+
+		y = gfmul(2, y);
+	}
+
+	/*
+	 * Finally we adjust the matrix, multiplying each row by
+	 * the inverse of its first element.
+	 *
+	 * This operation also maintains the MDS property of the matrix.
+	 *
+	 * Resulting in:
+	 *
+	 *   1   1   1   1   1   1
+	 *   1   2   4   8  16  32
+	 *   1 245 210 196 154 113
+	 *   1 187 166 215   7 106
+	 */
+	for (j = 0; j < PARITY-2; ++j) {
+		uint8_t f = gfinv[matrix[(j+2)*DISK]];
+
+		for (i = 0; i < DISK; ++i)
+			matrix[(j+2)*DISK+i] = gfmul(matrix[(j+2)*DISK+i], f);
+	}
+}
+
+int main(void)
+{
+	uint8_t v;
+	int i, j, p;
+	uint8_t matrix[PARITY * 256];
+
+	printf("/*\n");
+	printf(" * Copyright (C) 2013 Andrea Mazzoleni\n");
+	printf(" *\n");
+	printf(" * This program is free software: you can redistribute it and/or modify\n");
+	printf(" * it under the terms of the GNU General Public License as published by\n");
+	printf(" * the Free Software Foundation, either version 2 of the License, or\n");
+	printf(" * (at your option) any later version.\n");
+	printf(" *\n");
+	printf(" * This program is distributed in the hope that it will be useful,\n");
+	printf(" * but WITHOUT ANY WARRANTY; without even the implied warranty of\n");
+	printf(" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n");
+	printf(" * GNU General Public License for more details.\n");
+	printf(" */\n");
+	printf("\n");
+
+	printf("#include \"kerncompat.h\"\n");
+	printf("\n");
+
+	/* a*b */
+	printf("const u8 raid_gfmul[256][256] =\n");
+	printf("{\n");
+	for (i = 0; i < 256; ++i) {
+		printf("\t{\n");
+		for (j = 0; j < 256; ++j) {
+			if (j % 8 == 0)
+				printf("\t\t");
+			v = gfmul(i, j);
+			if (v == 1)
+				gfinv[i] = j;
+			printf("0x%02x,", (unsigned)v);
+			if (j % 8 == 7)
+				printf("\n");
+			else
+				printf(" ");
+		}
+		printf("\t},\n");
+	}
+	printf("};\n\n");
+
+	/* cauchy matrix */
+	set_cauchy(matrix);
+
+	printf("/**\n");
+	printf(" * Cauchy matrix used to generate parity.\n");
+	printf(" * This matrix is valid for up to %u parities with %u data disks.\n", PARITY, DISK);
+	printf(" *\n");
+	for (p = 0; p < PARITY; ++p) {
+		printf(" * ");
+		for (i = 0; i < DISK; ++i)
+			printf("%02x ", matrix[p*DISK+i]);
+		printf("\n");
+	}
+	printf(" */\n");
+	printf("const u8 raid_gfcauchy[%u][256] =\n", PARITY);
+	printf("{\n");
+	for (p = 0; p < PARITY; ++p) {
+		printf("\t{\n");
+		for (i = 0; i < DISK; ++i) {
+			if (i % 8 == 0)
+				printf("\t\t");
+			printf("0x%02x,", matrix[p*DISK+i]);
+			if (i % 8 == 7)
+				printf("\n");
+			else
+				printf(" ");
+		}
+		printf("\n\t},\n");
+	}
+	printf("};\n\n");
+
+	return 0;
+}
+
diff --git a/raid.c b/raid.c
new file mode 100644
index 0000000..2aa275e
--- /dev/null
+++ b/raid.c
@@ -0,0 +1,44 @@
+/*
+ * Copyright (C) 2013 Andrea Mazzoleni
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include "raid.h"
+
+/* tables defined in tables.c */
+const u8 raid_gfmul[256][256];
+const u8 raid_gfcauchy[6][256];
+
+void raid_gen(int nd, int np, size_t size, void **vv)
+{
+	u8 **v = (u8 **)vv;
+	size_t i;
+
+	for (i = 0; i < size; ++i) {
+		u8 p[RAID_PARITY_MAX];
+		int j, d;
+
+		for (j = 0; j < np; ++j)
+			p[j] = 0;
+
+		for (d = 0; d < nd; ++d) {
+			u8 b = v[d][i];
+
+			for (j = 0; j < np; ++j)
+				p[j] ^= raid_gfmul[b][raid_gfcauchy[j][d]];
+		}
+
+		for (j = 0; j < np; ++j)
+			v[nd + j][i] = p[j];
+	}
+}
+
diff --git a/raid.h b/raid.h
new file mode 100644
index 0000000..83f8b25
--- /dev/null
+++ b/raid.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright (C) 2013 Andrea Mazzoleni
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __RAID_H
+#define __RAID_H
+
+#include "kerncompat.h"
+
+/*
+ * Max number of parities supported.
+ */
+#define RAID_PARITY_MAX 6
+
+/*
+ * Generate the RAID Cauchy parity.
+ *
+ * Note that this is the slow reference implementation.
+ * For a faster one and documentation see lib/raid/raid.c in the Linux Kernel.
+ */
+void raid_gen(int nd, int np, size_t size, void **vv);
+
+#endif
+
diff --git a/raid6.c b/raid6.c
deleted file mode 100644
index a6ee483..0000000
--- a/raid6.c
+++ /dev/null
@@ -1,101 +0,0 @@
-/* -*- linux-c -*- ------------------------------------------------------- *
- *
- *   Copyright 2002-2004 H. Peter Anvin - All Rights Reserved
- *
- *   This program is free software; you can redistribute it and/or modify
- *   it under the terms of the GNU General Public License as published by
- *   the Free Software Foundation, Inc., 53 Temple Place Ste 330,
- *   Boston MA 02111-1307, USA; either version 2 of the License, or
- *   (at your option) any later version; incorporated herein by reference.
- *
- * ----------------------------------------------------------------------- */
-
-/*
- * raid6int1.c
- *
- * 1-way unrolled portable integer math RAID-6 instruction set
- *
- * This file was postprocessed using unroll.pl and then ported to userspace
- */
-#include <stdint.h>
-#include <unistd.h>
-#include "kerncompat.h"
-#include "ctree.h"
-#include "disk-io.h"
-
-/*
- * This is the C data type to use
- */
-
-/* Change this from BITS_PER_LONG if there is something better... */
-#if BITS_PER_LONG == 64
-# define NBYTES(x) ((x) * 0x0101010101010101UL)
-# define NSIZE  8
-# define NSHIFT 3
-typedef uint64_t unative_t;
-#else
-# define NBYTES(x) ((x) * 0x01010101U)
-# define NSIZE  4
-# define NSHIFT 2
-typedef uint32_t unative_t;
-#endif
-
-/*
- * These sub-operations are separate inlines since they can sometimes be
- * specially optimized using architecture-specific hacks.
- */
-
-/*
- * The SHLBYTE() operation shifts each byte left by 1, *not*
- * rolling over into the next byte
- */
-static inline __attribute_const__ unative_t SHLBYTE(unative_t v)
-{
-	unative_t vv;
-
-	vv = (v << 1) & NBYTES(0xfe);
-	return vv;
-}
-
-/*
- * The MASK() operation returns 0xFF in any byte for which the high
- * bit is 1, 0x00 for any byte for which the high bit is 0.
- */
-static inline __attribute_const__ unative_t MASK(unative_t v)
-{
-	unative_t vv;
-
-	vv = v & NBYTES(0x80);
-	vv = (vv << 1) - (vv >> 7); /* Overflow on the top bit is OK */
-	return vv;
-}
-
-
-void raid6_gen_syndrome(int disks, size_t bytes, void **ptrs)
-{
-	uint8_t **dptr = (uint8_t **)ptrs;
-	uint8_t *p, *q;
-	int d, z, z0;
-
-	unative_t wd0, wq0, wp0, w10, w20;
-
-	z0 = disks - 3;		/* Highest data disk */
-	p = dptr[z0+1];		/* XOR parity */
-	q = dptr[z0+2];		/* RS syndrome */
-
-	for ( d = 0 ; d < bytes ; d += NSIZE*1 ) {
-		wq0 = wp0 = *(unative_t *)&dptr[z0][d+0*NSIZE];
-		for ( z = z0-1 ; z >= 0 ; z-- ) {
-			wd0 = *(unative_t *)&dptr[z][d+0*NSIZE];
-			wp0 ^= wd0;
-			w20 = MASK(wq0);
-			w10 = SHLBYTE(wq0);
-			w20 &= NBYTES(0x1d);
-			w10 ^= w20;
-			wq0 = w10 ^ wd0;
-		}
-		*(unative_t *)&p[d+NSIZE*0] = wp0;
-		*(unative_t *)&q[d+NSIZE*0] = wq0;
-	}
-}
-
diff --git a/utils.c b/utils.c
index f499023..52b090b 100644
--- a/utils.c
+++ b/utils.c
@@ -1856,13 +1856,19 @@ int test_num_disk_vs_raid(u64 metadata_profile, u64 data_profile,
 
 	switch (dev_cnt) {
 	default:
+	case 7:
+		allowed |= BTRFS_BLOCK_GROUP_PAR6;
+	case 6:
+		allowed |= BTRFS_BLOCK_GROUP_PAR5;
+	case 5:
+		allowed |= BTRFS_BLOCK_GROUP_PAR4;
 	case 4:
-		allowed |= BTRFS_BLOCK_GROUP_RAID10;
+		allowed |= BTRFS_BLOCK_GROUP_RAID10 | BTRFS_BLOCK_GROUP_PAR3;
 	case 3:
-		allowed |= BTRFS_BLOCK_GROUP_RAID6;
+		allowed |= BTRFS_BLOCK_GROUP_PAR2;
 	case 2:
 		allowed |= BTRFS_BLOCK_GROUP_RAID0 | BTRFS_BLOCK_GROUP_RAID1 |
-			BTRFS_BLOCK_GROUP_RAID5;
+			BTRFS_BLOCK_GROUP_PAR1;
 		break;
 	case 1:
 		allowed |= BTRFS_BLOCK_GROUP_DUP;
diff --git a/volumes.c b/volumes.c
index c38da6c..b1fb7de 100644
--- a/volumes.c
+++ b/volumes.c
@@ -30,6 +30,7 @@
 #include "print-tree.h"
 #include "volumes.h"
 #include "math.h"
+#include "raid.h"
 
 struct stripe {
 	struct btrfs_device *dev;
@@ -38,12 +39,7 @@ struct stripe {
 
 static inline int nr_parity_stripes(struct map_lookup *map)
 {
-	if (map->type & BTRFS_BLOCK_GROUP_RAID5)
-		return 1;
-	else if (map->type & BTRFS_BLOCK_GROUP_RAID6)
-		return 2;
-	else
-		return 0;
+	return btrfs_flags_par(map->type);
 }
 
 static inline int nr_data_stripes(struct map_lookup *map)
@@ -51,8 +47,6 @@ static inline int nr_data_stripes(struct map_lookup *map)
 	return map->num_stripes - nr_parity_stripes(map);
 }
 
-#define is_parity_stripe(x) ( ((x) == BTRFS_RAID5_P_STRIPE) || ((x) == BTRFS_RAID6_Q_STRIPE) )
-
 static LIST_HEAD(fs_uuids);
 
 static struct btrfs_device *__find_device(struct list_head *head, u64 devid,
@@ -643,10 +637,8 @@ static u64 chunk_bytes_by_type(u64 type, u64 calc_size, int num_stripes,
 		return calc_size;
 	else if (type & BTRFS_BLOCK_GROUP_RAID10)
 		return calc_size * (num_stripes / sub_stripes);
-	else if (type & BTRFS_BLOCK_GROUP_RAID5)
-		return calc_size * (num_stripes - 1);
-	else if (type & BTRFS_BLOCK_GROUP_RAID6)
-		return calc_size * (num_stripes - 2);
+	else if (type & BTRFS_BLOCK_GROUP_PARX)
+		return calc_size * (num_stripes - btrfs_flags_par(type));
 	else
 		return calc_size * num_stripes;
 }
@@ -782,7 +774,7 @@ int btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
 	}
 
 	if (type & (BTRFS_BLOCK_GROUP_RAID0 | BTRFS_BLOCK_GROUP_RAID1 |
-		    BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6 |
+		    BTRFS_BLOCK_GROUP_PARX |
 		    BTRFS_BLOCK_GROUP_RAID10 |
 		    BTRFS_BLOCK_GROUP_DUP)) {
 		if (type & BTRFS_BLOCK_GROUP_SYSTEM) {
@@ -822,20 +814,13 @@ int btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
 		sub_stripes = 2;
 		min_stripes = 4;
 	}
-	if (type & (BTRFS_BLOCK_GROUP_RAID5)) {
-		num_stripes = btrfs_super_num_devices(info->super_copy);
-		if (num_stripes < 2)
-			return -ENOSPC;
-		min_stripes = 2;
-		stripe_len = find_raid56_stripe_len(num_stripes - 1,
-				    btrfs_super_stripesize(info->super_copy));
-	}
-	if (type & (BTRFS_BLOCK_GROUP_RAID6)) {
+	if (type & BTRFS_BLOCK_GROUP_PARX) {
+		min_stripes = 1 + btrfs_flags_par(type);
 		num_stripes = btrfs_super_num_devices(info->super_copy);
-		if (num_stripes < 3)
+		if (num_stripes < min_stripes)
 			return -ENOSPC;
-		min_stripes = 3;
-		stripe_len = find_raid56_stripe_len(num_stripes - 2,
+
+		stripe_len = find_raid56_stripe_len(num_stripes - btrfs_flags_par(type),
 				    btrfs_super_stripesize(info->super_copy));
 	}
 
@@ -1107,10 +1092,8 @@ int btrfs_num_copies(struct btrfs_mapping_tree *map_tree, u64 logical, u64 len)
 		ret = map->num_stripes;
 	else if (map->type & BTRFS_BLOCK_GROUP_RAID10)
 		ret = map->sub_stripes;
-	else if (map->type & BTRFS_BLOCK_GROUP_RAID5)
-		ret = 2;
-	else if (map->type & BTRFS_BLOCK_GROUP_RAID6)
-		ret = 3;
+	else if (map->type & BTRFS_BLOCK_GROUP_PARX)
+		ret = 1 + btrfs_flags_par(map->type);
 	else
 		ret = 1;
 	return ret;
@@ -1163,8 +1146,7 @@ int btrfs_rmap_block(struct btrfs_mapping_tree *map_tree,
 		length = ce->size / (map->num_stripes / map->sub_stripes);
 	else if (map->type & BTRFS_BLOCK_GROUP_RAID0)
 		length = ce->size / map->num_stripes;
-	else if (map->type & (BTRFS_BLOCK_GROUP_RAID5 |
-			      BTRFS_BLOCK_GROUP_RAID6)) {
+	else if (map->type & BTRFS_BLOCK_GROUP_PARX) {
 		length = ce->size / nr_data_stripes(map);
 		rmap_len = map->stripe_len * nr_data_stripes(map);
 	}
@@ -1294,9 +1276,9 @@ again:
 			stripes_required = map->sub_stripes;
 		}
 	}
-	if (map->type & (BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6)
+	if ((map->type & BTRFS_BLOCK_GROUP_PARX)
 	    && multi_ret && ((rw & WRITE) || mirror_num > 1) && raid_map_ret) {
-		    /* RAID[56] write or recovery. Return all stripes */
+		    /* PAR write or recovery. Return all stripes */
 		    stripes_required = map->num_stripes;
 
 		    /* Only allocate the map if we've already got a large enough multi_ret */
@@ -1330,7 +1312,7 @@ again:
 	stripe_offset = offset - stripe_offset;
 
 	if (map->type & (BTRFS_BLOCK_GROUP_RAID0 | BTRFS_BLOCK_GROUP_RAID1 |
-			 BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6 |
+			 BTRFS_BLOCK_GROUP_PARX |
 			 BTRFS_BLOCK_GROUP_RAID10 |
 			 BTRFS_BLOCK_GROUP_DUP)) {
 		/* we limit the length of each bio to what fits in a stripe */
@@ -1369,14 +1351,14 @@ again:
 			multi->num_stripes = map->num_stripes;
 		else if (mirror_num)
 			stripe_index = mirror_num - 1;
-	} else if (map->type & (BTRFS_BLOCK_GROUP_RAID5 |
-				BTRFS_BLOCK_GROUP_RAID6)) {
+	} else if (map->type & BTRFS_BLOCK_GROUP_PARX) {
 
 		if (raid_map) {
 			int rot;
 			u64 tmp;
 			u64 raid56_full_stripe_start;
 			u64 full_stripe_len = nr_data_stripes(map) * map->stripe_len;
+			int j;
 
 			/*
 			 * align the start of our data stripe in the logical
@@ -1399,9 +1381,8 @@ again:
 				raid_map[(i+rot) % map->num_stripes] =
 					ce->start + (tmp + i) * map->stripe_len;
 
-			raid_map[(i+rot) % map->num_stripes] = BTRFS_RAID5_P_STRIPE;
-			if (map->type & BTRFS_BLOCK_GROUP_RAID6)
-				raid_map[(i+rot+1) % map->num_stripes] = BTRFS_RAID6_Q_STRIPE;
+			for (j = 0; j < btrfs_flags_par(map->type); j++)
+				raid_map[(i+rot+j) % map->num_stripes] = BTRFS_RAID_PAR1_STRIPE + j;
 
 			*length = map->stripe_len;
 			stripe_index = 0;
@@ -1413,8 +1394,9 @@ again:
 
 			/*
 			 * Mirror #0 or #1 means the original data block.
-			 * Mirror #2 is RAID5 parity block.
-			 * Mirror #3 is RAID6 Q block.
+			 * Mirror #2 is RAID5/PAR1 P block.
+			 * Mirror #3 is RAID6/PAR2 Q block.
+			 * ... and so on, up to the PAR6 parity block.
 			 */
 			if (mirror_num > 1)
 				stripe_index = nr_data_stripes(map) + mirror_num - 2;
@@ -1838,7 +1820,7 @@ static void split_eb_for_raid56(struct btrfs_fs_info *info,
 	int ret;
 
 	for (i = 0; i < num_stripes; i++) {
-		if (raid_map[i] >= BTRFS_RAID5_P_STRIPE)
+		if (raid_map[i] >= BTRFS_RAID_PAR1_STRIPE)
 			break;
 
 		eb = malloc(sizeof(struct extent_buffer) + stripe_len);
@@ -1871,11 +1853,13 @@ int write_raid56_with_parity(struct btrfs_fs_info *info,
 			     struct btrfs_multi_bio *multi,
 			     u64 stripe_len, u64 *raid_map)
 {
-	struct extent_buffer **ebs, *p_eb = NULL, *q_eb = NULL;
+	struct extent_buffer **ebs;
+	struct extent_buffer *p_eb[RAID_PARITY_MAX];
 	int i;
 	int j;
 	int ret;
 	int alloc_size = eb->len;
+	int np;
 
 	ebs = kmalloc(sizeof(*ebs) * multi->num_stripes, GFP_NOFS);
 	BUG_ON(!ebs);
@@ -1883,12 +1867,16 @@ int write_raid56_with_parity(struct btrfs_fs_info *info,
 	if (stripe_len > alloc_size)
 		alloc_size = stripe_len;
 
+	np = 0;
+	for (i = 0; i < RAID_PARITY_MAX; i++)
+		p_eb[i] = NULL;
+
 	split_eb_for_raid56(info, eb, ebs, stripe_len, raid_map,
 			    multi->num_stripes);
 
 	for (i = 0; i < multi->num_stripes; i++) {
 		struct extent_buffer *new_eb;
-		if (raid_map[i] < BTRFS_RAID5_P_STRIPE) {
+		if (raid_map[i] < BTRFS_RAID_PAR1_STRIPE) {
 			ebs[i]->dev_bytenr = multi->stripes[i].physical;
 			ebs[i]->fd = multi->stripes[i].dev->fd;
 			multi->stripes[i].dev->total_ios++;
@@ -1902,35 +1890,33 @@ int write_raid56_with_parity(struct btrfs_fs_info *info,
 		multi->stripes[i].dev->total_ios++;
 		new_eb->len = stripe_len;
 
-		if (raid_map[i] == BTRFS_RAID5_P_STRIPE)
-			p_eb = new_eb;
-		else if (raid_map[i] == BTRFS_RAID6_Q_STRIPE)
-			q_eb = new_eb;
+		/* parity index */
+		j = raid_map[i] - BTRFS_RAID_PAR1_STRIPE;
+
+		BUG_ON(j < 0 || j >= RAID_PARITY_MAX);
+
+		p_eb[j] = new_eb;
+
+		/* keep track of the number of parities used */
+		if (j + 1 > np)
+			np = j + 1;
 	}
-	if (q_eb) {
+
+	if (np != 0) {
 		void **pointers;
 
-		pointers = kmalloc(sizeof(*pointers) * multi->num_stripes,
-				   GFP_NOFS);
+		pointers = kmalloc(sizeof(*pointers) * multi->num_stripes, GFP_NOFS);
 		BUG_ON(!pointers);
 
-		ebs[multi->num_stripes - 2] = p_eb;
-		ebs[multi->num_stripes - 1] = q_eb;
+		for (i = 0; i < np; i++)
+			ebs[multi->num_stripes - np + i] = p_eb[i];
 
 		for (i = 0; i < multi->num_stripes; i++)
 			pointers[i] = ebs[i]->data;
 
-		raid6_gen_syndrome(multi->num_stripes, stripe_len, pointers);
+		raid_gen(multi->num_stripes - np, np, stripe_len, pointers);
+
 		kfree(pointers);
-	} else {
-		ebs[multi->num_stripes - 1] = p_eb;
-		memcpy(p_eb->data, ebs[0]->data, stripe_len);
-		for (j = 1; j < multi->num_stripes - 1; j++) {
-			for (i = 0; i < stripe_len; i += sizeof(unsigned long)) {
-				*(unsigned long *)(p_eb->data + i) ^=
-					*(unsigned long *)(ebs[j]->data + i);
-			}
-		}
 	}
 
 	for (i = 0; i < multi->num_stripes; i++) {
diff --git a/volumes.h b/volumes.h
index 2802cb0..0a73084 100644
--- a/volumes.h
+++ b/volumes.h
@@ -137,9 +137,15 @@ struct map_lookup {
 #define BTRFS_BALANCE_ARGS_CONVERT	(1ULL << 8)
 #define BTRFS_BALANCE_ARGS_SOFT		(1ULL << 9)
 
-#define BTRFS_RAID5_P_STRIPE ((u64)-2)
-#define BTRFS_RAID6_Q_STRIPE ((u64)-1)
-
+/*
+ * Parity stripe indexes.
+ */
+#define BTRFS_RAID_PAR1_STRIPE ((u64)-6)
+#define BTRFS_RAID_PAR2_STRIPE ((u64)-5)
+#define BTRFS_RAID_PAR3_STRIPE ((u64)-4)
+#define BTRFS_RAID_PAR4_STRIPE ((u64)-3)
+#define BTRFS_RAID_PAR5_STRIPE ((u64)-2)
+#define BTRFS_RAID_PAR6_STRIPE ((u64)-1)
 
 int __btrfs_map_block(struct btrfs_mapping_tree *map_tree, int rw,
 		      u64 logical, u64 *length, u64 *type,
-- 
1.7.12.1

