linux-raid.vger.kernel.org archive mirror
* mdadm 2.6 creates slow RAID 5 while mdadm 2.5.6 rocks
@ 2008-02-08 15:25 Hubert Verstraete
  2008-02-08 16:16 ` michael
  2008-02-26 17:24 ` mdadm 2.6 creates slow RAID 5 while mdadm 2.5.6 rocks Hubert Verstraete
  0 siblings, 2 replies; 6+ messages in thread
From: Hubert Verstraete @ 2008-02-08 15:25 UTC (permalink / raw)
  To: linux-raid

Hi All,

My RAID 5 array is running slow.
I've run a lot of tests to find out where this issue lies.
I've come to the conclusion that once the array is created with mdadm
2.6.x (up to 2.6.4), no matter which kernel you run and which mdadm
version you use to re-assemble the array, its performance is severely
degraded.

Could this be a bug in mdadm 2.6?
Is anyone else seeing this issue?

Here are the stats from bonnie:
2.6.18.8_mdadm_2.5.6,4G,,,38656,5,24171,6,,,182130,26,518.9,1,16,1033,3,+++++,+++,861,2,1224,3,+++++,+++,806,3
2.6.18.8_mdadm_2.6.4,4G,,,19191,2,15845,4,,,164907,26,491.9,1,16,697,2,+++++,+++,546,1,710,2,+++++,+++,465,2
2.6.22.6_mdadm_2.5.6,4G,,,49108,8,29441,7,,,174038,21,455.5,1,16,1351,4,+++++,+++,1073,3,1416,5,+++++,+++,696,4
2.6.22.6_mdadm_2.6.4,4G,,,18010,3,16763,4,,,185106,24,421.6,1,16,928,6,+++++,+++,659,3,871,7,+++++,+++,699,3
2.6.24-git17_mdadm_2.5.6,4G,,,126319,24,34342,4,,,79924,0,180.8,0,16,1566,5,+++++,+++,1459,3,1800,4,+++++,+++,1123,2
2.6.24-git17_mdadm_2.6.4,4G,,,24482,4,19717,3,,,79953,0,594.6,2,16,918,3,+++++,+++,715,2,907,3,+++++,+++,763,2
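
(For readers not used to the CSV above: assuming this is the standard
bonnie++ CSV layout, the numeric fields after the 4G size are sequential
per-character write, block write, rewrite, per-character read and block
read rates, each in K/sec and each followed by its %CPU, then random
seeks per second; the per-character columns are empty here, presumably
because the tests were run in fast mode. So 126319, for example, means
roughly 126 MB/s of sequential block writes.)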

Remarks on the results:
The read performance is not degraded by mdadm 2.6 (though it does drop
with the newer kernel, with both mdadm 2.5.6 and 2.6).
The write performance is hurt by mdadm 2.6, and the gap is widest on the
2.6.24 kernel: there, an array created with mdadm 2.5.6 writes about 5
times faster than one created with mdadm 2.6.4. Block write runs at
about 24 MB/s when the array is created with mdadm 2.6 and about
126 MB/s when created with mdadm 2.5.6!
Even when I use mdadm 2.5.6 to assemble an array created with mdadm 2.6,
the results are still bad.

The test environment:
4 disks
64K chunk
superblock 1.0 (same symptoms with 0.9)
XFS
no optimization

Hardware: tried on several computers with different CPUs, RAM and SATA
controllers...
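
For reference, an array matching these parameters could be created and
benchmarked with something like the following (a sketch, not necessarily
the exact commands used; device names and the mount point are
placeholders):

# 4-disk RAID 5, 64K chunk, v1.0 superblock, internal bitmap,
# partitionable device (matching the --detail output below)
mdadm --create /dev/md_d0 --auto=part --level=5 --raid-devices=4 \
      --chunk=64 --metadata=1.0 --bitmap=internal \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

# after partitioning the array: XFS with default options, then bonnie
mkfs.xfs /dev/md_d0p1
mount /dev/md_d0p1 /mnt
bonnie++ -d /mnt -s 4G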

More details on the conf:

/dev/md_d0:
         Version : 01.00.03
   Creation Time : Fri Feb  8 14:13:51 2008
      Raid Level : raid5
      Array Size : 732595200 (698.66 GiB 750.18 GB)
     Device Size : 488396800 (232.89 GiB 250.06 GB)
    Raid Devices : 4
   Total Devices : 4
Preferred Minor : 0
     Persistence : Superblock is persistent

   Intent Bitmap : Internal

     Update Time : Fri Feb  8 14:42:57 2008
           State : active
  Active Devices : 4
Working Devices : 4
  Failed Devices : 0
   Spare Devices : 0

          Layout : left-symmetric
      Chunk Size : 64K

            Name : localhost:d0  (local to host localhost)
            UUID : 93ffc9ae:b33311aa:445e7821:cc7487ec
          Events : 2

     Number   Major   Minor   RaidDevice State
        0       8        0        0      active sync   /dev/sda
        1       8       16        1      active sync   /dev/sdb
        2       8       32        2      active sync   /dev/sdc
        3       8       48        3      active sync   /dev/sdd

# xfs_info /mnt
meta-data=/dev/md_d0p1  isize=256    agcount=32, agsize=5723399 blks
          =              sectsz=512   attr=0
data     =              bsize=4096   blocks=183148768, imaxpct=25
          =              sunit=0      swidth=0 blks, unwritten=1
naming   =version 2     bsize=4096
log      =internal      bsize=4096   blocks=32768, version=1
          =              sectsz=512   sunit=0 blks
realtime =none          extsz=65536  blocks=0, rtextents=0
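
Note that sunit=0 and swidth=0 mean the filesystem was created without
stripe-alignment hints, which matches the "no optimization" above. For
comparison, telling XFS about the array geometry (64K chunk, 3 data
disks) would look roughly like this:

mkfs.xfs -d su=64k,sw=3 /dev/md_d0p1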

Thanks for the help.
Hubert


* Re: mdadm 2.6 creates slow RAID 5 while mdadm 2.5.6 rocks
  2008-02-08 15:25 mdadm 2.6 creates slow RAID 5 while mdadm 2.5.6 rocks Hubert Verstraete
@ 2008-02-08 16:16 ` michael
  2008-02-11  8:38   ` Hubert Verstraete
  2008-03-12 15:21   ` first partition on partitionable RAID-5 array Hubert Verstraete
  2008-02-26 17:24 ` mdadm 2.6 creates slow RAID 5 while mdadm 2.5.6 rocks Hubert Verstraete
  1 sibling, 2 replies; 6+ messages in thread
From: michael @ 2008-02-08 16:16 UTC (permalink / raw)
  To: linux-raid

Quoting Hubert Verstraete <hubskml@free.fr>:

> Hi All,
>
> My RAID 5 array is running slow.
> I've run a lot of tests to find out where this issue lies.
> I've come to the conclusion that once the array is created with mdadm
> 2.6.x (up to 2.6.4), no matter which kernel you run and which mdadm
> version you use to re-assemble the array, its performance is severely
> degraded.
>
> Could this be a bug in mdadm 2.6?
> Is anyone else seeing this issue?

I may have seen this before too.
What happens if you don't make the array partitionable?
Just create a plain /dev/mdX device, or, if you must use a partitionable
array, what do your benchmarks look like on the 2nd partition of the
array, say /dev/md_d0p2?
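
Something along these lines, just as a sketch (device names are
placeholders, everything else left at its defaults):

# plain, non-partitionable array
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

# partitionable array; then mkfs/mount /dev/md_d0p2 and benchmark it
mdadm --create /dev/md_d0 --auto=part --level=5 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd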

My symptoms were similar, in that any partitionable RAID 5 array would
be slower, but only on the first partition.
mdadm version 2.5.6
kernel 2.6.18

Mike


* Re: mdadm 2.6 creates slow RAID 5 while mdadm 2.5.6 rocks
  2008-02-08 16:16 ` michael
@ 2008-02-11  8:38   ` Hubert Verstraete
  2008-03-12 15:21   ` first partition on partitionable RAID-5 array Hubert Verstraete
  1 sibling, 0 replies; 6+ messages in thread
From: Hubert Verstraete @ 2008-02-11  8:38 UTC (permalink / raw)
  To: linux-raid

michael@estone.ca wrote:
> Quoting Hubert Verstraete:
>
>> Hi All,
>>
>> My RAID 5 array is running slow.
>> I've run a lot of tests to find out where this issue lies.
>> I've come to the conclusion that once the array is created with mdadm
>> 2.6.x (up to 2.6.4), no matter which kernel you run and which mdadm
>> version you use to re-assemble the array, its performance is severely
>> degraded.
>>
>> Could this be a bug in mdadm 2.6?
>> Is anyone else seeing this issue?
> I may have seen this before too.
> What happens if you don't make the array partitionable?
> Just create a plain /dev/mdX device, or, if you must use a partitionable
> array, what do your benchmarks look like on the 2nd partition of the
> array, say /dev/md_d0p2?
>
> My symptoms were similar, in that any partitionable RAID 5 array would
> be slower, but only on the first partition.
> mdadm version 2.5.6
> kernel 2.6.18
>
> Mike
Thanks for the idea.
I've tried with a non-partitionable array and with a 2nd partition, and
got the same damn slow write performance :(

I'm appending the two new tests to the bonnie results:
2.6.18.8_mdadm_2.5.6,4G,,,38656,5,24171,6,,,182130,26,518.9,1,16,1033,3,+++++,+++,861,2,1224,3,+++++,+++,806,3
2.6.18.8_mdadm_2.6.4,4G,,,19191,2,15845,4,,,164907,26,491.9,1,16,697,2,+++++,+++,546,1,710,2,+++++,+++,465,2
2.6.22.6_mdadm_2.5.6,4G,,,49108,8,29441,7,,,174038,21,455.5,1,16,1351,4,+++++,+++,1073,3,1416,5,+++++,+++,696,4
2.6.22.6_mdadm_2.6.4,4G,,,18010,3,16763,4,,,185106,24,421.6,1,16,928,6,+++++,+++,659,3,871,7,+++++,+++,699,3
2.6.24-git17_mdadm_2.5.6,4G,,,126319,24,34342,4,,,79924,0,180.8,0,16,1566,5,+++++,+++,1459,3,1800,4,+++++,+++,1123,2
2.6.24-git17_mdadm_2.6.4,4G,,,24482,4,19717,3,,,79953,0,594.6,2,16,918,3,+++++,+++,715,2,907,3,+++++,+++,763,2
2.6.24-git17_mdadm_2.6.4_partition_2,4G,,,24338,4,21351,4,,,170408,19,580.7,1,16,933,3,+++++,+++,889,3,895,3,+++++,+++,725,2
2.6.24-git17_mdadm_2.6.4_non_partitionable,4G,,,23798,4,20845,4,,,169994,19,627.7,1,16,1257,3,+++++,+++,1068,3,1180,4,+++++,+++,872,2

Nevertheless, in these two tests the read performance is back to what I
had with 2.6.22 and earlier. There might be a regression in 2.6.24 for
reads on the first partition of a partitionable array...

Hubert


* Re: mdadm 2.6 creates slow RAID 5 while mdadm 2.5.6 rocks
  2008-02-08 15:25 mdadm 2.6 creates slow RAID 5 while mdadm 2.5.6 rocks Hubert Verstraete
  2008-02-08 16:16 ` michael
@ 2008-02-26 17:24 ` Hubert Verstraete
  1 sibling, 0 replies; 6+ messages in thread
From: Hubert Verstraete @ 2008-02-26 17:24 UTC (permalink / raw)
  To: linux-raid

In case someone is interested, I'm replying to myself...

There was a change between mdadm 2.5 and mdadm 2.6 in how an array is
created with a v1.0 superblock and an internal bitmap.
In my configuration, the result is an internal bitmap that is much
bigger in 2.6 than in 2.5 (i.e. a much smaller bitmap chunk). And it
seems that a bigger internal bitmap slows down the write speed,
dramatically in my case.
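
For anyone who wants to check this on their own array: the bitmap stored
on a member device can be inspected, and a larger bitmap chunk can be
forced at creation time. A rough sketch (the 65536 KiB chunk value is
only an example, not a recommendation):

# show the internal bitmap of a member device (look at the chunk size)
mdadm -X /dev/sda

# recreate the array with an explicitly larger bitmap chunk
mdadm --create /dev/md_d0 --auto=part --level=5 --raid-devices=4 \
      --chunk=64 --metadata=1.0 --bitmap=internal --bitmap-chunk=65536 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd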

Regards,
Hubert

Hubert Verstraete wrote:
> Hi All,
> 
> My RAID 5 array is running slow.
> I've run a lot of tests to find out where this issue lies.
> I've come to the conclusion that once the array is created with mdadm
> 2.6.x (up to 2.6.4), no matter which kernel you run and which mdadm
> version you use to re-assemble the array, its performance is severely
> degraded.
> 
> Could this be a bug in mdadm 2.6?
> Is anyone else seeing this issue?
> 
> Here are the stats from bonnie:
> 2.6.18.8_mdadm_2.5.6,4G,,,38656,5,24171,6,,,182130,26,518.9,1,16,1033,3,+++++,+++,861,2,1224,3,+++++,+++,806,3
> 2.6.18.8_mdadm_2.6.4,4G,,,19191,2,15845,4,,,164907,26,491.9,1,16,697,2,+++++,+++,546,1,710,2,+++++,+++,465,2
> 2.6.22.6_mdadm_2.5.6,4G,,,49108,8,29441,7,,,174038,21,455.5,1,16,1351,4,+++++,+++,1073,3,1416,5,+++++,+++,696,4
> 2.6.22.6_mdadm_2.6.4,4G,,,18010,3,16763,4,,,185106,24,421.6,1,16,928,6,+++++,+++,659,3,871,7,+++++,+++,699,3
> 2.6.24-git17_mdadm_2.5.6,4G,,,126319,24,34342,4,,,79924,0,180.8,0,16,1566,5,+++++,+++,1459,3,1800,4,+++++,+++,1123,2
> 2.6.24-git17_mdadm_2.6.4,4G,,,24482,4,19717,3,,,79953,0,594.6,2,16,918,3,+++++,+++,715,2,907,3,+++++,+++,763,2
> 
> Remarks on the results:
> The read performance is not degraded by mdadm 2.6 (though it does drop
> with the newer kernel, with both mdadm 2.5.6 and 2.6).
> The write performance is hurt by mdadm 2.6, and the gap is widest on the
> 2.6.24 kernel: there, an array created with mdadm 2.5.6 writes about 5
> times faster than one created with mdadm 2.6.4. Block write runs at
> about 24 MB/s when the array is created with mdadm 2.6 and about
> 126 MB/s when created with mdadm 2.5.6!
> Even when I use mdadm 2.5.6 to assemble an array created with mdadm 2.6,
> the results are still bad.
> 
> The test environment:
> 4 disks
> 64K chunk
> superblock 1.0 (same symptoms with 0.9)
> XFS
> no optimization
> 
> Hardware: tried on several computers with different CPUs, RAM and SATA
> controllers...
> 
> More details on the conf:
> 
> /dev/md_d0:
>         Version : 01.00.03
>   Creation Time : Fri Feb  8 14:13:51 2008
>      Raid Level : raid5
>      Array Size : 732595200 (698.66 GiB 750.18 GB)
>     Device Size : 488396800 (232.89 GiB 250.06 GB)
>    Raid Devices : 4
>   Total Devices : 4
> Preferred Minor : 0
>     Persistence : Superblock is persistent
> 
>   Intent Bitmap : Internal
> 
>     Update Time : Fri Feb  8 14:42:57 2008
>           State : active
>  Active Devices : 4
> Working Devices : 4
>  Failed Devices : 0
>   Spare Devices : 0
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>            Name : localhost:d0  (local to host localhost)
>            UUID : 93ffc9ae:b33311aa:445e7821:cc7487ec
>          Events : 2
> 
>     Number   Major   Minor   RaidDevice State
>        0       8        0        0      active sync   /dev/sda
>        1       8       16        1      active sync   /dev/sdb
>        2       8       32        2      active sync   /dev/sdc
>        3       8       48        3      active sync   /dev/sdd
> 
> # xfs_info /mnt
> meta-data=/dev/md_d0p1  isize=256    agcount=32, agsize=5723399 blks
>          =              sectsz=512   attr=0
> data     =              bsize=4096   blocks=183148768, imaxpct=25
>          =              sunit=0      swidth=0 blks, unwritten=1
> naming   =version 2     bsize=4096
> log      =internal      bsize=4096   blocks=32768, version=1
>          =              sectsz=512   sunit=0 blks
> realtime =none          extsz=65536  blocks=0, rtextents=0
> 
> Thanks for the help.
> Hubert


* first partition on partitionable RAID-5 array
  2008-02-08 16:16 ` michael
  2008-02-11  8:38   ` Hubert Verstraete
@ 2008-03-12 15:21   ` Hubert Verstraete
  2008-03-12 20:02     ` Peter Grandi
  1 sibling, 1 reply; 6+ messages in thread
From: Hubert Verstraete @ 2008-03-12 15:21 UTC (permalink / raw)
  To: michael; +Cc: linux-raid

michael@estone.ca wrote:
> My symptoms were similar, in that any partitionable RAID 5 array would
> be slower, but only on the first partition.
> mdadm version 2.5.6
> kernel 2.6.18
>
> Mike

Regarding the slow write performance on the first partition of a
partitionable RAID-5 array: this is actually not specific to the first
partition. The issue happens whenever a partition does not start on a
cylinder boundary, and that is almost always the case for the first
partition, since it does not start on the first sector of the first
cylinder (some space is reserved for the partition table).
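
For what it's worth, the misalignment is easy to check: list the
partition start in 512-byte sectors and compare it with the chunk size
(a sketch, using the array from earlier in the thread):

# print partition start offsets in sectors
fdisk -lu /dev/md_d0

With a 64K chunk (128 sectors), the usual start at sector 63 is not
aligned to a chunk boundary, let alone to a full stripe.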

This symptom doesn't exist on RAID-1.

Does anyone know why?

Hubert


* Re: first partition on partitionable RAID-5 array
  2008-03-12 15:21   ` first partition on partitionable RAID-5 array Hubert Verstraete
@ 2008-03-12 20:02     ` Peter Grandi
  0 siblings, 0 replies; 6+ messages in thread
From: Peter Grandi @ 2008-03-12 20:02 UTC (permalink / raw)
  To: Linux RAID

>>> On Wed, 12 Mar 2008 16:21:27 +0100, Hubert Verstraete
>>> <hubskml@free.fr> said:

[ ... ]

hubskml> Regarding the slow write performance on the first
hubskml> partition of a partitionable RAID-5 array, actually
hubskml> this is not linked with the first partition. This issue
hubskml> happens when the partition does not start on a cylinder,

This is a fairly basic issue with parity RAID, and has nothing
to do (at least directly) with cylinders or with first
partitions. But it also manifests itself with partitions that
begin on cylinder 1 (often but not necessarily the first)
because of the 63-sector offset, and of course with logical
partitions too (same reason).

The detailed reasons (RMW and alignment) have been discussed
recently, with graphic examples, in a couple of other threads.
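
To spell out the arithmetic for the setup earlier in this thread (64K
chunk, 4 disks): a full stripe holds 3 x 64K = 192K of data, i.e.
384 sectors. A partition starting at the traditional sector 63 is
aligned to neither chunk nor stripe, so writes that could have been
cheap full-stripe writes straddle stripe boundaries, and the partial
stripe at each end has to be completed with a read-modify-write of old
data and parity.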

hubskml> This symptom doesn't exist on RAID-1.
hubskml> Does anyone know why ?

Those who have been reading the Linux RAID mailing list know why
:-).


