linux-ide.vger.kernel.org archive mirror
* Setting up md-raid5: observations, errors, questions
@ 2008-03-02 12:23 Christian Pernegger
  2008-03-02 12:41 ` Justin Piszcz
  2008-03-02 15:20 ` Michael Tokarev
  0 siblings, 2 replies; 19+ messages in thread
From: Christian Pernegger @ 2008-03-02 12:23 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-ide

Hi all!

I'm not doing this for the first time but there were a few interesting
/ worrying points during the setup process and I'd rather clear those
up now.

Hardware:
Tyan Thunder K8W (S2885)
Dual Opteron 254, 2GB (2x2x512MB) RAM, 2x Promise SATA II TX4, Adaptec 29160
4x WD RE2-GP 1TB on the Promise (for raid5)
1x Maxtor Atlas 15K II on the Adaptec (system disk)

OS:
Debian testing-amd64
linux-2.6.22-3
mke2fs 1.40.6
mdadm

I did a badblocks -v -s -t random -w on all future RAID disks in
parallel to test / burn-in.
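
(A sketch of the parallel run, assuming the four future members were sdb-sde:)

for d in sdb sdc sdd sde; do
    badblocks -v -s -t random -w /dev/$d &
done
wait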
Result: no bad blocks, decent speed (vmstat: 148MB/s total read,
220MB/s total write), BUT the following error (once):

ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2
ata2.00: (port_status 0x20080000)
ata2.00: cmd 25/00:80:80:a1:55/00:00:3a:00:00/e0 tag 0 cdb 0x0 data 65536 in
         res 50/00:00:ff:a1:55/00:00:3a:00:00/e0 Emask 0x2 (HSM violation)
ata2: soft resetting port
ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata2.00: configured for UDMA/133
ata2: EH complete
sd 2:0:0:0: [sdc] 1953525168 512-byte hardware sectors (1000205 MB)
sd 2:0:0:0: [sdc] Write Protect is off
sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't
support DPO or FUA

All four disks were beyond the first (random write) phase, but that's
all I can say as I had it running overnight. The "HSM violation" error
is all over Google but I couldn't find anything conclusive (= that I
could understand).

Ignoring the error I went on to create the array:

[exact command unavailable, see below. RAID5, 4 disks, 1024K chunk
size, internal bitmap, V1 superblock]

Went fine, only mdadm segfaulted:

mdadm[3295]: segfault at 0000000000000000 rip 0000000000412d2c rsp
00007fff9f31b5d0 error 4

This did only show up in dmesg so I'm not sure exactly when. Either
right after the create or after a first attempt at a create where I
had used -N instead of --name, which it didn't like (error message).
Recreated the array, just to be sure (same command as above).

Tried creating a filesystem:

mke2fs -E stride=256 -j -L tb-storage -m1 -T largefile4 /dev/md0
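
(With 4k filesystem blocks, stride = chunk / block = 1024k / 4k = 256, so the
stride above matches the 1024k chunk.)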

That was glacially slow, "writing inode tables" went up about 3-4/sec
(22357 total). Since I had forgotten the crypto layer anyway I
CTRL-Ced that attempt and added it:

[exact command unavailable, see below. Used 2048 (512-byte sectors) for
LUKS payload alignment, which should land it on chunk boundaries]

OK. Back to the fs again, same command, different device. Still
glacially slow (and still running), only now the whole box is at a
standstill, too. cat /proc/cpuinfo takes about 3 minutes (!) to
complete, I'm still waiting for top to launch (15min and counting).
I'll leave mke2fs running for now ...


So, is all this normal? What did I do wrong, what can I do better?


Cheers,

C.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-02 12:23 Setting up md-raid5: observations, errors, questions Christian Pernegger
@ 2008-03-02 12:41 ` Justin Piszcz
  2008-03-02 12:56   ` Christian Pernegger
  2008-03-02 15:20 ` Michael Tokarev
  1 sibling, 1 reply; 19+ messages in thread
From: Justin Piszcz @ 2008-03-02 12:41 UTC (permalink / raw)
  To: Christian Pernegger; +Cc: linux-raid, linux-ide



On Sun, 2 Mar 2008, Christian Pernegger wrote:

> Hi all!
>
> I'm not doing this for the first time but there were a few interesting
> / worrying points during the setup process and I'd rather clear those
> up now.
>
> Hardware:
> Tyan Thunder K8W (S2885)
> Dual Opteron 254, 2GB (2x2x512MB) RAM, 2x Promise SATA II TX4, Adaptec 29160
> 4x WD RE2-GP 1TB on the Promise (for raid5)
> 1x Maxtor Atlas 15K II on the Adaptec (system disk)
>
> OS:
> Debian testing-amd64
> linux-2.6.22-3
> mke2fs 1.40.6
> mdadm
>
> I did a badblocks -v -s -t random -w on all future RAID disks in
> parallel to test / burn-in.
> Result: no bad blocks, decent speed (vmstat: 148MB/s total read,
> 220MB/s total write), BUT the following error (once):
>
> ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x2
> ata2.00: (port_status 0x20080000)
> ata2.00: cmd 25/00:80:80:a1:55/00:00:3a:00:00/e0 tag 0 cdb 0x0 data 65536 in
>         res 50/00:00:ff:a1:55/00:00:3a:00:00/e0 Emask 0x2 (HSM violation)
> ata2: soft resetting port
> ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
> ata2.00: configured for UDMA/133
> ata2: EH complete
> sd 2:0:0:0: [sdc] 1953525168 512-byte hardware sectors (1000205 MB)
> sd 2:0:0:0: [sdc] Write Protect is off
> sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00
> sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't
> support DPO or FUA

Disable NCQ and your problem will go away.

echo 1 > /sys/block/$i/device/queue_depth
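
Applied to all four array members (a sketch, assuming they are sdb-sde):

for i in sdb sdc sdd sde; do
    echo 1 > /sys/block/$i/device/queue_depth
done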

Justin.



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-02 12:41 ` Justin Piszcz
@ 2008-03-02 12:56   ` Christian Pernegger
  2008-03-02 13:03     ` Justin Piszcz
  0 siblings, 1 reply; 19+ messages in thread
From: Christian Pernegger @ 2008-03-02 12:56 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-ide

>  Disable NCQ and your problem will go away.

Thank you. Just out of interest - which problem and why?

>  echo 1 > /sys/block/$i/device/queue_depth

I get access denied even as root. FWIW the value is at 1 for the 4
disks in the raid anyway. The SCSI disk has value 8, which is probably
irrelevant.

For completeness' sake:

mdadm version: mdadm - v2.6.4 - 19th October 2007
raid setup command: mdadm --create /dev/md0 --verbose --metadata=1.0
--homehost=jesus -n4 -c1024 -l5 --bitmap=internal --name tb-storage
-ayes /dev/sd[bcde]
cryptsetup command: cryptsetup luksFormat /dev/md0 --align-payload=2048
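
(--align-payload counts 512-byte sectors, so 2048 x 512 B = 1 MiB, i.e. exactly
one 1024k chunk.)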

Any other suggestions on any of the issues?

Thanks,

C.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-02 12:56   ` Christian Pernegger
@ 2008-03-02 13:03     ` Justin Piszcz
  2008-03-02 13:24       ` Christian Pernegger
  0 siblings, 1 reply; 19+ messages in thread
From: Justin Piszcz @ 2008-03-02 13:03 UTC (permalink / raw)
  To: Christian Pernegger; +Cc: linux-raid, linux-ide



On Sun, 2 Mar 2008, Christian Pernegger wrote:

>>  Disable NCQ and your problem will go away.
>
> Thank you. Just out of interest - which problem and why?
>
>>  echo 1 > /sys/block/$i/device/queue_depth
>
> I get access denied even as root. FWIW the value is at 1 for the 4
> disks in the raid anyway. The SCSI disk has value 8, which is probably
> irrelevant.
>
> For completeness' sake:
>
> mdadm version: mdadm - v2.6.4 - 19th October 2007
> raid setup command: mdadm --create /dev/md0 --verbose --metadata=1.0
> --homehost=jesus -n4 -c1024 -l5 --bitmap=internal --name tb-storage
> -ayes /dev/sd[bcde]
> cryptsetup command: cryptsetup luksFormat /dev/md0 --align-payload=2048
>
> Any other suggestions on any of the issues?
>
> Thanks,

You could try to change cables and such but you've already cc'd linux-ide,
AFAIK it can/could be a chipset-related issue and the guys who work on NCQ
etc. were working on the problem, last I heard.

You should try to narrow down the problem: is it always the same drive
that has the problem?  Does it occur if you do a check on the
RAID 5 array or only when building?  etc.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-02 13:03     ` Justin Piszcz
@ 2008-03-02 13:24       ` Christian Pernegger
  2008-03-03 17:59         ` Bill Davidsen
  0 siblings, 1 reply; 19+ messages in thread
From: Christian Pernegger @ 2008-03-02 13:24 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-ide

>  You could try to change cables and such but you've already cc'd linux-ide,
>  AFAIK it can/could be a chipset-related issue and the guys who work on NCQ
>  etc. were working on the problem, last I heard.
>
>  You should try to narrow down the problem: is it always the same drive
>  that has the problem?  Does it occur if you do a check on the
>  RAID 5 array or only when building?  etc.

Assuming you're talking about the HSM violation error ... I got that
exactly once (so far) in the situation described (sometime after the
writing phase of badblocks). Nothing since, and certainly not the spew
others are reporting.

My primary concern now is the second problem - seems like any access
to the raid array makes the machine unusable. I can type/edit a
command at the bash prompt normally but as soon as I hit enter it just
hangs there. If the command is trivial, say cat, I might get a result
a few minutes later.

bonnie++ results are in:

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
1024k            4G           16282   5 13359   3           57373   7 149.0   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++

Something is definitely not right ... :-(

As in, an old array of 300GB IDE Maxtors has 4x the seq writes, bus
capped (105MB/s, plain PCI) reads and 3x the IOs. And it doesn't block
the machine. Granted, there's the crypto but on my other (non-raid)
boxes the performance impact just isn't there.

Any help appreciated, as it is the box has expensive-paperweight status.

C.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-02 12:23 Setting up md-raid5: observations, errors, questions Christian Pernegger
  2008-03-02 12:41 ` Justin Piszcz
@ 2008-03-02 15:20 ` Michael Tokarev
  2008-03-02 16:32   ` Christian Pernegger
  1 sibling, 1 reply; 19+ messages in thread
From: Michael Tokarev @ 2008-03-02 15:20 UTC (permalink / raw)
  To: Christian Pernegger; +Cc: linux-raid, linux-ide

Christian Pernegger wrote:
> Hi all!

Hello.

> Hardware:
> Tyan Thunder K8W (S2885)
> Dual Opteron 254, 2GB (2x2x512MB) RAM, 2x Promise SATA II TX4, Adaptec 29160
> 4x WD RE2-GP 1TB on the Promise (for raid5)

[]
> [exact command unavailable, see below. RAID5, 4 disks, 1024K chunk
> size, internal bitmap, V1 superblock]

ok.

> Tried creating a filesystem:
> 
> mke2fs -E stride=256 -j -L tb-storage -m1 -T largefile4 /dev/md0
> 
> That was glacially slow, "writing inode tables" went up about 3-4/sec
> (22357 total). Since I had forgotten the crypto layer anyway I
> CTRL-Ced that attempt and added it:
> 
> [exact command unavailable, see below. Used 2048 (512-byte sectors) for
> LUKS payload alignment, which should land it on chunk boundaries]
> 
> OK. Back to the fs again, same command, different device. Still
> glacially slow (and still running), only now the whole box is at a
> standstill, too. cat /proc/cpuinfo takes about 3 minutes (!) to
> complete, I'm still waiting for top to launch (15min and counting).
> I'll leave mke2fs running for now ...

What's the state of your array at this point - is it resyncing?

> So, is all this normal? What did I do wrong, what can I do better?

In order to debug, try the following:

  o how about making filesystem(s) on individual disks first, to see
    how that will work out?  Maybe on each of them in parallel? :)

  o try --assume-clean when creating the array - this will omit the resync
    and the array will be "clean" from the beginning (which is not bad
    when you will run mkfs on it anyway).  And try creating the filesystem
    on a clean (non-resyncing) array.  (See the sketch after this list.)

  o don't use the crypto layer yet - it seems it does not hurt, it's just
    additional complexity

  o look at your interrupts (/proc/interrupts, or just run vmstat with
    some interval while the system working hard) - I bet your promises
    are taking all system time in irq context....
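
(A sketch of the --assume-clean create, mirroring the original parameters
minus the bitmap:)

mdadm --create /dev/md0 -l5 -n4 -c1024 --assume-clean /dev/sd[bcde]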

/mjt

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-02 15:20 ` Michael Tokarev
@ 2008-03-02 16:32   ` Christian Pernegger
  2008-03-02 18:33     ` Michael Tokarev
  2008-03-02 18:53     ` Christian Pernegger
  0 siblings, 2 replies; 19+ messages in thread
From: Christian Pernegger @ 2008-03-02 16:32 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-ide

>  > OK. Back to the fs again, same command, different device. Still
>  > glacially slow (and still running), only now the whole box is at a
>  > standstill, too. cat /proc/cpuinfo takes about 3 minutes (!) to
>  > complete, I'm still waiting for top to launch (15min and counting).
>  > I'll leave mke2fs running for now ...
>
>  What's the state of your array at this point - is it resyncing?

Yes. Didn't think it would matter (much). Never did before.

>   o how about making filesystem(s) on individual disks first, to see
>     how that will work out?  Maybe on each of them in parallel? :)

Running. System is perfectly responsive during 4x mke2fs -j -q on raw devices.
Done. Upper bound for duration is 8 minutes (probably much lower,
forgot to let it beep on completion), which is much better than the 2
hours with the syncing RAID.


chris@jesus:~$ cat /proc/interrupts
           CPU0       CPU1
  0:       4939    1920632   IO-APIC-edge      timer
  1:        113        133   IO-APIC-edge      i8042
  6:          0          3   IO-APIC-edge      floppy
  7:          0          0   IO-APIC-edge      parport0
  8:          0          1   IO-APIC-edge      rtc
  9:          0          0   IO-APIC-fasteoi   acpi
 12:          0          4   IO-APIC-edge      i8042
 14:          0        182   IO-APIC-edge      ide0
 19:         87         34   IO-APIC-fasteoi   ohci_hcd:usb1,
ohci_hcd:usb2, firewire_ohci
 24:      10142         57   IO-APIC-fasteoi   eth0
 26:    1041479        267   IO-APIC-fasteoi   sata_promise
 27:          0          0   IO-APIC-fasteoi   sata_promise
 28:       7141       2789   IO-APIC-fasteoi   aic7xxx
NMI:          0          0
LOC:    1925715    1925691
ERR:          0


chris@jesus:~$ vmstat 2
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 3  4      0  12716 1770264   9972    0    0  1034  4902  196  883  0  9 78 13
 0  4      0  11404 1771260   9972    0    0     0 150156  544  874  0 26 28 46
 0  4      0  11836 1771016  10024    0    0     0 147500  544  682  0 26 26 48
 0  4      0  12572 1770036  10108    0    0     0 131022  515  506  0 25 13 62
 0  4      0  12864 1769688  10000    0    0     0 146822  539  809  0 26 23 51
 0  4      0  12132 1769988   9956    0    0     0 145942  536  900  0 26 15 59
 0  4      0  12520 1770324   9976    0    0     0 144638  536  820  0 26 32 42


top - 17:08:55 up  2:12,  2 users,  load average: 4.37, 3.13, 1.49
Tasks:  78 total,   1 running,  77 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.3%us,  8.1%sy,  0.0%ni, 41.6%id, 46.0%wa,  0.7%hi,  2.3%si,  0.0%st
Mem:   2063940k total,  2051316k used,    12624k free,  1746616k buffers


I hope you can interpret that :)

>   o try --assume-clean when creating the array

mke2fs (same command as in the first post) now running on a fresh
--assume-clean array w/o crypto. System is only marginally less
responsive than under idle load, if at all.
But inode table writing speed is only about 8-10/second. For the
single disk case I couldn't read the numbers fast enough.

chris@jesus:~$ cat /proc/interrupts
           CPU0       CPU1
  0:       7485    2227196   IO-APIC-edge      timer
  1:        113        133   IO-APIC-edge      i8042
  6:          0          3   IO-APIC-edge      floppy
  7:          0          0   IO-APIC-edge      parport0
  8:          0          1   IO-APIC-edge      rtc
  9:          0          0   IO-APIC-fasteoi   acpi
 12:          0          4   IO-APIC-edge      i8042
 14:          0        182   IO-APIC-edge      ide0
 19:        101         39   IO-APIC-fasteoi   ohci_hcd:usb1,
ohci_hcd:usb2, firewire_ohci
 24:      15656         57   IO-APIC-fasteoi   eth0
 26:    1211165        267   IO-APIC-fasteoi   sata_promise
 27:          0          0   IO-APIC-fasteoi   sata_promise
 28:       7892       2938   IO-APIC-fasteoi   aic7xxx
NMI:          0          0
LOC:    2234843    2234819
ERR:          0


chris@jesus:~$ vmstat 2
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  1      0  12996 1811456  10860    0    0   912  4824  194  867  0  9 78 13
 0  0      0  11532 1812992  10832    0    0     0 12924  532 4992  0 11 61 28
 0  1      0  11092 1813376  10804    0    0     0 13316  535 5201  0  9 51 40
 0  0      0  12968 1811584  10832    0    0     0 12570  518 4890  0  9 58 32
 0  1      0  11724 1812736  10816    0    0     0 12818  508 5337  0 10 52 38
 0  0      0  12780 1811712  10804    0    0     0 13994  546 5055  0  9 52 40


top - 17:26:37 up  2:29,  2 users,  load average: 2.89, 2.12, 1.42
Tasks:  75 total,   2 running,  73 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us, 10.1%sy,  0.0%ni, 55.6%id, 33.7%wa,  0.2%hi,  0.3%si,  0.0%st
Mem:   2063940k total,  2052148k used,    11792k free,  1812288k buffers

From vmstat I gather that total write throughput is an order of
magnitude slower than on the 4 raw disks in parallel. Naturally the
mke2fs on the raid isn't parallelized but it should still be
sequential enough to get the max for a single disk (~60-40MB/s),
right?

Thanks for helping.

C.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-02 16:32   ` Christian Pernegger
@ 2008-03-02 18:33     ` Michael Tokarev
  2008-03-02 21:19       ` Christian Pernegger
  2008-03-02 18:53     ` Christian Pernegger
  1 sibling, 1 reply; 19+ messages in thread
From: Michael Tokarev @ 2008-03-02 18:33 UTC (permalink / raw)
  To: Christian Pernegger; +Cc: linux-raid, linux-ide

Christian Pernegger wrote:
>>  > OK. Back to the fs again, same command, different device. Still
>>  > glacially slow (and still running), only now the whole box is at a
>>  > standstill, too. cat /proc/cpuinfo takes about 3 minutes (!) to
>>  > complete, I'm still waiting for top to launch (15min and counting).
>>  > I'll leave mke2fs running for now ...
>>
>>  What's the state of your array at this point - is it resyncing?
> 
> Yes. Didn't think it would matter (much). Never did before.

It does.  If everything works ok, it should not, but it's not your
case ;)

>>   o how about making filesystem(s) on individual disks first, to see
>>     how that will work out?  Maybe on each of them in parallel? :)
> 
> Running. System is perfectly responsive during 4x mke2fs -j -q on raw devices.
> Done. Upper bound for duration is 8 minutes (probably much lower,
> forgot to let it beep on completion), which is much better than the 2
> hours with the syncing RAID.

Aha.  Excellent.

>  26:    1041479        267   IO-APIC-fasteoi   sata_promise
>  27:          0          0   IO-APIC-fasteoi   sata_promise

> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  0  4      0  12864 1769688  10000    0    0     0 146822  539  809  0 26 23 51

Ok.  146MB/sec.

> Cpu(s):  1.3%us,  8.1%sy,  0.0%ni, 41.6%id, 46.0%wa,  0.7%hi,  2.3%si,  0.0%st

46.0% waiting

> I hope you can interpret that :)

Some ;)

>>   o try --assume-clean when creating the array
> 
> mke2fs (same command as in the first post) now running on a fresh
> --assume-clean array w/o crypto. System is only marginally less
> responsive than under idle load, if at all.

So the responsiveness problem is solved here, right?  I mean, if
there's no resync going on (the case with --assume-clean), the rest
of the system works as expected, right?

> But inode table writing speed is only about 8-10/second. For the
> single disk case I couldn't read the numbers fast enough.

Note that mkfs now has to do 3x more work, too - since the device
is 3x (for 4-drive raid5) larger.

> chris@jesus:~$ cat /proc/interrupts
>  26:    1211165        267   IO-APIC-fasteoi   sata_promise
>  27:          0          0   IO-APIC-fasteoi   sata_promise
> 
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  0  1      0  11092 1813376  10804    0    0     0 13316  535 5201  0  9 51 40

That's 10 times slower than in the case of 4 individual disks.

> Cpu(s):  0.0%us, 10.1%sy,  0.0%ni, 55.6%id, 33.7%wa,  0.2%hi,  0.3%si,  0.0%st

and only 33.7% waiting, which is probably due to the lack of
parallelism.

> From vmstat I gather that total write throughput is an order of
> magnitude slower than on the 4 raw disks in parallel. Naturally the
> mke2fs on the raid isn't parallelized but it should still be
> sequential enough to get the max for a single disk (~60-40MB/s),
> right?

Well, not really.  Mkfs is doing many small writes all over the
place, so each one is a seek+write.  And it's synchronous - no next write
gets submitted till the current one completes.

Ok.  For now I don't see a problem (other than that there IS a problem
somewhere - obviously).  Interrupts are ok.  System time (10.1%) in the
second case doesn't look right, but it was 8.1% before...

Only 2 guesses left.  And I really mean "guesses", because I can't
say definitely what's going on anyway.

First, try to disable bitmaps on the raid array, and see if it makes
any difference.  For some reason I think it will... ;)

And second, the whole thing looks pretty much like a more general
problem discussed here and elsewhere in the last few days.  I mean handling
of parallel reads and writes - when a single write may stall reads
for quite some time and vice versa.  I see it every day on disks
without NCQ/TCQ - the system is mostly single-tasking, sorta like
ol'good MS-DOG :)  Good TCQ-enabled drives survive very high load
while the system is still more-or-less responsive (and I forget when
I last saw a "bad" TCQ-enabled drive - even a 10 y/o 4Gb Seagate has
excellent TCQ support ;).  And all modern SATA stuff works pretty
much like old IDE drives, which were designed "for personal use",
or "single-task only" -- even ones that CLAIM to support NCQ in
reality do not....  But that's a long story, and your disks
and/or controllers (or the combination) don't even support NCQ...

/mjt

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-02 16:32   ` Christian Pernegger
  2008-03-02 18:33     ` Michael Tokarev
@ 2008-03-02 18:53     ` Christian Pernegger
  1 sibling, 0 replies; 19+ messages in thread
From: Christian Pernegger @ 2008-03-02 18:53 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-ide

Some more data, captured with "vmstat 2 10" during "mke2fs -j -E
stride=16" (to match default chunk size of 64k). The disks are still
WD RE2-GP 1TB.


single disk (no stride):

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  2      0 1391888 546344  12784    0    0   527  3795  187  700  0  6 85  9
 0  3      0 1260596 673608  12860    0    0     0 64816  402  136  0  7  0 93
 0  3      0 1130200 800808  12836    0    0     0 56576  405  135  0  7  0 93
 0  3      0 1007040 919952  12796    0    0     0 67108  405  149  0  7  0 93
 0  3      0 892572 1030672  12852    0    0     0 54528  398  129  0  8  0 92
 0  3      0 753968 1165840  12792    0    0     0 61696  404  145  0 12  0 88
 0  3      0 631500 1284656  12788    0    0     0 61184  403  136  0 10  0 90
 0  3      0 500448 1411856  12868    0    0     0 65536  404  139  0 10  0 90
 0  3      0 382016 1526736  12860    0    0     0 59392  400  132  0  9  0 91
 0  3      0 251276 1653840  12792    0    0     0 58880  403  138  0 11  0 89


RAID1 (2 disks, no stride):

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0      0 1346104 586692  11716    0    0   524  3914  187  698  0  6 84 10
 0  0      0 1236228 697284  11776    0    0     0 41452  568 2932  0 14 86  0
 0  0      0 1130244 799684  11768    0    0     0 57518  670 2164  0 13 86  0
 0  0      0 1013020 914200  11752    0    0     0 51870  637 1572  0 14 86  0
 1  0      0 899232 1024972  11720    0    0     0 55504  632 2164  0 15 85  0
 1  0      0 788188 1132912  11728    0    0     0 52908  643 1839  0 16 83  0
 0  0      0 785120 1135564  11768    0    0     0 49980  660 2351  0 13 88  0
 2  0      0 667624 1250252  11768    0    0     0 50028  671 2304  0 17 83  0
 0  0      0 549556 1364940  11768    0    0     0 48186  651 2060  0 17 83  0
 0  0      0 427292 1483724  11768    0    0     0 48568  711 3367  0 18 82  0

[progress of "writing inode tables" pauses regularly then increases in a burst]


RAID0 (2 disks):

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  1      0 1333272 566348  10716    0    0   515  4452  188  708  0  6 84 10
 0  2      0 1084244 808332  10632    0    0     0 123904  547  420  0 17 49 33
 1  1      0 847580 1039004  10720    0    0     0 113228  539  498  0 18 50 32
 1  1      0 603576 1276012  10724    0    0     0 119416  549  505  0 20 50 30
 0  2      0 366996 1505836  10692    0    0     0 120636  544  499  0 19 50 31
 1  1      0 113540 1751948  10700    0    0     0 116764  549  516  0 21 50 29
 0  2      0  12820 1849320  10092    0    0     0 122852  544  637  0 21 50 29
 0  2      0  11544 1850664  10160    0    0     0 120832  549  760  0 22 49 29
 1  1      0  11892 1850312   9980    0    0     0 117996  539  732  0 22 48 30
 0  2      0  12312 1849960   9980    0    0     0 107284  520  700  0 20 48 32


RAID1 (4 disks, no stride)

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0      0 1701472 240556  11600    0    0   512  4653  189  706  0  6 84 10
 0  0      0 1487172 453548  11580    0    0     0 26432  705 8308  0 15 85  0
 2  0      0 1487804 453548  11580    0    0     0 28214 1122 2917  0  9 91  0
 1  3      0 1309804 609292  11544    0    0     4 72986 1019 2111  0 21 78  1
 3  0      0 1279008 626284  11584    0    0     0 63262  551  236  0 13 38 49
 0  1      0 1294940 626284  11584    0    0     0     0  549 8816  0  8 49 43
 0  0      0 1098588 831088  11596    0    0     0  6752  586 14067  0 13 78  8
 0  0      0 1098672 831088  11584    0    0     0 33944  772 1183  0  9 91  0
 0  0      0 981492 945776  11584    0    0     0 32974  841 4643  0 15 85  0
 0  0      0 981436 945776  11584    0    0     0 30546 1120 2474  0 11 89  0

[extremely bursty, can't data be written to the disks in parallel?]


RAID0 (4 disks):

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  2      0 945164 866516  11620    0    0   507  4675  190  707  0  6 84 10
 0  1      0 633716 1169620  11528    0    0     0 151552  623  734  0 23 50 27
 1  0      0 324452 1470016  11540    0    0     0 149504  622  717  0 24 50 26
 1  0      0  14644 1771024  11540    0    0     0 149522  622  689  0 25 50 25
 1  0      0  11948 1773160  11044    0    0     0 151552  621  992  0 28 48 24
 1  1      0  12788 1772420  11156    0    0     0 151552  623  985  0 28 48 23
 0  1      0  11952 1773060  11088    0    0     0 151552  622 1004  0 27 48 25
 1  0      0  12744 1772220  11172    0    0     0 149504  620 1000  0 27 48 25
 0  1      0  11888 1773192  11172    0    0     0 151552  622  967  0 28 47 25
 0  1      0  12860 1773000  10268    0    0     0 151560  624  994  0 29 48 23

[there seems to be a cap when writing @~150MB/s, 4 single disks in
parallel yield the same value. It's not the bus so it's probably the
controller. Anyway I can live with that.]


RAID5 (4 disks, syncing in b/g):

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0      0 1223568 636836  11340    0    0   501  4748  190  702  0  6 84 10
 0  0      0 1067636 788388  11372    0    0     0 63074 1639 19766  0 32 45 23
 3  0      0 945316 907172  11276    0    0     0 63294 1684 20441  0 31 47 22
 1  1      0 852584 997292  11340    0    0     0 57012 1651 15925  0 27 54 18
 2  1      0 717548 1128364  11340    0    0     0 61824 1659 20125  0 31 46 23
 0  0      0 586852 1255340  11340    0    0     0 60608 1643 14772  0 29 49 22
 2  1      0 447692 1390508  11368    0    0     0 61400 1703 18710  0 31 43 26
 3  0      0 333892 1501100  11340    0    0     0 64998 1769 20846  0 33 45 23
 3  0      0 190696 1640364  11336    0    0     0 60992 1683 18032  0 32 48 20
 0  1      0 110568 1718188  11340    0    0     0 59970 1651 13064  0 25 57 18

[burstier than RAID0 or the single disk but a lot smoother than RAID1.
Keep in mind that it is syncing in parallel. NO responsiveness
problem.]


RAID5 (4 disks, --assume-clean):

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 5  2      0  11332 1814052  10536    0    0   472  4739  214  819  0  7 84  9
 2  1      0  12304 1812828  10540    0    0     0 73586 1562 23273  0 38 41 20
 0  0      0  13004 1812188  10584    0    0     0 69642 1649 19816  0 34 44 22
 0  1      0  12188 1813084  10580    0    0     0 72452 1675 20730  0 37 42 21
 2  0      0  11784 1813596  10540    0    0     0 74662 1776 20616  0 37 42 21
 0  0      0  12348 1812956  10548    0    0     0 69546 1578 19984  0 32 47 21
 2  1      0  11416 1813724  10608    0    0     0 71092 1712 20723  0 37 41 22
 1  1      0  12496 1812880  10624    0    0     0 71368 1608 22813  0 38 42 20
 2  0      0  11436 1813852  10628    0    0     0 74796 1727 22632  0 38 40 22
 0  1      0  12552 1812572  10564    0    0     0 70248 1656 12608  0 33 48 19



Aside from the fact that RAID1 writes are somewhat erratic, these
values seem ok to me. I have no idea how fast a degraded 3/4 disk
RAID5 array should be but it's still faster than a single disk. No
responsiveness problems in any test. Is it possible that the Promise
doesn't like the requests generated by the 1M chunk size used for the
original setup? dmesg is silent so I guess I'll be doing chunksize
tests this evening.

Thanks,

C.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-02 18:33     ` Michael Tokarev
@ 2008-03-02 21:19       ` Christian Pernegger
  2008-03-02 21:56         ` Michael Tokarev
  0 siblings, 1 reply; 19+ messages in thread
From: Christian Pernegger @ 2008-03-02 21:19 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-ide

>  So the responsiveness problem is solved here, right?

It is? I'm not sure yet.

>  I mean, if there's no resync going on (the case with --assume-clean), the rest of the system works as expected, right?

Yes, but the array itself is still dog slow and resync shouldn't have
that much impact on performance. What's more

mdadm --create /dev/md0 -l5 -n4 /dev/sd[bcde] -e 1.0

leaves room for tuning but is basically fine, whereas the original case

mdadm --create /dev/md0 --verbose --metadata=1.0 --homehost=jesus -n4
-c1024 -l5 --bitmap=internal --name tb-storage -ayes /dev/sd[bcde]

is all but unusable, which leaves two prime suspects:

- the bitmap
- the chunk size

Could it be that some cache or other somewhere in the I/O stack
(probably the controller itself) is too small for the 1MB chunks and
the disks are forced to work serially? The Promise has no RAM of
course but maybe it does have small send / receive buffers.
On the host side the I/O schedulers are set to cfq which is said to
play well with md-raid but I can experiment with that as well.
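
(A sketch for switching a member disk's scheduler at runtime, one disk shown;
repeat per sd[b-e]:)

echo deadline > /sys/block/sdb/queue/scheduler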

>  Note that mkfs now has to do 3x more work, too - since the device is 3x (for 4-drive raid5) larger.

Yes, but that just means there are more inode tables to write. It takes
longer, but the speed shouldn't change much.

>  Ok.  For now I don't see a problem (other than that there IS a problem
>  somewhere - obviously).  Interrupts are ok.  System time (10.1%) in the
>  second case doesn't look right, but it was 8.1% before...

Too high? Too low?

>  Only 2 guesses left.

I'm fine with guesses, thank you :-) Of course a deus-ex-machina
solution (deus == Neil) is nice, too :)

>  First, try to disable bitmaps on the raid array

Maybe I did that by accident for the various vmstat data for different
RAID levels I posted previously. At least I forgot to explicitly
specify a bitmap for those tests (see above).

It's my understanding that the bitmap is a raid chunk level journal to
speed up recovery, correct? Doing that reduces the window during which
a second disk can die with catastrophic consequences -> bitmaps are a
good thing, especially on an array where a full rebuild takes hours.
Seeing as the primary purpose of the raid5 is fault tolerance I could
live with a performance penalty but why is it *that* slow?

If I put the bitmap on an external drive it will be a lot faster - but
what happens when the bitmap "goes away" (because that disk fails,
isn't accessible, etc.)?
Is it goodbye array or is the worst case a full resync? How well is
the external bitmap supported?
(That same consideration kept me from using external journals for ext3.)

>  And second, the whole thing looks pretty much like a more general
>  problem discussed here and elsewhere in the last few days.  I mean handling
>  of parallel reads and writes - when a single write may stall reads
>  for quite some time and vice versa.

Any thread names to recommend?

>  I see it every day on disks without NCQ/TCQ [...] your disks
>  and/or controllers (or the combination) don't even support NCQ

The old array of IDE disks on mixed no-name controllers does well
enough, and NCQ / ncq doesn't even show up in dmesg. Definitely something
to consider but probably not the root cause.

Back to testing ...

C.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-02 21:19       ` Christian Pernegger
@ 2008-03-02 21:56         ` Michael Tokarev
  2008-03-03  0:17           ` Christian Pernegger
  0 siblings, 1 reply; 19+ messages in thread
From: Michael Tokarev @ 2008-03-02 21:56 UTC (permalink / raw)
  To: Christian Pernegger; +Cc: linux-raid, linux-ide

Christian Pernegger wrote:
[]
>>  First, try to disable bitmaps on the raid array

It has been pointed out recently here on linux-raid that the internal bitmap
doesn't work well:

Message-ID: <47C44DDB.3050201@free.fr>
Date:	Tue, 26 Feb 2008 18:35:23 +0100
From:	Hubert Verstraete <hubskml@free.fr>
To:	Neil Brown <neilb@suse.de>, linux-raid@vger.kernel.org
Subject: internal bitmap size

Hi Neil,

Neil Brown wrote:
 > For now, you will have to live with a smallish bitmap, which probably
 > isn't a real problem.  With 19078 bits, you will still get a
 > several-thousand-fold increase in resync speed after a crash
 > (i.e. hours become seconds) and to some extent, fewer bits are better
 > and you have to update them less.
 >
 > I've haven't made any measurements to see what size bitmap is
 > ideal... maybe someone should :-)

I've made some tries with a RAID-5 array of four 250GB disks and the write
speed is really ugly with the default internal bitmap size.
Setting a bigger bitmap chunk size (16 MB for example) creates a small
bitmap. The write speed is then almost the same as when there is no
bitmap, which is great. And as you said, the resync is a matter of
seconds (or minutes) instead of hours (without bitmap).
With such a setting, I've got both a nice write speed and a nice resync
speed. That's where I would look at to find MY ideal bitmap size.
....
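
(A sketch of Hubert's suggestion in mdadm terms, assuming --grow can drop and
re-add the internal bitmap here; --bitmap-chunk is in KB, so 16384 = 16 MB:)

mdadm --grow /dev/md0 --bitmap=none
mdadm --grow /dev/md0 --bitmap=internal --bitmap-chunk=16384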


> Maybe I did that by accident for the various vmstat data for different
> RAID levels I posted previously. At least I forgot to explicitly
> specify a bitmap for those tests (see above).
> 
> It's my understanding that the bitmap is a raid chunk level journal to
> speed up recovery, correct? Doing that reduces the window during which
> a second disk can die with catastrophic consequences -> bitmaps are a
> good thing, especially on an array where a full rebuild takes hours.
> Seeing as the primary purpose of the raid5 is fault tolerance I could
> live with a performance penalty but why is it *that* slow?

Umm..  You mixed it all ;)

Bitmap is a place (stored somewhere... ;) where each equally-sized
block of the array has a single bit of information - namely, if that
block has been written recently (which means it was dirty) or not.
So for each block (which is in no way related to chunk size etc!)
we've got an on/off switch telling us whether the said block has to be
re-synchronized if we need to perform re-synchronization of data -
for example, in case of power loss -- only those blocks marked
"dirty" in the bitmap need to be recalculated and rewritten,
not the whole array.

This has nothing to do with the window between first and second disk
failure.  Once the first disk fails, the bitmap is of no use anymore,
because you will need a replacement disk, which has to be
resynchronized in full, because it's shiny new.  The bitmap only
helps for an unclean shutdown, and only if there was no recent write
activity (which hasn't been "committed" by the md layer and the array
hasn't been re-marked as clean - this happens every 0.21 sec by
default - see /sys/block/mdN/md/safe_mode_delay).

> If I put the bitmap on an external drive it will be a lot faster - but
> what happens when the bitmap "goes away" (because that disk fails,
> isn't accessible, etc.)?
> Is it goodbye array or is the worst case a full resync? How well is
> the external bitmap supported?
> (That same consideration kept me from using external journals for ext3.)

If the bitmap is inaccessible, it's handled as if there was no bitmap
at all - ie, if the array was dirty, it will be resynced as a whole;
if it was clean, nothing will be done.  The bitmap gives a set of blocks
to OMIT from resynchronization, and if that information is unavailable...

Yes, external bitmaps are supported and working.  It doesn't mean
they're faster however - I tried placing a bitmap into a tmpfs (just
for testing) - and discovered about 95% drop in speed compared to the
case with internal bitmap (ie, only 5% speed when bitmap is on tmpfs -
bitmap size was the same).  That was long ago (more than a year), so things
may have changed already.


I highly doubt chunk size makes any difference.  Bitmap is the primary
suspect here.

/mjt

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-02 21:56         ` Michael Tokarev
@ 2008-03-03  0:17           ` Christian Pernegger
  2008-03-03  2:58             ` Michael Tokarev
  0 siblings, 1 reply; 19+ messages in thread
From: Christian Pernegger @ 2008-03-03  0:17 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-ide

>  I highly doubt chunk size makes any difference.  Bitmap is the primary
>  suspect here.

Some tests:

raid5, chunk size goes from 16k to 1m. arrays created with --assume-clean

dd-tests
========

read / write 4GB in 4-chunk blocks directly on the md device.
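
(A hedged reconstruction for the 1m-chunk case; bs = 4 x chunk, ~4GB total:)

dd if=/dev/zero of=/dev/md0 bs=4M count=1024
dd if=/dev/md0 of=/dev/null bs=4M count=1024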

dd-read
-------
unaffected by bitmap as expected
gets MUCH better with inc. chunk size			 71 -> 219 MB/s
reads go up near theoretical bus maximum (266MB/s)
the maximum total reached via parallel single-disk reads is 220 MB/s

Conclusion: reads are fine

dd-write
--------
with bitmap: gets SLOWLY worse with inc. chunk size	 30 ->  27 MB/s
without bitmap: gets MUCH worse with inc chunk size	100 ->  59 MB/s

Conclusion: needs explanation / tuning

even omitting the bitmap, the writes just touch 100 MB/s, more like 80
on any chunk size with nice reads.
Why would it get worse? Anything tunable here?
the maximum total reached via parallel single-disk writes is 150 MB/s


mke2fs-tests
============

create ext3 fs with correct stride, get a 10-second vmstat average 10
seconds in and abort the mke2fs
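
(One iteration might have looked like this - a sketch, with CHUNK_KB standing
in for the chunk size under test:)

mke2fs -q -j -E stride=$((CHUNK_KB / 4)) /dev/md0 &
sleep 10
vmstat 10 2    # second line is a true 10-second average
kill $!        # abort the mke2fs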

with bitmap: goes down SLOWLY from 64k chunks		 17 ->  13 MB/s
without bitmap: gets MUCH worse with inc. chunk size	 80 ->  34 MB/s

Conclusion: needs explanation / tuning

the maximum total reached via parallel single-disk mke2fs is 150 MB/s.


Comments welcome.

Next step: smaller bitmap
When the performance seems normal I'll revisit the responsiveness-issue.

>  Umm..  You mixed it all ;)
>  Bitmap is a place (stored somewhere... ;) where each equally-sized
>  block of the array has a single bit of information - namely, if that
>  block has been written recently (which means it was dirty) or not.
>  So for each block (which is in no way related to chunk size etc!)

Aren't these blocks-represented-by-a-bit-in-the-bitmap called chunks,
too? Sorry for the confusion.

>  This has nothing to do with the window between first and second disk
>  failure.  Once the first disk fails, the bitmap is of no use anymore,
>  because you will need a replacement disk, which has to be
>  resynchronized in full,

Yes, that makes sense. Still sounds useful, since a lot of my
"failures" have been of the intermittent (SATA cables / connectors,
port resets, slow bad-sector remap) variety.

>  If the bitmap is inaccessible, it's handled as if there was no bitmap
>  at all - ie, if the array was dirty, it will be resynced as a whole;
>  if it was clean, nothing will be done.

Ok, good to hear. In theory that's the sane mode of operation, in
practice it might just have been that the array refuses to assemble
without its bitmap.

>  Yes, external bitmaps are supported and working.  It doesn't mean
>  they're faster however - I tried placing a bitmap into a tmpfs (just
>  for testing) - and discovered about 95% drop in speed

Interesting ... what are external bitmaps good for, then?

Thank you, I appreciate your patience.

C.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-03  0:17           ` Christian Pernegger
@ 2008-03-03  2:58             ` Michael Tokarev
  2008-03-03  8:38               ` Christian Pernegger
  0 siblings, 1 reply; 19+ messages in thread
From: Michael Tokarev @ 2008-03-03  2:58 UTC (permalink / raw)
  To: Christian Pernegger; +Cc: linux-raid, linux-ide

Christian Pernegger wrote:
>>  I highly doubt chunk size makes any difference.  Bitmap is the primary
>>  suspect here.

I meant something else.  Sure, the chunk size will have quite a
significant impact on write performance, see below.  What I meant
is that the PROBLEM you're facing is not due to the chunk size but
due to the bitmap.

> Some tests:
> 
> raid5, chunk size goes from 16k to 1m. arrays created with --assume-clean
> 
> dd-tests
> ========
[]
> dd-write
> --------
> with bitmap: gets SLOWLY worse with inc. chunk size	 30 ->  27 MB/s
> without bitmap: gets MUCH worse with inc chunk size	100 ->  59 MB/s

In other words, the bitmap makes a HUGE impact - when it's on, everything is
so damn slow that other factors aren't even noticeable.  When the bitmap
is off, write speed drops when increasing the chunk size.

> Conclusion: needs explanation / tuning

You know how raid5 processes writes, right?  The read-modify-write or
similar technique, which involves READING as well as writing - reading
from the other disks in order to calculate/update parity.  Unless you
write a complete stripe (all chunks).

So the bigger your chunk size is, the fewer chances you have to perform
a full-stripe write, just because "large, properly aligned" writes are
much less frequent than "smaller, unaligned" ones - at least for
typical filesystem usage (special usage patterns exist, for sure).

Here, both the linux write-back cache (writes don't go directly to disk
but to kernel memory first, and the kernel does some reordering/batching
there) and the md stripe-cache make a huge difference, for obvious reasons.

> even omitting the bitmap, the writes just touch 100 MB/s, more like 80
> on any chunk size with nice reads.
> Why would it get worse? Anything tunable here?

Yes.  See your read test with a small chunk size.  Here, you've got
better "initial" results.  Why is reading with a small chunk size
so slow?  Because of the small request size to the underlying
device, that's why - each drive is effectively reading 16Kb at a
time, spending much time rotating and communicating with the controller...

But unlike for reads, increasing the chunk size does not help writes -
because of the above reasons (writing "half" stripes more often and
hence requiring a read-modify-write cycle more often etc).

You can get the best results when writing WITHOUT a filesystem
(directly to /dev/mdFOO) with dd and a blocksize equal to the
total stripe size (chunk size * number of data disks, or
16k * 3 ..... 1M * 3, since in your raid 3 disks carry data
in each stripe), and trying direct writes at the same time.
Yes it will be a bit worse than the read speed, but it should be
pretty decent still.
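
(For your 1024k-chunk array that would be, as a sketch: 3 data disks x 1M = 3M
per full stripe, written with O_DIRECT:)

dd if=/dev/zero of=/dev/md0 bs=3M count=1365 oflag=direct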

I said "without a filesystem" because alignment is very important
too, -- to the same level as full-strip vs "half"-strip writes,
and with a filesystem in place, you can't be sure anymore how
your files are aligned on disk (xfs tries to align files correctly,
at least under some conditions).

> the maximum total reached via parallel single-disk writes is 150 MB/s
> 
> 
> mke2fs-tests
> ============
> 
> create ext3 fs with correct stride, get a 10-second vmstat average 10
> seconds in and abort the mke2fs
> 
> with bitmap: goes down SLOWLY from 64k chunks		 17 ->  13 MB/s
> without bitmap: gets MUCH worse with inc. chunk size	 80 ->  34 MB/s
> 
> Conclusion: needs explanation / tuning

The same thing.  The bitmap makes a HUGE impact, and w/o the bitmap, write
speed drops when increasing the chunk size.

> Comments welcome.

I see one problem (the bitmap thing, finally discovered and confirmed),
and a need to tune your system beforehand -- namely the chunk size.  And
here, you're pretty much a wizard - it's your guess.  In any case,
unless your usage pattern will be special and optimized for such a
setup, don't choose a large chunk size for raid456.  Choosing a large
chunk size for raid0 and raid10 makes sense, but with raid456 it has
immediate downsides.

> Next step: smaller bitmap
> When the performance seems normal I'll revisit the responsiveness-issue.
> 
>>  Umm..  You mixed it all ;)
>>  Bitmap is a place (stored somewhere... ;) where each equally-sized
>>  block of the array has a single bit of information - namely, if that
>>  block has been written recently (which means it was dirty) or not.
>>  So for each block (which is in no way related to chunk size etc!)
> 
> Aren't these blocks-represented-by-a-bit-in-the-bitmap called chunks,
> too? Sorry for the confusion.

Well yes, but I deliberately avoided using *this* "chunk" in my sentence ;)
In any case, the chunk as in "usual-raid-chunk-size" and "chunk-represented-
by-a-bit-in-the-bitmap" are different chunks; the latter consists of one
or more of the former.  Oh well.

>>  This has nothing to do with the window between first and second disk
>>  failure.  Once the first disk fails, the bitmap is of no use anymore,
>>  because you will need a replacement disk, which has to be
>>  resynchronized in full,
> 
> Yes, that makes sense. Still sounds useful, since a lot of my
> "failures" have been of the intermittent (SATA cables / connectors,
> port resets, slow bad-sector remap) variety.

Yes, you're right.  Annoying stuff, and bitmaps definitely help here.

>>  If the bitmap is inaccessible, it's handled as if there was no bitmap
>>  at all - ie, if the array was dirty, it will be resynced as a whole;
>>  if it was clean, nothing will be done.
> 
> Ok, good to hear. In theory that's the sane mode of operation, in
> practice it might just have been that the array refuses to assemble
> without its bitmap.

I had the opposite here.  For various reasons, including operator
errors, bugs in mdadm and the md kernel code, and probably the phases of
the Moon, after creating a bitmap on a raid array and rebooting, I kept
discovering that the bitmap wasn't there anymore -- it was gone.  It all
came down to the bitmap data (information about bitmap presence and
location) not being passed to the kernel correctly - either
because I forgot to specify it in a config file, or because mdadm didn't
pass that info in several cases, or because with that superblock version
bitmaps were handled incorrectly....

>>  Yes, external bitmaps are supported and working.  It doesn't mean
>>  they're faster however - I tried placing a bitmap into a tmpfs (just
>>  for testing) - and discovered about 95% drop in speed
> 
> Interesting ... what are external bitmaps good for, then?

I had no time to investigate, and now I don't have the hardware to test
again.  In theory it should work, but I guess only a few people are
using them, if at all - most are using internal bitmaps.

/mjt

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-03  2:58             ` Michael Tokarev
@ 2008-03-03  8:38               ` Christian Pernegger
  2008-03-04 16:54                 ` Christian Pernegger
  0 siblings, 1 reply; 19+ messages in thread
From: Christian Pernegger @ 2008-03-03  8:38 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-ide

>  You know how raid5 processes writes, right?

Yes. What I'd forgotten, though, is that even with --assume-clean it's
still basically in degraded mode so testing writes at most serves as a
comparison between the different setups. This seems to be a common
late night error :)

>  So the bigger your chunk size is, the fewer chances you have to perform
>  a full-stripe write, just because "large, properly aligned" writes are
>  much less frequent than "smaller, unaligned" ones

I had expected that the sequential dd-writes would be automatically
combined to write full stripes, yet they also go down massively with
increased chunk size.

>  You can get the best results when writing WITHOUT a filesystem
>  (directly to /dev/mdFOO) with dd and a blocksize equal to the
>  total stripe size (chunk size * number of data disks, or
>  16k * 3 ..... 1M * 3, since in your raid 3 disks carry data
>  in each stripe)

I didn't use an fs for the dd tests, but 4*chunk size, which was
stupid. Still, I would have expected some combining to be going on
even at that level.

>  md stripe-cache makes a huge difference, for obvious reasons.

Will experiment with that as well, thanks. Any other caches?
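
(The knob, for reference; memory used is roughly stripe_cache_size x page size
x number of disks:)

echo 8192 > /sys/block/md0/md/stripe_cache_size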

>  unless your usage pattern will be special and optimized for such a
>  setup, don't choose a large chunk size for raid456.

I think I will be revising my test script to actually create a small
fs at the start of the device and run some tests in that. Note to
self: need a good load simulator ...

>  I had no time to investigate, and now I don't have the hardware to test
>  again.  In theory it should work, but I guess only a few people are
>  using [external bitmaps], if at all - most are using internal bitmaps.

I'd rather not use a feature where the probability that a bug will
bite me first is that high. :)

FWIW, using smaller bitmaps REALLY helps. Tests forthcoming.

Thanks,

C.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-02 13:24       ` Christian Pernegger
@ 2008-03-03 17:59         ` Bill Davidsen
  2008-03-03 20:19           ` Christian Pernegger
  0 siblings, 1 reply; 19+ messages in thread
From: Bill Davidsen @ 2008-03-03 17:59 UTC (permalink / raw)
  To: Christian Pernegger; +Cc: linux-raid, linux-ide

Christian Pernegger wrote:
>>  You could try to change cables and such but you've already cc'd linux-ide,
>>  AFAIK it can/could be a chipset-related issue and the guys who work on NCQ
>>  etc. were working on the problem, last I heard.
>>
>>  You should try to narrow down the problem: is it always the same drive
>>  that has the problem?  Does it occur if you do a check on the
>>  RAID 5 array or only when building?  etc.
>>     
>
> Assuming you're talking about the HSM violation error ... I got that
> exactly once (so far) in the situation described (sometime after the
> writing phase of badblocks). Nothing since, and certainly not the spew
> others are reporting.
>
>   
Are you by any chance running quota?

> My primary concern now is the second problem - seems like any access
> to the raid array makes the machine unusable. I can type/edit a
> command at the bash prompt normally but as soon as I hit enter it just
> hangs there. If the command is trivial, say cat, I might get a result
> a few minutes later.
>
> bonnie++ results are in:
>
> Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> 1024k            4G           16282   5 13359   3           57373   7 149.0   0
>                     ------Sequential Create------ --------Random Create--------
>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
>
> Something is definitely not right ... :-(
>
> As in, an old array of 300GB IDE Maxtors has 4x the seq writes, bus
> capped (105MB/s, plain PCI) reads and 3x the IOs. And it doesn't block
> the machine. Granted, there's the crypto but on my other (non-raid)
> boxes the performance impact just isn't there.
>
> Any help appreciated, as it is the box has expensive-paperweight status.
>
> C.


-- 
Bill Davidsen <davidsen@tmr.com>
  "Woe unto the statesman who makes war without a reason that will still
  be valid when the war is over..." Otto von Bismark 



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-03 17:59         ` Bill Davidsen
@ 2008-03-03 20:19           ` Christian Pernegger
  0 siblings, 0 replies; 19+ messages in thread
From: Christian Pernegger @ 2008-03-03 20:19 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-ide

>  > Assuming you're talking about the HSM violation error ... I got that
>  > exactly once (so far) in the situation described (sometime after the
>  > writing phase of badblocks). Nothing since, and certainly not the spew
>  > others are reporting.
>  >
>  >
>  Are you by any chance running quota?

No, sorry ... next guess :-)

Ah, why?

Thanks,

C.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-03  8:38               ` Christian Pernegger
@ 2008-03-04 16:54                 ` Christian Pernegger
  2008-03-05  6:38                   ` Christian Pernegger
  0 siblings, 1 reply; 19+ messages in thread
From: Christian Pernegger @ 2008-03-04 16:54 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-ide

[-- Attachment #1: Type: text/plain, Size: 1557 bytes --]

>  FWIW, using smaller bitmaps REALLY helps. Tests forthcoming.

I accidentally trashed the original script output for the first test,
so all I can offer is a spreadsheet (or not, 1.test should have most
of the data):

1. create array with given chunk size and bitmap chunk size and --assume-clean
2. dd write and read 4GB (2x RAM) directly on the device (no oflag=direct,
though)
3. create small ~13 GB fs at the start of the array
4. mount without options and run bonnie++
5. lather, rinse, repeat
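
For reference, a minimal sketch of the loop (device names, the mount
point and the mke2fs parameters are assumptions, not necessarily what
I ran):

  #!/bin/sh
  # Sketch only -- DESTROYS all data on the listed disks.
  for chunk in 1024 512 256 128 64 32 16; do                         # KB
    for bmchunk in 524288 262144 131072 65536 32768 16384 8192; do   # KB
      mdadm --create /dev/md0 --level=5 --raid-devices=4 \
            --metadata=1.0 --assume-clean \
            --chunk=$chunk --bitmap=internal --bitmap-chunk=$bmchunk \
            /dev/sdb /dev/sdc /dev/sdd /dev/sde
      cat /proc/mdstat
      # raw dd, one full stripe (3 data chunks) per record, ~4GB total
      echo "RAW write"
      dd if=/dev/zero of=/dev/md0 bs=$((3 * chunk))k \
         count=$((4 * 1024 * 1024 / (3 * chunk)))
      echo "RAW read"
      dd if=/dev/md0 of=/dev/null bs=$((3 * chunk))k \
         count=$((4 * 1024 * 1024 / (3 * chunk)))
      # small ~13GB fs at the start of the array, then bonnie++
      mke2fs -q -j -b 4096 -E stride=$((chunk / 4)) /dev/md0 3276800
      mount /dev/md0 /mnt/test
      bonnie++ -d /mnt/test -s 4096 -u nobody \
               -m raid5-$chunk-$((bmchunk / 1024))
      umount /mnt/test
      mdadm --stop /dev/md0
    done
  done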

Second test is the same as the first, only
1. create a smaller array with -z (4GB / disk so 12GB usable), let it
sync and set stripe_cache_size to 8192 (see the sketch after this list)
2. ...
3. ...
4. mount with noatime, nodiratime
5. ...
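
(-z is mdadm's short form of --size, in KB per device, and the stripe
cache is per-array, set through sysfs. A sketch, using the same loop
variables as above and assuming an mdadm recent enough for --wait:)

  # 4GB per device (-z takes KB), so 12GB usable on a 4-disk raid5
  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        -z $((4 * 1024 * 1024)) \
        --chunk=$chunk --bitmap=internal --bitmap-chunk=$bmchunk \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde
  mdadm --wait /dev/md0    # block until the initial sync has finished
  # memory used = stripe_cache_size * 4KB page * 4 devices = 128MB here
  echo 8192 > /sys/block/md0/md/stripe_cache_size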

Better. Much better. Now safely out of the realm of error and into
that of tuning.
Still a bottleneck somewhere ... if 50% in bonnie++ means 100% of one
CPU, that could be it. Any comments on the CPU results (text version
only)?

Results are single-shot (not averaged), so the data is relatively low quality.

I didn't notice any responsiveness issues during the tests, but then
again I left the machine pretty much alone. Will tune first and tackle
that later. FWIW background resync alone isn't the culprit - that
doesn't even hurt benchmarks too badly. Maybe background sync + large
bitmap?

The HSM violation hasn't yet cropped up again. What does it mean exactly?

Also, aside from the fact that NCQ should apparently be turned off for
md raid anyway - why doesn't it work? The Promise SATA2 TX4 allegedly
does it, as do the disks.
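
(For anyone who wants to check: the effective per-disk queue depth is
visible and writable in sysfs. A sketch, disk name assumed:)

  cat /sys/block/sdb/device/queue_depth      # >1 means NCQ is in use
  echo 1 > /sys/block/sdb/device/queue_depth # force NCQ off for this disk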

Thanks,

C.

[-- Attachment #2: raid5-bench.ods --]
[-- Type: application/vnd.oasis.opendocument.spreadsheet, Size: 15346 bytes --]

[-- Attachment #3: 1.test --]
[-- Type: application/octet-stream, Size: 65731 bytes --]



Chunk size:  1024
BM chunk size:  524288

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930285568 blocks super 1.0 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 1/1 pages [4KB], 524288KB chunk

unused devices: <none>

RAW write
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 72.7572 seconds, 59.0 MB/s

RAW read
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 19.702 seconds, 218 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-1024-512   4G           45594  32 37159  23           166326  45 356.9   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22935  95 +++++ +++ 26607 100 +++++ +++ +++++ +++ 28495  99
raid5-1024-512,4G,,,45594,32,37159,23,,,166326,45,356.9,1,16,22935,95,+++++,+++,26607,100,+++++,+++,+++++,+++,28495,99


Chunk size:  1024
BM chunk size:  262144

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930285568 blocks super 1.0 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 2/2 pages [8KB], 262144KB chunk

unused devices: <none>

RAW write
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 73.9192 seconds, 58.1 MB/s

RAW read
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 19.6784 seconds, 218 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-1024-256   4G           45427  32 36280  22           158803  42 362.5   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ 25353  99 25503  95 +++++ +++ 26475 100
raid5-1024-256,4G,,,45427,32,36280,22,,,158803,42,362.5,1,16,+++++,+++,+++++,+++,25353,99,25503,95,+++++,+++,26475,100


Chunk size:  1024
BM chunk size:  131072

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930285568 blocks super 1.0 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 4/4 pages [16KB], 131072KB chunk

unused devices: <none>

RAW write
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 76.1866 seconds, 56.4 MB/s

RAW read
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 20.4955 seconds, 210 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-1024-128   4G           43021  32 33947  23           151999  42 350.6   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 19666  90 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
raid5-1024-128,4G,,,43021,32,33947,23,,,151999,42,350.6,0,16,19666,90,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++


Chunk size:  1024
BM chunk size:  65536

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930285568 blocks super 1.0 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>

RAW write
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 79.3601 seconds, 54.1 MB/s

RAW read
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 21.0058 seconds, 204 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-1024-64    4G           40781  30 33431  21           148294  40 354.6   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 18983  94 +++++ +++ 30233 100 +++++ +++ +++++ +++ 26111  99
raid5-1024-64,4G,,,40781,30,33431,21,,,148294,40,354.6,0,16,18983,94,+++++,+++,30233,100,+++++,+++,+++++,+++,26111,99


Chunk size:  1024
BM chunk size:  32768

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930285568 blocks super 1.0 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 15/15 pages [60KB], 32768KB chunk

unused devices: <none>

RAW write
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 81.6679 seconds, 52.6 MB/s

RAW read
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 20.5918 seconds, 209 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-1024-32    4G           40169  30 33239  21           155120  42 338.7   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 32274  86 +++++ +++ +++++ +++ +++++ +++ +++++ +++ 26837  99
raid5-1024-32,4G,,,40169,30,33239,21,,,155120,42,338.7,0,16,32274,86,+++++,+++,+++++,+++,+++++,+++,+++++,+++,26837,99


Chunk size:  1024
BM chunk size:  16384

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930285568 blocks super 1.0 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 30/30 pages [120KB], 16384KB chunk

unused devices: <none>

RAW write
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 84.6962 seconds, 50.7 MB/s

RAW read
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 20.5898 seconds, 209 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-1024-16    4G           39220  30 31604  20           150565  41 347.8   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 31511  94 +++++ +++ 24415 100 26566  95 +++++ +++ +++++ +++
raid5-1024-16,4G,,,39220,30,31604,20,,,150565,41,347.8,0,16,31511,94,+++++,+++,24415,100,26566,95,+++++,+++,+++++,+++


Chunk size:  1024
BM chunk size:  8192

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930285568 blocks super 1.0 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 59/59 pages [236KB], 8192KB chunk

unused devices: <none>

RAW write
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 91.1599 seconds, 47.1 MB/s

RAW read
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 20.7768 seconds, 207 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-1024-8     4G           36976  26 30722  20           146899  41 339.7   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23504  98 +++++ +++ 27074  99 +++++ +++ +++++ +++ +++++ +++
raid5-1024-8,4G,,,36976,26,30722,20,,,146899,41,339.7,0,16,23504,98,+++++,+++,27074,99,+++++,+++,+++++,+++,+++++,+++


Chunk size:  512
BM chunk size:  524288

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 1/1 pages [4KB], 524288KB chunk

unused devices: <none>

RAW write
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 64.9243 seconds, 66.1 MB/s

RAW read
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 20.9037 seconds, 205 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-512-512    4G           54883  37 41510  23           161461  39 356.0   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23127  97 +++++ +++ 27126 100 31788  88 +++++ +++ +++++ +++
raid5-512-512,4G,,,54883,37,41510,23,,,161461,39,356.0,1,16,23127,97,+++++,+++,27126,100,31788,88,+++++,+++,+++++,+++


Chunk size:  512
BM chunk size:  262144

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 2/2 pages [8KB], 262144KB chunk

unused devices: <none>

RAW write
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 66.0985 seconds, 65.0 MB/s

RAW read
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 20.9707 seconds, 205 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-512-256    4G           55707  38 41697  23           162015  39 359.3   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 18737  95 +++++ +++ 24738  99 +++++ +++ +++++ +++ 26717  99
raid5-512-256,4G,,,55707,38,41697,23,,,162015,39,359.3,1,16,18737,95,+++++,+++,24738,99,+++++,+++,+++++,+++,26717,99


Chunk size:  512
BM chunk size:  131072

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 4/4 pages [16KB], 131072KB chunk

unused devices: <none>

RAW write
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 63.0405 seconds, 68.1 MB/s

RAW read
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 20.4682 seconds, 210 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-512-128    4G           56740  40 42806  24           166630  40 367.4   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 18821  89 +++++ +++ 19445  80 20415  96 +++++ +++ 29868  99
raid5-512-128,4G,,,56740,40,42806,24,,,166630,40,367.4,0,16,18821,89,+++++,+++,19445,80,20415,96,+++++,+++,29868,99


Chunk size:  512
BM chunk size:  65536

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>

RAW write
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 63.8452 seconds, 67.3 MB/s

RAW read
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 21.0159 seconds, 204 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-512-64     4G           56434  39 42049  24           158950  39 368.9   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 20541  99 +++++ +++ 28462  93 +++++ +++ +++++ +++ +++++ +++
raid5-512-64,4G,,,56434,39,42049,24,,,158950,39,368.9,1,16,20541,99,+++++,+++,28462,93,+++++,+++,+++++,+++,+++++,+++


Chunk size:  512
BM chunk size:  32768

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 15/15 pages [60KB], 32768KB chunk

unused devices: <none>

RAW write
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 69.3469 seconds, 61.9 MB/s

RAW read
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 21.104 seconds, 203 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-512-32     4G           52673  40 40229  23           152916  38 363.1   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 18663  97 +++++ +++ 22457 100 23015  99 +++++ +++ 30797 100
raid5-512-32,4G,,,52673,40,40229,23,,,152916,38,363.1,1,16,18663,97,+++++,+++,22457,100,23015,99,+++++,+++,30797,100


Chunk size:  512
BM chunk size:  16384

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 30/30 pages [120KB], 16384KB chunk

unused devices: <none>

RAW write
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 71.0779 seconds, 60.4 MB/s

RAW read
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 21.2219 seconds, 202 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-512-16     4G           50031  35 38853  21           156130  38 356.4   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22715  99 +++++ +++ 20617  86 +++++ +++ +++++ +++ 26704  99
raid5-512-16,4G,,,50031,35,38853,21,,,156130,38,356.4,1,16,22715,99,+++++,+++,20617,86,+++++,+++,+++++,+++,26704,99


Chunk size:  512
BM chunk size:  8192

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 59/59 pages [236KB], 8192KB chunk

unused devices: <none>

RAW write
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 75.6919 seconds, 56.7 MB/s

RAW read
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 20.7094 seconds, 207 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-512-8      4G           48642  34 38047  21           160889  39 359.8   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22720  98 +++++ +++ 24036  85 28420  85 +++++ +++ 32713  99
raid5-512-8,4G,,,48642,34,38047,21,,,160889,39,359.8,1,16,22720,98,+++++,+++,24036,85,28420,85,+++++,+++,32713,99


Chunk size:  256
BM chunk size:  524288

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 1/1 pages [4KB], 524288KB chunk

unused devices: <none>

RAW write
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 58.231 seconds, 73.8 MB/s

RAW read
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 22.4732 seconds, 191 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-256-512    4G           63757  43 41872  21           161179  33 356.9   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 18799  96 +++++ +++ 21537  84 +++++ +++ +++++ +++ +++++ +++
raid5-256-512,4G,,,63757,43,41872,21,,,161179,33,356.9,1,16,18799,96,+++++,+++,21537,84,+++++,+++,+++++,+++,+++++,+++


Chunk size:  256
BM chunk size:  262144

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 2/2 pages [8KB], 262144KB chunk

unused devices: <none>

RAW write
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 58.7288 seconds, 73.1 MB/s

RAW read
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 22.1732 seconds, 194 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-256-256    4G           61955  41 42117  20           159276  32 354.7   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 19279  99 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
raid5-256-256,4G,,,61955,41,42117,20,,,159276,32,354.7,0,16,19279,99,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++


Chunk size:  256
BM chunk size:  131072

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 4/4 pages [16KB], 131072KB chunk

unused devices: <none>

RAW write
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 60.7463 seconds, 70.7 MB/s

RAW read
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 22.9683 seconds, 187 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-256-128    4G           59436  40 40913  20           158873  32 361.6   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22671  97 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
raid5-256-128,4G,,,59436,40,40913,20,,,158873,32,361.6,1,16,22671,97,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++


Chunk size:  256
BM chunk size:  65536

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>

RAW write
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 62.0628 seconds, 69.2 MB/s

RAW read
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 23.3553 seconds, 184 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-256-64     4G           59342  43 40743  20           152804  31 358.4   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22073  95 +++++ +++ 29594  99 28599  99 +++++ +++ +++++ +++
raid5-256-64,4G,,,59342,43,40743,20,,,152804,31,358.4,0,16,22073,95,+++++,+++,29594,99,28599,99,+++++,+++,+++++,+++


Chunk size:  256
BM chunk size:  32768

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 15/15 pages [60KB], 32768KB chunk

unused devices: <none>

RAW write
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 61.0171 seconds, 70.4 MB/s

RAW read
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 22.4737 seconds, 191 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-256-32     4G           59409  42 41191  20           158059  31 369.0   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22923  98 +++++ +++ 24681  80 +++++ +++ +++++ +++ +++++ +++
raid5-256-32,4G,,,59409,42,41191,20,,,158059,31,369.0,0,16,22923,98,+++++,+++,24681,80,+++++,+++,+++++,+++,+++++,+++


Chunk size:  256
BM chunk size:  16384

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 30/30 pages [120KB], 16384KB chunk

unused devices: <none>

RAW write
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 63.6786 seconds, 67.4 MB/s

RAW read
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 22.3311 seconds, 192 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-256-16     4G           55676  38 40154  19           155272  31 359.7   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22567  98 +++++ +++ 25491 100 +++++ +++ +++++ +++ +++++ +++
raid5-256-16,4G,,,55676,38,40154,19,,,155272,31,359.7,1,16,22567,98,+++++,+++,25491,100,+++++,+++,+++++,+++,+++++,+++


Chunk size:  256
BM chunk size:  8192

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 59/59 pages [236KB], 8192KB chunk

unused devices: <none>

RAW write
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 72.476 seconds, 59.3 MB/s

RAW read
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 22.6458 seconds, 190 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-256-8      4G           51132  36 36603  18           147847  30 346.0   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23114  99 +++++ +++ 24802  99 31556  76 +++++ +++ 28139 100
raid5-256-8,4G,,,51132,36,36603,18,,,147847,30,346.0,0,16,23114,99,+++++,+++,24802,99,31556,76,+++++,+++,28139,100


Chunk size:  128
BM chunk size:  524288

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 1/1 pages [4KB], 524288KB chunk

unused devices: <none>

RAW write
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 54.0318 seconds, 79.5 MB/s

RAW read
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 26.517 seconds, 162 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-128-512    4G           67024  44 38046  18           151708  28 368.1   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22467  95 +++++ +++ 26126 100 +++++ +++ +++++ +++ +++++ +++
raid5-128-512,4G,,,67024,44,38046,18,,,151708,28,368.1,1,16,22467,95,+++++,+++,26126,100,+++++,+++,+++++,+++,+++++,+++


Chunk size:  128
BM chunk size:  262144

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 2/2 pages [8KB], 262144KB chunk

unused devices: <none>

RAW write
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 53.1742 seconds, 80.8 MB/s

RAW read
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 26.8148 seconds, 160 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-128-256    4G           67544  43 38478  17           156452  29 351.0   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 19931  97 +++++ +++ 25386  99 +++++ +++ +++++ +++ +++++ +++
raid5-128-256,4G,,,67544,43,38478,17,,,156452,29,351.0,0,16,19931,97,+++++,+++,25386,99,+++++,+++,+++++,+++,+++++,+++


Chunk size:  128
BM chunk size:  131072

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 4/4 pages [16KB], 131072KB chunk

unused devices: <none>

RAW write
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 55.1817 seconds, 77.8 MB/s

RAW read
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 27.5264 seconds, 156 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-128-128    4G           66143  42 38534  17           168539  31 360.4   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 21179  91 +++++ +++ 30078  99 +++++ +++ +++++ +++ +++++ +++
raid5-128-128,4G,,,66143,42,38534,17,,,168539,31,360.4,1,16,21179,91,+++++,+++,30078,99,+++++,+++,+++++,+++,+++++,+++


Chunk size:  128
BM chunk size:  65536

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>

RAW write
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 53.368 seconds, 80.5 MB/s

RAW read
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 25.4093 seconds, 169 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-128-64     4G           67181  42 38801  17           156906  29 362.7   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23030  97 +++++ +++ 24732  80 +++++ +++ +++++ +++ +++++ +++
raid5-128-64,4G,,,67181,42,38801,17,,,156906,29,362.7,1,16,23030,97,+++++,+++,24732,80,+++++,+++,+++++,+++,+++++,+++


Chunk size:  128
BM chunk size:  32768

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 15/15 pages [60KB], 32768KB chunk

unused devices: <none>

RAW write
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 55.0117 seconds, 78.1 MB/s

RAW read
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 26.5023 seconds, 162 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-128-32     4G           62981  43 38609  18           151073  28 353.7   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 21624  98 +++++ +++ 22879  99 21702  95 +++++ +++ 26161 100
raid5-128-32,4G,,,62981,43,38609,18,,,151073,28,353.7,0,16,21624,98,+++++,+++,22879,99,21702,95,+++++,+++,26161,100


Chunk size:  128
BM chunk size:  16384

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 30/30 pages [120KB], 16384KB chunk

unused devices: <none>

RAW write
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 59.2844 seconds, 72.4 MB/s

RAW read
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 26.0047 seconds, 165 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-128-16     4G           59534  44 36088  17           155899  29 352.9   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 32588  96 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
raid5-128-16,4G,,,59534,44,36088,17,,,155899,29,352.9,0,16,32588,96,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++


Chunk size:  128
BM chunk size:  8192

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287104 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 59/59 pages [236KB], 8192KB chunk

unused devices: <none>

RAW write
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 67.3536 seconds, 63.8 MB/s

RAW read
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 27.2482 seconds, 158 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-128-8      4G           54037  37 34025  15           148177  27 354.3   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ 32125  73 +++++ +++ +++++ +++
raid5-128-8,4G,,,54037,37,34025,15,,,148177,27,354.3,1,16,+++++,+++,+++++,+++,+++++,+++,32125,73,+++++,+++,+++++,+++


Chunk size:  64
BM chunk size:  524288

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 1/1 pages [4KB], 524288KB chunk

unused devices: <none>

RAW write
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 51.0829 seconds, 84.1 MB/s

RAW read
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 34.6773 seconds, 124 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-64-512     4G           69866  42 35650  17           136541  24 364.1   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23372  96 +++++ +++ 21685  86 21490  93 +++++ +++ 28839  99
raid5-64-512,4G,,,69866,42,35650,17,,,136541,24,364.1,1,16,23372,96,+++++,+++,21685,86,21490,93,+++++,+++,28839,99


Chunk size:  64
BM chunk size:  262144

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 2/2 pages [8KB], 262144KB chunk

unused devices: <none>

RAW write
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 51.816 seconds, 82.9 MB/s

RAW read
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 36.0709 seconds, 119 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-64-256     4G           68495  45 34508  16           139208  25 357.1   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 19297  90 +++++ +++ 25628 100 +++++ +++ +++++ +++ 25352 100
raid5-64-256,4G,,,68495,45,34508,16,,,139208,25,357.1,1,16,19297,90,+++++,+++,25628,100,+++++,+++,+++++,+++,25352,100


Chunk size:  64
BM chunk size:  131072

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 4/4 pages [16KB], 131072KB chunk

unused devices: <none>

RAW write
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 52.4412 seconds, 81.9 MB/s

RAW read
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 36.1506 seconds, 119 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-64-128     4G           69165  43 35540  16           133670  24 354.2   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 20119  83 +++++ +++ 25692 100 24397  99 +++++ +++ 29648 100
raid5-64-128,4G,,,69165,43,35540,16,,,133670,24,354.2,0,16,20119,83,+++++,+++,25692,100,24397,99,+++++,+++,29648,100


Chunk size:  64
BM chunk size:  65536

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>

RAW write
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 53.1809 seconds, 80.8 MB/s

RAW read
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 34.8616 seconds, 123 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-64-64      4G           67482  42 34286  16           136122  25 359.7   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 18987  97 +++++ +++ 30550  99 25117  93 +++++ +++ +++++ +++
raid5-64-64,4G,,,67482,42,34286,16,,,136122,25,359.7,1,16,18987,97,+++++,+++,30550,99,25117,93,+++++,+++,+++++,+++


Chunk size:  64
BM chunk size:  32768

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 15/15 pages [60KB], 32768KB chunk

unused devices: <none>

RAW write
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 54.3026 seconds, 79.1 MB/s

RAW read
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 36.2894 seconds, 118 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-64-32      4G           66273  44 33775  16           134606  25 354.1   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 16313  98 +++++ +++ 23776  99 21796  94 +++++ +++ 25468 100
raid5-64-32,4G,,,66273,44,33775,16,,,134606,25,354.1,1,16,16313,98,+++++,+++,23776,99,21796,94,+++++,+++,25468,100


Chunk size:  64
BM chunk size:  16384

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 30/30 pages [120KB], 16384KB chunk

unused devices: <none>

RAW write
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 58.2792 seconds, 73.7 MB/s

RAW read
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 34.9719 seconds, 123 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-64-16      4G           60404  37 34011  15           143417  26 332.9   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 25513  99 +++++ +++ 19331  76 21539  92 +++++ +++ 27364 100
raid5-64-16,4G,,,60404,37,34011,15,,,143417,26,332.9,1,16,25513,99,+++++,+++,19331,76,21539,92,+++++,+++,27364,100


Chunk size:  64
BM chunk size:  8192

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 59/59 pages [236KB], 8192KB chunk

unused devices: <none>

RAW write
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 64.6569 seconds, 66.4 MB/s

RAW read
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 34.9528 seconds, 123 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-64-8       4G           55374  36 32321  15           139629  26 345.8   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22126  91 +++++ +++ 26532  99 31869  87 +++++ +++ +++++ +++
raid5-64-8,4G,,,55374,36,32321,15,,,139629,26,345.8,0,16,22126,91,+++++,+++,26532,99,31869,87,+++++,+++,+++++,+++


Chunk size:  32
BM chunk size:  524288

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 1/1 pages [4KB], 524288KB chunk

unused devices: <none>

RAW write
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 50.1915 seconds, 85.6 MB/s

RAW read
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 53.1271 seconds, 80.8 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-32-512     4G           70760  44 30992  16           81190  17 347.9   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 21865  96 +++++ +++ 24592  99 19328  99 +++++ +++ 29263 100
raid5-32-512,4G,,,70760,44,30992,16,,,81190,17,347.9,0,16,21865,96,+++++,+++,24592,99,19328,99,+++++,+++,29263,100


Chunk size:  32
BM chunk size:  262144

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 2/2 pages [8KB], 262144KB chunk

unused devices: <none>

RAW write
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 49.4897 seconds, 86.8 MB/s

RAW read
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 54.5374 seconds, 78.8 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-32-256     4G           71793  45 31387  17           88111  19 352.8   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23237  94 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
raid5-32-256,4G,,,71793,45,31387,17,,,88111,19,352.8,1,16,23237,94,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++


Chunk size:  32
BM chunk size:  131072

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 4/4 pages [16KB], 131072KB chunk

unused devices: <none>

RAW write
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 49.4241 seconds, 86.9 MB/s

RAW read
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 53.429 seconds, 80.4 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-32-128     4G           70404  40 31272  17           85745  18 339.4   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 20471  97 +++++ +++ 29674  99 32724  95 +++++ +++ 32626 100
raid5-32-128,4G,,,70404,40,31272,17,,,85745,18,339.4,1,16,20471,97,+++++,+++,29674,99,32724,95,+++++,+++,32626,100


Chunk size:  32
BM chunk size:  65536

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>

RAW write
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 51.9741 seconds, 82.6 MB/s

RAW read
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 53.7874 seconds, 79.8 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-32-64      4G           70446  46 31517  16           87237  18 357.4   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
raid5-32-64,4G,,,70446,46,31517,16,,,87237,18,357.4,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++


Chunk size:  32
BM chunk size:  32768

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 15/15 pages [60KB], 32768KB chunk

unused devices: <none>

RAW write
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 50.7563 seconds, 84.6 MB/s

RAW read
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 52.3644 seconds, 82.0 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-32-32      4G           68135  42 30593  16           82078  17 346.4   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ 31394  99
raid5-32-32,4G,,,68135,42,30593,16,,,82078,17,346.4,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,31394,99


Chunk size:  32
BM chunk size:  16384

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 30/30 pages [120KB], 16384KB chunk

unused devices: <none>

RAW write
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 56.4179 seconds, 76.1 MB/s

RAW read
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 55.0037 seconds, 78.1 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-32-16      4G           62977  41 29604  15           82337  17 338.5   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23071  98 +++++ +++ 26835  99 26639  84 +++++ +++ +++++ +++
raid5-32-16,4G,,,62977,41,29604,15,,,82337,17,338.5,1,16,23071,98,+++++,+++,26835,99,26639,84,+++++,+++,+++++,+++


Chunk size:  32
BM chunk size:  8192

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287296 blocks super 1.0 level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 59/59 pages [236KB], 8192KB chunk

unused devices: <none>

RAW write
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 61.8923 seconds, 69.4 MB/s

RAW read
43690+0 records in
43690+0 records out
4294901760 bytes (4.3 GB) copied, 52.9937 seconds, 81.0 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-32-8       4G           59055  39 30465  16           90835  19 345.2   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22809  94 +++++ +++ 26384 100 +++++ +++ +++++ +++ +++++ +++
raid5-32-8,4G,,,59055,39,30465,16,,,90835,19,345.2,1,16,22809,94,+++++,+++,26384,100,+++++,+++,+++++,+++,+++++,+++


Chunk size:  16 KB
BM chunk size:  524288 KB

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287344 blocks super 1.0 level 5, 16k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 1/1 pages [4KB], 524288KB chunk

unused devices: <none>

RAW write
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 46.6759 seconds, 92.0 MB/s

RAW read
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 60.7322 seconds, 70.7 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-16-512     4G           76616  50 30053  18           70686  17 331.8   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22769  97 +++++ +++ 26120 100 +++++ +++ +++++ +++ 28388  99
raid5-16-512,4G,,,76616,50,30053,18,,,70686,17,331.8,1,16,22769,97,+++++,+++,26120,100,+++++,+++,+++++,+++,28388,99


Chunk size:  16 KB
BM chunk size:  262144 KB

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287344 blocks super 1.0 level 5, 16k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 2/2 pages [8KB], 262144KB chunk

unused devices: <none>

RAW write
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 47.9805 seconds, 89.5 MB/s

RAW read
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 61.9545 seconds, 69.3 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-16-256     4G           71308  46 28986  17           67698  17 331.2   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 18085  96 +++++ +++ 21984 100 29970  95 +++++ +++ +++++ +++
raid5-16-256,4G,,,71308,46,28986,17,,,67698,17,331.2,1,16,18085,96,+++++,+++,21984,100,29970,95,+++++,+++,+++++,+++


Chunk size:  16 KB
BM chunk size:  131072 KB

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287344 blocks super 1.0 level 5, 16k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 4/4 pages [16KB], 131072KB chunk

unused devices: <none>

RAW write
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 49.8801 seconds, 86.1 MB/s

RAW read
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 62.1272 seconds, 69.1 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-16-128     4G           71435  45 28882  17           68398  17 321.4   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22292  95 +++++ +++ 24351  99 +++++ +++ +++++ +++ 28510  73
raid5-16-128,4G,,,71435,45,28882,17,,,68398,17,321.4,0,16,22292,95,+++++,+++,24351,99,+++++,+++,+++++,+++,28510,73


Chunk size:  16 KB
BM chunk size:  65536 KB

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287344 blocks super 1.0 level 5, 16k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>

RAW write
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 49.5794 seconds, 86.6 MB/s

RAW read
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 63.6062 seconds, 67.5 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-16-64      4G           70578  43 28218  17           67001  17 311.6   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22457  97 +++++ +++ 24748  99 +++++ +++ +++++ +++ 27224  94
raid5-16-64,4G,,,70578,43,28218,17,,,67001,17,311.6,0,16,22457,97,+++++,+++,24748,99,+++++,+++,+++++,+++,27224,94


Chunk size:  16 KB
BM chunk size:  32768 KB

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287344 blocks super 1.0 level 5, 16k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 15/15 pages [60KB], 32768KB chunk

unused devices: <none>

RAW write
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 52.1215 seconds, 82.4 MB/s

RAW read
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 63.324 seconds, 67.8 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-16-32      4G           68027  44 28480  17           68451  17 316.1   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 24408  91 +++++ +++ +++++ +++ 23152  87 +++++ +++ 23775  87
raid5-16-32,4G,,,68027,44,28480,17,,,68451,17,316.1,1,16,24408,91,+++++,+++,+++++,+++,23152,87,+++++,+++,23775,87


Chunk size:  16 KB
BM chunk size:  16384 KB

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287344 blocks super 1.0 level 5, 16k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 30/30 pages [120KB], 16384KB chunk

unused devices: <none>

RAW write
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 55.5958 seconds, 77.3 MB/s

RAW read
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 64.7869 seconds, 66.3 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-16-16      4G           63479  41 27328  16           64311  15 309.8   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 31516  99 +++++ +++ 25676 100 21536  99 +++++ +++ 24351  86
raid5-16-16,4G,,,63479,41,27328,16,,,64311,15,309.8,0,16,31516,99,+++++,+++,25676,100,21536,99,+++++,+++,24351,86


Chunk size:  16 KB
BM chunk size:  8192 KB

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active(auto-read-only) raid5 sde[3] sdd[2] sdc[1] sdb[0]
      2930287344 blocks super 1.0 level 5, 16k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 59/59 pages [236KB], 8192KB chunk

unused devices: <none>

RAW write
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 62.0137 seconds, 69.3 MB/s

RAW read
87381+0 records in
87381+0 records out
4294950912 bytes (4.3 GB) copied, 66.3221 seconds, 64.8 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-16-8       4G           58136  40 26564  16           65783  16 307.7   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 20129  96 +++++ +++ 26316  99 26425  92 +++++ +++ 25398  99
raid5-16-8,4G,,,58136,40,26564,16,,,65783,16,307.7,0,16,20129,96,+++++,+++,26316,99,26425,92,+++++,+++,25398,99

[-- Attachment #4: 2.test --]
[-- Type: application/octet-stream, Size: 39636 bytes --]
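
[Note: 2.test repeats the same measurements on small 12 GB arrays so
each chunk / bitmap-chunk combination can be recreated and fully
resynced quickly. The script itself isn't reproduced here; each
iteration presumably amounts to something like the following (sizes
taken from the output below, exact commands assumed):

  mdadm --stop /dev/md0
  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        --chunk=1024 --bitmap=internal --bitmap-chunk=524288 \
        --size=4194304 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  mdadm --wait /dev/md0    # "Wait for sync ..."
  cat /proc/mdstat
  dd if=/dev/zero of=/dev/md0 bs=3M count=1365    # RAW write, full stripe
  dd if=/dev/md0 of=/dev/null bs=3M count=1365    # RAW read
  # then mkfs + bonnie++ as above
]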



Chunk size:  1024 KB
BM chunk size:  512 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 524288KB chunk

unused devices: <none>

RAW write
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 40.5786 seconds, 106 MB/s

RAW read
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 19.6135 seconds, 219 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-1024-512   4G           95498  47 55099  31           181800  50 356.3   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ 20313  81 29348  99 +++++ +++ 25667  99
raid5-1024-512,4G,,,95498,47,55099,31,,,181800,50,356.3,1,16,+++++,+++,+++++,+++,20313,81,29348,99,+++++,+++,25667,99


Chunk size:  1024 KB
BM chunk size:  256 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 262144KB chunk

unused devices: <none>

RAW write
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 40.9307 seconds, 105 MB/s

RAW read
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 19.5909 seconds, 219 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-1024-256   4G           92692  49 56209  32           182883  52 361.2   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ 25816  99 +++++ +++ +++++ +++ +++++ +++
raid5-1024-256,4G,,,92692,49,56209,32,,,182883,52,361.2,1,16,+++++,+++,+++++,+++,25816,99,+++++,+++,+++++,+++,+++++,+++


Chunk size:  1024 KB
BM chunk size:  128 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 131072KB chunk

unused devices: <none>

RAW write
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 41.3454 seconds, 104 MB/s

RAW read
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 19.6736 seconds, 218 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-1024-128   4G           89952  43 55755  32           183070  52 356.1   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ 31179  99 +++++ +++ 27462  99
raid5-1024-128,4G,,,89952,43,55755,32,,,183070,52,356.1,0,16,+++++,+++,+++++,+++,+++++,+++,31179,99,+++++,+++,27462,99


Chunk size:  1024 KB
BM chunk size:  64 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

RAW write
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 42.9143 seconds, 100 MB/s

RAW read
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 19.5113 seconds, 220 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-1024-64    4G           87404  45 54147  32           180478  50 356.8   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ 20382  80 19318  93 +++++ +++ 25119  99
raid5-1024-64,4G,,,87404,45,54147,32,,,180478,50,356.8,1,16,+++++,+++,+++++,+++,20382,80,19318,93,+++++,+++,25119,99


Chunk size:  1024 KB
BM chunk size:  32 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 32768KB chunk

unused devices: <none>

RAW write
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 45.4728 seconds, 94.4 MB/s

RAW read
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 19.9099 seconds, 216 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-1024-32    4G           85629  46 52402  30           191734  52 356.8   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 17969  89 +++++ +++ 29295 100 29619  98 +++++ +++ +++++ +++
raid5-1024-32,4G,,,85629,46,52402,30,,,191734,52,356.8,0,16,17969,89,+++++,+++,29295,100,29619,98,+++++,+++,+++++,+++


Chunk size:  1024 KB
BM chunk size:  16 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 16384KB chunk

unused devices: <none>

RAW write
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 47.0853 seconds, 91.2 MB/s

RAW read
1365+0 records in
1365+0 records out
4293918720 bytes (4.3 GB) copied, 19.5459 seconds, 220 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-1024-16    4G           73640  43 51536  29           182022  50 351.7   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
raid5-1024-16,4G,,,73640,43,51536,29,,,182022,50,351.7,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++


Chunk size:  512 KB
BM chunk size:  512 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 524288KB chunk

unused devices: <none>

RAW write
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 39.5976 seconds, 108 MB/s

RAW read
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 19.7725 seconds, 217 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-512-512    4G           88372  45 53751  28           191042  46 373.0   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22820  98 +++++ +++ 24124  99 +++++ +++ +++++ +++ 28290  99
raid5-512-512,4G,,,88372,45,53751,28,,,191042,46,373.0,1,16,22820,98,+++++,+++,24124,99,+++++,+++,+++++,+++,28290,99


Chunk size:  512 KB
BM chunk size:  256 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 262144KB chunk

unused devices: <none>

RAW write
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 40.5155 seconds, 106 MB/s

RAW read
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 19.6669 seconds, 218 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-512-256    4G           90175  41 54175  27           185666  45 372.0   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 19880  95 +++++ +++ 30023 100 28176  96 +++++ +++ 29616  72
raid5-512-256,4G,,,90175,41,54175,27,,,185666,45,372.0,1,16,19880,95,+++++,+++,30023,100,28176,96,+++++,+++,29616,72


Chunk size:  512 KB
BM chunk size:  128 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 131072KB chunk

unused devices: <none>

RAW write
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 40.9181 seconds, 105 MB/s

RAW read
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 19.742 seconds, 218 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-512-128    4G           87302  40 53847  27           189503  45 377.2   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 25008 100 +++++ +++ 26976 100 31679  91 +++++ +++ +++++ +++
raid5-512-128,4G,,,87302,40,53847,27,,,189503,45,377.2,1,16,25008,100,+++++,+++,26976,100,31679,91,+++++,+++,+++++,+++


Chunk size:  512 KB
BM chunk size:  64 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

RAW write
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 42.6449 seconds, 101 MB/s

RAW read
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 19.6633 seconds, 218 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-512-64     4G           84251  43 53430  28           190712  48 376.8   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22908 100 +++++ +++ 30805 100 +++++ +++ +++++ +++ 21364  77
raid5-512-64,4G,,,84251,43,53430,28,,,190712,48,376.8,1,16,22908,100,+++++,+++,30805,100,+++++,+++,+++++,+++,21364,77


Chunk size:  512 KB
BM chunk size:  32 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 32768KB chunk

unused devices: <none>

RAW write
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 44.6483 seconds, 96.2 MB/s

RAW read
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 20.0377 seconds, 214 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-512-32     4G           82500  43 51850  27           186544  46 364.4   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 24322  97 +++++ +++ 31309 100 +++++ +++ +++++ +++ +++++ +++
raid5-512-32,4G,,,82500,43,51850,27,,,186544,46,364.4,1,16,24322,97,+++++,+++,31309,100,+++++,+++,+++++,+++,+++++,+++


Chunk size:  512 KB
BM chunk size:  16 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 16384KB chunk

unused devices: <none>

RAW write
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 46.7281 seconds, 91.9 MB/s

RAW read
2730+0 records in
2730+0 records out
4293918720 bytes (4.3 GB) copied, 19.689 seconds, 218 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-512-16     4G           75349  40 50679  26           192831  47 373.4   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22475  96 +++++ +++ 32572 100 +++++ +++ +++++ +++ 30313  99
raid5-512-16,4G,,,75349,40,50679,26,,,192831,47,373.4,1,16,22475,96,+++++,+++,32572,100,+++++,+++,+++++,+++,30313,99


Chunk size:  256 KB
BM chunk size:  512 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 524288KB chunk

unused devices: <none>

RAW write
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 39.8064 seconds, 108 MB/s

RAW read
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 20.5954 seconds, 209 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-256-512    4G           93154  43 51789  25           180362  36 373.1   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22590  98 +++++ +++ 25852 100 +++++ +++ +++++ +++ +++++ +++
raid5-256-512,4G,,,93154,43,51789,25,,,180362,36,373.1,1,16,22590,98,+++++,+++,25852,100,+++++,+++,+++++,+++,+++++,+++


Chunk size:  256 KB
BM chunk size:  256 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 262144KB chunk

unused devices: <none>

RAW write
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 40.0364 seconds, 107 MB/s

RAW read
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 20.6355 seconds, 208 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-256-256    4G           87925  45 50103  25           181072  37 368.9   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22094  97 +++++ +++ 25312 100 22751  94 +++++ +++ 25841  99
raid5-256-256,4G,,,87925,45,50103,25,,,181072,37,368.9,1,16,22094,97,+++++,+++,25312,100,22751,94,+++++,+++,25841,99


Chunk size:  256 KB
BM chunk size:  128 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 131072KB chunk

unused devices: <none>

RAW write
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 41.1222 seconds, 104 MB/s

RAW read
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 20.4957 seconds, 210 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-256-128    4G           85730  43 51644  26           184493  36 372.7   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ 25045  99 +++++ +++ +++++ +++ 22390  71
raid5-256-128,4G,,,85730,43,51644,26,,,184493,36,372.7,1,16,+++++,+++,+++++,+++,25045,99,+++++,+++,+++++,+++,22390,71


Chunk size:  256 KB
BM chunk size:  64 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

RAW write
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 42.5863 seconds, 101 MB/s

RAW read
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 21.175 seconds, 203 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-256-64     4G           89552  45 50419  25           183476  36 370.1   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23192  99 +++++ +++ 27721 100 +++++ +++ +++++ +++ +++++ +++
raid5-256-64,4G,,,89552,45,50419,25,,,183476,36,370.1,0,16,23192,99,+++++,+++,27721,100,+++++,+++,+++++,+++,+++++,+++


Chunk size:  256 KB
BM chunk size:  32 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 32768KB chunk

unused devices: <none>

RAW write
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 44.3219 seconds, 96.9 MB/s

RAW read
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 20.589 seconds, 209 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-256-32     4G           81179  44 49705  24           179691  36 370.8   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23072 100 +++++ +++ 25838 100 +++++ +++ +++++ +++ +++++ +++
raid5-256-32,4G,,,81179,44,49705,24,,,179691,36,370.8,0,16,23072,100,+++++,+++,25838,100,+++++,+++,+++++,+++,+++++,+++


Chunk size:  256 KB
BM chunk size:  16 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 16384KB chunk

unused devices: <none>

RAW write
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 46.7222 seconds, 91.9 MB/s

RAW read
5461+0 records in
5461+0 records out
4294705152 bytes (4.3 GB) copied, 20.906 seconds, 205 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-256-16     4G           72610  42 48187  24           183348  35 364.1   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23412  99 +++++ +++ 24615  99 29517  84 +++++ +++ +++++ +++
raid5-256-16,4G,,,72610,42,48187,24,,,183348,35,364.1,1,16,23412,99,+++++,+++,24615,99,29517,84,+++++,+++,+++++,+++


Chunk size:  128 KB
BM chunk size:  512 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 524288KB chunk

unused devices: <none>

RAW write
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 39.8155 seconds, 108 MB/s

RAW read
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 25.1119 seconds, 171 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-128-512    4G           97043  46 47814  24           172121  32 375.2   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23771  98 +++++ +++ 32280  99 +++++ +++ +++++ +++ +++++ +++
raid5-128-512,4G,,,97043,46,47814,24,,,172121,32,375.2,0,16,23771,98,+++++,+++,32280,99,+++++,+++,+++++,+++,+++++,+++


Chunk size:  128 KB
BM chunk size:  256 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 262144KB chunk

unused devices: <none>

RAW write
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 40.2179 seconds, 107 MB/s

RAW read
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 22.6821 seconds, 189 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-128-256    4G           93185  47 46657  23           173416  33 367.4   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 18597  96 +++++ +++ 17321  79 20444  95 +++++ +++ 29672  99
raid5-128-256,4G,,,93185,47,46657,23,,,173416,33,367.4,1,16,18597,96,+++++,+++,17321,79,20444,95,+++++,+++,29672,99


Chunk size:  128 KB
BM chunk size:  128 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 131072KB chunk

unused devices: <none>

RAW write
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 40.9995 seconds, 105 MB/s

RAW read
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 24.2446 seconds, 177 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-128-128    4G           91055  45 46848  22           173601  32 369.4   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 20715  99 +++++ +++ 22151  79 +++++ +++ +++++ +++ +++++ +++
raid5-128-128,4G,,,91055,45,46848,22,,,173601,32,369.4,1,16,20715,99,+++++,+++,22151,79,+++++,+++,+++++,+++,+++++,+++


Chunk size:  128 KB
BM chunk size:  64 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

RAW write
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 42.5521 seconds, 101 MB/s

RAW read
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 24.3978 seconds, 176 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-128-64     4G           88929  45 46437  22           175345  33 367.5   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 20554  97 +++++ +++ 25292  79 +++++ +++ +++++ +++ +++++ +++
raid5-128-64,4G,,,88929,45,46437,22,,,175345,33,367.5,0,16,20554,97,+++++,+++,25292,79,+++++,+++,+++++,+++,+++++,+++


Chunk size:  128 KB
BM chunk size:  32 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 32768KB chunk

unused devices: <none>

RAW write
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 44.5575 seconds, 96.4 MB/s

RAW read
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 23.097 seconds, 186 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-128-32     4G           81481  43 46331  22           172024  32 366.2   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22892  97 +++++ +++ 20822  85 23391  99 +++++ +++ 26530  99
raid5-128-32,4G,,,81481,43,46331,22,,,172024,32,366.2,1,16,22892,97,+++++,+++,20822,85,23391,99,+++++,+++,26530,99


Chunk size:  128 KB
BM chunk size:  16 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 16384KB chunk

unused devices: <none>

RAW write
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 46.4805 seconds, 92.4 MB/s

RAW read
10922+0 records in
10922+0 records out
4294705152 bytes (4.3 GB) copied, 24.6012 seconds, 175 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-128-16     4G           79314  46 44287  21           171650  32 369.1   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23251  99 +++++ +++ 26461 100 26709  91 +++++ +++ +++++ +++
raid5-128-16,4G,,,79314,46,44287,21,,,171650,32,369.1,0,16,23251,99,+++++,+++,26461,100,26709,91,+++++,+++,+++++,+++


Chunk size:  64 KB
BM chunk size:  512 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 524288KB chunk

unused devices: <none>

RAW write
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 39.7231 seconds, 108 MB/s

RAW read
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 32.1528 seconds, 134 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-64-512     4G           92985  46 42132  21           156068  29 371.4   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 21908  96 +++++ +++ 30917 100 29271  97 +++++ +++ +++++ +++
raid5-64-512,4G,,,92985,46,42132,21,,,156068,29,371.4,1,16,21908,96,+++++,+++,30917,100,29271,97,+++++,+++,+++++,+++


Chunk size:  64 KB
BM chunk size:  256 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 262144KB chunk

unused devices: <none>

RAW write
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 40.2671 seconds, 107 MB/s

RAW read
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 32.5262 seconds, 132 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-64-256     4G           98544  49 40969  21           155997  30 366.8   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 21010  99 +++++ +++ 26723  99 17128  90 +++++ +++ 24930  99
raid5-64-256,4G,,,98544,49,40969,21,,,155997,30,366.8,1,16,21010,99,+++++,+++,26723,99,17128,90,+++++,+++,24930,99


Chunk size:  64 KB
BM chunk size:  128 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 131072KB chunk

unused devices: <none>

RAW write
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 41.0159 seconds, 105 MB/s

RAW read
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 32.173 seconds, 133 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-64-128     4G           100053  48 39885  21           159066  29 361.3   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 18001  94 +++++ +++ 24494 100 18526  95 +++++ +++ 24797  99
raid5-64-128,4G,,,100053,48,39885,21,,,159066,29,361.3,1,16,18001,94,+++++,+++,24494,100,18526,95,+++++,+++,24797,99


Chunk size:  64 KB
BM chunk size:  64 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

RAW write
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 42.2257 seconds, 102 MB/s

RAW read
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 31.0066 seconds, 139 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-64-64      4G           87611  42 42815  20           153014  27 367.3   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22569  91 +++++ +++ 27236  99 +++++ +++ +++++ +++ 28324  99
raid5-64-64,4G,,,87611,42,42815,20,,,153014,27,367.3,0,16,22569,91,+++++,+++,27236,99,+++++,+++,+++++,+++,28324,99


Chunk size:  64 KB
BM chunk size:  32 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 32768KB chunk

unused devices: <none>

RAW write
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 44.4222 seconds, 96.7 MB/s

RAW read
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 31.5176 seconds, 136 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-64-32      4G           86426  45 41501  20           156154  28 367.1   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23451 100 +++++ +++ 26292 100 25721  90 +++++ +++ 28475 100
raid5-64-32,4G,,,86426,45,41501,20,,,156154,28,367.1,0,16,23451,100,+++++,+++,26292,100,25721,90,+++++,+++,28475,100


Chunk size:  64 KB
BM chunk size:  16 MB

Wait for sync ...
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb[0]
      12582912 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/1 pages [0KB], 16384KB chunk

unused devices: <none>

RAW write
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 46.6332 seconds, 92.1 MB/s

RAW read
21845+0 records in
21845+0 records out
4294901760 bytes (4.3 GB) copied, 31.9816 seconds, 134 MB/s

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5-64-16      4G           75749  45 39776  20           165845  30 359.1   1
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ 21195  85
raid5-64-16,4G,,,75749,45,39776,20,,,165845,30,359.1,1,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,21195,85

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-04 16:54                 ` Christian Pernegger
@ 2008-03-05  6:38                   ` Christian Pernegger
  2008-03-10 14:03                     ` Christian Pernegger
  0 siblings, 1 reply; 19+ messages in thread
From: Christian Pernegger @ 2008-03-05  6:38 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-ide

Added the dm-crypt layer back again, just to see what it would do:
- the responsiveness problem is back
- performance is around the level of a single disk, if that

Played around some more, and it seems the system just reacts very
badly to certain I/O loads. Filesystem creation or a bonnie++ run on a
resyncing RAID can apparently trigger the same symptoms (even without
dm-crypt), though not in all configurations.

Just for kicks I tried raid on dm-crypt, i.e. building the array on
top of the encrypted devices (see the sketch below); that ordering
doesn't have the responsiveness problem but is even slower.
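
[For clarity, the two stackings differ only in ordering; roughly
(device names assumed, not the exact commands used):

  # dm-crypt on md: one encrypted volume on top of the array
  cryptsetup luksFormat /dev/md0
  cryptsetup luksOpen /dev/md0 storage

  # md on dm-crypt ("raid on dm-crypt"): array built from encrypted disks
  for d in sdb sdc sdd sde; do
      cryptsetup luksFormat /dev/$d
      cryptsetup luksOpen /dev/$d crypt-$d
  done
  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/mapper/crypt-sd[b-e]
]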

Back to square one.

C.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: Setting up md-raid5: observations, errors, questions
  2008-03-05  6:38                   ` Christian Pernegger
@ 2008-03-10 14:03                     ` Christian Pernegger
  0 siblings, 0 replies; 19+ messages in thread
From: Christian Pernegger @ 2008-03-10 14:03 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-ide

Dropping linux-ide, as the HSM violation error never reproduced even a
second time and I have since switched controllers from the Promise
SATA300 TX4s (sata_promise) to Dawicontrol 4320 RAIDs (SiI 3124 chips,
sata_sil24 driver).

BTW the latter can have its fakeraid firmware disabled via a jumper,
which also skips the long option ROM / init sequences during boot.
Doing streaming sequential reads or writes on all four disk block
devices in parallel I get close to 300 MB/s in both cases, which is
the limit of the disk and controller hardware. Great driver!
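
[Something along these lines, one dd per disk run concurrently:

  for d in sdb sdc sdd sde; do
      dd if=/dev/$d of=/dev/null bs=1M count=4096 &
  done
  wait

For the write case, swap if= and of= (which of course destroys the
data on the disks).]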

Before it starts sounding too much like spam - I've only been pounding
the setup for a week, so no idea if it's really stable. Also I haven't
tested for data corruption yet :)
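
[A simple smoke test would be to write a known pattern, drop the page
cache so the read-back actually hits the disks, and compare checksums;
mount point assumed:

  dd if=/dev/urandom of=/mnt/storage/pattern bs=1M count=1024
  md5sum /mnt/storage/pattern > /tmp/pattern.md5
  echo 3 > /proc/sys/vm/drop_caches
  md5sum -c /tmp/pattern.md5
]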

Cheers,

C.

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread

Thread overview: 19+ messages
2008-03-02 12:23 Setting up md-raid5: observations, errors, questions Christian Pernegger
2008-03-02 12:41 ` Justin Piszcz
2008-03-02 12:56   ` Christian Pernegger
2008-03-02 13:03     ` Justin Piszcz
2008-03-02 13:24       ` Christian Pernegger
2008-03-03 17:59         ` Bill Davidsen
2008-03-03 20:19           ` Christian Pernegger
2008-03-02 15:20 ` Michael Tokarev
2008-03-02 16:32   ` Christian Pernegger
2008-03-02 18:33     ` Michael Tokarev
2008-03-02 21:19       ` Christian Pernegger
2008-03-02 21:56         ` Michael Tokarev
2008-03-03  0:17           ` Christian Pernegger
2008-03-03  2:58             ` Michael Tokarev
2008-03-03  8:38               ` Christian Pernegger
2008-03-04 16:54                 ` Christian Pernegger
2008-03-05  6:38                   ` Christian Pernegger
2008-03-10 14:03                     ` Christian Pernegger
2008-03-02 18:53     ` Christian Pernegger
