* raid5 code ok with 2TB + ?
@ 2004-11-28 10:48 Stephan van Hienen
2004-11-28 10:57 ` Brad Campbell
0 siblings, 1 reply; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-28 10:48 UTC (permalink / raw)
To: linux-raid
in the past the raid5 code was 32-bit internally
and the limit was 2TB
is this fixed in 2.6.x ?
or is the 2TB limit still there ?
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: raid5 code ok with 2TB + ?
2004-11-28 10:48 raid5 code ok with 2TB + ? Stephan van Hienen
@ 2004-11-28 10:57 ` Brad Campbell
2004-11-28 11:19 ` Stephan van Hienen
2004-11-28 18:19 ` Guy
0 siblings, 2 replies; 25+ messages in thread
From: Brad Campbell @ 2004-11-28 10:57 UTC (permalink / raw)
To: Stephan van Hienen; +Cc: linux-raid
Stephan van Hienen wrote:
> in the past the raid5 code was 32bits internal
> and the limit was 2TB
>
> is this fixed in the 2.6.x ?
I hope so, I have been using it on a 2.2TB raid-5 for months now. (I have completely filled the
partition and run e2fsck on it monthly, so I would hope any glitches would have surfaced by now)
brad@srv:~$ cat /proc/mdstat
Personalities : [raid0] [raid5] [raid6]
md2 : active raid5 sdl[0] sdm[2] sdk[1]
488396800 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
md0 : active raid5 sda1[0] sdj1[9] sdi1[8] sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1]
2206003968 blocks level 5, 128k chunk, algorithm 0 [10/10] [UUUUUUUUUU]
--
Brad
/"\
Save the Forests \ / ASCII RIBBON CAMPAIGN
Burn a Greenie. X AGAINST HTML MAIL
/ \
* Re: raid5 code ok with 2TB + ?
2004-11-28 10:57 ` Brad Campbell
@ 2004-11-28 11:19 ` Stephan van Hienen
2004-11-28 11:28 ` Brad Campbell
2004-11-28 18:19 ` Guy
1 sibling, 1 reply; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-28 11:19 UTC (permalink / raw)
To: Brad Campbell; +Cc: linux-raid
On Sun, 28 Nov 2004, Brad Campbell wrote:
> I hope so, I have been using it on a 2.2TB raid-5 for months now. (I have
> completely filled the partition and e2fsck it monthly so I would hope any
> glitches should have surfaced by now)
ok, looks ok
when i tried 2.3TB last year it failed within a few days (data written
beyond 2TB ended up in the first part, 0-0.3TB)
btw which filesystem are you using ?
(and which stripesize?)
* Re: raid5 code ok with 2TB + ?
2004-11-28 11:19 ` Stephan van Hienen
@ 2004-11-28 11:28 ` Brad Campbell
0 siblings, 0 replies; 25+ messages in thread
From: Brad Campbell @ 2004-11-28 11:28 UTC (permalink / raw)
To: Stephan van Hienen; +Cc: linux-raid
Stephan van Hienen wrote:
> On Sun, 28 Nov 2004, Brad Campbell wrote:
>
>> I hope so, I have been using it on a 2.2TB raid-5 for months now. (I
>> have completely filled the partition and e2fsck it monthly so I would
>> hope any glitches should have surfaced by now)
>
>
> ok looks ok
> when i tried 2.3TB last year it failed within a few days (data written
> to the 2TB+ was written to the first part (0-0.3TB)
>
> btw which filesystem are you using ?
> (and which stripesize?)
Ext3 and ...
brad@srv:~$ sudo mdadm --misc --detail /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Sun May 2 18:02:14 2004
Raid Level : raid5
Array Size : 2206003968 (2103.81 GiB 2258.95 GB)
Device Size : 245111552 (233.76 GiB 250.99 GB)
Raid Devices : 10
Total Devices : 10
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sun Nov 28 06:44:04 2004
State : clean
Active Devices : 10
Working Devices : 10
Failed Devices : 0
Spare Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
UUID : 05cc3f43:de1ecfa4:83a51293:78015f1e
Events : 0.1031147
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/devfs/scsi/host0/bus0/target0/lun0/part1
1 8 17 1 active sync /dev/devfs/scsi/host1/bus0/target0/lun0/part1
2 8 33 2 active sync /dev/devfs/scsi/host2/bus0/target0/lun0/part1
3 8 49 3 active sync /dev/devfs/scsi/host3/bus0/target0/lun0/part1
4 8 65 4 active sync /dev/devfs/scsi/host4/bus0/target0/lun0/part1
5 8 81 5 active sync /dev/devfs/scsi/host5/bus0/target0/lun0/part1
6 8 97 6 active sync /dev/devfs/scsi/host6/bus0/target0/lun0/part1
7 8 113 7 active sync /dev/devfs/scsi/host7/bus0/target0/lun0/part1
8 8 129 8 active sync /dev/sdi1
9 8 145 9 active sync /dev/sdj1
--
Brad
* RE: raid5 code ok with 2TB + ?
2004-11-28 10:57 ` Brad Campbell
2004-11-28 11:19 ` Stephan van Hienen
@ 2004-11-28 18:19 ` Guy
2004-11-28 19:20 ` Brad Campbell
1 sibling, 1 reply; 25+ messages in thread
From: Guy @ 2004-11-28 18:19 UTC (permalink / raw)
To: 'Brad Campbell', 'Stephan van Hienen'; +Cc: linux-raid
Your 2.2T array is not as big as you think!
1TB = 2^40 bytes, not 1*10^12
Maybe, depending on whether you are buying disk drives or selling them! :)
But when related to the 2TB limit it is 2^40.
2206003968 blocks
Divide by 1024 gives you 2154300.75 meg
Divide by 1024 gives you 2103.8093 gig
Divide by 1024 gives you 2.0545 TB
So you are just over 2TB, by 58520320 blocks or 55.8 gig.
The only reason I am being exact is that you have not tested disk I/O beyond
2TB as much as you think. Once, someone else made a similar 2T claim; after
the math he was really below 2T.
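The arithmetic above can be sanity-checked in a few lines (a sketch; /proc/mdstat counts 1 KiB blocks, so the 2TB limit corresponds to 2^31 of them):

```python
# mdstat reports the array size in 1 KiB blocks; 2 TiB = 2^31 such blocks.
blocks = 2206003968              # from /proc/mdstat

mib = blocks / 1024              # 2154300.75 MiB
gib = mib / 1024                 # ~2103.8093 GiB
tib = gib / 1024                 # ~2.0545 TiB

limit_blocks = 2 ** 31           # the 2 TiB limit in 1 KiB blocks
over_blocks = blocks - limit_blocks
over_gib = over_blocks / 2 ** 20

print(over_blocks)               # 58520320 blocks past the 2 TiB mark
print(round(over_gib, 1))        # ~55.8 GiB, matching the figure above
```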
Guy
* Re: raid5 code ok with 2TB + ?
2004-11-28 18:19 ` Guy
@ 2004-11-28 19:20 ` Brad Campbell
2004-11-28 20:17 ` raid5 code ok with 2TB + ? NEGATIVE :( Stephan van Hienen
0 siblings, 1 reply; 25+ messages in thread
From: Brad Campbell @ 2004-11-28 19:20 UTC (permalink / raw)
To: Guy; +Cc: 'Stephan van Hienen', linux-raid
Guy wrote:
> Your 2.2T array is not as big as you think!
>
> 1TB = 2^40 not 1*10^12
> Maybe depending on if you are buying disk drives, or selling them! :)
> But when related to the 2TB limit it is 2^40.
>
> 2206003968 blocks
> Divide by 1024 gives you 2154300.75 meg
> Divide by 1024 gives you 2103.8093 Gig
> Divide by 1024 gives you 2.0545 TB
> So you are just over 2TB. by 58520320 blocks or 55.8 Gig.
>
> The only reason I am being exact is that you have not tested disk I/O beyond
> 2TB as much as you think. Once, someone else made a similar 2T claim, after
> the math he was really below 2T.
Fair call. Having said that, if the code wrapped at 2TB then I would have blown away the 1st 55.8
Gig of my partition, which would be enough to prove the code faulty :p)
--
Brad
* Re: raid5 code ok with 2TB + ? NEGATIVE :(
2004-11-28 19:20 ` Brad Campbell
@ 2004-11-28 20:17 ` Stephan van Hienen
2004-11-28 20:27 ` Stephan van Hienen
0 siblings, 1 reply; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-28 20:17 UTC (permalink / raw)
To: Brad Campbell; +Cc: Guy, linux-raid
just tried creating a raid5
it was supposed to be 2180 GB
but mdadm -D /dev/md0 reports 131.81 GiB
looks like it's still not supported ?
(kernel 2.6.9-ac11 and mdadm 1.8.0)
]# mdadm --create /dev/md0 -c 256 -l 5 -n 14 /dev/sdb1 /dev/sdc1 /dev/sdd1 \
     /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 \
     /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1
<..>
Continue creating array? y
mdadm: array /dev/md0 started.
[root@storage etc]# cat /proc/mdstat
Personalities : [raid0] [raid5]
md0 : active raid5 sdo1[14] sdn1[12] sdm1[11] sdl1[10] sdk1[9] sdj1[8] sdi1[7] sdh1[6] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
      2285700352 blocks level 5, 256k chunk, algorithm 2 [14/13] [UUUUUUUUUUUUU_]
      [>....................]  recovery =  0.0% (56064/175823104) finish=156.6min speed=18688K/sec
unused devices: <none>
[root@storage etc]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Sun Nov 28 21:15:32 2004
Raid Level : raid5
Array Size : 138216704 (131.81 GiB 141.53 GB)
Device Size : 175823104 (167.68 GiB 180.04 GB)
Raid Devices : 14
Total Devices : 14
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sun Nov 28 21:15:32 2004
State : clean, degraded, recovering
Active Devices : 13
Working Devices : 14
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 256K
Rebuild Status : 0% complete
UUID : 70c4df2b:2dc20e84:e555fa94:a57be873
Events : 0.4
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 65 3 active sync /dev/sde1
4 8 81 4 active sync /dev/sdf1
5 8 97 5 active sync /dev/sdg1
6 8 113 6 active sync /dev/sdh1
7 8 129 7 active sync /dev/sdi1
8 8 145 8 active sync /dev/sdj1
9 8 161 9 active sync /dev/sdk1
10 8 177 10 active sync /dev/sdl1
11 8 193 11 active sync /dev/sdm1
12 8 209 12 active sync /dev/sdn1
13 0 0 - removed
14 8 225 13 spare rebuilding /dev/sdo1
* Re: raid5 code ok with 2TB + ? NEGATIVE :(
2004-11-28 20:17 ` raid5 code ok with 2TB + ? NEGATIVE :( Stephan van Hienen
@ 2004-11-28 20:27 ` Stephan van Hienen
2004-11-28 20:45 ` Stephan van Hienen
0 siblings, 1 reply; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-28 20:27 UTC (permalink / raw)
To: Brad Campbell; +Cc: Guy, linux-raid
btw 'Large block devices' is enabled in kernel config
On Sun, 28 Nov 2004, Stephan van Hienen wrote:
> just tried creating a raid5
> it was supposed to be 2180 GB
> but mdadm -D /dev/md0 reports 131.81 GiB
> looks like it's still not supported ?
> (kernel 2.6.9-ac11 and mdadm 1.8.0)
>
kk
* Re: raid5 code ok with 2TB + ? NEGATIVE :(
2004-11-28 20:27 ` Stephan van Hienen
@ 2004-11-28 20:45 ` Stephan van Hienen
2004-11-28 23:34 ` raid5 code ok with 2TB + ? Stephan van Hienen
0 siblings, 1 reply; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-28 20:45 UTC (permalink / raw)
To: Brad Campbell; +Cc: Guy, linux-raid
On Sun, 28 Nov 2004, Stephan van Hienen wrote:
> btw 'Large block devices' is enabled in kernel config
argh, i copied it to the wrong place
so i was still running a non-LBD kernel
]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Sun Nov 28 21:44:33 2004
Raid Level : raid5
Array Size : 2285700352 (2179.81 GiB 2340.56 GB)
Device Size : 175823104 (167.68 GiB 180.04 GB)
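The bogus 131.81 GiB figure from the non-LBD attempt is consistent with the sector count being truncated to 32 bits; a quick check (a sketch, assuming the block layer counts 512-byte sectors):

```python
# Without CONFIG_LBD, device sizes are held in 32-bit sector counts
# (512-byte sectors), so the intended size wraps modulo 2^32 sectors.
intended_kib = 2285700352                # the 14-disk array, in 1 KiB blocks
intended_sectors = intended_kib * 2      # 1 KiB block = 2 sectors

truncated_sectors = intended_sectors % 2 ** 32
truncated_kib = truncated_sectors // 2
print(truncated_kib)                     # 138216704, the Array Size mdadm
                                         # reported on the non-LBD kernel
print(round(truncated_kib / 2 ** 20, 2)) # ~131.81 GiB
```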
now running a resync
after that i'll run badblocks on it to test if i can read/write to the
complete array
* Re: raid5 code ok with 2TB + ?
2004-11-28 20:45 ` Stephan van Hienen
@ 2004-11-28 23:34 ` Stephan van Hienen
2004-11-29 4:49 ` Guy
0 siblings, 1 reply; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-28 23:34 UTC (permalink / raw)
To: linux-raid
On Sun, 28 Nov 2004, Stephan van Hienen wrote:
> now running a resync
> after that i'll run badblocks on it to test if i can read/write to the
> complete array
resync finished without any problems
i wanted to run badblocks on /dev/md0 to check
but it looks like badblocks doesn't support 2TB+ :
]# badblocks -sw -p99 -c8192 /dev/md0
Writing pattern 0xaaaaaaaa: 126464/-2009266944
any other hints for testing the raid ?
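The negative total suggests badblocks keeps the block count in a signed 32-bit integer; a sketch of the wraparound (assuming the same 1 KiB block size the array reports):

```python
# badblocks printed "126464/-2009266944": the array's 2285700352 blocks
# overflow a signed 32-bit counter.
blocks = 2285700352
assert blocks >= 2 ** 31                 # too big for a signed 32-bit int

# reinterpret the low 32 bits as a signed value
wrapped = (blocks & 0xFFFFFFFF) - (2 ** 32 if blocks & 0x80000000 else 0)
print(wrapped)                           # -2009266944, matching the output
```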
* RE: raid5 code ok with 2TB + ?
2004-11-28 23:34 ` raid5 code ok with 2TB + ? Stephan van Hienen
@ 2004-11-29 4:49 ` Guy
2004-11-29 8:33 ` Stephan van Hienen
0 siblings, 1 reply; 25+ messages in thread
From: Guy @ 2004-11-29 4:49 UTC (permalink / raw)
To: 'Stephan van Hienen', linux-raid
I use dd to test disks and arrays. Give this a test. It may stop at the
2TB limit, not sure. My monster 14 disk array is only about 240G.
time dd if=/dev/md0 of=/dev/null bs=1024k
I don't think badblocks would detect a bad block on an array. If one did
occur, the drive would be failed and the array would continue. If your
re-sync finishes, that really does test your disks, since every block used by
md is read or written.
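If dd does read past the limit, one way to exercise the region above the 2 TiB boundary directly, rather than reading the whole array from the start, is to compute a skip offset; a sketch (the dd invocation in the comment is a hypothetical example, not from the thread):

```python
# With bs=1M, how many 1 MiB blocks to skip so dd starts reading at
# the 2 TiB boundary of the device.
bs = 1024 * 1024                 # 1 MiB
boundary = 2 * 2 ** 40           # 2 TiB in bytes
skip = boundary // bs
print(skip)                      # 2097152
# hypothetical: dd if=/dev/md0 of=/dev/null bs=1M skip=2097152
```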
Guy
* RE: raid5 code ok with 2TB + ?
2004-11-29 4:49 ` Guy
@ 2004-11-29 8:33 ` Stephan van Hienen
2004-11-29 8:51 ` raid5 slow Stephan van Hienen
0 siblings, 1 reply; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-29 8:33 UTC (permalink / raw)
To: Guy; +Cc: linux-raid
On Sun, 28 Nov 2004, Guy wrote:
> I use dd to test disks, and arrays. Give this a test. It may stop at the
> 2TB limit, not sure. My monster 14 disk array is only about 240G.
>
> time dd if=/dev/md0 of=/dev/null bs=1024k
>
]# dd if=/dev/zero of=testfile bs=8192
dd: writing `testfile': No space left on device
285696059+0 records in
285696058+0 records out
du doesn't understand the size :
]# du -h
132G .
ls understands it a bit (the total is incorrect, but the filesize is ok) :
]# ls -la
total 138084832
drwxr-xr-x 21 root root 4096 Nov 29 01:40 ..
drwxr-xr-x 2 root root 21 Nov 29 02:07 .
-rw-r--r-- 1 root root 2340422111232 Nov 29 07:39 testfile
]# ls -al -h
total 132G
drwxr-xr-x 2 root root 21 Nov 29 02:07 .
drwxr-xr-x 21 root root 4.0K Nov 29 01:40 ..
-rw-r--r-- 1 root root 2.1T Nov 29 07:39 testfile
no errors in dmesg, so it looks like it's ok
it's only a bit slow :
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
storage.a2000.nu 2G 26828  99 123983  73 58525  57 27336  99 171371  96 583.9   4
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1023  12 +++++ +++   920  11  1004  14 +++++ +++   722   9
storage.a2000.nu,2G,26828,99,123983,73,58525,57,27336,99,171371,96,583.9,4,16,1023,12,+++++,+++,920,11,1004,14,+++++,+++,722,9
compared to the old raid5 with 13 disks (with redhat 8 and kernel 2.4) :
with 13 disks : (rh 8 ext3)
Version  1.02c      ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
storage.a2000.nu 1G 17775 100 100912  99 70290  66 23648  99 257029  81 624.1   3
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1944  99 +++++ +++ +++++ +++  2042  98 +++++ +++  5689  97
storage.a2000.nu,1G,17775,100,100912,99,70290,66,23648,99,257029,81,624.1,3,16,1944,99,+++++,+++,+++++,+++,2042,98,+++++,+++,5689,97
* raid5 slow
2004-11-29 8:33 ` Stephan van Hienen
@ 2004-11-29 8:51 ` Stephan van Hienen
2004-11-29 17:13 ` Guy
2004-11-30 0:30 ` raid5 slow Neil Brown
0 siblings, 2 replies; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-29 8:51 UTC (permalink / raw)
To: Guy; +Cc: linux-raid
> it's only a bit slow :
>
> Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> storage.a2000.nu 2G 26828  99 123983  73 58525  57 27336  99 171371  96 583.9   4
>                     ------Sequential Create------ --------Random Create--------
>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16  1023  12 +++++ +++   920  11  1004  14 +++++ +++   722   9
> storage.a2000.nu,2G,26828,99,123983,73,58525,57,27336,99,171371,96,583.9,4,16,1023,12,+++++,+++,920,11,1004,14,+++++,+++,722,9
created a raid0 with the same disks :
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
storage.a2000.nu 2G 28435  99 219329  73 102949 51 29704  99 275323  62 639.5   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 10108  84 +++++ +++  8825  69  9877  88 +++++ +++  7546  66
storage.a2000.nu,2G,28435,99,219329,73,102949,51,29704,99,275323,62,639.5,2,16,10108,84,+++++,+++,8825,69,9877,88,+++++,+++,7546,66
shouldn't the raid5 read speed be about the same ?
now i get 275MB/sec with raid0 (14 disks)
171MB/sec with raid5 (14 disks)
while the old raid5 setup gave 257MB/sec with raid5 (13 disks)
* RE: raid5 slow
2004-11-29 8:51 ` raid5 slow Stephan van Hienen
@ 2004-11-29 17:13 ` Guy
2004-11-29 18:11 ` Stephan van Hienen
2004-11-30 0:30 ` raid5 slow Neil Brown
1 sibling, 1 reply; 25+ messages in thread
From: Guy @ 2004-11-29 17:13 UTC (permalink / raw)
To: 'Stephan van Hienen'; +Cc: linux-raid
Very odd, RAID5 reading should be about the same as RAID0.
And your old array confirms that.
My guess is RAID5 needs some help. Neil? Any ideas?
Is your stripe size the same on both?
Did you allow your array to re-sync before the tests?
Guy
* RE: raid5 slow
2004-11-29 17:13 ` Guy
@ 2004-11-29 18:11 ` Stephan van Hienen
2004-11-30 0:06 ` raid5 slow (looks like 2.6 problem) Stephan van Hienen
0 siblings, 1 reply; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-29 18:11 UTC (permalink / raw)
To: Guy; +Cc: linux-raid
On Mon, 29 Nov 2004, Guy wrote:
> Very odd, RAID5 reading should be about the same as RAID0.
> And your old array confirms that.
> My guess is RAID5 needs some help. Neil? Any ideas?
>
> Is your stripe size the same on both?
> Did you allow your array to re-sync before the tests?
stripesize is the same (128k)
and yes i did a resync
* RE: raid5 slow (looks like 2.6 problem)
2004-11-29 18:11 ` Stephan van Hienen
@ 2004-11-30 0:06 ` Stephan van Hienen
0 siblings, 0 replies; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-30 0:06 UTC (permalink / raw)
To: Guy; +Cc: linux-raid
just rebooted my system with an old 2.4.20 kernel (got from backup)
(2.4.20 13d r5 ext3)
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
storage.a2000.nu 2G 21931  99 96429  99 79370  73 28970  99 280316  84 675.7   3
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1444  71 +++++ +++ +++++ +++  2024  99 +++++ +++  5563  98
storage.a2000.nu,2G,21931,99,96429,99,79370,73,28970,99,280316,84,675.7,3,16,1444,71,+++++,+++,+++++,+++,2024,99,+++++,+++,5563,98
again rebooted to 2.6.9-ac11 :
(previous score was from xfs, so retested with ext3)
(2.6.9-ac11 13d r5 ext3)
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
storage.a2000.nu 2G 21711  99 99906  95 67559  79 24099  92 138763  72 606.4   3
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1470  79 +++++ +++ +++++ +++  1447  78 +++++ +++  4546  96
storage.a2000.nu,2G,21711,99,99906,95,67559,79,24099,92,138763,72,606.4,3,16,1470,79,+++++,+++,+++++,+++,1447,78,+++++,+++,4546,96
looks like something is wrong with the 2.6 code ?
i'm now compiling plain 2.6.9 (and maybe 2.6.8 after this?) to check the results
any ideas ?
* Re: raid5 slow
2004-11-29 8:51 ` raid5 slow Stephan van Hienen
2004-11-29 17:13 ` Guy
@ 2004-11-30 0:30 ` Neil Brown
2004-11-30 0:50 ` Stephan van Hienen
1 sibling, 1 reply; 25+ messages in thread
From: Neil Brown @ 2004-11-30 0:30 UTC (permalink / raw)
To: Stephan van Hienen; +Cc: Guy, linux-raid
On Monday November 29, raid@a2000.nu wrote:
>
> now i get 275MB/sec with raid0 (14 disks)
> 171MB/sec with raid5 (14 disks)
>
> while the old raid5 situation was 257MB/sec with raid5 (13 disks)
I presume you are only looking at 'read' speed here.
There is a read optimisation that I haven't implemented in 2.6 yet
that could account for about a 10% difference, but a 30% difference
definitely surprises me.
Normally raid5 will read into the stripe cache, and then copy data out
of the stripe cache and into the request buffer for reads.
The optimisation is to read directly into the request buffer in
situations where there are no degraded stripes or pending write requests.
This optimisation is implemented in 2.4 and gave me about a 10% speedup
on sequential reads.  It's substantially more complicated in 2.6...
I might do a bit of testing myself, but more results from others would
be helpful...
NeilBrown
* Re: raid5 slow
2004-11-30 0:30 ` raid5 slow Neil Brown
@ 2004-11-30 0:50 ` Stephan van Hienen
2004-11-30 19:37 ` Stephan van Hienen
0 siblings, 1 reply; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-30 0:50 UTC (permalink / raw)
To: Neil Brown; +Cc: Guy, linux-raid
On Tue, 30 Nov 2004, Neil Brown wrote:
> On Monday November 29, raid@a2000.nu wrote:
>>
>> now i get 275MB/sec with raid0 (14 disks)
>> 171MB/sec with raid5 (14 disks)
>>
>> while the old raid5 situation was 257MB/sec with raid5 (13 disks)
>
> I presume you are only looking at 'read' speed here.
yes, i'm looking at the read speed
just tested 2.6.10-rc2 :
142148 K/sec block read
this is 50% of the performance i get with the 2.4 kernel
(the 171MB/sec i got yesterday with 14 disks was also with xfs)
(xfs on 14d r5 gives 171371 read/ext3 on 14 r5 gives 132845 read)
(2.6.10-rc2 13d r5 ext3)
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
storage.a2000.nu 2G 24730  99 108265  88 59460  59 25759  91 142148  69 586.8   3
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1432  78 +++++ +++ +++++ +++  1456  77 +++++ +++  4908  97
storage.a2000.nu,2G,24730,99,108265,88,59460,59,25759,91,142148,69,586.8,3,16,1432,78,+++++,+++,+++++,+++,1456,77,+++++,+++,4908,97
>
> There is a read optimisation that I haven't implemented in 2.6 yet
> that could account for about a 10% difference, but a 30% difference
> definitely surprises me.
>
> Normally raid5 will read into the stripe cache, and then copy data out
> of the stripe cache and into the request buffer for reads.
> The optimisation is to read directly into the request buffer in
> situations where no degraded stripes or pending write requests. This
> optimisation is implemented in 2.4 and gave me about a 10% speedup on
> sequential reads. Its substantially more complicated in 2.6...
>
> I might do a bit of testing myself, but more results from others would
> be helpful...
>
> NeilBrown
>
* Re: raid5 slow
2004-11-30 0:50 ` Stephan van Hienen
@ 2004-11-30 19:37 ` Stephan van Hienen
2004-11-30 23:26 ` Neil Brown
0 siblings, 1 reply; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-30 19:37 UTC (permalink / raw)
To: Neil Brown; +Cc: Guy, linux-raid
today i also ran a test with a 500GB size :
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
storage.a2000. 500G 23385  97 79328  69 59189  58 25015  91 122493  58  67.7  11
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1404  76 +++++ +++ +++++ +++  1347  75 +++++ +++  4868  97
storage.a2000.nu,500G,23385,97,79328,69,59189,58,25015,91,122493,58,67.7,11,16,1404,76,+++++,+++,+++++,+++,1347,75,+++++,+++,4868,97
it's really slow... :(
is there anything i can do ?
i want to fill the raid asap
for now i have 2 options :
- use 2.6 and hope someone is going to fix this ?
- use the 2.4 kernel, but with max 13 devices (there is no LBD support in 2.4?)
or is there an LBD patch so i can create a 2TB+ raid5 array ?
* Re: raid5 slow
2004-11-30 19:37 ` Stephan van Hienen
@ 2004-11-30 23:26 ` Neil Brown
2004-11-30 23:39 ` Stephan van Hienen
2004-11-30 23:55 ` Stephan van Hienen
0 siblings, 2 replies; 25+ messages in thread
From: Neil Brown @ 2004-11-30 23:26 UTC (permalink / raw)
To: Stephan van Hienen; +Cc: Neil Brown, Guy, linux-raid
On Tuesday November 30, raid@a2000.nu wrote:
> storage.a2000.nu,500G,23385,97,79328,69,59189,58,25015,91,122493,58,67.7,11,16,1404,76,+++++,+++,+++++,+++,1347,75,+++++,+++,4868,97
>
> it's really slow... :(
> is there anything i can do ?
> i want to fill the raid asap
>
> for now i have 2 options :
>
> -use the 2.6 and hope someone is going to fix this ?
>
> -use the 2.4 kernel but max 13 devices (there is no LBD support in 2.4?)
> or is there a LBD patch so i can create a 2TB+ raid5 array ?
>
>
I don't think there is LBD support for 2.4.
I haven't managed to duplicate the degree of slowdown that you report
- I only get about 10%.
However I found an old Email which also reported slow raid5 in 2.6 and
it noted that when the array is degraded, it goes faster ....
I just tried it and got most of the speed back when the array is
degraded!!!
Would you be able to test that option?
I will look into the code and see if I can figure out what is going
on.
NeilBrown
* Re: raid5 slow
2004-11-30 23:26 ` Neil Brown
@ 2004-11-30 23:39 ` Stephan van Hienen
2004-11-30 23:55 ` Stephan van Hienen
1 sibling, 0 replies; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-30 23:39 UTC (permalink / raw)
To: Neil Brown; +Cc: Guy, linux-raid
> I haven't managed to duplicate the degree of slowdown that you report
> - I only get about 10%.
>
> However I found an old Email which also reported slow raid5 in 2.6 and
> it noted that when the array is degraded, it goes faster ....
>
> I just tried it and got most of the speed back when the array is
> degraded!!!
> Would you be able to test that option?
2 tests :
(kernel 2.6.10-rc2 13d r5 ext2)
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
storage.a2000.nu 2G 25238  99 116958  98 67987  73 25702  91 142010  70 608.2   3
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1354  78 +++++ +++ +++++ +++  1262  74 +++++ +++  4895  95
storage.a2000.nu,2G,25238,99,116958,98,67987,73,25702,91,142010,70,608.2,3,16,1354,78,+++++,+++,+++++,+++,1262,74,+++++,+++,4895,95
]# mdadm --manage --fail /dev/md0 /dev/sdn1
mdadm: set /dev/sdn1 faulty in /dev/md0
]# mdadm --manage --remove /dev/md0 /dev/sdn1
mdadm: hot removed /dev/sdn1
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
storage.a2000.nu 2G 25458  99 107295  83 59079  60 26079  93 145775  77 490.2   3
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1423  83 +++++ +++ +++++ +++  1453  82 +++++ +++  4872 100
storage.a2000.nu,2G,25458,99,107295,83,59079,60,26079,93,145775,77,490.2,3,16,1423,83,+++++,+++,+++++,+++,1453,82,+++++,+++,4872,100
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: raid5 slow
2004-11-30 23:26 ` Neil Brown
2004-11-30 23:39 ` Stephan van Hienen
@ 2004-11-30 23:55 ` Stephan van Hienen
2004-12-01 19:23 ` raid5 slow (test on another system) Stephan van Hienen
1 sibling, 1 reply; 25+ messages in thread
From: Stephan van Hienen @ 2004-11-30 23:55 UTC (permalink / raw)
To: Neil Brown; +Cc: Guy, linux-raid
On Wed, 1 Dec 2004, Neil Brown wrote:
> I don't think there is LBD support for 2.4.
Found these:
https://www.gelato.unsw.edu.au/archives/lbd/2004-February/000039.html
http://www.gelato.unsw.edu.au/patches/
'Theoretically raid0 and linear should work; I'm certain raid1, raid4 and
raid5 will not work. In any case, RAID is limited to a maximum of 2TB
minus one block per member. '
I don't think I want that on my raid5...
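That 2TB figure is the natural ceiling of 32-bit sector arithmetic: device offsets were kept in 32-bit variables counting 512-byte sectors, so the largest addressable device is 2^32 sectors of 512 bytes. A quick illustration (an aside, not from the original mail):

```python
# The classic 2TB md/block-layer limit: a 32-bit variable counting
# 512-byte sectors tops out at 2**32 sectors.
SECTOR_SIZE = 512                  # bytes per sector
max_sectors = 2**32                # largest 32-bit sector count
limit_bytes = max_sectors * SECTOR_SIZE
print(limit_bytes // 2**40, "TiB")  # → 2 TiB
```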
* Re: raid5 slow (test on another system)
2004-11-30 23:55 ` Stephan van Hienen
@ 2004-12-01 19:23 ` Stephan van Hienen
2004-12-01 22:21 ` Stephan van Hienen
0 siblings, 1 reply; 25+ messages in thread
From: Stephan van Hienen @ 2004-12-01 19:23 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
Today I ran a test on another machine
(dual P3-733 with 4*9GB SCSI).
On this system the difference is around 10%;
maybe more or faster disks are needed to show a bigger difference.
Results:
2.6.10-rc2 4d r5 ext3
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
linuxtest.expl 496M 10364  99 39689  55 22184  36 11485  96 52559  46 632.0   4
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   512  99 +++++ +++ 32502  99   528  99 +++++ +++  1938  97
linuxtest.explainerdc.local,496M,10364,99,39689,55,22184,36,11485,96,52559,46,632.0,4,16,512,99,+++++,+++,32502,99,528,99,+++++,+++,1938,97
2.4.20 4d r5 ext3
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
linuxtest.expl 496M 10087 100 53021  63 20934  34 11746  98 58858  34 665.3   4
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   573  99 +++++ +++ 30886  99   592  99 +++++ +++  2264  96
linuxtest.explainerdc.local,496M,10087,100,53021,63,20934,34,11746,98,58858,34,665.3,4,16,573,99,+++++,+++,30886,99,592,99,+++++,+++,2264,96
* Re: raid5 slow (test on another system)
2004-12-01 19:23 ` raid5 slow (test on another system) Stephan van Hienen
@ 2004-12-01 22:21 ` Stephan van Hienen
2004-12-02 22:01 ` Stephan van Hienen
0 siblings, 1 reply; 25+ messages in thread
From: Stephan van Hienen @ 2004-12-01 22:21 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
And another system:
P4 2.4GHz with 8*120GB on a 3ware 7805.
The card sits on 32-bit PCI, so kernel 2.4 looks limited by the PCI bus:
(2.6.8 8d r5 ext3)
Version  1.93c      ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
storage          1G   507  98 64149  20 33240  11   984  97 95124  18 663.0   2
Latency               105ms    1039ms     879ms   29443us   63097us     870ms
Version  1.93c      ------Sequential Create------ --------Random Create--------
storage             -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1849  64 +++++ +++ 10044   9  1570  54 +++++ +++  4406  49
Latency             64064us     184us     188us     133ms     102us     293us
1.93c,1.93c,storage,1,1101940252,1G,,507,98,64149,20,33240,11,984,97,95124,18,663.0,2,16,,,,,1849,64,+++++,+++,10044,9,1570,54,+++++,+++,4406,49,105ms,1039ms,879ms,29443us,63097us,870ms,64064us,184us,188us,133ms,102us,293us
(2.4.28 8d r5 ext3)
Version  1.93c      ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
storage          1G    98  99 64449  44 42805  21   742  98 113930 29 732.5   4
Latency               129ms     809ms     819ms   23221us     139ms     754ms
Version  1.93c      ------Sequential Create------ --------Random Create--------
storage             -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1526  61 +++++ +++  8337  20  1940  78 +++++ +++  2441  36
Latency               175ms      39us      83us   78763us      20us   77558us
1.93c,1.93c,storage,1,1101940608,1G,,98,99,64449,44,42805,21,742,98,113930,29,732.5,4,16,,,,,1526,61,+++++,+++,8337,20,1940,78,+++++,+++,2441,36,129ms,809ms,819ms,23221us,139ms,754ms,175ms,39us,83us,78763us,20us,77558us
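A rough bandwidth budget backs up the PCI-bus suspicion: conventional 32-bit/33MHz PCI moves at most 4 bytes per clock, and the 2.4.28 block-read figure above already sits near that ceiling. (A back-of-envelope aside, not part of the original mail; bonnie's K/sec is taken as KiB/s.)

```python
# Theoretical peak of conventional 32-bit / 33 MHz PCI.
clock_hz = 33_000_000          # PCI clock
width_bytes = 4                # 32-bit data path = 4 bytes per transfer
peak = clock_hz * width_bytes  # 132,000,000 B/s
measured = 113930 * 1024       # 2.4.28 sequential block read, KiB/s -> B/s
print(f"bus peak {peak / 1e6:.0f} MB/s; "
      f"measured read is {measured / peak:.0%} of peak")
```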
* Re: raid5 slow (test on another system)
2004-12-01 22:21 ` Stephan van Hienen
@ 2004-12-02 22:01 ` Stephan van Hienen
0 siblings, 0 replies; 25+ messages in thread
From: Stephan van Hienen @ 2004-12-02 22:01 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
And another system:
P4 2.8GHz with 2*300GB (onboard IDE ports).
I created a degraded raid5;
there is about a 25% difference in read performance:
2.4.20 :
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
gx270.a2000.nu   1G 29724  79 41447  15 17575   6 33146  80 55494  12 267.6   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  2539  98 +++++ +++ +++++ +++  2523  98 +++++ +++  6307  96
gx270.a2000.nu,1G,29724,79,41447,15,17575,6,33146,80,55494,12,267.6,0,16,2539,98,+++++,+++,+++++,+++,2523,98,+++++,+++,6307,96
2.6.9-ac11 :
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
gx270.a2000.nu   1G 21190  52 37804   8 10092   2 34726  80 68587   9 263.1   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  3305  99 +++++ +++ +++++ +++  3372 100 +++++ +++ 10428  98
gx270.a2000.nu,1G,21190,52,37804,8,10092,2,34726,80,68587,9,263.1,0,16,3305,99,+++++,+++,+++++,+++,3372,100,+++++,+++,10428,98
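The quoted 25% read difference can be sanity-checked from the sequential-input Block columns of the two runs (a quick aside, not from the original mail):

```python
# Block sequential-input rates (K/sec) from the two bonnie++ runs above.
read_2_4 = 55494     # kernel 2.4.20
read_2_6 = 68587     # kernel 2.6.9-ac11
diff_pct = (read_2_6 - read_2_4) / read_2_4 * 100
print(f"2.6 reads {diff_pct:.1f}% faster on this degraded raid5")
```

This comes out near 24%, in line with the rough 25% quoted.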
end of thread, other threads:[~2004-12-02 22:01 UTC | newest]
Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2004-11-28 10:48 raid5 code ok with 2TB + ? Stephan van Hienen
2004-11-28 10:57 ` Brad Campbell
2004-11-28 11:19 ` Stephan van Hienen
2004-11-28 11:28 ` Brad Campbell
2004-11-28 18:19 ` Guy
2004-11-28 19:20 ` Brad Campbell
2004-11-28 20:17 ` raid5 code ok with 2TB + ? NEGATIVE :( Stephan van Hienen
2004-11-28 20:27 ` Stephan van Hienen
2004-11-28 20:45 ` Stephan van Hienen
2004-11-28 23:34 ` raid5 code ok with 2TB + ? Stephan van Hienen
2004-11-29 4:49 ` Guy
2004-11-29 8:33 ` Stephan van Hienen
2004-11-29 8:51 ` raid5 slow Stephan van Hienen
2004-11-29 17:13 ` Guy
2004-11-29 18:11 ` Stephan van Hienen
2004-11-30 0:06 ` raid5 slow (looks like 2.6 problem) Stephan van Hienen
2004-11-30 0:30 ` raid5 slow Neil Brown
2004-11-30 0:50 ` Stephan van Hienen
2004-11-30 19:37 ` Stephan van Hienen
2004-11-30 23:26 ` Neil Brown
2004-11-30 23:39 ` Stephan van Hienen
2004-11-30 23:55 ` Stephan van Hienen
2004-12-01 19:23 ` raid5 slow (test on another system) Stephan van Hienen
2004-12-01 22:21 ` Stephan van Hienen
2004-12-02 22:01 ` Stephan van Hienen