* Benchmarks comparing 3ware 7410 RAID5 to Linux md
@ 2003-09-08 22:58 Aaron Lehmann
2003-09-09 1:37 ` Peter L. Ashford
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Aaron Lehmann @ 2003-09-08 22:58 UTC (permalink / raw)
To: linux-raid
Hello,
I'm about to set up an IDE RAID5 with 4 7200rpm 160GB hard drives
under Linux 2.6.0-test. I found a 3ware Escalade 7410 card and thought
this would be perfect for the job, but read recently about the poor
write performance. It was suggested in the archives that Linux
software raid could do better. While reliability and read performance
are my main concerns, in that order, I wouldn't mind having decent
write performance too. Since I haven't seen any benchmarks comparing
Linux software raid with 3ware hardware raid, I think it would be
interesting to do some myself, especially because a 3ware controller
running in JBOD mode should be significantly better than most other
IDE controllers. I plan to use tiobench and bonnie as suggested in the
FAQ, but I'd welcome suggestions for other benchmarks to run.
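Roughly what I have in mind, with placeholder device names (the mdadm
options are open to better suggestions, and the benchmark sizes are just
picked to be well above RAM):

    # 4-disk RAID5 on the disks the controller exports in JBOD mode
    mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=64 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    bonnie++ -s 8192
    ./tiobench.pl --size 8192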
I have some questions about setting up this test and practical use of
software raid:
* Can software raid 5 reliably deal with drive failures? If not, I
don't think I'll even run the test. I've heard about some bad
experiences with software raid, but I don't want to dismiss the option
because of hearsay.
* Is it possible to boot off a software array with LILO or GRUB?
* This is probably a major point of contention, but what filesystem(s)
fit well with my priorities of reliability, then read speed, then
write speed? I only need metadata journalling. XFS and ext3 are the
two filesystems that come to mind, but I'm not sure how they compare, and
again, I haven't seen great benchmarks.
* Is it a good idea to tell ext3 about the stride of the array even
on a hardware RAID setup? If I go the XFS route, is there any
equivalent?
* Any particular /proc settings that could be tweaked? I've seen
suggestions relating to bdflush, but don't know if they still apply
to 2.6.
I'd also like to note that even if I don't end up using software raid,
it seems like a great subsystem and I'm thankful for its development.
Thanks,
Aaron Lehmann
* Re: Benchmarks comparing 3ware 7410 RAID5 to Linux md
2003-09-08 22:58 Benchmarks comparing 3ware 7410 RAID5 to Linux md Aaron Lehmann
@ 2003-09-09 1:37 ` Peter L. Ashford
2003-09-09 1:50 ` dean gaudet
2003-09-09 6:18 ` Aaron Lehmann
2 siblings, 0 replies; 6+ messages in thread
From: Peter L. Ashford @ 2003-09-09 1:37 UTC (permalink / raw)
To: Aaron Lehmann; +Cc: linux-raid
Aaron,
> I'm about to set up an IDE RAID5 with 4 7200rpm 160GB hard drives
> under Linux 2.6.0-test. I found a 3ware Escalade 7410 card and thought
> this would be perfect for the job, but read recently about the poor
> write performance. It was suggested in the archives that Linux
> software raid could do better. While reliability and read performance
> are my main concerns, in that order, I wouldn't mind having decent
> write performance too. Since I haven't seen any benchmarks comparing
> Linux software raid with 3ware hardware raid, I think it would be
> interesting to do some myself, especially because a 3ware controller
> running in JBOD mode should be significantly better than most other
> IDE controllers. I plan to use tiobench and bonnie as suggested in the
> FAQ, but I'd welcome suggestions for other benchmarks to run.
Look at using IOMETER. It will give you information on the I/O
transaction capabilities of the configuration.
> I have some questions about setting up this test and practical use of
> software raid:
>
> * Can software raid 5 reliably deal with drive failures? If not, I
> don't think I'll even run the test. I've heard about some bad
> experiences with software raid, but I don't want to dismiss the option
> because of hearsay.
Both hardware (3Ware 7410) and software RAID-5 will deal with a drive
failure. The important difference is that the hardware RAID does so with
no CPU overhead. There will be a significant performance drop (reading or
writing) with both. Another difference is that hardware RAID-5 allows
hot-swap of the drives.
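On the software side the failure/replace cycle is driven from userspace;
a minimal sketch with mdadm (device names are only an example):

    mdadm /dev/md0 --fail /dev/sdc1      # mark the member faulty
    mdadm /dev/md0 --remove /dev/sdc1    # pull it out of the array
    # ... replace the drive (a reboot, without hot-swap) ...
    mdadm /dev/md0 --add /dev/sdc1       # md rebuilds onto the new disk
    cat /proc/mdstat                     # watch the resync progress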
> * Is it a good idea to tell ext3 about the stride of the array even
> on a hardware RAID setup?
It shouldn't hurt, and might actually help. I've never seen a benchmark
on this.
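The stride is just the RAID chunk size divided by the filesystem block
size (64KB chunks with 4KB blocks gives a stride of 16), and it's passed
at mkfs time. Something like this, where the device name is only an
example and the exact option spelling depends on your e2fsprogs version:

    mke2fs -j -b 4096 -R stride=16 /dev/sda1   # newer e2fsprogs: -E stride=16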
> * Any particular /proc settings that could be tweaked? I've seen
> suggestions relating to bdflush, but don't know if they still apply
> to 2.6.
The following are suggested by 3Ware:
vm.max-readahead = 256
vm.min-readahead = 128
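These can go in /etc/sysctl.conf or be set on the fly. Note that they are
2.4-era VM knobs; 2.6 tunes readahead per block device instead, so check
what your kernel actually exposes:

    sysctl -w vm.max-readahead=256
    sysctl -w vm.min-readahead=128
    # 2.6-style per-device readahead (value in 512-byte sectors, pick to taste)
    blockdev --setra 2048 /dev/sda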
Good luck.
Peter Ashford
* Re: Benchmarks comparing 3ware 7410 RAID5 to Linux md
2003-09-08 22:58 Benchmarks comparing 3ware 7410 RAID5 to Linux md Aaron Lehmann
2003-09-09 1:37 ` Peter L. Ashford
@ 2003-09-09 1:50 ` dean gaudet
2003-09-09 2:24 ` Kanoa Withington
2003-09-09 6:18 ` Aaron Lehmann
2 siblings, 1 reply; 6+ messages in thread
From: dean gaudet @ 2003-09-09 1:50 UTC (permalink / raw)
To: Aaron Lehmann; +Cc: linux-raid
On Mon, 8 Sep 2003, Aaron Lehmann wrote:
> * Can software raid 5 reliably deal with drive failures? If not, I
> don't think I'll even run the test. I've heard about some bad
> experiences with software raid, but I don't want to dismiss the option
> because of hearsay.
in my experience linux sw raid5 or raid1 have no problem dealing with
single drive failures.
there is a class of multiple drive failures from which it's at least
theoretically possible to recover, but which sw raid5 doesn't presently
recover. and given that we don't have the source code to 3ware's raid5
stuff it's hard to say if they cover this class either (this is generally
true of hw raid, 3ware or otherwise). the specific type of failure i'm
referring to is one where every stripe has at least N-1 readable chunks,
but there is no set of N-1 disks from which you can read every stripe.
it's easier to explain with a picture:
good raid5:
// disk 0, 1, 2, 3 resp.
{ D, D, D, P } // stripe 0
{ D, D, P, D } // stripe 1
{ D, P, D, D } // stripe 2
{ P, D, D, D } // stripe 3
...
where D/P are data/parity respectively.
bad disk type 1:
// disk 0, 1, 2, 3 resp.
{ X, D, D, P } // stripe 0
{ X, D, P, D } // stripe 1
{ X, P, D, D } // stripe 2
{ X, D, D, D } // stripe 3
...
where "X" means we can't read this chunk. this is the type of failure
which sw raid5 handles fine -- it goes into a degraded mode using disks 1,
2, and 3.
bad disks type 2:
// disk 0, 1, 2, 3 resp.
{ D, X, D, P } // stripe 0
{ D, D, P, D } // stripe 1
{ X, P, D, D } // stripe 2
{ P, D, D, D } // stripe 3
...
this is a type of failure which sw raid5 does not presently handle
(although i'd love for someone to tell me i'm wrong :).
but it's easy to see that you *can* recover from this situation. in this
case to recover all of stripe 0 you'd reconstruct from disks 0, 2 and 3;
and to recover all of stripe 2 you'd reconstruct from disks 1, 2, and 3.
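spelled out, since raid5 parity is plain xor and any chunk in a stripe is
just the xor of the other N-1:

  stripe 0:  chunk(disk 1) = chunk(disk 0) xor chunk(disk 2) xor chunk(disk 3)
  stripe 2:  chunk(disk 0) = chunk(disk 1) xor chunk(disk 2) xor chunk(disk 3)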
as to whether hw raids are any better is up for debate... if you've got
the source you can always look at it and prove it either way. (or a
vendor can step forward and claim they support this type of failure.)
there are similar failure modes for raid1 as well, and i believe sw
raid1 likewise treats a disk as either "all good" or "all bad" with no
in-between.
> * Is it possible to boot off a software array with LILO or GRUB?
LILO can do raid1 fine, and i don't know anything about GRUB.
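the relevant lilo.conf bits look roughly like this (device names are just
an example, and raid-extra-boot and its values are worth checking against
your lilo version's man page):

    boot=/dev/md0
    root=/dev/md0
    raid-extra-boot=mbr-only
    image=/boot/vmlinuz
        label=linux
        read-only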
-dean
* Re: Benchmarks comparing 3ware 7410 RAID5 to Linux md
2003-09-09 1:50 ` dean gaudet
@ 2003-09-09 2:24 ` Kanoa Withington
0 siblings, 0 replies; 6+ messages in thread
From: Kanoa Withington @ 2003-09-09 2:24 UTC (permalink / raw)
To: dean gaudet; +Cc: Aaron Lehmann, linux-raid
Dean is exactly right.
3ware cards cannot recover from scenario 2 either _if_ one drive has
already been kicked out of the array. The only advantage they have is
that they will not kick a disk out for a read error; instead, they
reconstruct the unreadable data from parity and rewrite it to the bad
disk, which will usually remap the bad block automatically and therefore
"repair" the sector.
Software RAID would kick out the disk with the read error, and then you
would lose data because the array could no longer be reconstructed due
to the other bad sector/disk.
In other words, neither can recover from scenario 2 but you will be
less likely to find yourself in that position with a 3ware array. The
chances of finding yourself in scenario 2 with a large RAID 5 volume
are actually frighteningly high since large disks tend to develop
latent read errors over time.
Keep in mind RAID doesn't care about the filesystem so every block in
the array, not just the utilized ones, needs to be readable for any
degraded RAID 5 to rebuild onto a new disk.
Since 3ware cards can repair read errors, they also let you "verify"
volumes, which means scanning the whole array and correcting any newfound
errors. This greatly reduces the odds that in the event of a real disk
failure you would find yourself in the dreaded "scenario 2".
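A crude substitute with md is to periodically force every sector of every
member to be read, so latent errors at least show up before you are in
the middle of a rebuild (device names are just an example, and unlike the
3ware verify this only detects bad sectors rather than rewriting them):

    for d in /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1; do
        dd if=$d of=/dev/null bs=1M
    done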
That said, the Linux software raid is a wonderful thing as far as
software RAID goes, and software RAID 5 can be much faster and less
expensive than a 3ware RAID 5.
-Kanoa
On Mon, 8 Sep 2003, dean gaudet wrote:
> On Mon, 8 Sep 2003, Aaron Lehmann wrote:
>
> > * Can software raid 5 reliably deal with drive failures? If not, I
> > don't think I'll even run the test. I've heard about some bad
> > experiences with software raid, but I don't want to dismiss the option
> > because of hearsay.
>
> in my experience linux sw raid5 or raid1 have no problem dealing with
> single drive failures.
>
> there is a class of multiple drive failures from which it's at least
> theoretically possible to recover, but which sw raid5 doesn't presently
> recover. and given that we don't have the source code to 3ware's raid5
> stuff it's hard to say if they cover this class either (this is generally
> true of hw raid, 3ware or otherwise). the specific type of failure i'm
> referring to is one where every stripe has at least N-1 readable chunks,
> but there is no set of N-1 disks from which you can read every stripe.
>
> it's easier to explain with a picture:
>
> good raid5:
>
> // disk 0, 1, 2, 3 resp.
> { D, D, D, P } // stripe 0
> { D, D, P, D } // stripe 1
> { D, P, D, D } // stripe 2
> { P, D, D, D } // stripe 3
> ...
>
> where D/P are data/parity respectively.
>
> bad disk type 1:
>
> // disk 0, 1, 2, 3 resp.
> { X, D, D, P } // stripe 0
> { X, D, P, D } // stripe 1
> { X, P, D, D } // stripe 2
> { X, D, D, D } // stripe 3
> ...
>
> where "X" means we can't read this chunk. this is the type of failure
> which sw raid5 handles fine -- it goes into a degraded mode using disks 1,
> 2, and 3.
>
> bad disks type 2:
>
> // disk 0, 1, 2, 3 resp.
> { D, X, D, P } // stripe 0
> { D, D, P, D } // stripe 1
> { X, P, D, D } // stripe 2
> { P, D, D, D } // stripe 3
> ...
>
> this is a type of failure which sw raid5 does not presently handle
> (although i'd love for someone to tell me i'm wrong :).
>
> but it's easy to see that you *can* recover from this situation. in this
> case to recover all of stripe 0 you'd reconstruct from disks 0, 2 and 3;
> and to recover all of stripe 2 you'd reconstruct from disks 1, 2, and 3.
>
> as to whether hw raids are any better is up for debate... if you've got
> the source you can always look at it and prove it either way. (or a
> vendor can step forward and claim they support this type of failure.)
>
> there are similar failure modes for raid1 as well, and i believe sw
> raid1 likewise treats a disk as either "all good" or "all bad" with no
> in-between.
>
>
> > * Is it possible to boot off a software array with LILO or GRUB?
>
> LILO can do raid1 fine, and i don't know anything about GRUB.
>
> -dean
* Re: Benchmarks comparing 3ware 7410 RAID5 to Linux md
2003-09-08 22:58 Benchmarks comparing 3ware 7410 RAID5 to Linux md Aaron Lehmann
2003-09-09 1:37 ` Peter L. Ashford
2003-09-09 1:50 ` dean gaudet
@ 2003-09-09 6:18 ` Aaron Lehmann
2003-09-10 13:23 ` Joshua Baker-LePain
2 siblings, 1 reply; 6+ messages in thread
From: Aaron Lehmann @ 2003-09-09 6:18 UTC (permalink / raw)
To: linux-raid
Well, I'm really sorry I didn't run these benchmarks. In the process
of installing these new drives, something happened to my old SCSI disk
such that it isn't detected on the bus. This is why I wanted RAID in
the first place! In addition, one of the new drives appears to be
defective, which will delay the process.
It hasn't been my day.
* Re: Benchmarks comparing 3ware 7410 RAID5 to Linux md
2003-09-09 6:18 ` Aaron Lehmann
@ 2003-09-10 13:23 ` Joshua Baker-LePain
0 siblings, 0 replies; 6+ messages in thread
From: Joshua Baker-LePain @ 2003-09-10 13:23 UTC (permalink / raw)
To: Aaron Lehmann; +Cc: linux-raid
[-- Attachment #1: Type: TEXT/PLAIN, Size: 1220 bytes --]
On Mon, 8 Sep 2003 at 11:18pm, Aaron Lehmann wrote
> Well, I'm really sorry I didn't run these benchmarks. In the process
> of installing these new drives, something happened to my old SCSI disk
> such that it isn't detected on the bus. This is why I wanted RAID in
> the first place! In addition, one of the new drives appears to be
> defective, which will delay the process.
>
> It hasn't been my day.
While these aren't exactly what you had in mind, I recently did some
benchmarking on a big 3ware-based system (see attached). The system
has dual 2.4GHz Xeons (with HT on), 2GB RAM, two 7500-8 boards, and 16
Hitachi 180GB drives. For the software RAID, I did a RAID5 across 15 of
the disks (leaving one for a hot spare) with a 64k chunk-size (same as the
3wares do in hardware). For the hardware RAID, I did a RAID5 with hot
spare on each card, and a software RAID0 stripe (512k chunk size) across
the cards.
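In mdadm terms the two layouts correspond to roughly the following (all
device names are placeholders; in the JBOD config the 16 disks show up
individually, while in the hardware config each card exports a single
RAID5 unit):

    # software RAID5: 15 members plus one hot spare, 64k chunks
    mdadm --create /dev/md0 --level=5 --raid-devices=15 --spare-devices=1 \
        --chunk=64 /dev/sd[a-p]1
    # hardware test: software RAID0 (512k chunks) over the two 3ware units
    mdadm --create /dev/md1 --level=0 --raid-devices=2 --chunk=512 \
        /dev/sda /dev/sdb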
Unfortunately, the tests *were* run with two different kernels (well, three
really -- I tested the HW RAID config with two different kernels). All
were patched with XFS, and all the tests were done on an XFS partition.
--
Joshua Baker-LePain
Department of Biomedical Engineering
Duke University
[-- Attachment #2: Type: TEXT/PLAIN, Size: 4323 bytes --]
SW RAID, 2.4.18-18SGI_XFS_1.2.0smp
Version 1.02c ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
buckbeak 8G 25517 99 141407 73 98245 59 27676 99 360682 87 478.0 2
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 1112 10 +++++ +++ 993 9 1112 11 +++++ +++ 695 14
buckbeak,8G,25517,99,141407,73,98245,59,27676,99,360682,87,478.0,2,16,1112,10,+++++,+++,993,9,1112,11,+++++,+++,695,14
[jlb@buckbeak tiobench-0.3.3]$ ./tiobench.pl --size 8192
Run #1: ./tiotest -t 8 -f 1024 -r 500 -b 4096 -d . -TT
Unit information
================
File size = megabytes
Blk Size = bytes
Rate = megabytes per second
CPU% = percentage of CPU used during the test
Latency = milliseconds
Lat% = percent of requests that took longer than X seconds
CPU Eff = Rate divided by CPU% - throughput per cpu load
Sequential Reads
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 1 267.57 86.00% 0.014 94.80 0.00000 0.00000 311
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 2 294.90 121.6% 0.025 48.80 0.00000 0.00000 242
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 4 172.77 120.6% 0.089 209.19 0.00000 0.00000 143
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 8 166.20 110.4% 0.169 6843.29 0.00081 0.00000 151
Random Reads
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 1 0.79 0.952% 4.970 18.40 0.00000 0.00000 82
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 2 1.47 1.490% 5.207 26.07 0.00000 0.00000 99
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 4 2.44 2.500% 6.123 36.94 0.00000 0.00000 98
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 8 3.63 3.902% 7.549 43.57 0.00000 0.00000 93
Sequential Writes
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 1 145.55 85.55% 0.025 718.62 0.00000 0.00000 170
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 2 131.19 143.2% 0.054 2557.99 0.00010 0.00000 92
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 4 93.70 134.3% 0.151 5748.75 0.00043 0.00000 70
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 8 67.09 103.6% 0.418 7592.24 0.00372 0.00000 65
Random Writes
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 1 2.57 1.447% 0.011 0.10 0.00000 0.00000 178
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 2 1.35 1.127% 0.017 0.11 0.00000 0.00000 119
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 4 2.41 3.914% 0.029 0.15 0.00000 0.00000 62
2.4.18-18SGI_XFS_1.2.0smp 8192 4096 8 2.14 3.565% 0.029 5.15 0.00000 0.00000 60
[-- Attachment #3: Type: TEXT/PLAIN, Size: 8795 bytes --]
HW RAID, 2.4.21-xfs (1.3 release)
[jlb@buckbeak jlb]$ bonnie++ -s 8192
Version 1.02c ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
buckbeak 8G 25283 96 131142 39 83869 32 28038 99 342849 68 444.1 1
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 2644 21 +++++ +++ 2227 17 2548 20 +++++ +++ 2118 21
buckbeak,8G,25283,96,131142,39,83869,32,28038,99,342849,68,444.1,1,16,2644,21,+++++,+++,2227,17,2548,20,+++++,+++,2118,21
[jlb@buckbeak tiobench-0.3.3]$ ./tiobench.pl --size 8192
Run #1: ./tiotest -t 8 -f 1024 -r 500 -b 4096 -d . -TT
Unit information
================
File size = megabytes
Blk Size = bytes
Rate = megabytes per second
CPU% = percentage of CPU used during the test
Latency = milliseconds
Lat% = percent of requests that took longer than X seconds
CPU Eff = Rate divided by CPU% - throughput per cpu load
Sequential Reads
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.21-xfs 8192 4096 1 340.95 87.56% 0.011 22.33 0.00000 0.00000 389
2.4.21-xfs 8192 4096 2 294.01 91.05% 0.026 76.29 0.00000 0.00000 323
2.4.21-xfs 8192 4096 4 251.19 80.82% 0.060 95.17 0.00000 0.00000 311
2.4.21-xfs 8192 4096 8 255.43 84.03% 0.119 266.16 0.00000 0.00000 304
Random Reads
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.21-xfs 8192 4096 1 0.76 0.340% 5.132 25.90 0.00000 0.00000 223
2.4.21-xfs 8192 4096 2 1.37 0.437% 5.487 21.35 0.00000 0.00000 312
2.4.21-xfs 8192 4096 4 2.32 1.190% 6.452 42.38 0.00000 0.00000 195
2.4.21-xfs 8192 4096 8 3.50 1.568% 8.275 41.85 0.00000 0.00000 223
Sequential Writes
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.21-xfs 8192 4096 1 116.36 44.09% 0.031 737.00 0.00000 0.00000 264
2.4.21-xfs 8192 4096 2 63.30 28.41% 0.114 4194.28 0.00057 0.00000 223
2.4.21-xfs 8192 4096 4 43.49 24.81% 0.331 11465.55 0.00310 0.00005 175
2.4.21-xfs 8192 4096 8 38.26 25.88% 0.749 14738.81 0.01159 0.00038 148
Random Writes
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.21-xfs 8192 4096 1 1.64 0.838% 0.011 0.08 0.00000 0.00000 195
2.4.21-xfs 8192 4096 2 1.68 1.183% 0.017 0.16 0.00000 0.00000 142
2.4.21-xfs 8192 4096 4 1.63 1.776% 0.028 2.80 0.00000 0.00000 92
2.4.21-xfs 8192 4096 8 1.58 2.330% 0.034 18.03 0.00000 0.00000 68
HW RAID, 2.4.20-19.7.XFS1.3.0smp
[jlb@buckbeak tiobench-0.3.3]$ ./tiobench.pl --size 4096
Run #1: ./tiotest -t 8 -f 512 -r 500 -b 4096 -d . -T-T
Unit information
================
File size = megabytes
Blk Size = bytes
Rate = megabytes per second
CPU% = percentage of CPU used during the test
Latency = milliseconds
Lat% = percent of requests that took longer than X seconds
CPU Eff = Rate divided by CPU% - throughput per cpu load
Sequential Reads
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.20-19.7.XFS1.3.0smp 4096 4096 1 276.27 81.68% 0.013 99.82 0.00000 0.00000 338
2.4.20-19.7.XFS1.3.0smp 4096 4096 2 275.17 102.6% 0.027 106.56 0.00000 0.00000 268
2.4.20-19.7.XFS1.3.0smp 4096 4096 4 225.11 108.5% 0.067 256.05 0.00000 0.00000 207
2.4.20-19.7.XFS1.3.0smp 4096 4096 8 221.65 111.8% 0.132 217.67 0.00000 0.00000 198
Random Reads
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.20-19.7.XFS1.3.0smp 4096 4096 1 1.13 2.676% 3.454 54.82 0.00000 0.00000 42
2.4.20-19.7.XFS1.3.0smp 4096 4096 2 1.93 16.33% 4.002 53.27 0.00000 0.00000 12
2.4.20-19.7.XFS1.3.0smp 4096 4096 4 2.94 20.87% 4.971 78.67 0.00000 0.00000 14
2.4.20-19.7.XFS1.3.0smp 4096 4096 8 4.61 26.52% 5.989 103.03 0.00000 0.00000 17
Sequential Writes
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.20-19.7.XFS1.3.0smp 4096 4096 1 58.92 23.21% 0.054 2981.43 0.00038 0.00000 254
2.4.20-19.7.XFS1.3.0smp 4096 4096 2 44.03 22.14% 0.150 4731.74 0.00114 0.00000 199
2.4.20-19.7.XFS1.3.0smp 4096 4096 4 33.53 26.27% 0.382 6946.62 0.00458 0.00000 128
2.4.20-19.7.XFS1.3.0smp 4096 4096 8 29.41 24.99% 0.889 13357.19 0.01450 0.00010 118
Random Writes
File Blk Num Avg Maximum Lat% Lat% CPU
Identifier Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
---------------------------- ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
2.4.20-19.7.XFS1.3.0smp 4096 4096 1 1.66 0.531% 0.011 0.11 0.00000 0.00000 313
2.4.20-19.7.XFS1.3.0smp 4096 4096 2 1.67 1.071% 0.016 0.13 0.00000 0.00000 156
2.4.20-19.7.XFS1.3.0smp 4096 4096 4 1.63 1.873% 0.029 4.73 0.00000 0.00000 87
2.4.20-19.7.XFS1.3.0smp 4096 4096 8 1.50 2.117% 0.025 1.25 0.00000 0.00000 71
[jlb@buckbeak tmp]$ bonnie++ -s 8192
Version 1.02c ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
buckbeak 8G 20485 76 55244 17 27481 10 27383 97 365660 81 446.0 1
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 2168 19 +++++ +++ 2710 23 1196 11 +++++ +++ 3178 32
buckbeak,8G,20485,76,55244,17,27481,10,27383,97,365660,81,446.0,1,16,2168,19,+++++,+++,2710,23,1196,11,+++++,+++,3178,32
Thread overview: 6+ messages
2003-09-08 22:58 Benchmarks comparing 3ware 7410 RAID5 to Linux md Aaron Lehmann
2003-09-09 1:37 ` Peter L. Ashford
2003-09-09 1:50 ` dean gaudet
2003-09-09 2:24 ` Kanoa Withington
2003-09-09 6:18 ` Aaron Lehmann
2003-09-10 13:23 ` Joshua Baker-LePain