From: Dan Christensen <jdc@uwo.ca>
To: linux-raid@vger.kernel.org
Subject: RAID-5 streaming read performance
Date: Mon, 11 Jul 2005 11:11:28 -0400
Message-ID: <874qb14btr.fsf@uwo.ca>
I was wondering what I should expect in terms of streaming read
performance when using (software) RAID-5 with four SATA drives. I
thought I would get a noticeable improvement compared to reads from a
single device, but that's not the case. I tested this by using dd to
read 300MB directly from the disk partitions (/dev/sda7, etc.), and also
by using dd to read 300MB directly from the RAID device (/dev/md2 in
this case). I get around 57MB/s from each of the disk partitions that
make up the RAID device, and about 58MB/s from the RAID device itself.
On the other hand, if I run parallel reads from the component
partitions, I get 25 to 30MB/s from each, so the bus can clearly
sustain more than 100MB/s in aggregate.
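For comparison, here is a back-of-envelope upper bound for the array
read rate, assuming a streaming read can skip the parity chunks and keep
all four spindles busy (the per-disk figure is the one measured above):

```shell
disks=4        # drives in the md2 array
per_disk=57    # MB/s measured from a single component partition
# In RAID-5, 3 of every 4 chunks in a stripe hold data and parity is
# skipped on read, so the ideal aggregate is (disks - 1) times the
# single-disk rate:
echo "ideal: $(( (disks - 1) * per_disk )) MB/s"
```

That works out to roughly 171MB/s, far above the ~58MB/s observed.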
Before each read, I try to clear the kernel's cache by reading
900MB from an unrelated partition on the disk. (Is this guaranteed
to work? Is there a better and/or faster way to clear cache?)
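One possibly cleaner way to drop the cache, assuming a kernel that
provides /proc/sys/vm/drop_caches (added in 2.6.16, so newer than the
kernels tested here); a sketch, run as root and guarded so it is a no-op
where the file is absent or unwritable:

```shell
sync                                    # flush dirty pages first
if [ -w /proc/sys/vm/drop_caches ]; then
    # 3 = drop pagecache, dentries and inodes
    echo 3 > /proc/sys/vm/drop_caches
fi
```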
I have AAM quiet mode/low performance enabled on the drives, but (a)
this shouldn't matter too much for streaming reads, and (b) it's the
relative performance of the reads from the partitions and the RAID
device that I'm curious about.
I also get poor write performance, but that's harder to isolate
because I have to go through the LVM and filesystem layers too.
I also get poor performance from my RAID-1 array and my other
RAID-5 arrays.
Details of my tests and set-up below.
Thanks for any suggestions,
Dan
System:
- Athlon 2500+
- kernel 2.6.12.2 (also tried 2.6.11.11)
- four SATA drives (3 160G, 1 200G); Samsung Spinpoint
- SiI3114 controller (latency_timer=32 by default; tried 128 too)
- 1G ram
- blockdev --getra /dev/sda --> 256 (didn't play with these)
- blockdev --getra /dev/md2 --> 768 (didn't play with this)
- tried the anticipatory, deadline and cfq schedulers, with no
  significant difference
- machine essentially idle during tests
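If the readahead values turn out to matter, here is a sketch of how one
might raise the array's readahead (the value 4096 is illustrative, not a
recommendation; blockdev --setra takes a count of 512-byte sectors and
needs root, so the commands are guarded):

```shell
dev=/dev/md2   # hypothetical target; substitute your own array
if [ -b "$dev" ] && [ "$(id -u)" -eq 0 ]; then
    blockdev --setra 4096 "$dev"   # 4096 sectors = 2MiB of readahead
    blockdev --getra "$dev"        # verify the new value
fi
```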
Here is part of /proc/mdstat (the full output is below):
md2 : active raid5 sdd5[3] sdc5[2] sdb5[1] sda7[0]
218612160 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
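As a side note, the geometry above implies request sizes like these (a
sketch; blockdev --getra reports 512-byte sectors):

```shell
chunk=64                            # KiB, from /proc/mdstat
data_disks=3                        # 4 drives minus 1 parity chunk per stripe
stripe=$(( chunk * data_disks ))    # KiB of data per full stripe
ra_kib=$(( 768 / 2 ))               # md2's readahead of 768 sectors, in KiB
echo "stripe=${stripe}KiB readahead=${ra_kib}KiB"
```

So the default readahead on md2 covers only two full stripes, which may
be too little to keep all four spindles busy at once.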
Here's the test script and output:
# Clear cache:
dd if=/dev/sda8 of=/dev/null bs=1M count=900 > /dev/null 2>&1

for f in sda7 sdb5 sdc5 sdd5 ; do
    echo $f
    dd if=/dev/$f of=/dev/null bs=1M count=300 2>&1 | grep bytes/sec
    echo
done

# Clear cache:
dd if=/dev/sda8 of=/dev/null bs=1M count=900 > /dev/null 2>&1

for f in md2 ; do
    echo $f
    dd if=/dev/$f of=/dev/null bs=1M count=300 2>&1 | grep bytes/sec
    echo
done
Output:
sda7
314572800 bytes transferred in 5.401071 seconds (58242671 bytes/sec)
sdb5
314572800 bytes transferred in 5.621170 seconds (55962158 bytes/sec)
sdc5
314572800 bytes transferred in 5.635491 seconds (55819947 bytes/sec)
sdd5
314572800 bytes transferred in 5.333374 seconds (58981951 bytes/sec)
md2
314572800 bytes transferred in 5.386627 seconds (58398846 bytes/sec)
# cat /proc/mdstat
md1 : active raid5 sdd1[2] sdc1[1] sda2[0]
578048 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md4 : active raid5 sdd2[3] sdc2[2] sdb2[1] sda6[0]
30748032 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md2 : active raid5 sdd5[3] sdc5[2] sdb5[1] sda7[0]
218612160 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md3 : active raid5 sdd6[3] sdc6[2] sdb6[1] sda8[0]
218636160 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md0 : active raid1 sdb1[0] sda5[1]
289024 blocks [2/2] [UU]
# mdadm --detail /dev/md2
/dev/md2:
Version : 00.90.01
Creation Time : Mon Jul 4 23:54:34 2005
Raid Level : raid5
Array Size : 218612160 (208.48 GiB 223.86 GB)
Device Size : 72870720 (69.49 GiB 74.62 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Thu Jul 7 21:52:50 2005
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : c4056d19:7b4bb550:44925b88:91d5bc8a
Events : 0.10873823
    Number   Major   Minor   RaidDevice   State
       0       8       7         0        active sync   /dev/sda7
       1       8      21         1        active sync   /dev/sdb5
       2       8      37         2        active sync   /dev/sdc5
       3       8      53         3        active sync   /dev/sdd5