linux-raid.vger.kernel.org archive mirror
* raid1 performance
@ 2010-07-25 14:58 Marco
  2010-07-25 15:19 ` Roman Mamedov
  0 siblings, 1 reply; 17+ messages in thread
From: Marco @ 2010-07-25 14:58 UTC (permalink / raw)
  To: linux-raid

Hi all,
I'm posting the same message again because I had some problems subscribing to the
list, so I'm not sure it was received:

Doing a simple performance test I obtained some very unexpected results: if I
issue hdparm -t /dev/md2 I get 61 - 65 MB/s, while issuing the same test
directly on the partitions which make up md2 (/dev/sda3 and /dev/sdb3) I get
84 - 87 MB/s. I didn't expect such a big difference between md2 and one of its
members. What can cause this difference?

I'm running the test on CentOS 5.4. /dev/md2 is a mounted and in-use block
device (LVM on top of md2, with the root partition on the LVM volume group), but
the machine was quite idle during the tests. The controller is an Intel ICH9 with
AHCI enabled.
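
(For reference, a minimal user-space sketch of this kind of sequential-read
comparison is below, assuming the device paths above; the 1 MiB read size and
the ~3 second duration are arbitrary choices, and this is only an illustration,
not what hdparm itself does. It reads with O_DIRECT so the page cache does not
hide the difference.)

/* seqread.c - time a few seconds of sequential O_DIRECT reads from a block
 * device and print the throughput, roughly the kind of test hdparm -t runs.
 * Build: gcc -std=gnu99 -O2 -o seqread seqread.c -lrt
 * Run (as root): ./seqread /dev/md2 ; ./seqread /dev/sda3
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <block device>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    const size_t bufsz = 1024 * 1024;            /* 1 MiB per read */
    void *buf;
    if (posix_memalign(&buf, 4096, bufsz)) { perror("posix_memalign"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    long long total = 0;
    double elapsed = 0.0;
    while (elapsed < 3.0) {                      /* read for roughly 3 seconds */
        ssize_t n = read(fd, buf, bufsz);
        if (n <= 0)
            break;
        total += n;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }
    if (elapsed > 0)
        printf("%s: %.1f MB/s\n", argv[1], total / elapsed / 1e6);
    close(fd);
    free(buf);
    return 0;
}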

thank you in advance!


Marco



* raid1 performance
@ 2010-07-19 12:14 Marco
  0 siblings, 0 replies; 17+ messages in thread
From: Marco @ 2010-07-19 12:14 UTC (permalink / raw)
  To: linux-raid

Hi all,
Doing a simple performance test I obtained some very unexpected results: if I
issue hdparm -t /dev/md2 I get 61 - 65 MB/s, while issuing the same test
directly on the partitions which make up md2 (/dev/sda3 and /dev/sdb3) I get
84 - 87 MB/s. I didn't expect such a big difference between md2 and one of its
members. What can cause this difference?

I'm running the test on CentOS 5.4. /dev/md2 is a mounted and in-use block
device (LVM on top of md2, with the root partition on the LVM volume group), but
the machine was quite idle during the tests. The controller is an Intel ICH9 with
AHCI enabled.

thank you in advance!


 Marco


* RAID1: Performance.
@ 2004-04-21  5:57 Mike Mestnik
  0 siblings, 0 replies; 17+ messages in thread
From: Mike Mestnik @ 2004-04-21  5:57 UTC (permalink / raw)
  To: Linux-RAID


I have looked extensively at the code in raid1.c and found that it has a
few bugs that cause it not to do what was intended.
The biggest problem is that it seems not to consider that a write will
move the heads to the same spot.  In the case of multi-process I/O, idle
drives may stay idle if their heads are not CLOSEST.  The other problems I
see were already discussed in a thread of a similar name.  The solution I
have come up with will, I think, satisfy everyone's concerns.

The idea of ordering the blocks ACEBDF is very close to the right
solution.  When drives are idle, is there any reason not to have them read
ahead?
If the heads are in the same place, having just finished a write, they are
both just as close.  Let's use this to our advantage: after selecting which
drive we will use, have that drive serve the start of the read plus its
read-ahead, and have an idle disk read the next chunk, the length of its
read-ahead value.  This means the md device will have "N * read-ahead" of
read-ahead, or the sum of all the read-aheads; this should then be documented.
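
(A toy illustration of that splitting rule; this is not the actual raid1.c
code, and the disk numbers, read-ahead sizes and the sample request below are
invented.)

/* split_read.c - toy model of the proposed split: the selected disk serves
 * the request plus its own read-ahead, and an idle mirror prefetches the
 * next read-ahead-sized chunk, so the array's effective read-ahead is the
 * sum of the members' read-aheads.
 */
#include <stdio.h>

struct chunk { int disk; long start, len; };

/* Returns how many chunks were produced (1 or 2). */
static int split_read(long start, long len,
                      int busy_disk, long busy_ra,
                      int idle_disk, long idle_ra,
                      struct chunk out[2])
{
    long first = len + busy_ra;          /* request plus the selected disk's read-ahead */
    out[0] = (struct chunk){ busy_disk, start, first };
    if (idle_disk < 0)                   /* no idle mirror available */
        return 1;
    out[1] = (struct chunk){ idle_disk, start + first, idle_ra };
    return 2;
}

int main(void)
{
    struct chunk c[2];
    /* hypothetical: an 8-sector read at sector 1000, read-ahead of 256 sectors on both disks */
    int n = split_read(1000, 8, 0, 256, 1, 256, c);
    for (int i = 0; i < n; i++)
        printf("disk %d: sectors %ld..%ld\n",
               c[i].disk, c[i].start, c[i].start + c[i].len - 1);
    return 0;
}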

In the multi-I/O case, using idle disks, though they may not be as
close, will be better than taking a disk that is working and asking it to
move to the end of the drive, only to have it move back less than 1/4 of
the disk.  Here is how...

20 <-- write.
80 <-- read(1)
96 <-- read(1), NOT 2 or 3, as their heads are still on 20
81 <-- read(1)
97 <-- read(1)
73 <-- read(1)

Taking non-sequential read requests and handing them out round-robin
might look better for the example above.  IMHO the only thing saving the
current code is that it somewhat randomly round-robins the disks.  If you
take that out, you will see that only one drive takes the brunt of more
than 70% of the load.

I don't think the oldest-used disk is a good way to go; this only works if
we know that the read-ahead is full of unused data.  That would mean the
drive is idle, and it can be counted in the search for the closest drive
for a new read.  Keep in mind that any write erases all of our bookkeeping.
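
(To make the bookkeeping point concrete, here is a toy model of closest-head
selection; it is not the kernel's read_balance() logic.  A mirrored write puts
every head in the same place, so the distances all tie; in this model the tie
always resolves to disk 1, which then takes the whole read load, as in the
example above.)

/* head_model.c - toy model: pick the disk whose head is closest to the
 * requested sector; a mirrored write moves every head to the same spot,
 * erasing any positional advantage between the mirrors.
 */
#include <stdio.h>
#include <stdlib.h>

#define NDISKS 3

static long head[NDISKS];                /* last known head position per disk */

static int pick_disk(long sector)
{
    int best = 0;
    long best_dist = labs(sector - head[0]);
    for (int i = 1; i < NDISKS; i++) {
        long d = labs(sector - head[i]);
        if (d < best_dist) {
            best_dist = d;
            best = i;
        }
    }
    return best;
}

static void do_read(long sector)
{
    int d = pick_disk(sector);
    head[d] = sector;                    /* only the chosen disk's head moves */
    printf("read  %4ld -> disk %d\n", sector, d + 1);
}

static void do_write(long sector)
{
    for (int i = 0; i < NDISKS; i++)     /* a mirrored write moves every head */
        head[i] = sector;
    printf("write %4ld -> all disks\n", sector);
}

int main(void)
{
    /* replay the example from above: after the write at 20, every distance
     * ties, the tie resolves to disk 1, and disk 1 serves all the reads */
    do_write(20);
    long reads[] = { 80, 96, 81, 97, 73 };
    for (unsigned i = 0; i < sizeof(reads) / sizeof(reads[0]); i++)
        do_read(reads[i]);
    return 0;
}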




Thread overview: 17+ messages
2010-07-25 14:58 raid1 performance Marco
2010-07-25 15:19 ` Roman Mamedov
2010-07-26  9:37   ` Marco
2010-07-26 10:24     ` Keld Simonsen
2010-07-26 10:53       ` John Robinson
2010-07-26 11:30         ` Keld Simonsen
2010-07-27 16:10       ` Marco
2010-07-26 11:03     ` Neil Brown
2010-07-27  1:23       ` Leslie Rhorer
2010-07-27 16:10       ` Marco
2010-07-27 22:23         ` Neil Brown
2010-07-28 12:10           ` Marco
2010-07-28 12:24             ` Neil Brown
2010-07-31 15:21           ` Marco
2010-07-31 16:04             ` Keld Simonsen
  -- strict thread matches above, loose matches on Subject: below --
2010-07-19 12:14 Marco
2004-04-21  5:57 RAID1: Performance Mike Mestnik
