public inbox for linux-kernel@vger.kernel.org
* raid1 performance
@ 2002-04-30 12:23 Jaime Medrano
  2002-04-30 12:38 ` Arjan van de Ven
  0 siblings, 1 reply; 9+ messages in thread
From: Jaime Medrano @ 2002-04-30 12:23 UTC (permalink / raw)
  To: linux-kernel

I have several raid arrays (level 0 and 1) in my machine and I have
noticed that raid1 is much slower than I expected.

The arrays are made from two identical disks (/dev/hde, /dev/hdg). Some
numbers for the read performance:

/dev/hde: 29 MB/s
/dev/hdg: 29 MB/s
/dev/md0: 27 MB/s (raid1)
/dev/md1: 56 MB/s (raid0)
/dev/md2: 27 MB/s (raid1)

These numbers come from hdparm -tT. I have noticed very poor
performance when reading a large file sequentially from raid1 (I suppose
this is what hdparm does).

I have taken a look at the read-balancing code in raid1.c and found
that when a sequential read happens no balancing is done, so all the
reading is done from only one of the mirrors while the others are idle.
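For reference, here is a toy sketch of the heuristic I am describing
(hypothetical names and structure, not the actual raid1.c code): the
driver remembers where the last read on each mirror ended, and a read
that starts exactly there is treated as part of a sequential stream and
kept on the same disk; anything else is balanced across the mirrors.

```c
#include <assert.h>

#define NMIRRORS 2

/* Hypothetical sketch of raid1-style read balancing, not the real
 * drivers/md code.  last_end[] holds the sector just past the last
 * read issued to each mirror; initialise the entries to -1. */
struct r1_state {
	long last_end[NMIRRORS];
	int  next;	/* round-robin candidate for non-sequential reads */
};

static int pick_mirror(struct r1_state *s, long sector, long nsectors)
{
	int i, m;

	/* A read that continues where a previous one ended is part of a
	 * sequential stream: keep it on the same disk to avoid seeks. */
	for (i = 0; i < NMIRRORS; i++) {
		if (s->last_end[i] == sector) {
			s->last_end[i] = sector + nsectors;
			return i;
		}
	}

	/* Otherwise spread the load round-robin across the mirrors. */
	m = s->next;
	s->next = (s->next + 1) % NMIRRORS;
	s->last_end[m] = sector + nsectors;
	return m;
}
```

Keeping a stream on one disk trades parallelism for fewer seeks;
splitting it across both mirrors only pays off if each disk can skip
over the stretches the other one reads.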

I have tried to modify the balancing algorithm to balance sequential
access as well, but I got almost the same numbers.

I think the reason may be that some layer below is issuing reads larger
than the chunks across which I balance, so the same work is being done
twice; but I don't know how to verify this.
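To make that suspicion concrete with some arithmetic (an assumed
alternating chunk-to-mirror mapping, purely for illustration): if sector
n is assigned to mirror (n / chunk) % 2, then any request longer than
one balance chunk, or merely straddling a chunk boundary, touches chunks
that belong to both mirrors.

```c
#include <assert.h>

/* Illustrative only: assume balancing assigns chunk number (sector/chunk)
 * to mirror (chunk_nr % nmirrors), alternating between the disks.
 * Count how many mirrors one request [sector, sector + nsectors) hits. */
static int mirrors_touched(long sector, long nsectors, long chunk, int nmirrors)
{
	long first = sector / chunk;			/* first chunk covered */
	long last  = (sector + nsectors - 1) / chunk;	/* last chunk covered */
	long n = last - first + 1;

	/* Consecutive chunks alternate between mirrors, so spanning
	 * >= nmirrors chunks means every mirror is touched. */
	return n >= nmirrors ? nmirrors : (int)n;
}
```

With 32 KiB balance chunks (64 sectors) and the block layer merging
reads into 64 KiB (128-sector) requests, every request hits both
mirrors, so each disk ends up seeking over data the other has already
covered instead of streaming.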

Does anybody know how this works?

Regards,
Jaime Medrano




end of thread, other threads:[~2002-06-28 23:58 UTC | newest]

Thread overview: 9+ messages
-- links below jump to the message on this page --
2002-04-30 12:23 raid1 performance Jaime Medrano
2002-04-30 12:38 ` Arjan van de Ven
2002-04-30 14:21   ` Kent Borg
2002-05-01 16:35     ` Jakob Østergaard
2002-05-01 17:01       ` Kent Borg
2002-05-01 17:16         ` Justin Cormack
2002-05-01 21:23         ` Bernd Eckenfels
2002-05-02 16:37           ` Jakob Østergaard
2002-06-29  0:01             ` Bernd Eckenfels
