* RAID10 far (f2) read throughput on random and sequential / read-ahead
@ 2008-02-22 4:14 Nat Makarevitch
2008-02-24 0:09 ` Nat Makarevitch
2008-02-24 2:17 ` Keld Jørn Simonsen
0 siblings, 2 replies; 3+ messages in thread
From: Nat Makarevitch @ 2008-02-22 4:14 UTC (permalink / raw)
To: linux-raid
'md' performs wonderfully. Thanks to every contributor!
I pitted it against a 3ware 9650 and 'md' won on nearly every count (although
for sequential I/O on RAID5 the 3ware wins by a wide margin):
http://www.makarevitch.org/rant/raid/#3wmd
On RAID10 f2, reducing the read-ahead lowers sequential read throughput, but
even a low value (768 for the whole 'md' block device, 0 for the underlying
spindles) enables very good sequential read performance (300 MB/s on 6 low-end
Hitachi 500 GB spindles).
What baffles me is that, on a 1.4 TB array served by a box with 12 GB of RAM (low
cache-hit ratio), the random access performance remains stable and high (450
IOPS with 48 threads, 20% writes, 10% fsync'ed), even with a fairly high
read-ahead (16k). How come?!
* Re: RAID10 far (f2) read throughput on random and sequential / read-ahead
2008-02-22 4:14 RAID10 far (f2) read throughput on random and sequential / read-ahead Nat Makarevitch
@ 2008-02-24 0:09 ` Nat Makarevitch
2008-02-24 2:17 ` Keld Jørn Simonsen
1 sibling, 0 replies; 3+ messages in thread
From: Nat Makarevitch @ 2008-02-24 0:09 UTC (permalink / raw)
To: linux-raid
Nat Makarevitch <nat <at> makarevitch.org> writes:
> 'md' performs wonderfully
> random access performance remains stable and high (450
> IOPS with 48 threads, 20% writes - 10% fsync'ed), even with a fairly high
> read-ahead (16k).
Mystery solved, sorry for the noise.
Explanation: I use the 'randomio' tool. I searched its source code for the
various 'advise' calls (posix_fadvise(2), madvise(2), ...) but forgot to check
the open(2) call: it uses O_DIRECT!
* Re: RAID10 far (f2) read throughput on random and sequential / read-ahead
2008-02-22 4:14 RAID10 far (f2) read throughput on random and sequential / read-ahead Nat Makarevitch
2008-02-24 0:09 ` Nat Makarevitch
@ 2008-02-24 2:17 ` Keld Jørn Simonsen
1 sibling, 0 replies; 3+ messages in thread
From: Keld Jørn Simonsen @ 2008-02-24 2:17 UTC (permalink / raw)
To: Nat Makarevitch; +Cc: linux-raid
I added a reference to your work in the wiki HOWTO on performance.
Thanks!
Keld
On Fri, Feb 22, 2008 at 04:14:05AM +0000, Nat Makarevitch wrote:
> 'md' performs wonderfully. Thanks to every contributor!
>
> I pitted it against a 3ware 9650 and 'md' won on nearly every count (although
> for sequential I/O on RAID5 the 3ware wins by a wide margin):
> http://www.makarevitch.org/rant/raid/#3wmd
>
> On RAID10 f2, reducing the read-ahead lowers sequential read throughput, but
> even a low value (768 for the whole 'md' block device, 0 for the underlying
> spindles) enables very good sequential read performance (300 MB/s on 6 low-end
> Hitachi 500 GB spindles).
>
> What baffles me is that, on a 1.4 TB array served by a box with 12 GB of RAM (low
> cache-hit ratio), the random access performance remains stable and high (450
> IOPS with 48 threads, 20% writes, 10% fsync'ed), even with a fairly high
> read-ahead (16k). How come?!
>