linux-raid.vger.kernel.org archive mirror
* single threaded parity calculation ?
@ 2011-04-15 19:18 Simon McNair
From: Simon McNair @ 2011-04-15 19:18 UTC (permalink / raw)
  To: linux-raid; +Cc: philip

Hi all,
Am I right in thinking that the read speed of my 10x1TB RAID5 array is 
limited by the 'single-threaded parity calculation'?  (I'm quoting Phil 
Turmel on that, and other linux-raid messages I've read seem to confirm 
the terminology.)  I'm running an i7 920 with irqbalance, but if 
something is single-threaded or bound to a single CPU, I'm wondering 
what I can do to alleviate it.
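
(I suppose one way to confirm that would be to watch the md kernel 
thread while a big read is running; assuming the array is /dev/md0, the 
thread shows up as md0_raid5.  Something along these lines:

   # per-thread CPU usage, so the md kernel threads are visible
   ps -eLo pid,psr,pcpu,comm | grep raid5

   # or interactively, with threads shown
   top -H

If md0_raid5 sits near 100% of one core while the array read is slow, 
that would seem to point at the single-threaded path.)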

iostat reports 83MB/s for each disk, i.e. up to 830MB/s across all 10 
disks, but the maximum read speed of the array is only about 256MB/s.
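
In case anyone wants to reproduce the test, the figures are from 
watching iostat during a big sequential read, roughly like this, with 
/dev/md0 standing in for the array device:

   # big sequential read off the array, bypassing the page cache
   dd if=/dev/md0 of=/dev/null bs=1M count=10000 iflag=direct

   # per-device throughput in MB/s, refreshed every second
   iostat -m 1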

Would it be better to have 5 (or more) partitions on each disk, create 
five RAID5 arrays from them (each of which would in theory get its own 
thread), and then create a linear array over the top to join them 
together?
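
To make that concrete, I'm imagining something like the sketch below 
(the sdb..sdk device names and partition numbers are just placeholders, 
and I haven't tried any of this):

   # one RAID5 per partition "slice", striped across all ten disks
   mdadm --create /dev/md1 --level=5 --raid-devices=10 /dev/sd[b-k]1
   mdadm --create /dev/md2 --level=5 --raid-devices=10 /dev/sd[b-k]2
   # ...and similarly for md3, md4 and md5

   # then join the five RAID5s end to end with a linear array
   mdadm --create /dev/md10 --level=linear --raid-devices=5 /dev/md[1-5]

Each of the five RAID5s would get its own mdX_raid5 kernel thread, 
which is the whole point of the exercise.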

yes...I know this is way overthinking it, and also a potentially 
dangerous layout to have to recreate, but I'm curious what the opinions 
are.  I think I'll probably just end up buying another 1TB drive and 
making it an 11-disk RAID6 instead.  I want maximum space, maximum 
speed and maximum redundancy ;-).
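
If I do go the RAID6 route, my understanding (which I'd want to 
double-check against a current mdadm and kernel before trying it) is 
that the conversion can be done in place, roughly like this, with 
/dev/md0 and /dev/sdl as placeholder names:

   # add the new disk as a spare, then reshape RAID5 -> RAID6 in place
   mdadm --add /dev/md0 /dev/sdl
   mdadm --grow /dev/md0 --level=6 --raid-devices=11 \
         --backup-file=/root/md0-grow.backup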

TIA :-)

Simon





Thread overview: 7+ messages
2011-04-15 19:18 single threaded parity calculation ? Simon McNair
2011-04-15 20:54 ` Phil Turmel
2011-04-15 21:28   ` Phil Turmel
2011-04-15 21:44     ` Simon Mcnair
2011-04-15 21:46   ` NeilBrown
2011-04-16 13:45 ` Drew
2011-04-16 19:07   ` Simon McNair
