* experiences with raid5: stripe_queue patches
@ 2007-10-15 15:03 Bernd Schubert
From: Bernd Schubert @ 2007-10-15 15:03 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-raid, neilb

Hi,

in order to tune raid performance I did some benchmarks with and without the
stripe_queue patches. The 2.6.22 run is only there for comparison, to rule out
other effects (e.g. the new scheduler).
It seems there is a regression with these patches in re-write performance:
as you can see, it is at almost 50% of what it should be.

write      re-write   read       re-read
480844.26  448723.48  707927.55  706075.02 (2.6.22 w/o SQ patches)
487069.47  232574.30  709038.28  707595.09 (2.6.23 with SQ patches)
469865.75  438649.88  711211.92  703229.00 (2.6.23 without SQ patches)

Benchmark details:

3 x raid5 over 4 partitions of the very same hardware raid (in the end that's
raid65: raid6 in hardware and raid5 in software; we need to do it that way).

chunk size: 8192
stripe_cache_size: 8192 each
readahead of the md*: 65535 (well, actually it limits itself to 65528)
readahead of the underlying partitions: 16384

filesystem: xfs

Test system: 2 x quad-core Xeon 1.86 GHz (E5320)

An interesting effect to notice: without these patches the pdflush daemons
take a lot of CPU time; with these patches, pdflush almost doesn't appear
in the 'top' list.

Actually we would prefer one single raid5 array, but then one single raid5
thread runs at 100% CPU time, leaving the other 7 CPUs idle; the hardware
raid reports its utilization at only about 50% and we only see writes at
about 200 MB/s.
With 3 separate software raid5 sets, on the contrary, the i/o to the hardware
raid systems is the bottleneck.
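
If I understand the code correctly, the reason a single array cannot use more
than one core is that all stripe handling for an array funnels through its one
raid5d kernel thread. A heavily simplified sketch of that loop, from my reading
of drivers/md/raid5.c (not verbatim; bitmap handling, plugging and the retry
list are elided):

	/*
	 * Rough sketch of the per-array raid5d worker: one kernel thread per
	 * md device pops stripes off conf->handle_list and does all the
	 * parity/copy work itself, so one array saturates at most one CPU.
	 */
	static void raid5d(mddev_t *mddev)
	{
		raid5_conf_t *conf = mddev_to_conf(mddev);
		struct stripe_head *sh;

		spin_lock_irq(&conf->device_lock);
		while (!list_empty(&conf->handle_list)) {
			sh = list_entry(conf->handle_list.next,
					struct stripe_head, lru);
			list_del_init(&sh->lru);
			atomic_inc(&sh->count);
			spin_unlock_irq(&conf->device_lock);

			/* all parity computation happens here, single-threaded */
			handle_stripe(sh, conf->spare_page);
			release_stripe(sh);

			spin_lock_irq(&conf->device_lock);
		}
		spin_unlock_irq(&conf->device_lock);
	}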

Is there any chance to parallelize the raid5 code? I think almost everything
is done in raid5.c's make_request(), but the main loop there is spin_locked
via prepare_to_wait(). Would it be possible not to lock this entire loop?
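
For reference, this is roughly how I read that loop (heavily simplified from
drivers/md/raid5.c around 2.6.23, not a verbatim copy; expand handling and the
exact arguments are elided). As far as I can see, prepare_to_wait() itself only
arms the overlap wait queue, and the spinlock (conf->device_lock) is taken
inside get_active_stripe():

	for (; logical_sector < last_sector; logical_sector += STRIPE_SECTORS) {
		DEFINE_WAIT(w);
	retry:
		/* arm the wait queue in case this bio overlaps a pending one */
		prepare_to_wait(&conf->wait_for_overlap, &w, TASK_UNINTERRUPTIBLE);
		/* takes conf->device_lock internally (arguments simplified) */
		sh = get_active_stripe(conf, new_sector, disks, pd_idx, 0);
		if (sh) {
			if (!add_stripe_bio(sh, bi, dd_idx, bi->bi_rw & RW_MASK)) {
				/* overlap with an in-flight bio: back off, retry */
				release_stripe(sh);
				schedule();
				goto retry;
			}
			finish_wait(&conf->wait_for_overlap, &w);
			handle_stripe(sh, NULL);
			release_stripe(sh);
		} else {
			/* cannot get a stripe: error/give-up path, elided */
			finish_wait(&conf->wait_for_overlap, &w);
			break;
		}
	}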


Thanks,
Bernd

-- 
Bernd Schubert
Q-Leap Networks GmbH


Thread overview: 6+ messages
2007-10-15 15:03 experiences with raid5: stripe_queue patches Bernd Schubert
2007-10-15 16:40 ` Justin Piszcz
2007-10-16  2:01 ` Neil Brown
     [not found] ` <BAY125-W2D0CD53AC925A85655321A59C0@phx.gbl>
2007-10-16  2:04   ` Neil Brown
2007-10-16 17:31 ` Dan Williams
2007-10-17 16:59   ` Bernd Schubert
