Subject: major performance drop on raid5 due to context switches caused by small max_hw_sectors
From: Pallai Roland
Date: 2007-04-20 21:06 UTC
To: Linux RAID Mailing List


Hi!

 I made a software RAID5 array from 8 disks on top of an HPT2320 card driven 
by HighPoint's proprietary driver, in which max_hw_sectors is 64KB. I began 
to test it with simple sequential reads from 100 threads, with an adjusted 
readahead size (2048KB; total RAM is 1GB, and I call 
posix_fadvise(POSIX_FADV_DONTNEED) after reads). Bad news: I noticed very 
weak performance on this array compared to another array built from 7 disks 
on the motherboard's AHCI controllers. I dug deeper and found the root of 
the problem: if I lower max_sectors_kb on my AHCI disks, the same thing 
happens there too!
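
 For reference, each reader thread does roughly this (only a sketch, not the 
exact test program; the one-file-per-thread layout and the 1MB read size are 
illustrative):

#define _XOPEN_SOURCE 600               /* for posix_fadvise() */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK (1024 * 1024)             /* 1MB per read(), illustrative */

/* one of the 100 reader threads: read sequentially, then drop the pages
 * from the page cache so the working set stays within 1GB of RAM */
void *reader(void *arg)
{
        const char *path = arg;         /* each thread reads its own file */
        int fd = open(path, O_RDONLY);
        char *buf = malloc(BLOCK);
        off_t done = 0;
        ssize_t n;

        if (fd < 0 || buf == NULL)
                return NULL;
        while ((n = read(fd, buf, BLOCK)) > 0) {
                done += n;
                /* already-consumed pages won't be reread: release them */
                posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
        }
        free(buf);
        close(fd);
        return NULL;
}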

dap:/sys/block# for i in sd*; do echo 64 >$i/queue/max_sectors_kb; done

dap:/sys/block# vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0 14      0   8216      0 791056    0    0 103304     0 2518 57340  0 41  0 59
 3 12      0   7420      0 791856    0    0 117264     0 2600 55709  0 44  0 56
thrashed readahead pages: 123363

dap:/sys/block# for i in sd*; do echo 512 >$i/queue/max_sectors_kb; done

dap:/sys/block# vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0 100      0 182876      0 762560    0    0 350944     0 1299  1484  1 14  0 86
 0 100      0 129492      0 815460    0    0 265864     0 1432  2045  0 10  0 90
 0 100      0 112812      0 832504    0    0 290084     0 1366  1807  1 11  0 89
thrashed readahead pages: 4605


 Is it not possible to reduce the number of context switches here? With 64KB 
requests the context-switch rate is ~55,000-57,000/s and throughput is only 
~110MB/s; with 512KB requests the rate drops to ~1,500-2,000/s and 
throughput rises to ~265-350MB/s. Why do the context switches cause 
readahead thrashing? And why does only the RAID5 suffer from the small 
max_sectors_kb? It doesn't happen when I run a lot of 'badblocks' processes 
instead.


thanks,
--
 dap



