public inbox for linux-kernel@vger.kernel.org
2.6.31 under "heavy" NFS load.
From: Jesper Krogh @ 2009-11-09 19:30 UTC
  To: linux-kernel, linux-nfs

Hi List.

When a lot (~60, each on 1GbitE) of NFS clients are hitting an NFS server
that has a 10GbitE NIC in it, I'm seeing high IO-wait load (>50%) and
load averages over 100 on the server. This is a change since 2.6.29,
where the IO-wait load under a similar workload was less than 10%.

The system has 16 Opteron cores.

All the data the NFS clients are reading is "memory resident", since
they are all reading off the same 10GB of data and the server has 32GB
of main memory dedicated to nothing but serving NFS.
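
To back the "memory resident" claim up, a mincore(2) check against one
of the data files can confirm how much of it sits in the page cache. A
rough sketch (minimal error handling; the default path is just a
placeholder, pass a real file as argv[1]):

/* resident.c - report how much of a file is in the page cache. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : "/data/somefile";
        struct stat st;
        int fd = open(path, O_RDONLY);

        if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0) {
                perror(path);
                return 1;
        }
        void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        long psz = sysconf(_SC_PAGESIZE);
        size_t pages = (st.st_size + psz - 1) / psz;
        unsigned char *vec = malloc(pages);
        size_t resident = 0, i;

        if (!vec || mincore(map, st.st_size, vec)) {
                perror("mincore");
                return 1;
        }
        for (i = 0; i < pages; i++)
                resident += vec[i] & 1;  /* bit 0 = page is resident */
        printf("%s: %zu/%zu pages resident (%.1f%%)\n",
               path, resident, pages, 100.0 * resident / pages);
        return 0;
}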

A snapshot of top looks like this:
http://krogh.cc/~jesper/top-hest-2.6.31.txt
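
The IO-wait percentage top shows is derived from /proc/stat, so for
anyone who wants the raw aggregate number without top, a small sampler
along these lines should do (untested sketch; one-second sample,
aggregated over all 16 cores):

/* iowait.c - one-second aggregate iowait sample from /proc/stat. */
#include <stdio.h>
#include <unistd.h>

static int read_cpu(unsigned long long v[8])
{
        FILE *f = fopen("/proc/stat", "r");
        int n;

        if (!f)
                return -1;
        /* fields: user nice system idle iowait irq softirq steal */
        n = fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
                   &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6], &v[7]);
        fclose(f);
        return n == 8 ? 0 : -1;
}

int main(void)
{
        unsigned long long a[8], b[8], total = 0;
        int i;

        if (read_cpu(a))
                return 1;
        sleep(1);
        if (read_cpu(b))
                return 1;
        for (i = 0; i < 8; i++)
                total += b[i] - a[i];
        printf("iowait: %.1f%%\n",
               total ? 100.0 * (b[4] - a[4]) / total : 0.0);
        return 0;
}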

The load is generally a lot higher than on 2.6.29, and it "explodes" to
over 100 when a few processes begin utilizing the disk while serving
files over NFS. "dstat" reports a read rate of 10-20MB/s from disk, which
is close to what I'd expect, and the system delivers around 600-800MB/s
over the NIC under this workload.
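
The 600-800MB/s figure is from dstat; the same number can be sampled
straight from the sysfs interface counters, roughly like this (sketch
only, "eth2" is a stand-in for whatever the 10GbitE interface is named):

/* nicrate.c - one-second TX rate from the sysfs interface counters. */
#include <stdio.h>
#include <unistd.h>

static long long read_tx_bytes(const char *iface)
{
        char path[128];
        long long v = -1;
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/class/net/%s/statistics/tx_bytes", iface);
        f = fopen(path, "r");
        if (f) {
                if (fscanf(f, "%lld", &v) != 1)
                        v = -1;
                fclose(f);
        }
        return v;
}

int main(void)
{
        const char *iface = "eth2";  /* placeholder interface name */
        long long a = read_tx_bytes(iface);
        sleep(1);
        long long b = read_tx_bytes(iface);

        if (a < 0 || b < 0)
                return 1;
        printf("%s TX: %.1f MB/s\n", iface, (b - a) / 1e6);
        return 0;
}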


Sorry that I cannot be more specific. I can answer questions on a
running 2.6.31 kernel, but I cannot reboot the system back to 2.6.29
just to test, since the system is "in production". I tried 2.6.30 and it
shows the same pattern as 2.6.31, so based on that fragile evidence the
change should be found between 2.6.29 and 2.6.30. I hope a "vague"
report is better than none.

-- 
Jesper

Thread overview: 5+ messages
2009-11-09 19:30 2.6.31 under "heavy" NFS load Jesper Krogh
2009-11-10 18:41 ` J. Bruce Fields
2009-11-10 19:05   ` Jesper Krogh
2009-11-19 20:22     ` Jesper Krogh
2009-11-23 21:27       ` J. Bruce Fields
