From: Jakob Oestergaard <jakob@unthought.net>
To: Greg Banks <gnb@sgi.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: bdflush/rpciod high CPU utilization, profile does not make sense
Date: Thu, 7 Apr 2005 17:38:48 +0200 [thread overview]
Message-ID: <20050407153848.GN347@unthought.net> (raw)
In-Reply-To: <20050406231906.GA4473@sgi.com>
On Thu, Apr 07, 2005 at 09:19:06AM +1000, Greg Banks wrote:
...
> How large is the client's RAM?
2GB - (32 bit kernel because it's dual PIII, so I use highmem)
A few more details:
With standard VM settings, the client is laggy during the copy, and it
also has a load average around 10 (!). And really, the only thing I do
with it is one single 'cp' operation. The CPU hogs are pdflush,
rpciod/0 and rpciod/1.
I tweaked the VM a bit, put the following in /etc/sysctl.conf:
vm.dirty_writeback_centisecs=100
vm.dirty_expire_centisecs=200
The defaults are 500 and 3000 respectively...
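For reference, the same tuning can be applied at runtime without a reboot or
re-reading /etc/sysctl.conf; these are the exact values above (run as root):

```shell
# Apply the writeback tuning immediately; values as in /etc/sysctl.conf
# above. The kernel defaults on this 2.6 kernel were 500 and 3000.
sysctl -w vm.dirty_writeback_centisecs=100
sysctl -w vm.dirty_expire_centisecs=200

# Verify the new settings took effect:
sysctl vm.dirty_writeback_centisecs vm.dirty_expire_centisecs
```

Alternatively, `sysctl -p` re-reads /etc/sysctl.conf after editing it.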
This improved things a lot; the client is now "almost not very laggy",
and load stays in the saner 1-2 range.
Still, system CPU utilization is very high (still from rpciod and
pdflush - more rpciod and less pdflush though), and the file copying
performance over NFS is roughly half of what I get locally on the server
(8 GB file copy at 16 MB/sec over NFS versus 32 MB/sec locally).
(I run with plenty of knfsd threads on the server, and generally the
server is not very loaded when the client is pounding it as much as it
can)
> What does the following command report
> before and during the write?
>
> egrep 'nfs_page|nfs_write_data' /proc/slabinfo
During the copy I typically see:
nfs_write_data 681 952 480 8 1 : tunables 54 27 8 : slabdata 119 119 108
nfs_page 15639 18300 64 61 1 : tunables 120 60 8 : slabdata 300 300 180
The "18300" above typically goes from 12000 to 25000...
After the copy I see:
nfs_write_data 36 48 480 8 1 : tunables 54 27 8 : slabdata 5 6 0
nfs_page 1 61 64 61 1 : tunables 120 60 8 : slabdata 1 1 0
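The nfs_page objects themselves are tiny (64 bytes each in the output above),
but each one tracks a whole page of outstanding NFS data, so the active-object
count is the interesting number. A rough back-of-the-envelope sketch, using
the figures quoted above and assuming 4 KiB pages on this i686 box:

```shell
# Estimate how much page data the nfs_page slab is tracking.
# The line below is the mid-copy /proc/slabinfo output quoted above;
# field 2 is the active object count.
line="nfs_page 15639 18300 64 61 1 : tunables 120 60 8 : slabdata 300 300 180"
active=$(echo "$line" | awk '{print $2}')   # active nfs_page objects
page_kib=4                                  # assumed page size: 4 KiB
echo "~$(( active * page_kib / 1024 )) MiB of pages tracked by nfs_page"
# prints: ~61 MiB of pages tracked by nfs_page
```

So at 12000-25000 active objects the client is holding on the order of
50-100 MiB of in-flight NFS writeback at any moment.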
--
/ jakob
Thread overview: 24+ messages
2005-04-06 16:01 bdflush/rpciod high CPU utilization, profile does not make sense Jakob Oestergaard
2005-04-06 21:28 ` Trond Myklebust
2005-04-07 15:28 ` Jakob Oestergaard
2005-04-06 23:19 ` Greg Banks
2005-04-07 15:38 ` Jakob Oestergaard [this message]
2005-04-07 16:01 ` Greg Banks
2005-04-07 16:17 ` Trond Myklebust
2005-04-09 21:35 ` Jakob Oestergaard
2005-04-09 21:52 ` Trond Myklebust
2005-04-11 7:48 ` Jakob Oestergaard
2005-04-11 12:35 ` Trond Myklebust
2005-04-11 13:47 ` Jakob Oestergaard
2005-04-11 14:35 ` Trond Myklebust
2005-04-11 14:41 ` Jakob Oestergaard
2005-04-11 15:21 ` Trond Myklebust
2005-04-11 15:42 ` Jakob Oestergaard
2005-04-12 1:03 ` Greg Banks
2005-04-12 9:28 ` Jakob Oestergaard
2005-04-19 19:45 ` Jakob Oestergaard
2005-04-19 22:46 ` Trond Myklebust
2005-04-20 13:57 ` Jakob Oestergaard
2005-04-24 7:15 ` Jakob Oestergaard
2005-04-25 3:09 ` Trond Myklebust
2005-04-25 13:50 ` Jakob Oestergaard