From: Greg Banks <gnb@sgi.com>
To: Jakob Oestergaard <jakob@unthought.net>, linux-kernel@vger.kernel.org
Subject: Re: bdflush/rpciod high CPU utilization, profile does not make sense
Date: Fri, 8 Apr 2005 02:01:57 +1000 [thread overview]
Message-ID: <20050407160157.GD8579@sgi.com> (raw)
In-Reply-To: <20050407153848.GN347@unthought.net>
On Thu, Apr 07, 2005 at 05:38:48PM +0200, Jakob Oestergaard wrote:
> On Thu, Apr 07, 2005 at 09:19:06AM +1000, Greg Banks wrote:
> ...
> > How large is the client's RAM?
>
> 2GB - (32 bit kernel because it's dual PIII, so I use highmem)
Ok, that's probably not enough to fully trigger some of the problems
I've seen on large-memory NFS clients.
> A few more details:
>
> With standard VM settings, the client will be laggy during the copy, but
> it will also have a load average around 10 (!) And really, the only
> thing I do with it is one single 'cp' operation. The CPU hogs are
> pdflush, rpciod/0 and rpciod/1.
NFS writes of single files much larger than client RAM still have
interesting issues.
> I tweaked the VM a bit, put the following in /etc/sysctl.conf:
> vm.dirty_writeback_centisecs=100
> vm.dirty_expire_centisecs=200
>
> The defaults are 500 and 3000 respectively...
Yes, you want more frequent and smaller writebacks. It may help to
reduce vm.dirty_ratio and possibly vm.dirty_background_ratio.
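For example, something along these lines in /etc/sysctl.conf (the exact
numbers are only a starting point I'm guessing at, not values tuned for
your box):

    vm.dirty_ratio=10
    vm.dirty_background_ratio=5

That starts background writeback once 5% of memory is dirty and throttles
writers at 10%, instead of letting a large fraction of your 2GB pile up
before anything gets flushed.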
> This improved things a lot; the client is now "almost not very laggy",
> and load stays in the saner 1-2 range.
>
> Still, system CPU utilization is very high (still from rpciod and
> pdflush - more rpciod and less pdflush though),
This is probably the rpciods and pdflush all trying to do things
at the same time and contending for the BKL.
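If you want to confirm where the system time is going, the kernel
profiler is the quickest check. Roughly (this assumes you can reboot
with the profile=2 boot parameter and that /boot/System.map matches the
running kernel):

    readprofile -r                                  # reset the counters
    # ... let the copy run for a minute or two ...
    readprofile -m /boot/System.map | sort -nr | head -20

If a good chunk of the ticks land in lock_kernel() and friends, that's
the BKL contention showing up.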
> During the copy I typically see:
>
> nfs_write_data 681 952 480 8 1 : tunables 54 27 8 : slabdata 119 119 108
> nfs_page 15639 18300 64 61 1 : tunables 120 60 8 : slabdata 300 300 180
That's not so bad; it's only about 3% of the system's pages.
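(The arithmetic, for the record: 15639 active nfs_page entries, each
covering one page of data, against 2GB / 4kB = 524288 pages, comes to
just under 3%.)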
Greg.
--
Greg Banks, R&D Software Engineer, SGI Australian Software Group.
I don't speak for SGI.