public inbox for linux-kernel@vger.kernel.org
From: Jakob Oestergaard <jakob@unthought.net>
To: Marcelo Tosatti <marcelo.tosatti@cyclades.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: 2.4.25 - large inode_cache
Date: Thu, 26 Feb 2004 14:03:45 +0100	[thread overview]
Message-ID: <20040226130344.GP29776@unthought.net> (raw)
In-Reply-To: <Pine.LNX.4.58L.0402261004310.5003@logos.cnet>

On Thu, Feb 26, 2004 at 10:08:23AM -0300, Marcelo Tosatti wrote:
...
> >
> > free output is this:
> >              total       used       free     shared    buffers     cached
> > Mem:        515980     506464       9516          0       2272      19204
> > -/+ buffers/cache:     484988      30992
> > Swap:      1951856       7992    1943864
> 
> This should be normal behaviour -- the i/d caches grew because of file
> system activity. This memory will be reclaimed in case there's pressure.

But how is "pressure" defined?

Will a heap of busy knfsd processes doing reads or writes exert
pressure?  Or is it only local userspace that can pressurize the VM (by
either anonymously backed memory or file I/O)?

This server happily serves large home directories over NFS, at really
poor speeds.  It will happily serve tens or hundreds of gigabytes, read
and write, over the course of a day, and *still* only cache about 100MB
NFS to/from the server is slow. It's common to see 10 knfsd processes in
D state while vmstat tells me the array works with about 4-6MB/sec
sustained throughput (where hdparm -t would give me more than 70MB/sec
on the md device).
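For what it's worth, a sequential read through the page cache can be
timed with dd as a cross-check against hdparm -t (which reads the raw
device and bypasses the cache). This is only a sketch; the file name is
illustrative and should be a real file on the md array:

```shell
# Rough throughput cross-check: time a cached sequential read.
# "testfile" is illustrative -- use a real file on the exported array.
dd if=/dev/zero of=testfile bs=1M count=64 2>/dev/null   # make a 64 MB file
sync                                                     # flush dirty pages first
time dd if=testfile of=/dev/null bs=1M                   # timed sequential read
rm -f testfile
```

Throughput is then bytes read divided by the elapsed time reported.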

The files read and written are commonly in the 20-60 MB range, so it's
not just because I'm loading the server with small seeks. Many files are
read multiple times within a few minutes, so the 100MB cache usage looks
completely bogus to me - but maybe there's just something I don't know
about the caching?   :)
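For reference, the ~100MB figure comes from summing the inode and dentry
slab lines. On 2.4, the memory held can be estimated from /proc/slabinfo
roughly like this - the sample lines below are invented, standing in for
the real file, just to illustrate the 2.4 "name active_objs total_objs
obj_size ..." column layout:

```shell
# Estimate memory held by the inode and dentry slab caches.
# 2.4's /proc/slabinfo columns: name active_objs total_objs obj_size ...
# The sample data below is made up; on a live box, read /proc/slabinfo.
cat <<'EOF' > slabinfo.sample
inode_cache       205000 205300   480
dentry_cache      190000 190200   128
buffer_head        12000  12500    96
EOF

awk '/^(inode_cache|dentry_cache)/ {
         kb = $3 * $4 / 1024            # total objects * object size, in KB
         printf "%-14s %8.0f KB\n", $1, kb
         sum += kb
     }
     END { printf "%-14s %8.0f KB\n", "total", sum }' slabinfo.sample
```

Note this counts all allocated objects, including ones the slab keeps
around but the VM could reclaim under pressure.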

> 
> Is the behaviour different from previous 2.4 or 2.6 kernels?

I never investigated the slabinfo on earlier 2.4. But the performance on
this server has been "under expectations" for as long as I can remember.
So, from the performance experience on this server I would say that
2.4.25 is not any worse than older kernels.

Since this is a production system I have been reluctant to jump on the
2.6 wagon - but my other experiences with 2.6.X have been good, so I'm
probably going to soften up and give it a try in the not-too-distant
future.

However, if this dcache/icache problem is well known and is (or at least
should be) solved in 2.6, then I can do the test this weekend.

Any enlightenment or suggestions are greatly appreciated :)

Thanks,

 / jakob




Thread overview: 9+ messages
2004-02-26  1:33 2.4.25 - large inode_cache Jakob Oestergaard
2004-02-26 11:19 ` Christian Leber
2004-02-26 13:08   ` Marcelo Tosatti
2004-02-26 13:03     ` Jakob Oestergaard [this message]
2004-02-26 14:23       ` Marcelo Tosatti
2004-02-26 13:53         ` Jakob Oestergaard
2004-02-26 17:43         ` Andreas Dilger
2004-02-26 20:43           ` Marcelo Tosatti
2004-02-27 12:27             ` Jakob Oestergaard
