public inbox for linux-kernel@vger.kernel.org
From: Andrea Arcangeli <andrea@suse.de>
To: "Martin J. Bligh" <Martin.Bligh@us.ibm.com>
Cc: Rik van Riel <riel@conectiva.com.br>,
	Andrew Morton <akpm@zip.com.au>,
	Martin Schwidefsky <schwidefsky@de.ibm.com>,
	linux-kernel@vger.kernel.org, Alan Cox <alan@lxorguk.ukuu.org.uk>
Subject: Re: Bug with shared memory.
Date: Tue, 21 May 2002 03:40:18 +0200	[thread overview]
Message-ID: <20020521014018.GP21806@dualathlon.random> (raw)
In-Reply-To: <20020520141523.GB21806@dualathlon.random> <Pine.LNX.4.44L.0205201618110.24352-100000@imladris.surriel.com> <20020520234622.GL21806@dualathlon.random> <262840000.1021940064@flay>

On Mon, May 20, 2002 at 05:14:24PM -0700, Martin J. Bligh wrote:
> > For the memclass_related_bhs() fix in -aa, that's in the testing TODO
> > list of Martin (on the multi giga machines), he also nicely proposed to
> > compare it to the other throw-away-all-bh-regardless patch from Andrew
> > (that I actually didn't seen floating around yet but it's clear how it
> > works, it's a subset of memclass_related_bhs). However the right way to
> > test the memclass_related_bhs vs throw-away-all-bh, is to run a rewrite
> > test that fits in cache, so write,fsync,write,fsync,write,fsync. specweb
> > or any other read-only test will obviously perform exactly the same both
> > ways (actually theoretically a bit cpu-faster in throw-away-all-bh
> > because it doesn't check the bh list).
> 
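(For completeness, such a cache-hot rewrite test can be trivially scripted; this is only a sketch, the file name and sizes are arbitrary. The point is that the file fits in memory, so the only work that differs between the two patches is the bh handling, not disk reads.)

```shell
#!/bin/sh
# Sketch of a cache-hot rewrite test: repeatedly overwrite the same
# small file and fsync it, so the workload is write,fsync,write,fsync.
# File name and sizes are arbitrary choices for illustration.
F=/tmp/rewrite-test.dat
for i in $(seq 1 50); do
    dd if=/dev/zero of="$F" bs=4k count=64 conv=notrunc,fsync 2>/dev/null
done
ls -l "$F"
```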
> The only thing that worries me in theory about your approach for this
> Andrea is fragmentation - if we try to shrink only when we're low on
> memory, isn't there a danger that one buffer_head per page of slab
> cache will be in use, and thus no pages are freeable (obviously this
> is extreme, but I can certainly see a situation with lots of partially used
> pages)? 

well, then you should be worried first about the whole /proc/slabinfo, not
just the bh headers :) if it's a problem for the bh, it's a problem for
everything else.
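If you want to eyeball it, the per-cache waste falls straight out of the
slabinfo numbers. A quick userspace sketch (the sample lines are made up;
the 2.4-era field layout is assumed: name, active objs, total objs, object
size, active slabs, total slabs, pages per slab):

```python
# Sketch: measure per-cache object waste from /proc/slabinfo-style lines.
# SAMPLE is invented data in the 2.4 slabinfo field layout:
#   name  active_objs  total_objs  objsize  active_slabs  total_slabs  pages_per_slab
SAMPLE = """\
buffer_head 2340 5850 96 45 45 1
inode_cache 1200 1200 480 150 150 1
"""

def fragmentation(lines):
    stats = {}
    for line in lines.splitlines():
        fields = line.split()
        name, active, total = fields[0], int(fields[1]), int(fields[2])
        # allocated-but-unused objects keep their slab pages pinned
        # until something reclaims or reuses them
        stats[name] = {"active": active, "total": total,
                       "unused": total - active}
    return stats

for name, s in fragmentation(SAMPLE).items():
    print(name, "unused objects pinning slab pages:", s["unused"])
```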

The reason fragmentation is never a problem is that as long as the
persistent slab objects can be reclaimed dynamically by the VM, we will
always be able to free all the slab pages. The only downside, compared
to being aware of the slab fragmentation, is that we risk reclaiming
more objects than necessary at the layer above the slab cache (so at the
bh layer), but dropping more bhs than necessary will never be a problem
(Andrew indeed wants to drop them all).

> With Andrew's approach, keeping things freed as we go, we should
> reuse the partially allocated slab pages, which would seem (to me)
> to result in less fragmentation?

less fragmentation, sure, because zero bhs are allocated from the slab
cache :).

Andrea


Thread overview: 29+ messages
2002-05-14 15:13 Bug with shared memory Martin Schwidefsky
2002-05-14 19:33 ` Andrew Morton
2002-05-15 22:42   ` Mike Kravetz
2002-05-15 23:07     ` Andrew Morton
2002-05-17 17:53     ` Bill Davidsen
2002-05-17 20:07       ` Mike Kravetz
2002-05-17 20:29         ` Anton Blanchard
2002-05-20  4:30   ` Andrea Arcangeli
2002-05-20  5:21     ` Andrew Morton
2002-05-20 11:34       ` Andrey Savochkin
2002-05-20 14:15       ` Andrea Arcangeli
2002-05-20 19:24         ` Rik van Riel
2002-05-20 23:46           ` Andrea Arcangeli
2002-05-21  0:14             ` Martin J. Bligh
2002-05-21  1:40               ` Andrea Arcangeli [this message]
2002-05-20 16:22       ` Martin J. Bligh
2002-05-20 19:38         ` Rik van Riel
2002-05-20 20:06           ` William Lee Irwin III
2002-05-20 16:13     ` Martin J. Bligh
2002-05-20 16:37       ` Andrea Arcangeli
2002-05-20 17:23         ` Martin J. Bligh
2002-05-20 17:32           ` William Lee Irwin III
2002-05-24  7:33     ` inode highmem imbalance fix [Re: Bug with shared memory.] Andrea Arcangeli
2002-05-24  7:51       ` William Lee Irwin III
2002-05-24  8:04       ` Andrew Morton
2002-05-24 15:20         ` Andrea Arcangeli
2002-05-24 11:47       ` Ed Tomlinson
2002-05-30 11:25       ` Denis Lunev
2002-05-30 17:59         ` Andrea Arcangeli
