From: Con Kolivas <kernel@kolivas.org>
To: Christoph Lameter <clameter@engr.sgi.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, alokk@calsoftinc.com
Subject: Re: [RFC, PATCH] Slab counter troubles with swap prefetch?
Date: Fri, 11 Nov 2005 10:07:10 +1100 [thread overview]
Message-ID: <200511111007.12872.kernel@kolivas.org> (raw)
In-Reply-To: <Pine.LNX.4.62.0511101351120.16380@schroedinger.engr.sgi.com>
Hi Christoph,
On Fri, 11 Nov 2005 08:55, Christoph Lameter wrote:
> Currently the slab allocator uses a page_state counter called nr_slab.
> The VM swap prefetch code assumes that this describes the number of pages
> used on a node by the slab allocator. However, that is not really true.
>
> Currently nr_slab is the number of total pages allocated which may
> be local or remote pages. Remote allocations may artificially inflate
> nr_slab and therefore disable swap prefetching.
Thanks for pointing this out.
> This patch splits the counter into the nr_local_slab which reflects
> slab pages allocated from the local zones (and this number is useful
> at least as a guidance for the VM) and the remotely allocated pages.
How large a contribution is the remote slab size likely to be? Would this
information potentially be useful to any other code in future, besides swap
prefetch? The nature of prefetch is that this is only a fairly coarse measure
of how full the VM is with data we don't want to displace, so it is also not
important that it be very accurate.
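To illustrate the coarseness argument, here is a minimal userspace sketch, not the actual kernel code; the function name, parameters, and the idea of a simple headroom threshold are all assumptions made for illustration:

```c
#include <stdbool.h>

/* Hypothetical sketch: swap prefetch only needs a coarse answer to
 * "is the VM already full of data we don't want to displace?".
 * A slab page count feeds a broad threshold test, so a modest
 * overestimate (e.g. remotely allocated slab pages inflating the
 * count on NUMA) only pushes the check toward not prefetching. */
static bool prefetch_suitable(unsigned long free_pages,
                              unsigned long slab_pages,
                              unsigned long headroom)
{
	/* An inflated slab_pages can only suppress prefetching,
	 * never trigger it wrongly. */
	return free_pages > slab_pages + headroom;
}
```

Under this kind of check, the error introduced by remote slab pages is one-sided: prefetch may start a little later than it could, but never displaces data it should have left alone.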
Unless the remote slab size can be a very large contribution, or having
separate local and remote slab sizes is potentially useful to some other code,
I'm inclined to say this is unnecessary. A simple comment would suffice,
something like: "the nr_slab estimate is artificially elevated by remote slab
pages on NUMA; however, this contribution is not important to the accuracy of
this algorithm". Of course it is nice to be more accurate, and if you think it
worthwhile then we can do this - I'll be happy to be guided by your
judgement.
As a side note, I doubt any serious-sized NUMA hardware will ever be idle
enough by swap prefetch's standards to even start prefetching swap pages. If
you think hardware of this sort is likely to benefit from swap prefetch, then
perhaps we should look at relaxing the conditions under which prefetching
occurs.
Cheers,
Con
Thread overview: 6+ messages
2005-11-10 21:55 [RFC, PATCH] Slab counter troubles with swap prefetch? Christoph Lameter
2005-11-10 23:07 ` Con Kolivas [this message]
2005-11-10 23:13 ` Christoph Lameter
2005-11-10 23:17 ` Con Kolivas
2005-11-11 3:50 ` Con Kolivas
2005-11-11 17:43 ` Christoph Lameter