From: Peter Zijlstra <a.p.zijlstra@chello.nl>
To: Nikita Danilov <nikita@clusterfs.com>
Cc: Wu Fengguang <wfg@mail.ustc.edu.cn>,
Andrew Morton <akpm@osdl.org>,
Linux Kernel Mailing List <Linux-Kernel@vger.kernel.org>
Subject: Re: [PATCH 01/16] mm: delayed page activation
Date: Sun, 04 Dec 2005 20:10:23 +0100 [thread overview]
Message-ID: <1133723423.27985.10.camel@twins> (raw)
In-Reply-To: <17299.1331.368159.374754@gargle.gargle.HOWL>
On Sun, 2005-12-04 at 18:03 +0300, Nikita Danilov wrote:
> Wu Fengguang writes:
> > On Sun, Dec 04, 2005 at 03:11:28PM +0300, Nikita Danilov wrote:
> > > Wu Fengguang writes:
> > > > When a page is referenced the second time in inactive_list, mark it with
> > > > PG_activate instead of moving it into active_list immediately. The actual
> > > > moving work is delayed to vmscan time.
> > > >
> > > > This implies two essential changes:
> > > > - keeps the adjacency of pages in lru;
> > >
> > > But this change destroys LRU ordering: at the time when shrink_list()
> > > inspects PG_activate bit, information about order in which
> > > mark_page_accessed() was called against pages is lost. E.g., suppose
> >
> > Thanks.
> > But this ordering by re-access time may be pointless. In fact the original
> > mark_page_accessed() is doing another inversion: an inversion of page lifetime.
> > In the world of CLOCK-Pro, a page first being re-accessed has a lower
>
> The brave new world of CLOCK-Pro is still yet to happen, right?
Well, I have an implementation that is showing very promising results. I
plan to polish the code a bit and post it somewhere this week.
(current state available at: http://linux-mm.org/PeterZClockPro2)
> > inter-reference distance, and therefore should be better protected (if we
> > ignore possible read-ahead effects). If we move re-accessed pages immediately
> > into active_list, we are pushing them closer to the danger of eviction.
>
> Huh? Pages in the active list are closer to eviction? If that is
> really so, then CLOCK-pro hijacks the meaning of the active list in a very
> unintuitive way. In the current MM, the active list is supposed to contain
> hot pages that will be evicted last.
Actually, CLOCK-pro does not have an active list. Pure CLOCK-pro has but
one clock. It is possible to create approximations that have more
lists/clocks, and in those the meaning of the active list is indeed
somewhat different, but I agree with Nikita here, this is odd.
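[Editor's note: a minimal sketch of the single-hand CLOCK sweep Peter alludes to, with a hypothetical `clock_page` structure standing in for the real page descriptor; this illustrates the "one clock" idea only and is not the CLOCK-Pro or kernel implementation.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical page descriptor: only the field a pure CLOCK needs. */
struct clock_page {
	bool referenced;	/* analogue of the page's reference bit */
};

/*
 * One victim selection by a single clock hand: a page whose reference
 * bit is set gets a second chance (bit cleared, hand advances); the
 * first page found with the bit already clear is evicted.
 */
static size_t clock_pick_victim(struct clock_page *pages, size_t n,
				size_t *hand)
{
	for (;;) {
		struct clock_page *p = &pages[*hand];
		size_t idx = *hand;

		*hand = (*hand + 1) % n;	/* advance the hand */
		if (!p->referenced)
			return idx;		/* cold since last pass: evict */
		p->referenced = false;		/* second chance */
	}
}
```

Note how hotness lives entirely in the reference bits, not in membership of an active versus inactive list, which is why the active-list terminology maps poorly onto the pure algorithm.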
> Anyway, these issues should be addressed in CLOCK-pro
> implementation. Current MM tries hard to maintain LRU approximation in
> both active and inactive lists.
nod.
Peter Zijlstra
(he who has dedicated his spare time to the eradication of LRU ;-)
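[Editor's note: a simplified sketch of the delayed-activation scheme the patch describes, with a toy `struct page` and stand-in flag fields; the real kernel code manipulates LRU lists under locks and is considerably more involved.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy page state mirroring the flags discussed in the thread. */
struct page {
	bool pg_referenced;	/* first access recorded */
	bool pg_activate;	/* second access recorded; move deferred */
	bool on_active_list;	/* stand-in for the actual LRU move */
};

/* Delayed variant: a second reference only sets PG_activate. */
static void mark_page_accessed_delayed(struct page *p)
{
	if (!p->pg_referenced)
		p->pg_referenced = true;	/* first touch */
	else
		p->pg_activate = true;		/* re-touch: defer the move */
}

/* At scan time the deferred bit, not list position, drives activation. */
static void shrink_list_one(struct page *p)
{
	if (p->pg_activate) {
		p->pg_activate = false;
		p->on_active_list = true;	/* the move happens here */
	}
}
```

This is what keeps pages adjacent on the inactive list until vmscan runs, and also why the relative order of mark_page_accessed() calls between two flagged pages is no longer recoverable, which is Nikita's objection.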
Thread overview: 41+ messages
2005-12-03 7:14 [PATCH 00/16] Adaptive read-ahead V9 Wu Fengguang
2005-12-03 7:14 ` [PATCH 01/16] mm: delayed page activation Wu Fengguang
2005-12-04 12:11 ` Nikita Danilov
2005-12-04 13:48 ` Wu Fengguang
2005-12-04 15:03 ` Nikita Danilov
2005-12-04 15:37 ` Help!Unable to handle kernel NULL pointer tony
2005-12-04 19:10 ` Peter Zijlstra [this message]
2005-12-05 1:48 ` [PATCH 01/16] mm: delayed page activation Wu Fengguang
2005-12-06 17:55 ` Nikita Danilov
2005-12-07 1:42 ` Wu Fengguang
2005-12-07 9:46 ` Andrew Morton
2005-12-07 10:36 ` Wu Fengguang
2005-12-07 12:44 ` Nikita Danilov
2005-12-07 13:53 ` Wu Fengguang
2005-12-03 7:14 ` [PATCH 02/16] radixtree: sync with mainline Wu Fengguang
2005-12-04 23:57 ` Andrew Morton
2005-12-05 1:43 ` Wu Fengguang
2005-12-05 4:05 ` Wu Fengguang
2005-12-05 17:22 ` Christoph Lameter
2005-12-05 10:44 ` Wu Fengguang
2005-12-05 17:24 ` Christoph Lameter
2005-12-06 2:23 ` Wu Fengguang
2005-12-03 7:14 ` [PATCH 03/16] radixtree: look-aside cache Wu Fengguang
2005-12-03 7:14 ` [PATCH 04/16] readahead: some preparation Wu Fengguang
2005-12-03 7:14 ` [PATCH 05/16] readahead: call scheme Wu Fengguang
2005-12-03 7:14 ` [PATCH 06/16] readahead: parameters Wu Fengguang
2005-12-03 7:14 ` [PATCH 07/16] readahead: state based method Wu Fengguang
2005-12-03 7:14 ` [PATCH 08/16] readahead: context " Wu Fengguang
2005-12-03 7:14 ` [PATCH 09/16] readahead: read-around method for mmap file Wu Fengguang
2005-12-03 7:14 ` [PATCH 10/16] readahead: other methods Wu Fengguang
2005-12-03 7:14 ` [PATCH 11/16] readahead: detect and rescue live pages Wu Fengguang
2005-12-03 7:14 ` [PATCH 12/16] readahead: events accounting Wu Fengguang
2005-12-03 7:14 ` [PATCH 13/16] readahead: laptop mode support Wu Fengguang
2005-12-03 7:14 ` [PATCH 14/16] readahead: disable look-ahead for loopback file Wu Fengguang
2005-12-03 7:14 ` [PATCH 15/16] readahead: nfsd support Wu Fengguang
2005-12-03 7:15 ` [PATCH 16/16] io: prevent too much latency in the read-ahead code Wu Fengguang
-- strict thread matches above, loose matches on Subject: below --
2005-11-09 13:49 [PATCH 00/16] Adaptive read-ahead V7 Wu Fengguang
2005-11-09 13:49 ` [PATCH 01/16] mm: delayed page activation Wu Fengguang
2005-11-10 0:21 ` Nick Piggin
2005-11-10 3:15 ` Wu Fengguang
2005-11-10 9:17 ` Peter Zijlstra
2005-11-10 10:30 ` Wu Fengguang