From: Johannes Weiner <jweiner@redhat.com>
To: Michal Hocko <mhocko@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>,
Balbir Singh <bsingharora@gmail.com>,
Ying Han <yinghan@google.com>, Greg Thelen <gthelen@google.com>,
Michel Lespinasse <walken@google.com>,
Rik van Riel <riel@redhat.com>,
Minchan Kim <minchan.kim@gmail.com>,
Christoph Hellwig <hch@infradead.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [patch 07/11] mm: vmscan: convert unevictable page rescue scanner to per-memcg LRU lists
Date: Wed, 21 Sep 2011 15:47:51 +0200
Message-ID: <20110921134751.GD22516@redhat.com>
In-Reply-To: <20110921123354.GC8501@tiehlicka.suse.cz>
On Wed, Sep 21, 2011 at 02:33:56PM +0200, Michal Hocko wrote:
> On Mon 12-09-11 12:57:24, Johannes Weiner wrote:
> > The global per-zone LRU lists are about to go away on memcg-enabled
> > kernels, so the unevictable page rescue scanner must be able to find
> > its pages on the per-memcg LRU lists.
> >
> > Signed-off-by: Johannes Weiner <jweiner@redhat.com>
>
> The patch is correct but I guess the original implementation of
> scan_zone_unevictable_pages is buggy (see below). This should be
> addressed separately, though.
>
> Reviewed-by: Michal Hocko <mhocko@suse.cz>
Thanks for your effort, Michal, I really appreciate it.
> > @@ -3490,32 +3501,40 @@ void scan_mapping_unevictable_pages(struct address_space *mapping)
> > #define SCAN_UNEVICTABLE_BATCH_SIZE 16UL /* arbitrary lock hold batch size */
> > static void scan_zone_unevictable_pages(struct zone *zone)
> > {
> > - struct list_head *l_unevictable = &zone->lru[LRU_UNEVICTABLE].list;
> > - unsigned long scan;
> > - unsigned long nr_to_scan = zone_page_state(zone, NR_UNEVICTABLE);
> > -
> > - while (nr_to_scan > 0) {
> > - unsigned long batch_size = min(nr_to_scan,
> > - SCAN_UNEVICTABLE_BATCH_SIZE);
> > -
> > - spin_lock_irq(&zone->lru_lock);
> > - for (scan = 0; scan < batch_size; scan++) {
> > - struct page *page = lru_to_page(l_unevictable);
> > + struct mem_cgroup *mem;
> >
> > - if (!trylock_page(page))
> > - continue;
> > + mem = mem_cgroup_iter(NULL, NULL, NULL);
> > + do {
> > + struct mem_cgroup_zone mz = {
> > + .mem_cgroup = mem,
> > + .zone = zone,
> > + };
> > + unsigned long nr_to_scan;
> >
> > - prefetchw_prev_lru_page(page, l_unevictable, flags);
> > + nr_to_scan = zone_nr_lru_pages(&mz, LRU_UNEVICTABLE);
> > + while (nr_to_scan > 0) {
> > + unsigned long batch_size;
> > + unsigned long scan;
> >
> > - if (likely(PageLRU(page) && PageUnevictable(page)))
> > - check_move_unevictable_page(page, zone);
> > + batch_size = min(nr_to_scan,
> > + SCAN_UNEVICTABLE_BATCH_SIZE);
> > + spin_lock_irq(&zone->lru_lock);
> > + for (scan = 0; scan < batch_size; scan++) {
> > + struct page *page;
> >
> > - unlock_page(page);
> > + page = lru_tailpage(&mz, LRU_UNEVICTABLE);
> > + if (!trylock_page(page))
> > + continue;
>
> We are not moving on to the next page, so we will try the same page
> again in the next round even though we have already increased the scan
> count. In the end we will miss some pages.
I guess this is about latency. This code is only executed when the
user explicitly requests it by writing to a proc file; see the comment
above scan_all_zones_unevictable_pages.

I think at one point Lee wanted to move anon pages to the unevictable
LRU when no swap is configured, but we have separate anon LRUs now that
are not scanned without swap. So except for bugs, I don't think there
is any actual need to move these pages by hand, let alone to reliably
catch every single page.
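
(For context: the proc file in question is /proc/sys/vm/scan_unevictable_pages.
The sketch below is illustrative only and not part of the patch under review.
If the retry behaviour Michal points out were ever worth fixing, one
conceivable tweak, shown here against the pre-patch, global-LRU layout of the
function quoted above, would be to rotate a page whose trylock fails to the
head of the list so the batch moves on instead of retrying the same tail page.
The helper name is made up for the sketch.)

/*
 * Illustrative sketch only -- not the code that was merged.
 * Batch body of the pre-patch scan_zone_unevictable_pages().
 */
static void scan_unevictable_batch(struct zone *zone,
				   unsigned long batch_size)
{
	struct list_head *l_unevictable = &zone->lru[LRU_UNEVICTABLE].list;
	unsigned long scan;

	spin_lock_irq(&zone->lru_lock);
	for (scan = 0; scan < batch_size; scan++) {
		struct page *page = lru_to_page(l_unevictable);

		if (!trylock_page(page)) {
			/*
			 * Assumed tweak: move the page to the head so the
			 * next lru_to_page(), which takes from the tail,
			 * picks a different page instead of burning the
			 * rest of the batch on this one.
			 */
			list_move(&page->lru, l_unevictable);
			continue;
		}

		if (likely(PageLRU(page) && PageUnevictable(page)))
			check_move_unevictable_page(page, zone);

		unlock_page(page);
	}
	spin_unlock_irq(&zone->lru_lock);
}

(Rotating on a failed trylock would roughly mirror what
check_move_unevictable_page() already does with pages that stay unevictable,
so a locked page could no longer pin the scan to one spot. Given that this is
a best-effort debugging knob, leaving the code as-is, as argued above, is just
as reasonable.)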