From: Yafang Shao <laoar.shao@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>, Linux MM <linux-mm@kvack.org>,
	shaoyafang@didiglobal.com
Subject: Re: [PATCH 2/2] mm/vmscan: shrink slab in node reclaim
Date: Thu, 23 May 2019 12:56:42 +0800	[thread overview]
Message-ID: <CALOAHbCLRH1otrXkBKe1JD0w8YuRhXoi8yrkAUxDvdyv+FJ4eg@mail.gmail.com> (raw)
In-Reply-To: <20190522144014.9ea621c56cd80461fcd26a61@linux-foundation.org>

On Thu, May 23, 2019 at 5:40 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Thu,  9 May 2019 16:07:49 +0800 Yafang Shao <laoar.shao@gmail.com> wrote:
>
> > In node reclaim, may_shrinkslab is 0 by default, so shrink_slab()
> > is never performed there, while shrink_slab() should be performed
> > when the reclaimable slab is over the min slab limit.
> >
> > This issue is very easy to reproduce: first, continuously cat
> > random nonexistent files to produce more and more dentries, then
> > read a big file to produce page cache. You will find that the
> > dentries are never shrunk.
>
> It does sound like an oversight.
>
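For what it's worth, a rough userspace sketch of the reproducer described
above (using stat() for the failed lookups; the paths, loop count, and
file size are only illustrative):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
            static char buf[1 << 20];
            struct stat st;
            char name[64];
            long i;
            int fd;

            /* Grow the dentry cache with lookups of nonexistent files. */
            for (i = 0; i < 10000000; i++) {
                    snprintf(name, sizeof(name), "/tmp/no-such-file-%ld", i);
                    stat(name, &st);        /* fails, leaving a negative dentry */
            }

            /* Fill the page cache by reading a large existing file. */
            fd = open("/tmp/bigfile", O_RDONLY);
            if (fd >= 0) {
                    while (read(fd, buf, sizeof(buf)) > 0)
                            ;
                    close(fd);
            }
            return 0;
    }
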
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -4141,6 +4141,8 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
> >               .may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
> >               .may_swap = 1,
> >               .reclaim_idx = gfp_zone(gfp_mask),
> > +             .may_shrinkslab = node_page_state(pgdat, NR_SLAB_RECLAIMABLE) >
> > +                               pgdat->min_slab_pages,
> >       };
> >
> >       trace_mm_vmscan_node_reclaim_begin(pgdat->node_id, order,
> > @@ -4158,15 +4160,13 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
> >       reclaim_state.reclaimed_slab = 0;
> >       p->reclaim_state = &reclaim_state;
> >
> > -     if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages) {
>
> Would it be better to do
>
>         if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages ||
>                         sc.may_shrinkslab) {
>

This if condition is always true here, because we already check both
thresholds in node_reclaim(); see below:

    if (node_pagecache_reclaimable(pgdat) <= pgdat->min_unmapped_pages &&
        node_page_state(pgdat, NR_SLAB_RECLAIMABLE) <= pgdat->min_slab_pages)
        return NODE_RECLAIM_FULL;


> >               /*
> >                * Free memory by calling shrink node with increasing
> >                * priorities until we have enough memory freed.
> >                */
>
> The above will want re-indenting and re-right-justifying.
>

Sorry about the carelessness.

> > -             do {
> > -                     shrink_node(pgdat, &sc);
> > -             } while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
> > -     }
> > +     do {
> > +             shrink_node(pgdat, &sc);
> > +     } while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
>
> Won't this cause pagecache reclaim and compaction which previously did
> not occur?  If yes, what are the effects of this and are they
> desirable?  If no, perhaps call shrink_slab() directly in this case.
> Or something like that.
>

It may cause pagecache reclaim and compaction even when
node_pagecache_reclaimable() is still below pgdat->min_unmapped_pages:
the active file pages will be deactivated and the inactive file pages
will be reclaimed. (I traced this behavior with the
mm_vmscan_lru_shrink_active and mm_vmscan_lru_shrink_inactive
tracepoints.)

If we don't want that behavior, what about the change below, which only
shrinks the slab caches (walking the memcgs on the node) and leaves the
LRU lists alone?

@@ -4166,6 +4166,17 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
                do {
                        shrink_node(pgdat, &sc);
                } while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
+       } else {
+               struct mem_cgroup *memcg;
+               struct mem_cgroup_reclaim_cookie reclaim = {
+                       .pgdat = pgdat,
+                       .priority = sc.priority,
+               };
+
+               memcg = mem_cgroup_iter(NULL, NULL, &reclaim);
+               do {
+                       shrink_slab(sc.gfp_mask, pgdat->node_id, memcg, sc.priority);
+               } while ((memcg = mem_cgroup_iter(NULL, memcg, &reclaim)));

        }
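
Put together, the tail of __node_reclaim() would then roughly read as
below (an untested sketch, not a literal patch): either do the normal
node reclaim, or only walk the memcgs on this node and shrink their
slab caches, leaving the LRU lists alone.

        if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages) {
                /*
                 * Free memory by calling shrink node with increasing
                 * priorities until we have enough memory freed.
                 */
                do {
                        shrink_node(pgdat, &sc);
                } while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
        } else {
                /*
                 * Only the reclaimable slab is over its limit, so shrink
                 * the slab caches of every memcg on this node and leave
                 * the LRU lists untouched.
                 */
                struct mem_cgroup *memcg;
                struct mem_cgroup_reclaim_cookie reclaim = {
                        .pgdat = pgdat,
                        .priority = sc.priority,
                };

                memcg = mem_cgroup_iter(NULL, NULL, &reclaim);
                do {
                        shrink_slab(sc.gfp_mask, pgdat->node_id, memcg,
                                    sc.priority);
                } while ((memcg = mem_cgroup_iter(NULL, memcg, &reclaim)));
        }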


> It's unclear why min_unmapped_pages (min_unmapped_ratio) exists. Is it
> a batch-things-up efficiency thing?

I have tried to understand it, but I still don't have a clear idea yet,
so I just left it as-is. I guess it is a batching/efficiency thing.

Thanks
Yafang

