linux-mm.kvack.org archive mirror
From: Yosry Ahmed <yosryahmed@google.com>
To: Takero Funaki <flintglass@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Nhat Pham <nphamcs@gmail.com>,
	 Chengming Zhou <chengming.zhou@linux.dev>,
	Jonathan Corbet <corbet@lwn.net>,
	 Andrew Morton <akpm@linux-foundation.org>,
	 Domenico Cerasuolo <cerasuolodomenico@gmail.com>,
	linux-mm@kvack.org, linux-doc@vger.kernel.org,
	 linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 1/6] mm: zswap: fix global shrinker memcg iteration
Date: Tue, 16 Jul 2024 18:54:03 -0700
Message-ID: <CAJD7tkYEQQ539id2yPi+m7bR3EDa33LBLKV-y+XFuBPc3BhxcQ@mail.gmail.com>
In-Reply-To: <20240706022523.1104080-2-flintglass@gmail.com>

On Fri, Jul 5, 2024 at 7:25 PM Takero Funaki <flintglass@gmail.com> wrote:
>
> This patch fixes an issue where the zswap global shrinker stopped
> iterating through the memcg tree.
>
> The problem was that shrink_worker() would stop iterating when a memcg
> was being offlined and restart from the tree root.  Now, it properly
> handles the offline memcg and continues shrinking with the next memcg.
>
> Note that, to avoid a refcount leak of offline memcg encountered during
> the memcg tree walking, shrink_worker() must continue iterating to find
> the next online memcg.

Please do not use the word "leak" here. It is confusing. The refcount
is not leaked; we just have a long-standing ref that should eventually
be dropped (although in theory, shrink_worker() may never be called
again). "Leak" makes it sound like we increment the refcount but
completely forget to drop it, which is not the case here.
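
To be clear about the lifecycle (an illustrative sketch, not a change
request): mem_cgroup_iter() puts the css reference on the cgroup passed
as @prev, so the ref held in zswap_next_shrink is dropped on the next
walk step rather than lost:

	spin_lock(&zswap_shrink_lock);
	/*
	 * mem_cgroup_iter() releases the reference on @prev
	 * (zswap_next_shrink here) as it advances, so the
	 * long-standing ref is put at this point.
	 */
	zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
	spin_unlock(&zswap_shrink_lock);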

>
> The following minor issues in the existing code are also resolved by the
> change in the iteration logic:
>
> - A rare temporary refcount leak in the offline memcg cleaner, where the
>   next memcg of the offlined memcg is also offline.  The leaked memcg
>   cannot be freed until the next shrink_worker() releases the reference.
>
> - One memcg was skipped by the shrinker when the offline memcg cleaner
>   advanced the cursor of the memcg tree.  This is addressed by a flag
>   indicating that the cursor has already been advanced.
>
> Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
> Signed-off-by: Takero Funaki <flintglass@gmail.com>
> ---
>  mm/zswap.c | 94 ++++++++++++++++++++++++++++++++++++++++++------------
>  1 file changed, 73 insertions(+), 21 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index a50e2986cd2f..29944d8145af 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -171,6 +171,7 @@ static struct list_lru zswap_list_lru;
>  /* The lock protects zswap_next_shrink updates. */
>  static DEFINE_SPINLOCK(zswap_shrink_lock);
>  static struct mem_cgroup *zswap_next_shrink;
> +static bool zswap_next_shrink_changed;
>  static struct work_struct zswap_shrink_work;
>  static struct shrinker *zswap_shrinker;
>
> @@ -775,12 +776,39 @@ void zswap_folio_swapin(struct folio *folio)
>         }
>  }
>
> +/*
> + * This function should be called when a memcg is being offlined.
> + *
> + * Since the global shrinker shrink_worker() may hold a reference
> + * of the memcg, we must check and release the reference in
> + * zswap_next_shrink.
> + *
> + * shrink_worker() must handle the case where this function releases
> + * the reference of memcg being shrunk.
> + */
>  void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
>  {
>         /* lock out zswap shrinker walking memcg tree */
>         spin_lock(&zswap_shrink_lock);
> -       if (zswap_next_shrink == memcg)
> -               zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> +       if (zswap_next_shrink == memcg) {
> +               /*
> +                * We advance the cursor to put back the offlined memcg.
> +                * shrink_worker() should not advance the cursor again.
> +                */
> +               zswap_next_shrink_changed = true;

I think this should be rare enough that it's not worth the extra complexity imo.
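
Something like the below (untested sketch) would work without the flag;
shrink_worker() may then advance the cursor one extra step after the
cleaner ran, occasionally skipping one memcg, but only in this rare
offline race:

void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
{
	spin_lock(&zswap_shrink_lock);
	if (zswap_next_shrink == memcg) {
		/*
		 * Advance past the offlined memcg, skipping any other
		 * offline memcgs encountered on the way.
		 */
		do {
			zswap_next_shrink = mem_cgroup_iter(NULL,
					zswap_next_shrink, NULL);
		} while (zswap_next_shrink &&
			 !mem_cgroup_online(zswap_next_shrink));
	}
	spin_unlock(&zswap_shrink_lock);
}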

> +
> +               do {
> +                       zswap_next_shrink = mem_cgroup_iter(NULL,
> +                                       zswap_next_shrink, NULL);
> +               } while (zswap_next_shrink &&
> +                               !mem_cgroup_online(zswap_next_shrink));
> +               /*
> +                * We verified the next memcg is online.  Even if the next
> +                * memcg is being offlined here, another cleaner must be
> +                * waiting for our lock.  We can leave the online memcg
> +                * reference.
> +                */
> +       }
>         spin_unlock(&zswap_shrink_lock);
>  }
>
> @@ -1319,18 +1347,42 @@ static void shrink_worker(struct work_struct *w)
>         /* Reclaim down to the accept threshold */
>         thr = zswap_accept_thr_pages();
>
> -       /* global reclaim will select cgroup in a round-robin fashion. */
> +       /*
> +        * Global reclaim will select cgroups in a round-robin fashion.
> +        *
> +        * We save iteration cursor memcg into zswap_next_shrink,
> +        * which can be modified by the offline memcg cleaner
> +        * zswap_memcg_offline_cleanup().
> +        *
> +        * Since the offline cleaner is called only once, we cannot leave an
> +        * offline memcg reference in zswap_next_shrink.
> +        * We can rely on the cleaner only if we get an online memcg under the lock.
> +        *
> +        * If we get an offline memcg, we cannot determine the cleaner has

we cannot determine if* the cleaner

> +        * already been called or will be called later. We must put back the
> +        * reference before returning from this function. Otherwise, the
> +        * offline memcg left in zswap_next_shrink will hold the reference
> +        * until the next run of shrink_worker().
> +        */
>         do {
>                 spin_lock(&zswap_shrink_lock);
> -               zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> -               memcg = zswap_next_shrink;
>
>                 /*
> -                * We need to retry if we have gone through a full round trip, or if we
> -                * got an offline memcg (or else we risk undoing the effect of the
> -                * zswap memcg offlining cleanup callback). This is not catastrophic
> -                * per se, but it will keep the now offlined memcg hostage for a while.
> -                *
> +                * Start shrinking from the next memcg after zswap_next_shrink.
> +                * To avoid skipping a memcg, do not advance the cursor when
> +                * it has already been advanced by the offline cleaner.
> +                */
> +               do {
> +                       if (zswap_next_shrink_changed) {
> +                               /* cleaner advanced the cursor */
> +                               zswap_next_shrink_changed = false;
> +                       } else {
> +                               zswap_next_shrink = mem_cgroup_iter(NULL,
> +                                               zswap_next_shrink, NULL);
> +                       }
> +                       memcg = zswap_next_shrink;
> +               } while (memcg && !mem_cgroup_tryget_online(memcg));
> +
> +               /*
>                  * Note that if we got an online memcg, we will keep the extra
>                  * reference in case the original reference obtained by mem_cgroup_iter
>                  * is dropped by the zswap memcg offlining callback, ensuring that the
> @@ -1344,17 +1396,11 @@ static void shrink_worker(struct work_struct *w)
>                         goto resched;
>                 }
>
> -               if (!mem_cgroup_tryget_online(memcg)) {
> -                       /* drop the reference from mem_cgroup_iter() */
> -                       mem_cgroup_iter_break(NULL, memcg);
> -                       zswap_next_shrink = NULL;
> -                       spin_unlock(&zswap_shrink_lock);
> -
> -                       if (++failures == MAX_RECLAIM_RETRIES)
> -                               break;
> -
> -                       goto resched;
> -               }
> +               /*
> +                * We verified the memcg is online and got an extra memcg
> +                * reference.  Our memcg might be offlined concurrently but the
> +                * respective offline cleaner must be waiting for our lock.
> +                */
>                 spin_unlock(&zswap_shrink_lock);
>
>                 ret = shrink_memcg(memcg);
> @@ -1368,6 +1414,12 @@ static void shrink_worker(struct work_struct *w)
>  resched:
>                 cond_resched();
>         } while (zswap_total_pages() > thr);
> +
> +       /*
> +        * We can still hold the original memcg reference.
> +        * The reference is stored in zswap_next_shrink, and then reused
> +        * by the next shrink_worker().
> +        */

This is unnecessary imo.

>  }
>
>  /*********************************
> --
> 2.43.0
>


