From: Yosry Ahmed <yosryahmed@google.com>
To: Chengming Zhou <chengming.zhou@linux.dev>
Cc: Nhat Pham <nphamcs@gmail.com>,
akpm@linux-foundation.org, hannes@cmpxchg.org,
cerasuolodomenico@gmail.com, sjenning@redhat.com,
ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org,
roman.gushchin@linux.dev, shakeelb@google.com,
muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org,
kernel-team@meta.com, linux-kernel@vger.kernel.org,
cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
Date: Tue, 5 Dec 2023 21:59:50 -0800
Message-ID: <CAJD7tkbiWqXs1PEZjMHO0gj5uSaaB-KNUNCiUz25MuPvzeb=wg@mail.gmail.com>
In-Reply-To: <ed2792de-24cc-4037-9ee1-966cc07df57a@linux.dev>
[..]
> > @@ -526,6 +582,102 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> > return entry;
> > }
> >
> > +/*********************************
> > +* shrinker functions
> > +**********************************/
> > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > + spinlock_t *lock, void *arg);
> > +
> > +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> > + struct shrink_control *sc)
> > +{
> > + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> > + unsigned long shrink_ret, nr_protected, lru_size;
> > + struct zswap_pool *pool = shrinker->private_data;
> > + bool encountered_page_in_swapcache = false;
> > +
> > + nr_protected =
> > + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> > + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> > +
> > + /*
> > + * Abort if the shrinker is disabled or if we are shrinking into the
> > + * protected region.
> > + *
> > + * This short-circuiting is necessary because if we have too many
> > + * concurrent reclaimers getting the freeable zswap object counts at the
> > + * same time (before any of them has made reasonable progress), the total
> > + * number of reclaimed objects might be more than the number of unprotected
> > + * objects (i.e. the reclaimers will reclaim into the protected area of the
> > + * zswap LRU).
> > + */
> > + if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
> > + sc->nr_scanned = 0;
> > + return SHRINK_STOP;
> > + }
> > +
> > + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> > + &encountered_page_in_swapcache);
> > +
> > + if (encountered_page_in_swapcache)
> > + return SHRINK_STOP;
> > +
> > + return shrink_ret ? shrink_ret : SHRINK_STOP;
> > +}
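
As a purely illustrative aside on the short-circuit above (made-up
numbers, not from the patch): with

	lru_size       = 1000
	nr_protected   = 900
	sc->nr_to_scan = 128

lru_size - sc->nr_to_scan = 872 < nr_protected, so scanning would dip
into the protected region and the walk bails out with SHRINK_STOP.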
> > +
> > +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> > + struct shrink_control *sc)
> > +{
> > + struct zswap_pool *pool = shrinker->private_data;
> > + struct mem_cgroup *memcg = sc->memcg;
> > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> > + unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
> > +
> > +#ifdef CONFIG_MEMCG_KMEM
> > + cgroup_rstat_flush(memcg->css.cgroup);
> > + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> > + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> > +#else
> > + /* use pool stats instead of memcg stats */
> > + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> > + nr_stored = atomic_read(&pool->nr_stored);
> > +#endif
> > +
> > + if (!zswap_shrinker_enabled || !nr_stored)
> When I tested this series with !zswap_shrinker_enabled (the default case),
> I found the performance to be much worse than without this patch.
>
> Testcase: memory.max=2G, zswap enabled, kernel build -j32 in a tmpfs directory.
>
> The reason seems to be the above cgroup_rstat_flush(), which caused heavy rstat
> lock contention on the zswap_store() path. If I put the "zswap_shrinker_enabled"
> check above the cgroup_rstat_flush(), the performance becomes much better.
>
> Maybe we can put the "zswap_shrinker_enabled" check above cgroup_rstat_flush()?
Yes, we should do nothing at all if !zswap_shrinker_enabled. We should
also use mem_cgroup_flush_stats() here, like other places do, unless
accuracy is crucial, which I doubt given that reclaim itself uses
mem_cgroup_flush_stats().

mem_cgroup_flush_stats() has some thresholding to make sure we don't
do flushes unnecessarily, and I have a pending series in mm-unstable
that makes that thresholding per-memcg. Keep in mind that adding a
call to mem_cgroup_flush_stats() will cause a conflict in mm-unstable,
because the series there adds a memcg argument to
mem_cgroup_flush_stats(). That should be easy to amend though; I can
post a fixlet for my series to add the memcg argument to the new
caller on top, if needed.
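
Something along these lines is what I have in mind (just a sketch on
top of this patch, untested, using the current no-argument
mem_cgroup_flush_stats(); the mm-unstable variant would pass the memcg):

static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
		struct shrink_control *sc)
{
	struct zswap_pool *pool = shrinker->private_data;
	struct mem_cgroup *memcg = sc->memcg;
	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
	unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;

	/* Bail out before touching any stats if the shrinker is disabled. */
	if (!zswap_shrinker_enabled)
		return 0;

#ifdef CONFIG_MEMCG_KMEM
	/* Thresholded flush instead of an unconditional rstat flush. */
	mem_cgroup_flush_stats();
	nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
	nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
#else
	/* use pool stats instead of memcg stats */
	nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
	nr_stored = atomic_read(&pool->nr_stored);
#endif

	if (!nr_stored)
		return 0;

	/* ... rest unchanged: compute nr_protected/nr_freeable and scale ... */
}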
>
> Thanks!
>
> > + return 0;
> > +
> > + nr_protected =
> > + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> > + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> > + /*
> > + * Subtract an estimate of the number of pages that should be
> > + * protected from the lru size.
> > + */
> > + nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
> > +
> > + /*
> > + * Scale the number of freeable pages by the memory saving factor.
> > + * This ensures that the better zswap compresses memory, the fewer
> > + * pages we will evict to swap (as it will otherwise incur IO for
> > + * relatively small memory saving).
> > + */
> > + return mult_frac(nr_freeable, nr_backing, nr_stored);
> > +}
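
On the scaling comment above, a quick made-up example (not from the
patch): with a 4:1 compression ratio, e.g.

	nr_stored   = 1024 entries
	nr_backing  = 256 pages
	nr_freeable = 1000 unprotected entries

	mult_frac(1000, 256, 1024) == 250

only a quarter of the freeable entries get reported, so a
well-compressing workload is shrunk less aggressively.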
> > +
> > +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> > +{
> > + pool->shrinker =
> > + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> > + if (!pool->shrinker)
> > + return;
> > +
> > + pool->shrinker->private_data = pool;
> > + pool->shrinker->scan_objects = zswap_shrinker_scan;
> > + pool->shrinker->count_objects = zswap_shrinker_count;
> > + pool->shrinker->batch = 0;
> > + pool->shrinker->seeks = DEFAULT_SEEKS;
> > +}
> > +
> > /*********************************
> > * per-cpu code
> > **********************************/
[..]