* [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback
@ 2023-11-30 19:40 Nhat Pham
2023-11-30 19:40 ` [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection Nhat Pham
` (7 more replies)
0 siblings, 8 replies; 48+ messages in thread
From: Nhat Pham @ 2023-11-30 19:40 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
Changelog:
v8:
* Fixed a couple of build errors in the case of !CONFIG_MEMCG
* Simplified the online memcg selection scheme for the zswap global
limit reclaim (suggested by Michal Hocko and Johannes Weiner)
(patch 2 and patch 3)
* Added a new kconfig to allow users to enable the zswap shrinker by
default. (suggested by Johannes Weiner) (patch 6)
v7:
* Added the mem_cgroup_iter_online() function to the API for the new
behavior (suggested by Andrew Morton) (patch 2)
* Fixed a missing list_lru_del -> list_lru_del_obj (patch 1)
v6:
* Rebase on top of latest mm-unstable.
* Fix/improve the in-code documentation of the new list_lru
manipulation functions (patch 1)
v5:
* Replace reference getting with an rcu_read_lock() section for
zswap lru modifications (suggested by Yosry)
* Add a new prep patch that allows mem_cgroup_iter() to return
an online cgroup.
* Add a callback that updates pool->next_shrink when the cgroup is
offlined (suggested by Yosry Ahmed, Johannes Weiner)
v4:
* Rename list_lru_add to list_lru_add_obj and __list_lru_add to
list_lru_add (patch 1) (suggested by Johannes Weiner and
Yosry Ahmed)
* Some cleanups on the memcg aware LRU patch (patch 2)
(suggested by Yosry Ahmed)
* Use event interface for the new per-cgroup writeback counters.
(patch 3) (suggested by Yosry Ahmed)
* Abstract zswap's lruvec states and handling into
zswap_lruvec_state (patch 5) (suggested by Yosry Ahmed)
v3:
* Add a patch to export per-cgroup zswap writeback counters
* Add a patch to update zswap's kselftest
* Separate the new list_lru functions into their own prep patch
* Do not start from the top of the hierarchy when encountering a memcg
that is not online during global limit zswap writeback (patch 2)
(suggested by Yosry Ahmed)
* Do not remove the swap entry from list_lru in
__read_swapcache_async() (patch 2) (suggested by Yosry Ahmed)
* Removed a redundant zswap pool get (patch 2)
(reported by Ryan Roberts)
* Use atomic for the nr_zswap_protected (instead of lruvec's lock)
(patch 5) (suggested by Yosry Ahmed)
* Remove the per-cgroup zswap shrinker knob (patch 5)
(suggested by Yosry Ahmed)
v2:
* Fix loongarch compiler errors
* Use pool stats instead of memcg stats when !CONFIG_MEMCG_KMEM
There are currently several issues with zswap writeback:
1. There is only a single global LRU for zswap, making it impossible to
perform workload-specific shrinking - a memcg under memory pressure
cannot determine which pages in the pool it owns, and often ends up
writing back pages from other memcgs. This issue has been previously
observed in practice and mitigated by simply disabling
memcg-initiated shrinking:
https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
But this solution leaves a lot to be desired, as we still do not
have an avenue for a memcg to free up its own memory locked up in
the zswap pool.
2. We only shrink the zswap pool when the user-defined limit is hit.
This means that if we set the limit too high, cold data that are
unlikely to be used again will reside in the pool, wasting precious
memory. It is hard to predict how much zswap space will be needed
ahead of time, as this depends on the workload (specifically, on
factors such as memory access patterns and compressibility of the
memory pages).
This patch series solves these issues by separating the global zswap
LRU into per-memcg and per-NUMA LRUs, and performing workload-specific
(i.e. memcg- and NUMA-aware) zswap writeback under memory pressure. The
new shrinker does not have any parameter that must be tuned by the
user, and can be opted in or out on a per-memcg basis.
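To make the structural change concrete: each zswap entry now lives on a
list_lru sublist keyed by its NUMA node and owning memcg, rather than on
one global list. A condensed sketch of the add path, taken from the
zswap_lru_add() helper introduced in patch 3 (CONFIG_MEMCG ifdefs and the
RCU commentary trimmed):

    static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
    {
            int nid = page_to_nid(virt_to_page(entry));
            struct mem_cgroup *memcg;

            rcu_read_lock();
            /* a NULL memcg selects the root sublist */
            memcg = entry->objcg ? obj_cgroup_memcg(entry->objcg) : NULL;
            list_lru_add(list_lru, &entry->lru, nid, memcg);
            rcu_read_unlock();
    }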
As a proof of concept, we ran the following synthetic benchmark:
build the Linux kernel in a memory-limited cgroup, and allocate some
cold data in tmpfs to see if the shrinker could write it out and
improve the overall performance. Depending on the amount of cold data
generated, we observed a 14% to 35% reduction in kernel CPU time used
by the kernel builds.
Domenico Cerasuolo (3):
zswap: make shrinking memcg-aware
mm: memcg: add per-memcg zswap writeback stat
selftests: cgroup: update per-memcg zswap writeback selftest
Nhat Pham (3):
list_lru: allows explicit memcg and NUMA node selection
memcontrol: implement mem_cgroup_tryget_online()
zswap: shrinks zswap pool based on memory pressure
Documentation/admin-guide/mm/zswap.rst | 10 +
drivers/android/binder_alloc.c | 7 +-
fs/dcache.c | 8 +-
fs/gfs2/quota.c | 6 +-
fs/inode.c | 4 +-
fs/nfs/nfs42xattr.c | 8 +-
fs/nfsd/filecache.c | 4 +-
fs/xfs/xfs_buf.c | 6 +-
fs/xfs/xfs_dquot.c | 2 +-
fs/xfs/xfs_qm.c | 2 +-
include/linux/list_lru.h | 54 ++-
include/linux/memcontrol.h | 15 +
include/linux/mmzone.h | 2 +
include/linux/vm_event_item.h | 1 +
include/linux/zswap.h | 27 +-
mm/Kconfig | 14 +
mm/list_lru.c | 48 ++-
mm/memcontrol.c | 3 +
mm/mmzone.c | 1 +
mm/swap.h | 3 +-
mm/swap_state.c | 26 +-
mm/vmstat.c | 1 +
mm/workingset.c | 4 +-
mm/zswap.c | 456 +++++++++++++++++---
tools/testing/selftests/cgroup/test_zswap.c | 74 ++--
25 files changed, 661 insertions(+), 125 deletions(-)
base-commit: 5cdba94229e58a39ca389ad99763af29e6b0c5a5
--
2.34.1
* [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection
2023-11-30 19:40 [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback Nhat Pham
@ 2023-11-30 19:40 ` Nhat Pham
2023-11-30 19:57 ` Matthew Wilcox
2023-11-30 19:40 ` [PATCH v8 2/6] memcontrol: implement mem_cgroup_tryget_online() Nhat Pham
` (6 subsequent siblings)
7 siblings, 1 reply; 48+ messages in thread
From: Nhat Pham @ 2023-11-30 19:40 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
The interface of list_lru is based on the assumption that the list node
and the data it represents belong to the same allocation, which resides
on the correct node/memcg. While this assumption is valid for existing
slab object LRUs such as dentries and inodes, it is undocumented, and
rather inflexible
for certain potential list_lru users (such as the upcoming zswap
shrinker and the THP shrinker). It has caused us a lot of issues during
our development.
This patch changes the list_lru interface so that the caller must
explicitly specify the NUMA node and memcg when adding and removing
objects. The old
list_lru_add() and list_lru_del() are renamed to list_lru_add_obj() and
list_lru_del_obj(), respectively.
It also extends the list_lru API with a new function, list_lru_putback,
which undoes a previous list_lru_isolate call. Unlike list_lru_add, it
does not increment the LRU node count (as list_lru_isolate does not
decrement the node count). list_lru_putback also allows for explicit
memcg and NUMA node selection.
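For reference, the resulting calling conventions, condensed from the
hunks below (nid and memcg name the sublist the caller wants to operate
on):

    /* Old behavior, new name: node and memcg are inferred from the
     * object itself (valid for slab objects such as dentries/inodes). */
    list_lru_add_obj(&dentry->d_sb->s_dentry_lru, &dentry->d_lru);

    /* New interface: the caller names the sublist explicitly. */
    list_lru_add(lru, &entry->lru, nid, memcg);
    list_lru_del(lru, &entry->lru, nid, memcg);

    /* Undo a previous list_lru_isolate(); unlike list_lru_add(), this
     * does not bump the node's item count. */
    list_lru_putback(lru, &entry->lru, nid, memcg);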
Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
---
drivers/android/binder_alloc.c | 7 ++---
fs/dcache.c | 8 +++--
fs/gfs2/quota.c | 6 ++--
fs/inode.c | 4 +--
fs/nfs/nfs42xattr.c | 8 ++---
fs/nfsd/filecache.c | 4 +--
fs/xfs/xfs_buf.c | 6 ++--
fs/xfs/xfs_dquot.c | 2 +-
fs/xfs/xfs_qm.c | 2 +-
include/linux/list_lru.h | 54 ++++++++++++++++++++++++++++++++--
mm/list_lru.c | 48 +++++++++++++++++++++++++-----
mm/workingset.c | 4 +--
12 files changed, 117 insertions(+), 36 deletions(-)
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 138f6d43d13b..f69d30c9f50f 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -234,7 +234,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
if (page->page_ptr) {
trace_binder_alloc_lru_start(alloc, index);
- on_lru = list_lru_del(&binder_alloc_lru, &page->lru);
+ on_lru = list_lru_del_obj(&binder_alloc_lru, &page->lru);
WARN_ON(!on_lru);
trace_binder_alloc_lru_end(alloc, index);
@@ -285,7 +285,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
trace_binder_free_lru_start(alloc, index);
- ret = list_lru_add(&binder_alloc_lru, &page->lru);
+ ret = list_lru_add_obj(&binder_alloc_lru, &page->lru);
WARN_ON(!ret);
trace_binder_free_lru_end(alloc, index);
@@ -848,7 +848,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
if (!alloc->pages[i].page_ptr)
continue;
- on_lru = list_lru_del(&binder_alloc_lru,
+ on_lru = list_lru_del_obj(&binder_alloc_lru,
&alloc->pages[i].lru);
page_addr = alloc->buffer + i * PAGE_SIZE;
binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
@@ -1287,4 +1287,3 @@ int binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
return binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset,
dest, bytes);
}
-
diff --git a/fs/dcache.c b/fs/dcache.c
index c82ae731df9a..2ba37643b9c5 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -428,7 +428,8 @@ static void d_lru_add(struct dentry *dentry)
this_cpu_inc(nr_dentry_unused);
if (d_is_negative(dentry))
this_cpu_inc(nr_dentry_negative);
- WARN_ON_ONCE(!list_lru_add(&dentry->d_sb->s_dentry_lru, &dentry->d_lru));
+ WARN_ON_ONCE(!list_lru_add_obj(
+ &dentry->d_sb->s_dentry_lru, &dentry->d_lru));
}
static void d_lru_del(struct dentry *dentry)
@@ -438,7 +439,8 @@ static void d_lru_del(struct dentry *dentry)
this_cpu_dec(nr_dentry_unused);
if (d_is_negative(dentry))
this_cpu_dec(nr_dentry_negative);
- WARN_ON_ONCE(!list_lru_del(&dentry->d_sb->s_dentry_lru, &dentry->d_lru));
+ WARN_ON_ONCE(!list_lru_del_obj(
+ &dentry->d_sb->s_dentry_lru, &dentry->d_lru));
}
static void d_shrink_del(struct dentry *dentry)
@@ -1240,7 +1242,7 @@ static enum lru_status dentry_lru_isolate(struct list_head *item,
*
* This is guaranteed by the fact that all LRU management
* functions are intermediated by the LRU API calls like
- * list_lru_add and list_lru_del. List movement in this file
+ * list_lru_add_obj and list_lru_del_obj. List movement in this file
* only ever occur through this functions or through callbacks
* like this one, that are called from the LRU API.
*
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index 95dae7838b4e..b57f8c7b35be 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2/quota.c
@@ -271,7 +271,7 @@ static struct gfs2_quota_data *gfs2_qd_search_bucket(unsigned int hash,
if (qd->qd_sbd != sdp)
continue;
if (lockref_get_not_dead(&qd->qd_lockref)) {
- list_lru_del(&gfs2_qd_lru, &qd->qd_lru);
+ list_lru_del_obj(&gfs2_qd_lru, &qd->qd_lru);
return qd;
}
}
@@ -344,7 +344,7 @@ static void qd_put(struct gfs2_quota_data *qd)
}
qd->qd_lockref.count = 0;
- list_lru_add(&gfs2_qd_lru, &qd->qd_lru);
+ list_lru_add_obj(&gfs2_qd_lru, &qd->qd_lru);
spin_unlock(&qd->qd_lockref.lock);
}
@@ -1517,7 +1517,7 @@ void gfs2_quota_cleanup(struct gfs2_sbd *sdp)
lockref_mark_dead(&qd->qd_lockref);
spin_unlock(&qd->qd_lockref.lock);
- list_lru_del(&gfs2_qd_lru, &qd->qd_lru);
+ list_lru_del_obj(&gfs2_qd_lru, &qd->qd_lru);
list_add(&qd->qd_lru, &dispose);
}
spin_unlock(&qd_lock);
diff --git a/fs/inode.c b/fs/inode.c
index f238d987dec9..ef2034a985e0 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -464,7 +464,7 @@ static void __inode_add_lru(struct inode *inode, bool rotate)
if (!mapping_shrinkable(&inode->i_data))
return;
- if (list_lru_add(&inode->i_sb->s_inode_lru, &inode->i_lru))
+ if (list_lru_add_obj(&inode->i_sb->s_inode_lru, &inode->i_lru))
this_cpu_inc(nr_unused);
else if (rotate)
inode->i_state |= I_REFERENCED;
@@ -482,7 +482,7 @@ void inode_add_lru(struct inode *inode)
static void inode_lru_list_del(struct inode *inode)
{
- if (list_lru_del(&inode->i_sb->s_inode_lru, &inode->i_lru))
+ if (list_lru_del_obj(&inode->i_sb->s_inode_lru, &inode->i_lru))
this_cpu_dec(nr_unused);
}
diff --git a/fs/nfs/nfs42xattr.c b/fs/nfs/nfs42xattr.c
index 2ad66a8922f4..49aaf28a6950 100644
--- a/fs/nfs/nfs42xattr.c
+++ b/fs/nfs/nfs42xattr.c
@@ -132,7 +132,7 @@ nfs4_xattr_entry_lru_add(struct nfs4_xattr_entry *entry)
lru = (entry->flags & NFS4_XATTR_ENTRY_EXTVAL) ?
&nfs4_xattr_large_entry_lru : &nfs4_xattr_entry_lru;
- return list_lru_add(lru, &entry->lru);
+ return list_lru_add_obj(lru, &entry->lru);
}
static bool
@@ -143,7 +143,7 @@ nfs4_xattr_entry_lru_del(struct nfs4_xattr_entry *entry)
lru = (entry->flags & NFS4_XATTR_ENTRY_EXTVAL) ?
&nfs4_xattr_large_entry_lru : &nfs4_xattr_entry_lru;
- return list_lru_del(lru, &entry->lru);
+ return list_lru_del_obj(lru, &entry->lru);
}
/*
@@ -349,7 +349,7 @@ nfs4_xattr_cache_unlink(struct inode *inode)
oldcache = nfsi->xattr_cache;
if (oldcache != NULL) {
- list_lru_del(&nfs4_xattr_cache_lru, &oldcache->lru);
+ list_lru_del_obj(&nfs4_xattr_cache_lru, &oldcache->lru);
oldcache->inode = NULL;
}
nfsi->xattr_cache = NULL;
@@ -474,7 +474,7 @@ nfs4_xattr_get_cache(struct inode *inode, int add)
kref_get(&cache->ref);
nfsi->xattr_cache = cache;
cache->inode = inode;
- list_lru_add(&nfs4_xattr_cache_lru, &cache->lru);
+ list_lru_add_obj(&nfs4_xattr_cache_lru, &cache->lru);
}
spin_unlock(&inode->i_lock);
diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
index ef063f93fde9..6c2decfdeb4b 100644
--- a/fs/nfsd/filecache.c
+++ b/fs/nfsd/filecache.c
@@ -322,7 +322,7 @@ nfsd_file_check_writeback(struct nfsd_file *nf)
static bool nfsd_file_lru_add(struct nfsd_file *nf)
{
set_bit(NFSD_FILE_REFERENCED, &nf->nf_flags);
- if (list_lru_add(&nfsd_file_lru, &nf->nf_lru)) {
+ if (list_lru_add_obj(&nfsd_file_lru, &nf->nf_lru)) {
trace_nfsd_file_lru_add(nf);
return true;
}
@@ -331,7 +331,7 @@ static bool nfsd_file_lru_add(struct nfsd_file *nf)
static bool nfsd_file_lru_remove(struct nfsd_file *nf)
{
- if (list_lru_del(&nfsd_file_lru, &nf->nf_lru)) {
+ if (list_lru_del_obj(&nfsd_file_lru, &nf->nf_lru)) {
trace_nfsd_file_lru_del(nf);
return true;
}
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 545c7991b9b5..669332849680 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -169,7 +169,7 @@ xfs_buf_stale(
atomic_set(&bp->b_lru_ref, 0);
if (!(bp->b_state & XFS_BSTATE_DISPOSE) &&
- (list_lru_del(&bp->b_target->bt_lru, &bp->b_lru)))
+ (list_lru_del_obj(&bp->b_target->bt_lru, &bp->b_lru)))
atomic_dec(&bp->b_hold);
ASSERT(atomic_read(&bp->b_hold) >= 1);
@@ -1047,7 +1047,7 @@ xfs_buf_rele(
* buffer for the LRU and clear the (now stale) dispose list
* state flag
*/
- if (list_lru_add(&bp->b_target->bt_lru, &bp->b_lru)) {
+ if (list_lru_add_obj(&bp->b_target->bt_lru, &bp->b_lru)) {
bp->b_state &= ~XFS_BSTATE_DISPOSE;
atomic_inc(&bp->b_hold);
}
@@ -1060,7 +1060,7 @@ xfs_buf_rele(
* was on was the disposal list
*/
if (!(bp->b_state & XFS_BSTATE_DISPOSE)) {
- list_lru_del(&bp->b_target->bt_lru, &bp->b_lru);
+ list_lru_del_obj(&bp->b_target->bt_lru, &bp->b_lru);
} else {
ASSERT(list_empty(&bp->b_lru));
}
diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c
index ac6ba646624d..49f619f5aa96 100644
--- a/fs/xfs/xfs_dquot.c
+++ b/fs/xfs/xfs_dquot.c
@@ -1064,7 +1064,7 @@ xfs_qm_dqput(
struct xfs_quotainfo *qi = dqp->q_mount->m_quotainfo;
trace_xfs_dqput_free(dqp);
- if (list_lru_add(&qi->qi_lru, &dqp->q_lru))
+ if (list_lru_add_obj(&qi->qi_lru, &dqp->q_lru))
XFS_STATS_INC(dqp->q_mount, xs_qm_dquot_unused);
}
xfs_dqunlock(dqp);
diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
index 94a7932ac570..67d0a8564ff3 100644
--- a/fs/xfs/xfs_qm.c
+++ b/fs/xfs/xfs_qm.c
@@ -171,7 +171,7 @@ xfs_qm_dqpurge(
* hits zero, so it really should be on the freelist here.
*/
ASSERT(!list_empty(&dqp->q_lru));
- list_lru_del(&qi->qi_lru, &dqp->q_lru);
+ list_lru_del_obj(&qi->qi_lru, &dqp->q_lru);
XFS_STATS_DEC(dqp->q_mount, xs_qm_dquot_unused);
xfs_qm_dqdestroy(dqp);
diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index db86ad78d428..7675a48a0701 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -75,6 +75,8 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
* list_lru_add: add an element to the lru list's tail
* @lru: the lru pointer
* @item: the item to be added.
+ * @nid: the node id of the sublist to add the item to.
+ * @memcg: the cgroup of the sublist to add the item to.
*
* If the element is already part of a list, this function returns doing
* nothing. Therefore the caller does not need to keep state about whether or
@@ -87,12 +89,28 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
*
* Return: true if the list was updated, false otherwise
*/
-bool list_lru_add(struct list_lru *lru, struct list_head *item);
+bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg);
/**
- * list_lru_del: delete an element to the lru list
+ * list_lru_add_obj: add an element to the lru list's tail
+ * @lru: the lru pointer
+ * @item: the item to be added.
+ *
+ * This function is similar to list_lru_add(), but the NUMA node and the
+ * memcg of the sublist is determined by @item list_head. This assumption is
+ * valid for slab objects LRU such as dentries, inodes, etc.
+ *
+ * Return value: true if the list was updated, false otherwise
+ */
+bool list_lru_add_obj(struct list_lru *lru, struct list_head *item);
+
+/**
+ * list_lru_del: delete an element from the lru list
* @lru: the lru pointer
* @item: the item to be deleted.
+ * @nid: the node id of the sublist to delete the item from.
+ * @memcg: the cgroup of the sublist to delete the item from.
*
* This function works analogously as list_lru_add() in terms of list
* manipulation. The comments about an element already pertaining to
@@ -100,7 +118,21 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item);
*
* Return: true if the list was updated, false otherwise
*/
-bool list_lru_del(struct list_lru *lru, struct list_head *item);
+bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg);
+
+/**
+ * list_lru_del_obj: delete an element from the lru list
+ * @lru: the lru pointer
+ * @item: the item to be deleted.
+ *
+ * This function is similar to list_lru_del(), but the NUMA node and the
+ * memcg of the sublist is determined by @item list_head. This assumption is
+ * valid for slab objects LRU such as dentries, inodes, etc.
+ *
+ * Return value: true if the list was updated, false otherwise.
+ */
+bool list_lru_del_obj(struct list_lru *lru, struct list_head *item);
/**
* list_lru_count_one: return the number of objects currently held by @lru
@@ -138,6 +170,22 @@ static inline unsigned long list_lru_count(struct list_lru *lru)
void list_lru_isolate(struct list_lru_one *list, struct list_head *item);
void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
struct list_head *head);
+/**
+ * list_lru_putback: undo list_lru_isolate
+ * @lru: the lru pointer.
+ * @item: the item to put back.
+ * @nid: the node id of the sublist to put the item back to.
+ * @memcg: the cgroup of the sublist to put the item back to.
+ *
+ * Put back an isolated item into its original LRU. Note that unlike
+ * list_lru_add, this does not increment the node LRU count (as
+ * list_lru_isolate does not originally decrement this count).
+ *
+ * Since we might have dropped the LRU lock in between, recompute list_lru_one
+ * from the node's id and memcg.
+ */
+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg);
typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
struct list_lru_one *list, spinlock_t *lock, void *cb_arg);
diff --git a/mm/list_lru.c b/mm/list_lru.c
index a05e5bef3b40..fcca67ac26ec 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -116,21 +116,19 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr,
}
#endif /* CONFIG_MEMCG_KMEM */
-bool list_lru_add(struct list_lru *lru, struct list_head *item)
+bool list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg)
{
- int nid = page_to_nid(virt_to_page(item));
struct list_lru_node *nlru = &lru->node[nid];
- struct mem_cgroup *memcg;
struct list_lru_one *l;
spin_lock(&nlru->lock);
if (list_empty(item)) {
- l = list_lru_from_kmem(lru, nid, item, &memcg);
+ l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
list_add_tail(item, &l->list);
/* Set shrinker bit if the first element was added */
if (!l->nr_items++)
- set_shrinker_bit(memcg, nid,
- lru_shrinker_id(lru));
+ set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
nlru->nr_items++;
spin_unlock(&nlru->lock);
return true;
@@ -140,15 +138,25 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
}
EXPORT_SYMBOL_GPL(list_lru_add);
-bool list_lru_del(struct list_lru *lru, struct list_head *item)
+bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
{
int nid = page_to_nid(virt_to_page(item));
+ struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+ mem_cgroup_from_slab_obj(item) : NULL;
+
+ return list_lru_add(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_add_obj);
+
+bool list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg)
+{
struct list_lru_node *nlru = &lru->node[nid];
struct list_lru_one *l;
spin_lock(&nlru->lock);
if (!list_empty(item)) {
- l = list_lru_from_kmem(lru, nid, item, NULL);
+ l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
list_del_init(item);
l->nr_items--;
nlru->nr_items--;
@@ -160,6 +168,16 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
}
EXPORT_SYMBOL_GPL(list_lru_del);
+bool list_lru_del_obj(struct list_lru *lru, struct list_head *item)
+{
+ int nid = page_to_nid(virt_to_page(item));
+ struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+ mem_cgroup_from_slab_obj(item) : NULL;
+
+ return list_lru_del(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_del_obj);
+
void list_lru_isolate(struct list_lru_one *list, struct list_head *item)
{
list_del_init(item);
@@ -175,6 +193,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
}
EXPORT_SYMBOL_GPL(list_lru_isolate_move);
+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg)
+{
+ struct list_lru_one *list =
+ list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
+
+ if (list_empty(item)) {
+ list_add_tail(item, &list->list);
+ if (!list->nr_items++)
+ set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
+ }
+}
+EXPORT_SYMBOL_GPL(list_lru_putback);
+
unsigned long list_lru_count_one(struct list_lru *lru,
int nid, struct mem_cgroup *memcg)
{
diff --git a/mm/workingset.c b/mm/workingset.c
index b192e44a0e7c..c17d45c6f29b 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -615,12 +615,12 @@ void workingset_update_node(struct xa_node *node)
if (node->count && node->count == node->nr_values) {
if (list_empty(&node->private_list)) {
- list_lru_add(&shadow_nodes, &node->private_list);
+ list_lru_add_obj(&shadow_nodes, &node->private_list);
__inc_lruvec_kmem_state(node, WORKINGSET_NODES);
}
} else {
if (!list_empty(&node->private_list)) {
- list_lru_del(&shadow_nodes, &node->private_list);
+ list_lru_del_obj(&shadow_nodes, &node->private_list);
__dec_lruvec_kmem_state(node, WORKINGSET_NODES);
}
}
--
2.34.1
* [PATCH v8 2/6] memcontrol: implement mem_cgroup_tryget_online()
2023-11-30 19:40 [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback Nhat Pham
2023-11-30 19:40 ` [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection Nhat Pham
@ 2023-11-30 19:40 ` Nhat Pham
2023-12-05 0:35 ` Chris Li
2023-12-05 18:02 ` Yosry Ahmed
2023-11-30 19:40 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware Nhat Pham
` (5 subsequent siblings)
7 siblings, 2 replies; 48+ messages in thread
From: Nhat Pham @ 2023-11-30 19:40 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
This patch implements a helper function that tries to get a reference to
a memcg's css, while also checking whether it is online. This new function
is almost exactly the same as the existing mem_cgroup_tryget(), except
for the onlineness check. In the !CONFIG_MEMCG case, it always returns
true, analogous to mem_cgroup_tryget(). This is useful, for example, for
the new zswap writeback scheme, where we need to select the next online
memcg as a candidate for global limit reclaim.
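A minimal usage sketch, illustrative only rather than lifted from this
series: pin the memcg and skip it if it is already offline, since an
offline memcg's zswap LRU will have been reparented (shrink_memcg() here
refers to the helper added in the next patch):

    if (mem_cgroup_tryget_online(memcg)) {
            /* memcg is pinned and still online: safe to reclaim from it */
            shrink_memcg(memcg);
            mem_cgroup_put(memcg);
    }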
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
include/linux/memcontrol.h | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 7bdcf3020d7a..2bd7d14ace78 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -821,6 +821,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
return !memcg || css_tryget(&memcg->css);
}
+static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
+{
+ return !memcg || css_tryget_online(&memcg->css);
+}
+
static inline void mem_cgroup_put(struct mem_cgroup *memcg)
{
if (memcg)
@@ -1349,6 +1354,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
return true;
}
+static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
+{
+ return true;
+}
+
static inline void mem_cgroup_put(struct mem_cgroup *memcg)
{
}
--
2.34.1
* [PATCH v8 3/6] zswap: make shrinking memcg-aware
2023-11-30 19:40 [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback Nhat Pham
2023-11-30 19:40 ` [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection Nhat Pham
2023-11-30 19:40 ` [PATCH v8 2/6] memcontrol: implement mem_cgroup_tryget_online() Nhat Pham
@ 2023-11-30 19:40 ` Nhat Pham
2023-12-05 18:20 ` Yosry Ahmed
` (4 more replies)
2023-11-30 19:40 ` [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat Nhat Pham
` (4 subsequent siblings)
7 siblings, 5 replies; 48+ messages in thread
From: Nhat Pham @ 2023-11-30 19:40 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Currently, we only have a single global LRU for zswap. This makes it
impossible to perform workload-specific shrinking - a memcg cannot
determine which pages in the pool it owns, and often ends up writing
back pages from other memcgs. This issue has been previously observed in
practice and mitigated by simply disabling memcg-initiated shrinking:
https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
This patch fully resolves the issue by replacing the global zswap LRU
with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
a) When a store attempt hits a memcg limit, it now triggers a
synchronous reclaim attempt that, if successful, allows the new
hotter page to be accepted by zswap.
b) If the store attempt instead hits the global zswap limit, it will
trigger an asynchronous reclaim attempt, in which a memcg is
selected for reclaim in a round-robin-like fashion.
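For path a), the new logic in zswap_store() boils down to the following,
condensed from the mm/zswap.c hunk below:

    objcg = get_obj_cgroup_from_folio(folio);
    if (objcg && !obj_cgroup_may_zswap(objcg)) {
            memcg = get_mem_cgroup_from_objcg(objcg);
            /* write back from this memcg's own LRU before giving up */
            if (shrink_memcg(memcg)) {
                    mem_cgroup_put(memcg);
                    goto reject;
            }
            mem_cgroup_put(memcg);
    }

Path b) is handled by shrink_worker(), which walks the memcg hierarchy
with mem_cgroup_iter(), skips offline memcgs, and calls shrink_memcg()
on each candidate.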
Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Co-developed-by: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
include/linux/memcontrol.h | 5 +
include/linux/zswap.h | 2 +
mm/memcontrol.c | 2 +
mm/swap.h | 3 +-
mm/swap_state.c | 24 +++-
mm/zswap.c | 269 +++++++++++++++++++++++++++++--------
6 files changed, 245 insertions(+), 60 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 2bd7d14ace78..a308c8eacf20 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1192,6 +1192,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
return NULL;
}
+static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
+{
+ return NULL;
+}
+
static inline bool folio_memcg_kmem(struct folio *folio)
{
return false;
diff --git a/include/linux/zswap.h b/include/linux/zswap.h
index 2a60ce39cfde..e571e393669b 100644
--- a/include/linux/zswap.h
+++ b/include/linux/zswap.h
@@ -15,6 +15,7 @@ bool zswap_load(struct folio *folio);
void zswap_invalidate(int type, pgoff_t offset);
void zswap_swapon(int type);
void zswap_swapoff(int type);
+void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
#else
@@ -31,6 +32,7 @@ static inline bool zswap_load(struct folio *folio)
static inline void zswap_invalidate(int type, pgoff_t offset) {}
static inline void zswap_swapon(int type) {}
static inline void zswap_swapoff(int type) {}
+static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
#endif
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 470821d1ba1a..792ca21c5815 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5614,6 +5614,8 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
page_counter_set_min(&memcg->memory, 0);
page_counter_set_low(&memcg->memory, 0);
+ zswap_memcg_offline_cleanup(memcg);
+
memcg_offline_kmem(memcg);
reparent_shrinker_deferred(memcg);
wb_memcg_offline(memcg);
diff --git a/mm/swap.h b/mm/swap.h
index 73c332ee4d91..c0dc73e10e91 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -51,7 +51,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct swap_iocb **plug);
struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct mempolicy *mpol, pgoff_t ilx,
- bool *new_page_allocated);
+ bool *new_page_allocated,
+ bool skip_if_exists);
struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
struct mempolicy *mpol, pgoff_t ilx);
struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 85d9e5806a6a..6c84236382f3 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -412,7 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct mempolicy *mpol, pgoff_t ilx,
- bool *new_page_allocated)
+ bool *new_page_allocated,
+ bool skip_if_exists)
{
struct swap_info_struct *si;
struct folio *folio;
@@ -470,6 +471,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
if (err != -EEXIST)
goto fail_put_swap;
+ /*
+ * Protect against a recursive call to __read_swap_cache_async()
+ * on the same entry waiting forever here because SWAP_HAS_CACHE
+ * is set but the folio is not the swap cache yet. This can
+ * happen today if mem_cgroup_swapin_charge_folio() below
+ * triggers reclaim through zswap, which may call
+ * __read_swap_cache_async() in the writeback path.
+ */
+ if (skip_if_exists)
+ goto fail_put_swap;
+
/*
* We might race against __delete_from_swap_cache(), and
* stumble across a swap_map entry whose SWAP_HAS_CACHE
@@ -537,7 +549,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
mpol = get_vma_policy(vma, addr, 0, &ilx);
page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
- &page_allocated);
+ &page_allocated, false);
mpol_cond_put(mpol);
if (page_allocated)
@@ -654,7 +666,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
/* Ok, do the async read-ahead now */
page = __read_swap_cache_async(
swp_entry(swp_type(entry), offset),
- gfp_mask, mpol, ilx, &page_allocated);
+ gfp_mask, mpol, ilx, &page_allocated, false);
if (!page)
continue;
if (page_allocated) {
@@ -672,7 +684,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
skip:
/* The page was likely read above, so no need for plugging here */
page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
- &page_allocated);
+ &page_allocated, false);
if (unlikely(page_allocated))
swap_readpage(page, false, NULL);
return page;
@@ -827,7 +839,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
pte_unmap(pte);
pte = NULL;
page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
- &page_allocated);
+ &page_allocated, false);
if (!page)
continue;
if (page_allocated) {
@@ -847,7 +859,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
skip:
/* The page was likely read above, so no need for plugging here */
page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
- &page_allocated);
+ &page_allocated, false);
if (unlikely(page_allocated))
swap_readpage(page, false, NULL);
return page;
diff --git a/mm/zswap.c b/mm/zswap.c
index 4bdb2d83bb0d..f323e45cbdc7 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -35,6 +35,7 @@
#include <linux/writeback.h>
#include <linux/pagemap.h>
#include <linux/workqueue.h>
+#include <linux/list_lru.h>
#include "swap.h"
#include "internal.h"
@@ -174,8 +175,8 @@ struct zswap_pool {
struct work_struct shrink_work;
struct hlist_node node;
char tfm_name[CRYPTO_MAX_ALG_NAME];
- struct list_head lru;
- spinlock_t lru_lock;
+ struct list_lru list_lru;
+ struct mem_cgroup *next_shrink;
};
/*
@@ -291,15 +292,46 @@ static void zswap_update_total_size(void)
zswap_pool_total_size = total;
}
+/* should be called under RCU */
+#ifdef CONFIG_MEMCG
+static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
+{
+ return entry->objcg ? obj_cgroup_memcg(entry->objcg) : NULL;
+}
+#else
+static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
+{
+ return NULL;
+}
+#endif
+
+static inline int entry_to_nid(struct zswap_entry *entry)
+{
+ return page_to_nid(virt_to_page(entry));
+}
+
+void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
+{
+ struct zswap_pool *pool;
+
+ /* lock out zswap pools list modification */
+ spin_lock(&zswap_pools_lock);
+ list_for_each_entry(pool, &zswap_pools, list) {
+ if (pool->next_shrink == memcg)
+ pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
+ }
+ spin_unlock(&zswap_pools_lock);
+}
+
/*********************************
* zswap entry functions
**********************************/
static struct kmem_cache *zswap_entry_cache;
-static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
+static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
{
struct zswap_entry *entry;
- entry = kmem_cache_alloc(zswap_entry_cache, gfp);
+ entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
if (!entry)
return NULL;
entry->refcount = 1;
@@ -312,6 +344,61 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
kmem_cache_free(zswap_entry_cache, entry);
}
+/*********************************
+* lru functions
+**********************************/
+static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+ int nid = entry_to_nid(entry);
+ struct mem_cgroup *memcg;
+
+ /*
+ * Note that it is safe to use rcu_read_lock() here, even in the face of
+ * concurrent memcg offlining. Thanks to the memcg->kmemcg_id indirection
+ * used in list_lru lookup, only two scenarios are possible:
+ *
+ * 1. list_lru_add() is called before memcg->kmemcg_id is updated. The
+ * new entry will be reparented to memcg's parent's list_lru.
+ * 2. list_lru_add() is called after memcg->kmemcg_id is updated. The
+ * new entry will be added directly to memcg's parent's list_lru.
+ *
+ * Similar reasoning holds for list_lru_del() and list_lru_putback().
+ */
+ rcu_read_lock();
+ memcg = mem_cgroup_from_entry(entry);
+ /* will always succeed */
+ list_lru_add(list_lru, &entry->lru, nid, memcg);
+ rcu_read_unlock();
+}
+
+static void zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+ int nid = entry_to_nid(entry);
+ struct mem_cgroup *memcg;
+
+ rcu_read_lock();
+ memcg = mem_cgroup_from_entry(entry);
+ /* will always succeed */
+ list_lru_del(list_lru, &entry->lru, nid, memcg);
+ rcu_read_unlock();
+}
+
+static void zswap_lru_putback(struct list_lru *list_lru,
+ struct zswap_entry *entry)
+{
+ int nid = entry_to_nid(entry);
+ spinlock_t *lock = &list_lru->node[nid].lock;
+ struct mem_cgroup *memcg;
+
+ rcu_read_lock();
+ memcg = mem_cgroup_from_entry(entry);
+ spin_lock(lock);
+ /* we cannot use list_lru_add here, because it increments node's lru count */
+ list_lru_putback(list_lru, &entry->lru, nid, memcg);
+ spin_unlock(lock);
+ rcu_read_unlock();
+}
+
/*********************************
* rbtree functions
**********************************/
@@ -396,9 +483,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
if (!entry->length)
atomic_dec(&zswap_same_filled_pages);
else {
- spin_lock(&entry->pool->lru_lock);
- list_del(&entry->lru);
- spin_unlock(&entry->pool->lru_lock);
+ zswap_lru_del(&entry->pool->list_lru, entry);
zpool_free(zswap_find_zpool(entry), entry->handle);
zswap_pool_put(entry->pool);
}
@@ -632,21 +717,15 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
zswap_entry_put(tree, entry);
}
-static int zswap_reclaim_entry(struct zswap_pool *pool)
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+ spinlock_t *lock, void *arg)
{
- struct zswap_entry *entry;
+ struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
struct zswap_tree *tree;
pgoff_t swpoffset;
- int ret;
+ enum lru_status ret = LRU_REMOVED_RETRY;
+ int writeback_result;
- /* Get an entry off the LRU */
- spin_lock(&pool->lru_lock);
- if (list_empty(&pool->lru)) {
- spin_unlock(&pool->lru_lock);
- return -EINVAL;
- }
- entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
- list_del_init(&entry->lru);
/*
* Once the lru lock is dropped, the entry might get freed. The
* swpoffset is copied to the stack, and entry isn't deref'd again
@@ -654,28 +733,32 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
*/
swpoffset = swp_offset(entry->swpentry);
tree = zswap_trees[swp_type(entry->swpentry)];
- spin_unlock(&pool->lru_lock);
+ list_lru_isolate(l, item);
+ /*
+ * It's safe to drop the lock here because we return either
+ * LRU_REMOVED_RETRY or LRU_RETRY.
+ */
+ spin_unlock(lock);
/* Check for invalidate() race */
spin_lock(&tree->lock);
- if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
- ret = -EAGAIN;
+ if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
goto unlock;
- }
+
/* Hold a reference to prevent a free during writeback */
zswap_entry_get(entry);
spin_unlock(&tree->lock);
- ret = zswap_writeback_entry(entry, tree);
+ writeback_result = zswap_writeback_entry(entry, tree);
spin_lock(&tree->lock);
- if (ret) {
- /* Writeback failed, put entry back on LRU */
- spin_lock(&pool->lru_lock);
- list_move(&entry->lru, &pool->lru);
- spin_unlock(&pool->lru_lock);
+ if (writeback_result) {
+ zswap_reject_reclaim_fail++;
+ zswap_lru_putback(&entry->pool->list_lru, entry);
+ ret = LRU_RETRY;
goto put_unlock;
}
+ zswap_written_back_pages++;
/*
* Writeback started successfully, the page now belongs to the
@@ -689,27 +772,93 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
zswap_entry_put(tree, entry);
unlock:
spin_unlock(&tree->lock);
- return ret ? -EAGAIN : 0;
+ spin_lock(lock);
+ return ret;
+}
+
+static int shrink_memcg(struct mem_cgroup *memcg)
+{
+ struct zswap_pool *pool;
+ int nid, shrunk = 0;
+
+ /*
+ * Skip zombies because their LRUs are reparented and we would be
+ * reclaiming from the parent instead of the dead memcg.
+ */
+ if (memcg && !mem_cgroup_online(memcg))
+ return -ENOENT;
+
+ pool = zswap_pool_current_get();
+ if (!pool)
+ return -EINVAL;
+
+ for_each_node_state(nid, N_NORMAL_MEMORY) {
+ unsigned long nr_to_walk = 1;
+
+ shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg,
+ &shrink_memcg_cb, NULL, &nr_to_walk);
+ }
+ zswap_pool_put(pool);
+ return shrunk ? 0 : -EAGAIN;
}
static void shrink_worker(struct work_struct *w)
{
struct zswap_pool *pool = container_of(w, typeof(*pool),
shrink_work);
+ struct mem_cgroup *memcg;
int ret, failures = 0;
+ /* global reclaim will select cgroup in a round-robin fashion. */
do {
- ret = zswap_reclaim_entry(pool);
- if (ret) {
- zswap_reject_reclaim_fail++;
- if (ret != -EAGAIN)
+ spin_lock(&zswap_pools_lock);
+ pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
+ memcg = pool->next_shrink;
+
+ /*
+ * We need to retry if we have gone through a full round trip, or if we
+ * got an offline memcg (or else we risk undoing the effect of the
+ * zswap memcg offlining cleanup callback). This is not catastrophic
+ * per se, but it will keep the now offlined memcg hostage for a while.
+ *
+ * Note that if we got an online memcg, we will keep the extra
+ * reference in case the original reference obtained by mem_cgroup_iter
+ * is dropped by the zswap memcg offlining callback, ensuring that the
+ * memcg is not killed when we are reclaiming.
+ */
+ if (!memcg) {
+ spin_unlock(&zswap_pools_lock);
+ if (++failures == MAX_RECLAIM_RETRIES)
break;
+
+ goto resched;
+ }
+
+ if (!mem_cgroup_online(memcg)) {
+ /* drop the reference from mem_cgroup_iter() */
+ mem_cgroup_put(memcg);
+ pool->next_shrink = NULL;
+ spin_unlock(&zswap_pools_lock);
+
if (++failures == MAX_RECLAIM_RETRIES)
break;
+
+ goto resched;
}
+ spin_unlock(&zswap_pools_lock);
+
+ ret = shrink_memcg(memcg);
+ /* drop the extra reference */
+ mem_cgroup_put(memcg);
+
+ if (ret == -EINVAL)
+ break;
+ if (ret && ++failures == MAX_RECLAIM_RETRIES)
+ break;
+
+resched:
cond_resched();
} while (!zswap_can_accept());
- zswap_pool_put(pool);
}
static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
@@ -767,8 +916,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
*/
kref_init(&pool->kref);
INIT_LIST_HEAD(&pool->list);
- INIT_LIST_HEAD(&pool->lru);
- spin_lock_init(&pool->lru_lock);
+ list_lru_init_memcg(&pool->list_lru, NULL);
INIT_WORK(&pool->shrink_work, shrink_worker);
zswap_pool_debug("created", pool);
@@ -834,6 +982,13 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
free_percpu(pool->acomp_ctx);
+ list_lru_destroy(&pool->list_lru);
+
+ spin_lock(&zswap_pools_lock);
+ mem_cgroup_put(pool->next_shrink);
+ pool->next_shrink = NULL;
+ spin_unlock(&zswap_pools_lock);
+
for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
zpool_destroy_pool(pool->zpools[i]);
kfree(pool);
@@ -1081,7 +1236,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
/* try to allocate swap cache page */
mpol = get_task_policy(current);
page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
- NO_INTERLEAVE_INDEX, &page_was_allocated);
+ NO_INTERLEAVE_INDEX, &page_was_allocated, true);
if (!page) {
ret = -ENOMEM;
goto fail;
@@ -1152,7 +1307,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
/* start writeback */
__swap_writepage(page, &wbc);
put_page(page);
- zswap_written_back_pages++;
return ret;
@@ -1209,6 +1363,7 @@ bool zswap_store(struct folio *folio)
struct scatterlist input, output;
struct crypto_acomp_ctx *acomp_ctx;
struct obj_cgroup *objcg = NULL;
+ struct mem_cgroup *memcg = NULL;
struct zswap_pool *pool;
struct zpool *zpool;
unsigned int dlen = PAGE_SIZE;
@@ -1240,15 +1395,15 @@ bool zswap_store(struct folio *folio)
zswap_invalidate_entry(tree, dupentry);
}
spin_unlock(&tree->lock);
-
- /*
- * XXX: zswap reclaim does not work with cgroups yet. Without a
- * cgroup-aware entry LRU, we will push out entries system-wide based on
- * local cgroup limits.
- */
objcg = get_obj_cgroup_from_folio(folio);
- if (objcg && !obj_cgroup_may_zswap(objcg))
- goto reject;
+ if (objcg && !obj_cgroup_may_zswap(objcg)) {
+ memcg = get_mem_cgroup_from_objcg(objcg);
+ if (shrink_memcg(memcg)) {
+ mem_cgroup_put(memcg);
+ goto reject;
+ }
+ mem_cgroup_put(memcg);
+ }
/* reclaim space if needed */
if (zswap_is_full()) {
@@ -1265,7 +1420,7 @@ bool zswap_store(struct folio *folio)
}
/* allocate entry */
- entry = zswap_entry_cache_alloc(GFP_KERNEL);
+ entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
if (!entry) {
zswap_reject_kmemcache_fail++;
goto reject;
@@ -1292,6 +1447,15 @@ bool zswap_store(struct folio *folio)
if (!entry->pool)
goto freepage;
+ if (objcg) {
+ memcg = get_mem_cgroup_from_objcg(objcg);
+ if (memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL)) {
+ mem_cgroup_put(memcg);
+ goto put_pool;
+ }
+ mem_cgroup_put(memcg);
+ }
+
/* compress */
acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
@@ -1370,9 +1534,8 @@ bool zswap_store(struct folio *folio)
zswap_invalidate_entry(tree, dupentry);
}
if (entry->length) {
- spin_lock(&entry->pool->lru_lock);
- list_add(&entry->lru, &entry->pool->lru);
- spin_unlock(&entry->pool->lru_lock);
+ INIT_LIST_HEAD(&entry->lru);
+ zswap_lru_add(&entry->pool->list_lru, entry);
}
spin_unlock(&tree->lock);
@@ -1385,6 +1548,7 @@ bool zswap_store(struct folio *folio)
put_dstmem:
mutex_unlock(acomp_ctx->mutex);
+put_pool:
zswap_pool_put(entry->pool);
freepage:
zswap_entry_cache_free(entry);
@@ -1479,9 +1643,8 @@ bool zswap_load(struct folio *folio)
zswap_invalidate_entry(tree, entry);
folio_mark_dirty(folio);
} else if (entry->length) {
- spin_lock(&entry->pool->lru_lock);
- list_move(&entry->lru, &entry->pool->lru);
- spin_unlock(&entry->pool->lru_lock);
+ zswap_lru_del(&entry->pool->list_lru, entry);
+ zswap_lru_add(&entry->pool->list_lru, entry);
}
zswap_entry_put(tree, entry);
spin_unlock(&tree->lock);
--
2.34.1
* [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat
2023-11-30 19:40 [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback Nhat Pham
` (2 preceding siblings ...)
2023-11-30 19:40 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware Nhat Pham
@ 2023-11-30 19:40 ` Nhat Pham
2023-12-05 18:21 ` Yosry Ahmed
2023-12-05 19:33 ` [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat (fix) Nhat Pham
2023-11-30 19:40 ` [PATCH v8 5/6] selftests: cgroup: update per-memcg zswap writeback selftest Nhat Pham
` (3 subsequent siblings)
7 siblings, 2 replies; 48+ messages in thread
From: Nhat Pham @ 2023-11-30 19:40 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Since zswap now writes back pages from memcg-specific LRUs, we need a
new stat to show the writeback count for each memcg.
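The counter surfaces as the zswp_wb key in each cgroup's memory.stat
(and as a global zswp_wb line in /proc/vmstat). The selftest update in
the next patch reads it as follows:

    /* from tools/testing/selftests/cgroup/test_zswap.c (patch 5) */
    static int get_cg_wb_count(const char *cg)
    {
            return cg_read_key_long(cg, "memory.stat", "zswp_wb");
    }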
Suggested-by: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
include/linux/vm_event_item.h | 1 +
mm/memcontrol.c | 1 +
mm/vmstat.c | 1 +
mm/zswap.c | 4 ++++
4 files changed, 7 insertions(+)
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index d1b847502f09..f4569ad98edf 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -142,6 +142,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
#ifdef CONFIG_ZSWAP
ZSWPIN,
ZSWPOUT,
+ ZSWP_WB,
#endif
#ifdef CONFIG_X86
DIRECT_MAP_LEVEL2_SPLIT,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 792ca21c5815..21d79249c8b4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -703,6 +703,7 @@ static const unsigned int memcg_vm_event_stat[] = {
#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
ZSWPIN,
ZSWPOUT,
+ ZSWP_WB,
#endif
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
THP_FAULT_ALLOC,
diff --git a/mm/vmstat.c b/mm/vmstat.c
index afa5a38fcc9c..2249f85e4a87 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1401,6 +1401,7 @@ const char * const vmstat_text[] = {
#ifdef CONFIG_ZSWAP
"zswpin",
"zswpout",
+ "zswp_wb",
#endif
#ifdef CONFIG_X86
"direct_map_level2_splits",
diff --git a/mm/zswap.c b/mm/zswap.c
index f323e45cbdc7..49b79393e472 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -760,6 +760,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
}
zswap_written_back_pages++;
+ if (entry->objcg)
+ count_objcg_event(entry->objcg, ZSWP_WB);
+
+ count_vm_event(ZSWP_WB);
/*
* Writeback started successfully, the page now belongs to the
* swapcache. Drop the entry from zswap - unless invalidate already
--
2.34.1
* [PATCH v8 5/6] selftests: cgroup: update per-memcg zswap writeback selftest
2023-11-30 19:40 [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback Nhat Pham
` (3 preceding siblings ...)
2023-11-30 19:40 ` [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat Nhat Pham
@ 2023-11-30 19:40 ` Nhat Pham
2023-12-08 0:43 ` Chris Li
2023-11-30 19:40 ` [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure Nhat Pham
` (2 subsequent siblings)
7 siblings, 1 reply; 48+ messages in thread
From: Nhat Pham @ 2023-11-30 19:40 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
The memcg-zswap selftest is updated to adjust to the behavior change
implemented by commit 87730b165089 ("zswap: make shrinking memcg-aware"),
where zswap performs writeback for a specific memcg.
Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
tools/testing/selftests/cgroup/test_zswap.c | 74 ++++++++++++++-------
1 file changed, 50 insertions(+), 24 deletions(-)
diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
index c99d2adaca3f..47fdaa146443 100644
--- a/tools/testing/selftests/cgroup/test_zswap.c
+++ b/tools/testing/selftests/cgroup/test_zswap.c
@@ -50,9 +50,9 @@ static int get_zswap_stored_pages(size_t *value)
return read_int("/sys/kernel/debug/zswap/stored_pages", value);
}
-static int get_zswap_written_back_pages(size_t *value)
+static int get_cg_wb_count(const char *cg)
{
- return read_int("/sys/kernel/debug/zswap/written_back_pages", value);
+ return cg_read_key_long(cg, "memory.stat", "zswp_wb");
}
static long get_zswpout(const char *cgroup)
@@ -73,6 +73,24 @@ static int allocate_bytes(const char *cgroup, void *arg)
return 0;
}
+static char *setup_test_group_1M(const char *root, const char *name)
+{
+ char *group_name = cg_name(root, name);
+
+ if (!group_name)
+ return NULL;
+ if (cg_create(group_name))
+ goto fail;
+ if (cg_write(group_name, "memory.max", "1M")) {
+ cg_destroy(group_name);
+ goto fail;
+ }
+ return group_name;
+fail:
+ free(group_name);
+ return NULL;
+}
+
/*
* Sanity test to check that pages are written into zswap.
*/
@@ -117,43 +135,51 @@ static int test_zswap_usage(const char *root)
/*
* When trying to store a memcg page in zswap, if the memcg hits its memory
- * limit in zswap, writeback should not be triggered.
- *
- * This was fixed with commit 0bdf0efa180a("zswap: do not shrink if cgroup may
- * not zswap"). Needs to be revised when a per memcg writeback mechanism is
- * implemented.
+ * limit in zswap, writeback should affect only the zswapped pages of that
+ * memcg.
*/
static int test_no_invasive_cgroup_shrink(const char *root)
{
- size_t written_back_before, written_back_after;
int ret = KSFT_FAIL;
- char *test_group;
+ size_t control_allocation_size = MB(10);
+ char *control_allocation, *wb_group = NULL, *control_group = NULL;
/* Set up */
- test_group = cg_name(root, "no_shrink_test");
- if (!test_group)
- goto out;
- if (cg_create(test_group))
+ wb_group = setup_test_group_1M(root, "per_memcg_wb_test1");
+ if (!wb_group)
+ return KSFT_FAIL;
+ if (cg_write(wb_group, "memory.zswap.max", "10K"))
goto out;
- if (cg_write(test_group, "memory.max", "1M"))
+ control_group = setup_test_group_1M(root, "per_memcg_wb_test2");
+ if (!control_group)
goto out;
- if (cg_write(test_group, "memory.zswap.max", "10K"))
+
+ /* Push some test_group2 memory into zswap */
+ if (cg_enter_current(control_group))
goto out;
- if (get_zswap_written_back_pages(&written_back_before))
+ control_allocation = malloc(control_allocation_size);
+ for (int i = 0; i < control_allocation_size; i += 4095)
+ control_allocation[i] = 'a';
+ if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
goto out;
- /* Allocate 10x memory.max to push memory into zswap */
- if (cg_run(test_group, allocate_bytes, (void *)MB(10)))
+ /* Allocate 10x memory.max to push wb_group memory into zswap and trigger wb */
+ if (cg_run(wb_group, allocate_bytes, (void *)MB(10)))
goto out;
- /* Verify that no writeback happened because of the memcg allocation */
- if (get_zswap_written_back_pages(&written_back_after))
- goto out;
- if (written_back_after == written_back_before)
+ /* Verify that only zswapped memory from gwb_group has been written back */
+ if (get_cg_wb_count(wb_group) > 0 && get_cg_wb_count(control_group) == 0)
ret = KSFT_PASS;
out:
- cg_destroy(test_group);
- free(test_group);
+ cg_enter_current(root);
+ if (control_group) {
+ cg_destroy(control_group);
+ free(control_group);
+ }
+ cg_destroy(wb_group);
+ free(wb_group);
+ if (control_allocation)
+ free(control_allocation);
return ret;
}
--
2.34.1
* [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
2023-11-30 19:40 [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback Nhat Pham
` (4 preceding siblings ...)
2023-11-30 19:40 ` [PATCH v8 5/6] selftests: cgroup: update per-memcg zswap writeback selftest Nhat Pham
@ 2023-11-30 19:40 ` Nhat Pham
2023-12-06 5:51 ` Chengming Zhou
2023-12-06 19:44 ` [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure (fix) Nhat Pham
2023-11-30 21:19 ` [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback Andrew Morton
2023-12-06 4:10 ` Bagas Sanjaya
7 siblings, 2 replies; 48+ messages in thread
From: Nhat Pham @ 2023-11-30 19:40 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
Currently, we only shrink the zswap pool when the user-defined limit is
hit. This means that if we set the limit too high, cold data that are
unlikely to be used again will reside in the pool, wasting precious
memory. It is hard to predict how much zswap space will be needed ahead
of time, as this depends on the workload (specifically, on factors such
as memory access patterns and compressibility of the memory pages).
This patch implements a memcg- and NUMA-aware shrinker for zswap that
is invoked when there is memory pressure. The shrinker does not
have any parameter that must be tuned by the user, and can be opted in
or out on a per-memcg basis.
Furthermore, to make it more robust for many workloads and prevent
overshrinking (i.e. evicting warm pages that might be refaulted into
memory), we build in the following heuristics:
* Estimate the number of warm pages residing in zswap, and attempt to
protect this region of the zswap LRU.
* Scale the number of freeable objects by an estimate of the memory
saving factor. The better zswap compresses the data, the fewer pages
we will evict to swap (as we will otherwise incur IO for relatively
small memory saving).
* During reclaim, if the shrinker encounters a page that is also being
brought into memory, the shrinker will cautiously terminate its
shrinking action, as this is a sign that it is touching the warmer
region of the zswap LRU.
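A rough sketch of how the first two heuristics combine in the shrinker's
count callback; the names below are illustrative only, the actual
implementation is in this patch's mm/zswap.c hunk:

    /* nr_backing: memory consumed by the compressed data, in pages.
     * nr_stored: number of pages currently stored in zswap.
     * nr_protected: the warm-page estimate kept per lruvec. */
    nr_freeable = list_lru_count_one(&pool->list_lru, nid, memcg);
    nr_freeable -= min(nr_freeable, nr_protected);
    /* better compression => nr_backing much smaller than nr_stored =>
     * fewer objects reported as freeable */
    return nr_freeable * nr_backing / max(nr_stored, 1UL);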
As a proof of concept, we ran the following synthetic benchmark:
build the Linux kernel in a memory-limited cgroup, and allocate some
cold data in tmpfs to see if the shrinker could write it out and
improve the overall performance. Depending on the amount of cold data
generated, we observed a 14% to 35% reduction in kernel CPU time used
by the kernel builds.
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
---
Documentation/admin-guide/mm/zswap.rst | 10 ++
include/linux/mmzone.h | 2 +
include/linux/zswap.h | 25 +++-
mm/Kconfig | 14 ++
mm/mmzone.c | 1 +
mm/swap_state.c | 2 +
mm/zswap.c | 185 ++++++++++++++++++++++++-
7 files changed, 233 insertions(+), 6 deletions(-)
diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
index 45b98390e938..62fc244ec702 100644
--- a/Documentation/admin-guide/mm/zswap.rst
+++ b/Documentation/admin-guide/mm/zswap.rst
@@ -153,6 +153,16 @@ attribute, e. g.::
Setting this parameter to 100 will disable the hysteresis.
+When there is a sizable amount of cold memory residing in the zswap pool, it
+can be advantageous to proactively write these cold pages to swap and reclaim
+the memory for other use cases. By default, the zswap shrinker is disabled.
+Users can enable it as follows::
+
+ echo Y > /sys/module/zswap/parameters/shrinker_enabled
+
+This can be enabled at boot time if ``CONFIG_ZSWAP_SHRINKER_DEFAULT_ON`` is
+selected.
+
A debugfs interface is provided for various statistic about pool size, number
of pages stored, same-value filled pages and various counters for the reasons
pages are rejected.
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 7b1816450bfc..b23bc5390240 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -22,6 +22,7 @@
#include <linux/mm_types.h>
#include <linux/page-flags.h>
#include <linux/local_lock.h>
+#include <linux/zswap.h>
#include <asm/page.h>
/* Free memory management - zoned buddy allocator. */
@@ -641,6 +642,7 @@ struct lruvec {
#ifdef CONFIG_MEMCG
struct pglist_data *pgdat;
#endif
+ struct zswap_lruvec_state zswap_lruvec_state;
};
/* Isolate for asynchronous migration */
diff --git a/include/linux/zswap.h b/include/linux/zswap.h
index e571e393669b..08c240e16a01 100644
--- a/include/linux/zswap.h
+++ b/include/linux/zswap.h
@@ -5,20 +5,40 @@
#include <linux/types.h>
#include <linux/mm_types.h>
+struct lruvec;
+
extern u64 zswap_pool_total_size;
extern atomic_t zswap_stored_pages;
#ifdef CONFIG_ZSWAP
+struct zswap_lruvec_state {
+ /*
+ * Number of pages in zswap that should be protected from the shrinker.
+ * This number is an estimate of the following counts:
+ *
+ * a) Recent page faults.
+ * b) Recent insertion to the zswap LRU. This includes new zswap stores,
+ * as well as recent zswap LRU rotations.
+ *
+ * These pages are likely to be warm, and might incur IO if they are written
+ * to swap.
+ */
+ atomic_long_t nr_zswap_protected;
+};
+
bool zswap_store(struct folio *folio);
bool zswap_load(struct folio *folio);
void zswap_invalidate(int type, pgoff_t offset);
void zswap_swapon(int type);
void zswap_swapoff(int type);
void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
-
+void zswap_lruvec_state_init(struct lruvec *lruvec);
+void zswap_page_swapin(struct page *page);
#else
+struct zswap_lruvec_state {};
+
static inline bool zswap_store(struct folio *folio)
{
return false;
@@ -33,7 +53,8 @@ static inline void zswap_invalidate(int type, pgoff_t offset) {}
static inline void zswap_swapon(int type) {}
static inline void zswap_swapoff(int type) {}
static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
-
+static inline void zswap_lruvec_state_init(struct lruvec *lruvec) {}
+static inline void zswap_page_swapin(struct page *page) {}
#endif
#endif /* _LINUX_ZSWAP_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index 57cd378c73d6..ca87cdb72f11 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -61,6 +61,20 @@ config ZSWAP_EXCLUSIVE_LOADS_DEFAULT_ON
The cost is that if the page was never dirtied and needs to be
swapped out again, it will be re-compressed.
+config ZSWAP_SHRINKER_DEFAULT_ON
+ bool "Shrink the zswap pool on memory pressure"
+ depends on ZSWAP
+ default n
+ help
+ If selected, the zswap shrinker will be enabled, and the pages
+ stored in the zswap pool will become available for reclaim (i.e.
+ written back to the backing swap device) on memory pressure.
+
+ This means that zswap writeback could happen even if the pool is
+ not yet full, or the cgroup zswap limit has not been reached,
+ reducing the chance that cold pages will reside in the zswap pool
+ and consume memory indefinitely.
+
choice
prompt "Default compressor"
depends on ZSWAP
diff --git a/mm/mmzone.c b/mm/mmzone.c
index b594d3f268fe..c01896eca736 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -78,6 +78,7 @@ void lruvec_init(struct lruvec *lruvec)
memset(lruvec, 0, sizeof(struct lruvec));
spin_lock_init(&lruvec->lru_lock);
+ zswap_lruvec_state_init(lruvec);
for_each_lru(lru)
INIT_LIST_HEAD(&lruvec->lists[lru]);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 6c84236382f3..c597cec606e4 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -687,6 +687,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
&page_allocated, false);
if (unlikely(page_allocated))
swap_readpage(page, false, NULL);
+ zswap_page_swapin(page);
return page;
}
@@ -862,6 +863,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
&page_allocated, false);
if (unlikely(page_allocated))
swap_readpage(page, false, NULL);
+ zswap_page_swapin(page);
return page;
}
diff --git a/mm/zswap.c b/mm/zswap.c
index 49b79393e472..0f086ffd7b6a 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -148,6 +148,11 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
/* Number of zpools in zswap_pool (empirically determined for scalability) */
#define ZSWAP_NR_ZPOOLS 32
+/* Enable/disable memory pressure-based shrinker. */
+static bool zswap_shrinker_enabled = IS_ENABLED(
+ CONFIG_ZSWAP_SHRINKER_DEFAULT_ON);
+module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
+
/*********************************
* data structures
**********************************/
@@ -177,6 +182,8 @@ struct zswap_pool {
char tfm_name[CRYPTO_MAX_ALG_NAME];
struct list_lru list_lru;
struct mem_cgroup *next_shrink;
+ struct shrinker *shrinker;
+ atomic_t nr_stored;
};
/*
@@ -275,17 +282,26 @@ static bool zswap_can_accept(void)
DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
}
+static u64 get_zswap_pool_size(struct zswap_pool *pool)
+{
+ u64 pool_size = 0;
+ int i;
+
+ for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
+ pool_size += zpool_get_total_size(pool->zpools[i]);
+
+ return pool_size;
+}
+
static void zswap_update_total_size(void)
{
struct zswap_pool *pool;
u64 total = 0;
- int i;
rcu_read_lock();
list_for_each_entry_rcu(pool, &zswap_pools, list)
- for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
- total += zpool_get_total_size(pool->zpools[i]);
+ total += get_zswap_pool_size(pool);
rcu_read_unlock();
@@ -344,13 +360,34 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
kmem_cache_free(zswap_entry_cache, entry);
}
+/*********************************
+* zswap lruvec functions
+**********************************/
+void zswap_lruvec_state_init(struct lruvec *lruvec)
+{
+ atomic_long_set(&lruvec->zswap_lruvec_state.nr_zswap_protected, 0);
+}
+
+void zswap_page_swapin(struct page *page)
+{
+ struct lruvec *lruvec;
+
+ if (page) {
+ lruvec = folio_lruvec(page_folio(page));
+ atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected);
+ }
+}
+
/*********************************
* lru functions
**********************************/
static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
{
+ atomic_long_t *nr_zswap_protected;
+ unsigned long lru_size, old, new;
int nid = entry_to_nid(entry);
struct mem_cgroup *memcg;
+ struct lruvec *lruvec;
/*
* Note that it is safe to use rcu_read_lock() here, even in the face of
@@ -368,6 +405,19 @@ static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
memcg = mem_cgroup_from_entry(entry);
/* will always succeed */
list_lru_add(list_lru, &entry->lru, nid, memcg);
+
+ /* Update the protection area */
+ lru_size = list_lru_count_one(list_lru, nid, memcg);
+ lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+ nr_zswap_protected = &lruvec->zswap_lruvec_state.nr_zswap_protected;
+ old = atomic_long_inc_return(nr_zswap_protected);
+ /*
+ * Decay to avoid overflow and adapt to changing workloads.
+ * This is based on LRU reclaim cost decaying heuristics.
+ */
+ do {
+ new = old > lru_size / 4 ? old / 2 : old;
+ } while (!atomic_long_try_cmpxchg(nr_zswap_protected, &old, new));
rcu_read_unlock();
}
@@ -389,6 +439,7 @@ static void zswap_lru_putback(struct list_lru *list_lru,
int nid = entry_to_nid(entry);
spinlock_t *lock = &list_lru->node[nid].lock;
struct mem_cgroup *memcg;
+ struct lruvec *lruvec;
rcu_read_lock();
memcg = mem_cgroup_from_entry(entry);
@@ -396,6 +447,10 @@ static void zswap_lru_putback(struct list_lru *list_lru,
/* we cannot use list_lru_add here, because it increments node's lru count */
list_lru_putback(list_lru, &entry->lru, nid, memcg);
spin_unlock(lock);
+
+ lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry_to_nid(entry)));
+ /* increment the protection area to account for the LRU rotation. */
+ atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected);
rcu_read_unlock();
}
@@ -485,6 +540,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
else {
zswap_lru_del(&entry->pool->list_lru, entry);
zpool_free(zswap_find_zpool(entry), entry->handle);
+ atomic_dec(&entry->pool->nr_stored);
zswap_pool_put(entry->pool);
}
zswap_entry_cache_free(entry);
@@ -526,6 +582,102 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
return entry;
}
+/*********************************
+* shrinker functions
+**********************************/
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+ spinlock_t *lock, void *arg);
+
+static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
+ unsigned long shrink_ret, nr_protected, lru_size;
+ struct zswap_pool *pool = shrinker->private_data;
+ bool encountered_page_in_swapcache = false;
+
+ nr_protected =
+ atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
+ lru_size = list_lru_shrink_count(&pool->list_lru, sc);
+
+ /*
+ * Abort if the shrinker is disabled or if we are shrinking into the
+ * protected region.
+ *
+ * This short-circuiting is necessary because if we have many concurrent
+ * reclaimers getting the freeable zswap object counts at the same time
+ * (before any of them has made reasonable progress), the total number of
+ * reclaimed objects might be more than the number of unprotected objects
+ * (i.e. the reclaimers will reclaim into the protected area of the
+ * zswap LRU).
+ */
+ if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
+ sc->nr_scanned = 0;
+ return SHRINK_STOP;
+ }
+
+ shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
+ &encountered_page_in_swapcache);
+
+ if (encountered_page_in_swapcache)
+ return SHRINK_STOP;
+
+ return shrink_ret ? shrink_ret : SHRINK_STOP;
+}
+
+static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct zswap_pool *pool = shrinker->private_data;
+ struct mem_cgroup *memcg = sc->memcg;
+ struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
+ unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
+
+#ifdef CONFIG_MEMCG_KMEM
+ cgroup_rstat_flush(memcg->css.cgroup);
+ nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
+ nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
+#else
+ /* use pool stats instead of memcg stats */
+ nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
+ nr_stored = atomic_read(&pool->nr_stored);
+#endif
+
+ if (!zswap_shrinker_enabled || !nr_stored)
+ return 0;
+
+ nr_protected =
+ atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
+ nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
+ /*
+ * Subtract from the lru size an estimate of the number of pages
+ * that should be protected.
+ */
+ nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
+
+ /*
+ * Scale the number of freeable pages by the memory saving factor.
+ * This ensures that the better zswap compresses memory, the fewer
+ * pages we will evict to swap (as it will otherwise incur IO for
+ * relatively small memory saving).
+ */
+ return mult_frac(nr_freeable, nr_backing, nr_stored);
+}
+
+static void zswap_alloc_shrinker(struct zswap_pool *pool)
+{
+ pool->shrinker =
+ shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
+ if (!pool->shrinker)
+ return;
+
+ pool->shrinker->private_data = pool;
+ pool->shrinker->scan_objects = zswap_shrinker_scan;
+ pool->shrinker->count_objects = zswap_shrinker_count;
+ pool->shrinker->batch = 0;
+ pool->shrinker->seeks = DEFAULT_SEEKS;
+}
+
/*********************************
* per-cpu code
**********************************/
@@ -721,6 +873,7 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
spinlock_t *lock, void *arg)
{
struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+ bool *encountered_page_in_swapcache = (bool *)arg;
struct zswap_tree *tree;
pgoff_t swpoffset;
enum lru_status ret = LRU_REMOVED_RETRY;
@@ -756,6 +909,17 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
zswap_reject_reclaim_fail++;
zswap_lru_putback(&entry->pool->list_lru, entry);
ret = LRU_RETRY;
+
+ /*
+ * Encountering a page already in swap cache is a sign that we are shrinking
+ * into the warmer region. We should terminate shrinking (if we're in the dynamic
+ * shrinker context).
+ */
+ if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
+ ret = LRU_SKIP;
+ *encountered_page_in_swapcache = true;
+ }
+
goto put_unlock;
}
zswap_written_back_pages++;
@@ -913,6 +1077,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
&pool->node);
if (ret)
goto error;
+
+ zswap_alloc_shrinker(pool);
+ if (!pool->shrinker)
+ goto error;
+
pr_debug("using %s compressor\n", pool->tfm_name);
/* being the current pool takes 1 ref; this func expects the
@@ -920,13 +1089,19 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
*/
kref_init(&pool->kref);
INIT_LIST_HEAD(&pool->list);
- list_lru_init_memcg(&pool->list_lru, NULL);
+ if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
+ goto lru_fail;
+ shrinker_register(pool->shrinker);
INIT_WORK(&pool->shrink_work, shrink_worker);
+ atomic_set(&pool->nr_stored, 0);
zswap_pool_debug("created", pool);
return pool;
+lru_fail:
+ list_lru_destroy(&pool->list_lru);
+ shrinker_free(pool->shrinker);
error:
if (pool->acomp_ctx)
free_percpu(pool->acomp_ctx);
@@ -984,6 +1159,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
zswap_pool_debug("destroying", pool);
+ shrinker_free(pool->shrinker);
cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
free_percpu(pool->acomp_ctx);
list_lru_destroy(&pool->list_lru);
@@ -1540,6 +1716,7 @@ bool zswap_store(struct folio *folio)
if (entry->length) {
INIT_LIST_HEAD(&entry->lru);
zswap_lru_add(&entry->pool->list_lru, entry);
+ atomic_inc(&entry->pool->nr_stored);
}
spin_unlock(&tree->lock);
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* Re: [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection
2023-11-30 19:40 ` [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection Nhat Pham
@ 2023-11-30 19:57 ` Matthew Wilcox
2023-11-30 20:07 ` Nhat Pham
0 siblings, 1 reply; 48+ messages in thread
From: Matthew Wilcox @ 2023-11-30 19:57 UTC (permalink / raw)
To: Nhat Pham
Cc: akpm, hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
On Thu, Nov 30, 2023 at 11:40:18AM -0800, Nhat Pham wrote:
> This patch changes list_lru interface so that the caller must explicitly
> specify numa node and memcg when adding and removing objects. The old
> list_lru_add() and list_lru_del() are renamed to list_lru_add_obj() and
> list_lru_del_obj(), respectively.
Wouldn't it be better to add list_lru_add_memcg() and
list_lru_del_memcg() and have:
+bool list_lru_del(struct list_lru *lru, struct list_head *item)
+{
+ int nid = page_to_nid(virt_to_page(item));
+ struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+ mem_cgroup_from_slab_obj(item) : NULL;
+
+ return list_lru_del_memcg(lru, item, nid, memcg);
+}
Seems like _most_ callers will want the original versions and only
a few will want the explicit memcg/nid versions. No?
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection
2023-11-30 19:57 ` Matthew Wilcox
@ 2023-11-30 20:07 ` Nhat Pham
2023-11-30 20:35 ` Johannes Weiner
0 siblings, 1 reply; 48+ messages in thread
From: Nhat Pham @ 2023-11-30 20:07 UTC (permalink / raw)
To: Matthew Wilcox
Cc: akpm, hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
On Thu, Nov 30, 2023 at 11:57 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Thu, Nov 30, 2023 at 11:40:18AM -0800, Nhat Pham wrote:
> > This patch changes list_lru interface so that the caller must explicitly
> > specify numa node and memcg when adding and removing objects. The old
> > list_lru_add() and list_lru_del() are renamed to list_lru_add_obj() and
> > list_lru_del_obj(), respectively.
>
> Wouldn't it be better to add list_lru_add_memcg() and
> list_lru_del_memcg() and have:
>
> +bool list_lru_del(struct list_lru *lru, struct list_head *item)
> +{
> + int nid = page_to_nid(virt_to_page(item));
> + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> + mem_cgroup_from_slab_obj(item) : NULL;
> +
> + return list_lru_del_memcg(lru, item, nid, memcg);
> +}
>
> Seems like _most_ callers will want the original versions and only
> a few will want the explicit memcg/nid versions. No?
>
I actually did something along that line in earlier iterations of this
patch series (albeit with poorer naming - __list_lru_add() instead of
list_lru_add_memcg()). The consensus after some back and forth was
that the original list_lru_add() was not a very good design (the
better one was this new version that allows for explicit numa/memcg
selection). So I agreed to fix it everywhere as a prep patch.
I don't have strong opinions here to be completely honest, but I do
think this new API makes more sense (at the cost of quite a bit of
elbow grease to fix every callsite and extra reviewing).
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection
2023-11-30 20:07 ` Nhat Pham
@ 2023-11-30 20:35 ` Johannes Weiner
2023-12-04 8:30 ` Chengming Zhou
2023-12-05 0:30 ` Chris Li
0 siblings, 2 replies; 48+ messages in thread
From: Johannes Weiner @ 2023-11-30 20:35 UTC (permalink / raw)
To: Nhat Pham
Cc: Matthew Wilcox, akpm, cerasuolodomenico, yosryahmed, sjenning,
ddstreet, vitaly.wool, mhocko, roman.gushchin, shakeelb,
muchun.song, chrisl, linux-mm, kernel-team, linux-kernel, cgroups,
linux-doc, linux-kselftest, shuah
On Thu, Nov 30, 2023 at 12:07:41PM -0800, Nhat Pham wrote:
> On Thu, Nov 30, 2023 at 11:57 AM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Thu, Nov 30, 2023 at 11:40:18AM -0800, Nhat Pham wrote:
> > > This patch changes list_lru interface so that the caller must explicitly
> > > specify numa node and memcg when adding and removing objects. The old
> > > list_lru_add() and list_lru_del() are renamed to list_lru_add_obj() and
> > > list_lru_del_obj(), respectively.
> >
> > Wouldn't it be better to add list_lru_add_memcg() and
> > list_lru_del_memcg() and have:
> >
> > +bool list_lru_del(struct list_lru *lru, struct list_head *item)
> > +{
> > + int nid = page_to_nid(virt_to_page(item));
> > + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> > + mem_cgroup_from_slab_obj(item) : NULL;
> > +
> > + return list_lru_del_memcg(lru, item, nid, memcg);
> > +}
> >
> > Seems like _most_ callers will want the original versions and only
> > a few will want the explicit memcg/nid versions. No?
> >
>
> I actually did something along that line in earlier iterations of this
> patch series (albeit with poorer naming - __list_lru_add() instead of
> list_lru_add_memcg()). The consensus after some back and forth was
> that the original list_lru_add() was not a very good design (the
> better one was this new version that allows for explicit numa/memcg
> selection). So I agreed to fix it everywhere as a prep patch.
>
> I don't have strong opinions here to be completely honest, but I do
> think this new API makes more sense (at the cost of quite a bit of
> elbow grease to fix every callsites and extra reviewing).
Maybe I can shed some light since I was pushing for doing it this way.
The quiet assumption that 'struct list_head *item' is (embedded in) a
slab object that is also charged to a cgroup is a bit much, given that
nothing in the name or documentation of the function points to that.
It bit us in the THP shrinker where that list head is embedded in a
tailpage (virt_to_page(page) is fun to debug). And it caused some
confusion in this case as well, where the zswap entry is a slab object
but not charged (the entry descriptor is not attractive for cgroup
accounting, only the backing memory it points to.)
Yes, for most users - at least right now - the current assumption is
accurate. The thinking was just that if we do have to differentiate
callers now anyway, we might as well make the interface a bit more
self-documenting and harder to misuse going forward, even if it's a
bit more churn now.
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback
2023-11-30 19:40 [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback Nhat Pham
` (5 preceding siblings ...)
2023-11-30 19:40 ` [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure Nhat Pham
@ 2023-11-30 21:19 ` Andrew Morton
2023-12-06 4:10 ` Bagas Sanjaya
7 siblings, 0 replies; 48+ messages in thread
From: Andrew Morton @ 2023-11-30 21:19 UTC (permalink / raw)
To: Nhat Pham
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
On Thu, 30 Nov 2023 11:40:17 -0800 Nhat Pham <nphamcs@gmail.com> wrote:
> This patch series solves these issues by separating the global zswap
> LRU into per-memcg and per-NUMA LRUs, and performs workload-specific
> (i.e memcg- and NUMA-aware) zswap writeback under memory pressure.
Thanks, I've updated mm-unstable to this version.
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection
2023-11-30 20:35 ` Johannes Weiner
@ 2023-12-04 8:30 ` Chengming Zhou
2023-12-04 17:48 ` Nhat Pham
2023-12-05 0:30 ` Chris Li
1 sibling, 1 reply; 48+ messages in thread
From: Chengming Zhou @ 2023-12-04 8:30 UTC (permalink / raw)
To: Johannes Weiner, Nhat Pham
Cc: Matthew Wilcox, akpm, cerasuolodomenico, yosryahmed, sjenning,
ddstreet, vitaly.wool, mhocko, roman.gushchin, shakeelb,
muchun.song, chrisl, linux-mm, kernel-team, linux-kernel, cgroups,
linux-doc, linux-kselftest, shuah
On 2023/12/1 04:35, Johannes Weiner wrote:
> On Thu, Nov 30, 2023 at 12:07:41PM -0800, Nhat Pham wrote:
>> On Thu, Nov 30, 2023 at 11:57 AM Matthew Wilcox <willy@infradead.org> wrote:
>>>
>>> On Thu, Nov 30, 2023 at 11:40:18AM -0800, Nhat Pham wrote:
>>>> This patch changes list_lru interface so that the caller must explicitly
>>>> specify numa node and memcg when adding and removing objects. The old
>>>> list_lru_add() and list_lru_del() are renamed to list_lru_add_obj() and
>>>> list_lru_del_obj(), respectively.
>>>
>>> Wouldn't it be better to add list_lru_add_memcg() and
>>> list_lru_del_memcg() and have:
>>>
>>> +bool list_lru_del(struct list_lru *lru, struct list_head *item)
>>> +{
>>> + int nid = page_to_nid(virt_to_page(item));
>>> + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
>>> + mem_cgroup_from_slab_obj(item) : NULL;
>>> +
>>> + return list_lru_del_memcg(lru, item, nid, memcg);
>>> +}
>>>
>>> Seems like _most_ callers will want the original versions and only
>>> a few will want the explicit memcg/nid versions. No?
>>>
>>
>> I actually did something along that line in earlier iterations of this
>> patch series (albeit with poorer naming - __list_lru_add() instead of
>> list_lru_add_memcg()). The consensus after some back and forth was
>> that the original list_lru_add() was not a very good design (the
>> better one was this new version that allows for explicit numa/memcg
>> selection). So I agreed to fix it everywhere as a prep patch.
>>
>> I don't have strong opinions here to be completely honest, but I do
>> think this new API makes more sense (at the cost of quite a bit of
>> elbow grease to fix every callsites and extra reviewing).
>
> Maybe I can shed some light since I was pushing for doing it this way.
>
> The quiet assumption that 'struct list_head *item' is (embedded in) a
> slab object that is also charged to a cgroup is a bit much, given that
> nothing in the name or documentation of the function points to that.
>
> It bit us in the THP shrinker where that list head is embedded in a
> tailpage (virt_to_page(page) is fun to debug). And it caused some
> confusion in this case as well, where the zswap entry is a slab object
> but not charged (the entry descriptor is not attractive for cgroup
> accounting, only the backing memory it points to.)
Hi,
I have a question, maybe I missed something since I haven't read all
the earlier versions.
IIUC, the problem here is that "zswap_entry" has different memcg and node
than the "page", so I wonder if we can just charge "zswap_entry" to the
same memcg of the "page".
Like we can do these when allocating the "zswap_entry":
old_memcg = set_active_memcg(memcg)
kmem_cache_alloc_lru(zswap_entry_cache, lru, gfp)
set_active_memcg(old_memcg)
The good points are:
1. "zswap_entry" is charged to the memcg of "page", which is more sensible?
2. We can reuse the kmem_cache_alloc_lru() interface, which makes code simpler
since we don't need to manage list_lru_memcg by ourselves.
3. Maybe the new list_lru_add() and list_lru_del() are not needed anymore?
Since the "zswap_entry" is of the same memcg and node as the "page".
But I don't know if the THP shrinker still needs it.
Thanks!
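For what it's worth, here is a sketch of what that could look like as a helper
in zswap (hypothetical; zswap_entry_cache and pool->list_lru are names from the
series, 'memcg' would be the page's memcg obtained from its objcg, and the
entry cache or gfp would additionally need SLAB_ACCOUNT/__GFP_ACCOUNT for the
charge to actually happen):

static struct zswap_entry *zswap_entry_alloc_charged(struct zswap_pool *pool,
						     struct mem_cgroup *memcg,
						     gfp_t gfp)
{
	/* Charge the entry descriptor to the same memcg as the stored page. */
	struct mem_cgroup *old_memcg = set_active_memcg(memcg);
	struct zswap_entry *entry;

	entry = kmem_cache_alloc_lru(zswap_entry_cache, &pool->list_lru, gfp);
	set_active_memcg(old_memcg);
	return entry;
}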
>
> Yes, for most users - at least right now - the current assumption is
> accurate. The thinking was just that if we do have to differentiate
> callers now anyway, we might as well make the interface a bit more
> self-documenting and harder to misuse going forward, even if it's a
> bit more churn now.
>
>
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection
2023-12-04 8:30 ` Chengming Zhou
@ 2023-12-04 17:48 ` Nhat Pham
2023-12-05 2:28 ` Chengming Zhou
0 siblings, 1 reply; 48+ messages in thread
From: Nhat Pham @ 2023-12-04 17:48 UTC (permalink / raw)
To: Chengming Zhou
Cc: Johannes Weiner, Matthew Wilcox, akpm, cerasuolodomenico,
yosryahmed, sjenning, ddstreet, vitaly.wool, mhocko,
roman.gushchin, shakeelb, muchun.song, chrisl, linux-mm,
kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
shuah
On Mon, Dec 4, 2023 at 12:30 AM Chengming Zhou <chengming.zhou@linux.dev> wrote:
>
> On 2023/12/1 04:35, Johannes Weiner wrote:
> > On Thu, Nov 30, 2023 at 12:07:41PM -0800, Nhat Pham wrote:
> >> On Thu, Nov 30, 2023 at 11:57 AM Matthew Wilcox <willy@infradead.org> wrote:
> >>>
> >>> On Thu, Nov 30, 2023 at 11:40:18AM -0800, Nhat Pham wrote:
> >>>> This patch changes list_lru interface so that the caller must explicitly
> >>>> specify numa node and memcg when adding and removing objects. The old
> >>>> list_lru_add() and list_lru_del() are renamed to list_lru_add_obj() and
> >>>> list_lru_del_obj(), respectively.
> >>>
> >>> Wouldn't it be better to add list_lru_add_memcg() and
> >>> list_lru_del_memcg() and have:
> >>>
> >>> +bool list_lru_del(struct list_lru *lru, struct list_head *item)
> >>> +{
> >>> + int nid = page_to_nid(virt_to_page(item));
> >>> + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> >>> + mem_cgroup_from_slab_obj(item) : NULL;
> >>> +
> >>> + return list_lru_del_memcg(lru, item, nid, memcg);
> >>> +}
> >>>
> >>> Seems like _most_ callers will want the original versions and only
> >>> a few will want the explicit memcg/nid versions. No?
> >>>
> >>
> >> I actually did something along that line in earlier iterations of this
> >> patch series (albeit with poorer naming - __list_lru_add() instead of
> >> list_lru_add_memcg()). The consensus after some back and forth was
> >> that the original list_lru_add() was not a very good design (the
> >> better one was this new version that allows for explicit numa/memcg
> >> selection). So I agreed to fix it everywhere as a prep patch.
> >>
> >> I don't have strong opinions here to be completely honest, but I do
> >> think this new API makes more sense (at the cost of quite a bit of
> >> elbow grease to fix every callsites and extra reviewing).
> >
> > Maybe I can shed some light since I was pushing for doing it this way.
> >
> > The quiet assumption that 'struct list_head *item' is (embedded in) a
> > slab object that is also charged to a cgroup is a bit much, given that
> > nothing in the name or documentation of the function points to that.
> >
> > It bit us in the THP shrinker where that list head is embedded in a
> > tailpage (virt_to_page(page) is fun to debug). And it caused some
> > confusion in this case as well, where the zswap entry is a slab object
> > but not charged (the entry descriptor is not attractive for cgroup
> > accounting, only the backing memory it points to.)
>
> Hi,
>
> I have a question, maybe I missed something since I haven't read all
> the earlier versions.
>
> IIUC, the problem here is that "zswap_entry" has different memcg and node
> than the "page", so I wonder if we can just charge "zswap_entry" to the
> same memcg of the "page".
>
> Like we can do these when allocating the "zswap_entry":
>
> old_memcg = set_active_memcg(memcg)
> kmem_cache_alloc_lru(zswap_entry_cache, lru, gfp)
> set_active_memcg(old_memcg)
>
> The good points are:
>
> 1. "zswap_entry" is charged to the memcg of "page", which is more sensible?
>
> 2. We can reuse the kmem_cache_alloc_lru() interface, which makes code simpler
> since we don't need to manage list_lru_memcg by ourselves.
>
> 3. Maybe the new list_lru_add() and list_lru_del() are not needed anymore?
> Since the "zswap_entry" is of the same memcg and node with the "page".
> But don't know if THP shrinker still need it.
>
> Thanks!
That idea was considered in earlier iterations/discussions of the
patch series as well. Charging things is not free - there is an
overhead associated with it, which is why we are usually selective
about whether to charge something. We were not super keen to do this
for zswap_entry just to plumb around the list_lru's restriction. Might
as well pay the price of extending the list_lru interface now.
If in the future, not charging the zswap entry causes a separate
isolation issue, we could revisit this decision and charge it.
Otherwise, IMHO we should just stick with this for now.
>
> >
> > Yes, for most users - at least right now - the current assumption is
> > accurate. The thinking was just that if we do have to differentiate
> > callers now anyway, we might as well make the interface a bit more
> > self-documenting and harder to misuse going forward, even if it's a
> > bit more churn now.
> >
> >
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection
2023-11-30 20:35 ` Johannes Weiner
2023-12-04 8:30 ` Chengming Zhou
@ 2023-12-05 0:30 ` Chris Li
2023-12-05 17:17 ` Johannes Weiner
1 sibling, 1 reply; 48+ messages in thread
From: Chris Li @ 2023-12-05 0:30 UTC (permalink / raw)
To: Johannes Weiner
Cc: Nhat Pham, Matthew Wilcox, akpm, cerasuolodomenico, yosryahmed,
sjenning, ddstreet, vitaly.wool, mhocko, roman.gushchin, shakeelb,
muchun.song, linux-mm, kernel-team, linux-kernel, cgroups,
linux-doc, linux-kselftest, shuah
On Thu, Nov 30, 2023 at 12:35 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Thu, Nov 30, 2023 at 12:07:41PM -0800, Nhat Pham wrote:
> > On Thu, Nov 30, 2023 at 11:57 AM Matthew Wilcox <willy@infradead.org> wrote:
> > >
> > > On Thu, Nov 30, 2023 at 11:40:18AM -0800, Nhat Pham wrote:
> > > > This patch changes list_lru interface so that the caller must explicitly
> > > > specify numa node and memcg when adding and removing objects. The old
> > > > list_lru_add() and list_lru_del() are renamed to list_lru_add_obj() and
> > > > list_lru_del_obj(), respectively.
> > >
> > > Wouldn't it be better to add list_lru_add_memcg() and
> > > list_lru_del_memcg() and have:
That is my first thought as well. If we are going to have two different
flavors of LRU add, one with memcg and one without, then list_lru_add()
vs list_lru_add_memcg() is the common way to do it.
> > >
> > > +bool list_lru_del(struct list_lru *lru, struct list_head *item)
> > > +{
> > > + int nid = page_to_nid(virt_to_page(item));
> > > + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> > > + mem_cgroup_from_slab_obj(item) : NULL;
> > > +
> > > + return list_lru_del_memcg(lru, item, nid, memcg);
> > > +}
> > >
> > > Seems like _most_ callers will want the original versions and only
> > > a few will want the explicit memcg/nid versions. No?
> > >
> >
> > I actually did something along that line in earlier iterations of this
> > patch series (albeit with poorer naming - __list_lru_add() instead of
> > list_lru_add_memcg()). The consensus after some back and forth was
> > that the original list_lru_add() was not a very good design (the
> > better one was this new version that allows for explicit numa/memcg
> > selection). So I agreed to fix it everywhere as a prep patch.
> >
> > I don't have strong opinions here to be completely honest, but I do
> > think this new API makes more sense (at the cost of quite a bit of
> > elbow grease to fix every callsites and extra reviewing).
>
> Maybe I can shed some light since I was pushing for doing it this way.
>
> The quiet assumption that 'struct list_head *item' is (embedded in) a
> slab object that is also charged to a cgroup is a bit much, given that
> nothing in the name or documentation of the function points to that.
We can add it to the document if that is desirable.
>
> It bit us in the THP shrinker where that list head is embedded in a
> tailpage (virt_to_page(page) is fun to debug). And it caused some
> confusion in this case as well, where the zswap entry is a slab object
> but not charged (the entry descriptor is not attractive for cgroup
> accounting, only the backing memory it points to.)
>
> Yes, for most users - at least right now - the current assumption is
> accurate. The thinking was just that if we do have to differentiate
> callers now anyway, we might as well make the interface a bit more
> self-documenting and harder to misuse going forward, even if it's a
> bit more churn now.
It comes down to whether we need to have the non-memcg version of the API
going forward. If we don't, then changing the meaning of list_lru_add()
to perform the deed of list_lru_add_memcg() makes sense. My assumption
is that the non-memcg version of the API does have legit usage. In
that case, it seems more natural to keep the original list_lru_add() and
add list_lru_add_memcg(), as Matthew suggested. What you really want is
that every caller of list_lru_add() should seriously consider switching
to list_lru_add_memcg() unless it has a very good reason to stay with
the non-memcg version. Renaming and changing the meaning of
list_lru_add() is a bit confusing and has a negative impact on
outstanding patches that use list_lru_add(). At the end of the day,
some developers still need to evaluate the call sites one by one;
renaming the function is not going to help that effort. Just make it
more obvious.
Just my 2 cents, others please chime in. Just to make it clear, that
is my preference, it is not a NACK.
Chris
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 2/6] memcontrol: implement mem_cgroup_tryget_online()
2023-11-30 19:40 ` [PATCH v8 2/6] memcontrol: implement mem_cgroup_tryget_online() Nhat Pham
@ 2023-12-05 0:35 ` Chris Li
2023-12-05 1:39 ` Nhat Pham
2023-12-05 18:02 ` Yosry Ahmed
1 sibling, 1 reply; 48+ messages in thread
From: Chris Li @ 2023-12-05 0:35 UTC (permalink / raw)
To: Nhat Pham
Cc: akpm, hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
Hi Nhat,
Very minor nitpick. This patch can be folded into the later patch that
uses it. That makes the review easier: no need to cross-reference
different patches. It will also make it harder to introduce an API that
nobody uses.
Chris
On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham <nphamcs@gmail.com> wrote:
>
> This patch implements a helper function that try to get a reference to
> an memcg's css, as well as checking if it is online. This new function
> is almost exactly the same as the existing mem_cgroup_tryget(), except
> for the onlineness check. In the !CONFIG_MEMCG case, it always returns
> true, analogous to mem_cgroup_tryget(). This is useful for e.g to the
> new zswap writeback scheme, where we need to select the next online
> memcg as a candidate for the global limit reclaim.
>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---
> include/linux/memcontrol.h | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 7bdcf3020d7a..2bd7d14ace78 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -821,6 +821,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
> return !memcg || css_tryget(&memcg->css);
> }
>
> +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
> +{
> + return !memcg || css_tryget_online(&memcg->css);
> +}
> +
> static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> {
> if (memcg)
> @@ -1349,6 +1354,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
> return true;
> }
>
> +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
> +{
> + return true;
> +}
> +
> static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> {
> }
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 2/6] memcontrol: implement mem_cgroup_tryget_online()
2023-12-05 0:35 ` Chris Li
@ 2023-12-05 1:39 ` Nhat Pham
2023-12-06 0:16 ` Chris Li
0 siblings, 1 reply; 48+ messages in thread
From: Nhat Pham @ 2023-12-05 1:39 UTC (permalink / raw)
To: Chris Li
Cc: akpm, hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
On Mon, Dec 4, 2023 at 4:36 PM Chris Li <chrisl@kernel.org> wrote:
>
> Hi Nhat,
>
> Very minor nitpick. This patch can fold with the later patch that uses
> it. That makes the review easier, no need to cross reference different
> patches. It will also make it harder to introduce API that nobody
> uses.
>
> Chris
>
> On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > This patch implements a helper function that try to get a reference to
> > an memcg's css, as well as checking if it is online. This new function
> > is almost exactly the same as the existing mem_cgroup_tryget(), except
> > for the onlineness check. In the !CONFIG_MEMCG case, it always returns
> > true, analogous to mem_cgroup_tryget(). This is useful for e.g to the
> > new zswap writeback scheme, where we need to select the next online
> > memcg as a candidate for the global limit reclaim.
>
> Very minor nitpick. This patch can fold with the later patch that uses
> it. That makes the review easier, no need to cross reference different
> patches. It will also make it harder to introduce API that nobody
> uses.
I don't have a strong preference one way or the other :) Probably not
worth the churn tho.
>
> Chris
>
> >
> > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > ---
> > include/linux/memcontrol.h | 10 ++++++++++
> > 1 file changed, 10 insertions(+)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 7bdcf3020d7a..2bd7d14ace78 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -821,6 +821,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
> > return !memcg || css_tryget(&memcg->css);
> > }
> >
> > +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
> > +{
> > + return !memcg || css_tryget_online(&memcg->css);
> > +}
> > +
> > static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> > {
> > if (memcg)
> > @@ -1349,6 +1354,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
> > return true;
> > }
> >
> > +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
> > +{
> > + return true;
> > +}
> > +
> > static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> > {
> > }
> > --
> > 2.34.1
> >
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection
2023-12-04 17:48 ` Nhat Pham
@ 2023-12-05 2:28 ` Chengming Zhou
0 siblings, 0 replies; 48+ messages in thread
From: Chengming Zhou @ 2023-12-05 2:28 UTC (permalink / raw)
To: Nhat Pham
Cc: Johannes Weiner, Matthew Wilcox, akpm, cerasuolodomenico,
yosryahmed, sjenning, ddstreet, vitaly.wool, mhocko,
roman.gushchin, shakeelb, muchun.song, chrisl, linux-mm,
kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
shuah
On 2023/12/5 01:48, Nhat Pham wrote:
> On Mon, Dec 4, 2023 at 12:30 AM Chengming Zhou <chengming.zhou@linux.dev> wrote:
>>
>> On 2023/12/1 04:35, Johannes Weiner wrote:
>>> On Thu, Nov 30, 2023 at 12:07:41PM -0800, Nhat Pham wrote:
>>>> On Thu, Nov 30, 2023 at 11:57 AM Matthew Wilcox <willy@infradead.org> wrote:
>>>>>
>>>>> On Thu, Nov 30, 2023 at 11:40:18AM -0800, Nhat Pham wrote:
>>>>>> This patch changes list_lru interface so that the caller must explicitly
>>>>>> specify numa node and memcg when adding and removing objects. The old
>>>>>> list_lru_add() and list_lru_del() are renamed to list_lru_add_obj() and
>>>>>> list_lru_del_obj(), respectively.
>>>>>
>>>>> Wouldn't it be better to add list_lru_add_memcg() and
>>>>> list_lru_del_memcg() and have:
>>>>>
>>>>> +bool list_lru_del(struct list_lru *lru, struct list_head *item)
>>>>> +{
>>>>> + int nid = page_to_nid(virt_to_page(item));
>>>>> + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
>>>>> + mem_cgroup_from_slab_obj(item) : NULL;
>>>>> +
>>>>> + return list_lru_del_memcg(lru, item, nid, memcg);
>>>>> +}
>>>>>
>>>>> Seems like _most_ callers will want the original versions and only
>>>>> a few will want the explicit memcg/nid versions. No?
>>>>>
>>>>
>>>> I actually did something along that line in earlier iterations of this
>>>> patch series (albeit with poorer naming - __list_lru_add() instead of
>>>> list_lru_add_memcg()). The consensus after some back and forth was
>>>> that the original list_lru_add() was not a very good design (the
>>>> better one was this new version that allows for explicit numa/memcg
>>>> selection). So I agreed to fix it everywhere as a prep patch.
>>>>
>>>> I don't have strong opinions here to be completely honest, but I do
>>>> think this new API makes more sense (at the cost of quite a bit of
>>>> elbow grease to fix every callsites and extra reviewing).
>>>
>>> Maybe I can shed some light since I was pushing for doing it this way.
>>>
>>> The quiet assumption that 'struct list_head *item' is (embedded in) a
>>> slab object that is also charged to a cgroup is a bit much, given that
>>> nothing in the name or documentation of the function points to that.
>>>
>>> It bit us in the THP shrinker where that list head is embedded in a
>>> tailpage (virt_to_page(page) is fun to debug). And it caused some
>>> confusion in this case as well, where the zswap entry is a slab object
>>> but not charged (the entry descriptor is not attractive for cgroup
>>> accounting, only the backing memory it points to.)
>>
>> Hi,
>>
>> I have a question, maybe I missed something since I haven't read all
>> the earlier versions.
>>
>> IIUC, the problem here is that "zswap_entry" has different memcg and node
>> than the "page", so I wonder if we can just charge "zswap_entry" to the
>> same memcg of the "page".
>>
>> Like we can do these when allocating the "zswap_entry":
>>
>> old_memcg = set_active_memcg(memcg)
>> kmem_cache_alloc_lru(zswap_entry_cache, lru, gfp)
>> set_active_memcg(old_memcg)
>>
>> The good points are:
>>
>> 1. "zswap_entry" is charged to the memcg of "page", which is more sensible?
>>
>> 2. We can reuse the kmem_cache_alloc_lru() interface, which makes code simpler
>> since we don't need to manage list_lru_memcg by ourselves.
>>
>> 3. Maybe the new list_lru_add() and list_lru_del() are not needed anymore?
>> Since the "zswap_entry" is of the same memcg and node with the "page".
>> But don't know if THP shrinker still need it.
>>
>> Thanks!
>
> That idea was considered in earlier iterations/discussions of the
> patch series as well. Charging things is not free - there is an
> overhead associated with it, which is why we are usually selective
> about whether to charge something. We were not super keen to do this
> for zswap_entry just to plumb around the list_lru's restriction. Might
> as well pay the price of extending the list_lru interface now.
>
> If in the future, not charging the zswap entry causes a separate
> isolation issue, we could revisit this decision and charge it.
> Otherwise, IMHO we should just stick with this for now.
>
Ok, I get it. Thanks much for your clear explanation!
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection
2023-12-05 0:30 ` Chris Li
@ 2023-12-05 17:17 ` Johannes Weiner
0 siblings, 0 replies; 48+ messages in thread
From: Johannes Weiner @ 2023-12-05 17:17 UTC (permalink / raw)
To: Chris Li
Cc: Nhat Pham, Matthew Wilcox, akpm, cerasuolodomenico, yosryahmed,
sjenning, ddstreet, vitaly.wool, mhocko, roman.gushchin, shakeelb,
muchun.song, linux-mm, kernel-team, linux-kernel, cgroups,
linux-doc, linux-kselftest, shuah
On Mon, Dec 04, 2023 at 04:30:44PM -0800, Chris Li wrote:
> On Thu, Nov 30, 2023 at 12:35 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > On Thu, Nov 30, 2023 at 12:07:41PM -0800, Nhat Pham wrote:
> > > On Thu, Nov 30, 2023 at 11:57 AM Matthew Wilcox <willy@infradead.org> wrote:
> > > >
> > > > On Thu, Nov 30, 2023 at 11:40:18AM -0800, Nhat Pham wrote:
> > > > > This patch changes list_lru interface so that the caller must explicitly
> > > > > specify numa node and memcg when adding and removing objects. The old
> > > > > list_lru_add() and list_lru_del() are renamed to list_lru_add_obj() and
> > > > > list_lru_del_obj(), respectively.
> > > >
> > > > Wouldn't it be better to add list_lru_add_memcg() and
> > > > list_lru_del_memcg() and have:
>
> That is my first thought as well. If we are having two different
> flavors of LRU add, one has memcg and one without. The list_lru_add()
> vs list_lru_add_memcg() is the common way to do it.
> > > >
> > > > +bool list_lru_del(struct list_lru *lru, struct list_head *item)
> > > > +{
> > > > + int nid = page_to_nid(virt_to_page(item));
> > > > + struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> > > > + mem_cgroup_from_slab_obj(item) : NULL;
> > > > +
> > > > + return list_lru_del_memcg(lru, item, nid, memcg);
> > > > +}
> > > >
> > > > Seems like _most_ callers will want the original versions and only
> > > > a few will want the explicit memcg/nid versions. No?
> > > >
> > >
> > > I actually did something along that line in earlier iterations of this
> > > patch series (albeit with poorer naming - __list_lru_add() instead of
> > > list_lru_add_memcg()). The consensus after some back and forth was
> > > that the original list_lru_add() was not a very good design (the
> > > better one was this new version that allows for explicit numa/memcg
> > > selection). So I agreed to fix it everywhere as a prep patch.
> > >
> > > I don't have strong opinions here to be completely honest, but I do
> > > think this new API makes more sense (at the cost of quite a bit of
> > > elbow grease to fix every callsites and extra reviewing).
> >
> > Maybe I can shed some light since I was pushing for doing it this way.
> >
> > The quiet assumption that 'struct list_head *item' is (embedded in) a
> > slab object that is also charged to a cgroup is a bit much, given that
> > nothing in the name or documentation of the function points to that.
>
> We can add it to the document if that is desirable.
It would help, but it still violates the "easy to use, hard to misuse"
principle. And I think it does the API layering backwards.
list_lru_add() is the "default" API function. It makes sense to keep
that simple and robust, then add convenience wrappers for
additional, specialized functionality like memcg lookups for charged
slab objects - even if that's a common usecase.
It's better for a new user to be paused by the required memcg argument
in the default function and then go and find list_lru_add_obj(), than
it is for somebody to quietly pass an invalid object to list_lru_add()
and have subtle runtime problems and crashes (which has happened twice
now already).
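To make that layering concrete, a rough sketch mirroring Matthew's del()
snippet above (treat the helper names as assumptions of this sketch, not as
the exact code in the series):

/* Base API: the caller states where the object is accounted. */
bool list_lru_add(struct list_lru *lru, struct list_head *item,
		  int nid, struct mem_cgroup *memcg);

/*
 * Convenience wrapper for the common case of a memcg-charged slab object:
 * derive nid/memcg from the object itself. Only valid when
 * virt_to_page()/mem_cgroup_from_slab_obj() make sense for 'item'.
 */
static inline bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
{
	int nid = page_to_nid(virt_to_page(item));
	struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
				   mem_cgroup_from_slab_obj(item) : NULL;

	return list_lru_add(lru, item, nid, memcg);
}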
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 2/6] memcontrol: implement mem_cgroup_tryget_online()
2023-11-30 19:40 ` [PATCH v8 2/6] memcontrol: implement mem_cgroup_tryget_online() Nhat Pham
2023-12-05 0:35 ` Chris Li
@ 2023-12-05 18:02 ` Yosry Ahmed
2023-12-05 19:55 ` Nhat Pham
1 sibling, 1 reply; 48+ messages in thread
From: Yosry Ahmed @ 2023-12-05 18:02 UTC (permalink / raw)
To: Nhat Pham
Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
mhocko, roman.gushchin, shakeelb, muchun.song, chrisl, linux-mm,
kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
shuah
On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham <nphamcs@gmail.com> wrote:
>
> This patch implements a helper function that try to get a reference to
> an memcg's css, as well as checking if it is online. This new function
> is almost exactly the same as the existing mem_cgroup_tryget(), except
> for the onlineness check. In the !CONFIG_MEMCG case, it always returns
> true, analogous to mem_cgroup_tryget(). This is useful for e.g to the
> new zswap writeback scheme, where we need to select the next online
> memcg as a candidate for the global limit reclaim.
>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
> ---
> include/linux/memcontrol.h | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 7bdcf3020d7a..2bd7d14ace78 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -821,6 +821,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
> return !memcg || css_tryget(&memcg->css);
> }
>
> +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
> +{
> + return !memcg || css_tryget_online(&memcg->css);
> +}
> +
> static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> {
> if (memcg)
> @@ -1349,6 +1354,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
> return true;
> }
>
> +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
> +{
> + return true;
> +}
> +
> static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> {
> }
> --
> 2.34.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 3/6] zswap: make shrinking memcg-aware
2023-11-30 19:40 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware Nhat Pham
@ 2023-12-05 18:20 ` Yosry Ahmed
2023-12-05 18:49 ` Nhat Pham
2023-12-05 19:54 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware (fix) Nhat Pham
` (3 subsequent siblings)
4 siblings, 1 reply; 48+ messages in thread
From: Yosry Ahmed @ 2023-12-05 18:20 UTC (permalink / raw)
To: Nhat Pham
Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
mhocko, roman.gushchin, shakeelb, muchun.song, chrisl, linux-mm,
kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
shuah
On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham <nphamcs@gmail.com> wrote:
>
> From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
>
> Currently, we only have a single global LRU for zswap. This makes it
> impossible to perform worload-specific shrinking - an memcg cannot
> determine which pages in the pool it owns, and often ends up writing
> pages from other memcgs. This issue has been previously observed in
> practice and mitigated by simply disabling memcg-initiated shrinking:
>
> https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
>
> This patch fully resolves the issue by replacing the global zswap LRU
> with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
>
> a) When a store attempt hits an memcg limit, it now triggers a
> synchronous reclaim attempt that, if successful, allows the new
> hotter page to be accepted by zswap.
> b) If the store attempt instead hits the global zswap limit, it will
> trigger an asynchronous reclaim attempt, in which an memcg is
> selected for reclaim in a round-robin-like fashion.
>
> Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> Co-developed-by: Nhat Pham <nphamcs@gmail.com>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---
> include/linux/memcontrol.h | 5 +
> include/linux/zswap.h | 2 +
> mm/memcontrol.c | 2 +
> mm/swap.h | 3 +-
> mm/swap_state.c | 24 +++-
> mm/zswap.c | 269 +++++++++++++++++++++++++++++--------
> 6 files changed, 245 insertions(+), 60 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 2bd7d14ace78..a308c8eacf20 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1192,6 +1192,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> return NULL;
> }
>
> +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> +{
> + return NULL;
> +}
> +
> static inline bool folio_memcg_kmem(struct folio *folio)
> {
> return false;
> diff --git a/include/linux/zswap.h b/include/linux/zswap.h
> index 2a60ce39cfde..e571e393669b 100644
> --- a/include/linux/zswap.h
> +++ b/include/linux/zswap.h
> @@ -15,6 +15,7 @@ bool zswap_load(struct folio *folio);
> void zswap_invalidate(int type, pgoff_t offset);
> void zswap_swapon(int type);
> void zswap_swapoff(int type);
> +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
>
> #else
>
> @@ -31,6 +32,7 @@ static inline bool zswap_load(struct folio *folio)
> static inline void zswap_invalidate(int type, pgoff_t offset) {}
> static inline void zswap_swapon(int type) {}
> static inline void zswap_swapoff(int type) {}
> +static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
>
> #endif
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 470821d1ba1a..792ca21c5815 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5614,6 +5614,8 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
> page_counter_set_min(&memcg->memory, 0);
> page_counter_set_low(&memcg->memory, 0);
>
> + zswap_memcg_offline_cleanup(memcg);
> +
> memcg_offline_kmem(memcg);
> reparent_shrinker_deferred(memcg);
> wb_memcg_offline(memcg);
> diff --git a/mm/swap.h b/mm/swap.h
> index 73c332ee4d91..c0dc73e10e91 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -51,7 +51,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct swap_iocb **plug);
> struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct mempolicy *mpol, pgoff_t ilx,
> - bool *new_page_allocated);
> + bool *new_page_allocated,
> + bool skip_if_exists);
> struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> struct mempolicy *mpol, pgoff_t ilx);
> struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 85d9e5806a6a..6c84236382f3 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -412,7 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
>
> struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct mempolicy *mpol, pgoff_t ilx,
> - bool *new_page_allocated)
> + bool *new_page_allocated,
> + bool skip_if_exists)
> {
> struct swap_info_struct *si;
> struct folio *folio;
> @@ -470,6 +471,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> if (err != -EEXIST)
> goto fail_put_swap;
>
> + /*
> + * Protect against a recursive call to __read_swap_cache_async()
> + * on the same entry waiting forever here because SWAP_HAS_CACHE
> + * is set but the folio is not the swap cache yet. This can
> + * happen today if mem_cgroup_swapin_charge_folio() below
> + * triggers reclaim through zswap, which may call
> + * __read_swap_cache_async() in the writeback path.
> + */
> + if (skip_if_exists)
> + goto fail_put_swap;
> +
> /*
> * We might race against __delete_from_swap_cache(), and
> * stumble across a swap_map entry whose SWAP_HAS_CACHE
> @@ -537,7 +549,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>
> mpol = get_vma_policy(vma, addr, 0, &ilx);
> page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> - &page_allocated);
> + &page_allocated, false);
> mpol_cond_put(mpol);
>
> if (page_allocated)
> @@ -654,7 +666,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> /* Ok, do the async read-ahead now */
> page = __read_swap_cache_async(
> swp_entry(swp_type(entry), offset),
> - gfp_mask, mpol, ilx, &page_allocated);
> + gfp_mask, mpol, ilx, &page_allocated, false);
> if (!page)
> continue;
> if (page_allocated) {
> @@ -672,7 +684,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> skip:
> /* The page was likely read above, so no need for plugging here */
> page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> - &page_allocated);
> + &page_allocated, false);
> if (unlikely(page_allocated))
> swap_readpage(page, false, NULL);
> return page;
> @@ -827,7 +839,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> pte_unmap(pte);
> pte = NULL;
> page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> - &page_allocated);
> + &page_allocated, false);
> if (!page)
> continue;
> if (page_allocated) {
> @@ -847,7 +859,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> skip:
> /* The page was likely read above, so no need for plugging here */
> page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
> - &page_allocated);
> + &page_allocated, false);
> if (unlikely(page_allocated))
> swap_readpage(page, false, NULL);
> return page;
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 4bdb2d83bb0d..f323e45cbdc7 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -35,6 +35,7 @@
> #include <linux/writeback.h>
> #include <linux/pagemap.h>
> #include <linux/workqueue.h>
> +#include <linux/list_lru.h>
>
> #include "swap.h"
> #include "internal.h"
> @@ -174,8 +175,8 @@ struct zswap_pool {
> struct work_struct shrink_work;
> struct hlist_node node;
> char tfm_name[CRYPTO_MAX_ALG_NAME];
> - struct list_head lru;
> - spinlock_t lru_lock;
> + struct list_lru list_lru;
> + struct mem_cgroup *next_shrink;
> };
>
> /*
> @@ -291,15 +292,46 @@ static void zswap_update_total_size(void)
> zswap_pool_total_size = total;
> }
>
> +/* should be called under RCU */
nit: probably WARN_ON_ONCE(!rcu_read_lock_held()) or
RCU_LOCKDEP_WARN(!rcu_read_lock_held()) in the function body is
better?
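For illustration, a minimal sketch of that suggestion applied to the
CONFIG_MEMCG variant quoted below (just an example of the nit, not code from
the series; the warning string is made up):

	#ifdef CONFIG_MEMCG
	static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
	{
		/* callers must hold the RCU read lock across the objcg -> memcg lookup */
		RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
				 "mem_cgroup_from_entry() called outside RCU read section");

		return entry->objcg ? obj_cgroup_memcg(entry->objcg) : NULL;
	}
	#endif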
> +#ifdef CONFIG_MEMCG
> +static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
> +{
> + return entry->objcg ? obj_cgroup_memcg(entry->objcg) : NULL;
> +}
> +#else
> +static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
> +{
> + return NULL;
> +}
> +#endif
> +
> +static inline int entry_to_nid(struct zswap_entry *entry)
> +{
> + return page_to_nid(virt_to_page(entry));
> +}
> +
> +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
> +{
> + struct zswap_pool *pool;
> +
> + /* lock out zswap pools list modification */
> + spin_lock(&zswap_pools_lock);
> + list_for_each_entry(pool, &zswap_pools, list) {
> + if (pool->next_shrink == memcg)
> + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> + }
> + spin_unlock(&zswap_pools_lock);
> +}
> +
> /*********************************
> * zswap entry functions
> **********************************/
> static struct kmem_cache *zswap_entry_cache;
>
> -static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> +static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
> {
> struct zswap_entry *entry;
> - entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> + entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
> if (!entry)
> return NULL;
> entry->refcount = 1;
> @@ -312,6 +344,61 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> kmem_cache_free(zswap_entry_cache, entry);
> }
>
> +/*********************************
> +* lru functions
> +**********************************/
> +static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> +{
> + int nid = entry_to_nid(entry);
> + struct mem_cgroup *memcg;
> +
> + /*
> + * Note that it is safe to use rcu_read_lock() here, even in the face of
> + * concurrent memcg offlining. Thanks to the memcg->kmemcg_id indirection
> + * used in list_lru lookup, only two scenarios are possible:
> + *
> + * 1. list_lru_add() is called before memcg->kmemcg_id is updated. The
> + * new entry will be reparented to memcg's parent's list_lru.
> + * 2. list_lru_add() is called after memcg->kmemcg_id is updated. The
> + * new entry will be added directly to memcg's parent's list_lru.
> + *
> + * Similar reasoning holds for list_lru_del() and list_lru_putback().
> + */
> + rcu_read_lock();
> + memcg = mem_cgroup_from_entry(entry);
> + /* will always succeed */
> + list_lru_add(list_lru, &entry->lru, nid, memcg);
> + rcu_read_unlock();
> +}
> +
> +static void zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> +{
> + int nid = entry_to_nid(entry);
> + struct mem_cgroup *memcg;
> +
> + rcu_read_lock();
> + memcg = mem_cgroup_from_entry(entry);
> + /* will always succeed */
> + list_lru_del(list_lru, &entry->lru, nid, memcg);
> + rcu_read_unlock();
> +}
> +
> +static void zswap_lru_putback(struct list_lru *list_lru,
> + struct zswap_entry *entry)
> +{
> + int nid = entry_to_nid(entry);
> + spinlock_t *lock = &list_lru->node[nid].lock;
> + struct mem_cgroup *memcg;
> +
> + rcu_read_lock();
> + memcg = mem_cgroup_from_entry(entry);
> + spin_lock(lock);
> + /* we cannot use list_lru_add here, because it increments node's lru count */
> + list_lru_putback(list_lru, &entry->lru, nid, memcg);
> + spin_unlock(lock);
> + rcu_read_unlock();
> +}
> +
> /*********************************
> * rbtree functions
> **********************************/
> @@ -396,9 +483,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> if (!entry->length)
> atomic_dec(&zswap_same_filled_pages);
> else {
> - spin_lock(&entry->pool->lru_lock);
> - list_del(&entry->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + zswap_lru_del(&entry->pool->list_lru, entry);
> zpool_free(zswap_find_zpool(entry), entry->handle);
> zswap_pool_put(entry->pool);
> }
> @@ -632,21 +717,15 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
> zswap_entry_put(tree, entry);
> }
>
> -static int zswap_reclaim_entry(struct zswap_pool *pool)
> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> + spinlock_t *lock, void *arg)
> {
> - struct zswap_entry *entry;
> + struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> struct zswap_tree *tree;
> pgoff_t swpoffset;
> - int ret;
> + enum lru_status ret = LRU_REMOVED_RETRY;
> + int writeback_result;
>
> - /* Get an entry off the LRU */
> - spin_lock(&pool->lru_lock);
> - if (list_empty(&pool->lru)) {
> - spin_unlock(&pool->lru_lock);
> - return -EINVAL;
> - }
> - entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> - list_del_init(&entry->lru);
> /*
> * Once the lru lock is dropped, the entry might get freed. The
> * swpoffset is copied to the stack, and entry isn't deref'd again
> @@ -654,28 +733,32 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> */
> swpoffset = swp_offset(entry->swpentry);
> tree = zswap_trees[swp_type(entry->swpentry)];
> - spin_unlock(&pool->lru_lock);
> + list_lru_isolate(l, item);
> + /*
> + * It's safe to drop the lock here because we return either
> + * LRU_REMOVED_RETRY or LRU_RETRY.
> + */
> + spin_unlock(lock);
>
> /* Check for invalidate() race */
> spin_lock(&tree->lock);
> - if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> - ret = -EAGAIN;
> + if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
> goto unlock;
> - }
> +
> /* Hold a reference to prevent a free during writeback */
> zswap_entry_get(entry);
> spin_unlock(&tree->lock);
>
> - ret = zswap_writeback_entry(entry, tree);
> + writeback_result = zswap_writeback_entry(entry, tree);
>
> spin_lock(&tree->lock);
> - if (ret) {
> - /* Writeback failed, put entry back on LRU */
> - spin_lock(&pool->lru_lock);
> - list_move(&entry->lru, &pool->lru);
> - spin_unlock(&pool->lru_lock);
> + if (writeback_result) {
> + zswap_reject_reclaim_fail++;
> + zswap_lru_putback(&entry->pool->list_lru, entry);
> + ret = LRU_RETRY;
> goto put_unlock;
> }
> + zswap_written_back_pages++;
>
> /*
> * Writeback started successfully, the page now belongs to the
> @@ -689,27 +772,93 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> zswap_entry_put(tree, entry);
> unlock:
> spin_unlock(&tree->lock);
> - return ret ? -EAGAIN : 0;
> + spin_lock(lock);
> + return ret;
> +}
> +
> +static int shrink_memcg(struct mem_cgroup *memcg)
> +{
> + struct zswap_pool *pool;
> + int nid, shrunk = 0;
> +
> + /*
> + * Skip zombies because their LRUs are reparented and we would be
> + * reclaiming from the parent instead of the dead memcg.
> + */
> + if (memcg && !mem_cgroup_online(memcg))
> + return -ENOENT;
> +
> + pool = zswap_pool_current_get();
> + if (!pool)
> + return -EINVAL;
> +
> + for_each_node_state(nid, N_NORMAL_MEMORY) {
> + unsigned long nr_to_walk = 1;
> +
> + shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg,
> + &shrink_memcg_cb, NULL, &nr_to_walk);
> + }
> + zswap_pool_put(pool);
> + return shrunk ? 0 : -EAGAIN;
> }
>
> static void shrink_worker(struct work_struct *w)
> {
> struct zswap_pool *pool = container_of(w, typeof(*pool),
> shrink_work);
> + struct mem_cgroup *memcg;
> int ret, failures = 0;
>
> + /* global reclaim will select cgroup in a round-robin fashion. */
> do {
> - ret = zswap_reclaim_entry(pool);
> - if (ret) {
> - zswap_reject_reclaim_fail++;
> - if (ret != -EAGAIN)
> + spin_lock(&zswap_pools_lock);
> + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> + memcg = pool->next_shrink;
> +
> + /*
> + * We need to retry if we have gone through a full round trip, or if we
> + * got an offline memcg (or else we risk undoing the effect of the
> + * zswap memcg offlining cleanup callback). This is not catastrophic
> + * per se, but it will keep the now offlined memcg hostage for a while.
> + *
> + * Note that if we got an online memcg, we will keep the extra
> + * reference in case the original reference obtained by mem_cgroup_iter
> + * is dropped by the zswap memcg offlining callback, ensuring that the
> + * memcg is not killed when we are reclaiming.
> + */
> + if (!memcg) {
> + spin_unlock(&zswap_pools_lock);
> + if (++failures == MAX_RECLAIM_RETRIES)
> break;
> +
> + goto resched;
> + }
> +
> + if (!mem_cgroup_online(memcg)) {
> + /* drop the reference from mem_cgroup_iter() */
> + mem_cgroup_put(memcg);
Probably better to use mem_cgroup_iter_break() here?
Also, I don't see mem_cgroup_tryget_online() being used here (where I
expected it to be used), did I miss it?
> + pool->next_shrink = NULL;
> + spin_unlock(&zswap_pools_lock);
> +
> if (++failures == MAX_RECLAIM_RETRIES)
> break;
> +
> + goto resched;
> }
> + spin_unlock(&zswap_pools_lock);
> +
> + ret = shrink_memcg(memcg);
We just checked for online-ness above, and then shrink_memcg() checks
it again. Is this intentional?
> + /* drop the extra reference */
Where does the extra reference come from?
> + mem_cgroup_put(memcg);
> +
> + if (ret == -EINVAL)
> + break;
> + if (ret && ++failures == MAX_RECLAIM_RETRIES)
> + break;
> +
> +resched:
> cond_resched();
> } while (!zswap_can_accept());
> - zswap_pool_put(pool);
> }
>
> static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> @@ -767,8 +916,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> */
> kref_init(&pool->kref);
> INIT_LIST_HEAD(&pool->list);
> - INIT_LIST_HEAD(&pool->lru);
> - spin_lock_init(&pool->lru_lock);
> + list_lru_init_memcg(&pool->list_lru, NULL);
> INIT_WORK(&pool->shrink_work, shrink_worker);
>
> zswap_pool_debug("created", pool);
> @@ -834,6 +982,13 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
>
> cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> free_percpu(pool->acomp_ctx);
> + list_lru_destroy(&pool->list_lru);
> +
> + spin_lock(&zswap_pools_lock);
> + mem_cgroup_put(pool->next_shrink);
> + pool->next_shrink = NULL;
> + spin_unlock(&zswap_pools_lock);
> +
> for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> zpool_destroy_pool(pool->zpools[i]);
> kfree(pool);
> @@ -1081,7 +1236,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> /* try to allocate swap cache page */
> mpol = get_task_policy(current);
> page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
> - NO_INTERLEAVE_INDEX, &page_was_allocated);
> + NO_INTERLEAVE_INDEX, &page_was_allocated, true);
> if (!page) {
> ret = -ENOMEM;
> goto fail;
> @@ -1152,7 +1307,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> /* start writeback */
> __swap_writepage(page, &wbc);
> put_page(page);
> - zswap_written_back_pages++;
>
> return ret;
>
> @@ -1209,6 +1363,7 @@ bool zswap_store(struct folio *folio)
> struct scatterlist input, output;
> struct crypto_acomp_ctx *acomp_ctx;
> struct obj_cgroup *objcg = NULL;
> + struct mem_cgroup *memcg = NULL;
> struct zswap_pool *pool;
> struct zpool *zpool;
> unsigned int dlen = PAGE_SIZE;
> @@ -1240,15 +1395,15 @@ bool zswap_store(struct folio *folio)
> zswap_invalidate_entry(tree, dupentry);
> }
> spin_unlock(&tree->lock);
> -
> - /*
> - * XXX: zswap reclaim does not work with cgroups yet. Without a
> - * cgroup-aware entry LRU, we will push out entries system-wide based on
> - * local cgroup limits.
> - */
> objcg = get_obj_cgroup_from_folio(folio);
> - if (objcg && !obj_cgroup_may_zswap(objcg))
> - goto reject;
> + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> + memcg = get_mem_cgroup_from_objcg(objcg);
Do we need a reference here? IIUC, this is folio_memcg() and the folio
is locked, so folio_memcg() should remain stable, no?
Same for the call below.
> + if (shrink_memcg(memcg)) {
> + mem_cgroup_put(memcg);
> + goto reject;
> + }
> + mem_cgroup_put(memcg);
> + }
>
> /* reclaim space if needed */
> if (zswap_is_full()) {
> @@ -1265,7 +1420,7 @@ bool zswap_store(struct folio *folio)
> }
>
> /* allocate entry */
> - entry = zswap_entry_cache_alloc(GFP_KERNEL);
> + entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
> if (!entry) {
> zswap_reject_kmemcache_fail++;
> goto reject;
> @@ -1292,6 +1447,15 @@ bool zswap_store(struct folio *folio)
> if (!entry->pool)
> goto freepage;
>
> + if (objcg) {
> + memcg = get_mem_cgroup_from_objcg(objcg);
> + if (memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL)) {
> + mem_cgroup_put(memcg);
> + goto put_pool;
> + }
> + mem_cgroup_put(memcg);
> + }
> +
> /* compress */
> acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
>
> @@ -1370,9 +1534,8 @@ bool zswap_store(struct folio *folio)
> zswap_invalidate_entry(tree, dupentry);
> }
> if (entry->length) {
> - spin_lock(&entry->pool->lru_lock);
> - list_add(&entry->lru, &entry->pool->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + INIT_LIST_HEAD(&entry->lru);
> + zswap_lru_add(&entry->pool->list_lru, entry);
> }
> spin_unlock(&tree->lock);
>
> @@ -1385,6 +1548,7 @@ bool zswap_store(struct folio *folio)
>
> put_dstmem:
> mutex_unlock(acomp_ctx->mutex);
> +put_pool:
> zswap_pool_put(entry->pool);
> freepage:
> zswap_entry_cache_free(entry);
> @@ -1479,9 +1643,8 @@ bool zswap_load(struct folio *folio)
> zswap_invalidate_entry(tree, entry);
> folio_mark_dirty(folio);
> } else if (entry->length) {
> - spin_lock(&entry->pool->lru_lock);
> - list_move(&entry->lru, &entry->pool->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + zswap_lru_del(&entry->pool->list_lru, entry);
> + zswap_lru_add(&entry->pool->list_lru, entry);
> }
> zswap_entry_put(tree, entry);
> spin_unlock(&tree->lock);
> --
> 2.34.1
* Re: [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat
2023-11-30 19:40 ` [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat Nhat Pham
@ 2023-12-05 18:21 ` Yosry Ahmed
2023-12-05 18:56 ` Nhat Pham
2023-12-05 19:33 ` [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat (fix) Nhat Pham
1 sibling, 1 reply; 48+ messages in thread
From: Yosry Ahmed @ 2023-12-05 18:21 UTC (permalink / raw)
To: Nhat Pham
Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
mhocko, roman.gushchin, shakeelb, muchun.song, chrisl, linux-mm,
kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
shuah
On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham <nphamcs@gmail.com> wrote:
>
> From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
>
> Since zswap now writes back pages from memcg-specific LRUs, we now need a
> new stat to show the writeback count for each memcg.
>
> Suggested-by: Nhat Pham <nphamcs@gmail.com>
> Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---
> include/linux/vm_event_item.h | 1 +
> mm/memcontrol.c | 1 +
> mm/vmstat.c | 1 +
> mm/zswap.c | 4 ++++
> 4 files changed, 7 insertions(+)
>
> diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> index d1b847502f09..f4569ad98edf 100644
> --- a/include/linux/vm_event_item.h
> +++ b/include/linux/vm_event_item.h
> @@ -142,6 +142,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
> #ifdef CONFIG_ZSWAP
> ZSWPIN,
> ZSWPOUT,
> + ZSWP_WB,
I think you dismissed Johannes's comment from v7 about ZSWPWB and
"zswpwb" being more consistent with the existing events.
> #endif
> #ifdef CONFIG_X86
> DIRECT_MAP_LEVEL2_SPLIT,
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 792ca21c5815..21d79249c8b4 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -703,6 +703,7 @@ static const unsigned int memcg_vm_event_stat[] = {
> #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> ZSWPIN,
> ZSWPOUT,
> + ZSWP_WB,
> #endif
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> THP_FAULT_ALLOC,
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index afa5a38fcc9c..2249f85e4a87 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1401,6 +1401,7 @@ const char * const vmstat_text[] = {
> #ifdef CONFIG_ZSWAP
> "zswpin",
> "zswpout",
> + "zswp_wb",
> #endif
> #ifdef CONFIG_X86
> "direct_map_level2_splits",
> diff --git a/mm/zswap.c b/mm/zswap.c
> index f323e45cbdc7..49b79393e472 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -760,6 +760,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> }
> zswap_written_back_pages++;
>
> + if (entry->objcg)
> + count_objcg_event(entry->objcg, ZSWP_WB);
> +
> + count_vm_event(ZSWP_WB);
> /*
> * Writeback started successfully, the page now belongs to the
> * swapcache. Drop the entry from zswap - unless invalidate already
> --
> 2.34.1
* Re: [PATCH v8 3/6] zswap: make shrinking memcg-aware
2023-12-05 18:20 ` Yosry Ahmed
@ 2023-12-05 18:49 ` Nhat Pham
2023-12-05 18:59 ` Yosry Ahmed
0 siblings, 1 reply; 48+ messages in thread
From: Nhat Pham @ 2023-12-05 18:49 UTC (permalink / raw)
To: Yosry Ahmed
Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
mhocko, roman.gushchin, shakeelb, muchun.song, chrisl, linux-mm,
kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
shuah
On Tue, Dec 5, 2023 at 10:21 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> >
> > Currently, we only have a single global LRU for zswap. This makes it
> > impossible to perform workload-specific shrinking - an memcg cannot
> > determine which pages in the pool it owns, and often ends up writing
> > pages from other memcgs. This issue has been previously observed in
> > practice and mitigated by simply disabling memcg-initiated shrinking:
> >
> > https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
> >
> > This patch fully resolves the issue by replacing the global zswap LRU
> > with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
> >
> > a) When a store attempt hits an memcg limit, it now triggers a
> > synchronous reclaim attempt that, if successful, allows the new
> > hotter page to be accepted by zswap.
> > b) If the store attempt instead hits the global zswap limit, it will
> > trigger an asynchronous reclaim attempt, in which an memcg is
> > selected for reclaim in a round-robin-like fashion.
> >
> > Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> > Co-developed-by: Nhat Pham <nphamcs@gmail.com>
> > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > ---
> > include/linux/memcontrol.h | 5 +
> > include/linux/zswap.h | 2 +
> > mm/memcontrol.c | 2 +
> > mm/swap.h | 3 +-
> > mm/swap_state.c | 24 +++-
> > mm/zswap.c | 269 +++++++++++++++++++++++++++++--------
> > 6 files changed, 245 insertions(+), 60 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 2bd7d14ace78..a308c8eacf20 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -1192,6 +1192,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> > return NULL;
> > }
> >
> > +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> > +{
> > + return NULL;
> > +}
> > +
> > static inline bool folio_memcg_kmem(struct folio *folio)
> > {
> > return false;
> > diff --git a/include/linux/zswap.h b/include/linux/zswap.h
> > index 2a60ce39cfde..e571e393669b 100644
> > --- a/include/linux/zswap.h
> > +++ b/include/linux/zswap.h
> > @@ -15,6 +15,7 @@ bool zswap_load(struct folio *folio);
> > void zswap_invalidate(int type, pgoff_t offset);
> > void zswap_swapon(int type);
> > void zswap_swapoff(int type);
> > +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
> >
> > #else
> >
> > @@ -31,6 +32,7 @@ static inline bool zswap_load(struct folio *folio)
> > static inline void zswap_invalidate(int type, pgoff_t offset) {}
> > static inline void zswap_swapon(int type) {}
> > static inline void zswap_swapoff(int type) {}
> > +static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
> >
> > #endif
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 470821d1ba1a..792ca21c5815 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -5614,6 +5614,8 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
> > page_counter_set_min(&memcg->memory, 0);
> > page_counter_set_low(&memcg->memory, 0);
> >
> > + zswap_memcg_offline_cleanup(memcg);
> > +
> > memcg_offline_kmem(memcg);
> > reparent_shrinker_deferred(memcg);
> > wb_memcg_offline(memcg);
> > diff --git a/mm/swap.h b/mm/swap.h
> > index 73c332ee4d91..c0dc73e10e91 100644
> > --- a/mm/swap.h
> > +++ b/mm/swap.h
> > @@ -51,7 +51,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > struct swap_iocb **plug);
> > struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > struct mempolicy *mpol, pgoff_t ilx,
> > - bool *new_page_allocated);
> > + bool *new_page_allocated,
> > + bool skip_if_exists);
> > struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> > struct mempolicy *mpol, pgoff_t ilx);
> > struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index 85d9e5806a6a..6c84236382f3 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -412,7 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
> >
> > struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > struct mempolicy *mpol, pgoff_t ilx,
> > - bool *new_page_allocated)
> > + bool *new_page_allocated,
> > + bool skip_if_exists)
> > {
> > struct swap_info_struct *si;
> > struct folio *folio;
> > @@ -470,6 +471,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > if (err != -EEXIST)
> > goto fail_put_swap;
> >
> > + /*
> > + * Protect against a recursive call to __read_swap_cache_async()
> > + * on the same entry waiting forever here because SWAP_HAS_CACHE
> > + * is set but the folio is not the swap cache yet. This can
> > + * happen today if mem_cgroup_swapin_charge_folio() below
> > + * triggers reclaim through zswap, which may call
> > + * __read_swap_cache_async() in the writeback path.
> > + */
> > + if (skip_if_exists)
> > + goto fail_put_swap;
> > +
> > /*
> > * We might race against __delete_from_swap_cache(), and
> > * stumble across a swap_map entry whose SWAP_HAS_CACHE
> > @@ -537,7 +549,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >
> > mpol = get_vma_policy(vma, addr, 0, &ilx);
> > page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> > - &page_allocated);
> > + &page_allocated, false);
> > mpol_cond_put(mpol);
> >
> > if (page_allocated)
> > @@ -654,7 +666,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > /* Ok, do the async read-ahead now */
> > page = __read_swap_cache_async(
> > swp_entry(swp_type(entry), offset),
> > - gfp_mask, mpol, ilx, &page_allocated);
> > + gfp_mask, mpol, ilx, &page_allocated, false);
> > if (!page)
> > continue;
> > if (page_allocated) {
> > @@ -672,7 +684,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > skip:
> > /* The page was likely read above, so no need for plugging here */
> > page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> > - &page_allocated);
> > + &page_allocated, false);
> > if (unlikely(page_allocated))
> > swap_readpage(page, false, NULL);
> > return page;
> > @@ -827,7 +839,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> > pte_unmap(pte);
> > pte = NULL;
> > page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> > - &page_allocated);
> > + &page_allocated, false);
> > if (!page)
> > continue;
> > if (page_allocated) {
> > @@ -847,7 +859,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> > skip:
> > /* The page was likely read above, so no need for plugging here */
> > page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
> > - &page_allocated);
> > + &page_allocated, false);
> > if (unlikely(page_allocated))
> > swap_readpage(page, false, NULL);
> > return page;
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index 4bdb2d83bb0d..f323e45cbdc7 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -35,6 +35,7 @@
> > #include <linux/writeback.h>
> > #include <linux/pagemap.h>
> > #include <linux/workqueue.h>
> > +#include <linux/list_lru.h>
> >
> > #include "swap.h"
> > #include "internal.h"
> > @@ -174,8 +175,8 @@ struct zswap_pool {
> > struct work_struct shrink_work;
> > struct hlist_node node;
> > char tfm_name[CRYPTO_MAX_ALG_NAME];
> > - struct list_head lru;
> > - spinlock_t lru_lock;
> > + struct list_lru list_lru;
> > + struct mem_cgroup *next_shrink;
> > };
> >
> > /*
> > @@ -291,15 +292,46 @@ static void zswap_update_total_size(void)
> > zswap_pool_total_size = total;
> > }
> >
> > +/* should be called under RCU */
>
> nit: probably WARN_ON_ONCE(!rcu_read_lock_held()) or
> RCU_LOCKDEP_WARN(!rcu_read_lock_held()) in the function body is
> better?
>
> > +#ifdef CONFIG_MEMCG
> > +static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
> > +{
> > + return entry->objcg ? obj_cgroup_memcg(entry->objcg) : NULL;
> > +}
> > +#else
> > +static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
> > +{
> > + return NULL;
> > +}
> > +#endif
> > +
> > +static inline int entry_to_nid(struct zswap_entry *entry)
> > +{
> > + return page_to_nid(virt_to_page(entry));
> > +}
> > +
> > +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
> > +{
> > + struct zswap_pool *pool;
> > +
> > + /* lock out zswap pools list modification */
> > + spin_lock(&zswap_pools_lock);
> > + list_for_each_entry(pool, &zswap_pools, list) {
> > + if (pool->next_shrink == memcg)
> > + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> > + }
> > + spin_unlock(&zswap_pools_lock);
> > +}
> > +
> > /*********************************
> > * zswap entry functions
> > **********************************/
> > static struct kmem_cache *zswap_entry_cache;
> >
> > -static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> > +static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
> > {
> > struct zswap_entry *entry;
> > - entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> > + entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
> > if (!entry)
> > return NULL;
> > entry->refcount = 1;
> > @@ -312,6 +344,61 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> > kmem_cache_free(zswap_entry_cache, entry);
> > }
> >
> > +/*********************************
> > +* lru functions
> > +**********************************/
> > +static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > +{
> > + int nid = entry_to_nid(entry);
> > + struct mem_cgroup *memcg;
> > +
> > + /*
> > + * Note that it is safe to use rcu_read_lock() here, even in the face of
> > + * concurrent memcg offlining. Thanks to the memcg->kmemcg_id indirection
> > + * used in list_lru lookup, only two scenarios are possible:
> > + *
> > + * 1. list_lru_add() is called before memcg->kmemcg_id is updated. The
> > + * new entry will be reparented to memcg's parent's list_lru.
> > + * 2. list_lru_add() is called after memcg->kmemcg_id is updated. The
> > + * new entry will be added directly to memcg's parent's list_lru.
> > + *
> > + * Similar reasoning holds for list_lru_del() and list_lru_putback().
> > + */
> > + rcu_read_lock();
> > + memcg = mem_cgroup_from_entry(entry);
> > + /* will always succeed */
> > + list_lru_add(list_lru, &entry->lru, nid, memcg);
> > + rcu_read_unlock();
> > +}
> > +
> > +static void zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> > +{
> > + int nid = entry_to_nid(entry);
> > + struct mem_cgroup *memcg;
> > +
> > + rcu_read_lock();
> > + memcg = mem_cgroup_from_entry(entry);
> > + /* will always succeed */
> > + list_lru_del(list_lru, &entry->lru, nid, memcg);
> > + rcu_read_unlock();
> > +}
> > +
> > +static void zswap_lru_putback(struct list_lru *list_lru,
> > + struct zswap_entry *entry)
> > +{
> > + int nid = entry_to_nid(entry);
> > + spinlock_t *lock = &list_lru->node[nid].lock;
> > + struct mem_cgroup *memcg;
> > +
> > + rcu_read_lock();
> > + memcg = mem_cgroup_from_entry(entry);
> > + spin_lock(lock);
> > + /* we cannot use list_lru_add here, because it increments node's lru count */
> > + list_lru_putback(list_lru, &entry->lru, nid, memcg);
> > + spin_unlock(lock);
> > + rcu_read_unlock();
> > +}
> > +
> > /*********************************
> > * rbtree functions
> > **********************************/
> > @@ -396,9 +483,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> > if (!entry->length)
> > atomic_dec(&zswap_same_filled_pages);
> > else {
> > - spin_lock(&entry->pool->lru_lock);
> > - list_del(&entry->lru);
> > - spin_unlock(&entry->pool->lru_lock);
> > + zswap_lru_del(&entry->pool->list_lru, entry);
> > zpool_free(zswap_find_zpool(entry), entry->handle);
> > zswap_pool_put(entry->pool);
> > }
> > @@ -632,21 +717,15 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
> > zswap_entry_put(tree, entry);
> > }
> >
> > -static int zswap_reclaim_entry(struct zswap_pool *pool)
> > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > + spinlock_t *lock, void *arg)
> > {
> > - struct zswap_entry *entry;
> > + struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > struct zswap_tree *tree;
> > pgoff_t swpoffset;
> > - int ret;
> > + enum lru_status ret = LRU_REMOVED_RETRY;
> > + int writeback_result;
> >
> > - /* Get an entry off the LRU */
> > - spin_lock(&pool->lru_lock);
> > - if (list_empty(&pool->lru)) {
> > - spin_unlock(&pool->lru_lock);
> > - return -EINVAL;
> > - }
> > - entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> > - list_del_init(&entry->lru);
> > /*
> > * Once the lru lock is dropped, the entry might get freed. The
> > * swpoffset is copied to the stack, and entry isn't deref'd again
> > @@ -654,28 +733,32 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > */
> > swpoffset = swp_offset(entry->swpentry);
> > tree = zswap_trees[swp_type(entry->swpentry)];
> > - spin_unlock(&pool->lru_lock);
> > + list_lru_isolate(l, item);
> > + /*
> > + * It's safe to drop the lock here because we return either
> > + * LRU_REMOVED_RETRY or LRU_RETRY.
> > + */
> > + spin_unlock(lock);
> >
> > /* Check for invalidate() race */
> > spin_lock(&tree->lock);
> > - if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> > - ret = -EAGAIN;
> > + if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
> > goto unlock;
> > - }
> > +
> > /* Hold a reference to prevent a free during writeback */
> > zswap_entry_get(entry);
> > spin_unlock(&tree->lock);
> >
> > - ret = zswap_writeback_entry(entry, tree);
> > + writeback_result = zswap_writeback_entry(entry, tree);
> >
> > spin_lock(&tree->lock);
> > - if (ret) {
> > - /* Writeback failed, put entry back on LRU */
> > - spin_lock(&pool->lru_lock);
> > - list_move(&entry->lru, &pool->lru);
> > - spin_unlock(&pool->lru_lock);
> > + if (writeback_result) {
> > + zswap_reject_reclaim_fail++;
> > + zswap_lru_putback(&entry->pool->list_lru, entry);
> > + ret = LRU_RETRY;
> > goto put_unlock;
> > }
> > + zswap_written_back_pages++;
> >
> > /*
> > * Writeback started successfully, the page now belongs to the
> > @@ -689,27 +772,93 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > zswap_entry_put(tree, entry);
> > unlock:
> > spin_unlock(&tree->lock);
> > - return ret ? -EAGAIN : 0;
> > + spin_lock(lock);
> > + return ret;
> > +}
> > +
> > +static int shrink_memcg(struct mem_cgroup *memcg)
> > +{
> > + struct zswap_pool *pool;
> > + int nid, shrunk = 0;
> > +
> > + /*
> > + * Skip zombies because their LRUs are reparented and we would be
> > + * reclaiming from the parent instead of the dead memcg.
> > + */
> > + if (memcg && !mem_cgroup_online(memcg))
> > + return -ENOENT;
> > +
> > + pool = zswap_pool_current_get();
> > + if (!pool)
> > + return -EINVAL;
> > +
> > + for_each_node_state(nid, N_NORMAL_MEMORY) {
> > + unsigned long nr_to_walk = 1;
> > +
> > + shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg,
> > + &shrink_memcg_cb, NULL, &nr_to_walk);
> > + }
> > + zswap_pool_put(pool);
> > + return shrunk ? 0 : -EAGAIN;
> > }
> >
> > static void shrink_worker(struct work_struct *w)
> > {
> > struct zswap_pool *pool = container_of(w, typeof(*pool),
> > shrink_work);
> > + struct mem_cgroup *memcg;
> > int ret, failures = 0;
> >
> > + /* global reclaim will select cgroup in a round-robin fashion. */
> > do {
> > - ret = zswap_reclaim_entry(pool);
> > - if (ret) {
> > - zswap_reject_reclaim_fail++;
> > - if (ret != -EAGAIN)
> > + spin_lock(&zswap_pools_lock);
> > + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> > + memcg = pool->next_shrink;
> > +
> > + /*
> > + * We need to retry if we have gone through a full round trip, or if we
> > + * got an offline memcg (or else we risk undoing the effect of the
> > + * zswap memcg offlining cleanup callback). This is not catastrophic
> > + * per se, but it will keep the now offlined memcg hostage for a while.
> > + *
> > + * Note that if we got an online memcg, we will keep the extra
> > + * reference in case the original reference obtained by mem_cgroup_iter
> > + * is dropped by the zswap memcg offlining callback, ensuring that the
> > + * memcg is not killed when we are reclaiming.
> > + */
> > + if (!memcg) {
> > + spin_unlock(&zswap_pools_lock);
> > + if (++failures == MAX_RECLAIM_RETRIES)
> > break;
> > +
> > + goto resched;
> > + }
> > +
> > + if (!mem_cgroup_online(memcg)) {
> > + /* drop the reference from mem_cgroup_iter() */
> > + mem_cgroup_put(memcg);
>
> Probably better to use mem_cgroup_iter_break() here?
mem_cgroup_iter_break(NULL, memcg) seems to perform the same thing, right?
>
> Also, I don't see mem_cgroup_tryget_online() being used here (where I
> expected it to be used), did I miss it?
Oh shoot yeah that was a typo - it should be
mem_cgroup_tryget_online(). Let me send a fix to that.
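For anyone following along, the intended shrink_worker() hunk is presumably
something like the sketch below (based on this discussion only; the actual
fix may differ). mem_cgroup_tryget_online() both checks online-ness and takes
the extra reference that the "drop the extra reference" comment further down
refers to:

	if (!mem_cgroup_tryget_online(memcg)) {
		/* drop the reference from mem_cgroup_iter() */
		mem_cgroup_put(memcg);
		pool->next_shrink = NULL;
		spin_unlock(&zswap_pools_lock);

		if (++failures == MAX_RECLAIM_RETRIES)
			break;

		goto resched;
	}
	spin_unlock(&zswap_pools_lock);

	ret = shrink_memcg(memcg);
	/* drop the extra reference taken by mem_cgroup_tryget_online() */
	mem_cgroup_put(memcg);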
>
> > + pool->next_shrink = NULL;
> > + spin_unlock(&zswap_pools_lock);
> > +
> > if (++failures == MAX_RECLAIM_RETRIES)
> > break;
> > +
> > + goto resched;
> > }
> > + spin_unlock(&zswap_pools_lock);
> > +
> > + ret = shrink_memcg(memcg);
>
> We just checked for online-ness above, and then shrink_memcg() checks
> it again. Is this intentional?
Hmm these two checks are for two different purposes. The check above
is mainly to prevent accidentally undoing the offline cleanup callback
during memcg selection step. Inside shrink_memcg(), we check
onlineness again to prevent reclaiming from offlined memcgs - which in
effect will trigger the reclaim of the parent's memcg.
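To spell out the distinction, the two checks sit at different layers; roughly
(a paraphrase of the hunks quoted above, not new code):

	/* shrink_worker(), memcg selection step: do not keep an offlined memcg
	 * in pool->next_shrink, which would undo the offline cleanup callback. */
	if (!mem_cgroup_online(memcg)) {
		mem_cgroup_put(memcg);		/* drop the mem_cgroup_iter() reference */
		pool->next_shrink = NULL;
		/* ... unlock, count a failure, and retry with the next memcg ... */
	}

	/* shrink_memcg(), reclaim step: skip zombie memcgs, whose LRUs were
	 * reparented and would make us reclaim from the parent instead. */
	if (memcg && !mem_cgroup_online(memcg))
		return -ENOENT;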
>
> > + /* drop the extra reference */
>
> Where does the extra reference come from?
The extra reference is from mem_cgroup_tryget_online(). We get two
references in the dance above - one from mem_cgroup_iter() (which can
be dropped) and one extra from mem_cgroup_tryget_online(). I kept the
second one in case the first one was dropped by the zswap memcg
offlining callback, but after reclaiming it is safe to just drop it.
>
> > + mem_cgroup_put(memcg);
> > +
> > + if (ret == -EINVAL)
> > + break;
> > + if (ret && ++failures == MAX_RECLAIM_RETRIES)
> > + break;
> > +
> > +resched:
> > cond_resched();
> > } while (!zswap_can_accept());
> > - zswap_pool_put(pool);
> > }
> >
> > static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > @@ -767,8 +916,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > */
> > kref_init(&pool->kref);
> > INIT_LIST_HEAD(&pool->list);
> > - INIT_LIST_HEAD(&pool->lru);
> > - spin_lock_init(&pool->lru_lock);
> > + list_lru_init_memcg(&pool->list_lru, NULL);
> > INIT_WORK(&pool->shrink_work, shrink_worker);
> >
> > zswap_pool_debug("created", pool);
> > @@ -834,6 +982,13 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> >
> > cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> > free_percpu(pool->acomp_ctx);
> > + list_lru_destroy(&pool->list_lru);
> > +
> > + spin_lock(&zswap_pools_lock);
> > + mem_cgroup_put(pool->next_shrink);
> > + pool->next_shrink = NULL;
> > + spin_unlock(&zswap_pools_lock);
> > +
> > for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > zpool_destroy_pool(pool->zpools[i]);
> > kfree(pool);
> > @@ -1081,7 +1236,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> > /* try to allocate swap cache page */
> > mpol = get_task_policy(current);
> > page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
> > - NO_INTERLEAVE_INDEX, &page_was_allocated);
> > + NO_INTERLEAVE_INDEX, &page_was_allocated, true);
> > if (!page) {
> > ret = -ENOMEM;
> > goto fail;
> > @@ -1152,7 +1307,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> > /* start writeback */
> > __swap_writepage(page, &wbc);
> > put_page(page);
> > - zswap_written_back_pages++;
> >
> > return ret;
> >
> > @@ -1209,6 +1363,7 @@ bool zswap_store(struct folio *folio)
> > struct scatterlist input, output;
> > struct crypto_acomp_ctx *acomp_ctx;
> > struct obj_cgroup *objcg = NULL;
> > + struct mem_cgroup *memcg = NULL;
> > struct zswap_pool *pool;
> > struct zpool *zpool;
> > unsigned int dlen = PAGE_SIZE;
> > @@ -1240,15 +1395,15 @@ bool zswap_store(struct folio *folio)
> > zswap_invalidate_entry(tree, dupentry);
> > }
> > spin_unlock(&tree->lock);
> > -
> > - /*
> > - * XXX: zswap reclaim does not work with cgroups yet. Without a
> > - * cgroup-aware entry LRU, we will push out entries system-wide based on
> > - * local cgroup limits.
> > - */
> > objcg = get_obj_cgroup_from_folio(folio);
> > - if (objcg && !obj_cgroup_may_zswap(objcg))
> > - goto reject;
> > + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > + memcg = get_mem_cgroup_from_objcg(objcg);
>
> Do we need a reference here? IIUC, this is folio_memcg() and the folio
> is locked, so folio_memcg() should remain stable, no?
Hmmm obj_cgroup_may_zswap() also holds a reference to the objcg's
memcg, so I just followed the patterns to be safe.
>
> Same for the call below.
>
> > + if (shrink_memcg(memcg)) {
> > + mem_cgroup_put(memcg);
> > + goto reject;
> > + }
> > + mem_cgroup_put(memcg);
> > + }
> >
> > /* reclaim space if needed */
> > if (zswap_is_full()) {
> > @@ -1265,7 +1420,7 @@ bool zswap_store(struct folio *folio)
> > }
> >
> > /* allocate entry */
> > - entry = zswap_entry_cache_alloc(GFP_KERNEL);
> > + entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
> > if (!entry) {
> > zswap_reject_kmemcache_fail++;
> > goto reject;
> > @@ -1292,6 +1447,15 @@ bool zswap_store(struct folio *folio)
> > if (!entry->pool)
> > goto freepage;
> >
> > + if (objcg) {
> > + memcg = get_mem_cgroup_from_objcg(objcg);
> > + if (memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL)) {
> > + mem_cgroup_put(memcg);
> > + goto put_pool;
> > + }
> > + mem_cgroup_put(memcg);
> > + }
> > +
> > /* compress */
> > acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> >
> > @@ -1370,9 +1534,8 @@ bool zswap_store(struct folio *folio)
> > zswap_invalidate_entry(tree, dupentry);
> > }
> > if (entry->length) {
> > - spin_lock(&entry->pool->lru_lock);
> > - list_add(&entry->lru, &entry->pool->lru);
> > - spin_unlock(&entry->pool->lru_lock);
> > + INIT_LIST_HEAD(&entry->lru);
> > + zswap_lru_add(&entry->pool->list_lru, entry);
> > }
> > spin_unlock(&tree->lock);
> >
> > @@ -1385,6 +1548,7 @@ bool zswap_store(struct folio *folio)
> >
> > put_dstmem:
> > mutex_unlock(acomp_ctx->mutex);
> > +put_pool:
> > zswap_pool_put(entry->pool);
> > freepage:
> > zswap_entry_cache_free(entry);
> > @@ -1479,9 +1643,8 @@ bool zswap_load(struct folio *folio)
> > zswap_invalidate_entry(tree, entry);
> > folio_mark_dirty(folio);
> > } else if (entry->length) {
> > - spin_lock(&entry->pool->lru_lock);
> > - list_move(&entry->lru, &entry->pool->lru);
> > - spin_unlock(&entry->pool->lru_lock);
> > + zswap_lru_del(&entry->pool->list_lru, entry);
> > + zswap_lru_add(&entry->pool->list_lru, entry);
> > }
> > zswap_entry_put(tree, entry);
> > spin_unlock(&tree->lock);
> > --
> > 2.34.1
* Re: [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat
2023-12-05 18:21 ` Yosry Ahmed
@ 2023-12-05 18:56 ` Nhat Pham
0 siblings, 0 replies; 48+ messages in thread
From: Nhat Pham @ 2023-12-05 18:56 UTC (permalink / raw)
To: Yosry Ahmed
Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
mhocko, roman.gushchin, shakeelb, muchun.song, chrisl, linux-mm,
kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
shuah
On Tue, Dec 5, 2023 at 10:22 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> >
> > Since zswap now writes back pages from memcg-specific LRUs, we now need a
> > new stat to show the writeback count for each memcg.
> >
> > Suggested-by: Nhat Pham <nphamcs@gmail.com>
> > Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > ---
> > include/linux/vm_event_item.h | 1 +
> > mm/memcontrol.c | 1 +
> > mm/vmstat.c | 1 +
> > mm/zswap.c | 4 ++++
> > 4 files changed, 7 insertions(+)
> >
> > diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> > index d1b847502f09..f4569ad98edf 100644
> > --- a/include/linux/vm_event_item.h
> > +++ b/include/linux/vm_event_item.h
> > @@ -142,6 +142,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
> > #ifdef CONFIG_ZSWAP
> > ZSWPIN,
> > ZSWPOUT,
> > + ZSWP_WB,
>
> I think you dismissed Johannes's comment from v7 about ZSWPWB and
> "zswpwb" being more consistent with the existing events.
I missed that entirely. Oops. Yeah I prefer ZSWPWB too. Let me send a fix.
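For readers skimming the thread: after the rename, the new event lines up
with the existing zswap counters in /proc/vmstat and in the cgroup v2
memory.stat file. A hypothetical reading (cgroup name and values are made up,
purely to illustrate the naming):

	$ grep zswp /proc/vmstat
	zswpin 10234
	zswpout 28417
	zswpwb 1503

	$ grep zswp /sys/fs/cgroup/workload/memory.stat
	zswpin 512
	zswpout 2048
	zswpwb 87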
>
> > #endif
> > #ifdef CONFIG_X86
> > DIRECT_MAP_LEVEL2_SPLIT,
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 792ca21c5815..21d79249c8b4 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -703,6 +703,7 @@ static const unsigned int memcg_vm_event_stat[] = {
> > #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> > ZSWPIN,
> > ZSWPOUT,
> > + ZSWP_WB,
> > #endif
> > #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > THP_FAULT_ALLOC,
> > diff --git a/mm/vmstat.c b/mm/vmstat.c
> > index afa5a38fcc9c..2249f85e4a87 100644
> > --- a/mm/vmstat.c
> > +++ b/mm/vmstat.c
> > @@ -1401,6 +1401,7 @@ const char * const vmstat_text[] = {
> > #ifdef CONFIG_ZSWAP
> > "zswpin",
> > "zswpout",
> > + "zswp_wb",
> > #endif
> > #ifdef CONFIG_X86
> > "direct_map_level2_splits",
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index f323e45cbdc7..49b79393e472 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -760,6 +760,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> > }
> > zswap_written_back_pages++;
> >
> > + if (entry->objcg)
> > + count_objcg_event(entry->objcg, ZSWP_WB);
> > +
> > + count_vm_event(ZSWP_WB);
> > /*
> > * Writeback started successfully, the page now belongs to the
> > * swapcache. Drop the entry from zswap - unless invalidate already
> > --
> > 2.34.1
* Re: [PATCH v8 3/6] zswap: make shrinking memcg-aware
2023-12-05 18:49 ` Nhat Pham
@ 2023-12-05 18:59 ` Yosry Ahmed
2023-12-05 19:09 ` Nhat Pham
0 siblings, 1 reply; 48+ messages in thread
From: Yosry Ahmed @ 2023-12-05 18:59 UTC (permalink / raw)
To: Nhat Pham
Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
mhocko, roman.gushchin, shakeelb, muchun.song, chrisl, linux-mm,
kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
shuah
[..]
> > > static void shrink_worker(struct work_struct *w)
> > > {
> > > struct zswap_pool *pool = container_of(w, typeof(*pool),
> > > shrink_work);
> > > + struct mem_cgroup *memcg;
> > > int ret, failures = 0;
> > >
> > > + /* global reclaim will select cgroup in a round-robin fashion. */
> > > do {
> > > - ret = zswap_reclaim_entry(pool);
> > > - if (ret) {
> > > - zswap_reject_reclaim_fail++;
> > > - if (ret != -EAGAIN)
> > > + spin_lock(&zswap_pools_lock);
> > > + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> > > + memcg = pool->next_shrink;
> > > +
> > > + /*
> > > + * We need to retry if we have gone through a full round trip, or if we
> > > + * got an offline memcg (or else we risk undoing the effect of the
> > > + * zswap memcg offlining cleanup callback). This is not catastrophic
> > > + * per se, but it will keep the now offlined memcg hostage for a while.
> > > + *
> > > + * Note that if we got an online memcg, we will keep the extra
> > > + * reference in case the original reference obtained by mem_cgroup_iter
> > > + * is dropped by the zswap memcg offlining callback, ensuring that the
> > > + * memcg is not killed when we are reclaiming.
> > > + */
> > > + if (!memcg) {
> > > + spin_unlock(&zswap_pools_lock);
> > > + if (++failures == MAX_RECLAIM_RETRIES)
> > > break;
> > > +
> > > + goto resched;
> > > + }
> > > +
> > > + if (!mem_cgroup_online(memcg)) {
> > > + /* drop the reference from mem_cgroup_iter() */
> > > + mem_cgroup_put(memcg);
> >
> > Probably better to use mem_cgroup_iter_break() here?
>
> mem_cgroup_iter_break(NULL, memcg) seems to perform the same thing, right?
Yes, but it's better to break the iteration with the documented API
(e.g. if mem_cgroup_iter_break() changes to do extra work).
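Concretely, the suggestion is just to spell the drop through the iterator
API, e.g. (sketch):

	/* cancel the walk; this also drops the reference that
	 * mem_cgroup_iter() took on memcg */
	mem_cgroup_iter_break(NULL, memcg);
	pool->next_shrink = NULL;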
>
> >
> > Also, I don't see mem_cgroup_tryget_online() being used here (where I
> > expected it to be used), did I miss it?
>
> Oh shoot yeah that was a typo - it should be
> mem_cgroup_tryget_online(). Let me send a fix to that.
>
> >
> > > + pool->next_shrink = NULL;
> > > + spin_unlock(&zswap_pools_lock);
> > > +
> > > if (++failures == MAX_RECLAIM_RETRIES)
> > > break;
> > > +
> > > + goto resched;
> > > }
> > > + spin_unlock(&zswap_pools_lock);
> > > +
> > > + ret = shrink_memcg(memcg);
> >
> > We just checked for online-ness above, and then shrink_memcg() checks
> > it again. Is this intentional?
>
> Hmm these two checks are for two different purposes. The check above
> is mainly to prevent accidentally undoing the offline cleanup callback
> during memcg selection step. Inside shrink_memcg(), we check
> onlineness again to prevent reclaiming from offlined memcgs - which in
> effect will trigger the reclaim of the parent's memcg.
Right, but two checks in close proximity are not doing a lot.
Especially since the memcg online-ness can change right after the check
inside shrink_memcg() anyway, so it's a best-effort thing.
Anyway, it shouldn't matter much. We can leave it.
>
> >
> > > + /* drop the extra reference */
> >
> > Where does the extra reference come from?
>
> The extra reference is from mem_cgroup_tryget_online(). We get two
> references in the dance above - one from mem_cgroup_iter() (which can
> be dropped) and one extra from mem_cgroup_tryget_online(). I kept the
> second one in case the first one was dropped by the zswap memcg
> offlining callback, but after reclaiming it is safe to just drop it.
Right. I was confused by the missing mem_cgroup_tryget_online().
>
> >
> > > + mem_cgroup_put(memcg);
> > > +
> > > + if (ret == -EINVAL)
> > > + break;
> > > + if (ret && ++failures == MAX_RECLAIM_RETRIES)
> > > + break;
> > > +
> > > +resched:
> > > cond_resched();
> > > } while (!zswap_can_accept());
> > > - zswap_pool_put(pool);
> > > }
> > >
> > > static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
[..]
> > > @@ -1240,15 +1395,15 @@ bool zswap_store(struct folio *folio)
> > > zswap_invalidate_entry(tree, dupentry);
> > > }
> > > spin_unlock(&tree->lock);
> > > -
> > > - /*
> > > - * XXX: zswap reclaim does not work with cgroups yet. Without a
> > > - * cgroup-aware entry LRU, we will push out entries system-wide based on
> > > - * local cgroup limits.
> > > - */
> > > objcg = get_obj_cgroup_from_folio(folio);
> > > - if (objcg && !obj_cgroup_may_zswap(objcg))
> > > - goto reject;
> > > + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > > + memcg = get_mem_cgroup_from_objcg(objcg);
> >
> > Do we need a reference here? IIUC, this is folio_memcg() and the folio
> > is locked, so folio_memcg() should remain stable, no?
>
> Hmmm obj_cgroup_may_zswap() also holds a reference to the objcg's
> memcg, so I just followed the patterns to be safe.
Perhaps it's less clear inside obj_cgroup_may_zswap(). We can actually
pass the folio to obj_cgroup_may_zswap(), add a debug check that the
folio is locked, and avoid getting the ref there as well. That can be
done separately. Perhaps Johannes can shed some light on this, if
there's a different reason why getting a ref there is needed.
For this change, I think the refcount manipulation is unnecessary.
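To make the alternative concrete, the refcount-free variant being floated
here would look roughly like the sketch below. It assumes the folio lock
really does keep folio_memcg() stable for the duration of the reclaim
attempt, which is exactly the open question above:

	objcg = get_obj_cgroup_from_folio(folio);
	if (objcg && !obj_cgroup_may_zswap(objcg)) {
		/* folio is locked, so its memcg binding should not change under us */
		memcg = folio_memcg(folio);
		if (shrink_memcg(memcg))
			goto reject;
	}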
>
>
> >
> > Same for the call below.
> >
> > > + if (shrink_memcg(memcg)) {
> > > + mem_cgroup_put(memcg);
> > > + goto reject;
> > > + }
> > > + mem_cgroup_put(memcg);
> > > + }
> > >
> > > /* reclaim space if needed */
> > > if (zswap_is_full()) {
[..]
* Re: [PATCH v8 3/6] zswap: make shrinking memcg-aware
2023-12-05 18:59 ` Yosry Ahmed
@ 2023-12-05 19:09 ` Nhat Pham
0 siblings, 0 replies; 48+ messages in thread
From: Nhat Pham @ 2023-12-05 19:09 UTC (permalink / raw)
To: Yosry Ahmed
Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
mhocko, roman.gushchin, shakeelb, muchun.song, chrisl, linux-mm,
kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
shuah
On Tue, Dec 5, 2023 at 11:00 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> [..]
> > > > static void shrink_worker(struct work_struct *w)
> > > > {
> > > > struct zswap_pool *pool = container_of(w, typeof(*pool),
> > > > shrink_work);
> > > > + struct mem_cgroup *memcg;
> > > > int ret, failures = 0;
> > > >
> > > > + /* global reclaim will select cgroup in a round-robin fashion. */
> > > > do {
> > > > - ret = zswap_reclaim_entry(pool);
> > > > - if (ret) {
> > > > - zswap_reject_reclaim_fail++;
> > > > - if (ret != -EAGAIN)
> > > > + spin_lock(&zswap_pools_lock);
> > > > + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> > > > + memcg = pool->next_shrink;
> > > > +
> > > > + /*
> > > > + * We need to retry if we have gone through a full round trip, or if we
> > > > + * got an offline memcg (or else we risk undoing the effect of the
> > > > + * zswap memcg offlining cleanup callback). This is not catastrophic
> > > > + * per se, but it will keep the now offlined memcg hostage for a while.
> > > > + *
> > > > + * Note that if we got an online memcg, we will keep the extra
> > > > + * reference in case the original reference obtained by mem_cgroup_iter
> > > > + * is dropped by the zswap memcg offlining callback, ensuring that the
> > > > + * memcg is not killed when we are reclaiming.
> > > > + */
> > > > + if (!memcg) {
> > > > + spin_unlock(&zswap_pools_lock);
> > > > + if (++failures == MAX_RECLAIM_RETRIES)
> > > > break;
> > > > +
> > > > + goto resched;
> > > > + }
> > > > +
> > > > + if (!mem_cgroup_online(memcg)) {
> > > > + /* drop the reference from mem_cgroup_iter() */
> > > > + mem_cgroup_put(memcg);
> > >
> > > Probably better to use mem_cgroup_iter_break() here?
> >
> > mem_cgroup_iter_break(NULL, memcg) seems to perform the same thing, right?
>
> Yes, but it's better to break the iteration with the documented API
> (e.g. if mem_cgroup_iter_break() changes to do extra work).
Hmm, this is a mostly aesthetic fix to me, but I don't have a strong opinion either way.
>
> >
> > >
> > > Also, I don't see mem_cgroup_tryget_online() being used here (where I
> > > expected it to be used), did I miss it?
> >
> > Oh shoot yeah that was a typo - it should be
> > mem_cgroup_tryget_online(). Let me send a fix to that.
> >
> > >
> > > > + pool->next_shrink = NULL;
> > > > + spin_unlock(&zswap_pools_lock);
> > > > +
> > > > if (++failures == MAX_RECLAIM_RETRIES)
> > > > break;
> > > > +
> > > > + goto resched;
> > > > }
> > > > + spin_unlock(&zswap_pools_lock);
> > > > +
> > > > + ret = shrink_memcg(memcg);
> > >
> > > We just checked for online-ness above, and then shrink_memcg() checks
> > > it again. Is this intentional?
> >
> > Hmm these two checks are for two different purposes. The check above
> > is mainly to prevent accidentally undoing the offline cleanup callback
> > during memcg selection step. Inside shrink_memcg(), we check
> > onlineness again to prevent reclaiming from offlined memcgs - which in
> > effect will trigger the reclaim of the parent's memcg.
>
> Right, but two checks in close proximity are not doing a lot.
> Especially since the memcg online-ness can change right after the check
> inside shrink_memcg() anyway, so it's a best-effort thing.
>
> Anyway, it shouldn't matter much. We can leave it.
>
> >
> > >
> > > > + /* drop the extra reference */
> > >
> > > Where does the extra reference come from?
> >
> > The extra reference is from mem_cgroup_tryget_online(). We get two
> > references in the dance above - one from mem_cgroup_iter() (which can
> > be dropped) and one extra from mem_cgroup_tryget_online(). I kept the
> > second one in case the first one was dropped by the zswap memcg
> > offlining callback, but after reclaiming it is safe to just drop it.
>
> Right. I was confused by the missing mem_cgroup_tryget_online().
>
> >
> > >
> > > > + mem_cgroup_put(memcg);
> > > > +
> > > > + if (ret == -EINVAL)
> > > > + break;
> > > > + if (ret && ++failures == MAX_RECLAIM_RETRIES)
> > > > + break;
> > > > +
> > > > +resched:
> > > > cond_resched();
> > > > } while (!zswap_can_accept());
> > > > - zswap_pool_put(pool);
> > > > }
> > > >
> > > > static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> [..]
> > > > @@ -1240,15 +1395,15 @@ bool zswap_store(struct folio *folio)
> > > > zswap_invalidate_entry(tree, dupentry);
> > > > }
> > > > spin_unlock(&tree->lock);
> > > > -
> > > > - /*
> > > > - * XXX: zswap reclaim does not work with cgroups yet. Without a
> > > > - * cgroup-aware entry LRU, we will push out entries system-wide based on
> > > > - * local cgroup limits.
> > > > - */
> > > > objcg = get_obj_cgroup_from_folio(folio);
> > > > - if (objcg && !obj_cgroup_may_zswap(objcg))
> > > > - goto reject;
> > > > + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > > > + memcg = get_mem_cgroup_from_objcg(objcg);
> > >
> > > Do we need a reference here? IIUC, this is folio_memcg() and the folio
> > > is locked, so folio_memcg() should remain stable, no?
> >
> > Hmmm obj_cgroup_may_zswap() also holds a reference to the objcg's
> > memcg, so I just followed the patterns to be safe.
>
> Perhaps it's less clear inside obj_cgroup_may_zswap(). We can actually
> pass the folio to obj_cgroup_may_zswap(), add a debug check that the
> folio is locked, and avoid getting the ref there as well. That can be
> done separately. Perhaps Johannes can shed some light on this, if
> there's a different reason why getting a ref there is needed.
>
> For this change, I think the refcount manipulation is unnecessary.
Hmm, true. I'm leaning towards playing it safe - in the worst case,
we can send a follow-up patch to optimize this (perhaps for both
places, if neither requires pinning the memcg). But I'll wait
for Johannes to chime in with his opinion on the matter.
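Just so we are talking about the same thing, the refcount-free variant
being discussed would look roughly like the sketch below. This is
purely illustrative - it is not what this series does - and it assumes
folio_memcg() is indeed stable under the folio lock as you describe:

	objcg = get_obj_cgroup_from_folio(folio);
	if (objcg && !obj_cgroup_may_zswap(objcg)) {
		struct mem_cgroup *memcg = folio_memcg(folio);

		/* zswap_store() holds the folio lock, so memcg is stable */
		VM_WARN_ON_ONCE(!folio_test_locked(folio));
		if (shrink_memcg(memcg))
			goto reject;
	}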
>
> >
> >
> > >
> > > Same for the call below.
> > >
> > > > + if (shrink_memcg(memcg)) {
> > > > + mem_cgroup_put(memcg);
> > > > + goto reject;
> > > > + }
> > > > + mem_cgroup_put(memcg);
> > > > + }
> > > >
> > > > /* reclaim space if needed */
> > > > if (zswap_is_full()) {
> [..]
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat (fix)
2023-11-30 19:40 ` [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat Nhat Pham
2023-12-05 18:21 ` Yosry Ahmed
@ 2023-12-05 19:33 ` Nhat Pham
2023-12-05 20:05 ` Yosry Ahmed
2023-12-08 0:25 ` Chris Li
1 sibling, 2 replies; 48+ messages in thread
From: Nhat Pham @ 2023-12-05 19:33 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
Rename ZSWP_WB to ZSWPWB to better match the naming scheme of the
existing counters.
Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
include/linux/vm_event_item.h | 2 +-
mm/memcontrol.c | 2 +-
mm/vmstat.c | 2 +-
mm/zswap.c | 4 ++--
4 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index f4569ad98edf..747943bc8cc2 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -142,7 +142,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
#ifdef CONFIG_ZSWAP
ZSWPIN,
ZSWPOUT,
- ZSWP_WB,
+ ZSWPWB,
#endif
#ifdef CONFIG_X86
DIRECT_MAP_LEVEL2_SPLIT,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 21d79249c8b4..0286b7d38832 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -703,7 +703,7 @@ static const unsigned int memcg_vm_event_stat[] = {
#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
ZSWPIN,
ZSWPOUT,
- ZSWP_WB,
+ ZSWPWB,
#endif
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
THP_FAULT_ALLOC,
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 2249f85e4a87..cfd8d8256f8e 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1401,7 +1401,7 @@ const char * const vmstat_text[] = {
#ifdef CONFIG_ZSWAP
"zswpin",
"zswpout",
- "zswp_wb",
+ "zswpwb",
#endif
#ifdef CONFIG_X86
"direct_map_level2_splits",
diff --git a/mm/zswap.c b/mm/zswap.c
index c65b8ccc6b72..0fb0945c0031 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -761,9 +761,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
zswap_written_back_pages++;
if (entry->objcg)
- count_objcg_event(entry->objcg, ZSWP_WB);
+ count_objcg_event(entry->objcg, ZSWPWB);
- count_vm_event(ZSWP_WB);
+ count_vm_event(ZSWPWB);
/*
* Writeback started successfully, the page now belongs to the
* swapcache. Drop the entry from zswap - unless invalidate already
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* [PATCH v8 3/6] zswap: make shrinking memcg-aware (fix)
2023-11-30 19:40 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware Nhat Pham
2023-12-05 18:20 ` Yosry Ahmed
@ 2023-12-05 19:54 ` Nhat Pham
2023-12-06 0:10 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware Chris Li
` (2 subsequent siblings)
4 siblings, 0 replies; 48+ messages in thread
From: Nhat Pham @ 2023-12-05 19:54 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
Use the correct function for the onlineness check during memcg
selection, and use mem_cgroup_iter_break() to break the iteration.
Suggested-by: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
mm/zswap.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index f323e45cbdc7..7a84c1454988 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -834,9 +834,9 @@ static void shrink_worker(struct work_struct *w)
goto resched;
}
- if (!mem_cgroup_online(memcg)) {
+ if (!mem_cgroup_tryget_online(memcg)) {
/* drop the reference from mem_cgroup_iter() */
- mem_cgroup_put(memcg);
+ mem_cgroup_iter_break(NULL, memcg);
pool->next_shrink = NULL;
spin_unlock(&zswap_pools_lock);
@@ -985,7 +985,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
list_lru_destroy(&pool->list_lru);
spin_lock(&zswap_pools_lock);
- mem_cgroup_put(pool->next_shrink);
+ mem_cgroup_iter_break(NULL, pool->next_shrink);
pool->next_shrink = NULL;
spin_unlock(&zswap_pools_lock);
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* Re: [PATCH v8 2/6] memcontrol: implement mem_cgroup_tryget_online()
2023-12-05 18:02 ` Yosry Ahmed
@ 2023-12-05 19:55 ` Nhat Pham
0 siblings, 0 replies; 48+ messages in thread
From: Nhat Pham @ 2023-12-05 19:55 UTC (permalink / raw)
To: Yosry Ahmed
Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
mhocko, roman.gushchin, shakeelb, muchun.song, chrisl, linux-mm,
kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
shuah
On Tue, Dec 5, 2023 at 10:03 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > This patch implements a helper function that tries to get a reference to
> > a memcg's css, as well as checking whether it is online. This new function
> > is almost exactly the same as the existing mem_cgroup_tryget(), except
> > for the onlineness check. In the !CONFIG_MEMCG case, it always returns
> > true, analogous to mem_cgroup_tryget(). This is useful, for example, for the
> > new zswap writeback scheme, where we need to select the next online
> > memcg as a candidate for the global limit reclaim.
> >
> > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
>
> Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
Thanks for the review, Yosry :) Really appreciate the effort and your
comments so far.
>
> > ---
> > include/linux/memcontrol.h | 10 ++++++++++
> > 1 file changed, 10 insertions(+)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 7bdcf3020d7a..2bd7d14ace78 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -821,6 +821,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
> > return !memcg || css_tryget(&memcg->css);
> > }
> >
> > +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
> > +{
> > + return !memcg || css_tryget_online(&memcg->css);
> > +}
> > +
> > static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> > {
> > if (memcg)
> > @@ -1349,6 +1354,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
> > return true;
> > }
> >
> > +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
> > +{
> > + return true;
> > +}
> > +
> > static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> > {
> > }
> > --
> > 2.34.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat (fix)
2023-12-05 19:33 ` [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat (fix) Nhat Pham
@ 2023-12-05 20:05 ` Yosry Ahmed
2023-12-08 0:25 ` Chris Li
1 sibling, 0 replies; 48+ messages in thread
From: Yosry Ahmed @ 2023-12-05 20:05 UTC (permalink / raw)
To: Nhat Pham
Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
mhocko, roman.gushchin, shakeelb, muchun.song, chrisl, linux-mm,
kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
shuah
On Tue, Dec 5, 2023 at 11:33 AM Nhat Pham <nphamcs@gmail.com> wrote:
>
> Rename ZSWP_WB to ZSWPWB to better match the naming scheme of the
> existing counters.
>
> Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
For the original patch + this fix:
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
> ---
> include/linux/vm_event_item.h | 2 +-
> mm/memcontrol.c | 2 +-
> mm/vmstat.c | 2 +-
> mm/zswap.c | 4 ++--
> 4 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> index f4569ad98edf..747943bc8cc2 100644
> --- a/include/linux/vm_event_item.h
> +++ b/include/linux/vm_event_item.h
> @@ -142,7 +142,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
> #ifdef CONFIG_ZSWAP
> ZSWPIN,
> ZSWPOUT,
> - ZSWP_WB,
> + ZSWPWB,
> #endif
> #ifdef CONFIG_X86
> DIRECT_MAP_LEVEL2_SPLIT,
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 21d79249c8b4..0286b7d38832 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -703,7 +703,7 @@ static const unsigned int memcg_vm_event_stat[] = {
> #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> ZSWPIN,
> ZSWPOUT,
> - ZSWP_WB,
> + ZSWPWB,
> #endif
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> THP_FAULT_ALLOC,
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 2249f85e4a87..cfd8d8256f8e 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1401,7 +1401,7 @@ const char * const vmstat_text[] = {
> #ifdef CONFIG_ZSWAP
> "zswpin",
> "zswpout",
> - "zswp_wb",
> + "zswpwb",
> #endif
> #ifdef CONFIG_X86
> "direct_map_level2_splits",
> diff --git a/mm/zswap.c b/mm/zswap.c
> index c65b8ccc6b72..0fb0945c0031 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -761,9 +761,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> zswap_written_back_pages++;
>
> if (entry->objcg)
> - count_objcg_event(entry->objcg, ZSWP_WB);
> + count_objcg_event(entry->objcg, ZSWPWB);
>
> - count_vm_event(ZSWP_WB);
> + count_vm_event(ZSWPWB);
> /*
> * Writeback started successfully, the page now belongs to the
> * swapcache. Drop the entry from zswap - unless invalidate already
> --
> 2.34.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 3/6] zswap: make shrinking memcg-aware
2023-11-30 19:40 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware Nhat Pham
2023-12-05 18:20 ` Yosry Ahmed
2023-12-05 19:54 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware (fix) Nhat Pham
@ 2023-12-06 0:10 ` Chris Li
2023-12-06 1:53 ` Nhat Pham
2023-12-06 3:03 ` Nhat Pham
2023-12-06 3:06 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware (fix 2) Nhat Pham
4 siblings, 1 reply; 48+ messages in thread
From: Chris Li @ 2023-12-06 0:10 UTC (permalink / raw)
To: Nhat Pham
Cc: akpm, hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
Hi Nhat,
Still working my way up your patch series.
On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham <nphamcs@gmail.com> wrote:
>
> From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
>
> Currently, we only have a single global LRU for zswap. This makes it
> impossible to perform workload-specific shrinking - a memcg cannot
> determine which pages in the pool it owns, and often ends up writing
> pages from other memcgs. This issue has been previously observed in
> practice and mitigated by simply disabling memcg-initiated shrinking:
>
> https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
>
> This patch fully resolves the issue by replacing the global zswap LRU
> with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
>
> a) When a store attempt hits a memcg limit, it now triggers a
> synchronous reclaim attempt that, if successful, allows the new
> hotter page to be accepted by zswap.
> b) If the store attempt instead hits the global zswap limit, it will
> trigger an asynchronous reclaim attempt, in which a memcg is
> selected for reclaim in a round-robin-like fashion.
>
> Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> Co-developed-by: Nhat Pham <nphamcs@gmail.com>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---
> include/linux/memcontrol.h | 5 +
> include/linux/zswap.h | 2 +
> mm/memcontrol.c | 2 +
> mm/swap.h | 3 +-
> mm/swap_state.c | 24 +++-
> mm/zswap.c | 269 +++++++++++++++++++++++++++++--------
> 6 files changed, 245 insertions(+), 60 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 2bd7d14ace78..a308c8eacf20 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1192,6 +1192,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> return NULL;
> }
>
> +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> +{
> + return NULL;
> +}
> +
> static inline bool folio_memcg_kmem(struct folio *folio)
> {
> return false;
> diff --git a/include/linux/zswap.h b/include/linux/zswap.h
> index 2a60ce39cfde..e571e393669b 100644
> --- a/include/linux/zswap.h
> +++ b/include/linux/zswap.h
> @@ -15,6 +15,7 @@ bool zswap_load(struct folio *folio);
> void zswap_invalidate(int type, pgoff_t offset);
> void zswap_swapon(int type);
> void zswap_swapoff(int type);
> +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
>
> #else
>
> @@ -31,6 +32,7 @@ static inline bool zswap_load(struct folio *folio)
> static inline void zswap_invalidate(int type, pgoff_t offset) {}
> static inline void zswap_swapon(int type) {}
> static inline void zswap_swapoff(int type) {}
> +static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
>
> #endif
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 470821d1ba1a..792ca21c5815 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5614,6 +5614,8 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
> page_counter_set_min(&memcg->memory, 0);
> page_counter_set_low(&memcg->memory, 0);
>
> + zswap_memcg_offline_cleanup(memcg);
> +
> memcg_offline_kmem(memcg);
> reparent_shrinker_deferred(memcg);
> wb_memcg_offline(memcg);
> diff --git a/mm/swap.h b/mm/swap.h
> index 73c332ee4d91..c0dc73e10e91 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -51,7 +51,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct swap_iocb **plug);
> struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct mempolicy *mpol, pgoff_t ilx,
> - bool *new_page_allocated);
> + bool *new_page_allocated,
> + bool skip_if_exists);
> struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> struct mempolicy *mpol, pgoff_t ilx);
> struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 85d9e5806a6a..6c84236382f3 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -412,7 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
>
> struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct mempolicy *mpol, pgoff_t ilx,
> - bool *new_page_allocated)
> + bool *new_page_allocated,
> + bool skip_if_exists)
I think this skip_if_exists is problematic here. You might need to
redesign this.
First of all, as an argument name, skip_if_exists does not make its
meaning clear to the caller. When I saw this, I was wondering what the
function returns when this condition is triggered. Unlike
"*new_page_allocated", which describes a state after the function
returns, "skip_if_exists" refers to an internal execution flow; it does
not say what value the function should return if that condition is
triggered. It forces the caller to look into the internals of
__read_swap_cache_async() to reason about "should I pass true or false
when I call this function?". I wish it had a better-abstracted name, or
maybe a function argument documentation block to explain the usage of
this argument.
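Something along these lines next to the function definition would
already go a long way (the wording is only a sketch, of course):

/**
 * __read_swap_cache_async - look up or create a page in the swap cache
 * ...
 * @skip_if_exists: if another caller has already claimed the swap cache
 *	slot (SWAP_HAS_CACHE is set but the page is not in the swap cache
 *	yet), return NULL instead of waiting for it to finish. Used by
 *	zswap writeback, which can be reached recursively from this path
 *	and would otherwise end up waiting on itself.
 */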
> {
> struct swap_info_struct *si;
> struct folio *folio;
> @@ -470,6 +471,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> if (err != -EEXIST)
> goto fail_put_swap;
>
> + /*
> + * Protect against a recursive call to __read_swap_cache_async()
> + * on the same entry waiting forever here because SWAP_HAS_CACHE
> + * is set but the folio is not the swap cache yet. This can
> + * happen today if mem_cgroup_swapin_charge_folio() below
> + * triggers reclaim through zswap, which may call
> + * __read_swap_cache_async() in the writeback path.
> + */
> + if (skip_if_exists)
> + goto fail_put_swap;
> +
This is very tricky for the caller that did set "skip_if_exists" to
true, because the return value is still subject to a race condition.
The following comment describes two race situations, which get cut off
by the patch context. Let me paste it again here:
+ /*
* We might race against __delete_from_swap_cache(), and
* stumble across a swap_map entry whose SWAP_HAS_CACHE
* has not yet been cleared. Or race against another
* __read_swap_cache_async(), which has set SWAP_HAS_CACHE
* in swap_map, but not yet added its page to swap cache.
*/
schedule_timeout_uninterruptible(1);
}
Basically, there are two kinds of races here: the first is a race
against deleting the swap cache entry, the second a race against adding
one. Your added comment block for "if (skip_if_exists)" only describes
the first kind of race. That begs the question: if the race is the
second case, how does the caller handle it?
Let me paste the caller here:
page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
- NO_INTERLEAVE_INDEX, &page_was_allocated);
+ NO_INTERLEAVE_INDEX, &page_was_allocated, true);
if (!page) {
ret = -ENOMEM;
goto fail;
}
The caller will return -ENOMEM if the second race condition (adding to
the swap cache) is triggered - that is, it returns -ENOMEM while the
page is still being added to the swap cache. That feels incorrect to
me. Am I missing anything?
A control flow modification to the racing path is very tricky; we need
more eyes for review.
> /*
> * We might race against __delete_from_swap_cache(), and
> * stumble across a swap_map entry whose SWAP_HAS_CACHE
> @@ -537,7 +549,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>
> mpol = get_vma_policy(vma, addr, 0, &ilx);
> page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> - &page_allocated);
> + &page_allocated, false);
> mpol_cond_put(mpol);
>
> if (page_allocated)
> @@ -654,7 +666,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> /* Ok, do the async read-ahead now */
> page = __read_swap_cache_async(
> swp_entry(swp_type(entry), offset),
> - gfp_mask, mpol, ilx, &page_allocated);
> + gfp_mask, mpol, ilx, &page_allocated, false);
> if (!page)
> continue;
> if (page_allocated) {
> @@ -672,7 +684,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> skip:
> /* The page was likely read above, so no need for plugging here */
> page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> - &page_allocated);
> + &page_allocated, false);
> if (unlikely(page_allocated))
> swap_readpage(page, false, NULL);
> return page;
> @@ -827,7 +839,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> pte_unmap(pte);
> pte = NULL;
> page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> - &page_allocated);
> + &page_allocated, false);
> if (!page)
> continue;
> if (page_allocated) {
> @@ -847,7 +859,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> skip:
> /* The page was likely read above, so no need for plugging here */
> page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
> - &page_allocated);
> + &page_allocated, false);
> if (unlikely(page_allocated))
> swap_readpage(page, false, NULL);
> return page;
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 4bdb2d83bb0d..f323e45cbdc7 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -35,6 +35,7 @@
> #include <linux/writeback.h>
> #include <linux/pagemap.h>
> #include <linux/workqueue.h>
> +#include <linux/list_lru.h>
>
> #include "swap.h"
> #include "internal.h"
> @@ -174,8 +175,8 @@ struct zswap_pool {
> struct work_struct shrink_work;
> struct hlist_node node;
> char tfm_name[CRYPTO_MAX_ALG_NAME];
> - struct list_head lru;
> - spinlock_t lru_lock;
> + struct list_lru list_lru;
> + struct mem_cgroup *next_shrink;
> };
>
> /*
> @@ -291,15 +292,46 @@ static void zswap_update_total_size(void)
> zswap_pool_total_size = total;
> }
>
> +/* should be called under RCU */
> +#ifdef CONFIG_MEMCG
> +static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
> +{
> + return entry->objcg ? obj_cgroup_memcg(entry->objcg) : NULL;
> +}
> +#else
> +static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
> +{
> + return NULL;
> +}
> +#endif
> +
> +static inline int entry_to_nid(struct zswap_entry *entry)
> +{
> + return page_to_nid(virt_to_page(entry));
> +}
> +
> +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
> +{
> + struct zswap_pool *pool;
> +
> + /* lock out zswap pools list modification */
> + spin_lock(&zswap_pools_lock);
> + list_for_each_entry(pool, &zswap_pools, list) {
> + if (pool->next_shrink == memcg)
> + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
This removes the memcg from pool->next_shrink if the first pool's
next_shrink matches.
Please help me understand why only the first next_shrink - what if the
memcg matches a follow-up next_shrink entry?
What am I missing here?
> + }
> + spin_unlock(&zswap_pools_lock);
> +}
> +
> /*********************************
> * zswap entry functions
> **********************************/
> static struct kmem_cache *zswap_entry_cache;
>
> -static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> +static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
> {
> struct zswap_entry *entry;
> - entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> + entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
> if (!entry)
> return NULL;
> entry->refcount = 1;
> @@ -312,6 +344,61 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> kmem_cache_free(zswap_entry_cache, entry);
> }
>
> +/*********************************
> +* lru functions
> +**********************************/
> +static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> +{
> + int nid = entry_to_nid(entry);
> + struct mem_cgroup *memcg;
> +
> + /*
> + * Note that it is safe to use rcu_read_lock() here, even in the face of
> + * concurrent memcg offlining. Thanks to the memcg->kmemcg_id indirection
> + * used in list_lru lookup, only two scenarios are possible:
> + *
> + * 1. list_lru_add() is called before memcg->kmemcg_id is updated. The
> + * new entry will be reparented to memcg's parent's list_lru.
> + * 2. list_lru_add() is called after memcg->kmemcg_id is updated. The
> + * new entry will be added directly to memcg's parent's list_lru.
> + *
> + * Similar reasoning holds for list_lru_del() and list_lru_putback().
> + */
> + rcu_read_lock();
> + memcg = mem_cgroup_from_entry(entry);
> + /* will always succeed */
> + list_lru_add(list_lru, &entry->lru, nid, memcg);
> + rcu_read_unlock();
> +}
> +
> +static void zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> +{
> + int nid = entry_to_nid(entry);
> + struct mem_cgroup *memcg;
> +
> + rcu_read_lock();
> + memcg = mem_cgroup_from_entry(entry);
> + /* will always succeed */
> + list_lru_del(list_lru, &entry->lru, nid, memcg);
> + rcu_read_unlock();
> +}
> +
> +static void zswap_lru_putback(struct list_lru *list_lru,
> + struct zswap_entry *entry)
> +{
> + int nid = entry_to_nid(entry);
> + spinlock_t *lock = &list_lru->node[nid].lock;
> + struct mem_cgroup *memcg;
> +
> + rcu_read_lock();
> + memcg = mem_cgroup_from_entry(entry);
> + spin_lock(lock);
> + /* we cannot use list_lru_add here, because it increments node's lru count */
> + list_lru_putback(list_lru, &entry->lru, nid, memcg);
> + spin_unlock(lock);
> + rcu_read_unlock();
> +}
> +
> /*********************************
> * rbtree functions
> **********************************/
> @@ -396,9 +483,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> if (!entry->length)
> atomic_dec(&zswap_same_filled_pages);
> else {
> - spin_lock(&entry->pool->lru_lock);
> - list_del(&entry->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + zswap_lru_del(&entry->pool->list_lru, entry);
> zpool_free(zswap_find_zpool(entry), entry->handle);
> zswap_pool_put(entry->pool);
> }
> @@ -632,21 +717,15 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
> zswap_entry_put(tree, entry);
> }
>
> -static int zswap_reclaim_entry(struct zswap_pool *pool)
> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> + spinlock_t *lock, void *arg)
> {
> - struct zswap_entry *entry;
> + struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> struct zswap_tree *tree;
> pgoff_t swpoffset;
> - int ret;
> + enum lru_status ret = LRU_REMOVED_RETRY;
> + int writeback_result;
I do see a pattern here, in this patch and others, of long and
descriptive local variable names, so I want to make a point.
According to the Linux coding style document:
https://www.kernel.org/doc/html/latest/process/coding-style.html
LOCAL variable names should be short, and to the point. If you have
some random integer loop counter, it should probably be called i.
Calling it loop_counter is non-productive, if there is no chance of it
being mis-understood. Similarly, tmp can be just about any type of
variable that is used to hold a temporary value.
I see you now have a different return value for the LRU status, so you
want to use ret for that; it is just that "writeback_result" is a bit
long for a local variable.
>
> - /* Get an entry off the LRU */
> - spin_lock(&pool->lru_lock);
> - if (list_empty(&pool->lru)) {
> - spin_unlock(&pool->lru_lock);
> - return -EINVAL;
> - }
> - entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> - list_del_init(&entry->lru);
> /*
> * Once the lru lock is dropped, the entry might get freed. The
> * swpoffset is copied to the stack, and entry isn't deref'd again
> @@ -654,28 +733,32 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> */
> swpoffset = swp_offset(entry->swpentry);
> tree = zswap_trees[swp_type(entry->swpentry)];
> - spin_unlock(&pool->lru_lock);
> + list_lru_isolate(l, item);
> + /*
> + * It's safe to drop the lock here because we return either
> + * LRU_REMOVED_RETRY or LRU_RETRY.
> + */
> + spin_unlock(lock);
>
> /* Check for invalidate() race */
> spin_lock(&tree->lock);
> - if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> - ret = -EAGAIN;
> + if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
> goto unlock;
If zswap_rb_search() encounters the invalidate race, the goto unlock
path will return LRU_REMOVED_RETRY.
Can you help me understand how list_lru_walk_one() interacts with the
invalidate race here?
I am very scared of changing the handling of race conditions; we need
more eyeballs to review it.
> - }
> +
> /* Hold a reference to prevent a free during writeback */
> zswap_entry_get(entry);
> spin_unlock(&tree->lock);
>
> - ret = zswap_writeback_entry(entry, tree);
> + writeback_result = zswap_writeback_entry(entry, tree);
>
> spin_lock(&tree->lock);
> - if (ret) {
> - /* Writeback failed, put entry back on LRU */
> - spin_lock(&pool->lru_lock);
> - list_move(&entry->lru, &pool->lru);
> - spin_unlock(&pool->lru_lock);
> + if (writeback_result) {
> + zswap_reject_reclaim_fail++;
> + zswap_lru_putback(&entry->pool->list_lru, entry);
Does this mean that when the writeback fails for whatever reason, the
order of the entry in the LRU changes? That seems a bit odd - the
application did not request this page, and this access is purely
internal zswap writeback behavior. It should not affect how likely the
application is to use this page.
> + ret = LRU_RETRY;
> goto put_unlock;
> }
> + zswap_written_back_pages++;
Why move the "zswap_written_back_pages" counter update here rather
than keep it in zswap_writeback_entry(), which is closer to the
original code?
It seems to me that the writeback result already determines which
counter to update, so this counter update and
"zswap_reject_reclaim_fail++;" could both move into
zswap_writeback_entry().
>
> /*
> * Writeback started successfully, the page now belongs to the
> @@ -689,27 +772,93 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> zswap_entry_put(tree, entry);
> unlock:
> spin_unlock(&tree->lock);
> - return ret ? -EAGAIN : 0;
> + spin_lock(lock);
> + return ret;
> +}
> +
> +static int shrink_memcg(struct mem_cgroup *memcg)
> +{
> + struct zswap_pool *pool;
> + int nid, shrunk = 0;
> +
> + /*
> + * Skip zombies because their LRUs are reparented and we would be
> + * reclaiming from the parent instead of the dead memcg.
> + */
> + if (memcg && !mem_cgroup_online(memcg))
> + return -ENOENT;
> +
> + pool = zswap_pool_current_get();
> + if (!pool)
> + return -EINVAL;
What about the other pools that are not the current one - do they get
shrunk somehow as well?
> +
> + for_each_node_state(nid, N_NORMAL_MEMORY) {
> + unsigned long nr_to_walk = 1;
> +
> + shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg,
> + &shrink_memcg_cb, NULL, &nr_to_walk);
> + }
> + zswap_pool_put(pool);
> + return shrunk ? 0 : -EAGAIN;
Wouldn't it be useful to know how many pages actually get written back
here, as an indicator of how useful shrinking is?
One idea is to use some kind of shrink control struct and pass it down
here; the shrink count could be a member of that struct.
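Purely as a sketch of the idea (none of these names exist in the
series, they are made up for illustration):

struct zswap_shrink_control {
	struct mem_cgroup *memcg;	/* memcg being reclaimed, or NULL */
	unsigned long nr_scanned;	/* LRU entries walked */
	unsigned long nr_written_back;	/* entries actually written back */
};

/*
 * shrink_memcg() would then fill in the counters instead of collapsing
 * everything into a bare 0 / -EAGAIN return value.
 */
static int shrink_memcg(struct zswap_shrink_control *sc);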
> }
>
> static void shrink_worker(struct work_struct *w)
> {
> struct zswap_pool *pool = container_of(w, typeof(*pool),
> shrink_work);
> + struct mem_cgroup *memcg;
> int ret, failures = 0;
>
> + /* global reclaim will select cgroup in a round-robin fashion. */
If we have the shrink control struct, it would be easier to move to
fancier control of how shrinking is applied to each memcg. We will
likely move away from round-robin once we move to an MGLRU world of
shrinking zswap entries. Maybe for a later patch.
> do {
> - ret = zswap_reclaim_entry(pool);
> - if (ret) {
> - zswap_reject_reclaim_fail++;
> - if (ret != -EAGAIN)
> + spin_lock(&zswap_pools_lock);
> + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> + memcg = pool->next_shrink;
I am still a bit fuzzy about the pool->next_shrink data structure.
Does it store the current memcg being iterated on? After each memcg is
walked, next_shrink points to the next memcg in the list. Once all the
memcgs are consumed, when does next_shrink point back to the beginning
of the list?
Chris
> +
> + /*
> + * We need to retry if we have gone through a full round trip, or if we
> + * got an offline memcg (or else we risk undoing the effect of the
> + * zswap memcg offlining cleanup callback). This is not catastrophic
> + * per se, but it will keep the now offlined memcg hostage for a while.
> + *
> + * Note that if we got an online memcg, we will keep the extra
> + * reference in case the original reference obtained by mem_cgroup_iter
> + * is dropped by the zswap memcg offlining callback, ensuring that the
> + * memcg is not killed when we are reclaiming.
> + */
> + if (!memcg) {
> + spin_unlock(&zswap_pools_lock);
> + if (++failures == MAX_RECLAIM_RETRIES)
> break;
> +
> + goto resched;
> + }
> +
> + if (!mem_cgroup_online(memcg)) {
> + /* drop the reference from mem_cgroup_iter() */
> + mem_cgroup_put(memcg);
> + pool->next_shrink = NULL;
> + spin_unlock(&zswap_pools_lock);
> +
> if (++failures == MAX_RECLAIM_RETRIES)
> break;
> +
> + goto resched;
> }
> + spin_unlock(&zswap_pools_lock);
> +
> + ret = shrink_memcg(memcg);
> + /* drop the extra reference */
> + mem_cgroup_put(memcg);
> +
> + if (ret == -EINVAL)
> + break;
> + if (ret && ++failures == MAX_RECLAIM_RETRIES)
> + break;
> +
> +resched:
> cond_resched();
> } while (!zswap_can_accept());
> - zswap_pool_put(pool);
> }
>
> static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> @@ -767,8 +916,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> */
> kref_init(&pool->kref);
> INIT_LIST_HEAD(&pool->list);
> - INIT_LIST_HEAD(&pool->lru);
> - spin_lock_init(&pool->lru_lock);
> + list_lru_init_memcg(&pool->list_lru, NULL);
> INIT_WORK(&pool->shrink_work, shrink_worker);
>
> zswap_pool_debug("created", pool);
> @@ -834,6 +982,13 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
>
> cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> free_percpu(pool->acomp_ctx);
> + list_lru_destroy(&pool->list_lru);
> +
> + spin_lock(&zswap_pools_lock);
> + mem_cgroup_put(pool->next_shrink);
> + pool->next_shrink = NULL;
> + spin_unlock(&zswap_pools_lock);
> +
> for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> zpool_destroy_pool(pool->zpools[i]);
> kfree(pool);
> @@ -1081,7 +1236,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> /* try to allocate swap cache page */
> mpol = get_task_policy(current);
> page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
> - NO_INTERLEAVE_INDEX, &page_was_allocated);
> + NO_INTERLEAVE_INDEX, &page_was_allocated, true);
> if (!page) {
> ret = -ENOMEM;
> goto fail;
> @@ -1152,7 +1307,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> /* start writeback */
> __swap_writepage(page, &wbc);
> put_page(page);
> - zswap_written_back_pages++;
>
> return ret;
>
> @@ -1209,6 +1363,7 @@ bool zswap_store(struct folio *folio)
> struct scatterlist input, output;
> struct crypto_acomp_ctx *acomp_ctx;
> struct obj_cgroup *objcg = NULL;
> + struct mem_cgroup *memcg = NULL;
> struct zswap_pool *pool;
> struct zpool *zpool;
> unsigned int dlen = PAGE_SIZE;
> @@ -1240,15 +1395,15 @@ bool zswap_store(struct folio *folio)
> zswap_invalidate_entry(tree, dupentry);
> }
> spin_unlock(&tree->lock);
> -
> - /*
> - * XXX: zswap reclaim does not work with cgroups yet. Without a
> - * cgroup-aware entry LRU, we will push out entries system-wide based on
> - * local cgroup limits.
> - */
> objcg = get_obj_cgroup_from_folio(folio);
> - if (objcg && !obj_cgroup_may_zswap(objcg))
> - goto reject;
> + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> + memcg = get_mem_cgroup_from_objcg(objcg);
> + if (shrink_memcg(memcg)) {
> + mem_cgroup_put(memcg);
> + goto reject;
> + }
> + mem_cgroup_put(memcg);
> + }
>
> /* reclaim space if needed */
> if (zswap_is_full()) {
> @@ -1265,7 +1420,7 @@ bool zswap_store(struct folio *folio)
> }
>
> /* allocate entry */
> - entry = zswap_entry_cache_alloc(GFP_KERNEL);
> + entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
> if (!entry) {
> zswap_reject_kmemcache_fail++;
> goto reject;
> @@ -1292,6 +1447,15 @@ bool zswap_store(struct folio *folio)
> if (!entry->pool)
> goto freepage;
>
> + if (objcg) {
> + memcg = get_mem_cgroup_from_objcg(objcg);
> + if (memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL)) {
> + mem_cgroup_put(memcg);
> + goto put_pool;
> + }
> + mem_cgroup_put(memcg);
> + }
> +
> /* compress */
> acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
>
> @@ -1370,9 +1534,8 @@ bool zswap_store(struct folio *folio)
> zswap_invalidate_entry(tree, dupentry);
> }
> if (entry->length) {
> - spin_lock(&entry->pool->lru_lock);
> - list_add(&entry->lru, &entry->pool->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + INIT_LIST_HEAD(&entry->lru);
> + zswap_lru_add(&entry->pool->list_lru, entry);
> }
> spin_unlock(&tree->lock);
>
> @@ -1385,6 +1548,7 @@ bool zswap_store(struct folio *folio)
>
> put_dstmem:
> mutex_unlock(acomp_ctx->mutex);
> +put_pool:
> zswap_pool_put(entry->pool);
> freepage:
> zswap_entry_cache_free(entry);
> @@ -1479,9 +1643,8 @@ bool zswap_load(struct folio *folio)
> zswap_invalidate_entry(tree, entry);
> folio_mark_dirty(folio);
> } else if (entry->length) {
> - spin_lock(&entry->pool->lru_lock);
> - list_move(&entry->lru, &entry->pool->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + zswap_lru_del(&entry->pool->list_lru, entry);
> + zswap_lru_add(&entry->pool->list_lru, entry);
> }
> zswap_entry_put(tree, entry);
> spin_unlock(&tree->lock);
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 2/6] memcontrol: implement mem_cgroup_tryget_online()
2023-12-05 1:39 ` Nhat Pham
@ 2023-12-06 0:16 ` Chris Li
2023-12-06 1:30 ` Nhat Pham
0 siblings, 1 reply; 48+ messages in thread
From: Chris Li @ 2023-12-06 0:16 UTC (permalink / raw)
To: Nhat Pham
Cc: akpm, hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
On Mon, Dec 4, 2023 at 5:39 PM Nhat Pham <nphamcs@gmail.com> wrote:
>
> > > memcg as a candidate for the global limit reclaim.
> >
> > Very minor nitpick. This patch can fold with the later patch that uses
> > it. That makes the review easier, no need to cross reference different
> > patches. It will also make it harder to introduce API that nobody
> > uses.
>
> I don't have a strong preference one way or the other :) Probably not
> worth the churn tho.
Squashing a patch is very easy; if you are refreshing a new series, it
is worthwhile to do. I noticed on the other thread that Yosry pointed
out you did not use the function "mem_cgroup_tryget_online" in patch 3 -
that is exactly the situation my suggestion is trying to prevent.
If you don't have a strong preference, it sounds like you should squash it.
Chris
>
> >
> > Chris
> >
> > >
> > > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > > ---
> > > include/linux/memcontrol.h | 10 ++++++++++
> > > 1 file changed, 10 insertions(+)
> > >
> > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > index 7bdcf3020d7a..2bd7d14ace78 100644
> > > --- a/include/linux/memcontrol.h
> > > +++ b/include/linux/memcontrol.h
> > > @@ -821,6 +821,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
> > > return !memcg || css_tryget(&memcg->css);
> > > }
> > >
> > > +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
> > > +{
> > > + return !memcg || css_tryget_online(&memcg->css);
> > > +}
> > > +
> > > static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> > > {
> > > if (memcg)
> > > @@ -1349,6 +1354,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
> > > return true;
> > > }
> > >
> > > +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
> > > +{
> > > + return true;
> > > +}
> > > +
> > > static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> > > {
> > > }
> > > --
> > > 2.34.1
> > >
>
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 2/6] memcontrol: implement mem_cgroup_tryget_online()
2023-12-06 0:16 ` Chris Li
@ 2023-12-06 1:30 ` Nhat Pham
0 siblings, 0 replies; 48+ messages in thread
From: Nhat Pham @ 2023-12-06 1:30 UTC (permalink / raw)
To: Chris Li
Cc: akpm, hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
On Tue, Dec 5, 2023 at 4:16 PM Chris Li <chrisl@kernel.org> wrote:
>
> On Mon, Dec 4, 2023 at 5:39 PM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > > > memcg as a candidate for the global limit reclaim.
> > >
> > > Very minor nitpick. This patch can fold with the later patch that uses
> > > it. That makes the review easier, no need to cross reference different
> > > patches. It will also make it harder to introduce API that nobody
> > > uses.
> >
> > I don't have a strong preference one way or the other :) Probably not
> > worth the churn tho.
>
> Squashing a patch is very easy; if you are refreshing a new series, it
> is worthwhile to do. I noticed on the other thread that Yosry pointed
> out you did not use the function "mem_cgroup_tryget_online" in patch 3 -
> that is exactly the situation my suggestion is trying to prevent.
I doubt squashing it would solve the issue - in fact, I think Yosry
noticed it precisely because he had to stare at a separate patch
adding the new function in the first place :P
In general though, I'm hesitant to extend this API silently in a patch
that uses it. Is it not better to have a separate patch announcing
this API extension? list_lru_add() was part of the original series
too - we separated that out into its own prep patch because it got
confusing. Another benefit is that there will be less work in the
future if we want to revert the per-cgroup zswap LRU patch while
there's already another mem_cgroup_tryget_online() user - we can
simply keep this patch.
But yeah, we'll see - I'll think about it if I actually have to send
v9. If not, let's not add unnecessary churn.
>
> If you don't have a strong preference, it sounds like you should squash it.
>
> Chris
>
> >
> > >
> > > Chris
> > >
> > > >
> > > > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > > > ---
> > > > include/linux/memcontrol.h | 10 ++++++++++
> > > > 1 file changed, 10 insertions(+)
> > > >
> > > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > > index 7bdcf3020d7a..2bd7d14ace78 100644
> > > > --- a/include/linux/memcontrol.h
> > > > +++ b/include/linux/memcontrol.h
> > > > @@ -821,6 +821,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
> > > > return !memcg || css_tryget(&memcg->css);
> > > > }
> > > >
> > > > +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
> > > > +{
> > > > + return !memcg || css_tryget_online(&memcg->css);
> > > > +}
> > > > +
> > > > static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> > > > {
> > > > if (memcg)
> > > > @@ -1349,6 +1354,11 @@ static inline bool mem_cgroup_tryget(struct mem_cgroup *memcg)
> > > > return true;
> > > > }
> > > >
> > > > +static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
> > > > +{
> > > > + return true;
> > > > +}
> > > > +
> > > > static inline void mem_cgroup_put(struct mem_cgroup *memcg)
> > > > {
> > > > }
> > > > --
> > > > 2.34.1
> > > >
> >
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 3/6] zswap: make shrinking memcg-aware
2023-12-06 0:10 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware Chris Li
@ 2023-12-06 1:53 ` Nhat Pham
0 siblings, 0 replies; 48+ messages in thread
From: Nhat Pham @ 2023-12-06 1:53 UTC (permalink / raw)
To: Chris Li
Cc: akpm, hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
On Tue, Dec 5, 2023 at 4:10 PM Chris Li <chrisl@kernel.org> wrote:
>
> Hi Nhat,
>
> Still working my way up your patch series.
>
> On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> >
> > Currently, we only have a single global LRU for zswap. This makes it
> > impossible to perform workload-specific shrinking - a memcg cannot
> > determine which pages in the pool it owns, and often ends up writing
> > pages from other memcgs. This issue has been previously observed in
> > practice and mitigated by simply disabling memcg-initiated shrinking:
> >
> > https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
> >
> > This patch fully resolves the issue by replacing the global zswap LRU
> > with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
> >
> > a) When a store attempt hits a memcg limit, it now triggers a
> > synchronous reclaim attempt that, if successful, allows the new
> > hotter page to be accepted by zswap.
> > b) If the store attempt instead hits the global zswap limit, it will
> > trigger an asynchronous reclaim attempt, in which a memcg is
> > selected for reclaim in a round-robin-like fashion.
> >
> > Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> > Co-developed-by: Nhat Pham <nphamcs@gmail.com>
> > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > ---
> > include/linux/memcontrol.h | 5 +
> > include/linux/zswap.h | 2 +
> > mm/memcontrol.c | 2 +
> > mm/swap.h | 3 +-
> > mm/swap_state.c | 24 +++-
> > mm/zswap.c | 269 +++++++++++++++++++++++++++++--------
> > 6 files changed, 245 insertions(+), 60 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 2bd7d14ace78..a308c8eacf20 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -1192,6 +1192,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> > return NULL;
> > }
> >
> > +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> > +{
> > + return NULL;
> > +}
> > +
> > static inline bool folio_memcg_kmem(struct folio *folio)
> > {
> > return false;
> > diff --git a/include/linux/zswap.h b/include/linux/zswap.h
> > index 2a60ce39cfde..e571e393669b 100644
> > --- a/include/linux/zswap.h
> > +++ b/include/linux/zswap.h
> > @@ -15,6 +15,7 @@ bool zswap_load(struct folio *folio);
> > void zswap_invalidate(int type, pgoff_t offset);
> > void zswap_swapon(int type);
> > void zswap_swapoff(int type);
> > +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
> >
> > #else
> >
> > @@ -31,6 +32,7 @@ static inline bool zswap_load(struct folio *folio)
> > static inline void zswap_invalidate(int type, pgoff_t offset) {}
> > static inline void zswap_swapon(int type) {}
> > static inline void zswap_swapoff(int type) {}
> > +static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
> >
> > #endif
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 470821d1ba1a..792ca21c5815 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -5614,6 +5614,8 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
> > page_counter_set_min(&memcg->memory, 0);
> > page_counter_set_low(&memcg->memory, 0);
> >
> > + zswap_memcg_offline_cleanup(memcg);
> > +
> > memcg_offline_kmem(memcg);
> > reparent_shrinker_deferred(memcg);
> > wb_memcg_offline(memcg);
> > diff --git a/mm/swap.h b/mm/swap.h
> > index 73c332ee4d91..c0dc73e10e91 100644
> > --- a/mm/swap.h
> > +++ b/mm/swap.h
> > @@ -51,7 +51,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > struct swap_iocb **plug);
> > struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > struct mempolicy *mpol, pgoff_t ilx,
> > - bool *new_page_allocated);
> > + bool *new_page_allocated,
> > + bool skip_if_exists);
> > struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> > struct mempolicy *mpol, pgoff_t ilx);
> > struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index 85d9e5806a6a..6c84236382f3 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -412,7 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
> >
> > struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > struct mempolicy *mpol, pgoff_t ilx,
> > - bool *new_page_allocated)
> > + bool *new_page_allocated,
> > + bool skip_if_exists)
>
> I think this skip_if_exists is problematic here. You might need to
> redesign this.
> First of all, as an argument name, skip_if_exists does not make its
> meaning clear to the caller. When I saw this, I was wondering what the
> function returns when this condition is triggered. Unlike
> "*new_page_allocated", which describes a state after the function
> returns, "skip_if_exists" refers to an internal execution flow; it does
> not say what value the function should return if that condition is
> triggered. It forces the caller to look into the internals of
> __read_swap_cache_async() to reason about "should I pass true or false
> when I call this function?". I wish it had a better-abstracted name, or
> maybe a function argument documentation block to explain the usage of
> this argument.
I am not the original author of this naming, but personally I don't
see anything egregious about it :) I was able to understand the
intention from the argument's name, and any finer details would
require studying the code and the function anyway - and Domenico has
provided documentation where it matters.
>
>
> > {
> > struct swap_info_struct *si;
> > struct folio *folio;
> > @@ -470,6 +471,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > if (err != -EEXIST)
> > goto fail_put_swap;
> >
> > + /*
> > + * Protect against a recursive call to __read_swap_cache_async()
> > + * on the same entry waiting forever here because SWAP_HAS_CACHE
> > + * is set but the folio is not the swap cache yet. This can
> > + * happen today if mem_cgroup_swapin_charge_folio() below
> > + * triggers reclaim through zswap, which may call
> > + * __read_swap_cache_async() in the writeback path.
> > + */
> > + if (skip_if_exists)
> > + goto fail_put_swap;
> > +
>
> This is very tricky for the caller that did set "skip_if_exists" to
> true, because the return value is still subject to a race condition.
> The following comment describes two race situations, which get cut off
> by the patch context. Let me paste it again here:
>
> + /*
> * We might race against __delete_from_swap_cache(), and
> * stumble across a swap_map entry whose SWAP_HAS_CACHE
> * has not yet been cleared. Or race against another
> * __read_swap_cache_async(), which has set SWAP_HAS_CACHE
> * in swap_map, but not yet added its page to swap cache.
> */
> schedule_timeout_uninterruptible(1);
> }
>
> Basically, there are two kinds of races here: the first is a race
> against deleting the swap cache entry, the second a race against adding
> one. Your added comment block for "if (skip_if_exists)" only describes
> the first kind of race. That begs the question: if the race is the
> second case, how does the caller handle it?
>
> Let me paste the caller here:
>
> page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
> - NO_INTERLEAVE_INDEX, &page_was_allocated);
> + NO_INTERLEAVE_INDEX, &page_was_allocated, true);
> if (!page) {
> ret = -ENOMEM;
> goto fail;
> }
>
> The caller will return -ENOMEM if the second race condition (adding to
> the swap cache) is triggered - that is, it returns -ENOMEM while the
> page is still being added to the swap cache. That feels incorrect to
> me. Am I missing anything?
>
> A control flow modification to the racing path is very tricky; we need
> more eyes for review.
I think the second case will also be fine, right? IIUC, in that case
swapcache_prepare(entry) returns -EEXIST as well, so it is fine for the
zswap reclaimer to skip that entry (well, I guess it's always fine to
skip, but still).
The comment above was about the specific instance where we *have* to
skip, or else we reach an infinite loop - which was observed in
practice :)
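Concretely, both flavors of the race surface at the same point.
Roughly (heavily simplified from the loop in __read_swap_cache_async(),
with the allocation and error handling trimmed):

	for (;;) {
		/*
		 * -EEXIST from swapcache_prepare() covers both races:
		 * someone is still deleting the old swap cache page
		 * (SWAP_HAS_CACHE not yet cleared), or someone else has
		 * claimed the slot but not added their page yet.
		 */
		err = swapcache_prepare(entry);
		if (!err)
			break;
		if (err != -EEXIST)
			goto fail_put_swap;

		/* zswap writeback: bail out rather than wait on ourselves */
		if (skip_if_exists)
			goto fail_put_swap;

		schedule_timeout_uninterruptible(1);
	}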
>
> > /*
> > * We might race against __delete_from_swap_cache(), and
> > * stumble across a swap_map entry whose SWAP_HAS_CACHE
> > @@ -537,7 +549,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >
> > mpol = get_vma_policy(vma, addr, 0, &ilx);
> > page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> > - &page_allocated);
> > + &page_allocated, false);
> > mpol_cond_put(mpol);
> >
> > if (page_allocated)
> > @@ -654,7 +666,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > /* Ok, do the async read-ahead now */
> > page = __read_swap_cache_async(
> > swp_entry(swp_type(entry), offset),
> > - gfp_mask, mpol, ilx, &page_allocated);
> > + gfp_mask, mpol, ilx, &page_allocated, false);
> > if (!page)
> > continue;
> > if (page_allocated) {
> > @@ -672,7 +684,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > skip:
> > /* The page was likely read above, so no need for plugging here */
> > page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> > - &page_allocated);
> > + &page_allocated, false);
> > if (unlikely(page_allocated))
> > swap_readpage(page, false, NULL);
> > return page;
> > @@ -827,7 +839,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> > pte_unmap(pte);
> > pte = NULL;
> > page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> > - &page_allocated);
> > + &page_allocated, false);
> > if (!page)
> > continue;
> > if (page_allocated) {
> > @@ -847,7 +859,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> > skip:
> > /* The page was likely read above, so no need for plugging here */
> > page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
> > - &page_allocated);
> > + &page_allocated, false);
> > if (unlikely(page_allocated))
> > swap_readpage(page, false, NULL);
> > return page;
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index 4bdb2d83bb0d..f323e45cbdc7 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -35,6 +35,7 @@
> > #include <linux/writeback.h>
> > #include <linux/pagemap.h>
> > #include <linux/workqueue.h>
> > +#include <linux/list_lru.h>
> >
> > #include "swap.h"
> > #include "internal.h"
> > @@ -174,8 +175,8 @@ struct zswap_pool {
> > struct work_struct shrink_work;
> > struct hlist_node node;
> > char tfm_name[CRYPTO_MAX_ALG_NAME];
> > - struct list_head lru;
> > - spinlock_t lru_lock;
> > + struct list_lru list_lru;
> > + struct mem_cgroup *next_shrink;
> > };
> >
> > /*
> > @@ -291,15 +292,46 @@ static void zswap_update_total_size(void)
> > zswap_pool_total_size = total;
> > }
> >
> > +/* should be called under RCU */
> > +#ifdef CONFIG_MEMCG
> > +static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
> > +{
> > + return entry->objcg ? obj_cgroup_memcg(entry->objcg) : NULL;
> > +}
> > +#else
> > +static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
> > +{
> > + return NULL;
> > +}
> > +#endif
> > +
> > +static inline int entry_to_nid(struct zswap_entry *entry)
> > +{
> > + return page_to_nid(virt_to_page(entry));
> > +}
> > +
> > +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
> > +{
> > + struct zswap_pool *pool;
> > +
> > + /* lock out zswap pools list modification */
> > + spin_lock(&zswap_pools_lock);
> > + list_for_each_entry(pool, &zswap_pools, list) {
> > + if (pool->next_shrink == memcg)
> > + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
>
> This removes the memcg from pool->next_shrink if the first pool's
> next_shrink matches.
> Please help me understand why only the first next_shrink - what if the
> memcg matches a follow-up next_shrink entry?
> What am I missing here?
Each pool only stores a single next_shrink pointer, and the loop above
walks every pool on the zswap_pools list - so there is no follow-up
entry to miss.
>
> > + }
> > + spin_unlock(&zswap_pools_lock);
> > +}
> > +
> > /*********************************
> > * zswap entry functions
> > **********************************/
> > static struct kmem_cache *zswap_entry_cache;
> >
> > -static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> > +static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
> > {
> > struct zswap_entry *entry;
> > - entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> > + entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
> > if (!entry)
> > return NULL;
> > entry->refcount = 1;
> > @@ -312,6 +344,61 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> > kmem_cache_free(zswap_entry_cache, entry);
> > }
> >
> > +/*********************************
> > +* lru functions
> > +**********************************/
> > +static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > +{
> > + int nid = entry_to_nid(entry);
> > + struct mem_cgroup *memcg;
> > +
> > + /*
> > + * Note that it is safe to use rcu_read_lock() here, even in the face of
> > + * concurrent memcg offlining. Thanks to the memcg->kmemcg_id indirection
> > + * used in list_lru lookup, only two scenarios are possible:
> > + *
> > + * 1. list_lru_add() is called before memcg->kmemcg_id is updated. The
> > + * new entry will be reparented to memcg's parent's list_lru.
> > + * 2. list_lru_add() is called after memcg->kmemcg_id is updated. The
> > + * new entry will be added directly to memcg's parent's list_lru.
> > + *
> > + * Similar reasoning holds for list_lru_del() and list_lru_putback().
> > + */
> > + rcu_read_lock();
> > + memcg = mem_cgroup_from_entry(entry);
> > + /* will always succeed */
> > + list_lru_add(list_lru, &entry->lru, nid, memcg);
> > + rcu_read_unlock();
> > +}
> > +
> > +static void zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> > +{
> > + int nid = entry_to_nid(entry);
> > + struct mem_cgroup *memcg;
> > +
> > + rcu_read_lock();
> > + memcg = mem_cgroup_from_entry(entry);
> > + /* will always succeed */
> > + list_lru_del(list_lru, &entry->lru, nid, memcg);
> > + rcu_read_unlock();
> > +}
> > +
> > +static void zswap_lru_putback(struct list_lru *list_lru,
> > + struct zswap_entry *entry)
> > +{
> > + int nid = entry_to_nid(entry);
> > + spinlock_t *lock = &list_lru->node[nid].lock;
> > + struct mem_cgroup *memcg;
> > +
> > + rcu_read_lock();
> > + memcg = mem_cgroup_from_entry(entry);
> > + spin_lock(lock);
> > + /* we cannot use list_lru_add here, because it increments node's lru count */
> > + list_lru_putback(list_lru, &entry->lru, nid, memcg);
> > + spin_unlock(lock);
> > + rcu_read_unlock();
> > +}
> > +
> > /*********************************
> > * rbtree functions
> > **********************************/
> > @@ -396,9 +483,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> > if (!entry->length)
> > atomic_dec(&zswap_same_filled_pages);
> > else {
> > - spin_lock(&entry->pool->lru_lock);
> > - list_del(&entry->lru);
> > - spin_unlock(&entry->pool->lru_lock);
> > + zswap_lru_del(&entry->pool->list_lru, entry);
> > zpool_free(zswap_find_zpool(entry), entry->handle);
> > zswap_pool_put(entry->pool);
> > }
> > @@ -632,21 +717,15 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
> > zswap_entry_put(tree, entry);
> > }
> >
> > -static int zswap_reclaim_entry(struct zswap_pool *pool)
> > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > + spinlock_t *lock, void *arg)
> > {
> > - struct zswap_entry *entry;
> > + struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > struct zswap_tree *tree;
> > pgoff_t swpoffset;
> > - int ret;
> > + enum lru_status ret = LRU_REMOVED_RETRY;
> > + int writeback_result;
>
> I do see a pattern here where you want to use long and descriptive
> local variable names. It is in other patches as well. Therefore, I
> want to make a point here.
> According to the Linux coding style document.
> https://www.kernel.org/doc/html/latest/process/coding-style.html
>
> LOCAL variable names should be short, and to the point. If you have
> some random integer loop counter, it should probably be called i.
> Calling it loop_counter is non-productive, if there is no chance of it
> being mis-understood. Similarly, tmp can be just about any type of
> variable that is used to hold a temporary value.
>
> I see you have a different return value for the LRU status now, so you
> want to use ret for that. It's just that "writeback_result" is a bit
> long for a local variable.
Yeah, the reason it's named writeback_result is to differentiate it
from ret, which now holds the lru_status. I can rename it to
writeback_ret or wb_ret if you prefer, but I don't think this matters
enough to warrant the change :)
>
> >
> > - /* Get an entry off the LRU */
> > - spin_lock(&pool->lru_lock);
> > - if (list_empty(&pool->lru)) {
> > - spin_unlock(&pool->lru_lock);
> > - return -EINVAL;
> > - }
> > - entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> > - list_del_init(&entry->lru);
> > /*
> > * Once the lru lock is dropped, the entry might get freed. The
> > * swpoffset is copied to the stack, and entry isn't deref'd again
> > @@ -654,28 +733,32 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > */
> > swpoffset = swp_offset(entry->swpentry);
> > tree = zswap_trees[swp_type(entry->swpentry)];
> > - spin_unlock(&pool->lru_lock);
> > + list_lru_isolate(l, item);
> > + /*
> > + * It's safe to drop the lock here because we return either
> > + * LRU_REMOVED_RETRY or LRU_RETRY.
> > + */
> > + spin_unlock(lock);
> >
> > /* Check for invalidate() race */
> > spin_lock(&tree->lock);
> > - if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> > - ret = -EAGAIN;
> > + if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
> > goto unlock;
> If zswap_rb_search() encounters the invalidate race, the goto unlock
> will return LRU_REMOVED_RETRY.
> Can you help me understand how list_lru_walk_one() interacts with
> the invalidate race here?
> I am very scared of changing the handling of race conditions; we need
> more eyeballs to review it.
>
> > - }
> > +
> > /* Hold a reference to prevent a free during writeback */
> > zswap_entry_get(entry);
> > spin_unlock(&tree->lock);
> >
> > - ret = zswap_writeback_entry(entry, tree);
> > + writeback_result = zswap_writeback_entry(entry, tree);
> >
> > spin_lock(&tree->lock);
> > - if (ret) {
> > - /* Writeback failed, put entry back on LRU */
> > - spin_lock(&pool->lru_lock);
> > - list_move(&entry->lru, &pool->lru);
> > - spin_unlock(&pool->lru_lock);
> > + if (writeback_result) {
> > + zswap_reject_reclaim_fail++;
> > + zswap_lru_putback(&entry->pool->list_lru, entry);
>
> Does this mean that, if the writeback failed for whatever reason, the
> order of the entry in the LRU changes? Seems a bit odd - the application
> did not request this page, and this access is purely internal zswap
> writeback behavior. It should not impact how likely applications are
> going to use this page.
The reason the zswap LRU is rotated here is so that we can retry on a
different entry. IIRC, this is the original behavior (even before the
zswap LRU refactoring - zsmalloc did this internally).
Maybe this needs to change, but let's do that in a separate patch, since
it will require its own evaluation and justification.
>
> > + ret = LRU_RETRY;
> > goto put_unlock;
> > }
> > + zswap_written_back_pages++;
>
> Why do you move the "zswap_written_back_pages" counter here rather
> than keep it in zswap_writeback_entry(), which is closer to the
> original code?
> It seems to me that the writeback result already determines which
> counter to update, so this counter update and
> "zswap_reject_reclaim_fail++;" should move into
> zswap_writeback_entry().
I think originally, when we rewrote the zswap writeback logic, we
accidentally incremented this counter in two different places (in an
internal-only version), so we removed one of them. This one was kept
because we can now have a lot more concurrent shrinking actions
(thanks to the zswap shrinker), so serializing the counter update
(it is now bumped under the tree lock) makes it look less off FWIW.
But I don't have a strong preference one way or another :) If people
dislike it that much I can move it back.
>
> >
> > /*
> > * Writeback started successfully, the page now belongs to the
> > @@ -689,27 +772,93 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > zswap_entry_put(tree, entry);
> > unlock:
> > spin_unlock(&tree->lock);
> > - return ret ? -EAGAIN : 0;
> > + spin_lock(lock);
> > + return ret;
> > +}
> > +
> > +static int shrink_memcg(struct mem_cgroup *memcg)
> > +{
> > + struct zswap_pool *pool;
> > + int nid, shrunk = 0;
> > +
> > + /*
> > + * Skip zombies because their LRUs are reparented and we would be
> > + * reclaiming from the parent instead of the dead memcg.
> > + */
> > + if (memcg && !mem_cgroup_online(memcg))
> > + return -ENOENT;
> > +
> > + pool = zswap_pool_current_get();
> > + if (!pool)
> > + return -EINVAL;
>
> What about the pools other than the current one - do they shrink
> somehow as well?
With the new zswap shrinker, yes ;) The shrinker struct is
per-zswap-pool.
But in general, I don't think it's a common setup to have multiple
zswap pools - each is determined by the (compressor, allocator) pair,
and I can't think of a reason why you would use different combinations
on the same machine. Most users will just turn on zswap and expect it
to "just work", and the few parties that care will pick an allocator
and compressor when zswap is enabled, and then forget about it (heck,
the selection might even be hardcoded via Kconfig).
We can figure out a nice scheme to perform multipool reclaim later -
round-robin again? multigen pool reclaim? But for now the benefit does
not seem to justify the extra code, engineering effort, and
maintainability burden IMHO. For now I'd say this is sufficient, and
more incentive to experiment with the shrinker :P
>
> > +
> > + for_each_node_state(nid, N_NORMAL_MEMORY) {
> > + unsigned long nr_to_walk = 1;
> > +
> > + shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg,
> > + &shrink_memcg_cb, NULL, &nr_to_walk);
> > + }
> > + zswap_pool_put(pool);
> > + return shrunk ? 0 : -EAGAIN;
>
> Wouldn't it be useful to know how many pages actually get written back
> here, as an indicator of how useful it is to shrink this pool?
>
> One idea is that we could use some kind of shrink control struct and
> pass it down here. The shrink count could be a member of the shrink
> control struct.
Perhaps it is, for non-zswap-shrinker users (if you use the zswap
shrinker then you're unlikely to hit the limits anyway). I would leave
that until we actually have a use case for it though - storing an extra
piece of information that is not used in any code path and not exported
to userspace is just a waste of memory.
That shrink-usefulness idea does sound interesting though - one future
approach could be to throttle writeback to a particular memcg/pool if
past attempts have not been very successful.
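If we ever go down that route, I imagine something along these lines
could carry both the target and the result. This is a completely
hypothetical sketch - none of these names exist in the series:

struct zswap_shrink_control {
	struct mem_cgroup *memcg;	/* memcg to reclaim from (NULL for global) */
	int nid;			/* NUMA node whose LRU to walk */
	unsigned long nr_to_walk;	/* walk budget for this invocation */
	unsigned long nr_written_back;	/* pages actually written back */
};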
> > }
> >
> > static void shrink_worker(struct work_struct *w)
> > {
> > struct zswap_pool *pool = container_of(w, typeof(*pool),
> > shrink_work);
> > + struct mem_cgroup *memcg;
> > int ret, failures = 0;
> >
> > + /* global reclaim will select cgroup in a round-robin fashion. */
>
> If we have the shrink control struct, it can be easier to move to more
> fancy control of how the shrink performs to each memcg. We will likely
> move away from the round robin once we move to the MGLRU world of
> shrinking zswap entries. Maybe for a later patch.
Agree.
>
> > do {
> > - ret = zswap_reclaim_entry(pool);
> > - if (ret) {
> > - zswap_reject_reclaim_fail++;
> > - if (ret != -EAGAIN)
> > + spin_lock(&zswap_pools_lock);
> > + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> > + memcg = pool->next_shrink;
>
> I am still a bit fuzzy about the data structure behind
> pool->next_shrink. Does this store the current memcg that is being
> iterated on? After each walk of a memcg, next_shrink is pointed to the
> next memcg in the list. After all the memcgs are consumed, when does
> next_shrink point back to the beginning of the list?
>
> Chris
next_shrink stores the next memcg to be considered for the global limit
zswap reclaim. Initially, it is set to NULL. It is updated primarily
inside the global limit reclaim loop, but also in zswap's new memcg
offlining cleanup callback, and at pool destruction time.
We use mem_cgroup_iter() to update next_shrink - which (from my
understanding) performs a tree traversal. It returns NULL once a full
traversal completes, at which point we restart from the top.
As you mentioned above - this is not ideal. But without MGLRU, or some
other form of priority scheme for the memcgs, reclaiming memcgs in a
round-robin fashion is the best we can do for now IMHO.
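For reference, here is a minimal sketch of that advancing step. The
helper name is made up for illustration - in the patch the call sits
directly in shrink_worker(), under zswap_pools_lock:

/*
 * mem_cgroup_iter() does a pre-order walk of the cgroup tree, taking a
 * reference on the memcg it returns and dropping the one on @prev. It
 * returns NULL once per full pass; feeding that NULL back in as @prev
 * restarts the walk from the root.
 */
static struct mem_cgroup *zswap_advance_next_shrink(struct zswap_pool *pool)
{
	pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
	return pool->next_shrink;
}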
>
> > +
> > + /*
> > + * We need to retry if we have gone through a full round trip, or if we
> > + * got an offline memcg (or else we risk undoing the effect of the
> > + * zswap memcg offlining cleanup callback). This is not catastrophic
> > + * per se, but it will keep the now offlined memcg hostage for a while.
> > + *
> > + * Note that if we got an online memcg, we will keep the extra
> > + * reference in case the original reference obtained by mem_cgroup_iter
> > + * is dropped by the zswap memcg offlining callback, ensuring that the
> > + * memcg is not killed when we are reclaiming.
> > + */
> > + if (!memcg) {
> > + spin_unlock(&zswap_pools_lock);
> > + if (++failures == MAX_RECLAIM_RETRIES)
> > break;
> > +
> > + goto resched;
> > + }
> > +
> > + if (!mem_cgroup_online(memcg)) {
> > + /* drop the reference from mem_cgroup_iter() */
> > + mem_cgroup_put(memcg);
> > + pool->next_shrink = NULL;
> > + spin_unlock(&zswap_pools_lock);
> > +
> > if (++failures == MAX_RECLAIM_RETRIES)
> > break;
> > +
> > + goto resched;
> > }
> > + spin_unlock(&zswap_pools_lock);
> > +
> > + ret = shrink_memcg(memcg);
> > + /* drop the extra reference */
> > + mem_cgroup_put(memcg);
> > +
> > + if (ret == -EINVAL)
> > + break;
> > + if (ret && ++failures == MAX_RECLAIM_RETRIES)
> > + break;
> > +
> > +resched:
> > cond_resched();
> > } while (!zswap_can_accept());
> > - zswap_pool_put(pool);
> > }
> >
> > static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > @@ -767,8 +916,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > */
> > kref_init(&pool->kref);
> > INIT_LIST_HEAD(&pool->list);
> > - INIT_LIST_HEAD(&pool->lru);
> > - spin_lock_init(&pool->lru_lock);
> > + list_lru_init_memcg(&pool->list_lru, NULL);
> > INIT_WORK(&pool->shrink_work, shrink_worker);
> >
> > zswap_pool_debug("created", pool);
> > @@ -834,6 +982,13 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> >
> > cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> > free_percpu(pool->acomp_ctx);
> > + list_lru_destroy(&pool->list_lru);
> > +
> > + spin_lock(&zswap_pools_lock);
> > + mem_cgroup_put(pool->next_shrink);
> > + pool->next_shrink = NULL;
> > + spin_unlock(&zswap_pools_lock);
> > +
> > for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > zpool_destroy_pool(pool->zpools[i]);
> > kfree(pool);
> > @@ -1081,7 +1236,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> > /* try to allocate swap cache page */
> > mpol = get_task_policy(current);
> > page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
> > - NO_INTERLEAVE_INDEX, &page_was_allocated);
> > + NO_INTERLEAVE_INDEX, &page_was_allocated, true);
> > if (!page) {
> > ret = -ENOMEM;
> > goto fail;
> > @@ -1152,7 +1307,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> > /* start writeback */
> > __swap_writepage(page, &wbc);
> > put_page(page);
> > - zswap_written_back_pages++;
> >
> > return ret;
> >
> > @@ -1209,6 +1363,7 @@ bool zswap_store(struct folio *folio)
> > struct scatterlist input, output;
> > struct crypto_acomp_ctx *acomp_ctx;
> > struct obj_cgroup *objcg = NULL;
> > + struct mem_cgroup *memcg = NULL;
> > struct zswap_pool *pool;
> > struct zpool *zpool;
> > unsigned int dlen = PAGE_SIZE;
> > @@ -1240,15 +1395,15 @@ bool zswap_store(struct folio *folio)
> > zswap_invalidate_entry(tree, dupentry);
> > }
> > spin_unlock(&tree->lock);
> > -
> > - /*
> > - * XXX: zswap reclaim does not work with cgroups yet. Without a
> > - * cgroup-aware entry LRU, we will push out entries system-wide based on
> > - * local cgroup limits.
> > - */
> > objcg = get_obj_cgroup_from_folio(folio);
> > - if (objcg && !obj_cgroup_may_zswap(objcg))
> > - goto reject;
> > + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > + memcg = get_mem_cgroup_from_objcg(objcg);
> > + if (shrink_memcg(memcg)) {
> > + mem_cgroup_put(memcg);
> > + goto reject;
> > + }
> > + mem_cgroup_put(memcg);
> > + }
> >
> > /* reclaim space if needed */
> > if (zswap_is_full()) {
> > @@ -1265,7 +1420,7 @@ bool zswap_store(struct folio *folio)
> > }
> >
> > /* allocate entry */
> > - entry = zswap_entry_cache_alloc(GFP_KERNEL);
> > + entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
> > if (!entry) {
> > zswap_reject_kmemcache_fail++;
> > goto reject;
> > @@ -1292,6 +1447,15 @@ bool zswap_store(struct folio *folio)
> > if (!entry->pool)
> > goto freepage;
> >
> > + if (objcg) {
> > + memcg = get_mem_cgroup_from_objcg(objcg);
> > + if (memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL)) {
> > + mem_cgroup_put(memcg);
> > + goto put_pool;
> > + }
> > + mem_cgroup_put(memcg);
> > + }
> > +
> > /* compress */
> > acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> >
> > @@ -1370,9 +1534,8 @@ bool zswap_store(struct folio *folio)
> > zswap_invalidate_entry(tree, dupentry);
> > }
> > if (entry->length) {
> > - spin_lock(&entry->pool->lru_lock);
> > - list_add(&entry->lru, &entry->pool->lru);
> > - spin_unlock(&entry->pool->lru_lock);
> > + INIT_LIST_HEAD(&entry->lru);
> > + zswap_lru_add(&entry->pool->list_lru, entry);
> > }
> > spin_unlock(&tree->lock);
> >
> > @@ -1385,6 +1548,7 @@ bool zswap_store(struct folio *folio)
> >
> > put_dstmem:
> > mutex_unlock(acomp_ctx->mutex);
> > +put_pool:
> > zswap_pool_put(entry->pool);
> > freepage:
> > zswap_entry_cache_free(entry);
> > @@ -1479,9 +1643,8 @@ bool zswap_load(struct folio *folio)
> > zswap_invalidate_entry(tree, entry);
> > folio_mark_dirty(folio);
> > } else if (entry->length) {
> > - spin_lock(&entry->pool->lru_lock);
> > - list_move(&entry->lru, &entry->pool->lru);
> > - spin_unlock(&entry->pool->lru_lock);
> > + zswap_lru_del(&entry->pool->list_lru, entry);
> > + zswap_lru_add(&entry->pool->list_lru, entry);
> > }
> > zswap_entry_put(tree, entry);
> > spin_unlock(&tree->lock);
> > --
> > 2.34.1
> >
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 3/6] zswap: make shrinking memcg-aware
2023-11-30 19:40 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware Nhat Pham
` (2 preceding siblings ...)
2023-12-06 0:10 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware Chris Li
@ 2023-12-06 3:03 ` Nhat Pham
2023-12-06 3:06 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware (fix 2) Nhat Pham
4 siblings, 0 replies; 48+ messages in thread
From: Nhat Pham @ 2023-12-06 3:03 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham <nphamcs@gmail.com> wrote:
>
> From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
>
> Currently, we only have a single global LRU for zswap. This makes it
> impossible to perform workload-specific shrinking - a memcg cannot
> determine which pages in the pool it owns, and often ends up writing
> pages from other memcgs. This issue has been previously observed in
> practice and mitigated by simply disabling memcg-initiated shrinking:
>
> https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
>
> This patch fully resolves the issue by replacing the global zswap LRU
> with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
>
> a) When a store attempt hits a memcg limit, it now triggers a
> synchronous reclaim attempt that, if successful, allows the new
> hotter page to be accepted by zswap.
> b) If the store attempt instead hits the global zswap limit, it will
> trigger an asynchronous reclaim attempt, in which a memcg is
> selected for reclaim in a round-robin-like fashion.
>
> Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> Co-developed-by: Nhat Pham <nphamcs@gmail.com>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---
> include/linux/memcontrol.h | 5 +
> include/linux/zswap.h | 2 +
> mm/memcontrol.c | 2 +
> mm/swap.h | 3 +-
> mm/swap_state.c | 24 +++-
> mm/zswap.c | 269 +++++++++++++++++++++++++++++--------
> 6 files changed, 245 insertions(+), 60 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 2bd7d14ace78..a308c8eacf20 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1192,6 +1192,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> return NULL;
> }
>
> +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> +{
> + return NULL;
> +}
> +
> static inline bool folio_memcg_kmem(struct folio *folio)
> {
> return false;
> diff --git a/include/linux/zswap.h b/include/linux/zswap.h
> index 2a60ce39cfde..e571e393669b 100644
> --- a/include/linux/zswap.h
> +++ b/include/linux/zswap.h
> @@ -15,6 +15,7 @@ bool zswap_load(struct folio *folio);
> void zswap_invalidate(int type, pgoff_t offset);
> void zswap_swapon(int type);
> void zswap_swapoff(int type);
> +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
>
> #else
>
> @@ -31,6 +32,7 @@ static inline bool zswap_load(struct folio *folio)
> static inline void zswap_invalidate(int type, pgoff_t offset) {}
> static inline void zswap_swapon(int type) {}
> static inline void zswap_swapoff(int type) {}
> +static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
>
> #endif
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 470821d1ba1a..792ca21c5815 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5614,6 +5614,8 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
> page_counter_set_min(&memcg->memory, 0);
> page_counter_set_low(&memcg->memory, 0);
>
> + zswap_memcg_offline_cleanup(memcg);
> +
> memcg_offline_kmem(memcg);
> reparent_shrinker_deferred(memcg);
> wb_memcg_offline(memcg);
> diff --git a/mm/swap.h b/mm/swap.h
> index 73c332ee4d91..c0dc73e10e91 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -51,7 +51,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct swap_iocb **plug);
> struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct mempolicy *mpol, pgoff_t ilx,
> - bool *new_page_allocated);
> + bool *new_page_allocated,
> + bool skip_if_exists);
> struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> struct mempolicy *mpol, pgoff_t ilx);
> struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 85d9e5806a6a..6c84236382f3 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -412,7 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
>
> struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct mempolicy *mpol, pgoff_t ilx,
> - bool *new_page_allocated)
> + bool *new_page_allocated,
> + bool skip_if_exists)
> {
> struct swap_info_struct *si;
> struct folio *folio;
> @@ -470,6 +471,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> if (err != -EEXIST)
> goto fail_put_swap;
>
> + /*
> + * Protect against a recursive call to __read_swap_cache_async()
> + * on the same entry waiting forever here because SWAP_HAS_CACHE
> + * is set but the folio is not the swap cache yet. This can
> + * happen today if mem_cgroup_swapin_charge_folio() below
> + * triggers reclaim through zswap, which may call
> + * __read_swap_cache_async() in the writeback path.
> + */
> + if (skip_if_exists)
> + goto fail_put_swap;
> +
> /*
> * We might race against __delete_from_swap_cache(), and
> * stumble across a swap_map entry whose SWAP_HAS_CACHE
> @@ -537,7 +549,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>
> mpol = get_vma_policy(vma, addr, 0, &ilx);
> page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> - &page_allocated);
> + &page_allocated, false);
> mpol_cond_put(mpol);
>
> if (page_allocated)
> @@ -654,7 +666,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> /* Ok, do the async read-ahead now */
> page = __read_swap_cache_async(
> swp_entry(swp_type(entry), offset),
> - gfp_mask, mpol, ilx, &page_allocated);
> + gfp_mask, mpol, ilx, &page_allocated, false);
> if (!page)
> continue;
> if (page_allocated) {
> @@ -672,7 +684,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> skip:
> /* The page was likely read above, so no need for plugging here */
> page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> - &page_allocated);
> + &page_allocated, false);
> if (unlikely(page_allocated))
> swap_readpage(page, false, NULL);
> return page;
> @@ -827,7 +839,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> pte_unmap(pte);
> pte = NULL;
> page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> - &page_allocated);
> + &page_allocated, false);
> if (!page)
> continue;
> if (page_allocated) {
> @@ -847,7 +859,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> skip:
> /* The page was likely read above, so no need for plugging here */
> page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
> - &page_allocated);
> + &page_allocated, false);
> if (unlikely(page_allocated))
> swap_readpage(page, false, NULL);
> return page;
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 4bdb2d83bb0d..f323e45cbdc7 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -35,6 +35,7 @@
> #include <linux/writeback.h>
> #include <linux/pagemap.h>
> #include <linux/workqueue.h>
> +#include <linux/list_lru.h>
>
> #include "swap.h"
> #include "internal.h"
> @@ -174,8 +175,8 @@ struct zswap_pool {
> struct work_struct shrink_work;
> struct hlist_node node;
> char tfm_name[CRYPTO_MAX_ALG_NAME];
> - struct list_head lru;
> - spinlock_t lru_lock;
> + struct list_lru list_lru;
> + struct mem_cgroup *next_shrink;
> };
>
> /*
> @@ -291,15 +292,46 @@ static void zswap_update_total_size(void)
> zswap_pool_total_size = total;
> }
>
> +/* should be called under RCU */
> +#ifdef CONFIG_MEMCG
> +static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
> +{
> + return entry->objcg ? obj_cgroup_memcg(entry->objcg) : NULL;
> +}
> +#else
> +static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
> +{
> + return NULL;
> +}
> +#endif
> +
> +static inline int entry_to_nid(struct zswap_entry *entry)
> +{
> + return page_to_nid(virt_to_page(entry));
> +}
> +
> +void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
> +{
> + struct zswap_pool *pool;
> +
> + /* lock out zswap pools list modification */
> + spin_lock(&zswap_pools_lock);
> + list_for_each_entry(pool, &zswap_pools, list) {
> + if (pool->next_shrink == memcg)
> + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> + }
> + spin_unlock(&zswap_pools_lock);
> +}
> +
> /*********************************
> * zswap entry functions
> **********************************/
> static struct kmem_cache *zswap_entry_cache;
>
> -static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> +static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
> {
> struct zswap_entry *entry;
> - entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> + entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
> if (!entry)
> return NULL;
> entry->refcount = 1;
> @@ -312,6 +344,61 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> kmem_cache_free(zswap_entry_cache, entry);
> }
>
> +/*********************************
> +* lru functions
> +**********************************/
> +static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> +{
> + int nid = entry_to_nid(entry);
> + struct mem_cgroup *memcg;
> +
> + /*
> + * Note that it is safe to use rcu_read_lock() here, even in the face of
> + * concurrent memcg offlining. Thanks to the memcg->kmemcg_id indirection
> + * used in list_lru lookup, only two scenarios are possible:
> + *
> + * 1. list_lru_add() is called before memcg->kmemcg_id is updated. The
> + * new entry will be reparented to memcg's parent's list_lru.
> + * 2. list_lru_add() is called after memcg->kmemcg_id is updated. The
> + * new entry will be added directly to memcg's parent's list_lru.
> + *
> + * Similar reasoning holds for list_lru_del() and list_lru_putback().
> + */
> + rcu_read_lock();
> + memcg = mem_cgroup_from_entry(entry);
> + /* will always succeed */
> + list_lru_add(list_lru, &entry->lru, nid, memcg);
> + rcu_read_unlock();
> +}
> +
> +static void zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> +{
> + int nid = entry_to_nid(entry);
> + struct mem_cgroup *memcg;
> +
> + rcu_read_lock();
> + memcg = mem_cgroup_from_entry(entry);
> + /* will always succeed */
> + list_lru_del(list_lru, &entry->lru, nid, memcg);
> + rcu_read_unlock();
> +}
> +
> +static void zswap_lru_putback(struct list_lru *list_lru,
> + struct zswap_entry *entry)
> +{
> + int nid = entry_to_nid(entry);
> + spinlock_t *lock = &list_lru->node[nid].lock;
> + struct mem_cgroup *memcg;
> +
> + rcu_read_lock();
> + memcg = mem_cgroup_from_entry(entry);
> + spin_lock(lock);
> + /* we cannot use list_lru_add here, because it increments node's lru count */
> + list_lru_putback(list_lru, &entry->lru, nid, memcg);
> + spin_unlock(lock);
> + rcu_read_unlock();
> +}
> +
> /*********************************
> * rbtree functions
> **********************************/
> @@ -396,9 +483,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> if (!entry->length)
> atomic_dec(&zswap_same_filled_pages);
> else {
> - spin_lock(&entry->pool->lru_lock);
> - list_del(&entry->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + zswap_lru_del(&entry->pool->list_lru, entry);
> zpool_free(zswap_find_zpool(entry), entry->handle);
> zswap_pool_put(entry->pool);
> }
> @@ -632,21 +717,15 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
> zswap_entry_put(tree, entry);
> }
>
> -static int zswap_reclaim_entry(struct zswap_pool *pool)
> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> + spinlock_t *lock, void *arg)
> {
> - struct zswap_entry *entry;
> + struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> struct zswap_tree *tree;
> pgoff_t swpoffset;
> - int ret;
> + enum lru_status ret = LRU_REMOVED_RETRY;
> + int writeback_result;
>
> - /* Get an entry off the LRU */
> - spin_lock(&pool->lru_lock);
> - if (list_empty(&pool->lru)) {
> - spin_unlock(&pool->lru_lock);
> - return -EINVAL;
> - }
> - entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> - list_del_init(&entry->lru);
> /*
> * Once the lru lock is dropped, the entry might get freed. The
> * swpoffset is copied to the stack, and entry isn't deref'd again
> @@ -654,28 +733,32 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> */
> swpoffset = swp_offset(entry->swpentry);
> tree = zswap_trees[swp_type(entry->swpentry)];
> - spin_unlock(&pool->lru_lock);
> + list_lru_isolate(l, item);
> + /*
> + * It's safe to drop the lock here because we return either
> + * LRU_REMOVED_RETRY or LRU_RETRY.
> + */
> + spin_unlock(lock);
>
> /* Check for invalidate() race */
> spin_lock(&tree->lock);
> - if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> - ret = -EAGAIN;
> + if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
> goto unlock;
> - }
> +
> /* Hold a reference to prevent a free during writeback */
> zswap_entry_get(entry);
> spin_unlock(&tree->lock);
>
> - ret = zswap_writeback_entry(entry, tree);
> + writeback_result = zswap_writeback_entry(entry, tree);
>
> spin_lock(&tree->lock);
> - if (ret) {
> - /* Writeback failed, put entry back on LRU */
> - spin_lock(&pool->lru_lock);
> - list_move(&entry->lru, &pool->lru);
> - spin_unlock(&pool->lru_lock);
> + if (writeback_result) {
> + zswap_reject_reclaim_fail++;
> + zswap_lru_putback(&entry->pool->list_lru, entry);
> + ret = LRU_RETRY;
> goto put_unlock;
> }
> + zswap_written_back_pages++;
>
> /*
> * Writeback started successfully, the page now belongs to the
> @@ -689,27 +772,93 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> zswap_entry_put(tree, entry);
> unlock:
> spin_unlock(&tree->lock);
> - return ret ? -EAGAIN : 0;
> + spin_lock(lock);
> + return ret;
> +}
> +
> +static int shrink_memcg(struct mem_cgroup *memcg)
> +{
> + struct zswap_pool *pool;
> + int nid, shrunk = 0;
> +
> + /*
> + * Skip zombies because their LRUs are reparented and we would be
> + * reclaiming from the parent instead of the dead memcg.
> + */
> + if (memcg && !mem_cgroup_online(memcg))
> + return -ENOENT;
> +
> + pool = zswap_pool_current_get();
> + if (!pool)
> + return -EINVAL;
> +
> + for_each_node_state(nid, N_NORMAL_MEMORY) {
> + unsigned long nr_to_walk = 1;
> +
> + shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg,
> + &shrink_memcg_cb, NULL, &nr_to_walk);
> + }
> + zswap_pool_put(pool);
> + return shrunk ? 0 : -EAGAIN;
> }
>
> static void shrink_worker(struct work_struct *w)
> {
> struct zswap_pool *pool = container_of(w, typeof(*pool),
> shrink_work);
> + struct mem_cgroup *memcg;
> int ret, failures = 0;
>
> + /* global reclaim will select cgroup in a round-robin fashion. */
> do {
> - ret = zswap_reclaim_entry(pool);
> - if (ret) {
> - zswap_reject_reclaim_fail++;
> - if (ret != -EAGAIN)
> + spin_lock(&zswap_pools_lock);
> + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> + memcg = pool->next_shrink;
> +
> + /*
> + * We need to retry if we have gone through a full round trip, or if we
> + * got an offline memcg (or else we risk undoing the effect of the
> + * zswap memcg offlining cleanup callback). This is not catastrophic
> + * per se, but it will keep the now offlined memcg hostage for a while.
> + *
> + * Note that if we got an online memcg, we will keep the extra
> + * reference in case the original reference obtained by mem_cgroup_iter
> + * is dropped by the zswap memcg offlining callback, ensuring that the
> + * memcg is not killed when we are reclaiming.
> + */
> + if (!memcg) {
> + spin_unlock(&zswap_pools_lock);
> + if (++failures == MAX_RECLAIM_RETRIES)
> break;
> +
> + goto resched;
> + }
> +
> + if (!mem_cgroup_online(memcg)) {
> + /* drop the reference from mem_cgroup_iter() */
> + mem_cgroup_put(memcg);
> + pool->next_shrink = NULL;
> + spin_unlock(&zswap_pools_lock);
> +
> if (++failures == MAX_RECLAIM_RETRIES)
> break;
> +
> + goto resched;
> }
> + spin_unlock(&zswap_pools_lock);
> +
> + ret = shrink_memcg(memcg);
> + /* drop the extra reference */
> + mem_cgroup_put(memcg);
> +
> + if (ret == -EINVAL)
> + break;
> + if (ret && ++failures == MAX_RECLAIM_RETRIES)
> + break;
> +
> +resched:
> cond_resched();
> } while (!zswap_can_accept());
> - zswap_pool_put(pool);
Actually, after staring at this code a bit more - this looks wrong.
This line should not have been removed - this reference put pairs with
the reference obtained in zswap_store() when the shrink work is queued.
Looks like a mishap when I rebased and cleaned things up. Lemme send a
fixlet to undo this removal real quick.
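To spell out the pairing (paraphrased from memory rather than quoted
from the tree, so treat the exact helpers as an assumption):

	/* zswap_store(), when the global limit is hit: */
	pool = zswap_pool_last_get();	/* takes a pool reference */
	if (pool)
		queue_work(shrink_wq, &pool->shrink_work);

	/* shrink_worker(), once it is done writing back: */
	zswap_pool_put(pool);		/* drops that reference */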
> }
>
> static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> @@ -767,8 +916,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> */
> kref_init(&pool->kref);
> INIT_LIST_HEAD(&pool->list);
> - INIT_LIST_HEAD(&pool->lru);
> - spin_lock_init(&pool->lru_lock);
> + list_lru_init_memcg(&pool->list_lru, NULL);
> INIT_WORK(&pool->shrink_work, shrink_worker);
>
> zswap_pool_debug("created", pool);
> @@ -834,6 +982,13 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
>
> cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> free_percpu(pool->acomp_ctx);
> + list_lru_destroy(&pool->list_lru);
> +
> + spin_lock(&zswap_pools_lock);
> + mem_cgroup_put(pool->next_shrink);
> + pool->next_shrink = NULL;
> + spin_unlock(&zswap_pools_lock);
> +
> for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> zpool_destroy_pool(pool->zpools[i]);
> kfree(pool);
> @@ -1081,7 +1236,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> /* try to allocate swap cache page */
> mpol = get_task_policy(current);
> page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
> - NO_INTERLEAVE_INDEX, &page_was_allocated);
> + NO_INTERLEAVE_INDEX, &page_was_allocated, true);
> if (!page) {
> ret = -ENOMEM;
> goto fail;
> @@ -1152,7 +1307,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> /* start writeback */
> __swap_writepage(page, &wbc);
> put_page(page);
> - zswap_written_back_pages++;
>
> return ret;
>
> @@ -1209,6 +1363,7 @@ bool zswap_store(struct folio *folio)
> struct scatterlist input, output;
> struct crypto_acomp_ctx *acomp_ctx;
> struct obj_cgroup *objcg = NULL;
> + struct mem_cgroup *memcg = NULL;
> struct zswap_pool *pool;
> struct zpool *zpool;
> unsigned int dlen = PAGE_SIZE;
> @@ -1240,15 +1395,15 @@ bool zswap_store(struct folio *folio)
> zswap_invalidate_entry(tree, dupentry);
> }
> spin_unlock(&tree->lock);
> -
> - /*
> - * XXX: zswap reclaim does not work with cgroups yet. Without a
> - * cgroup-aware entry LRU, we will push out entries system-wide based on
> - * local cgroup limits.
> - */
> objcg = get_obj_cgroup_from_folio(folio);
> - if (objcg && !obj_cgroup_may_zswap(objcg))
> - goto reject;
> + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> + memcg = get_mem_cgroup_from_objcg(objcg);
> + if (shrink_memcg(memcg)) {
> + mem_cgroup_put(memcg);
> + goto reject;
> + }
> + mem_cgroup_put(memcg);
> + }
>
> /* reclaim space if needed */
> if (zswap_is_full()) {
> @@ -1265,7 +1420,7 @@ bool zswap_store(struct folio *folio)
> }
>
> /* allocate entry */
> - entry = zswap_entry_cache_alloc(GFP_KERNEL);
> + entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
> if (!entry) {
> zswap_reject_kmemcache_fail++;
> goto reject;
> @@ -1292,6 +1447,15 @@ bool zswap_store(struct folio *folio)
> if (!entry->pool)
> goto freepage;
>
> + if (objcg) {
> + memcg = get_mem_cgroup_from_objcg(objcg);
> + if (memcg_list_lru_alloc(memcg, &entry->pool->list_lru, GFP_KERNEL)) {
> + mem_cgroup_put(memcg);
> + goto put_pool;
> + }
> + mem_cgroup_put(memcg);
> + }
> +
> /* compress */
> acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
>
> @@ -1370,9 +1534,8 @@ bool zswap_store(struct folio *folio)
> zswap_invalidate_entry(tree, dupentry);
> }
> if (entry->length) {
> - spin_lock(&entry->pool->lru_lock);
> - list_add(&entry->lru, &entry->pool->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + INIT_LIST_HEAD(&entry->lru);
> + zswap_lru_add(&entry->pool->list_lru, entry);
> }
> spin_unlock(&tree->lock);
>
> @@ -1385,6 +1548,7 @@ bool zswap_store(struct folio *folio)
>
> put_dstmem:
> mutex_unlock(acomp_ctx->mutex);
> +put_pool:
> zswap_pool_put(entry->pool);
> freepage:
> zswap_entry_cache_free(entry);
> @@ -1479,9 +1643,8 @@ bool zswap_load(struct folio *folio)
> zswap_invalidate_entry(tree, entry);
> folio_mark_dirty(folio);
> } else if (entry->length) {
> - spin_lock(&entry->pool->lru_lock);
> - list_move(&entry->lru, &entry->pool->lru);
> - spin_unlock(&entry->pool->lru_lock);
> + zswap_lru_del(&entry->pool->list_lru, entry);
> + zswap_lru_add(&entry->pool->list_lru, entry);
> }
> zswap_entry_put(tree, entry);
> spin_unlock(&tree->lock);
> --
> 2.34.1
^ permalink raw reply [flat|nested] 48+ messages in thread
* [PATCH v8 3/6] zswap: make shrinking memcg-aware (fix 2)
2023-11-30 19:40 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware Nhat Pham
` (3 preceding siblings ...)
2023-12-06 3:03 ` Nhat Pham
@ 2023-12-06 3:06 ` Nhat Pham
4 siblings, 0 replies; 48+ messages in thread
From: Nhat Pham @ 2023-12-06 3:06 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
Drop the pool's reference at the end of the writeback step. Apply on
top of the first fixlet:
https://lore.kernel.org/linux-mm/20231130203522.GC543908@cmpxchg.org/T/#m6ba8efd2205486b1b333a29f5a890563b45c7a7e
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
mm/zswap.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/zswap.c b/mm/zswap.c
index 7a84c1454988..56d4a8cc461d 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -859,6 +859,7 @@ static void shrink_worker(struct work_struct *w)
resched:
cond_resched();
} while (!zswap_can_accept());
+ zswap_pool_put(pool);
}
static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
--
2.34.1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* Re: [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback
2023-11-30 19:40 [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback Nhat Pham
` (6 preceding siblings ...)
2023-11-30 21:19 ` [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback Andrew Morton
@ 2023-12-06 4:10 ` Bagas Sanjaya
7 siblings, 0 replies; 48+ messages in thread
From: Bagas Sanjaya @ 2023-12-06 4:10 UTC (permalink / raw)
To: Nhat Pham, akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
On Thu, Nov 30, 2023 at 11:40:17AM -0800, Nhat Pham wrote:
> Changelog:
> v8:
> * Fixed a couple of build errors in the case of !CONFIG_MEMCG
> * Simplified the online memcg selection scheme for the zswap global
> limit reclaim (suggested by Michal Hocko and Johannes Weiner)
> (patch 2 and patch 3)
> * Added a new kconfig to allows users to enable zswap shrinker by
> default. (suggested by Johannes Weiner) (patch 6)
> v7:
> * Added the mem_cgroup_iter_online() function to the API for the new
> behavior (suggested by Andrew Morton) (patch 2)
> * Fixed a missing list_lru_del -> list_lru_del_obj (patch 1)
> v6:
> * Rebase on top of latest mm-unstable.
> * Fix/improve the in-code documentation of the new list_lru
> manipulation functions (patch 1)
> v5:
> * Replace reference getting with an rcu_read_lock() section for
> zswap lru modifications (suggested by Yosry)
> * Add a new prep patch that allows mem_cgroup_iter() to return
> online cgroup.
> * Add a callback that updates pool->next_shrink when the cgroup is
> offlined (suggested by Yosry Ahmed, Johannes Weiner)
> v4:
> * Rename list_lru_add to list_lru_add_obj and __list_lru_add to
> list_lru_add (patch 1) (suggested by Johannes Weiner and
> Yosry Ahmed)
> * Some cleanups on the memcg aware LRU patch (patch 2)
> (suggested by Yosry Ahmed)
> * Use event interface for the new per-cgroup writeback counters.
> (patch 3) (suggested by Yosry Ahmed)
> * Abstract zswap's lruvec states and handling into
> zswap_lruvec_state (patch 5) (suggested by Yosry Ahmed)
> v3:
> * Add a patch to export per-cgroup zswap writeback counters
> * Add a patch to update zswap's kselftest
> * Separate the new list_lru functions into its own prep patch
> * Do not start from the top of the hierarchy when encounter a memcg
> that is not online for the global limit zswap writeback (patch 2)
> (suggested by Yosry Ahmed)
> * Do not remove the swap entry from list_lru in
> __read_swapcache_async() (patch 2) (suggested by Yosry Ahmed)
> * Removed a redundant zswap pool getting (patch 2)
> (reported by Ryan Roberts)
> * Use atomic for the nr_zswap_protected (instead of lruvec's lock)
> (patch 5) (suggested by Yosry Ahmed)
> * Remove the per-cgroup zswap shrinker knob (patch 5)
> (suggested by Yosry Ahmed)
> v2:
> * Fix loongarch compiler errors
> * Use pool stats instead of memcg stats when !CONFIG_MEMCG_KEM
>
> There are currently several issues with zswap writeback:
>
> 1. There is only a single global LRU for zswap, making it impossible to
> perform workload-specific shrinking - a memcg under memory pressure
> cannot determine which pages in the pool it owns, and often ends up
> writing pages from other memcgs. This issue has been previously
> observed in practice and mitigated by simply disabling
> memcg-initiated shrinking:
>
> https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
>
> But this solution leaves a lot to be desired, as we still do not
> have an avenue for a memcg to free up its own memory locked up in
> the zswap pool.
>
> 2. We only shrink the zswap pool when the user-defined limit is hit.
> This means that if we set the limit too high, cold data that are
> unlikely to be used again will reside in the pool, wasting precious
> memory. It is hard to predict how much zswap space will be needed
> ahead of time, as this depends on the workload (specifically, on
> factors such as memory access patterns and compressibility of the
> memory pages).
>
> This patch series solves these issues by separating the global zswap
> LRU into per-memcg and per-NUMA LRUs, and performs workload-specific
> (i.e memcg- and NUMA-aware) zswap writeback under memory pressure. The
> new shrinker does not have any parameter that must be tuned by the
> user, and can be opted in or out on a per-memcg basis.
>
> As a proof of concept, we ran the following synthetic benchmark:
> build the linux kernel in a memory-limited cgroup, and allocate some
> cold data in tmpfs to see if the shrinker could write them out and
> improved the overall performance. Depending on the amount of cold data
> generated, we observe from 14% to 35% reduction in kernel CPU time used
> in the kernel builds.
>
> Domenico Cerasuolo (3):
> zswap: make shrinking memcg-aware
> mm: memcg: add per-memcg zswap writeback stat
> selftests: cgroup: update per-memcg zswap writeback selftest
>
> Nhat Pham (3):
> list_lru: allows explicit memcg and NUMA node selection
> memcontrol: implement mem_cgroup_tryget_online()
> zswap: shrinks zswap pool based on memory pressure
>
> Documentation/admin-guide/mm/zswap.rst | 10 +
> drivers/android/binder_alloc.c | 7 +-
> fs/dcache.c | 8 +-
> fs/gfs2/quota.c | 6 +-
> fs/inode.c | 4 +-
> fs/nfs/nfs42xattr.c | 8 +-
> fs/nfsd/filecache.c | 4 +-
> fs/xfs/xfs_buf.c | 6 +-
> fs/xfs/xfs_dquot.c | 2 +-
> fs/xfs/xfs_qm.c | 2 +-
> include/linux/list_lru.h | 54 ++-
> include/linux/memcontrol.h | 15 +
> include/linux/mmzone.h | 2 +
> include/linux/vm_event_item.h | 1 +
> include/linux/zswap.h | 27 +-
> mm/Kconfig | 14 +
> mm/list_lru.c | 48 ++-
> mm/memcontrol.c | 3 +
> mm/mmzone.c | 1 +
> mm/swap.h | 3 +-
> mm/swap_state.c | 26 +-
> mm/vmstat.c | 1 +
> mm/workingset.c | 4 +-
> mm/zswap.c | 456 +++++++++++++++++---
> tools/testing/selftests/cgroup/test_zswap.c | 74 ++--
> 25 files changed, 661 insertions(+), 125 deletions(-)
>
Carrying from v7,
Tested-by: Bagas Sanjaya <bagasdotme@gmail.com>
--
An old man doll... just what I always wanted! - Clara
^ permalink raw reply [flat|nested] 48+ messages in thread
* Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
2023-11-30 19:40 ` [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure Nhat Pham
@ 2023-12-06 5:51 ` Chengming Zhou
2023-12-06 5:59 ` Yosry Ahmed
2023-12-06 19:44 ` [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure (fix) Nhat Pham
1 sibling, 1 reply; 48+ messages in thread
From: Chengming Zhou @ 2023-12-06 5:51 UTC (permalink / raw)
To: Nhat Pham, akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
On 2023/12/1 03:40, Nhat Pham wrote:
> Currently, we only shrink the zswap pool when the user-defined limit is
> hit. This means that if we set the limit too high, cold data that are
> unlikely to be used again will reside in the pool, wasting precious
> memory. It is hard to predict how much zswap space will be needed ahead
> of time, as this depends on the workload (specifically, on factors such
> as memory access patterns and compressibility of the memory pages).
>
> This patch implements a memcg- and NUMA-aware shrinker for zswap, that
> is initiated when there is memory pressure. The shrinker does not
> have any parameter that must be tuned by the user, and can be opted in
> or out on a per-memcg basis.
>
> Furthermore, to make it more robust for many workloads and prevent
> overshrinking (i.e. evicting warm pages that might be refaulted into
> memory), we build in the following heuristics:
>
> * Estimate the number of warm pages residing in zswap, and attempt to
> protect this region of the zswap LRU.
> * Scale the number of freeable objects by an estimate of the memory
> saving factor. The better zswap compresses the data, the fewer pages
> we will evict to swap (as we will otherwise incur IO for relatively
> small memory saving).
> * During reclaim, if the shrinker encounters a page that is also being
> brought into memory, the shrinker will cautiously terminate its
> shrinking action, as this is a sign that it is touching the warmer
> region of the zswap LRU.
>
> As a proof of concept, we ran the following synthetic benchmark:
> build the linux kernel in a memory-limited cgroup, and allocate some
> cold data in tmpfs to see if the shrinker could write them out and
> improved the overall performance. Depending on the amount of cold data
> generated, we observe from 14% to 35% reduction in kernel CPU time used
> in the kernel builds.
>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> ---
> Documentation/admin-guide/mm/zswap.rst | 10 ++
> include/linux/mmzone.h | 2 +
> include/linux/zswap.h | 25 +++-
> mm/Kconfig | 14 ++
> mm/mmzone.c | 1 +
> mm/swap_state.c | 2 +
> mm/zswap.c | 185 ++++++++++++++++++++++++-
> 7 files changed, 233 insertions(+), 6 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
> index 45b98390e938..62fc244ec702 100644
> --- a/Documentation/admin-guide/mm/zswap.rst
> +++ b/Documentation/admin-guide/mm/zswap.rst
> @@ -153,6 +153,16 @@ attribute, e. g.::
>
> Setting this parameter to 100 will disable the hysteresis.
>
> +When there is a sizable amount of cold memory residing in the zswap pool, it
> +can be advantageous to proactively write these cold pages to swap and reclaim
> +the memory for other use cases. By default, the zswap shrinker is disabled.
> +User can enable it as follows:
> +
> + echo Y > /sys/module/zswap/parameters/shrinker_enabled
> +
> +This can be enabled at the boot time if ``CONFIG_ZSWAP_SHRINKER_DEFAULT_ON`` is
> +selected.
> +
> A debugfs interface is provided for various statistic about pool size, number
> of pages stored, same-value filled pages and various counters for the reasons
> pages are rejected.
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 7b1816450bfc..b23bc5390240 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -22,6 +22,7 @@
> #include <linux/mm_types.h>
> #include <linux/page-flags.h>
> #include <linux/local_lock.h>
> +#include <linux/zswap.h>
> #include <asm/page.h>
>
> /* Free memory management - zoned buddy allocator. */
> @@ -641,6 +642,7 @@ struct lruvec {
> #ifdef CONFIG_MEMCG
> struct pglist_data *pgdat;
> #endif
> + struct zswap_lruvec_state zswap_lruvec_state;
> };
>
> /* Isolate for asynchronous migration */
> diff --git a/include/linux/zswap.h b/include/linux/zswap.h
> index e571e393669b..08c240e16a01 100644
> --- a/include/linux/zswap.h
> +++ b/include/linux/zswap.h
> @@ -5,20 +5,40 @@
> #include <linux/types.h>
> #include <linux/mm_types.h>
>
> +struct lruvec;
> +
> extern u64 zswap_pool_total_size;
> extern atomic_t zswap_stored_pages;
>
> #ifdef CONFIG_ZSWAP
>
> +struct zswap_lruvec_state {
> + /*
> + * Number of pages in zswap that should be protected from the shrinker.
> + * This number is an estimate of the following counts:
> + *
> + * a) Recent page faults.
> + * b) Recent insertion to the zswap LRU. This includes new zswap stores,
> + * as well as recent zswap LRU rotations.
> + *
> > + * These pages are likely to be warm, and might incur IO if they are written
> + * to swap.
> + */
> + atomic_long_t nr_zswap_protected;
> +};
> +
> bool zswap_store(struct folio *folio);
> bool zswap_load(struct folio *folio);
> void zswap_invalidate(int type, pgoff_t offset);
> void zswap_swapon(int type);
> void zswap_swapoff(int type);
> void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg);
> -
> +void zswap_lruvec_state_init(struct lruvec *lruvec);
> +void zswap_page_swapin(struct page *page);
> #else
>
> +struct zswap_lruvec_state {};
> +
> static inline bool zswap_store(struct folio *folio)
> {
> return false;
> @@ -33,7 +53,8 @@ static inline void zswap_invalidate(int type, pgoff_t offset) {}
> static inline void zswap_swapon(int type) {}
> static inline void zswap_swapoff(int type) {}
> static inline void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg) {}
> -
> +static inline void zswap_lruvec_state_init(struct lruvec *lruvec) {}
> +static inline void zswap_page_swapin(struct page *page) {}
> #endif
>
> #endif /* _LINUX_ZSWAP_H */
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 57cd378c73d6..ca87cdb72f11 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -61,6 +61,20 @@ config ZSWAP_EXCLUSIVE_LOADS_DEFAULT_ON
> The cost is that if the page was never dirtied and needs to be
> swapped out again, it will be re-compressed.
>
> +config ZSWAP_SHRINKER_DEFAULT_ON
> + bool "Shrink the zswap pool on memory pressure"
> + depends on ZSWAP
> + default n
> + help
> + If selected, the zswap shrinker will be enabled, and the pages
> + stored in the zswap pool will become available for reclaim (i.e
> + written back to the backing swap device) on memory pressure.
> +
> + This means that zswap writeback could happen even if the pool is
> + not yet full, or the cgroup zswap limit has not been reached,
> + reducing the chance that cold pages will reside in the zswap pool
> + and consume memory indefinitely.
> +
> choice
> prompt "Default compressor"
> depends on ZSWAP
> diff --git a/mm/mmzone.c b/mm/mmzone.c
> index b594d3f268fe..c01896eca736 100644
> --- a/mm/mmzone.c
> +++ b/mm/mmzone.c
> @@ -78,6 +78,7 @@ void lruvec_init(struct lruvec *lruvec)
>
> memset(lruvec, 0, sizeof(struct lruvec));
> spin_lock_init(&lruvec->lru_lock);
> + zswap_lruvec_state_init(lruvec);
>
> for_each_lru(lru)
> INIT_LIST_HEAD(&lruvec->lists[lru]);
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 6c84236382f3..c597cec606e4 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -687,6 +687,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> &page_allocated, false);
> if (unlikely(page_allocated))
> swap_readpage(page, false, NULL);
> + zswap_page_swapin(page);
> return page;
> }
>
> @@ -862,6 +863,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> &page_allocated, false);
> if (unlikely(page_allocated))
> swap_readpage(page, false, NULL);
> + zswap_page_swapin(page);
> return page;
> }
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 49b79393e472..0f086ffd7b6a 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -148,6 +148,11 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
> /* Number of zpools in zswap_pool (empirically determined for scalability) */
> #define ZSWAP_NR_ZPOOLS 32
>
> +/* Enable/disable memory pressure-based shrinker. */
> +static bool zswap_shrinker_enabled = IS_ENABLED(
> + CONFIG_ZSWAP_SHRINKER_DEFAULT_ON);
> +module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
> +
> /*********************************
> * data structures
> **********************************/
> @@ -177,6 +182,8 @@ struct zswap_pool {
> char tfm_name[CRYPTO_MAX_ALG_NAME];
> struct list_lru list_lru;
> struct mem_cgroup *next_shrink;
> + struct shrinker *shrinker;
> + atomic_t nr_stored;
> };
>
> /*
> @@ -275,17 +282,26 @@ static bool zswap_can_accept(void)
> DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
> }
>
> +static u64 get_zswap_pool_size(struct zswap_pool *pool)
> +{
> + u64 pool_size = 0;
> + int i;
> +
> + for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> + pool_size += zpool_get_total_size(pool->zpools[i]);
> +
> + return pool_size;
> +}
> +
> static void zswap_update_total_size(void)
> {
> struct zswap_pool *pool;
> u64 total = 0;
> - int i;
>
> rcu_read_lock();
>
> list_for_each_entry_rcu(pool, &zswap_pools, list)
> - for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> - total += zpool_get_total_size(pool->zpools[i]);
> + total += get_zswap_pool_size(pool);
>
> rcu_read_unlock();
>
> @@ -344,13 +360,34 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> kmem_cache_free(zswap_entry_cache, entry);
> }
>
> +/*********************************
> +* zswap lruvec functions
> +**********************************/
> +void zswap_lruvec_state_init(struct lruvec *lruvec)
> +{
> + atomic_long_set(&lruvec->zswap_lruvec_state.nr_zswap_protected, 0);
> +}
> +
> +void zswap_page_swapin(struct page *page)
> +{
> + struct lruvec *lruvec;
> +
> + if (page) {
> + lruvec = folio_lruvec(page_folio(page));
> + atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> + }
> +}
> +
> /*********************************
> * lru functions
> **********************************/
> static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> {
> + atomic_long_t *nr_zswap_protected;
> + unsigned long lru_size, old, new;
> int nid = entry_to_nid(entry);
> struct mem_cgroup *memcg;
> + struct lruvec *lruvec;
>
> /*
> * Note that it is safe to use rcu_read_lock() here, even in the face of
> @@ -368,6 +405,19 @@ static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> memcg = mem_cgroup_from_entry(entry);
> /* will always succeed */
> list_lru_add(list_lru, &entry->lru, nid, memcg);
> +
> + /* Update the protection area */
> + lru_size = list_lru_count_one(list_lru, nid, memcg);
> + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
> + nr_zswap_protected = &lruvec->zswap_lruvec_state.nr_zswap_protected;
> + old = atomic_long_inc_return(nr_zswap_protected);
> + /*
> + * Decay to avoid overflow and adapt to changing workloads.
> + * This is based on LRU reclaim cost decaying heuristics.
> + */
> + do {
> + new = old > lru_size / 4 ? old / 2 : old;
> + } while (!atomic_long_try_cmpxchg(nr_zswap_protected, &old, new));
> rcu_read_unlock();
> }
>
> @@ -389,6 +439,7 @@ static void zswap_lru_putback(struct list_lru *list_lru,
> int nid = entry_to_nid(entry);
> spinlock_t *lock = &list_lru->node[nid].lock;
> struct mem_cgroup *memcg;
> + struct lruvec *lruvec;
>
> rcu_read_lock();
> memcg = mem_cgroup_from_entry(entry);
> @@ -396,6 +447,10 @@ static void zswap_lru_putback(struct list_lru *list_lru,
> /* we cannot use list_lru_add here, because it increments node's lru count */
> list_lru_putback(list_lru, &entry->lru, nid, memcg);
> spin_unlock(lock);
> +
> + lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry_to_nid(entry)));
> + /* increment the protection area to account for the LRU rotation. */
> + atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> rcu_read_unlock();
> }
>
> @@ -485,6 +540,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> else {
> zswap_lru_del(&entry->pool->list_lru, entry);
> zpool_free(zswap_find_zpool(entry), entry->handle);
> + atomic_dec(&entry->pool->nr_stored);
> zswap_pool_put(entry->pool);
> }
> zswap_entry_cache_free(entry);
> @@ -526,6 +582,102 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> return entry;
> }
>
> +/*********************************
> +* shrinker functions
> +**********************************/
> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> + spinlock_t *lock, void *arg);
> +
> +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> + struct shrink_control *sc)
> +{
> + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> + unsigned long shrink_ret, nr_protected, lru_size;
> + struct zswap_pool *pool = shrinker->private_data;
> + bool encountered_page_in_swapcache = false;
> +
> + nr_protected =
> + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> +
> + /*
> + * Abort if the shrinker is disabled or if we are shrinking into the
> + * protected region.
> + *
> + * This short-circuiting is necessary because if we have too many multiple
> + * concurrent reclaimers getting the freeable zswap object counts at the
> + * same time (before any of them made reasonable progress), the total
> + * number of reclaimed objects might be more than the number of unprotected
> + * objects (i.e the reclaimers will reclaim into the protected area of the
> + * zswap LRU).
> + */
> + if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
> + sc->nr_scanned = 0;
> + return SHRINK_STOP;
> + }
> +
> + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> + &encountered_page_in_swapcache);
> +
> + if (encountered_page_in_swapcache)
> + return SHRINK_STOP;
> +
> + return shrink_ret ? shrink_ret : SHRINK_STOP;
> +}
> +
> +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> + struct shrink_control *sc)
> +{
> + struct zswap_pool *pool = shrinker->private_data;
> + struct mem_cgroup *memcg = sc->memcg;
> + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> + unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
> +
> +#ifdef CONFIG_MEMCG_KMEM
> + cgroup_rstat_flush(memcg->css.cgroup);
> + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> +#else
> + /* use pool stats instead of memcg stats */
> + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> + nr_stored = atomic_read(&pool->nr_stored);
> +#endif
> +
> + if (!zswap_shrinker_enabled || !nr_stored)
When I tested with this series, with !zswap_shrinker_enabled in the default case,
I found the performance is much worse than that without this patch.
Testcase: memory.max=2G, zswap enabled, kernel build -j32 in a tmpfs directory.
The reason seems to be the cgroup_rstat_flush() above, which caused a lot of rstat
lock contention on the zswap_store() path. And if I put the "zswap_shrinker_enabled"
check above the cgroup_rstat_flush(), the performance becomes much better.
Maybe we can put the "zswap_shrinker_enabled" check above cgroup_rstat_flush()?
Thanks!
> + return 0;
> +
> + nr_protected =
> + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> + /*
> + * Subtract the lru size by an estimate of the number of pages
> + * that should be protected.
> + */
> + nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
> +
> + /*
> + * Scale the number of freeable pages by the memory saving factor.
> + * This ensures that the better zswap compresses memory, the fewer
> + * pages we will evict to swap (as it will otherwise incur IO for
> + * relatively small memory saving).
> + */
> + return mult_frac(nr_freeable, nr_backing, nr_stored);
> +}
> +
> +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> +{
> + pool->shrinker =
> + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> + if (!pool->shrinker)
> + return;
> +
> + pool->shrinker->private_data = pool;
> + pool->shrinker->scan_objects = zswap_shrinker_scan;
> + pool->shrinker->count_objects = zswap_shrinker_count;
> + pool->shrinker->batch = 0;
> + pool->shrinker->seeks = DEFAULT_SEEKS;
> +}
> +
> /*********************************
> * per-cpu code
> **********************************/
> @@ -721,6 +873,7 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> spinlock_t *lock, void *arg)
> {
> struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> + bool *encountered_page_in_swapcache = (bool *)arg;
> struct zswap_tree *tree;
> pgoff_t swpoffset;
> enum lru_status ret = LRU_REMOVED_RETRY;
> @@ -756,6 +909,17 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> zswap_reject_reclaim_fail++;
> zswap_lru_putback(&entry->pool->list_lru, entry);
> ret = LRU_RETRY;
> +
> + /*
> + * Encountering a page already in swap cache is a sign that we are shrinking
> + * into the warmer region. We should terminate shrinking (if we're in the dynamic
> + * shrinker context).
> + */
> + if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
> + ret = LRU_SKIP;
> + *encountered_page_in_swapcache = true;
> + }
> +
> goto put_unlock;
> }
> zswap_written_back_pages++;
> @@ -913,6 +1077,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> &pool->node);
> if (ret)
> goto error;
> +
> + zswap_alloc_shrinker(pool);
> + if (!pool->shrinker)
> + goto error;
> +
> pr_debug("using %s compressor\n", pool->tfm_name);
>
> /* being the current pool takes 1 ref; this func expects the
> @@ -920,13 +1089,19 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> */
> kref_init(&pool->kref);
> INIT_LIST_HEAD(&pool->list);
> - list_lru_init_memcg(&pool->list_lru, NULL);
> + if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
> + goto lru_fail;
> + shrinker_register(pool->shrinker);
> INIT_WORK(&pool->shrink_work, shrink_worker);
> + atomic_set(&pool->nr_stored, 0);
>
> zswap_pool_debug("created", pool);
>
> return pool;
>
> +lru_fail:
> + list_lru_destroy(&pool->list_lru);
> + shrinker_free(pool->shrinker);
> error:
> if (pool->acomp_ctx)
> free_percpu(pool->acomp_ctx);
> @@ -984,6 +1159,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
>
> zswap_pool_debug("destroying", pool);
>
> + shrinker_free(pool->shrinker);
> cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> free_percpu(pool->acomp_ctx);
> list_lru_destroy(&pool->list_lru);
> @@ -1540,6 +1716,7 @@ bool zswap_store(struct folio *folio)
> if (entry->length) {
> INIT_LIST_HEAD(&entry->lru);
> zswap_lru_add(&entry->pool->list_lru, entry);
> + atomic_inc(&entry->pool->nr_stored);
> }
> spin_unlock(&tree->lock);
>
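As an aside, the protection heuristic in the hunks above can be looked at in isolation. The following is a stand-alone user-space sketch (plain C, hypothetical toy_* names, no kernel atomics or lruvec plumbing — an illustration, not kernel code) of just the arithmetic: swapins and LRU rotations bump the protected count, each LRU add decays it once it exceeds a quarter of the LRU size, and the scan side reports nothing rather than reclaim into the protected tail.

#include <stdio.h>

/* Toy stand-in for lruvec->zswap_lruvec_state.nr_zswap_protected. */
struct toy_lruvec {
	long nr_protected;
	long lru_size;
};

/* Swapin (or LRU rotation): recently faulted entries earn more protection. */
static void toy_swapin(struct toy_lruvec *l)
{
	l->nr_protected++;
}

/* LRU add: bump protection, then decay it once it exceeds lru_size / 4. */
static void toy_lru_add(struct toy_lruvec *l)
{
	l->lru_size++;
	l->nr_protected++;
	if (l->nr_protected > l->lru_size / 4)
		l->nr_protected /= 2;
}

/* Scan side: report 0 rather than reclaim into the protected region. */
static long toy_scan(struct toy_lruvec *l, long nr_to_scan)
{
	if (l->nr_protected >= l->lru_size - nr_to_scan)
		return 0;
	return nr_to_scan;
}

int main(void)
{
	struct toy_lruvec l = { 0, 0 };
	int i;

	for (i = 0; i < 1000; i++)
		toy_lru_add(&l);	/* stores filling the LRU */
	for (i = 0; i < 100; i++)
		toy_swapin(&l);		/* refaults grow the protected area */

	printf("lru_size=%ld protected=%ld scannable=%ld\n",
	       l.lru_size, l.nr_protected, toy_scan(&l, 128));
	return 0;
}

The single-threaded model above glosses over the atomic_long_try_cmpxchg() loop the patch uses for concurrent updaters; only the decay shape is the same.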
* Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
2023-12-06 5:51 ` Chengming Zhou
@ 2023-12-06 5:59 ` Yosry Ahmed
2023-12-06 6:43 ` Chengming Zhou
2023-12-06 16:56 ` Nhat Pham
0 siblings, 2 replies; 48+ messages in thread
From: Yosry Ahmed @ 2023-12-06 5:59 UTC (permalink / raw)
To: Chengming Zhou
Cc: Nhat Pham, akpm, hannes, cerasuolodomenico, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
[..]
> > @@ -526,6 +582,102 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> > return entry;
> > }
> >
> > +/*********************************
> > +* shrinker functions
> > +**********************************/
> > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > + spinlock_t *lock, void *arg);
> > +
> > +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> > + struct shrink_control *sc)
> > +{
> > + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> > + unsigned long shrink_ret, nr_protected, lru_size;
> > + struct zswap_pool *pool = shrinker->private_data;
> > + bool encountered_page_in_swapcache = false;
> > +
> > + nr_protected =
> > + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> > + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> > +
> > + /*
> > + * Abort if the shrinker is disabled or if we are shrinking into the
> > + * protected region.
> > + *
> > + * This short-circuiting is necessary because if we have too many multiple
> > + * concurrent reclaimers getting the freeable zswap object counts at the
> > + * same time (before any of them made reasonable progress), the total
> > + * number of reclaimed objects might be more than the number of unprotected
> > + * objects (i.e the reclaimers will reclaim into the protected area of the
> > + * zswap LRU).
> > + */
> > + if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
> > + sc->nr_scanned = 0;
> > + return SHRINK_STOP;
> > + }
> > +
> > + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> > + &encountered_page_in_swapcache);
> > +
> > + if (encountered_page_in_swapcache)
> > + return SHRINK_STOP;
> > +
> > + return shrink_ret ? shrink_ret : SHRINK_STOP;
> > +}
> > +
> > +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> > + struct shrink_control *sc)
> > +{
> > + struct zswap_pool *pool = shrinker->private_data;
> > + struct mem_cgroup *memcg = sc->memcg;
> > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> > + unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
> > +
> > +#ifdef CONFIG_MEMCG_KMEM
> > + cgroup_rstat_flush(memcg->css.cgroup);
> > + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> > + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> > +#else
> > + /* use pool stats instead of memcg stats */
> > + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> > + nr_stored = atomic_read(&pool->nr_stored);
> > +#endif
> > +
> > + if (!zswap_shrinker_enabled || !nr_stored)
> When I tested with this series, with !zswap_shrinker_enabled in the default case,
> I found the performance is much worse than that without this patch.
>
> Testcase: memory.max=2G, zswap enabled, kernel build -j32 in a tmpfs directory.
>
> The reason seems to be the cgroup_rstat_flush() above, which caused a lot of rstat
> lock contention on the zswap_store() path. And if I put the "zswap_shrinker_enabled"
> check above the cgroup_rstat_flush(), the performance becomes much better.
>
> Maybe we can put the "zswap_shrinker_enabled" check above cgroup_rstat_flush()?
Yes, we should do nothing if !zswap_shrinker_enabled. We should also
use mem_cgroup_flush_stats() here like other places unless accuracy is
crucial, which I doubt given that reclaim uses
mem_cgroup_flush_stats().
mem_cgroup_flush_stats() has some thresholding to make sure we don't
do flushes unnecessarily, and I have a pending series in mm-unstable
that makes that thresholding per-memcg. Keep in mind that adding a
call to mem_cgroup_flush_stats() will cause a conflict in mm-unstable,
because the series there adds a memcg argument to
mem_cgroup_flush_stats(). That should be easily amenable though, I can
post a fixlet for my series to add the memcg argument there on top of
users if needed.
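For readers who have not looked at that thresholding, the general pattern can be sketched stand-alone. This is a deliberately simplified user-space model (plain C11, hypothetical names, not the memcontrol.c implementation): updaters bump a cheap counter, and the expensive flush is skipped unless enough updates have accumulated since the last flush.

#include <stdio.h>
#include <stdatomic.h>

#define FLUSH_THRESHOLD 64		/* hypothetical value, picked for the demo */

static atomic_long stats_updates;	/* cheap counter bumped on every stat update */
static long pending;			/* updates not yet visible to readers */
static long flushed;			/* the aggregated view readers consume */

static void stat_update(long delta)
{
	pending += delta;
	atomic_fetch_add(&stats_updates, 1);
}

/* Expensive step: stands in for walking the rstat tree under a global lock. */
static void do_flush(void)
{
	flushed += pending;
	pending = 0;
	atomic_store(&stats_updates, 0);
}

/* Threshold-gated flush: do nothing if little has changed since the last flush. */
static void flush_stats_if_needed(void)
{
	if (atomic_load(&stats_updates) < FLUSH_THRESHOLD)
		return;
	do_flush();
}

int main(void)
{
	for (int i = 0; i < 200; i++) {
		stat_update(1);
		flush_stats_if_needed();	/* flushes only every FLUSH_THRESHOLD updates */
	}
	do_flush();				/* final flush so nothing is left pending */
	printf("flushed=%ld\n", flushed);
	return 0;
}

The real code is of course more involved (per-CPU accounting, periodic flushing), but the early exit when the counter is below the threshold is what keeps a hot path like count_objects() cheap when the stats are already fresh.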
>
> Thanks!
>
> > + return 0;
> > +
> > + nr_protected =
> > + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> > + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> > + /*
> > + * Subtract the lru size by an estimate of the number of pages
> > + * that should be protected.
> > + */
> > + nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
> > +
> > + /*
> > + * Scale the number of freeable pages by the memory saving factor.
> > + * This ensures that the better zswap compresses memory, the fewer
> > + * pages we will evict to swap (as it will otherwise incur IO for
> > + * relatively small memory saving).
> > + */
> > + return mult_frac(nr_freeable, nr_backing, nr_stored);
> > +}
> > +
> > +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> > +{
> > + pool->shrinker =
> > + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> > + if (!pool->shrinker)
> > + return;
> > +
> > + pool->shrinker->private_data = pool;
> > + pool->shrinker->scan_objects = zswap_shrinker_scan;
> > + pool->shrinker->count_objects = zswap_shrinker_count;
> > + pool->shrinker->batch = 0;
> > + pool->shrinker->seeks = DEFAULT_SEEKS;
> > +}
> > +
> > /*********************************
> > * per-cpu code
> > **********************************/
[..]
* Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
2023-12-06 5:59 ` Yosry Ahmed
@ 2023-12-06 6:43 ` Chengming Zhou
2023-12-06 7:36 ` Yosry Ahmed
2023-12-06 16:56 ` Nhat Pham
1 sibling, 1 reply; 48+ messages in thread
From: Chengming Zhou @ 2023-12-06 6:43 UTC (permalink / raw)
To: Yosry Ahmed
Cc: Nhat Pham, akpm, hannes, cerasuolodomenico, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
On 2023/12/6 13:59, Yosry Ahmed wrote:
> [..]
>>> @@ -526,6 +582,102 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
>>> return entry;
>>> }
>>>
>>> +/*********************************
>>> +* shrinker functions
>>> +**********************************/
>>> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
>>> + spinlock_t *lock, void *arg);
>>> +
>>> +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
>>> + struct shrink_control *sc)
>>> +{
>>> + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
>>> + unsigned long shrink_ret, nr_protected, lru_size;
>>> + struct zswap_pool *pool = shrinker->private_data;
>>> + bool encountered_page_in_swapcache = false;
>>> +
>>> + nr_protected =
>>> + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
>>> + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
>>> +
>>> + /*
>>> + * Abort if the shrinker is disabled or if we are shrinking into the
>>> + * protected region.
>>> + *
>>> + * This short-circuiting is necessary because if we have too many multiple
>>> + * concurrent reclaimers getting the freeable zswap object counts at the
>>> + * same time (before any of them made reasonable progress), the total
>>> + * number of reclaimed objects might be more than the number of unprotected
>>> + * objects (i.e the reclaimers will reclaim into the protected area of the
>>> + * zswap LRU).
>>> + */
>>> + if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
>>> + sc->nr_scanned = 0;
>>> + return SHRINK_STOP;
>>> + }
>>> +
>>> + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
>>> + &encountered_page_in_swapcache);
>>> +
>>> + if (encountered_page_in_swapcache)
>>> + return SHRINK_STOP;
>>> +
>>> + return shrink_ret ? shrink_ret : SHRINK_STOP;
>>> +}
>>> +
>>> +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>>> + struct shrink_control *sc)
>>> +{
>>> + struct zswap_pool *pool = shrinker->private_data;
>>> + struct mem_cgroup *memcg = sc->memcg;
>>> + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
>>> + unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
>>> +
>>> +#ifdef CONFIG_MEMCG_KMEM
>>> + cgroup_rstat_flush(memcg->css.cgroup);
>>> + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
>>> + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
>>> +#else
>>> + /* use pool stats instead of memcg stats */
>>> + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
>>> + nr_stored = atomic_read(&pool->nr_stored);
>>> +#endif
>>> +
>>> + if (!zswap_shrinker_enabled || !nr_stored)
>> When I tested with this series, with !zswap_shrinker_enabled in the default case,
>> I found the performance is much worse than that without this patch.
>>
>> Testcase: memory.max=2G, zswap enabled, kernel build -j32 in a tmpfs directory.
>>
>> The reason seems to be the cgroup_rstat_flush() above, which caused a lot of rstat
>> lock contention on the zswap_store() path. And if I put the "zswap_shrinker_enabled"
>> check above the cgroup_rstat_flush(), the performance becomes much better.
>>
>> Maybe we can put the "zswap_shrinker_enabled" check above cgroup_rstat_flush()?
>
> Yes, we should do nothing if !zswap_shrinker_enabled. We should also
> use mem_cgroup_flush_stats() here like other places unless accuracy is
> crucial, which I doubt given that reclaim uses
> mem_cgroup_flush_stats().
>
Yes. After changing to use mem_cgroup_flush_stats() here, the performance
becomes much better.
> mem_cgroup_flush_stats() has some thresholding to make sure we don't
> do flushes unnecessarily, and I have a pending series in mm-unstable
> that makes that thresholding per-memcg. Keep in mind that adding a
> call to mem_cgroup_flush_stats() will cause a conflict in mm-unstable,
My test branch is linux-next 20231205, and it's all good after changing
to use mem_cgroup_flush_stats(memcg).
> because the series there adds a memcg argument to
> mem_cgroup_flush_stats(). That should be easily amenable though, I can
> post a fixlet for my series to add the memcg argument there on top of
> users if needed.
>
It's great. Thanks!
* Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
2023-12-06 6:43 ` Chengming Zhou
@ 2023-12-06 7:36 ` Yosry Ahmed
2023-12-06 7:39 ` Chengming Zhou
0 siblings, 1 reply; 48+ messages in thread
From: Yosry Ahmed @ 2023-12-06 7:36 UTC (permalink / raw)
To: Chengming Zhou
Cc: Nhat Pham, akpm, hannes, cerasuolodomenico, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
On Tue, Dec 5, 2023 at 10:43 PM Chengming Zhou <chengming.zhou@linux.dev> wrote:
>
> On 2023/12/6 13:59, Yosry Ahmed wrote:
> > [..]
> >>> @@ -526,6 +582,102 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> >>> return entry;
> >>> }
> >>>
> >>> +/*********************************
> >>> +* shrinker functions
> >>> +**********************************/
> >>> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> >>> + spinlock_t *lock, void *arg);
> >>> +
> >>> +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> >>> + struct shrink_control *sc)
> >>> +{
> >>> + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> >>> + unsigned long shrink_ret, nr_protected, lru_size;
> >>> + struct zswap_pool *pool = shrinker->private_data;
> >>> + bool encountered_page_in_swapcache = false;
> >>> +
> >>> + nr_protected =
> >>> + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> >>> + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> >>> +
> >>> + /*
> >>> + * Abort if the shrinker is disabled or if we are shrinking into the
> >>> + * protected region.
> >>> + *
> >>> + * This short-circuiting is necessary because if we have too many multiple
> >>> + * concurrent reclaimers getting the freeable zswap object counts at the
> >>> + * same time (before any of them made reasonable progress), the total
> >>> + * number of reclaimed objects might be more than the number of unprotected
> >>> + * objects (i.e the reclaimers will reclaim into the protected area of the
> >>> + * zswap LRU).
> >>> + */
> >>> + if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
> >>> + sc->nr_scanned = 0;
> >>> + return SHRINK_STOP;
> >>> + }
> >>> +
> >>> + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> >>> + &encountered_page_in_swapcache);
> >>> +
> >>> + if (encountered_page_in_swapcache)
> >>> + return SHRINK_STOP;
> >>> +
> >>> + return shrink_ret ? shrink_ret : SHRINK_STOP;
> >>> +}
> >>> +
> >>> +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> >>> + struct shrink_control *sc)
> >>> +{
> >>> + struct zswap_pool *pool = shrinker->private_data;
> >>> + struct mem_cgroup *memcg = sc->memcg;
> >>> + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> >>> + unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
> >>> +
> >>> +#ifdef CONFIG_MEMCG_KMEM
> >>> + cgroup_rstat_flush(memcg->css.cgroup);
> >>> + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> >>> + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> >>> +#else
> >>> + /* use pool stats instead of memcg stats */
> >>> + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> >>> + nr_stored = atomic_read(&pool->nr_stored);
> >>> +#endif
> >>> +
> >>> + if (!zswap_shrinker_enabled || !nr_stored)
> >> When I tested with this series, with !zswap_shrinker_enabled in the default case,
> >> I found the performance is much worse than that without this patch.
> >>
> >> Testcase: memory.max=2G, zswap enabled, kernel build -j32 in a tmpfs directory.
> >>
> >> The reason seems to be the cgroup_rstat_flush() above, which caused a lot of rstat
> >> lock contention on the zswap_store() path. And if I put the "zswap_shrinker_enabled"
> >> check above the cgroup_rstat_flush(), the performance becomes much better.
> >>
> >> Maybe we can put the "zswap_shrinker_enabled" check above cgroup_rstat_flush()?
> >
> > Yes, we should do nothing if !zswap_shrinker_enabled. We should also
> > use mem_cgroup_flush_stats() here like other places unless accuracy is
> > crucial, which I doubt given that reclaim uses
> > mem_cgroup_flush_stats().
> >
>
> Yes. After changing to use mem_cgroup_flush_stats() here, the performance
> becomes much better.
>
> > mem_cgroup_flush_stats() has some thresholding to make sure we don't
> > do flushes unnecessarily, and I have a pending series in mm-unstable
> > that makes that thresholding per-memcg. Keep in mind that adding a
> > call to mem_cgroup_flush_stats() will cause a conflict in mm-unstable,
>
> My test branch is linux-next 20231205, and it's all good after changing
> to use mem_cgroup_flush_stats(memcg).
Thanks for reporting back. We should still move the
zswap_shrinker_enabled check ahead, no need to even call
mem_cgroup_flush_stats() if we will do nothing anyway.
>
> > because the series there adds a memcg argument to
> > mem_cgroup_flush_stats(). That should be easily amenable though, I can
> > post a fixlet for my series to add the memcg argument there on top of
> > users if needed.
> >
>
> It's great. Thanks!
>
* Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
2023-12-06 7:36 ` Yosry Ahmed
@ 2023-12-06 7:39 ` Chengming Zhou
0 siblings, 0 replies; 48+ messages in thread
From: Chengming Zhou @ 2023-12-06 7:39 UTC (permalink / raw)
To: Yosry Ahmed
Cc: Nhat Pham, akpm, hannes, cerasuolodomenico, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
On 2023/12/6 15:36, Yosry Ahmed wrote:
> On Tue, Dec 5, 2023 at 10:43 PM Chengming Zhou <chengming.zhou@linux.dev> wrote:
>>
>> On 2023/12/6 13:59, Yosry Ahmed wrote:
>>> [..]
>>>>> @@ -526,6 +582,102 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
>>>>> return entry;
>>>>> }
>>>>>
>>>>> +/*********************************
>>>>> +* shrinker functions
>>>>> +**********************************/
>>>>> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
>>>>> + spinlock_t *lock, void *arg);
>>>>> +
>>>>> +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
>>>>> + struct shrink_control *sc)
>>>>> +{
>>>>> + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
>>>>> + unsigned long shrink_ret, nr_protected, lru_size;
>>>>> + struct zswap_pool *pool = shrinker->private_data;
>>>>> + bool encountered_page_in_swapcache = false;
>>>>> +
>>>>> + nr_protected =
>>>>> + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
>>>>> + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
>>>>> +
>>>>> + /*
>>>>> + * Abort if the shrinker is disabled or if we are shrinking into the
>>>>> + * protected region.
>>>>> + *
>>>>> + * This short-circuiting is necessary because if we have too many multiple
>>>>> + * concurrent reclaimers getting the freeable zswap object counts at the
>>>>> + * same time (before any of them made reasonable progress), the total
>>>>> + * number of reclaimed objects might be more than the number of unprotected
>>>>> + * objects (i.e the reclaimers will reclaim into the protected area of the
>>>>> + * zswap LRU).
>>>>> + */
>>>>> + if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
>>>>> + sc->nr_scanned = 0;
>>>>> + return SHRINK_STOP;
>>>>> + }
>>>>> +
>>>>> + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
>>>>> + &encountered_page_in_swapcache);
>>>>> +
>>>>> + if (encountered_page_in_swapcache)
>>>>> + return SHRINK_STOP;
>>>>> +
>>>>> + return shrink_ret ? shrink_ret : SHRINK_STOP;
>>>>> +}
>>>>> +
>>>>> +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
>>>>> + struct shrink_control *sc)
>>>>> +{
>>>>> + struct zswap_pool *pool = shrinker->private_data;
>>>>> + struct mem_cgroup *memcg = sc->memcg;
>>>>> + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
>>>>> + unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
>>>>> +
>>>>> +#ifdef CONFIG_MEMCG_KMEM
>>>>> + cgroup_rstat_flush(memcg->css.cgroup);
>>>>> + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
>>>>> + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
>>>>> +#else
>>>>> + /* use pool stats instead of memcg stats */
>>>>> + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
>>>>> + nr_stored = atomic_read(&pool->nr_stored);
>>>>> +#endif
>>>>> +
>>>>> + if (!zswap_shrinker_enabled || !nr_stored)
>>>> When I tested with this series, with !zswap_shrinker_enabled in the default case,
>>>> I found the performance is much worse than that without this patch.
>>>>
>>>> Testcase: memory.max=2G, zswap enabled, kernel build -j32 in a tmpfs directory.
>>>>
>>>> The reason seems to be the cgroup_rstat_flush() above, which caused a lot of rstat
>>>> lock contention on the zswap_store() path. And if I put the "zswap_shrinker_enabled"
>>>> check above the cgroup_rstat_flush(), the performance becomes much better.
>>>>
>>>> Maybe we can put the "zswap_shrinker_enabled" check above cgroup_rstat_flush()?
>>>
>>> Yes, we should do nothing if !zswap_shrinker_enabled. We should also
>>> use mem_cgroup_flush_stats() here like other places unless accuracy is
>>> crucial, which I doubt given that reclaim uses
>>> mem_cgroup_flush_stats().
>>>
>>
>> Yes. After changing to use mem_cgroup_flush_stats() here, the performance
>> becomes much better.
>>
>>> mem_cgroup_flush_stats() has some thresholding to make sure we don't
>>> do flushes unnecessarily, and I have a pending series in mm-unstable
>>> that makes that thresholding per-memcg. Keep in mind that adding a
>>> call to mem_cgroup_flush_stats() will cause a conflict in mm-unstable,
>>
>> My test branch is linux-next 20231205, and it's all good after changing
>> to use mem_cgroup_flush_stats(memcg).
>
> Thanks for reporting back. We should still move the
> zswap_shrinker_enabled check ahead, no need to even call
> mem_cgroup_flush_stats() if we will do nothing anyway.
>
Yes, agree!
* Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
2023-12-06 5:59 ` Yosry Ahmed
2023-12-06 6:43 ` Chengming Zhou
@ 2023-12-06 16:56 ` Nhat Pham
2023-12-06 19:47 ` Nhat Pham
1 sibling, 1 reply; 48+ messages in thread
From: Nhat Pham @ 2023-12-06 16:56 UTC (permalink / raw)
To: Yosry Ahmed
Cc: Chengming Zhou, akpm, hannes, cerasuolodomenico, sjenning,
ddstreet, vitaly.wool, mhocko, roman.gushchin, shakeelb,
muchun.song, chrisl, linux-mm, kernel-team, linux-kernel, cgroups,
linux-doc, linux-kselftest, shuah
On Tue, Dec 5, 2023 at 10:00 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> [..]
> > > @@ -526,6 +582,102 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> > > return entry;
> > > }
> > >
> > > +/*********************************
> > > +* shrinker functions
> > > +**********************************/
> > > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > > + spinlock_t *lock, void *arg);
> > > +
> > > +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> > > + struct shrink_control *sc)
> > > +{
> > > + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> > > + unsigned long shrink_ret, nr_protected, lru_size;
> > > + struct zswap_pool *pool = shrinker->private_data;
> > > + bool encountered_page_in_swapcache = false;
> > > +
> > > + nr_protected =
> > > + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> > > + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> > > +
> > > + /*
> > > + * Abort if the shrinker is disabled or if we are shrinking into the
> > > + * protected region.
> > > + *
> > > + * This short-circuiting is necessary because if we have too many multiple
> > > + * concurrent reclaimers getting the freeable zswap object counts at the
> > > + * same time (before any of them made reasonable progress), the total
> > > + * number of reclaimed objects might be more than the number of unprotected
> > > + * objects (i.e the reclaimers will reclaim into the protected area of the
> > > + * zswap LRU).
> > > + */
> > > + if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
> > > + sc->nr_scanned = 0;
> > > + return SHRINK_STOP;
> > > + }
> > > +
> > > + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> > > + &encountered_page_in_swapcache);
> > > +
> > > + if (encountered_page_in_swapcache)
> > > + return SHRINK_STOP;
> > > +
> > > + return shrink_ret ? shrink_ret : SHRINK_STOP;
> > > +}
> > > +
> > > +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> > > + struct shrink_control *sc)
> > > +{
> > > + struct zswap_pool *pool = shrinker->private_data;
> > > + struct mem_cgroup *memcg = sc->memcg;
> > > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> > > + unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
> > > +
> > > +#ifdef CONFIG_MEMCG_KMEM
> > > + cgroup_rstat_flush(memcg->css.cgroup);
> > > + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> > > + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> > > +#else
> > > + /* use pool stats instead of memcg stats */
> > > + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> > > + nr_stored = atomic_read(&pool->nr_stored);
> > > +#endif
> > > +
> > > + if (!zswap_shrinker_enabled || !nr_stored)
> > When I tested with this series, with !zswap_shrinker_enabled in the default case,
> > I found the performance is much worse than that without this patch.
> >
> > Testcase: memory.max=2G, zswap enabled, kernel build -j32 in a tmpfs directory.
> >
> > The reason seems to be the cgroup_rstat_flush() above, which caused a lot of rstat
> > lock contention on the zswap_store() path. And if I put the "zswap_shrinker_enabled"
> > check above the cgroup_rstat_flush(), the performance becomes much better.
> >
> > Maybe we can put the "zswap_shrinker_enabled" check above cgroup_rstat_flush()?
>
> Yes, we should do nothing if !zswap_shrinker_enabled. We should also
> use mem_cgroup_flush_stats() here like other places unless accuracy is
> crucial, which I doubt given that reclaim uses
> mem_cgroup_flush_stats().
Ah, good points on both suggestions. We should not do extra work for
non-users. And this is a best-effort approximation of the memory
saving factor, so as long as it is not *too* far off I think it's
acceptable.
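Concretely, the approximation is one scaling step: the freeable LRU count gets multiplied by nr_backing / nr_stored, i.e. by the average number of backing pages per stored object. A stand-alone sketch of that arithmetic (plain C; the mult_frac() below is a local helper written for this example, re-deriving the usual x * numer / denom split):

#include <stdio.h>

/* x * numer / denom, split to avoid overflowing the intermediate product. */
static unsigned long mult_frac(unsigned long x, unsigned long numer,
			       unsigned long denom)
{
	unsigned long quot = x / denom;
	unsigned long rem = x % denom;

	return quot * numer + rem * numer / denom;
}

int main(void)
{
	unsigned long nr_freeable = 1000;	/* unprotected objects on the zswap LRU */
	unsigned long nr_stored = 4000;		/* objects stored in zswap for this memcg */
	unsigned long nr_backing = 1000;	/* pages actually backing those objects */

	/*
	 * 4:1 compression ratio => count() reports only a quarter of the
	 * freeable objects, so well-compressed memory is written back less
	 * aggressively than poorly-compressed memory.
	 */
	printf("reported freeable: %lu\n",
	       mult_frac(nr_freeable, nr_backing, nr_stored));
	return 0;
}

So slightly stale stats only shift the reported target proportionally, which is why a best-effort number is fine here.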
>
> mem_cgroup_flush_stats() has some thresholding to make sure we don't
> do flushes unnecessarily, and I have a pending series in mm-unstable
> that makes that thresholding per-memcg. Keep in mind that adding a
> call to mem_cgroup_flush_stats() will cause a conflict in mm-unstable,
> because the series there adds a memcg argument to
> mem_cgroup_flush_stats(). That should be easily amenable though, I can
> post a fixlet for my series to add the memcg argument there on top of
> users if needed.
Hmm so how should we proceed from here? How about this:
a) I can send a fixlet to move the enablement check above the stats
flushing + use mem_cgroup_flush_stats
b) Then maybe, you can send a fixlet to update this new callsite?
Does that sound reasonable?
>
> >
> > Thanks!
> >
> > > + return 0;
> > > +
> > > + nr_protected =
> > > + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> > > + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> > > + /*
> > > + * Subtract the lru size by an estimate of the number of pages
> > > + * that should be protected.
> > > + */
> > > + nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
> > > +
> > > + /*
> > > + * Scale the number of freeable pages by the memory saving factor.
> > > + * This ensures that the better zswap compresses memory, the fewer
> > > + * pages we will evict to swap (as it will otherwise incur IO for
> > > + * relatively small memory saving).
> > > + */
> > > + return mult_frac(nr_freeable, nr_backing, nr_stored);
> > > +}
> > > +
> > > +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> > > +{
> > > + pool->shrinker =
> > > + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> > > + if (!pool->shrinker)
> > > + return;
> > > +
> > > + pool->shrinker->private_data = pool;
> > > + pool->shrinker->scan_objects = zswap_shrinker_scan;
> > > + pool->shrinker->count_objects = zswap_shrinker_count;
> > > + pool->shrinker->batch = 0;
> > > + pool->shrinker->seeks = DEFAULT_SEEKS;
> > > +}
> > > +
> > > /*********************************
> > > * per-cpu code
> > > **********************************/
> [..]
* [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure (fix)
2023-11-30 19:40 ` [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure Nhat Pham
2023-12-06 5:51 ` Chengming Zhou
@ 2023-12-06 19:44 ` Nhat Pham
1 sibling, 0 replies; 48+ messages in thread
From: Nhat Pham @ 2023-12-06 19:44 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
chrisl, linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
Check shrinker enablement early, and use less costly stat flushing.
Suggested-by: Yosry Ahmed <yosryahmed@google.com>
Suggested-by: Chengming Zhou <chengming.zhou@linux.dev>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
mm/zswap.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index 27c749f6c1ba..d8ecd79120f3 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -596,13 +596,17 @@ static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
struct zswap_pool *pool = shrinker->private_data;
bool encountered_page_in_swapcache = false;
+ if (!zswap_shrinker_enabled) {
+ sc->nr_scanned = 0;
+ return SHRINK_STOP;
+ }
+
nr_protected =
atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
lru_size = list_lru_shrink_count(&pool->list_lru, sc);
/*
- * Abort if the shrinker is disabled or if we are shrinking into the
- * protected region.
+ * Abort if we are shrinking into the protected region.
*
* This short-circuiting is necessary because if we have too many multiple
* concurrent reclaimers getting the freeable zswap object counts at the
@@ -611,7 +615,7 @@ static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
* objects (i.e the reclaimers will reclaim into the protected area of the
* zswap LRU).
*/
- if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
+ if (nr_protected >= lru_size - sc->nr_to_scan) {
sc->nr_scanned = 0;
return SHRINK_STOP;
}
@@ -633,8 +637,11 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
+ if (!zswap_shrinker_enabled)
+ return 0;
+
#ifdef CONFIG_MEMCG_KMEM
- cgroup_rstat_flush(memcg->css.cgroup);
+ mem_cgroup_flush_stats();
nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
#else
@@ -643,7 +650,7 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
nr_stored = atomic_read(&pool->nr_stored);
#endif
- if (!zswap_shrinker_enabled || !nr_stored)
+ if (!nr_stored)
return 0;
nr_protected =
--
2.34.1
* Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
2023-12-06 16:56 ` Nhat Pham
@ 2023-12-06 19:47 ` Nhat Pham
2023-12-06 21:13 ` Yosry Ahmed
2023-12-07 2:32 ` Chengming Zhou
0 siblings, 2 replies; 48+ messages in thread
From: Nhat Pham @ 2023-12-06 19:47 UTC (permalink / raw)
To: Yosry Ahmed
Cc: Chengming Zhou, akpm, hannes, cerasuolodomenico, sjenning,
ddstreet, vitaly.wool, mhocko, roman.gushchin, shakeelb,
muchun.song, chrisl, linux-mm, kernel-team, linux-kernel, cgroups,
linux-doc, linux-kselftest, shuah
[...]
>
> Hmm so how should we proceed from here? How about this:
>
> a) I can send a fixlet to move the enablement check above the stats
> flushing + use mem_cgroup_flush_stats
> b) Then maybe, you can send a fixlet to update this new callsite?
>
> Does that sound reasonable?
I just sent out the fixlet. Yosry and Chengming, let me know if that
looks good. Thank you both for detecting this issue and proposing the
fix!
* Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
2023-12-06 19:47 ` Nhat Pham
@ 2023-12-06 21:13 ` Yosry Ahmed
2023-12-07 2:32 ` Chengming Zhou
1 sibling, 0 replies; 48+ messages in thread
From: Yosry Ahmed @ 2023-12-06 21:13 UTC (permalink / raw)
To: Nhat Pham
Cc: Chengming Zhou, akpm, hannes, cerasuolodomenico, sjenning,
ddstreet, vitaly.wool, mhocko, roman.gushchin, shakeelb,
muchun.song, chrisl, linux-mm, kernel-team, linux-kernel, cgroups,
linux-doc, linux-kselftest, shuah
On Wed, Dec 6, 2023 at 11:47 AM Nhat Pham <nphamcs@gmail.com> wrote:
>
> [...]
> >
> > Hmm so how should we proceed from here? How about this:
> >
> > a) I can send a fixlet to move the enablement check above the stats
> > flushing + use mem_cgroup_flush_stats
> > b) Then maybe, you can send a fixlet to update this new callsite?
> >
> > Does that sound reasonable?
>
> I just sent out the fixlet. Yosry and Chengming, let me know if that
> looks good. Thank you both for detecting this issue and proposing the
> fix!
The fixlet looks good, and Andrew already took care of (b) before I
could send a followup fixlet out :)
* Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
2023-12-06 19:47 ` Nhat Pham
2023-12-06 21:13 ` Yosry Ahmed
@ 2023-12-07 2:32 ` Chengming Zhou
1 sibling, 0 replies; 48+ messages in thread
From: Chengming Zhou @ 2023-12-07 2:32 UTC (permalink / raw)
To: Nhat Pham, Yosry Ahmed
Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
mhocko, roman.gushchin, shakeelb, muchun.song, chrisl, linux-mm,
kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
shuah
On 2023/12/7 03:47, Nhat Pham wrote:
> [...]
>>
>> Hmm so how should we proceed from here? How about this:
>>
>> a) I can send a fixlet to move the enablement check above the stats
>> flushing + use mem_cgroup_flush_stats
>> b) Then maybe, you can send a fixlet to update this new callsite?
>>
>> Does that sound reasonable?
>
> I just sent out the fixlet. Yosry and Chengming, let me know if that
> looks good. Thank you both for detecting this issue and proposing the
> fix!
Yeah, also looks good to me. Thanks!
--
Best regards,
Chengming Zhou
* Re: [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat (fix)
2023-12-05 19:33 ` [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat (fix) Nhat Pham
2023-12-05 20:05 ` Yosry Ahmed
@ 2023-12-08 0:25 ` Chris Li
1 sibling, 0 replies; 48+ messages in thread
From: Chris Li @ 2023-12-08 0:25 UTC (permalink / raw)
To: Nhat Pham
Cc: akpm, hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
Acked-by: Chris Li <chrisl@kernel.org> (Google)
Chris
On Tue, Dec 5, 2023 at 11:33 AM Nhat Pham <nphamcs@gmail.com> wrote:
>
> Rename ZSWP_WB to ZSWPWB to better match the existing counters naming
> scheme.
>
> Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---
> include/linux/vm_event_item.h | 2 +-
> mm/memcontrol.c | 2 +-
> mm/vmstat.c | 2 +-
> mm/zswap.c | 4 ++--
> 4 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> index f4569ad98edf..747943bc8cc2 100644
> --- a/include/linux/vm_event_item.h
> +++ b/include/linux/vm_event_item.h
> @@ -142,7 +142,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
> #ifdef CONFIG_ZSWAP
> ZSWPIN,
> ZSWPOUT,
> - ZSWP_WB,
> + ZSWPWB,
> #endif
> #ifdef CONFIG_X86
> DIRECT_MAP_LEVEL2_SPLIT,
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 21d79249c8b4..0286b7d38832 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -703,7 +703,7 @@ static const unsigned int memcg_vm_event_stat[] = {
> #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> ZSWPIN,
> ZSWPOUT,
> - ZSWP_WB,
> + ZSWPWB,
> #endif
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> THP_FAULT_ALLOC,
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 2249f85e4a87..cfd8d8256f8e 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1401,7 +1401,7 @@ const char * const vmstat_text[] = {
> #ifdef CONFIG_ZSWAP
> "zswpin",
> "zswpout",
> - "zswp_wb",
> + "zswpwb",
> #endif
> #ifdef CONFIG_X86
> "direct_map_level2_splits",
> diff --git a/mm/zswap.c b/mm/zswap.c
> index c65b8ccc6b72..0fb0945c0031 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -761,9 +761,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> zswap_written_back_pages++;
>
> if (entry->objcg)
> - count_objcg_event(entry->objcg, ZSWP_WB);
> + count_objcg_event(entry->objcg, ZSWPWB);
>
> - count_vm_event(ZSWP_WB);
> + count_vm_event(ZSWPWB);
> /*
> * Writeback started successfully, the page now belongs to the
> * swapcache. Drop the entry from zswap - unless invalidate already
> --
> 2.34.1
>
* Re: [PATCH v8 5/6] selftests: cgroup: update per-memcg zswap writeback selftest
2023-11-30 19:40 ` [PATCH v8 5/6] selftests: cgroup: update per-memcg zswap writeback selftest Nhat Pham
@ 2023-12-08 0:43 ` Chris Li
0 siblings, 0 replies; 48+ messages in thread
From: Chris Li @ 2023-12-08 0:43 UTC (permalink / raw)
To: Nhat Pham
Cc: akpm, hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
linux-kselftest, shuah
Hi Nhat,
Thanks for the self test.
Acked-by: Chris Li <chrisl@kernel.org> (Google)
Chris
On Thu, Nov 30, 2023 at 11:40 AM Nhat Pham <nphamcs@gmail.com> wrote:
>
> From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
>
> The memcg-zswap self test is updated to adjust to the behavior change
> implemented by commit 87730b165089 ("zswap: make shrinking memcg-aware"),
> where zswap performs writeback for a specific memcg.
>
> Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---
> tools/testing/selftests/cgroup/test_zswap.c | 74 ++++++++++++++-------
> 1 file changed, 50 insertions(+), 24 deletions(-)
>
> diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
> index c99d2adaca3f..47fdaa146443 100644
> --- a/tools/testing/selftests/cgroup/test_zswap.c
> +++ b/tools/testing/selftests/cgroup/test_zswap.c
> @@ -50,9 +50,9 @@ static int get_zswap_stored_pages(size_t *value)
> return read_int("/sys/kernel/debug/zswap/stored_pages", value);
> }
>
> -static int get_zswap_written_back_pages(size_t *value)
> +static int get_cg_wb_count(const char *cg)
> {
> - return read_int("/sys/kernel/debug/zswap/written_back_pages", value);
> + return cg_read_key_long(cg, "memory.stat", "zswp_wb");
> }
>
> static long get_zswpout(const char *cgroup)
> @@ -73,6 +73,24 @@ static int allocate_bytes(const char *cgroup, void *arg)
> return 0;
> }
>
> +static char *setup_test_group_1M(const char *root, const char *name)
> +{
> + char *group_name = cg_name(root, name);
> +
> + if (!group_name)
> + return NULL;
> + if (cg_create(group_name))
> + goto fail;
> + if (cg_write(group_name, "memory.max", "1M")) {
> + cg_destroy(group_name);
> + goto fail;
> + }
> + return group_name;
> +fail:
> + free(group_name);
> + return NULL;
> +}
> +
> /*
> * Sanity test to check that pages are written into zswap.
> */
> @@ -117,43 +135,51 @@ static int test_zswap_usage(const char *root)
>
> /*
> * When trying to store a memcg page in zswap, if the memcg hits its memory
> - * limit in zswap, writeback should not be triggered.
> - *
> - * This was fixed with commit 0bdf0efa180a("zswap: do not shrink if cgroup may
> - * not zswap"). Needs to be revised when a per memcg writeback mechanism is
> - * implemented.
> + * limit in zswap, writeback should affect only the zswapped pages of that
> + * memcg.
> */
> static int test_no_invasive_cgroup_shrink(const char *root)
> {
> - size_t written_back_before, written_back_after;
> int ret = KSFT_FAIL;
> - char *test_group;
> + size_t control_allocation_size = MB(10);
> + char *control_allocation, *wb_group = NULL, *control_group = NULL;
>
> /* Set up */
> - test_group = cg_name(root, "no_shrink_test");
> - if (!test_group)
> - goto out;
> - if (cg_create(test_group))
> + wb_group = setup_test_group_1M(root, "per_memcg_wb_test1");
> + if (!wb_group)
> + return KSFT_FAIL;
> + if (cg_write(wb_group, "memory.zswap.max", "10K"))
> goto out;
> - if (cg_write(test_group, "memory.max", "1M"))
> + control_group = setup_test_group_1M(root, "per_memcg_wb_test2");
> + if (!control_group)
> goto out;
> - if (cg_write(test_group, "memory.zswap.max", "10K"))
> +
> + /* Push some test_group2 memory into zswap */
> + if (cg_enter_current(control_group))
> goto out;
> - if (get_zswap_written_back_pages(&written_back_before))
> + control_allocation = malloc(control_allocation_size);
> + for (int i = 0; i < control_allocation_size; i += 4095)
> + control_allocation[i] = 'a';
> + if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
> goto out;
>
> - /* Allocate 10x memory.max to push memory into zswap */
> - if (cg_run(test_group, allocate_bytes, (void *)MB(10)))
> + /* Allocate 10x memory.max to push wb_group memory into zswap and trigger wb */
> + if (cg_run(wb_group, allocate_bytes, (void *)MB(10)))
> goto out;
>
> - /* Verify that no writeback happened because of the memcg allocation */
> - if (get_zswap_written_back_pages(&written_back_after))
> - goto out;
> - if (written_back_after == written_back_before)
> + /* Verify that only zswapped memory from gwb_group has been written back */
> + if (get_cg_wb_count(wb_group) > 0 && get_cg_wb_count(control_group) == 0)
> ret = KSFT_PASS;
> out:
> - cg_destroy(test_group);
> - free(test_group);
> + cg_enter_current(root);
> + if (control_group) {
> + cg_destroy(control_group);
> + free(control_group);
> + }
> + cg_destroy(wb_group);
> + free(wb_group);
> + if (control_allocation)
> + free(control_allocation);
> return ret;
> }
>
> --
> 2.34.1
>
end of thread
Thread overview: 48+ messages
2023-11-30 19:40 [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback Nhat Pham
2023-11-30 19:40 ` [PATCH v8 1/6] list_lru: allows explicit memcg and NUMA node selection Nhat Pham
2023-11-30 19:57 ` Matthew Wilcox
2023-11-30 20:07 ` Nhat Pham
2023-11-30 20:35 ` Johannes Weiner
2023-12-04 8:30 ` Chengming Zhou
2023-12-04 17:48 ` Nhat Pham
2023-12-05 2:28 ` Chengming Zhou
2023-12-05 0:30 ` Chris Li
2023-12-05 17:17 ` Johannes Weiner
2023-11-30 19:40 ` [PATCH v8 2/6] memcontrol: implement mem_cgroup_tryget_online() Nhat Pham
2023-12-05 0:35 ` Chris Li
2023-12-05 1:39 ` Nhat Pham
2023-12-06 0:16 ` Chris Li
2023-12-06 1:30 ` Nhat Pham
2023-12-05 18:02 ` Yosry Ahmed
2023-12-05 19:55 ` Nhat Pham
2023-11-30 19:40 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware Nhat Pham
2023-12-05 18:20 ` Yosry Ahmed
2023-12-05 18:49 ` Nhat Pham
2023-12-05 18:59 ` Yosry Ahmed
2023-12-05 19:09 ` Nhat Pham
2023-12-05 19:54 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware (fix) Nhat Pham
2023-12-06 0:10 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware Chris Li
2023-12-06 1:53 ` Nhat Pham
2023-12-06 3:03 ` Nhat Pham
2023-12-06 3:06 ` [PATCH v8 3/6] zswap: make shrinking memcg-aware (fix 2) Nhat Pham
2023-11-30 19:40 ` [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat Nhat Pham
2023-12-05 18:21 ` Yosry Ahmed
2023-12-05 18:56 ` Nhat Pham
2023-12-05 19:33 ` [PATCH v8 4/6] mm: memcg: add per-memcg zswap writeback stat (fix) Nhat Pham
2023-12-05 20:05 ` Yosry Ahmed
2023-12-08 0:25 ` Chris Li
2023-11-30 19:40 ` [PATCH v8 5/6] selftests: cgroup: update per-memcg zswap writeback selftest Nhat Pham
2023-12-08 0:43 ` Chris Li
2023-11-30 19:40 ` [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure Nhat Pham
2023-12-06 5:51 ` Chengming Zhou
2023-12-06 5:59 ` Yosry Ahmed
2023-12-06 6:43 ` Chengming Zhou
2023-12-06 7:36 ` Yosry Ahmed
2023-12-06 7:39 ` Chengming Zhou
2023-12-06 16:56 ` Nhat Pham
2023-12-06 19:47 ` Nhat Pham
2023-12-06 21:13 ` Yosry Ahmed
2023-12-07 2:32 ` Chengming Zhou
2023-12-06 19:44 ` [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure (fix) Nhat Pham
2023-11-30 21:19 ` [PATCH v8 0/6] workload-specific and memory pressure-driven zswap writeback Andrew Morton
2023-12-06 4:10 ` Bagas Sanjaya