* [PATCH 0/2] workload-specific and memory pressure-driven zswap writeback
@ 2023-09-11 16:40 Nhat Pham
2023-09-11 16:40 ` [PATCH 1/2] zswap: make shrinking memcg-aware Nhat Pham
2023-09-11 16:40 ` [PATCH 2/2] zswap: shrinks zswap pool based on memory pressure Nhat Pham
0 siblings, 2 replies; 5+ messages in thread
From: Nhat Pham @ 2023-09-11 16:40 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
linux-mm, kernel-team, linux-kernel, cgroups
There are currently several issues with zswap writeback:
1. There is only a single global LRU for zswap. This makes it impossible
to perform workload-specific shrinking - a memcg under memory
pressure cannot determine which pages in the pool it owns, and often
ends up writing pages from other memcgs. This issue has been
previously observed in practice and mitigated by simply disabling
memcg-initiated shrinking:
https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
But this solution leaves a lot to be desired, as we still do not have an
avenue for a memcg to free up its own memory locked up in zswap.
2. We only shrink the zswap pool when the user-defined limit is hit.
This means that if we set the limit too high, cold data that are
unlikely to be used again will reside in the pool, wasting precious
memory. It is hard to predict how much zswap space will be needed
ahead of time, as this depends on the workload (specifically, on
factors such as memory access patterns and compressibility of the
memory pages).
This patch series solves these issues by separating the global zswap
LRU into per-memcg and per-NUMA LRUs, and performing workload-specific
(i.e. memcg- and NUMA-aware) zswap writeback under memory pressure. The
new shrinker does not have any parameter that must be tuned by the
user, and can be opted in or out on a per-memcg basis.
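As a rough illustration of the new layout, the sketch below shows how an
entry is placed on the LRU sublist of the memcg and NUMA node that own it,
using the __list_lru_add() helper added in patch 1. It mirrors the
zswap_lru_add() helper in that patch; error handling is omitted:

static void lru_add_sketch(struct list_lru *lru, struct zswap_entry *entry)
{
	/* resolve the owning memcg from the entry's obj_cgroup, if any */
	struct mem_cgroup *memcg = entry->objcg ?
		get_mem_cgroup_from_objcg(entry->objcg) : NULL;

	/*
	 * nid and memcg select the sublist; a NULL memcg falls back to
	 * the root sublist. mem_cgroup_put() tolerates NULL.
	 */
	__list_lru_add(lru, &entry->lru, entry->nid, memcg);
	mem_cgroup_put(memcg);
}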
On a benchmark that we have run:
(without the shrinker)
real -- mean: 153.27s, median: 153.199s
sys -- mean: 541.652s, median: 541.903s
user -- mean: 4384.9673999999995s, median: 4385.471s
(with the shrinker)
real -- mean: 151.4956s, median: 151.456s
sys -- mean: 461.14639999999997s, median: 465.656s
user -- mean: 4384.7118s, median: 4384.675s
We observed a 14-15% reduction in kernel CPU time, which translated to
over 1% reduction in real time.
On another benchmark, where there was a lot more cold memory residing in
zswap, we observed even more pronounced gains:
(without the shrinker)
real -- mean: 157.52519999999998s, median: 157.281s
sys -- mean: 769.3082s, median: 780.545s
user -- mean: 4378.1622s, median: 4378.286s
(with the shrinker)
real -- mean: 152.9608s, median: 152.845s
sys -- mean: 517.4446s, median: 506.749s
user -- mean: 4387.694s, median: 4387.935s
Here, we saw around 32-35% reduction in kernel CPU time, which
translated to 2.8% reduction in real time. These results confirm our
hypothesis that the shrinker is more helpful the more cold memory we
have.
Domenico Cerasuolo (1):
zswap: make shrinking memcg-aware
Nhat Pham (1):
zswap: shrinks zswap pool based on memory pressure
Documentation/admin-guide/mm/zswap.rst | 12 +
include/linux/list_lru.h | 39 +++
include/linux/memcontrol.h | 1 +
include/linux/mmzone.h | 14 +
include/linux/zswap.h | 9 +
mm/list_lru.c | 46 ++-
mm/memcontrol.c | 33 +++
mm/swap_state.c | 50 +++-
mm/zswap.c | 369 ++++++++++++++++++++++---
9 files changed, 518 insertions(+), 55 deletions(-)
--
2.34.1
* [PATCH 1/2] zswap: make shrinking memcg-aware
2023-09-11 16:40 [PATCH 0/2] workload-specific and memory pressure-driven zswap writeback Nhat Pham
@ 2023-09-11 16:40 ` Nhat Pham
2023-09-14 9:34 ` kernel test robot
2023-09-11 16:40 ` [PATCH 2/2] zswap: shrinks zswap pool based on memory pressure Nhat Pham
1 sibling, 1 reply; 5+ messages in thread
From: Nhat Pham @ 2023-09-11 16:40 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
linux-mm, kernel-team, linux-kernel, cgroups
From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Currently, we only have a single global LRU for zswap. This makes it
impossible to perform workload-specific shrinking - a memcg cannot
determine which pages in the pool it owns, and often ends up writing
pages from other memcgs. This issue has been previously observed in
practice and mitigated by simply disabling memcg-initiated shrinking:
https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
This patch fully resolves the issue by replacing the global zswap LRU
with memcg- and NUMA-specific LRUs, and modifying the reclaim logic
(a condensed sketch follows the list below):
a) When a store attempt hits a memcg limit, it now triggers a
synchronous reclaim attempt that, if successful, allows the new
hotter page to be accepted by zswap.
b) If the store attempt instead hits the global zswap limit, it will
trigger an asynchronous reclaim attempt, in which a memcg is
selected for reclaim in a round-robin-like fashion.
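A condensed sketch of these two paths, lifted from zswap_store() and
shrink_worker() in the diff below (locking, error handling and the
zombie-cgroup check are omitted):

	/* (a) in zswap_store(): synchronous, memcg-local reclaim */
	objcg = get_obj_cgroup_from_folio(folio);
	if (objcg && !obj_cgroup_may_zswap(objcg)) {
		memcg = get_mem_cgroup_from_objcg(objcg);
		/* walk only this memcg's LRU; reject the store on failure */
		if (shrink_memcg(memcg)) {
			css_put(&memcg->css);
			goto reject;
		}
		css_put(&memcg->css);
	}

	/* (b) in shrink_worker(): asynchronous, round-robin over memcgs */
	pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
	if (pool->next_shrink)
		ret = shrink_memcg(pool->next_shrink);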
Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Co-developed-by: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
include/linux/list_lru.h | 39 +++++++
include/linux/zswap.h | 9 ++
mm/list_lru.c | 46 ++++++--
mm/swap_state.c | 19 ++++
mm/zswap.c | 221 +++++++++++++++++++++++++++++++--------
5 files changed, 282 insertions(+), 52 deletions(-)
diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index b35968ee9fb5..b517b4e2c7c4 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -89,6 +89,24 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
*/
bool list_lru_add(struct list_lru *lru, struct list_head *item);
+/**
+ * __list_lru_add: add an element to a specific sublist.
+ * @list_lru: the lru pointer
+ * @item: the item to be added.
+ * @memcg: the cgroup of the sublist to add the item to.
+ * @nid: the node id of the sublist to add the item to.
+ *
+ * This function is similar to list_lru_add(), but it allows the caller to
+ * specify the sublist to which the item should be added. This can be useful
+ * when the list_head node is not necessarily in the same cgroup and NUMA node
+ * as the data it represents, such as zswap, where the list_head node could be
+ * from kswapd and the data from a different cgroup altogether.
+ *
+ * Return value: true if the list was updated, false otherwise
+ */
+bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg);
+
/**
* list_lru_del: delete an element to the lru list
* @list_lru: the lru pointer
@@ -102,6 +120,18 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item);
*/
bool list_lru_del(struct list_lru *lru, struct list_head *item);
+/**
+ * __list_lru_del: delete an element from a specific sublist.
+ * @list_lru: the lru pointer
+ * @item: the item to be deleted.
+ * @memcg: the cgroup of the sublist to delete the item from.
+ * @nid: the node id of the sublist to delete the item from.
+ *
+ * Return value: true if the list was updated, false otherwise.
+ */
+bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg);
+
/**
* list_lru_count_one: return the number of objects currently held by @lru
* @lru: the lru pointer.
@@ -137,6 +167,15 @@ void list_lru_isolate(struct list_lru_one *list, struct list_head *item);
void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
struct list_head *head);
+/*
+ * list_lru_putback: undo list_lru_isolate.
+ *
+ * Since we might have dropped the LRU lock in between, recompute list_lru_one
+ * from the node's id and memcg.
+ */
+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg);
+
typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
struct list_lru_one *list, spinlock_t *lock, void *cb_arg);
diff --git a/include/linux/zswap.h b/include/linux/zswap.h
index 2a60ce39cfde..04f80b64a09b 100644
--- a/include/linux/zswap.h
+++ b/include/linux/zswap.h
@@ -15,6 +15,8 @@ bool zswap_load(struct folio *folio);
void zswap_invalidate(int type, pgoff_t offset);
void zswap_swapon(int type);
void zswap_swapoff(int type);
+bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry);
+void zswap_insert_swpentry_into_lru(swp_entry_t swpentry);
#else
@@ -32,6 +34,13 @@ static inline void zswap_invalidate(int type, pgoff_t offset) {}
static inline void zswap_swapon(int type) {}
static inline void zswap_swapoff(int type) {}
+static inline bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry)
+{
+ return false;
+}
+
+static inline void zswap_insert_swpentry_into_lru(swp_entry_t swpentry) {}
+
#endif
#endif /* _LINUX_ZSWAP_H */
diff --git a/mm/list_lru.c b/mm/list_lru.c
index a05e5bef3b40..37c5c2ef6c0e 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -119,18 +119,26 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr,
bool list_lru_add(struct list_lru *lru, struct list_head *item)
{
int nid = page_to_nid(virt_to_page(item));
+ struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+ mem_cgroup_from_slab_obj(item) : NULL;
+
+ return __list_lru_add(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_add);
+
+bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg)
+{
struct list_lru_node *nlru = &lru->node[nid];
- struct mem_cgroup *memcg;
struct list_lru_one *l;
spin_lock(&nlru->lock);
if (list_empty(item)) {
- l = list_lru_from_kmem(lru, nid, item, &memcg);
+ l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
list_add_tail(item, &l->list);
/* Set shrinker bit if the first element was added */
if (!l->nr_items++)
- set_shrinker_bit(memcg, nid,
- lru_shrinker_id(lru));
+ set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
nlru->nr_items++;
spin_unlock(&nlru->lock);
return true;
@@ -138,17 +146,27 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
spin_unlock(&nlru->lock);
return false;
}
-EXPORT_SYMBOL_GPL(list_lru_add);
+EXPORT_SYMBOL_GPL(__list_lru_add);
bool list_lru_del(struct list_lru *lru, struct list_head *item)
{
int nid = page_to_nid(virt_to_page(item));
+ struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+ mem_cgroup_from_slab_obj(item) : NULL;
+
+ return __list_lru_del(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_del);
+
+bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg)
+{
struct list_lru_node *nlru = &lru->node[nid];
struct list_lru_one *l;
spin_lock(&nlru->lock);
if (!list_empty(item)) {
- l = list_lru_from_kmem(lru, nid, item, NULL);
+ l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
list_del_init(item);
l->nr_items--;
nlru->nr_items--;
@@ -158,7 +176,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
spin_unlock(&nlru->lock);
return false;
}
-EXPORT_SYMBOL_GPL(list_lru_del);
+EXPORT_SYMBOL_GPL(__list_lru_del);
void list_lru_isolate(struct list_lru_one *list, struct list_head *item)
{
@@ -175,6 +193,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
}
EXPORT_SYMBOL_GPL(list_lru_isolate_move);
+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+ struct mem_cgroup *memcg)
+{
+ struct list_lru_one *list =
+ list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
+
+ if (list_empty(item)) {
+ list_add_tail(item, &list->list);
+ if (!list->nr_items++)
+ set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
+ }
+}
+EXPORT_SYMBOL_GPL(list_lru_putback);
+
unsigned long list_lru_count_one(struct list_lru *lru,
int nid, struct mem_cgroup *memcg)
{
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b3b14bd0dd64..1c826737aacb 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -21,6 +21,7 @@
#include <linux/swap_slots.h>
#include <linux/huge_mm.h>
#include <linux/shmem_fs.h>
+#include <linux/zswap.h>
#include "internal.h"
#include "swap.h"
@@ -417,6 +418,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct folio *folio;
struct page *page;
void *shadow = NULL;
+ bool zswap_lru_removed = false;
*new_page_allocated = false;
si = get_swap_device(entry);
@@ -485,6 +487,17 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
__folio_set_locked(folio);
__folio_set_swapbacked(folio);
+ /*
+ * Page fault might itself trigger reclaim, on a zswap object that
+ * corresponds to the same swap entry. However, as the swap entry has
+ * previously been pinned, the task will run into an infinite loop trying
+ * to pin the swap entry again.
+ *
+ * To avoid this, we remove the entry from the zswap LRU so that it
+ * cannot be reclaimed while the swap entry is pinned.
+ */
+ zswap_lru_removed = zswap_remove_swpentry_from_lru(entry);
+
if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
goto fail_unlock;
@@ -497,6 +510,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
if (shadow)
workingset_refault(folio, shadow);
+ if (zswap_lru_removed)
+ zswap_insert_swpentry_into_lru(entry);
+
/* Caller will initiate read into locked folio */
folio_add_lru(folio);
*new_page_allocated = true;
@@ -506,6 +522,9 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
return page;
fail_unlock:
+ if (zswap_lru_removed)
+ zswap_insert_swpentry_into_lru(entry);
+
put_swap_folio(folio, entry);
folio_unlock(folio);
folio_put(folio);
diff --git a/mm/zswap.c b/mm/zswap.c
index 412b1409a0d7..83386200222a 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -34,6 +34,7 @@
#include <linux/writeback.h>
#include <linux/pagemap.h>
#include <linux/workqueue.h>
+#include <linux/list_lru.h>
#include "swap.h"
#include "internal.h"
@@ -171,8 +172,8 @@ struct zswap_pool {
struct work_struct shrink_work;
struct hlist_node node;
char tfm_name[CRYPTO_MAX_ALG_NAME];
- struct list_head lru;
- spinlock_t lru_lock;
+ struct list_lru list_lru;
+ struct mem_cgroup *next_shrink;
};
/*
@@ -209,6 +210,7 @@ struct zswap_entry {
unsigned long value;
};
struct obj_cgroup *objcg;
+ int nid;
struct list_head lru;
};
@@ -309,6 +311,29 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
kmem_cache_free(zswap_entry_cache, entry);
}
+/*********************************
+* lru functions
+**********************************/
+static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+ struct mem_cgroup *memcg = entry->objcg ?
+ get_mem_cgroup_from_objcg(entry->objcg) : NULL;
+ bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
+
+ mem_cgroup_put(memcg);
+ return added;
+}
+
+static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+ struct mem_cgroup *memcg = entry->objcg ?
+ get_mem_cgroup_from_objcg(entry->objcg) : NULL;
+ bool removed = __list_lru_del(list_lru, &entry->lru, entry->nid, memcg);
+
+ mem_cgroup_put(memcg);
+ return removed;
+}
+
/*********************************
* rbtree functions
**********************************/
@@ -393,9 +418,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
if (!entry->length)
atomic_dec(&zswap_same_filled_pages);
else {
- spin_lock(&entry->pool->lru_lock);
- list_del(&entry->lru);
- spin_unlock(&entry->pool->lru_lock);
+ zswap_lru_del(&entry->pool->list_lru, entry);
zpool_free(zswap_find_zpool(entry), entry->handle);
zswap_pool_put(entry->pool);
}
@@ -629,21 +652,16 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
zswap_entry_put(tree, entry);
}
-static int zswap_reclaim_entry(struct zswap_pool *pool)
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+ spinlock_t *lock, void *arg)
{
- struct zswap_entry *entry;
+ struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+ struct mem_cgroup *memcg;
struct zswap_tree *tree;
pgoff_t swpoffset;
- int ret;
+ enum lru_status ret = LRU_REMOVED_RETRY;
+ int writeback_result;
- /* Get an entry off the LRU */
- spin_lock(&pool->lru_lock);
- if (list_empty(&pool->lru)) {
- spin_unlock(&pool->lru_lock);
- return -EINVAL;
- }
- entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
- list_del_init(&entry->lru);
/*
* Once the lru lock is dropped, the entry might get freed. The
* swpoffset is copied to the stack, and entry isn't deref'd again
@@ -651,26 +669,35 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
*/
swpoffset = swp_offset(entry->swpentry);
tree = zswap_trees[swp_type(entry->swpentry)];
- spin_unlock(&pool->lru_lock);
+ list_lru_isolate(l, item);
+ spin_unlock(lock);
/* Check for invalidate() race */
spin_lock(&tree->lock);
if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
- ret = -EAGAIN;
goto unlock;
}
/* Hold a reference to prevent a free during writeback */
zswap_entry_get(entry);
spin_unlock(&tree->lock);
- ret = zswap_writeback_entry(entry, tree);
+ writeback_result = zswap_writeback_entry(entry, tree);
spin_lock(&tree->lock);
- if (ret) {
- /* Writeback failed, put entry back on LRU */
- spin_lock(&pool->lru_lock);
- list_move(&entry->lru, &pool->lru);
- spin_unlock(&pool->lru_lock);
+ if (writeback_result) {
+ zswap_reject_reclaim_fail++;
+
+ /* Check for invalidate() race */
+ if (entry != zswap_rb_search(&tree->rbroot, swpoffset))
+ goto put_unlock;
+
+ memcg = entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
+ spin_lock(lock);
+ /* we cannot use zswap_lru_add here, because it increments node's lru count */
+ list_lru_putback(&entry->pool->list_lru, item, entry->nid, memcg);
+ spin_unlock(lock);
+ mem_cgroup_put(memcg);
+ ret = LRU_RETRY;
goto put_unlock;
}
@@ -686,19 +713,63 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
zswap_entry_put(tree, entry);
unlock:
spin_unlock(&tree->lock);
- return ret ? -EAGAIN : 0;
+ spin_lock(lock);
+ return ret;
+}
+
+static int shrink_memcg(struct mem_cgroup *memcg)
+{
+ struct zswap_pool *pool;
+ int nid, shrunk = 0;
+ bool is_empty = true;
+
+ pool = zswap_pool_current_get();
+ if (!pool)
+ return -EINVAL;
+
+ for_each_node_state(nid, N_NORMAL_MEMORY) {
+ unsigned long nr_to_walk = 1;
+
+ if (list_lru_walk_one(&pool->list_lru, nid, memcg, &shrink_memcg_cb,
+ NULL, &nr_to_walk))
+ shrunk++;
+ if (!nr_to_walk)
+ is_empty = false;
+ }
+ zswap_pool_put(pool);
+
+ if (is_empty)
+ return -EINVAL;
+ if (shrunk)
+ return 0;
+ return -EAGAIN;
}
static void shrink_worker(struct work_struct *w)
{
struct zswap_pool *pool = container_of(w, typeof(*pool),
shrink_work);
- int ret, failures = 0;
+ int ret, failures = 0, memcg_selection_failures = 0;
+ /* global reclaim will select cgroup in a round-robin fashion. */
do {
- ret = zswap_reclaim_entry(pool);
+ /* previous next_shrink has become a zombie - restart from the top */
+ if (pool->next_shrink && !mem_cgroup_online(pool->next_shrink)) {
+ css_put(&pool->next_shrink->css);
+ pool->next_shrink = NULL;
+ }
+ pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
+
+ /* fails to find a suitable cgroup - give the worker another chance. */
+ if (!pool->next_shrink) {
+ if (++memcg_selection_failures == 2)
+ break;
+ continue;
+ }
+
+ ret = shrink_memcg(pool->next_shrink);
+
if (ret) {
- zswap_reject_reclaim_fail++;
if (ret != -EAGAIN)
break;
if (++failures == MAX_RECLAIM_RETRIES)
@@ -764,9 +835,8 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
*/
kref_init(&pool->kref);
INIT_LIST_HEAD(&pool->list);
- INIT_LIST_HEAD(&pool->lru);
- spin_lock_init(&pool->lru_lock);
INIT_WORK(&pool->shrink_work, shrink_worker);
+ list_lru_init_memcg(&pool->list_lru, NULL);
zswap_pool_debug("created", pool);
@@ -831,6 +901,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
free_percpu(pool->acomp_ctx);
+ list_lru_destroy(&pool->list_lru);
+ if (pool->next_shrink)
+ css_put(&pool->next_shrink->css);
for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
zpool_destroy_pool(pool->zpools[i]);
kfree(pool);
@@ -1199,8 +1272,10 @@ bool zswap_store(struct folio *folio)
struct scatterlist input, output;
struct crypto_acomp_ctx *acomp_ctx;
struct obj_cgroup *objcg = NULL;
+ struct mem_cgroup *memcg = NULL;
struct zswap_pool *pool;
struct zpool *zpool;
+ int lru_alloc_ret;
unsigned int dlen = PAGE_SIZE;
unsigned long handle, value;
char *buf;
@@ -1218,14 +1293,15 @@ bool zswap_store(struct folio *folio)
if (!zswap_enabled || !tree)
return false;
- /*
- * XXX: zswap reclaim does not work with cgroups yet. Without a
- * cgroup-aware entry LRU, we will push out entries system-wide based on
- * local cgroup limits.
- */
objcg = get_obj_cgroup_from_folio(folio);
- if (objcg && !obj_cgroup_may_zswap(objcg))
- goto reject;
+ if (objcg && !obj_cgroup_may_zswap(objcg)) {
+ memcg = get_mem_cgroup_from_objcg(objcg);
+ if (shrink_memcg(memcg)) {
+ css_put(&memcg->css);
+ goto reject;
+ }
+ css_put(&memcg->css);
+ }
/* reclaim space if needed */
if (zswap_is_full()) {
@@ -1240,7 +1316,11 @@ bool zswap_store(struct folio *folio)
else
zswap_pool_reached_full = false;
}
-
+ pool = zswap_pool_current_get();
+ if (!pool) {
+ ret = -EINVAL;
+ goto reject;
+ }
/* allocate entry */
entry = zswap_entry_cache_alloc(GFP_KERNEL);
if (!entry) {
@@ -1256,6 +1336,7 @@ bool zswap_store(struct folio *folio)
entry->length = 0;
entry->value = value;
atomic_inc(&zswap_same_filled_pages);
+ zswap_pool_put(pool);
goto insert_entry;
}
kunmap_atomic(src);
@@ -1264,6 +1345,15 @@ bool zswap_store(struct folio *folio)
if (!zswap_non_same_filled_pages_enabled)
goto freepage;
+ if (objcg) {
+ memcg = get_mem_cgroup_from_objcg(objcg);
+ lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
+ css_put(&memcg->css);
+
+ if (lru_alloc_ret)
+ goto freepage;
+ }
+
/* if entry is successfully added, it keeps the reference */
entry->pool = zswap_pool_current_get();
if (!entry->pool)
@@ -1325,6 +1415,7 @@ bool zswap_store(struct folio *folio)
insert_entry:
entry->objcg = objcg;
+ entry->nid = page_to_nid(page);
if (objcg) {
obj_cgroup_charge_zswap(objcg, entry->length);
/* Account before objcg ref is moved to tree */
@@ -1338,9 +1429,8 @@ bool zswap_store(struct folio *folio)
zswap_invalidate_entry(tree, dupentry);
}
if (entry->length) {
- spin_lock(&entry->pool->lru_lock);
- list_add(&entry->lru, &entry->pool->lru);
- spin_unlock(&entry->pool->lru_lock);
+ INIT_LIST_HEAD(&entry->lru);
+ zswap_lru_add(&pool->list_lru, entry);
}
spin_unlock(&tree->lock);
@@ -1447,9 +1537,8 @@ bool zswap_load(struct folio *folio)
zswap_invalidate_entry(tree, entry);
folio_mark_dirty(folio);
} else if (entry->length) {
- spin_lock(&entry->pool->lru_lock);
- list_move(&entry->lru, &entry->pool->lru);
- spin_unlock(&entry->pool->lru_lock);
+ zswap_lru_del(&entry->pool->list_lru, entry);
+ zswap_lru_add(&entry->pool->list_lru, entry);
}
zswap_entry_put(tree, entry);
spin_unlock(&tree->lock);
@@ -1507,6 +1596,48 @@ void zswap_swapoff(int type)
zswap_trees[type] = NULL;
}
+bool zswap_remove_swpentry_from_lru(swp_entry_t swpentry)
+{
+ struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
+ struct zswap_entry *entry;
+ struct zswap_pool *pool;
+ bool removed = false;
+
+ /* get the zswap entry and prevent it from being freed */
+ spin_lock(&tree->lock);
+ entry = zswap_rb_search(&tree->rbroot, swp_offset(swpentry));
+ /* skip if the entry is already written back or is a same filled page */
+ if (!entry || !entry->length)
+ goto tree_unlock;
+
+ pool = entry->pool;
+ removed = zswap_lru_del(&pool->list_lru, entry);
+
+tree_unlock:
+ spin_unlock(&tree->lock);
+ return removed;
+}
+
+void zswap_insert_swpentry_into_lru(swp_entry_t swpentry)
+{
+ struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
+ struct zswap_entry *entry;
+ struct zswap_pool *pool;
+
+ /* get the zswap entry and prevent it from being freed */
+ spin_lock(&tree->lock);
+ entry = zswap_rb_search(&tree->rbroot, swp_offset(swpentry));
+ /* skip if the entry is already written back or is a same filled page */
+ if (!entry || !entry->length)
+ goto tree_unlock;
+
+ pool = entry->pool;
+ zswap_lru_add(&pool->list_lru, entry);
+
+tree_unlock:
+ spin_unlock(&tree->lock);
+}
+
/*********************************
* debugfs functions
**********************************/
@@ -1560,7 +1691,7 @@ static int zswap_setup(void)
struct zswap_pool *pool;
int ret;
- zswap_entry_cache = KMEM_CACHE(zswap_entry, 0);
+ zswap_entry_cache = KMEM_CACHE(zswap_entry, SLAB_ACCOUNT);
if (!zswap_entry_cache) {
pr_err("entry cache creation failed\n");
goto cache_fail;
--
2.34.1
* [PATCH 2/2] zswap: shrinks zswap pool based on memory pressure
2023-09-11 16:40 [PATCH 0/2] workload-specific and memory pressure-driven zswap writeback Nhat Pham
2023-09-11 16:40 ` [PATCH 1/2] zswap: make shrinking memcg-aware Nhat Pham
@ 2023-09-11 16:40 ` Nhat Pham
2023-09-15 12:13 ` kernel test robot
1 sibling, 1 reply; 5+ messages in thread
From: Nhat Pham @ 2023-09-11 16:40 UTC (permalink / raw)
To: akpm
Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
linux-mm, kernel-team, linux-kernel, cgroups
Currently, we only shrink the zswap pool when the user-defined limit is
hit. This means that if we set the limit too high, cold data that are
unlikely to be used again will reside in the pool, wasting precious
memory. It is hard to predict how much zswap space will be needed ahead
of time, as this depends on the workload (specifically, on factors such
as memory access patterns and compressibility of the memory pages).
This patch implements a memcg- and NUMA-aware shrinker for zswap that
is initiated when there is memory pressure. The shrinker does not
have any parameter that must be tuned by the user, and can be opted in
or out on a per-memcg basis.
Furthermore, to make it more robust for many workloads and prevent
overshrinking (i.e. evicting warm pages that might be refaulted into
memory), we build in the following heuristics (a sketch of how the first
two combine follows the list):
* Estimate the number of warm pages residing in zswap, and attempt to
protect this region of the zswap LRU.
* Scale the number of freeable objects by an estimate of the memory
saving factor. The better zswap compresses the data, the fewer pages
we will evict to swap (as we will otherwise incur IO for relatively
small memory saving).
* During reclaim, if the shrinker encounters a page that is also being
brought into memory, the shrinker will cautiously terminate its
shrinking action, as this is a sign that it is touching the warmer
region of the zswap LRU.
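A condensed sketch of how the first two heuristics combine, taken from
zswap_shrinker_count() in the diff below (the rstat flush and lruvec
locking are omitted):

	nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
	nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);

	/* report only the LRU tail beyond the estimated-warm, protected region */
	nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
	nr_freeable = nr_freeable > lruvec->nr_zswap_protected ?
		      nr_freeable - lruvec->nr_zswap_protected : 0;

	/* the better the compression ratio, the fewer pages are worth the IO */
	return mult_frac(nr_freeable, nr_backing, nr_stored);

The nr_zswap_protected estimate is bumped on swapins and LRU rotations and
decayed as the LRU grows, so the protected region adapts to the workload.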
On a benchmark that we have run:
(without the shrinker)
real -- mean: 153.27s, median: 153.199s
sys -- mean: 541.652s, median: 541.903s
user -- mean: 4384.9673999999995s, median: 4385.471s
(with the shrinker)
real -- mean: 151.4956s, median: 151.456s
sys -- mean: 461.14639999999997s, median: 465.656s
user -- mean: 4384.7118s, median: 4384.675s
We observed a 14-15% reduction in kernel CPU time, which translated to
over 1% reduction in real time.
On another benchmark, where there was a lot more cold memory residing in
zswap, we observed even more pronounced gains:
(without the shrinker)
real -- mean: 157.52519999999998s, median: 157.281s
sys -- mean: 769.3082s, median: 780.545s
user -- mean: 4378.1622s, median: 4378.286s
(with the shrinker)
real -- mean: 152.9608s, median: 152.845s
sys -- mean: 517.4446s, median: 506.749s
user -- mean: 4387.694s, median: 4387.935s
Here, we saw around 32-35% reduction in kernel CPU time, which
translated to 2.8% reduction in real time. These results confirm our
hypothesis that the shrinker is more helpful the more cold memory we
have.
Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
Documentation/admin-guide/mm/zswap.rst | 12 ++
include/linux/memcontrol.h | 1 +
include/linux/mmzone.h | 14 +++
mm/memcontrol.c | 33 ++++++
mm/swap_state.c | 31 ++++-
mm/zswap.c | 152 ++++++++++++++++++++++++-
6 files changed, 238 insertions(+), 5 deletions(-)
diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
index 45b98390e938..ae8597a67804 100644
--- a/Documentation/admin-guide/mm/zswap.rst
+++ b/Documentation/admin-guide/mm/zswap.rst
@@ -153,6 +153,18 @@ attribute, e. g.::
Setting this parameter to 100 will disable the hysteresis.
+When there is a sizable amount of cold memory residing in the zswap pool, it
+can be advantageous to proactively write these cold pages to swap and reclaim
+the memory for other use cases. By default, the zswap shrinker is disabled.
+Users can enable it by first switching on the global knob:
+
+ echo Y > /sys/module/zswap/parameters/shrinker_enabled
+
+When the kernel is compiled with CONFIG_MEMCG_KMEM, users need to further turn
+it on for each cgroup that the shrinker should target:
+
+ echo 1 > /sys/fs/cgroup/<cgroup-name>/memory.zswap.shrinker.enabled
+
A debugfs interface is provided for various statistic about pool size, number
of pages stored, same-value filled pages and various counters for the reasons
pages are rejected.
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 11810a2cfd2d..a8f133ee87ee 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -229,6 +229,7 @@ struct mem_cgroup {
#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
unsigned long zswap_max;
+ atomic_t zswap_shrinker_enabled;
#endif
unsigned long soft_limit;
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4106fbc5b4b3..81f4c5ea3e16 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -637,6 +637,20 @@ struct lruvec {
#ifdef CONFIG_MEMCG
struct pglist_data *pgdat;
#endif
+#ifdef CONFIG_ZSWAP
+ /*
+ * Number of pages in zswap that should be protected from the shrinker.
+ * This number is an estimate of the following counts:
+ *
+ * a) Recent page faults.
+ * b) Recent insertion to the zswap LRU. This includes new zswap stores,
+ * as well as recent zswap LRU rotations.
+ *
+ * These pages are likely to be warm, and might incur IO if they are written
+ * to swap.
+ */
+ unsigned long nr_zswap_protected;
+#endif
};
/* Isolate unmapped pages */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a4d3282493b6..6132ef80da92 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5350,6 +5350,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
WRITE_ONCE(memcg->soft_limit, PAGE_COUNTER_MAX);
#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
memcg->zswap_max = PAGE_COUNTER_MAX;
+ /* Disable the shrinker by default */
+ atomic_set(&memcg->zswap_shrinker_enabled, 0);
#endif
page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
if (parent) {
@@ -7867,6 +7869,31 @@ static ssize_t zswap_max_write(struct kernfs_open_file *of,
return nbytes;
}
+static int zswap_shrinker_enabled_show(struct seq_file *m, void *v)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
+
+ seq_printf(m, "%d\n", atomic_read(&memcg->zswap_shrinker_enabled));
+ return 0;
+}
+
+static ssize_t zswap_shrinker_enabled_write(struct kernfs_open_file *of,
+ char *buf, size_t nbytes, loff_t off)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+ int zswap_shrinker_enabled;
+ ssize_t parse_ret = kstrtoint(strstrip(buf), 0, &zswap_shrinker_enabled);
+
+ if (parse_ret)
+ return parse_ret;
+
+ if (zswap_shrinker_enabled < 0 || zswap_shrinker_enabled > 1)
+ return -ERANGE;
+
+ atomic_set(&memcg->zswap_shrinker_enabled, zswap_shrinker_enabled);
+ return nbytes;
+}
+
static struct cftype zswap_files[] = {
{
.name = "zswap.current",
@@ -7879,6 +7906,12 @@ static struct cftype zswap_files[] = {
.seq_show = zswap_max_show,
.write = zswap_max_write,
},
+ {
+ .name = "zswap.shrinker.enabled",
+ .flags = CFTYPE_NOT_ON_ROOT,
+ .seq_show = zswap_shrinker_enabled_show,
+ .write = zswap_shrinker_enabled_write,
+ },
{ } /* terminate */
};
#endif /* CONFIG_MEMCG_KMEM && CONFIG_ZSWAP */
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 1c826737aacb..788e36a06c34 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -618,6 +618,22 @@ static unsigned long swapin_nr_pages(unsigned long offset)
return pages;
}
+#ifdef CONFIG_ZSWAP
+/*
+ * Refault is an indication that warmer pages are not resident in memory.
+ * Increase the size of zswap's protected area.
+ */
+static void inc_nr_protected(struct page *page)
+{
+ struct lruvec *lruvec = folio_lruvec(page_folio(page));
+ unsigned long flags;
+
+ spin_lock_irqsave(&lruvec->lru_lock, flags);
+ lruvec->nr_zswap_protected++;
+ spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+}
+#endif
+
/**
* swap_cluster_readahead - swap in pages in hope we need them soon
* @entry: swap entry of this memory
@@ -686,7 +702,12 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
lru_add_drain(); /* Push any new pages onto the LRU now */
skip:
/* The page was likely read above, so no need for plugging here */
- return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+ page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+#ifdef CONFIG_ZSWAP
+ if (page)
+ inc_nr_protected(page);
+#endif
+ return page;
}
int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -853,8 +874,12 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
lru_add_drain();
skip:
/* The page was likely read above, so no need for plugging here */
- return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
- NULL);
+ page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
+#ifdef CONFIG_ZSWAP
+ if (page)
+ inc_nr_protected(page);
+#endif
+ return page;
}
/**
diff --git a/mm/zswap.c b/mm/zswap.c
index 83386200222a..ee4909891cce 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -145,6 +145,26 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
/* Number of zpools in zswap_pool (empirically determined for scalability) */
#define ZSWAP_NR_ZPOOLS 32
+/*
+ * Global flag to enable/disable memory pressure-based shrinker for all memcgs.
+ * If CONFIG_MEMCG_KMEM is on, we can further selectively disable
+ * the shrinker for each memcg.
+ */
+static bool zswap_shrinker_enabled;
+module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
+#ifdef CONFIG_MEMCG_KMEM
+static bool is_shrinker_enabled(struct mem_cgroup *memcg)
+{
+ return zswap_shrinker_enabled &&
+ atomic_read(&memcg->zswap_shrinker_enabled);
+}
+#else
+static bool is_shrinker_enabled(struct mem_cgroup *memcg)
+{
+ return zswap_shrinker_enabled;
+}
+#endif
+
/*********************************
* data structures
**********************************/
@@ -174,6 +194,7 @@ struct zswap_pool {
char tfm_name[CRYPTO_MAX_ALG_NAME];
struct list_lru list_lru;
struct mem_cgroup *next_shrink;
+ struct shrinker shrinker;
};
/*
@@ -318,8 +339,23 @@ static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
{
struct mem_cgroup *memcg = entry->objcg ?
get_mem_cgroup_from_objcg(entry->objcg) : NULL;
+ struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
+ unsigned long flags, lru_size;
+ if (added) {
+ lru_size = list_lru_count_one(list_lru, entry->nid, memcg);
+ spin_lock_irqsave(&lruvec->lru_lock, flags);
+ lruvec->nr_zswap_protected++;
+
+ /*
+ * Decay to avoid overflow and adapt to changing workloads.
+ * This is based on LRU reclaim cost decaying heuristics.
+ */
+ if (lruvec->nr_zswap_protected > lru_size / 4)
+ lruvec->nr_zswap_protected /= 2;
+ spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+ }
mem_cgroup_put(memcg);
return added;
}
@@ -461,6 +497,89 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
return entry;
}
+/*********************************
+* shrinker functions
+**********************************/
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+ spinlock_t *lock, void *arg);
+
+static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct zswap_pool *pool = container_of(shrinker, typeof(*pool), shrinker);
+ unsigned long shrink_ret, nr_zswap_protected, flags,
+ lru_size = list_lru_shrink_count(&pool->list_lru, sc);
+ struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
+ bool encountered_page_in_swapcache = false;
+
+ spin_lock_irqsave(&lruvec->lru_lock, flags);
+ nr_zswap_protected = lruvec->nr_zswap_protected;
+ spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+
+ /*
+ * Abort if the shrinker is disabled or if we are shrinking into the
+ * protected region.
+ */
+ if (!is_shrinker_enabled(sc->memcg) ||
+ nr_zswap_protected >= lru_size - sc->nr_to_scan) {
+ sc->nr_scanned = 0;
+ return SHRINK_STOP;
+ }
+
+ shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
+ &encountered_page_in_swapcache);
+
+ if (encountered_page_in_swapcache)
+ return SHRINK_STOP;
+
+ return shrink_ret ? shrink_ret : SHRINK_STOP;
+}
+
+static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct zswap_pool *pool = container_of(shrinker, typeof(*pool), shrinker);
+ struct mem_cgroup *memcg = sc->memcg;
+ struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
+ unsigned long nr_backing, nr_stored, nr_freeable, flags;
+
+ cgroup_rstat_flush(memcg->css.cgroup);
+ nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
+ nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
+
+ if (!is_shrinker_enabled(memcg) || !nr_stored)
+ return 0;
+
+ nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
+ /*
+ * Subtract the lru size by an estimate of the number of pages
+ * that should be protected.
+ */
+ spin_lock_irqsave(&lruvec->lru_lock, flags);
+ nr_freeable = nr_freeable > lruvec->nr_zswap_protected ?
+ nr_freeable - lruvec->nr_zswap_protected : 0;
+ spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+
+ /*
+ * Scale the number of freeable pages by the memory saving factor.
+ * This ensures that the better zswap compresses memory, the fewer
+ * pages we will evict to swap (as it will otherwise incur IO for
+ * relatively small memory saving).
+ */
+ return mult_frac(nr_freeable, nr_backing, nr_stored);
+}
+
+static int zswap_prealloc_shrinker(struct zswap_pool *pool)
+{
+ pool->shrinker.scan_objects = zswap_shrinker_scan;
+ pool->shrinker.count_objects = zswap_shrinker_count;
+ pool->shrinker.batch = 0;
+ pool->shrinker.seeks = DEFAULT_SEEKS;
+ pool->shrinker.flags = SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE;
+
+ return prealloc_shrinker(&pool->shrinker, "mm-zswap");
+}
+
/*********************************
* per-cpu code
**********************************/
@@ -656,11 +775,14 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
spinlock_t *lock, void *arg)
{
struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+ bool *encountered_page_in_swapcache = (bool *)arg;
struct mem_cgroup *memcg;
struct zswap_tree *tree;
+ struct lruvec *lruvec;
pgoff_t swpoffset;
enum lru_status ret = LRU_REMOVED_RETRY;
int writeback_result;
+ unsigned long flags;
/*
* Once the lru lock is dropped, the entry might get freed. The
@@ -696,8 +818,24 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
/* we cannot use zswap_lru_add here, because it increments node's lru count */
list_lru_putback(&entry->pool->list_lru, item, entry->nid, memcg);
spin_unlock(lock);
- mem_cgroup_put(memcg);
ret = LRU_RETRY;
+
+ /*
+ * Encountering a page already in swap cache is a sign that we are shrinking
+ * into the warmer region. We should terminate shrinking (if we're in the dynamic
+ * shrinker context).
+ */
+ if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
+ ret = LRU_SKIP;
+ *encountered_page_in_swapcache = true;
+ }
+ lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry->nid));
+ spin_lock_irqsave(&lruvec->lru_lock, flags);
+ /* Increment the protection area to account for the LRU rotation. */
+ lruvec->nr_zswap_protected++;
+ spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+
+ mem_cgroup_put(memcg);
goto put_unlock;
}
@@ -828,6 +966,10 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
&pool->node);
if (ret)
goto error;
+
+ if (zswap_prealloc_shrinker(pool))
+ goto error;
+
pr_debug("using %s compressor\n", pool->tfm_name);
/* being the current pool takes 1 ref; this func expects the
@@ -836,12 +978,17 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
kref_init(&pool->kref);
INIT_LIST_HEAD(&pool->list);
INIT_WORK(&pool->shrink_work, shrink_worker);
- list_lru_init_memcg(&pool->list_lru, NULL);
+ if (list_lru_init_memcg(&pool->list_lru, &pool->shrinker))
+ goto lru_fail;
+ register_shrinker_prepared(&pool->shrinker);
zswap_pool_debug("created", pool);
return pool;
+lru_fail:
+ list_lru_destroy(&pool->list_lru);
+ free_prealloced_shrinker(&pool->shrinker);
error:
if (pool->acomp_ctx)
free_percpu(pool->acomp_ctx);
@@ -899,6 +1046,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
zswap_pool_debug("destroying", pool);
+ unregister_shrinker(&pool->shrinker);
cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
free_percpu(pool->acomp_ctx);
list_lru_destroy(&pool->list_lru);
--
2.34.1
* Re: [PATCH 1/2] zswap: make shrinking memcg-aware
2023-09-11 16:40 ` [PATCH 1/2] zswap: make shrinking memcg-aware Nhat Pham
@ 2023-09-14 9:34 ` kernel test robot
0 siblings, 0 replies; 5+ messages in thread
From: kernel test robot @ 2023-09-14 9:34 UTC (permalink / raw)
To: Nhat Pham, akpm
Cc: oe-kbuild-all, hannes, cerasuolodomenico, yosryahmed, sjenning,
ddstreet, vitaly.wool, mhocko, roman.gushchin, shakeelb,
muchun.song, linux-mm, kernel-team, linux-kernel, cgroups
Hi Nhat,
kernel test robot noticed the following build errors:
[auto build test ERROR on akpm-mm/mm-everything]
[also build test ERROR on linus/master v6.6-rc1 next-20230914]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Nhat-Pham/zswap-make-shrinking-memcg-aware/20230912-004147
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20230911164024.2541401-2-nphamcs%40gmail.com
patch subject: [PATCH 1/2] zswap: make shrinking memcg-aware
config: loongarch-randconfig-001-20230914 (https://download.01.org/0day-ci/archive/20230914/202309141736.ABab8fuf-lkp@intel.com/config)
compiler: loongarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20230914/202309141736.ABab8fuf-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202309141736.ABab8fuf-lkp@intel.com/
All error/warnings (new ones prefixed by >>):
mm/zswap.c: In function 'zswap_lru_add':
>> mm/zswap.c:320:17: error: implicit declaration of function 'get_mem_cgroup_from_objcg'; did you mean 'get_mem_cgroup_from_mm'? [-Werror=implicit-function-declaration]
320 | get_mem_cgroup_from_objcg(entry->objcg) : NULL;
| ^~~~~~~~~~~~~~~~~~~~~~~~~
| get_mem_cgroup_from_mm
>> mm/zswap.c:320:57: warning: pointer/integer type mismatch in conditional expression
320 | get_mem_cgroup_from_objcg(entry->objcg) : NULL;
| ^
mm/zswap.c: In function 'zswap_lru_del':
mm/zswap.c:330:57: warning: pointer/integer type mismatch in conditional expression
330 | get_mem_cgroup_from_objcg(entry->objcg) : NULL;
| ^
mm/zswap.c: In function 'shrink_memcg_cb':
mm/zswap.c:694:80: warning: pointer/integer type mismatch in conditional expression
694 | memcg = entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
| ^
mm/zswap.c: In function 'shrink_worker':
>> mm/zswap.c:758:51: error: invalid use of undefined type 'struct mem_cgroup'
758 | css_put(&pool->next_shrink->css);
| ^~
mm/zswap.c: In function 'zswap_pool_destroy':
mm/zswap.c:906:43: error: invalid use of undefined type 'struct mem_cgroup'
906 | css_put(&pool->next_shrink->css);
| ^~
mm/zswap.c: In function 'zswap_store':
>> mm/zswap.c:1298:23: warning: assignment to 'struct mem_cgroup *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
1298 | memcg = get_mem_cgroup_from_objcg(objcg);
| ^
mm/zswap.c:1300:39: error: invalid use of undefined type 'struct mem_cgroup'
1300 | css_put(&memcg->css);
| ^~
mm/zswap.c:1303:31: error: invalid use of undefined type 'struct mem_cgroup'
1303 | css_put(&memcg->css);
| ^~
mm/zswap.c:1349:23: warning: assignment to 'struct mem_cgroup *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
1349 | memcg = get_mem_cgroup_from_objcg(objcg);
| ^
mm/zswap.c:1351:31: error: invalid use of undefined type 'struct mem_cgroup'
1351 | css_put(&memcg->css);
| ^~
cc1: some warnings being treated as errors
vim +320 mm/zswap.c
313
314 /*********************************
315 * lru functions
316 **********************************/
317 static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
318 {
319 struct mem_cgroup *memcg = entry->objcg ?
> 320 get_mem_cgroup_from_objcg(entry->objcg) : NULL;
321 bool added = __list_lru_add(list_lru, &entry->lru, entry->nid, memcg);
322
323 mem_cgroup_put(memcg);
324 return added;
325 }
326
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH 2/2] zswap: shrinks zswap pool based on memory pressure
2023-09-11 16:40 ` [PATCH 2/2] zswap: shrinks zswap pool based on memory pressure Nhat Pham
@ 2023-09-15 12:13 ` kernel test robot
0 siblings, 0 replies; 5+ messages in thread
From: kernel test robot @ 2023-09-15 12:13 UTC (permalink / raw)
To: Nhat Pham, akpm
Cc: oe-kbuild-all, hannes, cerasuolodomenico, yosryahmed, sjenning,
ddstreet, vitaly.wool, mhocko, roman.gushchin, shakeelb,
muchun.song, linux-mm, kernel-team, linux-kernel, cgroups
Hi Nhat,
kernel test robot noticed the following build errors:
[auto build test ERROR on akpm-mm/mm-everything]
[also build test ERROR on linus/master v6.6-rc1 next-20230915]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Nhat-Pham/zswap-make-shrinking-memcg-aware/20230912-004147
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20230911164024.2541401-3-nphamcs%40gmail.com
patch subject: [PATCH 2/2] zswap: shrinks zswap pool based on memory pressure
config: loongarch-randconfig-001-20230914 (https://download.01.org/0day-ci/archive/20230915/202309152048.GDMxTtFF-lkp@intel.com/config)
compiler: loongarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20230915/202309152048.GDMxTtFF-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202309152048.GDMxTtFF-lkp@intel.com/
All errors (new ones prefixed by >>):
mm/zswap.c: In function 'zswap_lru_add':
mm/zswap.c:341:17: error: implicit declaration of function 'get_mem_cgroup_from_objcg'; did you mean 'get_mem_cgroup_from_mm'? [-Werror=implicit-function-declaration]
341 | get_mem_cgroup_from_objcg(entry->objcg) : NULL;
| ^~~~~~~~~~~~~~~~~~~~~~~~~
| get_mem_cgroup_from_mm
mm/zswap.c:341:57: warning: pointer/integer type mismatch in conditional expression
341 | get_mem_cgroup_from_objcg(entry->objcg) : NULL;
| ^
mm/zswap.c: In function 'zswap_lru_del':
mm/zswap.c:366:57: warning: pointer/integer type mismatch in conditional expression
366 | get_mem_cgroup_from_objcg(entry->objcg) : NULL;
| ^
mm/zswap.c: In function 'zswap_shrinker_count':
>> mm/zswap.c:546:9: error: implicit declaration of function 'cgroup_rstat_flush' [-Werror=implicit-function-declaration]
546 | cgroup_rstat_flush(memcg->css.cgroup);
| ^~~~~~~~~~~~~~~~~~
mm/zswap.c:546:33: error: invalid use of undefined type 'struct mem_cgroup'
546 | cgroup_rstat_flush(memcg->css.cgroup);
| ^~
mm/zswap.c: In function 'shrink_memcg_cb':
mm/zswap.c:816:80: warning: pointer/integer type mismatch in conditional expression
816 | memcg = entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
| ^
mm/zswap.c: In function 'shrink_worker':
mm/zswap.c:896:51: error: invalid use of undefined type 'struct mem_cgroup'
896 | css_put(&pool->next_shrink->css);
| ^~
mm/zswap.c: In function 'zswap_pool_destroy':
mm/zswap.c:1054:43: error: invalid use of undefined type 'struct mem_cgroup'
1054 | css_put(&pool->next_shrink->css);
| ^~
mm/zswap.c: In function 'zswap_store':
mm/zswap.c:1446:23: warning: assignment to 'struct mem_cgroup *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
1446 | memcg = get_mem_cgroup_from_objcg(objcg);
| ^
mm/zswap.c:1448:39: error: invalid use of undefined type 'struct mem_cgroup'
1448 | css_put(&memcg->css);
| ^~
mm/zswap.c:1451:31: error: invalid use of undefined type 'struct mem_cgroup'
1451 | css_put(&memcg->css);
| ^~
mm/zswap.c:1497:23: warning: assignment to 'struct mem_cgroup *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
1497 | memcg = get_mem_cgroup_from_objcg(objcg);
| ^
mm/zswap.c:1499:31: error: invalid use of undefined type 'struct mem_cgroup'
1499 | css_put(&memcg->css);
| ^~
cc1: some warnings being treated as errors
vim +/cgroup_rstat_flush +546 mm/zswap.c
537
538 static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
539 struct shrink_control *sc)
540 {
541 struct zswap_pool *pool = container_of(shrinker, typeof(*pool), shrinker);
542 struct mem_cgroup *memcg = sc->memcg;
543 struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
544 unsigned long nr_backing, nr_stored, nr_freeable, flags;
545
> 546 cgroup_rstat_flush(memcg->css.cgroup);
547 nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
548 nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
549
550 if (!is_shrinker_enabled(memcg) || !nr_stored)
551 return 0;
552
553 nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
554 /*
555 * Subtract the lru size by an estimate of the number of pages
556 * that should be protected.
557 */
558 spin_lock_irqsave(&lruvec->lru_lock, flags);
559 nr_freeable = nr_freeable > lruvec->nr_zswap_protected ?
560 nr_freeable - lruvec->nr_zswap_protected : 0;
561 spin_unlock_irqrestore(&lruvec->lru_lock, flags);
562
563 /*
564 * Scale the number of freeable pages by the memory saving factor.
565 * This ensures that the better zswap compresses memory, the fewer
566 * pages we will evict to swap (as it will otherwise incur IO for
567 * relatively small memory saving).
568 */
569 return mult_frac(nr_freeable, nr_backing, nr_stored);
570 }
571
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki