* [PATCH 0/5] use refcount+RCU method to implement lockless slab shrink (part 1)
@ 2023-08-16 8:34 Qi Zheng
2023-08-16 8:34 ` [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h Qi Zheng
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: Qi Zheng @ 2023-08-16 8:34 UTC (permalink / raw)
To: akpm, david, tkhai, vbabka, roman.gushchin, djwong, brauner,
paulmck, tytso, steven.price, cel, senozhatsky, yujie.liu, gregkh,
muchun.song, joel, christian.koenig
Cc: linux-kernel, linux-mm, dri-devel, linux-fsdevel, Qi Zheng
Hi all,
To make reviewing and updating easier, I've chosen to split the previous
patchset[1] into the following three parts:
part 1: some cleanups and preparations
part 2: introduce new APIs and convert all shrinkers to use these
part 3: implement lockless slab shrink
This series is part 1.
Comments and suggestions are welcome.
[1]. https://lore.kernel.org/lkml/20230807110936.21819-1-zhengqi.arch@bytedance.com/
Thanks,
Qi
Changelog v4 -> part 1 v1:
- split from the previous large patchset
- fix comment format in [PATCH v4 01/48] (pointed out by Muchun Song)
- change to use kzalloc_node() and fix typo in [PATCH v4 44/48]
(pointed out by Dave Chinner)
- collect Reviewed-bys
- rebase onto next-20230815
Qi Zheng (5):
mm: move some shrinker-related function declarations to mm/internal.h
mm: vmscan: move shrinker-related code into a separate file
mm: shrinker: remove redundant shrinker_rwsem in debugfs operations
drm/ttm: introduce pool_shrink_rwsem
mm: shrinker: add a secondary array for shrinker_info::{map,
nr_deferred}
drivers/gpu/drm/ttm/ttm_pool.c | 15 +
include/linux/memcontrol.h | 12 +-
include/linux/shrinker.h | 37 +-
mm/Makefile | 4 +-
mm/internal.h | 28 ++
mm/shrinker.c | 751 +++++++++++++++++++++++++++++++++
mm/shrinker_debug.c | 16 +-
mm/vmscan.c | 701 ------------------------------
8 files changed, 815 insertions(+), 749 deletions(-)
create mode 100644 mm/shrinker.c
--
2.30.2
^ permalink raw reply [flat|nested] 12+ messages in thread
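One structural detail worth keeping in mind while reading patches 2 and 5: shrinker_info packs its two per-node arrays (the map bitmap and nr_deferred counters) into the same allocation as the struct itself, via pointer arithmetic past the struct's end. Below is a minimal userspace sketch of that layout trick, with plain libc stand-ins for the kernel types; alloc_info() is a hypothetical helper written for illustration, not a kernel function:

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace stand-in for the kernel's atomic_long_t. */
typedef long atomic_long_t;

/*
 * Mirrors the shape of struct shrinker_info as used in
 * alloc_shrinker_info(): both array pointers aim into the
 * tail of a single allocation.
 */
struct shrinker_info {
	atomic_long_t *nr_deferred;
	unsigned long *map;
	int map_nr_max;
};

static struct shrinker_info *alloc_info(int nr_items,
					size_t defer_size, size_t map_size)
{
	/* One zeroed allocation covers the struct plus both arrays. */
	struct shrinker_info *info =
		calloc(1, sizeof(*info) + defer_size + map_size);

	if (!info)
		return NULL;
	/* nr_deferred[] starts immediately after the struct ... */
	info->nr_deferred = (atomic_long_t *)(info + 1);
	/* ... and the map bitmap starts immediately after nr_deferred[]. */
	info->map = (unsigned long *)((char *)info->nr_deferred + defer_size);
	info->map_nr_max = nr_items;
	return info;
}
```

The kernel version does the same arithmetic with kvzalloc_node(), so both arrays stay node-local and the whole thing is freed with a single kvfree(); patch 5's "secondary array" change reworks exactly this layout.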
* [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h
2023-08-16 8:34 [PATCH 0/5] use refcount+RCU method to implement lockless slab shrink (part 1) Qi Zheng
@ 2023-08-16 8:34 ` Qi Zheng
2023-08-16 13:14 ` kernel test robot
` (2 more replies)
2023-08-16 8:34 ` [PATCH 2/5] mm: vmscan: move shrinker-related code into a separate file Qi Zheng
` (3 subsequent siblings)
4 siblings, 3 replies; 12+ messages in thread
From: Qi Zheng @ 2023-08-16 8:34 UTC (permalink / raw)
To: akpm, david, tkhai, vbabka, roman.gushchin, djwong, brauner,
paulmck, tytso, steven.price, cel, senozhatsky, yujie.liu, gregkh,
muchun.song, joel, christian.koenig
Cc: linux-kernel, linux-mm, dri-devel, linux-fsdevel, Qi Zheng,
Muchun Song
The following functions are only used inside the mm subsystem, so move
their declarations into mm/internal.h:
1. shrinker_debugfs_add()
2. shrinker_debugfs_detach()
3. shrinker_debugfs_remove()
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
include/linux/shrinker.h | 19 -------------------
mm/internal.h | 26 ++++++++++++++++++++++++++
2 files changed, 26 insertions(+), 19 deletions(-)
diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 224293b2dd06..8dc15aa37410 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -106,28 +106,9 @@ extern void free_prealloced_shrinker(struct shrinker *shrinker);
extern void synchronize_shrinkers(void);
#ifdef CONFIG_SHRINKER_DEBUG
-extern int shrinker_debugfs_add(struct shrinker *shrinker);
-extern struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
- int *debugfs_id);
-extern void shrinker_debugfs_remove(struct dentry *debugfs_entry,
- int debugfs_id);
extern int __printf(2, 3) shrinker_debugfs_rename(struct shrinker *shrinker,
const char *fmt, ...);
#else /* CONFIG_SHRINKER_DEBUG */
-static inline int shrinker_debugfs_add(struct shrinker *shrinker)
-{
- return 0;
-}
-static inline struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
- int *debugfs_id)
-{
- *debugfs_id = -1;
- return NULL;
-}
-static inline void shrinker_debugfs_remove(struct dentry *debugfs_entry,
- int debugfs_id)
-{
-}
static inline __printf(2, 3)
int shrinker_debugfs_rename(struct shrinker *shrinker, const char *fmt, ...)
{
diff --git a/mm/internal.h b/mm/internal.h
index 0b0029e4db87..dc9c81ff1b27 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1153,4 +1153,30 @@ struct vma_prepare {
struct vm_area_struct *remove;
struct vm_area_struct *remove2;
};
+
+/* shrinker related functions */
+
+#ifdef CONFIG_SHRINKER_DEBUG
+extern int shrinker_debugfs_add(struct shrinker *shrinker);
+extern struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
+ int *debugfs_id);
+extern void shrinker_debugfs_remove(struct dentry *debugfs_entry,
+ int debugfs_id);
+#else /* CONFIG_SHRINKER_DEBUG */
+static inline int shrinker_debugfs_add(struct shrinker *shrinker)
+{
+ return 0;
+}
+static inline struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
+ int *debugfs_id)
+{
+ *debugfs_id = -1;
+ return NULL;
+}
+static inline void shrinker_debugfs_remove(struct dentry *debugfs_entry,
+ int debugfs_id)
+{
+}
+#endif /* CONFIG_SHRINKER_DEBUG */
+
#endif /* __MM_INTERNAL_H */
--
2.30.2
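Patch 2 below moves do_shrink_slab() verbatim, and its scan-target and deferred-work arithmetic is easy to lose in the 700-line diff. A hedged userspace sketch of just that bookkeeping (function names here are illustrative, not the kernel's; the kernel additionally scales delta by shrinker->seeks):

```c
#include <assert.h>

/*
 * Scan target per do_shrink_slab(): previously deferred work (nr) is
 * aged by the reclaim priority, new work (delta) is added, and the
 * result is capped at twice the freeable object count.
 */
static long scan_target(long nr, long delta, long freeable, int priority)
{
	long total_scan = (nr >> priority) + delta;

	return total_scan < 2 * freeable ? total_scan : 2 * freeable;
}

/*
 * Work not done this pass is re-deferred for the next caller,
 * again capped at 2 * freeable so a slow shrinker cannot build
 * an unbounded backlog.
 */
static long defer_remaining(long nr, long delta, long scanned, long freeable)
{
	long next = nr + delta - scanned;

	if (next < 0)
		next = 0;
	return next < 2 * freeable ? next : 2 * freeable;
}
```

In the kernel the deferred count is picked up with atomic_long_xchg() (xchg_nr_deferred()) and put back with atomic_long_add_return() (add_nr_deferred()), so concurrent reclaimers never double-count the same backlog.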
* [PATCH 2/5] mm: vmscan: move shrinker-related code into a separate file
2023-08-16 8:34 [PATCH 0/5] use refcount+RCU method to implement lockless slab shrink (part 1) Qi Zheng
2023-08-16 8:34 ` [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h Qi Zheng
@ 2023-08-16 8:34 ` Qi Zheng
2023-08-16 8:34 ` [PATCH 3/5] mm: shrinker: remove redundant shrinker_rwsem in debugfs operations Qi Zheng
` (2 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: Qi Zheng @ 2023-08-16 8:34 UTC (permalink / raw)
To: akpm, david, tkhai, vbabka, roman.gushchin, djwong, brauner,
paulmck, tytso, steven.price, cel, senozhatsky, yujie.liu, gregkh,
muchun.song, joel, christian.koenig
Cc: linux-kernel, linux-mm, dri-devel, linux-fsdevel, Qi Zheng,
Muchun Song
The mm/vmscan.c file is too large, so move the shrinker-related code
out of it into a separate file. No functional changes.
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
mm/Makefile | 4 +-
mm/internal.h | 2 +
mm/shrinker.c | 709 ++++++++++++++++++++++++++++++++++++++++++++++++++
mm/vmscan.c | 701 -------------------------------------------------
4 files changed, 713 insertions(+), 703 deletions(-)
create mode 100644 mm/shrinker.c
diff --git a/mm/Makefile b/mm/Makefile
index ec65984e2ade..33873c8aedb3 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -48,8 +48,8 @@ endif
obj-y := filemap.o mempool.o oom_kill.o fadvise.o \
maccess.o page-writeback.o folio-compat.o \
- readahead.o swap.o truncate.o vmscan.o shmem.o \
- util.o mmzone.o vmstat.o backing-dev.o \
+ readahead.o swap.o truncate.o vmscan.o shrinker.o \
+ shmem.o util.o mmzone.o vmstat.o backing-dev.o \
mm_init.o percpu.o slab_common.o \
compaction.o show_mem.o shmem_quota.o\
interval_tree.o list_lru.o workingset.o \
diff --git a/mm/internal.h b/mm/internal.h
index dc9c81ff1b27..5907eced8548 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1155,6 +1155,8 @@ struct vma_prepare {
};
/* shrinker related functions */
+unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
+ int priority);
#ifdef CONFIG_SHRINKER_DEBUG
extern int shrinker_debugfs_add(struct shrinker *shrinker);
diff --git a/mm/shrinker.c b/mm/shrinker.c
new file mode 100644
index 000000000000..043c87ccfab4
--- /dev/null
+++ b/mm/shrinker.c
@@ -0,0 +1,709 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/memcontrol.h>
+#include <linux/rwsem.h>
+#include <linux/shrinker.h>
+#include <trace/events/vmscan.h>
+
+#include "internal.h"
+
+LIST_HEAD(shrinker_list);
+DECLARE_RWSEM(shrinker_rwsem);
+
+#ifdef CONFIG_MEMCG
+static int shrinker_nr_max;
+
+/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
+static inline int shrinker_map_size(int nr_items)
+{
+ return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
+}
+
+static inline int shrinker_defer_size(int nr_items)
+{
+ return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t));
+}
+
+void free_shrinker_info(struct mem_cgroup *memcg)
+{
+ struct mem_cgroup_per_node *pn;
+ struct shrinker_info *info;
+ int nid;
+
+ for_each_node(nid) {
+ pn = memcg->nodeinfo[nid];
+ info = rcu_dereference_protected(pn->shrinker_info, true);
+ kvfree(info);
+ rcu_assign_pointer(pn->shrinker_info, NULL);
+ }
+}
+
+int alloc_shrinker_info(struct mem_cgroup *memcg)
+{
+ struct shrinker_info *info;
+ int nid, size, ret = 0;
+ int map_size, defer_size = 0;
+
+ down_write(&shrinker_rwsem);
+ map_size = shrinker_map_size(shrinker_nr_max);
+ defer_size = shrinker_defer_size(shrinker_nr_max);
+ size = map_size + defer_size;
+ for_each_node(nid) {
+ info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
+ if (!info) {
+ free_shrinker_info(memcg);
+ ret = -ENOMEM;
+ break;
+ }
+ info->nr_deferred = (atomic_long_t *)(info + 1);
+ info->map = (void *)info->nr_deferred + defer_size;
+ info->map_nr_max = shrinker_nr_max;
+ rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
+ }
+ up_write(&shrinker_rwsem);
+
+ return ret;
+}
+
+static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
+ int nid)
+{
+ return rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
+ lockdep_is_held(&shrinker_rwsem));
+}
+
+static int expand_one_shrinker_info(struct mem_cgroup *memcg,
+ int map_size, int defer_size,
+ int old_map_size, int old_defer_size,
+ int new_nr_max)
+{
+ struct shrinker_info *new, *old;
+ struct mem_cgroup_per_node *pn;
+ int nid;
+ int size = map_size + defer_size;
+
+ for_each_node(nid) {
+ pn = memcg->nodeinfo[nid];
+ old = shrinker_info_protected(memcg, nid);
+ /* Not yet online memcg */
+ if (!old)
+ return 0;
+
+ /* Already expanded this shrinker_info */
+ if (new_nr_max <= old->map_nr_max)
+ continue;
+
+ new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid);
+ if (!new)
+ return -ENOMEM;
+
+ new->nr_deferred = (atomic_long_t *)(new + 1);
+ new->map = (void *)new->nr_deferred + defer_size;
+ new->map_nr_max = new_nr_max;
+
+ /* map: set all old bits, clear all new bits */
+ memset(new->map, (int)0xff, old_map_size);
+ memset((void *)new->map + old_map_size, 0, map_size - old_map_size);
+ /* nr_deferred: copy old values, clear all new values */
+ memcpy(new->nr_deferred, old->nr_deferred, old_defer_size);
+ memset((void *)new->nr_deferred + old_defer_size, 0,
+ defer_size - old_defer_size);
+
+ rcu_assign_pointer(pn->shrinker_info, new);
+ kvfree_rcu(old, rcu);
+ }
+
+ return 0;
+}
+
+static int expand_shrinker_info(int new_id)
+{
+ int ret = 0;
+ int new_nr_max = round_up(new_id + 1, BITS_PER_LONG);
+ int map_size, defer_size = 0;
+ int old_map_size, old_defer_size = 0;
+ struct mem_cgroup *memcg;
+
+ if (!root_mem_cgroup)
+ goto out;
+
+ lockdep_assert_held(&shrinker_rwsem);
+
+ map_size = shrinker_map_size(new_nr_max);
+ defer_size = shrinker_defer_size(new_nr_max);
+ old_map_size = shrinker_map_size(shrinker_nr_max);
+ old_defer_size = shrinker_defer_size(shrinker_nr_max);
+
+ memcg = mem_cgroup_iter(NULL, NULL, NULL);
+ do {
+ ret = expand_one_shrinker_info(memcg, map_size, defer_size,
+ old_map_size, old_defer_size,
+ new_nr_max);
+ if (ret) {
+ mem_cgroup_iter_break(NULL, memcg);
+ goto out;
+ }
+ } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
+out:
+ if (!ret)
+ shrinker_nr_max = new_nr_max;
+
+ return ret;
+}
+
+void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
+{
+ if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) {
+ struct shrinker_info *info;
+
+ rcu_read_lock();
+ info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info);
+ if (!WARN_ON_ONCE(shrinker_id >= info->map_nr_max)) {
+ /* Pairs with smp mb in shrink_slab() */
+ smp_mb__before_atomic();
+ set_bit(shrinker_id, info->map);
+ }
+ rcu_read_unlock();
+ }
+}
+
+static DEFINE_IDR(shrinker_idr);
+
+static int prealloc_memcg_shrinker(struct shrinker *shrinker)
+{
+ int id, ret = -ENOMEM;
+
+ if (mem_cgroup_disabled())
+ return -ENOSYS;
+
+ down_write(&shrinker_rwsem);
+ /* This may call shrinker, so it must use down_read_trylock() */
+ id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL);
+ if (id < 0)
+ goto unlock;
+
+ if (id >= shrinker_nr_max) {
+ if (expand_shrinker_info(id)) {
+ idr_remove(&shrinker_idr, id);
+ goto unlock;
+ }
+ }
+ shrinker->id = id;
+ ret = 0;
+unlock:
+ up_write(&shrinker_rwsem);
+ return ret;
+}
+
+static void unregister_memcg_shrinker(struct shrinker *shrinker)
+{
+ int id = shrinker->id;
+
+ BUG_ON(id < 0);
+
+ lockdep_assert_held(&shrinker_rwsem);
+
+ idr_remove(&shrinker_idr, id);
+}
+
+static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker,
+ struct mem_cgroup *memcg)
+{
+ struct shrinker_info *info;
+
+ info = shrinker_info_protected(memcg, nid);
+ return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0);
+}
+
+static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
+ struct mem_cgroup *memcg)
+{
+ struct shrinker_info *info;
+
+ info = shrinker_info_protected(memcg, nid);
+ return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]);
+}
+
+void reparent_shrinker_deferred(struct mem_cgroup *memcg)
+{
+ int i, nid;
+ long nr;
+ struct mem_cgroup *parent;
+ struct shrinker_info *child_info, *parent_info;
+
+ parent = parent_mem_cgroup(memcg);
+ if (!parent)
+ parent = root_mem_cgroup;
+
+ /* Prevent from concurrent shrinker_info expand */
+ down_read(&shrinker_rwsem);
+ for_each_node(nid) {
+ child_info = shrinker_info_protected(memcg, nid);
+ parent_info = shrinker_info_protected(parent, nid);
+ for (i = 0; i < child_info->map_nr_max; i++) {
+ nr = atomic_long_read(&child_info->nr_deferred[i]);
+ atomic_long_add(nr, &parent_info->nr_deferred[i]);
+ }
+ }
+ up_read(&shrinker_rwsem);
+}
+#else
+static int prealloc_memcg_shrinker(struct shrinker *shrinker)
+{
+ return -ENOSYS;
+}
+
+static void unregister_memcg_shrinker(struct shrinker *shrinker)
+{
+}
+
+static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker,
+ struct mem_cgroup *memcg)
+{
+ return 0;
+}
+
+static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
+ struct mem_cgroup *memcg)
+{
+ return 0;
+}
+#endif /* CONFIG_MEMCG */
+
+static long xchg_nr_deferred(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ int nid = sc->nid;
+
+ if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
+ nid = 0;
+
+ if (sc->memcg &&
+ (shrinker->flags & SHRINKER_MEMCG_AWARE))
+ return xchg_nr_deferred_memcg(nid, shrinker,
+ sc->memcg);
+
+ return atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
+}
+
+
+static long add_nr_deferred(long nr, struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ int nid = sc->nid;
+
+ if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
+ nid = 0;
+
+ if (sc->memcg &&
+ (shrinker->flags & SHRINKER_MEMCG_AWARE))
+ return add_nr_deferred_memcg(nr, nid, shrinker,
+ sc->memcg);
+
+ return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]);
+}
+
+#define SHRINK_BATCH 128
+
+static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
+ struct shrinker *shrinker, int priority)
+{
+ unsigned long freed = 0;
+ unsigned long long delta;
+ long total_scan;
+ long freeable;
+ long nr;
+ long new_nr;
+ long batch_size = shrinker->batch ? shrinker->batch
+ : SHRINK_BATCH;
+ long scanned = 0, next_deferred;
+
+ freeable = shrinker->count_objects(shrinker, shrinkctl);
+ if (freeable == 0 || freeable == SHRINK_EMPTY)
+ return freeable;
+
+ /*
+ * copy the current shrinker scan count into a local variable
+ * and zero it so that other concurrent shrinker invocations
+ * don't also do this scanning work.
+ */
+ nr = xchg_nr_deferred(shrinker, shrinkctl);
+
+ if (shrinker->seeks) {
+ delta = freeable >> priority;
+ delta *= 4;
+ do_div(delta, shrinker->seeks);
+ } else {
+ /*
+ * These objects don't require any IO to create. Trim
+ * them aggressively under memory pressure to keep
+ * them from causing refetches in the IO caches.
+ */
+ delta = freeable / 2;
+ }
+
+ total_scan = nr >> priority;
+ total_scan += delta;
+ total_scan = min(total_scan, (2 * freeable));
+
+ trace_mm_shrink_slab_start(shrinker, shrinkctl, nr,
+ freeable, delta, total_scan, priority);
+
+ /*
+ * Normally, we should not scan less than batch_size objects in one
+ * pass to avoid too frequent shrinker calls, but if the slab has less
+ * than batch_size objects in total and we are really tight on memory,
+ * we will try to reclaim all available objects, otherwise we can end
+ * up failing allocations although there are plenty of reclaimable
+ * objects spread over several slabs with usage less than the
+ * batch_size.
+ *
+ * We detect the "tight on memory" situations by looking at the total
+ * number of objects we want to scan (total_scan). If it is greater
+ * than the total number of objects on slab (freeable), we must be
+ * scanning at high prio and therefore should try to reclaim as much as
+ * possible.
+ */
+ while (total_scan >= batch_size ||
+ total_scan >= freeable) {
+ unsigned long ret;
+ unsigned long nr_to_scan = min(batch_size, total_scan);
+
+ shrinkctl->nr_to_scan = nr_to_scan;
+ shrinkctl->nr_scanned = nr_to_scan;
+ ret = shrinker->scan_objects(shrinker, shrinkctl);
+ if (ret == SHRINK_STOP)
+ break;
+ freed += ret;
+
+ count_vm_events(SLABS_SCANNED, shrinkctl->nr_scanned);
+ total_scan -= shrinkctl->nr_scanned;
+ scanned += shrinkctl->nr_scanned;
+
+ cond_resched();
+ }
+
+ /*
+ * The deferred work is increased by any new work (delta) that wasn't
+ * done, decreased by old deferred work that was done now.
+ *
+ * And it is capped to two times of the freeable items.
+ */
+ next_deferred = max_t(long, (nr + delta - scanned), 0);
+ next_deferred = min(next_deferred, (2 * freeable));
+
+ /*
+ * move the unused scan count back into the shrinker in a
+ * manner that handles concurrent updates.
+ */
+ new_nr = add_nr_deferred(next_deferred, shrinker, shrinkctl);
+
+ trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan);
+ return freed;
+}
+
+#ifdef CONFIG_MEMCG
+static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
+ struct mem_cgroup *memcg, int priority)
+{
+ struct shrinker_info *info;
+ unsigned long ret, freed = 0;
+ int i;
+
+ if (!mem_cgroup_online(memcg))
+ return 0;
+
+ if (!down_read_trylock(&shrinker_rwsem))
+ return 0;
+
+ info = shrinker_info_protected(memcg, nid);
+ if (unlikely(!info))
+ goto unlock;
+
+ for_each_set_bit(i, info->map, info->map_nr_max) {
+ struct shrink_control sc = {
+ .gfp_mask = gfp_mask,
+ .nid = nid,
+ .memcg = memcg,
+ };
+ struct shrinker *shrinker;
+
+ shrinker = idr_find(&shrinker_idr, i);
+ if (unlikely(!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))) {
+ if (!shrinker)
+ clear_bit(i, info->map);
+ continue;
+ }
+
+ /* Call non-slab shrinkers even though kmem is disabled */
+ if (!memcg_kmem_online() &&
+ !(shrinker->flags & SHRINKER_NONSLAB))
+ continue;
+
+ ret = do_shrink_slab(&sc, shrinker, priority);
+ if (ret == SHRINK_EMPTY) {
+ clear_bit(i, info->map);
+ /*
+ * After the shrinker reported that it had no objects to
+ * free, but before we cleared the corresponding bit in
+ * the memcg shrinker map, a new object might have been
+ * added. To make sure, we have the bit set in this
+ * case, we invoke the shrinker one more time and reset
+ * the bit if it reports that it is not empty anymore.
+ * The memory barrier here pairs with the barrier in
+ * set_shrinker_bit():
+ *
+ * list_lru_add() shrink_slab_memcg()
+ * list_add_tail() clear_bit()
+ * <MB> <MB>
+ * set_bit() do_shrink_slab()
+ */
+ smp_mb__after_atomic();
+ ret = do_shrink_slab(&sc, shrinker, priority);
+ if (ret == SHRINK_EMPTY)
+ ret = 0;
+ else
+ set_shrinker_bit(memcg, nid, i);
+ }
+ freed += ret;
+
+ if (rwsem_is_contended(&shrinker_rwsem)) {
+ freed = freed ? : 1;
+ break;
+ }
+ }
+unlock:
+ up_read(&shrinker_rwsem);
+ return freed;
+}
+#else /* !CONFIG_MEMCG */
+static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
+ struct mem_cgroup *memcg, int priority)
+{
+ return 0;
+}
+#endif /* CONFIG_MEMCG */
+
+/**
+ * shrink_slab - shrink slab caches
+ * @gfp_mask: allocation context
+ * @nid: node whose slab caches to target
+ * @memcg: memory cgroup whose slab caches to target
+ * @priority: the reclaim priority
+ *
+ * Call the shrink functions to age shrinkable caches.
+ *
+ * @nid is passed along to shrinkers with SHRINKER_NUMA_AWARE set,
+ * unaware shrinkers will receive a node id of 0 instead.
+ *
+ * @memcg specifies the memory cgroup to target. Unaware shrinkers
+ * are called only if it is the root cgroup.
+ *
+ * @priority is sc->priority, we take the number of objects and >> by priority
+ * in order to get the scan target.
+ *
+ * Returns the number of reclaimed slab objects.
+ */
+unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
+ int priority)
+{
+ unsigned long ret, freed = 0;
+ struct shrinker *shrinker;
+
+ /*
+ * The root memcg might be allocated even though memcg is disabled
+ * via "cgroup_disable=memory" boot parameter. This could make
+ * mem_cgroup_is_root() return false, then just run memcg slab
+ * shrink, but skip global shrink. This may result in premature
+ * oom.
+ */
+ if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
+ return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
+
+ if (!down_read_trylock(&shrinker_rwsem))
+ goto out;
+
+ list_for_each_entry(shrinker, &shrinker_list, list) {
+ struct shrink_control sc = {
+ .gfp_mask = gfp_mask,
+ .nid = nid,
+ .memcg = memcg,
+ };
+
+ ret = do_shrink_slab(&sc, shrinker, priority);
+ if (ret == SHRINK_EMPTY)
+ ret = 0;
+ freed += ret;
+ /*
+ * Bail out if someone want to register a new shrinker to
+ * prevent the registration from being stalled for long periods
+ * by parallel ongoing shrinking.
+ */
+ if (rwsem_is_contended(&shrinker_rwsem)) {
+ freed = freed ? : 1;
+ break;
+ }
+ }
+
+ up_read(&shrinker_rwsem);
+out:
+ cond_resched();
+ return freed;
+}
+
+/*
+ * Add a shrinker callback to be called from the vm.
+ */
+static int __prealloc_shrinker(struct shrinker *shrinker)
+{
+ unsigned int size;
+ int err;
+
+ if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
+ err = prealloc_memcg_shrinker(shrinker);
+ if (err != -ENOSYS)
+ return err;
+
+ shrinker->flags &= ~SHRINKER_MEMCG_AWARE;
+ }
+
+ size = sizeof(*shrinker->nr_deferred);
+ if (shrinker->flags & SHRINKER_NUMA_AWARE)
+ size *= nr_node_ids;
+
+ shrinker->nr_deferred = kzalloc(size, GFP_KERNEL);
+ if (!shrinker->nr_deferred)
+ return -ENOMEM;
+
+ return 0;
+}
+
+#ifdef CONFIG_SHRINKER_DEBUG
+int prealloc_shrinker(struct shrinker *shrinker, const char *fmt, ...)
+{
+ va_list ap;
+ int err;
+
+ va_start(ap, fmt);
+ shrinker->name = kvasprintf_const(GFP_KERNEL, fmt, ap);
+ va_end(ap);
+ if (!shrinker->name)
+ return -ENOMEM;
+
+ err = __prealloc_shrinker(shrinker);
+ if (err) {
+ kfree_const(shrinker->name);
+ shrinker->name = NULL;
+ }
+
+ return err;
+}
+#else
+int prealloc_shrinker(struct shrinker *shrinker, const char *fmt, ...)
+{
+ return __prealloc_shrinker(shrinker);
+}
+#endif
+
+void free_prealloced_shrinker(struct shrinker *shrinker)
+{
+#ifdef CONFIG_SHRINKER_DEBUG
+ kfree_const(shrinker->name);
+ shrinker->name = NULL;
+#endif
+ if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
+ down_write(&shrinker_rwsem);
+ unregister_memcg_shrinker(shrinker);
+ up_write(&shrinker_rwsem);
+ return;
+ }
+
+ kfree(shrinker->nr_deferred);
+ shrinker->nr_deferred = NULL;
+}
+
+void register_shrinker_prepared(struct shrinker *shrinker)
+{
+ down_write(&shrinker_rwsem);
+ list_add_tail(&shrinker->list, &shrinker_list);
+ shrinker->flags |= SHRINKER_REGISTERED;
+ shrinker_debugfs_add(shrinker);
+ up_write(&shrinker_rwsem);
+}
+
+static int __register_shrinker(struct shrinker *shrinker)
+{
+ int err = __prealloc_shrinker(shrinker);
+
+ if (err)
+ return err;
+ register_shrinker_prepared(shrinker);
+ return 0;
+}
+
+#ifdef CONFIG_SHRINKER_DEBUG
+int register_shrinker(struct shrinker *shrinker, const char *fmt, ...)
+{
+ va_list ap;
+ int err;
+
+ va_start(ap, fmt);
+ shrinker->name = kvasprintf_const(GFP_KERNEL, fmt, ap);
+ va_end(ap);
+ if (!shrinker->name)
+ return -ENOMEM;
+
+ err = __register_shrinker(shrinker);
+ if (err) {
+ kfree_const(shrinker->name);
+ shrinker->name = NULL;
+ }
+ return err;
+}
+#else
+int register_shrinker(struct shrinker *shrinker, const char *fmt, ...)
+{
+ return __register_shrinker(shrinker);
+}
+#endif
+EXPORT_SYMBOL(register_shrinker);
+
+/*
+ * Remove one
+ */
+void unregister_shrinker(struct shrinker *shrinker)
+{
+ struct dentry *debugfs_entry;
+ int debugfs_id;
+
+ if (!(shrinker->flags & SHRINKER_REGISTERED))
+ return;
+
+ down_write(&shrinker_rwsem);
+ list_del(&shrinker->list);
+ shrinker->flags &= ~SHRINKER_REGISTERED;
+ if (shrinker->flags & SHRINKER_MEMCG_AWARE)
+ unregister_memcg_shrinker(shrinker);
+ debugfs_entry = shrinker_debugfs_detach(shrinker, &debugfs_id);
+ up_write(&shrinker_rwsem);
+
+ shrinker_debugfs_remove(debugfs_entry, debugfs_id);
+
+ kfree(shrinker->nr_deferred);
+ shrinker->nr_deferred = NULL;
+}
+EXPORT_SYMBOL(unregister_shrinker);
+
+/**
+ * synchronize_shrinkers - Wait for all running shrinkers to complete.
+ *
+ * This is equivalent to calling unregister_shrink() and register_shrinker(),
+ * but atomically and with less overhead. This is useful to guarantee that all
+ * shrinker invocations have seen an update, before freeing memory, similar to
+ * rcu.
+ */
+void synchronize_shrinkers(void)
+{
+ down_write(&shrinker_rwsem);
+ up_write(&shrinker_rwsem);
+}
+EXPORT_SYMBOL(synchronize_shrinkers);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c7c149cb8d66..f5df4f1bf620 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -35,7 +35,6 @@
#include <linux/cpuset.h>
#include <linux/compaction.h>
#include <linux/notifier.h>
-#include <linux/rwsem.h>
#include <linux/delay.h>
#include <linux/kthread.h>
#include <linux/freezer.h>
@@ -188,246 +187,7 @@ struct scan_control {
*/
int vm_swappiness = 60;
-LIST_HEAD(shrinker_list);
-DECLARE_RWSEM(shrinker_rwsem);
-
#ifdef CONFIG_MEMCG
-static int shrinker_nr_max;
-
-/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
-static inline int shrinker_map_size(int nr_items)
-{
- return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
-}
-
-static inline int shrinker_defer_size(int nr_items)
-{
- return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t));
-}
-
-static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
- int nid)
-{
- return rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
- lockdep_is_held(&shrinker_rwsem));
-}
-
-static int expand_one_shrinker_info(struct mem_cgroup *memcg,
- int map_size, int defer_size,
- int old_map_size, int old_defer_size,
- int new_nr_max)
-{
- struct shrinker_info *new, *old;
- struct mem_cgroup_per_node *pn;
- int nid;
- int size = map_size + defer_size;
-
- for_each_node(nid) {
- pn = memcg->nodeinfo[nid];
- old = shrinker_info_protected(memcg, nid);
- /* Not yet online memcg */
- if (!old)
- return 0;
-
- /* Already expanded this shrinker_info */
- if (new_nr_max <= old->map_nr_max)
- continue;
-
- new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid);
- if (!new)
- return -ENOMEM;
-
- new->nr_deferred = (atomic_long_t *)(new + 1);
- new->map = (void *)new->nr_deferred + defer_size;
- new->map_nr_max = new_nr_max;
-
- /* map: set all old bits, clear all new bits */
- memset(new->map, (int)0xff, old_map_size);
- memset((void *)new->map + old_map_size, 0, map_size - old_map_size);
- /* nr_deferred: copy old values, clear all new values */
- memcpy(new->nr_deferred, old->nr_deferred, old_defer_size);
- memset((void *)new->nr_deferred + old_defer_size, 0,
- defer_size - old_defer_size);
-
- rcu_assign_pointer(pn->shrinker_info, new);
- kvfree_rcu(old, rcu);
- }
-
- return 0;
-}
-
-void free_shrinker_info(struct mem_cgroup *memcg)
-{
- struct mem_cgroup_per_node *pn;
- struct shrinker_info *info;
- int nid;
-
- for_each_node(nid) {
- pn = memcg->nodeinfo[nid];
- info = rcu_dereference_protected(pn->shrinker_info, true);
- kvfree(info);
- rcu_assign_pointer(pn->shrinker_info, NULL);
- }
-}
-
-int alloc_shrinker_info(struct mem_cgroup *memcg)
-{
- struct shrinker_info *info;
- int nid, size, ret = 0;
- int map_size, defer_size = 0;
-
- down_write(&shrinker_rwsem);
- map_size = shrinker_map_size(shrinker_nr_max);
- defer_size = shrinker_defer_size(shrinker_nr_max);
- size = map_size + defer_size;
- for_each_node(nid) {
- info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
- if (!info) {
- free_shrinker_info(memcg);
- ret = -ENOMEM;
- break;
- }
- info->nr_deferred = (atomic_long_t *)(info + 1);
- info->map = (void *)info->nr_deferred + defer_size;
- info->map_nr_max = shrinker_nr_max;
- rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
- }
- up_write(&shrinker_rwsem);
-
- return ret;
-}
-
-static int expand_shrinker_info(int new_id)
-{
- int ret = 0;
- int new_nr_max = round_up(new_id + 1, BITS_PER_LONG);
- int map_size, defer_size = 0;
- int old_map_size, old_defer_size = 0;
- struct mem_cgroup *memcg;
-
- if (!root_mem_cgroup)
- goto out;
-
- lockdep_assert_held(&shrinker_rwsem);
-
- map_size = shrinker_map_size(new_nr_max);
- defer_size = shrinker_defer_size(new_nr_max);
- old_map_size = shrinker_map_size(shrinker_nr_max);
- old_defer_size = shrinker_defer_size(shrinker_nr_max);
-
- memcg = mem_cgroup_iter(NULL, NULL, NULL);
- do {
- ret = expand_one_shrinker_info(memcg, map_size, defer_size,
- old_map_size, old_defer_size,
- new_nr_max);
- if (ret) {
- mem_cgroup_iter_break(NULL, memcg);
- goto out;
- }
- } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
-out:
- if (!ret)
- shrinker_nr_max = new_nr_max;
-
- return ret;
-}
-
-void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
-{
- if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) {
- struct shrinker_info *info;
-
- rcu_read_lock();
- info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info);
- if (!WARN_ON_ONCE(shrinker_id >= info->map_nr_max)) {
- /* Pairs with smp mb in shrink_slab() */
- smp_mb__before_atomic();
- set_bit(shrinker_id, info->map);
- }
- rcu_read_unlock();
- }
-}
-
-static DEFINE_IDR(shrinker_idr);
-
-static int prealloc_memcg_shrinker(struct shrinker *shrinker)
-{
- int id, ret = -ENOMEM;
-
- if (mem_cgroup_disabled())
- return -ENOSYS;
-
- down_write(&shrinker_rwsem);
- /* This may call shrinker, so it must use down_read_trylock() */
- id = idr_alloc(&shrinker_idr, shrinker, 0, 0, GFP_KERNEL);
- if (id < 0)
- goto unlock;
-
- if (id >= shrinker_nr_max) {
- if (expand_shrinker_info(id)) {
- idr_remove(&shrinker_idr, id);
- goto unlock;
- }
- }
- shrinker->id = id;
- ret = 0;
-unlock:
- up_write(&shrinker_rwsem);
- return ret;
-}
-
-static void unregister_memcg_shrinker(struct shrinker *shrinker)
-{
- int id = shrinker->id;
-
- BUG_ON(id < 0);
-
- lockdep_assert_held(&shrinker_rwsem);
-
- idr_remove(&shrinker_idr, id);
-}
-
-static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker,
- struct mem_cgroup *memcg)
-{
- struct shrinker_info *info;
-
- info = shrinker_info_protected(memcg, nid);
- return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0);
-}
-
-static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
- struct mem_cgroup *memcg)
-{
- struct shrinker_info *info;
-
- info = shrinker_info_protected(memcg, nid);
- return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]);
-}
-
-void reparent_shrinker_deferred(struct mem_cgroup *memcg)
-{
- int i, nid;
- long nr;
- struct mem_cgroup *parent;
- struct shrinker_info *child_info, *parent_info;
-
- parent = parent_mem_cgroup(memcg);
- if (!parent)
- parent = root_mem_cgroup;
-
- /* Prevent from concurrent shrinker_info expand */
- down_read(&shrinker_rwsem);
- for_each_node(nid) {
- child_info = shrinker_info_protected(memcg, nid);
- parent_info = shrinker_info_protected(parent, nid);
- for (i = 0; i < child_info->map_nr_max; i++) {
- nr = atomic_long_read(&child_info->nr_deferred[i]);
- atomic_long_add(nr, &parent_info->nr_deferred[i]);
- }
- }
- up_read(&shrinker_rwsem);
-}
/* Returns true for reclaim through cgroup limits or cgroup interfaces. */
static bool cgroup_reclaim(struct scan_control *sc)
@@ -468,27 +228,6 @@ static bool writeback_throttling_sane(struct scan_control *sc)
return false;
}
#else
-static int prealloc_memcg_shrinker(struct shrinker *shrinker)
-{
- return -ENOSYS;
-}
-
-static void unregister_memcg_shrinker(struct shrinker *shrinker)
-{
-}
-
-static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker,
- struct mem_cgroup *memcg)
-{
- return 0;
-}
-
-static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
- struct mem_cgroup *memcg)
-{
- return 0;
-}
-
static bool cgroup_reclaim(struct scan_control *sc)
{
return false;
@@ -557,39 +296,6 @@ static void flush_reclaim_state(struct scan_control *sc)
}
}
-static long xchg_nr_deferred(struct shrinker *shrinker,
- struct shrink_control *sc)
-{
- int nid = sc->nid;
-
- if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
- nid = 0;
-
- if (sc->memcg &&
- (shrinker->flags & SHRINKER_MEMCG_AWARE))
- return xchg_nr_deferred_memcg(nid, shrinker,
- sc->memcg);
-
- return atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
-}
-
-
-static long add_nr_deferred(long nr, struct shrinker *shrinker,
- struct shrink_control *sc)
-{
- int nid = sc->nid;
-
- if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
- nid = 0;
-
- if (sc->memcg &&
- (shrinker->flags & SHRINKER_MEMCG_AWARE))
- return add_nr_deferred_memcg(nr, nid, shrinker,
- sc->memcg);
-
- return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]);
-}
-
static bool can_demote(int nid, struct scan_control *sc)
{
if (!numa_demotion_enabled)
@@ -671,413 +377,6 @@ static unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru,
return size;
}
-/*
- * Add a shrinker callback to be called from the vm.
- */
-static int __prealloc_shrinker(struct shrinker *shrinker)
-{
- unsigned int size;
- int err;
-
- if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
- err = prealloc_memcg_shrinker(shrinker);
- if (err != -ENOSYS)
- return err;
-
- shrinker->flags &= ~SHRINKER_MEMCG_AWARE;
- }
-
- size = sizeof(*shrinker->nr_deferred);
- if (shrinker->flags & SHRINKER_NUMA_AWARE)
- size *= nr_node_ids;
-
- shrinker->nr_deferred = kzalloc(size, GFP_KERNEL);
- if (!shrinker->nr_deferred)
- return -ENOMEM;
-
- return 0;
-}
-
-#ifdef CONFIG_SHRINKER_DEBUG
-int prealloc_shrinker(struct shrinker *shrinker, const char *fmt, ...)
-{
- va_list ap;
- int err;
-
- va_start(ap, fmt);
- shrinker->name = kvasprintf_const(GFP_KERNEL, fmt, ap);
- va_end(ap);
- if (!shrinker->name)
- return -ENOMEM;
-
- err = __prealloc_shrinker(shrinker);
- if (err) {
- kfree_const(shrinker->name);
- shrinker->name = NULL;
- }
-
- return err;
-}
-#else
-int prealloc_shrinker(struct shrinker *shrinker, const char *fmt, ...)
-{
- return __prealloc_shrinker(shrinker);
-}
-#endif
-
-void free_prealloced_shrinker(struct shrinker *shrinker)
-{
-#ifdef CONFIG_SHRINKER_DEBUG
- kfree_const(shrinker->name);
- shrinker->name = NULL;
-#endif
- if (shrinker->flags & SHRINKER_MEMCG_AWARE) {
- down_write(&shrinker_rwsem);
- unregister_memcg_shrinker(shrinker);
- up_write(&shrinker_rwsem);
- return;
- }
-
- kfree(shrinker->nr_deferred);
- shrinker->nr_deferred = NULL;
-}
-
-void register_shrinker_prepared(struct shrinker *shrinker)
-{
- down_write(&shrinker_rwsem);
- list_add_tail(&shrinker->list, &shrinker_list);
- shrinker->flags |= SHRINKER_REGISTERED;
- shrinker_debugfs_add(shrinker);
- up_write(&shrinker_rwsem);
-}
-
-static int __register_shrinker(struct shrinker *shrinker)
-{
- int err = __prealloc_shrinker(shrinker);
-
- if (err)
- return err;
- register_shrinker_prepared(shrinker);
- return 0;
-}
-
-#ifdef CONFIG_SHRINKER_DEBUG
-int register_shrinker(struct shrinker *shrinker, const char *fmt, ...)
-{
- va_list ap;
- int err;
-
- va_start(ap, fmt);
- shrinker->name = kvasprintf_const(GFP_KERNEL, fmt, ap);
- va_end(ap);
- if (!shrinker->name)
- return -ENOMEM;
-
- err = __register_shrinker(shrinker);
- if (err) {
- kfree_const(shrinker->name);
- shrinker->name = NULL;
- }
- return err;
-}
-#else
-int register_shrinker(struct shrinker *shrinker, const char *fmt, ...)
-{
- return __register_shrinker(shrinker);
-}
-#endif
-EXPORT_SYMBOL(register_shrinker);
-
-/*
- * Remove one
- */
-void unregister_shrinker(struct shrinker *shrinker)
-{
- struct dentry *debugfs_entry;
- int debugfs_id;
-
- if (!(shrinker->flags & SHRINKER_REGISTERED))
- return;
-
- down_write(&shrinker_rwsem);
- list_del(&shrinker->list);
- shrinker->flags &= ~SHRINKER_REGISTERED;
- if (shrinker->flags & SHRINKER_MEMCG_AWARE)
- unregister_memcg_shrinker(shrinker);
- debugfs_entry = shrinker_debugfs_detach(shrinker, &debugfs_id);
- up_write(&shrinker_rwsem);
-
- shrinker_debugfs_remove(debugfs_entry, debugfs_id);
-
- kfree(shrinker->nr_deferred);
- shrinker->nr_deferred = NULL;
-}
-EXPORT_SYMBOL(unregister_shrinker);
-
-/**
- * synchronize_shrinkers - Wait for all running shrinkers to complete.
- *
- * This is equivalent to calling unregister_shrink() and register_shrinker(),
- * but atomically and with less overhead. This is useful to guarantee that all
- * shrinker invocations have seen an update, before freeing memory, similar to
- * rcu.
- */
-void synchronize_shrinkers(void)
-{
- down_write(&shrinker_rwsem);
- up_write(&shrinker_rwsem);
-}
-EXPORT_SYMBOL(synchronize_shrinkers);
-
-#define SHRINK_BATCH 128
-
-static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
- struct shrinker *shrinker, int priority)
-{
- unsigned long freed = 0;
- unsigned long long delta;
- long total_scan;
- long freeable;
- long nr;
- long new_nr;
- long batch_size = shrinker->batch ? shrinker->batch
- : SHRINK_BATCH;
- long scanned = 0, next_deferred;
-
- freeable = shrinker->count_objects(shrinker, shrinkctl);
- if (freeable == 0 || freeable == SHRINK_EMPTY)
- return freeable;
-
- /*
- * copy the current shrinker scan count into a local variable
- * and zero it so that other concurrent shrinker invocations
- * don't also do this scanning work.
- */
- nr = xchg_nr_deferred(shrinker, shrinkctl);
-
- if (shrinker->seeks) {
- delta = freeable >> priority;
- delta *= 4;
- do_div(delta, shrinker->seeks);
- } else {
- /*
- * These objects don't require any IO to create. Trim
- * them aggressively under memory pressure to keep
- * them from causing refetches in the IO caches.
- */
- delta = freeable / 2;
- }
-
- total_scan = nr >> priority;
- total_scan += delta;
- total_scan = min(total_scan, (2 * freeable));
-
- trace_mm_shrink_slab_start(shrinker, shrinkctl, nr,
- freeable, delta, total_scan, priority);
-
- /*
- * Normally, we should not scan less than batch_size objects in one
- * pass to avoid too frequent shrinker calls, but if the slab has less
- * than batch_size objects in total and we are really tight on memory,
- * we will try to reclaim all available objects, otherwise we can end
- * up failing allocations although there are plenty of reclaimable
- * objects spread over several slabs with usage less than the
- * batch_size.
- *
- * We detect the "tight on memory" situations by looking at the total
- * number of objects we want to scan (total_scan). If it is greater
- * than the total number of objects on slab (freeable), we must be
- * scanning at high prio and therefore should try to reclaim as much as
- * possible.
- */
- while (total_scan >= batch_size ||
- total_scan >= freeable) {
- unsigned long ret;
- unsigned long nr_to_scan = min(batch_size, total_scan);
-
- shrinkctl->nr_to_scan = nr_to_scan;
- shrinkctl->nr_scanned = nr_to_scan;
- ret = shrinker->scan_objects(shrinker, shrinkctl);
- if (ret == SHRINK_STOP)
- break;
- freed += ret;
-
- count_vm_events(SLABS_SCANNED, shrinkctl->nr_scanned);
- total_scan -= shrinkctl->nr_scanned;
- scanned += shrinkctl->nr_scanned;
-
- cond_resched();
- }
-
- /*
- * The deferred work is increased by any new work (delta) that wasn't
- * done, decreased by old deferred work that was done now.
- *
- * And it is capped to two times of the freeable items.
- */
- next_deferred = max_t(long, (nr + delta - scanned), 0);
- next_deferred = min(next_deferred, (2 * freeable));
-
- /*
- * move the unused scan count back into the shrinker in a
- * manner that handles concurrent updates.
- */
- new_nr = add_nr_deferred(next_deferred, shrinker, shrinkctl);
-
- trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan);
- return freed;
-}
-
-#ifdef CONFIG_MEMCG
-static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
- struct mem_cgroup *memcg, int priority)
-{
- struct shrinker_info *info;
- unsigned long ret, freed = 0;
- int i;
-
- if (!mem_cgroup_online(memcg))
- return 0;
-
- if (!down_read_trylock(&shrinker_rwsem))
- return 0;
-
- info = shrinker_info_protected(memcg, nid);
- if (unlikely(!info))
- goto unlock;
-
- for_each_set_bit(i, info->map, info->map_nr_max) {
- struct shrink_control sc = {
- .gfp_mask = gfp_mask,
- .nid = nid,
- .memcg = memcg,
- };
- struct shrinker *shrinker;
-
- shrinker = idr_find(&shrinker_idr, i);
- if (unlikely(!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))) {
- if (!shrinker)
- clear_bit(i, info->map);
- continue;
- }
-
- /* Call non-slab shrinkers even though kmem is disabled */
- if (!memcg_kmem_online() &&
- !(shrinker->flags & SHRINKER_NONSLAB))
- continue;
-
- ret = do_shrink_slab(&sc, shrinker, priority);
- if (ret == SHRINK_EMPTY) {
- clear_bit(i, info->map);
- /*
- * After the shrinker reported that it had no objects to
- * free, but before we cleared the corresponding bit in
- * the memcg shrinker map, a new object might have been
- * added. To make sure, we have the bit set in this
- * case, we invoke the shrinker one more time and reset
- * the bit if it reports that it is not empty anymore.
- * The memory barrier here pairs with the barrier in
- * set_shrinker_bit():
- *
- * list_lru_add() shrink_slab_memcg()
- * list_add_tail() clear_bit()
- * <MB> <MB>
- * set_bit() do_shrink_slab()
- */
- smp_mb__after_atomic();
- ret = do_shrink_slab(&sc, shrinker, priority);
- if (ret == SHRINK_EMPTY)
- ret = 0;
- else
- set_shrinker_bit(memcg, nid, i);
- }
- freed += ret;
-
- if (rwsem_is_contended(&shrinker_rwsem)) {
- freed = freed ? : 1;
- break;
- }
- }
-unlock:
- up_read(&shrinker_rwsem);
- return freed;
-}
-#else /* CONFIG_MEMCG */
-static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
- struct mem_cgroup *memcg, int priority)
-{
- return 0;
-}
-#endif /* CONFIG_MEMCG */
-
-/**
- * shrink_slab - shrink slab caches
- * @gfp_mask: allocation context
- * @nid: node whose slab caches to target
- * @memcg: memory cgroup whose slab caches to target
- * @priority: the reclaim priority
- *
- * Call the shrink functions to age shrinkable caches.
- *
- * @nid is passed along to shrinkers with SHRINKER_NUMA_AWARE set,
- * unaware shrinkers will receive a node id of 0 instead.
- *
- * @memcg specifies the memory cgroup to target. Unaware shrinkers
- * are called only if it is the root cgroup.
- *
- * @priority is sc->priority, we take the number of objects and >> by priority
- * in order to get the scan target.
- *
- * Returns the number of reclaimed slab objects.
- */
-static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
- struct mem_cgroup *memcg,
- int priority)
-{
- unsigned long ret, freed = 0;
- struct shrinker *shrinker;
-
- /*
- * The root memcg might be allocated even though memcg is disabled
- * via "cgroup_disable=memory" boot parameter. This could make
- * mem_cgroup_is_root() return false, then just run memcg slab
- * shrink, but skip global shrink. This may result in premature
- * oom.
- */
- if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
- return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
-
- if (!down_read_trylock(&shrinker_rwsem))
- goto out;
-
- list_for_each_entry(shrinker, &shrinker_list, list) {
- struct shrink_control sc = {
- .gfp_mask = gfp_mask,
- .nid = nid,
- .memcg = memcg,
- };
-
- ret = do_shrink_slab(&sc, shrinker, priority);
- if (ret == SHRINK_EMPTY)
- ret = 0;
- freed += ret;
- /*
- * Bail out if someone want to register a new shrinker to
- * prevent the registration from being stalled for long periods
- * by parallel ongoing shrinking.
- */
- if (rwsem_is_contended(&shrinker_rwsem)) {
- freed = freed ? : 1;
- break;
- }
- }
-
- up_read(&shrinker_rwsem);
-out:
- cond_resched();
- return freed;
-}
-
static unsigned long drop_slab_node(int nid)
{
unsigned long freed = 0;
--
2.30.2
* [PATCH 3/5] mm: shrinker: remove redundant shrinker_rwsem in debugfs operations
2023-08-16 8:34 [PATCH 0/5] use refcount+RCU method to implement lockless slab shrink (part 1) Qi Zheng
2023-08-16 8:34 ` [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h Qi Zheng
2023-08-16 8:34 ` [PATCH 2/5] mm: vmscan: move shrinker-related code into a separate file Qi Zheng
@ 2023-08-16 8:34 ` Qi Zheng
2023-08-16 8:34 ` [PATCH 4/5] drm/ttm: introduce pool_shrink_rwsem Qi Zheng
2023-08-16 8:34 ` [PATCH 5/5] mm: shrinker: add a secondary array for shrinker_info::{map, nr_deferred} Qi Zheng
4 siblings, 0 replies; 12+ messages in thread
From: Qi Zheng @ 2023-08-16 8:34 UTC (permalink / raw)
To: akpm, david, tkhai, vbabka, roman.gushchin, djwong, brauner,
paulmck, tytso, steven.price, cel, senozhatsky, yujie.liu, gregkh,
muchun.song, joel, christian.koenig
Cc: linux-kernel, linux-mm, dri-devel, linux-fsdevel, Qi Zheng,
Muchun Song
The debugfs_remove_recursive() will wait for debugfs_file_put() to return,
so the shrinker will not be freed while a debugfs operation (such as
shrinker_debugfs_count_show() or shrinker_debugfs_scan_write()) is in
flight. There is therefore no need to hold shrinker_rwsem during debugfs
operations.
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
mm/shrinker_debug.c | 16 +---------------
1 file changed, 1 insertion(+), 15 deletions(-)
diff --git a/mm/shrinker_debug.c b/mm/shrinker_debug.c
index 3ab53fad8876..61702bdc1af4 100644
--- a/mm/shrinker_debug.c
+++ b/mm/shrinker_debug.c
@@ -49,17 +49,12 @@ static int shrinker_debugfs_count_show(struct seq_file *m, void *v)
struct mem_cgroup *memcg;
unsigned long total;
bool memcg_aware;
- int ret, nid;
+ int ret = 0, nid;
count_per_node = kcalloc(nr_node_ids, sizeof(unsigned long), GFP_KERNEL);
if (!count_per_node)
return -ENOMEM;
- ret = down_read_killable(&shrinker_rwsem);
- if (ret) {
- kfree(count_per_node);
- return ret;
- }
rcu_read_lock();
memcg_aware = shrinker->flags & SHRINKER_MEMCG_AWARE;
@@ -92,7 +87,6 @@ static int shrinker_debugfs_count_show(struct seq_file *m, void *v)
} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
rcu_read_unlock();
- up_read(&shrinker_rwsem);
kfree(count_per_node);
return ret;
@@ -117,7 +111,6 @@ static ssize_t shrinker_debugfs_scan_write(struct file *file,
struct mem_cgroup *memcg = NULL;
int nid;
char kbuf[72];
- ssize_t ret;
read_len = size < (sizeof(kbuf) - 1) ? size : (sizeof(kbuf) - 1);
if (copy_from_user(kbuf, buf, read_len))
@@ -146,12 +139,6 @@ static ssize_t shrinker_debugfs_scan_write(struct file *file,
return -EINVAL;
}
- ret = down_read_killable(&shrinker_rwsem);
- if (ret) {
- mem_cgroup_put(memcg);
- return ret;
- }
-
sc.nid = nid;
sc.memcg = memcg;
sc.nr_to_scan = nr_to_scan;
@@ -159,7 +146,6 @@ static ssize_t shrinker_debugfs_scan_write(struct file *file,
shrinker->scan_objects(shrinker, &sc);
- up_read(&shrinker_rwsem);
mem_cgroup_put(memcg);
return size;
--
2.30.2
* [PATCH 4/5] drm/ttm: introduce pool_shrink_rwsem
2023-08-16 8:34 [PATCH 0/5] use refcount+RCU method to implement lockless slab shrink (part 1) Qi Zheng
` (2 preceding siblings ...)
2023-08-16 8:34 ` [PATCH 3/5] mm: shrinker: remove redundant shrinker_rwsem in debugfs operations Qi Zheng
@ 2023-08-16 8:34 ` Qi Zheng
2023-08-16 9:14 ` Christian König
2023-08-16 8:34 ` [PATCH 5/5] mm: shrinker: add a secondary array for shrinker_info::{map, nr_deferred} Qi Zheng
4 siblings, 1 reply; 12+ messages in thread
From: Qi Zheng @ 2023-08-16 8:34 UTC (permalink / raw)
To: akpm, david, tkhai, vbabka, roman.gushchin, djwong, brauner,
paulmck, tytso, steven.price, cel, senozhatsky, yujie.liu, gregkh,
muchun.song, joel, christian.koenig
Cc: linux-kernel, linux-mm, dri-devel, linux-fsdevel, Qi Zheng,
Muchun Song
Currently, synchronize_shrinkers() is only used by the TTM pool. It only
requires that no shrinkers run in parallel.

After we use the refcount+RCU method to implement lockless slab shrink, we
can no longer use shrinker_rwsem or synchronize_rcu() to guarantee that all
shrinker invocations have seen an update before freeing memory.

So introduce a private pool_shrink_rwsem and reimplement
synchronize_shrinkers() on top of it, so as to achieve the same purpose.
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
drivers/gpu/drm/ttm/ttm_pool.c | 15 +++++++++++++++
include/linux/shrinker.h | 1 -
mm/shrinker.c | 15 ---------------
3 files changed, 15 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index cddb9151d20f..713b1c0a70e1 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -74,6 +74,7 @@ static struct ttm_pool_type global_dma32_uncached[MAX_ORDER + 1];
static spinlock_t shrinker_lock;
static struct list_head shrinker_list;
static struct shrinker mm_shrinker;
+static DECLARE_RWSEM(pool_shrink_rwsem);
/* Allocate pages of size 1 << order with the given gfp_flags */
static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
@@ -317,6 +318,7 @@ static unsigned int ttm_pool_shrink(void)
unsigned int num_pages;
struct page *p;
+ down_read(&pool_shrink_rwsem);
spin_lock(&shrinker_lock);
pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list);
list_move_tail(&pt->shrinker_list, &shrinker_list);
@@ -329,6 +331,7 @@ static unsigned int ttm_pool_shrink(void)
} else {
num_pages = 0;
}
+ up_read(&pool_shrink_rwsem);
return num_pages;
}
@@ -572,6 +575,18 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
}
EXPORT_SYMBOL(ttm_pool_init);
+/**
+ * synchronize_shrinkers - Wait for all running shrinkers to complete.
+ *
+ * This is useful to guarantee that all shrinker invocations have seen an
+ * update, before freeing memory, similar to rcu.
+ */
+static void synchronize_shrinkers(void)
+{
+ down_write(&pool_shrink_rwsem);
+ up_write(&pool_shrink_rwsem);
+}
+
/**
* ttm_pool_fini - Cleanup a pool
*
diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 8dc15aa37410..6b5843c3b827 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -103,7 +103,6 @@ extern int __printf(2, 3) register_shrinker(struct shrinker *shrinker,
const char *fmt, ...);
extern void unregister_shrinker(struct shrinker *shrinker);
extern void free_prealloced_shrinker(struct shrinker *shrinker);
-extern void synchronize_shrinkers(void);
#ifdef CONFIG_SHRINKER_DEBUG
extern int __printf(2, 3) shrinker_debugfs_rename(struct shrinker *shrinker,
diff --git a/mm/shrinker.c b/mm/shrinker.c
index 043c87ccfab4..a16cd448b924 100644
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -692,18 +692,3 @@ void unregister_shrinker(struct shrinker *shrinker)
shrinker->nr_deferred = NULL;
}
EXPORT_SYMBOL(unregister_shrinker);
-
-/**
- * synchronize_shrinkers - Wait for all running shrinkers to complete.
- *
- * This is equivalent to calling unregister_shrink() and register_shrinker(),
- * but atomically and with less overhead. This is useful to guarantee that all
- * shrinker invocations have seen an update, before freeing memory, similar to
- * rcu.
- */
-void synchronize_shrinkers(void)
-{
- down_write(&shrinker_rwsem);
- up_write(&shrinker_rwsem);
-}
-EXPORT_SYMBOL(synchronize_shrinkers);
--
2.30.2
* [PATCH 5/5] mm: shrinker: add a secondary array for shrinker_info::{map, nr_deferred}
2023-08-16 8:34 [PATCH 0/5] use refcount+RCU method to implement lockless slab shrink (part 1) Qi Zheng
` (3 preceding siblings ...)
2023-08-16 8:34 ` [PATCH 4/5] drm/ttm: introduce pool_shrink_rwsem Qi Zheng
@ 2023-08-16 8:34 ` Qi Zheng
4 siblings, 0 replies; 12+ messages in thread
From: Qi Zheng @ 2023-08-16 8:34 UTC (permalink / raw)
To: akpm, david, tkhai, vbabka, roman.gushchin, djwong, brauner,
paulmck, tytso, steven.price, cel, senozhatsky, yujie.liu, gregkh,
muchun.song, joel, christian.koenig
Cc: linux-kernel, linux-mm, dri-devel, linux-fsdevel, Qi Zheng,
Muchun Song
Currently, we maintain two linear arrays per node per memcg, namely
shrinker_info::map and shrinker_info::nr_deferred. We need to resize them
whenever shrinker_nr_max is exceeded: allocate a new array, copy the old
array into the new one, and finally free the old array by RCU.

For shrinker_info::map, we do set_bit() under the RCU lock, so we may set
a bit in an old map that is about to be freed, and that update would be
lost. The current solution is not to copy the old map when resizing, but
to set all the corresponding bits in the new map to 1. This solves the
data loss problem, but brings the overhead of more pointless loops while
doing memcg slab shrink.

For shrinker_info::nr_deferred, we only modify it under the read lock of
shrinker_rwsem, so it cannot run concurrently with the resizing. But after
we make memcg slab shrink lockless, it will have the same data loss
problem as shrinker_info::map, and we can't work around it the same way.

For such resizable arrays, the most straightforward idea is to switch to
an xarray, as we did for list_lru [1]. We would need to do xa_store() in
list_lru_add()-->set_shrinker_bit(), but that can allocate memory, and
list_lru_add() does not tolerate failure. A possible solution is to
pre-allocate, but there is no obvious place to do the pre-allocation
(consider the deferred_split_shrinker case, for example).
Therefore, this commit chooses to introduce the following secondary array
for shrinker_info::{map, nr_deferred}:
+---------------+--------+--------+-----+
| shrinker_info | unit 0 | unit 1 | ... | (secondary array)
+---------------+--------+--------+-----+
|
v
+---------------+-----+
| nr_deferred[] | map | (leaf array)
+---------------+-----+
(shrinker_info_unit)
The leaf array is never freed unless the memcg is destroyed. The secondary
array is resized every time the shrinker id exceeds shrinker_nr_max, so a
shrinker_info_unit can be indexed from both the old and the new
shrinker_info->unit[x]. Then even if we get the old secondary array under
the RCU lock, the shrinker_info_unit it points to is still valid, so the
updated nr_deferred and map will not be lost.
[1]. https://lore.kernel.org/all/20220228122126.37293-13-songmuchun@bytedance.com/
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
include/linux/memcontrol.h | 12 +-
include/linux/shrinker.h | 17 +++
mm/shrinker.c | 249 +++++++++++++++++++++++--------------
3 files changed, 171 insertions(+), 107 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 11810a2cfd2d..b49515bb6fbd 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -21,6 +21,7 @@
#include <linux/vmstat.h>
#include <linux/writeback.h>
#include <linux/page-flags.h>
+#include <linux/shrinker.h>
struct mem_cgroup;
struct obj_cgroup;
@@ -88,17 +89,6 @@ struct mem_cgroup_reclaim_iter {
unsigned int generation;
};
-/*
- * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
- * shrinkers, which have elements charged to this memcg.
- */
-struct shrinker_info {
- struct rcu_head rcu;
- atomic_long_t *nr_deferred;
- unsigned long *map;
- int map_nr_max;
-};
-
struct lruvec_stats_percpu {
/* Local (CPU and cgroup) state */
long state[NR_VM_NODE_STAT_ITEMS];
diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 6b5843c3b827..8a3c99422fd3 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -5,6 +5,23 @@
#include <linux/atomic.h>
#include <linux/types.h>
+#define SHRINKER_UNIT_BITS BITS_PER_LONG
+
+/*
+ * Bitmap and deferred work of shrinker::id corresponding to memcg-aware
+ * shrinkers, which have elements charged to the memcg.
+ */
+struct shrinker_info_unit {
+ atomic_long_t nr_deferred[SHRINKER_UNIT_BITS];
+ DECLARE_BITMAP(map, SHRINKER_UNIT_BITS);
+};
+
+struct shrinker_info {
+ struct rcu_head rcu;
+ int map_nr_max;
+ struct shrinker_info_unit *unit[];
+};
+
/*
* This struct is used to pass information from page reclaim to the shrinkers.
* We consolidate the values for easier extension later.
diff --git a/mm/shrinker.c b/mm/shrinker.c
index a16cd448b924..a7b5397a4fb9 100644
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -12,15 +12,50 @@ DECLARE_RWSEM(shrinker_rwsem);
#ifdef CONFIG_MEMCG
static int shrinker_nr_max;
-/* The shrinker_info is expanded in a batch of BITS_PER_LONG */
-static inline int shrinker_map_size(int nr_items)
+static inline int shrinker_unit_size(int nr_items)
{
- return (DIV_ROUND_UP(nr_items, BITS_PER_LONG) * sizeof(unsigned long));
+ return (DIV_ROUND_UP(nr_items, SHRINKER_UNIT_BITS) * sizeof(struct shrinker_info_unit *));
}
-static inline int shrinker_defer_size(int nr_items)
+static inline void shrinker_unit_free(struct shrinker_info *info, int start)
{
- return (round_up(nr_items, BITS_PER_LONG) * sizeof(atomic_long_t));
+ struct shrinker_info_unit **unit;
+ int nr, i;
+
+ if (!info)
+ return;
+
+ unit = info->unit;
+ nr = DIV_ROUND_UP(info->map_nr_max, SHRINKER_UNIT_BITS);
+
+ for (i = start; i < nr; i++) {
+ if (!unit[i])
+ break;
+
+ kfree(unit[i]);
+ unit[i] = NULL;
+ }
+}
+
+static inline int shrinker_unit_alloc(struct shrinker_info *new,
+ struct shrinker_info *old, int nid)
+{
+ struct shrinker_info_unit *unit;
+ int nr = DIV_ROUND_UP(new->map_nr_max, SHRINKER_UNIT_BITS);
+ int start = old ? DIV_ROUND_UP(old->map_nr_max, SHRINKER_UNIT_BITS) : 0;
+ int i;
+
+ for (i = start; i < nr; i++) {
+ unit = kzalloc_node(sizeof(*unit), GFP_KERNEL, nid);
+ if (!unit) {
+ shrinker_unit_free(new, start);
+ return -ENOMEM;
+ }
+
+ new->unit[i] = unit;
+ }
+
+ return 0;
}
void free_shrinker_info(struct mem_cgroup *memcg)
@@ -32,6 +67,7 @@ void free_shrinker_info(struct mem_cgroup *memcg)
for_each_node(nid) {
pn = memcg->nodeinfo[nid];
info = rcu_dereference_protected(pn->shrinker_info, true);
+ shrinker_unit_free(info, 0);
kvfree(info);
rcu_assign_pointer(pn->shrinker_info, NULL);
}
@@ -40,28 +76,27 @@ void free_shrinker_info(struct mem_cgroup *memcg)
int alloc_shrinker_info(struct mem_cgroup *memcg)
{
struct shrinker_info *info;
- int nid, size, ret = 0;
- int map_size, defer_size = 0;
+ int nid, ret = 0;
+ int array_size = 0;
down_write(&shrinker_rwsem);
- map_size = shrinker_map_size(shrinker_nr_max);
- defer_size = shrinker_defer_size(shrinker_nr_max);
- size = map_size + defer_size;
+ array_size = shrinker_unit_size(shrinker_nr_max);
for_each_node(nid) {
- info = kvzalloc_node(sizeof(*info) + size, GFP_KERNEL, nid);
- if (!info) {
- free_shrinker_info(memcg);
- ret = -ENOMEM;
- break;
- }
- info->nr_deferred = (atomic_long_t *)(info + 1);
- info->map = (void *)info->nr_deferred + defer_size;
+ info = kvzalloc_node(sizeof(*info) + array_size, GFP_KERNEL, nid);
+ if (!info)
+ goto err;
info->map_nr_max = shrinker_nr_max;
+ if (shrinker_unit_alloc(info, NULL, nid))
+ goto err;
rcu_assign_pointer(memcg->nodeinfo[nid]->shrinker_info, info);
}
up_write(&shrinker_rwsem);
return ret;
+
+err:
+ free_shrinker_info(memcg);
+ return -ENOMEM;
}
static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
@@ -71,15 +106,12 @@ static struct shrinker_info *shrinker_info_protected(struct mem_cgroup *memcg,
lockdep_is_held(&shrinker_rwsem));
}
-static int expand_one_shrinker_info(struct mem_cgroup *memcg,
- int map_size, int defer_size,
- int old_map_size, int old_defer_size,
- int new_nr_max)
+static int expand_one_shrinker_info(struct mem_cgroup *memcg, int new_size,
+ int old_size, int new_nr_max)
{
struct shrinker_info *new, *old;
struct mem_cgroup_per_node *pn;
int nid;
- int size = map_size + defer_size;
for_each_node(nid) {
pn = memcg->nodeinfo[nid];
@@ -92,21 +124,17 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
if (new_nr_max <= old->map_nr_max)
continue;
- new = kvmalloc_node(sizeof(*new) + size, GFP_KERNEL, nid);
+ new = kvmalloc_node(sizeof(*new) + new_size, GFP_KERNEL, nid);
if (!new)
return -ENOMEM;
- new->nr_deferred = (atomic_long_t *)(new + 1);
- new->map = (void *)new->nr_deferred + defer_size;
new->map_nr_max = new_nr_max;
- /* map: set all old bits, clear all new bits */
- memset(new->map, (int)0xff, old_map_size);
- memset((void *)new->map + old_map_size, 0, map_size - old_map_size);
- /* nr_deferred: copy old values, clear all new values */
- memcpy(new->nr_deferred, old->nr_deferred, old_defer_size);
- memset((void *)new->nr_deferred + old_defer_size, 0,
- defer_size - old_defer_size);
+ memcpy(new->unit, old->unit, old_size);
+ if (shrinker_unit_alloc(new, old, nid)) {
+ kvfree(new);
+ return -ENOMEM;
+ }
rcu_assign_pointer(pn->shrinker_info, new);
kvfree_rcu(old, rcu);
@@ -118,9 +146,8 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
static int expand_shrinker_info(int new_id)
{
int ret = 0;
- int new_nr_max = round_up(new_id + 1, BITS_PER_LONG);
- int map_size, defer_size = 0;
- int old_map_size, old_defer_size = 0;
+ int new_nr_max = round_up(new_id + 1, SHRINKER_UNIT_BITS);
+ int new_size, old_size = 0;
struct mem_cgroup *memcg;
if (!root_mem_cgroup)
@@ -128,15 +155,12 @@ static int expand_shrinker_info(int new_id)
lockdep_assert_held(&shrinker_rwsem);
- map_size = shrinker_map_size(new_nr_max);
- defer_size = shrinker_defer_size(new_nr_max);
- old_map_size = shrinker_map_size(shrinker_nr_max);
- old_defer_size = shrinker_defer_size(shrinker_nr_max);
+ new_size = shrinker_unit_size(new_nr_max);
+ old_size = shrinker_unit_size(shrinker_nr_max);
memcg = mem_cgroup_iter(NULL, NULL, NULL);
do {
- ret = expand_one_shrinker_info(memcg, map_size, defer_size,
- old_map_size, old_defer_size,
+ ret = expand_one_shrinker_info(memcg, new_size, old_size,
new_nr_max);
if (ret) {
mem_cgroup_iter_break(NULL, memcg);
@@ -150,17 +174,34 @@ static int expand_shrinker_info(int new_id)
return ret;
}
+static inline int shrinker_id_to_index(int shrinker_id)
+{
+ return shrinker_id / SHRINKER_UNIT_BITS;
+}
+
+static inline int shrinker_id_to_offset(int shrinker_id)
+{
+ return shrinker_id % SHRINKER_UNIT_BITS;
+}
+
+static inline int calc_shrinker_id(int index, int offset)
+{
+ return index * SHRINKER_UNIT_BITS + offset;
+}
+
void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
{
if (shrinker_id >= 0 && memcg && !mem_cgroup_is_root(memcg)) {
struct shrinker_info *info;
+ struct shrinker_info_unit *unit;
rcu_read_lock();
info = rcu_dereference(memcg->nodeinfo[nid]->shrinker_info);
+ unit = info->unit[shrinker_id_to_index(shrinker_id)];
if (!WARN_ON_ONCE(shrinker_id >= info->map_nr_max)) {
/* Pairs with smp mb in shrink_slab() */
smp_mb__before_atomic();
- set_bit(shrinker_id, info->map);
+ set_bit(shrinker_id_to_offset(shrinker_id), unit->map);
}
rcu_read_unlock();
}
@@ -209,26 +250,31 @@ static long xchg_nr_deferred_memcg(int nid, struct shrinker *shrinker,
struct mem_cgroup *memcg)
{
struct shrinker_info *info;
+ struct shrinker_info_unit *unit;
info = shrinker_info_protected(memcg, nid);
- return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0);
+ unit = info->unit[shrinker_id_to_index(shrinker->id)];
+ return atomic_long_xchg(&unit->nr_deferred[shrinker_id_to_offset(shrinker->id)], 0);
}
static long add_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
struct mem_cgroup *memcg)
{
struct shrinker_info *info;
+ struct shrinker_info_unit *unit;
info = shrinker_info_protected(memcg, nid);
- return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]);
+ unit = info->unit[shrinker_id_to_index(shrinker->id)];
+ return atomic_long_add_return(nr, &unit->nr_deferred[shrinker_id_to_offset(shrinker->id)]);
}
void reparent_shrinker_deferred(struct mem_cgroup *memcg)
{
- int i, nid;
+ int nid, index, offset;
long nr;
struct mem_cgroup *parent;
struct shrinker_info *child_info, *parent_info;
+ struct shrinker_info_unit *child_unit, *parent_unit;
parent = parent_mem_cgroup(memcg);
if (!parent)
@@ -239,9 +285,13 @@ void reparent_shrinker_deferred(struct mem_cgroup *memcg)
for_each_node(nid) {
child_info = shrinker_info_protected(memcg, nid);
parent_info = shrinker_info_protected(parent, nid);
- for (i = 0; i < child_info->map_nr_max; i++) {
- nr = atomic_long_read(&child_info->nr_deferred[i]);
- atomic_long_add(nr, &parent_info->nr_deferred[i]);
+ for (index = 0; index < shrinker_id_to_index(child_info->map_nr_max); index++) {
+ child_unit = child_info->unit[index];
+ parent_unit = parent_info->unit[index];
+ for (offset = 0; offset < SHRINKER_UNIT_BITS; offset++) {
+ nr = atomic_long_read(&child_unit->nr_deferred[offset]);
+ atomic_long_add(nr, &parent_unit->nr_deferred[offset]);
+ }
}
}
up_read(&shrinker_rwsem);
@@ -407,7 +457,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
{
struct shrinker_info *info;
unsigned long ret, freed = 0;
- int i;
+ int offset, index = 0;
if (!mem_cgroup_online(memcg))
return 0;
@@ -419,56 +469,63 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
if (unlikely(!info))
goto unlock;
- for_each_set_bit(i, info->map, info->map_nr_max) {
- struct shrink_control sc = {
- .gfp_mask = gfp_mask,
- .nid = nid,
- .memcg = memcg,
- };
- struct shrinker *shrinker;
+ for (; index < shrinker_id_to_index(info->map_nr_max); index++) {
+ struct shrinker_info_unit *unit;
- shrinker = idr_find(&shrinker_idr, i);
- if (unlikely(!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))) {
- if (!shrinker)
- clear_bit(i, info->map);
- continue;
- }
+ unit = info->unit[index];
- /* Call non-slab shrinkers even though kmem is disabled */
- if (!memcg_kmem_online() &&
- !(shrinker->flags & SHRINKER_NONSLAB))
- continue;
+ for_each_set_bit(offset, unit->map, SHRINKER_UNIT_BITS) {
+ struct shrink_control sc = {
+ .gfp_mask = gfp_mask,
+ .nid = nid,
+ .memcg = memcg,
+ };
+ struct shrinker *shrinker;
+ int shrinker_id = calc_shrinker_id(index, offset);
- ret = do_shrink_slab(&sc, shrinker, priority);
- if (ret == SHRINK_EMPTY) {
- clear_bit(i, info->map);
- /*
- * After the shrinker reported that it had no objects to
- * free, but before we cleared the corresponding bit in
- * the memcg shrinker map, a new object might have been
- * added. To make sure, we have the bit set in this
- * case, we invoke the shrinker one more time and reset
- * the bit if it reports that it is not empty anymore.
- * The memory barrier here pairs with the barrier in
- * set_shrinker_bit():
- *
- * list_lru_add() shrink_slab_memcg()
- * list_add_tail() clear_bit()
- * <MB> <MB>
- * set_bit() do_shrink_slab()
- */
- smp_mb__after_atomic();
- ret = do_shrink_slab(&sc, shrinker, priority);
- if (ret == SHRINK_EMPTY)
- ret = 0;
- else
- set_shrinker_bit(memcg, nid, i);
- }
- freed += ret;
+ shrinker = idr_find(&shrinker_idr, shrinker_id);
+ if (unlikely(!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))) {
+ if (!shrinker)
+ clear_bit(offset, unit->map);
+ continue;
+ }
- if (rwsem_is_contended(&shrinker_rwsem)) {
- freed = freed ? : 1;
- break;
+ /* Call non-slab shrinkers even though kmem is disabled */
+ if (!memcg_kmem_online() &&
+ !(shrinker->flags & SHRINKER_NONSLAB))
+ continue;
+
+ ret = do_shrink_slab(&sc, shrinker, priority);
+ if (ret == SHRINK_EMPTY) {
+ clear_bit(offset, unit->map);
+ /*
+ * After the shrinker reported that it had no objects to
+ * free, but before we cleared the corresponding bit in
+ * the memcg shrinker map, a new object might have been
+ * added. To make sure, we have the bit set in this
+ * case, we invoke the shrinker one more time and reset
+ * the bit if it reports that it is not empty anymore.
+ * The memory barrier here pairs with the barrier in
+ * set_shrinker_bit():
+ *
+ * list_lru_add() shrink_slab_memcg()
+ * list_add_tail() clear_bit()
+ * <MB> <MB>
+ * set_bit() do_shrink_slab()
+ */
+ smp_mb__after_atomic();
+ ret = do_shrink_slab(&sc, shrinker, priority);
+ if (ret == SHRINK_EMPTY)
+ ret = 0;
+ else
+ set_shrinker_bit(memcg, nid, shrinker_id);
+ }
+ freed += ret;
+
+ if (rwsem_is_contended(&shrinker_rwsem)) {
+ freed = freed ? : 1;
+ goto unlock;
+ }
}
}
unlock:
--
2.30.2
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH 4/5] drm/ttm: introduce pool_shrink_rwsem
2023-08-16 8:34 ` [PATCH 4/5] drm/ttm: introduce pool_shrink_rwsem Qi Zheng
@ 2023-08-16 9:14 ` Christian König
2023-08-16 9:20 ` Qi Zheng
0 siblings, 1 reply; 12+ messages in thread
From: Christian König @ 2023-08-16 9:14 UTC (permalink / raw)
To: Qi Zheng, akpm, david, tkhai, vbabka, roman.gushchin, djwong,
brauner, paulmck, tytso, steven.price, cel, senozhatsky,
yujie.liu, gregkh, muchun.song, joel
Cc: linux-kernel, linux-mm, dri-devel, linux-fsdevel, Muchun Song
Am 16.08.23 um 10:34 schrieb Qi Zheng:
> Currently, synchronize_shrinkers() is only used by the TTM pool. It only
> requires that no shrinkers run in parallel.
>
> After we use the RCU+refcount method to implement lockless slab shrink,
> we can no longer use shrinker_rwsem or synchronize_rcu() to guarantee
> that all shrinker invocations have seen an update before freeing memory.
>
> So we introduce a new pool_shrink_rwsem to implement a private
> synchronize_shrinkers() that achieves the same purpose.
>
> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
> ---
> drivers/gpu/drm/ttm/ttm_pool.c | 15 +++++++++++++++
> include/linux/shrinker.h | 1 -
> mm/shrinker.c | 15 ---------------
> 3 files changed, 15 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> index cddb9151d20f..713b1c0a70e1 100644
> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> @@ -74,6 +74,7 @@ static struct ttm_pool_type global_dma32_uncached[MAX_ORDER + 1];
> static spinlock_t shrinker_lock;
> static struct list_head shrinker_list;
> static struct shrinker mm_shrinker;
> +static DECLARE_RWSEM(pool_shrink_rwsem);
>
> /* Allocate pages of size 1 << order with the given gfp_flags */
> static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
> @@ -317,6 +318,7 @@ static unsigned int ttm_pool_shrink(void)
> unsigned int num_pages;
> struct page *p;
>
> + down_read(&pool_shrink_rwsem);
> spin_lock(&shrinker_lock);
> pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list);
> list_move_tail(&pt->shrinker_list, &shrinker_list);
> @@ -329,6 +331,7 @@ static unsigned int ttm_pool_shrink(void)
> } else {
> num_pages = 0;
> }
> + up_read(&pool_shrink_rwsem);
>
> return num_pages;
> }
> @@ -572,6 +575,18 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
> }
> EXPORT_SYMBOL(ttm_pool_init);
>
> +/**
> + * synchronize_shrinkers - Wait for all running shrinkers to complete.
> + *
> + * This is useful to guarantee that all shrinker invocations have seen an
> + * update, before freeing memory, similar to rcu.
> + */
> +static void synchronize_shrinkers(void)
Please rename that function to ttm_pool_synchronize_shrinkers().
With that done, feel free to add Reviewed-by: Christian König
<christian.koenig@amd.com>
Regards,
Christian.
> +{
> + down_write(&pool_shrink_rwsem);
> + up_write(&pool_shrink_rwsem);
> +}
> +
> /**
> * ttm_pool_fini - Cleanup a pool
> *
> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
> index 8dc15aa37410..6b5843c3b827 100644
> --- a/include/linux/shrinker.h
> +++ b/include/linux/shrinker.h
> @@ -103,7 +103,6 @@ extern int __printf(2, 3) register_shrinker(struct shrinker *shrinker,
> const char *fmt, ...);
> extern void unregister_shrinker(struct shrinker *shrinker);
> extern void free_prealloced_shrinker(struct shrinker *shrinker);
> -extern void synchronize_shrinkers(void);
>
> #ifdef CONFIG_SHRINKER_DEBUG
> extern int __printf(2, 3) shrinker_debugfs_rename(struct shrinker *shrinker,
> diff --git a/mm/shrinker.c b/mm/shrinker.c
> index 043c87ccfab4..a16cd448b924 100644
> --- a/mm/shrinker.c
> +++ b/mm/shrinker.c
> @@ -692,18 +692,3 @@ void unregister_shrinker(struct shrinker *shrinker)
> shrinker->nr_deferred = NULL;
> }
> EXPORT_SYMBOL(unregister_shrinker);
> -
> -/**
> - * synchronize_shrinkers - Wait for all running shrinkers to complete.
> - *
> - * This is equivalent to calling unregister_shrink() and register_shrinker(),
> - * but atomically and with less overhead. This is useful to guarantee that all
> - * shrinker invocations have seen an update, before freeing memory, similar to
> - * rcu.
> - */
> -void synchronize_shrinkers(void)
> -{
> - down_write(&shrinker_rwsem);
> - up_write(&shrinker_rwsem);
> -}
> -EXPORT_SYMBOL(synchronize_shrinkers);
* Re: [PATCH 4/5] drm/ttm: introduce pool_shrink_rwsem
2023-08-16 9:14 ` Christian König
@ 2023-08-16 9:20 ` Qi Zheng
0 siblings, 0 replies; 12+ messages in thread
From: Qi Zheng @ 2023-08-16 9:20 UTC (permalink / raw)
To: Christian König
Cc: linux-kernel, linux-mm, dri-devel, linux-fsdevel, Muchun Song,
akpm, david, tkhai, vbabka, roman.gushchin, djwong, brauner,
paulmck, tytso, steven.price, cel, senozhatsky, yujie.liu, gregkh,
muchun.song, joel
Hi Christian,
On 2023/8/16 17:14, Christian König wrote:
> Am 16.08.23 um 10:34 schrieb Qi Zheng:
>> Currently, synchronize_shrinkers() is only used by the TTM pool. It only
>> requires that no shrinkers run in parallel.
>>
>> After we use the RCU+refcount method to implement lockless slab shrink,
>> we can no longer use shrinker_rwsem or synchronize_rcu() to guarantee
>> that all shrinker invocations have seen an update before freeing memory.
>>
>> So we introduce a new pool_shrink_rwsem to implement a private
>> synchronize_shrinkers() that achieves the same purpose.
>>
>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>> Reviewed-by: Muchun Song <songmuchun@bytedance.com>
>> ---
>> drivers/gpu/drm/ttm/ttm_pool.c | 15 +++++++++++++++
>> include/linux/shrinker.h | 1 -
>> mm/shrinker.c | 15 ---------------
>> 3 files changed, 15 insertions(+), 16 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c
>> b/drivers/gpu/drm/ttm/ttm_pool.c
>> index cddb9151d20f..713b1c0a70e1 100644
>> --- a/drivers/gpu/drm/ttm/ttm_pool.c
>> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
>> @@ -74,6 +74,7 @@ static struct ttm_pool_type
>> global_dma32_uncached[MAX_ORDER + 1];
>> static spinlock_t shrinker_lock;
>> static struct list_head shrinker_list;
>> static struct shrinker mm_shrinker;
>> +static DECLARE_RWSEM(pool_shrink_rwsem);
>> /* Allocate pages of size 1 << order with the given gfp_flags */
>> static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t
>> gfp_flags,
>> @@ -317,6 +318,7 @@ static unsigned int ttm_pool_shrink(void)
>> unsigned int num_pages;
>> struct page *p;
>> + down_read(&pool_shrink_rwsem);
>> spin_lock(&shrinker_lock);
>> pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list);
>> list_move_tail(&pt->shrinker_list, &shrinker_list);
>> @@ -329,6 +331,7 @@ static unsigned int ttm_pool_shrink(void)
>> } else {
>> num_pages = 0;
>> }
>> + up_read(&pool_shrink_rwsem);
>> return num_pages;
>> }
>> @@ -572,6 +575,18 @@ void ttm_pool_init(struct ttm_pool *pool, struct
>> device *dev,
>> }
>> EXPORT_SYMBOL(ttm_pool_init);
>> +/**
>> + * synchronize_shrinkers - Wait for all running shrinkers to complete.
>> + *
>> + * This is useful to guarantee that all shrinker invocations have
>> seen an
>> + * update, before freeing memory, similar to rcu.
>> + */
>> +static void synchronize_shrinkers(void)
>
> Please rename that function to ttm_pool_synchronize_shrinkers().
OK, will do.
>
> With that done feel free to add Reviewed-by: Christian König
> <christian.koenig@amd.com>
>
Thanks,
Qi
> Regards,
> Christian.
>
>> +{
>> + down_write(&pool_shrink_rwsem);
>> + up_write(&pool_shrink_rwsem);
>> +}
>> +
>> /**
>> * ttm_pool_fini - Cleanup a pool
>> *
>> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
>> index 8dc15aa37410..6b5843c3b827 100644
>> --- a/include/linux/shrinker.h
>> +++ b/include/linux/shrinker.h
>> @@ -103,7 +103,6 @@ extern int __printf(2, 3) register_shrinker(struct
>> shrinker *shrinker,
>> const char *fmt, ...);
>> extern void unregister_shrinker(struct shrinker *shrinker);
>> extern void free_prealloced_shrinker(struct shrinker *shrinker);
>> -extern void synchronize_shrinkers(void);
>> #ifdef CONFIG_SHRINKER_DEBUG
>> extern int __printf(2, 3) shrinker_debugfs_rename(struct shrinker
>> *shrinker,
>> diff --git a/mm/shrinker.c b/mm/shrinker.c
>> index 043c87ccfab4..a16cd448b924 100644
>> --- a/mm/shrinker.c
>> +++ b/mm/shrinker.c
>> @@ -692,18 +692,3 @@ void unregister_shrinker(struct shrinker *shrinker)
>> shrinker->nr_deferred = NULL;
>> }
>> EXPORT_SYMBOL(unregister_shrinker);
>> -
>> -/**
>> - * synchronize_shrinkers - Wait for all running shrinkers to complete.
>> - *
>> - * This is equivalent to calling unregister_shrink() and
>> register_shrinker(),
>> - * but atomically and with less overhead. This is useful to guarantee
>> that all
>> - * shrinker invocations have seen an update, before freeing memory,
>> similar to
>> - * rcu.
>> - */
>> -void synchronize_shrinkers(void)
>> -{
>> - down_write(&shrinker_rwsem);
>> - up_write(&shrinker_rwsem);
>> -}
>> -EXPORT_SYMBOL(synchronize_shrinkers);
>
* Re: [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h
2023-08-16 8:34 ` [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h Qi Zheng
@ 2023-08-16 13:14 ` kernel test robot
2023-08-16 13:57 ` kernel test robot
2023-08-16 15:01 ` kernel test robot
2 siblings, 0 replies; 12+ messages in thread
From: kernel test robot @ 2023-08-16 13:14 UTC (permalink / raw)
To: Qi Zheng, akpm, david, tkhai, vbabka, roman.gushchin, djwong,
brauner, paulmck, tytso, steven.price, cel, senozhatsky,
yujie.liu, gregkh, muchun.song, joel, christian.koenig
Cc: llvm, oe-kbuild-all, linux-kernel, linux-mm, dri-devel,
linux-fsdevel, Qi Zheng, Muchun Song
Hi Qi,
kernel test robot noticed the following build warnings:
[auto build test WARNING on brauner-vfs/vfs.all]
[also build test WARNING on linus/master v6.5-rc6 next-20230816]
[cannot apply to akpm-mm/mm-everything drm-misc/drm-misc-next vfs-idmapping/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Qi-Zheng/mm-move-some-shrinker-related-function-declarations-to-mm-internal-h/20230816-163833
base: https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git vfs.all
patch link: https://lore.kernel.org/r/20230816083419.41088-2-zhengqi.arch%40bytedance.com
patch subject: [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h
config: riscv-randconfig-r015-20230816 (https://download.01.org/0day-ci/archive/20230816/202308162118.motJd6aG-lkp@intel.com/config)
compiler: clang version 16.0.4 (https://github.com/llvm/llvm-project.git ae42196bc493ffe877a7e3dff8be32035dea4d07)
reproduce: (https://download.01.org/0day-ci/archive/20230816/202308162118.motJd6aG-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202308162118.motJd6aG-lkp@intel.com/
All warnings (new ones prefixed by >>):
~~~~~~~~~~ ^
In file included from mm/shrinker_debug.c:7:
In file included from include/linux/memcontrol.h:13:
In file included from include/linux/cgroup.h:26:
In file included from include/linux/kernel_stat.h:9:
In file included from include/linux/interrupt.h:11:
In file included from include/linux/hardirq.h:11:
In file included from ./arch/riscv/include/generated/asm/hardirq.h:1:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/riscv/include/asm/io.h:136:
include/asm-generic/io.h:751:2: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
insw(addr, buffer, count);
^~~~~~~~~~~~~~~~~~~~~~~~~
arch/riscv/include/asm/io.h:105:53: note: expanded from macro 'insw'
#define insw(addr, buffer, count) __insw(PCI_IOBASE + (addr), buffer, count)
~~~~~~~~~~ ^
In file included from mm/shrinker_debug.c:7:
In file included from include/linux/memcontrol.h:13:
In file included from include/linux/cgroup.h:26:
In file included from include/linux/kernel_stat.h:9:
In file included from include/linux/interrupt.h:11:
In file included from include/linux/hardirq.h:11:
In file included from ./arch/riscv/include/generated/asm/hardirq.h:1:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/riscv/include/asm/io.h:136:
include/asm-generic/io.h:759:2: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
insl(addr, buffer, count);
^~~~~~~~~~~~~~~~~~~~~~~~~
arch/riscv/include/asm/io.h:106:53: note: expanded from macro 'insl'
#define insl(addr, buffer, count) __insl(PCI_IOBASE + (addr), buffer, count)
~~~~~~~~~~ ^
In file included from mm/shrinker_debug.c:7:
In file included from include/linux/memcontrol.h:13:
In file included from include/linux/cgroup.h:26:
In file included from include/linux/kernel_stat.h:9:
In file included from include/linux/interrupt.h:11:
In file included from include/linux/hardirq.h:11:
In file included from ./arch/riscv/include/generated/asm/hardirq.h:1:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/riscv/include/asm/io.h:136:
include/asm-generic/io.h:768:2: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
outsb(addr, buffer, count);
^~~~~~~~~~~~~~~~~~~~~~~~~~
arch/riscv/include/asm/io.h:118:55: note: expanded from macro 'outsb'
#define outsb(addr, buffer, count) __outsb(PCI_IOBASE + (addr), buffer, count)
~~~~~~~~~~ ^
In file included from mm/shrinker_debug.c:7:
In file included from include/linux/memcontrol.h:13:
In file included from include/linux/cgroup.h:26:
In file included from include/linux/kernel_stat.h:9:
In file included from include/linux/interrupt.h:11:
In file included from include/linux/hardirq.h:11:
In file included from ./arch/riscv/include/generated/asm/hardirq.h:1:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/riscv/include/asm/io.h:136:
include/asm-generic/io.h:777:2: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
outsw(addr, buffer, count);
^~~~~~~~~~~~~~~~~~~~~~~~~~
arch/riscv/include/asm/io.h:119:55: note: expanded from macro 'outsw'
#define outsw(addr, buffer, count) __outsw(PCI_IOBASE + (addr), buffer, count)
~~~~~~~~~~ ^
In file included from mm/shrinker_debug.c:7:
In file included from include/linux/memcontrol.h:13:
In file included from include/linux/cgroup.h:26:
In file included from include/linux/kernel_stat.h:9:
In file included from include/linux/interrupt.h:11:
In file included from include/linux/hardirq.h:11:
In file included from ./arch/riscv/include/generated/asm/hardirq.h:1:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/riscv/include/asm/io.h:136:
include/asm-generic/io.h:786:2: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
outsl(addr, buffer, count);
^~~~~~~~~~~~~~~~~~~~~~~~~~
arch/riscv/include/asm/io.h:120:55: note: expanded from macro 'outsl'
#define outsl(addr, buffer, count) __outsl(PCI_IOBASE + (addr), buffer, count)
~~~~~~~~~~ ^
In file included from mm/shrinker_debug.c:7:
In file included from include/linux/memcontrol.h:13:
In file included from include/linux/cgroup.h:26:
In file included from include/linux/kernel_stat.h:9:
In file included from include/linux/interrupt.h:11:
In file included from include/linux/hardirq.h:11:
In file included from ./arch/riscv/include/generated/asm/hardirq.h:1:
In file included from include/asm-generic/hardirq.h:17:
In file included from include/linux/irq.h:20:
In file included from include/linux/io.h:13:
In file included from arch/riscv/include/asm/io.h:136:
include/asm-generic/io.h:1134:55: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
return (port > MMIO_UPPER_LIMIT) ? NULL : PCI_IOBASE + port;
~~~~~~~~~~ ^
>> mm/shrinker_debug.c:174:5: warning: no previous prototype for function 'shrinker_debugfs_add' [-Wmissing-prototypes]
int shrinker_debugfs_add(struct shrinker *shrinker)
^
mm/shrinker_debug.c:174:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
int shrinker_debugfs_add(struct shrinker *shrinker)
^
static
>> mm/shrinker_debug.c:249:16: warning: no previous prototype for function 'shrinker_debugfs_detach' [-Wmissing-prototypes]
struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
^
mm/shrinker_debug.c:249:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
^
static
>> mm/shrinker_debug.c:265:6: warning: no previous prototype for function 'shrinker_debugfs_remove' [-Wmissing-prototypes]
void shrinker_debugfs_remove(struct dentry *debugfs_entry, int debugfs_id)
^
mm/shrinker_debug.c:265:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
void shrinker_debugfs_remove(struct dentry *debugfs_entry, int debugfs_id)
^
static
16 warnings generated.
vim +/shrinker_debugfs_add +174 mm/shrinker_debug.c
bbf535fd6f06b94 Roman Gushchin 2022-05-31 173
5035ebc644aec92 Roman Gushchin 2022-05-31 @174 int shrinker_debugfs_add(struct shrinker *shrinker)
5035ebc644aec92 Roman Gushchin 2022-05-31 175 {
5035ebc644aec92 Roman Gushchin 2022-05-31 176 struct dentry *entry;
e33c267ab70de42 Roman Gushchin 2022-05-31 177 char buf[128];
5035ebc644aec92 Roman Gushchin 2022-05-31 178 int id;
5035ebc644aec92 Roman Gushchin 2022-05-31 179
47a7c01c3efc658 Qi Zheng 2023-06-09 180 lockdep_assert_held(&shrinker_rwsem);
5035ebc644aec92 Roman Gushchin 2022-05-31 181
5035ebc644aec92 Roman Gushchin 2022-05-31 182 /* debugfs isn't initialized yet, add debugfs entries later. */
5035ebc644aec92 Roman Gushchin 2022-05-31 183 if (!shrinker_debugfs_root)
5035ebc644aec92 Roman Gushchin 2022-05-31 184 return 0;
5035ebc644aec92 Roman Gushchin 2022-05-31 185
5035ebc644aec92 Roman Gushchin 2022-05-31 186 id = ida_alloc(&shrinker_debugfs_ida, GFP_KERNEL);
5035ebc644aec92 Roman Gushchin 2022-05-31 187 if (id < 0)
5035ebc644aec92 Roman Gushchin 2022-05-31 188 return id;
5035ebc644aec92 Roman Gushchin 2022-05-31 189 shrinker->debugfs_id = id;
5035ebc644aec92 Roman Gushchin 2022-05-31 190
e33c267ab70de42 Roman Gushchin 2022-05-31 191 snprintf(buf, sizeof(buf), "%s-%d", shrinker->name, id);
5035ebc644aec92 Roman Gushchin 2022-05-31 192
5035ebc644aec92 Roman Gushchin 2022-05-31 193 /* create debugfs entry */
5035ebc644aec92 Roman Gushchin 2022-05-31 194 entry = debugfs_create_dir(buf, shrinker_debugfs_root);
5035ebc644aec92 Roman Gushchin 2022-05-31 195 if (IS_ERR(entry)) {
5035ebc644aec92 Roman Gushchin 2022-05-31 196 ida_free(&shrinker_debugfs_ida, id);
5035ebc644aec92 Roman Gushchin 2022-05-31 197 return PTR_ERR(entry);
5035ebc644aec92 Roman Gushchin 2022-05-31 198 }
5035ebc644aec92 Roman Gushchin 2022-05-31 199 shrinker->debugfs_entry = entry;
5035ebc644aec92 Roman Gushchin 2022-05-31 200
2124f79de6a9096 John Keeping 2023-04-18 201 debugfs_create_file("count", 0440, entry, shrinker,
5035ebc644aec92 Roman Gushchin 2022-05-31 202 &shrinker_debugfs_count_fops);
2124f79de6a9096 John Keeping 2023-04-18 203 debugfs_create_file("scan", 0220, entry, shrinker,
bbf535fd6f06b94 Roman Gushchin 2022-05-31 204 &shrinker_debugfs_scan_fops);
5035ebc644aec92 Roman Gushchin 2022-05-31 205 return 0;
5035ebc644aec92 Roman Gushchin 2022-05-31 206 }
5035ebc644aec92 Roman Gushchin 2022-05-31 207
e33c267ab70de42 Roman Gushchin 2022-05-31 208 int shrinker_debugfs_rename(struct shrinker *shrinker, const char *fmt, ...)
e33c267ab70de42 Roman Gushchin 2022-05-31 209 {
e33c267ab70de42 Roman Gushchin 2022-05-31 210 struct dentry *entry;
e33c267ab70de42 Roman Gushchin 2022-05-31 211 char buf[128];
e33c267ab70de42 Roman Gushchin 2022-05-31 212 const char *new, *old;
e33c267ab70de42 Roman Gushchin 2022-05-31 213 va_list ap;
e33c267ab70de42 Roman Gushchin 2022-05-31 214 int ret = 0;
e33c267ab70de42 Roman Gushchin 2022-05-31 215
e33c267ab70de42 Roman Gushchin 2022-05-31 216 va_start(ap, fmt);
e33c267ab70de42 Roman Gushchin 2022-05-31 217 new = kvasprintf_const(GFP_KERNEL, fmt, ap);
e33c267ab70de42 Roman Gushchin 2022-05-31 218 va_end(ap);
e33c267ab70de42 Roman Gushchin 2022-05-31 219
e33c267ab70de42 Roman Gushchin 2022-05-31 220 if (!new)
e33c267ab70de42 Roman Gushchin 2022-05-31 221 return -ENOMEM;
e33c267ab70de42 Roman Gushchin 2022-05-31 222
47a7c01c3efc658 Qi Zheng 2023-06-09 223 down_write(&shrinker_rwsem);
e33c267ab70de42 Roman Gushchin 2022-05-31 224
e33c267ab70de42 Roman Gushchin 2022-05-31 225 old = shrinker->name;
e33c267ab70de42 Roman Gushchin 2022-05-31 226 shrinker->name = new;
e33c267ab70de42 Roman Gushchin 2022-05-31 227
e33c267ab70de42 Roman Gushchin 2022-05-31 228 if (shrinker->debugfs_entry) {
e33c267ab70de42 Roman Gushchin 2022-05-31 229 snprintf(buf, sizeof(buf), "%s-%d", shrinker->name,
e33c267ab70de42 Roman Gushchin 2022-05-31 230 shrinker->debugfs_id);
e33c267ab70de42 Roman Gushchin 2022-05-31 231
e33c267ab70de42 Roman Gushchin 2022-05-31 232 entry = debugfs_rename(shrinker_debugfs_root,
e33c267ab70de42 Roman Gushchin 2022-05-31 233 shrinker->debugfs_entry,
e33c267ab70de42 Roman Gushchin 2022-05-31 234 shrinker_debugfs_root, buf);
e33c267ab70de42 Roman Gushchin 2022-05-31 235 if (IS_ERR(entry))
e33c267ab70de42 Roman Gushchin 2022-05-31 236 ret = PTR_ERR(entry);
e33c267ab70de42 Roman Gushchin 2022-05-31 237 else
e33c267ab70de42 Roman Gushchin 2022-05-31 238 shrinker->debugfs_entry = entry;
e33c267ab70de42 Roman Gushchin 2022-05-31 239 }
e33c267ab70de42 Roman Gushchin 2022-05-31 240
47a7c01c3efc658 Qi Zheng 2023-06-09 241 up_write(&shrinker_rwsem);
e33c267ab70de42 Roman Gushchin 2022-05-31 242
e33c267ab70de42 Roman Gushchin 2022-05-31 243 kfree_const(old);
e33c267ab70de42 Roman Gushchin 2022-05-31 244
e33c267ab70de42 Roman Gushchin 2022-05-31 245 return ret;
e33c267ab70de42 Roman Gushchin 2022-05-31 246 }
e33c267ab70de42 Roman Gushchin 2022-05-31 247 EXPORT_SYMBOL(shrinker_debugfs_rename);
e33c267ab70de42 Roman Gushchin 2022-05-31 248
26e239b37ebdfd1 Joan Bruguera Micó 2023-05-03 @249 struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
26e239b37ebdfd1 Joan Bruguera Micó 2023-05-03 250 int *debugfs_id)
5035ebc644aec92 Roman Gushchin 2022-05-31 251 {
badc28d4924bfed Qi Zheng 2023-02-02 252 struct dentry *entry = shrinker->debugfs_entry;
badc28d4924bfed Qi Zheng 2023-02-02 253
47a7c01c3efc658 Qi Zheng 2023-06-09 254 lockdep_assert_held(&shrinker_rwsem);
5035ebc644aec92 Roman Gushchin 2022-05-31 255
e33c267ab70de42 Roman Gushchin 2022-05-31 256 kfree_const(shrinker->name);
14773bfa70e67f4 Tetsuo Handa 2022-07-20 257 shrinker->name = NULL;
e33c267ab70de42 Roman Gushchin 2022-05-31 258
26e239b37ebdfd1 Joan Bruguera Micó 2023-05-03 259 *debugfs_id = entry ? shrinker->debugfs_id : -1;
badc28d4924bfed Qi Zheng 2023-02-02 260 shrinker->debugfs_entry = NULL;
badc28d4924bfed Qi Zheng 2023-02-02 261
badc28d4924bfed Qi Zheng 2023-02-02 262 return entry;
5035ebc644aec92 Roman Gushchin 2022-05-31 263 }
5035ebc644aec92 Roman Gushchin 2022-05-31 264
26e239b37ebdfd1 Joan Bruguera Micó 2023-05-03 @265 void shrinker_debugfs_remove(struct dentry *debugfs_entry, int debugfs_id)
26e239b37ebdfd1 Joan Bruguera Micó 2023-05-03 266 {
26e239b37ebdfd1 Joan Bruguera Micó 2023-05-03 267 debugfs_remove_recursive(debugfs_entry);
26e239b37ebdfd1 Joan Bruguera Micó 2023-05-03 268 ida_free(&shrinker_debugfs_ida, debugfs_id);
26e239b37ebdfd1 Joan Bruguera Micó 2023-05-03 269 }
26e239b37ebdfd1 Joan Bruguera Micó 2023-05-03 270
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h
2023-08-16 8:34 ` [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h Qi Zheng
2023-08-16 13:14 ` kernel test robot
@ 2023-08-16 13:57 ` kernel test robot
2023-08-16 15:01 ` kernel test robot
2 siblings, 0 replies; 12+ messages in thread
From: kernel test robot @ 2023-08-16 13:57 UTC (permalink / raw)
To: Qi Zheng, akpm, david, tkhai, vbabka, roman.gushchin, djwong,
brauner, paulmck, tytso, steven.price, cel, senozhatsky,
yujie.liu, gregkh, muchun.song, joel, christian.koenig
Cc: oe-kbuild-all, linux-kernel, linux-mm, dri-devel, linux-fsdevel,
Qi Zheng, Muchun Song
Hi Qi,
kernel test robot noticed the following build warnings:
[auto build test WARNING on brauner-vfs/vfs.all]
[also build test WARNING on linus/master v6.5-rc6 next-20230816]
[cannot apply to akpm-mm/mm-everything drm-misc/drm-misc-next vfs-idmapping/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Qi-Zheng/mm-move-some-shrinker-related-function-declarations-to-mm-internal-h/20230816-163833
base: https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git vfs.all
patch link: https://lore.kernel.org/r/20230816083419.41088-2-zhengqi.arch%40bytedance.com
patch subject: [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h
config: m68k-randconfig-r013-20230816 (https://download.01.org/0day-ci/archive/20230816/202308162105.y9XrlTA7-lkp@intel.com/config)
compiler: m68k-linux-gcc (GCC) 12.3.0
reproduce: (https://download.01.org/0day-ci/archive/20230816/202308162105.y9XrlTA7-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202308162105.y9XrlTA7-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> mm/shrinker_debug.c:174:5: warning: no previous prototype for 'shrinker_debugfs_add' [-Wmissing-prototypes]
174 | int shrinker_debugfs_add(struct shrinker *shrinker)
| ^~~~~~~~~~~~~~~~~~~~
>> mm/shrinker_debug.c:249:16: warning: no previous prototype for 'shrinker_debugfs_detach' [-Wmissing-prototypes]
249 | struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
| ^~~~~~~~~~~~~~~~~~~~~~~
>> mm/shrinker_debug.c:265:6: warning: no previous prototype for 'shrinker_debugfs_remove' [-Wmissing-prototypes]
265 | void shrinker_debugfs_remove(struct dentry *debugfs_entry, int debugfs_id)
| ^~~~~~~~~~~~~~~~~~~~~~~
vim +/shrinker_debugfs_add +174 mm/shrinker_debug.c
bbf535fd6f06b9 Roman Gushchin 2022-05-31 173
5035ebc644aec9 Roman Gushchin 2022-05-31 @174 int shrinker_debugfs_add(struct shrinker *shrinker)
5035ebc644aec9 Roman Gushchin 2022-05-31 175 {
5035ebc644aec9 Roman Gushchin 2022-05-31 176 struct dentry *entry;
e33c267ab70de4 Roman Gushchin 2022-05-31 177 char buf[128];
5035ebc644aec9 Roman Gushchin 2022-05-31 178 int id;
5035ebc644aec9 Roman Gushchin 2022-05-31 179
47a7c01c3efc65 Qi Zheng 2023-06-09 180 lockdep_assert_held(&shrinker_rwsem);
5035ebc644aec9 Roman Gushchin 2022-05-31 181
5035ebc644aec9 Roman Gushchin 2022-05-31 182 /* debugfs isn't initialized yet, add debugfs entries later. */
5035ebc644aec9 Roman Gushchin 2022-05-31 183 if (!shrinker_debugfs_root)
5035ebc644aec9 Roman Gushchin 2022-05-31 184 return 0;
5035ebc644aec9 Roman Gushchin 2022-05-31 185
5035ebc644aec9 Roman Gushchin 2022-05-31 186 id = ida_alloc(&shrinker_debugfs_ida, GFP_KERNEL);
5035ebc644aec9 Roman Gushchin 2022-05-31 187 if (id < 0)
5035ebc644aec9 Roman Gushchin 2022-05-31 188 return id;
5035ebc644aec9 Roman Gushchin 2022-05-31 189 shrinker->debugfs_id = id;
5035ebc644aec9 Roman Gushchin 2022-05-31 190
e33c267ab70de4 Roman Gushchin 2022-05-31 191 snprintf(buf, sizeof(buf), "%s-%d", shrinker->name, id);
5035ebc644aec9 Roman Gushchin 2022-05-31 192
5035ebc644aec9 Roman Gushchin 2022-05-31 193 /* create debugfs entry */
5035ebc644aec9 Roman Gushchin 2022-05-31 194 entry = debugfs_create_dir(buf, shrinker_debugfs_root);
5035ebc644aec9 Roman Gushchin 2022-05-31 195 if (IS_ERR(entry)) {
5035ebc644aec9 Roman Gushchin 2022-05-31 196 ida_free(&shrinker_debugfs_ida, id);
5035ebc644aec9 Roman Gushchin 2022-05-31 197 return PTR_ERR(entry);
5035ebc644aec9 Roman Gushchin 2022-05-31 198 }
5035ebc644aec9 Roman Gushchin 2022-05-31 199 shrinker->debugfs_entry = entry;
5035ebc644aec9 Roman Gushchin 2022-05-31 200
2124f79de6a909 John Keeping 2023-04-18 201 debugfs_create_file("count", 0440, entry, shrinker,
5035ebc644aec9 Roman Gushchin 2022-05-31 202 &shrinker_debugfs_count_fops);
2124f79de6a909 John Keeping 2023-04-18 203 debugfs_create_file("scan", 0220, entry, shrinker,
bbf535fd6f06b9 Roman Gushchin 2022-05-31 204 &shrinker_debugfs_scan_fops);
5035ebc644aec9 Roman Gushchin 2022-05-31 205 return 0;
5035ebc644aec9 Roman Gushchin 2022-05-31 206 }
5035ebc644aec9 Roman Gushchin 2022-05-31 207
e33c267ab70de4 Roman Gushchin 2022-05-31 208 int shrinker_debugfs_rename(struct shrinker *shrinker, const char *fmt, ...)
e33c267ab70de4 Roman Gushchin 2022-05-31 209 {
e33c267ab70de4 Roman Gushchin 2022-05-31 210 struct dentry *entry;
e33c267ab70de4 Roman Gushchin 2022-05-31 211 char buf[128];
e33c267ab70de4 Roman Gushchin 2022-05-31 212 const char *new, *old;
e33c267ab70de4 Roman Gushchin 2022-05-31 213 va_list ap;
e33c267ab70de4 Roman Gushchin 2022-05-31 214 int ret = 0;
e33c267ab70de4 Roman Gushchin 2022-05-31 215
e33c267ab70de4 Roman Gushchin 2022-05-31 216 va_start(ap, fmt);
e33c267ab70de4 Roman Gushchin 2022-05-31 217 new = kvasprintf_const(GFP_KERNEL, fmt, ap);
e33c267ab70de4 Roman Gushchin 2022-05-31 218 va_end(ap);
e33c267ab70de4 Roman Gushchin 2022-05-31 219
e33c267ab70de4 Roman Gushchin 2022-05-31 220 if (!new)
e33c267ab70de4 Roman Gushchin 2022-05-31 221 return -ENOMEM;
e33c267ab70de4 Roman Gushchin 2022-05-31 222
47a7c01c3efc65 Qi Zheng 2023-06-09 223 down_write(&shrinker_rwsem);
e33c267ab70de4 Roman Gushchin 2022-05-31 224
e33c267ab70de4 Roman Gushchin 2022-05-31 225 old = shrinker->name;
e33c267ab70de4 Roman Gushchin 2022-05-31 226 shrinker->name = new;
e33c267ab70de4 Roman Gushchin 2022-05-31 227
e33c267ab70de4 Roman Gushchin 2022-05-31 228 if (shrinker->debugfs_entry) {
e33c267ab70de4 Roman Gushchin 2022-05-31 229 snprintf(buf, sizeof(buf), "%s-%d", shrinker->name,
e33c267ab70de4 Roman Gushchin 2022-05-31 230 shrinker->debugfs_id);
e33c267ab70de4 Roman Gushchin 2022-05-31 231
e33c267ab70de4 Roman Gushchin 2022-05-31 232 entry = debugfs_rename(shrinker_debugfs_root,
e33c267ab70de4 Roman Gushchin 2022-05-31 233 shrinker->debugfs_entry,
e33c267ab70de4 Roman Gushchin 2022-05-31 234 shrinker_debugfs_root, buf);
e33c267ab70de4 Roman Gushchin 2022-05-31 235 if (IS_ERR(entry))
e33c267ab70de4 Roman Gushchin 2022-05-31 236 ret = PTR_ERR(entry);
e33c267ab70de4 Roman Gushchin 2022-05-31 237 else
e33c267ab70de4 Roman Gushchin 2022-05-31 238 shrinker->debugfs_entry = entry;
e33c267ab70de4 Roman Gushchin 2022-05-31 239 }
e33c267ab70de4 Roman Gushchin 2022-05-31 240
47a7c01c3efc65 Qi Zheng 2023-06-09 241 up_write(&shrinker_rwsem);
e33c267ab70de4 Roman Gushchin 2022-05-31 242
e33c267ab70de4 Roman Gushchin 2022-05-31 243 kfree_const(old);
e33c267ab70de4 Roman Gushchin 2022-05-31 244
e33c267ab70de4 Roman Gushchin 2022-05-31 245 return ret;
e33c267ab70de4 Roman Gushchin 2022-05-31 246 }
e33c267ab70de4 Roman Gushchin 2022-05-31 247 EXPORT_SYMBOL(shrinker_debugfs_rename);
e33c267ab70de4 Roman Gushchin 2022-05-31 248
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 @249 struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 250 int *debugfs_id)
5035ebc644aec9 Roman Gushchin 2022-05-31 251 {
badc28d4924bfe Qi Zheng 2023-02-02 252 struct dentry *entry = shrinker->debugfs_entry;
badc28d4924bfe Qi Zheng 2023-02-02 253
47a7c01c3efc65 Qi Zheng 2023-06-09 254 lockdep_assert_held(&shrinker_rwsem);
5035ebc644aec9 Roman Gushchin 2022-05-31 255
e33c267ab70de4 Roman Gushchin 2022-05-31 256 kfree_const(shrinker->name);
14773bfa70e67f Tetsuo Handa 2022-07-20 257 shrinker->name = NULL;
e33c267ab70de4 Roman Gushchin 2022-05-31 258
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 259 *debugfs_id = entry ? shrinker->debugfs_id : -1;
badc28d4924bfe Qi Zheng 2023-02-02 260 shrinker->debugfs_entry = NULL;
badc28d4924bfe Qi Zheng 2023-02-02 261
badc28d4924bfe Qi Zheng 2023-02-02 262 return entry;
5035ebc644aec9 Roman Gushchin 2022-05-31 263 }
5035ebc644aec9 Roman Gushchin 2022-05-31 264
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 @265 void shrinker_debugfs_remove(struct dentry *debugfs_entry, int debugfs_id)
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 266 {
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 267 debugfs_remove_recursive(debugfs_entry);
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 268 ida_free(&shrinker_debugfs_ida, debugfs_id);
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 269 }
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 270
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h
2023-08-16 8:34 ` [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h Qi Zheng
2023-08-16 13:14 ` kernel test robot
2023-08-16 13:57 ` kernel test robot
@ 2023-08-16 15:01 ` kernel test robot
2023-08-17 3:04 ` Qi Zheng
2 siblings, 1 reply; 12+ messages in thread
From: kernel test robot @ 2023-08-16 15:01 UTC (permalink / raw)
To: Qi Zheng, akpm, david, tkhai, vbabka, roman.gushchin, djwong,
brauner, paulmck, tytso, steven.price, cel, senozhatsky,
yujie.liu, gregkh, muchun.song, joel, christian.koenig
Cc: oe-kbuild-all, linux-kernel, linux-mm, dri-devel, linux-fsdevel,
Qi Zheng, Muchun Song
Hi Qi,
kernel test robot noticed the following build warnings:
[auto build test WARNING on brauner-vfs/vfs.all]
[also build test WARNING on linus/master v6.5-rc6 next-20230816]
[cannot apply to akpm-mm/mm-everything drm-misc/drm-misc-next vfs-idmapping/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Qi-Zheng/mm-move-some-shrinker-related-function-declarations-to-mm-internal-h/20230816-163833
base: https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git vfs.all
patch link: https://lore.kernel.org/r/20230816083419.41088-2-zhengqi.arch%40bytedance.com
patch subject: [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h
config: x86_64-buildonly-randconfig-r003-20230816 (https://download.01.org/0day-ci/archive/20230816/202308162208.cQBnGoER-lkp@intel.com/config)
compiler: gcc-7 (Ubuntu 7.5.0-6ubuntu2) 7.5.0
reproduce: (https://download.01.org/0day-ci/archive/20230816/202308162208.cQBnGoER-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202308162208.cQBnGoER-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> mm/shrinker_debug.c:174:5: warning: no previous declaration for 'shrinker_debugfs_add' [-Wmissing-declarations]
int shrinker_debugfs_add(struct shrinker *shrinker)
^~~~~~~~~~~~~~~~~~~~
>> mm/shrinker_debug.c:249:16: warning: no previous declaration for 'shrinker_debugfs_detach' [-Wmissing-declarations]
struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
^~~~~~~~~~~~~~~~~~~~~~~
>> mm/shrinker_debug.c:265:6: warning: no previous declaration for 'shrinker_debugfs_remove' [-Wmissing-declarations]
void shrinker_debugfs_remove(struct dentry *debugfs_entry, int debugfs_id)
^~~~~~~~~~~~~~~~~~~~~~~
vim +/shrinker_debugfs_add +174 mm/shrinker_debug.c
bbf535fd6f06b9 Roman Gushchin 2022-05-31 173
5035ebc644aec9 Roman Gushchin 2022-05-31 @174 int shrinker_debugfs_add(struct shrinker *shrinker)
5035ebc644aec9 Roman Gushchin 2022-05-31 175 {
5035ebc644aec9 Roman Gushchin 2022-05-31 176 struct dentry *entry;
e33c267ab70de4 Roman Gushchin 2022-05-31 177 char buf[128];
5035ebc644aec9 Roman Gushchin 2022-05-31 178 int id;
5035ebc644aec9 Roman Gushchin 2022-05-31 179
47a7c01c3efc65 Qi Zheng 2023-06-09 180 lockdep_assert_held(&shrinker_rwsem);
5035ebc644aec9 Roman Gushchin 2022-05-31 181
5035ebc644aec9 Roman Gushchin 2022-05-31 182 /* debugfs isn't initialized yet, add debugfs entries later. */
5035ebc644aec9 Roman Gushchin 2022-05-31 183 if (!shrinker_debugfs_root)
5035ebc644aec9 Roman Gushchin 2022-05-31 184 return 0;
5035ebc644aec9 Roman Gushchin 2022-05-31 185
5035ebc644aec9 Roman Gushchin 2022-05-31 186 id = ida_alloc(&shrinker_debugfs_ida, GFP_KERNEL);
5035ebc644aec9 Roman Gushchin 2022-05-31 187 if (id < 0)
5035ebc644aec9 Roman Gushchin 2022-05-31 188 return id;
5035ebc644aec9 Roman Gushchin 2022-05-31 189 shrinker->debugfs_id = id;
5035ebc644aec9 Roman Gushchin 2022-05-31 190
e33c267ab70de4 Roman Gushchin 2022-05-31 191 snprintf(buf, sizeof(buf), "%s-%d", shrinker->name, id);
5035ebc644aec9 Roman Gushchin 2022-05-31 192
5035ebc644aec9 Roman Gushchin 2022-05-31 193 /* create debugfs entry */
5035ebc644aec9 Roman Gushchin 2022-05-31 194 entry = debugfs_create_dir(buf, shrinker_debugfs_root);
5035ebc644aec9 Roman Gushchin 2022-05-31 195 if (IS_ERR(entry)) {
5035ebc644aec9 Roman Gushchin 2022-05-31 196 ida_free(&shrinker_debugfs_ida, id);
5035ebc644aec9 Roman Gushchin 2022-05-31 197 return PTR_ERR(entry);
5035ebc644aec9 Roman Gushchin 2022-05-31 198 }
5035ebc644aec9 Roman Gushchin 2022-05-31 199 shrinker->debugfs_entry = entry;
5035ebc644aec9 Roman Gushchin 2022-05-31 200
2124f79de6a909 John Keeping 2023-04-18 201 debugfs_create_file("count", 0440, entry, shrinker,
5035ebc644aec9 Roman Gushchin 2022-05-31 202 &shrinker_debugfs_count_fops);
2124f79de6a909 John Keeping 2023-04-18 203 debugfs_create_file("scan", 0220, entry, shrinker,
bbf535fd6f06b9 Roman Gushchin 2022-05-31 204 &shrinker_debugfs_scan_fops);
5035ebc644aec9 Roman Gushchin 2022-05-31 205 return 0;
5035ebc644aec9 Roman Gushchin 2022-05-31 206 }
5035ebc644aec9 Roman Gushchin 2022-05-31 207
e33c267ab70de4 Roman Gushchin 2022-05-31 208 int shrinker_debugfs_rename(struct shrinker *shrinker, const char *fmt, ...)
e33c267ab70de4 Roman Gushchin 2022-05-31 209 {
e33c267ab70de4 Roman Gushchin 2022-05-31 210 struct dentry *entry;
e33c267ab70de4 Roman Gushchin 2022-05-31 211 char buf[128];
e33c267ab70de4 Roman Gushchin 2022-05-31 212 const char *new, *old;
e33c267ab70de4 Roman Gushchin 2022-05-31 213 va_list ap;
e33c267ab70de4 Roman Gushchin 2022-05-31 214 int ret = 0;
e33c267ab70de4 Roman Gushchin 2022-05-31 215
e33c267ab70de4 Roman Gushchin 2022-05-31 216 va_start(ap, fmt);
e33c267ab70de4 Roman Gushchin 2022-05-31 217 new = kvasprintf_const(GFP_KERNEL, fmt, ap);
e33c267ab70de4 Roman Gushchin 2022-05-31 218 va_end(ap);
e33c267ab70de4 Roman Gushchin 2022-05-31 219
e33c267ab70de4 Roman Gushchin 2022-05-31 220 if (!new)
e33c267ab70de4 Roman Gushchin 2022-05-31 221 return -ENOMEM;
e33c267ab70de4 Roman Gushchin 2022-05-31 222
47a7c01c3efc65 Qi Zheng 2023-06-09 223 down_write(&shrinker_rwsem);
e33c267ab70de4 Roman Gushchin 2022-05-31 224
e33c267ab70de4 Roman Gushchin 2022-05-31 225 old = shrinker->name;
e33c267ab70de4 Roman Gushchin 2022-05-31 226 shrinker->name = new;
e33c267ab70de4 Roman Gushchin 2022-05-31 227
e33c267ab70de4 Roman Gushchin 2022-05-31 228 if (shrinker->debugfs_entry) {
e33c267ab70de4 Roman Gushchin 2022-05-31 229 snprintf(buf, sizeof(buf), "%s-%d", shrinker->name,
e33c267ab70de4 Roman Gushchin 2022-05-31 230 shrinker->debugfs_id);
e33c267ab70de4 Roman Gushchin 2022-05-31 231
e33c267ab70de4 Roman Gushchin 2022-05-31 232 entry = debugfs_rename(shrinker_debugfs_root,
e33c267ab70de4 Roman Gushchin 2022-05-31 233 shrinker->debugfs_entry,
e33c267ab70de4 Roman Gushchin 2022-05-31 234 shrinker_debugfs_root, buf);
e33c267ab70de4 Roman Gushchin 2022-05-31 235 if (IS_ERR(entry))
e33c267ab70de4 Roman Gushchin 2022-05-31 236 ret = PTR_ERR(entry);
e33c267ab70de4 Roman Gushchin 2022-05-31 237 else
e33c267ab70de4 Roman Gushchin 2022-05-31 238 shrinker->debugfs_entry = entry;
e33c267ab70de4 Roman Gushchin 2022-05-31 239 }
e33c267ab70de4 Roman Gushchin 2022-05-31 240
47a7c01c3efc65 Qi Zheng 2023-06-09 241 up_write(&shrinker_rwsem);
e33c267ab70de4 Roman Gushchin 2022-05-31 242
e33c267ab70de4 Roman Gushchin 2022-05-31 243 kfree_const(old);
e33c267ab70de4 Roman Gushchin 2022-05-31 244
e33c267ab70de4 Roman Gushchin 2022-05-31 245 return ret;
e33c267ab70de4 Roman Gushchin 2022-05-31 246 }
e33c267ab70de4 Roman Gushchin 2022-05-31 247 EXPORT_SYMBOL(shrinker_debugfs_rename);
e33c267ab70de4 Roman Gushchin 2022-05-31 248
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 @249 struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 250 int *debugfs_id)
5035ebc644aec9 Roman Gushchin 2022-05-31 251 {
badc28d4924bfe Qi Zheng 2023-02-02 252 struct dentry *entry = shrinker->debugfs_entry;
badc28d4924bfe Qi Zheng 2023-02-02 253
47a7c01c3efc65 Qi Zheng 2023-06-09 254 lockdep_assert_held(&shrinker_rwsem);
5035ebc644aec9 Roman Gushchin 2022-05-31 255
e33c267ab70de4 Roman Gushchin 2022-05-31 256 kfree_const(shrinker->name);
14773bfa70e67f Tetsuo Handa 2022-07-20 257 shrinker->name = NULL;
e33c267ab70de4 Roman Gushchin 2022-05-31 258
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 259 *debugfs_id = entry ? shrinker->debugfs_id : -1;
badc28d4924bfe Qi Zheng 2023-02-02 260 shrinker->debugfs_entry = NULL;
badc28d4924bfe Qi Zheng 2023-02-02 261
badc28d4924bfe Qi Zheng 2023-02-02 262 return entry;
5035ebc644aec9 Roman Gushchin 2022-05-31 263 }
5035ebc644aec9 Roman Gushchin 2022-05-31 264
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 @265 void shrinker_debugfs_remove(struct dentry *debugfs_entry, int debugfs_id)
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 266 {
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 267 debugfs_remove_recursive(debugfs_entry);
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 268 ida_free(&shrinker_debugfs_ida, debugfs_id);
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 269 }
26e239b37ebdfd Joan Bruguera Micó 2023-05-03 270
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h
2023-08-16 15:01 ` kernel test robot
@ 2023-08-17 3:04 ` Qi Zheng
0 siblings, 0 replies; 12+ messages in thread
From: Qi Zheng @ 2023-08-17 3:04 UTC (permalink / raw)
To: kernel test robot, akpm, david, tkhai, vbabka, roman.gushchin,
djwong, brauner, paulmck, tytso, steven.price, cel, senozhatsky,
yujie.liu, gregkh, muchun.song, joel, christian.koenig
Cc: oe-kbuild-all, linux-kernel, linux-mm, dri-devel, linux-fsdevel,
Muchun Song
On 2023/8/16 23:01, kernel test robot wrote:
> Hi Qi,
>
> kernel test robot noticed the following build warnings:
>
> [auto build test WARNING on brauner-vfs/vfs.all]
> [also build test WARNING on linus/master v6.5-rc6 next-20230816]
> [cannot apply to akpm-mm/mm-everything drm-misc/drm-misc-next vfs-idmapping/for-next]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url: https://github.com/intel-lab-lkp/linux/commits/Qi-Zheng/mm-move-some-shrinker-related-function-declarations-to-mm-internal-h/20230816-163833
> base: https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git vfs.all
> patch link: https://lore.kernel.org/r/20230816083419.41088-2-zhengqi.arch%40bytedance.com
> patch subject: [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h
> config: x86_64-buildonly-randconfig-r003-20230816 (https://download.01.org/0day-ci/archive/20230816/202308162208.cQBnGoER-lkp@intel.com/config)
> compiler: gcc-7 (Ubuntu 7.5.0-6ubuntu2) 7.5.0
> reproduce: (https://download.01.org/0day-ci/archive/20230816/202308162208.cQBnGoER-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202308162208.cQBnGoER-lkp@intel.com/
>
> All warnings (new ones prefixed by >>):
>
>>> mm/shrinker_debug.c:174:5: warning: no previous declaration for 'shrinker_debugfs_add' [-Wmissing-declarations]
> int shrinker_debugfs_add(struct shrinker *shrinker)
> ^~~~~~~~~~~~~~~~~~~~
>>> mm/shrinker_debug.c:249:16: warning: no previous declaration for 'shrinker_debugfs_detach' [-Wmissing-declarations]
> struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
> ^~~~~~~~~~~~~~~~~~~~~~~
>>> mm/shrinker_debug.c:265:6: warning: no previous declaration for 'shrinker_debugfs_remove' [-Wmissing-declarations]
> void shrinker_debugfs_remove(struct dentry *debugfs_entry, int debugfs_id)
> ^~~~~~~~~~~~~~~~~~~~~~~
Compiling with W=1 reports this warning; I will fix it by including
"internal.h" in mm/shrinker_debug.c.
Thanks,
Qi
> [...]
^ permalink raw reply [flat|nested] 12+ messages in thread
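The fix described in the reply amounts to a one-line include. A sketch as a diff (the context line is illustrative; only the added include reflects the described change):

```diff
--- a/mm/shrinker_debug.c
+++ b/mm/shrinker_debug.c
@@
 /* existing #include directives ... */
+#include "internal.h"	/* provides the prototypes flagged by the robot */
```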
end of thread, other threads: [~2023-08-17 3:05 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-16 8:34 [PATCH 0/5] use refcount+RCU method to implement lockless slab shrink (part 1) Qi Zheng
2023-08-16 8:34 ` [PATCH 1/5] mm: move some shrinker-related function declarations to mm/internal.h Qi Zheng
2023-08-16 13:14 ` kernel test robot
2023-08-16 13:57 ` kernel test robot
2023-08-16 15:01 ` kernel test robot
2023-08-17 3:04 ` Qi Zheng
2023-08-16 8:34 ` [PATCH 2/5] mm: vmscan: move shrinker-related code into a separate file Qi Zheng
2023-08-16 8:34 ` [PATCH 3/5] mm: shrinker: remove redundant shrinker_rwsem in debugfs operations Qi Zheng
2023-08-16 8:34 ` [PATCH 4/5] drm/ttm: introduce pool_shrink_rwsem Qi Zheng
2023-08-16 9:14 ` Christian König
2023-08-16 9:20 ` Qi Zheng
2023-08-16 8:34 ` [PATCH 5/5] mm: shrinker: add a secondary array for shrinker_info::{map, nr_deferred} Qi Zheng
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).