From: Muchun Song <muchun.song@linux.dev>
To: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, akpm@linux-foundation.org,
	david@fromorbit.com, tkhai@ya.ru, vbabka@suse.cz,
	roman.gushchin@linux.dev, djwong@kernel.org, brauner@kernel.org,
	paulmck@kernel.org, tytso@mit.edu, steven.price@arm.com,
	cel@kernel.org, senozhatsky@chromium.org, yujie.liu@intel.com,
	gregkh@linuxfoundation.org
Subject: Re: [PATCH v6 01/45] mm: shrinker: add infrastructure for dynamically allocating shrinker
Date: Mon, 18 Sep 2023 17:03:31 +0800	[thread overview]
Message-ID: <4aff0e17-b40f-406d-65fd-72a2bacfcc1a@linux.dev> (raw)
In-Reply-To: <20230911094444.68966-2-zhengqi.arch@bytedance.com>



On 2023/9/11 17:44, Qi Zheng wrote:
> Currently, the shrinker instances can be divided into the following three
> types:
>
> a) global shrinker instance statically defined in the kernel, such as
>     workingset_shadow_shrinker.
>
> b) global shrinker instance statically defined in the kernel modules, such
>     as mmu_shrinker in x86.
>
> c) shrinker instance embedded in other structures.
>
> For case a, the memory of the shrinker instance is never freed. For case
> b, the memory of the shrinker instance will be freed after
> synchronize_rcu() when the module is unloaded. For case c, the memory of
> the shrinker instance will be freed along with the structure it is
> embedded in.
>
> In preparation for implementing lockless slab shrink, we need to
> dynamically allocate those shrinker instances in case c, so that their
> memory can then be freed independently by calling kfree_rcu().
>
> So this commit adds the following new APIs for dynamically allocating a
> shrinker, and adds a private_data field to struct shrinker to record and
> retrieve the original embedded structure.
>
> 1. shrinker_alloc()
>
> Used to allocate the shrinker instance itself and related memory; it
> returns a pointer to the shrinker instance on success and NULL on failure.
>
> 2. shrinker_register()
>
> Used to register the shrinker instance, which is the same as the current
> register_shrinker_prepared().
>
> 3. shrinker_free()
>
> Used to unregister (if needed) and free the shrinker instance.
>
> In order to simplify shrinker-related APIs and make shrinkers more
> independent of other kernel mechanisms, subsequent patches will use the
> above APIs to convert all shrinkers (including cases a and b) to be
> dynamically allocated, and then remove all the existing APIs.
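
Just to make the intended usage pattern concrete for other readers, a
conversion with the new APIs would look roughly like the sketch below
(my own illustration, not taken from this patch; the foo_* names and
callbacks are made up):

	struct foo_cache {
		struct list_head objects;
		struct shrinker *shrinker;
	};

	static int foo_cache_init(struct foo_cache *cache)
	{
		cache->shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE, "foo-cache");
		if (!cache->shrinker)
			return -ENOMEM;

		/* Hook up the (made-up) count/scan callbacks and backpointer. */
		cache->shrinker->count_objects = foo_cache_count;
		cache->shrinker->scan_objects = foo_cache_scan;
		cache->shrinker->private_data = cache;

		shrinker_register(cache->shrinker);
		return 0;
	}

	static void foo_cache_destroy(struct foo_cache *cache)
	{
		/* Unregisters (if registered) and frees the shrinker. */
		shrinker_free(cache->shrinker);
	}

The count/scan callbacks can then get back to the embedded structure
via shrinker->private_data.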
>
> This will also have another advantage mentioned by Dave Chinner:
>
> ```
> The other advantage of this is that it will break all the existing
> out of tree code and third party modules using the old API and will
> no longer work with a kernel using lockless slab shrinkers. They
> need to break (both at the source and binary levels) to stop bad
> things from happening due to using unconverted shrinkers in the new
> setup.
> ```
>
> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
> ---
>   include/linux/shrinker.h |   7 +++
>   mm/internal.h            |  11 +++++
>   mm/shrinker.c            | 102 +++++++++++++++++++++++++++++++++++++++
>   mm/shrinker_debug.c      |  17 ++++++-
>   4 files changed, 135 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
> index 6b5843c3b827..3f3fd9974ce5 100644
> --- a/include/linux/shrinker.h
> +++ b/include/linux/shrinker.h
> @@ -70,6 +70,8 @@ struct shrinker {
>   	int seeks;	/* seeks to recreate an obj */
>   	unsigned flags;
>   
> +	void *private_data;
> +
>   	/* These are for internal use */
>   	struct list_head list;
>   #ifdef CONFIG_MEMCG
> @@ -95,6 +97,11 @@ struct shrinker {
>    * non-MEMCG_AWARE shrinker should not have this flag set.
>    */
>   #define SHRINKER_NONSLAB	(1 << 3)
> +#define SHRINKER_ALLOCATED	(1 << 4)

It is better to add a comment here telling users that this flag is
only used internally by the shrinker code; users are not supposed to
pass this flag to the shrinker APIs.
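
Something like (wording is just a suggestion):

/*
 * Set internally by shrinker_alloc() to mark a dynamically allocated
 * shrinker; users must not pass this flag to the shrinker APIs.
 */
#define SHRINKER_ALLOCATED	(1 << 4)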

> +
> +struct shrinker *shrinker_alloc(unsigned int flags, const char *fmt, ...);
> +void shrinker_register(struct shrinker *shrinker);
> +void shrinker_free(struct shrinker *shrinker);
>   
>   extern int __printf(2, 3) prealloc_shrinker(struct shrinker *shrinker,
>   					    const char *fmt, ...);
> diff --git a/mm/internal.h b/mm/internal.h
> index 0471d6326d01..5587cae20ebf 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1161,6 +1161,9 @@ unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
>   
>   #ifdef CONFIG_SHRINKER_DEBUG
>   extern int shrinker_debugfs_add(struct shrinker *shrinker);
> +extern int shrinker_debugfs_name_alloc(struct shrinker *shrinker,
> +				       const char *fmt, va_list ap);
> +extern void shrinker_debugfs_name_free(struct shrinker *shrinker);
>   extern struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
>   					      int *debugfs_id);
>   extern void shrinker_debugfs_remove(struct dentry *debugfs_entry,
> @@ -1170,6 +1173,14 @@ static inline int shrinker_debugfs_add(struct shrinker *shrinker)
>   {
>   	return 0;
>   }
> +static inline int shrinker_debugfs_name_alloc(struct shrinker *shrinker,
> +					      const char *fmt, va_list ap)
> +{
> +	return 0;
> +}
> +static inline void shrinker_debugfs_name_free(struct shrinker *shrinker)
> +{
> +}
>   static inline struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
>   						     int *debugfs_id)
>   {
> diff --git a/mm/shrinker.c b/mm/shrinker.c
> index a16cd448b924..201211a67827 100644
> --- a/mm/shrinker.c
> +++ b/mm/shrinker.c
> @@ -550,6 +550,108 @@ unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
>   	return freed;
>   }
>   
> +struct shrinker *shrinker_alloc(unsigned int flags, const char *fmt, ...)
> +{
> +	struct shrinker *shrinker;
> +	unsigned int size;
> +	va_list ap;
> +	int err;
> +
> +	shrinker = kzalloc(sizeof(struct shrinker), GFP_KERNEL);
> +	if (!shrinker)
> +		return NULL;
> +
> +	va_start(ap, fmt);
> +	err = shrinker_debugfs_name_alloc(shrinker, fmt, ap);
> +	va_end(ap);
> +	if (err)
> +		goto err_name;
> +
> +	shrinker->flags = flags | SHRINKER_ALLOCATED;
> +	shrinker->seeks = DEFAULT_SEEKS;
> +
> +	if (flags & SHRINKER_MEMCG_AWARE) {
> +		err = prealloc_memcg_shrinker(shrinker);
> +		if (err == -ENOSYS)
> +			shrinker->flags &= ~SHRINKER_MEMCG_AWARE;
> +		else if (err == 0)
> +			goto done;
> +		else
> +			goto err_flags;

Actually, the code here was a little confusing to me when I first
looked at it. I think it could be improved a bit. Something like:

	if (flags & SHRINKER_MEMCG_AWARE) {
		err = prealloc_memcg_shrinker(shrinker);
		if (err == -ENOSYS) {
			/*
			 * Memcg is not supported, fall back to a
			 * non-memcg-aware shrinker.
			 */
			shrinker->flags &= ~SHRINKER_MEMCG_AWARE;
			goto non_memcg;
		}

		if (err)
			goto err_flags;

		return shrinker;
	}

non_memcg:
	[...]
	return shrinker;

This way the code becomes clearer (at least to me). We split the code
into two parts, one handling the memcg-aware case and the other the
non-memcg-aware case, and each side has an explicit "return" once it
succeeds. The current version is a little implicit in that it uses
"goto done" on the memcg-aware path.

The "non_memcg" label is also a good annotation, telling us that the
following code handles the non-memcg-aware case.

> +	}
> +
> +	/*
> +	 * The nr_deferred is available on per memcg level for memcg aware
> +	 * shrinkers, so only allocate nr_deferred in the following cases:
> +	 *  - non memcg aware shrinkers
> +	 *  - !CONFIG_MEMCG
> +	 *  - memcg is disabled by kernel command line
> +	 */
> +	size = sizeof(*shrinker->nr_deferred);
> +	if (flags & SHRINKER_NUMA_AWARE)
> +		size *= nr_node_ids;
> +
> +	shrinker->nr_deferred = kzalloc(size, GFP_KERNEL);
> +	if (!shrinker->nr_deferred)
> +		goto err_flags;
> +
> +done:
> +	return shrinker;
> +
> +err_flags:
> +	shrinker_debugfs_name_free(shrinker);
> +err_name:
> +	kfree(shrinker);
> +	return NULL;
> +}
> +EXPORT_SYMBOL_GPL(shrinker_alloc);
> +
> +void shrinker_register(struct shrinker *shrinker)
> +{
> +	if (unlikely(!(shrinker->flags & SHRINKER_ALLOCATED))) {
> +		pr_warn("Must use shrinker_alloc() to dynamically allocate the shrinker");
> +		return;
> +	}
> +
> +	down_write(&shrinker_rwsem);
> +	list_add_tail(&shrinker->list, &shrinker_list);
> +	shrinker->flags |= SHRINKER_REGISTERED;
> +	shrinker_debugfs_add(shrinker);
> +	up_write(&shrinker_rwsem);
> +}
> +EXPORT_SYMBOL_GPL(shrinker_register);
> +
> +void shrinker_free(struct shrinker *shrinker)
> +{
> +	struct dentry *debugfs_entry = NULL;
> +	int debugfs_id;
> +
> +	if (!shrinker)
> +		return;
> +
> +	down_write(&shrinker_rwsem);
> +	if (shrinker->flags & SHRINKER_REGISTERED) {
> +		list_del(&shrinker->list);
> +		debugfs_entry = shrinker_debugfs_detach(shrinker, &debugfs_id);
> +		shrinker->flags &= ~SHRINKER_REGISTERED;
> +	} else {
> +		shrinker_debugfs_name_free(shrinker);

We could remove the shrinker_debugfs_name_free() call from
shrinker_debugfs_detach(), and then call shrinker_debugfs_name_free()
here unconditionally; otherwise it looks a little weird to me. The
shrinker name is allocated in shrinker_alloc(), so I think it is
reasonable for shrinker_free() to be the one that frees it.
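
That is, something like the following (untested, just to show what I
mean):

	down_write(&shrinker_rwsem);
	if (shrinker->flags & SHRINKER_REGISTERED) {
		list_del(&shrinker->list);
		debugfs_entry = shrinker_debugfs_detach(shrinker, &debugfs_id);
		shrinker->flags &= ~SHRINKER_REGISTERED;
	}
	/* Free the name regardless of whether the shrinker was registered. */
	shrinker_debugfs_name_free(shrinker);

	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
		unregister_memcg_shrinker(shrinker);
	up_write(&shrinker_rwsem);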

Thanks.

> +	}
> +
> +	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
> +		unregister_memcg_shrinker(shrinker);
> +	up_write(&shrinker_rwsem);
> +
> +	if (debugfs_entry)
> +		shrinker_debugfs_remove(debugfs_entry, debugfs_id);
> +
> +	kfree(shrinker->nr_deferred);
> +	shrinker->nr_deferred = NULL;
> +
> +	kfree(shrinker);
> +}
> +EXPORT_SYMBOL_GPL(shrinker_free);
> +
>   /*
>    * Add a shrinker callback to be called from the vm.
>    */
> diff --git a/mm/shrinker_debug.c b/mm/shrinker_debug.c
> index e4ce509f619e..38452f539f40 100644
> --- a/mm/shrinker_debug.c
> +++ b/mm/shrinker_debug.c
> @@ -193,6 +193,20 @@ int shrinker_debugfs_add(struct shrinker *shrinker)
>   	return 0;
>   }
>   
> +int shrinker_debugfs_name_alloc(struct shrinker *shrinker, const char *fmt,
> +				va_list ap)
> +{
> +	shrinker->name = kvasprintf_const(GFP_KERNEL, fmt, ap);
> +
> +	return shrinker->name ? 0 : -ENOMEM;
> +}
> +
> +void shrinker_debugfs_name_free(struct shrinker *shrinker)
> +{
> +	kfree_const(shrinker->name);
> +	shrinker->name = NULL;
> +}

It is better to move both helpers to internal.h and mark them as
inline, since both are simple enough.
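
That is, roughly (just a sketch; both would go under the existing
CONFIG_SHRINKER_DEBUG #ifdef in internal.h):

	static inline int shrinker_debugfs_name_alloc(struct shrinker *shrinker,
						      const char *fmt, va_list ap)
	{
		shrinker->name = kvasprintf_const(GFP_KERNEL, fmt, ap);

		return shrinker->name ? 0 : -ENOMEM;
	}

	static inline void shrinker_debugfs_name_free(struct shrinker *shrinker)
	{
		kfree_const(shrinker->name);
		shrinker->name = NULL;
	}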

Thanks.

> +
>   int shrinker_debugfs_rename(struct shrinker *shrinker, const char *fmt, ...)
>   {
>   	struct dentry *entry;
> @@ -241,8 +255,7 @@ struct dentry *shrinker_debugfs_detach(struct shrinker *shrinker,
>   
>   	lockdep_assert_held(&shrinker_rwsem);
>   
> -	kfree_const(shrinker->name);
> -	shrinker->name = NULL;
> +	shrinker_debugfs_name_free(shrinker);
>   
>   	*debugfs_id = entry ? shrinker->debugfs_id : -1;
>   	shrinker->debugfs_entry = NULL;


Thread overview: 72+ messages
2023-09-11  9:43 [PATCH v6 00/45] use refcount+RCU method to implement lockless slab shrink Qi Zheng
2023-09-11  9:44 ` [PATCH v6 01/45] mm: shrinker: add infrastructure for dynamically allocating shrinker Qi Zheng
2023-09-18  9:03   ` Muchun Song [this message]
2023-09-18 12:06     ` Qi Zheng
2023-09-19  2:36       ` Muchun Song
2023-09-19  2:46   ` [PATCH] mm: shrinker: some cleanup Qi Zheng
2023-09-19  8:04     ` Greg KH
2023-09-19  8:41       ` Qi Zheng
2023-09-11  9:44 ` [PATCH v6 02/45] kvm: mmu: dynamically allocate the x86-mmu shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 03/45] binder: dynamically allocate the android-binder shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 04/45] drm/ttm: dynamically allocate the drm-ttm_pool shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 05/45] xenbus/backend: dynamically allocate the xen-backend shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 06/45] erofs: dynamically allocate the erofs-shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 07/45] f2fs: dynamically allocate the f2fs-shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 08/45] gfs2: dynamically allocate the gfs2-glock shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 09/45] gfs2: dynamically allocate the gfs2-qd shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 10/45] NFSv4.2: dynamically allocate the nfs-xattr shrinkers Qi Zheng
2023-09-11  9:44 ` [PATCH v6 11/45] nfs: dynamically allocate the nfs-acl shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 12/45] nfsd: dynamically allocate the nfsd-filecache shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 13/45] quota: dynamically allocate the dquota-cache shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 14/45] ubifs: dynamically allocate the ubifs-slab shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 15/45] rcu: dynamically allocate the rcu-lazy shrinker Qi Zheng
2023-09-18  7:27   ` Muchun Song
2023-09-11  9:44 ` [PATCH v6 16/45] rcu: dynamically allocate the rcu-kfree shrinker Qi Zheng
2023-09-12  7:30   ` Uladzislau Rezki
2023-09-11  9:44 ` [PATCH v6 17/45] mm: thp: dynamically allocate the thp-related shrinkers Qi Zheng
2023-09-11  9:44 ` [PATCH v6 18/45] sunrpc: dynamically allocate the sunrpc_cred shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 19/45] mm: workingset: dynamically allocate the mm-shadow shrinker Qi Zheng
2023-09-18  7:26   ` Muchun Song
2023-09-11  9:44 ` [PATCH v6 20/45] drm/i915: dynamically allocate the i915_gem_mm shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 21/45] drm/msm: dynamically allocate the drm-msm_gem shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 22/45] drm/panfrost: dynamically allocate the drm-panfrost shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 23/45] dm: dynamically allocate the dm-bufio shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 24/45] dm zoned: dynamically allocate the dm-zoned-meta shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 25/45] md/raid5: dynamically allocate the md-raid5 shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 26/45] bcache: dynamically allocate the md-bcache shrinker Qi Zheng
2023-09-18  7:24   ` Muchun Song
2023-09-11  9:44 ` [PATCH v6 27/45] vmw_balloon: dynamically allocate the vmw-balloon shrinker Qi Zheng
2023-09-11 18:40   ` Nadav Amit
2023-09-11  9:44 ` [PATCH v6 28/45] virtio_balloon: dynamically allocate the virtio-balloon shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 29/45] mbcache: dynamically allocate the mbcache shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 30/45] ext4: dynamically allocate the ext4-es shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 31/45] jbd2,ext4: dynamically allocate the jbd2-journal shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 32/45] nfsd: dynamically allocate the nfsd-client shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 33/45] nfsd: dynamically allocate the nfsd-reply shrinker Qi Zheng
2023-09-18  7:21   ` Muchun Song
2023-09-11  9:44 ` [PATCH v6 34/45] xfs: dynamically allocate the xfs-buf shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 35/45] xfs: dynamically allocate the xfs-inodegc shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 36/45] xfs: dynamically allocate the xfs-qm shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 37/45] zsmalloc: dynamically allocate the mm-zspool shrinker Qi Zheng
2023-09-11  9:44 ` [PATCH v6 38/45] fs: super: dynamically allocate the s_shrink Qi Zheng
2023-09-13 17:03   ` David Sterba
2023-09-11  9:44 ` [PATCH v6 39/45] mm: shrinker: remove old APIs Qi Zheng
2023-09-11  9:44 ` [PATCH v6 40/45] mm: shrinker: add a secondary array for shrinker_info::{map, nr_deferred} Qi Zheng
2023-09-28 14:15   ` [PATCH] fixup: " Qi Zheng
2023-09-28 14:17     ` Qi Zheng
2023-09-11  9:44 ` [PATCH v6 41/45] mm: shrinker: rename {prealloc|unregister}_memcg_shrinker() to shrinker_memcg_{alloc|remove}() Qi Zheng
2023-09-18  7:20   ` Muchun Song
2023-09-11  9:44 ` [PATCH v6 42/45] mm: shrinker: make global slab shrink lockless Qi Zheng
2023-12-06  7:47   ` Lai Jiangshan
2023-12-06  7:55     ` Qi Zheng
2023-12-06  8:10       ` Lai Jiangshan
2023-12-06  8:23       ` Lai Jiangshan
2023-12-06  8:37         ` Qi Zheng
2023-12-06  9:13         ` Dave Chinner
2023-09-11  9:44 ` [PATCH v6 43/45] mm: shrinker: make memcg " Qi Zheng
2023-09-11  9:44 ` [PATCH v6 44/45] mm: shrinker: hold write lock to reparent shrinker nr_deferred Qi Zheng
2023-09-18  7:17   ` Muchun Song
2023-09-11  9:44 ` [PATCH v6 45/45] mm: shrinker: convert shrinker_rwsem to mutex Qi Zheng
2023-09-18  7:16   ` Muchun Song
2023-11-26 14:27 ` [PATCH v6 00/45] use refcount+RCU method to implement lockless slab shrink Ryan Lahfa
2023-11-27 13:53   ` Greg KH
