public inbox for linux-kernel@vger.kernel.org
* [PATCH v5] erofs: lazily initialize per-CPU workers and CPU hotplug hooks
@ 2025-05-01 18:30 Sandeep Dhavale
  2025-05-06  2:53 ` Chao Yu
  2025-05-06  3:30 ` Gao Xiang
  0 siblings, 2 replies; 4+ messages in thread
From: Sandeep Dhavale @ 2025-05-01 18:30 UTC (permalink / raw)
  To: linux-erofs, Gao Xiang, Chao Yu, Yue Hu, Jeffle Xu,
	Sandeep Dhavale
  Cc: hsiangkao, kernel-team, linux-kernel

Currently, when EROFS is built with per-CPU workers, the workers are
started and CPU hotplug hooks are registered during module initialization.
This leads to unnecessary worker start/stop cycles during CPU hotplug
events, particularly on Android devices that frequently suspend and resume.

This change defers the initialization of per-CPU workers and the
registration of CPU hotplug hooks until the first EROFS mount. This
ensures that these resources are only allocated and managed when EROFS is
actually in use.

The tear down of per-CPU workers and unregistration of CPU hotplug hooks
still occurs during z_erofs_exit_subsystem(), but only if they were
initialized.

Signed-off-by: Sandeep Dhavale <dhavale@google.com>
---
v4: https://lore.kernel.org/linux-erofs/20250423061023.131354-1-dhavale@google.com/
Changes since v4:
- remove redundant blank line as suggested by Gao
- add a log for failure path as suggested by Chao
- also add success log which will help in case there was a failure
  before, else stale failure log could cause unnecessary concern

 fs/erofs/zdata.c | 65 ++++++++++++++++++++++++++++++++++++------------
 1 file changed, 49 insertions(+), 16 deletions(-)

diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index 0671184d9cf1..a5d3aef319b2 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -291,6 +291,9 @@ static struct workqueue_struct *z_erofs_workqueue __read_mostly;
 
 #ifdef CONFIG_EROFS_FS_PCPU_KTHREAD
 static struct kthread_worker __rcu **z_erofs_pcpu_workers;
+static atomic_t erofs_percpu_workers_initialized = ATOMIC_INIT(0);
+static int erofs_cpu_hotplug_init(void);
+static void erofs_cpu_hotplug_destroy(void);
 
 static void erofs_destroy_percpu_workers(void)
 {
@@ -336,9 +339,45 @@ static int erofs_init_percpu_workers(void)
 	}
 	return 0;
 }
+
+static int z_erofs_init_pcpu_workers(void)
+{
+	int err;
+
+	if (atomic_xchg(&erofs_percpu_workers_initialized, 1))
+		return 0;
+
+	err = erofs_init_percpu_workers();
+	if (err) {
+		erofs_err(NULL, "per-cpu workers: failed to allocate.");
+		goto err_init_percpu_workers;
+	}
+
+	err = erofs_cpu_hotplug_init();
+	if (err < 0) {
+		erofs_err(NULL, "per-cpu workers: failed CPU hotplug init.");
+		goto err_cpuhp_init;
+	}
+	erofs_info(NULL, "initialized per-cpu workers successfully.");
+	return err;
+
+err_cpuhp_init:
+	erofs_destroy_percpu_workers();
+err_init_percpu_workers:
+	atomic_set(&erofs_percpu_workers_initialized, 0);
+	return err;
+}
+
+static void z_erofs_destroy_pcpu_workers(void)
+{
+	if (!atomic_xchg(&erofs_percpu_workers_initialized, 0))
+		return;
+	erofs_cpu_hotplug_destroy();
+	erofs_destroy_percpu_workers();
+}
 #else
-static inline void erofs_destroy_percpu_workers(void) {}
-static inline int erofs_init_percpu_workers(void) { return 0; }
+static inline int z_erofs_init_pcpu_workers(void) { return 0; }
+static inline void z_erofs_destroy_pcpu_workers(void) {}
 #endif
 
 #if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_EROFS_FS_PCPU_KTHREAD)
@@ -405,8 +444,7 @@ static inline void erofs_cpu_hotplug_destroy(void) {}
 
 void z_erofs_exit_subsystem(void)
 {
-	erofs_cpu_hotplug_destroy();
-	erofs_destroy_percpu_workers();
+	z_erofs_destroy_pcpu_workers();
 	destroy_workqueue(z_erofs_workqueue);
 	z_erofs_destroy_pcluster_pool();
 	z_erofs_exit_decompressor();
@@ -430,19 +468,8 @@ int __init z_erofs_init_subsystem(void)
 		goto err_workqueue_init;
 	}
 
-	err = erofs_init_percpu_workers();
-	if (err)
-		goto err_pcpu_worker;
-
-	err = erofs_cpu_hotplug_init();
-	if (err < 0)
-		goto err_cpuhp_init;
 	return err;
 
-err_cpuhp_init:
-	erofs_destroy_percpu_workers();
-err_pcpu_worker:
-	destroy_workqueue(z_erofs_workqueue);
 err_workqueue_init:
 	z_erofs_destroy_pcluster_pool();
 err_pcluster_pool:
@@ -644,8 +671,14 @@ static const struct address_space_operations z_erofs_cache_aops = {
 
 int z_erofs_init_super(struct super_block *sb)
 {
-	struct inode *const inode = new_inode(sb);
+	struct inode *inode;
+	int err;
+
+	err = z_erofs_init_pcpu_workers();
+	if (err)
+		return err;
 
+	inode = new_inode(sb);
 	if (!inode)
 		return -ENOMEM;
 	set_nlink(inode, 1);
-- 
2.49.0.967.g6a0df3ecc3-goog


^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [PATCH v5] erofs: lazily initialize per-CPU workers and CPU hotplug hooks
  2025-05-01 18:30 [PATCH v5] erofs: lazily initialize per-CPU workers and CPU hotplug hooks Sandeep Dhavale
@ 2025-05-06  2:53 ` Chao Yu
  2025-05-06  3:30 ` Gao Xiang
  1 sibling, 0 replies; 4+ messages in thread
From: Chao Yu @ 2025-05-06  2:53 UTC (permalink / raw)
  To: Sandeep Dhavale, linux-erofs, Gao Xiang, Yue Hu, Jeffle Xu
  Cc: chao, hsiangkao, kernel-team, linux-kernel

On 2025/5/2 2:30, Sandeep Dhavale wrote:
> Currently, when EROFS is built with per-CPU workers, the workers are
> started and CPU hotplug hooks are registered during module initialization.
> This leads to unnecessary worker start/stop cycles during CPU hotplug
> events, particularly on Android devices that frequently suspend and resume.
> 
> This change defers the initialization of per-CPU workers and the
> registration of CPU hotplug hooks until the first EROFS mount. This
> ensures that these resources are only allocated and managed when EROFS is
> actually in use.
> 
> The tear down of per-CPU workers and unregistration of CPU hotplug hooks
> still occurs during z_erofs_exit_subsystem(), but only if they were
> initialized.
> 
> Signed-off-by: Sandeep Dhavale <dhavale@google.com>

Reviewed-by: Chao Yu <chao@kernel.org>

Thanks,


* Re: [PATCH v5] erofs: lazily initialize per-CPU workers and CPU hotplug hooks
  2025-05-01 18:30 [PATCH v5] erofs: lazily initialize per-CPU workers and CPU hotplug hooks Sandeep Dhavale
  2025-05-06  2:53 ` Chao Yu
@ 2025-05-06  3:30 ` Gao Xiang
  2025-05-06 23:02   ` Sandeep Dhavale
  1 sibling, 1 reply; 4+ messages in thread
From: Gao Xiang @ 2025-05-06  3:30 UTC (permalink / raw)
  To: Sandeep Dhavale, linux-erofs, Gao Xiang, Chao Yu, Yue Hu,
	Jeffle Xu
  Cc: kernel-team, linux-kernel

Hi Sandeep,

On 2025/5/2 02:30, Sandeep Dhavale wrote:
> Currently, when EROFS is built with per-CPU workers, the workers are
> started and CPU hotplug hooks are registered during module initialization.
> This leads to unnecessary worker start/stop cycles during CPU hotplug
> events, particularly on Android devices that frequently suspend and resume.
> 
> This change defers the initialization of per-CPU workers and the
> registration of CPU hotplug hooks until the first EROFS mount. This
> ensures that these resources are only allocated and managed when EROFS is
> actually in use.
> 
> The tear down of per-CPU workers and unregistration of CPU hotplug hooks
> still occurs during z_erofs_exit_subsystem(), but only if they were
> initialized.
> 
> Signed-off-by: Sandeep Dhavale <dhavale@google.com>
> ---
> v4: https://lore.kernel.org/linux-erofs/20250423061023.131354-1-dhavale@google.com/
> Changes since v4:
> - remove redundant blank line as suggested by Gao
> - add a log for failure path as suggested by Chao
> - also add success log which will help in case there was a failure
>    before, else stale failure log could cause unnecessary concern
> 
>   fs/erofs/zdata.c | 65 ++++++++++++++++++++++++++++++++++++------------
>   1 file changed, 49 insertions(+), 16 deletions(-)
> 
> diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
> index 0671184d9cf1..a5d3aef319b2 100644
> --- a/fs/erofs/zdata.c
> +++ b/fs/erofs/zdata.c
> @@ -291,6 +291,9 @@ static struct workqueue_struct *z_erofs_workqueue __read_mostly;
>   
>   #ifdef CONFIG_EROFS_FS_PCPU_KTHREAD
>   static struct kthread_worker __rcu **z_erofs_pcpu_workers;
> +static atomic_t erofs_percpu_workers_initialized = ATOMIC_INIT(0);
> +static int erofs_cpu_hotplug_init(void);
> +static void erofs_cpu_hotplug_destroy(void);

We could move these downwards to avoid those forward declarations.

>   
>   static void erofs_destroy_percpu_workers(void)
>   {
> @@ -336,9 +339,45 @@ static int erofs_init_percpu_workers(void)
>   	}
>   	return 0;
>   }
> +
> +static int z_erofs_init_pcpu_workers(void)

How about passing in `struct super_block *` here?
Since print messages are introduced, it's much better to
know which instance caused the error/info.
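
What Gao is suggesting would look roughly like the sketch below, threading the superblock through so the messages identify the mount instance (illustrative only; the real change landed in v6, linked later in the thread):

```c
static int z_erofs_init_pcpu_workers(struct super_block *sb)
{
	int err;

	if (atomic_xchg(&erofs_percpu_workers_initialized, 1))
		return 0;

	err = erofs_init_percpu_workers();
	if (err) {
		/* sb, not NULL, so the device name appears in the log */
		erofs_err(sb, "per-cpu workers: failed to allocate.");
		goto err_init_percpu_workers;
	}
	/* ... rest unchanged, with sb passed to erofs_err()/erofs_info() ... */
}
```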

> +{
> +	int err;
> +
> +	if (atomic_xchg(&erofs_percpu_workers_initialized, 1))
> +		return 0;
> +
> +	err = erofs_init_percpu_workers();
> +	if (err) {
> +		erofs_err(NULL, "per-cpu workers: failed to allocate.");
> +		goto err_init_percpu_workers;
> +	}
> +
> +	err = erofs_cpu_hotplug_init();
> +	if (err < 0) {
> +		erofs_err(NULL, "per-cpu workers: failed CPU hotplug init.");
> +		goto err_cpuhp_init;
> +	}
> +	erofs_info(NULL, "initialized per-cpu workers successfully.");


Otherwise it looks good to me now.

Thanks,
Gao Xiang


* Re: [PATCH v5] erofs: lazily initialize per-CPU workers and CPU hotplug hooks
  2025-05-06  3:30 ` Gao Xiang
@ 2025-05-06 23:02   ` Sandeep Dhavale
  0 siblings, 0 replies; 4+ messages in thread
From: Sandeep Dhavale @ 2025-05-06 23:02 UTC (permalink / raw)
  To: Gao Xiang
  Cc: linux-erofs, Gao Xiang, Chao Yu, Yue Hu, Jeffle Xu, kernel-team,
	linux-kernel

Hi Gao,
> >   #ifdef CONFIG_EROFS_FS_PCPU_KTHREAD
> >   static struct kthread_worker __rcu **z_erofs_pcpu_workers;
> > +static atomic_t erofs_percpu_workers_initialized = ATOMIC_INIT(0);
> > +static int erofs_cpu_hotplug_init(void);
> > +static void erofs_cpu_hotplug_destroy(void);
>
> We could move downwards to avoid those forward declarations;
>
Sure, I ended up moving the CONFIG_HOTPLUG_CPU block inside
CONFIG_EROFS_FS_PCPU_KTHREAD. That gets rid of the forward declarations
and is also much more readable.
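
A rough sketch of the layout Sandeep describes (illustrative; the actual arrangement is in the v6 patch linked below):

```c
#ifdef CONFIG_EROFS_FS_PCPU_KTHREAD
/* per-CPU worker alloc/free helpers ... */

#ifdef CONFIG_HOTPLUG_CPU
/* real erofs_cpu_hotplug_init()/erofs_cpu_hotplug_destroy() here */
#else
static inline int erofs_cpu_hotplug_init(void) { return 0; }
static inline void erofs_cpu_hotplug_destroy(void) {}
#endif

/*
 * z_erofs_init_pcpu_workers()/z_erofs_destroy_pcpu_workers() come last,
 * after the hotplug helpers they call, so no forward declarations are
 * needed.
 */
#endif
```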
> >
> >   static void erofs_destroy_percpu_workers(void)
> >   {
> > @@ -336,9 +339,45 @@ static int erofs_init_percpu_workers(void)
> >       }
> >       return 0;
> >   }
> > +
> > +static int z_erofs_init_pcpu_workers(void)
>
> How about passing in `struct super_block *` here?
> Since print messages are introduced, it's much better to
> know which instance caused the error/info.
>
Sounds good. The log messages now look like this:

[    8.724634] erofs (device loop0): initialized per-cpu workers successfully.
[    8.726133] erofs (device loop0): mounted with root inode @ nid 40.

Thanks for the review.
v6 addressing this is available at:
https://lore.kernel.org/linux-erofs/20250506225743.308517-1-dhavale@google.com/

Thanks,
Sandeep.

> > +{
> > +     int err;
> > +
> > +     if (atomic_xchg(&erofs_percpu_workers_initialized, 1))
> > +             return 0;
> > +
> > +     err = erofs_init_percpu_workers();
> > +     if (err) {
> > +             erofs_err(NULL, "per-cpu workers: failed to allocate.");
> > +             goto err_init_percpu_workers;
> > +     }
> > +
> > +     err = erofs_cpu_hotplug_init();
> > +     if (err < 0) {
> > +             erofs_err(NULL, "per-cpu workers: failed CPU hotplug init.");
> > +             goto err_cpuhp_init;
> > +     }
> > +     erofs_info(NULL, "initialized per-cpu workers successfully.");
>
>
> Otherwise it looks good to me now.
>
> Thanks,
> Gao Xiang

