From: Sandeep Dhavale <dhavale@google.com>
To: linux-erofs@lists.ozlabs.org, Gao Xiang <xiang@kernel.org>,
Chao Yu <chao@kernel.org>, Yue Hu <zbestahu@gmail.com>,
Jeffle Xu <jefflexu@linux.alibaba.com>,
Sandeep Dhavale <dhavale@google.com>
Cc: hsiangkao@linux.alibaba.com, kernel-team@android.com,
linux-kernel@vger.kernel.org
Subject: [PATCH v5] erofs: lazily initialize per-CPU workers and CPU hotplug hooks
Date: Thu, 1 May 2025 11:30:02 -0700
Message-ID: <20250501183003.1125531-1-dhavale@google.com>

Currently, when EROFS is built with per-CPU workers, the workers are
started and CPU hotplug hooks are registered during module initialization.
This leads to unnecessary worker start/stop cycles during CPU hotplug
events, particularly on Android devices that frequently suspend and resume.

This change defers the initialization of per-CPU workers and the
registration of CPU hotplug hooks until the first EROFS mount. This
ensures that these resources are only allocated and managed when EROFS is
actually in use.

The teardown of per-CPU workers and unregistration of CPU hotplug hooks
still occurs during z_erofs_exit_subsystem(), but only if they were
initialized.

Signed-off-by: Sandeep Dhavale <dhavale@google.com>
---
v4: https://lore.kernel.org/linux-erofs/20250423061023.131354-1-dhavale@google.com/
Changes since v4:
- remove redundant blank line as suggested by Gao
- add a log for the failure path as suggested by Chao
- also add a success log, which helps when a previous attempt failed;
  otherwise a stale failure log could cause unnecessary concern
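
For reviewers who want to poke at the once-only guard outside the
kernel, here is a minimal userspace sketch of the same xchg-based
pattern; lazy_init()/lazy_teardown() and the worker helpers are
hypothetical stand-ins (C11 atomics in place of the kernel's atomic_t),
not code from this patch:

#include <stdatomic.h>
#include <stdio.h>

static atomic_int initialized;	/* 0 = torn down, 1 = initialized */

static int start_workers(void)	/* stand-in for erofs_init_percpu_workers() */
{
	puts("workers started");
	return 0;
}

static void stop_workers(void)	/* stand-in for erofs_destroy_percpu_workers() */
{
	puts("workers stopped");
}

static int lazy_init(void)
{
	int err;

	/* first caller flips 0 -> 1 and does the work; later callers see 1 */
	if (atomic_exchange(&initialized, 1))
		return 0;

	err = start_workers();
	if (err)
		atomic_store(&initialized, 0);	/* failed: allow a retry */
	return err;
}

static void lazy_teardown(void)
{
	/* tear down only if init actually ran (1 -> 0) */
	if (!atomic_exchange(&initialized, 0))
		return;
	stop_workers();
}

int main(void)
{
	lazy_init();		/* first mount: does the work */
	lazy_init();		/* later mounts: no-op */
	lazy_teardown();	/* module exit: tears down once */
	return 0;
}

As in the patch itself, a second caller that loses the exchange returns
immediately, possibly before the winner finishes initializing; the
sketch ignores that window for brevity.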
 fs/erofs/zdata.c | 65 ++++++++++++++++++++++++++++++++++++------------
 1 file changed, 49 insertions(+), 16 deletions(-)

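One note on the error convention visible in the hunks below: with the
dynamic CPU hotplug API, cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
...) returns a positive dynamically allocated state number on success,
so only a negative return indicates failure; that is why
erofs_cpu_hotplug_init() is checked with err < 0 rather than err != 0.
A sketch of the usual shape of such a registration (callback names and
the saved-state variable are illustrative, not copied from
fs/erofs/zdata.c):

#include <linux/cpuhotplug.h>

static enum cpuhp_state my_hotplug_state;

static int my_cpu_online(unsigned int cpu)
{
	/* bring up the per-CPU worker for @cpu */
	return 0;
}

static int my_cpu_offline(unsigned int cpu)
{
	/* park or destroy the worker for @cpu */
	return 0;
}

static int my_hotplug_init(void)
{
	int state;

	/* CPUHP_AP_ONLINE_DYN: success is a positive state id, not 0 */
	state = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
					  "fs/erofs:example",
					  my_cpu_online, my_cpu_offline);
	if (state < 0)
		return state;
	my_hotplug_state = state;
	return 0;
}

static void my_hotplug_destroy(void)
{
	cpuhp_remove_state_nocalls(my_hotplug_state);
}
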
diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index 0671184d9cf1..a5d3aef319b2 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -291,6 +291,9 @@ static struct workqueue_struct *z_erofs_workqueue __read_mostly;
 
 #ifdef CONFIG_EROFS_FS_PCPU_KTHREAD
 static struct kthread_worker __rcu **z_erofs_pcpu_workers;
+static atomic_t erofs_percpu_workers_initialized = ATOMIC_INIT(0);
+static int erofs_cpu_hotplug_init(void);
+static void erofs_cpu_hotplug_destroy(void);
 
 static void erofs_destroy_percpu_workers(void)
 {
@@ -336,9 +339,45 @@ static int erofs_init_percpu_workers(void)
 	}
 	return 0;
 }
+
+static int z_erofs_init_pcpu_workers(void)
+{
+	int err;
+
+	if (atomic_xchg(&erofs_percpu_workers_initialized, 1))
+		return 0;
+
+	err = erofs_init_percpu_workers();
+	if (err) {
+		erofs_err(NULL, "per-cpu workers: failed to allocate.");
+		goto err_init_percpu_workers;
+	}
+
+	err = erofs_cpu_hotplug_init();
+	if (err < 0) {
+		erofs_err(NULL, "per-cpu workers: failed CPU hotplug init.");
+		goto err_cpuhp_init;
+	}
+	erofs_info(NULL, "initialized per-cpu workers successfully.");
+	return err;
+
+err_cpuhp_init:
+	erofs_destroy_percpu_workers();
+err_init_percpu_workers:
+	atomic_set(&erofs_percpu_workers_initialized, 0);
+	return err;
+}
+
+static void z_erofs_destroy_pcpu_workers(void)
+{
+	if (!atomic_xchg(&erofs_percpu_workers_initialized, 0))
+		return;
+	erofs_cpu_hotplug_destroy();
+	erofs_destroy_percpu_workers();
+}
 #else
-static inline void erofs_destroy_percpu_workers(void) {}
-static inline int erofs_init_percpu_workers(void) { return 0; }
+static inline int z_erofs_init_pcpu_workers(void) { return 0; }
+static inline void z_erofs_destroy_pcpu_workers(void) {}
 #endif
 
 #if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_EROFS_FS_PCPU_KTHREAD)
@@ -405,8 +444,7 @@ static inline void erofs_cpu_hotplug_destroy(void) {}
 
 void z_erofs_exit_subsystem(void)
 {
-	erofs_cpu_hotplug_destroy();
-	erofs_destroy_percpu_workers();
+	z_erofs_destroy_pcpu_workers();
 	destroy_workqueue(z_erofs_workqueue);
 	z_erofs_destroy_pcluster_pool();
 	z_erofs_exit_decompressor();
@@ -430,19 +468,8 @@ int __init z_erofs_init_subsystem(void)
 		goto err_workqueue_init;
 	}
 
-	err = erofs_init_percpu_workers();
-	if (err)
-		goto err_pcpu_worker;
-
-	err = erofs_cpu_hotplug_init();
-	if (err < 0)
-		goto err_cpuhp_init;
 	return err;
 
-err_cpuhp_init:
-	erofs_destroy_percpu_workers();
-err_pcpu_worker:
-	destroy_workqueue(z_erofs_workqueue);
 err_workqueue_init:
 	z_erofs_destroy_pcluster_pool();
 err_pcluster_pool:
@@ -644,8 +671,14 @@ static const struct address_space_operations z_erofs_cache_aops = {
 
 int z_erofs_init_super(struct super_block *sb)
 {
-	struct inode *const inode = new_inode(sb);
+	struct inode *inode;
+	int err;
+
+	err = z_erofs_init_pcpu_workers();
+	if (err)
+		return err;
 
+	inode = new_inode(sb);
 	if (!inode)
 		return -ENOMEM;
 	set_nlink(inode, 1);
--
2.49.0.967.g6a0df3ecc3-goog