public inbox for linux-kernel@vger.kernel.org
* [PATCH] Batch unmount cleanup
@ 2017-10-16 18:52 Leon Yang
  2017-10-16 22:19 ` Al Viro
  2017-10-18  1:38 ` Eric W. Biederman
  0 siblings, 2 replies; 3+ messages in thread
From: Leon Yang @ 2017-10-16 18:52 UTC (permalink / raw)
  To: Alexander Viro, open list:FILESYSTEMS (VFS and infrastructure),
	open list
  Cc: Leon Yang

From: Leon Yang <leon.gh.yang@gmail.com>

Each time the unmounted list is cleaned up, synchronize_rcu() is
called, which is relatively costly. Scheduling the cleanup on a
workqueue, similar to what net/core/net_namespace.c:cleanup_net()
does, makes unmounting faster without adding much overhead. This is
especially useful on servers running many containers, where mounting
and unmounting happen frequently.

Signed-off-by: Leon Yang <leon.gh.yang@gmail.com>
---
 fs/namespace.c | 27 +++++++++++++++++++++++----
 1 file changed, 23 insertions(+), 4 deletions(-)

diff --git a/fs/namespace.c b/fs/namespace.c
index 3b601f1..864ce7e 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -68,6 +68,7 @@ static int mnt_group_start = 1;
 static struct hlist_head *mount_hashtable __read_mostly;
 static struct hlist_head *mountpoint_hashtable __read_mostly;
 static struct kmem_cache *mnt_cache __read_mostly;
+static struct workqueue_struct *unmounted_wq;
 static DECLARE_RWSEM(namespace_sem);
 
 /* /sys/fs */
@@ -1409,22 +1410,29 @@ EXPORT_SYMBOL(may_umount);
 
 static HLIST_HEAD(unmounted);	/* protected by namespace_sem */
 
-static void namespace_unlock(void)
+static void cleanup_unmounted(struct work_struct *work)
 {
 	struct hlist_head head;
+	down_write(&namespace_sem);
 
 	hlist_move_list(&unmounted, &head);
 
 	up_write(&namespace_sem);
 
-	if (likely(hlist_empty(&head)))
-		return;
-
 	synchronize_rcu();
 
 	group_pin_kill(&head);
 }
 
+static DECLARE_WORK(unmounted_cleanup_work, cleanup_unmounted);
+
+static void namespace_unlock(void)
+{
+	if (unlikely(!hlist_empty(&unmounted)))
+		queue_work(unmounted_wq, &unmounted_cleanup_work);
+	up_write(&namespace_sem);
+}
+
 static inline void namespace_lock(void)
 {
 	down_write(&namespace_sem);
@@ -3276,6 +3284,17 @@ void __init mnt_init(void)
 	init_mount_tree();
 }
 
+static int __init unmounted_wq_init(void)
+{
+	/* Create workqueue for cleanup */
+	unmounted_wq = create_singlethread_workqueue("unmounted");
+	if (!unmounted_wq)
+		panic("Could not create unmounted workqueue");
+	return 0;
+}
+
+pure_initcall(unmounted_wq_init);
+
 void put_mnt_ns(struct mnt_namespace *ns)
 {
 	if (!atomic_dec_and_test(&ns->count))
-- 
2.7.4
