public inbox for cgroups@vger.kernel.org
From: Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
To: Shawn Bohrer <shawn.bohrer-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Cc: Michal Hocko <mhocko-AlSwsSmVLrQ@public.gmane.org>,
	Li Zefan <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>,
	cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	Hugh Dickins <hughd-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
	Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>,
	Markus Blank-Burian
	<burian-iYtK5bfT9M8b1SvskN2V4Q@public.gmane.org>
Subject: Re: 3.10.16 cgroup_mutex deadlock
Date: Fri, 15 Nov 2013 16:54:01 +0900
Message-ID: <20131115075401.GB9755@mtj.dyndns.org>
In-Reply-To: <20131115062458.GA9755-9pTldWuhBndy/B6EtB590w@public.gmane.org>

Hello,

Shawn, Hugh, can you please verify whether the attached patch makes
the deadlock go away?

Thanks.
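
For context, the failure mode is a bounded workqueue whose active slots are all held by work items that are themselves waiting on other queued work. A minimal, hypothetical module sketch of that situation (all names invented; @max_active is shrunk to 1 so the hang is immediate, whereas the real case involves system_wq's larger but still finite limit):

#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *narrow_wq;	/* max_active == 1 */
static struct work_struct work_a, work_b;

static void fn_b(struct work_struct *work)
{
	/* Never runs: work_a holds the queue's only active slot. */
}

static void fn_a(struct work_struct *work)
{
	/*
	 * Waits for work_b while occupying the single slot; work_b can
	 * never start on the saturated queue, so this never returns.
	 */
	flush_work(&work_b);
}

static int __init narrow_demo_init(void)
{
	narrow_wq = alloc_workqueue("narrow_demo", 0, 1);
	if (!narrow_wq)
		return -ENOMEM;

	INIT_WORK(&work_a, fn_a);
	INIT_WORK(&work_b, fn_b);
	queue_work(narrow_wq, &work_a);
	queue_work(narrow_wq, &work_b);
	return 0;	/* module loads fine, but narrow_wq is now wedged */
}
module_init(narrow_demo_init);
MODULE_LICENSE("GPL");

Unloading such a module would hang as well, since destroy_workqueue() flushes the wedged queue before tearing it down.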

diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index e0839bc..dc9dc06 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -90,6 +90,14 @@ static DEFINE_MUTEX(cgroup_mutex);
 static DEFINE_MUTEX(cgroup_root_mutex);
 
 /*
+ * cgroup destruction makes heavy use of work items and there can be a lot
+ * of concurrent destructions.  Use a separate workqueue so that cgroup
+ * destruction work items don't end up filling up max_active of system_wq
+ * which may lead to deadlock.
+ */
+static struct workqueue_struct *cgroup_destroy_wq;
+
+/*
  * Generate an array of cgroup subsystem pointers. At boot time, this is
  * populated with the built in subsystems, and modular subsystems are
  * registered after that. The mutable section of this array is protected by
@@ -871,7 +879,7 @@ static void cgroup_free_rcu(struct rcu_head *head)
 	struct cgroup *cgrp = container_of(head, struct cgroup, rcu_head);
 
 	INIT_WORK(&cgrp->destroy_work, cgroup_free_fn);
-	schedule_work(&cgrp->destroy_work);
+	queue_work(cgroup_destroy_wq, &cgrp->destroy_work);
 }
 
 static void cgroup_diput(struct dentry *dentry, struct inode *inode)
@@ -4254,7 +4262,7 @@ static void css_free_rcu_fn(struct rcu_head *rcu_head)
 	 * css_put().  dput() requires process context which we don't have.
 	 */
 	INIT_WORK(&css->destroy_work, css_free_work_fn);
-	schedule_work(&css->destroy_work);
+	queue_work(cgroup_destroy_wq, &css->destroy_work);
 }
 
 static void css_release(struct percpu_ref *ref)
@@ -4544,7 +4552,7 @@ static void css_killed_ref_fn(struct percpu_ref *ref)
 		container_of(ref, struct cgroup_subsys_state, refcnt);
 
 	INIT_WORK(&css->destroy_work, css_killed_work_fn);
-	schedule_work(&css->destroy_work);
+	queue_work(cgroup_destroy_wq, &css->destroy_work);
 }
 
 /**
@@ -5025,6 +5033,17 @@ int __init cgroup_init(void)
 	if (err)
 		return err;
 
+	/*
+	 * There isn't much point in executing destruction path in
+	 * parallel.  Good chunk is serialized with cgroup_mutex anyway.
+	 * Use 1 for @max_active.
+	 */
+	cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);
+	if (!cgroup_destroy_wq) {
+		err = -ENOMEM;
+		goto out;
+	}
+
 	for_each_builtin_subsys(ss, i) {
 		if (!ss->early_init)
 			cgroup_init_subsys(ss);
@@ -5062,9 +5081,11 @@ int __init cgroup_init(void)
 	proc_create("cgroups", 0, NULL, &proc_cgroupstats_operations);
 
 out:
-	if (err)
+	if (err) {
+		if (cgroup_destroy_wq)
+			destroy_workqueue(cgroup_destroy_wq);
 		bdi_destroy(&cgroup_backing_dev_info);
-
+	}
 	return err;
 }
 

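The fix itself is ordinary workqueue API usage rather than anything cgroup-specific. A self-contained sketch of the same pattern in a hypothetical module (all names invented; only the alloc_workqueue()/queue_work()/destroy_workqueue() calls mirror the patch):

#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *demo_destroy_wq;
static struct work_struct demo_work;

static void demo_work_fn(struct work_struct *work)
{
	pr_info("destruction work ran on its own workqueue\n");
}

static int __init demo_init(void)
{
	/* 0 for @flags, 1 for @max_active: destruction is mostly serial. */
	demo_destroy_wq = alloc_workqueue("demo_destroy", 0, 1);
	if (!demo_destroy_wq)
		return -ENOMEM;

	INIT_WORK(&demo_work, demo_work_fn);
	queue_work(demo_destroy_wq, &demo_work);
	return 0;
}

static void __exit demo_exit(void)
{
	destroy_workqueue(demo_destroy_wq);	/* drains pending work first */
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

With a dedicated queue, destruction work items contend only with each other for active slots instead of with everything else sharing system_wq.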

Thread overview: 21+ messages
2013-11-11 22:06 3.10.16 cgroup_mutex deadlock Shawn Bohrer
2013-11-12 10:17 ` Li Zefan
2013-11-12 14:31   ` Michal Hocko
2013-11-12 15:55     ` Shawn Bohrer
2013-11-12 16:55       ` Michal Hocko
2013-11-14 22:56         ` Shawn Bohrer
2013-11-15  6:24           ` Tejun Heo
2013-11-15  7:54             ` Tejun Heo [this message]
2013-11-18  2:17               ` Hugh Dickins
2013-11-18 20:10                 ` Shawn Bohrer
2013-11-19  2:55                   ` Li Zefan
2013-11-20 22:47                     ` Shawn Bohrer
2013-11-22 20:59                 ` William Dauchy
2013-11-22 22:18                   ` Tejun Heo
2013-11-22 22:54                     ` William Dauchy
2013-11-25  1:20                       ` Li Zefan
2013-12-02 10:31                         ` William Dauchy
2013-12-03  1:37                           ` Li Zefan
2013-11-22 22:17                 ` [PATCH cgroup/for-3.13-fixes] cgroup: use a dedicated workqueue for cgroup destruction Tejun Heo
2013-11-24 18:23                   ` Hugh Dickins
2013-11-25  1:16                   ` Li Zefan
