From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	stable@vger.kernel.org, Tejun Heo <tj@kernel.org>,
	Li Zefan <lizefan@huawei.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Juri Lelli <juri.lelli@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	cgroups@vger.kernel.org, Rik van Riel <riel@redhat.com>,
	"Luis Claudio R. Goncalves" <lgoncalv@redhat.com>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Zubin Mithra <zsm@chromium.org>
Subject: [PATCH 4.4 33/77] cgroup: Disable IRQs while holding css_set_lock
Date: Wed,  4 Sep 2019 19:53:20 +0200
Message-ID: <20190904175306.620633120@linuxfoundation.org>
In-Reply-To: <20190904175303.317468926@linuxfoundation.org>

From: Daniel Bristot de Oliveira <bristot@redhat.com>

commit 82d6489d0fed2ec8a8c48c19e8d8a04ac8e5bb26 upstream.

While testing the deadline scheduler + cgroup setup I hit this
warning.

[  132.612935] ------------[ cut here ]------------
[  132.612951] WARNING: CPU: 5 PID: 0 at kernel/softirq.c:150 __local_bh_enable_ip+0x6b/0x80
[  132.612952] Modules linked in: (a ton of modules...)
[  132.612981] CPU: 5 PID: 0 Comm: swapper/5 Not tainted 4.7.0-rc2 #2
[  132.612981] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.2-20150714_191134- 04/01/2014
[  132.612982]  0000000000000086 45c8bb5effdd088b ffff88013fd43da0 ffffffff813d229e
[  132.612984]  0000000000000000 0000000000000000 ffff88013fd43de0 ffffffff810a652b
[  132.612985]  00000096811387b5 0000000000000200 ffff8800bab29d80 ffff880034c54c00
[  132.612986] Call Trace:
[  132.612987]  <IRQ>  [<ffffffff813d229e>] dump_stack+0x63/0x85
[  132.612994]  [<ffffffff810a652b>] __warn+0xcb/0xf0
[  132.612997]  [<ffffffff810e76a0>] ? push_dl_task.part.32+0x170/0x170
[  132.612999]  [<ffffffff810a665d>] warn_slowpath_null+0x1d/0x20
[  132.613000]  [<ffffffff810aba5b>] __local_bh_enable_ip+0x6b/0x80
[  132.613008]  [<ffffffff817d6c8a>] _raw_write_unlock_bh+0x1a/0x20
[  132.613010]  [<ffffffff817d6c9e>] _raw_spin_unlock_bh+0xe/0x10
[  132.613015]  [<ffffffff811388ac>] put_css_set+0x5c/0x60
[  132.613016]  [<ffffffff8113dc7f>] cgroup_free+0x7f/0xa0
[  132.613017]  [<ffffffff810a3912>] __put_task_struct+0x42/0x140
[  132.613018]  [<ffffffff810e776a>] dl_task_timer+0xca/0x250
[  132.613027]  [<ffffffff810e76a0>] ? push_dl_task.part.32+0x170/0x170
[  132.613030]  [<ffffffff8111371e>] __hrtimer_run_queues+0xee/0x270
[  132.613031]  [<ffffffff81113ec8>] hrtimer_interrupt+0xa8/0x190
[  132.613034]  [<ffffffff81051a58>] local_apic_timer_interrupt+0x38/0x60
[  132.613035]  [<ffffffff817d9b0d>] smp_apic_timer_interrupt+0x3d/0x50
[  132.613037]  [<ffffffff817d7c5c>] apic_timer_interrupt+0x8c/0xa0
[  132.613038]  <EOI>  [<ffffffff81063466>] ? native_safe_halt+0x6/0x10
[  132.613043]  [<ffffffff81037a4e>] default_idle+0x1e/0xd0
[  132.613044]  [<ffffffff810381cf>] arch_cpu_idle+0xf/0x20
[  132.613046]  [<ffffffff810e8fda>] default_idle_call+0x2a/0x40
[  132.613047]  [<ffffffff810e92d7>] cpu_startup_entry+0x2e7/0x340
[  132.613048]  [<ffffffff81050235>] start_secondary+0x155/0x190
[  132.613049] ---[ end trace f91934d162ce9977 ]---

The warning comes from the spin_(lock|unlock)_bh(&css_set_lock) calls
being reached from interrupt context. Convert the spin_lock_bh calls to
spin_lock_irq(save) to avoid this problem - and the other problems of
sharing a spinlock with an interrupt.
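
As a minimal sketch of the change, here is the put_css_set() conversion
in isolation (the complete hunk, including the refcount comment, is in
the diff below):

  /* Before: the final put takes a BH-disabling lock. When it runs from
   * hard interrupt context (dl_task_timer -> __put_task_struct ->
   * cgroup_free in the trace above), _raw_spin_unlock_bh() ends up in
   * __local_bh_enable_ip() while in_irq() - the WARN that fired.
   */
  static void put_css_set(struct css_set *cset)
  {
          if (atomic_add_unless(&cset->refcount, -1, 1))
                  return;

          spin_lock_bh(&css_set_lock);
          put_css_set_locked(cset);
          spin_unlock_bh(&css_set_lock);
  }

  /* After: disable interrupts around the critical section instead, so
   * css_set_lock is safe to take from process, softirq and hardirq
   * context. The irqsave/irqrestore variant is used here because
   * put_css_set() cannot know its caller's interrupt state; paths that
   * are known to run with interrupts enabled take the lock with plain
   * spin_lock_irq()/spin_unlock_irq().
   */
  static void put_css_set(struct css_set *cset)
  {
          unsigned long flags;

          if (atomic_add_unless(&cset->refcount, -1, 1))
                  return;

          spin_lock_irqsave(&css_set_lock, flags);
          put_css_set_locked(cset);
          spin_unlock_irqrestore(&css_set_lock, flags);
  }

A side effect of holding css_set_lock with interrupts disabled is that
nested locks no longer need their _irq variants, which is why the
cgroup_enable_task_cg_lists() hunk switches p->sighand->siglock to
plain spin_lock()/spin_unlock().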

Cc: Tejun Heo <tj@kernel.org>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Juri Lelli <juri.lelli@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: cgroups@vger.kernel.org
Cc: stable@vger.kernel.org # 4.5+
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: "Luis Claudio R. Goncalves" <lgoncalv@redhat.com>
Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Acked-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Zubin Mithra <zsm@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 kernel/cgroup.c |  122 +++++++++++++++++++++++++++++---------------------------
 1 file changed, 64 insertions(+), 58 deletions(-)

--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -784,6 +784,8 @@ static void put_css_set_locked(struct cs
 
 static void put_css_set(struct css_set *cset)
 {
+	unsigned long flags;
+
 	/*
 	 * Ensure that the refcount doesn't hit zero while any readers
 	 * can see it. Similar to atomic_dec_and_lock(), but for an
@@ -792,9 +794,9 @@ static void put_css_set(struct css_set *
 	if (atomic_add_unless(&cset->refcount, -1, 1))
 		return;
 
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irqsave(&css_set_lock, flags);
 	put_css_set_locked(cset);
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irqrestore(&css_set_lock, flags);
 }
 
 /*
@@ -1017,11 +1019,11 @@ static struct css_set *find_css_set(stru
 
 	/* First see if we already have a cgroup group that matches
 	 * the desired set */
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	cset = find_existing_css_set(old_cset, cgrp, template);
 	if (cset)
 		get_css_set(cset);
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 
 	if (cset)
 		return cset;
@@ -1049,7 +1051,7 @@ static struct css_set *find_css_set(stru
 	 * find_existing_css_set() */
 	memcpy(cset->subsys, template, sizeof(cset->subsys));
 
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	/* Add reference counts and links from the new css_set. */
 	list_for_each_entry(link, &old_cset->cgrp_links, cgrp_link) {
 		struct cgroup *c = link->cgrp;
@@ -1075,7 +1077,7 @@ static struct css_set *find_css_set(stru
 		css_get(css);
 	}
 
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 
 	return cset;
 }
@@ -1139,7 +1141,7 @@ static void cgroup_destroy_root(struct c
 	 * Release all the links from cset_links to this hierarchy's
 	 * root cgroup
 	 */
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 
 	list_for_each_entry_safe(link, tmp_link, &cgrp->cset_links, cset_link) {
 		list_del(&link->cset_link);
@@ -1147,7 +1149,7 @@ static void cgroup_destroy_root(struct c
 		kfree(link);
 	}
 
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 
 	if (!list_empty(&root->root_list)) {
 		list_del(&root->root_list);
@@ -1551,11 +1553,11 @@ static int rebind_subsystems(struct cgro
 		ss->root = dst_root;
 		css->cgroup = dcgrp;
 
-		spin_lock_bh(&css_set_lock);
+		spin_lock_irq(&css_set_lock);
 		hash_for_each(css_set_table, i, cset, hlist)
 			list_move_tail(&cset->e_cset_node[ss->id],
 				       &dcgrp->e_csets[ss->id]);
-		spin_unlock_bh(&css_set_lock);
+		spin_unlock_irq(&css_set_lock);
 
 		src_root->subsys_mask &= ~(1 << ssid);
 		scgrp->subtree_control &= ~(1 << ssid);
@@ -1832,7 +1834,7 @@ static void cgroup_enable_task_cg_lists(
 {
 	struct task_struct *p, *g;
 
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 
 	if (use_task_css_set_links)
 		goto out_unlock;
@@ -1857,8 +1859,12 @@ static void cgroup_enable_task_cg_lists(
 		 * entry won't be deleted though the process has exited.
 		 * Do it while holding siglock so that we don't end up
 		 * racing against cgroup_exit().
+		 *
+		 * Interrupts were already disabled while acquiring
+		 * the css_set_lock, so we do not need to disable it
+		 * again when acquiring the sighand->siglock here.
 		 */
-		spin_lock_irq(&p->sighand->siglock);
+		spin_lock(&p->sighand->siglock);
 		if (!(p->flags & PF_EXITING)) {
 			struct css_set *cset = task_css_set(p);
 
@@ -1867,11 +1873,11 @@ static void cgroup_enable_task_cg_lists(
 			list_add_tail(&p->cg_list, &cset->tasks);
 			get_css_set(cset);
 		}
-		spin_unlock_irq(&p->sighand->siglock);
+		spin_unlock(&p->sighand->siglock);
 	} while_each_thread(g, p);
 	read_unlock(&tasklist_lock);
 out_unlock:
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 }
 
 static void init_cgroup_housekeeping(struct cgroup *cgrp)
@@ -1976,13 +1982,13 @@ static int cgroup_setup_root(struct cgro
 	 * Link the root cgroup in this hierarchy into all the css_set
 	 * objects.
 	 */
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	hash_for_each(css_set_table, i, cset, hlist) {
 		link_css_set(&tmp_links, cset, root_cgrp);
 		if (css_set_populated(cset))
 			cgroup_update_populated(root_cgrp, true);
 	}
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 
 	BUG_ON(!list_empty(&root_cgrp->self.children));
 	BUG_ON(atomic_read(&root->nr_cgrps) != 1);
@@ -2215,7 +2221,7 @@ char *task_cgroup_path(struct task_struc
 	char *path = NULL;
 
 	mutex_lock(&cgroup_mutex);
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 
 	root = idr_get_next(&cgroup_hierarchy_idr, &hierarchy_id);
 
@@ -2228,7 +2234,7 @@ char *task_cgroup_path(struct task_struc
 			path = buf;
 	}
 
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 	mutex_unlock(&cgroup_mutex);
 	return path;
 }
@@ -2403,7 +2409,7 @@ static int cgroup_taskset_migrate(struct
 	 * the new cgroup.  There are no failure cases after here, so this
 	 * is the commit point.
 	 */
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	list_for_each_entry(cset, &tset->src_csets, mg_node) {
 		list_for_each_entry_safe(task, tmp_task, &cset->mg_tasks, cg_list) {
 			struct css_set *from_cset = task_css_set(task);
@@ -2414,7 +2420,7 @@ static int cgroup_taskset_migrate(struct
 			put_css_set_locked(from_cset);
 		}
 	}
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 
 	/*
 	 * Migration is committed, all target tasks are now on dst_csets.
@@ -2443,13 +2449,13 @@ out_cancel_attach:
 		}
 	}
 out_release_tset:
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	list_splice_init(&tset->dst_csets, &tset->src_csets);
 	list_for_each_entry_safe(cset, tmp_cset, &tset->src_csets, mg_node) {
 		list_splice_tail_init(&cset->mg_tasks, &cset->tasks);
 		list_del_init(&cset->mg_node);
 	}
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 	return ret;
 }
 
@@ -2466,14 +2472,14 @@ static void cgroup_migrate_finish(struct
 
 	lockdep_assert_held(&cgroup_mutex);
 
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	list_for_each_entry_safe(cset, tmp_cset, preloaded_csets, mg_preload_node) {
 		cset->mg_src_cgrp = NULL;
 		cset->mg_dst_cset = NULL;
 		list_del_init(&cset->mg_preload_node);
 		put_css_set_locked(cset);
 	}
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 }
 
 /**
@@ -2623,7 +2629,7 @@ static int cgroup_migrate(struct task_st
 	 * already PF_EXITING could be freed from underneath us unless we
 	 * take an rcu_read_lock.
 	 */
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	rcu_read_lock();
 	task = leader;
 	do {
@@ -2632,7 +2638,7 @@ static int cgroup_migrate(struct task_st
 			break;
 	} while_each_thread(leader, task);
 	rcu_read_unlock();
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 
 	return cgroup_taskset_migrate(&tset, cgrp);
 }
@@ -2653,7 +2659,7 @@ static int cgroup_attach_task(struct cgr
 	int ret;
 
 	/* look up all src csets */
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	rcu_read_lock();
 	task = leader;
 	do {
@@ -2663,7 +2669,7 @@ static int cgroup_attach_task(struct cgr
 			break;
 	} while_each_thread(leader, task);
 	rcu_read_unlock();
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 
 	/* prepare dst csets and commit */
 	ret = cgroup_migrate_prepare_dst(dst_cgrp, &preloaded_csets);
@@ -2696,9 +2702,9 @@ static int cgroup_procs_write_permission
 		struct cgroup *cgrp;
 		struct inode *inode;
 
-		spin_lock_bh(&css_set_lock);
+		spin_lock_irq(&css_set_lock);
 		cgrp = task_cgroup_from_root(task, &cgrp_dfl_root);
-		spin_unlock_bh(&css_set_lock);
+		spin_unlock_irq(&css_set_lock);
 
 		while (!cgroup_is_descendant(dst_cgrp, cgrp))
 			cgrp = cgroup_parent(cgrp);
@@ -2800,9 +2806,9 @@ int cgroup_attach_task_all(struct task_s
 		if (root == &cgrp_dfl_root)
 			continue;
 
-		spin_lock_bh(&css_set_lock);
+		spin_lock_irq(&css_set_lock);
 		from_cgrp = task_cgroup_from_root(from, root);
-		spin_unlock_bh(&css_set_lock);
+		spin_unlock_irq(&css_set_lock);
 
 		retval = cgroup_attach_task(from_cgrp, tsk, false);
 		if (retval)
@@ -2927,7 +2933,7 @@ static int cgroup_update_dfl_csses(struc
 	percpu_down_write(&cgroup_threadgroup_rwsem);
 
 	/* look up all csses currently attached to @cgrp's subtree */
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	css_for_each_descendant_pre(css, cgroup_css(cgrp, NULL)) {
 		struct cgrp_cset_link *link;
 
@@ -2939,14 +2945,14 @@ static int cgroup_update_dfl_csses(struc
 			cgroup_migrate_add_src(link->cset, cgrp,
 					       &preloaded_csets);
 	}
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 
 	/* NULL dst indicates self on default hierarchy */
 	ret = cgroup_migrate_prepare_dst(NULL, &preloaded_csets);
 	if (ret)
 		goto out_finish;
 
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	list_for_each_entry(src_cset, &preloaded_csets, mg_preload_node) {
 		struct task_struct *task, *ntask;
 
@@ -2958,7 +2964,7 @@ static int cgroup_update_dfl_csses(struc
 		list_for_each_entry_safe(task, ntask, &src_cset->tasks, cg_list)
 			cgroup_taskset_add(task, &tset);
 	}
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 
 	ret = cgroup_taskset_migrate(&tset, cgrp);
 out_finish:
@@ -3641,10 +3647,10 @@ static int cgroup_task_count(const struc
 	int count = 0;
 	struct cgrp_cset_link *link;
 
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	list_for_each_entry(link, &cgrp->cset_links, cset_link)
 		count += atomic_read(&link->cset->refcount);
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 	return count;
 }
 
@@ -3982,7 +3988,7 @@ void css_task_iter_start(struct cgroup_s
 
 	memset(it, 0, sizeof(*it));
 
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 
 	it->ss = css->ss;
 
@@ -3995,7 +4001,7 @@ void css_task_iter_start(struct cgroup_s
 
 	css_task_iter_advance_css_set(it);
 
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 }
 
 /**
@@ -4013,7 +4019,7 @@ struct task_struct *css_task_iter_next(s
 		it->cur_task = NULL;
 	}
 
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 
 	if (it->task_pos) {
 		it->cur_task = list_entry(it->task_pos, struct task_struct,
@@ -4022,7 +4028,7 @@ struct task_struct *css_task_iter_next(s
 		css_task_iter_advance(it);
 	}
 
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 
 	return it->cur_task;
 }
@@ -4036,10 +4042,10 @@ struct task_struct *css_task_iter_next(s
 void css_task_iter_end(struct css_task_iter *it)
 {
 	if (it->cur_cset) {
-		spin_lock_bh(&css_set_lock);
+		spin_lock_irq(&css_set_lock);
 		list_del(&it->iters_node);
 		put_css_set_locked(it->cur_cset);
-		spin_unlock_bh(&css_set_lock);
+		spin_unlock_irq(&css_set_lock);
 	}
 
 	if (it->cur_task)
@@ -4068,10 +4074,10 @@ int cgroup_transfer_tasks(struct cgroup
 	mutex_lock(&cgroup_mutex);
 
 	/* all tasks in @from are being moved, all csets are source */
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	list_for_each_entry(link, &from->cset_links, cset_link)
 		cgroup_migrate_add_src(link->cset, to, &preloaded_csets);
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 
 	ret = cgroup_migrate_prepare_dst(to, &preloaded_csets);
 	if (ret)
@@ -5180,10 +5186,10 @@ static int cgroup_destroy_locked(struct
 	 */
 	cgrp->self.flags &= ~CSS_ONLINE;
 
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	list_for_each_entry(link, &cgrp->cset_links, cset_link)
 		link->cset->dead = true;
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 
 	/* initiate massacre of all css's */
 	for_each_css(css, ssid, cgrp)
@@ -5436,7 +5442,7 @@ int proc_cgroup_show(struct seq_file *m,
 		goto out;
 
 	mutex_lock(&cgroup_mutex);
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 
 	for_each_root(root) {
 		struct cgroup_subsys *ss;
@@ -5488,7 +5494,7 @@ int proc_cgroup_show(struct seq_file *m,
 
 	retval = 0;
 out_unlock:
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 	mutex_unlock(&cgroup_mutex);
 	kfree(buf);
 out:
@@ -5649,13 +5655,13 @@ void cgroup_post_fork(struct task_struct
 	if (use_task_css_set_links) {
 		struct css_set *cset;
 
-		spin_lock_bh(&css_set_lock);
+		spin_lock_irq(&css_set_lock);
 		cset = task_css_set(current);
 		if (list_empty(&child->cg_list)) {
 			get_css_set(cset);
 			css_set_move_task(child, NULL, cset, false);
 		}
-		spin_unlock_bh(&css_set_lock);
+		spin_unlock_irq(&css_set_lock);
 	}
 
 	/*
@@ -5699,9 +5705,9 @@ void cgroup_exit(struct task_struct *tsk
 	cset = task_css_set(tsk);
 
 	if (!list_empty(&tsk->cg_list)) {
-		spin_lock_bh(&css_set_lock);
+		spin_lock_irq(&css_set_lock);
 		css_set_move_task(tsk, cset, NULL, false);
-		spin_unlock_bh(&css_set_lock);
+		spin_unlock_irq(&css_set_lock);
 	} else {
 		get_css_set(cset);
 	}
@@ -5914,7 +5920,7 @@ static int current_css_set_cg_links_read
 	if (!name_buf)
 		return -ENOMEM;
 
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	rcu_read_lock();
 	cset = rcu_dereference(current->cgroups);
 	list_for_each_entry(link, &cset->cgrp_links, cgrp_link) {
@@ -5925,7 +5931,7 @@ static int current_css_set_cg_links_read
 			   c->root->hierarchy_id, name_buf);
 	}
 	rcu_read_unlock();
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 	kfree(name_buf);
 	return 0;
 }
@@ -5936,7 +5942,7 @@ static int cgroup_css_links_read(struct
 	struct cgroup_subsys_state *css = seq_css(seq);
 	struct cgrp_cset_link *link;
 
-	spin_lock_bh(&css_set_lock);
+	spin_lock_irq(&css_set_lock);
 	list_for_each_entry(link, &css->cgroup->cset_links, cset_link) {
 		struct css_set *cset = link->cset;
 		struct task_struct *task;
@@ -5959,7 +5965,7 @@ static int cgroup_css_links_read(struct
 	overflow:
 		seq_puts(seq, "  ...\n");
 	}
-	spin_unlock_bh(&css_set_lock);
+	spin_unlock_irq(&css_set_lock);
 	return 0;
 }
 



Thread overview: 87+ messages
2019-09-04 17:52 [PATCH 4.4 00/77] 4.4.191-stable review Greg Kroah-Hartman
2019-09-04 17:52 ` [PATCH 4.4 01/77] HID: Add 044f:b320 ThrustMaster, Inc. 2 in 1 DT Greg Kroah-Hartman
2019-09-04 17:52 ` [PATCH 4.4 02/77] MIPS: kernel: only use i8253 clocksource with periodic clockevent Greg Kroah-Hartman
2019-09-04 17:52 ` [PATCH 4.4 03/77] netfilter: ebtables: fix a memory leak bug in compat Greg Kroah-Hartman
2019-09-04 17:52 ` [PATCH 4.4 04/77] bonding: Force slave speed check after link state recovery for 802.3ad Greg Kroah-Hartman
2019-09-04 17:52 ` [PATCH 4.4 05/77] can: dev: call netif_carrier_off() in register_candev() Greg Kroah-Hartman
2019-09-04 17:52 ` [PATCH 4.4 06/77] ASoC: Fail card instantiation if DAI format setup fails Greg Kroah-Hartman
2019-09-04 18:09   ` Mark Brown
2019-09-05 18:56     ` Greg Kroah-Hartman
2019-09-06 10:59       ` Mark Brown
2019-09-04 17:52 ` [PATCH 4.4 07/77] st21nfca_connectivity_event_received: null check the allocation Greg Kroah-Hartman
2019-09-04 17:52 ` [PATCH 4.4 08/77] st_nci_hci_connectivity_event_received: " Greg Kroah-Hartman
2019-09-04 17:52 ` [PATCH 4.4 09/77] ASoC: ti: davinci-mcasp: Correct slot_width posed constraint Greg Kroah-Hartman
2019-09-04 17:52 ` [PATCH 4.4 10/77] net: usb: qmi_wwan: Add the BroadMobi BM818 card Greg Kroah-Hartman
2019-09-04 17:52 ` [PATCH 4.4 11/77] isdn: mISDN: hfcsusb: Fix possible null-pointer dereferences in start_isoc_chain() Greg Kroah-Hartman
2019-09-04 17:52 ` [PATCH 4.4 12/77] isdn: hfcsusb: Fix mISDN driver crash caused by transfer buffer on the stack Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 13/77] perf bench numa: Fix cpu0 binding Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 14/77] can: sja1000: force the string buffer NULL-terminated Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 15/77] can: peak_usb: " Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 16/77] NFSv4: Fix a potential sleep while atomic in nfs4_do_reclaim() Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 17/77] net: cxgb3_main: Fix a resource leak in a error path in init_one() Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 18/77] net: hisilicon: make hip04_tx_reclaim non-reentrant Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 19/77] net: hisilicon: fix hip04-xmit never return TX_BUSY Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 20/77] net: hisilicon: Fix dma_map_single failed on arm64 Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 21/77] libata: add SG safety checks in SFF pio transfers Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 22/77] selftests: kvm: Adding config fragments Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 23/77] HID: wacom: correct misreported EKR ring values Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 24/77] Revert "dm bufio: fix deadlock with loop device" Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 25/77] userfaultfd_release: always remove uffd flags and clear vm_userfaultfd_ctx Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 26/77] x86/retpoline: Dont clobber RFLAGS during CALL_NOSPEC on i386 Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 27/77] x86/apic: Handle missing global clockevent gracefully Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 28/77] x86/boot: Save fields explicitly, zero out everything else Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 29/77] x86/boot: Fix boot regression caused by bootparam sanitizing Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 30/77] dm btree: fix order of block initialization in btree_split_beneath Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 31/77] dm space map metadata: fix missing store of apply_bops() return value Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 32/77] dm table: fix invalid memory accesses with too high sector number Greg Kroah-Hartman
2019-09-04 17:53 ` Greg Kroah-Hartman [this message]
2019-09-04 17:53 ` [PATCH 4.4 34/77] GFS2: dont set rgrp gl_object until its inserted into rgrp tree Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 35/77] net: arc_emac: fix koops caused by sk_buff free Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 36/77] vhost-net: set packet weight of tx polling to 2 * vq size Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 37/77] vhost_net: use packet weight for rx handler, too Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 38/77] vhost_net: introduce vhost_exceeds_weight() Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 39/77] vhost: " Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 40/77] vhost_net: fix possible infinite loop Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 41/77] vhost: scsi: add weight support Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 42/77] siphash: add cryptographically secure PRF Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 43/77] siphash: implement HalfSipHash1-3 for hash tables Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 44/77] inet: switch IP ID generator to siphash Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 45/77] netfilter: ctnetlink: dont use conntrack/expect object addresses as id Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 46/77] netfilter: conntrack: Use consistent ct id hash calculation Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 47/77] Revert "perf test 6: Fix missing kvm module load for s390" Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 48/77] x86/pm: Introduce quirk framework to save/restore extra MSR registers around suspend/resume Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 49/77] x86/CPU/AMD: Clear RDRAND CPUID bit on AMD family 15h/16h Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 50/77] scsi: ufs: Fix NULL pointer dereference in ufshcd_config_vreg_hpm() Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 51/77] dmaengine: ste_dma40: fix unneeded variable warning Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 52/77] usb: gadget: composite: Clear "suspended" on reset/disconnect Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 53/77] usb: host: fotg2: restart hcd after port reset Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 54/77] tools: hv: fix KVP and VSS daemons exit code Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 55/77] watchdog: bcm2835_wdt: Fix module autoload Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 56/77] tcp: fix tcp_rtx_queue_tail in case of empty retransmit queue Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 57/77] ALSA: usb-audio: Fix a stack buffer overflow bug in check_input_term Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 58/77] ALSA: usb-audio: Fix an OOB bug in parse_audio_mixer_unit Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 59/77] tcp: make sure EPOLLOUT wont be missed Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 60/77] ALSA: seq: Fix potential concurrent access to the deleted pool Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 61/77] KVM: x86: Dont update RIP or do single-step on faulting emulation Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 62/77] x86/apic: Do not initialize LDR and DFR for bigsmp Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 63/77] x86/apic: Include the LDR when clearing out APIC registers Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 64/77] usb-storage: Add new JMS567 revision to unusual_devs Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 65/77] USB: cdc-wdm: fix race between write and disconnect due to flag abuse Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 66/77] usb: host: ohci: fix a race condition between shutdown and irq Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 67/77] USB: storage: ums-realtek: Update module parameter description for auto_delink_en Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 68/77] USB: storage: ums-realtek: Whitelist auto-delink support Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 69/77] ptrace,x86: Make user_64bit_mode() available to 32-bit builds Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 70/77] uprobes/x86: Fix detection of 32-bit user mode Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 71/77] mmc: sdhci-of-at91: add quirk for broken HS200 Greg Kroah-Hartman
2019-09-04 17:53 ` [PATCH 4.4 72/77] mmc: core: Fix init of SD cards reporting an invalid VDD range Greg Kroah-Hartman
2019-09-04 17:54 ` [PATCH 4.4 73/77] stm class: Fix a double free of stm_source_device Greg Kroah-Hartman
2019-09-04 17:54 ` [PATCH 4.4 74/77] VMCI: Release resource if the work is already queued Greg Kroah-Hartman
2019-09-04 17:54 ` [PATCH 4.4 75/77] Revert "cfg80211: fix processing world regdomain when non modular" Greg Kroah-Hartman
2019-09-04 17:54 ` [PATCH 4.4 76/77] mac80211: fix possible sta leak Greg Kroah-Hartman
2019-09-04 17:54 ` [PATCH 4.4 77/77] x86/ptrace: fix up botched merge of spectrev1 fix Greg Kroah-Hartman
2019-09-05  1:18 ` [PATCH 4.4 00/77] 4.4.191-stable review kernelci.org bot
2019-09-05 14:22 ` shuah
2019-09-05 16:54 ` Guenter Roeck
2019-09-05 17:24 ` Daniel Díaz
2019-09-05 19:50 ` Kelsey Skunberg
2019-09-06  7:36 ` Jon Hunter
