* [PATCH AUTOSEL 6.4 1/4] rcu-tasks: Avoid pr_info() with spin lock in cblist_init_generic()
From: Sasha Levin @ 2023-07-09 14:55 UTC
To: linux-kernel, stable
Cc: Shigeru Yoshida, Zhang, Qiang1, Paul E. McKenney, Sasha Levin,
frederic, quic_neeraju, joel, josh, boqun.feng, rcu
From: Shigeru Yoshida <syoshida@redhat.com>
[ Upstream commit 5fc8cbe4cf0fd34ded8045c385790c3bf04f6785 ]
pr_info() is called with the rtp->cbs_gbl_lock spin lock held. Because
pr_info() calls printk(), which might sleep, this results in a BUG like
the one below:
[ 0.206455] cblist_init_generic: Setting adjustable number of callback queues.
[ 0.206463]
[ 0.206464] =============================
[ 0.206464] [ BUG: Invalid wait context ]
[ 0.206465] 5.19.0-00428-g9de1f9c8ca51 #5 Not tainted
[ 0.206466] -----------------------------
[ 0.206466] swapper/0/1 is trying to lock:
[ 0.206467] ffffffffa0167a58 (&port_lock_key){....}-{3:3}, at: serial8250_console_write+0x327/0x4a0
[ 0.206473] other info that might help us debug this:
[ 0.206473] context-{5:5}
[ 0.206474] 3 locks held by swapper/0/1:
[ 0.206474] #0: ffffffff9eb597e0 (rcu_tasks.cbs_gbl_lock){....}-{2:2}, at: cblist_init_generic.constprop.0+0x14/0x1f0
[ 0.206478] #1: ffffffff9eb579c0 (console_lock){+.+.}-{0:0}, at: _printk+0x63/0x7e
[ 0.206482] #2: ffffffff9ea77780 (console_owner){....}-{0:0}, at: console_emit_next_record.constprop.0+0x111/0x330
[ 0.206485] stack backtrace:
[ 0.206486] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.19.0-00428-g9de1f9c8ca51 #5
[ 0.206488] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.0-1.fc36 04/01/2014
[ 0.206489] Call Trace:
[ 0.206490] <TASK>
[ 0.206491] dump_stack_lvl+0x6a/0x9f
[ 0.206493] __lock_acquire.cold+0x2d7/0x2fe
[ 0.206496] ? stack_trace_save+0x46/0x70
[ 0.206497] lock_acquire+0xd1/0x2f0
[ 0.206499] ? serial8250_console_write+0x327/0x4a0
[ 0.206500] ? __lock_acquire+0x5c7/0x2720
[ 0.206502] _raw_spin_lock_irqsave+0x3d/0x90
[ 0.206504] ? serial8250_console_write+0x327/0x4a0
[ 0.206506] serial8250_console_write+0x327/0x4a0
[ 0.206508] console_emit_next_record.constprop.0+0x180/0x330
[ 0.206511] console_unlock+0xf7/0x1f0
[ 0.206512] vprintk_emit+0xf7/0x330
[ 0.206514] _printk+0x63/0x7e
[ 0.206516] cblist_init_generic.constprop.0.cold+0x24/0x32
[ 0.206518] rcu_init_tasks_generic+0x5/0xd9
[ 0.206522] kernel_init_freeable+0x15b/0x2a2
[ 0.206523] ? rest_init+0x160/0x160
[ 0.206526] kernel_init+0x11/0x120
[ 0.206527] ret_from_fork+0x1f/0x30
[ 0.206530] </TASK>
[ 0.207018] cblist_init_generic: Setting shift to 1 and lim to 1.
This patch moves the pr_info() calls so that they are made without
rtp->cbs_gbl_lock held.
Signed-off-by: Shigeru Yoshida <syoshida@redhat.com>
Tested-by: "Zhang, Qiang1" <qiang1.zhang@intel.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
kernel/rcu/tasks.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 5f4fc8184dd0b..65df1aaf0ce9b 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -241,7 +241,6 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
if (rcu_task_enqueue_lim < 0) {
rcu_task_enqueue_lim = 1;
rcu_task_cb_adjust = true;
- pr_info("%s: Setting adjustable number of callback queues.\n", __func__);
} else if (rcu_task_enqueue_lim == 0) {
rcu_task_enqueue_lim = 1;
}
@@ -272,6 +271,10 @@ static void cblist_init_generic(struct rcu_tasks *rtp)
raw_spin_unlock_rcu_node(rtpcp); // irqs remain disabled.
}
raw_spin_unlock_irqrestore(&rtp->cbs_gbl_lock, flags);
+
+ if (rcu_task_cb_adjust)
+ pr_info("%s: Setting adjustable number of callback queues.\n", __func__);
+
pr_info("%s: Setting shift to %d and lim to %d.\n", __func__, data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim));
}
--
2.39.2
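
The shape of this fix -- decide under the lock, report after dropping it --
is a common pattern wherever the locked region must not call anything that
can sleep or take console locks. Below is a minimal user-space sketch of
the same pattern, assuming POSIX threads; the pthread spinlock and printf()
stand in for the kernel's raw_spin_lock_irqsave() and printk() (in user
space printing under a spinlock is only a latency problem, but the
restructuring is identical):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_spinlock_t cfg_lock;
static int queue_limit = -1;	/* stands in for rcu_task_enqueue_lim */
static bool cb_adjust;		/* stands in for rcu_task_cb_adjust */

static void cblist_init_sketch(void)
{
	bool announce = false;

	pthread_spin_lock(&cfg_lock);
	if (queue_limit < 0) {
		queue_limit = 1;
		cb_adjust = true;
		announce = true;	/* record the decision under the lock... */
	}
	pthread_spin_unlock(&cfg_lock);

	if (announce)			/* ...and print only after releasing it */
		printf("%s: Setting adjustable number of callback queues.\n",
		       __func__);
}

int main(void)
{
	pthread_spin_init(&cfg_lock, PTHREAD_PROCESS_PRIVATE);
	cblist_init_sketch();
	pthread_spin_destroy(&cfg_lock);
	return 0;
}

The patch itself reads the global rcu_task_cb_adjust after the unlock rather
than a local flag, which is safe there because the flag is only ever set
during initialization and never cleared.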
* [PATCH AUTOSEL 6.4 2/4] rcu: Mark additional concurrent load from ->cpu_no_qs.b.exp
From: Sasha Levin @ 2023-07-09 14:55 UTC
To: linux-kernel, stable
Cc: Paul E. McKenney, Sasha Levin, frederic, quic_neeraju, joel, josh,
boqun.feng, rcu
From: "Paul E. McKenney" <paulmck@kernel.org>
[ Upstream commit 9146eb25495ea8bfb5010192e61e3ed5805ce9ef ]
The per-CPU rcu_data structure's ->cpu_no_qs.b.exp field is updated
only on the instance corresponding to the current CPU, but can be read
more widely. Unmarked accesses are OK from the corresponding CPU, but
only if interrupts are disabled, given that interrupt handlers can and
do modify this field.
Unfortunately, although the load from rcu_preempt_deferred_qs() is always
carried out from the corresponding CPU, interrupts are not necessarily
disabled. This commit therefore upgrades this load to READ_ONCE.
Similarly, the diagnostic access from synchronize_rcu_expedited_wait()
might run with interrupts disabled and from some other CPU. This commit
therefore marks this load with data_race().
Finally, the C-language access in rcu_preempt_ctxt_queue() is OK as
is because interrupts are disabled and this load is always from the
corresponding CPU. This commit adds a comment giving the rationale for
this access being safe.
This data race was reported by KCSAN. Not appropriate for backporting
due to failure being unlikely.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
kernel/rcu/tree_exp.h | 2 +-
kernel/rcu/tree_plugin.h | 4 +++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 3b7abb58157df..8239b39d945bd 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -643,7 +643,7 @@ static void synchronize_rcu_expedited_wait(void)
"O."[!!cpu_online(cpu)],
"o."[!!(rdp->grpmask & rnp->expmaskinit)],
"N."[!!(rdp->grpmask & rnp->expmaskinitnext)],
- "D."[!!(rdp->cpu_no_qs.b.exp)]);
+ "D."[!!data_race(rdp->cpu_no_qs.b.exp)]);
}
}
pr_cont(" } %lu jiffies s: %lu root: %#lx/%c\n",
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 7b0fe741a0886..41021080ad258 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -257,6 +257,8 @@ static void rcu_preempt_ctxt_queue(struct rcu_node *rnp, struct rcu_data *rdp)
* GP should not be able to end until we report, so there should be
* no need to check for a subsequent expedited GP. (Though we are
* still in a quiescent state in any case.)
+ *
+ * Interrupts are disabled, so ->cpu_no_qs.b.exp cannot change.
*/
if (blkd_state & RCU_EXP_BLKD && rdp->cpu_no_qs.b.exp)
rcu_report_exp_rdp(rdp);
@@ -941,7 +943,7 @@ notrace void rcu_preempt_deferred_qs(struct task_struct *t)
{
struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
- if (rdp->cpu_no_qs.b.exp)
+ if (READ_ONCE(rdp->cpu_no_qs.b.exp))
rcu_report_exp_rdp(rdp);
}
--
2.39.2
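
For readers unfamiliar with the two annotations used above: READ_ONCE()
constrains the compiler (no tearing, fusing, or repeated loads) and tells
KCSAN that the racy load is intentional and its value matters, while
data_race() merely tells KCSAN that a diagnostic-quality racy read is
acceptable. A rough user-space sketch of the scalar case, assuming
GCC/Clang volatile semantics (the kernel's real READ_ONCE()/WRITE_ONCE()
live in include/asm-generic/rwonce.h and handle more cases):

#include <pthread.h>
#include <stdio.h>

/* Rough stand-ins for the kernel macros, scalar case only. */
#define READ_ONCE(x)	(*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile __typeof__(x) *)&(x) = (v))
#define data_race(expr)	(expr)	/* in the kernel: KCSAN annotation only */

static int exp_qs_pending;	/* stands in for rdp->cpu_no_qs.b.exp */

static void *other_context(void *arg)
{
	WRITE_ONCE(exp_qs_pending, 1);	/* marked store, cannot be torn */
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, other_context, NULL);

	/* The value drives control flow, so the load must be READ_ONCE(). */
	if (READ_ONCE(exp_qs_pending))
		puts("expedited quiescent state still pending");

	/* Diagnostic print only; a stale or racy value is harmless. */
	printf("flag: %c\n", "D."[!!data_race(exp_qs_pending)]);

	pthread_join(t, NULL);
	return 0;
}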
* [PATCH AUTOSEL 6.4 3/4] rcu: Mark rcu_cpu_kthread() accesses to ->rcu_cpu_has_work
From: Sasha Levin @ 2023-07-09 14:55 UTC
To: linux-kernel, stable
Cc: Paul E. McKenney, Sasha Levin, frederic, quic_neeraju, joel, josh,
boqun.feng, rcu
From: "Paul E. McKenney" <paulmck@kernel.org>
[ Upstream commit a24c1aab652ebacf9ea62470a166514174c96fe1 ]
The rcu_data structure's ->rcu_cpu_has_work field can be modified by
any CPU attempting to wake up the rcuc kthread. Therefore, this commit
marks accesses to this field from the rcu_cpu_kthread() function.
This data race was reported by KCSAN. Not appropriate for backporting
due to failure being unlikely.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
kernel/rcu/tree.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f52ff72410416..fe649dd0165c3 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2459,12 +2459,12 @@ static void rcu_cpu_kthread(unsigned int cpu)
*statusp = RCU_KTHREAD_RUNNING;
local_irq_disable();
work = *workp;
- *workp = 0;
+ WRITE_ONCE(*workp, 0);
local_irq_enable();
if (work)
rcu_core();
local_bh_enable();
- if (*workp == 0) {
+ if (!READ_ONCE(*workp)) {
trace_rcu_utilization(TPS("End CPU kthread@rcu_wait"));
*statusp = RCU_KTHREAD_WAITING;
return;
--
2.39.2
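
The concrete hazard behind marking these accesses is the compiler, not just
KCSAN noise: a plain load that races with another CPU's store may be torn,
or fused with a later load of the same variable. A minimal sketch of the
load-fusing case, assuming an optimizing GCC/Clang build (-O2); the macros
are the same user-space stand-ins as in the previous example:

#include <pthread.h>
#include <unistd.h>

#define READ_ONCE(x)	(*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v)	(*(volatile __typeof__(x) *)&(x) = (v))

static int has_work;	/* stands in for *workp / rcu_cpu_has_work */

static void *waker(void *arg)
{
	sleep(1);
	WRITE_ONCE(has_work, 1);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, waker, NULL);

	/*
	 * With a plain "while (!has_work);", the compiler may legally load
	 * has_work once and turn the loop into an infinite one; READ_ONCE()
	 * forces a fresh load on every iteration.
	 */
	while (!READ_ONCE(has_work))
		;

	pthread_join(t, NULL);
	return 0;
}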
* [PATCH AUTOSEL 6.4 4/4] tools/nolibc: ensure stack protector guard is never zero
From: Sasha Levin @ 2023-07-09 14:55 UTC
To: linux-kernel, stable
Cc: Thomas Weißschuh, Willy Tarreau, Paul E. McKenney,
Sasha Levin
From: Thomas Weißschuh <linux@weissschuh.net>
[ Upstream commit 88fc7eb54ecc6db8b773341ce39ad201066fa7da ]
The all-zero pattern is one of the more probable out-of-bounds writes,
so add a special case to avoid accidentally accepting it.
This also enables reliable detection of stack protector initialization
during testing.
Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
tools/include/nolibc/stackprotector.h | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/tools/include/nolibc/stackprotector.h b/tools/include/nolibc/stackprotector.h
index d119cbbbc256f..9890e86c26172 100644
--- a/tools/include/nolibc/stackprotector.h
+++ b/tools/include/nolibc/stackprotector.h
@@ -45,8 +45,9 @@ __attribute__((weak,no_stack_protector,section(".text.nolibc_stack_chk")))
void __stack_chk_init(void)
{
my_syscall3(__NR_getrandom, &__stack_chk_guard, sizeof(__stack_chk_guard), 0);
- /* a bit more randomness in case getrandom() fails */
- __stack_chk_guard ^= (uintptr_t) &__stack_chk_guard;
+ /* a bit more randomness in case getrandom() fails, ensure the guard is never 0 */
+ if (__stack_chk_guard != (uintptr_t) &__stack_chk_guard)
+ __stack_chk_guard ^= (uintptr_t) &__stack_chk_guard;
}
#endif // defined(NOLIBC_STACKPROTECTOR)
--
2.39.2
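
The conditional guarantees a non-zero guard in every case: if getrandom()
failed and left the guard zero, XORing in the variable's (non-zero) address
makes it non-zero; if getrandom() happened to return exactly that address,
the XOR -- which would produce zero -- is skipped and the guard keeps its
non-zero value; in all other cases guard != addr implies guard ^ addr != 0.
A user-space sketch of the same logic, assuming a hosted libc with
<sys/random.h> in place of nolibc's my_syscall3() wrapper:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/random.h>

static uintptr_t stack_chk_guard;

static void stack_chk_init_sketch(void)
{
	/* May fail (e.g. under seccomp), leaving the guard untouched/zero. */
	getrandom(&stack_chk_guard, sizeof(stack_chk_guard), 0);

	/*
	 * guard == 0:    0 ^ addr == addr, non-zero.
	 * guard == addr: XOR would yield 0, so skip; addr itself is non-zero.
	 * otherwise:     guard != addr, so guard ^ addr != 0.
	 */
	if (stack_chk_guard != (uintptr_t)&stack_chk_guard)
		stack_chk_guard ^= (uintptr_t)&stack_chk_guard;

	assert(stack_chk_guard != 0);
}

int main(void)
{
	stack_chk_init_sketch();
	printf("guard = %#lx\n", (unsigned long)stack_chk_guard);
	return 0;
}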