From: Laurent Dufour <ldufour@linux.ibm.com>
To: Nicholas Piggin <npiggin@gmail.com>, linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH v3 1/4] powerpc/watchdog: Fix missed watchdog reset due to memory ordering race
Date: Mon, 15 Nov 2021 16:09:59 +0100
Message-ID: <0b0cffcb-c99a-bad5-5620-9f3ad154b61e@linux.ibm.com>
In-Reply-To: <20211110025056.2084347-2-npiggin@gmail.com>
On 10/11/2021 at 03:50, Nicholas Piggin wrote:
> It is possible for all CPUs to miss the pending cpumask becoming clear,
> and then nobody resetting it, which will cause the lockup detector to
> stop working. It will eventually expire, but watchdog_smp_panic will
> avoid doing anything if the pending mask is clear and it will never be
> reset.
>
> Order the cpumask clear vs the subsequent test to close this race.
>
> Add an extra check for an empty pending mask when the watchdog fires and
> finds its bit still clear, to try to catch any other possible races or
> bugs here and keep the watchdog working. The extra test in
> arch_touch_nmi_watchdog is required to prevent the new warning from
> firing off.
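
To restate the race in a minimal form, here is a simplified user-space
analogue (my own sketch, not the kernel code: two "CPUs" with one pending
bit each, modelled with C11 atomics instead of a cpumask):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool pending[2] = { true, true };

    /* Runs on "CPU" me (0 or 1); other is the peer's index. */
    static bool clear_and_check(int me, int other)
    {
        /* The cpumask_clear_cpu() step: clear my own pending bit. */
        atomic_store_explicit(&pending[me], false, memory_order_relaxed);

        /*
         * Without this full barrier (smp_mb() in the patch), the load
         * below may be satisfied before the store above is visible to
         * the peer.  Both CPUs can then still see the other's bit set,
         * neither observes an empty mask, and nobody resets it.
         */
        atomic_thread_fence(memory_order_seq_cst);

        /* The cpumask_empty() step: is every bit now clear? */
        return !atomic_load_explicit(&pending[other], memory_order_relaxed);
    }

With the fence on both sides, at least one of the two CPUs is guaranteed
to see both bits clear and go on to reset the mask.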
>
> Debugged-by: Laurent Dufour <ldufour@linux.ibm.com>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/kernel/watchdog.c | 41 +++++++++++++++++++++++++++++++++-
> 1 file changed, 40 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
> index f9ea0e5357f9..3c60872b6a2c 100644
> --- a/arch/powerpc/kernel/watchdog.c
> +++ b/arch/powerpc/kernel/watchdog.c
> @@ -135,6 +135,10 @@ static void set_cpumask_stuck(const struct cpumask *cpumask, u64 tb)
> {
> cpumask_or(&wd_smp_cpus_stuck, &wd_smp_cpus_stuck, cpumask);
> cpumask_andnot(&wd_smp_cpus_pending, &wd_smp_cpus_pending, cpumask);
> + /*
> + * See wd_smp_clear_cpu_pending()
> + */
> + smp_mb();
> if (cpumask_empty(&wd_smp_cpus_pending)) {
> wd_smp_last_reset_tb = tb;
> cpumask_andnot(&wd_smp_cpus_pending,
> @@ -215,13 +219,44 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
>
> cpumask_clear_cpu(cpu, &wd_smp_cpus_stuck);
> wd_smp_unlock(&flags);
> + } else {
> + /*
> + * The last CPU to clear pending should have reset the
> + * watchdog so we generally should not find it empty
> + * here if our CPU was clear. However it could happen
> + * due to a rare race with another CPU taking the
> + * last CPU out of the mask concurrently.
> + *
> + * We can't add a warning for it. But just in case
> + * there is a problem with the watchdog that is causing
> + * the mask to not be reset, try to kick it along here.
> + */
> + if (unlikely(cpumask_empty(&wd_smp_cpus_pending)))
> + goto none_pending;
If I understand correctly, that branch is a safety net in case the code is not
working as expected. But I'm really wondering whether it is needed at all: we
may end up with contention on the watchdog lock on a path that should be
lockless, and I'd say that in most cases there is nothing to do after grabbing
that lock. Am I missing something risky here?
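
For anyone else following along, the path being discussed reduces to roughly
this double-check-under-lock shape (condensed from the hunk above; the body of
the locked reset is elided):

    cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
    smp_mb();        /* pairs with the barrier in set_cpumask_stuck() */
    if (cpumask_empty(&wd_smp_cpus_pending)) {
            unsigned long flags;
    none_pending:
            wd_smp_lock(&flags);
            /* Re-check: more than one CPU may have seen the mask go empty. */
            if (cpumask_empty(&wd_smp_cpus_pending)) {
                    /* reset wd_smp_last_reset_tb and wd_smp_cpus_pending */
            }
            wd_smp_unlock(&flags);
    }

so the lock is only reached when the lockless cpumask_empty() check already saw
an empty mask, or via the new goto when this CPU's bit was found already clear
and the mask is unexpectedly empty.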
> }
> return;
> }
> +
> cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
> +
> + /*
> + * Order the store to clear pending with the load(s) to check all
> + * words in the pending mask to check they are all empty. This orders
> + * with the same barrier on another CPU. This prevents two CPUs
> + * clearing the last 2 pending bits, but neither seeing the other's
> + * store when checking if the mask is empty, and missing an empty
> + * mask, which ends with a false positive.
> + */
> + smp_mb();
> if (cpumask_empty(&wd_smp_cpus_pending)) {
> unsigned long flags;
>
> +none_pending:
> + /*
> + * Double check under lock because more than one CPU could see
> + * a clear mask with the lockless check after clearing their
> + * pending bits.
> + */
> wd_smp_lock(&flags);
> if (cpumask_empty(&wd_smp_cpus_pending)) {
> wd_smp_last_reset_tb = tb;
> @@ -312,8 +347,12 @@ void arch_touch_nmi_watchdog(void)
> {
> unsigned long ticks = tb_ticks_per_usec * wd_timer_period_ms * 1000;
> int cpu = smp_processor_id();
> - u64 tb = get_tb();
> + u64 tb;
>
> + if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
> + return;
> +
> + tb = get_tb();
> if (tb - per_cpu(wd_timer_tb, cpu) >= ticks) {
> per_cpu(wd_timer_tb, cpu) = tb;
> wd_smp_clear_cpu_pending(cpu, tb);
>
Thread overview: 13+ messages
2021-11-10 2:50 [PATCH v3 0/4] powerpc: watchdog fixes Nicholas Piggin
2021-11-10 2:50 ` [PATCH v3 1/4] powerpc/watchdog: Fix missed watchdog reset due to memory ordering race Nicholas Piggin
2021-11-15 15:09 ` Laurent Dufour [this message]
2021-11-19 9:05 ` Nicholas Piggin
2021-11-19 9:25 ` Laurent Dufour
2021-11-10 2:50 ` [PATCH v3 2/4] powerpc/watchdog: tighten non-atomic read-modify-write access Nicholas Piggin
2021-11-10 2:50 ` [PATCH v3 3/4] powerpc/watchdog: Avoid holding wd_smp_lock over printk and smp_send_nmi_ipi Nicholas Piggin
2021-11-19 11:05 ` Nicholas Piggin
2021-11-10 2:50 ` [PATCH v3 4/4] powerpc/watchdog: read TB close to where it is used Nicholas Piggin
2021-11-25 9:36 ` [PATCH v3 0/4] powerpc: watchdog fixes Michael Ellerman
2021-11-25 15:11 ` Laurent Dufour
2021-11-25 15:26 ` Michal Suchánek
2021-11-25 17:20 ` Laurent Dufour