From: Peter Zijlstra <peterz@infradead.org>
To: Peter Xu <peterx@redhat.com>
Cc: linux-kernel@vger.kernel.org,
Marcelo Tosatti <mtosatti@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Nadav Amit <namit@vmware.com>,
Josh Poimboeuf <jpoimboe@redhat.com>,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Subject: Re: [PATCH] smp: Allow smp_call_function_single_async() to insert locked csd
Date: Mon, 16 Dec 2019 21:37:05 +0100 [thread overview]
Message-ID: <20191216203705.GV2844@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <20191211162925.GD48697@xz-x1>
On Wed, Dec 11, 2019 at 11:29:25AM -0500, Peter Xu wrote:
> This is also true.
>
> Here's the statistics I mentioned:
>
> =================================================
>
> (1) Implemented the same counter mechanism on the caller's:
>
> *** arch/mips/kernel/smp.c:
> tick_broadcast[713] smp_call_function_single_async(cpu, csd);
> *** drivers/cpuidle/coupled.c:
> cpuidle_coupled_poke[336] smp_call_function_single_async(cpu, csd);
> *** kernel/sched/core.c:
> hrtick_start[298] smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
>
> (2) Cleared the csd flags before calls:
>
> *** arch/s390/pci/pci_irq.c:
> zpci_handle_fallback_irq[185] smp_call_function_single_async(cpu, &cpu_data->csd);
> *** block/blk-mq.c:
> __blk_mq_complete_request[622] smp_call_function_single_async(ctx->cpu, &rq->csd);
> *** block/blk-softirq.c:
> raise_blk_irq[70] smp_call_function_single_async(cpu, data);
> *** drivers/net/ethernet/cavium/liquidio/lio_core.c:
> liquidio_napi_drv_callback[735] smp_call_function_single_async(droq->cpu_id, csd);
>
> (3) Others:
>
> *** arch/mips/kernel/process.c:
> raise_backtrace[713] smp_call_function_single_async(cpu, csd);
Per-CPU csd data; seems like perfectly fine usage.
> *** arch/x86/kernel/cpuid.c:
> cpuid_read[85] err = smp_call_function_single_async(cpu, &csd);
> *** arch/x86/lib/msr-smp.c:
> rdmsr_safe_on_cpu[182] err = smp_call_function_single_async(cpu, &csd);
These two have the csd on the stack and wait with a completion; seems fine.
> *** include/linux/smp.h:
> bool[60] int smp_call_function_single_async(int cpu, call_single_data_t *csd);
This is the declaration; your grep went funny here.
> *** kernel/debug/debug_core.c:
> kgdb_roundup_cpus[272] ret = smp_call_function_single_async(cpu, csd);
> *** net/core/dev.c:
> net_rps_send_ipi[5818] smp_call_function_single_async(remsd->cpu, &remsd->csd);
Both per-CPU again.
>
> =================================================
>
> For (1): These probably justify more on that we might want a patch
> like this to avoid reimplementing it everywhere.
I can't quite parse that, but if you're saying we should fix the
callers, then I agree.
> For (2): If I read it right, smp_call_function_single_async() is the
> only place where we take a call_single_data_t structure
> rather than the (smp_call_func_t, void *) tuple.
That's on purpose; by supplying csd we allow explicit concurrency. If
you do as proposed here:
> I could
> miss something important, but otherwise I think it would be
> good to use the tuple for smp_call_function_single_async() as
> well, then we move call_single_data_t out of global header
> but move into smp.c to avoid callers from touching it (which
> could be error-prone). In other words, IMHO it would be good
> to have all these callers fixed.
Then you could only ever have one of them in flight at the same time,
which would break things.
> For (3): I didn't dig, but I think some of them (or future users)
> could still suffer from the same issue on retriggering the
> WARN_ON...
They all seem fine.
So I'm thinking your patch is good, but please also fix all the callers in (1).