From: Peter Zijlstra
Date: Wed, 22 Mar 2023 09:53:29 +0000
Subject: Re: [PATCH v5 7/7] sched, smp: Trace smp callback causing an IPI
Message-Id: <20230322095329.GS2017917@hirez.programming.kicks-ass.net>
References: <20230307143558.294354-1-vschneid@redhat.com>
	<20230307143558.294354-8-vschneid@redhat.com>
In-Reply-To: <20230307143558.294354-8-vschneid@redhat.com>
To: Valentin Schneider
Cc: linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, loongarch@lists.linux.dev,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-xtensa@linux-xtensa.org, x86@kernel.org,
	"Paul E. McKenney", Steven Rostedt, Thomas Gleixner,
	Sebastian Andrzej Siewior, Juri Lelli, Daniel Bristot de Oliveira,
	Marcelo Tosatti, Frederic Weisbecker, Ingo Molnar, Borislav Petkov,
	Dave Hansen, "H. Peter Anvin", Marc Zyngier, Mark Rutland,
	Russell King, Nicholas Piggin, Guo Ren, "David S. Miller"

On Tue, Mar 07, 2023 at 02:35:58PM +0000, Valentin Schneider wrote:
> @@ -477,6 +490,25 @@ static __always_inline void csd_unlock(struct __call_single_data *csd)
>  	smp_store_release(&csd->node.u_flags, 0);
>  }
>
> +static __always_inline void
> +raw_smp_call_single_queue(int cpu, struct llist_node *node, smp_call_func_t func)
> +{
> +	/*
> +	 * The list addition should be visible to the target CPU when it pops
> +	 * the head of the list to pull the entry off it in the IPI handler
> +	 * because of normal cache coherency rules implied by the underlying
> +	 * llist ops.
> +	 *
> +	 * If IPIs can go out of order to the cache coherency protocol
> +	 * in an architecture, sufficient synchronisation should be added
> +	 * to arch code to make it appear to obey cache coherency WRT
> +	 * locking and barrier primitives. Generic code isn't really
> +	 * equipped to do the right thing...
> +	 */
> +	if (llist_add(node, &per_cpu(call_single_queue, cpu)))
> +		send_call_function_single_ipi(cpu, func);
> +}
> +
>  static DEFINE_PER_CPU_SHARED_ALIGNED(call_single_data_t, csd_data);
>
>  void __smp_call_single_queue(int cpu, struct llist_node *node)
> @@ -493,21 +525,25 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
>  		}
>  	}
>  #endif
>  	/*
> +	 * We have to check the type of the CSD before queueing it, because
> +	 * once queued it can have its flags cleared by
> +	 *   flush_smp_call_function_queue()
> +	 * even if we haven't sent the smp_call IPI yet (e.g. the stopper
> +	 * executes migration_cpu_stop() on the remote CPU).
>  	 */
> +	if (trace_ipi_send_cpumask_enabled()) {
> +		call_single_data_t *csd;
> +		smp_call_func_t func;
> +
> +		csd = container_of(node, call_single_data_t, node.llist);
> +		func = CSD_TYPE(csd) == CSD_TYPE_TTWU ?
> +			sched_ttwu_pending : csd->func;
> +
> +		raw_smp_call_single_queue(cpu, node, func);
> +	} else {
> +		raw_smp_call_single_queue(cpu, node, NULL);
> +	}
>  }

Hurmph... so we only really consume @func when we IPI. Would it not be
more useful to trace this thing for *every* csd enqueued?
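
Something like the below, perhaps -- a completely untested sketch, and
trace_csd_queue_cpu() is a made-up tracepoint name purely to illustrate
tracing at enqueue time rather than only when an IPI is raised:

static __always_inline void
raw_smp_call_single_queue(int cpu, struct llist_node *node, smp_call_func_t func)
{
	/*
	 * Trace every csd enqueue, not only the ones that end up raising
	 * an IPI; @func is then always consumed, and
	 * send_call_function_single_ipi() no longer needs to carry it.
	 */
	trace_csd_queue_cpu(cpu, _RET_IP_, func, node);

	if (llist_add(node, &per_cpu(call_single_queue, cpu)))
		send_call_function_single_ipi(cpu);
}

That keeps the container_of()/CSD_TYPE() dance in __smp_call_single_queue()
as the one place that resolves @func, while the tracepoint no longer depends
on whether the list happened to be empty.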