From: Daniel Bristot de Oliveira
Subject: Re: [RFC PATCH v2 8/8] sched, smp: Trace smp callback causing an IPI
Date: Fri, 18 Nov 2022 17:42:34 +0100
Message-ID: <1ab5082c-bec5-53f2-501b-f15f7e8edbd9@redhat.com>
References: <20221102182949.3119584-1-vschneid@redhat.com> <20221102183336.3120536-7-vschneid@redhat.com>
To: Peter Zijlstra, Valentin Schneider
Cc: Juri Lelli, Mark Rutland, linux-ia64@vger.kernel.org, linux-sh@vger.kernel.org, Sebastian Andrzej Siewior, Dave Hansen, linux-mips@vger.kernel.org, Guo Ren, "H. Peter Anvin", sparclinux@vger.kernel.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, Marc Zyngier, linux-hexagon@vger.kernel.org, x86@kernel.org, Russell King, linux-csky@vger.kernel.org, Ingo Molnar, linux-snps-arc@lists.infradead.org, linux-xtensa@linux-xtensa.org, "Paul E. McKenney", Frederic Weisbecker, Steven Rostedt, openrisc@lists.librecores.org, Borislav Petkov, Nicholas Piggin, loongarch@lists.linux.dev, T

On 11/18/22 10:12, Peter Zijlstra wrote:
> On Thu, Nov 17, 2022 at 02:45:29PM +0000, Valentin Schneider wrote:
>
>>> +	if (trace_ipi_send_cpumask_enabled()) {
>>> +		call_single_data_t *csd;
>>> +		smp_call_func_t func;
>>> +
>>> +		csd = container_of(node, call_single_data_t, node.llist);
>>> +
>>> +		func = sched_ttwu_pending;
>>> +		if (CSD_TYPE(csd) != CSD_TYPE_TTWU)
>>> +			func = csd->func;
>>> +
>>> +		if (raw_smp_call_single_queue(cpu, node))
>>> +			trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, func);
>>
>> So I went with the tracepoint being placed *before* the actual IPI gets
>> sent to have a somewhat sane ordering between trace_ipi_send_cpumask() and
>> e.g. trace_call_function_single_entry().
>>
>> Packaging the call_single_queue logic makes the code less horrible, but it
>> does mix up the event ordering...
>
> Keeps em sharp ;-)

Having the trace before the IPI avoids the (non-ideal) case where the
trace stops because of the IPI's execution before we have a trace of who
sent it... :-(

-- 
Daniel
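
For illustration, a minimal sketch of the ordering being argued for. The
helper name send_csd_ipi_traced() and its surrounding context are made up
here; the actual patch above additionally emits the event only when
raw_smp_call_single_queue() reports that an IPI is really needed.

static void send_csd_ipi_traced(int cpu, call_single_data_t *csd)
{
	/*
	 * Emit the send-side tracepoint first: destination cpumask,
	 * call site and callback are recorded before anything else
	 * can happen.
	 */
	trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, csd->func);

	/*
	 * Only then fire the IPI. The target CPU may run the callback
	 * (and e.g. trace_call_function_single_entry()) at any point
	 * after this, and tracing may even be stopped as a consequence
	 * of the IPI, but the "who sent it and why" event is already
	 * in the sender's trace buffer.
	 */
	arch_send_call_function_single_ipi(cpu);
}

With the reverse ordering (send first, trace after), the target's entry
event can show up before, or entirely without, the matching ipi_send
event, which is exactly the non-ideal case described above.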