From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 13 May 2026 11:41:02 -0400
From: Steven Rostedt
To: Dmitry Ilvokhin
Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng, Waiman Long,
 Thomas Bogendoerfer,
 Juergen Gross, Ajay Kaher, Alexey Makhalov,
 Broadcom internal kernel review list, Thomas Gleixner, Borislav Petkov,
 Dave Hansen, x86@kernel.org, "H. Peter Anvin", Arnd Bergmann,
 Dennis Zhou, Tejun Heo, Christoph Lameter, Masami Hiramatsu,
 Mathieu Desnoyers, linux-kernel@vger.kernel.org,
 linux-mips@vger.kernel.org, virtualization@lists.linux.dev,
 linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-trace-kernel@vger.kernel.org, kernel-team@meta.com,
 "Paul E. McKenney"
Subject: Re: [PATCH v6 5/7] locking: Add contended_release tracepoint to qspinlock
Message-ID: <20260513114102.50f4ca68@gandalf.local.home>
In-Reply-To: <5d7ea75ffe74a785e6b234ada9f23c6373d4b4c1.1777999826.git.d@ilvokhin.com>
References: <5d7ea75ffe74a785e6b234ada9f23c6373d4b4c1.1777999826.git.d@ilvokhin.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 5 May 2026 17:09:34 +0000
Dmitry Ilvokhin wrote:

> Use the arch-overridable queued_spin_release(), introduced in the
> previous commit, to ensure the tracepoint works correctly across all

Remove the ", introduced in the previous commit,". That's
useless in git change logs.

> architectures, including those with custom unlock implementations (e.g.
> x86 paravirt).
>
> When the tracepoint is disabled, the only addition to the hot path is a
> single NOP instruction (the static branch). When enabled, the contention
> check, trace call, and unlock are combined in an out-of-line function to
> minimize hot path impact, avoiding the compiler needing to preserve the
> lock pointer in a callee-saved register across the trace call.
>
> Binary size impact (x86_64, defconfig):
>   uninlined unlock (common case):   +680 bytes (+0.00%)
>   inlined unlock (worst case):    +83659 bytes (+0.21%)
>
> The inlined unlock case could not be achieved through Kconfig options on
> x86_64 as PREEMPT_BUILD unconditionally selects UNINLINE_SPIN_UNLOCK on
> x86_64. The UNINLINE_SPIN_UNLOCK guards were manually inverted to force
> inline the unlock path and estimate the worst case binary size increase.
>
> In practice, configurations with UNINLINE_SPIN_UNLOCK=n have already
> opted against binary size optimization, so the inlined worst case is
> unlikely to be a concern.
>
> Architectures with fully custom qspinlock implementations (e.g.
> PowerPC) are not covered by this change.
>
> Signed-off-by: Dmitry Ilvokhin
> Acked-by: Paul E.
McKenney

> ---
>  include/asm-generic/qspinlock.h | 18 ++++++++++++++++++
>  kernel/locking/qspinlock.c      |  8 ++++++++
>  2 files changed, 26 insertions(+)
>
> diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
> index df76f34645a0..915a4c2777f6 100644
> --- a/include/asm-generic/qspinlock.h
> +++ b/include/asm-generic/qspinlock.h
> @@ -41,6 +41,7 @@
>
>  #include
>  #include
> +#include
>
>  #ifndef queued_spin_is_locked
>  /**
> @@ -129,12 +130,29 @@ static __always_inline void queued_spin_release(struct qspinlock *lock)
>  }
>  #endif
>
> +DECLARE_TRACEPOINT(contended_release);
> +
> +extern void queued_spin_release_traced(struct qspinlock *lock);
> +
>  /**
>   * queued_spin_unlock - unlock a queued spinlock
>   * @lock : Pointer to queued spinlock structure
> + *
> + * Generic tracing wrapper around the arch-overridable
> + * queued_spin_release().
>   */
>  static __always_inline void queued_spin_unlock(struct qspinlock *lock)
>  {
> +	/*
> +	 * Trace and release are combined in queued_spin_release_traced() so
> +	 * the compiler does not need to preserve the lock pointer across the
> +	 * function call, avoiding callee-saved register save/restore on the
> +	 * hot path.
> +	 */
> +	if (tracepoint_enabled(contended_release)) {
> +		queued_spin_release_traced(lock);
> +		return;

Get rid of the "return;". What does it save you? It just makes it that
you need to duplicate the code. Even though it's a one liner, it can
cause bugs in the future if this changes.
You could call the function: do_trace_queued_spin_release_traced(lock);

> +	}
> +	queued_spin_release(lock);
>  }
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index af8d122bb649..649fdca69288 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -104,6 +104,14 @@ static __always_inline u32 __pv_wait_head_or_lock(struct qspinlock *lock,
>  #define queued_spin_lock_slowpath	native_queued_spin_lock_slowpath
>  #endif
>
> +void __lockfunc queued_spin_release_traced(struct qspinlock *lock)
> +{
> +	if (queued_spin_is_contended(lock))
> +		trace_call__contended_release(lock);
> +	queued_spin_release(lock);

And then remove the duplicate call of "queued_spin_release()" here.

-- Steve

> +}
> +EXPORT_SYMBOL(queued_spin_release_traced);
> +
>  #endif /* _GEN_PV_LOCK_SLOWPATH */
>
>  /**