Date: Thu, 3 Feb 2022 09:51:46 +0000
From: Mark Rutland
To: Frederic Weisbecker
Cc: linux-arm-kernel@lists.infradead.org, ardb@kernel.org, catalin.marinas@arm.com, juri.lelli@redhat.com, linux-kernel@vger.kernel.org, mingo@redhat.com, peterz@infradead.org, will@kernel.org
Subject: Re: [PATCH 5/6] sched/preempt: add PREEMPT_DYNAMIC using static keys
References: <20211109172408.49641-1-mark.rutland@arm.com> <20211109172408.49641-6-mark.rutland@arm.com> <20220202232145.GA461279@lothringen>
In-Reply-To: <20220202232145.GA461279@lothringen>

On Thu, Feb 03, 2022 at 12:21:45AM +0100, Frederic Weisbecker wrote:
> On Tue, Nov 09, 2021 at 05:24:07PM +0000, Mark Rutland wrote:
> > diff --git a/include/linux/kernel.h b/include/linux/kernel.h
> > index e5359b09de1d..8a94ccfc7dc8 100644
> > --- a/include/linux/kernel.h
> > +++ b/include/linux/kernel.h
> > @@ -93,7 +93,7 @@ struct user;
> >  extern int __cond_resched(void);
> >  # define might_resched() __cond_resched()
> >  
> > -#elif defined(CONFIG_PREEMPT_DYNAMIC)
> > +#elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
> >  
> >  extern int __cond_resched(void);
> >  
> > @@ -104,6 +104,11 @@ static __always_inline void might_resched(void)
> >  	static_call_mod(might_resched)();
> >  }
> >  
> > +#elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
> > +
> > +extern int dynamic_might_resched(void);
> > +# define might_resched() dynamic_might_resched()
> > +
> >  #else
> >  
> >  # define might_resched() do { } while (0)
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index 78c351e35fec..7710b6593c72 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -2008,7 +2008,7 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
> >  #if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
> >  extern int __cond_resched(void);
> >  
> > -#ifdef CONFIG_PREEMPT_DYNAMIC
> > +#if defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
> >  
> >  DECLARE_STATIC_CALL(cond_resched, __cond_resched);
> >  
> > @@ -2017,6 +2017,14 @@ static __always_inline int _cond_resched(void)
> >  	return static_call_mod(cond_resched)();
> >  }
> >  
> > +#elif defined(CONFIG_PREEMPT_DYNAMIC) && defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
> > +extern int dynamic_cond_resched(void);
> > +
> > +static __always_inline int _cond_resched(void)
> > +{
> > +	return dynamic_cond_resched();
> 
> So in the end this is creating an indirect call for every preemption
> entrypoint.

Huh? "indirect call" usually means a branch to a function pointer, and I
don't think that's what you mean here. Do you just mean that we add a
(direct) call+return?

This gets inlined, and will be just a direct call to
dynamic_cond_resched(). e.g. on arm64 this will be a single instruction:

	bl	dynamic_cond_resched

... and (as the commit message describes) then the implementation of
dynamic_cond_resched will be the same as the regular __cond_resched
*but* the static key trampoline is inlined at the start, e.g.

| <dynamic_cond_resched>:
|	bti	c
|	b	<...>
|	mov	w0, #0x0			// #0
|	ret
|	mrs	x0, sp_el0
|	ldr	x0, [x0, #8]
|	cbnz	x0, <...>
|	paciasp
|	stp	x29, x30, [sp, #-16]!
|	mov	x29, sp
|	bl	<...>
|	mov	w0, #0x1			// #1
|	ldp	x29, x30, [sp], #16
|	autiasp
|	ret

... compared to the regular form of the function:

| <__cond_resched>:
|	bti	c
|	mrs	x0, sp_el0
|	ldr	x1, [x0, #8]
|	cbz	x1, <__cond_resched+0x18>
|	mov	w0, #0x0			// #0
|	ret
|	paciasp
|	stp	x29, x30, [sp, #-16]!
|	mov	x29, sp
|	bl	<...>
|	mov	w0, #0x1			// #1
|	ldp	x29, x30, [sp], #16
|	autiasp
|	ret

> It seems to me that this loses the whole point of using static keys.

As above, I don't think that's the case.

Relative to static calls using trampolines (which is all arm64 can
implement), the gain is that we inline the trampoline into the *callee*.
That saves on I-cache footprint, the compiler can generate the early
returns more optimally (and compatibly with a CFI scheme we wish to
use), and we don't have to maintain a separate patching mechanism.

If you think that static call trampolines lose the whole point of static
keys then we've lost to begin with, since that's all we can reasonably
implement.

> Is there something that prevents from using inlines or macros?

Inlining of *what*?

Thanks,
Mark.