Date: Mon, 22 Apr 2024 10:24:46 +0100
From: Mark Rutland
To: George Guo
Cc: peterz@infradead.org, jpoimboe@kernel.org, jbaron@akamai.com,
	rostedt@goodmis.org, ardb@kernel.org, catalin.marinas@arm.com,
	will@kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, George Guo
Subject: Re: [PATCH 1/1] arm64: optimize code duplication in arch_static_branch/_jump function
In-Reply-To: <20240422063853.3568733-1-dongtai.guo@linux.dev>
References: <20240422063853.3568733-1-dongtai.guo@linux.dev>

On Mon, Apr 22, 2024 at 02:38:53PM +0800, George Guo wrote:
> From: George Guo
>
> Extracted the jump table definition code from the arch_static_branch and
> arch_static_branch_jump functions into a macro JUMP_TABLE_ENTRY to reduce
> code duplication and improve readability.

The commit title says this is an optimization, but the commit message says
this is a cleanup (and this clearly is not an optimization).

This seems to be copying what x86 did in commit:

  e1aa35c4c4bc71e4 ("jump_label, x86: Factor out the __jump_table generation")

... where the commit message is much clearer.
>
> Signed-off-by: George Guo
> ---
>  arch/arm64/include/asm/jump_label.h | 19 +++++++++----------
>  1 file changed, 9 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/include/asm/jump_label.h b/arch/arm64/include/asm/jump_label.h
> index 6aafbb789991..69407b70821e 100644
> --- a/arch/arm64/include/asm/jump_label.h
> +++ b/arch/arm64/include/asm/jump_label.h
> @@ -15,16 +15,19 @@
>
>  #define JUMP_LABEL_NOP_SIZE		AARCH64_INSN_SIZE
>
> +#define JUMP_TABLE_ENTRY				\
> +	".pushsection	__jump_table, \"aw\"	\n\t"	\
> +	".align		3			\n\t"	\
> +	".long		1b - ., %l[l_yes] - .	\n\t"	\
> +	".quad		%c0 - .			\n\t"	\
> +	".popsection				\n\t"
> +
>  static __always_inline bool arch_static_branch(struct static_key * const key,
>  					       const bool branch)
>  {
>  	asm goto(
>  		"1:	nop					\n\t"
> -		"	.pushsection	__jump_table, \"aw\"	\n\t"
> -		"	.align		3			\n\t"
> -		"	.long		1b - ., %l[l_yes] - .	\n\t"
> -		"	.quad		%c0 - .			\n\t"
> -		"	.popsection				\n\t"
> +		JUMP_TABLE_ENTRY
>  		 :  :  "i"(&((char *)key)[branch]) :  : l_yes);

If we really need to factor this out, I'd prefer that the JUMP_TABLE_ENTRY()
macro took the label and key as arguments, similar to what we do for
_ASM_EXTABLE_*().

Mark.

>
>  	return false;
> @@ -37,11 +40,7 @@ static __always_inline bool arch_static_branch_jump(struct static_key * const ke
>  {
>  	asm goto(
>  		"1:	b		%l[l_yes]		\n\t"
> -		"	.pushsection	__jump_table, \"aw\"	\n\t"
> -		"	.align		3			\n\t"
> -		"	.long		1b - ., %l[l_yes] - .	\n\t"
> -		"	.quad		%c0 - .			\n\t"
> -		"	.popsection				\n\t"
> +		JUMP_TABLE_ENTRY
>  		 :  :  "i"(&((char *)key)[branch]) :  : l_yes);
>
>  	return false;
> --
> 2.34.1