From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 20 Oct 2025 17:48:43 +0100
Message-ID: <86jz0pwmc4.wl-maz@kernel.org>
From: Marc Zyngier
To: Ada Couprie Diaz
Cc: linux-arm-kernel@lists.infradead.org, Catalin Marinas, Will Deacon,
 Oliver Upton, Ard Biesheuvel, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
 Vincenzo Frascino, linux-kernel@vger.kernel.org, kvmarm@lists.linux.dev,
 kasan-dev@googlegroups.com, Mark Rutland
Subject: Re: [RFC PATCH 06/16] arm64/insn: always inline aarch64_insn_gen_movewide()
In-Reply-To: <20250923174903.76283-7-ada.coupriediaz@arm.com>
References: <20250923174903.76283-1-ada.coupriediaz@arm.com>
 <20250923174903.76283-7-ada.coupriediaz@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 23 Sep 2025 18:48:53 +0100,
Ada Couprie Diaz wrote:
> 
> As it is always called with an explicit movewide type, we can
> check for its validity at compile time and remove the runtime error print.
> 
> The other error prints cannot be verified at compile time, but should not
> occur in practice and will still lead to a fault BRK, so remove them.
> 
> This makes `aarch64_insn_gen_movewide()` safe for inlining
> and usage from patching callbacks, as both
> `aarch64_insn_encode_register()` and `aarch64_insn_encode_immediate()`
> have been made safe in previous commits.
> 
> Signed-off-by: Ada Couprie Diaz
> ---
>  arch/arm64/include/asm/insn.h | 58 ++++++++++++++++++++++++++++++++---
>  arch/arm64/lib/insn.c         | 56 ---------------------------------
>  2 files changed, 54 insertions(+), 60 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
> index 5f5f6a125b4e..5a25e311717f 100644
> --- a/arch/arm64/include/asm/insn.h
> +++ b/arch/arm64/include/asm/insn.h
> @@ -624,6 +624,8 @@ static __always_inline bool aarch64_get_imm_shift_mask(
>  #define ADR_IMM_LOSHIFT		29
>  #define ADR_IMM_HISHIFT		5
>  
> +#define AARCH64_INSN_SF_BIT	BIT(31)
> +
>  enum aarch64_insn_encoding_class aarch64_get_insn_class(u32 insn);
>  u64 aarch64_insn_decode_immediate(enum aarch64_insn_imm_type type, u32 insn);
>  
> @@ -796,10 +798,58 @@ u32 aarch64_insn_gen_bitfield(enum aarch64_insn_register dst,
>  			      int immr, int imms,
>  			      enum aarch64_insn_variant variant,
>  			      enum aarch64_insn_bitfield_type type);
> -u32 aarch64_insn_gen_movewide(enum aarch64_insn_register dst,
> -			      int imm, int shift,
> -			      enum aarch64_insn_variant variant,
> -			      enum aarch64_insn_movewide_type type);
> +
> +static __always_inline u32 aarch64_insn_gen_movewide(
> +	enum aarch64_insn_register dst,
> +	int imm, int shift,
> +	enum aarch64_insn_variant variant,
> +	enum aarch64_insn_movewide_type type)

nit: I personally find this definition style pretty unreadable, and would
rather see the "static __always_inline" stuff put on a line of its own:

static __always_inline
u32 aarch64_insn_gen_movewide(enum aarch64_insn_register dst,
			      int imm, int shift,
			      enum aarch64_insn_variant variant,
			      enum aarch64_insn_movewide_type type)

But again, that's a personal preference, nothing else.

> +{
> +	compiletime_assert(type >= AARCH64_INSN_MOVEWIDE_ZERO &&
> +			   type <= AARCH64_INSN_MOVEWIDE_INVERSE, "unknown movewide encoding");
> +	u32 insn;
> +
> +	switch (type) {
> +	case AARCH64_INSN_MOVEWIDE_ZERO:
> +		insn = aarch64_insn_get_movz_value();
> +		break;
> +	case AARCH64_INSN_MOVEWIDE_KEEP:
> +		insn = aarch64_insn_get_movk_value();
> +		break;
> +	case AARCH64_INSN_MOVEWIDE_INVERSE:
> +		insn = aarch64_insn_get_movn_value();
> +		break;
> +	default:
> +		return AARCH64_BREAK_FAULT;

Similar request to one of the previous patches: since you can check the
validity at compile time, place it in the default: case, and drop the
return statement.

> +	}
> +
> +	if (imm & ~(SZ_64K - 1)) {
> +		return AARCH64_BREAK_FAULT;
> +	}
> +
> +	switch (variant) {
> +	case AARCH64_INSN_VARIANT_32BIT:
> +		if (shift != 0 && shift != 16) {
> +			return AARCH64_BREAK_FAULT;
> +		}
> +		break;
> +	case AARCH64_INSN_VARIANT_64BIT:
> +		insn |= AARCH64_INSN_SF_BIT;
> +		if (shift != 0 && shift != 16 && shift != 32 && shift != 48) {
> +			return AARCH64_BREAK_FAULT;
> +		}
> +		break;
> +	default:
> +		return AARCH64_BREAK_FAULT;

You could also check the variant.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
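
[Editorial sketch, not part of the thread: a rough illustration of the shape
the review appears to ask for, with the compile-time checks moved into the
default: labels of both switches so the dead runtime returns disappear.
BUILD_BUG() is assumed here as the compile-time trap; the patch's
compiletime_assert() would serve the same purpose. The tail of the function
is omitted because the quoted hunk stops there.]

static __always_inline
u32 aarch64_insn_gen_movewide(enum aarch64_insn_register dst,
			      int imm, int shift,
			      enum aarch64_insn_variant variant,
			      enum aarch64_insn_movewide_type type)
{
	u32 insn;

	switch (type) {
	case AARCH64_INSN_MOVEWIDE_ZERO:
		insn = aarch64_insn_get_movz_value();
		break;
	case AARCH64_INSN_MOVEWIDE_KEEP:
		insn = aarch64_insn_get_movk_value();
		break;
	case AARCH64_INSN_MOVEWIDE_INVERSE:
		insn = aarch64_insn_get_movn_value();
		break;
	default:
		/*
		 * 'type' is always a compile-time constant at the call
		 * sites, so an unknown encoding fails the build instead
		 * of being handled at runtime.
		 */
		BUILD_BUG();
	}

	if (imm & ~(SZ_64K - 1))
		return AARCH64_BREAK_FAULT;

	switch (variant) {
	case AARCH64_INSN_VARIANT_32BIT:
		if (shift != 0 && shift != 16)
			return AARCH64_BREAK_FAULT;
		break;
	case AARCH64_INSN_VARIANT_64BIT:
		insn |= AARCH64_INSN_SF_BIT;
		if (shift != 0 && shift != 16 && shift != 32 && shift != 48)
			return AARCH64_BREAK_FAULT;
		break;
	default:
		/* Likewise, the variant is a compile-time constant. */
		BUILD_BUG();
	}

	/* ... register and immediate encoding as in the patch ... */
}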