From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <1e5036b9-9e3f-e68d-ef09-6fa693a9c42c@huawei.com>
Date: Tue, 20 Aug 2024 10:43:08 +0800
Subject: Re: [PATCH v12 2/6] arm64: add support for ARCH_HAS_COPY_MC
From: Tong Tiangen <tongtiangen@huawei.com>
To: Jonathan Cameron
CC: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton, James Morse,
 Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
 Michael Ellerman, Nicholas Piggin, Andrey Ryabinin, Alexander Potapenko,
 Christophe Leroy, Aneesh Kumar K.V, "Naveen N. Rao", Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin", Guohanjun
References: <20240528085915.1955987-1-tongtiangen@huawei.com>
 <20240528085915.1955987-3-tongtiangen@huawei.com>
 <20240819113032.000042af@Huawei.com>
In-Reply-To: <20240819113032.000042af@Huawei.com>
List-Id: linux-arm-kernel.lists.infradead.org
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 8bit

On 2024/8/19 18:30, Jonathan Cameron wrote:
> On Tue, 28 May 2024 16:59:11 +0800
> Tong Tiangen <tongtiangen@huawei.com> wrote:
>
>> For the arm64 kernel, when it processes hardware memory errors for
>> synchronous notifications (do_sea()), if the error is consumed within
>> the kernel, the current processing is to panic. However, this is not
>> optimal.
>>
>> Take copy_from/to_user for example: if an ld* instruction triggers a
>> memory error, even in kernel mode, only the associated process is
>> affected. Killing the user process and isolating the corrupt page is a
>> better choice.
>>
>> A new fixup type, EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, is added to
>> identify insns that can recover from memory errors triggered by access
>> to kernel memory.
>>
>> Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
>
> Hi - this is going slowly :(
>
> A few comments inline in the meantime, but this really needs the Arm
> maintainers to take a (hopefully final) look.
>
> Jonathan
>
>
>> diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
>> index 980d1dd8e1a3..9c0664fe1eb1 100644
>> --- a/arch/arm64/include/asm/asm-extable.h
>> +++ b/arch/arm64/include/asm/asm-extable.h
>> @@ -5,11 +5,13 @@
>>  #include
>>  #include
>>
>> -#define EX_TYPE_NONE			0
>> -#define EX_TYPE_BPF			1
>> -#define EX_TYPE_UACCESS_ERR_ZERO	2
>> -#define EX_TYPE_KACCESS_ERR_ZERO	3
>> -#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
>> +#define EX_TYPE_NONE				0
>> +#define EX_TYPE_BPF				1
>> +#define EX_TYPE_UACCESS_ERR_ZERO		2
>> +#define EX_TYPE_KACCESS_ERR_ZERO		3
>> +#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD		4
>> +/* kernel access memory error safe */
>> +#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE	5
>
> Does anyone care enough about the alignment to bother realigning for one
> long line? I'd be tempted not to bother, but up to maintainers.
>
>
>> diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
>> index 802231772608..2ac716c0d6d8 100644
>> --- a/arch/arm64/lib/copy_to_user.S
>> +++ b/arch/arm64/lib/copy_to_user.S
>> @@ -20,7 +20,7 @@
>>   * x0 - bytes not copied
>>   */
>>  	.macro ldrb1 reg, ptr, val
>> -	ldrb \reg, [\ptr], \val
>> +	KERNEL_ME_SAFE(9998f, ldrb \reg, [\ptr], \val)
>>  	.endm
>>
>>  	.macro strb1 reg, ptr, val
>> @@ -28,7 +28,7 @@
>>  	.endm
>>
>>  	.macro ldrh1 reg, ptr, val
>> -	ldrh \reg, [\ptr], \val
>> +	KERNEL_ME_SAFE(9998f, ldrh \reg, [\ptr], \val)
>>  	.endm
>>
>>  	.macro strh1 reg, ptr, val
>> @@ -36,7 +36,7 @@
>>  	.endm
>>
>>  	.macro ldr1 reg, ptr, val
>> -	ldr \reg, [\ptr], \val
>> +	KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
>>  	.endm
>>
>>  	.macro str1 reg, ptr, val
>> @@ -44,7 +44,7 @@
>>  	.endm
>>
>>  	.macro ldp1 reg1, reg2, ptr, val
>> -	ldp \reg1, \reg2, [\ptr], \val
>> +	KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
>>  	.endm
>>
>>  	.macro stp1 reg1, reg2, ptr, val
>> @@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
>>  9997:	cmp	dst, dstin
>>  	b.ne	9998f
>>  	// Before being absolutely sure we couldn't copy anything, try harder
>> -	ldrb	tmp1w, [srcin]
>> +KERNEL_ME_SAFE(9998f, ldrb tmp1w, [srcin])
>
> Alignment looks off?

Hi, Jonathan:

How about we change this in conjunction with Mark's suggestion? :)

>
>> USER(9998f, sttrb tmp1w, [dst])
>>  	add	dst, dst, #1
>> 9998:	sub	x0, end, dst		// bytes not copied
>
>
>
>> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
>> index 451ba7cbd5ad..2dc65f99d389 100644
>> --- a/arch/arm64/mm/fault.c
>> +++ b/arch/arm64/mm/fault.c
>> @@ -708,21 +708,32 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
>>  	return 1; /* "fault" */
>>  }
>>
>> +/*
>> + * APEI claimed this as a firmware-first notification.
>> + * Some processing deferred to task_work before ret_to_user().
>> + */
>> +static bool do_apei_claim_sea(struct pt_regs *regs)
>> +{
>> +	if (user_mode(regs)) {
>> +		if (!apei_claim_sea(regs))
>
> I'd keep to the (apei_claim_sea(regs) == 0)
> used in the original code. That hints to the reader that we are
> interested here in an 'error' code rather than apei_claim_sea() returning
> a bool. I initially wondered why we return true when the code
> fails to claim it.
>
> Also, perhaps if you return 0 for success and an error code if not,
> you could just make this
>
> 	if (user_mode(regs))
> 		return apei_claim_sea(regs);
>
> 	if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
> 		if (fixup_exception_me(regs))
> 			return apei_claim_sea(regs);
> 	}
>
> 	return false;
>
> or maybe even (I may have messed this up, but I think this logic
> works):
>
> 	if (!user_mode(regs) && IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
> 		if (!fixup_exception_me(regs))
> 			return false;
> 	}
> 	return apei_claim_sea(regs);
>
>
>> +			return true;
>> +	} else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
>> +		if (fixup_exception_me(regs) && !apei_claim_sea(regs))
>
> Same here with using apei_claim_sea(regs) == 0, so it's obvious we
> are checking for an error, not a boolean.
>
>> +			return true;
>> +	}
>> +
>> +	return false;
>> +}
>> +
>>  static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
>>  {
>>  	const struct fault_info *inf;
>>  	unsigned long siaddr;
>>
>> -	inf = esr_to_fault_info(esr);
>> -
>> -	if (user_mode(regs) && apei_claim_sea(regs) == 0) {
>> -		/*
>> -		 * APEI claimed this as a firmware-first notification.
>> -		 * Some processing deferred to task_work before ret_to_user().
>> -		 */
>> +	if (do_apei_claim_sea(regs))
>
> It might have made sense to factor this out first, so it could be
> reviewed as a no-op before the new stuff is added. Still, it's not much
> code, so it doesn't really matter.
> Might be worth keeping to returning 0 for success, error code
> otherwise, as per apei_claim_sea(regs).
>
> The bool-returning functions in the nearby code tend to be is_xxxx,
> not things that succeed or not.
>
> If you change it to return int, make this
> 	if (do_apei_claim_sea(regs) == 0)
> so it's obvious this is the no-error case.
>

My fault, treating the return value of apei_claim_sea() as a bool has
caused some confusion. Perhaps using "== 0" can reduce that confusion.
Here's the change:

static int do_apei_claim_sea(struct pt_regs *regs)
{
	if (!user_mode(regs) && IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
		if (!fixup_exception_me(regs))
			return -ENOENT;
	}

	return apei_claim_sea(regs);
}

static int do_sea(...)
{
	[...]
	if (do_apei_claim_sea(regs) == 0)
		return 0;
	[...]
}

I'll modify it later along with Mark's comments.

Thanks,
Tong.

>>  		return 0;
>> -	}
>>
>> +	inf = esr_to_fault_info(esr);
>>  	if (esr & ESR_ELx_FnV) {
>>  		siaddr = 0;
>>  	} else {
>
> .