Date: Mon, 19 Aug 2024 12:56:01 +0100
From: Jonathan Cameron
To: Tong Tiangen
CC: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton, James Morse,
 Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
 Michael Ellerman, Nicholas Piggin, Andrey Ryabinin, Alexander Potapenko,
 Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin, Guohanjun
Subject: Re: [PATCH v12 4/6] arm64: support copy_mc_[user]_highpage()
Message-ID: <20240819125601.0000687b@Huawei.com>
In-Reply-To: <20240528085915.1955987-5-tongtiangen@huawei.com>
References: <20240528085915.1955987-1-tongtiangen@huawei.com>
 <20240528085915.1955987-5-tongtiangen@huawei.com>
Organization: Huawei Technologies Research and Development (UK) Ltd.
On Tue, 28 May 2024 16:59:13 +0800
Tong Tiangen wrote:

> Currently, many scenarios that can tolerate memory errors when copying page
> have been supported in the kernel[1~5], all of which are implemented by
> copy_mc_[user]_highpage(). arm64 should also support this mechanism.
>
> Due to mte, arm64 needs to have its own copy_mc_[user]_highpage()
> architecture implementation, macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
> __HAVE_ARCH_COPY_MC_USER_HIGHPAGE have been added to control it.
>
> Add new helper copy_mc_page() which provide a page copy implementation with
> hardware memory error safe. The code logic of copy_mc_page() is the same as
> copy_page(), the main difference is that the ldp insn of copy_mc_page()
> contains the fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, therefore, the
> main logic is extracted to copy_page_template.S.
>
> [1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
> [2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
> [3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
> [4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
> [5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
>
> Signed-off-by: Tong Tiangen

Trivial stuff inline.

Jonathan

> diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
> index 5018ac03b6bf..50ef24318281 100644
> --- a/arch/arm64/lib/mte.S
> +++ b/arch/arm64/lib/mte.S
> @@ -80,6 +80,35 @@ SYM_FUNC_START(mte_copy_page_tags)
>  	ret
>  SYM_FUNC_END(mte_copy_page_tags)
>
> +#ifdef CONFIG_ARCH_HAS_COPY_MC
> +/*
> + * Copy the tags from the source page to the destination one wiht machine check safe

Spell check: "wiht" -> "with".

Also, maybe reword given "machine check" doesn't make sense on arm64.

> + * x0 - address of the destination page
> + * x1 - address of the source page
> + * Returns:
> + *   x0 - Return 0 if copy success, or
> + *        -EFAULT if anything goes wrong while copying.
> + */
> +SYM_FUNC_START(mte_copy_mc_page_tags)
> +	mov	x2, x0
> +	mov	x3, x1
> +	multitag_transfer_size x5, x6
> +1:
> +KERNEL_ME_SAFE(2f, ldgm	x4, [x3])
> +	stgm	x4, [x2]
> +	add	x2, x2, x5
> +	add	x3, x3, x5
> +	tst	x2, #(PAGE_SIZE - 1)
> +	b.ne	1b
> +
> +	mov	x0, #0
> +	ret
> +
> +2:	mov	x0, #-EFAULT
> +	ret
> +SYM_FUNC_END(mte_copy_mc_page_tags)
> +#endif
> +
>  /*
>   * Read tags from a user buffer (one tag per byte) and set the corresponding
>   * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
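As an aside for anyone reading along: roughly how the two helpers above
plausibly slot together on the C side. This is only a sketch, not a hunk
from this patch; copy_mc_page() and mte_copy_mc_page_tags() come from the
series, whereas the glue and the MTE/tagged-page handling below are my
assumptions about the shape of the arch copy_mc_highpage().

#ifdef CONFIG_ARCH_HAS_COPY_MC
/* Sketch: fault-tolerant page copy, data first, then MTE tags if present. */
int copy_mc_highpage(struct page *to, struct page *from)
{
	void *kto = page_address(to);
	void *kfrom = page_address(from);
	int ret;

	/* ldp faults on poisoned memory are fixed up and reported as -EFAULT. */
	ret = copy_mc_page(kto, kfrom);
	if (ret)
		return ret;

	/* Copy the tags as well, also with fault fixup (assumed checks). */
	if (system_supports_mte() && page_mte_tagged(from)) {
		ret = mte_copy_mc_page_tags(kto, kfrom);
		if (ret)
			return ret;
		set_page_mte_tagged(to);
	}

	return 0;
}
#endif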
> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index a7bb20055ce0..ff0d9ceea2a4 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
> @@ -40,3 +40,48 @@ void copy_user_highpage(struct page *to, struct page *from,

> +
> +int copy_mc_user_highpage(struct page *to, struct page *from,
> +			  unsigned long vaddr, struct vm_area_struct *vma)
> +{
> +	int ret;
> +
> +	ret = copy_mc_highpage(to, from);
> +	if (!ret)
> +		flush_dcache_page(to);

Personally I'd always keep the error out of line as it tends to be more
readable when reviewing a lot of code.

	if (ret)
		return ret;

	flush_dcache_page(to);

	return 0;

> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
> +#endif
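Just to make that suggestion concrete, the whole function would then read
something like the below, with identical behaviour to the quoted hunk,
only the structure changed:

int copy_mc_user_highpage(struct page *to, struct page *from,
			  unsigned long vaddr, struct vm_area_struct *vma)
{
	int ret;

	ret = copy_mc_highpage(to, from);
	if (ret)
		return ret;	/* error path stays out of line */

	flush_dcache_page(to);

	return 0;
}
EXPORT_SYMBOL_GPL(copy_mc_user_highpage);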