Date: Mon, 19 Aug 2024 10:57:50 +0100
From: Jonathan Cameron
To: Tong Tiangen
CC: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton, James Morse,
 Robin Murphy, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino,
 Michael Ellerman, Nicholas Piggin, Andrey Ryabinin, Alexander Potapenko,
 Christophe Leroy, Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin, Guohanjun
Subject: Re: [PATCH v12 1/6] uaccess: add generic fallback version of copy_mc_to_user()
Message-ID: <20240819105750.00001269@Huawei.com>
In-Reply-To: <20240528085915.1955987-2-tongtiangen@huawei.com>
References: <20240528085915.1955987-1-tongtiangen@huawei.com>
 <20240528085915.1955987-2-tongtiangen@huawei.com>
Organization: Huawei Technologies Research and Development (UK) Ltd.
On Tue, 28 May 2024 16:59:10 +0800
Tong Tiangen wrote:

> x86 and powerpc each have their own implementation of copy_mc_to_user().
> Add a generic fallback in include/linux/uaccess.h to prepare for other
> architectures to enable CONFIG_ARCH_HAS_COPY_MC.
>
> Signed-off-by: Tong Tiangen
> Acked-by: Michael Ellerman

Seems like a sensible approach to me, given the existing fallbacks in x86
when the relevant features are disabled.

It may be worth exploring at some point whether some of the special casing
in the callers of this function can also be removed now that there is a
default version. There are some small differences between the callers, but
I've not analyzed whether they matter or not.

Reviewed-by: Jonathan Cameron

> ---
>  arch/powerpc/include/asm/uaccess.h | 1 +
>  arch/x86/include/asm/uaccess.h     | 1 +
>  include/linux/uaccess.h            | 8 ++++++++
>  3 files changed, 10 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
> index de10437fd206..df42e6ad647f 100644
> --- a/arch/powerpc/include/asm/uaccess.h
> +++ b/arch/powerpc/include/asm/uaccess.h
> @@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
>  
>  	return n;
>  }
> +#define copy_mc_to_user copy_mc_to_user
>  #endif
>  
>  extern long __copy_from_user_flushcache(void *dst, const void __user *src,
> diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
> index 0f9bab92a43d..309f2439327e 100644
> --- a/arch/x86/include/asm/uaccess.h
> +++ b/arch/x86/include/asm/uaccess.h
> @@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
>  
>  unsigned long __must_check
>  copy_mc_to_user(void __user *to, const void *from, unsigned len);
> +#define copy_mc_to_user copy_mc_to_user
>  #endif
>  
>  /*
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index 3064314f4832..0dfa9241b6ee 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -205,6 +205,14 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
>  }
>  #endif
>  
> +#ifndef copy_mc_to_user
> +static inline unsigned long __must_check
> +copy_mc_to_user(void *dst, const void *src, size_t cnt)
> +{
> +	return copy_to_user(dst, src, cnt);
> +}
> +#endif
> +
>  static __always_inline void pagefault_disabled_inc(void)
>  {
>  	current->pagefault_disabled++;
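
For anyone not familiar with the override idiom used here: the
"#define copy_mc_to_user copy_mc_to_user" self-definition in the arch
headers is what lets the #ifndef in the generic header skip the fallback
when an arch provides its own version (asm/uaccess.h is pulled in before
the generic definitions). Below is a minimal standalone sketch of just
that selection mechanism - plain userspace C, not kernel code, with
memcpy() standing in for both the MC-safe copy and copy_to_user():

#include <stdio.h>
#include <string.h>

/*
 * "Arch header": provides an implementation and self-defines the name.
 * Comment out these lines and the generic fallback below is used instead.
 */
static unsigned long arch_copy_mc_to_user(void *to, const void *from,
					  unsigned long n)
{
	memcpy(to, from, n);	/* stand-in for the MC-safe copy */
	return 0;		/* convention: bytes left uncopied */
}
#define copy_mc_to_user arch_copy_mc_to_user

/*
 * "Generic header": only defines the fallback if no arch version was
 * seen, mirroring the include/linux/uaccess.h hunk above.
 */
#ifndef copy_mc_to_user
static unsigned long copy_mc_to_user(void *to, const void *from,
				     unsigned long n)
{
	memcpy(to, from, n);	/* stand-in for plain copy_to_user() */
	return 0;
}
#endif

int main(void)
{
	char src[] = "data", dst[8];

	/* Resolves to the arch version here; fallback otherwise. */
	printf("left uncopied: %lu\n",
	       copy_mc_to_user(dst, src, sizeof(src)));
	return 0;
}

The nice property of the idiom is that callers are written once against a
single name and the preprocessor does the dispatch at build time, with no
weak symbols or Kconfig-conditional call sites needed.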