From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yury Norov
To: Andrew Morton, Thomas Gleixner, "Peter Zijlstra (Intel)",
	Mathieu Desnoyers, Alice Ryhl, Viktor Malik, Randy Dunlap,
	David Laight, linux-kernel@vger.kernel.org
Cc: Yury Norov, "Christophe Leroy (CS GROUP)", Yury Norov
Subject: [PATCH v2 2/3] uaccess: unify inline vs outline copy_{from,to}_user() selection
Date: Fri, 24 Apr 2026 22:08:56 -0400
Message-ID: <20260425020857.356850-3-ynorov@nvidia.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260425020857.356850-1-ynorov@nvidia.com>
References: <20260425020857.356850-1-ynorov@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
The kernel allows arches to select between inline and out-of-line
implementations of copy_{from,to}_user() by defining the individual
INLINE_COPY_FROM_USER and INLINE_COPY_TO_USER macros, respectively.
However, all arches always enable or disable the two together. With no
real use case for inlining one helper while keeping the other out of
line, the independent controls are excessive and error-prone.

Switch the codebase to a single unified INLINE_COPY_USER control.

Tested-by: Alice Ryhl
Signed-off-by: Yury Norov
---
 arch/arc/include/asm/uaccess.h        |  3 +--
 arch/arm/include/asm/uaccess.h        |  3 +--
 arch/arm64/include/asm/uaccess.h      |  3 +--
 arch/hexagon/include/asm/uaccess.h    |  3 +--
 arch/loongarch/include/asm/uaccess.h  |  3 +--
 arch/m68k/include/asm/uaccess.h       |  3 +--
 arch/microblaze/include/asm/uaccess.h |  3 +--
 arch/mips/include/asm/uaccess.h       |  3 +--
 arch/nios2/include/asm/uaccess.h      |  3 +--
 arch/openrisc/include/asm/uaccess.h   |  3 +--
 arch/parisc/include/asm/uaccess.h     |  3 +--
 arch/s390/include/asm/uaccess.h       |  3 +--
 arch/sh/include/asm/uaccess.h         |  3 +--
 arch/sparc/include/asm/uaccess_32.h   |  3 +--
 arch/sparc/include/asm/uaccess_64.h   |  3 +--
 arch/um/include/asm/uaccess.h         |  3 +--
 arch/xtensa/include/asm/uaccess.h     |  3 +--
 include/asm-generic/uaccess.h         |  3 +--
 include/linux/uaccess.h               | 12 ++++++------
 lib/usercopy.c                        |  4 +---
 rust/helpers/uaccess.c                |  4 +---
 21 files changed, 26 insertions(+), 48 deletions(-)

diff --git a/arch/arc/include/asm/uaccess.h b/arch/arc/include/asm/uaccess.h
index 1e8809ea000a..6df2209541ac 100644
--- a/arch/arc/include/asm/uaccess.h
+++ b/arch/arc/include/asm/uaccess.h
@@ -628,8 +628,7 @@ static inline unsigned long __clear_user(void __user *to, unsigned long n)
 	return res;
 }
 
-#define INLINE_COPY_TO_USER
-#define INLINE_COPY_FROM_USER
+#define INLINE_COPY_USER
 
 #define __clear_user __clear_user
 
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index d6ae80b5df36..1593cf3b9800 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -616,8 +616,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 }
 #define __clear_user(addr, n)	(memset((void __force *)addr, 0, n), 0)
 #endif
-#define INLINE_COPY_TO_USER
-#define INLINE_COPY_FROM_USER
+#define INLINE_COPY_USER
 
 static inline unsigned long __must_check
 clear_user(void __user *to, unsigned long n)
 {
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index b0c83a08dda9..9f5bd9c69c24 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -456,8 +456,7 @@ do {									\
 	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u8, label);	\
 } while (0)
 
-#define INLINE_COPY_TO_USER
-#define INLINE_COPY_FROM_USER
+#define INLINE_COPY_USER
 
 extern unsigned long __must_check __arch_clear_user(void __user *to, unsigned long n);
 static inline unsigned long __must_check __clear_user(void __user *to, unsigned long n)
diff --git a/arch/hexagon/include/asm/uaccess.h b/arch/hexagon/include/asm/uaccess.h
index bff77efc0d9a..1aecf60ec4f5 100644
--- a/arch/hexagon/include/asm/uaccess.h
+++ b/arch/hexagon/include/asm/uaccess.h
@@ -26,8 +26,7 @@
 unsigned long raw_copy_from_user(void *to, const void __user *from,
				 unsigned long n);
 unsigned long raw_copy_to_user(void __user *to, const void *from,
			       unsigned long n);
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 __kernel_size_t __clear_user_hexagon(void __user *dest, unsigned long count);
 #define __clear_user(a, s) __clear_user_hexagon((a), (s))
diff --git a/arch/loongarch/include/asm/uaccess.h b/arch/loongarch/include/asm/uaccess.h
index 438269313e78..428f373feabf 100644
--- a/arch/loongarch/include/asm/uaccess.h
+++ b/arch/loongarch/include/asm/uaccess.h
@@ -292,8 +292,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	return __copy_user((__force void *)to, from, n);
 }
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 /*
  * __clear_user: - Zero a block of memory in user space, with less checking.
diff --git a/arch/m68k/include/asm/uaccess.h b/arch/m68k/include/asm/uaccess.h
index 64914872a5c9..31d133faa45e 100644
--- a/arch/m68k/include/asm/uaccess.h
+++ b/arch/m68k/include/asm/uaccess.h
@@ -377,8 +377,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 		return __constant_copy_to_user(to, from, n);
 	return __generic_copy_to_user(to, from, n);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 #define __get_kernel_nofault(dst, src, type, err_label)			\
 do {									\
diff --git a/arch/microblaze/include/asm/uaccess.h b/arch/microblaze/include/asm/uaccess.h
index 3aab2f17e046..afa0dd8d013f 100644
--- a/arch/microblaze/include/asm/uaccess.h
+++ b/arch/microblaze/include/asm/uaccess.h
@@ -250,8 +250,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 /*
  * Copy a null terminated string from userspace.
diff --git a/arch/mips/include/asm/uaccess.h b/arch/mips/include/asm/uaccess.h
index c0cede273c7c..f00c36676b73 100644
--- a/arch/mips/include/asm/uaccess.h
+++ b/arch/mips/include/asm/uaccess.h
@@ -433,8 +433,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	return __cu_len_r;
 }
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 extern __kernel_size_t __bzero(void __user *addr, __kernel_size_t size);
 
diff --git a/arch/nios2/include/asm/uaccess.h b/arch/nios2/include/asm/uaccess.h
index 6ccc9a232c23..5e6e05cc6efc 100644
--- a/arch/nios2/include/asm/uaccess.h
+++ b/arch/nios2/include/asm/uaccess.h
@@ -57,8 +57,7 @@ extern unsigned long raw_copy_from_user(void *to, const void __user *from,
					unsigned long n);
 extern unsigned long raw_copy_to_user(void __user *to, const void *from,
				      unsigned long n);
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 extern long strncpy_from_user(char *__to, const char __user *__from, long __len);
 
diff --git a/arch/openrisc/include/asm/uaccess.h b/arch/openrisc/include/asm/uaccess.h
index d6500a374e18..db934ebc0069 100644
--- a/arch/openrisc/include/asm/uaccess.h
+++ b/arch/openrisc/include/asm/uaccess.h
@@ -218,8 +218,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long size)
 {
 	return __copy_tofrom_user((__force void *)to, from, size);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 extern unsigned long __clear_user(void __user *addr, unsigned long size);
 
diff --git a/arch/parisc/include/asm/uaccess.h b/arch/parisc/include/asm/uaccess.h
index 6c531d2c847e..0d17f81c8b27 100644
--- a/arch/parisc/include/asm/uaccess.h
+++ b/arch/parisc/include/asm/uaccess.h
@@ -197,7 +197,6 @@ unsigned long __must_check raw_copy_to_user(void __user *dst, const void *src,
					unsigned long len);
 unsigned long __must_check raw_copy_from_user(void *dst, const void __user *src,
					unsigned long len);
-#define INLINE_COPY_TO_USER
-#define INLINE_COPY_FROM_USER
+#define INLINE_COPY_USER
 
 #endif /* __PARISC_UACCESS_H */
diff --git a/arch/s390/include/asm/uaccess.h b/arch/s390/include/asm/uaccess.h
index dff035372601..a9f32c53f699 100644
--- a/arch/s390/include/asm/uaccess.h
+++ b/arch/s390/include/asm/uaccess.h
@@ -30,8 +30,7 @@ void debug_user_asce(int exit);
 #define uaccess_kmsan_or_inline __always_inline
 #endif
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 static uaccess_kmsan_or_inline __must_check unsigned long
 raw_copy_from_user(void *to, const void __user *from, unsigned long size)
diff --git a/arch/sh/include/asm/uaccess.h b/arch/sh/include/asm/uaccess.h
index a79609eb14be..02e7a066538e 100644
--- a/arch/sh/include/asm/uaccess.h
+++ b/arch/sh/include/asm/uaccess.h
@@ -95,8 +95,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	return __copy_user((__force void *)to, from, n);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 /*
  * Clear the area and return remaining number of bytes
diff --git a/arch/sparc/include/asm/uaccess_32.h b/arch/sparc/include/asm/uaccess_32.h
index 43284b6ec46a..5542d5b32994 100644
--- a/arch/sparc/include/asm/uaccess_32.h
+++ b/arch/sparc/include/asm/uaccess_32.h
@@ -190,8 +190,7 @@ static inline unsigned long raw_copy_from_user(void *to, const void __user *from
 	return __copy_user((__force void __user *) to, from, n);
 }
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 static inline unsigned long __clear_user(void __user *addr, unsigned long size)
 {
diff --git a/arch/sparc/include/asm/uaccess_64.h b/arch/sparc/include/asm/uaccess_64.h
index b825a5dd0210..e2989cfba626 100644
--- a/arch/sparc/include/asm/uaccess_64.h
+++ b/arch/sparc/include/asm/uaccess_64.h
@@ -231,8 +231,7 @@ unsigned long __must_check raw_copy_from_user(void *to,
 unsigned long __must_check raw_copy_to_user(void __user *to,
					    const void *from, unsigned long size);
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 unsigned long __must_check raw_copy_in_user(void __user *to,
					    const void __user *from,
diff --git a/arch/um/include/asm/uaccess.h b/arch/um/include/asm/uaccess.h
index 0df9ea4abda8..4417c8b1d37a 100644
--- a/arch/um/include/asm/uaccess.h
+++ b/arch/um/include/asm/uaccess.h
@@ -27,8 +27,7 @@ static inline int __access_ok(const void __user *ptr, unsigned long size);
 #define __access_ok __access_ok
 #define __clear_user __clear_user
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 #include
diff --git a/arch/xtensa/include/asm/uaccess.h b/arch/xtensa/include/asm/uaccess.h
index 56aec6d504fe..6538a29a2bbd 100644
--- a/arch/xtensa/include/asm/uaccess.h
+++ b/arch/xtensa/include/asm/uaccess.h
@@ -237,8 +237,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	prefetch(from);
 	return __xtensa_copy_user((__force void *)to, from, n);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 /*
  * We need to return the number of bytes not cleared. Our memset()
diff --git a/include/asm-generic/uaccess.h b/include/asm-generic/uaccess.h
index b276f783494c..4569045e7139 100644
--- a/include/asm-generic/uaccess.h
+++ b/include/asm-generic/uaccess.h
@@ -91,8 +91,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	memcpy((void __force *)to, from, n);
 	return 0;
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 #endif /* CONFIG_UACCESS_MEMCPY */
 
 /*
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 56328601218c..6100f1046546 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -84,7 +84,7 @@
  * the 6 functions (copy_{to,from}_user(), __copy_{to,from}_user_inatomic())
  * that are used instead.  Out of those, __... ones are inlined.  Plain
  * copy_{to,from}_user() might or might not be inlined.  If you want them
- * inlined, have asm/uaccess.h define INLINE_COPY_{TO,FROM}_USER.
+ * inlined, have asm/uaccess.h define INLINE_COPY_USER.
  *
  * NOTE: only copy_from_user() zero-pads the destination in case of short copy.
  * Neither __copy_from_user() nor __copy_from_user_inatomic() zero anything
@@ -157,7 +157,7 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
 }
 
 /*
- * Architectures that #define INLINE_COPY_TO_USER use this function
+ * Architectures that #define INLINE_COPY_USER use this function
  * directly in the normal copy_to/from_user(), the other ones go
  * through an extern _copy_to/from_user(), which expands the same code
  * here.
@@ -190,7 +190,7 @@ _inline_copy_from_user(void *to, const void __user *from, unsigned long n)
 		memset(to + (n - res), 0, res);
 	return res;
 }
-#ifndef INLINE_COPY_FROM_USER
+#ifndef INLINE_COPY_USER
 extern __must_check unsigned long
 _copy_from_user(void *, const void __user *, unsigned long);
 #endif
@@ -207,7 +207,7 @@ _inline_copy_to_user(void __user *to, const void *from, unsigned long n)
 	}
 	return n;
 }
-#ifndef INLINE_COPY_TO_USER
+#ifndef INLINE_COPY_USER
 extern __must_check unsigned long
 _copy_to_user(void __user *, const void *, unsigned long);
 #endif
@@ -217,7 +217,7 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	if (!check_copy_size(to, n, false))
 		return n;
-#ifdef INLINE_COPY_FROM_USER
+#ifdef INLINE_COPY_USER
 	return _inline_copy_from_user(to, from, n);
 #else
 	return _copy_from_user(to, from, n);
@@ -230,7 +230,7 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	if (!check_copy_size(from, n, true))
 		return n;
-#ifdef INLINE_COPY_TO_USER
+#ifdef INLINE_COPY_USER
 	return _inline_copy_to_user(to, from, n);
 #else
 	return _copy_to_user(to, from, n);
diff --git a/lib/usercopy.c b/lib/usercopy.c
index b00a3a957de6..e2f0bf104a59 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -12,15 +12,13 @@
 
 /* out-of-line parts */
 
-#if !defined(INLINE_COPY_FROM_USER)
+#if !defined(INLINE_COPY_USER)
 unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	return _inline_copy_from_user(to, from, n);
 }
 EXPORT_SYMBOL(_copy_from_user);
-#endif
 
-#if !defined(INLINE_COPY_TO_USER)
 unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	return _inline_copy_to_user(to, from, n);
diff --git a/rust/helpers/uaccess.c b/rust/helpers/uaccess.c
index aff22f16ab38..6e59cc9c665c 100644
--- a/rust/helpers/uaccess.c
+++ b/rust/helpers/uaccess.c
@@ -14,15 +14,13 @@ rust_helper_copy_to_user(void __user *to, const void *from, unsigned long n)
 	return copy_to_user(to, from, n);
 }
 
-#ifdef INLINE_COPY_FROM_USER
+#ifdef INLINE_COPY_USER
 __rust_helper unsigned long
 rust_helper__copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	return _inline_copy_from_user(to, from, n);
 }
-#endif
 
-#ifdef INLINE_COPY_TO_USER
 __rust_helper unsigned long
 rust_helper__copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-- 
2.51.0