Date: Mon, 27 Apr 2026 22:29:14 +0100
From: David Laight
To: Linus Torvalds
Cc: "Christophe Leroy (CS GROUP)", Yury Norov, Andrew Morton,
	Thomas Gleixner, linux-alpha@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-snps-arc@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	sparclinux@vger.kernel.org, linux-um@lists.infradead.org,
	dmaengine@vger.kernel.org, linux-efi@vger.kernel.org,
	linux-fsi@lists.ozlabs.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	linux-wpan@vger.kernel.org, netdev@vger.kernel.org,
	linux-wireless@vger.kernel.org, linux-spi@vger.kernel.org,
	linux-media@vger.kernel.org, linux-staging@lists.linux.dev,
	linux-serial@vger.kernel.org, linux-usb@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-fsdevel@vger.kernel.org,
	ocfs2-devel@lists.linux.dev, bpf@vger.kernel.org,
	kasan-dev@googlegroups.com, linux-mm@kvack.org,
	linux-x25@vger.kernel.org, rust-for-linux@vger.kernel.org,
	linux-sound@vger.kernel.org, sound-open-firmware@alsa-project.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org,
	linux-sh@vger.kernel.org, linux-arch@vger.kernel.org
Subject: Re: [RFC PATCH v1 5/9] uaccess: Switch to
 copy_{to/from}_user_partial() when relevant
Message-ID: <20260427222914.1cb2dd3b@pumpkin>
In-Reply-To:
References: <289b424e243ba2c4139ea04009cf8b9c448a87ff.1777306795.git.chleroy@kernel.org>

On Mon, 27 Apr 2026 12:01:23 -0700 Linus Torvalds wrote:

> On Mon, 27 Apr 2026 at 10:18, Christophe Leroy (CS GROUP) wrote:
> >
> > In a subsequent patch, copy_{to/from}_user() will be modified to
> > return -EFAULT when copy fails.
>
> Please don't do this.
>
> This is a maintenance nightmare, and changes pretty much three decades
> of semantics, and will cause *very* subtle backporting issues if
> somebody happens to rely on the old / new behavior.
>
> I understand the reasoning for the change, but I really don't think
> the pain of creating yet another user copy interface is worth it.
>
> We already have a lot of different versions of user copies for
> different reasons, and while they all tend to have a good reason (and
> some not-so-good, but historical reasons) for existing, this one
> doesn't seem worth it.
>
> The main - perhaps only - reason for this "partial" version is that
> you want to do that "automatically inlined and optimized fixed-sized
> case".
>
> But here's the thing: I think you can already do that.
> Yes, it
> requires some improvements to unsafe_copy_from_user(), but *that*
> interface doesn't have three decades of history associated with it,
> _and_ you're extending on that one anyway in this series.
>
> "unsafe_copy_from_user()" is very odd, is meant only for small simple
> copies that can be inlined and it's special-cased for 'objtool' anyway
> (because objtool would have complained about an out-of-line call,
> although it could have been special-cased other ways).
>
> In other words: unsafe_copy_from_user() is *very* close to what you
> want for that "Oh, I noticed that it's a small fixed-size copy, so I
> want to special-case copy-from-user for that".
>
> The _only_ issue with unsafe_copy_from_user() is that you can't see
> that there were partial successes. But if *that* was fixed, then this
> whole "create a new copy_from_user interface" issue would just go
> away.
>
> So please - let's just change unsafe_copy_from_user() to be usable for
> the partial case.
>
> And the thing is, all the existing unsafe_copy_from_user()
> implementations already effectively *have* the "how much did I not
> copy" internally, and they actually do extra work to hide it, ie they
> have things like that
>
>         int _i;
>
> that is "how many bytes have I copied" in the powerpc implementation,
> or the x86 code does
>
>         size_t __ucu_len = (_len);
>
> where that "ucu_len" is updated as you go along and is literally the
> "how many bytes are left to copy" return value that is missing from
> this interface.
>
> So what I would suggest is
>
>  - introduce a new user accessor helper that is used for *both*
>    unsafe_copy_to/from_user() *and* the "inline small constant-sized
>    normal copy_to/from_user()" calls
>
>  - it's the same thing as the existing unsafe_copy_to/from_user()
>    implementation, except it exposes how many bytes are left to be
>    copied to the exception label.
I think there is a slight difference in that the normal copy_to_user()
will determine the exact offset of the error by retrying with byte
copies.

There is also the issue of misaligned copies.

Then there is the 'bugbear' of hardened user copies.
Chasing down the stack to find whether the kernel buffer crosses a
stack frame is probably more expensive than the copy for the typically
small copies that will use on-stack buffers.

	David

>
> IOW, it would look something like
>
>         #define unsafe_copy_to_user_outlen(_dst,_src,_len,label)...
>
> which is exactly the same as the current unsafe_copy_to_user(),
> *except* it changes "_len" as it goes along.
>
> And then you use that for both the "real" unsafe_copy_user and for the
> "small constant values" case.
>
> Just as an example, attached is a completely stupid rough draft of a
> patch that does this for x86 and only for unsafe_copy_to_user().
>
> And I made a very very hacky change to kernel/sys.c to see what the
> code generation looks like.
>
> This is what it results in on x86 with clang (with all the magic
> .section data edited out):
>
>         ... edited out the code to generate the times
>         ... this is the actual user copy:
> # HERE!
>         movabsq $81985529216486895, %rcx        # imm = 0x123456789ABCDEF
>         cmpq    %rcx, %rbx
>         cmovaq  %rcx, %rbx
>         stac
>         movq    %r13, (%rbx)                    # exception to .LBB45_8
>         movq    %r14, 8(%rbx)                   # exception to .LBB45_8
>         movq    %r15, 16(%rbx)                  # exception to .LBB45_8
>         movq    %rax, 24(%rbx)                  # exception to .LBB45_8
>         clac
> .LBB45_6:
>         movq    jiffies(%rip), %rdi
>         callq   jiffies_64_to_clock_t
> .LBB45_7:
>         addq    $16, %rsp
>         popq    %rbx
>         popq    %r12
>         popq    %r13
>         popq    %r14
>         popq    %r15
>         retq
> .LBB45_8:
>         clac
>         movq    $-14, %rax
>         jmp     .LBB45_7
>
> and notice how the compiler noticed that the 'outlen' isn't actually
> used, and turned the exception label into just a "return -EFAULT" and
> never actually generated any code for updating remaining lengths?
>
> That actually looks pretty much optimal for a 32-byte user copy.
>
> And it didn't involve changing the semantics at all.
>
> Just to check, I changed that "times()" system call to return the
> number of bytes uncopied instead (to emulate the "I actually want to
> know what's left" case), and it generated this:
>
> # HERE!
>         movabsq $81985529216486895, %rcx        # imm = 0x123456789ABCDEF
>         cmpq    %rcx, %rbx
>         cmovaq  %rcx, %rbx
>         stac
>         movl    $32, %ecx
>         movq    %r13, (%rbx)                    # exception to .LBB45_7
>         movl    $24, %ecx
>         movq    %r15, 8(%rbx)                   # exception to .LBB45_7
>         movl    $16, %ecx
>         movq    %r14, 16(%rbx)                  # exception to .LBB45_7
>         movl    $8, %ecx
>         movq    %rax, 24(%rbx)                  # exception to .LBB45_7
>         clac
>         xorl    %ecx, %ecx
> .LBB45_8:
>         movq    %rcx, %rax
>         addq    $16, %rsp
>         popq    %rbx
>         popq    %r12
>         popq    %r13
>         popq    %r14
>         popq    %r15
>         retq
> .LBB45_6:
>         movq    jiffies(%rip), %rdi
>         jmp     jiffies_64_to_clock_t           # TAILCALL
> .LBB45_7:
>         clac
>         jmp     .LBB45_8
>
> so it all seems to work - although obviously the above is *not* the
> normal case.
>
> NOTE NOTE NOTE! The attached patch is entirely untested. I obviously
> did some "test code generation" with it, but I only *looked* at the
> result, and maybe it has some fundamental problem that I just didn't
> notice. So treat this as a "how about this approach" patch, not as
> anything more serious than that.
>
> And the kernel/sys.c hack is very obviously just that: a complete
> hack for testing.
>
> A real patch would do that "for small constant-sized copies, turn
> copy_to_user() automatically into _small_copy_to_user()".
>
> The attached is *not* a real patch. Treat it with the contempt it
> deserves.
>
>                Linus

_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv