* [PATCH v2 0/2] Bypass usercopy hardening for kernel-only iterators
@ 2026-03-30 14:36 Chuck Lever
2026-03-30 14:36 ` [PATCH v2 1/2] iov: Bypass usercopy hardening for copy_to_iter() Chuck Lever
2026-03-30 14:36 ` [PATCH v2 2/2] iov: Bypass usercopy hardening for copy_from_iter() Chuck Lever
0 siblings, 2 replies; 4+ messages in thread
From: Chuck Lever @ 2026-03-30 14:36 UTC (permalink / raw)
To: Al Viro, Kees Cook, Gustavo A. R. Silva
Cc: linux-hardening, linux-block, linux-fsdevel, netdev, Chuck Lever
Profiling NFSD under an iozone workload showed that hardened
usercopy checks consume roughly 1.3% of CPU in the TCP receive
path. The runtime check in check_object_size() validates that
copy buffers reside in expected kernel memory regions (slab,
stack, and non-text), which is meaningful when data crosses the
user/kernel boundary but adds no value when both source and
destination are kernel addresses.
The fix splits check_copy_size() into two variants: the
existing full check, and a new __compiletime_check_copy_size()
that retains the compile-time object size assertion and the
runtime overflow check but omits check_object_size(). A
user_backed_iter() test at each call site selects between
them, so user-backed iterators continue to receive the full
validation.
Patch 1 applies this to copy_to_iter(). Patch 2 applies the
same change to copy_from_iter(). copy_from_iter_nocache() is
left unchanged because all current callers pass user-space
addresses; the bypass there is deferred until that changes.
---
Changes since v1:
- Updated the commit messages for clarity and completeness
- Renamed the check_copy_size() function (Kees)
- Added an __compiletime_check_copy_size() stub to
tools/virtio/linux/ucopysize.h
- Added a second patch to convert copy_from_iter()
---
Chuck Lever (2):
iov: Bypass usercopy hardening for copy_to_iter()
iov: Bypass usercopy hardening for copy_from_iter()
include/linux/ucopysize.h | 16 +++++++++++++++-
include/linux/uio.h | 18 ++++++++++++++----
tools/virtio/linux/ucopysize.h | 6 ++++++
3 files changed, 35 insertions(+), 5 deletions(-)
---
base-commit: 7aaa8047eafd0bd628065b15757d9b48c5f9c07d
change-id: 20260326-bypass-user-copy-3e73161cc90b
Best regards,
--
Chuck Lever
* [PATCH v2 1/2] iov: Bypass usercopy hardening for copy_to_iter()
2026-03-30 14:36 [PATCH v2 0/2] Bypass usercopy hardening for kernel-only iterators Chuck Lever
@ 2026-03-30 14:36 ` Chuck Lever
2026-03-30 21:11 ` David Laight
2026-03-30 14:36 ` [PATCH v2 2/2] iov: Bypass usercopy hardening for copy_from_iter() Chuck Lever
1 sibling, 1 reply; 4+ messages in thread
From: Chuck Lever @ 2026-03-30 14:36 UTC (permalink / raw)
To: Al Viro, Kees Cook, Gustavo A. R. Silva
Cc: linux-hardening, linux-block, linux-fsdevel, netdev, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
Profiling NFSD under an iozone workload showed that hardened
usercopy checks consume roughly 1.3% of CPU in the TCP receive
path. The runtime check in check_object_size() validates that
copy buffers reside in expected kernel memory regions (slab,
stack, and non-text), which is meaningful when data crosses
the user/kernel boundary but adds no value when both source
and destination are kernel addresses.
Split check_copy_size() so that copy_to_iter() can bypass
the runtime check_object_size() call for non-user-backed
iterators (ITER_KVEC, ITER_BVEC, ITER_FOLIOQ, ITER_XARRAY,
and ITER_DISCARD). Existing callers of check_copy_size() are
unaffected; user-backed iterators still receive the full
usercopy validation.
This benefits all kernel consumers of copy_to_iter(),
including the TCP receive path used by the NFS client and
server, NVMe-TCP, and any other subsystem that uses
non-user-backed receive buffers.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
include/linux/ucopysize.h | 16 +++++++++++++++-
include/linux/uio.h | 9 +++++++--
tools/virtio/linux/ucopysize.h | 6 ++++++
3 files changed, 28 insertions(+), 3 deletions(-)
diff --git a/include/linux/ucopysize.h b/include/linux/ucopysize.h
index 41c2d9720466..d187108f845a 100644
--- a/include/linux/ucopysize.h
+++ b/include/linux/ucopysize.h
@@ -41,8 +41,14 @@ static inline void copy_overflow(int size, unsigned long count)
__copy_overflow(size, count);
}
+/*
+ * Copy size validation without usercopy hardening. Checks
+ * compile-time object size and runtime overflow, but skips
+ * check_object_size(). Use check_copy_size() when @addr
+ * may point to userspace-accessible memory.
+ */
static __always_inline __must_check bool
-check_copy_size(const void *addr, size_t bytes, bool is_source)
+__compiletime_check_copy_size(const void *addr, size_t bytes, bool is_source)
{
int sz = __builtin_object_size(addr, 0);
if (unlikely(sz >= 0 && sz < bytes)) {
@@ -56,6 +62,14 @@ check_copy_size(const void *addr, size_t bytes, bool is_source)
}
if (WARN_ON_ONCE(bytes > INT_MAX))
return false;
+ return true;
+}
+
+static __always_inline __must_check bool
+check_copy_size(const void *addr, size_t bytes, bool is_source)
+{
+ if (!__compiletime_check_copy_size(addr, bytes, is_source))
+ return false;
check_object_size(addr, bytes, is_source);
return true;
}
diff --git a/include/linux/uio.h b/include/linux/uio.h
index a9bc5b3067e3..45b323e4be97 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -216,8 +216,13 @@ size_t copy_page_to_iter_nofault(struct page *page, unsigned offset,
static __always_inline __must_check
size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
{
- if (check_copy_size(addr, bytes, true))
- return _copy_to_iter(addr, bytes, i);
+ if (user_backed_iter(i)) {
+ if (check_copy_size(addr, bytes, true))
+ return _copy_to_iter(addr, bytes, i);
+ } else {
+ if (__compiletime_check_copy_size(addr, bytes, true))
+ return _copy_to_iter(addr, bytes, i);
+ }
return 0;
}
diff --git a/tools/virtio/linux/ucopysize.h b/tools/virtio/linux/ucopysize.h
index 8beb7755d060..a330e14c81c5 100644
--- a/tools/virtio/linux/ucopysize.h
+++ b/tools/virtio/linux/ucopysize.h
@@ -12,6 +12,12 @@ static inline void copy_overflow(int size, unsigned long count)
{
}
+static __always_inline __must_check bool
+__compiletime_check_copy_size(const void *addr, size_t bytes, bool is_source)
+{
+ return true;
+}
+
static __always_inline __must_check bool
check_copy_size(const void *addr, size_t bytes, bool is_source)
{
--
2.53.0
* [PATCH v2 2/2] iov: Bypass usercopy hardening for copy_from_iter()
2026-03-30 14:36 [PATCH v2 0/2] Bypass usercopy hardening for kernel-only iterators Chuck Lever
2026-03-30 14:36 ` [PATCH v2 1/2] iov: Bypass usercopy hardening for copy_to_iter() Chuck Lever
@ 2026-03-30 14:36 ` Chuck Lever
1 sibling, 0 replies; 4+ messages in thread
From: Chuck Lever @ 2026-03-30 14:36 UTC (permalink / raw)
To: Al Viro, Kees Cook, Gustavo A. R. Silva
Cc: linux-hardening, linux-block, linux-fsdevel, netdev, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
The previous patch bypassed runtime usercopy validation in
copy_to_iter() for kernel-only iterators. The same overhead
exists in the copy_from_iter() path: check_object_size()
validates the destination buffer's slab residency on every
call, even when the iterator source is entirely kernel-backed
and the user-copy protection is redundant.
Apply the same bypass so that copy_from_iter() calls
__compiletime_check_copy_size() instead of the full
check_copy_size() when the iterator is not user-backed.
All current callers of copy_from_iter_nocache() pass user-space
addresses, so the same change is deferred for that wrapper.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
include/linux/uio.h | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/include/linux/uio.h b/include/linux/uio.h
index 45b323e4be97..5a6ad2dd5627 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -229,8 +229,13 @@ size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
static __always_inline __must_check
size_t copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
{
- if (check_copy_size(addr, bytes, false))
- return _copy_from_iter(addr, bytes, i);
+ if (user_backed_iter(i)) {
+ if (check_copy_size(addr, bytes, false))
+ return _copy_from_iter(addr, bytes, i);
+ } else {
+ if (__compiletime_check_copy_size(addr, bytes, false))
+ return _copy_from_iter(addr, bytes, i);
+ }
return 0;
}
--
2.53.0
* Re: [PATCH v2 1/2] iov: Bypass usercopy hardening for copy_to_iter()
2026-03-30 14:36 ` [PATCH v2 1/2] iov: Bypass usercopy hardening for copy_to_iter() Chuck Lever
@ 2026-03-30 21:11 ` David Laight
0 siblings, 0 replies; 4+ messages in thread
From: David Laight @ 2026-03-30 21:11 UTC (permalink / raw)
To: Chuck Lever
Cc: Al Viro, Kees Cook, Gustavo A. R. Silva, linux-hardening,
linux-block, linux-fsdevel, netdev, Chuck Lever
On Mon, 30 Mar 2026 10:36:30 -0400
Chuck Lever <cel@kernel.org> wrote:
> From: Chuck Lever <chuck.lever@oracle.com>
>
> Profiling NFSD under an iozone workload showed that hardened
> usercopy checks consume roughly 1.3% of CPU in the TCP receive
> path. The runtime check in check_object_size() validates that
> copy buffers reside in expected kernel memory regions (slab,
> stack, and non-text), which is meaningful when data crosses
> the user/kernel boundary but adds no value when both source
> and destination are kernel addresses.
I thought the purpose was to avoid accidental overwrites when
the allocated buffer was the wrong size.
That is about as likely to affect kernel copies as user ones.
OTOH the overhead for some socket paths is really horrid.
IIRC sendmsg/recvmsg does copies whose length depends on
whether it is a 64bit or compat system call.
These go through the full horrors of usercopy hardening even
though there is no way they can ever fail.
Those are the 'control plane' copies - well before you get to
any actual data.
David