* [RFC PATCH] iov: Bypass usercopy hardening for kernel iterators
@ 2026-03-03 16:29 Chuck Lever
2026-03-03 18:00 ` Matthew Wilcox
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Chuck Lever @ 2026-03-03 16:29 UTC (permalink / raw)
To: viro, kees, gustavoars
Cc: linux-hardening, linux-block, linux-fsdevel, netdev, Chuck Lever
From: Chuck Lever <chuck.lever@oracle.com>
Profiling NFSD under an iozone workload showed that hardened
usercopy checks consume roughly 1.3% of CPU in the TCP receive
path. The runtime check in check_object_size() validates that
copy buffers reside in expected slab regions, which is
meaningful when data crosses the user/kernel boundary but adds
no value when both source and destination are kernel addresses.
Split check_copy_size() so that copy_to_iter() can bypass the
runtime check_object_size() call for kernel-only iterators
(ITER_BVEC, ITER_KVEC). Existing callers of check_copy_size()
are unaffected; user-backed iterators still receive the full
usercopy validation.
This benefits all kernel consumers of copy_to_iter(), including
the TCP receive path used by the NFS client and server,
NVMe-TCP, and any other subsystem that uses ITER_BVEC or
ITER_KVEC receive buffers.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
include/linux/ucopysize.h | 10 +++++++++-
include/linux/uio.h | 9 +++++++--
2 files changed, 16 insertions(+), 3 deletions(-)
diff --git a/include/linux/ucopysize.h b/include/linux/ucopysize.h
index 41c2d9720466..b3eacb4869a8 100644
--- a/include/linux/ucopysize.h
+++ b/include/linux/ucopysize.h
@@ -42,7 +42,7 @@ static inline void copy_overflow(int size, unsigned long count)
}
static __always_inline __must_check bool
-check_copy_size(const void *addr, size_t bytes, bool is_source)
+check_copy_size_nosec(const void *addr, size_t bytes, bool is_source)
{
int sz = __builtin_object_size(addr, 0);
if (unlikely(sz >= 0 && sz < bytes)) {
@@ -56,6 +56,14 @@ check_copy_size(const void *addr, size_t bytes, bool is_source)
}
if (WARN_ON_ONCE(bytes > INT_MAX))
return false;
+ return true;
+}
+
+static __always_inline __must_check bool
+check_copy_size(const void *addr, size_t bytes, bool is_source)
+{
+ if (!check_copy_size_nosec(addr, bytes, is_source))
+ return false;
check_object_size(addr, bytes, is_source);
return true;
}
diff --git a/include/linux/uio.h b/include/linux/uio.h
index a9bc5b3067e3..f860529abfbe 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -216,8 +216,13 @@ size_t copy_page_to_iter_nofault(struct page *page, unsigned offset,
static __always_inline __must_check
size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
{
- if (check_copy_size(addr, bytes, true))
- return _copy_to_iter(addr, bytes, i);
+ if (user_backed_iter(i)) {
+ if (check_copy_size(addr, bytes, true))
+ return _copy_to_iter(addr, bytes, i);
+ } else {
+ if (check_copy_size_nosec(addr, bytes, true))
+ return _copy_to_iter(addr, bytes, i);
+ }
return 0;
}
--
2.53.0
^ permalink raw reply related [flat|nested] 7+ messages in thread
* Re: [RFC PATCH] iov: Bypass usercopy hardening for kernel iterators
2026-03-03 16:29 [RFC PATCH] iov: Bypass usercopy hardening for kernel iterators Chuck Lever
@ 2026-03-03 18:00 ` Matthew Wilcox
2026-03-03 19:41 ` Chuck Lever
2026-03-25 17:26 ` Chuck Lever
2026-03-25 21:27 ` Kees Cook
2 siblings, 1 reply; 7+ messages in thread
From: Matthew Wilcox @ 2026-03-03 18:00 UTC (permalink / raw)
To: Chuck Lever
Cc: viro, kees, gustavoars, linux-hardening, linux-block,
linux-fsdevel, netdev, Chuck Lever
On Tue, Mar 03, 2026 at 11:29:32AM -0500, Chuck Lever wrote:
> Profiling NFSD under an iozone workload showed that hardened
> usercopy checks consume roughly 1.3% of CPU in the TCP receive
> path. The runtime check in check_object_size() validates that
> copy buffers reside in expected slab regions, which is
> meaningful when data crosses the user/kernel boundary but adds
> no value when both source and destination are kernel addresses.
I'm not sure I'd go as far as "no value". I could see an attack which
managed to trick the kernel into copying past the end of a slab object
and sending the contents of that buffer across the network to an attacker.
Or I guess in this case you're talking about copying _to_ a slab object.
Then we could see a network attacker somehow confusing the kernel into
copying past the end of the object they allocated, overwriting slab
metadata and/or the contents of the next object in the slab.
Limited value, sure. And the performance change you're showing here
certainly isn't nothing!
> Split check_copy_size() so that copy_to_iter() can bypass the
> runtime check_object_size() call for kernel-only iterators
> (ITER_BVEC, ITER_KVEC). Existing callers of check_copy_size()
> are unaffected; user-backed iterators still receive the full
> usercopy validation.
>
> This benefits all kernel consumers of copy_to_iter(), including
> the TCP receive path used by the NFS client and server,
> NVMe-TCP, and any other subsystem that uses ITER_BVEC or
> ITER_KVEC receive buffers.
>
> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> ---
> include/linux/ucopysize.h | 10 +++++++++-
> include/linux/uio.h | 9 +++++++--
> 2 files changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/ucopysize.h b/include/linux/ucopysize.h
> index 41c2d9720466..b3eacb4869a8 100644
> --- a/include/linux/ucopysize.h
> +++ b/include/linux/ucopysize.h
> @@ -42,7 +42,7 @@ static inline void copy_overflow(int size, unsigned long count)
> }
>
> static __always_inline __must_check bool
> -check_copy_size(const void *addr, size_t bytes, bool is_source)
> +check_copy_size_nosec(const void *addr, size_t bytes, bool is_source)
> {
> int sz = __builtin_object_size(addr, 0);
> if (unlikely(sz >= 0 && sz < bytes)) {
> @@ -56,6 +56,14 @@ check_copy_size(const void *addr, size_t bytes, bool is_source)
> }
> if (WARN_ON_ONCE(bytes > INT_MAX))
> return false;
> + return true;
> +}
> +
> +static __always_inline __must_check bool
> +check_copy_size(const void *addr, size_t bytes, bool is_source)
> +{
> + if (!check_copy_size_nosec(addr, bytes, is_source))
> + return false;
> check_object_size(addr, bytes, is_source);
> return true;
> }
> diff --git a/include/linux/uio.h b/include/linux/uio.h
> index a9bc5b3067e3..f860529abfbe 100644
> --- a/include/linux/uio.h
> +++ b/include/linux/uio.h
> @@ -216,8 +216,13 @@ size_t copy_page_to_iter_nofault(struct page *page, unsigned offset,
> static __always_inline __must_check
> size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
> {
> - if (check_copy_size(addr, bytes, true))
> - return _copy_to_iter(addr, bytes, i);
> + if (user_backed_iter(i)) {
> + if (check_copy_size(addr, bytes, true))
> + return _copy_to_iter(addr, bytes, i);
> + } else {
> + if (check_copy_size_nosec(addr, bytes, true))
> + return _copy_to_iter(addr, bytes, i);
> + }
> return 0;
> }
>
> --
> 2.53.0
>
>
* Re: [RFC PATCH] iov: Bypass usercopy hardening for kernel iterators
2026-03-03 18:00 ` Matthew Wilcox
@ 2026-03-03 19:41 ` Chuck Lever
2026-03-03 19:59 ` Matthew Wilcox
0 siblings, 1 reply; 7+ messages in thread
From: Chuck Lever @ 2026-03-03 19:41 UTC (permalink / raw)
To: Matthew Wilcox (Oracle)
Cc: Alexander Viro, kees, gustavoars, linux-hardening,
linux-block@vger.kernel.org, linux-fsdevel, netdev, Chuck Lever
On Tue, Mar 3, 2026, at 1:00 PM, Matthew Wilcox wrote:
> On Tue, Mar 03, 2026 at 11:29:32AM -0500, Chuck Lever wrote:
>> Profiling NFSD under an iozone workload showed that hardened
>> usercopy checks consume roughly 1.3% of CPU in the TCP receive
>> path. The runtime check in check_object_size() validates that
>> copy buffers reside in expected slab regions, which is
>> meaningful when data crosses the user/kernel boundary but adds
>> no value when both source and destination are kernel addresses.
>
> I'm not sure I'd go as far as "no value". I could see an attack which
> managed to trick the kernel into copying past the end of a slab object
> and sending the contents of that buffer across the network to an attacker.
>
> Or I guess in this case you're talking about copying _to_ a slab object.
> Then we could see a network attacker somehow confusing the kernel into
> copying past the end of the object they allocated, overwriting slab
> metadata and/or the contents of the next object in the slab.
>
> Limited value, sure. And the performance change you're showing here
> certainly isn't nothing!
To be clear, I'm absolutely interested in not degrading our security
posture. But NFSD (and other storage ULPs, for example) do a lot of
internal data copying that could be more efficient.
I would place the "trick the kernel into copying past the end of
a slab object" attack in the category of "you should sanitize your
input better"... Perhaps the existing copy_to_iter protection is
a general salve that could be replaced by something more narrow
and less costly. </hand wave>
--
Chuck Lever
* Re: [RFC PATCH] iov: Bypass usercopy hardening for kernel iterators
2026-03-03 19:41 ` Chuck Lever
@ 2026-03-03 19:59 ` Matthew Wilcox
0 siblings, 0 replies; 7+ messages in thread
From: Matthew Wilcox @ 2026-03-03 19:59 UTC (permalink / raw)
To: Chuck Lever
Cc: Alexander Viro, kees, gustavoars, linux-hardening,
linux-block@vger.kernel.org, linux-fsdevel, netdev, Chuck Lever
On Tue, Mar 03, 2026 at 02:41:33PM -0500, Chuck Lever wrote:
> On Tue, Mar 3, 2026, at 1:00 PM, Matthew Wilcox wrote:
> > On Tue, Mar 03, 2026 at 11:29:32AM -0500, Chuck Lever wrote:
> >> Profiling NFSD under an iozone workload showed that hardened
> >> usercopy checks consume roughly 1.3% of CPU in the TCP receive
> >> path. The runtime check in check_object_size() validates that
> >> copy buffers reside in expected slab regions, which is
> >> meaningful when data crosses the user/kernel boundary but adds
> >> no value when both source and destination are kernel addresses.
> >
> > I'm not sure I'd go as far as "no value". I could see an attack which
> > managed to trick the kernel into copying past the end of a slab object
> > and sending the contents of that buffer across the network to an attacker.
> >
> > Or I guess in this case you're talking about copying _to_ a slab object.
> > Then we could see a network attacker somehow confusing the kernel into
> > copying past the end of the object they allocated, overwriting slab
> > metadata and/or the contents of the next object in the slab.
> >
> > Limited value, sure. And the performance change you're showing here
> > certainly isn't nothing!
>
> To be clear, I'm absolutely interested in not degrading our security
> posture. But NFSD (and other storage ULPs, for example) do a lot of
> internal data copying that could be more efficient.
>
> I would place the "trick the kernel into copying past the end of
> a slab object" attack in the category of "you should sanitize your
> input better"... Perhaps the existing copy_to_iter protection is
> a general salve that could be replaced by something more narrow
> and less costly. </hand wave>
As I understand it, and I'm sure Kees will correct me if I'm wrong,
the hardened usercopy stuff is always "you should have sanitised your
input before you got here"; it's never "yolo, just copy the amount the
user asked for and if it's too much, the hardening will catch it".
I'm definitely open to the original patch, as well as other alternatives
that narrow down the cases where we can prove that we're not doing
anything wrong. I just want to be sure that we all understand what
tradeoffs we're making and why.
* Re: [RFC PATCH] iov: Bypass usercopy hardening for kernel iterators
2026-03-03 16:29 [RFC PATCH] iov: Bypass usercopy hardening for kernel iterators Chuck Lever
2026-03-03 18:00 ` Matthew Wilcox
@ 2026-03-25 17:26 ` Chuck Lever
2026-03-25 21:27 ` Kees Cook
2 siblings, 0 replies; 7+ messages in thread
From: Chuck Lever @ 2026-03-25 17:26 UTC (permalink / raw)
To: Alexander Viro, kees, gustavoars
Cc: linux-hardening, linux-block@vger.kernel.org, linux-fsdevel,
netdev, Chuck Lever
On Tue, Mar 3, 2026, at 11:29 AM, Chuck Lever wrote:
> From: Chuck Lever <chuck.lever@oracle.com>
>
> Profiling NFSD under an iozone workload showed that hardened
> usercopy checks consume roughly 1.3% of CPU in the TCP receive
> path. The runtime check in check_object_size() validates that
> copy buffers reside in expected slab regions, which is
> meaningful when data crosses the user/kernel boundary but adds
> no value when both source and destination are kernel addresses.
>
> Split check_copy_size() so that copy_to_iter() can bypass the
> runtime check_object_size() call for kernel-only iterators
> (ITER_BVEC, ITER_KVEC). Existing callers of check_copy_size()
> are unaffected; user-backed iterators still receive the full
> usercopy validation.
>
> This benefits all kernel consumers of copy_to_iter(), including
> the TCP receive path used by the NFS client and server,
> NVMe-TCP, and any other subsystem that uses ITER_BVEC or
> ITER_KVEC receive buffers.
>
> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> ---
> include/linux/ucopysize.h | 10 +++++++++-
> include/linux/uio.h | 9 +++++++--
> 2 files changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/ucopysize.h b/include/linux/ucopysize.h
> index 41c2d9720466..b3eacb4869a8 100644
> --- a/include/linux/ucopysize.h
> +++ b/include/linux/ucopysize.h
> @@ -42,7 +42,7 @@ static inline void copy_overflow(int size, unsigned long count)
> }
>
> static __always_inline __must_check bool
> -check_copy_size(const void *addr, size_t bytes, bool is_source)
> +check_copy_size_nosec(const void *addr, size_t bytes, bool is_source)
> {
> int sz = __builtin_object_size(addr, 0);
> if (unlikely(sz >= 0 && sz < bytes)) {
> @@ -56,6 +56,14 @@ check_copy_size(const void *addr, size_t bytes, bool is_source)
> }
> if (WARN_ON_ONCE(bytes > INT_MAX))
> return false;
> + return true;
> +}
> +
> +static __always_inline __must_check bool
> +check_copy_size(const void *addr, size_t bytes, bool is_source)
> +{
> + if (!check_copy_size_nosec(addr, bytes, is_source))
> + return false;
> check_object_size(addr, bytes, is_source);
> return true;
> }
> diff --git a/include/linux/uio.h b/include/linux/uio.h
> index a9bc5b3067e3..f860529abfbe 100644
> --- a/include/linux/uio.h
> +++ b/include/linux/uio.h
> @@ -216,8 +216,13 @@ size_t copy_page_to_iter_nofault(struct page *page, unsigned offset,
> static __always_inline __must_check
> size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
> {
> - if (check_copy_size(addr, bytes, true))
> - return _copy_to_iter(addr, bytes, i);
> + if (user_backed_iter(i)) {
> + if (check_copy_size(addr, bytes, true))
> + return _copy_to_iter(addr, bytes, i);
> + } else {
> + if (check_copy_size_nosec(addr, bytes, true))
> + return _copy_to_iter(addr, bytes, i);
> + }
> return 0;
> }
>
> --
> 2.53.0
Ping: Any further thoughts on this? Al, Kees, Gustavo?
--
Chuck Lever
* Re: [RFC PATCH] iov: Bypass usercopy hardening for kernel iterators
2026-03-03 16:29 [RFC PATCH] iov: Bypass usercopy hardening for kernel iterators Chuck Lever
2026-03-03 18:00 ` Matthew Wilcox
2026-03-25 17:26 ` Chuck Lever
@ 2026-03-25 21:27 ` Kees Cook
2026-03-25 21:29 ` Chuck Lever
2 siblings, 1 reply; 7+ messages in thread
From: Kees Cook @ 2026-03-25 21:27 UTC (permalink / raw)
To: Chuck Lever
Cc: viro, gustavoars, linux-hardening, linux-block, linux-fsdevel,
netdev, Chuck Lever
On Tue, Mar 03, 2026 at 11:29:32AM -0500, Chuck Lever wrote:
> From: Chuck Lever <chuck.lever@oracle.com>
>
> Profiling NFSD under an iozone workload showed that hardened
> usercopy checks consume roughly 1.3% of CPU in the TCP receive
> path. The runtime check in check_object_size() validates that
> copy buffers reside in expected slab regions, which is
> meaningful when data crosses the user/kernel boundary but adds
> no value when both source and destination are kernel addresses.
>
> Split check_copy_size() so that copy_to_iter() can bypass the
> runtime check_object_size() call for kernel-only iterators
> (ITER_BVEC, ITER_KVEC). Existing callers of check_copy_size()
> are unaffected; user-backed iterators still receive the full
> usercopy validation.
>
> This benefits all kernel consumers of copy_to_iter(), including
> the TCP receive path used by the NFS client and server,
> NVMe-TCP, and any other subsystem that uses ITER_BVEC or
> ITER_KVEC receive buffers.
So, I'm not a big fan of this just because the whole point is to catch
unexpected conditions, but there is a reasonable point to be made that
this case shouldn't be covered by kernel/kernel copies.
>
> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> ---
> include/linux/ucopysize.h | 10 +++++++++-
> include/linux/uio.h | 9 +++++++--
> 2 files changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/ucopysize.h b/include/linux/ucopysize.h
> index 41c2d9720466..b3eacb4869a8 100644
> --- a/include/linux/ucopysize.h
> +++ b/include/linux/ucopysize.h
> @@ -42,7 +42,7 @@ static inline void copy_overflow(int size, unsigned long count)
> }
>
> static __always_inline __must_check bool
> -check_copy_size(const void *addr, size_t bytes, bool is_source)
> +check_copy_size_nosec(const void *addr, size_t bytes, bool is_source)
"nosec" is kind of ambiguous. Since this is doing the compile-time
checks, how about naming this __compiletime_check_copy_size() or so?
> {
> int sz = __builtin_object_size(addr, 0);
> if (unlikely(sz >= 0 && sz < bytes)) {
> @@ -56,6 +56,14 @@ check_copy_size(const void *addr, size_t bytes, bool is_source)
> }
> if (WARN_ON_ONCE(bytes > INT_MAX))
> return false;
> + return true;
> +}
> +
> +static __always_inline __must_check bool
> +check_copy_size(const void *addr, size_t bytes, bool is_source)
> +{
> + if (!check_copy_size_nosec(addr, bytes, is_source))
> + return false;
> check_object_size(addr, bytes, is_source);
> return true;
> }
> diff --git a/include/linux/uio.h b/include/linux/uio.h
> index a9bc5b3067e3..f860529abfbe 100644
> --- a/include/linux/uio.h
> +++ b/include/linux/uio.h
> @@ -216,8 +216,13 @@ size_t copy_page_to_iter_nofault(struct page *page, unsigned offset,
> static __always_inline __must_check
> size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
> {
> - if (check_copy_size(addr, bytes, true))
> - return _copy_to_iter(addr, bytes, i);
> + if (user_backed_iter(i)) {
> + if (check_copy_size(addr, bytes, true))
> + return _copy_to_iter(addr, bytes, i);
> + } else {
> + if (check_copy_size_nosec(addr, bytes, true))
> + return _copy_to_iter(addr, bytes, i);
> + }
> return 0;
> }
This seems reasonable with the renaming, though I might come back some
day and ask that this get a boot param or something (we have a big
hammer boot param for usercopy checking already, but I like this more
focused check).
--
Kees Cook
* Re: [RFC PATCH] iov: Bypass usercopy hardening for kernel iterators
2026-03-25 21:27 ` Kees Cook
@ 2026-03-25 21:29 ` Chuck Lever
0 siblings, 0 replies; 7+ messages in thread
From: Chuck Lever @ 2026-03-25 21:29 UTC (permalink / raw)
To: Kees Cook
Cc: viro, gustavoars, linux-hardening, linux-block, linux-fsdevel,
netdev, Chuck Lever
On 3/25/26 5:27 PM, Kees Cook wrote:
> On Tue, Mar 03, 2026 at 11:29:32AM -0500, Chuck Lever wrote:
>> From: Chuck Lever <chuck.lever@oracle.com>
>>
>> Profiling NFSD under an iozone workload showed that hardened
>> usercopy checks consume roughly 1.3% of CPU in the TCP receive
>> path. The runtime check in check_object_size() validates that
>> copy buffers reside in expected slab regions, which is
>> meaningful when data crosses the user/kernel boundary but adds
>> no value when both source and destination are kernel addresses.
>>
>> Split check_copy_size() so that copy_to_iter() can bypass the
>> runtime check_object_size() call for kernel-only iterators
>> (ITER_BVEC, ITER_KVEC). Existing callers of check_copy_size()
>> are unaffected; user-backed iterators still receive the full
>> usercopy validation.
>>
>> This benefits all kernel consumers of copy_to_iter(), including
>> the TCP receive path used by the NFS client and server,
>> NVMe-TCP, and any other subsystem that uses ITER_BVEC or
>> ITER_KVEC receive buffers.
>
> So, I'm not a big fan of this just because the whole point is to catch
> unexpected conditions, but there is a reasonable point to be made that
> this case shouldn't be covered by kernel/kernel copies.
>
>>
>> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
>> ---
>> include/linux/ucopysize.h | 10 +++++++++-
>> include/linux/uio.h | 9 +++++++--
>> 2 files changed, 16 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/ucopysize.h b/include/linux/ucopysize.h
>> index 41c2d9720466..b3eacb4869a8 100644
>> --- a/include/linux/ucopysize.h
>> +++ b/include/linux/ucopysize.h
>> @@ -42,7 +42,7 @@ static inline void copy_overflow(int size, unsigned long count)
>> }
>>
>> static __always_inline __must_check bool
>> -check_copy_size(const void *addr, size_t bytes, bool is_source)
>> +check_copy_size_nosec(const void *addr, size_t bytes, bool is_source)
>
> "nosec" is kind of ambiguous. Since this is doing the compile-time
> checks, how about naming this __compiletime_check_copy_size() or so?
No problem.
>> {
>> int sz = __builtin_object_size(addr, 0);
>> if (unlikely(sz >= 0 && sz < bytes)) {
>> @@ -56,6 +56,14 @@ check_copy_size(const void *addr, size_t bytes, bool is_source)
>> }
>> if (WARN_ON_ONCE(bytes > INT_MAX))
>> return false;
>> + return true;
>> +}
>> +
>> +static __always_inline __must_check bool
>> +check_copy_size(const void *addr, size_t bytes, bool is_source)
>> +{
>> + if (!check_copy_size_nosec(addr, bytes, is_source))
>> + return false;
>> check_object_size(addr, bytes, is_source);
>> return true;
>> }
>> diff --git a/include/linux/uio.h b/include/linux/uio.h
>> index a9bc5b3067e3..f860529abfbe 100644
>> --- a/include/linux/uio.h
>> +++ b/include/linux/uio.h
>> @@ -216,8 +216,13 @@ size_t copy_page_to_iter_nofault(struct page *page, unsigned offset,
>> static __always_inline __must_check
>> size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
>> {
>> - if (check_copy_size(addr, bytes, true))
>> - return _copy_to_iter(addr, bytes, i);
>> + if (user_backed_iter(i)) {
>> + if (check_copy_size(addr, bytes, true))
>> + return _copy_to_iter(addr, bytes, i);
>> + } else {
>> + if (check_copy_size_nosec(addr, bytes, true))
>> + return _copy_to_iter(addr, bytes, i);
>> + }
>> return 0;
>> }
>
> This seems reasonable with the renaming, though I might come back some
> day and ask that this get a boot param or something (we have a big
> hammer boot param for usercopy checking already, but I like this more
> focused check).
>
Thanks for having a look. An additional question is whether the
"copy from" direction needs similar treatment. Performance analysis
found "copy to" was an issue for my particular workload (NFSD) but
it's plausible that "copy from" should be handled similarly.
--
Chuck Lever
end of thread [~2026-03-25 21:29 UTC | newest]
Thread overview: 7+ messages
2026-03-03 16:29 [RFC PATCH] iov: Bypass usercopy hardening for kernel iterators Chuck Lever
2026-03-03 18:00 ` Matthew Wilcox
2026-03-03 19:41 ` Chuck Lever
2026-03-03 19:59 ` Matthew Wilcox
2026-03-25 17:26 ` Chuck Lever
2026-03-25 21:27 ` Kees Cook
2026-03-25 21:29 ` Chuck Lever