* [PATCH net] rxrpc: Also unshare DATA/RESPONSE packets when paged frags are present
@ 2026-04-29 23:35 Hyunwoo Kim
2026-05-01 15:58 ` Simon Horman
2026-05-08 5:57 ` Qingfang Deng
0 siblings, 2 replies; 7+ messages in thread
From: Hyunwoo Kim @ 2026-04-29 23:35 UTC (permalink / raw)
To: dhowells, marc.dionne, davem, edumazet, kuba, pabeni, horms,
linux-afs, netdev
Cc: imv4bel
The DATA-packet handler in rxrpc_input_call_event() and the RESPONSE
handler in rxrpc_verify_response() copy the skb to a linear one before
calling into the security ops only when skb_cloned() is true. An skb
that is not cloned but still carries paged fragments (skb->data_len != 0)
falls through to the in-place decryption path, which binds the frag
pages directly into the AEAD/skcipher SGL via skb_to_sgvec().
Extend the gate so that any skb with non-linear data is also copied,
ensuring the security handler always operates on a fully linear skb.
The OOM/trace handling already in place is reused.
Fixes: d0d5c0cd1e71 ("rxrpc: Use skb_unshare() rather than skb_cow_data()")
Signed-off-by: Hyunwoo Kim <imv4bel@gmail.com>
---
net/rxrpc/call_event.c | 2 +-
net/rxrpc/conn_event.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
index fdd683261226..6c924ef55208 100644
--- a/net/rxrpc/call_event.c
+++ b/net/rxrpc/call_event.c
@@ -334,7 +334,7 @@ bool rxrpc_input_call_event(struct rxrpc_call *call)
if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA &&
sp->hdr.securityIndex != 0 &&
- skb_cloned(skb)) {
+ (skb_cloned(skb) || skb->data_len)) {
/* Unshare the packet so that it can be
* modified by in-place decryption.
*/
diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
index a2130d25aaa9..eab7c5f2517a 100644
--- a/net/rxrpc/conn_event.c
+++ b/net/rxrpc/conn_event.c
@@ -245,7 +245,7 @@ static int rxrpc_verify_response(struct rxrpc_connection *conn,
{
int ret;
- if (skb_cloned(skb)) {
+ if (skb_cloned(skb) || skb->data_len) {
/* Copy the packet if shared so that we can do in-place
* decryption.
*/
--
2.43.0
^ permalink raw reply related [flat|nested] 7+ messages in thread
* Re: [PATCH net] rxrpc: Also unshare DATA/RESPONSE packets when paged frags are present
2026-04-29 23:35 [PATCH net] rxrpc: Also unshare DATA/RESPONSE packets when paged frags are present Hyunwoo Kim
@ 2026-05-01 15:58 ` Simon Horman
2026-05-01 16:57 ` Hyunwoo Kim
2026-05-08 5:57 ` Qingfang Deng
1 sibling, 1 reply; 7+ messages in thread
From: Simon Horman @ 2026-05-01 15:58 UTC (permalink / raw)
To: imv4bel
Cc: 'Simon Horman', dhowells, marc.dionne, davem, edumazet,
kuba, pabeni, linux-afs, netdev
From: 'Simon Horman' <horms@kernel.org>
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
Full review at: https://sashiko.dev
---
rxrpc: Also unshare DATA/RESPONSE packets when paged frags are present
This commit updates the DATA and RESPONSE packet handlers in rxrpc to copy
the skb into a linear one if there are paged fragments present. This ensures
the security handler operates on a fully linear skb during decryption.
> diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
> index fdd683261226..6c924ef55208 100644
> --- a/net/rxrpc/call_event.c
> +++ b/net/rxrpc/call_event.c
> @@ -334,7 +334,7 @@ bool rxrpc_input_call_event(struct rxrpc_call *call)
>
> if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA &&
> sp->hdr.securityIndex != 0 &&
> - skb_cloned(skb)) {
> + (skb_cloned(skb) || skb->data_len)) {
> /* Unshare the packet so that it can be
> * modified by in-place decryption.
> */
Since modern NIC drivers commonly use page pools where packet payloads are
placed in page fragments, skb->data_len will be greater than zero for almost
all standard incoming packets.
By taking this path, the code calls skb_copy() right after this check.
Will this force a full packet copy for nearly all incoming RxRPC DATA
packets, bypassing zero-copy receive paths and causing a large performance
regression?
Additionally, skb_copy() fully linearizes the packet by allocating a single
contiguous buffer for the entire packet size. For RxRPC Jumbo packets or UDP
GRO-aggregated packets (which can reach up to ~64KB), this requires an
order-4 page allocation.
Could relying on an order-4 GFP_ATOMIC allocation cause frequent
allocation failures and packet drops on memory-fragmented systems?
If the goal is to ensure the skb is writable for in-place decryption, would
it be safer to use skb_cow_data() instead of forcing a full contiguous
linearization?
Simon says:
I note that this patch included
Fixes: d0d5c0cd1e71 ("rxrpc: Use skb_unshare() rather than skb_cow_data()")
And that the suggestion in the last paragraph above would
reintroduce the problem addressed by d0d5c0cd1e71.
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH net] rxrpc: Also unshare DATA/RESPONSE packets when paged frags are present
2026-05-01 15:58 ` Simon Horman
@ 2026-05-01 16:57 ` Hyunwoo Kim
0 siblings, 0 replies; 7+ messages in thread
From: Hyunwoo Kim @ 2026-05-01 16:57 UTC (permalink / raw)
To: Simon Horman
Cc: dhowells, marc.dionne, davem, edumazet, kuba, pabeni, linux-afs,
netdev, imv4bel
On Fri, May 01, 2026 at 04:58:08PM +0100, Simon Horman wrote:
> From: 'Simon Horman' <horms@kernel.org>
>
> This is an AI-generated review of your patch. The human sending this
> email has considered the AI review valid, or at least plausible.
> Full review at: https://sashiko.dev
> ---
> rxrpc: Also unshare DATA/RESPONSE packets when paged frags are present
>
> This commit updates the DATA and RESPONSE packet handlers in rxrpc to copy
> the skb into a linear one if there are paged fragments present. This ensures
> the security handler operates on a fully linear skb during decryption.
>
> > diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
> > index fdd683261226..6c924ef55208 100644
> > --- a/net/rxrpc/call_event.c
> > +++ b/net/rxrpc/call_event.c
> > @@ -334,7 +334,7 @@ bool rxrpc_input_call_event(struct rxrpc_call *call)
> >
> > if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA &&
> > sp->hdr.securityIndex != 0 &&
> > - skb_cloned(skb)) {
> > + (skb_cloned(skb) || skb->data_len)) {
> > /* Unshare the packet so that it can be
> > * modified by in-place decryption.
> > */
>
> Since modern NIC drivers commonly use page pools where packet payloads are
> placed in page fragments, skb->data_len will be greater than zero for almost
> all standard incoming packets.
>
> By taking this path, the code calls skb_copy() right after this check.
> Will this force a full packet copy for nearly all incoming RxRPC DATA
> packets, bypassing zero-copy receive paths and causing a large performance
> regression?
>
> Additionally, skb_copy() fully linearizes the packet by allocating a single
> contiguous buffer for the entire packet size. For RxRPC Jumbo packets or UDP
> GRO-aggregated packets (which can reach up to ~64KB), this requires an
> order-4 page allocation.
>
> Could relying on an order-4 GFP_ATOMIC allocation cause frequent
> allocation failures and packet drops on memory-fragmented systems?
>
> If the goal is to ensure the skb is writable for in-place decryption, would
> it be safer to use skb_cow_data() instead of forcing a full contiguous
> linearization?
>
> Simon says:
>
> I note that this patch included
> Fixes: d0d5c0cd1e71 ("rxrpc: Use skb_unshare() rather than skb_cow_data()")
> And that the suggestion in the last paragraph above would
> reintroduce the problem addressed by d0d5c0cd1e71.
Yes, applying the AI's suggestion would reintroduce the bug fixed
by d0d5c0cd1e71.
The performance concern regarding skb_copy() is, however, valid.
That said, I cannot think of a way to keep the zero-copy path while
still handling in-place decryption properly. Suggestions welcome.
Best regards,
Hyunwoo Kim
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH net] rxrpc: Also unshare DATA/RESPONSE packets when paged frags are present
2026-04-29 23:35 [PATCH net] rxrpc: Also unshare DATA/RESPONSE packets when paged frags are present Hyunwoo Kim
2026-05-01 15:58 ` Simon Horman
@ 2026-05-08 5:57 ` Qingfang Deng
2026-05-08 6:07 ` Hyunwoo Kim
1 sibling, 1 reply; 7+ messages in thread
From: Qingfang Deng @ 2026-05-08 5:57 UTC (permalink / raw)
To: Hyunwoo Kim
Cc: dhowells, marc.dionne, davem, edumazet, kuba, pabeni, horms,
linux-afs, netdev
On Thu, 30 Apr 2026 08:35:55 +0900, Hyunwoo Kim wrote:
>
> The DATA-packet handler in rxrpc_input_call_event() and the RESPONSE
> handler in rxrpc_verify_response() copy the skb to a linear one before
> calling into the security ops only when skb_cloned() is true. An skb
> that is not cloned but still carries paged fragments (skb->data_len != 0)
> falls through to the in-place decryption path, which binds the frag
> pages directly into the AEAD/skcipher SGL via skb_to_sgvec().
>
> Extend the gate so that any skb with non-linear data is also copied,
> ensuring the security handler always operates on a fully linear skb.
> The OOM/trace handling already in place is reused.
>
> Fixes: d0d5c0cd1e71 ("rxrpc: Use skb_unshare() rather than skb_cow_data()")
> Signed-off-by: Hyunwoo Kim <imv4bel@gmail.com>
> ---
> net/rxrpc/call_event.c | 2 +-
> net/rxrpc/conn_event.c | 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
> index fdd683261226..6c924ef55208 100644
> --- a/net/rxrpc/call_event.c
> +++ b/net/rxrpc/call_event.c
> @@ -334,7 +334,7 @@ bool rxrpc_input_call_event(struct rxrpc_call *call)
>
> if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA &&
> sp->hdr.securityIndex != 0 &&
> - skb_cloned(skb)) {
> + (skb_cloned(skb) || skb->data_len)) {
It's recommended to use skb_is_nonlinear() instead of open-coding
skb->data_len.
> /* Unshare the packet so that it can be
> * modified by in-place decryption.
> */
> diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
> index a2130d25aaa9..eab7c5f2517a 100644
> --- a/net/rxrpc/conn_event.c
> +++ b/net/rxrpc/conn_event.c
> @@ -245,7 +245,7 @@ static int rxrpc_verify_response(struct rxrpc_connection *conn,
> {
> int ret;
>
> - if (skb_cloned(skb)) {
> + if (skb_cloned(skb) || skb->data_len) {
Ditto.
> /* Copy the packet if shared so that we can do in-place
> * decryption.
> */
Regards,
Qingfang
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH net] rxrpc: Also unshare DATA/RESPONSE packets when paged frags are present
2026-05-08 5:57 ` Qingfang Deng
@ 2026-05-08 6:07 ` Hyunwoo Kim
2026-05-08 20:25 ` Jakub Kicinski
0 siblings, 1 reply; 7+ messages in thread
From: Hyunwoo Kim @ 2026-05-08 6:07 UTC (permalink / raw)
To: Qingfang Deng
Cc: dhowells, marc.dionne, davem, edumazet, kuba, pabeni, horms,
linux-afs, netdev, imv4bel
On Fri, May 08, 2026 at 01:57:15PM +0800, Qingfang Deng wrote:
> On Thu, 30 Apr 2026 08:35:55 +0900, Hyunwoo Kim wrote:
> >
> > The DATA-packet handler in rxrpc_input_call_event() and the RESPONSE
> > handler in rxrpc_verify_response() copy the skb to a linear one before
> > calling into the security ops only when skb_cloned() is true. An skb
> > that is not cloned but still carries paged fragments (skb->data_len != 0)
> > falls through to the in-place decryption path, which binds the frag
> > pages directly into the AEAD/skcipher SGL via skb_to_sgvec().
> >
> > Extend the gate so that any skb with non-linear data is also copied,
> > ensuring the security handler always operates on a fully linear skb.
> > The OOM/trace handling already in place is reused.
> >
> > Fixes: d0d5c0cd1e71 ("rxrpc: Use skb_unshare() rather than skb_cow_data()")
> > Signed-off-by: Hyunwoo Kim <imv4bel@gmail.com>
> > ---
> > net/rxrpc/call_event.c | 2 +-
> > net/rxrpc/conn_event.c | 2 +-
> > 2 files changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
> > index fdd683261226..6c924ef55208 100644
> > --- a/net/rxrpc/call_event.c
> > +++ b/net/rxrpc/call_event.c
> > @@ -334,7 +334,7 @@ bool rxrpc_input_call_event(struct rxrpc_call *call)
> >
> > if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA &&
> > sp->hdr.securityIndex != 0 &&
> > - skb_cloned(skb)) {
> > + (skb_cloned(skb) || skb->data_len)) {
>
> It's recommended to use skb_is_nonlinear() instead of open-coding
> skb->data_len.
>
> > /* Unshare the packet so that it can be
> > * modified by in-place decryption.
> > */
> > diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
> > index a2130d25aaa9..eab7c5f2517a 100644
> > --- a/net/rxrpc/conn_event.c
> > +++ b/net/rxrpc/conn_event.c
> > @@ -245,7 +245,7 @@ static int rxrpc_verify_response(struct rxrpc_connection *conn,
> > {
> > int ret;
> >
> > - if (skb_cloned(skb)) {
> > + if (skb_cloned(skb) || skb->data_len) {
>
> Ditto.
Thank you for the review.
I will submit a v2 patch.
Best regards,
Hyunwoo Kim
>
> > /* Copy the packet if shared so that we can do in-place
> > * decryption.
> > */
>
> Regards,
> Qingfang
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH net] rxrpc: Also unshare DATA/RESPONSE packets when paged frags are present
2026-05-08 6:07 ` Hyunwoo Kim
@ 2026-05-08 20:25 ` Jakub Kicinski
2026-05-09 1:32 ` Qingfang Deng
0 siblings, 1 reply; 7+ messages in thread
From: Jakub Kicinski @ 2026-05-08 20:25 UTC (permalink / raw)
To: Hyunwoo Kim
Cc: Qingfang Deng, dhowells, marc.dionne, davem, edumazet, pabeni,
horms, linux-afs, netdev
On Fri, 8 May 2026 15:07:49 +0900 Hyunwoo Kim wrote:
> > > - if (skb_cloned(skb)) {
> > > + if (skb_cloned(skb) || skb->data_len) {
> >
> > Ditto.
>
> Thank you for the review.
Maybe skb_ensure_writable() ?
^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH net] rxrpc: Also unshare DATA/RESPONSE packets when paged frags are present
2026-05-08 20:25 ` Jakub Kicinski
@ 2026-05-09 1:32 ` Qingfang Deng
0 siblings, 0 replies; 7+ messages in thread
From: Qingfang Deng @ 2026-05-09 1:32 UTC (permalink / raw)
To: Jakub Kicinski, Hyunwoo Kim
Cc: dhowells, marc.dionne, davem, edumazet, pabeni, horms, linux-afs,
netdev
On 2026/5/9 4:25, Jakub Kicinski wrote:
> On Fri, 8 May 2026 15:07:49 +0900 Hyunwoo Kim wrote:
>>>> - if (skb_cloned(skb)) {
>>>> + if (skb_cloned(skb) || skb->data_len) {
>>> Ditto.
>> Thank you for the review.
> Maybe skb_ensure_writable() ?
This also works, but with a subtle difference: the new buffer is
allocated with GFP_ATOMIC instead of GFP_NOFS.
^ permalink raw reply [flat|nested] 7+ messages in thread
end of thread, other threads:[~2026-05-09 1:32 UTC | newest]
Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-29 23:35 [PATCH net] rxrpc: Also unshare DATA/RESPONSE packets when paged frags are present Hyunwoo Kim
2026-05-01 15:58 ` Simon Horman
2026-05-01 16:57 ` Hyunwoo Kim
2026-05-08 5:57 ` Qingfang Deng
2026-05-08 6:07 ` Hyunwoo Kim
2026-05-08 20:25 ` Jakub Kicinski
2026-05-09 1:32 ` Qingfang Deng