Netdev List
 help / color / mirror / Atom feed
From: David Howells <dhowells@redhat.com>
To: Jeffrey Altman <jaltman@auristor.com>
Cc: dhowells@redhat.com, netdev@vger.kernel.org,
	Hyunwoo Kim <imv4bel@gmail.com>,
	Marc Dionne <marc.dionne@auristor.com>,
	Jakub Kicinski <kuba@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Paolo Abeni <pabeni@redhat.com>, Simon Horman <horms@kernel.org>,
	linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org,
	Jiayuan Chen <jiayuan.chen@linux.dev>,
	stable@vger.kernel.org
Subject: Re: [PATCH net 2/3] rxrpc: Fix DATA decrypt vs splice() by copying data to buffer in recvmsg
Date: Wed, 13 May 2026 09:01:14 +0100	[thread overview]
Message-ID: <1354628.1778659274@warthog.procyon.org.uk> (raw)
In-Reply-To: <437CCB8A-5333-4349-B120-A103B1F0E617@auristor.com>

Jeffrey Altman <jaltman@auristor.com> wrote:

> > + void *rx_dec_buffer; /* Decryption buffer */
> > + unsigned short rx_dec_bsize; /* rx_dec_buffer size */
> > + unsigned short rx_dec_offset; /* Decrypted packet data offset */
> > + unsigned short rx_dec_len; /* Decrypted packet data len */
> > + rxrpc_seq_t rx_dec_seq; /* Packet in decryption buffer */
> > 
> > rxrpc_seq_t rx_highest_seq; /* Higest sequence number received */
> > rxrpc_seq_t rx_consumed; /* Highest packet consumed */
> 
> 
> Instead of allocating the storage within struct rxrpc_call perhaps
> It would be better to add them to struct rxrpc_channel.  Doing so 
> would reduce the allocation/deallocation churn.  The majority of
> calls are short lived (perhaps a single packet in each direction)
> but there will be many calls in rapid succession.

I'm trying to keep the I/O side separate from the application side.  I don't
particularly want recvmsg (on the app side) reaching into the rxrpc_connection
struct (on the I/O side).

Further, by only looking at the rxrpc_call struct, I don't have to deal with
locking required for the possibility that the next call on that channel will
start before I've finished with this one (say an incoming call is aborted and
immediately followed up by the first packet of the next call).

> > + size_t size = umin(round_up(sp->len, 32), 2048);
> 
> I think you meant to use max() here so that a minimum of 2048 bytes
> is allocated.  

Yeah.

> I think applying a cap on the allocation size would also be 
> beneficial.  IBM/Transarc derived Rx implementations have a hard
> upper-bound of 21180 (15 x 1412) bytes plus one 28 byte rx header.
> Applying a cap of 32KiB seems prudent.

This would need checking earlier in the input path.  A DATA packet that's too
large would need to be rejected as it comes off of the UDP socket if we're not
going to be able to unpack it later.

> It is also worth noting that there are no current implementations
> of Rx RPC which will send individual Rx DATA packets larger than 
> 1444 bytes including the Rx header.  Rx RESPONSE packets can be sent
> as large as 16384 bytes (including the Rx header).  However, it is
> extremely unlikely that this buffer once allocated would ever need 
> to be grown.  

For Rx RESPONSE packets, I'm fine with allocating a buffer on the spur of the
moment and freeing it immediately.  Ideally, there would only be one RESPONSE
per connection anyway.  I could do a static buffer with a lock, I suppose, to
make sure I can process the things under memory pressure-based writeback.

> > + kfree(call->rx_dec_buffer);
> 
> It might be better to avoid deallocating the buffer on the error
> path and permit it to be freed during normal call (or call channel)
> deallocation.

Hmmm.  But I then need some other way to note that the buffer is no longer
occupied by valid data.  I suppose I could set ->rx_dec_offset to USHRT_MAX.

David


  reply	other threads:[~2026-05-13  8:01 UTC|newest]

Thread overview: 15+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2026-05-11 16:07 [PATCH net 0/3] rxrpc: Better fix for DATA/RESPONSE decrypt vs splice() David Howells
2026-05-11 16:07 ` [PATCH net 1/3] rxrpc: Also unshare DATA/RESPONSE packets when paged frags are present David Howells
2026-05-11 16:07 ` [PATCH net 2/3] rxrpc: Fix DATA decrypt vs splice() by copying data to buffer in recvmsg David Howells
2026-05-12  7:58   ` Jeffrey Altman
2026-05-13  8:01     ` David Howells [this message]
2026-05-13  8:13       ` David Howells
2026-05-13  8:38       ` David Laight
2026-05-13  9:48       ` Jeffrey Altman
2026-05-12 13:38   ` David Laight
2026-05-12 16:52     ` David Howells
2026-05-12 21:36       ` David Laight
2026-05-11 16:07 ` [PATCH net 3/3] rxrpc: Fix RESPONSE packet verification to extract skb to a linear buffer David Howells
2026-05-12  8:22   ` Jeffrey Altman
2026-05-13  0:06   ` Jakub Kicinski
2026-05-13  7:35     ` David Howells

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1354628.1778659274@warthog.procyon.org.uk \
    --to=dhowells@redhat.com \
    --cc=davem@davemloft.net \
    --cc=edumazet@google.com \
    --cc=horms@kernel.org \
    --cc=imv4bel@gmail.com \
    --cc=jaltman@auristor.com \
    --cc=jiayuan.chen@linux.dev \
    --cc=kuba@kernel.org \
    --cc=linux-afs@lists.infradead.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=marc.dionne@auristor.com \
    --cc=netdev@vger.kernel.org \
    --cc=pabeni@redhat.com \
    --cc=stable@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox