From: Sabrina Dubroca <sd@queasysnail.net>
To: Ralf Lici <ralf@mandelbit.com>
Cc: Antonio Quartulli <antonio@openvpn.net>,
netdev@vger.kernel.org, Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>
Subject: Re: [PATCH net-next 6/8] ovpn: consolidate crypto allocations in one chunk
Date: Thu, 13 Nov 2025 14:48:32 +0100
Message-ID: <aRXhsFpwTEfne0vF@krikkit>
In-Reply-To: <7315d47ac4cd7510ad9df7760e04c49bddd92383.camel@mandelbit.com>
2025-11-13, 11:35:07 +0100, Ralf Lici wrote:
> On Wed, 2025-11-12 at 17:29 +0100, Sabrina Dubroca wrote:
> > 2025-11-11, 22:47:39 +0100, Antonio Quartulli wrote:
> > > +static unsigned int ovpn_aead_crypto_tmp_size(struct crypto_aead *tfm,
> > > +					       const unsigned int nfrags)
> > > +{
> > > + unsigned int len = crypto_aead_ivsize(tfm);
> > > +
> > > + if (likely(len)) {
> >
> > Is that right?
> >
> > Previously iv was reserved with a constant size (OVPN_NONCE_SIZE), and
> > we're always going to write some data into ->iv via
> > ovpn_pktid_aead_write, but now we're only reserving the crypto
> > algorithm's IV size (which appears to be 12, i.e. OVPN_NONCE_SIZE, for
> > both chachapoly and gcm(aes), so maybe it doesn't matter).
>
> Exactly, I checked and both gcm(aes) and chachapoly return an IV size
> equal to OVPN_NONCE_SIZE, as you noted. I just thought it wouldn't hurt
> to make the function a bit more generic in case we ever support
> algorithms without an IV in the future, given that OVPN_NONCE_SIZE
> matches the ivsize in all current cases.
IMO there's not much to gain here, since the rest of the code
(ovpn_aead_encrypt/decrypt) isn't ready for it. It may even be more
confusing, since this bit would look generic while the rest isn't, and
figuring out why packets aren't being encrypted/decrypted correctly
could be a bit painful.
> Also, there's a check in ovpn_aead_init to ensure that
> crypto_aead_ivsize returns the expected value, so we're covered if
> anything changes unexpectedly.
Ah, good.
Then I would prefer to just make ovpn_aead_crypto_tmp_size always use
OVPN_NONCE_SIZE, and maybe add a comment in ovpn_aead_init referencing
ovpn_aead_crypto_tmp_size.
Something like:
/* basic AEAD assumption:
 * all current algorithms use an IV of OVPN_NONCE_SIZE bytes.
 * ovpn_aead_crypto_tmp_size() and ovpn_aead_encrypt/decrypt()
 * rely on this.
 */
Or a
DEBUG_NET_WARN_ON_ONCE(OVPN_NONCE_SIZE != crypto_aead_ivsize(tfm));
in ovpn_aead_crypto_tmp_size, which would fire if the check in
ovpn_aead_init is ever removed.
> > > @@ -130,11 +225,7 @@ int ovpn_aead_encrypt(struct ovpn_peer *peer, struct ovpn_crypto_key_slot *ks,
> > > /* AEAD Additional data */
> > > sg_set_buf(sg, skb->data, OVPN_AAD_SIZE);
> > >
> > > - req = aead_request_alloc(ks->encrypt, GFP_ATOMIC);
> > > - if (unlikely(!req))
> > > - return -ENOMEM;
> > > -
> > > - ovpn_skb_cb(skb)->req = req;
> > > + ovpn_skb_cb(skb)->crypto_tmp = tmp;
> >
> > That should be done immediately after the allocation, so that any
> > failure before this point (skb_to_sgvec_nomark, ovpn_pktid_xmit_next)
> > doesn't leak this blob? ovpn_aead_encrypt returns directly and lets
> > ovpn_encrypt_post handle the error and free the memory, but only after
> > ->crypto_tmp has been set.
> >
> > (same thing on the decrypt path)
>
> Right, will fix both paths.
Thanks.
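The ordering fix can be sketched in userspace like this; every name
below is a stand-in for the real ovpn helpers (mock_post plays the role
of ovpn_encrypt_post, mock_encrypt that of ovpn_aead_encrypt), not
actual kernel code:

```c
#include <assert.h>
#include <stdlib.h>

struct mock_cb { void *crypto_tmp; };

static int nr_frees;

/* stand-in for ovpn_encrypt_post(): owns cleanup on error */
static void mock_post(struct mock_cb *cb, int err)
{
	if (err && cb->crypto_tmp) {
		free(cb->crypto_tmp);
		cb->crypto_tmp = NULL;
		nr_frees++;
	}
}

/* stand-in for ovpn_aead_encrypt() with the fixed ordering */
static int mock_encrypt(struct mock_cb *cb, int fail_later)
{
	void *tmp = malloc(64);

	if (!tmp)
		return -1;	/* -ENOMEM */
	/* attach immediately after allocation: any failure below is
	 * now covered by the common error path, no window for a leak */
	cb->crypto_tmp = tmp;

	if (fail_later)		/* e.g. skb_to_sgvec_nomark() failing */
		return -2;
	return 0;
}
```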
--
Sabrina