From: Sabrina Dubroca <sd@queasysnail.net>
To: Chuck Lever <cel@kernel.org>
Cc: john.fastabend@gmail.com, Jakub Kicinski <kuba@kernel.org>,
netdev@vger.kernel.org, kernel-tls-handshake@lists.linux.dev,
Chuck Lever <chuck.lever@oracle.com>,
Hannes Reinecke <hare@suse.de>
Subject: Re: [PATCH PATCH net-next v4 8/8] tls: Enable batch async decryption in read_sock
Date: Tue, 24 Mar 2026 23:58:13 +0100
Message-ID: <acMXBSfCznV58emu@krikkit>
In-Reply-To: <b48d4311-51f2-4e71-8324-303af224bac6@kernel.org>
2026-03-24, 09:17:57 -0400, Chuck Lever wrote:
> On 3/23/26 7:08 PM, Sabrina Dubroca wrote:
> > 2026-03-23, 11:04:16 -0400, Chuck Lever wrote:
> >>
> >> On Mon, Mar 23, 2026, at 10:14 AM, Sabrina Dubroca wrote:
> >>> 2026-03-17, 11:04:21 -0400, Chuck Lever wrote:
> >>>> +/* Bound on concurrent async AEAD submissions per read_sock
> >>>> + * call. Chosen to fill typical hardware crypto pipelines
> >>>> + * without excessive memory consumption (each in-flight record
> >>>> + * holds one cleartext skb plus its AEAD request context).
> >>>> + */
> >>>> +#define TLS_READ_SOCK_BATCH 16
> >>>
> >>> I suspect that at some point, we'll have a request to make this
> >>> configurable (maybe system-wide, maybe by socket?).
> >>
> >> I appreciate your careful and close review. The series has
> >> improved significantly.
> >>
> >> I will admit that the current value (16) is arbitrary. I agree
> >> that someone might want to modify this value. At this point,
> >> however, the constant is straightforward and it is still quite
> >> easy to promote to a tunable later if that proves to be needed.
> >
> > Agreed.
> >
> >> The right interface for this depends on kTLS consumer needs
> >> that aren't clear (to me) yet.
> >
> > In this case (read_sock), the kTLS consumer is NVMe/TCP etc, and
> > specifically users of those features with crypto acceleration
> > cards. I'm not familiar with either.
> >
> >> But let me know if you have a
> >> preferred API mechanism or a specific use case in mind, or if
> >> there is a netdev policy that should guide the introduction
> >> of a suitable API for this purpose.
> >
> > Nothing specific, I just thought I'd mention it since I was replying
> > to the patch anyway. I think at this stage "it seems easy to promote
> > to a tunable later" is enough consideration (just to avoid getting
> > trapped in some API (or lack thereof) and unable to change it, but I
> > agree that it shouldn't be a problem here).
>
> I looked into this a little more yesterday before noticing that
> async crypto was disabled for TLS 1.3.
>
> The hardware crypto engines do not surface a concurrency limit to
> their consumers, but their ring sizes are typically much larger than
> the batch limit of 16 I've chosen here. We can't rely on them to
> tell kTLS where to set that limit.
>
> However, the loop was structured to continue until the batch limit
> was hit or reading gets -EBUSY. Either way 16 seems to be a safe
> but fairly conservative setting. For the larger rings it might cause
> a decrypt pipeline bubble.
>
> So the true purpose of the low limit is to constrain the amount of
> memory tls_read_sock sets aside for decryption in progress. That's
> going to be about 256KB per socket. Yes, that could be made larger
> or contingent upon TCP socket buffer sizes, for example.
If you end up resending those changes in the future (after/together
with async support for 1.3), it would be nice to record some of this
in the commit message, to justify the limit you've chosen.
About async for 1.3: I think it would be quite some work to implement,
since we don't know the record type until after we decrypt the record.
If we get a rekey, we'll have already submitted a bunch of other
records that will fail decryption, so we'll need to roll back and
resubmit everything after the rekey, once userspace has given us the
new key. I'm not sure whether other non-DATA records would be as
problematic (or maybe worse).
Also, async for 1.2 has been a pain (see the many fixes over the past
few years), and 1.3 looks even more complex.
I don't have access to HW that would benefit from it; do you have
benchmarks comparing with-async vs without-async? (Obviously for
non-NVMe/non-NFS use cases, if you test those at all.)
--
Sabrina
Thread overview: 25+ messages
2026-03-17 15:04 [PATCH net-next v4 0/8] TLS read_sock performance scalability Chuck Lever
2026-03-17 15:04 ` [PATCH PATCH net-next v4 1/8] tls: Factor tls_decrypt_async_drain() from recvmsg Chuck Lever
2026-03-17 19:55 ` Breno Leitao
2026-03-19 17:21 ` Sabrina Dubroca
2026-03-20 1:03 ` Chuck Lever
2026-03-17 15:04 ` [PATCH PATCH net-next v4 2/8] tls: Abort the connection on decrypt failure Chuck Lever
2026-03-23 10:22 ` Sabrina Dubroca
2026-03-17 15:04 ` [PATCH PATCH net-next v4 3/8] tls: Fix dangling skb pointer in tls_sw_read_sock() Chuck Lever
2026-03-17 15:04 ` [PATCH PATCH net-next v4 4/8] tls: Factor tls_strp_msg_release() from tls_strp_msg_done() Chuck Lever
2026-03-17 15:04 ` [PATCH PATCH net-next v4 5/8] tls: Suppress spurious saved_data_ready on all receive paths Chuck Lever
2026-03-23 10:32 ` Sabrina Dubroca
2026-03-17 15:04 ` [PATCH PATCH net-next v4 6/8] tls: Flush backlog before waiting for a new record Chuck Lever
2026-03-17 15:04 ` [PATCH PATCH net-next v4 7/8] tls: Restructure tls_sw_read_sock() into submit/deliver phases Chuck Lever
2026-03-23 11:31 ` Sabrina Dubroca
2026-03-17 15:04 ` [PATCH PATCH net-next v4 8/8] tls: Enable batch async decryption in read_sock Chuck Lever
2026-03-23 14:14 ` Sabrina Dubroca
2026-03-23 15:04 ` Chuck Lever
2026-03-23 23:08 ` Sabrina Dubroca
2026-03-24 13:17 ` Chuck Lever
2026-03-24 22:58 ` Sabrina Dubroca [this message]
2026-03-23 15:53 ` Chuck Lever
2026-03-23 21:28 ` Chuck Lever
2026-03-23 21:41 ` Jakub Kicinski
2026-03-23 22:48 ` Sabrina Dubroca
2026-03-24 12:44 ` Chuck Lever