public inbox for netdev@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH net-next v5 0/6] TLS read_sock performance scalability
@ 2026-03-24 12:53 Chuck Lever
  2026-03-24 12:53 ` [PATCH net-next v5 1/6] tls: Purge async_hold in tls_decrypt_async_wait() Chuck Lever
                   ` (6 more replies)
  0 siblings, 7 replies; 12+ messages in thread
From: Chuck Lever @ 2026-03-24 12:53 UTC (permalink / raw)
  To: john.fastabend, kuba, sd
  Cc: netdev, kernel-tls-handshake, Chuck Lever, Hannes Reinecke,
	Alistair Francis

I'd like to encourage the in-kernel kTLS consumers (i.e., NFS and
NVMe/TCP) to converge on the use of read_sock. When I suggested
this to Hannes, he reported a number of nagging performance
scalability issues with read_sock. This series attempts to run
those issues down and get them fixed before we convert the above
sock_recvmsg consumers over to read_sock.

Batch async decryption and its submit/deliver scaffolding were
dropped from this series because async_capable is always false
for TLS 1.3, which NFS and NVMe/TCP both require. Async crypto
support for TLS 1.3 is a prerequisite for revisiting that work.

---
Changes since v4:
- Drop batch async decryption and submit/deliver restructure:
  async_capable is always false for TLS 1.3, so the new code
  was unreachable for NFS and NVMe/TCP
- Purge async_hold directly in tls_decrypt_async_wait() and drop
  the tls_decrypt_async_drain() wrapper
- Merge tls_strp_check_rcv_quiet() into tls_strp_check_rcv() with
  a bool wake parameter; fix lost wakeup on the recvmsg exit path

Changes since v3:
- Clarify why tls_decrypt_async_drain() is separate from _wait()
- Fold tls_err_abort() into tls_rx_one_record(), drop tls_rx_decrypt_record()
- Move backlog flush into tls_rx_rec_wait() so all RX paths benefit

Changes since v2:
- Fix short read self tests

Changes since v1:
- Add C11 reference
- Extend data_ready reduction to recvmsg and splice
- Restructure read_sock and recvmsg using shared helpers

---
Chuck Lever (6):
      tls: Purge async_hold in tls_decrypt_async_wait()
      tls: Abort the connection on decrypt failure
      tls: Fix dangling skb pointer in tls_sw_read_sock()
      tls: Factor tls_strp_msg_release() from tls_strp_msg_done()
      tls: Suppress spurious saved_data_ready on all receive paths
      tls: Flush backlog before waiting for a new record

 net/tls/tls.h      |  4 ++--
 net/tls/tls_main.c |  2 +-
 net/tls/tls_strp.c | 42 +++++++++++++++++++++++++++++++-----------
 net/tls/tls_sw.c   | 51 ++++++++++++++++++++++++++++++---------------------
 4 files changed, 64 insertions(+), 35 deletions(-)
---
base-commit: fb78a629b4f0eb399b413f6c093a3da177b3a4eb
change-id: 20260317-tls-read-sock-a0022c9df265

Best regards,
-- 
Chuck Lever


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2026-03-26 10:32 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-24 12:53 [PATCH net-next v5 0/6] TLS read_sock performance scalability Chuck Lever
2026-03-24 12:53 ` [PATCH net-next v5 1/6] tls: Purge async_hold in tls_decrypt_async_wait() Chuck Lever
2026-03-26 10:32   ` Hannes Reinecke
2026-03-24 12:53 ` [PATCH net-next v5 2/6] tls: Abort the connection on decrypt failure Chuck Lever
2026-03-24 12:53 ` [PATCH net-next v5 3/6] tls: Fix dangling skb pointer in tls_sw_read_sock() Chuck Lever
2026-03-24 12:53 ` [PATCH net-next v5 4/6] tls: Factor tls_strp_msg_release() from tls_strp_msg_done() Chuck Lever
2026-03-24 12:53 ` [PATCH net-next v5 5/6] tls: Suppress spurious saved_data_ready on all receive paths Chuck Lever
2026-03-24 12:53 ` [PATCH net-next v5 6/6] tls: Flush backlog before waiting for a new record Chuck Lever
2026-03-24 16:18   ` Sabrina Dubroca
2026-03-24 19:07     ` Chuck Lever
2026-03-26  8:59       ` Sabrina Dubroca
2026-03-26  9:10 ` [PATCH net-next v5 0/6] TLS read_sock performance scalability patchwork-bot+netdevbpf
