public inbox for netdev@vger.kernel.org
From: Chuck Lever <cel@kernel.org>
To: john.fastabend@gmail.com, kuba@kernel.org, sd@queasysnail.net
Cc: netdev@vger.kernel.org, kernel-tls-handshake@lists.linux.dev,
	Chuck Lever <chuck.lever@oracle.com>,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH v3 7/8] tls: Restructure tls_sw_read_sock() into submit/deliver phases
Date: Wed, 11 Mar 2026 21:48:03 -0400	[thread overview]
Message-ID: <20260312014804.5083-8-cel@kernel.org> (raw)
In-Reply-To: <20260312014804.5083-1-cel@kernel.org>

From: Chuck Lever <chuck.lever@oracle.com>

Pipelining multiple AEAD operations requires separating decryption
from delivery so that several records can be submitted before any
are passed to the read_actor callback. The main loop in
tls_sw_read_sock() is split into two explicit phases: a submit
phase that decrypts one record onto ctx->rx_list, and a deliver
phase that drains rx_list and passes each cleartext skb to the
read_actor callback.

With a single record per submit phase, behavior is identical to the
previous code. A subsequent patch will extend the submit phase to
pipeline multiple AEAD operations.

Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/tls/tls_sw.c | 79 +++++++++++++++++++++++++-----------------------
 1 file changed, 41 insertions(+), 38 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 7e1560d5ab79..6d54d350bced 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -2354,8 +2354,8 @@ int tls_sw_read_sock(struct sock *sk, read_descriptor_t *desc,
 	struct tls_context *tls_ctx = tls_get_ctx(sk);
 	struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
 	struct tls_prot_info *prot = &tls_ctx->prot_info;
-	struct strp_msg *rxm = NULL;
 	struct sk_buff *skb = NULL;
+	struct strp_msg *rxm;
 	struct sk_psock *psock;
 	size_t flushed_at = 0;
 	bool released = true;
@@ -2380,17 +2380,15 @@ int tls_sw_read_sock(struct sock *sk, read_descriptor_t *desc,
 
 	decrypted = 0;
 	for (;;) {
-		if (!skb_queue_empty(&ctx->rx_list)) {
-			skb = __skb_dequeue(&ctx->rx_list);
-			rxm = strp_msg(skb);
-			tlm = tls_msg(skb);
-		} else {
-			struct tls_decrypt_arg darg;
+		struct tls_decrypt_arg darg;
 
-			/* Drain backlog so segments that arrived while the
-			 * lock was held appear on sk_receive_queue before
-			 * tls_rx_rec_wait waits for a new record.
-			 */
+		/* Phase 1: Submit -- decrypt one record onto rx_list.
+		 * Flush the backlog first so that segments that
+		 * arrived while the lock was held appear on
+		 * sk_receive_queue before tls_rx_rec_wait waits
+		 * for a new record.
+		 */
+		if (skb_queue_empty(&ctx->rx_list)) {
 			sk_flush_backlog(sk);
 			err = tls_rx_rec_wait(sk, NULL, true, released);
 			if (err <= 0)
@@ -2405,38 +2403,43 @@ int tls_sw_read_sock(struct sock *sk, read_descriptor_t *desc,
 			released = tls_read_flush_backlog(sk, prot, INT_MAX,
 							  0, decrypted,
 							  &flushed_at);
-			skb = darg.skb;
+			decrypted += strp_msg(darg.skb)->full_len;
+			tls_rx_rec_release(ctx);
+			__skb_queue_tail(&ctx->rx_list, darg.skb);
+		}
+
+		/* Phase 2: Deliver -- drain rx_list to read_actor */
+		while ((skb = __skb_dequeue(&ctx->rx_list)) != NULL) {
 			rxm = strp_msg(skb);
 			tlm = tls_msg(skb);
-			decrypted += rxm->full_len;
 
-			tls_rx_rec_release(ctx);
-		}
-
-		/* read_sock does not support reading control messages */
-		if (tlm->control != TLS_RECORD_TYPE_DATA) {
-			err = -EINVAL;
-			goto read_sock_requeue;
-		}
-
-		used = read_actor(desc, skb, rxm->offset, rxm->full_len);
-		if (used <= 0) {
-			if (!copied)
-				err = used;
-			goto read_sock_requeue;
-		}
-		copied += used;
-		if (used < rxm->full_len) {
-			rxm->offset += used;
-			rxm->full_len -= used;
-			if (!desc->count)
+			/* read_sock does not support reading control messages */
+			if (tlm->control != TLS_RECORD_TYPE_DATA) {
+				err = -EINVAL;
 				goto read_sock_requeue;
-		} else {
-			consume_skb(skb);
-			skb = NULL;
-			if (!desc->count)
-				break;
+			}
+
+			used = read_actor(desc, skb, rxm->offset,
+					  rxm->full_len);
+			if (used <= 0) {
+				if (!copied)
+					err = used;
+				goto read_sock_requeue;
+			}
+			copied += used;
+			if (used < rxm->full_len) {
+				rxm->offset += used;
+				rxm->full_len -= used;
+				if (!desc->count)
+					goto read_sock_requeue;
+			} else {
+				consume_skb(skb);
+				skb = NULL;
+			}
 		}
+		/* Drain all of rx_list before honoring !desc->count */
+		if (!desc->count)
+			break;
 	}
 
 read_sock_end:
-- 
2.52.0


Thread overview: 15+ messages
2026-03-12  1:47 [PATCH v3 0/8] TLS read_sock performance scalability Chuck Lever
2026-03-12  1:47 ` [PATCH v3 1/8] tls: Factor tls_decrypt_async_drain() from recvmsg Chuck Lever
2026-03-12  4:34   ` Alistair Francis
2026-03-16 10:13   ` Sabrina Dubroca
2026-03-12  1:47 ` [PATCH v3 2/8] tls: Factor tls_rx_decrypt_record() helper Chuck Lever
2026-03-12  4:35   ` Alistair Francis
2026-03-16 10:20   ` Sabrina Dubroca
2026-03-17  7:06     ` Hannes Reinecke
2026-03-12  1:47 ` [PATCH v3 3/8] tls: Fix dangling skb pointer in tls_sw_read_sock() Chuck Lever
2026-03-12  1:48 ` [PATCH v3 4/8] tls: Factor tls_strp_msg_release() from tls_strp_msg_done() Chuck Lever
2026-03-12  1:48 ` [PATCH v3 5/8] tls: Suppress spurious saved_data_ready on all receive paths Chuck Lever
2026-03-12  1:48 ` [PATCH v3 6/8] tls: Flush backlog before tls_rx_rec_wait in read_sock Chuck Lever
2026-03-16 17:17   ` Sabrina Dubroca
2026-03-12  1:48 ` Chuck Lever [this message]
2026-03-12  1:48 ` [PATCH v3 8/8] tls: Enable batch async decryption " Chuck Lever
