public inbox for netdev@vger.kernel.org
From: Chuck Lever <cel@kernel.org>
To: john.fastabend@gmail.com, kuba@kernel.org, sd@queasysnail.net
Cc: <netdev@vger.kernel.org>, <kernel-tls-handshake@lists.linux.dev>,
	Chuck Lever <chuck.lever@oracle.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH v1 4/6] tls: Flush backlog before tls_rx_rec_wait in read_sock
Date: Thu,  5 Mar 2026 16:14:00 -0500	[thread overview]
Message-ID: <20260305211402.39408-5-cel@kernel.org> (raw)
In-Reply-To: <20260305211402.39408-1-cel@kernel.org>

From: Chuck Lever <chuck.lever@oracle.com>

While lock_sock is held during read_sock, incoming TCP segments
land on sk->sk_backlog rather than sk->sk_receive_queue.
tls_rx_rec_wait() inspects only sk_receive_queue, so backlog
data remains invisible until release_sock() drains it, forcing
an extra workqueue cycle for records that arrive during
decryption.

Calling sk_flush_backlog() before tls_rx_rec_wait() moves
backlog data into sk_receive_queue, where tls_strp_check_rcv()
can parse it immediately. The existing tls_read_flush_backlog
call after decryption is retained for TCP window management.

Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/tls/tls_sw.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index a5905f4c1ae2..70a9c2402ea1 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -2371,6 +2371,11 @@ int tls_sw_read_sock(struct sock *sk, read_descriptor_t *desc,
 		} else {
 			struct tls_decrypt_arg darg;
 
+			/* Drain backlog so segments that arrived while the
+			 * lock was held appear on sk_receive_queue before
+			 * tls_rx_rec_wait waits for a new record.
+			 */
+			sk_flush_backlog(sk);
 			err = tls_rx_rec_wait(sk, NULL, true, released);
 			if (err <= 0)
 				goto read_sock_end;
-- 
2.53.0


Thread overview:
2026-03-05 21:13 [PATCH v1 0/6] TLS read_sock performance scalability Chuck Lever
2026-03-05 21:13 ` [PATCH v1 1/6] tls: Fix dangling skb pointer in tls_sw_read_sock() Chuck Lever
2026-03-05 22:19   ` Jakub Kicinski
2026-03-06 14:33     ` Chuck Lever
2026-03-05 21:13 ` [PATCH v1 2/6] tls: Factor tls_strp_msg_release() from tls_strp_msg_done() Chuck Lever
2026-03-05 21:13 ` [PATCH v1 3/6] tls: Suppress spurious saved_data_ready during read_sock Chuck Lever
2026-03-05 21:14 ` Chuck Lever [this message]
2026-03-05 21:14 ` [PATCH v1 5/6] tls: Restructure tls_sw_read_sock() into submit/deliver phases Chuck Lever
2026-03-05 21:14 ` [PATCH v1 6/6] tls: Enable batch async decryption in read_sock Chuck Lever
2026-03-10  3:14 ` [PATCH v1 0/6] TLS read_sock performance scalability Jakub Kicinski
