From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Wilfred Mallawa,
	Hannes Reinecke, Keith Busch, Sasha Levin
Subject: [PATCH 6.12 093/136] nvme/tcp: handle tls partially sent records in write_space()
Date: Tue, 21 Oct 2025 21:51:21 +0200
Message-ID: <20251021195038.194730851@linuxfoundation.org>
X-Mailer: git-send-email 2.51.1
In-Reply-To: <20251021195035.953989698@linuxfoundation.org>
References: <20251021195035.953989698@linuxfoundation.org>
User-Agent: quilt/0.69
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.12-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Wilfred Mallawa

[ Upstream commit 5a869d017793399fd1d2609ff27e900534173eb3 ]

With TLS enabled, records that are encrypted and appended to the TLS TX
list can fail to be retried if the underlying TCP socket is busy, for
example when tcp_sendmsg_locked() returns EAGAIN. The NVMe TCP driver is
not aware of this, because the TLS layer has already successfully
generated the record.

Typically, the TLS write_space() callback would ensure such records are
retried, but in the NVMe TCP host driver write_space() invokes
nvme_tcp_write_space(). As a result, a partially sent record left on the
TLS TX list is never retried and eventually times out.

Fix this by calling queue->write_space(), which calls into the TLS layer
to retry any pending records.
Fixes: be8e82caa685 ("nvme-tcp: enable TLS handshake upcall")
Signed-off-by: Wilfred Mallawa
Reviewed-by: Hannes Reinecke
Signed-off-by: Keith Busch
Signed-off-by: Sasha Levin
---
 drivers/nvme/host/tcp.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 83a6b18b01ada..77df3432dfb78 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1075,6 +1075,9 @@ static void nvme_tcp_write_space(struct sock *sk)
 	queue = sk->sk_user_data;
 	if (likely(queue && sk_stream_is_writeable(sk))) {
 		clear_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+		/* Ensure pending TLS partial records are retried */
+		if (nvme_tcp_queue_tls(queue))
+			queue->write_space(sk);
 		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
 	}
 	read_unlock_bh(&sk->sk_callback_lock);
-- 
2.51.0
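A note on why calling queue->write_space(sk) reaches the TLS layer: the
host driver saves the socket's previous sk_write_space callback into
queue->write_space before installing its own, and on a kTLS socket that
saved pointer is the TLS layer's handler, which retries partially sent
records. The sketch below illustrates this save-and-chain pattern in
kernel-style C. It is a minimal illustration, not the driver's actual
code: my_queue, my_setup_sock_callbacks(), my_write_space() and the
tls_enabled flag are hypothetical names standing in for the driver's own
identifiers and for nvme_tcp_queue_tls().

#include <net/sock.h>		/* struct sock, sk_stream_is_writeable() */
#include <linux/workqueue.h>	/* struct work_struct, schedule_work() */

/* Hypothetical per-connection state; carries only what the sketch needs. */
struct my_queue {
	struct socket		*sock;
	struct work_struct	io_work;
	bool			tls_enabled;
	/* Saved callback: on a kTLS socket this is the TLS layer's handler. */
	void			(*saved_write_space)(struct sock *sk);
};

static void my_write_space(struct sock *sk)
{
	struct my_queue *queue;

	read_lock_bh(&sk->sk_callback_lock);
	queue = sk->sk_user_data;
	if (likely(queue && sk_stream_is_writeable(sk))) {
		clear_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
		/* Chain to the saved callback so TLS can retry partial records. */
		if (queue->tls_enabled)
			queue->saved_write_space(sk);
		schedule_work(&queue->io_work);
	}
	read_unlock_bh(&sk->sk_callback_lock);
}

static void my_setup_sock_callbacks(struct my_queue *queue)
{
	struct sock *sk = queue->sock->sk;

	write_lock_bh(&sk->sk_callback_lock);
	sk->sk_user_data = queue;
	/* Remember whatever was installed before us (TLS's handler when kTLS is active). */
	queue->saved_write_space = sk->sk_write_space;
	sk->sk_write_space = my_write_space;
	write_unlock_bh(&sk->sk_callback_lock);
}

Chaining to the saved callback rather than discarding it preserves the TLS
layer's own retry bookkeeping, which is exactly what the one-line guard in
the hunk above relies on.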