public inbox for linux-nvme@lists.infradead.org
 help / color / mirror / Atom feed
* [PATCH] nvmet-tcp: fix race between ICReq handling and queue teardown
@ 2026-04-08  7:51 Chaitanya Kulkarni
       [not found] ` <BN0PR01MB7007C5B4178BCD8619FC5D91C85B2@BN0PR01MB7007.prod.exchangelabs.com>
  2026-04-08 19:22 ` Keith Busch
  0 siblings, 2 replies; 3+ messages in thread
From: Chaitanya Kulkarni @ 2026-04-08  7:51 UTC (permalink / raw)
  To: skumar47; +Cc: hch, sagi, linux-nvme, kbusch, Chaitanya Kulkarni, stable

nvmet_tcp_handle_icreq() updates queue->state after sending an
Initialization Connection Response (ICResp), but it does so without
serializing against target-side queue teardown.

If an NVMe/TCP host sends an Initialization Connection Request
(ICReq) and immediately closes the connection, target-side teardown
may start in softirq context before io_work drains the already
buffered ICReq. In that case, nvmet_tcp_schedule_release_queue()
sets queue->state to NVMET_TCP_Q_DISCONNECTING and drops the queue
reference under state_lock.

If io_work later processes that ICReq, nvmet_tcp_handle_icreq() can
still overwrite the state back to NVMET_TCP_Q_LIVE. That defeats the
DISCONNECTING-state guard in nvmet_tcp_schedule_release_queue() and
allows a later socket state change to re-enter teardown and issue a
second kref_put() on an already released queue.

The ICResp send failure path has the same problem. If teardown has
already moved the queue to DISCONNECTING, a send error can still
overwrite the state with NVMET_TCP_Q_FAILED, again reopening the
window for a second teardown path to drop the queue reference.

Fix this by serializing both post-send state transitions with
state_lock and bailing out if teardown has already started.

Use -ESHUTDOWN as an internal sentinel for that bail-out path rather
than propagating it as a transport error like -ECONNRESET. Keep
nvmet_tcp_socket_error() setting rcv_state to NVMET_TCP_RECV_ERR before
honoring that sentinel so receive-side parsing stays quiesced until the
existing release path completes.

Reported-by: Shivam Kumar <skumar47@syr.edu>
Fixes: c46a6465bac2 ("nvmet-tcp: add NVMe over TCP target driver")
Cc: stable@vger.kernel.org
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---

Hi Shivam,

This patch is different from the one I posted earlier.

Posted patch :-


		iov.iov_len = sizeof(*icresp);
		ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len);
		if (ret < 0) {
	-		queue->state = NVMET_TCP_Q_FAILED;
	+		spin_lock_bh(&queue->state_lock);
	+		if (queue->state != NVMET_TCP_Q_DISCONNECTING)
	+			queue->state = NVMET_TCP_Q_FAILED;
	+		spin_unlock_bh(&queue->state_lock);
			return ret; /* queue removal will cleanup */
		}

This patch :-

		iov.iov_len = sizeof(*icresp);
		ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len);
		if (ret < 0) {
	+		spin_lock_bh(&queue->state_lock);
	+		if (queue->state == NVMET_TCP_Q_DISCONNECTING) {
	+			spin_unlock_bh(&queue->state_lock);
	+			return -ESHUTDOWN;
	+		}
			queue->state = NVMET_TCP_Q_FAILED;
	+		spin_unlock_bh(&queue->state_lock);
			return ret; /* queue removal will cleanup */
		}

It would be great if you could provide a Tested-by tag on this patch
so we can merge this fix as well.

-ck

---
 drivers/nvme/target/tcp.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 69e971b179ae..98b2ce9a70ca 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -407,7 +407,22 @@ static void nvmet_tcp_fatal_error(struct nvmet_tcp_queue *queue)
 
 static void nvmet_tcp_socket_error(struct nvmet_tcp_queue *queue, int status)
 {
+	/*
+	 * Keep rcv_state at RECV_ERR even for the internal -ESHUTDOWN path.
+	 * nvmet_tcp_handle_icreq() can return -ESHUTDOWN after the ICReq has
+	 * already been consumed and queue teardown has started.
+	 *
+	 * If nvmet_tcp_data_ready() or nvmet_tcp_write_space() queues
+	 * nvmet_tcp_io_work() again before nvmet_tcp_release_queue_work()
+	 * cancels it, the queue must not keep that old receive state.
+	 * Otherwise the next nvmet_tcp_io_work() run can reach
+	 * nvmet_tcp_done_recv_pdu() and try to handle the same ICReq again.
+	 *
+	 * That is why queue->rcv_state needs to be updated before we return.
+	 */
 	queue->rcv_state = NVMET_TCP_RECV_ERR;
+	if (status == -ESHUTDOWN)
+		return;
 	if (status == -EPIPE || status == -ECONNRESET)
 		kernel_sock_shutdown(queue->sock, SHUT_RDWR);
 	else
@@ -922,11 +937,24 @@ static int nvmet_tcp_handle_icreq(struct nvmet_tcp_queue *queue)
 	iov.iov_len = sizeof(*icresp);
 	ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len);
 	if (ret < 0) {
+		spin_lock_bh(&queue->state_lock);
+		if (queue->state == NVMET_TCP_Q_DISCONNECTING) {
+			spin_unlock_bh(&queue->state_lock);
+			return -ESHUTDOWN;
+		}
 		queue->state = NVMET_TCP_Q_FAILED;
+		spin_unlock_bh(&queue->state_lock);
 		return ret; /* queue removal will cleanup */
 	}
 
+	spin_lock_bh(&queue->state_lock);
+	if (queue->state == NVMET_TCP_Q_DISCONNECTING) {
+		spin_unlock_bh(&queue->state_lock);
	+		/* Tell nvmet_tcp_socket_error() that teardown is already in progress. */
+		return -ESHUTDOWN;
+	}
 	queue->state = NVMET_TCP_Q_LIVE;
+	spin_unlock_bh(&queue->state_lock);
 	nvmet_prepare_receive_pdu(queue);
 	return 0;
 }
-- 
2.39.5



^ permalink raw reply related	[flat|nested] 3+ messages in thread

* Re: Fw: [PATCH] nvmet-tcp: fix race between ICReq handling and queue teardown
       [not found] ` <BN0PR01MB7007C5B4178BCD8619FC5D91C85B2@BN0PR01MB7007.prod.exchangelabs.com>
@ 2026-04-08 16:50   ` Shivam Kumar
  0 siblings, 0 replies; 3+ messages in thread
From: Shivam Kumar @ 2026-04-08 16:50 UTC (permalink / raw)
  To: Shivam Kumar, Chaitanya Kulkarni
  Cc: Christoph Hellwig, Sagi Grimberg, linux-nvme@lists.infradead.org,
	kbusch, kch, stable

_______________________________________
> From: Chaitanya Kulkarni <kch@nvidia.com>
> Sent: 08 April 2026 03:51
> To: Shivam Kumar
> Cc: hch@lst.de; sagi@grimberg.me; linux-nvme@lists.infradead.org; kbusch@kernel.org; Chaitanya Kulkarni; stable@vger.kernel.org
> Subject: [PATCH] nvmet-tcp: fix race between ICReq handling and queue teardown
>
> nvmet_tcp_handle_icreq() updates queue->state after sending an
> Initialization Connection Response (ICResp), but it does so without
> serializing against target-side queue teardown.
>
> If an NVMe/TCP host sends an Initialization Connection Request
> (ICReq) and immediately closes the connection, target-side teardown
> may start in softirq context before io_work drains the already
> buffered ICReq. In that case, nvmet_tcp_schedule_release_queue()
> sets queue->state to NVMET_TCP_Q_DISCONNECTING and drops the queue
> reference under state_lock.
>
> If io_work later processes that ICReq, nvmet_tcp_handle_icreq() can
> still overwrite the state back to NVMET_TCP_Q_LIVE. That defeats the
> DISCONNECTING-state guard in nvmet_tcp_schedule_release_queue() and
> allows a later socket state change to re-enter teardown and issue a
> second kref_put() on an already released queue.
>
> The ICResp send failure path has the same problem. If teardown has
> already moved the queue to DISCONNECTING, a send error can still
> overwrite the state with NVMET_TCP_Q_FAILED, again reopening the
> window for a second teardown path to drop the queue reference.
>
> Fix this by serializing both post-send state transitions with
> state_lock and bailing out if teardown has already started.
>
> Use -ESHUTDOWN as an internal sentinel for that bail-out path rather
> than propagating it as a transport error like -ECONNRESET. Keep
> nvmet_tcp_socket_error() setting rcv_state to NVMET_TCP_RECV_ERR before
> honoring that sentinel so receive-side parsing stays quiesced until the
> existing release path completes.
>
> Reported-by: Shivam Kumar <skumar47@syr.edu>
> Fixes: c46a6465bac2 ("nvmet-tcp: add NVMe over TCP target driver")
> Cc: stable@vger.kernel.org
> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
> ---
>
> Hi Shivam,
>
> This patch is different from the one I posted earlier.
>
> Posted patch :-
>
>
>                 iov.iov_len = sizeof(*icresp);
>                 ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len);
>                 if (ret < 0) {
>         -               queue->state = NVMET_TCP_Q_FAILED;
>         +               spin_lock_bh(&queue->state_lock);
>         +               if (queue->state != NVMET_TCP_Q_DISCONNECTING)
>         +                       queue->state = NVMET_TCP_Q_FAILED;
>         +               spin_unlock_bh(&queue->state_lock);
>                         return ret; /* queue removal will cleanup */
>                 }
>
> This patch :-
>
>                 iov.iov_len = sizeof(*icresp);
>                 ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len);
>                 if (ret < 0) {
>         +               spin_lock_bh(&queue->state_lock);
>         +               if (queue->state == NVMET_TCP_Q_DISCONNECTING) {
>         +                       spin_unlock_bh(&queue->state_lock);
>         +                       return -ESHUTDOWN;
>         +               }
>                         queue->state = NVMET_TCP_Q_FAILED;
>         +               spin_unlock_bh(&queue->state_lock);
>                         return ret; /* queue removal will cleanup */
>                 }
>
> It would be great if you could provide a Tested-by tag on this patch
> so we can merge this fix as well.
>
> -ck
>
> ---
>  drivers/nvme/target/tcp.c | 28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
>
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index 69e971b179ae..98b2ce9a70ca 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -407,7 +407,22 @@ static void nvmet_tcp_fatal_error(struct nvmet_tcp_queue *queue)
>
>  static void nvmet_tcp_socket_error(struct nvmet_tcp_queue *queue, int status)
>  {
> +       /*
> +        * Keep rcv_state at RECV_ERR even for the internal -ESHUTDOWN path.
> +        * nvmet_tcp_handle_icreq() can return -ESHUTDOWN after the ICReq has
> +        * already been consumed and queue teardown has started.
> +        *
> +        * If nvmet_tcp_data_ready() or nvmet_tcp_write_space() queues
> +        * nvmet_tcp_io_work() again before nvmet_tcp_release_queue_work()
> +        * cancels it, the queue must not keep that old receive state.
> +        * Otherwise the next nvmet_tcp_io_work() run can reach
> +        * nvmet_tcp_done_recv_pdu() and try to handle the same ICReq again.
> +        *
> +        * That is why queue->rcv_state needs to be updated before we return.
> +        */
>         queue->rcv_state = NVMET_TCP_RECV_ERR;
> +       if (status == -ESHUTDOWN)
> +               return;
>         if (status == -EPIPE || status == -ECONNRESET)
>                 kernel_sock_shutdown(queue->sock, SHUT_RDWR);
>         else
> @@ -922,11 +937,24 @@ static int nvmet_tcp_handle_icreq(struct nvmet_tcp_queue *queue)
>         iov.iov_len = sizeof(*icresp);
>         ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len);
>         if (ret < 0) {
> +               spin_lock_bh(&queue->state_lock);
> +               if (queue->state == NVMET_TCP_Q_DISCONNECTING) {
> +                       spin_unlock_bh(&queue->state_lock);
> +                       return -ESHUTDOWN;
> +               }
>                 queue->state = NVMET_TCP_Q_FAILED;
> +               spin_unlock_bh(&queue->state_lock);
>                 return ret; /* queue removal will cleanup */
>         }
>
> +       spin_lock_bh(&queue->state_lock);
> +       if (queue->state == NVMET_TCP_Q_DISCONNECTING) {
> +               spin_unlock_bh(&queue->state_lock);
> +               /* Tell nvmet_tcp_socket_error() that teardown is already in progress. */
> +               return -ESHUTDOWN;
> +       }
>         queue->state = NVMET_TCP_Q_LIVE;
> +       spin_unlock_bh(&queue->state_lock);
>         nvmet_prepare_receive_pdu(queue);
>         return 0;
>  }
> --
> 2.39.5
>
I tested this updated patch - the handle_icreq race is fixed.
Tested-by: Shivam Kumar <kumar.shivam43666@gmail.com>


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [PATCH] nvmet-tcp: fix race between ICReq handling and queue teardown
  2026-04-08  7:51 [PATCH] nvmet-tcp: fix race between ICReq handling and queue teardown Chaitanya Kulkarni
       [not found] ` <BN0PR01MB7007C5B4178BCD8619FC5D91C85B2@BN0PR01MB7007.prod.exchangelabs.com>
@ 2026-04-08 19:22 ` Keith Busch
  1 sibling, 0 replies; 3+ messages in thread
From: Keith Busch @ 2026-04-08 19:22 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: skumar47, hch, sagi, linux-nvme, stable

On Wed, Apr 08, 2026 at 12:51:31AM -0700, Chaitanya Kulkarni wrote:
> nvmet_tcp_handle_icreq() updates queue->state after sending an
> Initialization Connection Response (ICResp), but it does so without
> serializing against target-side queue teardown.
> 
> If an NVMe/TCP host sends an Initialization Connection Request
> (ICReq) and immediately closes the connection, target-side teardown
> may start in softirq context before io_work drains the already
> buffered ICReq. In that case, nvmet_tcp_schedule_release_queue()
> sets queue->state to NVMET_TCP_Q_DISCONNECTING and drops the queue
> reference under state_lock.
> 
> If io_work later processes that ICReq, nvmet_tcp_handle_icreq() can
> still overwrite the state back to NVMET_TCP_Q_LIVE. That defeats the
> DISCONNECTING-state guard in nvmet_tcp_schedule_release_queue() and
> allows a later socket state change to re-enter teardown and issue a
> second kref_put() on an already released queue.
> 
> The ICResp send failure path has the same problem. If teardown has
> already moved the queue to DISCONNECTING, a send error can still
> overwrite the state with NVMET_TCP_Q_FAILED, again reopening the
> window for a second teardown path to drop the queue reference.
> 
> Fix this by serializing both post-send state transitions with
> state_lock and bailing out if teardown has already started.

This looks okay to me. Will give this a couple days then queue it up if
no issues reported.


^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2026-04-08 19:22 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-08  7:51 [PATCH] nvmet-tcp: fix race between ICReq handling and queue teardown Chaitanya Kulkarni
     [not found] ` <BN0PR01MB7007C5B4178BCD8619FC5D91C85B2@BN0PR01MB7007.prod.exchangelabs.com>
2026-04-08 16:50   ` Fw: " Shivam Kumar
2026-04-08 19:22 ` Keith Busch

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox