From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Maurizio Lombardi <mlombard@redhat.com>,
Keith Busch <kbusch@kernel.org>, Sagi Grimberg <sagi@grimberg.me>,
John Meneghini <jmeneghi@redhat.com>,
Christoph Hellwig <hch@lst.de>, Sasha Levin <sashal@kernel.org>,
kch@nvidia.com, linux-nvme@lists.infradead.org
Subject: [PATCH AUTOSEL 5.4 16/25] nvmet-tcp: fix a race condition between release_queue and io_work
Date: Tue, 30 Nov 2021 09:51:46 -0500
Message-ID: <20211130145156.946083-16-sashal@kernel.org>
In-Reply-To: <20211130145156.946083-1-sashal@kernel.org>
From: Maurizio Lombardi <mlombard@redhat.com>
[ Upstream commit a208fc56721775987c1b86e20d86d7e0d017c0b2 ]
If the initiator issues a controller reset while I/O is in flight,
the target kernel can crash because of a race condition between
release_queue and io_work: nvmet_tcp_uninit_data_in_cmds() may run
while io_work is still executing, and calling flush_work() was not
sufficient to prevent this because io_work can requeue itself.

Fix this bug by calling cancel_work_sync() to prevent io_work from
requeuing itself, and set rcv_state to NVMET_TCP_RECV_ERR to make
sure no more data is received from the socket.
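
The distinction that matters here: flush_work() only waits for the
currently-executing instance of a work item, while cancel_work_sync()
also removes any pending resubmission before returning. Below is a
minimal, self-contained kernel-module sketch of the pattern (all
names are hypothetical; this is an illustration, not the nvmet-tcp
code itself):

#include <linux/module.h>
#include <linux/workqueue.h>

static struct work_struct demo_work;
static bool demo_stopping;

/* Re-arms itself while there is more to do, like io_work does. */
static void demo_work_fn(struct work_struct *w)
{
	/* ... handle a bounded amount of work ... */

	if (!READ_ONCE(demo_stopping))
		schedule_work(&demo_work);
}

static int __init demo_init(void)
{
	INIT_WORK(&demo_work, demo_work_fn);
	schedule_work(&demo_work);
	return 0;
}

static void __exit demo_exit(void)
{
	WRITE_ONCE(demo_stopping, true);
	/*
	 * flush_work(&demo_work) would be racy here: the handler may
	 * have requeued itself just before observing demo_stopping,
	 * so the work could run again after flush_work() returns.
	 * cancel_work_sync() waits for the running instance and also
	 * removes any pending resubmission, so nothing runs after it
	 * returns.
	 */
	cancel_work_sync(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");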
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: John Meneghini <jmeneghi@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
drivers/nvme/target/tcp.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index fac1985870765..b3e82b0889f0b 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -1352,7 +1352,9 @@ static void nvmet_tcp_release_queue_work(struct work_struct *w)
 	mutex_unlock(&nvmet_tcp_queue_mutex);
 
 	nvmet_tcp_restore_socket_callbacks(queue);
-	flush_work(&queue->io_work);
+	cancel_work_sync(&queue->io_work);
+	/* stop accepting incoming data */
+	queue->rcv_state = NVMET_TCP_RECV_ERR;
 
 	nvmet_tcp_uninit_data_in_cmds(queue);
 	nvmet_sq_destroy(&queue->nvme_sq);
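
For context on the new rcv_state assignment: NVMET_TCP_RECV_ERR is
the terminal state of the driver's receive state machine. The guard
this relies on is, in sketch form (paraphrased, not a verbatim quote
of the 5.4 source), an early return at the top of the per-iteration
receive helper:

	if (unlikely(queue->rcv_state == NVMET_TCP_RECV_ERR))
		return 0;

With rcv_state set to NVMET_TCP_RECV_ERR before
nvmet_tcp_uninit_data_in_cmds() runs, any bytes still queued on the
socket are ignored rather than fed into commands that are being torn
down.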
--
2.33.0