public inbox for linux-nvme@lists.infradead.org
From: John Meneghini <jmeneghi@redhat.com>
To: Maurizio Lombardi <mlombard@redhat.com>, linux-nvme@lists.infradead.org
Cc: hch@lst.de, sagi@grimberg.me, hare@suse.de,
	chaitanya.kulkarni@wdc.com, John Meneghini <jmeneghi@redhat.com>
Subject: Re: [PATCH 2/2] nvmet: fix a race condition between release_queue and io_work
Date: Thu, 21 Oct 2021 10:57:32 -0400	[thread overview]
Message-ID: <7af03d77-670d-fa5b-fb84-b6f90cc3cd41@redhat.com> (raw)
In-Reply-To: <20211021084155.16109-3-mlombard@redhat.com>

Reviewed-by: John Meneghini <jmeneghi@redhat.com>

On 10/21/21 4:41 AM, Maurizio Lombardi wrote:
> If the initiator performs a controller reset while I/O is in flight,
> the target kernel may crash because of a race condition between
> release_queue and io_work: nvmet_tcp_uninit_data_in_cmds() may run
> while io_work is still executing. Calling flush_work(io_work) is not
> sufficient to prevent this, because io_work can requeue itself.
> 
> * Fix this bug by preventing io_work from being enqueued when
> sk_user_data is NULL (which means the queue is about to be deleted)
> 
> * Ensure that all the memory allocated for the commands' iovec is freed
> 
> Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
> ---
>   drivers/nvme/target/tcp.c | 13 +++++++++----
>   1 file changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index 2f03a94725ae..1eedbd83c95f 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -551,6 +551,7 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
>   	struct nvmet_tcp_cmd *cmd =
>   		container_of(req, struct nvmet_tcp_cmd, req);
>   	struct nvmet_tcp_queue	*queue = cmd->queue;
> +	struct socket *sock = queue->sock;
>   	struct nvme_sgl_desc *sgl;
>   	u32 len;
>   
> @@ -570,7 +571,10 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
>   	}
>   
>   	llist_add(&cmd->lentry, &queue->resp_list);
> -	queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
> +	read_lock_bh(&sock->sk->sk_callback_lock);
> +	if (likely(sock->sk->sk_user_data))
> +		queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
> +	read_unlock_bh(&sock->sk->sk_callback_lock);
>   }
>   
>   static void nvmet_tcp_execute_request(struct nvmet_tcp_cmd *cmd)
> @@ -1427,7 +1431,9 @@ static void nvmet_tcp_uninit_data_in_cmds(struct nvmet_tcp_queue *queue)
>   
>   	for (i = 0; i < queue->nr_cmds; i++, cmd++) {
>   		if (nvmet_tcp_need_data_in(cmd))
> -			nvmet_tcp_finish_cmd(cmd);
> +			nvmet_req_uninit(&cmd->req);
> +		nvmet_tcp_unmap_pdu_iovec(cmd);
> +		nvmet_tcp_free_iovec(cmd);
>   	}
>   
>   	if (!queue->nr_cmds && nvmet_tcp_need_data_in(&queue->connect)) {
> @@ -1447,11 +1453,10 @@ static void nvmet_tcp_release_queue_work(struct work_struct *w)
>   	mutex_unlock(&nvmet_tcp_queue_mutex);
>   
>   	nvmet_tcp_restore_socket_callbacks(queue);
> -	flush_work(&queue->io_work);
> +	cancel_work_sync(&queue->io_work);
>   
>   	nvmet_tcp_uninit_data_in_cmds(queue);
>   	nvmet_sq_destroy(&queue->nvme_sq);
> -	cancel_work_sync(&queue->io_work);
>   	sock_release(queue->sock);
>   	nvmet_tcp_free_cmds(queue);
>   	if (queue->hdr_digest || queue->data_digest)
> 



Thread overview: 21+ messages
2021-10-21  8:41 [PATCH 0/2] Fix a race condition when performing a controller reset Maurizio Lombardi
2021-10-21  8:41 ` [PATCH 1/2] nvmet: add an helper to free the iovec Maurizio Lombardi
2021-10-21 14:56   ` John Meneghini
2021-10-21 14:58     ` John Meneghini
2021-10-27  0:15     ` Chaitanya Kulkarni
2021-10-21  8:41 ` [PATCH 2/2] nvmet: fix a race condition between release_queue and io_work Maurizio Lombardi
2021-10-21 14:57   ` John Meneghini [this message]
2021-10-26 15:42   ` Sagi Grimberg
2021-10-28  7:55     ` Maurizio Lombardi
2021-11-03  9:28       ` Sagi Grimberg
2021-11-03 11:31         ` Maurizio Lombardi
2021-11-04 12:59           ` Sagi Grimberg
2021-11-12 10:54             ` Maurizio Lombardi
2021-11-12 15:54               ` John Meneghini
2021-11-15  7:52                 ` Maurizio Lombardi
2021-11-14 10:28               ` Sagi Grimberg
2021-11-15  7:47                 ` Maurizio Lombardi
2021-11-15  9:48                   ` Sagi Grimberg
2021-11-15 10:00                     ` Maurizio Lombardi
2021-11-15 10:13                       ` Sagi Grimberg
2021-11-15 10:57                       ` Maurizio Lombardi
