From: Maurizio Lombardi <mlombard@redhat.com>
To: Sagi Grimberg <sagi@grimberg.me>
Cc: linux-nvme@lists.infradead.org, hch@lst.de, hare@suse.de,
chaitanya.kulkarni@wdc.com, jmeneghi@redhat.com
Subject: Re: [PATCH 2/2] nvmet: fix a race condition between release_queue and io_work
Date: Wed, 3 Nov 2021 12:31:25 +0100
Message-ID: <20211103113125.GA106365@raketa>
In-Reply-To: <68b69eee-c08c-a449-7e18-96e67a3c0c9d@grimberg.me>
On Wed, Nov 03, 2021 at 11:28:35AM +0200, Sagi Grimberg wrote:
>
> So this means we still get data from the network when
> we shouldn't. Maybe we are simply missing a kernel_sock_shutdown
> for SHUT_RD?
Hmm, right: kernel_sock_shutdown(queue->sock) is executed in
nvmet_tcp_delete_ctrl(), while sock_release(queue->sock) is called
in nvmet_tcp_release_queue_work(), so there could be a race between
the two. I will try to move the kernel_sock_shutdown(queue->sock)
call into nvmet_tcp_release_queue_work() and test it.
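Something along these lines, maybe (just a sketch of the ordering I
have in mind, with the rest of the function body abbreviated; the
exact placement still needs testing):

static void nvmet_tcp_release_queue_work(struct work_struct *w)
{
        struct nvmet_tcp_queue *queue =
                container_of(w, struct nvmet_tcp_queue, release_work);

        mutex_lock(&nvmet_tcp_queue_mutex);
        list_del_init(&queue->queue_list);
        mutex_unlock(&nvmet_tcp_queue_mutex);

        nvmet_tcp_restore_socket_callbacks(queue);

        /*
         * Shut the socket down before tearing the queue down, so that
         * incoming data can no longer rearm io_work from this point on.
         */
        kernel_sock_shutdown(queue->sock, SHUT_RDWR);

        nvmet_tcp_uninit_data_in_cmds(queue);
        nvmet_sq_destroy(&queue->nvme_sq);
        cancel_work_sync(&queue->io_work);
        /* ... digest/crypto teardown and idx removal abbreviated ... */
        sock_release(queue->sock);
        nvmet_tcp_free_cmds(queue);
        kfree(queue);
}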
>
> >
> > >
> > > > * Fix this bug by preventing io_work from being enqueued when
> > > > sk_user_data is NULL (it means that the queue is going to be deleted)
> > >
> > > This is triggered from the completion path, where the commands
> > > are not in a state where they are still fetching data from the
> > > host. How does this prevent the crash?
> >
> > io_work is also triggered every time nvmet_req_init() fails and when
> > nvmet_sq_destroy() is called; I am not really sure about the state
> > of the commands in those cases.
>
> But that is from the workqueue context - which means that
> cancel_work_sync should prevent it right?
But nvmet_sq_destroy() is called from the release_work context, and
we call cancel_work_sync() immediately after it. We can't be sure the
work will actually be canceled: io_work might have started running
already, in which case cancel_work_sync() just blocks until io_work
finishes its job, right?
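(For reference, that is the window the sk_user_data check in the
patch is meant to close; it boils down to something like the
simplified sketch below, where the helper name is mine, just for
illustration:)

static inline void nvmet_tcp_try_queue_io_work(struct nvmet_tcp_queue *queue)
{
        /*
         * sk_user_data is cleared under sk_callback_lock when the queue
         * is about to be released, so don't requeue io_work in that case.
         */
        read_lock_bh(&queue->sock->sk->sk_callback_lock);
        if (queue->sock->sk->sk_user_data)
                queue_work_on(queue->io_cpu, nvmet_tcp_wq, &queue->io_work);
        read_unlock_bh(&queue->sock->sk->sk_callback_lock);
}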
>
> But that needs to be a separate fix and not combined with other
> fixes.
Ok, I will submit it as a separate patch.
Maurizio