From: Ming Lei <ming.lei@redhat.com>
To: Keith Busch <keith.busch@intel.com>
Cc: Jens Axboe <axboe@kernel.dk>, Sagi Grimberg <sagi@grimberg.me>,
stable@vger.kernel.org, linux-block@vger.kernel.org,
linux-nvme@lists.infradead.org, Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH] nvme: remove disk after hw queue is started
Date: Tue, 9 May 2017 00:15:25 +0800
Message-ID: <20170508161524.GE5696@ming.t460p>
In-Reply-To: <20170508151123.GA463@localhost.localdomain>
On Mon, May 08, 2017 at 11:11:24AM -0400, Keith Busch wrote:
> On Mon, May 08, 2017 at 11:07:20AM -0400, Keith Busch wrote:
> > I'm almost certain the remove_work shouldn't even be running in this
> > case. If the reset work can't transition the controller state correctly,
> > it should assume something is handling the controller.
>
> Here's the more complete version of what I had in mind. Does this solve
> the reported issue?
>
> ---
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 26a5fd0..46a37fb 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -1792,7 +1792,7 @@ static void nvme_reset_work(struct work_struct *work)
> nvme_dev_disable(dev, false);
>
> if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING))
> - goto out;
> + return;
>
> result = nvme_pci_enable(dev);
> if (result)
> @@ -1854,7 +1854,7 @@ static void nvme_reset_work(struct work_struct *work)
>
> if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_LIVE)) {
> dev_warn(dev->ctrl.device, "failed to mark controller live\n");
> - goto out;
> + return;
> }
>
> if (dev->online_queues > 1)
This patch looks like it works, but it seems any 'goto out' in this
function risks causing the same race too.
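For reference, every error path in nvme_reset_work() funnels into
roughly the following (a sketch of the 4.11-era code from memory;
the exact context may differ):

	...
 out:
	nvme_remove_dead_ctrl(dev, result);	/* schedules remove_work */
 }

So converting only two of the 'goto out' sites to 'return' still
leaves the remaining failure paths scheduling remove_work.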
Another solution I thought of is to kill the queues earlier; what do
you think of the following patch?
---
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c8541c3dcd19..16740e8c4225 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1892,6 +1892,7 @@ static void nvme_remove_dead_ctrl(struct nvme_dev *dev, int status)
kref_get(&dev->ctrl.kref);
nvme_dev_disable(dev, false);
+ nvme_kill_queues(&dev->ctrl);
if (!schedule_work(&dev->remove_work))
nvme_put_ctrl(&dev->ctrl);
}
@@ -1998,7 +1999,6 @@ static void nvme_remove_dead_ctrl_work(struct work_struct *work)
struct nvme_dev *dev = container_of(work, struct nvme_dev, remove_work);
struct pci_dev *pdev = to_pci_dev(dev->dev);
- nvme_kill_queues(&dev->ctrl);
if (pci_get_drvdata(pdev))
device_release_driver(&pdev->dev);
nvme_put_ctrl(&dev->ctrl);
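With the hunks above applied, nvme_remove_dead_ctrl() would end up
looking roughly like this (a sketch; only the lines visible in the
diff context are shown, the rest of the function is elided):

 static void nvme_remove_dead_ctrl(struct nvme_dev *dev, int status)
 {
	...
	kref_get(&dev->ctrl.kref);
	nvme_dev_disable(dev, false);
	/* fail all pending I/O before remove_work can release the driver */
	nvme_kill_queues(&dev->ctrl);
	if (!schedule_work(&dev->remove_work))
		nvme_put_ctrl(&dev->ctrl);
 }

This way the queues are already dead by the time
nvme_remove_dead_ctrl_work() runs, instead of being killed from the
work item itself.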
Thanks,
Ming