From: hare@suse.de (Hannes Reinecke)
Subject: [PATCH V6 9/9] nvme: hold request queue's refcount in ns's whole lifetime
Date: Wed, 17 Apr 2019 14:10:15 +0200 [thread overview]
Message-ID: <bc4b8607-9fe7-aece-97f7-9d7d1c2c4e0b@suse.de> (raw)
In-Reply-To: <20190417034410.31957-10-ming.lei@redhat.com>
On 4/17/19 5:44 AM, Ming Lei wrote:
> Hannes reported the following kernel oops:
>
> There is a race condition between namespace rescanning and
> controller reset; during controller reset all namespaces are
> quiesced via nvme_stop_ctrl(), and after the reset all namespaces
> are unquiesced again.
> If namespace scanning is still active when the controller reset
> is triggered, the rescan code will call nvme_ns_remove(), which
> then causes a kernel crash in nvme_start_ctrl() as it trips
> over uninitialized namespaces.
>
> Patch "blk-mq: free hw queue's resource in hctx's release handler"
> should make this issue quite difficult to trigger. However, it can't
> kill the issue completely, because that patch's pre-condition is to
> hold the request queue's refcount before calling block layer APIs, and
> there is still a small window between blk_cleanup_queue() and removing
> the ns from the controller's namespace list in nvme_ns_remove().
>
> Hold the request queue's refcount until the ns is freed; then the above
> race is avoided completely. Given that 'namespaces_rwsem' is always held
> to retrieve an ns for starting/stopping its request queue, this lock
> prevents namespaces from being freed.
>
> Cc: Dongli Zhang <dongli.zhang at oracle.com>
> Cc: James Smart <james.smart at broadcom.com>
> Cc: Bart Van Assche <bart.vanassche at wdc.com>
> Cc: linux-scsi at vger.kernel.org
> Cc: Martin K. Petersen <martin.petersen at oracle.com>
> Cc: Christoph Hellwig <hch at lst.de>
> Cc: James E. J. Bottomley <jejb at linux.vnet.ibm.com>
> Cc: jianchao wang <jianchao.w.wang at oracle.com>
> Reported-by: Hannes Reinecke <hare at suse.com>
> Signed-off-by: Ming Lei <ming.lei at redhat.com>
> ---
> drivers/nvme/host/core.c | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 248ff3b48041..82cda6602ca7 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -406,6 +406,7 @@ static void nvme_free_ns(struct kref *kref)
> nvme_nvm_unregister(ns);
>
> put_disk(ns->disk);
> + blk_put_queue(ns->queue);
> nvme_put_ns_head(ns->head);
> nvme_put_ctrl(ns->ctrl);
> kfree(ns);
> @@ -3229,6 +3230,11 @@ static int nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
> goto out_free_ns;
> }
>
> + if (!blk_get_queue(ns->queue)) {
> + ret = -ENXIO;
> + goto out_free_queue;
> + }
> +
> blk_queue_flag_set(QUEUE_FLAG_NONROT, ns->queue);
> if (ctrl->ops->flags & NVME_F_PCI_P2PDMA)
> blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue);
> @@ -3245,7 +3251,7 @@ static int nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
> id = nvme_identify_ns(ctrl, nsid);
> if (!id) {
> ret = -EIO;
> - goto out_free_queue;
> + goto out_put_queue;
> }
>
> if (id->ncap == 0) {
> @@ -3304,6 +3310,8 @@ static int nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
> nvme_put_ns_head(ns->head);
> out_free_id:
> kfree(id);
> + out_put_queue:
> + blk_put_queue(ns->queue);
> out_free_queue:
> blk_cleanup_queue(ns->queue);
> out_free_ns:
>
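The get/put pairing above looks correct to me. For readers following along: the lifetime rule this patch enforces can be sketched in plain user-space C. The demo_* names below are hypothetical stand-ins for blk_get_queue()/blk_put_queue(), not the kernel API, and the real block layer uses atomic refcounting, which this single-threaded sketch omits.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for a request queue: its memory must stay
 * valid while any holder still owns a reference. */
struct demo_queue {
	int refcount;	/* drops to 0 when the queue dies */
	bool freed;	/* models the final free of the queue */
};

/* Mirrors blk_get_queue(): taking a new reference must fail once the
 * queue is already dying, so late users never see freed memory. */
static bool demo_get_queue(struct demo_queue *q)
{
	if (q->refcount == 0)
		return false;
	q->refcount++;
	return true;
}

/* Mirrors blk_put_queue(): the last put releases the queue. */
static void demo_put_queue(struct demo_queue *q)
{
	if (--q->refcount == 0)
		q->freed = true;
}
```

The point of the patch is exactly this pairing: nvme_alloc_ns() takes its own reference, so even after blk_cleanup_queue() drops the original one, the queue stays valid until nvme_free_ns() performs the matching put.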
Reviewed-by: Hannes Reinecke <hare at suse.com>
Cheers,
Hannes
--
Dr. Hannes Reinecke Teamlead Storage & Networking
hare at suse.de +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)