From: Jens Axboe <axboe@kernel.dk>
To: "Keith Busch" <keith.busch@intel.com>, "Matias Bjørling" <m@bjorling.me>
Cc: willy@linux.intel.com, sbradshaw@micron.com,
linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
hch@infradead.org
Subject: Re: [PATCH v5] conversion to blk-mq
Date: Wed, 04 Jun 2014 12:42:49 -0600
Message-ID: <538F68A9.50608@kernel.dk>
In-Reply-To: <alpine.LRH.2.03.1406041146280.11244@AMR>

On 06/04/2014 12:28 PM, Keith Busch wrote:
> On Wed, 4 Jun 2014, Matias Bjørling wrote:
>> On 06/04/2014 12:27 AM, Keith Busch wrote:
>>> On Tue, 3 Jun 2014, Matias Bjorling wrote:
>>>>
>>>> Keith, will you take the nvmemq_wip_v6 branch for a spin? Thanks!
>>>
>>> BTW, if you want to test this out yourself, it's pretty simple to
>>> recreate. I just run a simple user admin program sending nvme passthrough
>>> commands in a tight loop, then run:
>>>
>>> # echo 1 > /sys/bus/pci/devices/<bdf>/remove
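For anyone trying to reproduce this, the "simple user admin program" above can
be as small as the sketch below: a tight loop issuing Get Features through the
admin passthrough ioctl while the device is removed via sysfs from another
shell. This is only a sketch: the device path and feature ID are made-up
examples, and the struct nvme_admin_cmd / NVME_IOCTL_ADMIN_CMD interface is
assumed from the 3.15-era uapi <linux/nvme.h>.

/* Hypothetical reproducer sketch, not taken from the original mail. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/nvme.h>	/* assumed: struct nvme_admin_cmd, NVME_IOCTL_ADMIN_CMD */

int main(void)
{
	/* Example path; any node whose ioctl path reaches the admin queue works. */
	int fd = open("/dev/nvme0n1", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (;;) {
		struct nvme_admin_cmd cmd;

		memset(&cmd, 0, sizeof(cmd));
		cmd.opcode = 0x0a;	/* Get Features */
		cmd.cdw10  = 0x07;	/* e.g. Number of Queues feature */
		if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
			perror("NVME_IOCTL_ADMIN_CMD");
			break;
		}
	}
	close(fd);
	return 0;
}

Run that in one shell and do the sysfs remove in another; the loop keeps
hitting the admin queue while the controller is being torn down.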
>>
>> I can't recreate it; I use the nvme_get_feature program to continuously
>> hit the ioctl path, testing with your nvme qemu branch.
>
> Okay, I'll try to fix it.
>
> I think there are multiple problems, but the first is that since there
> is no gendisk associated with the admin_q, the QUEUE_FLAG_INIT_DONE flag
> is never set. blk_mq_queue_enter returns success whenever this flag is
> not set, even when the queue is dying, so we enter the queue with all
> of its now-invalid pointers.
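For context, the INIT_DONE flag is normally set from the gendisk registration
path, which the admin queue never goes through. A rough sketch of that path,
paraphrased from the 3.15-era block/blk-sysfs.c (treat it as an illustration,
not an exact quote of that tree):

int blk_register_queue(struct gendisk *disk)
{
	struct request_queue *q = disk->queue;

	/*
	 * Called from the add_disk() path.  The NVMe admin queue has no
	 * gendisk, so this never runs for it, blk_queue_init_done(q)
	 * stays false, and the fast path in blk_mq_queue_enter() keeps
	 * returning 0 even after the queue starts dying.
	 */
	if (!blk_queue_init_done(q)) {
		queue_flag_set_unlocked(QUEUE_FLAG_INIT_DONE, q);
		blk_queue_bypass_end(q);
	}

	/* ... sysfs registration of the queue follows ... */
	return 0;
}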
>
> Here are a couple of diffs. The first fixes the kernel oops by not entering a
> dying queue. The second is just a few unrelated clean-ups in nvme-core.c.
>
> I still can't complete my current hot-removal test, though; something
> appears hung, but I haven't nailed that down yet.
>
> Please let me know what you think! Thanks.
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index d10013b..5a9ae8a 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -105,6 +105,10 @@ static int blk_mq_queue_enter(struct request_queue *q)
>  	__percpu_counter_add(&q->mq_usage_counter, 1, 1000000);
>  	smp_wmb();
>  	/* we have problems to freeze the queue if it's initializing */
> +	if (blk_queue_dying(q)) {
> +		__percpu_counter_add(&q->mq_usage_counter, -1, 1000000);
> +		return -ENODEV;
> +	}
>  	if (!blk_queue_bypass(q) || !blk_queue_init_done(q))
>  		return 0;

Are you testing against 3.13? You really need the current tree for this;
otherwise I'm sure you'll run into issues (as you appear to be :-)
--
Jens Axboe