From: Jens Axboe <axboe@kernel.dk>
To: Keith Busch <keith.busch@intel.com>
Cc: "Matias Bjørling" <m@bjorling.me>,
willy@linux.intel.com, sbradshaw@micron.com,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH V3] NVMe: basic conversion to blk-mq
Date: Thu, 29 May 2014 17:12:37 -0600
Message-ID: <5387BEE5.6060007@kernel.dk>
In-Reply-To: <5387BD73.9030607@kernel.dk>
[-- Attachment #1: Type: text/plain, Size: 586 bytes --]
On 05/29/2014 05:06 PM, Jens Axboe wrote:
> Ah, I see; yes, that code apparently got axed. The attached patch brings
> it back. Totally untested; I'll try to hit it synthetically to ensure
> that it does work. Note that it currently does the unmap and iod free, so
> the request comes back pristine. We could preserve that state if we really
> wanted to; I'm guessing it's not a big deal.
And here's another totally untested patch, this one retaining the iod and
DMA mapping across the requeue. Again, probably not a big deal since these
should happen rarely, but it's simple enough to do.
--
Jens Axboe
[-- Attachment #2: nvme-retry-v2.patch --]
[-- Type: text/x-patch, Size: 1525 bytes --]
diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-core.c
index 22d8013cd4ff..fd3f11d837bb 100644
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -357,17 +357,22 @@ static void req_completion(struct nvme_queue *nvmeq, void *ctx,
 	u16 status = le16_to_cpup(&cqe->status) >> 1;
 
+	if (unlikely(status)) {
+		if (!(status & NVME_SC_DNR || blk_noretry_request(req))
+				&& (jiffies - req->start_time) < req->timeout) {
+			blk_mq_requeue_request(req);
+			return;
+		}
+		req->errors = -EIO;
+	} else
+		req->errors = 0;
+
 	if (iod->nents) {
 		dma_unmap_sg(&nvmeq->dev->pci_dev->dev, iod->sg, iod->nents,
			rq_data_dir(req) ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
 	}
 	nvme_free_iod(nvmeq->dev, iod);
-	if (unlikely(status))
-		req->errors = -EIO;
-	else
-		req->errors = 0;
-
 	blk_mq_complete_request(req);
 }
@@ -569,11 +574,19 @@ static int nvme_submit_req_queue(struct nvme_queue *nvmeq, struct nvme_ns *ns,
 	enum dma_data_direction dma_dir;
 	int psegs = req->nr_phys_segments;
 
+	/*
+	 * Requeued IO has already been prepped
+	 */
+	iod = req->special;
+	if (iod)
+		goto submit_iod;
+
 	iod = nvme_alloc_iod(psegs, blk_rq_bytes(req), GFP_ATOMIC);
 	if (!iod)
 		return BLK_MQ_RQ_QUEUE_BUSY;
 
 	iod->private = req;
+	req->special = iod;
 
 	nvme_set_info(cmd, iod, req_completion);
@@ -602,6 +615,7 @@ static int nvme_submit_req_queue(struct nvme_queue *nvmeq, struct nvme_ns *ns,
 		goto finish_cmd;
 	}
 
+ submit_iod:
 	if (!nvme_submit_iod(nvmeq, iod, ns))
 		return 0;