* [PATCH] NVMe: Do not set IO queue depth beyond device max
@ 2012-07-27 15:34 Keith Busch
From: Keith Busch @ 2012-07-27 15:34 UTC
Set the depth for IO queues to the device's maximum supported queue
entries if the requested depth exceeds the device's capabilities.
Signed-off-by: Keith Busch <keith.busch@intel.com>
---
drivers/block/nvme.c | 10 ++++++----
include/linux/nvme.h | 1 +
2 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/drivers/block/nvme.c b/drivers/block/nvme.c
index 7bcd882..8006006 100644
--- a/drivers/block/nvme.c
+++ b/drivers/block/nvme.c
@@ -892,7 +892,8 @@ static struct nvme_queue *nvme_alloc_queue(struct nvme_dev *dev, int qid,
int depth, int vector)
{
struct device *dmadev = &dev->pci_dev->dev;
- unsigned extra = (depth / 8) + (depth * sizeof(struct nvme_cmd_info));
+ unsigned extra = DIV_ROUND_UP(depth, 8) + (depth * sizeof(
+ struct nvme_cmd_info));
struct nvme_queue *nvmeq = kzalloc(sizeof(*nvmeq) + extra, GFP_KERNEL);
if (!nvmeq)
return NULL;
@@ -1388,7 +1389,7 @@ static int set_queue_count(struct nvme_dev *dev, int count)
static int __devinit nvme_setup_io_queues(struct nvme_dev *dev)
{
- int result, cpu, i, nr_io_queues, db_bar_size;
+ int result, cpu, i, nr_io_queues, db_bar_size, q_depth;
nr_io_queues = num_online_cpus();
result = set_queue_count(dev, nr_io_queues);
@@ -1434,9 +1435,10 @@ static int __devinit nvme_setup_io_queues(struct nvme_dev *dev)
cpu = cpumask_next(cpu, cpu_online_mask);
}
+ q_depth = min_t(int, NVME_CAP_MQES(readq(&dev->bar->cap)) + 1,
+ NVME_Q_DEPTH);
for (i = 0; i < nr_io_queues; i++) {
- dev->queues[i + 1] = nvme_create_queue(dev, i + 1,
- NVME_Q_DEPTH, i);
+ dev->queues[i + 1] = nvme_create_queue(dev, i + 1, q_depth, i);
if (IS_ERR(dev->queues[i + 1]))
return PTR_ERR(dev->queues[i + 1]);
dev->queue_count++;
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 9490a00..bddc954 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -35,6 +35,7 @@ struct nvme_bar {
__u64 acq; /* Admin CQ Base Address */
};
+#define NVME_CAP_MQES(cap)	((cap) & 0xffff)
#define NVME_CAP_TIMEOUT(cap) (((cap) >> 24) & 0xff)
#define NVME_CAP_STRIDE(cap) (((cap) >> 32) & 0xf)
--
1.7.0.4
* [PATCH] NVMe: Do not set IO queue depth beyond device max
From: Matthew Wilcox @ 2012-07-27 18:00 UTC
On Fri, Jul 27, 2012 at 09:34:04AM -0600, Keith Busch wrote:
> Set the depth for IO queues to the device's maximum supported queue
> entries if the requested depth exceeds the device's capabilities.
Thanks, applied with a couple of fixups.
> struct device *dmadev = &dev->pci_dev->dev;
> - unsigned extra = (depth / 8) + (depth * sizeof(struct nvme_cmd_info));
> + unsigned extra = DIV_ROUND_UP(depth, 8) + (depth * sizeof(
> + struct nvme_cmd_info));
Better to split like so:
unsigned extra = DIV_ROUND_UP(depth, 8) + (depth *
sizeof(struct nvme_cmd_info));
(some might argue for either:
unsigned extra = DIV_ROUND_UP(depth, 8);
extra += depth * sizeof(struct nvme_cmd_info);
or
unsigned extra;
extra = DIV_ROUND_UP(depth, 8) + (depth * sizeof(struct nvme_cmd_info));
but I prefer the way I've done it :-)
> };
>
> +#define NVME_CAP_MQES(cap)	((cap) & 0xffff)
> #define NVME_CAP_TIMEOUT(cap) (((cap) >> 24) & 0xff)
> #define NVME_CAP_STRIDE(cap) (((cap) >> 32) & 0xf)
>
This hunk failed to apply because of the NVME_CAP_MPSMIN definition
added in a patch from you that I applied yesterday :-) So I fixed it.