* [RFC PATCH 0/3] *** use rdma device capability to limit queue size ***
@ 2023-12-18 11:05 Guixin Liu
2023-12-18 11:05 ` [RFC PATCH 1/3] nvmet: change get_max_queue_size param to nvmet_sq Guixin Liu
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Guixin Liu @ 2023-12-18 11:05 UTC (permalink / raw)
To: hch, sagi, kch, axboe; +Cc: linux-nvme
Hi guys,
Currently, the queue size of NVMe over RDMA is limited to a depth of
128; we can use the RDMA device capability to limit it instead.
Guixin Liu (3):
nvmet: change get_max_queue_size param to nvmet_sq
nvmet: rdma: utilize ib_device capability for setting max_queue_size
nvme: rdma: use ib_device's max_qp_wr to limit sqsize
drivers/nvme/host/rdma.c | 14 ++++++++------
drivers/nvme/target/core.c | 6 +++---
drivers/nvme/target/nvmet.h | 2 +-
drivers/nvme/target/rdma.c | 9 +++++++--
include/linux/nvme-rdma.h | 2 +-
5 files changed, 20 insertions(+), 13 deletions(-)
--
1.8.3.1
* [RFC PATCH 1/3] nvmet: change get_max_queue_size param to nvmet_sq
From: Guixin Liu @ 2023-12-18 11:05 UTC
To: hch, sagi, kch, axboe; +Cc: linux-nvme

The max queue size is an attribute private to each transport, so use
nvmet_sq to let the transport get its own private queue and port.

Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
---
 drivers/nvme/target/core.c  | 6 +++---
 drivers/nvme/target/nvmet.h | 2 +-
 drivers/nvme/target/rdma.c  | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 3935165..1efd46b 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1213,7 +1213,7 @@ void nvmet_update_cc(struct nvmet_ctrl *ctrl, u32 new)
 	mutex_unlock(&ctrl->lock);
 }
 
-static void nvmet_init_cap(struct nvmet_ctrl *ctrl)
+static void nvmet_init_cap(struct nvmet_ctrl *ctrl, struct nvmet_sq *nvmet_sq)
 {
 	/* command sets supported: NVMe command set: */
 	ctrl->cap = (1ULL << 37);
@@ -1223,7 +1223,7 @@ static void nvmet_init_cap(struct nvmet_ctrl *ctrl)
 	ctrl->cap |= (15ULL << 24);
 	/* maximum queue entries supported: */
 	if (ctrl->ops->get_max_queue_size)
-		ctrl->cap |= ctrl->ops->get_max_queue_size(ctrl) - 1;
+		ctrl->cap |= ctrl->ops->get_max_queue_size(nvmet_sq) - 1;
 	else
 		ctrl->cap |= NVMET_QUEUE_SIZE - 1;
 
@@ -1411,7 +1411,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
 	kref_init(&ctrl->ref);
 	ctrl->subsys = subsys;
-	nvmet_init_cap(ctrl);
+	nvmet_init_cap(ctrl, req->sq);
 	WRITE_ONCE(ctrl->aen_enabled, NVMET_AEN_CFG_OPTIONAL);
 
 	ctrl->changed_ns_list = kmalloc_array(NVME_MAX_CHANGED_NAMESPACES,
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 6c8aceb..ba8ed2e 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -352,7 +352,7 @@ struct nvmet_fabrics_ops {
 	u16 (*install_queue)(struct nvmet_sq *nvme_sq);
 	void (*discovery_chg)(struct nvmet_port *port);
 	u8 (*get_mdts)(const struct nvmet_ctrl *ctrl);
-	u16 (*get_max_queue_size)(const struct nvmet_ctrl *ctrl);
+	u16 (*get_max_queue_size)(const struct nvmet_sq *nvmet_sq);
 };
 
 #define NVMET_MAX_INLINE_BIOVEC	8
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 4597bca..8a728c5 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -2000,7 +2000,7 @@ static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
 	return NVMET_RDMA_MAX_MDTS;
 }
 
-static u16 nvmet_rdma_get_max_queue_size(const struct nvmet_ctrl *ctrl)
+static u16 nvmet_rdma_get_max_queue_size(const struct nvmet_sq *nvmet_sq)
 {
 	return NVME_RDMA_MAX_QUEUE_SIZE;
 }
-- 
1.8.3.1
* Re: [RFC PATCH 1/3] nvmet: change get_max_queue_size param to nvmet_sq
From: Sagi Grimberg @ 2023-12-18 11:52 UTC
To: Guixin Liu, hch, kch, axboe; +Cc: linux-nvme

On 12/18/23 13:05, Guixin Liu wrote:
> The max queue size is an attribute private to each transport, so use
> nvmet_sq to let the transport get its own private queue and port.

I don't see a reason for doing this. A ctrl does not span transports.
Why can't you just take this limit from ctrl->port?
* [RFC PATCH 2/3] nvmet: rdma: utilize ib_device capability for setting max_queue_size
From: Guixin Liu @ 2023-12-18 11:05 UTC
To: hch, sagi, kch, axboe; +Cc: linux-nvme

Respond with the smaller value between 1024 and the ib_device's
max_qp_wr as the RDMA max queue size.

Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
---
 drivers/nvme/target/rdma.c | 7 ++++++-
 include/linux/nvme-rdma.h  | 2 ++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 8a728c5..c3884dd 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -2002,7 +2002,12 @@ static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
 
 static u16 nvmet_rdma_get_max_queue_size(const struct nvmet_sq *nvmet_sq)
 {
-	return NVME_RDMA_MAX_QUEUE_SIZE;
+	struct nvmet_rdma_queue *queue =
+		container_of(nvmet_sq, struct nvmet_rdma_queue, nvme_sq);
+	int max_qp_wr = queue->dev->device->attrs.max_qp_wr;
+
+	return (u16)min_t(int, NVMET_QUEUE_SIZE,
+			max_qp_wr / (NVME_RDMA_SEND_WR_FACTOR + 1));
 }
 
 static const struct nvmet_fabrics_ops nvmet_rdma_ops = {
diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
index 4dd7e6f..c19858b 100644
--- a/include/linux/nvme-rdma.h
+++ b/include/linux/nvme-rdma.h
@@ -8,6 +8,8 @@
 
 #define NVME_RDMA_MAX_QUEUE_SIZE	128
 
+#define NVME_RDMA_SEND_WR_FACTOR	3	/* MR, SEND, INV */
+
 enum nvme_rdma_cm_fmt {
 	NVME_RDMA_CM_FMT_1_0	= 0x0,
 };
-- 
1.8.3.1
* Re: [RFC PATCH 2/3] nvmet: rdma: utilize ib_device capability for setting max_queue_size
From: Sagi Grimberg @ 2023-12-18 11:57 UTC
To: Guixin Liu, hch, kch, axboe; +Cc: linux-nvme

> Respond with the smaller value between 1024 and the ib_device's
> max_qp_wr as the RDMA max queue size.
>
> Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
> ---
>  drivers/nvme/target/rdma.c | 7 ++++++-
>  include/linux/nvme-rdma.h  | 2 ++
>  2 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index 8a728c5..c3884dd 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -2002,7 +2002,12 @@ static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
>
>  static u16 nvmet_rdma_get_max_queue_size(const struct nvmet_sq *nvmet_sq)
>  {
> -	return NVME_RDMA_MAX_QUEUE_SIZE;
> +	struct nvmet_rdma_queue *queue =
> +		container_of(nvmet_sq, struct nvmet_rdma_queue, nvme_sq);
> +	int max_qp_wr = queue->dev->device->attrs.max_qp_wr;
> +
> +	return (u16)min_t(int, NVMET_QUEUE_SIZE,
> +			max_qp_wr / (NVME_RDMA_SEND_WR_FACTOR + 1));
>  }

This should be folded into the previous patch.

>
>  static const struct nvmet_fabrics_ops nvmet_rdma_ops = {
> diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
> index 4dd7e6f..c19858b 100644
> --- a/include/linux/nvme-rdma.h
> +++ b/include/linux/nvme-rdma.h
> @@ -8,6 +8,8 @@
>
>  #define NVME_RDMA_MAX_QUEUE_SIZE	128
>
> +#define NVME_RDMA_SEND_WR_FACTOR	3	/* MR, SEND, INV */
> +
>  enum nvme_rdma_cm_fmt {
>  	NVME_RDMA_CM_FMT_1_0	= 0x0,
>  };
* Re: [RFC PATCH 2/3] nvmet: rdma: utilize ib_device capability for setting max_queue_size
From: Guixin Liu @ 2023-12-18 12:41 UTC
To: Sagi Grimberg, hch, kch, axboe; +Cc: linux-nvme

On 2023/12/18 19:57, Sagi Grimberg wrote:
>
>> Respond with the smaller value between 1024 and the ib_device's
>> max_qp_wr as the RDMA max queue size.
>>
>> Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
>> ---
>>  drivers/nvme/target/rdma.c | 7 ++++++-
>>  include/linux/nvme-rdma.h  | 2 ++
>>  2 files changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>> index 8a728c5..c3884dd 100644
>> --- a/drivers/nvme/target/rdma.c
>> +++ b/drivers/nvme/target/rdma.c
>> @@ -2002,7 +2002,12 @@ static u8 nvmet_rdma_get_mdts(const struct nvmet_ctrl *ctrl)
>>  static u16 nvmet_rdma_get_max_queue_size(const struct nvmet_sq *nvmet_sq)
>>  {
>> -	return NVME_RDMA_MAX_QUEUE_SIZE;
>> +	struct nvmet_rdma_queue *queue =
>> +		container_of(nvmet_sq, struct nvmet_rdma_queue, nvme_sq);
>> +	int max_qp_wr = queue->dev->device->attrs.max_qp_wr;
>> +
>> +	return (u16)min_t(int, NVMET_QUEUE_SIZE,
>> +			max_qp_wr / (NVME_RDMA_SEND_WR_FACTOR + 1));
>>  }
>
> This should be folded into the previous patch.

OK, I will do it.

>
>>  static const struct nvmet_fabrics_ops nvmet_rdma_ops = {
>> diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
>> index 4dd7e6f..c19858b 100644
>> --- a/include/linux/nvme-rdma.h
>> +++ b/include/linux/nvme-rdma.h
>> @@ -8,6 +8,8 @@
>>  #define NVME_RDMA_MAX_QUEUE_SIZE	128
>> +#define NVME_RDMA_SEND_WR_FACTOR	3	/* MR, SEND, INV */
>> +
>>  enum nvme_rdma_cm_fmt {
>>  	NVME_RDMA_CM_FMT_1_0	= 0x0,
>>  };
* [RFC PATCH 3/3] nvme: rdma: use ib_device's max_qp_wr to limit sqsize
From: Guixin Liu @ 2023-12-18 11:05 UTC
To: hch, sagi, kch, axboe; +Cc: linux-nvme

Currently, the host is limited to creating queues with a depth of
128. To enable larger queue sizes, constrain the sqsize based on
the ib_device's max_qp_wr capability.

Also remove the now-unused NVME_RDMA_MAX_QUEUE_SIZE macro.

Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
---
 drivers/nvme/host/rdma.c  | 14 ++++++++------
 include/linux/nvme-rdma.h |  2 --
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 81e2621..982f3e4 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -489,8 +489,7 @@ static int nvme_rdma_create_cq(struct ib_device *ibdev,
 static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
 {
 	struct ib_device *ibdev;
-	const int send_wr_factor = 3;			/* MR, SEND, INV */
-	const int cq_factor = send_wr_factor + 1;	/* + RECV */
+	const int cq_factor = NVME_RDMA_SEND_WR_FACTOR + 1;	/* + RECV */
 	int ret, pages_per_mr;
 
 	queue->device = nvme_rdma_find_get_device(queue->cm_id);
@@ -508,7 +507,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
 	if (ret)
 		goto out_put_dev;
 
-	ret = nvme_rdma_create_qp(queue, send_wr_factor);
+	ret = nvme_rdma_create_qp(queue, NVME_RDMA_SEND_WR_FACTOR);
 	if (ret)
 		goto out_destroy_ib_cq;
 
@@ -1006,6 +1005,7 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
 {
 	int ret;
 	bool changed;
+	int ib_max_qsize;
 
 	ret = nvme_rdma_configure_admin_queue(ctrl, new);
 	if (ret)
@@ -1030,11 +1030,13 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
 			ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
 	}
 
-	if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
+	ib_max_qsize = ctrl->device->dev->attrs.max_qp_wr /
+			(NVME_RDMA_SEND_WR_FACTOR + 1);
+	if (ctrl->ctrl.sqsize + 1 > ib_max_qsize) {
 		dev_warn(ctrl->ctrl.device,
 			"ctrl sqsize %u > max queue size %u, clamping down\n",
-			ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
-		ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
+			ctrl->ctrl.sqsize + 1, ib_max_qsize);
+		ctrl->ctrl.sqsize = ib_max_qsize - 1;
 	}
 
 	if (ctrl->ctrl.sqsize + 1 > ctrl->ctrl.maxcmd) {
diff --git a/include/linux/nvme-rdma.h b/include/linux/nvme-rdma.h
index c19858b..67ee770 100644
--- a/include/linux/nvme-rdma.h
+++ b/include/linux/nvme-rdma.h
@@ -6,8 +6,6 @@
 #ifndef _LINUX_NVME_RDMA_H
 #define _LINUX_NVME_RDMA_H
 
-#define NVME_RDMA_MAX_QUEUE_SIZE	128
-
 #define NVME_RDMA_SEND_WR_FACTOR	3	/* MR, SEND, INV */
 
 enum nvme_rdma_cm_fmt {
-- 
1.8.3.1
* Re: [RFC PATCH 3/3] nvme: rdma: use ib_device's max_qp_wr to limit sqsize
From: Sagi Grimberg @ 2023-12-18 11:57 UTC
To: Guixin Liu, hch, kch, axboe; +Cc: linux-nvme

On 12/18/23 13:05, Guixin Liu wrote:
> Currently, the host is limited to creating queues with a depth of
> 128. To enable larger queue sizes, constrain the sqsize based on
> the ib_device's max_qp_wr capability.
>
> Also remove the now-unused NVME_RDMA_MAX_QUEUE_SIZE macro.
>
> Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
> ---
>  drivers/nvme/host/rdma.c  | 14 ++++++++------
>  include/linux/nvme-rdma.h |  2 --
>  2 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
> index 81e2621..982f3e4 100644
> --- a/drivers/nvme/host/rdma.c
> +++ b/drivers/nvme/host/rdma.c
> @@ -489,8 +489,7 @@ static int nvme_rdma_create_cq(struct ib_device *ibdev,
>  static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
>  {
>  	struct ib_device *ibdev;
> -	const int send_wr_factor = 3;			/* MR, SEND, INV */
> -	const int cq_factor = send_wr_factor + 1;	/* + RECV */
> +	const int cq_factor = NVME_RDMA_SEND_WR_FACTOR + 1;	/* + RECV */
>  	int ret, pages_per_mr;
>
>  	queue->device = nvme_rdma_find_get_device(queue->cm_id);
> @@ -508,7 +507,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
>  	if (ret)
>  		goto out_put_dev;
>
> -	ret = nvme_rdma_create_qp(queue, send_wr_factor);
> +	ret = nvme_rdma_create_qp(queue, NVME_RDMA_SEND_WR_FACTOR);
>  	if (ret)
>  		goto out_destroy_ib_cq;
>
> @@ -1006,6 +1005,7 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
>  {
>  	int ret;
>  	bool changed;
> +	int ib_max_qsize;
>
>  	ret = nvme_rdma_configure_admin_queue(ctrl, new);
>  	if (ret)
> @@ -1030,11 +1030,13 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
>  			ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
>  	}
>
> -	if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
> +	ib_max_qsize = ctrl->device->dev->attrs.max_qp_wr /
> +			(NVME_RDMA_SEND_WR_FACTOR + 1);
> +	if (ctrl->ctrl.sqsize + 1 > ib_max_qsize) {
>  		dev_warn(ctrl->ctrl.device,
>  			"ctrl sqsize %u > max queue size %u, clamping down\n",
> -			ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
> -		ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
> +			ctrl->ctrl.sqsize + 1, ib_max_qsize);
> +		ctrl->ctrl.sqsize = ib_max_qsize - 1;
>  	}

This can be very, very big; I'm not sure why we should allow a queue of
a potentially giant depth. We should also impose a hard limit, maybe
aligned to the pci driver limit.
* Re: [RFC PATCH 3/3] nvme: rdma: use ib_device's max_qp_wr to limit sqsize
From: Guixin Liu @ 2023-12-18 12:31 UTC
To: Sagi Grimberg, hch, kch, axboe; +Cc: linux-nvme

On 2023/12/18 19:57, Sagi Grimberg wrote:
>
> On 12/18/23 13:05, Guixin Liu wrote:
>> Currently, the host is limited to creating queues with a depth of
>> 128. To enable larger queue sizes, constrain the sqsize based on
>> the ib_device's max_qp_wr capability.
>>
>> Also remove the now-unused NVME_RDMA_MAX_QUEUE_SIZE macro.
>>
>> Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
>> ---
>>  drivers/nvme/host/rdma.c  | 14 ++++++++------
>>  include/linux/nvme-rdma.h |  2 --
>>  2 files changed, 8 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
>> index 81e2621..982f3e4 100644
>> --- a/drivers/nvme/host/rdma.c
>> +++ b/drivers/nvme/host/rdma.c
>> @@ -489,8 +489,7 @@ static int nvme_rdma_create_cq(struct ib_device *ibdev,
>>  static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
>>  {
>>  	struct ib_device *ibdev;
>> -	const int send_wr_factor = 3;			/* MR, SEND, INV */
>> -	const int cq_factor = send_wr_factor + 1;	/* + RECV */
>> +	const int cq_factor = NVME_RDMA_SEND_WR_FACTOR + 1;	/* + RECV */
>>  	int ret, pages_per_mr;
>>  	queue->device = nvme_rdma_find_get_device(queue->cm_id);
>> @@ -508,7 +507,7 @@ static int nvme_rdma_create_queue_ib(struct nvme_rdma_queue *queue)
>>  	if (ret)
>>  		goto out_put_dev;
>> -	ret = nvme_rdma_create_qp(queue, send_wr_factor);
>> +	ret = nvme_rdma_create_qp(queue, NVME_RDMA_SEND_WR_FACTOR);
>>  	if (ret)
>>  		goto out_destroy_ib_cq;
>> @@ -1006,6 +1005,7 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
>>  {
>>  	int ret;
>>  	bool changed;
>> +	int ib_max_qsize;
>>  	ret = nvme_rdma_configure_admin_queue(ctrl, new);
>>  	if (ret)
>> @@ -1030,11 +1030,13 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
>>  			ctrl->ctrl.opts->queue_size, ctrl->ctrl.sqsize + 1);
>>  	}
>> -	if (ctrl->ctrl.sqsize + 1 > NVME_RDMA_MAX_QUEUE_SIZE) {
>> +	ib_max_qsize = ctrl->device->dev->attrs.max_qp_wr /
>> +			(NVME_RDMA_SEND_WR_FACTOR + 1);
>> +	if (ctrl->ctrl.sqsize + 1 > ib_max_qsize) {
>>  		dev_warn(ctrl->ctrl.device,
>>  			"ctrl sqsize %u > max queue size %u, clamping down\n",
>> -			ctrl->ctrl.sqsize + 1, NVME_RDMA_MAX_QUEUE_SIZE);
>> -		ctrl->ctrl.sqsize = NVME_RDMA_MAX_QUEUE_SIZE - 1;
>> +			ctrl->ctrl.sqsize + 1, ib_max_qsize);
>> +		ctrl->ctrl.sqsize = ib_max_qsize - 1;
>>  	}
>
> This can be very, very big; I'm not sure why we should allow a queue of
> a potentially giant depth. We should also impose a hard limit, maybe
> aligned to the pci driver limit.

When we run "nvme connect", the queue depth is restricted to between 16
and 1024 in nvmf_parse_options(), so this will not be very big; the max
is 1024 in any case.
end of thread, other threads: [~2023-12-18 12:41 UTC | newest]

Thread overview: 9+ messages:
2023-12-18 11:05 [RFC PATCH 0/3] *** use rdma device capability to limit queue size *** Guixin Liu
2023-12-18 11:05 ` [RFC PATCH 1/3] nvmet: change get_max_queue_size param to nvmet_sq Guixin Liu
2023-12-18 11:52   ` Sagi Grimberg
2023-12-18 11:05 ` [RFC PATCH 2/3] nvmet: rdma: utilize ib_device capability for setting max_queue_size Guixin Liu
2023-12-18 11:57   ` Sagi Grimberg
2023-12-18 12:41     ` Guixin Liu
2023-12-18 11:05 ` [RFC PATCH 3/3] nvme: rdma: use ib_device's max_qp_wr to limit sqsize Guixin Liu
2023-12-18 11:57   ` Sagi Grimberg
2023-12-18 12:31     ` Guixin Liu