* [PATCH] nvme-pci: Fix queue_count to consider nr_possible_cpu
@ 2019-05-04 11:39 Minwoo Im
2019-05-08 7:14 ` Christoph Hellwig
0 siblings, 1 reply; 4+ messages in thread
From: Minwoo Im @ 2019-05-04 11:39 UTC (permalink / raw)
The parameter should be set with the updated count 'n' instead of
'val' itself. The local variable 'cnt' is sized at 6 bytes because nvme
supports up to 65536 io queues (five decimal digits plus the
terminating NUL).
Fixes: 3b6592f70 ("nvme: utilize two queue maps, one for reads and one
for writes")
Cc: Keith Busch <keith.busch@intel.com>
Cc: Jens Axboe <axboe@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
---
drivers/nvme/host/pci.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 3e4fb891a95a..d3be3193d023 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -147,6 +147,7 @@ static int io_queue_depth_set(const char *val, const struct kernel_param *kp)
static int queue_count_set(const char *val, const struct kernel_param *kp)
{
int n, ret;
+ char cnt[6];
ret = kstrtoint(val, 10, &n);
if (ret)
@@ -154,7 +155,8 @@ static int queue_count_set(const char *val, const struct kernel_param *kp)
if (n > num_possible_cpus())
n = num_possible_cpus();
- return param_set_int(val, kp);
+ sprintf(cnt, "%d", n);
+ return param_set_int(cnt, kp);
}
static inline unsigned int sq_idx(unsigned int qid, u32 stride)
--
2.17.1
* [PATCH] nvme-pci: Fix queue_count to consider nr_possible_cpu
2019-05-04 11:39 [PATCH] nvme-pci: Fix queue_count to consider nr_possible_cpu Minwoo Im
@ 2019-05-08 7:14 ` Christoph Hellwig
2019-05-12 13:09 ` Minwoo Im
2019-05-12 14:16 ` Sagi Grimberg
0 siblings, 2 replies; 4+ messages in thread
From: Christoph Hellwig @ 2019-05-08 7:14 UTC (permalink / raw)
On Sat, May 04, 2019 at 08:39:23PM +0900, Minwoo Im wrote:
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -147,6 +147,7 @@ static int io_queue_depth_set(const char *val, const struct kernel_param *kp)
> static int queue_count_set(const char *val, const struct kernel_param *kp)
> {
> int n, ret;
> + char cnt[6];
>
> ret = kstrtoint(val, 10, &n);
> if (ret)
> @@ -154,7 +155,8 @@ static int queue_count_set(const char *val, const struct kernel_param *kp)
> if (n > num_possible_cpus())
> n = num_possible_cpus();
>
> - return param_set_int(val, kp);
> + sprintf(cnt, "%d", n);
> + return param_set_int(cnt, kp);
This just looks weird. If we want to limit the number why not
get rid of all this param_ops stuff and just verify the
number in nvme_calc_irq_sets without all that boilerplate code?
* [PATCH] nvme-pci: Fix queue_count to consider nr_possible_cpu
2019-05-08 7:14 ` Christoph Hellwig
@ 2019-05-12 13:09 ` Minwoo Im
2019-05-12 14:16 ` Sagi Grimberg
1 sibling, 0 replies; 4+ messages in thread
From: Minwoo Im @ 2019-05-12 13:09 UTC (permalink / raw)
> This just looks weird. If we want to limit the number why not
> get rid of all this param_ops stuff and just verify the
> number in nvme_calc_irq_sets without all that boilerplate code?
>
Hi Christoph,
Thanks for your review on this. The module parameters that use custom
param_ops are currently the following two, as you know:
(1) write_queues
(2) poll_queues
If those two were used only in nvme_setup_irqs() and
nvme_calc_irq_sets(), then we could remove the param_ops and go with
what you suggested. However, before preparing their irq sets, the
following function is invoked to work out the proper nr_io_queues by
referring to those module params.
static unsigned int max_io_queues(void)
{
return num_possible_cpus() + write_queues + poll_queues;
}
If max_io_queues() yields a nr_io_queues value that is too large, we
need to do something to clamp it to a proper value in
nvme_setup_irqs() or nvme_calc_irq_sets(), which might also end up as
boilerplate code.
Please see the code below and give your comment on it. I guess it also
doesn't look that good :(
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 2a8708c9ac18..28e43627da5a 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2073,6 +2073,16 @@ static int nvme_setup_irqs(struct nvme_dev *dev,
unsigned int nr_io_queues)
* Poll queues don't need interrupts, but we need at least one IO
* queue left over for non-polled IO.
*/
+ if (poll_queues > num_possible_cpus()) {
+ poll_queues = num_possible_cpus();
+ nr_io_queues = max_io_queues();
+ }
+
+ if (write_queues > num_possible_cpus()) {
+ write_queues = num_possible_cpus();
+ nr_io_queues = max_io_queues();
+ }
+
this_p_queues = poll_queues;
if (this_p_queues >= nr_io_queues) {
this_p_queues = nr_io_queues - 1;
Thanks,
* [PATCH] nvme-pci: Fix queue_count to consider nr_possible_cpu
2019-05-08 7:14 ` Christoph Hellwig
2019-05-12 13:09 ` Minwoo Im
@ 2019-05-12 14:16 ` Sagi Grimberg
1 sibling, 0 replies; 4+ messages in thread
From: Sagi Grimberg @ 2019-05-12 14:16 UTC (permalink / raw)
>> --- a/drivers/nvme/host/pci.c
>> +++ b/drivers/nvme/host/pci.c
>> @@ -147,6 +147,7 @@ static int io_queue_depth_set(const char *val, const struct kernel_param *kp)
>> static int queue_count_set(const char *val, const struct kernel_param *kp)
>> {
>> int n, ret;
>> + char cnt[6];
>>
>> ret = kstrtoint(val, 10, &n);
>> if (ret)
>> @@ -154,7 +155,8 @@ static int queue_count_set(const char *val, const struct kernel_param *kp)
>> if (n > num_possible_cpus())
>> n = num_possible_cpus();
>>
>> - return param_set_int(val, kp);
>> + sprintf(cnt, "%d", n);
>> + return param_set_int(cnt, kp);
>
> This just looks weird.
Yea..
> If we want to limit the number why not
> get rid of all this param_ops stuff and just verify the
> number in nvme_calc_irq_sets without all that boilerplate code?
I would just add the check alongside the possible_cpu check and be
done with it (although if it got past that check, this is very much
theoretical)...