From: Tokunori Ikegami <ikegami.t@gmail.com>
To: Hannes Reinecke <hare@suse.de>, linux-block@vger.kernel.org
Cc: linux-nvme@lists.infradead.org
Subject: Re: [PATCH] block, nvme: Increase max segments parameter setting value
Date: Wed, 25 Mar 2020 02:17:24 +0900 [thread overview]
Message-ID: <5e296f02-27a4-5c6e-35a4-5bd6a53bef3c@gmail.com> (raw)
In-Reply-To: <2293733b-77d7-6fbb-a81a-b68c10656757@suse.de>
On 2020/03/24 16:16, Hannes Reinecke wrote:
> On 3/23/20 7:23 PM, Tokunori Ikegami wrote:
>> Currently a data length of up to UINT_MAX can be specified, but the
>> request fails. This is caused by the max segments parameter being
>> limited to USHRT_MAX. To resolve this issue, widen the type of the
>> max segments parameter to increase the value limit range.
>>
>> Signed-off-by: Tokunori Ikegami <ikegami.t@gmail.com>
>> Cc: linux-block@vger.kernel.org
>> Cc: linux-nvme@lists.infradead.org
>> ---
>> block/blk-settings.c | 2 +-
>> drivers/nvme/host/core.c | 2 +-
>> include/linux/blkdev.h | 7 ++++---
>> 3 files changed, 6 insertions(+), 5 deletions(-)
>>
>> diff --git a/block/blk-settings.c b/block/blk-settings.c
>> index c8eda2e7b91e..ed40bda573c2 100644
>> --- a/block/blk-settings.c
>> +++ b/block/blk-settings.c
>> @@ -266,7 +266,7 @@ EXPORT_SYMBOL(blk_queue_max_write_zeroes_sectors);
>>   *    Enables a low level driver to set an upper limit on the number of
>>   *    hw data segments in a request.
>>   **/
>> -void blk_queue_max_segments(struct request_queue *q, unsigned short max_segments)
>> +void blk_queue_max_segments(struct request_queue *q, unsigned int max_segments)
>>  {
>>      if (!max_segments) {
>>          max_segments = 1;
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index a4d8c90ee7cc..2b48aab0969e 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -2193,7 +2193,7 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
>>          max_segments = min_not_zero(max_segments, ctrl->max_segments);
>>          blk_queue_max_hw_sectors(q, ctrl->max_hw_sectors);
>> -        blk_queue_max_segments(q, min_t(u32, max_segments, USHRT_MAX));
>> +        blk_queue_max_segments(q, min_t(u32, max_segments, UINT_MAX));
>>      }
>>      if ((ctrl->quirks & NVME_QUIRK_STRIPE_SIZE) &&
>>          is_power_of_2(ctrl->max_hw_sectors))
>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>> index f629d40c645c..4f4224e20c28 100644
>> --- a/include/linux/blkdev.h
>> +++ b/include/linux/blkdev.h
>> @@ -338,8 +338,8 @@ struct queue_limits {
>>      unsigned int        max_write_zeroes_sectors;
>>      unsigned int        discard_granularity;
>>      unsigned int        discard_alignment;
>> +    unsigned int        max_segments;
>> -    unsigned short      max_segments;
>>      unsigned short      max_integrity_segments;
>>      unsigned short      max_discard_segments;
>> @@ -1067,7 +1067,8 @@ extern void blk_queue_make_request(struct request_queue *, make_request_fn *);
>>  extern void blk_queue_bounce_limit(struct request_queue *, u64);
>>  extern void blk_queue_max_hw_sectors(struct request_queue *, unsigned int);
>>  extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);
>> -extern void blk_queue_max_segments(struct request_queue *, unsigned short);
>> +extern void blk_queue_max_segments(struct request_queue *q,
>> +                                   unsigned int max_segments);
>>  extern void blk_queue_max_discard_segments(struct request_queue *,
>>                                             unsigned short);
>>  extern void blk_queue_max_segment_size(struct request_queue *, unsigned int);
>> @@ -1276,7 +1277,7 @@ static inline unsigned int queue_max_hw_sectors(const struct request_queue *q)
>>      return q->limits.max_hw_sectors;
>>  }
>> -static inline unsigned short queue_max_segments(const struct request_queue *q)
>> +static inline unsigned int queue_max_segments(const struct request_queue *q)
>>  {
>>      return q->limits.max_segments;
>>  }
>>
> One would assume that the same reasoning goes for
> max_integrity_segment, no?
The error case itself is resolved by this change without touching
max_integrity_segments.
Also, that value defaults to 0 and is set to 1 by the nvme driver, so
it does not seem necessary to change it for this case.
>
> Otherwise looks good.
>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
Thanks for your review.
>
> Cheers,
>
> Hannes
_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
Thread overview: 19+ messages
2020-03-23 18:23 [PATCH] block, nvme: Increase max segments parameter setting value Tokunori Ikegami
2020-03-23 19:14 ` Chaitanya Kulkarni
2020-03-23 23:09 ` Tokunori Ikegami
2020-03-24 0:02 ` Keith Busch
2020-03-24 16:51 ` Tokunori Ikegami
2020-03-27 17:50 ` Tokunori Ikegami
2020-03-27 18:18 ` Keith Busch
2020-03-28 2:11 ` Ming Lei
2020-03-28 3:13 ` Keith Busch
2020-03-28 8:28 ` Ming Lei
2020-03-28 12:57 ` Tokunori Ikegami
2020-03-29 3:01 ` Ming Lei
2020-03-30 9:15 ` Tokunori Ikegami
2020-03-30 13:53 ` Keith Busch
2020-03-31 15:24 ` Tokunori Ikegami
2020-03-31 14:13 ` Joshi
2020-03-31 15:37 ` Tokunori Ikegami
2020-03-24 7:16 ` Hannes Reinecke
2020-03-24 17:17 ` Tokunori Ikegami [this message]