public inbox for linux-kernel@vger.kernel.org
* [PATCH] xen/blkfront: increase the default value of xen_blkif_max_segments
@ 2014-12-16 10:11 Bob Liu
  2014-12-16 10:32 ` [Xen-devel] " Roger Pau Monné
  0 siblings, 1 reply; 6+ messages in thread
From: Bob Liu @ 2014-12-16 10:11 UTC (permalink / raw)
  To: xen-devel; +Cc: konrad.wilk, linux-kernel, david.vrabel, Bob Liu

The default maximum number of segments in indirect requests was 32; IO
operations with a bigger block size (>32*4k) would be split, and
performance would start to drop.

Nowadays backend devices usually support a 512k max_sectors_kb on
desktops, and it may be larger on server machines with high-end storage
systems. The resulting default maximum request size of 128k is not very
appropriate, so this patch increases the default maximum value to 128
(128*4k=512k).

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 drivers/block/xen-blkfront.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 2236c6f..1bf2429 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -94,9 +94,9 @@ static const struct block_device_operations xlvbd_block_fops;
  * by the backend driver.
  */
 
-static unsigned int xen_blkif_max_segments = 32;
+static unsigned int xen_blkif_max_segments = 128;
 module_param_named(max, xen_blkif_max_segments, int, S_IRUGO);
-MODULE_PARM_DESC(max, "Maximum amount of segments in indirect requests (default is 32)");
+MODULE_PARM_DESC(max, "Maximum amount of segments in indirect requests (default is 128)");
 
 #define BLK_RING_SIZE __CONST_RING_SIZE(blkif, PAGE_SIZE)
 
-- 
1.7.10.4



* Re: [Xen-devel] [PATCH] xen/blkfront: increase the default value of xen_blkif_max_segments
  2014-12-16 10:11 [PATCH] xen/blkfront: increase the default value of xen_blkif_max_segments Bob Liu
@ 2014-12-16 10:32 ` Roger Pau Monné
  2014-12-17  8:18   ` Bob Liu
  0 siblings, 1 reply; 6+ messages in thread
From: Roger Pau Monné @ 2014-12-16 10:32 UTC (permalink / raw)
  To: Bob Liu, xen-devel; +Cc: david.vrabel, linux-kernel

On 16/12/14 at 11:11, Bob Liu wrote:
> The default maximum number of segments in indirect requests was 32; IO
> operations with a bigger block size (>32*4k) would be split, and
> performance would start to drop.
> 
> Nowadays backend devices usually support a 512k max_sectors_kb on
> desktops, and it may be larger on server machines with high-end storage
> systems. The resulting default maximum request size of 128k is not very
> appropriate, so this patch increases the default maximum value to 128
> (128*4k=512k).

This looks fine, but do you have any data/graphs to back up your reasoning?

I would also add to the commit message that this change implies we can
now have 32*128+32 = 4128 in-flight grants, which greatly surpasses the
default amount of persistent grants blkback can handle, so the LRU in
blkback will kick in.

Roger.



* Re: [Xen-devel] [PATCH] xen/blkfront: increase the default value of xen_blkif_max_segments
  2014-12-16 10:32 ` [Xen-devel] " Roger Pau Monné
@ 2014-12-17  8:18   ` Bob Liu
  2014-12-17 16:13     ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 6+ messages in thread
From: Bob Liu @ 2014-12-17  8:18 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, david.vrabel, linux-kernel


On 12/16/2014 06:32 PM, Roger Pau Monné wrote:
> On 16/12/14 at 11:11, Bob Liu wrote:
>> The default maximum number of segments in indirect requests was 32; IO
>> operations with a bigger block size (>32*4k) would be split, and
>> performance would start to drop.
>>
>> Nowadays backend devices usually support a 512k max_sectors_kb on
>> desktops, and it may be larger on server machines with high-end storage
>> systems. The resulting default maximum request size of 128k is not very
>> appropriate, so this patch increases the default maximum value to 128
>> (128*4k=512k).
> 
> This looks fine, but do you have any data/graphs to back up your reasoning?
> 

I only have some results from a 1M block size FIO test, but I think
that's enough.

xen_blkfront.max 	Rate (MB/s) 	Percent of Dom-0
32 	11.1 	31.0%
48 	15.3 	42.7%
64 	19.8 	55.3%
80 	19.9 	55.6%
96 	23.0 	64.2%
112 	23.7 	66.2%
128 	31.6 	88.3%

The rates above are compared against the dom-0 rate of 35.8 MB/s.

> I would also add to the commit message that this change implies we can
> now have 32*128+32 = 4128 in-flight grants, which greatly surpasses the

The number could be even larger when using more pages for the
xen-blkfront/backend ring, based on Wei Liu's patch "xenbus_client:
extend interface to support multi-page ring"; it helped improve the IO
performance a lot on our system connected to high-end storage.
I'm preparing to resend the related patches.

> default amount of persistent grants blkback can handle, so the LRU in
> blkback will kick in.
> 

Sounds good.

-- 
Regards,
-Bob


* Re: [Xen-devel] [PATCH] xen/blkfront: increase the default value of xen_blkif_max_segments
  2014-12-17  8:18   ` Bob Liu
@ 2014-12-17 16:13     ` Konrad Rzeszutek Wilk
  2014-12-17 16:34       ` David Vrabel
  0 siblings, 1 reply; 6+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-12-17 16:13 UTC (permalink / raw)
  To: Bob Liu; +Cc: Roger Pau Monné, linux-kernel, david.vrabel, xen-devel

On Wed, Dec 17, 2014 at 04:18:58PM +0800, Bob Liu wrote:
> 
> On 12/16/2014 06:32 PM, Roger Pau Monné wrote:
> > On 16/12/14 at 11:11, Bob Liu wrote:
> >> The default maximum number of segments in indirect requests was 32; IO
> >> operations with a bigger block size (>32*4k) would be split, and
> >> performance would start to drop.
> >>
> >> Nowadays backend devices usually support a 512k max_sectors_kb on
> >> desktops, and it may be larger on server machines with high-end storage
> >> systems. The resulting default maximum request size of 128k is not very
> >> appropriate, so this patch increases the default maximum value to 128
> >> (128*4k=512k).
> > 
> > This looks fine, but do you have any data/graphs to back up your reasoning?
> > 
> 
> I only have some results from a 1M block size FIO test, but I think
> that's enough.
> 
> xen_blkfront.max 	Rate (MB/s) 	Percent of Dom-0
> 32 	11.1 	31.0%
> 48 	15.3 	42.7%
> 64 	19.8 	55.3%
> 80 	19.9 	55.6%
> 96 	23.0 	64.2%
> 112 	23.7 	66.2%
> 128 	31.6 	88.3%
> 
> The rates above are compared against the dom-0 rate of 35.8 MB/s.
> 
> > I would also add to the commit message that this change implies we can
> > now have 32*128+32 = 4128 in-flight grants, which greatly surpasses the
> 
> The number could be even larger when using more pages for the
> xen-blkfront/backend ring, based on Wei Liu's patch "xenbus_client:
> extend interface to support multi-page ring"; it helped improve the IO
> performance a lot on our system connected to high-end storage.
> I'm preparing to resend the related patches.

Or potentially making the request and response be separate rings - with
the response ring entries not tied to the requests. As it is right now,
if we have requests at, say, slots 1, 5, and 7, we expect the responses
to be at slots 1, 5, and 7 as well. If we made the response ring producer
index not tied to the requests, we could put the responses in the first
available slots - say at 1, 2, and 3 (if all three responses came at the
same time).

> 
> > default amount of persistent grants blkback can handle, so the LRU in
> > blkback will kick in.
> > 
> 
> Sounds good.
> 
> -- 
> Regards,
> -Bob
> 


* Re: [Xen-devel] [PATCH] xen/blkfront: increase the default value of xen_blkif_max_segments
  2014-12-17 16:13     ` Konrad Rzeszutek Wilk
@ 2014-12-17 16:34       ` David Vrabel
  2014-12-17 16:47         ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 6+ messages in thread
From: David Vrabel @ 2014-12-17 16:34 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, Bob Liu
  Cc: david.vrabel, xen-devel, linux-kernel, Roger Pau Monné

On 17/12/14 16:13, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 17, 2014 at 04:18:58PM +0800, Bob Liu wrote:
>>
>> On 12/16/2014 06:32 PM, Roger Pau Monné wrote:
>>> On 16/12/14 at 11:11, Bob Liu wrote:
>>>> The default maximum number of segments in indirect requests was 32; IO
>>>> operations with a bigger block size (>32*4k) would be split, and
>>>> performance would start to drop.
>>>>
>>>> Nowadays backend devices usually support a 512k max_sectors_kb on
>>>> desktops, and it may be larger on server machines with high-end storage
>>>> systems. The resulting default maximum request size of 128k is not very
>>>> appropriate, so this patch increases the default maximum value to 128
>>>> (128*4k=512k).
>>>
>>> This looks fine, but do you have any data/graphs to back up your reasoning?
>>>
>>
>> I only have some results from a 1M block size FIO test, but I think
>> that's enough.
>>
>> xen_blkfront.max 	Rate (MB/s) 	Percent of Dom-0
>> 32 	11.1 	31.0%
>> 48 	15.3 	42.7%
>> 64 	19.8 	55.3%
>> 80 	19.9 	55.6%
>> 96 	23.0 	64.2%
>> 112 	23.7 	66.2%
>> 128 	31.6 	88.3%
>>
>> The rates above are compared against the dom-0 rate of 35.8 MB/s.
>>
>>> I would also add to the commit message that this change implies we can
>>> now have 32*128+32 = 4128 in-flight grants, which greatly surpasses the
>>
>> The number could be even larger when using more pages for the
>> xen-blkfront/backend ring, based on Wei Liu's patch "xenbus_client:
>> extend interface to support multi-page ring"; it helped improve the IO
>> performance a lot on our system connected to high-end storage.
>> I'm preparing to resend the related patches.
> 
> > Or potentially making the request and response be separate rings - with
> > the response ring entries not tied to the requests. As it is right now,
> > if we have requests at, say, slots 1, 5, and 7, we expect the responses
> > to be at slots 1, 5, and 7 as well.

No. Responses are placed in the first available slot.  The response is
associated with the original request by the ID field.

See make_response().

David


* Re: [Xen-devel] [PATCH] xen/blkfront: increase the default value of xen_blkif_max_segments
  2014-12-17 16:34       ` David Vrabel
@ 2014-12-17 16:47         ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 6+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-12-17 16:47 UTC (permalink / raw)
  To: David Vrabel; +Cc: Bob Liu, xen-devel, linux-kernel, Roger Pau Monné

On Wed, Dec 17, 2014 at 04:34:41PM +0000, David Vrabel wrote:
> On 17/12/14 16:13, Konrad Rzeszutek Wilk wrote:
> > On Wed, Dec 17, 2014 at 04:18:58PM +0800, Bob Liu wrote:
> >>
> >> On 12/16/2014 06:32 PM, Roger Pau Monné wrote:
> >>> On 16/12/14 at 11:11, Bob Liu wrote:
> >>>> The default maximum number of segments in indirect requests was 32; IO
> >>>> operations with a bigger block size (>32*4k) would be split, and
> >>>> performance would start to drop.
> >>>>
> >>>> Nowadays backend devices usually support a 512k max_sectors_kb on
> >>>> desktops, and it may be larger on server machines with high-end storage
> >>>> systems. The resulting default maximum request size of 128k is not very
> >>>> appropriate, so this patch increases the default maximum value to 128
> >>>> (128*4k=512k).
> >>>
> >>> This looks fine, but do you have any data/graphs to back up your reasoning?
> >>>
> >>
> >> I only have some results from a 1M block size FIO test, but I think
> >> that's enough.
> >>
> >> xen_blkfront.max 	Rate (MB/s) 	Percent of Dom-0
> >> 32 	11.1 	31.0%
> >> 48 	15.3 	42.7%
> >> 64 	19.8 	55.3%
> >> 80 	19.9 	55.6%
> >> 96 	23.0 	64.2%
> >> 112 	23.7 	66.2%
> >> 128 	31.6 	88.3%
> >>
> >> The rates above are compared against the dom-0 rate of 35.8 MB/s.
> >>
> >>> I would also add to the commit message that this change implies we can
> >>> now have 32*128+32 = 4128 in-flight grants, which greatly surpasses the
> >>
> >> The number could be even larger when using more pages for the
> >> xen-blkfront/backend ring, based on Wei Liu's patch "xenbus_client:
> >> extend interface to support multi-page ring"; it helped improve the IO
> >> performance a lot on our system connected to high-end storage.
> >> I'm preparing to resend the related patches.
> > 
> > Or potentially making the request and response be separate rings - with
> > the response ring entries not tied to the requests. As it is right now,
> > if we have requests at, say, slots 1, 5, and 7, we expect the responses
> > to be at slots 1, 5, and 7 as well.
> 
> No. Responses are placed in the first available slot.  The response is
> associated with the original request by the ID field.
> 
> See make_response().

You are right! Thank you for the update.
> 
> David


Thread overview: 6+ messages
2014-12-16 10:11 [PATCH] xen/blkfront: increase the default value of xen_blkif_max_segments Bob Liu
2014-12-16 10:32 ` [Xen-devel] " Roger Pau Monné
2014-12-17  8:18   ` Bob Liu
2014-12-17 16:13     ` Konrad Rzeszutek Wilk
2014-12-17 16:34       ` David Vrabel
2014-12-17 16:47         ` Konrad Rzeszutek Wilk
