From: David Vrabel <david.vrabel@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Bob Liu <bob.liu@oracle.com>
Cc: david.vrabel@citrix.com, xen-devel@lists.xen.org,
	linux-kernel@vger.kernel.org,
	"Roger Pau Monné" <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen/blkfront: increase the default value of xen_blkif_max_segments
Date: Wed, 17 Dec 2014 16:34:41 +0000
Message-ID: <5491B0A1.7080601@citrix.com>
In-Reply-To: <20141217161323.GA6414@laptop.dumpdata.com>

On 17/12/14 16:13, Konrad Rzeszutek Wilk wrote:
> On Wed, Dec 17, 2014 at 04:18:58PM +0800, Bob Liu wrote:
>>
>> On 12/16/2014 06:32 PM, Roger Pau Monné wrote:
>>> On 16/12/14 at 11.11, Bob Liu wrote:
>>>> The default maximum number of segments in indirect requests was 32, so
>>>> IO operations with a bigger block size (>32*4k) would be split and
>>>> performance would start to drop.
>>>>
>>>> Nowadays backend devices usually support a max_sectors_kb of 512k on
>>>> desktops, and possibly larger on server machines with high-end storage
>>>> systems. The old default of 128k was not very appropriate, so this
>>>> patch increases the default maximum value to 128 (128*4k = 512k).
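For reference, the change itself is essentially a one-line default bump in
drivers/block/xen-blkfront.c. A minimal sketch, assuming the usual
module_param_named() spelling behind the xen_blkfront.max knob shown in the
table below; an illustration of the idea, not a quote of the patch:

    /* Cap on segments per indirect request, tunable as xen_blkfront.max. */
    static unsigned int xen_blkif_max_segments = 128;    /* previously 32 */
    module_param_named(max, xen_blkif_max_segments, uint, S_IRUGO);
    MODULE_PARM_DESC(max,
            "Maximum amount of segments in indirect requests (default is 128)");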
>>>
>>> This looks fine; do you have any data/graphs to back up your reasoning?
>>>
>>
>> I only have some results from a 1M block size FIO test, but I think
>> that's enough.
>>
>> xen_blkfront.max    Rate (MB/s)    Percent of dom-0
>>              32           11.1               31.0%
>>              48           15.3               42.7%
>>              64           19.8               55.3%
>>              80           19.9               55.6%
>>              96           23.0               64.2%
>>             112           23.7               66.2%
>>             128           31.6               88.3%
>>
>> The rates above are compared against the dom-0 rate of 35.8 MB/s.
>>
>>> I would also add to the commit message that this change implies we can
>>> now have 32*128+32 = 4128 in-flight grants, which greatly surpasses the
>>
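(For the arithmetic above: each of the 32 requests on the ring can carry 128
data segments, each needing one grant, plus one grant per request for the
indirect descriptor page that holds the segment entries; 128 eight-byte
entries fit easily in a single 4k page, hence 32 * 128 + 32 = 4128.)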
>> The number could be even larger when more pages are used for the
>> xen-blkfront/backend ring, based on Wei Liu's patch "xenbus_client:
>> extend interface to support multi-page ring"; that change helped improve
>> IO performance a lot on our systems connected to high-end storage.
>> I'm preparing to resend the related patches.
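For context, the multi-page ring work Bob mentions lets the frontend grant
several contiguous pages for a single shared ring instead of one. A rough
frontend-side sketch; the nr_pages/grefs signature of xenbus_grant_ring()
here is an assumption modelled on the proposed interface, not code from the
patch series:

    /* Hypothetical setup of an order-2 (4-page, 16k) shared ring. */
    #define RING_ORDER 2
    #define RING_PAGES (1U << RING_ORDER)

    static int setup_multipage_ring(struct xenbus_device *dev,
                                    grant_ref_t grefs[RING_PAGES])
    {
        struct blkif_sring *sring;
        int err;

        /* Allocate RING_PAGES contiguous pages for the shared ring. */
        sring = (struct blkif_sring *)
                __get_free_pages(GFP_NOIO | __GFP_HIGH, RING_ORDER);
        if (!sring)
                return -ENOMEM;
        SHARED_RING_INIT(sring);

        /* Grant every ring page to the backend in one call (assumed API). */
        err = xenbus_grant_ring(dev, sring, RING_PAGES, grefs);
        if (err < 0)
                free_pages((unsigned long)sring, RING_ORDER);
        return err;
    }

A larger ring means more request slots can be in flight at once, which is
where the extra IO performance on high-end storage would come from.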
> 
> Or potentially making the request and response be separate rings - and
> the response ring entries not tied to the requests. As it is right now,
> if we have requests at, say, slots 1, 5, and 7, we expect the responses
> to be at slots 1, 5, and 7 as well.

No. Responses are placed in the first available slot.  The response is
associated with the original request by the ID field.

See make_response().
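Roughly, and from memory, the relevant part looks like this (simplified to
the native protocol, with the event-channel notification check elided; the
real function is in drivers/block/xen-blkback/blkback.c):

    static void make_response(struct xen_blkif *blkif, u64 id,
                              unsigned short op, int st)
    {
        struct blkif_response resp;
        union blkif_back_rings *rings = &blkif->blk_rings;

        /* Match the response to the request by id, not by slot. */
        resp.id = id;
        resp.operation = op;
        resp.status = st;

        /* The response goes into the next free producer slot ... */
        memcpy(RING_GET_RESPONSE(&rings->native, rings->native.rsp_prod_pvt),
               &resp, sizeof(resp));
        /* ... which need not be the slot the request arrived in. */
        rings->native.rsp_prod_pvt++;
        RING_PUSH_RESPONSES(&rings->native);
    }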

David

Thread overview: 6+ messages
2014-12-16 10:11 [PATCH] xen/blkfront: increase the default value of xen_blkif_max_segments Bob Liu
2014-12-16 10:32 ` [Xen-devel] " Roger Pau Monné
2014-12-17  8:18   ` Bob Liu
2014-12-17 16:13     ` Konrad Rzeszutek Wilk
2014-12-17 16:34       ` David Vrabel [this message]
2014-12-17 16:47         ` Konrad Rzeszutek Wilk
