xen-devel.lists.xenproject.org archive mirror
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: "Duan, Ronghui" <ronghui.duan@intel.com>
Cc: "'xen-devel@lists.xen.org'" <xen-devel@lists.xen.org>,
	'Stefano Stabellini' <stefano.stabellini@eu.citrix.com>,
	'Ian Jackson' <Ian.Jackson@eu.citrix.com>,
	'Jan Beulich' <JBeulich@suse.com>,
	'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>
Subject: Re: [RFC v1 0/5] VBD: enlarge max segment per request in blkfront
Date: Mon, 17 Sep 2012 10:37:08 -0400	[thread overview]
Message-ID: <20120917143707.GD14012@phenom.dumpdata.com> (raw)
In-Reply-To: <A21691DE07B84740B5F0B81466D5148A23BF7E2F@SHSMSX102.ccr.corp.intel.com>

On Mon, Sep 17, 2012 at 06:33:29AM +0000, Duan, Ronghui wrote:
> At last, I saw the regression in random I/O.
> This is a patch to fix the performance regression. Originally the pending-request members were allocated on the stack; in my last patch I allocated them as each request arrived, but that hurt performance. In this fix I allocate all of them when blkback initializes. However, due to a bug somewhere we cannot free them (the same applies to the other pending-request members). I am still looking for the cause, but have no idea yet.

Right. When I implemented something similar (allocating those pools of
pages at startup), I hit the same problem: freeing the grant array blew
up the machine.

But... this was before
http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=commit;h=2fc136eecd0c647a6b13fcd00d0c41a1a28f35a5

- which might be the fix for this.

> Konrad, thanks for your comments. Could you give it a try when you have time?
> 
> -ronghui
> 
> > -----Original Message-----
> > From: Duan, Ronghui
> > Sent: Thursday, September 13, 2012 10:06 PM
> > To: Konrad Rzeszutek Wilk; Stefano Stabellini
> > Cc: Jan Beulich; Ian Jackson; xen-devel@lists.xen.org
> > Subject: RE: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request in
> > blkfront
> > 
> > > > > But you certainly shouldn't be proposing features getting used
> > > > > unconditionally or by default that benefit one class of backing
> > > > > devices and severely penalize others.
> > > >
> > > > Right.
> > > > I am wondering.. Considering that the in-kernel blkback is mainly
> > > > used with physical partitions, is it possible that your patches
> > > > cause a regression with unmodified backends that don't support the
> > > > new protocol, like QEMU for example?
> > >
> > > Well for right now I am just using the most simple configuration to
> > > eliminate any extra variables (stacking of components). So my
> > > "testing" has been just on phy:/dev/sda,xvda,w with the sda being a Corsair
> > SSD.
> > 
> > I totally agree that we should not break other configurations when enabling
> > what we want. From my understanding, though, the patch adds only a little
> > overhead to the frontend/backend code path, so it should cost pure random
> > I/O only slightly. In the 4K read case I got just 50MB/s without the patch;
> > I need a more powerful disk to verify it.
> > 
> > Ronghui
> > 
> > 
> > > -----Original Message-----
> > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > > Sent: Thursday, September 13, 2012 9:24 PM
> > > To: Stefano Stabellini
> > > Cc: Jan Beulich; Duan, Ronghui; Ian Jackson; xen-devel@lists.xen.org
> > > Subject: Re: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per
> > > request in blkfront
> > >
> > > On Thu, Sep 13, 2012 at 12:05:35PM +0100, Stefano Stabellini wrote:
> > > > On Thu, 13 Sep 2012, Jan Beulich wrote:
> > > > > >>> On 13.09.12 at 04:28, "Duan, Ronghui" <ronghui.duan@intel.com> wrote:
> > > > > >> And with your patch got:
> > > > > >>   read : io=4096.0MB, bw=92606KB/s, iops=23151 , runt=
> > > > > >> 45292msec
> > > > > >>
> > > > > >> without:
> > > > > >>   read : io=4096.0MB, bw=145187KB/s, iops=36296 , runt=
> > > > > >> 28889msec
> > > > > >>
> > > > > > What type of backend file you are using? In order to remove the
> > > > > > influence of cache in Dom0, I use a physical partition as backend.
> > > > >
> > > > > But you certainly shouldn't be proposing features getting used
> > > > > unconditionally or by default that benefit one class of backing
> > > > > devices and severely penalize others.
> > > >
> > > > Right.
> > > > I am wondering.. Considering that the in-kernel blkback is mainly
> > > > used with physical partitions, is it possible that your patches
> > > > cause a regression with unmodified backends that don't support the
> > > > new protocol, like QEMU for example?
> > >
> > > Well for right now I am just using the most simple configuration to
> > > eliminate any extra variables (stacking of components). So my
> > > "testing" has been just on phy:/dev/sda,xvda,w with the sda being a Corsair
> > SSD.
> 


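The fio numbers quoted upthread (4K random reads, ~4GB total) were presumably produced by a job along these lines. The actual job file is not shown in the thread, so the device path, I/O engine, and queue depth below are guesses:

```ini
; Hypothetical fio job for the 4K random-read test discussed above.
; filename, ioengine, and iodepth are assumptions, not from the thread.
[randread]
filename=/dev/xvda
direct=1
rw=randread
bs=4k
ioengine=libaio
iodepth=32
size=4g
```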


Thread overview: 17+ messages
2012-08-16 10:22 [RFC v1 0/5] VBD: enlarge max segment per request in blkfront Duan, Ronghui
2012-08-16 11:14 ` Jan Beulich
2012-08-17  1:12   ` Duan, Ronghui
2012-08-16 13:34 ` Konrad Rzeszutek Wilk
2012-08-16 13:55   ` Konrad Rzeszutek Wilk
2012-08-17  1:26     ` Duan, Ronghui
2012-08-16 14:18   ` Jan Beulich
2012-09-07 17:49 ` Konrad Rzeszutek Wilk
2012-09-13  2:28   ` Duan, Ronghui
2012-09-13  7:32     ` Jan Beulich
2012-09-13 11:05       ` Stefano Stabellini
2012-09-13 13:23         ` Konrad Rzeszutek Wilk
2012-09-13 14:05           ` Duan, Ronghui
2012-09-17  6:33             ` Duan, Ronghui
2012-09-17 14:37               ` Konrad Rzeszutek Wilk [this message]
2012-09-19 21:11               ` Konrad Rzeszutek Wilk
2012-09-13 13:21     ` Konrad Rzeszutek Wilk
