From: Roger Pau Monné
Subject: Re: IO speed limited by size of IO request (for RBD driver)
Date: Sat, 27 Apr 2013 09:06:50 +0200
To: Steven Haigh
Cc: Felipe Franciosi, xen-devel@lists.xen.org

On 27/04/13 03:57, Steven Haigh wrote:
> On 27/04/2013 12:16 AM, Steven Haigh wrote:
>> On 27/04/2013 12:06 AM, Roger Pau Monné wrote:
>>> On 23/04/13 21:05, Steven Haigh wrote:
>>>> Sorry - resending this to Felipe as well, as I started talking to
>>>> him directly previously.
>>>>
>>>> Felipe, to bring you up to date, I've copied the blkback files from
>>>> Roger's indirect kernel over the vanilla 3.8.8 kernel files, built
>>>> and tested. Results below:
>>>>
>>
>> Bringing this into context in a nutshell: results showed about a
>> 5MB/sec improvement when using buffered disk access, totalling
>> ~57MB/sec write speed vs ~98MB/sec when using the oflag=direct flag
>> to dd.
>>
>> When talking about backporting a few indirect patches to mainline
>> blkback (3.8.8 atm):
>>>> On 24/04/2013 4:13 AM, Roger Pau Monné wrote:
>>>>> I think it requires a non-trivial amount of work. What you could
>>>>> do as a test is directly replace the affected files with the ones
>>>>> in my tree; it is not optimal, but I don't think it's going to
>>>>> cause problems, and you could at least see if indirect descriptors
>>>>> solve your problem.
>>>>
>>>> Ok, I copied across those files, built, packaged and installed them
>>>> on my Dom0. The good news is that it's a little quicker; the bad
>>>> news is not by much.
>>>
>>> Could you try increasing the xen_blkif_max_segments variable in
>>> xen-blkfront.c to 64 or 128? It is set to 32 by default. You will
>>> only need to recompile the DomU kernel after this; the Dom0 is able
>>> to support up to 256 indirect segments.
>>
>> I'll have to look at this. All DomUs are Scientific Linux 6.4
>> systems, so essentially RHEL 6.4 and so on. I haven't built a RH
>> kernel as yet, so I'll have to look at what is involved. It might be
>> as simple as rebuilding a normal SRPM.
>
> Ok, I've had a look at the RH xen-blkfront.c and I can't see any
> definition of xen_blkif_max_segments, or anything close. I've attached
> the version used in the EL6 kernel from the kernel-2.6.32-358.6.1.el6
> srpm.
>
> Any ideas on where to go from here?

I thought you were using the 3.8.x kernel inside the DomU as well. If you are not, then it's normal that there's no speed difference: you have a Dom0 kernel that supports indirect descriptors, but your DomU doesn't.
You must use a kernel that supports indirect descriptors in both Dom0 and DomU in order to make use of this feature.
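For reference, the limit lives in drivers/block/xen-blkfront.c (the frontend driver, i.e. the DomU kernel). A rough sketch of the change being suggested, assuming the definition matches the form the indirect-descriptor series eventually took upstream (the exact lines may differ between trees):

    /* drivers/block/xen-blkfront.c -- frontend, part of the DomU kernel */

    /*
     * Segments per indirect request. Raising the default of 32 to 64
     * or 128 lets the frontend issue larger IO requests; the Dom0
     * backend accepts up to 256 indirect segments.
     */
    static unsigned int xen_blkif_max_segments = 64;    /* default is 32 */

    /*
     * The series also exposes this as a module parameter, so it can be
     * set on the DomU kernel command line instead of being patched:
     *
     *     xen_blkfront.max=64
     */
    module_param_named(max, xen_blkif_max_segments, int, S_IRUGO);
    MODULE_PARM_DESC(max,
        "Maximum amount of segments in indirect requests (default is 32)");

Only the DomU kernel needs rebuilding after a change like this; the Dom0 backend already supports the larger value.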