From: Roger Pau Monné
Subject: Re: IO speed limited by size of IO request (for RBD driver)
Date: Wed, 8 May 2013 10:33:12 +0200
Message-ID: <518A0DC8.4080501@citrix.com>
In-Reply-To: <518A0AB8.90506@crc.id.au>
To: Steven Haigh
Cc: Felipe Franciosi, "xen-devel@lists.xen.org"

On 08/05/13 10:20, Steven Haigh wrote:
> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
>> I noticed you copied your results from "dd", but I didn't see any
>> conclusions drawn from the experiment.
>>
>> Did I understand it wrong, or do you now have comparable performance
>> on dom0 and domU when using DIRECT?
>>
>> domU:
>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>
>> dom0:
>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>>
>> I think that if the performance differs when NOT using DIRECT, the
>> issue must be related to the way your guest is flushing the cache.
>> This must be generating a workload that doesn't perform well on Xen's
>> PV protocol.
>
> Just wondering if there is any further input on this... While DIRECT
> writes are as good as can be expected, NON-DIRECT writes in certain
> cases (specifically with an mdadm RAID in the Dom0) suffer roughly a
> 50% loss in throughput...
>
> The hard part is that this is the default mode of writing!

As another test with indirect descriptors, could you change
xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by default),
recompile the DomU kernel and see if that helps?

Thanks, Roger.
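
For anyone following along, here is a minimal sketch of the kind of change
being suggested, assuming the DomU kernel already carries the
indirect-descriptor series and declares the limit roughly as in
drivers/block/xen-blkfront.c; the exact declaration and surrounding code may
differ in the tree being tested:

/* drivers/block/xen-blkfront.c (with indirect descriptor support) */

/*
 * Maximum number of segments carried in a single indirect request.
 * Raising it from the default of 32 lets blkfront issue larger I/O
 * requests over the ring, which is what this test is probing.
 */
static unsigned int xen_blkif_max_segments = 128;   /* default: 32 */
module_param_named(max, xen_blkif_max_segments, int, S_IRUGO);
MODULE_PARM_DESC(max,
                 "Maximum amount of segments in indirect requests (default is 32)");

After rebuilding the DomU kernel with this change, rerunning the same
non-DIRECT dd workload against the mdadm RAID in Dom0 should show whether the
larger request size recovers the lost throughput.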