From: Konrad Rzeszutek Wilk
Subject: Re: [RFC v1 0/5] VBD: enlarge max segment per request in blkfront
Date: Fri, 7 Sep 2012 13:49:22 -0400
Message-ID: <20120907174922.GA13040@phenom.dumpdata.com>
To: "Duan, Ronghui"
Cc: "Stefano.Stabellini@eu.citrix.com", "Ian.Jackson@eu.citrix.com", "xen-devel@lists.xen.org"
List-Id: xen-devel@lists.xenproject.org

On Thu, Aug 16, 2012 at 10:22:56AM +0000, Duan, Ronghui wrote:
> Hi, list.
>
> The maximum number of segments per request in the VBD queue is 11, while for
> native Linux and other VMMs the parameter defaults to 128. This may be caused
> by the limited size of the ring shared between frontend and backend, so I
> wonder whether we can put the segment data into another ring and let each
> request use it dynamically as needed. Here is a prototype that has not been
> tested much, but it works on a 64-bit Linux 3.4.6 kernel. I can see CPU%
> reduced to 1/3 of the original in sequential tests, but it brings some
> overhead that increases CPU utilization a little for random I/O.
>
> Here is a short set of results using only 1K random reads and 64K sequential
> reads in direct mode, with a physical SSD disk as the blkback backend. CPU%
> is taken from xentop (W = with the patch, W/O = without).
>
> Read 1K random    IOPS       Dom0 CPU%   DomU CPU%
> W                 52005.9    86.6        71
> W/O               52123.1    85.8        66.9

So I am getting some different numbers. I tried a simple 4K read:

[/dev/xvda1]
bssplit=4K
rw=read
direct=1
size=4g
ioengine=libaio
iodepth=64

And with your patch got:
 read : io=4096.0MB, bw=92606KB/s, iops=23151, runt= 45292msec
without:
 read : io=4096.0MB, bw=145187KB/s, iops=36296, runt= 28889msec

> Read 64K seq      BW MB/s    Dom0 CPU%   DomU CPU%
> W                 250        27.1        10.6
> W/O               250        62.6        31.1

Hadn't tried that yet.
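
For context on the 11-segment limit quoted above, here is a minimal back-of-the-envelope
sketch (not Xen source, and not part of the RFC patches). The struct layouts are simplified
from Xen's public/io/blkif.h, and the 64-byte shared-ring header size is an assumption taken
from the standard ring.h layout; the point is only that the inline seg[] array dominates the
per-request footprint, which caps a one-page ring at a small number of in-flight segments.

/*
 * Back-of-the-envelope sketch: why blkfront is limited to 11 segments
 * per request with a single 4KB shared ring.  Struct layouts are
 * simplified from Xen's public/io/blkif.h; the 64-byte ring header is
 * an assumed value based on the standard ring.h layout.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE                       4096u
#define BLKIF_MAX_SEGMENTS_PER_REQUEST  11      /* the limit under discussion */

struct blkif_request_segment {
	uint32_t gref;                  /* grant reference for the data frame */
	uint8_t  first_sect, last_sect; /* sector range within that frame */
};

struct blkif_request {
	uint8_t  operation;
	uint8_t  nr_segments;
	uint16_t handle;
	uint64_t id;
	uint64_t sector_number;
	struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};

/* Round down to a power of two, mirroring what the ring-size macros do. */
static unsigned int round_down_pow2(unsigned int n)
{
	unsigned int p = 1;
	while (p * 2 <= n)
		p *= 2;
	return p;
}

int main(void)
{
	unsigned int ring_hdr = 64;     /* assumed producer/consumer header size */
	unsigned int entries  = round_down_pow2(
		(PAGE_SIZE - ring_hdr) / (unsigned int)sizeof(struct blkif_request));
	unsigned int segs     = entries * BLKIF_MAX_SEGMENTS_PER_REQUEST;

	printf("request size      : %zu bytes\n", sizeof(struct blkif_request));
	printf("ring entries      : %u\n", entries);
	printf("segments in flight: %u (up to %u KB of data)\n",
	       segs, segs * PAGE_SIZE / 1024);
	return 0;
}

Under these assumptions the request is 112 bytes, the ring holds 32 entries, and at most
32 * 11 segments can be in flight. That is what motivates moving the segment descriptors
into a separate ring, as the RFC proposes: with seg[] out of the main request, the segment
count per request can grow without enlarging the main ring beyond one page.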