From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S932584AbbCRAxe (ORCPT ); Tue, 17 Mar 2015 20:53:34 -0400
Received: from aserp1040.oracle.com ([141.146.126.69]:40484 "EHLO
	aserp1040.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S932383AbbCRAxd (ORCPT );
	Tue, 17 Mar 2015 20:53:33 -0400
Message-ID: <5508CC61.60809@oracle.com>
Date: Wed, 18 Mar 2015 08:52:49 +0800
From: Bob Liu
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130308
	Thunderbird/17.0.4
MIME-Version: 1.0
To: Felipe Franciosi
CC: Malcolm Crossley , Konrad Rzeszutek Wilk , Roger Pau Monne ,
	David Vrabel , "xen-devel@lists.xen.org" ,
	"linux-kernel@vger.kernel.org" , "axboe@fb.com" ,
	"hch@infradead.org" , "avanzini.arianna@gmail.com" ,
	"chegger@amazon.de"
Subject: Re: [PATCH 04/10] xen/blkfront: separate ring information to an new struct
References: <54E4CBD1.1000802@citrix.com>
	<20150218173746.GF8152@l.oracle.com>
	<9F2C4E7DFB7839489C89757A66C5AD629EB997@AMSPEX01CL03.citrite.net>
	<54E544CC.4080007@oracle.com> <54E5C444.4050100@citrix.com>
	<54E5C59F.2060300@citrix.com>
	<9F2C4E7DFB7839489C89757A66C5AD629EDBBA@AMSPEX01CL03.citrite.net>
	<54E5E13E.9040502@citrix.com> <20150220185937.GC1749@l.oracle.com>
	<54F068A8.4010606@oracle.com> <20150304212140.GA18253@l.oracle.com>
	<54F7A796.7080003@oracle.com>
	<9F2C4E7DFB7839489C89757A66C5AD62A0943B@AMSPEX01CL03.citrite.net>
	<5507D0F4.9040404@oracle.com>
	<9F2C4E7DFB7839489C89757A66C5AD62A3EDFD@AMSPEX01CL03.citrite.net>
In-Reply-To: <9F2C4E7DFB7839489C89757A66C5AD62A3EDFD@AMSPEX01CL03.citrite.net>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 03/17/2015 10:52 PM, Felipe Franciosi wrote:
> Hi Bob,
>
> I've put the hardware back together and am sorting out the software for
> testing.
> Things are not moving as fast as I wanted due to other commitments. I'll
> keep this thread updated as I progress. Malcolm is OOO and I'm trying to
> get his patches to work on a newer Xen.

Thank you!

> The evaluation will compare:
> 1) bare metal i/o (for baseline)
> 2) tapdisk3 (currently using grant copy, which is what scales best in my
>    experience)
> 3) blkback w/ persistent grants
> 4) blkback w/o persistent grants (I will just comment out the handshake
>    bits in blkback/blkfront)
> 5) blkback w/o persistent grants + Malcolm's grant map patches

I think you need to add the patches from Christoph Egger titled
"[PATCH v5 0/2] gnttab: Improve scaleability" here:
http://lists.xen.org/archives/html/xen-devel/2015-02/msg01188.html

> To my knowledge, blkback (w/ or w/o persistent grants) is always faster
> than user-space alternatives (e.g. tapdisk, qemu-qdisk) as latency is
> much lower. However, tapdisk with grant copy has been shown to produce
> (much) better aggregate throughput figures as it avoids any issues with
> grant (un)mapping.
>
> I'm hoping to show that (5) above scales better than (3) and (4) in a
> representative scenario. If it does, I will recommend that we get rid of
> persistent grants in favour of a better and more scalable grant
> (un)mapping implementation.

Right, but even if 5) has better performance, we have to make sure that a
new Linux kernel running on older hypervisors won't be affected after we
get rid of persistent grants.

-- 
Regards,
-Bob