From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 18 Feb 2015 13:29:23 -0500
From: Konrad Rzeszutek Wilk
To: Felipe Franciosi
Cc: Roger Pau Monne, Bob Liu, "xen-devel@lists.xen.org", David Vrabel,
	"linux-kernel@vger.kernel.org", "axboe@fb.com",
	"hch@infradead.org", "avanzini.arianna@gmail.com"
Subject: Re: [PATCH 04/10] xen/blkfront: separate ring information to a new struct
Message-ID: <20150218182923.GA12845@l.oracle.com>
References: <1423988345-4005-1-git-send-email-bob.liu@oracle.com>
	<1423988345-4005-5-git-send-email-bob.liu@oracle.com>
	<54E4CBD1.1000802@citrix.com>
	<20150218173746.GF8152@l.oracle.com>
	<9F2C4E7DFB7839489C89757A66C5AD629EB997@AMSPEX01CL03.citrite.net>
In-Reply-To: <9F2C4E7DFB7839489C89757A66C5AD629EB997@AMSPEX01CL03.citrite.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
User-Agent: Mutt/1.5.23 (2014-03-12)
X-Mailing-List: linux-kernel@vger.kernel.org

> > > AFAICT you seem to have a list of persistent grants, indirect pages
> > > and a grant table callback for each ring; isn't this supposed to be
> > > shared between all rings?
> > >
> > > I don't think we should be going down that route, or else we can hoard
> > > a large amount of memory and grants.
> >
> > It does remove the lock that would have to be accessed by each ring
> > thread to access those. Those values (grants) can be limited to a
> > smaller value such that the overall number is the same as it was with
> > the previous version.
> > As in: each ring has <= MAX_GRANTS / nr_online_cpus().
>
> We should definitely be concerned with the amount of memory consumed on
> the backend for each plugged virtual disk. We have faced several problems
> in XenServer around this area before; it drastically affects VBD
> scalability per host.
>
> This makes me think that all the persistent grants work was done as a
> workaround while we were facing several performance problems around
> concurrent grant un/mapping operations. Given all the recent submissions
> made around this (grant ops) area, is this something we should perhaps
> revisit and discuss whether we want to continue offering persistent
> grants as a feature?

Certainly. Perhaps as a talking point at XenHackathon?

> Thanks,
> Felipe