Subject: Re: [PATCH v3 3/9] xen/blkfront: separate per ring information out of device info
From: Roger Pau Monné
To: Bob Liu
Date: Mon, 19 Oct 2015 11:42:39 +0200
Message-ID: <5624BB0F.6070106@citrix.com>
In-Reply-To: <5618CCB4.6010902@oracle.com>

On 10/10/15 at 10:30, Bob Liu wrote:
>
> On 10/03/2015 01:02 AM, Roger Pau Monné wrote:
>> On 05/09/15 at 14:39, Bob Liu wrote:
>>> Split per ring information to an new structure:blkfront_ring_info, also rename
>>> per blkfront_info to blkfront_dev_info.
>>            ^ removed.
>>>
>>> A ring is the representation of a hardware queue, every vbd device can associate
>>> with one or more blkfront_ring_info depending on how many hardware
>>> queues/rings to be used.
>>>
>>> This patch is a preparation for supporting real multi hardware queues/rings.
>>>
>>> Signed-off-by: Arianna Avanzini
>>> Signed-off-by: Bob Liu
>>> ---
>>>  drivers/block/xen-blkfront.c | 854 ++++++++++++++++++++++--------------------
>>>  1 file changed, 445 insertions(+), 409 deletions(-)
>>>
>>> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
>>> index 5dd591d..bf416d5 100644
>>> --- a/drivers/block/xen-blkfront.c
>>> +++ b/drivers/block/xen-blkfront.c
>>> @@ -107,7 +107,7 @@ static unsigned int xen_blkif_max_ring_order;
>>>  module_param_named(max_ring_page_order, xen_blkif_max_ring_order, int, S_IRUGO);
>>>  MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the shared ring");
>>>
>>> -#define BLK_RING_SIZE(info) __CONST_RING_SIZE(blkif, PAGE_SIZE * (info)->nr_ring_pages)
>>> +#define BLK_RING_SIZE(dinfo) __CONST_RING_SIZE(blkif, PAGE_SIZE * (dinfo)->nr_ring_pages)
>>
>> This change looks pointless, any reason to use dinfo instead of info?
>>
>>>  #define BLK_MAX_RING_SIZE __CONST_RING_SIZE(blkif, PAGE_SIZE * XENBUS_MAX_RING_PAGES)
>>>  /*
>>>   * ring-ref%i i=(-1UL) would take 11 characters + 'ring-ref' is 8, so 19
>>> @@ -116,12 +116,31 @@ MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the
>>>  #define RINGREF_NAME_LEN (20)
>>>
>>>  /*
>>> + * Per-ring info.
>>> + * Every blkfront device can associate with one or more blkfront_ring_info,
>>> + * depending on how many hardware queues to be used.
>>> + */
>>> +struct blkfront_ring_info
>>> +{
>>> +	struct blkif_front_ring ring;
>>> +	unsigned int ring_ref[XENBUS_MAX_RING_PAGES];
>>> +	unsigned int evtchn, irq;
>>> +	struct work_struct work;
>>> +	struct gnttab_free_callback callback;
>>> +	struct blk_shadow shadow[BLK_MAX_RING_SIZE];
>>> +	struct list_head grants;
>>> +	struct list_head indirect_pages;
>>> +	unsigned int persistent_gnts_c;
>>
>> persistent grants should be per-device, not per-queue IMHO. Is it really
>> hard to make this global instead of per-queue?
>>
>
> I didn't see the benefit of making it per-device, only disadvantages:
> if persistent grants are per-device, then we have to introduce an extra lock
> to protect this list, which will complicate the code and may slow down
> performance when the queue count is large, e.g. 16 queues.

IMHO, and as I said in my reply to patch 7, there's no way to know that unless
you actually implement it, and I think it would have been easier to just add
locks around the existing functions without moving the data structures
(leaving them per-device).

Also, you didn't want to enable multiple queues by default because of the RAM
usage; if we make all this per-device, RAM usage is not going to increase
much, which means we could enable multiple queues by default with a sensible
value (4 maybe?). TBH, I don't think we are going to see contention with 4
queues per device.

Roger.