Date: Tue, 26 Jul 2016 10:44:08 +0200
From: Roger Pau Monné
To: Bob Liu
Subject: Re: [PATCH v2 3/3] xen-blkfront: dynamic configuration of per-vbd resources
Message-ID: <20160726084408.gevmpl2u5uvbdumh@mac>
References: <1469510377-15131-1-git-send-email-bob.liu@oracle.com>
 <1469510377-15131-3-git-send-email-bob.liu@oracle.com>
In-Reply-To: <1469510377-15131-3-git-send-email-bob.liu@oracle.com>
User-Agent: Mutt/1.6.2-neo (2016-06-11)
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jul 26, 2016 at 01:19:37PM +0800, Bob Liu wrote:
> The current VBD layer reserves buffer space for each attached device based on
> three statically configured settings which are read at boot time:
>  * max_indirect_segs: maximum number of indirect segments.
>  * max_ring_page_order: maximum order of pages used for the shared ring.
>  * max_queues: maximum number of queues (rings) to be used.
> 
> But the storage backend, workload, and guest memory result in very different
> tuning requirements. It's impossible to centrally predict application
> characteristics, so it's best to allow these settings to be dynamically
> adjusted based on the workload inside the guest.
> 
> Usage:
> Show current values:
> cat /sys/devices/vbd-xxx/max_indirect_segs
> cat /sys/devices/vbd-xxx/max_ring_page_order
> cat /sys/devices/vbd-xxx/max_queues
> 
> Write new values:
> echo <num> > /sys/devices/vbd-xxx/max_indirect_segs
> echo <num> > /sys/devices/vbd-xxx/max_ring_page_order
> echo <num> > /sys/devices/vbd-xxx/max_queues
> 
> Signed-off-by: Bob Liu
> --
> v2: Rename to max_ring_page_order and rm the waiting code suggested by Roger.
> ---
>  drivers/block/xen-blkfront.c | 275 +++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 269 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 1b4c380..ff5ebe5 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -212,6 +212,11 @@ struct blkfront_info
>  	/* Save uncomplete reqs and bios for migration. */
>  	struct list_head requests;
>  	struct bio_list bio_list;
> +	/* For dynamic configuration. */
> +	unsigned int reconfiguring:1;
> +	int new_max_indirect_segments;

Can't you just use max_indirect_segments? Is it really needed to introduce a
new struct member?

> +	int max_ring_page_order;
> +	int max_queues;
>  };
> 
>  static unsigned int nr_minors;
> @@ -1350,6 +1355,31 @@ static void blkif_free(struct blkfront_info *info, int suspend)
>  	for (i = 0; i < info->nr_rings; i++)
>  		blkif_free_ring(&info->rinfo[i]);
> 
> +	/* Remove old xenstore nodes.
> +	 */
> +	if (info->nr_ring_pages > 1)
> +		xenbus_rm(XBT_NIL, info->xbdev->nodename, "ring-page-order");
> +
> +	if (info->nr_rings == 1) {
> +		if (info->nr_ring_pages == 1) {
> +			xenbus_rm(XBT_NIL, info->xbdev->nodename, "ring-ref");
> +		} else {
> +			for (i = 0; i < info->nr_ring_pages; i++) {
> +				char ring_ref_name[RINGREF_NAME_LEN];
> +
> +				snprintf(ring_ref_name, RINGREF_NAME_LEN, "ring-ref%u", i);
> +				xenbus_rm(XBT_NIL, info->xbdev->nodename, ring_ref_name);
> +			}
> +		}
> +	} else {
> +		xenbus_rm(XBT_NIL, info->xbdev->nodename, "multi-queue-num-queues");
> +
> +		for (i = 0; i < info->nr_rings; i++) {
> +			char queuename[QUEUE_NAME_LEN];
> +
> +			snprintf(queuename, QUEUE_NAME_LEN, "queue-%u", i);
> +			xenbus_rm(XBT_NIL, info->xbdev->nodename, queuename);
> +		}
> +	}
>  	kfree(info->rinfo);
>  	info->rinfo = NULL;
>  	info->nr_rings = 0;
> @@ -1763,15 +1793,21 @@ static int talk_to_blkback(struct xenbus_device *dev,
>  	const char *message = NULL;
>  	struct xenbus_transaction xbt;
>  	int err;
> -	unsigned int i, max_page_order = 0;
> +	unsigned int i, backend_max_order = 0;
>  	unsigned int ring_page_order = 0;
> 
>  	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
> -			   "max-ring-page-order", "%u", &max_page_order);
> +			   "max-ring-page-order", "%u", &backend_max_order);
>  	if (err != 1)
>  		info->nr_ring_pages = 1;
>  	else {
> -		ring_page_order = min(xen_blkif_max_ring_order, max_page_order);
> +		if (info->max_ring_page_order) {
> +			/* Dynamic configured through /sys. */
> +			BUG_ON(info->max_ring_page_order > backend_max_order);

Maybe I'm missing something, but I don't think you can BUG here. The
following flow for example will trigger this BUG:

 - Admin sets max_ring_page_order = 2 from sysfs, frontend reconfigures.
 - VM is migrated to a new host without multipage ring support.
 - BUG will trigger on reconnection (because backend_max_order == 1 and
   max_ring_page_order == 2).

Roger.