Date: Tue, 26 Jul 2016 17:48:59 +0200
From: Roger Pau Monné
To: Bob Liu
CC: , ,
Subject: Re: [PATCH v2 3/3] xen-blkfront: dynamic configuration of per-vbd resources
Message-ID: <20160726154859.stbqsaq7p3jjhe4e@mac>
References: <1469510377-15131-1-git-send-email-bob.liu@oracle.com> <1469510377-15131-3-git-send-email-bob.liu@oracle.com> <20160726084408.gevmpl2u5uvbdumh@mac> <57972622.2020008@oracle.com>
In-Reply-To: <57972622.2020008@oracle.com>

On Tue, Jul 26, 2016 at 04:58:10PM +0800, Bob Liu wrote:
> 
> On 07/26/2016 04:44 PM, Roger Pau Monné wrote:
> > On Tue, Jul 26, 2016 at 01:19:37PM +0800, Bob Liu wrote:
> >> The current VBD layer reserves buffer space for each attached device based on
> >> three statically configured settings which are read at boot time.
> >>  * max_indirect_segs: Maximum number of segments.
> >>  * max_ring_page_order: Maximum order of pages to be used for the shared ring.
> >>  * max_queues: Maximum number of queues (rings) to be used.
> >>
> >> But the storage backend, workload, and guest memory result in very different
> >> tuning requirements. It's impossible to centrally predict application
> >> characteristics, so it's best to allow the settings to be adjusted dynamically
> >> based on the workload inside the guest.
> >>
> >> Usage:
> >> Show current values:
> >> cat /sys/devices/vbd-xxx/max_indirect_segs
> >> cat /sys/devices/vbd-xxx/max_ring_page_order
> >> cat /sys/devices/vbd-xxx/max_queues
> >>
> >> Write new values:
> >> echo <value> > /sys/devices/vbd-xxx/max_indirect_segs
> >> echo <value> > /sys/devices/vbd-xxx/max_ring_page_order
> >> echo <value> > /sys/devices/vbd-xxx/max_queues
> >>
> >> Signed-off-by: Bob Liu
> >> --
> >> v2: Rename to max_ring_page_order and remove the waiting code, as suggested by Roger.
> >> ---
> >>  drivers/block/xen-blkfront.c | 275 +++++++++++++++++++++++++++++++++++++++++-
> >>  1 file changed, 269 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> >> index 1b4c380..ff5ebe5 100644
> >> --- a/drivers/block/xen-blkfront.c
> >> +++ b/drivers/block/xen-blkfront.c
> >> @@ -212,6 +212,11 @@ struct blkfront_info
> >>  	/* Save incomplete reqs and bios for migration. */
> >>  	struct list_head requests;
> >>  	struct bio_list bio_list;
> >> +	/* For dynamic configuration. */
> >> +	unsigned int reconfiguring:1;
> >> +	int new_max_indirect_segments;
> > 
> > Can't you just use max_indirect_segments? Is it really needed to introduce a
> > new struct member?
> > 
> >> +	int max_ring_page_order;
> >> +	int max_queues;
> 
> Do you mean also get rid of these two new struct members?
> I'll think about that.

Oh no, those two are fine, and AFAICT they are needed, because now every
blkfront instance can have its own max number of queues or ring pages.
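For example, the ring size negotiation could then clamp against the
per-instance value. A rough, untested sketch: blkfront_ring_order doesn't
exist in the driver, and the fallback to the xen_blkif_max_ring_order
module parameter is just illustrative:

/*
 * Illustrative only: negotiate the ring order against the per-device
 * limit, falling back to the boot-time module parameter when the
 * per-device value hasn't been set.
 */
static unsigned int blkfront_ring_order(struct blkfront_info *info,
					unsigned int backend_max_order)
{
	unsigned int max_order = info->max_ring_page_order ?
				 info->max_ring_page_order :
				 xen_blkif_max_ring_order;

	return min(backend_max_order, max_order);
}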
What I think can be removed is the introduction of new_max_indirect_segments;
instead, just use the max_indirect_segments field that's already available in
that same struct.
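Something along these lines is what I have in mind (a completely untested
sketch; the handler name, the missing locking and the reconfigure trigger
are guesses on my part, not the code from your patch):

/*
 * Untested sketch: store the new value straight into the existing
 * max_indirect_segments field instead of keeping a shadow
 * new_max_indirect_segments copy, then flag the device so it gets
 * reconfigured.
 */
static ssize_t max_indirect_segs_store(struct device *dev,
				       struct device_attribute *attr,
				       const char *buf, size_t count)
{
	struct blkfront_info *info = dev_get_drvdata(dev);
	unsigned int segs;
	int ret;

	ret = kstrtouint(buf, 10, &segs);
	if (ret)
		return ret;

	/* Reuse the existing field rather than a separate copy... */
	info->max_indirect_segments = segs;
	/* ...and mark the instance for reconfiguration. */
	info->reconfiguring = 1;

	return count;
}

Roger.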