From: David Vrabel <david.vrabel@citrix.com>
To: Arianna Avanzini <avanzini.arianna@gmail.com>,
	<konrad.wilk@oracle.com>, <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Cc: <hch@infradead.org>, <bob.liu@oracle.com>,
	<felipe.franciosi@citrix.com>, <axboe@fb.com>
Subject: Re: [PATCH RFC v2 3/5] xen, blkfront: negotiate the number of block rings with the backend
Date: Fri, 12 Sep 2014 11:46:49 +0100
Message-ID: <5412CF19.4020400@citrix.com>
In-Reply-To: <1410479844-2864-4-git-send-email-avanzini.arianna@gmail.com>

On 12/09/14 00:57, Arianna Avanzini wrote:
> This commit implements the negotiation of the number of block rings
> to be used; as a default, the number of rings is decided by the
> frontend driver and is equal to the number of hardware queues that
> the backend makes available. In case of guest migration towards a
> host whose devices expose a different number of hardware queues, the
> number of I/O rings used by the frontend driver remains the same;
> XenStore keys may vary if the frontend needs to be compatible with
> a host not having multi-queue support.
> 
> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
> ---
>  drivers/block/xen-blkfront.c | 95 +++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 84 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 9282df1..77e311d 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -137,7 +137,7 @@ struct blkfront_info
>  	int vdevice;
>  	blkif_vdev_t handle;
>  	enum blkif_state connected;
> -	unsigned int nr_rings;
> +	unsigned int nr_rings, old_nr_rings;

I don't think you need old_nr_rings.  nr_rings is the current number of
rings and you should only update nr_rings to a new number after tearing
down all the old rings.
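
Something like this (untested sketch, reusing the existing helpers;
"new_nr_rings" is just a stand-in for whatever value the negotiation
produced):

	blkif_free(info, 0);			/* tears down all nr_rings old rings */
	info->nr_rings = new_nr_rings;		/* only now adopt the new count */
	err = talk_to_blkback(dev, info);	/* sets up the new rings */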

>  	struct blkfront_ring_info *rinfo;
>  	struct request_queue *rq;
>  	unsigned int feature_flush;
> @@ -147,6 +147,7 @@ struct blkfront_info
>  	unsigned int discard_granularity;
>  	unsigned int discard_alignment;
>  	unsigned int feature_persistent:1;
> +	unsigned int hardware_queues;

hardware_queues seems to have the same purpose as nr_rings and isn't
needed.  nr_rings == 1 can mean "write the old keys", which covers both
backends without multi-queue support and multi-queue capable backends
that only want a single queue.
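
That also collapses the key selection in the loop below to a single
test on nr_rings, something like (untested):

	for (i = 0; i < info->nr_rings; i++) {
		if (info->nr_rings == 1) {
			/* Old keys, for compatibility. */
			snprintf(ring_ref_s, 64, "ring-ref");
			snprintf(evtchn_s, 64, "event-channel");
		} else {
			snprintf(ring_ref_s, 64, "ring-ref-%d", i);
			snprintf(evtchn_s, 64, "event-channel-%d", i);
		}
		...
	}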

> @@ -1351,10 +1353,24 @@ again:
>  		goto destroy_blkring;
>  	}
>  
> +	/* Advertise the number of rings */
> +	err = xenbus_printf(xbt, dev->nodename, "nr_blk_rings",
> +			    "%u", info->nr_rings);
> +	if (err) {
> +		xenbus_dev_fatal(dev, err, "advertising number of rings");
> +		goto abort_transaction;
> +	}
> +
>  	for (i = 0 ; i < info->nr_rings ; i++) {
> -		BUG_ON(i > 0);
> -		snprintf(ring_ref_s, 64, "ring-ref");
> -		snprintf(evtchn_s, 64, "event-channel");
> +		if (!info->hardware_queues) {

   if (info->nr_rings == 1)

> +			BUG_ON(i > 0);
> +			/* Support old XenStore keys */
> +			snprintf(ring_ref_s, 64, "ring-ref");
> +			snprintf(evtchn_s, 64, "event-channel");
> +		} else {
> +			snprintf(ring_ref_s, 64, "ring-ref-%d", i);
> +			snprintf(evtchn_s, 64, "event-channel-%d", i);
> +		}
>  		err = xenbus_printf(xbt, dev->nodename,
>  				    ring_ref_s, "%u", info->rinfo[i].ring_ref);
>  		if (err) {
[...]
> @@ -1659,11 +1693,46 @@ static int blkfront_resume(struct xenbus_device *dev)
>  {
>  	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
>  	int err;
> +	unsigned int nr_queues, prev_nr_queues;
> +	bool mq_to_rq_transition;
>  
>  	dev_dbg(&dev->dev, "blkfront_resume: %s\n", dev->nodename);
>  
> +	prev_nr_queues = info->hardware_queues;
> +
> +	err = blkfront_gather_hw_queues(info, &nr_queues);
> +	if (err < 0)
> +		nr_queues = 0;
> +	mq_to_rq_transition = prev_nr_queues && !nr_queues;
> +
> +	if (prev_nr_queues != nr_queues) {
> +		printk(KERN_INFO "blkfront: %s: hw queues %u -> %u\n",
> +		       info->gd->disk_name, prev_nr_queues, nr_queues);
> +		if (mq_to_rq_transition) {
> +			struct blk_mq_hw_ctx *hctx;
> +			unsigned int i;
> +			/*
> +			 * Switch from multi-queue to single-queue:
> +			 * update hctx-to-ring mapping before
> +			 * resubmitting any requests
> +			 */
> +			queue_for_each_hw_ctx(info->rq, hctx, i)
> +				hctx->driver_data = &info->rinfo[0];

I think this already gives you a mechanism to change (reduce) the
number of rings used when the backend supports fewer.  You don't need
to map all hctxs to one ring; you can distribute them amongst the
available rings.
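
e.g., (untested):

	queue_for_each_hw_ctx(info->rq, hctx, i)
		hctx->driver_data = &info->rinfo[i % info->nr_rings];

This also covers the multi-queue to single-queue case above, since
i % 1 is always 0.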

> @@ -1863,6 +1932,10 @@ static void blkfront_connect(struct blkfront_info *info)
>  		 * supports indirect descriptors, and how many.
>  		 */
>  		blkif_recover(info);
> +		info->rinfo = krealloc(info->rinfo,
> +				       info->nr_rings * sizeof(struct blkfront_ring_info),
> +				       GFP_KERNEL);
> +

You don't check for allocation failure here.
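
The usual pattern is to krealloc() into a temporary so the old array
is neither leaked nor used after a failed resize, e.g. (untested):

	struct blkfront_ring_info *rinfo;

	rinfo = krealloc(info->rinfo,
			 info->nr_rings * sizeof(*rinfo),
			 GFP_KERNEL);
	if (!rinfo) {
		/* report the error and bail out */
		return;
	}
	info->rinfo = rinfo;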

David

Thread overview: 32+ messages
2014-09-11 23:57 [PATCH RFC v2 0/5] Multi-queue support for xen-blkfront and xen-blkback Arianna Avanzini
2014-09-11 23:57 ` [PATCH RFC v2 1/5] xen, blkfront: port to the the multi-queue block layer API Arianna Avanzini
2014-09-13 19:29   ` Christoph Hellwig
2014-10-01 20:18   ` Konrad Rzeszutek Wilk
2014-09-11 23:57 ` [PATCH RFC v2 2/5] xen, blkfront: introduce support for multiple block rings Arianna Avanzini
2014-10-01 20:18   ` Konrad Rzeszutek Wilk
2014-09-11 23:57 ` [PATCH RFC v2 3/5] xen, blkfront: negotiate the number of block rings with the backend Arianna Avanzini
2014-09-12 10:46   ` David Vrabel [this message]
2014-10-01 20:18   ` Konrad Rzeszutek Wilk
2014-09-11 23:57 ` [PATCH RFC v2 4/5] xen, blkback: introduce support for multiple block rings Arianna Avanzini
2014-10-01 20:18   ` Konrad Rzeszutek Wilk
2014-09-11 23:57 ` [PATCH RFC v2 5/5] xen, blkback: negotiate of the number of block rings with the frontend Arianna Avanzini
2014-09-12 10:58   ` David Vrabel
2014-10-01 20:23   ` Konrad Rzeszutek Wilk
2014-10-01 20:27 ` [PATCH RFC v2 0/5] Multi-queue support for xen-blkfront and xen-blkback Konrad Rzeszutek Wilk
2015-04-28  7:36   ` Christoph Hellwig
2015-04-28  7:46     ` Arianna Avanzini
2015-05-13 10:29       ` Bob Liu
2015-06-30 14:21         ` [Xen-devel] " Marcus Granado
2015-07-01  0:04           ` Bob Liu
2015-07-01  3:02           ` Jens Axboe
2015-08-10 11:03             ` Rafal Mielniczuk
2015-08-10 11:14               ` Bob Liu
2015-08-10 15:52               ` Jens Axboe
2015-08-11  6:07                 ` Bob Liu
2015-08-11  9:45                   ` Rafal Mielniczuk
2015-08-11 17:32                     ` Jens Axboe
2015-08-12 10:16                       ` Bob Liu
2015-08-12 16:46                         ` Rafal Mielniczuk
2015-08-14  8:29                           ` Bob Liu
2015-08-14 12:30                             ` Rafal Mielniczuk
2015-08-18  9:45                               ` Rafal Mielniczuk
