public inbox for kvm@vger.kernel.org
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Sasha Levin <sasha.levin@oracle.com>
Cc: rusty@rustcorp.com.au, penberg@kernel.org, will.deacon@arm.com,
	marc.zyngier@arm.com, kvm@vger.kernel.org, asias@redhat.com,
	jasowang@redhat.com
Subject: Re: [PATCH] virtio-net: fill only rx queues which are being used
Date: Tue, 23 Apr 2013 10:08:45 +0300	[thread overview]
Message-ID: <20130423070845.GA23530@redhat.com> (raw)
In-Reply-To: <1366677336-2278-1-git-send-email-sasha.levin@oracle.com>

On Mon, Apr 22, 2013 at 08:35:36PM -0400, Sasha Levin wrote:
> Due to MQ support we may allocate a whole bunch of rx queues but
> never use them. With this patch we'll save the space used by
> the receive buffers until they are actually in use:
> 
> sh-4.2# free -h
>              total       used       free     shared    buffers     cached
> Mem:          490M        35M       455M         0B         0B       4.1M
> -/+ buffers/cache:        31M       459M
> Swap:           0B         0B         0B
> sh-4.2# ethtool -L eth0 combined 8
> sh-4.2# free -h
>              total       used       free     shared    buffers     cached
> Mem:          490M       162M       327M         0B         0B       4.1M
> -/+ buffers/cache:       158M       331M
> Swap:           0B         0B         0B
> 
> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>

Overall the idea looks fine to me.

I also wonder whether we should enable the multiqueue capability at all
with big buffers: 130M of extra memory seems excessive.
Want to try the kvmtool version that has mergeable buffers?
Memory use should be much lower.

> ---
>  drivers/net/virtio_net.c | 16 +++++++++++-----
>  1 file changed, 11 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 6bfc511..4d82d17 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -581,7 +581,7 @@ static void refill_work(struct work_struct *work)
>  	bool still_empty;
>  	int i;
>  
> -	for (i = 0; i < vi->max_queue_pairs; i++) {
> +	for (i = 0; i < vi->curr_queue_pairs; i++) {
>  		struct receive_queue *rq = &vi->rq[i];
>  
>  		napi_disable(&rq->napi);
> @@ -636,7 +636,7 @@ static int virtnet_open(struct net_device *dev)
>  	struct virtnet_info *vi = netdev_priv(dev);
>  	int i;
>  
> -	for (i = 0; i < vi->max_queue_pairs; i++) {
> +	for (i = 0; i < vi->curr_queue_pairs; i++) {
>  		/* Make sure we have some buffers: if oom use wq. */
>  		if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
>  			schedule_delayed_work(&vi->refill, 0);
> @@ -900,6 +900,7 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
>  	struct scatterlist sg;
>  	struct virtio_net_ctrl_mq s;
>  	struct net_device *dev = vi->dev;
> +	int i;
>  
>  	if (!vi->has_cvq || !virtio_has_feature(vi->vdev, VIRTIO_NET_F_MQ))
>  		return 0;
> @@ -912,8 +913,13 @@ static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
>  		dev_warn(&dev->dev, "Fail to set num of queue pairs to %d\n",
>  			 queue_pairs);
>  		return -EINVAL;
> -	} else
> +	} else {
> +		if (queue_pairs > vi->curr_queue_pairs)
> +			for (i = 0; i < queue_pairs; i++)
> +				if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
> +					schedule_delayed_work(&vi->refill, 0);
>  		vi->curr_queue_pairs = queue_pairs;
> +	}
>  
>  	return 0;
>  }
> @@ -1568,7 +1574,7 @@ static int virtnet_probe(struct virtio_device *vdev)
>  	}
>  
>  	/* Last of all, set up some receive buffers. */
> -	for (i = 0; i < vi->max_queue_pairs; i++) {
> +	for (i = 0; i < vi->curr_queue_pairs; i++) {
>  		try_fill_recv(&vi->rq[i], GFP_KERNEL);
>  
>  		/* If we didn't even get one input buffer, we're useless. */
> @@ -1692,7 +1698,7 @@ static int virtnet_restore(struct virtio_device *vdev)
>  
>  	netif_device_attach(vi->dev);
>  
> -	for (i = 0; i < vi->max_queue_pairs; i++)
> +	for (i = 0; i < vi->curr_queue_pairs; i++)
>  		if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
>  			schedule_delayed_work(&vi->refill, 0);
>  
> -- 
> 1.8.2.1

Thread overview: 6+ messages
2013-04-23  0:35 [PATCH] virtio-net: fill only rx queues which are being used Sasha Levin
2013-04-23  4:13 ` Rusty Russell
2013-04-23  4:49   ` Sasha Levin
2013-04-23  9:18     ` Will Deacon
2013-04-23  7:08 ` Michael S. Tsirkin [this message]
2013-04-23 14:52   ` Sasha Levin
