From: Wei Liu <wei.liu2@citrix.com>
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: <wei.liu2@citrix.com>, <xen-devel@lists.xenproject.org>,
	<paul.durrant@citrix.com>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] xen-netback: fix memory leaks on XenBus disconnect
Date: Fri, 13 Jan 2017 10:38:00 +0000
Message-ID: <20170113103800.GA5089@citrix.com>
In-Reply-To: <1484243516-141100-1-git-send-email-igor.druzhinin@citrix.com>

On Thu, Jan 12, 2017 at 05:51:56PM +0000, Igor Druzhinin wrote:
> Eliminate memory leaks introduced several years ago by freeing, on
> XenBus disconnect, the queue resources that are allocated on the XenBus
> connection event: namely the queue structure array and the pages used
> for the IO rings. vif->lock is used to protect statistics-gathering
> agents from using the queue structures during the cleanup.
> 
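
My own sketch of the disconnect-time cleanup this describes (not your
actual xenbus.c hunk, which is not quoted here; xenvif_deinit_queue()
is the existing per-queue teardown helper):

	struct xenvif_queue *queues;
	unsigned int queue_index;

	/* Tear down per-queue state, including the mapped IO rings. */
	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index)
		xenvif_deinit_queue(&vif->queues[queue_index]);

	/* Detach the array under vif->lock so that readers such as
	 * xenvif_get_stats() never see a stale pointer; vfree() can
	 * sleep, so call it outside the lock.
	 */
	spin_lock(&vif->lock);
	queues = vif->queues;
	vif->queues = NULL;
	vif->num_queues = 0;
	spin_unlock(&vif->lock);

	vfree(queues);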

There is code in netback_remove which eventually calls xenvif_free to
free up these resources; maybe you should modify xenvif_free instead?
That seems more symmetric to me. What do you think?
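
That is, factor the teardown into one place that both the disconnect
path and xenvif_free() can call; rough and untested, with the helper
name invented:

	/* Hypothetical helper, shared by the XenBus disconnect path
	 * and xenvif_free(), so allocation and freeing stay symmetric.
	 */
	static void xenvif_free_queues(struct xenvif *vif)
	{
		struct xenvif_queue *queues;
		unsigned int queue_index;

		for (queue_index = 0; queue_index < vif->num_queues;
		     ++queue_index)
			xenvif_deinit_queue(&vif->queues[queue_index]);

		spin_lock(&vif->lock);
		queues = vif->queues;
		vif->queues = NULL;
		vif->num_queues = 0;
		spin_unlock(&vif->lock);

		/* vfree() can sleep, so it stays outside the lock. */
		vfree(queues);
	}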

> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---
>  drivers/net/xen-netback/interface.c |  6 ++++--
>  drivers/net/xen-netback/xenbus.c    | 13 +++++++++++++
>  2 files changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index e30ffd2..5795213 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -221,18 +221,18 @@ static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
>  {
>  	struct xenvif *vif = netdev_priv(dev);
>  	struct xenvif_queue *queue = NULL;
> -	unsigned int num_queues = vif->num_queues;
>  	unsigned long rx_bytes = 0;
>  	unsigned long rx_packets = 0;
>  	unsigned long tx_bytes = 0;
>  	unsigned long tx_packets = 0;
>  	unsigned int index;
>  
> +	spin_lock(&vif->lock);
>  	if (vif->queues == NULL)
>  		goto out;
>  
>  	/* Aggregate tx and rx stats from each queue */
> -	for (index = 0; index < num_queues; ++index) {
> +	for (index = 0; index < vif->num_queues; ++index) {
>  		queue = &vif->queues[index];
>  		rx_bytes += queue->stats.rx_bytes;
>  		rx_packets += queue->stats.rx_packets;
> @@ -241,6 +241,8 @@ static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
>  	}
>  
>  out:
> +	spin_unlock(&vif->lock);
> +

Good catch, this is definitely needed: without the lock, the teardown
path can free vif->queues while the stats loop is still walking it
(see the sketch below). It should probably go in a separate patch,
though.
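
Sketch of the race, assuming a teardown path like the one the commit
message describes:

	CPU0: xenvif_get_stats()            CPU1: disconnect teardown
	  num_queues = vif->num_queues;
	  /* vif->queues != NULL, proceed */
	                                      vfree(vif->queues);
	                                      vif->queues = NULL;
	                                      vif->num_queues = 0;
	  queue = &vif->queues[index];
	  /* use-after-free */

With the reader holding vif->lock for the whole aggregation, and
re-reading vif->num_queues under it as your hunk does, it either sees
the live array with a matching count or sees queues == NULL and bails
out.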

Wei.
