From: Ben Hutchings <bhutchings@solarflare.com>
To: David Miller <davem@davemloft.net>
Cc: netdev@vger.kernel.org, linux-net-drivers@solarflare.com,
	Tom Herbert <therbert@google.com>,
	John Fastabend <john.r.fastabend@intel.com>
Subject: [PATCH net-next-2.6 1/5] sch_mqprio: Always set num_tc to 0 in mqprio_destroy()
Date: Tue, 15 Feb 2011 20:14:11 +0000
Message-ID: <1297800851.2584.16.camel@bwh-desktop>
In-Reply-To: <1297800733.2584.15.camel@bwh-desktop>

All the cleanup code in mqprio_destroy() is currently conditional on
priv->qdiscs being non-null, but that condition should only apply to
the per-queue qdisc cleanup.  We should always set the number of
traffic classes back to 0 here.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
---
 net/sched/sch_mqprio.c |   14 +++++++-------
 1 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
index effd4ee..ace37f9 100644
--- a/net/sched/sch_mqprio.c
+++ b/net/sched/sch_mqprio.c
@@ -29,18 +29,18 @@ static void mqprio_destroy(struct Qdisc *sch)
 	struct mqprio_sched *priv = qdisc_priv(sch);
 	unsigned int ntx;
 
-	if (!priv->qdiscs)
-		return;
-
-	for (ntx = 0; ntx < dev->num_tx_queues && priv->qdiscs[ntx]; ntx++)
-		qdisc_destroy(priv->qdiscs[ntx]);
+	if (priv->qdiscs) {
+		for (ntx = 0;
+		     ntx < dev->num_tx_queues && priv->qdiscs[ntx];
+		     ntx++)
+			qdisc_destroy(priv->qdiscs[ntx]);
+		kfree(priv->qdiscs);
+	}
 
 	if (priv->hw_owned && dev->netdev_ops->ndo_setup_tc)
 		dev->netdev_ops->ndo_setup_tc(dev, 0);
 	else
 		netdev_set_num_tc(dev, 0);
-
-	kfree(priv->qdiscs);
 }
 
 static int mqprio_parse_opt(struct net_device *dev, struct tc_mqprio_qopt *qopt)
-- 
1.7.3.4
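
[Editorial note: for readers following along, below is a sketch of how
mqprio_destroy() reads with the hunk above applied.  It is reconstructed
from the diff context only; the `dev` declaration via qdisc_dev() and the
surrounding file contents are not shown in the hunk and are assumed here,
so treat this as illustrative rather than the authoritative source.]

static void mqprio_destroy(struct Qdisc *sch)
{
	struct net_device *dev = qdisc_dev(sch);	/* assumed: not in the hunk context */
	struct mqprio_sched *priv = qdisc_priv(sch);
	unsigned int ntx;

	if (priv->qdiscs) {
		/* The per-queue child qdiscs only exist if init() got far
		 * enough to allocate them, so their destruction stays
		 * behind the NULL check. */
		for (ntx = 0;
		     ntx < dev->num_tx_queues && priv->qdiscs[ntx];
		     ntx++)
			qdisc_destroy(priv->qdiscs[ntx]);
		kfree(priv->qdiscs);
	}

	/* Always reset the traffic class configuration, whether or not
	 * the per-queue qdiscs were ever allocated. */
	if (priv->hw_owned && dev->netdev_ops->ndo_setup_tc)
		dev->netdev_ops->ndo_setup_tc(dev, 0);
	else
		netdev_set_num_tc(dev, 0);
}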



-- 
Ben Hutchings, Senior Software Engineer, Solarflare Communications
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.

