From mboxrd@z Thu Jan  1 00:00:00 1970
From: John Fastabend
Subject: [RFC PATCH 05/17] net: sched: a dflt qdisc may be used with per cpu stats
Date: Mon, 13 Nov 2017 12:09:16 -0800
Message-ID: <20171113200916.6245.53953.stgit@john-Precision-Tower-5810>
References: <20171113195256.6245.64676.stgit@john-Precision-Tower-5810>
In-Reply-To: <20171113195256.6245.64676.stgit@john-Precision-Tower-5810>
To: willemdebruijn.kernel@gmail.com, daniel@iogearbox.net, eric.dumazet@gmail.com
Cc: make0818@gmail.com, netdev@vger.kernel.org, jiri@resnulli.us, xiyou.wangcong@gmail.com
Mime-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Enable dflt qdisc support for per-cpu stats. Before this patch a dflt
qdisc was required to use the global qstats and bstats statistics.

This patch adds a static_flags field to struct Qdisc_ops that is
propagated into qdisc->flags in the qdisc allocation call. This allows
the allocation path to completely allocate the qdisc object, so we
don't have dangling allocations after qdisc init.
Signed-off-by: John Fastabend
---
 0 files changed

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 7c4b96b..7bc2826 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -179,6 +179,7 @@ struct Qdisc_ops {
 	const struct Qdisc_class_ops	*cl_ops;
 	char			id[IFNAMSIZ];
 	int			priv_size;
+	unsigned int		static_flags;
 
 	int			(*enqueue)(struct sk_buff *skb,
 					   struct Qdisc *sch,
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index d063f6b..131eab8 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -635,6 +635,19 @@ struct Qdisc *qdisc_alloc(struct netdev_queue *dev_queue,
 	qdisc_skb_head_init(&sch->q);
 	spin_lock_init(&sch->q.lock);
 
+	if (ops->static_flags & TCQ_F_CPUSTATS) {
+		sch->cpu_bstats =
+			netdev_alloc_pcpu_stats(struct gnet_stats_basic_cpu);
+		if (!sch->cpu_bstats)
+			goto errout1;
+
+		sch->cpu_qstats = alloc_percpu(struct gnet_stats_queue);
+		if (!sch->cpu_qstats) {
+			free_percpu(sch->cpu_bstats);
+			goto errout1;
+		}
+	}
+
 	spin_lock_init(&sch->busylock);
 	lockdep_set_class(&sch->busylock,
 			  dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);
@@ -644,6 +657,7 @@ struct Qdisc *qdisc_alloc(struct netdev_queue *dev_queue,
 			  dev->qdisc_running_key ?: &qdisc_running_key);
 
 	sch->ops = ops;
+	sch->flags = ops->static_flags;
 	sch->enqueue = ops->enqueue;
 	sch->dequeue = ops->dequeue;
 	sch->dev_queue = dev_queue;
@@ -651,6 +665,8 @@ struct Qdisc *qdisc_alloc(struct netdev_queue *dev_queue,
 	refcount_set(&sch->refcnt, 1);
 
 	return sch;
+errout1:
+	kfree(p);
 errout:
 	return ERR_PTR(err);
 }