* [PATCH net-next 0/2] net: sched: unify __gnet_stats_copy_xxx() for percpu and non-percpu
@ 2019-12-17 8:47 Dust Li
2019-12-17 8:47 ` [PATCH net-next 1/2] net: sched: keep __gnet_stats_copy_xxx() same semantics for percpu stats Dust Li
` (2 more replies)
0 siblings, 3 replies; 5+ messages in thread
From: Dust Li @ 2019-12-17 8:47 UTC (permalink / raw)
To: Cong Wang, Jamal Hadi Salim, John Fastabend, Jiri Pirko; +Cc: Tony Lu, netdev
Currently, __gnet_stats_copy_xxx() overwrites the output value when
percpu stats are not enabled, but when percpu stats are enabled it
adds the percpu stats to the result. This inconsistency confuses
its callers.
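For reference, the current divergence looks roughly like this (a simplified
sketch of __gnet_stats_copy_basic() in net/core/gen_stats.c, with the
seqcount/u64_stats details omitted):

void __gnet_stats_copy_basic(const seqcount_t *running,
                             struct gnet_stats_basic_packed *bstats,
                             struct gnet_stats_basic_cpu __percpu *cpu,
                             struct gnet_stats_basic_packed *b)
{
        if (cpu) {
                /* percpu path: *adds* every CPU's counters to bstats */
                int i;

                for_each_possible_cpu(i) {
                        struct gnet_stats_basic_cpu *bcpu = per_cpu_ptr(cpu, i);

                        bstats->bytes   += bcpu->bstats.bytes;
                        bstats->packets += bcpu->bstats.packets;
                }
                return;
        }
        /* non-percpu path: *overwrites* bstats */
        bstats->bytes = b->bytes;
        bstats->packets = b->packets;
}

__gnet_stats_copy_queue() behaves the same way for qstats.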
This patch series unifies the behaviour of __gnet_stats_copy_basic() and
__gnet_stats_copy_queue() for percpu and non-percpu stats and fixes an
incorrect statistic for the mqprio class.
- Patch 1 unifies __gnet_stats_copy_xxx() for both percpu and non-percpu stats.
- Patch 2, which depends on Patch 1, fixes the problem that 'tc class show'
  for the mqprio class always reports 0.
Dust Li (2):
net: sched: keep __gnet_stats_copy_xxx() same semantics for percpu
stats
net: sched: fix wrong class stats dumping in sch_mqprio
net/core/gen_stats.c | 2 ++
net/sched/sch_mq.c | 35 ++++++++++++-------------
net/sched/sch_mqprio.c | 59 +++++++++++++++++++++++-------------------
3 files changed, 51 insertions(+), 45 deletions(-)
--
2.19.1.3.ge56e4f7
* [PATCH net-next 1/2] net: sched: keep __gnet_stats_copy_xxx() same semantics for percpu stats
2019-12-17 8:47 [PATCH net-next 0/2] net: sched: unify __gnet_stats_copy_xxx() for percpu and non-percpu Dust Li
@ 2019-12-17 8:47 ` Dust Li
2019-12-17 8:47 ` [PATCH net-next 2/2] net: sched: fix wrong class stats dumping in sch_mqprio Dust Li
2019-12-21 0:54 ` [PATCH net-next 0/2] net: sched: unify __gnet_stats_copy_xxx() for percpu and non-percpu David Miller
2 siblings, 0 replies; 5+ messages in thread
From: Dust Li @ 2019-12-17 8:47 UTC (permalink / raw)
To: Cong Wang, Jamal Hadi Salim, John Fastabend, Jiri Pirko; +Cc: Tony Lu, netdev
__gnet_stats_copy_basic/queue() support both percpu and
non-percpu stats, but the two cases are handled differently:
1. percpu stats are added to the output value;
2. non-percpu stats overwrite the output value;
We should keep the same semantics for both types.
This patch makes the percpu path follow the non-percpu one by
resetting the output bstats/qstats before adding the percpu
counters to them.
Also change the callers in sch_mq.c/sch_mqprio.c to make sure
they dump the right statistics for percpu qdiscs.
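As a rough illustration of the resulting contract (the mq/mqprio hunks
below are the real change):

        /* after this patch, bstats/qstats are plain copies of one qdisc */
        __gnet_stats_copy_basic(NULL, &bstats, qdisc->cpu_bstats,
                                &qdisc->bstats);
        __gnet_stats_copy_queue(&qstats, qdisc->cpu_qstats,
                                &qdisc->qstats, qlen);

        /* callers that aggregate over children now sum explicitly */
        sch->bstats.bytes   += bstats.bytes;
        sch->bstats.packets += bstats.packets;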
Signed-off-by: Dust Li <dust.li@linux.alibaba.com>
Signed-off-by: Tony Lu <tonylu@linux.alibaba.com>
---
net/core/gen_stats.c | 2 ++
net/sched/sch_mq.c | 35 ++++++++++++++++-------------------
net/sched/sch_mqprio.c | 36 +++++++++++++++++-------------------
3 files changed, 35 insertions(+), 38 deletions(-)
diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
index 1d653fbfcf52..efd3f5f69c73 100644
--- a/net/core/gen_stats.c
+++ b/net/core/gen_stats.c
@@ -145,6 +145,7 @@ __gnet_stats_copy_basic(const seqcount_t *running,
unsigned int seq;
if (cpu) {
+ memset(bstats, 0, sizeof(*bstats));
__gnet_stats_copy_basic_cpu(bstats, cpu);
return;
}
@@ -305,6 +306,7 @@ void __gnet_stats_copy_queue(struct gnet_stats_queue *qstats,
__u32 qlen)
{
if (cpu) {
+ memset(qstats, 0, sizeof(*qstats));
__gnet_stats_copy_queue_cpu(qstats, cpu);
} else {
qstats->qlen = q->qlen;
diff --git a/net/sched/sch_mq.c b/net/sched/sch_mq.c
index e79f1afe0cfd..b2178b7fe3a3 100644
--- a/net/sched/sch_mq.c
+++ b/net/sched/sch_mq.c
@@ -131,6 +131,8 @@ static int mq_dump(struct Qdisc *sch, struct sk_buff *skb)
struct Qdisc *qdisc;
unsigned int ntx;
__u32 qlen = 0;
+ struct gnet_stats_queue qstats = {0};
+ struct gnet_stats_basic_packed bstats = {0};
sch->q.qlen = 0;
memset(&sch->bstats, 0, sizeof(sch->bstats));
@@ -145,25 +147,20 @@ static int mq_dump(struct Qdisc *sch, struct sk_buff *skb)
qdisc = netdev_get_tx_queue(dev, ntx)->qdisc_sleeping;
spin_lock_bh(qdisc_lock(qdisc));
- if (qdisc_is_percpu_stats(qdisc)) {
- qlen = qdisc_qlen_sum(qdisc);
- __gnet_stats_copy_basic(NULL, &sch->bstats,
- qdisc->cpu_bstats,
- &qdisc->bstats);
- __gnet_stats_copy_queue(&sch->qstats,
- qdisc->cpu_qstats,
- &qdisc->qstats, qlen);
- sch->q.qlen += qlen;
- } else {
- sch->q.qlen += qdisc->q.qlen;
- sch->bstats.bytes += qdisc->bstats.bytes;
- sch->bstats.packets += qdisc->bstats.packets;
- sch->qstats.qlen += qdisc->qstats.qlen;
- sch->qstats.backlog += qdisc->qstats.backlog;
- sch->qstats.drops += qdisc->qstats.drops;
- sch->qstats.requeues += qdisc->qstats.requeues;
- sch->qstats.overlimits += qdisc->qstats.overlimits;
- }
+ qlen = qdisc_qlen_sum(qdisc);
+ __gnet_stats_copy_basic(NULL, &bstats, qdisc->cpu_bstats,
+ &qdisc->bstats);
+ __gnet_stats_copy_queue(&qstats, qdisc->cpu_qstats,
+ &qdisc->qstats, qlen);
+
+ sch->q.qlen += qdisc->q.qlen;
+ sch->bstats.bytes += bstats.bytes;
+ sch->bstats.packets += bstats.packets;
+ sch->qstats.qlen += qstats.qlen;
+ sch->qstats.backlog += qstats.backlog;
+ sch->qstats.drops += qstats.drops;
+ sch->qstats.requeues += qstats.requeues;
+ sch->qstats.overlimits += qstats.overlimits;
spin_unlock_bh(qdisc_lock(qdisc));
}
diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
index 8766ab5b8788..ce95e9b4a796 100644
--- a/net/sched/sch_mqprio.c
+++ b/net/sched/sch_mqprio.c
@@ -388,6 +388,9 @@ static int mqprio_dump(struct Qdisc *sch, struct sk_buff *skb)
struct tc_mqprio_qopt opt = { 0 };
struct Qdisc *qdisc;
unsigned int ntx, tc;
+ __u32 qlen = 0;
+ struct gnet_stats_queue qstats = {0};
+ struct gnet_stats_basic_packed bstats = {0};
sch->q.qlen = 0;
memset(&sch->bstats, 0, sizeof(sch->bstats));
@@ -402,25 +405,20 @@ static int mqprio_dump(struct Qdisc *sch, struct sk_buff *skb)
qdisc = netdev_get_tx_queue(dev, ntx)->qdisc_sleeping;
spin_lock_bh(qdisc_lock(qdisc));
- if (qdisc_is_percpu_stats(qdisc)) {
- __u32 qlen = qdisc_qlen_sum(qdisc);
-
- __gnet_stats_copy_basic(NULL, &sch->bstats,
- qdisc->cpu_bstats,
- &qdisc->bstats);
- __gnet_stats_copy_queue(&sch->qstats,
- qdisc->cpu_qstats,
- &qdisc->qstats, qlen);
- sch->q.qlen += qlen;
- } else {
- sch->q.qlen += qdisc->q.qlen;
- sch->bstats.bytes += qdisc->bstats.bytes;
- sch->bstats.packets += qdisc->bstats.packets;
- sch->qstats.backlog += qdisc->qstats.backlog;
- sch->qstats.drops += qdisc->qstats.drops;
- sch->qstats.requeues += qdisc->qstats.requeues;
- sch->qstats.overlimits += qdisc->qstats.overlimits;
- }
+ qlen = qdisc_qlen_sum(qdisc);
+ __gnet_stats_copy_basic(NULL, &bstats, qdisc->cpu_bstats,
+ &qdisc->bstats);
+ __gnet_stats_copy_queue(&qstats, qdisc->cpu_qstats,
+ &qdisc->qstats, qlen);
+
+ sch->q.qlen += qdisc->q.qlen;
+ sch->bstats.bytes += bstats.bytes;
+ sch->bstats.packets += bstats.packets;
+ sch->qstats.qlen += qstats.qlen;
+ sch->qstats.backlog += qstats.backlog;
+ sch->qstats.drops += qstats.drops;
+ sch->qstats.requeues += qstats.requeues;
+ sch->qstats.overlimits += qstats.overlimits;
spin_unlock_bh(qdisc_lock(qdisc));
}
--
2.19.1.3.ge56e4f7
* [PATCH net-next 2/2] net: sched: fix wrong class stats dumping in sch_mqprio
2019-12-17 8:47 [PATCH net-next 0/2] net: sched: unify __gnet_stats_copy_xxx() for percpu and non-percpu Dust Li
2019-12-17 8:47 ` [PATCH net-next 1/2] net: sched: keep __gnet_stats_copy_xxx() same semantics for percpu stats Dust Li
@ 2019-12-17 8:47 ` Dust Li
2019-12-21 0:54 ` [PATCH net-next 0/2] net: sched: unify __gnet_stats_copy_xxx() for percpu and non-percpu David Miller
2 siblings, 0 replies; 5+ messages in thread
From: Dust Li @ 2019-12-17 8:47 UTC (permalink / raw)
To: Cong Wang, Jamal Hadi Salim, John Fastabend, Jiri Pirko; +Cc: Tony Lu, netdev
Currently, the stack variables bstats and qstats in
mqprio_dump_class_stats() are never actually used. As a result,
'tc -s class show' for an mqprio class always returns 0 for
both bstats and qstats.
This patch makes them (bstats/qstats) hold each child qdisc's
stats and adds them up into tbstats/tqstats, which store the
aggregated result for the mqprio class.
Fixes: ce679e8df7ed ("net: sched: add support for TCQ_F_NOLOCK subqueues to sch_mqprio")
Signed-off-by: Dust Li <dust.li@linux.alibaba.com>
Signed-off-by: Tony Lu <tonylu@linux.alibaba.com>
---
net/sched/sch_mqprio.c | 23 ++++++++++++++++-------
1 file changed, 16 insertions(+), 7 deletions(-)
diff --git a/net/sched/sch_mqprio.c b/net/sched/sch_mqprio.c
index ce95e9b4a796..b356517c6ef4 100644
--- a/net/sched/sch_mqprio.c
+++ b/net/sched/sch_mqprio.c
@@ -511,8 +511,8 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
if (cl >= TC_H_MIN_PRIORITY) {
int i;
__u32 qlen = 0;
- struct gnet_stats_queue qstats = {0};
- struct gnet_stats_basic_packed bstats = {0};
+ struct gnet_stats_queue tqstats = {0};
+ struct gnet_stats_basic_packed tbstats = {0};
struct net_device *dev = qdisc_dev(sch);
struct netdev_tc_txq tc = dev->tc_to_txq[cl & TC_BITMASK];
@@ -529,6 +529,8 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
struct Qdisc *qdisc = rtnl_dereference(q->qdisc);
struct gnet_stats_basic_cpu __percpu *cpu_bstats = NULL;
struct gnet_stats_queue __percpu *cpu_qstats = NULL;
+ struct gnet_stats_queue qstats = {0};
+ struct gnet_stats_basic_packed bstats = {0};
spin_lock_bh(qdisc_lock(qdisc));
if (qdisc_is_percpu_stats(qdisc)) {
@@ -536,21 +538,28 @@ static int mqprio_dump_class_stats(struct Qdisc *sch, unsigned long cl,
cpu_qstats = qdisc->cpu_qstats;
}
- qlen = qdisc_qlen_sum(qdisc);
- __gnet_stats_copy_basic(NULL, &sch->bstats,
+ qlen += qdisc_qlen_sum(qdisc);
+ __gnet_stats_copy_basic(NULL, &bstats,
cpu_bstats, &qdisc->bstats);
- __gnet_stats_copy_queue(&sch->qstats,
+ __gnet_stats_copy_queue(&qstats,
cpu_qstats,
&qdisc->qstats,
qlen);
spin_unlock_bh(qdisc_lock(qdisc));
+
+ tbstats.bytes += bstats.bytes;
+ tbstats.packets += bstats.packets;
+ tqstats.backlog += qstats.backlog;
+ tqstats.drops += qstats.drops;
+ tqstats.requeues += qstats.requeues;
+ tqstats.overlimits += qstats.overlimits;
}
/* Reclaim root sleeping lock before completing stats */
if (d->lock)
spin_lock_bh(d->lock);
- if (gnet_stats_copy_basic(NULL, d, NULL, &bstats) < 0 ||
- gnet_stats_copy_queue(d, NULL, &qstats, qlen) < 0)
+ if (gnet_stats_copy_basic(NULL, d, NULL, &tbstats) < 0 ||
+ gnet_stats_copy_queue(d, NULL, &tqstats, qlen) < 0)
return -1;
} else {
struct netdev_queue *dev_queue = mqprio_queue_get(sch, cl);
--
2.19.1.3.ge56e4f7
* Re: [PATCH net-next 0/2] net: sched: unify __gnet_stats_copy_xxx() for percpu and non-percpu
2019-12-17 8:47 [PATCH net-next 0/2] net: sched: unify __gnet_stats_copy_xxx() for percpu and non-percpu Dust Li
2019-12-17 8:47 ` [PATCH net-next 1/2] net: sched: keep __gnet_stats_copy_xxx() same semantics for percpu stats Dust Li
2019-12-17 8:47 ` [PATCH net-next 2/2] net: sched: fix wrong class stats dumping in sch_mqprio Dust Li
@ 2019-12-21 0:54 ` David Miller
2019-12-23 9:55 ` Dust Li
2 siblings, 1 reply; 5+ messages in thread
From: David Miller @ 2019-12-21 0:54 UTC (permalink / raw)
To: dust.li; +Cc: xiyou.wangcong, jhs, john.fastabend, jiri, tonylu, netdev
From: Dust Li <dust.li@linux.alibaba.com>
Date: Tue, 17 Dec 2019 16:47:16 +0800
> Currently, __gnet_stats_copy_xxx() overwrites the output value when
> percpu stats are not enabled, but when percpu stats are enabled it
> adds the percpu stats to the result. This inconsistency confuses
> its callers.
>
> This patch series unifies the behaviour of __gnet_stats_copy_basic() and
> __gnet_stats_copy_queue() for percpu and non-percpu stats and fixes an
> incorrect statistic for the mqprio class.
>
> - Patch 1 unifies __gnet_stats_copy_xxx() for both percpu and non-percpu stats.
> - Patch 2, which depends on Patch 1, fixes the problem that 'tc class show'
>   for the mqprio class always reports 0.
I think this is going to break the estimator.
The callers of est_fetch_counters() expect continually incrementing
statistics. It does relative subtractions from previous values each
time, so if we just reset on the next statistics fetch it won't work.
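The pattern in question looks roughly like this (a simplified sketch of
est_timer() in net/core/gen_estimator.c; the exact scaling and field
names are approximate):

static void est_timer(struct timer_list *t)
{
        struct net_rate_estimator *est = from_timer(est, t, timer);
        struct gnet_stats_basic_packed b;
        u64 brate;

        /* b is a scratch output buffer filled by est_fetch_counters() */
        est_fetch_counters(est, &b);

        /* relative subtraction: the fetched counters must keep growing */
        brate = b.bytes - est->last_bytes;
        est->last_bytes = b.bytes;

        /* ... fold brate into the EWMA rate and rearm the timer ... */
}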
Sorry I can't apply this.
* Re: [PATCH net-next 0/2] net: sched: unify __gnet_stats_copy_xxx() for percpu and non-percpu
2019-12-21 0:54 ` [PATCH net-next 0/2] net: sched: unify __gnet_stats_copy_xxx() for percpu and non-percpu David Miller
@ 2019-12-23 9:55 ` Dust Li
0 siblings, 0 replies; 5+ messages in thread
From: Dust Li @ 2019-12-23 9:55 UTC (permalink / raw)
To: David Miller; +Cc: xiyou.wangcong, jhs, john.fastabend, jiri, tonylu, netdev
On 12/21/19 8:54 AM, David Miller wrote:
> From: Dust Li <dust.li@linux.alibaba.com>
> Date: Tue, 17 Dec 2019 16:47:16 +0800
>
>> Currently, __gnet_stats_copy_xxx() overwrites the output value when
>> percpu stats are not enabled, but when percpu stats are enabled it
>> adds the percpu stats to the result. This inconsistency confuses
>> its callers.
>>
>> This patch series unifies the behaviour of __gnet_stats_copy_basic() and
>> __gnet_stats_copy_queue() for percpu and non-percpu stats and fixes an
>> incorrect statistic for the mqprio class.
>>
>> - Patch 1 unifies __gnet_stats_copy_xxx() for both percpu and non-percpu stats.
>> - Patch 2, which depends on Patch 1, fixes the problem that 'tc class show'
>>   for the mqprio class always reports 0.
> I think this is going to break the estimator.
>
> The callers of est_fetch_counters() expect continually incrementing
> statistics. It does relative subtractions from previous values each
> time, so if we just reset on the next statistics fetch it won't work.
>
> Sorry I can't apply this.
Hi David,
Thanks for your reply.
I checked the callers of est_fetch_counters(). I think there is a little
misunderstanding here. We memset() the 'gnet_stats_basic_packed *b', which
is not the estimator's accumulated state but just the output buffer.
Actually, it is already memset in est_fetch_counters() today, so this
should not break the estimator.
static void est_fetch_counters(struct net_rate_estimator *e,
                               struct gnet_stats_basic_packed *b)
{
        memset(b, 0, sizeof(*b));   /* <<--- here b is already zeroed */
        if (e->stats_lock)
                spin_lock(e->stats_lock);

        __gnet_stats_copy_basic(e->running, b, e->cpu_bstats, e->bstats);

        if (e->stats_lock)
                spin_unlock(e->stats_lock);
}
The purpose of this memset() is to keep consistent semantics for both
percpu and non-percpu statistics in __gnet_stats_copy_basic().
Thanks
Dust