From: Florian Westphal <fw@strlen.de>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Florian Westphal <fw@strlen.de>, netdev@vger.kernel.org
Subject: Re: [PATCH -next 3/4] sched: remove qdisc_rehape_fail
Date: Wed, 8 Jun 2016 23:01:09 +0200 [thread overview]
Message-ID: <20160608210109.GD29699@breakpoint.cc> (raw)
In-Reply-To: <1465407776.7945.24.camel@edumazet-glaptop3.roam.corp.google.com>
Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Wed, 2016-06-08 at 17:35 +0200, Florian Westphal wrote:
> > After the removal of TCA_CBQ_POLICE in cbq scheduler qdisc->reshape_fail
> > is always NULL, i.e. qdisc_reshape_fail is now the same as qdisc_drop.
> >
> > Signed-off-by: Florian Westphal <fw@strlen.de>
> > ---
> > include/net/sch_generic.h | 19 -------------------
> > net/sched/sch_fifo.c | 4 ++--
> > net/sched/sch_netem.c | 4 ++--
> > net/sched/sch_plug.c | 2 +-
> > net/sched/sch_tbf.c | 4 ++--
> > 5 files changed, 7 insertions(+), 26 deletions(-)
> >
> > diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
> > index c069ac1..a9aec63 100644
> > --- a/include/net/sch_generic.h
> > +++ b/include/net/sch_generic.h
> > @@ -63,9 +63,6 @@ struct Qdisc {
> > struct list_head list;
> > u32 handle;
> > u32 parent;
> > - int (*reshape_fail)(struct sk_buff *skb,
> > - struct Qdisc *q);
> > -
> > void *u32_node;
> >
>
> You removed 2 pointers from Qdisc, so now next_sched & gso_skb are in a
> different cache line than ->state
>
> Some performance penalty is expected, unless you move a read_mostly
> field there to compensate.
Would you mind an annotation rather than covering the hole?
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -71,11 +71,11 @@ struct Qdisc {
struct gnet_stats_basic_cpu __percpu *cpu_bstats;
struct gnet_stats_queue __percpu *cpu_qstats;
- struct Qdisc *next_sched;
- struct sk_buff *gso_skb;
/*
* For performance sake on SMP, we put highly modified fields at the end
*/
+ struct Qdisc *next_sched ____cacheline_aligned_in_smp;
+ struct sk_buff *gso_skb;
... this creates a 16-byte hole after cpu_qstats and keeps the rest as-is
(i.e. next_sched stays at the beginning of the 2nd cache line, as before the removal).
I could also cover the hole by moving rcu_head there, but that seems fragile
and doesn't reduce the total struct size anyway (we'd get a larger hole at the end).
If you have no objection I'd resubmit the series as-is, but with this patch added.
Let me know, thanks, Eric!
Thread overview: 9+ messages [~2016-06-08 21:01 UTC]
2016-06-08 15:35 [PATCH net-next 0/4] sched, cbq: remove OVL_STRATEGY/POLICE support Florian Westphal
2016-06-08 15:35 ` [PATCH -next 1/4] cbq: remove TCA_CBQ_OVL_STRATEGY support Florian Westphal
2016-06-08 15:35 ` [PATCH -next 2/4] cbq: remove TCA_CBQ_POLICE support Florian Westphal
2016-06-08 15:35 ` [PATCH -next 3/4] sched: remove qdisc_rehape_fail Florian Westphal
2016-06-08 17:42 ` Eric Dumazet
2016-06-08 21:01 ` Florian Westphal [this message]
2016-06-08 22:08 ` Eric Dumazet
2016-06-08 15:35 ` [PATCH -next 4/4] sched: remove qdisc->drop Florian Westphal
2016-06-08 18:24 ` [PATCH net-next 0/4] sched, cbq: remove OVL_STRATEGY/POLICE support David Miller