From: Ian Campbell <Ian.Campbell@citrix.com>
To: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
David Miller <davem@davemloft.net>,
Eric Dumazet <eric.dumazet@gmail.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Wei Liu (Intern)" <wei.liu2@citrix.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH 05/10] net: move destructor_arg to the front of sk_buff.
Date: Wed, 11 Apr 2012 07:56:24 +0000 [thread overview]
Message-ID: <1334130984.12209.195.camel@dagon.hellion.org.uk> (raw)
In-Reply-To: <4F847CF9.3090701@intel.com>
On Tue, 2012-04-10 at 19:33 +0100, Alexander Duyck wrote:
> On 04/10/2012 07:26 AM, Ian Campbell wrote:
> > As of the previous patch we align the end (rather than the start) of the
> > struct to a cache line. With 32 and 64 byte cache lines and the shinfo size
> > increase from the next patch, the first 8 bytes of the struct then end up on
> > a different cache line to the rest of it, so make sure that slot holds
> > something relatively unimportant, to avoid hitting an extra cache line on
> > hot operations such as kfree_skb.
> >
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > Cc: "David S. Miller" <davem@davemloft.net>
> > Cc: Eric Dumazet <eric.dumazet@gmail.com>
> > ---
> > include/linux/skbuff.h | 15 ++++++++++-----
> > net/core/skbuff.c | 5 ++++-
> > 2 files changed, 14 insertions(+), 6 deletions(-)
> >
> > diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> > index 0ad6a46..f0ae39c 100644
> > --- a/include/linux/skbuff.h
> > +++ b/include/linux/skbuff.h
> > @@ -265,6 +265,15 @@ struct ubuf_info {
> > * the end of the header data, ie. at skb->end.
> > */
> > struct skb_shared_info {
> > + /* Intermediate layers must ensure that destructor_arg
> > + * remains valid until skb destructor */
> > + void *destructor_arg;
> > +
> > + /*
> > + * Warning: all fields from here until dataref are cleared in
> > + * __alloc_skb()
> > + *
> > + */
> > unsigned char nr_frags;
> > __u8 tx_flags;
> > unsigned short gso_size;
> > @@ -276,14 +285,10 @@ struct skb_shared_info {
> > __be32 ip6_frag_id;
> >
> > /*
> > - * Warning : all fields before dataref are cleared in __alloc_skb()
> > + * Warning: all fields before dataref are cleared in __alloc_skb()
> > */
> > atomic_t dataref;
> >
> > - /* Intermediate layers must ensure that destructor_arg
> > - * remains valid until skb destructor */
> > - void * destructor_arg;
> > -
> > /* must be last field, see pskb_expand_head() */
> > skb_frag_t frags[MAX_SKB_FRAGS];
> > };
> > diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> > index d4e139e..b8a41d6 100644
> > --- a/net/core/skbuff.c
> > +++ b/net/core/skbuff.c
> > @@ -214,7 +214,10 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
> >
> > /* make sure we initialize shinfo sequentially */
> > shinfo = skb_shinfo(skb);
> > - memset(shinfo, 0, offsetof(struct skb_shared_info, dataref));
> > +
> > + memset(&shinfo->nr_frags, 0,
> > + offsetof(struct skb_shared_info, dataref)
> > + - offsetof(struct skb_shared_info, nr_frags));
> > atomic_set(&shinfo->dataref, 1);
> > kmemcheck_annotate_variable(shinfo->destructor_arg);
> >
>
> Have you checked this for 32 bit as well as 64? Based on my math your
> next patch will still mess up the memset on 32 bit with the structure
> being split somewhere just in front of hwtstamps.
You mean 32 byte cache lines? If so then yes, there is a split halfway
through the structure in that case, but there's no way all this data
could ever fit in a single 32 byte cache line. Not including the frags
or destructor_arg, the region from nr_frags up to and including dataref
is 36 bytes on 32 bit and 40 bytes on 64 bit. I've not changed anything
in this respect.

If you meant 64 byte cache lines with 32 bit structure sizes then by my
calculations everything from destructor_arg (in fact from a bit earlier,
12 bytes before it) up to and including frag[0] is in the same 64 byte
cache line.
I find the easiest way to check is to use gdb and open-code an offsetof
macro.
(gdb) print/d sizeof(struct skb_shared_info) - (unsigned long)&(((struct skb_shared_info *)0)->nr_frags)
$3 = 240
(gdb) print/d sizeof(struct skb_shared_info) - (unsigned long)&(((struct skb_shared_info *)0)->frags[1])
$4 = 192
So given 64 byte cache lines, the interesting area starts 240/64 = 3.75
cache lines from the (aligned) end and finishes just before 192/64 = 3
cache lines from the end; nr_frags through to frags[0] are therefore on
the same cache line.
Ian.
Thread overview: 34+ messages
2012-04-10 14:26 [PATCH v4 0/10] skb paged fragment destructors Ian Campbell
2012-04-10 14:26 ` [PATCH 01/10] net: add and use SKB_ALLOCSIZE Ian Campbell
2012-04-10 14:57 ` Eric Dumazet
2012-04-10 14:26 ` [PATCH 02/10] net: Use SKB_WITH_OVERHEAD in build_skb Ian Campbell
2012-04-10 14:58 ` Eric Dumazet
2012-04-10 14:26 ` [PATCH 03/10] chelsio: use SKB_WITH_OVERHEAD Ian Campbell
2012-04-10 14:59 ` Eric Dumazet
2012-04-10 14:26 ` [PATCH 04/10] net: pad skb data and shinfo as a whole rather than individually Ian Campbell
2012-04-10 15:01 ` Eric Dumazet
2012-04-10 14:26 ` [PATCH 05/10] net: move destructor_arg to the front of sk_buff Ian Campbell
2012-04-10 15:05 ` Eric Dumazet
2012-04-10 15:19 ` Ian Campbell
2012-04-10 18:33 ` Alexander Duyck
2012-04-10 18:41 ` Eric Dumazet
2012-04-10 19:15 ` Alexander Duyck
2012-04-11 8:00 ` Ian Campbell
2012-04-11 16:31 ` Alexander Duyck
2012-04-11 17:00 ` Ian Campbell
2012-04-11 8:20 ` Eric Dumazet
2012-04-11 16:05 ` Alexander Duyck
2012-04-11 7:56 ` Ian Campbell [this message]
2012-04-10 14:26 ` [PATCH 06/10] net: add support for per-paged-fragment destructors Ian Campbell
2012-04-26 20:44 ` [Xen-devel] " Konrad Rzeszutek Wilk
2012-04-10 14:26 ` [PATCH 07/10] net: only allow paged fragments with the same destructor to be coalesced Ian Campbell
2012-04-10 20:11 ` Ben Hutchings
2012-04-11 7:45 ` Ian Campbell
2012-04-10 14:26 ` [PATCH 08/10] net: add skb_orphan_frags to copy aside frags with destructors Ian Campbell
[not found] ` <1334067965.5394.22.camel-o4Be2W7LfRlXesXXhkcM7miJhflN2719@public.gmane.org>
2012-04-10 14:26 ` [PATCH 09/10] net: add paged frag destructor support to kernel_sendpage Ian Campbell
2012-04-10 14:26 ` [PATCH 10/10] sunrpc: use SKB fragment destructors to delay completion until page is released by network stack Ian Campbell
2012-04-10 14:58 ` [PATCH v4 0/10] skb paged fragment destructors Michael S. Tsirkin
2012-04-10 15:00 ` Michael S. Tsirkin
2012-04-10 15:46 ` Bart Van Assche
2012-04-10 15:50 ` Ian Campbell
2012-04-11 10:02 ` Bart Van Assche