From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Cc: Greg Kroah-Hartman , stable@vger.kernel.org, Eric Dumazet ,
	"David S. Miller"
Subject: [PATCH 4.9 59/71] inet: frags: get rid of ipfrag_skb_cb/FRAG_CB
Date: Tue, 16 Oct 2018 19:09:56 +0200
Message-Id: <20181016170542.323641535@linuxfoundation.org>
In-Reply-To: <20181016170539.315587743@linuxfoundation.org>
References: <20181016170539.315587743@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Eric Dumazet

ip_defrag uses skb->cb[] to store the fragment offset, and unfortunately
this integer is currently in a different cache line than skb->next, meaning
that we use two cache lines per skb when finding the insertion point.

By aliasing skb->ip_defrag_offset and skb->dev, we pack all the fields
in a single cache line and save precious memory bandwidth.

Note that after the fast path added by Changli Gao in commit
d6bebca92c66 ("fragment: add fast path for in-order fragments")
this change won't help the fast path, since we still need to access
prev->len (2nd cache line), but it will show great benefits when the
slow path is entered, since we perform a linear scan of a potentially
long list.

Also note that this potentially long list is an attack vector; we might
consider also using an rb-tree there eventually.

Signed-off-by: Eric Dumazet
Signed-off-by: David S. Miller
(cherry picked from commit bf66337140c64c27fa37222b7abca7e49d63fb57)
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/skbuff.h |    5 +++++
 1 file changed, 5 insertions(+)

--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -645,6 +645,11 @@ struct sk_buff {
 		};
 		struct rb_node		rbnode; /* used in netem & tcp stack */
 	};
+
+	union {
+		int			ip_defrag_offset;
+	};
+
 	struct sock		*sk;
 	struct net_device	*dev;
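
For readers unfamiliar with the aliasing trick the changelog describes, the
stand-alone program below sketches the idea. The struct and field names are
invented for illustration and do not reproduce struct sk_buff's real layout;
also note that this 4.9 hunk adds the field in its own union just before
'sk', whereas the upstream change folds it into an existing union with
'dev', but the goal in both cases is to keep the fragment offset next to
the fields the list walk already touches rather than in the colder cb[]
area.

	/*
	 * Illustration only: not struct sk_buff, layout made up for the
	 * example. Shows how a union lets the defrag offset share storage
	 * with a pointer that is unused while the skb sits in the queue.
	 */
	#include <stdio.h>
	#include <stddef.h>

	struct fake_skb {
		struct fake_skb *next;	/* walked while searching the frag list */
		struct fake_skb *prev;
		char cb[48];		/* old home of the fragment offset      */
		union {
			void *dev;		/* device pointer, idle while the skb
						 * waits in the defrag queue ...      */
			int ip_defrag_offset;	/* ... so the offset can reuse its
						 * bytes, right next to next/prev     */
		};
	};

	int main(void)
	{
		struct fake_skb skb = { 0 };

		skb.ip_defrag_offset = 1480;

		/* Both union members start at the same offset: reading the
		 * fragment offset during the insertion scan touches the same
		 * part of the struct as the pointers, not the distant cb[]. */
		printf("offsetof(cb)               = %zu\n", offsetof(struct fake_skb, cb));
		printf("offsetof(dev)              = %zu\n", offsetof(struct fake_skb, dev));
		printf("offsetof(ip_defrag_offset) = %zu\n", offsetof(struct fake_skb, ip_defrag_offset));
		printf("stored offset              = %d\n", skb.ip_defrag_offset);
		return 0;
	}

Compiled with any C11 compiler, the last two offsetof lines print the same
value, which is the whole point: the offset costs no extra space and no
extra cache line.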