From: Jesper Dangaard Brouer <brouer@redhat.com>
To: netdev@vger.kernel.org
Cc: Christoph Lameter <cl@linux.com>,
tom@herbertland.com, Alexander Duyck <alexander.duyck@gmail.com>,
alexei.starovoitov@gmail.com,
Jesper Dangaard Brouer <brouer@redhat.com>,
ogerlitz@mellanox.com, gerlitz.or@gmail.com
Subject: [net-next PATCH 07/11] net: introduce napi_alloc_skb_hint() for more use-cases
Date: Tue, 02 Feb 2016 22:13:57 +0100 [thread overview]
Message-ID: <20160202211314.16315.70164.stgit@firesoul> (raw)
In-Reply-To: <20160202211051.16315.51808.stgit@firesoul>
The default bulk alloc size was arbitrarily chosen to be 8, which might not
suit all use-cases. This patch introduces napi_alloc_skb_hint(), which lets
the caller specify the bulk size they expect to need. It is only a hint,
because __napi_alloc_skb() limits the bulk size to the size of the SKB
cache array.

One user is the mlx5 driver, which bulk re-populates its RX ring with both
SKBs and pages, and would therefore like to work with bigger bulk alloc
chunks.
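
As an illustration only (not part of this patch): a driver that refills its
RX ring in fixed-size batches could pass its batch size as the hint. The
names RX_REFILL_BATCH and my_rx_refill() below are made up for the example
and do not exist in any driver:

	/* Hypothetical RX refill loop showing the intended use of
	 * napi_alloc_skb_hint(); assumes <linux/skbuff.h>.
	 */
	#define RX_REFILL_BATCH 32U

	static unsigned int my_rx_refill(struct napi_struct *napi,
					 unsigned int rx_buf_len)
	{
		struct sk_buff *skb;
		unsigned int i;

		for (i = 0; i < RX_REFILL_BATCH; i++) {
			/* Hint that a batch of RX_REFILL_BATCH SKBs is
			 * expected, so the first call can bulk alloc SKB
			 * heads for the whole refill cycle.
			 */
			skb = napi_alloc_skb_hint(napi, rx_buf_len,
						  RX_REFILL_BATCH);
			if (unlikely(!skb))
				break; /* post what we managed so far */
			/* ... DMA map skb->data and post a descriptor ... */
		}
		return i;
	}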
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
include/linux/skbuff.h | 19 +++++++++++++++----
net/core/skbuff.c | 8 +++-----
2 files changed, 18 insertions(+), 9 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index b06ba2e07c89..4d0c0eacbc34 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -2391,14 +2391,25 @@ static inline void skb_free_frag(void *addr)
__free_page_frag(addr);
}
+#define NAPI_SKB_CACHE_SIZE 64U /* Used in struct napi_alloc_cache */
+#define NAPI_SKB_BULK_ALLOC 8U /* Default slab bulk alloc in NAPI */
+
void *napi_alloc_frag(unsigned int fragsz);
-struct sk_buff *__napi_alloc_skb(struct napi_struct *napi,
- unsigned int length, gfp_t gfp_mask);
+struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
+ unsigned int bulk_hint, gfp_t gfp_mask);
static inline struct sk_buff *napi_alloc_skb(struct napi_struct *napi,
- unsigned int length)
+ unsigned int len)
+{
+ return __napi_alloc_skb(napi, len, NAPI_SKB_BULK_ALLOC, GFP_ATOMIC);
+}
+static inline struct sk_buff *napi_alloc_skb_hint(struct napi_struct *napi,
+ unsigned int len,
+ unsigned int bulk_hint)
{
- return __napi_alloc_skb(napi, length, GFP_ATOMIC);
+ bulk_hint = bulk_hint ? : 1;
+ return __napi_alloc_skb(napi, len, bulk_hint, GFP_ATOMIC);
}
+
void napi_consume_skb(struct sk_buff *skb, int budget);
void __kfree_skb_flush(void);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index ae8cdbec90ee..f77209fb5361 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -347,8 +347,6 @@ struct sk_buff *build_skb(void *data, unsigned int frag_size)
}
EXPORT_SYMBOL(build_skb);
-#define NAPI_SKB_CACHE_SIZE 64
-
struct napi_alloc_cache {
struct page_frag_cache page;
size_t skb_count;
@@ -480,9 +478,10 @@ EXPORT_SYMBOL(__netdev_alloc_skb);
* %NULL is returned if there is no free memory.
*/
struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
- gfp_t gfp_mask)
+ unsigned int bulk_hint, gfp_t gfp_mask)
{
struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+ unsigned int bulk_sz = min(bulk_hint, NAPI_SKB_CACHE_SIZE);
struct skb_shared_info *shinfo;
struct sk_buff *skb;
void *data;
@@ -507,10 +506,9 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
if (unlikely(!data))
return NULL;
-#define BULK_ALLOC_SIZE 8
if (!nc->skb_count) {
nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
- gfp_mask, BULK_ALLOC_SIZE,
+ gfp_mask, bulk_sz,
nc->skb_cache);
}
if (likely(nc->skb_count)) {
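
To make the clamping explicit (an illustrative sketch only, not new kernel
API): napi_alloc_skb_hint() rounds a hint of 0 up to 1, and
__napi_alloc_skb() caps the value at NAPI_SKB_CACHE_SIZE, so the request
passed to kmem_cache_alloc_bulk() never exceeds the nc->skb_cache array:

	unsigned int bulk_hint = 200;	/* example hint from a driver */
	unsigned int bulk_sz;

	bulk_hint = bulk_hint ? : 1;			/* napi_alloc_skb_hint() */
	bulk_sz = min(bulk_hint, NAPI_SKB_CACHE_SIZE);	/* __napi_alloc_skb() */
	/* bulk_sz == 64 here; a hint of 0 would have yielded bulk_sz == 1 */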
Thread overview: 31+ messages
2015-10-23 12:46 [PATCH 0/4] net: mitigating kmem_cache slowpath for network stack in NAPI context Jesper Dangaard Brouer
2015-10-23 12:46 ` [PATCH 1/4] net: bulk free infrastructure for NAPI context, use napi_consume_skb Jesper Dangaard Brouer
2015-10-23 12:46 ` [PATCH 2/4] net: bulk free SKBs that were delay free'ed due to IRQ context Jesper Dangaard Brouer
2015-10-23 12:46 ` [PATCH 3/4] ixgbe: bulk free SKBs during TX completion cleanup cycle Jesper Dangaard Brouer
2015-10-23 12:46 ` [PATCH 4/4] net: bulk alloc and reuse of SKBs in NAPI context Jesper Dangaard Brouer
2015-10-27 1:09 ` [PATCH 0/4] net: mitigating kmem_cache slowpath for network stack " David Miller
2016-02-02 21:11 ` [net-next PATCH 00/11] net: mitigating kmem_cache slowpath and BoF discussion patches Jesper Dangaard Brouer
2016-02-02 21:11 ` [net-next PATCH 01/11] net: bulk free infrastructure for NAPI context, use napi_consume_skb Jesper Dangaard Brouer
2016-02-02 21:11 ` [net-next PATCH 02/11] net: bulk free SKBs that were delay free'ed due to IRQ context Jesper Dangaard Brouer
2016-02-02 21:11 ` [net-next PATCH 03/11] ixgbe: bulk free SKBs during TX completion cleanup cycle Jesper Dangaard Brouer
2016-02-02 21:12 ` [net-next PATCH 04/11] net: bulk alloc and reuse of SKBs in NAPI context Jesper Dangaard Brouer
2016-02-03 0:52 ` Alexei Starovoitov
2016-02-03 10:38 ` Jesper Dangaard Brouer
2016-02-02 21:12 ` [net-next PATCH 05/11] mlx5: use napi_*_skb APIs to get bulk alloc and free Jesper Dangaard Brouer
2016-02-02 21:13 ` [net-next PATCH 06/11] RFC: mlx5: RX bulking or bundling of packets before calling network stack Jesper Dangaard Brouer
2016-02-09 11:57 ` Saeed Mahameed
2016-02-10 20:26 ` Jesper Dangaard Brouer
2016-02-16 0:01 ` Saeed Mahameed
2016-02-02 21:13 ` Jesper Dangaard Brouer [this message]
2016-02-02 22:29 ` [net-next PATCH 07/11] net: introduce napi_alloc_skb_hint() for more use-cases kbuild test robot
2016-02-02 21:14 ` [net-next PATCH 08/11] mlx5: hint the NAPI alloc skb API about the expected bulk size Jesper Dangaard Brouer
2016-02-02 21:14 ` [net-next PATCH 09/11] RFC: dummy: bulk free SKBs Jesper Dangaard Brouer
2016-02-02 21:15 ` [net-next PATCH 10/11] RFC: net: API for RX handover of multiple SKBs to stack Jesper Dangaard Brouer
2016-02-02 21:15 ` [net-next PATCH 11/11] RFC: net: RPS bulk enqueue to backlog Jesper Dangaard Brouer
2016-02-07 19:25 ` [net-next PATCH 00/11] net: mitigating kmem_cache slowpath and BoF discussion patches David Miller
2016-02-08 12:14 ` [net-next PATCH V2 0/3] net: mitigating kmem_cache free slowpath Jesper Dangaard Brouer
2016-02-08 12:14 ` [net-next PATCH V2 1/3] net: bulk free infrastructure for NAPI context, use napi_consume_skb Jesper Dangaard Brouer
2016-02-08 12:15 ` [net-next PATCH V2 2/3] net: bulk free SKBs that were delay free'ed due to IRQ context Jesper Dangaard Brouer
2016-02-08 12:15 ` [net-next PATCH V2 3/3] ixgbe: bulk free SKBs during TX completion cleanup cycle Jesper Dangaard Brouer
2016-02-11 16:59 ` [net-next PATCH V2 0/3] net: mitigating kmem_cache free slowpath David Miller
2016-02-13 11:12 ` Tilman Schmidt