From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Alexander Duyck <alexander.duyck@gmail.com>
Cc: netdev@vger.kernel.org, akpm@linux-foundation.org,
linux-mm@kvack.org, aravinda@linux.vnet.ibm.com,
Christoph Lameter <cl@linux.com>,
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
iamjoonsoo.kim@lge.com, brouer@redhat.com
Subject: Re: [RFC PATCH 0/3] Network stack, first user of SLAB/kmem_cache bulk free API.
Date: Mon, 7 Sep 2015 10:16:10 +0200
Message-ID: <20150907101610.44504597@redhat.com>
In-Reply-To: <55E9DE51.7090109@gmail.com>
On Fri, 4 Sep 2015 11:09:21 -0700
Alexander Duyck <alexander.duyck@gmail.com> wrote:
> This is an interesting start. However I feel like it might work better
> if you were to create a per-cpu pool for skbs that could be freed and
> allocated in NAPI context. So for example we already have
> napi_alloc_skb, why not just add a napi_free_skb
I do like the idea...
> and then make the array
> of objects to be freed part of a pool that could be used for either
> allocation or freeing? If the pool runs empty you just allocate
> something like 8 or 16 new skb heads, and if you fill it you just free
> half of the list?
But I worry that this algorithm will "randomize" the (skb) objects.
And the SLUB bulk optimization only works if we have many objects
belonging to the same page.
It would likely be fastest to implement a simple stack (for these
per-cpu pools), but I again worry that it would randomize the
object-pages. A simple queue might be better, but slightly slower.
Guess I could just reuse part of qmempool / alf_queue as a quick test.
Having a per-cpu pool in networking would solve the problem that the
slub per-cpu pool isn't large enough for our use-case. On the other
hand, maybe we should fix slub to dynamically adjust the size of its
per-cpu resources?
Some prerequisite knowledge (for people not familiar with slub's
internals): the slub alloc path will pick up a page and empty all
objects from that page before proceeding to the next page. Thus, slub
bulk alloc will return many objects belonging to the same page. I'm
trying to keep these objects grouped together until they can be freed
in bulk.
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Sr. Network Kernel Developer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer