From: Alexander Duyck <alexander.duyck@gmail.com>
To: Alexei Starovoitov <ast@plumgrid.com>,
Alexander Duyck <alexander.h.duyck@redhat.com>
Cc: Network Development <netdev@vger.kernel.org>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <eric.dumazet@gmail.com>,
Jesper Dangaard Brouer <brouer@redhat.com>
Subject: Re: [net-next PATCH 1/6] net: Split netdev_alloc_frag into __alloc_page_frag and add __napi_alloc_frag
Date: Wed, 10 Dec 2014 07:21:49 -0800
Message-ID: <5488650D.8060708@gmail.com>
In-Reply-To: <CAMEtUuwNjq4PnYnou7a+PhbXdYBKwmQajHADMyqyEkhT7WeEnQ@mail.gmail.com>
On 12/09/2014 08:16 PM, Alexei Starovoitov wrote:
> On Tue, Dec 9, 2014 at 7:40 PM, Alexander Duyck
> <alexander.h.duyck@redhat.com> wrote:
>> This patch splits the netdev_alloc_frag function up so that it can be used
>> on one of two page frag pools instead of being fixed on the
>> netdev_alloc_cache. By doing this we can add a NAPI specific function
>> __napi_alloc_frag that accesses a pool that is only used from softirq
>> context. The advantage to this is that we do not need to call
>> local_irq_save/restore which can be a significant savings.
>>
>> I also took the opportunity to refactor the core bits that were placed in
>> __alloc_page_frag. First I updated the allocation to do either a 32K
>> allocation or an order 0 page. This is based on the changes in commit
>> d9b2938aa where it was found that latencies could be reduced in case of
> thanks for explaining that piece of it.
>
>> +	struct page *page = NULL;
>> +	gfp_t gfp = gfp_mask;
>> +
>> +	if (order) {
>> +		gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
>> +		page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, order);
>> +		nc->frag.size = PAGE_SIZE << (page ? order : 0);
>> +	}
>>
>> -	local_irq_save(flags);
>> -	nc = this_cpu_ptr(&netdev_alloc_cache);
>> -	if (unlikely(!nc->frag.page)) {
>> +	if (unlikely(!page))
>> +		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
> I'm guessing you're not combining this 'if' with the one above in order to
> keep gfp untouched, so there is a 'warn' when it actually fails the 2nd time.
> Tricky :)
> Anyway looks good to me and I think I understand it enough to say:
> Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Thanks. Yes, the compiler is smart enough to combine the frag.size update and
the second check into one when order is non-zero. The other trick here is that
if order is 0 the whole block disappears and I don't have to touch frag.size
or gfp at all; the code gets much simpler because the 'page = NULL'
initialization falls through and cancels out the 'if' as a compile-time check.
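In other words, the allocation path boils down to roughly the following
(a simplified, hypothetical sketch of the pattern above, not the exact kernel
code; the standalone helper name and the size out-parameter are made up for
illustration, and the usual kernel headers such as linux/gfp.h and linux/mm.h
are assumed):

static struct page *frag_alloc_pages(gfp_t gfp_mask, unsigned int order,
				     unsigned int *size)
{
	struct page *page = NULL;
	gfp_t gfp = gfp_mask;	/* keep the caller's flags for the fallback */

	if (order) {
		/* First try a high-order page, but quietly: no warning
		 * and no retries under memory pressure. */
		gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
		page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, order);
		*size = PAGE_SIZE << (page ? order : 0);
	}

	/* Fall back to an order-0 page with the untouched gfp, so a
	 * failure here is still allowed to warn.  When order is a
	 * compile-time 0 the block above vanishes and this is the
	 * only attempt. */
	if (unlikely(!page))
		page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);

	return page;
}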
- Alex
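For reference, the split described in the quoted commit message boils down to
roughly this shape (a simplified sketch based on that description, not code
copied from the series; the per-cpu cache names follow the patch):

static void *__netdev_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
{
	struct netdev_alloc_cache *nc;
	unsigned long flags;
	void *data;

	/* The netdev cache can also be touched from hard-irq context,
	 * so interrupts must be disabled around the per-cpu access. */
	local_irq_save(flags);
	nc = this_cpu_ptr(&netdev_alloc_cache);
	data = __alloc_page_frag(nc, fragsz, gfp_mask);
	local_irq_restore(flags);
	return data;
}

void *__napi_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
{
	/* The NAPI cache is only ever used from softirq context, so the
	 * local_irq_save/restore pair can be skipped entirely. */
	return __alloc_page_frag(this_cpu_ptr(&napi_alloc_cache),
				 fragsz, gfp_mask);
}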
Thread overview: 30+ messages
2014-12-10 3:40 [net-next PATCH 0/6] net: Alloc NAPI page frags from their own pool Alexander Duyck
2014-12-10 3:40 ` [net-next PATCH 1/6] net: Split netdev_alloc_frag into __alloc_page_frag and add __napi_alloc_frag Alexander Duyck
2014-12-10 4:16 ` Alexei Starovoitov
2014-12-10 15:21 ` Alexander Duyck [this message]
2014-12-10 16:02 ` Eric Dumazet
2014-12-10 17:06 ` Alexander Duyck
2014-12-10 17:13 ` Eric Dumazet
2014-12-10 17:16 ` Alexander Duyck
2014-12-10 3:40 ` [net-next PATCH 2/6] net: Pull out core bits of __netdev_alloc_skb and add __napi_alloc_skb Alexander Duyck
2014-12-10 3:40 ` [net-next PATCH 3/6] ethernet/intel: Use napi_alloc_skb Alexander Duyck
2014-12-11 21:43 ` Jeff Kirsher
2014-12-10 3:41 ` [net-next PATCH 4/6] cxgb: Use napi_alloc_skb instead of netdev_alloc_skb_ip_align Alexander Duyck
2014-12-10 3:41 ` [net-next PATCH 5/6] ethernet/realtek: use " Alexander Duyck
2014-12-10 3:41 ` [net-next PATCH 6/6] ethernet/broadcom: Use " Alexander Duyck
2014-12-10 9:50 ` David Laight
2014-12-10 9:52 ` David Laight
2014-12-10 15:16 ` Alexander Duyck
2014-12-10 14:15 ` [RFC PATCH 0/3] Faster than SLAB caching of SKBs with qmempool (backed by alf_queue) Jesper Dangaard Brouer
2014-12-10 14:15 ` [RFC PATCH 1/3] lib: adding an Array-based Lock-Free (ALF) queue Jesper Dangaard Brouer
2014-12-11 19:15 ` David Miller
2014-12-10 14:15 ` [RFC PATCH 2/3] mm: qmempool - quick queue based memory pool Jesper Dangaard Brouer
2014-12-10 14:15 ` [RFC PATCH 3/3] net: use qmempool in-front of sk_buff kmem_cache Jesper Dangaard Brouer
2014-12-10 14:22 ` [RFC PATCH 0/3] Faster than SLAB caching of SKBs with qmempool (backed by alf_queue) David Laight
2014-12-10 14:40 ` Jesper Dangaard Brouer
2014-12-10 15:17 ` Christoph Lameter
2014-12-10 15:33 ` Jesper Dangaard Brouer
[not found] ` <20141210163321.0e4e4fd2-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2014-12-10 16:17 ` Christoph Lameter
2014-12-10 19:51 ` Christoph Lameter
2014-12-11 10:18 ` Jesper Dangaard Brouer
2014-12-10 18:32 ` [net-next PATCH 0/6] net: Alloc NAPI page frags from their own pool David Miller