From: Pekka Enberg <penberg@cs.helsinki.fi>
To: cl@linux-foundation.org
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
	Tejun Heo <tj@kernel.org>,
	mingo@elte.hu, rusty@rustcorp.com.au, davem@davemloft.net
Subject: Re: [this_cpu_xx V2 13/19] Use this_cpu operations in slub
Date: Thu, 18 Jun 2009 09:25:21 +0300	[thread overview]
Message-ID: <84144f020906172325m5de946gd8aa90328da26906@mail.gmail.com> (raw)
In-Reply-To: <84144f020906172320k39ea5132h823449abc3124b30@mail.gmail.com>

On Thu, Jun 18, 2009 at 9:20 AM, Pekka Enberg <penberg@cs.helsinki.fi> wrote:
> Hi Christoph,
>
> On Wed, Jun 17, 2009 at 11:33 PM, <cl@linux-foundation.org> wrote:
>> @@ -1604,9 +1595,6 @@ static void *__slab_alloc(struct kmem_ca
>>        void **object;
>>        struct page *new;
>>
>> -       /* We handle __GFP_ZERO in the caller */
>> -       gfpflags &= ~__GFP_ZERO;
>> -
>
> This should probably not be here.
>
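The reason that mask was there in the first place is that __GFP_ZERO is
handled one level up: slab_alloc() zeroes just the object after a
successful allocation, so the flag should not be passed down to the page
allocator from __slab_alloc(). Roughly (paraphrasing slab_alloc() from
memory, so the exact field names may be off):

	if (unlikely((gfpflags & __GFP_ZERO) && object))
		memset(object, 0, s->objsize);
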
>> @@ -2724,7 +2607,19 @@ static noinline struct kmem_cache *dma_k
>>        realsize = kmalloc_caches[index].objsize;
>>        text = kasprintf(flags & ~SLUB_DMA, "kmalloc_dma-%d",
>>                         (unsigned int)realsize);
>> -       s = kmalloc(kmem_size, flags & ~SLUB_DMA);
>> +
>> +       if (flags & __GFP_WAIT)
>> +               s = kmalloc(kmem_size, flags & ~SLUB_DMA);
>> +       else {
>> +               int i;
>> +
>> +               s = NULL;
>> +               for (i = 0; i < SLUB_PAGE_SHIFT; i++)
>> +                       if (kmalloc_caches[i].size) {
>> +                               s = kmalloc_caches + i;
>> +                               break;
>> +                       }
>> +       }
>
> [snip]
>
>> A particular problem for the dynamic dma kmalloc slab creation is that the
>> new percpu allocator cannot be called from an atomic context. The solution
>> adopted here for the atomic context is to track spare elements in the per
>> cpu kmem_cache array for non dma kmallocs. Use them if necessary for dma
>> cache creation from an atomic context. Otherwise we just fail the allocation.
>
> OK, I am confused. Isn't the whole point of separating the DMA caches
> that we don't mix regular and DMA allocations in the same slab and
> don't use up precious DMA memory on some archs?
>
> So I don't think the above hunk is a good solution to this at all. We
> certainly can remove the lazy DMA slab creation (why did we add it in
> the first place?) but how hard is it to fix the per-cpu allocator to
> work in atomic contexts?
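What I have in mind is a gfp-aware variant of the percpu allocator,
something along these lines (a purely hypothetical interface, just to
illustrate; nothing like this exists in the percpu allocator today):

	/*
	 * Hypothetical: like __alloc_percpu(), but honor the gfp flags,
	 * i.e. never sleep unless __GFP_WAIT is set.
	 */
	void *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp);

	s->cpu_slab = __alloc_percpu_gfp(sizeof(struct kmem_cache_cpu),
					 __alignof__(struct kmem_cache_cpu),
					 flags & ~SLUB_DMA);

Then dma_kmalloc_cache() could simply pass its gfp mask down instead of
special-casing atomic callers with the spare-slot fallback above.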

Oh, and how does this work with the early boot slab code? We're
creating all the kmalloc caches with interrupts disabled and doing
per-cpu allocations, no?
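To illustrate, the boot path I am thinking of looks roughly like this (a
simplified sketch, not the actual code; open_kmalloc_cache() is a made-up
placeholder for the real cache setup):

	void __init kmem_cache_init(void)
	{
		int i;

		/* Interrupts are still disabled at this point. */
		for (i = 0; i < SLUB_PAGE_SHIFT; i++) {
			/*
			 * With this patch, setting up each kmalloc cache
			 * needs a dynamic per-cpu allocation, which may
			 * sleep. That is not allowed here.
			 */
			open_kmalloc_cache(&kmalloc_caches[i]);
		}
	}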

                                                        Pekka

Thread overview: 48+ messages
2009-06-17 20:33 [this_cpu_xx V2 00/19] Introduce this_cpu_xx operations cl
2009-06-17 20:33 ` [this_cpu_xx V2 01/19] Fix handling of pagesets for downed cpus cl
2009-06-17 20:33 ` [this_cpu_xx V2 02/19] Introduce this_cpu_ptr() and generic this_cpu_* operations cl
2009-06-18  1:50   ` Tejun Heo
2009-06-18  2:29     ` Tejun Heo
2009-06-18 13:54       ` Christoph Lameter
2009-06-18 14:49         ` Tejun Heo
2009-06-17 20:33 ` [this_cpu_xx V2 03/19] Use this_cpu operations for SNMP statistics cl
2009-06-18  1:55   ` Tejun Heo
2009-06-17 20:33 ` [this_cpu_xx V2 04/19] Use this_cpu operations for NFS statistics cl
2009-06-18  2:03   ` Tejun Heo
2009-06-17 20:33 ` [this_cpu_xx V2 05/19] use this_cpu ops for network statistics cl
2009-06-17 20:33 ` [this_cpu_xx V2 06/19] this_cpu_ptr: Straight transformations cl
2009-06-17 20:33 ` [this_cpu_xx V2 07/19] this_cpu_ptr: Elimninate get/put_cpu cl
2009-06-17 20:33 ` [this_cpu_xx V2 08/19] this_cpu_ptr: xfs_icsb_modify_counters does not need "cpu" variable cl
2009-06-17 20:33 ` [this_cpu_xx V2 09/19] Use this_cpu_ptr in crypto subsystem cl
2009-06-17 20:33 ` [this_cpu_xx V2 10/19] this_cpu: X86 optimized this_cpu operations cl
2009-06-18  3:00   ` Tejun Heo
2009-06-18 14:07     ` Christoph Lameter
2009-06-18 14:48       ` Tejun Heo
2009-06-18 15:39         ` Christoph Lameter
2009-06-18 16:06           ` Tejun Heo
2009-06-18 16:15             ` Tejun Heo
2009-06-18 17:05             ` Christoph Lameter
2009-06-19  5:41             ` Rusty Russell
2009-06-23 18:00               ` Christoph Lameter
2009-06-17 20:33 ` [this_cpu_xx V2 11/19] Use this_cpu ops for VM statistics cl
2009-06-18  3:05   ` Tejun Heo
2009-06-17 20:33 ` [this_cpu_xx V2 12/19] RCU: Use this_cpu operations cl
2009-06-17 20:33 ` [this_cpu_xx V2 13/19] Use this_cpu operations in slub cl
2009-06-18  6:20   ` Pekka Enberg
2009-06-18  6:25     ` Pekka Enberg [this message]
2009-06-18 13:59       ` Christoph Lameter
2009-06-25  7:12         ` Pekka Enberg
2009-06-18  6:49     ` Tejun Heo
2009-06-18  7:35       ` Pekka Enberg
2009-06-18 13:59     ` Christoph Lameter
2009-06-25  7:11       ` Pekka Enberg
2009-06-17 20:33 ` [this_cpu_xx V2 14/19] this_cpu: Remove slub kmem_cache fields cl
2009-06-17 20:33 ` [this_cpu_xx V2 15/19] Make slub statistics use this_cpu_inc cl
2009-06-17 20:33 ` [this_cpu_xx V2 16/19] this_cpu: slub aggressive use of this_cpu operations in the hotpaths cl
2009-06-18  6:33   ` Pekka Enberg
2009-06-18 11:59     ` Mathieu Desnoyers
2009-06-18 14:00     ` Christoph Lameter
2009-06-17 20:33 ` [this_cpu_xx V2 17/19] Move early initialization of pagesets out of zone_wait_table_init() cl
2009-06-18  3:13   ` Tejun Heo
2009-06-17 20:33 ` [this_cpu_xx V2 18/19] this_cpu_ops: page allocator conversion cl
2009-06-17 20:33 ` [this_cpu_xx V2 19/19] this_cpu ops: Remove pageset_notifier cl
