linux-mm.kvack.org archive mirror
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Christoph Lameter <cl@linux.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	linux-mm@kvack.org, Tejun Heo <htejun@gmail.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/3] mm/slab: use percpu allocator for cpu cache
Date: Tue, 26 Aug 2014 11:19:04 +0900	[thread overview]
Message-ID: <20140826021904.GA1035@js1304-P5Q-DELUXE> (raw)
In-Reply-To: <alpine.DEB.2.11.1408250809420.17236@gentwo.org>

On Mon, Aug 25, 2014 at 08:13:58AM -0500, Christoph Lameter wrote:
> On Mon, 25 Aug 2014, Joonsoo Kim wrote:
> 
> > On Thu, Aug 21, 2014 at 09:21:30AM -0500, Christoph Lameter wrote:
> > > On Thu, 21 Aug 2014, Joonsoo Kim wrote:
> > >
> > > > So, this patch tries to use the percpu allocator in SLAB. This
> > > > simplifies the initialization step in SLAB so that we can maintain
> > > > the SLAB code more easily.
> > >
> > > I thought about this a couple of times but the amount of memory used for
> > > the per cpu arrays can be huge. In contrast to slub which needs just a
> > > few pointers, slab requires one pointer per object that can be in the
> > > local cache. CC Tj.
> > >
> > > Let's say we have 300 caches and we allow 1000 objects to be cached per
> > > cpu. That is 300k pointers per cpu: 1.2M on 32 bit, 2.4M per cpu on
> > > 64 bit.
> >
> > The amount of memory we need to keep pointers to objects is the same in any case.
> 
> What case? SLUB uses a linked list and therefore does not have these
> storage requirements.

I misunderstood you: I thought you were referring only to memory usage.
By *any case* I meant the memory usage of the previous SLAB versus SLAB
with this percpu allocator change. Sorry for the confusion.
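
For concreteness, the per-cpu cost estimate quoted above can be checked with
a quick back-of-the-envelope calculation. The 300-cache and
1000-objects-per-cpu figures are the thread's illustrative numbers, not
measured values:

```python
# Back-of-the-envelope check of the per-cpu array cost quoted above.
# 300 caches and 1000 cached objects per cpu are illustrative numbers
# from this thread, not measurements.
caches = 300
objects_per_cpu = 1000              # entries cached in each cpu's array
pointers_per_cpu = caches * objects_per_cpu

for ptr_size, arch in ((4, "32-bit"), (8, "64-bit")):
    total_bytes = pointers_per_cpu * ptr_size
    print(f"{arch}: {pointers_per_cpu} pointers * {ptr_size} B "
          f"= {total_bytes / 1e6:.1f} MB per cpu")
# -> 32-bit: 300000 pointers * 4 B = 1.2 MB per cpu
# -> 64-bit: 300000 pointers * 8 B = 2.4 MB per cpu
```

This matches the 1.2M / 2.4M per-cpu figures above, unlike SLUB, which keeps
only a few pointers per cpu and chains free objects through the objects
themselves.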

> 
> > I know that the percpu allocator occupies vmalloc space, so maybe we
> > could exhaust vmalloc space on 32 bit. 64 bit has no problem with it.
> > How many cores does the largest 32 bit system have? Is it possible
> > to exhaust vmalloc space if we use the percpu allocator?
> 
> There were NUMA systems on x86 a while back (not sure if they still
> exist) with 128 or so processors.
> 
> Some people boot 32 bit kernels on contemporary servers. The Intel ones
> max out at 18 cores (36 hyperthreaded). I think they support up to 8
> sockets. So 8 * 36?
> 
> 
> It's different on other platforms with much higher numbers. Power can
> easily go up to hundreds of hardware threads, and SGI Altixes 7 years ago
> were at 8000 or so.

Okay... These large systems with a 32 bit kernel could break with this
change. I will investigate further. Possibly, I will drop this patch. :)
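
A rough feasibility sketch of the concern, assuming x86-32's default 128 MB
vmalloc area and reusing the thread's illustrative figures (300 caches ×
1000 pointers × 4 bytes per cpu, and Christoph's 8-socket × 36-thread cpu
count), none of which are measured values:

```python
# Rough feasibility check: could per-cpu cache arrays exhaust the
# 32-bit vmalloc area?  128 MB is the common x86-32 default VMALLOC
# size; the per-cpu footprint and cpu count reuse the thread's
# illustrative figures, not measurements.
vmalloc_bytes = 128 * 1024 * 1024
per_cpu_bytes = 300 * 1000 * 4      # 300 caches * 1000 pointers * 4 B
cpus = 8 * 36                       # 8 sockets * 36 hyperthreads

total_bytes = per_cpu_bytes * cpus
print(f"per-cpu arrays: {total_bytes / 1e6:.0f} MB total "
      f"vs {vmalloc_bytes / 1e6:.0f} MB vmalloc area")
```

Under these assumptions the per-cpu arrays alone (~346 MB) would far exceed
the default 32-bit vmalloc area, which supports the concern above.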

Thanks.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

Thread overview: 20+ messages
2014-08-21  8:11 [PATCH 1/3] mm/slab: use percpu allocator for cpu cache Joonsoo Kim
2014-08-21  8:11 ` [PATCH 2/3] mm/slab_common: commonize slab merge logic Joonsoo Kim
2014-08-21 14:22   ` Christoph Lameter
2014-08-25  8:26     ` Joonsoo Kim
2014-08-25 15:27   ` Christoph Lameter
2014-08-26  2:23     ` Joonsoo Kim
2014-08-26 21:23       ` Christoph Lameter
2014-08-21  8:11 ` [PATCH 3/3] mm/slab: support slab merge Joonsoo Kim
2014-08-25 15:29   ` Christoph Lameter
2014-08-26  2:26     ` Joonsoo Kim
2014-08-21 14:21 ` [PATCH 1/3] mm/slab: use percpu allocator for cpu cache Christoph Lameter
2014-08-25  8:26   ` Joonsoo Kim
2014-08-25 13:13     ` Christoph Lameter
2014-08-26  2:19       ` Joonsoo Kim [this message]
2014-08-26 21:22         ` Christoph Lameter
2014-08-27 23:37 ` Christoph Lameter
2014-09-01  0:19   ` Joonsoo Kim
2014-09-28  6:24 ` [REGRESSION] " Jeremiah Mahler
2014-09-28 16:38   ` Christoph Lameter
2014-09-29  7:44   ` Joonsoo Kim
