public inbox for linux-kernel@vger.kernel.org
From: "H. Peter Anvin" <hpa@zytor.com>
To: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>,
	Ingo Molnar <mingo@elte.hu>, Thomas Gleixner <tglx@linutronix.de>,
	x86@kernel.org,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Jeremy Fitzhardinge <jeremy@goop.org>,
	cpw@sgi.com
Subject: Re: #tj-percpu has been rebased
Date: Fri, 13 Feb 2009 17:53:23 -0800	[thread overview]
Message-ID: <49962413.9020101@zytor.com> (raw)
In-Reply-To: <4996141A.1050506@kernel.org>

Tejun Heo wrote:
> 
> Percpu areas are allocated in chunks in the vmalloc area.  Each chunk
> consists of num_possible_cpus() units, and the first chunk is used for
> static percpu variables in the kernel image (special boot time
> alloc/init handling is necessary as these areas need to be brought up
> before allocation services are running).  A unit grows as necessary,
> and all units grow or shrink in unison.  When a chunk is filled up,
> another chunk is allocated.  I.e., in the vmalloc area:
> 
>   c0                           c1                         c2           
>    -------------------          -------------------        ------------
>   | u0 | u1 | u2 | u3 |        | u0 | u1 | u2 | u3 |      | u0 | u1 | u
>    -------------------  ......  -------------------  ....  ------------
> 
> Allocation is done in offset-size areas of single unit space.  I.e.,
> when UNIT_SIZE is 128k, an area of 512 bytes at 134k occupies 512
> bytes at 6k of c1:u0, c1:u1, c1:u2 and c1:u3.  Percpu access can be
> done by configuring percpu base registers UNIT_SIZE apart.
> 

Okay, let's think about this a bit.

At least for x86, there are two cases:

- 32 bits.  The vmalloc area is *extremely* constrained, and has the 
same class of fragmentation issues as main memory.  In fact, it might 
have *more* just by virtue of being larger.

- 64 bits.  At this point, we have with current memory sizes(*) an 
astronomically large virtual space.  Here we have no real problem 
allocating linearly in virtual space, either by giving each CPU some 
very large hunk of virtual address space (which means each percpu area 
is contiguous in virtual space) or by doing large contiguous allocations 
out of another range.

There doesn't seem to me, at first glance, to be any advantage to 
interleaving the CPUs.  Quite the contrary: it seems to utterly 
preclude ever mapping the percpu areas with PMDs and getting a win, 
since (a) you'd be allocating real memory for CPUs which aren't 
actually there, and (b) you'd have the wrong NUMA associativity.

	-hpa


(*) In about 20 years we better get the remaining virtual address bits...


Thread overview: 20+ messages
2009-01-30 17:05 #tj-percpu has been rebased Tejun Heo
2009-01-31  5:46 ` Tejun Heo
2009-01-31 13:28   ` Ingo Molnar
2009-02-02  9:04   ` Rusty Russell
2009-02-04  3:18     ` Tejun Heo
2009-02-12  3:37       ` Tejun Heo
2009-02-12  3:44         ` Tejun Heo
2009-02-13 20:58           ` Rusty Russell
2009-02-13 21:17             ` Jeremy Fitzhardinge
2009-02-13 22:59             ` H. Peter Anvin
2009-02-14  0:45             ` Tejun Heo
2009-02-14  1:53               ` H. Peter Anvin [this message]
2009-02-14  2:10                 ` Tejun Heo
2009-02-16  7:23               ` Rusty Russell
2009-02-16 17:28                 ` H. Peter Anvin
2009-02-16 23:22                   ` Rusty Russell
2009-02-16 23:28                     ` H. Peter Anvin
2009-02-18  4:25                       ` Rusty Russell
2009-02-18  6:40                         ` H. Peter Anvin
2009-02-18  7:11                           ` Rusty Russell
