public inbox for linux-kernel@vger.kernel.org
From: Tejun Heo <tj@kernel.org>
To: Jan Beulich <JBeulich@novell.com>
Cc: mingo@elte.hu, andi@firstfloor.org, tglx@linutronix.de,
	linux-kernel@vger.kernel.org, linux-kernel-owner@vger.kernel.org,
	hpa@zytor.com
Subject: Re: [GIT PATCH] x86,percpu: fix pageattr handling with remap allocator
Date: Fri, 15 May 2009 00:55:37 +0900	[thread overview]
Message-ID: <4A0C3EF9.4050907@kernel.org> (raw)
In-Reply-To: <4A0C46B80200007800000ED4@vpn.id2.novell.com>

Hello, Jan.

Jan Beulich wrote:
> In order to reduce the amount of work to do during lookup as well as
> the chance of having a collision at all, wouldn't it be reasonable
> to use as much of an allocated 2/4M page as possible rather than
> returning whatever is left after a single CPU got its per-CPU memory
> chunk from it? I.e. you'd return only those (few) pages that either
> don't fit another CPU's chunk anymore or that are left after running
> through all CPUs.
> 
> Or is there some hidden requirement that each CPU's per-CPU area must
> start on a PMD boundary?

The whole point of doing the remapping is giving each CPU its own PMD
mapping for its percpu area, so, yeah, that's the requirement.  I
don't think the requirement is hidden, though.

How hot is the cpa path?  On my test systems, there were only a few
calls during init and then nothing.  Does it become very hot if, for
example, GEM is used?  Either way, I really don't think the log2
binary search overhead would be noticeable compared to the TLB
shootdown and everything else going on there.
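
A minimal sketch of the kind of lookup I mean - a binary search over
per-chunk start addresses (chunk_for_addr and the array layout are
made up for illustration, not the actual kernel symbols):

```c
#include <assert.h>

/* One 2M PMD mapping per CPU. */
#define PMD_SIZE (2UL << 20)

/*
 * Illustrative only: given a sorted array of chunk start addresses,
 * find which remapped percpu chunk (if any) contains @addr.
 * O(log2 n) comparisons, so cheap even on the cpa path.
 */
static int chunk_for_addr(const unsigned long *starts, int nr,
			  unsigned long addr)
{
	int lo = 0, hi = nr - 1;

	while (lo <= hi) {
		int mid = lo + (hi - lo) / 2;

		if (addr < starts[mid])
			hi = mid - 1;			/* below chunk 'mid' */
		else if (addr >= starts[mid] + PMD_SIZE)
			lo = mid + 1;			/* above chunk 'mid' */
		else
			return mid;			/* inside chunk 'mid' */
	}
	return -1;					/* not a percpu address */
}
```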

> This would additionally address a potential problem on 32-bits -
> currently, for a 32-CPU system you consume half of the vmalloc space
> with PAE (on non-PAE you'd even exhaust it, but I think it's
> unreasonable to expect a system having 32 CPUs to not need PAE).

I recall having about the same conversation before.  Looking up...

-- QUOTE --
  Actually, I've been looking at the numbers and I'm not sure the
  concern is valid.  On x86_32, the practical maximum number of
  processors would be around 16, so it will end up at 32M, which isn't
  nice, and it would probably be a good idea to introduce a parameter
  to select which allocator to use, but still it's far from consuming
  all the VM area.  On x86_64, the vmalloc area is obscenely large at
  2^45 bytes, ie 32 terabytes.  Even with 4096 processors, a single
  chunk is a measly 0.02%.

  If it's a problem for other archs or extreme x86_32 configurations,
  we can add some safety measures but in general I don't think it is a
  problem.
-- END OF QUOTE --
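
Spelling the arithmetic out (standalone sketch; the 128M x86_32
vmalloc size below is the usual default with PAE, assumed here rather
than taken from any particular config):

```c
/*
 * What fraction of the vmalloc area do the per-CPU 2M remap units eat?
 * percpu_vmalloc_pct(16, 128M)   -> 25.0   (x86_32, 16 CPUs)
 * percpu_vmalloc_pct(4096, 2^45) -> ~0.024 (x86_64, 4096 CPUs)
 */
static double percpu_vmalloc_pct(unsigned long long nr_cpus,
				 unsigned long long vmalloc_bytes)
{
	unsigned long long unit = 2ULL << 20;	/* one 2M unit per CPU */

	return (double)(nr_cpus * unit) * 100.0 / vmalloc_bytes;
}
```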

So, yeah, if there are 32-bit 32-way NUMA machines out there, it would
be wise to skip the remap allocator on such machines.  Maybe we can
implement a heuristic - something like "if vmalloc area consumption
goes over 25%, don't use remap".
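
Something like the following illustrative check (pure sketch of the
heuristic, not actual code; names are made up):

```c
#include <stdbool.h>

#define PMD_SIZE (2UL << 20)		/* one 2M remap unit per CPU */

/*
 * Heuristic sketch: use the remap allocator only if the per-CPU PMD
 * mappings would consume at most 25% of the vmalloc area.
 */
static bool remap_allocator_ok(unsigned long nr_cpus,
			       unsigned long vmalloc_bytes)
{
	unsigned long need = nr_cpus * PMD_SIZE;

	return need <= vmalloc_bytes / 4;
}
```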

Thanks.

-- 
tejun


Thread overview: 38+ messages
2009-05-14 12:49 [GIT PATCH] x86,percpu: fix pageattr handling with remap allocator Tejun Heo
2009-05-14 12:49 ` [PATCH 1/4] x86: prepare setup_pcpu_remap() for pageattr fix Tejun Heo
2009-05-14 12:49 ` [PATCH 2/4] x86: simplify cpa_process_alias() Tejun Heo
2009-05-14 14:16   ` Jan Beulich
2009-05-14 15:37     ` Tejun Heo
2009-05-14 16:20       ` [PATCH UPDATED 2/4] x86: reorganize cpa_process_alias() Tejun Heo
2009-05-14 12:49 ` [PATCH 3/4] x86: fix pageattr handling for remap percpu allocator Tejun Heo
2009-05-14 16:21   ` [PATCH UPDATED " Tejun Heo
2009-05-14 12:49 ` [PATCH 4/4] x86: implement percpu_alloc kernel parameter Tejun Heo
2009-05-14 14:28 ` [GIT PATCH] x86,percpu: fix pageattr handling with remap allocator Jan Beulich
2009-05-14 15:55   ` Tejun Heo [this message]
2009-05-15  7:47     ` Jan Beulich
2009-05-15  8:11       ` Tejun Heo
2009-05-15  8:22         ` Jan Beulich
2009-05-15  8:27           ` Tejun Heo
2009-05-14 16:22 ` Tejun Heo
2009-05-15  4:00   ` Tejun Heo
2009-05-15  4:36     ` David Miller
2009-05-15  4:48       ` Tejun Heo
2009-05-16  1:17 ` Suresh Siddha
2009-05-16 15:16   ` Tejun Heo
2009-05-16 19:09     ` Suresh Siddha
2009-05-17  1:23       ` Tejun Heo
2009-05-18 19:20         ` Suresh Siddha
2009-05-18 19:41           ` H. Peter Anvin
2009-05-18 21:07             ` Suresh Siddha
2009-05-19  1:28               ` Tejun Heo
2009-05-20 23:01                 ` Suresh Siddha
2009-05-21  0:08                   ` Tejun Heo
2009-05-21  0:36                     ` Suresh Siddha
2009-05-21  1:46                       ` Tejun Heo
2009-05-21  1:48                         ` Tejun Heo
2009-05-21 19:10                         ` Suresh Siddha
2009-05-21 23:18                           ` Tejun Heo
2009-05-22  0:55                             ` Suresh Siddha
2009-05-19  9:44 ` Tejun Heo
2009-05-20  7:54   ` Ingo Molnar
2009-05-20  7:57     ` Tejun Heo
