public inbox for linux-kernel@vger.kernel.org
From: Ravikiran G Thirumalai <kiran@scalex86.org>
To: Andi Kleen <ak@suse.de>
Cc: linux-kernel@vger.kernel.org, discuss@x86-64.org,
	Andrew Morton <akpm@osdl.org>,
	dada1@cosmobay.com,
	"Shai Fultheim (Shai@scalex86.org)" <shai@scalex86.org>
Subject: Re: [discuss] [patch 3/3] x86_64: Node local pda take 2 -- node local pda allocation
Date: Thu, 15 Dec 2005 10:47:04 -0800	[thread overview]
Message-ID: <20051215184704.GA3882@localhost.localdomain> (raw)
In-Reply-To: <20051215094232.GX23384@wotan.suse.de>

On Thu, Dec 15, 2005 at 10:42:32AM +0100, Andi Kleen wrote:
> On Wed, Dec 14, 2005 at 06:37:48PM -0800, Ravikiran G Thirumalai wrote:
> > Patch uses a static PDA array early at boot and reallocates processor PDA
> > with node local memory when kmalloc is ready, just before pda_init.
> > The boot_cpu_pda is needed since the cpu_pda is used even before pda_init for
> > that cpu is called.   
> > (pda_init is called when APs are brought up at rest_init().  But
> > setup_per_cpu_areas is called early in start_kernel and 
> > sched_init uses the per-cpu offset table early)
> > 
> 
> That is why I suggested to allocate it in smpboot.c in advance, before
> starting the AP.  Can you please make that change?

Maybe I am missing something, or I am not following what you are suggesting.
As I see it:

asmlinkage void __init start_kernel(void)
{
	...
	setup_arch(&command_line);  --> (1)
	setup_per_cpu_areas();	    --> (2)
	...
	sched_init();		    --> (3)
	...
	vfs_caches_init_early();
	mem_init();
	kmem_cache_init();	    --> (4)
	...
	rest_init();		    --> (5)
}

I could allocate memory for the PDA somewhere in setup_arch after cpu_to_node
is initialized, but I would have to use alloc_bootmem_node and allocate for
all NR_CPUS, which could be wasteful.  I cannot use kmalloc_node until after
(4) above, and sched_init refers to the per-cpu offset table before that.

So are you suggesting I use alloc_bootmem_node and allocate a PDA for each of
NR_CPUS?



Thread overview: 13+ messages
2005-12-15  2:33 [patch 1/3] x86_64: Node local pda take 2 -- early cpu_to_node Ravikiran G Thirumalai
2005-12-15  2:35 ` [patch 2/3] x86_64: Node local pda take 2 -- cpu_pda_prep Ravikiran G Thirumalai
2005-12-15  2:37 ` [patch 3/3] x86_64: Node local pda take 2 -- node local pda allocation Ravikiran G Thirumalai
2005-12-15  8:22   ` Eric Dumazet
2005-12-15  9:36     ` Andi Kleen
2005-12-15  9:42   ` [discuss] " Andi Kleen
2005-12-15 18:47     ` Ravikiran G Thirumalai [this message]
2005-12-16  0:19       ` Andi Kleen
2005-12-16  3:55         ` Ravikiran G Thirumalai
2005-12-15  9:44 ` [discuss] [patch 1/3] x86_64: Node local pda take 2 -- early cpu_to_node Andi Kleen
2005-12-15 19:01   ` Ravikiran G Thirumalai
2005-12-16  0:20     ` Andi Kleen
2005-12-16  8:11       ` Ravikiran G Thirumalai
