From: Mike Travis <travis@sgi.com>
To: Andi Kleen <ak@suse.de>
Cc: Ingo Oeser <ioe-lkml@rameria.de>,
Andrew Morton <akpm@linux-foundation.org>,
mingo@elte.hu, Christoph Lameter <clameter@sgi.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 4/5] x86: Add config variables for SMP_MAX
Date: Fri, 18 Jan 2008 12:48:52 -0800
Message-ID: <479110B4.50500@sgi.com>
In-Reply-To: <200801182136.15213.ak@suse.de>
Andi Kleen wrote:
> First I think you have to get rid of the THREAD_ORDER stuff -- the
> goal of your whole patchkit, after all, is to allow distributions to
> support NR_CPUS==4096 in their standard kernels, and I doubt any
> distribution will ever choose a THREAD_ORDER > 1 in a standard
> kernel because it would be too unreliable on smaller systems.
>
>> Here are the top stack consumers with NR_CPUS = 4k.
>>
>> 16392 isolated_cpu_setup
>> 10328 build_sched_domains
>> 8248 numa_initmem_init
>
> These should run single-threaded early at boot, so you can probably just
> make the cpumask_t variables static __initdata.
>
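A minimal sketch of what that might look like (the body below is my
guess at isolated_cpu_setup(), not actual patched code): since __init
setup runs single-threaded, the big automatic array can simply become
static __initdata, and the memory is reclaimed after boot:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/cpumask.h>

extern cpumask_t cpu_isolated_map;	/* hypothetical global map */

static int __init isolated_cpu_setup(char *str)
{
	/* static __initdata instead of automatic: with NR_CPUS=4096 this
	 * moves ~16KB off the boot stack, and the array is freed along
	 * with the rest of .init.data once boot is done. */
	static int ints[NR_CPUS + 1] __initdata;
	int i;

	str = get_options(str, ARRAY_SIZE(ints), ints);
	cpus_clear(cpu_isolated_map);
	for (i = 1; i <= ints[0]; i++)
		if (ints[i] < NR_CPUS)
			cpu_set(ints[i], cpu_isolated_map);
	return 1;
}
__setup("isolcpus=", isolated_cpu_setup);
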
>> 4664 cpu_attach_domain
>> 4104 show_shared_cpu_map
>
> These above are the real pigs. Fortunately they are all clearly
> slowpath (except perhaps show_shared_cpu_map), so just using heap
> allocations (or bootmem when needed) for them should be fine.
>
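And a minimal sketch of the heap-allocation idea (the caller is made
up): slowpath code kmallocs the cpumask instead of declaring it on the
stack; early-boot paths would use alloc_bootmem() the same way:

#include <linux/slab.h>
#include <linux/cpumask.h>

static int slowpath_example(void)
{
	/* one small heap allocation replaces a 512-byte (NR_CPUS=4096)
	 * on-stack cpumask_t in this slowpath */
	cpumask_t *mask = kmalloc(sizeof(*mask), GFP_KERNEL);

	if (!mask)
		return -ENOMEM;

	*mask = cpu_online_map;	/* stand-in for whatever the old
				 * on-stack variable held */
	/* ... operate on *mask exactly as on the old local ... */

	kfree(mask);
	return 0;
}
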
>> 3656 centrino_target
>> 3608 powernowk8_cpu_init
>> 3192 sched_domain_node_span
>
> x86-64 always has 8k stacks and a separate interrupt stack. As long
> as the calls are not in some stack-intensive layered context (like the
> block IO processing path etc.), <3k shouldn't be too big an issue.
>
> BTW there is a trick to get more stack space on x86-64 temporarily:
> run it in a softirq. Those get 16k stacks by default. Just leave
> enough room left over for the hard irqs that might happen if you
> don't have interrupts disabled.
>
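A minimal sketch of that softirq trick, via a tasklet plus a completion
(all the names here are invented): the handler runs in softirq context,
so it executes on the 16k softirq stack instead of the 8k process stack:

#include <linux/interrupt.h>
#include <linux/completion.h>
#include <linux/string.h>

static DECLARE_COMPLETION(deep_done);

static void deep_stack_func(unsigned long data)
{
	/* ~10k of scratch: too big for the 8k process stack, but it fits
	 * on the 16k softirq stack with room left over for hard irqs */
	char scratch[10 * 1024];

	memset(scratch, 0, sizeof(scratch));
	complete(&deep_done);
}

static DECLARE_TASKLET(deep_tasklet, deep_stack_func, 0);

static void run_on_softirq_stack(void)
{
	tasklet_schedule(&deep_tasklet);
	wait_for_completion(&deep_done);
}
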
>> 3144 acpi_cpufreq_target
>> 2584 __svc_create_thread
>> 2568 cpu_idle_wait
>> 2136 netxen_nic_flash_print
>> 2104 powernowk8_target
>> 2088 _cpu_down
>> 2072 cache_add_dev
>> 2056 get_cur_freq
>> 0 acpi_processor_ffh_cstate_probe
>> 2056 microcode_write
>> 0 acpi_processor_get_throttling
>> 2048 check_supported_cpu
>>
>> And I've yet to figure out how to accumulate stack sizes across
>> call chains.
>
> One way, if you don't care about indirect/asm calls, is to use cflow and do
> some post-processing that adds up the per-function data from checkstack.pl.
>
> The other way is to use mcount, but of course only for situations you
> can reproduce. I did have a 2.4 mcount-based stack instrumentation
> patch some time ago that I could probably dig out if it would be useful.
>
> -Andi
Thanks for the great feedback, Andi. Since cpumask changes are the next
item on my list after NR_CPUS (and friends) are dealt with, perhaps I
could move the THREAD_ORDER stuff to the "Kernel Hacking" area in the
interim?
And yes, I'm interested in any tools to help accumulate information.
BTW, there are now 116 functions with a stack size >= 1k.
Cheers,
Mike