public inbox for linux-kernel@vger.kernel.org
From: Ingo Molnar <mingo@elte.hu>
To: Rusty Russell <rusty@rustcorp.com.au>
Cc: Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Travis <travis@sgi.com>
Subject: Re: [git pull] cpus4096 fixes
Date: Thu, 31 Jul 2008 12:30:02 +0200	[thread overview]
Message-ID: <20080731103002.GE488@elte.hu> (raw)
In-Reply-To: <200807282321.53892.rusty@rustcorp.com.au>


* Rusty Russell <rusty@rustcorp.com.au> wrote:

> On Monday 28 July 2008 18:16:39 Ingo Molnar wrote:
> > * Rusty Russell <rusty@rustcorp.com.au> wrote:
> > > Mike: I now think the right long-term answer is Linus' dense cpumap
> > > idea + a convenience allocator for cpumasks.  We sweep the kernel for
> > > all on-stack vars and replace them with one or the other.  Thoughts?
> >
> > The dense cpumap for constant cpumasks is OK as it's clever, compact and
> > static.
> >
> > All-dynamic allocator for on-stack cpumasks ... is a less obvious
> > choice.
> 
> Sorry, I was unclear.  "long-term" == "more than 4096 CPUs", since I 
> thought that was Mike's aim.  If we only want to hack up 4k CPUS and 
> stop, then I understand the current approach.
> 
> If we want huge cpu numbers, I think cpumask_alloc/free gives the 
> clearest code.  So our approach is backwards: let's do that *then* put 
> ugly hacks in if it's really too slow.

My only worry with that principle is that "does it really hurt" is 
seldom provable on a standalone basis.

Creeping bloat and creeping slowdowns are the hardest to catch. A cycle 
here, a byte there, and it mounts up quickly. Coupled with faster but 
less deterministic CPUs, it's pretty hard to prove a slowdown even with 
very careful profiling. We only catch the truly egregious cases that 
manage to shine through the general haze of other changes - and the haze 
is thickening every year.

I don't fundamentally disagree with turning cpumasks into standalone 
objects on large machines though. I just think that our profiling 
methods are not yet good enough to trace small slowdowns back to their 
source commits quickly. So the "we won't do it if it hurts" notion, 
while I agree with it, does not fulfill its promise in practice.

[ We might need something like a simulated reference CPU where various 
  "reference" performance tests are 100% repeatable and slowdowns are 
  thus 100% provable and bisectable. That CPU would simulate a cache and 
  would be modern in most aspects, etc. - just that the results it 
  produces would be fully deterministic in virtual time.

  The problem is that hardware is not yet fast enough for that kind of
  simulation, IMO (the tools exist, but working in such a simulated
  environment would not be fun in practice - so kernel developers
  would generally ignore it) - which means a few more years of
  uncertainty. ]

	Ingo

