public inbox for linux-kernel@vger.kernel.org
From: Ingo Molnar <mingo@elte.hu>
To: Yinghai Lu <yinghai@kernel.org>
Cc: Robin Holt <holt@sgi.com>, Andi Kleen <andi@firstfloor.org>,
	linux-kernel@vger.kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>
Subject: Re: [RFC 2/2] Make x86 calibrate_delay run in parallel.
Date: Thu, 31 Mar 2011 08:58:05 +0200	[thread overview]
Message-ID: <20110331065805.GG5938@elte.hu> (raw)
In-Reply-To: <20110331065036.GC5938@elte.hu>


* Ingo Molnar <mingo@elte.hu> wrote:

> 
> * Yinghai Lu <yinghai@kernel.org> wrote:
> 
> > > On Tue, Dec 14, 2010 at 5:58 PM,  <holt@sgi.com> wrote:
> > >
> > > On a 4096 cpu machine, we noticed that 318 seconds were taken for bringing
> > > up the cpus.  By specifying lpj=<value>, we reduced that to 75 seconds.
> > > Andi Kleen suggested we rework the calibrate_delay calls to run in
> > > parallel.  With that code in place, a test boot of the same machine took
> > > 61 seconds to bring the cpus up.  I am not sure how we beat the lpj=
> > > case, but it did outperform.
> > >
> > > One thing to note is the total BogoMIPS value is also consistently higher.
> > > I am wondering if this is an effect with the cores being in performance
> > > mode.  I did notice that the parallel calibrate_delay calls did cause the
> > > fans on the machine to ramp up to full speed where the normal sequential
> > > calls did not cause them to budge at all.
> > 
> > please check attached patch, that could calibrate correctly.
> > 
> > Thanks
> > 
> > Yinghai
> 
> > [PATCH -v2] x86: Make calibrate_delay run in parallel.
> > 
> > On a 4096 cpu machine, we noticed that 318 seconds were taken for bringing
> > up the cpus.  By specifying lpj=<value>, we reduced that to 75 seconds.
> > Andi Kleen suggested we rework the calibrate_delay calls to run in
> > parallel.
> 
> The risk wit tat suggestion is that it will spectacularly miscalibrate on 
> hyperthreading systems.

s/wit tat/with that

Thanks,

	Ingo

  reply	other threads:[~2011-03-31  6:58 UTC|newest]

Thread overview: 16+ messages
2010-12-15  1:58 [RFC 0/2] Speed large x86_64 system boot by calling calibrate_delay() in parallel Robin Holt <holt
2010-12-15  1:58 ` [RFC 1/2] Pass loops_per_jiffy in and out of calibrate_delay() Robin Holt <holt
2010-12-15  1:58 ` [RFC 2/2] Make x86 calibrate_delay run in parallel Robin Holt <holt
2010-12-16  8:34   ` Thomas Gleixner
2011-03-31  4:46   ` Yinghai Lu
2011-03-31  6:50     ` Ingo Molnar
2011-03-31  6:58       ` Ingo Molnar [this message]
2011-03-31  9:37         ` Robin Holt
2011-03-31  9:57           ` Ingo Molnar
2011-03-31 10:30             ` Avi Kivity
2011-03-31 10:46               ` Ingo Molnar
2011-03-31 10:49                 ` Avi Kivity
2011-03-31 11:13                   ` Ingo Molnar
2011-03-31 11:50             ` Robin Holt
2011-03-31  9:29     ` Robin Holt
2011-03-31 14:25       ` Yinghai Lu

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20110331065805.GG5938@elte.hu \
    --to=mingo@elte.hu \
    --cc=andi@firstfloor.org \
    --cc=holt@sgi.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=mingo@redhat.com \
    --cc=tglx@linutronix.de \
    --cc=yinghai@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

  Be sure your reply has a Subject: header at the top and a blank line
  before the message body.

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox