From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Gautham R Shenoy <ego@in.ibm.com>,
linux-kernel@vger.kernel.org,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Balbir Singh <balbir@in.ibm.com>,
Rusty Russell <rusty@rustcorp.com.au>,
Nathan Lynch <ntl@pobox.com>, Ingo Molnar <mingo@elte.hu>,
Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>,
Dipankar Sarma <dipankar@in.ibm.com>,
Shaohua Li <shaohua.li@linux.com>
Subject: Re: [RFD PATCH 0/4] cpu: Bulk CPU Hotplug support.
Date: Tue, 16 Jun 2009 14:00:59 -0700 [thread overview]
Message-ID: <20090616210059.GL6842@linux.vnet.ibm.com> (raw)
In-Reply-To: <20090616080715.GB7961@dirshya.in.ibm.com>
On Tue, Jun 16, 2009 at 01:37:15PM +0530, Vaidyanathan Srinivasan wrote:
> * Andrew Morton <akpm@linux-foundation.org> [2009-06-15 23:23:18]:
>
> > On Tue, 16 Jun 2009 11:08:39 +0530 Gautham R Shenoy <ego@in.ibm.com> wrote:
> >
> > > Currently on a ppc64 box with 16 CPUs, the time taken for
> > > an individual cpu-hotplug operation is as follows.
> > >
> > > # time echo 0 > /sys/devices/system/cpu/cpu2/online
> > > real 0m0.025s
> > > user 0m0.000s
> > > sys 0m0.002s
> > >
> > > # time echo 1 > /sys/devices/system/cpu/cpu2/online
> > > real 0m0.021s
> > > user 0m0.000s
> > > sys 0m0.000s
> >
> > Surprised. Do people really online and offline CPUs frequently enough
> > for this to be a problem?
>
> Certainly not for hardware faults or hardware replacement, but the
> cpu-hotplug interface is useful for changing the system configuration
> to meet different objectives, such as:
>
> * Reduce system capacity to lower average power consumption and heat
>
> * The increasing number of cores and threads per CPU package means
> multiple cpu offline/online operations are needed for any perceivable
> effect
>
> * Dynamically change CPU configurations in virtualized environments
Perhaps also reducing boot-up time? If I am correctly interpreting the
above numbers, an eight-CPU system would be consuming 175 milliseconds
bringing up the seven non-boot CPUs. Reducing this by 150 milliseconds
might be of interest to some people. ;-)
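To make the arithmetic explicit, here is a quick sketch (the 25 ms
figure is taken from the measurements quoted above; the eight-CPU box
is hypothetical, and the sysfs loop in the comment is only
illustrative and needs root on real hardware):

```shell
#!/bin/sh
# Back-of-the-envelope estimate: ~25 ms per-CPU online latency
# (from the numbers quoted above) times the seven non-boot CPUs
# of a hypothetical eight-CPU box.
per_cpu_ms=25
nr_cpus=8
total_ms=$(( (nr_cpus - 1) * per_cpu_ms ))
echo "estimated non-boot bring-up: ${total_ms} ms"

# On real hardware (as root) the cumulative cost could be timed
# directly by onlining each non-boot CPU through sysfs:
#   time for c in /sys/devices/system/cpu/cpu[1-9]*/online; do
#     echo 1 > "$c"
#   done
```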
Thanx, Paul
> Ref:
>
> [1] Saving power by cpu evacuation sched_max_capacity_pct=n
> http://lkml.org/lkml/2009/5/13/173
>
> [2] Make offline cpus to go to deepest idle state using
> http://lkml.org/lkml/2009/5/22/431
>
> [3] cpuset: add new API to change cpuset top group's cpus
> http://lkml.org/lkml/2009/5/19/54
>
> For getting stuff off a certain CPU, the cpu-hotplug framework seems
> to do the right thing. Identifying bottlenecks in the framework can
> significantly help other use cases.
>
> --Vaidy
>
Thread overview: 21+ messages
2009-06-16 5:38 [RFD PATCH 0/4] cpu: Bulk CPU Hotplug support Gautham R Shenoy
2009-06-16 5:38 ` [RFD PATCH 1/4] powerpc: cpu: Reduce the polling interval in __cpu_up() Gautham R Shenoy
2009-06-16 16:06 ` Nathan Lynch
2009-06-16 16:37 ` Gautham R Shenoy
2009-06-16 5:38 ` [RFD PATCH 2/4] cpu: sysfs interface for hotplugging bunch of CPUs Gautham R Shenoy
2009-06-16 16:22 ` Nathan Lynch
2009-06-16 16:33 ` Gautham R Shenoy
2009-06-16 5:38 ` [RFD PATCH 3/4] cpu: Define new functions cpu_down_mask and cpu_up_mask Gautham R Shenoy
2009-06-16 5:38 ` [RFD PATCH 4/4] cpu: measure time taken by subsystem notifiers during cpu-hotplug Gautham R Shenoy
2009-06-16 6:23 ` [RFD PATCH 0/4] cpu: Bulk CPU Hotplug support Andrew Morton
2009-06-16 8:07 ` Vaidyanathan Srinivasan
2009-06-16 21:00 ` Paul E. McKenney [this message]
2009-06-24 15:02 ` Pavel Machek
2009-06-17 7:32 ` Peter Zijlstra
2009-06-17 7:40 ` Balbir Singh
2009-06-17 14:38 ` Paul E. McKenney
2009-06-17 15:07 ` Ingo Molnar
2009-06-17 20:26 ` Peter Zijlstra
2009-06-20 15:35 ` Ingo Molnar
2009-06-22 6:08 ` Nathan Lynch
2009-06-17 13:50 ` Suresh Siddha