public inbox for linux-kernel@vger.kernel.org
From: Dipankar Sarma <dipankar@in.ibm.com>
To: Ingo Molnar <mingo@elte.hu>
Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>,
	Linux Kernel <linux-kernel@vger.kernel.org>,
	Suresh B Siddha <suresh.b.siddha@intel.com>,
	Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Arjan van de Ven <arjan@infradead.org>,
	Balbir Singh <balbir@linux.vnet.ibm.com>,
	Vatsa <vatsa@linux.vnet.ibm.com>,
	Gautham R Shenoy <ego@in.ibm.com>,
	Andi Kleen <andi@firstfloor.org>,
	Gregory Haskins <gregory.haskins@gmail.com>,
	Mike Galbraith <efault@gmx.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Arun Bharadwaj <arun@linux.vnet.ibm.com>
Subject: Re: [RFC PATCH v1 0/3] Saving power by cpu evacuation using sched_mc=n
Date: Mon, 27 Apr 2009 11:24:51 +0530
Message-ID: <20090427055451.GF13342@in.ibm.com>
In-Reply-To: <20090427035216.GD10087@elte.hu>

On Mon, Apr 27, 2009 at 05:52:16AM +0200, Ingo Molnar wrote:
> 
> Regarding the values for 2...5 - is the AvgPower column time 
> normalized or workload normalized?
> 
> If it's time normalized then it appears there's no power win here at 
> all: we'd be better off by throttling the workload directly (by 
> injecting sleeps or something like that), right?

Energy savings with this approach will depend on the workload. We have
seen transactional workloads where taking a few cores offline has almost
no impact on throughput or response time.
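The effect described above can be approximated today with the standard cpu-hotplug
sysfs interface, independent of the sched_mc=n tunable these patches add. The
sketch below is illustrative only (the `evacuate_plan` helper is a made-up name,
not part of the patch series): it prints the hotplug commands it would run to
offline the last N cpus, rather than executing them, so it is safe to try.

```shell
#!/bin/sh
# evacuate_plan NCPUS N: print the cpu-hotplug commands that would take
# the last N of NCPUS cpus offline via the standard sysfs interface.
# Printing instead of executing keeps this a dry run; pipe to sh (as
# root) to actually evacuate the cpus.
evacuate_plan() {
    ncpus=$1
    n=$2
    cpu=$((ncpus - n))            # first cpu to evacuate
    while [ "$cpu" -lt "$ncpus" ]; do
        echo "echo 0 > /sys/devices/system/cpu/cpu$cpu/online"
        cpu=$((cpu + 1))
    done
}

# Example: on an 8-cpu box, evacuate the last 2 cpus.
evacuate_plan 8 2
```

The difference from the sched_mc=n approach is that hotplug removal is heavyweight
and static, whereas the load-balancer-based evacuation in this series can back off
as load rises.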

Thanks
Dipankar


Thread overview: 16+ messages
2009-04-26 20:46 [RFC PATCH v1 0/3] Saving power by cpu evacuation using sched_mc=n Vaidyanathan Srinivasan
2009-04-26 20:46 ` [RFC PATCH v1 1/3] sched: add more levels of sched_mc Vaidyanathan Srinivasan
2009-04-26 20:46 ` [RFC PATCH v1 2/3] sched: threshold helper functions Vaidyanathan Srinivasan
2009-04-26 20:47 ` [RFC PATCH v1 3/3] sched: loadbalancer hacks for forced packing of tasks Vaidyanathan Srinivasan
2009-04-27  3:52 ` [RFC PATCH v1 0/3] Saving power by cpu evacuation using sched_mc=n Ingo Molnar
2009-04-27  5:43   ` Vaidyanathan Srinivasan
2009-04-27  5:53     ` Ingo Molnar
2009-04-27  6:39       ` Vaidyanathan Srinivasan
2009-04-27  7:01         ` Balbir Singh
2009-04-27  5:54   ` Dipankar Sarma [this message]
2009-04-27 10:09 ` Peter Zijlstra
2009-04-27 14:20   ` Vaidyanathan Srinivasan
2009-04-28  8:33     ` Peter Zijlstra
2009-04-28  8:52       ` Ingo Molnar
2009-04-28 16:15         ` Vaidyanathan Srinivasan
2009-04-28 16:11       ` Vaidyanathan Srinivasan

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20090427055451.GF13342@in.ibm.com \
    --to=dipankar@in.ibm.com \
    --cc=a.p.zijlstra@chello.nl \
    --cc=andi@firstfloor.org \
    --cc=arjan@infradead.org \
    --cc=arun@linux.vnet.ibm.com \
    --cc=balbir@linux.vnet.ibm.com \
    --cc=efault@gmx.de \
    --cc=ego@in.ibm.com \
    --cc=gregory.haskins@gmail.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=mingo@elte.hu \
    --cc=suresh.b.siddha@intel.com \
    --cc=svaidy@linux.vnet.ibm.com \
    --cc=tglx@linutronix.de \
    --cc=vatsa@linux.vnet.ibm.com \
    --cc=venkatesh.pallipadi@intel.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, reply through a mailto: link to this message.

Be sure your reply has a Subject: header at the top and a blank line
before the message body.