From: Tim Chen <tim.c.chen@linux.intel.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>,
rjw@rjwysocki.net, mingo@redhat.com, bp@suse.de, x86@kernel.org,
linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-acpi@vger.kernel.org, jolsa@redhat.com,
Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Subject: Re: [PATCH v6 5/9] x86/sysctl: Add sysctl for ITMT scheduling feature
Date: Thu, 27 Oct 2016 12:32:19 -0700 [thread overview]
Message-ID: <1477596739.2680.102.camel@linux.intel.com> (raw)
In-Reply-To: <alpine.DEB.2.20.1610262000410.5013@nanos>
On Wed, 2016-10-26 at 20:09 +0200, Thomas Gleixner wrote:
> On Wed, 26 Oct 2016, Tim Chen wrote:
> >
> > On Wed, 2016-10-26 at 13:24 +0200, Thomas Gleixner wrote:
> > >
> > > >
> > > > There were reservations on the multi-socket case of ITMT, maybe it would
> > > > help to spell those out in great detail here. That is, have the comment
> > > > explain the policy instead of simply stating what the code does (which
> > > > is always bad comment policy, you can read the code just fine).
> > > What is the objection to multi-socket systems? If it improves the
> > > behaviour, then why would this be a bad thing for multi-socket systems?
> > For multi-socket (server) systems, it is much more likely that multiple
> > CPUs in a socket will be busy and not running in turbo mode. In that
> > scenario, the extra work of migrating the workload to the core with
> > extra headroom will not be able to make use of that headroom. I will
> > update the comment to reflect this policy.
> So on a single socket server system the extra work does not matter, right?
> Don't tell me that single socket server systems are irrelevant. Intel is
> actively promoting single-socket CPUs, like XEON D, for high density
> servers...
>
> Instead of handwaving arguments I prefer a proper analysis of what the
> overhead is and why it is not a good thing for loaded servers in general.
>
> Then instead of slapping half-baked heuristics into the code, we should sit
> down and think a bit harder about it.
>
The ITMT scheduling overhead should be small: mostly a small number of
cycles spent initially idle-balancing tasks towards an idle favored core,
plus the cycles needed to refill hot data in the mid-level cache for the
migrated task. Those should be a very small percentage of the cycles the
task spends running on the favored core, so any extra frequency boost
should more than compensate, making this a good trade-off.
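
To make that concrete, below is a minimal sketch (not the actual patch;
the handler, the sched_itmt_enabled name, and the x86_topology_update
flag are assumptions extrapolated from patches 3/9 and 5/9 of this
series) of how a sysctl knob could gate ITMT and force a sched-domain
rebuild when toggled:

#include <linux/cpuset.h>	/* rebuild_sched_domains() */
#include <linux/sysctl.h>

/*
 * Assumed from patch 3/9: flag that makes x86's
 * arch_update_cpu_topology() report a topology change.
 */
extern bool x86_topology_update;

static unsigned int sysctl_sched_itmt_enabled;
static int zero;
static int one = 1;

static int sched_itmt_update_handler(struct ctl_table *table, int write,
				     void __user *buffer, size_t *lenp,
				     loff_t *ppos)
{
	int ret;

	/* Parse and range-check the 0/1 value behind the knob. */
	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
	if (ret || !write)
		return ret;

	/*
	 * Toggling ITMT changes the CPU priorities seen by asym
	 * packing, so ask the scheduler to rebuild its domains.
	 */
	x86_topology_update = true;
	rebuild_sched_domains();

	return 0;
}

static struct ctl_table itmt_kern_table[] = {
	{
		.procname	= "sched_itmt_enabled",
		.data		= &sysctl_sched_itmt_enabled,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= sched_itmt_update_handler,
		.extra1		= &zero,
		.extra2		= &one,
	},
	{}
};

Rebuilding the domains on each write is what would let the asym-packing
behaviour come and go at runtime without a reboot.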
After some internal discussions, we think we should enable the ITMT
feature by default on all systems that support ITMT. I will remove the
single-socket restriction.
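
For completeness: assuming the knob lands as
/proc/sys/kernel/sched_itmt_enabled (the name is inferred from this
series' subject, not verified), administrators who are still concerned
about the migration overhead could write 0 to it at runtime to opt out.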
Thanks.
Tim
Thread overview: 24+ messages
2016-10-20 21:59 [PATCH v6 0/9] Support Intel® Turbo Boost Max Technology 3.0 Tim Chen
2016-10-20 21:59 ` [PATCH v6 1/9] sched: Extend scheduler's asym packing Tim Chen
2016-10-26 10:27 ` Thomas Gleixner
2016-10-26 18:10 ` Tim Chen
2016-10-26 18:23 ` Thomas Gleixner
2016-10-20 21:59 ` [PATCH v6 2/9] x86/topology: Provide topology_num_packages() Tim Chen
2016-10-20 21:59 ` [PATCH v6 3/9] x86/topology: Define x86's arch_update_cpu_topology Tim Chen
2016-10-20 21:59 ` [PATCH v6 4/9] x86: Enable Intel Turbo Boost Max Technology 3.0 Tim Chen
2016-10-20 21:59 ` [PATCH v6 5/9] x86/sysctl: Add sysctl for ITMT scheduling feature Tim Chen
2016-10-26 10:49 ` Thomas Gleixner
2016-10-26 11:25 ` Peter Zijlstra
2016-10-26 11:24 ` Thomas Gleixner
2016-10-26 17:23 ` Tim Chen
2016-10-26 18:09 ` Thomas Gleixner
2016-10-27 19:32 ` Tim Chen [this message]
2016-10-26 17:59 ` Tim Chen
2016-10-26 10:52 ` Thomas Gleixner
2016-10-26 18:03 ` Tim Chen
2016-10-26 18:11 ` Thomas Gleixner
2016-10-26 19:38 ` Tim Chen
2016-10-20 22:00 ` [PATCH v6 6/9] x86/sched: Add SD_ASYM_PACKING flags to x86 ITMT CPU Tim Chen
2016-10-20 22:00 ` [PATCH v6 7/9] acpi: bus: Enable HWP CPPC objects Tim Chen
2016-10-20 22:00 ` [PATCH v6 8/9] acpi: bus: Set _OSC for diverse core support Tim Chen
2016-10-20 22:00 ` [PATCH v6 9/9] cpufreq: intel_pstate: Use CPPC to get max performance Tim Chen