From: Andy Lutomirski <luto@kernel.org>
To: Vikas Shivappa <vikas.shivappa@linux.intel.com>,
vikas.shivappa@intel.com
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, hpa@zytor.com,
tglx@linutronix.de, mingo@kernel.org, tj@kernel.org,
peterz@infradead.org, matt.fleming@intel.com,
will.auld@intel.com, glenn.p.williamson@intel.com,
kanaka.d.juvva@intel.com
Subject: Re: [PATCH 7/9] x86/intel_rdt: Implement scheduling support for Intel RDT
Date: Thu, 6 Aug 2015 16:03:12 -0700
Message-ID: <55C3E7B0.2010609@kernel.org>
In-Reply-To: <1438898117-3692-8-git-send-email-vikas.shivappa@linux.intel.com>
On 08/06/2015 02:55 PM, Vikas Shivappa wrote:
> Adds support for IA32_PQR_ASSOC MSR writes during task scheduling. For
> Cache Allocation, the MSR write lets the task fill the cache 'subset'
> represented by its intel_rdt cgroup's cache_mask.
>
> The high 32 bits of the per-processor IA32_PQR_ASSOC MSR hold the
> CLOSid. On context switch, the kernel writes the CLOSid of the cgroup
> to which the incoming task belongs into the CPU's IA32_PQR_ASSOC MSR.
>
> This patch also implements a common software cache for the
> IA32_PQR_ASSOC MSR (RMID in bits 0:9, CLOSid in bits 32:63), shared by
> Cache Monitoring (CMT) and Cache Allocation: CMT updates the RMID,
> whereas Cache Allocation updates the CLOSid in the software cache.
> During scheduling, IA32_PQR_ASSOC is written only when the new
> RMID/CLOSid differs from the cached values. Since the measured rdmsr
> latency for this MSR is high (~250 cycles), the software cache is
> needed to avoid reading the MSR back just to compare the current
> CLOSid value.
>
> The PQR MSR write path is designed to minimally impact the scheduler
> hot path:
> - The path does not exist on non-Intel platforms.
> - On Intel platforms, it does not exist by default unless CGROUP_RDT
> is enabled.
> - It remains a no-op when CGROUP_RDT is enabled but the Intel SKU does
> not support the feature.
> - When the feature is available and enabled, no MSR write happens
> until the user manually creates a cgroup directory *and* assigns it a
> cache_mask different from the root cgroup's. Since a child node
> inherits its parent's cache_mask, merely creating a cgroup has no
> scheduling hot path impact.
> - The MSR write is done only when a task with a different CLOSid is
> scheduled on the CPU. If task groups are bound to be scheduled on a
> set of CPUs, the number of MSR writes is greatly reduced.
> - A per-CPU cache of CLOSids is maintained for this check, so we don't
> have to do an rdmsr, which costs many cycles.
> - Cgroup directories with the same cache_mask reuse a single CLOSid
> (see the sketch after this quote). This minimizes the number of
> CLOSids in use and hence reduces the MSR write frequency.
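[A sketch of the CLOSid reuse from the last point, assuming a
hypothetical table cctable and a fixed MAX_CLOSID; the real class
count comes from CPUID enumeration:

    #include <linux/errno.h>

    #define MAX_CLOSID	16	/* illustrative; hardware reports the real limit */

    struct clos_cbm_entry {
    	unsigned long cache_mask;	/* capacity bitmask for this class */
    	unsigned int refcount;		/* cgroups currently sharing the CLOSid */
    };

    static struct clos_cbm_entry cctable[MAX_CLOSID];

    /* Return an existing CLOSid whose mask matches, else claim a free slot. */
    static int closid_get(unsigned long mask)
    {
    	int i, free = -1;

    	for (i = 0; i < MAX_CLOSID; i++) {
    		if (cctable[i].refcount && cctable[i].cache_mask == mask) {
    			cctable[i].refcount++;
    			return i;	/* reuse: no new CLOSid consumed */
    		}
    		if (!cctable[i].refcount && free < 0)
    			free = i;
    	}
    	if (free < 0)
    		return -ENOSPC;

    	cctable[free].cache_mask = mask;
    	cctable[free].refcount = 1;
    	return free;
    }

Fewer distinct CLOSids also helps the hot path: consecutive tasks on a
CPU are more likely to share a CLOSid, so the PQR write is skipped more
often.]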
What happens if a user process sets a painfully restrictive CLOS and
then spends most of its time in the kernel doing work on behalf of
unrelated tasks? Does performance suck?
--Andy