From: Vikas Shivappa <vikas.shivappa@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: vikas.shivappa@intel.com, x86@kernel.org, hpa@zytor.com,
tglx@linutronix.de, mingo@kernel.org, tj@kernel.org,
peterz@infradead.org, matt.fleming@intel.com,
will.auld@intel.com, glenn.p.williamson@intel.com,
kanaka.d.juvva@intel.com, vikas.shivappa@linux.intel.com
Subject: [PATCH 8/9] x86/intel_rdt: Hot cpu support for Cache Allocation
Date: Wed, 1 Jul 2015 15:21:09 -0700 [thread overview]
Message-ID: <1435789270-27010-9-git-send-email-vikas.shivappa@linux.intel.com> (raw)
In-Reply-To: <1435789270-27010-1-git-send-email-vikas.shivappa@linux.intel.com>
This patch adds hot cpu support for Intel Cache Allocation. Support
includes updating the cache bitmask MSRs IA32_L3_QOS_n when a new CPU
package comes online. There is one IA32_L3_QOS_n MSR per Class of
Service on each CPU package. The new package's MSRs are synchronized
with the values already programmed on the existing packages. The
software cache for the IA32_PQR_ASSOC MSR is also updated during hot
cpu notifications.
Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
---
arch/x86/kernel/cpu/intel_rdt.c | 95 ++++++++++++++++++++++++++++++++++++++---
1 file changed, 90 insertions(+), 5 deletions(-)
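
(Note, not part of the patch: for readers following the description above,
the IA32_L3_QOS_n / IA32_L3_MASK_n mask MSRs start at 0xc90
(IA32_L3_QOS_MASK_0), so the per-package sync performed by
cbm_update_msrs() via cbm_cpu_update() in the diff below amounts to roughly
the sketch here. cbm_cpu_update() itself is defined earlier in this series;
CBM_MSR_BASE and the helper name are illustrative assumptions only.)

/*
 * Illustrative sketch only, not code from this patch: rewrite every
 * in-use IA32_L3_MASK_n on the CPU this runs on, so a newly onlined
 * package picks up the masks already programmed on other packages.
 * CBM_MSR_BASE (0xc90, IA32_L3_QOS_MASK_0) and the function name are
 * assumptions; ccmap[] is the CLOSid table from earlier patches.
 */
#define CBM_MSR_BASE	0xc90

static void sync_new_package_masks(void)
{
	unsigned int maxid = boot_cpu_data.x86_cache_max_closid;
	unsigned int i;

	/* CLOSid 0 keeps the all-1s reset value, so start at 1. */
	for (i = 1; i < maxid; i++) {
		if (ccmap[i].clos_refcnt)
			wrmsrl(CBM_MSR_BASE + i, ccmap[i].cache_mask);
	}
}
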
diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index c8bb134..1f9716c 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -25,6 +25,7 @@
#include <linux/slab.h>
#include <linux/err.h>
#include <linux/spinlock.h>
+#include <linux/cpu.h>
#include <asm/intel_rdt.h>
/*
@@ -40,6 +41,11 @@ struct static_key __read_mostly rdt_enable_key = STATIC_KEY_INIT_FALSE;
* Mask of CPUs for writing CBM values. We only need one CPU per-socket.
*/
static cpumask_t rdt_cpumask;
+/*
+ * Temporary cpumask used during hot cpu notification handling. The usage
+ * is serialized by hot cpu locks.
+ */
+static cpumask_t tmp_cpumask;
#define rdt_for_each_child(pos_css, parent_ir) \
css_for_each_child((pos_css), &(parent_ir)->css)
@@ -313,13 +319,86 @@ out:
return err;
}
-static inline void rdt_cpumask_update(int cpu)
+static inline bool rdt_cpumask_update(int cpu)
{
- static cpumask_t tmp;
-
- cpumask_and(&tmp, &rdt_cpumask, topology_core_cpumask(cpu));
- if (cpumask_empty(&tmp))
+ cpumask_and(&tmp_cpumask, &rdt_cpumask, topology_core_cpumask(cpu));
+ if (cpumask_empty(&tmp_cpumask)) {
cpumask_set_cpu(cpu, &rdt_cpumask);
+ return true;
+ }
+
+ return false;
+}
+
+/*
+ * cbm_update_msrs() - Updates all the existing IA32_L3_MASK_n MSRs
+ * on the current package. There is one such MSR per CLOSid; IA32_L3_MASK_0 is skipped.
+ */
+static void cbm_update_msrs(void *info)
+{
+ int maxid = boot_cpu_data.x86_cache_max_closid;
+ unsigned int i;
+
+ /*
+ * At cpu reset, all bits of IA32_L3_MASK_n are set.
+ * The index starts from one as there is no need
+ * to update IA32_L3_MASK_0: it belongs to the root cgroup,
+ * whose cache mask is always all 1s.
+ */
+ for (i = 1; i < maxid; i++) {
+ if (ccmap[i].clos_refcnt)
+ cbm_cpu_update((void *)i);
+ }
+}
+
+static inline void intel_rdt_cpu_start(int cpu)
+{
+ struct intel_pqr_state *state = &per_cpu(pqr_state, cpu);
+
+ state->closid = 0;
+ mutex_lock(&rdt_group_mutex);
+ if (rdt_cpumask_update(cpu))
+ smp_call_function_single(cpu, cbm_update_msrs, NULL, 1);
+ mutex_unlock(&rdt_group_mutex);
+}
+
+static void intel_rdt_cpu_exit(unsigned int cpu)
+{
+ int i;
+
+ mutex_lock(&rdt_group_mutex);
+ if (!cpumask_test_and_clear_cpu(cpu, &rdt_cpumask)) {
+ mutex_unlock(&rdt_group_mutex);
+ return;
+ }
+
+ cpumask_and(&tmp_cpumask, topology_core_cpumask(cpu), cpu_online_mask);
+ cpumask_clear_cpu(cpu, &tmp_cpumask);
+ i = cpumask_any(&tmp_cpumask);
+
+ if (i < nr_cpu_ids)
+ cpumask_set_cpu(i, &rdt_cpumask);
+ mutex_unlock(&rdt_group_mutex);
+}
+
+static int intel_rdt_cpu_notifier(struct notifier_block *nb,
+ unsigned long action, void *hcpu)
+{
+ unsigned int cpu = (unsigned long)hcpu;
+
+ switch (action) {
+ case CPU_DOWN_FAILED:
+ case CPU_ONLINE:
+ intel_rdt_cpu_start(cpu);
+ break;
+ case CPU_DOWN_PREPARE:
+ intel_rdt_cpu_exit(cpu);
+ break;
+ default:
+ break;
+ }
+
+ return NOTIFY_OK;
}
static int __init intel_rdt_late_init(void)
@@ -358,9 +437,15 @@ static int __init intel_rdt_late_init(void)
ccm->cache_mask = (1ULL << max_cbm_len) - 1;
ccm->clos_refcnt = 1;
+ cpu_notifier_register_begin();
+
for_each_online_cpu(i)
rdt_cpumask_update(i);
+ __hotcpu_notifier(intel_rdt_cpu_notifier, 0);
+
+ cpu_notifier_register_done();
+
static_key_slow_inc(&rdt_enable_key);
pr_info("Intel cache allocation enabled\n");
out_err:
--
1.9.1