From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S934654AbdBQTk5 (ORCPT );
	Fri, 17 Feb 2017 14:40:57 -0500
Received: from mga06.intel.com ([134.134.136.31]:24106 "EHLO mga06.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1755721AbdBQTky (ORCPT );
	Fri, 17 Feb 2017 14:40:54 -0500
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.35,173,1484035200"; d="scan'208";a="48500994"
From: Vikas Shivappa
To: vikas.shivappa@intel.com
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, hpa@zytor.com,
	tglx@linutronix.de, mingo@kernel.org, peterz@infradead.org,
	ravi.v.shankar@intel.com, tony.luck@intel.com, fenghua.yu@intel.com,
	andi.kleen@intel.com, vikas.shivappa@linux.intel.com
Subject: [PATCH 5/5] x86/intel_rdt: hotcpu updates for RDT
Date: Fri, 17 Feb 2017 11:38:48 -0800
Message-Id: <1487360328-6768-6-git-send-email-vikas.shivappa@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1487360328-6768-1-git-send-email-vikas.shivappa@linux.intel.com>
References: <1487360328-6768-1-git-send-email-vikas.shivappa@linux.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

For closid and rmid, clear both the per-cpu cache and the PQR_ASSOC MSR
only in the respective cpu offline handlers. The clearing done elsewhere
is not required and is removed. Clearing at offline time ensures that
cache occupancy is no longer counted as soon as the cpu goes down,
instead of waiting to clear the state when the cpu comes back online.
Signed-off-by: Vikas Shivappa
---
 arch/x86/events/intel/cqm.c     | 10 +++++-----
 arch/x86/kernel/cpu/intel_rdt.c |  1 -
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/x86/events/intel/cqm.c b/arch/x86/events/intel/cqm.c
index 8c00dc0..681e32f 100644
--- a/arch/x86/events/intel/cqm.c
+++ b/arch/x86/events/intel/cqm.c
@@ -1569,13 +1569,8 @@ static inline void cqm_pick_event_reader(int cpu)
 
 static int intel_cqm_cpu_starting(unsigned int cpu)
 {
-	struct intel_pqr_state *state = &per_cpu(pqr_state, cpu);
 	struct cpuinfo_x86 *c = &cpu_data(cpu);
 
-	state->rmid = 0;
-	state->closid = 0;
-	state->rmid_usecnt = 0;
-
 	WARN_ON(c->x86_cache_max_rmid != cqm_max_rmid);
 	WARN_ON(c->x86_cache_occ_scale != cqm_l3_scale);
 
@@ -1585,12 +1580,17 @@ static int intel_cqm_cpu_starting(unsigned int cpu)
 
 static int intel_cqm_cpu_exit(unsigned int cpu)
 {
+	struct intel_pqr_state *state = &per_cpu(pqr_state, cpu);
 	int target;
 
 	/* Is @cpu the current cqm reader for this package ? */
 	if (!cpumask_test_and_clear_cpu(cpu, &cqm_cpumask))
 		return 0;
 
+	state->rmid = 0;
+	state->rmid_usecnt = 0;
+	wrmsr(MSR_IA32_PQR_ASSOC, 0, state->closid);
+
 	/* Find another online reader in this package */
 	target = cpumask_any_but(topology_core_cpumask(cpu), cpu);
 
diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index 5a533fe..c8af5d9 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -350,7 +350,6 @@ static int intel_rdt_online_cpu(unsigned int cpu)
 	domain_add_cpu(cpu, r);
 	/* The cpu is set in default rdtgroup after online. */
 	cpumask_set_cpu(cpu, &rdtgroup_default.cpu_mask);
-	clear_closid(cpu);
 	mutex_unlock(&rdtgroup_mutex);
 
 	return 0;
-- 
1.9.1