From: Boris Ostrovsky
Subject: [PATCH v14 for-xen-4.5 02/21] x86/VPMU: Manage VPMU_CONTEXT_SAVE flag in vpmu_save_force()
Date: Fri, 17 Oct 2014 17:17:50 -0400
Message-ID: <1413580689-2750-3-git-send-email-boris.ostrovsky@oracle.com>
References: <1413580689-2750-1-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1413580689-2750-1-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: JBeulich@suse.com, kevin.tian@intel.com, suravee.suthikulpanit@amd.com, Aravind.Gopalakrishnan@amd.com, dietmar.hahn@ts.fujitsu.com, dgdegra@tycho.nsa.gov, konrad.wilk@oracle.com
Cc: keir@xen.org, andrew.cooper3@citrix.com, tim@xen.org, xen-devel@lists.xen.org, jun.nakajima@intel.com, boris.ostrovsky@oracle.com
List-Id: xen-devel@lists.xenproject.org

There is a possibility that we set VPMU_CONTEXT_SAVE on a VPMU context in
vpmu_load() and never clear it (because vpmu_save_force() will see the
VPMU_CONTEXT_LOADED bit clear, which is possible on AMD processors, and
return early).

The problem is that amd_vpmu_save() assumes that if VPMU_CONTEXT_SAVE is
set then (1) we need to save the counters and (2) we don't need to "stop"
the control registers since they must have been stopped earlier. The latter
may cause all sorts of problems (such as counters still running in the
wrong guest and the hypervisor sending that guest unexpected PMU
interrupts).

Since setting this flag is currently always done prior to calling
vpmu_save_force(), let's both set and clear it there.

Signed-off-by: Boris Ostrovsky
Reviewed-by: Dietmar Hahn
Reviewed-by: Konrad Rzeszutek Wilk
---
 xen/arch/x86/hvm/vpmu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 265fc0e..e74c871 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -128,6 +128,8 @@ static void vpmu_save_force(void *arg)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
         return;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+
     if ( vpmu->arch_vpmu_ops )
         (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
 
@@ -176,7 +178,6 @@ void vpmu_load(struct vcpu *v)
          */
         if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
         {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
             on_selected_cpus(cpumask_of(vpmu->last_pcpu),
                              vpmu_save_force, (void *)v, 1);
             vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
@@ -193,7 +194,6 @@ void vpmu_load(struct vcpu *v)
         vpmu = vcpu_vpmu(prev);
 
         /* Someone ran here before us */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
         vpmu_save_force(prev);
         vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-- 
1.8.1.4
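
For context, a minimal sketch of how vpmu_save_force() is shaped once this
patch is applied. It shows only the flag handling; the trailing
vpmu_reset(vpmu, VPMU_CONTEXT_SAVE) is assumed to be the pre-existing clear
referred to in the commit message (it falls outside the hunk's context
lines), and other bookkeeping in the function is omitted.

static void vpmu_save_force(void *arg)
{
    struct vcpu *v = (struct vcpu *)arg;
    struct vpmu_struct *vpmu = vcpu_vpmu(v);

    /* Nothing to save if the context is not currently loaded. */
    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
        return;

    /*
     * Set the flag here, after the early return, so it is only raised
     * when a save is actually performed and can no longer be left
     * dangling by the callers in vpmu_load().
     */
    vpmu_set(vpmu, VPMU_CONTEXT_SAVE);

    if ( vpmu->arch_vpmu_ops )
        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);

    /* Assumed pre-existing clear; pairs with the set above. */
    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
}

Pairing the set with the clear inside vpmu_save_force() means the
early-return path can no longer leave VPMU_CONTEXT_SAVE set, which is what
previously misled amd_vpmu_save() into skipping the "stop" of the control
registers.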