From mboxrd@z Thu Jan  1 00:00:00 1970
From: Boris Ostrovsky
Subject: [PATCH v5 RESEND 02/17] VPMU: Mark context LOADED before registers are loaded
Date: Wed, 23 Apr 2014 08:50:23 -0400
Message-ID: <1398257438-4994-3-git-send-email-boris.ostrovsky@oracle.com>
References: <1398257438-4994-1-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1398257438-4994-1-git-send-email-boris.ostrovsky@oracle.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: kevin.tian@intel.com
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
 eddie.dong@intel.com, donald.d.dugger@intel.com, xen-devel@lists.xen.org,
 dietmar.hahn@ts.fujitsu.com, JBeulich@suse.com, boris.ostrovsky@oracle.com,
 suravee.suthikulpanit@amd.com
List-Id: xen-devel@lists.xenproject.org

Because a PMU interrupt may be generated as soon as the PMU registers are
loaded (or, more precisely, as soon as the HW PMU is "armed"), we don't
want to delay marking the context as LOADED until after the registers are
loaded. Otherwise, during interrupt handling, VPMU_CONTEXT_LOADED may not
be set and this could be confusing.

(Technically, only SVM needs this change right now since VMX will "arm"
the PMU later, during VMRUN, when the global control register is loaded
from the VMCS. However, both AMD and Intel code will require this patch
when we introduce PV VPMU.)
Signed-off-by: Boris Ostrovsky
---
 xen/arch/x86/hvm/svm/vpmu.c       | 2 ++
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 2 ++
 xen/arch/x86/hvm/vpmu.c           | 3 +--
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 66a3815..3ac7d53 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -203,6 +203,8 @@ static void amd_vpmu_load(struct vcpu *v)
         return;
     }
 
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
     context_load(v);
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index ee26362..8aa7cb2 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -369,6 +369,8 @@ static void core2_vpmu_load(struct vcpu *v)
     if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
         return;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
     __core2_vpmu_load(v);
 }
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 21fbaba..63765fa 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -211,10 +211,9 @@ void vpmu_load(struct vcpu *v)
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
     {
         apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+        /* Arch code needs to set VPMU_CONTEXT_LOADED */
         vpmu->arch_vpmu_ops->arch_vpmu_load(v);
     }
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
 }
 
 void vpmu_initialise(struct vcpu *v)
-- 
1.8.3.1