From mboxrd@z Thu Jan 1 00:00:00 1970
From: Boris Ostrovsky
Subject: [PATCH 4/8] x86/AMD: Stop counters on VPMU save
Date: Tue, 9 Apr 2013 13:26:15 -0400
Message-ID: <1365528379-2516-5-git-send-email-boris.ostrovsky@oracle.com>
References: <1365528379-2516-1-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Return-path:
In-Reply-To: <1365528379-2516-1-git-send-email-boris.ostrovsky@oracle.com>
List-Unsubscribe: ,
List-Post:
List-Help:
List-Subscribe: ,
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: dietmar.hahn@ts.fujitsu.com, suravee.suthikulpanit@amd.com,
	jun.nakajima@intel.com, haitao.shan@intel.com, jacob.shin@amd.com
Cc: Boris Ostrovsky , xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

Stop the counters during the VPMU save operation since they shouldn't be
running when the VCPU that controls them is not. This also makes it
unnecessary to check for overflow in context_restore().

Set the LVTPC vector before loading the context during vpmu_restore().
Otherwise it is possible to trigger an interrupt without a proper vector.
Signed-off-by: Boris Ostrovsky
---
 xen/arch/x86/hvm/svm/vpmu.c | 22 ++++++----------------
 1 file changed, 6 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 3115923..2f4b6e4 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -197,20 +197,9 @@ static inline void context_restore(struct vcpu *v)
     struct amd_vpmu_context *ctxt = vpmu->context;
 
     for ( i = 0; i < num_counters; i++ )
-        wrmsrl(ctrls[i], ctxt->ctrls[i]);
-
-    for ( i = 0; i < num_counters; i++ )
     {
         wrmsrl(counters[i], ctxt->counters[i]);
-
-        /* Force an interrupt to allow guest reset the counter,
-        if the value is positive */
-        if ( is_overflowed(ctxt->counters[i]) && (ctxt->counters[i] > 0) )
-        {
-            gdprintk(XENLOG_WARNING, "VPMU: Force a performance counter "
-                "overflow interrupt!\n");
-            amd_vpmu_do_interrupt(0);
-        }
+        wrmsrl(ctrls[i], ctxt->ctrls[i]);
     }
 }
 
@@ -223,8 +212,8 @@ static void amd_vpmu_restore(struct vcpu *v)
         vpmu_is_set(vpmu, VPMU_RUNNING)) )
         return;
 
-    context_restore(v);
     apic_write(APIC_LVTPC, ctxt->hw_lapic_lvtpc);
+    context_restore(v);
 
     vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
 }
@@ -236,10 +225,11 @@ static inline void context_save(struct vcpu *v)
     struct amd_vpmu_context *ctxt = vpmu->context;
 
     for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], ctxt->counters[i]);
-
-    for ( i = 0; i < num_counters; i++ )
+    {
         rdmsrl(ctrls[i], ctxt->ctrls[i]);
+        wrmsrl(ctrls[i], 0);
+        rdmsrl(counters[i], ctxt->counters[i]);
+    }
 }
 
 static void amd_vpmu_save(struct vcpu *v)
-- 
1.8.1.2