From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marcelo Tosatti
Subject: Re: [PATCH v4 4/8] KVM-HV: Add VCPU running/pre-empted state for guest
Date: Thu, 23 Aug 2012 08:46:22 -0300
Message-ID: <20120823114622.GA4747@amt.cnet>
References: <20120821112346.3512.99814.stgit@abhimanyu.in.ibm.com>
 <20120821112640.3512.43771.stgit@abhimanyu>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: avi@redhat.com, raghukt@linux.vnet.ibm.com, alex.shi@intel.com,
 kvm@vger.kernel.org, stefano.stabellini@eu.citrix.com, peterz@infradead.org,
 hpa@zytor.com, vsrivatsa@gmail.com, mingo@elte.hu
To: "Nikunj A. Dadhania"
Return-path: 
Received: from mx1.redhat.com ([209.132.183.28]:4904 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S932702Ab2HWMCq
 (ORCPT ); Thu, 23 Aug 2012 08:02:46 -0400
Content-Disposition: inline
In-Reply-To: <20120821112640.3512.43771.stgit@abhimanyu>
Sender: kvm-owner@vger.kernel.org
List-ID: 

On Tue, Aug 21, 2012 at 04:56:43PM +0530, Nikunj A. Dadhania wrote:
> From: Nikunj A. Dadhania
> 
> Hypervisor code to indicate guest running/pre-empteded status through
> msr. The page is now pinned during MSR write time and use
> kmap_atomic/kunmap_atomic to access the shared area vcpu_state area.
> 
> Suggested-by: Marcelo Tosatti
> Signed-off-by: Nikunj A. Dadhania
> ---
>  arch/x86/include/asm/kvm_host.h |    7 +++
>  arch/x86/kvm/cpuid.c            |    1 
>  arch/x86/kvm/x86.c              |   88 ++++++++++++++++++++++++++++++++++++++-
>  3 files changed, 94 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 09155d6..441348f 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -429,6 +429,13 @@ struct kvm_vcpu_arch {
>  		struct kvm_steal_time steal;
>  	} st;
>  
> +	/* indicates vcpu is running or preempted */
> +	struct {
> +		u64 msr_val;
> +		struct page *vs_page;
> +		unsigned int vs_offset;
> +	} v_state;
> +
>  	u64 last_guest_tsc;
>  	u64 last_kernel_ns;
>  	u64 last_host_tsc;
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 0595f13..37ab364 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -411,6 +411,7 @@ static int do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>  			     (1 << KVM_FEATURE_CLOCKSOURCE2) |
>  			     (1 << KVM_FEATURE_ASYNC_PF) |
>  			     (1 << KVM_FEATURE_PV_EOI) |
> +			     (1 << KVM_FEATURE_VCPU_STATE) |
>  			     (1 << KVM_FEATURE_CLOCKSOURCE_STABLE_BIT);
>  
>  		if (sched_info_on())
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 59b5950..43f2c19 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -806,13 +806,13 @@ EXPORT_SYMBOL_GPL(kvm_rdpmc);
>   * kvm-specific. Those are put in the beginning of the list.
>   */
>  
> -#define KVM_SAVE_MSRS_BEGIN	9
> +#define KVM_SAVE_MSRS_BEGIN	10
>  static u32 msrs_to_save[] = {
>  	MSR_KVM_SYSTEM_TIME, MSR_KVM_WALL_CLOCK,
>  	MSR_KVM_SYSTEM_TIME_NEW, MSR_KVM_WALL_CLOCK_NEW,
>  	HV_X64_MSR_GUEST_OS_ID, HV_X64_MSR_HYPERCALL,
>  	HV_X64_MSR_APIC_ASSIST_PAGE, MSR_KVM_ASYNC_PF_EN, MSR_KVM_STEAL_TIME,
> -	MSR_KVM_PV_EOI_EN,
> +	MSR_KVM_VCPU_STATE, MSR_KVM_PV_EOI_EN,
>  	MSR_IA32_SYSENTER_CS, MSR_IA32_SYSENTER_ESP, MSR_IA32_SYSENTER_EIP,
>  	MSR_STAR,
>  #ifdef CONFIG_X86_64
> @@ -1557,6 +1557,63 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
>  		&vcpu->arch.st.steal, sizeof(struct kvm_steal_time));
>  }
>  
> +static void kvm_set_atomic(u64 *addr, u64 old, u64 new)
> +{
> +	int loop = 1000000;
> +	while (1) {
> +		if (cmpxchg(addr, old, new) == old)
> +			break;
> +		loop--;
> +		if (!loop) {
> +			pr_info("atomic cur: %lx old: %lx new: %lx\n",
> +				*addr, old, new);
> +			break;
> +		}
> +	}
> +}
> +
> +static void kvm_set_vcpu_state(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_vcpu_state *vs;
> +	char *kaddr;
> +
> +	if (!((vcpu->arch.v_state.msr_val & KVM_MSR_ENABLED) &&
> +	      vcpu->arch.v_state.vs_page))
> +		return;

Was it agreed that vs_page needs to be valid only if the MSR is enabled? Or was that a misunderstanding?

Looks good otherwise.