public inbox for kvm@vger.kernel.org
From: Gleb Natapov <gleb@redhat.com>
To: Avi Kivity <avi@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
	kvm@vger.kernel.org, Jan Kiszka <jan.kiszka@siemens.com>
Subject: Re: [PATCH 4/9] KVM: VMX: Cache cpl
Date: Tue, 8 Mar 2011 16:20:10 +0200	[thread overview]
Message-ID: <20110308142010.GE2504@redhat.com> (raw)
In-Reply-To: <1299592665-12325-5-git-send-email-avi@redhat.com>

On Tue, Mar 08, 2011 at 03:57:40PM +0200, Avi Kivity wrote:
> We may read the cpl quite often in the same vmexit (instruction privilege
> check, memory access checks for instruction and operands), so we gain
> a bit if we cache the value.
> 
Shouldn't VCPU_EXREG_CPL be cleared in vmx_set_efer too?
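For reference, the lazy-caching scheme the commit message describes can be sketched roughly like this (illustrative names, not the actual kernel identifiers; the protected-mode checks and VMCS read are replaced by a stub):

```c
/* Minimal sketch of the lazy-caching pattern: the CPL is computed at
 * most once per vmexit, and the cached value is dropped whenever state
 * that feeds into it (RFLAGS, CR0, segments) is written. */

enum { EXREG_CPL = 0 };

struct vcpu {
    unsigned long regs_avail;  /* bit set => cached value is valid */
    int cpl;                   /* cached CPL */
    int hw_reads;              /* counts expensive recomputations */
};

/* stand-in for the real protected-mode checks + VMCS read */
static int compute_cpl(struct vcpu *v)
{
    v->hw_reads++;
    return 3;                  /* pretend the guest runs at CPL 3 */
}

static int get_cpl(struct vcpu *v)
{
    if (!(v->regs_avail & (1UL << EXREG_CPL))) {
        v->cpl = compute_cpl(v);
        v->regs_avail |= 1UL << EXREG_CPL;
    }
    return v->cpl;
}

/* every path that can change the CPL must drop the cache, mirroring
 * the __clear_bit(VCPU_EXREG_CPL, ...) calls the patch adds */
static void invalidate_cpl(struct vcpu *v)
{
    v->regs_avail &= ~(1UL << EXREG_CPL);
}
```

The correctness of the scheme rests entirely on invalidating at every write path that can affect the CPL, which is exactly what the question above is probing (vmx_set_efer toggles long mode via EFER.LMA, so it can change how the CPL is derived).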

> Signed-off-by: Avi Kivity <avi@redhat.com>
> ---
>  arch/x86/include/asm/kvm_host.h |    1 +
>  arch/x86/kvm/vmx.c              |   17 ++++++++++++++++-
>  2 files changed, 17 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 80f3070..4a2496d 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -119,6 +119,7 @@ enum kvm_reg_ex {
>  	VCPU_EXREG_PDPTR = NR_VCPU_REGS,
>  	VCPU_EXREG_CR3,
>  	VCPU_EXREG_RFLAGS,
> +	VCPU_EXREG_CPL,
>  };
>  
>  enum {
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 82dbebd..87e3d86 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -128,6 +128,7 @@ struct vcpu_vmx {
>  	unsigned long         host_rsp;
>  	int                   launched;
>  	u8                    fail;
> +	u8                    cpl;
>  	u32                   exit_intr_info;
>  	u32                   idt_vectoring_info;
>  	ulong                 rflags;
> @@ -986,6 +987,7 @@ static unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu)
>  static void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
>  {
>  	__set_bit(VCPU_EXREG_RFLAGS, (ulong *)&vcpu->arch.regs_avail);
> +	__clear_bit(VCPU_EXREG_CPL, (ulong *)&vcpu->arch.regs_avail);
>  	to_vmx(vcpu)->rflags = rflags;
>  	if (to_vmx(vcpu)->rmode.vm86_active) {
>  		to_vmx(vcpu)->rmode.save_rflags = rflags;
> @@ -1992,6 +1994,7 @@ static void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
>  	vmcs_writel(CR0_READ_SHADOW, cr0);
>  	vmcs_writel(GUEST_CR0, hw_cr0);
>  	vcpu->arch.cr0 = cr0;
> +	__clear_bit(VCPU_EXREG_CPL, (ulong *)&vcpu->arch.regs_avail);
>  }
>  
>  static u64 construct_eptp(unsigned long root_hpa)
> @@ -2102,7 +2105,7 @@ static u64 vmx_get_segment_base(struct kvm_vcpu *vcpu, int seg)
>  	return vmcs_readl(sf->base);
>  }
>  
> -static int vmx_get_cpl(struct kvm_vcpu *vcpu)
> +static int __vmx_get_cpl(struct kvm_vcpu *vcpu)
>  {
>  	if (!is_protmode(vcpu))
>  		return 0;
> @@ -2114,6 +2117,16 @@ static int vmx_get_cpl(struct kvm_vcpu *vcpu)
>  	return vmcs_read16(GUEST_CS_SELECTOR) & 3;
>  }
>  
> +static int vmx_get_cpl(struct kvm_vcpu *vcpu)
> +{
> +	if (!test_bit(VCPU_EXREG_CPL, (ulong *)&vcpu->arch.regs_avail)) {
> +		__set_bit(VCPU_EXREG_CPL, (ulong *)&vcpu->arch.regs_avail);
> +		to_vmx(vcpu)->cpl = __vmx_get_cpl(vcpu);
> +	}
> +	return to_vmx(vcpu)->cpl;
> +}
> +
> +
>  static u32 vmx_segment_access_rights(struct kvm_segment *var)
>  {
>  	u32 ar;
> @@ -2179,6 +2192,7 @@ static void vmx_set_segment(struct kvm_vcpu *vcpu,
>  		ar |= 0x1; /* Accessed */
>  
>  	vmcs_write32(sf->ar_bytes, ar);
> +	__clear_bit(VCPU_EXREG_CPL, (ulong *)&vcpu->arch.regs_avail);
>  }
>  
>  static void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
> @@ -4116,6 +4130,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
>  
>  	vcpu->arch.regs_avail = ~((1 << VCPU_REGS_RIP) | (1 << VCPU_REGS_RSP)
>  				  | (1 << VCPU_EXREG_RFLAGS)
> +				  | (1 << VCPU_EXREG_CPL)
>  				  | (1 << VCPU_EXREG_PDPTR)
>  				  | (1 << VCPU_EXREG_CR3));
>  	vcpu->arch.regs_dirty = 0;
> -- 
> 1.7.1

--
			Gleb.


Thread overview: 15+ messages
2011-03-08 13:57 [PATCH 0/9] Some vmx vmexit optimizations Avi Kivity
2011-03-08 13:57 ` [PATCH 1/9] KVM: Use kvm_get_rflags() and kvm_set_rflags() instead of the raw versions Avi Kivity
2011-03-08 13:57 ` [PATCH 2/9] KVM: VMX: Optimize vmx_get_rflags() Avi Kivity
2011-03-08 13:57 ` [PATCH 3/9] KVM: VMX: Optimize vmx_get_cpl() Avi Kivity
2011-03-08 13:57 ` [PATCH 4/9] KVM: VMX: Cache cpl Avi Kivity
2011-03-08 14:20   ` Gleb Natapov [this message]
2011-03-08 14:29     ` Avi Kivity
2011-03-08 14:38       ` Gleb Natapov
2011-03-08 13:57 ` [PATCH 5/9] KVM: VMX: Avoid vmx_recover_nmi_blocking() when unneeded Avi Kivity
2011-03-08 13:57 ` [PATCH 6/9] KVM: VMX: Qualify check for host NMI Avi Kivity
2011-03-08 13:57 ` [PATCH 7/9] KVM: VMX: Refactor vmx_complete_atomic_exit() Avi Kivity
2011-03-15 21:40   ` Marcelo Tosatti
2011-03-21  8:55     ` Avi Kivity
2011-03-08 13:57 ` [PATCH 8/9] KVM: VMX: Don't VMREAD VM_EXIT_INTR_INFO unconditionally Avi Kivity
2011-03-08 13:57 ` [PATCH 9/9] KVM: VMX: Use cached VM_EXIT_INTR_INFO in handle_exception Avi Kivity
