* Re: [PATCH] KVM: Remove arch specific components from the general code
@ 2007-07-26 23:50 Gregory Haskins
From: Gregory Haskins @ 2007-07-26 23:50 UTC (permalink / raw)
To: rusty-8n+1lVoiYb80n/F98K4Iww; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Fri, 2007-07-27 at 09:27 +1000, Rusty Russell wrote:
> So the in-kernel apic code has to traverse every element in the array?
> That is clearly better because?
I can't speak for Eddie's implementation, but the one that I had worked
on did in fact take advantage of the array. Not for traversing it as
you suggested, but rather for efficient mapping of vcpu ID to APIC. The
modeled "APIC BUS" already knew its targets. It would then use
"kvm->vcpus[target]" to find the right APIC.
That being said, I really saw my implementation as "cheating".
In a clean design, the apic-bus model should be independent of the kvm
to vcpu mapping. The apic-bus should never really "see" the entire VM.
It should only see itself as a bus with some interconnected APICs to
talk to. If that were true, your decision to move the kvm->vcpu array
to a list would be inconsequential to the in-kernel APIC code. This
extra level of modeling never came to be in my now defunct series,
however.
So in summary, I don't think moving to a list like you did is a terrible
thing, but other provisions will have to be made if you are not
providing a good mapping mechanism. E.g. either a proper apic-bus model
will have to map processor-id to apic independently, or your kvm->vcpu
mapping will need to support relatively efficient lookups (an hlist,
maybe?).
> We get to place an artificial maximum
> and keep a ceiling variable like the existing code does?
This always bothered me too. But I guess realistically some reasonable
ceiling could probably be found (something in the 8-64 range sounds
right to me) if the dynamic list idea is shot down. If we do go this
static route, it should probably be in the Kconfig.
>
> > > + spin_lock(&kvm->lock);
> > > + /* What do we care if they create duplicate CPU ids? But be nice. */
> > >
> >
> > Oh, we care. Esp. the apic code.
>
> Yeah, I left the check although it's currently unneeded, but I still
> don't think we should care. We shouldn't use the cpu_id internally, but
> use pointers. Sure, the guest might get confused, but that's not the kvm
> module's problem.
As Avi said, only the APIC (or more specifically, the apic-bus model)
should care. But this is _very_ important. Interrupt messages have
destination addresses expressed in terms of these IDs ;)
Regards,
-Greg
-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems? Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
* Re: [PATCH] KVM: Remove arch specific components from the general code
@ 2007-07-26 13:51 Gregory Haskins
From: Gregory Haskins @ 2007-07-26 13:51 UTC (permalink / raw)
To: avi-atKUWr5tajBWk0Htik3J/w; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Thu, 2007-07-26 at 16:37 +0300, Avi Kivity wrote:
> Gregory Haskins wrote:
> > On Thu, 2007-07-26 at 15:33 +0300, Avi Kivity wrote:
> >
> >
> >> The suggestion I liked best (Anthony's) was
> >>
> >> struct vmx_vcpu {
> >> struct kvm_vcpu vcpu;
> >> void *vmcs;
> >> // other vmx-specific data
> >> };
> >>
> >> to move from a kvm_vcpu to a vmx_vcpu, you use container_of() (inside
> >> your vmx() inline). Look ma, no pointers!
> >>
> >
> > Hmm..well, you still have pointers. The advantage is that they are
> > implicitly maintained, but now you have to do pointer arithmetic to
> > compute it. :( I personally would probably rather have the explicit
> > management code than the run-time overhead....
> >
>
> Well, the pointer arithmetic consists of adding or subtracting zero,
> which gcc luckily knows how to deal with.
Ah... I see. The trick is that we make it the first field. Clever!
>
> It's the equivalent of C++ inheritance; no overhead at all.
You mean non-virtuals, right ;)
>
> Paul Turner did something much like yours,
Ah, I wasn't aware, thanks. What is the status? I guess I came this far,
so I will make the final change and you can take it or leave it at your
discretion. The important thing is to get this cleaned up more than who
did it, IMO.
> and Rusty is working on
> unraveling the vcpu array, which is complementary to this work.
I saw Rusty's email. Cool!
* Re: [PATCH] KVM: Remove arch specific components from the general code
@ 2007-07-26 13:16 Gregory Haskins
From: Gregory Haskins @ 2007-07-26 13:16 UTC (permalink / raw)
To: avi-atKUWr5tajBWk0Htik3J/w; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Thu, 2007-07-26 at 15:33 +0300, Avi Kivity wrote:
> The suggestion I liked best (Anthony's) was
>
> struct vmx_vcpu {
> struct kvm_vcpu vcpu;
> void *vmcs;
> // other vmx-specific data
> };
>
> to move from a kvm_vcpu to a vmx_vcpu, you use container_of() (inside
> your vmx() inline). Look ma, no pointers!
Hmm... well, you still have pointers. The advantage is that they are
implicitly maintained, but now you have to do pointer arithmetic to
compute it. :( I personally would probably rather have the explicit
management code than the run-time overhead....
But alas, the rest of Linux is moving in this direction as well, so I
think this recommendation makes a lot of sense. Note that it
complicates the initialization code a little bit, but it's not a big
deal. I will incorporate this into the next spin.
>
>
> Too many people are touching this area without coordinating (and one
> patch will invalidate all the rest).
Ah, I wasn't aware of this. Who else is working on it?
* [PATCH] KVM: Remove arch specific components from the general code
@ 2007-07-26 13:16 Gregory Haskins
From: Gregory Haskins @ 2007-07-26 13:16 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
(v2)
Incorporates all feedback except for the arch-pointer stuff which is still
being discussed.
Signed-off-by: Gregory Haskins <ghaskins-Et1tbQHTxzrQT0dZR+AlfA@public.gmane.org>
---
drivers/kvm/kvm.h | 31 -----
drivers/kvm/kvm_main.c | 26 +---
drivers/kvm/kvm_svm.h | 3
drivers/kvm/svm.c | 322 +++++++++++++++++++++++++-----------------------
drivers/kvm/vmx.c | 233 +++++++++++++++++++++--------------
5 files changed, 318 insertions(+), 297 deletions(-)
diff --git a/drivers/kvm/kvm.h b/drivers/kvm/kvm.h
index 03f374e..16958fd 100644
--- a/drivers/kvm/kvm.h
+++ b/drivers/kvm/kvm.h
@@ -16,7 +16,6 @@
#include <linux/preempt.h>
#include <asm/signal.h>
-#include "vmx.h"
#include <linux/kvm.h>
#include <linux/kvm_para.h>
@@ -141,14 +140,6 @@ struct kvm_mmu_page {
};
};
-struct vmcs {
- u32 revision_id;
- u32 abort;
- char data[0];
-};
-
-#define vmx_msr_entry kvm_msr_entry
-
struct kvm_vcpu;
/*
@@ -310,16 +301,13 @@ void kvm_io_bus_register_dev(struct kvm_io_bus *bus,
struct kvm_io_device *dev);
struct kvm_vcpu {
+ int valid;
struct kvm *kvm;
struct preempt_hook preempt_hook;
int vcpu_id;
- union {
- struct vmcs *vmcs;
- struct vcpu_svm *svm;
- };
+ void *_priv;
struct mutex mutex;
int cpu;
- int launched;
u64 host_tsc;
struct kvm_run *run;
int interrupt_window_open;
@@ -342,14 +330,6 @@ struct kvm_vcpu {
u64 shadow_efer;
u64 apic_base;
u64 ia32_misc_enable_msr;
- int nmsrs;
- int save_nmsrs;
- int msr_offset_efer;
-#ifdef CONFIG_X86_64
- int msr_offset_kernel_gs_base;
-#endif
- struct vmx_msr_entry *guest_msrs;
- struct vmx_msr_entry *host_msrs;
struct kvm_mmu mmu;
@@ -368,11 +348,6 @@ struct kvm_vcpu {
char *guest_fx_image;
int fpu_active;
int guest_fpu_loaded;
- struct vmx_host_state {
- int loaded;
- u16 fs_sel, gs_sel, ldt_sel;
- int fs_gs_ldt_reload_needed;
- } vmx_host_state;
int mmio_needed;
int mmio_read_completed;
@@ -581,8 +556,6 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data);
void fx_init(struct kvm_vcpu *vcpu);
-void load_msrs(struct vmx_msr_entry *e, int n);
-void save_msrs(struct vmx_msr_entry *e, int n);
void kvm_resched(struct kvm_vcpu *vcpu);
void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
index ef67572..d86b430 100644
--- a/drivers/kvm/kvm_main.c
+++ b/drivers/kvm/kvm_main.c
@@ -377,7 +377,7 @@ static void free_pio_guest_pages(struct kvm_vcpu *vcpu)
static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
{
- if (!vcpu->vmcs)
+ if (!vcpu->valid)
return;
vcpu_load(vcpu);
@@ -387,7 +387,7 @@ static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
static void kvm_free_vcpu(struct kvm_vcpu *vcpu)
{
- if (!vcpu->vmcs)
+ if (!vcpu->valid)
return;
vcpu_load(vcpu);
@@ -1655,24 +1655,6 @@ void kvm_resched(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_resched);
-void load_msrs(struct vmx_msr_entry *e, int n)
-{
- int i;
-
- for (i = 0; i < n; ++i)
- wrmsrl(e[i].index, e[i].data);
-}
-EXPORT_SYMBOL_GPL(load_msrs);
-
-void save_msrs(struct vmx_msr_entry *e, int n)
-{
- int i;
-
- for (i = 0; i < n; ++i)
- rdmsrl(e[i].index, e[i].data);
-}
-EXPORT_SYMBOL_GPL(save_msrs);
-
void kvm_emulate_cpuid(struct kvm_vcpu *vcpu)
{
int i;
@@ -2409,7 +2391,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
mutex_lock(&vcpu->mutex);
- if (vcpu->vmcs) {
+ if (vcpu->valid) {
mutex_unlock(&vcpu->mutex);
return -EEXIST;
}
@@ -2461,6 +2443,8 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
kvm->nvcpus = n + 1;
spin_unlock(&kvm_lock);
+ vcpu->valid = 1;
+
return r;
out_free_vcpus:
diff --git a/drivers/kvm/kvm_svm.h b/drivers/kvm/kvm_svm.h
index a869983..82e5d77 100644
--- a/drivers/kvm/kvm_svm.h
+++ b/drivers/kvm/kvm_svm.h
@@ -20,7 +20,10 @@ static const u32 host_save_user_msrs[] = {
#define NR_HOST_SAVE_USER_MSRS ARRAY_SIZE(host_save_user_msrs)
#define NUM_DB_REGS 4
+struct kvm_vcpu;
+
struct vcpu_svm {
+ struct kvm_vcpu *vcpu;
struct vmcb *vmcb;
unsigned long vmcb_pa;
struct svm_cpu_data *svm_data;
diff --git a/drivers/kvm/svm.c b/drivers/kvm/svm.c
index ac7d8b0..3346d73 100644
--- a/drivers/kvm/svm.c
+++ b/drivers/kvm/svm.c
@@ -49,6 +49,11 @@ MODULE_LICENSE("GPL");
#define SVM_FEATURE_LBRV (1 << 1)
#define SVM_DEATURE_SVML (1 << 2)
+static inline struct vcpu_svm* svm(struct kvm_vcpu *vcpu)
+{
+ return (struct vcpu_svm*)vcpu->_priv;
+}
+
unsigned long iopm_base;
unsigned long msrpm_base;
@@ -95,7 +100,7 @@ static inline u32 svm_has(u32 feat)
static unsigned get_addr_size(struct kvm_vcpu *vcpu)
{
- struct vmcb_save_area *sa = &vcpu->svm->vmcb->save;
+ struct vmcb_save_area *sa = &svm(vcpu)->vmcb->save;
u16 cs_attrib;
if (!(sa->cr0 & X86_CR0_PE) || (sa->rflags & X86_EFLAGS_VM))
@@ -181,7 +186,7 @@ static inline void write_dr7(unsigned long val)
static inline void force_new_asid(struct kvm_vcpu *vcpu)
{
- vcpu->svm->asid_generation--;
+ svm(vcpu)->asid_generation--;
}
static inline void flush_guest_tlb(struct kvm_vcpu *vcpu)
@@ -194,22 +199,22 @@ static void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
if (!(efer & KVM_EFER_LMA))
efer &= ~KVM_EFER_LME;
- vcpu->svm->vmcb->save.efer = efer | MSR_EFER_SVME_MASK;
+ svm(vcpu)->vmcb->save.efer = efer | MSR_EFER_SVME_MASK;
vcpu->shadow_efer = efer;
}
static void svm_inject_gp(struct kvm_vcpu *vcpu, unsigned error_code)
{
- vcpu->svm->vmcb->control.event_inj = SVM_EVTINJ_VALID |
+ svm(vcpu)->vmcb->control.event_inj = SVM_EVTINJ_VALID |
SVM_EVTINJ_VALID_ERR |
SVM_EVTINJ_TYPE_EXEPT |
GP_VECTOR;
- vcpu->svm->vmcb->control.event_inj_err = error_code;
+ svm(vcpu)->vmcb->control.event_inj_err = error_code;
}
static void inject_ud(struct kvm_vcpu *vcpu)
{
- vcpu->svm->vmcb->control.event_inj = SVM_EVTINJ_VALID |
+ svm(vcpu)->vmcb->control.event_inj = SVM_EVTINJ_VALID |
SVM_EVTINJ_TYPE_EXEPT |
UD_VECTOR;
}
@@ -228,19 +233,19 @@ static int is_external_interrupt(u32 info)
static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
{
- if (!vcpu->svm->next_rip) {
+ if (!svm(vcpu)->next_rip) {
printk(KERN_DEBUG "%s: NOP\n", __FUNCTION__);
return;
}
- if (vcpu->svm->next_rip - vcpu->svm->vmcb->save.rip > 15) {
+ if (svm(vcpu)->next_rip - svm(vcpu)->vmcb->save.rip > 15) {
printk(KERN_ERR "%s: ip 0x%llx next 0x%llx\n",
__FUNCTION__,
- vcpu->svm->vmcb->save.rip,
- vcpu->svm->next_rip);
+ svm(vcpu)->vmcb->save.rip,
+ svm(vcpu)->next_rip);
}
- vcpu->rip = vcpu->svm->vmcb->save.rip = vcpu->svm->next_rip;
- vcpu->svm->vmcb->control.int_state &= ~SVM_INTERRUPT_SHADOW_MASK;
+ vcpu->rip = svm(vcpu)->vmcb->save.rip = svm(vcpu)->next_rip;
+ svm(vcpu)->vmcb->control.int_state &= ~SVM_INTERRUPT_SHADOW_MASK;
vcpu->interrupt_window_open = 1;
}
@@ -569,23 +574,27 @@ static void init_vmcb(struct vmcb *vmcb)
static int svm_create_vcpu(struct kvm_vcpu *vcpu)
{
+ struct vcpu_svm *svm;
struct page *page;
int r;
r = -ENOMEM;
- vcpu->svm = kzalloc(sizeof *vcpu->svm, GFP_KERNEL);
- if (!vcpu->svm)
+ svm = kzalloc(sizeof *svm, GFP_KERNEL);
+ if (!svm)
goto out1;
page = alloc_page(GFP_KERNEL);
if (!page)
goto out2;
- vcpu->svm->vmcb = page_address(page);
- clear_page(vcpu->svm->vmcb);
- vcpu->svm->vmcb_pa = page_to_pfn(page) << PAGE_SHIFT;
- vcpu->svm->asid_generation = 0;
- memset(vcpu->svm->db_regs, 0, sizeof(vcpu->svm->db_regs));
- init_vmcb(vcpu->svm->vmcb);
+ svm->vmcb = page_address(page);
+ clear_page(svm->vmcb);
+ svm->vmcb_pa = page_to_pfn(page) << PAGE_SHIFT;
+ svm->asid_generation = 0;
+ memset(svm->db_regs, 0, sizeof(svm->db_regs));
+ init_vmcb(svm->vmcb);
+
+ svm->vcpu = vcpu;
+ vcpu->_priv = svm;
fx_init(vcpu);
vcpu->fpu_active = 1;
@@ -596,18 +605,19 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
return 0;
out2:
- kfree(vcpu->svm);
+ kfree(svm);
out1:
return r;
}
static void svm_free_vcpu(struct kvm_vcpu *vcpu)
{
- if (!vcpu->svm)
+ if (!svm(vcpu))
return;
- if (vcpu->svm->vmcb)
- __free_page(pfn_to_page(vcpu->svm->vmcb_pa >> PAGE_SHIFT));
- kfree(vcpu->svm);
+ if (svm(vcpu)->vmcb)
+ __free_page(pfn_to_page(svm(vcpu)->vmcb_pa >> PAGE_SHIFT));
+ kfree(svm(vcpu));
+ vcpu->_priv = NULL;
}
static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
@@ -623,12 +633,12 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
*/
rdtscll(tsc_this);
delta = vcpu->host_tsc - tsc_this;
- vcpu->svm->vmcb->control.tsc_offset += delta;
+ svm(vcpu)->vmcb->control.tsc_offset += delta;
vcpu->cpu = cpu;
}
for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
- rdmsrl(host_save_user_msrs[i], vcpu->svm->host_user_msrs[i]);
+ rdmsrl(host_save_user_msrs[i], svm(vcpu)->host_user_msrs[i]);
}
static void svm_vcpu_put(struct kvm_vcpu *vcpu)
@@ -636,7 +646,7 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
int i;
for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
- wrmsrl(host_save_user_msrs[i], vcpu->svm->host_user_msrs[i]);
+ wrmsrl(host_save_user_msrs[i], svm(vcpu)->host_user_msrs[i]);
rdtscll(vcpu->host_tsc);
}
@@ -647,31 +657,31 @@ static void svm_vcpu_decache(struct kvm_vcpu *vcpu)
static void svm_cache_regs(struct kvm_vcpu *vcpu)
{
- vcpu->regs[VCPU_REGS_RAX] = vcpu->svm->vmcb->save.rax;
- vcpu->regs[VCPU_REGS_RSP] = vcpu->svm->vmcb->save.rsp;
- vcpu->rip = vcpu->svm->vmcb->save.rip;
+ vcpu->regs[VCPU_REGS_RAX] = svm(vcpu)->vmcb->save.rax;
+ vcpu->regs[VCPU_REGS_RSP] = svm(vcpu)->vmcb->save.rsp;
+ vcpu->rip = svm(vcpu)->vmcb->save.rip;
}
static void svm_decache_regs(struct kvm_vcpu *vcpu)
{
- vcpu->svm->vmcb->save.rax = vcpu->regs[VCPU_REGS_RAX];
- vcpu->svm->vmcb->save.rsp = vcpu->regs[VCPU_REGS_RSP];
- vcpu->svm->vmcb->save.rip = vcpu->rip;
+ svm(vcpu)->vmcb->save.rax = vcpu->regs[VCPU_REGS_RAX];
+ svm(vcpu)->vmcb->save.rsp = vcpu->regs[VCPU_REGS_RSP];
+ svm(vcpu)->vmcb->save.rip = vcpu->rip;
}
static unsigned long svm_get_rflags(struct kvm_vcpu *vcpu)
{
- return vcpu->svm->vmcb->save.rflags;
+ return svm(vcpu)->vmcb->save.rflags;
}
static void svm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
{
- vcpu->svm->vmcb->save.rflags = rflags;
+ svm(vcpu)->vmcb->save.rflags = rflags;
}
static struct vmcb_seg *svm_seg(struct kvm_vcpu *vcpu, int seg)
{
- struct vmcb_save_area *save = &vcpu->svm->vmcb->save;
+ struct vmcb_save_area *save = &svm(vcpu)->vmcb->save;
switch (seg) {
case VCPU_SREG_CS: return &save->cs;
@@ -723,26 +733,26 @@ static void svm_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
static void svm_get_idt(struct kvm_vcpu *vcpu, struct descriptor_table *dt)
{
- dt->limit = vcpu->svm->vmcb->save.idtr.limit;
- dt->base = vcpu->svm->vmcb->save.idtr.base;
+ dt->limit = svm(vcpu)->vmcb->save.idtr.limit;
+ dt->base = svm(vcpu)->vmcb->save.idtr.base;
}
static void svm_set_idt(struct kvm_vcpu *vcpu, struct descriptor_table *dt)
{
- vcpu->svm->vmcb->save.idtr.limit = dt->limit;
- vcpu->svm->vmcb->save.idtr.base = dt->base ;
+ svm(vcpu)->vmcb->save.idtr.limit = dt->limit;
+ svm(vcpu)->vmcb->save.idtr.base = dt->base ;
}
static void svm_get_gdt(struct kvm_vcpu *vcpu, struct descriptor_table *dt)
{
- dt->limit = vcpu->svm->vmcb->save.gdtr.limit;
- dt->base = vcpu->svm->vmcb->save.gdtr.base;
+ dt->limit = svm(vcpu)->vmcb->save.gdtr.limit;
+ dt->base = svm(vcpu)->vmcb->save.gdtr.base;
}
static void svm_set_gdt(struct kvm_vcpu *vcpu, struct descriptor_table *dt)
{
- vcpu->svm->vmcb->save.gdtr.limit = dt->limit;
- vcpu->svm->vmcb->save.gdtr.base = dt->base ;
+ svm(vcpu)->vmcb->save.gdtr.limit = dt->limit;
+ svm(vcpu)->vmcb->save.gdtr.base = dt->base ;
}
static void svm_decache_cr4_guest_bits(struct kvm_vcpu *vcpu)
@@ -755,30 +765,30 @@ static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
if (vcpu->shadow_efer & KVM_EFER_LME) {
if (!is_paging(vcpu) && (cr0 & X86_CR0_PG)) {
vcpu->shadow_efer |= KVM_EFER_LMA;
- vcpu->svm->vmcb->save.efer |= KVM_EFER_LMA | KVM_EFER_LME;
+ svm(vcpu)->vmcb->save.efer |= KVM_EFER_LMA | KVM_EFER_LME;
}
if (is_paging(vcpu) && !(cr0 & X86_CR0_PG) ) {
vcpu->shadow_efer &= ~KVM_EFER_LMA;
- vcpu->svm->vmcb->save.efer &= ~(KVM_EFER_LMA | KVM_EFER_LME);
+ svm(vcpu)->vmcb->save.efer &= ~(KVM_EFER_LMA | KVM_EFER_LME);
}
}
#endif
if ((vcpu->cr0 & X86_CR0_TS) && !(cr0 & X86_CR0_TS)) {
- vcpu->svm->vmcb->control.intercept_exceptions &= ~(1 << NM_VECTOR);
+ svm(vcpu)->vmcb->control.intercept_exceptions &= ~(1 << NM_VECTOR);
vcpu->fpu_active = 1;
}
vcpu->cr0 = cr0;
cr0 |= X86_CR0_PG | X86_CR0_WP;
cr0 &= ~(X86_CR0_CD | X86_CR0_NW);
- vcpu->svm->vmcb->save.cr0 = cr0;
+ svm(vcpu)->vmcb->save.cr0 = cr0;
}
static void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
{
vcpu->cr4 = cr4;
- vcpu->svm->vmcb->save.cr4 = cr4 | X86_CR4_PAE;
+ svm(vcpu)->vmcb->save.cr4 = cr4 | X86_CR4_PAE;
}
static void svm_set_segment(struct kvm_vcpu *vcpu,
@@ -802,16 +812,16 @@ static void svm_set_segment(struct kvm_vcpu *vcpu,
s->attrib |= (var->g & 1) << SVM_SELECTOR_G_SHIFT;
}
if (seg == VCPU_SREG_CS)
- vcpu->svm->vmcb->save.cpl
- = (vcpu->svm->vmcb->save.cs.attrib
+ svm(vcpu)->vmcb->save.cpl
+ = (svm(vcpu)->vmcb->save.cs.attrib
>> SVM_SELECTOR_DPL_SHIFT) & 3;
}
/* FIXME:
- vcpu->svm->vmcb->control.int_ctl &= ~V_TPR_MASK;
- vcpu->svm->vmcb->control.int_ctl |= (sregs->cr8 & V_TPR_MASK);
+ svm(vcpu)->vmcb->control.int_ctl &= ~V_TPR_MASK;
+ svm(vcpu)->vmcb->control.int_ctl |= (sregs->cr8 & V_TPR_MASK);
*/
@@ -823,14 +833,14 @@ static int svm_guest_debug(struct kvm_vcpu *vcpu, struct kvm_debug_guest *dbg)
static void load_host_msrs(struct kvm_vcpu *vcpu)
{
#ifdef CONFIG_X86_64
- wrmsrl(MSR_GS_BASE, vcpu->svm->host_gs_base);
+ wrmsrl(MSR_GS_BASE, svm(vcpu)->host_gs_base);
#endif
}
static void save_host_msrs(struct kvm_vcpu *vcpu)
{
#ifdef CONFIG_X86_64
- rdmsrl(MSR_GS_BASE, vcpu->svm->host_gs_base);
+ rdmsrl(MSR_GS_BASE, svm(vcpu)->host_gs_base);
#endif
}
@@ -839,22 +849,22 @@ static void new_asid(struct kvm_vcpu *vcpu, struct svm_cpu_data *svm_data)
if (svm_data->next_asid > svm_data->max_asid) {
++svm_data->asid_generation;
svm_data->next_asid = 1;
- vcpu->svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
+ svm(vcpu)->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
}
vcpu->cpu = svm_data->cpu;
- vcpu->svm->asid_generation = svm_data->asid_generation;
- vcpu->svm->vmcb->control.asid = svm_data->next_asid++;
+ svm(vcpu)->asid_generation = svm_data->asid_generation;
+ svm(vcpu)->vmcb->control.asid = svm_data->next_asid++;
}
static void svm_invlpg(struct kvm_vcpu *vcpu, gva_t address)
{
- invlpga(address, vcpu->svm->vmcb->control.asid); // is needed?
+ invlpga(address, svm(vcpu)->vmcb->control.asid); // is needed?
}
static unsigned long svm_get_dr(struct kvm_vcpu *vcpu, int dr)
{
- return vcpu->svm->db_regs[dr];
+ return svm(vcpu)->db_regs[dr];
}
static void svm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long value,
@@ -862,16 +872,16 @@ static void svm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long value,
{
*exception = 0;
- if (vcpu->svm->vmcb->save.dr7 & DR7_GD_MASK) {
- vcpu->svm->vmcb->save.dr7 &= ~DR7_GD_MASK;
- vcpu->svm->vmcb->save.dr6 |= DR6_BD_MASK;
+ if (svm(vcpu)->vmcb->save.dr7 & DR7_GD_MASK) {
+ svm(vcpu)->vmcb->save.dr7 &= ~DR7_GD_MASK;
+ svm(vcpu)->vmcb->save.dr6 |= DR6_BD_MASK;
*exception = DB_VECTOR;
return;
}
switch (dr) {
case 0 ... 3:
- vcpu->svm->db_regs[dr] = value;
+ svm(vcpu)->db_regs[dr] = value;
return;
case 4 ... 5:
if (vcpu->cr4 & X86_CR4_DE) {
@@ -883,7 +893,7 @@ static void svm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long value,
*exception = GP_VECTOR;
return;
}
- vcpu->svm->vmcb->save.dr7 = value;
+ svm(vcpu)->vmcb->save.dr7 = value;
return;
}
default:
@@ -896,7 +906,7 @@ static void svm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long value,
static int pf_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- u32 exit_int_info = vcpu->svm->vmcb->control.exit_int_info;
+ u32 exit_int_info = svm(vcpu)->vmcb->control.exit_int_info;
u64 fault_address;
u32 error_code;
enum emulation_result er;
@@ -907,8 +917,8 @@ static int pf_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
mutex_lock(&vcpu->kvm->lock);
- fault_address = vcpu->svm->vmcb->control.exit_info_2;
- error_code = vcpu->svm->vmcb->control.exit_info_1;
+ fault_address = svm(vcpu)->vmcb->control.exit_info_2;
+ error_code = svm(vcpu)->vmcb->control.exit_info_1;
r = kvm_mmu_page_fault(vcpu, fault_address, error_code);
if (r < 0) {
mutex_unlock(&vcpu->kvm->lock);
@@ -940,9 +950,9 @@ static int pf_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
static int nm_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- vcpu->svm->vmcb->control.intercept_exceptions &= ~(1 << NM_VECTOR);
+ svm(vcpu)->vmcb->control.intercept_exceptions &= ~(1 << NM_VECTOR);
if (!(vcpu->cr0 & X86_CR0_TS))
- vcpu->svm->vmcb->save.cr0 &= ~X86_CR0_TS;
+ svm(vcpu)->vmcb->save.cr0 &= ~X86_CR0_TS;
vcpu->fpu_active = 1;
return 1;
@@ -954,8 +964,8 @@ static int shutdown_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
* VMCB is undefined after a SHUTDOWN intercept
* so reinitialize it.
*/
- clear_page(vcpu->svm->vmcb);
- init_vmcb(vcpu->svm->vmcb);
+ clear_page(svm(vcpu)->vmcb);
+ init_vmcb(svm(vcpu)->vmcb);
kvm_run->exit_reason = KVM_EXIT_SHUTDOWN;
return 0;
@@ -970,18 +980,18 @@ static int io_get_override(struct kvm_vcpu *vcpu,
gva_t rip;
int i;
- rip = vcpu->svm->vmcb->save.rip;
- ins_length = vcpu->svm->next_rip - rip;
- rip += vcpu->svm->vmcb->save.cs.base;
+ rip = svm(vcpu)->vmcb->save.rip;
+ ins_length = svm(vcpu)->next_rip - rip;
+ rip += svm(vcpu)->vmcb->save.cs.base;
if (ins_length > MAX_INST_SIZE)
printk(KERN_DEBUG
"%s: inst length err, cs base 0x%llx rip 0x%llx "
"next rip 0x%llx ins_length %u\n",
__FUNCTION__,
- vcpu->svm->vmcb->save.cs.base,
- vcpu->svm->vmcb->save.rip,
- vcpu->svm->vmcb->control.exit_info_2,
+ svm(vcpu)->vmcb->save.cs.base,
+ svm(vcpu)->vmcb->save.rip,
+ svm(vcpu)->vmcb->control.exit_info_2,
ins_length);
if (kvm_read_guest(vcpu, rip, ins_length, inst) != ins_length)
@@ -1001,22 +1011,22 @@ static int io_get_override(struct kvm_vcpu *vcpu,
*addr_override = 1;
continue;
case 0x2e:
- *seg = &vcpu->svm->vmcb->save.cs;
+ *seg = &svm(vcpu)->vmcb->save.cs;
continue;
case 0x36:
- *seg = &vcpu->svm->vmcb->save.ss;
+ *seg = &svm(vcpu)->vmcb->save.ss;
continue;
case 0x3e:
- *seg = &vcpu->svm->vmcb->save.ds;
+ *seg = &svm(vcpu)->vmcb->save.ds;
continue;
case 0x26:
- *seg = &vcpu->svm->vmcb->save.es;
+ *seg = &svm(vcpu)->vmcb->save.es;
continue;
case 0x64:
- *seg = &vcpu->svm->vmcb->save.fs;
+ *seg = &svm(vcpu)->vmcb->save.fs;
continue;
case 0x65:
- *seg = &vcpu->svm->vmcb->save.gs;
+ *seg = &svm(vcpu)->vmcb->save.gs;
continue;
default:
return 1;
@@ -1031,7 +1041,7 @@ static unsigned long io_adress(struct kvm_vcpu *vcpu, int ins, gva_t *address)
unsigned long *reg;
struct vmcb_seg *seg;
int addr_override;
- struct vmcb_save_area *save_area = &vcpu->svm->vmcb->save;
+ struct vmcb_save_area *save_area = &svm(vcpu)->vmcb->save;
u16 cs_attrib = save_area->cs.attrib;
unsigned addr_size = get_addr_size(vcpu);
@@ -1043,16 +1053,16 @@ static unsigned long io_adress(struct kvm_vcpu *vcpu, int ins, gva_t *address)
if (ins) {
reg = &vcpu->regs[VCPU_REGS_RDI];
- seg = &vcpu->svm->vmcb->save.es;
+ seg = &svm(vcpu)->vmcb->save.es;
} else {
reg = &vcpu->regs[VCPU_REGS_RSI];
- seg = (seg) ? seg : &vcpu->svm->vmcb->save.ds;
+ seg = (seg) ? seg : &svm(vcpu)->vmcb->save.ds;
}
addr_mask = ~0ULL >> (64 - (addr_size * 8));
if ((cs_attrib & SVM_SELECTOR_L_MASK) &&
- !(vcpu->svm->vmcb->save.rflags & X86_EFLAGS_VM)) {
+ !(svm(vcpu)->vmcb->save.rflags & X86_EFLAGS_VM)) {
*address = (*reg & addr_mask);
return addr_mask;
}
@@ -1068,7 +1078,7 @@ static unsigned long io_adress(struct kvm_vcpu *vcpu, int ins, gva_t *address)
static int io_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- u32 io_info = vcpu->svm->vmcb->control.exit_info_1; //address size bug?
+ u32 io_info = svm(vcpu)->vmcb->control.exit_info_1; //address size bug?
int size, down, in, string, rep;
unsigned port;
unsigned long count;
@@ -1076,7 +1086,7 @@ static int io_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
++vcpu->stat.io_exits;
- vcpu->svm->next_rip = vcpu->svm->vmcb->control.exit_info_2;
+ svm(vcpu)->next_rip = svm(vcpu)->vmcb->control.exit_info_2;
in = (io_info & SVM_IOIO_TYPE_MASK) != 0;
port = io_info >> 16;
@@ -1084,7 +1094,7 @@ static int io_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
string = (io_info & SVM_IOIO_STR_MASK) != 0;
rep = (io_info & SVM_IOIO_REP_MASK) != 0;
count = 1;
- down = (vcpu->svm->vmcb->save.rflags & X86_EFLAGS_DF) != 0;
+ down = (svm(vcpu)->vmcb->save.rflags & X86_EFLAGS_DF) != 0;
if (string) {
unsigned addr_mask;
@@ -1110,14 +1120,14 @@ static int nop_on_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
static int halt_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- vcpu->svm->next_rip = vcpu->svm->vmcb->save.rip + 1;
+ svm(vcpu)->next_rip = svm(vcpu)->vmcb->save.rip + 1;
skip_emulated_instruction(vcpu);
return kvm_emulate_halt(vcpu);
}
static int vmmcall_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- vcpu->svm->next_rip = vcpu->svm->vmcb->save.rip + 3;
+ svm(vcpu)->next_rip = svm(vcpu)->vmcb->save.rip + 3;
skip_emulated_instruction(vcpu);
return kvm_hypercall(vcpu, kvm_run);
}
@@ -1137,7 +1147,7 @@ static int task_switch_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_r
static int cpuid_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- vcpu->svm->next_rip = vcpu->svm->vmcb->save.rip + 2;
+ svm(vcpu)->next_rip = svm(vcpu)->vmcb->save.rip + 2;
kvm_emulate_cpuid(vcpu);
return 1;
}
@@ -1156,34 +1166,34 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 *data)
u64 tsc;
rdtscll(tsc);
- *data = vcpu->svm->vmcb->control.tsc_offset + tsc;
+ *data = svm(vcpu)->vmcb->control.tsc_offset + tsc;
break;
}
case MSR_K6_STAR:
- *data = vcpu->svm->vmcb->save.star;
+ *data = svm(vcpu)->vmcb->save.star;
break;
#ifdef CONFIG_X86_64
case MSR_LSTAR:
- *data = vcpu->svm->vmcb->save.lstar;
+ *data = svm(vcpu)->vmcb->save.lstar;
break;
case MSR_CSTAR:
- *data = vcpu->svm->vmcb->save.cstar;
+ *data = svm(vcpu)->vmcb->save.cstar;
break;
case MSR_KERNEL_GS_BASE:
- *data = vcpu->svm->vmcb->save.kernel_gs_base;
+ *data = svm(vcpu)->vmcb->save.kernel_gs_base;
break;
case MSR_SYSCALL_MASK:
- *data = vcpu->svm->vmcb->save.sfmask;
+ *data = svm(vcpu)->vmcb->save.sfmask;
break;
#endif
case MSR_IA32_SYSENTER_CS:
- *data = vcpu->svm->vmcb->save.sysenter_cs;
+ *data = svm(vcpu)->vmcb->save.sysenter_cs;
break;
case MSR_IA32_SYSENTER_EIP:
- *data = vcpu->svm->vmcb->save.sysenter_eip;
+ *data = svm(vcpu)->vmcb->save.sysenter_eip;
break;
case MSR_IA32_SYSENTER_ESP:
- *data = vcpu->svm->vmcb->save.sysenter_esp;
+ *data = svm(vcpu)->vmcb->save.sysenter_esp;
break;
default:
return kvm_get_msr_common(vcpu, ecx, data);
@@ -1199,9 +1209,9 @@ static int rdmsr_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
if (svm_get_msr(vcpu, ecx, &data))
svm_inject_gp(vcpu, 0);
else {
- vcpu->svm->vmcb->save.rax = data & 0xffffffff;
+ svm(vcpu)->vmcb->save.rax = data & 0xffffffff;
vcpu->regs[VCPU_REGS_RDX] = data >> 32;
- vcpu->svm->next_rip = vcpu->svm->vmcb->save.rip + 2;
+ svm(vcpu)->next_rip = svm(vcpu)->vmcb->save.rip + 2;
skip_emulated_instruction(vcpu);
}
return 1;
@@ -1214,34 +1224,34 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 data)
u64 tsc;
rdtscll(tsc);
- vcpu->svm->vmcb->control.tsc_offset = data - tsc;
+ svm(vcpu)->vmcb->control.tsc_offset = data - tsc;
break;
}
case MSR_K6_STAR:
- vcpu->svm->vmcb->save.star = data;
+ svm(vcpu)->vmcb->save.star = data;
break;
#ifdef CONFIG_X86_64
case MSR_LSTAR:
- vcpu->svm->vmcb->save.lstar = data;
+ svm(vcpu)->vmcb->save.lstar = data;
break;
case MSR_CSTAR:
- vcpu->svm->vmcb->save.cstar = data;
+ svm(vcpu)->vmcb->save.cstar = data;
break;
case MSR_KERNEL_GS_BASE:
- vcpu->svm->vmcb->save.kernel_gs_base = data;
+ svm(vcpu)->vmcb->save.kernel_gs_base = data;
break;
case MSR_SYSCALL_MASK:
- vcpu->svm->vmcb->save.sfmask = data;
+ svm(vcpu)->vmcb->save.sfmask = data;
break;
#endif
case MSR_IA32_SYSENTER_CS:
- vcpu->svm->vmcb->save.sysenter_cs = data;
+ svm(vcpu)->vmcb->save.sysenter_cs = data;
break;
case MSR_IA32_SYSENTER_EIP:
- vcpu->svm->vmcb->save.sysenter_eip = data;
+ svm(vcpu)->vmcb->save.sysenter_eip = data;
break;
case MSR_IA32_SYSENTER_ESP:
- vcpu->svm->vmcb->save.sysenter_esp = data;
+ svm(vcpu)->vmcb->save.sysenter_esp = data;
break;
default:
return kvm_set_msr_common(vcpu, ecx, data);
@@ -1252,9 +1262,9 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 data)
static int wrmsr_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
u32 ecx = vcpu->regs[VCPU_REGS_RCX];
- u64 data = (vcpu->svm->vmcb->save.rax & -1u)
+ u64 data = (svm(vcpu)->vmcb->save.rax & -1u)
| ((u64)(vcpu->regs[VCPU_REGS_RDX] & -1u) << 32);
- vcpu->svm->next_rip = vcpu->svm->vmcb->save.rip + 2;
+ svm(vcpu)->next_rip = svm(vcpu)->vmcb->save.rip + 2;
if (svm_set_msr(vcpu, ecx, data))
svm_inject_gp(vcpu, 0);
else
@@ -1264,7 +1274,7 @@ static int wrmsr_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
static int msr_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- if (vcpu->svm->vmcb->control.exit_info_1)
+ if (svm(vcpu)->vmcb->control.exit_info_1)
return wrmsr_interception(vcpu, kvm_run);
else
return rdmsr_interception(vcpu, kvm_run);
@@ -1336,13 +1346,13 @@ static int (*svm_exit_handlers[])(struct kvm_vcpu *vcpu,
static int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- u32 exit_code = vcpu->svm->vmcb->control.exit_code;
+ u32 exit_code = svm(vcpu)->vmcb->control.exit_code;
- if (is_external_interrupt(vcpu->svm->vmcb->control.exit_int_info) &&
+ if (is_external_interrupt(svm(vcpu)->vmcb->control.exit_int_info) &&
exit_code != SVM_EXIT_EXCP_BASE + PF_VECTOR)
printk(KERN_ERR "%s: unexpected exit_ini_info 0x%x "
"exit_code 0x%x\n",
- __FUNCTION__, vcpu->svm->vmcb->control.exit_int_info,
+ __FUNCTION__, svm(vcpu)->vmcb->control.exit_int_info,
exit_code);
if (exit_code >= ARRAY_SIZE(svm_exit_handlers)
@@ -1370,9 +1380,9 @@ static void pre_svm_run(struct kvm_vcpu *vcpu)
struct svm_cpu_data *svm_data = per_cpu(svm_data, cpu);
- vcpu->svm->vmcb->control.tlb_ctl = TLB_CONTROL_DO_NOTHING;
+ svm(vcpu)->vmcb->control.tlb_ctl = TLB_CONTROL_DO_NOTHING;
if (vcpu->cpu != cpu ||
- vcpu->svm->asid_generation != svm_data->asid_generation)
+ svm(vcpu)->asid_generation != svm_data->asid_generation)
new_asid(vcpu, svm_data);
}
@@ -1381,7 +1391,7 @@ static inline void kvm_do_inject_irq(struct kvm_vcpu *vcpu)
{
struct vmcb_control_area *control;
- control = &vcpu->svm->vmcb->control;
+ control = &svm(vcpu)->vmcb->control;
control->int_vector = pop_irq(vcpu);
control->int_ctl &= ~V_INTR_PRIO_MASK;
control->int_ctl |= V_IRQ_MASK |
@@ -1390,7 +1400,7 @@ static inline void kvm_do_inject_irq(struct kvm_vcpu *vcpu)
static void kvm_reput_irq(struct kvm_vcpu *vcpu)
{
- struct vmcb_control_area *control = &vcpu->svm->vmcb->control;
+ struct vmcb_control_area *control = &svm(vcpu)->vmcb->control;
if (control->int_ctl & V_IRQ_MASK) {
control->int_ctl &= ~V_IRQ_MASK;
@@ -1404,11 +1414,11 @@ static void kvm_reput_irq(struct kvm_vcpu *vcpu)
static void do_interrupt_requests(struct kvm_vcpu *vcpu,
struct kvm_run *kvm_run)
{
- struct vmcb_control_area *control = &vcpu->svm->vmcb->control;
+ struct vmcb_control_area *control = &svm(vcpu)->vmcb->control;
vcpu->interrupt_window_open =
(!(control->int_state & SVM_INTERRUPT_SHADOW_MASK) &&
- (vcpu->svm->vmcb->save.rflags & X86_EFLAGS_IF));
+ (svm(vcpu)->vmcb->save.rflags & X86_EFLAGS_IF));
if (vcpu->interrupt_window_open && vcpu->irq_summary)
/*
@@ -1431,7 +1441,7 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu,
{
kvm_run->ready_for_interrupt_injection = (vcpu->interrupt_window_open &&
vcpu->irq_summary == 0);
- kvm_run->if_flag = (vcpu->svm->vmcb->save.rflags & X86_EFLAGS_IF) != 0;
+ kvm_run->if_flag = (svm(vcpu)->vmcb->save.rflags & X86_EFLAGS_IF) != 0;
kvm_run->cr8 = vcpu->cr8;
kvm_run->apic_base = vcpu->apic_base;
}
@@ -1448,7 +1458,7 @@ static int dm_request_for_irq_injection(struct kvm_vcpu *vcpu,
return (!vcpu->irq_summary &&
kvm_run->request_interrupt_window &&
vcpu->interrupt_window_open &&
- (vcpu->svm->vmcb->save.rflags & X86_EFLAGS_IF));
+ (svm(vcpu)->vmcb->save.rflags & X86_EFLAGS_IF));
}
static void save_db_regs(unsigned long *db_regs)
@@ -1500,15 +1510,15 @@ again:
fs_selector = read_fs();
gs_selector = read_gs();
ldt_selector = read_ldt();
- vcpu->svm->host_cr2 = kvm_read_cr2();
- vcpu->svm->host_dr6 = read_dr6();
- vcpu->svm->host_dr7 = read_dr7();
- vcpu->svm->vmcb->save.cr2 = vcpu->cr2;
+ svm(vcpu)->host_cr2 = kvm_read_cr2();
+ svm(vcpu)->host_dr6 = read_dr6();
+ svm(vcpu)->host_dr7 = read_dr7();
+ svm(vcpu)->vmcb->save.cr2 = vcpu->cr2;
- if (vcpu->svm->vmcb->save.dr7 & 0xff) {
+ if (svm(vcpu)->vmcb->save.dr7 & 0xff) {
write_dr7(0);
- save_db_regs(vcpu->svm->host_db_regs);
- load_db_regs(vcpu->svm->db_regs);
+ save_db_regs(svm(vcpu)->host_db_regs);
+ load_db_regs(svm(vcpu)->db_regs);
}
if (vcpu->fpu_active) {
@@ -1605,7 +1615,7 @@ again:
#endif
:
: [vcpu]"a"(vcpu),
- [svm]"i"(offsetof(struct kvm_vcpu, svm)),
+ [svm]"i"(offsetof(struct kvm_vcpu, _priv)),
[vmcb]"i"(offsetof(struct vcpu_svm, vmcb_pa)),
[rbx]"i"(offsetof(struct kvm_vcpu, regs[VCPU_REGS_RBX])),
[rcx]"i"(offsetof(struct kvm_vcpu, regs[VCPU_REGS_RCX])),
@@ -1632,14 +1642,14 @@ again:
fx_restore(vcpu->host_fx_image);
}
- if ((vcpu->svm->vmcb->save.dr7 & 0xff))
- load_db_regs(vcpu->svm->host_db_regs);
+ if ((svm(vcpu)->vmcb->save.dr7 & 0xff))
+ load_db_regs(svm(vcpu)->host_db_regs);
- vcpu->cr2 = vcpu->svm->vmcb->save.cr2;
+ vcpu->cr2 = svm(vcpu)->vmcb->save.cr2;
- write_dr6(vcpu->svm->host_dr6);
- write_dr7(vcpu->svm->host_dr7);
- kvm_write_cr2(vcpu->svm->host_cr2);
+ write_dr6(svm(vcpu)->host_dr6);
+ write_dr7(svm(vcpu)->host_dr7);
+ kvm_write_cr2(svm(vcpu)->host_cr2);
load_fs(fs_selector);
load_gs(gs_selector);
@@ -1653,18 +1663,18 @@ again:
*/
if (unlikely(prof_on == KVM_PROFILING))
profile_hit(KVM_PROFILING,
- (void *)(unsigned long)vcpu->svm->vmcb->save.rip);
+ (void *)(unsigned long)svm(vcpu)->vmcb->save.rip);
stgi();
kvm_reput_irq(vcpu);
- vcpu->svm->next_rip = 0;
+ svm(vcpu)->next_rip = 0;
- if (vcpu->svm->vmcb->control.exit_code == SVM_EXIT_ERR) {
+ if (svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_ERR) {
kvm_run->exit_reason = KVM_EXIT_FAIL_ENTRY;
kvm_run->fail_entry.hardware_entry_failure_reason
- = vcpu->svm->vmcb->control.exit_code;
+ = svm(vcpu)->vmcb->control.exit_code;
post_kvm_run_save(vcpu, kvm_run);
return 0;
}
@@ -1693,12 +1703,12 @@ again:
static void svm_set_cr3(struct kvm_vcpu *vcpu, unsigned long root)
{
- vcpu->svm->vmcb->save.cr3 = root;
+ svm(vcpu)->vmcb->save.cr3 = root;
force_new_asid(vcpu);
if (vcpu->fpu_active) {
- vcpu->svm->vmcb->control.intercept_exceptions |= (1 << NM_VECTOR);
- vcpu->svm->vmcb->save.cr0 |= X86_CR0_TS;
+ svm(vcpu)->vmcb->control.intercept_exceptions |= (1 << NM_VECTOR);
+ svm(vcpu)->vmcb->save.cr0 |= X86_CR0_TS;
vcpu->fpu_active = 0;
}
}
@@ -1707,26 +1717,26 @@ static void svm_inject_page_fault(struct kvm_vcpu *vcpu,
unsigned long addr,
uint32_t err_code)
{
- uint32_t exit_int_info = vcpu->svm->vmcb->control.exit_int_info;
+ uint32_t exit_int_info = svm(vcpu)->vmcb->control.exit_int_info;
++vcpu->stat.pf_guest;
if (is_page_fault(exit_int_info)) {
- vcpu->svm->vmcb->control.event_inj_err = 0;
- vcpu->svm->vmcb->control.event_inj = SVM_EVTINJ_VALID |
+ svm(vcpu)->vmcb->control.event_inj_err = 0;
+ svm(vcpu)->vmcb->control.event_inj = SVM_EVTINJ_VALID |
SVM_EVTINJ_VALID_ERR |
SVM_EVTINJ_TYPE_EXEPT |
DF_VECTOR;
return;
}
vcpu->cr2 = addr;
- vcpu->svm->vmcb->save.cr2 = addr;
- vcpu->svm->vmcb->control.event_inj = SVM_EVTINJ_VALID |
+ svm(vcpu)->vmcb->save.cr2 = addr;
+ svm(vcpu)->vmcb->control.event_inj = SVM_EVTINJ_VALID |
SVM_EVTINJ_VALID_ERR |
SVM_EVTINJ_TYPE_EXEPT |
PF_VECTOR;
- vcpu->svm->vmcb->control.event_inj_err = err_code;
+ svm(vcpu)->vmcb->control.event_inj_err = err_code;
}
diff --git a/drivers/kvm/vmx.c b/drivers/kvm/vmx.c
index d58457b..2485f0e 100644
--- a/drivers/kvm/vmx.c
+++ b/drivers/kvm/vmx.c
@@ -32,6 +32,37 @@
MODULE_AUTHOR("Qumranet");
MODULE_LICENSE("GPL");
+struct vmcs {
+ u32 revision_id;
+ u32 abort;
+ char data[0];
+};
+
+struct vcpu_vmx {
+ struct kvm_vcpu *vcpu;
+ int launched;
+ struct kvm_msr_entry *guest_msrs;
+ struct kvm_msr_entry *host_msrs;
+ int nmsrs;
+ int save_nmsrs;
+ int msr_offset_efer;
+#ifdef CONFIG_X86_64
+ int msr_offset_kernel_gs_base;
+#endif
+ struct vmcs *vmcs;
+ struct {
+ int loaded;
+ u16 fs_sel, gs_sel, ldt_sel;
+ int fs_gs_ldt_reload_needed;
+	} host_state;
+
+};
+
+static inline struct vcpu_vmx* vmx(struct kvm_vcpu *vcpu)
+{
+ return (struct vcpu_vmx*)vcpu->_priv;
+}
+
static int init_rmode_tss(struct kvm *kvm);
static DEFINE_PER_CPU(struct vmcs *, vmxarea);
@@ -89,16 +120,32 @@ static const u32 vmx_msr_index[] = {
};
#define NR_VMX_MSR ARRAY_SIZE(vmx_msr_index)
-static inline u64 msr_efer_save_restore_bits(struct vmx_msr_entry msr)
+static void load_msrs(struct kvm_msr_entry *e, int n)
+{
+ int i;
+
+ for (i = 0; i < n; ++i)
+ wrmsrl(e[i].index, e[i].data);
+}
+
+static void save_msrs(struct kvm_msr_entry *e, int n)
+{
+ int i;
+
+ for (i = 0; i < n; ++i)
+ rdmsrl(e[i].index, e[i].data);
+}
+
+static inline u64 msr_efer_save_restore_bits(struct kvm_msr_entry msr)
{
return (u64)msr.data & EFER_SAVE_RESTORE_BITS;
}
static inline int msr_efer_need_save_restore(struct kvm_vcpu *vcpu)
{
- int efer_offset = vcpu->msr_offset_efer;
- return msr_efer_save_restore_bits(vcpu->host_msrs[efer_offset]) !=
- msr_efer_save_restore_bits(vcpu->guest_msrs[efer_offset]);
+ int efer_offset = vmx(vcpu)->msr_offset_efer;
+ return msr_efer_save_restore_bits(vmx(vcpu)->host_msrs[efer_offset]) !=
+ msr_efer_save_restore_bits(vmx(vcpu)->guest_msrs[efer_offset]);
}
static inline int is_page_fault(u32 intr_info)
@@ -125,19 +172,19 @@ static int __find_msr_index(struct kvm_vcpu *vcpu, u32 msr)
{
int i;
- for (i = 0; i < vcpu->nmsrs; ++i)
- if (vcpu->guest_msrs[i].index == msr)
+ for (i = 0; i < vmx(vcpu)->nmsrs; ++i)
+ if (vmx(vcpu)->guest_msrs[i].index == msr)
return i;
return -1;
}
-static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+static struct kvm_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
{
int i;
i = __find_msr_index(vcpu, msr);
if (i >= 0)
- return &vcpu->guest_msrs[i];
+ return &vmx(vcpu)->guest_msrs[i];
return NULL;
}
@@ -160,8 +207,8 @@ static void __vcpu_clear(void *arg)
int cpu = raw_smp_processor_id();
if (vcpu->cpu == cpu)
- vmcs_clear(vcpu->vmcs);
- if (per_cpu(current_vmcs, cpu) == vcpu->vmcs)
+ vmcs_clear(vmx(vcpu)->vmcs);
+ if (per_cpu(current_vmcs, cpu) == vmx(vcpu)->vmcs)
per_cpu(current_vmcs, cpu) = NULL;
rdtscll(vcpu->host_tsc);
}
@@ -172,7 +219,7 @@ static void vcpu_clear(struct kvm_vcpu *vcpu)
smp_call_function_single(vcpu->cpu, __vcpu_clear, vcpu, 0, 1);
else
__vcpu_clear(vcpu);
- vcpu->launched = 0;
+ vmx(vcpu)->launched = 0;
}
static unsigned long vmcs_readl(unsigned long field)
@@ -285,81 +332,78 @@ static void reload_tss(void)
static void load_transition_efer(struct kvm_vcpu *vcpu)
{
u64 trans_efer;
- int efer_offset = vcpu->msr_offset_efer;
+ int efer_offset = vmx(vcpu)->msr_offset_efer;
- trans_efer = vcpu->host_msrs[efer_offset].data;
+ trans_efer = vmx(vcpu)->host_msrs[efer_offset].data;
trans_efer &= ~EFER_SAVE_RESTORE_BITS;
trans_efer |= msr_efer_save_restore_bits(
- vcpu->guest_msrs[efer_offset]);
+ vmx(vcpu)->guest_msrs[efer_offset]);
wrmsrl(MSR_EFER, trans_efer);
vcpu->stat.efer_reload++;
}
static void vmx_save_host_state(struct kvm_vcpu *vcpu)
{
- struct vmx_host_state *hs = &vcpu->vmx_host_state;
-
- if (hs->loaded)
+ if (vmx(vcpu)->host_state.loaded)
return;
- hs->loaded = 1;
+ vmx(vcpu)->host_state.loaded = 1;
/*
* Set host fs and gs selectors. Unfortunately, 22.2.3 does not
* allow segment selectors with cpl > 0 or ti == 1.
*/
- hs->ldt_sel = read_ldt();
- hs->fs_gs_ldt_reload_needed = hs->ldt_sel;
- hs->fs_sel = read_fs();
- if (!(hs->fs_sel & 7))
- vmcs_write16(HOST_FS_SELECTOR, hs->fs_sel);
+ vmx(vcpu)->host_state.ldt_sel = read_ldt();
+ vmx(vcpu)->host_state.fs_gs_ldt_reload_needed = vmx(vcpu)->host_state.ldt_sel;
+ vmx(vcpu)->host_state.fs_sel = read_fs();
+ if (!(vmx(vcpu)->host_state.fs_sel & 7))
+ vmcs_write16(HOST_FS_SELECTOR, vmx(vcpu)->host_state.fs_sel);
else {
vmcs_write16(HOST_FS_SELECTOR, 0);
- hs->fs_gs_ldt_reload_needed = 1;
+ vmx(vcpu)->host_state.fs_gs_ldt_reload_needed = 1;
}
- hs->gs_sel = read_gs();
- if (!(hs->gs_sel & 7))
- vmcs_write16(HOST_GS_SELECTOR, hs->gs_sel);
+ vmx(vcpu)->host_state.gs_sel = read_gs();
+ if (!(vmx(vcpu)->host_state.gs_sel & 7))
+ vmcs_write16(HOST_GS_SELECTOR, vmx(vcpu)->host_state.gs_sel);
else {
vmcs_write16(HOST_GS_SELECTOR, 0);
- hs->fs_gs_ldt_reload_needed = 1;
+ vmx(vcpu)->host_state.fs_gs_ldt_reload_needed = 1;
}
#ifdef CONFIG_X86_64
vmcs_writel(HOST_FS_BASE, read_msr(MSR_FS_BASE));
vmcs_writel(HOST_GS_BASE, read_msr(MSR_GS_BASE));
#else
- vmcs_writel(HOST_FS_BASE, segment_base(hs->fs_sel));
- vmcs_writel(HOST_GS_BASE, segment_base(hs->gs_sel));
+ vmcs_writel(HOST_FS_BASE, segment_base(vmx(vcpu)->host_state.fs_sel));
+ vmcs_writel(HOST_GS_BASE, segment_base(vmx(vcpu)->host_state.gs_sel));
#endif
#ifdef CONFIG_X86_64
if (is_long_mode(vcpu)) {
- save_msrs(vcpu->host_msrs + vcpu->msr_offset_kernel_gs_base, 1);
+ save_msrs(vmx(vcpu)->host_msrs + vmx(vcpu)->msr_offset_kernel_gs_base, 1);
}
#endif
- load_msrs(vcpu->guest_msrs, vcpu->save_nmsrs);
+ load_msrs(vmx(vcpu)->guest_msrs, vmx(vcpu)->save_nmsrs);
if (msr_efer_need_save_restore(vcpu))
load_transition_efer(vcpu);
}
static void vmx_load_host_state(struct kvm_vcpu *vcpu)
{
- struct vmx_host_state *hs = &vcpu->vmx_host_state;
unsigned long flags;
- if (!hs->loaded)
+ if (!vmx(vcpu)->host_state.loaded)
return;
- hs->loaded = 0;
- if (hs->fs_gs_ldt_reload_needed) {
- load_ldt(hs->ldt_sel);
- load_fs(hs->fs_sel);
+ vmx(vcpu)->host_state.loaded = 0;
+ if (vmx(vcpu)->host_state.fs_gs_ldt_reload_needed) {
+ load_ldt(vmx(vcpu)->host_state.ldt_sel);
+ load_fs(vmx(vcpu)->host_state.fs_sel);
/*
* If we have to reload gs, we must take care to
* preserve our gs base.
*/
local_irq_save(flags);
- load_gs(hs->gs_sel);
+ load_gs(vmx(vcpu)->host_state.gs_sel);
#ifdef CONFIG_X86_64
wrmsrl(MSR_GS_BASE, vmcs_readl(HOST_GS_BASE));
#endif
@@ -367,10 +411,10 @@ static void vmx_load_host_state(struct kvm_vcpu *vcpu)
reload_tss();
}
- save_msrs(vcpu->guest_msrs, vcpu->save_nmsrs);
- load_msrs(vcpu->host_msrs, vcpu->save_nmsrs);
+ save_msrs(vmx(vcpu)->guest_msrs, vmx(vcpu)->save_nmsrs);
+ load_msrs(vmx(vcpu)->host_msrs, vmx(vcpu)->save_nmsrs);
if (msr_efer_need_save_restore(vcpu))
- load_msrs(vcpu->host_msrs + vcpu->msr_offset_efer, 1);
+ load_msrs(vmx(vcpu)->host_msrs + vmx(vcpu)->msr_offset_efer, 1);
}
/*
@@ -379,22 +423,22 @@ static void vmx_load_host_state(struct kvm_vcpu *vcpu)
*/
static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
- u64 phys_addr = __pa(vcpu->vmcs);
+ u64 phys_addr = __pa(vmx(vcpu)->vmcs);
u64 tsc_this, delta;
if (vcpu->cpu != cpu)
vcpu_clear(vcpu);
- if (per_cpu(current_vmcs, cpu) != vcpu->vmcs) {
+ if (per_cpu(current_vmcs, cpu) != vmx(vcpu)->vmcs) {
u8 error;
- per_cpu(current_vmcs, cpu) = vcpu->vmcs;
+ per_cpu(current_vmcs, cpu) = vmx(vcpu)->vmcs;
asm volatile (ASM_VMX_VMPTRLD_RAX "; setna %0"
: "=g"(error) : "a"(&phys_addr), "m"(phys_addr)
: "cc");
if (error)
printk(KERN_ERR "kvm: vmptrld %p/%llx fail\n",
- vcpu->vmcs, phys_addr);
+ vmx(vcpu)->vmcs, phys_addr);
}
if (vcpu->cpu != cpu) {
@@ -500,13 +544,13 @@ static void vmx_inject_gp(struct kvm_vcpu *vcpu, unsigned error_code)
*/
void move_msr_up(struct kvm_vcpu *vcpu, int from, int to)
{
- struct vmx_msr_entry tmp;
- tmp = vcpu->guest_msrs[to];
- vcpu->guest_msrs[to] = vcpu->guest_msrs[from];
- vcpu->guest_msrs[from] = tmp;
- tmp = vcpu->host_msrs[to];
- vcpu->host_msrs[to] = vcpu->host_msrs[from];
- vcpu->host_msrs[from] = tmp;
+ struct kvm_msr_entry tmp;
+ tmp = vmx(vcpu)->guest_msrs[to];
+ vmx(vcpu)->guest_msrs[to] = vmx(vcpu)->guest_msrs[from];
+ vmx(vcpu)->guest_msrs[from] = tmp;
+ tmp = vmx(vcpu)->host_msrs[to];
+ vmx(vcpu)->host_msrs[to] = vmx(vcpu)->host_msrs[from];
+ vmx(vcpu)->host_msrs[from] = tmp;
}
/*
@@ -544,13 +588,13 @@ static void setup_msrs(struct kvm_vcpu *vcpu)
move_msr_up(vcpu, index, save_nmsrs++);
}
#endif
- vcpu->save_nmsrs = save_nmsrs;
+ vmx(vcpu)->save_nmsrs = save_nmsrs;
#ifdef CONFIG_X86_64
- vcpu->msr_offset_kernel_gs_base =
+ vmx(vcpu)->msr_offset_kernel_gs_base =
__find_msr_index(vcpu, MSR_KERNEL_GS_BASE);
#endif
- vcpu->msr_offset_efer = __find_msr_index(vcpu, MSR_EFER);
+ vmx(vcpu)->msr_offset_efer = __find_msr_index(vcpu, MSR_EFER);
}
/*
@@ -586,7 +630,7 @@ static void guest_write_tsc(u64 guest_tsc)
static int vmx_get_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 *pdata)
{
u64 data;
- struct vmx_msr_entry *msr;
+ struct kvm_msr_entry *msr;
if (!pdata) {
printk(KERN_ERR "BUG: get_msr called with NULL pdata\n");
@@ -636,14 +680,14 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 *pdata)
*/
static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
{
- struct vmx_msr_entry *msr;
+ struct kvm_msr_entry *msr;
int ret = 0;
switch (msr_index) {
#ifdef CONFIG_X86_64
case MSR_EFER:
ret = kvm_set_msr_common(vcpu, msr_index, data);
- if (vcpu->vmx_host_state.loaded)
+ if (vmx(vcpu)->host_state.loaded)
load_transition_efer(vcpu);
break;
case MSR_FS_BASE:
@@ -669,8 +713,8 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
msr = find_msr_entry(vcpu, msr_index);
if (msr) {
msr->data = data;
- if (vcpu->vmx_host_state.loaded)
- load_msrs(vcpu->guest_msrs, vcpu->save_nmsrs);
+ if (vmx(vcpu)->host_state.loaded)
+ load_msrs(vmx(vcpu)->guest_msrs, vmx(vcpu)->save_nmsrs);
break;
}
ret = kvm_set_msr_common(vcpu, msr_index, data);
@@ -1042,7 +1086,7 @@ static void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
static void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
{
- struct vmx_msr_entry *msr = find_msr_entry(vcpu, MSR_EFER);
+ struct kvm_msr_entry *msr = find_msr_entry(vcpu, MSR_EFER);
vcpu->shadow_efer = efer;
if (efer & EFER_LMA) {
@@ -1374,18 +1418,18 @@ static int vmx_vcpu_setup(struct kvm_vcpu *vcpu)
u32 index = vmx_msr_index[i];
u32 data_low, data_high;
u64 data;
- int j = vcpu->nmsrs;
+ int j = vmx(vcpu)->nmsrs;
if (rdmsr_safe(index, &data_low, &data_high) < 0)
continue;
if (wrmsr_safe(index, data_low, data_high) < 0)
continue;
data = data_low | ((u64)data_high << 32);
- vcpu->host_msrs[j].index = index;
- vcpu->host_msrs[j].reserved = 0;
- vcpu->host_msrs[j].data = data;
- vcpu->guest_msrs[j] = vcpu->host_msrs[j];
- ++vcpu->nmsrs;
+ vmx(vcpu)->host_msrs[j].index = index;
+ vmx(vcpu)->host_msrs[j].reserved = 0;
+ vmx(vcpu)->host_msrs[j].data = data;
+ vmx(vcpu)->guest_msrs[j] = vmx(vcpu)->host_msrs[j];
+ ++vmx(vcpu)->nmsrs;
}
setup_msrs(vcpu);
@@ -2114,7 +2158,7 @@ again:
#endif
"setbe %0 \n\t"
: "=q" (fail)
- : "r"(vcpu->launched), "d"((unsigned long)HOST_RSP),
+ : "r"(vmx(vcpu)->launched), "d"((unsigned long)HOST_RSP),
"c"(vcpu),
[rax]"i"(offsetof(struct kvm_vcpu, regs[VCPU_REGS_RAX])),
[rbx]"i"(offsetof(struct kvm_vcpu, regs[VCPU_REGS_RBX])),
@@ -2144,7 +2188,7 @@ again:
vcpu->interrupt_window_open = (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & 3) == 0;
asm ("mov %0, %%ds; mov %0, %%es" : : "r"(__USER_DS));
- vcpu->launched = 1;
+ vmx(vcpu)->launched = 1;
preempt_enable();
@@ -2225,10 +2269,11 @@ static void vmx_inject_page_fault(struct kvm_vcpu *vcpu,
static void vmx_free_vmcs(struct kvm_vcpu *vcpu)
{
- if (vcpu->vmcs) {
+
+ if (vmx(vcpu)->vmcs) {
on_each_cpu(__vcpu_clear, vcpu, 0, 1);
- free_vmcs(vcpu->vmcs);
- vcpu->vmcs = NULL;
+ free_vmcs(vmx(vcpu)->vmcs);
+ vmx(vcpu)->vmcs = NULL;
}
}
@@ -2239,33 +2284,39 @@ static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
{
- struct vmcs *vmcs;
+ struct vcpu_vmx *vmx;
- vcpu->guest_msrs = kmalloc(PAGE_SIZE, GFP_KERNEL);
- if (!vcpu->guest_msrs)
+ vmx = kzalloc(sizeof(*vmx), GFP_KERNEL);
+ if (!vmx)
return -ENOMEM;
- vcpu->host_msrs = kmalloc(PAGE_SIZE, GFP_KERNEL);
- if (!vcpu->host_msrs)
- goto out_free_guest_msrs;
+ vmx->guest_msrs = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!vmx->guest_msrs)
+ goto out_free;
- vmcs = alloc_vmcs();
- if (!vmcs)
- goto out_free_msrs;
+ vmx->host_msrs = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!vmx->host_msrs)
+ goto out_free;
- vmcs_clear(vmcs);
- vcpu->vmcs = vmcs;
- vcpu->launched = 0;
+ vmx->vmcs = alloc_vmcs();
+ if (!vmx->vmcs)
+ goto out_free;
+
+ vmcs_clear(vmx->vmcs);
+
+ vmx->vcpu = vcpu;
+ vcpu->_priv = vmx;
return 0;
-out_free_msrs:
- kfree(vcpu->host_msrs);
- vcpu->host_msrs = NULL;
+out_free:
+ if (vmx->host_msrs)
+ kfree(vmx->host_msrs);
+
+ if (vmx->guest_msrs)
+ kfree(vmx->guest_msrs);
-out_free_guest_msrs:
- kfree(vcpu->guest_msrs);
- vcpu->guest_msrs = NULL;
+ kfree(vmx);
return -ENOMEM;
}
-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems? Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
* Re: [PATCH] KVM: Remove arch specific components from the general code
@ 2007-07-26 12:20 Gregory Haskins
[not found] ` <46A8594E0200005A0002831D-Igcdv/6uVdMHoYOw/+koYqIwWpluYiW7@public.gmane.org>
0 siblings, 1 reply; 17+ messages in thread
From: Gregory Haskins @ 2007-07-26 12:20 UTC (permalink / raw)
To: avi-atKUWr5tajBWk0Htik3J/w; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Thu, 2007-07-26 at 13:03 +0300, Avi Kivity wrote:
> This is okay as a temporary measure, until the vcpu array is replaced by
> a vcpu pointer array (when vcpu->valid can be replaced by vcpu != NULL).
Ack
>
>
> > struct kvm *kvm;
> > struct preempt_hook preempt_hook;
> > int vcpu_id;
> > - union {
> > - struct vmcs *vmcs;
> > - struct vcpu_svm *svm;
> > - };
> > + void *_priv;
> >
>
> The other cooks did this by embedding one of the structures within
> the other, which reduces indirections.
I don't understand what you mean. Can you elaborate more?
>
> > @@ -1664,7 +1664,7 @@ void load_msrs(struct vmx_msr_entry *e, int n)
> > }
> > EXPORT_SYMBOL_GPL(load_msrs);
> >
> > -void save_msrs(struct vmx_msr_entry *e, int n)
> > +void save_msrs(struct kvm_msr_entry *e, int n)
> > {
> > int i;
> >
>
> This isn't actually used in svm, so we can keep the name vmx_msr_entry
Note that the structure really is called "kvm_msr_entry" and there was a
lingering typedef to name it "vmx_msr_entry" for places that still used
the vmx variant. I just cleaned that up and got rid of the typedef.
>
> and just move the function.
Ack. If this can be done, I will move it. I think I recall that it was
used in the core code, which is why I didn't touch it already.
> It need not be done in this patch.
Ack
>
> > --- a/drivers/kvm/svm.c
> > +++ b/drivers/kvm/svm.c
> > @@ -49,6 +49,8 @@ MODULE_LICENSE("GPL");
> > #define SVM_FEATURE_LBRV (1 << 1)
> > #define SVM_DEATURE_SVML (1 << 2)
> >
> > +#define svm(vcpu) ((struct vcpu_svm*)vcpu->_priv)
> > +
> >
>
> Inline function.
Ack
>
> > };
> > +
> > +#define vmx(vcpu) ((struct vcpu_vmx*)vcpu->_priv)
> > +
> >
>
> Inline function.
Ack
>
>
> Patch looks good except for the extra indirection. But maybe a later
> patch can do that. Paul? Rusty?
>
* [PATCH] KVM: Remove arch specific components from the general code
@ 2007-07-26 4:31 Gregory Haskins
[not found] ` <20070726042948.5893.58975.stgit-sLgBBP33vUGnsjUZhwzVf9HuzzzSOjJt@public.gmane.org>
0 siblings, 1 reply; 17+ messages in thread
From: Gregory Haskins @ 2007-07-26 4:31 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
This is a cleanup patch to "de-VMX" the general code. It was developed in the
preempt-hooks branch, but it should apply elsewhere as well.
Signed-off-by: Gregory Haskins <ghaskins-Et1tbQHTxzrQT0dZR+AlfA@public.gmane.org>
---
drivers/kvm/kvm.h | 33 +----
drivers/kvm/kvm_main.c | 12 +-
drivers/kvm/kvm_svm.h | 3
drivers/kvm/svm.c | 320 +++++++++++++++++++++++++-----------------------
drivers/kvm/vmx.c | 225 ++++++++++++++++++++--------------
5 files changed, 313 insertions(+), 280 deletions(-)
diff --git a/drivers/kvm/kvm.h b/drivers/kvm/kvm.h
index 03f374e..f3725b6 100644
--- a/drivers/kvm/kvm.h
+++ b/drivers/kvm/kvm.h
@@ -16,7 +16,6 @@
#include <linux/preempt.h>
#include <asm/signal.h>
-#include "vmx.h"
#include <linux/kvm.h>
#include <linux/kvm_para.h>
@@ -141,14 +140,6 @@ struct kvm_mmu_page {
};
};
-struct vmcs {
- u32 revision_id;
- u32 abort;
- char data[0];
-};
-
-#define vmx_msr_entry kvm_msr_entry
-
struct kvm_vcpu;
/*
@@ -310,16 +301,13 @@ void kvm_io_bus_register_dev(struct kvm_io_bus *bus,
struct kvm_io_device *dev);
struct kvm_vcpu {
+ int valid;
struct kvm *kvm;
struct preempt_hook preempt_hook;
int vcpu_id;
- union {
- struct vmcs *vmcs;
- struct vcpu_svm *svm;
- };
+ void *_priv;
struct mutex mutex;
int cpu;
- int launched;
u64 host_tsc;
struct kvm_run *run;
int interrupt_window_open;
@@ -342,14 +330,6 @@ struct kvm_vcpu {
u64 shadow_efer;
u64 apic_base;
u64 ia32_misc_enable_msr;
- int nmsrs;
- int save_nmsrs;
- int msr_offset_efer;
-#ifdef CONFIG_X86_64
- int msr_offset_kernel_gs_base;
-#endif
- struct vmx_msr_entry *guest_msrs;
- struct vmx_msr_entry *host_msrs;
struct kvm_mmu mmu;
@@ -368,11 +348,6 @@ struct kvm_vcpu {
char *guest_fx_image;
int fpu_active;
int guest_fpu_loaded;
- struct vmx_host_state {
- int loaded;
- u16 fs_sel, gs_sel, ldt_sel;
- int fs_gs_ldt_reload_needed;
- } vmx_host_state;
int mmio_needed;
int mmio_read_completed;
@@ -581,8 +556,8 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data);
void fx_init(struct kvm_vcpu *vcpu);
-void load_msrs(struct vmx_msr_entry *e, int n);
-void save_msrs(struct vmx_msr_entry *e, int n);
+void load_msrs(struct kvm_msr_entry *e, int n);
+void save_msrs(struct kvm_msr_entry *e, int n);
void kvm_resched(struct kvm_vcpu *vcpu);
void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
index ef67572..fec5eb5 100644
--- a/drivers/kvm/kvm_main.c
+++ b/drivers/kvm/kvm_main.c
@@ -377,7 +377,7 @@ static void free_pio_guest_pages(struct kvm_vcpu *vcpu)
static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
{
- if (!vcpu->vmcs)
+ if (!vcpu->valid)
return;
vcpu_load(vcpu);
@@ -387,7 +387,7 @@ static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
static void kvm_free_vcpu(struct kvm_vcpu *vcpu)
{
- if (!vcpu->vmcs)
+ if (!vcpu->valid)
return;
vcpu_load(vcpu);
@@ -1655,7 +1655,7 @@ void kvm_resched(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_resched);
-void load_msrs(struct vmx_msr_entry *e, int n)
+void load_msrs(struct kvm_msr_entry *e, int n)
{
int i;
@@ -1664,7 +1664,7 @@ void load_msrs(struct vmx_msr_entry *e, int n)
}
EXPORT_SYMBOL_GPL(load_msrs);
-void save_msrs(struct vmx_msr_entry *e, int n)
+void save_msrs(struct kvm_msr_entry *e, int n)
{
int i;
@@ -2409,7 +2409,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
mutex_lock(&vcpu->mutex);
- if (vcpu->vmcs) {
+ if (vcpu->valid) {
mutex_unlock(&vcpu->mutex);
return -EEXIST;
}
@@ -2461,6 +2461,8 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
kvm->nvcpus = n + 1;
spin_unlock(&kvm_lock);
+ vcpu->valid = 1;
+
return r;
out_free_vcpus:
diff --git a/drivers/kvm/kvm_svm.h b/drivers/kvm/kvm_svm.h
index a869983..82e5d77 100644
--- a/drivers/kvm/kvm_svm.h
+++ b/drivers/kvm/kvm_svm.h
@@ -20,7 +20,10 @@ static const u32 host_save_user_msrs[] = {
#define NR_HOST_SAVE_USER_MSRS ARRAY_SIZE(host_save_user_msrs)
#define NUM_DB_REGS 4
+struct kvm_vcpu;
+
struct vcpu_svm {
+ struct kvm_vcpu *vcpu;
struct vmcb *vmcb;
unsigned long vmcb_pa;
struct svm_cpu_data *svm_data;
diff --git a/drivers/kvm/svm.c b/drivers/kvm/svm.c
index ac7d8b0..53871ee 100644
--- a/drivers/kvm/svm.c
+++ b/drivers/kvm/svm.c
@@ -49,6 +49,8 @@ MODULE_LICENSE("GPL");
#define SVM_FEATURE_LBRV (1 << 1)
#define SVM_DEATURE_SVML (1 << 2)
+#define svm(vcpu) ((struct vcpu_svm*)vcpu->_priv)
+
unsigned long iopm_base;
unsigned long msrpm_base;
@@ -95,7 +97,7 @@ static inline u32 svm_has(u32 feat)
static unsigned get_addr_size(struct kvm_vcpu *vcpu)
{
- struct vmcb_save_area *sa = &vcpu->svm->vmcb->save;
+ struct vmcb_save_area *sa = &svm(vcpu)->vmcb->save;
u16 cs_attrib;
if (!(sa->cr0 & X86_CR0_PE) || (sa->rflags & X86_EFLAGS_VM))
@@ -181,7 +183,7 @@ static inline void write_dr7(unsigned long val)
static inline void force_new_asid(struct kvm_vcpu *vcpu)
{
- vcpu->svm->asid_generation--;
+ svm(vcpu)->asid_generation--;
}
static inline void flush_guest_tlb(struct kvm_vcpu *vcpu)
@@ -194,22 +196,22 @@ static void svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
if (!(efer & KVM_EFER_LMA))
efer &= ~KVM_EFER_LME;
- vcpu->svm->vmcb->save.efer = efer | MSR_EFER_SVME_MASK;
+ svm(vcpu)->vmcb->save.efer = efer | MSR_EFER_SVME_MASK;
vcpu->shadow_efer = efer;
}
static void svm_inject_gp(struct kvm_vcpu *vcpu, unsigned error_code)
{
- vcpu->svm->vmcb->control.event_inj = SVM_EVTINJ_VALID |
+ svm(vcpu)->vmcb->control.event_inj = SVM_EVTINJ_VALID |
SVM_EVTINJ_VALID_ERR |
SVM_EVTINJ_TYPE_EXEPT |
GP_VECTOR;
- vcpu->svm->vmcb->control.event_inj_err = error_code;
+ svm(vcpu)->vmcb->control.event_inj_err = error_code;
}
static void inject_ud(struct kvm_vcpu *vcpu)
{
- vcpu->svm->vmcb->control.event_inj = SVM_EVTINJ_VALID |
+ svm(vcpu)->vmcb->control.event_inj = SVM_EVTINJ_VALID |
SVM_EVTINJ_TYPE_EXEPT |
UD_VECTOR;
}
@@ -228,19 +230,19 @@ static int is_external_interrupt(u32 info)
static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
{
- if (!vcpu->svm->next_rip) {
+ if (!svm(vcpu)->next_rip) {
printk(KERN_DEBUG "%s: NOP\n", __FUNCTION__);
return;
}
- if (vcpu->svm->next_rip - vcpu->svm->vmcb->save.rip > 15) {
+ if (svm(vcpu)->next_rip - svm(vcpu)->vmcb->save.rip > 15) {
printk(KERN_ERR "%s: ip 0x%llx next 0x%llx\n",
__FUNCTION__,
- vcpu->svm->vmcb->save.rip,
- vcpu->svm->next_rip);
+ svm(vcpu)->vmcb->save.rip,
+ svm(vcpu)->next_rip);
}
- vcpu->rip = vcpu->svm->vmcb->save.rip = vcpu->svm->next_rip;
- vcpu->svm->vmcb->control.int_state &= ~SVM_INTERRUPT_SHADOW_MASK;
+ vcpu->rip = svm(vcpu)->vmcb->save.rip = svm(vcpu)->next_rip;
+ svm(vcpu)->vmcb->control.int_state &= ~SVM_INTERRUPT_SHADOW_MASK;
vcpu->interrupt_window_open = 1;
}
@@ -569,23 +571,27 @@ static void init_vmcb(struct vmcb *vmcb)
static int svm_create_vcpu(struct kvm_vcpu *vcpu)
{
+ struct vcpu_svm *svm;
struct page *page;
int r;
r = -ENOMEM;
- vcpu->svm = kzalloc(sizeof *vcpu->svm, GFP_KERNEL);
- if (!vcpu->svm)
+ svm = kzalloc(sizeof *svm, GFP_KERNEL);
+ if (!svm)
goto out1;
page = alloc_page(GFP_KERNEL);
if (!page)
goto out2;
- vcpu->svm->vmcb = page_address(page);
- clear_page(vcpu->svm->vmcb);
- vcpu->svm->vmcb_pa = page_to_pfn(page) << PAGE_SHIFT;
- vcpu->svm->asid_generation = 0;
- memset(vcpu->svm->db_regs, 0, sizeof(vcpu->svm->db_regs));
- init_vmcb(vcpu->svm->vmcb);
+ svm->vmcb = page_address(page);
+ clear_page(svm->vmcb);
+ svm->vmcb_pa = page_to_pfn(page) << PAGE_SHIFT;
+ svm->asid_generation = 0;
+ memset(svm->db_regs, 0, sizeof(svm->db_regs));
+ init_vmcb(svm->vmcb);
+
+ svm->vcpu = vcpu;
+ vcpu->_priv = svm;
fx_init(vcpu);
vcpu->fpu_active = 1;
@@ -596,18 +602,20 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
return 0;
out2:
- kfree(vcpu->svm);
+ kfree(svm);
out1:
return r;
}
static void svm_free_vcpu(struct kvm_vcpu *vcpu)
{
- if (!vcpu->svm)
+ struct vcpu_svm *svm = svm(vcpu);
+
+ if (!svm)
return;
- if (vcpu->svm->vmcb)
- __free_page(pfn_to_page(vcpu->svm->vmcb_pa >> PAGE_SHIFT));
- kfree(vcpu->svm);
+ if (svm->vmcb)
+ __free_page(pfn_to_page(svm->vmcb_pa >> PAGE_SHIFT));
+ kfree(svm);
}
static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
@@ -623,12 +631,12 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
*/
rdtscll(tsc_this);
delta = vcpu->host_tsc - tsc_this;
- vcpu->svm->vmcb->control.tsc_offset += delta;
+ svm(vcpu)->vmcb->control.tsc_offset += delta;
vcpu->cpu = cpu;
}
for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
- rdmsrl(host_save_user_msrs[i], vcpu->svm->host_user_msrs[i]);
+ rdmsrl(host_save_user_msrs[i], svm(vcpu)->host_user_msrs[i]);
}
static void svm_vcpu_put(struct kvm_vcpu *vcpu)
@@ -636,7 +644,7 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
int i;
for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
- wrmsrl(host_save_user_msrs[i], vcpu->svm->host_user_msrs[i]);
+ wrmsrl(host_save_user_msrs[i], svm(vcpu)->host_user_msrs[i]);
rdtscll(vcpu->host_tsc);
}
@@ -647,31 +655,31 @@ static void svm_vcpu_decache(struct kvm_vcpu *vcpu)
static void svm_cache_regs(struct kvm_vcpu *vcpu)
{
- vcpu->regs[VCPU_REGS_RAX] = vcpu->svm->vmcb->save.rax;
- vcpu->regs[VCPU_REGS_RSP] = vcpu->svm->vmcb->save.rsp;
- vcpu->rip = vcpu->svm->vmcb->save.rip;
+ vcpu->regs[VCPU_REGS_RAX] = svm(vcpu)->vmcb->save.rax;
+ vcpu->regs[VCPU_REGS_RSP] = svm(vcpu)->vmcb->save.rsp;
+ vcpu->rip = svm(vcpu)->vmcb->save.rip;
}
static void svm_decache_regs(struct kvm_vcpu *vcpu)
{
- vcpu->svm->vmcb->save.rax = vcpu->regs[VCPU_REGS_RAX];
- vcpu->svm->vmcb->save.rsp = vcpu->regs[VCPU_REGS_RSP];
- vcpu->svm->vmcb->save.rip = vcpu->rip;
+ svm(vcpu)->vmcb->save.rax = vcpu->regs[VCPU_REGS_RAX];
+ svm(vcpu)->vmcb->save.rsp = vcpu->regs[VCPU_REGS_RSP];
+ svm(vcpu)->vmcb->save.rip = vcpu->rip;
}
static unsigned long svm_get_rflags(struct kvm_vcpu *vcpu)
{
- return vcpu->svm->vmcb->save.rflags;
+ return svm(vcpu)->vmcb->save.rflags;
}
static void svm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
{
- vcpu->svm->vmcb->save.rflags = rflags;
+ svm(vcpu)->vmcb->save.rflags = rflags;
}
static struct vmcb_seg *svm_seg(struct kvm_vcpu *vcpu, int seg)
{
- struct vmcb_save_area *save = &vcpu->svm->vmcb->save;
+ struct vmcb_save_area *save = &svm(vcpu)->vmcb->save;
switch (seg) {
case VCPU_SREG_CS: return &save->cs;
@@ -723,26 +731,26 @@ static void svm_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
static void svm_get_idt(struct kvm_vcpu *vcpu, struct descriptor_table *dt)
{
- dt->limit = vcpu->svm->vmcb->save.idtr.limit;
- dt->base = vcpu->svm->vmcb->save.idtr.base;
+ dt->limit = svm(vcpu)->vmcb->save.idtr.limit;
+ dt->base = svm(vcpu)->vmcb->save.idtr.base;
}
static void svm_set_idt(struct kvm_vcpu *vcpu, struct descriptor_table *dt)
{
- vcpu->svm->vmcb->save.idtr.limit = dt->limit;
- vcpu->svm->vmcb->save.idtr.base = dt->base ;
+ svm(vcpu)->vmcb->save.idtr.limit = dt->limit;
+ svm(vcpu)->vmcb->save.idtr.base = dt->base ;
}
static void svm_get_gdt(struct kvm_vcpu *vcpu, struct descriptor_table *dt)
{
- dt->limit = vcpu->svm->vmcb->save.gdtr.limit;
- dt->base = vcpu->svm->vmcb->save.gdtr.base;
+ dt->limit = svm(vcpu)->vmcb->save.gdtr.limit;
+ dt->base = svm(vcpu)->vmcb->save.gdtr.base;
}
static void svm_set_gdt(struct kvm_vcpu *vcpu, struct descriptor_table *dt)
{
- vcpu->svm->vmcb->save.gdtr.limit = dt->limit;
- vcpu->svm->vmcb->save.gdtr.base = dt->base ;
+ svm(vcpu)->vmcb->save.gdtr.limit = dt->limit;
+ svm(vcpu)->vmcb->save.gdtr.base = dt->base ;
}
static void svm_decache_cr4_guest_bits(struct kvm_vcpu *vcpu)
@@ -755,30 +763,30 @@ static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
if (vcpu->shadow_efer & KVM_EFER_LME) {
if (!is_paging(vcpu) && (cr0 & X86_CR0_PG)) {
vcpu->shadow_efer |= KVM_EFER_LMA;
- vcpu->svm->vmcb->save.efer |= KVM_EFER_LMA | KVM_EFER_LME;
+ svm(vcpu)->vmcb->save.efer |= KVM_EFER_LMA | KVM_EFER_LME;
}
if (is_paging(vcpu) && !(cr0 & X86_CR0_PG) ) {
vcpu->shadow_efer &= ~KVM_EFER_LMA;
- vcpu->svm->vmcb->save.efer &= ~(KVM_EFER_LMA | KVM_EFER_LME);
+ svm(vcpu)->vmcb->save.efer &= ~(KVM_EFER_LMA | KVM_EFER_LME);
}
}
#endif
if ((vcpu->cr0 & X86_CR0_TS) && !(cr0 & X86_CR0_TS)) {
- vcpu->svm->vmcb->control.intercept_exceptions &= ~(1 << NM_VECTOR);
+ svm(vcpu)->vmcb->control.intercept_exceptions &= ~(1 << NM_VECTOR);
vcpu->fpu_active = 1;
}
vcpu->cr0 = cr0;
cr0 |= X86_CR0_PG | X86_CR0_WP;
cr0 &= ~(X86_CR0_CD | X86_CR0_NW);
- vcpu->svm->vmcb->save.cr0 = cr0;
+ svm(vcpu)->vmcb->save.cr0 = cr0;
}
static void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
{
vcpu->cr4 = cr4;
- vcpu->svm->vmcb->save.cr4 = cr4 | X86_CR4_PAE;
+ svm(vcpu)->vmcb->save.cr4 = cr4 | X86_CR4_PAE;
}
static void svm_set_segment(struct kvm_vcpu *vcpu,
@@ -802,16 +810,16 @@ static void svm_set_segment(struct kvm_vcpu *vcpu,
s->attrib |= (var->g & 1) << SVM_SELECTOR_G_SHIFT;
}
if (seg == VCPU_SREG_CS)
- vcpu->svm->vmcb->save.cpl
- = (vcpu->svm->vmcb->save.cs.attrib
+ svm(vcpu)->vmcb->save.cpl
+ = (svm(vcpu)->vmcb->save.cs.attrib
>> SVM_SELECTOR_DPL_SHIFT) & 3;
}
/* FIXME:
- vcpu->svm->vmcb->control.int_ctl &= ~V_TPR_MASK;
- vcpu->svm->vmcb->control.int_ctl |= (sregs->cr8 & V_TPR_MASK);
+ svm(vcpu)->vmcb->control.int_ctl &= ~V_TPR_MASK;
+ svm(vcpu)->vmcb->control.int_ctl |= (sregs->cr8 & V_TPR_MASK);
*/
@@ -823,14 +831,14 @@ static int svm_guest_debug(struct kvm_vcpu *vcpu, struct kvm_debug_guest *dbg)
static void load_host_msrs(struct kvm_vcpu *vcpu)
{
#ifdef CONFIG_X86_64
- wrmsrl(MSR_GS_BASE, vcpu->svm->host_gs_base);
+ wrmsrl(MSR_GS_BASE, svm(vcpu)->host_gs_base);
#endif
}
static void save_host_msrs(struct kvm_vcpu *vcpu)
{
#ifdef CONFIG_X86_64
- rdmsrl(MSR_GS_BASE, vcpu->svm->host_gs_base);
+ rdmsrl(MSR_GS_BASE, svm(vcpu)->host_gs_base);
#endif
}
@@ -839,22 +847,22 @@ static void new_asid(struct kvm_vcpu *vcpu, struct svm_cpu_data *svm_data)
if (svm_data->next_asid > svm_data->max_asid) {
++svm_data->asid_generation;
svm_data->next_asid = 1;
- vcpu->svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
+ svm(vcpu)->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
}
vcpu->cpu = svm_data->cpu;
- vcpu->svm->asid_generation = svm_data->asid_generation;
- vcpu->svm->vmcb->control.asid = svm_data->next_asid++;
+ svm(vcpu)->asid_generation = svm_data->asid_generation;
+ svm(vcpu)->vmcb->control.asid = svm_data->next_asid++;
}
static void svm_invlpg(struct kvm_vcpu *vcpu, gva_t address)
{
- invlpga(address, vcpu->svm->vmcb->control.asid); // is needed?
+ invlpga(address, svm(vcpu)->vmcb->control.asid); // is needed?
}
static unsigned long svm_get_dr(struct kvm_vcpu *vcpu, int dr)
{
- return vcpu->svm->db_regs[dr];
+ return svm(vcpu)->db_regs[dr];
}
static void svm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long value,
@@ -862,16 +870,16 @@ static void svm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long value,
{
*exception = 0;
- if (vcpu->svm->vmcb->save.dr7 & DR7_GD_MASK) {
- vcpu->svm->vmcb->save.dr7 &= ~DR7_GD_MASK;
- vcpu->svm->vmcb->save.dr6 |= DR6_BD_MASK;
+ if (svm(vcpu)->vmcb->save.dr7 & DR7_GD_MASK) {
+ svm(vcpu)->vmcb->save.dr7 &= ~DR7_GD_MASK;
+ svm(vcpu)->vmcb->save.dr6 |= DR6_BD_MASK;
*exception = DB_VECTOR;
return;
}
switch (dr) {
case 0 ... 3:
- vcpu->svm->db_regs[dr] = value;
+ svm(vcpu)->db_regs[dr] = value;
return;
case 4 ... 5:
if (vcpu->cr4 & X86_CR4_DE) {
@@ -883,7 +891,7 @@ static void svm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long value,
*exception = GP_VECTOR;
return;
}
- vcpu->svm->vmcb->save.dr7 = value;
+ svm(vcpu)->vmcb->save.dr7 = value;
return;
}
default:
@@ -896,7 +904,7 @@ static void svm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long value,
static int pf_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- u32 exit_int_info = vcpu->svm->vmcb->control.exit_int_info;
+ u32 exit_int_info = svm(vcpu)->vmcb->control.exit_int_info;
u64 fault_address;
u32 error_code;
enum emulation_result er;
@@ -907,8 +915,8 @@ static int pf_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
mutex_lock(&vcpu->kvm->lock);
- fault_address = vcpu->svm->vmcb->control.exit_info_2;
- error_code = vcpu->svm->vmcb->control.exit_info_1;
+ fault_address = svm(vcpu)->vmcb->control.exit_info_2;
+ error_code = svm(vcpu)->vmcb->control.exit_info_1;
r = kvm_mmu_page_fault(vcpu, fault_address, error_code);
if (r < 0) {
mutex_unlock(&vcpu->kvm->lock);
@@ -940,9 +948,9 @@ static int pf_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
static int nm_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- vcpu->svm->vmcb->control.intercept_exceptions &= ~(1 << NM_VECTOR);
+ svm(vcpu)->vmcb->control.intercept_exceptions &= ~(1 << NM_VECTOR);
if (!(vcpu->cr0 & X86_CR0_TS))
- vcpu->svm->vmcb->save.cr0 &= ~X86_CR0_TS;
+ svm(vcpu)->vmcb->save.cr0 &= ~X86_CR0_TS;
vcpu->fpu_active = 1;
return 1;
@@ -954,8 +962,8 @@ static int shutdown_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
* VMCB is undefined after a SHUTDOWN intercept
* so reinitialize it.
*/
- clear_page(vcpu->svm->vmcb);
- init_vmcb(vcpu->svm->vmcb);
+ clear_page(svm(vcpu)->vmcb);
+ init_vmcb(svm(vcpu)->vmcb);
kvm_run->exit_reason = KVM_EXIT_SHUTDOWN;
return 0;
@@ -970,18 +978,18 @@ static int io_get_override(struct kvm_vcpu *vcpu,
gva_t rip;
int i;
- rip = vcpu->svm->vmcb->save.rip;
- ins_length = vcpu->svm->next_rip - rip;
- rip += vcpu->svm->vmcb->save.cs.base;
+ rip = svm(vcpu)->vmcb->save.rip;
+ ins_length = svm(vcpu)->next_rip - rip;
+ rip += svm(vcpu)->vmcb->save.cs.base;
if (ins_length > MAX_INST_SIZE)
printk(KERN_DEBUG
"%s: inst length err, cs base 0x%llx rip 0x%llx "
"next rip 0x%llx ins_length %u\n",
__FUNCTION__,
- vcpu->svm->vmcb->save.cs.base,
- vcpu->svm->vmcb->save.rip,
- vcpu->svm->vmcb->control.exit_info_2,
+ svm(vcpu)->vmcb->save.cs.base,
+ svm(vcpu)->vmcb->save.rip,
+ svm(vcpu)->vmcb->control.exit_info_2,
ins_length);
if (kvm_read_guest(vcpu, rip, ins_length, inst) != ins_length)
@@ -1001,22 +1009,22 @@ static int io_get_override(struct kvm_vcpu *vcpu,
*addr_override = 1;
continue;
case 0x2e:
- *seg = &vcpu->svm->vmcb->save.cs;
+ *seg = &svm(vcpu)->vmcb->save.cs;
continue;
case 0x36:
- *seg = &vcpu->svm->vmcb->save.ss;
+ *seg = &svm(vcpu)->vmcb->save.ss;
continue;
case 0x3e:
- *seg = &vcpu->svm->vmcb->save.ds;
+ *seg = &svm(vcpu)->vmcb->save.ds;
continue;
case 0x26:
- *seg = &vcpu->svm->vmcb->save.es;
+ *seg = &svm(vcpu)->vmcb->save.es;
continue;
case 0x64:
- *seg = &vcpu->svm->vmcb->save.fs;
+ *seg = &svm(vcpu)->vmcb->save.fs;
continue;
case 0x65:
- *seg = &vcpu->svm->vmcb->save.gs;
+ *seg = &svm(vcpu)->vmcb->save.gs;
continue;
default:
return 1;
@@ -1031,7 +1039,7 @@ static unsigned long io_adress(struct kvm_vcpu *vcpu, int ins, gva_t *address)
unsigned long *reg;
struct vmcb_seg *seg;
int addr_override;
- struct vmcb_save_area *save_area = &vcpu->svm->vmcb->save;
+ struct vmcb_save_area *save_area = &svm(vcpu)->vmcb->save;
u16 cs_attrib = save_area->cs.attrib;
unsigned addr_size = get_addr_size(vcpu);
@@ -1043,16 +1051,16 @@ static unsigned long io_adress(struct kvm_vcpu *vcpu, int ins, gva_t *address)
if (ins) {
reg = &vcpu->regs[VCPU_REGS_RDI];
- seg = &vcpu->svm->vmcb->save.es;
+ seg = &svm(vcpu)->vmcb->save.es;
} else {
reg = &vcpu->regs[VCPU_REGS_RSI];
- seg = (seg) ? seg : &vcpu->svm->vmcb->save.ds;
+ seg = (seg) ? seg : &svm(vcpu)->vmcb->save.ds;
}
addr_mask = ~0ULL >> (64 - (addr_size * 8));
if ((cs_attrib & SVM_SELECTOR_L_MASK) &&
- !(vcpu->svm->vmcb->save.rflags & X86_EFLAGS_VM)) {
+ !(svm(vcpu)->vmcb->save.rflags & X86_EFLAGS_VM)) {
*address = (*reg & addr_mask);
return addr_mask;
}
@@ -1068,7 +1076,7 @@ static unsigned long io_adress(struct kvm_vcpu *vcpu, int ins, gva_t *address)
static int io_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- u32 io_info = vcpu->svm->vmcb->control.exit_info_1; //address size bug?
+ u32 io_info = svm(vcpu)->vmcb->control.exit_info_1; //address size bug?
int size, down, in, string, rep;
unsigned port;
unsigned long count;
@@ -1076,7 +1084,7 @@ static int io_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
++vcpu->stat.io_exits;
- vcpu->svm->next_rip = vcpu->svm->vmcb->control.exit_info_2;
+ svm(vcpu)->next_rip = svm(vcpu)->vmcb->control.exit_info_2;
in = (io_info & SVM_IOIO_TYPE_MASK) != 0;
port = io_info >> 16;
@@ -1084,7 +1092,7 @@ static int io_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
string = (io_info & SVM_IOIO_STR_MASK) != 0;
rep = (io_info & SVM_IOIO_REP_MASK) != 0;
count = 1;
- down = (vcpu->svm->vmcb->save.rflags & X86_EFLAGS_DF) != 0;
+ down = (svm(vcpu)->vmcb->save.rflags & X86_EFLAGS_DF) != 0;
if (string) {
unsigned addr_mask;
@@ -1110,14 +1118,14 @@ static int nop_on_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
static int halt_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- vcpu->svm->next_rip = vcpu->svm->vmcb->save.rip + 1;
+ svm(vcpu)->next_rip = svm(vcpu)->vmcb->save.rip + 1;
skip_emulated_instruction(vcpu);
return kvm_emulate_halt(vcpu);
}
static int vmmcall_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- vcpu->svm->next_rip = vcpu->svm->vmcb->save.rip + 3;
+ svm(vcpu)->next_rip = svm(vcpu)->vmcb->save.rip + 3;
skip_emulated_instruction(vcpu);
return kvm_hypercall(vcpu, kvm_run);
}
@@ -1137,7 +1145,7 @@ static int task_switch_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_r
static int cpuid_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- vcpu->svm->next_rip = vcpu->svm->vmcb->save.rip + 2;
+ svm(vcpu)->next_rip = svm(vcpu)->vmcb->save.rip + 2;
kvm_emulate_cpuid(vcpu);
return 1;
}
@@ -1156,34 +1164,34 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 *data)
u64 tsc;
rdtscll(tsc);
- *data = vcpu->svm->vmcb->control.tsc_offset + tsc;
+ *data = svm(vcpu)->vmcb->control.tsc_offset + tsc;
break;
}
case MSR_K6_STAR:
- *data = vcpu->svm->vmcb->save.star;
+ *data = svm(vcpu)->vmcb->save.star;
break;
#ifdef CONFIG_X86_64
case MSR_LSTAR:
- *data = vcpu->svm->vmcb->save.lstar;
+ *data = svm(vcpu)->vmcb->save.lstar;
break;
case MSR_CSTAR:
- *data = vcpu->svm->vmcb->save.cstar;
+ *data = svm(vcpu)->vmcb->save.cstar;
break;
case MSR_KERNEL_GS_BASE:
- *data = vcpu->svm->vmcb->save.kernel_gs_base;
+ *data = svm(vcpu)->vmcb->save.kernel_gs_base;
break;
case MSR_SYSCALL_MASK:
- *data = vcpu->svm->vmcb->save.sfmask;
+ *data = svm(vcpu)->vmcb->save.sfmask;
break;
#endif
case MSR_IA32_SYSENTER_CS:
- *data = vcpu->svm->vmcb->save.sysenter_cs;
+ *data = svm(vcpu)->vmcb->save.sysenter_cs;
break;
case MSR_IA32_SYSENTER_EIP:
- *data = vcpu->svm->vmcb->save.sysenter_eip;
+ *data = svm(vcpu)->vmcb->save.sysenter_eip;
break;
case MSR_IA32_SYSENTER_ESP:
- *data = vcpu->svm->vmcb->save.sysenter_esp;
+ *data = svm(vcpu)->vmcb->save.sysenter_esp;
break;
default:
return kvm_get_msr_common(vcpu, ecx, data);
@@ -1199,9 +1207,9 @@ static int rdmsr_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
if (svm_get_msr(vcpu, ecx, &data))
svm_inject_gp(vcpu, 0);
else {
- vcpu->svm->vmcb->save.rax = data & 0xffffffff;
+ svm(vcpu)->vmcb->save.rax = data & 0xffffffff;
vcpu->regs[VCPU_REGS_RDX] = data >> 32;
- vcpu->svm->next_rip = vcpu->svm->vmcb->save.rip + 2;
+ svm(vcpu)->next_rip = svm(vcpu)->vmcb->save.rip + 2;
skip_emulated_instruction(vcpu);
}
return 1;
@@ -1214,34 +1222,34 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 data)
u64 tsc;
rdtscll(tsc);
- vcpu->svm->vmcb->control.tsc_offset = data - tsc;
+ svm(vcpu)->vmcb->control.tsc_offset = data - tsc;
break;
}
case MSR_K6_STAR:
- vcpu->svm->vmcb->save.star = data;
+ svm(vcpu)->vmcb->save.star = data;
break;
#ifdef CONFIG_X86_64
case MSR_LSTAR:
- vcpu->svm->vmcb->save.lstar = data;
+ svm(vcpu)->vmcb->save.lstar = data;
break;
case MSR_CSTAR:
- vcpu->svm->vmcb->save.cstar = data;
+ svm(vcpu)->vmcb->save.cstar = data;
break;
case MSR_KERNEL_GS_BASE:
- vcpu->svm->vmcb->save.kernel_gs_base = data;
+ svm(vcpu)->vmcb->save.kernel_gs_base = data;
break;
case MSR_SYSCALL_MASK:
- vcpu->svm->vmcb->save.sfmask = data;
+ svm(vcpu)->vmcb->save.sfmask = data;
break;
#endif
case MSR_IA32_SYSENTER_CS:
- vcpu->svm->vmcb->save.sysenter_cs = data;
+ svm(vcpu)->vmcb->save.sysenter_cs = data;
break;
case MSR_IA32_SYSENTER_EIP:
- vcpu->svm->vmcb->save.sysenter_eip = data;
+ svm(vcpu)->vmcb->save.sysenter_eip = data;
break;
case MSR_IA32_SYSENTER_ESP:
- vcpu->svm->vmcb->save.sysenter_esp = data;
+ svm(vcpu)->vmcb->save.sysenter_esp = data;
break;
default:
return kvm_set_msr_common(vcpu, ecx, data);
@@ -1252,9 +1260,9 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 data)
static int wrmsr_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
u32 ecx = vcpu->regs[VCPU_REGS_RCX];
- u64 data = (vcpu->svm->vmcb->save.rax & -1u)
+ u64 data = (svm(vcpu)->vmcb->save.rax & -1u)
| ((u64)(vcpu->regs[VCPU_REGS_RDX] & -1u) << 32);
- vcpu->svm->next_rip = vcpu->svm->vmcb->save.rip + 2;
+ svm(vcpu)->next_rip = svm(vcpu)->vmcb->save.rip + 2;
if (svm_set_msr(vcpu, ecx, data))
svm_inject_gp(vcpu, 0);
else
@@ -1264,7 +1272,7 @@ static int wrmsr_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
static int msr_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- if (vcpu->svm->vmcb->control.exit_info_1)
+ if (svm(vcpu)->vmcb->control.exit_info_1)
return wrmsr_interception(vcpu, kvm_run);
else
return rdmsr_interception(vcpu, kvm_run);
@@ -1336,13 +1344,13 @@ static int (*svm_exit_handlers[])(struct kvm_vcpu *vcpu,
static int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- u32 exit_code = vcpu->svm->vmcb->control.exit_code;
+ u32 exit_code = svm(vcpu)->vmcb->control.exit_code;
- if (is_external_interrupt(vcpu->svm->vmcb->control.exit_int_info) &&
+ if (is_external_interrupt(svm(vcpu)->vmcb->control.exit_int_info) &&
exit_code != SVM_EXIT_EXCP_BASE + PF_VECTOR)
printk(KERN_ERR "%s: unexpected exit_ini_info 0x%x "
"exit_code 0x%x\n",
- __FUNCTION__, vcpu->svm->vmcb->control.exit_int_info,
+ __FUNCTION__, svm(vcpu)->vmcb->control.exit_int_info,
exit_code);
if (exit_code >= ARRAY_SIZE(svm_exit_handlers)
@@ -1370,9 +1378,9 @@ static void pre_svm_run(struct kvm_vcpu *vcpu)
struct svm_cpu_data *svm_data = per_cpu(svm_data, cpu);
- vcpu->svm->vmcb->control.tlb_ctl = TLB_CONTROL_DO_NOTHING;
+ svm(vcpu)->vmcb->control.tlb_ctl = TLB_CONTROL_DO_NOTHING;
if (vcpu->cpu != cpu ||
- vcpu->svm->asid_generation != svm_data->asid_generation)
+ svm(vcpu)->asid_generation != svm_data->asid_generation)
new_asid(vcpu, svm_data);
}
@@ -1381,7 +1389,7 @@ static inline void kvm_do_inject_irq(struct kvm_vcpu *vcpu)
{
struct vmcb_control_area *control;
- control = &vcpu->svm->vmcb->control;
+ control = &svm(vcpu)->vmcb->control;
control->int_vector = pop_irq(vcpu);
control->int_ctl &= ~V_INTR_PRIO_MASK;
control->int_ctl |= V_IRQ_MASK |
@@ -1390,7 +1398,7 @@ static inline void kvm_do_inject_irq(struct kvm_vcpu *vcpu)
static void kvm_reput_irq(struct kvm_vcpu *vcpu)
{
- struct vmcb_control_area *control = &vcpu->svm->vmcb->control;
+ struct vmcb_control_area *control = &svm(vcpu)->vmcb->control;
if (control->int_ctl & V_IRQ_MASK) {
control->int_ctl &= ~V_IRQ_MASK;
@@ -1404,11 +1412,11 @@ static void kvm_reput_irq(struct kvm_vcpu *vcpu)
static void do_interrupt_requests(struct kvm_vcpu *vcpu,
struct kvm_run *kvm_run)
{
- struct vmcb_control_area *control = &vcpu->svm->vmcb->control;
+ struct vmcb_control_area *control = &svm(vcpu)->vmcb->control;
vcpu->interrupt_window_open =
(!(control->int_state & SVM_INTERRUPT_SHADOW_MASK) &&
- (vcpu->svm->vmcb->save.rflags & X86_EFLAGS_IF));
+ (svm(vcpu)->vmcb->save.rflags & X86_EFLAGS_IF));
if (vcpu->interrupt_window_open && vcpu->irq_summary)
/*
@@ -1431,7 +1439,7 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu,
{
kvm_run->ready_for_interrupt_injection = (vcpu->interrupt_window_open &&
vcpu->irq_summary == 0);
- kvm_run->if_flag = (vcpu->svm->vmcb->save.rflags & X86_EFLAGS_IF) != 0;
+ kvm_run->if_flag = (svm(vcpu)->vmcb->save.rflags & X86_EFLAGS_IF) != 0;
kvm_run->cr8 = vcpu->cr8;
kvm_run->apic_base = vcpu->apic_base;
}
@@ -1448,7 +1456,7 @@ static int dm_request_for_irq_injection(struct kvm_vcpu *vcpu,
return (!vcpu->irq_summary &&
kvm_run->request_interrupt_window &&
vcpu->interrupt_window_open &&
- (vcpu->svm->vmcb->save.rflags & X86_EFLAGS_IF));
+ (svm(vcpu)->vmcb->save.rflags & X86_EFLAGS_IF));
}
static void save_db_regs(unsigned long *db_regs)
@@ -1500,15 +1508,15 @@ again:
fs_selector = read_fs();
gs_selector = read_gs();
ldt_selector = read_ldt();
- vcpu->svm->host_cr2 = kvm_read_cr2();
- vcpu->svm->host_dr6 = read_dr6();
- vcpu->svm->host_dr7 = read_dr7();
- vcpu->svm->vmcb->save.cr2 = vcpu->cr2;
+ svm(vcpu)->host_cr2 = kvm_read_cr2();
+ svm(vcpu)->host_dr6 = read_dr6();
+ svm(vcpu)->host_dr7 = read_dr7();
+ svm(vcpu)->vmcb->save.cr2 = vcpu->cr2;
- if (vcpu->svm->vmcb->save.dr7 & 0xff) {
+ if (svm(vcpu)->vmcb->save.dr7 & 0xff) {
write_dr7(0);
- save_db_regs(vcpu->svm->host_db_regs);
- load_db_regs(vcpu->svm->db_regs);
+ save_db_regs(svm(vcpu)->host_db_regs);
+ load_db_regs(svm(vcpu)->db_regs);
}
if (vcpu->fpu_active) {
@@ -1605,7 +1613,7 @@ again:
#endif
:
: [vcpu]"a"(vcpu),
- [svm]"i"(offsetof(struct kvm_vcpu, svm)),
+ [svm]"i"(offsetof(struct kvm_vcpu, _priv)),
[vmcb]"i"(offsetof(struct vcpu_svm, vmcb_pa)),
[rbx]"i"(offsetof(struct kvm_vcpu, regs[VCPU_REGS_RBX])),
[rcx]"i"(offsetof(struct kvm_vcpu, regs[VCPU_REGS_RCX])),
@@ -1632,14 +1640,14 @@ again:
fx_restore(vcpu->host_fx_image);
}
- if ((vcpu->svm->vmcb->save.dr7 & 0xff))
- load_db_regs(vcpu->svm->host_db_regs);
+ if ((svm(vcpu)->vmcb->save.dr7 & 0xff))
+ load_db_regs(svm(vcpu)->host_db_regs);
- vcpu->cr2 = vcpu->svm->vmcb->save.cr2;
+ vcpu->cr2 = svm(vcpu)->vmcb->save.cr2;
- write_dr6(vcpu->svm->host_dr6);
- write_dr7(vcpu->svm->host_dr7);
- kvm_write_cr2(vcpu->svm->host_cr2);
+ write_dr6(svm(vcpu)->host_dr6);
+ write_dr7(svm(vcpu)->host_dr7);
+ kvm_write_cr2(svm(vcpu)->host_cr2);
load_fs(fs_selector);
load_gs(gs_selector);
@@ -1653,18 +1661,18 @@ again:
*/
if (unlikely(prof_on == KVM_PROFILING))
profile_hit(KVM_PROFILING,
- (void *)(unsigned long)vcpu->svm->vmcb->save.rip);
+ (void *)(unsigned long)svm(vcpu)->vmcb->save.rip);
stgi();
kvm_reput_irq(vcpu);
- vcpu->svm->next_rip = 0;
+ svm(vcpu)->next_rip = 0;
- if (vcpu->svm->vmcb->control.exit_code == SVM_EXIT_ERR) {
+ if (svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_ERR) {
kvm_run->exit_reason = KVM_EXIT_FAIL_ENTRY;
kvm_run->fail_entry.hardware_entry_failure_reason
- = vcpu->svm->vmcb->control.exit_code;
+ = svm(vcpu)->vmcb->control.exit_code;
post_kvm_run_save(vcpu, kvm_run);
return 0;
}
@@ -1693,12 +1701,12 @@ again:
static void svm_set_cr3(struct kvm_vcpu *vcpu, unsigned long root)
{
- vcpu->svm->vmcb->save.cr3 = root;
+ svm(vcpu)->vmcb->save.cr3 = root;
force_new_asid(vcpu);
if (vcpu->fpu_active) {
- vcpu->svm->vmcb->control.intercept_exceptions |= (1 << NM_VECTOR);
- vcpu->svm->vmcb->save.cr0 |= X86_CR0_TS;
+ svm(vcpu)->vmcb->control.intercept_exceptions |= (1 << NM_VECTOR);
+ svm(vcpu)->vmcb->save.cr0 |= X86_CR0_TS;
vcpu->fpu_active = 0;
}
}
@@ -1707,26 +1715,26 @@ static void svm_inject_page_fault(struct kvm_vcpu *vcpu,
unsigned long addr,
uint32_t err_code)
{
- uint32_t exit_int_info = vcpu->svm->vmcb->control.exit_int_info;
+ uint32_t exit_int_info = svm(vcpu)->vmcb->control.exit_int_info;
++vcpu->stat.pf_guest;
if (is_page_fault(exit_int_info)) {
- vcpu->svm->vmcb->control.event_inj_err = 0;
- vcpu->svm->vmcb->control.event_inj = SVM_EVTINJ_VALID |
+ svm(vcpu)->vmcb->control.event_inj_err = 0;
+ svm(vcpu)->vmcb->control.event_inj = SVM_EVTINJ_VALID |
SVM_EVTINJ_VALID_ERR |
SVM_EVTINJ_TYPE_EXEPT |
DF_VECTOR;
return;
}
vcpu->cr2 = addr;
- vcpu->svm->vmcb->save.cr2 = addr;
- vcpu->svm->vmcb->control.event_inj = SVM_EVTINJ_VALID |
+ svm(vcpu)->vmcb->save.cr2 = addr;
+ svm(vcpu)->vmcb->control.event_inj = SVM_EVTINJ_VALID |
SVM_EVTINJ_VALID_ERR |
SVM_EVTINJ_TYPE_EXEPT |
PF_VECTOR;
- vcpu->svm->vmcb->control.event_inj_err = err_code;
+ svm(vcpu)->vmcb->control.event_inj_err = err_code;
}
diff --git a/drivers/kvm/vmx.c b/drivers/kvm/vmx.c
index d58457b..2352e44 100644
--- a/drivers/kvm/vmx.c
+++ b/drivers/kvm/vmx.c
@@ -32,6 +32,34 @@
MODULE_AUTHOR("Qumranet");
MODULE_LICENSE("GPL");
+struct vmcs {
+ u32 revision_id;
+ u32 abort;
+ char data[0];
+};
+
+struct vcpu_vmx {
+ struct kvm_vcpu *vcpu;
+ int launched;
+ struct kvm_msr_entry *guest_msrs;
+ struct kvm_msr_entry *host_msrs;
+ int nmsrs;
+ int save_nmsrs;
+ int msr_offset_efer;
+#ifdef CONFIG_X86_64
+ int msr_offset_kernel_gs_base;
+#endif
+ struct vmcs *vmcs;
+ struct {
+ int loaded;
+ u16 fs_sel, gs_sel, ldt_sel;
+ int fs_gs_ldt_reload_needed;
+	} host_state;
+
+};
+
+#define vmx(vcpu) ((struct vcpu_vmx *)((vcpu)->_priv))
+
static int init_rmode_tss(struct kvm *kvm);
static DEFINE_PER_CPU(struct vmcs *, vmxarea);
@@ -89,16 +117,17 @@ static const u32 vmx_msr_index[] = {
};
#define NR_VMX_MSR ARRAY_SIZE(vmx_msr_index)
-static inline u64 msr_efer_save_restore_bits(struct vmx_msr_entry msr)
+static inline u64 msr_efer_save_restore_bits(struct kvm_msr_entry msr)
{
return (u64)msr.data & EFER_SAVE_RESTORE_BITS;
}
static inline int msr_efer_need_save_restore(struct kvm_vcpu *vcpu)
{
- int efer_offset = vcpu->msr_offset_efer;
- return msr_efer_save_restore_bits(vcpu->host_msrs[efer_offset]) !=
- msr_efer_save_restore_bits(vcpu->guest_msrs[efer_offset]);
+ struct vcpu_vmx *vmx = vmx(vcpu);
+ int efer_offset = vmx->msr_offset_efer;
+ return msr_efer_save_restore_bits(vmx->host_msrs[efer_offset]) !=
+ msr_efer_save_restore_bits(vmx->guest_msrs[efer_offset]);
}
static inline int is_page_fault(u32 intr_info)
@@ -124,20 +153,21 @@ static inline int is_external_interrupt(u32 intr_info)
static int __find_msr_index(struct kvm_vcpu *vcpu, u32 msr)
{
int i;
+ struct vcpu_vmx *vmx = vmx(vcpu);
- for (i = 0; i < vcpu->nmsrs; ++i)
- if (vcpu->guest_msrs[i].index == msr)
+ for (i = 0; i < vmx->nmsrs; ++i)
+ if (vmx->guest_msrs[i].index == msr)
return i;
return -1;
}
-static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+static struct kvm_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
{
int i;
i = __find_msr_index(vcpu, msr);
if (i >= 0)
- return &vcpu->guest_msrs[i];
+ return &vmx(vcpu)->guest_msrs[i];
return NULL;
}
@@ -157,11 +187,12 @@ static void vmcs_clear(struct vmcs *vmcs)
static void __vcpu_clear(void *arg)
{
struct kvm_vcpu *vcpu = arg;
+ struct vcpu_vmx *vmx = vmx(vcpu);
int cpu = raw_smp_processor_id();
if (vcpu->cpu == cpu)
- vmcs_clear(vcpu->vmcs);
- if (per_cpu(current_vmcs, cpu) == vcpu->vmcs)
+ vmcs_clear(vmx->vmcs);
+ if (per_cpu(current_vmcs, cpu) == vmx->vmcs)
per_cpu(current_vmcs, cpu) = NULL;
rdtscll(vcpu->host_tsc);
}
@@ -172,7 +203,7 @@ static void vcpu_clear(struct kvm_vcpu *vcpu)
smp_call_function_single(vcpu->cpu, __vcpu_clear, vcpu, 0, 1);
else
__vcpu_clear(vcpu);
- vcpu->launched = 0;
+ vmx(vcpu)->launched = 0;
}
static unsigned long vmcs_readl(unsigned long field)
@@ -284,82 +315,83 @@ static void reload_tss(void)
static void load_transition_efer(struct kvm_vcpu *vcpu)
{
+ struct vcpu_vmx *vmx = vmx(vcpu);
u64 trans_efer;
- int efer_offset = vcpu->msr_offset_efer;
+ int efer_offset = vmx->msr_offset_efer;
- trans_efer = vcpu->host_msrs[efer_offset].data;
+ trans_efer = vmx->host_msrs[efer_offset].data;
trans_efer &= ~EFER_SAVE_RESTORE_BITS;
trans_efer |= msr_efer_save_restore_bits(
- vcpu->guest_msrs[efer_offset]);
+ vmx->guest_msrs[efer_offset]);
wrmsrl(MSR_EFER, trans_efer);
vcpu->stat.efer_reload++;
}
static void vmx_save_host_state(struct kvm_vcpu *vcpu)
{
- struct vmx_host_state *hs = &vcpu->vmx_host_state;
+ struct vcpu_vmx *vmx = vmx(vcpu);
- if (hs->loaded)
+ if (vmx->host_state.loaded)
return;
- hs->loaded = 1;
+ vmx->host_state.loaded = 1;
/*
* Set host fs and gs selectors. Unfortunately, 22.2.3 does not
* allow segment selectors with cpl > 0 or ti == 1.
*/
- hs->ldt_sel = read_ldt();
- hs->fs_gs_ldt_reload_needed = hs->ldt_sel;
- hs->fs_sel = read_fs();
- if (!(hs->fs_sel & 7))
- vmcs_write16(HOST_FS_SELECTOR, hs->fs_sel);
+ vmx->host_state.ldt_sel = read_ldt();
+ vmx->host_state.fs_gs_ldt_reload_needed = vmx->host_state.ldt_sel;
+ vmx->host_state.fs_sel = read_fs();
+ if (!(vmx->host_state.fs_sel & 7))
+ vmcs_write16(HOST_FS_SELECTOR, vmx->host_state.fs_sel);
else {
vmcs_write16(HOST_FS_SELECTOR, 0);
- hs->fs_gs_ldt_reload_needed = 1;
+ vmx->host_state.fs_gs_ldt_reload_needed = 1;
}
- hs->gs_sel = read_gs();
- if (!(hs->gs_sel & 7))
- vmcs_write16(HOST_GS_SELECTOR, hs->gs_sel);
+ vmx->host_state.gs_sel = read_gs();
+ if (!(vmx->host_state.gs_sel & 7))
+ vmcs_write16(HOST_GS_SELECTOR, vmx->host_state.gs_sel);
else {
vmcs_write16(HOST_GS_SELECTOR, 0);
- hs->fs_gs_ldt_reload_needed = 1;
+ vmx->host_state.fs_gs_ldt_reload_needed = 1;
}
#ifdef CONFIG_X86_64
vmcs_writel(HOST_FS_BASE, read_msr(MSR_FS_BASE));
vmcs_writel(HOST_GS_BASE, read_msr(MSR_GS_BASE));
#else
- vmcs_writel(HOST_FS_BASE, segment_base(hs->fs_sel));
- vmcs_writel(HOST_GS_BASE, segment_base(hs->gs_sel));
+ vmcs_writel(HOST_FS_BASE, segment_base(vmx->host_state.fs_sel));
+ vmcs_writel(HOST_GS_BASE, segment_base(vmx->host_state.gs_sel));
#endif
#ifdef CONFIG_X86_64
if (is_long_mode(vcpu)) {
- save_msrs(vcpu->host_msrs + vcpu->msr_offset_kernel_gs_base, 1);
+ save_msrs(vmx->host_msrs + vmx->msr_offset_kernel_gs_base, 1);
}
#endif
- load_msrs(vcpu->guest_msrs, vcpu->save_nmsrs);
+ load_msrs(vmx->guest_msrs, vmx->save_nmsrs);
if (msr_efer_need_save_restore(vcpu))
load_transition_efer(vcpu);
}
static void vmx_load_host_state(struct kvm_vcpu *vcpu)
{
- struct vmx_host_state *hs = &vcpu->vmx_host_state;
+ struct vcpu_vmx *vmx = vmx(vcpu);
unsigned long flags;
- if (!hs->loaded)
+ if (!vmx->host_state.loaded)
return;
- hs->loaded = 0;
- if (hs->fs_gs_ldt_reload_needed) {
- load_ldt(hs->ldt_sel);
- load_fs(hs->fs_sel);
+ vmx->host_state.loaded = 0;
+ if (vmx->host_state.fs_gs_ldt_reload_needed) {
+ load_ldt(vmx->host_state.ldt_sel);
+ load_fs(vmx->host_state.fs_sel);
/*
* If we have to reload gs, we must take care to
* preserve our gs base.
*/
local_irq_save(flags);
- load_gs(hs->gs_sel);
+ load_gs(vmx->host_state.gs_sel);
#ifdef CONFIG_X86_64
wrmsrl(MSR_GS_BASE, vmcs_readl(HOST_GS_BASE));
#endif
@@ -367,10 +399,10 @@ static void vmx_load_host_state(struct kvm_vcpu *vcpu)
reload_tss();
}
- save_msrs(vcpu->guest_msrs, vcpu->save_nmsrs);
- load_msrs(vcpu->host_msrs, vcpu->save_nmsrs);
+ save_msrs(vmx->guest_msrs, vmx->save_nmsrs);
+ load_msrs(vmx->host_msrs, vmx->save_nmsrs);
if (msr_efer_need_save_restore(vcpu))
- load_msrs(vcpu->host_msrs + vcpu->msr_offset_efer, 1);
+ load_msrs(vmx->host_msrs + vmx->msr_offset_efer, 1);
}
/*
@@ -379,22 +411,23 @@ static void vmx_load_host_state(struct kvm_vcpu *vcpu)
*/
static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
- u64 phys_addr = __pa(vcpu->vmcs);
+ struct vcpu_vmx *vmx = vmx(vcpu);
+ u64 phys_addr = __pa(vmx->vmcs);
u64 tsc_this, delta;
if (vcpu->cpu != cpu)
vcpu_clear(vcpu);
- if (per_cpu(current_vmcs, cpu) != vcpu->vmcs) {
+ if (per_cpu(current_vmcs, cpu) != vmx->vmcs) {
u8 error;
- per_cpu(current_vmcs, cpu) = vcpu->vmcs;
+ per_cpu(current_vmcs, cpu) = vmx->vmcs;
asm volatile (ASM_VMX_VMPTRLD_RAX "; setna %0"
: "=g"(error) : "a"(&phys_addr), "m"(phys_addr)
: "cc");
if (error)
printk(KERN_ERR "kvm: vmptrld %p/%llx fail\n",
- vcpu->vmcs, phys_addr);
+ vmx->vmcs, phys_addr);
}
if (vcpu->cpu != cpu) {
@@ -500,13 +533,14 @@ static void vmx_inject_gp(struct kvm_vcpu *vcpu, unsigned error_code)
*/
void move_msr_up(struct kvm_vcpu *vcpu, int from, int to)
{
- struct vmx_msr_entry tmp;
- tmp = vcpu->guest_msrs[to];
- vcpu->guest_msrs[to] = vcpu->guest_msrs[from];
- vcpu->guest_msrs[from] = tmp;
- tmp = vcpu->host_msrs[to];
- vcpu->host_msrs[to] = vcpu->host_msrs[from];
- vcpu->host_msrs[from] = tmp;
+ struct vcpu_vmx *vmx = vmx(vcpu);
+ struct kvm_msr_entry tmp;
+ tmp = vmx->guest_msrs[to];
+ vmx->guest_msrs[to] = vmx->guest_msrs[from];
+ vmx->guest_msrs[from] = tmp;
+ tmp = vmx->host_msrs[to];
+ vmx->host_msrs[to] = vmx->host_msrs[from];
+ vmx->host_msrs[from] = tmp;
}
/*
@@ -544,13 +578,13 @@ static void setup_msrs(struct kvm_vcpu *vcpu)
move_msr_up(vcpu, index, save_nmsrs++);
}
#endif
- vcpu->save_nmsrs = save_nmsrs;
+ vmx(vcpu)->save_nmsrs = save_nmsrs;
#ifdef CONFIG_X86_64
- vcpu->msr_offset_kernel_gs_base =
+ vmx(vcpu)->msr_offset_kernel_gs_base =
__find_msr_index(vcpu, MSR_KERNEL_GS_BASE);
#endif
- vcpu->msr_offset_efer = __find_msr_index(vcpu, MSR_EFER);
+ vmx(vcpu)->msr_offset_efer = __find_msr_index(vcpu, MSR_EFER);
}
/*
@@ -586,7 +620,7 @@ static void guest_write_tsc(u64 guest_tsc)
static int vmx_get_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 *pdata)
{
u64 data;
- struct vmx_msr_entry *msr;
+ struct kvm_msr_entry *msr;
if (!pdata) {
printk(KERN_ERR "BUG: get_msr called with NULL pdata\n");
@@ -636,14 +670,15 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 *pdata)
*/
static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
{
- struct vmx_msr_entry *msr;
+ struct vcpu_vmx *vmx = vmx(vcpu);
+ struct kvm_msr_entry *msr;
int ret = 0;
switch (msr_index) {
#ifdef CONFIG_X86_64
case MSR_EFER:
ret = kvm_set_msr_common(vcpu, msr_index, data);
- if (vcpu->vmx_host_state.loaded)
+ if (vmx->host_state.loaded)
load_transition_efer(vcpu);
break;
case MSR_FS_BASE:
@@ -669,8 +704,8 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
msr = find_msr_entry(vcpu, msr_index);
if (msr) {
msr->data = data;
- if (vcpu->vmx_host_state.loaded)
- load_msrs(vcpu->guest_msrs, vcpu->save_nmsrs);
+ if (vmx->host_state.loaded)
+ load_msrs(vmx->guest_msrs, vmx->save_nmsrs);
break;
}
ret = kvm_set_msr_common(vcpu, msr_index, data);
@@ -1042,7 +1077,7 @@ static void vmx_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
static void vmx_set_efer(struct kvm_vcpu *vcpu, u64 efer)
{
- struct vmx_msr_entry *msr = find_msr_entry(vcpu, MSR_EFER);
+ struct kvm_msr_entry *msr = find_msr_entry(vcpu, MSR_EFER);
vcpu->shadow_efer = efer;
if (efer & EFER_LMA) {
@@ -1240,6 +1275,7 @@ static int vmx_vcpu_setup(struct kvm_vcpu *vcpu)
int i;
int ret = 0;
unsigned long kvm_vmx_return;
+ struct vcpu_vmx *vmx = vmx(vcpu);
if (!init_rmode_tss(vcpu->kvm)) {
ret = -ENOMEM;
@@ -1374,18 +1410,18 @@ static int vmx_vcpu_setup(struct kvm_vcpu *vcpu)
u32 index = vmx_msr_index[i];
u32 data_low, data_high;
u64 data;
- int j = vcpu->nmsrs;
+ int j = vmx->nmsrs;
if (rdmsr_safe(index, &data_low, &data_high) < 0)
continue;
if (wrmsr_safe(index, data_low, data_high) < 0)
continue;
data = data_low | ((u64)data_high << 32);
- vcpu->host_msrs[j].index = index;
- vcpu->host_msrs[j].reserved = 0;
- vcpu->host_msrs[j].data = data;
- vcpu->guest_msrs[j] = vcpu->host_msrs[j];
- ++vcpu->nmsrs;
+ vmx->host_msrs[j].index = index;
+ vmx->host_msrs[j].reserved = 0;
+ vmx->host_msrs[j].data = data;
+ vmx->guest_msrs[j] = vmx->host_msrs[j];
+ ++vmx->nmsrs;
}
setup_msrs(vcpu);
@@ -1990,6 +2026,7 @@ static int vmx_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
u8 fail;
int r;
+ struct vcpu_vmx *vmx = vmx(vcpu);
preempted:
if (vcpu->guest_debug.enabled)
@@ -2114,7 +2151,7 @@ again:
#endif
"setbe %0 \n\t"
: "=q" (fail)
- : "r"(vcpu->launched), "d"((unsigned long)HOST_RSP),
+ : "r"(vmx->launched), "d"((unsigned long)HOST_RSP),
"c"(vcpu),
[rax]"i"(offsetof(struct kvm_vcpu, regs[VCPU_REGS_RAX])),
[rbx]"i"(offsetof(struct kvm_vcpu, regs[VCPU_REGS_RBX])),
@@ -2144,7 +2181,7 @@ again:
vcpu->interrupt_window_open = (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & 3) == 0;
asm ("mov %0, %%ds; mov %0, %%es" : : "r"(__USER_DS));
- vcpu->launched = 1;
+ vmx->launched = 1;
preempt_enable();
@@ -2225,10 +2262,12 @@ static void vmx_inject_page_fault(struct kvm_vcpu *vcpu,
static void vmx_free_vmcs(struct kvm_vcpu *vcpu)
{
- if (vcpu->vmcs) {
+ struct vcpu_vmx *vmx = vmx(vcpu);
+
+ if (vmx->vmcs) {
on_each_cpu(__vcpu_clear, vcpu, 0, 1);
- free_vmcs(vcpu->vmcs);
- vcpu->vmcs = NULL;
+ free_vmcs(vmx->vmcs);
+ vmx->vmcs = NULL;
}
}
@@ -2239,33 +2278,39 @@ static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
{
- struct vmcs *vmcs;
+ struct vcpu_vmx *vmx;
- vcpu->guest_msrs = kmalloc(PAGE_SIZE, GFP_KERNEL);
- if (!vcpu->guest_msrs)
+ vmx = kzalloc(sizeof(*vmx), GFP_KERNEL);
+ if (!vmx)
return -ENOMEM;
- vcpu->host_msrs = kmalloc(PAGE_SIZE, GFP_KERNEL);
- if (!vcpu->host_msrs)
- goto out_free_guest_msrs;
+ vmx->guest_msrs = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!vmx->guest_msrs)
+ goto out_free;
- vmcs = alloc_vmcs();
- if (!vmcs)
- goto out_free_msrs;
+ vmx->host_msrs = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!vmx->host_msrs)
+ goto out_free;
- vmcs_clear(vmcs);
- vcpu->vmcs = vmcs;
- vcpu->launched = 0;
+ vmx->vmcs = alloc_vmcs();
+ if (!vmx->vmcs)
+ goto out_free;
+
+ vmcs_clear(vmx->vmcs);
+
+ vmx->vcpu = vcpu;
+ vcpu->_priv = vmx;
return 0;
-out_free_msrs:
- kfree(vcpu->host_msrs);
- vcpu->host_msrs = NULL;
+out_free:
+ if (vmx->host_msrs)
+ kfree(vmx->host_msrs);
+
+ if (vmx->guest_msrs)
+ kfree(vmx->guest_msrs);
-out_free_guest_msrs:
- kfree(vcpu->guest_msrs);
- vcpu->guest_msrs = NULL;
+ kfree(vmx);
return -ENOMEM;
}