* [RFC] KVM Source layout Proposal to accommodate new CPU architecture
@ 2007-09-26 8:33 Zhang, Xiantao
From: Zhang, Xiantao @ 2007-09-26 8:33 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f; +Cc: avi-atKUWr5tajBWk0Htik3J/w
[-- Attachment #1: Type: text/plain, Size: 1844 bytes --]
Hi Folks,
We are working on enabling KVM support on the IA64 platform. Linux and
Windows guests now run stably on KVM with Open GFW and achieve
reasonable performance. However, the current KVM code only considers the
x86 platform and lacks a cross-architecture framework. We therefore
propose a new KVM source layout to accommodate additional CPU
architectures; the attached foils describe the details. With this
proposal, we can still boot x86 guests based on commit
2e278972a11eb14f031dea242a9ed118adfa0932, with no regressions observed.
On the IA64 side, we are rebasing our code onto this framework.
Main changes to the current source:
1. Add subdirectories, such as x86 and ia64, to hold arch-specific code.
2. Split kvm_main.c into two parts. One is still called kvm_main.c and
contains only the KVM interfaces shared with user space plus the basic
KVM infrastructure. The other is named kvm_arch.c and lives under the
arch subdirectory (e.g. x86, ia64), holding the arch-specific code that
supplements the functionality of kvm_main.c.
3. Add an "include" directory under drivers/kvm. Given the potentially
complex code logic in the KVM source, some architectures may need to
maintain many header files. Putting them under the top-level
include/asm-arch directory would add maintenance effort, so we place
them under drivers/kvm instead and select the right set at kernel
configuration time.
BTW, userspace code changes are not covered in this thread.
For readability, we did not attach the full diff to this mail, since the
changes to the KVM source structure are large; instead we post a tarball
containing the whole drivers/kvm directory. For comparison, I attached
kvm_main.diff as well.
Any comments are appreciated! We hope to see IA64 support in KVM soon!
Thanks & Best Wishes
Xiantao
Intel Open Source Technology Center
[-- Attachment #2: KVM source structure proposal.pdf --]
[-- Type: application/octet-stream, Size: 102169 bytes --]
[-- Attachment #3: kvm_main.diff --]
[-- Type: application/octet-stream, Size: 66100 bytes --]
diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
index 99e4917..ab5ef57 100644
--- a/drivers/kvm/kvm_main.c
+++ b/drivers/kvm/kvm_main.c
@@ -16,11 +16,10 @@
*/
#include "kvm.h"
-#include "x86_emulate.h"
-#include "segment_descriptor.h"
-#include "irq.h"
-#include <linux/kvm.h>
+#include <kvm/irq.h>
+#include <kvm/mmu.h>
+
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/percpu.h>
@@ -40,11 +39,10 @@
#include <linux/anon_inodes.h>
#include <linux/profile.h>
+#include <asm/kvm.h>
#include <asm/processor.h>
-#include <asm/msr.h>
#include <asm/io.h>
#include <asm/uaccess.h>
-#include <asm/desc.h>
MODULE_AUTHOR("Qumranet");
MODULE_LICENSE("GPL");
@@ -54,143 +52,39 @@ static LIST_HEAD(vm_list);
static cpumask_t cpus_hardware_enabled;
-struct kvm_x86_ops *kvm_x86_ops;
struct kmem_cache *kvm_vcpu_cache;
EXPORT_SYMBOL_GPL(kvm_vcpu_cache);
static __read_mostly struct preempt_ops kvm_preempt_ops;
-#define STAT_OFFSET(x) offsetof(struct kvm_vcpu, stat.x)
-
-static struct kvm_stats_debugfs_item {
- const char *name;
- int offset;
- struct dentry *dentry;
-} debugfs_entries[] = {
- { "pf_fixed", STAT_OFFSET(pf_fixed) },
- { "pf_guest", STAT_OFFSET(pf_guest) },
- { "tlb_flush", STAT_OFFSET(tlb_flush) },
- { "invlpg", STAT_OFFSET(invlpg) },
- { "exits", STAT_OFFSET(exits) },
- { "io_exits", STAT_OFFSET(io_exits) },
- { "mmio_exits", STAT_OFFSET(mmio_exits) },
- { "signal_exits", STAT_OFFSET(signal_exits) },
- { "irq_window", STAT_OFFSET(irq_window_exits) },
- { "halt_exits", STAT_OFFSET(halt_exits) },
- { "halt_wakeup", STAT_OFFSET(halt_wakeup) },
- { "request_irq", STAT_OFFSET(request_irq_exits) },
- { "irq_exits", STAT_OFFSET(irq_exits) },
- { "light_exits", STAT_OFFSET(light_exits) },
- { "efer_reload", STAT_OFFSET(efer_reload) },
- { NULL }
-};
-
static struct dentry *debugfs_dir;
-#define MAX_IO_MSRS 256
-
-#define CR0_RESERVED_BITS \
- (~(unsigned long)(X86_CR0_PE | X86_CR0_MP | X86_CR0_EM | X86_CR0_TS \
- | X86_CR0_ET | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM \
- | X86_CR0_NW | X86_CR0_CD | X86_CR0_PG))
-#define CR4_RESERVED_BITS \
- (~(unsigned long)(X86_CR4_VME | X86_CR4_PVI | X86_CR4_TSD | X86_CR4_DE\
- | X86_CR4_PSE | X86_CR4_PAE | X86_CR4_MCE \
- | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR \
- | X86_CR4_OSXMMEXCPT | X86_CR4_VMXE))
-
-#define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
-#define EFER_RESERVED_BITS 0xfffffffffffff2fe
-
-#ifdef CONFIG_X86_64
-// LDT or TSS descriptor in the GDT. 16 bytes.
-struct segment_descriptor_64 {
- struct segment_descriptor s;
- u32 base_higher;
- u32 pad_zero;
-};
-
-#endif
-
static long kvm_vcpu_ioctl(struct file *file, unsigned int ioctl,
unsigned long arg);
-unsigned long segment_base(u16 selector)
-{
- struct descriptor_table gdt;
- struct segment_descriptor *d;
- unsigned long table_base;
- typedef unsigned long ul;
- unsigned long v;
-
- if (selector == 0)
- return 0;
-
- asm ("sgdt %0" : "=m"(gdt));
- table_base = gdt.base;
-
- if (selector & 4) { /* from ldt */
- u16 ldt_selector;
-
- asm ("sldt %0" : "=g"(ldt_selector));
- table_base = segment_base(ldt_selector);
- }
- d = (struct segment_descriptor *)(table_base + (selector & ~7));
- v = d->base_low | ((ul)d->base_mid << 16) | ((ul)d->base_high << 24);
-#ifdef CONFIG_X86_64
- if (d->system == 0
- && (d->type == 2 || d->type == 9 || d->type == 11))
- v |= ((ul)((struct segment_descriptor_64 *)d)->base_higher) << 32;
-#endif
- return v;
-}
-EXPORT_SYMBOL_GPL(segment_base);
-
static inline int valid_vcpu(int n)
{
return likely(n >= 0 && n < KVM_MAX_VCPUS);
}
-void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
-{
- if (!vcpu->fpu_active || vcpu->guest_fpu_loaded)
- return;
-
- vcpu->guest_fpu_loaded = 1;
- fx_save(&vcpu->host_fx_image);
- fx_restore(&vcpu->guest_fx_image);
-}
-EXPORT_SYMBOL_GPL(kvm_load_guest_fpu);
-
-void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
-{
- if (!vcpu->guest_fpu_loaded)
- return;
-
- vcpu->guest_fpu_loaded = 0;
- fx_save(&vcpu->guest_fx_image);
- fx_restore(&vcpu->host_fx_image);
-}
-EXPORT_SYMBOL_GPL(kvm_put_guest_fpu);
-
/*
* Switches to specified vcpu, until a matching vcpu_put()
*/
-static void vcpu_load(struct kvm_vcpu *vcpu)
+void vcpu_load(struct kvm_vcpu *vcpu)
{
int cpu;
mutex_lock(&vcpu->mutex);
cpu = get_cpu();
preempt_notifier_register(&vcpu->preempt_notifier);
- kvm_x86_ops->vcpu_load(vcpu, cpu);
+ kvm_arch_ops->vcpu_load(vcpu, cpu);
put_cpu();
}
-static void vcpu_put(struct kvm_vcpu *vcpu)
+void vcpu_put(struct kvm_vcpu *vcpu)
{
preempt_disable();
- kvm_x86_ops->vcpu_put(vcpu);
+ kvm_arch_ops->vcpu_put(vcpu);
preempt_notifier_unregister(&vcpu->preempt_notifier);
preempt_enable();
mutex_unlock(&vcpu->mutex);
@@ -247,7 +141,7 @@ int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
mutex_init(&vcpu->mutex);
vcpu->cpu = -1;
- vcpu->mmu.root_hpa = INVALID_PAGE;
+ vcpu->arch.mmu.root_hpa = INVALID_PAGE;
vcpu->kvm = kvm;
vcpu->vcpu_id = id;
if (!irqchip_in_kernel(kvm) || id == 0)
@@ -288,9 +182,9 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_init);
void kvm_vcpu_uninit(struct kvm_vcpu *vcpu)
{
kvm_mmu_destroy(vcpu);
- if (vcpu->apic)
- hrtimer_cancel(&vcpu->apic->timer.dev);
- kvm_free_apic(vcpu->apic);
+ if (vcpu->arch.apic)
+ hrtimer_cancel(&vcpu->arch.apic->timer.dev);
+ kvm_free_apic(vcpu->arch.apic);
free_page((unsigned long)vcpu->pio_data);
free_page((unsigned long)vcpu->run);
}
@@ -298,14 +192,15 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_uninit);
static struct kvm *kvm_create_vm(void)
{
- struct kvm *kvm = kzalloc(sizeof(struct kvm), GFP_KERNEL);
+ struct kvm *kvm = alloc_kvm();
+ int r;
if (!kvm)
return ERR_PTR(-ENOMEM);
kvm_io_bus_init(&kvm->pio_bus);
+ arch_create_vm(kvm);
mutex_init(&kvm->lock);
- INIT_LIST_HEAD(&kvm->active_mmu_pages);
kvm_io_bus_init(&kvm->mmio_bus);
spin_lock(&kvm_lock);
list_add(&kvm->vm_list, &vm_list);
@@ -345,7 +240,7 @@ static void kvm_free_physmem(struct kvm *kvm)
kvm_free_physmem_slot(&kvm->memslots[i], NULL);
}
-static void free_pio_guest_pages(struct kvm_vcpu *vcpu)
+void free_pio_guest_pages(struct kvm_vcpu *vcpu)
{
int i;
@@ -375,7 +270,7 @@ static void kvm_free_vcpus(struct kvm *kvm)
kvm_unload_vcpu_mmu(kvm->vcpus[i]);
for (i = 0; i < KVM_MAX_VCPUS; ++i) {
if (kvm->vcpus[i]) {
- kvm_x86_ops->vcpu_free(kvm->vcpus[i]);
+ kvm_arch_ops->vcpu_free(kvm->vcpus[i]);
kvm->vcpus[i] = NULL;
}
}
@@ -389,8 +284,8 @@ static void kvm_destroy_vm(struct kvm *kvm)
spin_unlock(&kvm_lock);
kvm_io_bus_destroy(&kvm->pio_bus);
kvm_io_bus_destroy(&kvm->mmio_bus);
- kfree(kvm->vpic);
- kfree(kvm->vioapic);
+ kfree(kvm->arch.vpic);
+ kfree(kvm->arch.vioapic);
kvm_free_vcpus(kvm);
kvm_free_physmem(kvm);
kfree(kvm);
@@ -404,234 +299,12 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
return 0;
}
-static void inject_gp(struct kvm_vcpu *vcpu)
-{
- kvm_x86_ops->inject_gp(vcpu, 0);
-}
-
-/*
- * Load the pae pdptrs. Return true is they are all valid.
- */
-static int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
-{
- gfn_t pdpt_gfn = cr3 >> PAGE_SHIFT;
- unsigned offset = ((cr3 & (PAGE_SIZE-1)) >> 5) << 2;
- int i;
- u64 *pdpt;
- int ret;
- struct page *page;
- u64 pdpte[ARRAY_SIZE(vcpu->pdptrs)];
-
- mutex_lock(&vcpu->kvm->lock);
- page = gfn_to_page(vcpu->kvm, pdpt_gfn);
- if (!page) {
- ret = 0;
- goto out;
- }
-
- pdpt = kmap_atomic(page, KM_USER0);
- memcpy(pdpte, pdpt+offset, sizeof(pdpte));
- kunmap_atomic(pdpt, KM_USER0);
-
- for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
- if ((pdpte[i] & 1) && (pdpte[i] & 0xfffffff0000001e6ull)) {
- ret = 0;
- goto out;
- }
- }
- ret = 1;
-
- memcpy(vcpu->pdptrs, pdpte, sizeof(vcpu->pdptrs));
-out:
- mutex_unlock(&vcpu->kvm->lock);
-
- return ret;
-}
-
-void set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
-{
- if (cr0 & CR0_RESERVED_BITS) {
- printk(KERN_DEBUG "set_cr0: 0x%lx #GP, reserved bits 0x%lx\n",
- cr0, vcpu->cr0);
- inject_gp(vcpu);
- return;
- }
-
- if ((cr0 & X86_CR0_NW) && !(cr0 & X86_CR0_CD)) {
- printk(KERN_DEBUG "set_cr0: #GP, CD == 0 && NW == 1\n");
- inject_gp(vcpu);
- return;
- }
-
- if ((cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PE)) {
- printk(KERN_DEBUG "set_cr0: #GP, set PG flag "
- "and a clear PE flag\n");
- inject_gp(vcpu);
- return;
- }
-
- if (!is_paging(vcpu) && (cr0 & X86_CR0_PG)) {
-#ifdef CONFIG_X86_64
- if ((vcpu->shadow_efer & EFER_LME)) {
- int cs_db, cs_l;
-
- if (!is_pae(vcpu)) {
- printk(KERN_DEBUG "set_cr0: #GP, start paging "
- "in long mode while PAE is disabled\n");
- inject_gp(vcpu);
- return;
- }
- kvm_x86_ops->get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
- if (cs_l) {
- printk(KERN_DEBUG "set_cr0: #GP, start paging "
- "in long mode while CS.L == 1\n");
- inject_gp(vcpu);
- return;
-
- }
- } else
-#endif
- if (is_pae(vcpu) && !load_pdptrs(vcpu, vcpu->cr3)) {
- printk(KERN_DEBUG "set_cr0: #GP, pdptrs "
- "reserved bits\n");
- inject_gp(vcpu);
- return;
- }
-
- }
-
- kvm_x86_ops->set_cr0(vcpu, cr0);
- vcpu->cr0 = cr0;
-
- mutex_lock(&vcpu->kvm->lock);
- kvm_mmu_reset_context(vcpu);
- mutex_unlock(&vcpu->kvm->lock);
- return;
-}
-EXPORT_SYMBOL_GPL(set_cr0);
-
-void lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
-{
- set_cr0(vcpu, (vcpu->cr0 & ~0x0ful) | (msw & 0x0f));
-}
-EXPORT_SYMBOL_GPL(lmsw);
-
-void set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
-{
- if (cr4 & CR4_RESERVED_BITS) {
- printk(KERN_DEBUG "set_cr4: #GP, reserved bits\n");
- inject_gp(vcpu);
- return;
- }
-
- if (is_long_mode(vcpu)) {
- if (!(cr4 & X86_CR4_PAE)) {
- printk(KERN_DEBUG "set_cr4: #GP, clearing PAE while "
- "in long mode\n");
- inject_gp(vcpu);
- return;
- }
- } else if (is_paging(vcpu) && !is_pae(vcpu) && (cr4 & X86_CR4_PAE)
- && !load_pdptrs(vcpu, vcpu->cr3)) {
- printk(KERN_DEBUG "set_cr4: #GP, pdptrs reserved bits\n");
- inject_gp(vcpu);
- return;
- }
-
- if (cr4 & X86_CR4_VMXE) {
- printk(KERN_DEBUG "set_cr4: #GP, setting VMXE\n");
- inject_gp(vcpu);
- return;
- }
- kvm_x86_ops->set_cr4(vcpu, cr4);
- vcpu->cr4 = cr4;
- mutex_lock(&vcpu->kvm->lock);
- kvm_mmu_reset_context(vcpu);
- mutex_unlock(&vcpu->kvm->lock);
-}
-EXPORT_SYMBOL_GPL(set_cr4);
-
-void set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
-{
- if (is_long_mode(vcpu)) {
- if (cr3 & CR3_L_MODE_RESERVED_BITS) {
- printk(KERN_DEBUG "set_cr3: #GP, reserved bits\n");
- inject_gp(vcpu);
- return;
- }
- } else {
- if (is_pae(vcpu)) {
- if (cr3 & CR3_PAE_RESERVED_BITS) {
- printk(KERN_DEBUG
- "set_cr3: #GP, reserved bits\n");
- inject_gp(vcpu);
- return;
- }
- if (is_paging(vcpu) && !load_pdptrs(vcpu, cr3)) {
- printk(KERN_DEBUG "set_cr3: #GP, pdptrs "
- "reserved bits\n");
- inject_gp(vcpu);
- return;
- }
- } else {
- if (cr3 & CR3_NONPAE_RESERVED_BITS) {
- printk(KERN_DEBUG
- "set_cr3: #GP, reserved bits\n");
- inject_gp(vcpu);
- return;
- }
- }
- }
-
- mutex_lock(&vcpu->kvm->lock);
- /*
- * Does the new cr3 value map to physical memory? (Note, we
- * catch an invalid cr3 even in real-mode, because it would
- * cause trouble later on when we turn on paging anyway.)
- *
- * A real CPU would silently accept an invalid cr3 and would
- * attempt to use it - with largely undefined (and often hard
- * to debug) behavior on the guest side.
- */
- if (unlikely(!gfn_to_memslot(vcpu->kvm, cr3 >> PAGE_SHIFT)))
- inject_gp(vcpu);
- else {
- vcpu->cr3 = cr3;
- vcpu->mmu.new_cr3(vcpu);
- }
- mutex_unlock(&vcpu->kvm->lock);
-}
-EXPORT_SYMBOL_GPL(set_cr3);
-
-void set_cr8(struct kvm_vcpu *vcpu, unsigned long cr8)
-{
- if (cr8 & CR8_RESERVED_BITS) {
- printk(KERN_DEBUG "set_cr8: #GP, reserved bits 0x%lx\n", cr8);
- inject_gp(vcpu);
- return;
- }
- if (irqchip_in_kernel(vcpu->kvm))
- kvm_lapic_set_tpr(vcpu, cr8);
- else
- vcpu->cr8 = cr8;
-}
-EXPORT_SYMBOL_GPL(set_cr8);
-
-unsigned long get_cr8(struct kvm_vcpu *vcpu)
-{
- if (irqchip_in_kernel(vcpu->kvm))
- return kvm_lapic_get_cr8(vcpu);
- else
- return vcpu->cr8;
-}
-EXPORT_SYMBOL_GPL(get_cr8);
-
u64 kvm_get_apic_base(struct kvm_vcpu *vcpu)
{
if (irqchip_in_kernel(vcpu->kvm))
- return vcpu->apic_base;
+ return vcpu->arch.apic_base;
else
- return vcpu->apic_base;
+ return vcpu->arch.apic_base;
}
EXPORT_SYMBOL_GPL(kvm_get_apic_base);
@@ -641,30 +314,10 @@ void kvm_set_apic_base(struct kvm_vcpu *vcpu, u64 data)
if (irqchip_in_kernel(vcpu->kvm))
kvm_lapic_set_base(vcpu, data);
else
- vcpu->apic_base = data;
+ vcpu->arch.apic_base = data;
}
EXPORT_SYMBOL_GPL(kvm_set_apic_base);
-void fx_init(struct kvm_vcpu *vcpu)
-{
- unsigned after_mxcsr_mask;
-
- /* Initialize guest FPU by resetting ours and saving into guest's */
- preempt_disable();
- fx_save(&vcpu->host_fx_image);
- fpu_init();
- fx_save(&vcpu->guest_fx_image);
- fx_restore(&vcpu->host_fx_image);
- preempt_enable();
-
- vcpu->cr0 |= X86_CR0_ET;
- after_mxcsr_mask = offsetof(struct i387_fxsave_struct, st_space);
- vcpu->guest_fx_image.mxcsr = 0x1f80;
- memset((void *)&vcpu->guest_fx_image + after_mxcsr_mask,
- 0, sizeof(struct i387_fxsave_struct) - after_mxcsr_mask);
-}
-EXPORT_SYMBOL_GPL(fx_init);
-
/*
* Allocate some memory and give it an address in the guest physical address
* space.
@@ -782,51 +435,6 @@ out:
}
/*
- * Get (and clear) the dirty memory log for a memory slot.
- */
-static int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
- struct kvm_dirty_log *log)
-{
- struct kvm_memory_slot *memslot;
- int r, i;
- int n;
- unsigned long any = 0;
-
- mutex_lock(&kvm->lock);
-
- r = -EINVAL;
- if (log->slot >= KVM_MEMORY_SLOTS)
- goto out;
-
- memslot = &kvm->memslots[log->slot];
- r = -ENOENT;
- if (!memslot->dirty_bitmap)
- goto out;
-
- n = ALIGN(memslot->npages, BITS_PER_LONG) / 8;
-
- for (i = 0; !any && i < n/sizeof(long); ++i)
- any = memslot->dirty_bitmap[i];
-
- r = -EFAULT;
- if (copy_to_user(log->dirty_bitmap, memslot->dirty_bitmap, n))
- goto out;
-
- /* If nothing is dirty, don't bother messing with page tables. */
- if (any) {
- kvm_mmu_slot_remove_write_access(kvm, log->slot);
- kvm_flush_remote_tlbs(kvm);
- memset(memslot->dirty_bitmap, 0, n);
- }
-
- r = 0;
-
-out:
- mutex_unlock(&kvm->lock);
- return r;
-}
-
-/*
* Set a new alias region. Aliases map a portion of physical memory into
* another portion. This is useful for memory windows, for example the PC
* VGA region.
@@ -874,63 +482,6 @@ out:
return r;
}
-static int kvm_vm_ioctl_get_irqchip(struct kvm *kvm, struct kvm_irqchip *chip)
-{
- int r;
-
- r = 0;
- switch (chip->chip_id) {
- case KVM_IRQCHIP_PIC_MASTER:
- memcpy (&chip->chip.pic,
- &pic_irqchip(kvm)->pics[0],
- sizeof(struct kvm_pic_state));
- break;
- case KVM_IRQCHIP_PIC_SLAVE:
- memcpy (&chip->chip.pic,
- &pic_irqchip(kvm)->pics[1],
- sizeof(struct kvm_pic_state));
- break;
- case KVM_IRQCHIP_IOAPIC:
- memcpy (&chip->chip.ioapic,
- ioapic_irqchip(kvm),
- sizeof(struct kvm_ioapic_state));
- break;
- default:
- r = -EINVAL;
- break;
- }
- return r;
-}
-
-static int kvm_vm_ioctl_set_irqchip(struct kvm *kvm, struct kvm_irqchip *chip)
-{
- int r;
-
- r = 0;
- switch (chip->chip_id) {
- case KVM_IRQCHIP_PIC_MASTER:
- memcpy (&pic_irqchip(kvm)->pics[0],
- &chip->chip.pic,
- sizeof(struct kvm_pic_state));
- break;
- case KVM_IRQCHIP_PIC_SLAVE:
- memcpy (&pic_irqchip(kvm)->pics[1],
- &chip->chip.pic,
- sizeof(struct kvm_pic_state));
- break;
- case KVM_IRQCHIP_IOAPIC:
- memcpy (ioapic_irqchip(kvm),
- &chip->chip.ioapic,
- sizeof(struct kvm_ioapic_state));
- break;
- default:
- r = -EINVAL;
- break;
- }
- kvm_pic_update_irq(pic_irqchip(kvm));
- return r;
-}
-
static gfn_t unalias_gfn(struct kvm *kvm, gfn_t gfn)
{
int i;
@@ -992,359 +543,11 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
}
}
-int emulator_read_std(unsigned long addr,
- void *val,
- unsigned int bytes,
- struct kvm_vcpu *vcpu)
-{
- void *data = val;
-
- while (bytes) {
- gpa_t gpa = vcpu->mmu.gva_to_gpa(vcpu, addr);
- unsigned offset = addr & (PAGE_SIZE-1);
- unsigned tocopy = min(bytes, (unsigned)PAGE_SIZE - offset);
- unsigned long pfn;
- struct page *page;
- void *page_virt;
-
- if (gpa == UNMAPPED_GVA)
- return X86EMUL_PROPAGATE_FAULT;
- pfn = gpa >> PAGE_SHIFT;
- page = gfn_to_page(vcpu->kvm, pfn);
- if (!page)
- return X86EMUL_UNHANDLEABLE;
- page_virt = kmap_atomic(page, KM_USER0);
-
- memcpy(data, page_virt + offset, tocopy);
-
- kunmap_atomic(page_virt, KM_USER0);
-
- bytes -= tocopy;
- data += tocopy;
- addr += tocopy;
- }
-
- return X86EMUL_CONTINUE;
-}
-EXPORT_SYMBOL_GPL(emulator_read_std);
-
-static int emulator_write_std(unsigned long addr,
- const void *val,
- unsigned int bytes,
- struct kvm_vcpu *vcpu)
-{
- pr_unimpl(vcpu, "emulator_write_std: addr %lx n %d\n", addr, bytes);
- return X86EMUL_UNHANDLEABLE;
-}
-
-/*
- * Only apic need an MMIO device hook, so shortcut now..
- */
-static struct kvm_io_device *vcpu_find_pervcpu_dev(struct kvm_vcpu *vcpu,
- gpa_t addr)
-{
- struct kvm_io_device *dev;
-
- if (vcpu->apic) {
- dev = &vcpu->apic->dev;
- if (dev->in_range(dev, addr))
- return dev;
- }
- return NULL;
-}
-
-static struct kvm_io_device *vcpu_find_mmio_dev(struct kvm_vcpu *vcpu,
- gpa_t addr)
-{
- struct kvm_io_device *dev;
-
- dev = vcpu_find_pervcpu_dev(vcpu, addr);
- if (dev == NULL)
- dev = kvm_io_bus_find_dev(&vcpu->kvm->mmio_bus, addr);
- return dev;
-}
-
-static struct kvm_io_device *vcpu_find_pio_dev(struct kvm_vcpu *vcpu,
- gpa_t addr)
-{
- return kvm_io_bus_find_dev(&vcpu->kvm->pio_bus, addr);
-}
-
-static int emulator_read_emulated(unsigned long addr,
- void *val,
- unsigned int bytes,
- struct kvm_vcpu *vcpu)
-{
- struct kvm_io_device *mmio_dev;
- gpa_t gpa;
-
- if (vcpu->mmio_read_completed) {
- memcpy(val, vcpu->mmio_data, bytes);
- vcpu->mmio_read_completed = 0;
- return X86EMUL_CONTINUE;
- } else if (emulator_read_std(addr, val, bytes, vcpu)
- == X86EMUL_CONTINUE)
- return X86EMUL_CONTINUE;
-
- gpa = vcpu->mmu.gva_to_gpa(vcpu, addr);
- if (gpa == UNMAPPED_GVA)
- return X86EMUL_PROPAGATE_FAULT;
-
- /*
- * Is this MMIO handled locally?
- */
- mmio_dev = vcpu_find_mmio_dev(vcpu, gpa);
- if (mmio_dev) {
- kvm_iodevice_read(mmio_dev, gpa, bytes, val);
- return X86EMUL_CONTINUE;
- }
-
- vcpu->mmio_needed = 1;
- vcpu->mmio_phys_addr = gpa;
- vcpu->mmio_size = bytes;
- vcpu->mmio_is_write = 0;
-
- return X86EMUL_UNHANDLEABLE;
-}
-
-static int emulator_write_phys(struct kvm_vcpu *vcpu, gpa_t gpa,
- const void *val, int bytes)
-{
- struct page *page;
- void *virt;
-
- if (((gpa + bytes - 1) >> PAGE_SHIFT) != (gpa >> PAGE_SHIFT))
- return 0;
- page = gfn_to_page(vcpu->kvm, gpa >> PAGE_SHIFT);
- if (!page)
- return 0;
- mark_page_dirty(vcpu->kvm, gpa >> PAGE_SHIFT);
- virt = kmap_atomic(page, KM_USER0);
- kvm_mmu_pte_write(vcpu, gpa, val, bytes);
- memcpy(virt + offset_in_page(gpa), val, bytes);
- kunmap_atomic(virt, KM_USER0);
- return 1;
-}
-
-static int emulator_write_emulated_onepage(unsigned long addr,
- const void *val,
- unsigned int bytes,
- struct kvm_vcpu *vcpu)
-{
- struct kvm_io_device *mmio_dev;
- gpa_t gpa = vcpu->mmu.gva_to_gpa(vcpu, addr);
-
- if (gpa == UNMAPPED_GVA) {
- kvm_x86_ops->inject_page_fault(vcpu, addr, 2);
- return X86EMUL_PROPAGATE_FAULT;
- }
-
- if (emulator_write_phys(vcpu, gpa, val, bytes))
- return X86EMUL_CONTINUE;
-
- /*
- * Is this MMIO handled locally?
- */
- mmio_dev = vcpu_find_mmio_dev(vcpu, gpa);
- if (mmio_dev) {
- kvm_iodevice_write(mmio_dev, gpa, bytes, val);
- return X86EMUL_CONTINUE;
- }
-
- vcpu->mmio_needed = 1;
- vcpu->mmio_phys_addr = gpa;
- vcpu->mmio_size = bytes;
- vcpu->mmio_is_write = 1;
- memcpy(vcpu->mmio_data, val, bytes);
-
- return X86EMUL_CONTINUE;
-}
-
-int emulator_write_emulated(unsigned long addr,
- const void *val,
- unsigned int bytes,
- struct kvm_vcpu *vcpu)
-{
- /* Crossing a page boundary? */
- if (((addr + bytes - 1) ^ addr) & PAGE_MASK) {
- int rc, now;
-
- now = -addr & ~PAGE_MASK;
- rc = emulator_write_emulated_onepage(addr, val, now, vcpu);
- if (rc != X86EMUL_CONTINUE)
- return rc;
- addr += now;
- val += now;
- bytes -= now;
- }
- return emulator_write_emulated_onepage(addr, val, bytes, vcpu);
-}
-EXPORT_SYMBOL_GPL(emulator_write_emulated);
-
-static int emulator_cmpxchg_emulated(unsigned long addr,
- const void *old,
- const void *new,
- unsigned int bytes,
- struct kvm_vcpu *vcpu)
-{
- static int reported;
-
- if (!reported) {
- reported = 1;
- printk(KERN_WARNING "kvm: emulating exchange as write\n");
- }
- return emulator_write_emulated(addr, new, bytes, vcpu);
-}
-
-static unsigned long get_segment_base(struct kvm_vcpu *vcpu, int seg)
-{
- return kvm_x86_ops->get_segment_base(vcpu, seg);
-}
-
-int emulate_invlpg(struct kvm_vcpu *vcpu, gva_t address)
-{
- return X86EMUL_CONTINUE;
-}
-
-int emulate_clts(struct kvm_vcpu *vcpu)
-{
- vcpu->cr0 &= ~X86_CR0_TS;
- kvm_x86_ops->set_cr0(vcpu, vcpu->cr0);
- return X86EMUL_CONTINUE;
-}
-
-int emulator_get_dr(struct x86_emulate_ctxt* ctxt, int dr, unsigned long *dest)
-{
- struct kvm_vcpu *vcpu = ctxt->vcpu;
-
- switch (dr) {
- case 0 ... 3:
- *dest = kvm_x86_ops->get_dr(vcpu, dr);
- return X86EMUL_CONTINUE;
- default:
- pr_unimpl(vcpu, "%s: unexpected dr %u\n", __FUNCTION__, dr);
- return X86EMUL_UNHANDLEABLE;
- }
-}
-
-int emulator_set_dr(struct x86_emulate_ctxt *ctxt, int dr, unsigned long value)
-{
- unsigned long mask = (ctxt->mode == X86EMUL_MODE_PROT64) ? ~0ULL : ~0U;
- int exception;
-
- kvm_x86_ops->set_dr(ctxt->vcpu, dr, value & mask, &exception);
- if (exception) {
- /* FIXME: better handling */
- return X86EMUL_UNHANDLEABLE;
- }
- return X86EMUL_CONTINUE;
-}
-
-void kvm_report_emulation_failure(struct kvm_vcpu *vcpu, const char *context)
-{
- static int reported;
- u8 opcodes[4];
- unsigned long rip = vcpu->rip;
- unsigned long rip_linear;
-
- rip_linear = rip + get_segment_base(vcpu, VCPU_SREG_CS);
-
- if (reported)
- return;
-
- emulator_read_std(rip_linear, (void *)opcodes, 4, vcpu);
-
- printk(KERN_ERR "emulation failed (%s) rip %lx %02x %02x %02x %02x\n",
- context, rip, opcodes[0], opcodes[1], opcodes[2], opcodes[3]);
- reported = 1;
-}
-EXPORT_SYMBOL_GPL(kvm_report_emulation_failure);
-
-struct x86_emulate_ops emulate_ops = {
- .read_std = emulator_read_std,
- .write_std = emulator_write_std,
- .read_emulated = emulator_read_emulated,
- .write_emulated = emulator_write_emulated,
- .cmpxchg_emulated = emulator_cmpxchg_emulated,
-};
-
-int emulate_instruction(struct kvm_vcpu *vcpu,
- struct kvm_run *run,
- unsigned long cr2,
- u16 error_code)
-{
- struct x86_emulate_ctxt emulate_ctxt;
- int r;
- int cs_db, cs_l;
-
- vcpu->mmio_fault_cr2 = cr2;
- kvm_x86_ops->cache_regs(vcpu);
-
- kvm_x86_ops->get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
-
- emulate_ctxt.vcpu = vcpu;
- emulate_ctxt.eflags = kvm_x86_ops->get_rflags(vcpu);
- emulate_ctxt.cr2 = cr2;
- emulate_ctxt.mode = (emulate_ctxt.eflags & X86_EFLAGS_VM)
- ? X86EMUL_MODE_REAL : cs_l
- ? X86EMUL_MODE_PROT64 : cs_db
- ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
-
- if (emulate_ctxt.mode == X86EMUL_MODE_PROT64) {
- emulate_ctxt.cs_base = 0;
- emulate_ctxt.ds_base = 0;
- emulate_ctxt.es_base = 0;
- emulate_ctxt.ss_base = 0;
- } else {
- emulate_ctxt.cs_base = get_segment_base(vcpu, VCPU_SREG_CS);
- emulate_ctxt.ds_base = get_segment_base(vcpu, VCPU_SREG_DS);
- emulate_ctxt.es_base = get_segment_base(vcpu, VCPU_SREG_ES);
- emulate_ctxt.ss_base = get_segment_base(vcpu, VCPU_SREG_SS);
- }
-
- emulate_ctxt.gs_base = get_segment_base(vcpu, VCPU_SREG_GS);
- emulate_ctxt.fs_base = get_segment_base(vcpu, VCPU_SREG_FS);
-
- vcpu->mmio_is_write = 0;
- vcpu->pio.string = 0;
- r = x86_emulate_memop(&emulate_ctxt, &emulate_ops);
- if (vcpu->pio.string)
- return EMULATE_DO_MMIO;
-
- if ((r || vcpu->mmio_is_write) && run) {
- run->exit_reason = KVM_EXIT_MMIO;
- run->mmio.phys_addr = vcpu->mmio_phys_addr;
- memcpy(run->mmio.data, vcpu->mmio_data, 8);
- run->mmio.len = vcpu->mmio_size;
- run->mmio.is_write = vcpu->mmio_is_write;
- }
-
- if (r) {
- if (kvm_mmu_unprotect_page_virt(vcpu, cr2))
- return EMULATE_DONE;
- if (!vcpu->mmio_needed) {
- kvm_report_emulation_failure(vcpu, "mmio");
- return EMULATE_FAIL;
- }
- return EMULATE_DO_MMIO;
- }
-
- kvm_x86_ops->decache_regs(vcpu);
- kvm_x86_ops->set_rflags(vcpu, emulate_ctxt.eflags);
-
- if (vcpu->mmio_is_write) {
- vcpu->mmio_needed = 0;
- return EMULATE_DO_MMIO;
- }
-
- return EMULATE_DONE;
-}
-EXPORT_SYMBOL_GPL(emulate_instruction);
/*
* The vCPU has executed a HLT instruction with in-kernel mode enabled.
*/
-static void kvm_vcpu_block(struct kvm_vcpu *vcpu)
+void kvm_vcpu_block(struct kvm_vcpu *vcpu)
{
DECLARE_WAITQUEUE(wait, current);
@@ -1367,340 +570,6 @@ static void kvm_vcpu_block(struct kvm_vcpu *vcpu)
remove_wait_queue(&vcpu->wq, &wait);
}
-int kvm_emulate_halt(struct kvm_vcpu *vcpu)
-{
- ++vcpu->stat.halt_exits;
- if (irqchip_in_kernel(vcpu->kvm)) {
- vcpu->mp_state = VCPU_MP_STATE_HALTED;
- kvm_vcpu_block(vcpu);
- if (vcpu->mp_state != VCPU_MP_STATE_RUNNABLE)
- return -EINTR;
- return 1;
- } else {
- vcpu->run->exit_reason = KVM_EXIT_HLT;
- return 0;
- }
-}
-EXPORT_SYMBOL_GPL(kvm_emulate_halt);
-
-int kvm_hypercall(struct kvm_vcpu *vcpu, struct kvm_run *run)
-{
- unsigned long nr, a0, a1, a2, a3, a4, a5, ret;
-
- kvm_x86_ops->cache_regs(vcpu);
- ret = -KVM_EINVAL;
-#ifdef CONFIG_X86_64
- if (is_long_mode(vcpu)) {
- nr = vcpu->regs[VCPU_REGS_RAX];
- a0 = vcpu->regs[VCPU_REGS_RDI];
- a1 = vcpu->regs[VCPU_REGS_RSI];
- a2 = vcpu->regs[VCPU_REGS_RDX];
- a3 = vcpu->regs[VCPU_REGS_RCX];
- a4 = vcpu->regs[VCPU_REGS_R8];
- a5 = vcpu->regs[VCPU_REGS_R9];
- } else
-#endif
- {
- nr = vcpu->regs[VCPU_REGS_RBX] & -1u;
- a0 = vcpu->regs[VCPU_REGS_RAX] & -1u;
- a1 = vcpu->regs[VCPU_REGS_RCX] & -1u;
- a2 = vcpu->regs[VCPU_REGS_RDX] & -1u;
- a3 = vcpu->regs[VCPU_REGS_RSI] & -1u;
- a4 = vcpu->regs[VCPU_REGS_RDI] & -1u;
- a5 = vcpu->regs[VCPU_REGS_RBP] & -1u;
- }
- switch (nr) {
- default:
- run->hypercall.nr = nr;
- run->hypercall.args[0] = a0;
- run->hypercall.args[1] = a1;
- run->hypercall.args[2] = a2;
- run->hypercall.args[3] = a3;
- run->hypercall.args[4] = a4;
- run->hypercall.args[5] = a5;
- run->hypercall.ret = ret;
- run->hypercall.longmode = is_long_mode(vcpu);
- kvm_x86_ops->decache_regs(vcpu);
- return 0;
- }
- vcpu->regs[VCPU_REGS_RAX] = ret;
- kvm_x86_ops->decache_regs(vcpu);
- return 1;
-}
-EXPORT_SYMBOL_GPL(kvm_hypercall);
-
-static u64 mk_cr_64(u64 curr_cr, u32 new_val)
-{
- return (curr_cr & ~((1ULL << 32) - 1)) | new_val;
-}
-
-void realmode_lgdt(struct kvm_vcpu *vcpu, u16 limit, unsigned long base)
-{
- struct descriptor_table dt = { limit, base };
-
- kvm_x86_ops->set_gdt(vcpu, &dt);
-}
-
-void realmode_lidt(struct kvm_vcpu *vcpu, u16 limit, unsigned long base)
-{
- struct descriptor_table dt = { limit, base };
-
- kvm_x86_ops->set_idt(vcpu, &dt);
-}
-
-void realmode_lmsw(struct kvm_vcpu *vcpu, unsigned long msw,
- unsigned long *rflags)
-{
- lmsw(vcpu, msw);
- *rflags = kvm_x86_ops->get_rflags(vcpu);
-}
-
-unsigned long realmode_get_cr(struct kvm_vcpu *vcpu, int cr)
-{
- kvm_x86_ops->decache_cr4_guest_bits(vcpu);
- switch (cr) {
- case 0:
- return vcpu->cr0;
- case 2:
- return vcpu->cr2;
- case 3:
- return vcpu->cr3;
- case 4:
- return vcpu->cr4;
- default:
- vcpu_printf(vcpu, "%s: unexpected cr %u\n", __FUNCTION__, cr);
- return 0;
- }
-}
-
-void realmode_set_cr(struct kvm_vcpu *vcpu, int cr, unsigned long val,
- unsigned long *rflags)
-{
- switch (cr) {
- case 0:
- set_cr0(vcpu, mk_cr_64(vcpu->cr0, val));
- *rflags = kvm_x86_ops->get_rflags(vcpu);
- break;
- case 2:
- vcpu->cr2 = val;
- break;
- case 3:
- set_cr3(vcpu, val);
- break;
- case 4:
- set_cr4(vcpu, mk_cr_64(vcpu->cr4, val));
- break;
- default:
- vcpu_printf(vcpu, "%s: unexpected cr %u\n", __FUNCTION__, cr);
- }
-}
-
-/*
- * Register the para guest with the host:
- */
-static int vcpu_register_para(struct kvm_vcpu *vcpu, gpa_t para_state_gpa)
-{
- struct kvm_vcpu_para_state *para_state;
- hpa_t para_state_hpa, hypercall_hpa;
- struct page *para_state_page;
- unsigned char *hypercall;
- gpa_t hypercall_gpa;
-
- printk(KERN_DEBUG "kvm: guest trying to enter paravirtual mode\n");
- printk(KERN_DEBUG ".... para_state_gpa: %08Lx\n", para_state_gpa);
-
- /*
- * Needs to be page aligned:
- */
- if (para_state_gpa != PAGE_ALIGN(para_state_gpa))
- goto err_gp;
-
- para_state_hpa = gpa_to_hpa(vcpu, para_state_gpa);
- printk(KERN_DEBUG ".... para_state_hpa: %08Lx\n", para_state_hpa);
- if (is_error_hpa(para_state_hpa))
- goto err_gp;
-
- mark_page_dirty(vcpu->kvm, para_state_gpa >> PAGE_SHIFT);
- para_state_page = pfn_to_page(para_state_hpa >> PAGE_SHIFT);
- para_state = kmap(para_state_page);
-
- printk(KERN_DEBUG ".... guest version: %d\n", para_state->guest_version);
- printk(KERN_DEBUG ".... size: %d\n", para_state->size);
-
- para_state->host_version = KVM_PARA_API_VERSION;
- /*
- * We cannot support guests that try to register themselves
- * with a newer API version than the host supports:
- */
- if (para_state->guest_version > KVM_PARA_API_VERSION) {
- para_state->ret = -KVM_EINVAL;
- goto err_kunmap_skip;
- }
-
- hypercall_gpa = para_state->hypercall_gpa;
- hypercall_hpa = gpa_to_hpa(vcpu, hypercall_gpa);
- printk(KERN_DEBUG ".... hypercall_hpa: %08Lx\n", hypercall_hpa);
- if (is_error_hpa(hypercall_hpa)) {
- para_state->ret = -KVM_EINVAL;
- goto err_kunmap_skip;
- }
-
- printk(KERN_DEBUG "kvm: para guest successfully registered.\n");
- vcpu->para_state_page = para_state_page;
- vcpu->para_state_gpa = para_state_gpa;
- vcpu->hypercall_gpa = hypercall_gpa;
-
- mark_page_dirty(vcpu->kvm, hypercall_gpa >> PAGE_SHIFT);
- hypercall = kmap_atomic(pfn_to_page(hypercall_hpa >> PAGE_SHIFT),
- KM_USER1) + (hypercall_hpa & ~PAGE_MASK);
- kvm_x86_ops->patch_hypercall(vcpu, hypercall);
- kunmap_atomic(hypercall, KM_USER1);
-
- para_state->ret = 0;
-err_kunmap_skip:
- kunmap(para_state_page);
- return 0;
-err_gp:
- return 1;
-}
-
-int kvm_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
-{
- u64 data;
-
- switch (msr) {
- case 0xc0010010: /* SYSCFG */
- case 0xc0010015: /* HWCR */
- case MSR_IA32_PLATFORM_ID:
- case MSR_IA32_P5_MC_ADDR:
- case MSR_IA32_P5_MC_TYPE:
- case MSR_IA32_MC0_CTL:
- case MSR_IA32_MCG_STATUS:
- case MSR_IA32_MCG_CAP:
- case MSR_IA32_MC0_MISC:
- case MSR_IA32_MC0_MISC+4:
- case MSR_IA32_MC0_MISC+8:
- case MSR_IA32_MC0_MISC+12:
- case MSR_IA32_MC0_MISC+16:
- case MSR_IA32_UCODE_REV:
- case MSR_IA32_PERF_STATUS:
- case MSR_IA32_EBL_CR_POWERON:
- /* MTRR registers */
- case 0xfe:
- case 0x200 ... 0x2ff:
- data = 0;
- break;
- case 0xcd: /* fsb frequency */
- data = 3;
- break;
- case MSR_IA32_APICBASE:
- data = kvm_get_apic_base(vcpu);
- break;
- case MSR_IA32_MISC_ENABLE:
- data = vcpu->ia32_misc_enable_msr;
- break;
-#ifdef CONFIG_X86_64
- case MSR_EFER:
- data = vcpu->shadow_efer;
- break;
-#endif
- default:
- pr_unimpl(vcpu, "unhandled rdmsr: 0x%x\n", msr);
- return 1;
- }
- *pdata = data;
- return 0;
-}
-EXPORT_SYMBOL_GPL(kvm_get_msr_common);
-
-/*
- * Reads an msr value (of 'msr_index') into 'pdata'.
- * Returns 0 on success, non-0 otherwise.
- * Assumes vcpu_load() was already called.
- */
-int kvm_get_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 *pdata)
-{
- return kvm_x86_ops->get_msr(vcpu, msr_index, pdata);
-}
-
-#ifdef CONFIG_X86_64
-
-static void set_efer(struct kvm_vcpu *vcpu, u64 efer)
-{
- if (efer & EFER_RESERVED_BITS) {
- printk(KERN_DEBUG "set_efer: 0x%llx #GP, reserved bits\n",
- efer);
- inject_gp(vcpu);
- return;
- }
-
- if (is_paging(vcpu)
- && (vcpu->shadow_efer & EFER_LME) != (efer & EFER_LME)) {
- printk(KERN_DEBUG "set_efer: #GP, change LME while paging\n");
- inject_gp(vcpu);
- return;
- }
-
- kvm_x86_ops->set_efer(vcpu, efer);
-
- efer &= ~EFER_LMA;
- efer |= vcpu->shadow_efer & EFER_LMA;
-
- vcpu->shadow_efer = efer;
-}
-
-#endif
-
-int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data)
-{
- switch (msr) {
-#ifdef CONFIG_X86_64
- case MSR_EFER:
- set_efer(vcpu, data);
- break;
-#endif
- case MSR_IA32_MC0_STATUS:
- pr_unimpl(vcpu, "%s: MSR_IA32_MC0_STATUS 0x%llx, nop\n",
- __FUNCTION__, data);
- break;
- case MSR_IA32_MCG_STATUS:
- pr_unimpl(vcpu, "%s: MSR_IA32_MCG_STATUS 0x%llx, nop\n",
- __FUNCTION__, data);
- break;
- case MSR_IA32_UCODE_REV:
- case MSR_IA32_UCODE_WRITE:
- case 0x200 ... 0x2ff: /* MTRRs */
- break;
- case MSR_IA32_APICBASE:
- kvm_set_apic_base(vcpu, data);
- break;
- case MSR_IA32_MISC_ENABLE:
- vcpu->ia32_misc_enable_msr = data;
- break;
- /*
- * This is the 'probe whether the host is KVM' logic:
- */
- case MSR_KVM_API_MAGIC:
- return vcpu_register_para(vcpu, data);
-
- default:
- pr_unimpl(vcpu, "unhandled wrmsr: 0x%x\n", msr);
- return 1;
- }
- return 0;
-}
-EXPORT_SYMBOL_GPL(kvm_set_msr_common);
-
-/*
- * Writes msr value into into the appropriate "register".
- * Returns 0 on success, non-0 otherwise.
- * Assumes vcpu_load() was already called.
- */
-int kvm_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
-{
- return kvm_x86_ops->set_msr(vcpu, msr_index, data);
-}
-
void kvm_resched(struct kvm_vcpu *vcpu)
{
if (!need_resched())
@@ -1709,44 +578,7 @@ void kvm_resched(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_resched);
-void kvm_emulate_cpuid(struct kvm_vcpu *vcpu)
-{
- int i;
- u32 function;
- struct kvm_cpuid_entry *e, *best;
-
- kvm_x86_ops->cache_regs(vcpu);
- function = vcpu->regs[VCPU_REGS_RAX];
- vcpu->regs[VCPU_REGS_RAX] = 0;
- vcpu->regs[VCPU_REGS_RBX] = 0;
- vcpu->regs[VCPU_REGS_RCX] = 0;
- vcpu->regs[VCPU_REGS_RDX] = 0;
- best = NULL;
- for (i = 0; i < vcpu->cpuid_nent; ++i) {
- e = &vcpu->cpuid_entries[i];
- if (e->function == function) {
- best = e;
- break;
- }
- /*
- * Both basic or both extended?
- */
- if (((e->function ^ function) & 0x80000000) == 0)
- if (!best || e->function > best->function)
- best = e;
- }
- if (best) {
- vcpu->regs[VCPU_REGS_RAX] = best->eax;
- vcpu->regs[VCPU_REGS_RBX] = best->ebx;
- vcpu->regs[VCPU_REGS_RCX] = best->ecx;
- vcpu->regs[VCPU_REGS_RDX] = best->edx;
- }
- kvm_x86_ops->decache_regs(vcpu);
- kvm_x86_ops->skip_emulated_instruction(vcpu);
-}
-EXPORT_SYMBOL_GPL(kvm_emulate_cpuid);
-
-static int pio_copy_data(struct kvm_vcpu *vcpu)
+int pio_copy_data(struct kvm_vcpu *vcpu)
{
void *p = vcpu->pio_data;
void *q;
@@ -1771,54 +603,6 @@ static int pio_copy_data(struct kvm_vcpu *vcpu)
return 0;
}
-static int complete_pio(struct kvm_vcpu *vcpu)
-{
- struct kvm_pio_request *io = &vcpu->pio;
- long delta;
- int r;
-
- kvm_x86_ops->cache_regs(vcpu);
-
- if (!io->string) {
- if (io->in)
- memcpy(&vcpu->regs[VCPU_REGS_RAX], vcpu->pio_data,
- io->size);
- } else {
- if (io->in) {
- r = pio_copy_data(vcpu);
- if (r) {
- kvm_x86_ops->cache_regs(vcpu);
- return r;
- }
- }
-
- delta = 1;
- if (io->rep) {
- delta *= io->cur_count;
- /*
- * The size of the register should really depend on
- * current address size.
- */
- vcpu->regs[VCPU_REGS_RCX] -= delta;
- }
- if (io->down)
- delta = -delta;
- delta *= io->size;
- if (io->in)
- vcpu->regs[VCPU_REGS_RDI] += delta;
- else
- vcpu->regs[VCPU_REGS_RSI] += delta;
- }
-
- kvm_x86_ops->decache_regs(vcpu);
-
- io->count -= io->cur_count;
- io->cur_count = 0;
-
- if (!io->count)
- kvm_x86_ops->skip_emulated_instruction(vcpu);
- return 0;
-}
static void kernel_pio(struct kvm_io_device *pio_dev,
struct kvm_vcpu *vcpu,
@@ -1838,22 +622,6 @@ static void kernel_pio(struct kvm_io_device *pio_dev,
mutex_unlock(&vcpu->kvm->lock);
}
-static void pio_string_write(struct kvm_io_device *pio_dev,
- struct kvm_vcpu *vcpu)
-{
- struct kvm_pio_request *io = &vcpu->pio;
- void *pd = vcpu->pio_data;
- int i;
-
- mutex_lock(&vcpu->kvm->lock);
- for (i = 0; i < io->cur_count; i++) {
- kvm_iodevice_write(pio_dev, io->port,
- io->size,
- pd);
- pd += io->size;
- }
- mutex_unlock(&vcpu->kvm->lock);
-}
int kvm_emulate_pio (struct kvm_vcpu *vcpu, struct kvm_run *run, int in,
int size, unsigned port)
@@ -1872,9 +640,9 @@ int kvm_emulate_pio (struct kvm_vcpu *vcpu, struct kvm_run *run, int in,
vcpu->pio.guest_page_offset = 0;
vcpu->pio.rep = 0;
- kvm_x86_ops->cache_regs(vcpu);
- memcpy(vcpu->pio_data, &vcpu->regs[VCPU_REGS_RAX], 4);
- kvm_x86_ops->decache_regs(vcpu);
+ kvm_arch_ops->cache_regs(vcpu);
+ arch_set_pio_data(vcpu);
+ kvm_arch_ops->decache_regs(vcpu);
pio_dev = vcpu_find_pio_dev(vcpu, port);
if (pio_dev) {
@@ -1886,119 +654,6 @@ int kvm_emulate_pio (struct kvm_vcpu *vcpu, struct kvm_run *run, int in,
}
EXPORT_SYMBOL_GPL(kvm_emulate_pio);
-int kvm_emulate_pio_string(struct kvm_vcpu *vcpu, struct kvm_run *run, int in,
- int size, unsigned long count, int down,
- gva_t address, int rep, unsigned port)
-{
- unsigned now, in_page;
- int i, ret = 0;
- int nr_pages = 1;
- struct page *page;
- struct kvm_io_device *pio_dev;
-
- vcpu->run->exit_reason = KVM_EXIT_IO;
- vcpu->run->io.direction = in ? KVM_EXIT_IO_IN : KVM_EXIT_IO_OUT;
- vcpu->run->io.size = vcpu->pio.size = size;
- vcpu->run->io.data_offset = KVM_PIO_PAGE_OFFSET * PAGE_SIZE;
- vcpu->run->io.count = vcpu->pio.count = vcpu->pio.cur_count = count;
- vcpu->run->io.port = vcpu->pio.port = port;
- vcpu->pio.in = in;
- vcpu->pio.string = 1;
- vcpu->pio.down = down;
- vcpu->pio.guest_page_offset = offset_in_page(address);
- vcpu->pio.rep = rep;
-
- if (!count) {
- kvm_x86_ops->skip_emulated_instruction(vcpu);
- return 1;
- }
-
- if (!down)
- in_page = PAGE_SIZE - offset_in_page(address);
- else
- in_page = offset_in_page(address) + size;
- now = min(count, (unsigned long)in_page / size);
- if (!now) {
- /*
- * String I/O straddles page boundary. Pin two guest pages
- * so that we satisfy atomicity constraints. Do just one
- * transaction to avoid complexity.
- */
- nr_pages = 2;
- now = 1;
- }
- if (down) {
- /*
- * String I/O in reverse. Yuck. Kill the guest, fix later.
- */
- pr_unimpl(vcpu, "guest string pio down\n");
- inject_gp(vcpu);
- return 1;
- }
- vcpu->run->io.count = now;
- vcpu->pio.cur_count = now;
-
- for (i = 0; i < nr_pages; ++i) {
- mutex_lock(&vcpu->kvm->lock);
- page = gva_to_page(vcpu, address + i * PAGE_SIZE);
- if (page)
- get_page(page);
- vcpu->pio.guest_pages[i] = page;
- mutex_unlock(&vcpu->kvm->lock);
- if (!page) {
- inject_gp(vcpu);
- free_pio_guest_pages(vcpu);
- return 1;
- }
- }
-
- pio_dev = vcpu_find_pio_dev(vcpu, port);
- if (!vcpu->pio.in) {
- /* string PIO write */
- ret = pio_copy_data(vcpu);
- if (ret >= 0 && pio_dev) {
- pio_string_write(pio_dev, vcpu);
- complete_pio(vcpu);
- if (vcpu->pio.count == 0)
- ret = 1;
- }
- } else if (pio_dev)
- pr_unimpl(vcpu, "no string pio read support yet, "
- "port %x size %d count %ld\n",
- port, size, count);
-
- return ret;
-}
-EXPORT_SYMBOL_GPL(kvm_emulate_pio_string);
-
-/*
- * Check if userspace requested an interrupt window, and that the
- * interrupt window is open.
- *
- * No need to exit to userspace if we already have an interrupt queued.
- */
-static int dm_request_for_irq_injection(struct kvm_vcpu *vcpu,
- struct kvm_run *kvm_run)
-{
- return (!vcpu->irq_summary &&
- kvm_run->request_interrupt_window &&
- vcpu->interrupt_window_open &&
- (kvm_x86_ops->get_rflags(vcpu) & X86_EFLAGS_IF));
-}
-
-static void post_kvm_run_save(struct kvm_vcpu *vcpu,
- struct kvm_run *kvm_run)
-{
- kvm_run->if_flag = (kvm_x86_ops->get_rflags(vcpu) & X86_EFLAGS_IF) != 0;
- kvm_run->cr8 = get_cr8(vcpu);
- kvm_run->apic_base = kvm_get_apic_base(vcpu);
- if (irqchip_in_kernel(vcpu->kvm))
- kvm_run->ready_for_interrupt_injection = 1;
- else
- kvm_run->ready_for_interrupt_injection =
- (vcpu->interrupt_window_open &&
- vcpu->irq_summary == 0);
-}
static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
@@ -2008,13 +663,13 @@ static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
printk("vcpu %d received sipi with vector # %x\n",
vcpu->vcpu_id, vcpu->sipi_vector);
kvm_lapic_reset(vcpu);
- kvm_x86_ops->vcpu_reset(vcpu);
+ kvm_arch_ops->vcpu_reset(vcpu);
vcpu->mp_state = VCPU_MP_STATE_RUNNABLE;
}
preempted:
- if (vcpu->guest_debug.enabled)
- kvm_x86_ops->guest_debug_pre(vcpu);
+ if (vcpu->arch.guest_debug.enabled)
+ kvm_arch_ops->guest_debug_pre(vcpu);
again:
r = kvm_mmu_reload(vcpu);
@@ -2023,7 +678,7 @@ again:
preempt_disable();
- kvm_x86_ops->prepare_guest_switch(vcpu);
+ kvm_arch_ops->prepare_guest_switch(vcpu);
kvm_load_guest_fpu(vcpu);
local_irq_disable();
@@ -2038,17 +693,17 @@ again:
}
if (irqchip_in_kernel(vcpu->kvm))
- kvm_x86_ops->inject_pending_irq(vcpu);
+ kvm_arch_ops->inject_pending_irq(vcpu);
else if (!vcpu->mmio_read_completed)
- kvm_x86_ops->inject_pending_vectors(vcpu, kvm_run);
+ kvm_arch_ops->inject_pending_vectors(vcpu, kvm_run);
vcpu->guest_mode = 1;
if (vcpu->requests)
if (test_and_clear_bit(KVM_TLB_FLUSH, &vcpu->requests))
- kvm_x86_ops->tlb_flush(vcpu);
+ kvm_arch_ops->tlb_flush(vcpu);
- kvm_x86_ops->run(vcpu, kvm_run);
+ kvm_arch_ops->run(vcpu, kvm_run);
vcpu->guest_mode = 0;
local_irq_enable();
@@ -2061,11 +716,11 @@ again:
* Profile KVM exit RIPs:
*/
if (unlikely(prof_on == KVM_PROFILING)) {
- kvm_x86_ops->cache_regs(vcpu);
- profile_hit(KVM_PROFILING, (void *)vcpu->rip);
+ kvm_arch_ops->cache_regs(vcpu);
+ profile_hit(KVM_PROFILING, (void *)vcpu->arch.rip);
}
- r = kvm_x86_ops->handle_exit(kvm_run, vcpu);
+ r = kvm_arch_ops->handle_exit(kvm_run, vcpu);
if (r > 0) {
if (dm_request_for_irq_injection(vcpu, kvm_run)) {
@@ -2107,11 +762,11 @@ static int kvm_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
if (vcpu->sigset_active)
sigprocmask(SIG_SETMASK, &vcpu->sigset, &sigsaved);
-
+#ifdef CONFIG_X86
/* re-sync apic's tpr */
if (!irqchip_in_kernel(vcpu->kvm))
set_cr8(vcpu, kvm_run->cr8);
-
+#endif
if (vcpu->pio.cur_count) {
r = complete_pio(vcpu);
if (r)
@@ -2134,9 +789,9 @@ static int kvm_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
}
if (kvm_run->exit_reason == KVM_EXIT_HYPERCALL) {
- kvm_x86_ops->cache_regs(vcpu);
- vcpu->regs[VCPU_REGS_RAX] = kvm_run->hypercall.ret;
- kvm_x86_ops->decache_regs(vcpu);
+ kvm_arch_ops->cache_regs(vcpu);
+ arch_set_hypercall_ret(vcpu, kvm_run);
+ kvm_arch_ops->decache_regs(vcpu);
}
r = __vcpu_run(vcpu, kvm_run);
@@ -2149,346 +804,6 @@ out:
return r;
}
-static int kvm_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu,
- struct kvm_regs *regs)
-{
- vcpu_load(vcpu);
-
- kvm_x86_ops->cache_regs(vcpu);
-
- regs->rax = vcpu->regs[VCPU_REGS_RAX];
- regs->rbx = vcpu->regs[VCPU_REGS_RBX];
- regs->rcx = vcpu->regs[VCPU_REGS_RCX];
- regs->rdx = vcpu->regs[VCPU_REGS_RDX];
- regs->rsi = vcpu->regs[VCPU_REGS_RSI];
- regs->rdi = vcpu->regs[VCPU_REGS_RDI];
- regs->rsp = vcpu->regs[VCPU_REGS_RSP];
- regs->rbp = vcpu->regs[VCPU_REGS_RBP];
-#ifdef CONFIG_X86_64
- regs->r8 = vcpu->regs[VCPU_REGS_R8];
- regs->r9 = vcpu->regs[VCPU_REGS_R9];
- regs->r10 = vcpu->regs[VCPU_REGS_R10];
- regs->r11 = vcpu->regs[VCPU_REGS_R11];
- regs->r12 = vcpu->regs[VCPU_REGS_R12];
- regs->r13 = vcpu->regs[VCPU_REGS_R13];
- regs->r14 = vcpu->regs[VCPU_REGS_R14];
- regs->r15 = vcpu->regs[VCPU_REGS_R15];
-#endif
-
- regs->rip = vcpu->rip;
- regs->rflags = kvm_x86_ops->get_rflags(vcpu);
-
- /*
- * Don't leak debug flags in case they were set for guest debugging
- */
- if (vcpu->guest_debug.enabled && vcpu->guest_debug.singlestep)
- regs->rflags &= ~(X86_EFLAGS_TF | X86_EFLAGS_RF);
-
- vcpu_put(vcpu);
-
- return 0;
-}
-
-static int kvm_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu,
- struct kvm_regs *regs)
-{
- vcpu_load(vcpu);
-
- vcpu->regs[VCPU_REGS_RAX] = regs->rax;
- vcpu->regs[VCPU_REGS_RBX] = regs->rbx;
- vcpu->regs[VCPU_REGS_RCX] = regs->rcx;
- vcpu->regs[VCPU_REGS_RDX] = regs->rdx;
- vcpu->regs[VCPU_REGS_RSI] = regs->rsi;
- vcpu->regs[VCPU_REGS_RDI] = regs->rdi;
- vcpu->regs[VCPU_REGS_RSP] = regs->rsp;
- vcpu->regs[VCPU_REGS_RBP] = regs->rbp;
-#ifdef CONFIG_X86_64
- vcpu->regs[VCPU_REGS_R8] = regs->r8;
- vcpu->regs[VCPU_REGS_R9] = regs->r9;
- vcpu->regs[VCPU_REGS_R10] = regs->r10;
- vcpu->regs[VCPU_REGS_R11] = regs->r11;
- vcpu->regs[VCPU_REGS_R12] = regs->r12;
- vcpu->regs[VCPU_REGS_R13] = regs->r13;
- vcpu->regs[VCPU_REGS_R14] = regs->r14;
- vcpu->regs[VCPU_REGS_R15] = regs->r15;
-#endif
-
- vcpu->rip = regs->rip;
- kvm_x86_ops->set_rflags(vcpu, regs->rflags);
-
- kvm_x86_ops->decache_regs(vcpu);
-
- vcpu_put(vcpu);
-
- return 0;
-}
-
-static void get_segment(struct kvm_vcpu *vcpu,
- struct kvm_segment *var, int seg)
-{
- return kvm_x86_ops->get_segment(vcpu, var, seg);
-}
-
-static int kvm_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
- struct kvm_sregs *sregs)
-{
- struct descriptor_table dt;
- int pending_vec;
-
- vcpu_load(vcpu);
-
- get_segment(vcpu, &sregs->cs, VCPU_SREG_CS);
- get_segment(vcpu, &sregs->ds, VCPU_SREG_DS);
- get_segment(vcpu, &sregs->es, VCPU_SREG_ES);
- get_segment(vcpu, &sregs->fs, VCPU_SREG_FS);
- get_segment(vcpu, &sregs->gs, VCPU_SREG_GS);
- get_segment(vcpu, &sregs->ss, VCPU_SREG_SS);
-
- get_segment(vcpu, &sregs->tr, VCPU_SREG_TR);
- get_segment(vcpu, &sregs->ldt, VCPU_SREG_LDTR);
-
- kvm_x86_ops->get_idt(vcpu, &dt);
- sregs->idt.limit = dt.limit;
- sregs->idt.base = dt.base;
- kvm_x86_ops->get_gdt(vcpu, &dt);
- sregs->gdt.limit = dt.limit;
- sregs->gdt.base = dt.base;
-
- kvm_x86_ops->decache_cr4_guest_bits(vcpu);
- sregs->cr0 = vcpu->cr0;
- sregs->cr2 = vcpu->cr2;
- sregs->cr3 = vcpu->cr3;
- sregs->cr4 = vcpu->cr4;
- sregs->cr8 = get_cr8(vcpu);
- sregs->efer = vcpu->shadow_efer;
- sregs->apic_base = kvm_get_apic_base(vcpu);
-
- if (irqchip_in_kernel(vcpu->kvm)) {
- memset(sregs->interrupt_bitmap, 0,
- sizeof sregs->interrupt_bitmap);
- pending_vec = kvm_x86_ops->get_irq(vcpu);
- if (pending_vec >= 0) {
- set_bit(pending_vec, sregs->interrupt_bitmap);
- printk("pending irq in kernel %d\n",pending_vec);
- }
- } else
- memcpy(sregs->interrupt_bitmap, vcpu->irq_pending,
- sizeof sregs->interrupt_bitmap);
-
- vcpu_put(vcpu);
-
- return 0;
-}
-
-static void set_segment(struct kvm_vcpu *vcpu,
- struct kvm_segment *var, int seg)
-{
- return kvm_x86_ops->set_segment(vcpu, var, seg);
-}
-
-static int kvm_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
- struct kvm_sregs *sregs)
-{
- int mmu_reset_needed = 0;
- int i, pending_vec, max_bits;
- struct descriptor_table dt;
-
- vcpu_load(vcpu);
-
- dt.limit = sregs->idt.limit;
- dt.base = sregs->idt.base;
- kvm_x86_ops->set_idt(vcpu, &dt);
- dt.limit = sregs->gdt.limit;
- dt.base = sregs->gdt.base;
- kvm_x86_ops->set_gdt(vcpu, &dt);
-
- vcpu->cr2 = sregs->cr2;
- mmu_reset_needed |= vcpu->cr3 != sregs->cr3;
- vcpu->cr3 = sregs->cr3;
-
- set_cr8(vcpu, sregs->cr8);
-
- mmu_reset_needed |= vcpu->shadow_efer != sregs->efer;
-#ifdef CONFIG_X86_64
- kvm_x86_ops->set_efer(vcpu, sregs->efer);
-#endif
- kvm_set_apic_base(vcpu, sregs->apic_base);
-
- kvm_x86_ops->decache_cr4_guest_bits(vcpu);
-
- mmu_reset_needed |= vcpu->cr0 != sregs->cr0;
- vcpu->cr0 = sregs->cr0;
- kvm_x86_ops->set_cr0(vcpu, sregs->cr0);
-
- mmu_reset_needed |= vcpu->cr4 != sregs->cr4;
- kvm_x86_ops->set_cr4(vcpu, sregs->cr4);
- if (!is_long_mode(vcpu) && is_pae(vcpu))
- load_pdptrs(vcpu, vcpu->cr3);
-
- if (mmu_reset_needed)
- kvm_mmu_reset_context(vcpu);
-
- if (!irqchip_in_kernel(vcpu->kvm)) {
- memcpy(vcpu->irq_pending, sregs->interrupt_bitmap,
- sizeof vcpu->irq_pending);
- vcpu->irq_summary = 0;
- for (i = 0; i < ARRAY_SIZE(vcpu->irq_pending); ++i)
- if (vcpu->irq_pending[i])
- __set_bit(i, &vcpu->irq_summary);
- } else {
- max_bits = (sizeof sregs->interrupt_bitmap) << 3;
- pending_vec = find_first_bit(
- (const unsigned long *)sregs->interrupt_bitmap,
- max_bits);
- /* Only pending external irq is handled here */
- if (pending_vec < max_bits) {
- kvm_x86_ops->set_irq(vcpu, pending_vec);
- printk("Set back pending irq %d\n", pending_vec);
- }
- }
-
- set_segment(vcpu, &sregs->cs, VCPU_SREG_CS);
- set_segment(vcpu, &sregs->ds, VCPU_SREG_DS);
- set_segment(vcpu, &sregs->es, VCPU_SREG_ES);
- set_segment(vcpu, &sregs->fs, VCPU_SREG_FS);
- set_segment(vcpu, &sregs->gs, VCPU_SREG_GS);
- set_segment(vcpu, &sregs->ss, VCPU_SREG_SS);
-
- set_segment(vcpu, &sregs->tr, VCPU_SREG_TR);
- set_segment(vcpu, &sregs->ldt, VCPU_SREG_LDTR);
-
- vcpu_put(vcpu);
-
- return 0;
-}
-
-void kvm_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
-{
- struct kvm_segment cs;
-
- get_segment(vcpu, &cs, VCPU_SREG_CS);
- *db = cs.db;
- *l = cs.l;
-}
-EXPORT_SYMBOL_GPL(kvm_get_cs_db_l_bits);
-
-/*
- * List of msr numbers which we expose to userspace through KVM_GET_MSRS
- * and KVM_SET_MSRS, and KVM_GET_MSR_INDEX_LIST.
- *
- * This list is modified at module load time to reflect the
- * capabilities of the host cpu.
- */
-static u32 msrs_to_save[] = {
- MSR_IA32_SYSENTER_CS, MSR_IA32_SYSENTER_ESP, MSR_IA32_SYSENTER_EIP,
- MSR_K6_STAR,
-#ifdef CONFIG_X86_64
- MSR_CSTAR, MSR_KERNEL_GS_BASE, MSR_SYSCALL_MASK, MSR_LSTAR,
-#endif
- MSR_IA32_TIME_STAMP_COUNTER,
-};
-
-static unsigned num_msrs_to_save;
-
-static u32 emulated_msrs[] = {
- MSR_IA32_MISC_ENABLE,
-};
-
-static __init void kvm_init_msr_list(void)
-{
- u32 dummy[2];
- unsigned i, j;
-
- for (i = j = 0; i < ARRAY_SIZE(msrs_to_save); i++) {
- if (rdmsr_safe(msrs_to_save[i], &dummy[0], &dummy[1]) < 0)
- continue;
- if (j < i)
- msrs_to_save[j] = msrs_to_save[i];
- j++;
- }
- num_msrs_to_save = j;
-}
-
-/*
- * Adapt set_msr() to msr_io()'s calling convention
- */
-static int do_set_msr(struct kvm_vcpu *vcpu, unsigned index, u64 *data)
-{
- return kvm_set_msr(vcpu, index, *data);
-}
-
-/*
- * Read or write a bunch of msrs. All parameters are kernel addresses.
- *
- * @return number of msrs set successfully.
- */
-static int __msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs *msrs,
- struct kvm_msr_entry *entries,
- int (*do_msr)(struct kvm_vcpu *vcpu,
- unsigned index, u64 *data))
-{
- int i;
-
- vcpu_load(vcpu);
-
- for (i = 0; i < msrs->nmsrs; ++i)
- if (do_msr(vcpu, entries[i].index, &entries[i].data))
- break;
-
- vcpu_put(vcpu);
-
- return i;
-}
-
-/*
- * Read or write a bunch of msrs. Parameters are user addresses.
- *
- * @return number of msrs set successfully.
- */
-static int msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs __user *user_msrs,
- int (*do_msr)(struct kvm_vcpu *vcpu,
- unsigned index, u64 *data),
- int writeback)
-{
- struct kvm_msrs msrs;
- struct kvm_msr_entry *entries;
- int r, n;
- unsigned size;
-
- r = -EFAULT;
- if (copy_from_user(&msrs, user_msrs, sizeof msrs))
- goto out;
-
- r = -E2BIG;
- if (msrs.nmsrs >= MAX_IO_MSRS)
- goto out;
-
- r = -ENOMEM;
- size = sizeof(struct kvm_msr_entry) * msrs.nmsrs;
- entries = vmalloc(size);
- if (!entries)
- goto out;
-
- r = -EFAULT;
- if (copy_from_user(entries, user_msrs->entries, size))
- goto out_free;
-
- r = n = __msr_io(vcpu, &msrs, entries, do_msr);
- if (r < 0)
- goto out_free;
-
- r = -EFAULT;
- if (writeback && copy_to_user(user_msrs->entries, entries, size))
- goto out_free;
-
- r = n;
-
-out_free:
- vfree(entries);
-out:
- return r;
-}
-
/*
* Translate a guest virtual address to a guest physical address.
*/
@@ -2500,7 +815,7 @@ static int kvm_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
vcpu_load(vcpu);
mutex_lock(&vcpu->kvm->lock);
- gpa = vcpu->mmu.gva_to_gpa(vcpu, vaddr);
+ gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, vaddr);
tr->physical_address = gpa;
tr->valid = gpa != UNMAPPED_GVA;
tr->writeable = 1;
@@ -2535,7 +850,7 @@ static int kvm_vcpu_ioctl_debug_guest(struct kvm_vcpu *vcpu,
vcpu_load(vcpu);
- r = kvm_x86_ops->set_guest_debug(vcpu, dbg);
+ r = kvm_arch_ops->set_guest_debug(vcpu, dbg);
vcpu_put(vcpu);
@@ -2617,14 +932,12 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
if (!valid_vcpu(n))
return -EINVAL;
- vcpu = kvm_x86_ops->vcpu_create(kvm, n);
+ vcpu = kvm_arch_ops->vcpu_create(kvm, n);
if (IS_ERR(vcpu))
return PTR_ERR(vcpu);
preempt_notifier_init(&vcpu->preempt_notifier, &kvm_preempt_ops);
- /* We do fxsave: this must be aligned. */
- BUG_ON((unsigned long)&vcpu->host_fx_image & 0xF);
vcpu_load(vcpu);
r = kvm_mmu_setup(vcpu);
@@ -2658,51 +971,10 @@ mmu_unload:
vcpu_put(vcpu);
free_vcpu:
- kvm_x86_ops->vcpu_free(vcpu);
+ kvm_arch_ops->vcpu_free(vcpu);
return r;
}
-static void cpuid_fix_nx_cap(struct kvm_vcpu *vcpu)
-{
- u64 efer;
- int i;
- struct kvm_cpuid_entry *e, *entry;
-
- rdmsrl(MSR_EFER, efer);
- entry = NULL;
- for (i = 0; i < vcpu->cpuid_nent; ++i) {
- e = &vcpu->cpuid_entries[i];
- if (e->function == 0x80000001) {
- entry = e;
- break;
- }
- }
- if (entry && (entry->edx & (1 << 20)) && !(efer & EFER_NX)) {
- entry->edx &= ~(1 << 20);
- printk(KERN_INFO "kvm: guest NX capability removed\n");
- }
-}
-
-static int kvm_vcpu_ioctl_set_cpuid(struct kvm_vcpu *vcpu,
- struct kvm_cpuid *cpuid,
- struct kvm_cpuid_entry __user *entries)
-{
- int r;
-
- r = -E2BIG;
- if (cpuid->nent > KVM_MAX_CPUID_ENTRIES)
- goto out;
- r = -EFAULT;
- if (copy_from_user(&vcpu->cpuid_entries, entries,
- cpuid->nent * sizeof(struct kvm_cpuid_entry)))
- goto out;
- vcpu->cpuid_nent = cpuid->nent;
- cpuid_fix_nx_cap(vcpu);
- return 0;
-
-out:
- return r;
-}
static int kvm_vcpu_ioctl_set_sigmask(struct kvm_vcpu *vcpu, sigset_t *sigset)
{
@@ -2715,87 +987,7 @@ static int kvm_vcpu_ioctl_set_sigmask(struct kvm_vcpu *vcpu, sigset_t *sigset)
return 0;
}
-/*
- * fxsave fpu state. Taken from x86_64/processor.h. To be killed when
- * we have asm/x86/processor.h
- */
-struct fxsave {
- u16 cwd;
- u16 swd;
- u16 twd;
- u16 fop;
- u64 rip;
- u64 rdp;
- u32 mxcsr;
- u32 mxcsr_mask;
- u32 st_space[32]; /* 8*16 bytes for each FP-reg = 128 bytes */
-#ifdef CONFIG_X86_64
- u32 xmm_space[64]; /* 16*16 bytes for each XMM-reg = 256 bytes */
-#else
- u32 xmm_space[32]; /* 8*16 bytes for each XMM-reg = 128 bytes */
-#endif
-};
-static int kvm_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
-{
- struct fxsave *fxsave = (struct fxsave *)&vcpu->guest_fx_image;
-
- vcpu_load(vcpu);
-
- memcpy(fpu->fpr, fxsave->st_space, 128);
- fpu->fcw = fxsave->cwd;
- fpu->fsw = fxsave->swd;
- fpu->ftwx = fxsave->twd;
- fpu->last_opcode = fxsave->fop;
- fpu->last_ip = fxsave->rip;
- fpu->last_dp = fxsave->rdp;
- memcpy(fpu->xmm, fxsave->xmm_space, sizeof fxsave->xmm_space);
-
- vcpu_put(vcpu);
-
- return 0;
-}
-
-static int kvm_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
-{
- struct fxsave *fxsave = (struct fxsave *)&vcpu->guest_fx_image;
-
- vcpu_load(vcpu);
-
- memcpy(fxsave->st_space, fpu->fpr, 128);
- fxsave->cwd = fpu->fcw;
- fxsave->swd = fpu->fsw;
- fxsave->twd = fpu->ftwx;
- fxsave->fop = fpu->last_opcode;
- fxsave->rip = fpu->last_ip;
- fxsave->rdp = fpu->last_dp;
- memcpy(fxsave->xmm_space, fpu->xmm, sizeof fxsave->xmm_space);
-
- vcpu_put(vcpu);
-
- return 0;
-}
-
-static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu,
- struct kvm_lapic_state *s)
-{
- vcpu_load(vcpu);
- memcpy(s->regs, vcpu->apic->regs, sizeof *s);
- vcpu_put(vcpu);
-
- return 0;
-}
-
-static int kvm_vcpu_ioctl_set_lapic(struct kvm_vcpu *vcpu,
- struct kvm_lapic_state *s)
-{
- vcpu_load(vcpu);
- memcpy(vcpu->apic->regs, s->regs, sizeof *s);
- kvm_apic_post_state_restore(vcpu);
- vcpu_put(vcpu);
-
- return 0;
-}
static long kvm_vcpu_ioctl(struct file *filp,
unsigned int ioctl, unsigned long arg)
@@ -2811,56 +1003,6 @@ static long kvm_vcpu_ioctl(struct file *filp,
goto out;
r = kvm_vcpu_ioctl_run(vcpu, vcpu->run);
break;
- case KVM_GET_REGS: {
- struct kvm_regs kvm_regs;
-
- memset(&kvm_regs, 0, sizeof kvm_regs);
- r = kvm_vcpu_ioctl_get_regs(vcpu, &kvm_regs);
- if (r)
- goto out;
- r = -EFAULT;
- if (copy_to_user(argp, &kvm_regs, sizeof kvm_regs))
- goto out;
- r = 0;
- break;
- }
- case KVM_SET_REGS: {
- struct kvm_regs kvm_regs;
-
- r = -EFAULT;
- if (copy_from_user(&kvm_regs, argp, sizeof kvm_regs))
- goto out;
- r = kvm_vcpu_ioctl_set_regs(vcpu, &kvm_regs);
- if (r)
- goto out;
- r = 0;
- break;
- }
- case KVM_GET_SREGS: {
- struct kvm_sregs kvm_sregs;
-
- memset(&kvm_sregs, 0, sizeof kvm_sregs);
- r = kvm_vcpu_ioctl_get_sregs(vcpu, &kvm_sregs);
- if (r)
- goto out;
- r = -EFAULT;
- if (copy_to_user(argp, &kvm_sregs, sizeof kvm_sregs))
- goto out;
- r = 0;
- break;
- }
- case KVM_SET_SREGS: {
- struct kvm_sregs kvm_sregs;
-
- r = -EFAULT;
- if (copy_from_user(&kvm_sregs, argp, sizeof kvm_sregs))
- goto out;
- r = kvm_vcpu_ioctl_set_sregs(vcpu, &kvm_sregs);
- if (r)
- goto out;
- r = 0;
- break;
- }
case KVM_TRANSLATE: {
struct kvm_translation tr;
@@ -2900,24 +1042,6 @@ static long kvm_vcpu_ioctl(struct file *filp,
r = 0;
break;
}
- case KVM_GET_MSRS:
- r = msr_io(vcpu, argp, kvm_get_msr, 1);
- break;
- case KVM_SET_MSRS:
- r = msr_io(vcpu, argp, do_set_msr, 0);
- break;
- case KVM_SET_CPUID: {
- struct kvm_cpuid __user *cpuid_arg = argp;
- struct kvm_cpuid cpuid;
-
- r = -EFAULT;
- if (copy_from_user(&cpuid, cpuid_arg, sizeof cpuid))
- goto out;
- r = kvm_vcpu_ioctl_set_cpuid(vcpu, &cpuid, cpuid_arg->entries);
- if (r)
- goto out;
- break;
- }
case KVM_SET_SIGNAL_MASK: {
struct kvm_signal_mask __user *sigmask_arg = argp;
struct kvm_signal_mask kvm_sigmask;
@@ -2941,58 +1065,8 @@ static long kvm_vcpu_ioctl(struct file *filp,
r = kvm_vcpu_ioctl_set_sigmask(vcpu, &sigset);
break;
}
- case KVM_GET_FPU: {
- struct kvm_fpu fpu;
-
- memset(&fpu, 0, sizeof fpu);
- r = kvm_vcpu_ioctl_get_fpu(vcpu, &fpu);
- if (r)
- goto out;
- r = -EFAULT;
- if (copy_to_user(argp, &fpu, sizeof fpu))
- goto out;
- r = 0;
- break;
- }
- case KVM_SET_FPU: {
- struct kvm_fpu fpu;
-
- r = -EFAULT;
- if (copy_from_user(&fpu, argp, sizeof fpu))
- goto out;
- r = kvm_vcpu_ioctl_set_fpu(vcpu, &fpu);
- if (r)
- goto out;
- r = 0;
- break;
- }
- case KVM_GET_LAPIC: {
- struct kvm_lapic_state lapic;
-
- memset(&lapic, 0, sizeof lapic);
- r = kvm_vcpu_ioctl_get_lapic(vcpu, &lapic);
- if (r)
- goto out;
- r = -EFAULT;
- if (copy_to_user(argp, &lapic, sizeof lapic))
- goto out;
- r = 0;
- break;
- }
- case KVM_SET_LAPIC: {
- struct kvm_lapic_state lapic;
-
- r = -EFAULT;
- if (copy_from_user(&lapic, argp, sizeof lapic))
- goto out;
- r = kvm_vcpu_ioctl_set_lapic(vcpu, &lapic);;
- if (r)
- goto out;
- r = 0;
- break;
- }
default:
- ;
+ r = kvm_vcpu_arch_ioctl(filp, ioctl, arg);
}
out:
return r;
@@ -3022,17 +1096,6 @@ static long kvm_vm_ioctl(struct file *filp,
goto out;
break;
}
- case KVM_GET_DIRTY_LOG: {
- struct kvm_dirty_log log;
-
- r = -EFAULT;
- if (copy_from_user(&log, argp, sizeof log))
- goto out;
- r = kvm_vm_ioctl_get_dirty_log(kvm, &log);
- if (r)
- goto out;
- break;
- }
case KVM_SET_MEMORY_ALIAS: {
struct kvm_memory_alias alias;
@@ -3044,77 +1107,8 @@ static long kvm_vm_ioctl(struct file *filp,
goto out;
break;
}
- case KVM_CREATE_IRQCHIP:
- r = -ENOMEM;
- kvm->vpic = kvm_create_pic(kvm);
- if (kvm->vpic) {
- r = kvm_ioapic_init(kvm);
- if (r) {
- kfree(kvm->vpic);
- kvm->vpic = NULL;
- goto out;
- }
- }
- else
- goto out;
- break;
- case KVM_IRQ_LINE: {
- struct kvm_irq_level irq_event;
-
- r = -EFAULT;
- if (copy_from_user(&irq_event, argp, sizeof irq_event))
- goto out;
- if (irqchip_in_kernel(kvm)) {
- mutex_lock(&kvm->lock);
- if (irq_event.irq < 16)
- kvm_pic_set_irq(pic_irqchip(kvm),
- irq_event.irq,
- irq_event.level);
- kvm_ioapic_set_irq(kvm->vioapic,
- irq_event.irq,
- irq_event.level);
- mutex_unlock(&kvm->lock);
- r = 0;
- }
- break;
- }
- case KVM_GET_IRQCHIP: {
- /* 0: PIC master, 1: PIC slave, 2: IOAPIC */
- struct kvm_irqchip chip;
-
- r = -EFAULT;
- if (copy_from_user(&chip, argp, sizeof chip))
- goto out;
- r = -ENXIO;
- if (!irqchip_in_kernel(kvm))
- goto out;
- r = kvm_vm_ioctl_get_irqchip(kvm, &chip);
- if (r)
- goto out;
- r = -EFAULT;
- if (copy_to_user(argp, &chip, sizeof chip))
- goto out;
- r = 0;
- break;
- }
- case KVM_SET_IRQCHIP: {
- /* 0: PIC master, 1: PIC slave, 2: IOAPIC */
- struct kvm_irqchip chip;
-
- r = -EFAULT;
- if (copy_from_user(&chip, argp, sizeof chip))
- goto out;
- r = -ENXIO;
- if (!irqchip_in_kernel(kvm))
- goto out;
- r = kvm_vm_ioctl_set_irqchip(kvm, &chip);
- if (r)
- goto out;
- r = 0;
- break;
- }
default:
- ;
+ r = kvm_vm_arch_ioctl(filp, ioctl, arg);
}
out:
return r;
@@ -3196,33 +1190,6 @@ static long kvm_dev_ioctl(struct file *filp,
goto out;
r = kvm_dev_ioctl_create_vm();
break;
- case KVM_GET_MSR_INDEX_LIST: {
- struct kvm_msr_list __user *user_msr_list = argp;
- struct kvm_msr_list msr_list;
- unsigned n;
-
- r = -EFAULT;
- if (copy_from_user(&msr_list, user_msr_list, sizeof msr_list))
- goto out;
- n = msr_list.nmsrs;
- msr_list.nmsrs = num_msrs_to_save + ARRAY_SIZE(emulated_msrs);
- if (copy_to_user(user_msr_list, &msr_list, sizeof msr_list))
- goto out;
- r = -E2BIG;
- if (n < num_msrs_to_save)
- goto out;
- r = -EFAULT;
- if (copy_to_user(user_msr_list->indices, &msrs_to_save,
- num_msrs_to_save * sizeof(u32)))
- goto out;
- if (copy_to_user(user_msr_list->indices
- + num_msrs_to_save * sizeof(u32),
- &emulated_msrs,
- ARRAY_SIZE(emulated_msrs) * sizeof(u32)))
- goto out;
- r = 0;
- break;
- }
case KVM_CHECK_EXTENSION: {
int ext = (long)argp;
@@ -3244,6 +1211,7 @@ static long kvm_dev_ioctl(struct file *filp,
r = 2 * PAGE_SIZE;
break;
default:
+ r = kvm_dev_arch_ioctl(filp, ioctl, arg);
;
}
out:
@@ -3287,7 +1255,7 @@ static void decache_vcpus_on_cpu(int cpu)
*/
if (mutex_trylock(&vcpu->mutex)) {
if (vcpu->cpu == cpu) {
- kvm_x86_ops->vcpu_decache(vcpu);
+ kvm_arch_ops->vcpu_decache(vcpu);
vcpu->cpu = -1;
}
mutex_unlock(&vcpu->mutex);
@@ -3303,7 +1271,7 @@ static void hardware_enable(void *junk)
if (cpu_isset(cpu, cpus_hardware_enabled))
return;
cpu_set(cpu, cpus_hardware_enabled);
- kvm_x86_ops->hardware_enable(NULL);
+ kvm_arch_ops->hardware_enable(NULL);
}
static void hardware_disable(void *junk)
@@ -3314,7 +1282,7 @@ static void hardware_disable(void *junk)
return;
cpu_clear(cpu, cpus_hardware_enabled);
decache_vcpus_on_cpu(cpu);
- kvm_x86_ops->hardware_disable(NULL);
+ kvm_arch_ops->hardware_disable(NULL);
}
static int kvm_cpu_hotplug(struct notifier_block *notifier, unsigned long val,
@@ -3482,7 +1450,7 @@ static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
{
struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);
- kvm_x86_ops->vcpu_load(vcpu, cpu);
+ kvm_arch_ops->vcpu_load(vcpu, cpu);
}
static void kvm_sched_out(struct preempt_notifier *pn,
@@ -3490,16 +1458,16 @@ static void kvm_sched_out(struct preempt_notifier *pn,
{
struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);
- kvm_x86_ops->vcpu_put(vcpu);
+ kvm_arch_ops->vcpu_put(vcpu);
}
-int kvm_init_x86(struct kvm_x86_ops *ops, unsigned int vcpu_size,
+int kvm_init_arch(struct kvm_arch_ops *ops, unsigned int vcpu_size,
struct module *module)
{
int r;
int cpu;
- if (kvm_x86_ops) {
+ if (kvm_arch_ops) {
printk(KERN_ERR "kvm: already loaded the other module\n");
return -EEXIST;
}
@@ -3513,15 +1481,15 @@ int kvm_init_x86(struct kvm_x86_ops *ops, unsigned int vcpu_size,
return -EOPNOTSUPP;
}
- kvm_x86_ops = ops;
+ kvm_arch_ops = ops;
- r = kvm_x86_ops->hardware_setup();
+ r = kvm_arch_ops->hardware_setup();
if (r < 0)
goto out;
for_each_online_cpu(cpu) {
smp_call_function_single(cpu,
- kvm_x86_ops->check_processor_compatibility,
+ kvm_arch_ops->check_processor_compatibility,
&r, 0, 1);
if (r < 0)
goto out_free_0;
@@ -3574,13 +1542,14 @@ out_free_2:
out_free_1:
on_each_cpu(hardware_disable, NULL, 0, 1);
out_free_0:
- kvm_x86_ops->hardware_unsetup();
+ kvm_arch_ops->hardware_unsetup();
out:
- kvm_x86_ops = NULL;
+ kvm_arch_ops = NULL;
return r;
}
-void kvm_exit_x86(void)
+
+void kvm_exit_arch(void)
{
misc_deregister(&kvm_dev);
kmem_cache_destroy(kvm_vcpu_cache);
@@ -3589,10 +1558,14 @@ void kvm_exit_x86(void)
unregister_reboot_notifier(&kvm_reboot_notifier);
unregister_cpu_notifier(&kvm_cpu_notifier);
on_each_cpu(hardware_disable, NULL, 0, 1);
- kvm_x86_ops->hardware_unsetup();
- kvm_x86_ops = NULL;
+ kvm_arch_ops->hardware_unsetup();
+ kvm_arch_ops = NULL;
}
+EXPORT_SYMBOL_GPL(kvm_init_arch);
+EXPORT_SYMBOL_GPL(kvm_exit_arch);
+
+
static __init int kvm_init(void)
{
static struct page *bad_page;
@@ -3601,11 +1574,8 @@ static __init int kvm_init(void)
r = kvm_mmu_module_init();
if (r)
goto out4;
-
kvm_init_debug();
-
- kvm_init_msr_list();
-
+ kvm_arch_init();
if ((bad_page = alloc_page(GFP_KERNEL)) == NULL) {
r = -ENOMEM;
goto out;
@@ -3618,6 +1588,7 @@ static __init int kvm_init(void)
out:
kvm_exit_debug();
+ kvm_arch_exit();
kvm_mmu_module_exit();
out4:
return r;
@@ -3627,11 +1598,10 @@ static __exit void kvm_exit(void)
{
kvm_exit_debug();
__free_page(pfn_to_page(bad_page_address >> PAGE_SHIFT));
+ kvm_arch_exit();
kvm_mmu_module_exit();
}
module_init(kvm_init)
module_exit(kvm_exit)
-EXPORT_SYMBOL_GPL(kvm_init_x86);
-EXPORT_SYMBOL_GPL(kvm_exit_x86);
[-- Attachment #5: Type: text/plain, Size: 228 bytes --]
-------------------------------------------------------------------------
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2005.
http://clk.atdmt.com/MRT/go/vse0120000070mrt/direct/01/
[-- Attachment #6: Type: text/plain, Size: 186 bytes --]
_______________________________________________
kvm-devel mailing list
kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
https://lists.sourceforge.net/lists/listinfo/kvm-devel
^ permalink raw reply related [flat|nested] 29+ messages in thread
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC753A4E-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-09-26 8:44 ` Laurent Vivier
[not found] ` <46FA1BDA.2060003-6ktuUTfB/bM@public.gmane.org>
2007-09-27 9:18 ` Avi Kivity
1 sibling, 1 reply; 29+ messages in thread
From: Laurent Vivier @ 2007-09-26 8:44 UTC (permalink / raw)
To: Zhang, Xiantao
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
avi-atKUWr5tajBWk0Htik3J/w
[-- Attachment #1.1: Type: text/plain, Size: 3870 bytes --]
Hi,
is this the same layout introduced for the powerpc port?
Perhaps you should work together?
Laurent
Zhang, Xiantao wrote:
> Hi Folks,
> We are working on enabling KVM support on IA64 platform, and now
> Linux, Windows guests get stable run and achieve reasonable performance
> on KVM with Open GFW. But you know, the current KVM only considers x86
> platform, and is short of cross-architecture framework. Currently, we
> have a proposal for KVM source layout to accommodate new CPU
> architectures. Attached foil describes the detail. With our proposal, we
> can boot x86 guests based on commit
> 2e278972a11eb14f031dea242a9ed118adfa0932, also didn't see regressions.
> For IA64 side, we are rebasing our code to this framework.
> Main changes to current source:
> 1. Add subdirectories, such as x86 and ia64 to hold arch-specific code.
> 2. Split kvm_main.c to two parts. One is still called kvm_main.c, just
> contains KVM common interfaces with user space, and basic KVM
> infrastructure. The other one is named as kvm_arch.c under sub-directory
> (eg. X86, ia64 etc), which includes arch-specific code to supplement the
> functionality of kvm_main.c
> 3. Add an "include" directory in drivers/kvm. Due to possibly complex
> code logic in KVM source, maybe many header files need to maintain for
> some architectures. If we put them under top-level include/asm-arch
> directory, it may introduce much more maintain effort. So, we put it
> under "drivers/kvm", and let it be effective when kernel configuration
> time.
> BTW, Userspace code changes are not involved in this thread.
> Considering the readability, we didn't attach the diff file in the mail,
> due to big changes to kvm source structure, and only post the tarball
> including whole directory "drivers/kvm" instead. For comparison, I
> attached kvm_main.diff as well.
>
> Any comments are appreciated from you! Hope to see IA64 support on KVM
> earlier!
>
> Thanks & Best Wishes
> Xiantao
> Intel Opensource Technology Center.
--
------------- Laurent.Vivier-6ktuUTfB/bM@public.gmane.org --------------
"Software is hard" - Donald Knuth
[-- Attachment #1.2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 189 bytes --]
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <FD80ED6F62DC5E41910477505FA01BDFA62D00-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-09-26 8:58 ` Zhang, Xiantao
0 siblings, 0 replies; 29+ messages in thread
From: Zhang, Xiantao @ 2007-09-26 8:58 UTC (permalink / raw)
To: Zhang, Xiantao, kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Cc: avi-atKUWr5tajBWk0Htik3J/w
[-- Attachment #1: Type: text/plain, Size: 2373 bytes --]
It seems our mail server doesn't allow attaching source files directly, so I had to generate a diff file instead, despite its poor readability. You can apply it on top of 2e278972a11eb14f031dea242a9ed118adfa0932 to get the final source layout we proposed.
Thanks
Xiantao
-----Original Message-----
From: Zhang, Xiantao
Sent: September 26, 2007 16:34
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
Cc: avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org
Subject: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
Hi Folks,
We are working on enabling KVM support on IA64 platform, and now Linux, Windows guests get stable run and achieve reasonable performance on KVM with Open GFW. But you know, the current KVM only considers x86 platform, and is short of cross-architecture framework. Currently, we have a proposal for KVM source layout to accommodate new CPU architectures. Attached foil describes the detail. With our proposal, we can boot x86 guests based on commit 2e278972a11eb14f031dea242a9ed118adfa0932, also didn't see regressions.
For IA64 side, we are rebasing our code to this framework.
Main changes to current source:
1. Add subdirectories, such as x86 and ia64 to hold arch-specific code.
2. Split kvm_main.c to two parts. One is still called kvm_main.c, just contains KVM common interfaces with user space, and basic KVM infrastructure. The other one is named as kvm_arch.c under sub-directory (eg. X86, ia64 etc), which includes arch-specific code to supplement the functionality of kvm_main.c
3. Add an "include" directory in drivers/kvm. Due to possibly complex code logic in KVM source, maybe many header files need to maintain for some architectures. If we put them under top-level include/asm-arch directory, it may introduce much more maintain effort. So, we put it under "drivers/kvm", and let it be effective when kernel configuration time.
BTW, Userspace code changes are not involved in this thread.
Considering the readability, we didn't attach the diff file in the mail, due to big changes to kvm source structure, and only post the tarball including whole directory "drivers/kvm" instead. For comparison, I attached kvm_main.diff as well.
Any comments are appreciated from you! Hope to see IA64 support on KVM earlier!
Thanks & Best Wishes
Xiantao
Intel Opensource Technology Center.
[-- Attachment #2: kvm-diff.tar.gz --]
[-- Type: application/x-gzip, Size: 175780 bytes --]
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <46FA1BDA.2060003-6ktuUTfB/bM@public.gmane.org>
@ 2007-09-26 9:38 ` Zhang, Xiantao
0 siblings, 0 replies; 29+ messages in thread
From: Zhang, Xiantao @ 2007-09-26 9:38 UTC (permalink / raw)
To: Laurent Vivier
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
avi-atKUWr5tajBWk0Htik3J/w
[-- Attachment #1: Type: text/plain, Size: 4619 bytes --]
Hi Laurent,
Thanks for your suggestion! We should certainly work together to come up with the cross-architecture framework. After a quick read of their patches, though, I think our proposal differs somewhat from what the PPC guys provided, even if the underlying ideas are similar. In particular, we want a more clear-cut code framework that makes porting to other CPUs easy.
Thanks
Xiantao
-----Original Message-----
From: Laurent Vivier [mailto:Laurent.Vivier-6ktuUTfB/bM@public.gmane.org]
Sent: September 26, 2007 16:44
To: Zhang, Xiantao
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org; avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org
Subject: Re: [kvm-devel] [RFC] KVM Source layout Proposal to accommodate new CPU architecture
Hi,
is this the same layout introduced for the powerpc port?
Perhaps you should work together?
Laurent
Zhang, Xiantao wrote:
> Hi Folks,
> We are working on enabling KVM support on IA64 platform, and now
> Linux, Windows guests get stable run and achieve reasonable performance
> on KVM with Open GFW. But you know, the current KVM only considers x86
> platform, and is short of cross-architecture framework. Currently, we
> have a proposal for KVM source layout to accommodate new CPU
> architectures. Attached foil describes the detail. With our proposal, we
> can boot x86 guests based on commit
> 2e278972a11eb14f031dea242a9ed118adfa0932, also didn't see regressions.
> For IA64 side, we are rebasing our code to this framework.
> Main changes to current source:
> 1. Add subdirectories, such as x86 and ia64 to hold arch-specific code.
> 2. Split kvm_main.c to two parts. One is still called kvm_main.c, just
> contains KVM common interfaces with user space, and basic KVM
> infrastructure. The other one is named as kvm_arch.c under sub-directory
> (eg. X86, ia64 etc), which includes arch-specific code to supplement the
> functionality of kvm_main.c
> 3. Add an "include" directory in drivers/kvm. Due to possibly complex
> code logic in KVM source, maybe many header files need to maintain for
> some architectures. If we put them under top-level include/asm-arch
> directory, it may introduce much more maintain effort. So, we put it
> under "drivers/kvm", and let it be effective when kernel configuration
> time.
> BTW, Userspace code changes are not involved in this thread.
> Considering the readability, we didn't attach the diff file in the mail,
> due to big changes to kvm source structure, and only post the tarball
> including whole directory "drivers/kvm" instead. For comparison, I
> attached kvm_main.diff as well.
>
> Any comments are appreciated from you! Hope to see IA64 support on KVM
> earlier!
>
> Thanks & Best Wishes
> Xiantao
> Intel Opensource Technology Center.
--
------------- Laurent.Vivier-6ktuUTfB/bM@public.gmane.org --------------
"Software is hard" - Donald Knuth
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC753A4E-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-09-26 8:44 ` Laurent Vivier
@ 2007-09-27 9:18 ` Avi Kivity
[not found] ` <46FB7566.9030504-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
1 sibling, 1 reply; 29+ messages in thread
From: Avi Kivity @ 2007-09-27 9:18 UTC (permalink / raw)
To: Zhang, Xiantao; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, virtualization
Zhang, Xiantao wrote:
> Hi Folks,
> We are working on enabling KVM support on IA64 platform, and now
> Linux, Windows guests get stable run and achieve reasonable performance
> on KVM with Open GFW. But you know, the current KVM only considers x86
> platform, and is short of cross-architecture framework. Currently, we
> have a proposal for KVM source layout to accommodate new CPU
> architectures. Attached foil describes the detail. With our proposal, we
> can boot x86 guests based on commit
> 2e278972a11eb14f031dea242a9ed118adfa0932, also didn't see regressions.
> For IA64 side, we are rebasing our code to this framework.
> Main changes to current source:
> 1. Add subdirectories, such as x86 and ia64 to hold arch-specific code.
> 2. Split kvm_main.c to two parts. One is still called kvm_main.c, just
> contains KVM common interfaces with user space, and basic KVM
> infrastructure. The other one is named as kvm_arch.c under sub-directory
> (eg. X86, ia64 etc), which includes arch-specific code to supplement the
> functionality of kvm_main.c
> 3. Add an "include" directory in drivers/kvm. Due to possibly complex
> code logic in KVM source, maybe many header files need to maintain for
> some architectures. If we put them under top-level include/asm-arch
> directory, it may introduce much more maintain effort. So, we put it
> under "drivers/kvm", and let it be effective when kernel configuration
> time.
> BTW, Userspace code changes are not involved in this thread.
> Considering the readability, we didn't attach the diff file in the mail,
> due to big changes to kvm source structure, and only post the tarball
> including whole directory "drivers/kvm" instead. For comparison, I
> attached kvm_main.diff as well.
>
> Any comments are appreciated from you! Hope to see IA64 support on KVM
> earlier!
>
The whole drivers/kvm/ thing was just a trick to get merged quickly. I
think the new layout should be something like
virt/kvm/, include/linux/kvm*.h -> common code
virt/lguest/ -> the other hypervisor
virt/virtio/ -> shared I/O infrastructure
virt/ -> the CONFIG_VIRTUALIZATION menu
arch/x86/kvm/, include/asm-x86/ -> x86 specific code
arch/ia64/kvm/, include/asm-ia64/ -> ia64 specific code
etc.
Of course, this depends on the x86 merge which is scheduled for early
2.6.24.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <46FB7566.9030504-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-09-28 2:16 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC753E73-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-09-28 8:20 ` [RFC] KVM Source layout Proposal to accommodate new CPU architecture Carsten Otte
2007-09-29 13:06 ` Rusty Russell
2 siblings, 1 reply; 29+ messages in thread
From: Zhang, Xiantao @ 2007-09-28 2:16 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, virtualization
[-- Attachment #1: Type: text/plain, Size: 3016 bytes --]
Hi Avi,
Sounds good! But what can we do before the merge? As you know, we have to spend a lot of effort keeping our patches in sync with the upstream tree. Do you have an interim solution or proposal for merging the IA64 code? Thanks.
Xiantao
-----Original Message-----
From: Avi Kivity [mailto:avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org]
Sent: September 27, 2007 17:19
To: Zhang, Xiantao
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org; virtualization
Subject: Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
Zhang, Xiantao wrote:
> Hi Folks,
> We are working on enabling KVM support on IA64 platform, and now
> Linux, Windows guests get stable run and achieve reasonable performance
> on KVM with Open GFW. But you know, the current KVM only considers x86
> platform, and is short of cross-architecture framework. Currently, we
> have a proposal for KVM source layout to accommodate new CPU
> architectures. Attached foil describes the detail. With our proposal, we
> can boot x86 guests based on commit
> 2e278972a11eb14f031dea242a9ed118adfa0932, also didn't see regressions.
> For IA64 side, we are rebasing our code to this framework.
> Main changes to current source:
> 1. Add subdirectories, such as x86 and ia64 to hold arch-specific code.
> 2. Split kvm_main.c to two parts. One is still called kvm_main.c, just
> contains KVM common interfaces with user space, and basic KVM
> infrastructure. The other one is named as kvm_arch.c under sub-directory
> (eg. X86, ia64 etc), which includes arch-specific code to supplement the
> functionality of kvm_main.c
> 3. Add an "include" directory in drivers/kvm. Due to possibly complex
> code logic in KVM source, maybe many header files need to maintain for
> some architectures. If we put them under top-level include/asm-arch
> directory, it may introduce much more maintain effort. So, we put it
> under "drivers/kvm", and let it be effective when kernel configuration
> time.
> BTW, Userspace code changes are not involved in this thread.
> Considering the readability, we didn't attach the diff file in the mail,
> due to big changes to kvm source structure, and only post the tarball
> including whole directory "drivers/kvm" instead. For comparison, I
> attached kvm_main.diff as well.
>
> Any comments are appreciated from you! Hope to see IA64 support on KVM
> earlier!
>
The whole drivers/kvm/ thing was just a trick to get merged quickly. I
think the new layout should be something like
virt/kvm/, include/linux/kvm*.h -> common code
virt/lguest/ -> the other hypervisor
virt/virtio/ -> shared I/O infrastructure
virt/ -> the CONFIG_VIRTUALIZATION menu
arch/x86/kvm/, include/asm-x86/ -> x86 specific code
arch/ia64/kvm/, include/asm-ia64/ -> ia64 specific code
etc.
Of course, this depends on the x86 merge which is scheduled for early
2.6.24.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <46FB7566.9030504-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-09-28 2:16 ` Zhang, Xiantao
@ 2007-09-28 8:20 ` Carsten Otte
[not found] ` <46FCB954.50005-tA70FqPdS9bQT0dZR+AlfA@public.gmane.org>
2007-09-29 13:06 ` Rusty Russell
2 siblings, 1 reply; 29+ messages in thread
From: Carsten Otte @ 2007-09-28 8:20 UTC (permalink / raw)
To: Zhang, Xiantao
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Avi Kivity,
virtualization
> Zhang, Xiantao wrote:
> We are working on enabling KVM support on IA64 platform, and now
> Linux, Windows guests get stable run and achieve reasonable performance
> on KVM with Open GFW. But you know, the current KVM only considers x86
> platform, and is short of cross-architecture framework. Currently, we
> have a proposal for KVM source layout to accommodate new CPU
> architectures.
That's great. I agree that a general restructuring of the current x86 code is
needed to fit different architectures properly. I strongly appreciate your
efforts towards this.
> 1. Add subdirectories, such as x86 and ia64 to hold arch-specific code.
I would prefer Avi's move to virt/ prior to that. But then we'll need
arch-specific subdirectories. I think they should go to
arch/<arch>/kvm. That would involve the architecture maintainers,
which gives us more peer review.
> 2. Split kvm_main.c to two parts. One is still called kvm_main.c, just
> contains KVM common interfaces with user space, and basic KVM
> infrastructure. The other one is named as kvm_arch.c under sub-directory
> (eg. X86, ia64 etc), which includes arch-specific code to supplement the
> functionality of kvm_main.c
I disagree with the split you've made. I think we should try to keep
as much as possible common, rather than just duplicating the effort
for each architecture we have. Thus, I would prefer to refine a clean
architecture backend interface based on the current vmx/svm split. We
just need to move the x86 specifics into a "kvm-x86" library on which
kvm-intel, kvm-amd, and maybe kvm-rusty depend. Interfacing with it
should go via the same function vector we use for svm/vmx today.
> 3. Add an "include" directory in drivers/kvm. Due to possibly complex
> code logic in KVM source, maybe many header files need to maintain for
> some architectures. If we put them under top-level include/asm-arch
> directory, it may introduce much more maintain effort. So, we put it
> under "drivers/kvm", and let it be effective when kernel configuration
> time.
To me, they clearly belong to include/arch.
so long,
Carsten
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC753E73-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-09-28 14:45 ` Avi Kivity
[not found] ` <46FD1392.1080905-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 29+ messages in thread
From: Avi Kivity @ 2007-09-28 14:45 UTC (permalink / raw)
To: Zhang, Xiantao; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, virtualization
[-- Attachment #1: Type: text/plain, Size: 3287 bytes --]
Zhang, Xiantao wrote:
> Hi Avi,
> Sounds good! But what can we do before the merge? As you know, we have to spend a lot of effort keeping our patches in sync with the upstream tree. Do you have an interim solution or proposal for merging the IA64 code? Thanks.
> Xiantao
>
The merge is due in a few weeks. If that is too far, we can push the x86
parts to arch/i386 and do makefile magic so that x86-64 sees it too.
> -----Original Message-----
> From: Avi Kivity [mailto:avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org]
> Sent: September 27, 2007 17:19
> To: Zhang, Xiantao
> Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org; virtualization
> Subject: Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
>
> Zhang, Xiantao wrote:
>
>> Hi Folks,
>> We are working on enabling KVM support on IA64 platform, and now
>> Linux, Windows guests get stable run and achieve reasonable performance
>> on KVM with Open GFW. But you know, the current KVM only considers x86
>> platform, and is short of cross-architecture framework. Currently, we
>> have a proposal for KVM source layout to accommodate new CPU
>> architectures. Attached foil describes the detail. With our proposal, we
>> can boot x86 guests based on commit
>> 2e278972a11eb14f031dea242a9ed118adfa0932, also didn't see regressions.
>> For IA64 side, we are rebasing our code to this framework.
>> Main changes to current source:
>> 1. Add subdirectories, such as x86 and ia64 to hold arch-specific code.
>> 2. Split kvm_main.c to two parts. One is still called kvm_main.c, just
>> contains KVM common interfaces with user space, and basic KVM
>> infrastructure. The other one is named as kvm_arch.c under sub-directory
>> (eg. X86, ia64 etc), which includes arch-specific code to supplement the
>> functionality of kvm_main.c
>> 3. Add an "include" directory in drivers/kvm. Due to possibly complex
>> code logic in KVM source, maybe many header files need to maintain for
>> some architectures. If we put them under top-level include/asm-arch
>> directory, it may introduce much more maintain effort. So, we put it
>> under "drivers/kvm", and let it be effective when kernel configuration
>> time.
>> BTW, Userspace code changes are not involved in this thread.
>> Considering the readability, we didn't attach the diff file in the mail,
>> due to big changes to kvm source structure, and only post the tarball
>> including whole directory "drivers/kvm" instead. For comparison, I
>> attached kvm_main.diff as well.
>>
>> Any comments are appreciated from you! Hope to see IA64 support on KVM
>> earlier!
>>
>>
>
> The whole drivers/kvm/ thing was just a trick to get merged quickly. I
> think the new layout should be something like
>
> virt/kvm/, include/linux/kvm*.h -> common code
> virt/lguest/ -> the other hypervisor
> virt/virtio/ -> shared I/O infrastructure
> virt/ -> the CONFIG_VIRTUALIZATION menu
> arch/x86/kvm/, include/asm-x86/ -> x86 specific code
> arch/ia64/kvm/, include/asm-ia64/ -> ia64 specific code
>
> etc.
>
> Of course, this depends on the x86 merge which is scheduled for early
> 2.6.24.
>
>
--
Any sufficiently difficult bug is indistinguishable from a feature.
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <46FD1392.1080905-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-09-28 15:28 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC754031-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 29+ messages in thread
From: Zhang, Xiantao @ 2007-09-28 15:28 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, virtualization
[-- Attachment #1: Type: text/plain, Size: 4258 bytes --]
Hi Avi,
So you mean IA64 can adopt a similar method as well? Even so, I still think we need a solution for checking the IA64 code into the existing KVM upstream tree, because the KVM infrastructure in mainline Linux may be a long way off. Moreover, we cannot avoid abstracting the existing code for a cross-architecture framework before what you described happens; for example, kvm_main.c still needs to be split out as a common interface for all architectures. So I see no big conflict between our proposal and the final KVM infrastructure in mainline Linux. What do you think? :)
Thanks
Xiantao
-----Original Message-----
From: Avi Kivity [mailto:avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org]
Sent: September 28, 2007 22:46
To: Zhang, Xiantao
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org; virtualization
Subject: Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
Zhang, Xiantao wrote:
> Hi Avi,
> Sounds good! But what can we do before the merge? We have to spend much effort keeping our patches in sync with the upstream tree. Do you have an interim solution or proposal for merging the IA64 code? Thanks.
> Xiantao
>
The merge is due in a few weeks. If that is too far, we can push the x86
parts to arch/i386 and do makefile magic so that x86-64 sees it too.
> -----Original Message-----
> From: Avi Kivity [mailto:avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org]
Sent: September 27, 2007 17:19
> To: Zhang, Xiantao
> Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org; virtualization
> Subject: Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
>
> Zhang, Xiantao wrote:
>
>> Hi Folks,
>> We are working on enabling KVM support on the IA64 platform. Linux
>> and Windows guests now run stably and achieve reasonable performance
>> on KVM with Open GFW. However, the current KVM only considers the
>> x86 platform and lacks a cross-architecture framework. We have a
>> proposal for a KVM source layout that accommodates new CPU
>> architectures; the attached foil describes the details. With our
>> proposal, we can boot x86 guests based on commit
>> 2e278972a11eb14f031dea242a9ed118adfa0932 without seeing regressions.
>> On the IA64 side, we are rebasing our code onto this framework.
>> Main changes to the current source:
>> 1. Add subdirectories, such as x86 and ia64, to hold arch-specific
>> code.
>> 2. Split kvm_main.c into two parts. One is still called kvm_main.c
>> and contains the KVM common interfaces with user space plus the
>> basic KVM infrastructure. The other is named kvm_arch.c and lives
>> under a sub-directory (e.g. x86, ia64), containing the
>> arch-specific code that supplements the functionality of kvm_main.c.
>> 3. Add an "include" directory in drivers/kvm. Given the potentially
>> complex code logic in the KVM source, some architectures may need
>> to maintain many header files. Putting them under the top-level
>> include/asm-arch directory would add maintenance effort, so we put
>> them under "drivers/kvm" and make them effective at kernel
>> configuration time.
>> BTW, userspace code changes are not involved in this thread.
>> For readability, we didn't attach the diff file in the mail, given
>> the big changes to the KVM source structure; we only post the
>> tarball including the whole directory "drivers/kvm" instead. For
>> comparison, I attached kvm_main.diff as well.
>>
>> Any comments are appreciated! Hope to see IA64 support in KVM soon!
>>
>>
>
> The whole drivers/kvm/ thing was just a trick to get merged quickly. I
> think the new layout should be something like
>
> virt/kvm/, include/linux/kvm*.h -> common code
> virt/lguest/ -> the other hypervisor
> virt/virtio/ -> shared I/O infrastructure
> virt/ -> the CONFIG_VIRTUALIZATION menu
> arch/x86/kvm/, include/asm-x86/ -> x86 specific code
> arch/ia64/kvm/, include/asm-ia64/ -> ia64 specific code
>
> etc.
>
> Of course, this depends on the x86 merge which is scheduled for early
> 2.6.24.
>
>
--
Any sufficiently difficult bug is indistinguishable from a feature.
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC754031-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-09-28 17:03 ` Avi Kivity
[not found] ` <46FD33F2.9090506-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 29+ messages in thread
From: Avi Kivity @ 2007-09-28 17:03 UTC (permalink / raw)
To: Zhang, Xiantao; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, virtualization
[-- Attachment #1: Type: text/plain, Size: 1088 bytes --]
Zhang, Xiantao wrote:
> Hi Avi,
> So you mean IA64 can adopt a similar method as well?
What method do you mean exactly?
> Even so, I still think we need a solution for getting the IA64 code into the existing KVM upstream tree, because the KVM infrastructure in mainline Linux may be a long way off. Moreover, we cannot avoid abstracting the existing code to support a cross-architecture framework before what you described happens. For example, kvm_main.c still needs to be split into a common interface for all architectures. So I see no major conflict between our proposal and the final KVM infrastructure in mainline Linux. What do you think? :)
>
The powerpc people had some patches to make kvm_main arch independent.
We should work on that base.
To avoid a dependency on the x86 merge, we can start by working within
drivers/kvm/, for example creating drivers/kvm/x86.c and
drivers/kvm/ia64.c. Later patches can move these to arch/*/.
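Avi's interim split can be sketched in miniature as below. This is an illustrative user-space model only: the kvm_arch_* names are hypothetical stand-ins for whatever hook interface the split would define, not actual KVM symbols of the time.

```c
#include <stdio.h>

/* Hypothetical vcpu type standing in for KVM's real structures. */
struct kvm_vcpu { int id; };

/* Arch side: drivers/kvm/x86.c or drivers/kvm/ia64.c would each
 * provide its own implementation of hooks like these. */
int kvm_arch_hardware_setup(void)
{
    /* arch-specific enablement (e.g. VMXON on x86) would go here */
    printf("arch hardware setup\n");
    return 0;
}

int kvm_arch_vcpu_run(struct kvm_vcpu *vcpu)
{
    /* arch-specific world switch would go here */
    return vcpu->id;
}

/* Common side: kvm_main.c keeps only arch-independent driver logic
 * and calls through the hook interface. */
int kvm_run_once(struct kvm_vcpu *vcpu)
{
    if (kvm_arch_hardware_setup() != 0)
        return -1;
    return kvm_arch_vcpu_run(vcpu);
}
```

Splitting along a hook boundary like this is what would let kvm_main.c stay identical across architectures while each arch file carries the VT-x/SVM or VT-i specifics.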
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <46FD33F2.9090506-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-09-29 1:47 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC754076-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 29+ messages in thread
From: Zhang, Xiantao @ 2007-09-29 1:47 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, virtualization
Zhang, Xiantao wrote:
>> Hi Avi,
>> So you mean IA64 can adopt a similar method as well?
>What method do you mean exactly?
Put all arch-specific files into arch/ia64/kvm, as you described for
the future KVM infrastructure.
>The powerpc people had some patches to make kvm_main arch independent.
>We should work on that base.
>To avoid a dependency on the x86 merge, we can start by working within
>drivers/kvm/, for example creating drivers/kvm/x86.c and
>drivers/kvm/ia64.c. Later patches can move these to arch/*/.
It may work on the x86 side. But for IA64, we have several C source
files and assembly files implementing a VMM module, which contains the
virtualization logic for the CPU, MMU and other platform devices. (At
the KVM Forum, Anthony presented the IA64/KVM architecture, which
differs a bit from the x86 side due to the different approaches to
VT.) If we put all these arch-specific files in one directory, it
looks very strange!
Thanks
Xiantao
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <46FB7566.9030504-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-09-28 2:16 ` Zhang, Xiantao
2007-09-28 8:20 ` [RFC] KVM Source layout Proposal to accommodate new CPU architecture Carsten Otte
@ 2007-09-29 13:06 ` Rusty Russell
[not found] ` <1191071211.26950.28.camel-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org>
2 siblings, 1 reply; 29+ messages in thread
From: Rusty Russell @ 2007-09-29 13:06 UTC (permalink / raw)
To: Avi Kivity
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Sam Ravnborg,
Zhang, Xiantao, virtualization
On Thu, 2007-09-27 at 11:18 +0200, Avi Kivity wrote:
> The whole drivers/kvm/ thing was just a trick to get merged quickly. I
> think the new layout should be something like
>
> virt/kvm/, include/linux/kvm*.h -> common code
> virt/lguest/ -> the other hypervisor
> virt/virtio/ -> shared I/O infrastructure
> > virt/ -> the CONFIG_VIRTUALIZATION menu
> arch/x86/kvm/, include/asm-x86/ -> x86 specific code
> arch/ia64/kvm/, include/asm-ia64/ -> ia64 specific code
The problem with this separation is that module source cannot span
directories (at least, not that I could find).
This is why lguest went for an "i386_" prefix for arch separation.
Cheers,
Rusty.
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <1191071211.26950.28.camel-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org>
@ 2007-09-29 14:25 ` Sam Ravnborg
2007-09-30 2:26 ` [RFC] KVM Source layout Proposal to accommodate new " Zhang, Xiantao
1 sibling, 0 replies; 29+ messages in thread
From: Sam Ravnborg @ 2007-09-29 14:25 UTC (permalink / raw)
To: Rusty Russell
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Zhang, Xiantao,
Avi Kivity, virtualization
Hi Rusty.
On Sat, Sep 29, 2007 at 11:06:51PM +1000, Rusty Russell wrote:
> On Thu, 2007-09-27 at 11:18 +0200, Avi Kivity wrote:
> > The whole drivers/kvm/ thing was just a trick to get merged quickly. I
> > think the new layout should be something like
> >
> > virt/kvm/, include/linux/kvm*.h -> common code
> > virt/lguest/ -> the other hypervisor
> > virt/virtio/ -> shared I/O infrastructure
> > > virt/ -> the CONFIG_VIRTUALIZATION menu
> > arch/x86/kvm/, include/asm-x86/ -> x86 specific code
> > arch/ia64/kvm/, include/asm-ia64/ -> ia64 specific code
>
> The problem with this separation is that module source cannot span
> directories (at least, not that I could find).
That's supported but not in the most elegant fashion.
In your Makefile in the top-level directory
just specify the relevant .o files in the subdirectories.
So you would have:
lguest-y := file.o
lguest-y += dir/foo.o
lguest-y += dir2/bar.o
obj-$(CONFIG_...) := lguest.o
If you have trouble making this work drop me a mail and I
will try to help with a more specific example.
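Applied to the kvm module itself, Sam's scheme might look like the
following fragment (the directory names and config symbols are
illustrative, not the actual kbuild files of the time):

```make
# drivers/kvm/Makefile -- hypothetical composite-object sketch
kvm-y              := kvm_main.o
kvm-$(CONFIG_X86)  += x86/kvm_arch.o
kvm-$(CONFIG_IA64) += ia64/kvm_arch.o

obj-$(CONFIG_KVM)  += kvm.o
```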
Sam
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <46FCB954.50005-tA70FqPdS9bQT0dZR+AlfA@public.gmane.org>
@ 2007-09-30 2:26 ` Zhang, Xiantao
0 siblings, 0 replies; 29+ messages in thread
From: Zhang, Xiantao @ 2007-09-30 2:26 UTC (permalink / raw)
To: carsteno-tA70FqPdS9bQT0dZR+AlfA
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Avi Kivity,
virtualization
Carsten Otte wrote:
>> Zhang, Xiantao wrote:
>> We are working on enabling KVM support on the IA64 platform. Linux
>> and Windows guests now run stably and achieve reasonable
>> performance on KVM with Open GFW. However, the current KVM only
>> considers the x86 platform and lacks a cross-architecture
>> framework. We have a proposal for a KVM source layout to
>> accommodate new CPU architectures.
> That's great. I agree that a general restructuring of the current
> x86 code is needed to fit different archs properly. I strongly
> appreciate your efforts towards this.
>
>> 1. Add subdirectories, such as x86 and ia64 to hold arch-specific
>> code.
> I would prefer Avi's move to virt/ prior to that. But then we'll
> need arch-specific subdirectories. I think they should go to
> arch/<arch>/kvm. That would involve the architecture maintainers,
> which gives us more peer review.
This may be a long way off. For current development, maybe we should
focus on the existing code structure and make it accommodate new
architectures, even though the future virtualization infrastructure
that mainline Linux may provide looks attractive.
>
>> 2. Split kvm_main.c into two parts. One is still called kvm_main.c
>> and contains the KVM common interfaces with user space plus the
>> basic KVM infrastructure. The other is named kvm_arch.c and lives
>> under a sub-directory (e.g. x86, ia64), containing the
>> arch-specific code that supplements the functionality of kvm_main.c.
> I disagree with the split you've made. I think we should try to keep
> as much as possible common, rather than just duplicating the effort
> for each architecture we have. Thus, I prefer to refine a clean
> architecture backend interface based on the current vmx/svm split.
> We just need to move the x86 specifics to a "kvm-x86" library, on
> which kvm-intel, kvm-amd and maybe kvm-rusty would depend.
> Interfacing with it needs to go via the same function vector we use
> for svm/vmx today.
Actually, we kept as much as possible common. In the process, some
functions were very hard to split due to their close relationship with
the x86 side, so we just put them in kvm_arch.c first. Maybe they can
be refined in a future arch merge.
>> 3. Add an "include" directory in drivers/kvm. Given the potentially
>> complex code logic in the KVM source, some architectures may need
>> to maintain many header files. Putting them under the top-level
>> include/asm-arch directory would add maintenance effort, so we put
>> them under "drivers/kvm" and make them effective at kernel
>> configuration time.
> To me, they clearly belong to include/arch.
Agreed, but it may introduce some maintenance effort if many
kvm-specific files were all put under the include/arch directory.
> so long,
> Carsten
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <1191071211.26950.28.camel-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org>
2007-09-29 14:25 ` Sam Ravnborg
@ 2007-09-30 2:26 ` Zhang, Xiantao
1 sibling, 0 replies; 29+ messages in thread
From: Zhang, Xiantao @ 2007-09-30 2:26 UTC (permalink / raw)
To: Rusty Russell, Avi Kivity
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Sam Ravnborg,
virtualization
Rusty Russell wrote:
> On Thu, 2007-09-27 at 11:18 +0200, Avi Kivity wrote:
>> The whole drivers/kvm/ thing was just a trick to get merged quickly.
>> I think the new layout should be something like
>>
>> virt/kvm/, include/linux/kvm*.h -> common code
>> virt/lguest/ -> the other hypervisor
>> virt/virtio/ -> shared I/O infrastructure
>> virt/ -> the CONFIG_VIRTUALIZATION menu
>> arch/x86/kvm/, include/asm-x86/ -> x86 specific code
>> arch/ia64/kvm/, include/asm-ia64/ -> ia64 specific code
>
> The problem with this separation is that module source cannot span
> directories (at least, not that I could find).
Basically agreed. That is also why we put all arch-specific files in
one directory. But if only one or two files are outside the main
directory, maybe the module's Makefile can use relative paths to
locate them.
> This is why lguest went for "i386_" prefix for arch separation.
>
> Cheers,
> Rusty.
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC754076-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-09-30 10:52 ` Avi Kivity
[not found] ` <46FF7FF6.6090103-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 29+ messages in thread
From: Avi Kivity @ 2007-09-30 10:52 UTC (permalink / raw)
To: Zhang, Xiantao; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, virtualization
Zhang, Xiantao wrote:
> Zhang, Xiantao wrote:
>
>>> Hi Avi,
>>> So you mean IA64 can adopt a similar method as well?
>>>
>
>
>> What method do you mean exactly?
>>
> Put all arch-specific files into arch/ia64/kvm as you described in
> future KVM infrastructure.
>
>> The powerpc people had some patches to make kvm_main arch independent.
>> We should work on that base.
>> To avoid a dependency on the x86 merge, we can start by working within
>> drivers/kvm/, for example creating drivers/kvm/x86.c and
>> drivers/kvm/ia64.c. Later patches can move these to arch/*/.
>>
> It may work on the x86 side. But for IA64, we have several C source
> files and assembly files implementing a VMM module, which contains
> the virtualization logic for the CPU, MMU and other platform
> devices. (At the KVM Forum, Anthony presented the IA64/KVM
> architecture, which differs a bit from the x86 side due to the
> different approaches to VT.) If we put all these arch-specific files
> in one directory, it looks very strange!
>
ia64/ subdirectory is also fine.
--
error compiling committee.c: too many arguments to function
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <46FF7FF6.6090103-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-09-30 13:53 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC75421C-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 29+ messages in thread
From: Zhang, Xiantao @ 2007-09-30 13:53 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, virtualization
Avi Kivity wrote:
> Zhang, Xiantao wrote:
>> Zhang, Xiantao wrote:
>>
>>>> Hi Avi,
>>>> So you mean IA64 can adopt a similar method as well?
>>>>
>>
>>
>>> What method do you mean exactly?
>>>
>> Put all arch-specific files into arch/ia64/kvm as you described in
>> future KVM infrastructure.
>>
>>> The powerpc people had some patches to make kvm_main arch
>>> independent. We should work on that base. To avoid a dependency on
>>> the x86 merge, we can start by working within drivers/kvm/, for
>>> example creating drivers/kvm/x86.c and drivers/kvm/ia64.c. Later
>>> patches can move these to arch/*/.
>>>
>> It may work on the x86 side. But for IA64, we have several C
>> source files and assembly files implementing a VMM module, which
>> contains the virtualization logic for the CPU, MMU and other
>> platform devices. (At the KVM Forum, Anthony presented the IA64/KVM
>> architecture, which differs a bit from the x86 side due to the
>> different approaches to VT.) If we put all these arch-specific
>> files in one directory, it looks very strange!
>>
>
> ia64/ subdirectory is also fine.
But even so, we have to split the current code to be arch-independent
in order to support IA64 and other architectures. So why not add one
more subdirectory, x86, in drivers/kvm to hold the x86 arch code?
It should also conform with the future infrastructure in Linux.
Maybe we can borrow the idea from the UML code structure.
What do you think?
Thanks
Xiantao
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC75421C-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-09-30 13:56 ` Avi Kivity
2007-10-02 1:19 ` Hollis Blanchard
[not found] ` <46FFAB00.4050103-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 2 replies; 29+ messages in thread
From: Avi Kivity @ 2007-09-30 13:56 UTC (permalink / raw)
To: Zhang, Xiantao; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, virtualization
Zhang, Xiantao wrote:
> Avi Kivity wrote:
>
>> Zhang, Xiantao wrote:
>>
>>> Zhang, Xiantao wrote:
>>>
>>>
>>>>> Hi Avi,
>>>>> So you mean IA64 can adopt a similar method as well?
>>>>>
>>>>>
>>>
>>>> What method do you mean exactly?
>>>>
>>>>
>>> Put all arch-specific files into arch/ia64/kvm as you described in
>>> future KVM infrastructure.
>>>
>>>
>>>> The powerpc people had some patches to make kvm_main arch
>>>> independent. We should work on that base. To avoid a dependency on
>>>> the x86 merge, we can start by working within drivers/kvm/, for
>>>> example creating drivers/kvm/x86.c and drivers/kvm/ia64.c. Later
>>>> patches can move these to arch/*/.
>>>>
>>>>
>>> It may work on the x86 side. But for IA64, we have several C
>>> source files and assembly files implementing a VMM module, which
>>> contains the virtualization logic for the CPU, MMU and other
>>> platform devices. (At the KVM Forum, Anthony presented the
>>> IA64/KVM architecture, which differs a bit from the x86 side due
>>> to the different approaches to VT.) If we put all these
>>> arch-specific files in one directory, it looks very strange!
>>>
>>>
>> ia64/ subdirectory is also fine.
>>
>
> But even so, we have to split the current code to be
> arch-independent in order to support IA64 and other architectures.
> So why not add one more subdirectory, x86, in drivers/kvm to hold
> the x86 arch code?
>
Sure, that's not an issue.
> It should also conform with the future infrastructure in Linux.
> Maybe we can borrow the idea from the UML code structure.
> What do you think?
Eventually I'd like to see the code in arch/*/kvm. That's probably not
easily doable right now because modules cannot span directories, but
once that's solved, we'll do that as this is most consistent with the
rest of the kernel.
--
error compiling committee.c: too many arguments to function
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <46FFAB00.4050103-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-09-30 15:01 ` Zhang, Xiantao
2007-10-08 2:36 ` Zhang, Xiantao
1 sibling, 0 replies; 29+ messages in thread
From: Zhang, Xiantao @ 2007-09-30 15:01 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, virtualization
Avi Kivity wrote:
> Zhang, Xiantao wrote:
>> Avi Kivity wrote:
>>
>>> Zhang, Xiantao wrote:
>>>
>>>> Zhang, Xiantao wrote:
>>>>
>>>>
>>>>>> Hi Avi,
>>>>>> So you mean IA64 can adopt a similar method as well?
>>>>>>
>>>>>>
>>>>
>>>>> What method do you mean exactly?
>>>>>
>>>>>
>>>> Put all arch-specific files into arch/ia64/kvm as you described in
>>>> future KVM infrastructure.
>>>>
>>>>
>>>>> The powerpc people had some patches to make kvm_main arch
>>>>> independent. We should work on that base. To avoid a dependency on
>>>>> the x86 merge, we can start by working within drivers/kvm/, for
>>>>> example creating drivers/kvm/x86.c and drivers/kvm/ia64.c. Later
>>>>> patches can move these to arch/*/.
>>>>>
>>>>>
>>>> It may work on the x86 side. But for IA64, we have several C
>>>> source files and assembly files implementing a VMM module, which
>>>> contains the virtualization logic for the CPU, MMU and other
>>>> platform devices. (At the KVM Forum, Anthony presented the
>>>> IA64/KVM architecture, which differs a bit from the x86 side due
>>>> to the different approaches to VT.) If we put all these
>>>> arch-specific files in one directory, it looks very strange!
>>>>
>>>>
>>> ia64/ subdirectory is also fine.
>>>
>>
>> But even so, we have to split the current code to be
>> arch-independent in order to support IA64 and other architectures.
>> So why not add one more subdirectory, x86, in drivers/kvm to hold
>> the x86 arch code?
>>
>
> Sure, that's not an issue.
Thanks. Maybe we should work together to make it happen sooner :)
>> It should also conform with the future infrastructure in Linux.
>> Maybe we can borrow the idea from the UML code structure.
>> What do you think?
>
> Eventually I'd like to see the code in arch/*/kvm. That's probably
> not easily doable right now because modules cannot span directories,
> but once that's solved, we'll do that as this is most consistent with
> the rest of the kernel.
Yeah, that should be an elegant solution in the future, if module
compilation can span directories.
Thanks
Xiantao
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
2007-09-30 13:56 ` Avi Kivity
@ 2007-10-02 1:19 ` Hollis Blanchard
2007-10-02 4:11 ` Rusty Russell
[not found] ` <46FFAB00.4050103-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
1 sibling, 1 reply; 29+ messages in thread
From: Hollis Blanchard @ 2007-10-02 1:19 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Sun, 30 Sep 2007 15:56:16 +0200, Avi Kivity wrote:
>
> Eventually I'd like to see the code in arch/*/kvm. That's probably not
> easily doable right now because modules cannot span directories, but
> once that's solved, we'll do that as this is most consistent with the
> rest of the kernel.
What is the "spanning directories" issue? Can't I build
arch/powerpc/kvm/kvm-powerpc.ko and drivers/kvm/kvm.ko and let modprobe
sort out the dependency?
--
Hollis Blanchard
IBM Linux Technology Center
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
2007-10-02 1:19 ` Hollis Blanchard
@ 2007-10-02 4:11 ` Rusty Russell
[not found] ` <1191298279.6979.50.camel-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org>
0 siblings, 1 reply; 29+ messages in thread
From: Rusty Russell @ 2007-10-02 4:11 UTC (permalink / raw)
To: Hollis Blanchard; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Tue, 2007-10-02 at 01:19 +0000, Hollis Blanchard wrote:
> On Sun, 30 Sep 2007 15:56:16 +0200, Avi Kivity wrote:
> >
> > Eventually I'd like to see the code in arch/*/kvm. That's probably not
> > easily doable right now because modules cannot span directories, but
> > once that's solved, we'll do that as this is most consistent with the
> > rest of the kernel.
>
> What is the "spanning directories" issue? Can't I build
> arch/powerpc/kvm/kvm-powerpc.ko and drivers/kvm/kvm.ko and let modprobe
> sort out the dependency?
Sure, but it creates a silly module.
I think guest code belongs in arch/*/kvm/, but host code can be done as
subdirs under drivers/kvm/.
Cheers,
Rusty.
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <1191298279.6979.50.camel-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org>
@ 2007-10-02 6:01 ` Hollis Blanchard
2007-10-02 6:29 ` Rusty Russell
0 siblings, 1 reply; 29+ messages in thread
From: Hollis Blanchard @ 2007-10-02 6:01 UTC (permalink / raw)
To: Rusty Russell; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Tue, 2007-10-02 at 14:11 +1000, Rusty Russell wrote:
> On Tue, 2007-10-02 at 01:19 +0000, Hollis Blanchard wrote:
> > On Sun, 30 Sep 2007 15:56:16 +0200, Avi Kivity wrote:
> > >
> > > Eventually I'd like to see the code in arch/*/kvm. That's probably not
> > > easily doable right now because modules cannot span directories, but
> > > once that's solved, we'll do that as this is most consistent with the
> > > rest of the kernel.
> >
> > What is the "spanning directories" issue? Can't I build
> > arch/powerpc/kvm/kvm-powerpc.ko and drivers/kvm/kvm.ko and let modprobe
> > sort out the dependency?
>
> Sure, but it creates a silly module.
Isn't there precedent in other areas? What about cpufreq or ALSA? (I'm
really asking; don't have time to investigate further right now.)
> I think guest code belongs in arch/*/kvm/, but host code can be done as
> subdirs under drivers/kvm/.
Funny, I would say the opposite. The host code is where I'm mucking with
deep architectural state like stealing the TLB out from under Linux. The
guest state is all "what would a processor like this do?"
--
Hollis Blanchard
IBM Linux Technology Center
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
2007-10-02 6:01 ` Hollis Blanchard
@ 2007-10-02 6:29 ` Rusty Russell
[not found] ` <1191306576.6979.91.camel-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org>
0 siblings, 1 reply; 29+ messages in thread
From: Rusty Russell @ 2007-10-02 6:29 UTC (permalink / raw)
To: Hollis Blanchard; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
On Tue, 2007-10-02 at 01:01 -0500, Hollis Blanchard wrote:
> On Tue, 2007-10-02 at 14:11 +1000, Rusty Russell wrote:
> > On Tue, 2007-10-02 at 01:19 +0000, Hollis Blanchard wrote:
> > > On Sun, 30 Sep 2007 15:56:16 +0200, Avi Kivity wrote:
> > > >
> > > > Eventually I'd like to see the code in arch/*/kvm. That's probably not
> > > > easily doable right now because modules cannot span directories, but
> > > > once that's solved, we'll do that as this is most consistent with the
> > > > rest of the kernel.
> > >
> > > What is the "spanning directories" issue? Can't I build
> > > arch/powerpc/kvm/kvm-powerpc.ko and drivers/kvm/kvm.ko and let modprobe
> > > sort out the dependency?
> >
> > Sure, but it creates a silly module.
>
> Isn't there precedent in other areas? What about cpufreq or ALSA? (I'm
> really asking; don't have time to investigate further right now.)
Hmm, cpufreq does do something like this, so I guess it's a fair call.
> > I think guest code belongs in arch/*/kvm/, but host code can be done as
> > subdirs under drivers/kvm/.
>
> Funny, I would say the opposite. The host code is where I'm mucking with
> deep architectural state like stealing the TLB out from under Linux. The
> guest state is all "what would a processor like this do?"
From my POV all platforms belong in arch/*/, and KVM just presents
another platform. But the implementation of KVM is as much kvm-specific
as arch-specific, so I can argue over that one.
Whatever way we go, grouping both host and guest support in the same dir
seems confusing (which is why lguest is moving to arch/i386/lguest/ for
guest and drivers/lguest/i386/ for host).
Cheers,
Rusty.
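Hollis's earlier suggestion in this subthread, building the arch part and the common part as two modules and letting modprobe sort out the dependency, could look roughly like this in kbuild. (The file and object names below are illustrative assumptions in the spirit of the discussion, not taken from any actual patch.)

```make
# Hypothetical kbuild sketch: the common core builds from drivers/kvm/,
# the arch-specific part from arch/powerpc/kvm/, as two separate modules.

# drivers/kvm/Makefile
obj-$(CONFIG_KVM)         += kvm.o
kvm-objs                  := kvm_main.o ioapic.o

# arch/powerpc/kvm/Makefile
obj-$(CONFIG_KVM_POWERPC) += kvm-powerpc.o
kvm-powerpc-objs          := powerpc.o emulate.o
```

Because kvm-powerpc.ko would reference symbols that kvm.ko exports (via EXPORT_SYMBOL_GPL), depmod records the dependency, and `modprobe kvm-powerpc` loads kvm.ko first — this is the "modprobe sorts it out" behavior, at the cost of the extra module Rusty calls silly.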
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <1191306576.6979.91.camel-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org>
@ 2007-10-02 11:43 ` Carsten Otte
0 siblings, 0 replies; 29+ messages in thread
From: Carsten Otte @ 2007-10-02 11:43 UTC (permalink / raw)
To: Rusty Russell
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Hollis Blanchard
Rusty Russell wrote:
> Whatever way we go, grouping both host and guest support in the same dir
> seems confusing (which is why lguest is moving to arch/i386/lguest/ for
> guest and drivers/lguest/i386/ for host).
That really is funny. Our s390 host is just the other way round:
arch/s390/sie for the host, and drivers/s390/sie for the guest. Maybe
lguest is upside down because it's from Australia? But at least the
argument for a clear separation between guest and host in the source
tree seems to be common sense.
Carsten
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <46FFAB00.4050103-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-09-30 15:01 ` Zhang, Xiantao
@ 2007-10-08 2:36 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC7AE225-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
1 sibling, 1 reply; 29+ messages in thread
From: Zhang, Xiantao @ 2007-10-08 2:36 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, virtualization
Avi Kivity wrote:
> Zhang, Xiantao wrote:
>> Avi Kivity wrote:
>>
>>> Zhang, Xiantao wrote:
>>>
>>>> Zhang, Xiantao wrote:
>>>>
>>>>
>>>>>> Hi Avi,
>>>>>> So you mean IA64 can adopt the similar method as well?
>>>>>>
>>>>>>
>>>>
>>>>> What method do you mean exactly?
>>>>>
>>>>>
>>>> Put all arch-specific files into arch/ia64/kvm as you described in
>>>> future KVM infrastructure.
>>>>
>>>>
>>>>> The powerpc people had some patches to make kvm_main arch
>>>>> independent. We should work on that base. To avoid a dependency on
>>>>> the x86 merge, we can start by working withing drivers/kvm/, for
>>>>> example creating drivers/kvm/x86.c and drivers/kvm/ia64.c. Later
>>>>> patches can move these to arch/*/.
>>>>>
>>>>>
>>>> It may work on x86 side. But for IA64, we have several source files
>>>> and assembly files to implement a VMM module, which contains the
>>>> virtualization logic of CPU, MMU and other platform devices. (In
>>>> KVM forum, Anthony had presented IA64/KVM architecture which is a
>>>> bit different with x86 side due to different approaches for
>>>> VT.).If we put all such these arch-specific files in one
>>>> directory, it looks very strange!
>>>>
>>>>
>>> ia64/ subdirectory is also fine.
>>>
>>
>> But even so , we have to split current code to be arch-independent,
>> and to support IA64 and other architectures.
>> So, why not add an more subdirectory x86 in drivers kvm to hold
>> X86-arch code?
>>
>
> Sure, that's not an issue.
Could you open a branch from the master tree for this work? We would be
very glad to contribute to it. :)
>> And it should also conform with with future infrastructure in Linux.
>> Maybe we can borrow the idea from UML code structure.
>> Do you think so ?
>
> Eventually I'd like to see the code in arch/*/kvm. That's probably
> not easily doable right now because modules cannot span directories,
> but once that's solved, we'll do that as this is most consistent with
> the rest of the kernel.
Agreed. Maybe we can investigate that issue at the same time.
Xiantao
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC7AE225-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-10-08 4:04 ` Hollis Blanchard
2007-10-08 4:16 ` [RFC] KVM Source layout Proposal to accommodate new CPU architecture Zhang, Xiantao
0 siblings, 1 reply; 29+ messages in thread
From: Hollis Blanchard @ 2007-10-08 4:04 UTC (permalink / raw)
To: Zhang, Xiantao
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Avi Kivity,
virtualization
On Mon, 2007-10-08 at 10:36 +0800, Zhang, Xiantao wrote:
> Avi Kivity wrote:
> > Zhang, Xiantao wrote:
> >> Avi Kivity wrote:
> >>
> >>> Zhang, Xiantao wrote:
> >>>
> >>>> Zhang, Xiantao wrote:
> >>>>
> >>>>
> >>>>>> Hi Avi,
> >>>>>> So you mean IA64 can adopt the similar method as well?
> >>>>>>
> >>>>>>
> >>>>
> >>>>> What method do you mean exactly?
> >>>>>
> >>>>>
> >>>> Put all arch-specific files into arch/ia64/kvm as you described in
> >>>> future KVM infrastructure.
> >>>>
> >>>>
> >>>>> The powerpc people had some patches to make kvm_main arch
> >>>>> independent. We should work on that base. To avoid a dependency on
> >>>>> the x86 merge, we can start by working withing drivers/kvm/, for
> >>>>> example creating drivers/kvm/x86.c and drivers/kvm/ia64.c. Later
> >>>>> patches can move these to arch/*/.
> >>>>>
> >>>>>
> >>>> It may work on x86 side. But for IA64, we have several source files
> >>>> and assembly files to implement a VMM module, which contains the
> >>>> virtualization logic of CPU, MMU and other platform devices. (In
> >>>> KVM forum, Anthony had presented IA64/KVM architecture which is a
> >>>> bit different with x86 side due to different approaches for
> >>>> VT.).If we put all such these arch-specific files in one
> >>>> directory, it looks very strange!
> >>>>
> >>>>
> >>> ia64/ subdirectory is also fine.
> >>>
> >>
> >> But even so , we have to split current code to be arch-independent,
> >> and to support IA64 and other architectures.
> >> So, why not add an more subdirectory x86 in drivers kvm to hold
> >> X86-arch code?
> >>
> >
> > Sure, that's not an issue.
>
> Could you help to open a branch from master tree for this work? We are
> very willing to contribute to it:)
Do you really need a new branch? Why not just submit patches?
--
Hollis Blanchard
IBM Linux Technology Center
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
2007-10-08 4:04 ` Hollis Blanchard
@ 2007-10-08 4:16 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC7AE2A8-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 29+ messages in thread
From: Zhang, Xiantao @ 2007-10-08 4:16 UTC (permalink / raw)
To: Hollis Blanchard
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Avi Kivity,
virtualization
Hollis Blanchard wrote:
> On Mon, 2007-10-08 at 10:36 +0800, Zhang, Xiantao wrote:
>> Avi Kivity wrote:
>>> Zhang, Xiantao wrote:
>>>> Avi Kivity wrote:
>>>>
>>>>> Zhang, Xiantao wrote:
>>>>>
>>>>>> Zhang, Xiantao wrote:
>>>>>>
>>>>>>
>>>>>>>> Hi Avi,
>>>>>>>> So you mean IA64 can adopt the similar method as well?
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>>> What method do you mean exactly?
>>>>>>>
>>>>>>>
>>>>>> Put all arch-specific files into arch/ia64/kvm as you described
>>>>>> in future KVM infrastructure.
>>>>>>
>>>>>>
>>>>>>> The powerpc people had some patches to make kvm_main arch
>>>>>>> independent. We should work on that base. To avoid a dependency
>>>>>>> on the x86 merge, we can start by working withing drivers/kvm/,
>>>>>>> for example creating drivers/kvm/x86.c and drivers/kvm/ia64.c.
>>>>>>> Later patches can move these to arch/*/.
>>>>>>>
>>>>>>>
>>>>>> It may work on x86 side. But for IA64, we have several source
>>>>>> files and assembly files to implement a VMM module, which
>>>>>> contains the virtualization logic of CPU, MMU and other platform
>>>>>> devices. (In KVM forum, Anthony had presented IA64/KVM
>>>>>> architecture which is a bit different with x86 side due to
>>>>>> different approaches for VT.).If we put all such these
>>>>>> arch-specific files in one directory, it looks very strange!
>>>>>>
>>>>>>
>>>>> ia64/ subdirectory is also fine.
>>>>>
>>>>
>>>> But even so , we have to split current code to be arch-independent,
>>>> and to support IA64 and other architectures.
>>>> So, why not add an more subdirectory x86 in drivers kvm to hold
>>>> X86-arch code?
>>>>
>>>
>>> Sure, that's not an issue.
>>
>> Could you help to open a branch from master tree for this work? We
>> are very willing to contribute to it:)
>
> Do you really need a new branch? Why not just submit patches?
Given the large changes to the current source structure, a new branch
would make the work easier and avoid impacting the existing quality of
KVM. But if you would prefer that we submit patches directly, we would
be glad to do it that way.
Xiantao
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC7AE2A8-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-10-08 9:57 ` Avi Kivity
[not found] ` <4709FEF1.6010006-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 29+ messages in thread
From: Avi Kivity @ 2007-10-08 9:57 UTC (permalink / raw)
To: Zhang, Xiantao
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Hollis Blanchard,
virtualization
Zhang, Xiantao wrote:
> Hollis Blanchard wrote:
>
>> On Mon, 2007-10-08 at 10:36 +0800, Zhang, Xiantao wrote:
>>
>>> Avi Kivity wrote:
>>>
>>>> Zhang, Xiantao wrote:
>>>>
>>>>> Avi Kivity wrote:
>>>>>
>>>>>
>>>>>> Zhang, Xiantao wrote:
>>>>>>
>>>>>>
>>>>>>> Zhang, Xiantao wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>>> Hi Avi,
>>>>>>>>> So you mean IA64 can adopt the similar method as well?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>> What method do you mean exactly?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>> Put all arch-specific files into arch/ia64/kvm as you described
>>>>>>> in future KVM infrastructure.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> The powerpc people had some patches to make kvm_main arch
>>>>>>>> independent. We should work on that base. To avoid a dependency
>>>>>>>> on the x86 merge, we can start by working withing drivers/kvm/,
>>>>>>>> for example creating drivers/kvm/x86.c and drivers/kvm/ia64.c.
>>>>>>>> Later patches can move these to arch/*/.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>> It may work on x86 side. But for IA64, we have several source
>>>>>>> files and assembly files to implement a VMM module, which
>>>>>>> contains the virtualization logic of CPU, MMU and other platform
>>>>>>> devices. (In KVM forum, Anthony had presented IA64/KVM
>>>>>>> architecture which is a bit different with x86 side due to
>>>>>>> different approaches for VT.).If we put all such these
>>>>>>> arch-specific files in one directory, it looks very strange!
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>> ia64/ subdirectory is also fine.
>>>>>>
>>>>>>
>>>>> But even so , we have to split current code to be arch-independent,
>>>>> and to support IA64 and other architectures.
>>>>> So, why not add an more subdirectory x86 in drivers kvm to hold
>>>>> X86-arch code?
>>>>>
>>>>>
>>>> Sure, that's not an issue.
>>>>
>>> Could you help to open a branch from master tree for this work? We
>>> are very willing to contribute to it:)
>>>
>> Do you really need a new branch? Why not just submit patches?
>>
>
> Due to big changes to current source structure, maybe a new branch would
> help to work, and doesn't
> impact existing quality of KVM. If it is convenient for you to submit
> patches directly, also we are glad to do in that way.
>
A branch with such large changes quickly becomes out-of-date, so it's
best to send patches.
--
Any sufficiently difficult bug is indistinguishable from a feature.
* Re: [RFC] KVM Source layout Proposal to accommodate new CPU architecture
[not found] ` <4709FEF1.6010006-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-10-09 1:10 ` Zhang, Xiantao
0 siblings, 0 replies; 29+ messages in thread
From: Zhang, Xiantao @ 2007-10-09 1:10 UTC (permalink / raw)
To: Avi Kivity
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Hollis Blanchard,
virtualization
Avi Kivity wrote:
> Zhang, Xiantao wrote:
>> Hollis Blanchard wrote:
>>
>>> On Mon, 2007-10-08 at 10:36 +0800, Zhang, Xiantao wrote:
>>>
>>>> Avi Kivity wrote:
>>>>
>>>>> Zhang, Xiantao wrote:
>>>>>
>>>>>> Avi Kivity wrote:
>>>>>>
>>>>>>
>>>>>>> Zhang, Xiantao wrote:
>>>>>>>
>>>>>>>
>>>>>>>> Zhang, Xiantao wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>>> Hi Avi,
>>>>>>>>>> So you mean IA64 can adopt the similar method as well?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> What method do you mean exactly?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>> Put all arch-specific files into arch/ia64/kvm as you
>>>>>>>> described in future KVM infrastructure.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> The powerpc people had some patches to make kvm_main arch
>>>>>>>>> independent. We should work on that base. To avoid a
>>>>>>>>> dependency on the x86 merge, we can start by working withing
>>>>>>>>> drivers/kvm/, for example creating drivers/kvm/x86.c and
>>>>>>>>> drivers/kvm/ia64.c. Later patches can move these to arch/*/.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>> It may work on x86 side. But for IA64, we have several source
>>>>>>>> files and assembly files to implement a VMM module, which
>>>>>>>> contains the virtualization logic of CPU, MMU and other
>>>>>>>> platform devices. (In KVM forum, Anthony had presented IA64/KVM
>>>>>>>> architecture which is a bit different with x86 side due to
>>>>>>>> different approaches for VT.).If we put all such these
>>>>>>>> arch-specific files in one directory, it looks very strange!
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>> ia64/ subdirectory is also fine.
>>>>>>>
>>>>>>>
>>>>>> But even so , we have to split current code to be
>>>>>> arch-independent, and to support IA64 and other architectures.
>>>>>> So, why not add an more subdirectory x86 in drivers kvm to hold
>>>>>> X86-arch code?
>>>>>>
>>>>>>
>>>>> Sure, that's not an issue.
>>>>>
>>>> Could you help to open a branch from master tree for this work? We
>>>> are very willing to contribute to it:)
>>>>
>>> Do you really need a new branch? Why not just submit patches?
>>>
>>
>> Due to big changes to current source structure, maybe a new branch
>> would help to work, and doesn't impact existing quality of KVM. If
>> it is convenient for you to submit patches directly, also we are
>> glad to do in that way.
>>
>
> A branch with such large changes quickly becomes out-of-date, so it's
> best to send patches.
Fine. I will send them out. :)
Thanks
Xiantao
end of thread, other threads:[~2007-10-09 1:10 UTC | newest]
Thread overview: 29+ messages
2007-09-26 8:33 [RFC] KVM Source layout Proposal to accommodate new CPU architecture Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC753A4E-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-09-26 8:44 ` Laurent Vivier
[not found] ` <46FA1BDA.2060003-6ktuUTfB/bM@public.gmane.org>
2007-09-26 9:38 ` Zhang, Xiantao
2007-09-27 9:18 ` Avi Kivity
[not found] ` <46FB7566.9030504-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-09-28 2:16 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC753E73-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-09-28 14:45 ` Avi Kivity
[not found] ` <46FD1392.1080905-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-09-28 15:28 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC754031-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-09-28 17:03 ` Avi Kivity
[not found] ` <46FD33F2.9090506-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-09-29 1:47 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC754076-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-09-30 10:52 ` Avi Kivity
[not found] ` <46FF7FF6.6090103-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-09-30 13:53 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC75421C-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-09-30 13:56 ` Avi Kivity
2007-10-02 1:19 ` Hollis Blanchard
2007-10-02 4:11 ` Rusty Russell
[not found] ` <1191298279.6979.50.camel-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org>
2007-10-02 6:01 ` Hollis Blanchard
2007-10-02 6:29 ` Rusty Russell
[not found] ` <1191306576.6979.91.camel-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org>
2007-10-02 11:43 ` Carsten Otte
[not found] ` <46FFAB00.4050103-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-09-30 15:01 ` Zhang, Xiantao
2007-10-08 2:36 ` Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC7AE225-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-10-08 4:04 ` Hollis Blanchard
2007-10-08 4:16 ` [RFC] KVM Source layout Proposal to accommodate new CPU architecture Zhang, Xiantao
[not found] ` <42DFA526FC41B1429CE7279EF83C6BDC7AE2A8-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-10-08 9:57 ` Avi Kivity
[not found] ` <4709FEF1.6010006-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-10-09 1:10 ` Zhang, Xiantao
2007-09-28 8:20 ` [RFC] KVM Source layout Proposal to accommodate new CPU architecture Carsten Otte
[not found] ` <46FCB954.50005-tA70FqPdS9bQT0dZR+AlfA@public.gmane.org>
2007-09-30 2:26 ` Zhang, Xiantao
2007-09-29 13:06 ` Rusty Russell
[not found] ` <1191071211.26950.28.camel-bi+AKbBUZKY6gyzm1THtWbp2dZbC/Bob@public.gmane.org>
2007-09-29 14:25 ` Sam Ravnborg
2007-09-30 2:26 ` [RFC] KVM Source layout Proposal to accommodate new " Zhang, Xiantao
[not found] <FD80ED6F62DC5E41910477505FA01BDFA62D00@pdsmsx415.ccr.corp.intel.com>
[not found] ` <FD80ED6F62DC5E41910477505FA01BDFA62D00-wq7ZOvIWXbMAbVU2wMM1CrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-09-26 8:58 ` [RFC] KVM Source layout Proposal to accommodate new " Zhang, Xiantao