qemu-devel.nongnu.org archive mirror
 help / color / mirror / Atom feed
* [Qemu-devel] [PATCH][RFC] SVM support
@ 2007-08-22 19:58 Alexander Graf
  2007-08-22 20:19 ` Blue Swirl
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Alexander Graf @ 2007-08-22 19:58 UTC (permalink / raw)
  To: qemu-devel

[-- Attachment #1: Type: text/plain, Size: 1379 bytes --]

Hi,

this patch adds support for SVM (the virtual machine extension on amd64)
to qemu's x86_64 target. It still needs cleanup (splitting, indentation,
etc) and lacks some basic functionality but maybe someone will find
interest in it as it is already.

In kvm real and protected modes work flawlessly as far as I can tell
(minix and 32-bit linux worked).
Long mode seems to work quite ok as well, though I am not able to get a
Linux kernel booted up (MenuetOS works).

What does work?

- VMRUN, VMLOAD, VMSAVE, VMEXIT, STGI, CLGI
- Event injection
- All interceptions (well, maybe I did oversee one or two)
- Context switching to the VM and back to the VMM

What is missing?

- According to the SVM specification NPTs are optional, so I did not
include them (yet)
- Everything related to device virtualisation
- The "Secure" part of the extension (would need TPM emulation for that)
- Debugging support (maybe it does work, I actually have never tried to
debug a kvm virtualised machine)
- I included a dirty hack to update EIP on every instruction.
- TSC_OFFSET
- ASID support
- Sanity checks
- Task switch and Ferr_Freeze Intercepts
- VMMCALL
- SMM support
- SVM-Lock

I hope this is useful to someone.
I am going to continue to refine this patch until it implements all of
the SVM specification.

Comments as well as patches are greatly appreciated.


Thanks,

Alexander Graf

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: svm.patch --]
[-- Type: text/x-patch; name="svm.patch", Size: 85247 bytes --]

Index: qemu/target-i386/helper2.c
===================================================================
--- qemu.orig/target-i386/helper2.c
+++ qemu/target-i386/helper2.c
@@ -27,8 +27,9 @@
 
 #include "cpu.h"
 #include "exec-all.h"
+#include "svm.h"
 
-//#define DEBUG_MMU
+// #define DEBUG_MMU
 
 #ifdef USE_CODE_COPY
 #include <asm/ldt.h>
@@ -111,6 +112,7 @@ CPUX86State *cpu_x86_init(void)
                                CPUID_CX8 | CPUID_PGE | CPUID_CMOV |
                                CPUID_PAT);
         env->pat = 0x0007040600070406ULL;
+        env->cpuid_ext3_features = CPUID_EXT3_SVM;
         env->cpuid_ext_features = CPUID_EXT_SSE3;
         env->cpuid_features |= CPUID_FXSR | CPUID_MMX | CPUID_SSE | CPUID_SSE2 | CPUID_PAE | CPUID_SEP;
         env->cpuid_features |= CPUID_APIC;
@@ -131,7 +133,7 @@ CPUX86State *cpu_x86_init(void)
         /* currently not enabled for std i386 because not fully tested */
         env->cpuid_ext2_features = (env->cpuid_features & 0x0183F3FF);
         env->cpuid_ext2_features |= CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX;
-        env->cpuid_xlevel = 0x80000008;
+        env->cpuid_xlevel = 0x8000000a;
 
         /* these features are needed for Win64 and aren't fully implemented */
         env->cpuid_features |= CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA;
@@ -160,6 +162,7 @@ void cpu_reset(CPUX86State *env)
 #ifdef CONFIG_SOFTMMU
     env->hflags |= HF_SOFTMMU_MASK;
 #endif
+    env->hflags |= HF_GIF_MASK;
 
     cpu_x86_update_cr0(env, 0x60000010);
     env->a20_mask = 0xffffffff;
@@ -639,10 +642,12 @@ int cpu_x86_handle_mmu_fault(CPUX86State
             pml4e = ldq_phys(pml4e_addr);
             if (!(pml4e & PG_PRESENT_MASK)) {
                 error_code = 0;
+		//printf("a");
                 goto do_fault;
             }
             if (!(env->efer & MSR_EFER_NXE) && (pml4e & PG_NX_MASK)) {
                 error_code = PG_ERROR_RSVD_MASK;
+		//printf("b");
                 goto do_fault;
             }
             if (!(pml4e & PG_ACCESSED_MASK)) {
@@ -655,10 +660,12 @@ int cpu_x86_handle_mmu_fault(CPUX86State
             pdpe = ldq_phys(pdpe_addr);
             if (!(pdpe & PG_PRESENT_MASK)) {
                 error_code = 0;
+		//printf("c(addr=%#llx, pdpe_addr=%#lx, pdpe=%#lx, a20_mask=%#lx)", addr, pdpe_addr, pdpe, env->a20_mask);
                 goto do_fault;
             }
             if (!(env->efer & MSR_EFER_NXE) && (pdpe & PG_NX_MASK)) {
                 error_code = PG_ERROR_RSVD_MASK;
+		//printf("d");
                 goto do_fault;
             }
             ptep &= pdpe ^ PG_NX_MASK;
@@ -675,6 +682,7 @@ int cpu_x86_handle_mmu_fault(CPUX86State
             pdpe = ldq_phys(pdpe_addr);
             if (!(pdpe & PG_PRESENT_MASK)) {
                 error_code = 0;
+		//printf("e");
                 goto do_fault;
             }
             ptep = PG_NX_MASK | PG_USER_MASK | PG_RW_MASK;
@@ -685,10 +693,12 @@ int cpu_x86_handle_mmu_fault(CPUX86State
         pde = ldq_phys(pde_addr);
         if (!(pde & PG_PRESENT_MASK)) {
             error_code = 0;
+            //printf("f(addr=%#llx, pdpe_addr=%#lx, pdpe=%#lx, pde_addr=%#lx, pde=%#lx, a20_mask=%#lx)", addr, pdpe_addr, pdpe, pde_addr, pde, env->a20_mask);
             goto do_fault;
         }
         if (!(env->efer & MSR_EFER_NXE) && (pde & PG_NX_MASK)) {
             error_code = PG_ERROR_RSVD_MASK;
+	    //printf("g");
             goto do_fault;
         }
         ptep &= pde ^ PG_NX_MASK;
@@ -728,11 +738,18 @@ int cpu_x86_handle_mmu_fault(CPUX86State
                 env->a20_mask;
             pte = ldq_phys(pte_addr);
             if (!(pte & PG_PRESENT_MASK)) {
+		target_ulong pte_addri;
                 error_code = 0;
+                pte_addri = ((pde & PHYS_ADDR_MASK) + (((addr >> 12) & 0x1ff))) & env->a20_mask;
+                //printf("h(addr=%#llx, pdpe_addr=%#lx, pdpe=%#lx, pde_addr=%#lx, pde=%#lx,pte_addr=%#lx, pte=%#lx, pte_addri=%#x, ptei=%#x, a20_mask=%#lx)", addr, pdpe_addr, pdpe, pde_addr, pde, pte_addr, pte, pte_addri, ldq_phys(pte_addri), env->a20_mask);
+/*		for(pte_addri = (pde & PHYS_ADDR_MASK); (pte_addri & PHYS_ADDR_MASK) == (pde & PHYS_ADDR_MASK); pte_addri += 8) {
+			//printf("%#llx - %#llx\n", pte_addri, ldq_phys(pte_addri));
+		}*/
                 goto do_fault;
             }
             if (!(env->efer & MSR_EFER_NXE) && (pte & PG_NX_MASK)) {
                 error_code = PG_ERROR_RSVD_MASK;
+	    	//printf("i");
                 goto do_fault;
             }
             /* combine pde and pte nx, user and rw protections */
@@ -770,6 +787,7 @@ int cpu_x86_handle_mmu_fault(CPUX86State
         pde = ldl_phys(pde_addr);
         if (!(pde & PG_PRESENT_MASK)) {
             error_code = 0;
+	    //printf("j");
             goto do_fault;
         }
         /* if PSE bit is set, then we use a 4MB page */
@@ -808,6 +826,7 @@ int cpu_x86_handle_mmu_fault(CPUX86State
             pte = ldl_phys(pte_addr);
             if (!(pte & PG_PRESENT_MASK)) {
                 error_code = 0;
+	    	//printf("k");
                 goto do_fault;
             }
             /* combine pde and pte user and rw protections */
@@ -863,7 +882,6 @@ int cpu_x86_handle_mmu_fault(CPUX86State
  do_fault_protect:
     error_code = PG_ERROR_P_MASK;
  do_fault:
-    env->cr[2] = addr;
     error_code |= (is_write << PG_ERROR_W_BIT);
     if (is_user)
         error_code |= PG_ERROR_U_MASK;
@@ -871,8 +889,15 @@ int cpu_x86_handle_mmu_fault(CPUX86State
         (env->efer & MSR_EFER_NXE) && 
         (env->cr[4] & CR4_PAE_MASK))
         error_code |= PG_ERROR_I_D_MASK;
+    if(INTERCEPTEDl(_exceptions, 1 << EXCP0E_PAGE)) {
+        stq_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_info_2), addr);
+    } else {
+        env->cr[2] = addr;
+    }
     env->error_code = error_code;
     env->exception_index = EXCP0E_PAGE;
+    if(env->hflags & HF_SVM_MASK) // the VMM will handle this
+        return 2;
     return 1;
 }
 
Index: qemu/target-i386/translate.c
===================================================================
--- qemu.orig/target-i386/translate.c
+++ qemu/target-i386/translate.c
@@ -1995,6 +1995,8 @@ static void gen_movl_seg_T0(DisasContext
     }
 }
 
+#define update_eip() gen_jmp_im(pc_start - s->cs_base)
+
 static inline void gen_stack_update(DisasContext *s, int addend)
 {
 #ifdef TARGET_X86_64
@@ -3154,6 +3156,7 @@ static target_ulong disas_insn(DisasCont
     target_ulong next_eip, tval;
     int rex_w, rex_r;
 
+    update_eip(); // FIXME: find a way to fetch EIP without updating it all the time
     s->pc = pc_start;
     prefixes = 0;
     aflag = s->code32;
@@ -4873,13 +4876,24 @@ static target_ulong disas_insn(DisasCont
             s->cc_op = CC_OP_SUBB + ot;
         }
         break;
+#ifdef TARGET_X86_64
+	
+#define SVM_CHECK_IO(x) gen_op_movq_T1_im64((s->pc - s->cs_base) << 32, s->pc - s->cs_base); gen_op_svm_check_intercept_io(x);
+#else
+#define SVM_CHECK_IO(x) gen_op_movq_T1_im(s->pc - s->cs_base); gen_op_svm_check_intercept_io(x);
+#endif
+#define SVM_IS_REP ((prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) ? 8 : 0)
     case 0x6c: /* insS */
     case 0x6d:
         if ((b & 1) == 0)
             ot = OT_BYTE;
         else
             ot = dflag ? OT_LONG : OT_WORD;
+	update_eip();
         gen_check_io(s, ot, 1, pc_start - s->cs_base);
+        gen_op_mov_TN_reg[OT_WORD][0][R_EDX]();
+        gen_op_andl_T0_ffff();
+	SVM_CHECK_IO(SVM_IOIO_TYPE_MASK | (1 << (4+ot)) | SVM_IS_REP | 4 | (1 << (7+s->aflag)));
         if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) {
             gen_repz_ins(s, ot, pc_start - s->cs_base, s->pc - s->cs_base);
         } else {
@@ -4893,6 +4907,9 @@ static target_ulong disas_insn(DisasCont
         else
             ot = dflag ? OT_LONG : OT_WORD;
         gen_check_io(s, ot, 1, pc_start - s->cs_base);
+        gen_op_mov_TN_reg[OT_WORD][0][R_EDX]();
+        gen_op_andl_T0_ffff();
+	SVM_CHECK_IO((1 << (4+ot)) | SVM_IS_REP | 4 | (1 << (7+s->aflag)));
         if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) {
             gen_repz_outs(s, ot, pc_start - s->cs_base, s->pc - s->cs_base);
         } else {
@@ -4902,15 +4919,18 @@ static target_ulong disas_insn(DisasCont
 
         /************************/
         /* port I/O */
+
     case 0xe4:
     case 0xe5:
         if ((b & 1) == 0)
             ot = OT_BYTE;
         else
             ot = dflag ? OT_LONG : OT_WORD;
+	update_eip();
         val = ldub_code(s->pc++);
         gen_op_movl_T0_im(val);
         gen_check_io(s, ot, 0, pc_start - s->cs_base);
+	SVM_CHECK_IO(SVM_IOIO_TYPE_MASK | SVM_IS_REP | (1 << (4+ot)));
         gen_op_in[ot]();
         gen_op_mov_reg_T1[ot][R_EAX]();
         break;
@@ -4920,9 +4940,11 @@ static target_ulong disas_insn(DisasCont
             ot = OT_BYTE;
         else
             ot = dflag ? OT_LONG : OT_WORD;
+	update_eip();
         val = ldub_code(s->pc++);
         gen_op_movl_T0_im(val);
         gen_check_io(s, ot, 0, pc_start - s->cs_base);
+	SVM_CHECK_IO(SVM_IS_REP | (1 << (4+ot)));
         gen_op_mov_TN_reg[ot][1][R_EAX]();
         gen_op_out[ot]();
         break;
@@ -4934,7 +4956,9 @@ static target_ulong disas_insn(DisasCont
             ot = dflag ? OT_LONG : OT_WORD;
         gen_op_mov_TN_reg[OT_WORD][0][R_EDX]();
         gen_op_andl_T0_ffff();
+	update_eip();
         gen_check_io(s, ot, 0, pc_start - s->cs_base);
+	SVM_CHECK_IO(SVM_IOIO_TYPE_MASK | SVM_IS_REP | (1 << (4+ot)));
         gen_op_in[ot]();
         gen_op_mov_reg_T1[ot][R_EAX]();
         break;
@@ -4946,7 +4970,9 @@ static target_ulong disas_insn(DisasCont
             ot = dflag ? OT_LONG : OT_WORD;
         gen_op_mov_TN_reg[OT_WORD][0][R_EDX]();
         gen_op_andl_T0_ffff();
+	update_eip();
         gen_check_io(s, ot, 0, pc_start - s->cs_base);
+	SVM_CHECK_IO(SVM_IS_REP | (1 << (4+ot)));
         gen_op_mov_TN_reg[ot][1][R_EAX]();
         gen_op_out[ot]();
         break;
@@ -5004,6 +5030,8 @@ static target_ulong disas_insn(DisasCont
         val = 0;
         goto do_lret;
     case 0xcf: /* iret */
+	update_eip();
+        gen_op_svm_check_intercept(SVM_EXIT_IRET);
         if (!s->pe) {
             /* real mode */
             gen_op_iret_real(s->dflag);
@@ -5125,6 +5153,8 @@ static target_ulong disas_insn(DisasCont
         /************************/
         /* flags */
     case 0x9c: /* pushf */
+	update_eip();
+        gen_op_svm_check_intercept(SVM_EXIT_PUSHF);
         if (s->vm86 && s->iopl != 3) {
             gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
         } else {
@@ -5135,6 +5165,8 @@ static target_ulong disas_insn(DisasCont
         }
         break;
     case 0x9d: /* popf */
+	update_eip();
+        gen_op_svm_check_intercept(SVM_EXIT_POPF);
         if (s->vm86 && s->iopl != 3) {
             gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
         } else {
@@ -5344,6 +5376,10 @@ static target_ulong disas_insn(DisasCont
         /* XXX: correct lock test for all insn */
         if (prefixes & PREFIX_LOCK)
             goto illegal_op;
+        if (prefixes & PREFIX_REPZ) {
+	    update_eip();
+            gen_op_svm_check_intercept(SVM_EXIT_INVD);
+	}
         break;
     case 0x9b: /* fwait */
         if ((s->flags & (HF_MP_MASK | HF_TS_MASK)) == 
@@ -5357,9 +5393,13 @@ static target_ulong disas_insn(DisasCont
         }
         break;
     case 0xcc: /* int3 */
+        update_eip();
+        gen_op_svm_check_intercept(SVM_EXIT_SWINT);
         gen_interrupt(s, EXCP03_INT3, pc_start - s->cs_base, s->pc - s->cs_base);
         break;
     case 0xcd: /* int N */
+	update_eip();
+        gen_op_svm_check_intercept(SVM_EXIT_SWINT);
         val = ldub_code(s->pc++);
         if (s->vm86 && s->iopl != 3) {
             gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base); 
@@ -5370,12 +5410,16 @@ static target_ulong disas_insn(DisasCont
     case 0xce: /* into */
         if (CODE64(s))
             goto illegal_op;
+        update_eip();
+        gen_op_svm_check_intercept(SVM_EXIT_SWINT);
         if (s->cc_op != CC_OP_DYNAMIC)
             gen_op_set_cc_op(s->cc_op);
         gen_jmp_im(pc_start - s->cs_base);
         gen_op_into(s->pc - pc_start);
         break;
     case 0xf1: /* icebp (undocumented, exits to external debugger) */
+	update_eip();
+        gen_op_svm_check_intercept(SVM_EXIT_ICEBP);
 #if 1
         gen_debug(s, pc_start - s->cs_base);
 #else
@@ -5503,6 +5547,7 @@ static target_ulong disas_insn(DisasCont
         if (s->cpl != 0) {
             gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
         } else {
+	    update_eip();
             if (b & 2)
                 gen_op_rdmsr();
             else
@@ -5580,6 +5625,8 @@ static target_ulong disas_insn(DisasCont
         } else {
             if (s->cc_op != CC_OP_DYNAMIC)
                 gen_op_set_cc_op(s->cc_op);
+	    update_eip();
+            gen_op_svm_check_intercept(SVM_EXIT_HLT);
             gen_jmp_im(s->pc - s->cs_base);
             gen_op_hlt();
             s->is_jmp = 3;
@@ -5589,10 +5636,12 @@ static target_ulong disas_insn(DisasCont
         modrm = ldub_code(s->pc++);
         mod = (modrm >> 6) & 3;
         op = (modrm >> 3) & 7;
+	update_eip();
         switch(op) {
         case 0: /* sldt */
             if (!s->pe || s->vm86)
                 goto illegal_op;
+            gen_op_svm_check_intercept(SVM_EXIT_LDTR_READ);
             gen_op_movl_T0_env(offsetof(CPUX86State,ldt.selector));
             ot = OT_WORD;
             if (mod == 3)
@@ -5606,6 +5655,7 @@ static target_ulong disas_insn(DisasCont
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_ldst_modrm(s, modrm, OT_WORD, OR_TMP0, 0);
+                gen_op_svm_check_intercept(SVM_EXIT_LDTR_WRITE);
                 gen_jmp_im(pc_start - s->cs_base);
                 gen_op_lldt_T0();
             }
@@ -5613,6 +5663,7 @@ static target_ulong disas_insn(DisasCont
         case 1: /* str */
             if (!s->pe || s->vm86)
                 goto illegal_op;
+            gen_op_svm_check_intercept(SVM_EXIT_TR_READ);
             gen_op_movl_T0_env(offsetof(CPUX86State,tr.selector));
             ot = OT_WORD;
             if (mod == 3)
@@ -5626,6 +5677,7 @@ static target_ulong disas_insn(DisasCont
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_ldst_modrm(s, modrm, OT_WORD, OR_TMP0, 0);
+                gen_op_svm_check_intercept(SVM_EXIT_TR_WRITE);
                 gen_jmp_im(pc_start - s->cs_base);
                 gen_op_ltr_T0();
             }
@@ -5652,11 +5704,13 @@ static target_ulong disas_insn(DisasCont
         mod = (modrm >> 6) & 3;
         op = (modrm >> 3) & 7;
         rm = modrm & 7;
+	update_eip();
         switch(op) {
         case 0: /* sgdt */
             if (mod == 3)
                 goto illegal_op;
             gen_lea_modrm(s, modrm, &reg_addr, &offset_addr);
+            gen_op_svm_check_intercept(SVM_EXIT_GDTR_READ);
             gen_op_movl_T0_env(offsetof(CPUX86State, gdt.limit));
             gen_op_st_T0_A0[OT_WORD + s->mem_index]();
             gen_add_A0_im(s, 2);
@@ -5672,7 +5726,8 @@ static target_ulong disas_insn(DisasCont
                     if (!(s->cpuid_ext_features & CPUID_EXT_MONITOR) ||
                         s->cpl != 0)
                         goto illegal_op;
-                    gen_jmp_im(pc_start - s->cs_base);
+                    // gen_jmp_im(pc_start - s->cs_base);
+                    gen_op_svm_check_intercept(SVM_EXIT_MONITOR);
 #ifdef TARGET_X86_64
                     if (s->aflag == 2) {
                         gen_op_movq_A0_reg[R_EBX]();
@@ -5696,7 +5751,8 @@ static target_ulong disas_insn(DisasCont
                         gen_op_set_cc_op(s->cc_op);
                         s->cc_op = CC_OP_DYNAMIC;
                     }
-                    gen_jmp_im(s->pc - s->cs_base);
+                    // gen_jmp_im(s->pc - s->cs_base);
+                    gen_op_svm_check_intercept(SVM_EXIT_MWAIT);
                     gen_op_mwait();
                     gen_eob(s);
                     break;
@@ -5705,6 +5761,7 @@ static target_ulong disas_insn(DisasCont
                 }
             } else { /* sidt */
                 gen_lea_modrm(s, modrm, &reg_addr, &offset_addr);
+		gen_op_svm_check_intercept(SVM_EXIT_IDTR_READ);
                 gen_op_movl_T0_env(offsetof(CPUX86State, idt.limit));
                 gen_op_st_T0_A0[OT_WORD + s->mem_index]();
                 gen_add_A0_im(s, 2);
@@ -5716,9 +5773,46 @@ static target_ulong disas_insn(DisasCont
             break;
         case 2: /* lgdt */
         case 3: /* lidt */
-            if (mod == 3)
-                goto illegal_op;
-            if (s->cpl != 0) {
+            if (mod == 3) {
+		switch(rm) {
+		case 0: /* VMRUN */
+                    gen_op_svm_check_intercept(SVM_EXIT_VMRUN);
+		    gen_op_vmrun();
+		    gen_eob(s); /* We probably will have to set EIP in here */
+		    break;
+		case 1: /* VMMCALL */
+                    gen_op_svm_check_intercept(SVM_EXIT_VMMCALL);
+		    // FIXME: cause #UD if hflags & SVM
+		    gen_op_vmmcall();
+		    break;
+		case 2: /* VMLOAD */
+                    gen_op_svm_check_intercept(SVM_EXIT_VMLOAD);
+		    gen_op_vmload();
+		    break;
+		case 3: /* VMSAVE */
+                    gen_op_svm_check_intercept(SVM_EXIT_VMSAVE);
+		    gen_op_vmsave();
+		    break;
+		case 4: /* STGI */
+                    gen_op_svm_check_intercept(SVM_EXIT_STGI);
+		    gen_op_stgi();
+		    break;
+		case 5: /* CLGI */
+                    gen_op_svm_check_intercept(SVM_EXIT_CLGI);
+		    gen_op_clgi();
+		    break;
+		case 6: /* SKINIT */
+                    gen_op_svm_check_intercept(SVM_EXIT_SKINIT);
+		    gen_op_skinit();
+		    break;
+		case 7: /* INVLPGA */
+                    gen_op_svm_check_intercept(SVM_EXIT_INVLPGA);
+		    gen_op_invlpga();
+		    break;
+		default:
+		    goto illegal_op;
+		}
+	    } else if (s->cpl != 0) {
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_lea_modrm(s, modrm, &reg_addr, &offset_addr);
@@ -5728,9 +5822,11 @@ static target_ulong disas_insn(DisasCont
                 if (!s->dflag)
                     gen_op_andl_T0_im(0xffffff);
                 if (op == 2) {
+		    gen_op_svm_check_intercept(SVM_EXIT_GDTR_WRITE);
                     gen_op_movtl_env_T0(offsetof(CPUX86State,gdt.base));
                     gen_op_movl_env_T1(offsetof(CPUX86State,gdt.limit));
                 } else {
+		    gen_op_svm_check_intercept(SVM_EXIT_IDTR_WRITE);
                     gen_op_movtl_env_T0(offsetof(CPUX86State,idt.base));
                     gen_op_movl_env_T1(offsetof(CPUX86State,idt.limit));
                 }
@@ -5754,6 +5850,7 @@ static target_ulong disas_insn(DisasCont
             if (s->cpl != 0) {
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
+                gen_op_svm_check_intercept(SVM_EXIT_INVLPG);
                 if (mod == 3) {
 #ifdef TARGET_X86_64
                     if (CODE64(s) && rm == 0) {
@@ -5784,6 +5881,8 @@ static target_ulong disas_insn(DisasCont
         if (s->cpl != 0) {
             gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
         } else {
+	    update_eip();
+            gen_op_svm_check_intercept(SVM_EXIT_INVD);
             /* nothing to do */
         }
         break;
@@ -5914,7 +6013,8 @@ static target_ulong disas_insn(DisasCont
                         gen_op_movtl_T0_cr8();
                     else
 #endif
-                        gen_op_movtl_T0_env(offsetof(CPUX86State,cr[reg]));
+//                        gen_op_movtl_T0_env(offsetof(CPUX86State,cr[reg]));
+		        gen_op_movtl_T0_crN(reg);
                     gen_op_mov_reg_T0[ot][rm]();
                 }
                 break;
@@ -6046,6 +6146,8 @@ static target_ulong disas_insn(DisasCont
         /* ignore for now */
         break;
     case 0x1aa: /* rsm */
+	update_eip();
+        gen_op_svm_check_intercept(SVM_EXIT_RSM);
         if (!(s->flags & HF_SMM_MASK))
             goto illegal_op;
         if (s->cc_op != CC_OP_DYNAMIC) {
Index: qemu/target-i386/cpu.h
===================================================================
--- qemu.orig/target-i386/cpu.h
+++ qemu/target-i386/cpu.h
@@ -46,6 +46,8 @@
 
 #include "softfloat.h"
 
+#include "svm.h"
+
 #if defined(__i386__) && !defined(CONFIG_SOFTMMU) && !defined(__APPLE__)
 #define USE_CODE_COPY
 #endif
@@ -84,6 +86,7 @@
 #define DESC_AVL_MASK   (1 << 20)
 #define DESC_P_MASK     (1 << 15)
 #define DESC_DPL_SHIFT  13
+#define DESC_DPL_MASK   (1 << DESC_DPL_SHIFT)
 #define DESC_S_MASK     (1 << 12)
 #define DESC_TYPE_SHIFT 8
 #define DESC_A_MASK     (1 << 8)
@@ -149,6 +152,9 @@
 #define HF_VM_SHIFT         17 /* must be same as eflags */
 #define HF_HALTED_SHIFT     18 /* CPU halted */
 #define HF_SMM_SHIFT        19 /* CPU in SMM mode */
+#define HF_SVM_SHIFT        20 /* CPU in SVM mode */
+#define HF_GIF_SHIFT        21 /* if set CPU takes interrupts */
+#define HF_HIF_SHIFT        22 /* shadow copy of IF_MASK when in SVM */
 
 #define HF_CPL_MASK          (3 << HF_CPL_SHIFT)
 #define HF_SOFTMMU_MASK      (1 << HF_SOFTMMU_SHIFT)
@@ -166,6 +172,9 @@
 #define HF_OSFXSR_MASK       (1 << HF_OSFXSR_SHIFT)
 #define HF_HALTED_MASK       (1 << HF_HALTED_SHIFT)
 #define HF_SMM_MASK          (1 << HF_SMM_SHIFT)
+#define HF_SVM_MASK          (1 << HF_SVM_SHIFT)
+#define HF_GIF_MASK          (1 << HF_GIF_SHIFT)
+#define HF_HIF_MASK          (1 << HF_HIF_SHIFT)
 
 #define CR0_PE_MASK  (1 << 0)
 #define CR0_MP_MASK  (1 << 1)
@@ -249,6 +258,8 @@
 #define MSR_GSBASE                      0xc0000101
 #define MSR_KERNELGSBASE                0xc0000102
 
+#define MSR_VM_HSAVE_PA                 0xc0010117
+
 /* cpuid_features bits */
 #define CPUID_FP87 (1 << 0)
 #define CPUID_VME  (1 << 1)
@@ -283,6 +294,8 @@
 #define CPUID_EXT2_FFXSR   (1 << 25)
 #define CPUID_EXT2_LM      (1 << 29)
 
+#define CPUID_EXT3_SVM     (1 << 2)
+
 #define EXCP00_DIVZ	0
 #define EXCP01_SSTP	1
 #define EXCP02_NMI	2
@@ -489,6 +502,16 @@ typedef struct CPUX86State {
     uint32_t sysenter_eip;
     uint64_t efer;
     uint64_t star;
+
+    struct CPUX86State *vm_hsave; // FIXME: should point to target memory
+    target_phys_addr_t vm_vmcb;
+    uint64_t intercept;
+    uint16_t intercept_cr_read;
+    uint16_t intercept_cr_write;
+    uint16_t intercept_dr_read;
+    uint16_t intercept_dr_write;
+    uint32_t intercept_exceptions;
+
 #ifdef TARGET_X86_64
     target_ulong lstar;
     target_ulong cstar;
Index: qemu/target-i386/op.c
===================================================================
--- qemu.orig/target-i386/op.c
+++ qemu/target-i386/op.c
@@ -21,6 +21,9 @@
 #define ASM_SOFTMMU
 #include "exec.h"
 
+void svm_check_intercept_param(unsigned int type, uint32_t param);
+void svm_check_intercept(unsigned int type);
+
 /* n must be a constant to be efficient */
 static inline target_long lshift(target_long x, int n)
 {
@@ -945,11 +948,13 @@ void op_addq_ESP_im(void)
 
 void OPPROTO op_rdtsc(void)
 {
+    svm_check_intercept(SVM_EXIT_RDTSC);
     helper_rdtsc();
 }
 
 void OPPROTO op_cpuid(void)
 {
+    svm_check_intercept(SVM_EXIT_CPUID);
     helper_cpuid();
 }
 
@@ -989,11 +994,13 @@ void OPPROTO op_sysret(void)
 
 void OPPROTO op_rdmsr(void)
 {
+    svm_check_intercept_param(SVM_EXIT_MSR, 0);
     helper_rdmsr();
 }
 
 void OPPROTO op_wrmsr(void)
 {
+    svm_check_intercept_param(SVM_EXIT_MSR, 1);
     helper_wrmsr();
 }
 
@@ -1243,6 +1250,34 @@ void OPPROTO op_movl_crN_T0(void)
     helper_movl_crN_T0(PARAM1);
 }
 
+void OPPROTO op_movtl_T0_crN(void)
+{
+    if (INTERCEPTEDw(_cr_read, (1 << PARAM1))) {
+	vmexit(SVM_EXIT_READ_CR0 + PARAM1, 0);
+    } else {
+        T0 = env->cr[PARAM1];
+    }
+}
+
+// this pseudo-opcode checks for opcode intercepts
+void OPPROTO op_svm_check_intercept(void)
+{
+    svm_check_intercept(PARAM1);
+}
+
+// this pseudo-opcode checks for IO intercepts
+void OPPROTO op_svm_check_intercept_io(void)
+{
+    // PARAM1 = TYPE (0 = OUT, 1 = IN; 4 = STRING; 8 = REP)
+    // T0     = PORT
+    // T1     = next eip
+    if(env->hflags & HF_SVM_MASK) {
+        stq_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_info_2), T1);
+	// ASIZE does not appear on real hw
+        svm_check_intercept_param(SVM_EXIT_IOIO, (PARAM1 & ~SVM_IOIO_ASIZE_MASK) | ((T0 & 0xffff) << 16));
+    }
+}
+
 #if !defined(CONFIG_USER_ONLY) 
 void OPPROTO op_movtl_T0_cr8(void)
 {
@@ -1253,7 +1288,11 @@ void OPPROTO op_movtl_T0_cr8(void)
 /* DR registers access */
 void OPPROTO op_movl_drN_T0(void)
 {
-    helper_movl_drN_T0(PARAM1);
+    if (INTERCEPTEDw(_dr_read, (1 << PARAM1))) {
+	vmexit(SVM_EXIT_READ_DR0 + PARAM1, 0);
+    } else {
+        helper_movl_drN_T0(PARAM1);
+    }
 }
 
 void OPPROTO op_lmsw_T0(void)
@@ -1306,8 +1345,12 @@ void OPPROTO op_movtl_env_T1(void)
 
 void OPPROTO op_clts(void)
 {
-    env->cr[0] &= ~CR0_TS_MASK;
-    env->hflags &= ~HF_TS_MASK;
+    if (INTERCEPTEDw(_cr_write, INTERCEPT_CR0_MASK)) {
+	vmexit(SVM_EXIT_WRITE_CR0, 0);
+    } else {
+        env->cr[0] &= ~CR0_TS_MASK;
+        env->hflags &= ~HF_TS_MASK;
+    }
 }
 
 /* flags handling */
@@ -2447,3 +2490,45 @@ void OPPROTO op_emms(void)
 
 #define SHIFT 1
 #include "ops_sse.h"
+
+/* Secure Virtual Machine ops */
+
+void OPPROTO op_vmrun(void)
+{
+    helper_vmrun(EAX);
+}
+
+void OPPROTO op_vmmcall(void)
+{
+    helper_vmmcall();
+}
+
+void OPPROTO op_vmload(void)
+{
+    helper_vmload(EAX);
+}
+
+void OPPROTO op_vmsave(void)
+{
+    helper_vmsave(EAX);
+}
+
+void OPPROTO op_stgi(void)
+{
+    helper_stgi();
+}
+
+void OPPROTO op_clgi(void)
+{
+    helper_clgi();
+}
+
+void OPPROTO op_skinit(void)
+{
+    helper_skinit();
+}
+
+void OPPROTO op_invlpga(void)
+{
+    helper_invlpga();
+}
Index: qemu/target-i386/helper.c
===================================================================
--- qemu.orig/target-i386/helper.c
+++ qemu/target-i386/helper.c
@@ -173,8 +173,9 @@ static inline void get_ss_esp_from_tss(u
     }
 #endif
 
-    if (!(env->tr.flags & DESC_P_MASK))
+    if (!(env->tr.flags & DESC_P_MASK)) {
         cpu_abort(env, "invalid tss");
+    }
     type = (env->tr.flags >> DESC_TYPE_SHIFT) & 0xf;
     if ((type & 7) != 1)
         cpu_abort(env, "invalid tss type");
@@ -594,7 +595,23 @@ static void do_interrupt_protected(int i
     int has_error_code, new_stack, shift;
     uint32_t e1, e2, offset, ss, esp, ss_e1, ss_e2;
     uint32_t old_eip, sp_mask;
+    int svm_should_check = 1;
 
+//    if(is_int) if(svm_check_intercept(SVM_EXIT_INTR)) return;
+    if(env->hflags & HF_SVM_MASK && !is_int && next_eip==-1) {
+	next_eip = EIP;
+        svm_should_check = 0;
+	printf("T");
+    }
+    
+    if(env->hflags & HF_SVM_MASK
+        && svm_should_check
+	&& (INTERCEPTEDl(_exceptions, 1 << intno)
+	&& !is_int)) {
+	// FIXME: this is not complete yet
+	printf("p");
+	raise_interrupt(intno, is_int, error_code, env->eip - next_eip);
+    } else {
     has_error_code = 0;
     if (!is_int && !is_hw) {
         switch(intno) {
@@ -725,7 +742,7 @@ static void do_interrupt_protected(int i
         push_size += 8;
     push_size <<= shift;
 #endif
-    if (shift == 1) {
+      if (shift == 1) {
         if (new_stack) {
             if (env->eflags & VM_MASK) {
                 PUSHL(ssp, esp, sp_mask, env->segs[R_GS].selector);
@@ -742,7 +759,7 @@ static void do_interrupt_protected(int i
         if (has_error_code) {
             PUSHL(ssp, esp, sp_mask, error_code);
         }
-    } else {
+      } else {
         if (new_stack) {
             if (env->eflags & VM_MASK) {
                 PUSHW(ssp, esp, sp_mask, env->segs[R_GS].selector);
@@ -759,9 +776,9 @@ static void do_interrupt_protected(int i
         if (has_error_code) {
             PUSHW(ssp, esp, sp_mask, error_code);
         }
-    }
-    
-    if (new_stack) {
+      }
+      
+      if (new_stack) {
         if (env->eflags & VM_MASK) {
             cpu_x86_load_seg_cache(env, R_ES, 0, 0, 0, 0);
             cpu_x86_load_seg_cache(env, R_DS, 0, 0, 0, 0);
@@ -771,16 +788,18 @@ static void do_interrupt_protected(int i
         ss = (ss & ~3) | dpl;
         cpu_x86_load_seg_cache(env, R_SS, ss, 
                                ssp, get_seg_limit(ss_e1, ss_e2), ss_e2);
-    }
-    SET_ESP(esp, sp_mask);
+      }
+      SET_ESP(esp, sp_mask);
 
-    selector = (selector & ~3) | dpl;
-    cpu_x86_load_seg_cache(env, R_CS, selector, 
+      selector = (selector & ~3) | dpl;
+      cpu_x86_load_seg_cache(env, R_CS, selector, 
                    get_seg_base(e1, e2),
                    get_seg_limit(e1, e2),
                    e2);
-    cpu_x86_set_cpl(env, dpl);
-    env->eip = offset;
+      cpu_x86_set_cpl(env, dpl);
+      env->eip = offset;
+
+    }
 
     /* interrupt gate clear IF mask */
     if ((type & 1) == 0) {
@@ -812,8 +831,9 @@ static inline target_ulong get_rsp_from_
            env->tr.base, env->tr.limit);
 #endif
 
-    if (!(env->tr.flags & DESC_P_MASK))
+    if (!(env->tr.flags & DESC_P_MASK)) {
         cpu_abort(env, "invalid tss");
+    }
     index = 8 * level + 4;
     if ((index + 7) > env->tr.limit)
         raise_exception_err(EXCP0A_TSS, env->tr.selector & 0xfffc);
@@ -830,7 +850,22 @@ static void do_interrupt64(int intno, in
     int has_error_code, new_stack;
     uint32_t e1, e2, e3, ss;
     target_ulong old_eip, esp, offset;
+    int svm_should_check = 1;
 
+    if(env->hflags & HF_SVM_MASK && !is_int && next_eip==-1) {
+	next_eip = EIP;
+        svm_should_check = 0;
+	printf("S");
+    }
+    if(env->hflags & HF_SVM_MASK
+        && svm_should_check
+	&& INTERCEPTEDl(_exceptions, 1 << intno)
+	&& !is_int) {
+	printf("6");
+        if (loglevel & CPU_LOG_TB_IN_ASM)
+	    fprintf(logfile, "\n64 bit interrupt\n");
+	raise_interrupt(intno, is_int, error_code, 0); // env->eip - next_eip);
+    } else {
     has_error_code = 0;
     if (!is_int && !is_hw) {
         switch(intno) {
@@ -941,6 +976,7 @@ static void do_interrupt64(int intno, in
     cpu_x86_set_cpl(env, dpl);
     env->eip = offset;
 
+    }
     /* interrupt gate clear IF mask */
     if ((type & 1) == 0) {
         env->eflags &= ~IF_MASK;
@@ -1077,31 +1113,47 @@ static void do_interrupt_real(int intno,
     int selector;
     uint32_t offset, esp;
     uint32_t old_cs, old_eip;
+    int svm_should_check = 1;
 
-    /* real mode (simpler !) */
-    dt = &env->idt;
-    if (intno * 4 + 3 > dt->limit)
-        raise_exception_err(EXCP0D_GPF, intno * 8 + 2);
-    ptr = dt->base + intno * 4;
-    offset = lduw_kernel(ptr);
-    selector = lduw_kernel(ptr + 2);
-    esp = ESP;
-    ssp = env->segs[R_SS].base;
-    if (is_int)
-        old_eip = next_eip;
-    else
-        old_eip = env->eip;
-    old_cs = env->segs[R_CS].selector;
-    /* XXX: use SS segment size ? */
-    PUSHW(ssp, esp, 0xffff, compute_eflags());
-    PUSHW(ssp, esp, 0xffff, old_cs);
-    PUSHW(ssp, esp, 0xffff, old_eip);
-    
-    /* update processor state */
-    ESP = (ESP & ~0xffff) | (esp & 0xffff);
-    env->eip = offset;
-    env->segs[R_CS].selector = selector;
-    env->segs[R_CS].base = (selector << 4);
+//    if(is_int) if(svm_check_intercept(SVM_EXIT_INTR)) return;
+    if(env->hflags & HF_SVM_MASK && !is_int && next_eip==-1) {
+	next_eip = EIP;
+        svm_should_check = 0;
+	printf("U");
+    }
+    if(env->hflags & HF_SVM_MASK
+        && svm_should_check
+	&& INTERCEPTEDl(_exceptions, 1 << intno)
+	&& !is_int) {
+	// FIXME: this is not complete yet
+	printf("r");
+	raise_interrupt(intno, is_int, error_code, 0); // env->eip - next_eip);
+    } else {
+        /* real mode (simpler !) */
+        dt = &env->idt;
+        if (intno * 4 + 3 > dt->limit)
+            raise_exception_err(EXCP0D_GPF, intno * 8 + 2);
+         ptr = dt->base + intno * 4;
+         offset = lduw_kernel(ptr);
+         selector = lduw_kernel(ptr + 2);
+         esp = ESP;
+         ssp = env->segs[R_SS].base;
+         if (is_int)
+             old_eip = next_eip;
+         else
+             old_eip = env->eip;
+         old_cs = env->segs[R_CS].selector;
+        /* XXX: use SS segment size ? */
+        PUSHW(ssp, esp, 0xffff, compute_eflags());
+        PUSHW(ssp, esp, 0xffff, old_cs);
+        PUSHW(ssp, esp, 0xffff, old_eip);
+        
+        /* update processor state */
+        ESP = (ESP & ~0xffff) | (esp & 0xffff);
+        env->eip = offset;
+        env->segs[R_CS].selector = selector;
+        env->segs[R_CS].base = (selector << 4);
+    }
     env->eflags &= ~(IF_MASK | TF_MASK | AC_MASK | RF_MASK);
 }
 
@@ -1170,6 +1222,7 @@ void do_interrupt(int intno, int is_int,
             count++;
         }
     }
+//    printf("R"); fflush(NULL);
     if (env->cr[0] & CR0_PE_MASK) {
 #if TARGET_X86_64
         if (env->hflags & HF_LMA_MASK) {
@@ -1198,11 +1251,12 @@ int check_exception(int intno, int *erro
                                (intno >= 10 && intno <= 13);
 
     if (loglevel & CPU_LOG_INT)
-        fprintf(logfile, "check_exception old: %x new %x\n",
+        fprintf(logfile, "%c check_exception old: %x new %x\n", env->hflags & HF_SVM_MASK ? 'S' : 'n',
                 env->old_exception, intno);
 
-    if (env->old_exception == EXCP08_DBLE)
+    if (env->old_exception == EXCP08_DBLE) {
         cpu_abort(env, "triple fault");
+    }
 
     if ((first_contributory && second_contributory)
         || (env->old_exception == EXCP0E_PAGE &&
@@ -1227,6 +1281,28 @@ int check_exception(int intno, int *erro
 void raise_interrupt(int intno, int is_int, int error_code, 
                      int next_eip_addend)
 {
+    if(env->hflags & HF_SVM_MASK
+	&& ((INTERCEPTEDl(_exceptions, 1 << intno)) || is_int)) {
+	if(is_int) { // FIXME: nothing to do (this is a softint)
+//	    printf("Interrupt %x - might result in VMEXIT soon\n", intno);
+	    if (loglevel & CPU_LOG_TB_IN_ASM)
+		fprintf(logfile, "Interrupt %x - might result in VMEXIT soon\n", intno);
+/*
+            env->exception_index = intno;
+            env->error_code = error_code;
+            env->exception_is_int = is_int;
+            env->exception_next_eip = env->eip + next_eip_addend;
+
+	    vmexit(SVM_EXIT_INTR, 0);
+	    return;*/
+	} else {
+//	    printf("Exception %x - should result in VMEXIT now\n", intno);
+//	    EIP = env->eip + next_eip_addend;
+	    vmexit(SVM_EXIT_EXCP_BASE + intno, error_code);
+	    return;
+	}
+    }
+    
     if (!is_int)
         intno = check_exception(intno, &error_code);
 
@@ -1234,6 +1310,7 @@ void raise_interrupt(int intno, int is_i
     env->error_code = error_code;
     env->exception_is_int = is_int;
     env->exception_next_eip = env->eip + next_eip_addend;
+    
     cpu_loop_exit();
 }
 
@@ -1665,7 +1742,7 @@ void helper_cpuid(void)
     case 0x80000001:
         EAX = env->cpuid_features;
         EBX = 0;
-        ECX = 0;
+        ECX = env->cpuid_ext3_features;
         EDX = env->cpuid_ext2_features;
         break;
     case 0x80000002:
@@ -2643,25 +2720,30 @@ void helper_sysexit(void)
 #endif
 }
 
+
+
 void helper_movl_crN_T0(int reg)
 {
 #if !defined(CONFIG_USER_ONLY) 
-    switch(reg) {
-    case 0:
-        cpu_x86_update_cr0(env, T0);
-        break;
-    case 3:
-        cpu_x86_update_cr3(env, T0);
-        break;
-    case 4:
-        cpu_x86_update_cr4(env, T0);
-        break;
-    case 8:
-        cpu_set_apic_tpr(env, T0);
-        break;
-    default:
-        env->cr[reg] = T0;
-        break;
+    if(INTERCEPTEDw(_cr_write, (1 << reg)))
+        vmexit(SVM_EXIT_WRITE_CR0 + reg, 0);
+    else
+        switch(reg) {
+        case 0:
+            cpu_x86_update_cr0(env, T0);
+            break;
+        case 3:
+            cpu_x86_update_cr3(env, T0);
+            break;
+        case 4:
+            cpu_x86_update_cr4(env, T0);
+            break;
+        case 8:
+            cpu_set_apic_tpr(env, T0);
+            break;
+        default:
+            env->cr[reg] = T0;
+            break;
     }
 #endif
 }
@@ -2669,7 +2751,10 @@ void helper_movl_crN_T0(int reg)
 /* XXX: do more */
 void helper_movl_drN_T0(int reg)
 {
-    env->dr[reg] = T0;
+    if(INTERCEPTEDw(_dr_write, (1 << reg)))
+        vmexit(SVM_EXIT_WRITE_DR0 + reg, 0);
+    else
+        env->dr[reg] = T0;
 }
 
 void helper_invlpg(target_ulong addr)
@@ -2739,6 +2824,9 @@ void helper_wrmsr(void)
     case MSR_PAT:
         env->pat = val;
         break;
+    case MSR_VM_HSAVE_PA:
+        env->vm_hsave = val;
+        break;
 #ifdef TARGET_X86_64
     case MSR_LSTAR:
         env->lstar = val;
@@ -2790,6 +2878,9 @@ void helper_rdmsr(void)
     case MSR_PAT:
         val = env->pat;
         break;
+    case MSR_VM_HSAVE_PA:
+        val = env->vm_hsave;
+        break;
 #ifdef TARGET_X86_64
     case MSR_LSTAR:
         val = env->lstar;
@@ -3852,6 +3943,7 @@ void tlb_fill(target_ulong addr, int is_
     saved_env = env;
     env = cpu_single_env;
 
+    // if(! (env->hflags & HF_GIF_MASK))
     ret = cpu_x86_handle_mmu_fault(env, addr, is_write, is_user, 1);
     if (ret) {
         if (retaddr) {
@@ -3871,3 +3963,658 @@ void tlb_fill(target_ulong addr, int is_
     }
     env = saved_env;
 }
+
+/* Secure Virtual Machine helpers */
+
+
+uint32_t vmcb2cpu_attrib(uint16_t vmcb_value) {
+    uint32_t retval = 0;
+    retval |= ((vmcb_value & SVM_SELECTOR_S_MASK) >> SVM_SELECTOR_S_SHIFT) << 12;
+    retval |= ((vmcb_value & SVM_SELECTOR_DPL_MASK) >> SVM_SELECTOR_DPL_SHIFT) << DESC_DPL_SHIFT;
+    retval |= ((vmcb_value & SVM_SELECTOR_P_MASK) >> SVM_SELECTOR_P_SHIFT) << 15;
+    retval |= ((vmcb_value & SVM_SELECTOR_AVL_MASK) >> SVM_SELECTOR_AVL_SHIFT) << 20;
+    retval |= ((vmcb_value & SVM_SELECTOR_L_MASK) >> SVM_SELECTOR_L_SHIFT) << DESC_L_SHIFT;
+    retval |= ((vmcb_value & SVM_SELECTOR_DB_MASK) >> SVM_SELECTOR_DB_SHIFT) << DESC_B_SHIFT;
+    retval |= ((vmcb_value & SVM_SELECTOR_G_MASK) >> SVM_SELECTOR_G_SHIFT) << 23;
+    retval |= ((vmcb_value & SVM_SELECTOR_READ_MASK) >> 1) << 9;
+    retval |= ((vmcb_value & SVM_SELECTOR_CODE_MASK) >> 3) << 11;
+
+    // unavailable as kvm constants (so I made these up)
+    retval |= ((vmcb_value & (1 << 12)) >> 12) << 8; // A
+    retval |= ((vmcb_value & (1 << 13)) >> 13) << 10; // C / E
+    return retval;
+}
+
+uint16_t cpu2vmcb_attrib(uint32_t cpu_value) {
+    uint16_t retval = 0;
+    retval |= ((cpu_value & DESC_S_MASK) >> 12) << SVM_SELECTOR_S_SHIFT;
+    retval |= ((cpu_value & DESC_DPL_MASK) >> DESC_DPL_SHIFT) << SVM_SELECTOR_DPL_SHIFT;
+    retval |= ((cpu_value & DESC_P_MASK) >> 15) << SVM_SELECTOR_P_SHIFT;
+    retval |= ((cpu_value & DESC_AVL_MASK) >> 20) << SVM_SELECTOR_AVL_SHIFT;
+    retval |= ((cpu_value & DESC_L_MASK) >> DESC_L_SHIFT) << SVM_SELECTOR_L_SHIFT;
+    retval |= ((cpu_value & DESC_B_MASK) >> DESC_B_SHIFT) << SVM_SELECTOR_DB_SHIFT;
+    retval |= ((cpu_value & DESC_G_MASK) >> 23) << SVM_SELECTOR_G_SHIFT;
+    retval |= ((cpu_value & DESC_R_MASK) >> 9) << 1;
+    retval |= ((cpu_value & DESC_CS_MASK) >> 11) << 3;
+
+    // unavailable as kvm constants (so I made these up)
+    retval |= ((cpu_value & DESC_A_MASK) >> 8) << 12;
+    retval |= ((cpu_value & DESC_C_MASK) >> 10) << 13;
+    return retval;
+}
+
+void svm_check_longmode(void) {
+#ifdef TARGET_X86_64
+    env->hflags &= ~(HF_LMA_MASK | HF_CS64_MASK);
+    if (// !(env->cr[0] & CR0_PG_MASK) && 
+        (env->efer & MSR_EFER_LME)) {
+        /* enter in long mode */
+        /* XXX: generate an exception */
+        if (!(env->cr[4] & CR4_PAE_MASK))
+            return;
+        env->efer |= MSR_EFER_LMA;
+        env->hflags |= HF_LMA_MASK;
+    }
+#endif
+}
+
+extern uint8_t *phys_ram_base;
+void helper_vmrun(target_ulong addr)
+{
+    static CPUX86State* vm_hsave = NULL;
+    uint32_t event_inj;
+    uint32_t int_ctl;
+
+    if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile,"vmrun! %#lx\n", addr);
+
+    env->vm_vmcb = addr;
+    EIP += 3; // store the instruction after the vmrun
+    regs_to_env();
+    // FIXME: we should use the page given by the VMM
+    if(!vm_hsave) vm_hsave = (CPUX86State*) malloc(sizeof(CPUX86State));
+    env->vm_hsave = vm_hsave;
+    memcpy(env->vm_hsave, env, sizeof(CPUX86State));
+    // cpu_physical_memory_write(env->vm_hsave, (void*)env, sizeof(CPUX86State));
+
+    // load the interception bitmaps so we do not need to access the vmcb in svm mode
+    env->intercept            = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept));
+    env->intercept_cr_read    = lduw_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept_cr_read));
+    env->intercept_cr_write   = lduw_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept_cr_write));
+    env->intercept_dr_read    = lduw_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept_dr_read));
+    env->intercept_dr_write   = lduw_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept_dr_write));
+    env->intercept_exceptions = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept_exceptions));
+
+    env->gdt.base  = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.gdtr.base));
+    env->gdt.limit = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, save.gdtr.limit));
+
+    env->idt.base  = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.idtr.base));
+    env->idt.limit = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, save.idtr.limit));
+
+    // clear exit_info_2 so we behave like the real hardware
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_info_2), 0);
+
+    // reset hidden flags to some extent
+    env->hflags &= ~(HF_LMA_MASK | HF_CS64_MASK);
+
+    cpu_x86_update_cr0(env, ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr0)));
+    cpu_x86_update_cr4(env, ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr4)));
+    cpu_x86_update_cr3(env, ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr3)));
+    env->cr[2] = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr2));
+    int_ctl = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl));
+    if(int_ctl & V_INTR_MASKING_MASK) {
+        env->cr[8] = int_ctl & V_TPR_MASK;
+        if(env->eflags & IF_MASK) env->hflags |= HF_HIF_MASK;
+    }
+
+    env->efer = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.efer));
+    load_eflags(ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rflags)), 0xffffffff);
+    svm_check_longmode();
+
+    cpu_x86_load_seg_cache(env, 
+		    R_ES, 
+		    lduw_phys(env->vm_vmcb + offsetof(struct vmcb, save.es.selector)),
+                    ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.es.base)),
+                    ldl_phys(env->vm_vmcb + offsetof(struct vmcb, save.es.limit)),
+                    vmcb2cpu_attrib(lduw_phys(env->vm_vmcb + offsetof(struct vmcb, save.es.attrib))));
+
+    cpu_x86_load_seg_cache(env, 
+		    R_CS, 
+		    lduw_phys(env->vm_vmcb + offsetof(struct vmcb, save.cs.selector)),
+                    ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cs.base)),
+                    ldl_phys(env->vm_vmcb + offsetof(struct vmcb, save.cs.limit)),
+                    vmcb2cpu_attrib(lduw_phys(env->vm_vmcb + offsetof(struct vmcb, save.cs.attrib))));
+
+    cpu_x86_load_seg_cache(env, 
+		    R_SS, 
+		    lduw_phys(env->vm_vmcb + offsetof(struct vmcb, save.ss.selector)),
+                    ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.ss.base)),
+                    ldl_phys(env->vm_vmcb + offsetof(struct vmcb, save.ss.limit)),
+                    vmcb2cpu_attrib(lduw_phys(env->vm_vmcb + offsetof(struct vmcb, save.ss.attrib))));
+
+    cpu_x86_load_seg_cache(env, 
+		    R_DS, 
+		    lduw_phys(env->vm_vmcb + offsetof(struct vmcb, save.ds.selector)),
+                    ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.ds.base)),
+                    ldl_phys(env->vm_vmcb + offsetof(struct vmcb, save.ds.limit)),
+                    vmcb2cpu_attrib(lduw_phys(env->vm_vmcb + offsetof(struct vmcb, save.ds.attrib))));
+
+#ifdef SVM_DEBUG
+    fprintf(logfile,"hflags:      %#llx\n", env->hflags);
+    print_hflags();
+    fprintf(logfile,"HF_LMA_MASK: %#llx\n", HF_LMA_MASK);
+#endif
+
+    EIP = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rip));
+    ESP = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rsp));
+    EAX = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rax));
+    env->dr[7] = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.dr7));
+    env->dr[6] = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.dr6));
+    cpu_x86_set_cpl(env, ldub_phys(env->vm_vmcb + offsetof(struct vmcb, save.cpl)));
+
+    // FIXME: guest state consistency checks
+
+    switch(ldub_phys(env->vm_vmcb + offsetof(struct vmcb, control.tlb_ctl))) {
+        case TLB_CONTROL_DO_NOTHING:
+            break;
+        case TLB_CONTROL_FLUSH_ALL_ASID:
+            // FIXME: this is not 100% correct but should work for now
+            tlb_flush(env, 1);
+        break;
+    }
+
+    helper_stgi();
+    env->hflags |= HF_SVM_MASK;
+
+    regs_to_env();
+
+    // maybe we need to inject an event
+    event_inj = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.event_inj));
+    if(event_inj & SVM_EVTINJ_VALID) {
+        uint8_t vector = event_inj & SVM_EVTINJ_VEC_MASK;
+	uint16_t valid_err = event_inj & SVM_EVTINJ_VALID_ERR;
+	uint32_t event_inj_err = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.event_inj_err));
+        stl_phys(env->vm_vmcb + offsetof(struct vmcb, control.event_inj), event_inj & ~SVM_EVTINJ_VALID);
+        
+	if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile, "Injecting(%#hx): ", valid_err);
+	// FIXME: need to implement valid_err
+	switch(event_inj & SVM_EVTINJ_TYPE_MASK) {
+		case SVM_EVTINJ_TYPE_INTR:
+                        env->exception_index = vector;
+                        env->error_code = event_inj_err;
+                        env->exception_is_int = 1;
+                        env->exception_next_eip = -1;
+			if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile, "INTR");
+			break;
+		case SVM_EVTINJ_TYPE_NMI:
+                        env->exception_index = vector;
+                        env->error_code = event_inj_err;
+                        env->exception_is_int = 1;
+                        env->exception_next_eip = EIP;
+			if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile, "NMI");
+			break;
+		case SVM_EVTINJ_TYPE_EXEPT:
+                        env->exception_index = check_exception(vector, &event_inj_err);
+                        env->error_code = event_inj_err;
+                        env->exception_is_int = 0;
+                        env->exception_next_eip = -1;
+			if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile, "EXEPT");
+			break;
+		case SVM_EVTINJ_TYPE_SOFT:
+                        env->exception_index = vector;
+                        env->error_code = event_inj_err;
+                        env->exception_is_int = 1;
+                        env->exception_next_eip = EIP;
+			if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile, "SOFT");
+			break;
+	}
+	if (loglevel & CPU_LOG_TB_IN_ASM)
+	fprintf(logfile, " %#x %#x\n", env->exception_index, env->error_code); fflush(NULL);
+    }
+    else if(int_ctl & V_IRQ_MASK) {
+        uint32_t intno = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_vector));
+        if (loglevel & CPU_LOG_TB_IN_ASM)
+	    fprintf(logfile, "Injecting int: %#x\n", intno); fflush(NULL);
+	// FIXME: this should respect TPR
+	if(env->eflags & IF_MASK) {
+	     svm_check_intercept(SVM_EXIT_VINTR);
+	     if (loglevel & CPU_LOG_TB_IN_ASM)
+	         fprintf(logfile, "Servicing virtual hardware INT=0x%02x\n", intno);
+	     do_interrupt(intno, 0, 0, -1, 1);
+#if defined(__sparc__) && !defined(HOST_SOLARIS)
+             tmp_T0 = 0;
+#else
+             T0 = 0;
+#endif
+             stl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl), ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl))  & ~V_IRQ_MASK);
+	 }
+    }
+
+    cpu_loop_exit();
+}
+
+void helper_vmmcall()
+{
+    if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile,"vmmcall!\n");
+}
+
+void helper_vmload(target_ulong addr)
+{
+//    fprintf(logfile,"vmload!\n"); fflush(NULL);
+
+    cpu_x86_load_seg_cache(env, 
+		    R_FS, 
+		    lduw_phys(addr + offsetof(struct vmcb, save.fs.selector)),
+                    ldq_phys(addr + offsetof(struct vmcb, save.fs.base)),
+                    ldl_phys(addr + offsetof(struct vmcb, save.fs.limit)),
+                    vmcb2cpu_attrib(lduw_phys(addr + offsetof(struct vmcb, save.fs.attrib))));
+
+    cpu_x86_load_seg_cache(env, 
+		    R_GS, 
+		    lduw_phys(addr + offsetof(struct vmcb, save.gs.selector)),
+                    ldq_phys(addr + offsetof(struct vmcb, save.gs.base)),
+                    ldl_phys(addr + offsetof(struct vmcb, save.gs.limit)),
+                    vmcb2cpu_attrib(lduw_phys(addr + offsetof(struct vmcb, save.gs.attrib))));
+
+    env->tr.selector  = lduw_phys(addr + offsetof(struct vmcb, save.tr.selector));
+    env->tr.base      = ldq_phys(addr + offsetof(struct vmcb, save.tr.base));
+    env->tr.limit     = ldl_phys(addr + offsetof(struct vmcb, save.tr.limit));
+    env->tr.flags     = vmcb2cpu_attrib(lduw_phys(addr + offsetof(struct vmcb, save.tr.attrib)));
+
+    env->ldt.selector = lduw_phys(addr + offsetof(struct vmcb, save.ldtr.selector));
+    env->ldt.base     = ldq_phys(addr + offsetof(struct vmcb, save.ldtr.base));
+    env->ldt.limit    = ldl_phys(addr + offsetof(struct vmcb, save.ldtr.limit));
+    env->ldt.flags    = vmcb2cpu_attrib(lduw_phys(addr + offsetof(struct vmcb, save.ldtr.attrib)));
+
+#ifdef TARGET_X86_64
+    env->kernelgsbase = ldq_phys(addr + offsetof(struct vmcb, save.kernel_gs_base));
+    env->lstar = ldq_phys(addr + offsetof(struct vmcb, save.lstar));
+    env->cstar = ldq_phys(addr + offsetof(struct vmcb, save.cstar));
+    env->fmask = ldq_phys(addr + offsetof(struct vmcb, save.sfmask));
+#endif
+    env->star = ldq_phys(addr + offsetof(struct vmcb, save.star));
+    env->sysenter_cs = ldq_phys(addr + offsetof(struct vmcb, save.sysenter_cs));
+    env->sysenter_esp = ldq_phys(addr + offsetof(struct vmcb, save.sysenter_esp));
+    env->sysenter_eip = ldq_phys(addr + offsetof(struct vmcb, save.sysenter_eip));
+}
+
+void helper_vmsave(target_ulong addr)
+{
+//    fprintf(logfile,"vmsave!\n"); fflush(NULL);
+    stw_phys(addr + offsetof(struct vmcb, save.fs.selector), env->segs[R_FS].selector);
+    stq_phys(addr + offsetof(struct vmcb, save.fs.base), env->segs[R_FS].base);
+    stl_phys(addr + offsetof(struct vmcb, save.fs.limit), env->segs[R_FS].limit);
+    stw_phys(addr + offsetof(struct vmcb, save.fs.attrib), cpu2vmcb_attrib(env->segs[R_FS].flags));
+
+    stw_phys(addr + offsetof(struct vmcb, save.gs.selector), env->segs[R_GS].selector);
+    stq_phys(addr + offsetof(struct vmcb, save.gs.base), env->segs[R_GS].base);
+    stl_phys(addr + offsetof(struct vmcb, save.gs.limit), env->segs[R_GS].limit);
+    stw_phys(addr + offsetof(struct vmcb, save.gs.attrib), cpu2vmcb_attrib(env->segs[R_GS].flags));
+
+    stw_phys(addr + offsetof(struct vmcb, save.tr.selector), env->tr.selector);
+    stq_phys(addr + offsetof(struct vmcb, save.tr.base), env->tr.base);
+    stl_phys(addr + offsetof(struct vmcb, save.tr.limit), env->tr.limit);
+    stw_phys(addr + offsetof(struct vmcb, save.tr.attrib), cpu2vmcb_attrib(env->tr.flags));
+
+    stw_phys(addr + offsetof(struct vmcb, save.ldtr.selector), env->ldt.selector);
+    stq_phys(addr + offsetof(struct vmcb, save.ldtr.base), env->ldt.base);
+    stl_phys(addr + offsetof(struct vmcb, save.ldtr.limit), env->ldt.limit);
+    stw_phys(addr + offsetof(struct vmcb, save.ldtr.attrib), cpu2vmcb_attrib(env->ldt.flags));
+
+#ifdef TARGET_X86_64
+    stq_phys(addr + offsetof(struct vmcb, save.kernel_gs_base), env->kernelgsbase);
+    stq_phys(addr + offsetof(struct vmcb, save.lstar), env->lstar);
+    stq_phys(addr + offsetof(struct vmcb, save.cstar), env->cstar);
+    stq_phys(addr + offsetof(struct vmcb, save.sfmask), env->fmask);
+#endif
+    stq_phys(addr + offsetof(struct vmcb, save.star), env->star);
+    stq_phys(addr + offsetof(struct vmcb, save.sysenter_cs), env->sysenter_cs);
+    stq_phys(addr + offsetof(struct vmcb, save.sysenter_esp), env->sysenter_esp);
+    stq_phys(addr + offsetof(struct vmcb, save.sysenter_eip), env->sysenter_eip);
+}
+
+void helper_stgi()
+{
+    env->hflags |= HF_GIF_MASK;
+    env->exception_next_eip = EIP + 3;
+}
+
+void helper_clgi()
+{
+    env->hflags &= ~HF_GIF_MASK;
+}
+
+void helper_skinit()
+{
+    if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile,"skinit!\n"); fflush(NULL);
+}
+
+void helper_invlpga()
+{
+//    fprintf(logfile,"invlpga!\n"); fflush(NULL);
+    tlb_flush(env, 0);
+}
+
+void print_hflags() {
+    fprintf(logfile,"HF_CPL_MASK         = %d\n", ( env->hflags & HF_CPL_MASK ) > 0);
+    fprintf(logfile,"HF_SOFTMMU_MASK     = %d\n", ( env->hflags & HF_SOFTMMU_MASK ) > 0);
+    fprintf(logfile,"HF_INHIBIT_IRQ_MASK = %d\n", ( env->hflags & HF_INHIBIT_IRQ_MASK ) > 0);
+    fprintf(logfile,"HF_CS32_MASK        = %d\n", ( env->hflags & HF_CS32_MASK ) > 0);
+    fprintf(logfile,"HF_SS32_MASK        = %d\n", ( env->hflags & HF_SS32_MASK ) > 0);
+    fprintf(logfile,"HF_ADDSEG_MASK      = %d\n", ( env->hflags & HF_ADDSEG_MASK ) > 0);
+    fprintf(logfile,"HF_PE_MASK          = %d\n", ( env->hflags & HF_PE_MASK ) > 0);
+    fprintf(logfile,"HF_TF_MASK          = %d\n", ( env->hflags & HF_TF_MASK ) > 0);
+    fprintf(logfile,"HF_MP_MASK          = %d\n", ( env->hflags & HF_MP_MASK ) > 0);
+    fprintf(logfile,"HF_EM_MASK          = %d\n", ( env->hflags & HF_EM_MASK ) > 0);
+    fprintf(logfile,"HF_TS_MASK          = %d\n", ( env->hflags & HF_TS_MASK ) > 0);
+    fprintf(logfile,"HF_LMA_MASK         = %d\n", ( env->hflags & HF_LMA_MASK ) > 0);
+    fprintf(logfile,"HF_CS64_MASK        = %d\n", ( env->hflags & HF_CS64_MASK ) > 0);
+    fprintf(logfile,"HF_OSFXSR_MASK      = %d\n", ( env->hflags & HF_OSFXSR_MASK ) > 0);
+    fprintf(logfile,"HF_HALTED_MASK      = %d\n", ( env->hflags & HF_HALTED_MASK ) > 0);
+    fprintf(logfile,"HF_SMM_MASK         = %d\n", ( env->hflags & HF_SMM_MASK ) > 0);
+    fprintf(logfile,"HF_SVM_MASK         = %d\n", ( env->hflags & HF_SVM_MASK ) > 0);
+    fprintf(logfile,"HF_GIF_MASK         = %d\n", ( env->hflags & HF_GIF_MASK ) > 0);
+}
+
+#define CHECK_INTERCEPT(a,b) case a: if (INTERCEPTED(1L << b)) { vmexit(type, param); return 1; } break;
+int svm_check_intercept_param(uint32_t type, uint64_t param) {
+    if(!(env->hflags & HF_SVM_MASK)) return 0;
+    switch(type) {
+        case SVM_EXIT_READ_CR0 ... SVM_EXIT_READ_CR0 + 8:
+            if (INTERCEPTEDw(_cr_read, (1 << (param - SVM_EXIT_READ_CR0)))) {
+	        vmexit(PARAM1, param);
+		return 1;
+	    }
+            break;
+        case SVM_EXIT_READ_DR0 ... SVM_EXIT_READ_DR0 + 8:
+            if (INTERCEPTEDw(_dr_read, (1 << (param - SVM_EXIT_READ_DR0)))) {
+	        vmexit(PARAM1, param);
+		return 1;
+	    }
+            break;
+        case SVM_EXIT_WRITE_CR0 ... SVM_EXIT_WRITE_CR0 + 8:
+            if (INTERCEPTEDw(_cr_write, (1 << (param - SVM_EXIT_WRITE_CR0)))) {
+	        vmexit(PARAM1, param);
+		return 1;
+	    }
+            break;
+        case SVM_EXIT_WRITE_DR0 ... SVM_EXIT_WRITE_DR0 + 8:
+            if (INTERCEPTEDw(_dr_write, (1 << (param - SVM_EXIT_WRITE_DR0)))) {
+	        vmexit(PARAM1, param);
+		return 1;
+	    }
+            break;
+	CHECK_INTERCEPT(SVM_EXIT_INTR, INTERCEPT_INTR)
+	CHECK_INTERCEPT(SVM_EXIT_NMI, INTERCEPT_NMI)
+	CHECK_INTERCEPT(SVM_EXIT_SMI, INTERCEPT_SMI)
+	CHECK_INTERCEPT(SVM_EXIT_INIT, INTERCEPT_INIT)
+	CHECK_INTERCEPT(SVM_EXIT_VINTR, INTERCEPT_VINTR)
+	CHECK_INTERCEPT(SVM_EXIT_CR0_SEL_WRITE, INTERCEPT_SELECTIVE_CR0)
+	CHECK_INTERCEPT(SVM_EXIT_IDTR_READ, INTERCEPT_STORE_IDTR)
+	CHECK_INTERCEPT(SVM_EXIT_GDTR_READ, INTERCEPT_STORE_GDTR)
+	CHECK_INTERCEPT(SVM_EXIT_LDTR_READ, INTERCEPT_STORE_LDTR)
+	CHECK_INTERCEPT(SVM_EXIT_TR_READ, INTERCEPT_STORE_TR)
+	CHECK_INTERCEPT(SVM_EXIT_IDTR_WRITE, INTERCEPT_LOAD_IDTR)
+	CHECK_INTERCEPT(SVM_EXIT_GDTR_WRITE, INTERCEPT_LOAD_GDTR)
+	CHECK_INTERCEPT(SVM_EXIT_LDTR_WRITE, INTERCEPT_LOAD_LDTR)
+	CHECK_INTERCEPT(SVM_EXIT_TR_WRITE, INTERCEPT_LOAD_TR)
+	CHECK_INTERCEPT(SVM_EXIT_RDTSC, INTERCEPT_RDTSC)
+	CHECK_INTERCEPT(SVM_EXIT_RDPMC, INTERCEPT_RDPMC)
+	CHECK_INTERCEPT(SVM_EXIT_PUSHF, INTERCEPT_PUSHF)
+	CHECK_INTERCEPT(SVM_EXIT_POPF, INTERCEPT_POPF)
+	CHECK_INTERCEPT(SVM_EXIT_CPUID, INTERCEPT_CPUID)
+	CHECK_INTERCEPT(SVM_EXIT_RSM, INTERCEPT_RSM)
+	CHECK_INTERCEPT(SVM_EXIT_IRET, INTERCEPT_IRET)
+	CHECK_INTERCEPT(SVM_EXIT_SWINT, INTERCEPT_INTn)
+	CHECK_INTERCEPT(SVM_EXIT_INVD, INTERCEPT_INVD)
+	CHECK_INTERCEPT(SVM_EXIT_PAUSE, INTERCEPT_PAUSE)
+	CHECK_INTERCEPT(SVM_EXIT_HLT, INTERCEPT_HLT)
+	CHECK_INTERCEPT(SVM_EXIT_INVLPG, INTERCEPT_INVLPG)
+	CHECK_INTERCEPT(SVM_EXIT_INVLPGA, INTERCEPT_INVLPGA)
+	case SVM_EXIT_IOIO:
+            if (INTERCEPTED(INTERCEPT_IOIO_PROT)) {
+		// FIXME: this should be read in at vmrun (faster this way?)
+		uint64_t addr = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, control.iopm_base_pa));
+		uint16_t port = (uint16_t) (param >> 16);
+		
+		if(ldub_phys(addr + port / 8) & (1 << (port % 8)))
+                    vmexit(type, param);
+            }
+            break;
+
+        case SVM_EXIT_MSR:
+            if (INTERCEPTED(1L << INTERCEPT_MSR_PROT)) {
+		// FIXME: this should be read in at vmrun (faster this way?)
+		uint64_t addr = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, control.msrpm_base_pa));
+                switch((uint32_t)ECX) {
+                    case 0 ... 0x1fff:
+			T0 = (ECX * 2) % 32;
+			T1 = ECX / 16;
+                        break;
+                    case 0xc0000000 ... 0xc0001fff:
+			T0 = (8192 + ECX - 0xc0000000) * 2;
+			T1 = (T0 / 32);
+			T0 %= 32;
+                        break;
+                    case 0xc0010000 ... 0xc0011fff:
+			T0 = (16384 + ECX - 0xc0010000) * 2;
+			T1 = (T0 / 32);
+			T0 %= 32;
+                        break;
+                    default:
+			vmexit(type, param);
+                        return 1;
+		}
+                if (ldl_phys(addr + (T1*4)) & ((1 << param) << T0))
+                    vmexit(type, param);
+		return 1;
+	    }
+	    break;
+	CHECK_INTERCEPT(SVM_EXIT_TASK_SWITCH, INTERCEPT_TASK_SWITCH)
+	CHECK_INTERCEPT(SVM_EXIT_FERR_FREEZE, INTERCEPT_FERR_FREEZE)
+	CHECK_INTERCEPT(SVM_EXIT_SHUTDOWN, INTERCEPT_SHUTDOWN)
+	CHECK_INTERCEPT(SVM_EXIT_VMRUN, INTERCEPT_VMRUN)
+	CHECK_INTERCEPT(SVM_EXIT_VMMCALL, INTERCEPT_VMMCALL)
+	CHECK_INTERCEPT(SVM_EXIT_VMLOAD, INTERCEPT_VMLOAD)
+	CHECK_INTERCEPT(SVM_EXIT_VMSAVE, INTERCEPT_VMSAVE)
+	CHECK_INTERCEPT(SVM_EXIT_STGI, INTERCEPT_STGI)
+	CHECK_INTERCEPT(SVM_EXIT_CLGI, INTERCEPT_CLGI)
+	CHECK_INTERCEPT(SVM_EXIT_SKINIT, INTERCEPT_SKINIT)
+	CHECK_INTERCEPT(SVM_EXIT_RDTSCP, INTERCEPT_RDTSCP)
+	CHECK_INTERCEPT(SVM_EXIT_ICEBP, INTERCEPT_ICEBP)
+	CHECK_INTERCEPT(SVM_EXIT_WBINVD, INTERCEPT_WBINVD) 
+//	CHECK_INTERCEPT(SVM_EXIT_MONITOR, x)
+//	CHECK_INTERCEPT(SVM_EXIT_MWAIT, x)
+//	CHECK_INTERCEPT(SVM_EXIT_MWAIT_COND, x)
+//	CHECK_INTERCEPT(SVM_EXIT_NPF, x)
+//	CHECK_INTERCEPT(SVM_EXIT_ERR, x)
+
+    }
+    return 0;
+}
+
+int svm_check_intercept(unsigned int type) {
+    return svm_check_intercept_param(type, 0);
+}
+
+void vmexit(uint64_t exit_code, uint64_t exit_info_1)
+{
+    uint32_t int_ctl;
+    struct CPUX86State *old_env = env->vm_hsave;
+
+#if 0 // FIXME: find a way to get RIP without updating it all the time
+    { // Restore EIP
+        target_ulong pc;
+        TranslationBlock *tb;
+        pc = GETPC();
+        tb = tb_find_pc(pc);
+        if (tb) {
+            printf("Restoring CPU state: %#lx -> ", env->eip); fflush(NULL);
+            cpu_restore_state(tb, env, pc, NULL); // set EIP correctly
+            printf("%#lx\n", env->eip); fflush(NULL);
+	} else {
+            printf("No CPU state for: %#lx\n", pc); fflush(NULL);
+	}
+    }
+#endif
+
+    if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile,"vmexit(%#lx, %#lx, %#lx, %#lx)!\n", exit_code, exit_info_1, ldq_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_info_2)), EIP); fflush(NULL);
+#ifdef SVM_DEBUG
+    fprintf(logfile,"[vmexit] env->vm_hsave: %p\n", env->vm_hsave);
+    fprintf(logfile,"[vmexit] ESI: %#lx\n", ESI);
+    fprintf(logfile,"[vmexit] EDI: %#lx\n", EDI);
+    fprintf(logfile,"[vmexit] ECX: %#lx\n", ECX);
+    fprintf(logfile,"[vmexit] CS flags (native): %#x", env->segs[R_CS].flags);
+    fprintf(logfile,"[vmexit] CS flags (conv):   %#x", vmcb2cpu_attrib(cpu2vmcb_attrib(env->segs[R_CS].flags)));
+#endif
+// Save the VM state in the vmcb
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.es.selector), env->segs[R_ES].selector);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.es.base), env->segs[R_ES].base);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, save.es.limit), env->segs[R_ES].limit);
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.es.attrib), cpu2vmcb_attrib(env->segs[R_ES].flags));
+
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.cs.selector), env->segs[R_CS].selector);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cs.base), env->segs[R_CS].base);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, save.cs.limit), env->segs[R_CS].limit);
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.cs.attrib), cpu2vmcb_attrib(env->segs[R_CS].flags));
+
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.ss.selector), env->segs[R_SS].selector);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.ss.base), env->segs[R_SS].base);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, save.ss.limit), env->segs[R_SS].limit);
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.ss.attrib), cpu2vmcb_attrib(env->segs[R_SS].flags));
+
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.ds.selector), env->segs[R_DS].selector);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.ds.base), env->segs[R_DS].base);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, save.ds.limit), env->segs[R_DS].limit);
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.ds.attrib), cpu2vmcb_attrib(env->segs[R_DS].flags));
+
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.gdtr.base), env->gdt.base);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, save.gdtr.limit), env->gdt.limit);
+
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.idtr.base), env->idt.base);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, save.idtr.limit), env->idt.limit);
+
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.efer), env->efer);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr0), env->cr[0]);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr2), env->cr[2]);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr3), env->cr[3]);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr4), env->cr[4]);
+
+    
+    if((int_ctl = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl))) & V_INTR_MASKING_MASK) {
+        int_ctl &= ~V_TPR_MASK;
+	int_ctl |= env->cr[8] & V_TPR_MASK;
+	stl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl), int_ctl);
+    }
+
+    switch(DF) {
+	case 1:
+	    env->eflags &= ~DF_MASK;
+	    break;
+	case -1:
+	    env->eflags |= DF_MASK;
+	    break;
+    }
+
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rflags), env->eflags);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rip), EIP);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rsp), ESP);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rax), EAX);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.dr7), env->dr[7]);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.dr6), env->dr[6]);
+    stb_phys(env->vm_vmcb + offsetof(struct vmcb, save.cpl), env->hflags & HF_CPL_MASK);
+
+// Reload the host state from vm_hsave
+    env->hflags &= ~HF_HIF_MASK;
+
+    env->gdt.base  = old_env->gdt.base;
+    env->gdt.limit = old_env->gdt.limit;
+
+    env->idt.base  = old_env->idt.base;
+    env->idt.limit = old_env->idt.limit;
+
+    cpu_x86_update_cr0(env, old_env->cr[0] | CR0_PE_MASK);
+//    env->cr[0] = old_env->cr[0];
+    cpu_x86_update_cr4(env, old_env->cr[4]);
+//    env->cr[4] = old_env->cr[4];
+    cpu_x86_update_cr3(env, old_env->cr[3]);
+//    env->cr[3] = old_env->cr[3];
+    // env->cr[2] = old_env->cr[2];
+    if(int_ctl & V_INTR_MASKING_MASK)
+        env->cr[8] = old_env->cr[8];
+    // we need to set the efer after the crs so the hidden flags get set properly
+    env->efer = old_env->efer;
+
+    load_eflags(old_env->eflags, 0xffffffff);
+    svm_check_longmode();
+
+    cpu_x86_load_seg_cache(env, R_ES, old_env->segs[R_ES].selector, 
+		    old_env->segs[R_ES].base, old_env->segs[R_ES].limit, 
+		    old_env->segs[R_ES].flags);
+    cpu_x86_load_seg_cache(env, R_CS, old_env->segs[R_CS].selector, 
+		    old_env->segs[R_CS].base, old_env->segs[R_CS].limit, 
+		    old_env->segs[R_CS].flags);
+    cpu_x86_load_seg_cache(env, R_SS, old_env->segs[R_SS].selector, 
+		    old_env->segs[R_SS].base, old_env->segs[R_SS].limit, 
+		    old_env->segs[R_SS].flags);
+    cpu_x86_load_seg_cache(env, R_DS, old_env->segs[R_DS].selector, 
+		    old_env->segs[R_DS].base, old_env->segs[R_DS].limit, 
+		    old_env->segs[R_DS].flags);
+
+    EIP = old_env->eip;
+    ESP = old_env->regs[R_ESP];
+    EAX = old_env->regs[R_EAX];
+    env->dr[7] = old_env->dr[7];
+    env->dr[6] = old_env->dr[6];
+
+// other setups
+    cpu_x86_set_cpl(env, 0);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_code_hi), (uint32_t)(exit_code >> 32));
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_code), exit_code);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_info_1), exit_info_1);
+
+    helper_clgi();
+    // FIXME: Resets the current ASID register to zero (host ASID).
+
+    // Clears the V_IRQ and V_INTR_MASKING bits inside the processor.
+    
+    // Clears the TSC_OFFSET inside the processor.
+    
+    // If the host is in PAE mode, the processor reloads the host's PDPEs from the page table indicated the host's CR3. If the PDPEs contain illegal state, the processor causes a shutdown.
+//    if (env->cr[4] & CR4_PAE_MASK)
+//	    fprintf(logfile,"should I reload the PDPEs now?\n");
+
+    // Forces CR0.PE = 1, RFLAGS.VM = 0.
+    env->cr[0] |= CR0_PE_MASK;
+    env->eflags &= ~VM_MASK;
+    
+    // Disables all breakpoints in the host DR7 register.
+
+    // Checks the reloaded host state for consistency;
+
+    // If the host’s rIP reloaded by #VMEXIT is outside the limit of the host’s code segment or non-canonical (in the case of long mode), a #GP fault is delivered inside the host.)
+
+    env->hflags &= ~HF_SVM_MASK;
+
+    // remove any pending exception
+    env->exception_index = -1;
+    env->error_code = 0;
+    env->old_exception = -1;
+
+    regs_to_env();
+//    tlb_flush(env, 0);
+#ifdef SVM_DEBUG
+    fprintf(logfile,"hflags:       %#llx\n", env->hflags);
+    print_hflags();
+    fprintf(logfile,"efer:         %#llx\n", env->efer);
+    fprintf(logfile,"Is the new RIP valid?\n"); fflush(NULL);
+//    fprintf(logfile,"XXX: %llx\n",  ldq_kernel(EIP)); fflush(NULL);
+//    fprintf(logfile,"YESH?\n"); fflush(NULL);
+
+    fprintf(logfile,"env->exception_index   = %d\n", env->exception_index); fflush(NULL);
+    fprintf(logfile,"env->interrupt_request = %d\n", env->interrupt_request); fflush(NULL);
+    fprintf(logfile, " **************** VM LEAVE ***************\n");
+#endif
+    cpu_loop_exit();
+    fprintf(logfile,"I should never reach here\n"); fflush(NULL);
+}
+
Index: qemu/target-i386/svm.h
===================================================================
--- /dev/null
+++ qemu/target-i386/svm.h
@@ -0,0 +1,331 @@
+#ifndef __SVM_H
+#define __SVM_H
+
+enum {
+	INTERCEPT_INTR,
+	INTERCEPT_NMI,
+	INTERCEPT_SMI,
+	INTERCEPT_INIT,
+	INTERCEPT_VINTR,
+	INTERCEPT_SELECTIVE_CR0,
+	INTERCEPT_STORE_IDTR,
+	INTERCEPT_STORE_GDTR,
+	INTERCEPT_STORE_LDTR,
+	INTERCEPT_STORE_TR,
+	INTERCEPT_LOAD_IDTR,
+	INTERCEPT_LOAD_GDTR,
+	INTERCEPT_LOAD_LDTR,
+	INTERCEPT_LOAD_TR,
+	INTERCEPT_RDTSC,
+	INTERCEPT_RDPMC,
+	INTERCEPT_PUSHF,
+	INTERCEPT_POPF,
+	INTERCEPT_CPUID,
+	INTERCEPT_RSM,
+	INTERCEPT_IRET,
+	INTERCEPT_INTn,
+	INTERCEPT_INVD,
+	INTERCEPT_PAUSE,
+	INTERCEPT_HLT,
+	INTERCEPT_INVLPG,
+	INTERCEPT_INVLPGA,
+	INTERCEPT_IOIO_PROT,
+	INTERCEPT_MSR_PROT,
+	INTERCEPT_TASK_SWITCH,
+	INTERCEPT_FERR_FREEZE,
+	INTERCEPT_SHUTDOWN,
+	INTERCEPT_VMRUN,
+	INTERCEPT_VMMCALL,
+	INTERCEPT_VMLOAD,
+	INTERCEPT_VMSAVE,
+	INTERCEPT_STGI,
+	INTERCEPT_CLGI,
+	INTERCEPT_SKINIT,
+	INTERCEPT_RDTSCP,
+	INTERCEPT_ICEBP,
+	INTERCEPT_WBINVD,
+};
+
+
+struct __attribute__ ((__packed__)) vmcb_control_area {
+	uint16_t intercept_cr_read;
+	uint16_t intercept_cr_write;
+	uint16_t intercept_dr_read;
+	uint16_t intercept_dr_write;
+	uint32_t intercept_exceptions;
+	uint64_t intercept;
+	uint8_t reserved_1[44];
+	uint64_t iopm_base_pa;
+	uint64_t msrpm_base_pa;
+	uint64_t tsc_offset;
+	uint32_t asid;
+	uint8_t tlb_ctl;
+	uint8_t reserved_2[3];
+	uint32_t int_ctl;
+	uint32_t int_vector;
+	uint32_t int_state;
+	uint8_t reserved_3[4];
+	uint32_t exit_code;
+	uint32_t exit_code_hi;
+	uint64_t exit_info_1;
+	uint64_t exit_info_2;
+	uint32_t exit_int_info;
+	uint32_t exit_int_info_err;
+	uint64_t nested_ctl;
+	uint8_t reserved_4[16];
+	uint32_t event_inj;
+	uint32_t event_inj_err;
+	uint64_t nested_cr3;
+	uint64_t lbr_ctl;
+	uint8_t reserved_5[832];
+};
+
+
+#define TLB_CONTROL_DO_NOTHING 0
+#define TLB_CONTROL_FLUSH_ALL_ASID 1
+
+#define V_TPR_MASK 0x0f
+
+#define V_IRQ_SHIFT 8
+#define V_IRQ_MASK (1 << V_IRQ_SHIFT)
+
+#define V_INTR_PRIO_SHIFT 16
+#define V_INTR_PRIO_MASK (0x0f << V_INTR_PRIO_SHIFT)
+
+#define V_IGN_TPR_SHIFT 20
+#define V_IGN_TPR_MASK (1 << V_IGN_TPR_SHIFT)
+
+#define V_INTR_MASKING_SHIFT 24
+#define V_INTR_MASKING_MASK (1 << V_INTR_MASKING_SHIFT)
+
+#define SVM_INTERRUPT_SHADOW_MASK 1
+
+#define SVM_IOIO_STR_SHIFT 2
+#define SVM_IOIO_REP_SHIFT 3
+#define SVM_IOIO_SIZE_SHIFT 4
+#define SVM_IOIO_ASIZE_SHIFT 7
+
+#define SVM_IOIO_TYPE_MASK 1
+#define SVM_IOIO_STR_MASK (1 << SVM_IOIO_STR_SHIFT)
+#define SVM_IOIO_REP_MASK (1 << SVM_IOIO_REP_SHIFT)
+#define SVM_IOIO_SIZE_MASK (7 << SVM_IOIO_SIZE_SHIFT)
+#define SVM_IOIO_ASIZE_MASK (7 << SVM_IOIO_ASIZE_SHIFT)
+
+struct __attribute__ ((__packed__)) vmcb_seg {
+	uint16_t selector;
+	uint16_t attrib;
+	uint32_t limit;
+	uint64_t base;
+};
+
+struct __attribute__ ((__packed__)) vmcb_save_area {
+	struct vmcb_seg es;
+	struct vmcb_seg cs;
+	struct vmcb_seg ss;
+	struct vmcb_seg ds;
+	struct vmcb_seg fs;
+	struct vmcb_seg gs;
+	struct vmcb_seg gdtr;
+	struct vmcb_seg ldtr;
+	struct vmcb_seg idtr;
+	struct vmcb_seg tr;
+	uint8_t reserved_1[43];
+	uint8_t cpl;
+	uint8_t reserved_2[4];
+	uint64_t efer;
+	uint8_t reserved_3[112];
+	uint64_t cr4;
+	uint64_t cr3;
+	uint64_t cr0;
+	uint64_t dr7;
+	uint64_t dr6;
+	uint64_t rflags;
+	uint64_t rip;
+	uint8_t reserved_4[88];
+	uint64_t rsp;
+	uint8_t reserved_5[24];
+	uint64_t rax;
+	uint64_t star;
+	uint64_t lstar;
+	uint64_t cstar;
+	uint64_t sfmask;
+	uint64_t kernel_gs_base;
+	uint64_t sysenter_cs;
+	uint64_t sysenter_esp;
+	uint64_t sysenter_eip;
+	uint64_t cr2;
+	uint8_t reserved_6[32];
+	uint64_t g_pat;
+	uint64_t dbgctl;
+	uint64_t br_from;
+	uint64_t br_to;
+	uint64_t last_excp_from;
+	uint64_t last_excp_to;
+};
+
+struct __attribute__ ((__packed__)) vmcb {
+	struct vmcb_control_area control;
+	struct vmcb_save_area save;
+};
+
+#define SVM_CPUID_FEATURE_SHIFT 2
+#define SVM_CPUID_FUNC 0x8000000a
+
+#define MSR_EFER_SVME_MASK (1ULL << 12)
+
+#define SVM_SELECTOR_S_SHIFT 4
+#define SVM_SELECTOR_DPL_SHIFT 5
+#define SVM_SELECTOR_P_SHIFT 7
+#define SVM_SELECTOR_AVL_SHIFT 8
+#define SVM_SELECTOR_L_SHIFT 9
+#define SVM_SELECTOR_DB_SHIFT 10
+#define SVM_SELECTOR_G_SHIFT 11
+
+#define SVM_SELECTOR_TYPE_MASK (0xf)
+#define SVM_SELECTOR_S_MASK (1 << SVM_SELECTOR_S_SHIFT)
+#define SVM_SELECTOR_DPL_MASK (3 << SVM_SELECTOR_DPL_SHIFT)
+#define SVM_SELECTOR_P_MASK (1 << SVM_SELECTOR_P_SHIFT)
+#define SVM_SELECTOR_AVL_MASK (1 << SVM_SELECTOR_AVL_SHIFT)
+#define SVM_SELECTOR_L_MASK (1 << SVM_SELECTOR_L_SHIFT)
+#define SVM_SELECTOR_DB_MASK (1 << SVM_SELECTOR_DB_SHIFT)
+#define SVM_SELECTOR_G_MASK (1 << SVM_SELECTOR_G_SHIFT)
+
+#define SVM_SELECTOR_WRITE_MASK (1 << 1)
+#define SVM_SELECTOR_READ_MASK SVM_SELECTOR_WRITE_MASK
+#define SVM_SELECTOR_CODE_MASK (1 << 3)
+
+#define INTERCEPT_CR0_MASK 1
+#define INTERCEPT_CR3_MASK (1 << 3)
+#define INTERCEPT_CR4_MASK (1 << 4)
+
+#define INTERCEPT_DR0_MASK 1
+#define INTERCEPT_DR1_MASK (1 << 1)
+#define INTERCEPT_DR2_MASK (1 << 2)
+#define INTERCEPT_DR3_MASK (1 << 3)
+#define INTERCEPT_DR4_MASK (1 << 4)
+#define INTERCEPT_DR5_MASK (1 << 5)
+#define INTERCEPT_DR6_MASK (1 << 6)
+#define INTERCEPT_DR7_MASK (1 << 7)
+
+#define SVM_EVTINJ_VEC_MASK 0xff
+
+#define SVM_EVTINJ_TYPE_SHIFT 8
+#define SVM_EVTINJ_TYPE_MASK (7 << SVM_EVTINJ_TYPE_SHIFT)
+
+#define SVM_EVTINJ_TYPE_INTR (0 << SVM_EVTINJ_TYPE_SHIFT)
+#define SVM_EVTINJ_TYPE_NMI (2 << SVM_EVTINJ_TYPE_SHIFT)
+#define SVM_EVTINJ_TYPE_EXEPT (3 << SVM_EVTINJ_TYPE_SHIFT)
+#define SVM_EVTINJ_TYPE_SOFT (4 << SVM_EVTINJ_TYPE_SHIFT)
+
+#define SVM_EVTINJ_VALID (1 << 31)
+#define SVM_EVTINJ_VALID_ERR (1 << 11)
+
+#define SVM_EXITINTINFO_VEC_MASK SVM_EVTINJ_VEC_MASK
+
+#define	SVM_EXITINTINFO_TYPE_INTR SVM_EVTINJ_TYPE_INTR
+#define	SVM_EXITINTINFO_TYPE_NMI SVM_EVTINJ_TYPE_NMI
+#define	SVM_EXITINTINFO_TYPE_EXEPT SVM_EVTINJ_TYPE_EXEPT
+#define	SVM_EXITINTINFO_TYPE_SOFT SVM_EVTINJ_TYPE_SOFT
+
+#define SVM_EXITINTINFO_VALID SVM_EVTINJ_VALID
+#define SVM_EXITINTINFO_VALID_ERR SVM_EVTINJ_VALID_ERR
+
+#define	SVM_EXIT_READ_CR0 	0x000
+#define	SVM_EXIT_READ_CR3 	0x003
+#define	SVM_EXIT_READ_CR4 	0x004
+#define	SVM_EXIT_READ_CR8 	0x008
+#define	SVM_EXIT_WRITE_CR0 	0x010
+#define	SVM_EXIT_WRITE_CR3 	0x013
+#define	SVM_EXIT_WRITE_CR4 	0x014
+#define	SVM_EXIT_WRITE_CR8 	0x018
+#define	SVM_EXIT_READ_DR0 	0x020
+#define	SVM_EXIT_READ_DR1 	0x021
+#define	SVM_EXIT_READ_DR2 	0x022
+#define	SVM_EXIT_READ_DR3 	0x023
+#define	SVM_EXIT_READ_DR4 	0x024
+#define	SVM_EXIT_READ_DR5 	0x025
+#define	SVM_EXIT_READ_DR6 	0x026
+#define	SVM_EXIT_READ_DR7 	0x027
+#define	SVM_EXIT_WRITE_DR0 	0x030
+#define	SVM_EXIT_WRITE_DR1 	0x031
+#define	SVM_EXIT_WRITE_DR2 	0x032
+#define	SVM_EXIT_WRITE_DR3 	0x033
+#define	SVM_EXIT_WRITE_DR4 	0x034
+#define	SVM_EXIT_WRITE_DR5 	0x035
+#define	SVM_EXIT_WRITE_DR6 	0x036
+#define	SVM_EXIT_WRITE_DR7 	0x037
+#define SVM_EXIT_EXCP_BASE      0x040
+#define SVM_EXIT_INTR		0x060
+#define SVM_EXIT_NMI		0x061
+#define SVM_EXIT_SMI		0x062
+#define SVM_EXIT_INIT		0x063
+#define SVM_EXIT_VINTR		0x064
+#define SVM_EXIT_CR0_SEL_WRITE	0x065
+#define SVM_EXIT_IDTR_READ	0x066
+#define SVM_EXIT_GDTR_READ	0x067
+#define SVM_EXIT_LDTR_READ	0x068
+#define SVM_EXIT_TR_READ	0x069
+#define SVM_EXIT_IDTR_WRITE	0x06a
+#define SVM_EXIT_GDTR_WRITE	0x06b
+#define SVM_EXIT_LDTR_WRITE	0x06c
+#define SVM_EXIT_TR_WRITE	0x06d
+#define SVM_EXIT_RDTSC		0x06e
+#define SVM_EXIT_RDPMC		0x06f
+#define SVM_EXIT_PUSHF		0x070
+#define SVM_EXIT_POPF		0x071
+#define SVM_EXIT_CPUID		0x072
+#define SVM_EXIT_RSM		0x073
+#define SVM_EXIT_IRET		0x074
+#define SVM_EXIT_SWINT		0x075
+#define SVM_EXIT_INVD		0x076
+#define SVM_EXIT_PAUSE		0x077
+#define SVM_EXIT_HLT		0x078
+#define SVM_EXIT_INVLPG		0x079
+#define SVM_EXIT_INVLPGA	0x07a
+#define SVM_EXIT_IOIO		0x07b
+#define SVM_EXIT_MSR		0x07c
+#define SVM_EXIT_TASK_SWITCH	0x07d
+#define SVM_EXIT_FERR_FREEZE	0x07e
+#define SVM_EXIT_SHUTDOWN	0x07f
+#define SVM_EXIT_VMRUN		0x080
+#define SVM_EXIT_VMMCALL	0x081
+#define SVM_EXIT_VMLOAD		0x082
+#define SVM_EXIT_VMSAVE		0x083
+#define SVM_EXIT_STGI		0x084
+#define SVM_EXIT_CLGI		0x085
+#define SVM_EXIT_SKINIT		0x086
+#define SVM_EXIT_RDTSCP		0x087
+#define SVM_EXIT_ICEBP		0x088
+#define SVM_EXIT_WBINVD		0x089
+// only included in documentation, maybe wrong
+#define SVM_EXIT_MONITOR	0x08a
+#define SVM_EXIT_MWAIT		0x08b
+#define SVM_EXIT_NPF  		0x400
+
+#define SVM_EXIT_ERR		-1
+
+#define SVM_CR0_SELECTIVE_MASK (1 << 3 | 1) // TS and MP
+
+#define SVM_VMLOAD ".byte 0x0f, 0x01, 0xda"
+#define SVM_VMRUN  ".byte 0x0f, 0x01, 0xd8"
+#define SVM_VMSAVE ".byte 0x0f, 0x01, 0xdb"
+#define SVM_CLGI   ".byte 0x0f, 0x01, 0xdd"
+#define SVM_STGI   ".byte 0x0f, 0x01, 0xdc"
+#define SVM_INVLPGA ".byte 0x0f, 0x01, 0xdf"
+
+/////// function references
+
+void helper_stgi();
+void vmexit(uint64_t exit_code, uint64_t exit_info_1);
+
+#define INTERCEPTED(mask) (env->hflags & HF_SVM_MASK) && (env->intercept & mask) 
+#define INTERCEPTEDw(var, mask) (env->hflags & HF_SVM_MASK) && (env->intercept ## var & mask) 
+#define INTERCEPTEDl(var, mask) (env->hflags & HF_SVM_MASK) && (env->intercept ## var & mask) 
+
+/*
+#define INTERCEPTED(mask) (env->hflags & HF_SVM_MASK) && (ldq_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept)) & mask) 
+#define INTERCEPTEDw(var, mask) (env->hflags & HF_SVM_MASK) && (lduw_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept ## var)) & mask) 
+#define INTERCEPTEDl(var, mask) (env->hflags & HF_SVM_MASK) && (ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept ## var)) & mask) 
+*/
+#endif
+
Index: qemu/cpu-exec.c
===================================================================
--- qemu.orig/cpu-exec.c
+++ qemu/cpu-exec.c
@@ -295,6 +295,8 @@ int cpu_exec(CPUState *env1)
             env->current_tb = NULL;
             /* if an exception is pending, we execute it here */
             if (env->exception_index >= 0) {
+//		if(env->exception_index < 100) {
+//		printf("E(%x)", env->exception_index); fflush(NULL); }
                 if (env->exception_index >= EXCP_INTERRUPT) {
                     /* exit request from the cpu execution loop */
                     ret = env->exception_index;
@@ -316,6 +318,12 @@ int cpu_exec(CPUState *env1)
                     /* simulate a real cpu exception. On i386, it can
                        trigger new exceptions, but we do not handle
                        double or triple faults yet. */
+#if 0
+                    if(env->exception_is_int && env->hflags & HF_SVM_MASK) {
+                        printf("Real Interrupt %x - will result in VMEXIT now\n", interrupt_request);
+                        vmexit(SVM_EXIT_INTR, 0);
+                    }
+#endif
                     do_interrupt(env->exception_index, 
                                  env->exception_is_int, 
                                  env->error_code, 
@@ -373,7 +381,11 @@ int cpu_exec(CPUState *env1)
                 tmp_T0 = T0;
 #endif	    
                 interrupt_request = env->interrupt_request;
-                if (__builtin_expect(interrupt_request, 0)) {
+                if (__builtin_expect(interrupt_request, 0)
+#if defined(TARGET_I386)
+			&& env->hflags & HF_GIF_MASK
+#endif
+				) {
                     if (interrupt_request & CPU_INTERRUPT_DEBUG) {
                         env->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
                         env->exception_index = EXCP_DEBUG;
@@ -391,6 +403,8 @@ int cpu_exec(CPUState *env1)
 #if defined(TARGET_I386)
                     if ((interrupt_request & CPU_INTERRUPT_SMI) &&
                         !(env->hflags & HF_SMM_MASK)) {
+                        if(INTERCEPTED(INTERCEPT_SMI))
+                            vmexit(SVM_EXIT_SMI, 0);
                         env->interrupt_request &= ~CPU_INTERRUPT_SMI;
                         do_smm_enter();
 #if defined(__sparc__) && !defined(HOST_SOLARIS)
@@ -399,9 +413,13 @@ int cpu_exec(CPUState *env1)
                         T0 = 0;
 #endif
                     } else if ((interrupt_request & CPU_INTERRUPT_HARD) &&
-                        (env->eflags & IF_MASK) && 
+                        (env->eflags & IF_MASK || env->hflags & HF_HIF_MASK) && 
                         !(env->hflags & HF_INHIBIT_IRQ_MASK)) {
                         int intno;
+                        if(env->hflags & HF_SVM_MASK) {
+//printf("Real Interrupt %x - will result in VMEXIT now\n", interrupt_request);
+                            vmexit(SVM_EXIT_INTR, 0);
+			}
                         env->interrupt_request &= ~CPU_INTERRUPT_HARD;
                         intno = cpu_get_pic_interrupt(env);
                         if (loglevel & CPU_LOG_TB_IN_ASM) {
@@ -415,7 +433,24 @@ int cpu_exec(CPUState *env1)
 #else
                         T0 = 0;
 #endif
-                    }
+                    } else if(env->hflags & HF_SVM_MASK) {
+			if(ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl)) & V_IRQ_MASK) {
+                             // FIXME: this should respect TPR
+	                     if(env->eflags & IF_MASK) {
+                                 int intno = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_vector));
+	                         svm_check_intercept(SVM_EXIT_VINTR);
+	                         if (loglevel & CPU_LOG_TB_IN_ASM)
+	                             fprintf(logfile, "Servicing virtual hardware INT=0x%02x\n", intno);
+	                         stl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl), ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl)) & ~V_IRQ_MASK);
+	                         do_interrupt(intno, 0, 0, 0, 1);
+#if defined(__sparc__) && !defined(HOST_SOLARIS)
+                        tmp_T0 = 0;
+#else
+                        T0 = 0;
+#endif
+	                     }
+			}
+		    }
 #elif defined(TARGET_PPC)
 #if 0
                     if ((interrupt_request & CPU_INTERRUPT_RESET)) {
Index: qemu/target-i386/exec.h
===================================================================
--- qemu.orig/target-i386/exec.h
+++ qemu/target-i386/exec.h
@@ -501,6 +501,15 @@ void update_fp_status(void);
 void helper_hlt(void);
 void helper_monitor(void);
 void helper_mwait(void);
+void helper_vmrun(target_ulong addr);
+void helper_vmmcall(void);
+void helper_vmload(target_ulong addr);
+void helper_vmsave(target_ulong addr);
+void helper_stgi(void);
+void helper_clgi(void);
+void helper_skinit(void);
+void helper_invlpga(void);
+void vmexit(uint64_t exit_code, uint64_t exit_info_1);
 
 extern const uint8_t parity_table[256];
 extern const uint8_t rclw_table[32];
@@ -588,3 +597,4 @@ static inline int cpu_halted(CPUState *e
     }
     return EXCP_HALTED;
 }
+
Index: qemu/exec.c
===================================================================
--- qemu.orig/exec.c
+++ qemu/exec.c
@@ -1281,6 +1281,10 @@ void cpu_abort(CPUState *env, const char
     vfprintf(stderr, fmt, ap);
     fprintf(stderr, "\n");
 #ifdef TARGET_I386
+    if(env->hflags & HF_SVM_MASK) { // most probably the virtual machine should not
+	                            // be shut down but rather caught by the VMM
+        vmexit(SVM_EXIT_SHUTDOWN, 0);
+    }
     cpu_dump_state(env, stderr, fprintf, X86_DUMP_FPU | X86_DUMP_CCOP);
 #else
     cpu_dump_state(env, stderr, fprintf, 0);

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH][RFC] SVM support
  2007-08-22 19:58 [Qemu-devel] [PATCH][RFC] SVM support Alexander Graf
@ 2007-08-22 20:19 ` Blue Swirl
  2007-08-22 20:26   ` Alexander Graf
  2007-08-23  9:05 ` Avi Kivity
  2007-08-24 17:13 ` Alexander Graf
  2 siblings, 1 reply; 10+ messages in thread
From: Blue Swirl @ 2007-08-22 20:19 UTC (permalink / raw)
  To: qemu-devel

On 8/22/07, Alexander Graf <agraf@suse.de> wrote:
> - All interceptions (well, maybe I did oversee one or two)

Nice work! For better performance, you should do the op.c checks
statically at translation time (if possible).

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH][RFC] SVM support
  2007-08-22 20:19 ` Blue Swirl
@ 2007-08-22 20:26   ` Alexander Graf
  2007-08-23  9:14     ` Avi Kivity
  0 siblings, 1 reply; 10+ messages in thread
From: Alexander Graf @ 2007-08-22 20:26 UTC (permalink / raw)
  To: qemu-devel

Blue Swirl wrote:
> On 8/22/07, Alexander Graf <agraf@suse.de> wrote:
>   
>> - All interceptions (well, maybe I did oversee one or two)
>>     
>
> Nice work! For better performance, you should do the op.c checks
> statically at translation time (if possible).
>
>
>   
Thanks. I thought about that first as well, but can't. The information
if an intercept should occur is defined in the VMCB, which is passed as
argument on VMRUN (so whenever one enters the VM). This means that the
very same TB can be executed with completely different intercepts, which
means I have to fall back to runtime detection in op.c.

I thought about moving some functionality from helper.c to op.c.
Does that improve anything?

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH][RFC] SVM support
  2007-08-22 19:58 [Qemu-devel] [PATCH][RFC] SVM support Alexander Graf
  2007-08-22 20:19 ` Blue Swirl
@ 2007-08-23  9:05 ` Avi Kivity
  2007-08-24 17:13 ` Alexander Graf
  2 siblings, 0 replies; 10+ messages in thread
From: Avi Kivity @ 2007-08-23  9:05 UTC (permalink / raw)
  To: qemu-devel

Alexander Graf wrote:
> Hi,
>
> this patch adds support for SVM (the virtual machine extension on amd64)
> to qemu's x86_64 target. It still needs cleanup (splitting, indentation,
> etc) and lacks some basic functionality but maybe someone will find
> interest in it as it is already.
>
>   

Obviously this is a boon for hypervisor developers!  I'm looking forward 
to using qemu to debug the tricky parts of kvm.

Hopefully a VT implementation will turn up as well.

-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH][RFC] SVM support
  2007-08-22 20:26   ` Alexander Graf
@ 2007-08-23  9:14     ` Avi Kivity
  0 siblings, 0 replies; 10+ messages in thread
From: Avi Kivity @ 2007-08-23  9:14 UTC (permalink / raw)
  To: qemu-devel

Alexander Graf wrote:
> Blue Swirl wrote:
>   
>> On 8/22/07, Alexander Graf <agraf@suse.de> wrote:
>>   
>>     
>>> - All interceptions (well, maybe I did oversee one or two)
>>>     
>>>       
>> Nice work! For better performance, you should do the op.c checks
>> statically at translation time (if possible).
>>
>>
>>   
>>     
> Thanks. I thought about that first as well, but can't. The information
> if an intercept should occur is defined in the VMCB, which is passed as
> argument on VMRUN (so whenever one enters the VM). This means that the
> very same TB can be executed with completely different intercepts, which
> means I have to fall back to runtime detection in op.c.
>   

You can have the intercept vector be a part of the TB lookup key.  So, 
if the same code is executed natively, or with different intercept 
vectors, it gets a different TB.  Qemu already does this for a number of 
things like cpu operating mode,  cs base, etc.

-- 
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH][RFC] SVM support
  2007-08-22 19:58 [Qemu-devel] [PATCH][RFC] SVM support Alexander Graf
  2007-08-22 20:19 ` Blue Swirl
  2007-08-23  9:05 ` Avi Kivity
@ 2007-08-24 17:13 ` Alexander Graf
  2007-08-24 23:55   ` Fabrice Bellard
  2 siblings, 1 reply; 10+ messages in thread
From: Alexander Graf @ 2007-08-24 17:13 UTC (permalink / raw)
  To: qemu-devel

[-- Attachment #1: Type: text/plain, Size: 2227 bytes --]

Alexander Graf wrote:
> Hi,
>
> this patch adds support for SVM (the virtual machine extension on amd64)
> to qemu's x86_64 target. It still needs cleanup (splitting, indentation,
> etc) and lacks some basic functionality but maybe someone will find
> interest in it as it is already.
>
> In kvm real and protected modes work flawlessly as far as I can tell
> (minix and 32-bit linux worked).
> Long mode seems to work quite ok as well, though I am not able to get a
> Linux kernel booted up (MenuetOS works).
>
> What does work?
>
> - VMRUN, VMLOAD, VMSAVE, VMEXIT, STGI, CLGI
> - Event injection
> - All interceptions (well, maybe I did oversee one or two)
> - Context switching to the VM and back to the VMM
>
> What is missing?
>
> - According to the SVM specification NPTs are optional, so I did not
> include them (yet)
> - Everything related to device virtualisation
> - The "Secure" part of the extension (would need TPM emulation for that)
> - Debugging support (maybe it does work, I actually have never tried to
> debug a kvm virtualised machine)
> - I included a dirty hack to update EIP on every instruction.
> - TSC_OFFSET
> - ASID support
> - Sanity checks
> - Task switch and Ferr_Freeze Intercepts
> - VMMCALL
> - SMM support
> - SVM-Lock
>
> I hope this is useful to someone.
> I am going to continue to refine this patch until it implements all of
> the SVM specification.
>
> Comments as well as patches are greatly appreciated.
>
>
> Thanks,
>
> Alexander Graf
>   
This is a reworked version of the same patch, where I can now boot into
a x86_64 Linux kernel.
I rewrote all the access functions for the VMCB, so this time everything
should work just fine on BE-machines. As suggested I moved the injection
detection to translate.c, so the non-virtualized machine should be as
fast as before (w/o svm support), while the virtual one got a speed
boost from that as well.
I removed the EIP hack and set EIP every time an interception occurs, so
unlike the previous version this patch really should have no negative
effect on speed any more.

If any of the people on this list using SVM (kvm developers, maybe xen
developers) could have a deep look into this I'd be really thankful.

Thanks,

Alexander Graf


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: svm.patch --]
[-- Type: text/x-patch; name="svm.patch", Size: 77961 bytes --]

Index: qemu/target-i386/helper2.c
===================================================================
--- qemu.orig/target-i386/helper2.c
+++ qemu/target-i386/helper2.c
@@ -27,8 +27,9 @@
 
 #include "cpu.h"
 #include "exec-all.h"
+#include "svm.h"
 
-//#define DEBUG_MMU
+// #define DEBUG_MMU
 
 #ifdef USE_CODE_COPY
 #include <asm/ldt.h>
@@ -111,6 +112,7 @@ CPUX86State *cpu_x86_init(void)
                                CPUID_CX8 | CPUID_PGE | CPUID_CMOV |
                                CPUID_PAT);
         env->pat = 0x0007040600070406ULL;
+        env->cpuid_ext3_features = CPUID_EXT3_SVM;
         env->cpuid_ext_features = CPUID_EXT_SSE3;
         env->cpuid_features |= CPUID_FXSR | CPUID_MMX | CPUID_SSE | CPUID_SSE2 | CPUID_PAE | CPUID_SEP;
         env->cpuid_features |= CPUID_APIC;
@@ -131,7 +133,7 @@ CPUX86State *cpu_x86_init(void)
         /* currently not enabled for std i386 because not fully tested */
         env->cpuid_ext2_features = (env->cpuid_features & 0x0183F3FF);
         env->cpuid_ext2_features |= CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX;
-        env->cpuid_xlevel = 0x80000008;
+        env->cpuid_xlevel = 0x8000000a;
 
         /* these features are needed for Win64 and aren't fully implemented */
         env->cpuid_features |= CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA;
@@ -160,6 +162,7 @@ void cpu_reset(CPUX86State *env)
 #ifdef CONFIG_SOFTMMU
     env->hflags |= HF_SOFTMMU_MASK;
 #endif
+    env->hflags |= HF_GIF_MASK;
 
     cpu_x86_update_cr0(env, 0x60000010);
     env->a20_mask = 0xffffffff;
@@ -863,7 +866,6 @@ int cpu_x86_handle_mmu_fault(CPUX86State
  do_fault_protect:
     error_code = PG_ERROR_P_MASK;
  do_fault:
-    env->cr[2] = addr;
     error_code |= (is_write << PG_ERROR_W_BIT);
     if (is_user)
         error_code |= PG_ERROR_U_MASK;
@@ -871,8 +873,15 @@ int cpu_x86_handle_mmu_fault(CPUX86State
         (env->efer & MSR_EFER_NXE) && 
         (env->cr[4] & CR4_PAE_MASK))
         error_code |= PG_ERROR_I_D_MASK;
+    if(INTERCEPTEDl(_exceptions, 1 << EXCP0E_PAGE)) {
+        stq_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_info_2), addr);
+    } else {
+        env->cr[2] = addr;
+    }
     env->error_code = error_code;
     env->exception_index = EXCP0E_PAGE;
+    if(env->hflags & HF_SVM_MASK) // the VMM will handle this
+        return 2;
     return 1;
 }
 
Index: qemu/target-i386/translate.c
===================================================================
--- qemu.orig/target-i386/translate.c
+++ qemu/target-i386/translate.c
@@ -77,6 +77,7 @@ typedef struct DisasContext {
                    static state change (stop translation) */
     /* current block context */
     target_ulong cs_base; /* base of CS segment */
+    uint64_t intercept; /* SVM intercept vector */
     int pe;     /* protected mode */
     int code32; /* 32 bit code segment */
 #ifdef TARGET_X86_64
@@ -1995,6 +1996,50 @@ static void gen_movl_seg_T0(DisasContext
     }
 }
 
+#define update_eip() gen_jmp_im(pc_start - s->cs_base)
+#define svm_check_intercept(x) svm_check_intercept_param(x,0)
+#define svm_check_intercept_param(x,y) _svm_check_intercept_param(s,x,y,pc_start)
+#define svm_check_intercept_io(x) update_eip(); gen_op_svm_check_intercept_io(x);
+
+#ifdef TARGET_X86_64
+#define SVM_movq_T1_im(x) gen_op_movq_T1_im64((x) >> 32, x)
+#else
+#define SVM_movq_T1_im(x) gen_op_movq_T1_im(x)
+#endif
+#define SVM_CHECK_IO(x) if(s->intercept & (1L << INTERCEPT_IOIO_PROT)) { SVM_movq_T1_im(s->pc - s->cs_base); svm_check_intercept_io(x); gen_jmp_im(s->pc - s->cs_base); gen_eob(s); break; } 
+#define SVM_IS_REP ((prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) ? 8 : 0)
+static inline int _svm_check_intercept_param(DisasContext *s, uint64_t type, uint64_t param, target_ulong pc_start) {
+    if(!(s->intercept & (1L << 63))) return 0; // no SVM activated
+    switch(type) {
+        case SVM_EXIT_READ_CR0 ... SVM_EXIT_EXCP_BASE - 1: // CRx and DRx reads/writes
+            update_eip();
+            gen_op_svm_check_intercept_param(type, param);
+            // this is a special case as we do not know if the interception occurs
+            // so we assume there was none
+            return 0;
+        case SVM_EXIT_MSR:
+            if(s->intercept & (1L << INTERCEPT_MSR_PROT)) {
+                update_eip();
+                gen_op_svm_check_intercept_param(type, param);
+                // this is a special case as we do not know if the interception occurs
+                // so we assume there was none
+                return 0;
+            }
+            break;
+        default:
+            if(s->intercept & (1L << (type - SVM_EXIT_INTR))) {
+                update_eip();
+                SVM_movq_T1_im(param);
+                gen_op_svm_vmexit(type >> 32, type);
+                // we can optimize this one so TBs don't get longer than up to vmexit
+                gen_jmp_im(s->pc - s->cs_base);
+                gen_eob(s);
+                return 1;
+            }
+    }
+    return 0;
+}
+
 static inline void gen_stack_update(DisasContext *s, int addend)
 {
 #ifdef TARGET_X86_64
@@ -3154,6 +3199,7 @@ static target_ulong disas_insn(DisasCont
     target_ulong next_eip, tval;
     int rex_w, rex_r;
 
+    // update_eip(); // FIXME: find a way to fetch EIP without updating it all the time
     s->pc = pc_start;
     prefixes = 0;
     aflag = s->code32;
@@ -4880,6 +4926,9 @@ static target_ulong disas_insn(DisasCont
         else
             ot = dflag ? OT_LONG : OT_WORD;
         gen_check_io(s, ot, 1, pc_start - s->cs_base);
+        gen_op_mov_TN_reg[OT_WORD][0][R_EDX]();
+        gen_op_andl_T0_ffff();
+        SVM_CHECK_IO(SVM_IOIO_TYPE_MASK | (1 << (4+ot)) | SVM_IS_REP | 4 | (1 << (7+s->aflag)));
         if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) {
             gen_repz_ins(s, ot, pc_start - s->cs_base, s->pc - s->cs_base);
         } else {
@@ -4893,6 +4942,9 @@ static target_ulong disas_insn(DisasCont
         else
             ot = dflag ? OT_LONG : OT_WORD;
         gen_check_io(s, ot, 1, pc_start - s->cs_base);
+        gen_op_mov_TN_reg[OT_WORD][0][R_EDX]();
+        gen_op_andl_T0_ffff();
+        SVM_CHECK_IO((1 << (4+ot)) | SVM_IS_REP | 4 | (1 << (7+s->aflag)));
         if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) {
             gen_repz_outs(s, ot, pc_start - s->cs_base, s->pc - s->cs_base);
         } else {
@@ -4902,6 +4954,7 @@ static target_ulong disas_insn(DisasCont
 
         /************************/
         /* port I/O */
+
     case 0xe4:
     case 0xe5:
         if ((b & 1) == 0)
@@ -4911,6 +4964,7 @@ static target_ulong disas_insn(DisasCont
         val = ldub_code(s->pc++);
         gen_op_movl_T0_im(val);
         gen_check_io(s, ot, 0, pc_start - s->cs_base);
+        SVM_CHECK_IO(SVM_IOIO_TYPE_MASK | SVM_IS_REP | (1 << (4+ot)));
         gen_op_in[ot]();
         gen_op_mov_reg_T1[ot][R_EAX]();
         break;
@@ -4923,6 +4977,7 @@ static target_ulong disas_insn(DisasCont
         val = ldub_code(s->pc++);
         gen_op_movl_T0_im(val);
         gen_check_io(s, ot, 0, pc_start - s->cs_base);
+        SVM_CHECK_IO(SVM_IS_REP | (1 << (4+ot)));
         gen_op_mov_TN_reg[ot][1][R_EAX]();
         gen_op_out[ot]();
         break;
@@ -4935,6 +4990,7 @@ static target_ulong disas_insn(DisasCont
         gen_op_mov_TN_reg[OT_WORD][0][R_EDX]();
         gen_op_andl_T0_ffff();
         gen_check_io(s, ot, 0, pc_start - s->cs_base);
+        SVM_CHECK_IO(SVM_IOIO_TYPE_MASK | SVM_IS_REP | (1 << (4+ot)));
         gen_op_in[ot]();
         gen_op_mov_reg_T1[ot][R_EAX]();
         break;
@@ -4947,6 +5003,7 @@ static target_ulong disas_insn(DisasCont
         gen_op_mov_TN_reg[OT_WORD][0][R_EDX]();
         gen_op_andl_T0_ffff();
         gen_check_io(s, ot, 0, pc_start - s->cs_base);
+        SVM_CHECK_IO(SVM_IS_REP | (1 << (4+ot)));
         gen_op_mov_TN_reg[ot][1][R_EAX]();
         gen_op_out[ot]();
         break;
@@ -5004,6 +5061,7 @@ static target_ulong disas_insn(DisasCont
         val = 0;
         goto do_lret;
     case 0xcf: /* iret */
+        if (svm_check_intercept(SVM_EXIT_IRET)) break;
         if (!s->pe) {
             /* real mode */
             gen_op_iret_real(s->dflag);
@@ -5125,6 +5183,7 @@ static target_ulong disas_insn(DisasCont
         /************************/
         /* flags */
     case 0x9c: /* pushf */
+        if (svm_check_intercept(SVM_EXIT_PUSHF)) break;
         if (s->vm86 && s->iopl != 3) {
             gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
         } else {
@@ -5135,6 +5194,7 @@ static target_ulong disas_insn(DisasCont
         }
         break;
     case 0x9d: /* popf */
+        if (svm_check_intercept(SVM_EXIT_POPF)) break;
         if (s->vm86 && s->iopl != 3) {
             gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
         } else {
@@ -5344,6 +5404,9 @@ static target_ulong disas_insn(DisasCont
         /* XXX: correct lock test for all insn */
         if (prefixes & PREFIX_LOCK)
             goto illegal_op;
+        if (prefixes & PREFIX_REPZ) {
+            svm_check_intercept(SVM_EXIT_INVD);
+        }
         break;
     case 0x9b: /* fwait */
         if ((s->flags & (HF_MP_MASK | HF_TS_MASK)) == 
@@ -5357,11 +5420,13 @@ static target_ulong disas_insn(DisasCont
         }
         break;
     case 0xcc: /* int3 */
+        if (svm_check_intercept(SVM_EXIT_SWINT)) break;
         gen_interrupt(s, EXCP03_INT3, pc_start - s->cs_base, s->pc - s->cs_base);
         break;
     case 0xcd: /* int N */
         val = ldub_code(s->pc++);
-        if (s->vm86 && s->iopl != 3) {
+        if(svm_check_intercept(SVM_EXIT_SWINT)) break;
+	if (s->vm86 && s->iopl != 3) {
             gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base); 
         } else {
             gen_interrupt(s, val, pc_start - s->cs_base, s->pc - s->cs_base);
@@ -5370,12 +5435,14 @@ static target_ulong disas_insn(DisasCont
     case 0xce: /* into */
         if (CODE64(s))
             goto illegal_op;
+        if (svm_check_intercept(SVM_EXIT_SWINT)) break;
         if (s->cc_op != CC_OP_DYNAMIC)
             gen_op_set_cc_op(s->cc_op);
         gen_jmp_im(pc_start - s->cs_base);
         gen_op_into(s->pc - pc_start);
         break;
     case 0xf1: /* icebp (undocumented, exits to external debugger) */
+        if(svm_check_intercept(SVM_EXIT_ICEBP)) break;
 #if 1
         gen_debug(s, pc_start - s->cs_base);
 #else
@@ -5503,13 +5570,20 @@ static target_ulong disas_insn(DisasCont
         if (s->cpl != 0) {
             gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
         } else {
-            if (b & 2)
+            int retval = 0;
+            if (b & 2) {
+                retval = svm_check_intercept_param(SVM_EXIT_MSR, 0);
                 gen_op_rdmsr();
-            else
+            } else {
+                retval = svm_check_intercept_param(SVM_EXIT_MSR, 1);
                 gen_op_wrmsr();
+            }
+            if(retval)
+                gen_eob(s);
         }
         break;
     case 0x131: /* rdtsc */
+        if(svm_check_intercept(SVM_EXIT_RDTSC)) break;
         gen_jmp_im(pc_start - s->cs_base);
         gen_op_rdtsc();
         break;
@@ -5572,6 +5646,7 @@ static target_ulong disas_insn(DisasCont
         break;
 #endif
     case 0x1a2: /* cpuid */
+        if(svm_check_intercept(SVM_EXIT_CPUID)) break;
         gen_op_cpuid();
         break;
     case 0xf4: /* hlt */
@@ -5580,6 +5655,7 @@ static target_ulong disas_insn(DisasCont
         } else {
             if (s->cc_op != CC_OP_DYNAMIC)
                 gen_op_set_cc_op(s->cc_op);
+            if(svm_check_intercept(SVM_EXIT_HLT)) break;
             gen_jmp_im(s->pc - s->cs_base);
             gen_op_hlt();
             s->is_jmp = 3;
@@ -5593,6 +5669,7 @@ static target_ulong disas_insn(DisasCont
         case 0: /* sldt */
             if (!s->pe || s->vm86)
                 goto illegal_op;
+            if(svm_check_intercept(SVM_EXIT_LDTR_READ)) break;
             gen_op_movl_T0_env(offsetof(CPUX86State,ldt.selector));
             ot = OT_WORD;
             if (mod == 3)
@@ -5606,6 +5683,7 @@ static target_ulong disas_insn(DisasCont
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_ldst_modrm(s, modrm, OT_WORD, OR_TMP0, 0);
+                if(svm_check_intercept(SVM_EXIT_LDTR_WRITE)) break;
                 gen_jmp_im(pc_start - s->cs_base);
                 gen_op_lldt_T0();
             }
@@ -5613,6 +5691,7 @@ static target_ulong disas_insn(DisasCont
         case 1: /* str */
             if (!s->pe || s->vm86)
                 goto illegal_op;
+            if(svm_check_intercept(SVM_EXIT_TR_READ)) break;
             gen_op_movl_T0_env(offsetof(CPUX86State,tr.selector));
             ot = OT_WORD;
             if (mod == 3)
@@ -5626,6 +5705,7 @@ static target_ulong disas_insn(DisasCont
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_ldst_modrm(s, modrm, OT_WORD, OR_TMP0, 0);
+                if(svm_check_intercept(SVM_EXIT_TR_WRITE)) break;
                 gen_jmp_im(pc_start - s->cs_base);
                 gen_op_ltr_T0();
             }
@@ -5657,6 +5737,7 @@ static target_ulong disas_insn(DisasCont
             if (mod == 3)
                 goto illegal_op;
             gen_lea_modrm(s, modrm, &reg_addr, &offset_addr);
+            if(svm_check_intercept(SVM_EXIT_GDTR_READ)) break;
             gen_op_movl_T0_env(offsetof(CPUX86State, gdt.limit));
             gen_op_st_T0_A0[OT_WORD + s->mem_index]();
             gen_add_A0_im(s, 2);
@@ -5672,7 +5753,8 @@ static target_ulong disas_insn(DisasCont
                     if (!(s->cpuid_ext_features & CPUID_EXT_MONITOR) ||
                         s->cpl != 0)
                         goto illegal_op;
-                    gen_jmp_im(pc_start - s->cs_base);
+                    // gen_jmp_im(pc_start - s->cs_base);
+                    if(svm_check_intercept(SVM_EXIT_MONITOR)) break;
 #ifdef TARGET_X86_64
                     if (s->aflag == 2) {
                         gen_op_movq_A0_reg[R_EBX]();
@@ -5696,7 +5778,8 @@ static target_ulong disas_insn(DisasCont
                         gen_op_set_cc_op(s->cc_op);
                         s->cc_op = CC_OP_DYNAMIC;
                     }
-                    gen_jmp_im(s->pc - s->cs_base);
+                    // gen_jmp_im(s->pc - s->cs_base);
+                    if(svm_check_intercept(SVM_EXIT_MWAIT)) break;
                     gen_op_mwait();
                     gen_eob(s);
                     break;
@@ -5705,6 +5788,7 @@ static target_ulong disas_insn(DisasCont
                 }
             } else { /* sidt */
                 gen_lea_modrm(s, modrm, &reg_addr, &offset_addr);
+                if(svm_check_intercept(SVM_EXIT_IDTR_READ)) break;
                 gen_op_movl_T0_env(offsetof(CPUX86State, idt.limit));
                 gen_op_st_T0_A0[OT_WORD + s->mem_index]();
                 gen_add_A0_im(s, 2);
@@ -5716,9 +5800,47 @@ static target_ulong disas_insn(DisasCont
             break;
         case 2: /* lgdt */
         case 3: /* lidt */
-            if (mod == 3)
-                goto illegal_op;
-            if (s->cpl != 0) {
+            if (mod == 3) {
+                update_eip();
+                switch(rm) {
+                case 0: /* VMRUN */
+                    if(svm_check_intercept(SVM_EXIT_VMRUN)) break;
+                    gen_op_vmrun();
+                    gen_eob(s); /* We probably will have to set EIP in here */
+                    break;
+                case 1: /* VMMCALL */
+                    if(svm_check_intercept(SVM_EXIT_VMMCALL)) break;
+                    // FIXME: cause #UD if hflags & SVM
+                    gen_op_vmmcall();
+                    break;
+                case 2: /* VMLOAD */
+                    if(svm_check_intercept(SVM_EXIT_VMLOAD)) break;
+                    gen_op_vmload();
+                    break;
+                case 3: /* VMSAVE */
+                    if(svm_check_intercept(SVM_EXIT_VMSAVE)) break;
+                    gen_op_vmsave();
+                    break;
+                case 4: /* STGI */
+                    if(svm_check_intercept(SVM_EXIT_STGI)) break;
+                    gen_op_stgi();
+                    break;
+                case 5: /* CLGI */
+                    if(svm_check_intercept(SVM_EXIT_CLGI)) break;
+                    gen_op_clgi();
+                    break;
+                case 6: /* SKINIT */
+                    if(svm_check_intercept(SVM_EXIT_SKINIT)) break;
+                    gen_op_skinit();
+                    break;
+                case 7: /* INVLPGA */
+                    if(svm_check_intercept(SVM_EXIT_INVLPGA)) break;
+                    gen_op_invlpga();
+                    break;
+                default:
+                    goto illegal_op;
+                }
+            } else if (s->cpl != 0) {
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_lea_modrm(s, modrm, &reg_addr, &offset_addr);
@@ -5728,9 +5850,11 @@ static target_ulong disas_insn(DisasCont
                 if (!s->dflag)
                     gen_op_andl_T0_im(0xffffff);
                 if (op == 2) {
+                    if(svm_check_intercept(SVM_EXIT_GDTR_WRITE)) break;
                     gen_op_movtl_env_T0(offsetof(CPUX86State,gdt.base));
                     gen_op_movl_env_T1(offsetof(CPUX86State,gdt.limit));
                 } else {
+                    if(svm_check_intercept(SVM_EXIT_IDTR_WRITE)) break;
                     gen_op_movtl_env_T0(offsetof(CPUX86State,idt.base));
                     gen_op_movl_env_T1(offsetof(CPUX86State,idt.limit));
                 }
@@ -5754,6 +5878,7 @@ static target_ulong disas_insn(DisasCont
             if (s->cpl != 0) {
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
+                if(svm_check_intercept(SVM_EXIT_INVLPG)) break;
                 if (mod == 3) {
 #ifdef TARGET_X86_64
                     if (CODE64(s) && rm == 0) {
@@ -5784,6 +5909,7 @@ static target_ulong disas_insn(DisasCont
         if (s->cpl != 0) {
             gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
         } else {
+            if(svm_check_intercept(SVM_EXIT_INVD)) break;
             /* nothing to do */
         }
         break;
@@ -5905,6 +6031,7 @@ static target_ulong disas_insn(DisasCont
             case 8:
                 if (b & 2) {
                     gen_op_mov_TN_reg[ot][0][rm]();
+                    svm_check_intercept(SVM_EXIT_WRITE_CR0 + reg);
                     gen_op_movl_crN_T0(reg);
                     gen_jmp_im(s->pc - s->cs_base);
                     gen_eob(s);
@@ -5912,9 +6039,11 @@ static target_ulong disas_insn(DisasCont
 #if !defined(CONFIG_USER_ONLY) 
                     if (reg == 8)
                         gen_op_movtl_T0_cr8();
-                    else
+                    else {
 #endif
+                        svm_check_intercept(SVM_EXIT_READ_CR0 + reg);
                         gen_op_movtl_T0_env(offsetof(CPUX86State,cr[reg]));
+                    }
                     gen_op_mov_reg_T0[ot][rm]();
                 }
                 break;
@@ -5941,11 +6070,13 @@ static target_ulong disas_insn(DisasCont
             if (reg == 4 || reg == 5 || reg >= 8)
                 goto illegal_op;
             if (b & 2) {
+                svm_check_intercept(SVM_EXIT_WRITE_DR0 + reg);
                 gen_op_mov_TN_reg[ot][0][rm]();
                 gen_op_movl_drN_T0(reg);
                 gen_jmp_im(s->pc - s->cs_base);
                 gen_eob(s);
             } else {
+                svm_check_intercept(SVM_EXIT_READ_DR0 + reg);
                 gen_op_movtl_T0_env(offsetof(CPUX86State,dr[reg]));
                 gen_op_mov_reg_T0[ot][rm]();
             }
@@ -5955,6 +6086,7 @@ static target_ulong disas_insn(DisasCont
         if (s->cpl != 0) {
             gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
         } else {
+            svm_check_intercept(SVM_EXIT_WRITE_CR0);
             gen_op_clts();
             /* abort block because static cpu state changed */
             gen_jmp_im(s->pc - s->cs_base);
@@ -6046,6 +6178,7 @@ static target_ulong disas_insn(DisasCont
         /* ignore for now */
         break;
     case 0x1aa: /* rsm */
+        if(svm_check_intercept(SVM_EXIT_RSM)) break;
         if (!(s->flags & HF_SMM_MASK))
             goto illegal_op;
         if (s->cc_op != CC_OP_DYNAMIC) {
@@ -6480,6 +6613,7 @@ static inline int gen_intermediate_code_
     dc->singlestep_enabled = env->singlestep_enabled;
     dc->cc_op = CC_OP_DYNAMIC;
     dc->cs_base = cs_base;
+    dc->intercept = tb->intercept;
     dc->tb = tb;
     dc->popl_esp_hack = 0;
     /* select memory access functions */
Index: qemu/target-i386/cpu.h
===================================================================
--- qemu.orig/target-i386/cpu.h
+++ qemu/target-i386/cpu.h
@@ -46,6 +46,8 @@
 
 #include "softfloat.h"
 
+#include "svm.h"
+
 #if defined(__i386__) && !defined(CONFIG_SOFTMMU) && !defined(__APPLE__)
 #define USE_CODE_COPY
 #endif
@@ -84,6 +86,7 @@
 #define DESC_AVL_MASK   (1 << 20)
 #define DESC_P_MASK     (1 << 15)
 #define DESC_DPL_SHIFT  13
+#define DESC_DPL_MASK   (1 << DESC_DPL_SHIFT)
 #define DESC_S_MASK     (1 << 12)
 #define DESC_TYPE_SHIFT 8
 #define DESC_A_MASK     (1 << 8)
@@ -149,6 +152,9 @@
 #define HF_VM_SHIFT         17 /* must be same as eflags */
 #define HF_HALTED_SHIFT     18 /* CPU halted */
 #define HF_SMM_SHIFT        19 /* CPU in SMM mode */
+#define HF_SVM_SHIFT        20 /* CPU in SVM mode */
+#define HF_GIF_SHIFT        21 /* if set CPU takes interrupts */
+#define HF_HIF_SHIFT        22 /* shadow copy of IF_MASK when in SVM */
 
 #define HF_CPL_MASK          (3 << HF_CPL_SHIFT)
 #define HF_SOFTMMU_MASK      (1 << HF_SOFTMMU_SHIFT)
@@ -166,6 +172,9 @@
 #define HF_OSFXSR_MASK       (1 << HF_OSFXSR_SHIFT)
 #define HF_HALTED_MASK       (1 << HF_HALTED_SHIFT)
 #define HF_SMM_MASK          (1 << HF_SMM_SHIFT)
+#define HF_SVM_MASK          (1 << HF_SVM_SHIFT)
+#define HF_GIF_MASK          (1 << HF_GIF_SHIFT)
+#define HF_HIF_MASK          (1 << HF_HIF_SHIFT)
 
 #define CR0_PE_MASK  (1 << 0)
 #define CR0_MP_MASK  (1 << 1)
@@ -249,6 +258,8 @@
 #define MSR_GSBASE                      0xc0000101
 #define MSR_KERNELGSBASE                0xc0000102
 
+#define MSR_VM_HSAVE_PA                 0xc0010117
+
 /* cpuid_features bits */
 #define CPUID_FP87 (1 << 0)
 #define CPUID_VME  (1 << 1)
@@ -283,6 +294,8 @@
 #define CPUID_EXT2_FFXSR   (1 << 25)
 #define CPUID_EXT2_LM      (1 << 29)
 
+#define CPUID_EXT3_SVM     (1 << 2)
+
 #define EXCP00_DIVZ	0
 #define EXCP01_SSTP	1
 #define EXCP02_NMI	2
@@ -489,6 +502,16 @@ typedef struct CPUX86State {
     uint32_t sysenter_eip;
     uint64_t efer;
     uint64_t star;
+
+    target_phys_addr_t vm_hsave;
+    target_phys_addr_t vm_vmcb;
+    uint64_t intercept;
+    uint16_t intercept_cr_read;
+    uint16_t intercept_cr_write;
+    uint16_t intercept_dr_read;
+    uint16_t intercept_dr_write;
+    uint32_t intercept_exceptions;
+
 #ifdef TARGET_X86_64
     target_ulong lstar;
     target_ulong cstar;
Index: qemu/target-i386/op.c
===================================================================
--- qemu.orig/target-i386/op.c
+++ qemu/target-i386/op.c
@@ -1243,6 +1243,35 @@ void OPPROTO op_movl_crN_T0(void)
     helper_movl_crN_T0(PARAM1);
 }
 
+// thise pseudo-opcode check for SVM intercepts
+void OPPROTO op_svm_check_intercept(void)
+{
+    svm_check_intercept(PARAM1);
+}
+
+void OPPROTO op_svm_check_intercept_param(void)
+{
+    svm_check_intercept_param(PARAM1, PARAM2);
+}
+
+void OPPROTO op_svm_vmexit(void)
+{
+    vmexit(PARAMQ1, T1);
+}
+
+// this pseudo-opcode checks for IO intercepts
+void OPPROTO op_svm_check_intercept_io(void)
+{
+    // PARAM1 = TYPE (0 = OUT, 1 = IN; 4 = STRING; 8 = REP)
+    // T0     = PORT
+    // T1     = next eip
+    if(env->hflags & HF_SVM_MASK) {
+        stq_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_info_2), T1);
+	// ASIZE does not appear on real hw
+        svm_check_intercept_param(SVM_EXIT_IOIO, (PARAM1 & ~SVM_IOIO_ASIZE_MASK) | ((T0 & 0xffff) << 16));
+    }
+}
+
 #if !defined(CONFIG_USER_ONLY) 
 void OPPROTO op_movtl_T0_cr8(void)
 {
@@ -2447,3 +2476,45 @@ void OPPROTO op_emms(void)
 
 #define SHIFT 1
 #include "ops_sse.h"
+
+/* Secure Virtual Machine ops */
+
+void OPPROTO op_vmrun(void)
+{
+    helper_vmrun(EAX);
+}
+
+void OPPROTO op_vmmcall(void)
+{
+    helper_vmmcall();
+}
+
+void OPPROTO op_vmload(void)
+{
+    helper_vmload(EAX);
+}
+
+void OPPROTO op_vmsave(void)
+{
+    helper_vmsave(EAX);
+}
+
+void OPPROTO op_stgi(void)
+{
+    helper_stgi();
+}
+
+void OPPROTO op_clgi(void)
+{
+    helper_clgi();
+}
+
+void OPPROTO op_skinit(void)
+{
+    helper_skinit();
+}
+
+void OPPROTO op_invlpga(void)
+{
+    helper_invlpga();
+}
Index: qemu/target-i386/helper.c
===================================================================
--- qemu.orig/target-i386/helper.c
+++ qemu/target-i386/helper.c
@@ -173,8 +173,9 @@ static inline void get_ss_esp_from_tss(u
     }
 #endif
 
-    if (!(env->tr.flags & DESC_P_MASK))
+    if (!(env->tr.flags & DESC_P_MASK)) {
         cpu_abort(env, "invalid tss");
+    }
     type = (env->tr.flags >> DESC_TYPE_SHIFT) & 0xf;
     if ((type & 7) != 1)
         cpu_abort(env, "invalid tss type");
@@ -594,7 +595,19 @@ static void do_interrupt_protected(int i
     int has_error_code, new_stack, shift;
     uint32_t e1, e2, offset, ss, esp, ss_e1, ss_e2;
     uint32_t old_eip, sp_mask;
+    int svm_should_check = 1;
 
+    if(env->hflags & HF_SVM_MASK && !is_int && next_eip==-1) {
+	next_eip = EIP;
+        svm_should_check = 0;
+    }
+    
+    if(env->hflags & HF_SVM_MASK
+        && svm_should_check
+	&& (INTERCEPTEDl(_exceptions, 1 << intno)
+	&& !is_int)) {
+	raise_interrupt(intno, is_int, error_code, env->eip - next_eip);
+    } 
     has_error_code = 0;
     if (!is_int && !is_hw) {
         switch(intno) {
@@ -725,7 +738,7 @@ static void do_interrupt_protected(int i
         push_size += 8;
     push_size <<= shift;
 #endif
-    if (shift == 1) {
+      if (shift == 1) {
         if (new_stack) {
             if (env->eflags & VM_MASK) {
                 PUSHL(ssp, esp, sp_mask, env->segs[R_GS].selector);
@@ -742,7 +755,7 @@ static void do_interrupt_protected(int i
         if (has_error_code) {
             PUSHL(ssp, esp, sp_mask, error_code);
         }
-    } else {
+      } else {
         if (new_stack) {
             if (env->eflags & VM_MASK) {
                 PUSHW(ssp, esp, sp_mask, env->segs[R_GS].selector);
@@ -759,9 +772,9 @@ static void do_interrupt_protected(int i
         if (has_error_code) {
             PUSHW(ssp, esp, sp_mask, error_code);
         }
-    }
-    
-    if (new_stack) {
+      }
+      
+      if (new_stack) {
         if (env->eflags & VM_MASK) {
             cpu_x86_load_seg_cache(env, R_ES, 0, 0, 0, 0);
             cpu_x86_load_seg_cache(env, R_DS, 0, 0, 0, 0);
@@ -771,16 +784,16 @@ static void do_interrupt_protected(int i
         ss = (ss & ~3) | dpl;
         cpu_x86_load_seg_cache(env, R_SS, ss, 
                                ssp, get_seg_limit(ss_e1, ss_e2), ss_e2);
-    }
-    SET_ESP(esp, sp_mask);
+      }
+      SET_ESP(esp, sp_mask);
 
-    selector = (selector & ~3) | dpl;
-    cpu_x86_load_seg_cache(env, R_CS, selector, 
+      selector = (selector & ~3) | dpl;
+      cpu_x86_load_seg_cache(env, R_CS, selector, 
                    get_seg_base(e1, e2),
                    get_seg_limit(e1, e2),
                    e2);
-    cpu_x86_set_cpl(env, dpl);
-    env->eip = offset;
+      cpu_x86_set_cpl(env, dpl);
+      env->eip = offset;
 
     /* interrupt gate clear IF mask */
     if ((type & 1) == 0) {
@@ -812,8 +825,9 @@ static inline target_ulong get_rsp_from_
            env->tr.base, env->tr.limit);
 #endif
 
-    if (!(env->tr.flags & DESC_P_MASK))
+    if (!(env->tr.flags & DESC_P_MASK)) {
         cpu_abort(env, "invalid tss");
+    }
     index = 8 * level + 4;
     if ((index + 7) > env->tr.limit)
         raise_exception_err(EXCP0A_TSS, env->tr.selector & 0xfffc);
@@ -830,7 +844,20 @@ static void do_interrupt64(int intno, in
     int has_error_code, new_stack;
     uint32_t e1, e2, e3, ss;
     target_ulong old_eip, esp, offset;
+    int svm_should_check = 1;
 
+    if(env->hflags & HF_SVM_MASK && !is_int && next_eip==-1) {
+	next_eip = EIP;
+        svm_should_check = 0;
+    }
+    if(env->hflags & HF_SVM_MASK
+        && svm_should_check
+	&& INTERCEPTEDl(_exceptions, 1 << intno)
+	&& !is_int) {
+        if (loglevel & CPU_LOG_TB_IN_ASM)
+	    fprintf(logfile, "\n64 bit interrupt\n");
+	raise_interrupt(intno, is_int, error_code, 0); // env->eip - next_eip);
+    }
     has_error_code = 0;
     if (!is_int && !is_hw) {
         switch(intno) {
@@ -1077,7 +1104,18 @@ static void do_interrupt_real(int intno,
     int selector;
     uint32_t offset, esp;
     uint32_t old_cs, old_eip;
+    int svm_should_check = 1;
 
+    if(env->hflags & HF_SVM_MASK && !is_int && next_eip==-1) {
+	next_eip = EIP;
+        svm_should_check = 0;
+    }
+    if(env->hflags & HF_SVM_MASK
+        && svm_should_check
+	&& INTERCEPTEDl(_exceptions, 1 << intno)
+	&& !is_int) {
+	raise_interrupt(intno, is_int, error_code, 0); // env->eip - next_eip);
+    }
     /* real mode (simpler !) */
     dt = &env->idt;
     if (intno * 4 + 3 > dt->limit)
@@ -1096,7 +1134,7 @@ static void do_interrupt_real(int intno,
     PUSHW(ssp, esp, 0xffff, compute_eflags());
     PUSHW(ssp, esp, 0xffff, old_cs);
     PUSHW(ssp, esp, 0xffff, old_eip);
-    
+        
     /* update processor state */
     ESP = (ESP & ~0xffff) | (esp & 0xffff);
     env->eip = offset;
@@ -1227,13 +1265,16 @@ int check_exception(int intno, int *erro
 void raise_interrupt(int intno, int is_int, int error_code, 
                      int next_eip_addend)
 {
-    if (!is_int)
+    if (!is_int) {
+        svm_check_intercept_param(SVM_EXIT_EXCP_BASE + intno, error_code);
         intno = check_exception(intno, &error_code);
+    }
 
     env->exception_index = intno;
     env->error_code = error_code;
     env->exception_is_int = is_int;
     env->exception_next_eip = env->eip + next_eip_addend;
+    
     cpu_loop_exit();
 }
 
@@ -1665,7 +1706,7 @@ void helper_cpuid(void)
     case 0x80000001:
         EAX = env->cpuid_features;
         EBX = 0;
-        ECX = 0;
+        ECX = env->cpuid_ext3_features;
         EDX = env->cpuid_ext2_features;
         break;
     case 0x80000002:
@@ -2739,6 +2780,9 @@ void helper_wrmsr(void)
     case MSR_PAT:
         env->pat = val;
         break;
+    case MSR_VM_HSAVE_PA:
+        env->vm_hsave = val;
+        break;
 #ifdef TARGET_X86_64
     case MSR_LSTAR:
         env->lstar = val;
@@ -2790,6 +2834,9 @@ void helper_rdmsr(void)
     case MSR_PAT:
         val = env->pat;
         break;
+    case MSR_VM_HSAVE_PA:
+        val = env->vm_hsave;
+        break;
 #ifdef TARGET_X86_64
     case MSR_LSTAR:
         val = env->lstar;
@@ -3871,3 +3918,589 @@ void tlb_fill(target_ulong addr, int is_
     }
     env = saved_env;
 }
+
+/* Secure Virtual Machine helpers */
+
+
+uint32_t vmcb2cpu_attrib(uint16_t vmcb_value) {
+    uint32_t retval = 0;
+    retval |= ((vmcb_value & SVM_SELECTOR_S_MASK) >> SVM_SELECTOR_S_SHIFT) << 12;
+    retval |= ((vmcb_value & SVM_SELECTOR_DPL_MASK) >> SVM_SELECTOR_DPL_SHIFT) << DESC_DPL_SHIFT;
+    retval |= ((vmcb_value & SVM_SELECTOR_P_MASK) >> SVM_SELECTOR_P_SHIFT) << 15;
+    retval |= ((vmcb_value & SVM_SELECTOR_AVL_MASK) >> SVM_SELECTOR_AVL_SHIFT) << 20;
+    retval |= ((vmcb_value & SVM_SELECTOR_L_MASK) >> SVM_SELECTOR_L_SHIFT) << DESC_L_SHIFT;
+    retval |= ((vmcb_value & SVM_SELECTOR_DB_MASK) >> SVM_SELECTOR_DB_SHIFT) << DESC_B_SHIFT;
+    retval |= ((vmcb_value & SVM_SELECTOR_G_MASK) >> SVM_SELECTOR_G_SHIFT) << 23;
+    retval |= ((vmcb_value & SVM_SELECTOR_READ_MASK) >> 1) << 9;
+    retval |= ((vmcb_value & SVM_SELECTOR_CODE_MASK) >> 3) << 11;
+
+    // unavailable as kvm constants (so I made these up)
+    retval |= ((vmcb_value & (1 << 12)) >> 12) << 8; // A
+    retval |= ((vmcb_value & (1 << 13)) >> 13) << 10; // C / E
+    return retval;
+}
+
+uint16_t cpu2vmcb_attrib(uint32_t cpu_value) {
+    uint16_t retval = 0;
+    retval |= ((cpu_value & DESC_S_MASK) >> 12) << SVM_SELECTOR_S_SHIFT;
+    retval |= ((cpu_value & DESC_DPL_MASK) >> DESC_DPL_SHIFT) << SVM_SELECTOR_DPL_SHIFT;
+    retval |= ((cpu_value & DESC_P_MASK) >> 15) << SVM_SELECTOR_P_SHIFT;
+    retval |= ((cpu_value & DESC_AVL_MASK) >> 20) << SVM_SELECTOR_AVL_SHIFT;
+    retval |= ((cpu_value & DESC_L_MASK) >> DESC_L_SHIFT) << SVM_SELECTOR_L_SHIFT;
+    retval |= ((cpu_value & DESC_B_MASK) >> DESC_B_SHIFT) << SVM_SELECTOR_DB_SHIFT;
+    retval |= ((cpu_value & DESC_G_MASK) >> 23) << SVM_SELECTOR_G_SHIFT;
+    retval |= ((cpu_value & DESC_R_MASK) >> 9) << 1;
+    retval |= ((cpu_value & DESC_CS_MASK) >> 11) << 3;
+
+    // unavailable as kvm constants (so I made these up)
+    retval |= ((cpu_value & DESC_A_MASK) >> 8) << 12;
+    retval |= ((cpu_value & DESC_C_MASK) >> 10) << 13;
+    return retval;
+}
+
+void svm_check_longmode(void) {
+#ifdef TARGET_X86_64
+    env->hflags &= ~(HF_LMA_MASK | HF_CS64_MASK);
+    if (// !(env->cr[0] & CR0_PG_MASK) && 
+        (env->efer & MSR_EFER_LME)) {
+        /* enter in long mode */
+        /* XXX: generate an exception */
+        if (!(env->cr[4] & CR4_PAE_MASK))
+            return;
+        env->efer |= MSR_EFER_LMA;
+        env->hflags |= HF_LMA_MASK;
+    }
+#endif
+}
+
+extern uint8_t *phys_ram_base;
+void helper_vmrun(target_ulong addr)
+{
+    uint32_t event_inj;
+    uint32_t int_ctl;
+
+    if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile,"vmrun! %#lx\n", addr);
+
+    env->vm_vmcb = addr;
+    EIP += 3; // store the instruction after the vmrun
+    regs_to_env();
+
+    // save the current CPU state in the hsave page
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.gdtr.base), env->gdt.base);
+    stl_phys(env->vm_hsave + offsetof(struct vmcb, save.gdtr.limit), env->gdt.limit);
+
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.idtr.base), env->idt.base);
+    stl_phys(env->vm_hsave + offsetof(struct vmcb, save.idtr.limit), env->idt.limit);
+
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.cr0), env->cr[0]);
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.cr2), env->cr[2]);
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.cr3), env->cr[3]);
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.cr4), env->cr[4]);
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.cr8), env->cr[8]);
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.dr6), env->dr[6]);
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.dr7), env->dr[7]);
+
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.efer), env->efer);
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.rflags), env->eflags);
+
+    SVM_SAVE_SEG(env->vm_hsave, segs[R_ES], es);
+    SVM_SAVE_SEG(env->vm_hsave, segs[R_CS], cs);
+    SVM_SAVE_SEG(env->vm_hsave, segs[R_SS], ss);
+    SVM_SAVE_SEG(env->vm_hsave, segs[R_DS], ds);
+
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.rip), EIP);
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.rsp), ESP);
+    stq_phys(env->vm_hsave + offsetof(struct vmcb, save.rax), EAX);
+
+    // load the interception bitmaps so we do not need to access the vmcb in svm mode
+    env->intercept            = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept)) | (1L << 63);
+    env->intercept_cr_read    = lduw_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept_cr_read));
+    env->intercept_cr_write   = lduw_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept_cr_write));
+    env->intercept_dr_read    = lduw_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept_dr_read));
+    env->intercept_dr_write   = lduw_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept_dr_write));
+    env->intercept_exceptions = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept_exceptions));
+
+    env->gdt.base  = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.gdtr.base));
+    env->gdt.limit = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, save.gdtr.limit));
+
+    env->idt.base  = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.idtr.base));
+    env->idt.limit = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, save.idtr.limit));
+
+    // clear exit_info_2 so we behave like the real hardware
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_info_2), 0);
+
+    // reset hidden flags to some extent
+    env->hflags &= ~(HF_LMA_MASK | HF_CS64_MASK);
+
+    CC_SRC = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cc_src));
+    CC_DST = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cc_dst));
+    // TODO: store and load CC_SRC and CC_DST for the real machine
+
+    cpu_x86_update_cr0(env, ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr0)));
+    cpu_x86_update_cr4(env, ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr4)));
+    cpu_x86_update_cr3(env, ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr3)));
+    env->cr[2] = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr2));
+    int_ctl = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl));
+    if(int_ctl & V_INTR_MASKING_MASK) {
+        env->cr[8] = int_ctl & V_TPR_MASK;
+        if(env->eflags & IF_MASK) env->hflags |= HF_HIF_MASK;
+    }
+
+    env->efer = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.efer));
+    load_eflags(ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rflags)), 0xffffffff);
+    svm_check_longmode();
+
+    SVM_LOAD_SEG(env->vm_vmcb, ES, es);
+    SVM_LOAD_SEG(env->vm_vmcb, CS, cs);
+    SVM_LOAD_SEG(env->vm_vmcb, SS, ss);
+    SVM_LOAD_SEG(env->vm_vmcb, DS, ds);
+
+#ifdef SVM_DEBUG
+    fprintf(logfile,"hflags:      %#llx\n", env->hflags);
+    print_hflags();
+    fprintf(logfile,"HF_LMA_MASK: %#llx\n", HF_LMA_MASK);
+#endif
+
+    EIP = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rip));
+    env->eip = EIP;
+    ESP = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rsp));
+    EAX = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rax));
+    env->dr[7] = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.dr7));
+    env->dr[6] = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, save.dr6));
+    cpu_x86_set_cpl(env, ldub_phys(env->vm_vmcb + offsetof(struct vmcb, save.cpl)));
+
+    // FIXME: guest state consistency checks
+
+    switch(ldub_phys(env->vm_vmcb + offsetof(struct vmcb, control.tlb_ctl))) {
+        case TLB_CONTROL_DO_NOTHING:
+            break;
+        case TLB_CONTROL_FLUSH_ALL_ASID:
+            // FIXME: this is not 100% correct but should work for now
+            tlb_flush(env, 1);
+        break;
+    }
+
+    helper_stgi();
+    env->hflags |= HF_SVM_MASK;
+
+    regs_to_env();
+
+    // maybe we need to inject an event
+    event_inj = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.event_inj));
+    if(event_inj & SVM_EVTINJ_VALID) {
+        uint8_t vector = event_inj & SVM_EVTINJ_VEC_MASK;
+        uint16_t valid_err = event_inj & SVM_EVTINJ_VALID_ERR;
+        uint32_t event_inj_err = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.event_inj_err));
+        stl_phys(env->vm_vmcb + offsetof(struct vmcb, control.event_inj), event_inj & ~SVM_EVTINJ_VALID);
+        
+        if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile, "Injecting(%#hx): ", valid_err);
+        // FIXME: need to implement valid_err
+        switch(event_inj & SVM_EVTINJ_TYPE_MASK) {
+                case SVM_EVTINJ_TYPE_INTR:
+                        env->exception_index = vector;
+                        env->error_code = event_inj_err;
+                        env->exception_is_int = 1;
+                        env->exception_next_eip = -1;
+                        if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile, "INTR");
+                        break;
+                case SVM_EVTINJ_TYPE_NMI:
+                        env->exception_index = vector;
+                        env->error_code = event_inj_err;
+                        env->exception_is_int = 1;
+                        env->exception_next_eip = EIP;
+                        if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile, "NMI");
+                        break;
+                case SVM_EVTINJ_TYPE_EXEPT:
+                        env->exception_index = check_exception(vector, &event_inj_err);
+                        env->error_code = event_inj_err;
+                        env->exception_is_int = 0;
+                        env->exception_next_eip = -1;
+                        if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile, "EXEPT");
+                        break;
+                case SVM_EVTINJ_TYPE_SOFT:
+                        env->exception_index = vector;
+                        env->error_code = event_inj_err;
+                        env->exception_is_int = 1;
+                        env->exception_next_eip = EIP;
+                        if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile, "SOFT");
+                        break;
+        }
+        if (loglevel & CPU_LOG_TB_IN_ASM)
+        fprintf(logfile, " %#x %#x\n", env->exception_index, env->error_code); fflush(NULL);
+    }
+    else if(int_ctl & V_IRQ_MASK) {
+        uint32_t intno = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_vector));
+        if (loglevel & CPU_LOG_TB_IN_ASM)
+            fprintf(logfile, "Injecting int: %#x\n", intno); fflush(NULL);
+        // FIXME: this should respect TPR
+        if(env->eflags & IF_MASK) {
+             svm_check_intercept(SVM_EXIT_VINTR);
+             if (loglevel & CPU_LOG_TB_IN_ASM)
+                 fprintf(logfile, "Servicing virtual hardware INT=0x%02x\n", intno);
+             do_interrupt(intno, 0, 0, -1, 1);
+#if defined(__sparc__) && !defined(HOST_SOLARIS)
+             tmp_T0 = 0;
+#else
+             T0 = 0;
+#endif
+             stl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl), ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl))  & ~V_IRQ_MASK);
+         }
+    }
+
+    cpu_loop_exit();
+}
+
+void helper_vmmcall()
+{
+    if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile,"vmmcall!\n");
+}
+
+void helper_vmload(target_ulong addr)
+{
+//    fprintf(logfile,"vmload!\n"); fflush(NULL);
+
+    SVM_LOAD_SEG(addr, FS, fs);
+    SVM_LOAD_SEG(addr, GS, gs);
+    SVM_LOAD_SEG2(addr, tr, tr);
+    SVM_LOAD_SEG2(addr, ldt, ldtr);
+
+
+#ifdef TARGET_X86_64
+    env->kernelgsbase = ldq_phys(addr + offsetof(struct vmcb, save.kernel_gs_base));
+    env->lstar = ldq_phys(addr + offsetof(struct vmcb, save.lstar));
+    env->cstar = ldq_phys(addr + offsetof(struct vmcb, save.cstar));
+    env->fmask = ldq_phys(addr + offsetof(struct vmcb, save.sfmask));
+#endif
+    env->star = ldq_phys(addr + offsetof(struct vmcb, save.star));
+    env->sysenter_cs = ldq_phys(addr + offsetof(struct vmcb, save.sysenter_cs));
+    env->sysenter_esp = ldq_phys(addr + offsetof(struct vmcb, save.sysenter_esp));
+    env->sysenter_eip = ldq_phys(addr + offsetof(struct vmcb, save.sysenter_eip));
+}
+
+void helper_vmsave(target_ulong addr)
+{
+//    fprintf(logfile,"vmsave!\n"); fflush(NULL);
+
+    SVM_SAVE_SEG(addr, segs[R_FS], fs);
+    SVM_SAVE_SEG(addr, segs[R_GS], gs);
+    SVM_SAVE_SEG(addr, tr, tr);
+    SVM_SAVE_SEG(addr, ldt, ldtr);
+
+#ifdef TARGET_X86_64
+    stq_phys(addr + offsetof(struct vmcb, save.kernel_gs_base), env->kernelgsbase);
+    stq_phys(addr + offsetof(struct vmcb, save.lstar), env->lstar);
+    stq_phys(addr + offsetof(struct vmcb, save.cstar), env->cstar);
+    stq_phys(addr + offsetof(struct vmcb, save.sfmask), env->fmask);
+#endif
+    stq_phys(addr + offsetof(struct vmcb, save.star), env->star);
+    stq_phys(addr + offsetof(struct vmcb, save.sysenter_cs), env->sysenter_cs);
+    stq_phys(addr + offsetof(struct vmcb, save.sysenter_esp), env->sysenter_esp);
+    stq_phys(addr + offsetof(struct vmcb, save.sysenter_eip), env->sysenter_eip);
+}
+
+void helper_stgi()
+{
+    env->hflags |= HF_GIF_MASK;
+    env->exception_next_eip = EIP + 3;
+}
+
+void helper_clgi()
+{
+    env->hflags &= ~HF_GIF_MASK;
+}
+
+void helper_skinit()
+{
+    if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile,"skinit!\n"); fflush(NULL);
+}
+
+void helper_invlpga()
+{
+//    fprintf(logfile,"invlpga!\n"); fflush(NULL);
+    tlb_flush(env, 0);
+}
+
+void print_hflags() {
+    fprintf(logfile,"HF_CPL_MASK         = %d\n", ( env->hflags & HF_CPL_MASK ) > 0);
+    fprintf(logfile,"HF_SOFTMMU_MASK     = %d\n", ( env->hflags & HF_SOFTMMU_MASK ) > 0);
+    fprintf(logfile,"HF_INHIBIT_IRQ_MASK = %d\n", ( env->hflags & HF_INHIBIT_IRQ_MASK ) > 0);
+    fprintf(logfile,"HF_CS32_MASK        = %d\n", ( env->hflags & HF_CS32_MASK ) > 0);
+    fprintf(logfile,"HF_SS32_MASK        = %d\n", ( env->hflags & HF_SS32_MASK ) > 0);
+    fprintf(logfile,"HF_ADDSEG_MASK      = %d\n", ( env->hflags & HF_ADDSEG_MASK ) > 0);
+    fprintf(logfile,"HF_PE_MASK          = %d\n", ( env->hflags & HF_PE_MASK ) > 0);
+    fprintf(logfile,"HF_TF_MASK          = %d\n", ( env->hflags & HF_TF_MASK ) > 0);
+    fprintf(logfile,"HF_MP_MASK          = %d\n", ( env->hflags & HF_MP_MASK ) > 0);
+    fprintf(logfile,"HF_EM_MASK          = %d\n", ( env->hflags & HF_EM_MASK ) > 0);
+    fprintf(logfile,"HF_TS_MASK          = %d\n", ( env->hflags & HF_TS_MASK ) > 0);
+    fprintf(logfile,"HF_LMA_MASK         = %d\n", ( env->hflags & HF_LMA_MASK ) > 0);
+    fprintf(logfile,"HF_CS64_MASK        = %d\n", ( env->hflags & HF_CS64_MASK ) > 0);
+    fprintf(logfile,"HF_OSFXSR_MASK      = %d\n", ( env->hflags & HF_OSFXSR_MASK ) > 0);
+    fprintf(logfile,"HF_HALTED_MASK      = %d\n", ( env->hflags & HF_HALTED_MASK ) > 0);
+    fprintf(logfile,"HF_SMM_MASK         = %d\n", ( env->hflags & HF_SMM_MASK ) > 0);
+    fprintf(logfile,"HF_SVM_MASK         = %d\n", ( env->hflags & HF_SVM_MASK ) > 0);
+    fprintf(logfile,"HF_GIF_MASK         = %d\n", ( env->hflags & HF_GIF_MASK ) > 0);
+}
+
+#define CHECK_INTERCEPT(a,b) case a: if (INTERCEPTED(1L << b)) { vmexit(type, param); return 1; } break;
+int svm_check_intercept_param(uint32_t type, uint64_t param) {
+    if(!(env->hflags & HF_SVM_MASK)) return 0;
+    
+    switch(type) {
+        case SVM_EXIT_READ_CR0 ... SVM_EXIT_READ_CR0 + 8:
+            if (INTERCEPTEDw(_cr_read, (1 << (type - SVM_EXIT_READ_CR0)))) {
+                vmexit(type, param);
+                return 1;
+            }
+            break;
+        case SVM_EXIT_READ_DR0 ... SVM_EXIT_READ_DR0 + 8:
+            if (INTERCEPTEDw(_dr_read, (1 << (type - SVM_EXIT_READ_DR0)))) {
+                vmexit(type, param);
+                return 1;
+            }
+            break;
+        case SVM_EXIT_WRITE_CR0 ... SVM_EXIT_WRITE_CR0 + 8:
+            if (INTERCEPTEDw(_cr_write, (1 << (type - SVM_EXIT_WRITE_CR0)))) {
+                vmexit(type, param);
+                return 1;
+            }
+            break;
+        case SVM_EXIT_WRITE_DR0 ... SVM_EXIT_WRITE_DR0 + 8:
+            if (INTERCEPTEDw(_dr_write, (1 << (type - SVM_EXIT_WRITE_DR0)))) {
+                vmexit(type, param);
+                return 1;
+            }
+            break;
+        case SVM_EXIT_EXCP_BASE ... SVM_EXIT_EXCP_BASE + 16:
+            if (INTERCEPTEDl(_exceptions, (1 << (type - SVM_EXIT_EXCP_BASE)))) {
+                vmexit(type, param);
+                return 1;
+            }
+            break;
+        case SVM_EXIT_IOIO:
+            if (INTERCEPTED(INTERCEPT_IOIO_PROT)) {
+                // FIXME: this should be read in at vmrun (faster this way?)
+                uint64_t addr = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, control.iopm_base_pa));
+                uint16_t port = (uint16_t) (param >> 16);
+                
+                if(ldub_phys(addr + port / 8) & (1 << (port % 8)))
+                    vmexit(type, param);
+            }
+            break;
+
+        case SVM_EXIT_MSR:
+            if (INTERCEPTED(1L << INTERCEPT_MSR_PROT)) {
+                // FIXME: this should be read in at vmrun (faster this way?)
+                uint64_t addr = ldq_phys(env->vm_vmcb + offsetof(struct vmcb, control.msrpm_base_pa));
+                switch((uint32_t)ECX) {
+                    case 0 ... 0x1fff:
+                        T0 = (ECX * 2) % 8;
+                        T1 = ECX / 8;
+                        break;
+                    case 0xc0000000 ... 0xc0001fff:
+                        T0 = (8192 + ECX - 0xc0000000) * 2;
+                        T1 = (T0 / 8);
+                        T0 %= 8;
+                        break;
+                    case 0xc0010000 ... 0xc0011fff:
+                        T0 = (16384 + ECX - 0xc0010000) * 2;
+                        T1 = (T0 / 8);
+                        T0 %= 8;
+                        break;
+                    default:
+                        vmexit(type, param);
+                        return 1;
+                }
+                if (ldub_phys(addr + T1) & ((1 << param) << T0))
+                    vmexit(type, param);
+                return 1;
+            }
+            break;
+        default:
+            if (INTERCEPTED((1 << (type - SVM_EXIT_INTR)))) {
+                vmexit(type, param);
+                return 1;
+            }
+            break;
+    }
+    return 0;
+}
+
+inline int svm_check_intercept(unsigned int type) {
+    return svm_check_intercept_param(type, 0);
+}
+
+void vmexit(uint64_t exit_code, uint64_t exit_info_1)
+{
+    uint32_t int_ctl;
+
+#if 0 // FIXME: find a way to get RIP without updating it all the time
+    { // Restore EIP
+        target_ulong pc;
+        TranslationBlock *tb;
+        pc = GETPC();
+        tb = tb_find_pc(pc);
+        if (tb) {
+            printf("Restoring CPU state: %#lx -> ", env->eip); fflush(NULL);
+            cpu_restore_state(tb, env, pc, NULL); // set EIP correctly
+            printf("%#lx\n", env->eip); fflush(NULL);
+        } else {
+            printf("No CPU state for: %#lx\n", pc); fflush(NULL);
+        }
+    }
+#endif
+
+    if (loglevel & CPU_LOG_TB_IN_ASM) fprintf(logfile,"vmexit(%#lx, %#lx, %#lx, %#lx)!\n", exit_code, exit_info_1, ldq_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_info_2)), EIP); fflush(NULL);
+#ifdef SVM_DEBUG
+    fprintf(logfile,"[vmexit] env->vm_hsave: %p\n", env->vm_hsave);
+    fprintf(logfile,"[vmexit] ESI: %#lx\n", ESI);
+    fprintf(logfile,"[vmexit] EDI: %#lx\n", EDI);
+    fprintf(logfile,"[vmexit] ECX: %#lx\n", ECX);
+    fprintf(logfile,"[vmexit] CS flags (native): %#x", env->segs[R_CS].flags);
+    fprintf(logfile,"[vmexit] CS flags (conv):   %#x", vmcb2cpu_attrib(cpu2vmcb_attrib(env->segs[R_CS].flags)));
+#endif
+// Save the VM state in the vmcb
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.es.selector), env->segs[R_ES].selector);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.es.base), env->segs[R_ES].base);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, save.es.limit), env->segs[R_ES].limit);
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.es.attrib), cpu2vmcb_attrib(env->segs[R_ES].flags));
+
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.cs.selector), env->segs[R_CS].selector);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cs.base), env->segs[R_CS].base);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, save.cs.limit), env->segs[R_CS].limit);
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.cs.attrib), cpu2vmcb_attrib(env->segs[R_CS].flags));
+
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.ss.selector), env->segs[R_SS].selector);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.ss.base), env->segs[R_SS].base);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, save.ss.limit), env->segs[R_SS].limit);
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.ss.attrib), cpu2vmcb_attrib(env->segs[R_SS].flags));
+
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.ds.selector), env->segs[R_DS].selector);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.ds.base), env->segs[R_DS].base);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, save.ds.limit), env->segs[R_DS].limit);
+    stw_phys(env->vm_vmcb + offsetof(struct vmcb, save.ds.attrib), cpu2vmcb_attrib(env->segs[R_DS].flags));
+
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.gdtr.base), env->gdt.base);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, save.gdtr.limit), env->gdt.limit);
+
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.idtr.base), env->idt.base);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, save.idtr.limit), env->idt.limit);
+
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.efer), env->efer);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr0), env->cr[0]);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr2), env->cr[2]);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr3), env->cr[3]);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cr4), env->cr[4]);
+
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cc_src), CC_SRC);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.cc_dst), CC_DST);
+    
+    if((int_ctl = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl))) & V_INTR_MASKING_MASK) {
+        int_ctl &= ~V_TPR_MASK;
+        int_ctl |= env->cr[8] & V_TPR_MASK;
+        stl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl), int_ctl);
+    }
+
+    switch(DF) {
+        case 1:
+            env->eflags &= ~DF_MASK;
+            break;
+        case -1:
+            env->eflags |= DF_MASK;
+            break;
+    }
+
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rflags), env->eflags);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rip), env->eip);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rsp), ESP);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.rax), EAX);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.dr7), env->dr[7]);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, save.dr6), env->dr[6]);
+    stb_phys(env->vm_vmcb + offsetof(struct vmcb, save.cpl), env->hflags & HF_CPL_MASK);
+
+// Reload the host state from vm_hsave
+    env->hflags &= ~HF_HIF_MASK;
+    env->intercept = 0;
+
+    env->gdt.base  = ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.gdtr.base));
+    env->gdt.limit = ldl_phys(env->vm_hsave + offsetof(struct vmcb, save.gdtr.limit));
+
+    env->idt.base  = ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.idtr.base));
+    env->idt.limit = ldl_phys(env->vm_hsave + offsetof(struct vmcb, save.idtr.limit));
+
+    cpu_x86_update_cr0(env, ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.cr0)) | CR0_PE_MASK);
+    cpu_x86_update_cr4(env, ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.cr4)));
+    cpu_x86_update_cr3(env, ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.cr3)));
+    if(int_ctl & V_INTR_MASKING_MASK)
+        env->cr[8] = ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.cr8));
+    // we need to set the efer after the crs so the hidden flags get set properly
+    env->efer  = ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.efer));
+
+    load_eflags(ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.rflags)), 0xffffffff);
+    svm_check_longmode();
+
+    SVM_LOAD_SEG(env->vm_hsave, ES, es);
+    SVM_LOAD_SEG(env->vm_hsave, CS, cs);
+    SVM_LOAD_SEG(env->vm_hsave, SS, ss);
+    SVM_LOAD_SEG(env->vm_hsave, DS, ds);
+
+    EIP = ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.rip));
+    ESP = ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.rsp));
+    EAX = ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.rax));
+    CC_SRC = ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.cc_src));
+    CC_DST = ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.cc_dst));
+
+    env->dr[6] = ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.dr6));
+    env->dr[7] = ldq_phys(env->vm_hsave + offsetof(struct vmcb, save.dr7));
+
+// other setups
+    cpu_x86_set_cpl(env, 0);
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_code_hi), (uint32_t)(exit_code >> 32));
+    stl_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_code), exit_code);
+    stq_phys(env->vm_vmcb + offsetof(struct vmcb, control.exit_info_1), exit_info_1);
+
+    helper_clgi();
+    // FIXME: Resets the current ASID register to zero (host ASID).
+
+    // Clears the V_IRQ and V_INTR_MASKING bits inside the processor.
+    
+    // Clears the TSC_OFFSET inside the processor.
+    
+    // If the host is in PAE mode, the processor reloads the host's PDPEs from the page table indicated the host's CR3. If the PDPEs contain illegal state, the processor causes a shutdown.
+//    if (env->cr[4] & CR4_PAE_MASK)
+//            fprintf(logfile,"should I reload the PDPEs now?\n");
+
+    // Forces CR0.PE = 1, RFLAGS.VM = 0.
+    env->cr[0] |= CR0_PE_MASK;
+    env->eflags &= ~VM_MASK;
+    
+    // Disables all breakpoints in the host DR7 register.
+
+    // Checks the reloaded host state for consistency;
+
+    // If the host’s rIP reloaded by #VMEXIT is outside the limit of the host’s code segment or non-canonical (in the case of long mode), a #GP fault is delivered inside the host.)
+
+    env->hflags &= ~HF_SVM_MASK;
+
+    // remove any pending exception
+    env->exception_index = -1;
+    env->error_code = 0;
+    env->old_exception = -1;
+
+    regs_to_env();
+//    tlb_flush(env, 0);
+#ifdef SVM_DEBUG
+    fprintf(logfile,"hflags:       %#llx\n", env->hflags);
+    print_hflags();
+    fprintf(logfile,"efer:         %#llx\n", env->efer);
+    fprintf(logfile,"Is the new RIP valid?\n"); fflush(NULL);
+//    fprintf(logfile,"XXX: %llx\n",  ldq_kernel(EIP)); fflush(NULL);
+//    fprintf(logfile,"YESH?\n"); fflush(NULL);
+
+    fprintf(logfile,"env->exception_index   = %d\n", env->exception_index); fflush(NULL);
+    fprintf(logfile,"env->interrupt_request = %d\n", env->interrupt_request); fflush(NULL);
+    fprintf(logfile, " **************** VM LEAVE ***************\n");
+#endif
+    cpu_loop_exit();
+    fprintf(logfile,"I should never reach here\n"); fflush(NULL);
+}
+
Index: qemu/target-i386/svm.h
===================================================================
--- /dev/null
+++ qemu/target-i386/svm.h
@@ -0,0 +1,359 @@
+#ifndef __SVM_H
+#define __SVM_H
+
+enum {
+	INTERCEPT_INTR,
+	INTERCEPT_NMI,
+	INTERCEPT_SMI,
+	INTERCEPT_INIT,
+	INTERCEPT_VINTR,
+	INTERCEPT_SELECTIVE_CR0,
+	INTERCEPT_STORE_IDTR,
+	INTERCEPT_STORE_GDTR,
+	INTERCEPT_STORE_LDTR,
+	INTERCEPT_STORE_TR,
+	INTERCEPT_LOAD_IDTR,
+	INTERCEPT_LOAD_GDTR,
+	INTERCEPT_LOAD_LDTR,
+	INTERCEPT_LOAD_TR,
+	INTERCEPT_RDTSC,
+	INTERCEPT_RDPMC,
+	INTERCEPT_PUSHF,
+	INTERCEPT_POPF,
+	INTERCEPT_CPUID,
+	INTERCEPT_RSM,
+	INTERCEPT_IRET,
+	INTERCEPT_INTn,
+	INTERCEPT_INVD,
+	INTERCEPT_PAUSE,
+	INTERCEPT_HLT,
+	INTERCEPT_INVLPG,
+	INTERCEPT_INVLPGA,
+	INTERCEPT_IOIO_PROT,
+	INTERCEPT_MSR_PROT,
+	INTERCEPT_TASK_SWITCH,
+	INTERCEPT_FERR_FREEZE,
+	INTERCEPT_SHUTDOWN,
+	INTERCEPT_VMRUN,
+	INTERCEPT_VMMCALL,
+	INTERCEPT_VMLOAD,
+	INTERCEPT_VMSAVE,
+	INTERCEPT_STGI,
+	INTERCEPT_CLGI,
+	INTERCEPT_SKINIT,
+	INTERCEPT_RDTSCP,
+	INTERCEPT_ICEBP,
+	INTERCEPT_WBINVD,
+};
+
+
+struct __attribute__ ((__packed__)) vmcb_control_area {
+	uint16_t intercept_cr_read;
+	uint16_t intercept_cr_write;
+	uint16_t intercept_dr_read;
+	uint16_t intercept_dr_write;
+	uint32_t intercept_exceptions;
+	uint64_t intercept;
+	uint8_t reserved_1[44];
+	uint64_t iopm_base_pa;
+	uint64_t msrpm_base_pa;
+	uint64_t tsc_offset;
+	uint32_t asid;
+	uint8_t tlb_ctl;
+	uint8_t reserved_2[3];
+	uint32_t int_ctl;
+	uint32_t int_vector;
+	uint32_t int_state;
+	uint8_t reserved_3[4];
+	uint32_t exit_code;
+	uint32_t exit_code_hi;
+	uint64_t exit_info_1;
+	uint64_t exit_info_2;
+	uint32_t exit_int_info;
+	uint32_t exit_int_info_err;
+	uint64_t nested_ctl;
+	uint8_t reserved_4[16];
+	uint32_t event_inj;
+	uint32_t event_inj_err;
+	uint64_t nested_cr3;
+	uint64_t lbr_ctl;
+	uint8_t reserved_5[832];
+};
+
+
+#define TLB_CONTROL_DO_NOTHING 0
+#define TLB_CONTROL_FLUSH_ALL_ASID 1
+
+#define V_TPR_MASK 0x0f
+
+#define V_IRQ_SHIFT 8
+#define V_IRQ_MASK (1 << V_IRQ_SHIFT)
+
+#define V_INTR_PRIO_SHIFT 16
+#define V_INTR_PRIO_MASK (0x0f << V_INTR_PRIO_SHIFT)
+
+#define V_IGN_TPR_SHIFT 20
+#define V_IGN_TPR_MASK (1 << V_IGN_TPR_SHIFT)
+
+#define V_INTR_MASKING_SHIFT 24
+#define V_INTR_MASKING_MASK (1 << V_INTR_MASKING_SHIFT)
+
+#define SVM_INTERRUPT_SHADOW_MASK 1
+
+#define SVM_IOIO_STR_SHIFT 2
+#define SVM_IOIO_REP_SHIFT 3
+#define SVM_IOIO_SIZE_SHIFT 4
+#define SVM_IOIO_ASIZE_SHIFT 7
+
+#define SVM_IOIO_TYPE_MASK 1
+#define SVM_IOIO_STR_MASK (1 << SVM_IOIO_STR_SHIFT)
+#define SVM_IOIO_REP_MASK (1 << SVM_IOIO_REP_SHIFT)
+#define SVM_IOIO_SIZE_MASK (7 << SVM_IOIO_SIZE_SHIFT)
+#define SVM_IOIO_ASIZE_MASK (7 << SVM_IOIO_ASIZE_SHIFT)
+
+struct __attribute__ ((__packed__)) vmcb_seg {
+	uint16_t selector;
+	uint16_t attrib;
+	uint32_t limit;
+	uint64_t base;
+};
+
+struct __attribute__ ((__packed__)) vmcb_save_area {
+	struct vmcb_seg es;
+	struct vmcb_seg cs;
+	struct vmcb_seg ss;
+	struct vmcb_seg ds;
+	struct vmcb_seg fs;
+	struct vmcb_seg gs;
+	struct vmcb_seg gdtr;
+	struct vmcb_seg ldtr;
+	struct vmcb_seg idtr;
+	struct vmcb_seg tr;
+	uint8_t reserved_1[43];
+	uint8_t cpl;
+	uint8_t reserved_2[4];
+	uint64_t efer;
+	uint8_t reserved_3[112];
+	uint64_t cr4;
+	uint64_t cr3;
+	uint64_t cr0;
+	uint64_t dr7;
+	uint64_t dr6;
+	uint64_t rflags;
+	uint64_t rip;
+	uint8_t reserved_4[88];
+	uint64_t rsp;
+	uint8_t reserved_5[24];
+	uint64_t rax;
+	uint64_t star;
+	uint64_t lstar;
+	uint64_t cstar;
+	uint64_t sfmask;
+	uint64_t kernel_gs_base;
+	uint64_t sysenter_cs;
+	uint64_t sysenter_esp;
+	uint64_t sysenter_eip;
+	uint64_t cr2;
+	// qemu: added to reuse this as hsave
+	uint64_t cc_src;
+	uint64_t cc_dst;
+	uint64_t cr8;
+	// end of add
+	uint8_t reserved_6[32 - 24]; // originally 32
+	uint64_t g_pat;
+	uint64_t dbgctl;
+	uint64_t br_from;
+	uint64_t br_to;
+	uint64_t last_excp_from;
+	uint64_t last_excp_to;
+};
+
+struct __attribute__ ((__packed__)) vmcb {
+	struct vmcb_control_area control;
+	struct vmcb_save_area save;
+};
+
+#define SVM_CPUID_FEATURE_SHIFT 2
+#define SVM_CPUID_FUNC 0x8000000a
+
+#define MSR_EFER_SVME_MASK (1ULL << 12)
+
+#define SVM_SELECTOR_S_SHIFT 4
+#define SVM_SELECTOR_DPL_SHIFT 5
+#define SVM_SELECTOR_P_SHIFT 7
+#define SVM_SELECTOR_AVL_SHIFT 8
+#define SVM_SELECTOR_L_SHIFT 9
+#define SVM_SELECTOR_DB_SHIFT 10
+#define SVM_SELECTOR_G_SHIFT 11
+
+#define SVM_SELECTOR_TYPE_MASK (0xf)
+#define SVM_SELECTOR_S_MASK (1 << SVM_SELECTOR_S_SHIFT)
+#define SVM_SELECTOR_DPL_MASK (3 << SVM_SELECTOR_DPL_SHIFT)
+#define SVM_SELECTOR_P_MASK (1 << SVM_SELECTOR_P_SHIFT)
+#define SVM_SELECTOR_AVL_MASK (1 << SVM_SELECTOR_AVL_SHIFT)
+#define SVM_SELECTOR_L_MASK (1 << SVM_SELECTOR_L_SHIFT)
+#define SVM_SELECTOR_DB_MASK (1 << SVM_SELECTOR_DB_SHIFT)
+#define SVM_SELECTOR_G_MASK (1 << SVM_SELECTOR_G_SHIFT)
+
+#define SVM_SELECTOR_WRITE_MASK (1 << 1)
+#define SVM_SELECTOR_READ_MASK SVM_SELECTOR_WRITE_MASK
+#define SVM_SELECTOR_CODE_MASK (1 << 3)
+
+#define INTERCEPT_CR0_MASK 1
+#define INTERCEPT_CR3_MASK (1 << 3)
+#define INTERCEPT_CR4_MASK (1 << 4)
+
+#define INTERCEPT_DR0_MASK 1
+#define INTERCEPT_DR1_MASK (1 << 1)
+#define INTERCEPT_DR2_MASK (1 << 2)
+#define INTERCEPT_DR3_MASK (1 << 3)
+#define INTERCEPT_DR4_MASK (1 << 4)
+#define INTERCEPT_DR5_MASK (1 << 5)
+#define INTERCEPT_DR6_MASK (1 << 6)
+#define INTERCEPT_DR7_MASK (1 << 7)
+
+#define SVM_EVTINJ_VEC_MASK 0xff
+
+#define SVM_EVTINJ_TYPE_SHIFT 8
+#define SVM_EVTINJ_TYPE_MASK (7 << SVM_EVTINJ_TYPE_SHIFT)
+
+#define SVM_EVTINJ_TYPE_INTR (0 << SVM_EVTINJ_TYPE_SHIFT)
+#define SVM_EVTINJ_TYPE_NMI (2 << SVM_EVTINJ_TYPE_SHIFT)
+#define SVM_EVTINJ_TYPE_EXEPT (3 << SVM_EVTINJ_TYPE_SHIFT)
+#define SVM_EVTINJ_TYPE_SOFT (4 << SVM_EVTINJ_TYPE_SHIFT)
+
+#define SVM_EVTINJ_VALID (1 << 31)
+#define SVM_EVTINJ_VALID_ERR (1 << 11)
+
+#define SVM_EXITINTINFO_VEC_MASK SVM_EVTINJ_VEC_MASK
+
+#define	SVM_EXITINTINFO_TYPE_INTR SVM_EVTINJ_TYPE_INTR
+#define	SVM_EXITINTINFO_TYPE_NMI SVM_EVTINJ_TYPE_NMI
+#define	SVM_EXITINTINFO_TYPE_EXEPT SVM_EVTINJ_TYPE_EXEPT
+#define	SVM_EXITINTINFO_TYPE_SOFT SVM_EVTINJ_TYPE_SOFT
+
+#define SVM_EXITINTINFO_VALID SVM_EVTINJ_VALID
+#define SVM_EXITINTINFO_VALID_ERR SVM_EVTINJ_VALID_ERR
+
+#define	SVM_EXIT_READ_CR0 	0x000
+#define	SVM_EXIT_READ_CR3 	0x003
+#define	SVM_EXIT_READ_CR4 	0x004
+#define	SVM_EXIT_READ_CR8 	0x008
+#define	SVM_EXIT_WRITE_CR0 	0x010
+#define	SVM_EXIT_WRITE_CR3 	0x013
+#define	SVM_EXIT_WRITE_CR4 	0x014
+#define	SVM_EXIT_WRITE_CR8 	0x018
+#define	SVM_EXIT_READ_DR0 	0x020
+#define	SVM_EXIT_READ_DR1 	0x021
+#define	SVM_EXIT_READ_DR2 	0x022
+#define	SVM_EXIT_READ_DR3 	0x023
+#define	SVM_EXIT_READ_DR4 	0x024
+#define	SVM_EXIT_READ_DR5 	0x025
+#define	SVM_EXIT_READ_DR6 	0x026
+#define	SVM_EXIT_READ_DR7 	0x027
+#define	SVM_EXIT_WRITE_DR0 	0x030
+#define	SVM_EXIT_WRITE_DR1 	0x031
+#define	SVM_EXIT_WRITE_DR2 	0x032
+#define	SVM_EXIT_WRITE_DR3 	0x033
+#define	SVM_EXIT_WRITE_DR4 	0x034
+#define	SVM_EXIT_WRITE_DR5 	0x035
+#define	SVM_EXIT_WRITE_DR6 	0x036
+#define	SVM_EXIT_WRITE_DR7 	0x037
+#define SVM_EXIT_EXCP_BASE      0x040
+#define SVM_EXIT_INTR		0x060
+#define SVM_EXIT_NMI		0x061
+#define SVM_EXIT_SMI		0x062
+#define SVM_EXIT_INIT		0x063
+#define SVM_EXIT_VINTR		0x064
+#define SVM_EXIT_CR0_SEL_WRITE	0x065
+#define SVM_EXIT_IDTR_READ	0x066
+#define SVM_EXIT_GDTR_READ	0x067
+#define SVM_EXIT_LDTR_READ	0x068
+#define SVM_EXIT_TR_READ	0x069
+#define SVM_EXIT_IDTR_WRITE	0x06a
+#define SVM_EXIT_GDTR_WRITE	0x06b
+#define SVM_EXIT_LDTR_WRITE	0x06c
+#define SVM_EXIT_TR_WRITE	0x06d
+#define SVM_EXIT_RDTSC		0x06e
+#define SVM_EXIT_RDPMC		0x06f
+#define SVM_EXIT_PUSHF		0x070
+#define SVM_EXIT_POPF		0x071
+#define SVM_EXIT_CPUID		0x072
+#define SVM_EXIT_RSM		0x073
+#define SVM_EXIT_IRET		0x074
+#define SVM_EXIT_SWINT		0x075
+#define SVM_EXIT_INVD		0x076
+#define SVM_EXIT_PAUSE		0x077
+#define SVM_EXIT_HLT		0x078
+#define SVM_EXIT_INVLPG		0x079
+#define SVM_EXIT_INVLPGA	0x07a
+#define SVM_EXIT_IOIO		0x07b
+#define SVM_EXIT_MSR		0x07c
+#define SVM_EXIT_TASK_SWITCH	0x07d
+#define SVM_EXIT_FERR_FREEZE	0x07e
+#define SVM_EXIT_SHUTDOWN	0x07f
+#define SVM_EXIT_VMRUN		0x080
+#define SVM_EXIT_VMMCALL	0x081
+#define SVM_EXIT_VMLOAD		0x082
+#define SVM_EXIT_VMSAVE		0x083
+#define SVM_EXIT_STGI		0x084
+#define SVM_EXIT_CLGI		0x085
+#define SVM_EXIT_SKINIT		0x086
+#define SVM_EXIT_RDTSCP		0x087
+#define SVM_EXIT_ICEBP		0x088
+#define SVM_EXIT_WBINVD		0x089
+// only included in documentation, maybe wrong
+#define SVM_EXIT_MONITOR	0x08a
+#define SVM_EXIT_MWAIT		0x08b
+#define SVM_EXIT_NPF  		0x400
+
+#define SVM_EXIT_ERR		-1
+
+#define SVM_CR0_SELECTIVE_MASK (1 << 3 | 1) // TS and MP
+
+#define SVM_VMLOAD ".byte 0x0f, 0x01, 0xda"
+#define SVM_VMRUN  ".byte 0x0f, 0x01, 0xd8"
+#define SVM_VMSAVE ".byte 0x0f, 0x01, 0xdb"
+#define SVM_CLGI   ".byte 0x0f, 0x01, 0xdd"
+#define SVM_STGI   ".byte 0x0f, 0x01, 0xdc"
+#define SVM_INVLPGA ".byte 0x0f, 0x01, 0xdf"
+
+/////// function references
+
+void helper_stgi();
+void vmexit(uint64_t exit_code, uint64_t exit_info_1);
+int svm_check_intercept_param(uint32_t type, uint64_t param);
+inline int svm_check_intercept(unsigned int type);
+
+#define INTERCEPTED(mask) (env->hflags & HF_SVM_MASK) && (env->intercept & mask) 
+#define INTERCEPTEDw(var, mask) (env->hflags & HF_SVM_MASK) && (env->intercept ## var & mask) 
+#define INTERCEPTEDl(var, mask) (env->hflags & HF_SVM_MASK) && (env->intercept ## var & mask) 
+
+/*
+#define INTERCEPTED(mask) (env->hflags & HF_SVM_MASK) && (ldq_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept)) & mask) 
+#define INTERCEPTEDw(var, mask) (env->hflags & HF_SVM_MASK) && (lduw_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept ## var)) & mask) 
+#define INTERCEPTEDl(var, mask) (env->hflags & HF_SVM_MASK) && (ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.intercept ## var)) & mask) 
+*/
+
+#define SVM_LOAD_SEG(addr, seg_index, seg) \
+    cpu_x86_load_seg_cache(env, \
+                    R_##seg_index, \
+                    lduw_phys(addr + offsetof(struct vmcb, save.seg.selector)),\
+                    ldq_phys(addr + offsetof(struct vmcb, save.seg.base)),\
+                    ldl_phys(addr + offsetof(struct vmcb, save.seg.limit)),\
+                    vmcb2cpu_attrib(lduw_phys(addr + offsetof(struct vmcb, save.seg.attrib))))
+
+#define SVM_LOAD_SEG2(addr, seg_qemu, seg_vmcb) \
+    env->seg_qemu.selector  = lduw_phys(addr + offsetof(struct vmcb, save.seg_vmcb.selector)); \
+    env->seg_qemu.base      = ldq_phys(addr + offsetof(struct vmcb, save.seg_vmcb.base)); \
+    env->seg_qemu.limit     = ldl_phys(addr + offsetof(struct vmcb, save.seg_vmcb.limit)); \
+    env->seg_qemu.flags     = vmcb2cpu_attrib(lduw_phys(addr + offsetof(struct vmcb, save.seg_vmcb.attrib)))
+
+#define SVM_SAVE_SEG(addr, seg_qemu, seg_vmcb) \
+    stw_phys(addr + offsetof(struct vmcb, save.seg_vmcb.selector), env->seg_qemu.selector); \
+    stq_phys(addr + offsetof(struct vmcb, save.seg_vmcb.base), env->seg_qemu.base); \
+    stl_phys(addr + offsetof(struct vmcb, save.seg_vmcb.limit), env->seg_qemu.limit); \
+    stw_phys(addr + offsetof(struct vmcb, save.seg_vmcb.attrib), cpu2vmcb_attrib(env->seg_qemu.flags))
+
+#endif
+
Index: qemu/cpu-exec.c
===================================================================
--- qemu.orig/cpu-exec.c
+++ qemu/cpu-exec.c
@@ -104,6 +104,9 @@ static TranslationBlock *tb_find_slow(ta
         if (tb->pc == pc && 
             tb->page_addr[0] == phys_page1 &&
             tb->cs_base == cs_base && 
+#if defined(TARGET_I386)
+	    tb->intercept == env->intercept &&
+#endif
             tb->flags == flags) {
             /* check next page if needed */
             if (tb->page_addr[1] != -1) {
@@ -132,6 +135,9 @@ static TranslationBlock *tb_find_slow(ta
     tc_ptr = code_gen_ptr;
     tb->tc_ptr = tc_ptr;
     tb->cs_base = cs_base;
+#if defined(TARGET_I386)
+    tb->intercept = env->intercept;
+#endif
     tb->flags = flags;
     cpu_gen_code(env, tb, CODE_GEN_MAX_SIZE, &code_gen_size);
     code_gen_ptr = (void *)(((unsigned long)code_gen_ptr + code_gen_size + CODE_GEN_ALIGN - 1) & ~(CODE_GEN_ALIGN - 1));
@@ -214,7 +220,11 @@ static inline TranslationBlock *tb_find_
 #endif
     tb = env->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
     if (__builtin_expect(!tb || tb->pc != pc || tb->cs_base != cs_base ||
-                         tb->flags != flags, 0)) {
+                         tb->flags != flags
+#if defined(TARGET_I386)
+			 || tb->intercept != env->intercept
+#endif
+			 , 0)) {
         tb = tb_find_slow(pc, cs_base, flags);
         /* Note: we do it here to avoid a gcc bug on Mac OS X when
            doing it in tb_find_slow */
@@ -373,7 +383,11 @@ int cpu_exec(CPUState *env1)
                 tmp_T0 = T0;
 #endif	    
                 interrupt_request = env->interrupt_request;
-                if (__builtin_expect(interrupt_request, 0)) {
+                if (__builtin_expect(interrupt_request, 0)
+#if defined(TARGET_I386)
+			&& env->hflags & HF_GIF_MASK
+#endif
+				) {
                     if (interrupt_request & CPU_INTERRUPT_DEBUG) {
                         env->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
                         env->exception_index = EXCP_DEBUG;
@@ -391,6 +405,7 @@ int cpu_exec(CPUState *env1)
 #if defined(TARGET_I386)
                     if ((interrupt_request & CPU_INTERRUPT_SMI) &&
                         !(env->hflags & HF_SMM_MASK)) {
+                        svm_check_intercept(SVM_EXIT_SMI);
                         env->interrupt_request &= ~CPU_INTERRUPT_SMI;
                         do_smm_enter();
 #if defined(__sparc__) && !defined(HOST_SOLARIS)
@@ -399,9 +414,10 @@ int cpu_exec(CPUState *env1)
                         T0 = 0;
 #endif
                     } else if ((interrupt_request & CPU_INTERRUPT_HARD) &&
-                        (env->eflags & IF_MASK) && 
+                        (env->eflags & IF_MASK || env->hflags & HF_HIF_MASK) && 
                         !(env->hflags & HF_INHIBIT_IRQ_MASK)) {
                         int intno;
+			svm_check_intercept(SVM_EXIT_INTR);
                         env->interrupt_request &= ~CPU_INTERRUPT_HARD;
                         intno = cpu_get_pic_interrupt(env);
                         if (loglevel & CPU_LOG_TB_IN_ASM) {
@@ -415,7 +431,24 @@ int cpu_exec(CPUState *env1)
 #else
                         T0 = 0;
 #endif
-                    }
+                    } else if(env->hflags & HF_SVM_MASK) {
+			if(ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl)) & V_IRQ_MASK) {
+                             // FIXME: this should respect TPR
+	                     if(env->eflags & IF_MASK) {
+                                 int intno = ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_vector));
+	                         svm_check_intercept(SVM_EXIT_VINTR);
+	                         if (loglevel & CPU_LOG_TB_IN_ASM)
+	                             fprintf(logfile, "Servicing virtual hardware INT=0x%02x\n", intno);
+	                         do_interrupt(intno, 0, 0, 0, 1);
+	                         stl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl), ldl_phys(env->vm_vmcb + offsetof(struct vmcb, control.int_ctl)) & ~V_IRQ_MASK);
+#if defined(__sparc__) && !defined(HOST_SOLARIS)
+                        tmp_T0 = 0;
+#else
+                        T0 = 0;
+#endif
+	                     }
+			}
+		    }
 #elif defined(TARGET_PPC)
 #if 0
                     if ((interrupt_request & CPU_INTERRUPT_RESET)) {
Index: qemu/target-i386/exec.h
===================================================================
--- qemu.orig/target-i386/exec.h
+++ qemu/target-i386/exec.h
@@ -501,6 +501,15 @@ void update_fp_status(void);
 void helper_hlt(void);
 void helper_monitor(void);
 void helper_mwait(void);
+void helper_vmrun(target_ulong addr);
+void helper_vmmcall(void);
+void helper_vmload(target_ulong addr);
+void helper_vmsave(target_ulong addr);
+void helper_stgi(void);
+void helper_clgi(void);
+void helper_skinit(void);
+void helper_invlpga(void);
+void vmexit(uint64_t exit_code, uint64_t exit_info_1);
 
 extern const uint8_t parity_table[256];
 extern const uint8_t rclw_table[32];
@@ -588,3 +597,4 @@ static inline int cpu_halted(CPUState *e
     }
     return EXCP_HALTED;
 }
+
Index: qemu/exec.c
===================================================================
--- qemu.orig/exec.c
+++ qemu/exec.c
@@ -1281,6 +1281,10 @@ void cpu_abort(CPUState *env, const char
     vfprintf(stderr, fmt, ap);
     fprintf(stderr, "\n");
 #ifdef TARGET_I386
+    if(env->hflags & HF_SVM_MASK) { // most probably the virtual machine should not
+	                            // be shut down but rather caught by the VMM
+        vmexit(SVM_EXIT_SHUTDOWN, 0);
+    }
     cpu_dump_state(env, stderr, fprintf, X86_DUMP_FPU | X86_DUMP_CCOP);
 #else
     cpu_dump_state(env, stderr, fprintf, 0);
Index: qemu/exec-all.h
===================================================================
--- qemu.orig/exec-all.h
+++ qemu/exec-all.h
@@ -166,6 +166,7 @@ static inline int tlb_set_page(CPUState 
 typedef struct TranslationBlock {
     target_ulong pc;   /* simulated PC corresponding to this block (EIP + CS base) */
     target_ulong cs_base; /* CS base for this block */
+    uint64_t intercept; /* SVM intercept vector */
     unsigned int flags; /* flags defining in which context the code was generated */
     uint16_t size;      /* size of target code for this block (1 <=
                            size <= TARGET_PAGE_SIZE) */

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH][RFC] SVM support
  2007-08-24 17:13 ` Alexander Graf
@ 2007-08-24 23:55   ` Fabrice Bellard
  2007-08-25  9:57     ` Alexey Eremenko
  0 siblings, 1 reply; 10+ messages in thread
From: Fabrice Bellard @ 2007-08-24 23:55 UTC (permalink / raw)
  To: qemu-devel


> This is a reworked version of the same patch, where I can now boot into
> a x86_64 Linux kernel.
> I rewrote all the access functions for the VMCB, so this time everything
> should work just fine on BE-machines. As suggested I moved the injection
> detection to translate.c, so the non-virtualized machine should be as
> fast as before (w/o svm support), while the virtual one got a speed
> boost from that as well.
> I removed the EIP hack and set EIP every time an interception occurs, so
> unlike the previous version this patch really should have no negative
> effect on speed any more.
> 
> If any of the people on this list using SVM (kvm developers, maybe xen
> developers) could have a deep look into this I'd be really thankful.

Some notes:

- Saving and restoring CC_SRC and CC_DST is not correct as they do not 
belong to the real processor state. You must save and restore eflags 
correctly instead.

- Avoid using macros when inline functions suffice.

Regards,

Fabrice.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH][RFC] SVM support
  2007-08-24 23:55   ` Fabrice Bellard
@ 2007-08-25  9:57     ` Alexey Eremenko
  2007-08-25 12:30       ` Alexander Graf
  0 siblings, 1 reply; 10+ messages in thread
From: Alexey Eremenko @ 2007-08-25  9:57 UTC (permalink / raw)
  To: qemu-devel

Alexander Graf:
Excuse me, but it is a bit unclear what you did.

Several questions arise:
a. Qemu now is able to emulate SVM on non-SVM old 32-bit processors ?
-or-
b. Qemu now is able to emulate SVM on SVM processors ?
c. Qemu now is able to emulate SVM on VT processors ?
d. How fast it works ? (theoretically, or benchmark results )

Anyway it's nice to hear about this feature.

-- 
-Alexey Eremenko "Technologov"

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH][RFC] SVM support
  2007-08-25  9:57     ` Alexey Eremenko
@ 2007-08-25 12:30       ` Alexander Graf
  2007-08-25 17:51         ` Mulyadi Santosa
  0 siblings, 1 reply; 10+ messages in thread
From: Alexander Graf @ 2007-08-25 12:30 UTC (permalink / raw)
  To: qemu-devel

On Aug 25, 2007, at 11:57 AM, Alexey Eremenko wrote:

> Alexander Graf:
> Excuse me, but it is a bit unclear what you did.
>
> Several questions arise:
> a. Qemu now is able to emulate SVM on non-SVM old 32-bit processors ?
> -or-
> b. Qemu now is able to emulate SVM on SVM processors ?
> c. Qemu now is able to emulate SVM on VT processors ?
> d. How fast it works ? (theoretically, or benchmark results )
>
> Anyway it's nice to hear about this feature.
>
> --  
> -Alexey Eremenko "Technologov"
>
>

Hi Alexey,

I implemented SVM in the qemu emulator (not kqemu, not kvm, but the  
real emulator), so you can use SVM in qemu on _any_ platform qemu  
runs on - even on PowerPC.

Basically concerning speed I did not do any benchmarks, but as far as  
I can tell the ratio real machine:kvm and qemu:kvm are about the  
same. So the virtual machine is about 5-10% slower compared to the  
emulated machine.

Hope I could clarify this,

Alexander Graf

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Qemu-devel] [PATCH][RFC] SVM support
  2007-08-25 12:30       ` Alexander Graf
@ 2007-08-25 17:51         ` Mulyadi Santosa
  0 siblings, 0 replies; 10+ messages in thread
From: Mulyadi Santosa @ 2007-08-25 17:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: agraf

Hello...
>
> I implemented SVM in the qemu emulator (not kqemu, not kvm, but the 
> real emulator), so you can use SVM in qemu on _any_ platform qemu runs 
> on - even on PowerPC.
>
basically, I wanna say thanks for your effort on implementing SVM 
support for Qemu. It helps user who doesn't have Pacifica enabled 
processor to test software how a software behaves in such environment.

It also serves as yet another example on how hack Qemu. Bravo! I hope 
this patch will be merged soon.

regards,

Mulyadi

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2007-08-25 21:42 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2007-08-22 19:58 [Qemu-devel] [PATCH][RFC] SVM support Alexander Graf
2007-08-22 20:19 ` Blue Swirl
2007-08-22 20:26   ` Alexander Graf
2007-08-23  9:14     ` Avi Kivity
2007-08-23  9:05 ` Avi Kivity
2007-08-24 17:13 ` Alexander Graf
2007-08-24 23:55   ` Fabrice Bellard
2007-08-25  9:57     ` Alexey Eremenko
2007-08-25 12:30       ` Alexander Graf
2007-08-25 17:51         ` Mulyadi Santosa

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).