[kvm-unit-tests PATCH 0/2] x86/svm: Add testing for L1 intercept bug
From: Kevin Cheng @ 2025-12-05 8:02 UTC
To: kvm; +Cc: yosryahmed, andrew.jones, thuth, pbonzini, seanjc, Kevin Cheng
If a feature is not advertised to L1, L1's intercepts for instructions
controlled by that feature should be ignored. Currently, the added test
fails due to a bug in nested VM-exit handling where vmcb12 intercepts
are checked before vmcb02 intercepts, so the #UD exception is never
injected into L2 when the L1 intercept is set. This is fixed in [0].
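The ordering problem can be sketched in a few lines of standalone C. This is only an illustration of the routing logic described above; the enum and function names are hypothetical and this is not KVM's actual code:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of nested #VMEXIT routing for an instruction that
 * must #UD in L2 because its feature is not advertised to L1.  L0 wants
 * the exit (to inject #UD); vmcb12 says L1 also intercepts it. */
enum handler { HANDLE_IN_L0, REFLECT_TO_L1 };

/* Buggy order: vmcb12 (L1-controlled) intercepts are consulted first,
 * so the exit is reflected to L1 and the #UD never reaches L2. */
enum handler route_buggy(bool vmcb12_intercept, bool vmcb02_intercept)
{
	if (vmcb12_intercept)
		return REFLECT_TO_L1;
	return vmcb02_intercept ? HANDLE_IN_L0 : REFLECT_TO_L1;
}

/* Fixed order: L0's own (vmcb02) reasons for the exit take priority. */
enum handler route_fixed(bool vmcb12_intercept, bool vmcb02_intercept)
{
	if (vmcb02_intercept)
		return HANDLE_IN_L0;
	return vmcb12_intercept ? REFLECT_TO_L1 : HANDLE_IN_L0;
}
```

With both intercept bits set (the scenario the new test exercises), `route_buggy()` reflects the exit to L1 while `route_fixed()` lets L0 inject the #UD.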
The first patch just adds the missing intercepts needed for testing, and
restructures the vmcb_control_area struct to make adding the missing
intercepts less ugly. The second patch adds the test, which disables all
relevant features that have available instruction intercepts and checks
that the #UD exception is correctly delivered despite the L1 intercept
being set.
[0] https://lore.kernel.org/all/20251205070630.4013452-1-chengkev@google.com/
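Concretely, patch 1's encoding puts a flat intercept index N into u32 word N/32, bit N%32. The sketch below models that standalone; set_intercept()/clear_intercept() are simplified stand-ins for the patch's vmcb_set_intercept()/__set_bit() helpers, and only a few of the bit indices are reproduced:

```c
#include <assert.h>
#include <stdint.h>

/* Word indices, mirroring enum intercept_words in patch 1 */
enum { INTERCEPT_CR, INTERCEPT_DR, INTERCEPT_EXCEPTION,
       INTERCEPT_WORD3, INTERCEPT_WORD4, INTERCEPT_WORD5, MAX_INTERCEPT };

/* A few flat bit indices from the patch: word = idx / 32, bit = idx % 32 */
enum { INTERCEPT_CR3_READ = 3, INTERCEPT_EXCEPTION_OFFSET = 64,
       INTERCEPT_INTR = 96, INTERCEPT_VMRUN = 128, UD_VECTOR = 6 };

uint32_t intercept[MAX_INTERCEPT];

/* Simplified stand-ins for __set_bit()/__clear_bit() on the u32 array */
void set_intercept(unsigned int idx)
{
	intercept[idx / 32] |= 1u << (idx % 32);
}

void clear_intercept(unsigned int idx)
{
	intercept[idx / 32] &= ~(1u << (idx % 32));
}
```

For example, INTERCEPT_VMRUN (index 128) lands in word 4, bit 0, and an exception intercept is just INTERCEPT_EXCEPTION_OFFSET + vector in word 2, which is what lets one helper replace the old intercept_cr_*/intercept_dr_*/intercept_exceptions/intercept fields.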
Kevin Cheng (2):
x86/svm: Add missing svm intercepts
x86/svm: Add unsupported instruction intercept test
 x86/svm.c         |   6 +-
 x86/svm.h         |  87 ++++++++++++++++++---
 x86/svm_tests.c   | 188 ++++++++++++++++++++++++++++++++--------------
 x86/unittests.cfg |   9 ++-
 4 files changed, 220 insertions(+), 70 deletions(-)
--
2.52.0.223.gf5cc29aaa4-goog
[kvm-unit-tests PATCH 1/2] x86/svm: Add missing svm intercepts
From: Kevin Cheng @ 2025-12-05 8:02 UTC
To: kvm; +Cc: yosryahmed, andrew.jones, thuth, pbonzini, seanjc, Kevin Cheng

Some intercepts are missing from the KUT svm testing. Add all missing
intercepts and reorganize the svm intercept definition/setting/clearing.

Signed-off-by: Kevin Cheng <chengkev@google.com>
---
 x86/svm.c       |   6 +-
 x86/svm.h       |  82 +++++++++++++++++++++++++++++++----
 x86/svm_tests.c | 111 ++++++++++++++++++++++++------------------------
 3 files changed, 131 insertions(+), 68 deletions(-)

diff --git a/x86/svm.c b/x86/svm.c
index de9eb19443caa..9a4c14e368cd4 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -193,9 +193,9 @@ void vmcb_ident(struct vmcb *vmcb)
 	save->cr2 = read_cr2();
 	save->g_pat = rdmsr(MSR_IA32_CR_PAT);
 	save->dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR);
-	ctrl->intercept = (1ULL << INTERCEPT_VMRUN) |
-			  (1ULL << INTERCEPT_VMMCALL) |
-			  (1ULL << INTERCEPT_SHUTDOWN);
+	vmcb_set_intercept(INTERCEPT_VMRUN);
+	vmcb_set_intercept(INTERCEPT_VMMCALL);
+	vmcb_set_intercept(INTERCEPT_SHUTDOWN);
 	ctrl->iopm_base_pa = virt_to_phys(io_bitmap);
 	ctrl->msrpm_base_pa = virt_to_phys(msr_bitmap);
 
diff --git a/x86/svm.h b/x86/svm.h
index 264583a6547ef..93ef6f772c6ee 100644
--- a/x86/svm.h
+++ b/x86/svm.h
@@ -2,9 +2,49 @@
 #define X86_SVM_H
 
 #include "libcflat.h"
+#include "bitops.h"
+
+enum intercept_words {
+	INTERCEPT_CR = 0,
+	INTERCEPT_DR,
+	INTERCEPT_EXCEPTION,
+	INTERCEPT_WORD3,
+	INTERCEPT_WORD4,
+	INTERCEPT_WORD5,
+	MAX_INTERCEPT,
+};
 
 enum {
-	INTERCEPT_INTR,
+	/* Byte offset 000h (word 0) */
+	INTERCEPT_CR0_READ = 0,
+	INTERCEPT_CR3_READ = 3,
+	INTERCEPT_CR4_READ = 4,
+	INTERCEPT_CR8_READ = 8,
+	INTERCEPT_CR0_WRITE = 16,
+	INTERCEPT_CR3_WRITE = 16 + 3,
+	INTERCEPT_CR4_WRITE = 16 + 4,
+	INTERCEPT_CR8_WRITE = 16 + 8,
+	/* Byte offset 004h (word 1) */
+	INTERCEPT_DR0_READ = 32,
+	INTERCEPT_DR1_READ,
+	INTERCEPT_DR2_READ,
+	INTERCEPT_DR3_READ,
+	INTERCEPT_DR4_READ,
+	INTERCEPT_DR5_READ,
+	INTERCEPT_DR6_READ,
+	INTERCEPT_DR7_READ,
+	INTERCEPT_DR0_WRITE = 48,
+	INTERCEPT_DR1_WRITE,
+	INTERCEPT_DR2_WRITE,
+	INTERCEPT_DR3_WRITE,
+	INTERCEPT_DR4_WRITE,
+	INTERCEPT_DR5_WRITE,
+	INTERCEPT_DR6_WRITE,
+	INTERCEPT_DR7_WRITE,
+	/* Byte offset 008h (word 2) */
+	INTERCEPT_EXCEPTION_OFFSET = 64,
+	/* Byte offset 00Ch (word 3) */
+	INTERCEPT_INTR = 96,
 	INTERCEPT_NMI,
 	INTERCEPT_SMI,
 	INTERCEPT_INIT,
@@ -36,7 +76,8 @@ enum {
 	INTERCEPT_TASK_SWITCH,
 	INTERCEPT_FERR_FREEZE,
 	INTERCEPT_SHUTDOWN,
-	INTERCEPT_VMRUN,
+	/* Byte offset 010h (word 4) */
+	INTERCEPT_VMRUN = 128,
 	INTERCEPT_VMMCALL,
 	INTERCEPT_VMLOAD,
 	INTERCEPT_VMSAVE,
@@ -49,6 +90,24 @@ enum {
 	INTERCEPT_MONITOR,
 	INTERCEPT_MWAIT,
 	INTERCEPT_MWAIT_COND,
+	INTERCEPT_XSETBV,
+	INTERCEPT_RDPRU,
+	TRAP_EFER_WRITE,
+	TRAP_CR0_WRITE,
+	TRAP_CR1_WRITE,
+	TRAP_CR2_WRITE,
+	TRAP_CR3_WRITE,
+	TRAP_CR4_WRITE,
+	TRAP_CR5_WRITE,
+	TRAP_CR6_WRITE,
+	TRAP_CR7_WRITE,
+	TRAP_CR8_WRITE,
+	/* Byte offset 014h (word 5) */
+	INTERCEPT_INVLPGB = 160,
+	INTERCEPT_INVLPGB_ILLEGAL,
+	INTERCEPT_INVPCID,
+	INTERCEPT_MCOMMIT,
+	INTERCEPT_TLBSYNC,
 };
 
 enum {
@@ -69,13 +128,8 @@ enum {
 };
 
 struct __attribute__ ((__packed__)) vmcb_control_area {
-	u16 intercept_cr_read;
-	u16 intercept_cr_write;
-	u16 intercept_dr_read;
-	u16 intercept_dr_write;
-	u32 intercept_exceptions;
-	u64 intercept;
-	u8 reserved_1[40];
+	u32 intercept[MAX_INTERCEPT];
+	u8 reserved_1[36];
 	u16 pause_filter_thresh;
 	u16 pause_filter_count;
 	u64 iopm_base_pa;
@@ -441,6 +495,16 @@ void test_set_guest(test_guest_func func);
 
 extern struct vmcb *vmcb;
 
+static inline void vmcb_set_intercept(u64 val)
+{
+	__set_bit(val, vmcb->control.intercept);
+}
+
+static inline void vmcb_clear_intercept(u64 val)
+{
+	__clear_bit(val, vmcb->control.intercept);
+}
+
 static inline void stgi(void)
 {
 	asm volatile ("stgi");
diff --git a/x86/svm_tests.c b/x86/svm_tests.c
index 3761647642542..ccc89d45d4db9 100644
--- a/x86/svm_tests.c
+++ b/x86/svm_tests.c
@@ -63,7 +63,7 @@ static bool null_check(struct svm_test *test)
 
 static void prepare_no_vmrun_int(struct svm_test *test)
 {
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN);
+	vmcb_clear_intercept(INTERCEPT_VMRUN);
 }
 
 static bool check_no_vmrun_int(struct svm_test *test)
@@ -84,8 +84,8 @@ static bool check_vmrun(struct svm_test *test)
 static void prepare_rsm_intercept(struct svm_test *test)
 {
 	default_prepare(test);
-	vmcb->control.intercept |= 1 << INTERCEPT_RSM;
-	vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR);
+	vmcb_set_intercept(INTERCEPT_RSM);
+	vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + UD_VECTOR);
 }
 
 static void test_rsm_intercept(struct svm_test *test)
@@ -107,7 +107,7 @@ static bool finished_rsm_intercept(struct svm_test *test)
 			       vmcb->control.exit_code);
 			return true;
 		}
-		vmcb->control.intercept &= ~(1 << INTERCEPT_RSM);
+		vmcb_clear_intercept(INTERCEPT_RSM);
 		inc_test_stage(test);
 		break;
 
@@ -132,7 +132,7 @@ static void prepare_sel_cr0_intercept(struct svm_test *test)
 	/* Clear CR0.MP and CR0.CD as the tests will set either of them */
 	vmcb->save.cr0 &= ~X86_CR0_MP;
 	vmcb->save.cr0 &= ~X86_CR0_CD;
-	vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
+	vmcb_set_intercept(INTERCEPT_SELECTIVE_CR0);
 }
 
 static void prepare_sel_nonsel_cr0_intercepts(struct svm_test *test)
@@ -140,8 +140,8 @@ static void prepare_sel_nonsel_cr0_intercepts(struct svm_test *test)
 	/* Clear CR0.MP and CR0.CD as the tests will set either of them */
 	vmcb->save.cr0 &= ~X86_CR0_MP;
 	vmcb->save.cr0 &= ~X86_CR0_CD;
-	vmcb->control.intercept_cr_write |= (1ULL << 0);
-	vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0);
+	vmcb_set_intercept(INTERCEPT_CR0_WRITE);
+	vmcb_set_intercept(INTERCEPT_SELECTIVE_CR0);
 }
 
 static void __test_cr0_write_bit(struct svm_test *test, unsigned long bit,
@@ -218,7 +218,7 @@ static bool check_cr0_nointercept(struct svm_test *test)
 static void prepare_cr3_intercept(struct svm_test *test)
 {
 	default_prepare(test);
-	vmcb->control.intercept_cr_read |= 1 << 3;
+	vmcb_set_intercept(INTERCEPT_CR3_READ);
 }
 
 static void test_cr3_intercept(struct svm_test *test)
@@ -252,7 +252,7 @@ static void corrupt_cr3_intercept_bypass(void *_test)
 static void prepare_cr3_intercept_bypass(struct svm_test *test)
 {
 	default_prepare(test);
-	vmcb->control.intercept_cr_read |= 1 << 3;
+	vmcb_set_intercept(INTERCEPT_CR3_READ);
 	on_cpu_async(1, corrupt_cr3_intercept_bypass, test);
 }
 
@@ -272,8 +272,7 @@ static void test_cr3_intercept_bypass(struct svm_test *test)
 static void prepare_dr_intercept(struct svm_test *test)
 {
 	default_prepare(test);
-	vmcb->control.intercept_dr_read = 0xff;
-	vmcb->control.intercept_dr_write = 0xff;
+	vmcb->control.intercept[INTERCEPT_DR] = 0xff00ff;
 }
 
 static void test_dr_intercept(struct svm_test *test)
@@ -390,7 +389,7 @@ static bool next_rip_supported(void)
 
 static void prepare_next_rip(struct svm_test *test)
 {
-	vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC);
+	vmcb_set_intercept(INTERCEPT_RDTSC);
 }
 
 
@@ -416,7 +415,7 @@ static bool is_x2apic;
 static void prepare_msr_intercept(struct svm_test *test)
 {
 	default_prepare(test);
-	vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT);
+	vmcb_set_intercept(INTERCEPT_MSR_PROT);
 
 	memset(msr_bitmap, 0, MSR_BITMAP_SIZE);
 
@@ -663,10 +662,10 @@ static bool check_msr_intercept(struct svm_test *test)
 
 static void prepare_mode_switch(struct svm_test *test)
 {
-	vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR)
-		| (1ULL << UD_VECTOR)
-		| (1ULL << DF_VECTOR)
-		| (1ULL << PF_VECTOR);
+	vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + GP_VECTOR);
+	vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + UD_VECTOR);
+	vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + DF_VECTOR);
+	vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + PF_VECTOR);
 	test->scratch = 0;
 }
 
@@ -773,7 +772,7 @@ extern u8 *io_bitmap;
 
 static void prepare_ioio(struct svm_test *test)
 {
-	vmcb->control.intercept |= (1ULL << INTERCEPT_IOIO_PROT);
+	vmcb_set_intercept(INTERCEPT_IOIO_PROT);
 	test->scratch = 0;
 	memset(io_bitmap, 0, 8192);
 	io_bitmap[8192] = 0xFF;
@@ -1171,7 +1170,7 @@ static void pending_event_prepare(struct svm_test *test)
 
 	pending_event_guest_run = false;
 
-	vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
+	vmcb_set_intercept(INTERCEPT_INTR);
 	vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
 
 	apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL |
@@ -1195,7 +1194,7 @@ static bool pending_event_finished(struct svm_test *test)
 			return true;
 		}
 
-		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
+		vmcb_clear_intercept(INTERCEPT_INTR);
 		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 
 		if (pending_event_guest_run) {
@@ -1400,7 +1399,7 @@ static bool interrupt_finished(struct svm_test *test)
 		}
 		vmcb->save.rip += 3;
 
-		vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
+		vmcb_set_intercept(INTERCEPT_INTR);
 		vmcb->control.int_ctl |= V_INTR_MASKING_MASK;
 		break;
 
@@ -1414,7 +1413,7 @@ static bool interrupt_finished(struct svm_test *test)
 
 		sti_nop_cli();
 
-		vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR);
+		vmcb_clear_intercept(INTERCEPT_INTR);
 		vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 		break;
 
@@ -1476,7 +1475,7 @@ static bool nmi_finished(struct svm_test *test)
 		}
 		vmcb->save.rip += 3;
 
-		vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
+		vmcb_set_intercept(INTERCEPT_NMI);
 		break;
 
 	case 1:
@@ -1569,7 +1568,7 @@ static bool nmi_hlt_finished(struct svm_test *test)
 		}
 		vmcb->save.rip += 3;
 
-		vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
+		vmcb_set_intercept(INTERCEPT_NMI);
 		break;
 
 	case 2:
@@ -1605,7 +1604,7 @@ static void vnmi_prepare(struct svm_test *test)
 	 * Disable NMI interception to start. Enabling vNMI without
 	 * intercepting "real" NMIs should result in an ERR VM-Exit.
 	 */
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_NMI);
+	vmcb_clear_intercept(INTERCEPT_NMI);
 	vmcb->control.int_ctl = V_NMI_ENABLE_MASK;
 	vmcb->control.int_vector = NMI_VECTOR;
 }
@@ -1629,7 +1628,7 @@ static bool vnmi_finished(struct svm_test *test)
 			return true;
 		}
 		report(!nmi_fired, "vNMI enabled but NMI_INTERCEPT unset!");
-		vmcb->control.intercept |= (1ULL << INTERCEPT_NMI);
+		vmcb_set_intercept(INTERCEPT_NMI);
 		vmcb->save.rip += 3;
 		break;
 
@@ -1804,7 +1803,7 @@ static bool virq_inject_finished(struct svm_test *test)
 			return true;
 		}
 		virq_fired = false;
-		vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
+		vmcb_set_intercept(INTERCEPT_VINTR);
 		vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK |
 			(0x0f << V_INTR_PRIO_SHIFT);
 		break;
@@ -1819,7 +1818,7 @@ static bool virq_inject_finished(struct svm_test *test)
 			report_fail("V_IRQ fired before SVM_EXIT_VINTR");
 			return true;
 		}
-		vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
+		vmcb_clear_intercept(INTERCEPT_VINTR);
 		break;
 
 	case 2:
@@ -1842,7 +1841,7 @@ static bool virq_inject_finished(struct svm_test *test)
 			       vmcb->control.exit_code);
 			return true;
 		}
-		vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
+		vmcb_set_intercept(INTERCEPT_VINTR);
 		break;
 
 	case 4:
@@ -1943,7 +1942,7 @@ static void reg_corruption_prepare(struct svm_test *test)
 	set_test_stage(test, 0);
 
 	vmcb->control.int_ctl = V_INTR_MASKING_MASK;
-	vmcb->control.intercept |= (1ULL << INTERCEPT_INTR);
+	vmcb_set_intercept(INTERCEPT_INTR);
 
 	handle_irq(TIMER_VECTOR, reg_corruption_isr);
 
@@ -2050,7 +2049,7 @@ static volatile bool init_intercept;
 static void init_intercept_prepare(struct svm_test *test)
 {
 	init_intercept = false;
-	vmcb->control.intercept |= (1ULL << INTERCEPT_INIT);
+	vmcb_set_intercept(INTERCEPT_INIT);
 }
 
 static void init_intercept_test(struct svm_test *test)
@@ -2547,7 +2546,7 @@ static void test_dr(void)
 /* TODO: verify if high 32-bits are sign- or zero-extended on bare metal */
 #define TEST_BITMAP_ADDR(save_intercept, type, addr, exit_code,		\
			 msg) {						\
-	vmcb->control.intercept = saved_intercept | 1ULL << type;	\
+	vmcb_set_intercept(type);					\
	if (type == INTERCEPT_MSR_PROT)					\
		vmcb->control.msrpm_base_pa = addr;			\
	else								\
@@ -2574,7 +2573,7 @@ static void test_dr(void)
  */
 static void test_msrpm_iopm_bitmap_addrs(void)
 {
-	u64 saved_intercept = vmcb->control.intercept;
+	u32 saved_intercept = vmcb->control.intercept[INTERCEPT_WORD3];
 	u64 addr_beyond_limit = 1ull << cpuid_maxphyaddr();
 	u64 addr = virt_to_phys(msr_bitmap) & (~((1ull << 12) - 1));
 
@@ -2615,7 +2614,7 @@ static void test_msrpm_iopm_bitmap_addrs(void)
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, addr,
			 SVM_EXIT_VMMCALL, "IOPM");
 
-	vmcb->control.intercept = saved_intercept;
+	vmcb->control.intercept[INTERCEPT_WORD3] = saved_intercept;
 }
 
 /*
@@ -2811,7 +2810,7 @@ static void vmload_vmsave_guest_main(struct svm_test *test)
 
 static void svm_vmload_vmsave(void)
 {
-	u32 intercept_saved = vmcb->control.intercept;
+	u32 intercept_saved = vmcb->control.intercept[INTERCEPT_WORD4];
 
 	test_set_guest(vmload_vmsave_guest_main);
 
@@ -2819,8 +2818,8 @@ static void svm_vmload_vmsave(void)
 	 * Disabling intercept for VMLOAD and VMSAVE doesn't cause
 	 * respective #VMEXIT to host
 	 */
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
+	vmcb_clear_intercept(INTERCEPT_VMLOAD);
+	vmcb_clear_intercept(INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
@@ -2829,39 +2828,39 @@ static void svm_vmload_vmsave(void)
 	 * Enabling intercept for VMLOAD and VMSAVE causes respective
 	 * #VMEXIT to host
 	 */
-	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
+	vmcb_set_intercept(INTERCEPT_VMLOAD);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
-	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
+	vmcb_clear_intercept(INTERCEPT_VMLOAD);
+	vmcb_set_intercept(INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
+	vmcb_clear_intercept(INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
-	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
+	vmcb_set_intercept(INTERCEPT_VMLOAD);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
+	vmcb_clear_intercept(INTERCEPT_VMLOAD);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
-	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
+	vmcb_set_intercept(INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
+	vmcb_clear_intercept(INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
-	vmcb->control.intercept = intercept_saved;
+	vmcb->control.intercept[INTERCEPT_WORD4] = intercept_saved;
 }
 
 static void prepare_vgif_enabled(struct svm_test *test)
@@ -2974,7 +2973,7 @@ static void pause_filter_test(void)
 		return;
 	}
 
-	vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
+	vmcb_set_intercept(INTERCEPT_PAUSE);
 
 	// filter count more that pause count - no VMexit
 	pause_filter_run_test(10, 9, 0, 0);
@@ -3356,7 +3355,7 @@ static void svm_intr_intercept_mix_if(void)
 	// make a physical interrupt to be pending
 	handle_irq(0x55, dummy_isr);
 
-	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vmcb_set_intercept(INTERCEPT_INTR);
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
@@ -3389,7 +3388,7 @@ static void svm_intr_intercept_mix_gif(void)
 {
 	handle_irq(0x55, dummy_isr);
 
-	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vmcb_set_intercept(INTERCEPT_INTR);
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
@@ -3419,7 +3418,7 @@ static void svm_intr_intercept_mix_gif2(void)
 {
 	handle_irq(0x55, dummy_isr);
 
-	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vmcb_set_intercept(INTERCEPT_INTR);
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	vmcb->save.rflags |= X86_EFLAGS_IF;
 
@@ -3448,7 +3447,7 @@ static void svm_intr_intercept_mix_nmi(void)
 {
 	handle_exception(2, dummy_nmi_handler);
 
-	vmcb->control.intercept |= (1 << INTERCEPT_NMI);
+	vmcb_set_intercept(INTERCEPT_NMI);
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	vmcb->save.rflags |= X86_EFLAGS_IF;
 
@@ -3472,7 +3471,7 @@ static void svm_intr_intercept_mix_smi_guest(struct svm_test *test)
 
 static void svm_intr_intercept_mix_smi(void)
 {
-	vmcb->control.intercept |= (1 << INTERCEPT_SMI);
+	vmcb_set_intercept(INTERCEPT_SMI);
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	test_set_guest(svm_intr_intercept_mix_smi_guest);
 	svm_intr_intercept_mix_run_guest(NULL, SVM_EXIT_SMI);
@@ -3530,14 +3529,14 @@ static void handle_exception_in_l2(u8 vector)
 
 static void handle_exception_in_l1(u32 vector)
 {
-	u32 old_ie = vmcb->control.intercept_exceptions;
+	u32 old_ie = vmcb->control.intercept[INTERCEPT_EXCEPTION];
 
-	vmcb->control.intercept_exceptions |= (1ULL << vector);
+	vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + vector);
 
 	report(svm_vmrun() == (SVM_EXIT_EXCP_BASE + vector),
	       "%s handled by L1", exception_mnemonic(vector));
 
-	vmcb->control.intercept_exceptions = old_ie;
+	vmcb->control.intercept[INTERCEPT_EXCEPTION] = old_ie;
 }
 
 static void svm_exception_test(void)
@@ -3568,7 +3567,7 @@ static void svm_shutdown_intercept_test(void)
 {
 	test_set_guest(shutdown_intercept_test_guest);
 	vmcb->save.idtr.base = (u64)alloc_vpage();
-	vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
+	vmcb_set_intercept(INTERCEPT_SHUTDOWN);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
 }
-- 
2.52.0.223.gf5cc29aaa4-goog
Re: [kvm-unit-tests PATCH 1/2] x86/svm: Add missing svm intercepts
From: Kevin Cheng @ 2025-12-05 8:14 UTC
To: kvm; +Cc: yosryahmed, andrew.jones, thuth, pbonzini, seanjc

Sorry please ignore this!

On Fri, Dec 5, 2025 at 3:02 AM Kevin Cheng <chengkev@google.com> wrote:
>
> Some intercepts are missing from the KUT svm testing. Add all missing
> intercepts and reorganize the svm intercept definition/setting/clearing.
>
> Signed-off-by: Kevin Cheng <chengkev@google.com>
> [remainder of quoted patch snipped]
svm_intr_intercept_mix_smi_guest(struct svm_test *test) > > static void svm_intr_intercept_mix_smi(void) > { > - vmcb->control.intercept |= (1 << INTERCEPT_SMI); > + vmcb_set_intercept(INTERCEPT_SMI); > vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; > test_set_guest(svm_intr_intercept_mix_smi_guest); > svm_intr_intercept_mix_run_guest(NULL, SVM_EXIT_SMI); > @@ -3530,14 +3529,14 @@ static void handle_exception_in_l2(u8 vector) > > static void handle_exception_in_l1(u32 vector) > { > - u32 old_ie = vmcb->control.intercept_exceptions; > + u32 old_ie = vmcb->control.intercept[INTERCEPT_EXCEPTION]; > > - vmcb->control.intercept_exceptions |= (1ULL << vector); > + vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + vector); > > report(svm_vmrun() == (SVM_EXIT_EXCP_BASE + vector), > "%s handled by L1", exception_mnemonic(vector)); > > - vmcb->control.intercept_exceptions = old_ie; > + vmcb->control.intercept[INTERCEPT_EXCEPTION] = old_ie; > } > > static void svm_exception_test(void) > @@ -3568,7 +3567,7 @@ static void svm_shutdown_intercept_test(void) > { > test_set_guest(shutdown_intercept_test_guest); > vmcb->save.idtr.base = (u64)alloc_vpage(); > - vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN); > + vmcb_set_intercept(INTERCEPT_SHUTDOWN); > svm_vmrun(); > report(vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed"); > } > -- > 2.52.0.223.gf5cc29aaa4-goog > ^ permalink raw reply [flat|nested] 8+ messages in thread
* [kvm-unit-tests PATCH] x86/svm: Add unsupported instruction intercept test 2025-12-05 8:02 [kvm-unit-tests PATCH 0/2] x86/svm: Add testing for L1 intercept bug Kevin Cheng 2025-12-05 8:02 ` [kvm-unit-tests PATCH 1/2] x86/svm: Add missing svm intercepts Kevin Cheng @ 2025-12-05 8:02 ` Kevin Cheng 2025-12-05 8:14 ` Kevin Cheng 2025-12-05 8:14 ` [kvm-unit-tests PATCH 0/2] x86/svm: Add testing for L1 intercept bug Kevin Cheng 2 siblings, 1 reply; 8+ messages in thread From: Kevin Cheng @ 2025-12-05 8:02 UTC (permalink / raw) To: kvm; +Cc: yosryahmed, andrew.jones, thuth, pbonzini, seanjc, Kevin Cheng Add tests that expect a nested vm exit due to an unsupported instruction to be handled by L0 even if L1 intercepts are set for that instruction. The new test exercises the bug fixed by: https://lore.kernel.org/all/20251205070630.4013452-1-chengkev@google.com/ Signed-off-by: Kevin Cheng <chengkev@google.com> --- x86/svm.h | 5 +++- x86/svm_tests.c | 75 +++++++++++++++++++++++++++++++++++++++++++++++ x86/unittests.cfg | 9 +++++- 3 files changed, 87 insertions(+), 2 deletions(-) diff --git a/x86/svm.h b/x86/svm.h index 93ef6f772c6ee..86d58c3100275 100644 --- a/x86/svm.h +++ b/x86/svm.h @@ -406,7 +406,10 @@ struct __attribute__ ((__packed__)) vmcb { #define SVM_EXIT_MONITOR 0x08a #define SVM_EXIT_MWAIT 0x08b #define SVM_EXIT_MWAIT_COND 0x08c -#define SVM_EXIT_NPF 0x400 +#define SVM_EXIT_XSETBV 0x08d +#define SVM_EXIT_RDPRU 0x08e +#define SVM_EXIT_INVPCID 0x0a2 +#define SVM_EXIT_NPF 0x400 #define SVM_EXIT_ERR -1 diff --git a/x86/svm_tests.c b/x86/svm_tests.c index ccc89d45d4db9..cea8865787545 100644 --- a/x86/svm_tests.c +++ b/x86/svm_tests.c @@ -3572,6 +3572,80 @@ static void svm_shutdown_intercept_test(void) report(vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed"); } +struct InvpcidDesc { + uint64_t pcid : 12; + uint64_t reserved : 52; + uint64_t addr; +}; + +static void insn_invpcid(struct svm_test *test) +{ + struct InvpcidDesc desc = {0}; + unsigned long
type = 0; + + __asm__ volatile ( + "invpcid %1, %0" + : + : "r" (type), "m" (desc) + : "memory" + ); +} + +asm( + "insn_rdtscp: rdtscp;ret\n\t" + "insn_skinit: skinit;ret\n\t" + "insn_xsetbv: xor %eax, %eax; xor %edx, %edx; xor %ecx, %ecx; xsetbv;ret\n\t" + "insn_rdpru: xor %ecx, %ecx; rdpru;ret\n\t" +); + +extern void insn_rdtscp(struct svm_test *test); +extern void insn_skinit(struct svm_test *test); +extern void insn_xsetbv(struct svm_test *test); +extern void insn_rdpru(struct svm_test *test); + +struct insn_table { + const char *name; + u64 intercept; + void (*insn_func)(struct svm_test *test); + u32 reason; +}; + +static struct insn_table insn_table[] = { + { "RDTSCP", INTERCEPT_RDTSCP, insn_rdtscp, SVM_EXIT_RDTSCP}, + { "SKINIT", INTERCEPT_SKINIT, insn_skinit, SVM_EXIT_SKINIT}, + { "XSETBV", INTERCEPT_XSETBV, insn_xsetbv, SVM_EXIT_XSETBV}, + { "RDPRU", INTERCEPT_RDPRU, insn_rdpru, SVM_EXIT_RDPRU}, + { "INVPCID", INTERCEPT_INVPCID, insn_invpcid, SVM_EXIT_INVPCID}, + { NULL }, +}; + +/* + * Test that L1 does not intercept instructions that are not advertised in + * guest CPUID. 
+ */ +static void svm_unsupported_instruction_intercept_test(void) +{ + u32 cur_insn; + u32 exit_code; + + vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + UD_VECTOR); + + for (cur_insn = 0; insn_table[cur_insn].name != NULL; ++cur_insn) { + test_set_guest(insn_table[cur_insn].insn_func); + vmcb_set_intercept(insn_table[cur_insn].intercept); + svm_vmrun(); + exit_code = vmcb->control.exit_code; + + if (exit_code == SVM_EXIT_EXCP_BASE + UD_VECTOR) + report_pass("UD Exception injected"); + else if (exit_code == insn_table[cur_insn].reason) + report_fail("L1 should not intercept %s when instruction is not advertised in guest CPUID", + insn_table[cur_insn].name); + else + report_fail("Unknown exit reason, 0x%x", exit_code); + } +} + struct svm_test svm_tests[] = { { "null", default_supported, default_prepare, default_prepare_gif_clear, null_test, @@ -3713,6 +3787,7 @@ struct svm_test svm_tests[] = { TEST(svm_tsc_scale_test), TEST(pause_filter_test), TEST(svm_shutdown_intercept_test), + TEST(svm_unsupported_instruction_intercept_test), { NULL, NULL, NULL, NULL, NULL, NULL, NULL } }; diff --git a/x86/unittests.cfg b/x86/unittests.cfg index 522318d32bf68..ec456d779b35c 100644 --- a/x86/unittests.cfg +++ b/x86/unittests.cfg @@ -253,11 +253,18 @@ arch = x86_64 [svm] file = svm.flat smp = 2 -test_args = "-pause_filter_test" +test_args = "-pause_filter_test -svm_unsupported_instruction_intercept_test" qemu_params = -cpu max,+svm -m 4g arch = x86_64 groups = svm +[svm_unsupported_instruction_intercept_test] +file = svm.flat +test_args = "svm_unsupported_instruction_intercept_test" +qemu_params = -cpu max,+svm,-rdtscp,-xsave,-invpcid +arch = x86_64 +groups = svm + [svm_pause_filter] file = svm.flat test_args = pause_filter_test -- 2.52.0.223.gf5cc29aaa4-goog ^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [kvm-unit-tests PATCH] x86/svm: Add unsupported instruction intercept test 2025-12-05 8:02 ` [kvm-unit-tests PATCH] x86/svm: Add unsupported instruction intercept test Kevin Cheng @ 2025-12-05 8:14 ` Kevin Cheng 0 siblings, 0 replies; 8+ messages in thread From: Kevin Cheng @ 2025-12-05 8:14 UTC (permalink / raw) To: kvm; +Cc: yosryahmed, andrew.jones, thuth, pbonzini, seanjc Sorry please ignore this! On Fri, Dec 5, 2025 at 3:02 AM Kevin Cheng <chengkev@google.com> wrote: > > Add tests that expect a nested vm exit, due to an unsupported > instruction, to be handled by L0 even if L1 intercepts are set for that > instruction. > > The new test exercises bug fixed by: > https://lore.kernel.org/all/20251205070630.4013452-1-chengkev@google.com/ > > Signed-off-by: Kevin Cheng <chengkev@google.com> > [...] ^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [kvm-unit-tests PATCH 0/2] x86/svm: Add testing for L1 intercept bug 2025-12-05 8:02 [kvm-unit-tests PATCH 0/2] x86/svm: Add testing for L1 intercept bug Kevin Cheng 2025-12-05 8:02 ` [kvm-unit-tests PATCH 1/2] x86/svm: Add missing svm intercepts Kevin Cheng 2025-12-05 8:02 ` [kvm-unit-tests PATCH] x86/svm: Add unsupported instruction intercept test Kevin Cheng @ 2025-12-05 8:14 ` Kevin Cheng 2 siblings, 0 replies; 8+ messages in thread From: Kevin Cheng @ 2025-12-05 8:14 UTC (permalink / raw) To: kvm; +Cc: yosryahmed, andrew.jones, thuth, pbonzini, seanjc Sorry please ignore this! On Fri, Dec 5, 2025 at 3:02 AM Kevin Cheng <chengkev@google.com> wrote: > > If a feature is not advertised to L1, L1 intercepts for instructions > controlled by this feature should be ignored. Currently, the added test > fails due to a bug in nested vm exit handling where vmcb12 intercepts > are checked before vmcb02 intercepts, causing the #UD exception to never > be injected into L2 if the L1 intercept is set. This is fixed in [0] > [...] ^ permalink raw reply [flat|nested] 8+ messages in thread
* [kvm-unit-tests PATCH 0/2] x86/svm: Add testing for L1 intercept bug @ 2025-12-05 8:14 Kevin Cheng 2025-12-05 8:14 ` [kvm-unit-tests PATCH 1/2] x86/svm: Add missing svm intercepts Kevin Cheng 0 siblings, 1 reply; 8+ messages in thread From: Kevin Cheng @ 2025-12-05 8:14 UTC (permalink / raw) To: kvm; +Cc: yosryahmed, andrew.jones, thuth, pbonzini, seanjc, Kevin Cheng If a feature is not advertised to L1, L1 intercepts for instructions controlled by this feature should be ignored. Currently, the added test fails due to a bug in nested vm exit handling where vmcb12 intercepts are checked before vmcb02 intercepts, causing the #UD exception to never be injected into L2 if the L1 intercept is set. This is fixed in [0]. The first patch just adds the missing intercepts needed for testing and restructures the vmcb_control_area struct to make adding the missing intercepts less ugly. The second patch adds the test which disables all relevant features that have available instruction intercepts, and checks that the #UD exception is correctly delivered despite the L1 intercept being set. [0] https://lore.kernel.org/all/20251205070630.4013452-1-chengkev@google.com/ Kevin Cheng (2): x86/svm: Add missing svm intercepts x86/svm: Add unsupported instruction intercept test x86/svm.c | 6 +- x86/svm.h | 87 +++++++++++++++++++--- x86/svm_tests.c | 186 ++++++++++++++++++++++++++++++++-------------- x86/unittests.cfg | 9 ++- 4 files changed, 218 insertions(+), 70 deletions(-) -- 2.52.0.223.gf5cc29aaa4-goog ^ permalink raw reply [flat|nested] 8+ messages in thread
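The ordering problem the cover letter describes can be modeled in a few lines of standalone C (hypothetical names only, not KVM's actual nested-exit code): consulting the L1 (vmcb12) intercept before asking whether the instruction is even valid for the guest swallows the #UD that should be injected into L2.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical model of the nested #VMEXIT decision, not KVM code. */
enum action { FORWARD_TO_L1, INJECT_UD_INTO_L2, EMULATE_IN_L0 };

static enum action nested_exit(bool l1_intercept, bool advertised, bool buggy)
{
	if (buggy && l1_intercept)	/* vmcb12 checked first: the bug */
		return FORWARD_TO_L1;
	if (!advertised)		/* feature hidden from L1: insn #UDs */
		return INJECT_UD_INTO_L2;
	if (l1_intercept)
		return FORWARD_TO_L1;
	return EMULATE_IN_L0;
}

int main(void)
{
	/* Feature not advertised, L1 intercept set: the buggy ordering
	 * forwards the exit to L1, the fixed ordering injects #UD into
	 * L2 -- exactly the distinction the new test checks. */
	assert(nested_exit(true, false, true) == FORWARD_TO_L1);
	assert(nested_exit(true, false, false) == INJECT_UD_INTO_L2);
	/* When the feature is advertised, the L1 intercept is honored. */
	assert(nested_exit(true, true, false) == FORWARD_TO_L1);
	printf("ordering ok\n");
	return 0;
}
```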
* [kvm-unit-tests PATCH 1/2] x86/svm: Add missing svm intercepts 2025-12-05 8:14 Kevin Cheng @ 2025-12-05 8:14 ` Kevin Cheng 2025-12-09 1:31 ` Yosry Ahmed 0 siblings, 1 reply; 8+ messages in thread From: Kevin Cheng @ 2025-12-05 8:14 UTC (permalink / raw) To: kvm; +Cc: yosryahmed, andrew.jones, thuth, pbonzini, seanjc, Kevin Cheng Some intercepts are missing from the KUT svm testing. Add all missing intercepts and reorganize the svm intercept definition/setting/clearing. Signed-off-by: Kevin Cheng <chengkev@google.com> --- x86/svm.c | 6 +-- x86/svm.h | 82 +++++++++++++++++++++++++++++++---- x86/svm_tests.c | 111 ++++++++++++++++++++++++------------------------ 3 files changed, 131 insertions(+), 68 deletions(-) diff --git a/x86/svm.c b/x86/svm.c index de9eb19443caa..9a4c14e368cd4 100644 --- a/x86/svm.c +++ b/x86/svm.c @@ -193,9 +193,9 @@ void vmcb_ident(struct vmcb *vmcb) save->cr2 = read_cr2(); save->g_pat = rdmsr(MSR_IA32_CR_PAT); save->dbgctl = rdmsr(MSR_IA32_DEBUGCTLMSR); - ctrl->intercept = (1ULL << INTERCEPT_VMRUN) | - (1ULL << INTERCEPT_VMMCALL) | - (1ULL << INTERCEPT_SHUTDOWN); + vmcb_set_intercept(INTERCEPT_VMRUN); + vmcb_set_intercept(INTERCEPT_VMMCALL); + vmcb_set_intercept(INTERCEPT_SHUTDOWN); ctrl->iopm_base_pa = virt_to_phys(io_bitmap); ctrl->msrpm_base_pa = virt_to_phys(msr_bitmap); diff --git a/x86/svm.h b/x86/svm.h index 264583a6547ef..93ef6f772c6ee 100644 --- a/x86/svm.h +++ b/x86/svm.h @@ -2,9 +2,49 @@ #define X86_SVM_H #include "libcflat.h" +#include "bitops.h" + +enum intercept_words { + INTERCEPT_CR = 0, + INTERCEPT_DR, + INTERCEPT_EXCEPTION, + INTERCEPT_WORD3, + INTERCEPT_WORD4, + INTERCEPT_WORD5, + MAX_INTERCEPT, +}; enum { - INTERCEPT_INTR, + /* Byte offset 000h (word 0) */ + INTERCEPT_CR0_READ = 0, + INTERCEPT_CR3_READ = 3, + INTERCEPT_CR4_READ = 4, + INTERCEPT_CR8_READ = 8, + INTERCEPT_CR0_WRITE = 16, + INTERCEPT_CR3_WRITE = 16 + 3, + INTERCEPT_CR4_WRITE = 16 + 4, + INTERCEPT_CR8_WRITE = 16 + 8, + /* Byte offset 004h (word 1) */ + 
INTERCEPT_DR0_READ = 32, + INTERCEPT_DR1_READ, + INTERCEPT_DR2_READ, + INTERCEPT_DR3_READ, + INTERCEPT_DR4_READ, + INTERCEPT_DR5_READ, + INTERCEPT_DR6_READ, + INTERCEPT_DR7_READ, + INTERCEPT_DR0_WRITE = 48, + INTERCEPT_DR1_WRITE, + INTERCEPT_DR2_WRITE, + INTERCEPT_DR3_WRITE, + INTERCEPT_DR4_WRITE, + INTERCEPT_DR5_WRITE, + INTERCEPT_DR6_WRITE, + INTERCEPT_DR7_WRITE, + /* Byte offset 008h (word 2) */ + INTERCEPT_EXCEPTION_OFFSET = 64, + /* Byte offset 00Ch (word 3) */ + INTERCEPT_INTR = 96, INTERCEPT_NMI, INTERCEPT_SMI, INTERCEPT_INIT, @@ -36,7 +76,8 @@ enum { INTERCEPT_TASK_SWITCH, INTERCEPT_FERR_FREEZE, INTERCEPT_SHUTDOWN, - INTERCEPT_VMRUN, + /* Byte offset 010h (word 4) */ + INTERCEPT_VMRUN = 128, INTERCEPT_VMMCALL, INTERCEPT_VMLOAD, INTERCEPT_VMSAVE, @@ -49,6 +90,24 @@ enum { INTERCEPT_MONITOR, INTERCEPT_MWAIT, INTERCEPT_MWAIT_COND, + INTERCEPT_XSETBV, + INTERCEPT_RDPRU, + TRAP_EFER_WRITE, + TRAP_CR0_WRITE, + TRAP_CR1_WRITE, + TRAP_CR2_WRITE, + TRAP_CR3_WRITE, + TRAP_CR4_WRITE, + TRAP_CR5_WRITE, + TRAP_CR6_WRITE, + TRAP_CR7_WRITE, + TRAP_CR8_WRITE, + /* Byte offset 014h (word 5) */ + INTERCEPT_INVLPGB = 160, + INTERCEPT_INVLPGB_ILLEGAL, + INTERCEPT_INVPCID, + INTERCEPT_MCOMMIT, + INTERCEPT_TLBSYNC, }; enum { @@ -69,13 +128,8 @@ enum { }; struct __attribute__ ((__packed__)) vmcb_control_area { - u16 intercept_cr_read; - u16 intercept_cr_write; - u16 intercept_dr_read; - u16 intercept_dr_write; - u32 intercept_exceptions; - u64 intercept; - u8 reserved_1[40]; + u32 intercept[MAX_INTERCEPT]; + u8 reserved_1[36]; u16 pause_filter_thresh; u16 pause_filter_count; u64 iopm_base_pa; @@ -441,6 +495,16 @@ void test_set_guest(test_guest_func func); extern struct vmcb *vmcb; +static inline void vmcb_set_intercept(u64 val) +{ + __set_bit(val, vmcb->control.intercept); +} + +static inline void vmcb_clear_intercept(u64 val) +{ + __clear_bit(val, vmcb->control.intercept); +} + static inline void stgi(void) { asm volatile ("stgi"); diff --git a/x86/svm_tests.c b/x86/svm_tests.c 
index 3761647642542..ccc89d45d4db9 100644 --- a/x86/svm_tests.c +++ b/x86/svm_tests.c @@ -63,7 +63,7 @@ static bool null_check(struct svm_test *test) static void prepare_no_vmrun_int(struct svm_test *test) { - vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMRUN); + vmcb_clear_intercept(INTERCEPT_VMRUN); } static bool check_no_vmrun_int(struct svm_test *test) @@ -84,8 +84,8 @@ static bool check_vmrun(struct svm_test *test) static void prepare_rsm_intercept(struct svm_test *test) { default_prepare(test); - vmcb->control.intercept |= 1 << INTERCEPT_RSM; - vmcb->control.intercept_exceptions |= (1ULL << UD_VECTOR); + vmcb_set_intercept(INTERCEPT_RSM); + vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + UD_VECTOR); } static void test_rsm_intercept(struct svm_test *test) @@ -107,7 +107,7 @@ static bool finished_rsm_intercept(struct svm_test *test) vmcb->control.exit_code); return true; } - vmcb->control.intercept &= ~(1 << INTERCEPT_RSM); + vmcb_clear_intercept(INTERCEPT_RSM); inc_test_stage(test); break; @@ -132,7 +132,7 @@ static void prepare_sel_cr0_intercept(struct svm_test *test) /* Clear CR0.MP and CR0.CD as the tests will set either of them */ vmcb->save.cr0 &= ~X86_CR0_MP; vmcb->save.cr0 &= ~X86_CR0_CD; - vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0); + vmcb_set_intercept(INTERCEPT_SELECTIVE_CR0); } static void prepare_sel_nonsel_cr0_intercepts(struct svm_test *test) @@ -140,8 +140,8 @@ static void prepare_sel_nonsel_cr0_intercepts(struct svm_test *test) /* Clear CR0.MP and CR0.CD as the tests will set either of them */ vmcb->save.cr0 &= ~X86_CR0_MP; vmcb->save.cr0 &= ~X86_CR0_CD; - vmcb->control.intercept_cr_write |= (1ULL << 0); - vmcb->control.intercept |= (1ULL << INTERCEPT_SELECTIVE_CR0); + vmcb_set_intercept(INTERCEPT_CR0_WRITE); + vmcb_set_intercept(INTERCEPT_SELECTIVE_CR0); } static void __test_cr0_write_bit(struct svm_test *test, unsigned long bit, @@ -218,7 +218,7 @@ static bool check_cr0_nointercept(struct svm_test *test) static void 
prepare_cr3_intercept(struct svm_test *test) { default_prepare(test); - vmcb->control.intercept_cr_read |= 1 << 3; + vmcb_set_intercept(INTERCEPT_CR3_READ); } static void test_cr3_intercept(struct svm_test *test) @@ -252,7 +252,7 @@ static void corrupt_cr3_intercept_bypass(void *_test) static void prepare_cr3_intercept_bypass(struct svm_test *test) { default_prepare(test); - vmcb->control.intercept_cr_read |= 1 << 3; + vmcb_set_intercept(INTERCEPT_CR3_READ); on_cpu_async(1, corrupt_cr3_intercept_bypass, test); } @@ -272,8 +272,7 @@ static void test_cr3_intercept_bypass(struct svm_test *test) static void prepare_dr_intercept(struct svm_test *test) { default_prepare(test); - vmcb->control.intercept_dr_read = 0xff; - vmcb->control.intercept_dr_write = 0xff; + vmcb->control.intercept[INTERCEPT_DR] = 0xff00ff; } static void test_dr_intercept(struct svm_test *test) @@ -390,7 +389,7 @@ static bool next_rip_supported(void) static void prepare_next_rip(struct svm_test *test) { - vmcb->control.intercept |= (1ULL << INTERCEPT_RDTSC); + vmcb_set_intercept(INTERCEPT_RDTSC); } @@ -416,7 +415,7 @@ static bool is_x2apic; static void prepare_msr_intercept(struct svm_test *test) { default_prepare(test); - vmcb->control.intercept |= (1ULL << INTERCEPT_MSR_PROT); + vmcb_set_intercept(INTERCEPT_MSR_PROT); memset(msr_bitmap, 0, MSR_BITMAP_SIZE); @@ -663,10 +662,10 @@ static bool check_msr_intercept(struct svm_test *test) static void prepare_mode_switch(struct svm_test *test) { - vmcb->control.intercept_exceptions |= (1ULL << GP_VECTOR) - | (1ULL << UD_VECTOR) - | (1ULL << DF_VECTOR) - | (1ULL << PF_VECTOR); + vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + GP_VECTOR); + vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + UD_VECTOR); + vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + DF_VECTOR); + vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + PF_VECTOR); test->scratch = 0; } @@ -773,7 +772,7 @@ extern u8 *io_bitmap; static void prepare_ioio(struct svm_test *test) { - vmcb->control.intercept
|= (1ULL << INTERCEPT_IOIO_PROT); + vmcb_set_intercept(INTERCEPT_IOIO_PROT); test->scratch = 0; memset(io_bitmap, 0, 8192); io_bitmap[8192] = 0xFF; @@ -1171,7 +1170,7 @@ static void pending_event_prepare(struct svm_test *test) pending_event_guest_run = false; - vmcb->control.intercept |= (1ULL << INTERCEPT_INTR); + vmcb_set_intercept(INTERCEPT_INTR); vmcb->control.int_ctl |= V_INTR_MASKING_MASK; apic_icr_write(APIC_DEST_SELF | APIC_DEST_PHYSICAL | @@ -1195,7 +1194,7 @@ static bool pending_event_finished(struct svm_test *test) return true; } - vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR); + vmcb_clear_intercept(INTERCEPT_INTR); vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; if (pending_event_guest_run) { @@ -1400,7 +1399,7 @@ static bool interrupt_finished(struct svm_test *test) } vmcb->save.rip += 3; - vmcb->control.intercept |= (1ULL << INTERCEPT_INTR); + vmcb_set_intercept(INTERCEPT_INTR); vmcb->control.int_ctl |= V_INTR_MASKING_MASK; break; @@ -1414,7 +1413,7 @@ static bool interrupt_finished(struct svm_test *test) sti_nop_cli(); - vmcb->control.intercept &= ~(1ULL << INTERCEPT_INTR); + vmcb_clear_intercept(INTERCEPT_INTR); vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; break; @@ -1476,7 +1475,7 @@ static bool nmi_finished(struct svm_test *test) } vmcb->save.rip += 3; - vmcb->control.intercept |= (1ULL << INTERCEPT_NMI); + vmcb_set_intercept(INTERCEPT_NMI); break; case 1: @@ -1569,7 +1568,7 @@ static bool nmi_hlt_finished(struct svm_test *test) } vmcb->save.rip += 3; - vmcb->control.intercept |= (1ULL << INTERCEPT_NMI); + vmcb_set_intercept(INTERCEPT_NMI); break; case 2: @@ -1605,7 +1604,7 @@ static void vnmi_prepare(struct svm_test *test) * Disable NMI interception to start. Enabling vNMI without * intercepting "real" NMIs should result in an ERR VM-Exit. 
*/ - vmcb->control.intercept &= ~(1ULL << INTERCEPT_NMI); + vmcb_clear_intercept(INTERCEPT_NMI); vmcb->control.int_ctl = V_NMI_ENABLE_MASK; vmcb->control.int_vector = NMI_VECTOR; } @@ -1629,7 +1628,7 @@ static bool vnmi_finished(struct svm_test *test) return true; } report(!nmi_fired, "vNMI enabled but NMI_INTERCEPT unset!"); - vmcb->control.intercept |= (1ULL << INTERCEPT_NMI); + vmcb_set_intercept(INTERCEPT_NMI); vmcb->save.rip += 3; break; @@ -1804,7 +1803,7 @@ static bool virq_inject_finished(struct svm_test *test) return true; } virq_fired = false; - vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR); + vmcb_set_intercept(INTERCEPT_VINTR); vmcb->control.int_ctl = V_INTR_MASKING_MASK | V_IRQ_MASK | (0x0f << V_INTR_PRIO_SHIFT); break; @@ -1819,7 +1818,7 @@ static bool virq_inject_finished(struct svm_test *test) report_fail("V_IRQ fired before SVM_EXIT_VINTR"); return true; } - vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR); + vmcb_clear_intercept(INTERCEPT_VINTR); break; case 2: @@ -1842,7 +1841,7 @@ static bool virq_inject_finished(struct svm_test *test) vmcb->control.exit_code); return true; } - vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR); + vmcb_set_intercept(INTERCEPT_VINTR); break; case 4: @@ -1943,7 +1942,7 @@ static void reg_corruption_prepare(struct svm_test *test) set_test_stage(test, 0); vmcb->control.int_ctl = V_INTR_MASKING_MASK; - vmcb->control.intercept |= (1ULL << INTERCEPT_INTR); + vmcb_set_intercept(INTERCEPT_INTR); handle_irq(TIMER_VECTOR, reg_corruption_isr); @@ -2050,7 +2049,7 @@ static volatile bool init_intercept; static void init_intercept_prepare(struct svm_test *test) { init_intercept = false; - vmcb->control.intercept |= (1ULL << INTERCEPT_INIT); + vmcb_set_intercept(INTERCEPT_INIT); } static void init_intercept_test(struct svm_test *test) @@ -2547,7 +2546,7 @@ static void test_dr(void) /* TODO: verify if high 32-bits are sign- or zero-extended on bare metal */ #define TEST_BITMAP_ADDR(save_intercept, type, addr, 
 								  exit_code, \
 			 msg) {						\
-	vmcb->control.intercept = saved_intercept | 1ULL << type;	\
+	vmcb_set_intercept(type);					\
 	if (type == INTERCEPT_MSR_PROT)					\
 		vmcb->control.msrpm_base_pa = addr;			\
 	else								\
@@ -2574,7 +2573,7 @@ static void test_dr(void)
  */
 static void test_msrpm_iopm_bitmap_addrs(void)
 {
-	u64 saved_intercept = vmcb->control.intercept;
+	u32 saved_intercept = vmcb->control.intercept[INTERCEPT_WORD3];
 	u64 addr_beyond_limit = 1ull << cpuid_maxphyaddr();
 	u64 addr = virt_to_phys(msr_bitmap) & (~((1ull << 12) - 1));
 
@@ -2615,7 +2614,7 @@ static void test_msrpm_iopm_bitmap_addrs(void)
 	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, addr,
 			 SVM_EXIT_VMMCALL, "IOPM");
 
-	vmcb->control.intercept = saved_intercept;
+	vmcb->control.intercept[INTERCEPT_WORD3] = saved_intercept;
 }
 
 /*
@@ -2811,7 +2810,7 @@ static void vmload_vmsave_guest_main(struct svm_test *test)
 
 static void svm_vmload_vmsave(void)
 {
-	u32 intercept_saved = vmcb->control.intercept;
+	u32 intercept_saved = vmcb->control.intercept[INTERCEPT_WORD4];
 
 	test_set_guest(vmload_vmsave_guest_main);
 
@@ -2819,8 +2818,8 @@ static void svm_vmload_vmsave(void)
 	 * Disabling intercept for VMLOAD and VMSAVE doesn't cause
 	 * respective #VMEXIT to host
 	 */
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
+	vmcb_clear_intercept(INTERCEPT_VMLOAD);
+	vmcb_clear_intercept(INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
@@ -2829,39 +2828,39 @@ static void svm_vmload_vmsave(void)
 	 * Enabling intercept for VMLOAD and VMSAVE causes respective
 	 * #VMEXIT to host
 	 */
-	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
+	vmcb_set_intercept(INTERCEPT_VMLOAD);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
-	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
+	vmcb_clear_intercept(INTERCEPT_VMLOAD);
+	vmcb_set_intercept(INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
+	vmcb_clear_intercept(INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
-	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
+	vmcb_set_intercept(INTERCEPT_VMLOAD);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
+	vmcb_clear_intercept(INTERCEPT_VMLOAD);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
-	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
+	vmcb_set_intercept(INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
-	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
+	vmcb_clear_intercept(INTERCEPT_VMSAVE);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
 	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
 
-	vmcb->control.intercept = intercept_saved;
+	vmcb->control.intercept[INTERCEPT_WORD4] = intercept_saved;
 }
 
 static void prepare_vgif_enabled(struct svm_test *test)
@@ -2974,7 +2973,7 @@ static void pause_filter_test(void)
 		return;
 	}
 
-	vmcb->control.intercept |= (1 << INTERCEPT_PAUSE);
+	vmcb_set_intercept(INTERCEPT_PAUSE);
 
 	// filter count more that pause count - no VMexit
 	pause_filter_run_test(10, 9, 0, 0);
@@ -3356,7 +3355,7 @@ static void svm_intr_intercept_mix_if(void)
 	// make a physical interrupt to be pending
 	handle_irq(0x55, dummy_isr);
 
-	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vmcb_set_intercept(INTERCEPT_INTR);
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
@@ -3389,7 +3388,7 @@ static void svm_intr_intercept_mix_gif(void)
 {
 	handle_irq(0x55, dummy_isr);
 
-	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vmcb_set_intercept(INTERCEPT_INTR);
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	vmcb->save.rflags &= ~X86_EFLAGS_IF;
 
@@ -3419,7 +3418,7 @@ static void svm_intr_intercept_mix_gif2(void)
 {
 	handle_irq(0x55, dummy_isr);
 
-	vmcb->control.intercept |= (1 << INTERCEPT_INTR);
+	vmcb_set_intercept(INTERCEPT_INTR);
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	vmcb->save.rflags |= X86_EFLAGS_IF;
 
@@ -3448,7 +3447,7 @@ static void svm_intr_intercept_mix_nmi(void)
 {
 	handle_exception(2, dummy_nmi_handler);
 
-	vmcb->control.intercept |= (1 << INTERCEPT_NMI);
+	vmcb_set_intercept(INTERCEPT_NMI);
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	vmcb->save.rflags |= X86_EFLAGS_IF;
 
@@ -3472,7 +3471,7 @@ static void svm_intr_intercept_mix_smi_guest(struct svm_test *test)
 
 static void svm_intr_intercept_mix_smi(void)
 {
-	vmcb->control.intercept |= (1 << INTERCEPT_SMI);
+	vmcb_set_intercept(INTERCEPT_SMI);
 	vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;
 	test_set_guest(svm_intr_intercept_mix_smi_guest);
 	svm_intr_intercept_mix_run_guest(NULL, SVM_EXIT_SMI);
@@ -3530,14 +3529,14 @@ static void handle_exception_in_l2(u8 vector)
 
 static void handle_exception_in_l1(u32 vector)
 {
-	u32 old_ie = vmcb->control.intercept_exceptions;
+	u32 old_ie = vmcb->control.intercept[INTERCEPT_EXCEPTION];
 
-	vmcb->control.intercept_exceptions |= (1ULL << vector);
+	vmcb_set_intercept(INTERCEPT_EXCEPTION_OFFSET + vector);
 
 	report(svm_vmrun() == (SVM_EXIT_EXCP_BASE + vector),
 	       "%s handled by L1", exception_mnemonic(vector));
 
-	vmcb->control.intercept_exceptions = old_ie;
+	vmcb->control.intercept[INTERCEPT_EXCEPTION] = old_ie;
 }
 
 static void svm_exception_test(void)
@@ -3568,7 +3567,7 @@ static void svm_shutdown_intercept_test(void)
 {
 	test_set_guest(shutdown_intercept_test_guest);
 	vmcb->save.idtr.base = (u64)alloc_vpage();
-	vmcb->control.intercept |= (1ULL << INTERCEPT_SHUTDOWN);
+	vmcb_set_intercept(INTERCEPT_SHUTDOWN);
 	svm_vmrun();
 	report(vmcb->control.exit_code == SVM_EXIT_SHUTDOWN, "shutdown test passed");
 }
-- 
2.52.0.223.gf5cc29aaa4-goog


^ permalink raw reply related	[flat|nested] 8+ messages in thread
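[Editor's note: the conversion above relies on each intercept number encoding both a word index and a bit position within the new u32 intercept[MAX_INTERCEPT] array. A minimal sketch of what helpers like vmcb_set_intercept()/vmcb_clear_intercept() could look like under that assumption — the real definitions live in the x86/svm.h hunk, which is only partially quoted in this archive, and the nr = word * 32 + bit split shown here is borrowed from KVM's layout:]

```c
/* Sketch only: plausible set/clear helpers for the new u32
 * intercept[MAX_INTERCEPT] layout. The nr = word * 32 + bit encoding is
 * an assumption mirroring KVM; the actual helpers are in x86/svm.h. */
enum intercept_words {
	INTERCEPT_CR = 0,
	INTERCEPT_DR,
	INTERCEPT_EXCEPTION,
	INTERCEPT_WORD3,
	INTERCEPT_WORD4,
	INTERCEPT_WORD5,
	MAX_INTERCEPT,
};

/* Stand-in for vmcb->control.intercept[] so the sketch is self-contained. */
static unsigned int intercepts[MAX_INTERCEPT];

static void vmcb_set_intercept(unsigned int nr)
{
	intercepts[nr / 32] |= 1u << (nr % 32);
}

static void vmcb_clear_intercept(unsigned int nr)
{
	intercepts[nr / 32] &= ~(1u << (nr % 32));
}
```

With this split, callers such as vmcb_set_intercept(INTERCEPT_VMLOAD) no longer need to know which 32-bit word a given intercept lives in.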
* Re: [kvm-unit-tests PATCH 1/2] x86/svm: Add missing svm intercepts
  2025-12-05  8:14 ` [kvm-unit-tests PATCH 1/2] x86/svm: Add missing svm intercepts Kevin Cheng
@ 2025-12-09  1:31   ` Yosry Ahmed
  0 siblings, 0 replies; 8+ messages in thread
From: Yosry Ahmed @ 2025-12-09  1:31 UTC (permalink / raw)
  To: Kevin Cheng; +Cc: kvm, andrew.jones, thuth, pbonzini, seanjc

On Fri, Dec 5, 2025 at 12:14 AM Kevin Cheng <chengkev@google.com> wrote:
>
> Some intercepts are missing from the KUT svm testing. Add all missing
> intercepts and reorganize the svm intercept definition/setting/clearing.
>
> Signed-off-by: Kevin Cheng <chengkev@google.com>

Nice cleanup!

[..]

> @@ -69,13 +128,8 @@ enum {
>  };
>
>  struct __attribute__ ((__packed__)) vmcb_control_area {
> -	u16 intercept_cr_read;
> -	u16 intercept_cr_write;
> -	u16 intercept_dr_read;
> -	u16 intercept_dr_write;
> -	u32 intercept_exceptions;
> -	u64 intercept;
> -	u8 reserved_1[40];
> +	u32 intercept[MAX_INTERCEPT];
> +	u8 reserved_1[36];

Maybe do "u32 reserved_1[15 - MAX_INTERCEPT];" like the KVM definition?

>  	u16 pause_filter_thresh;
>  	u16 pause_filter_count;
>  	u64 iopm_base_pa;

[..]

> @@ -2574,7 +2573,7 @@ static void test_dr(void)
>   */
>  static void test_msrpm_iopm_bitmap_addrs(void)
>  {
> -	u64 saved_intercept = vmcb->control.intercept;
> +	u32 saved_intercept = vmcb->control.intercept[INTERCEPT_WORD3];

I think hardcoding save/restore for the relevant intercept word here is
a bit fragile (and leaks the abstraction). If the test is extended to
update more intercepts that are not in INTERCEPT_WORD3, it can easily be
missed.

How about introducing helpers to save/restore all intercepts and using
them here (and in svm_vmload_vmsave() below)? We can define the array on
the stack in the test and pass it to the helpers.

Something like this (untested):

static void test_msrpm_iopm_bitmap_addrs(void)
{
	u32 saved_intercepts[MAX_INTERCEPT];

	save_intercepts(vmcb, saved_intercepts);
	...
	restore_intercepts(vmcb, saved_intercepts);
}

>  	u64 addr_beyond_limit = 1ull << cpuid_maxphyaddr();
>  	u64 addr = virt_to_phys(msr_bitmap) & (~((1ull << 12) - 1));
>
> @@ -2615,7 +2614,7 @@ static void test_msrpm_iopm_bitmap_addrs(void)
>  	TEST_BITMAP_ADDR(saved_intercept, INTERCEPT_IOIO_PROT, addr,
>  			 SVM_EXIT_VMMCALL, "IOPM");
>
> -	vmcb->control.intercept = saved_intercept;
> +	vmcb->control.intercept[INTERCEPT_WORD3] = saved_intercept;
>  }
>
>  /*
> @@ -2811,7 +2810,7 @@ static void vmload_vmsave_guest_main(struct svm_test *test)
>
>  static void svm_vmload_vmsave(void)
>  {
> -	u32 intercept_saved = vmcb->control.intercept;
> +	u32 intercept_saved = vmcb->control.intercept[INTERCEPT_WORD4];
>
>  	test_set_guest(vmload_vmsave_guest_main);
>
> @@ -2819,8 +2818,8 @@ static void svm_vmload_vmsave(void)
>  	 * Disabling intercept for VMLOAD and VMSAVE doesn't cause
>  	 * respective #VMEXIT to host
>  	 */
> -	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> -	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> +	vmcb_clear_intercept(INTERCEPT_VMLOAD);
> +	vmcb_clear_intercept(INTERCEPT_VMSAVE);
>  	svm_vmrun();
>  	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
> @@ -2829,39 +2828,39 @@ static void svm_vmload_vmsave(void)
>  	 * Enabling intercept for VMLOAD and VMSAVE causes respective
>  	 * #VMEXIT to host
>  	 */
> -	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
> +	vmcb_set_intercept(INTERCEPT_VMLOAD);
>  	svm_vmrun();
>  	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
> -	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> -	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
> +	vmcb_clear_intercept(INTERCEPT_VMLOAD);
> +	vmcb_set_intercept(INTERCEPT_VMSAVE);
>  	svm_vmrun();
>  	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
> -	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> +	vmcb_clear_intercept(INTERCEPT_VMSAVE);
>  	svm_vmrun();
>  	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
>
> -	vmcb->control.intercept |= (1ULL << INTERCEPT_VMLOAD);
> +	vmcb_set_intercept(INTERCEPT_VMLOAD);
>  	svm_vmrun();
>  	report(vmcb->control.exit_code == SVM_EXIT_VMLOAD, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMLOAD #VMEXIT");
> -	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMLOAD);
> +	vmcb_clear_intercept(INTERCEPT_VMLOAD);
>  	svm_vmrun();
>  	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
>
> -	vmcb->control.intercept |= (1ULL << INTERCEPT_VMSAVE);
> +	vmcb_set_intercept(INTERCEPT_VMSAVE);
>  	svm_vmrun();
>  	report(vmcb->control.exit_code == SVM_EXIT_VMSAVE, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMSAVE #VMEXIT");
> -	vmcb->control.intercept &= ~(1ULL << INTERCEPT_VMSAVE);
> +	vmcb_clear_intercept(INTERCEPT_VMSAVE);
>  	svm_vmrun();
>  	report(vmcb->control.exit_code == SVM_EXIT_VMMCALL, "Test "
>  	       "VMLOAD/VMSAVE intercept: Expected VMMCALL #VMEXIT");
>
> -	vmcb->control.intercept = intercept_saved;
> +	vmcb->control.intercept[INTERCEPT_WORD4] = intercept_saved;
>  }
>

[..]

^ permalink raw reply	[flat|nested] 8+ messages in thread
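[Editor's note: the save/restore helpers suggested in the review above could be fleshed out roughly as below. The save_intercepts()/restore_intercepts() names come from the reviewer's untested snippet; the pared-down vmcb types are illustrative stand-ins for the real KUT structures, not committed code:]

```c
#include <string.h>

enum { MAX_INTERCEPT = 6 };

/* Pared-down stand-ins for the KUT vmcb types, so the sketch compiles
 * on its own. */
struct vmcb_control_area {
	unsigned int intercept[MAX_INTERCEPT];
};

struct vmcb {
	struct vmcb_control_area control;
};

/* Snapshot every intercept word at once, so a test doesn't need to
 * know which word a given intercept lives in. */
static void save_intercepts(struct vmcb *vmcb, unsigned int *saved)
{
	memcpy(saved, vmcb->control.intercept, sizeof(vmcb->control.intercept));
}

static void restore_intercepts(struct vmcb *vmcb, const unsigned int *saved)
{
	memcpy(vmcb->control.intercept, saved, sizeof(vmcb->control.intercept));
}
```

A test would then declare u32 saved[MAX_INTERCEPT] on its stack, call save_intercepts() on entry and restore_intercepts() on exit, and remain correct even if it later touches intercepts outside INTERCEPT_WORD3/WORD4.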