* [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types
@ 2026-04-20 21:19 Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 01/19] KVM: selftests: Use gva_t instead of vm_vaddr_t Sean Christopherson
` (16 more replies)
0 siblings, 17 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:19 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
David's series that renames types across all KVM selftests. I'm going to apply
this ~now in order to get it into -next ASAP. Unless someone screams in the
next few days, I'm going to send a pull request on Thursday, with the goal of
getting this into -rc1 so that all architectures (and developers) can use the
new types straightaway.
Fully tested on x86, and I verified a handful of tests generate identical
code. I tried to do the same for other architectures, but gcc at least doesn't
seem to produce reproducible builds there. E.g. on arm64 and
LoongArch, a completely benign vm_vaddr_t => gva_t rename would sometimes result
in different offsets in the generated code. But based on manual diffs from
objdump, I'm fairly confident in the result.
The primary goal is to more closely align KVM selftests with the types used in
the kernel proper (selftests are mostly developed by kernel developers):
vm_vaddr_t -> gva_t
vm_paddr_t -> gpa_t
uint64_t -> u64
uint32_t -> u32
uint16_t -> u16
uint8_t -> u8
int64_t -> s64
int32_t -> s32
int16_t -> s16
int8_t -> s8
As a bonus, the new type names are shorter and thus make the KVM selftests code
more concise.
v3:
- Use vm_alloc() instead of gva_alloc(), and put the API renames in a separate
patch.
- Rename vaddr => gva and paddr => gpa throughout KVM selftests.
- Convert a pile of "u64" variables to gva_t or gpa_t as appropriate.
- Clarify ambiguous variables and function names in arm64's inject_uer()
and translate_to_host_paddr().
- Rename pread_uint64() => pread_u64().
v2: https://lore.kernel.org/all/20260220004223.4168331-1-dmatlack@google.com
v1: https://lore.kernel.org/kvm/20250501183304.2433192-1-dmatlack@google.com
David Matlack (10):
KVM: selftests: Use gva_t instead of vm_vaddr_t
KVM: selftests: Use gpa_t instead of vm_paddr_t
KVM: selftests: Use gpa_t for GPAs in Hyper-V selftests
KVM: selftests: Use u64 instead of uint64_t
KVM: selftests: Use s64 instead of int64_t
KVM: selftests: Use u32 instead of uint32_t
KVM: selftests: Use s32 instead of int32_t
KVM: selftests: Use u16 instead of uint16_t
KVM: selftests: Use s16 instead of int16_t
KVM: selftests: Use u8 instead of uint8_t
Sean Christopherson (9):
KVM: selftests: Drop "vaddr_" from APIs that allocate memory for a
given VM
KVM: selftests: Rename vm_vaddr_unused_gap() => vm_unused_gva_gap()
KVM: selftests: Rename vm_vaddr_populate_bitmap() =>
vm_populate_gva_bitmap()
KVM: selftests: Rename translate_to_host_paddr() =>
translate_hva_to_hpa()
KVM: selftests: Clarify that arm64's inject_uer() takes a host PA, not
a guest PA
KVM: selftests: Replace "vaddr" with "gva" throughout
KVM: selftests: Replace "u64 gpa" with "gpa_t" throughout
KVM: selftests: Replace "u64 nested_paddr" with "gpa_t l2_gpa"
KVM: selftests: Replace "paddr" with "gpa" throughout
.../selftests/kvm/access_tracking_perf_test.c | 44 +--
tools/testing/selftests/kvm/arch_timer.c | 6 +-
.../selftests/kvm/arm64/aarch32_id_regs.c | 14 +-
.../testing/selftests/kvm/arm64/arch_timer.c | 8 +-
.../kvm/arm64/arch_timer_edge_cases.c | 161 ++++----
.../selftests/kvm/arm64/debug-exceptions.c | 72 ++--
.../testing/selftests/kvm/arm64/hypercalls.c | 24 +-
.../testing/selftests/kvm/arm64/idreg-idst.c | 4 +-
tools/testing/selftests/kvm/arm64/no-vgic.c | 8 +-
.../selftests/kvm/arm64/page_fault_test.c | 82 ++--
tools/testing/selftests/kvm/arm64/psci_test.c | 26 +-
.../testing/selftests/kvm/arm64/sea_to_user.c | 41 +-
.../testing/selftests/kvm/arm64/set_id_regs.c | 70 ++--
.../selftests/kvm/arm64/smccc_filter.c | 10 +-
tools/testing/selftests/kvm/arm64/vgic_init.c | 56 +--
tools/testing/selftests/kvm/arm64/vgic_irq.c | 137 +++----
.../selftests/kvm/arm64/vgic_lpi_stress.c | 20 +-
tools/testing/selftests/kvm/arm64/vgic_v5.c | 10 +-
.../selftests/kvm/arm64/vpmu_counter_access.c | 56 +--
.../testing/selftests/kvm/coalesced_io_test.c | 38 +-
.../selftests/kvm/demand_paging_test.c | 10 +-
.../selftests/kvm/dirty_log_perf_test.c | 14 +-
tools/testing/selftests/kvm/dirty_log_test.c | 82 ++--
tools/testing/selftests/kvm/get-reg-list.c | 2 +-
.../testing/selftests/kvm/guest_memfd_test.c | 18 +-
.../testing/selftests/kvm/guest_print_test.c | 22 +-
.../selftests/kvm/hardware_disable_test.c | 6 +-
.../selftests/kvm/include/arm64/arch_timer.h | 30 +-
.../selftests/kvm/include/arm64/delay.h | 4 +-
.../testing/selftests/kvm/include/arm64/gic.h | 8 +-
.../selftests/kvm/include/arm64/gic_v3_its.h | 7 +-
.../selftests/kvm/include/arm64/processor.h | 22 +-
.../selftests/kvm/include/arm64/ucall.h | 4 +-
.../selftests/kvm/include/arm64/vgic.h | 22 +-
.../testing/selftests/kvm/include/kvm_util.h | 344 ++++++++---------
.../selftests/kvm/include/kvm_util_types.h | 8 +-
.../kvm/include/loongarch/arch_timer.h | 4 +-
.../selftests/kvm/include/loongarch/ucall.h | 4 +-
.../testing/selftests/kvm/include/memstress.h | 30 +-
.../selftests/kvm/include/riscv/arch_timer.h | 22 +-
.../selftests/kvm/include/riscv/processor.h | 9 +-
.../selftests/kvm/include/riscv/ucall.h | 4 +-
.../kvm/include/s390/diag318_test_handler.h | 2 +-
.../selftests/kvm/include/s390/facility.h | 4 +-
.../selftests/kvm/include/s390/ucall.h | 4 +-
.../testing/selftests/kvm/include/sparsebit.h | 6 +-
.../testing/selftests/kvm/include/test_util.h | 40 +-
.../selftests/kvm/include/timer_test.h | 18 +-
.../selftests/kvm/include/ucall_common.h | 22 +-
.../selftests/kvm/include/userfaultfd_util.h | 6 +-
.../testing/selftests/kvm/include/x86/apic.h | 22 +-
.../testing/selftests/kvm/include/x86/evmcs.h | 22 +-
.../selftests/kvm/include/x86/hyperv.h | 28 +-
.../selftests/kvm/include/x86/kvm_util_arch.h | 36 +-
tools/testing/selftests/kvm/include/x86/pmu.h | 9 +-
.../selftests/kvm/include/x86/processor.h | 292 +++++++-------
tools/testing/selftests/kvm/include/x86/sev.h | 20 +-
tools/testing/selftests/kvm/include/x86/smm.h | 3 +-
.../selftests/kvm/include/x86/svm_util.h | 12 +-
.../testing/selftests/kvm/include/x86/ucall.h | 2 +-
tools/testing/selftests/kvm/include/x86/vmx.h | 70 ++--
.../selftests/kvm/kvm_page_table_test.c | 54 +--
tools/testing/selftests/kvm/lib/arm64/gic.c | 6 +-
.../selftests/kvm/lib/arm64/gic_private.h | 26 +-
.../testing/selftests/kvm/lib/arm64/gic_v3.c | 90 ++---
.../selftests/kvm/lib/arm64/gic_v3_its.c | 11 +-
.../selftests/kvm/lib/arm64/processor.c | 163 ++++----
tools/testing/selftests/kvm/lib/arm64/ucall.c | 12 +-
tools/testing/selftests/kvm/lib/arm64/vgic.c | 40 +-
tools/testing/selftests/kvm/lib/elf.c | 17 +-
tools/testing/selftests/kvm/lib/guest_modes.c | 2 +-
.../testing/selftests/kvm/lib/guest_sprintf.c | 18 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 359 +++++++-----------
.../selftests/kvm/lib/loongarch/processor.c | 110 +++---
.../selftests/kvm/lib/loongarch/ucall.c | 12 +-
tools/testing/selftests/kvm/lib/memstress.c | 38 +-
.../selftests/kvm/lib/riscv/processor.c | 91 +++--
.../kvm/lib/s390/diag318_test_handler.c | 12 +-
.../testing/selftests/kvm/lib/s390/facility.c | 2 +-
.../selftests/kvm/lib/s390/processor.c | 65 ++--
tools/testing/selftests/kvm/lib/sparsebit.c | 18 +-
tools/testing/selftests/kvm/lib/test_util.c | 30 +-
.../testing/selftests/kvm/lib/ucall_common.c | 34 +-
.../selftests/kvm/lib/userfaultfd_util.c | 14 +-
tools/testing/selftests/kvm/lib/x86/apic.c | 2 +-
tools/testing/selftests/kvm/lib/x86/hyperv.c | 14 +-
.../testing/selftests/kvm/lib/x86/memstress.c | 14 +-
tools/testing/selftests/kvm/lib/x86/pmu.c | 8 +-
.../testing/selftests/kvm/lib/x86/processor.c | 292 +++++++-------
tools/testing/selftests/kvm/lib/x86/sev.c | 20 +-
tools/testing/selftests/kvm/lib/x86/svm.c | 16 +-
tools/testing/selftests/kvm/lib/x86/ucall.c | 4 +-
tools/testing/selftests/kvm/lib/x86/vmx.c | 44 +--
.../selftests/kvm/loongarch/arch_timer.c | 28 +-
.../selftests/kvm/loongarch/pmu_test.c | 10 +-
.../kvm/memslot_modification_stress_test.c | 10 +-
.../testing/selftests/kvm/memslot_perf_test.c | 164 ++++----
tools/testing/selftests/kvm/mmu_stress_test.c | 28 +-
.../selftests/kvm/pre_fault_memory_test.c | 12 +-
.../testing/selftests/kvm/riscv/arch_timer.c | 8 +-
.../testing/selftests/kvm/riscv/ebreak_test.c | 6 +-
.../selftests/kvm/riscv/get-reg-list.c | 4 +-
.../selftests/kvm/riscv/sbi_pmu_test.c | 8 +-
tools/testing/selftests/kvm/s390/debug_test.c | 8 +-
.../testing/selftests/kvm/s390/irq_routing.c | 2 +-
tools/testing/selftests/kvm/s390/memop.c | 94 ++---
tools/testing/selftests/kvm/s390/resets.c | 6 +-
.../selftests/kvm/s390/shared_zeropage_test.c | 2 +-
tools/testing/selftests/kvm/s390/tprot.c | 24 +-
.../selftests/kvm/s390/ucontrol_test.c | 8 +-
.../selftests/kvm/set_memory_region_test.c | 40 +-
tools/testing/selftests/kvm/steal_time.c | 74 ++--
.../kvm/system_counter_offset_test.c | 12 +-
tools/testing/selftests/kvm/x86/amx_test.c | 14 +-
.../selftests/kvm/x86/aperfmperf_test.c | 16 +-
.../selftests/kvm/x86/apic_bus_clock_test.c | 24 +-
tools/testing/selftests/kvm/x86/cpuid_test.c | 6 +-
tools/testing/selftests/kvm/x86/debug_regs.c | 4 +-
.../kvm/x86/dirty_log_page_splitting_test.c | 16 +-
.../kvm/x86/evmcs_smm_controls_test.c | 6 +-
.../testing/selftests/kvm/x86/fastops_test.c | 52 +--
.../selftests/kvm/x86/feature_msrs_test.c | 12 +-
.../selftests/kvm/x86/fix_hypercall_test.c | 20 +-
.../selftests/kvm/x86/flds_emulation.h | 6 +-
.../testing/selftests/kvm/x86/hwcr_msr_test.c | 10 +-
.../testing/selftests/kvm/x86/hyperv_clock.c | 6 +-
.../testing/selftests/kvm/x86/hyperv_evmcs.c | 10 +-
.../kvm/x86/hyperv_extended_hypercalls.c | 20 +-
.../selftests/kvm/x86/hyperv_features.c | 26 +-
tools/testing/selftests/kvm/x86/hyperv_ipi.c | 12 +-
.../selftests/kvm/x86/hyperv_svm_test.c | 10 +-
.../selftests/kvm/x86/hyperv_tlb_flush.c | 36 +-
.../selftests/kvm/x86/kvm_buslock_test.c | 2 +-
.../selftests/kvm/x86/kvm_clock_test.c | 14 +-
tools/testing/selftests/kvm/x86/kvm_pv_test.c | 10 +-
.../selftests/kvm/x86/monitor_mwait_test.c | 2 +-
.../selftests/kvm/x86/nested_close_kvm_test.c | 2 +-
.../selftests/kvm/x86/nested_dirty_log_test.c | 10 +-
.../selftests/kvm/x86/nested_emulation_test.c | 20 +-
.../kvm/x86/nested_exceptions_test.c | 6 +-
.../kvm/x86/nested_invalid_cr3_test.c | 2 +-
.../selftests/kvm/x86/nested_set_state_test.c | 4 +-
.../kvm/x86/nested_tsc_adjust_test.c | 12 +-
.../kvm/x86/nested_tsc_scaling_test.c | 24 +-
.../kvm/x86/nested_vmsave_vmload_test.c | 2 +-
.../selftests/kvm/x86/nx_huge_pages_test.c | 18 +-
.../selftests/kvm/x86/platform_info_test.c | 6 +-
.../selftests/kvm/x86/pmu_counters_test.c | 109 +++---
.../selftests/kvm/x86/pmu_event_filter_test.c | 102 ++---
.../kvm/x86/private_mem_conversions_test.c | 78 ++--
.../kvm/x86/private_mem_kvm_exits_test.c | 14 +-
.../selftests/kvm/x86/set_boot_cpu_id.c | 6 +-
.../selftests/kvm/x86/set_sregs_test.c | 6 +-
.../selftests/kvm/x86/sev_init2_tests.c | 6 +-
.../selftests/kvm/x86/sev_smoke_test.c | 22 +-
.../x86/smaller_maxphyaddr_emulation_test.c | 8 +-
tools/testing/selftests/kvm/x86/smm_test.c | 8 +-
tools/testing/selftests/kvm/x86/state_test.c | 14 +-
.../selftests/kvm/x86/svm_int_ctl_test.c | 2 +-
.../selftests/kvm/x86/svm_lbr_nested_state.c | 2 +-
.../kvm/x86/svm_nested_clear_efer_svme.c | 2 +-
.../kvm/x86/svm_nested_shutdown_test.c | 2 +-
.../kvm/x86/svm_nested_soft_inject_test.c | 10 +-
.../selftests/kvm/x86/svm_nested_vmcb12_gpa.c | 14 +-
.../selftests/kvm/x86/svm_vmcall_test.c | 2 +-
.../selftests/kvm/x86/sync_regs_test.c | 2 +-
.../kvm/x86/triple_fault_event_test.c | 4 +-
.../testing/selftests/kvm/x86/tsc_msrs_test.c | 2 +-
.../selftests/kvm/x86/tsc_scaling_sync.c | 4 +-
.../selftests/kvm/x86/ucna_injection_test.c | 45 +--
.../selftests/kvm/x86/userspace_io_test.c | 4 +-
.../kvm/x86/userspace_msr_exit_test.c | 58 +--
.../selftests/kvm/x86/vmx_apic_access_test.c | 4 +-
.../kvm/x86/vmx_apicv_updates_test.c | 4 +-
.../kvm/x86/vmx_invalid_nested_guest_state.c | 2 +-
.../testing/selftests/kvm/x86/vmx_msrs_test.c | 22 +-
.../kvm/x86/vmx_nested_la57_state_test.c | 4 +-
.../selftests/kvm/x86/vmx_pmu_caps_test.c | 12 +-
.../kvm/x86/vmx_preemption_timer_test.c | 2 +-
.../selftests/kvm/x86/xapic_ipi_test.c | 64 ++--
.../selftests/kvm/x86/xapic_state_test.c | 20 +-
.../selftests/kvm/x86/xapic_tpr_test.c | 24 +-
.../selftests/kvm/x86/xcr0_cpuid_test.c | 8 +-
.../selftests/kvm/x86/xen_shinfo_test.c | 22 +-
.../testing/selftests/kvm/x86/xss_msr_test.c | 2 +-
185 files changed, 2706 insertions(+), 2817 deletions(-)
base-commit: 6b802031877a995456c528095c41d1948546bf45
--
2.54.0.rc1.555.g9c883467ad-goog
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
* [PATCH v3 01/19] KVM: selftests: Use gva_t instead of vm_vaddr_t
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
@ 2026-04-20 21:19 ` Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 02/19] KVM: selftests: Use gpa_t instead of vm_paddr_t Sean Christopherson
` (15 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:19 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
From: David Matlack <dmatlack@google.com>
Replace all occurrences of vm_vaddr_t with gva_t to align with KVM code
and with the conversion helpers (e.g. addr_gva2hva()).
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/vm_vaddr_/gva_/g'
followed by manually adjusting whitespace to make checkpatch.pl happy and
dropping renames of functions that allocate memory within a given VM.
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
[sean: drop renames of allocator APIs]
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/arm64/vgic_irq.c | 6 ++--
.../selftests/kvm/include/arm64/processor.h | 4 +--
.../selftests/kvm/include/arm64/ucall.h | 4 +--
.../testing/selftests/kvm/include/kvm_util.h | 32 +++++++++----------
.../selftests/kvm/include/kvm_util_types.h | 2 +-
.../selftests/kvm/include/loongarch/ucall.h | 4 +--
.../selftests/kvm/include/riscv/ucall.h | 2 +-
.../selftests/kvm/include/s390/ucall.h | 2 +-
.../selftests/kvm/include/ucall_common.h | 4 +--
.../selftests/kvm/include/x86/hyperv.h | 10 +++---
.../selftests/kvm/include/x86/kvm_util_arch.h | 6 ++--
.../selftests/kvm/include/x86/svm_util.h | 2 +-
tools/testing/selftests/kvm/include/x86/vmx.h | 2 +-
.../selftests/kvm/kvm_page_table_test.c | 2 +-
.../selftests/kvm/lib/arm64/processor.c | 18 +++++------
tools/testing/selftests/kvm/lib/arm64/ucall.c | 6 ++--
tools/testing/selftests/kvm/lib/elf.c | 6 ++--
tools/testing/selftests/kvm/lib/kvm_util.c | 32 ++++++++-----------
.../selftests/kvm/lib/loongarch/processor.c | 8 ++---
.../selftests/kvm/lib/loongarch/ucall.c | 6 ++--
.../selftests/kvm/lib/riscv/processor.c | 8 ++---
.../selftests/kvm/lib/s390/processor.c | 2 +-
.../testing/selftests/kvm/lib/ucall_common.c | 8 ++---
tools/testing/selftests/kvm/lib/x86/hyperv.c | 4 +--
.../testing/selftests/kvm/lib/x86/memstress.c | 2 +-
.../testing/selftests/kvm/lib/x86/processor.c | 14 ++++----
tools/testing/selftests/kvm/lib/x86/svm.c | 4 +--
tools/testing/selftests/kvm/lib/x86/ucall.c | 2 +-
tools/testing/selftests/kvm/lib/x86/vmx.c | 4 +--
.../selftests/kvm/riscv/sbi_pmu_test.c | 2 +-
tools/testing/selftests/kvm/s390/memop.c | 6 ++--
tools/testing/selftests/kvm/s390/tprot.c | 6 ++--
tools/testing/selftests/kvm/steal_time.c | 2 +-
tools/testing/selftests/kvm/x86/amx_test.c | 2 +-
.../selftests/kvm/x86/aperfmperf_test.c | 2 +-
tools/testing/selftests/kvm/x86/cpuid_test.c | 6 ++--
.../kvm/x86/evmcs_smm_controls_test.c | 2 +-
.../testing/selftests/kvm/x86/hyperv_clock.c | 2 +-
.../testing/selftests/kvm/x86/hyperv_evmcs.c | 6 ++--
.../kvm/x86/hyperv_extended_hypercalls.c | 6 ++--
.../selftests/kvm/x86/hyperv_features.c | 6 ++--
tools/testing/selftests/kvm/x86/hyperv_ipi.c | 8 ++---
.../selftests/kvm/x86/hyperv_svm_test.c | 6 ++--
.../selftests/kvm/x86/hyperv_tlb_flush.c | 12 +++----
.../selftests/kvm/x86/kvm_buslock_test.c | 2 +-
.../selftests/kvm/x86/kvm_clock_test.c | 2 +-
.../selftests/kvm/x86/nested_close_kvm_test.c | 2 +-
.../selftests/kvm/x86/nested_dirty_log_test.c | 10 +++---
.../selftests/kvm/x86/nested_emulation_test.c | 2 +-
.../kvm/x86/nested_exceptions_test.c | 2 +-
.../kvm/x86/nested_invalid_cr3_test.c | 2 +-
.../kvm/x86/nested_tsc_adjust_test.c | 2 +-
.../kvm/x86/nested_tsc_scaling_test.c | 2 +-
.../kvm/x86/nested_vmsave_vmload_test.c | 2 +-
.../selftests/kvm/x86/sev_smoke_test.c | 2 +-
tools/testing/selftests/kvm/x86/smm_test.c | 2 +-
tools/testing/selftests/kvm/x86/state_test.c | 2 +-
.../selftests/kvm/x86/svm_int_ctl_test.c | 2 +-
.../selftests/kvm/x86/svm_lbr_nested_state.c | 2 +-
.../kvm/x86/svm_nested_clear_efer_svme.c | 2 +-
.../kvm/x86/svm_nested_shutdown_test.c | 2 +-
.../kvm/x86/svm_nested_soft_inject_test.c | 4 +--
.../selftests/kvm/x86/svm_nested_vmcb12_gpa.c | 6 ++--
.../selftests/kvm/x86/svm_vmcall_test.c | 2 +-
.../kvm/x86/triple_fault_event_test.c | 4 +--
.../selftests/kvm/x86/vmx_apic_access_test.c | 2 +-
.../kvm/x86/vmx_apicv_updates_test.c | 2 +-
.../kvm/x86/vmx_invalid_nested_guest_state.c | 2 +-
.../kvm/x86/vmx_nested_la57_state_test.c | 2 +-
.../kvm/x86/vmx_preemption_timer_test.c | 2 +-
.../selftests/kvm/x86/xapic_ipi_test.c | 2 +-
71 files changed, 172 insertions(+), 178 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/vgic_irq.c b/tools/testing/selftests/kvm/arm64/vgic_irq.c
index 2fb2c7939fe9..da87d049d246 100644
--- a/tools/testing/selftests/kvm/arm64/vgic_irq.c
+++ b/tools/testing/selftests/kvm/arm64/vgic_irq.c
@@ -731,7 +731,7 @@ static void kvm_inject_get_call(struct kvm_vm *vm, struct ucall *uc,
struct kvm_inject_args *args)
{
struct kvm_inject_args *kvm_args_hva;
- vm_vaddr_t kvm_args_gva;
+ gva_t kvm_args_gva;
kvm_args_gva = uc->args[1];
kvm_args_hva = (struct kvm_inject_args *)addr_gva2hva(vm, kvm_args_gva);
@@ -752,7 +752,7 @@ static void test_vgic(uint32_t nr_irqs, bool level_sensitive, bool eoi_split)
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
struct kvm_inject_args inject_args;
- vm_vaddr_t args_gva;
+ gva_t args_gva;
struct test_args args = {
.nr_irqs = nr_irqs,
@@ -986,7 +986,7 @@ static void test_vgic_two_cpus(void *gcode)
struct kvm_vcpu *vcpus[2];
struct test_args args = {};
struct kvm_vm *vm;
- vm_vaddr_t args_gva;
+ gva_t args_gva;
int gic_fd, ret;
vm = vm_create_with_vcpus(2, gcode, vcpus);
diff --git a/tools/testing/selftests/kvm/include/arm64/processor.h b/tools/testing/selftests/kvm/include/arm64/processor.h
index ac97a1c436fc..5b18ffe68789 100644
--- a/tools/testing/selftests/kvm/include/arm64/processor.h
+++ b/tools/testing/selftests/kvm/include/arm64/processor.h
@@ -179,8 +179,8 @@ void vm_install_exception_handler(struct kvm_vm *vm,
void vm_install_sync_handler(struct kvm_vm *vm,
int vector, int ec, handler_fn handler);
-uint64_t *virt_get_pte_hva_at_level(struct kvm_vm *vm, vm_vaddr_t gva, int level);
-uint64_t *virt_get_pte_hva(struct kvm_vm *vm, vm_vaddr_t gva);
+uint64_t *virt_get_pte_hva_at_level(struct kvm_vm *vm, gva_t gva, int level);
+uint64_t *virt_get_pte_hva(struct kvm_vm *vm, gva_t gva);
static inline void cpu_relax(void)
{
diff --git a/tools/testing/selftests/kvm/include/arm64/ucall.h b/tools/testing/selftests/kvm/include/arm64/ucall.h
index 4ec801f37f00..2210d3d94c40 100644
--- a/tools/testing/selftests/kvm/include/arm64/ucall.h
+++ b/tools/testing/selftests/kvm/include/arm64/ucall.h
@@ -10,9 +10,9 @@
* ucall_exit_mmio_addr holds per-VM values (global data is duplicated by each
* VM), it must not be accessed from host code.
*/
-extern vm_vaddr_t *ucall_exit_mmio_addr;
+extern gva_t *ucall_exit_mmio_addr;
-static inline void ucall_arch_do_ucall(vm_vaddr_t uc)
+static inline void ucall_arch_do_ucall(gva_t uc)
{
WRITE_ONCE(*ucall_exit_mmio_addr, uc);
}
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index f861242b4ae8..2378dd42c988 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -112,7 +112,7 @@ struct kvm_vm {
struct sparsebit *vpages_mapped;
bool has_irqchip;
vm_paddr_t ucall_mmio_addr;
- vm_vaddr_t handlers;
+ gva_t handlers;
uint32_t dirty_ring_size;
uint64_t gpa_tag_mask;
@@ -716,22 +716,20 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
-vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
-vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
-vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
+gva_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
+gva_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
+gva_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+ enum kvm_mem_region_type type);
+gva_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
enum kvm_mem_region_type type);
-vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz,
- vm_vaddr_t vaddr_min,
- enum kvm_mem_region_type type);
-vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
-vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm,
- enum kvm_mem_region_type type);
-vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
+gva_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
+gva_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type);
+gva_t vm_vaddr_alloc_page(struct kvm_vm *vm);
void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
unsigned int npages);
void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
-void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
+void *addr_gva2hva(struct kvm_vm *vm, gva_t gva);
vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa);
@@ -1131,12 +1129,12 @@ vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
}
#define sync_global_to_guest(vm, g) ({ \
- typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
+ typeof(g) *_p = addr_gva2hva(vm, (gva_t)&(g)); \
memcpy(_p, &(g), sizeof(g)); \
})
#define sync_global_from_guest(vm, g) ({ \
- typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
+ typeof(g) *_p = addr_gva2hva(vm, (gva_t)&(g)); \
memcpy(&(g), _p, sizeof(g)); \
})
@@ -1147,7 +1145,7 @@ vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
* undesirable to change the host's copy of the global.
*/
#define write_guest_global(vm, g, val) ({ \
- typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
+ typeof(g) *_p = addr_gva2hva(vm, (gva_t)&(g)); \
typeof(g) _val = val; \
\
memcpy(_p, &(_val), sizeof(g)); \
@@ -1242,9 +1240,9 @@ static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr
* Returns the VM physical address of the translated VM virtual
* address given by @gva.
*/
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva);
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva);
-static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
return addr_arch_gva2gpa(vm, gva);
}
diff --git a/tools/testing/selftests/kvm/include/kvm_util_types.h b/tools/testing/selftests/kvm/include/kvm_util_types.h
index 0366e9bce7f9..f27bd035ea10 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_types.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_types.h
@@ -15,7 +15,7 @@
#define kvm_static_assert(expr, ...) __kvm_static_assert(expr, ##__VA_ARGS__, #expr)
typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
-typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
+typedef uint64_t gva_t; /* Virtual Machine (Guest) virtual address */
#define INVALID_GPA (~(uint64_t)0)
diff --git a/tools/testing/selftests/kvm/include/loongarch/ucall.h b/tools/testing/selftests/kvm/include/loongarch/ucall.h
index 4ec801f37f00..2210d3d94c40 100644
--- a/tools/testing/selftests/kvm/include/loongarch/ucall.h
+++ b/tools/testing/selftests/kvm/include/loongarch/ucall.h
@@ -10,9 +10,9 @@
* ucall_exit_mmio_addr holds per-VM values (global data is duplicated by each
* VM), it must not be accessed from host code.
*/
-extern vm_vaddr_t *ucall_exit_mmio_addr;
+extern gva_t *ucall_exit_mmio_addr;
-static inline void ucall_arch_do_ucall(vm_vaddr_t uc)
+static inline void ucall_arch_do_ucall(gva_t uc)
{
WRITE_ONCE(*ucall_exit_mmio_addr, uc);
}
diff --git a/tools/testing/selftests/kvm/include/riscv/ucall.h b/tools/testing/selftests/kvm/include/riscv/ucall.h
index a695ae36f3e0..41d56254968e 100644
--- a/tools/testing/selftests/kvm/include/riscv/ucall.h
+++ b/tools/testing/selftests/kvm/include/riscv/ucall.h
@@ -11,7 +11,7 @@ static inline void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
{
}
-static inline void ucall_arch_do_ucall(vm_vaddr_t uc)
+static inline void ucall_arch_do_ucall(gva_t uc)
{
sbi_ecall(KVM_RISCV_SELFTESTS_SBI_EXT,
KVM_RISCV_SELFTESTS_SBI_UCALL,
diff --git a/tools/testing/selftests/kvm/include/s390/ucall.h b/tools/testing/selftests/kvm/include/s390/ucall.h
index 8035a872a351..befee84c4609 100644
--- a/tools/testing/selftests/kvm/include/s390/ucall.h
+++ b/tools/testing/selftests/kvm/include/s390/ucall.h
@@ -10,7 +10,7 @@ static inline void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
{
}
-static inline void ucall_arch_do_ucall(vm_vaddr_t uc)
+static inline void ucall_arch_do_ucall(gva_t uc)
{
/* Exit via DIAGNOSE 0x501 (normally used for breakpoints) */
asm volatile ("diag 0,%0,0x501" : : "a"(uc) : "memory");
diff --git a/tools/testing/selftests/kvm/include/ucall_common.h b/tools/testing/selftests/kvm/include/ucall_common.h
index d9d6581b8d4f..e5499f170834 100644
--- a/tools/testing/selftests/kvm/include/ucall_common.h
+++ b/tools/testing/selftests/kvm/include/ucall_common.h
@@ -30,7 +30,7 @@ struct ucall {
};
void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa);
-void ucall_arch_do_ucall(vm_vaddr_t uc);
+void ucall_arch_do_ucall(gva_t uc);
void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu);
void ucall(uint64_t cmd, int nargs, ...);
@@ -48,7 +48,7 @@ int ucall_nr_pages_required(uint64_t page_size);
* the full ucall() are problematic and/or unwanted. Note, this will come out
* as UCALL_NONE on the backend.
*/
-#define GUEST_UCALL_NONE() ucall_arch_do_ucall((vm_vaddr_t)NULL)
+#define GUEST_UCALL_NONE() ucall_arch_do_ucall((gva_t)NULL)
#define GUEST_SYNC_ARGS(stage, arg1, arg2, arg3, arg4) \
ucall(UCALL_SYNC, 6, "hello", stage, arg1, arg2, arg3, arg4)
diff --git a/tools/testing/selftests/kvm/include/x86/hyperv.h b/tools/testing/selftests/kvm/include/x86/hyperv.h
index f13e532be240..eedfff3cf102 100644
--- a/tools/testing/selftests/kvm/include/x86/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86/hyperv.h
@@ -254,8 +254,8 @@
* Issue a Hyper-V hypercall. Returns exception vector raised or 0, 'hv_status'
* is set to the hypercall status (if no exception occurred).
*/
-static inline uint8_t __hyperv_hypercall(u64 control, vm_vaddr_t input_address,
- vm_vaddr_t output_address,
+static inline uint8_t __hyperv_hypercall(u64 control, gva_t input_address,
+ gva_t output_address,
uint64_t *hv_status)
{
uint64_t error_code;
@@ -274,8 +274,8 @@ static inline uint8_t __hyperv_hypercall(u64 control, vm_vaddr_t input_address,
}
/* Issue a Hyper-V hypercall and assert that it succeeded. */
-static inline void hyperv_hypercall(u64 control, vm_vaddr_t input_address,
- vm_vaddr_t output_address)
+static inline void hyperv_hypercall(u64 control, gva_t input_address,
+ gva_t output_address)
{
uint64_t hv_status;
uint8_t vector;
@@ -347,7 +347,7 @@ struct hyperv_test_pages {
};
struct hyperv_test_pages *vcpu_alloc_hyperv_test_pages(struct kvm_vm *vm,
- vm_vaddr_t *p_hv_pages_gva);
+ gva_t *p_hv_pages_gva);
/* HV_X64_MSR_TSC_INVARIANT_CONTROL bits */
#define HV_INVARIANT_TSC_EXPOSED BIT_ULL(0)
diff --git a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
index be35d26bb320..4c605f624956 100644
--- a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
+++ b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
@@ -33,9 +33,9 @@ struct kvm_mmu_arch {
struct kvm_mmu;
struct kvm_vm_arch {
- vm_vaddr_t gdt;
- vm_vaddr_t tss;
- vm_vaddr_t idt;
+ gva_t gdt;
+ gva_t tss;
+ gva_t idt;
uint64_t c_bit;
uint64_t s_bit;
diff --git a/tools/testing/selftests/kvm/include/x86/svm_util.h b/tools/testing/selftests/kvm/include/x86/svm_util.h
index 5d7c42534bc4..a25b83e2c233 100644
--- a/tools/testing/selftests/kvm/include/x86/svm_util.h
+++ b/tools/testing/selftests/kvm/include/x86/svm_util.h
@@ -56,7 +56,7 @@ static inline void vmmcall(void)
"clgi\n" \
)
-struct svm_test_data *vcpu_alloc_svm(struct kvm_vm *vm, vm_vaddr_t *p_svm_gva);
+struct svm_test_data *vcpu_alloc_svm(struct kvm_vm *vm, gva_t *p_svm_gva);
void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_rsp);
void run_guest(struct vmcb *vmcb, uint64_t vmcb_gpa);
diff --git a/tools/testing/selftests/kvm/include/x86/vmx.h b/tools/testing/selftests/kvm/include/x86/vmx.h
index 92b918700d24..f194723da3d0 100644
--- a/tools/testing/selftests/kvm/include/x86/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86/vmx.h
@@ -550,7 +550,7 @@ union vmx_ctrl_msr {
};
};
-struct vmx_pages *vcpu_alloc_vmx(struct kvm_vm *vm, vm_vaddr_t *p_vmx_gva);
+struct vmx_pages *vcpu_alloc_vmx(struct kvm_vm *vm, gva_t *p_vmx_gva);
bool prepare_for_vmx_operation(struct vmx_pages *vmx);
void prepare_vmcs(struct vmx_pages *vmx, void *guest_rip, void *guest_rsp);
bool load_vmcs(struct vmx_pages *vmx);
diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
index c60a24a92829..61915fc89c17 100644
--- a/tools/testing/selftests/kvm/kvm_page_table_test.c
+++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
@@ -292,7 +292,7 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg)
ret = sem_init(&test_stage_completed, 0, 0);
TEST_ASSERT(ret == 0, "Error in sem_init");
- current_stage = addr_gva2hva(vm, (vm_vaddr_t)(&guest_test_stage));
+ current_stage = addr_gva2hva(vm, (gva_t)(&guest_test_stage));
*current_stage = NUM_TEST_STAGES;
pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index 43ea40edc533..3645acae09ce 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -19,9 +19,9 @@
#define DEFAULT_ARM64_GUEST_STACK_VADDR_MIN 0xac0000
-static vm_vaddr_t exception_handlers;
+static gva_t exception_handlers;
-static uint64_t pgd_index(struct kvm_vm *vm, vm_vaddr_t gva)
+static uint64_t pgd_index(struct kvm_vm *vm, gva_t gva)
{
unsigned int shift = (vm->mmu.pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
uint64_t mask = (1UL << (vm->va_bits - shift)) - 1;
@@ -29,7 +29,7 @@ static uint64_t pgd_index(struct kvm_vm *vm, vm_vaddr_t gva)
return (gva >> shift) & mask;
}
-static uint64_t pud_index(struct kvm_vm *vm, vm_vaddr_t gva)
+static uint64_t pud_index(struct kvm_vm *vm, gva_t gva)
{
unsigned int shift = 2 * (vm->page_shift - 3) + vm->page_shift;
uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
@@ -40,7 +40,7 @@ static uint64_t pud_index(struct kvm_vm *vm, vm_vaddr_t gva)
return (gva >> shift) & mask;
}
-static uint64_t pmd_index(struct kvm_vm *vm, vm_vaddr_t gva)
+static uint64_t pmd_index(struct kvm_vm *vm, gva_t gva)
{
unsigned int shift = (vm->page_shift - 3) + vm->page_shift;
uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
@@ -51,7 +51,7 @@ static uint64_t pmd_index(struct kvm_vm *vm, vm_vaddr_t gva)
return (gva >> shift) & mask;
}
-static uint64_t pte_index(struct kvm_vm *vm, vm_vaddr_t gva)
+static uint64_t pte_index(struct kvm_vm *vm, gva_t gva)
{
uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
return (gva >> vm->page_shift) & mask;
@@ -181,7 +181,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
_virt_pg_map(vm, vaddr, paddr, attr_idx);
}
-uint64_t *virt_get_pte_hva_at_level(struct kvm_vm *vm, vm_vaddr_t gva, int level)
+uint64_t *virt_get_pte_hva_at_level(struct kvm_vm *vm, gva_t gva, int level)
{
uint64_t *ptep;
@@ -225,12 +225,12 @@ uint64_t *virt_get_pte_hva_at_level(struct kvm_vm *vm, vm_vaddr_t gva, int level
exit(EXIT_FAILURE);
}
-uint64_t *virt_get_pte_hva(struct kvm_vm *vm, vm_vaddr_t gva)
+uint64_t *virt_get_pte_hva(struct kvm_vm *vm, gva_t gva)
{
return virt_get_pte_hva_at_level(vm, gva, 3);
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
uint64_t *ptep = virt_get_pte_hva(vm, gva);
@@ -539,7 +539,7 @@ void vm_init_descriptor_tables(struct kvm_vm *vm)
vm->handlers = __vm_vaddr_alloc(vm, sizeof(struct handlers),
vm->page_size, MEM_REGION_DATA);
- *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
+ *(gva_t *)addr_gva2hva(vm, (gva_t)(&exception_handlers)) = vm->handlers;
}
void vm_install_sync_handler(struct kvm_vm *vm, int vector, int ec,
diff --git a/tools/testing/selftests/kvm/lib/arm64/ucall.c b/tools/testing/selftests/kvm/lib/arm64/ucall.c
index ddab0ce89d4d..9ea747982d00 100644
--- a/tools/testing/selftests/kvm/lib/arm64/ucall.c
+++ b/tools/testing/selftests/kvm/lib/arm64/ucall.c
@@ -6,17 +6,17 @@
*/
#include "kvm_util.h"
-vm_vaddr_t *ucall_exit_mmio_addr;
+gva_t *ucall_exit_mmio_addr;
void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
{
- vm_vaddr_t mmio_gva = vm_vaddr_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
+ gva_t mmio_gva = vm_vaddr_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
virt_map(vm, mmio_gva, mmio_gpa, 1);
vm->ucall_mmio_addr = mmio_gpa;
- write_guest_global(vm, ucall_exit_mmio_addr, (vm_vaddr_t *)mmio_gva);
+ write_guest_global(vm, ucall_exit_mmio_addr, (gva_t *)mmio_gva);
}
void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
diff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c
index f34d926d9735..ff90fba3a5c6 100644
--- a/tools/testing/selftests/kvm/lib/elf.c
+++ b/tools/testing/selftests/kvm/lib/elf.c
@@ -157,12 +157,12 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
"memsize of 0,\n"
" phdr index: %u p_memsz: 0x%" PRIx64,
n1, (uint64_t) phdr.p_memsz);
- vm_vaddr_t seg_vstart = align_down(phdr.p_vaddr, vm->page_size);
- vm_vaddr_t seg_vend = phdr.p_vaddr + phdr.p_memsz - 1;
+ gva_t seg_vstart = align_down(phdr.p_vaddr, vm->page_size);
+ gva_t seg_vend = phdr.p_vaddr + phdr.p_memsz - 1;
seg_vend |= vm->page_size - 1;
size_t seg_size = seg_vend - seg_vstart + 1;
- vm_vaddr_t vaddr = __vm_vaddr_alloc(vm, seg_size, seg_vstart,
+ gva_t vaddr = __vm_vaddr_alloc(vm, seg_size, seg_vstart,
MEM_REGION_CODE);
TEST_ASSERT(vaddr == seg_vstart, "Unable to allocate "
"virtual memory for segment at requested min addr,\n"
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index f5e076591c64..04a59603e93e 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1386,8 +1386,7 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
* TEST_ASSERT failure occurs for invalid input or no area of at least
* sz unallocated bytes >= vaddr_min is available.
*/
-vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
- vm_vaddr_t vaddr_min)
+gva_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
{
uint64_t pages = (sz + vm->page_size - 1) >> vm->page_shift;
@@ -1452,10 +1451,8 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
return pgidx_start * vm->page_size;
}
-static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
- vm_vaddr_t vaddr_min,
- enum kvm_mem_region_type type,
- bool protected)
+static gva_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+ enum kvm_mem_region_type type, bool protected)
{
uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
@@ -1468,10 +1465,10 @@ static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
* Find an unused range of virtual page addresses of at least
* pages in length.
*/
- vm_vaddr_t vaddr_start = vm_vaddr_unused_gap(vm, sz, vaddr_min);
+ gva_t vaddr_start = vm_vaddr_unused_gap(vm, sz, vaddr_min);
/* Map the virtual pages. */
- for (vm_vaddr_t vaddr = vaddr_start; pages > 0;
+ for (gva_t vaddr = vaddr_start; pages > 0;
pages--, vaddr += vm->page_size, paddr += vm->page_size) {
virt_pg_map(vm, vaddr, paddr);
@@ -1480,16 +1477,15 @@ static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
return vaddr_start;
}
-vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
- enum kvm_mem_region_type type)
+gva_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+ enum kvm_mem_region_type type)
{
return ____vm_vaddr_alloc(vm, sz, vaddr_min, type,
vm_arch_has_protected_memory(vm));
}
-vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz,
- vm_vaddr_t vaddr_min,
- enum kvm_mem_region_type type)
+gva_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+ enum kvm_mem_region_type type)
{
return ____vm_vaddr_alloc(vm, sz, vaddr_min, type, false);
}
@@ -1513,7 +1509,7 @@ vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz,
* a unique set of pages, with the minimum real allocation being at least
* a page. The allocated physical space comes from the TEST_DATA memory region.
*/
-vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+gva_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
{
return __vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA);
}
@@ -1532,12 +1528,12 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
* Allocates at least N system pages worth of bytes within the virtual address
* space of the vm.
*/
-vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages)
+gva_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages)
{
return vm_vaddr_alloc(vm, nr_pages * getpagesize(), KVM_UTIL_MIN_VADDR);
}
-vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type)
+gva_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type)
{
return __vm_vaddr_alloc(vm, getpagesize(), KVM_UTIL_MIN_VADDR, type);
}
@@ -1556,7 +1552,7 @@ vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type typ
* Allocates at least one system page worth of bytes within the virtual address
* space of the vm.
*/
-vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm)
+gva_t vm_vaddr_alloc_page(struct kvm_vm *vm)
{
return vm_vaddr_alloc_pages(vm, 1);
}
@@ -2161,7 +2157,7 @@ vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
* Return:
* Equivalent host virtual address
*/
-void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva)
+void *addr_gva2hva(struct kvm_vm *vm, gva_t gva)
{
return addr_gpa2hva(vm, addr_gva2gpa(vm, gva));
}
diff --git a/tools/testing/selftests/kvm/lib/loongarch/processor.c b/tools/testing/selftests/kvm/lib/loongarch/processor.c
index ee4ad3b1d2a4..3b67720fbbe1 100644
--- a/tools/testing/selftests/kvm/lib/loongarch/processor.c
+++ b/tools/testing/selftests/kvm/lib/loongarch/processor.c
@@ -13,9 +13,9 @@
#define LOONGARCH_GUEST_STACK_VADDR_MIN 0x200000
static vm_paddr_t invalid_pgtable[4];
-static vm_vaddr_t exception_handlers;
+static gva_t exception_handlers;
-static uint64_t virt_pte_index(struct kvm_vm *vm, vm_vaddr_t gva, int level)
+static uint64_t virt_pte_index(struct kvm_vm *vm, gva_t gva, int level)
{
unsigned int shift;
uint64_t mask;
@@ -72,7 +72,7 @@ static int virt_pte_none(uint64_t *ptep, int level)
return *ptep == invalid_pgtable[level];
}
-static uint64_t *virt_populate_pte(struct kvm_vm *vm, vm_vaddr_t gva, int alloc)
+static uint64_t *virt_populate_pte(struct kvm_vm *vm, gva_t gva, int alloc)
{
int level;
uint64_t *ptep;
@@ -106,7 +106,7 @@ static uint64_t *virt_populate_pte(struct kvm_vm *vm, vm_vaddr_t gva, int alloc)
exit(EXIT_FAILURE);
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
uint64_t *ptep;
diff --git a/tools/testing/selftests/kvm/lib/loongarch/ucall.c b/tools/testing/selftests/kvm/lib/loongarch/ucall.c
index fc6cbb50573f..a5aa568f437b 100644
--- a/tools/testing/selftests/kvm/lib/loongarch/ucall.c
+++ b/tools/testing/selftests/kvm/lib/loongarch/ucall.c
@@ -9,17 +9,17 @@
* ucall_exit_mmio_addr holds per-VM values (global data is duplicated by each
* VM), it must not be accessed from host code.
*/
-vm_vaddr_t *ucall_exit_mmio_addr;
+gva_t *ucall_exit_mmio_addr;
void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
{
- vm_vaddr_t mmio_gva = vm_vaddr_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
+ gva_t mmio_gva = vm_vaddr_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
virt_map(vm, mmio_gva, mmio_gpa, 1);
vm->ucall_mmio_addr = mmio_gpa;
- write_guest_global(vm, ucall_exit_mmio_addr, (vm_vaddr_t *)mmio_gva);
+ write_guest_global(vm, ucall_exit_mmio_addr, (gva_t *)mmio_gva);
}
void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index 067c6b2c15b0..552628dda4a0 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -15,7 +15,7 @@
#define DEFAULT_RISCV_GUEST_STACK_VADDR_MIN 0xac0000
-static vm_vaddr_t exception_handlers;
+static gva_t exception_handlers;
bool __vcpu_has_ext(struct kvm_vcpu *vcpu, uint64_t ext)
{
@@ -52,7 +52,7 @@ static uint32_t pte_index_shift[] = {
PGTBL_L3_INDEX_SHIFT,
};
-static uint64_t pte_index(struct kvm_vm *vm, vm_vaddr_t gva, int level)
+static uint64_t pte_index(struct kvm_vm *vm, gva_t gva, int level)
{
TEST_ASSERT(level > -1,
"Negative page table level (%d) not possible", level);
@@ -119,7 +119,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
PGTBL_PTE_PERM_MASK | PGTBL_PTE_VALID_MASK;
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
uint64_t *ptep;
int level = vm->mmu.pgtable_levels - 1;
@@ -452,7 +452,7 @@ void vm_init_vector_tables(struct kvm_vm *vm)
vm->handlers = __vm_vaddr_alloc(vm, sizeof(struct handlers),
vm->page_size, MEM_REGION_DATA);
- *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
+ *(gva_t *)addr_gva2hva(vm, (gva_t)(&exception_handlers)) = vm->handlers;
}
void vm_install_exception_handler(struct kvm_vm *vm, int vector, exception_handler_fn handler)
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index 6a9a660413a7..e8d3c1d333d5 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -86,7 +86,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
entry[idx] = gpa;
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
int ri, idx;
uint64_t *entry;
diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index 42151e571953..997444178c78 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -29,7 +29,7 @@ void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
{
struct ucall_header *hdr;
struct ucall *uc;
- vm_vaddr_t vaddr;
+ gva_t vaddr;
int i;
vaddr = vm_vaddr_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR,
@@ -96,7 +96,7 @@ void ucall_assert(uint64_t cmd, const char *exp, const char *file,
guest_vsnprintf(uc->buffer, UCALL_BUFFER_LEN, fmt, va);
va_end(va);
- ucall_arch_do_ucall((vm_vaddr_t)uc->hva);
+ ucall_arch_do_ucall((gva_t)uc->hva);
ucall_free(uc);
}
@@ -113,7 +113,7 @@ void ucall_fmt(uint64_t cmd, const char *fmt, ...)
guest_vsnprintf(uc->buffer, UCALL_BUFFER_LEN, fmt, va);
va_end(va);
- ucall_arch_do_ucall((vm_vaddr_t)uc->hva);
+ ucall_arch_do_ucall((gva_t)uc->hva);
ucall_free(uc);
}
@@ -135,7 +135,7 @@ void ucall(uint64_t cmd, int nargs, ...)
WRITE_ONCE(uc->args[i], va_arg(va, uint64_t));
va_end(va);
- ucall_arch_do_ucall((vm_vaddr_t)uc->hva);
+ ucall_arch_do_ucall((gva_t)uc->hva);
ucall_free(uc);
}
diff --git a/tools/testing/selftests/kvm/lib/x86/hyperv.c b/tools/testing/selftests/kvm/lib/x86/hyperv.c
index 15bc8cd583aa..be8b31572588 100644
--- a/tools/testing/selftests/kvm/lib/x86/hyperv.c
+++ b/tools/testing/selftests/kvm/lib/x86/hyperv.c
@@ -76,9 +76,9 @@ bool kvm_hv_cpu_has(struct kvm_x86_cpu_feature feature)
}
struct hyperv_test_pages *vcpu_alloc_hyperv_test_pages(struct kvm_vm *vm,
- vm_vaddr_t *p_hv_pages_gva)
+ gva_t *p_hv_pages_gva)
{
- vm_vaddr_t hv_pages_gva = vm_vaddr_alloc_page(vm);
+ gva_t hv_pages_gva = vm_vaddr_alloc_page(vm);
struct hyperv_test_pages *hv = addr_gva2hva(vm, hv_pages_gva);
/* Setup of a region of guest memory for the VP Assist page. */
diff --git a/tools/testing/selftests/kvm/lib/x86/memstress.c b/tools/testing/selftests/kvm/lib/x86/memstress.c
index f53414ba7103..73a82730927d 100644
--- a/tools/testing/selftests/kvm/lib/x86/memstress.c
+++ b/tools/testing/selftests/kvm/lib/x86/memstress.c
@@ -104,7 +104,7 @@ static void memstress_setup_ept_mappings(struct kvm_vm *vm)
void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vcpus[])
{
struct kvm_regs regs;
- vm_vaddr_t nested_gva;
+ gva_t nested_gva;
int vcpu_id;
TEST_REQUIRE(kvm_cpu_has_tdp());
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 01f0f97d4430..7a01f83cab0b 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -21,7 +21,7 @@
#define KERNEL_DS 0x10
#define KERNEL_TSS 0x18
-vm_vaddr_t exception_handlers;
+gva_t exception_handlers;
bool host_cpu_is_amd;
bool host_cpu_is_intel;
bool host_cpu_is_hygon;
@@ -618,7 +618,7 @@ static void kvm_seg_set_kernel_data_64bit(struct kvm_segment *segp)
segp->present = true;
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
int level = PG_LEVEL_NONE;
uint64_t *pte = __vm_get_page_table_entry(vm, &vm->mmu, gva, &level);
@@ -633,7 +633,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
return vm_untag_gpa(vm, PTE_GET_PA(*pte)) | (gva & ~HUGEPAGE_MASK(level));
}
-static void kvm_seg_set_tss_64bit(vm_vaddr_t base, struct kvm_segment *segp)
+static void kvm_seg_set_tss_64bit(gva_t base, struct kvm_segment *segp)
{
memset(segp, 0, sizeof(*segp));
segp->base = base;
@@ -755,7 +755,7 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
for (i = 0; i < NUM_INTERRUPTS; i++)
set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0, KERNEL_CS);
- *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
+ *(gva_t *)addr_gva2hva(vm, (gva_t)(&exception_handlers)) = vm->handlers;
kvm_seg_set_kernel_code_64bit(&seg);
kvm_seg_fill_gdt_64bit(vm, &seg);
@@ -770,9 +770,9 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
void vm_install_exception_handler(struct kvm_vm *vm, int vector,
void (*handler)(struct ex_regs *))
{
- vm_vaddr_t *handlers = (vm_vaddr_t *)addr_gva2hva(vm, vm->handlers);
+ gva_t *handlers = (gva_t *)addr_gva2hva(vm, vm->handlers);
- handlers[vector] = (vm_vaddr_t)handler;
+ handlers[vector] = (gva_t)handler;
}
void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
@@ -825,7 +825,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
{
struct kvm_mp_state mp_state;
struct kvm_regs regs;
- vm_vaddr_t stack_vaddr;
+ gva_t stack_vaddr;
struct kvm_vcpu *vcpu;
stack_vaddr = __vm_vaddr_alloc(vm, DEFAULT_STACK_PGS * getpagesize(),
diff --git a/tools/testing/selftests/kvm/lib/x86/svm.c b/tools/testing/selftests/kvm/lib/x86/svm.c
index eb20b00112c7..4a3b1a2738a2 100644
--- a/tools/testing/selftests/kvm/lib/x86/svm.c
+++ b/tools/testing/selftests/kvm/lib/x86/svm.c
@@ -28,9 +28,9 @@ u64 rflags;
* Pointer to structure with the addresses of the SVM areas.
*/
struct svm_test_data *
-vcpu_alloc_svm(struct kvm_vm *vm, vm_vaddr_t *p_svm_gva)
+vcpu_alloc_svm(struct kvm_vm *vm, gva_t *p_svm_gva)
{
- vm_vaddr_t svm_gva = vm_vaddr_alloc_page(vm);
+ gva_t svm_gva = vm_vaddr_alloc_page(vm);
struct svm_test_data *svm = addr_gva2hva(vm, svm_gva);
svm->vmcb = (void *)vm_vaddr_alloc_page(vm);
diff --git a/tools/testing/selftests/kvm/lib/x86/ucall.c b/tools/testing/selftests/kvm/lib/x86/ucall.c
index 1265cecc7dd1..1af2a6880cdf 100644
--- a/tools/testing/selftests/kvm/lib/x86/ucall.c
+++ b/tools/testing/selftests/kvm/lib/x86/ucall.c
@@ -8,7 +8,7 @@
#define UCALL_PIO_PORT ((uint16_t)0x1000)
-void ucall_arch_do_ucall(vm_vaddr_t uc)
+void ucall_arch_do_ucall(gva_t uc)
{
/*
* FIXME: Revert this hack (the entire commit that added it) once nVMX
diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index c87b340362a9..a6bb649e62c3 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -79,9 +79,9 @@ void vm_enable_ept(struct kvm_vm *vm)
* Pointer to structure with the addresses of the VMX areas.
*/
struct vmx_pages *
-vcpu_alloc_vmx(struct kvm_vm *vm, vm_vaddr_t *p_vmx_gva)
+vcpu_alloc_vmx(struct kvm_vm *vm, gva_t *p_vmx_gva)
{
- vm_vaddr_t vmx_gva = vm_vaddr_alloc_page(vm);
+ gva_t vmx_gva = vm_vaddr_alloc_page(vm);
struct vmx_pages *vmx = addr_gva2hva(vm, vmx_gva);
/* Setup of a region of guest memory for the vmxon region. */
diff --git a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
index cec1621ace23..8366c11131ff 100644
--- a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
+++ b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
@@ -610,7 +610,7 @@ static void test_vm_setup_snapshot_mem(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
virt_map(vm, PMU_SNAPSHOT_GPA_BASE, PMU_SNAPSHOT_GPA_BASE, 1);
snapshot_gva = (void *)(PMU_SNAPSHOT_GPA_BASE);
- snapshot_gpa = addr_gva2gpa(vcpu->vm, (vm_vaddr_t)snapshot_gva);
+ snapshot_gpa = addr_gva2gpa(vcpu->vm, (gva_t)snapshot_gva);
sync_global_to_guest(vcpu->vm, snapshot_gva);
sync_global_to_guest(vcpu->vm, snapshot_gpa);
}
diff --git a/tools/testing/selftests/kvm/s390/memop.c b/tools/testing/selftests/kvm/s390/memop.c
index 4374b4cd2a80..0e8dc8e5d8bd 100644
--- a/tools/testing/selftests/kvm/s390/memop.c
+++ b/tools/testing/selftests/kvm/s390/memop.c
@@ -878,7 +878,7 @@ static void guest_copy_key_fetch_prot_override(void)
static void test_copy_key_fetch_prot_override(void)
{
struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
- vm_vaddr_t guest_0_page, guest_last_page;
+ gva_t guest_0_page, guest_last_page;
guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);
guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
@@ -917,7 +917,7 @@ static void test_copy_key_fetch_prot_override(void)
static void test_errors_key_fetch_prot_override_not_enabled(void)
{
struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
- vm_vaddr_t guest_0_page, guest_last_page;
+ gva_t guest_0_page, guest_last_page;
guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);
guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
@@ -938,7 +938,7 @@ static void test_errors_key_fetch_prot_override_not_enabled(void)
static void test_errors_key_fetch_prot_override_enabled(void)
{
struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
- vm_vaddr_t guest_0_page, guest_last_page;
+ gva_t guest_0_page, guest_last_page;
guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);
guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
diff --git a/tools/testing/selftests/kvm/s390/tprot.c b/tools/testing/selftests/kvm/s390/tprot.c
index 12d5e1cb62e3..fd8e997de693 100644
--- a/tools/testing/selftests/kvm/s390/tprot.c
+++ b/tools/testing/selftests/kvm/s390/tprot.c
@@ -207,7 +207,7 @@ int main(int argc, char *argv[])
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
struct kvm_run *run;
- vm_vaddr_t guest_0_page;
+ gva_t guest_0_page;
ksft_print_header();
ksft_set_plan(STAGE_END);
@@ -216,7 +216,7 @@ int main(int argc, char *argv[])
run = vcpu->run;
HOST_SYNC(vcpu, STAGE_INIT_SIMPLE);
- mprotect(addr_gva2hva(vm, (vm_vaddr_t)pages), PAGE_SIZE * 2, PROT_READ);
+ mprotect(addr_gva2hva(vm, (gva_t)pages), PAGE_SIZE * 2, PROT_READ);
HOST_SYNC(vcpu, TEST_SIMPLE);
guest_0_page = vm_vaddr_alloc(vm, PAGE_SIZE, 0);
@@ -229,7 +229,7 @@ int main(int argc, char *argv[])
HOST_SYNC(vcpu, STAGE_INIT_FETCH_PROT_OVERRIDE);
}
if (guest_0_page == 0)
- mprotect(addr_gva2hva(vm, (vm_vaddr_t)0), PAGE_SIZE, PROT_READ);
+ mprotect(addr_gva2hva(vm, (gva_t)0), PAGE_SIZE, PROT_READ);
run->s.regs.crs[0] |= CR0_FETCH_PROTECTION_OVERRIDE;
run->kvm_dirty_regs = KVM_SYNC_CRS;
HOST_SYNC(vcpu, TEST_FETCH_PROT_OVERRIDE);
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index efe56a10d13e..d2a513ec7dd5 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -309,7 +309,7 @@ static void steal_time_init(struct kvm_vcpu *vcpu, uint32_t i)
{
/* ST_GPA_BASE is identity mapped */
st_gva[i] = (void *)(ST_GPA_BASE + i * STEAL_TIME_SIZE);
- st_gpa[i] = addr_gva2gpa(vcpu->vm, (vm_vaddr_t)st_gva[i]);
+ st_gpa[i] = addr_gva2gpa(vcpu->vm, (gva_t)st_gva[i]);
sync_global_to_guest(vcpu->vm, st_gva[i]);
sync_global_to_guest(vcpu->vm, st_gpa[i]);
}
diff --git a/tools/testing/selftests/kvm/x86/amx_test.c b/tools/testing/selftests/kvm/x86/amx_test.c
index 37b166260ee3..6f934732c014 100644
--- a/tools/testing/selftests/kvm/x86/amx_test.c
+++ b/tools/testing/selftests/kvm/x86/amx_test.c
@@ -236,7 +236,7 @@ int main(int argc, char *argv[])
struct kvm_x86_state *state;
struct kvm_x86_state *tile_state = NULL;
int xsave_restore_size;
- vm_vaddr_t amx_cfg, tiledata, xstate;
+ gva_t amx_cfg, tiledata, xstate;
struct ucall uc;
int ret;
diff --git a/tools/testing/selftests/kvm/x86/aperfmperf_test.c b/tools/testing/selftests/kvm/x86/aperfmperf_test.c
index 8b15a13df939..2b547fc93ba8 100644
--- a/tools/testing/selftests/kvm/x86/aperfmperf_test.c
+++ b/tools/testing/selftests/kvm/x86/aperfmperf_test.c
@@ -123,7 +123,7 @@ int main(int argc, char *argv[])
{
const bool has_nested = kvm_cpu_has(X86_FEATURE_SVM) || kvm_cpu_has(X86_FEATURE_VMX);
uint64_t host_aperf_before, host_mperf_before;
- vm_vaddr_t nested_test_data_gva;
+ gva_t nested_test_data_gva;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
int msr_fd, cpu, i;
diff --git a/tools/testing/selftests/kvm/x86/cpuid_test.c b/tools/testing/selftests/kvm/x86/cpuid_test.c
index f9ed14996977..3c45249a42c4 100644
--- a/tools/testing/selftests/kvm/x86/cpuid_test.c
+++ b/tools/testing/selftests/kvm/x86/cpuid_test.c
@@ -140,10 +140,10 @@ static void run_vcpu(struct kvm_vcpu *vcpu, int stage)
}
}
-struct kvm_cpuid2 *vcpu_alloc_cpuid(struct kvm_vm *vm, vm_vaddr_t *p_gva, struct kvm_cpuid2 *cpuid)
+struct kvm_cpuid2 *vcpu_alloc_cpuid(struct kvm_vm *vm, gva_t *p_gva, struct kvm_cpuid2 *cpuid)
{
int size = sizeof(*cpuid) + cpuid->nent * sizeof(cpuid->entries[0]);
- vm_vaddr_t gva = vm_vaddr_alloc(vm, size, KVM_UTIL_MIN_VADDR);
+ gva_t gva = vm_vaddr_alloc(vm, size, KVM_UTIL_MIN_VADDR);
struct kvm_cpuid2 *guest_cpuids = addr_gva2hva(vm, gva);
memcpy(guest_cpuids, cpuid, size);
@@ -217,7 +217,7 @@ static void test_get_cpuid2(struct kvm_vcpu *vcpu)
int main(void)
{
struct kvm_vcpu *vcpu;
- vm_vaddr_t cpuid_gva;
+ gva_t cpuid_gva;
struct kvm_vm *vm;
int stage;
diff --git a/tools/testing/selftests/kvm/x86/evmcs_smm_controls_test.c b/tools/testing/selftests/kvm/x86/evmcs_smm_controls_test.c
index af7c90103396..62cfde273f71 100644
--- a/tools/testing/selftests/kvm/x86/evmcs_smm_controls_test.c
+++ b/tools/testing/selftests/kvm/x86/evmcs_smm_controls_test.c
@@ -73,7 +73,7 @@ static void guest_code(struct vmx_pages *vmx_pages,
int main(int argc, char *argv[])
{
- vm_vaddr_t vmx_pages_gva = 0, hv_pages_gva = 0;
+ gva_t vmx_pages_gva = 0, hv_pages_gva = 0;
struct hyperv_test_pages *hv;
struct hv_enlightened_vmcs *evmcs;
struct kvm_vcpu *vcpu;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_clock.c b/tools/testing/selftests/kvm/x86/hyperv_clock.c
index e058bc676cd6..b68844924dc5 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_clock.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_clock.c
@@ -208,7 +208,7 @@ int main(void)
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
struct ucall uc;
- vm_vaddr_t tsc_page_gva;
+ gva_t tsc_page_gva;
int stage;
TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_TIME));
diff --git a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
index 74cf19661309..c2de5ac799ee 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
@@ -76,7 +76,7 @@ void l2_guest_code(void)
}
void guest_code(struct vmx_pages *vmx_pages, struct hyperv_test_pages *hv_pages,
- vm_vaddr_t hv_hcall_page_gpa)
+ gva_t hv_hcall_page_gpa)
{
#define L2_GUEST_STACK_SIZE 64
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
@@ -231,8 +231,8 @@ static struct kvm_vcpu *save_restore_vm(struct kvm_vm *vm,
int main(int argc, char *argv[])
{
- vm_vaddr_t vmx_pages_gva = 0, hv_pages_gva = 0;
- vm_vaddr_t hcall_page;
+ gva_t vmx_pages_gva = 0, hv_pages_gva = 0;
+ gva_t hcall_page;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
index 949e08e98f31..7762c168bbf3 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
@@ -16,7 +16,7 @@
#define EXT_CAPABILITIES 0xbull
static void guest_code(vm_paddr_t in_pg_gpa, vm_paddr_t out_pg_gpa,
- vm_vaddr_t out_pg_gva)
+ gva_t out_pg_gva)
{
uint64_t *output_gva;
@@ -35,8 +35,8 @@ static void guest_code(vm_paddr_t in_pg_gpa, vm_paddr_t out_pg_gpa,
int main(void)
{
- vm_vaddr_t hcall_out_page;
- vm_vaddr_t hcall_in_page;
+ gva_t hcall_out_page;
+ gva_t hcall_in_page;
struct kvm_vcpu *vcpu;
struct kvm_run *run;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_features.c b/tools/testing/selftests/kvm/x86/hyperv_features.c
index 130b9ce7e5dd..1059fcc460e3 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_features.c
@@ -82,7 +82,7 @@ static void guest_msr(struct msr_data *msr)
GUEST_DONE();
}
-static void guest_hcall(vm_vaddr_t pgs_gpa, struct hcall_data *hcall)
+static void guest_hcall(gva_t pgs_gpa, struct hcall_data *hcall)
{
u64 res, input, output;
uint8_t vector;
@@ -134,7 +134,7 @@ static void guest_test_msrs_access(void)
struct kvm_vm *vm;
struct ucall uc;
int stage = 0;
- vm_vaddr_t msr_gva;
+ gva_t msr_gva;
struct msr_data *msr;
bool has_invtsc = kvm_cpu_has(X86_FEATURE_INVTSC);
@@ -523,7 +523,7 @@ static void guest_test_hcalls_access(void)
struct kvm_vm *vm;
struct ucall uc;
int stage = 0;
- vm_vaddr_t hcall_page, hcall_params;
+ gva_t hcall_page, hcall_params;
struct hcall_data *hcall;
while (true) {
diff --git a/tools/testing/selftests/kvm/x86/hyperv_ipi.c b/tools/testing/selftests/kvm/x86/hyperv_ipi.c
index ca61836c4e32..7d648219833c 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_ipi.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_ipi.c
@@ -45,13 +45,13 @@ struct hv_send_ipi_ex {
struct hv_vpset vp_set;
};
-static inline void hv_init(vm_vaddr_t pgs_gpa)
+static inline void hv_init(gva_t pgs_gpa)
{
wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
}
-static void receiver_code(void *hcall_page, vm_vaddr_t pgs_gpa)
+static void receiver_code(void *hcall_page, gva_t pgs_gpa)
{
u32 vcpu_id;
@@ -85,7 +85,7 @@ static inline void nop_loop(void)
asm volatile("nop");
}
-static void sender_guest_code(void *hcall_page, vm_vaddr_t pgs_gpa)
+static void sender_guest_code(void *hcall_page, gva_t pgs_gpa)
{
struct hv_send_ipi *ipi = (struct hv_send_ipi *)hcall_page;
struct hv_send_ipi_ex *ipi_ex = (struct hv_send_ipi_ex *)hcall_page;
@@ -243,7 +243,7 @@ int main(int argc, char *argv[])
{
struct kvm_vm *vm;
struct kvm_vcpu *vcpu[3];
- vm_vaddr_t hcall_page;
+ gva_t hcall_page;
pthread_t threads[2];
int stage = 1, r;
struct ucall uc;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
index 0ddb63229bcb..e0caf5ea14bd 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
@@ -67,7 +67,7 @@ void l2_guest_code(void)
static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm,
struct hyperv_test_pages *hv_pages,
- vm_vaddr_t pgs_gpa)
+ gva_t pgs_gpa)
{
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
struct vmcb *vmcb = svm->vmcb;
@@ -149,8 +149,8 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm,
int main(int argc, char *argv[])
{
- vm_vaddr_t nested_gva = 0, hv_pages_gva = 0;
- vm_vaddr_t hcall_page;
+ gva_t nested_gva = 0, hv_pages_gva = 0;
+ gva_t hcall_page;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
struct ucall uc;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
index c542cc4762b1..7f58a5efe6d5 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
@@ -61,14 +61,14 @@ struct hv_tlb_flush_ex {
* - GVAs of the test pages' PTEs
*/
struct test_data {
- vm_vaddr_t hcall_gva;
+ gva_t hcall_gva;
vm_paddr_t hcall_gpa;
- vm_vaddr_t test_pages;
- vm_vaddr_t test_pages_pte[NTEST_PAGES];
+ gva_t test_pages;
+ gva_t test_pages_pte[NTEST_PAGES];
};
/* 'Worker' vCPU code checking the contents of the test page */
-static void worker_guest_code(vm_vaddr_t test_data)
+static void worker_guest_code(gva_t test_data)
{
struct test_data *data = (struct test_data *)test_data;
u32 vcpu_id = rdmsr(HV_X64_MSR_VP_INDEX);
@@ -196,7 +196,7 @@ static inline void post_test(struct test_data *data, u64 exp1, u64 exp2)
#define TESTVAL2 0x0202020202020202
/* Main vCPU doing the test */
-static void sender_guest_code(vm_vaddr_t test_data)
+static void sender_guest_code(gva_t test_data)
{
struct test_data *data = (struct test_data *)test_data;
struct hv_tlb_flush *flush = (struct hv_tlb_flush *)data->hcall_gva;
@@ -581,7 +581,7 @@ int main(int argc, char *argv[])
struct kvm_vm *vm;
struct kvm_vcpu *vcpu[3];
pthread_t threads[2];
- vm_vaddr_t test_data_page, gva;
+ gva_t test_data_page, gva;
vm_paddr_t gpa;
uint64_t *pte;
struct test_data *data;
diff --git a/tools/testing/selftests/kvm/x86/kvm_buslock_test.c b/tools/testing/selftests/kvm/x86/kvm_buslock_test.c
index d88500c118eb..52014a3210c8 100644
--- a/tools/testing/selftests/kvm/x86/kvm_buslock_test.c
+++ b/tools/testing/selftests/kvm/x86/kvm_buslock_test.c
@@ -73,7 +73,7 @@ static void guest_code(void *test_data)
int main(int argc, char *argv[])
{
const bool has_nested = kvm_cpu_has(X86_FEATURE_SVM) || kvm_cpu_has(X86_FEATURE_VMX);
- vm_vaddr_t nested_test_data_gva;
+ gva_t nested_test_data_gva;
struct kvm_vcpu *vcpu;
struct kvm_run *run;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/kvm_clock_test.c b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
index 5bc12222d87a..e14f7330302e 100644
--- a/tools/testing/selftests/kvm/x86/kvm_clock_test.c
+++ b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
@@ -135,7 +135,7 @@ static void enter_guest(struct kvm_vcpu *vcpu)
int main(void)
{
struct kvm_vcpu *vcpu;
- vm_vaddr_t pvti_gva;
+ gva_t pvti_gva;
vm_paddr_t pvti_gpa;
struct kvm_vm *vm;
int flags;
diff --git a/tools/testing/selftests/kvm/x86/nested_close_kvm_test.c b/tools/testing/selftests/kvm/x86/nested_close_kvm_test.c
index f001cb836bfa..761fec293408 100644
--- a/tools/testing/selftests/kvm/x86/nested_close_kvm_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_close_kvm_test.c
@@ -67,7 +67,7 @@ static void l1_guest_code(void *data)
int main(int argc, char *argv[])
{
- vm_vaddr_t guest_gva;
+ gva_t guest_gva;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/nested_dirty_log_test.c b/tools/testing/selftests/kvm/x86/nested_dirty_log_test.c
index 619229bbd693..0e67cce83570 100644
--- a/tools/testing/selftests/kvm/x86/nested_dirty_log_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_dirty_log_test.c
@@ -47,10 +47,10 @@
#define TEST_SYNC_WRITE_FAULT BIT(1)
#define TEST_SYNC_NO_FAULT BIT(2)
-static void l2_guest_code(vm_vaddr_t base)
+static void l2_guest_code(gva_t base)
{
- vm_vaddr_t page0 = TEST_GUEST_ADDR(base, 0);
- vm_vaddr_t page1 = TEST_GUEST_ADDR(base, 1);
+ gva_t page0 = TEST_GUEST_ADDR(base, 0);
+ gva_t page1 = TEST_GUEST_ADDR(base, 1);
READ_ONCE(*(u64 *)page0);
GUEST_SYNC(page0 | TEST_SYNC_READ_FAULT);
@@ -143,7 +143,7 @@ static void l1_guest_code(void *data)
static void test_handle_ucall_sync(struct kvm_vm *vm, u64 arg,
unsigned long *bmap)
{
- vm_vaddr_t gva = arg & ~(PAGE_SIZE - 1);
+ gva_t gva = arg & ~(PAGE_SIZE - 1);
int page_nr, i;
/*
@@ -198,7 +198,7 @@ static void test_handle_ucall_sync(struct kvm_vm *vm, u64 arg,
static void test_dirty_log(bool nested_tdp)
{
- vm_vaddr_t nested_gva = 0;
+ gva_t nested_gva = 0;
unsigned long *bmap;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/nested_emulation_test.c b/tools/testing/selftests/kvm/x86/nested_emulation_test.c
index abc824dba04f..d398add21e4c 100644
--- a/tools/testing/selftests/kvm/x86/nested_emulation_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_emulation_test.c
@@ -122,7 +122,7 @@ static void guest_code(void *test_data)
int main(int argc, char *argv[])
{
- vm_vaddr_t nested_test_data_gva;
+ gva_t nested_test_data_gva;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/nested_exceptions_test.c b/tools/testing/selftests/kvm/x86/nested_exceptions_test.c
index 3641a42934ac..646cfb0022b3 100644
--- a/tools/testing/selftests/kvm/x86/nested_exceptions_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_exceptions_test.c
@@ -216,7 +216,7 @@ static void queue_ss_exception(struct kvm_vcpu *vcpu, bool inject)
*/
int main(int argc, char *argv[])
{
- vm_vaddr_t nested_test_data_gva;
+ gva_t nested_test_data_gva;
struct kvm_vcpu_events events;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/nested_invalid_cr3_test.c b/tools/testing/selftests/kvm/x86/nested_invalid_cr3_test.c
index a6b6da9cf7fe..11fd2467d823 100644
--- a/tools/testing/selftests/kvm/x86/nested_invalid_cr3_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_invalid_cr3_test.c
@@ -78,7 +78,7 @@ int main(int argc, char *argv[])
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- vm_vaddr_t guest_gva = 0;
+ gva_t guest_gva = 0;
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX) ||
kvm_cpu_has(X86_FEATURE_SVM));
diff --git a/tools/testing/selftests/kvm/x86/nested_tsc_adjust_test.c b/tools/testing/selftests/kvm/x86/nested_tsc_adjust_test.c
index 2839f650e5c9..d9238116d30d 100644
--- a/tools/testing/selftests/kvm/x86/nested_tsc_adjust_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_tsc_adjust_test.c
@@ -125,7 +125,7 @@ static void report(int64_t val)
int main(int argc, char *argv[])
{
- vm_vaddr_t nested_gva;
+ gva_t nested_gva;
struct kvm_vcpu *vcpu;
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX) ||
diff --git a/tools/testing/selftests/kvm/x86/nested_tsc_scaling_test.c b/tools/testing/selftests/kvm/x86/nested_tsc_scaling_test.c
index 4260c9e4f489..b76f29e1e775 100644
--- a/tools/testing/selftests/kvm/x86/nested_tsc_scaling_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_tsc_scaling_test.c
@@ -152,7 +152,7 @@ int main(int argc, char *argv[])
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- vm_vaddr_t guest_gva = 0;
+ gva_t guest_gva = 0;
uint64_t tsc_start, tsc_end;
uint64_t tsc_khz;
diff --git a/tools/testing/selftests/kvm/x86/nested_vmsave_vmload_test.c b/tools/testing/selftests/kvm/x86/nested_vmsave_vmload_test.c
index 71717118d692..85d3f4cc76f3 100644
--- a/tools/testing/selftests/kvm/x86/nested_vmsave_vmload_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_vmsave_vmload_test.c
@@ -128,7 +128,7 @@ static void l1_guest_code(struct svm_test_data *svm)
int main(int argc, char *argv[])
{
- vm_vaddr_t nested_gva = 0;
+ gva_t nested_gva = 0;
struct vmcb *test_vmcb[2];
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/sev_smoke_test.c b/tools/testing/selftests/kvm/x86/sev_smoke_test.c
index 8bd37a476f15..dcb3aee699b9 100644
--- a/tools/testing/selftests/kvm/x86/sev_smoke_test.c
+++ b/tools/testing/selftests/kvm/x86/sev_smoke_test.c
@@ -108,7 +108,7 @@ static void test_sync_vmsa(uint32_t type, uint64_t policy)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- vm_vaddr_t gva;
+ gva_t gva;
void *hva;
double x87val = M_PI;
diff --git a/tools/testing/selftests/kvm/x86/smm_test.c b/tools/testing/selftests/kvm/x86/smm_test.c
index ade8412bf94a..3ede3ed8ae5c 100644
--- a/tools/testing/selftests/kvm/x86/smm_test.c
+++ b/tools/testing/selftests/kvm/x86/smm_test.c
@@ -113,7 +113,7 @@ static void guest_code(void *arg)
int main(int argc, char *argv[])
{
- vm_vaddr_t nested_gva = 0;
+ gva_t nested_gva = 0;
struct kvm_vcpu *vcpu;
struct kvm_regs regs;
diff --git a/tools/testing/selftests/kvm/x86/state_test.c b/tools/testing/selftests/kvm/x86/state_test.c
index 992a52504a4a..6797da4bd9d9 100644
--- a/tools/testing/selftests/kvm/x86/state_test.c
+++ b/tools/testing/selftests/kvm/x86/state_test.c
@@ -258,7 +258,7 @@ void check_nested_state(int stage, struct kvm_x86_state *state)
int main(int argc, char *argv[])
{
uint64_t *xstate_bv, saved_xstate_bv;
- vm_vaddr_t nested_gva = 0;
+ gva_t nested_gva = 0;
struct kvm_cpuid2 empty_cpuid = {};
struct kvm_regs regs1, regs2;
struct kvm_vcpu *vcpu, *vcpuN;
diff --git a/tools/testing/selftests/kvm/x86/svm_int_ctl_test.c b/tools/testing/selftests/kvm/x86/svm_int_ctl_test.c
index 917b6066cfc1..d3cc5e4f7883 100644
--- a/tools/testing/selftests/kvm/x86/svm_int_ctl_test.c
+++ b/tools/testing/selftests/kvm/x86/svm_int_ctl_test.c
@@ -82,7 +82,7 @@ static void l1_guest_code(struct svm_test_data *svm)
int main(int argc, char *argv[])
{
struct kvm_vcpu *vcpu;
- vm_vaddr_t svm_gva;
+ gva_t svm_gva;
struct kvm_vm *vm;
struct ucall uc;
diff --git a/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c b/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c
index ff99438824d3..7fbfaa054c95 100644
--- a/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c
+++ b/tools/testing/selftests/kvm/x86/svm_lbr_nested_state.c
@@ -97,9 +97,9 @@ void test_lbrv_nested_state(bool nested_lbrv)
{
struct kvm_x86_state *state = NULL;
struct kvm_vcpu *vcpu;
- vm_vaddr_t svm_gva;
struct kvm_vm *vm;
struct ucall uc;
+ gva_t svm_gva;
pr_info("Testing with nested LBRV %s\n", nested_lbrv ? "enabled" : "disabled");
diff --git a/tools/testing/selftests/kvm/x86/svm_nested_clear_efer_svme.c b/tools/testing/selftests/kvm/x86/svm_nested_clear_efer_svme.c
index a521a9eed061..6a89eaffc657 100644
--- a/tools/testing/selftests/kvm/x86/svm_nested_clear_efer_svme.c
+++ b/tools/testing/selftests/kvm/x86/svm_nested_clear_efer_svme.c
@@ -38,7 +38,7 @@ int main(int argc, char *argv[])
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- vm_vaddr_t nested_gva = 0;
+ gva_t nested_gva = 0;
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
diff --git a/tools/testing/selftests/kvm/x86/svm_nested_shutdown_test.c b/tools/testing/selftests/kvm/x86/svm_nested_shutdown_test.c
index 00135cbba35e..c6ea3d609a62 100644
--- a/tools/testing/selftests/kvm/x86/svm_nested_shutdown_test.c
+++ b/tools/testing/selftests/kvm/x86/svm_nested_shutdown_test.c
@@ -42,7 +42,7 @@ static void l1_guest_code(struct svm_test_data *svm, struct idt_entry *idt)
int main(int argc, char *argv[])
{
struct kvm_vcpu *vcpu;
- vm_vaddr_t svm_gva;
+ gva_t svm_gva;
struct kvm_vm *vm;
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
diff --git a/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c b/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c
index 4bd1655f9e6d..c739d071d3b3 100644
--- a/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c
+++ b/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c
@@ -144,8 +144,8 @@ static void run_test(bool is_nmi)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- vm_vaddr_t svm_gva;
- vm_vaddr_t idt_alt_vm;
+ gva_t svm_gva;
+ gva_t idt_alt_vm;
struct kvm_guest_debug debug;
pr_info("Running %s test\n", is_nmi ? "NMI" : "soft int");
diff --git a/tools/testing/selftests/kvm/x86/svm_nested_vmcb12_gpa.c b/tools/testing/selftests/kvm/x86/svm_nested_vmcb12_gpa.c
index 569869bed20b..ae8a10913af7 100644
--- a/tools/testing/selftests/kvm/x86/svm_nested_vmcb12_gpa.c
+++ b/tools/testing/selftests/kvm/x86/svm_nested_vmcb12_gpa.c
@@ -74,7 +74,7 @@ static u64 unmappable_gpa(struct kvm_vcpu *vcpu)
static void test_invalid_vmcb12(struct kvm_vcpu *vcpu)
{
- vm_vaddr_t nested_gva = 0;
+ gva_t nested_gva = 0;
struct ucall uc;
@@ -90,7 +90,7 @@ static void test_invalid_vmcb12(struct kvm_vcpu *vcpu)
static void test_unmappable_vmcb12(struct kvm_vcpu *vcpu)
{
- vm_vaddr_t nested_gva = 0;
+ gva_t nested_gva = 0;
vcpu_alloc_svm(vcpu->vm, &nested_gva);
vcpu_args_set(vcpu, 2, nested_gva, unmappable_gpa(vcpu));
@@ -103,7 +103,7 @@ static void test_unmappable_vmcb12(struct kvm_vcpu *vcpu)
static void test_unmappable_vmcb12_vmexit(struct kvm_vcpu *vcpu)
{
struct kvm_x86_state *state;
- vm_vaddr_t nested_gva = 0;
+ gva_t nested_gva = 0;
struct ucall uc;
/*
diff --git a/tools/testing/selftests/kvm/x86/svm_vmcall_test.c b/tools/testing/selftests/kvm/x86/svm_vmcall_test.c
index 8a62cca28cfb..b1887242f3b8 100644
--- a/tools/testing/selftests/kvm/x86/svm_vmcall_test.c
+++ b/tools/testing/selftests/kvm/x86/svm_vmcall_test.c
@@ -36,7 +36,7 @@ static void l1_guest_code(struct svm_test_data *svm)
int main(int argc, char *argv[])
{
struct kvm_vcpu *vcpu;
- vm_vaddr_t svm_gva;
+ gva_t svm_gva;
struct kvm_vm *vm;
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
diff --git a/tools/testing/selftests/kvm/x86/triple_fault_event_test.c b/tools/testing/selftests/kvm/x86/triple_fault_event_test.c
index 56306a19144a..f1c488e0d497 100644
--- a/tools/testing/selftests/kvm/x86/triple_fault_event_test.c
+++ b/tools/testing/selftests/kvm/x86/triple_fault_event_test.c
@@ -72,13 +72,13 @@ int main(void)
if (has_vmx) {
- vm_vaddr_t vmx_pages_gva;
+ gva_t vmx_pages_gva;
vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code_vmx);
vcpu_alloc_vmx(vm, &vmx_pages_gva);
vcpu_args_set(vcpu, 1, vmx_pages_gva);
} else {
- vm_vaddr_t svm_gva;
+ gva_t svm_gva;
vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code_svm);
vcpu_alloc_svm(vm, &svm_gva);
diff --git a/tools/testing/selftests/kvm/x86/vmx_apic_access_test.c b/tools/testing/selftests/kvm/x86/vmx_apic_access_test.c
index a81a24761aac..dc5c3d1db346 100644
--- a/tools/testing/selftests/kvm/x86/vmx_apic_access_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_apic_access_test.c
@@ -72,7 +72,7 @@ static void l1_guest_code(struct vmx_pages *vmx_pages, unsigned long high_gpa)
int main(int argc, char *argv[])
{
unsigned long apic_access_addr = ~0ul;
- vm_vaddr_t vmx_pages_gva;
+ gva_t vmx_pages_gva;
unsigned long high_gpa;
struct vmx_pages *vmx;
bool done = false;
diff --git a/tools/testing/selftests/kvm/x86/vmx_apicv_updates_test.c b/tools/testing/selftests/kvm/x86/vmx_apicv_updates_test.c
index 337c53fddeff..7f84cc92feaf 100644
--- a/tools/testing/selftests/kvm/x86/vmx_apicv_updates_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_apicv_updates_test.c
@@ -110,7 +110,7 @@ static void l1_guest_code(struct vmx_pages *vmx_pages)
int main(int argc, char *argv[])
{
- vm_vaddr_t vmx_pages_gva;
+ gva_t vmx_pages_gva;
struct vmx_pages *vmx;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/vmx_invalid_nested_guest_state.c b/tools/testing/selftests/kvm/x86/vmx_invalid_nested_guest_state.c
index a100ee5f0009..a2eaceed9ad5 100644
--- a/tools/testing/selftests/kvm/x86/vmx_invalid_nested_guest_state.c
+++ b/tools/testing/selftests/kvm/x86/vmx_invalid_nested_guest_state.c
@@ -52,7 +52,7 @@ static void l1_guest_code(struct vmx_pages *vmx_pages)
int main(int argc, char *argv[])
{
- vm_vaddr_t vmx_pages_gva;
+ gva_t vmx_pages_gva;
struct kvm_sregs sregs;
struct kvm_vcpu *vcpu;
struct kvm_run *run;
diff --git a/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c b/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c
index 915c42001dba..4ffa11a6bcd8 100644
--- a/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c
@@ -73,7 +73,7 @@ void guest_code(struct vmx_pages *vmx_pages)
int main(int argc, char *argv[])
{
- vm_vaddr_t vmx_pages_gva = 0;
+ gva_t vmx_pages_gva = 0;
struct kvm_vm *vm;
struct kvm_vcpu *vcpu;
struct kvm_x86_state *state;
diff --git a/tools/testing/selftests/kvm/x86/vmx_preemption_timer_test.c b/tools/testing/selftests/kvm/x86/vmx_preemption_timer_test.c
index 00dd2ac07a61..1b7b6ba23de7 100644
--- a/tools/testing/selftests/kvm/x86/vmx_preemption_timer_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_preemption_timer_test.c
@@ -152,7 +152,7 @@ void guest_code(struct vmx_pages *vmx_pages)
int main(int argc, char *argv[])
{
- vm_vaddr_t vmx_pages_gva = 0;
+ gva_t vmx_pages_gva = 0;
struct kvm_regs regs1, regs2;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
index ae4a4b6c05ca..0b10dfbfa3ea 100644
--- a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
+++ b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
@@ -393,7 +393,7 @@ int main(int argc, char *argv[])
int run_secs = 0;
int delay_usecs = 0;
struct test_data_page *data;
- vm_vaddr_t test_data_page_vaddr;
+ gva_t test_data_page_vaddr;
bool migrate = false;
pthread_t threads[2];
struct thread_params params[2];
--
2.54.0.rc1.555.g9c883467ad-goog
* [PATCH v3 02/19] KVM: selftests: Use gpa_t instead of vm_paddr_t
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 01/19] KVM: selftests: Use gva_t instead of vm_vaddr_t Sean Christopherson
@ 2026-04-20 21:19 ` Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 03/19] KVM: selftests: Use gpa_t for GPAs in Hyper-V selftests Sean Christopherson
` (14 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:19 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
From: David Matlack <dmatlack@google.com>
Replace all occurrences of vm_paddr_t with gpa_t to align with KVM code
and with the conversion helpers (e.g. addr_hva2gpa()).
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/vm_paddr_/gpa_/g'
Whitespace was then adjusted manually to make checkpatch.pl happy.
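For reviewers who want to sanity-check the mechanical rename, a minimal sketch of what the sed invocation does (using an illustrative throwaway file rather than the real tree; the real command above operates on `git ls-files` output):

```shell
# Demonstrate the vm_paddr_ -> gpa_ rename on a sample declaration.
printf 'vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);\n' > demo.h
sed -i 's/vm_paddr_/gpa_/g' demo.h
cat demo.h   # -> gpa_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
rm -f demo.h
```

After running the real rename, `git grep vm_paddr_ -- tools/testing/selftests/kvm` should come back empty, confirming no occurrences of the old type were missed.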
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
[sean: drop bogus changelog blurb about renaming functions]
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
.../testing/selftests/kvm/arm64/sea_to_user.c | 4 +--
.../selftests/kvm/arm64/vgic_lpi_stress.c | 20 ++++++------
tools/testing/selftests/kvm/dirty_log_test.c | 2 +-
.../testing/selftests/kvm/include/arm64/gic.h | 4 +--
.../selftests/kvm/include/arm64/gic_v3_its.h | 7 ++---
.../testing/selftests/kvm/include/kvm_util.h | 31 +++++++++----------
.../selftests/kvm/include/kvm_util_types.h | 2 +-
.../selftests/kvm/include/riscv/ucall.h | 2 +-
.../selftests/kvm/include/s390/ucall.h | 2 +-
.../selftests/kvm/include/ucall_common.h | 4 +--
tools/testing/selftests/kvm/include/x86/sev.h | 4 +--
.../testing/selftests/kvm/include/x86/ucall.h | 2 +-
.../selftests/kvm/kvm_page_table_test.c | 2 +-
.../testing/selftests/kvm/lib/arm64/gic_v3.c | 4 +--
.../selftests/kvm/lib/arm64/gic_v3_its.c | 11 +++----
.../selftests/kvm/lib/arm64/processor.c | 2 +-
tools/testing/selftests/kvm/lib/arm64/ucall.c | 2 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 27 ++++++++--------
.../selftests/kvm/lib/loongarch/processor.c | 10 +++---
.../selftests/kvm/lib/loongarch/ucall.c | 2 +-
tools/testing/selftests/kvm/lib/memstress.c | 2 +-
.../selftests/kvm/lib/riscv/processor.c | 2 +-
.../selftests/kvm/lib/s390/processor.c | 4 +--
.../testing/selftests/kvm/lib/ucall_common.c | 2 +-
.../testing/selftests/kvm/lib/x86/processor.c | 2 +-
tools/testing/selftests/kvm/lib/x86/sev.c | 2 +-
.../selftests/kvm/riscv/sbi_pmu_test.c | 4 +--
.../testing/selftests/kvm/s390/irq_routing.c | 2 +-
.../selftests/kvm/s390/ucontrol_test.c | 2 +-
tools/testing/selftests/kvm/steal_time.c | 4 +--
.../testing/selftests/kvm/x86/hyperv_clock.c | 2 +-
.../kvm/x86/hyperv_extended_hypercalls.c | 2 +-
.../selftests/kvm/x86/hyperv_tlb_flush.c | 8 ++---
.../selftests/kvm/x86/kvm_clock_test.c | 4 +--
.../kvm/x86/vmx_nested_la57_state_test.c | 2 +-
35 files changed, 92 insertions(+), 96 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/sea_to_user.c b/tools/testing/selftests/kvm/arm64/sea_to_user.c
index 573dd790aeb8..f41987dc726a 100644
--- a/tools/testing/selftests/kvm/arm64/sea_to_user.c
+++ b/tools/testing/selftests/kvm/arm64/sea_to_user.c
@@ -51,7 +51,7 @@
#define EINJ_OFFSET 0x01234badUL
#define EINJ_GVA ((START_GVA) + (EINJ_OFFSET))
-static vm_paddr_t einj_gpa;
+static gpa_t einj_gpa;
static void *einj_hva;
static uint64_t einj_hpa;
static bool far_invalid;
@@ -254,7 +254,7 @@ static struct kvm_vm *vm_create_with_sea_handler(struct kvm_vcpu **vcpu)
size_t guest_page_size;
size_t alignment;
uint64_t num_guest_pages;
- vm_paddr_t start_gpa;
+ gpa_t start_gpa;
enum vm_mem_backing_src_type src_type = VM_MEM_SRC_ANONYMOUS_HUGETLB_1GB;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/arm64/vgic_lpi_stress.c b/tools/testing/selftests/kvm/arm64/vgic_lpi_stress.c
index e857a605f577..d64d434d3f06 100644
--- a/tools/testing/selftests/kvm/arm64/vgic_lpi_stress.c
+++ b/tools/testing/selftests/kvm/arm64/vgic_lpi_stress.c
@@ -23,7 +23,7 @@
#define GIC_LPI_OFFSET 8192
static size_t nr_iterations = 1000;
-static vm_paddr_t gpa_base;
+static gpa_t gpa_base;
static struct kvm_vm *vm;
static struct kvm_vcpu **vcpus;
@@ -35,14 +35,14 @@ static struct test_data {
u32 nr_devices;
u32 nr_event_ids;
- vm_paddr_t device_table;
- vm_paddr_t collection_table;
- vm_paddr_t cmdq_base;
+ gpa_t device_table;
+ gpa_t collection_table;
+ gpa_t cmdq_base;
void *cmdq_base_va;
- vm_paddr_t itt_tables;
+ gpa_t itt_tables;
- vm_paddr_t lpi_prop_table;
- vm_paddr_t lpi_pend_tables;
+ gpa_t lpi_prop_table;
+ gpa_t lpi_pend_tables;
} test_data = {
.nr_cpus = 1,
.nr_devices = 1,
@@ -73,7 +73,7 @@ static void guest_setup_its_mappings(void)
/* Round-robin the LPIs to all of the vCPUs in the VM */
coll_id = 0;
for (device_id = 0; device_id < nr_devices; device_id++) {
- vm_paddr_t itt_base = test_data.itt_tables + (device_id * SZ_64K);
+ gpa_t itt_base = test_data.itt_tables + (device_id * SZ_64K);
its_send_mapd_cmd(test_data.cmdq_base_va, device_id,
itt_base, SZ_64K, true);
@@ -188,7 +188,7 @@ static void setup_test_data(void)
size_t pages_per_64k = vm_calc_num_guest_pages(vm->mode, SZ_64K);
u32 nr_devices = test_data.nr_devices;
u32 nr_cpus = test_data.nr_cpus;
- vm_paddr_t cmdq_base;
+ gpa_t cmdq_base;
test_data.device_table = vm_phy_pages_alloc(vm, pages_per_64k,
gpa_base,
@@ -224,7 +224,7 @@ static void setup_gic(void)
static void signal_lpi(u32 device_id, u32 event_id)
{
- vm_paddr_t db_addr = GITS_BASE_GPA + GITS_TRANSLATER;
+ gpa_t db_addr = GITS_BASE_GPA + GITS_TRANSLATER;
struct kvm_msi msi = {
.address_lo = db_addr,
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 7627b328f18a..9b6b9a597175 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -667,7 +667,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages);
/* Cache the HVA pointer of the region */
- host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem);
+ host_test_mem = addr_gpa2hva(vm, (gpa_t)guest_test_phys_mem);
/* Export the shared variables to the guest */
sync_global_to_guest(vm, host_page_size);
diff --git a/tools/testing/selftests/kvm/include/arm64/gic.h b/tools/testing/selftests/kvm/include/arm64/gic.h
index cc7a7f34ed37..6408f952cb64 100644
--- a/tools/testing/selftests/kvm/include/arm64/gic.h
+++ b/tools/testing/selftests/kvm/include/arm64/gic.h
@@ -59,7 +59,7 @@ bool gic_irq_get_pending(unsigned int intid);
void gic_irq_set_config(unsigned int intid, bool is_edge);
void gic_irq_set_group(unsigned int intid, bool group);
-void gic_rdist_enable_lpis(vm_paddr_t cfg_table, size_t cfg_table_size,
- vm_paddr_t pend_table);
+void gic_rdist_enable_lpis(gpa_t cfg_table, size_t cfg_table_size,
+ gpa_t pend_table);
#endif /* SELFTEST_KVM_GIC_H */
diff --git a/tools/testing/selftests/kvm/include/arm64/gic_v3_its.h b/tools/testing/selftests/kvm/include/arm64/gic_v3_its.h
index 58feef3eb386..a43a407e2d5c 100644
--- a/tools/testing/selftests/kvm/include/arm64/gic_v3_its.h
+++ b/tools/testing/selftests/kvm/include/arm64/gic_v3_its.h
@@ -5,11 +5,10 @@
#include <linux/sizes.h>
-void its_init(vm_paddr_t coll_tbl, size_t coll_tbl_sz,
- vm_paddr_t device_tbl, size_t device_tbl_sz,
- vm_paddr_t cmdq, size_t cmdq_size);
+void its_init(gpa_t coll_tbl, size_t coll_tbl_sz, gpa_t device_tbl,
+ size_t device_tbl_sz, gpa_t cmdq, size_t cmdq_size);
-void its_send_mapd_cmd(void *cmdq_base, u32 device_id, vm_paddr_t itt_base,
+void its_send_mapd_cmd(void *cmdq_base, u32 device_id, gpa_t itt_base,
size_t itt_size, bool valid);
void its_send_mapc_cmd(void *cmdq_base, u32 vcpu_id, u32 collection_id, bool valid);
void its_send_mapti_cmd(void *cmdq_base, u32 device_id, u32 event_id,
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 2378dd42c988..9f602c73fbb4 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -111,7 +111,7 @@ struct kvm_vm {
struct sparsebit *vpages_valid;
struct sparsebit *vpages_mapped;
bool has_irqchip;
- vm_paddr_t ucall_mmio_addr;
+ gpa_t ucall_mmio_addr;
gva_t handlers;
uint32_t dirty_ring_size;
uint64_t gpa_tag_mask;
@@ -728,16 +728,16 @@ gva_t vm_vaddr_alloc_page(struct kvm_vm *vm);
void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
unsigned int npages);
-void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
+void *addr_gpa2hva(struct kvm_vm *vm, gpa_t gpa);
void *addr_gva2hva(struct kvm_vm *vm, gva_t gva);
-vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
-void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa);
+gpa_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
+void *addr_gpa2alias(struct kvm_vm *vm, gpa_t gpa);
#ifndef vcpu_arch_put_guest
#define vcpu_arch_put_guest(mem, val) do { (mem) = (val); } while (0)
#endif
-static inline vm_paddr_t vm_untag_gpa(struct kvm_vm *vm, vm_paddr_t gpa)
+static inline gpa_t vm_untag_gpa(struct kvm_vm *vm, gpa_t gpa)
{
return gpa & ~vm->gpa_tag_mask;
}
@@ -988,15 +988,14 @@ void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
const char *exit_reason_str(unsigned int exit_reason);
-vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
- uint32_t memslot);
-vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- vm_paddr_t paddr_min, uint32_t memslot,
- bool protected);
-vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
+gpa_t vm_phy_page_alloc(struct kvm_vm *vm, gpa_t paddr_min, uint32_t memslot);
+gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+ gpa_t paddr_min, uint32_t memslot,
+ bool protected);
+gpa_t vm_alloc_page_table(struct kvm_vm *vm);
-static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- vm_paddr_t paddr_min, uint32_t memslot)
+static inline gpa_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+ gpa_t paddr_min, uint32_t memslot)
{
/*
* By default, allocate memory as protected for VMs that support
@@ -1240,9 +1239,9 @@ static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr
* Returns the VM physical address of the translated VM virtual
* address given by @gva.
*/
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva);
+gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva);
-static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, gva_t gva)
+static inline gpa_t addr_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
return addr_arch_gva2gpa(vm, gva);
}
@@ -1291,7 +1290,7 @@ void kvm_arch_vm_post_create(struct kvm_vm *vm, unsigned int nr_vcpus);
void kvm_arch_vm_finalize_vcpus(struct kvm_vm *vm);
void kvm_arch_vm_release(struct kvm_vm *vm);
-bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr);
+bool vm_is_gpa_protected(struct kvm_vm *vm, gpa_t paddr);
uint32_t guest_get_vcpuid(void);
diff --git a/tools/testing/selftests/kvm/include/kvm_util_types.h b/tools/testing/selftests/kvm/include/kvm_util_types.h
index f27bd035ea10..1d9eedb4885e 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_types.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_types.h
@@ -14,7 +14,7 @@
#define __kvm_static_assert(expr, msg, ...) _Static_assert(expr, msg)
#define kvm_static_assert(expr, ...) __kvm_static_assert(expr, ##__VA_ARGS__, #expr)
-typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
+typedef uint64_t gpa_t; /* Virtual Machine (Guest) physical address */
typedef uint64_t gva_t; /* Virtual Machine (Guest) virtual address */
#define INVALID_GPA (~(uint64_t)0)
diff --git a/tools/testing/selftests/kvm/include/riscv/ucall.h b/tools/testing/selftests/kvm/include/riscv/ucall.h
index 41d56254968e..2de7c6a36096 100644
--- a/tools/testing/selftests/kvm/include/riscv/ucall.h
+++ b/tools/testing/selftests/kvm/include/riscv/ucall.h
@@ -7,7 +7,7 @@
#define UCALL_EXIT_REASON KVM_EXIT_RISCV_SBI
-static inline void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
+static inline void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
}
diff --git a/tools/testing/selftests/kvm/include/s390/ucall.h b/tools/testing/selftests/kvm/include/s390/ucall.h
index befee84c4609..3907d629304f 100644
--- a/tools/testing/selftests/kvm/include/s390/ucall.h
+++ b/tools/testing/selftests/kvm/include/s390/ucall.h
@@ -6,7 +6,7 @@
#define UCALL_EXIT_REASON KVM_EXIT_S390_SIEIC
-static inline void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
+static inline void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
}
diff --git a/tools/testing/selftests/kvm/include/ucall_common.h b/tools/testing/selftests/kvm/include/ucall_common.h
index e5499f170834..1db399c00d02 100644
--- a/tools/testing/selftests/kvm/include/ucall_common.h
+++ b/tools/testing/selftests/kvm/include/ucall_common.h
@@ -29,7 +29,7 @@ struct ucall {
struct ucall *hva;
};
-void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa);
+void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa);
void ucall_arch_do_ucall(gva_t uc);
void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu);
@@ -39,7 +39,7 @@ __printf(5, 6) void ucall_assert(uint64_t cmd, const char *exp,
const char *file, unsigned int line,
const char *fmt, ...);
uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc);
-void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa);
+void ucall_init(struct kvm_vm *vm, gpa_t mmio_gpa);
int ucall_nr_pages_required(uint64_t page_size);
/*
diff --git a/tools/testing/selftests/kvm/include/x86/sev.h b/tools/testing/selftests/kvm/include/x86/sev.h
index 008b4169f5e2..289ff5b3f10c 100644
--- a/tools/testing/selftests/kvm/include/x86/sev.h
+++ b/tools/testing/selftests/kvm/include/x86/sev.h
@@ -120,7 +120,7 @@ static inline void sev_register_encrypted_memory(struct kvm_vm *vm,
vm_ioctl(vm, KVM_MEMORY_ENCRYPT_REG_REGION, &range);
}
-static inline void sev_launch_update_data(struct kvm_vm *vm, vm_paddr_t gpa,
+static inline void sev_launch_update_data(struct kvm_vm *vm, gpa_t gpa,
uint64_t size)
{
struct kvm_sev_launch_update_data update_data = {
@@ -131,7 +131,7 @@ static inline void sev_launch_update_data(struct kvm_vm *vm, vm_paddr_t gpa,
vm_sev_ioctl(vm, KVM_SEV_LAUNCH_UPDATE_DATA, &update_data);
}
-static inline void snp_launch_update_data(struct kvm_vm *vm, vm_paddr_t gpa,
+static inline void snp_launch_update_data(struct kvm_vm *vm, gpa_t gpa,
uint64_t hva, uint64_t size, uint8_t type)
{
struct kvm_sev_snp_launch_update update_data = {
diff --git a/tools/testing/selftests/kvm/include/x86/ucall.h b/tools/testing/selftests/kvm/include/x86/ucall.h
index d3825dcc3cd9..0e4950041e3e 100644
--- a/tools/testing/selftests/kvm/include/x86/ucall.h
+++ b/tools/testing/selftests/kvm/include/x86/ucall.h
@@ -6,7 +6,7 @@
#define UCALL_EXIT_REASON KVM_EXIT_IO
-static inline void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
+static inline void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
}
diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
index 61915fc89c17..e8a60d5ccbe6 100644
--- a/tools/testing/selftests/kvm/kvm_page_table_test.c
+++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
@@ -281,7 +281,7 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg)
virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages);
/* Cache the HVA pointer of the region */
- host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem);
+ host_test_mem = addr_gpa2hva(vm, (gpa_t)guest_test_phys_mem);
/* Export shared structure test_args to guest */
sync_global_to_guest(vm, test_args);
diff --git a/tools/testing/selftests/kvm/lib/arm64/gic_v3.c b/tools/testing/selftests/kvm/lib/arm64/gic_v3.c
index 50754a27f493..ae3959f3bb11 100644
--- a/tools/testing/selftests/kvm/lib/arm64/gic_v3.c
+++ b/tools/testing/selftests/kvm/lib/arm64/gic_v3.c
@@ -424,8 +424,8 @@ const struct gic_common_ops gicv3_ops = {
.gic_irq_set_group = gicv3_set_group,
};
-void gic_rdist_enable_lpis(vm_paddr_t cfg_table, size_t cfg_table_size,
- vm_paddr_t pend_table)
+void gic_rdist_enable_lpis(gpa_t cfg_table, size_t cfg_table_size,
+ gpa_t pend_table)
{
volatile void *rdist_base = gicr_base_cpu(guest_get_vcpuid());
diff --git a/tools/testing/selftests/kvm/lib/arm64/gic_v3_its.c b/tools/testing/selftests/kvm/lib/arm64/gic_v3_its.c
index 7f9fdcf42ae6..1188b578121d 100644
--- a/tools/testing/selftests/kvm/lib/arm64/gic_v3_its.c
+++ b/tools/testing/selftests/kvm/lib/arm64/gic_v3_its.c
@@ -54,7 +54,7 @@ static unsigned long its_find_baser(unsigned int type)
return -1;
}
-static void its_install_table(unsigned int type, vm_paddr_t base, size_t size)
+static void its_install_table(unsigned int type, gpa_t base, size_t size)
{
unsigned long offset = its_find_baser(type);
u64 baser;
@@ -69,7 +69,7 @@ static void its_install_table(unsigned int type, vm_paddr_t base, size_t size)
its_write_u64(offset, baser);
}
-static void its_install_cmdq(vm_paddr_t base, size_t size)
+static void its_install_cmdq(gpa_t base, size_t size)
{
u64 cbaser;
@@ -82,9 +82,8 @@ static void its_install_cmdq(vm_paddr_t base, size_t size)
its_write_u64(GITS_CBASER, cbaser);
}
-void its_init(vm_paddr_t coll_tbl, size_t coll_tbl_sz,
- vm_paddr_t device_tbl, size_t device_tbl_sz,
- vm_paddr_t cmdq, size_t cmdq_size)
+void its_init(gpa_t coll_tbl, size_t coll_tbl_sz, gpa_t device_tbl,
+ size_t device_tbl_sz, gpa_t cmdq, size_t cmdq_size)
{
u32 ctlr;
@@ -204,7 +203,7 @@ static void its_send_cmd(void *cmdq_base, struct its_cmd_block *cmd)
}
}
-void its_send_mapd_cmd(void *cmdq_base, u32 device_id, vm_paddr_t itt_base,
+void its_send_mapd_cmd(void *cmdq_base, u32 device_id, gpa_t itt_base,
size_t itt_size, bool valid)
{
struct its_cmd_block cmd = {};
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index 3645acae09ce..0e8603788134 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -230,7 +230,7 @@ uint64_t *virt_get_pte_hva(struct kvm_vm *vm, gva_t gva)
return virt_get_pte_hva_at_level(vm, gva, 3);
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
+gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
uint64_t *ptep = virt_get_pte_hva(vm, gva);
diff --git a/tools/testing/selftests/kvm/lib/arm64/ucall.c b/tools/testing/selftests/kvm/lib/arm64/ucall.c
index 9ea747982d00..5f85fa7a9449 100644
--- a/tools/testing/selftests/kvm/lib/arm64/ucall.c
+++ b/tools/testing/selftests/kvm/lib/arm64/ucall.c
@@ -8,7 +8,7 @@
gva_t *ucall_exit_mmio_addr;
-void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
+void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
gva_t mmio_gva = vm_vaddr_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 04a59603e93e..89c4e6f01739 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1457,9 +1457,9 @@ static gva_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
virt_pgd_alloc(vm);
- vm_paddr_t paddr = __vm_phy_pages_alloc(vm, pages,
- KVM_UTIL_MIN_PFN * vm->page_size,
- vm->memslots[type], protected);
+ gpa_t paddr = __vm_phy_pages_alloc(vm, pages,
+ KVM_UTIL_MIN_PFN * vm->page_size,
+ vm->memslots[type], protected);
/*
* Find an unused range of virtual page addresses of at least
@@ -1607,7 +1607,7 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
* address providing the memory to the vm physical address is returned.
* A TEST_ASSERT failure occurs if no region containing gpa exists.
*/
-void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
+void *addr_gpa2hva(struct kvm_vm *vm, gpa_t gpa)
{
struct userspace_mem_region *region;
@@ -1640,7 +1640,7 @@ void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
* VM physical address is returned. A TEST_ASSERT failure occurs if no
* region containing hva exists.
*/
-vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva)
+gpa_t addr_hva2gpa(struct kvm_vm *vm, void *hva)
{
struct rb_node *node;
@@ -1651,7 +1651,7 @@ vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva)
if (hva >= region->host_mem) {
if (hva <= (region->host_mem
+ region->region.memory_size - 1))
- return (vm_paddr_t)((uintptr_t)
+ return (gpa_t)((uintptr_t)
region->region.guest_phys_addr
+ (hva - (uintptr_t)region->host_mem));
@@ -1683,7 +1683,7 @@ vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva)
* memory without mapping said memory in the guest's address space. And, for
* userfaultfd-based demand paging, to do so without triggering userfaults.
*/
-void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa)
+void *addr_gpa2alias(struct kvm_vm *vm, gpa_t gpa)
{
struct userspace_mem_region *region;
uintptr_t offset;
@@ -2087,9 +2087,9 @@ const char *exit_reason_str(unsigned int exit_reason)
* and their base address is returned. A TEST_ASSERT failure occurs if
* not enough pages are available at or above paddr_min.
*/
-vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- vm_paddr_t paddr_min, uint32_t memslot,
- bool protected)
+gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+ gpa_t paddr_min, uint32_t memslot,
+ bool protected)
{
struct userspace_mem_region *region;
sparsebit_idx_t pg, base;
@@ -2133,13 +2133,12 @@ vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
return base * vm->page_size;
}
-vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
- uint32_t memslot)
+gpa_t vm_phy_page_alloc(struct kvm_vm *vm, gpa_t paddr_min, uint32_t memslot)
{
return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
}
-vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
+gpa_t vm_alloc_page_table(struct kvm_vm *vm)
{
return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR,
vm->memslots[MEM_REGION_PT]);
@@ -2353,7 +2352,7 @@ void __attribute((constructor)) kvm_selftest_init(void)
kvm_selftest_arch_init();
}
-bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr)
+bool vm_is_gpa_protected(struct kvm_vm *vm, gpa_t paddr)
{
sparsebit_idx_t pg = 0;
struct userspace_mem_region *region;
diff --git a/tools/testing/selftests/kvm/lib/loongarch/processor.c b/tools/testing/selftests/kvm/lib/loongarch/processor.c
index 3b67720fbbe1..28a384e9704f 100644
--- a/tools/testing/selftests/kvm/lib/loongarch/processor.c
+++ b/tools/testing/selftests/kvm/lib/loongarch/processor.c
@@ -12,7 +12,7 @@
#define LOONGARCH_PAGE_TABLE_PHYS_MIN 0x200000
#define LOONGARCH_GUEST_STACK_VADDR_MIN 0x200000
-static vm_paddr_t invalid_pgtable[4];
+static gpa_t invalid_pgtable[4];
static gva_t exception_handlers;
static uint64_t virt_pte_index(struct kvm_vm *vm, gva_t gva, int level)
@@ -35,7 +35,7 @@ static uint64_t ptrs_per_pte(struct kvm_vm *vm)
return 1 << (vm->page_shift - 3);
}
-static void virt_set_pgtable(struct kvm_vm *vm, vm_paddr_t table, vm_paddr_t child)
+static void virt_set_pgtable(struct kvm_vm *vm, gpa_t table, gpa_t child)
{
uint64_t *ptep;
int i, ptrs_per_pte;
@@ -49,7 +49,7 @@ static void virt_set_pgtable(struct kvm_vm *vm, vm_paddr_t table, vm_paddr_t chi
void virt_arch_pgd_alloc(struct kvm_vm *vm)
{
int i;
- vm_paddr_t child, table;
+ gpa_t child, table;
if (vm->mmu.pgd_created)
return;
@@ -76,7 +76,7 @@ static uint64_t *virt_populate_pte(struct kvm_vm *vm, gva_t gva, int alloc)
{
int level;
uint64_t *ptep;
- vm_paddr_t child;
+ gpa_t child;
if (!vm->mmu.pgd_created)
goto unmapped_gva;
@@ -106,7 +106,7 @@ static uint64_t *virt_populate_pte(struct kvm_vm *vm, gva_t gva, int alloc)
exit(EXIT_FAILURE);
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
+gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
uint64_t *ptep;
diff --git a/tools/testing/selftests/kvm/lib/loongarch/ucall.c b/tools/testing/selftests/kvm/lib/loongarch/ucall.c
index a5aa568f437b..2c8abe9f5382 100644
--- a/tools/testing/selftests/kvm/lib/loongarch/ucall.c
+++ b/tools/testing/selftests/kvm/lib/loongarch/ucall.c
@@ -11,7 +11,7 @@
*/
gva_t *ucall_exit_mmio_addr;
-void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
+void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
gva_t mmio_gva = vm_vaddr_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index 1ea735d66e15..b7bfeade85f7 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -203,7 +203,7 @@ struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
/* Add extra memory slots for testing */
for (i = 0; i < slots; i++) {
uint64_t region_pages = guest_num_pages / slots;
- vm_paddr_t region_start = args->gpa + region_pages * args->guest_page_size * i;
+ gpa_t region_start = args->gpa + region_pages * args->guest_page_size * i;
vm_userspace_mem_region_add(vm, backing_src, region_start,
MEMSTRESS_MEM_SLOT_INDEX + i,
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index 552628dda4a0..25749439fdbf 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -119,7 +119,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
PGTBL_PTE_PERM_MASK | PGTBL_PTE_VALID_MASK;
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
+gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
uint64_t *ptep;
int level = vm->mmu.pgtable_levels - 1;
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index e8d3c1d333d5..153cef5c2328 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -12,7 +12,7 @@
void virt_arch_pgd_alloc(struct kvm_vm *vm)
{
- vm_paddr_t paddr;
+ gpa_t paddr;
TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
vm->page_size);
@@ -86,7 +86,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
entry[idx] = gpa;
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
+gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
int ri, idx;
uint64_t *entry;
diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index 997444178c78..9afcae844d72 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -25,7 +25,7 @@ int ucall_nr_pages_required(uint64_t page_size)
*/
static struct ucall_header *ucall_pool;
-void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
+void ucall_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
struct ucall_header *hdr;
struct ucall *uc;
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 7a01f83cab0b..d1de157fedff 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -618,7 +618,7 @@ static void kvm_seg_set_kernel_data_64bit(struct kvm_segment *segp)
segp->present = true;
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
+gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
int level = PG_LEVEL_NONE;
uint64_t *pte = __vm_get_page_table_entry(vm, &vm->mmu, gva, &level);
diff --git a/tools/testing/selftests/kvm/lib/x86/sev.c b/tools/testing/selftests/kvm/lib/x86/sev.c
index c3a9838f4806..aecef6048ff1 100644
--- a/tools/testing/selftests/kvm/lib/x86/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86/sev.c
@@ -18,7 +18,7 @@ static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *regio
uint8_t page_type, bool private)
{
const struct sparsebit *protected_phy_pages = region->protected_phy_pages;
- const vm_paddr_t gpa_base = region->region.guest_phys_addr;
+ const gpa_t gpa_base = region->region.guest_phys_addr;
const sparsebit_idx_t lowest_page_in_region = gpa_base >> vm->page_shift;
sparsebit_idx_t i, j;
diff --git a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
index 8366c11131ff..207dc5cd36f0 100644
--- a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
+++ b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
@@ -24,7 +24,7 @@ union sbi_pmu_ctr_info ctrinfo_arr[RISCV_MAX_PMU_COUNTERS];
/* Snapshot shared memory data */
#define PMU_SNAPSHOT_GPA_BASE BIT(30)
static void *snapshot_gva;
-static vm_paddr_t snapshot_gpa;
+static gpa_t snapshot_gpa;
static int vcpu_shared_irq_count;
static int counter_in_use;
@@ -259,7 +259,7 @@ static inline void verify_sbi_requirement_assert(void)
__GUEST_ASSERT(0, "SBI implementation version doesn't support PMU Snapshot");
}
-static void snapshot_set_shmem(vm_paddr_t gpa, unsigned long flags)
+static void snapshot_set_shmem(gpa_t gpa, unsigned long flags)
{
unsigned long lo = (unsigned long)gpa;
#if __riscv_xlen == 32
diff --git a/tools/testing/selftests/kvm/s390/irq_routing.c b/tools/testing/selftests/kvm/s390/irq_routing.c
index 7819a0af19a8..f3839284ac08 100644
--- a/tools/testing/selftests/kvm/s390/irq_routing.c
+++ b/tools/testing/selftests/kvm/s390/irq_routing.c
@@ -27,7 +27,7 @@ static void test(void)
struct kvm_irq_routing *routing;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- vm_paddr_t mem;
+ gpa_t mem;
int ret;
struct kvm_irq_routing_entry ue = {
diff --git a/tools/testing/selftests/kvm/s390/ucontrol_test.c b/tools/testing/selftests/kvm/s390/ucontrol_test.c
index 50bc1c38225a..f773ba0f4641 100644
--- a/tools/testing/selftests/kvm/s390/ucontrol_test.c
+++ b/tools/testing/selftests/kvm/s390/ucontrol_test.c
@@ -111,7 +111,7 @@ FIXTURE(uc_kvm)
uintptr_t base_hva;
uintptr_t code_hva;
int kvm_run_size;
- vm_paddr_t pgd;
+ gpa_t pgd;
void *vm_mem;
int vcpu_fd;
int kvm_fd;
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index d2a513ec7dd5..f461eb7a0f6e 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -239,7 +239,7 @@ static void check_steal_time_uapi(void)
/* SBI STA shmem must have 64-byte alignment */
#define STEAL_TIME_SIZE ((sizeof(struct sta_struct) + 63) & ~63)
-static vm_paddr_t st_gpa[NR_VCPUS];
+static gpa_t st_gpa[NR_VCPUS];
struct sta_struct {
uint32_t sequence;
@@ -249,7 +249,7 @@ struct sta_struct {
uint8_t pad[47];
} __packed;
-static void sta_set_shmem(vm_paddr_t gpa, unsigned long flags)
+static void sta_set_shmem(gpa_t gpa, unsigned long flags)
{
unsigned long lo = (unsigned long)gpa;
#if __riscv_xlen == 32
diff --git a/tools/testing/selftests/kvm/x86/hyperv_clock.c b/tools/testing/selftests/kvm/x86/hyperv_clock.c
index b68844924dc5..6bb1ca11256f 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_clock.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_clock.c
@@ -98,7 +98,7 @@ static inline void check_tsc_msr_tsc_page(struct ms_hyperv_tsc_page *tsc_page)
GUEST_ASSERT(r2 >= t1 && r2 - t2 < 100000);
}
-static void guest_main(struct ms_hyperv_tsc_page *tsc_page, vm_paddr_t tsc_page_gpa)
+static void guest_main(struct ms_hyperv_tsc_page *tsc_page, gpa_t tsc_page_gpa)
{
u64 tsc_scale, tsc_offset;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
index 7762c168bbf3..5f561fcda55a 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
@@ -15,7 +15,7 @@
/* Any value is fine */
#define EXT_CAPABILITIES 0xbull
-static void guest_code(vm_paddr_t in_pg_gpa, vm_paddr_t out_pg_gpa,
+static void guest_code(gpa_t in_pg_gpa, gpa_t out_pg_gpa,
gva_t out_pg_gva)
{
uint64_t *output_gva;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
index 7f58a5efe6d5..2de01da9d11d 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
@@ -62,7 +62,7 @@ struct hv_tlb_flush_ex {
*/
struct test_data {
gva_t hcall_gva;
- vm_paddr_t hcall_gpa;
+ gpa_t hcall_gpa;
gva_t test_pages;
gva_t test_pages_pte[NTEST_PAGES];
};
@@ -133,7 +133,7 @@ static void set_expected_val(void *addr, u64 val, int vcpu_id)
* Update PTEs swapping two test pages.
* TODO: use swap()/xchg() when these are provided.
*/
-static void swap_two_test_pages(vm_paddr_t pte_gva1, vm_paddr_t pte_gva2)
+static void swap_two_test_pages(gpa_t pte_gva1, gpa_t pte_gva2)
{
uint64_t tmp = *(uint64_t *)pte_gva1;
@@ -201,7 +201,7 @@ static void sender_guest_code(gva_t test_data)
struct test_data *data = (struct test_data *)test_data;
struct hv_tlb_flush *flush = (struct hv_tlb_flush *)data->hcall_gva;
struct hv_tlb_flush_ex *flush_ex = (struct hv_tlb_flush_ex *)data->hcall_gva;
- vm_paddr_t hcall_gpa = data->hcall_gpa;
+ gpa_t hcall_gpa = data->hcall_gpa;
int i, stage = 1;
wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
@@ -582,7 +582,7 @@ int main(int argc, char *argv[])
struct kvm_vcpu *vcpu[3];
pthread_t threads[2];
gva_t test_data_page, gva;
- vm_paddr_t gpa;
+ gpa_t gpa;
uint64_t *pte;
struct test_data *data;
struct ucall uc;
diff --git a/tools/testing/selftests/kvm/x86/kvm_clock_test.c b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
index e14f7330302e..5721e035e38c 100644
--- a/tools/testing/selftests/kvm/x86/kvm_clock_test.c
+++ b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
@@ -31,7 +31,7 @@ static struct test_case test_cases[] = {
#define GUEST_SYNC_CLOCK(__stage, __val) \
GUEST_SYNC_ARGS(__stage, __val, 0, 0, 0)
-static void guest_main(vm_paddr_t pvti_pa, struct pvclock_vcpu_time_info *pvti)
+static void guest_main(gpa_t pvti_pa, struct pvclock_vcpu_time_info *pvti)
{
int i;
@@ -136,7 +136,7 @@ int main(void)
{
struct kvm_vcpu *vcpu;
gva_t pvti_gva;
- vm_paddr_t pvti_gpa;
+ gpa_t pvti_gpa;
struct kvm_vm *vm;
int flags;
diff --git a/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c b/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c
index 4ffa11a6bcd8..f13dee317383 100644
--- a/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_nested_la57_state_test.c
@@ -30,7 +30,7 @@ static void l1_guest_code(struct vmx_pages *vmx_pages)
#define L2_GUEST_STACK_SIZE 64
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
u64 guest_cr4;
- vm_paddr_t pml5_pa, pml4_pa;
+ gpa_t pml5_pa, pml4_pa;
u64 *pml5;
u64 exit_reason;
--
2.54.0.rc1.555.g9c883467ad-goog
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v3 03/19] KVM: selftests: Use gpa_t for GPAs in Hyper-V selftests
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 01/19] KVM: selftests: Use gva_t instead of vm_vaddr_t Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 02/19] KVM: selftests: Use gpa_t instead of vm_paddr_t Sean Christopherson
@ 2026-04-20 21:19 ` Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 05/19] KVM: selftests: Use s64 instead of int64_t Sean Christopherson
` (13 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:19 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
From: David Matlack <dmatlack@google.com>
Fix various Hyper-V selftests to use gpa_t for variables that contain
guest physical addresses, rather than gva_t. In practice, the bugs are
benign, as both gva_t and gpa_t are u64 typedefs, i.e. gpa_t and gva_t are
interchangeable from a functional perspective; the code is just confusing.
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
[sean: call out that both are u64 typedefs]
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/x86/hyperv_evmcs.c | 2 +-
tools/testing/selftests/kvm/x86/hyperv_features.c | 2 +-
tools/testing/selftests/kvm/x86/hyperv_ipi.c | 6 +++---
tools/testing/selftests/kvm/x86/hyperv_svm_test.c | 2 +-
4 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
index c2de5ac799ee..2d1733f9303a 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
@@ -76,7 +76,7 @@ void l2_guest_code(void)
}
void guest_code(struct vmx_pages *vmx_pages, struct hyperv_test_pages *hv_pages,
- gva_t hv_hcall_page_gpa)
+ gpa_t hv_hcall_page_gpa)
{
#define L2_GUEST_STACK_SIZE 64
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
diff --git a/tools/testing/selftests/kvm/x86/hyperv_features.c b/tools/testing/selftests/kvm/x86/hyperv_features.c
index 1059fcc460e3..0360fa5915c0 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_features.c
@@ -82,7 +82,7 @@ static void guest_msr(struct msr_data *msr)
GUEST_DONE();
}
-static void guest_hcall(gva_t pgs_gpa, struct hcall_data *hcall)
+static void guest_hcall(gpa_t pgs_gpa, struct hcall_data *hcall)
{
u64 res, input, output;
uint8_t vector;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_ipi.c b/tools/testing/selftests/kvm/x86/hyperv_ipi.c
index 7d648219833c..5369867efac3 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_ipi.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_ipi.c
@@ -45,13 +45,13 @@ struct hv_send_ipi_ex {
struct hv_vpset vp_set;
};
-static inline void hv_init(gva_t pgs_gpa)
+static inline void hv_init(gpa_t pgs_gpa)
{
wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
}
-static void receiver_code(void *hcall_page, gva_t pgs_gpa)
+static void receiver_code(void *hcall_page, gpa_t pgs_gpa)
{
u32 vcpu_id;
@@ -85,7 +85,7 @@ static inline void nop_loop(void)
asm volatile("nop");
}
-static void sender_guest_code(void *hcall_page, gva_t pgs_gpa)
+static void sender_guest_code(void *hcall_page, gpa_t pgs_gpa)
{
struct hv_send_ipi *ipi = (struct hv_send_ipi *)hcall_page;
struct hv_send_ipi_ex *ipi_ex = (struct hv_send_ipi_ex *)hcall_page;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
index e0caf5ea14bd..54a1a6dad4d5 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
@@ -67,7 +67,7 @@ void l2_guest_code(void)
static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm,
struct hyperv_test_pages *hv_pages,
- gva_t pgs_gpa)
+ gpa_t pgs_gpa)
{
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
struct vmcb *vmcb = svm->vmcb;
--
2.54.0.rc1.555.g9c883467ad-goog
* [PATCH v3 05/19] KVM: selftests: Use s64 instead of int64_t
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (2 preceding siblings ...)
2026-04-20 21:19 ` [PATCH v3 03/19] KVM: selftests: Use gpa_t for GPAs in Hyper-V selftests Sean Christopherson
@ 2026-04-20 21:19 ` Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 07/19] KVM: selftests: Use s32 instead of int32_t Sean Christopherson
` (12 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:19 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
From: David Matlack <dmatlack@google.com>
Use s64 instead of int64_t to make the KVM selftests code more concise
and more similar to the kernel (since selftests are primarily developed
by kernel developers).
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/int64_t/s64/g'
Whitespace was then manually adjusted to make checkpatch.pl happy.
No functional change intended.
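The sed expression from the commit message can be sanity-checked on a single
sample declaration (illustrative only; the real run was tree-wide via
git ls-files):

```shell
# Apply the same substitution used to generate the commit to one line.
echo 'static int64_t adjust;' | sed 's/int64_t/s64/g'
# prints: static s64 adjust;
```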
Signed-off-by: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/arm64/sea_to_user.c | 2 +-
tools/testing/selftests/kvm/arm64/set_id_regs.c | 2 +-
tools/testing/selftests/kvm/guest_print_test.c | 2 +-
tools/testing/selftests/kvm/include/test_util.h | 4 ++--
tools/testing/selftests/kvm/lib/test_util.c | 16 ++++++++--------
.../testing/selftests/kvm/lib/userfaultfd_util.c | 2 +-
tools/testing/selftests/kvm/lib/x86/processor.c | 2 +-
tools/testing/selftests/kvm/memslot_perf_test.c | 2 +-
tools/testing/selftests/kvm/steal_time.c | 4 ++--
tools/testing/selftests/kvm/x86/kvm_clock_test.c | 2 +-
.../selftests/kvm/x86/nested_tsc_adjust_test.c | 6 +++---
11 files changed, 22 insertions(+), 22 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/sea_to_user.c b/tools/testing/selftests/kvm/arm64/sea_to_user.c
index 61954f2221e4..7285eade4acf 100644
--- a/tools/testing/selftests/kvm/arm64/sea_to_user.c
+++ b/tools/testing/selftests/kvm/arm64/sea_to_user.c
@@ -59,7 +59,7 @@ static bool far_invalid;
static u64 translate_to_host_paddr(unsigned long vaddr)
{
u64 pinfo;
- int64_t offset = vaddr / getpagesize() * sizeof(pinfo);
+ s64 offset = vaddr / getpagesize() * sizeof(pinfo);
int fd;
u64 page_addr;
u64 paddr;
diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index 9b9c04c963a1..4402f317f7d9 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -36,7 +36,7 @@ struct reg_ftr_bits {
* For FTR_EXACT, safe_val is used as the exact safe value.
* For FTR_LOWER_SAFE, safe_val is used as the minimal safe value.
*/
- int64_t safe_val;
+ s64 safe_val;
/* Allowed to be changed by the host after run */
bool mutable;
diff --git a/tools/testing/selftests/kvm/guest_print_test.c b/tools/testing/selftests/kvm/guest_print_test.c
index 894ef7d2481e..b059abcf1a5b 100644
--- a/tools/testing/selftests/kvm/guest_print_test.c
+++ b/tools/testing/selftests/kvm/guest_print_test.c
@@ -25,7 +25,7 @@ static struct guest_vals vals;
/* GUEST_PRINTF()/GUEST_ASSERT_FMT() does not support float or double. */
#define TYPE_LIST \
-TYPE(test_type_i64, I64, "%ld", int64_t) \
+TYPE(test_type_i64, I64, "%ld", s64) \
TYPE(test_type_u64, U64u, "%lu", u64) \
TYPE(test_type_x64, U64x, "0x%lx", u64) \
TYPE(test_type_X64, U64X, "0x%lX", u64) \
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 62fe83763021..d7489db738bf 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -101,8 +101,8 @@ do { \
size_t parse_size(const char *size);
-int64_t timespec_to_ns(struct timespec ts);
-struct timespec timespec_add_ns(struct timespec ts, int64_t ns);
+s64 timespec_to_ns(struct timespec ts);
+struct timespec timespec_add_ns(struct timespec ts, s64 ns);
struct timespec timespec_add(struct timespec ts1, struct timespec ts2);
struct timespec timespec_sub(struct timespec ts1, struct timespec ts2);
struct timespec timespec_elapsed(struct timespec start);
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index d863705f6795..f5b460c445be 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -83,12 +83,12 @@ size_t parse_size(const char *size)
return base << shift;
}
-int64_t timespec_to_ns(struct timespec ts)
+s64 timespec_to_ns(struct timespec ts)
{
- return (int64_t)ts.tv_nsec + 1000000000LL * (int64_t)ts.tv_sec;
+ return (s64)ts.tv_nsec + 1000000000LL * (s64)ts.tv_sec;
}
-struct timespec timespec_add_ns(struct timespec ts, int64_t ns)
+struct timespec timespec_add_ns(struct timespec ts, s64 ns)
{
struct timespec res;
@@ -101,15 +101,15 @@ struct timespec timespec_add_ns(struct timespec ts, int64_t ns)
struct timespec timespec_add(struct timespec ts1, struct timespec ts2)
{
- int64_t ns1 = timespec_to_ns(ts1);
- int64_t ns2 = timespec_to_ns(ts2);
+ s64 ns1 = timespec_to_ns(ts1);
+ s64 ns2 = timespec_to_ns(ts2);
return timespec_add_ns((struct timespec){0}, ns1 + ns2);
}
struct timespec timespec_sub(struct timespec ts1, struct timespec ts2)
{
- int64_t ns1 = timespec_to_ns(ts1);
- int64_t ns2 = timespec_to_ns(ts2);
+ s64 ns1 = timespec_to_ns(ts1);
+ s64 ns2 = timespec_to_ns(ts2);
return timespec_add_ns((struct timespec){0}, ns1 - ns2);
}
@@ -123,7 +123,7 @@ struct timespec timespec_elapsed(struct timespec start)
struct timespec timespec_div(struct timespec ts, int divisor)
{
- int64_t ns = timespec_to_ns(ts) / divisor;
+ s64 ns = timespec_to_ns(ts) / divisor;
return timespec_add_ns((struct timespec){0}, ns);
}
diff --git a/tools/testing/selftests/kvm/lib/userfaultfd_util.c b/tools/testing/selftests/kvm/lib/userfaultfd_util.c
index 2f069ce6a446..ef8d76f71f83 100644
--- a/tools/testing/selftests/kvm/lib/userfaultfd_util.c
+++ b/tools/testing/selftests/kvm/lib/userfaultfd_util.c
@@ -27,7 +27,7 @@ static void *uffd_handler_thread_fn(void *arg)
{
struct uffd_reader_args *reader_args = (struct uffd_reader_args *)arg;
int uffd = reader_args->uffd;
- int64_t pages = 0;
+ s64 pages = 0;
struct timespec start;
struct timespec ts_diff;
struct epoll_event evt;
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 81f5dea51fc3..802543aa588c 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -379,7 +379,7 @@ static u64 *__vm_get_page_table_entry(struct kvm_vm *vm,
* Check that the vaddr is a sign-extended va_width value.
*/
TEST_ASSERT(vaddr ==
- (((int64_t)vaddr << (64 - va_width) >> (64 - va_width))),
+ (((s64)vaddr << (64 - va_width) >> (64 - va_width))),
"Canonical check failed. The virtual address is invalid.");
for (current_level = mmu->pgtable_levels;
diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index d5161e8aee14..bf62b522d32e 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -1040,7 +1040,7 @@ static bool parse_args(int argc, char *argv[],
struct test_result {
struct timespec slot_runtime, guest_runtime, iter_runtime;
- int64_t slottimens, runtimens;
+ s64 slottimens, runtimens;
u64 nloops;
};
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index 6379f47af422..d0a41a2bcccb 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -123,7 +123,7 @@ struct st_time {
u64 st_time;
};
-static int64_t smccc(uint32_t func, u64 arg)
+static s64 smccc(uint32_t func, u64 arg)
{
struct arm_smccc_res res;
@@ -140,7 +140,7 @@ static void check_status(struct st_time *st)
static void guest_code(int cpu)
{
struct st_time *st;
- int64_t status;
+ s64 status;
status = smccc(SMCCC_ARCH_FEATURES, PV_TIME_FEATURES);
GUEST_ASSERT_EQ(status, 0);
diff --git a/tools/testing/selftests/kvm/x86/kvm_clock_test.c b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
index 5df0cceec03b..2b8a3feee1f8 100644
--- a/tools/testing/selftests/kvm/x86/kvm_clock_test.c
+++ b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
@@ -18,7 +18,7 @@
struct test_case {
u64 kvmclock_base;
- int64_t realtime_offset;
+ s64 realtime_offset;
};
static struct test_case test_cases[] = {
diff --git a/tools/testing/selftests/kvm/x86/nested_tsc_adjust_test.c b/tools/testing/selftests/kvm/x86/nested_tsc_adjust_test.c
index db0d44b8fbd6..a18b0cfd42e2 100644
--- a/tools/testing/selftests/kvm/x86/nested_tsc_adjust_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_tsc_adjust_test.c
@@ -53,9 +53,9 @@ enum {
/* The virtual machine object. */
static struct kvm_vm *vm;
-static void check_ia32_tsc_adjust(int64_t max)
+static void check_ia32_tsc_adjust(s64 max)
{
- int64_t adjust;
+ s64 adjust;
adjust = rdmsr(MSR_IA32_TSC_ADJUST);
GUEST_SYNC(adjust);
@@ -117,7 +117,7 @@ static void l1_guest_code(void *data)
GUEST_DONE();
}
-static void report(int64_t val)
+static void report(s64 val)
{
pr_info("IA32_TSC_ADJUST is %ld (%lld * TSC_ADJUST_VALUE + %lld).\n",
val, val / TSC_ADJUST_VALUE, val % TSC_ADJUST_VALUE);
--
2.54.0.rc1.555.g9c883467ad-goog
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v3 07/19] KVM: selftests: Use s32 instead of int32_t
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (3 preceding siblings ...)
2026-04-20 21:19 ` [PATCH v3 05/19] KVM: selftests: Use s64 instead of int64_t Sean Christopherson
@ 2026-04-20 21:19 ` Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 08/19] KVM: selftests: Use u16 instead of uint16_t Sean Christopherson
` (11 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:19 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
From: David Matlack <dmatlack@google.com>
Use s32 instead of int32_t to make the KVM selftests code more concise
and more similar to the kernel (since selftests are primarily developed
by kernel developers).
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/int32_t/s32/g'
The whitespace was then adjusted by hand to make checkpatch.pl happy.
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
.../kvm/arm64/arch_timer_edge_cases.c | 24 +++++++++----------
.../selftests/kvm/include/arm64/arch_timer.h | 4 ++--
2 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c b/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
index f8b183f13864..f7625eb711d6 100644
--- a/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
+++ b/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
@@ -25,8 +25,8 @@
/* Depends on counter width. */
static u64 CVAL_MAX;
/* tval is a signed 32-bit int. */
-static const int32_t TVAL_MAX = INT32_MAX;
-static const int32_t TVAL_MIN = INT32_MIN;
+static const s32 TVAL_MAX = INT32_MAX;
+static const s32 TVAL_MIN = INT32_MIN;
/* After how much time we say there is no IRQ. */
static const u32 TIMEOUT_NO_IRQ_US = 50000;
@@ -355,7 +355,7 @@ static void test_timer_cval(enum arch_timer timer, u64 cval,
test_timer_xval(timer, cval, TIMER_CVAL, wm, reset_state, reset_cnt);
}
-static void test_timer_tval(enum arch_timer timer, int32_t tval,
+static void test_timer_tval(enum arch_timer timer, s32 tval,
irq_wait_method_t wm, bool reset_state,
u64 reset_cnt)
{
@@ -385,10 +385,10 @@ static void test_cval_no_irq(enum arch_timer timer, u64 cval,
test_xval_check_no_irq(timer, cval, usec, TIMER_CVAL, wm);
}
-static void test_tval_no_irq(enum arch_timer timer, int32_t tval, u64 usec,
+static void test_tval_no_irq(enum arch_timer timer, s32 tval, u64 usec,
sleep_method_t wm)
{
- /* tval will be cast to an int32_t in test_xval_check_no_irq */
+ /* tval will be cast to an s32 in test_xval_check_no_irq */
test_xval_check_no_irq(timer, (u64)tval, usec, TIMER_TVAL, wm);
}
@@ -463,7 +463,7 @@ static void test_timers_fired_multiple_times(enum arch_timer timer)
* timeout for the wait: we use the wfi instruction.
*/
static void test_reprogramming_timer(enum arch_timer timer, irq_wait_method_t wm,
- int32_t delta_1_ms, int32_t delta_2_ms)
+ s32 delta_1_ms, s32 delta_2_ms)
{
local_irq_disable();
reset_timer_state(timer, DEF_CNT);
@@ -504,7 +504,7 @@ static void test_reprogram_timers(enum arch_timer timer)
static void test_basic_functionality(enum arch_timer timer)
{
- int32_t tval = (int32_t) msec_to_cycles(test_args.wait_ms);
+ s32 tval = (s32)msec_to_cycles(test_args.wait_ms);
u64 cval = DEF_CNT + msec_to_cycles(test_args.wait_ms);
int i;
@@ -685,7 +685,7 @@ static void test_set_cnt_after_xval_no_irq(enum arch_timer timer,
}
static void test_set_cnt_after_tval(enum arch_timer timer, u64 cnt_1,
- int32_t tval, u64 cnt_2,
+ s32 tval, u64 cnt_2,
irq_wait_method_t wm)
{
test_set_cnt_after_xval(timer, cnt_1, tval, cnt_2, wm, TIMER_TVAL);
@@ -699,7 +699,7 @@ static void test_set_cnt_after_cval(enum arch_timer timer, u64 cnt_1,
}
static void test_set_cnt_after_tval_no_irq(enum arch_timer timer,
- u64 cnt_1, int32_t tval,
+ u64 cnt_1, s32 tval,
u64 cnt_2, sleep_method_t wm)
{
test_set_cnt_after_xval_no_irq(timer, cnt_1, tval, cnt_2, wm,
@@ -718,7 +718,7 @@ static void test_set_cnt_after_cval_no_irq(enum arch_timer timer,
static void test_move_counters_ahead_of_timers(enum arch_timer timer)
{
int i;
- int32_t tval;
+ s32 tval;
for (i = 0; i < ARRAY_SIZE(irq_wait_method); i++) {
irq_wait_method_t wm = irq_wait_method[i];
@@ -753,7 +753,7 @@ static void test_move_counters_behind_timers(enum arch_timer timer)
static void test_timers_in_the_past(enum arch_timer timer)
{
- int32_t tval = -1 * (int32_t) msec_to_cycles(test_args.wait_ms);
+ s32 tval = -1 * (s32)msec_to_cycles(test_args.wait_ms);
u64 cval;
int i;
@@ -789,7 +789,7 @@ static void test_timers_in_the_past(enum arch_timer timer)
static void test_long_timer_delays(enum arch_timer timer)
{
- int32_t tval = (int32_t) msec_to_cycles(test_args.long_wait_ms);
+ s32 tval = (s32)msec_to_cycles(test_args.long_wait_ms);
u64 cval = DEF_CNT + msec_to_cycles(test_args.long_wait_ms);
int i;
diff --git a/tools/testing/selftests/kvm/include/arm64/arch_timer.h b/tools/testing/selftests/kvm/include/arm64/arch_timer.h
index 4fe0e0d07584..a5836d4ab7ee 100644
--- a/tools/testing/selftests/kvm/include/arm64/arch_timer.h
+++ b/tools/testing/selftests/kvm/include/arm64/arch_timer.h
@@ -79,7 +79,7 @@ static inline u64 timer_get_cval(enum arch_timer timer)
return 0;
}
-static inline void timer_set_tval(enum arch_timer timer, int32_t tval)
+static inline void timer_set_tval(enum arch_timer timer, s32 tval)
{
switch (timer) {
case VIRTUAL:
@@ -95,7 +95,7 @@ static inline void timer_set_tval(enum arch_timer timer, int32_t tval)
isb();
}
-static inline int32_t timer_get_tval(enum arch_timer timer)
+static inline s32 timer_get_tval(enum arch_timer timer)
{
isb();
switch (timer) {
--
2.54.0.rc1.555.g9c883467ad-goog
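One reason the series' patch ordering matters for the sed-driven rename: int32_t is a substring of uint32_t, so the unsigned rename (done in an earlier patch) must run before this patch's 's/int32_t/s32/g', or uint32_t would be mangled into "us32". A throwaway-file sketch of the two passes (the temp file stands in for the actual tree-wide `git ls-files | xargs sed` invocation):

```shell
#!/bin/sh
set -e
# Unsigned rename first, signed rename second -- the reverse order
# would turn "uint32_t" into "us32".  Demonstrated on a scratch file.
tmp=$(mktemp)
printf 'uint32_t mask;\nint32_t delta;\n' > "$tmp"
sed -i -e 's/uint32_t/u32/g' -e 's/int32_t/s32/g' "$tmp"
cat "$tmp"
rm -f "$tmp"
```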
* [PATCH v3 08/19] KVM: selftests: Use u16 instead of uint16_t
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (4 preceding siblings ...)
2026-04-20 21:19 ` [PATCH v3 07/19] KVM: selftests: Use s32 instead of int32_t Sean Christopherson
@ 2026-04-20 21:19 ` Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 09/19] KVM: selftests: Use s16 instead of int16_t Sean Christopherson
` (10 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:19 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
From: David Matlack <dmatlack@google.com>
Use u16 instead of uint16_t to make the KVM selftests code more concise
and more similar to the kernel (since selftests are primarily developed
by kernel developers).
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/uint16_t/u16/g'
The whitespace was then adjusted by hand to make checkpatch.pl happy.
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
.../selftests/kvm/arm64/page_fault_test.c | 2 +-
.../testing/selftests/kvm/include/kvm_util.h | 2 +-
.../testing/selftests/kvm/include/x86/evmcs.h | 2 +-
.../selftests/kvm/include/x86/processor.h | 58 +++++++++----------
.../testing/selftests/kvm/lib/guest_sprintf.c | 2 +-
.../testing/selftests/kvm/lib/x86/processor.c | 8 +--
tools/testing/selftests/kvm/lib/x86/ucall.c | 2 +-
tools/testing/selftests/kvm/lib/x86/vmx.c | 2 +-
tools/testing/selftests/kvm/s390/memop.c | 2 +-
.../testing/selftests/kvm/x86/fastops_test.c | 2 +-
.../selftests/kvm/x86/sync_regs_test.c | 2 +-
11 files changed, 42 insertions(+), 42 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/page_fault_test.c b/tools/testing/selftests/kvm/arm64/page_fault_test.c
index cb52ac8aa0a5..b92a9614d7d2 100644
--- a/tools/testing/selftests/kvm/arm64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/arm64/page_fault_test.c
@@ -148,7 +148,7 @@ static void guest_at(void)
*/
static void guest_dc_zva(void)
{
- uint16_t val;
+ u16 val;
asm volatile("dc zva, %0" :: "r" (guest_test_memory));
dsb(ish);
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index bdb91f627433..34c8a7d94997 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -216,7 +216,7 @@ struct vm_shape {
u32 type;
uint8_t mode;
uint8_t pad0;
- uint16_t pad1;
+ u16 pad1;
};
kvm_static_assert(sizeof(struct vm_shape) == sizeof(u64));
diff --git a/tools/testing/selftests/kvm/include/x86/evmcs.h b/tools/testing/selftests/kvm/include/x86/evmcs.h
index 3b0f96b881f9..be79bda024bf 100644
--- a/tools/testing/selftests/kvm/include/x86/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86/evmcs.h
@@ -10,7 +10,7 @@
#include "hyperv.h"
#include "vmx.h"
-#define u16 uint16_t
+#define u16 u16
#define u32 u32
#define u64 u64
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 3898665ad2e9..8700d37a5727 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -399,8 +399,8 @@ struct gpr64_regs {
};
struct desc64 {
- uint16_t limit0;
- uint16_t base0;
+ u16 limit0;
+ u16 base0;
unsigned base1:8, type:4, s:1, dpl:2, p:1;
unsigned limit1:4, avl:1, l:1, db:1, g:1, base2:8;
u32 base3;
@@ -408,7 +408,7 @@ struct desc64 {
} __attribute__((packed));
struct desc_ptr {
- uint16_t size;
+ u16 size;
u64 address;
} __attribute__((packed));
@@ -476,9 +476,9 @@ static inline void wrmsr(u32 msr, u64 value)
}
-static inline uint16_t inw(uint16_t port)
+static inline u16 inw(u16 port)
{
- uint16_t tmp;
+ u16 tmp;
__asm__ __volatile__("in %%dx, %%ax"
: /* output */ "=a" (tmp)
@@ -487,63 +487,63 @@ static inline uint16_t inw(uint16_t port)
return tmp;
}
-static inline uint16_t get_es(void)
+static inline u16 get_es(void)
{
- uint16_t es;
+ u16 es;
__asm__ __volatile__("mov %%es, %[es]"
: /* output */ [es]"=rm"(es));
return es;
}
-static inline uint16_t get_cs(void)
+static inline u16 get_cs(void)
{
- uint16_t cs;
+ u16 cs;
__asm__ __volatile__("mov %%cs, %[cs]"
: /* output */ [cs]"=rm"(cs));
return cs;
}
-static inline uint16_t get_ss(void)
+static inline u16 get_ss(void)
{
- uint16_t ss;
+ u16 ss;
__asm__ __volatile__("mov %%ss, %[ss]"
: /* output */ [ss]"=rm"(ss));
return ss;
}
-static inline uint16_t get_ds(void)
+static inline u16 get_ds(void)
{
- uint16_t ds;
+ u16 ds;
__asm__ __volatile__("mov %%ds, %[ds]"
: /* output */ [ds]"=rm"(ds));
return ds;
}
-static inline uint16_t get_fs(void)
+static inline u16 get_fs(void)
{
- uint16_t fs;
+ u16 fs;
__asm__ __volatile__("mov %%fs, %[fs]"
: /* output */ [fs]"=rm"(fs));
return fs;
}
-static inline uint16_t get_gs(void)
+static inline u16 get_gs(void)
{
- uint16_t gs;
+ u16 gs;
__asm__ __volatile__("mov %%gs, %[gs]"
: /* output */ [gs]"=rm"(gs));
return gs;
}
-static inline uint16_t get_tr(void)
+static inline u16 get_tr(void)
{
- uint16_t tr;
+ u16 tr;
__asm__ __volatile__("str %[tr]"
: /* output */ [tr]"=rm"(tr));
@@ -651,7 +651,7 @@ static inline struct desc_ptr get_idt(void)
return idt;
}
-static inline void outl(uint16_t port, u32 value)
+static inline void outl(u16 port, u32 value)
{
__asm__ __volatile__("outl %%eax, %%dx" : : "d"(port), "a"(value));
}
@@ -1194,15 +1194,15 @@ struct ex_regs {
};
struct idt_entry {
- uint16_t offset0;
- uint16_t selector;
- uint16_t ist : 3;
- uint16_t : 5;
- uint16_t type : 4;
- uint16_t : 1;
- uint16_t dpl : 2;
- uint16_t p : 1;
- uint16_t offset1;
+ u16 offset0;
+ u16 selector;
+ u16 ist : 3;
+ u16 : 5;
+ u16 type : 4;
+ u16 : 1;
+ u16 dpl : 2;
+ u16 p : 1;
+ u16 offset1;
u32 offset2; u32 reserved;
};
diff --git a/tools/testing/selftests/kvm/lib/guest_sprintf.c b/tools/testing/selftests/kvm/lib/guest_sprintf.c
index 551ad6c658aa..8d60aa81e27e 100644
--- a/tools/testing/selftests/kvm/lib/guest_sprintf.c
+++ b/tools/testing/selftests/kvm/lib/guest_sprintf.c
@@ -286,7 +286,7 @@ int guest_vsnprintf(char *buf, int n, const char *fmt, va_list args)
if (qualifier == 'l')
num = va_arg(args, u64);
else if (qualifier == 'h') {
- num = (uint16_t)va_arg(args, int);
+ num = (u16)va_arg(args, int);
if (flags & SIGN)
num = (int16_t)num;
} else if (flags & SIGN)
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index dc31236b004b..8e6393384fa4 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -424,7 +424,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
"addr w exec dirty\n",
indent, "");
pml4e_start = (u64 *)addr_gpa2hva(vm, mmu->pgd);
- for (uint16_t n1 = 0; n1 <= 0x1ffu; n1++) {
+ for (u16 n1 = 0; n1 <= 0x1ffu; n1++) {
pml4e = &pml4e_start[n1];
if (!is_present_pte(mmu, pml4e))
continue;
@@ -436,7 +436,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
is_writable_pte(mmu, pml4e), is_nx_pte(mmu, pml4e));
pdpe_start = addr_gpa2hva(vm, *pml4e & PHYSICAL_PAGE_MASK);
- for (uint16_t n2 = 0; n2 <= 0x1ffu; n2++) {
+ for (u16 n2 = 0; n2 <= 0x1ffu; n2++) {
pdpe = &pdpe_start[n2];
if (!is_present_pte(mmu, pdpe))
continue;
@@ -449,7 +449,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
is_nx_pte(mmu, pdpe));
pde_start = addr_gpa2hva(vm, *pdpe & PHYSICAL_PAGE_MASK);
- for (uint16_t n3 = 0; n3 <= 0x1ffu; n3++) {
+ for (u16 n3 = 0; n3 <= 0x1ffu; n3++) {
pde = &pde_start[n3];
if (!is_present_pte(mmu, pde))
continue;
@@ -461,7 +461,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
is_nx_pte(mmu, pde));
pte_start = addr_gpa2hva(vm, *pde & PHYSICAL_PAGE_MASK);
- for (uint16_t n4 = 0; n4 <= 0x1ffu; n4++) {
+ for (u16 n4 = 0; n4 <= 0x1ffu; n4++) {
pte = &pte_start[n4];
if (!is_present_pte(mmu, pte))
continue;
diff --git a/tools/testing/selftests/kvm/lib/x86/ucall.c b/tools/testing/selftests/kvm/lib/x86/ucall.c
index 1af2a6880cdf..e7dd5791959b 100644
--- a/tools/testing/selftests/kvm/lib/x86/ucall.c
+++ b/tools/testing/selftests/kvm/lib/x86/ucall.c
@@ -6,7 +6,7 @@
*/
#include "kvm_util.h"
-#define UCALL_PIO_PORT ((uint16_t)0x1000)
+#define UCALL_PIO_PORT ((u16)0x1000)
void ucall_arch_do_ucall(gva_t uc)
{
diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index 73b7faa7f357..b2f83c3f7f16 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -27,7 +27,7 @@ struct hv_vp_assist_page *current_vp_assist;
int vcpu_enable_evmcs(struct kvm_vcpu *vcpu)
{
- uint16_t evmcs_ver;
+ u16 evmcs_ver;
vcpu_enable_cap(vcpu, KVM_CAP_HYPERV_ENLIGHTENED_VMCS,
(unsigned long)&evmcs_ver);
diff --git a/tools/testing/selftests/kvm/s390/memop.c b/tools/testing/selftests/kvm/s390/memop.c
index 1cd7b8f81fff..aa92fdf0664d 100644
--- a/tools/testing/selftests/kvm/s390/memop.c
+++ b/tools/testing/selftests/kvm/s390/memop.c
@@ -485,7 +485,7 @@ static __uint128_t cut_to_size(int size, __uint128_t val)
case 1:
return (uint8_t)val;
case 2:
- return (uint16_t)val;
+ return (u16)val;
case 4:
return (u32)val;
case 8:
diff --git a/tools/testing/selftests/kvm/x86/fastops_test.c b/tools/testing/selftests/kvm/x86/fastops_test.c
index a634bc281546..721f56d38f49 100644
--- a/tools/testing/selftests/kvm/x86/fastops_test.c
+++ b/tools/testing/selftests/kvm/x86/fastops_test.c
@@ -186,7 +186,7 @@ if (sizeof(type_t) != 1) { \
static void guest_code(void)
{
guest_test_fastops(uint8_t, "b");
- guest_test_fastops(uint16_t, "w");
+ guest_test_fastops(u16, "w");
guest_test_fastops(u32, "l");
guest_test_fastops(u64, "q");
diff --git a/tools/testing/selftests/kvm/x86/sync_regs_test.c b/tools/testing/selftests/kvm/x86/sync_regs_test.c
index 8fa3948b0170..e0c52321f87c 100644
--- a/tools/testing/selftests/kvm/x86/sync_regs_test.c
+++ b/tools/testing/selftests/kvm/x86/sync_regs_test.c
@@ -20,7 +20,7 @@
#include "kvm_util.h"
#include "processor.h"
-#define UCALL_PIO_PORT ((uint16_t)0x1000)
+#define UCALL_PIO_PORT ((u16)0x1000)
struct ucall uc_none = {
.cmd = UCALL_NONE,
--
2.54.0.rc1.555.g9c883467ad-goog
* [PATCH v3 09/19] KVM: selftests: Use s16 instead of int16_t
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (5 preceding siblings ...)
2026-04-20 21:19 ` [PATCH v3 08/19] KVM: selftests: Use u16 instead of uint16_t Sean Christopherson
@ 2026-04-20 21:19 ` Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 10/19] KVM: selftests: Use u8 instead of uint8_t Sean Christopherson
` (9 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:19 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
From: David Matlack <dmatlack@google.com>
Use s16 instead of int16_t to make the KVM selftests code more concise
and more similar to the kernel (since selftests are primarily developed
by kernel developers).
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/int16_t/s16/g'
The whitespace was then adjusted by hand to make checkpatch.pl happy.
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/lib/guest_sprintf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/lib/guest_sprintf.c b/tools/testing/selftests/kvm/lib/guest_sprintf.c
index 8d60aa81e27e..2a3ab9c168f0 100644
--- a/tools/testing/selftests/kvm/lib/guest_sprintf.c
+++ b/tools/testing/selftests/kvm/lib/guest_sprintf.c
@@ -288,7 +288,7 @@ int guest_vsnprintf(char *buf, int n, const char *fmt, va_list args)
else if (qualifier == 'h') {
num = (u16)va_arg(args, int);
if (flags & SIGN)
- num = (int16_t)num;
+ num = (s16)num;
} else if (flags & SIGN)
num = va_arg(args, int);
else
--
2.54.0.rc1.555.g9c883467ad-goog
* [PATCH v3 10/19] KVM: selftests: Use u8 instead of uint8_t
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (6 preceding siblings ...)
2026-04-20 21:19 ` [PATCH v3 09/19] KVM: selftests: Use s16 instead of int16_t Sean Christopherson
@ 2026-04-20 21:19 ` Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 11/19] KVM: selftests: Drop "vaddr_" from APIs that allocate memory for a given VM Sean Christopherson
` (8 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:19 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
From: David Matlack <dmatlack@google.com>
Use u8 instead of uint8_t to make the KVM selftests code more concise
and more similar to the kernel (since selftests are primarily developed
by kernel developers).
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/uint8_t/u8/g'
The whitespace was then adjusted by hand to make checkpatch.pl happy.
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
.../selftests/kvm/arm64/debug-exceptions.c | 18 +++---
.../testing/selftests/kvm/arm64/set_id_regs.c | 6 +-
.../selftests/kvm/arm64/vpmu_counter_access.c | 2 +-
.../testing/selftests/kvm/coalesced_io_test.c | 2 +-
tools/testing/selftests/kvm/get-reg-list.c | 2 +-
.../testing/selftests/kvm/guest_memfd_test.c | 4 +-
.../testing/selftests/kvm/include/kvm_util.h | 14 ++---
.../testing/selftests/kvm/include/test_util.h | 2 +-
.../testing/selftests/kvm/include/x86/apic.h | 6 +-
.../selftests/kvm/include/x86/hyperv.h | 10 +--
.../selftests/kvm/include/x86/processor.h | 27 ++++----
tools/testing/selftests/kvm/include/x86/sev.h | 6 +-
tools/testing/selftests/kvm/include/x86/vmx.h | 12 ++--
.../selftests/kvm/lib/arm64/processor.c | 8 +--
.../testing/selftests/kvm/lib/guest_sprintf.c | 2 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 2 +-
.../selftests/kvm/lib/loongarch/processor.c | 6 +-
.../selftests/kvm/lib/riscv/processor.c | 6 +-
.../selftests/kvm/lib/s390/processor.c | 8 +--
tools/testing/selftests/kvm/lib/sparsebit.c | 2 +-
.../testing/selftests/kvm/lib/x86/processor.c | 16 ++---
tools/testing/selftests/kvm/lib/x86/sev.c | 6 +-
.../testing/selftests/kvm/memslot_perf_test.c | 2 +-
tools/testing/selftests/kvm/mmu_stress_test.c | 2 +-
tools/testing/selftests/kvm/s390/memop.c | 28 ++++-----
tools/testing/selftests/kvm/s390/resets.c | 2 +-
.../selftests/kvm/s390/shared_zeropage_test.c | 2 +-
tools/testing/selftests/kvm/s390/tprot.c | 12 ++--
.../selftests/kvm/set_memory_region_test.c | 2 +-
tools/testing/selftests/kvm/steal_time.c | 4 +-
.../selftests/kvm/x86/aperfmperf_test.c | 2 +-
.../kvm/x86/evmcs_smm_controls_test.c | 2 +-
.../testing/selftests/kvm/x86/fastops_test.c | 8 +--
.../selftests/kvm/x86/fix_hypercall_test.c | 12 ++--
.../selftests/kvm/x86/flds_emulation.h | 2 +-
.../selftests/kvm/x86/hyperv_features.c | 4 +-
tools/testing/selftests/kvm/x86/kvm_pv_test.c | 2 +-
.../selftests/kvm/x86/nested_emulation_test.c | 8 +--
.../selftests/kvm/x86/platform_info_test.c | 2 +-
.../selftests/kvm/x86/pmu_counters_test.c | 61 +++++++++----------
.../selftests/kvm/x86/pmu_event_filter_test.c | 12 ++--
.../kvm/x86/private_mem_conversions_test.c | 24 ++++----
tools/testing/selftests/kvm/x86/smm_test.c | 2 +-
tools/testing/selftests/kvm/x86/state_test.c | 4 +-
.../selftests/kvm/x86/userspace_io_test.c | 4 +-
.../kvm/x86/userspace_msr_exit_test.c | 14 ++---
.../selftests/kvm/x86/vmx_pmu_caps_test.c | 2 +-
.../selftests/kvm/x86/xapic_tpr_test.c | 16 ++---
.../selftests/kvm/x86/xen_shinfo_test.c | 4 +-
49 files changed, 201 insertions(+), 205 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/debug-exceptions.c b/tools/testing/selftests/kvm/arm64/debug-exceptions.c
index 5931915ea00a..3eb4b1b6682d 100644
--- a/tools/testing/selftests/kvm/arm64/debug-exceptions.c
+++ b/tools/testing/selftests/kvm/arm64/debug-exceptions.c
@@ -102,7 +102,7 @@ GEN_DEBUG_WRITE_REG(dbgwvr)
static void reset_debug_state(void)
{
- uint8_t brps, wrps, i;
+ u8 brps, wrps, i;
u64 dfr0;
asm volatile("msr daifset, #8");
@@ -149,7 +149,7 @@ static void enable_monitor_debug_exceptions(void)
isb();
}
-static void install_wp(uint8_t wpn, u64 addr)
+static void install_wp(u8 wpn, u64 addr)
{
u32 wcr;
@@ -162,7 +162,7 @@ static void install_wp(uint8_t wpn, u64 addr)
enable_monitor_debug_exceptions();
}
-static void install_hw_bp(uint8_t bpn, u64 addr)
+static void install_hw_bp(u8 bpn, u64 addr)
{
u32 bcr;
@@ -174,8 +174,7 @@ static void install_hw_bp(uint8_t bpn, u64 addr)
enable_monitor_debug_exceptions();
}
-static void install_wp_ctx(uint8_t addr_wp, uint8_t ctx_bp, u64 addr,
- u64 ctx)
+static void install_wp_ctx(u8 addr_wp, u8 ctx_bp, u64 addr, u64 ctx)
{
u32 wcr;
u64 ctx_bcr;
@@ -196,8 +195,7 @@ static void install_wp_ctx(uint8_t addr_wp, uint8_t ctx_bp, u64 addr,
enable_monitor_debug_exceptions();
}
-void install_hw_bp_ctx(uint8_t addr_bp, uint8_t ctx_bp, u64 addr,
- u64 ctx)
+void install_hw_bp_ctx(u8 addr_bp, u8 ctx_bp, u64 addr, u64 ctx)
{
u32 addr_bcr, ctx_bcr;
@@ -234,7 +232,7 @@ static void install_ss(void)
static volatile char write_data;
-static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
+static void guest_code(u8 bpn, u8 wpn, u8 ctx_bpn)
{
u64 ctx = 0xabcdef; /* a random context number */
@@ -421,7 +419,7 @@ static int debug_version(u64 id_aa64dfr0)
return FIELD_GET(ID_AA64DFR0_EL1_DebugVer, id_aa64dfr0);
}
-static void test_guest_debug_exceptions(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
+static void test_guest_debug_exceptions(u8 bpn, u8 wpn, u8 ctx_bpn)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
@@ -535,7 +533,7 @@ void test_single_step_from_userspace(int test_cnt)
*/
void test_guest_debug_exceptions_all(u64 aa64dfr0)
{
- uint8_t brp_num, wrp_num, ctx_brp_num, normal_brp_num, ctx_brp_base;
+ u8 brp_num, wrp_num, ctx_brp_num, normal_brp_num, ctx_brp_base;
int b, w, c;
/* Number of breakpoints */
diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index 8bf9c717b698..8bb53dd4f321 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -30,7 +30,7 @@ struct reg_ftr_bits {
char *name;
bool sign;
enum ftr_type type;
- uint8_t shift;
+ u8 shift;
u64 mask;
/*
* For FTR_EXACT, safe_val is used as the exact safe value.
@@ -384,7 +384,7 @@ u64 get_invalid_value(const struct reg_ftr_bits *ftr_bits, u64 ftr)
static u64 test_reg_set_success(struct kvm_vcpu *vcpu, u64 reg,
const struct reg_ftr_bits *ftr_bits)
{
- uint8_t shift = ftr_bits->shift;
+ u8 shift = ftr_bits->shift;
u64 mask = ftr_bits->mask;
u64 val, new_val, ftr;
@@ -407,7 +407,7 @@ static u64 test_reg_set_success(struct kvm_vcpu *vcpu, u64 reg,
static void test_reg_set_fail(struct kvm_vcpu *vcpu, u64 reg,
const struct reg_ftr_bits *ftr_bits)
{
- uint8_t shift = ftr_bits->shift;
+ u8 shift = ftr_bits->shift;
u64 mask = ftr_bits->mask;
u64 val, old_val, ftr;
int r;
diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
index 4ceab0760447..22223395969e 100644
--- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
@@ -402,7 +402,7 @@ static void guest_code(u64 expected_pmcr_n)
static void create_vpmu_vm(void *guest_code)
{
struct kvm_vcpu_init init;
- uint8_t pmuver, ec;
+ u8 pmuver, ec;
u64 dfr0, irq = 23;
struct kvm_device_attr irq_attr = {
.group = KVM_ARM_VCPU_PMU_V3_CTRL,
diff --git a/tools/testing/selftests/kvm/coalesced_io_test.c b/tools/testing/selftests/kvm/coalesced_io_test.c
index f5ab412d2042..df4ed5e3877c 100644
--- a/tools/testing/selftests/kvm/coalesced_io_test.c
+++ b/tools/testing/selftests/kvm/coalesced_io_test.c
@@ -23,7 +23,7 @@ struct kvm_coalesced_io {
* amount of #ifdeffery and complexity, without having to sacrifice
* verbose error messages.
*/
- uint8_t pio_port;
+ u8 pio_port;
};
static struct kvm_coalesced_io kvm_builtin_io_ring;
diff --git a/tools/testing/selftests/kvm/get-reg-list.c b/tools/testing/selftests/kvm/get-reg-list.c
index f4644c9d2d3b..216f10644c1a 100644
--- a/tools/testing/selftests/kvm/get-reg-list.c
+++ b/tools/testing/selftests/kvm/get-reg-list.c
@@ -216,7 +216,7 @@ static void run_test(struct vcpu_reg_list *c)
* since we don't know the capabilities of any new registers.
*/
for_each_present_blessed_reg(i) {
- uint8_t addr[2048 / 8];
+ u8 addr[2048 / 8];
struct kvm_one_reg reg = {
.id = reg_list->reg[i],
.addr = (__u64)&addr,
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ad17ea62555f..9cbd3ad7f44a 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -470,7 +470,7 @@ static void test_guest_memfd(unsigned long vm_type)
kvm_vm_free(vm);
}
-static void guest_code(uint8_t *mem, u64 size)
+static void guest_code(u8 *mem, u64 size)
{
size_t i;
@@ -494,7 +494,7 @@ static void test_guest_memfd_guest(void)
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- uint8_t *mem;
+ u8 *mem;
size_t size;
int fd, i;
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 34c8a7d94997..676e3ccb1462 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -214,8 +214,8 @@ enum vm_guest_mode {
struct vm_shape {
u32 type;
- uint8_t mode;
- uint8_t pad0;
+ u8 mode;
+ u8 pad0;
u16 pad1;
};
@@ -475,7 +475,7 @@ void kvm_vm_release(struct kvm_vm *vmp);
void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename);
int kvm_memfd_alloc(size_t size, bool hugepages);
-void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
+void vm_dump(FILE *stream, struct kvm_vm *vm, u8 indent);
static inline void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log)
{
@@ -1155,10 +1155,10 @@ vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
void assert_on_unhandled_exception(struct kvm_vcpu *vcpu);
void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu,
- uint8_t indent);
+ u8 indent);
static inline void vcpu_dump(FILE *stream, struct kvm_vcpu *vcpu,
- uint8_t indent)
+ u8 indent)
{
vcpu_arch_dump(stream, vcpu, indent);
}
@@ -1263,9 +1263,9 @@ static inline gpa_t addr_gva2gpa(struct kvm_vm *vm, gva_t gva)
* Dumps to the FILE stream given by @stream, the contents of all the
* virtual translation tables for the VM given by @vm.
*/
-void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, u8 indent);
-static inline void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+static inline void virt_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
{
virt_arch_dump(stream, vm, indent);
}
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index fb24347c6e6c..d9b433b834f1 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -119,7 +119,7 @@ struct guest_random_state new_guest_random_state(u32 seed);
u32 guest_random_u32(struct guest_random_state *state);
static inline bool __guest_random_bool(struct guest_random_state *state,
- uint8_t percent)
+ u8 percent)
{
return (guest_random_u32(state) % 100) < percent;
}
diff --git a/tools/testing/selftests/kvm/include/x86/apic.h b/tools/testing/selftests/kvm/include/x86/apic.h
index 74eaa3bd335d..31887bdc3d6c 100644
--- a/tools/testing/selftests/kvm/include/x86/apic.h
+++ b/tools/testing/selftests/kvm/include/x86/apic.h
@@ -99,14 +99,14 @@ static inline u64 x2apic_read_reg(unsigned int reg)
return rdmsr(APIC_BASE_MSR + (reg >> 4));
}
-static inline uint8_t x2apic_write_reg_safe(unsigned int reg, u64 value)
+static inline u8 x2apic_write_reg_safe(unsigned int reg, u64 value)
{
return wrmsr_safe(APIC_BASE_MSR + (reg >> 4), value);
}
static inline void x2apic_write_reg(unsigned int reg, u64 value)
{
- uint8_t fault = x2apic_write_reg_safe(reg, value);
+ u8 fault = x2apic_write_reg_safe(reg, value);
__GUEST_ASSERT(!fault, "Unexpected fault 0x%x on WRMSR(%x) = %lx\n",
fault, APIC_BASE_MSR + (reg >> 4), value);
@@ -114,7 +114,7 @@ static inline void x2apic_write_reg(unsigned int reg, u64 value)
static inline void x2apic_write_reg_fault(unsigned int reg, u64 value)
{
- uint8_t fault = x2apic_write_reg_safe(reg, value);
+ u8 fault = x2apic_write_reg_safe(reg, value);
__GUEST_ASSERT(fault == GP_VECTOR,
"Wanted #GP on WRMSR(%x) = %lx, got 0x%x\n",
diff --git a/tools/testing/selftests/kvm/include/x86/hyperv.h b/tools/testing/selftests/kvm/include/x86/hyperv.h
index 2add2123e37b..78003f5a22f3 100644
--- a/tools/testing/selftests/kvm/include/x86/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86/hyperv.h
@@ -254,12 +254,12 @@
* Issue a Hyper-V hypercall. Returns exception vector raised or 0, 'hv_status'
* is set to the hypercall status (if no exception occurred).
*/
-static inline uint8_t __hyperv_hypercall(u64 control, gva_t input_address,
- gva_t output_address,
- u64 *hv_status)
+static inline u8 __hyperv_hypercall(u64 control, gva_t input_address,
+ gva_t output_address,
+ u64 *hv_status)
{
u64 error_code;
- uint8_t vector;
+ u8 vector;
/* Note both the hypercall and the "asm safe" clobber r9-r11. */
asm volatile("mov %[output_address], %%r8\n\t"
@@ -278,7 +278,7 @@ static inline void hyperv_hypercall(u64 control, gva_t input_address,
gva_t output_address)
{
u64 hv_status;
- uint8_t vector;
+ u8 vector;
vector = __hyperv_hypercall(control, input_address, output_address, &hv_status);
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 8700d37a5727..4efa6c942192 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -724,8 +724,7 @@ static inline bool this_cpu_is_hygon(void)
return this_cpu_vendor_string_is("HygonGenuine");
}
-static inline u32 __this_cpu_has(u32 function, u32 index,
- uint8_t reg, uint8_t lo, uint8_t hi)
+static inline u32 __this_cpu_has(u32 function, u32 index, u8 reg, u8 lo, u8 hi)
{
u32 gprs[4];
@@ -1105,7 +1104,7 @@ static inline void vcpu_set_cpuid(struct kvm_vcpu *vcpu)
void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu,
struct kvm_x86_cpu_property property,
u32 value);
-void vcpu_set_cpuid_maxphyaddr(struct kvm_vcpu *vcpu, uint8_t maxphyaddr);
+void vcpu_set_cpuid_maxphyaddr(struct kvm_vcpu *vcpu, u8 maxphyaddr);
void vcpu_clear_cpuid_entry(struct kvm_vcpu *vcpu, u32 function);
@@ -1262,8 +1261,8 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
#define kvm_asm_safe(insn, inputs...) \
({ \
- u64 ign_error_code; \
- uint8_t vector; \
+ u64 ign_error_code; \
+ u8 vector; \
\
asm volatile(KVM_ASM_SAFE(insn) \
: KVM_ASM_SAFE_OUTPUTS(vector, ign_error_code) \
@@ -1274,7 +1273,7 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
#define kvm_asm_safe_ec(insn, error_code, inputs...) \
({ \
- uint8_t vector; \
+ u8 vector; \
\
asm volatile(KVM_ASM_SAFE(insn) \
: KVM_ASM_SAFE_OUTPUTS(vector, error_code) \
@@ -1285,8 +1284,8 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
#define kvm_asm_safe_fep(insn, inputs...) \
({ \
- u64 ign_error_code; \
- uint8_t vector; \
+ u64 ign_error_code; \
+ u8 vector; \
\
asm volatile(KVM_ASM_SAFE_FEP(insn) \
: KVM_ASM_SAFE_OUTPUTS(vector, ign_error_code) \
@@ -1297,7 +1296,7 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
#define kvm_asm_safe_ec_fep(insn, error_code, inputs...) \
({ \
- uint8_t vector; \
+ u8 vector; \
\
asm volatile(KVM_ASM_SAFE_FEP(insn) \
: KVM_ASM_SAFE_OUTPUTS(vector, error_code) \
@@ -1307,10 +1306,10 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
})
#define BUILD_READ_U64_SAFE_HELPER(insn, _fep, _FEP) \
-static inline uint8_t insn##_safe ##_fep(u32 idx, u64 *val) \
+static inline u8 insn##_safe ##_fep(u32 idx, u64 *val) \
{ \
- u64 error_code; \
- uint8_t vector; \
+ u64 error_code; \
+ u8 vector; \
u32 a, d; \
\
asm volatile(KVM_ASM_SAFE##_FEP(#insn) \
@@ -1335,12 +1334,12 @@ BUILD_READ_U64_SAFE_HELPERS(rdmsr)
BUILD_READ_U64_SAFE_HELPERS(rdpmc)
BUILD_READ_U64_SAFE_HELPERS(xgetbv)
-static inline uint8_t wrmsr_safe(u32 msr, u64 val)
+static inline u8 wrmsr_safe(u32 msr, u64 val)
{
return kvm_asm_safe("wrmsr", "a"(val & -1u), "d"(val >> 32), "c"(msr));
}
-static inline uint8_t xsetbv_safe(u32 index, u64 value)
+static inline u8 xsetbv_safe(u32 index, u64 value)
{
u32 eax = value;
u32 edx = value >> 32;
diff --git a/tools/testing/selftests/kvm/include/x86/sev.h b/tools/testing/selftests/kvm/include/x86/sev.h
index 4f91c1179416..1af44c151d60 100644
--- a/tools/testing/selftests/kvm/include/x86/sev.h
+++ b/tools/testing/selftests/kvm/include/x86/sev.h
@@ -47,7 +47,7 @@ static inline bool is_sev_vm(struct kvm_vm *vm)
}
void sev_vm_launch(struct kvm_vm *vm, u32 policy);
-void sev_vm_launch_measure(struct kvm_vm *vm, uint8_t *measurement);
+void sev_vm_launch_measure(struct kvm_vm *vm, u8 *measurement);
void sev_vm_launch_finish(struct kvm_vm *vm);
void snp_vm_launch_start(struct kvm_vm *vm, u64 policy);
void snp_vm_launch_update(struct kvm_vm *vm);
@@ -55,7 +55,7 @@ void snp_vm_launch_finish(struct kvm_vm *vm);
struct kvm_vm *vm_sev_create_with_one_vcpu(u32 type, void *guest_code,
struct kvm_vcpu **cpu);
-void vm_sev_launch(struct kvm_vm *vm, u64 policy, uint8_t *measurement);
+void vm_sev_launch(struct kvm_vm *vm, u64 policy, u8 *measurement);
kvm_static_assert(SEV_RET_SUCCESS == 0);
@@ -132,7 +132,7 @@ static inline void sev_launch_update_data(struct kvm_vm *vm, gpa_t gpa,
}
static inline void snp_launch_update_data(struct kvm_vm *vm, gpa_t gpa,
- u64 hva, u64 size, uint8_t type)
+ u64 hva, u64 size, u8 type)
{
struct kvm_sev_snp_launch_update update_data = {
.uaddr = hva,
diff --git a/tools/testing/selftests/kvm/include/x86/vmx.h b/tools/testing/selftests/kvm/include/x86/vmx.h
index 6cd6bb7efbc2..90fffaf91595 100644
--- a/tools/testing/selftests/kvm/include/x86/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86/vmx.h
@@ -294,7 +294,7 @@ struct vmx_msr_entry {
static inline int vmxon(u64 phys)
{
- uint8_t ret;
+ u8 ret;
__asm__ __volatile__ ("vmxon %[pa]; setna %[ret]"
: [ret]"=rm"(ret)
@@ -311,7 +311,7 @@ static inline void vmxoff(void)
static inline int vmclear(u64 vmcs_pa)
{
- uint8_t ret;
+ u8 ret;
__asm__ __volatile__ ("vmclear %[pa]; setna %[ret]"
: [ret]"=rm"(ret)
@@ -323,7 +323,7 @@ static inline int vmclear(u64 vmcs_pa)
static inline int vmptrld(u64 vmcs_pa)
{
- uint8_t ret;
+ u8 ret;
if (enable_evmcs)
return -1;
@@ -339,7 +339,7 @@ static inline int vmptrld(u64 vmcs_pa)
static inline int vmptrst(u64 *value)
{
u64 tmp;
- uint8_t ret;
+ u8 ret;
if (enable_evmcs)
return evmcs_vmptrst(value);
@@ -450,7 +450,7 @@ static inline void vmcall(void)
static inline int vmread(u64 encoding, u64 *value)
{
u64 tmp;
- uint8_t ret;
+ u8 ret;
if (enable_evmcs)
return evmcs_vmread(encoding, value);
@@ -477,7 +477,7 @@ static inline u64 vmreadz(u64 encoding)
static inline int vmwrite(u64 encoding, u64 value)
{
- uint8_t ret;
+ u8 ret;
if (enable_evmcs)
return evmcs_vmwrite(encoding, value);
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index f96513262c5b..7ba3a48911e3 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -124,7 +124,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
static void _virt_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr,
u64 flags)
{
- uint8_t attr_idx = flags & (PTE_ATTRINDX_MASK >> PTE_ATTRINDX_SHIFT);
+ u8 attr_idx = flags & (PTE_ATTRINDX_MASK >> PTE_ATTRINDX_SHIFT);
u64 pg_attr;
u64 *ptep;
@@ -237,7 +237,7 @@ gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
}
-static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, u64 page, int level)
+static void pte_dump(FILE *stream, struct kvm_vm *vm, u8 indent, u64 page, int level)
{
#ifdef DEBUG
static const char * const type[] = { "", "pud", "pmd", "pte" };
@@ -256,7 +256,7 @@ static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, u64 page,
#endif
}
-void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
{
int level = 4 - (vm->mmu.pgtable_levels - 1);
u64 pgd, *ptep;
@@ -397,7 +397,7 @@ void aarch64_vcpu_setup(struct kvm_vcpu *vcpu, struct kvm_vcpu_init *init)
HCR_EL2_RW | HCR_EL2_TGE | HCR_EL2_E2H);
}
-void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
+void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, u8 indent)
{
u64 pstate, pc;
diff --git a/tools/testing/selftests/kvm/lib/guest_sprintf.c b/tools/testing/selftests/kvm/lib/guest_sprintf.c
index 2a3ab9c168f0..7a33965349a7 100644
--- a/tools/testing/selftests/kvm/lib/guest_sprintf.c
+++ b/tools/testing/selftests/kvm/lib/guest_sprintf.c
@@ -216,7 +216,7 @@ int guest_vsnprintf(char *buf, int n, const char *fmt, va_list args)
while (--field_width > 0)
APPEND_BUFFER_SAFE(str, end, ' ');
APPEND_BUFFER_SAFE(str, end,
- (uint8_t)va_arg(args, int));
+ (u8)va_arg(args, int));
while (--field_width > 0)
APPEND_BUFFER_SAFE(str, end, ' ');
continue;
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 9f80e2e03001..050ae9c92681 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1951,7 +1951,7 @@ void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing)
* Dumps the current state of the VM given by vm, to the FILE stream
* given by stream.
*/
-void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+void vm_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
{
int ctr;
struct userspace_mem_region *region;
diff --git a/tools/testing/selftests/kvm/lib/loongarch/processor.c b/tools/testing/selftests/kvm/lib/loongarch/processor.c
index 38ee62c6cbfb..2982196db3b2 100644
--- a/tools/testing/selftests/kvm/lib/loongarch/processor.c
+++ b/tools/testing/selftests/kvm/lib/loongarch/processor.c
@@ -140,7 +140,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr)
WRITE_ONCE(*ptep, paddr | prot_bits);
}
-static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, u64 page, int level)
+static void pte_dump(FILE *stream, struct kvm_vm *vm, u8 indent, u64 page, int level)
{
u64 pte, *ptep;
static const char * const type[] = { "pte", "pmd", "pud", "pgd"};
@@ -158,7 +158,7 @@ static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, u64 page,
}
}
-void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
{
int level;
@@ -169,7 +169,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
pte_dump(stream, vm, indent, vm->mmu.pgd, level);
}
-void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
+void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, u8 indent)
{
}
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index d7646eebcfab..7336d5a20419 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -148,7 +148,7 @@ gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
exit(1);
}
-static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent,
+static void pte_dump(FILE *stream, struct kvm_vm *vm, u8 indent,
u64 page, int level)
{
#ifdef DEBUG
@@ -170,7 +170,7 @@ static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent,
#endif
}
-void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
{
struct kvm_mmu *mmu = &vm->mmu;
int level = mmu->pgtable_levels - 1;
@@ -233,7 +233,7 @@ void riscv_vcpu_mmu_setup(struct kvm_vcpu *vcpu)
vcpu_set_reg(vcpu, RISCV_GENERAL_CSR_REG(satp), satp);
}
-void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
+void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, u8 indent)
{
struct kvm_riscv_core core;
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index 7591c5167927..d35f23a4db12 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -111,7 +111,7 @@ gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
return (entry[idx] & ~0xffful) + (gva & 0xffful);
}
-static void virt_dump_ptes(FILE *stream, struct kvm_vm *vm, uint8_t indent,
+static void virt_dump_ptes(FILE *stream, struct kvm_vm *vm, u8 indent,
u64 ptea_start)
{
u64 *pte, ptea;
@@ -125,7 +125,7 @@ static void virt_dump_ptes(FILE *stream, struct kvm_vm *vm, uint8_t indent,
}
}
-static void virt_dump_region(FILE *stream, struct kvm_vm *vm, uint8_t indent,
+static void virt_dump_region(FILE *stream, struct kvm_vm *vm, u8 indent,
u64 reg_tab_addr)
{
u64 addr, *entry;
@@ -147,7 +147,7 @@ static void virt_dump_region(FILE *stream, struct kvm_vm *vm, uint8_t indent,
}
}
-void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
{
if (!vm->mmu.pgd_created)
return;
@@ -212,7 +212,7 @@ void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
va_end(ap);
}
-void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
+void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, u8 indent)
{
fprintf(stream, "%*spstate: psw: 0x%.16llx:0x%.16llx\n",
indent, "", vcpu->run->psw_mask, vcpu->run->psw_addr);
diff --git a/tools/testing/selftests/kvm/lib/sparsebit.c b/tools/testing/selftests/kvm/lib/sparsebit.c
index 7e7734088f2f..4d845000de15 100644
--- a/tools/testing/selftests/kvm/lib/sparsebit.c
+++ b/tools/testing/selftests/kvm/lib/sparsebit.c
@@ -2074,7 +2074,7 @@ int main(void)
{
s = sparsebit_alloc();
for (;;) {
- uint8_t op = get8() & 0xf;
+ u8 op = get8() & 0xf;
u64 first = get64();
u64 last = get64();
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 8e6393384fa4..723a5200c4bb 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -62,7 +62,7 @@ const char *ex_str(int vector)
}
}
-static void regs_dump(FILE *stream, struct kvm_regs *regs, uint8_t indent)
+static void regs_dump(FILE *stream, struct kvm_regs *regs, u8 indent)
{
fprintf(stream, "%*srax: 0x%.16llx rbx: 0x%.16llx "
"rcx: 0x%.16llx rdx: 0x%.16llx\n",
@@ -86,7 +86,7 @@ static void regs_dump(FILE *stream, struct kvm_regs *regs, uint8_t indent)
}
static void segment_dump(FILE *stream, struct kvm_segment *segment,
- uint8_t indent)
+ u8 indent)
{
fprintf(stream, "%*sbase: 0x%.16llx limit: 0x%.8x "
"selector: 0x%.4x type: 0x%.2x\n",
@@ -103,7 +103,7 @@ static void segment_dump(FILE *stream, struct kvm_segment *segment,
}
static void dtable_dump(FILE *stream, struct kvm_dtable *dtable,
- uint8_t indent)
+ u8 indent)
{
fprintf(stream, "%*sbase: 0x%.16llx limit: 0x%.4x "
"padding: 0x%.4x 0x%.4x 0x%.4x\n",
@@ -111,7 +111,7 @@ static void dtable_dump(FILE *stream, struct kvm_dtable *dtable,
dtable->padding[0], dtable->padding[1], dtable->padding[2]);
}
-static void sregs_dump(FILE *stream, struct kvm_sregs *sregs, uint8_t indent)
+static void sregs_dump(FILE *stream, struct kvm_sregs *sregs, u8 indent)
{
unsigned int i;
@@ -407,7 +407,7 @@ u64 *vm_get_pte(struct kvm_vm *vm, u64 vaddr)
return __vm_get_page_table_entry(vm, &vm->mmu, vaddr, &level);
}
-void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
{
struct kvm_mmu *mmu = &vm->mmu;
u64 *pml4e, *pml4e_start;
@@ -909,7 +909,7 @@ const struct kvm_cpuid2 *kvm_get_supported_cpuid(void)
static u32 __kvm_cpu_has(const struct kvm_cpuid2 *cpuid,
u32 function, u32 index,
- uint8_t reg, uint8_t lo, uint8_t hi)
+ u8 reg, u8 lo, u8 hi)
{
const struct kvm_cpuid_entry2 *entry;
int i;
@@ -1127,7 +1127,7 @@ void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
va_end(ap);
}
-void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
+void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, u8 indent)
{
struct kvm_regs regs;
struct kvm_sregs sregs;
@@ -1378,7 +1378,7 @@ unsigned long vm_compute_max_gfn(struct kvm_vm *vm)
{
const unsigned long num_ht_pages = 12 << (30 - vm->page_shift); /* 12 GiB */
unsigned long ht_gfn, max_gfn, max_pfn;
- uint8_t maxphyaddr, guest_maxphyaddr;
+ u8 maxphyaddr, guest_maxphyaddr;
/*
* Use "guest MAXPHYADDR" from KVM if it's available. Guest MAXPHYADDR
diff --git a/tools/testing/selftests/kvm/lib/x86/sev.c b/tools/testing/selftests/kvm/lib/x86/sev.c
index d82f677b7c5e..93f916903461 100644
--- a/tools/testing/selftests/kvm/lib/x86/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86/sev.c
@@ -15,7 +15,7 @@
* expression would cause us to quit the loop.
*/
static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region,
- uint8_t page_type, bool private)
+ u8 page_type, bool private)
{
const struct sparsebit *protected_phy_pages = region->protected_phy_pages;
const gpa_t gpa_base = region->region.guest_phys_addr;
@@ -103,7 +103,7 @@ void sev_vm_launch(struct kvm_vm *vm, u32 policy)
vm->arch.is_pt_protected = true;
}
-void sev_vm_launch_measure(struct kvm_vm *vm, uint8_t *measurement)
+void sev_vm_launch_measure(struct kvm_vm *vm, u8 *measurement)
{
struct kvm_sev_launch_measure launch_measure;
struct kvm_sev_guest_status guest_status;
@@ -174,7 +174,7 @@ struct kvm_vm *vm_sev_create_with_one_vcpu(u32 type, void *guest_code,
return vm;
}
-void vm_sev_launch(struct kvm_vm *vm, u64 policy, uint8_t *measurement)
+void vm_sev_launch(struct kvm_vm *vm, u64 policy, u8 *measurement)
{
if (is_sev_snp_vm(vm)) {
vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, BIT(KVM_HC_MAP_GPA_RANGE));
diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index c2b36a4ac638..51f8be50c7e4 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -217,7 +217,7 @@ static void *vm_gpa2hva(struct vm_data *data, u64 gpa, u64 *rempages)
}
base = data->hva_slots[slot];
- return (uint8_t *)base + slotoffs * guest_page_size + pgoffs;
+ return (u8 *)base + slotoffs * guest_page_size + pgoffs;
}
static u64 vm_slot2gpa(struct vm_data *data, u32 slot)
diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index c1f1e318e059..e0975a5dcff1 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -347,7 +347,7 @@ int main(int argc, char *argv[])
/* Pre-fault the memory to avoid taking mmap_sem on guest page faults. */
for (i = 0; i < slot_size; i += vm->page_size)
- ((uint8_t *)mem)[i] = 0xaa;
+ ((u8 *)mem)[i] = 0xaa;
gpa = 0;
for (slot = first_slot; slot < max_slots; slot++) {
diff --git a/tools/testing/selftests/kvm/s390/memop.c b/tools/testing/selftests/kvm/s390/memop.c
index aa92fdf0664d..9855b5bfb5ed 100644
--- a/tools/testing/selftests/kvm/s390/memop.c
+++ b/tools/testing/selftests/kvm/s390/memop.c
@@ -48,13 +48,13 @@ struct mop_desc {
void *buf;
u32 sida_offset;
void *old;
- uint8_t old_value[16];
+ u8 old_value[16];
bool *cmpxchg_success;
- uint8_t ar;
- uint8_t key;
+ u8 ar;
+ u8 key;
};
-const uint8_t NO_KEY = 0xff;
+const u8 NO_KEY = 0xff;
static struct kvm_s390_mem_op ksmo_from_desc(struct mop_desc *desc)
{
@@ -230,8 +230,8 @@ static void memop_ioctl(struct test_info info, struct kvm_s390_mem_op *ksmo,
#define CR0_FETCH_PROTECTION_OVERRIDE (1UL << (63 - 38))
#define CR0_STORAGE_PROTECTION_OVERRIDE (1UL << (63 - 39))
-static uint8_t __aligned(PAGE_SIZE) mem1[65536];
-static uint8_t __aligned(PAGE_SIZE) mem2[65536];
+static u8 __aligned(PAGE_SIZE) mem1[65536];
+static u8 __aligned(PAGE_SIZE) mem2[65536];
struct test_default {
struct kvm_vm *kvm_vm;
@@ -296,7 +296,7 @@ static void prepare_mem12(void)
TEST_ASSERT(!memcmp(p1, p2, size), "Memory contents do not match!")
static void default_write_read(struct test_info copy_cpu, struct test_info mop_cpu,
- enum mop_target mop_target, u32 size, uint8_t key)
+ enum mop_target mop_target, u32 size, u8 key)
{
prepare_mem12();
CHECK_N_DO(MOP, mop_cpu, mop_target, WRITE, mem1, size,
@@ -308,7 +308,7 @@ static void default_write_read(struct test_info copy_cpu, struct test_info mop_c
}
static void default_read(struct test_info copy_cpu, struct test_info mop_cpu,
- enum mop_target mop_target, u32 size, uint8_t key)
+ enum mop_target mop_target, u32 size, u8 key)
{
prepare_mem12();
CHECK_N_DO(MOP, mop_cpu, mop_target, WRITE, mem1, size, GADDR_V(mem1));
@@ -318,12 +318,12 @@ static void default_read(struct test_info copy_cpu, struct test_info mop_cpu,
ASSERT_MEM_EQ(mem1, mem2, size);
}
-static void default_cmpxchg(struct test_default *test, uint8_t key)
+static void default_cmpxchg(struct test_default *test, u8 key)
{
for (int size = 1; size <= 16; size *= 2) {
for (int offset = 0; offset < 16; offset += size) {
- uint8_t __aligned(16) new[16] = {};
- uint8_t __aligned(16) old[16];
+ u8 __aligned(16) new[16] = {};
+ u8 __aligned(16) old[16];
bool succ;
prepare_mem12();
@@ -400,7 +400,7 @@ static void test_copy_access_register(void)
kvm_vm_free(t.kvm_vm);
}
-static void set_storage_key_range(void *addr, size_t len, uint8_t key)
+static void set_storage_key_range(void *addr, size_t len, u8 key)
{
uintptr_t _addr, abs, i;
int not_mapped = 0;
@@ -483,7 +483,7 @@ static __uint128_t cut_to_size(int size, __uint128_t val)
{
switch (size) {
case 1:
- return (uint8_t)val;
+ return (u8)val;
case 2:
return (u16)val;
case 4:
@@ -553,7 +553,7 @@ static __uint128_t permutate_bits(bool guest, int i, int size, __uint128_t old)
if (swap) {
int i, j;
__uint128_t new;
- uint8_t byte0, byte1;
+ u8 byte0, byte1;
rand = rand * 3 + 1;
i = rand % size;
diff --git a/tools/testing/selftests/kvm/s390/resets.c b/tools/testing/selftests/kvm/s390/resets.c
index 7a81d07500bd..e3c7a2f148f9 100644
--- a/tools/testing/selftests/kvm/s390/resets.c
+++ b/tools/testing/selftests/kvm/s390/resets.c
@@ -20,7 +20,7 @@
struct kvm_s390_irq buf[ARBITRARY_NON_ZERO_VCPU_ID + LOCAL_IRQS];
-static uint8_t regs_null[512];
+static u8 regs_null[512];
static void guest_code_initial(void)
{
diff --git a/tools/testing/selftests/kvm/s390/shared_zeropage_test.c b/tools/testing/selftests/kvm/s390/shared_zeropage_test.c
index bba0d9a6dcc8..a9e5a01200b8 100644
--- a/tools/testing/selftests/kvm/s390/shared_zeropage_test.c
+++ b/tools/testing/selftests/kvm/s390/shared_zeropage_test.c
@@ -13,7 +13,7 @@
#include "kselftest.h"
#include "ucall_common.h"
-static void set_storage_key(void *addr, uint8_t skey)
+static void set_storage_key(void *addr, u8 skey)
{
asm volatile("sske %0,%1" : : "d" (skey), "a" (addr));
}
diff --git a/tools/testing/selftests/kvm/s390/tprot.c b/tools/testing/selftests/kvm/s390/tprot.c
index e94a640f6f32..e021e198b28e 100644
--- a/tools/testing/selftests/kvm/s390/tprot.c
+++ b/tools/testing/selftests/kvm/s390/tprot.c
@@ -14,12 +14,12 @@
#define CR0_FETCH_PROTECTION_OVERRIDE (1UL << (63 - 38))
#define CR0_STORAGE_PROTECTION_OVERRIDE (1UL << (63 - 39))
-static __aligned(PAGE_SIZE) uint8_t pages[2][PAGE_SIZE];
-static uint8_t *const page_store_prot = pages[0];
-static uint8_t *const page_fetch_prot = pages[1];
+static __aligned(PAGE_SIZE) u8 pages[2][PAGE_SIZE];
+static u8 *const page_store_prot = pages[0];
+static u8 *const page_fetch_prot = pages[1];
/* Nonzero return value indicates that address not mapped */
-static int set_storage_key(void *addr, uint8_t key)
+static int set_storage_key(void *addr, u8 key)
{
int not_mapped = 0;
@@ -44,7 +44,7 @@ enum permission {
TRANSL_UNAVAIL = 3,
};
-static enum permission test_protection(void *addr, uint8_t key)
+static enum permission test_protection(void *addr, u8 key)
{
u64 mask;
@@ -72,7 +72,7 @@ enum stage {
struct test {
enum stage stage;
void *addr;
- uint8_t key;
+ u8 key;
enum permission expected;
} tests[] = {
/*
diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index 59a6eae30946..5551dd0f9fad 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -556,7 +556,7 @@ static void guest_code_mmio_during_vectoring(void)
set_idt(&idt_desc);
/* Generate a #GP by dereferencing a non-canonical address */
- *((uint8_t *)NONCANONICAL) = 0x1;
+ *((u8 *)NONCANONICAL) = 0x1;
GUEST_ASSERT(0);
}
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index 85fabe262864..d46968f5579e 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -245,8 +245,8 @@ struct sta_struct {
u32 sequence;
u32 flags;
u64 steal;
- uint8_t preempted;
- uint8_t pad[47];
+ u8 preempted;
+ u8 pad[47];
} __packed;
static void sta_set_shmem(gpa_t gpa, unsigned long flags)
diff --git a/tools/testing/selftests/kvm/x86/aperfmperf_test.c b/tools/testing/selftests/kvm/x86/aperfmperf_test.c
index 620809cf35da..c91660103137 100644
--- a/tools/testing/selftests/kvm/x86/aperfmperf_test.c
+++ b/tools/testing/selftests/kvm/x86/aperfmperf_test.c
@@ -108,7 +108,7 @@ static void guest_code(void *nested_test_data)
static void guest_no_aperfmperf(void)
{
u64 msr_val;
- uint8_t vector;
+ u8 vector;
vector = rdmsr_safe(MSR_IA32_APERF, &msr_val);
GUEST_ASSERT(vector == GP_VECTOR);
diff --git a/tools/testing/selftests/kvm/x86/evmcs_smm_controls_test.c b/tools/testing/selftests/kvm/x86/evmcs_smm_controls_test.c
index 9ff24d4851f5..5b3aef109cfc 100644
--- a/tools/testing/selftests/kvm/x86/evmcs_smm_controls_test.c
+++ b/tools/testing/selftests/kvm/x86/evmcs_smm_controls_test.c
@@ -29,7 +29,7 @@
* SMI handler: runs in real-address mode.
* Reports SMRAM_STAGE via port IO, then does RSM.
*/
-static uint8_t smi_handler[] = {
+static u8 smi_handler[] = {
0xb0, SMRAM_STAGE, /* mov $SMRAM_STAGE, %al */
0xe4, SYNC_PORT, /* in $SYNC_PORT, %al */
0x0f, 0xaa, /* rsm */
diff --git a/tools/testing/selftests/kvm/x86/fastops_test.c b/tools/testing/selftests/kvm/x86/fastops_test.c
index 721f56d38f49..c0d30ccd8767 100644
--- a/tools/testing/selftests/kvm/x86/fastops_test.c
+++ b/tools/testing/selftests/kvm/x86/fastops_test.c
@@ -77,7 +77,7 @@
#define guest_test_fastop_cl(insn, type_t, __val1, __val2) \
({ \
type_t output = __val2, ex_output = __val2, input = __val2; \
- uint8_t shift = __val1; \
+ u8 shift = __val1; \
u64 flags, ex_flags; \
\
guest_execute_fastop_cl("", insn, shift, ex_output, ex_flags); \
@@ -95,7 +95,7 @@
#define guest_execute_fastop_div(__KVM_ASM_SAFE, insn, __a, __d, __rm, __flags) \
({ \
u64 ign_error_code; \
- uint8_t vector; \
+ u8 vector; \
\
__asm__ __volatile__(fastop(__KVM_ASM_SAFE(insn " %[denom]")) \
: "+a"(__a), "+d"(__d), flags_constraint(__flags), \
@@ -110,7 +110,7 @@
type_t _a = __val1, _d = __val1, rm = __val2; \
type_t a = _a, d = _d, ex_a = _a, ex_d = _d; \
u64 flags, ex_flags; \
- uint8_t v, ex_v; \
+ u8 v, ex_v; \
\
ex_v = guest_execute_fastop_div(KVM_ASM_SAFE, insn, ex_a, ex_d, rm, ex_flags); \
v = guest_execute_fastop_div(KVM_ASM_SAFE_FEP, insn, a, d, rm, flags); \
@@ -185,7 +185,7 @@ if (sizeof(type_t) != 1) { \
static void guest_code(void)
{
- guest_test_fastops(uint8_t, "b");
+ guest_test_fastops(u8, "b");
guest_test_fastops(u16, "w");
guest_test_fastops(u32, "l");
guest_test_fastops(u64, "q");
diff --git a/tools/testing/selftests/kvm/x86/fix_hypercall_test.c b/tools/testing/selftests/kvm/x86/fix_hypercall_test.c
index df7a94400d3d..753a0e730ea8 100644
--- a/tools/testing/selftests/kvm/x86/fix_hypercall_test.c
+++ b/tools/testing/selftests/kvm/x86/fix_hypercall_test.c
@@ -26,11 +26,11 @@ static void guest_ud_handler(struct ex_regs *regs)
regs->rip += HYPERCALL_INSN_SIZE;
}
-static const uint8_t vmx_vmcall[HYPERCALL_INSN_SIZE] = { 0x0f, 0x01, 0xc1 };
-static const uint8_t svm_vmmcall[HYPERCALL_INSN_SIZE] = { 0x0f, 0x01, 0xd9 };
+static const u8 vmx_vmcall[HYPERCALL_INSN_SIZE] = { 0x0f, 0x01, 0xc1 };
+static const u8 svm_vmmcall[HYPERCALL_INSN_SIZE] = { 0x0f, 0x01, 0xd9 };
-extern uint8_t hypercall_insn[HYPERCALL_INSN_SIZE];
-static u64 do_sched_yield(uint8_t apic_id)
+extern u8 hypercall_insn[HYPERCALL_INSN_SIZE];
+static u64 do_sched_yield(u8 apic_id)
{
u64 ret;
@@ -45,8 +45,8 @@ static u64 do_sched_yield(uint8_t apic_id)
static void guest_main(void)
{
- const uint8_t *native_hypercall_insn;
- const uint8_t *other_hypercall_insn;
+ const u8 *native_hypercall_insn;
+ const u8 *other_hypercall_insn;
u64 ret;
if (host_cpu_is_intel) {
diff --git a/tools/testing/selftests/kvm/x86/flds_emulation.h b/tools/testing/selftests/kvm/x86/flds_emulation.h
index c7e4f08765fb..fd6b6c67199a 100644
--- a/tools/testing/selftests/kvm/x86/flds_emulation.h
+++ b/tools/testing/selftests/kvm/x86/flds_emulation.h
@@ -21,7 +21,7 @@ static inline void handle_flds_emulation_failure_exit(struct kvm_vcpu *vcpu)
{
struct kvm_run *run = vcpu->run;
struct kvm_regs regs;
- uint8_t *insn_bytes;
+ u8 *insn_bytes;
u64 flags;
TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_INTERNAL_ERROR);
diff --git a/tools/testing/selftests/kvm/x86/hyperv_features.c b/tools/testing/selftests/kvm/x86/hyperv_features.c
index 80588b7ea259..52dbd52ce606 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_features.c
@@ -41,7 +41,7 @@ static bool is_write_only_msr(u32 msr)
static void guest_msr(struct msr_data *msr)
{
- uint8_t vector = 0;
+ u8 vector = 0;
u64 msr_val = 0;
GUEST_ASSERT(msr->idx);
@@ -85,7 +85,7 @@ static void guest_msr(struct msr_data *msr)
static void guest_hcall(gpa_t pgs_gpa, struct hcall_data *hcall)
{
u64 res, input, output;
- uint8_t vector;
+ u8 vector;
GUEST_ASSERT_NE(hcall->control, 0);
diff --git a/tools/testing/selftests/kvm/x86/kvm_pv_test.c b/tools/testing/selftests/kvm/x86/kvm_pv_test.c
index babf0f95165a..8ed5fa635021 100644
--- a/tools/testing/selftests/kvm/x86/kvm_pv_test.c
+++ b/tools/testing/selftests/kvm/x86/kvm_pv_test.c
@@ -41,7 +41,7 @@ static struct msr_data msrs_to_test[] = {
static void test_msr(struct msr_data *msr)
{
u64 ignored;
- uint8_t vector;
+ u8 vector;
PR_MSR(msr);
diff --git a/tools/testing/selftests/kvm/x86/nested_emulation_test.c b/tools/testing/selftests/kvm/x86/nested_emulation_test.c
index 42fd24567e26..fb7dcbe53ac7 100644
--- a/tools/testing/selftests/kvm/x86/nested_emulation_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_emulation_test.c
@@ -13,7 +13,7 @@ enum {
struct emulated_instruction {
const char name[32];
- uint8_t opcode[15];
+ u8 opcode[15];
u32 exit_reason[NR_VIRTUALIZATION_FLAVORS];
};
@@ -32,9 +32,9 @@ static struct emulated_instruction instructions[] = {
},
};
-static uint8_t kvm_fep[] = { 0x0f, 0x0b, 0x6b, 0x76, 0x6d }; /* ud2 ; .ascii "kvm" */
-static uint8_t l2_guest_code[sizeof(kvm_fep) + 15];
-static uint8_t *l2_instruction = &l2_guest_code[sizeof(kvm_fep)];
+static u8 kvm_fep[] = { 0x0f, 0x0b, 0x6b, 0x76, 0x6d }; /* ud2 ; .ascii "kvm" */
+static u8 l2_guest_code[sizeof(kvm_fep) + 15];
+static u8 *l2_instruction = &l2_guest_code[sizeof(kvm_fep)];
static u32 get_instruction_length(struct emulated_instruction *insn)
{
diff --git a/tools/testing/selftests/kvm/x86/platform_info_test.c b/tools/testing/selftests/kvm/x86/platform_info_test.c
index 86d1ab0db1e8..80bb07e6531c 100644
--- a/tools/testing/selftests/kvm/x86/platform_info_test.c
+++ b/tools/testing/selftests/kvm/x86/platform_info_test.c
@@ -24,7 +24,7 @@
static void guest_code(void)
{
u64 msr_platform_info;
- uint8_t vector;
+ u8 vector;
GUEST_SYNC(true);
msr_platform_info = rdmsr(MSR_PLATFORM_INFO);
diff --git a/tools/testing/selftests/kvm/x86/pmu_counters_test.c b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
index 2a12c2d42697..dc6afac3aa91 100644
--- a/tools/testing/selftests/kvm/x86/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
@@ -32,7 +32,7 @@
/* Track which architectural events are supported by hardware. */
static u32 hardware_pmu_arch_events;
-static uint8_t kvm_pmu_version;
+static u8 kvm_pmu_version;
static bool kvm_has_perf_caps;
#define X86_PMU_FEATURE_NULL \
@@ -57,7 +57,7 @@ struct kvm_intel_pmu_event {
* kvm_x86_pmu_feature use syntax that's only valid in function scope, and the
* compiler often thinks the feature definitions aren't compile-time constants.
*/
-static struct kvm_intel_pmu_event intel_event_to_feature(uint8_t idx)
+static struct kvm_intel_pmu_event intel_event_to_feature(u8 idx)
{
const struct kvm_intel_pmu_event __intel_event_to_feature[] = {
[INTEL_ARCH_CPU_CYCLES_INDEX] = { X86_PMU_FEATURE_CPU_CYCLES, X86_PMU_FEATURE_CPU_CYCLES_FIXED },
@@ -89,7 +89,7 @@ static struct kvm_intel_pmu_event intel_event_to_feature(uint8_t idx)
static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
void *guest_code,
- uint8_t pmu_version,
+ u8 pmu_version,
u64 perf_capabilities)
{
struct kvm_vm *vm;
@@ -132,7 +132,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu)
} while (uc.cmd != UCALL_DONE);
}
-static uint8_t guest_get_pmu_version(void)
+static u8 guest_get_pmu_version(void)
{
/*
* Return the effective PMU version, i.e. the minimum between what KVM
@@ -141,7 +141,7 @@ static uint8_t guest_get_pmu_version(void)
* supported by KVM to verify KVM doesn't freak out and do something
* bizarre with an architecturally valid, but unsupported, version.
*/
- return min_t(uint8_t, kvm_pmu_version, this_cpu_property(X86_PROPERTY_PMU_VERSION));
+ return min_t(u8, kvm_pmu_version, this_cpu_property(X86_PROPERTY_PMU_VERSION));
}
/*
@@ -153,7 +153,7 @@ static uint8_t guest_get_pmu_version(void)
* Sanity check that in all cases, the event doesn't count when it's disabled,
* and that KVM correctly emulates the write of an arbitrary value.
*/
-static void guest_assert_event_count(uint8_t idx, u32 pmc, u32 pmc_msr)
+static void guest_assert_event_count(u8 idx, u32 pmc, u32 pmc_msr)
{
u64 count;
@@ -255,7 +255,7 @@ do { \
guest_assert_event_count(_idx, _pmc, _pmc_msr); \
} while (0)
-static void __guest_test_arch_event(uint8_t idx, u32 pmc, u32 pmc_msr,
+static void __guest_test_arch_event(u8 idx, u32 pmc, u32 pmc_msr,
u32 ctrl_msr, u64 ctrl_msr_value)
{
GUEST_TEST_EVENT(idx, pmc, pmc_msr, ctrl_msr, ctrl_msr_value, "");
@@ -264,7 +264,7 @@ static void __guest_test_arch_event(uint8_t idx, u32 pmc, u32 pmc_msr,
GUEST_TEST_EVENT(idx, pmc, pmc_msr, ctrl_msr, ctrl_msr_value, KVM_FEP);
}
-static void guest_test_arch_event(uint8_t idx)
+static void guest_test_arch_event(u8 idx)
{
u32 nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
u32 pmu_version = guest_get_pmu_version();
@@ -320,7 +320,7 @@ static void guest_test_arch_event(uint8_t idx)
static void guest_test_arch_events(void)
{
- uint8_t i;
+ u8 i;
for (i = 0; i < NR_INTEL_ARCH_EVENTS; i++)
guest_test_arch_event(i);
@@ -328,8 +328,8 @@ static void guest_test_arch_events(void)
GUEST_DONE();
}
-static void test_arch_events(uint8_t pmu_version, u64 perf_capabilities,
- uint8_t length, u32 unavailable_mask)
+static void test_arch_events(u8 pmu_version, u64 perf_capabilities,
+ u8 length, u32 unavailable_mask)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
@@ -376,7 +376,7 @@ __GUEST_ASSERT(expect_gp ? vector == GP_VECTOR : !vector, \
static void guest_test_rdpmc(u32 rdpmc_idx, bool expect_success,
u64 expected_val)
{
- uint8_t vector;
+ u8 vector;
u64 val;
vector = rdpmc_safe(rdpmc_idx, &val);
@@ -393,11 +393,11 @@ static void guest_test_rdpmc(u32 rdpmc_idx, bool expect_success,
GUEST_ASSERT_PMC_VALUE(RDPMC, rdpmc_idx, val, expected_val);
}
-static void guest_rd_wr_counters(u32 base_msr, uint8_t nr_possible_counters,
- uint8_t nr_counters, u32 or_mask)
+static void guest_rd_wr_counters(u32 base_msr, u8 nr_possible_counters,
+ u8 nr_counters, u32 or_mask)
{
const bool pmu_has_fast_mode = !guest_get_pmu_version();
- uint8_t i;
+ u8 i;
for (i = 0; i < nr_possible_counters; i++) {
/*
@@ -422,7 +422,7 @@ static void guest_rd_wr_counters(u32 base_msr, uint8_t nr_possible_counters,
const bool expect_gp = !expect_success && msr != MSR_P6_PERFCTR0 &&
msr != MSR_P6_PERFCTR1;
u32 rdpmc_idx;
- uint8_t vector;
+ u8 vector;
u64 val;
vector = wrmsr_safe(msr, test_val);
@@ -461,8 +461,8 @@ static void guest_rd_wr_counters(u32 base_msr, uint8_t nr_possible_counters,
static void guest_test_gp_counters(void)
{
- uint8_t pmu_version = guest_get_pmu_version();
- uint8_t nr_gp_counters = 0;
+ u8 pmu_version = guest_get_pmu_version();
+ u8 nr_gp_counters = 0;
u32 base_msr;
if (pmu_version)
@@ -495,8 +495,8 @@ static void guest_test_gp_counters(void)
GUEST_DONE();
}
-static void test_gp_counters(uint8_t pmu_version, u64 perf_capabilities,
- uint8_t nr_gp_counters)
+static void test_gp_counters(u8 pmu_version, u64 perf_capabilities,
+ u8 nr_gp_counters)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
@@ -515,8 +515,8 @@ static void test_gp_counters(uint8_t pmu_version, u64 perf_capabilities,
static void guest_test_fixed_counters(void)
{
u64 supported_bitmask = 0;
- uint8_t nr_fixed_counters = 0;
- uint8_t i;
+ u8 nr_fixed_counters = 0;
+ u8 i;
/* Fixed counters require Architectural vPMU Version 2+. */
if (guest_get_pmu_version() >= 2)
@@ -533,7 +533,7 @@ static void guest_test_fixed_counters(void)
nr_fixed_counters, supported_bitmask);
for (i = 0; i < MAX_NR_FIXED_COUNTERS; i++) {
- uint8_t vector;
+ u8 vector;
u64 val;
if (i >= nr_fixed_counters && !(supported_bitmask & BIT_ULL(i))) {
@@ -561,9 +561,8 @@ static void guest_test_fixed_counters(void)
GUEST_DONE();
}
-static void test_fixed_counters(uint8_t pmu_version, u64 perf_capabilities,
- uint8_t nr_fixed_counters,
- u32 supported_bitmask)
+static void test_fixed_counters(u8 pmu_version, u64 perf_capabilities,
+ u8 nr_fixed_counters, u32 supported_bitmask)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
@@ -583,11 +582,11 @@ static void test_fixed_counters(uint8_t pmu_version, u64 perf_capabilities,
static void test_intel_counters(void)
{
- uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
- uint8_t nr_gp_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
- uint8_t pmu_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION);
+ u8 nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+ u8 nr_gp_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
+ u8 pmu_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION);
unsigned int i;
- uint8_t v, j;
+ u8 v, j;
u32 k;
const u64 perf_caps[] = {
@@ -620,7 +619,7 @@ static void test_intel_counters(void)
* Intel, i.e. is the last version that is guaranteed to be backwards
* compatible with KVM's existing behavior.
*/
- uint8_t max_pmu_version = max_t(typeof(pmu_version), pmu_version, 5);
+ u8 max_pmu_version = max_t(typeof(pmu_version), pmu_version, 5);
/*
* Detect the existence of events that aren't supported by selftests.
diff --git a/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
index 5d607b114aeb..c1232344fda8 100644
--- a/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
@@ -685,7 +685,7 @@ static int set_pmu_single_event_filter(struct kvm_vcpu *vcpu, u64 event,
static void test_filter_ioctl(struct kvm_vcpu *vcpu)
{
- uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+ u8 nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
struct __kvm_pmu_event_filter f;
u64 e = ~0ul;
int r;
@@ -729,7 +729,7 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
TEST_ASSERT(!r, "Masking non-existent fixed counters should be allowed");
}
-static void intel_run_fixed_counter_guest_code(uint8_t idx)
+static void intel_run_fixed_counter_guest_code(u8 idx)
{
for (;;) {
wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
@@ -770,8 +770,8 @@ static u64 test_set_gp_and_fixed_event_filter(struct kvm_vcpu *vcpu,
return run_vcpu_to_sync(vcpu);
}
-static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, uint8_t idx,
- uint8_t nr_fixed_counters)
+static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, u8 idx,
+ u8 nr_fixed_counters)
{
unsigned int i;
u32 bitmap;
@@ -815,10 +815,10 @@ static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, uint8_t idx,
static void test_fixed_counter_bitmap(void)
{
- uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+ u8 nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
struct kvm_vm *vm;
struct kvm_vcpu *vcpu;
- uint8_t idx;
+ u8 idx;
/*
* Check that pmu_event_filter works as expected when it's applied to
diff --git a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
index 0bf86d822ee0..27675d7d04c0 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
@@ -29,7 +29,7 @@
/* Horrific macro so that the line info is captured accurately :-( */
#define memcmp_g(gpa, pattern, size) \
do { \
- uint8_t *mem = (uint8_t *)gpa; \
+ u8 *mem = (u8 *)gpa; \
size_t i; \
\
for (i = 0; i < size; i++) \
@@ -38,7 +38,7 @@ do { \
pattern, i, gpa + i, mem[i]); \
} while (0)
-static void memcmp_h(uint8_t *mem, u64 gpa, uint8_t pattern, size_t size)
+static void memcmp_h(u8 *mem, u64 gpa, u8 pattern, size_t size)
{
size_t i;
@@ -71,12 +71,12 @@ enum ucall_syncs {
};
static void guest_sync_shared(u64 gpa, u64 size,
- uint8_t current_pattern, uint8_t new_pattern)
+ u8 current_pattern, u8 new_pattern)
{
GUEST_SYNC5(SYNC_SHARED, gpa, size, current_pattern, new_pattern);
}
-static void guest_sync_private(u64 gpa, u64 size, uint8_t pattern)
+static void guest_sync_private(u64 gpa, u64 size, u8 pattern)
{
GUEST_SYNC4(SYNC_PRIVATE, gpa, size, pattern);
}
@@ -121,8 +121,8 @@ struct {
static void guest_test_explicit_conversion(u64 base_gpa, bool do_fallocate)
{
- const uint8_t def_p = 0xaa;
- const uint8_t init_p = 0xcc;
+ const u8 def_p = 0xaa;
+ const u8 init_p = 0xcc;
u64 j;
int i;
@@ -136,10 +136,10 @@ static void guest_test_explicit_conversion(u64 base_gpa, bool do_fallocate)
for (i = 0; i < ARRAY_SIZE(test_ranges); i++) {
u64 gpa = base_gpa + test_ranges[i].offset;
u64 size = test_ranges[i].size;
- uint8_t p1 = 0x11;
- uint8_t p2 = 0x22;
- uint8_t p3 = 0x33;
- uint8_t p4 = 0x44;
+ u8 p1 = 0x11;
+ u8 p2 = 0x22;
+ u8 p3 = 0x33;
+ u8 p4 = 0x44;
/*
* Set the test region to pattern one to differentiate it from
@@ -229,7 +229,7 @@ static void guest_punch_hole(u64 gpa, u64 size)
*/
static void guest_test_punch_hole(u64 base_gpa, bool precise)
{
- const uint8_t init_p = 0xcc;
+ const u8 init_p = 0xcc;
int i;
/*
@@ -347,7 +347,7 @@ static void *__test_mem_conversions(void *__vcpu)
for (i = 0; i < size; i += vm->page_size) {
size_t nr_bytes = min_t(size_t, vm->page_size, size - i);
- uint8_t *hva = addr_gpa2hva(vm, gpa + i);
+ u8 *hva = addr_gpa2hva(vm, gpa + i);
/* In all cases, the host should observe the shared data. */
memcmp_h(hva, gpa + i, uc.args[3], nr_bytes);
diff --git a/tools/testing/selftests/kvm/x86/smm_test.c b/tools/testing/selftests/kvm/x86/smm_test.c
index 39e89350c2e7..740051167dbd 100644
--- a/tools/testing/selftests/kvm/x86/smm_test.c
+++ b/tools/testing/selftests/kvm/x86/smm_test.c
@@ -34,7 +34,7 @@
* independent subset of asm here.
* SMI handler always report back fixed stage SMRAM_STAGE.
*/
-uint8_t smi_handler[] = {
+u8 smi_handler[] = {
0xb0, SMRAM_STAGE, /* mov $SMRAM_STAGE, %al */
0xe4, SYNC_PORT, /* in $SYNC_PORT, %al */
0x0f, 0xaa, /* rsm */
diff --git a/tools/testing/selftests/kvm/x86/state_test.c b/tools/testing/selftests/kvm/x86/state_test.c
index 62e14843e5af..409c6cc9f921 100644
--- a/tools/testing/selftests/kvm/x86/state_test.c
+++ b/tools/testing/selftests/kvm/x86/state_test.c
@@ -145,7 +145,7 @@ static void __attribute__((__flatten__)) guest_code(void *arg)
if (this_cpu_has(X86_FEATURE_XSAVE)) {
u64 supported_xcr0 = this_cpu_supported_xcr0();
- uint8_t buffer[PAGE_SIZE];
+ u8 buffer[PAGE_SIZE];
memset(buffer, 0xcc, sizeof(buffer));
@@ -331,7 +331,7 @@ int main(int argc, char *argv[])
* supported features, even if something goes awry in saving
* the original snapshot.
*/
- xstate_bv = (void *)&((uint8_t *)state->xsave->region)[512];
+ xstate_bv = (void *)&((u8 *)state->xsave->region)[512];
saved_xstate_bv = *xstate_bv;
vcpuN = __vm_vcpu_add(vm, vcpu->id + 1);
diff --git a/tools/testing/selftests/kvm/x86/userspace_io_test.c b/tools/testing/selftests/kvm/x86/userspace_io_test.c
index be7d72f3c029..9c5a87576c2e 100644
--- a/tools/testing/selftests/kvm/x86/userspace_io_test.c
+++ b/tools/testing/selftests/kvm/x86/userspace_io_test.c
@@ -10,7 +10,7 @@
#include "kvm_util.h"
#include "processor.h"
-static void guest_ins_port80(uint8_t *buffer, unsigned int count)
+static void guest_ins_port80(u8 *buffer, unsigned int count)
{
unsigned long end;
@@ -26,7 +26,7 @@ static void guest_ins_port80(uint8_t *buffer, unsigned int count)
static void guest_code(void)
{
- uint8_t buffer[8192];
+ u8 buffer[8192];
int i;
/*
diff --git a/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c b/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c
index 98b8d285dbb7..2808ce727e5f 100644
--- a/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c
+++ b/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c
@@ -23,21 +23,21 @@ struct kvm_msr_filter filter_allow = {
.nmsrs = 1,
/* Test an MSR the kernel knows about. */
.base = MSR_IA32_XSS,
- .bitmap = (uint8_t*)&deny_bits,
+ .bitmap = (u8 *)&deny_bits,
}, {
.flags = KVM_MSR_FILTER_READ |
KVM_MSR_FILTER_WRITE,
.nmsrs = 1,
/* Test an MSR the kernel doesn't know about. */
.base = MSR_IA32_FLUSH_CMD,
- .bitmap = (uint8_t*)&deny_bits,
+ .bitmap = (u8 *)&deny_bits,
}, {
.flags = KVM_MSR_FILTER_READ |
KVM_MSR_FILTER_WRITE,
.nmsrs = 1,
/* Test a fabricated MSR that no one knows about. */
.base = MSR_NON_EXISTENT,
- .bitmap = (uint8_t*)&deny_bits,
+ .bitmap = (u8 *)&deny_bits,
},
},
};
@@ -49,7 +49,7 @@ struct kvm_msr_filter filter_fs = {
.flags = KVM_MSR_FILTER_READ,
.nmsrs = 1,
.base = MSR_FS_BASE,
- .bitmap = (uint8_t*)&deny_bits,
+ .bitmap = (u8 *)&deny_bits,
},
},
};
@@ -61,7 +61,7 @@ struct kvm_msr_filter filter_gs = {
.flags = KVM_MSR_FILTER_READ,
.nmsrs = 1,
.base = MSR_GS_BASE,
- .bitmap = (uint8_t*)&deny_bits,
+ .bitmap = (u8 *)&deny_bits,
},
},
};
@@ -77,7 +77,7 @@ static u8 bitmap_c0000000[KVM_MSR_FILTER_MAX_BITMAP_SIZE];
static u8 bitmap_c0000000_read[KVM_MSR_FILTER_MAX_BITMAP_SIZE];
static u8 bitmap_deadbeef[1] = { 0x1 };
-static void deny_msr(uint8_t *bitmap, u32 msr)
+static void deny_msr(u8 *bitmap, u32 msr)
{
u32 idx = msr & (KVM_MSR_FILTER_MAX_BITMAP_SIZE - 1);
@@ -732,7 +732,7 @@ static void run_msr_filter_flag_test(struct kvm_vm *vm)
.flags = KVM_MSR_FILTER_READ,
.nmsrs = 1,
.base = 0,
- .bitmap = (uint8_t *)&deny_bits,
+ .bitmap = (u8 *)&deny_bits,
},
},
};
diff --git a/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c b/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c
index 1f3638c6ee14..d004108dbdc6 100644
--- a/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c
@@ -54,7 +54,7 @@ static const union perf_capabilities format_caps = {
static void guest_test_perf_capabilities_gp(u64 val)
{
- uint8_t vector = wrmsr_safe(MSR_IA32_PERF_CAPABILITIES, val);
+ u8 vector = wrmsr_safe(MSR_IA32_PERF_CAPABILITIES, val);
__GUEST_ASSERT(vector == GP_VECTOR,
"Expected #GP for value '0x%lx', got %s",
diff --git a/tools/testing/selftests/kvm/x86/xapic_tpr_test.c b/tools/testing/selftests/kvm/x86/xapic_tpr_test.c
index af1fe833ad4f..ab25db2235d5 100644
--- a/tools/testing/selftests/kvm/x86/xapic_tpr_test.c
+++ b/tools/testing/selftests/kvm/x86/xapic_tpr_test.c
@@ -69,7 +69,7 @@ static void tpr_guest_irq_queue(void)
}
}
-static uint8_t tpr_guest_tpr_get(void)
+static u8 tpr_guest_tpr_get(void)
{
u32 taskpri;
@@ -81,7 +81,7 @@ static uint8_t tpr_guest_tpr_get(void)
return GET_APIC_PRI(taskpri);
}
-static uint8_t tpr_guest_ppr_get(void)
+static u8 tpr_guest_ppr_get(void)
{
u32 procpri;
@@ -93,7 +93,7 @@ static uint8_t tpr_guest_ppr_get(void)
return GET_APIC_PRI(procpri);
}
-static uint8_t tpr_guest_cr8_get(void)
+static u8 tpr_guest_cr8_get(void)
{
u64 cr8;
@@ -104,7 +104,7 @@ static uint8_t tpr_guest_cr8_get(void)
static void tpr_guest_check_tpr_ppr_cr8_equal(void)
{
- uint8_t tpr;
+ u8 tpr;
tpr = tpr_guest_tpr_get();
@@ -157,19 +157,19 @@ static void tpr_guest_code(void)
GUEST_DONE();
}
-static uint8_t lapic_tpr_get(struct kvm_lapic_state *xapic)
+static u8 lapic_tpr_get(struct kvm_lapic_state *xapic)
{
return GET_APIC_PRI(*((u32 *)&xapic->regs[APIC_TASKPRI]));
}
-static void lapic_tpr_set(struct kvm_lapic_state *xapic, uint8_t val)
+static void lapic_tpr_set(struct kvm_lapic_state *xapic, u8 val)
{
u32 *taskpri = (u32 *)&xapic->regs[APIC_TASKPRI];
*taskpri = SET_APIC_PRI(*taskpri, val);
}
-static uint8_t sregs_tpr(struct kvm_sregs *sregs)
+static u8 sregs_tpr(struct kvm_sregs *sregs)
{
return sregs->cr8 & GENMASK(3, 0);
}
@@ -197,7 +197,7 @@ static void test_tpr_check_tpr_cr8_equal(struct kvm_vcpu *vcpu)
static void test_tpr_set_tpr_for_irq(struct kvm_vcpu *vcpu, bool mask)
{
struct kvm_lapic_state xapic;
- uint8_t tpr;
+ u8 tpr;
static_assert(IRQ_VECTOR >= 16, "invalid IRQ vector number");
tpr = IRQ_VECTOR / 16;
diff --git a/tools/testing/selftests/kvm/x86/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86/xen_shinfo_test.c
index c6d00205b59d..5076f6a75455 100644
--- a/tools/testing/selftests/kvm/x86/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86/xen_shinfo_test.c
@@ -133,8 +133,8 @@ struct arch_vcpu_info {
};
struct vcpu_info {
- uint8_t evtchn_upcall_pending;
- uint8_t evtchn_upcall_mask;
+ u8 evtchn_upcall_pending;
+ u8 evtchn_upcall_mask;
unsigned long evtchn_pending_sel;
struct arch_vcpu_info arch;
struct pvclock_vcpu_time_info time;
--
2.54.0.rc1.555.g9c883467ad-goog
* [PATCH v3 11/19] KVM: selftests: Drop "vaddr_" from APIs that allocate memory for a given VM
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (7 preceding siblings ...)
2026-04-20 21:19 ` [PATCH v3 10/19] KVM: selftests: Use u8 instead of uint8_t Sean Christopherson
@ 2026-04-20 21:19 ` Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 12/19] KVM: selftests: Rename vm_vaddr_unused_gap() => vm_unused_gva_gap() Sean Christopherson
` (7 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:19 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
Now that KVM selftests use gva_t instead of vm_vaddr_t, drop "vaddr_" from
the core memory allocation APIs as the information is extraneous and does
more harm than good. E.g. the APIs don't _just_ allocate virtual memory;
they also allocate backing physical memory and install mappings in the guest
page tables. And as proven by kmalloc() and malloc(), developers generally
expect that allocations come with a working virtual address.
Opportunistically clean up the function comment for vm_alloc(), and drop
the misleading and superfluous comments for its wrappers.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/arm64/vgic_irq.c | 4 +-
.../testing/selftests/kvm/include/kvm_util.h | 16 ++--
.../selftests/kvm/lib/arm64/processor.c | 10 +--
tools/testing/selftests/kvm/lib/elf.c | 3 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 84 +++++--------------
.../selftests/kvm/lib/loongarch/processor.c | 13 +--
.../selftests/kvm/lib/riscv/processor.c | 6 +-
.../selftests/kvm/lib/s390/processor.c | 2 +-
.../testing/selftests/kvm/lib/ucall_common.c | 4 +-
tools/testing/selftests/kvm/lib/x86/hyperv.c | 8 +-
.../testing/selftests/kvm/lib/x86/processor.c | 12 +--
tools/testing/selftests/kvm/lib/x86/svm.c | 8 +-
tools/testing/selftests/kvm/lib/x86/vmx.c | 16 ++--
tools/testing/selftests/kvm/s390/memop.c | 12 +--
tools/testing/selftests/kvm/s390/tprot.c | 4 +-
tools/testing/selftests/kvm/x86/amx_test.c | 6 +-
tools/testing/selftests/kvm/x86/cpuid_test.c | 2 +-
.../testing/selftests/kvm/x86/hyperv_clock.c | 2 +-
.../testing/selftests/kvm/x86/hyperv_evmcs.c | 2 +-
.../kvm/x86/hyperv_extended_hypercalls.c | 4 +-
.../selftests/kvm/x86/hyperv_features.c | 6 +-
tools/testing/selftests/kvm/x86/hyperv_ipi.c | 2 +-
.../selftests/kvm/x86/hyperv_svm_test.c | 2 +-
.../selftests/kvm/x86/hyperv_tlb_flush.c | 6 +-
.../selftests/kvm/x86/kvm_clock_test.c | 2 +-
.../selftests/kvm/x86/sev_smoke_test.c | 4 +-
.../kvm/x86/svm_nested_soft_inject_test.c | 2 +-
.../selftests/kvm/x86/xapic_ipi_test.c | 2 +-
28 files changed, 102 insertions(+), 142 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/vgic_irq.c b/tools/testing/selftests/kvm/arm64/vgic_irq.c
index 8a9dd79123d4..5e231998617e 100644
--- a/tools/testing/selftests/kvm/arm64/vgic_irq.c
+++ b/tools/testing/selftests/kvm/arm64/vgic_irq.c
@@ -771,7 +771,7 @@ static void test_vgic(u32 nr_irqs, bool level_sensitive, bool eoi_split)
vcpu_init_descriptor_tables(vcpu);
/* Setup the guest args page (so it gets the args). */
- args_gva = vm_vaddr_alloc_page(vm);
+ args_gva = vm_alloc_page(vm);
memcpy(addr_gva2hva(vm, args_gva), &args, sizeof(args));
vcpu_args_set(vcpu, 1, args_gva);
@@ -997,7 +997,7 @@ static void test_vgic_two_cpus(void *gcode)
vcpu_init_descriptor_tables(vcpus[1]);
/* Setup the guest args page (so it gets the args). */
- args_gva = vm_vaddr_alloc_page(vm);
+ args_gva = vm_alloc_page(vm);
memcpy(addr_gva2hva(vm, args_gva), &args, sizeof(args));
vcpu_args_set(vcpus[0], 2, args_gva, 0);
vcpu_args_set(vcpus[1], 2, args_gva, 1);
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 676e3ccb1462..8f7afc34ea8d 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -716,14 +716,14 @@ void vm_mem_region_delete(struct kvm_vm *vm, u32 slot);
struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, u32 vcpu_id);
void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
gva_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
-gva_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
-gva_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
- enum kvm_mem_region_type type);
-gva_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
- enum kvm_mem_region_type type);
-gva_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
-gva_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type);
-gva_t vm_vaddr_alloc_page(struct kvm_vm *vm);
+gva_t vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
+gva_t __vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+ enum kvm_mem_region_type type);
+gva_t vm_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+ enum kvm_mem_region_type type);
+gva_t vm_alloc_pages(struct kvm_vm *vm, int nr_pages);
+gva_t __vm_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type);
+gva_t vm_alloc_page(struct kvm_vm *vm);
void virt_map(struct kvm_vm *vm, u64 vaddr, u64 paddr,
unsigned int npages);
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index 7ba3a48911e3..c4f0e37f2907 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -422,9 +422,9 @@ static struct kvm_vcpu *__aarch64_vcpu_add(struct kvm_vm *vm, u32 vcpu_id,
stack_size = vm->page_size == 4096 ? DEFAULT_STACK_PGS * vm->page_size :
vm->page_size;
- stack_vaddr = __vm_vaddr_alloc(vm, stack_size,
- DEFAULT_ARM64_GUEST_STACK_VADDR_MIN,
- MEM_REGION_DATA);
+ stack_vaddr = __vm_alloc(vm, stack_size,
+ DEFAULT_ARM64_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
aarch64_vcpu_setup(vcpu, init);
@@ -536,8 +536,8 @@ void route_exception(struct ex_regs *regs, int vector)
void vm_init_descriptor_tables(struct kvm_vm *vm)
{
- vm->handlers = __vm_vaddr_alloc(vm, sizeof(struct handlers),
- vm->page_size, MEM_REGION_DATA);
+ vm->handlers = __vm_alloc(vm, sizeof(struct handlers), vm->page_size,
+ MEM_REGION_DATA);
*(gva_t *)addr_gva2hva(vm, (gva_t)(&exception_handlers)) = vm->handlers;
}
diff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c
index 0f2710cda9d8..2288480f4e1e 100644
--- a/tools/testing/selftests/kvm/lib/elf.c
+++ b/tools/testing/selftests/kvm/lib/elf.c
@@ -162,8 +162,7 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
seg_vend |= vm->page_size - 1;
size_t seg_size = seg_vend - seg_vstart + 1;
- gva_t vaddr = __vm_vaddr_alloc(vm, seg_size, seg_vstart,
- MEM_REGION_CODE);
+ gva_t vaddr = __vm_alloc(vm, seg_size, seg_vstart, MEM_REGION_CODE);
TEST_ASSERT(vaddr == seg_vstart, "Unable to allocate "
"virtual memory for segment at requested min addr,\n"
" segment idx: %u\n"
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 050ae9c92681..b304c0e54837 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1450,8 +1450,8 @@ gva_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
return pgidx_start * vm->page_size;
}
-static gva_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
- enum kvm_mem_region_type type, bool protected)
+static gva_t ____vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+ enum kvm_mem_region_type type, bool protected)
{
u64 pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
@@ -1476,84 +1476,44 @@ static gva_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
return vaddr_start;
}
-gva_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
- enum kvm_mem_region_type type)
+gva_t __vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+ enum kvm_mem_region_type type)
{
- return ____vm_vaddr_alloc(vm, sz, vaddr_min, type,
- vm_arch_has_protected_memory(vm));
+ return ____vm_alloc(vm, sz, vaddr_min, type,
+ vm_arch_has_protected_memory(vm));
}
-gva_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
- enum kvm_mem_region_type type)
+gva_t vm_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+ enum kvm_mem_region_type type)
{
- return ____vm_vaddr_alloc(vm, sz, vaddr_min, type, false);
+ return ____vm_alloc(vm, sz, vaddr_min, type, false);
}
/*
- * VM Virtual Address Allocate
- *
- * Input Args:
- * vm - Virtual Machine
- * sz - Size in bytes
- * vaddr_min - Minimum starting virtual address
- *
- * Output Args: None
- *
- * Return:
- * Starting guest virtual address
- *
- * Allocates at least sz bytes within the virtual address space of the vm
- * given by vm. The allocated bytes are mapped to a virtual address >=
- * the address given by vaddr_min. Note that each allocation uses a
- * a unique set of pages, with the minimum real allocation being at least
- * a page. The allocated physical space comes from the TEST_DATA memory region.
+ * Allocates at least sz bytes within the virtual address space of the VM
+ * given by @vm. The allocated bytes are mapped to a virtual address >= the
+ * address given by @vaddr_min.  Note that each allocation uses a unique set
+ * of pages, with the minimum real allocation being at least a page. The
+ * allocated physical space comes from the TEST_DATA memory region.
*/
-gva_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
+gva_t vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
{
- return __vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA);
+ return __vm_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA);
}
-/*
- * VM Virtual Address Allocate Pages
- *
- * Input Args:
- * vm - Virtual Machine
- *
- * Output Args: None
- *
- * Return:
- * Starting guest virtual address
- *
- * Allocates at least N system pages worth of bytes within the virtual address
- * space of the vm.
- */
-gva_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages)
+gva_t vm_alloc_pages(struct kvm_vm *vm, int nr_pages)
{
- return vm_vaddr_alloc(vm, nr_pages * getpagesize(), KVM_UTIL_MIN_VADDR);
+ return vm_alloc(vm, nr_pages * getpagesize(), KVM_UTIL_MIN_VADDR);
}
-gva_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type)
+gva_t __vm_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type)
{
- return __vm_vaddr_alloc(vm, getpagesize(), KVM_UTIL_MIN_VADDR, type);
+ return __vm_alloc(vm, getpagesize(), KVM_UTIL_MIN_VADDR, type);
}
-/*
- * VM Virtual Address Allocate Page
- *
- * Input Args:
- * vm - Virtual Machine
- *
- * Output Args: None
- *
- * Return:
- * Starting guest virtual address
- *
- * Allocates at least one system page worth of bytes within the virtual address
- * space of the vm.
- */
-gva_t vm_vaddr_alloc_page(struct kvm_vm *vm)
+gva_t vm_alloc_page(struct kvm_vm *vm)
{
- return vm_vaddr_alloc_pages(vm, 1);
+ return vm_alloc_pages(vm, 1);
}
/*
diff --git a/tools/testing/selftests/kvm/lib/loongarch/processor.c b/tools/testing/selftests/kvm/lib/loongarch/processor.c
index 2982196db3b2..318520f1f1b9 100644
--- a/tools/testing/selftests/kvm/lib/loongarch/processor.c
+++ b/tools/testing/selftests/kvm/lib/loongarch/processor.c
@@ -206,8 +206,9 @@ void vm_init_descriptor_tables(struct kvm_vm *vm)
{
void *addr;
- vm->handlers = __vm_vaddr_alloc(vm, sizeof(struct handlers),
- LOONGARCH_GUEST_STACK_VADDR_MIN, MEM_REGION_DATA);
+ vm->handlers = __vm_alloc(vm, sizeof(struct handlers),
+ LOONGARCH_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
addr = addr_gva2hva(vm, vm->handlers);
memset(addr, 0, vm->page_size);
@@ -354,8 +355,8 @@ void loongarch_vcpu_setup(struct kvm_vcpu *vcpu)
loongarch_set_csr(vcpu, LOONGARCH_CSR_STLBPGSIZE, PS_DEFAULT_SIZE);
/* LOONGARCH_CSR_KS1 is used for exception stack */
- val = __vm_vaddr_alloc(vm, vm->page_size,
- LOONGARCH_GUEST_STACK_VADDR_MIN, MEM_REGION_DATA);
+ val = __vm_alloc(vm, vm->page_size, LOONGARCH_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
TEST_ASSERT(val != 0, "No memory for exception stack");
val = val + vm->page_size;
loongarch_set_csr(vcpu, LOONGARCH_CSR_KS1, val);
@@ -378,8 +379,8 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
vcpu = __vm_vcpu_add(vm, vcpu_id);
stack_size = vm->page_size;
- stack_vaddr = __vm_vaddr_alloc(vm, stack_size,
- LOONGARCH_GUEST_STACK_VADDR_MIN, MEM_REGION_DATA);
+ stack_vaddr = __vm_alloc(vm, stack_size,
+ LOONGARCH_GUEST_STACK_VADDR_MIN, MEM_REGION_DATA);
TEST_ASSERT(stack_vaddr != 0, "No memory for vm stack");
loongarch_vcpu_setup(vcpu);
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index 7336d5a20419..38eb8302922a 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -322,7 +322,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
stack_size = vm->page_size == 4096 ? DEFAULT_STACK_PGS * vm->page_size :
vm->page_size;
- stack_vaddr = __vm_vaddr_alloc(vm, stack_size,
+ stack_vaddr = __vm_alloc(vm, stack_size,
DEFAULT_RISCV_GUEST_STACK_VADDR_MIN,
MEM_REGION_DATA);
@@ -449,8 +449,8 @@ void vcpu_init_vector_tables(struct kvm_vcpu *vcpu)
void vm_init_vector_tables(struct kvm_vm *vm)
{
- vm->handlers = __vm_vaddr_alloc(vm, sizeof(struct handlers),
- vm->page_size, MEM_REGION_DATA);
+ vm->handlers = __vm_alloc(vm, sizeof(struct handlers), vm->page_size,
+ MEM_REGION_DATA);
*(gva_t *)addr_gva2hva(vm, (gva_t)(&exception_handlers)) = vm->handlers;
}
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index d35f23a4db12..4ae0a39f426f 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -171,7 +171,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
vm->page_size);
- stack_vaddr = __vm_vaddr_alloc(vm, stack_size,
+ stack_vaddr = __vm_alloc(vm, stack_size,
DEFAULT_GUEST_STACK_VADDR_MIN,
MEM_REGION_DATA);
diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index b16b5c5b3a1e..4a8a5bc40a45 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -32,8 +32,8 @@ void ucall_init(struct kvm_vm *vm, gpa_t mmio_gpa)
gva_t vaddr;
int i;
- vaddr = vm_vaddr_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR,
- MEM_REGION_DATA);
+ vaddr = vm_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR,
+ MEM_REGION_DATA);
hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);
memset(hdr, 0, sizeof(*hdr));
diff --git a/tools/testing/selftests/kvm/lib/x86/hyperv.c b/tools/testing/selftests/kvm/lib/x86/hyperv.c
index c2806bed43c9..d200c5c26e2e 100644
--- a/tools/testing/selftests/kvm/lib/x86/hyperv.c
+++ b/tools/testing/selftests/kvm/lib/x86/hyperv.c
@@ -78,21 +78,21 @@ bool kvm_hv_cpu_has(struct kvm_x86_cpu_feature feature)
struct hyperv_test_pages *vcpu_alloc_hyperv_test_pages(struct kvm_vm *vm,
gva_t *p_hv_pages_gva)
{
- gva_t hv_pages_gva = vm_vaddr_alloc_page(vm);
+ gva_t hv_pages_gva = vm_alloc_page(vm);
struct hyperv_test_pages *hv = addr_gva2hva(vm, hv_pages_gva);
/* Setup of a region of guest memory for the VP Assist page. */
- hv->vp_assist = (void *)vm_vaddr_alloc_page(vm);
+ hv->vp_assist = (void *)vm_alloc_page(vm);
hv->vp_assist_hva = addr_gva2hva(vm, (uintptr_t)hv->vp_assist);
hv->vp_assist_gpa = addr_gva2gpa(vm, (uintptr_t)hv->vp_assist);
/* Setup of a region of guest memory for the partition assist page. */
- hv->partition_assist = (void *)vm_vaddr_alloc_page(vm);
+ hv->partition_assist = (void *)vm_alloc_page(vm);
hv->partition_assist_hva = addr_gva2hva(vm, (uintptr_t)hv->partition_assist);
hv->partition_assist_gpa = addr_gva2gpa(vm, (uintptr_t)hv->partition_assist);
/* Setup of a region of guest memory for the enlightened VMCS. */
- hv->enlightened_vmcs = (void *)vm_vaddr_alloc_page(vm);
+ hv->enlightened_vmcs = (void *)vm_alloc_page(vm);
hv->enlightened_vmcs_hva = addr_gva2hva(vm, (uintptr_t)hv->enlightened_vmcs);
hv->enlightened_vmcs_gpa = addr_gva2gpa(vm, (uintptr_t)hv->enlightened_vmcs);
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 723a5200c4bb..50848112932c 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -746,10 +746,10 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
struct kvm_segment seg;
int i;
- vm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
- vm->arch.idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
- vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
- vm->arch.tss = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
+ vm->arch.gdt = __vm_alloc_page(vm, MEM_REGION_DATA);
+ vm->arch.idt = __vm_alloc_page(vm, MEM_REGION_DATA);
+ vm->handlers = __vm_alloc_page(vm, MEM_REGION_DATA);
+ vm->arch.tss = __vm_alloc_page(vm, MEM_REGION_DATA);
/* Handlers have the same address in both address spaces.*/
for (i = 0; i < NUM_INTERRUPTS; i++)
@@ -828,7 +828,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
gva_t stack_vaddr;
struct kvm_vcpu *vcpu;
- stack_vaddr = __vm_vaddr_alloc(vm, DEFAULT_STACK_PGS * getpagesize(),
+ stack_vaddr = __vm_alloc(vm, DEFAULT_STACK_PGS * getpagesize(),
DEFAULT_GUEST_STACK_VADDR_MIN,
MEM_REGION_DATA);
@@ -844,7 +844,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
* may need to subtract 4 bytes instead of 8 bytes.
*/
TEST_ASSERT(IS_ALIGNED(stack_vaddr, PAGE_SIZE),
- "__vm_vaddr_alloc() did not provide a page-aligned address");
+ "__vm_alloc() did not provide a page-aligned address");
stack_vaddr -= 8;
vcpu = __vm_vcpu_add(vm, vcpu_id);
diff --git a/tools/testing/selftests/kvm/lib/x86/svm.c b/tools/testing/selftests/kvm/lib/x86/svm.c
index 620bdc5d3cc2..3b01605ab016 100644
--- a/tools/testing/selftests/kvm/lib/x86/svm.c
+++ b/tools/testing/selftests/kvm/lib/x86/svm.c
@@ -30,18 +30,18 @@ u64 rflags;
struct svm_test_data *
vcpu_alloc_svm(struct kvm_vm *vm, gva_t *p_svm_gva)
{
- gva_t svm_gva = vm_vaddr_alloc_page(vm);
+ gva_t svm_gva = vm_alloc_page(vm);
struct svm_test_data *svm = addr_gva2hva(vm, svm_gva);
- svm->vmcb = (void *)vm_vaddr_alloc_page(vm);
+ svm->vmcb = (void *)vm_alloc_page(vm);
svm->vmcb_hva = addr_gva2hva(vm, (uintptr_t)svm->vmcb);
svm->vmcb_gpa = addr_gva2gpa(vm, (uintptr_t)svm->vmcb);
- svm->save_area = (void *)vm_vaddr_alloc_page(vm);
+ svm->save_area = (void *)vm_alloc_page(vm);
svm->save_area_hva = addr_gva2hva(vm, (uintptr_t)svm->save_area);
svm->save_area_gpa = addr_gva2gpa(vm, (uintptr_t)svm->save_area);
- svm->msr = (void *)vm_vaddr_alloc_page(vm);
+ svm->msr = (void *)vm_alloc_page(vm);
svm->msr_hva = addr_gva2hva(vm, (uintptr_t)svm->msr);
svm->msr_gpa = addr_gva2gpa(vm, (uintptr_t)svm->msr);
memset(svm->msr_hva, 0, getpagesize());
diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index b2f83c3f7f16..67642759e4a0 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -81,37 +81,37 @@ void vm_enable_ept(struct kvm_vm *vm)
struct vmx_pages *
vcpu_alloc_vmx(struct kvm_vm *vm, gva_t *p_vmx_gva)
{
- gva_t vmx_gva = vm_vaddr_alloc_page(vm);
+ gva_t vmx_gva = vm_alloc_page(vm);
struct vmx_pages *vmx = addr_gva2hva(vm, vmx_gva);
/* Setup of a region of guest memory for the vmxon region. */
- vmx->vmxon = (void *)vm_vaddr_alloc_page(vm);
+ vmx->vmxon = (void *)vm_alloc_page(vm);
vmx->vmxon_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmxon);
vmx->vmxon_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmxon);
/* Setup of a region of guest memory for a vmcs. */
- vmx->vmcs = (void *)vm_vaddr_alloc_page(vm);
+ vmx->vmcs = (void *)vm_alloc_page(vm);
vmx->vmcs_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmcs);
vmx->vmcs_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmcs);
/* Setup of a region of guest memory for the MSR bitmap. */
- vmx->msr = (void *)vm_vaddr_alloc_page(vm);
+ vmx->msr = (void *)vm_alloc_page(vm);
vmx->msr_hva = addr_gva2hva(vm, (uintptr_t)vmx->msr);
vmx->msr_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->msr);
memset(vmx->msr_hva, 0, getpagesize());
/* Setup of a region of guest memory for the shadow VMCS. */
- vmx->shadow_vmcs = (void *)vm_vaddr_alloc_page(vm);
+ vmx->shadow_vmcs = (void *)vm_alloc_page(vm);
vmx->shadow_vmcs_hva = addr_gva2hva(vm, (uintptr_t)vmx->shadow_vmcs);
vmx->shadow_vmcs_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->shadow_vmcs);
/* Setup of a region of guest memory for the VMREAD and VMWRITE bitmaps. */
- vmx->vmread = (void *)vm_vaddr_alloc_page(vm);
+ vmx->vmread = (void *)vm_alloc_page(vm);
vmx->vmread_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmread);
vmx->vmread_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmread);
memset(vmx->vmread_hva, 0, getpagesize());
- vmx->vmwrite = (void *)vm_vaddr_alloc_page(vm);
+ vmx->vmwrite = (void *)vm_alloc_page(vm);
vmx->vmwrite_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmwrite);
vmx->vmwrite_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmwrite);
memset(vmx->vmwrite_hva, 0, getpagesize());
@@ -390,7 +390,7 @@ bool kvm_cpu_has_ept(void)
void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm)
{
- vmx->apic_access = (void *)vm_vaddr_alloc_page(vm);
+ vmx->apic_access = (void *)vm_alloc_page(vm);
vmx->apic_access_hva = addr_gva2hva(vm, (uintptr_t)vmx->apic_access);
vmx->apic_access_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->apic_access);
}
diff --git a/tools/testing/selftests/kvm/s390/memop.c b/tools/testing/selftests/kvm/s390/memop.c
index 9855b5bfb5ed..0244848621b3 100644
--- a/tools/testing/selftests/kvm/s390/memop.c
+++ b/tools/testing/selftests/kvm/s390/memop.c
@@ -880,8 +880,8 @@ static void test_copy_key_fetch_prot_override(void)
struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
gva_t guest_0_page, guest_last_page;
- guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);
- guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
+ guest_0_page = vm_alloc(t.kvm_vm, PAGE_SIZE, 0);
+ guest_last_page = vm_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
if (guest_0_page != 0 || guest_last_page != last_page_addr) {
print_skip("did not allocate guest pages at required positions");
goto out;
@@ -919,8 +919,8 @@ static void test_errors_key_fetch_prot_override_not_enabled(void)
struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
gva_t guest_0_page, guest_last_page;
- guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);
- guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
+ guest_0_page = vm_alloc(t.kvm_vm, PAGE_SIZE, 0);
+ guest_last_page = vm_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
if (guest_0_page != 0 || guest_last_page != last_page_addr) {
print_skip("did not allocate guest pages at required positions");
goto out;
@@ -940,8 +940,8 @@ static void test_errors_key_fetch_prot_override_enabled(void)
struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
gva_t guest_0_page, guest_last_page;
- guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);
- guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
+ guest_0_page = vm_alloc(t.kvm_vm, PAGE_SIZE, 0);
+ guest_last_page = vm_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
if (guest_0_page != 0 || guest_last_page != last_page_addr) {
print_skip("did not allocate guest pages at required positions");
goto out;
diff --git a/tools/testing/selftests/kvm/s390/tprot.c b/tools/testing/selftests/kvm/s390/tprot.c
index e021e198b28e..8054d2b178f0 100644
--- a/tools/testing/selftests/kvm/s390/tprot.c
+++ b/tools/testing/selftests/kvm/s390/tprot.c
@@ -146,7 +146,7 @@ static enum stage perform_next_stage(int *i, bool mapped_0)
/*
* Some fetch protection override tests require that page 0
* be mapped, however, when the hosts tries to map that page via
- * vm_vaddr_alloc, it may happen that some other page gets mapped
+ * vm_alloc, it may happen that some other page gets mapped
* instead.
* In order to skip these tests we detect this inside the guest
*/
@@ -219,7 +219,7 @@ int main(int argc, char *argv[])
mprotect(addr_gva2hva(vm, (gva_t)pages), PAGE_SIZE * 2, PROT_READ);
HOST_SYNC(vcpu, TEST_SIMPLE);
- guest_0_page = vm_vaddr_alloc(vm, PAGE_SIZE, 0);
+ guest_0_page = vm_alloc(vm, PAGE_SIZE, 0);
if (guest_0_page != 0) {
/* Use NO_TAP so we don't get a PASS print */
HOST_SYNC_NO_TAP(vcpu, STAGE_INIT_FETCH_PROT_OVERRIDE);
diff --git a/tools/testing/selftests/kvm/x86/amx_test.c b/tools/testing/selftests/kvm/x86/amx_test.c
index 9ecf7515442b..4e63da2b1889 100644
--- a/tools/testing/selftests/kvm/x86/amx_test.c
+++ b/tools/testing/selftests/kvm/x86/amx_test.c
@@ -263,15 +263,15 @@ int main(int argc, char *argv[])
vcpu_regs_get(vcpu, ®s1);
/* amx cfg for guest_code */
- amx_cfg = vm_vaddr_alloc_page(vm);
+ amx_cfg = vm_alloc_page(vm);
memset(addr_gva2hva(vm, amx_cfg), 0x0, getpagesize());
/* amx tiledata for guest_code */
- tiledata = vm_vaddr_alloc_pages(vm, 2);
+ tiledata = vm_alloc_pages(vm, 2);
memset(addr_gva2hva(vm, tiledata), rand() | 1, 2 * getpagesize());
/* XSAVE state for guest_code */
- xstate = vm_vaddr_alloc_pages(vm, DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE));
+ xstate = vm_alloc_pages(vm, DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE));
memset(addr_gva2hva(vm, xstate), 0, PAGE_SIZE * DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE));
vcpu_args_set(vcpu, 3, amx_cfg, tiledata, xstate);
diff --git a/tools/testing/selftests/kvm/x86/cpuid_test.c b/tools/testing/selftests/kvm/x86/cpuid_test.c
index 3c45249a42c4..ef0ddd240887 100644
--- a/tools/testing/selftests/kvm/x86/cpuid_test.c
+++ b/tools/testing/selftests/kvm/x86/cpuid_test.c
@@ -143,7 +143,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, int stage)
struct kvm_cpuid2 *vcpu_alloc_cpuid(struct kvm_vm *vm, gva_t *p_gva, struct kvm_cpuid2 *cpuid)
{
int size = sizeof(*cpuid) + cpuid->nent * sizeof(cpuid->entries[0]);
- gva_t gva = vm_vaddr_alloc(vm, size, KVM_UTIL_MIN_VADDR);
+ gva_t gva = vm_alloc(vm, size, KVM_UTIL_MIN_VADDR);
struct kvm_cpuid2 *guest_cpuids = addr_gva2hva(vm, gva);
memcpy(guest_cpuids, cpuid, size);
diff --git a/tools/testing/selftests/kvm/x86/hyperv_clock.c b/tools/testing/selftests/kvm/x86/hyperv_clock.c
index 6bb1ca11256f..c083cea546dc 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_clock.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_clock.c
@@ -218,7 +218,7 @@ int main(void)
vcpu_set_hv_cpuid(vcpu);
- tsc_page_gva = vm_vaddr_alloc_page(vm);
+ tsc_page_gva = vm_alloc_page(vm);
memset(addr_gva2hva(vm, tsc_page_gva), 0x0, getpagesize());
TEST_ASSERT((addr_gva2gpa(vm, tsc_page_gva) & (getpagesize() - 1)) == 0,
"TSC page has to be page aligned");
diff --git a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
index 061d9e1f02c0..c7fa114aee20 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
@@ -246,7 +246,7 @@ int main(int argc, char *argv[])
vm = vm_create_with_one_vcpu(&vcpu, guest_code);
- hcall_page = vm_vaddr_alloc_pages(vm, 1);
+ hcall_page = vm_alloc_pages(vm, 1);
memset(addr_gva2hva(vm, hcall_page), 0x0, getpagesize());
vcpu_set_hv_cpuid(vcpu);
diff --git a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
index be7a2a631789..ae047db7b1be 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
@@ -57,11 +57,11 @@ int main(void)
vcpu_set_hv_cpuid(vcpu);
/* Hypercall input */
- hcall_in_page = vm_vaddr_alloc_pages(vm, 1);
+ hcall_in_page = vm_alloc_pages(vm, 1);
memset(addr_gva2hva(vm, hcall_in_page), 0x0, vm->page_size);
/* Hypercall output */
- hcall_out_page = vm_vaddr_alloc_pages(vm, 1);
+ hcall_out_page = vm_alloc_pages(vm, 1);
memset(addr_gva2hva(vm, hcall_out_page), 0x0, vm->page_size);
vcpu_args_set(vcpu, 3, addr_gva2gpa(vm, hcall_in_page),
diff --git a/tools/testing/selftests/kvm/x86/hyperv_features.c b/tools/testing/selftests/kvm/x86/hyperv_features.c
index 52dbd52ce606..7347f1fe5157 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_features.c
@@ -141,7 +141,7 @@ static void guest_test_msrs_access(void)
while (true) {
vm = vm_create_with_one_vcpu(&vcpu, guest_msr);
- msr_gva = vm_vaddr_alloc_page(vm);
+ msr_gva = vm_alloc_page(vm);
memset(addr_gva2hva(vm, msr_gva), 0x0, getpagesize());
msr = addr_gva2hva(vm, msr_gva);
@@ -530,10 +530,10 @@ static void guest_test_hcalls_access(void)
vm = vm_create_with_one_vcpu(&vcpu, guest_hcall);
/* Hypercall input/output */
- hcall_page = vm_vaddr_alloc_pages(vm, 2);
+ hcall_page = vm_alloc_pages(vm, 2);
memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());
- hcall_params = vm_vaddr_alloc_page(vm);
+ hcall_params = vm_alloc_page(vm);
memset(addr_gva2hva(vm, hcall_params), 0x0, getpagesize());
hcall = addr_gva2hva(vm, hcall_params);
diff --git a/tools/testing/selftests/kvm/x86/hyperv_ipi.c b/tools/testing/selftests/kvm/x86/hyperv_ipi.c
index beafcfa4043a..771535f9aad3 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_ipi.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_ipi.c
@@ -253,7 +253,7 @@ int main(int argc, char *argv[])
vm = vm_create_with_one_vcpu(&vcpu[0], sender_guest_code);
/* Hypercall input/output */
- hcall_page = vm_vaddr_alloc_pages(vm, 2);
+ hcall_page = vm_alloc_pages(vm, 2);
memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());
diff --git a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
index 77b774b5041c..7a62f6a9d606 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
@@ -165,7 +165,7 @@ int main(int argc, char *argv[])
vcpu_alloc_svm(vm, &nested_gva);
vcpu_alloc_hyperv_test_pages(vm, &hv_pages_gva);
- hcall_page = vm_vaddr_alloc_pages(vm, 1);
+ hcall_page = vm_alloc_pages(vm, 1);
memset(addr_gva2hva(vm, hcall_page), 0x0, getpagesize());
vcpu_args_set(vcpu, 3, nested_gva, hv_pages_gva, addr_gva2gpa(vm, hcall_page));
diff --git a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
index a4fb63112cac..6adf76574921 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
@@ -593,11 +593,11 @@ int main(int argc, char *argv[])
vm = vm_create_with_one_vcpu(&vcpu[0], sender_guest_code);
/* Test data page */
- test_data_page = vm_vaddr_alloc_page(vm);
+ test_data_page = vm_alloc_page(vm);
data = (struct test_data *)addr_gva2hva(vm, test_data_page);
/* Hypercall input/output */
- data->hcall_gva = vm_vaddr_alloc_pages(vm, 2);
+ data->hcall_gva = vm_alloc_pages(vm, 2);
data->hcall_gpa = addr_gva2gpa(vm, data->hcall_gva);
memset(addr_gva2hva(vm, data->hcall_gva), 0x0, 2 * PAGE_SIZE);
@@ -606,7 +606,7 @@ int main(int argc, char *argv[])
* and the test will swap their mappings. The third page keeps the indication
* about the current state of mappings.
*/
- data->test_pages = vm_vaddr_alloc_pages(vm, NTEST_PAGES + 1);
+ data->test_pages = vm_alloc_pages(vm, NTEST_PAGES + 1);
for (i = 0; i < NTEST_PAGES; i++)
memset(addr_gva2hva(vm, data->test_pages + PAGE_SIZE * i),
(u8)(i + 1), PAGE_SIZE);
diff --git a/tools/testing/selftests/kvm/x86/kvm_clock_test.c b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
index 2b8a3feee1f8..5ad4aeb8e373 100644
--- a/tools/testing/selftests/kvm/x86/kvm_clock_test.c
+++ b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
@@ -147,7 +147,7 @@ int main(void)
vm = vm_create_with_one_vcpu(&vcpu, guest_main);
- pvti_gva = vm_vaddr_alloc(vm, getpagesize(), 0x10000);
+ pvti_gva = vm_alloc(vm, getpagesize(), 0x10000);
pvti_gpa = addr_gva2gpa(vm, pvti_gva);
vcpu_args_set(vcpu, 2, pvti_gpa, pvti_gva);
diff --git a/tools/testing/selftests/kvm/x86/sev_smoke_test.c b/tools/testing/selftests/kvm/x86/sev_smoke_test.c
index 4e037795dc33..1a49ee391586 100644
--- a/tools/testing/selftests/kvm/x86/sev_smoke_test.c
+++ b/tools/testing/selftests/kvm/x86/sev_smoke_test.c
@@ -115,8 +115,8 @@ static void test_sync_vmsa(u32 type, u64 policy)
struct kvm_xsave __attribute__((aligned(64))) xsave = { 0 };
vm = vm_sev_create_with_one_vcpu(type, guest_code_xsave, &vcpu);
- gva = vm_vaddr_alloc_shared(vm, PAGE_SIZE, KVM_UTIL_MIN_VADDR,
- MEM_REGION_TEST_DATA);
+ gva = vm_alloc_shared(vm, PAGE_SIZE, KVM_UTIL_MIN_VADDR,
+ MEM_REGION_TEST_DATA);
hva = addr_gva2hva(vm, gva);
vcpu_args_set(vcpu, 1, gva);
diff --git a/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c b/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c
index 5fefb319d9be..f72f11d4c4f8 100644
--- a/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c
+++ b/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c
@@ -161,7 +161,7 @@ static void run_test(bool is_nmi)
if (!is_nmi) {
void *idt, *idt_alt;
- idt_alt_vm = vm_vaddr_alloc_page(vm);
+ idt_alt_vm = vm_alloc_page(vm);
idt_alt = addr_gva2hva(vm, idt_alt_vm);
idt = addr_gva2hva(vm, vm->arch.idt);
memcpy(idt_alt, idt, getpagesize());
diff --git a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
index 3df6df2a1b55..d2e2410f748b 100644
--- a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
+++ b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
@@ -414,7 +414,7 @@ int main(int argc, char *argv[])
params[1].vcpu = vm_vcpu_add(vm, 1, sender_guest_code);
- test_data_page_vaddr = vm_vaddr_alloc_page(vm);
+ test_data_page_vaddr = vm_alloc_page(vm);
data = addr_gva2hva(vm, test_data_page_vaddr);
memset(data, 0, sizeof(*data));
params[0].data = data;
--
2.54.0.rc1.555.g9c883467ad-goog
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v3 12/19] KVM: selftests: Rename vm_vaddr_unused_gap() => vm_unused_gva_gap()
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (8 preceding siblings ...)
2026-04-20 21:19 ` [PATCH v3 11/19] KVM: selftests: Drop "vaddr_" from APIs that allocate memory for a given VM Sean Christopherson
@ 2026-04-20 21:19 ` Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 13/19] KVM: selftests: Rename vm_vaddr_populate_bitmap() => vm_populate_gva_bitmap() Sean Christopherson
` (6 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:19 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
Now that KVM selftests use gva_t instead of vm_vaddr_t, rename the API
for finding an unused range of virtual memory to drop the defunct
terminology and use "vm" for the scope.
Opportunistically clean up the function comment to drop superfluous
and redundant information.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
.../testing/selftests/kvm/include/kvm_util.h | 2 +-
tools/testing/selftests/kvm/lib/arm64/ucall.c | 2 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 24 ++++---------------
.../selftests/kvm/lib/loongarch/ucall.c | 2 +-
.../selftests/kvm/x86/hyperv_tlb_flush.c | 2 +-
5 files changed, 9 insertions(+), 23 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 8f7afc34ea8d..0239e89320e5 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -715,7 +715,7 @@ void vm_mem_region_move(struct kvm_vm *vm, u32 slot, u64 new_gpa);
void vm_mem_region_delete(struct kvm_vm *vm, u32 slot);
struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, u32 vcpu_id);
void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
-gva_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
+gva_t vm_unused_gva_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
gva_t vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
gva_t __vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
enum kvm_mem_region_type type);
diff --git a/tools/testing/selftests/kvm/lib/arm64/ucall.c b/tools/testing/selftests/kvm/lib/arm64/ucall.c
index 8257dc4ae106..e0550ad5aa75 100644
--- a/tools/testing/selftests/kvm/lib/arm64/ucall.c
+++ b/tools/testing/selftests/kvm/lib/arm64/ucall.c
@@ -10,7 +10,7 @@ gva_t *ucall_exit_mmio_addr;
void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
- gva_t mmio_gva = vm_vaddr_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
+ gva_t mmio_gva = vm_unused_gva_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
virt_map(vm, mmio_gva, mmio_gpa, 1);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index b304c0e54837..8c82b40a7448 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1366,26 +1366,12 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
}
/*
- * VM Virtual Address Unused Gap
- *
- * Input Args:
- * vm - Virtual Machine
- * sz - Size (bytes)
- * vaddr_min - Minimum Virtual Address
- *
- * Output Args: None
- *
- * Return:
- * Lowest virtual address at or above vaddr_min, with at least
- * sz unused bytes. TEST_ASSERT failure if no area of at least
- * size sz is available.
- *
- * Within the VM specified by vm, locates the lowest starting virtual
- * address >= vaddr_min, that has at least sz unallocated bytes. A
+ * Within the VM specified by @vm, locates the lowest starting guest virtual
+ * address >= @vaddr_min that has at least @sz unallocated bytes. A
* TEST_ASSERT failure occurs for invalid input or no area of at least
- * sz unallocated bytes >= vaddr_min is available.
+ * @sz unallocated bytes >= @vaddr_min is available.
*/
-gva_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
+gva_t vm_unused_gva_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
{
u64 pages = (sz + vm->page_size - 1) >> vm->page_shift;
@@ -1464,7 +1450,7 @@ static gva_t ____vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
* Find an unused range of virtual page addresses of at least
* pages in length.
*/
- gva_t vaddr_start = vm_vaddr_unused_gap(vm, sz, vaddr_min);
+ gva_t vaddr_start = vm_unused_gva_gap(vm, sz, vaddr_min);
/* Map the virtual pages. */
for (gva_t vaddr = vaddr_start; pages > 0;
diff --git a/tools/testing/selftests/kvm/lib/loongarch/ucall.c b/tools/testing/selftests/kvm/lib/loongarch/ucall.c
index eb9f714a535c..cd49a3440ead 100644
--- a/tools/testing/selftests/kvm/lib/loongarch/ucall.c
+++ b/tools/testing/selftests/kvm/lib/loongarch/ucall.c
@@ -13,7 +13,7 @@ gva_t *ucall_exit_mmio_addr;
void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
- gva_t mmio_gva = vm_vaddr_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
+ gva_t mmio_gva = vm_unused_gva_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
virt_map(vm, mmio_gva, mmio_gpa, 1);
diff --git a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
index 6adf76574921..15ee8b7bfc11 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
@@ -617,7 +617,7 @@ int main(int argc, char *argv[])
* Get PTE pointers for test pages and map them inside the guest.
* Use separate page for each PTE for simplicity.
*/
- gva = vm_vaddr_unused_gap(vm, NTEST_PAGES * PAGE_SIZE, KVM_UTIL_MIN_VADDR);
+ gva = vm_unused_gva_gap(vm, NTEST_PAGES * PAGE_SIZE, KVM_UTIL_MIN_VADDR);
for (i = 0; i < NTEST_PAGES; i++) {
pte = vm_get_pte(vm, data->test_pages + i * PAGE_SIZE);
gpa = addr_hva2gpa(vm, pte);
--
2.54.0.rc1.555.g9c883467ad-goog
* [PATCH v3 13/19] KVM: selftests: Rename vm_vaddr_populate_bitmap() => vm_populate_gva_bitmap()
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (9 preceding siblings ...)
2026-04-20 21:19 ` [PATCH v3 12/19] KVM: selftests: Rename vm_vaddr_unused_gap() => vm_unused_gva_gap() Sean Christopherson
@ 2026-04-20 21:19 ` Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 14/19] KVM: selftests: Rename translate_to_host_paddr() => translate_hva_to_hpa() Sean Christopherson
` (5 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:19 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
Now that KVM selftests use gva_t instead of vm_vaddr_t, rename the helper
for populating the initial GVA bitmap to drop the defunct terminology and
use "vm" for the scope.
Opportunistically fix up the declaration of the API, which has been broken
since day 1. The flaw went unnoticed because the sole caller is defined
after the weak version, i.e. the caller can see the prototype without a
separate declaration.
No functional change intended.
Fixes: e8b9a055fa04 ("KVM: arm64: selftests: Align VA space allocator with TTBR0")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/include/kvm_util.h | 2 +-
tools/testing/selftests/kvm/lib/arm64/processor.c | 2 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 4 ++--
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 0239e89320e5..0fbfb2a28767 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -714,7 +714,7 @@ void vm_mem_region_reload(struct kvm_vm *vm, u32 slot);
void vm_mem_region_move(struct kvm_vm *vm, u32 slot, u64 new_gpa);
void vm_mem_region_delete(struct kvm_vm *vm, u32 slot);
struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, u32 vcpu_id);
-void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
+void vm_populate_gva_bitmap(struct kvm_vm *vm);
gva_t vm_unused_gva_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
gva_t vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
gva_t __vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index c4f0e37f2907..384b6c80b1e7 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -671,7 +671,7 @@ void kvm_selftest_arch_init(void)
guest_modes_append_default();
}
-void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+void vm_populate_gva_bitmap(struct kvm_vm *vm)
{
/*
* arm64 selftests use only TTBR0_EL1, meaning that the valid VA space
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 8c82b40a7448..1a1b41021cc7 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -267,7 +267,7 @@ _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params)
* based on the MSB of the VA. On architectures with this behavior
* the VA region spans [0, 2^(va_bits - 1)), [-(2^(va_bits - 1), -1].
*/
-__weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+__weak void vm_populate_gva_bitmap(struct kvm_vm *vm)
{
sparsebit_set_num(vm->vpages_valid,
0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
@@ -385,7 +385,7 @@ struct kvm_vm *____vm_create(struct vm_shape shape)
/* Limit to VA-bit canonical virtual addresses. */
vm->vpages_valid = sparsebit_alloc();
- vm_vaddr_populate_bitmap(vm);
+ vm_populate_gva_bitmap(vm);
/* Limit physical addresses to PA-bits. */
vm->max_gfn = vm_compute_max_gfn(vm);
--
2.54.0.rc1.555.g9c883467ad-goog
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v3 14/19] KVM: selftests: Rename translate_to_host_paddr() => translate_hva_to_hpa()
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (10 preceding siblings ...)
2026-04-20 21:19 ` [PATCH v3 13/19] KVM: selftests: Rename vm_vaddr_populate_bitmap() => vm_populate_gva_bitmap() Sean Christopherson
@ 2026-04-20 21:19 ` Sean Christopherson
2026-04-20 21:20 ` [PATCH v3 15/19] KVM: selftests: Clarify that arm64's inject_uer() takes a host PA, not a guest PA Sean Christopherson
` (4 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:19 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
Rename arm64's translate_to_host_paddr() to translate_hva_to_hpa() and
update variable names to match, as using "vaddr" and "paddr" terminology
is super confusing due to selftests using those exact names for *guest*
addresses.
Opportunistically drop the superfluous local page_addr and paddr variables.
No functional change intended.
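For reference, the translation the renamed helper performs can be sketched as a minimal standalone program (this is not part of the patch; it assumes Linux's documented pagemap format, where bit 63 of each u64 entry is the "present" flag and bits 0-54 hold the PFN, and the hva_to_hpa() name is hypothetical):

```c
/*
 * Standalone sketch of HVA => HPA translation via /proc/self/pagemap.
 * Each pagemap entry is a u64 indexed by virtual page number:
 * bit 63 = page present, bits 0-54 = PFN.  Without CAP_SYS_ADMIN the
 * kernel reports the PFN as 0, so run as root to get a real HPA.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static uint64_t hva_to_hpa(uintptr_t hva)
{
	long page_size = sysconf(_SC_PAGESIZE);
	uint64_t entry;
	/* One u64 entry per virtual page, so index = hva / page_size. */
	off_t offset = (off_t)(hva / page_size) * sizeof(entry);
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0 || pread(fd, &entry, sizeof(entry), offset) != sizeof(entry)) {
		perror("pagemap");
		exit(1);
	}
	close(fd);

	if (!(entry & (1ULL << 63)))	/* page not present */
		return 0;

	/* PFN lives in bits 0-54; re-add the offset within the page. */
	return ((entry & ((1ULL << 55) - 1)) * page_size) +
	       (hva & (page_size - 1));
}
```

The returned HPA always shares its low page-offset bits with the input HVA, which is a cheap sanity check when debugging EINJ targeting.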
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/arm64/sea_to_user.c | 13 +++++--------
1 file changed, 5 insertions(+), 8 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/sea_to_user.c b/tools/testing/selftests/kvm/arm64/sea_to_user.c
index 7285eade4acf..fb06b9dcb3d9 100644
--- a/tools/testing/selftests/kvm/arm64/sea_to_user.c
+++ b/tools/testing/selftests/kvm/arm64/sea_to_user.c
@@ -56,13 +56,11 @@ static void *einj_hva;
static u64 einj_hpa;
static bool far_invalid;
-static u64 translate_to_host_paddr(unsigned long vaddr)
+static u64 translate_hva_to_hpa(unsigned long hva)
{
u64 pinfo;
- s64 offset = vaddr / getpagesize() * sizeof(pinfo);
+ s64 offset = hva / getpagesize() * sizeof(pinfo);
int fd;
- u64 page_addr;
- u64 paddr;
fd = open("/proc/self/pagemap", O_RDONLY);
if (fd < 0)
@@ -77,9 +75,8 @@ static u64 translate_to_host_paddr(unsigned long vaddr)
if ((pinfo & PAGE_PRESENT) == 0)
ksft_exit_fail_perror("Page not present");
- page_addr = (pinfo & PAGE_PHYSICAL) << MIN_PAGE_SHIFT;
- paddr = page_addr + (vaddr & (getpagesize() - 1));
- return paddr;
+ return ((pinfo & PAGE_PHYSICAL) << MIN_PAGE_SHIFT) +
+ (hva & (getpagesize() - 1));
}
static void write_einj_entry(const char *einj_path, u64 val)
@@ -303,7 +300,7 @@ static void vm_inject_memory_uer(struct kvm_vm *vm)
ksft_print_msg("Before EINJect: data=%#lx\n",
guest_data);
- einj_hpa = translate_to_host_paddr((unsigned long)einj_hva);
+ einj_hpa = translate_hva_to_hpa((unsigned long)einj_hva);
ksft_print_msg("EINJ_GVA=%#lx, einj_gpa=%#lx, einj_hva=%p, einj_hpa=%#lx\n",
EINJ_GVA, einj_gpa, einj_hva, einj_hpa);
--
2.54.0.rc1.555.g9c883467ad-goog
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v3 15/19] KVM: selftests: Clarify that arm64's inject_uer() takes a host PA, not a guest PA
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (11 preceding siblings ...)
2026-04-20 21:19 ` [PATCH v3 14/19] KVM: selftests: Rename translate_to_host_paddr() => translate_hva_to_hpa() Sean Christopherson
@ 2026-04-20 21:20 ` Sean Christopherson
2026-04-20 21:20 ` [PATCH v3 16/19] KVM: selftests: Replace "vaddr" with "gva" throughout Sean Christopherson
` (3 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:20 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
Rename inject_uer()'s @paddr to @hpa to make it more obvious that it
injects an error using a host PA, not a guest PA.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
tools/testing/selftests/kvm/arm64/sea_to_user.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/sea_to_user.c b/tools/testing/selftests/kvm/arm64/sea_to_user.c
index fb06b9dcb3d9..e16034852470 100644
--- a/tools/testing/selftests/kvm/arm64/sea_to_user.c
+++ b/tools/testing/selftests/kvm/arm64/sea_to_user.c
@@ -93,7 +93,7 @@ static void write_einj_entry(const char *einj_path, u64 val)
ksft_exit_fail_perror("Failed to write EINJ entry");
}
-static void inject_uer(u64 paddr)
+static void inject_uer(u64 hpa)
{
if (access("/sys/firmware/acpi/tables/EINJ", R_OK) == -1)
ksft_test_result_skip("EINJ table no available in firmware");
@@ -103,7 +103,7 @@ static void inject_uer(u64 paddr)
write_einj_entry(EINJ_ETYPE, ERROR_TYPE_MEMORY_UER);
write_einj_entry(EINJ_FLAGS, MASK_MEMORY_UER);
- write_einj_entry(EINJ_ADDR, paddr);
+ write_einj_entry(EINJ_ADDR, hpa);
write_einj_entry(EINJ_MASK, ~0x0UL);
write_einj_entry(EINJ_NOTRIGGER, 1);
write_einj_entry(EINJ_DOIT, 1);
--
2.54.0.rc1.555.g9c883467ad-goog
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v3 16/19] KVM: selftests: Replace "vaddr" with "gva" throughout
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (12 preceding siblings ...)
2026-04-20 21:20 ` [PATCH v3 15/19] KVM: selftests: Clarify that arm64's inject_uer() takes a host PA, not a guest PA Sean Christopherson
@ 2026-04-20 21:20 ` Sean Christopherson
2026-04-20 21:20 ` [PATCH v3 17/19] KVM: selftests: Replace "u64 gpa" with "gpa_t" throughout Sean Christopherson
` (2 subsequent siblings)
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:20 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
Replace all variations of "vaddr" variables in KVM selftests with "gva",
with the exception of the ELF structures, as those fields are not specific
to guest virtual addresses, to complete the conversion from vm_vaddr_t to
gva_t.
Opportunistically use gva_t instead of u64 for relevant variables, and
fixup indentation as appropriate.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
.../selftests/kvm/access_tracking_perf_test.c | 6 +-
.../selftests/kvm/arm64/page_fault_test.c | 4 +-
.../testing/selftests/kvm/include/kvm_util.h | 32 +++-----
.../testing/selftests/kvm/include/memstress.h | 2 +-
.../selftests/kvm/include/x86/processor.h | 6 +-
.../selftests/kvm/lib/arm64/processor.c | 33 ++++----
tools/testing/selftests/kvm/lib/elf.c | 10 +--
tools/testing/selftests/kvm/lib/kvm_util.c | 60 ++++++--------
.../selftests/kvm/lib/loongarch/processor.c | 25 +++---
tools/testing/selftests/kvm/lib/memstress.c | 2 +-
.../selftests/kvm/lib/riscv/processor.c | 25 +++---
.../selftests/kvm/lib/s390/processor.c | 23 +++---
.../testing/selftests/kvm/lib/ucall_common.c | 8 +-
.../testing/selftests/kvm/lib/x86/processor.c | 82 +++++++++----------
.../selftests/kvm/s390/ucontrol_test.c | 4 +-
.../selftests/kvm/x86/xapic_ipi_test.c | 10 +--
16 files changed, 150 insertions(+), 182 deletions(-)
diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
index 4479aed94625..e5bbdb5bbdc3 100644
--- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
+++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
@@ -123,7 +123,7 @@ static u64 pread_u64(int fd, const char *filename, u64 index)
#define PAGEMAP_PRESENT (1ULL << 63)
#define PAGEMAP_PFN_MASK ((1ULL << 55) - 1)
-static u64 lookup_pfn(int pagemap_fd, struct kvm_vm *vm, u64 gva)
+static u64 lookup_pfn(int pagemap_fd, struct kvm_vm *vm, gva_t gva)
{
u64 hva = (u64)addr_gva2hva(vm, gva);
u64 entry;
@@ -174,7 +174,7 @@ static void pageidle_mark_vcpu_memory_idle(struct kvm_vm *vm,
struct memstress_vcpu_args *vcpu_args)
{
int vcpu_idx = vcpu_args->vcpu_idx;
- u64 base_gva = vcpu_args->gva;
+ gva_t base_gva = vcpu_args->gva;
u64 pages = vcpu_args->pages;
u64 page;
u64 still_idle = 0;
@@ -193,7 +193,7 @@ static void pageidle_mark_vcpu_memory_idle(struct kvm_vm *vm,
TEST_ASSERT(pagemap_fd > 0, "Failed to open pagemap.");
for (page = 0; page < pages; page++) {
- u64 gva = base_gva + page * memstress_args.guest_page_size;
+ gva_t gva = base_gva + page * memstress_args.guest_page_size;
u64 pfn = lookup_pfn(pagemap_fd, vm, gva);
if (!pfn) {
diff --git a/tools/testing/selftests/kvm/arm64/page_fault_test.c b/tools/testing/selftests/kvm/arm64/page_fault_test.c
index b92a9614d7d2..6bb3d82906b2 100644
--- a/tools/testing/selftests/kvm/arm64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/arm64/page_fault_test.c
@@ -70,9 +70,9 @@ struct test_params {
struct test_desc *test_desc;
};
-static inline void flush_tlb_page(u64 vaddr)
+static inline void flush_tlb_page(gva_t gva)
{
- u64 page = vaddr >> 12;
+ gva_t page = gva >> 12;
dsb(ishst);
asm volatile("tlbi vaae1is, %0" :: "r" (page));
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 0fbfb2a28767..0dcfad728edd 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -715,17 +715,17 @@ void vm_mem_region_move(struct kvm_vm *vm, u32 slot, u64 new_gpa);
void vm_mem_region_delete(struct kvm_vm *vm, u32 slot);
struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, u32 vcpu_id);
void vm_populate_gva_bitmap(struct kvm_vm *vm);
-gva_t vm_unused_gva_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
-gva_t vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
-gva_t __vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+gva_t vm_unused_gva_gap(struct kvm_vm *vm, size_t sz, gva_t min_gva);
+gva_t vm_alloc(struct kvm_vm *vm, size_t sz, gva_t min_gva);
+gva_t __vm_alloc(struct kvm_vm *vm, size_t sz, gva_t min_gva,
enum kvm_mem_region_type type);
-gva_t vm_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+gva_t vm_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t min_gva,
enum kvm_mem_region_type type);
gva_t vm_alloc_pages(struct kvm_vm *vm, int nr_pages);
gva_t __vm_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type);
gva_t vm_alloc_page(struct kvm_vm *vm);
-void virt_map(struct kvm_vm *vm, u64 vaddr, u64 paddr,
+void virt_map(struct kvm_vm *vm, gva_t gva, u64 paddr,
unsigned int npages);
void *addr_gpa2hva(struct kvm_vm *vm, gpa_t gpa);
void *addr_gva2hva(struct kvm_vm *vm, gva_t gva);
@@ -1202,27 +1202,15 @@ static inline void virt_pgd_alloc(struct kvm_vm *vm)
}
/*
- * VM Virtual Page Map
- *
- * Input Args:
- * vm - Virtual Machine
- * vaddr - VM Virtual Address
- * paddr - VM Physical Address
- * memslot - Memory region slot for new virtual translation tables
- *
- * Output Args: None
- *
- * Return: None
- *
* Within @vm, creates a virtual translation for the page starting
- * at @vaddr to the page starting at @paddr.
+ * at @gva to the page starting at @paddr.
*/
-void virt_arch_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr);
+void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr);
-static inline void virt_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr)
+static inline void virt_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr)
{
- virt_arch_pg_map(vm, vaddr, paddr);
- sparsebit_set(vm->vpages_mapped, vaddr >> vm->page_shift);
+ virt_arch_pg_map(vm, gva, paddr);
+ sparsebit_set(vm->vpages_mapped, gva >> vm->page_shift);
}
diff --git a/tools/testing/selftests/kvm/include/memstress.h b/tools/testing/selftests/kvm/include/memstress.h
index e3e4b4d6a27a..abd0dca10283 100644
--- a/tools/testing/selftests/kvm/include/memstress.h
+++ b/tools/testing/selftests/kvm/include/memstress.h
@@ -21,7 +21,7 @@
struct memstress_vcpu_args {
u64 gpa;
- u64 gva;
+ gva_t gva;
u64 pages;
/* Only used by the host userspace part of the vCPU thread */
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 4efa6c942192..15252e75aaf1 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1394,7 +1394,7 @@ static inline bool kvm_is_lbrv_enabled(void)
return !!get_kvm_amd_param_integer("lbrv");
}
-u64 *vm_get_pte(struct kvm_vm *vm, u64 vaddr);
+u64 *vm_get_pte(struct kvm_vm *vm, gva_t gva);
u64 kvm_hypercall(u64 nr, u64 a0, u64 a1, u64 a2, u64 a3);
u64 __xen_hypercall(u64 nr, u64 a0, void *a1);
@@ -1507,9 +1507,9 @@ enum pg_level {
void tdp_mmu_init(struct kvm_vm *vm, int pgtable_levels,
struct pte_masks *pte_masks);
-void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, u64 vaddr,
+void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, gva_t gva,
u64 paddr, int level);
-void virt_map_level(struct kvm_vm *vm, u64 vaddr, u64 paddr,
+void virt_map_level(struct kvm_vm *vm, gva_t gva, u64 paddr,
u64 nr_bytes, int level);
void vm_enable_tdp(struct kvm_vm *vm);
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index 384b6c80b1e7..0f693d8891d2 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -121,19 +121,18 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
vm->mmu.pgd_created = true;
}
-static void _virt_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr,
+static void _virt_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr,
u64 flags)
{
u8 attr_idx = flags & (PTE_ATTRINDX_MASK >> PTE_ATTRINDX_SHIFT);
u64 pg_attr;
u64 *ptep;
- TEST_ASSERT((vaddr % vm->page_size) == 0,
+ TEST_ASSERT((gva % vm->page_size) == 0,
"Virtual address not on page boundary,\n"
- " vaddr: 0x%lx vm->page_size: 0x%x", vaddr, vm->page_size);
- TEST_ASSERT(sparsebit_is_set(vm->vpages_valid,
- (vaddr >> vm->page_shift)),
- "Invalid virtual address, vaddr: 0x%lx", vaddr);
+ " gva: 0x%lx vm->page_size: 0x%x", gva, vm->page_size);
+ TEST_ASSERT(sparsebit_is_set(vm->vpages_valid, (gva >> vm->page_shift)),
+ "Invalid virtual address, gva: 0x%lx", gva);
TEST_ASSERT((paddr % vm->page_size) == 0,
"Physical address not on page boundary,\n"
" paddr: 0x%lx vm->page_size: 0x%x", paddr, vm->page_size);
@@ -142,26 +141,26 @@ static void _virt_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr,
" paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
paddr, vm->max_gfn, vm->page_size);
- ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pgd_index(vm, vaddr) * 8;
+ ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pgd_index(vm, gva) * 8;
if (!*ptep)
*ptep = addr_pte(vm, vm_alloc_page_table(vm),
PGD_TYPE_TABLE | PTE_VALID);
switch (vm->mmu.pgtable_levels) {
case 4:
- ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pud_index(vm, vaddr) * 8;
+ ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pud_index(vm, gva) * 8;
if (!*ptep)
*ptep = addr_pte(vm, vm_alloc_page_table(vm),
PUD_TYPE_TABLE | PTE_VALID);
/* fall through */
case 3:
- ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pmd_index(vm, vaddr) * 8;
+ ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pmd_index(vm, gva) * 8;
if (!*ptep)
*ptep = addr_pte(vm, vm_alloc_page_table(vm),
PMD_TYPE_TABLE | PTE_VALID);
/* fall through */
case 2:
- ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pte_index(vm, vaddr) * 8;
+ ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) + pte_index(vm, gva) * 8;
break;
default:
TEST_FAIL("Page table levels must be 2, 3, or 4");
@@ -174,11 +173,11 @@ static void _virt_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr,
*ptep = addr_pte(vm, paddr, pg_attr);
}
-void virt_arch_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr)
+void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr)
{
u64 attr_idx = MT_NORMAL;
- _virt_pg_map(vm, vaddr, paddr, attr_idx);
+ _virt_pg_map(vm, gva, paddr, attr_idx);
}
u64 *virt_get_pte_hva_at_level(struct kvm_vm *vm, gva_t gva, int level)
@@ -417,18 +416,18 @@ static struct kvm_vcpu *__aarch64_vcpu_add(struct kvm_vm *vm, u32 vcpu_id,
struct kvm_vcpu_init *init)
{
size_t stack_size;
- u64 stack_vaddr;
+ gva_t stack_gva;
struct kvm_vcpu *vcpu = __vm_vcpu_add(vm, vcpu_id);
stack_size = vm->page_size == 4096 ? DEFAULT_STACK_PGS * vm->page_size :
vm->page_size;
- stack_vaddr = __vm_alloc(vm, stack_size,
- DEFAULT_ARM64_GUEST_STACK_VADDR_MIN,
- MEM_REGION_DATA);
+ stack_gva = __vm_alloc(vm, stack_size,
+ DEFAULT_ARM64_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
aarch64_vcpu_setup(vcpu, init);
- vcpu_set_reg(vcpu, ctxt_reg_alias(vcpu, SYS_SP_EL1), stack_vaddr + stack_size);
+ vcpu_set_reg(vcpu, ctxt_reg_alias(vcpu, SYS_SP_EL1), stack_gva + stack_size);
return vcpu;
}
diff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c
index 2288480f4e1e..b689c4df4a01 100644
--- a/tools/testing/selftests/kvm/lib/elf.c
+++ b/tools/testing/selftests/kvm/lib/elf.c
@@ -162,14 +162,14 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
seg_vend |= vm->page_size - 1;
size_t seg_size = seg_vend - seg_vstart + 1;
- gva_t vaddr = __vm_alloc(vm, seg_size, seg_vstart, MEM_REGION_CODE);
- TEST_ASSERT(vaddr == seg_vstart, "Unable to allocate "
+ gva_t gva = __vm_alloc(vm, seg_size, seg_vstart, MEM_REGION_CODE);
+ TEST_ASSERT(gva == seg_vstart, "Unable to allocate "
"virtual memory for segment at requested min addr,\n"
" segment idx: %u\n"
" seg_vstart: 0x%lx\n"
- " vaddr: 0x%lx",
- n1, seg_vstart, vaddr);
- memset(addr_gva2hva(vm, vaddr), 0, seg_size);
+ " gva: 0x%lx",
+ n1, seg_vstart, gva);
+ memset(addr_gva2hva(vm, gva), 0, seg_size);
/* TODO(lhuemill): Set permissions of each memory segment
* based on the least-significant 3 bits of phdr.p_flags.
*/
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1a1b41021cc7..e282f9abd4c7 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1367,17 +1367,17 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
/*
* Within the VM specified by @vm, locates the lowest starting guest virtual
- * address >= @vaddr_min, that has at least @sz unallocated bytes. A
+ * address >= @min_gva, that has at least @sz unallocated bytes. A
* TEST_ASSERT failure occurs for invalid input or no area of at least
* @sz unallocated bytes >= @min_gva is available.
*/
-gva_t vm_unused_gva_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
+gva_t vm_unused_gva_gap(struct kvm_vm *vm, size_t sz, gva_t min_gva)
{
u64 pages = (sz + vm->page_size - 1) >> vm->page_shift;
/* Determine lowest permitted virtual page index. */
- u64 pgidx_start = (vaddr_min + vm->page_size - 1) >> vm->page_shift;
- if ((pgidx_start * vm->page_size) < vaddr_min)
+ u64 pgidx_start = (min_gva + vm->page_size - 1) >> vm->page_shift;
+ if ((pgidx_start * vm->page_size) < min_gva)
goto no_va_found;
/* Loop over section with enough valid virtual page indexes. */
@@ -1414,7 +1414,7 @@ gva_t vm_unused_gva_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
} while (pgidx_start != 0);
no_va_found:
- TEST_FAIL("No vaddr of specified pages available, pages: 0x%lx", pages);
+ TEST_FAIL("No gva of specified pages available, pages: 0x%lx", pages);
/* NOT REACHED */
return -1;
@@ -1436,7 +1436,7 @@ gva_t vm_unused_gva_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
return pgidx_start * vm->page_size;
}
-static gva_t ____vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+static gva_t ____vm_alloc(struct kvm_vm *vm, size_t sz, gva_t min_gva,
enum kvm_mem_region_type type, bool protected)
{
u64 pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
@@ -1450,41 +1450,41 @@ static gva_t ____vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
* Find an unused range of virtual page addresses of at least
* pages in length.
*/
- gva_t vaddr_start = vm_unused_gva_gap(vm, sz, vaddr_min);
+ gva_t gva_start = vm_unused_gva_gap(vm, sz, min_gva);
/* Map the virtual pages. */
- for (gva_t vaddr = vaddr_start; pages > 0;
- pages--, vaddr += vm->page_size, paddr += vm->page_size) {
+ for (gva_t gva = gva_start; pages > 0;
+ pages--, gva += vm->page_size, paddr += vm->page_size) {
- virt_pg_map(vm, vaddr, paddr);
+ virt_pg_map(vm, gva, paddr);
}
- return vaddr_start;
+ return gva_start;
}
-gva_t __vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+gva_t __vm_alloc(struct kvm_vm *vm, size_t sz, gva_t min_gva,
enum kvm_mem_region_type type)
{
- return ____vm_alloc(vm, sz, vaddr_min, type,
+ return ____vm_alloc(vm, sz, min_gva, type,
vm_arch_has_protected_memory(vm));
}
-gva_t vm_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+gva_t vm_alloc_shared(struct kvm_vm *vm, size_t sz, gva_t min_gva,
enum kvm_mem_region_type type)
{
- return ____vm_alloc(vm, sz, vaddr_min, type, false);
+ return ____vm_alloc(vm, sz, min_gva, type, false);
}
/*
* Allocates at least sz bytes within the virtual address space of the VM
* given by @vm. The allocated bytes are mapped to a virtual address >= the
- * address given by @vaddr_min. Note that each allocation uses a a unique set
+ * address given by @min_gva. Note that each allocation uses a a unique set
* of pages, with the minimum real allocation being at least a page. The
* allocated physical space comes from the TEST_DATA memory region.
*/
-gva_t vm_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
+gva_t vm_alloc(struct kvm_vm *vm, size_t sz, gva_t min_gva)
{
- return __vm_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA);
+ return __vm_alloc(vm, sz, min_gva, MEM_REGION_TEST_DATA);
}
gva_t vm_alloc_pages(struct kvm_vm *vm, int nr_pages)
@@ -1503,34 +1503,24 @@ gva_t vm_alloc_page(struct kvm_vm *vm)
}
/*
- * Map a range of VM virtual address to the VM's physical address
+ * Map a range of VM virtual address to the VM's physical address.
*
- * Input Args:
- * vm - Virtual Machine
- * vaddr - Virtuall address to map
- * paddr - VM Physical Address
- * npages - The number of pages to map
- *
- * Output Args: None
- *
- * Return: None
- *
- * Within the VM given by @vm, creates a virtual translation for
- * @npages starting at @vaddr to the page range starting at @paddr.
+ * Within the VM given by @vm, creates a virtual translation for @npages
+ * starting at @gva to the page range starting at @paddr.
*/
-void virt_map(struct kvm_vm *vm, u64 vaddr, u64 paddr,
+void virt_map(struct kvm_vm *vm, gva_t gva, u64 paddr,
unsigned int npages)
{
size_t page_size = vm->page_size;
size_t size = npages * page_size;
- TEST_ASSERT(vaddr + size > vaddr, "Vaddr overflow");
+ TEST_ASSERT(gva + size > gva, "Vaddr overflow");
TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
while (npages--) {
- virt_pg_map(vm, vaddr, paddr);
+ virt_pg_map(vm, gva, paddr);
- vaddr += page_size;
+ gva += page_size;
paddr += page_size;
}
}
diff --git a/tools/testing/selftests/kvm/lib/loongarch/processor.c b/tools/testing/selftests/kvm/lib/loongarch/processor.c
index 318520f1f1b9..47e782056196 100644
--- a/tools/testing/selftests/kvm/lib/loongarch/processor.c
+++ b/tools/testing/selftests/kvm/lib/loongarch/processor.c
@@ -111,22 +111,21 @@ gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
u64 *ptep;
ptep = virt_populate_pte(vm, gva, 0);
- TEST_ASSERT(*ptep != 0, "Virtual address vaddr: 0x%lx not mapped\n", gva);
+ TEST_ASSERT(*ptep != 0, "Virtual address gva: 0x%lx not mapped\n", gva);
return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
}
-void virt_arch_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr)
+void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr)
{
u32 prot_bits;
u64 *ptep;
- TEST_ASSERT((vaddr % vm->page_size) == 0,
+ TEST_ASSERT((gva % vm->page_size) == 0,
"Virtual address not on page boundary,\n"
- "vaddr: 0x%lx vm->page_size: 0x%x", vaddr, vm->page_size);
- TEST_ASSERT(sparsebit_is_set(vm->vpages_valid,
- (vaddr >> vm->page_shift)),
- "Invalid virtual address, vaddr: 0x%lx", vaddr);
+ "gva: 0x%lx vm->page_size: 0x%x", gva, vm->page_size);
+ TEST_ASSERT(sparsebit_is_set(vm->vpages_valid, (gva >> vm->page_shift)),
+ "Invalid virtual address, gva: 0x%lx", gva);
TEST_ASSERT((paddr % vm->page_size) == 0,
"Physical address not on page boundary,\n"
"paddr: 0x%lx vm->page_size: 0x%x", paddr, vm->page_size);
@@ -135,7 +134,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr)
"paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
paddr, vm->max_gfn, vm->page_size);
- ptep = virt_populate_pte(vm, vaddr, 1);
+ ptep = virt_populate_pte(vm, gva, 1);
prot_bits = _PAGE_PRESENT | __READABLE | __WRITEABLE | _CACHE_CC | _PAGE_USER;
WRITE_ONCE(*ptep, paddr | prot_bits);
}
@@ -373,20 +372,20 @@ void loongarch_vcpu_setup(struct kvm_vcpu *vcpu)
struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
{
size_t stack_size;
- u64 stack_vaddr;
+ u64 stack_gva;
struct kvm_regs regs;
struct kvm_vcpu *vcpu;
vcpu = __vm_vcpu_add(vm, vcpu_id);
stack_size = vm->page_size;
- stack_vaddr = __vm_alloc(vm, stack_size,
- LOONGARCH_GUEST_STACK_VADDR_MIN, MEM_REGION_DATA);
- TEST_ASSERT(stack_vaddr != 0, "No memory for vm stack");
+ stack_gva = __vm_alloc(vm, stack_size,
+ LOONGARCH_GUEST_STACK_VADDR_MIN, MEM_REGION_DATA);
+ TEST_ASSERT(stack_gva != 0, "No memory for vm stack");
loongarch_vcpu_setup(vcpu);
/* Setup guest general purpose registers */
vcpu_regs_get(vcpu, &regs);
- regs.gpr[3] = stack_vaddr + stack_size;
+ regs.gpr[3] = stack_gva + stack_size;
vcpu_regs_set(vcpu, &regs);
return vcpu;
diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index b59dc3344ff3..6dcd15910a06 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -49,7 +49,7 @@ void memstress_guest_code(u32 vcpu_idx)
struct memstress_args *args = &memstress_args;
struct memstress_vcpu_args *vcpu_args = &args->vcpu_args[vcpu_idx];
struct guest_random_state rand_state;
- u64 gva;
+ gva_t gva;
u64 pages;
u64 addr;
u64 page;
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index 38eb8302922a..108144fb858b 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -75,17 +75,16 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
vm->mmu.pgd_created = true;
}
-void virt_arch_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr)
+void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr)
{
u64 *ptep, next_ppn;
int level = vm->mmu.pgtable_levels - 1;
- TEST_ASSERT((vaddr % vm->page_size) == 0,
+ TEST_ASSERT((gva % vm->page_size) == 0,
"Virtual address not on page boundary,\n"
- " vaddr: 0x%lx vm->page_size: 0x%x", vaddr, vm->page_size);
- TEST_ASSERT(sparsebit_is_set(vm->vpages_valid,
- (vaddr >> vm->page_shift)),
- "Invalid virtual address, vaddr: 0x%lx", vaddr);
+ " gva: 0x%lx vm->page_size: 0x%x", gva, vm->page_size);
+ TEST_ASSERT(sparsebit_is_set(vm->vpages_valid, (gva >> vm->page_shift)),
+ "Invalid virtual address, gva: 0x%lx", gva);
TEST_ASSERT((paddr % vm->page_size) == 0,
"Physical address not on page boundary,\n"
" paddr: 0x%lx vm->page_size: 0x%x", paddr, vm->page_size);
@@ -94,7 +93,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr)
" paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
paddr, vm->max_gfn, vm->page_size);
- ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pte_index(vm, vaddr, level) * 8;
+ ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pte_index(vm, gva, level) * 8;
if (!*ptep) {
next_ppn = vm_alloc_page_table(vm) >> PGTBL_PAGE_SIZE_SHIFT;
*ptep = (next_ppn << PGTBL_PTE_ADDR_SHIFT) |
@@ -104,7 +103,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr)
while (level > -1) {
ptep = addr_gpa2hva(vm, pte_addr(vm, *ptep)) +
- pte_index(vm, vaddr, level) * 8;
+ pte_index(vm, gva, level) * 8;
if (!*ptep && level > 0) {
next_ppn = vm_alloc_page_table(vm) >>
PGTBL_PAGE_SIZE_SHIFT;
@@ -315,16 +314,16 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
{
int r;
size_t stack_size;
- unsigned long stack_vaddr;
+ unsigned long stack_gva;
unsigned long current_gp = 0;
struct kvm_mp_state mps;
struct kvm_vcpu *vcpu;
stack_size = vm->page_size == 4096 ? DEFAULT_STACK_PGS * vm->page_size :
vm->page_size;
- stack_vaddr = __vm_alloc(vm, stack_size,
- DEFAULT_RISCV_GUEST_STACK_VADDR_MIN,
- MEM_REGION_DATA);
+ stack_gva = __vm_alloc(vm, stack_size,
+ DEFAULT_RISCV_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
vcpu = __vm_vcpu_add(vm, vcpu_id);
riscv_vcpu_mmu_setup(vcpu);
@@ -344,7 +343,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
vcpu_set_reg(vcpu, RISCV_CORE_REG(regs.gp), current_gp);
/* Setup stack pointer and program counter of guest */
- vcpu_set_reg(vcpu, RISCV_CORE_REG(regs.sp), stack_vaddr + stack_size);
+ vcpu_set_reg(vcpu, RISCV_CORE_REG(regs.sp), stack_gva + stack_size);
/* Setup sscratch for guest_get_vcpuid() */
vcpu_set_reg(vcpu, RISCV_GENERAL_CSR_REG(sscratch), vcpu_id);
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index 4ae0a39f426f..643e583c804c 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -47,19 +47,17 @@ static u64 virt_alloc_region(struct kvm_vm *vm, int ri)
| ((ri < 4 ? (PAGES_PER_REGION - 1) : 0) & REGION_ENTRY_LENGTH);
}
-void virt_arch_pg_map(struct kvm_vm *vm, u64 gva, u64 gpa)
+void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 gpa)
{
int ri, idx;
u64 *entry;
TEST_ASSERT((gva % vm->page_size) == 0,
- "Virtual address not on page boundary,\n"
- " vaddr: 0x%lx vm->page_size: 0x%x",
- gva, vm->page_size);
- TEST_ASSERT(sparsebit_is_set(vm->vpages_valid,
- (gva >> vm->page_shift)),
- "Invalid virtual address, vaddr: 0x%lx",
- gva);
+ "Virtual address not on page boundary,\n"
+ " gva: 0x%lx vm->page_size: 0x%x",
+ gva, vm->page_size);
+ TEST_ASSERT(sparsebit_is_set(vm->vpages_valid, (gva >> vm->page_shift)),
+ "Invalid virtual address, gva: 0x%lx", gva);
TEST_ASSERT((gpa % vm->page_size) == 0,
"Physical address not on page boundary,\n"
" paddr: 0x%lx vm->page_size: 0x%x",
@@ -163,7 +161,7 @@ void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code)
struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
{
size_t stack_size = DEFAULT_STACK_PGS * getpagesize();
- u64 stack_vaddr;
+ u64 stack_gva;
struct kvm_regs regs;
struct kvm_sregs sregs;
struct kvm_vcpu *vcpu;
@@ -171,15 +169,14 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
vm->page_size);
- stack_vaddr = __vm_alloc(vm, stack_size,
- DEFAULT_GUEST_STACK_VADDR_MIN,
- MEM_REGION_DATA);
+ stack_gva = __vm_alloc(vm, stack_size, DEFAULT_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
vcpu = __vm_vcpu_add(vm, vcpu_id);
/* Setup guest registers */
vcpu_regs_get(vcpu, &regs);
- regs.gprs[15] = stack_vaddr + (DEFAULT_STACK_PGS * getpagesize()) - 160;
+ regs.gprs[15] = stack_gva + (DEFAULT_STACK_PGS * getpagesize()) - 160;
vcpu_regs_set(vcpu, &regs);
vcpu_sregs_get(vcpu, &sregs);
diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index 4a8a5bc40a45..029ce21f9f2f 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -29,12 +29,12 @@ void ucall_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
struct ucall_header *hdr;
struct ucall *uc;
- gva_t vaddr;
+ gva_t gva;
int i;
- vaddr = vm_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR,
+ gva = vm_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR,
MEM_REGION_DATA);
- hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);
+ hdr = (struct ucall_header *)addr_gva2hva(vm, gva);
memset(hdr, 0, sizeof(*hdr));
for (i = 0; i < KVM_MAX_VCPUS; ++i) {
@@ -42,7 +42,7 @@ void ucall_init(struct kvm_vm *vm, gpa_t mmio_gpa)
uc->hva = uc;
}
- write_guest_global(vm, ucall_pool, (struct ucall_header *)vaddr);
+ write_guest_global(vm, ucall_pool, (struct ucall_header *)gva);
ucall_arch_init(vm, mmio_gpa);
}
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 50848112932c..3c55980c81b2 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -207,15 +207,15 @@ void tdp_mmu_init(struct kvm_vm *vm, int pgtable_levels,
}
static void *virt_get_pte(struct kvm_vm *vm, struct kvm_mmu *mmu,
- u64 *parent_pte, u64 vaddr, int level)
+ u64 *parent_pte, gva_t gva, int level)
{
u64 pt_gpa = PTE_GET_PA(*parent_pte);
u64 *page_table = addr_gpa2hva(vm, pt_gpa);
- int index = (vaddr >> PG_LEVEL_SHIFT(level)) & 0x1ffu;
+ int index = (gva >> PG_LEVEL_SHIFT(level)) & 0x1ffu;
TEST_ASSERT((*parent_pte == mmu->pgd) || is_present_pte(mmu, parent_pte),
"Parent PTE (level %d) not PRESENT for gva: 0x%08lx",
- level + 1, vaddr);
+ level + 1, gva);
return &page_table[index];
}
@@ -223,12 +223,12 @@ static void *virt_get_pte(struct kvm_vm *vm, struct kvm_mmu *mmu,
static u64 *virt_create_upper_pte(struct kvm_vm *vm,
struct kvm_mmu *mmu,
u64 *parent_pte,
- u64 vaddr,
+ gva_t gva,
u64 paddr,
int current_level,
int target_level)
{
- u64 *pte = virt_get_pte(vm, mmu, parent_pte, vaddr, current_level);
+ u64 *pte = virt_get_pte(vm, mmu, parent_pte, gva, current_level);
paddr = vm_untag_gpa(vm, paddr);
@@ -247,16 +247,16 @@ static u64 *virt_create_upper_pte(struct kvm_vm *vm,
* this level.
*/
TEST_ASSERT(current_level != target_level,
- "Cannot create hugepage at level: %u, vaddr: 0x%lx",
- current_level, vaddr);
+ "Cannot create hugepage at level: %u, gva: 0x%lx",
+ current_level, gva);
TEST_ASSERT(!is_huge_pte(mmu, pte),
- "Cannot create page table at level: %u, vaddr: 0x%lx",
- current_level, vaddr);
+ "Cannot create page table at level: %u, gva: 0x%lx",
+ current_level, gva);
}
return pte;
}
-void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, u64 vaddr,
+void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, gva_t gva,
u64 paddr, int level)
{
const u64 pg_size = PG_LEVEL_SIZE(level);
@@ -266,11 +266,11 @@ void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, u64 vaddr,
TEST_ASSERT(vm->mode == VM_MODE_PXXVYY_4K,
"Unknown or unsupported guest mode: 0x%x", vm->mode);
- TEST_ASSERT((vaddr % pg_size) == 0,
+ TEST_ASSERT((gva % pg_size) == 0,
"Virtual address not aligned,\n"
- "vaddr: 0x%lx page size: 0x%lx", vaddr, pg_size);
- TEST_ASSERT(sparsebit_is_set(vm->vpages_valid, (vaddr >> vm->page_shift)),
- "Invalid virtual address, vaddr: 0x%lx", vaddr);
+ "gva: 0x%lx page size: 0x%lx", gva, pg_size);
+ TEST_ASSERT(sparsebit_is_set(vm->vpages_valid, (gva >> vm->page_shift)),
+ "Invalid virtual address, gva: 0x%lx", gva);
TEST_ASSERT((paddr % pg_size) == 0,
"Physical address not aligned,\n"
" paddr: 0x%lx page size: 0x%lx", paddr, pg_size);
@@ -291,16 +291,16 @@ void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, u64 vaddr,
for (current_level = mmu->pgtable_levels;
current_level > PG_LEVEL_4K;
current_level--) {
- pte = virt_create_upper_pte(vm, mmu, pte, vaddr, paddr,
+ pte = virt_create_upper_pte(vm, mmu, pte, gva, paddr,
current_level, level);
if (is_huge_pte(mmu, pte))
return;
}
/* Fill in page table entry. */
- pte = virt_get_pte(vm, mmu, pte, vaddr, PG_LEVEL_4K);
+ pte = virt_get_pte(vm, mmu, pte, gva, PG_LEVEL_4K);
TEST_ASSERT(!is_present_pte(mmu, pte),
- "PTE already present for 4k page at vaddr: 0x%lx", vaddr);
+ "PTE already present for 4k page at gva: 0x%lx", gva);
*pte = PTE_PRESENT_MASK(mmu) | PTE_READABLE_MASK(mmu) |
PTE_WRITABLE_MASK(mmu) | PTE_EXECUTABLE_MASK(mmu) |
PTE_ALWAYS_SET_MASK(mmu) | (paddr & PHYSICAL_PAGE_MASK);
@@ -315,12 +315,12 @@ void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, u64 vaddr,
*pte |= PTE_S_BIT_MASK(mmu);
}
-void virt_arch_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr)
+void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr)
{
- __virt_pg_map(vm, &vm->mmu, vaddr, paddr, PG_LEVEL_4K);
+ __virt_pg_map(vm, &vm->mmu, gva, paddr, PG_LEVEL_4K);
}
-void virt_map_level(struct kvm_vm *vm, u64 vaddr, u64 paddr,
+void virt_map_level(struct kvm_vm *vm, gva_t gva, u64 paddr,
u64 nr_bytes, int level)
{
u64 pg_size = PG_LEVEL_SIZE(level);
@@ -332,11 +332,11 @@ void virt_map_level(struct kvm_vm *vm, u64 vaddr, u64 paddr,
nr_bytes, pg_size);
for (i = 0; i < nr_pages; i++) {
- __virt_pg_map(vm, &vm->mmu, vaddr, paddr, level);
- sparsebit_set_num(vm->vpages_mapped, vaddr >> vm->page_shift,
+ __virt_pg_map(vm, &vm->mmu, gva, paddr, level);
+ sparsebit_set_num(vm->vpages_mapped, gva >> vm->page_shift,
nr_bytes / PAGE_SIZE);
- vaddr += pg_size;
+ gva += pg_size;
paddr += pg_size;
}
}
@@ -356,7 +356,7 @@ static bool vm_is_target_pte(struct kvm_mmu *mmu, u64 *pte,
static u64 *__vm_get_page_table_entry(struct kvm_vm *vm,
struct kvm_mmu *mmu,
- u64 vaddr,
+ gva_t gva,
int *level)
{
int va_width = 12 + (mmu->pgtable_levels) * 9;
@@ -371,26 +371,23 @@ static u64 *__vm_get_page_table_entry(struct kvm_vm *vm,
TEST_ASSERT(vm->mode == VM_MODE_PXXVYY_4K,
"Unknown or unsupported guest mode: 0x%x", vm->mode);
- TEST_ASSERT(sparsebit_is_set(vm->vpages_valid,
- (vaddr >> vm->page_shift)),
- "Invalid virtual address, vaddr: 0x%lx",
- vaddr);
+ TEST_ASSERT(sparsebit_is_set(vm->vpages_valid, (gva >> vm->page_shift)),
+ "Invalid virtual address, gva: 0x%lx", gva);
/*
- * Check that the vaddr is a sign-extended va_width value.
+ * Check that the gva is a sign-extended va_width value.
*/
- TEST_ASSERT(vaddr ==
- (((s64)vaddr << (64 - va_width) >> (64 - va_width))),
+ TEST_ASSERT(gva == (((s64)gva << (64 - va_width) >> (64 - va_width))),
"Canonical check failed. The virtual address is invalid.");
for (current_level = mmu->pgtable_levels;
current_level > PG_LEVEL_4K;
current_level--) {
- pte = virt_get_pte(vm, mmu, pte, vaddr, current_level);
+ pte = virt_get_pte(vm, mmu, pte, gva, current_level);
if (vm_is_target_pte(mmu, pte, level, current_level))
return pte;
}
- return virt_get_pte(vm, mmu, pte, vaddr, PG_LEVEL_4K);
+ return virt_get_pte(vm, mmu, pte, gva, PG_LEVEL_4K);
}
u64 *tdp_get_pte(struct kvm_vm *vm, u64 l2_gpa)
@@ -400,11 +397,11 @@ u64 *tdp_get_pte(struct kvm_vm *vm, u64 l2_gpa)
return __vm_get_page_table_entry(vm, &vm->stage2_mmu, l2_gpa, &level);
}
-u64 *vm_get_pte(struct kvm_vm *vm, u64 vaddr)
+u64 *vm_get_pte(struct kvm_vm *vm, gva_t gva)
{
int level = PG_LEVEL_4K;
- return __vm_get_page_table_entry(vm, &vm->mmu, vaddr, &level);
+ return __vm_get_page_table_entry(vm, &vm->mmu, gva, &level);
}
void virt_arch_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
@@ -825,14 +822,13 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
{
struct kvm_mp_state mp_state;
struct kvm_regs regs;
- gva_t stack_vaddr;
+ gva_t stack_gva;
struct kvm_vcpu *vcpu;
- stack_vaddr = __vm_alloc(vm, DEFAULT_STACK_PGS * getpagesize(),
- DEFAULT_GUEST_STACK_VADDR_MIN,
- MEM_REGION_DATA);
+ stack_gva = __vm_alloc(vm, DEFAULT_STACK_PGS * getpagesize(),
+ DEFAULT_GUEST_STACK_VADDR_MIN, MEM_REGION_DATA);
- stack_vaddr += DEFAULT_STACK_PGS * getpagesize();
+ stack_gva += DEFAULT_STACK_PGS * getpagesize();
/*
* Align stack to match calling sequence requirements in section "The
@@ -843,9 +839,9 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
* If this code is ever used to launch a vCPU with 32-bit entry point it
* may need to subtract 4 bytes instead of 8 bytes.
*/
- TEST_ASSERT(IS_ALIGNED(stack_vaddr, PAGE_SIZE),
+ TEST_ASSERT(IS_ALIGNED(stack_gva, PAGE_SIZE),
"__vm_alloc() did not provide a page-aligned address");
- stack_vaddr -= 8;
+ stack_gva -= 8;
vcpu = __vm_vcpu_add(vm, vcpu_id);
vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid());
@@ -855,7 +851,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
/* Setup guest general purpose registers */
vcpu_regs_get(vcpu, &regs);
regs.rflags = regs.rflags | 0x2;
- regs.rsp = stack_vaddr;
+ regs.rsp = stack_gva;
vcpu_regs_set(vcpu, &regs);
/* Setup the MP state */
diff --git a/tools/testing/selftests/kvm/s390/ucontrol_test.c b/tools/testing/selftests/kvm/s390/ucontrol_test.c
index f773ba0f4641..dbdee4c39d47 100644
--- a/tools/testing/selftests/kvm/s390/ucontrol_test.c
+++ b/tools/testing/selftests/kvm/s390/ucontrol_test.c
@@ -571,7 +571,7 @@ TEST_F(uc_kvm, uc_skey)
{
struct kvm_s390_sie_block *sie_block = self->sie_block;
struct kvm_sync_regs *sync_regs = &self->run->s.regs;
- u64 test_vaddr = VM_MEM_SIZE - (SZ_1M / 2);
+ u64 test_gva = VM_MEM_SIZE - (SZ_1M / 2);
struct kvm_run *run = self->run;
const u8 skeyvalue = 0x34;
@@ -583,7 +583,7 @@ TEST_F(uc_kvm, uc_skey)
/* set register content for test_skey_asm to access not mapped memory */
sync_regs->gprs[1] = skeyvalue;
sync_regs->gprs[5] = self->base_gpa;
- sync_regs->gprs[6] = test_vaddr;
+ sync_regs->gprs[6] = test_gva;
run->kvm_dirty_regs |= KVM_SYNC_GPRS;
/* DAT disabled + 64 bit mode */
diff --git a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
index d2e2410f748b..39ce9a9369f5 100644
--- a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
+++ b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
@@ -393,7 +393,7 @@ int main(int argc, char *argv[])
int run_secs = 0;
int delay_usecs = 0;
struct test_data_page *data;
- gva_t test_data_page_vaddr;
+ gva_t test_data_page_gva;
bool migrate = false;
pthread_t threads[2];
struct thread_params params[2];
@@ -414,14 +414,14 @@ int main(int argc, char *argv[])
params[1].vcpu = vm_vcpu_add(vm, 1, sender_guest_code);
- test_data_page_vaddr = vm_alloc_page(vm);
- data = addr_gva2hva(vm, test_data_page_vaddr);
+ test_data_page_gva = vm_alloc_page(vm);
+ data = addr_gva2hva(vm, test_data_page_gva);
memset(data, 0, sizeof(*data));
params[0].data = data;
params[1].data = data;
- vcpu_args_set(params[0].vcpu, 1, test_data_page_vaddr);
- vcpu_args_set(params[1].vcpu, 1, test_data_page_vaddr);
+ vcpu_args_set(params[0].vcpu, 1, test_data_page_gva);
+ vcpu_args_set(params[1].vcpu, 1, test_data_page_gva);
pipis_rcvd = (u64 *)addr_gva2hva(vm, (u64)&ipis_rcvd);
params[0].pipis_rcvd = pipis_rcvd;
--
2.54.0.rc1.555.g9c883467ad-goog
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v3 17/19] KVM: selftests: Replace "u64 gpa" with "gpa_t" throughout
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (13 preceding siblings ...)
2026-04-20 21:20 ` [PATCH v3 16/19] KVM: selftests: Replace "vaddr" with "gva" throughout Sean Christopherson
@ 2026-04-20 21:20 ` Sean Christopherson
2026-04-20 21:20 ` [PATCH v3 18/19] KVM: selftests: Replace "u64 nested_paddr" with "gpa_t l2_gpa" Sean Christopherson
2026-04-20 21:20 ` [PATCH v3 19/19] KVM: selftests: Replace "paddr" with "gpa" throughout Sean Christopherson
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:20 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
Use gpa_t instead of u64 for obvious declarations of GPA variables.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
.../testing/selftests/kvm/guest_memfd_test.c | 2 +-
.../testing/selftests/kvm/include/kvm_util.h | 26 +++++++++----------
.../testing/selftests/kvm/include/memstress.h | 4 +--
.../selftests/kvm/include/x86/processor.h | 4 +--
tools/testing/selftests/kvm/lib/kvm_util.c | 14 +++++-----
.../selftests/kvm/lib/s390/processor.c | 2 +-
.../kvm/memslot_modification_stress_test.c | 2 +-
.../testing/selftests/kvm/memslot_perf_test.c | 10 +++----
tools/testing/selftests/kvm/mmu_stress_test.c | 4 +--
.../selftests/kvm/pre_fault_memory_test.c | 4 +--
.../selftests/kvm/s390/ucontrol_test.c | 2 +-
.../selftests/kvm/set_memory_region_test.c | 2 +-
.../kvm/x86/private_mem_conversions_test.c | 24 ++++++++---------
.../x86/smaller_maxphyaddr_emulation_test.c | 2 +-
.../selftests/kvm/x86/svm_nested_vmcb12_gpa.c | 8 +++---
15 files changed, 55 insertions(+), 55 deletions(-)
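For readers less familiar with the kernel-style address types, here is a minimal sketch of what this rename amounts to. The underlying type (u64) and the helper below are assumptions for illustration only; the real typedefs live in the selftests/kernel headers, and the actual signature changes are in the diff that follows.

```c
#include <assert.h>
#include <stdint.h>

/* Hedged sketch: gpa_t's underlying type is assumed to be u64 here. */
typedef uint64_t u64;
typedef u64 gpa_t;	/* guest physical address */

/*
 * Before: int __vm_set_user_memory_region(..., u64 gpa, u64 size, void *hva);
 * After:  int __vm_set_user_memory_region(..., gpa_t gpa, u64 size, void *hva);
 *
 * Because gpa_t is a plain typedef, arithmetic and masking on GPAs are
 * unchanged; only the declared intent of the variable improves.
 */
static gpa_t gpa_align_down(gpa_t gpa, u64 page_size)
{
	return gpa & ~(page_size - 1);
}
```

The typedef carries no behavioral change, which is why the patch can claim "no functional change intended" across 15 files.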
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index 9cbd3ad7f44a..d6528c6f5e03 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -489,7 +489,7 @@ static void test_guest_memfd_guest(void)
* the guest's code, stack, and page tables, and low memory contains
* the PCI hole and other MMIO regions that need to be avoided.
*/
- const u64 gpa = SZ_4G;
+ const gpa_t gpa = SZ_4G;
const int slot = 1;
struct kvm_vcpu *vcpu;
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 0dcfad728edd..0d9f11be9806 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -114,7 +114,7 @@ struct kvm_vm {
gpa_t ucall_mmio_addr;
gva_t handlers;
u32 dirty_ring_size;
- u64 gpa_tag_mask;
+ gpa_t gpa_tag_mask;
/*
* "mmu" is the guest's stage-1, with a short name because the vast
@@ -418,7 +418,7 @@ static inline void vm_enable_cap(struct kvm_vm *vm, u32 cap, u64 arg0)
vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
}
-static inline void vm_set_memory_attributes(struct kvm_vm *vm, u64 gpa,
+static inline void vm_set_memory_attributes(struct kvm_vm *vm, gpa_t gpa,
u64 size, u64 attributes)
{
struct kvm_memory_attributes attr = {
@@ -439,28 +439,28 @@ static inline void vm_set_memory_attributes(struct kvm_vm *vm, u64 gpa,
}
-static inline void vm_mem_set_private(struct kvm_vm *vm, u64 gpa,
+static inline void vm_mem_set_private(struct kvm_vm *vm, gpa_t gpa,
u64 size)
{
vm_set_memory_attributes(vm, gpa, size, KVM_MEMORY_ATTRIBUTE_PRIVATE);
}
-static inline void vm_mem_set_shared(struct kvm_vm *vm, u64 gpa,
+static inline void vm_mem_set_shared(struct kvm_vm *vm, gpa_t gpa,
u64 size)
{
vm_set_memory_attributes(vm, gpa, size, 0);
}
-void vm_guest_mem_fallocate(struct kvm_vm *vm, u64 gpa, u64 size,
+void vm_guest_mem_fallocate(struct kvm_vm *vm, gpa_t gpa, u64 size,
bool punch_hole);
-static inline void vm_guest_mem_punch_hole(struct kvm_vm *vm, u64 gpa,
+static inline void vm_guest_mem_punch_hole(struct kvm_vm *vm, gpa_t gpa,
u64 size)
{
vm_guest_mem_fallocate(vm, gpa, size, true);
}
-static inline void vm_guest_mem_allocate(struct kvm_vm *vm, u64 gpa,
+static inline void vm_guest_mem_allocate(struct kvm_vm *vm, gpa_t gpa,
u64 size)
{
vm_guest_mem_fallocate(vm, gpa, size, false);
@@ -685,21 +685,21 @@ static inline int vm_create_guest_memfd(struct kvm_vm *vm, u64 size,
}
void vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
- u64 gpa, u64 size, void *hva);
+ gpa_t gpa, u64 size, void *hva);
int __vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
- u64 gpa, u64 size, void *hva);
+ gpa_t gpa, u64 size, void *hva);
void vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
- u64 gpa, u64 size, void *hva,
+ gpa_t gpa, u64 size, void *hva,
u32 guest_memfd, u64 guest_memfd_offset);
int __vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
- u64 gpa, u64 size, void *hva,
+ gpa_t gpa, u64 size, void *hva,
u32 guest_memfd, u64 guest_memfd_offset);
void vm_userspace_mem_region_add(struct kvm_vm *vm,
enum vm_mem_backing_src_type src_type,
- u64 gpa, u32 slot, u64 npages, u32 flags);
+ gpa_t gpa, u32 slot, u64 npages, u32 flags);
void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
- u64 gpa, u32 slot, u64 npages, u32 flags,
+ gpa_t gpa, u32 slot, u64 npages, u32 flags,
int guest_memfd_fd, u64 guest_memfd_offset);
#ifndef vm_arch_has_protected_memory
diff --git a/tools/testing/selftests/kvm/include/memstress.h b/tools/testing/selftests/kvm/include/memstress.h
index abd0dca10283..0d1d6230cc05 100644
--- a/tools/testing/selftests/kvm/include/memstress.h
+++ b/tools/testing/selftests/kvm/include/memstress.h
@@ -20,7 +20,7 @@
#define MEMSTRESS_MEM_SLOT_INDEX 1
struct memstress_vcpu_args {
- u64 gpa;
+ gpa_t gpa;
gva_t gva;
u64 pages;
@@ -32,7 +32,7 @@ struct memstress_vcpu_args {
struct memstress_args {
struct kvm_vm *vm;
/* The starting address and size of the guest test region. */
- u64 gpa;
+ gpa_t gpa;
u64 size;
u64 guest_page_size;
u32 random_seed;
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 15252e75aaf1..fc7efd722229 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1400,12 +1400,12 @@ u64 kvm_hypercall(u64 nr, u64 a0, u64 a1, u64 a2, u64 a3);
u64 __xen_hypercall(u64 nr, u64 a0, void *a1);
void xen_hypercall(u64 nr, u64 a0, void *a1);
-static inline u64 __kvm_hypercall_map_gpa_range(u64 gpa, u64 size, u64 flags)
+static inline u64 __kvm_hypercall_map_gpa_range(gpa_t gpa, u64 size, u64 flags)
{
return kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> PAGE_SHIFT, flags, 0);
}
-static inline void kvm_hypercall_map_gpa_range(u64 gpa, u64 size, u64 flags)
+static inline void kvm_hypercall_map_gpa_range(gpa_t gpa, u64 size, u64 flags)
{
u64 ret = __kvm_hypercall_map_gpa_range(gpa, size, flags);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index e282f9abd4c7..905fa214099d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -919,7 +919,7 @@ static void vm_userspace_mem_region_hva_insert(struct rb_root *hva_tree,
int __vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
- u64 gpa, u64 size, void *hva)
+ gpa_t gpa, u64 size, void *hva)
{
struct kvm_userspace_memory_region region = {
.slot = slot,
@@ -933,7 +933,7 @@ int __vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
}
void vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
- u64 gpa, u64 size, void *hva)
+ gpa_t gpa, u64 size, void *hva)
{
int ret = __vm_set_user_memory_region(vm, slot, flags, gpa, size, hva);
@@ -946,7 +946,7 @@ void vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
"KVM selftests now require KVM_SET_USER_MEMORY_REGION2 (introduced in v6.8)")
int __vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
- u64 gpa, u64 size, void *hva,
+ gpa_t gpa, u64 size, void *hva,
u32 guest_memfd, u64 guest_memfd_offset)
{
struct kvm_userspace_memory_region2 region = {
@@ -965,7 +965,7 @@ int __vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
}
void vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
- u64 gpa, u64 size, void *hva,
+ gpa_t gpa, u64 size, void *hva,
u32 guest_memfd, u64 guest_memfd_offset)
{
int ret = __vm_set_user_memory_region2(vm, slot, flags, gpa, size, hva,
@@ -978,7 +978,7 @@ void vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
/* FIXME: This thing needs to be ripped apart and rewritten. */
void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
- u64 gpa, u32 slot, u64 npages, u32 flags,
+ gpa_t gpa, u32 slot, u64 npages, u32 flags,
int guest_memfd, u64 guest_memfd_offset)
{
int ret;
@@ -1141,7 +1141,7 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
void vm_userspace_mem_region_add(struct kvm_vm *vm,
enum vm_mem_backing_src_type src_type,
- u64 gpa, u32 slot, u64 npages, u32 flags)
+ gpa_t gpa, u32 slot, u64 npages, u32 flags)
{
vm_mem_add(vm, src_type, gpa, slot, npages, flags, -1, 0);
}
@@ -1278,7 +1278,7 @@ void vm_guest_mem_fallocate(struct kvm_vm *vm, u64 base, u64 size,
const int mode = FALLOC_FL_KEEP_SIZE | (punch_hole ? FALLOC_FL_PUNCH_HOLE : 0);
struct userspace_mem_region *region;
u64 end = base + size;
- u64 gpa, len;
+ gpa_t gpa, len;
off_t fd_offset;
int ret;
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index 643e583c804c..77a7b6965812 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -47,7 +47,7 @@ static u64 virt_alloc_region(struct kvm_vm *vm, int ri)
| ((ri < 4 ? (PAGES_PER_REGION - 1) : 0) & REGION_ENTRY_LENGTH);
}
-void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 gpa)
+void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa)
{
int ri, idx;
u64 *entry;
diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index 9d7c4afab961..9c7578a098c3 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -58,7 +58,7 @@ static void add_remove_memslot(struct kvm_vm *vm, useconds_t delay,
u64 nr_modifications)
{
u64 pages = max_t(int, vm->page_size, getpagesize()) / vm->page_size;
- u64 gpa;
+ gpa_t gpa;
int i;
/*
diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index 51f8be50c7e4..3d02db371422 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -186,9 +186,9 @@ static void wait_for_vcpu(void)
"sem_timedwait() failed: %d", errno);
}
-static void *vm_gpa2hva(struct vm_data *data, u64 gpa, u64 *rempages)
+static void *vm_gpa2hva(struct vm_data *data, gpa_t gpa, u64 *rempages)
{
- u64 gpage, pgoffs;
+ gpa_t gpage, pgoffs;
u32 slot, slotoffs;
void *base;
u32 guest_page_size = data->vm->page_size;
@@ -332,7 +332,7 @@ static bool prepare_vm(struct vm_data *data, int nslots, u64 *maxslots,
for (slot = 1, guest_addr = MEM_GPA; slot <= data->nslots; slot++) {
u64 npages;
- u64 gpa;
+ gpa_t gpa;
npages = data->pages_per_slot;
if (slot == data->nslots)
@@ -638,7 +638,7 @@ static void test_memslot_move_loop(struct vm_data *data, struct sync_area *sync)
static void test_memslot_do_unmap(struct vm_data *data,
u64 offsp, u64 count)
{
- u64 gpa, ctr;
+ gpa_t gpa, ctr;
u32 guest_page_size = data->vm->page_size;
for (gpa = MEM_TEST_GPA + offsp * guest_page_size, ctr = 0; ctr < count; ) {
@@ -663,7 +663,7 @@ static void test_memslot_do_unmap(struct vm_data *data,
static void test_memslot_map_unmap_check(struct vm_data *data,
u64 offsp, u64 valexp)
{
- u64 gpa;
+ gpa_t gpa;
u64 *val;
u32 guest_page_size = data->vm->page_size;
diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index e0975a5dcff1..54d281419d31 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -22,7 +22,7 @@ static bool all_vcpus_hit_ro_fault;
static void guest_code(u64 start_gpa, u64 end_gpa, u64 stride)
{
- u64 gpa;
+ gpa_t gpa;
int i;
for (i = 0; i < 2; i++) {
@@ -206,7 +206,7 @@ static pthread_t *spawn_workers(struct kvm_vm *vm, struct kvm_vcpu **vcpus,
u64 start_gpa, u64 end_gpa)
{
struct vcpu_info *info;
- u64 gpa, nr_bytes;
+ gpa_t gpa, nr_bytes;
pthread_t *threads;
int i;
diff --git a/tools/testing/selftests/kvm/pre_fault_memory_test.c b/tools/testing/selftests/kvm/pre_fault_memory_test.c
index bfdaaeed3a8c..fcb57fd034e6 100644
--- a/tools/testing/selftests/kvm/pre_fault_memory_test.c
+++ b/tools/testing/selftests/kvm/pre_fault_memory_test.c
@@ -33,7 +33,7 @@ static void guest_code(u64 base_gva)
struct slot_worker_data {
struct kvm_vm *vm;
- u64 gpa;
+ gpa_t gpa;
u32 flags;
bool worker_ready;
bool prefault_ready;
@@ -161,7 +161,7 @@ static void pre_fault_memory(struct kvm_vcpu *vcpu, u64 base_gpa, u64 offset,
static void __test_pre_fault_memory(unsigned long vm_type, bool private)
{
- u64 gpa, gva, alignment, guest_page_size;
+ gpa_t gpa, gva, alignment, guest_page_size;
const struct vm_shape shape = {
.mode = VM_MODE_DEFAULT,
.type = vm_type,
diff --git a/tools/testing/selftests/kvm/s390/ucontrol_test.c b/tools/testing/selftests/kvm/s390/ucontrol_test.c
index dbdee4c39d47..b8c6f37b53e0 100644
--- a/tools/testing/selftests/kvm/s390/ucontrol_test.c
+++ b/tools/testing/selftests/kvm/s390/ucontrol_test.c
@@ -269,7 +269,7 @@ TEST(uc_cap_hpage)
}
/* calculate host virtual addr from guest physical addr */
-static void *gpa2hva(FIXTURE_DATA(uc_kvm) *self, u64 gpa)
+static void *gpa2hva(FIXTURE_DATA(uc_kvm) *self, gpa_t gpa)
{
return (void *)(self->base_hva - self->base_gpa + gpa);
}
diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index 5551dd0f9fad..9b919a231c93 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -112,7 +112,7 @@ static struct kvm_vm *spawn_vm(struct kvm_vcpu **vcpu, pthread_t *vcpu_thread,
{
struct kvm_vm *vm;
u64 *hva;
- u64 gpa;
+ gpa_t gpa;
vm = vm_create_with_one_vcpu(vcpu, guest_code);
diff --git a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
index 27675d7d04c0..1d2f5d4fd45d 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
@@ -38,7 +38,7 @@ do { \
pattern, i, gpa + i, mem[i]); \
} while (0)
-static void memcmp_h(u8 *mem, u64 gpa, u8 pattern, size_t size)
+static void memcmp_h(u8 *mem, gpa_t gpa, u8 pattern, size_t size)
{
size_t i;
@@ -70,13 +70,13 @@ enum ucall_syncs {
SYNC_PRIVATE,
};
-static void guest_sync_shared(u64 gpa, u64 size,
+static void guest_sync_shared(gpa_t gpa, u64 size,
u8 current_pattern, u8 new_pattern)
{
GUEST_SYNC5(SYNC_SHARED, gpa, size, current_pattern, new_pattern);
}
-static void guest_sync_private(u64 gpa, u64 size, u8 pattern)
+static void guest_sync_private(gpa_t gpa, u64 size, u8 pattern)
{
GUEST_SYNC4(SYNC_PRIVATE, gpa, size, pattern);
}
@@ -86,7 +86,7 @@ static void guest_sync_private(u64 gpa, u64 size, u8 pattern)
#define MAP_GPA_SHARED BIT(1)
#define MAP_GPA_DO_FALLOCATE BIT(2)
-static void guest_map_mem(u64 gpa, u64 size, bool map_shared,
+static void guest_map_mem(gpa_t gpa, u64 size, bool map_shared,
bool do_fallocate)
{
u64 flags = MAP_GPA_SET_ATTRIBUTES;
@@ -98,12 +98,12 @@ static void guest_map_mem(u64 gpa, u64 size, bool map_shared,
kvm_hypercall_map_gpa_range(gpa, size, flags);
}
-static void guest_map_shared(u64 gpa, u64 size, bool do_fallocate)
+static void guest_map_shared(gpa_t gpa, u64 size, bool do_fallocate)
{
guest_map_mem(gpa, size, true, do_fallocate);
}
-static void guest_map_private(u64 gpa, u64 size, bool do_fallocate)
+static void guest_map_private(gpa_t gpa, u64 size, bool do_fallocate)
{
guest_map_mem(gpa, size, false, do_fallocate);
}
@@ -134,7 +134,7 @@ static void guest_test_explicit_conversion(u64 base_gpa, bool do_fallocate)
memcmp_g(base_gpa, init_p, PER_CPU_DATA_SIZE);
for (i = 0; i < ARRAY_SIZE(test_ranges); i++) {
- u64 gpa = base_gpa + test_ranges[i].offset;
+ gpa_t gpa = base_gpa + test_ranges[i].offset;
u64 size = test_ranges[i].size;
u8 p1 = 0x11;
u8 p2 = 0x22;
@@ -214,7 +214,7 @@ static void guest_test_explicit_conversion(u64 base_gpa, bool do_fallocate)
}
}
-static void guest_punch_hole(u64 gpa, u64 size)
+static void guest_punch_hole(gpa_t gpa, u64 size)
{
/* "Mapping" memory shared via fallocate() is done via PUNCH_HOLE. */
u64 flags = MAP_GPA_SHARED | MAP_GPA_DO_FALLOCATE;
@@ -239,7 +239,7 @@ static void guest_test_punch_hole(u64 base_gpa, bool precise)
guest_map_private(base_gpa, PER_CPU_DATA_SIZE, false);
for (i = 0; i < ARRAY_SIZE(test_ranges); i++) {
- u64 gpa = base_gpa + test_ranges[i].offset;
+ gpa_t gpa = base_gpa + test_ranges[i].offset;
u64 size = test_ranges[i].size;
/*
@@ -289,7 +289,7 @@ static void guest_code(u64 base_gpa)
static void handle_exit_hypercall(struct kvm_vcpu *vcpu)
{
struct kvm_run *run = vcpu->run;
- u64 gpa = run->hypercall.args[0];
+ gpa_t gpa = run->hypercall.args[0];
u64 size = run->hypercall.args[1] * PAGE_SIZE;
bool set_attributes = run->hypercall.args[2] & MAP_GPA_SET_ATTRIBUTES;
bool map_shared = run->hypercall.args[2] & MAP_GPA_SHARED;
@@ -337,7 +337,7 @@ static void *__test_mem_conversions(void *__vcpu)
case UCALL_ABORT:
REPORT_GUEST_ASSERT(uc);
case UCALL_SYNC: {
- u64 gpa = uc.args[1];
+ gpa_t gpa = uc.args[1];
size_t size = uc.args[2];
size_t i;
@@ -402,7 +402,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, u32 nr_v
KVM_MEM_GUEST_MEMFD, memfd, slot_size * i);
for (i = 0; i < nr_vcpus; i++) {
- u64 gpa = BASE_DATA_GPA + i * per_cpu_size;
+ gpa_t gpa = BASE_DATA_GPA + i * per_cpu_size;
vcpu_args_set(vcpus[i], 1, gpa);
diff --git a/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c b/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c
index 27cded643699..3dca85e95478 100644
--- a/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c
+++ b/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c
@@ -48,7 +48,7 @@ int main(int argc, char *argv[])
struct kvm_vm *vm;
struct ucall uc;
u64 *hva;
- u64 gpa;
+ gpa_t gpa;
int rc;
TEST_REQUIRE(kvm_has_cap(KVM_CAP_SMALLER_MAXPHYADDR));
diff --git a/tools/testing/selftests/kvm/x86/svm_nested_vmcb12_gpa.c b/tools/testing/selftests/kvm/x86/svm_nested_vmcb12_gpa.c
index ae8a10913af7..a4935ce2fb99 100644
--- a/tools/testing/selftests/kvm/x86/svm_nested_vmcb12_gpa.c
+++ b/tools/testing/selftests/kvm/x86/svm_nested_vmcb12_gpa.c
@@ -28,28 +28,28 @@ static void l2_code(void)
vmcall();
}
-static void l1_vmrun(struct svm_test_data *svm, u64 gpa)
+static void l1_vmrun(struct svm_test_data *svm, gpa_t gpa)
{
generic_svm_setup(svm, l2_code, &l2_guest_stack[L2_GUEST_STACK_SIZE]);
asm volatile ("vmrun %[gpa]" : : [gpa] "a" (gpa) : "memory");
}
-static void l1_vmload(struct svm_test_data *svm, u64 gpa)
+static void l1_vmload(struct svm_test_data *svm, gpa_t gpa)
{
generic_svm_setup(svm, l2_code, &l2_guest_stack[L2_GUEST_STACK_SIZE]);
asm volatile ("vmload %[gpa]" : : [gpa] "a" (gpa) : "memory");
}
-static void l1_vmsave(struct svm_test_data *svm, u64 gpa)
+static void l1_vmsave(struct svm_test_data *svm, gpa_t gpa)
{
generic_svm_setup(svm, l2_code, &l2_guest_stack[L2_GUEST_STACK_SIZE]);
asm volatile ("vmsave %[gpa]" : : [gpa] "a" (gpa) : "memory");
}
-static void l1_vmexit(struct svm_test_data *svm, u64 gpa)
+static void l1_vmexit(struct svm_test_data *svm, gpa_t gpa)
{
generic_svm_setup(svm, l2_code, &l2_guest_stack[L2_GUEST_STACK_SIZE]);
--
2.54.0.rc1.555.g9c883467ad-goog
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
* [PATCH v3 18/19] KVM: selftests: Replace "u64 nested_paddr" with "gpa_t l2_gpa"
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (14 preceding siblings ...)
2026-04-20 21:20 ` [PATCH v3 17/19] KVM: selftests: Replace "u64 gpa" with "gpa_t" throughout Sean Christopherson
@ 2026-04-20 21:20 ` Sean Christopherson
2026-04-20 21:20 ` [PATCH v3 19/19] KVM: selftests: Replace "paddr" with "gpa" throughout Sean Christopherson
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:20 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
In x86's nested TDP APIs, use the appropriate gpa_t typedef and rename
variables from nested_paddr to l2_gpa to match KVM x86's nomenclature.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
.../testing/selftests/kvm/include/x86/processor.h | 2 +-
tools/testing/selftests/kvm/lib/x86/processor.c | 14 ++++++--------
2 files changed, 7 insertions(+), 9 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index fc7efd722229..97dc887658c3 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1514,7 +1514,7 @@ void virt_map_level(struct kvm_vm *vm, gva_t gva, u64 paddr,
void vm_enable_tdp(struct kvm_vm *vm);
bool kvm_cpu_has_tdp(void);
-void tdp_map(struct kvm_vm *vm, u64 nested_paddr, u64 paddr, u64 size);
+void tdp_map(struct kvm_vm *vm, gpa_t l2_gpa, u64 paddr, u64 size);
void tdp_identity_map_default_memslots(struct kvm_vm *vm);
void tdp_identity_map_1g(struct kvm_vm *vm, u64 addr, u64 size);
u64 *tdp_get_pte(struct kvm_vm *vm, u64 l2_gpa);
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 3c55980c81b2..892cc517d9f1 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -495,26 +495,24 @@ bool kvm_cpu_has_tdp(void)
return kvm_cpu_has_ept() || kvm_cpu_has_npt();
}
-void __tdp_map(struct kvm_vm *vm, u64 nested_paddr, u64 paddr,
- u64 size, int level)
+void __tdp_map(struct kvm_vm *vm, gpa_t l2_gpa, u64 paddr, u64 size, int level)
{
size_t page_size = PG_LEVEL_SIZE(level);
size_t npages = size / page_size;
- TEST_ASSERT(nested_paddr + size > nested_paddr, "Vaddr overflow");
+ TEST_ASSERT(l2_gpa + size > l2_gpa, "L2 GPA overflow");
TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
while (npages--) {
- __virt_pg_map(vm, &vm->stage2_mmu, nested_paddr, paddr, level);
- nested_paddr += page_size;
+ __virt_pg_map(vm, &vm->stage2_mmu, l2_gpa, paddr, level);
+ l2_gpa += page_size;
paddr += page_size;
}
}
-void tdp_map(struct kvm_vm *vm, u64 nested_paddr, u64 paddr,
- u64 size)
+void tdp_map(struct kvm_vm *vm, gpa_t l2_gpa, u64 paddr, u64 size)
{
- __tdp_map(vm, nested_paddr, paddr, size, PG_LEVEL_4K);
+ __tdp_map(vm, l2_gpa, paddr, size, PG_LEVEL_4K);
}
/* Prepare an identity extended page table that maps all the
--
2.54.0.rc1.555.g9c883467ad-goog
* [PATCH v3 19/19] KVM: selftests: Replace "paddr" with "gpa" throughout
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
` (15 preceding siblings ...)
2026-04-20 21:20 ` [PATCH v3 18/19] KVM: selftests: Replace "u64 nested_paddr" with "gpa_t l2_gpa" Sean Christopherson
@ 2026-04-20 21:20 ` Sean Christopherson
16 siblings, 0 replies; 18+ messages in thread
From: Sean Christopherson @ 2026-04-20 21:20 UTC (permalink / raw)
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
Sean Christopherson
Cc: kvm, linux-arm-kernel, kvmarm, loongarch, kvm-riscv, linux-riscv,
linux-kernel, David Matlack
Replace all variations of "paddr" variables in KVM selftests with "gpa",
with the exception of the ELF structures, as those fields are not specific
to guest physical addresses, to complete the conversion from vm_paddr_t to
gpa_t.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
.../testing/selftests/kvm/arm64/sea_to_user.c | 2 +-
.../testing/selftests/kvm/include/kvm_util.h | 23 ++++----
.../selftests/kvm/include/x86/processor.h | 6 +--
.../selftests/kvm/lib/arm64/processor.c | 22 ++++----
tools/testing/selftests/kvm/lib/kvm_util.c | 53 +++++++++----------
.../selftests/kvm/lib/loongarch/processor.c | 14 ++---
.../selftests/kvm/lib/riscv/processor.c | 16 +++---
.../selftests/kvm/lib/s390/processor.c | 12 ++---
.../testing/selftests/kvm/lib/x86/processor.c | 50 ++++++++---------
9 files changed, 98 insertions(+), 100 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/sea_to_user.c b/tools/testing/selftests/kvm/arm64/sea_to_user.c
index e16034852470..e96d8982c28b 100644
--- a/tools/testing/selftests/kvm/arm64/sea_to_user.c
+++ b/tools/testing/selftests/kvm/arm64/sea_to_user.c
@@ -275,7 +275,7 @@ static struct kvm_vm *vm_create_with_sea_handler(struct kvm_vcpu **vcpu)
vm_userspace_mem_region_add(
/*vm=*/vm,
/*src_type=*/src_type,
- /*guest_paddr=*/start_gpa,
+ /*gpa=*/start_gpa,
/*slot=*/1,
/*npages=*/num_guest_pages,
/*flags=*/0);
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 0d9f11be9806..2ecaaa0e9965 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -725,7 +725,7 @@ gva_t vm_alloc_pages(struct kvm_vm *vm, int nr_pages);
gva_t __vm_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type);
gva_t vm_alloc_page(struct kvm_vm *vm);
-void virt_map(struct kvm_vm *vm, gva_t gva, u64 paddr,
+void virt_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa,
unsigned int npages);
void *addr_gpa2hva(struct kvm_vm *vm, gpa_t gpa);
void *addr_gva2hva(struct kvm_vm *vm, gva_t gva);
@@ -990,21 +990,20 @@ void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
const char *exit_reason_str(unsigned int exit_reason);
-gpa_t vm_phy_page_alloc(struct kvm_vm *vm, gpa_t paddr_min, u32 memslot);
-gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- gpa_t paddr_min, u32 memslot,
- bool protected);
+gpa_t vm_phy_page_alloc(struct kvm_vm *vm, gpa_t min_gpa, u32 memslot);
+gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num, gpa_t min_gpa,
+ u32 memslot, bool protected);
gpa_t vm_alloc_page_table(struct kvm_vm *vm);
static inline gpa_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- gpa_t paddr_min, u32 memslot)
+ gpa_t min_gpa, u32 memslot)
{
/*
* By default, allocate memory as protected for VMs that support
* protected memory, as the majority of memory for such VMs is
* protected, i.e. using shared memory is effectively opt-in.
*/
- return __vm_phy_pages_alloc(vm, num, paddr_min, memslot,
+ return __vm_phy_pages_alloc(vm, num, min_gpa, memslot,
vm_arch_has_protected_memory(vm));
}
@@ -1203,13 +1202,13 @@ static inline void virt_pgd_alloc(struct kvm_vm *vm)
/*
* Within @vm, creates a virtual translation for the page starting
- * at @gva to the page starting at @paddr.
+ * at @gva to the page starting at @gpa.
*/
-void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr);
+void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa);
-static inline void virt_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr)
+static inline void virt_pg_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa)
{
- virt_arch_pg_map(vm, gva, paddr);
+ virt_arch_pg_map(vm, gva, gpa);
sparsebit_set(vm->vpages_mapped, gva >> vm->page_shift);
}
@@ -1280,7 +1279,7 @@ void kvm_arch_vm_post_create(struct kvm_vm *vm, unsigned int nr_vcpus);
void kvm_arch_vm_finalize_vcpus(struct kvm_vm *vm);
void kvm_arch_vm_release(struct kvm_vm *vm);
-bool vm_is_gpa_protected(struct kvm_vm *vm, gpa_t paddr);
+bool vm_is_gpa_protected(struct kvm_vm *vm, gpa_t gpa);
u32 guest_get_vcpuid(void);
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 97dc887658c3..77f576ee7789 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1508,13 +1508,13 @@ void tdp_mmu_init(struct kvm_vm *vm, int pgtable_levels,
struct pte_masks *pte_masks);
void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, gva_t gva,
- u64 paddr, int level);
-void virt_map_level(struct kvm_vm *vm, gva_t gva, u64 paddr,
+ gpa_t gpa, int level);
+void virt_map_level(struct kvm_vm *vm, gva_t gva, gpa_t gpa,
u64 nr_bytes, int level);
void vm_enable_tdp(struct kvm_vm *vm);
bool kvm_cpu_has_tdp(void);
-void tdp_map(struct kvm_vm *vm, gpa_t l2_gpa, u64 paddr, u64 size);
+void tdp_map(struct kvm_vm *vm, gpa_t l2_gpa, gpa_t gpa, u64 size);
void tdp_identity_map_default_memslots(struct kvm_vm *vm);
void tdp_identity_map_1g(struct kvm_vm *vm, u64 addr, u64 size);
u64 *tdp_get_pte(struct kvm_vm *vm, u64 l2_gpa);
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index 0f693d8891d2..01325bf4d36f 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -121,7 +121,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
vm->mmu.pgd_created = true;
}
-static void _virt_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr,
+static void _virt_pg_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa,
u64 flags)
{
u8 attr_idx = flags & (PTE_ATTRINDX_MASK >> PTE_ATTRINDX_SHIFT);
@@ -133,13 +133,13 @@ static void _virt_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr,
" gva: 0x%lx vm->page_size: 0x%x", gva, vm->page_size);
TEST_ASSERT(sparsebit_is_set(vm->vpages_valid, (gva >> vm->page_shift)),
"Invalid virtual address, gva: 0x%lx", gva);
- TEST_ASSERT((paddr % vm->page_size) == 0,
- "Physical address not on page boundary,\n"
- " paddr: 0x%lx vm->page_size: 0x%x", paddr, vm->page_size);
- TEST_ASSERT((paddr >> vm->page_shift) <= vm->max_gfn,
- "Physical address beyond beyond maximum supported,\n"
- " paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
- paddr, vm->max_gfn, vm->page_size);
+ TEST_ASSERT((gpa % vm->page_size) == 0,
+ "Physical address not on page boundary,\n"
+ " gpa: 0x%lx vm->page_size: 0x%x", gpa, vm->page_size);
+ TEST_ASSERT((gpa >> vm->page_shift) <= vm->max_gfn,
+ "Physical address beyond beyond maximum supported,\n"
+ " gpa: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
+ gpa, vm->max_gfn, vm->page_size);
ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pgd_index(vm, gva) * 8;
if (!*ptep)
@@ -170,14 +170,14 @@ static void _virt_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr,
if (!use_lpa2_pte_format(vm))
pg_attr |= PTE_SHARED;
- *ptep = addr_pte(vm, paddr, pg_attr);
+ *ptep = addr_pte(vm, gpa, pg_attr);
}
-void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr)
+void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa)
{
u64 attr_idx = MT_NORMAL;
- _virt_pg_map(vm, gva, paddr, attr_idx);
+ _virt_pg_map(vm, gva, gpa, attr_idx);
}
u64 *virt_get_pte_hva_at_level(struct kvm_vm *vm, gva_t gva, int level)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 905fa214099d..2a76eca7029d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1027,8 +1027,8 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
TEST_FAIL("A mem region with the requested slot "
"already exists.\n"
- " requested slot: %u paddr: 0x%lx npages: 0x%lx\n"
- " existing slot: %u paddr: 0x%lx size: 0x%lx",
+ " requested slot: %u gpa: 0x%lx npages: 0x%lx\n"
+ " existing slot: %u gpa: 0x%lx size: 0x%lx",
slot, gpa, npages, region->region.slot,
(u64)region->region.guest_phys_addr,
(u64)region->region.memory_size);
@@ -1442,7 +1442,7 @@ static gva_t ____vm_alloc(struct kvm_vm *vm, size_t sz, gva_t min_gva,
u64 pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
virt_pgd_alloc(vm);
- gpa_t paddr = __vm_phy_pages_alloc(vm, pages,
+ gpa_t gpa = __vm_phy_pages_alloc(vm, pages,
KVM_UTIL_MIN_PFN * vm->page_size,
vm->memslots[type], protected);
@@ -1454,9 +1454,9 @@ static gva_t ____vm_alloc(struct kvm_vm *vm, size_t sz, gva_t min_gva,
/* Map the virtual pages. */
for (gva_t gva = gva_start; pages > 0;
- pages--, gva += vm->page_size, paddr += vm->page_size) {
+ pages--, gva += vm->page_size, gpa += vm->page_size) {
- virt_pg_map(vm, gva, paddr);
+ virt_pg_map(vm, gva, gpa);
}
return gva_start;
@@ -1506,22 +1506,21 @@ gva_t vm_alloc_page(struct kvm_vm *vm)
* Map a range of VM virtual address to the VM's physical address.
*
* Within the VM given by @vm, creates a virtual translation for @npages
- * starting at @gva to the page range starting at @paddr.
+ * starting at @gva to the page range starting at @gpa.
*/
-void virt_map(struct kvm_vm *vm, gva_t gva, u64 paddr,
- unsigned int npages)
+void virt_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa, unsigned int npages)
{
size_t page_size = vm->page_size;
size_t size = npages * page_size;
TEST_ASSERT(gva + size > gva, "Vaddr overflow");
- TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
+ TEST_ASSERT(gpa + size > gpa, "Paddr overflow");
while (npages--) {
- virt_pg_map(vm, gva, paddr);
+ virt_pg_map(vm, gva, gpa);
gva += page_size;
- paddr += page_size;
+ gpa += page_size;
}
}
@@ -2008,7 +2007,7 @@ const char *exit_reason_str(unsigned int exit_reason)
* Input Args:
* vm - Virtual Machine
* num - number of pages
- * paddr_min - Physical address minimum
+ * min_gpa - Physical address minimum
* memslot - Memory region to allocate page from
* protected - True if the pages will be used as protected/private memory
*
@@ -2018,12 +2017,12 @@ const char *exit_reason_str(unsigned int exit_reason)
* Starting physical address
*
* Within the VM specified by vm, locates a range of available physical
- * pages at or above paddr_min. If found, the pages are marked as in use
+ * pages at or above min_gpa. If found, the pages are marked as in use
* and their base address is returned. A TEST_ASSERT failure occurs if
- * not enough pages are available at or above paddr_min.
+ * not enough pages are available at or above min_gpa.
*/
gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- gpa_t paddr_min, u32 memslot,
+ gpa_t min_gpa, u32 memslot,
bool protected)
{
struct userspace_mem_region *region;
@@ -2031,16 +2030,16 @@ gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
TEST_ASSERT(num > 0, "Must allocate at least one page");
- TEST_ASSERT((paddr_min % vm->page_size) == 0, "Min physical address "
+ TEST_ASSERT((min_gpa % vm->page_size) == 0, "Min physical address "
"not divisible by page size.\n"
- " paddr_min: 0x%lx page_size: 0x%x",
- paddr_min, vm->page_size);
+ " min_gpa: 0x%lx page_size: 0x%x",
+ min_gpa, vm->page_size);
region = memslot2region(vm, memslot);
TEST_ASSERT(!protected || region->protected_phy_pages,
"Region doesn't support protected memory");
- base = pg = paddr_min >> vm->page_shift;
+ base = pg = min_gpa >> vm->page_shift;
do {
for (; pg < base + num; ++pg) {
if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
@@ -2052,8 +2051,8 @@ gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
if (pg == 0) {
fprintf(stderr, "No guest physical page available, "
- "paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
- paddr_min, vm->page_size, memslot);
+ "min_gpa: 0x%lx page_size: 0x%x memslot: %u\n",
+ min_gpa, vm->page_size, memslot);
fputs("---- vm dump ----\n", stderr);
vm_dump(stderr, vm, 2);
abort();
@@ -2068,9 +2067,9 @@ gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
return base * vm->page_size;
}
-gpa_t vm_phy_page_alloc(struct kvm_vm *vm, gpa_t paddr_min, u32 memslot)
+gpa_t vm_phy_page_alloc(struct kvm_vm *vm, gpa_t min_gpa, u32 memslot)
{
- return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
+ return vm_phy_pages_alloc(vm, 1, min_gpa, memslot);
}
gpa_t vm_alloc_page_table(struct kvm_vm *vm)
@@ -2287,7 +2286,7 @@ void __attribute((constructor)) kvm_selftest_init(void)
kvm_selftest_arch_init();
}
-bool vm_is_gpa_protected(struct kvm_vm *vm, gpa_t paddr)
+bool vm_is_gpa_protected(struct kvm_vm *vm, gpa_t gpa)
{
sparsebit_idx_t pg = 0;
struct userspace_mem_region *region;
@@ -2295,10 +2294,10 @@ bool vm_is_gpa_protected(struct kvm_vm *vm, gpa_t paddr)
if (!vm_arch_has_protected_memory(vm))
return false;
- region = userspace_mem_region_find(vm, paddr, paddr);
- TEST_ASSERT(region, "No vm physical memory at 0x%lx", paddr);
+ region = userspace_mem_region_find(vm, gpa, gpa);
+ TEST_ASSERT(region, "No vm physical memory at 0x%lx", gpa);
- pg = paddr >> vm->page_shift;
+ pg = gpa >> vm->page_shift;
return sparsebit_is_set(region->protected_phy_pages, pg);
}
diff --git a/tools/testing/selftests/kvm/lib/loongarch/processor.c b/tools/testing/selftests/kvm/lib/loongarch/processor.c
index 47e782056196..64d91fb76522 100644
--- a/tools/testing/selftests/kvm/lib/loongarch/processor.c
+++ b/tools/testing/selftests/kvm/lib/loongarch/processor.c
@@ -116,7 +116,7 @@ gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
}
-void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr)
+void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa)
{
u32 prot_bits;
u64 *ptep;
@@ -126,17 +126,17 @@ void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr)
"gva: 0x%lx vm->page_size: 0x%x", gva, vm->page_size);
TEST_ASSERT(sparsebit_is_set(vm->vpages_valid, (gva >> vm->page_shift)),
"Invalid virtual address, gva: 0x%lx", gva);
- TEST_ASSERT((paddr % vm->page_size) == 0,
+ TEST_ASSERT((gpa % vm->page_size) == 0,
"Physical address not on page boundary,\n"
- "paddr: 0x%lx vm->page_size: 0x%x", paddr, vm->page_size);
- TEST_ASSERT((paddr >> vm->page_shift) <= vm->max_gfn,
+ "gpa: 0x%lx vm->page_size: 0x%x", gpa, vm->page_size);
+ TEST_ASSERT((gpa >> vm->page_shift) <= vm->max_gfn,
"Physical address beyond maximum supported,\n"
- "paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
- paddr, vm->max_gfn, vm->page_size);
+ "gpa: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
+ gpa, vm->max_gfn, vm->page_size);
ptep = virt_populate_pte(vm, gva, 1);
prot_bits = _PAGE_PRESENT | __READABLE | __WRITEABLE | _CACHE_CC | _PAGE_USER;
- WRITE_ONCE(*ptep, paddr | prot_bits);
+ WRITE_ONCE(*ptep, gpa | prot_bits);
}
static void pte_dump(FILE *stream, struct kvm_vm *vm, u8 indent, u64 page, int level)
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index 108144fb858b..ded5429f3448 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -75,7 +75,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
vm->mmu.pgd_created = true;
}
-void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr)
+void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa)
{
u64 *ptep, next_ppn;
int level = vm->mmu.pgtable_levels - 1;
@@ -85,13 +85,13 @@ void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr)
" gva: 0x%lx vm->page_size: 0x%x", gva, vm->page_size);
TEST_ASSERT(sparsebit_is_set(vm->vpages_valid, (gva >> vm->page_shift)),
"Invalid virtual address, gva: 0x%lx", gva);
- TEST_ASSERT((paddr % vm->page_size) == 0,
+ TEST_ASSERT((gpa % vm->page_size) == 0,
"Physical address not on page boundary,\n"
- " paddr: 0x%lx vm->page_size: 0x%x", paddr, vm->page_size);
- TEST_ASSERT((paddr >> vm->page_shift) <= vm->max_gfn,
+ " gpa: 0x%lx vm->page_size: 0x%x", gpa, vm->page_size);
+ TEST_ASSERT((gpa >> vm->page_shift) <= vm->max_gfn,
"Physical address beyond maximum supported,\n"
- " paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
- paddr, vm->max_gfn, vm->page_size);
+ " gpa: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
+ gpa, vm->max_gfn, vm->page_size);
ptep = addr_gpa2hva(vm, vm->mmu.pgd) + pte_index(vm, gva, level) * 8;
if (!*ptep) {
@@ -113,8 +113,8 @@ void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr)
level--;
}
- paddr = paddr >> PGTBL_PAGE_SIZE_SHIFT;
- *ptep = (paddr << PGTBL_PTE_ADDR_SHIFT) |
+ gpa = gpa >> PGTBL_PAGE_SIZE_SHIFT;
+ *ptep = (gpa << PGTBL_PTE_ADDR_SHIFT) |
PGTBL_PTE_PERM_MASK | PGTBL_PTE_VALID_MASK;
}
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index 77a7b6965812..a9adb3782b35 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -12,7 +12,7 @@
void virt_arch_pgd_alloc(struct kvm_vm *vm)
{
- gpa_t paddr;
+ gpa_t gpa;
TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
vm->page_size);
@@ -20,12 +20,12 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
if (vm->mmu.pgd_created)
return;
- paddr = vm_phy_pages_alloc(vm, PAGES_PER_REGION,
+ gpa = vm_phy_pages_alloc(vm, PAGES_PER_REGION,
KVM_GUEST_PAGE_TABLE_MIN_PADDR,
vm->memslots[MEM_REGION_PT]);
- memset(addr_gpa2hva(vm, paddr), 0xff, PAGES_PER_REGION * vm->page_size);
+ memset(addr_gpa2hva(vm, gpa), 0xff, PAGES_PER_REGION * vm->page_size);
- vm->mmu.pgd = paddr;
+ vm->mmu.pgd = gpa;
vm->mmu.pgd_created = true;
}
@@ -60,11 +60,11 @@ void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa)
"Invalid virtual address, gva: 0x%lx", gva);
TEST_ASSERT((gpa % vm->page_size) == 0,
"Physical address not on page boundary,\n"
- " paddr: 0x%lx vm->page_size: 0x%x",
+ " gpa: 0x%lx vm->page_size: 0x%x",
gva, vm->page_size);
TEST_ASSERT((gpa >> vm->page_shift) <= vm->max_gfn,
"Physical address beyond beyond maximum supported,\n"
- " paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
+ " gpa: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
gva, vm->max_gfn, vm->page_size);
/* Walk through region and segment tables */
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 892cc517d9f1..b51467d70f6e 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -224,20 +224,20 @@ static u64 *virt_create_upper_pte(struct kvm_vm *vm,
struct kvm_mmu *mmu,
u64 *parent_pte,
gva_t gva,
- u64 paddr,
+ gpa_t gpa,
int current_level,
int target_level)
{
u64 *pte = virt_get_pte(vm, mmu, parent_pte, gva, current_level);
- paddr = vm_untag_gpa(vm, paddr);
+ gpa = vm_untag_gpa(vm, gpa);
if (!is_present_pte(mmu, pte)) {
*pte = PTE_PRESENT_MASK(mmu) | PTE_READABLE_MASK(mmu) |
PTE_WRITABLE_MASK(mmu) | PTE_EXECUTABLE_MASK(mmu) |
PTE_ALWAYS_SET_MASK(mmu);
if (current_level == target_level)
- *pte |= PTE_HUGE_MASK(mmu) | (paddr & PHYSICAL_PAGE_MASK);
+ *pte |= PTE_HUGE_MASK(mmu) | (gpa & PHYSICAL_PAGE_MASK);
else
*pte |= vm_alloc_page_table(vm) & PHYSICAL_PAGE_MASK;
} else {
@@ -257,7 +257,7 @@ static u64 *virt_create_upper_pte(struct kvm_vm *vm,
}
void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, gva_t gva,
- u64 paddr, int level)
+ gpa_t gpa, int level)
{
const u64 pg_size = PG_LEVEL_SIZE(level);
u64 *pte = &mmu->pgd;
@@ -271,15 +271,15 @@ void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, gva_t gva,
"gva: 0x%lx page size: 0x%lx", gva, pg_size);
TEST_ASSERT(sparsebit_is_set(vm->vpages_valid, (gva >> vm->page_shift)),
"Invalid virtual address, gva: 0x%lx", gva);
- TEST_ASSERT((paddr % pg_size) == 0,
+ TEST_ASSERT((gpa % pg_size) == 0,
"Physical address not aligned,\n"
- " paddr: 0x%lx page size: 0x%lx", paddr, pg_size);
- TEST_ASSERT((paddr >> vm->page_shift) <= vm->max_gfn,
+ " gpa: 0x%lx page size: 0x%lx", gpa, pg_size);
+ TEST_ASSERT((gpa >> vm->page_shift) <= vm->max_gfn,
"Physical address beyond maximum supported,\n"
- " paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
- paddr, vm->max_gfn, vm->page_size);
- TEST_ASSERT(vm_untag_gpa(vm, paddr) == paddr,
- "Unexpected bits in paddr: %lx", paddr);
+ " gpa: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
+ gpa, vm->max_gfn, vm->page_size);
+ TEST_ASSERT(vm_untag_gpa(vm, gpa) == gpa,
+ "Unexpected bits in gpa: %lx", gpa);
TEST_ASSERT(!PTE_EXECUTABLE_MASK(mmu) || !PTE_NX_MASK(mmu),
"X and NX bit masks cannot be used simultaneously");
@@ -291,7 +291,7 @@ void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, gva_t gva,
for (current_level = mmu->pgtable_levels;
current_level > PG_LEVEL_4K;
current_level--) {
- pte = virt_create_upper_pte(vm, mmu, pte, gva, paddr,
+ pte = virt_create_upper_pte(vm, mmu, pte, gva, gpa,
current_level, level);
if (is_huge_pte(mmu, pte))
return;
@@ -303,24 +303,24 @@ void __virt_pg_map(struct kvm_vm *vm, struct kvm_mmu *mmu, gva_t gva,
"PTE already present for 4k page at gva: 0x%lx", gva);
*pte = PTE_PRESENT_MASK(mmu) | PTE_READABLE_MASK(mmu) |
PTE_WRITABLE_MASK(mmu) | PTE_EXECUTABLE_MASK(mmu) |
- PTE_ALWAYS_SET_MASK(mmu) | (paddr & PHYSICAL_PAGE_MASK);
+ PTE_ALWAYS_SET_MASK(mmu) | (gpa & PHYSICAL_PAGE_MASK);
/*
* Neither SEV nor TDX supports shared page tables, so only the final
* leaf PTE needs manually set the C/S-bit.
*/
- if (vm_is_gpa_protected(vm, paddr))
+ if (vm_is_gpa_protected(vm, gpa))
*pte |= PTE_C_BIT_MASK(mmu);
else
*pte |= PTE_S_BIT_MASK(mmu);
}
-void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 paddr)
+void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa)
{
- __virt_pg_map(vm, &vm->mmu, gva, paddr, PG_LEVEL_4K);
+ __virt_pg_map(vm, &vm->mmu, gva, gpa, PG_LEVEL_4K);
}
-void virt_map_level(struct kvm_vm *vm, gva_t gva, u64 paddr,
+void virt_map_level(struct kvm_vm *vm, gva_t gva, gpa_t gpa,
u64 nr_bytes, int level)
{
u64 pg_size = PG_LEVEL_SIZE(level);
@@ -332,12 +332,12 @@ void virt_map_level(struct kvm_vm *vm, gva_t gva, u64 paddr,
nr_bytes, pg_size);
for (i = 0; i < nr_pages; i++) {
- __virt_pg_map(vm, &vm->mmu, gva, paddr, level);
+ __virt_pg_map(vm, &vm->mmu, gva, gpa, level);
sparsebit_set_num(vm->vpages_mapped, gva >> vm->page_shift,
nr_bytes / PAGE_SIZE);
gva += pg_size;
- paddr += pg_size;
+ gpa += pg_size;
}
}
@@ -495,24 +495,24 @@ bool kvm_cpu_has_tdp(void)
return kvm_cpu_has_ept() || kvm_cpu_has_npt();
}
-void __tdp_map(struct kvm_vm *vm, gpa_t l2_gpa, u64 paddr, u64 size, int level)
+void __tdp_map(struct kvm_vm *vm, gpa_t l2_gpa, gpa_t gpa, u64 size, int level)
{
size_t page_size = PG_LEVEL_SIZE(level);
size_t npages = size / page_size;
TEST_ASSERT(l2_gpa + size > l2_gpa, "L2 GPA overflow");
- TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
+ TEST_ASSERT(gpa + size > gpa, "GPA overflow");
while (npages--) {
- __virt_pg_map(vm, &vm->stage2_mmu, l2_gpa, paddr, level);
+ __virt_pg_map(vm, &vm->stage2_mmu, l2_gpa, gpa, level);
l2_gpa += page_size;
- paddr += page_size;
+ gpa += page_size;
}
}
-void tdp_map(struct kvm_vm *vm, gpa_t l2_gpa, u64 paddr, u64 size)
+void tdp_map(struct kvm_vm *vm, gpa_t l2_gpa, gpa_t gpa, u64 size)
{
- __tdp_map(vm, l2_gpa, paddr, size, PG_LEVEL_4K);
+ __tdp_map(vm, l2_gpa, gpa, size, PG_LEVEL_4K);
}
/* Prepare an identity extended page table that maps all the
--
2.54.0.rc1.555.g9c883467ad-goog
Thread overview:
2026-04-20 21:19 [PATCH v3 00/19] KVM: selftests: Use kernel-style integer and g[vp]a_t types Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 01/19] KVM: selftests: Use gva_t instead of vm_vaddr_t Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 02/19] KVM: selftests: Use gpa_t instead of vm_paddr_t Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 03/19] KVM: selftests: Use gpa_t for GPAs in Hyper-V selftests Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 05/19] KVM: selftests: Use s64 instead of int64_t Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 07/19] KVM: selftests: Use s32 instead of int32_t Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 08/19] KVM: selftests: Use u16 instead of uint16_t Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 09/19] KVM: selftests: Use s16 instead of int16_t Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 10/19] KVM: selftests: Use u8 instead of uint8_t Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 11/19] KVM: selftests: Drop "vaddr_" from APIs that allocate memory for a given VM Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 12/19] KVM: selftests: Rename vm_vaddr_unused_gap() => vm_unused_gva_gap() Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 13/19] KVM: selftests: Rename vm_vaddr_populate_bitmap() => vm_populate_gva_bitmap() Sean Christopherson
2026-04-20 21:19 ` [PATCH v3 14/19] KVM: selftests: Rename translate_to_host_paddr() => translate_hva_to_hpa() Sean Christopherson
2026-04-20 21:20 ` [PATCH v3 15/19] KVM: selftests: Clarify that arm64's inject_uer() takes a host PA, not a guest PA Sean Christopherson
2026-04-20 21:20 ` [PATCH v3 16/19] KVM: selftests: Replace "vaddr" with "gva" throughout Sean Christopherson
2026-04-20 21:20 ` [PATCH v3 17/19] KVM: selftests: Replace "u64 gpa" with "gpa_t" throughout Sean Christopherson
2026-04-20 21:20 ` [PATCH v3 18/19] KVM: selftests: Replace "u64 nested_paddr" with "gpa_t l2_gpa" Sean Christopherson
2026-04-20 21:20 ` [PATCH v3 19/19] KVM: selftests: Replace "paddr" with "gpa" throughout Sean Christopherson