* [PATCH 00/10] KVM: selftests: Convert to kernel-style types
@ 2025-05-01 18:32 David Matlack
From: David Matlack @ 2025-05-01 18:32 UTC
To: Paolo Bonzini
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Anup Patel, Atish Patra, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, David Hildenbrand,
Sean Christopherson, David Matlack, Andrew Jones, Isaku Yamahata,
Reinette Chatre, Eric Auger, James Houghton, Colin Ian King, kvm,
linux-arm-kernel, kvmarm, kvm-riscv, linux-riscv
This series renames types across all KVM selftests to align more closely
with the types used in the kernel:
vm_vaddr_t -> gva_t
vm_paddr_t -> gpa_t
uint64_t -> u64
uint32_t -> u32
uint16_t -> u16
uint8_t -> u8
int64_t -> s64
int32_t -> s32
int16_t -> s16
int8_t -> s8
The goal of this series is to make the KVM selftests code more concise
(the new type names are shorter) and more similar to the kernel, since
selftests are developed by kernel developers.
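As a concrete illustration (a sketch only; the real conversions are in the
individual patches), a declaration in kvm_util.h such as:

	vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
	uint64_t gpa_tag_mask;

ends up as:

	gva_t gva_alloc_page(struct kvm_vm *vm);
	u64 gpa_tag_mask;

(The gva_t rename lands in patch 1 and the uint64_t conversion in a later
patch; the pairing above is just for illustration.)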
I know broad changes like this series can be difficult to merge and also
muddy the git-blame history, so if there isn't appetite for this we can
drop it. But if there is, I would be happy to help with rebasing and
resolving merge conflicts to get it in.
Most of the commits in this series were auto-generated with a single
command (see the commit messages), aside from whitespace fixes, so
rebasing onto a different base isn't terrible.
David Matlack (10):
KVM: selftests: Use gva_t instead of vm_vaddr_t
KVM: selftests: Use gpa_t instead of vm_paddr_t
KVM: selftests: Use gpa_t for GPAs in Hyper-V selftests
KVM: selftests: Use u64 instead of uint64_t
KVM: selftests: Use s64 instead of int64_t
KVM: selftests: Use u32 instead of uint32_t
KVM: selftests: Use s32 instead of int32_t
KVM: selftests: Use u16 instead of uint16_t
KVM: selftests: Use s16 instead of int16_t
KVM: selftests: Use u8 instead of uint8_t
.../selftests/kvm/access_tracking_perf_test.c | 40 +--
tools/testing/selftests/kvm/arch_timer.c | 6 +-
.../selftests/kvm/arm64/aarch32_id_regs.c | 14 +-
.../testing/selftests/kvm/arm64/arch_timer.c | 8 +-
.../kvm/arm64/arch_timer_edge_cases.c | 159 +++++----
.../selftests/kvm/arm64/debug-exceptions.c | 73 ++--
.../testing/selftests/kvm/arm64/hypercalls.c | 24 +-
.../testing/selftests/kvm/arm64/no-vgic-v3.c | 6 +-
.../selftests/kvm/arm64/page_fault_test.c | 82 ++---
tools/testing/selftests/kvm/arm64/psci_test.c | 26 +-
.../testing/selftests/kvm/arm64/set_id_regs.c | 58 ++--
.../selftests/kvm/arm64/smccc_filter.c | 10 +-
tools/testing/selftests/kvm/arm64/vgic_init.c | 56 ++--
tools/testing/selftests/kvm/arm64/vgic_irq.c | 116 +++----
.../selftests/kvm/arm64/vgic_lpi_stress.c | 20 +-
.../selftests/kvm/arm64/vpmu_counter_access.c | 62 ++--
.../testing/selftests/kvm/coalesced_io_test.c | 38 +--
.../selftests/kvm/demand_paging_test.c | 10 +-
.../selftests/kvm/dirty_log_perf_test.c | 14 +-
tools/testing/selftests/kvm/dirty_log_test.c | 82 ++---
tools/testing/selftests/kvm/get-reg-list.c | 2 +-
.../testing/selftests/kvm/guest_memfd_test.c | 2 +-
.../testing/selftests/kvm/guest_print_test.c | 22 +-
.../selftests/kvm/hardware_disable_test.c | 6 +-
.../selftests/kvm/include/arm64/arch_timer.h | 30 +-
.../selftests/kvm/include/arm64/delay.h | 4 +-
.../testing/selftests/kvm/include/arm64/gic.h | 8 +-
.../selftests/kvm/include/arm64/gic_v3_its.h | 8 +-
.../selftests/kvm/include/arm64/processor.h | 20 +-
.../selftests/kvm/include/arm64/ucall.h | 4 +-
.../selftests/kvm/include/arm64/vgic.h | 20 +-
.../testing/selftests/kvm/include/kvm_util.h | 311 +++++++++---------
.../selftests/kvm/include/kvm_util_types.h | 4 +-
.../testing/selftests/kvm/include/memstress.h | 30 +-
.../selftests/kvm/include/riscv/arch_timer.h | 22 +-
.../selftests/kvm/include/riscv/processor.h | 9 +-
.../selftests/kvm/include/riscv/ucall.h | 4 +-
.../kvm/include/s390/diag318_test_handler.h | 2 +-
.../selftests/kvm/include/s390/facility.h | 4 +-
.../selftests/kvm/include/s390/ucall.h | 4 +-
.../testing/selftests/kvm/include/sparsebit.h | 6 +-
.../testing/selftests/kvm/include/test_util.h | 40 +--
.../selftests/kvm/include/timer_test.h | 18 +-
.../selftests/kvm/include/ucall_common.h | 22 +-
.../selftests/kvm/include/userfaultfd_util.h | 6 +-
.../testing/selftests/kvm/include/x86/apic.h | 22 +-
.../testing/selftests/kvm/include/x86/evmcs.h | 22 +-
.../selftests/kvm/include/x86/hyperv.h | 28 +-
.../selftests/kvm/include/x86/kvm_util_arch.h | 12 +-
tools/testing/selftests/kvm/include/x86/pmu.h | 6 +-
.../selftests/kvm/include/x86/processor.h | 272 ++++++++-------
tools/testing/selftests/kvm/include/x86/sev.h | 14 +-
.../selftests/kvm/include/x86/svm_util.h | 10 +-
.../testing/selftests/kvm/include/x86/ucall.h | 2 +-
tools/testing/selftests/kvm/include/x86/vmx.h | 80 ++---
.../selftests/kvm/kvm_page_table_test.c | 54 +--
tools/testing/selftests/kvm/lib/arm64/gic.c | 6 +-
.../selftests/kvm/lib/arm64/gic_private.h | 24 +-
.../testing/selftests/kvm/lib/arm64/gic_v3.c | 84 ++---
.../selftests/kvm/lib/arm64/gic_v3_its.c | 12 +-
.../selftests/kvm/lib/arm64/processor.c | 126 +++----
tools/testing/selftests/kvm/lib/arm64/ucall.c | 12 +-
tools/testing/selftests/kvm/lib/arm64/vgic.c | 38 +--
tools/testing/selftests/kvm/lib/elf.c | 8 +-
tools/testing/selftests/kvm/lib/guest_modes.c | 2 +-
.../testing/selftests/kvm/lib/guest_sprintf.c | 18 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 222 +++++++------
tools/testing/selftests/kvm/lib/memstress.c | 38 +--
.../selftests/kvm/lib/riscv/processor.c | 56 ++--
.../kvm/lib/s390/diag318_test_handler.c | 12 +-
.../testing/selftests/kvm/lib/s390/facility.c | 2 +-
.../selftests/kvm/lib/s390/processor.c | 42 +--
tools/testing/selftests/kvm/lib/sparsebit.c | 18 +-
tools/testing/selftests/kvm/lib/test_util.c | 30 +-
.../testing/selftests/kvm/lib/ucall_common.c | 30 +-
.../selftests/kvm/lib/userfaultfd_util.c | 14 +-
tools/testing/selftests/kvm/lib/x86/apic.c | 2 +-
tools/testing/selftests/kvm/lib/x86/hyperv.c | 14 +-
.../testing/selftests/kvm/lib/x86/memstress.c | 10 +-
tools/testing/selftests/kvm/lib/x86/pmu.c | 4 +-
.../testing/selftests/kvm/lib/x86/processor.c | 178 +++++-----
tools/testing/selftests/kvm/lib/x86/sev.c | 14 +-
tools/testing/selftests/kvm/lib/x86/svm.c | 16 +-
tools/testing/selftests/kvm/lib/x86/ucall.c | 4 +-
tools/testing/selftests/kvm/lib/x86/vmx.c | 108 +++---
.../kvm/memslot_modification_stress_test.c | 10 +-
.../testing/selftests/kvm/memslot_perf_test.c | 164 ++++-----
tools/testing/selftests/kvm/mmu_stress_test.c | 28 +-
.../selftests/kvm/pre_fault_memory_test.c | 12 +-
.../testing/selftests/kvm/riscv/arch_timer.c | 8 +-
.../testing/selftests/kvm/riscv/ebreak_test.c | 6 +-
.../selftests/kvm/riscv/get-reg-list.c | 2 +-
.../selftests/kvm/riscv/sbi_pmu_test.c | 8 +-
tools/testing/selftests/kvm/s390/debug_test.c | 8 +-
tools/testing/selftests/kvm/s390/memop.c | 94 +++---
tools/testing/selftests/kvm/s390/resets.c | 6 +-
.../selftests/kvm/s390/shared_zeropage_test.c | 2 +-
tools/testing/selftests/kvm/s390/tprot.c | 24 +-
.../selftests/kvm/s390/ucontrol_test.c | 2 +-
.../selftests/kvm/set_memory_region_test.c | 40 +--
tools/testing/selftests/kvm/steal_time.c | 52 +--
.../kvm/system_counter_offset_test.c | 12 +-
tools/testing/selftests/kvm/x86/amx_test.c | 14 +-
.../selftests/kvm/x86/apic_bus_clock_test.c | 24 +-
tools/testing/selftests/kvm/x86/cpuid_test.c | 6 +-
tools/testing/selftests/kvm/x86/debug_regs.c | 4 +-
.../kvm/x86/dirty_log_page_splitting_test.c | 16 +-
.../selftests/kvm/x86/feature_msrs_test.c | 12 +-
.../selftests/kvm/x86/fix_hypercall_test.c | 20 +-
.../selftests/kvm/x86/flds_emulation.h | 6 +-
.../testing/selftests/kvm/x86/hwcr_msr_test.c | 10 +-
.../testing/selftests/kvm/x86/hyperv_clock.c | 6 +-
.../testing/selftests/kvm/x86/hyperv_evmcs.c | 10 +-
.../kvm/x86/hyperv_extended_hypercalls.c | 20 +-
.../selftests/kvm/x86/hyperv_features.c | 26 +-
tools/testing/selftests/kvm/x86/hyperv_ipi.c | 12 +-
.../selftests/kvm/x86/hyperv_svm_test.c | 10 +-
.../selftests/kvm/x86/hyperv_tlb_flush.c | 36 +-
.../selftests/kvm/x86/kvm_clock_test.c | 14 +-
tools/testing/selftests/kvm/x86/kvm_pv_test.c | 10 +-
.../selftests/kvm/x86/monitor_mwait_test.c | 2 +-
.../selftests/kvm/x86/nested_emulation_test.c | 20 +-
.../kvm/x86/nested_exceptions_test.c | 6 +-
.../selftests/kvm/x86/nx_huge_pages_test.c | 18 +-
.../selftests/kvm/x86/platform_info_test.c | 6 +-
.../selftests/kvm/x86/pmu_counters_test.c | 108 +++---
.../selftests/kvm/x86/pmu_event_filter_test.c | 102 +++---
.../kvm/x86/private_mem_conversions_test.c | 78 ++---
.../kvm/x86/private_mem_kvm_exits_test.c | 14 +-
.../selftests/kvm/x86/set_boot_cpu_id.c | 6 +-
.../selftests/kvm/x86/set_sregs_test.c | 6 +-
.../selftests/kvm/x86/sev_init2_tests.c | 6 +-
.../selftests/kvm/x86/sev_smoke_test.c | 14 +-
.../x86/smaller_maxphyaddr_emulation_test.c | 10 +-
tools/testing/selftests/kvm/x86/smm_test.c | 8 +-
tools/testing/selftests/kvm/x86/state_test.c | 14 +-
.../selftests/kvm/x86/svm_int_ctl_test.c | 2 +-
.../kvm/x86/svm_nested_shutdown_test.c | 2 +-
.../kvm/x86/svm_nested_soft_inject_test.c | 10 +-
.../selftests/kvm/x86/svm_vmcall_test.c | 2 +-
.../selftests/kvm/x86/sync_regs_test.c | 2 +-
.../kvm/x86/triple_fault_event_test.c | 4 +-
.../testing/selftests/kvm/x86/tsc_msrs_test.c | 2 +-
.../selftests/kvm/x86/tsc_scaling_sync.c | 4 +-
.../selftests/kvm/x86/ucna_injection_test.c | 45 +--
.../selftests/kvm/x86/userspace_io_test.c | 4 +-
.../kvm/x86/userspace_msr_exit_test.c | 58 ++--
.../selftests/kvm/x86/vmx_apic_access_test.c | 4 +-
.../kvm/x86/vmx_close_while_nested_test.c | 2 +-
.../selftests/kvm/x86/vmx_dirty_log_test.c | 4 +-
.../kvm/x86/vmx_invalid_nested_guest_state.c | 2 +-
.../testing/selftests/kvm/x86/vmx_msrs_test.c | 22 +-
.../kvm/x86/vmx_nested_tsc_scaling_test.c | 26 +-
.../selftests/kvm/x86/vmx_pmu_caps_test.c | 12 +-
.../kvm/x86/vmx_preemption_timer_test.c | 2 +-
.../selftests/kvm/x86/vmx_tsc_adjust_test.c | 12 +-
.../selftests/kvm/x86/xapic_ipi_test.c | 58 ++--
.../selftests/kvm/x86/xapic_state_test.c | 20 +-
.../selftests/kvm/x86/xcr0_cpuid_test.c | 8 +-
.../selftests/kvm/x86/xen_shinfo_test.c | 22 +-
.../testing/selftests/kvm/x86/xss_msr_test.c | 2 +-
161 files changed, 2323 insertions(+), 2338 deletions(-)
base-commit: 45eb29140e68ffe8e93a5471006858a018480a45
prerequisite-patch-id: 3bae97c9e1093148763235f47a84fa040b512d04
--
2.49.0.906.g1f30a19c02-goog
* [PATCH 01/10] KVM: selftests: Use gva_t instead of vm_vaddr_t
@ 2025-05-01 18:32 David Matlack
From: David Matlack @ 2025-05-01 18:32 UTC
To: Paolo Bonzini
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Anup Patel, Atish Patra, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, David Hildenbrand,
Sean Christopherson, David Matlack, Andrew Jones, Isaku Yamahata,
Reinette Chatre, Eric Auger, James Houghton, Colin Ian King, kvm,
linux-arm-kernel, kvmarm, kvm-riscv, linux-riscv
Replace all occurrences of vm_vaddr_t with gva_t to align with KVM code
and with the conversion helpers (e.g. addr_gva2hva()). Also replace
vm_vaddr in function names with gva to align with the new type name.
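For example, a typical call site (taken from the vgic_irq.c hunk below)
changes from:

	vm_vaddr_t args_gva = vm_vaddr_alloc_page(vm);

to:

	gva_t args_gva = gva_alloc_page(vm);

while helpers like addr_gva2hva(vm, args_gva) keep their existing
gva-based names.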
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | \
xargs sed -i 's/vm_vaddr_/gva_/g'
followed by manually adjusting whitespace to make checkpatch.pl happy.
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
tools/testing/selftests/kvm/arm64/vgic_irq.c | 6 +--
.../selftests/kvm/include/arm64/processor.h | 2 +-
.../selftests/kvm/include/arm64/ucall.h | 4 +-
.../testing/selftests/kvm/include/kvm_util.h | 36 ++++++-------
.../selftests/kvm/include/kvm_util_types.h | 2 +-
.../selftests/kvm/include/riscv/ucall.h | 2 +-
.../selftests/kvm/include/s390/ucall.h | 2 +-
.../selftests/kvm/include/ucall_common.h | 4 +-
.../selftests/kvm/include/x86/hyperv.h | 10 ++--
.../selftests/kvm/include/x86/kvm_util_arch.h | 6 +--
.../selftests/kvm/include/x86/svm_util.h | 2 +-
tools/testing/selftests/kvm/include/x86/vmx.h | 2 +-
.../selftests/kvm/kvm_page_table_test.c | 2 +-
.../selftests/kvm/lib/arm64/processor.c | 28 +++++-----
tools/testing/selftests/kvm/lib/arm64/ucall.c | 6 +--
tools/testing/selftests/kvm/lib/elf.c | 6 +--
tools/testing/selftests/kvm/lib/kvm_util.c | 51 +++++++++----------
.../selftests/kvm/lib/riscv/processor.c | 16 +++---
.../selftests/kvm/lib/s390/processor.c | 8 +--
.../testing/selftests/kvm/lib/ucall_common.c | 12 ++---
tools/testing/selftests/kvm/lib/x86/hyperv.c | 10 ++--
.../testing/selftests/kvm/lib/x86/memstress.c | 2 +-
.../testing/selftests/kvm/lib/x86/processor.c | 30 +++++------
tools/testing/selftests/kvm/lib/x86/svm.c | 10 ++--
tools/testing/selftests/kvm/lib/x86/ucall.c | 2 +-
tools/testing/selftests/kvm/lib/x86/vmx.c | 20 ++++----
.../selftests/kvm/riscv/sbi_pmu_test.c | 2 +-
tools/testing/selftests/kvm/s390/memop.c | 18 +++----
tools/testing/selftests/kvm/s390/tprot.c | 10 ++--
tools/testing/selftests/kvm/steal_time.c | 2 +-
tools/testing/selftests/kvm/x86/amx_test.c | 8 +--
tools/testing/selftests/kvm/x86/cpuid_test.c | 6 +--
.../testing/selftests/kvm/x86/hyperv_clock.c | 4 +-
.../testing/selftests/kvm/x86/hyperv_evmcs.c | 8 +--
.../kvm/x86/hyperv_extended_hypercalls.c | 10 ++--
.../selftests/kvm/x86/hyperv_features.c | 12 ++---
tools/testing/selftests/kvm/x86/hyperv_ipi.c | 10 ++--
.../selftests/kvm/x86/hyperv_svm_test.c | 8 +--
.../selftests/kvm/x86/hyperv_tlb_flush.c | 20 ++++----
.../selftests/kvm/x86/kvm_clock_test.c | 4 +-
.../selftests/kvm/x86/nested_emulation_test.c | 2 +-
.../kvm/x86/nested_exceptions_test.c | 2 +-
.../selftests/kvm/x86/sev_smoke_test.c | 6 +--
tools/testing/selftests/kvm/x86/smm_test.c | 2 +-
tools/testing/selftests/kvm/x86/state_test.c | 2 +-
.../selftests/kvm/x86/svm_int_ctl_test.c | 2 +-
.../kvm/x86/svm_nested_shutdown_test.c | 2 +-
.../kvm/x86/svm_nested_soft_inject_test.c | 6 +--
.../selftests/kvm/x86/svm_vmcall_test.c | 2 +-
.../kvm/x86/triple_fault_event_test.c | 4 +-
.../selftests/kvm/x86/vmx_apic_access_test.c | 2 +-
.../kvm/x86/vmx_close_while_nested_test.c | 2 +-
.../selftests/kvm/x86/vmx_dirty_log_test.c | 2 +-
.../kvm/x86/vmx_invalid_nested_guest_state.c | 2 +-
.../kvm/x86/vmx_nested_tsc_scaling_test.c | 2 +-
.../kvm/x86/vmx_preemption_timer_test.c | 2 +-
.../selftests/kvm/x86/vmx_tsc_adjust_test.c | 2 +-
.../selftests/kvm/x86/xapic_ipi_test.c | 4 +-
58 files changed, 225 insertions(+), 226 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/vgic_irq.c b/tools/testing/selftests/kvm/arm64/vgic_irq.c
index f4ac28d53747..f6b77da48785 100644
--- a/tools/testing/selftests/kvm/arm64/vgic_irq.c
+++ b/tools/testing/selftests/kvm/arm64/vgic_irq.c
@@ -714,7 +714,7 @@ static void kvm_inject_get_call(struct kvm_vm *vm, struct ucall *uc,
struct kvm_inject_args *args)
{
struct kvm_inject_args *kvm_args_hva;
- vm_vaddr_t kvm_args_gva;
+ gva_t kvm_args_gva;
kvm_args_gva = uc->args[1];
kvm_args_hva = (struct kvm_inject_args *)addr_gva2hva(vm, kvm_args_gva);
@@ -735,7 +735,7 @@ static void test_vgic(uint32_t nr_irqs, bool level_sensitive, bool eoi_split)
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
struct kvm_inject_args inject_args;
- vm_vaddr_t args_gva;
+ gva_t args_gva;
struct test_args args = {
.nr_irqs = nr_irqs,
@@ -753,7 +753,7 @@ static void test_vgic(uint32_t nr_irqs, bool level_sensitive, bool eoi_split)
vcpu_init_descriptor_tables(vcpu);
/* Setup the guest args page (so it gets the args). */
- args_gva = vm_vaddr_alloc_page(vm);
+ args_gva = gva_alloc_page(vm);
memcpy(addr_gva2hva(vm, args_gva), &args, sizeof(args));
vcpu_args_set(vcpu, 1, args_gva);
diff --git a/tools/testing/selftests/kvm/include/arm64/processor.h b/tools/testing/selftests/kvm/include/arm64/processor.h
index b0fc0f945766..68b692e1cc32 100644
--- a/tools/testing/selftests/kvm/include/arm64/processor.h
+++ b/tools/testing/selftests/kvm/include/arm64/processor.h
@@ -175,7 +175,7 @@ void vm_install_exception_handler(struct kvm_vm *vm,
void vm_install_sync_handler(struct kvm_vm *vm,
int vector, int ec, handler_fn handler);
-uint64_t *virt_get_pte_hva(struct kvm_vm *vm, vm_vaddr_t gva);
+uint64_t *virt_get_pte_hva(struct kvm_vm *vm, gva_t gva);
static inline void cpu_relax(void)
{
diff --git a/tools/testing/selftests/kvm/include/arm64/ucall.h b/tools/testing/selftests/kvm/include/arm64/ucall.h
index 4ec801f37f00..2210d3d94c40 100644
--- a/tools/testing/selftests/kvm/include/arm64/ucall.h
+++ b/tools/testing/selftests/kvm/include/arm64/ucall.h
@@ -10,9 +10,9 @@
* ucall_exit_mmio_addr holds per-VM values (global data is duplicated by each
* VM), it must not be accessed from host code.
*/
-extern vm_vaddr_t *ucall_exit_mmio_addr;
+extern gva_t *ucall_exit_mmio_addr;
-static inline void ucall_arch_do_ucall(vm_vaddr_t uc)
+static inline void ucall_arch_do_ucall(gva_t uc)
{
WRITE_ONCE(*ucall_exit_mmio_addr, uc);
}
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 373912464fb4..4b7012bd8041 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -100,7 +100,7 @@ struct kvm_vm {
bool pgd_created;
vm_paddr_t ucall_mmio_addr;
vm_paddr_t pgd;
- vm_vaddr_t handlers;
+ gva_t handlers;
uint32_t dirty_ring_size;
uint64_t gpa_tag_mask;
@@ -602,22 +602,22 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
-vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
-vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
-vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
- enum kvm_mem_region_type type);
-vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz,
- vm_vaddr_t vaddr_min,
- enum kvm_mem_region_type type);
-vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
-vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm,
- enum kvm_mem_region_type type);
-vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
+gva_t gva_unused_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
+gva_t gva_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
+gva_t __gva_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+ enum kvm_mem_region_type type);
+gva_t gva_alloc_shared(struct kvm_vm *vm, size_t sz,
+ gva_t vaddr_min,
+ enum kvm_mem_region_type type);
+gva_t gva_alloc_pages(struct kvm_vm *vm, int nr_pages);
+gva_t __gva_alloc_page(struct kvm_vm *vm,
+ enum kvm_mem_region_type type);
+gva_t gva_alloc_page(struct kvm_vm *vm);
void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
unsigned int npages);
void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
-void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
+void *addr_gva2hva(struct kvm_vm *vm, gva_t gva);
vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa);
@@ -994,12 +994,12 @@ vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
}
#define sync_global_to_guest(vm, g) ({ \
- typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
+ typeof(g) *_p = addr_gva2hva(vm, (gva_t)&(g)); \
memcpy(_p, &(g), sizeof(g)); \
})
#define sync_global_from_guest(vm, g) ({ \
- typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
+ typeof(g) *_p = addr_gva2hva(vm, (gva_t)&(g)); \
memcpy(&(g), _p, sizeof(g)); \
})
@@ -1010,7 +1010,7 @@ vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
* undesirable to change the host's copy of the global.
*/
#define write_guest_global(vm, g, val) ({ \
- typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
+ typeof(g) *_p = addr_gva2hva(vm, (gva_t)&(g)); \
typeof(g) _val = val; \
\
memcpy(_p, &(_val), sizeof(g)); \
@@ -1104,9 +1104,9 @@ static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr
* Returns the VM physical address of the translated VM virtual
* address given by @gva.
*/
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva);
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva);
-static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
return addr_arch_gva2gpa(vm, gva);
}
diff --git a/tools/testing/selftests/kvm/include/kvm_util_types.h b/tools/testing/selftests/kvm/include/kvm_util_types.h
index ec787b97cf18..a53e04286554 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_types.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_types.h
@@ -15,6 +15,6 @@
#define kvm_static_assert(expr, ...) __kvm_static_assert(expr, ##__VA_ARGS__, #expr)
typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
-typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
+typedef uint64_t gva_t; /* Virtual Machine (Guest) virtual address */
#endif /* SELFTEST_KVM_UTIL_TYPES_H */
diff --git a/tools/testing/selftests/kvm/include/riscv/ucall.h b/tools/testing/selftests/kvm/include/riscv/ucall.h
index a695ae36f3e0..41d56254968e 100644
--- a/tools/testing/selftests/kvm/include/riscv/ucall.h
+++ b/tools/testing/selftests/kvm/include/riscv/ucall.h
@@ -11,7 +11,7 @@ static inline void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
{
}
-static inline void ucall_arch_do_ucall(vm_vaddr_t uc)
+static inline void ucall_arch_do_ucall(gva_t uc)
{
sbi_ecall(KVM_RISCV_SELFTESTS_SBI_EXT,
KVM_RISCV_SELFTESTS_SBI_UCALL,
diff --git a/tools/testing/selftests/kvm/include/s390/ucall.h b/tools/testing/selftests/kvm/include/s390/ucall.h
index 8035a872a351..befee84c4609 100644
--- a/tools/testing/selftests/kvm/include/s390/ucall.h
+++ b/tools/testing/selftests/kvm/include/s390/ucall.h
@@ -10,7 +10,7 @@ static inline void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
{
}
-static inline void ucall_arch_do_ucall(vm_vaddr_t uc)
+static inline void ucall_arch_do_ucall(gva_t uc)
{
/* Exit via DIAGNOSE 0x501 (normally used for breakpoints) */
asm volatile ("diag 0,%0,0x501" : : "a"(uc) : "memory");
diff --git a/tools/testing/selftests/kvm/include/ucall_common.h b/tools/testing/selftests/kvm/include/ucall_common.h
index d9d6581b8d4f..e5499f170834 100644
--- a/tools/testing/selftests/kvm/include/ucall_common.h
+++ b/tools/testing/selftests/kvm/include/ucall_common.h
@@ -30,7 +30,7 @@ struct ucall {
};
void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa);
-void ucall_arch_do_ucall(vm_vaddr_t uc);
+void ucall_arch_do_ucall(gva_t uc);
void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu);
void ucall(uint64_t cmd, int nargs, ...);
@@ -48,7 +48,7 @@ int ucall_nr_pages_required(uint64_t page_size);
* the full ucall() are problematic and/or unwanted. Note, this will come out
* as UCALL_NONE on the backend.
*/
-#define GUEST_UCALL_NONE() ucall_arch_do_ucall((vm_vaddr_t)NULL)
+#define GUEST_UCALL_NONE() ucall_arch_do_ucall((gva_t)NULL)
#define GUEST_SYNC_ARGS(stage, arg1, arg2, arg3, arg4) \
ucall(UCALL_SYNC, 6, "hello", stage, arg1, arg2, arg3, arg4)
diff --git a/tools/testing/selftests/kvm/include/x86/hyperv.h b/tools/testing/selftests/kvm/include/x86/hyperv.h
index f13e532be240..eedfff3cf102 100644
--- a/tools/testing/selftests/kvm/include/x86/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86/hyperv.h
@@ -254,8 +254,8 @@
* Issue a Hyper-V hypercall. Returns exception vector raised or 0, 'hv_status'
* is set to the hypercall status (if no exception occurred).
*/
-static inline uint8_t __hyperv_hypercall(u64 control, vm_vaddr_t input_address,
- vm_vaddr_t output_address,
+static inline uint8_t __hyperv_hypercall(u64 control, gva_t input_address,
+ gva_t output_address,
uint64_t *hv_status)
{
uint64_t error_code;
@@ -274,8 +274,8 @@ static inline uint8_t __hyperv_hypercall(u64 control, vm_vaddr_t input_address,
}
/* Issue a Hyper-V hypercall and assert that it succeeded. */
-static inline void hyperv_hypercall(u64 control, vm_vaddr_t input_address,
- vm_vaddr_t output_address)
+static inline void hyperv_hypercall(u64 control, gva_t input_address,
+ gva_t output_address)
{
uint64_t hv_status;
uint8_t vector;
@@ -347,7 +347,7 @@ struct hyperv_test_pages {
};
struct hyperv_test_pages *vcpu_alloc_hyperv_test_pages(struct kvm_vm *vm,
- vm_vaddr_t *p_hv_pages_gva);
+ gva_t *p_hv_pages_gva);
/* HV_X64_MSR_TSC_INVARIANT_CONTROL bits */
#define HV_INVARIANT_TSC_EXPOSED BIT_ULL(0)
diff --git a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
index 972bb1c4ab4c..36d4b6727cb6 100644
--- a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
+++ b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
@@ -11,9 +11,9 @@
extern bool is_forced_emulation_enabled;
struct kvm_vm_arch {
- vm_vaddr_t gdt;
- vm_vaddr_t tss;
- vm_vaddr_t idt;
+ gva_t gdt;
+ gva_t tss;
+ gva_t idt;
uint64_t c_bit;
uint64_t s_bit;
diff --git a/tools/testing/selftests/kvm/include/x86/svm_util.h b/tools/testing/selftests/kvm/include/x86/svm_util.h
index b74c6dcddcbd..c2ebb8b61e38 100644
--- a/tools/testing/selftests/kvm/include/x86/svm_util.h
+++ b/tools/testing/selftests/kvm/include/x86/svm_util.h
@@ -53,7 +53,7 @@ static inline void vmmcall(void)
"clgi\n" \
)
-struct svm_test_data *vcpu_alloc_svm(struct kvm_vm *vm, vm_vaddr_t *p_svm_gva);
+struct svm_test_data *vcpu_alloc_svm(struct kvm_vm *vm, gva_t *p_svm_gva);
void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_rsp);
void run_guest(struct vmcb *vmcb, uint64_t vmcb_gpa);
diff --git a/tools/testing/selftests/kvm/include/x86/vmx.h b/tools/testing/selftests/kvm/include/x86/vmx.h
index edb3c391b982..16603e8f2006 100644
--- a/tools/testing/selftests/kvm/include/x86/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86/vmx.h
@@ -552,7 +552,7 @@ union vmx_ctrl_msr {
};
};
-struct vmx_pages *vcpu_alloc_vmx(struct kvm_vm *vm, vm_vaddr_t *p_vmx_gva);
+struct vmx_pages *vcpu_alloc_vmx(struct kvm_vm *vm, gva_t *p_vmx_gva);
bool prepare_for_vmx_operation(struct vmx_pages *vmx);
void prepare_vmcs(struct vmx_pages *vmx, void *guest_rip, void *guest_rsp);
bool load_vmcs(struct vmx_pages *vmx);
diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
index dd8b12f626d3..6e909a96b095 100644
--- a/tools/testing/selftests/kvm/kvm_page_table_test.c
+++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
@@ -295,7 +295,7 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg)
ret = sem_init(&test_stage_completed, 0, 0);
TEST_ASSERT(ret == 0, "Error in sem_init");
- current_stage = addr_gva2hva(vm, (vm_vaddr_t)(&guest_test_stage));
+ current_stage = addr_gva2hva(vm, (gva_t)(&guest_test_stage));
*current_stage = NUM_TEST_STAGES;
pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index 9d69904cb608..102b0b829420 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -18,14 +18,14 @@
#define DEFAULT_ARM64_GUEST_STACK_VADDR_MIN 0xac0000
-static vm_vaddr_t exception_handlers;
+static gva_t exception_handlers;
static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
{
return (v + vm->page_size) & ~(vm->page_size - 1);
}
-static uint64_t pgd_index(struct kvm_vm *vm, vm_vaddr_t gva)
+static uint64_t pgd_index(struct kvm_vm *vm, gva_t gva)
{
unsigned int shift = (vm->pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
uint64_t mask = (1UL << (vm->va_bits - shift)) - 1;
@@ -33,7 +33,7 @@ static uint64_t pgd_index(struct kvm_vm *vm, vm_vaddr_t gva)
return (gva >> shift) & mask;
}
-static uint64_t pud_index(struct kvm_vm *vm, vm_vaddr_t gva)
+static uint64_t pud_index(struct kvm_vm *vm, gva_t gva)
{
unsigned int shift = 2 * (vm->page_shift - 3) + vm->page_shift;
uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
@@ -44,7 +44,7 @@ static uint64_t pud_index(struct kvm_vm *vm, vm_vaddr_t gva)
return (gva >> shift) & mask;
}
-static uint64_t pmd_index(struct kvm_vm *vm, vm_vaddr_t gva)
+static uint64_t pmd_index(struct kvm_vm *vm, gva_t gva)
{
unsigned int shift = (vm->page_shift - 3) + vm->page_shift;
uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
@@ -55,7 +55,7 @@ static uint64_t pmd_index(struct kvm_vm *vm, vm_vaddr_t gva)
return (gva >> shift) & mask;
}
-static uint64_t pte_index(struct kvm_vm *vm, vm_vaddr_t gva)
+static uint64_t pte_index(struct kvm_vm *vm, gva_t gva)
{
uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
return (gva >> vm->page_shift) & mask;
@@ -185,7 +185,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
_virt_pg_map(vm, vaddr, paddr, attr_idx);
}
-uint64_t *virt_get_pte_hva(struct kvm_vm *vm, vm_vaddr_t gva)
+uint64_t *virt_get_pte_hva(struct kvm_vm *vm, gva_t gva)
{
uint64_t *ptep;
@@ -223,7 +223,7 @@ uint64_t *virt_get_pte_hva(struct kvm_vm *vm, vm_vaddr_t gva)
exit(EXIT_FAILURE);
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
uint64_t *ptep = virt_get_pte_hva(vm, gva);
@@ -389,9 +389,9 @@ static struct kvm_vcpu *__aarch64_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
stack_size = vm->page_size == 4096 ? DEFAULT_STACK_PGS * vm->page_size :
vm->page_size;
- stack_vaddr = __vm_vaddr_alloc(vm, stack_size,
- DEFAULT_ARM64_GUEST_STACK_VADDR_MIN,
- MEM_REGION_DATA);
+ stack_vaddr = __gva_alloc(vm, stack_size,
+ DEFAULT_ARM64_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
aarch64_vcpu_setup(vcpu, init);
@@ -503,10 +503,10 @@ void route_exception(struct ex_regs *regs, int vector)
void vm_init_descriptor_tables(struct kvm_vm *vm)
{
- vm->handlers = __vm_vaddr_alloc(vm, sizeof(struct handlers),
- vm->page_size, MEM_REGION_DATA);
+ vm->handlers = __gva_alloc(vm, sizeof(struct handlers),
+ vm->page_size, MEM_REGION_DATA);
- *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
+ *(gva_t *)addr_gva2hva(vm, (gva_t)(&exception_handlers)) = vm->handlers;
}
void vm_install_sync_handler(struct kvm_vm *vm, int vector, int ec,
@@ -638,7 +638,7 @@ void kvm_selftest_arch_init(void)
guest_modes_append_default();
}
-void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+void gva_populate_bitmap(struct kvm_vm *vm)
{
/*
* arm64 selftests use only TTBR0_EL1, meaning that the valid VA space
diff --git a/tools/testing/selftests/kvm/lib/arm64/ucall.c b/tools/testing/selftests/kvm/lib/arm64/ucall.c
index ddab0ce89d4d..a1a3b4dcdce1 100644
--- a/tools/testing/selftests/kvm/lib/arm64/ucall.c
+++ b/tools/testing/selftests/kvm/lib/arm64/ucall.c
@@ -6,17 +6,17 @@
*/
#include "kvm_util.h"
-vm_vaddr_t *ucall_exit_mmio_addr;
+gva_t *ucall_exit_mmio_addr;
void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
{
- vm_vaddr_t mmio_gva = vm_vaddr_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
+ gva_t mmio_gva = gva_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
virt_map(vm, mmio_gva, mmio_gpa, 1);
vm->ucall_mmio_addr = mmio_gpa;
- write_guest_global(vm, ucall_exit_mmio_addr, (vm_vaddr_t *)mmio_gva);
+ write_guest_global(vm, ucall_exit_mmio_addr, (gva_t *)mmio_gva);
}
void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
diff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c
index f34d926d9735..6fddebb96a3c 100644
--- a/tools/testing/selftests/kvm/lib/elf.c
+++ b/tools/testing/selftests/kvm/lib/elf.c
@@ -157,12 +157,12 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
"memsize of 0,\n"
" phdr index: %u p_memsz: 0x%" PRIx64,
n1, (uint64_t) phdr.p_memsz);
- vm_vaddr_t seg_vstart = align_down(phdr.p_vaddr, vm->page_size);
- vm_vaddr_t seg_vend = phdr.p_vaddr + phdr.p_memsz - 1;
+ gva_t seg_vstart = align_down(phdr.p_vaddr, vm->page_size);
+ gva_t seg_vend = phdr.p_vaddr + phdr.p_memsz - 1;
seg_vend |= vm->page_size - 1;
size_t seg_size = seg_vend - seg_vstart + 1;
- vm_vaddr_t vaddr = __vm_vaddr_alloc(vm, seg_size, seg_vstart,
+ gva_t vaddr = __gva_alloc(vm, seg_size, seg_vstart,
MEM_REGION_CODE);
TEST_ASSERT(vaddr == seg_vstart, "Unable to allocate "
"virtual memory for segment at requested min addr,\n"
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 815bc45dd8dc..2e6d275d4ba0 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -262,7 +262,7 @@ _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params)
* based on the MSB of the VA. On architectures with this behavior
* the VA region spans [0, 2^(va_bits - 1)), [-(2^(va_bits - 1), -1].
*/
-__weak void vm_vaddr_populate_bitmap(struct kvm_vm *vm)
+__weak void gva_populate_bitmap(struct kvm_vm *vm)
{
sparsebit_set_num(vm->vpages_valid,
0, (1ULL << (vm->va_bits - 1)) >> vm->page_shift);
@@ -362,7 +362,7 @@ struct kvm_vm *____vm_create(struct vm_shape shape)
/* Limit to VA-bit canonical virtual addresses. */
vm->vpages_valid = sparsebit_alloc();
- vm_vaddr_populate_bitmap(vm);
+ gva_populate_bitmap(vm);
/* Limit physical addresses to PA-bits. */
vm->max_gfn = vm_compute_max_gfn(vm);
@@ -1373,8 +1373,7 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
* TEST_ASSERT failure occurs for invalid input or no area of at least
* sz unallocated bytes >= vaddr_min is available.
*/
-vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
- vm_vaddr_t vaddr_min)
+gva_t gva_unused_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
{
uint64_t pages = (sz + vm->page_size - 1) >> vm->page_shift;
@@ -1439,10 +1438,10 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
return pgidx_start * vm->page_size;
}
-static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
- vm_vaddr_t vaddr_min,
- enum kvm_mem_region_type type,
- bool protected)
+static gva_t ____gva_alloc(struct kvm_vm *vm, size_t sz,
+ gva_t vaddr_min,
+ enum kvm_mem_region_type type,
+ bool protected)
{
uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
@@ -1455,10 +1454,10 @@ static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
* Find an unused range of virtual page addresses of at least
* pages in length.
*/
- vm_vaddr_t vaddr_start = vm_vaddr_unused_gap(vm, sz, vaddr_min);
+ gva_t vaddr_start = gva_unused_gap(vm, sz, vaddr_min);
/* Map the virtual pages. */
- for (vm_vaddr_t vaddr = vaddr_start; pages > 0;
+ for (gva_t vaddr = vaddr_start; pages > 0;
pages--, vaddr += vm->page_size, paddr += vm->page_size) {
virt_pg_map(vm, vaddr, paddr);
@@ -1469,18 +1468,18 @@ static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
return vaddr_start;
}
-vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
- enum kvm_mem_region_type type)
+gva_t __gva_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min,
+ enum kvm_mem_region_type type)
{
- return ____vm_vaddr_alloc(vm, sz, vaddr_min, type,
+ return ____gva_alloc(vm, sz, vaddr_min, type,
vm_arch_has_protected_memory(vm));
}
-vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz,
- vm_vaddr_t vaddr_min,
- enum kvm_mem_region_type type)
+gva_t gva_alloc_shared(struct kvm_vm *vm, size_t sz,
+ gva_t vaddr_min,
+ enum kvm_mem_region_type type)
{
- return ____vm_vaddr_alloc(vm, sz, vaddr_min, type, false);
+ return ____gva_alloc(vm, sz, vaddr_min, type, false);
}
/*
@@ -1502,9 +1501,9 @@ vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz,
* a unique set of pages, with the minimum real allocation being at least
* a page. The allocated physical space comes from the TEST_DATA memory region.
*/
-vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+gva_t gva_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
{
- return __vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA);
+ return __gva_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA);
}
/*
@@ -1521,14 +1520,14 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
* Allocates at least N system pages worth of bytes within the virtual address
* space of the vm.
*/
-vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages)
+gva_t gva_alloc_pages(struct kvm_vm *vm, int nr_pages)
{
- return vm_vaddr_alloc(vm, nr_pages * getpagesize(), KVM_UTIL_MIN_VADDR);
+ return gva_alloc(vm, nr_pages * getpagesize(), KVM_UTIL_MIN_VADDR);
}
-vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type)
+gva_t __gva_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type type)
{
- return __vm_vaddr_alloc(vm, getpagesize(), KVM_UTIL_MIN_VADDR, type);
+ return __gva_alloc(vm, getpagesize(), KVM_UTIL_MIN_VADDR, type);
}
/*
@@ -1545,9 +1544,9 @@ vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm, enum kvm_mem_region_type typ
* Allocates at least one system page worth of bytes within the virtual address
* space of the vm.
*/
-vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm)
+gva_t gva_alloc_page(struct kvm_vm *vm)
{
- return vm_vaddr_alloc_pages(vm, 1);
+ return gva_alloc_pages(vm, 1);
}
/*
@@ -2140,7 +2139,7 @@ vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
* Return:
* Equivalent host virtual address
*/
-void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva)
+void *addr_gva2hva(struct kvm_vm *vm, gva_t gva)
{
return addr_gpa2hva(vm, addr_gva2gpa(vm, gva));
}
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index dd663bcf0cc0..3ba5163c72b3 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -14,7 +14,7 @@
#define DEFAULT_RISCV_GUEST_STACK_VADDR_MIN 0xac0000
-static vm_vaddr_t exception_handlers;
+static gva_t exception_handlers;
bool __vcpu_has_ext(struct kvm_vcpu *vcpu, uint64_t ext)
{
@@ -56,7 +56,7 @@ static uint32_t pte_index_shift[] = {
PGTBL_L3_INDEX_SHIFT,
};
-static uint64_t pte_index(struct kvm_vm *vm, vm_vaddr_t gva, int level)
+static uint64_t pte_index(struct kvm_vm *vm, gva_t gva, int level)
{
TEST_ASSERT(level > -1,
"Negative page table level (%d) not possible", level);
@@ -123,7 +123,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
PGTBL_PTE_PERM_MASK | PGTBL_PTE_VALID_MASK;
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
uint64_t *ptep;
int level = vm->pgtable_levels - 1;
@@ -306,9 +306,9 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
stack_size = vm->page_size == 4096 ? DEFAULT_STACK_PGS * vm->page_size :
vm->page_size;
- stack_vaddr = __vm_vaddr_alloc(vm, stack_size,
- DEFAULT_RISCV_GUEST_STACK_VADDR_MIN,
- MEM_REGION_DATA);
+ stack_vaddr = __gva_alloc(vm, stack_size,
+ DEFAULT_RISCV_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
vcpu = __vm_vcpu_add(vm, vcpu_id);
riscv_vcpu_mmu_setup(vcpu);
@@ -433,10 +433,10 @@ void vcpu_init_vector_tables(struct kvm_vcpu *vcpu)
void vm_init_vector_tables(struct kvm_vm *vm)
{
- vm->handlers = __vm_vaddr_alloc(vm, sizeof(struct handlers),
+ vm->handlers = __gva_alloc(vm, sizeof(struct handlers),
vm->page_size, MEM_REGION_DATA);
- *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
+ *(gva_t *)addr_gva2hva(vm, (gva_t)(&exception_handlers)) = vm->handlers;
}
void vm_install_exception_handler(struct kvm_vm *vm, int vector, exception_handler_fn handler)
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index 20cfe970e3e3..a6438e3ea8e7 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -86,7 +86,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
entry[idx] = gpa;
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
int ri, idx;
uint64_t *entry;
@@ -171,9 +171,9 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
vm->page_size);
- stack_vaddr = __vm_vaddr_alloc(vm, stack_size,
- DEFAULT_GUEST_STACK_VADDR_MIN,
- MEM_REGION_DATA);
+ stack_vaddr = __gva_alloc(vm, stack_size,
+ DEFAULT_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
vcpu = __vm_vcpu_add(vm, vcpu_id);
diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index 42151e571953..3a72169b61ac 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -29,11 +29,11 @@ void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
{
struct ucall_header *hdr;
struct ucall *uc;
- vm_vaddr_t vaddr;
+ gva_t vaddr;
int i;
- vaddr = vm_vaddr_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR,
- MEM_REGION_DATA);
+ vaddr = gva_alloc_shared(vm, sizeof(*hdr), KVM_UTIL_MIN_VADDR,
+ MEM_REGION_DATA);
hdr = (struct ucall_header *)addr_gva2hva(vm, vaddr);
memset(hdr, 0, sizeof(*hdr));
@@ -96,7 +96,7 @@ void ucall_assert(uint64_t cmd, const char *exp, const char *file,
guest_vsnprintf(uc->buffer, UCALL_BUFFER_LEN, fmt, va);
va_end(va);
- ucall_arch_do_ucall((vm_vaddr_t)uc->hva);
+ ucall_arch_do_ucall((gva_t)uc->hva);
ucall_free(uc);
}
@@ -113,7 +113,7 @@ void ucall_fmt(uint64_t cmd, const char *fmt, ...)
guest_vsnprintf(uc->buffer, UCALL_BUFFER_LEN, fmt, va);
va_end(va);
- ucall_arch_do_ucall((vm_vaddr_t)uc->hva);
+ ucall_arch_do_ucall((gva_t)uc->hva);
ucall_free(uc);
}
@@ -135,7 +135,7 @@ void ucall(uint64_t cmd, int nargs, ...)
WRITE_ONCE(uc->args[i], va_arg(va, uint64_t));
va_end(va);
- ucall_arch_do_ucall((vm_vaddr_t)uc->hva);
+ ucall_arch_do_ucall((gva_t)uc->hva);
ucall_free(uc);
}
diff --git a/tools/testing/selftests/kvm/lib/x86/hyperv.c b/tools/testing/selftests/kvm/lib/x86/hyperv.c
index 15bc8cd583aa..2284bc936404 100644
--- a/tools/testing/selftests/kvm/lib/x86/hyperv.c
+++ b/tools/testing/selftests/kvm/lib/x86/hyperv.c
@@ -76,23 +76,23 @@ bool kvm_hv_cpu_has(struct kvm_x86_cpu_feature feature)
}
struct hyperv_test_pages *vcpu_alloc_hyperv_test_pages(struct kvm_vm *vm,
- vm_vaddr_t *p_hv_pages_gva)
+ gva_t *p_hv_pages_gva)
{
- vm_vaddr_t hv_pages_gva = vm_vaddr_alloc_page(vm);
+ gva_t hv_pages_gva = gva_alloc_page(vm);
struct hyperv_test_pages *hv = addr_gva2hva(vm, hv_pages_gva);
/* Setup of a region of guest memory for the VP Assist page. */
- hv->vp_assist = (void *)vm_vaddr_alloc_page(vm);
+ hv->vp_assist = (void *)gva_alloc_page(vm);
hv->vp_assist_hva = addr_gva2hva(vm, (uintptr_t)hv->vp_assist);
hv->vp_assist_gpa = addr_gva2gpa(vm, (uintptr_t)hv->vp_assist);
/* Setup of a region of guest memory for the partition assist page. */
- hv->partition_assist = (void *)vm_vaddr_alloc_page(vm);
+ hv->partition_assist = (void *)gva_alloc_page(vm);
hv->partition_assist_hva = addr_gva2hva(vm, (uintptr_t)hv->partition_assist);
hv->partition_assist_gpa = addr_gva2gpa(vm, (uintptr_t)hv->partition_assist);
/* Setup of a region of guest memory for the enlightened VMCS. */
- hv->enlightened_vmcs = (void *)vm_vaddr_alloc_page(vm);
+ hv->enlightened_vmcs = (void *)gva_alloc_page(vm);
hv->enlightened_vmcs_hva = addr_gva2hva(vm, (uintptr_t)hv->enlightened_vmcs);
hv->enlightened_vmcs_gpa = addr_gva2gpa(vm, (uintptr_t)hv->enlightened_vmcs);
diff --git a/tools/testing/selftests/kvm/lib/x86/memstress.c b/tools/testing/selftests/kvm/lib/x86/memstress.c
index 7f5d62a65c68..e5249c442318 100644
--- a/tools/testing/selftests/kvm/lib/x86/memstress.c
+++ b/tools/testing/selftests/kvm/lib/x86/memstress.c
@@ -81,7 +81,7 @@ void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vc
{
struct vmx_pages *vmx, *vmx0 = NULL;
struct kvm_regs regs;
- vm_vaddr_t vmx_gva;
+ gva_t vmx_gva;
int vcpu_id;
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index bd5a802fa7a5..3ff61033790e 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -17,7 +17,7 @@
#define KERNEL_DS 0x10
#define KERNEL_TSS 0x18
-vm_vaddr_t exception_handlers;
+gva_t exception_handlers;
bool host_cpu_is_amd;
bool host_cpu_is_intel;
bool is_forced_emulation_enabled;
@@ -463,7 +463,7 @@ static void kvm_seg_set_kernel_data_64bit(struct kvm_segment *segp)
segp->present = true;
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
int level = PG_LEVEL_NONE;
uint64_t *pte = __vm_get_page_table_entry(vm, gva, &level);
@@ -478,7 +478,7 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
return vm_untag_gpa(vm, PTE_GET_PA(*pte)) | (gva & ~HUGEPAGE_MASK(level));
}
-static void kvm_seg_set_tss_64bit(vm_vaddr_t base, struct kvm_segment *segp)
+static void kvm_seg_set_tss_64bit(gva_t base, struct kvm_segment *segp)
{
memset(segp, 0, sizeof(*segp));
segp->base = base;
@@ -588,16 +588,16 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
struct kvm_segment seg;
int i;
- vm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
- vm->arch.idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
- vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
- vm->arch.tss = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
+ vm->arch.gdt = __gva_alloc_page(vm, MEM_REGION_DATA);
+ vm->arch.idt = __gva_alloc_page(vm, MEM_REGION_DATA);
+ vm->handlers = __gva_alloc_page(vm, MEM_REGION_DATA);
+ vm->arch.tss = __gva_alloc_page(vm, MEM_REGION_DATA);
/* Handlers have the same address in both address spaces.*/
for (i = 0; i < NUM_INTERRUPTS; i++)
set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0, KERNEL_CS);
- *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
+ *(gva_t *)addr_gva2hva(vm, (gva_t)(&exception_handlers)) = vm->handlers;
kvm_seg_set_kernel_code_64bit(&seg);
kvm_seg_fill_gdt_64bit(vm, &seg);
@@ -612,9 +612,9 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
void vm_install_exception_handler(struct kvm_vm *vm, int vector,
void (*handler)(struct ex_regs *))
{
- vm_vaddr_t *handlers = (vm_vaddr_t *)addr_gva2hva(vm, vm->handlers);
+ gva_t *handlers = (gva_t *)addr_gva2hva(vm, vm->handlers);
- handlers[vector] = (vm_vaddr_t)handler;
+ handlers[vector] = (gva_t)handler;
}
void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
@@ -664,12 +664,12 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
{
struct kvm_mp_state mp_state;
struct kvm_regs regs;
- vm_vaddr_t stack_vaddr;
+ gva_t stack_vaddr;
struct kvm_vcpu *vcpu;
- stack_vaddr = __vm_vaddr_alloc(vm, DEFAULT_STACK_PGS * getpagesize(),
- DEFAULT_GUEST_STACK_VADDR_MIN,
- MEM_REGION_DATA);
+ stack_vaddr = __gva_alloc(vm, DEFAULT_STACK_PGS * getpagesize(),
+ DEFAULT_GUEST_STACK_VADDR_MIN,
+ MEM_REGION_DATA);
stack_vaddr += DEFAULT_STACK_PGS * getpagesize();
@@ -683,7 +683,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
* may need to subtract 4 bytes instead of 8 bytes.
*/
TEST_ASSERT(IS_ALIGNED(stack_vaddr, PAGE_SIZE),
- "__vm_vaddr_alloc() did not provide a page-aligned address");
+ "__gva_alloc() did not provide a page-aligned address");
stack_vaddr -= 8;
vcpu = __vm_vcpu_add(vm, vcpu_id);
diff --git a/tools/testing/selftests/kvm/lib/x86/svm.c b/tools/testing/selftests/kvm/lib/x86/svm.c
index d239c2097391..104fe606d7d1 100644
--- a/tools/testing/selftests/kvm/lib/x86/svm.c
+++ b/tools/testing/selftests/kvm/lib/x86/svm.c
@@ -28,20 +28,20 @@ u64 rflags;
* Pointer to structure with the addresses of the SVM areas.
*/
struct svm_test_data *
-vcpu_alloc_svm(struct kvm_vm *vm, vm_vaddr_t *p_svm_gva)
+vcpu_alloc_svm(struct kvm_vm *vm, gva_t *p_svm_gva)
{
- vm_vaddr_t svm_gva = vm_vaddr_alloc_page(vm);
+ gva_t svm_gva = gva_alloc_page(vm);
struct svm_test_data *svm = addr_gva2hva(vm, svm_gva);
- svm->vmcb = (void *)vm_vaddr_alloc_page(vm);
+ svm->vmcb = (void *)gva_alloc_page(vm);
svm->vmcb_hva = addr_gva2hva(vm, (uintptr_t)svm->vmcb);
svm->vmcb_gpa = addr_gva2gpa(vm, (uintptr_t)svm->vmcb);
- svm->save_area = (void *)vm_vaddr_alloc_page(vm);
+ svm->save_area = (void *)gva_alloc_page(vm);
svm->save_area_hva = addr_gva2hva(vm, (uintptr_t)svm->save_area);
svm->save_area_gpa = addr_gva2gpa(vm, (uintptr_t)svm->save_area);
- svm->msr = (void *)vm_vaddr_alloc_page(vm);
+ svm->msr = (void *)gva_alloc_page(vm);
svm->msr_hva = addr_gva2hva(vm, (uintptr_t)svm->msr);
svm->msr_gpa = addr_gva2gpa(vm, (uintptr_t)svm->msr);
memset(svm->msr_hva, 0, getpagesize());
diff --git a/tools/testing/selftests/kvm/lib/x86/ucall.c b/tools/testing/selftests/kvm/lib/x86/ucall.c
index 1265cecc7dd1..1af2a6880cdf 100644
--- a/tools/testing/selftests/kvm/lib/x86/ucall.c
+++ b/tools/testing/selftests/kvm/lib/x86/ucall.c
@@ -8,7 +8,7 @@
#define UCALL_PIO_PORT ((uint16_t)0x1000)
-void ucall_arch_do_ucall(vm_vaddr_t uc)
+void ucall_arch_do_ucall(gva_t uc)
{
/*
* FIXME: Revert this hack (the entire commit that added it) once nVMX
diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index d4d1208dd023..ea37261b207c 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -70,39 +70,39 @@ int vcpu_enable_evmcs(struct kvm_vcpu *vcpu)
* Pointer to structure with the addresses of the VMX areas.
*/
struct vmx_pages *
-vcpu_alloc_vmx(struct kvm_vm *vm, vm_vaddr_t *p_vmx_gva)
+vcpu_alloc_vmx(struct kvm_vm *vm, gva_t *p_vmx_gva)
{
- vm_vaddr_t vmx_gva = vm_vaddr_alloc_page(vm);
+ gva_t vmx_gva = gva_alloc_page(vm);
struct vmx_pages *vmx = addr_gva2hva(vm, vmx_gva);
/* Setup of a region of guest memory for the vmxon region. */
- vmx->vmxon = (void *)vm_vaddr_alloc_page(vm);
+ vmx->vmxon = (void *)gva_alloc_page(vm);
vmx->vmxon_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmxon);
vmx->vmxon_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmxon);
/* Setup of a region of guest memory for a vmcs. */
- vmx->vmcs = (void *)vm_vaddr_alloc_page(vm);
+ vmx->vmcs = (void *)gva_alloc_page(vm);
vmx->vmcs_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmcs);
vmx->vmcs_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmcs);
/* Setup of a region of guest memory for the MSR bitmap. */
- vmx->msr = (void *)vm_vaddr_alloc_page(vm);
+ vmx->msr = (void *)gva_alloc_page(vm);
vmx->msr_hva = addr_gva2hva(vm, (uintptr_t)vmx->msr);
vmx->msr_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->msr);
memset(vmx->msr_hva, 0, getpagesize());
/* Setup of a region of guest memory for the shadow VMCS. */
- vmx->shadow_vmcs = (void *)vm_vaddr_alloc_page(vm);
+ vmx->shadow_vmcs = (void *)gva_alloc_page(vm);
vmx->shadow_vmcs_hva = addr_gva2hva(vm, (uintptr_t)vmx->shadow_vmcs);
vmx->shadow_vmcs_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->shadow_vmcs);
/* Setup of a region of guest memory for the VMREAD and VMWRITE bitmaps. */
- vmx->vmread = (void *)vm_vaddr_alloc_page(vm);
+ vmx->vmread = (void *)gva_alloc_page(vm);
vmx->vmread_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmread);
vmx->vmread_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmread);
memset(vmx->vmread_hva, 0, getpagesize());
- vmx->vmwrite = (void *)vm_vaddr_alloc_page(vm);
+ vmx->vmwrite = (void *)gva_alloc_page(vm);
vmx->vmwrite_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmwrite);
vmx->vmwrite_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmwrite);
memset(vmx->vmwrite_hva, 0, getpagesize());
@@ -539,14 +539,14 @@ void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm,
{
TEST_ASSERT(kvm_cpu_has_ept(), "KVM doesn't support nested EPT");
- vmx->eptp = (void *)vm_vaddr_alloc_page(vm);
+ vmx->eptp = (void *)gva_alloc_page(vm);
vmx->eptp_hva = addr_gva2hva(vm, (uintptr_t)vmx->eptp);
vmx->eptp_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->eptp);
}
void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm)
{
- vmx->apic_access = (void *)vm_vaddr_alloc_page(vm);
+ vmx->apic_access = (void *)gva_alloc_page(vm);
vmx->apic_access_hva = addr_gva2hva(vm, (uintptr_t)vmx->apic_access);
vmx->apic_access_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->apic_access);
}
diff --git a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
index 03406de4989d..e166e20544d3 100644
--- a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
+++ b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
@@ -574,7 +574,7 @@ static void test_vm_setup_snapshot_mem(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
virt_map(vm, PMU_SNAPSHOT_GPA_BASE, PMU_SNAPSHOT_GPA_BASE, 1);
snapshot_gva = (void *)(PMU_SNAPSHOT_GPA_BASE);
- snapshot_gpa = addr_gva2gpa(vcpu->vm, (vm_vaddr_t)snapshot_gva);
+ snapshot_gpa = addr_gva2gpa(vcpu->vm, (gva_t)snapshot_gva);
sync_global_to_guest(vcpu->vm, snapshot_gva);
sync_global_to_guest(vcpu->vm, snapshot_gpa);
}
diff --git a/tools/testing/selftests/kvm/s390/memop.c b/tools/testing/selftests/kvm/s390/memop.c
index 4374b4cd2a80..a808fb2f6b2c 100644
--- a/tools/testing/selftests/kvm/s390/memop.c
+++ b/tools/testing/selftests/kvm/s390/memop.c
@@ -878,10 +878,10 @@ static void guest_copy_key_fetch_prot_override(void)
static void test_copy_key_fetch_prot_override(void)
{
struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
- vm_vaddr_t guest_0_page, guest_last_page;
+ gva_t guest_0_page, guest_last_page;
- guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);
- guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
+ guest_0_page = gva_alloc(t.kvm_vm, PAGE_SIZE, 0);
+ guest_last_page = gva_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
if (guest_0_page != 0 || guest_last_page != last_page_addr) {
print_skip("did not allocate guest pages at required positions");
goto out;
@@ -917,10 +917,10 @@ static void test_copy_key_fetch_prot_override(void)
static void test_errors_key_fetch_prot_override_not_enabled(void)
{
struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
- vm_vaddr_t guest_0_page, guest_last_page;
+ gva_t guest_0_page, guest_last_page;
- guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);
- guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
+ guest_0_page = gva_alloc(t.kvm_vm, PAGE_SIZE, 0);
+ guest_last_page = gva_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
if (guest_0_page != 0 || guest_last_page != last_page_addr) {
print_skip("did not allocate guest pages at required positions");
goto out;
@@ -938,10 +938,10 @@ static void test_errors_key_fetch_prot_override_not_enabled(void)
static void test_errors_key_fetch_prot_override_enabled(void)
{
struct test_default t = test_default_init(guest_copy_key_fetch_prot_override);
- vm_vaddr_t guest_0_page, guest_last_page;
+ gva_t guest_0_page, guest_last_page;
- guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0);
- guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
+ guest_0_page = gva_alloc(t.kvm_vm, PAGE_SIZE, 0);
+ guest_last_page = gva_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr);
if (guest_0_page != 0 || guest_last_page != last_page_addr) {
print_skip("did not allocate guest pages at required positions");
goto out;
diff --git a/tools/testing/selftests/kvm/s390/tprot.c b/tools/testing/selftests/kvm/s390/tprot.c
index 12d5e1cb62e3..b50209979e10 100644
--- a/tools/testing/selftests/kvm/s390/tprot.c
+++ b/tools/testing/selftests/kvm/s390/tprot.c
@@ -146,7 +146,7 @@ static enum stage perform_next_stage(int *i, bool mapped_0)
/*
* Some fetch protection override tests require that page 0
* be mapped, however, when the hosts tries to map that page via
- * vm_vaddr_alloc, it may happen that some other page gets mapped
+ * gva_alloc, it may happen that some other page gets mapped
* instead.
* In order to skip these tests we detect this inside the guest
*/
@@ -207,7 +207,7 @@ int main(int argc, char *argv[])
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
struct kvm_run *run;
- vm_vaddr_t guest_0_page;
+ gva_t guest_0_page;
ksft_print_header();
ksft_set_plan(STAGE_END);
@@ -216,10 +216,10 @@ int main(int argc, char *argv[])
run = vcpu->run;
HOST_SYNC(vcpu, STAGE_INIT_SIMPLE);
- mprotect(addr_gva2hva(vm, (vm_vaddr_t)pages), PAGE_SIZE * 2, PROT_READ);
+ mprotect(addr_gva2hva(vm, (gva_t)pages), PAGE_SIZE * 2, PROT_READ);
HOST_SYNC(vcpu, TEST_SIMPLE);
- guest_0_page = vm_vaddr_alloc(vm, PAGE_SIZE, 0);
+ guest_0_page = gva_alloc(vm, PAGE_SIZE, 0);
if (guest_0_page != 0) {
/* Use NO_TAP so we don't get a PASS print */
HOST_SYNC_NO_TAP(vcpu, STAGE_INIT_FETCH_PROT_OVERRIDE);
@@ -229,7 +229,7 @@ int main(int argc, char *argv[])
HOST_SYNC(vcpu, STAGE_INIT_FETCH_PROT_OVERRIDE);
}
if (guest_0_page == 0)
- mprotect(addr_gva2hva(vm, (vm_vaddr_t)0), PAGE_SIZE, PROT_READ);
+ mprotect(addr_gva2hva(vm, (gva_t)0), PAGE_SIZE, PROT_READ);
run->s.regs.crs[0] |= CR0_FETCH_PROTECTION_OVERRIDE;
run->kvm_dirty_regs = KVM_SYNC_CRS;
HOST_SYNC(vcpu, TEST_FETCH_PROT_OVERRIDE);
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index cce2520af720..24eaabe372bc 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -280,7 +280,7 @@ static void steal_time_init(struct kvm_vcpu *vcpu, uint32_t i)
{
/* ST_GPA_BASE is identity mapped */
st_gva[i] = (void *)(ST_GPA_BASE + i * STEAL_TIME_SIZE);
- st_gpa[i] = addr_gva2gpa(vcpu->vm, (vm_vaddr_t)st_gva[i]);
+ st_gpa[i] = addr_gva2gpa(vcpu->vm, (gva_t)st_gva[i]);
sync_global_to_guest(vcpu->vm, st_gva[i]);
sync_global_to_guest(vcpu->vm, st_gpa[i]);
}
diff --git a/tools/testing/selftests/kvm/x86/amx_test.c b/tools/testing/selftests/kvm/x86/amx_test.c
index f4ce5a185a7d..d49230ad5caf 100644
--- a/tools/testing/selftests/kvm/x86/amx_test.c
+++ b/tools/testing/selftests/kvm/x86/amx_test.c
@@ -201,7 +201,7 @@ int main(int argc, char *argv[])
struct kvm_vm *vm;
struct kvm_x86_state *state;
int xsave_restore_size;
- vm_vaddr_t amx_cfg, tiledata, xstate;
+ gva_t amx_cfg, tiledata, xstate;
struct ucall uc;
u32 amx_offset;
int ret;
@@ -232,15 +232,15 @@ int main(int argc, char *argv[])
vm_install_exception_handler(vm, NM_VECTOR, guest_nm_handler);
/* amx cfg for guest_code */
- amx_cfg = vm_vaddr_alloc_page(vm);
+ amx_cfg = gva_alloc_page(vm);
memset(addr_gva2hva(vm, amx_cfg), 0x0, getpagesize());
/* amx tiledata for guest_code */
- tiledata = vm_vaddr_alloc_pages(vm, 2);
+ tiledata = gva_alloc_pages(vm, 2);
memset(addr_gva2hva(vm, tiledata), rand() | 1, 2 * getpagesize());
/* XSAVE state for guest_code */
- xstate = vm_vaddr_alloc_pages(vm, DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE));
+ xstate = gva_alloc_pages(vm, DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE));
memset(addr_gva2hva(vm, xstate), 0, PAGE_SIZE * DIV_ROUND_UP(XSAVE_SIZE, PAGE_SIZE));
vcpu_args_set(vcpu, 3, amx_cfg, tiledata, xstate);
diff --git a/tools/testing/selftests/kvm/x86/cpuid_test.c b/tools/testing/selftests/kvm/x86/cpuid_test.c
index 7b3fda6842bc..43e1cbcb3592 100644
--- a/tools/testing/selftests/kvm/x86/cpuid_test.c
+++ b/tools/testing/selftests/kvm/x86/cpuid_test.c
@@ -140,10 +140,10 @@ static void run_vcpu(struct kvm_vcpu *vcpu, int stage)
}
}
-struct kvm_cpuid2 *vcpu_alloc_cpuid(struct kvm_vm *vm, vm_vaddr_t *p_gva, struct kvm_cpuid2 *cpuid)
+struct kvm_cpuid2 *vcpu_alloc_cpuid(struct kvm_vm *vm, gva_t *p_gva, struct kvm_cpuid2 *cpuid)
{
int size = sizeof(*cpuid) + cpuid->nent * sizeof(cpuid->entries[0]);
- vm_vaddr_t gva = vm_vaddr_alloc(vm, size, KVM_UTIL_MIN_VADDR);
+ gva_t gva = gva_alloc(vm, size, KVM_UTIL_MIN_VADDR);
struct kvm_cpuid2 *guest_cpuids = addr_gva2hva(vm, gva);
memcpy(guest_cpuids, cpuid, size);
@@ -202,7 +202,7 @@ static void test_get_cpuid2(struct kvm_vcpu *vcpu)
int main(void)
{
struct kvm_vcpu *vcpu;
- vm_vaddr_t cpuid_gva;
+ gva_t cpuid_gva;
struct kvm_vm *vm;
int stage;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_clock.c b/tools/testing/selftests/kvm/x86/hyperv_clock.c
index e058bc676cd6..046d33ec69fc 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_clock.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_clock.c
@@ -208,7 +208,7 @@ int main(void)
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
struct ucall uc;
- vm_vaddr_t tsc_page_gva;
+ gva_t tsc_page_gva;
int stage;
TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_TIME));
@@ -218,7 +218,7 @@ int main(void)
vcpu_set_hv_cpuid(vcpu);
- tsc_page_gva = vm_vaddr_alloc_page(vm);
+ tsc_page_gva = gva_alloc_page(vm);
memset(addr_gva2hva(vm, tsc_page_gva), 0x0, getpagesize());
TEST_ASSERT((addr_gva2gpa(vm, tsc_page_gva) & (getpagesize() - 1)) == 0,
"TSC page has to be page aligned");
diff --git a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
index 74cf19661309..58f27dcc3d5f 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
@@ -76,7 +76,7 @@ void l2_guest_code(void)
}
void guest_code(struct vmx_pages *vmx_pages, struct hyperv_test_pages *hv_pages,
- vm_vaddr_t hv_hcall_page_gpa)
+ gva_t hv_hcall_page_gpa)
{
#define L2_GUEST_STACK_SIZE 64
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
@@ -231,8 +231,8 @@ static struct kvm_vcpu *save_restore_vm(struct kvm_vm *vm,
int main(int argc, char *argv[])
{
- vm_vaddr_t vmx_pages_gva = 0, hv_pages_gva = 0;
- vm_vaddr_t hcall_page;
+ gva_t vmx_pages_gva = 0, hv_pages_gva = 0;
+ gva_t hcall_page;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
@@ -246,7 +246,7 @@ int main(int argc, char *argv[])
vm = vm_create_with_one_vcpu(&vcpu, guest_code);
- hcall_page = vm_vaddr_alloc_pages(vm, 1);
+ hcall_page = gva_alloc_pages(vm, 1);
memset(addr_gva2hva(vm, hcall_page), 0x0, getpagesize());
vcpu_set_hv_cpuid(vcpu);
diff --git a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
index 949e08e98f31..43e1c5149d97 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
@@ -16,7 +16,7 @@
#define EXT_CAPABILITIES 0xbull
static void guest_code(vm_paddr_t in_pg_gpa, vm_paddr_t out_pg_gpa,
- vm_vaddr_t out_pg_gva)
+ gva_t out_pg_gva)
{
uint64_t *output_gva;
@@ -35,8 +35,8 @@ static void guest_code(vm_paddr_t in_pg_gpa, vm_paddr_t out_pg_gpa,
int main(void)
{
- vm_vaddr_t hcall_out_page;
- vm_vaddr_t hcall_in_page;
+ gva_t hcall_out_page;
+ gva_t hcall_in_page;
struct kvm_vcpu *vcpu;
struct kvm_run *run;
struct kvm_vm *vm;
@@ -57,11 +57,11 @@ int main(void)
vcpu_set_hv_cpuid(vcpu);
/* Hypercall input */
- hcall_in_page = vm_vaddr_alloc_pages(vm, 1);
+ hcall_in_page = gva_alloc_pages(vm, 1);
memset(addr_gva2hva(vm, hcall_in_page), 0x0, vm->page_size);
/* Hypercall output */
- hcall_out_page = vm_vaddr_alloc_pages(vm, 1);
+ hcall_out_page = gva_alloc_pages(vm, 1);
memset(addr_gva2hva(vm, hcall_out_page), 0x0, vm->page_size);
vcpu_args_set(vcpu, 3, addr_gva2gpa(vm, hcall_in_page),
diff --git a/tools/testing/selftests/kvm/x86/hyperv_features.c b/tools/testing/selftests/kvm/x86/hyperv_features.c
index 068e9c69710d..5cf19d96120d 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_features.c
@@ -82,7 +82,7 @@ static void guest_msr(struct msr_data *msr)
GUEST_DONE();
}
-static void guest_hcall(vm_vaddr_t pgs_gpa, struct hcall_data *hcall)
+static void guest_hcall(gva_t pgs_gpa, struct hcall_data *hcall)
{
u64 res, input, output;
uint8_t vector;
@@ -134,14 +134,14 @@ static void guest_test_msrs_access(void)
struct kvm_vm *vm;
struct ucall uc;
int stage = 0;
- vm_vaddr_t msr_gva;
+ gva_t msr_gva;
struct msr_data *msr;
bool has_invtsc = kvm_cpu_has(X86_FEATURE_INVTSC);
while (true) {
vm = vm_create_with_one_vcpu(&vcpu, guest_msr);
- msr_gva = vm_vaddr_alloc_page(vm);
+ msr_gva = gva_alloc_page(vm);
memset(addr_gva2hva(vm, msr_gva), 0x0, getpagesize());
msr = addr_gva2hva(vm, msr_gva);
@@ -523,17 +523,17 @@ static void guest_test_hcalls_access(void)
struct kvm_vm *vm;
struct ucall uc;
int stage = 0;
- vm_vaddr_t hcall_page, hcall_params;
+ gva_t hcall_page, hcall_params;
struct hcall_data *hcall;
while (true) {
vm = vm_create_with_one_vcpu(&vcpu, guest_hcall);
/* Hypercall input/output */
- hcall_page = vm_vaddr_alloc_pages(vm, 2);
+ hcall_page = gva_alloc_pages(vm, 2);
memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());
- hcall_params = vm_vaddr_alloc_page(vm);
+ hcall_params = gva_alloc_page(vm);
memset(addr_gva2hva(vm, hcall_params), 0x0, getpagesize());
hcall = addr_gva2hva(vm, hcall_params);
diff --git a/tools/testing/selftests/kvm/x86/hyperv_ipi.c b/tools/testing/selftests/kvm/x86/hyperv_ipi.c
index 2b5b4bc6ef7e..865fdd8e4284 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_ipi.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_ipi.c
@@ -45,13 +45,13 @@ struct hv_send_ipi_ex {
struct hv_vpset vp_set;
};
-static inline void hv_init(vm_vaddr_t pgs_gpa)
+static inline void hv_init(gva_t pgs_gpa)
{
wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
}
-static void receiver_code(void *hcall_page, vm_vaddr_t pgs_gpa)
+static void receiver_code(void *hcall_page, gva_t pgs_gpa)
{
u32 vcpu_id;
@@ -85,7 +85,7 @@ static inline void nop_loop(void)
asm volatile("nop");
}
-static void sender_guest_code(void *hcall_page, vm_vaddr_t pgs_gpa)
+static void sender_guest_code(void *hcall_page, gva_t pgs_gpa)
{
struct hv_send_ipi *ipi = (struct hv_send_ipi *)hcall_page;
struct hv_send_ipi_ex *ipi_ex = (struct hv_send_ipi_ex *)hcall_page;
@@ -243,7 +243,7 @@ int main(int argc, char *argv[])
{
struct kvm_vm *vm;
struct kvm_vcpu *vcpu[3];
- vm_vaddr_t hcall_page;
+ gva_t hcall_page;
pthread_t threads[2];
int stage = 1, r;
struct ucall uc;
@@ -253,7 +253,7 @@ int main(int argc, char *argv[])
vm = vm_create_with_one_vcpu(&vcpu[0], sender_guest_code);
/* Hypercall input/output */
- hcall_page = vm_vaddr_alloc_pages(vm, 2);
+ hcall_page = gva_alloc_pages(vm, 2);
memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());
diff --git a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
index 0ddb63229bcb..436c16460fe0 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
@@ -67,7 +67,7 @@ void l2_guest_code(void)
static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm,
struct hyperv_test_pages *hv_pages,
- vm_vaddr_t pgs_gpa)
+ gva_t pgs_gpa)
{
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
struct vmcb *vmcb = svm->vmcb;
@@ -149,8 +149,8 @@ static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm,
int main(int argc, char *argv[])
{
- vm_vaddr_t nested_gva = 0, hv_pages_gva = 0;
- vm_vaddr_t hcall_page;
+ gva_t nested_gva = 0, hv_pages_gva = 0;
+ gva_t hcall_page;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
struct ucall uc;
@@ -165,7 +165,7 @@ int main(int argc, char *argv[])
vcpu_alloc_svm(vm, &nested_gva);
vcpu_alloc_hyperv_test_pages(vm, &hv_pages_gva);
- hcall_page = vm_vaddr_alloc_pages(vm, 1);
+ hcall_page = gva_alloc_pages(vm, 1);
memset(addr_gva2hva(vm, hcall_page), 0x0, getpagesize());
vcpu_args_set(vcpu, 3, nested_gva, hv_pages_gva, addr_gva2gpa(vm, hcall_page));
diff --git a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
index 077cd0ec3040..aa795f9a5950 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
@@ -61,14 +61,14 @@ struct hv_tlb_flush_ex {
* - GVAs of the test pages' PTEs
*/
struct test_data {
- vm_vaddr_t hcall_gva;
+ gva_t hcall_gva;
vm_paddr_t hcall_gpa;
- vm_vaddr_t test_pages;
- vm_vaddr_t test_pages_pte[NTEST_PAGES];
+ gva_t test_pages;
+ gva_t test_pages_pte[NTEST_PAGES];
};
/* 'Worker' vCPU code checking the contents of the test page */
-static void worker_guest_code(vm_vaddr_t test_data)
+static void worker_guest_code(gva_t test_data)
{
struct test_data *data = (struct test_data *)test_data;
u32 vcpu_id = rdmsr(HV_X64_MSR_VP_INDEX);
@@ -196,7 +196,7 @@ static inline void post_test(struct test_data *data, u64 exp1, u64 exp2)
#define TESTVAL2 0x0202020202020202
/* Main vCPU doing the test */
-static void sender_guest_code(vm_vaddr_t test_data)
+static void sender_guest_code(gva_t test_data)
{
struct test_data *data = (struct test_data *)test_data;
struct hv_tlb_flush *flush = (struct hv_tlb_flush *)data->hcall_gva;
@@ -581,7 +581,7 @@ int main(int argc, char *argv[])
struct kvm_vm *vm;
struct kvm_vcpu *vcpu[3];
pthread_t threads[2];
- vm_vaddr_t test_data_page, gva;
+ gva_t test_data_page, gva;
vm_paddr_t gpa;
uint64_t *pte;
struct test_data *data;
@@ -593,11 +593,11 @@ int main(int argc, char *argv[])
vm = vm_create_with_one_vcpu(&vcpu[0], sender_guest_code);
/* Test data page */
- test_data_page = vm_vaddr_alloc_page(vm);
+ test_data_page = gva_alloc_page(vm);
data = (struct test_data *)addr_gva2hva(vm, test_data_page);
/* Hypercall input/output */
- data->hcall_gva = vm_vaddr_alloc_pages(vm, 2);
+ data->hcall_gva = gva_alloc_pages(vm, 2);
data->hcall_gpa = addr_gva2gpa(vm, data->hcall_gva);
memset(addr_gva2hva(vm, data->hcall_gva), 0x0, 2 * PAGE_SIZE);
@@ -606,7 +606,7 @@ int main(int argc, char *argv[])
* and the test will swap their mappings. The third page keeps the indication
* about the current state of mappings.
*/
- data->test_pages = vm_vaddr_alloc_pages(vm, NTEST_PAGES + 1);
+ data->test_pages = gva_alloc_pages(vm, NTEST_PAGES + 1);
for (i = 0; i < NTEST_PAGES; i++)
memset(addr_gva2hva(vm, data->test_pages + PAGE_SIZE * i),
(u8)(i + 1), PAGE_SIZE);
@@ -617,7 +617,7 @@ int main(int argc, char *argv[])
* Get PTE pointers for test pages and map them inside the guest.
* Use separate page for each PTE for simplicity.
*/
- gva = vm_vaddr_unused_gap(vm, NTEST_PAGES * PAGE_SIZE, KVM_UTIL_MIN_VADDR);
+ gva = gva_unused_gap(vm, NTEST_PAGES * PAGE_SIZE, KVM_UTIL_MIN_VADDR);
for (i = 0; i < NTEST_PAGES; i++) {
pte = vm_get_page_table_entry(vm, data->test_pages + i * PAGE_SIZE);
gpa = addr_hva2gpa(vm, pte);
diff --git a/tools/testing/selftests/kvm/x86/kvm_clock_test.c b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
index 5bc12222d87a..b335ee2a8e97 100644
--- a/tools/testing/selftests/kvm/x86/kvm_clock_test.c
+++ b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
@@ -135,7 +135,7 @@ static void enter_guest(struct kvm_vcpu *vcpu)
int main(void)
{
struct kvm_vcpu *vcpu;
- vm_vaddr_t pvti_gva;
+ gva_t pvti_gva;
vm_paddr_t pvti_gpa;
struct kvm_vm *vm;
int flags;
@@ -147,7 +147,7 @@ int main(void)
vm = vm_create_with_one_vcpu(&vcpu, guest_main);
- pvti_gva = vm_vaddr_alloc(vm, getpagesize(), 0x10000);
+ pvti_gva = gva_alloc(vm, getpagesize(), 0x10000);
pvti_gpa = addr_gva2gpa(vm, pvti_gva);
vcpu_args_set(vcpu, 2, pvti_gpa, pvti_gva);
diff --git a/tools/testing/selftests/kvm/x86/nested_emulation_test.c b/tools/testing/selftests/kvm/x86/nested_emulation_test.c
index abc824dba04f..d398add21e4c 100644
--- a/tools/testing/selftests/kvm/x86/nested_emulation_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_emulation_test.c
@@ -122,7 +122,7 @@ static void guest_code(void *test_data)
int main(int argc, char *argv[])
{
- vm_vaddr_t nested_test_data_gva;
+ gva_t nested_test_data_gva;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/nested_exceptions_test.c b/tools/testing/selftests/kvm/x86/nested_exceptions_test.c
index 3641a42934ac..646cfb0022b3 100644
--- a/tools/testing/selftests/kvm/x86/nested_exceptions_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_exceptions_test.c
@@ -216,7 +216,7 @@ static void queue_ss_exception(struct kvm_vcpu *vcpu, bool inject)
*/
int main(int argc, char *argv[])
{
- vm_vaddr_t nested_test_data_gva;
+ gva_t nested_test_data_gva;
struct kvm_vcpu_events events;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/sev_smoke_test.c b/tools/testing/selftests/kvm/x86/sev_smoke_test.c
index d97816dc476a..dc0734d2973c 100644
--- a/tools/testing/selftests/kvm/x86/sev_smoke_test.c
+++ b/tools/testing/selftests/kvm/x86/sev_smoke_test.c
@@ -66,15 +66,15 @@ static void test_sync_vmsa(uint32_t policy)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- vm_vaddr_t gva;
+ gva_t gva;
void *hva;
double x87val = M_PI;
struct kvm_xsave __attribute__((aligned(64))) xsave = { 0 };
vm = vm_sev_create_with_one_vcpu(KVM_X86_SEV_ES_VM, guest_code_xsave, &vcpu);
- gva = vm_vaddr_alloc_shared(vm, PAGE_SIZE, KVM_UTIL_MIN_VADDR,
- MEM_REGION_TEST_DATA);
+ gva = gva_alloc_shared(vm, PAGE_SIZE, KVM_UTIL_MIN_VADDR,
+ MEM_REGION_TEST_DATA);
hva = addr_gva2hva(vm, gva);
vcpu_args_set(vcpu, 1, gva);
diff --git a/tools/testing/selftests/kvm/x86/smm_test.c b/tools/testing/selftests/kvm/x86/smm_test.c
index 55c88d664a94..ba64f4e8456d 100644
--- a/tools/testing/selftests/kvm/x86/smm_test.c
+++ b/tools/testing/selftests/kvm/x86/smm_test.c
@@ -127,7 +127,7 @@ void inject_smi(struct kvm_vcpu *vcpu)
int main(int argc, char *argv[])
{
- vm_vaddr_t nested_gva = 0;
+ gva_t nested_gva = 0;
struct kvm_vcpu *vcpu;
struct kvm_regs regs;
diff --git a/tools/testing/selftests/kvm/x86/state_test.c b/tools/testing/selftests/kvm/x86/state_test.c
index 141b7fc0c965..062f425db75b 100644
--- a/tools/testing/selftests/kvm/x86/state_test.c
+++ b/tools/testing/selftests/kvm/x86/state_test.c
@@ -225,7 +225,7 @@ static void __attribute__((__flatten__)) guest_code(void *arg)
int main(int argc, char *argv[])
{
uint64_t *xstate_bv, saved_xstate_bv;
- vm_vaddr_t nested_gva = 0;
+ gva_t nested_gva = 0;
struct kvm_cpuid2 empty_cpuid = {};
struct kvm_regs regs1, regs2;
struct kvm_vcpu *vcpu, *vcpuN;
diff --git a/tools/testing/selftests/kvm/x86/svm_int_ctl_test.c b/tools/testing/selftests/kvm/x86/svm_int_ctl_test.c
index 917b6066cfc1..d3cc5e4f7883 100644
--- a/tools/testing/selftests/kvm/x86/svm_int_ctl_test.c
+++ b/tools/testing/selftests/kvm/x86/svm_int_ctl_test.c
@@ -82,7 +82,7 @@ static void l1_guest_code(struct svm_test_data *svm)
int main(int argc, char *argv[])
{
struct kvm_vcpu *vcpu;
- vm_vaddr_t svm_gva;
+ gva_t svm_gva;
struct kvm_vm *vm;
struct ucall uc;
diff --git a/tools/testing/selftests/kvm/x86/svm_nested_shutdown_test.c b/tools/testing/selftests/kvm/x86/svm_nested_shutdown_test.c
index 00135cbba35e..c6ea3d609a62 100644
--- a/tools/testing/selftests/kvm/x86/svm_nested_shutdown_test.c
+++ b/tools/testing/selftests/kvm/x86/svm_nested_shutdown_test.c
@@ -42,7 +42,7 @@ static void l1_guest_code(struct svm_test_data *svm, struct idt_entry *idt)
int main(int argc, char *argv[])
{
struct kvm_vcpu *vcpu;
- vm_vaddr_t svm_gva;
+ gva_t svm_gva;
struct kvm_vm *vm;
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
diff --git a/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c b/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c
index 7b6481d6c0d3..5068b2dd8005 100644
--- a/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c
+++ b/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c
@@ -144,8 +144,8 @@ static void run_test(bool is_nmi)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- vm_vaddr_t svm_gva;
- vm_vaddr_t idt_alt_vm;
+ gva_t svm_gva;
+ gva_t idt_alt_vm;
struct kvm_guest_debug debug;
pr_info("Running %s test\n", is_nmi ? "NMI" : "soft int");
@@ -161,7 +161,7 @@ static void run_test(bool is_nmi)
if (!is_nmi) {
void *idt, *idt_alt;
- idt_alt_vm = vm_vaddr_alloc_page(vm);
+ idt_alt_vm = gva_alloc_page(vm);
idt_alt = addr_gva2hva(vm, idt_alt_vm);
idt = addr_gva2hva(vm, vm->arch.idt);
memcpy(idt_alt, idt, getpagesize());
diff --git a/tools/testing/selftests/kvm/x86/svm_vmcall_test.c b/tools/testing/selftests/kvm/x86/svm_vmcall_test.c
index 8a62cca28cfb..b1887242f3b8 100644
--- a/tools/testing/selftests/kvm/x86/svm_vmcall_test.c
+++ b/tools/testing/selftests/kvm/x86/svm_vmcall_test.c
@@ -36,7 +36,7 @@ static void l1_guest_code(struct svm_test_data *svm)
int main(int argc, char *argv[])
{
struct kvm_vcpu *vcpu;
- vm_vaddr_t svm_gva;
+ gva_t svm_gva;
struct kvm_vm *vm;
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
diff --git a/tools/testing/selftests/kvm/x86/triple_fault_event_test.c b/tools/testing/selftests/kvm/x86/triple_fault_event_test.c
index 56306a19144a..f1c488e0d497 100644
--- a/tools/testing/selftests/kvm/x86/triple_fault_event_test.c
+++ b/tools/testing/selftests/kvm/x86/triple_fault_event_test.c
@@ -72,13 +72,13 @@ int main(void)
if (has_vmx) {
- vm_vaddr_t vmx_pages_gva;
+ gva_t vmx_pages_gva;
vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code_vmx);
vcpu_alloc_vmx(vm, &vmx_pages_gva);
vcpu_args_set(vcpu, 1, vmx_pages_gva);
} else {
- vm_vaddr_t svm_gva;
+ gva_t svm_gva;
vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code_svm);
vcpu_alloc_svm(vm, &svm_gva);
diff --git a/tools/testing/selftests/kvm/x86/vmx_apic_access_test.c b/tools/testing/selftests/kvm/x86/vmx_apic_access_test.c
index a81a24761aac..dc5c3d1db346 100644
--- a/tools/testing/selftests/kvm/x86/vmx_apic_access_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_apic_access_test.c
@@ -72,7 +72,7 @@ static void l1_guest_code(struct vmx_pages *vmx_pages, unsigned long high_gpa)
int main(int argc, char *argv[])
{
unsigned long apic_access_addr = ~0ul;
- vm_vaddr_t vmx_pages_gva;
+ gva_t vmx_pages_gva;
unsigned long high_gpa;
struct vmx_pages *vmx;
bool done = false;
diff --git a/tools/testing/selftests/kvm/x86/vmx_close_while_nested_test.c b/tools/testing/selftests/kvm/x86/vmx_close_while_nested_test.c
index dad988351493..aa27053d7dcd 100644
--- a/tools/testing/selftests/kvm/x86/vmx_close_while_nested_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_close_while_nested_test.c
@@ -47,7 +47,7 @@ static void l1_guest_code(struct vmx_pages *vmx_pages)
int main(int argc, char *argv[])
{
- vm_vaddr_t vmx_pages_gva;
+ gva_t vmx_pages_gva;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c b/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
index fa512d033205..9bf08e278ffe 100644
--- a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
@@ -79,7 +79,7 @@ void l1_guest_code(struct vmx_pages *vmx)
static void test_vmx_dirty_log(bool enable_ept)
{
- vm_vaddr_t vmx_pages_gva = 0;
+ gva_t vmx_pages_gva = 0;
struct vmx_pages *vmx;
unsigned long *bmap;
uint64_t *host_test_mem;
diff --git a/tools/testing/selftests/kvm/x86/vmx_invalid_nested_guest_state.c b/tools/testing/selftests/kvm/x86/vmx_invalid_nested_guest_state.c
index a100ee5f0009..a2eaceed9ad5 100644
--- a/tools/testing/selftests/kvm/x86/vmx_invalid_nested_guest_state.c
+++ b/tools/testing/selftests/kvm/x86/vmx_invalid_nested_guest_state.c
@@ -52,7 +52,7 @@ static void l1_guest_code(struct vmx_pages *vmx_pages)
int main(int argc, char *argv[])
{
- vm_vaddr_t vmx_pages_gva;
+ gva_t vmx_pages_gva;
struct kvm_sregs sregs;
struct kvm_vcpu *vcpu;
struct kvm_run *run;
diff --git a/tools/testing/selftests/kvm/x86/vmx_nested_tsc_scaling_test.c b/tools/testing/selftests/kvm/x86/vmx_nested_tsc_scaling_test.c
index 1759fa5cb3f2..530d71b6d6bc 100644
--- a/tools/testing/selftests/kvm/x86/vmx_nested_tsc_scaling_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_nested_tsc_scaling_test.c
@@ -120,7 +120,7 @@ int main(int argc, char *argv[])
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- vm_vaddr_t vmx_pages_gva;
+ gva_t vmx_pages_gva;
uint64_t tsc_start, tsc_end;
uint64_t tsc_khz;
diff --git a/tools/testing/selftests/kvm/x86/vmx_preemption_timer_test.c b/tools/testing/selftests/kvm/x86/vmx_preemption_timer_test.c
index 00dd2ac07a61..1b7b6ba23de7 100644
--- a/tools/testing/selftests/kvm/x86/vmx_preemption_timer_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_preemption_timer_test.c
@@ -152,7 +152,7 @@ void guest_code(struct vmx_pages *vmx_pages)
int main(int argc, char *argv[])
{
- vm_vaddr_t vmx_pages_gva = 0;
+ gva_t vmx_pages_gva = 0;
struct kvm_regs regs1, regs2;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c b/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c
index 2ceb5c78c442..fc294ccc2a7e 100644
--- a/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c
@@ -119,7 +119,7 @@ static void report(int64_t val)
int main(int argc, char *argv[])
{
- vm_vaddr_t vmx_pages_gva;
+ gva_t vmx_pages_gva;
struct kvm_vcpu *vcpu;
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
diff --git a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
index 35cb9de54a82..2aa14bd237d9 100644
--- a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
+++ b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
@@ -394,7 +394,7 @@ int main(int argc, char *argv[])
int run_secs = 0;
int delay_usecs = 0;
struct test_data_page *data;
- vm_vaddr_t test_data_page_vaddr;
+ gva_t test_data_page_vaddr;
bool migrate = false;
pthread_t threads[2];
struct thread_params params[2];
@@ -415,7 +415,7 @@ int main(int argc, char *argv[])
params[1].vcpu = vm_vcpu_add(vm, 1, sender_guest_code);
- test_data_page_vaddr = vm_vaddr_alloc_page(vm);
+ test_data_page_vaddr = gva_alloc_page(vm);
data = addr_gva2hva(vm, test_data_page_vaddr);
memset(data, 0, sizeof(*data));
params[0].data = data;
--
2.49.0.906.g1f30a19c02-goog
* [PATCH 02/10] KVM: selftests: Use gpa_t instead of vm_paddr_t
2025-05-01 18:32 [PATCH 00/10] KVM: selftests: Convert to kernel-style types David Matlack
2025-05-01 18:32 ` [PATCH 01/10] KVM: selftests: Use gva_t instead of vm_vaddr_t David Matlack
@ 2025-05-01 18:32 ` David Matlack
2025-05-01 18:32 ` [PATCH 03/10] KVM: selftests: Use gpa_t for GPAs in Hyper-V selftests David Matlack
` (9 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: David Matlack @ 2025-05-01 18:32 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Anup Patel, Atish Patra, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, David Hildenbrand,
Sean Christopherson, David Matlack, Andrew Jones, Isaku Yamahata,
Reinette Chatre, Eric Auger, James Houghton, Colin Ian King, kvm,
linux-arm-kernel, kvmarm, kvm-riscv, linux-riscv
Replace all occurrences of vm_paddr_t with gpa_t to align with KVM code
and with the conversion helpers (e.g. addr_hva2gpa()). Also replace
vm_paddr in function names with gpa to align with the new type name.
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | \
xargs sed -i 's/vm_paddr_/gpa_/g'
This was followed by manually adjusting whitespace to make checkpatch.pl happy.
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
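A hypothetical spot-check (not part of the patch itself): grep for
leftovers and rebuild the selftests to confirm the rename is complete:

	# Expect no output: every vm_paddr_ occurrence should now be gpa_
	git grep -n 'vm_paddr_' -- tools/testing/selftests/kvm

	# Confirm the selftests still build after the rename
	make -C tools/testing/selftests/kvm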
.../selftests/kvm/arm64/vgic_lpi_stress.c | 20 +++++------
tools/testing/selftests/kvm/dirty_log_test.c | 2 +-
.../testing/selftests/kvm/include/arm64/gic.h | 4 +--
.../selftests/kvm/include/arm64/gic_v3_its.h | 8 ++---
.../testing/selftests/kvm/include/kvm_util.h | 33 +++++++++----------
.../selftests/kvm/include/kvm_util_types.h | 2 +-
.../selftests/kvm/include/riscv/ucall.h | 2 +-
.../selftests/kvm/include/s390/ucall.h | 2 +-
.../selftests/kvm/include/ucall_common.h | 4 +--
tools/testing/selftests/kvm/include/x86/sev.h | 2 +-
.../testing/selftests/kvm/include/x86/ucall.h | 2 +-
.../selftests/kvm/kvm_page_table_test.c | 2 +-
.../testing/selftests/kvm/lib/arm64/gic_v3.c | 4 +--
.../selftests/kvm/lib/arm64/gic_v3_its.c | 12 +++----
.../selftests/kvm/lib/arm64/processor.c | 2 +-
tools/testing/selftests/kvm/lib/arm64/ucall.c | 2 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 23 +++++++------
tools/testing/selftests/kvm/lib/memstress.c | 2 +-
.../selftests/kvm/lib/riscv/processor.c | 2 +-
.../selftests/kvm/lib/s390/processor.c | 4 +--
.../testing/selftests/kvm/lib/ucall_common.c | 2 +-
.../testing/selftests/kvm/lib/x86/processor.c | 2 +-
tools/testing/selftests/kvm/lib/x86/sev.c | 2 +-
.../selftests/kvm/riscv/sbi_pmu_test.c | 4 +--
.../selftests/kvm/s390/ucontrol_test.c | 2 +-
tools/testing/selftests/kvm/steal_time.c | 4 +--
.../testing/selftests/kvm/x86/hyperv_clock.c | 2 +-
.../kvm/x86/hyperv_extended_hypercalls.c | 2 +-
.../selftests/kvm/x86/hyperv_tlb_flush.c | 8 ++---
.../selftests/kvm/x86/kvm_clock_test.c | 4 +--
30 files changed, 82 insertions(+), 84 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/vgic_lpi_stress.c b/tools/testing/selftests/kvm/arm64/vgic_lpi_stress.c
index fc4fe52fb6f8..3cb3f5d6ea8b 100644
--- a/tools/testing/selftests/kvm/arm64/vgic_lpi_stress.c
+++ b/tools/testing/selftests/kvm/arm64/vgic_lpi_stress.c
@@ -23,7 +23,7 @@
#define GIC_LPI_OFFSET 8192
static size_t nr_iterations = 1000;
-static vm_paddr_t gpa_base;
+static gpa_t gpa_base;
static struct kvm_vm *vm;
static struct kvm_vcpu **vcpus;
@@ -35,14 +35,14 @@ static struct test_data {
u32 nr_devices;
u32 nr_event_ids;
- vm_paddr_t device_table;
- vm_paddr_t collection_table;
- vm_paddr_t cmdq_base;
+ gpa_t device_table;
+ gpa_t collection_table;
+ gpa_t cmdq_base;
void *cmdq_base_va;
- vm_paddr_t itt_tables;
+ gpa_t itt_tables;
- vm_paddr_t lpi_prop_table;
- vm_paddr_t lpi_pend_tables;
+ gpa_t lpi_prop_table;
+ gpa_t lpi_pend_tables;
} test_data = {
.nr_cpus = 1,
.nr_devices = 1,
@@ -73,7 +73,7 @@ static void guest_setup_its_mappings(void)
/* Round-robin the LPIs to all of the vCPUs in the VM */
coll_id = 0;
for (device_id = 0; device_id < nr_devices; device_id++) {
- vm_paddr_t itt_base = test_data.itt_tables + (device_id * SZ_64K);
+ gpa_t itt_base = test_data.itt_tables + (device_id * SZ_64K);
its_send_mapd_cmd(test_data.cmdq_base_va, device_id,
itt_base, SZ_64K, true);
@@ -183,7 +183,7 @@ static void setup_test_data(void)
size_t pages_per_64k = vm_calc_num_guest_pages(vm->mode, SZ_64K);
u32 nr_devices = test_data.nr_devices;
u32 nr_cpus = test_data.nr_cpus;
- vm_paddr_t cmdq_base;
+ gpa_t cmdq_base;
test_data.device_table = vm_phy_pages_alloc(vm, pages_per_64k,
gpa_base,
@@ -222,7 +222,7 @@ static void setup_gic(void)
static void signal_lpi(u32 device_id, u32 event_id)
{
- vm_paddr_t db_addr = GITS_BASE_GPA + GITS_TRANSLATER;
+ gpa_t db_addr = GITS_BASE_GPA + GITS_TRANSLATER;
struct kvm_msi msi = {
.address_lo = db_addr,
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 23593d9eeba9..a7744974663b 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -669,7 +669,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages);
/* Cache the HVA pointer of the region */
- host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem);
+ host_test_mem = addr_gpa2hva(vm, (gpa_t)guest_test_phys_mem);
/* Export the shared variables to the guest */
sync_global_to_guest(vm, host_page_size);
diff --git a/tools/testing/selftests/kvm/include/arm64/gic.h b/tools/testing/selftests/kvm/include/arm64/gic.h
index baeb3c859389..7dbecc6daa4e 100644
--- a/tools/testing/selftests/kvm/include/arm64/gic.h
+++ b/tools/testing/selftests/kvm/include/arm64/gic.h
@@ -58,7 +58,7 @@ void gic_irq_clear_pending(unsigned int intid);
bool gic_irq_get_pending(unsigned int intid);
void gic_irq_set_config(unsigned int intid, bool is_edge);
-void gic_rdist_enable_lpis(vm_paddr_t cfg_table, size_t cfg_table_size,
- vm_paddr_t pend_table);
+void gic_rdist_enable_lpis(gpa_t cfg_table, size_t cfg_table_size,
+ gpa_t pend_table);
#endif /* SELFTEST_KVM_GIC_H */
diff --git a/tools/testing/selftests/kvm/include/arm64/gic_v3_its.h b/tools/testing/selftests/kvm/include/arm64/gic_v3_its.h
index 3722ed9c8f96..57eabcd64104 100644
--- a/tools/testing/selftests/kvm/include/arm64/gic_v3_its.h
+++ b/tools/testing/selftests/kvm/include/arm64/gic_v3_its.h
@@ -5,11 +5,11 @@
#include <linux/sizes.h>
-void its_init(vm_paddr_t coll_tbl, size_t coll_tbl_sz,
- vm_paddr_t device_tbl, size_t device_tbl_sz,
- vm_paddr_t cmdq, size_t cmdq_size);
+void its_init(gpa_t coll_tbl, size_t coll_tbl_sz,
+ gpa_t device_tbl, size_t device_tbl_sz,
+ gpa_t cmdq, size_t cmdq_size);
-void its_send_mapd_cmd(void *cmdq_base, u32 device_id, vm_paddr_t itt_base,
+void its_send_mapd_cmd(void *cmdq_base, u32 device_id, gpa_t itt_base,
size_t itt_size, bool valid);
void its_send_mapc_cmd(void *cmdq_base, u32 vcpu_id, u32 collection_id, bool valid);
void its_send_mapti_cmd(void *cmdq_base, u32 device_id, u32 event_id,
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 4b7012bd8041..67ac59f66b6e 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -98,8 +98,8 @@ struct kvm_vm {
struct sparsebit *vpages_mapped;
bool has_irqchip;
bool pgd_created;
- vm_paddr_t ucall_mmio_addr;
- vm_paddr_t pgd;
+ gpa_t ucall_mmio_addr;
+ gpa_t pgd;
gva_t handlers;
uint32_t dirty_ring_size;
uint64_t gpa_tag_mask;
@@ -616,16 +616,16 @@ gva_t gva_alloc_page(struct kvm_vm *vm);
void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
unsigned int npages);
-void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
+void *addr_gpa2hva(struct kvm_vm *vm, gpa_t gpa);
void *addr_gva2hva(struct kvm_vm *vm, gva_t gva);
-vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
-void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa);
+gpa_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
+void *addr_gpa2alias(struct kvm_vm *vm, gpa_t gpa);
#ifndef vcpu_arch_put_guest
#define vcpu_arch_put_guest(mem, val) do { (mem) = (val); } while (0)
#endif
-static inline vm_paddr_t vm_untag_gpa(struct kvm_vm *vm, vm_paddr_t gpa)
+static inline gpa_t vm_untag_gpa(struct kvm_vm *vm, gpa_t gpa)
{
return gpa & ~vm->gpa_tag_mask;
}
@@ -876,15 +876,14 @@ void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
const char *exit_reason_str(unsigned int exit_reason);
-vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
- uint32_t memslot);
-vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- vm_paddr_t paddr_min, uint32_t memslot,
- bool protected);
-vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
+gpa_t vm_phy_page_alloc(struct kvm_vm *vm, gpa_t paddr_min, uint32_t memslot);
+gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+ gpa_t paddr_min, uint32_t memslot,
+ bool protected);
+gpa_t vm_alloc_page_table(struct kvm_vm *vm);
-static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- vm_paddr_t paddr_min, uint32_t memslot)
+static inline gpa_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+ gpa_t paddr_min, uint32_t memslot)
{
/*
* By default, allocate memory as protected for VMs that support
@@ -1104,9 +1103,9 @@ static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr
* Returns the VM physical address of the translated VM virtual
* address given by @gva.
*/
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva);
+gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva);
-static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, gva_t gva)
+static inline gpa_t addr_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
return addr_arch_gva2gpa(vm, gva);
}
@@ -1148,7 +1147,7 @@ void kvm_selftest_arch_init(void);
void kvm_arch_vm_post_create(struct kvm_vm *vm);
-bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr);
+bool vm_is_gpa_protected(struct kvm_vm *vm, gpa_t paddr);
uint32_t guest_get_vcpuid(void);
diff --git a/tools/testing/selftests/kvm/include/kvm_util_types.h b/tools/testing/selftests/kvm/include/kvm_util_types.h
index a53e04286554..224a29cea790 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_types.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_types.h
@@ -14,7 +14,7 @@
#define __kvm_static_assert(expr, msg, ...) _Static_assert(expr, msg)
#define kvm_static_assert(expr, ...) __kvm_static_assert(expr, ##__VA_ARGS__, #expr)
-typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
+typedef uint64_t gpa_t; /* Virtual Machine (Guest) physical address */
typedef uint64_t gva_t; /* Virtual Machine (Guest) virtual address */
#endif /* SELFTEST_KVM_UTIL_TYPES_H */
diff --git a/tools/testing/selftests/kvm/include/riscv/ucall.h b/tools/testing/selftests/kvm/include/riscv/ucall.h
index 41d56254968e..2de7c6a36096 100644
--- a/tools/testing/selftests/kvm/include/riscv/ucall.h
+++ b/tools/testing/selftests/kvm/include/riscv/ucall.h
@@ -7,7 +7,7 @@
#define UCALL_EXIT_REASON KVM_EXIT_RISCV_SBI
-static inline void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
+static inline void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
}
diff --git a/tools/testing/selftests/kvm/include/s390/ucall.h b/tools/testing/selftests/kvm/include/s390/ucall.h
index befee84c4609..3907d629304f 100644
--- a/tools/testing/selftests/kvm/include/s390/ucall.h
+++ b/tools/testing/selftests/kvm/include/s390/ucall.h
@@ -6,7 +6,7 @@
#define UCALL_EXIT_REASON KVM_EXIT_S390_SIEIC
-static inline void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
+static inline void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
}
diff --git a/tools/testing/selftests/kvm/include/ucall_common.h b/tools/testing/selftests/kvm/include/ucall_common.h
index e5499f170834..1db399c00d02 100644
--- a/tools/testing/selftests/kvm/include/ucall_common.h
+++ b/tools/testing/selftests/kvm/include/ucall_common.h
@@ -29,7 +29,7 @@ struct ucall {
struct ucall *hva;
};
-void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa);
+void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa);
void ucall_arch_do_ucall(gva_t uc);
void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu);
@@ -39,7 +39,7 @@ __printf(5, 6) void ucall_assert(uint64_t cmd, const char *exp,
const char *file, unsigned int line,
const char *fmt, ...);
uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc);
-void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa);
+void ucall_init(struct kvm_vm *vm, gpa_t mmio_gpa);
int ucall_nr_pages_required(uint64_t page_size);
/*
diff --git a/tools/testing/selftests/kvm/include/x86/sev.h b/tools/testing/selftests/kvm/include/x86/sev.h
index 82c11c81a956..9aefe83e16b8 100644
--- a/tools/testing/selftests/kvm/include/x86/sev.h
+++ b/tools/testing/selftests/kvm/include/x86/sev.h
@@ -82,7 +82,7 @@ static inline void sev_register_encrypted_memory(struct kvm_vm *vm,
vm_ioctl(vm, KVM_MEMORY_ENCRYPT_REG_REGION, &range);
}
-static inline void sev_launch_update_data(struct kvm_vm *vm, vm_paddr_t gpa,
+static inline void sev_launch_update_data(struct kvm_vm *vm, gpa_t gpa,
uint64_t size)
{
struct kvm_sev_launch_update_data update_data = {
diff --git a/tools/testing/selftests/kvm/include/x86/ucall.h b/tools/testing/selftests/kvm/include/x86/ucall.h
index d3825dcc3cd9..0e4950041e3e 100644
--- a/tools/testing/selftests/kvm/include/x86/ucall.h
+++ b/tools/testing/selftests/kvm/include/x86/ucall.h
@@ -6,7 +6,7 @@
#define UCALL_EXIT_REASON KVM_EXIT_IO
-static inline void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
+static inline void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
}
diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
index 6e909a96b095..6cf1fa092752 100644
--- a/tools/testing/selftests/kvm/kvm_page_table_test.c
+++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
@@ -284,7 +284,7 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg)
virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages);
/* Cache the HVA pointer of the region */
- host_test_mem = addr_gpa2hva(vm, (vm_paddr_t)guest_test_phys_mem);
+ host_test_mem = addr_gpa2hva(vm, (gpa_t)guest_test_phys_mem);
/* Export shared structure test_args to guest */
sync_global_to_guest(vm, test_args);
diff --git a/tools/testing/selftests/kvm/lib/arm64/gic_v3.c b/tools/testing/selftests/kvm/lib/arm64/gic_v3.c
index 66d05506f78b..911650132446 100644
--- a/tools/testing/selftests/kvm/lib/arm64/gic_v3.c
+++ b/tools/testing/selftests/kvm/lib/arm64/gic_v3.c
@@ -402,8 +402,8 @@ const struct gic_common_ops gicv3_ops = {
.gic_irq_set_config = gicv3_irq_set_config,
};
-void gic_rdist_enable_lpis(vm_paddr_t cfg_table, size_t cfg_table_size,
- vm_paddr_t pend_table)
+void gic_rdist_enable_lpis(gpa_t cfg_table, size_t cfg_table_size,
+ gpa_t pend_table)
{
volatile void *rdist_base = gicr_base_cpu(guest_get_vcpuid());
diff --git a/tools/testing/selftests/kvm/lib/arm64/gic_v3_its.c b/tools/testing/selftests/kvm/lib/arm64/gic_v3_its.c
index 09f270545646..37ef53b8fa33 100644
--- a/tools/testing/selftests/kvm/lib/arm64/gic_v3_its.c
+++ b/tools/testing/selftests/kvm/lib/arm64/gic_v3_its.c
@@ -52,7 +52,7 @@ static unsigned long its_find_baser(unsigned int type)
return -1;
}
-static void its_install_table(unsigned int type, vm_paddr_t base, size_t size)
+static void its_install_table(unsigned int type, gpa_t base, size_t size)
{
unsigned long offset = its_find_baser(type);
u64 baser;
@@ -67,7 +67,7 @@ static void its_install_table(unsigned int type, vm_paddr_t base, size_t size)
its_write_u64(offset, baser);
}
-static void its_install_cmdq(vm_paddr_t base, size_t size)
+static void its_install_cmdq(gpa_t base, size_t size)
{
u64 cbaser;
@@ -80,9 +80,9 @@ static void its_install_cmdq(vm_paddr_t base, size_t size)
its_write_u64(GITS_CBASER, cbaser);
}
-void its_init(vm_paddr_t coll_tbl, size_t coll_tbl_sz,
- vm_paddr_t device_tbl, size_t device_tbl_sz,
- vm_paddr_t cmdq, size_t cmdq_size)
+void its_init(gpa_t coll_tbl, size_t coll_tbl_sz,
+ gpa_t device_tbl, size_t device_tbl_sz,
+ gpa_t cmdq, size_t cmdq_size)
{
u32 ctlr;
@@ -197,7 +197,7 @@ static void its_send_cmd(void *cmdq_base, struct its_cmd_block *cmd)
}
}
-void its_send_mapd_cmd(void *cmdq_base, u32 device_id, vm_paddr_t itt_base,
+void its_send_mapd_cmd(void *cmdq_base, u32 device_id, gpa_t itt_base,
size_t itt_size, bool valid)
{
struct its_cmd_block cmd = {};
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index 102b0b829420..e57b757b4256 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -223,7 +223,7 @@ uint64_t *virt_get_pte_hva(struct kvm_vm *vm, gva_t gva)
exit(EXIT_FAILURE);
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
+gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
uint64_t *ptep = virt_get_pte_hva(vm, gva);
diff --git a/tools/testing/selftests/kvm/lib/arm64/ucall.c b/tools/testing/selftests/kvm/lib/arm64/ucall.c
index a1a3b4dcdce1..62109407a1ff 100644
--- a/tools/testing/selftests/kvm/lib/arm64/ucall.c
+++ b/tools/testing/selftests/kvm/lib/arm64/ucall.c
@@ -8,7 +8,7 @@
gva_t *ucall_exit_mmio_addr;
-void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
+void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
gva_t mmio_gva = gva_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 2e6d275d4ba0..6dd2755fdb7b 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1446,7 +1446,7 @@ static gva_t ____gva_alloc(struct kvm_vm *vm, size_t sz,
uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
virt_pgd_alloc(vm);
- vm_paddr_t paddr = __vm_phy_pages_alloc(vm, pages,
+ gpa_t paddr = __vm_phy_pages_alloc(vm, pages,
KVM_UTIL_MIN_PFN * vm->page_size,
vm->memslots[type], protected);
@@ -1600,7 +1600,7 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
* address providing the memory to the vm physical address is returned.
* A TEST_ASSERT failure occurs if no region containing gpa exists.
*/
-void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
+void *addr_gpa2hva(struct kvm_vm *vm, gpa_t gpa)
{
struct userspace_mem_region *region;
@@ -1633,7 +1633,7 @@ void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa)
* VM physical address is returned. A TEST_ASSERT failure occurs if no
* region containing hva exists.
*/
-vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva)
+gpa_t addr_hva2gpa(struct kvm_vm *vm, void *hva)
{
struct rb_node *node;
@@ -1644,7 +1644,7 @@ vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva)
if (hva >= region->host_mem) {
if (hva <= (region->host_mem
+ region->region.memory_size - 1))
- return (vm_paddr_t)((uintptr_t)
+ return (gpa_t)((uintptr_t)
region->region.guest_phys_addr
+ (hva - (uintptr_t)region->host_mem));
@@ -1676,7 +1676,7 @@ vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva)
* memory without mapping said memory in the guest's address space. And, for
* userfaultfd-based demand paging, to do so without triggering userfaults.
*/
-void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa)
+void *addr_gpa2alias(struct kvm_vm *vm, gpa_t gpa)
{
struct userspace_mem_region *region;
uintptr_t offset;
@@ -2069,9 +2069,9 @@ const char *exit_reason_str(unsigned int exit_reason)
* and their base address is returned. A TEST_ASSERT failure occurs if
* not enough pages are available at or above paddr_min.
*/
-vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- vm_paddr_t paddr_min, uint32_t memslot,
- bool protected)
+gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+ gpa_t paddr_min, uint32_t memslot,
+ bool protected)
{
struct userspace_mem_region *region;
sparsebit_idx_t pg, base;
@@ -2115,13 +2115,12 @@ vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
return base * vm->page_size;
}
-vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
- uint32_t memslot)
+gpa_t vm_phy_page_alloc(struct kvm_vm *vm, gpa_t paddr_min, uint32_t memslot)
{
return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
}
-vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
+gpa_t vm_alloc_page_table(struct kvm_vm *vm)
{
return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR,
vm->memslots[MEM_REGION_PT]);
@@ -2303,7 +2302,7 @@ void __attribute((constructor)) kvm_selftest_init(void)
kvm_selftest_arch_init();
}
-bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr)
+bool vm_is_gpa_protected(struct kvm_vm *vm, gpa_t paddr)
{
sparsebit_idx_t pg = 0;
struct userspace_mem_region *region;
diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index 313277486a1d..d51680509839 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -207,7 +207,7 @@ struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
/* Add extra memory slots for testing */
for (i = 0; i < slots; i++) {
uint64_t region_pages = guest_num_pages / slots;
- vm_paddr_t region_start = args->gpa + region_pages * args->guest_page_size * i;
+ gpa_t region_start = args->gpa + region_pages * args->guest_page_size * i;
vm_userspace_mem_region_add(vm, backing_src, region_start,
MEMSTRESS_MEM_SLOT_INDEX + i,
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index 3ba5163c72b3..c4717aad1b3c 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -123,7 +123,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
PGTBL_PTE_PERM_MASK | PGTBL_PTE_VALID_MASK;
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
+gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
uint64_t *ptep;
int level = vm->pgtable_levels - 1;
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index a6438e3ea8e7..2baafbe608ac 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -12,7 +12,7 @@
void virt_arch_pgd_alloc(struct kvm_vm *vm)
{
- vm_paddr_t paddr;
+ gpa_t paddr;
TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
vm->page_size);
@@ -86,7 +86,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
entry[idx] = gpa;
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
+gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
int ri, idx;
uint64_t *entry;
diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index 3a72169b61ac..60297819d508 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -25,7 +25,7 @@ int ucall_nr_pages_required(uint64_t page_size)
*/
static struct ucall_header *ucall_pool;
-void ucall_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
+void ucall_init(struct kvm_vm *vm, gpa_t mmio_gpa)
{
struct ucall_header *hdr;
struct ucall *uc;
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 3ff61033790e..5a97cf829291 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -463,7 +463,7 @@ static void kvm_seg_set_kernel_data_64bit(struct kvm_segment *segp)
segp->present = true;
}
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
+gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
int level = PG_LEVEL_NONE;
uint64_t *pte = __vm_get_page_table_entry(vm, gva, &level);
diff --git a/tools/testing/selftests/kvm/lib/x86/sev.c b/tools/testing/selftests/kvm/lib/x86/sev.c
index e9535ee20b7f..5fcd26ac2def 100644
--- a/tools/testing/selftests/kvm/lib/x86/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86/sev.c
@@ -17,7 +17,7 @@
static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region)
{
const struct sparsebit *protected_phy_pages = region->protected_phy_pages;
- const vm_paddr_t gpa_base = region->region.guest_phys_addr;
+ const gpa_t gpa_base = region->region.guest_phys_addr;
const sparsebit_idx_t lowest_page_in_region = gpa_base >> vm->page_shift;
sparsebit_idx_t i, j;
diff --git a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
index e166e20544d3..c0ec7b284a3d 100644
--- a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
+++ b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
@@ -24,7 +24,7 @@ union sbi_pmu_ctr_info ctrinfo_arr[RISCV_MAX_PMU_COUNTERS];
/* Snapshot shared memory data */
#define PMU_SNAPSHOT_GPA_BASE BIT(30)
static void *snapshot_gva;
-static vm_paddr_t snapshot_gpa;
+static gpa_t snapshot_gpa;
static int vcpu_shared_irq_count;
static int counter_in_use;
@@ -241,7 +241,7 @@ static inline void verify_sbi_requirement_assert(void)
__GUEST_ASSERT(0, "SBI implementation version doesn't support PMU Snapshot");
}
-static void snapshot_set_shmem(vm_paddr_t gpa, unsigned long flags)
+static void snapshot_set_shmem(gpa_t gpa, unsigned long flags)
{
unsigned long lo = (unsigned long)gpa;
#if __riscv_xlen == 32
diff --git a/tools/testing/selftests/kvm/s390/ucontrol_test.c b/tools/testing/selftests/kvm/s390/ucontrol_test.c
index d265b34c54be..d9f0584978a9 100644
--- a/tools/testing/selftests/kvm/s390/ucontrol_test.c
+++ b/tools/testing/selftests/kvm/s390/ucontrol_test.c
@@ -111,7 +111,7 @@ FIXTURE(uc_kvm)
uintptr_t base_hva;
uintptr_t code_hva;
int kvm_run_size;
- vm_paddr_t pgd;
+ gpa_t pgd;
void *vm_mem;
int vcpu_fd;
int kvm_fd;
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index 24eaabe372bc..fd931243b6ce 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -210,7 +210,7 @@ static void steal_time_dump(struct kvm_vm *vm, uint32_t vcpu_idx)
/* SBI STA shmem must have 64-byte alignment */
#define STEAL_TIME_SIZE ((sizeof(struct sta_struct) + 63) & ~63)
-static vm_paddr_t st_gpa[NR_VCPUS];
+static gpa_t st_gpa[NR_VCPUS];
struct sta_struct {
uint32_t sequence;
@@ -220,7 +220,7 @@ struct sta_struct {
uint8_t pad[47];
} __packed;
-static void sta_set_shmem(vm_paddr_t gpa, unsigned long flags)
+static void sta_set_shmem(gpa_t gpa, unsigned long flags)
{
unsigned long lo = (unsigned long)gpa;
#if __riscv_xlen == 32
diff --git a/tools/testing/selftests/kvm/x86/hyperv_clock.c b/tools/testing/selftests/kvm/x86/hyperv_clock.c
index 046d33ec69fc..dd8ce5686736 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_clock.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_clock.c
@@ -98,7 +98,7 @@ static inline void check_tsc_msr_tsc_page(struct ms_hyperv_tsc_page *tsc_page)
GUEST_ASSERT(r2 >= t1 && r2 - t2 < 100000);
}
-static void guest_main(struct ms_hyperv_tsc_page *tsc_page, vm_paddr_t tsc_page_gpa)
+static void guest_main(struct ms_hyperv_tsc_page *tsc_page, gpa_t tsc_page_gpa)
{
u64 tsc_scale, tsc_offset;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
index 43e1c5149d97..f2d990ce4e2b 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
@@ -15,7 +15,7 @@
/* Any value is fine */
#define EXT_CAPABILITIES 0xbull
-static void guest_code(vm_paddr_t in_pg_gpa, vm_paddr_t out_pg_gpa,
+static void guest_code(gpa_t in_pg_gpa, gpa_t out_pg_gpa,
gva_t out_pg_gva)
{
uint64_t *output_gva;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
index aa795f9a5950..bc5828ce505e 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
@@ -62,7 +62,7 @@ struct hv_tlb_flush_ex {
*/
struct test_data {
gva_t hcall_gva;
- vm_paddr_t hcall_gpa;
+ gpa_t hcall_gpa;
gva_t test_pages;
gva_t test_pages_pte[NTEST_PAGES];
};
@@ -133,7 +133,7 @@ static void set_expected_val(void *addr, u64 val, int vcpu_id)
* Update PTEs swapping two test pages.
* TODO: use swap()/xchg() when these are provided.
*/
-static void swap_two_test_pages(vm_paddr_t pte_gva1, vm_paddr_t pte_gva2)
+static void swap_two_test_pages(gpa_t pte_gva1, gpa_t pte_gva2)
{
uint64_t tmp = *(uint64_t *)pte_gva1;
@@ -201,7 +201,7 @@ static void sender_guest_code(gva_t test_data)
struct test_data *data = (struct test_data *)test_data;
struct hv_tlb_flush *flush = (struct hv_tlb_flush *)data->hcall_gva;
struct hv_tlb_flush_ex *flush_ex = (struct hv_tlb_flush_ex *)data->hcall_gva;
- vm_paddr_t hcall_gpa = data->hcall_gpa;
+ gpa_t hcall_gpa = data->hcall_gpa;
int i, stage = 1;
wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
@@ -582,7 +582,7 @@ int main(int argc, char *argv[])
struct kvm_vcpu *vcpu[3];
pthread_t threads[2];
gva_t test_data_page, gva;
- vm_paddr_t gpa;
+ gpa_t gpa;
uint64_t *pte;
struct test_data *data;
struct ucall uc;
diff --git a/tools/testing/selftests/kvm/x86/kvm_clock_test.c b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
index b335ee2a8e97..ada4b2abf55d 100644
--- a/tools/testing/selftests/kvm/x86/kvm_clock_test.c
+++ b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
@@ -31,7 +31,7 @@ static struct test_case test_cases[] = {
#define GUEST_SYNC_CLOCK(__stage, __val) \
GUEST_SYNC_ARGS(__stage, __val, 0, 0, 0)
-static void guest_main(vm_paddr_t pvti_pa, struct pvclock_vcpu_time_info *pvti)
+static void guest_main(gpa_t pvti_pa, struct pvclock_vcpu_time_info *pvti)
{
int i;
@@ -136,7 +136,7 @@ int main(void)
{
struct kvm_vcpu *vcpu;
gva_t pvti_gva;
- vm_paddr_t pvti_gpa;
+ gpa_t pvti_gpa;
struct kvm_vm *vm;
int flags;
--
2.49.0.906.g1f30a19c02-goog
* [PATCH 03/10] KVM: selftests: Use gpa_t for GPAs in Hyper-V selftests
2025-05-01 18:32 [PATCH 00/10] KVM: selftests: Convert to kernel-style types David Matlack
2025-05-01 18:32 ` [PATCH 01/10] KVM: selftests: Use gva_t instead of vm_vaddr_t David Matlack
2025-05-01 18:32 ` [PATCH 02/10] KVM: selftests: Use gpa_t instead of vm_paddr_t David Matlack
@ 2025-05-01 18:32 ` David Matlack
2025-05-01 18:32 ` [PATCH 04/10] KVM: selftests: Use u64 instead of uint64_t David Matlack
` (8 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: David Matlack @ 2025-05-01 18:32 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Anup Patel, Atish Patra, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, David Hildenbrand,
Sean Christopherson, David Matlack, Andrew Jones, Isaku Yamahata,
Reinette Chatre, Eric Auger, James Houghton, Colin Ian King, kvm,
linux-arm-kernel, kvmarm, kvm-riscv, linux-riscv
Fix various Hyper-V selftests to use gpa_t, rather than gva_t, for
variables that contain guest physical addresses.
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
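Note that the compiler cannot flag these misuses on its own: gva_t and
gpa_t are both plain typedefs of uint64_t, so a GPA flowing through a
gva_t parameter compiles without warning and has to be found by
inspection. A hypothetical heuristic for hunting down any remaining
suspects is to grep for gva_t declarations whose names mention a GPA:

	# Heuristic only: list gva_t variables/parameters named like GPAs
	git grep -nE 'gva_t [a-z_]*gpa' -- tools/testing/selftests/kvm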
tools/testing/selftests/kvm/x86/hyperv_evmcs.c | 2 +-
tools/testing/selftests/kvm/x86/hyperv_features.c | 2 +-
tools/testing/selftests/kvm/x86/hyperv_ipi.c | 6 +++---
tools/testing/selftests/kvm/x86/hyperv_svm_test.c | 2 +-
4 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
index 58f27dcc3d5f..9fa91b0f168a 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
@@ -76,7 +76,7 @@ void l2_guest_code(void)
}
void guest_code(struct vmx_pages *vmx_pages, struct hyperv_test_pages *hv_pages,
- gva_t hv_hcall_page_gpa)
+ gpa_t hv_hcall_page_gpa)
{
#define L2_GUEST_STACK_SIZE 64
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
diff --git a/tools/testing/selftests/kvm/x86/hyperv_features.c b/tools/testing/selftests/kvm/x86/hyperv_features.c
index 5cf19d96120d..b3847b5ea314 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_features.c
@@ -82,7 +82,7 @@ static void guest_msr(struct msr_data *msr)
GUEST_DONE();
}
-static void guest_hcall(gva_t pgs_gpa, struct hcall_data *hcall)
+static void guest_hcall(gpa_t pgs_gpa, struct hcall_data *hcall)
{
u64 res, input, output;
uint8_t vector;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_ipi.c b/tools/testing/selftests/kvm/x86/hyperv_ipi.c
index 865fdd8e4284..85c2948e5a79 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_ipi.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_ipi.c
@@ -45,13 +45,13 @@ struct hv_send_ipi_ex {
struct hv_vpset vp_set;
};
-static inline void hv_init(gva_t pgs_gpa)
+static inline void hv_init(gpa_t pgs_gpa)
{
wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
wrmsr(HV_X64_MSR_HYPERCALL, pgs_gpa);
}
-static void receiver_code(void *hcall_page, gva_t pgs_gpa)
+static void receiver_code(void *hcall_page, gpa_t pgs_gpa)
{
u32 vcpu_id;
@@ -85,7 +85,7 @@ static inline void nop_loop(void)
asm volatile("nop");
}
-static void sender_guest_code(void *hcall_page, gva_t pgs_gpa)
+static void sender_guest_code(void *hcall_page, gpa_t pgs_gpa)
{
struct hv_send_ipi *ipi = (struct hv_send_ipi *)hcall_page;
struct hv_send_ipi_ex *ipi_ex = (struct hv_send_ipi_ex *)hcall_page;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
index 436c16460fe0..b7f35424c838 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
@@ -67,7 +67,7 @@ void l2_guest_code(void)
static void __attribute__((__flatten__)) guest_code(struct svm_test_data *svm,
struct hyperv_test_pages *hv_pages,
- gva_t pgs_gpa)
+ gpa_t pgs_gpa)
{
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
struct vmcb *vmcb = svm->vmcb;
--
2.49.0.906.g1f30a19c02-goog
* [PATCH 04/10] KVM: selftests: Use u64 instead of uint64_t
2025-05-01 18:32 [PATCH 00/10] KVM: selftests: Convert to kernel-style types David Matlack
` (2 preceding siblings ...)
2025-05-01 18:32 ` [PATCH 03/10] KVM: selftests: Use gpa_t for GPAs in Hyper-V selftests David Matlack
@ 2025-05-01 18:32 ` David Matlack
2025-05-01 18:32 ` [PATCH 05/10] KVM: selftests: Use s64 instead of int64_t David Matlack
` (7 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: David Matlack @ 2025-05-01 18:32 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Anup Patel, Atish Patra, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, David Hildenbrand,
Sean Christopherson, David Matlack, Andrew Jones, Isaku Yamahata,
Reinette Chatre, Eric Auger, James Houghton, Colin Ian King, kvm,
linux-arm-kernel, kvmarm, kvm-riscv, linux-riscv
Use u64 instead of uint64_t to make the KVM selftests code more concise
and more similar to the kernel (since selftests are primarily developed
by kernel developers).
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/uint64_t/u64/g'
followed by manual whitespace fixups to make checkpatch.pl happy.
Also include <linux/types.h> in include/x86/pmu.h to avoid compilation
failure.
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
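One subtlety behind the <linux/types.h> note above: uint64_t comes from
<stdint.h>, which most selftest sources pull in transitively, while u64
is the kernel-style alias the tools build gets from <linux/types.h>. A
header that compiled before only because its users spelled the type
uint64_t now needs the kernel typedefs explicitly. A hypothetical
minimal header showing the failure mode (the struct contents are made
up, not the real pmu.h):

#ifndef PMU_SKETCH_H
#define PMU_SKETCH_H

#include <linux/types.h>	/* now required: provides u8/u16/u32/u64 */

/* Made-up fields; the point is only that u64 appears in a header
 * that previously needed nothing beyond <stdint.h>. */
struct pmu_counter_sketch {
	u64 config;	/* was uint64_t before the sed conversion */
	u64 count;
};

#endif /* PMU_SKETCH_H */

Without the new include, the first translation unit to include such a
header on its own would fail with an unknown-type error for u64.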
.../selftests/kvm/access_tracking_perf_test.c | 40 ++---
.../selftests/kvm/arm64/aarch32_id_regs.c | 14 +-
.../testing/selftests/kvm/arm64/arch_timer.c | 2 +-
.../kvm/arm64/arch_timer_edge_cases.c | 115 +++++++-------
.../selftests/kvm/arm64/debug-exceptions.c | 49 +++---
.../testing/selftests/kvm/arm64/hypercalls.c | 18 +--
.../testing/selftests/kvm/arm64/no-vgic-v3.c | 6 +-
.../selftests/kvm/arm64/page_fault_test.c | 74 ++++-----
tools/testing/selftests/kvm/arm64/psci_test.c | 26 ++--
.../testing/selftests/kvm/arm64/set_id_regs.c | 44 +++---
tools/testing/selftests/kvm/arm64/vgic_init.c | 26 ++--
tools/testing/selftests/kvm/arm64/vgic_irq.c | 36 ++---
.../selftests/kvm/arm64/vpmu_counter_access.c | 60 +++----
.../testing/selftests/kvm/coalesced_io_test.c | 14 +-
.../selftests/kvm/demand_paging_test.c | 10 +-
.../selftests/kvm/dirty_log_perf_test.c | 12 +-
tools/testing/selftests/kvm/dirty_log_test.c | 44 +++---
.../testing/selftests/kvm/guest_memfd_test.c | 2 +-
.../testing/selftests/kvm/guest_print_test.c | 14 +-
.../selftests/kvm/include/arm64/arch_timer.h | 16 +-
.../selftests/kvm/include/arm64/delay.h | 4 +-
.../testing/selftests/kvm/include/arm64/gic.h | 2 +-
.../selftests/kvm/include/arm64/processor.h | 14 +-
.../selftests/kvm/include/arm64/vgic.h | 6 +-
.../testing/selftests/kvm/include/kvm_util.h | 146 +++++++++---------
.../selftests/kvm/include/kvm_util_types.h | 4 +-
.../testing/selftests/kvm/include/memstress.h | 20 +--
.../selftests/kvm/include/riscv/arch_timer.h | 20 +--
.../selftests/kvm/include/riscv/processor.h | 9 +-
.../kvm/include/s390/diag318_test_handler.h | 2 +-
.../selftests/kvm/include/s390/facility.h | 4 +-
.../testing/selftests/kvm/include/sparsebit.h | 6 +-
.../testing/selftests/kvm/include/test_util.h | 14 +-
.../selftests/kvm/include/timer_test.h | 6 +-
.../selftests/kvm/include/ucall_common.h | 14 +-
.../selftests/kvm/include/userfaultfd_util.h | 6 +-
.../testing/selftests/kvm/include/x86/apic.h | 8 +-
.../testing/selftests/kvm/include/x86/evmcs.h | 18 +--
.../selftests/kvm/include/x86/hyperv.h | 14 +-
.../selftests/kvm/include/x86/kvm_util_arch.h | 6 +-
tools/testing/selftests/kvm/include/x86/pmu.h | 6 +-
.../selftests/kvm/include/x86/processor.h | 116 +++++++-------
tools/testing/selftests/kvm/include/x86/sev.h | 4 +-
.../selftests/kvm/include/x86/svm_util.h | 8 +-
tools/testing/selftests/kvm/include/x86/vmx.h | 56 +++----
.../selftests/kvm/kvm_page_table_test.c | 48 +++---
tools/testing/selftests/kvm/lib/arm64/gic.c | 4 +-
.../selftests/kvm/lib/arm64/gic_private.h | 4 +-
.../testing/selftests/kvm/lib/arm64/gic_v3.c | 20 +--
.../selftests/kvm/lib/arm64/processor.c | 84 +++++-----
tools/testing/selftests/kvm/lib/arm64/ucall.c | 4 +-
tools/testing/selftests/kvm/lib/arm64/vgic.c | 18 +--
tools/testing/selftests/kvm/lib/elf.c | 2 +-
.../testing/selftests/kvm/lib/guest_sprintf.c | 10 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 92 +++++------
tools/testing/selftests/kvm/lib/memstress.c | 32 ++--
.../selftests/kvm/lib/riscv/processor.c | 30 ++--
.../kvm/lib/s390/diag318_test_handler.c | 12 +-
.../testing/selftests/kvm/lib/s390/facility.c | 2 +-
.../selftests/kvm/lib/s390/processor.c | 22 +--
tools/testing/selftests/kvm/lib/sparsebit.c | 12 +-
tools/testing/selftests/kvm/lib/test_util.c | 2 +-
.../testing/selftests/kvm/lib/ucall_common.c | 16 +-
.../selftests/kvm/lib/userfaultfd_util.c | 12 +-
tools/testing/selftests/kvm/lib/x86/apic.c | 2 +-
tools/testing/selftests/kvm/lib/x86/hyperv.c | 4 +-
.../testing/selftests/kvm/lib/x86/memstress.c | 8 +-
tools/testing/selftests/kvm/lib/x86/pmu.c | 4 +-
.../testing/selftests/kvm/lib/x86/processor.c | 100 ++++++------
tools/testing/selftests/kvm/lib/x86/sev.c | 4 +-
tools/testing/selftests/kvm/lib/x86/svm.c | 6 +-
tools/testing/selftests/kvm/lib/x86/vmx.c | 70 ++++-----
.../kvm/memslot_modification_stress_test.c | 10 +-
.../testing/selftests/kvm/memslot_perf_test.c | 114 +++++++-------
tools/testing/selftests/kvm/mmu_stress_test.c | 26 ++--
.../selftests/kvm/pre_fault_memory_test.c | 12 +-
.../testing/selftests/kvm/riscv/arch_timer.c | 2 +-
.../testing/selftests/kvm/riscv/ebreak_test.c | 6 +-
.../selftests/kvm/riscv/get-reg-list.c | 2 +-
.../selftests/kvm/riscv/sbi_pmu_test.c | 2 +-
tools/testing/selftests/kvm/s390/debug_test.c | 8 +-
tools/testing/selftests/kvm/s390/memop.c | 32 ++--
tools/testing/selftests/kvm/s390/resets.c | 4 +-
tools/testing/selftests/kvm/s390/tprot.c | 2 +-
.../selftests/kvm/set_memory_region_test.c | 30 ++--
tools/testing/selftests/kvm/steal_time.c | 16 +-
.../kvm/system_counter_offset_test.c | 12 +-
tools/testing/selftests/kvm/x86/amx_test.c | 2 +-
.../selftests/kvm/x86/apic_bus_clock_test.c | 12 +-
tools/testing/selftests/kvm/x86/debug_regs.c | 2 +-
.../kvm/x86/dirty_log_page_splitting_test.c | 16 +-
.../selftests/kvm/x86/feature_msrs_test.c | 4 +-
.../selftests/kvm/x86/fix_hypercall_test.c | 10 +-
.../selftests/kvm/x86/flds_emulation.h | 4 +-
.../testing/selftests/kvm/x86/hwcr_msr_test.c | 10 +-
.../kvm/x86/hyperv_extended_hypercalls.c | 8 +-
.../selftests/kvm/x86/hyperv_features.c | 6 +-
tools/testing/selftests/kvm/x86/hyperv_ipi.c | 2 +-
.../selftests/kvm/x86/hyperv_tlb_flush.c | 8 +-
.../selftests/kvm/x86/kvm_clock_test.c | 4 +-
tools/testing/selftests/kvm/x86/kvm_pv_test.c | 6 +-
.../selftests/kvm/x86/monitor_mwait_test.c | 2 +-
.../selftests/kvm/x86/nx_huge_pages_test.c | 18 +--
.../selftests/kvm/x86/platform_info_test.c | 4 +-
.../selftests/kvm/x86/pmu_counters_test.c | 32 ++--
.../selftests/kvm/x86/pmu_event_filter_test.c | 78 +++++-----
.../kvm/x86/private_mem_conversions_test.c | 50 +++---
.../kvm/x86/private_mem_kvm_exits_test.c | 8 +-
.../selftests/kvm/x86/set_sregs_test.c | 6 +-
.../selftests/kvm/x86/sev_init2_tests.c | 4 +-
.../selftests/kvm/x86/sev_smoke_test.c | 2 +-
.../x86/smaller_maxphyaddr_emulation_test.c | 10 +-
tools/testing/selftests/kvm/x86/smm_test.c | 4 +-
tools/testing/selftests/kvm/x86/state_test.c | 8 +-
.../kvm/x86/svm_nested_soft_inject_test.c | 4 +-
.../testing/selftests/kvm/x86/tsc_msrs_test.c | 2 +-
.../selftests/kvm/x86/tsc_scaling_sync.c | 4 +-
.../selftests/kvm/x86/ucna_injection_test.c | 43 +++---
.../kvm/x86/userspace_msr_exit_test.c | 24 +--
.../selftests/kvm/x86/vmx_dirty_log_test.c | 2 +-
.../testing/selftests/kvm/x86/vmx_msrs_test.c | 18 +--
.../kvm/x86/vmx_nested_tsc_scaling_test.c | 20 +--
.../selftests/kvm/x86/vmx_pmu_caps_test.c | 10 +-
.../selftests/kvm/x86/vmx_tsc_adjust_test.c | 2 +-
.../selftests/kvm/x86/xapic_ipi_test.c | 38 ++---
.../selftests/kvm/x86/xapic_state_test.c | 16 +-
.../selftests/kvm/x86/xcr0_cpuid_test.c | 8 +-
.../selftests/kvm/x86/xen_shinfo_test.c | 12 +-
.../testing/selftests/kvm/x86/xss_msr_test.c | 2 +-
129 files changed, 1276 insertions(+), 1286 deletions(-)
diff --git a/tools/testing/selftests/kvm/access_tracking_perf_test.c b/tools/testing/selftests/kvm/access_tracking_perf_test.c
index 447e619cf856..71a52c4e3559 100644
--- a/tools/testing/selftests/kvm/access_tracking_perf_test.c
+++ b/tools/testing/selftests/kvm/access_tracking_perf_test.c
@@ -70,15 +70,15 @@ struct test_params {
enum vm_mem_backing_src_type backing_src;
/* The amount of memory to allocate for each vCPU. */
- uint64_t vcpu_memory_bytes;
+ u64 vcpu_memory_bytes;
/* The number of vCPUs to create in the VM. */
int nr_vcpus;
};
-static uint64_t pread_uint64(int fd, const char *filename, uint64_t index)
+static u64 pread_uint64(int fd, const char *filename, u64 index)
{
- uint64_t value;
+ u64 value;
off_t offset = index * sizeof(value);
TEST_ASSERT(pread(fd, &value, sizeof(value), offset) == sizeof(value),
@@ -92,11 +92,11 @@ static uint64_t pread_uint64(int fd, const char *filename, uint64_t index)
#define PAGEMAP_PRESENT (1ULL << 63)
#define PAGEMAP_PFN_MASK ((1ULL << 55) - 1)
-static uint64_t lookup_pfn(int pagemap_fd, struct kvm_vm *vm, uint64_t gva)
+static u64 lookup_pfn(int pagemap_fd, struct kvm_vm *vm, u64 gva)
{
- uint64_t hva = (uint64_t) addr_gva2hva(vm, gva);
- uint64_t entry;
- uint64_t pfn;
+ u64 hva = (u64)addr_gva2hva(vm, gva);
+ u64 entry;
+ u64 pfn;
entry = pread_uint64(pagemap_fd, "pagemap", hva / getpagesize());
if (!(entry & PAGEMAP_PRESENT))
@@ -108,16 +108,16 @@ static uint64_t lookup_pfn(int pagemap_fd, struct kvm_vm *vm, uint64_t gva)
return pfn;
}
-static bool is_page_idle(int page_idle_fd, uint64_t pfn)
+static bool is_page_idle(int page_idle_fd, u64 pfn)
{
- uint64_t bits = pread_uint64(page_idle_fd, "page_idle", pfn / 64);
+ u64 bits = pread_uint64(page_idle_fd, "page_idle", pfn / 64);
return !!((bits >> (pfn % 64)) & 1);
}
-static void mark_page_idle(int page_idle_fd, uint64_t pfn)
+static void mark_page_idle(int page_idle_fd, u64 pfn)
{
- uint64_t bits = 1ULL << (pfn % 64);
+ u64 bits = 1ULL << (pfn % 64);
TEST_ASSERT(pwrite(page_idle_fd, &bits, 8, 8 * (pfn / 64)) == 8,
"Set page_idle bits for PFN 0x%" PRIx64, pfn);
@@ -127,11 +127,11 @@ static void mark_vcpu_memory_idle(struct kvm_vm *vm,
struct memstress_vcpu_args *vcpu_args)
{
int vcpu_idx = vcpu_args->vcpu_idx;
- uint64_t base_gva = vcpu_args->gva;
- uint64_t pages = vcpu_args->pages;
- uint64_t page;
- uint64_t still_idle = 0;
- uint64_t no_pfn = 0;
+ u64 base_gva = vcpu_args->gva;
+ u64 pages = vcpu_args->pages;
+ u64 page;
+ u64 still_idle = 0;
+ u64 no_pfn = 0;
int page_idle_fd;
int pagemap_fd;
@@ -146,8 +146,8 @@ static void mark_vcpu_memory_idle(struct kvm_vm *vm,
TEST_ASSERT(pagemap_fd > 0, "Failed to open pagemap.");
for (page = 0; page < pages; page++) {
- uint64_t gva = base_gva + page * memstress_args.guest_page_size;
- uint64_t pfn = lookup_pfn(pagemap_fd, vm, gva);
+ u64 gva = base_gva + page * memstress_args.guest_page_size;
+ u64 pfn = lookup_pfn(pagemap_fd, vm, gva);
if (!pfn) {
no_pfn++;
@@ -198,10 +198,10 @@ static void mark_vcpu_memory_idle(struct kvm_vm *vm,
close(pagemap_fd);
}
-static void assert_ucall(struct kvm_vcpu *vcpu, uint64_t expected_ucall)
+static void assert_ucall(struct kvm_vcpu *vcpu, u64 expected_ucall)
{
struct ucall uc;
- uint64_t actual_ucall = get_ucall(vcpu, &uc);
+ u64 actual_ucall = get_ucall(vcpu, &uc);
TEST_ASSERT(expected_ucall == actual_ucall,
"Guest exited unexpectedly (expected ucall %" PRIu64
diff --git a/tools/testing/selftests/kvm/arm64/aarch32_id_regs.c b/tools/testing/selftests/kvm/arm64/aarch32_id_regs.c
index cef8f7323ceb..ea87ae8f7b33 100644
--- a/tools/testing/selftests/kvm/arm64/aarch32_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/aarch32_id_regs.c
@@ -66,7 +66,7 @@ static void test_guest_raz(struct kvm_vcpu *vcpu)
}
}
-static uint64_t raz_wi_reg_ids[] = {
+static u64 raz_wi_reg_ids[] = {
KVM_ARM64_SYS_REG(SYS_ID_PFR0_EL1),
KVM_ARM64_SYS_REG(SYS_ID_PFR1_EL1),
KVM_ARM64_SYS_REG(SYS_ID_DFR0_EL1),
@@ -94,8 +94,8 @@ static void test_user_raz_wi(struct kvm_vcpu *vcpu)
int i;
for (i = 0; i < ARRAY_SIZE(raz_wi_reg_ids); i++) {
- uint64_t reg_id = raz_wi_reg_ids[i];
- uint64_t val;
+ u64 reg_id = raz_wi_reg_ids[i];
+ u64 val;
val = vcpu_get_reg(vcpu, reg_id);
TEST_ASSERT_EQ(val, 0);
@@ -111,7 +111,7 @@ static void test_user_raz_wi(struct kvm_vcpu *vcpu)
}
}
-static uint64_t raz_invariant_reg_ids[] = {
+static u64 raz_invariant_reg_ids[] = {
KVM_ARM64_SYS_REG(SYS_ID_AFR0_EL1),
KVM_ARM64_SYS_REG(sys_reg(3, 0, 0, 3, 3)),
KVM_ARM64_SYS_REG(SYS_ID_DFR1_EL1),
@@ -123,8 +123,8 @@ static void test_user_raz_invariant(struct kvm_vcpu *vcpu)
int i, r;
for (i = 0; i < ARRAY_SIZE(raz_invariant_reg_ids); i++) {
- uint64_t reg_id = raz_invariant_reg_ids[i];
- uint64_t val;
+ u64 reg_id = raz_invariant_reg_ids[i];
+ u64 val;
val = vcpu_get_reg(vcpu, reg_id);
TEST_ASSERT_EQ(val, 0);
@@ -142,7 +142,7 @@ static void test_user_raz_invariant(struct kvm_vcpu *vcpu)
static bool vcpu_aarch64_only(struct kvm_vcpu *vcpu)
{
- uint64_t val, el0;
+ u64 val, el0;
val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1));
diff --git a/tools/testing/selftests/kvm/arm64/arch_timer.c b/tools/testing/selftests/kvm/arm64/arch_timer.c
index eeba1cc87ff8..68757b55ea98 100644
--- a/tools/testing/selftests/kvm/arm64/arch_timer.c
+++ b/tools/testing/selftests/kvm/arm64/arch_timer.c
@@ -56,7 +56,7 @@ static void guest_validate_irq(unsigned int intid,
struct test_vcpu_shared_data *shared_data)
{
enum guest_stage stage = shared_data->guest_stage;
- uint64_t xcnt = 0, xcnt_diff_us, cval = 0;
+ u64 xcnt = 0, xcnt_diff_us, cval = 0;
unsigned long xctl = 0;
unsigned int timer_irq = 0;
unsigned int accessor;
diff --git a/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c b/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
index a36a7e2db434..dffdb303a14e 100644
--- a/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
+++ b/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
@@ -22,7 +22,7 @@
#include "gic.h"
#include "vgic.h"
-static const uint64_t CVAL_MAX = ~0ULL;
+static const u64 CVAL_MAX = ~0ULL;
/* tval is a signed 32-bit int. */
static const int32_t TVAL_MAX = INT32_MAX;
static const int32_t TVAL_MIN = INT32_MIN;
@@ -31,7 +31,7 @@ static const int32_t TVAL_MIN = INT32_MIN;
static const uint32_t TIMEOUT_NO_IRQ_US = 50000;
/* A nice counter value to use as the starting one for most tests. */
-static const uint64_t DEF_CNT = (CVAL_MAX / 2);
+static const u64 DEF_CNT = (CVAL_MAX / 2);
/* Number of runs. */
static const uint32_t NR_TEST_ITERS_DEF = 5;
@@ -52,9 +52,9 @@ struct test_args {
/* Virtual or physical timer and counter tests. */
enum arch_timer timer;
/* Delay used for most timer tests. */
- uint64_t wait_ms;
+ u64 wait_ms;
/* Delay used in the test_long_timer_delays test. */
- uint64_t long_wait_ms;
+ u64 long_wait_ms;
/* Number of iterations. */
int iterations;
/* Whether to test the physical timer. */
@@ -81,12 +81,12 @@ enum sync_cmd {
NO_USERSPACE_CMD,
};
-typedef void (*sleep_method_t)(enum arch_timer timer, uint64_t usec);
+typedef void (*sleep_method_t)(enum arch_timer timer, u64 usec);
-static void sleep_poll(enum arch_timer timer, uint64_t usec);
-static void sleep_sched_poll(enum arch_timer timer, uint64_t usec);
-static void sleep_in_userspace(enum arch_timer timer, uint64_t usec);
-static void sleep_migrate(enum arch_timer timer, uint64_t usec);
+static void sleep_poll(enum arch_timer timer, u64 usec);
+static void sleep_sched_poll(enum arch_timer timer, u64 usec);
+static void sleep_in_userspace(enum arch_timer timer, u64 usec);
+static void sleep_migrate(enum arch_timer timer, u64 usec);
sleep_method_t sleep_method[] = {
sleep_poll,
@@ -121,7 +121,7 @@ static void assert_irqs_handled(uint32_t n)
__GUEST_ASSERT(h == n, "Handled %d IRQS but expected %d", h, n);
}
-static void userspace_cmd(uint64_t cmd)
+static void userspace_cmd(u64 cmd)
{
GUEST_SYNC_ARGS(cmd, 0, 0, 0, 0);
}
@@ -131,12 +131,12 @@ static void userspace_migrate_vcpu(void)
userspace_cmd(USERSPACE_MIGRATE_SELF);
}
-static void userspace_sleep(uint64_t usecs)
+static void userspace_sleep(u64 usecs)
{
GUEST_SYNC_ARGS(USERSPACE_USLEEP, usecs, 0, 0, 0);
}
-static void set_counter(enum arch_timer timer, uint64_t counter)
+static void set_counter(enum arch_timer timer, u64 counter)
{
GUEST_SYNC_ARGS(SET_COUNTER_VALUE, counter, timer, 0, 0);
}
@@ -145,7 +145,7 @@ static void guest_irq_handler(struct ex_regs *regs)
{
unsigned int intid = gic_get_and_ack_irq();
enum arch_timer timer;
- uint64_t cnt, cval;
+ u64 cnt, cval;
uint32_t ctl;
bool timer_condition, istatus;
@@ -177,7 +177,7 @@ static void guest_irq_handler(struct ex_regs *regs)
gic_set_eoi(intid);
}
-static void set_cval_irq(enum arch_timer timer, uint64_t cval_cycles,
+static void set_cval_irq(enum arch_timer timer, u64 cval_cycles,
uint32_t ctl)
{
atomic_set(&shared_data.handled, 0);
@@ -186,7 +186,7 @@ static void set_cval_irq(enum arch_timer timer, uint64_t cval_cycles,
timer_set_ctl(timer, ctl);
}
-static void set_tval_irq(enum arch_timer timer, uint64_t tval_cycles,
+static void set_tval_irq(enum arch_timer timer, u64 tval_cycles,
uint32_t ctl)
{
atomic_set(&shared_data.handled, 0);
@@ -195,7 +195,7 @@ static void set_tval_irq(enum arch_timer timer, uint64_t tval_cycles,
timer_set_tval(timer, tval_cycles);
}
-static void set_xval_irq(enum arch_timer timer, uint64_t xval, uint32_t ctl,
+static void set_xval_irq(enum arch_timer timer, u64 xval, uint32_t ctl,
enum timer_view tv)
{
switch (tv) {
@@ -274,13 +274,13 @@ static void wait_migrate_poll_for_irq(void)
* Sleep for usec microseconds by polling in the guest or in
* userspace (e.g. userspace_cmd=USERSPACE_SCHEDULE).
*/
-static void guest_poll(enum arch_timer test_timer, uint64_t usec,
+static void guest_poll(enum arch_timer test_timer, u64 usec,
enum sync_cmd usp_cmd)
{
- uint64_t cycles = usec_to_cycles(usec);
+ u64 cycles = usec_to_cycles(usec);
/* Whichever timer we are testing with, sleep with the other. */
enum arch_timer sleep_timer = 1 - test_timer;
- uint64_t start = timer_get_cntct(sleep_timer);
+ u64 start = timer_get_cntct(sleep_timer);
while ((timer_get_cntct(sleep_timer) - start) < cycles) {
if (usp_cmd == NO_USERSPACE_CMD)
@@ -290,22 +290,22 @@ static void guest_poll(enum arch_timer test_timer, uint64_t usec,
}
}
-static void sleep_poll(enum arch_timer timer, uint64_t usec)
+static void sleep_poll(enum arch_timer timer, u64 usec)
{
guest_poll(timer, usec, NO_USERSPACE_CMD);
}
-static void sleep_sched_poll(enum arch_timer timer, uint64_t usec)
+static void sleep_sched_poll(enum arch_timer timer, u64 usec)
{
guest_poll(timer, usec, USERSPACE_SCHED_YIELD);
}
-static void sleep_migrate(enum arch_timer timer, uint64_t usec)
+static void sleep_migrate(enum arch_timer timer, u64 usec)
{
guest_poll(timer, usec, USERSPACE_MIGRATE_SELF);
}
-static void sleep_in_userspace(enum arch_timer timer, uint64_t usec)
+static void sleep_in_userspace(enum arch_timer timer, u64 usec)
{
userspace_sleep(usec);
}
@@ -314,15 +314,15 @@ static void sleep_in_userspace(enum arch_timer timer, uint64_t usec)
* Reset the timer state to some nice values like the counter not being close
* to the edge, and the control register masked and disabled.
*/
-static void reset_timer_state(enum arch_timer timer, uint64_t cnt)
+static void reset_timer_state(enum arch_timer timer, u64 cnt)
{
set_counter(timer, cnt);
timer_set_ctl(timer, CTL_IMASK);
}
-static void test_timer_xval(enum arch_timer timer, uint64_t xval,
+static void test_timer_xval(enum arch_timer timer, u64 xval,
enum timer_view tv, irq_wait_method_t wm, bool reset_state,
- uint64_t reset_cnt)
+ u64 reset_cnt)
{
local_irq_disable();
@@ -347,23 +347,23 @@ static void test_timer_xval(enum arch_timer timer, uint64_t xval,
* the "runner", like: tools/testing/selftests/kselftest/runner.sh.
*/
-static void test_timer_cval(enum arch_timer timer, uint64_t cval,
+static void test_timer_cval(enum arch_timer timer, u64 cval,
irq_wait_method_t wm, bool reset_state,
- uint64_t reset_cnt)
+ u64 reset_cnt)
{
test_timer_xval(timer, cval, TIMER_CVAL, wm, reset_state, reset_cnt);
}
static void test_timer_tval(enum arch_timer timer, int32_t tval,
irq_wait_method_t wm, bool reset_state,
- uint64_t reset_cnt)
+ u64 reset_cnt)
{
- test_timer_xval(timer, (uint64_t) tval, TIMER_TVAL, wm, reset_state,
+ test_timer_xval(timer, (u64)tval, TIMER_TVAL, wm, reset_state,
reset_cnt);
}
-static void test_xval_check_no_irq(enum arch_timer timer, uint64_t xval,
- uint64_t usec, enum timer_view timer_view,
+static void test_xval_check_no_irq(enum arch_timer timer, u64 xval,
+ u64 usec, enum timer_view timer_view,
sleep_method_t guest_sleep)
{
local_irq_disable();
@@ -378,17 +378,17 @@ static void test_xval_check_no_irq(enum arch_timer timer, uint64_t xval,
assert_irqs_handled(0);
}
-static void test_cval_no_irq(enum arch_timer timer, uint64_t cval,
- uint64_t usec, sleep_method_t wm)
+static void test_cval_no_irq(enum arch_timer timer, u64 cval,
+ u64 usec, sleep_method_t wm)
{
test_xval_check_no_irq(timer, cval, usec, TIMER_CVAL, wm);
}
-static void test_tval_no_irq(enum arch_timer timer, int32_t tval, uint64_t usec,
+static void test_tval_no_irq(enum arch_timer timer, int32_t tval, u64 usec,
sleep_method_t wm)
{
/* tval will be cast to an int32_t in test_xval_check_no_irq */
- test_xval_check_no_irq(timer, (uint64_t) tval, usec, TIMER_TVAL, wm);
+ test_xval_check_no_irq(timer, (u64)tval, usec, TIMER_TVAL, wm);
}
/* Test masking/unmasking a timer using the timer mask (not the IRQ mask). */
@@ -487,7 +487,7 @@ static void test_reprogramming_timer(enum arch_timer timer, irq_wait_method_t wm
static void test_reprogram_timers(enum arch_timer timer)
{
int i;
- uint64_t base_wait = test_args.wait_ms;
+ u64 base_wait = test_args.wait_ms;
for (i = 0; i < ARRAY_SIZE(irq_wait_method); i++) {
/*
@@ -504,7 +504,7 @@ static void test_reprogram_timers(enum arch_timer timer)
static void test_basic_functionality(enum arch_timer timer)
{
int32_t tval = (int32_t) msec_to_cycles(test_args.wait_ms);
- uint64_t cval = DEF_CNT + msec_to_cycles(test_args.wait_ms);
+ u64 cval = DEF_CNT + msec_to_cycles(test_args.wait_ms);
int i;
for (i = 0; i < ARRAY_SIZE(irq_wait_method); i++) {
@@ -592,7 +592,7 @@ static void test_set_cnt_after_tval_max(enum arch_timer timer, irq_wait_method_t
reset_timer_state(timer, DEF_CNT);
set_cval_irq(timer,
- (uint64_t) TVAL_MAX +
+ (u64)TVAL_MAX +
msec_to_cycles(test_args.wait_ms) / 2, CTL_ENABLE);
set_counter(timer, TVAL_MAX);
@@ -607,7 +607,7 @@ static void test_set_cnt_after_tval_max(enum arch_timer timer, irq_wait_method_t
/* Test timers set for: cval = now + TVAL_MAX + wait_ms / 2 */
static void test_timers_above_tval_max(enum arch_timer timer)
{
- uint64_t cval;
+ u64 cval;
int i;
/*
@@ -637,8 +637,8 @@ static void test_timers_above_tval_max(enum arch_timer timer)
* sets the counter to cnt_1, the [c|t]val, the counter to cnt_2, and
* then waits for an IRQ.
*/
-static void test_set_cnt_after_xval(enum arch_timer timer, uint64_t cnt_1,
- uint64_t xval, uint64_t cnt_2,
+static void test_set_cnt_after_xval(enum arch_timer timer, u64 cnt_1,
+ u64 xval, u64 cnt_2,
irq_wait_method_t wm, enum timer_view tv)
{
local_irq_disable();
@@ -661,8 +661,8 @@ static void test_set_cnt_after_xval(enum arch_timer timer, uint64_t cnt_1,
* then waits for an IRQ.
*/
static void test_set_cnt_after_xval_no_irq(enum arch_timer timer,
- uint64_t cnt_1, uint64_t xval,
- uint64_t cnt_2,
+ u64 cnt_1, u64 xval,
+ u64 cnt_2,
sleep_method_t guest_sleep,
enum timer_view tv)
{
@@ -683,31 +683,31 @@ static void test_set_cnt_after_xval_no_irq(enum arch_timer timer,
timer_set_ctl(timer, CTL_IMASK);
}
-static void test_set_cnt_after_tval(enum arch_timer timer, uint64_t cnt_1,
- int32_t tval, uint64_t cnt_2,
+static void test_set_cnt_after_tval(enum arch_timer timer, u64 cnt_1,
+ int32_t tval, u64 cnt_2,
irq_wait_method_t wm)
{
test_set_cnt_after_xval(timer, cnt_1, tval, cnt_2, wm, TIMER_TVAL);
}
-static void test_set_cnt_after_cval(enum arch_timer timer, uint64_t cnt_1,
- uint64_t cval, uint64_t cnt_2,
+static void test_set_cnt_after_cval(enum arch_timer timer, u64 cnt_1,
+ u64 cval, u64 cnt_2,
irq_wait_method_t wm)
{
test_set_cnt_after_xval(timer, cnt_1, cval, cnt_2, wm, TIMER_CVAL);
}
static void test_set_cnt_after_tval_no_irq(enum arch_timer timer,
- uint64_t cnt_1, int32_t tval,
- uint64_t cnt_2, sleep_method_t wm)
+ u64 cnt_1, int32_t tval,
+ u64 cnt_2, sleep_method_t wm)
{
test_set_cnt_after_xval_no_irq(timer, cnt_1, tval, cnt_2, wm,
TIMER_TVAL);
}
static void test_set_cnt_after_cval_no_irq(enum arch_timer timer,
- uint64_t cnt_1, uint64_t cval,
- uint64_t cnt_2, sleep_method_t wm)
+ u64 cnt_1, u64 cval,
+ u64 cnt_2, sleep_method_t wm)
{
test_set_cnt_after_xval_no_irq(timer, cnt_1, cval, cnt_2, wm,
TIMER_CVAL);
@@ -729,8 +729,7 @@ static void test_move_counters_ahead_of_timers(enum arch_timer timer)
test_set_cnt_after_tval(timer, 0, -1, DEF_CNT + 1, wm);
test_set_cnt_after_tval(timer, 0, -1, TVAL_MAX, wm);
tval = TVAL_MAX;
- test_set_cnt_after_tval(timer, 0, tval, (uint64_t) tval + 1,
- wm);
+ test_set_cnt_after_tval(timer, 0, tval, (u64)tval + 1, wm);
}
for (i = 0; i < ARRAY_SIZE(sleep_method); i++) {
@@ -760,7 +759,7 @@ static void test_move_counters_behind_timers(enum arch_timer timer)
static void test_timers_in_the_past(enum arch_timer timer)
{
int32_t tval = -1 * (int32_t) msec_to_cycles(test_args.wait_ms);
- uint64_t cval;
+ u64 cval;
int i;
for (i = 0; i < ARRAY_SIZE(irq_wait_method); i++) {
@@ -796,7 +795,7 @@ static void test_timers_in_the_past(enum arch_timer timer)
static void test_long_timer_delays(enum arch_timer timer)
{
int32_t tval = (int32_t) msec_to_cycles(test_args.long_wait_ms);
- uint64_t cval = DEF_CNT + msec_to_cycles(test_args.long_wait_ms);
+ u64 cval = DEF_CNT + msec_to_cycles(test_args.long_wait_ms);
int i;
for (i = 0; i < ARRAY_SIZE(irq_wait_method); i++) {
@@ -886,7 +885,7 @@ static void migrate_self(uint32_t new_pcpu)
new_pcpu, ret);
}
-static void kvm_set_cntxct(struct kvm_vcpu *vcpu, uint64_t cnt,
+static void kvm_set_cntxct(struct kvm_vcpu *vcpu, u64 cnt,
enum arch_timer timer)
{
if (timer == PHYSICAL)
@@ -898,7 +897,7 @@ static void kvm_set_cntxct(struct kvm_vcpu *vcpu, uint64_t cnt,
static void handle_sync(struct kvm_vcpu *vcpu, struct ucall *uc)
{
enum sync_cmd cmd = uc->args[1];
- uint64_t val = uc->args[2];
+ u64 val = uc->args[2];
enum arch_timer timer = uc->args[3];
switch (cmd) {
diff --git a/tools/testing/selftests/kvm/arm64/debug-exceptions.c b/tools/testing/selftests/kvm/arm64/debug-exceptions.c
index c7fb55c9135b..b97d3a183246 100644
--- a/tools/testing/selftests/kvm/arm64/debug-exceptions.c
+++ b/tools/testing/selftests/kvm/arm64/debug-exceptions.c
@@ -31,14 +31,14 @@
extern unsigned char sw_bp, sw_bp2, hw_bp, hw_bp2, bp_svc, bp_brk, hw_wp, ss_start, hw_bp_ctx;
extern unsigned char iter_ss_begin, iter_ss_end;
-static volatile uint64_t sw_bp_addr, hw_bp_addr;
-static volatile uint64_t wp_addr, wp_data_addr;
-static volatile uint64_t svc_addr;
-static volatile uint64_t ss_addr[4], ss_idx;
-#define PC(v) ((uint64_t)&(v))
+static volatile u64 sw_bp_addr, hw_bp_addr;
+static volatile u64 wp_addr, wp_data_addr;
+static volatile u64 svc_addr;
+static volatile u64 ss_addr[4], ss_idx;
+#define PC(v) ((u64)&(v))
#define GEN_DEBUG_WRITE_REG(reg_name) \
-static void write_##reg_name(int num, uint64_t val) \
+static void write_##reg_name(int num, u64 val) \
{ \
switch (num) { \
case 0: \
@@ -103,7 +103,7 @@ GEN_DEBUG_WRITE_REG(dbgwvr)
static void reset_debug_state(void)
{
uint8_t brps, wrps, i;
- uint64_t dfr0;
+ u64 dfr0;
asm volatile("msr daifset, #8");
@@ -149,7 +149,7 @@ static void enable_monitor_debug_exceptions(void)
isb();
}
-static void install_wp(uint8_t wpn, uint64_t addr)
+static void install_wp(uint8_t wpn, u64 addr)
{
uint32_t wcr;
@@ -162,7 +162,7 @@ static void install_wp(uint8_t wpn, uint64_t addr)
enable_monitor_debug_exceptions();
}
-static void install_hw_bp(uint8_t bpn, uint64_t addr)
+static void install_hw_bp(uint8_t bpn, u64 addr)
{
uint32_t bcr;
@@ -174,11 +174,11 @@ static void install_hw_bp(uint8_t bpn, uint64_t addr)
enable_monitor_debug_exceptions();
}
-static void install_wp_ctx(uint8_t addr_wp, uint8_t ctx_bp, uint64_t addr,
- uint64_t ctx)
+static void install_wp_ctx(uint8_t addr_wp, uint8_t ctx_bp, u64 addr,
+ u64 ctx)
{
uint32_t wcr;
- uint64_t ctx_bcr;
+ u64 ctx_bcr;
/* Setup a context-aware breakpoint for Linked Context ID Match */
ctx_bcr = DBGBCR_LEN8 | DBGBCR_EXEC | DBGBCR_EL1 | DBGBCR_E |
@@ -196,8 +196,8 @@ static void install_wp_ctx(uint8_t addr_wp, uint8_t ctx_bp, uint64_t addr,
enable_monitor_debug_exceptions();
}
-void install_hw_bp_ctx(uint8_t addr_bp, uint8_t ctx_bp, uint64_t addr,
- uint64_t ctx)
+void install_hw_bp_ctx(uint8_t addr_bp, uint8_t ctx_bp, u64 addr,
+ u64 ctx)
{
uint32_t addr_bcr, ctx_bcr;
@@ -236,7 +236,7 @@ static volatile char write_data;
static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
{
- uint64_t ctx = 0xabcdef; /* a random context number */
+ u64 ctx = 0xabcdef; /* a random context number */
/* Software-breakpoint */
reset_debug_state();
@@ -377,8 +377,8 @@ static void guest_svc_handler(struct ex_regs *regs)
static void guest_code_ss(int test_cnt)
{
- uint64_t i;
- uint64_t bvr, wvr, w_bvr, w_wvr;
+ u64 i;
+ u64 bvr, wvr, w_bvr, w_wvr;
for (i = 0; i < test_cnt; i++) {
/* Bits [1:0] of dbg{b,w}vr are RES0 */
@@ -416,7 +416,7 @@ static void guest_code_ss(int test_cnt)
GUEST_DONE();
}
-static int debug_version(uint64_t id_aa64dfr0)
+static int debug_version(u64 id_aa64dfr0)
{
return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), id_aa64dfr0);
}
@@ -468,8 +468,8 @@ void test_single_step_from_userspace(int test_cnt)
struct kvm_vm *vm;
struct ucall uc;
struct kvm_run *run;
- uint64_t pc, cmd;
- uint64_t test_pc = 0;
+ u64 pc, cmd;
+ u64 test_pc = 0;
bool ss_enable = false;
struct kvm_guest_debug debug = {};
@@ -506,7 +506,7 @@ void test_single_step_from_userspace(int test_cnt)
"Unexpected pc 0x%lx (expected 0x%lx)",
pc, test_pc);
- if ((pc + 4) == (uint64_t)&iter_ss_end) {
+ if ((pc + 4) == (u64)&iter_ss_end) {
test_pc = 0;
debug.control = KVM_GUESTDBG_ENABLE;
ss_enable = false;
@@ -519,8 +519,7 @@ void test_single_step_from_userspace(int test_cnt)
* iter_ss_end, the pc for the next KVM_EXIT_DEBUG should
* be the current pc + 4.
*/
- if ((pc >= (uint64_t)&iter_ss_begin) &&
- (pc < (uint64_t)&iter_ss_end))
+ if (pc >= (u64)&iter_ss_begin && pc < (u64)&iter_ss_end)
test_pc = pc + 4;
else
test_pc = 0;
@@ -533,7 +532,7 @@ void test_single_step_from_userspace(int test_cnt)
* Run debug testing using the various breakpoint#, watchpoint# and
* context-aware breakpoint# with the given ID_AA64DFR0_EL1 configuration.
*/
-void test_guest_debug_exceptions_all(uint64_t aa64dfr0)
+void test_guest_debug_exceptions_all(u64 aa64dfr0)
{
uint8_t brp_num, wrp_num, ctx_brp_num, normal_brp_num, ctx_brp_base;
int b, w, c;
@@ -580,7 +579,7 @@ int main(int argc, char *argv[])
struct kvm_vm *vm;
int opt;
int ss_iteration = 10000;
- uint64_t aa64dfr0;
+ u64 aa64dfr0;
vm = vm_create_with_one_vcpu(&vcpu, guest_code);
aa64dfr0 = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1));
diff --git a/tools/testing/selftests/kvm/arm64/hypercalls.c b/tools/testing/selftests/kvm/arm64/hypercalls.c
index 44cfcf8a7f46..53d9d86c06a4 100644
--- a/tools/testing/selftests/kvm/arm64/hypercalls.c
+++ b/tools/testing/selftests/kvm/arm64/hypercalls.c
@@ -29,9 +29,9 @@
#define KVM_REG_ARM_VENDOR_HYP_BMAP_2_RESET_VAL 0
struct kvm_fw_reg_info {
- uint64_t reg; /* Register definition */
- uint64_t max_feat_bit; /* Bit that represents the upper limit of the feature-map */
- uint64_t reset_val; /* Reset value for the register */
+ u64 reg; /* Register definition */
+ u64 max_feat_bit; /* Bit that represents the upper limit of the feature-map */
+ u64 reset_val; /* Reset value for the register */
};
#define FW_REG_INFO(r) \
@@ -60,7 +60,7 @@ static int stage = TEST_STAGE_REG_IFACE;
struct test_hvc_info {
uint32_t func_id;
- uint64_t arg1;
+ u64 arg1;
};
#define TEST_HVC_INFO(f, a1) \
@@ -154,7 +154,7 @@ static void guest_code(void)
struct st_time {
uint32_t rev;
uint32_t attr;
- uint64_t st_time;
+ u64 st_time;
};
#define STEAL_TIME_SIZE ((sizeof(struct st_time) + 63) & ~63)
@@ -162,7 +162,7 @@ struct st_time {
static void steal_time_init(struct kvm_vcpu *vcpu)
{
- uint64_t st_ipa = (ulong)ST_GPA_BASE;
+ u64 st_ipa = (ulong)ST_GPA_BASE;
unsigned int gpages;
gpages = vm_calc_num_guest_pages(VM_MODE_DEFAULT, STEAL_TIME_SIZE);
@@ -174,13 +174,13 @@ static void steal_time_init(struct kvm_vcpu *vcpu)
static void test_fw_regs_before_vm_start(struct kvm_vcpu *vcpu)
{
- uint64_t val;
+ u64 val;
unsigned int i;
int ret;
for (i = 0; i < ARRAY_SIZE(fw_reg_info); i++) {
const struct kvm_fw_reg_info *reg_info = &fw_reg_info[i];
- uint64_t set_val;
+ u64 set_val;
/* First 'read' should be the reset value for the reg */
val = vcpu_get_reg(vcpu, reg_info->reg);
@@ -229,7 +229,7 @@ static void test_fw_regs_before_vm_start(struct kvm_vcpu *vcpu)
static void test_fw_regs_after_vm_start(struct kvm_vcpu *vcpu)
{
- uint64_t val;
+ u64 val;
unsigned int i;
int ret;
diff --git a/tools/testing/selftests/kvm/arm64/no-vgic-v3.c b/tools/testing/selftests/kvm/arm64/no-vgic-v3.c
index ebd70430c89d..883a98dc97e5 100644
--- a/tools/testing/selftests/kvm/arm64/no-vgic-v3.c
+++ b/tools/testing/selftests/kvm/arm64/no-vgic-v3.c
@@ -11,7 +11,7 @@ static volatile bool handled;
#define __check_sr_read(r) \
({ \
- uint64_t val; \
+ u64 val; \
\
handled = false; \
dsb(sy); \
@@ -48,7 +48,7 @@ static volatile bool handled;
static void guest_code(void)
{
- uint64_t val;
+ u64 val;
/*
* Check that we advertise that ID_AA64PFR0_EL1.GIC == 0, having
@@ -161,7 +161,7 @@ int main(int argc, char *argv[])
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- uint64_t pfr0;
+ u64 pfr0;
vm = vm_create_with_one_vcpu(&vcpu, NULL);
pfr0 = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1));
diff --git a/tools/testing/selftests/kvm/arm64/page_fault_test.c b/tools/testing/selftests/kvm/arm64/page_fault_test.c
index dc6559dad9d8..1c04e0f28953 100644
--- a/tools/testing/selftests/kvm/arm64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/arm64/page_fault_test.c
@@ -23,7 +23,7 @@
#define TEST_PTE_GVA 0xb0000000
#define TEST_DATA 0x0123456789ABCDEF
-static uint64_t *guest_test_memory = (uint64_t *)TEST_GVA;
+static u64 *guest_test_memory = (u64 *)TEST_GVA;
#define CMD_NONE (0)
#define CMD_SKIP_TEST (1ULL << 1)
@@ -48,7 +48,7 @@ static struct event_cnt {
struct test_desc {
const char *name;
- uint64_t mem_mark_cmd;
+ u64 mem_mark_cmd;
/* Skip the test if any prepare function returns false */
bool (*guest_prepare[PREPARE_FN_NR])(void);
void (*guest_test)(void);
@@ -70,9 +70,9 @@ struct test_params {
struct test_desc *test_desc;
};
-static inline void flush_tlb_page(uint64_t vaddr)
+static inline void flush_tlb_page(u64 vaddr)
{
- uint64_t page = vaddr >> 12;
+ u64 page = vaddr >> 12;
dsb(ishst);
asm volatile("tlbi vaae1is, %0" :: "r" (page));
@@ -82,7 +82,7 @@ static inline void flush_tlb_page(uint64_t vaddr)
static void guest_write64(void)
{
- uint64_t val;
+ u64 val;
WRITE_ONCE(*guest_test_memory, TEST_DATA);
val = READ_ONCE(*guest_test_memory);
@@ -92,8 +92,8 @@ static void guest_write64(void)
/* Check the system for atomic instructions. */
static bool guest_check_lse(void)
{
- uint64_t isar0 = read_sysreg(id_aa64isar0_el1);
- uint64_t atomic;
+ u64 isar0 = read_sysreg(id_aa64isar0_el1);
+ u64 atomic;
atomic = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_ATOMIC), isar0);
return atomic >= 2;
@@ -101,8 +101,8 @@ static bool guest_check_lse(void)
static bool guest_check_dc_zva(void)
{
- uint64_t dczid = read_sysreg(dczid_el0);
- uint64_t dzp = FIELD_GET(ARM64_FEATURE_MASK(DCZID_EL0_DZP), dczid);
+ u64 dczid = read_sysreg(dczid_el0);
+ u64 dzp = FIELD_GET(ARM64_FEATURE_MASK(DCZID_EL0_DZP), dczid);
return dzp == 0;
}
@@ -110,7 +110,7 @@ static bool guest_check_dc_zva(void)
/* Compare and swap instruction. */
static void guest_cas(void)
{
- uint64_t val;
+ u64 val;
GUEST_ASSERT(guest_check_lse());
asm volatile(".arch_extension lse\n"
@@ -122,7 +122,7 @@ static void guest_cas(void)
static void guest_read64(void)
{
- uint64_t val;
+ u64 val;
val = READ_ONCE(*guest_test_memory);
GUEST_ASSERT_EQ(val, 0);
@@ -131,7 +131,7 @@ static void guest_read64(void)
/* Address translation instruction */
static void guest_at(void)
{
- uint64_t par;
+ u64 par;
asm volatile("at s1e1r, %0" :: "r" (guest_test_memory));
isb();
@@ -164,8 +164,8 @@ static void guest_dc_zva(void)
*/
static void guest_ld_preidx(void)
{
- uint64_t val;
- uint64_t addr = TEST_GVA - 8;
+ u64 val;
+ u64 addr = TEST_GVA - 8;
/*
* This ends up accessing "TEST_GVA + 8 - 8", where "TEST_GVA - 8" is
@@ -179,8 +179,8 @@ static void guest_ld_preidx(void)
static void guest_st_preidx(void)
{
- uint64_t val = TEST_DATA;
- uint64_t addr = TEST_GVA - 8;
+ u64 val = TEST_DATA;
+ u64 addr = TEST_GVA - 8;
asm volatile("str %0, [%1, #8]!"
: "+r" (val), "+r" (addr));
@@ -191,8 +191,8 @@ static void guest_st_preidx(void)
static bool guest_set_ha(void)
{
- uint64_t mmfr1 = read_sysreg(id_aa64mmfr1_el1);
- uint64_t hadbs, tcr;
+ u64 mmfr1 = read_sysreg(id_aa64mmfr1_el1);
+ u64 hadbs, tcr;
/* Skip if HA is not supported. */
hadbs = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_HAFDBS), mmfr1);
@@ -208,7 +208,7 @@ static bool guest_set_ha(void)
static bool guest_clear_pte_af(void)
{
- *((uint64_t *)TEST_PTE_GVA) &= ~PTE_AF;
+ *((u64 *)TEST_PTE_GVA) &= ~PTE_AF;
flush_tlb_page(TEST_GVA);
return true;
@@ -217,7 +217,7 @@ static bool guest_clear_pte_af(void)
static void guest_check_pte_af(void)
{
dsb(ish);
- GUEST_ASSERT_EQ(*((uint64_t *)TEST_PTE_GVA) & PTE_AF, PTE_AF);
+ GUEST_ASSERT_EQ(*((u64 *)TEST_PTE_GVA) & PTE_AF, PTE_AF);
}
static void guest_check_write_in_dirty_log(void)
@@ -302,26 +302,26 @@ static void no_iabt_handler(struct ex_regs *regs)
static struct uffd_args {
char *copy;
void *hva;
- uint64_t paging_size;
+ u64 paging_size;
} pt_args, data_args;
/* Returns true to continue the test, and false if it should be skipped. */
static int uffd_generic_handler(int uffd_mode, int uffd, struct uffd_msg *msg,
struct uffd_args *args)
{
- uint64_t addr = msg->arg.pagefault.address;
- uint64_t flags = msg->arg.pagefault.flags;
+ u64 addr = msg->arg.pagefault.address;
+ u64 flags = msg->arg.pagefault.flags;
struct uffdio_copy copy;
int ret;
TEST_ASSERT(uffd_mode == UFFDIO_REGISTER_MODE_MISSING,
"The only expected UFFD mode is MISSING");
- TEST_ASSERT_EQ(addr, (uint64_t)args->hva);
+ TEST_ASSERT_EQ(addr, (u64)args->hva);
pr_debug("uffd fault: addr=%p write=%d\n",
(void *)addr, !!(flags & UFFD_PAGEFAULT_FLAG_WRITE));
- copy.src = (uint64_t)args->copy;
+ copy.src = (u64)args->copy;
copy.dst = addr;
copy.len = args->paging_size;
copy.mode = 0;
@@ -407,7 +407,7 @@ static bool punch_hole_in_backing_store(struct kvm_vm *vm,
struct userspace_mem_region *region)
{
void *hva = (void *)region->region.userspace_addr;
- uint64_t paging_size = region->region.memory_size;
+ u64 paging_size = region->region.memory_size;
int ret, fd = region->fd;
if (fd != -1) {
@@ -438,7 +438,7 @@ static void mmio_on_test_gpa_handler(struct kvm_vm *vm, struct kvm_run *run)
static void mmio_no_handler(struct kvm_vm *vm, struct kvm_run *run)
{
- uint64_t data;
+ u64 data;
memcpy(&data, run->mmio.data, sizeof(data));
pr_debug("addr=%lld len=%d w=%d data=%lx\n",
@@ -449,11 +449,11 @@ static void mmio_no_handler(struct kvm_vm *vm, struct kvm_run *run)
static bool check_write_in_dirty_log(struct kvm_vm *vm,
struct userspace_mem_region *region,
- uint64_t host_pg_nr)
+ u64 host_pg_nr)
{
unsigned long *bmap;
bool first_page_dirty;
- uint64_t size = region->region.memory_size;
+ u64 size = region->region.memory_size;
/* getpage_size() is not always equal to vm->page_size */
bmap = bitmap_zalloc(size / getpagesize());
@@ -468,7 +468,7 @@ static bool handle_cmd(struct kvm_vm *vm, int cmd)
{
struct userspace_mem_region *data_region, *pt_region;
bool continue_test = true;
- uint64_t pte_gpa, pte_pg;
+ u64 pte_gpa, pte_pg;
data_region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
pt_region = vm_get_mem_region(vm, MEM_REGION_PT);
@@ -525,7 +525,7 @@ noinline void __return_0x77(void)
*/
static void load_exec_code_for_test(struct kvm_vm *vm)
{
- uint64_t *code;
+ u64 *code;
struct userspace_mem_region *region;
void *hva;
@@ -552,7 +552,7 @@ static void setup_abort_handlers(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
static void setup_gva_maps(struct kvm_vm *vm)
{
struct userspace_mem_region *region;
- uint64_t pte_gpa;
+ u64 pte_gpa;
region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA);
/* Map TEST_GVA first. This will install a new PTE. */
@@ -574,12 +574,12 @@ enum pf_test_memslots {
*/
static void setup_memslots(struct kvm_vm *vm, struct test_params *p)
{
- uint64_t backing_src_pagesz = get_backing_src_pagesz(p->src_type);
- uint64_t guest_page_size = vm->page_size;
- uint64_t max_gfn = vm_compute_max_gfn(vm);
+ u64 backing_src_pagesz = get_backing_src_pagesz(p->src_type);
+ u64 guest_page_size = vm->page_size;
+ u64 max_gfn = vm_compute_max_gfn(vm);
/* Enough for 2M of code when using 4K guest pages. */
- uint64_t code_npages = 512;
- uint64_t pt_size, data_size, data_gpa;
+ u64 code_npages = 512;
+ u64 pt_size, data_size, data_gpa;
/*
* This test requires 1 pgd, 2 pud, 4 pmd, and 6 pte pages when using
diff --git a/tools/testing/selftests/kvm/arm64/psci_test.c b/tools/testing/selftests/kvm/arm64/psci_test.c
index ab491ee9e5f7..27aa19a35256 100644
--- a/tools/testing/selftests/kvm/arm64/psci_test.c
+++ b/tools/testing/selftests/kvm/arm64/psci_test.c
@@ -22,8 +22,7 @@
#define CPU_ON_ENTRY_ADDR 0xfeedf00dul
#define CPU_ON_CONTEXT_ID 0xdeadc0deul
-static uint64_t psci_cpu_on(uint64_t target_cpu, uint64_t entry_addr,
- uint64_t context_id)
+static u64 psci_cpu_on(u64 target_cpu, u64 entry_addr, u64 context_id)
{
struct arm_smccc_res res;
@@ -33,8 +32,7 @@ static uint64_t psci_cpu_on(uint64_t target_cpu, uint64_t entry_addr,
return res.a0;
}
-static uint64_t psci_affinity_info(uint64_t target_affinity,
- uint64_t lowest_affinity_level)
+static u64 psci_affinity_info(u64 target_affinity, u64 lowest_affinity_level)
{
struct arm_smccc_res res;
@@ -44,7 +42,7 @@ static uint64_t psci_affinity_info(uint64_t target_affinity,
return res.a0;
}
-static uint64_t psci_system_suspend(uint64_t entry_addr, uint64_t context_id)
+static u64 psci_system_suspend(u64 entry_addr, u64 context_id)
{
struct arm_smccc_res res;
@@ -54,7 +52,7 @@ static uint64_t psci_system_suspend(uint64_t entry_addr, uint64_t context_id)
return res.a0;
}
-static uint64_t psci_system_off2(uint64_t type, uint64_t cookie)
+static u64 psci_system_off2(u64 type, u64 cookie)
{
struct arm_smccc_res res;
@@ -63,7 +61,7 @@ static uint64_t psci_system_off2(uint64_t type, uint64_t cookie)
return res.a0;
}
-static uint64_t psci_features(uint32_t func_id)
+static u64 psci_features(uint32_t func_id)
{
struct arm_smccc_res res;
@@ -109,7 +107,7 @@ static void enter_guest(struct kvm_vcpu *vcpu)
static void assert_vcpu_reset(struct kvm_vcpu *vcpu)
{
- uint64_t obs_pc, obs_x0;
+ u64 obs_pc, obs_x0;
obs_pc = vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc));
obs_x0 = vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.regs[0]));
@@ -122,9 +120,9 @@ static void assert_vcpu_reset(struct kvm_vcpu *vcpu)
obs_x0, CPU_ON_CONTEXT_ID);
}
-static void guest_test_cpu_on(uint64_t target_cpu)
+static void guest_test_cpu_on(u64 target_cpu)
{
- uint64_t target_state;
+ u64 target_state;
GUEST_ASSERT(!psci_cpu_on(target_cpu, CPU_ON_ENTRY_ADDR, CPU_ON_CONTEXT_ID));
@@ -141,7 +139,7 @@ static void guest_test_cpu_on(uint64_t target_cpu)
static void host_test_cpu_on(void)
{
struct kvm_vcpu *source, *target;
- uint64_t target_mpidr;
+ u64 target_mpidr;
struct kvm_vm *vm;
struct ucall uc;
@@ -165,7 +163,7 @@ static void host_test_cpu_on(void)
static void guest_test_system_suspend(void)
{
- uint64_t ret;
+ u64 ret;
/* assert that SYSTEM_SUSPEND is discoverable */
GUEST_ASSERT(!psci_features(PSCI_1_0_FN_SYSTEM_SUSPEND));
@@ -199,7 +197,7 @@ static void host_test_system_suspend(void)
static void guest_test_system_off2(void)
{
- uint64_t ret;
+ u64 ret;
/* assert that SYSTEM_OFF2 is discoverable */
GUEST_ASSERT(psci_features(PSCI_1_3_FN_SYSTEM_OFF2) &
@@ -237,7 +235,7 @@ static void host_test_system_off2(void)
{
struct kvm_vcpu *source, *target;
struct kvm_mp_state mps;
- uint64_t psci_version = 0;
+ u64 psci_version = 0;
int nr_shutdowns = 0;
struct kvm_run *run;
struct ucall uc;
diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index 322b9d3b0125..d2b689d844ae 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -31,7 +31,7 @@ struct reg_ftr_bits {
bool sign;
enum ftr_type type;
uint8_t shift;
- uint64_t mask;
+ u64 mask;
/*
* For FTR_EXACT, safe_val is used as the exact safe value.
* For FTR_LOWER_SAFE, safe_val is used as the minimal safe value.
@@ -241,9 +241,9 @@ static void guest_code(void)
}
/* Return a safe value to a given ftr_bits an ftr value */
-uint64_t get_safe_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr)
+u64 get_safe_value(const struct reg_ftr_bits *ftr_bits, u64 ftr)
{
- uint64_t ftr_max = GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0);
+ u64 ftr_max = GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0);
if (ftr_bits->sign == FTR_UNSIGNED) {
switch (ftr_bits->type) {
@@ -293,14 +293,14 @@ uint64_t get_safe_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr)
}
/* Return an invalid value to a given ftr_bits an ftr value */
-uint64_t get_invalid_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr)
+u64 get_invalid_value(const struct reg_ftr_bits *ftr_bits, u64 ftr)
{
- uint64_t ftr_max = GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0);
+ u64 ftr_max = GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0);
if (ftr_bits->sign == FTR_UNSIGNED) {
switch (ftr_bits->type) {
case FTR_EXACT:
- ftr = max((uint64_t)ftr_bits->safe_val + 1, ftr + 1);
+ ftr = max((u64)ftr_bits->safe_val + 1, ftr + 1);
break;
case FTR_LOWER_SAFE:
ftr++;
@@ -320,7 +320,7 @@ uint64_t get_invalid_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr)
} else if (ftr != ftr_max) {
switch (ftr_bits->type) {
case FTR_EXACT:
- ftr = max((uint64_t)ftr_bits->safe_val + 1, ftr + 1);
+ ftr = max((u64)ftr_bits->safe_val + 1, ftr + 1);
break;
case FTR_LOWER_SAFE:
ftr++;
@@ -344,12 +344,12 @@ uint64_t get_invalid_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr)
return ftr;
}
-static uint64_t test_reg_set_success(struct kvm_vcpu *vcpu, uint64_t reg,
- const struct reg_ftr_bits *ftr_bits)
+static u64 test_reg_set_success(struct kvm_vcpu *vcpu, u64 reg,
+ const struct reg_ftr_bits *ftr_bits)
{
uint8_t shift = ftr_bits->shift;
- uint64_t mask = ftr_bits->mask;
- uint64_t val, new_val, ftr;
+ u64 mask = ftr_bits->mask;
+ u64 val, new_val, ftr;
val = vcpu_get_reg(vcpu, reg);
ftr = (val & mask) >> shift;
@@ -367,12 +367,12 @@ static uint64_t test_reg_set_success(struct kvm_vcpu *vcpu, uint64_t reg,
return new_val;
}
-static void test_reg_set_fail(struct kvm_vcpu *vcpu, uint64_t reg,
+static void test_reg_set_fail(struct kvm_vcpu *vcpu, u64 reg,
const struct reg_ftr_bits *ftr_bits)
{
uint8_t shift = ftr_bits->shift;
- uint64_t mask = ftr_bits->mask;
- uint64_t val, old_val, ftr;
+ u64 mask = ftr_bits->mask;
+ u64 val, old_val, ftr;
int r;
val = vcpu_get_reg(vcpu, reg);
@@ -393,7 +393,7 @@ static void test_reg_set_fail(struct kvm_vcpu *vcpu, uint64_t reg,
TEST_ASSERT_EQ(val, old_val);
}
-static uint64_t test_reg_vals[KVM_ARM_FEATURE_ID_RANGE_SIZE];
+static u64 test_reg_vals[KVM_ARM_FEATURE_ID_RANGE_SIZE];
#define encoding_to_range_idx(encoding) \
KVM_ARM_FEATURE_ID_RANGE_IDX(sys_reg_Op0(encoding), sys_reg_Op1(encoding), \
@@ -403,7 +403,7 @@ static uint64_t test_reg_vals[KVM_ARM_FEATURE_ID_RANGE_SIZE];
static void test_vm_ftr_id_regs(struct kvm_vcpu *vcpu, bool aarch64_only)
{
- uint64_t masks[KVM_ARM_FEATURE_ID_RANGE_SIZE];
+ u64 masks[KVM_ARM_FEATURE_ID_RANGE_SIZE];
struct reg_mask_range range = {
.addr = (__u64)masks,
};
@@ -421,7 +421,7 @@ static void test_vm_ftr_id_regs(struct kvm_vcpu *vcpu, bool aarch64_only)
for (int i = 0; i < ARRAY_SIZE(test_regs); i++) {
const struct reg_ftr_bits *ftr_bits = test_regs[i].ftr_bits;
uint32_t reg_id = test_regs[i].reg;
- uint64_t reg = KVM_ARM64_SYS_REG(reg_id);
+ u64 reg = KVM_ARM64_SYS_REG(reg_id);
int idx;
/* Get the index to masks array for the idreg */
@@ -451,11 +451,11 @@ static void test_vm_ftr_id_regs(struct kvm_vcpu *vcpu, bool aarch64_only)
#define MPAM_IDREG_TEST 6
static void test_user_set_mpam_reg(struct kvm_vcpu *vcpu)
{
- uint64_t masks[KVM_ARM_FEATURE_ID_RANGE_SIZE];
+ u64 masks[KVM_ARM_FEATURE_ID_RANGE_SIZE];
struct reg_mask_range range = {
.addr = (__u64)masks,
};
- uint64_t val;
+ u64 val;
int idx, err;
/*
@@ -578,7 +578,7 @@ static void test_guest_reg_read(struct kvm_vcpu *vcpu)
static void test_clidr(struct kvm_vcpu *vcpu)
{
- uint64_t clidr;
+ u64 clidr;
int level;
clidr = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_CLIDR_EL1));
@@ -646,7 +646,7 @@ static void test_vcpu_non_ftr_id_regs(struct kvm_vcpu *vcpu)
static void test_assert_id_reg_unchanged(struct kvm_vcpu *vcpu, uint32_t encoding)
{
size_t idx = encoding_to_range_idx(encoding);
- uint64_t observed;
+ u64 observed;
observed = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(encoding));
TEST_ASSERT_EQ(test_reg_vals[idx], observed);
@@ -678,7 +678,7 @@ int main(void)
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
bool aarch64_only;
- uint64_t val, el0;
+ u64 val, el0;
int test_cnt;
TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES));
diff --git a/tools/testing/selftests/kvm/arm64/vgic_init.c b/tools/testing/selftests/kvm/arm64/vgic_init.c
index b3b5fb0ff0a9..8f13d4979dc5 100644
--- a/tools/testing/selftests/kvm/arm64/vgic_init.c
+++ b/tools/testing/selftests/kvm/arm64/vgic_init.c
@@ -16,7 +16,7 @@
#define NR_VCPUS 4
-#define REG_OFFSET(vcpu, offset) (((uint64_t)vcpu << 32) | offset)
+#define REG_OFFSET(vcpu, offset) (((u64)(vcpu) << 32) | (offset))
#define GICR_TYPER 0x8
@@ -29,7 +29,7 @@ struct vm_gic {
uint32_t gic_dev_type;
};
-static uint64_t max_phys_size;
+static u64 max_phys_size;
/*
* Helpers to access a redistributor register and verify the ioctl() failed or
@@ -102,9 +102,9 @@ static void vm_gic_destroy(struct vm_gic *v)
}
struct vgic_region_attr {
- uint64_t attr;
- uint64_t size;
- uint64_t alignment;
+ u64 attr;
+ u64 size;
+ u64 alignment;
};
struct vgic_region_attr gic_v3_dist_region = {
@@ -142,7 +142,7 @@ struct vgic_region_attr gic_v2_cpu_region = {
static void subtest_dist_rdist(struct vm_gic *v)
{
int ret;
- uint64_t addr;
+ u64 addr;
struct vgic_region_attr rdist; /* CPU interface in GICv2*/
struct vgic_region_attr dist;
@@ -222,7 +222,7 @@ static void subtest_dist_rdist(struct vm_gic *v)
/* Test the new REDIST region API */
static void subtest_v3_redist_regions(struct vm_gic *v)
{
- uint64_t addr, expected_addr;
+ u64 addr, expected_addr;
int ret;
ret = __kvm_has_device_attr(v->gic_fd, KVM_DEV_ARM_VGIC_GRP_ADDR,
@@ -407,7 +407,7 @@ static void test_v3_new_redist_regions(void)
struct kvm_vcpu *vcpus[NR_VCPUS];
void *dummy = NULL;
struct vm_gic v;
- uint64_t addr;
+ u64 addr;
int ret;
v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, NR_VCPUS, vcpus);
@@ -459,7 +459,7 @@ static void test_v3_new_redist_regions(void)
static void test_v3_typer_accesses(void)
{
struct vm_gic v;
- uint64_t addr;
+ u64 addr;
int ret, i;
v.vm = vm_create(NR_VCPUS);
@@ -545,7 +545,7 @@ static void test_v3_last_bit_redist_regions(void)
{
uint32_t vcpuids[] = { 0, 3, 5, 4, 1, 2 };
struct vm_gic v;
- uint64_t addr;
+ u64 addr;
v = vm_gic_v3_create_with_vcpuids(ARRAY_SIZE(vcpuids), vcpuids);
@@ -579,7 +579,7 @@ static void test_v3_last_bit_single_rdist(void)
{
uint32_t vcpuids[] = { 0, 3, 5, 4, 1, 2 };
struct vm_gic v;
- uint64_t addr;
+ u64 addr;
v = vm_gic_v3_create_with_vcpuids(ARRAY_SIZE(vcpuids), vcpuids);
@@ -605,7 +605,7 @@ static void test_v3_redist_ipa_range_check_at_vcpu_run(void)
struct kvm_vcpu *vcpus[NR_VCPUS];
struct vm_gic v;
int ret, i;
- uint64_t addr;
+ u64 addr;
v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, 1, vcpus);
@@ -637,7 +637,7 @@ static void test_v3_its_region(void)
{
struct kvm_vcpu *vcpus[NR_VCPUS];
struct vm_gic v;
- uint64_t addr;
+ u64 addr;
int its_fd, ret;
v = vm_gic_create_with_vcpus(KVM_DEV_TYPE_ARM_VGIC_V3, NR_VCPUS, vcpus);
diff --git a/tools/testing/selftests/kvm/arm64/vgic_irq.c b/tools/testing/selftests/kvm/arm64/vgic_irq.c
index f6b77da48785..e6f91bb293a6 100644
--- a/tools/testing/selftests/kvm/arm64/vgic_irq.c
+++ b/tools/testing/selftests/kvm/arm64/vgic_irq.c
@@ -132,7 +132,7 @@ static struct kvm_inject_desc set_active_fns[] = {
for_each_supported_inject_fn((args), (t), (f))
/* Shared between the guest main thread and the IRQ handlers. */
-volatile uint64_t irq_handled;
+volatile u64 irq_handled;
volatile uint32_t irqnr_received[MAX_SPI + 1];
static void reset_stats(void)
@@ -144,15 +144,15 @@ static void reset_stats(void)
irqnr_received[i] = 0;
}
-static uint64_t gic_read_ap1r0(void)
+static u64 gic_read_ap1r0(void)
{
- uint64_t reg = read_sysreg_s(SYS_ICC_AP1R0_EL1);
+ u64 reg = read_sysreg_s(SYS_ICC_AP1R0_EL1);
dsb(sy);
return reg;
}
-static void gic_write_ap1r0(uint64_t val)
+static void gic_write_ap1r0(u64 val)
{
write_sysreg_s(val, SYS_ICC_AP1R0_EL1);
isb();
@@ -555,12 +555,12 @@ static void kvm_set_gsi_routing_irqchip_check(struct kvm_vm *vm,
{
struct kvm_irq_routing *routing;
int ret;
- uint64_t i;
+ u64 i;
assert(num <= kvm_max_routes && kvm_max_routes <= KVM_MAX_IRQ_ROUTES);
routing = kvm_gsi_routing_create();
- for (i = intid; i < (uint64_t)intid + num; i++)
+ for (i = intid; i < (u64)intid + num; i++)
kvm_gsi_routing_irqchip_add(routing, i - MIN_SPI, i - MIN_SPI);
if (!expect_failure) {
@@ -568,7 +568,7 @@ static void kvm_set_gsi_routing_irqchip_check(struct kvm_vm *vm,
} else {
ret = _kvm_gsi_routing_write(vm, routing);
/* The kernel only checks e->irqchip.pin >= KVM_IRQCHIP_NUM_PINS */
- if (((uint64_t)intid + num - 1 - MIN_SPI) >= KVM_IRQCHIP_NUM_PINS)
+ if (((u64)intid + num - 1 - MIN_SPI) >= KVM_IRQCHIP_NUM_PINS)
TEST_ASSERT(ret != 0 && errno == EINVAL,
"Bad intid %u did not cause KVM_SET_GSI_ROUTING "
"error: rc: %i errno: %i", intid, ret, errno);
@@ -599,9 +599,9 @@ static void kvm_routing_and_irqfd_check(struct kvm_vm *vm,
bool expect_failure)
{
int fd[MAX_SPI];
- uint64_t val;
+ u64 val;
int ret, f;
- uint64_t i;
+ u64 i;
/*
* There is no way to try injecting an SGI or PPI as the interface
@@ -620,35 +620,35 @@ static void kvm_routing_and_irqfd_check(struct kvm_vm *vm,
* that no actual interrupt was injected for those cases.
*/
- for (f = 0, i = intid; i < (uint64_t)intid + num; i++, f++) {
+ for (f = 0, i = intid; i < (u64)intid + num; i++, f++) {
fd[f] = eventfd(0, 0);
TEST_ASSERT(fd[f] != -1, __KVM_SYSCALL_ERROR("eventfd()", fd[f]));
}
- for (f = 0, i = intid; i < (uint64_t)intid + num; i++, f++) {
+ for (f = 0, i = intid; i < (u64)intid + num; i++, f++) {
struct kvm_irqfd irqfd = {
.fd = fd[f],
.gsi = i - MIN_SPI,
};
- assert(i <= (uint64_t)UINT_MAX);
+ assert(i <= (u64)UINT_MAX);
vm_ioctl(vm, KVM_IRQFD, &irqfd);
}
- for (f = 0, i = intid; i < (uint64_t)intid + num; i++, f++) {
+ for (f = 0, i = intid; i < (u64)intid + num; i++, f++) {
val = 1;
- ret = write(fd[f], &val, sizeof(uint64_t));
- TEST_ASSERT(ret == sizeof(uint64_t),
+ ret = write(fd[f], &val, sizeof(u64));
+ TEST_ASSERT(ret == sizeof(u64),
__KVM_SYSCALL_ERROR("write()", ret));
}
- for (f = 0, i = intid; i < (uint64_t)intid + num; i++, f++)
+ for (f = 0, i = intid; i < (u64)intid + num; i++, f++)
close(fd[f]);
}
/* handles the valid case: intid=0xffffffff num=1 */
#define for_each_intid(first, num, tmp, i) \
for ((tmp) = (i) = (first); \
- (tmp) < (uint64_t)(first) + (uint64_t)(num); \
+ (tmp) < (u64)(first) + (u64)(num); \
(tmp)++, (i)++)
static void run_guest_cmd(struct kvm_vcpu *vcpu, int gic_fd,
@@ -661,7 +661,7 @@ static void run_guest_cmd(struct kvm_vcpu *vcpu, int gic_fd,
int level = inject_args->level;
bool expect_failure = inject_args->expect_failure;
struct kvm_vm *vm = vcpu->vm;
- uint64_t tmp;
+ u64 tmp;
uint32_t i;
/* handles the valid case: intid=0xffffffff num=1 */
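An aside on the casts converted above: the (u64) in the loop bounds and in
for_each_intid() is what makes the documented intid=0xffffffff num=1 case
work at all. A standalone sketch (local typedefs, illustration only):

  typedef unsigned int u32;
  typedef unsigned long long u64;

  static void overflow_sketch(void)
  {
          u32 intid = 0xffffffff, num = 1;
          u64 i;

          /* 32-bit bound: 0xffffffff + 1 wraps to 0, the loop never runs. */
          for (i = intid; i < intid + num; i++)
                  ;

          /* 64-bit bound: 0x100000000, the loop runs exactly once. */
          for (i = intid; i < (u64)intid + num; i++)
                  ;
  }

The rename is mechanical, but these are the lines where the 64-bit width is
actually load-bearing.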
diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
index f16b3b27e32e..986ff950a652 100644
--- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
@@ -34,25 +34,25 @@ struct vpmu_vm {
static struct vpmu_vm vpmu_vm;
struct pmreg_sets {
- uint64_t set_reg_id;
- uint64_t clr_reg_id;
+ u64 set_reg_id;
+ u64 clr_reg_id;
};
#define PMREG_SET(set, clr) {.set_reg_id = set, .clr_reg_id = clr}
-static uint64_t get_pmcr_n(uint64_t pmcr)
+static u64 get_pmcr_n(u64 pmcr)
{
return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
}
-static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n)
+static void set_pmcr_n(u64 *pmcr, u64 pmcr_n)
{
u64p_replace_bits((__u64 *) pmcr, pmcr_n, ARMV8_PMU_PMCR_N);
}
-static uint64_t get_counters_mask(uint64_t n)
+static u64 get_counters_mask(u64 n)
{
- uint64_t mask = BIT(ARMV8_PMU_CYCLE_IDX);
+ u64 mask = BIT(ARMV8_PMU_CYCLE_IDX);
if (n)
mask |= GENMASK(n - 1, 0);
@@ -95,7 +95,7 @@ static inline void write_sel_evtyper(int sel, unsigned long val)
static void pmu_disable_reset(void)
{
- uint64_t pmcr = read_sysreg(pmcr_el0);
+ u64 pmcr = read_sysreg(pmcr_el0);
/* Reset all counters, disabling them */
pmcr &= ~ARMV8_PMU_PMCR_E;
@@ -175,7 +175,7 @@ struct pmc_accessor pmc_accessors[] = {
#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected) \
{ \
- uint64_t _tval = read_sysreg(regname); \
+ u64 _tval = read_sysreg(regname); \
\
if (set_expected) \
__GUEST_ASSERT((_tval & mask), \
@@ -191,7 +191,7 @@ struct pmc_accessor pmc_accessors[] = {
* Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
* are set or cleared as specified in @set_expected.
*/
-static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
+static void check_bitmap_pmu_regs(u64 mask, bool set_expected)
{
GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
@@ -213,7 +213,7 @@ static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
*/
static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
{
- uint64_t pmcr_n, test_bit = BIT(pmc_idx);
+ u64 pmcr_n, test_bit = BIT(pmc_idx);
bool set_expected = false;
if (set_op) {
@@ -238,7 +238,7 @@ static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
*/
static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
{
- uint64_t write_data, read_data;
+ u64 write_data, read_data;
/* Disable all PMCs and reset all PMCs to zero. */
pmu_disable_reset();
@@ -293,11 +293,11 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
}
#define INVALID_EC (-1ul)
-uint64_t expected_ec = INVALID_EC;
+u64 expected_ec = INVALID_EC;
static void guest_sync_handler(struct ex_regs *regs)
{
- uint64_t esr, ec;
+ u64 esr, ec;
esr = read_sysreg(esr_el1);
ec = ESR_ELx_EC(esr);
@@ -357,9 +357,9 @@ static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
* if reading/writing PMU registers for implemented or unimplemented
* counters works as expected.
*/
-static void guest_code(uint64_t expected_pmcr_n)
+static void guest_code(u64 expected_pmcr_n)
{
- uint64_t pmcr, pmcr_n, unimp_mask;
+ u64 pmcr, pmcr_n, unimp_mask;
int i, pmc;
__GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
@@ -409,11 +409,11 @@ static void create_vpmu_vm(void *guest_code)
{
struct kvm_vcpu_init init;
uint8_t pmuver, ec;
- uint64_t dfr0, irq = 23;
+ u64 dfr0, irq = 23;
struct kvm_device_attr irq_attr = {
.group = KVM_ARM_VCPU_PMU_V3_CTRL,
.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
- .addr = (uint64_t)&irq,
+ .addr = (u64)&irq,
};
struct kvm_device_attr init_attr = {
.group = KVM_ARM_VCPU_PMU_V3_CTRL,
@@ -457,7 +457,7 @@ static void destroy_vpmu_vm(void)
kvm_vm_free(vpmu_vm.vm);
}
-static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+static void run_vcpu(struct kvm_vcpu *vcpu, u64 pmcr_n)
{
struct ucall uc;
@@ -475,10 +475,10 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
}
}
-static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
+static void test_create_vpmu_vm_with_pmcr_n(u64 pmcr_n, bool expect_fail)
{
struct kvm_vcpu *vcpu;
- uint64_t pmcr, pmcr_orig;
+ u64 pmcr, pmcr_orig;
create_vpmu_vm(guest_code);
vcpu = vpmu_vm.vcpu;
@@ -508,9 +508,9 @@ static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
* Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
* and run the test.
*/
-static void run_access_test(uint64_t pmcr_n)
+static void run_access_test(u64 pmcr_n)
{
- uint64_t sp;
+ u64 sp;
struct kvm_vcpu *vcpu;
struct kvm_vcpu_init init;
@@ -533,7 +533,7 @@ static void run_access_test(uint64_t pmcr_n)
aarch64_vcpu_setup(vcpu, &init);
vcpu_init_descriptor_tables(vcpu);
vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
- vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
+ vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (u64)guest_code);
run_vcpu(vcpu, pmcr_n);
@@ -550,12 +550,12 @@ static struct pmreg_sets validity_check_reg_sets[] = {
* Create a VM, and check if KVM handles the userspace accesses of
* the PMU register sets in @validity_check_reg_sets[] correctly.
*/
-static void run_pmregs_validity_test(uint64_t pmcr_n)
+static void run_pmregs_validity_test(u64 pmcr_n)
{
int i;
struct kvm_vcpu *vcpu;
- uint64_t set_reg_id, clr_reg_id, reg_val;
- uint64_t valid_counters_mask, max_counters_mask;
+ u64 set_reg_id, clr_reg_id, reg_val;
+ u64 valid_counters_mask, max_counters_mask;
test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
vcpu = vpmu_vm.vcpu;
@@ -607,7 +607,7 @@ static void run_pmregs_validity_test(uint64_t pmcr_n)
* the vCPU to @pmcr_n, which is larger than the host value.
* The attempt should fail as @pmcr_n is too big to set for the vCPU.
*/
-static void run_error_test(uint64_t pmcr_n)
+static void run_error_test(u64 pmcr_n)
{
pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
@@ -619,9 +619,9 @@ static void run_error_test(uint64_t pmcr_n)
* Return the default number of implemented PMU event counters excluding
* the cycle counter (i.e. PMCR_EL0.N value) for the guest.
*/
-static uint64_t get_pmcr_n_limit(void)
+static u64 get_pmcr_n_limit(void)
{
- uint64_t pmcr;
+ u64 pmcr;
create_vpmu_vm(guest_code);
pmcr = vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));
@@ -631,7 +631,7 @@ static uint64_t get_pmcr_n_limit(void)
int main(void)
{
- uint64_t i, pmcr_n;
+ u64 i, pmcr_n;
TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
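For readers unfamiliar with the bitmap layout the mask helpers above encode:
the cycle counter lives in bit 31 of the {PMCNTEN,PMINTEN,PMOVS} registers
and the N general-purpose counters occupy bits [N-1:0]. A worked example of
get_counters_mask(), assuming ARMV8_PMU_CYCLE_IDX is 31 as on arm64:

  u64 mask = get_counters_mask(6);
  /*
   * BIT(31)       = 0x80000000  (cycle counter)
   * GENMASK(5, 0) = 0x0000003f  (general counters 0..5)
   * mask          = 0x8000003f
   */

get_counters_mask(0) yields just the cycle-counter bit, which is why the
GENMASK() term is guarded by "if (n)".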
diff --git a/tools/testing/selftests/kvm/coalesced_io_test.c b/tools/testing/selftests/kvm/coalesced_io_test.c
index 60cb25454899..ed6a66020b1e 100644
--- a/tools/testing/selftests/kvm/coalesced_io_test.c
+++ b/tools/testing/selftests/kvm/coalesced_io_test.c
@@ -15,8 +15,8 @@
struct kvm_coalesced_io {
struct kvm_coalesced_mmio_ring *ring;
uint32_t ring_size;
- uint64_t mmio_gpa;
- uint64_t *mmio;
+ u64 mmio_gpa;
+ u64 *mmio;
/*
* x86-only, but define pio_port for all architectures to minimize the
@@ -94,7 +94,7 @@ static void vcpu_run_and_verify_io_exit(struct kvm_vcpu *vcpu,
TEST_ASSERT((!want_pio && (run->exit_reason == KVM_EXIT_MMIO && run->mmio.is_write &&
run->mmio.phys_addr == io->mmio_gpa && run->mmio.len == 8 &&
- *(uint64_t *)run->mmio.data == io->mmio_gpa + io->ring_size - 1)) ||
+ *(u64 *)run->mmio.data == io->mmio_gpa + io->ring_size - 1)) ||
(want_pio && (run->exit_reason == KVM_EXIT_IO && run->io.port == io->pio_port &&
run->io.direction == KVM_EXIT_IO_OUT && run->io.count == 1 &&
pio_value == io->pio_port + io->ring_size - 1)),
@@ -105,7 +105,7 @@ static void vcpu_run_and_verify_io_exit(struct kvm_vcpu *vcpu,
want_pio ? (unsigned long long)io->pio_port : io->mmio_gpa,
(want_pio ? io->pio_port : io->mmio_gpa) + io->ring_size - 1, run->exit_reason,
run->exit_reason == KVM_EXIT_MMIO ? "MMIO" : run->exit_reason == KVM_EXIT_IO ? "PIO" : "other",
- run->mmio.phys_addr, run->mmio.is_write, run->mmio.len, *(uint64_t *)run->mmio.data,
+ run->mmio.phys_addr, run->mmio.is_write, run->mmio.len, *(u64 *)run->mmio.data,
run->io.port, run->io.direction, run->io.size, run->io.count, pio_value);
}
@@ -143,7 +143,7 @@ static void vcpu_run_and_verify_coalesced_io(struct kvm_vcpu *vcpu,
"Wanted 8-byte MMIO to 0x%lx = %lx in entry %u, got %u-byte %s 0x%llx = 0x%lx",
io->mmio_gpa, io->mmio_gpa + i, i,
entry->len, entry->pio ? "PIO" : "MMIO",
- entry->phys_addr, *(uint64_t *)entry->data);
+ entry->phys_addr, *(u64 *)entry->data);
}
}
@@ -219,11 +219,11 @@ int main(int argc, char *argv[])
* the MMIO GPA identity mapped in the guest.
*/
.mmio_gpa = 4ull * SZ_1G,
- .mmio = (uint64_t *)(4ull * SZ_1G),
+ .mmio = (u64 *)(4ull * SZ_1G),
.pio_port = 0x80,
};
- virt_map(vm, (uint64_t)kvm_builtin_io_ring.mmio, kvm_builtin_io_ring.mmio_gpa, 1);
+ virt_map(vm, (u64)kvm_builtin_io_ring.mmio, kvm_builtin_io_ring.mmio_gpa, 1);
sync_global_to_guest(vm, kvm_builtin_io_ring);
vcpu_args_set(vcpu, 1, &kvm_builtin_io_ring);
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index 0202b78f8680..302c4923d093 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -24,7 +24,7 @@
#ifdef __NR_userfaultfd
static int nr_vcpus = 1;
-static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
+static u64 guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
static size_t demand_paging_size;
static char *guest_data_prototype;
@@ -58,7 +58,7 @@ static int handle_uffd_page_request(int uffd_mode, int uffd,
struct uffd_msg *msg)
{
pid_t tid = syscall(__NR_gettid);
- uint64_t addr = msg->arg.pagefault.address;
+ u64 addr = msg->arg.pagefault.address;
struct timespec start;
struct timespec ts_diff;
int r;
@@ -68,7 +68,7 @@ static int handle_uffd_page_request(int uffd_mode, int uffd,
if (uffd_mode == UFFDIO_REGISTER_MODE_MISSING) {
struct uffdio_copy copy;
- copy.src = (uint64_t)guest_data_prototype;
+ copy.src = (u64)guest_data_prototype;
copy.dst = addr;
copy.len = demand_paging_size;
copy.mode = 0;
@@ -138,7 +138,7 @@ struct test_params {
bool partition_vcpu_memory_access;
};
-static void prefault_mem(void *alias, uint64_t len)
+static void prefault_mem(void *alias, u64 len)
{
size_t p;
@@ -154,7 +154,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
struct memstress_vcpu_args *vcpu_args;
struct test_params *p = arg;
struct uffd_desc **uffd_descs = NULL;
- uint64_t uffd_region_size;
+ u64 uffd_region_size;
struct timespec start;
struct timespec ts_diff;
double vcpu_paging_rate;
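The handler above boils down to one UFFDIO_COPY per missing-page fault. A
condensed sketch of that path (error handling and the minor-fault branch
trimmed; the test's real handler also records latency), reusing the file's
guest_data_prototype and demand_paging_size:

  struct uffdio_copy copy = {
          .src  = (u64)guest_data_prototype, /* host page(s) to install */
          .dst  = addr,                      /* fault address from the uffd_msg */
          .len  = demand_paging_size,
          .mode = 0,
  };

  /* UFFDIO_COPY fails with EEXIST if the page was already installed. */
  r = ioctl(uffd, UFFDIO_COPY, &copy);

Note that .src is exactly the kind of host-pointer-in-a-u64 field that forces
the (u64) casts this patch rewrites.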
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index e79817bd0e29..49b85b3be8d2 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -56,7 +56,7 @@ static void arch_cleanup_vm(struct kvm_vm *vm)
#define TEST_HOST_LOOP_N 2UL
static int nr_vcpus = 1;
-static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
+static u64 guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
static bool run_vcpus_while_disabling_dirty_logging;
/* Host variables */
@@ -69,7 +69,7 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
{
struct kvm_vcpu *vcpu = vcpu_args->vcpu;
int vcpu_idx = vcpu_args->vcpu_idx;
- uint64_t pages_count = 0;
+ u64 pages_count = 0;
struct kvm_run *run;
struct timespec start;
struct timespec ts_diff;
@@ -125,7 +125,7 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
struct test_params {
unsigned long iterations;
- uint64_t phys_offset;
+ u64 phys_offset;
bool partition_vcpu_memory_access;
enum vm_mem_backing_src_type backing_src;
int slots;
@@ -138,9 +138,9 @@ static void run_test(enum vm_guest_mode mode, void *arg)
struct test_params *p = arg;
struct kvm_vm *vm;
unsigned long **bitmaps;
- uint64_t guest_num_pages;
- uint64_t host_num_pages;
- uint64_t pages_per_slot;
+ u64 guest_num_pages;
+ u64 host_num_pages;
+ u64 pages_per_slot;
struct timespec start;
struct timespec ts_diff;
struct timespec get_dirty_log_total = (struct timespec){0};
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index a7744974663b..0bc76b9439a2 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -74,11 +74,11 @@
* the host. READ/WRITE_ONCE() should also be used with anything
* that may change.
*/
-static uint64_t host_page_size;
-static uint64_t guest_page_size;
-static uint64_t guest_num_pages;
-static uint64_t iteration;
-static uint64_t nr_writes;
+static u64 host_page_size;
+static u64 guest_page_size;
+static u64 guest_num_pages;
+static u64 iteration;
+static u64 nr_writes;
static bool vcpu_stop;
/*
@@ -86,13 +86,13 @@ static bool vcpu_stop;
* This will be set to the topmost valid physical address minus
* the test memory size.
*/
-static uint64_t guest_test_phys_mem;
+static u64 guest_test_phys_mem;
/*
* Guest virtual memory offset of the testing memory slot.
* Must not conflict with identity mapped test code.
*/
-static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
+static u64 guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
/*
* Continuously write to the first 8 bytes of random pages within
@@ -100,10 +100,10 @@ static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
*/
static void guest_code(void)
{
- uint64_t addr;
+ u64 addr;
#ifdef __s390x__
- uint64_t i;
+ u64 i;
/*
* On s390x, all pages of a 1M segment are initially marked as dirty
@@ -113,7 +113,7 @@ static void guest_code(void)
*/
for (i = 0; i < guest_num_pages; i++) {
addr = guest_test_virt_mem + i * guest_page_size;
- vcpu_arch_put_guest(*(uint64_t *)addr, READ_ONCE(iteration));
+ vcpu_arch_put_guest(*(u64 *)addr, READ_ONCE(iteration));
nr_writes++;
}
#endif
@@ -125,7 +125,7 @@ static void guest_code(void)
* guest_page_size;
addr = align_down(addr, host_page_size);
- vcpu_arch_put_guest(*(uint64_t *)addr, READ_ONCE(iteration));
+ vcpu_arch_put_guest(*(u64 *)addr, READ_ONCE(iteration));
nr_writes++;
}
@@ -138,11 +138,11 @@ static bool host_quit;
/* Points to the test VM memory region on which we track dirty logs */
static void *host_test_mem;
-static uint64_t host_num_pages;
+static u64 host_num_pages;
/* For statistics only */
-static uint64_t host_dirty_count;
-static uint64_t host_clear_count;
+static u64 host_dirty_count;
+static u64 host_clear_count;
/* Whether dirty ring reset is requested, or finished */
static sem_t sem_vcpu_stop;
@@ -169,7 +169,7 @@ static bool dirty_ring_vcpu_ring_full;
* dirty gfn we've collected, so that if a mismatch of data found later in the
* verifying process, we let it pass.
*/
-static uint64_t dirty_ring_last_page = -1ULL;
+static u64 dirty_ring_last_page = -1ULL;
/*
* In addition to the above, it is possible (especially if this
@@ -213,7 +213,7 @@ static uint64_t dirty_ring_last_page = -1ULL;
* and also don't fail when it is reported in the next iteration, together with
* an outdated iteration count.
*/
-static uint64_t dirty_ring_prev_iteration_last_page;
+static u64 dirty_ring_prev_iteration_last_page;
enum log_mode_t {
/* Only use KVM_GET_DIRTY_LOG for logging */
@@ -297,7 +297,7 @@ static bool dirty_ring_supported(void)
static void dirty_ring_create_vm_done(struct kvm_vm *vm)
{
- uint64_t pages;
+ u64 pages;
uint32_t limit;
/*
@@ -494,11 +494,11 @@ static void *vcpu_worker(void *data)
static void vm_dirty_log_verify(enum vm_guest_mode mode, unsigned long **bmap)
{
- uint64_t page, nr_dirty_pages = 0, nr_clean_pages = 0;
- uint64_t step = vm_num_host_pages(mode, 1);
+ u64 page, nr_dirty_pages = 0, nr_clean_pages = 0;
+ u64 step = vm_num_host_pages(mode, 1);
for (page = 0; page < host_num_pages; page += step) {
- uint64_t val = *(uint64_t *)(host_test_mem + page * host_page_size);
+ u64 val = *(u64 *)(host_test_mem + page * host_page_size);
bool bmap0_dirty = __test_and_clear_bit_le(page, bmap[0]);
/*
@@ -575,7 +575,7 @@ static void vm_dirty_log_verify(enum vm_guest_mode mode, unsigned long **bmap)
}
static struct kvm_vm *create_vm(enum vm_guest_mode mode, struct kvm_vcpu **vcpu,
- uint64_t extra_mem_pages, void *guest_code)
+ u64 extra_mem_pages, void *guest_code)
{
struct kvm_vm *vm;
@@ -591,7 +591,7 @@ static struct kvm_vm *create_vm(enum vm_guest_mode mode, struct kvm_vcpu **vcpu,
struct test_params {
unsigned long iterations;
unsigned long interval;
- uint64_t phys_offset;
+ u64 phys_offset;
};
static void run_test(enum vm_guest_mode mode, void *arg)
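To make the long comment blocks above concrete: verification tolerates a
value from the previous iteration, but only for the pages recorded in
dirty_ring_last_page and dirty_ring_prev_iteration_last_page. A hypothetical
distillation (not the test's literal logic, which also consults the dirty
bitmaps):

  /* Accept an outdated write only where the comments above allow it. */
  static bool stale_value_is_ok(u64 page, u64 val)
  {
          return val == iteration - 1 &&
                 (page == dirty_ring_last_page ||
                  page == dirty_ring_prev_iteration_last_page);
  }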
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ce687f8d248f..8b3454d373cc 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -123,7 +123,7 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
{
size_t page_size = getpagesize();
- uint64_t flag;
+ u64 flag;
size_t size;
int fd;
diff --git a/tools/testing/selftests/kvm/guest_print_test.c b/tools/testing/selftests/kvm/guest_print_test.c
index bcf582852db9..894ef7d2481e 100644
--- a/tools/testing/selftests/kvm/guest_print_test.c
+++ b/tools/testing/selftests/kvm/guest_print_test.c
@@ -16,9 +16,9 @@
#include "ucall_common.h"
struct guest_vals {
- uint64_t a;
- uint64_t b;
- uint64_t type;
+ u64 a;
+ u64 b;
+ u64 type;
};
static struct guest_vals vals;
@@ -26,9 +26,9 @@ static struct guest_vals vals;
/* GUEST_PRINTF()/GUEST_ASSERT_FMT() does not support float or double. */
#define TYPE_LIST \
TYPE(test_type_i64, I64, "%ld", int64_t) \
-TYPE(test_type_u64, U64u, "%lu", uint64_t) \
-TYPE(test_type_x64, U64x, "0x%lx", uint64_t) \
-TYPE(test_type_X64, U64X, "0x%lX", uint64_t) \
+TYPE(test_type_u64, U64u, "%lu", u64) \
+TYPE(test_type_x64, U64x, "0x%lx", u64) \
+TYPE(test_type_X64, U64X, "0x%lX", u64) \
TYPE(test_type_u32, U32u, "%u", uint32_t) \
TYPE(test_type_x32, U32x, "0x%x", uint32_t) \
TYPE(test_type_X32, U32X, "0x%X", uint32_t) \
@@ -56,7 +56,7 @@ static void fn(struct kvm_vcpu *vcpu, T a, T b) \
\
snprintf(expected_printf, UCALL_BUFFER_LEN, PRINTF_FMT_##ext, a, b); \
snprintf(expected_assert, UCALL_BUFFER_LEN, ASSERT_FMT_##ext, a, b); \
- vals = (struct guest_vals){ (uint64_t)a, (uint64_t)b, TYPE_##ext }; \
+ vals = (struct guest_vals){ (u64)a, (u64)b, TYPE_##ext }; \
sync_global_to_guest(vcpu->vm, vals); \
run_test(vcpu, expected_printf, expected_assert); \
}
diff --git a/tools/testing/selftests/kvm/include/arm64/arch_timer.h b/tools/testing/selftests/kvm/include/arm64/arch_timer.h
index bf461de34785..cdb34e8a4416 100644
--- a/tools/testing/selftests/kvm/include/arm64/arch_timer.h
+++ b/tools/testing/selftests/kvm/include/arm64/arch_timer.h
@@ -18,20 +18,20 @@ enum arch_timer {
#define CTL_ISTATUS (1 << 2)
#define msec_to_cycles(msec) \
- (timer_get_cntfrq() * (uint64_t)(msec) / 1000)
+ (timer_get_cntfrq() * (u64)(msec) / 1000)
#define usec_to_cycles(usec) \
- (timer_get_cntfrq() * (uint64_t)(usec) / 1000000)
+ (timer_get_cntfrq() * (u64)(usec) / 1000000)
#define cycles_to_usec(cycles) \
- ((uint64_t)(cycles) * 1000000 / timer_get_cntfrq())
+ ((u64)(cycles) * 1000000 / timer_get_cntfrq())
static inline uint32_t timer_get_cntfrq(void)
{
return read_sysreg(cntfrq_el0);
}
-static inline uint64_t timer_get_cntct(enum arch_timer timer)
+static inline u64 timer_get_cntct(enum arch_timer timer)
{
isb();
@@ -48,7 +48,7 @@ static inline uint64_t timer_get_cntct(enum arch_timer timer)
return 0;
}
-static inline void timer_set_cval(enum arch_timer timer, uint64_t cval)
+static inline void timer_set_cval(enum arch_timer timer, u64 cval)
{
switch (timer) {
case VIRTUAL:
@@ -64,7 +64,7 @@ static inline void timer_set_cval(enum arch_timer timer, uint64_t cval)
isb();
}
-static inline uint64_t timer_get_cval(enum arch_timer timer)
+static inline u64 timer_get_cval(enum arch_timer timer)
{
switch (timer) {
case VIRTUAL:
@@ -144,8 +144,8 @@ static inline uint32_t timer_get_ctl(enum arch_timer timer)
static inline void timer_set_next_cval_ms(enum arch_timer timer, uint32_t msec)
{
- uint64_t now_ct = timer_get_cntct(timer);
- uint64_t next_ct = now_ct + msec_to_cycles(msec);
+ u64 now_ct = timer_get_cntct(timer);
+ u64 next_ct = now_ct + msec_to_cycles(msec);
timer_set_cval(timer, next_ct);
}
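The conversion macros above are another place where the 64-bit width truly
matters: timer_get_cntfrq() returns a 32-bit value, and freq * msec overflows
32 bits for ordinary inputs, so the (u64) cast must land on an operand of the
multiply, not on the result. A standalone sketch with an assumed 100 MHz
counter:

  typedef unsigned int u32;
  typedef unsigned long long u64;

  u32 freq = 100000000;                /* e.g. timer_get_cntfrq() */
  u32 msec = 60000;                    /* one minute */

  u32 bad  = freq * msec / 1000;       /* 32-bit product wraps before the divide */
  u64 good = freq * (u64)msec / 1000;  /* = 6000000000 cycles */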
diff --git a/tools/testing/selftests/kvm/include/arm64/delay.h b/tools/testing/selftests/kvm/include/arm64/delay.h
index 329e4f5079ea..6a5d4634af2c 100644
--- a/tools/testing/selftests/kvm/include/arm64/delay.h
+++ b/tools/testing/selftests/kvm/include/arm64/delay.h
@@ -8,10 +8,10 @@
#include "arch_timer.h"
-static inline void __delay(uint64_t cycles)
+static inline void __delay(u64 cycles)
{
enum arch_timer timer = VIRTUAL;
- uint64_t start = timer_get_cntct(timer);
+ u64 start = timer_get_cntct(timer);
while ((timer_get_cntct(timer) - start) < cycles)
cpu_relax();
diff --git a/tools/testing/selftests/kvm/include/arm64/gic.h b/tools/testing/selftests/kvm/include/arm64/gic.h
index 7dbecc6daa4e..8231cad8554e 100644
--- a/tools/testing/selftests/kvm/include/arm64/gic.h
+++ b/tools/testing/selftests/kvm/include/arm64/gic.h
@@ -48,7 +48,7 @@ void gic_set_dir(unsigned int intid);
* split is true, EOI drops the priority and deactivates the interrupt.
*/
void gic_set_eoi_split(bool split);
-void gic_set_priority_mask(uint64_t mask);
+void gic_set_priority_mask(u64 mask);
void gic_set_priority(uint32_t intid, uint32_t prio);
void gic_irq_set_active(unsigned int intid);
void gic_irq_clear_active(unsigned int intid);
diff --git a/tools/testing/selftests/kvm/include/arm64/processor.h b/tools/testing/selftests/kvm/include/arm64/processor.h
index 68b692e1cc32..4d8144a0e025 100644
--- a/tools/testing/selftests/kvm/include/arm64/processor.h
+++ b/tools/testing/selftests/kvm/include/arm64/processor.h
@@ -175,7 +175,7 @@ void vm_install_exception_handler(struct kvm_vm *vm,
void vm_install_sync_handler(struct kvm_vm *vm,
int vector, int ec, handler_fn handler);
-uint64_t *virt_get_pte_hva(struct kvm_vm *vm, gva_t gva);
+u64 *virt_get_pte_hva(struct kvm_vm *vm, gva_t gva);
static inline void cpu_relax(void)
{
@@ -272,9 +272,9 @@ struct arm_smccc_res {
* @res: pointer to write the return values from registers x0-x3
*
*/
-void smccc_hvc(uint32_t function_id, uint64_t arg0, uint64_t arg1,
- uint64_t arg2, uint64_t arg3, uint64_t arg4, uint64_t arg5,
- uint64_t arg6, struct arm_smccc_res *res);
+void smccc_hvc(uint32_t function_id, u64 arg0, u64 arg1,
+ u64 arg2, u64 arg3, u64 arg4, u64 arg5,
+ u64 arg6, struct arm_smccc_res *res);
/**
* smccc_smc - Invoke a SMCCC function using the smc conduit
@@ -283,9 +283,9 @@ void smccc_hvc(uint32_t function_id, uint64_t arg0, uint64_t arg1,
* @res: pointer to write the return values from registers x0-x3
*
*/
-void smccc_smc(uint32_t function_id, uint64_t arg0, uint64_t arg1,
- uint64_t arg2, uint64_t arg3, uint64_t arg4, uint64_t arg5,
- uint64_t arg6, struct arm_smccc_res *res);
+void smccc_smc(uint32_t function_id, u64 arg0, u64 arg1,
+ u64 arg2, u64 arg3, u64 arg4, u64 arg5,
+ u64 arg6, struct arm_smccc_res *res);
/* Execute a Wait For Interrupt instruction. */
void wfi(void);
diff --git a/tools/testing/selftests/kvm/include/arm64/vgic.h b/tools/testing/selftests/kvm/include/arm64/vgic.h
index c481d0c00a5d..e88190d49c3d 100644
--- a/tools/testing/selftests/kvm/include/arm64/vgic.h
+++ b/tools/testing/selftests/kvm/include/arm64/vgic.h
@@ -11,9 +11,9 @@
#include "kvm_util.h"
#define REDIST_REGION_ATTR_ADDR(count, base, flags, index) \
- (((uint64_t)(count) << 52) | \
- ((uint64_t)((base) >> 16) << 16) | \
- ((uint64_t)(flags) << 12) | \
+ (((u64)(count) << 52) | \
+ ((u64)((base) >> 16) << 16) | \
+ ((u64)(flags) << 12) | \
index)
int vgic_v3_setup(struct kvm_vm *vm, unsigned int nr_vcpus, uint32_t nr_irqs);
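REDIST_REGION_ATTR_ADDR() is pure bit-packing, and the u64 casts are what
keep count << 52 from being evaluated (and overflowing) as int. Per the KVM
vGICv3 device documentation the layout is count[63:52] | base[51:16] |
flags[15:12] | index[11:0]; a worked example for a two-vCPU region at GPA
0x100000:

  u64 attr = REDIST_REGION_ATTR_ADDR(2, 0x100000, 0, 0);
  /* = (2ULL << 52) | 0x100000 = 0x0020000000100000 */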
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 67ac59f66b6e..816c4199c168 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -91,7 +91,7 @@ struct kvm_vm {
unsigned int page_shift;
unsigned int pa_bits;
unsigned int va_bits;
- uint64_t max_gfn;
+ u64 max_gfn;
struct list_head vcpus;
struct userspace_mem_regions regions;
struct sparsebit *vpages_valid;
@@ -102,7 +102,7 @@ struct kvm_vm {
gpa_t pgd;
gva_t handlers;
uint32_t dirty_ring_size;
- uint64_t gpa_tag_mask;
+ u64 gpa_tag_mask;
struct kvm_vm_arch arch;
@@ -188,7 +188,7 @@ struct vm_shape {
uint16_t pad1;
};
-kvm_static_assert(sizeof(struct vm_shape) == sizeof(uint64_t));
+kvm_static_assert(sizeof(struct vm_shape) == sizeof(u64));
#define VM_TYPE_DEFAULT 0
@@ -365,21 +365,22 @@ static inline int vm_check_cap(struct kvm_vm *vm, long cap)
return ret;
}
-static inline int __vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0)
+static inline int __vm_enable_cap(struct kvm_vm *vm, uint32_t cap, u64 arg0)
{
struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
return __vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
}
-static inline void vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0)
+
+static inline void vm_enable_cap(struct kvm_vm *vm, uint32_t cap, u64 arg0)
{
struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
}
-static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
- uint64_t size, uint64_t attributes)
+static inline void vm_set_memory_attributes(struct kvm_vm *vm, u64 gpa,
+ u64 size, u64 attributes)
{
struct kvm_memory_attributes attr = {
.attributes = attributes,
@@ -399,29 +400,25 @@ static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
}
-static inline void vm_mem_set_private(struct kvm_vm *vm, uint64_t gpa,
- uint64_t size)
+static inline void vm_mem_set_private(struct kvm_vm *vm, u64 gpa, u64 size)
{
vm_set_memory_attributes(vm, gpa, size, KVM_MEMORY_ATTRIBUTE_PRIVATE);
}
-static inline void vm_mem_set_shared(struct kvm_vm *vm, uint64_t gpa,
- uint64_t size)
+static inline void vm_mem_set_shared(struct kvm_vm *vm, u64 gpa, u64 size)
{
vm_set_memory_attributes(vm, gpa, size, 0);
}
-void vm_guest_mem_fallocate(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
+void vm_guest_mem_fallocate(struct kvm_vm *vm, u64 gpa, u64 size,
bool punch_hole);
-static inline void vm_guest_mem_punch_hole(struct kvm_vm *vm, uint64_t gpa,
- uint64_t size)
+static inline void vm_guest_mem_punch_hole(struct kvm_vm *vm, u64 gpa, u64 size)
{
vm_guest_mem_fallocate(vm, gpa, size, true);
}
-static inline void vm_guest_mem_allocate(struct kvm_vm *vm, uint64_t gpa,
- uint64_t size)
+static inline void vm_guest_mem_allocate(struct kvm_vm *vm, u64 gpa, u64 size)
{
vm_guest_mem_fallocate(vm, gpa, size, false);
}
@@ -445,7 +442,7 @@ static inline void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log)
}
static inline void kvm_vm_clear_dirty_log(struct kvm_vm *vm, int slot, void *log,
- uint64_t first_page, uint32_t num_pages)
+ u64 first_page, uint32_t num_pages)
{
struct kvm_clear_dirty_log args = {
.dirty_bitmap = log,
@@ -463,8 +460,8 @@ static inline uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm)
}
static inline void kvm_vm_register_coalesced_io(struct kvm_vm *vm,
- uint64_t address,
- uint64_t size, bool pio)
+ u64 address,
+ u64 size, bool pio)
{
struct kvm_coalesced_mmio_zone zone = {
.addr = address,
@@ -476,8 +473,8 @@ static inline void kvm_vm_register_coalesced_io(struct kvm_vm *vm,
}
static inline void kvm_vm_unregister_coalesced_io(struct kvm_vm *vm,
- uint64_t address,
- uint64_t size, bool pio)
+ u64 address,
+ u64 size, bool pio)
{
struct kvm_coalesced_mmio_zone zone = {
.addr = address,
@@ -532,15 +529,15 @@ static inline struct kvm_stats_desc *get_stats_descriptor(struct kvm_stats_desc
}
void read_stat_data(int stats_fd, struct kvm_stats_header *header,
- struct kvm_stats_desc *desc, uint64_t *data,
+ struct kvm_stats_desc *desc, u64 *data,
size_t max_elements);
void kvm_get_stat(struct kvm_binary_stats *stats, const char *name,
- uint64_t *data, size_t max_elements);
+ u64 *data, size_t max_elements);
#define __get_stat(stats, stat) \
({ \
- uint64_t data; \
+ u64 data; \
\
kvm_get_stat(stats, #stat, &data, 1); \
data; \
@@ -551,8 +548,7 @@ void kvm_get_stat(struct kvm_binary_stats *stats, const char *name,
void vm_create_irqchip(struct kvm_vm *vm);
-static inline int __vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
- uint64_t flags)
+static inline int __vm_create_guest_memfd(struct kvm_vm *vm, u64 size, u64 flags)
{
struct kvm_create_guest_memfd guest_memfd = {
.size = size,
@@ -562,8 +558,7 @@ static inline int __vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
return __vm_ioctl(vm, KVM_CREATE_GUEST_MEMFD, &guest_memfd);
}
-static inline int vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
- uint64_t flags)
+static inline int vm_create_guest_memfd(struct kvm_vm *vm, u64 size, u64 flags)
{
int fd = __vm_create_guest_memfd(vm, size, flags);
@@ -572,23 +567,23 @@ static inline int vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
}
void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva);
+ u64 gpa, u64 size, void *hva);
int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva);
+ u64 gpa, u64 size, void *hva);
void vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva,
- uint32_t guest_memfd, uint64_t guest_memfd_offset);
+ u64 gpa, u64 size, void *hva,
+ uint32_t guest_memfd, u64 guest_memfd_offset);
int __vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva,
- uint32_t guest_memfd, uint64_t guest_memfd_offset);
+ u64 gpa, u64 size, void *hva,
+ uint32_t guest_memfd, u64 guest_memfd_offset);
void vm_userspace_mem_region_add(struct kvm_vm *vm,
- enum vm_mem_backing_src_type src_type,
- uint64_t guest_paddr, uint32_t slot, uint64_t npages,
- uint32_t flags);
+ enum vm_mem_backing_src_type src_type,
+ u64 guest_paddr, uint32_t slot, u64 npages,
+ uint32_t flags);
void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
- uint64_t guest_paddr, uint32_t slot, uint64_t npages,
- uint32_t flags, int guest_memfd_fd, uint64_t guest_memfd_offset);
+ u64 guest_paddr, uint32_t slot, u64 npages,
+ uint32_t flags, int guest_memfd_fd, u64 guest_memfd_offset);
#ifndef vm_arch_has_protected_memory
static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
@@ -598,7 +593,7 @@ static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
#endif
void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
-void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
+void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, u64 new_gpa);
void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
@@ -614,7 +609,7 @@ gva_t __gva_alloc_page(struct kvm_vm *vm,
enum kvm_mem_region_type type);
gva_t gva_alloc_page(struct kvm_vm *vm);
-void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+void virt_map(struct kvm_vm *vm, u64 vaddr, u64 paddr,
unsigned int npages);
void *addr_gpa2hva(struct kvm_vm *vm, gpa_t gpa);
void *addr_gva2hva(struct kvm_vm *vm, gva_t gva);
@@ -642,7 +637,7 @@ void vcpu_run_complete_io(struct kvm_vcpu *vcpu);
struct kvm_reg_list *vcpu_get_reg_list(struct kvm_vcpu *vcpu);
static inline void vcpu_enable_cap(struct kvm_vcpu *vcpu, uint32_t cap,
- uint64_t arg0)
+ u64 arg0)
{
struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
@@ -697,31 +692,34 @@ static inline void vcpu_fpu_set(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
vcpu_ioctl(vcpu, KVM_SET_FPU, fpu);
}
-static inline int __vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id, void *addr)
+static inline int __vcpu_get_reg(struct kvm_vcpu *vcpu, u64 id, void *addr)
{
- struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)addr };
+ struct kvm_one_reg reg = { .id = id, .addr = (u64)addr };
return __vcpu_ioctl(vcpu, KVM_GET_ONE_REG, &reg);
}
-static inline int __vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
+
+static inline int __vcpu_set_reg(struct kvm_vcpu *vcpu, u64 id, u64 val)
{
- struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
+ struct kvm_one_reg reg = { .id = id, .addr = (u64)&val };
return __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, &reg);
}
-static inline uint64_t vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id)
+
+static inline u64 vcpu_get_reg(struct kvm_vcpu *vcpu, u64 id)
{
- uint64_t val;
- struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
+ u64 val;
+ struct kvm_one_reg reg = { .id = id, .addr = (u64)&val };
TEST_ASSERT(KVM_REG_SIZE(id) <= sizeof(val), "Reg %lx too big", id);
vcpu_ioctl(vcpu, KVM_GET_ONE_REG, &reg);
return val;
}
-static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
+
+static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, u64 id, u64 val)
{
- struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
+ struct kvm_one_reg reg = { .id = id, .addr = (u64)&val };
TEST_ASSERT(KVM_REG_SIZE(id) <= sizeof(val), "Reg %lx too big", id);
@@ -766,29 +764,29 @@ static inline int vcpu_get_stats_fd(struct kvm_vcpu *vcpu)
return fd;
}
-int __kvm_has_device_attr(int dev_fd, uint32_t group, uint64_t attr);
+int __kvm_has_device_attr(int dev_fd, uint32_t group, u64 attr);
-static inline void kvm_has_device_attr(int dev_fd, uint32_t group, uint64_t attr)
+static inline void kvm_has_device_attr(int dev_fd, uint32_t group, u64 attr)
{
int ret = __kvm_has_device_attr(dev_fd, group, attr);
TEST_ASSERT(!ret, "KVM_HAS_DEVICE_ATTR failed, rc: %i errno: %i", ret, errno);
}
-int __kvm_device_attr_get(int dev_fd, uint32_t group, uint64_t attr, void *val);
+int __kvm_device_attr_get(int dev_fd, uint32_t group, u64 attr, void *val);
static inline void kvm_device_attr_get(int dev_fd, uint32_t group,
- uint64_t attr, void *val)
+ u64 attr, void *val)
{
int ret = __kvm_device_attr_get(dev_fd, group, attr, val);
TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_GET_DEVICE_ATTR, ret));
}
-int __kvm_device_attr_set(int dev_fd, uint32_t group, uint64_t attr, void *val);
+int __kvm_device_attr_set(int dev_fd, uint32_t group, u64 attr, void *val);
static inline void kvm_device_attr_set(int dev_fd, uint32_t group,
- uint64_t attr, void *val)
+ u64 attr, void *val)
{
int ret = __kvm_device_attr_set(dev_fd, group, attr, val);
@@ -796,45 +794,45 @@ static inline void kvm_device_attr_set(int dev_fd, uint32_t group,
}
static inline int __vcpu_has_device_attr(struct kvm_vcpu *vcpu, uint32_t group,
- uint64_t attr)
+ u64 attr)
{
return __kvm_has_device_attr(vcpu->fd, group, attr);
}
static inline void vcpu_has_device_attr(struct kvm_vcpu *vcpu, uint32_t group,
- uint64_t attr)
+ u64 attr)
{
kvm_has_device_attr(vcpu->fd, group, attr);
}
static inline int __vcpu_device_attr_get(struct kvm_vcpu *vcpu, uint32_t group,
- uint64_t attr, void *val)
+ u64 attr, void *val)
{
return __kvm_device_attr_get(vcpu->fd, group, attr, val);
}
static inline void vcpu_device_attr_get(struct kvm_vcpu *vcpu, uint32_t group,
- uint64_t attr, void *val)
+ u64 attr, void *val)
{
kvm_device_attr_get(vcpu->fd, group, attr, val);
}
static inline int __vcpu_device_attr_set(struct kvm_vcpu *vcpu, uint32_t group,
- uint64_t attr, void *val)
+ u64 attr, void *val)
{
return __kvm_device_attr_set(vcpu->fd, group, attr, val);
}
static inline void vcpu_device_attr_set(struct kvm_vcpu *vcpu, uint32_t group,
- uint64_t attr, void *val)
+ u64 attr, void *val)
{
kvm_device_attr_set(vcpu->fd, group, attr, val);
}
-int __kvm_test_create_device(struct kvm_vm *vm, uint64_t type);
-int __kvm_create_device(struct kvm_vm *vm, uint64_t type);
+int __kvm_test_create_device(struct kvm_vm *vm, u64 type);
+int __kvm_create_device(struct kvm_vm *vm, u64 type);
-static inline int kvm_create_device(struct kvm_vm *vm, uint64_t type)
+static inline int kvm_create_device(struct kvm_vm *vm, u64 type)
{
int fd = __kvm_create_device(vm, type);
@@ -850,7 +848,7 @@ void *vcpu_map_dirty_ring(struct kvm_vcpu *vcpu);
* Input Args:
* vm - Virtual Machine
* num - number of arguments
- * ... - arguments, each of type uint64_t
+ * ... - arguments, each of type u64
*
* Output Args: None
*
@@ -858,7 +856,7 @@ void *vcpu_map_dirty_ring(struct kvm_vcpu *vcpu);
*
* Sets the first @num input parameters for the function at @vcpu's entry point,
* per the C calling convention of the architecture, to the values given as
- * variable args. Each of the variable args is expected to be of type uint64_t.
+ * variable args. Each of the variable args is expected to be of type u64.
* The maximum @num can be is specific to the architecture.
*/
void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...);
@@ -902,7 +900,7 @@ static inline gpa_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
*/
struct kvm_vm *____vm_create(struct vm_shape shape);
struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
- uint64_t nr_extra_pages);
+ u64 nr_extra_pages);
static inline struct kvm_vm *vm_create_barebones(void)
{
@@ -925,7 +923,7 @@ static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
}
struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
- uint64_t extra_mem_pages,
+ u64 extra_mem_pages,
void *guest_code, struct kvm_vcpu *vcpus[]);
static inline struct kvm_vm *vm_create_with_vcpus(uint32_t nr_vcpus,
@@ -939,7 +937,7 @@ static inline struct kvm_vm *vm_create_with_vcpus(uint32_t nr_vcpus,
struct kvm_vm *__vm_create_shape_with_one_vcpu(struct vm_shape shape,
struct kvm_vcpu **vcpu,
- uint64_t extra_mem_pages,
+ u64 extra_mem_pages,
void *guest_code);
/*
@@ -947,7 +945,7 @@ struct kvm_vm *__vm_create_shape_with_one_vcpu(struct vm_shape shape,
* additional pages of guest memory. Returns the VM and vCPU (via out param).
*/
static inline struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
- uint64_t extra_mem_pages,
+ u64 extra_mem_pages,
void *guest_code)
{
return __vm_create_shape_with_one_vcpu(VM_SHAPE_DEFAULT, vcpu,
@@ -1080,9 +1078,9 @@ static inline void virt_pgd_alloc(struct kvm_vm *vm)
* Within @vm, creates a virtual translation for the page starting
* at @vaddr to the page starting at @paddr.
*/
-void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr);
+void virt_arch_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr);
-static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
+static inline void virt_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr)
{
virt_arch_pg_map(vm, vaddr, paddr);
}
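The ONE_REG wrappers above are the densest cluster of casts in the series,
because struct kvm_one_reg carries a userspace pointer in a u64 field. Usage
is unchanged by the rename; for example, a round-trip in the style of the
vpmu test earlier in this patch:

  u64 pmcr = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0));

  vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr | ARMV8_PMU_PMCR_E);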
diff --git a/tools/testing/selftests/kvm/include/kvm_util_types.h b/tools/testing/selftests/kvm/include/kvm_util_types.h
index 224a29cea790..34f610ecd670 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_types.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_types.h
@@ -14,7 +14,7 @@
#define __kvm_static_assert(expr, msg, ...) _Static_assert(expr, msg)
#define kvm_static_assert(expr, ...) __kvm_static_assert(expr, ##__VA_ARGS__, #expr)
-typedef uint64_t gpa_t; /* Virtual Machine (Guest) physical address */
-typedef uint64_t gva_t; /* Virtual Machine (Guest) virtual address */
+typedef u64 gpa_t; /* Virtual Machine (Guest) physical address */
+typedef u64 gva_t; /* Virtual Machine (Guest) virtual address */
#endif /* SELFTEST_KVM_UTIL_TYPES_H */
diff --git a/tools/testing/selftests/kvm/include/memstress.h b/tools/testing/selftests/kvm/include/memstress.h
index 9071eb6dea60..71296909302c 100644
--- a/tools/testing/selftests/kvm/include/memstress.h
+++ b/tools/testing/selftests/kvm/include/memstress.h
@@ -20,9 +20,9 @@
#define MEMSTRESS_MEM_SLOT_INDEX 1
struct memstress_vcpu_args {
- uint64_t gpa;
- uint64_t gva;
- uint64_t pages;
+ u64 gpa;
+ u64 gva;
+ u64 pages;
/* Only used by the host userspace part of the vCPU thread */
struct kvm_vcpu *vcpu;
@@ -32,9 +32,9 @@ struct memstress_vcpu_args {
struct memstress_args {
struct kvm_vm *vm;
/* The starting address and size of the guest test region. */
- uint64_t gpa;
- uint64_t size;
- uint64_t guest_page_size;
+ u64 gpa;
+ u64 size;
+ u64 guest_page_size;
uint32_t random_seed;
uint32_t write_percent;
@@ -56,7 +56,7 @@ struct memstress_args {
extern struct memstress_args memstress_args;
struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
- uint64_t vcpu_memory_bytes, int slots,
+ u64 vcpu_memory_bytes, int slots,
enum vm_mem_backing_src_type backing_src,
bool partition_vcpu_memory_access);
void memstress_destroy_vm(struct kvm_vm *vm);
@@ -68,15 +68,15 @@ void memstress_start_vcpu_threads(int vcpus, void (*vcpu_fn)(struct memstress_vc
void memstress_join_vcpu_threads(int vcpus);
void memstress_guest_code(uint32_t vcpu_id);
-uint64_t memstress_nested_pages(int nr_vcpus);
+u64 memstress_nested_pages(int nr_vcpus);
void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vcpus[]);
void memstress_enable_dirty_logging(struct kvm_vm *vm, int slots);
void memstress_disable_dirty_logging(struct kvm_vm *vm, int slots);
void memstress_get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int slots);
void memstress_clear_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[],
- int slots, uint64_t pages_per_slot);
-unsigned long **memstress_alloc_bitmaps(int slots, uint64_t pages_per_slot);
+ int slots, u64 pages_per_slot);
+unsigned long **memstress_alloc_bitmaps(int slots, u64 pages_per_slot);
void memstress_free_bitmaps(unsigned long *bitmaps[], int slots);
#endif /* SELFTEST_KVM_MEMSTRESS_H */
diff --git a/tools/testing/selftests/kvm/include/riscv/arch_timer.h b/tools/testing/selftests/kvm/include/riscv/arch_timer.h
index 225d81dad064..66ed7e36a7cb 100644
--- a/tools/testing/selftests/kvm/include/riscv/arch_timer.h
+++ b/tools/testing/selftests/kvm/include/riscv/arch_timer.h
@@ -14,25 +14,25 @@
static unsigned long timer_freq;
#define msec_to_cycles(msec) \
- ((timer_freq) * (uint64_t)(msec) / 1000)
+ ((timer_freq) * (u64)(msec) / 1000)
#define usec_to_cycles(usec) \
- ((timer_freq) * (uint64_t)(usec) / 1000000)
+ ((timer_freq) * (u64)(usec) / 1000000)
#define cycles_to_usec(cycles) \
- ((uint64_t)(cycles) * 1000000 / (timer_freq))
+ ((u64)(cycles) * 1000000 / (timer_freq))
-static inline uint64_t timer_get_cycles(void)
+static inline u64 timer_get_cycles(void)
{
return csr_read(CSR_TIME);
}
-static inline void timer_set_cmp(uint64_t cval)
+static inline void timer_set_cmp(u64 cval)
{
csr_write(CSR_STIMECMP, cval);
}
-static inline uint64_t timer_get_cmp(void)
+static inline u64 timer_get_cmp(void)
{
return csr_read(CSR_STIMECMP);
}
@@ -49,15 +49,15 @@ static inline void timer_irq_disable(void)
static inline void timer_set_next_cmp_ms(uint32_t msec)
{
- uint64_t now_ct = timer_get_cycles();
- uint64_t next_ct = now_ct + msec_to_cycles(msec);
+ u64 now_ct = timer_get_cycles();
+ u64 next_ct = now_ct + msec_to_cycles(msec);
timer_set_cmp(next_ct);
}
-static inline void __delay(uint64_t cycles)
+static inline void __delay(u64 cycles)
{
- uint64_t start = timer_get_cycles();
+ u64 start = timer_get_cycles();
while ((timer_get_cycles() - start) < cycles)
cpu_relax();
diff --git a/tools/testing/selftests/kvm/include/riscv/processor.h b/tools/testing/selftests/kvm/include/riscv/processor.h
index 5f389166338c..f877b8b2571e 100644
--- a/tools/testing/selftests/kvm/include/riscv/processor.h
+++ b/tools/testing/selftests/kvm/include/riscv/processor.h
@@ -11,8 +11,7 @@
#include <asm/csr.h>
#include "kvm_util.h"
-static inline uint64_t __kvm_reg_id(uint64_t type, uint64_t subtype,
- uint64_t idx, uint64_t size)
+static inline u64 __kvm_reg_id(u64 type, u64 subtype, u64 idx, u64 size)
{
return KVM_REG_RISCV | type | subtype | idx | size;
}
@@ -48,14 +47,14 @@ static inline uint64_t __kvm_reg_id(uint64_t type, uint64_t subtype,
KVM_REG_RISCV_SBI_SINGLE, \
idx, KVM_REG_SIZE_ULONG)
-bool __vcpu_has_ext(struct kvm_vcpu *vcpu, uint64_t ext);
+bool __vcpu_has_ext(struct kvm_vcpu *vcpu, u64 ext);
-static inline bool __vcpu_has_isa_ext(struct kvm_vcpu *vcpu, uint64_t isa_ext)
+static inline bool __vcpu_has_isa_ext(struct kvm_vcpu *vcpu, u64 isa_ext)
{
return __vcpu_has_ext(vcpu, RISCV_ISA_EXT_REG(isa_ext));
}
-static inline bool __vcpu_has_sbi_ext(struct kvm_vcpu *vcpu, uint64_t sbi_ext)
+static inline bool __vcpu_has_sbi_ext(struct kvm_vcpu *vcpu, u64 sbi_ext)
{
return __vcpu_has_ext(vcpu, RISCV_SBI_EXT_REG(sbi_ext));
}
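__kvm_reg_id() simply ORs the KVM_REG_RISCV space, register type, subtype,
index, and access size into one u64, so the helpers above reduce to a single
register lookup on a computed ID. A usage sketch, assuming the Sstc extension
ID from the uapi headers:

  if (__vcpu_has_isa_ext(vcpu, KVM_RISCV_ISA_EXT_SSTC))
          /* the guest may use the stimecmp CSR directly */;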
diff --git a/tools/testing/selftests/kvm/include/s390/diag318_test_handler.h b/tools/testing/selftests/kvm/include/s390/diag318_test_handler.h
index b0ed71302722..6deaf18fec22 100644
--- a/tools/testing/selftests/kvm/include/s390/diag318_test_handler.h
+++ b/tools/testing/selftests/kvm/include/s390/diag318_test_handler.h
@@ -8,6 +8,6 @@
#ifndef SELFTEST_KVM_DIAG318_TEST_HANDLER
#define SELFTEST_KVM_DIAG318_TEST_HANDLER
-uint64_t get_diag318_info(void);
+u64 get_diag318_info(void);
#endif
diff --git a/tools/testing/selftests/kvm/include/s390/facility.h b/tools/testing/selftests/kvm/include/s390/facility.h
index 00a1ced6538b..41a265742666 100644
--- a/tools/testing/selftests/kvm/include/s390/facility.h
+++ b/tools/testing/selftests/kvm/include/s390/facility.h
@@ -16,7 +16,7 @@
/* alt_stfle_fac_list[16] + stfle_fac_list[16] */
#define NB_STFL_DOUBLEWORDS 32
-extern uint64_t stfl_doublewords[NB_STFL_DOUBLEWORDS];
+extern u64 stfl_doublewords[NB_STFL_DOUBLEWORDS];
extern bool stfle_flag;
static inline bool test_bit_inv(unsigned long nr, const unsigned long *ptr)
@@ -24,7 +24,7 @@ static inline bool test_bit_inv(unsigned long nr, const unsigned long *ptr)
return test_bit(nr ^ (BITS_PER_LONG - 1), ptr);
}
-static inline void stfle(uint64_t *fac, unsigned int nb_doublewords)
+static inline void stfle(u64 *fac, unsigned int nb_doublewords)
{
register unsigned long r0 asm("0") = nb_doublewords - 1;
diff --git a/tools/testing/selftests/kvm/include/sparsebit.h b/tools/testing/selftests/kvm/include/sparsebit.h
index bc760761e1a3..e027e5790946 100644
--- a/tools/testing/selftests/kvm/include/sparsebit.h
+++ b/tools/testing/selftests/kvm/include/sparsebit.h
@@ -6,7 +6,7 @@
*
* Header file that describes API to the sparsebit library.
* This library provides a memory efficient means of storing
- * the settings of bits indexed via a uint64_t. Memory usage
+ * the settings of bits indexed via a u64. Memory usage
* is reasonable, significantly less than (2^64 / 8) bytes, as
* long as bits that are mostly set or mostly cleared are close
* to each other. This library is efficient in memory usage
@@ -25,8 +25,8 @@ extern "C" {
#endif
struct sparsebit;
-typedef uint64_t sparsebit_idx_t;
-typedef uint64_t sparsebit_num_t;
+typedef u64 sparsebit_idx_t;
+typedef u64 sparsebit_num_t;
struct sparsebit *sparsebit_alloc(void);
void sparsebit_free(struct sparsebit **sbitp);
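A minimal usage sketch of the API above, showing why the u64-wide index types
matter: a sparsebit can cheaply represent ranges far larger than any bitmap
that could be allocated (sparsebit_set_num() and sparsebit_is_set() are
declared in the same header, outside this hunk):

  struct sparsebit *valid = sparsebit_alloc();

  sparsebit_set_num(valid, 0, 1ULL << 32);  /* 2^32 consecutive bits */
  if (sparsebit_is_set(valid, 12345))
          /* ... */;
  sparsebit_free(&valid);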
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 77d13d7920cb..7cd539776533 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -20,6 +20,8 @@
#include <sys/mman.h>
#include "kselftest.h"
+#include <linux/types.h>
+
#define msecs_to_usecs(msec) ((msec) * 1000ULL)
static inline __printf(1, 2) int _no_printf(const char *format, ...) { return 0; }
@@ -108,9 +110,9 @@ static inline bool guest_random_bool(struct guest_random_state *state)
return __guest_random_bool(state, 50);
}
-static inline uint64_t guest_random_u64(struct guest_random_state *state)
+static inline u64 guest_random_u64(struct guest_random_state *state)
{
- return ((uint64_t)guest_random_u32(state) << 32) | guest_random_u32(state);
+ return ((u64)guest_random_u32(state) << 32) | guest_random_u32(state);
}
enum vm_mem_backing_src_type {
@@ -169,18 +171,18 @@ static inline bool backing_src_can_be_huge(enum vm_mem_backing_src_type t)
}
/* Aligns x up to the next multiple of size. Size must be a power of 2. */
-static inline uint64_t align_up(uint64_t x, uint64_t size)
+static inline u64 align_up(u64 x, u64 size)
{
- uint64_t mask = size - 1;
+ u64 mask = size - 1;
TEST_ASSERT(size != 0 && !(size & (size - 1)),
"size not a power of 2: %lu", size);
return ((x + mask) & ~mask);
}
-static inline uint64_t align_down(uint64_t x, uint64_t size)
+static inline u64 align_down(u64 x, u64 size)
{
- uint64_t x_aligned_up = align_up(x, size);
+ u64 x_aligned_up = align_up(x, size);
if (x == x_aligned_up)
return x;
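The power-of-2 trick in align_up() is worth spelling out once, since the
converted selftests lean on these helpers everywhere:

  assert(align_up(0x1234, 0x1000) == 0x2000);   /* (0x1234 + 0xfff) & ~0xfff */
  assert(align_up(0x2000, 0x1000) == 0x2000);   /* already aligned: unchanged */
  assert(align_down(0x1234, 0x1000) == 0x1000); /* aligns up, then backs off one size */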
diff --git a/tools/testing/selftests/kvm/include/timer_test.h b/tools/testing/selftests/kvm/include/timer_test.h
index 9b6edaafe6d4..9501c6c825e2 100644
--- a/tools/testing/selftests/kvm/include/timer_test.h
+++ b/tools/testing/selftests/kvm/include/timer_test.h
@@ -24,15 +24,15 @@ struct test_args {
uint32_t migration_freq_ms;
uint32_t timer_err_margin_us;
/* Members of struct kvm_arm_counter_offset */
- uint64_t counter_offset;
- uint64_t reserved;
+ u64 counter_offset;
+ u64 reserved;
};
/* Shared variables between host and guest */
struct test_vcpu_shared_data {
uint32_t nr_iter;
int guest_stage;
- uint64_t xcnt;
+ u64 xcnt;
};
extern struct test_args test_args;
diff --git a/tools/testing/selftests/kvm/include/ucall_common.h b/tools/testing/selftests/kvm/include/ucall_common.h
index 1db399c00d02..cbdcb0a50c4f 100644
--- a/tools/testing/selftests/kvm/include/ucall_common.h
+++ b/tools/testing/selftests/kvm/include/ucall_common.h
@@ -21,8 +21,8 @@ enum {
#define UCALL_BUFFER_LEN 1024
struct ucall {
- uint64_t cmd;
- uint64_t args[UCALL_MAX_ARGS];
+ u64 cmd;
+ u64 args[UCALL_MAX_ARGS];
char buffer[UCALL_BUFFER_LEN];
/* Host virtual address of this struct. */
@@ -33,14 +33,14 @@ void ucall_arch_init(struct kvm_vm *vm, gpa_t mmio_gpa);
void ucall_arch_do_ucall(gva_t uc);
void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu);
-void ucall(uint64_t cmd, int nargs, ...);
-__printf(2, 3) void ucall_fmt(uint64_t cmd, const char *fmt, ...);
-__printf(5, 6) void ucall_assert(uint64_t cmd, const char *exp,
+void ucall(u64 cmd, int nargs, ...);
+__printf(2, 3) void ucall_fmt(u64 cmd, const char *fmt, ...);
+__printf(5, 6) void ucall_assert(u64 cmd, const char *exp,
const char *file, unsigned int line,
const char *fmt, ...);
-uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc);
+u64 get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc);
void ucall_init(struct kvm_vm *vm, gpa_t mmio_gpa);
-int ucall_nr_pages_required(uint64_t page_size);
+int ucall_nr_pages_required(u64 page_size);
/*
* Perform userspace call without any associated data. This bare call avoids
diff --git a/tools/testing/selftests/kvm/include/userfaultfd_util.h b/tools/testing/selftests/kvm/include/userfaultfd_util.h
index 60f7f9d435dc..0bc1dc16600e 100644
--- a/tools/testing/selftests/kvm/include/userfaultfd_util.h
+++ b/tools/testing/selftests/kvm/include/userfaultfd_util.h
@@ -25,7 +25,7 @@ struct uffd_reader_args {
struct uffd_desc {
int uffd;
- uint64_t num_readers;
+ u64 num_readers;
/* Holds the write ends of the pipes for killing the readers. */
int *pipefds;
pthread_t *readers;
@@ -33,8 +33,8 @@ struct uffd_desc {
};
struct uffd_desc *uffd_setup_demand_paging(int uffd_mode, useconds_t delay,
- void *hva, uint64_t len,
- uint64_t num_readers,
+ void *hva, u64 len,
+ u64 num_readers,
uffd_handler_t handler);
void uffd_stop_demand_paging(struct uffd_desc *uffd);
diff --git a/tools/testing/selftests/kvm/include/x86/apic.h b/tools/testing/selftests/kvm/include/x86/apic.h
index 80fe9f69b38d..484e9a234346 100644
--- a/tools/testing/selftests/kvm/include/x86/apic.h
+++ b/tools/testing/selftests/kvm/include/x86/apic.h
@@ -87,17 +87,17 @@ static inline void xapic_write_reg(unsigned int reg, uint32_t val)
((volatile uint32_t *)APIC_DEFAULT_GPA)[reg >> 2] = val;
}
-static inline uint64_t x2apic_read_reg(unsigned int reg)
+static inline u64 x2apic_read_reg(unsigned int reg)
{
return rdmsr(APIC_BASE_MSR + (reg >> 4));
}
-static inline uint8_t x2apic_write_reg_safe(unsigned int reg, uint64_t value)
+static inline uint8_t x2apic_write_reg_safe(unsigned int reg, u64 value)
{
return wrmsr_safe(APIC_BASE_MSR + (reg >> 4), value);
}
-static inline void x2apic_write_reg(unsigned int reg, uint64_t value)
+static inline void x2apic_write_reg(unsigned int reg, u64 value)
{
uint8_t fault = x2apic_write_reg_safe(reg, value);
@@ -105,7 +105,7 @@ static inline void x2apic_write_reg(unsigned int reg, uint64_t value)
fault, APIC_BASE_MSR + (reg >> 4), value);
}
-static inline void x2apic_write_reg_fault(unsigned int reg, uint64_t value)
+static inline void x2apic_write_reg_fault(unsigned int reg, u64 value)
{
uint8_t fault = x2apic_write_reg_safe(reg, value);
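The reg >> 4 in the helpers above maps each 16-byte xAPIC MMIO register onto
one MSR starting at APIC_BASE_MSR (0x800). Worked example for the ICR at
offset 0x300:

  /* 0x800 + (0x300 >> 4) = MSR 0x830; a single 64-bit access in x2APIC mode */
  u64 icr = x2apic_read_reg(APIC_ICR);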
diff --git a/tools/testing/selftests/kvm/include/x86/evmcs.h b/tools/testing/selftests/kvm/include/x86/evmcs.h
index 5a74bb30e2f8..5ec5cca6f9e4 100644
--- a/tools/testing/selftests/kvm/include/x86/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86/evmcs.h
@@ -12,7 +12,6 @@
#define u16 uint16_t
#define u32 uint32_t
-#define u64 uint64_t
#define EVMCS_VERSION 1
@@ -245,7 +245,7 @@ static inline void evmcs_enable(void)
enable_evmcs = true;
}
-static inline int evmcs_vmptrld(uint64_t vmcs_pa, void *vmcs)
+static inline int evmcs_vmptrld(u64 vmcs_pa, void *vmcs)
{
current_vp_assist->current_nested_vmcs = vmcs_pa;
current_vp_assist->enlighten_vmentry = 1;
@@ -265,7 +265,7 @@ static inline bool load_evmcs(struct hyperv_test_pages *hv)
return true;
}
-static inline int evmcs_vmptrst(uint64_t *value)
+static inline int evmcs_vmptrst(u64 *value)
{
*value = current_vp_assist->current_nested_vmcs &
~HV_X64_MSR_VP_ASSIST_PAGE_ENABLE;
@@ -273,7 +273,7 @@ static inline int evmcs_vmptrst(uint64_t *value)
return 0;
}
-static inline int evmcs_vmread(uint64_t encoding, uint64_t *value)
+static inline int evmcs_vmread(u64 encoding, u64 *value)
{
switch (encoding) {
case GUEST_RIP:
@@ -672,7 +672,7 @@ static inline int evmcs_vmread(uint64_t encoding, uint64_t *value)
return 0;
}
-static inline int evmcs_vmwrite(uint64_t encoding, uint64_t value)
+static inline int evmcs_vmwrite(u64 encoding, u64 value)
{
switch (encoding) {
case GUEST_RIP:
@@ -1226,9 +1226,9 @@ static inline int evmcs_vmlaunch(void)
"pop %%rbp;"
: [ret]"=&a"(ret)
: [host_rsp]"r"
- ((uint64_t)&current_evmcs->host_rsp),
+ ((u64)&current_evmcs->host_rsp),
[host_rip]"r"
- ((uint64_t)&current_evmcs->host_rip)
+ ((u64)&current_evmcs->host_rip)
: "memory", "cc", "rbx", "r8", "r9", "r10",
"r11", "r12", "r13", "r14", "r15");
return ret;
@@ -1265,9 +1265,9 @@ static inline int evmcs_vmresume(void)
"pop %%rbp;"
: [ret]"=&a"(ret)
: [host_rsp]"r"
- ((uint64_t)&current_evmcs->host_rsp),
+ ((u64)&current_evmcs->host_rsp),
[host_rip]"r"
- ((uint64_t)&current_evmcs->host_rip)
+ ((u64)&current_evmcs->host_rip)
: "memory", "cc", "rbx", "r8", "r9", "r10",
"r11", "r12", "r13", "r14", "r15");
return ret;
diff --git a/tools/testing/selftests/kvm/include/x86/hyperv.h b/tools/testing/selftests/kvm/include/x86/hyperv.h
index eedfff3cf102..2add2123e37b 100644
--- a/tools/testing/selftests/kvm/include/x86/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86/hyperv.h
@@ -256,9 +256,9 @@
*/
static inline uint8_t __hyperv_hypercall(u64 control, gva_t input_address,
gva_t output_address,
- uint64_t *hv_status)
+ u64 *hv_status)
{
- uint64_t error_code;
+ u64 error_code;
uint8_t vector;
/* Note both the hypercall and the "asm safe" clobber r9-r11. */
@@ -277,7 +277,7 @@ static inline uint8_t __hyperv_hypercall(u64 control, gva_t input_address,
static inline void hyperv_hypercall(u64 control, gva_t input_address,
gva_t output_address)
{
- uint64_t hv_status;
+ u64 hv_status;
uint8_t vector;
vector = __hyperv_hypercall(control, input_address, output_address, &hv_status);
@@ -327,22 +327,22 @@ struct hv_vp_assist_page {
extern struct hv_vp_assist_page *current_vp_assist;
-int enable_vp_assist(uint64_t vp_assist_pa, void *vp_assist);
+int enable_vp_assist(u64 vp_assist_pa, void *vp_assist);
struct hyperv_test_pages {
/* VP assist page */
void *vp_assist_hva;
- uint64_t vp_assist_gpa;
+ u64 vp_assist_gpa;
void *vp_assist;
/* Partition assist page */
void *partition_assist_hva;
- uint64_t partition_assist_gpa;
+ u64 partition_assist_gpa;
void *partition_assist;
/* Enlightened VMCS */
void *enlightened_vmcs_hva;
- uint64_t enlightened_vmcs_gpa;
+ u64 enlightened_vmcs_gpa;
void *enlightened_vmcs;
};
diff --git a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
index 36d4b6727cb6..42d125e06114 100644
--- a/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
+++ b/tools/testing/selftests/kvm/include/x86/kvm_util_arch.h
@@ -15,8 +15,8 @@ struct kvm_vm_arch {
gva_t tss;
gva_t idt;
- uint64_t c_bit;
- uint64_t s_bit;
+ u64 c_bit;
+ u64 s_bit;
int sev_fd;
bool is_pt_protected;
};
@@ -40,7 +40,7 @@ do { \
: "+m" (mem) \
: "r" (val) : "memory"); \
} else { \
- uint64_t __old = READ_ONCE(mem); \
+ u64 __old = READ_ONCE(mem); \
\
__asm__ __volatile__(KVM_FEP LOCK_PREFIX "cmpxchg %[new], %[ptr]" \
: [ptr] "+m" (mem), [old] "+a" (__old) \
diff --git a/tools/testing/selftests/kvm/include/x86/pmu.h b/tools/testing/selftests/kvm/include/x86/pmu.h
index 3c10c4dc0ae8..a7332c4374a3 100644
--- a/tools/testing/selftests/kvm/include/x86/pmu.h
+++ b/tools/testing/selftests/kvm/include/x86/pmu.h
@@ -5,7 +5,7 @@
#ifndef SELFTEST_KVM_PMU_H
#define SELFTEST_KVM_PMU_H
-#include <stdint.h>
+#include <linux/types.h>
#define KVM_PMU_EVENT_FILTER_MAX_EVENTS 300
@@ -91,7 +91,7 @@ enum amd_pmu_zen_events {
NR_AMD_ZEN_EVENTS,
};
-extern const uint64_t intel_pmu_arch_events[];
-extern const uint64_t amd_pmu_zen_events[];
+extern const u64 intel_pmu_arch_events[];
+extern const u64 amd_pmu_zen_events[];
#endif /* SELFTEST_KVM_PMU_H */
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 32ab6ca7ec32..72cadb47cd86 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -21,7 +21,7 @@
extern bool host_cpu_is_intel;
extern bool host_cpu_is_amd;
-extern uint64_t guest_tsc_khz;
+extern u64 guest_tsc_khz;
#ifndef MAX_NR_CPUID_ENTRIES
#define MAX_NR_CPUID_ENTRIES 100
@@ -408,7 +408,7 @@ struct desc64 {
struct desc_ptr {
uint16_t size;
- uint64_t address;
+ u64 address;
} __attribute__((packed));
struct kvm_x86_state {
@@ -426,16 +426,16 @@ struct kvm_x86_state {
struct kvm_msrs msrs;
};
-static inline uint64_t get_desc64_base(const struct desc64 *desc)
+static inline u64 get_desc64_base(const struct desc64 *desc)
{
- return ((uint64_t)desc->base3 << 32) |
+ return ((u64)desc->base3 << 32) |
(desc->base0 | ((desc->base1) << 16) | ((desc->base2) << 24));
}
-static inline uint64_t rdtsc(void)
+static inline u64 rdtsc(void)
{
uint32_t eax, edx;
- uint64_t tsc_val;
+ u64 tsc_val;
/*
* The lfence is to wait (on Intel CPUs) until all previous
* instructions have been executed. If software requires RDTSC to be
@@ -443,28 +443,28 @@ static inline uint64_t rdtsc(void)
* execute LFENCE immediately after RDTSC
*/
__asm__ __volatile__("lfence; rdtsc; lfence" : "=a"(eax), "=d"(edx));
- tsc_val = ((uint64_t)edx) << 32 | eax;
+ tsc_val = ((u64)edx) << 32 | eax;
return tsc_val;
}
-static inline uint64_t rdtscp(uint32_t *aux)
+static inline u64 rdtscp(uint32_t *aux)
{
uint32_t eax, edx;
__asm__ __volatile__("rdtscp" : "=a"(eax), "=d"(edx), "=c"(*aux));
- return ((uint64_t)edx) << 32 | eax;
+ return ((u64)edx) << 32 | eax;
}
-static inline uint64_t rdmsr(uint32_t msr)
+static inline u64 rdmsr(uint32_t msr)
{
uint32_t a, d;
__asm__ __volatile__("rdmsr" : "=a"(a), "=d"(d) : "c"(msr) : "memory");
- return a | ((uint64_t) d << 32);
+ return a | ((u64)d << 32);
}
-static inline void wrmsr(uint32_t msr, uint64_t value)
+static inline void wrmsr(uint32_t msr, u64 value)
{
uint32_t a = value;
uint32_t d = value >> 32;
@@ -547,34 +547,34 @@ static inline uint16_t get_tr(void)
return tr;
}
-static inline uint64_t get_cr0(void)
+static inline u64 get_cr0(void)
{
- uint64_t cr0;
+ u64 cr0;
__asm__ __volatile__("mov %%cr0, %[cr0]"
: /* output */ [cr0]"=r"(cr0));
return cr0;
}
-static inline uint64_t get_cr3(void)
+static inline u64 get_cr3(void)
{
- uint64_t cr3;
+ u64 cr3;
__asm__ __volatile__("mov %%cr3, %[cr3]"
: /* output */ [cr3]"=r"(cr3));
return cr3;
}
-static inline uint64_t get_cr4(void)
+static inline u64 get_cr4(void)
{
- uint64_t cr4;
+ u64 cr4;
__asm__ __volatile__("mov %%cr4, %[cr4]"
: /* output */ [cr4]"=r"(cr4));
return cr4;
}
-static inline void set_cr4(uint64_t val)
+static inline void set_cr4(u64 val)
{
__asm__ __volatile__("mov %0, %%cr4" : : "r" (val) : "memory");
}
@@ -751,13 +751,13 @@ static inline bool this_pmu_has(struct kvm_x86_pmu_feature feature)
return nr_bits > feature.f.bit || this_cpu_has(feature.f);
}
-static __always_inline uint64_t this_cpu_supported_xcr0(void)
+static __always_inline u64 this_cpu_supported_xcr0(void)
{
if (!this_cpu_has_p(X86_PROPERTY_SUPPORTED_XCR0_LO))
return 0;
return this_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_LO) |
- ((uint64_t)this_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_HI) << 32);
+ ((u64)this_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_HI) << 32);
}
typedef u32 __attribute__((vector_size(16))) sse128_t;
@@ -836,7 +836,7 @@ static inline void cpu_relax(void)
static inline void udelay(unsigned long usec)
{
- uint64_t start, now, cycles;
+ u64 start, now, cycles;
GUEST_ASSERT(guest_tsc_khz);
cycles = guest_tsc_khz / 1000 * usec;
@@ -868,7 +868,7 @@ void kvm_x86_state_cleanup(struct kvm_x86_state *state);
const struct kvm_msr_list *kvm_get_msr_index_list(void);
const struct kvm_msr_list *kvm_get_feature_msr_index_list(void);
bool kvm_msr_is_in_save_restore_list(uint32_t msr_index);
-uint64_t kvm_get_feature_msr(uint64_t msr_index);
+u64 kvm_get_feature_msr(u64 msr_index);
static inline void vcpu_msrs_get(struct kvm_vcpu *vcpu,
struct kvm_msrs *msrs)
@@ -991,13 +991,13 @@ static inline bool kvm_pmu_has(struct kvm_x86_pmu_feature feature)
return nr_bits > feature.f.bit || kvm_cpu_has(feature.f);
}
-static __always_inline uint64_t kvm_cpu_supported_xcr0(void)
+static __always_inline u64 kvm_cpu_supported_xcr0(void)
{
if (!kvm_cpu_has_p(X86_PROPERTY_SUPPORTED_XCR0_LO))
return 0;
return kvm_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_LO) |
- ((uint64_t)kvm_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_HI) << 32);
+ ((u64)kvm_cpu_property(X86_PROPERTY_SUPPORTED_XCR0_HI) << 32);
}
static inline size_t kvm_cpuid2_size(int nr_entries)
@@ -1104,8 +1104,8 @@ static inline void vcpu_clear_cpuid_feature(struct kvm_vcpu *vcpu,
vcpu_set_or_clear_cpuid_feature(vcpu, feature, false);
}
-uint64_t vcpu_get_msr(struct kvm_vcpu *vcpu, uint64_t msr_index);
-int _vcpu_set_msr(struct kvm_vcpu *vcpu, uint64_t msr_index, uint64_t msr_value);
+u64 vcpu_get_msr(struct kvm_vcpu *vcpu, u64 msr_index);
+int _vcpu_set_msr(struct kvm_vcpu *vcpu, u64 msr_index, u64 msr_value);
/*
* Assert on an MSR access(es) and pretty print the MSR name when possible.
@@ -1137,7 +1137,7 @@ static inline bool is_durable_msr(uint32_t msr)
#define vcpu_set_msr(vcpu, msr, val) \
do { \
- uint64_t r, v = val; \
+ u64 r, v = val; \
\
TEST_ASSERT_MSR(_vcpu_set_msr(vcpu, msr, v) == 1, \
"KVM_SET_MSRS failed on %s, value = 0x%lx", msr, #msr, v); \
@@ -1152,15 +1152,15 @@ void kvm_init_vm_address_properties(struct kvm_vm *vm);
bool vm_is_unrestricted_guest(struct kvm_vm *vm);
struct ex_regs {
- uint64_t rax, rcx, rdx, rbx;
- uint64_t rbp, rsi, rdi;
- uint64_t r8, r9, r10, r11;
- uint64_t r12, r13, r14, r15;
- uint64_t vector;
- uint64_t error_code;
- uint64_t rip;
- uint64_t cs;
- uint64_t rflags;
+ u64 rax, rcx, rdx, rbx;
+ u64 rbp, rsi, rdi;
+ u64 r8, r9, r10, r11;
+ u64 r12, r13, r14, r15;
+ u64 vector;
+ u64 error_code;
+ u64 rip;
+ u64 cs;
+ u64 rflags;
};
struct idt_entry {
@@ -1226,7 +1226,7 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
#define kvm_asm_safe(insn, inputs...) \
({ \
- uint64_t ign_error_code; \
+ u64 ign_error_code; \
uint8_t vector; \
\
asm volatile(KVM_ASM_SAFE(insn) \
@@ -1249,7 +1249,7 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
#define kvm_asm_safe_fep(insn, inputs...) \
({ \
- uint64_t ign_error_code; \
+ u64 ign_error_code; \
uint8_t vector; \
\
asm volatile(KVM_ASM_SAFE_FEP(insn) \
@@ -1271,9 +1271,9 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
})
#define BUILD_READ_U64_SAFE_HELPER(insn, _fep, _FEP) \
-static inline uint8_t insn##_safe ##_fep(uint32_t idx, uint64_t *val) \
+static inline uint8_t insn##_safe ##_fep(uint32_t idx, u64 *val) \
{ \
- uint64_t error_code; \
+ u64 error_code; \
uint8_t vector; \
uint32_t a, d; \
\
@@ -1283,7 +1283,7 @@ static inline uint8_t insn##_safe ##_fep(uint32_t idx, uint64_t *val) \
: "c"(idx) \
: KVM_ASM_SAFE_CLOBBERS); \
\
- *val = (uint64_t)a | ((uint64_t)d << 32); \
+ *val = (u64)a | ((u64)d << 32); \
return vector; \
}
@@ -1299,12 +1299,12 @@ BUILD_READ_U64_SAFE_HELPERS(rdmsr)
BUILD_READ_U64_SAFE_HELPERS(rdpmc)
BUILD_READ_U64_SAFE_HELPERS(xgetbv)
-static inline uint8_t wrmsr_safe(uint32_t msr, uint64_t val)
+static inline uint8_t wrmsr_safe(uint32_t msr, u64 val)
{
return kvm_asm_safe("wrmsr", "a"(val & -1u), "d"(val >> 32), "c"(msr));
}
-static inline uint8_t xsetbv_safe(uint32_t index, uint64_t value)
+static inline uint8_t xsetbv_safe(uint32_t index, u64 value)
{
u32 eax = value;
u32 edx = value >> 32;
@@ -1324,25 +1324,21 @@ static inline bool kvm_is_forced_emulation_enabled(void)
return !!get_kvm_param_integer("force_emulation_prefix");
}
-uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
- int *level);
-uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr);
+u64 *__vm_get_page_table_entry(struct kvm_vm *vm, u64 vaddr, int *level);
+u64 *vm_get_page_table_entry(struct kvm_vm *vm, u64 vaddr);
-uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
- uint64_t a3);
-uint64_t __xen_hypercall(uint64_t nr, uint64_t a0, void *a1);
-void xen_hypercall(uint64_t nr, uint64_t a0, void *a1);
+u64 kvm_hypercall(u64 nr, u64 a0, u64 a1, u64 a2, u64 a3);
+u64 __xen_hypercall(u64 nr, u64 a0, void *a1);
+void xen_hypercall(u64 nr, u64 a0, void *a1);
-static inline uint64_t __kvm_hypercall_map_gpa_range(uint64_t gpa,
- uint64_t size, uint64_t flags)
+static inline u64 __kvm_hypercall_map_gpa_range(u64 gpa, u64 size, u64 flags)
{
return kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> PAGE_SHIFT, flags, 0);
}
-static inline void kvm_hypercall_map_gpa_range(uint64_t gpa, uint64_t size,
- uint64_t flags)
+static inline void kvm_hypercall_map_gpa_range(u64 gpa, u64 size, u64 flags)
{
- uint64_t ret = __kvm_hypercall_map_gpa_range(gpa, size, flags);
+ u64 ret = __kvm_hypercall_map_gpa_range(gpa, size, flags);
GUEST_ASSERT(!ret);
}
@@ -1387,7 +1383,7 @@ static inline void cli(void)
asm volatile ("cli");
}
-void __vm_xsave_require_permission(uint64_t xfeature, const char *name);
+void __vm_xsave_require_permission(u64 xfeature, const char *name);
#define vm_xsave_require_permission(xfeature) \
__vm_xsave_require_permission(xfeature, #xfeature)
@@ -1408,9 +1404,9 @@ enum pg_level {
#define PG_SIZE_2M PG_LEVEL_SIZE(PG_LEVEL_2M)
#define PG_SIZE_1G PG_LEVEL_SIZE(PG_LEVEL_1G)
-void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level);
-void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
- uint64_t nr_bytes, int level);
+void __virt_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr, int level);
+void virt_map_level(struct kvm_vm *vm, u64 vaddr, u64 paddr,
+ u64 nr_bytes, int level);
/*
* Basic CPU control in CR0
diff --git a/tools/testing/selftests/kvm/include/x86/sev.h b/tools/testing/selftests/kvm/include/x86/sev.h
index 9aefe83e16b8..02f6324d7e77 100644
--- a/tools/testing/selftests/kvm/include/x86/sev.h
+++ b/tools/testing/selftests/kvm/include/x86/sev.h
@@ -53,7 +53,7 @@ kvm_static_assert(SEV_RET_SUCCESS == 0);
unsigned long raw; \
} sev_cmd = { .c = { \
.id = (cmd), \
- .data = (uint64_t)(arg), \
+ .data = (u64)(arg), \
.sev_fd = (vm)->arch.sev_fd, \
} }; \
\
@@ -83,7 +83,7 @@ static inline void sev_register_encrypted_memory(struct kvm_vm *vm,
}
static inline void sev_launch_update_data(struct kvm_vm *vm, gpa_t gpa,
- uint64_t size)
+ u64 size)
{
struct kvm_sev_launch_update_data update_data = {
.uaddr = (unsigned long)addr_gpa2hva(vm, gpa),
diff --git a/tools/testing/selftests/kvm/include/x86/svm_util.h b/tools/testing/selftests/kvm/include/x86/svm_util.h
index c2ebb8b61e38..f22784534f6e 100644
--- a/tools/testing/selftests/kvm/include/x86/svm_util.h
+++ b/tools/testing/selftests/kvm/include/x86/svm_util.h
@@ -16,17 +16,17 @@ struct svm_test_data {
/* VMCB */
struct vmcb *vmcb; /* gva */
void *vmcb_hva;
- uint64_t vmcb_gpa;
+ u64 vmcb_gpa;
/* host state-save area */
struct vmcb_save_area *save_area; /* gva */
void *save_area_hva;
- uint64_t save_area_gpa;
+ u64 save_area_gpa;
/* MSR-Bitmap */
void *msr; /* gva */
void *msr_hva;
- uint64_t msr_gpa;
+ u64 msr_gpa;
};
static inline void vmmcall(void)
@@ -55,7 +55,7 @@ static inline void vmmcall(void)
struct svm_test_data *vcpu_alloc_svm(struct kvm_vm *vm, gva_t *p_svm_gva);
void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_rsp);
-void run_guest(struct vmcb *vmcb, uint64_t vmcb_gpa);
+void run_guest(struct vmcb *vmcb, u64 vmcb_gpa);
int open_sev_dev_path_or_exit(void);
diff --git a/tools/testing/selftests/kvm/include/x86/vmx.h b/tools/testing/selftests/kvm/include/x86/vmx.h
index 16603e8f2006..b5e6931cc979 100644
--- a/tools/testing/selftests/kvm/include/x86/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86/vmx.h
@@ -287,12 +287,12 @@ enum vmcs_field {
struct vmx_msr_entry {
uint32_t index;
uint32_t reserved;
- uint64_t value;
+ u64 value;
} __attribute__ ((aligned(16)));
#include "evmcs.h"
-static inline int vmxon(uint64_t phys)
+static inline int vmxon(u64 phys)
{
uint8_t ret;
@@ -309,7 +309,7 @@ static inline void vmxoff(void)
__asm__ __volatile__("vmxoff");
}
-static inline int vmclear(uint64_t vmcs_pa)
+static inline int vmclear(u64 vmcs_pa)
{
uint8_t ret;
@@ -321,7 +321,7 @@ static inline int vmclear(uint64_t vmcs_pa)
return ret;
}
-static inline int vmptrld(uint64_t vmcs_pa)
+static inline int vmptrld(u64 vmcs_pa)
{
uint8_t ret;
@@ -336,9 +336,9 @@ static inline int vmptrld(uint64_t vmcs_pa)
return ret;
}
-static inline int vmptrst(uint64_t *value)
+static inline int vmptrst(u64 *value)
{
- uint64_t tmp;
+ u64 tmp;
uint8_t ret;
if (enable_evmcs)
@@ -356,9 +356,9 @@ static inline int vmptrst(uint64_t *value)
* A wrapper around vmptrst that ignores errors and returns zero if the
* vmptrst instruction fails.
*/
-static inline uint64_t vmptrstz(void)
+static inline u64 vmptrstz(void)
{
- uint64_t value = 0;
+ u64 value = 0;
vmptrst(&value);
return value;
}
@@ -391,8 +391,8 @@ static inline int vmlaunch(void)
"pop %%rcx;"
"pop %%rbp;"
: [ret]"=&a"(ret)
- : [host_rsp]"r"((uint64_t)HOST_RSP),
- [host_rip]"r"((uint64_t)HOST_RIP)
+ : [host_rsp]"r"((u64)HOST_RSP),
+ [host_rip]"r"((u64)HOST_RIP)
: "memory", "cc", "rbx", "r8", "r9", "r10",
"r11", "r12", "r13", "r14", "r15");
return ret;
@@ -426,8 +426,8 @@ static inline int vmresume(void)
"pop %%rcx;"
"pop %%rbp;"
: [ret]"=&a"(ret)
- : [host_rsp]"r"((uint64_t)HOST_RSP),
- [host_rip]"r"((uint64_t)HOST_RIP)
+ : [host_rsp]"r"((u64)HOST_RSP),
+ [host_rip]"r"((u64)HOST_RIP)
: "memory", "cc", "rbx", "r8", "r9", "r10",
"r11", "r12", "r13", "r14", "r15");
return ret;
@@ -447,9 +447,9 @@ static inline void vmcall(void)
"r10", "r11", "r12", "r13", "r14", "r15");
}
-static inline int vmread(uint64_t encoding, uint64_t *value)
+static inline int vmread(u64 encoding, u64 *value)
{
- uint64_t tmp;
+ u64 tmp;
uint8_t ret;
if (enable_evmcs)
@@ -468,14 +468,14 @@ static inline int vmread(uint64_t encoding, uint64_t *value)
* A wrapper around vmread that ignores errors and returns zero if the
* vmread instruction fails.
*/
-static inline uint64_t vmreadz(uint64_t encoding)
+static inline u64 vmreadz(u64 encoding)
{
- uint64_t value = 0;
+ u64 value = 0;
vmread(encoding, &value);
return value;
}
-static inline int vmwrite(uint64_t encoding, uint64_t value)
+static inline int vmwrite(u64 encoding, u64 value)
{
uint8_t ret;
@@ -497,35 +497,35 @@ static inline uint32_t vmcs_revision(void)
struct vmx_pages {
void *vmxon_hva;
- uint64_t vmxon_gpa;
+ u64 vmxon_gpa;
void *vmxon;
void *vmcs_hva;
- uint64_t vmcs_gpa;
+ u64 vmcs_gpa;
void *vmcs;
void *msr_hva;
- uint64_t msr_gpa;
+ u64 msr_gpa;
void *msr;
void *shadow_vmcs_hva;
- uint64_t shadow_vmcs_gpa;
+ u64 shadow_vmcs_gpa;
void *shadow_vmcs;
void *vmread_hva;
- uint64_t vmread_gpa;
+ u64 vmread_gpa;
void *vmread;
void *vmwrite_hva;
- uint64_t vmwrite_gpa;
+ u64 vmwrite_gpa;
void *vmwrite;
void *eptp_hva;
- uint64_t eptp_gpa;
+ u64 eptp_gpa;
void *eptp;
void *apic_access_hva;
- uint64_t apic_access_gpa;
+ u64 apic_access_gpa;
void *apic_access;
};
@@ -560,13 +560,13 @@ bool load_vmcs(struct vmx_pages *vmx);
bool ept_1g_pages_supported(void);
void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint64_t nested_paddr, uint64_t paddr);
+ u64 nested_paddr, u64 paddr);
void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint64_t nested_paddr, uint64_t paddr, uint64_t size);
+ u64 nested_paddr, u64 paddr, u64 size);
void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm,
uint32_t memslot);
void nested_identity_map_1g(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint64_t addr, uint64_t size);
+ u64 addr, u64 size);
bool kvm_cpu_has_ept(void);
void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm,
uint32_t eptp_memslot);
diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
index 6cf1fa092752..dcd213733604 100644
--- a/tools/testing/selftests/kvm/kvm_page_table_test.c
+++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
@@ -46,12 +46,12 @@ static const char * const test_stage_string[] = {
struct test_args {
struct kvm_vm *vm;
- uint64_t guest_test_virt_mem;
- uint64_t host_page_size;
- uint64_t host_num_pages;
- uint64_t large_page_size;
- uint64_t large_num_pages;
- uint64_t host_pages_per_lpage;
+ u64 guest_test_virt_mem;
+ u64 host_page_size;
+ u64 host_num_pages;
+ u64 large_page_size;
+ u64 large_num_pages;
+ u64 host_pages_per_lpage;
enum vm_mem_backing_src_type src_type;
struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
};
@@ -77,19 +77,19 @@ static sem_t test_stage_completed;
* This will be set to the topmost valid physical address minus
* the test memory size.
*/
-static uint64_t guest_test_phys_mem;
+static u64 guest_test_phys_mem;
/*
* Guest virtual memory offset of the testing memory slot.
* Must not conflict with identity mapped test code.
*/
-static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
+static u64 guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
static void guest_code(bool do_write)
{
struct test_args *p = &test_args;
enum test_stage *current_stage = &guest_test_stage;
- uint64_t addr;
+ u64 addr;
int i, j;
while (true) {
@@ -113,9 +113,9 @@ static void guest_code(bool do_write)
case KVM_CREATE_MAPPINGS:
for (i = 0; i < p->large_num_pages; i++) {
if (do_write)
- *(uint64_t *)addr = 0x0123456789ABCDEF;
+ *(u64 *)addr = 0x0123456789ABCDEF;
else
- READ_ONCE(*(uint64_t *)addr);
+ READ_ONCE(*(u64 *)addr);
addr += p->large_page_size;
}
@@ -131,7 +131,7 @@ static void guest_code(bool do_write)
case KVM_UPDATE_MAPPINGS:
if (p->src_type == VM_MEM_SRC_ANONYMOUS) {
for (i = 0; i < p->host_num_pages; i++) {
- *(uint64_t *)addr = 0x0123456789ABCDEF;
+ *(u64 *)addr = 0x0123456789ABCDEF;
addr += p->host_page_size;
}
break;
@@ -142,7 +142,7 @@ static void guest_code(bool do_write)
* Write to the first host page in each large
* page region, and trigger break of large pages.
*/
- *(uint64_t *)addr = 0x0123456789ABCDEF;
+ *(u64 *)addr = 0x0123456789ABCDEF;
/*
* Access the middle host pages in each large
@@ -152,7 +152,7 @@ static void guest_code(bool do_write)
*/
addr += p->large_page_size / 2;
for (j = 0; j < p->host_pages_per_lpage / 2; j++) {
- READ_ONCE(*(uint64_t *)addr);
+ READ_ONCE(*(u64 *)addr);
addr += p->host_page_size;
}
}
@@ -167,7 +167,7 @@ static void guest_code(bool do_write)
*/
case KVM_ADJUST_MAPPINGS:
for (i = 0; i < p->host_num_pages; i++) {
- READ_ONCE(*(uint64_t *)addr);
+ READ_ONCE(*(u64 *)addr);
addr += p->host_page_size;
}
break;
@@ -227,8 +227,8 @@ static void *vcpu_worker(void *data)
}
struct test_params {
- uint64_t phys_offset;
- uint64_t test_mem_size;
+ u64 phys_offset;
+ u64 test_mem_size;
enum vm_mem_backing_src_type src_type;
};
@@ -237,12 +237,12 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg)
int ret;
struct test_params *p = arg;
enum vm_mem_backing_src_type src_type = p->src_type;
- uint64_t large_page_size = get_backing_src_pagesz(src_type);
- uint64_t guest_page_size = vm_guest_mode_params[mode].page_size;
- uint64_t host_page_size = getpagesize();
- uint64_t test_mem_size = p->test_mem_size;
- uint64_t guest_num_pages;
- uint64_t alignment;
+ u64 large_page_size = get_backing_src_pagesz(src_type);
+ u64 guest_page_size = vm_guest_mode_params[mode].page_size;
+ u64 host_page_size = getpagesize();
+ u64 test_mem_size = p->test_mem_size;
+ u64 guest_num_pages;
+ u64 alignment;
void *host_test_mem;
struct kvm_vm *vm;
@@ -307,7 +307,7 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg)
pr_info("Guest physical test memory offset: 0x%lx\n",
guest_test_phys_mem);
pr_info("Host virtual test memory offset: 0x%lx\n",
- (uint64_t)host_test_mem);
+ (u64)host_test_mem);
pr_info("Number of testing vCPUs: %d\n", nr_vcpus);
return vm;
diff --git a/tools/testing/selftests/kvm/lib/arm64/gic.c b/tools/testing/selftests/kvm/lib/arm64/gic.c
index 7abbf8866512..ac3987cdac6d 100644
--- a/tools/testing/selftests/kvm/lib/arm64/gic.c
+++ b/tools/testing/selftests/kvm/lib/arm64/gic.c
@@ -73,7 +73,7 @@ void gic_irq_disable(unsigned int intid)
unsigned int gic_get_and_ack_irq(void)
{
- uint64_t irqstat;
+ u64 irqstat;
unsigned int intid;
GUEST_ASSERT(gic_common_ops);
@@ -102,7 +102,7 @@ void gic_set_eoi_split(bool split)
gic_common_ops->gic_set_eoi_split(split);
}
-void gic_set_priority_mask(uint64_t pmr)
+void gic_set_priority_mask(u64 pmr)
{
GUEST_ASSERT(gic_common_ops);
gic_common_ops->gic_set_priority_mask(pmr);
diff --git a/tools/testing/selftests/kvm/lib/arm64/gic_private.h b/tools/testing/selftests/kvm/lib/arm64/gic_private.h
index d24e9ecc96c6..d231bb7594df 100644
--- a/tools/testing/selftests/kvm/lib/arm64/gic_private.h
+++ b/tools/testing/selftests/kvm/lib/arm64/gic_private.h
@@ -12,11 +12,11 @@ struct gic_common_ops {
void (*gic_cpu_init)(unsigned int cpu);
void (*gic_irq_enable)(unsigned int intid);
void (*gic_irq_disable)(unsigned int intid);
- uint64_t (*gic_read_iar)(void);
+ u64 (*gic_read_iar)(void);
void (*gic_write_eoir)(uint32_t irq);
void (*gic_write_dir)(uint32_t irq);
void (*gic_set_eoi_split)(bool split);
- void (*gic_set_priority_mask)(uint64_t mask);
+ void (*gic_set_priority_mask)(u64 mask);
void (*gic_set_priority)(uint32_t intid, uint32_t prio);
void (*gic_irq_set_active)(uint32_t intid);
void (*gic_irq_clear_active)(uint32_t intid);
diff --git a/tools/testing/selftests/kvm/lib/arm64/gic_v3.c b/tools/testing/selftests/kvm/lib/arm64/gic_v3.c
index 911650132446..2f5d8a706ce3 100644
--- a/tools/testing/selftests/kvm/lib/arm64/gic_v3.c
+++ b/tools/testing/selftests/kvm/lib/arm64/gic_v3.c
@@ -91,9 +91,9 @@ static enum gicv3_intid_range get_intid_range(unsigned int intid)
return INVALID_RANGE;
}
-static uint64_t gicv3_read_iar(void)
+static u64 gicv3_read_iar(void)
{
- uint64_t irqstat = read_sysreg_s(SYS_ICC_IAR1_EL1);
+ u64 irqstat = read_sysreg_s(SYS_ICC_IAR1_EL1);
dsb(sy);
return irqstat;
@@ -111,7 +111,7 @@ static void gicv3_write_dir(uint32_t irq)
isb();
}
-static void gicv3_set_priority_mask(uint64_t mask)
+static void gicv3_set_priority_mask(u64 mask)
{
write_sysreg_s(mask, SYS_ICC_PMR_EL1);
}
@@ -129,26 +129,26 @@ static void gicv3_set_eoi_split(bool split)
isb();
}
-uint32_t gicv3_reg_readl(uint32_t cpu_or_dist, uint64_t offset)
+uint32_t gicv3_reg_readl(uint32_t cpu_or_dist, u64 offset)
{
volatile void *base = cpu_or_dist & DIST_BIT ? GICD_BASE_GVA
: sgi_base_from_redist(gicr_base_cpu(cpu_or_dist));
return readl(base + offset);
}
-void gicv3_reg_writel(uint32_t cpu_or_dist, uint64_t offset, uint32_t reg_val)
+void gicv3_reg_writel(uint32_t cpu_or_dist, u64 offset, uint32_t reg_val)
{
volatile void *base = cpu_or_dist & DIST_BIT ? GICD_BASE_GVA
: sgi_base_from_redist(gicr_base_cpu(cpu_or_dist));
writel(reg_val, base + offset);
}
-uint32_t gicv3_getl_fields(uint32_t cpu_or_dist, uint64_t offset, uint32_t mask)
+uint32_t gicv3_getl_fields(uint32_t cpu_or_dist, u64 offset, uint32_t mask)
{
return gicv3_reg_readl(cpu_or_dist, offset) & mask;
}
-void gicv3_setl_fields(uint32_t cpu_or_dist, uint64_t offset,
+void gicv3_setl_fields(uint32_t cpu_or_dist, u64 offset,
uint32_t mask, uint32_t reg_val)
{
uint32_t tmp = gicv3_reg_readl(cpu_or_dist, offset) & ~mask;
@@ -165,7 +165,7 @@ void gicv3_setl_fields(uint32_t cpu_or_dist, uint64_t offset,
* map that doesn't implement it; like GICR_WAKER's offset of 0x0014 being
* marked as "Reserved" in the Distributor map.
*/
-static void gicv3_access_reg(uint32_t intid, uint64_t offset,
+static void gicv3_access_reg(uint32_t intid, u64 offset,
uint32_t reg_bits, uint32_t bits_per_field,
bool write, uint32_t *val)
{
@@ -197,14 +197,14 @@ static void gicv3_access_reg(uint32_t intid, uint64_t offset,
*val = gicv3_getl_fields(cpu_or_dist, offset, mask) >> shift;
}
-static void gicv3_write_reg(uint32_t intid, uint64_t offset,
+static void gicv3_write_reg(uint32_t intid, u64 offset,
uint32_t reg_bits, uint32_t bits_per_field, uint32_t val)
{
gicv3_access_reg(intid, offset, reg_bits,
bits_per_field, true, &val);
}
-static uint32_t gicv3_read_reg(uint32_t intid, uint64_t offset,
+static uint32_t gicv3_read_reg(uint32_t intid, u64 offset,
uint32_t reg_bits, uint32_t bits_per_field)
{
uint32_t val;
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index e57b757b4256..d7cfd8899b97 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -20,23 +20,23 @@
static gva_t exception_handlers;
-static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
+static u64 page_align(struct kvm_vm *vm, u64 v)
{
return (v + vm->page_size) & ~(vm->page_size - 1);
}
-static uint64_t pgd_index(struct kvm_vm *vm, gva_t gva)
+static u64 pgd_index(struct kvm_vm *vm, gva_t gva)
{
unsigned int shift = (vm->pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
- uint64_t mask = (1UL << (vm->va_bits - shift)) - 1;
+ u64 mask = (1UL << (vm->va_bits - shift)) - 1;
return (gva >> shift) & mask;
}
-static uint64_t pud_index(struct kvm_vm *vm, gva_t gva)
+static u64 pud_index(struct kvm_vm *vm, gva_t gva)
{
unsigned int shift = 2 * (vm->page_shift - 3) + vm->page_shift;
- uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
+ u64 mask = (1UL << (vm->page_shift - 3)) - 1;
TEST_ASSERT(vm->pgtable_levels == 4,
"Mode %d does not have 4 page table levels", vm->mode);
@@ -44,10 +44,10 @@ static uint64_t pud_index(struct kvm_vm *vm, gva_t gva)
return (gva >> shift) & mask;
}
-static uint64_t pmd_index(struct kvm_vm *vm, gva_t gva)
+static u64 pmd_index(struct kvm_vm *vm, gva_t gva)
{
unsigned int shift = (vm->page_shift - 3) + vm->page_shift;
- uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
+ u64 mask = (1UL << (vm->page_shift - 3)) - 1;
TEST_ASSERT(vm->pgtable_levels >= 3,
"Mode %d does not have >= 3 page table levels", vm->mode);
@@ -55,9 +55,9 @@ static uint64_t pmd_index(struct kvm_vm *vm, gva_t gva)
return (gva >> shift) & mask;
}
-static uint64_t pte_index(struct kvm_vm *vm, gva_t gva)
+static u64 pte_index(struct kvm_vm *vm, gva_t gva)
{
- uint64_t mask = (1UL << (vm->page_shift - 3)) - 1;
+ u64 mask = (1UL << (vm->page_shift - 3)) - 1;
return (gva >> vm->page_shift) & mask;
}
@@ -67,9 +67,9 @@ static inline bool use_lpa2_pte_format(struct kvm_vm *vm)
(vm->pa_bits > 48 || vm->va_bits > 48);
}
-static uint64_t addr_pte(struct kvm_vm *vm, uint64_t pa, uint64_t attrs)
+static u64 addr_pte(struct kvm_vm *vm, u64 pa, u64 attrs)
{
- uint64_t pte;
+ u64 pte;
if (use_lpa2_pte_format(vm)) {
pte = pa & PTE_ADDR_MASK_LPA2(vm->page_shift);
@@ -85,9 +85,9 @@ static uint64_t addr_pte(struct kvm_vm *vm, uint64_t pa, uint64_t attrs)
return pte;
}
-static uint64_t pte_addr(struct kvm_vm *vm, uint64_t pte)
+static u64 pte_addr(struct kvm_vm *vm, u64 pte)
{
- uint64_t pa;
+ u64 pa;
if (use_lpa2_pte_format(vm)) {
pa = pte & PTE_ADDR_MASK_LPA2(vm->page_shift);
@@ -101,13 +101,13 @@ static uint64_t pte_addr(struct kvm_vm *vm, uint64_t pte)
return pa;
}
-static uint64_t ptrs_per_pgd(struct kvm_vm *vm)
+static u64 ptrs_per_pgd(struct kvm_vm *vm)
{
unsigned int shift = (vm->pgtable_levels - 1) * (vm->page_shift - 3) + vm->page_shift;
return 1 << (vm->va_bits - shift);
}
-static uint64_t __maybe_unused ptrs_per_pte(struct kvm_vm *vm)
+static u64 __maybe_unused ptrs_per_pte(struct kvm_vm *vm)
{
return 1 << (vm->page_shift - 3);
}
@@ -125,12 +125,12 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
vm->pgd_created = true;
}
-static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
- uint64_t flags)
+static void _virt_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr,
+ u64 flags)
{
uint8_t attr_idx = flags & (PTE_ATTRINDX_MASK >> PTE_ATTRINDX_SHIFT);
- uint64_t pg_attr;
- uint64_t *ptep;
+ u64 pg_attr;
+ u64 *ptep;
TEST_ASSERT((vaddr % vm->page_size) == 0,
"Virtual address not on page boundary,\n"
@@ -178,16 +178,16 @@ static void _virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
*ptep = addr_pte(vm, paddr, pg_attr);
}
-void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
+void virt_arch_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr)
{
- uint64_t attr_idx = MT_NORMAL;
+ u64 attr_idx = MT_NORMAL;
_virt_pg_map(vm, vaddr, paddr, attr_idx);
}
-uint64_t *virt_get_pte_hva(struct kvm_vm *vm, gva_t gva)
+u64 *virt_get_pte_hva(struct kvm_vm *vm, gva_t gva)
{
- uint64_t *ptep;
+ u64 *ptep;
if (!vm->pgd_created)
goto unmapped_gva;
@@ -225,16 +225,16 @@ uint64_t *virt_get_pte_hva(struct kvm_vm *vm, gva_t gva)
gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
- uint64_t *ptep = virt_get_pte_hva(vm, gva);
+ u64 *ptep = virt_get_pte_hva(vm, gva);
return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
}
-static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, uint64_t page, int level)
+static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, u64 page, int level)
{
#ifdef DEBUG
static const char * const type[] = { "", "pud", "pmd", "pte" };
- uint64_t pte, *ptep;
+ u64 pte, *ptep;
if (level == 4)
return;
@@ -252,7 +252,7 @@ static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, uint64_t p
void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
{
int level = 4 - (vm->pgtable_levels - 1);
- uint64_t pgd, *ptep;
+ u64 pgd, *ptep;
if (!vm->pgd_created)
return;
@@ -270,7 +270,7 @@ void aarch64_vcpu_setup(struct kvm_vcpu *vcpu, struct kvm_vcpu_init *init)
{
struct kvm_vcpu_init default_init = { .target = -1, };
struct kvm_vm *vm = vcpu->vm;
- uint64_t sctlr_el1, tcr_el1, ttbr0_el1;
+ u64 sctlr_el1, tcr_el1, ttbr0_el1;
if (!init)
init = &default_init;
@@ -366,7 +366,7 @@ void aarch64_vcpu_setup(struct kvm_vcpu *vcpu, struct kvm_vcpu_init *init)
void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
{
- uint64_t pstate, pc;
+ u64 pstate, pc;
pstate = vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pstate));
pc = vcpu_get_reg(vcpu, ARM64_CORE_REG(regs.pc));
@@ -377,14 +377,14 @@ void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code)
{
- vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
+ vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (u64)guest_code);
}
static struct kvm_vcpu *__aarch64_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
struct kvm_vcpu_init *init)
{
size_t stack_size;
- uint64_t stack_vaddr;
+ u64 stack_vaddr;
struct kvm_vcpu *vcpu = __vm_vcpu_add(vm, vcpu_id);
stack_size = vm->page_size == 4096 ? DEFAULT_STACK_PGS * vm->page_size :
@@ -426,13 +426,13 @@ void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
for (i = 0; i < num; i++) {
vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.regs[i]),
- va_arg(ap, uint64_t));
+ va_arg(ap, u64));
}
va_end(ap);
}
-void kvm_exit_unexpected_exception(int vector, uint64_t ec, bool valid_ec)
+void kvm_exit_unexpected_exception(int vector, u64 ec, bool valid_ec)
{
ucall(UCALL_UNHANDLED, 3, vector, ec, valid_ec);
while (1)
@@ -465,7 +465,7 @@ void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
{
extern char vectors;
- vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_VBAR_EL1), (uint64_t)&vectors);
+ vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_VBAR_EL1), (u64)&vectors);
}
void route_exception(struct ex_regs *regs, int vector)
@@ -551,11 +551,11 @@ void aarch64_get_supported_page_sizes(uint32_t ipa, uint32_t *ipa4k,
{
struct kvm_vcpu_init preferred_init;
int kvm_fd, vm_fd, vcpu_fd, err;
- uint64_t val;
+ u64 val;
uint32_t gran;
struct kvm_one_reg reg = {
.id = KVM_ARM64_SYS_REG(SYS_ID_AA64MMFR0_EL1),
- .addr = (uint64_t)&val,
+ .addr = (u64)&val,
};
kvm_fd = open_kvm_dev_path_or_exit();
@@ -613,17 +613,17 @@ void aarch64_get_supported_page_sizes(uint32_t ipa, uint32_t *ipa4k,
: "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7")
-void smccc_hvc(uint32_t function_id, uint64_t arg0, uint64_t arg1,
- uint64_t arg2, uint64_t arg3, uint64_t arg4, uint64_t arg5,
- uint64_t arg6, struct arm_smccc_res *res)
+void smccc_hvc(uint32_t function_id, u64 arg0, u64 arg1,
+ u64 arg2, u64 arg3, u64 arg4, u64 arg5,
+ u64 arg6, struct arm_smccc_res *res)
{
__smccc_call(hvc, function_id, arg0, arg1, arg2, arg3, arg4, arg5,
arg6, res);
}
-void smccc_smc(uint32_t function_id, uint64_t arg0, uint64_t arg1,
- uint64_t arg2, uint64_t arg3, uint64_t arg4, uint64_t arg5,
- uint64_t arg6, struct arm_smccc_res *res)
+void smccc_smc(uint32_t function_id, u64 arg0, u64 arg1,
+ u64 arg2, u64 arg3, u64 arg4, u64 arg5,
+ u64 arg6, struct arm_smccc_res *res)
{
__smccc_call(smc, function_id, arg0, arg1, arg2, arg3, arg4, arg5,
arg6, res);
diff --git a/tools/testing/selftests/kvm/lib/arm64/ucall.c b/tools/testing/selftests/kvm/lib/arm64/ucall.c
index 62109407a1ff..270f12f9593f 100644
--- a/tools/testing/selftests/kvm/lib/arm64/ucall.c
+++ b/tools/testing/selftests/kvm/lib/arm64/ucall.c
@@ -25,9 +25,9 @@ void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
if (run->exit_reason == KVM_EXIT_MMIO &&
run->mmio.phys_addr == vcpu->vm->ucall_mmio_addr) {
- TEST_ASSERT(run->mmio.is_write && run->mmio.len == sizeof(uint64_t),
+ TEST_ASSERT(run->mmio.is_write && run->mmio.len == sizeof(u64),
"Unexpected ucall exit mmio address access");
- return (void *)(*((uint64_t *)run->mmio.data));
+ return (void *)(*((u64 *)run->mmio.data));
}
return NULL;
diff --git a/tools/testing/selftests/kvm/lib/arm64/vgic.c b/tools/testing/selftests/kvm/lib/arm64/vgic.c
index 4427f43f73ea..63aefbdb1829 100644
--- a/tools/testing/selftests/kvm/lib/arm64/vgic.c
+++ b/tools/testing/selftests/kvm/lib/arm64/vgic.c
@@ -33,7 +33,7 @@
int vgic_v3_setup(struct kvm_vm *vm, unsigned int nr_vcpus, uint32_t nr_irqs)
{
int gic_fd;
- uint64_t attr;
+ u64 attr;
struct list_head *iter;
unsigned int nr_gic_pages, nr_vcpus_created = 0;
@@ -82,9 +82,9 @@ int vgic_v3_setup(struct kvm_vm *vm, unsigned int nr_vcpus, uint32_t nr_irqs)
/* should only work for level sensitive interrupts */
int _kvm_irq_set_level_info(int gic_fd, uint32_t intid, int level)
{
- uint64_t attr = 32 * (intid / 32);
- uint64_t index = intid % 32;
- uint64_t val;
+ u64 attr = 32 * (intid / 32);
+ u64 index = intid % 32;
+ u64 val;
int ret;
ret = __kvm_device_attr_get(gic_fd, KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO,
@@ -128,12 +128,12 @@ void kvm_arm_irq_line(struct kvm_vm *vm, uint32_t intid, int level)
}
static void vgic_poke_irq(int gic_fd, uint32_t intid, struct kvm_vcpu *vcpu,
- uint64_t reg_off)
+ u64 reg_off)
{
- uint64_t reg = intid / 32;
- uint64_t index = intid % 32;
- uint64_t attr = reg_off + reg * 4;
- uint64_t val;
+ u64 reg = intid / 32;
+ u64 index = intid % 32;
+ u64 attr = reg_off + reg * 4;
+ u64 val;
bool intid_is_private = INTID_IS_SGI(intid) || INTID_IS_PPI(intid);
uint32_t group = intid_is_private ? KVM_DEV_ARM_VGIC_GRP_REDIST_REGS
diff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c
index 6fddebb96a3c..102a778a0ae4 100644
--- a/tools/testing/selftests/kvm/lib/elf.c
+++ b/tools/testing/selftests/kvm/lib/elf.c
@@ -156,7 +156,7 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
TEST_ASSERT(phdr.p_memsz > 0, "Unexpected loadable segment "
"memsize of 0,\n"
" phdr index: %u p_memsz: 0x%" PRIx64,
- n1, (uint64_t) phdr.p_memsz);
+ n1, (u64)phdr.p_memsz);
gva_t seg_vstart = align_down(phdr.p_vaddr, vm->page_size);
gva_t seg_vend = phdr.p_vaddr + phdr.p_memsz - 1;
seg_vend |= vm->page_size - 1;
diff --git a/tools/testing/selftests/kvm/lib/guest_sprintf.c b/tools/testing/selftests/kvm/lib/guest_sprintf.c
index 74627514c4d4..224de8a3f862 100644
--- a/tools/testing/selftests/kvm/lib/guest_sprintf.c
+++ b/tools/testing/selftests/kvm/lib/guest_sprintf.c
@@ -35,8 +35,8 @@ static int skip_atoi(const char **s)
({ \
int __res; \
\
- __res = ((uint64_t) n) % (uint32_t) base; \
- n = ((uint64_t) n) / (uint32_t) base; \
+ __res = ((u64)n) % (uint32_t) base; \
+ n = ((u64)n) / (uint32_t) base; \
__res; \
})
@@ -119,7 +119,7 @@ int guest_vsnprintf(char *buf, int n, const char *fmt, va_list args)
{
char *str, *end;
const char *s;
- uint64_t num;
+ u64 num;
int i, base;
int len;
@@ -240,7 +240,7 @@ int guest_vsnprintf(char *buf, int n, const char *fmt, va_list args)
flags |= SPECIAL | SMALL | ZEROPAD;
}
str = number(str, end,
- (uint64_t)va_arg(args, void *), 16,
+ (u64)va_arg(args, void *), 16,
field_width, precision, flags);
continue;
@@ -284,7 +284,7 @@ int guest_vsnprintf(char *buf, int n, const char *fmt, va_list args)
continue;
}
if (qualifier == 'l')
- num = va_arg(args, uint64_t);
+ num = va_arg(args, u64);
else if (qualifier == 'h') {
num = (uint16_t)va_arg(args, int);
if (flags & SIGN)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 6dd2755fdb7b..1b46de455f2d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -373,12 +373,12 @@ struct kvm_vm *____vm_create(struct vm_shape shape)
return vm;
}
-static uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
- uint32_t nr_runnable_vcpus,
- uint64_t extra_mem_pages)
+static u64 vm_nr_pages_required(enum vm_guest_mode mode,
+ uint32_t nr_runnable_vcpus,
+ u64 extra_mem_pages)
{
- uint64_t page_size = vm_guest_mode_params[mode].page_size;
- uint64_t nr_pages;
+ u64 page_size = vm_guest_mode_params[mode].page_size;
+ u64 nr_pages;
TEST_ASSERT(nr_runnable_vcpus,
"Use vm_create_barebones() for VMs that _never_ have vCPUs");
@@ -445,9 +445,9 @@ void kvm_set_files_rlimit(uint32_t nr_vcpus)
}
struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
- uint64_t nr_extra_pages)
+ u64 nr_extra_pages)
{
- uint64_t nr_pages = vm_nr_pages_required(shape.mode, nr_runnable_vcpus,
+ u64 nr_pages = vm_nr_pages_required(shape.mode, nr_runnable_vcpus,
nr_extra_pages);
struct userspace_mem_region *slot0;
struct kvm_vm *vm;
@@ -507,7 +507,7 @@ struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
* no real memory allocation for non-slot0 memory in this function.
*/
struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
- uint64_t extra_mem_pages,
+ u64 extra_mem_pages,
void *guest_code, struct kvm_vcpu *vcpus[])
{
struct kvm_vm *vm;
@@ -525,7 +525,7 @@ struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
struct kvm_vm *__vm_create_shape_with_one_vcpu(struct vm_shape shape,
struct kvm_vcpu **vcpu,
- uint64_t extra_mem_pages,
+ u64 extra_mem_pages,
void *guest_code)
{
struct kvm_vcpu *vcpus[1];
@@ -675,15 +675,15 @@ void kvm_parse_vcpu_pinning(const char *pcpus_string, uint32_t vcpu_to_pcpu[],
* region exists.
*/
static struct userspace_mem_region *
-userspace_mem_region_find(struct kvm_vm *vm, uint64_t start, uint64_t end)
+userspace_mem_region_find(struct kvm_vm *vm, u64 start, u64 end)
{
struct rb_node *node;
for (node = vm->regions.gpa_tree.rb_node; node; ) {
struct userspace_mem_region *region =
container_of(node, struct userspace_mem_region, gpa_node);
- uint64_t existing_start = region->region.guest_phys_addr;
- uint64_t existing_end = region->region.guest_phys_addr
+ u64 existing_start = region->region.guest_phys_addr;
+ u64 existing_end = region->region.guest_phys_addr
+ region->region.memory_size - 1;
if (start <= existing_end && end >= existing_start)
return region;
@@ -897,7 +897,7 @@ static void vm_userspace_mem_region_hva_insert(struct rb_root *hva_tree,
int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva)
+ u64 gpa, u64 size, void *hva)
{
struct kvm_userspace_memory_region region = {
.slot = slot,
@@ -911,7 +911,7 @@ int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags
}
void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva)
+ u64 gpa, u64 size, void *hva)
{
int ret = __vm_set_user_memory_region(vm, slot, flags, gpa, size, hva);
@@ -924,8 +924,8 @@ void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
"KVM selftests now require KVM_SET_USER_MEMORY_REGION2 (introduced in v6.8)")
int __vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva,
- uint32_t guest_memfd, uint64_t guest_memfd_offset)
+ u64 gpa, u64 size, void *hva,
+ uint32_t guest_memfd, u64 guest_memfd_offset)
{
struct kvm_userspace_memory_region2 region = {
.slot = slot,
@@ -943,8 +943,8 @@ int __vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flag
}
void vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva,
- uint32_t guest_memfd, uint64_t guest_memfd_offset)
+ u64 gpa, u64 size, void *hva,
+ uint32_t guest_memfd, u64 guest_memfd_offset)
{
int ret = __vm_set_user_memory_region2(vm, slot, flags, gpa, size, hva,
guest_memfd, guest_memfd_offset);
@@ -956,8 +956,8 @@ void vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags
/* FIXME: This thing needs to be ripped apart and rewritten. */
void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
- uint64_t guest_paddr, uint32_t slot, uint64_t npages,
- uint32_t flags, int guest_memfd, uint64_t guest_memfd_offset)
+ u64 guest_paddr, uint32_t slot, u64 npages,
+ uint32_t flags, int guest_memfd, u64 guest_memfd_offset)
{
int ret;
struct userspace_mem_region *region;
@@ -995,8 +995,8 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
"page_size: 0x%x\n"
" existing guest_paddr: 0x%lx size: 0x%lx",
guest_paddr, npages, vm->page_size,
- (uint64_t) region->region.guest_phys_addr,
- (uint64_t) region->region.memory_size);
+ (u64)region->region.guest_phys_addr,
+ (u64)region->region.memory_size);
/* Confirm no region with the requested slot already exists. */
hash_for_each_possible(vm->regions.slot_hash, region, slot_node,
@@ -1010,8 +1010,8 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
" existing slot: %u paddr: 0x%lx size: 0x%lx",
slot, guest_paddr, npages,
region->region.slot,
- (uint64_t) region->region.guest_phys_addr,
- (uint64_t) region->region.memory_size);
+ (u64)region->region.guest_phys_addr,
+ (u64)region->region.memory_size);
}
/* Allocate and initialize new mem region structure. */
@@ -1112,7 +1112,7 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
" slot: %u flags: 0x%x\n"
" guest_phys_addr: 0x%lx size: 0x%lx guest_memfd: %d",
ret, errno, slot, flags,
- guest_paddr, (uint64_t) region->region.memory_size,
+ guest_paddr, (u64)region->region.memory_size,
region->region.guest_memfd);
/* Add to quick lookup data structures */
@@ -1136,8 +1136,8 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
void vm_userspace_mem_region_add(struct kvm_vm *vm,
enum vm_mem_backing_src_type src_type,
- uint64_t guest_paddr, uint32_t slot,
- uint64_t npages, uint32_t flags)
+ u64 guest_paddr, uint32_t slot,
+ u64 npages, uint32_t flags)
{
vm_mem_add(vm, src_type, guest_paddr, slot, npages, flags, -1, 0);
}
@@ -1219,7 +1219,7 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags)
*
* Change the gpa of a memory region.
*/
-void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa)
+void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, u64 new_gpa)
{
struct userspace_mem_region *region;
int ret;
@@ -1258,18 +1258,18 @@ void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot)
__vm_mem_region_delete(vm, region);
}
-void vm_guest_mem_fallocate(struct kvm_vm *vm, uint64_t base, uint64_t size,
+void vm_guest_mem_fallocate(struct kvm_vm *vm, u64 base, u64 size,
bool punch_hole)
{
const int mode = FALLOC_FL_KEEP_SIZE | (punch_hole ? FALLOC_FL_PUNCH_HOLE : 0);
struct userspace_mem_region *region;
- uint64_t end = base + size;
- uint64_t gpa, len;
+ u64 end = base + size;
+ u64 gpa, len;
off_t fd_offset;
int ret;
for (gpa = base; gpa < end; gpa += len) {
- uint64_t offset;
+ u64 offset;
region = userspace_mem_region_find(vm, gpa, gpa);
TEST_ASSERT(region && region->region.flags & KVM_MEM_GUEST_MEMFD,
@@ -1277,7 +1277,7 @@ void vm_guest_mem_fallocate(struct kvm_vm *vm, uint64_t base, uint64_t size,
offset = gpa - region->region.guest_phys_addr;
fd_offset = region->region.guest_memfd_offset + offset;
- len = min_t(uint64_t, end - gpa, region->region.memory_size - offset);
+ len = min_t(u64, end - gpa, region->region.memory_size - offset);
ret = fallocate(region->region.guest_memfd, mode, fd_offset, len);
TEST_ASSERT(!ret, "fallocate() failed to %s at %lx (len = %lu), fd = %d, mode = %x, offset = %lx",
@@ -1375,10 +1375,10 @@ struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
*/
gva_t gva_unused_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min)
{
- uint64_t pages = (sz + vm->page_size - 1) >> vm->page_shift;
+ u64 pages = (sz + vm->page_size - 1) >> vm->page_shift;
/* Determine lowest permitted virtual page index. */
- uint64_t pgidx_start = (vaddr_min + vm->page_size - 1) >> vm->page_shift;
+ u64 pgidx_start = (vaddr_min + vm->page_size - 1) >> vm->page_shift;
if ((pgidx_start * vm->page_size) < vaddr_min)
goto no_va_found;
@@ -1443,7 +1443,7 @@ static gva_t ____gva_alloc(struct kvm_vm *vm, size_t sz,
enum kvm_mem_region_type type,
bool protected)
{
- uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
+ u64 pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
virt_pgd_alloc(vm);
gpa_t paddr = __vm_phy_pages_alloc(vm, pages,
@@ -1565,7 +1565,7 @@ gva_t gva_alloc_page(struct kvm_vm *vm)
* Within the VM given by @vm, creates a virtual translation for
* @npages starting at @vaddr to the page range starting at @paddr.
*/
-void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+void virt_map(struct kvm_vm *vm, u64 vaddr, u64 paddr,
unsigned int npages)
{
size_t page_size = vm->page_size;
@@ -1790,7 +1790,7 @@ void *vcpu_map_dirty_ring(struct kvm_vcpu *vcpu)
* Device Ioctl
*/
-int __kvm_has_device_attr(int dev_fd, uint32_t group, uint64_t attr)
+int __kvm_has_device_attr(int dev_fd, uint32_t group, u64 attr)
{
struct kvm_device_attr attribute = {
.group = group,
@@ -1801,7 +1801,7 @@ int __kvm_has_device_attr(int dev_fd, uint32_t group, uint64_t attr)
return ioctl(dev_fd, KVM_HAS_DEVICE_ATTR, &attribute);
}
-int __kvm_test_create_device(struct kvm_vm *vm, uint64_t type)
+int __kvm_test_create_device(struct kvm_vm *vm, u64 type)
{
struct kvm_create_device create_dev = {
.type = type,
@@ -1811,7 +1811,7 @@ int __kvm_test_create_device(struct kvm_vm *vm, uint64_t type)
return __vm_ioctl(vm, KVM_CREATE_DEVICE, &create_dev);
}
-int __kvm_create_device(struct kvm_vm *vm, uint64_t type)
+int __kvm_create_device(struct kvm_vm *vm, u64 type)
{
struct kvm_create_device create_dev = {
.type = type,
@@ -1825,7 +1825,7 @@ int __kvm_create_device(struct kvm_vm *vm, uint64_t type)
return err ? : create_dev.fd;
}
-int __kvm_device_attr_get(int dev_fd, uint32_t group, uint64_t attr, void *val)
+int __kvm_device_attr_get(int dev_fd, uint32_t group, u64 attr, void *val)
{
struct kvm_device_attr kvmattr = {
.group = group,
@@ -1837,7 +1837,7 @@ int __kvm_device_attr_get(int dev_fd, uint32_t group, uint64_t attr, void *val)
return __kvm_ioctl(dev_fd, KVM_GET_DEVICE_ATTR, &kvmattr);
}
-int __kvm_device_attr_set(int dev_fd, uint32_t group, uint64_t attr, void *val)
+int __kvm_device_attr_set(int dev_fd, uint32_t group, u64 attr, void *val)
{
struct kvm_device_attr kvmattr = {
.group = group,
@@ -1948,8 +1948,8 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) {
fprintf(stream, "%*sguest_phys: 0x%lx size: 0x%lx "
"host_virt: %p\n", indent + 2, "",
- (uint64_t) region->region.guest_phys_addr,
- (uint64_t) region->region.memory_size,
+ (u64)region->region.guest_phys_addr,
+ (u64)region->region.memory_size,
region->host_mem);
fprintf(stream, "%*sunused_phy_pages: ", indent + 2, "");
sparsebit_dump(stream, region->unused_phy_pages, 0);
@@ -2236,7 +2236,7 @@ struct kvm_stats_desc *read_stats_descriptors(int stats_fd,
* Read the data values of a specified stat from the binary stats interface.
*/
void read_stat_data(int stats_fd, struct kvm_stats_header *header,
- struct kvm_stats_desc *desc, uint64_t *data,
+ struct kvm_stats_desc *desc, u64 *data,
size_t max_elements)
{
size_t nr_elements = min_t(ssize_t, desc->size, max_elements);
@@ -2257,7 +2257,7 @@ void read_stat_data(int stats_fd, struct kvm_stats_header *header,
}
void kvm_get_stat(struct kvm_binary_stats *stats, const char *name,
- uint64_t *data, size_t max_elements)
+ u64 *data, size_t max_elements)
{
struct kvm_stats_desc *desc;
size_t size_desc;
diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index d51680509839..f6657bd34b80 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -16,7 +16,7 @@ struct memstress_args memstress_args;
* Guest virtual memory offset of the testing memory slot.
* Must not conflict with identity mapped test code.
*/
-static uint64_t guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
+static u64 guest_test_virt_mem = DEFAULT_GUEST_TEST_MEM;
struct vcpu_thread {
/* The index of the vCPU. */
@@ -49,10 +49,10 @@ void memstress_guest_code(uint32_t vcpu_idx)
struct memstress_args *args = &memstress_args;
struct memstress_vcpu_args *vcpu_args = &args->vcpu_args[vcpu_idx];
struct guest_random_state rand_state;
- uint64_t gva;
- uint64_t pages;
- uint64_t addr;
- uint64_t page;
+ u64 gva;
+ u64 pages;
+ u64 addr;
+ u64 page;
int i;
rand_state = new_guest_random_state(guest_random_seed + vcpu_idx);
@@ -76,9 +76,9 @@ void memstress_guest_code(uint32_t vcpu_idx)
addr = gva + (page * args->guest_page_size);
if (__guest_random_bool(&rand_state, args->write_percent))
- *(uint64_t *)addr = 0x0123456789ABCDEF;
+ *(u64 *)addr = 0x0123456789ABCDEF;
else
- READ_ONCE(*(uint64_t *)addr);
+ READ_ONCE(*(u64 *)addr);
}
GUEST_SYNC(1);
@@ -87,7 +87,7 @@ void memstress_guest_code(uint32_t vcpu_idx)
void memstress_setup_vcpus(struct kvm_vm *vm, int nr_vcpus,
struct kvm_vcpu *vcpus[],
- uint64_t vcpu_memory_bytes,
+ u64 vcpu_memory_bytes,
bool partition_vcpu_memory_access)
{
struct memstress_args *args = &memstress_args;
@@ -122,15 +122,15 @@ void memstress_setup_vcpus(struct kvm_vm *vm, int nr_vcpus,
}
struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
- uint64_t vcpu_memory_bytes, int slots,
+ u64 vcpu_memory_bytes, int slots,
enum vm_mem_backing_src_type backing_src,
bool partition_vcpu_memory_access)
{
struct memstress_args *args = &memstress_args;
struct kvm_vm *vm;
- uint64_t guest_num_pages, slot0_pages = 0;
- uint64_t backing_src_pagesz = get_backing_src_pagesz(backing_src);
- uint64_t region_end_gfn;
+ u64 guest_num_pages, slot0_pages = 0;
+ u64 backing_src_pagesz = get_backing_src_pagesz(backing_src);
+ u64 region_end_gfn;
int i;
pr_info("Testing guest mode: %s\n", vm_guest_mode_string(mode));
@@ -206,7 +206,7 @@ struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
/* Add extra memory slots for testing */
for (i = 0; i < slots; i++) {
- uint64_t region_pages = guest_num_pages / slots;
+ u64 region_pages = guest_num_pages / slots;
gpa_t region_start = args->gpa + region_pages * args->guest_page_size * i;
vm_userspace_mem_region_add(vm, backing_src, region_start,
@@ -248,7 +248,7 @@ void memstress_set_random_access(struct kvm_vm *vm, bool random_access)
sync_global_to_guest(vm, memstress_args.random_access);
}
-uint64_t __weak memstress_nested_pages(int nr_vcpus)
+u64 __weak memstress_nested_pages(int nr_vcpus)
{
return 0;
}
@@ -353,7 +353,7 @@ void memstress_get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int sl
}
void memstress_clear_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[],
- int slots, uint64_t pages_per_slot)
+ int slots, u64 pages_per_slot)
{
int i;
@@ -364,7 +364,7 @@ void memstress_clear_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[],
}
}
-unsigned long **memstress_alloc_bitmaps(int slots, uint64_t pages_per_slot)
+unsigned long **memstress_alloc_bitmaps(int slots, u64 pages_per_slot)
{
unsigned long **bitmaps;
int i;
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index c4717aad1b3c..df0403adccac 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -16,7 +16,7 @@
static gva_t exception_handlers;
-bool __vcpu_has_ext(struct kvm_vcpu *vcpu, uint64_t ext)
+bool __vcpu_has_ext(struct kvm_vcpu *vcpu, u64 ext)
{
unsigned long value = 0;
int ret;
@@ -26,23 +26,23 @@ bool __vcpu_has_ext(struct kvm_vcpu *vcpu, uint64_t ext)
return !ret && !!value;
}
-static uint64_t page_align(struct kvm_vm *vm, uint64_t v)
+static u64 page_align(struct kvm_vm *vm, u64 v)
{
return (v + vm->page_size) & ~(vm->page_size - 1);
}
-static uint64_t pte_addr(struct kvm_vm *vm, uint64_t entry)
+static u64 pte_addr(struct kvm_vm *vm, u64 entry)
{
return ((entry & PGTBL_PTE_ADDR_MASK) >> PGTBL_PTE_ADDR_SHIFT) <<
PGTBL_PAGE_SIZE_SHIFT;
}
-static uint64_t ptrs_per_pte(struct kvm_vm *vm)
+static u64 ptrs_per_pte(struct kvm_vm *vm)
{
- return PGTBL_PAGE_SIZE / sizeof(uint64_t);
+ return PGTBL_PAGE_SIZE / sizeof(u64);
}
-static uint64_t pte_index_mask[] = {
+static u64 pte_index_mask[] = {
PGTBL_L0_INDEX_MASK,
PGTBL_L1_INDEX_MASK,
PGTBL_L2_INDEX_MASK,
@@ -56,7 +56,7 @@ static uint32_t pte_index_shift[] = {
PGTBL_L3_INDEX_SHIFT,
};
-static uint64_t pte_index(struct kvm_vm *vm, gva_t gva, int level)
+static u64 pte_index(struct kvm_vm *vm, gva_t gva, int level)
{
TEST_ASSERT(level > -1,
"Negative page table level (%d) not possible", level);
@@ -79,9 +79,9 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
vm->pgd_created = true;
}
-void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
+void virt_arch_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr)
{
- uint64_t *ptep, next_ppn;
+ u64 *ptep, next_ppn;
int level = vm->pgtable_levels - 1;
TEST_ASSERT((vaddr % vm->page_size) == 0,
@@ -125,7 +125,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
- uint64_t *ptep;
+ u64 *ptep;
int level = vm->pgtable_levels - 1;
if (!vm->pgd_created)
@@ -153,11 +153,11 @@ gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
}
static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent,
- uint64_t page, int level)
+ u64 page, int level)
{
#ifdef DEBUG
static const char *const type[] = { "pte", "pmd", "pud", "p4d"};
- uint64_t pte, *ptep;
+ u64 pte, *ptep;
if (level < 0)
return;
@@ -177,7 +177,7 @@ static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent,
void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
{
int level = vm->pgtable_levels - 1;
- uint64_t pgd, *ptep;
+ u64 pgd, *ptep;
if (!vm->pgd_created)
return;
@@ -342,7 +342,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
{
va_list ap;
- uint64_t id = RISCV_CORE_REG(regs.a0);
+ u64 id = RISCV_CORE_REG(regs.a0);
int i;
TEST_ASSERT(num >= 1 && num <= 8, "Unsupported number of args,\n"
@@ -377,7 +377,7 @@ void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
id = RISCV_CORE_REG(regs.a7);
break;
}
- vcpu_set_reg(vcpu, id, va_arg(ap, uint64_t));
+ vcpu_set_reg(vcpu, id, va_arg(ap, u64));
}
va_end(ap);
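
A minimal sketch of the per-level index math that the pte_index() hunk
above implements, assuming Sv48-style 4KiB paging where
PGTBL_Lx_INDEX_SHIFT = 12 + 9 * x and each level has a 9-bit index
(these constants are assumptions, not quoted from the headers):

static u64 example_pte_index(u64 gva, int level)
{
	unsigned int shift = 12 + 9 * level;	/* L0 = 12, L1 = 21, ... */

	return (gva >> shift) & 0x1ff;		/* 512 PTEs per table */
}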
diff --git a/tools/testing/selftests/kvm/lib/s390/diag318_test_handler.c b/tools/testing/selftests/kvm/lib/s390/diag318_test_handler.c
index 2c432fa164f1..f5480473f192 100644
--- a/tools/testing/selftests/kvm/lib/s390/diag318_test_handler.c
+++ b/tools/testing/selftests/kvm/lib/s390/diag318_test_handler.c
@@ -13,7 +13,7 @@
static void guest_code(void)
{
- uint64_t diag318_info = 0x12345678;
+ u64 diag318_info = 0x12345678;
asm volatile ("diag %0,0,0x318\n" : : "d" (diag318_info));
}
@@ -23,13 +23,13 @@ static void guest_code(void)
* we create an ad-hoc VM here to handle the instruction then extract the
* necessary data. It is up to the caller to decide what to do with that data.
*/
-static uint64_t diag318_handler(void)
+static u64 diag318_handler(void)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
struct kvm_run *run;
- uint64_t reg;
- uint64_t diag318_info;
+ u64 reg;
+ u64 diag318_info;
vm = vm_create_with_one_vcpu(&vcpu, guest_code);
vcpu_run(vcpu);
@@ -51,9 +51,9 @@ static uint64_t diag318_handler(void)
return diag318_info;
}
-uint64_t get_diag318_info(void)
+u64 get_diag318_info(void)
{
- static uint64_t diag318_info;
+ static u64 diag318_info;
static bool printed_skip;
/*
diff --git a/tools/testing/selftests/kvm/lib/s390/facility.c b/tools/testing/selftests/kvm/lib/s390/facility.c
index d540812d911a..9a778054f07f 100644
--- a/tools/testing/selftests/kvm/lib/s390/facility.c
+++ b/tools/testing/selftests/kvm/lib/s390/facility.c
@@ -10,5 +10,5 @@
#include "facility.h"
-uint64_t stfl_doublewords[NB_STFL_DOUBLEWORDS];
+u64 stfl_doublewords[NB_STFL_DOUBLEWORDS];
bool stfle_flag;
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index 2baafbe608ac..96f98cdca15b 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -34,9 +34,9 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
* a page table (ri == 4). Returns a suitable region/segment table entry
* which points to the freshly allocated pages.
*/
-static uint64_t virt_alloc_region(struct kvm_vm *vm, int ri)
+static u64 virt_alloc_region(struct kvm_vm *vm, int ri)
{
- uint64_t taddr;
+ u64 taddr;
taddr = vm_phy_pages_alloc(vm, ri < 4 ? PAGES_PER_REGION : 1,
KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
@@ -47,10 +47,10 @@ static uint64_t virt_alloc_region(struct kvm_vm *vm, int ri)
| ((ri < 4 ? (PAGES_PER_REGION - 1) : 0) & REGION_ENTRY_LENGTH);
}
-void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
+void virt_arch_pg_map(struct kvm_vm *vm, u64 gva, u64 gpa)
{
int ri, idx;
- uint64_t *entry;
+ u64 *entry;
TEST_ASSERT((gva % vm->page_size) == 0,
"Virtual address not on page boundary,\n"
@@ -89,7 +89,7 @@ void virt_arch_pg_map(struct kvm_vm *vm, uint64_t gva, uint64_t gpa)
gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
int ri, idx;
- uint64_t *entry;
+ u64 *entry;
TEST_ASSERT(vm->page_size == PAGE_SIZE, "Unsupported page size: 0x%x",
vm->page_size);
@@ -112,9 +112,9 @@ gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
}
static void virt_dump_ptes(FILE *stream, struct kvm_vm *vm, uint8_t indent,
- uint64_t ptea_start)
+ u64 ptea_start)
{
- uint64_t *pte, ptea;
+ u64 *pte, ptea;
for (ptea = ptea_start; ptea < ptea_start + 0x100 * 8; ptea += 8) {
pte = addr_gpa2hva(vm, ptea);
@@ -126,9 +126,9 @@ static void virt_dump_ptes(FILE *stream, struct kvm_vm *vm, uint8_t indent,
}
static void virt_dump_region(FILE *stream, struct kvm_vm *vm, uint8_t indent,
- uint64_t reg_tab_addr)
+ u64 reg_tab_addr)
{
- uint64_t addr, *entry;
+ u64 addr, *entry;
for (addr = reg_tab_addr; addr < reg_tab_addr + 0x400 * 8; addr += 8) {
entry = addr_gpa2hva(vm, addr);
@@ -163,7 +163,7 @@ void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code)
struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
{
size_t stack_size = DEFAULT_STACK_PGS * getpagesize();
- uint64_t stack_vaddr;
+ u64 stack_vaddr;
struct kvm_regs regs;
struct kvm_sregs sregs;
struct kvm_vcpu *vcpu;
@@ -206,7 +206,7 @@ void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
vcpu_regs_get(vcpu, &regs);
for (i = 0; i < num; i++)
- regs.gprs[i + 2] = va_arg(ap, uint64_t);
+ regs.gprs[i + 2] = va_arg(ap, u64);
vcpu_regs_set(vcpu, &regs);
va_end(ap);
diff --git a/tools/testing/selftests/kvm/lib/sparsebit.c b/tools/testing/selftests/kvm/lib/sparsebit.c
index cfed9d26cc71..df6d888d71e9 100644
--- a/tools/testing/selftests/kvm/lib/sparsebit.c
+++ b/tools/testing/selftests/kvm/lib/sparsebit.c
@@ -76,8 +76,8 @@
* the use of a binary-search tree, where each node contains at least
* the following members:
*
- * typedef uint64_t sparsebit_idx_t;
- * typedef uint64_t sparsebit_num_t;
+ * typedef u64 sparsebit_idx_t;
+ * typedef u64 sparsebit_num_t;
*
* sparsebit_idx_t idx;
* uint32_t mask;
@@ -2056,9 +2056,9 @@ unsigned char get8(void)
return ch;
}
-uint64_t get64(void)
+u64 get64(void)
{
- uint64_t x;
+ u64 x;
x = get8();
x = (x << 8) | get8();
@@ -2075,8 +2075,8 @@ int main(void)
s = sparsebit_alloc();
for (;;) {
uint8_t op = get8() & 0xf;
- uint64_t first = get64();
- uint64_t last = get64();
+ u64 first = get64();
+ u64 last = get64();
operate(op, first, last);
}
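
To make the node encoding from the sparsebit comment above concrete,
here is a hedged sketch; the hunk shows only the idx and mask members,
and num_after (a run of consecutively-set bits following the mask
window) is assumed from the full comment in the file:

static bool example_node_test_bit(u64 idx, u32 mask, u64 num_after, u64 b)
{
	/* Bits inside the 32-bit mask window starting at idx. */
	if (b >= idx && b < idx + 32)
		return (mask >> (b - idx)) & 1;

	/* num_after bits immediately after the window are all set. */
	return b >= idx + 32 && b < idx + 32 + num_after;
}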
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 8ed0b74ae837..a23dbb796f2e 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -31,7 +31,7 @@ struct guest_random_state new_guest_random_state(uint32_t seed)
uint32_t guest_random_u32(struct guest_random_state *state)
{
- state->seed = (uint64_t)state->seed * 48271 % ((uint32_t)(1 << 31) - 1);
+ state->seed = (u64)state->seed * 48271 % ((uint32_t)(1 << 31) - 1);
return state->seed;
}
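
The guest_random_u32() hunk above is a Lehmer ("MINSTD") generator,
x' = x * 48271 mod (2^31 - 1); the widening cast the patch touches is
what keeps the intermediate product from overflowing 32 bits. A
self-contained sketch:

#include <stdint.h>

static uint32_t minstd_next(uint32_t x)
{
	/* 31-bit state times a 16-bit multiplier fits easily in 64 bits. */
	return (uint64_t)x * 48271 % ((1u << 31) - 1);
}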
diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index 60297819d508..fd7609c83473 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -14,7 +14,7 @@ struct ucall_header {
struct ucall ucalls[KVM_MAX_VCPUS];
};
-int ucall_nr_pages_required(uint64_t page_size)
+int ucall_nr_pages_required(u64 page_size)
{
return align_up(sizeof(struct ucall_header), page_size) / page_size;
}
@@ -79,7 +79,7 @@ static void ucall_free(struct ucall *uc)
clear_bit(uc - ucall_pool->ucalls, ucall_pool->in_use);
}
-void ucall_assert(uint64_t cmd, const char *exp, const char *file,
+void ucall_assert(u64 cmd, const char *exp, const char *file,
unsigned int line, const char *fmt, ...)
{
struct ucall *uc;
@@ -88,8 +88,8 @@ void ucall_assert(uint64_t cmd, const char *exp, const char *file,
uc = ucall_alloc();
uc->cmd = cmd;
- WRITE_ONCE(uc->args[GUEST_ERROR_STRING], (uint64_t)(exp));
- WRITE_ONCE(uc->args[GUEST_FILE], (uint64_t)(file));
+ WRITE_ONCE(uc->args[GUEST_ERROR_STRING], (u64)(exp));
+ WRITE_ONCE(uc->args[GUEST_FILE], (u64)(file));
WRITE_ONCE(uc->args[GUEST_LINE], line);
va_start(va, fmt);
@@ -101,7 +101,7 @@ void ucall_assert(uint64_t cmd, const char *exp, const char *file,
ucall_free(uc);
}
-void ucall_fmt(uint64_t cmd, const char *fmt, ...)
+void ucall_fmt(u64 cmd, const char *fmt, ...)
{
struct ucall *uc;
va_list va;
@@ -118,7 +118,7 @@ void ucall_fmt(uint64_t cmd, const char *fmt, ...)
ucall_free(uc);
}
-void ucall(uint64_t cmd, int nargs, ...)
+void ucall(u64 cmd, int nargs, ...)
{
struct ucall *uc;
va_list va;
@@ -132,7 +132,7 @@ void ucall(uint64_t cmd, int nargs, ...)
va_start(va, nargs);
for (i = 0; i < nargs; ++i)
- WRITE_ONCE(uc->args[i], va_arg(va, uint64_t));
+ WRITE_ONCE(uc->args[i], va_arg(va, u64));
va_end(va);
ucall_arch_do_ucall((gva_t)uc->hva);
@@ -140,7 +140,7 @@ void ucall(uint64_t cmd, int nargs, ...)
ucall_free(uc);
}
-uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
+u64 get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
{
struct ucall ucall;
void *addr;
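
For context, the typical consumer of get_ucall() above is a host-side
run loop; a sketch using the selftests' existing UCALL_* commands (the
loop shape is illustrative, not taken from this series):

static void example_run_to_completion(struct kvm_vcpu *vcpu)
{
	struct ucall uc;

	for (;;) {
		vcpu_run(vcpu);

		switch (get_ucall(vcpu, &uc)) {
		case UCALL_SYNC:	/* guest checkpoint, keep going */
			continue;
		case UCALL_ABORT:
			REPORT_GUEST_ASSERT(uc);
		case UCALL_DONE:
			return;
		}
	}
}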
diff --git a/tools/testing/selftests/kvm/lib/userfaultfd_util.c b/tools/testing/selftests/kvm/lib/userfaultfd_util.c
index 5bde176cedd5..516ae5bd7576 100644
--- a/tools/testing/selftests/kvm/lib/userfaultfd_util.c
+++ b/tools/testing/selftests/kvm/lib/userfaultfd_util.c
@@ -100,8 +100,8 @@ static void *uffd_handler_thread_fn(void *arg)
}
struct uffd_desc *uffd_setup_demand_paging(int uffd_mode, useconds_t delay,
- void *hva, uint64_t len,
- uint64_t num_readers,
+ void *hva, u64 len,
+ u64 num_readers,
uffd_handler_t handler)
{
struct uffd_desc *uffd_desc;
@@ -109,7 +109,7 @@ struct uffd_desc *uffd_setup_demand_paging(int uffd_mode, useconds_t delay,
int uffd;
struct uffdio_api uffdio_api;
struct uffdio_register uffdio_register;
- uint64_t expected_ioctls = ((uint64_t) 1) << _UFFDIO_COPY;
+ u64 expected_ioctls = ((u64)1) << _UFFDIO_COPY;
int ret, i;
PER_PAGE_DEBUG("Userfaultfd %s mode, faults resolved with %s\n",
@@ -132,7 +132,7 @@ struct uffd_desc *uffd_setup_demand_paging(int uffd_mode, useconds_t delay,
/* In order to get minor faults, prefault via the alias. */
if (is_minor)
- expected_ioctls = ((uint64_t) 1) << _UFFDIO_CONTINUE;
+ expected_ioctls = ((u64) 1) << _UFFDIO_CONTINUE;
uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
TEST_ASSERT(uffd >= 0, "uffd creation failed, errno: %d", errno);
@@ -141,9 +141,9 @@ struct uffd_desc *uffd_setup_demand_paging(int uffd_mode, useconds_t delay,
uffdio_api.features = 0;
TEST_ASSERT(ioctl(uffd, UFFDIO_API, &uffdio_api) != -1,
"ioctl UFFDIO_API failed: %" PRIu64,
- (uint64_t)uffdio_api.api);
+ (u64)uffdio_api.api);
- uffdio_register.range.start = (uint64_t)hva;
+ uffdio_register.range.start = (u64)hva;
uffdio_register.range.len = len;
uffdio_register.mode = uffd_mode;
TEST_ASSERT(ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) != -1,
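
The expected_ioctls mask built above is checked against what
UFFDIO_REGISTER reports back; roughly (the actual assertion sits past
the end of this hunk):

/* uffdio_register.ioctls is filled in by the kernel on return. */
TEST_ASSERT((uffdio_register.ioctls & expected_ioctls) == expected_ioctls,
	    "missing userfaultfd ioctls");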
diff --git a/tools/testing/selftests/kvm/lib/x86/apic.c b/tools/testing/selftests/kvm/lib/x86/apic.c
index 89153a333e83..5182fd0d6a76 100644
--- a/tools/testing/selftests/kvm/lib/x86/apic.c
+++ b/tools/testing/selftests/kvm/lib/x86/apic.c
@@ -14,7 +14,7 @@ void apic_disable(void)
void xapic_enable(void)
{
- uint64_t val = rdmsr(MSR_IA32_APICBASE);
+ u64 val = rdmsr(MSR_IA32_APICBASE);
/* Per SDM: to enable xAPIC when in x2APIC must first disable APIC */
if (val & MSR_IA32_APICBASE_EXTD) {
diff --git a/tools/testing/selftests/kvm/lib/x86/hyperv.c b/tools/testing/selftests/kvm/lib/x86/hyperv.c
index 2284bc936404..2eb3f0d15576 100644
--- a/tools/testing/selftests/kvm/lib/x86/hyperv.c
+++ b/tools/testing/selftests/kvm/lib/x86/hyperv.c
@@ -100,9 +100,9 @@ struct hyperv_test_pages *vcpu_alloc_hyperv_test_pages(struct kvm_vm *vm,
return hv;
}
-int enable_vp_assist(uint64_t vp_assist_pa, void *vp_assist)
+int enable_vp_assist(u64 vp_assist_pa, void *vp_assist)
{
- uint64_t val = (vp_assist_pa & HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK) |
+ u64 val = (vp_assist_pa & HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK) |
HV_X64_MSR_VP_ASSIST_PAGE_ENABLE;
wrmsr(HV_X64_MSR_VP_ASSIST_PAGE, val);
diff --git a/tools/testing/selftests/kvm/lib/x86/memstress.c b/tools/testing/selftests/kvm/lib/x86/memstress.c
index e5249c442318..4a72cb8e1f94 100644
--- a/tools/testing/selftests/kvm/lib/x86/memstress.c
+++ b/tools/testing/selftests/kvm/lib/x86/memstress.c
@@ -15,7 +15,7 @@
#include "processor.h"
#include "vmx.h"
-void memstress_l2_guest_code(uint64_t vcpu_id)
+void memstress_l2_guest_code(u64 vcpu_id)
{
memstress_guest_code(vcpu_id);
vmcall();
@@ -29,7 +29,7 @@ __asm__(
" ud2;"
);
-static void memstress_l1_guest_code(struct vmx_pages *vmx, uint64_t vcpu_id)
+static void memstress_l1_guest_code(struct vmx_pages *vmx, u64 vcpu_id)
{
#define L2_GUEST_STACK_SIZE 64
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
@@ -49,7 +49,7 @@ static void memstress_l1_guest_code(struct vmx_pages *vmx, uint64_t vcpu_id)
GUEST_DONE();
}
-uint64_t memstress_nested_pages(int nr_vcpus)
+u64 memstress_nested_pages(int nr_vcpus)
{
/*
* 513 page tables is enough to identity-map 256 TiB of L2 with 1G
@@ -61,7 +61,7 @@ uint64_t memstress_nested_pages(int nr_vcpus)
void memstress_setup_ept(struct vmx_pages *vmx, struct kvm_vm *vm)
{
- uint64_t start, end;
+ u64 start, end;
prepare_eptp(vmx, vm, 0);
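
The "513 page tables" comment in memstress_nested_pages() above works
out as follows, assuming 4-level EPT with 512 entries per table and
1GiB leaf mappings:

1 PML4 table + 512 PDP tables = 513 pages, and each PDP table maps
512 x 1GiB = 512GiB, so 512 PDP tables cover 512 x 512GiB = 256TiB.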
diff --git a/tools/testing/selftests/kvm/lib/x86/pmu.c b/tools/testing/selftests/kvm/lib/x86/pmu.c
index f31f0427c17c..e80e4d46fb29 100644
--- a/tools/testing/selftests/kvm/lib/x86/pmu.c
+++ b/tools/testing/selftests/kvm/lib/x86/pmu.c
@@ -10,7 +10,7 @@
#include "kvm_util.h"
#include "pmu.h"
-const uint64_t intel_pmu_arch_events[] = {
+const u64 intel_pmu_arch_events[] = {
INTEL_ARCH_CPU_CYCLES,
INTEL_ARCH_INSTRUCTIONS_RETIRED,
INTEL_ARCH_REFERENCE_CYCLES,
@@ -22,7 +22,7 @@ const uint64_t intel_pmu_arch_events[] = {
};
kvm_static_assert(ARRAY_SIZE(intel_pmu_arch_events) == NR_INTEL_ARCH_EVENTS);
-const uint64_t amd_pmu_zen_events[] = {
+const u64 amd_pmu_zen_events[] = {
AMD_ZEN_CORE_CYCLES,
AMD_ZEN_INSTRUCTIONS_RETIRED,
AMD_ZEN_BRANCHES_RETIRED,
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 5a97cf829291..e06dec2fddc2 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -21,7 +21,7 @@ gva_t exception_handlers;
bool host_cpu_is_amd;
bool host_cpu_is_intel;
bool is_forced_emulation_enabled;
-uint64_t guest_tsc_khz;
+u64 guest_tsc_khz;
static void regs_dump(FILE *stream, struct kvm_regs *regs, uint8_t indent)
{
@@ -134,11 +134,11 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
}
}
-static void *virt_get_pte(struct kvm_vm *vm, uint64_t *parent_pte,
- uint64_t vaddr, int level)
+static void *virt_get_pte(struct kvm_vm *vm, u64 *parent_pte,
+ u64 vaddr, int level)
{
- uint64_t pt_gpa = PTE_GET_PA(*parent_pte);
- uint64_t *page_table = addr_gpa2hva(vm, pt_gpa);
+ u64 pt_gpa = PTE_GET_PA(*parent_pte);
+ u64 *page_table = addr_gpa2hva(vm, pt_gpa);
int index = (vaddr >> PG_LEVEL_SHIFT(level)) & 0x1ffu;
TEST_ASSERT((*parent_pte & PTE_PRESENT_MASK) || parent_pte == &vm->pgd,
@@ -148,14 +148,14 @@ static void *virt_get_pte(struct kvm_vm *vm, uint64_t *parent_pte,
return &page_table[index];
}
-static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
- uint64_t *parent_pte,
- uint64_t vaddr,
- uint64_t paddr,
- int current_level,
- int target_level)
+static u64 *virt_create_upper_pte(struct kvm_vm *vm,
+ u64 *parent_pte,
+ u64 vaddr,
+ u64 paddr,
+ int current_level,
+ int target_level)
{
- uint64_t *pte = virt_get_pte(vm, parent_pte, vaddr, current_level);
+ u64 *pte = virt_get_pte(vm, parent_pte, vaddr, current_level);
paddr = vm_untag_gpa(vm, paddr);
@@ -181,11 +181,11 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
return pte;
}
-void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
+void __virt_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr, int level)
{
- const uint64_t pg_size = PG_LEVEL_SIZE(level);
- uint64_t *pml4e, *pdpe, *pde;
- uint64_t *pte;
+ const u64 pg_size = PG_LEVEL_SIZE(level);
+ u64 *pml4e, *pdpe, *pde;
+ u64 *pte;
TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K,
"Unknown or unsupported guest mode, mode: 0x%x", vm->mode);
@@ -237,16 +237,16 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
*pte |= vm->arch.s_bit;
}
-void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
+void virt_arch_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr)
{
__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
}
-void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
- uint64_t nr_bytes, int level)
+void virt_map_level(struct kvm_vm *vm, u64 vaddr, u64 paddr,
+ u64 nr_bytes, int level)
{
- uint64_t pg_size = PG_LEVEL_SIZE(level);
- uint64_t nr_pages = nr_bytes / pg_size;
+ u64 pg_size = PG_LEVEL_SIZE(level);
+ u64 nr_pages = nr_bytes / pg_size;
int i;
TEST_ASSERT(nr_bytes % pg_size == 0,
@@ -261,7 +261,7 @@ void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
}
}
-static bool vm_is_target_pte(uint64_t *pte, int *level, int current_level)
+static bool vm_is_target_pte(u64 *pte, int *level, int current_level)
{
if (*pte & PTE_LARGE_MASK) {
TEST_ASSERT(*level == PG_LEVEL_NONE ||
@@ -273,10 +273,9 @@ static bool vm_is_target_pte(uint64_t *pte, int *level, int current_level)
return *level == current_level;
}
-uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
- int *level)
+u64 *__vm_get_page_table_entry(struct kvm_vm *vm, u64 vaddr, int *level)
{
- uint64_t *pml4e, *pdpe, *pde;
+ u64 *pml4e, *pdpe, *pde;
TEST_ASSERT(!vm->arch.is_pt_protected,
"Walking page tables of protected guests is impossible");
@@ -312,7 +311,7 @@ uint64_t *__vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr,
return virt_get_pte(vm, pde, vaddr, PG_LEVEL_4K);
}
-uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr)
+u64 *vm_get_page_table_entry(struct kvm_vm *vm, u64 vaddr)
{
int level = PG_LEVEL_4K;
@@ -321,10 +320,10 @@ uint64_t *vm_get_page_table_entry(struct kvm_vm *vm, uint64_t vaddr)
void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
{
- uint64_t *pml4e, *pml4e_start;
- uint64_t *pdpe, *pdpe_start;
- uint64_t *pde, *pde_start;
- uint64_t *pte, *pte_start;
+ u64 *pml4e, *pml4e_start;
+ u64 *pdpe, *pdpe_start;
+ u64 *pde, *pde_start;
+ u64 *pte, *pte_start;
if (!vm->pgd_created)
return;
@@ -334,7 +333,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
fprintf(stream, "%*s index hvaddr gpaddr "
"addr w exec dirty\n",
indent, "");
- pml4e_start = (uint64_t *) addr_gpa2hva(vm, vm->pgd);
+ pml4e_start = addr_gpa2hva(vm, vm->pgd);
for (uint16_t n1 = 0; n1 <= 0x1ffu; n1++) {
pml4e = &pml4e_start[n1];
if (!(*pml4e & PTE_PRESENT_MASK))
@@ -386,10 +385,10 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
!!(*pte & PTE_WRITABLE_MASK),
!!(*pte & PTE_NX_MASK),
!!(*pte & PTE_DIRTY_MASK),
- ((uint64_t) n1 << 27)
- | ((uint64_t) n2 << 18)
- | ((uint64_t) n3 << 9)
- | ((uint64_t) n4));
+ ((u64)n1 << 27)
+ | ((u64)n2 << 18)
+ | ((u64)n3 << 9)
+ | ((u64)n4));
}
}
}
@@ -466,7 +465,7 @@ static void kvm_seg_set_kernel_data_64bit(struct kvm_segment *segp)
gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
{
int level = PG_LEVEL_NONE;
- uint64_t *pte = __vm_get_page_table_entry(vm, gva, &level);
+ u64 *pte = __vm_get_page_table_entry(vm, gva, &level);
TEST_ASSERT(*pte & PTE_PRESENT_MASK,
"Leaf PTE not PRESENT for gva: 0x%08lx", gva);
@@ -782,7 +781,7 @@ uint32_t kvm_cpuid_property(const struct kvm_cpuid2 *cpuid,
property.reg, property.lo_bit, property.hi_bit);
}
-uint64_t kvm_get_feature_msr(uint64_t msr_index)
+u64 kvm_get_feature_msr(u64 msr_index)
{
struct {
struct kvm_msrs header;
@@ -801,7 +800,7 @@ uint64_t kvm_get_feature_msr(uint64_t msr_index)
return buffer.entry.data;
}
-void __vm_xsave_require_permission(uint64_t xfeature, const char *name)
+void __vm_xsave_require_permission(u64 xfeature, const char *name)
{
int kvm_fd;
u64 bitmask;
@@ -902,7 +901,7 @@ void vcpu_set_or_clear_cpuid_feature(struct kvm_vcpu *vcpu,
vcpu_set_cpuid(vcpu);
}
-uint64_t vcpu_get_msr(struct kvm_vcpu *vcpu, uint64_t msr_index)
+u64 vcpu_get_msr(struct kvm_vcpu *vcpu, u64 msr_index)
{
struct {
struct kvm_msrs header;
@@ -917,7 +916,7 @@ uint64_t vcpu_get_msr(struct kvm_vcpu *vcpu, uint64_t msr_index)
return buffer.entry.data;
}
-int _vcpu_set_msr(struct kvm_vcpu *vcpu, uint64_t msr_index, uint64_t msr_value)
+int _vcpu_set_msr(struct kvm_vcpu *vcpu, u64 msr_index, u64 msr_value)
{
struct {
struct kvm_msrs header;
@@ -945,22 +944,22 @@ void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
vcpu_regs_get(vcpu, &regs);
if (num >= 1)
- regs.rdi = va_arg(ap, uint64_t);
+ regs.rdi = va_arg(ap, u64);
if (num >= 2)
- regs.rsi = va_arg(ap, uint64_t);
+ regs.rsi = va_arg(ap, u64);
if (num >= 3)
- regs.rdx = va_arg(ap, uint64_t);
+ regs.rdx = va_arg(ap, u64);
if (num >= 4)
- regs.rcx = va_arg(ap, uint64_t);
+ regs.rcx = va_arg(ap, u64);
if (num >= 5)
- regs.r8 = va_arg(ap, uint64_t);
+ regs.r8 = va_arg(ap, u64);
if (num >= 6)
- regs.r9 = va_arg(ap, uint64_t);
+ regs.r9 = va_arg(ap, u64);
vcpu_regs_set(vcpu, &regs);
va_end(ap);
@@ -1183,7 +1182,7 @@ const struct kvm_cpuid_entry2 *get_cpuid_entry(const struct kvm_cpuid2 *cpuid,
#define X86_HYPERCALL(inputs...) \
({ \
- uint64_t r; \
+ u64 r; \
\
asm volatile("test %[use_vmmcall], %[use_vmmcall]\n\t" \
"jnz 1f\n\t" \
@@ -1197,18 +1196,17 @@ const struct kvm_cpuid_entry2 *get_cpuid_entry(const struct kvm_cpuid2 *cpuid,
r; \
})
-uint64_t kvm_hypercall(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2,
- uint64_t a3)
+u64 kvm_hypercall(u64 nr, u64 a0, u64 a1, u64 a2, u64 a3)
{
return X86_HYPERCALL("a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3));
}
-uint64_t __xen_hypercall(uint64_t nr, uint64_t a0, void *a1)
+u64 __xen_hypercall(u64 nr, u64 a0, void *a1)
{
return X86_HYPERCALL("a"(nr), "D"(a0), "S"(a1));
}
-void xen_hypercall(uint64_t nr, uint64_t a0, void *a1)
+void xen_hypercall(u64 nr, u64 a0, void *a1)
{
GUEST_ASSERT(!__xen_hypercall(nr, a0, a1));
}
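
A hedged guest-side usage sketch for kvm_hypercall() above; per the
macro's constraints, nr lands in rax and a0-a3 in rbx/rcx/rdx/rsi (the
chosen hypercall number is just an illustration):

static void example_guest_code(void)
{
	/* KVM_HC_VAPIC_POLL_IRQ (nr 1) takes no arguments. */
	u64 ret = kvm_hypercall(KVM_HC_VAPIC_POLL_IRQ, 0, 0, 0, 0);

	GUEST_SYNC(ret);	/* hand the result back to the host */
}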
diff --git a/tools/testing/selftests/kvm/lib/x86/sev.c b/tools/testing/selftests/kvm/lib/x86/sev.c
index 5fcd26ac2def..e677eeeb05f7 100644
--- a/tools/testing/selftests/kvm/lib/x86/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86/sev.c
@@ -27,8 +27,8 @@ static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *regio
sev_register_encrypted_memory(vm, region);
sparsebit_for_each_set_range(protected_phy_pages, i, j) {
- const uint64_t size = (j - i + 1) * vm->page_size;
- const uint64_t offset = (i - lowest_page_in_region) * vm->page_size;
+ const u64 size = (j - i + 1) * vm->page_size;
+ const u64 offset = (i - lowest_page_in_region) * vm->page_size;
sev_launch_update_data(vm, gpa_base + offset, size);
}
diff --git a/tools/testing/selftests/kvm/lib/x86/svm.c b/tools/testing/selftests/kvm/lib/x86/svm.c
index 104fe606d7d1..d8ed7f0f8235 100644
--- a/tools/testing/selftests/kvm/lib/x86/svm.c
+++ b/tools/testing/selftests/kvm/lib/x86/svm.c
@@ -62,14 +62,14 @@ static void vmcb_set_seg(struct vmcb_seg *seg, u16 selector,
void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_rsp)
{
struct vmcb *vmcb = svm->vmcb;
- uint64_t vmcb_gpa = svm->vmcb_gpa;
+ u64 vmcb_gpa = svm->vmcb_gpa;
struct vmcb_save_area *save = &vmcb->save;
struct vmcb_control_area *ctrl = &vmcb->control;
u32 data_seg_attr = 3 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
| SVM_SELECTOR_DB_MASK | SVM_SELECTOR_G_MASK;
u32 code_seg_attr = 9 | SVM_SELECTOR_S_MASK | SVM_SELECTOR_P_MASK
| SVM_SELECTOR_L_MASK | SVM_SELECTOR_G_MASK;
- uint64_t efer;
+ u64 efer;
efer = rdmsr(MSR_EFER);
wrmsr(MSR_EFER, efer | EFER_SVME);
@@ -131,7 +131,7 @@ void generic_svm_setup(struct svm_test_data *svm, void *guest_rip, void *guest_r
* for now. registers involved in LOAD/SAVE_GPR_C are eventually
* unmodified so they do not need to be in the clobber list.
*/
-void run_guest(struct vmcb *vmcb, uint64_t vmcb_gpa)
+void run_guest(struct vmcb *vmcb, u64 vmcb_gpa)
{
asm volatile (
"vmload %[vmcb_gpa]\n\t"
diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index ea37261b207c..11f89ffc28bc 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -20,27 +20,27 @@ struct hv_enlightened_vmcs *current_evmcs;
struct hv_vp_assist_page *current_vp_assist;
struct eptPageTableEntry {
- uint64_t readable:1;
- uint64_t writable:1;
- uint64_t executable:1;
- uint64_t memory_type:3;
- uint64_t ignore_pat:1;
- uint64_t page_size:1;
- uint64_t accessed:1;
- uint64_t dirty:1;
- uint64_t ignored_11_10:2;
- uint64_t address:40;
- uint64_t ignored_62_52:11;
- uint64_t suppress_ve:1;
+ u64 readable:1;
+ u64 writable:1;
+ u64 executable:1;
+ u64 memory_type:3;
+ u64 ignore_pat:1;
+ u64 page_size:1;
+ u64 accessed:1;
+ u64 dirty:1;
+ u64 ignored_11_10:2;
+ u64 address:40;
+ u64 ignored_62_52:11;
+ u64 suppress_ve:1;
};
struct eptPageTablePointer {
- uint64_t memory_type:3;
- uint64_t page_walk_length:3;
- uint64_t ad_enabled:1;
- uint64_t reserved_11_07:5;
- uint64_t address:40;
- uint64_t reserved_63_52:12;
+ u64 memory_type:3;
+ u64 page_walk_length:3;
+ u64 ad_enabled:1;
+ u64 reserved_11_07:5;
+ u64 address:40;
+ u64 reserved_63_52:12;
};
int vcpu_enable_evmcs(struct kvm_vcpu *vcpu)
{
@@ -113,8 +113,8 @@ vcpu_alloc_vmx(struct kvm_vm *vm, gva_t *p_vmx_gva)
bool prepare_for_vmx_operation(struct vmx_pages *vmx)
{
- uint64_t feature_control;
- uint64_t required;
+ u64 feature_control;
+ u64 required;
unsigned long cr0;
unsigned long cr4;
@@ -173,7 +173,7 @@ bool load_vmcs(struct vmx_pages *vmx)
return true;
}
-static bool ept_vpid_cap_supported(uint64_t mask)
+static bool ept_vpid_cap_supported(u64 mask)
{
return rdmsr(MSR_IA32_VMX_EPT_VPID_CAP) & mask;
}
@@ -196,7 +196,7 @@ static inline void init_vmcs_control_fields(struct vmx_pages *vmx)
vmwrite(PIN_BASED_VM_EXEC_CONTROL, rdmsr(MSR_IA32_VMX_TRUE_PINBASED_CTLS));
if (vmx->eptp_gpa) {
- uint64_t ept_paddr;
+ u64 ept_paddr;
struct eptPageTablePointer eptp = {
.memory_type = X86_MEMTYPE_WB,
.page_walk_length = 3, /* + 1 */
@@ -347,8 +347,8 @@ static inline void init_vmcs_guest_state(void *rip, void *rsp)
vmwrite(GUEST_GDTR_BASE, vmreadz(HOST_GDTR_BASE));
vmwrite(GUEST_IDTR_BASE, vmreadz(HOST_IDTR_BASE));
vmwrite(GUEST_DR7, 0x400);
- vmwrite(GUEST_RSP, (uint64_t)rsp);
- vmwrite(GUEST_RIP, (uint64_t)rip);
+ vmwrite(GUEST_RSP, (u64)rsp);
+ vmwrite(GUEST_RIP, (u64)rip);
vmwrite(GUEST_RFLAGS, 2);
vmwrite(GUEST_PENDING_DBG_EXCEPTIONS, 0);
vmwrite(GUEST_SYSENTER_ESP, vmreadz(HOST_IA32_SYSENTER_ESP));
@@ -364,8 +364,8 @@ void prepare_vmcs(struct vmx_pages *vmx, void *guest_rip, void *guest_rsp)
static void nested_create_pte(struct kvm_vm *vm,
struct eptPageTableEntry *pte,
- uint64_t nested_paddr,
- uint64_t paddr,
+ u64 nested_paddr,
+ u64 paddr,
int current_level,
int target_level)
{
@@ -395,9 +395,9 @@ static void nested_create_pte(struct kvm_vm *vm,
void __nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint64_t nested_paddr, uint64_t paddr, int target_level)
+ u64 nested_paddr, u64 paddr, int target_level)
{
- const uint64_t page_size = PG_LEVEL_SIZE(target_level);
+ const u64 page_size = PG_LEVEL_SIZE(target_level);
struct eptPageTableEntry *pt = vmx->eptp_hva, *pte;
uint16_t index;
@@ -446,7 +446,7 @@ void __nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
}
void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint64_t nested_paddr, uint64_t paddr)
+ u64 nested_paddr, u64 paddr)
{
__nested_pg_map(vmx, vm, nested_paddr, paddr, PG_LEVEL_4K);
}
@@ -469,7 +469,7 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
* page range starting at nested_paddr to the page range starting at paddr.
*/
void __nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint64_t nested_paddr, uint64_t paddr, uint64_t size,
+ u64 nested_paddr, u64 paddr, u64 size,
int level)
{
size_t page_size = PG_LEVEL_SIZE(level);
@@ -486,7 +486,7 @@ void __nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
}
void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint64_t nested_paddr, uint64_t paddr, uint64_t size)
+ u64 nested_paddr, u64 paddr, u64 size)
{
__nested_map(vmx, vm, nested_paddr, paddr, size, PG_LEVEL_4K);
}
@@ -509,22 +509,22 @@ void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm,
break;
nested_map(vmx, vm,
- (uint64_t)i << vm->page_shift,
- (uint64_t)i << vm->page_shift,
+ (u64)i << vm->page_shift,
+ (u64)i << vm->page_shift,
1 << vm->page_shift);
}
}
/* Identity map a region with 1GiB Pages. */
void nested_identity_map_1g(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint64_t addr, uint64_t size)
+ u64 addr, u64 size)
{
__nested_map(vmx, vm, addr, addr, size, PG_LEVEL_1G);
}
bool kvm_cpu_has_ept(void)
{
- uint64_t ctrl;
+ u64 ctrl;
ctrl = kvm_get_feature_msr(MSR_IA32_VMX_TRUE_PROCBASED_CTLS) >> 32;
if (!(ctrl & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS))
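
For reference, the eptPageTablePointer bitfields above pack into the
u64 written to the EPTP VMCS field; assuming the SDM layout
(memory_type bits 2:0, page_walk_length bits 5:3, A/D enable bit 6,
physical address from bit 12):

/*
 * Example: 4-level walk (page_walk_length = 3), WB memtype (6),
 * A/D disabled, EPT root at physical 0x1000:
 *   eptp = 6 | (3 << 3) | 0x1000 = 0x101e
 */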
diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index c81a84990eab..a0ea83741d4f 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -29,7 +29,7 @@
static int nr_vcpus = 1;
-static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
+static u64 guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
{
@@ -54,10 +54,10 @@ static void vcpu_worker(struct memstress_vcpu_args *vcpu_args)
}
static void add_remove_memslot(struct kvm_vm *vm, useconds_t delay,
- uint64_t nr_modifications)
+ u64 nr_modifications)
{
- uint64_t pages = max_t(int, vm->page_size, getpagesize()) / vm->page_size;
- uint64_t gpa;
+ u64 pages = max_t(int, vm->page_size, getpagesize()) / vm->page_size;
+ u64 gpa;
int i;
/*
@@ -77,7 +77,7 @@ static void add_remove_memslot(struct kvm_vm *vm, useconds_t delay,
struct test_params {
useconds_t delay;
- uint64_t nr_iterations;
+ u64 nr_iterations;
bool partition_vcpu_memory_access;
bool disable_slot_zap_quirk;
};
diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index e3711beff7f3..7ad29c775336 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -85,12 +85,12 @@ struct vm_data {
struct kvm_vcpu *vcpu;
pthread_t vcpu_thread;
uint32_t nslots;
- uint64_t npages;
- uint64_t pages_per_slot;
+ u64 npages;
+ u64 pages_per_slot;
void **hva_slots;
bool mmio_ok;
- uint64_t mmio_gpa_min;
- uint64_t mmio_gpa_max;
+ u64 mmio_gpa_min;
+ u64 mmio_gpa_max;
};
struct sync_area {
@@ -185,9 +185,9 @@ static void wait_for_vcpu(void)
"sem_timedwait() failed: %d", errno);
}
-static void *vm_gpa2hva(struct vm_data *data, uint64_t gpa, uint64_t *rempages)
+static void *vm_gpa2hva(struct vm_data *data, u64 gpa, u64 *rempages)
{
- uint64_t gpage, pgoffs;
+ u64 gpage, pgoffs;
uint32_t slot, slotoffs;
void *base;
uint32_t guest_page_size = data->vm->page_size;
@@ -199,11 +199,11 @@ static void *vm_gpa2hva(struct vm_data *data, uint64_t gpa, uint64_t *rempages)
gpage = gpa / guest_page_size;
pgoffs = gpa % guest_page_size;
- slot = min(gpage / data->pages_per_slot, (uint64_t)data->nslots - 1);
+ slot = min(gpage / data->pages_per_slot, (u64)data->nslots - 1);
slotoffs = gpage - (slot * data->pages_per_slot);
if (rempages) {
- uint64_t slotpages;
+ u64 slotpages;
if (slot == data->nslots - 1)
slotpages = data->npages - slot * data->pages_per_slot;
@@ -219,7 +219,7 @@ static void *vm_gpa2hva(struct vm_data *data, uint64_t gpa, uint64_t *rempages)
return (uint8_t *)base + slotoffs * guest_page_size + pgoffs;
}
-static uint64_t vm_slot2gpa(struct vm_data *data, uint32_t slot)
+static u64 vm_slot2gpa(struct vm_data *data, uint32_t slot)
{
uint32_t guest_page_size = data->vm->page_size;
@@ -243,7 +243,7 @@ static struct vm_data *alloc_vm(void)
}
static bool check_slot_pages(uint32_t host_page_size, uint32_t guest_page_size,
- uint64_t pages_per_slot, uint64_t rempages)
+ u64 pages_per_slot, u64 rempages)
{
if (!pages_per_slot)
return false;
@@ -258,11 +258,11 @@ static bool check_slot_pages(uint32_t host_page_size, uint32_t guest_page_size,
}
-static uint64_t get_max_slots(struct vm_data *data, uint32_t host_page_size)
+static u64 get_max_slots(struct vm_data *data, uint32_t host_page_size)
{
uint32_t guest_page_size = data->vm->page_size;
- uint64_t mempages, pages_per_slot, rempages;
- uint64_t slots;
+ u64 mempages, pages_per_slot, rempages;
+ u64 slots;
mempages = data->npages;
slots = data->nslots;
@@ -280,12 +280,12 @@ static uint64_t get_max_slots(struct vm_data *data, uint32_t host_page_size)
return 0;
}
-static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
- void *guest_code, uint64_t mem_size,
+static bool prepare_vm(struct vm_data *data, int nslots, u64 *maxslots,
+ void *guest_code, u64 mem_size,
struct timespec *slot_runtime)
{
- uint64_t mempages, rempages;
- uint64_t guest_addr;
+ u64 mempages, rempages;
+ u64 guest_addr;
uint32_t slot, host_page_size, guest_page_size;
struct timespec tstart;
struct sync_area *sync;
@@ -316,7 +316,7 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
clock_gettime(CLOCK_MONOTONIC, &tstart);
for (slot = 1, guest_addr = MEM_GPA; slot <= data->nslots; slot++) {
- uint64_t npages;
+ u64 npages;
npages = data->pages_per_slot;
if (slot == data->nslots)
@@ -330,8 +330,8 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
*slot_runtime = timespec_elapsed(tstart);
for (slot = 1, guest_addr = MEM_GPA; slot <= data->nslots; slot++) {
- uint64_t npages;
- uint64_t gpa;
+ u64 npages;
+ u64 gpa;
npages = data->pages_per_slot;
if (slot == data->nslots)
@@ -459,7 +459,7 @@ static void guest_code_test_memslot_move(void)
for (ptr = base; ptr < base + MEM_TEST_MOVE_SIZE;
ptr += page_size)
- *(uint64_t *)ptr = MEM_TEST_VAL_1;
+ *(u64 *)ptr = MEM_TEST_VAL_1;
/*
* No host sync here since the MMIO exits are so expensive
@@ -488,7 +488,7 @@ static void guest_code_test_memslot_map(void)
for (ptr = MEM_TEST_GPA;
ptr < MEM_TEST_GPA + MEM_TEST_MAP_SIZE / 2;
ptr += page_size)
- *(uint64_t *)ptr = MEM_TEST_VAL_1;
+ *(u64 *)ptr = MEM_TEST_VAL_1;
if (!guest_perform_sync())
break;
@@ -496,7 +496,7 @@ static void guest_code_test_memslot_map(void)
for (ptr = MEM_TEST_GPA + MEM_TEST_MAP_SIZE / 2;
ptr < MEM_TEST_GPA + MEM_TEST_MAP_SIZE;
ptr += page_size)
- *(uint64_t *)ptr = MEM_TEST_VAL_2;
+ *(u64 *)ptr = MEM_TEST_VAL_2;
if (!guest_perform_sync())
break;
@@ -525,13 +525,13 @@ static void guest_code_test_memslot_unmap(void)
*
* Just access a single page to be on the safe side.
*/
- *(uint64_t *)ptr = MEM_TEST_VAL_1;
+ *(u64 *)ptr = MEM_TEST_VAL_1;
if (!guest_perform_sync())
break;
ptr += MEM_TEST_UNMAP_SIZE / 2;
- *(uint64_t *)ptr = MEM_TEST_VAL_2;
+ *(u64 *)ptr = MEM_TEST_VAL_2;
if (!guest_perform_sync())
break;
@@ -554,17 +554,17 @@ static void guest_code_test_memslot_rw(void)
for (ptr = MEM_TEST_GPA;
ptr < MEM_TEST_GPA + MEM_TEST_SIZE; ptr += page_size)
- *(uint64_t *)ptr = MEM_TEST_VAL_1;
+ *(u64 *)ptr = MEM_TEST_VAL_1;
if (!guest_perform_sync())
break;
for (ptr = MEM_TEST_GPA + page_size / 2;
ptr < MEM_TEST_GPA + MEM_TEST_SIZE; ptr += page_size) {
- uint64_t val = *(uint64_t *)ptr;
+ u64 val = *(u64 *)ptr;
GUEST_ASSERT_EQ(val, MEM_TEST_VAL_2);
- *(uint64_t *)ptr = 0;
+ *(u64 *)ptr = 0;
}
if (!guest_perform_sync())
@@ -576,10 +576,10 @@ static void guest_code_test_memslot_rw(void)
static bool test_memslot_move_prepare(struct vm_data *data,
struct sync_area *sync,
- uint64_t *maxslots, bool isactive)
+ u64 *maxslots, bool isactive)
{
uint32_t guest_page_size = data->vm->page_size;
- uint64_t movesrcgpa, movetestgpa;
+ u64 movesrcgpa, movetestgpa;
#ifdef __x86_64__
if (disable_slot_zap_quirk)
@@ -589,7 +589,7 @@ static bool test_memslot_move_prepare(struct vm_data *data,
movesrcgpa = vm_slot2gpa(data, data->nslots - 1);
if (isactive) {
- uint64_t lastpages;
+ u64 lastpages;
vm_gpa2hva(data, movesrcgpa, &lastpages);
if (lastpages * guest_page_size < MEM_TEST_MOVE_SIZE / 2) {
@@ -612,21 +612,21 @@ static bool test_memslot_move_prepare(struct vm_data *data,
static bool test_memslot_move_prepare_active(struct vm_data *data,
struct sync_area *sync,
- uint64_t *maxslots)
+ u64 *maxslots)
{
return test_memslot_move_prepare(data, sync, maxslots, true);
}
static bool test_memslot_move_prepare_inactive(struct vm_data *data,
struct sync_area *sync,
- uint64_t *maxslots)
+ u64 *maxslots)
{
return test_memslot_move_prepare(data, sync, maxslots, false);
}
static void test_memslot_move_loop(struct vm_data *data, struct sync_area *sync)
{
- uint64_t movesrcgpa;
+ u64 movesrcgpa;
movesrcgpa = vm_slot2gpa(data, data->nslots - 1);
vm_mem_region_move(data->vm, data->nslots - 1 + 1,
@@ -635,13 +635,13 @@ static void test_memslot_move_loop(struct vm_data *data, struct sync_area *sync)
}
static void test_memslot_do_unmap(struct vm_data *data,
- uint64_t offsp, uint64_t count)
+ u64 offsp, u64 count)
{
- uint64_t gpa, ctr;
+ u64 gpa, ctr;
uint32_t guest_page_size = data->vm->page_size;
for (gpa = MEM_TEST_GPA + offsp * guest_page_size, ctr = 0; ctr < count; ) {
- uint64_t npages;
+ u64 npages;
void *hva;
int ret;
@@ -660,10 +660,10 @@ static void test_memslot_do_unmap(struct vm_data *data,
}
static void test_memslot_map_unmap_check(struct vm_data *data,
- uint64_t offsp, uint64_t valexp)
+ u64 offsp, u64 valexp)
{
- uint64_t gpa;
- uint64_t *val;
+ u64 gpa;
+ u64 *val;
uint32_t guest_page_size = data->vm->page_size;
if (!map_unmap_verify)
@@ -680,7 +680,7 @@ static void test_memslot_map_unmap_check(struct vm_data *data,
static void test_memslot_map_loop(struct vm_data *data, struct sync_area *sync)
{
uint32_t guest_page_size = data->vm->page_size;
- uint64_t guest_pages = MEM_TEST_MAP_SIZE / guest_page_size;
+ u64 guest_pages = MEM_TEST_MAP_SIZE / guest_page_size;
/*
* Unmap the second half of the test area while guest writes to (maps)
@@ -717,11 +717,11 @@ static void test_memslot_map_loop(struct vm_data *data, struct sync_area *sync)
static void test_memslot_unmap_loop_common(struct vm_data *data,
struct sync_area *sync,
- uint64_t chunk)
+ u64 chunk)
{
uint32_t guest_page_size = data->vm->page_size;
- uint64_t guest_pages = MEM_TEST_UNMAP_SIZE / guest_page_size;
- uint64_t ctr;
+ u64 guest_pages = MEM_TEST_UNMAP_SIZE / guest_page_size;
+ u64 ctr;
/*
* Wait for the guest to finish mapping page(s) in the first half
@@ -747,7 +747,7 @@ static void test_memslot_unmap_loop(struct vm_data *data,
{
uint32_t host_page_size = getpagesize();
uint32_t guest_page_size = data->vm->page_size;
- uint64_t guest_chunk_pages = guest_page_size >= host_page_size ?
+ u64 guest_chunk_pages = guest_page_size >= host_page_size ?
1 : host_page_size / guest_page_size;
test_memslot_unmap_loop_common(data, sync, guest_chunk_pages);
@@ -757,26 +757,26 @@ static void test_memslot_unmap_loop_chunked(struct vm_data *data,
struct sync_area *sync)
{
uint32_t guest_page_size = data->vm->page_size;
- uint64_t guest_chunk_pages = MEM_TEST_UNMAP_CHUNK_SIZE / guest_page_size;
+ u64 guest_chunk_pages = MEM_TEST_UNMAP_CHUNK_SIZE / guest_page_size;
test_memslot_unmap_loop_common(data, sync, guest_chunk_pages);
}
static void test_memslot_rw_loop(struct vm_data *data, struct sync_area *sync)
{
- uint64_t gptr;
+ u64 gptr;
uint32_t guest_page_size = data->vm->page_size;
for (gptr = MEM_TEST_GPA + guest_page_size / 2;
gptr < MEM_TEST_GPA + MEM_TEST_SIZE; gptr += guest_page_size)
- *(uint64_t *)vm_gpa2hva(data, gptr, NULL) = MEM_TEST_VAL_2;
+ *(u64 *)vm_gpa2hva(data, gptr, NULL) = MEM_TEST_VAL_2;
host_perform_sync(sync);
for (gptr = MEM_TEST_GPA;
gptr < MEM_TEST_GPA + MEM_TEST_SIZE; gptr += guest_page_size) {
- uint64_t *vptr = (typeof(vptr))vm_gpa2hva(data, gptr, NULL);
- uint64_t val = *vptr;
+ u64 *vptr = (typeof(vptr))vm_gpa2hva(data, gptr, NULL);
+ u64 val = *vptr;
TEST_ASSERT(val == MEM_TEST_VAL_1,
"Guest written values should read back correctly (is %"PRIu64" @ %"PRIx64")",
@@ -789,21 +789,21 @@ static void test_memslot_rw_loop(struct vm_data *data, struct sync_area *sync)
struct test_data {
const char *name;
- uint64_t mem_size;
+ u64 mem_size;
void (*guest_code)(void);
bool (*prepare)(struct vm_data *data, struct sync_area *sync,
- uint64_t *maxslots);
+ u64 *maxslots);
void (*loop)(struct vm_data *data, struct sync_area *sync);
};
-static bool test_execute(int nslots, uint64_t *maxslots,
+static bool test_execute(int nslots, u64 *maxslots,
unsigned int maxtime,
const struct test_data *tdata,
- uint64_t *nloops,
+ u64 *nloops,
struct timespec *slot_runtime,
struct timespec *guest_runtime)
{
- uint64_t mem_size = tdata->mem_size ? : MEM_SIZE;
+ u64 mem_size = tdata->mem_size ? : MEM_SIZE;
struct vm_data *data;
struct sync_area *sync;
struct timespec tstart;
@@ -1040,7 +1040,7 @@ static bool parse_args(int argc, char *argv[],
struct test_result {
struct timespec slot_runtime, guest_runtime, iter_runtime;
int64_t slottimens, runtimens;
- uint64_t nloops;
+ u64 nloops;
};
static bool test_loop(const struct test_data *data,
@@ -1048,7 +1048,7 @@ static bool test_loop(const struct test_data *data,
struct test_result *rbestslottime,
struct test_result *rbestruntime)
{
- uint64_t maxslots;
+ u64 maxslots;
struct test_result result = {};
if (!test_execute(targs->nslots, &maxslots, targs->seconds, data,
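
The slot arithmetic in vm_gpa2hva() earlier in this file, with worked
numbers (values illustrative):

/*
 * With npages = 1000, nslots = 3, pages_per_slot = 333: guest page
 * 998 maps to slot min(998 / 333, 2) = 2, and that last slot holds
 * the remainder, 1000 - 2 * 333 = 334 pages.
 */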
diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index 6a437d2be9fa..4c3b96dcab21 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -20,19 +20,19 @@
static bool mprotect_ro_done;
static bool all_vcpus_hit_ro_fault;
-static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
+static void guest_code(u64 start_gpa, u64 end_gpa, u64 stride)
{
- uint64_t gpa;
+ u64 gpa;
int i;
for (i = 0; i < 2; i++) {
for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
- vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
+ vcpu_arch_put_guest(*((volatile u64 *)gpa), gpa);
GUEST_SYNC(i);
}
for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
- *((volatile uint64_t *)gpa);
+ *((volatile u64 *)gpa);
GUEST_SYNC(2);
/*
@@ -55,7 +55,7 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
#elif defined(__aarch64__)
asm volatile("str %0, [%0]" :: "r" (gpa) : "memory");
#else
- vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
+ vcpu_arch_put_guest(*((volatile u64 *)gpa), gpa);
#endif
} while (!READ_ONCE(mprotect_ro_done) || !READ_ONCE(all_vcpus_hit_ro_fault));
@@ -68,7 +68,7 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
#endif
for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
- vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
+ vcpu_arch_put_guest(*((volatile u64 *)gpa), gpa);
GUEST_SYNC(4);
GUEST_ASSERT(0);
@@ -76,8 +76,8 @@ static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
struct vcpu_info {
struct kvm_vcpu *vcpu;
- uint64_t start_gpa;
- uint64_t end_gpa;
+ u64 start_gpa;
+ u64 end_gpa;
};
static int nr_vcpus;
@@ -203,10 +203,10 @@ static void *vcpu_worker(void *data)
}
static pthread_t *spawn_workers(struct kvm_vm *vm, struct kvm_vcpu **vcpus,
- uint64_t start_gpa, uint64_t end_gpa)
+ u64 start_gpa, u64 end_gpa)
{
struct vcpu_info *info;
- uint64_t gpa, nr_bytes;
+ u64 gpa, nr_bytes;
pthread_t *threads;
int i;
@@ -217,7 +217,7 @@ static pthread_t *spawn_workers(struct kvm_vm *vm, struct kvm_vcpu **vcpus,
TEST_ASSERT(info, "Failed to allocate vCPU gpa ranges");
nr_bytes = ((end_gpa - start_gpa) / nr_vcpus) &
- ~((uint64_t)vm->page_size - 1);
+ ~((u64)vm->page_size - 1);
TEST_ASSERT(nr_bytes, "C'mon, no way you have %d CPUs", nr_vcpus);
for (i = 0, gpa = start_gpa; i < nr_vcpus; i++, gpa += nr_bytes) {
@@ -276,11 +276,11 @@ int main(int argc, char *argv[])
* just below the 4gb boundary. This test could create memory at
* 1gb-3gb, but it's simpler to skip straight to 4gb.
*/
- const uint64_t start_gpa = SZ_4G;
+ const u64 start_gpa = SZ_4G;
const int first_slot = 1;
struct timespec time_start, time_run1, time_reset, time_run2, time_ro, time_rw;
- uint64_t max_gpa, gpa, slot_size, max_mem, i;
+ u64 max_gpa, gpa, slot_size, max_mem, i;
int max_slots, slot, opt, fd;
bool hugepages = false;
struct kvm_vcpu **vcpus;
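
The nr_bytes computation in spawn_workers() above rounds each vCPU's
share down to a page boundary; a worked example assuming 4KiB pages:

/*
 * With end_gpa - start_gpa = 10GiB and nr_vcpus = 3:
 *   10GiB / 3 = 0xd5555555 bytes, masked with ~(4096 - 1)
 *   -> 0xd5555000, so the sub-page tail is simply left unused.
 */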
diff --git a/tools/testing/selftests/kvm/pre_fault_memory_test.c b/tools/testing/selftests/kvm/pre_fault_memory_test.c
index 0350a8896a2f..4e4a3b483e6e 100644
--- a/tools/testing/selftests/kvm/pre_fault_memory_test.c
+++ b/tools/testing/selftests/kvm/pre_fault_memory_test.c
@@ -16,13 +16,13 @@
#define TEST_NPAGES (TEST_SIZE / PAGE_SIZE)
#define TEST_SLOT 10
-static void guest_code(uint64_t base_gpa)
+static void guest_code(u64 base_gpa)
{
- volatile uint64_t val __used;
+ volatile u64 val __used;
int i;
for (i = 0; i < TEST_NPAGES; i++) {
- uint64_t *src = (uint64_t *)(base_gpa + i * PAGE_SIZE);
+ u64 *src = (u64 *)(base_gpa + i * PAGE_SIZE);
val = *src;
}
@@ -74,9 +74,9 @@ static void __test_pre_fault_memory(unsigned long vm_type, bool private)
struct kvm_vm *vm;
struct ucall uc;
- uint64_t guest_test_phys_mem;
- uint64_t guest_test_virt_mem;
- uint64_t alignment, guest_page_size;
+ u64 guest_test_phys_mem;
+ u64 guest_test_virt_mem;
+ u64 alignment, guest_page_size;
vm = vm_create_shape_with_one_vcpu(shape, &vcpu, guest_code);
diff --git a/tools/testing/selftests/kvm/riscv/arch_timer.c b/tools/testing/selftests/kvm/riscv/arch_timer.c
index 9e370800a6a2..e8ddb168c13e 100644
--- a/tools/testing/selftests/kvm/riscv/arch_timer.c
+++ b/tools/testing/selftests/kvm/riscv/arch_timer.c
@@ -17,7 +17,7 @@ static int timer_irq = IRQ_S_TIMER;
static void guest_irq_handler(struct ex_regs *regs)
{
- uint64_t xcnt, xcnt_diff_us, cmp;
+ u64 xcnt, xcnt_diff_us, cmp;
unsigned int intid = regs->cause & ~CAUSE_IRQ_FLAG;
uint32_t cpu = guest_get_vcpuid();
struct test_vcpu_shared_data *shared_data = &vcpu_shared_data[cpu];
diff --git a/tools/testing/selftests/kvm/riscv/ebreak_test.c b/tools/testing/selftests/kvm/riscv/ebreak_test.c
index cfed6c727bfc..7fac0dff3b94 100644
--- a/tools/testing/selftests/kvm/riscv/ebreak_test.c
+++ b/tools/testing/selftests/kvm/riscv/ebreak_test.c
@@ -8,10 +8,10 @@
#include "kvm_util.h"
#include "ucall_common.h"
-#define LABEL_ADDRESS(v) ((uint64_t)&(v))
+#define LABEL_ADDRESS(v) ((u64)&(v))
extern unsigned char sw_bp_1, sw_bp_2;
-static uint64_t sw_bp_addr;
+static u64 sw_bp_addr;
static void guest_code(void)
{
@@ -37,7 +37,7 @@ int main(void)
{
struct kvm_vm *vm;
struct kvm_vcpu *vcpu;
- uint64_t pc;
+ u64 pc;
struct kvm_guest_debug debug = {
.control = KVM_GUESTDBG_ENABLE,
};
diff --git a/tools/testing/selftests/kvm/riscv/get-reg-list.c b/tools/testing/selftests/kvm/riscv/get-reg-list.c
index 569f2d67c9b8..f8f1ac4b5e5f 100644
--- a/tools/testing/selftests/kvm/riscv/get-reg-list.c
+++ b/tools/testing/selftests/kvm/riscv/get-reg-list.c
@@ -147,7 +147,7 @@ void finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_reg_list *c)
{
unsigned long isa_ext_state[KVM_RISCV_ISA_EXT_MAX] = { 0 };
struct vcpu_reg_sublist *s;
- uint64_t feature;
+ u64 feature;
int rc;
for (int i = 0; i < KVM_RISCV_ISA_EXT_MAX; i++)
diff --git a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
index c0ec7b284a3d..53ee31d144dc 100644
--- a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
+++ b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c
@@ -87,7 +87,7 @@ unsigned long pmu_csr_read_num(int csr_num)
#undef switchcase_csr_read
}
-static inline void dummy_func_loop(uint64_t iter)
+static inline void dummy_func_loop(u64 iter)
{
int i = 0;
diff --git a/tools/testing/selftests/kvm/s390/debug_test.c b/tools/testing/selftests/kvm/s390/debug_test.c
index ad8095968601..751c61c0f056 100644
--- a/tools/testing/selftests/kvm/s390/debug_test.c
+++ b/tools/testing/selftests/kvm/s390/debug_test.c
@@ -17,7 +17,7 @@ asm("int_handler:\n"
"j .\n");
static struct kvm_vm *test_step_int_1(struct kvm_vcpu **vcpu, void *guest_code,
- size_t new_psw_off, uint64_t *new_psw)
+ size_t new_psw_off, u64 *new_psw)
{
struct kvm_guest_debug debug = {};
struct kvm_regs regs;
@@ -27,7 +27,7 @@ static struct kvm_vm *test_step_int_1(struct kvm_vcpu **vcpu, void *guest_code,
vm = vm_create_with_one_vcpu(vcpu, guest_code);
lowcore = addr_gpa2hva(vm, 0);
new_psw[0] = (*vcpu)->run->psw_mask;
- new_psw[1] = (uint64_t)int_handler;
+ new_psw[1] = (u64)int_handler;
memcpy(lowcore + new_psw_off, new_psw, 16);
vcpu_regs_get(*vcpu, &regs);
regs.gprs[2] = -1;
@@ -42,7 +42,7 @@ static struct kvm_vm *test_step_int_1(struct kvm_vcpu **vcpu, void *guest_code,
static void test_step_int(void *guest_code, size_t new_psw_off)
{
struct kvm_vcpu *vcpu;
- uint64_t new_psw[2];
+ u64 new_psw[2];
struct kvm_vm *vm;
vm = test_step_int_1(&vcpu, guest_code, new_psw_off, new_psw);
@@ -79,7 +79,7 @@ static void test_step_pgm_diag(void)
.u.pgm.code = PGM_SPECIFICATION,
};
struct kvm_vcpu *vcpu;
- uint64_t new_psw[2];
+ u64 new_psw[2];
struct kvm_vm *vm;
vm = test_step_int_1(&vcpu, test_step_pgm_diag_guest_code,
diff --git a/tools/testing/selftests/kvm/s390/memop.c b/tools/testing/selftests/kvm/s390/memop.c
index a808fb2f6b2c..a6f90821835e 100644
--- a/tools/testing/selftests/kvm/s390/memop.c
+++ b/tools/testing/selftests/kvm/s390/memop.c
@@ -34,7 +34,7 @@ enum mop_access_mode {
struct mop_desc {
uintptr_t gaddr;
uintptr_t gaddr_v;
- uint64_t set_flags;
+ u64 set_flags;
unsigned int f_check : 1;
unsigned int f_inject : 1;
unsigned int f_key : 1;
@@ -85,7 +85,7 @@ static struct kvm_s390_mem_op ksmo_from_desc(struct mop_desc *desc)
ksmo.op = KVM_S390_MEMOP_ABSOLUTE_WRITE;
if (desc->mode == CMPXCHG) {
ksmo.op = KVM_S390_MEMOP_ABSOLUTE_CMPXCHG;
- ksmo.old_addr = (uint64_t)desc->old;
+ ksmo.old_addr = (u64)desc->old;
memcpy(desc->old_value, desc->old, desc->size);
}
break;
@@ -489,7 +489,7 @@ static __uint128_t cut_to_size(int size, __uint128_t val)
case 4:
return (uint32_t)val;
case 8:
- return (uint64_t)val;
+ return (u64)val;
case 16:
return val;
}
@@ -501,10 +501,10 @@ static bool popcount_eq(__uint128_t a, __uint128_t b)
{
unsigned int count_a, count_b;
- count_a = __builtin_popcountl((uint64_t)(a >> 64)) +
- __builtin_popcountl((uint64_t)a);
- count_b = __builtin_popcountl((uint64_t)(b >> 64)) +
- __builtin_popcountl((uint64_t)b);
+ count_a = __builtin_popcountl((u64)(a >> 64)) +
+ __builtin_popcountl((u64)a);
+ count_b = __builtin_popcountl((u64)(b >> 64)) +
+ __builtin_popcountl((u64)b);
return count_a == count_b;
}
@@ -598,15 +598,15 @@ static bool _cmpxchg(int size, void *target, __uint128_t *old_addr, __uint128_t
return ret;
}
case 8: {
- uint64_t old = *old_addr;
+ u64 old = *old_addr;
asm volatile ("csg %[old],%[new],%[address]"
: [old] "+d" (old),
- [address] "+Q" (*(uint64_t *)(target))
- : [new] "d" ((uint64_t)new)
+ [address] "+Q" (*(u64 *)(target))
+ : [new] "d" ((u64)new)
: "cc"
);
- ret = old == (uint64_t)*old_addr;
+ ret = old == (u64)*old_addr;
*old_addr = old;
return ret;
}
@@ -811,10 +811,10 @@ static void test_errors_cmpxchg_key(void)
static void test_termination(void)
{
struct test_default t = test_default_init(guest_error_key);
- uint64_t prefix;
- uint64_t teid;
- uint64_t teid_mask = BIT(63 - 56) | BIT(63 - 60) | BIT(63 - 61);
- uint64_t psw[2];
+ u64 prefix;
+ u64 teid;
+ u64 teid_mask = BIT(63 - 56) | BIT(63 - 60) | BIT(63 - 61);
+ u64 psw[2];
HOST_SYNC(t.vcpu, STAGE_INITED);
HOST_SYNC(t.vcpu, STAGE_SKEYS_SET);
@@ -855,7 +855,7 @@ static void test_errors_key_storage_prot_override(void)
kvm_vm_free(t.kvm_vm);
}
-const uint64_t last_page_addr = -PAGE_SIZE;
+const u64 last_page_addr = -PAGE_SIZE;
static void guest_copy_key_fetch_prot_override(void)
{
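
The popcount_eq() split above relies on the casts truncating to the
low 64 bits, so the count decomposes exactly:

/*
 * popcount(v) = popcount((u64)(v >> 64)) + popcount((u64)v)
 * e.g. v = ((__uint128_t)1 << 100) | 0xf -> 1 + 4 = 5 set bits.
 */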
diff --git a/tools/testing/selftests/kvm/s390/resets.c b/tools/testing/selftests/kvm/s390/resets.c
index b58f75b381e5..7a81d07500bd 100644
--- a/tools/testing/selftests/kvm/s390/resets.c
+++ b/tools/testing/selftests/kvm/s390/resets.c
@@ -57,9 +57,9 @@ static void guest_code_initial(void)
);
}
-static void test_one_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t value)
+static void test_one_reg(struct kvm_vcpu *vcpu, u64 id, u64 value)
{
- uint64_t eval_reg;
+ u64 eval_reg;
eval_reg = vcpu_get_reg(vcpu, id);
TEST_ASSERT(eval_reg == value, "value == 0x%lx", value);
diff --git a/tools/testing/selftests/kvm/s390/tprot.c b/tools/testing/selftests/kvm/s390/tprot.c
index b50209979e10..ffd5d139082a 100644
--- a/tools/testing/selftests/kvm/s390/tprot.c
+++ b/tools/testing/selftests/kvm/s390/tprot.c
@@ -46,7 +46,7 @@ enum permission {
static enum permission test_protection(void *addr, uint8_t key)
{
- uint64_t mask;
+ u64 mask;
asm volatile (
"tprot %[addr], 0(%[key])\n"
diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index bc440d5aba57..6c680fcf07a4 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -30,19 +30,19 @@
#define MEM_REGION_GPA 0xc0000000
#define MEM_REGION_SLOT 10
-static const uint64_t MMIO_VAL = 0xbeefull;
+static const u64 MMIO_VAL = 0xbeefull;
-extern const uint64_t final_rip_start;
-extern const uint64_t final_rip_end;
+extern const u64 final_rip_start;
+extern const u64 final_rip_end;
static sem_t vcpu_ready;
-static inline uint64_t guest_spin_on_val(uint64_t spin_val)
+static inline u64 guest_spin_on_val(u64 spin_val)
{
- uint64_t val;
+ u64 val;
do {
- val = READ_ONCE(*((uint64_t *)MEM_REGION_GPA));
+ val = READ_ONCE(*((u64 *)MEM_REGION_GPA));
} while (val == spin_val);
GUEST_SYNC(0);
@@ -54,7 +54,7 @@ static void *vcpu_worker(void *data)
struct kvm_vcpu *vcpu = data;
struct kvm_run *run = vcpu->run;
struct ucall uc;
- uint64_t cmd;
+ u64 cmd;
/*
* Loop until the guest is done. Re-enter the guest on all MMIO exits,
@@ -111,8 +111,8 @@ static struct kvm_vm *spawn_vm(struct kvm_vcpu **vcpu, pthread_t *vcpu_thread,
void *guest_code)
{
struct kvm_vm *vm;
- uint64_t *hva;
- uint64_t gpa;
+ u64 *hva;
+ u64 gpa;
vm = vm_create_with_one_vcpu(vcpu, guest_code);
@@ -144,7 +144,7 @@ static struct kvm_vm *spawn_vm(struct kvm_vcpu **vcpu, pthread_t *vcpu_thread,
static void guest_code_move_memory_region(void)
{
- uint64_t val;
+ u64 val;
GUEST_SYNC(0);
@@ -180,7 +180,7 @@ static void test_move_memory_region(bool disable_slot_zap_quirk)
pthread_t vcpu_thread;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- uint64_t *hva;
+ u64 *hva;
vm = spawn_vm(&vcpu, &vcpu_thread, guest_code_move_memory_region);
@@ -224,7 +224,7 @@ static void test_move_memory_region(bool disable_slot_zap_quirk)
static void guest_code_delete_memory_region(void)
{
struct desc_ptr idt;
- uint64_t val;
+ u64 val;
/*
* Clobber the IDT so that a #PF due to the memory region being deleted
@@ -441,9 +441,9 @@ static void test_add_max_memory_regions(void)
for (slot = 0; slot < max_mem_slots; slot++)
vm_set_user_memory_region(vm, slot, 0,
- ((uint64_t)slot * MEM_REGION_SIZE),
+ ((u64)slot * MEM_REGION_SIZE),
MEM_REGION_SIZE,
- mem_aligned + (uint64_t)slot * MEM_REGION_SIZE);
+ mem_aligned + (u64)slot * MEM_REGION_SIZE);
/* Check that memory slots beyond the limit cannot be added */
mem_extra = mmap(NULL, MEM_REGION_SIZE, PROT_READ | PROT_WRITE,
@@ -451,7 +451,7 @@ static void test_add_max_memory_regions(void)
TEST_ASSERT(mem_extra != MAP_FAILED, "Failed to mmap() host");
ret = __vm_set_user_memory_region(vm, max_mem_slots, 0,
- (uint64_t)max_mem_slots * MEM_REGION_SIZE,
+ (u64)max_mem_slots * MEM_REGION_SIZE,
MEM_REGION_SIZE, mem_extra);
TEST_ASSERT(ret == -1 && errno == EINVAL,
"Adding one more memory slot should fail with EINVAL");
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index fd931243b6ce..3370988bd35b 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -25,7 +25,7 @@
#define ST_GPA_BASE (1 << 30)
static void *st_gva[NR_VCPUS];
-static uint64_t guest_stolen_time[NR_VCPUS];
+static u64 guest_stolen_time[NR_VCPUS];
#if defined(__x86_64__)
@@ -44,7 +44,7 @@ static void guest_code(int cpu)
struct kvm_steal_time *st = st_gva[cpu];
uint32_t version;
- GUEST_ASSERT_EQ(rdmsr(MSR_KVM_STEAL_TIME), ((uint64_t)st_gva[cpu] | KVM_MSR_ENABLED));
+ GUEST_ASSERT_EQ(rdmsr(MSR_KVM_STEAL_TIME), ((u64)st_gva[cpu] | KVM_MSR_ENABLED));
memset(st, 0, sizeof(*st));
GUEST_SYNC(0);
@@ -111,10 +111,10 @@ static void steal_time_dump(struct kvm_vm *vm, uint32_t vcpu_idx)
struct st_time {
uint32_t rev;
uint32_t attr;
- uint64_t st_time;
+ u64 st_time;
};
-static int64_t smccc(uint32_t func, uint64_t arg)
+static int64_t smccc(uint32_t func, u64 arg)
{
struct arm_smccc_res res;
@@ -169,13 +169,13 @@ static bool is_steal_time_supported(struct kvm_vcpu *vcpu)
static void steal_time_init(struct kvm_vcpu *vcpu, uint32_t i)
{
struct kvm_vm *vm = vcpu->vm;
- uint64_t st_ipa;
+ u64 st_ipa;
int ret;
struct kvm_device_attr dev = {
.group = KVM_ARM_VCPU_PVTIME_CTRL,
.attr = KVM_ARM_VCPU_PVTIME_IPA,
- .addr = (uint64_t)&st_ipa,
+ .addr = (u64)&st_ipa,
};
vcpu_ioctl(vcpu, KVM_HAS_DEVICE_ATTR, &dev);
@@ -215,7 +215,7 @@ static gpa_t st_gpa[NR_VCPUS];
struct sta_struct {
uint32_t sequence;
uint32_t flags;
- uint64_t steal;
+ u64 steal;
uint8_t preempted;
uint8_t pad[47];
} __packed;
@@ -268,7 +268,7 @@ static void guest_code(int cpu)
static bool is_steal_time_supported(struct kvm_vcpu *vcpu)
{
- uint64_t id = RISCV_SBI_EXT_REG(KVM_RISCV_SBI_EXT_STA);
+ u64 id = RISCV_SBI_EXT_REG(KVM_RISCV_SBI_EXT_STA);
unsigned long enabled = vcpu_get_reg(vcpu, id);
TEST_ASSERT(enabled == 0 || enabled == 1, "Expected boolean result");
diff --git a/tools/testing/selftests/kvm/system_counter_offset_test.c b/tools/testing/selftests/kvm/system_counter_offset_test.c
index 513d421a9bff..dc5e30b7b77f 100644
--- a/tools/testing/selftests/kvm/system_counter_offset_test.c
+++ b/tools/testing/selftests/kvm/system_counter_offset_test.c
@@ -17,7 +17,7 @@
#ifdef __x86_64__
struct test_case {
- uint64_t tsc_offset;
+ u64 tsc_offset;
};
static struct test_case test_cases[] = {
@@ -39,12 +39,12 @@ static void setup_system_counter(struct kvm_vcpu *vcpu, struct test_case *test)
&test->tsc_offset);
}
-static uint64_t guest_read_system_counter(struct test_case *test)
+static u64 guest_read_system_counter(struct test_case *test)
{
return rdtsc();
}
-static uint64_t host_read_guest_system_counter(struct test_case *test)
+static u64 host_read_guest_system_counter(struct test_case *test)
{
return rdtsc() + test->tsc_offset;
}
@@ -69,9 +69,9 @@ static void guest_main(void)
}
}
-static void handle_sync(struct ucall *uc, uint64_t start, uint64_t end)
+static void handle_sync(struct ucall *uc, u64 start, u64 end)
{
- uint64_t obs = uc->args[2];
+ u64 obs = uc->args[2];
TEST_ASSERT(start <= obs && obs <= end,
"unexpected system counter value: %"PRIu64" expected range: [%"PRIu64", %"PRIu64"]",
@@ -88,7 +88,7 @@ static void handle_abort(struct ucall *uc)
static void enter_guest(struct kvm_vcpu *vcpu)
{
- uint64_t start, end;
+ u64 start, end;
struct ucall uc;
int i;
diff --git a/tools/testing/selftests/kvm/x86/amx_test.c b/tools/testing/selftests/kvm/x86/amx_test.c
index d49230ad5caf..b847b1b2d8b9 100644
--- a/tools/testing/selftests/kvm/x86/amx_test.c
+++ b/tools/testing/selftests/kvm/x86/amx_test.c
@@ -74,7 +74,7 @@ static inline void __tilerelease(void)
asm volatile(".byte 0xc4, 0xe2, 0x78, 0x49, 0xc0" ::);
}
-static inline void __xsavec(struct xstate *xstate, uint64_t rfbm)
+static inline void __xsavec(struct xstate *xstate, u64 rfbm)
{
uint32_t rfbm_lo = rfbm;
uint32_t rfbm_hi = rfbm >> 32;
diff --git a/tools/testing/selftests/kvm/x86/apic_bus_clock_test.c b/tools/testing/selftests/kvm/x86/apic_bus_clock_test.c
index f8916bb34405..81f76c7d5621 100644
--- a/tools/testing/selftests/kvm/x86/apic_bus_clock_test.c
+++ b/tools/testing/selftests/kvm/x86/apic_bus_clock_test.c
@@ -55,11 +55,11 @@ static void apic_write_reg(unsigned int reg, uint32_t val)
xapic_write_reg(reg, val);
}
-static void apic_guest_code(uint64_t apic_hz, uint64_t delay_ms)
+static void apic_guest_code(u64 apic_hz, u64 delay_ms)
{
- uint64_t tsc_hz = guest_tsc_khz * 1000;
+ u64 tsc_hz = guest_tsc_khz * 1000;
const uint32_t tmict = ~0u;
- uint64_t tsc0, tsc1, freq;
+ u64 tsc0, tsc1, freq;
uint32_t tmcct;
int i;
@@ -121,7 +121,7 @@ static void test_apic_bus_clock(struct kvm_vcpu *vcpu)
}
}
-static void run_apic_bus_clock_test(uint64_t apic_hz, uint64_t delay_ms,
+static void run_apic_bus_clock_test(u64 apic_hz, u64 delay_ms,
bool x2apic)
{
struct kvm_vcpu *vcpu;
@@ -168,8 +168,8 @@ int main(int argc, char *argv[])
* Arbitrarily default to 25MHz for the APIC bus frequency, which is
* different enough from the default 1GHz to be interesting.
*/
- uint64_t apic_hz = 25 * 1000 * 1000;
- uint64_t delay_ms = 100;
+ u64 apic_hz = 25 * 1000 * 1000;
+ u64 delay_ms = 100;
int opt;
TEST_REQUIRE(kvm_has_cap(KVM_CAP_X86_APIC_BUS_CYCLES_NS));
diff --git a/tools/testing/selftests/kvm/x86/debug_regs.c b/tools/testing/selftests/kvm/x86/debug_regs.c
index 2d814c1d1dc4..542a0eac0f32 100644
--- a/tools/testing/selftests/kvm/x86/debug_regs.c
+++ b/tools/testing/selftests/kvm/x86/debug_regs.c
@@ -86,7 +86,7 @@ int main(void)
struct kvm_run *run;
struct kvm_vm *vm;
struct ucall uc;
- uint64_t cmd;
+ u64 cmd;
int i;
/* Instruction lengths starting at ss_start */
int ss_size[6] = {
diff --git a/tools/testing/selftests/kvm/x86/dirty_log_page_splitting_test.c b/tools/testing/selftests/kvm/x86/dirty_log_page_splitting_test.c
index b0d2b04a7ff2..388ba4101f97 100644
--- a/tools/testing/selftests/kvm/x86/dirty_log_page_splitting_test.c
+++ b/tools/testing/selftests/kvm/x86/dirty_log_page_splitting_test.c
@@ -23,7 +23,7 @@
#define SLOTS 2
#define ITERATIONS 2
-static uint64_t guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
+static u64 guest_percpu_mem_size = DEFAULT_PER_VCPU_MEM_SIZE;
static enum vm_mem_backing_src_type backing_src = VM_MEM_SRC_ANONYMOUS_HUGETLB;
@@ -33,10 +33,10 @@ static int iteration;
static int vcpu_last_completed_iteration[KVM_MAX_VCPUS];
struct kvm_page_stats {
- uint64_t pages_4k;
- uint64_t pages_2m;
- uint64_t pages_1g;
- uint64_t hugepages;
+ u64 pages_4k;
+ u64 pages_2m;
+ u64 pages_1g;
+ u64 hugepages;
};
static void get_page_stats(struct kvm_vm *vm, struct kvm_page_stats *stats, const char *stage)
@@ -89,9 +89,9 @@ static void run_test(enum vm_guest_mode mode, void *unused)
{
struct kvm_vm *vm;
unsigned long **bitmaps;
- uint64_t guest_num_pages;
- uint64_t host_num_pages;
- uint64_t pages_per_slot;
+ u64 guest_num_pages;
+ u64 host_num_pages;
+ u64 pages_per_slot;
int i;
struct kvm_page_stats stats_populated;
struct kvm_page_stats stats_dirty_logging_enabled;
diff --git a/tools/testing/selftests/kvm/x86/feature_msrs_test.c b/tools/testing/selftests/kvm/x86/feature_msrs_test.c
index a72f13ae2edb..a0e54af60544 100644
--- a/tools/testing/selftests/kvm/x86/feature_msrs_test.c
+++ b/tools/testing/selftests/kvm/x86/feature_msrs_test.c
@@ -41,8 +41,8 @@ static bool is_quirked_msr(uint32_t msr)
static void test_feature_msr(uint32_t msr)
{
- const uint64_t supported_mask = kvm_get_feature_msr(msr);
- uint64_t reset_value = is_quirked_msr(msr) ? supported_mask : 0;
+ const u64 supported_mask = kvm_get_feature_msr(msr);
+ u64 reset_value = is_quirked_msr(msr) ? supported_mask : 0;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/fix_hypercall_test.c b/tools/testing/selftests/kvm/x86/fix_hypercall_test.c
index 762628f7d4ba..a2c8202cb80e 100644
--- a/tools/testing/selftests/kvm/x86/fix_hypercall_test.c
+++ b/tools/testing/selftests/kvm/x86/fix_hypercall_test.c
@@ -30,14 +30,14 @@ static const uint8_t vmx_vmcall[HYPERCALL_INSN_SIZE] = { 0x0f, 0x01, 0xc1 };
static const uint8_t svm_vmmcall[HYPERCALL_INSN_SIZE] = { 0x0f, 0x01, 0xd9 };
extern uint8_t hypercall_insn[HYPERCALL_INSN_SIZE];
-static uint64_t do_sched_yield(uint8_t apic_id)
+static u64 do_sched_yield(uint8_t apic_id)
{
- uint64_t ret;
+ u64 ret;
asm volatile("hypercall_insn:\n\t"
".byte 0xcc,0xcc,0xcc\n\t"
: "=a"(ret)
- : "a"((uint64_t)KVM_HC_SCHED_YIELD), "b"((uint64_t)apic_id)
+ : "a"((u64)KVM_HC_SCHED_YIELD), "b"((u64)apic_id)
: "memory");
return ret;
@@ -47,7 +47,7 @@ static void guest_main(void)
{
const uint8_t *native_hypercall_insn;
const uint8_t *other_hypercall_insn;
- uint64_t ret;
+ u64 ret;
if (host_cpu_is_intel) {
native_hypercall_insn = vmx_vmcall;
@@ -72,7 +72,7 @@ static void guest_main(void)
* the "right" hypercall.
*/
if (quirk_disabled) {
- GUEST_ASSERT(ret == (uint64_t)-EFAULT);
+ GUEST_ASSERT(ret == (u64)-EFAULT);
GUEST_ASSERT(!memcmp(other_hypercall_insn, hypercall_insn,
HYPERCALL_INSN_SIZE));
} else {
diff --git a/tools/testing/selftests/kvm/x86/flds_emulation.h b/tools/testing/selftests/kvm/x86/flds_emulation.h
index 37b1a9f52864..c7e4f08765fb 100644
--- a/tools/testing/selftests/kvm/x86/flds_emulation.h
+++ b/tools/testing/selftests/kvm/x86/flds_emulation.h
@@ -12,7 +12,7 @@
* KVM to emulate the instruction (e.g. by providing an MMIO address) to
* exercise emulation failures.
*/
-static inline void flds(uint64_t address)
+static inline void flds(u64 address)
{
__asm__ __volatile__(FLDS_MEM_EAX :: "a"(address));
}
@@ -22,7 +22,7 @@ static inline void handle_flds_emulation_failure_exit(struct kvm_vcpu *vcpu)
struct kvm_run *run = vcpu->run;
struct kvm_regs regs;
uint8_t *insn_bytes;
- uint64_t flags;
+ u64 flags;
TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_INTERNAL_ERROR);
diff --git a/tools/testing/selftests/kvm/x86/hwcr_msr_test.c b/tools/testing/selftests/kvm/x86/hwcr_msr_test.c
index 10b1b0ba374e..8e20a03b3329 100644
--- a/tools/testing/selftests/kvm/x86/hwcr_msr_test.c
+++ b/tools/testing/selftests/kvm/x86/hwcr_msr_test.c
@@ -10,11 +10,11 @@
void test_hwcr_bit(struct kvm_vcpu *vcpu, unsigned int bit)
{
- const uint64_t ignored = BIT_ULL(3) | BIT_ULL(6) | BIT_ULL(8);
- const uint64_t valid = BIT_ULL(18) | BIT_ULL(24);
- const uint64_t legal = ignored | valid;
- uint64_t val = BIT_ULL(bit);
- uint64_t actual;
+ const u64 ignored = BIT_ULL(3) | BIT_ULL(6) | BIT_ULL(8);
+ const u64 valid = BIT_ULL(18) | BIT_ULL(24);
+ const u64 legal = ignored | valid;
+ u64 val = BIT_ULL(bit);
+ u64 actual;
int r;
r = _vcpu_set_msr(vcpu, MSR_K7_HWCR, val);
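
The ignored/valid/legal masks above are built with BIT_ULL(), which
(like GENMASK_ULL() used in the PMU tests further down) keeps the
arithmetic in unsigned long long so that high bits such as BIT_ULL(63)
don't overflow a 32-bit int. A sketch of both helpers as commonly
defined — the selftests get the real definitions from tools/include:

#include <assert.h>

typedef unsigned long long u64;

#define BIT_ULL(nr)       (1ULL << (nr))
/* Set bits h..l inclusive, e.g. GENMASK_ULL(3, 1) == 0xe. */
#define GENMASK_ULL(h, l) ((~0ULL << (l)) & (~0ULL >> (63 - (h))))

int main(void)
{
        const u64 ignored = BIT_ULL(3) | BIT_ULL(6) | BIT_ULL(8);
        const u64 valid = BIT_ULL(18) | BIT_ULL(24);

        assert((ignored | valid) == 0x1040148ULL);
        assert(GENMASK_ULL(63, 0) == ~0ULL);
        return 0;
}
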
diff --git a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
index f2d990ce4e2b..2bad57246fe8 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_extended_hypercalls.c
@@ -18,16 +18,16 @@
static void guest_code(gpa_t in_pg_gpa, gpa_t out_pg_gpa,
gva_t out_pg_gva)
{
- uint64_t *output_gva;
+ u64 *output_gva;
wrmsr(HV_X64_MSR_GUEST_OS_ID, HYPERV_LINUX_OS_ID);
wrmsr(HV_X64_MSR_HYPERCALL, in_pg_gpa);
- output_gva = (uint64_t *)out_pg_gva;
+ output_gva = (u64 *)out_pg_gva;
hyperv_hypercall(HV_EXT_CALL_QUERY_CAPABILITIES, in_pg_gpa, out_pg_gpa);
- /* TLFS states output will be a uint64_t value */
+ /* TLFS states output will be a u64 value */
GUEST_ASSERT_EQ(*output_gva, EXT_CAPABILITIES);
GUEST_DONE();
@@ -40,7 +40,7 @@ int main(void)
struct kvm_vcpu *vcpu;
struct kvm_run *run;
struct kvm_vm *vm;
- uint64_t *outval;
+ u64 *outval;
struct ucall uc;
TEST_REQUIRE(kvm_has_cap(KVM_CAP_HYPERV_CPUID));
diff --git a/tools/testing/selftests/kvm/x86/hyperv_features.c b/tools/testing/selftests/kvm/x86/hyperv_features.c
index b3847b5ea314..c275c6401525 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_features.c
@@ -29,8 +29,8 @@ struct msr_data {
};
struct hcall_data {
- uint64_t control;
- uint64_t expect;
+ u64 control;
+ u64 expect;
bool ud_expected;
};
@@ -42,7 +42,7 @@ static bool is_write_only_msr(uint32_t msr)
static void guest_msr(struct msr_data *msr)
{
uint8_t vector = 0;
- uint64_t msr_val = 0;
+ u64 msr_val = 0;
GUEST_ASSERT(msr->idx);
diff --git a/tools/testing/selftests/kvm/x86/hyperv_ipi.c b/tools/testing/selftests/kvm/x86/hyperv_ipi.c
index 85c2948e5a79..cdc9c6144477 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_ipi.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_ipi.c
@@ -18,7 +18,7 @@
#define IPI_VECTOR 0xfe
-static volatile uint64_t ipis_rcvd[RECEIVER_VCPU_ID_2 + 1];
+static volatile u64 ipis_rcvd[RECEIVER_VCPU_ID_2 + 1];
struct hv_vpset {
u64 format;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
index bc5828ce505e..f2bddd8b5f1f 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_tlb_flush.c
@@ -135,10 +135,10 @@ static void set_expected_val(void *addr, u64 val, int vcpu_id)
*/
static void swap_two_test_pages(gpa_t pte_gva1, gpa_t pte_gva2)
{
- uint64_t tmp = *(uint64_t *)pte_gva1;
+ u64 tmp = *(u64 *)pte_gva1;
- *(uint64_t *)pte_gva1 = *(uint64_t *)pte_gva2;
- *(uint64_t *)pte_gva2 = tmp;
+ *(u64 *)pte_gva1 = *(u64 *)pte_gva2;
+ *(u64 *)pte_gva2 = tmp;
}
/*
@@ -583,7 +583,7 @@ int main(int argc, char *argv[])
pthread_t threads[2];
gva_t test_data_page, gva;
gpa_t gpa;
- uint64_t *pte;
+ u64 *pte;
struct test_data *data;
struct ucall uc;
int stage = 1, r, i;
diff --git a/tools/testing/selftests/kvm/x86/kvm_clock_test.c b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
index ada4b2abf55d..e986d289e19b 100644
--- a/tools/testing/selftests/kvm/x86/kvm_clock_test.c
+++ b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
@@ -17,7 +17,7 @@
#include "processor.h"
struct test_case {
- uint64_t kvmclock_base;
+ u64 kvmclock_base;
int64_t realtime_offset;
};
@@ -52,7 +52,7 @@ static inline void assert_flags(struct kvm_clock_data *data)
static void handle_sync(struct ucall *uc, struct kvm_clock_data *start,
struct kvm_clock_data *end)
{
- uint64_t obs, exp_lo, exp_hi;
+ u64 obs, exp_lo, exp_hi;
obs = uc->args[2];
exp_lo = start->clock;
diff --git a/tools/testing/selftests/kvm/x86/kvm_pv_test.c b/tools/testing/selftests/kvm/x86/kvm_pv_test.c
index 1b805cbdb47b..e49ae65f8171 100644
--- a/tools/testing/selftests/kvm/x86/kvm_pv_test.c
+++ b/tools/testing/selftests/kvm/x86/kvm_pv_test.c
@@ -40,7 +40,7 @@ static struct msr_data msrs_to_test[] = {
static void test_msr(struct msr_data *msr)
{
- uint64_t ignored;
+ u64 ignored;
uint8_t vector;
PR_MSR(msr);
@@ -53,7 +53,7 @@ static void test_msr(struct msr_data *msr)
}
struct hcall_data {
- uint64_t nr;
+ u64 nr;
const char *name;
};
@@ -73,7 +73,7 @@ static struct hcall_data hcalls_to_test[] = {
static void test_hcall(struct hcall_data *hc)
{
- uint64_t r;
+ u64 r;
PR_HCALL(hc);
r = kvm_hypercall(hc->nr, 0, 0, 0, 0);
diff --git a/tools/testing/selftests/kvm/x86/monitor_mwait_test.c b/tools/testing/selftests/kvm/x86/monitor_mwait_test.c
index 390ae2d87493..1e4a9a45c22a 100644
--- a/tools/testing/selftests/kvm/x86/monitor_mwait_test.c
+++ b/tools/testing/selftests/kvm/x86/monitor_mwait_test.c
@@ -67,7 +67,7 @@ static void guest_monitor_wait(void *arg)
int main(int argc, char *argv[])
{
- uint64_t disabled_quirks;
+ u64 disabled_quirks;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
struct ucall uc;
diff --git a/tools/testing/selftests/kvm/x86/nx_huge_pages_test.c b/tools/testing/selftests/kvm/x86/nx_huge_pages_test.c
index c0d84827f736..70950067b989 100644
--- a/tools/testing/selftests/kvm/x86/nx_huge_pages_test.c
+++ b/tools/testing/selftests/kvm/x86/nx_huge_pages_test.c
@@ -32,7 +32,7 @@
#define RETURN_OPCODE 0xC3
/* Call the specified memory address. */
-static void guest_do_CALL(uint64_t target)
+static void guest_do_CALL(u64 target)
{
((void (*)(void)) target)();
}
@@ -46,14 +46,14 @@ static void guest_do_CALL(uint64_t target)
*/
void guest_code(void)
{
- uint64_t hpage_1 = HPAGE_GVA;
- uint64_t hpage_2 = hpage_1 + (PAGE_SIZE * 512);
- uint64_t hpage_3 = hpage_2 + (PAGE_SIZE * 512);
+ u64 hpage_1 = HPAGE_GVA;
+ u64 hpage_2 = hpage_1 + (PAGE_SIZE * 512);
+ u64 hpage_3 = hpage_2 + (PAGE_SIZE * 512);
- READ_ONCE(*(uint64_t *)hpage_1);
+ READ_ONCE(*(u64 *)hpage_1);
GUEST_SYNC(1);
- READ_ONCE(*(uint64_t *)hpage_2);
+ READ_ONCE(*(u64 *)hpage_2);
GUEST_SYNC(2);
guest_do_CALL(hpage_1);
@@ -62,10 +62,10 @@ void guest_code(void)
guest_do_CALL(hpage_3);
GUEST_SYNC(4);
- READ_ONCE(*(uint64_t *)hpage_1);
+ READ_ONCE(*(u64 *)hpage_1);
GUEST_SYNC(5);
- READ_ONCE(*(uint64_t *)hpage_3);
+ READ_ONCE(*(u64 *)hpage_3);
GUEST_SYNC(6);
}
@@ -107,7 +107,7 @@ void run_test(int reclaim_period_ms, bool disable_nx_huge_pages,
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- uint64_t nr_bytes;
+ u64 nr_bytes;
void *hva;
int r;
diff --git a/tools/testing/selftests/kvm/x86/platform_info_test.c b/tools/testing/selftests/kvm/x86/platform_info_test.c
index 9cbf283ebc55..86d1ab0db1e8 100644
--- a/tools/testing/selftests/kvm/x86/platform_info_test.c
+++ b/tools/testing/selftests/kvm/x86/platform_info_test.c
@@ -23,7 +23,7 @@
static void guest_code(void)
{
- uint64_t msr_platform_info;
+ u64 msr_platform_info;
uint8_t vector;
GUEST_SYNC(true);
@@ -42,7 +42,7 @@ int main(int argc, char *argv[])
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- uint64_t msr_platform_info;
+ u64 msr_platform_info;
struct ucall uc;
TEST_REQUIRE(kvm_has_cap(KVM_CAP_MSR_PLATFORM_INFO));
diff --git a/tools/testing/selftests/kvm/x86/pmu_counters_test.c b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
index 8aaaf25b6111..ef9ed5edf47b 100644
--- a/tools/testing/selftests/kvm/x86/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
@@ -85,7 +85,7 @@ static struct kvm_intel_pmu_event intel_event_to_feature(uint8_t idx)
static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
void *guest_code,
uint8_t pmu_version,
- uint64_t perf_capabilities)
+ u64 perf_capabilities)
{
struct kvm_vm *vm;
@@ -150,7 +150,7 @@ static uint8_t guest_get_pmu_version(void)
*/
static void guest_assert_event_count(uint8_t idx, uint32_t pmc, uint32_t pmc_msr)
{
- uint64_t count;
+ u64 count;
count = _rdpmc(pmc);
if (!(hardware_pmu_arch_events & BIT(idx)))
@@ -238,7 +238,7 @@ do { \
} while (0)
static void __guest_test_arch_event(uint8_t idx, uint32_t pmc, uint32_t pmc_msr,
- uint32_t ctrl_msr, uint64_t ctrl_msr_value)
+ uint32_t ctrl_msr, u64 ctrl_msr_value)
{
GUEST_TEST_EVENT(idx, pmc, pmc_msr, ctrl_msr, ctrl_msr_value, "");
@@ -271,7 +271,7 @@ static void guest_test_arch_event(uint8_t idx)
GUEST_ASSERT(nr_gp_counters);
for (i = 0; i < nr_gp_counters; i++) {
- uint64_t eventsel = ARCH_PERFMON_EVENTSEL_OS |
+ u64 eventsel = ARCH_PERFMON_EVENTSEL_OS |
ARCH_PERFMON_EVENTSEL_ENABLE |
intel_pmu_arch_events[idx];
@@ -310,7 +310,7 @@ static void guest_test_arch_events(void)
GUEST_DONE();
}
-static void test_arch_events(uint8_t pmu_version, uint64_t perf_capabilities,
+static void test_arch_events(uint8_t pmu_version, u64 perf_capabilities,
uint8_t length, uint8_t unavailable_mask)
{
struct kvm_vcpu *vcpu;
@@ -353,10 +353,10 @@ __GUEST_ASSERT(expect_gp ? vector == GP_VECTOR : !vector, \
msr, expected, val);
static void guest_test_rdpmc(uint32_t rdpmc_idx, bool expect_success,
- uint64_t expected_val)
+ u64 expected_val)
{
uint8_t vector;
- uint64_t val;
+ u64 val;
vector = rdpmc_safe(rdpmc_idx, &val);
GUEST_ASSERT_PMC_MSR_ACCESS(RDPMC, rdpmc_idx, !expect_success, vector);
@@ -383,7 +383,7 @@ static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters
* TODO: Test a value that validates full-width writes and the
* width of the counters.
*/
- const uint64_t test_val = 0xffff;
+ const u64 test_val = 0xffff;
const uint32_t msr = base_msr + i;
/*
@@ -397,12 +397,12 @@ static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters
* KVM drops writes to MSR_P6_PERFCTR[0|1] if the counters are
* unsupported, i.e. doesn't #GP and reads back '0'.
*/
- const uint64_t expected_val = expect_success ? test_val : 0;
+ const u64 expected_val = expect_success ? test_val : 0;
const bool expect_gp = !expect_success && msr != MSR_P6_PERFCTR0 &&
msr != MSR_P6_PERFCTR1;
uint32_t rdpmc_idx;
uint8_t vector;
- uint64_t val;
+ u64 val;
vector = wrmsr_safe(msr, test_val);
GUEST_ASSERT_PMC_MSR_ACCESS(WRMSR, msr, expect_gp, vector);
@@ -456,7 +456,7 @@ static void guest_test_gp_counters(void)
* counters, of which there are none.
*/
if (pmu_version > 1) {
- uint64_t global_ctrl = rdmsr(MSR_CORE_PERF_GLOBAL_CTRL);
+ u64 global_ctrl = rdmsr(MSR_CORE_PERF_GLOBAL_CTRL);
if (nr_gp_counters)
GUEST_ASSERT_EQ(global_ctrl, GENMASK_ULL(nr_gp_counters - 1, 0));
@@ -474,7 +474,7 @@ static void guest_test_gp_counters(void)
GUEST_DONE();
}
-static void test_gp_counters(uint8_t pmu_version, uint64_t perf_capabilities,
+static void test_gp_counters(uint8_t pmu_version, u64 perf_capabilities,
uint8_t nr_gp_counters)
{
struct kvm_vcpu *vcpu;
@@ -493,7 +493,7 @@ static void test_gp_counters(uint8_t pmu_version, uint64_t perf_capabilities,
static void guest_test_fixed_counters(void)
{
- uint64_t supported_bitmask = 0;
+ u64 supported_bitmask = 0;
uint8_t nr_fixed_counters = 0;
uint8_t i;
@@ -513,7 +513,7 @@ static void guest_test_fixed_counters(void)
for (i = 0; i < MAX_NR_FIXED_COUNTERS; i++) {
uint8_t vector;
- uint64_t val;
+ u64 val;
if (i >= nr_fixed_counters && !(supported_bitmask & BIT_ULL(i))) {
vector = wrmsr_safe(MSR_CORE_PERF_FIXED_CTR_CTRL,
@@ -540,7 +540,7 @@ static void guest_test_fixed_counters(void)
GUEST_DONE();
}
-static void test_fixed_counters(uint8_t pmu_version, uint64_t perf_capabilities,
+static void test_fixed_counters(uint8_t pmu_version, u64 perf_capabilities,
uint8_t nr_fixed_counters,
uint32_t supported_bitmask)
{
@@ -569,7 +569,7 @@ static void test_intel_counters(void)
uint8_t v, j;
uint32_t k;
- const uint64_t perf_caps[] = {
+ const u64 perf_caps[] = {
0,
PMU_CAP_FW_WRITES,
};
diff --git a/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
index c15513cd74d1..86831c590df8 100644
--- a/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
@@ -53,11 +53,11 @@ static const struct __kvm_pmu_event_filter base_event_filter = {
};
struct {
- uint64_t loads;
- uint64_t stores;
- uint64_t loads_stores;
- uint64_t branches_retired;
- uint64_t instructions_retired;
+ u64 loads;
+ u64 stores;
+ u64 loads_stores;
+ u64 branches_retired;
+ u64 instructions_retired;
} pmc_results;
/*
@@ -75,9 +75,9 @@ static void guest_gp_handler(struct ex_regs *regs)
*
* Return on success. GUEST_SYNC(0) on error.
*/
-static void check_msr(uint32_t msr, uint64_t bits_to_flip)
+static void check_msr(uint32_t msr, u64 bits_to_flip)
{
- uint64_t v = rdmsr(msr) ^ bits_to_flip;
+ u64 v = rdmsr(msr) ^ bits_to_flip;
wrmsr(msr, v);
if (rdmsr(msr) != v)
@@ -91,8 +91,8 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)
static void run_and_measure_loop(uint32_t msr_base)
{
- const uint64_t branches_retired = rdmsr(msr_base + 0);
- const uint64_t insn_retired = rdmsr(msr_base + 1);
+ const u64 branches_retired = rdmsr(msr_base + 0);
+ const u64 insn_retired = rdmsr(msr_base + 1);
__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
@@ -147,7 +147,7 @@ static void amd_guest_code(void)
* Run the VM to the next GUEST_SYNC(value), and return the value passed
* to the sync. Any other exit from the guest is fatal.
*/
-static uint64_t run_vcpu_to_sync(struct kvm_vcpu *vcpu)
+static u64 run_vcpu_to_sync(struct kvm_vcpu *vcpu)
{
struct ucall uc;
@@ -161,7 +161,7 @@ static uint64_t run_vcpu_to_sync(struct kvm_vcpu *vcpu)
static void run_vcpu_and_sync_pmc_results(struct kvm_vcpu *vcpu)
{
- uint64_t r;
+ u64 r;
memset(&pmc_results, 0, sizeof(pmc_results));
sync_global_to_guest(vcpu->vm, pmc_results);
@@ -182,7 +182,7 @@ static void run_vcpu_and_sync_pmc_results(struct kvm_vcpu *vcpu)
*/
static bool sanity_check_pmu(struct kvm_vcpu *vcpu)
{
- uint64_t r;
+ u64 r;
vm_install_exception_handler(vcpu->vm, GP_VECTOR, guest_gp_handler);
r = run_vcpu_to_sync(vcpu);
@@ -195,7 +195,7 @@ static bool sanity_check_pmu(struct kvm_vcpu *vcpu)
* Remove the first occurrence of 'event' (if any) from the filter's
* event list.
*/
-static void remove_event(struct __kvm_pmu_event_filter *f, uint64_t event)
+static void remove_event(struct __kvm_pmu_event_filter *f, u64 event)
{
bool found = false;
int i;
@@ -212,8 +212,8 @@ static void remove_event(struct __kvm_pmu_event_filter *f, uint64_t event)
#define ASSERT_PMC_COUNTING_INSTRUCTIONS() \
do { \
- uint64_t br = pmc_results.branches_retired; \
- uint64_t ir = pmc_results.instructions_retired; \
+ u64 br = pmc_results.branches_retired; \
+ u64 ir = pmc_results.instructions_retired; \
\
if (br && br != NUM_BRANCHES) \
pr_info("%s: Branch instructions retired = %lu (expected %u)\n", \
@@ -226,8 +226,8 @@ do { \
#define ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS() \
do { \
- uint64_t br = pmc_results.branches_retired; \
- uint64_t ir = pmc_results.instructions_retired; \
+ u64 br = pmc_results.branches_retired; \
+ u64 ir = pmc_results.instructions_retired; \
\
TEST_ASSERT(!br, "%s: Branch instructions retired = %lu (expected 0)", \
__func__, br); \
@@ -418,9 +418,9 @@ static void masked_events_guest_test(uint32_t msr_base)
* The actual values of the counters don't determine the outcome of
* the test, only whether they are zero or non-zero.
*/
- const uint64_t loads = rdmsr(msr_base + 0);
- const uint64_t stores = rdmsr(msr_base + 1);
- const uint64_t loads_stores = rdmsr(msr_base + 2);
+ const u64 loads = rdmsr(msr_base + 0);
+ const u64 stores = rdmsr(msr_base + 1);
+ const u64 loads_stores = rdmsr(msr_base + 2);
int val;
@@ -473,7 +473,7 @@ static void amd_masked_events_guest_code(void)
}
static void run_masked_events_test(struct kvm_vcpu *vcpu,
- const uint64_t masked_events[],
+ const u64 masked_events[],
const int nmasked_events)
{
struct __kvm_pmu_event_filter f = {
@@ -482,7 +482,7 @@ static void run_masked_events_test(struct kvm_vcpu *vcpu,
.flags = KVM_PMU_EVENT_FLAG_MASKED_EVENTS,
};
- memcpy(f.events, masked_events, sizeof(uint64_t) * nmasked_events);
+ memcpy(f.events, masked_events, sizeof(u64) * nmasked_events);
test_with_filter(vcpu, &f);
}
@@ -491,10 +491,10 @@ static void run_masked_events_test(struct kvm_vcpu *vcpu,
#define ALLOW_LOADS_STORES BIT(2)
struct masked_events_test {
- uint64_t intel_events[MAX_TEST_EVENTS];
- uint64_t intel_event_end;
- uint64_t amd_events[MAX_TEST_EVENTS];
- uint64_t amd_event_end;
+ u64 intel_events[MAX_TEST_EVENTS];
+ u64 intel_event_end;
+ u64 amd_events[MAX_TEST_EVENTS];
+ u64 amd_event_end;
const char *msg;
uint32_t flags;
};
@@ -579,9 +579,9 @@ const struct masked_events_test test_cases[] = {
};
static int append_test_events(const struct masked_events_test *test,
- uint64_t *events, int nevents)
+ u64 *events, int nevents)
{
- const uint64_t *evts;
+ const u64 *evts;
int i;
evts = use_intel_pmu() ? test->intel_events : test->amd_events;
@@ -600,7 +600,7 @@ static bool bool_eq(bool a, bool b)
return a == b;
}
-static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events,
+static void run_masked_events_tests(struct kvm_vcpu *vcpu, u64 *events,
int nevents)
{
int ntests = ARRAY_SIZE(test_cases);
@@ -627,7 +627,7 @@ static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events,
}
}
-static void add_dummy_events(uint64_t *events, int nevents)
+static void add_dummy_events(u64 *events, int nevents)
{
int i;
@@ -647,7 +647,7 @@ static void add_dummy_events(uint64_t *events, int nevents)
static void test_masked_events(struct kvm_vcpu *vcpu)
{
int nevents = KVM_PMU_EVENT_FILTER_MAX_EVENTS - MAX_TEST_EVENTS;
- uint64_t events[KVM_PMU_EVENT_FILTER_MAX_EVENTS];
+ u64 events[KVM_PMU_EVENT_FILTER_MAX_EVENTS];
/* Run the test cases against a sparse PMU event filter. */
run_masked_events_tests(vcpu, events, 0);
@@ -665,7 +665,7 @@ static int set_pmu_event_filter(struct kvm_vcpu *vcpu,
return __vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
}
-static int set_pmu_single_event_filter(struct kvm_vcpu *vcpu, uint64_t event,
+static int set_pmu_single_event_filter(struct kvm_vcpu *vcpu, u64 event,
uint32_t flags, uint32_t action)
{
struct __kvm_pmu_event_filter f = {
@@ -684,7 +684,7 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
{
uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
struct __kvm_pmu_event_filter f;
- uint64_t e = ~0ul;
+ u64 e = ~0ul;
int r;
/*
@@ -742,8 +742,8 @@ static void intel_run_fixed_counter_guest_code(uint8_t idx)
}
}
-static uint64_t test_with_fixed_counter_filter(struct kvm_vcpu *vcpu,
- uint32_t action, uint32_t bitmap)
+static u64 test_with_fixed_counter_filter(struct kvm_vcpu *vcpu,
+ uint32_t action, uint32_t bitmap)
{
struct __kvm_pmu_event_filter f = {
.action = action,
@@ -754,9 +754,9 @@ static uint64_t test_with_fixed_counter_filter(struct kvm_vcpu *vcpu,
return run_vcpu_to_sync(vcpu);
}
-static uint64_t test_set_gp_and_fixed_event_filter(struct kvm_vcpu *vcpu,
- uint32_t action,
- uint32_t bitmap)
+static u64 test_set_gp_and_fixed_event_filter(struct kvm_vcpu *vcpu,
+ uint32_t action,
+ uint32_t bitmap)
{
struct __kvm_pmu_event_filter f = base_event_filter;
@@ -772,7 +772,7 @@ static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, uint8_t idx,
{
unsigned int i;
uint32_t bitmap;
- uint64_t count;
+ u64 count;
TEST_ASSERT(nr_fixed_counters < sizeof(bitmap) * 8,
"Invalid nr_fixed_counters");
diff --git a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
index 82a8d88b5338..7e650895c96f 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
@@ -23,8 +23,8 @@
#include <processor.h>
#define BASE_DATA_SLOT 10
-#define BASE_DATA_GPA ((uint64_t)(1ull << 32))
-#define PER_CPU_DATA_SIZE ((uint64_t)(SZ_2M + PAGE_SIZE))
+#define BASE_DATA_GPA ((u64)(1ull << 32))
+#define PER_CPU_DATA_SIZE ((u64)(SZ_2M + PAGE_SIZE))
/* Horrific macro so that the line info is captured accurately :-( */
#define memcmp_g(gpa, pattern, size) \
@@ -38,7 +38,7 @@ do { \
pattern, i, gpa + i, mem[i]); \
} while (0)
-static void memcmp_h(uint8_t *mem, uint64_t gpa, uint8_t pattern, size_t size)
+static void memcmp_h(uint8_t *mem, u64 gpa, uint8_t pattern, size_t size)
{
size_t i;
@@ -70,13 +70,13 @@ enum ucall_syncs {
SYNC_PRIVATE,
};
-static void guest_sync_shared(uint64_t gpa, uint64_t size,
+static void guest_sync_shared(u64 gpa, u64 size,
uint8_t current_pattern, uint8_t new_pattern)
{
GUEST_SYNC5(SYNC_SHARED, gpa, size, current_pattern, new_pattern);
}
-static void guest_sync_private(uint64_t gpa, uint64_t size, uint8_t pattern)
+static void guest_sync_private(u64 gpa, u64 size, uint8_t pattern)
{
GUEST_SYNC4(SYNC_PRIVATE, gpa, size, pattern);
}
@@ -86,10 +86,10 @@ static void guest_sync_private(uint64_t gpa, uint64_t size, uint8_t pattern)
#define MAP_GPA_SHARED BIT(1)
#define MAP_GPA_DO_FALLOCATE BIT(2)
-static void guest_map_mem(uint64_t gpa, uint64_t size, bool map_shared,
+static void guest_map_mem(u64 gpa, u64 size, bool map_shared,
bool do_fallocate)
{
- uint64_t flags = MAP_GPA_SET_ATTRIBUTES;
+ u64 flags = MAP_GPA_SET_ATTRIBUTES;
if (map_shared)
flags |= MAP_GPA_SHARED;
@@ -98,19 +98,19 @@ static void guest_map_mem(uint64_t gpa, uint64_t size, bool map_shared,
kvm_hypercall_map_gpa_range(gpa, size, flags);
}
-static void guest_map_shared(uint64_t gpa, uint64_t size, bool do_fallocate)
+static void guest_map_shared(u64 gpa, u64 size, bool do_fallocate)
{
guest_map_mem(gpa, size, true, do_fallocate);
}
-static void guest_map_private(uint64_t gpa, uint64_t size, bool do_fallocate)
+static void guest_map_private(u64 gpa, u64 size, bool do_fallocate)
{
guest_map_mem(gpa, size, false, do_fallocate);
}
struct {
- uint64_t offset;
- uint64_t size;
+ u64 offset;
+ u64 size;
} static const test_ranges[] = {
GUEST_STAGE(0, PAGE_SIZE),
GUEST_STAGE(0, SZ_2M),
@@ -119,11 +119,11 @@ struct {
GUEST_STAGE(SZ_2M, PAGE_SIZE),
};
-static void guest_test_explicit_conversion(uint64_t base_gpa, bool do_fallocate)
+static void guest_test_explicit_conversion(u64 base_gpa, bool do_fallocate)
{
const uint8_t def_p = 0xaa;
const uint8_t init_p = 0xcc;
- uint64_t j;
+ u64 j;
int i;
/* Memory should be shared by default. */
@@ -134,8 +134,8 @@ static void guest_test_explicit_conversion(uint64_t base_gpa, bool do_fallocate)
memcmp_g(base_gpa, init_p, PER_CPU_DATA_SIZE);
for (i = 0; i < ARRAY_SIZE(test_ranges); i++) {
- uint64_t gpa = base_gpa + test_ranges[i].offset;
- uint64_t size = test_ranges[i].size;
+ u64 gpa = base_gpa + test_ranges[i].offset;
+ u64 size = test_ranges[i].size;
uint8_t p1 = 0x11;
uint8_t p2 = 0x22;
uint8_t p3 = 0x33;
@@ -214,10 +214,10 @@ static void guest_test_explicit_conversion(uint64_t base_gpa, bool do_fallocate)
}
}
-static void guest_punch_hole(uint64_t gpa, uint64_t size)
+static void guest_punch_hole(u64 gpa, u64 size)
{
/* "Mapping" memory shared via fallocate() is done via PUNCH_HOLE. */
- uint64_t flags = MAP_GPA_SHARED | MAP_GPA_DO_FALLOCATE;
+ u64 flags = MAP_GPA_SHARED | MAP_GPA_DO_FALLOCATE;
kvm_hypercall_map_gpa_range(gpa, size, flags);
}
@@ -227,7 +227,7 @@ static void guest_punch_hole(uint64_t gpa, uint64_t size)
* proper conversion. Freeing (PUNCH_HOLE) should zap SPTEs, and reallocating
* (subsequent fault) should zero memory.
*/
-static void guest_test_punch_hole(uint64_t base_gpa, bool precise)
+static void guest_test_punch_hole(u64 base_gpa, bool precise)
{
const uint8_t init_p = 0xcc;
int i;
@@ -239,8 +239,8 @@ static void guest_test_punch_hole(uint64_t base_gpa, bool precise)
guest_map_private(base_gpa, PER_CPU_DATA_SIZE, false);
for (i = 0; i < ARRAY_SIZE(test_ranges); i++) {
- uint64_t gpa = base_gpa + test_ranges[i].offset;
- uint64_t size = test_ranges[i].size;
+ u64 gpa = base_gpa + test_ranges[i].offset;
+ u64 size = test_ranges[i].size;
/*
* Free all memory before each iteration, even for the !precise
@@ -268,7 +268,7 @@ static void guest_test_punch_hole(uint64_t base_gpa, bool precise)
}
}
-static void guest_code(uint64_t base_gpa)
+static void guest_code(u64 base_gpa)
{
/*
* Run the conversion test twice, with and without doing fallocate() on
@@ -289,8 +289,8 @@ static void guest_code(uint64_t base_gpa)
static void handle_exit_hypercall(struct kvm_vcpu *vcpu)
{
struct kvm_run *run = vcpu->run;
- uint64_t gpa = run->hypercall.args[0];
- uint64_t size = run->hypercall.args[1] * PAGE_SIZE;
+ u64 gpa = run->hypercall.args[0];
+ u64 size = run->hypercall.args[1] * PAGE_SIZE;
bool set_attributes = run->hypercall.args[2] & MAP_GPA_SET_ATTRIBUTES;
bool map_shared = run->hypercall.args[2] & MAP_GPA_SHARED;
bool do_fallocate = run->hypercall.args[2] & MAP_GPA_DO_FALLOCATE;
@@ -337,7 +337,7 @@ static void *__test_mem_conversions(void *__vcpu)
case UCALL_ABORT:
REPORT_GUEST_ASSERT(uc);
case UCALL_SYNC: {
- uint64_t gpa = uc.args[1];
+ u64 gpa = uc.args[1];
size_t size = uc.args[2];
size_t i;
@@ -402,7 +402,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
KVM_MEM_GUEST_MEMFD, memfd, slot_size * i);
for (i = 0; i < nr_vcpus; i++) {
- uint64_t gpa = BASE_DATA_GPA + i * per_cpu_size;
+ u64 gpa = BASE_DATA_GPA + i * per_cpu_size;
vcpu_args_set(vcpus[i], 1, gpa);
diff --git a/tools/testing/selftests/kvm/x86/private_mem_kvm_exits_test.c b/tools/testing/selftests/kvm/x86/private_mem_kvm_exits_test.c
index 13e72fcec8dd..925040f394de 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_kvm_exits_test.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_kvm_exits_test.c
@@ -17,12 +17,12 @@
#define EXITS_TEST_SIZE (EXITS_TEST_NPAGES * PAGE_SIZE)
#define EXITS_TEST_SLOT 10
-static uint64_t guest_repeatedly_read(void)
+static u64 guest_repeatedly_read(void)
{
- volatile uint64_t value;
+ volatile u64 value;
while (true)
- value = *((uint64_t *) EXITS_TEST_GVA);
+ value = *((u64 *)EXITS_TEST_GVA);
return value;
}
@@ -72,7 +72,7 @@ static void test_private_access_memslot_deleted(void)
vm_mem_region_delete(vm, EXITS_TEST_SLOT);
pthread_join(vm_thread, &thread_return);
- exit_reason = (uint32_t)(uint64_t)thread_return;
+ exit_reason = (uint32_t)(u64)thread_return;
TEST_ASSERT_EQ(exit_reason, KVM_EXIT_MEMORY_FAULT);
TEST_ASSERT_EQ(vcpu->run->memory_fault.flags, KVM_MEMORY_EXIT_FLAG_PRIVATE);
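
The (uint32_t)(u64) double cast converted above is the standard way to
carry a 32-bit exit reason through pthread's void * return channel:
widen to pointer size on the way out, truncate on the way back. A
self-contained sketch, assuming an LP64 target where void * is 64 bits:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;
typedef uint32_t u32;

static void *vcpu_thread(void *arg)
{
        u32 exit_reason = 42; /* stand-in for vcpu->run->exit_reason */

        /* Widen first so the value is pointer-sized before the cast. */
        return (void *)(u64)exit_reason;
}

int main(void)
{
        pthread_t thread;
        void *thread_return;

        pthread_create(&thread, NULL, vcpu_thread, NULL);
        pthread_join(thread, &thread_return);

        /* Truncate back down, mirroring (u32)(u64)thread_return. */
        printf("exit_reason = %u\n", (u32)(u64)thread_return);
        return 0;
}
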
diff --git a/tools/testing/selftests/kvm/x86/set_sregs_test.c b/tools/testing/selftests/kvm/x86/set_sregs_test.c
index f4095a3d1278..8e654cc9ab16 100644
--- a/tools/testing/selftests/kvm/x86/set_sregs_test.c
+++ b/tools/testing/selftests/kvm/x86/set_sregs_test.c
@@ -46,9 +46,9 @@ do { \
X86_CR4_MCE | X86_CR4_PGE | X86_CR4_PCE | \
X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT)
-static uint64_t calc_supported_cr4_feature_bits(void)
+static u64 calc_supported_cr4_feature_bits(void)
{
- uint64_t cr4 = KVM_ALWAYS_ALLOWED_CR4;
+ u64 cr4 = KVM_ALWAYS_ALLOWED_CR4;
if (kvm_cpu_has(X86_FEATURE_UMIP))
cr4 |= X86_CR4_UMIP;
@@ -74,7 +74,7 @@ static uint64_t calc_supported_cr4_feature_bits(void)
return cr4;
}
-static void test_cr_bits(struct kvm_vcpu *vcpu, uint64_t cr4)
+static void test_cr_bits(struct kvm_vcpu *vcpu, u64 cr4)
{
struct kvm_sregs sregs;
int rc, i;
diff --git a/tools/testing/selftests/kvm/x86/sev_init2_tests.c b/tools/testing/selftests/kvm/x86/sev_init2_tests.c
index 3fb967f40c6a..3515b4c0e860 100644
--- a/tools/testing/selftests/kvm/x86/sev_init2_tests.c
+++ b/tools/testing/selftests/kvm/x86/sev_init2_tests.c
@@ -33,7 +33,7 @@ static int __sev_ioctl(int vm_fd, int cmd_id, void *data)
{
struct kvm_sev_cmd cmd = {
.id = cmd_id,
- .data = (uint64_t)data,
+ .data = (u64)data,
.sev_fd = open_sev_dev_path_or_exit(),
};
int ret;
@@ -100,7 +100,7 @@ void test_flags(uint32_t vm_type)
"invalid flag");
}
-void test_features(uint32_t vm_type, uint64_t supported_features)
+void test_features(uint32_t vm_type, u64 supported_features)
{
int i;
diff --git a/tools/testing/selftests/kvm/x86/sev_smoke_test.c b/tools/testing/selftests/kvm/x86/sev_smoke_test.c
index dc0734d2973c..7ee7cc1da061 100644
--- a/tools/testing/selftests/kvm/x86/sev_smoke_test.c
+++ b/tools/testing/selftests/kvm/x86/sev_smoke_test.c
@@ -108,7 +108,7 @@ static void test_sync_vmsa(uint32_t policy)
kvm_vm_free(vm);
}
-static void test_sev(void *guest_code, uint64_t policy)
+static void test_sev(void *guest_code, u64 policy)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c b/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c
index fabeeaddfb3a..ae4ea5bb1bed 100644
--- a/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c
+++ b/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c
@@ -20,8 +20,8 @@
static void guest_code(bool tdp_enabled)
{
- uint64_t error_code;
- uint64_t vector;
+ u64 error_code;
+ u64 vector;
vector = kvm_asm_safe_ec(FLDS_MEM_EAX, error_code, "a"(MEM_REGION_GVA));
@@ -47,9 +47,9 @@ int main(int argc, char *argv[])
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
struct ucall uc;
- uint64_t *pte;
- uint64_t *hva;
- uint64_t gpa;
+ u64 *pte;
+ u64 *hva;
+ u64 gpa;
int rc;
TEST_REQUIRE(kvm_has_cap(KVM_CAP_SMALLER_MAXPHYADDR));
diff --git a/tools/testing/selftests/kvm/x86/smm_test.c b/tools/testing/selftests/kvm/x86/smm_test.c
index ba64f4e8456d..32f2cdea4c4f 100644
--- a/tools/testing/selftests/kvm/x86/smm_test.c
+++ b/tools/testing/selftests/kvm/x86/smm_test.c
@@ -42,7 +42,7 @@ uint8_t smi_handler[] = {
0x0f, 0xaa, /* rsm */
};
-static inline void sync_with_host(uint64_t phase)
+static inline void sync_with_host(u64 phase)
{
asm volatile("in $" XSTR(SYNC_PORT)", %%al \n"
: "+a" (phase));
@@ -67,7 +67,7 @@ static void guest_code(void *arg)
{
#define L2_GUEST_STACK_SIZE 64
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
- uint64_t apicbase = rdmsr(MSR_IA32_APICBASE);
+ u64 apicbase = rdmsr(MSR_IA32_APICBASE);
struct svm_test_data *svm = arg;
struct vmx_pages *vmx_pages = arg;
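
For context on sync_with_host() above: the IN on SYNC_PORT forces a VM
exit while the phase value sits in AL, so the host can fish it out of
the vCPU's registers rather than out of the I/O payload. A rough
host-side sketch using the selftest helpers (vcpu_regs_get() wraps
KVM_GET_REGS; treat the exact flow as an assumption, not a quote of
smm_test.c):

#include "kvm_util.h"
#include "processor.h"

/*
 * After a KVM_EXIT_IO on SYNC_PORT, recover the phase the guest had
 * loaded into AL before executing IN.
 */
static u64 read_sync_phase(struct kvm_vcpu *vcpu)
{
        struct kvm_regs regs;

        vcpu_regs_get(vcpu, &regs);
        return regs.rax & 0xff; /* AL holds the phase */
}
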
diff --git a/tools/testing/selftests/kvm/x86/state_test.c b/tools/testing/selftests/kvm/x86/state_test.c
index 062f425db75b..151eead91baf 100644
--- a/tools/testing/selftests/kvm/x86/state_test.c
+++ b/tools/testing/selftests/kvm/x86/state_test.c
@@ -140,7 +140,7 @@ static void __attribute__((__flatten__)) guest_code(void *arg)
GUEST_SYNC(1);
if (this_cpu_has(X86_FEATURE_XSAVE)) {
- uint64_t supported_xcr0 = this_cpu_supported_xcr0();
+ u64 supported_xcr0 = this_cpu_supported_xcr0();
uint8_t buffer[4096];
memset(buffer, 0xcc, sizeof(buffer));
@@ -168,8 +168,8 @@ static void __attribute__((__flatten__)) guest_code(void *arg)
}
if (this_cpu_has(X86_FEATURE_MPX)) {
- uint64_t bounds[2] = { 10, 0xffffffffull };
- uint64_t output[2] = { };
+ u64 bounds[2] = { 10, 0xffffffffull };
+ u64 output[2] = { };
GUEST_ASSERT(supported_xcr0 & XFEATURE_MASK_BNDREGS);
GUEST_ASSERT(supported_xcr0 & XFEATURE_MASK_BNDCSR);
@@ -224,7 +224,7 @@ static void __attribute__((__flatten__)) guest_code(void *arg)
int main(int argc, char *argv[])
{
- uint64_t *xstate_bv, saved_xstate_bv;
+ u64 *xstate_bv, saved_xstate_bv;
gva_t nested_gva = 0;
struct kvm_cpuid2 empty_cpuid = {};
struct kvm_regs regs1, regs2;
diff --git a/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c b/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c
index 5068b2dd8005..36b327c8c7c5 100644
--- a/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c
+++ b/tools/testing/selftests/kvm/x86/svm_nested_soft_inject_test.c
@@ -76,7 +76,7 @@ static void l2_guest_code_nmi(void)
ud2();
}
-static void l1_guest_code(struct svm_test_data *svm, uint64_t is_nmi, uint64_t idt_alt)
+static void l1_guest_code(struct svm_test_data *svm, u64 is_nmi, u64 idt_alt)
{
#define L2_GUEST_STACK_SIZE 64
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
@@ -168,7 +168,7 @@ static void run_test(bool is_nmi)
} else {
idt_alt_vm = 0;
}
- vcpu_args_set(vcpu, 3, svm_gva, (uint64_t)is_nmi, (uint64_t)idt_alt_vm);
+ vcpu_args_set(vcpu, 3, svm_gva, (u64)is_nmi, (u64)idt_alt_vm);
memset(&debug, 0, sizeof(debug));
vcpu_guest_debug_set(vcpu, &debug);
diff --git a/tools/testing/selftests/kvm/x86/tsc_msrs_test.c b/tools/testing/selftests/kvm/x86/tsc_msrs_test.c
index 12b0964f4f13..91583969a14f 100644
--- a/tools/testing/selftests/kvm/x86/tsc_msrs_test.c
+++ b/tools/testing/selftests/kvm/x86/tsc_msrs_test.c
@@ -95,7 +95,7 @@ int main(void)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- uint64_t val;
+ u64 val;
ksft_print_header();
ksft_set_plan(5);
diff --git a/tools/testing/selftests/kvm/x86/tsc_scaling_sync.c b/tools/testing/selftests/kvm/x86/tsc_scaling_sync.c
index 59c7304f805e..59da8d4da607 100644
--- a/tools/testing/selftests/kvm/x86/tsc_scaling_sync.c
+++ b/tools/testing/selftests/kvm/x86/tsc_scaling_sync.c
@@ -21,10 +21,10 @@ pthread_spinlock_t create_lock;
#define TEST_TSC_KHZ 2345678UL
#define TEST_TSC_OFFSET 200000000
-uint64_t tsc_sync;
+u64 tsc_sync;
static void guest_code(void)
{
- uint64_t start_tsc, local_tsc, tmp;
+ u64 start_tsc, local_tsc, tmp;
start_tsc = rdtsc();
do {
diff --git a/tools/testing/selftests/kvm/x86/ucna_injection_test.c b/tools/testing/selftests/kvm/x86/ucna_injection_test.c
index 1e5e564523b3..27aae6c92a38 100644
--- a/tools/testing/selftests/kvm/x86/ucna_injection_test.c
+++ b/tools/testing/selftests/kvm/x86/ucna_injection_test.c
@@ -45,7 +45,7 @@
#define MCI_CTL2_RESERVED_BIT BIT_ULL(29)
-static uint64_t supported_mcg_caps;
+static u64 supported_mcg_caps;
/*
* Record states about the injected UCNA.
@@ -53,30 +53,30 @@ static uint64_t supported_mcg_caps;
* handler. Variables without the 'i_' prefix are recorded in the guest's
* main execution thread.
*/
-static volatile uint64_t i_ucna_rcvd;
-static volatile uint64_t i_ucna_addr;
-static volatile uint64_t ucna_addr;
-static volatile uint64_t ucna_addr2;
+static volatile u64 i_ucna_rcvd;
+static volatile u64 i_ucna_addr;
+static volatile u64 ucna_addr;
+static volatile u64 ucna_addr2;
struct thread_params {
struct kvm_vcpu *vcpu;
- uint64_t *p_i_ucna_rcvd;
- uint64_t *p_i_ucna_addr;
- uint64_t *p_ucna_addr;
- uint64_t *p_ucna_addr2;
+ u64 *p_i_ucna_rcvd;
+ u64 *p_i_ucna_addr;
+ u64 *p_ucna_addr;
+ u64 *p_ucna_addr2;
};
static void verify_apic_base_addr(void)
{
- uint64_t msr = rdmsr(MSR_IA32_APICBASE);
- uint64_t base = GET_APIC_BASE(msr);
+ u64 msr = rdmsr(MSR_IA32_APICBASE);
+ u64 base = GET_APIC_BASE(msr);
GUEST_ASSERT(base == APIC_DEFAULT_GPA);
}
static void ucna_injection_guest_code(void)
{
- uint64_t ctl2;
+ u64 ctl2;
verify_apic_base_addr();
xapic_enable();
@@ -106,7 +106,7 @@ static void ucna_injection_guest_code(void)
static void cmci_disabled_guest_code(void)
{
- uint64_t ctl2 = rdmsr(MSR_IA32_MCx_CTL2(UCNA_BANK));
+ u64 ctl2 = rdmsr(MSR_IA32_MCx_CTL2(UCNA_BANK));
wrmsr(MSR_IA32_MCx_CTL2(UCNA_BANK), ctl2 | MCI_CTL2_CMCI_EN);
GUEST_DONE();
@@ -114,7 +114,7 @@ static void cmci_disabled_guest_code(void)
static void cmci_enabled_guest_code(void)
{
- uint64_t ctl2 = rdmsr(MSR_IA32_MCx_CTL2(UCNA_BANK));
+ u64 ctl2 = rdmsr(MSR_IA32_MCx_CTL2(UCNA_BANK));
wrmsr(MSR_IA32_MCx_CTL2(UCNA_BANK), ctl2 | MCI_CTL2_RESERVED_BIT);
GUEST_DONE();
@@ -145,14 +145,15 @@ static void run_vcpu_expect_gp(struct kvm_vcpu *vcpu)
printf("vCPU received GP in guest.\n");
}
-static void inject_ucna(struct kvm_vcpu *vcpu, uint64_t addr) {
+static void inject_ucna(struct kvm_vcpu *vcpu, u64 addr)
+{
/*
* A UCNA error is indicated with VAL=1, UC=1, PCC=0, S=0 and AR=0 in
* the IA32_MCi_STATUS register.
* MSCOD=1 (BIT[16] - MscodDataRdErr).
* MCACOD=0x0090 (Memory controller error format, channel 0)
*/
- uint64_t status = MCI_STATUS_VAL | MCI_STATUS_UC | MCI_STATUS_EN |
+ u64 status = MCI_STATUS_VAL | MCI_STATUS_UC | MCI_STATUS_EN |
MCI_STATUS_MISCV | MCI_STATUS_ADDRV | 0x10090;
struct kvm_x86_mce mce = {};
mce.status = status;
@@ -216,10 +217,10 @@ static void test_ucna_injection(struct kvm_vcpu *vcpu, struct thread_params *par
{
struct kvm_vm *vm = vcpu->vm;
params->vcpu = vcpu;
- params->p_i_ucna_rcvd = (uint64_t *)addr_gva2hva(vm, (uint64_t)&i_ucna_rcvd);
- params->p_i_ucna_addr = (uint64_t *)addr_gva2hva(vm, (uint64_t)&i_ucna_addr);
- params->p_ucna_addr = (uint64_t *)addr_gva2hva(vm, (uint64_t)&ucna_addr);
- params->p_ucna_addr2 = (uint64_t *)addr_gva2hva(vm, (uint64_t)&ucna_addr2);
+ params->p_i_ucna_rcvd = (u64 *)addr_gva2hva(vm, (u64)&i_ucna_rcvd);
+ params->p_i_ucna_addr = (u64 *)addr_gva2hva(vm, (u64)&i_ucna_addr);
+ params->p_ucna_addr = (u64 *)addr_gva2hva(vm, (u64)&ucna_addr);
+ params->p_ucna_addr2 = (u64 *)addr_gva2hva(vm, (u64)&ucna_addr2);
run_ucna_injection(params);
@@ -242,7 +243,7 @@ static void test_ucna_injection(struct kvm_vcpu *vcpu, struct thread_params *par
static void setup_mce_cap(struct kvm_vcpu *vcpu, bool enable_cmci_p)
{
- uint64_t mcg_caps = MCG_CTL_P | MCG_SER_P | MCG_LMCE_P | KVM_MAX_MCE_BANKS;
+ u64 mcg_caps = MCG_CTL_P | MCG_SER_P | MCG_LMCE_P | KVM_MAX_MCE_BANKS;
if (enable_cmci_p)
mcg_caps |= MCG_CMCI_P;
diff --git a/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c b/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c
index 32b2794b78fe..983d1ae0718f 100644
--- a/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c
+++ b/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c
@@ -66,7 +66,7 @@ struct kvm_msr_filter filter_gs = {
},
};
-static uint64_t msr_non_existent_data;
+static u64 msr_non_existent_data;
static int guest_exception_count;
static u32 msr_reads, msr_writes;
@@ -142,7 +142,7 @@ struct kvm_msr_filter no_filter_deny = {
* Note: Force test_rdmsr() to not be inlined to prevent the labels,
* rdmsr_start and rdmsr_end, from being defined multiple times.
*/
-static noinline uint64_t test_rdmsr(uint32_t msr)
+static noinline u64 test_rdmsr(uint32_t msr)
{
uint32_t a, d;
@@ -151,14 +151,14 @@ static noinline uint64_t test_rdmsr(uint32_t msr)
__asm__ __volatile__("rdmsr_start: rdmsr; rdmsr_end:" :
"=a"(a), "=d"(d) : "c"(msr) : "memory");
- return a | ((uint64_t) d << 32);
+ return a | ((u64)d << 32);
}
/*
* Note: Force test_wrmsr() to not be inlined to prevent the labels,
* wrmsr_start and wrmsr_end, from being defined multiple times.
*/
-static noinline void test_wrmsr(uint32_t msr, uint64_t value)
+static noinline void test_wrmsr(uint32_t msr, u64 value)
{
uint32_t a = value;
uint32_t d = value >> 32;
@@ -176,7 +176,7 @@ extern char wrmsr_start, wrmsr_end;
* Note: Force test_em_rdmsr() to not be inlined to prevent the labels,
* rdmsr_start and rdmsr_end, from being defined multiple times.
*/
-static noinline uint64_t test_em_rdmsr(uint32_t msr)
+static noinline u64 test_em_rdmsr(uint32_t msr)
{
uint32_t a, d;
@@ -185,14 +185,14 @@ static noinline uint64_t test_em_rdmsr(uint32_t msr)
__asm__ __volatile__(KVM_FEP "em_rdmsr_start: rdmsr; em_rdmsr_end:" :
"=a"(a), "=d"(d) : "c"(msr) : "memory");
- return a | ((uint64_t) d << 32);
+ return a | ((u64)d << 32);
}
/*
* Note: Force test_em_wrmsr() to not be inlined to prevent the labels,
* wrmsr_start and wrmsr_end, from being defined multiple times.
*/
-static noinline void test_em_wrmsr(uint32_t msr, uint64_t value)
+static noinline void test_em_wrmsr(uint32_t msr, u64 value)
{
uint32_t a = value;
uint32_t d = value >> 32;
@@ -208,7 +208,7 @@ extern char em_wrmsr_start, em_wrmsr_end;
static void guest_code_filter_allow(void)
{
- uint64_t data;
+ u64 data;
/*
* Test userspace intercepting rdmsr / wrmsr for MSR_IA32_XSS.
@@ -328,7 +328,7 @@ static void guest_code_filter_deny(void)
static void guest_code_permission_bitmap(void)
{
- uint64_t data;
+ u64 data;
data = test_rdmsr(MSR_FS_BASE);
GUEST_ASSERT(data == MSR_FS_BASE);
@@ -458,7 +458,7 @@ static void process_ucall_done(struct kvm_vcpu *vcpu)
uc.cmd, UCALL_DONE);
}
-static uint64_t process_ucall(struct kvm_vcpu *vcpu)
+static u64 process_ucall(struct kvm_vcpu *vcpu)
{
struct ucall uc = {};
@@ -496,7 +496,7 @@ static void run_guest_then_process_wrmsr(struct kvm_vcpu *vcpu,
process_wrmsr(vcpu, msr_index);
}
-static uint64_t run_guest_then_process_ucall(struct kvm_vcpu *vcpu)
+static u64 run_guest_then_process_ucall(struct kvm_vcpu *vcpu)
{
vcpu_run(vcpu);
return process_ucall(vcpu);
@@ -513,7 +513,7 @@ KVM_ONE_VCPU_TEST_SUITE(user_msr);
KVM_ONE_VCPU_TEST(user_msr, msr_filter_allow, guest_code_filter_allow)
{
struct kvm_vm *vm = vcpu->vm;
- uint64_t cmd;
+ u64 cmd;
int rc;
rc = kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR);
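
One pattern worth calling out in the conversions above: test_rdmsr()
and friends rebuild the 64-bit MSR value from the EDX:EAX halves with
a | ((u64)d << 32), where the cast must happen before the shift
(shifting a 32-bit d by 32 is undefined). A standalone sketch of just
the recombination, with the halves supplied directly instead of read
via RDMSR:

#include <assert.h>
#include <stdint.h>

typedef uint64_t u64;
typedef uint32_t u32;

/* Rebuild a 64-bit MSR value from the EDX:EAX pair RDMSR returns. */
static u64 msr_from_halves(u32 a /* EAX: low half */,
                           u32 d /* EDX: high half */)
{
        return a | ((u64)d << 32);
}

int main(void)
{
        assert(msr_from_halves(0xdeadbeef, 0x12345678) ==
               0x12345678deadbeefULL);
        return 0;
}
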
diff --git a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c b/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
index 9bf08e278ffe..ea521e752f66 100644
--- a/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_dirty_log_test.c
@@ -82,7 +82,7 @@ static void test_vmx_dirty_log(bool enable_ept)
gva_t vmx_pages_gva = 0;
struct vmx_pages *vmx;
unsigned long *bmap;
- uint64_t *host_test_mem;
+ u64 *host_test_mem;
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/vmx_msrs_test.c b/tools/testing/selftests/kvm/x86/vmx_msrs_test.c
index 90720b6205f4..d61c8c69ade3 100644
--- a/tools/testing/selftests/kvm/x86/vmx_msrs_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_msrs_test.c
@@ -13,10 +13,10 @@
#include "vmx.h"
static void vmx_fixed1_msr_test(struct kvm_vcpu *vcpu, uint32_t msr_index,
- uint64_t mask)
+ u64 mask)
{
- uint64_t val = vcpu_get_msr(vcpu, msr_index);
- uint64_t bit;
+ u64 val = vcpu_get_msr(vcpu, msr_index);
+ u64 bit;
mask &= val;
@@ -27,10 +27,10 @@ static void vmx_fixed1_msr_test(struct kvm_vcpu *vcpu, uint32_t msr_index,
}
static void vmx_fixed0_msr_test(struct kvm_vcpu *vcpu, uint32_t msr_index,
- uint64_t mask)
+ u64 mask)
{
- uint64_t val = vcpu_get_msr(vcpu, msr_index);
- uint64_t bit;
+ u64 val = vcpu_get_msr(vcpu, msr_index);
+ u64 bit;
mask = ~mask | val;
@@ -68,10 +68,10 @@ static void vmx_save_restore_msrs_test(struct kvm_vcpu *vcpu)
}
static void __ia32_feature_control_msr_test(struct kvm_vcpu *vcpu,
- uint64_t msr_bit,
+ u64 msr_bit,
struct kvm_x86_cpu_feature feature)
{
- uint64_t val;
+ u64 val;
vcpu_clear_cpuid_feature(vcpu, feature);
@@ -90,7 +90,7 @@ static void __ia32_feature_control_msr_test(struct kvm_vcpu *vcpu,
static void ia32_feature_control_msr_test(struct kvm_vcpu *vcpu)
{
- uint64_t supported_bits = FEAT_CTL_LOCKED |
+ u64 supported_bits = FEAT_CTL_LOCKED |
FEAT_CTL_VMX_ENABLED_INSIDE_SMX |
FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX |
FEAT_CTL_SGX_LC_ENABLED |
diff --git a/tools/testing/selftests/kvm/x86/vmx_nested_tsc_scaling_test.c b/tools/testing/selftests/kvm/x86/vmx_nested_tsc_scaling_test.c
index 530d71b6d6bc..43861b96b5a4 100644
--- a/tools/testing/selftests/kvm/x86/vmx_nested_tsc_scaling_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_nested_tsc_scaling_test.c
@@ -18,7 +18,7 @@
/* L2 is scaled up (from L1's perspective) by this factor */
#define L2_SCALE_FACTOR 4ULL
-#define TSC_OFFSET_L2 ((uint64_t) -33125236320908)
+#define TSC_OFFSET_L2 ((u64)-33125236320908)
#define TSC_MULTIPLIER_L2 (L2_SCALE_FACTOR << 48)
#define L2_GUEST_STACK_SIZE 64
@@ -34,9 +34,9 @@ enum { USLEEP, UCHECK_L1, UCHECK_L2 };
* measurements, a difference of 1% between the actual and the expected value
* is tolerated.
*/
-static void compare_tsc_freq(uint64_t actual, uint64_t expected)
+static void compare_tsc_freq(u64 actual, u64 expected)
{
- uint64_t tolerance, thresh_low, thresh_high;
+ u64 tolerance, thresh_low, thresh_high;
tolerance = expected / 100;
thresh_low = expected - tolerance;
@@ -54,7 +54,7 @@ static void compare_tsc_freq(uint64_t actual, uint64_t expected)
static void check_tsc_freq(int level)
{
- uint64_t tsc_start, tsc_end, tsc_freq;
+ u64 tsc_start, tsc_end, tsc_freq;
/*
* Reading the TSC twice with about a second's difference should give
@@ -122,12 +122,12 @@ int main(int argc, char *argv[])
struct kvm_vm *vm;
gva_t vmx_pages_gva;
- uint64_t tsc_start, tsc_end;
- uint64_t tsc_khz;
- uint64_t l1_scale_factor;
- uint64_t l0_tsc_freq = 0;
- uint64_t l1_tsc_freq = 0;
- uint64_t l2_tsc_freq = 0;
+ u64 tsc_start, tsc_end;
+ u64 tsc_khz;
+ u64 l1_scale_factor;
+ u64 l0_tsc_freq = 0;
+ u64 l1_tsc_freq = 0;
+ u64 l2_tsc_freq = 0;
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
TEST_REQUIRE(kvm_has_cap(KVM_CAP_TSC_CONTROL));
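
On TSC_MULTIPLIER_L2 above: the VMX TSC multiplier is a 64-bit
fixed-point value with 48 fractional bits, so shifting the integer
scale factor left by 48 yields an exact multiplier of 4. A sketch of
the scaling arithmetic itself, using the compiler's 128-bit integers to
keep the intermediate product from overflowing (the kernel does the
equivalent with mul_u64_u64_shr()):

#include <assert.h>
#include <stdint.h>

typedef uint64_t u64;

#define L2_SCALE_FACTOR   4ULL
#define TSC_MULTIPLIER_L2 (L2_SCALE_FACTOR << 48)

/* Apply a fixed-point TSC multiplier with 48 fractional bits. */
static u64 scale_tsc(u64 tsc, u64 multiplier)
{
        return (u64)(((unsigned __int128)tsc * multiplier) >> 48);
}

int main(void)
{
        /* A multiplier of 4 << 48 quadruples the observed rate. */
        assert(scale_tsc(1000, TSC_MULTIPLIER_L2) == 4000);
        return 0;
}
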
diff --git a/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c b/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c
index a1f5ff45d518..0563bd20621b 100644
--- a/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c
@@ -51,7 +51,7 @@ static const union perf_capabilities format_caps = {
.pebs_format = -1,
};
-static void guest_test_perf_capabilities_gp(uint64_t val)
+static void guest_test_perf_capabilities_gp(u64 val)
{
uint8_t vector = wrmsr_safe(MSR_IA32_PERF_CAPABILITIES, val);
@@ -60,7 +60,7 @@ static void guest_test_perf_capabilities_gp(uint64_t val)
val, vector);
}
-static void guest_code(uint64_t current_val)
+static void guest_code(u64 current_val)
{
int i;
@@ -128,7 +128,7 @@ KVM_ONE_VCPU_TEST(vmx_pmu_caps, basic_perf_capabilities, guest_code)
KVM_ONE_VCPU_TEST(vmx_pmu_caps, fungible_perf_capabilities, guest_code)
{
- const uint64_t fungible_caps = host_cap.capabilities & ~immutable_caps.capabilities;
+ const u64 fungible_caps = host_cap.capabilities & ~immutable_caps.capabilities;
int bit;
for_each_set_bit(bit, &fungible_caps, 64) {
@@ -147,7 +147,7 @@ KVM_ONE_VCPU_TEST(vmx_pmu_caps, fungible_perf_capabilities, guest_code)
*/
KVM_ONE_VCPU_TEST(vmx_pmu_caps, immutable_perf_capabilities, guest_code)
{
- const uint64_t reserved_caps = (~host_cap.capabilities |
+ const u64 reserved_caps = (~host_cap.capabilities |
immutable_caps.capabilities) &
~format_caps.capabilities;
union perf_capabilities val = host_cap;
@@ -209,7 +209,7 @@ KVM_ONE_VCPU_TEST(vmx_pmu_caps, lbr_perf_capabilities, guest_code)
KVM_ONE_VCPU_TEST(vmx_pmu_caps, perf_capabilities_unsupported, guest_code)
{
- uint64_t val;
+ u64 val;
int i, r;
vcpu_set_msr(vcpu, MSR_IA32_PERF_CAPABILITIES, host_cap.capabilities);
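
fungible_perf_capabilities above iterates the capability mask with
for_each_set_bit(). Without the tools/include helper, the same walk can
be open-coded by clearing the lowest set bit each round; a small sketch
with an arbitrary example mask:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

int main(void)
{
        u64 fungible_caps = 0x2005; /* example: bits 0, 2 and 13 set */

        for (u64 m = fungible_caps; m; m &= m - 1) {
                int bit = __builtin_ctzll(m); /* lowest set bit index */

                printf("toggling capability bit %d\n", bit);
        }
        return 0;
}
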
diff --git a/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c b/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c
index fc294ccc2a7e..ed32522f5644 100644
--- a/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c
@@ -63,7 +63,7 @@ static void check_ia32_tsc_adjust(int64_t max)
static void l2_guest_code(void)
{
- uint64_t l1_tsc = rdtsc() - TSC_OFFSET_VALUE;
+ u64 l1_tsc = rdtsc() - TSC_OFFSET_VALUE;
wrmsr(MSR_IA32_TSC, l1_tsc - TSC_ADJUST_VALUE);
check_ia32_tsc_adjust(-2 * TSC_ADJUST_VALUE);
diff --git a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
index 2aa14bd237d9..bd7b51342441 100644
--- a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
+++ b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
@@ -48,16 +48,16 @@
* Incremented in the IPI handler. Provides evidence to the sender that the IPI
* arrived at the destination.
*/
-static volatile uint64_t ipis_rcvd;
+static volatile u64 ipis_rcvd;
/* Data struct shared between host main thread and vCPUs */
struct test_data_page {
uint32_t halter_apic_id;
- volatile uint64_t hlt_count;
- volatile uint64_t wake_count;
- uint64_t ipis_sent;
- uint64_t migrations_attempted;
- uint64_t migrations_completed;
+ volatile u64 hlt_count;
+ volatile u64 wake_count;
+ u64 ipis_sent;
+ u64 migrations_attempted;
+ u64 migrations_completed;
uint32_t icr;
uint32_t icr2;
uint32_t halter_tpr;
@@ -75,13 +75,13 @@ struct test_data_page {
struct thread_params {
struct test_data_page *data;
struct kvm_vcpu *vcpu;
- uint64_t *pipis_rcvd; /* host address of ipis_rcvd global */
+ u64 *pipis_rcvd; /* host address of ipis_rcvd global */
};
void verify_apic_base_addr(void)
{
- uint64_t msr = rdmsr(MSR_IA32_APICBASE);
- uint64_t base = GET_APIC_BASE(msr);
+ u64 msr = rdmsr(MSR_IA32_APICBASE);
+ u64 base = GET_APIC_BASE(msr);
GUEST_ASSERT(base == APIC_DEFAULT_GPA);
}
@@ -125,12 +125,12 @@ static void guest_ipi_handler(struct ex_regs *regs)
static void sender_guest_code(struct test_data_page *data)
{
- uint64_t last_wake_count;
- uint64_t last_hlt_count;
- uint64_t last_ipis_rcvd_count;
+ u64 last_wake_count;
+ u64 last_hlt_count;
+ u64 last_ipis_rcvd_count;
uint32_t icr_val;
uint32_t icr2_val;
- uint64_t tsc_start;
+ u64 tsc_start;
verify_apic_base_addr();
xapic_enable();
@@ -248,7 +248,7 @@ static void cancel_join_vcpu_thread(pthread_t thread, struct kvm_vcpu *vcpu)
}
void do_migrations(struct test_data_page *data, int run_secs, int delay_usecs,
- uint64_t *pipis_rcvd)
+ u64 *pipis_rcvd)
{
long pages_not_moved;
unsigned long nodemask = 0;
@@ -259,9 +259,9 @@ void do_migrations(struct test_data_page *data, int run_secs, int delay_usecs,
int i, r;
int from, to;
unsigned long bit;
- uint64_t hlt_count;
- uint64_t wake_count;
- uint64_t ipis_sent;
+ u64 hlt_count;
+ u64 wake_count;
+ u64 ipis_sent;
fprintf(stderr, "Calling migrate_pages every %d microseconds\n",
delay_usecs);
@@ -399,7 +399,7 @@ int main(int argc, char *argv[])
pthread_t threads[2];
struct thread_params params[2];
struct kvm_vm *vm;
- uint64_t *pipis_rcvd;
+ u64 *pipis_rcvd;
get_cmdline_args(argc, argv, &run_secs, &migrate, &delay_usecs);
if (run_secs <= 0)
@@ -424,7 +424,7 @@ int main(int argc, char *argv[])
vcpu_args_set(params[0].vcpu, 1, test_data_page_vaddr);
vcpu_args_set(params[1].vcpu, 1, test_data_page_vaddr);
- pipis_rcvd = (uint64_t *)addr_gva2hva(vm, (uint64_t)&ipis_rcvd);
+ pipis_rcvd = (u64 *)addr_gva2hva(vm, (u64)&ipis_rcvd);
params[0].pipis_rcvd = pipis_rcvd;
params[1].pipis_rcvd = pipis_rcvd;
diff --git a/tools/testing/selftests/kvm/x86/xapic_state_test.c b/tools/testing/selftests/kvm/x86/xapic_state_test.c
index fdebff1165c7..4d610bffbbd2 100644
--- a/tools/testing/selftests/kvm/x86/xapic_state_test.c
+++ b/tools/testing/selftests/kvm/x86/xapic_state_test.c
@@ -23,7 +23,7 @@ static void xapic_guest_code(void)
xapic_enable();
while (1) {
- uint64_t val = (u64)xapic_read_reg(APIC_IRR) |
+ u64 val = (u64)xapic_read_reg(APIC_IRR) |
(u64)xapic_read_reg(APIC_IRR + 0x10) << 32;
xapic_write_reg(APIC_ICR2, val >> 32);
@@ -43,7 +43,7 @@ static void x2apic_guest_code(void)
x2apic_enable();
do {
- uint64_t val = x2apic_read_reg(APIC_IRR) |
+ u64 val = x2apic_read_reg(APIC_IRR) |
x2apic_read_reg(APIC_IRR + 0x10) << 32;
if (val & X2APIC_RSVD_BITS_MASK) {
@@ -56,12 +56,12 @@ static void x2apic_guest_code(void)
} while (1);
}
-static void ____test_icr(struct xapic_vcpu *x, uint64_t val)
+static void ____test_icr(struct xapic_vcpu *x, u64 val)
{
struct kvm_vcpu *vcpu = x->vcpu;
struct kvm_lapic_state xapic;
struct ucall uc;
- uint64_t icr;
+ u64 icr;
/*
* Tell the guest what ICR value to write. Use the IRR to pass info,
@@ -93,7 +93,7 @@ static void ____test_icr(struct xapic_vcpu *x, uint64_t val)
TEST_ASSERT_EQ(icr, val & ~APIC_ICR_BUSY);
}
-static void __test_icr(struct xapic_vcpu *x, uint64_t val)
+static void __test_icr(struct xapic_vcpu *x, u64 val)
{
/*
* The BUSY bit is reserved on both AMD and Intel, but only AMD treats
@@ -109,7 +109,7 @@ static void __test_icr(struct xapic_vcpu *x, uint64_t val)
static void test_icr(struct xapic_vcpu *x)
{
struct kvm_vcpu *vcpu = x->vcpu;
- uint64_t icr, i, j;
+ u64 icr, i, j;
icr = APIC_DEST_SELF | APIC_INT_ASSERT | APIC_DM_FIXED;
for (i = 0; i <= 0xff; i++)
@@ -142,7 +142,7 @@ static void test_icr(struct xapic_vcpu *x)
__test_icr(x, -1ull & ~APIC_DM_FIXED_MASK);
}
-static void __test_apic_id(struct kvm_vcpu *vcpu, uint64_t apic_base)
+static void __test_apic_id(struct kvm_vcpu *vcpu, u64 apic_base)
{
uint32_t apic_id, expected;
struct kvm_lapic_state xapic;
@@ -172,7 +172,7 @@ static void test_apic_id(void)
{
const uint32_t NR_VCPUS = 3;
struct kvm_vcpu *vcpus[NR_VCPUS];
- uint64_t apic_base;
+ u64 apic_base;
struct kvm_vm *vm;
int i;
diff --git a/tools/testing/selftests/kvm/x86/xcr0_cpuid_test.c b/tools/testing/selftests/kvm/x86/xcr0_cpuid_test.c
index c8a5c5e51661..650b18434ec8 100644
--- a/tools/testing/selftests/kvm/x86/xcr0_cpuid_test.c
+++ b/tools/testing/selftests/kvm/x86/xcr0_cpuid_test.c
@@ -21,7 +21,7 @@
*/
#define ASSERT_XFEATURE_DEPENDENCIES(supported_xcr0, xfeatures, dependencies) \
do { \
- uint64_t __supported = (supported_xcr0) & ((xfeatures) | (dependencies)); \
+ u64 __supported = (supported_xcr0) & ((xfeatures) | (dependencies)); \
\
__GUEST_ASSERT((__supported & (xfeatures)) != (xfeatures) || \
__supported == ((xfeatures) | (dependencies)), \
@@ -39,7 +39,7 @@ do { \
*/
#define ASSERT_ALL_OR_NONE_XFEATURE(supported_xcr0, xfeatures) \
do { \
- uint64_t __supported = (supported_xcr0) & (xfeatures); \
+ u64 __supported = (supported_xcr0) & (xfeatures); \
\
__GUEST_ASSERT(!__supported || __supported == (xfeatures), \
"supported = 0x%lx, xfeatures = 0x%llx", \
@@ -48,8 +48,8 @@ do { \
static void guest_code(void)
{
- uint64_t initial_xcr0;
- uint64_t supported_xcr0;
+ u64 initial_xcr0;
+ u64 supported_xcr0;
int i, vector;
set_cr4(get_cr4() | X86_CR4_OSXSAVE);
diff --git a/tools/testing/selftests/kvm/x86/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86/xen_shinfo_test.c
index 287829f850f7..77fcf8345342 100644
--- a/tools/testing/selftests/kvm/x86/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86/xen_shinfo_test.c
@@ -117,14 +117,14 @@ struct pvclock_wall_clock {
struct vcpu_runstate_info {
uint32_t state;
- uint64_t state_entry_time;
- uint64_t time[5]; /* Extra field for overrun check */
+ u64 state_entry_time;
+ u64 time[5]; /* Extra field for overrun check */
};
struct compat_vcpu_runstate_info {
uint32_t state;
- uint64_t state_entry_time;
- uint64_t time[5];
+ u64 state_entry_time;
+ u64 time[5];
} __attribute__((__packed__));
struct arch_vcpu_info {
@@ -671,7 +671,7 @@ int main(int argc, char *argv[])
printf("Testing RUNSTATE_ADJUST\n");
rst.type = KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST;
memset(&rst.u, 0, sizeof(rst.u));
- rst.u.runstate.state = (uint64_t)-1;
+ rst.u.runstate.state = (u64)-1;
rst.u.runstate.time_blocked =
0x5a - rs->time[RUNSTATE_blocked];
rst.u.runstate.time_offline =
@@ -1126,7 +1126,7 @@ int main(int argc, char *argv[])
/* Don't change the address, just trigger a write */
struct kvm_xen_vcpu_attr adj = {
.type = KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST,
- .u.runstate.state = (uint64_t)-1
+ .u.runstate.state = (u64)-1
};
vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &adj);
diff --git a/tools/testing/selftests/kvm/x86/xss_msr_test.c b/tools/testing/selftests/kvm/x86/xss_msr_test.c
index f331a4e9bae3..12c63df6bbce 100644
--- a/tools/testing/selftests/kvm/x86/xss_msr_test.c
+++ b/tools/testing/selftests/kvm/x86/xss_msr_test.c
@@ -17,7 +17,7 @@ int main(int argc, char *argv[])
bool xss_in_msr_list;
struct kvm_vm *vm;
struct kvm_vcpu *vcpu;
- uint64_t xss_val;
+ u64 xss_val;
int i, r;
/* Create VM */
--
2.49.0.906.g1f30a19c02-goog
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH 05/10] KVM: selftests: Use s64 instead of int64_t
2025-05-01 18:32 [PATCH 00/10] KVM: selftests: Convert to kernel-style types David Matlack
` (3 preceding siblings ...)
2025-05-01 18:32 ` [PATCH 04/10] KVM: selftests: Use u64 instead of uint64_t David Matlack
@ 2025-05-01 18:32 ` David Matlack
2025-05-01 18:33 ` [PATCH 06/10] KVM: selftests: Use u32 instead of uint32_t David Matlack
` (6 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: David Matlack @ 2025-05-01 18:32 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Anup Patel, Atish Patra, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, David Hildenbrand,
Sean Christopherson, David Matlack, Andrew Jones, Isaku Yamahata,
Reinette Chatre, Eric Auger, James Houghton, Colin Ian King, kvm,
linux-arm-kernel, kvmarm, kvm-riscv, linux-riscv
Use s64 instead of int64_t to make the KVM selftests code more concise
and more similar to the kernel (since selftests are primarily developed
by kernel developers).
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/int64_t/s64/g'
Then whitespace was manually adjusted to make checkpatch.pl happy.
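For anyone regenerating this on a different base, a minimal sketch of the
full flow (the checkpatch.pl invocation below is illustrative and assumes a
kernel tree with scripts/checkpatch.pl; it is not part of this commit):

git ls-files tools/testing/selftests/kvm | xargs sed -i 's/int64_t/s64/g'
# Re-check only the touched files for alignment/style fallout:
git diff --name-only | xargs ./scripts/checkpatch.pl --terse --file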
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
tools/testing/selftests/kvm/arm64/set_id_regs.c | 2 +-
tools/testing/selftests/kvm/guest_print_test.c | 2 +-
tools/testing/selftests/kvm/include/test_util.h | 4 ++--
tools/testing/selftests/kvm/lib/test_util.c | 16 ++++++++--------
.../testing/selftests/kvm/lib/userfaultfd_util.c | 2 +-
tools/testing/selftests/kvm/lib/x86/processor.c | 4 ++--
tools/testing/selftests/kvm/memslot_perf_test.c | 2 +-
tools/testing/selftests/kvm/steal_time.c | 4 ++--
tools/testing/selftests/kvm/x86/kvm_clock_test.c | 2 +-
.../selftests/kvm/x86/vmx_tsc_adjust_test.c | 6 +++---
10 files changed, 22 insertions(+), 22 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index d2b689d844ae..502b8e605048 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -36,7 +36,7 @@ struct reg_ftr_bits {
* For FTR_EXACT, safe_val is used as the exact safe value.
* For FTR_LOWER_SAFE, safe_val is used as the minimal safe value.
*/
- int64_t safe_val;
+ s64 safe_val;
};
struct test_feature_reg {
diff --git a/tools/testing/selftests/kvm/guest_print_test.c b/tools/testing/selftests/kvm/guest_print_test.c
index 894ef7d2481e..b059abcf1a5b 100644
--- a/tools/testing/selftests/kvm/guest_print_test.c
+++ b/tools/testing/selftests/kvm/guest_print_test.c
@@ -25,7 +25,7 @@ static struct guest_vals vals;
/* GUEST_PRINTF()/GUEST_ASSERT_FMT() does not support float or double. */
#define TYPE_LIST \
-TYPE(test_type_i64, I64, "%ld", int64_t) \
+TYPE(test_type_i64, I64, "%ld", s64) \
TYPE(test_type_u64, U64u, "%lu", u64) \
TYPE(test_type_x64, U64x, "0x%lx", u64) \
TYPE(test_type_X64, U64X, "0x%lX", u64) \
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 7cd539776533..e3cc5832c1ad 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -82,8 +82,8 @@ do { \
size_t parse_size(const char *size);
-int64_t timespec_to_ns(struct timespec ts);
-struct timespec timespec_add_ns(struct timespec ts, int64_t ns);
+s64 timespec_to_ns(struct timespec ts);
+struct timespec timespec_add_ns(struct timespec ts, s64 ns);
struct timespec timespec_add(struct timespec ts1, struct timespec ts2);
struct timespec timespec_sub(struct timespec ts1, struct timespec ts2);
struct timespec timespec_elapsed(struct timespec start);
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index a23dbb796f2e..06378718d67d 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -76,12 +76,12 @@ size_t parse_size(const char *size)
return base << shift;
}
-int64_t timespec_to_ns(struct timespec ts)
+s64 timespec_to_ns(struct timespec ts)
{
- return (int64_t)ts.tv_nsec + 1000000000LL * (int64_t)ts.tv_sec;
+ return (s64)ts.tv_nsec + 1000000000LL * (s64)ts.tv_sec;
}
-struct timespec timespec_add_ns(struct timespec ts, int64_t ns)
+struct timespec timespec_add_ns(struct timespec ts, s64 ns)
{
struct timespec res;
@@ -94,15 +94,15 @@ struct timespec timespec_add_ns(struct timespec ts, int64_t ns)
struct timespec timespec_add(struct timespec ts1, struct timespec ts2)
{
- int64_t ns1 = timespec_to_ns(ts1);
- int64_t ns2 = timespec_to_ns(ts2);
+ s64 ns1 = timespec_to_ns(ts1);
+ s64 ns2 = timespec_to_ns(ts2);
return timespec_add_ns((struct timespec){0}, ns1 + ns2);
}
struct timespec timespec_sub(struct timespec ts1, struct timespec ts2)
{
- int64_t ns1 = timespec_to_ns(ts1);
- int64_t ns2 = timespec_to_ns(ts2);
+ s64 ns1 = timespec_to_ns(ts1);
+ s64 ns2 = timespec_to_ns(ts2);
return timespec_add_ns((struct timespec){0}, ns1 - ns2);
}
@@ -116,7 +116,7 @@ struct timespec timespec_elapsed(struct timespec start)
struct timespec timespec_div(struct timespec ts, int divisor)
{
- int64_t ns = timespec_to_ns(ts) / divisor;
+ s64 ns = timespec_to_ns(ts) / divisor;
return timespec_add_ns((struct timespec){0}, ns);
}
diff --git a/tools/testing/selftests/kvm/lib/userfaultfd_util.c b/tools/testing/selftests/kvm/lib/userfaultfd_util.c
index 516ae5bd7576..029465ea4b0b 100644
--- a/tools/testing/selftests/kvm/lib/userfaultfd_util.c
+++ b/tools/testing/selftests/kvm/lib/userfaultfd_util.c
@@ -27,7 +27,7 @@ static void *uffd_handler_thread_fn(void *arg)
{
struct uffd_reader_args *reader_args = (struct uffd_reader_args *)arg;
int uffd = reader_args->uffd;
- int64_t pages = 0;
+ s64 pages = 0;
struct timespec start;
struct timespec ts_diff;
struct epoll_event evt;
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index e06dec2fddc2..33be57ae6807 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -293,8 +293,8 @@ u64 *__vm_get_page_table_entry(struct kvm_vm *vm, u64 vaddr, int *level)
* Based on the mode check above there are 48 bits in the vaddr, so
* shift 16 to sign extend the last bit (bit-47),
*/
- TEST_ASSERT(vaddr == (((int64_t)vaddr << 16) >> 16),
- "Canonical check failed. The virtual address is invalid.");
+ TEST_ASSERT(vaddr == (((s64)vaddr << 16) >> 16),
+ "Canonical check failed. The virtual address is invalid.");
pml4e = virt_get_pte(vm, &vm->pgd, vaddr, PG_LEVEL_512G);
if (vm_is_target_pte(pml4e, level, PG_LEVEL_512G))
diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index 7ad29c775336..75c54c277690 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -1039,7 +1039,7 @@ static bool parse_args(int argc, char *argv[],
struct test_result {
struct timespec slot_runtime, guest_runtime, iter_runtime;
- int64_t slottimens, runtimens;
+ s64 slottimens, runtimens;
u64 nloops;
};
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index 3370988bd35b..57f32a31d7ac 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -114,7 +114,7 @@ struct st_time {
u64 st_time;
};
-static int64_t smccc(uint32_t func, u64 arg)
+static s64 smccc(uint32_t func, u64 arg)
{
struct arm_smccc_res res;
@@ -131,7 +131,7 @@ static void check_status(struct st_time *st)
static void guest_code(int cpu)
{
struct st_time *st;
- int64_t status;
+ s64 status;
status = smccc(SMCCC_ARCH_FEATURES, PV_TIME_FEATURES);
GUEST_ASSERT_EQ(status, 0);
diff --git a/tools/testing/selftests/kvm/x86/kvm_clock_test.c b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
index e986d289e19b..d885926c578d 100644
--- a/tools/testing/selftests/kvm/x86/kvm_clock_test.c
+++ b/tools/testing/selftests/kvm/x86/kvm_clock_test.c
@@ -18,7 +18,7 @@
struct test_case {
u64 kvmclock_base;
- int64_t realtime_offset;
+ s64 realtime_offset;
};
static struct test_case test_cases[] = {
diff --git a/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c b/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c
index ed32522f5644..450932e4b0c9 100644
--- a/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c
@@ -52,9 +52,9 @@ enum {
/* The virtual machine object. */
static struct kvm_vm *vm;
-static void check_ia32_tsc_adjust(int64_t max)
+static void check_ia32_tsc_adjust(s64 max)
{
- int64_t adjust;
+ s64 adjust;
adjust = rdmsr(MSR_IA32_TSC_ADJUST);
GUEST_SYNC(adjust);
@@ -111,7 +111,7 @@ static void l1_guest_code(struct vmx_pages *vmx_pages)
GUEST_DONE();
}
-static void report(int64_t val)
+static void report(s64 val)
{
pr_info("IA32_TSC_ADJUST is %ld (%lld * TSC_ADJUST_VALUE + %lld).\n",
val, val / TSC_ADJUST_VALUE, val % TSC_ADJUST_VALUE);
--
2.49.0.906.g1f30a19c02-goog
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH 06/10] KVM: selftests: Use u32 instead of uint32_t
2025-05-01 18:32 [PATCH 00/10] KVM: selftests: Convert to kernel-style types David Matlack
` (4 preceding siblings ...)
2025-05-01 18:32 ` [PATCH 05/10] KVM: selftests: Use s64 instead of int64_t David Matlack
@ 2025-05-01 18:33 ` David Matlack
2025-05-01 18:33 ` [PATCH 07/10] KVM: selftests: Use s32 instead of int32_t David Matlack
` (5 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: David Matlack @ 2025-05-01 18:33 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Anup Patel, Atish Patra, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, David Hildenbrand,
Sean Christopherson, David Matlack, Andrew Jones, Isaku Yamahata,
Reinette Chatre, Eric Auger, James Houghton, Colin Ian King, kvm,
linux-arm-kernel, kvmarm, kvm-riscv, linux-riscv
Use u32 instead of uint32_t to make the KVM selftests code more concise
and more similar to the kernel (since selftests are primarily developed
by kernel developers).
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/uint32_t/u32/g'
Then whitespace was manually adjusted to make checkpatch.pl happy.
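Since the change only renames a typedef, generated code should be
unaffected. A rough way to spot-check that (paths and the choice of the
steal_time binary below are illustrative, not something this series relies
on) is to compare stripped build output from before and after the sed run;
type names only survive in debug info, so the stripped binaries should
compare equal:

make -C tools/testing/selftests/kvm
objcopy --strip-debug tools/testing/selftests/kvm/steal_time /tmp/before
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/uint32_t/u32/g'
make -C tools/testing/selftests/kvm
objcopy --strip-debug tools/testing/selftests/kvm/steal_time /tmp/after
cmp /tmp/before /tmp/after && echo no codegen change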
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
tools/testing/selftests/kvm/arch_timer.c | 6 +-
.../testing/selftests/kvm/arm64/arch_timer.c | 6 +-
.../kvm/arm64/arch_timer_edge_cases.c | 28 ++---
.../selftests/kvm/arm64/debug-exceptions.c | 16 +--
.../testing/selftests/kvm/arm64/hypercalls.c | 6 +-
.../selftests/kvm/arm64/page_fault_test.c | 6 +-
tools/testing/selftests/kvm/arm64/psci_test.c | 2 +-
.../testing/selftests/kvm/arm64/set_id_regs.c | 6 +-
.../selftests/kvm/arm64/smccc_filter.c | 10 +-
tools/testing/selftests/kvm/arm64/vgic_init.c | 30 ++---
tools/testing/selftests/kvm/arm64/vgic_irq.c | 74 +++++------
.../testing/selftests/kvm/coalesced_io_test.c | 22 ++--
.../selftests/kvm/dirty_log_perf_test.c | 2 +-
tools/testing/selftests/kvm/dirty_log_test.c | 36 +++---
.../testing/selftests/kvm/guest_print_test.c | 6 +-
.../selftests/kvm/hardware_disable_test.c | 6 +-
.../selftests/kvm/include/arm64/arch_timer.h | 10 +-
.../testing/selftests/kvm/include/arm64/gic.h | 2 +-
.../selftests/kvm/include/arm64/processor.h | 10 +-
.../selftests/kvm/include/arm64/vgic.h | 14 +--
.../testing/selftests/kvm/include/kvm_util.h | 116 +++++++++---------
.../testing/selftests/kvm/include/memstress.h | 10 +-
.../selftests/kvm/include/riscv/arch_timer.h | 2 +-
.../testing/selftests/kvm/include/test_util.h | 20 +--
.../selftests/kvm/include/timer_test.h | 12 +-
.../testing/selftests/kvm/include/x86/apic.h | 10 +-
.../testing/selftests/kvm/include/x86/evmcs.h | 2 +-
.../selftests/kvm/include/x86/processor.h | 100 +++++++--------
tools/testing/selftests/kvm/include/x86/sev.h | 6 +-
tools/testing/selftests/kvm/include/x86/vmx.h | 10 +-
.../selftests/kvm/kvm_page_table_test.c | 2 +-
tools/testing/selftests/kvm/lib/arm64/gic.c | 2 +-
.../selftests/kvm/lib/arm64/gic_private.h | 20 +--
.../testing/selftests/kvm/lib/arm64/gic_v3.c | 74 +++++------
.../selftests/kvm/lib/arm64/processor.c | 22 ++--
tools/testing/selftests/kvm/lib/arm64/vgic.c | 20 +--
tools/testing/selftests/kvm/lib/guest_modes.c | 2 +-
.../testing/selftests/kvm/lib/guest_sprintf.c | 6 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 80 ++++++------
tools/testing/selftests/kvm/lib/memstress.c | 4 +-
.../selftests/kvm/lib/riscv/processor.c | 6 +-
.../selftests/kvm/lib/s390/processor.c | 2 +-
tools/testing/selftests/kvm/lib/sparsebit.c | 4 +-
tools/testing/selftests/kvm/lib/test_util.c | 14 +--
.../testing/selftests/kvm/lib/x86/processor.c | 22 ++--
tools/testing/selftests/kvm/lib/x86/sev.c | 6 +-
tools/testing/selftests/kvm/lib/x86/vmx.c | 14 +--
.../testing/selftests/kvm/memslot_perf_test.c | 50 ++++----
.../testing/selftests/kvm/riscv/arch_timer.c | 6 +-
tools/testing/selftests/kvm/s390/memop.c | 18 +--
.../selftests/kvm/set_memory_region_test.c | 8 +-
tools/testing/selftests/kvm/steal_time.c | 26 ++--
tools/testing/selftests/kvm/x86/amx_test.c | 4 +-
.../selftests/kvm/x86/apic_bus_clock_test.c | 12 +-
tools/testing/selftests/kvm/x86/debug_regs.c | 2 +-
.../selftests/kvm/x86/feature_msrs_test.c | 8 +-
.../testing/selftests/kvm/x86/hyperv_evmcs.c | 2 +-
.../selftests/kvm/x86/hyperv_features.c | 4 +-
.../selftests/kvm/x86/hyperv_svm_test.c | 2 +-
tools/testing/selftests/kvm/x86/kvm_pv_test.c | 2 +-
.../selftests/kvm/x86/nested_emulation_test.c | 10 +-
.../kvm/x86/nested_exceptions_test.c | 4 +-
.../selftests/kvm/x86/pmu_counters_test.c | 32 ++---
.../selftests/kvm/x86/pmu_event_filter_test.c | 20 +--
.../kvm/x86/private_mem_conversions_test.c | 8 +-
.../kvm/x86/private_mem_kvm_exits_test.c | 8 +-
.../selftests/kvm/x86/set_boot_cpu_id.c | 6 +-
.../selftests/kvm/x86/sev_init2_tests.c | 4 +-
.../selftests/kvm/x86/sev_smoke_test.c | 6 +-
.../selftests/kvm/x86/ucna_injection_test.c | 2 +-
.../kvm/x86/userspace_msr_exit_test.c | 28 ++---
.../selftests/kvm/x86/vmx_apic_access_test.c | 2 +-
.../testing/selftests/kvm/x86/vmx_msrs_test.c | 8 +-
.../kvm/x86/vmx_nested_tsc_scaling_test.c | 2 +-
.../selftests/kvm/x86/vmx_tsc_adjust_test.c | 2 +-
.../selftests/kvm/x86/xapic_ipi_test.c | 16 +--
.../selftests/kvm/x86/xapic_state_test.c | 4 +-
.../selftests/kvm/x86/xen_shinfo_test.c | 6 +-
78 files changed, 598 insertions(+), 600 deletions(-)
diff --git a/tools/testing/selftests/kvm/arch_timer.c b/tools/testing/selftests/kvm/arch_timer.c
index acb2cb596332..6902bbe45654 100644
--- a/tools/testing/selftests/kvm/arch_timer.c
+++ b/tools/testing/selftests/kvm/arch_timer.c
@@ -78,9 +78,9 @@ static void *test_vcpu_run(void *arg)
return NULL;
}
-static uint32_t test_get_pcpu(void)
+static u32 test_get_pcpu(void)
{
- uint32_t pcpu;
+ u32 pcpu;
unsigned int nproc_conf;
cpu_set_t online_cpuset;
@@ -99,7 +99,7 @@ static int test_migrate_vcpu(unsigned int vcpu_idx)
{
int ret;
cpu_set_t cpuset;
- uint32_t new_pcpu = test_get_pcpu();
+ u32 new_pcpu = test_get_pcpu();
CPU_ZERO(&cpuset);
CPU_SET(new_pcpu, &cpuset);
diff --git a/tools/testing/selftests/kvm/arm64/arch_timer.c b/tools/testing/selftests/kvm/arm64/arch_timer.c
index 68757b55ea98..b46a11e94215 100644
--- a/tools/testing/selftests/kvm/arm64/arch_timer.c
+++ b/tools/testing/selftests/kvm/arm64/arch_timer.c
@@ -105,7 +105,7 @@ static void guest_validate_irq(unsigned int intid,
static void guest_irq_handler(struct ex_regs *regs)
{
unsigned int intid = gic_get_and_ack_irq();
- uint32_t cpu = guest_get_vcpuid();
+ u32 cpu = guest_get_vcpuid();
struct test_vcpu_shared_data *shared_data = &vcpu_shared_data[cpu];
guest_validate_irq(intid, shared_data);
@@ -116,7 +116,7 @@ static void guest_irq_handler(struct ex_regs *regs)
static void guest_run_stage(struct test_vcpu_shared_data *shared_data,
enum guest_stage stage)
{
- uint32_t irq_iter, config_iter;
+ u32 irq_iter, config_iter;
shared_data->guest_stage = stage;
shared_data->nr_iter = 0;
@@ -140,7 +140,7 @@ static void guest_run_stage(struct test_vcpu_shared_data *shared_data,
static void guest_code(void)
{
- uint32_t cpu = guest_get_vcpuid();
+ u32 cpu = guest_get_vcpuid();
struct test_vcpu_shared_data *shared_data = &vcpu_shared_data[cpu];
local_irq_disable();
diff --git a/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c b/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
index dffdb303a14e..2d799823a366 100644
--- a/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
+++ b/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
@@ -28,19 +28,19 @@ static const int32_t TVAL_MAX = INT32_MAX;
static const int32_t TVAL_MIN = INT32_MIN;
/* After how much time we say there is no IRQ. */
-static const uint32_t TIMEOUT_NO_IRQ_US = 50000;
+static const u32 TIMEOUT_NO_IRQ_US = 50000;
/* A nice counter value to use as the starting one for most tests. */
static const u64 DEF_CNT = (CVAL_MAX / 2);
/* Number of runs. */
-static const uint32_t NR_TEST_ITERS_DEF = 5;
+static const u32 NR_TEST_ITERS_DEF = 5;
/* Default wait test time in ms. */
-static const uint32_t WAIT_TEST_MS = 10;
+static const u32 WAIT_TEST_MS = 10;
/* Default "long" wait test time in ms. */
-static const uint32_t LONG_WAIT_TEST_MS = 100;
+static const u32 LONG_WAIT_TEST_MS = 100;
/* Shared with IRQ handler. */
struct test_vcpu_shared_data {
@@ -114,7 +114,7 @@ enum timer_view {
TIMER_TVAL,
};
-static void assert_irqs_handled(uint32_t n)
+static void assert_irqs_handled(u32 n)
{
int h = atomic_read(&shared_data.handled);
@@ -146,7 +146,7 @@ static void guest_irq_handler(struct ex_regs *regs)
unsigned int intid = gic_get_and_ack_irq();
enum arch_timer timer;
u64 cnt, cval;
- uint32_t ctl;
+ u32 ctl;
bool timer_condition, istatus;
if (intid == IAR_SPURIOUS) {
@@ -178,7 +178,7 @@ static void guest_irq_handler(struct ex_regs *regs)
}
static void set_cval_irq(enum arch_timer timer, u64 cval_cycles,
- uint32_t ctl)
+ u32 ctl)
{
atomic_set(&shared_data.handled, 0);
atomic_set(&shared_data.spurious, 0);
@@ -187,7 +187,7 @@ static void set_cval_irq(enum arch_timer timer, u64 cval_cycles,
}
static void set_tval_irq(enum arch_timer timer, u64 tval_cycles,
- uint32_t ctl)
+ u32 ctl)
{
atomic_set(&shared_data.handled, 0);
atomic_set(&shared_data.spurious, 0);
@@ -195,7 +195,7 @@ static void set_tval_irq(enum arch_timer timer, u64 tval_cycles,
timer_set_tval(timer, tval_cycles);
}
-static void set_xval_irq(enum arch_timer timer, u64 xval, uint32_t ctl,
+static void set_xval_irq(enum arch_timer timer, u64 xval, u32 ctl,
enum timer_view tv)
{
switch (tv) {
@@ -848,11 +848,11 @@ static void guest_code(enum arch_timer timer)
GUEST_DONE();
}
-static uint32_t next_pcpu(void)
+static u32 next_pcpu(void)
{
- uint32_t max = get_nprocs();
- uint32_t cur = sched_getcpu();
- uint32_t next = cur;
+ u32 max = get_nprocs();
+ u32 cur = sched_getcpu();
+ u32 next = cur;
cpu_set_t cpuset;
TEST_ASSERT(max > 1, "Need at least two physical cpus");
@@ -866,7 +866,7 @@ static uint32_t next_pcpu(void)
return next;
}
-static void migrate_self(uint32_t new_pcpu)
+static void migrate_self(u32 new_pcpu)
{
int ret;
cpu_set_t cpuset;
diff --git a/tools/testing/selftests/kvm/arm64/debug-exceptions.c b/tools/testing/selftests/kvm/arm64/debug-exceptions.c
index b97d3a183246..8576e707b05e 100644
--- a/tools/testing/selftests/kvm/arm64/debug-exceptions.c
+++ b/tools/testing/selftests/kvm/arm64/debug-exceptions.c
@@ -140,7 +140,7 @@ static void enable_os_lock(void)
static void enable_monitor_debug_exceptions(void)
{
- uint32_t mdscr;
+ u32 mdscr;
asm volatile("msr daifclr, #8");
@@ -151,7 +151,7 @@ static void enable_monitor_debug_exceptions(void)
static void install_wp(uint8_t wpn, u64 addr)
{
- uint32_t wcr;
+ u32 wcr;
wcr = DBGWCR_LEN8 | DBGWCR_RD | DBGWCR_WR | DBGWCR_EL1 | DBGWCR_E;
write_dbgwcr(wpn, wcr);
@@ -164,7 +164,7 @@ static void install_wp(uint8_t wpn, u64 addr)
static void install_hw_bp(uint8_t bpn, u64 addr)
{
- uint32_t bcr;
+ u32 bcr;
bcr = DBGBCR_LEN8 | DBGBCR_EXEC | DBGBCR_EL1 | DBGBCR_E;
write_dbgbcr(bpn, bcr);
@@ -177,7 +177,7 @@ static void install_hw_bp(uint8_t bpn, u64 addr)
static void install_wp_ctx(uint8_t addr_wp, uint8_t ctx_bp, u64 addr,
u64 ctx)
{
- uint32_t wcr;
+ u32 wcr;
u64 ctx_bcr;
/* Setup a context-aware breakpoint for Linked Context ID Match */
@@ -188,7 +188,7 @@ static void install_wp_ctx(uint8_t addr_wp, uint8_t ctx_bp, u64 addr,
/* Setup a linked watchpoint (linked to the context-aware breakpoint) */
wcr = DBGWCR_LEN8 | DBGWCR_RD | DBGWCR_WR | DBGWCR_EL1 | DBGWCR_E |
- DBGWCR_WT_LINK | ((uint32_t)ctx_bp << DBGWCR_LBN_SHIFT);
+ DBGWCR_WT_LINK | ((u32)ctx_bp << DBGWCR_LBN_SHIFT);
write_dbgwcr(addr_wp, wcr);
write_dbgwvr(addr_wp, addr);
isb();
@@ -199,7 +199,7 @@ static void install_wp_ctx(uint8_t addr_wp, uint8_t ctx_bp, u64 addr,
void install_hw_bp_ctx(uint8_t addr_bp, uint8_t ctx_bp, u64 addr,
u64 ctx)
{
- uint32_t addr_bcr, ctx_bcr;
+ u32 addr_bcr, ctx_bcr;
/* Setup a context-aware breakpoint for Linked Context ID Match */
ctx_bcr = DBGBCR_LEN8 | DBGBCR_EXEC | DBGBCR_EL1 | DBGBCR_E |
@@ -213,7 +213,7 @@ void install_hw_bp_ctx(uint8_t addr_bp, uint8_t ctx_bp, u64 addr,
*/
addr_bcr = DBGBCR_LEN8 | DBGBCR_EXEC | DBGBCR_EL1 | DBGBCR_E |
DBGBCR_BT_ADDR_LINK_CTX |
- ((uint32_t)ctx_bp << DBGBCR_LBN_SHIFT);
+ ((u32)ctx_bp << DBGBCR_LBN_SHIFT);
write_dbgbcr(addr_bp, addr_bcr);
write_dbgbvr(addr_bp, addr);
isb();
@@ -223,7 +223,7 @@ void install_hw_bp_ctx(uint8_t addr_bp, uint8_t ctx_bp, u64 addr,
static void install_ss(void)
{
- uint32_t mdscr;
+ u32 mdscr;
asm volatile("msr daifclr, #8");
diff --git a/tools/testing/selftests/kvm/arm64/hypercalls.c b/tools/testing/selftests/kvm/arm64/hypercalls.c
index 53d9d86c06a4..acaea3fad08f 100644
--- a/tools/testing/selftests/kvm/arm64/hypercalls.c
+++ b/tools/testing/selftests/kvm/arm64/hypercalls.c
@@ -59,7 +59,7 @@ enum test_stage {
static int stage = TEST_STAGE_REG_IFACE;
struct test_hvc_info {
- uint32_t func_id;
+ u32 func_id;
u64 arg1;
};
@@ -152,8 +152,8 @@ static void guest_code(void)
}
struct st_time {
- uint32_t rev;
- uint32_t attr;
+ u32 rev;
+ u32 attr;
u64 st_time;
};
diff --git a/tools/testing/selftests/kvm/arm64/page_fault_test.c b/tools/testing/selftests/kvm/arm64/page_fault_test.c
index 1c04e0f28953..235582206aee 100644
--- a/tools/testing/selftests/kvm/arm64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/arm64/page_fault_test.c
@@ -59,8 +59,8 @@ struct test_desc {
void (*iabt_handler)(struct ex_regs *regs);
void (*mmio_handler)(struct kvm_vm *vm, struct kvm_run *run);
void (*fail_vcpu_run_handler)(int ret);
- uint32_t pt_memslot_flags;
- uint32_t data_memslot_flags;
+ u32 pt_memslot_flags;
+ u32 data_memslot_flags;
bool skip;
struct event_cnt expected_events;
};
@@ -510,7 +510,7 @@ void fail_vcpu_run_mmio_no_syndrome_handler(int ret)
events.fail_vcpu_runs += 1;
}
-typedef uint32_t aarch64_insn_t;
+typedef u32 aarch64_insn_t;
extern aarch64_insn_t __exec_test[2];
noinline void __return_0x77(void)
diff --git a/tools/testing/selftests/kvm/arm64/psci_test.c b/tools/testing/selftests/kvm/arm64/psci_test.c
index 27aa19a35256..ebc00538e7de 100644
--- a/tools/testing/selftests/kvm/arm64/psci_test.c
+++ b/tools/testing/selftests/kvm/arm64/psci_test.c
@@ -61,7 +61,7 @@ static u64 psci_system_off2(u64 type, u64 cookie)
return res.a0;
}
-static u64 psci_features(uint32_t func_id)
+static u64 psci_features(u32 func_id)
{
struct arm_smccc_res res;
diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index 502b8e605048..77c197ef4f4a 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -40,7 +40,7 @@ struct reg_ftr_bits {
};
struct test_feature_reg {
- uint32_t reg;
+ u32 reg;
const struct reg_ftr_bits *ftr_bits;
};
@@ -420,7 +420,7 @@ static void test_vm_ftr_id_regs(struct kvm_vcpu *vcpu, bool aarch64_only)
for (int i = 0; i < ARRAY_SIZE(test_regs); i++) {
const struct reg_ftr_bits *ftr_bits = test_regs[i].ftr_bits;
- uint32_t reg_id = test_regs[i].reg;
+ u32 reg_id = test_regs[i].reg;
u64 reg = KVM_ARM64_SYS_REG(reg_id);
int idx;
@@ -643,7 +643,7 @@ static void test_vcpu_non_ftr_id_regs(struct kvm_vcpu *vcpu)
ksft_test_result_pass("%s\n", __func__);
}
-static void test_assert_id_reg_unchanged(struct kvm_vcpu *vcpu, uint32_t encoding)
+static void test_assert_id_reg_unchanged(struct kvm_vcpu *vcpu, u32 encoding)
{
size_t idx = encoding_to_range_idx(encoding);
u64 observed;
diff --git a/tools/testing/selftests/kvm/arm64/smccc_filter.c b/tools/testing/selftests/kvm/arm64/smccc_filter.c
index 2d189f3da228..f3baf99380b3 100644
--- a/tools/testing/selftests/kvm/arm64/smccc_filter.c
+++ b/tools/testing/selftests/kvm/arm64/smccc_filter.c
@@ -25,7 +25,7 @@ enum smccc_conduit {
#define for_each_conduit(conduit) \
for (conduit = HVC_INSN; conduit <= SMC_INSN; conduit++)
-static void guest_main(uint32_t func_id, enum smccc_conduit conduit)
+static void guest_main(u32 func_id, enum smccc_conduit conduit)
{
struct arm_smccc_res res;
@@ -37,7 +37,7 @@ static void guest_main(uint32_t func_id, enum smccc_conduit conduit)
GUEST_SYNC(res.a0);
}
-static int __set_smccc_filter(struct kvm_vm *vm, uint32_t start, uint32_t nr_functions,
+static int __set_smccc_filter(struct kvm_vm *vm, u32 start, u32 nr_functions,
enum kvm_smccc_filter_action action)
{
struct kvm_smccc_filter filter = {
@@ -50,7 +50,7 @@ static int __set_smccc_filter(struct kvm_vm *vm, uint32_t start, uint32_t nr_fun
KVM_ARM_VM_SMCCC_FILTER, &filter);
}
-static void set_smccc_filter(struct kvm_vm *vm, uint32_t start, uint32_t nr_functions,
+static void set_smccc_filter(struct kvm_vm *vm, u32 start, u32 nr_functions,
enum kvm_smccc_filter_action action)
{
int ret = __set_smccc_filter(vm, start, nr_functions, action);
@@ -99,7 +99,7 @@ static void test_filter_reserved_range(void)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm = setup_vm(&vcpu);
- uint32_t smc64_fn;
+ u32 smc64_fn;
int r;
r = __set_smccc_filter(vm, ARM_SMCCC_ARCH_WORKAROUND_1,
@@ -204,7 +204,7 @@ static void test_filter_denied(void)
}
}
-static void expect_call_fwd_to_user(struct kvm_vcpu *vcpu, uint32_t func_id,
+static void expect_call_fwd_to_user(struct kvm_vcpu *vcpu, u32 func_id,
enum smccc_conduit conduit)
{
struct kvm_run *run = vcpu->run;
diff --git a/tools/testing/selftests/kvm/arm64/vgic_init.c b/tools/testing/selftests/kvm/arm64/vgic_init.c
index 8f13d4979dc5..9026bf3cdfb5 100644
--- a/tools/testing/selftests/kvm/arm64/vgic_init.c
+++ b/tools/testing/selftests/kvm/arm64/vgic_init.c
@@ -26,7 +26,7 @@
struct vm_gic {
struct kvm_vm *vm;
int gic_fd;
- uint32_t gic_dev_type;
+ u32 gic_dev_type;
};
static u64 max_phys_size;
@@ -38,17 +38,17 @@ static u64 max_phys_size;
static void v3_redist_reg_get_errno(int gicv3_fd, int vcpu, int offset,
int want, const char *msg)
{
- uint32_t ignored_val;
+ u32 ignored_val;
int ret = __kvm_device_attr_get(gicv3_fd, KVM_DEV_ARM_VGIC_GRP_REDIST_REGS,
REG_OFFSET(vcpu, offset), &ignored_val);
TEST_ASSERT(ret && errno == want, "%s; want errno = %d", msg, want);
}
-static void v3_redist_reg_get(int gicv3_fd, int vcpu, int offset, uint32_t want,
+static void v3_redist_reg_get(int gicv3_fd, int vcpu, int offset, u32 want,
const char *msg)
{
- uint32_t val;
+ u32 val;
kvm_device_attr_get(gicv3_fd, KVM_DEV_ARM_VGIC_GRP_REDIST_REGS,
REG_OFFSET(vcpu, offset), &val);
@@ -70,8 +70,8 @@ static int run_vcpu(struct kvm_vcpu *vcpu)
return __vcpu_run(vcpu) ? -errno : 0;
}
-static struct vm_gic vm_gic_create_with_vcpus(uint32_t gic_dev_type,
- uint32_t nr_vcpus,
+static struct vm_gic vm_gic_create_with_vcpus(u32 gic_dev_type,
+ u32 nr_vcpus,
struct kvm_vcpu *vcpus[])
{
struct vm_gic v;
@@ -83,7 +83,7 @@ static struct vm_gic vm_gic_create_with_vcpus(uint32_t gic_dev_type,
return v;
}
-static struct vm_gic vm_gic_create_barebones(uint32_t gic_dev_type)
+static struct vm_gic vm_gic_create_barebones(u32 gic_dev_type)
{
struct vm_gic v;
@@ -331,7 +331,7 @@ static void subtest_v3_redist_regions(struct vm_gic *v)
* VGIC KVM device is created and initialized before the secondary CPUs
* get created
*/
-static void test_vgic_then_vcpus(uint32_t gic_dev_type)
+static void test_vgic_then_vcpus(u32 gic_dev_type)
{
struct kvm_vcpu *vcpus[NR_VCPUS];
struct vm_gic v;
@@ -352,7 +352,7 @@ static void test_vgic_then_vcpus(uint32_t gic_dev_type)
}
/* All the VCPUs are created before the VGIC KVM device gets initialized */
-static void test_vcpus_then_vgic(uint32_t gic_dev_type)
+static void test_vcpus_then_vgic(u32 gic_dev_type)
{
struct kvm_vcpu *vcpus[NR_VCPUS];
struct vm_gic v;
@@ -517,7 +517,7 @@ static void test_v3_typer_accesses(void)
}
static struct vm_gic vm_gic_v3_create_with_vcpuids(int nr_vcpus,
- uint32_t vcpuids[])
+ u32 vcpuids[])
{
struct vm_gic v;
int i;
@@ -543,7 +543,7 @@ static struct vm_gic vm_gic_v3_create_with_vcpuids(int nr_vcpus,
*/
static void test_v3_last_bit_redist_regions(void)
{
- uint32_t vcpuids[] = { 0, 3, 5, 4, 1, 2 };
+ u32 vcpuids[] = { 0, 3, 5, 4, 1, 2 };
struct vm_gic v;
u64 addr;
@@ -577,7 +577,7 @@ static void test_v3_last_bit_redist_regions(void)
/* Test last bit with legacy region */
static void test_v3_last_bit_single_rdist(void)
{
- uint32_t vcpuids[] = { 0, 3, 5, 4, 1, 2 };
+ u32 vcpuids[] = { 0, 3, 5, 4, 1, 2 };
struct vm_gic v;
u64 addr;
@@ -678,11 +678,11 @@ static void test_v3_its_region(void)
/*
* Returns 0 if it's possible to create GIC device of a given type (V2 or V3).
*/
-int test_kvm_device(uint32_t gic_dev_type)
+int test_kvm_device(u32 gic_dev_type)
{
struct kvm_vcpu *vcpus[NR_VCPUS];
struct vm_gic v;
- uint32_t other;
+ u32 other;
int ret;
v.vm = vm_create_with_vcpus(NR_VCPUS, guest_code, vcpus);
@@ -715,7 +715,7 @@ int test_kvm_device(uint32_t gic_dev_type)
return 0;
}
-void run_tests(uint32_t gic_dev_type)
+void run_tests(u32 gic_dev_type)
{
test_vcpus_then_vgic(gic_dev_type);
test_vgic_then_vcpus(gic_dev_type);
diff --git a/tools/testing/selftests/kvm/arm64/vgic_irq.c b/tools/testing/selftests/kvm/arm64/vgic_irq.c
index e6f91bb293a6..4aa290a59037 100644
--- a/tools/testing/selftests/kvm/arm64/vgic_irq.c
+++ b/tools/testing/selftests/kvm/arm64/vgic_irq.c
@@ -24,7 +24,7 @@
* function.
*/
struct test_args {
- uint32_t nr_irqs; /* number of KVM supported IRQs. */
+ u32 nr_irqs; /* number of KVM supported IRQs. */
bool eoi_split; /* 1 is eoir+dir, 0 is eoir only */
bool level_sensitive; /* 1 is level, 0 is edge */
int kvm_max_routes; /* output of KVM_CAP_IRQ_ROUTING */
@@ -63,15 +63,15 @@ typedef enum {
struct kvm_inject_args {
kvm_inject_cmd cmd;
- uint32_t first_intid;
- uint32_t num;
+ u32 first_intid;
+ u32 num;
int level;
bool expect_failure;
};
/* Used on the guest side to perform the hypercall. */
-static void kvm_inject_call(kvm_inject_cmd cmd, uint32_t first_intid,
- uint32_t num, int level, bool expect_failure);
+static void kvm_inject_call(kvm_inject_cmd cmd, u32 first_intid,
+ u32 num, int level, bool expect_failure);
/* Used on the host side to get the hypercall info. */
static void kvm_inject_get_call(struct kvm_vm *vm, struct ucall *uc,
@@ -133,7 +133,7 @@ static struct kvm_inject_desc set_active_fns[] = {
/* Shared between the guest main thread and the IRQ handlers. */
volatile u64 irq_handled;
-volatile uint32_t irqnr_received[MAX_SPI + 1];
+volatile u32 irqnr_received[MAX_SPI + 1];
static void reset_stats(void)
{
@@ -158,11 +158,11 @@ static void gic_write_ap1r0(u64 val)
isb();
}
-static void guest_set_irq_line(uint32_t intid, uint32_t level);
+static void guest_set_irq_line(u32 intid, u32 level);
static void guest_irq_generic_handler(bool eoi_split, bool level_sensitive)
{
- uint32_t intid = gic_get_and_ack_irq();
+ u32 intid = gic_get_and_ack_irq();
if (intid == IAR_SPURIOUS)
return;
@@ -188,8 +188,8 @@ static void guest_irq_generic_handler(bool eoi_split, bool level_sensitive)
GUEST_ASSERT(!gic_irq_get_pending(intid));
}
-static void kvm_inject_call(kvm_inject_cmd cmd, uint32_t first_intid,
- uint32_t num, int level, bool expect_failure)
+static void kvm_inject_call(kvm_inject_cmd cmd, u32 first_intid,
+ u32 num, int level, bool expect_failure)
{
struct kvm_inject_args args = {
.cmd = cmd,
@@ -203,7 +203,7 @@ static void kvm_inject_call(kvm_inject_cmd cmd, uint32_t first_intid,
#define GUEST_ASSERT_IAR_EMPTY() \
do { \
- uint32_t _intid; \
+ u32 _intid; \
_intid = gic_get_and_ack_irq(); \
GUEST_ASSERT(_intid == 0 || _intid == IAR_SPURIOUS); \
} while (0)
@@ -236,13 +236,13 @@ static void reset_priorities(struct test_args *args)
gic_set_priority(i, IRQ_DEFAULT_PRIO_REG);
}
-static void guest_set_irq_line(uint32_t intid, uint32_t level)
+static void guest_set_irq_line(u32 intid, u32 level)
{
kvm_inject_call(KVM_SET_IRQ_LINE, intid, 1, level, false);
}
static void test_inject_fail(struct test_args *args,
- uint32_t intid, kvm_inject_cmd cmd)
+ u32 intid, kvm_inject_cmd cmd)
{
reset_stats();
@@ -254,10 +254,10 @@ static void test_inject_fail(struct test_args *args,
}
static void guest_inject(struct test_args *args,
- uint32_t first_intid, uint32_t num,
+ u32 first_intid, u32 num,
kvm_inject_cmd cmd)
{
- uint32_t i;
+ u32 i;
reset_stats();
@@ -291,10 +291,10 @@ static void guest_inject(struct test_args *args,
* deactivated yet.
*/
static void guest_restore_active(struct test_args *args,
- uint32_t first_intid, uint32_t num,
+ u32 first_intid, u32 num,
kvm_inject_cmd cmd)
{
- uint32_t prio, intid, ap1r;
+ u32 prio, intid, ap1r;
int i;
/*
@@ -341,9 +341,9 @@ static void guest_restore_active(struct test_args *args,
* This function should only be used in test_inject_preemption (with IRQs
* masked).
*/
-static uint32_t wait_for_and_activate_irq(void)
+static u32 wait_for_and_activate_irq(void)
{
- uint32_t intid;
+ u32 intid;
do {
asm volatile("wfi" : : : "memory");
@@ -359,10 +359,10 @@ static uint32_t wait_for_and_activate_irq(void)
* interrupts for the whole test.
*/
static void test_inject_preemption(struct test_args *args,
- uint32_t first_intid, int num,
+ u32 first_intid, int num,
kvm_inject_cmd cmd)
{
- uint32_t intid, prio, step = KVM_PRIO_STEPS;
+ u32 intid, prio, step = KVM_PRIO_STEPS;
int i;
/* Set the priorities of the first (KVM_NUM_PRIOS - 1) IRQs
@@ -377,7 +377,7 @@ static void test_inject_preemption(struct test_args *args,
local_irq_disable();
for (i = 0; i < num; i++) {
- uint32_t tmp;
+ u32 tmp;
intid = i + first_intid;
KVM_INJECT(cmd, intid);
/* Each successive IRQ will preempt the previous one. */
@@ -407,7 +407,7 @@ static void test_inject_preemption(struct test_args *args,
static void test_injection(struct test_args *args, struct kvm_inject_desc *f)
{
- uint32_t nr_irqs = args->nr_irqs;
+ u32 nr_irqs = args->nr_irqs;
if (f->sgi) {
guest_inject(args, MIN_SGI, 1, f->cmd);
@@ -427,7 +427,7 @@ static void test_injection(struct test_args *args, struct kvm_inject_desc *f)
static void test_injection_failure(struct test_args *args,
struct kvm_inject_desc *f)
{
- uint32_t bad_intid[] = { args->nr_irqs, 1020, 1024, 1120, 5120, ~0U, };
+ u32 bad_intid[] = { args->nr_irqs, 1020, 1024, 1120, 5120, ~0U, };
int i;
for (i = 0; i < ARRAY_SIZE(bad_intid); i++)
@@ -467,7 +467,7 @@ static void test_restore_active(struct test_args *args, struct kvm_inject_desc *
static void guest_code(struct test_args *args)
{
- uint32_t i, nr_irqs = args->nr_irqs;
+ u32 i, nr_irqs = args->nr_irqs;
bool level_sensitive = args->level_sensitive;
struct kvm_inject_desc *f, *inject_fns;
@@ -506,8 +506,8 @@ static void guest_code(struct test_args *args)
GUEST_DONE();
}
-static void kvm_irq_line_check(struct kvm_vm *vm, uint32_t intid, int level,
- struct test_args *test_args, bool expect_failure)
+static void kvm_irq_line_check(struct kvm_vm *vm, u32 intid, int level,
+ struct test_args *test_args, bool expect_failure)
{
int ret;
@@ -525,8 +525,8 @@ static void kvm_irq_line_check(struct kvm_vm *vm, uint32_t intid, int level,
}
}
-void kvm_irq_set_level_info_check(int gic_fd, uint32_t intid, int level,
- bool expect_failure)
+void kvm_irq_set_level_info_check(int gic_fd, u32 intid, int level,
+ bool expect_failure)
{
if (!expect_failure) {
kvm_irq_set_level_info(gic_fd, intid, level);
@@ -550,7 +550,7 @@ void kvm_irq_set_level_info_check(int gic_fd, uint32_t intid, int level,
}
static void kvm_set_gsi_routing_irqchip_check(struct kvm_vm *vm,
- uint32_t intid, uint32_t num, uint32_t kvm_max_routes,
+ u32 intid, u32 num, u32 kvm_max_routes,
bool expect_failure)
{
struct kvm_irq_routing *routing;
@@ -579,7 +579,7 @@ static void kvm_set_gsi_routing_irqchip_check(struct kvm_vm *vm,
}
}
-static void kvm_irq_write_ispendr_check(int gic_fd, uint32_t intid,
+static void kvm_irq_write_ispendr_check(int gic_fd, u32 intid,
struct kvm_vcpu *vcpu,
bool expect_failure)
{
@@ -595,7 +595,7 @@ static void kvm_irq_write_ispendr_check(int gic_fd, uint32_t intid,
}
static void kvm_routing_and_irqfd_check(struct kvm_vm *vm,
- uint32_t intid, uint32_t num, uint32_t kvm_max_routes,
+ u32 intid, u32 num, u32 kvm_max_routes,
bool expect_failure)
{
int fd[MAX_SPI];
@@ -656,13 +656,13 @@ static void run_guest_cmd(struct kvm_vcpu *vcpu, int gic_fd,
struct test_args *test_args)
{
kvm_inject_cmd cmd = inject_args->cmd;
- uint32_t intid = inject_args->first_intid;
- uint32_t num = inject_args->num;
+ u32 intid = inject_args->first_intid;
+ u32 num = inject_args->num;
int level = inject_args->level;
bool expect_failure = inject_args->expect_failure;
struct kvm_vm *vm = vcpu->vm;
u64 tmp;
- uint32_t i;
+ u32 i;
/* handles the valid case: intid=0xffffffff num=1 */
assert(intid < UINT_MAX - num || num == 1);
@@ -728,7 +728,7 @@ static void print_args(struct test_args *args)
args->eoi_split);
}
-static void test_vgic(uint32_t nr_irqs, bool level_sensitive, bool eoi_split)
+static void test_vgic(u32 nr_irqs, bool level_sensitive, bool eoi_split)
{
struct ucall uc;
int gic_fd;
@@ -802,7 +802,7 @@ static void help(const char *name)
int main(int argc, char **argv)
{
- uint32_t nr_irqs = 64;
+ u32 nr_irqs = 64;
bool default_args = true;
bool level_sensitive = false;
int opt;
diff --git a/tools/testing/selftests/kvm/coalesced_io_test.c b/tools/testing/selftests/kvm/coalesced_io_test.c
index ed6a66020b1e..f5ab412d2042 100644
--- a/tools/testing/selftests/kvm/coalesced_io_test.c
+++ b/tools/testing/selftests/kvm/coalesced_io_test.c
@@ -14,7 +14,7 @@
struct kvm_coalesced_io {
struct kvm_coalesced_mmio_ring *ring;
- uint32_t ring_size;
+ u32 ring_size;
u64 mmio_gpa;
u64 *mmio;
@@ -70,13 +70,13 @@ static void guest_code(struct kvm_coalesced_io *io)
static void vcpu_run_and_verify_io_exit(struct kvm_vcpu *vcpu,
struct kvm_coalesced_io *io,
- uint32_t ring_start,
- uint32_t expected_exit)
+ u32 ring_start,
+ u32 expected_exit)
{
const bool want_pio = expected_exit == KVM_EXIT_IO;
struct kvm_coalesced_mmio_ring *ring = io->ring;
struct kvm_run *run = vcpu->run;
- uint32_t pio_value;
+ u32 pio_value;
WRITE_ONCE(ring->first, ring_start);
WRITE_ONCE(ring->last, ring_start);
@@ -88,7 +88,7 @@ static void vcpu_run_and_verify_io_exit(struct kvm_vcpu *vcpu,
* data_offset is garbage, e.g. an MMIO gpa.
*/
if (run->exit_reason == KVM_EXIT_IO)
- pio_value = *(uint32_t *)((void *)run + run->io.data_offset);
+ pio_value = *(u32 *)((void *)run + run->io.data_offset);
else
pio_value = 0;
@@ -111,8 +111,8 @@ static void vcpu_run_and_verify_io_exit(struct kvm_vcpu *vcpu,
static void vcpu_run_and_verify_coalesced_io(struct kvm_vcpu *vcpu,
struct kvm_coalesced_io *io,
- uint32_t ring_start,
- uint32_t expected_exit)
+ u32 ring_start,
+ u32 expected_exit)
{
struct kvm_coalesced_mmio_ring *ring = io->ring;
int i;
@@ -124,18 +124,18 @@ static void vcpu_run_and_verify_coalesced_io(struct kvm_vcpu *vcpu,
ring->first, ring->last, io->ring_size, ring_start);
for (i = 0; i < io->ring_size - 1; i++) {
- uint32_t idx = (ring->first + i) % io->ring_size;
+ u32 idx = (ring->first + i) % io->ring_size;
struct kvm_coalesced_mmio *entry = &ring->coalesced_mmio[idx];
#ifdef __x86_64__
if (i & 1)
TEST_ASSERT(entry->phys_addr == io->pio_port &&
entry->len == 4 && entry->pio &&
- *(uint32_t *)entry->data == io->pio_port + i,
+ *(u32 *)entry->data == io->pio_port + i,
"Wanted 4-byte port I/O 0x%x = 0x%x in entry %u, got %u-byte %s 0x%llx = 0x%x",
io->pio_port, io->pio_port + i, i,
entry->len, entry->pio ? "PIO" : "MMIO",
- entry->phys_addr, *(uint32_t *)entry->data);
+ entry->phys_addr, *(u32 *)entry->data);
else
#endif
TEST_ASSERT(entry->phys_addr == io->mmio_gpa &&
@@ -148,7 +148,7 @@ static void vcpu_run_and_verify_coalesced_io(struct kvm_vcpu *vcpu,
}
static void test_coalesced_io(struct kvm_vcpu *vcpu,
- struct kvm_coalesced_io *io, uint32_t ring_start)
+ struct kvm_coalesced_io *io, u32 ring_start)
{
struct kvm_coalesced_mmio_ring *ring = io->ring;
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 49b85b3be8d2..faa31fe9f468 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -129,7 +129,7 @@ struct test_params {
bool partition_vcpu_memory_access;
enum vm_mem_backing_src_type backing_src;
int slots;
- uint32_t write_percent;
+ u32 write_percent;
bool random_access;
};
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 0bc76b9439a2..a33b163ca1c9 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -236,7 +236,7 @@ static enum log_mode_t host_log_mode_option = LOG_MODE_ALL;
/* Logging mode for current run */
static enum log_mode_t host_log_mode;
static pthread_t vcpu_thread;
-static uint32_t test_dirty_ring_count = TEST_DIRTY_RING_COUNT;
+static u32 test_dirty_ring_count = TEST_DIRTY_RING_COUNT;
static bool clear_log_supported(void)
{
@@ -255,15 +255,15 @@ static void clear_log_create_vm_done(struct kvm_vm *vm)
}
static void dirty_log_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
- void *bitmap, uint32_t num_pages,
- uint32_t *unused)
+ void *bitmap, u32 num_pages,
+ u32 *unused)
{
kvm_vm_get_dirty_log(vcpu->vm, slot, bitmap);
}
static void clear_log_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
- void *bitmap, uint32_t num_pages,
- uint32_t *unused)
+ void *bitmap, u32 num_pages,
+ u32 *unused)
{
kvm_vm_get_dirty_log(vcpu->vm, slot, bitmap);
kvm_vm_clear_dirty_log(vcpu->vm, slot, bitmap, 0, num_pages);
@@ -298,7 +298,7 @@ static bool dirty_ring_supported(void)
static void dirty_ring_create_vm_done(struct kvm_vm *vm)
{
u64 pages;
- uint32_t limit;
+ u32 limit;
/*
* We rely on vcpu exit due to full dirty ring state. Adjust
@@ -333,12 +333,12 @@ static inline void dirty_gfn_set_collected(struct kvm_dirty_gfn *gfn)
smp_store_release(&gfn->flags, KVM_DIRTY_GFN_F_RESET);
}
-static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns,
- int slot, void *bitmap,
- uint32_t num_pages, uint32_t *fetch_index)
+static u32 dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns,
+ int slot, void *bitmap,
+ u32 num_pages, u32 *fetch_index)
{
struct kvm_dirty_gfn *cur;
- uint32_t count = 0;
+ u32 count = 0;
while (true) {
cur = &dirty_gfns[*fetch_index % test_dirty_ring_count];
@@ -359,10 +359,10 @@ static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns,
}
static void dirty_ring_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
- void *bitmap, uint32_t num_pages,
- uint32_t *ring_buf_idx)
+ void *bitmap, u32 num_pages,
+ u32 *ring_buf_idx)
{
- uint32_t count, cleared;
+ u32 count, cleared;
/* Only have one vcpu */
count = dirty_ring_collect_one(vcpu_map_dirty_ring(vcpu),
@@ -404,8 +404,8 @@ struct log_mode {
void (*create_vm_done)(struct kvm_vm *vm);
/* Hook to collect the dirty pages into the bitmap provided */
void (*collect_dirty_pages) (struct kvm_vcpu *vcpu, int slot,
- void *bitmap, uint32_t num_pages,
- uint32_t *ring_buf_idx);
+ void *bitmap, u32 num_pages,
+ u32 *ring_buf_idx);
/* Hook to call when after each vcpu run */
void (*after_vcpu_run)(struct kvm_vcpu *vcpu);
} log_modes[LOG_MODE_NUM] = {
@@ -459,8 +459,8 @@ static void log_mode_create_vm_done(struct kvm_vm *vm)
}
static void log_mode_collect_dirty_pages(struct kvm_vcpu *vcpu, int slot,
- void *bitmap, uint32_t num_pages,
- uint32_t *ring_buf_idx)
+ void *bitmap, u32 num_pages,
+ u32 *ring_buf_idx)
{
struct log_mode *mode = &log_modes[host_log_mode];
@@ -600,7 +600,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
unsigned long *bmap[2];
- uint32_t ring_buf_idx = 0;
+ u32 ring_buf_idx = 0;
int sem_val;
if (!log_mode_supported()) {
diff --git a/tools/testing/selftests/kvm/guest_print_test.c b/tools/testing/selftests/kvm/guest_print_test.c
index b059abcf1a5b..79d3fc326e91 100644
--- a/tools/testing/selftests/kvm/guest_print_test.c
+++ b/tools/testing/selftests/kvm/guest_print_test.c
@@ -29,9 +29,9 @@ TYPE(test_type_i64, I64, "%ld", s64) \
TYPE(test_type_u64, U64u, "%lu", u64) \
TYPE(test_type_x64, U64x, "0x%lx", u64) \
TYPE(test_type_X64, U64X, "0x%lX", u64) \
-TYPE(test_type_u32, U32u, "%u", uint32_t) \
-TYPE(test_type_x32, U32x, "0x%x", uint32_t) \
-TYPE(test_type_X32, U32X, "0x%X", uint32_t) \
+TYPE(test_type_u32, U32u, "%u", u32) \
+TYPE(test_type_x32, U32x, "0x%x", u32) \
+TYPE(test_type_X32, U32X, "0x%X", u32) \
TYPE(test_type_int, INT, "%d", int) \
TYPE(test_type_char, CHAR, "%c", char) \
TYPE(test_type_str, STR, "'%s'", const char *) \
diff --git a/tools/testing/selftests/kvm/hardware_disable_test.c b/tools/testing/selftests/kvm/hardware_disable_test.c
index 94bd6ed24cf3..3147f5c97e94 100644
--- a/tools/testing/selftests/kvm/hardware_disable_test.c
+++ b/tools/testing/selftests/kvm/hardware_disable_test.c
@@ -80,7 +80,7 @@ static inline void check_join(pthread_t thread, void **retval)
TEST_ASSERT(r == 0, "%s: failed to join thread", __func__);
}
-static void run_test(uint32_t run)
+static void run_test(u32 run)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
@@ -88,7 +88,7 @@ static void run_test(uint32_t run)
pthread_t threads[VCPU_NUM];
pthread_t throw_away;
void *b;
- uint32_t i, j;
+ u32 i, j;
CPU_ZERO(&cpu_set);
for (i = 0; i < VCPU_NUM; i++)
@@ -149,7 +149,7 @@ void wait_for_child_setup(pid_t pid)
int main(int argc, char **argv)
{
- uint32_t i;
+ u32 i;
int s, r;
pid_t pid;
diff --git a/tools/testing/selftests/kvm/include/arm64/arch_timer.h b/tools/testing/selftests/kvm/include/arm64/arch_timer.h
index cdb34e8a4416..600ee9163604 100644
--- a/tools/testing/selftests/kvm/include/arm64/arch_timer.h
+++ b/tools/testing/selftests/kvm/include/arm64/arch_timer.h
@@ -26,7 +26,7 @@ enum arch_timer {
#define cycles_to_usec(cycles) \
((u64)(cycles) * 1000000 / timer_get_cntfrq())
-static inline uint32_t timer_get_cntfrq(void)
+static inline u32 timer_get_cntfrq(void)
{
return read_sysreg(cntfrq_el0);
}
@@ -111,7 +111,7 @@ static inline int32_t timer_get_tval(enum arch_timer timer)
return 0;
}
-static inline void timer_set_ctl(enum arch_timer timer, uint32_t ctl)
+static inline void timer_set_ctl(enum arch_timer timer, u32 ctl)
{
switch (timer) {
case VIRTUAL:
@@ -127,7 +127,7 @@ static inline void timer_set_ctl(enum arch_timer timer, uint32_t ctl)
isb();
}
-static inline uint32_t timer_get_ctl(enum arch_timer timer)
+static inline u32 timer_get_ctl(enum arch_timer timer)
{
switch (timer) {
case VIRTUAL:
@@ -142,7 +142,7 @@ static inline uint32_t timer_get_ctl(enum arch_timer timer)
return 0;
}
-static inline void timer_set_next_cval_ms(enum arch_timer timer, uint32_t msec)
+static inline void timer_set_next_cval_ms(enum arch_timer timer, u32 msec)
{
u64 now_ct = timer_get_cntct(timer);
u64 next_ct = now_ct + msec_to_cycles(msec);
@@ -150,7 +150,7 @@ static inline void timer_set_next_cval_ms(enum arch_timer timer, uint32_t msec)
timer_set_cval(timer, next_ct);
}
-static inline void timer_set_next_tval_ms(enum arch_timer timer, uint32_t msec)
+static inline void timer_set_next_tval_ms(enum arch_timer timer, u32 msec)
{
timer_set_tval(timer, msec_to_cycles(msec));
}
diff --git a/tools/testing/selftests/kvm/include/arm64/gic.h b/tools/testing/selftests/kvm/include/arm64/gic.h
index 8231cad8554e..0fb5ef183ddc 100644
--- a/tools/testing/selftests/kvm/include/arm64/gic.h
+++ b/tools/testing/selftests/kvm/include/arm64/gic.h
@@ -49,7 +49,7 @@ void gic_set_dir(unsigned int intid);
*/
void gic_set_eoi_split(bool split);
void gic_set_priority_mask(u64 mask);
-void gic_set_priority(uint32_t intid, uint32_t prio);
+void gic_set_priority(u32 intid, u32 prio);
void gic_irq_set_active(unsigned int intid);
void gic_irq_clear_active(unsigned int intid);
bool gic_irq_get_active(unsigned int intid);
diff --git a/tools/testing/selftests/kvm/include/arm64/processor.h b/tools/testing/selftests/kvm/include/arm64/processor.h
index 4d8144a0e025..552e0e3bc7c8 100644
--- a/tools/testing/selftests/kvm/include/arm64/processor.h
+++ b/tools/testing/selftests/kvm/include/arm64/processor.h
@@ -124,7 +124,7 @@
#define PTE_ADDR_51_50_LPA2_SHIFT 8
void aarch64_vcpu_setup(struct kvm_vcpu *vcpu, struct kvm_vcpu_init *init);
-struct kvm_vcpu *aarch64_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
+struct kvm_vcpu *aarch64_vcpu_add(struct kvm_vm *vm, u32 vcpu_id,
struct kvm_vcpu_init *init, void *guest_code);
struct ex_regs {
@@ -163,8 +163,8 @@ enum {
(v) == VECTOR_SYNC_LOWER_64 || \
(v) == VECTOR_SYNC_LOWER_32)
-void aarch64_get_supported_page_sizes(uint32_t ipa, uint32_t *ipa4k,
- uint32_t *ipa16k, uint32_t *ipa64k);
+void aarch64_get_supported_page_sizes(u32 ipa, u32 *ipa4k,
+ u32 *ipa16k, u32 *ipa64k);
void vm_init_descriptor_tables(struct kvm_vm *vm);
void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu);
@@ -272,7 +272,7 @@ struct arm_smccc_res {
* @res: pointer to write the return values from registers x0-x3
*
*/
-void smccc_hvc(uint32_t function_id, u64 arg0, u64 arg1,
+void smccc_hvc(u32 function_id, u64 arg0, u64 arg1,
u64 arg2, u64 arg3, u64 arg4, u64 arg5,
u64 arg6, struct arm_smccc_res *res);
@@ -283,7 +283,7 @@ void smccc_hvc(uint32_t function_id, u64 arg0, u64 arg1,
* @res: pointer to write the return values from registers x0-x3
*
*/
-void smccc_smc(uint32_t function_id, u64 arg0, u64 arg1,
+void smccc_smc(u32 function_id, u64 arg0, u64 arg1,
u64 arg2, u64 arg3, u64 arg4, u64 arg5,
u64 arg6, struct arm_smccc_res *res);
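
A usage sketch for the smccc_hvc() prototype above; the 0x84000000
function ID is PSCI_VERSION per the PSCI/SMCCC specs, not something this
header defines:

	struct arm_smccc_res res;

	/* Illustrative only: query the PSCI version over the HVC conduit. */
	smccc_hvc(0x84000000 /* PSCI_VERSION, from the PSCI spec */,
		  0, 0, 0, 0, 0, 0, 0, &res);
	/* res.a0 holds the version: major in bits 31:16, minor in bits 15:0. */
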
diff --git a/tools/testing/selftests/kvm/include/arm64/vgic.h b/tools/testing/selftests/kvm/include/arm64/vgic.h
index e88190d49c3d..007a3ef73d26 100644
--- a/tools/testing/selftests/kvm/include/arm64/vgic.h
+++ b/tools/testing/selftests/kvm/include/arm64/vgic.h
@@ -16,19 +16,19 @@
((u64)(flags) << 12) | \
index)
-int vgic_v3_setup(struct kvm_vm *vm, unsigned int nr_vcpus, uint32_t nr_irqs);
+int vgic_v3_setup(struct kvm_vm *vm, unsigned int nr_vcpus, u32 nr_irqs);
#define VGIC_MAX_RESERVED 1023
-void kvm_irq_set_level_info(int gic_fd, uint32_t intid, int level);
-int _kvm_irq_set_level_info(int gic_fd, uint32_t intid, int level);
+void kvm_irq_set_level_info(int gic_fd, u32 intid, int level);
+int _kvm_irq_set_level_info(int gic_fd, u32 intid, int level);
-void kvm_arm_irq_line(struct kvm_vm *vm, uint32_t intid, int level);
-int _kvm_arm_irq_line(struct kvm_vm *vm, uint32_t intid, int level);
+void kvm_arm_irq_line(struct kvm_vm *vm, u32 intid, int level);
+int _kvm_arm_irq_line(struct kvm_vm *vm, u32 intid, int level);
/* The vcpu arg only applies to private interrupts. */
-void kvm_irq_write_ispendr(int gic_fd, uint32_t intid, struct kvm_vcpu *vcpu);
-void kvm_irq_write_isactiver(int gic_fd, uint32_t intid, struct kvm_vcpu *vcpu);
+void kvm_irq_write_ispendr(int gic_fd, u32 intid, struct kvm_vcpu *vcpu);
+void kvm_irq_write_isactiver(int gic_fd, u32 intid, struct kvm_vcpu *vcpu);
#define KVM_IRQCHIP_NUM_PINS (1020 - 32)
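
A minimal sketch of pulsing an interrupt through the wrappers above; the
SPI number is made up for illustration (architecturally, SPIs start at
intid 32):

	u32 spi = 32 + 5;	/* hypothetical SPI */

	kvm_arm_irq_line(vm, spi, 1);	/* assert the line */
	kvm_arm_irq_line(vm, spi, 0);	/* deassert it */
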
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 816c4199c168..d76410a0fa1d 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -54,7 +54,7 @@ struct kvm_binary_stats {
struct kvm_vcpu {
struct list_head list;
- uint32_t id;
+ u32 id;
int fd;
struct kvm_vm *vm;
struct kvm_run *run;
@@ -63,8 +63,8 @@ struct kvm_vcpu {
#endif
struct kvm_binary_stats stats;
struct kvm_dirty_gfn *dirty_gfns;
- uint32_t fetch_index;
- uint32_t dirty_gfns_count;
+ u32 fetch_index;
+ u32 dirty_gfns_count;
};
struct userspace_mem_regions {
@@ -101,7 +101,7 @@ struct kvm_vm {
gpa_t ucall_mmio_addr;
gpa_t pgd;
gva_t handlers;
- uint32_t dirty_ring_size;
+ u32 dirty_ring_size;
u64 gpa_tag_mask;
struct kvm_vm_arch arch;
@@ -113,7 +113,7 @@ struct kvm_vm {
* allocators, e.g., lib/elf uses the memslots[MEM_REGION_CODE]
* memslot.
*/
- uint32_t memslots[NR_MEM_REGIONS];
+ u32 memslots[NR_MEM_REGIONS];
};
struct vcpu_reg_sublist {
@@ -145,7 +145,7 @@ struct vcpu_reg_list {
else
struct userspace_mem_region *
-memslot2region(struct kvm_vm *vm, uint32_t memslot);
+memslot2region(struct kvm_vm *vm, u32 memslot);
static inline struct userspace_mem_region *vm_get_mem_region(struct kvm_vm *vm,
enum kvm_mem_region_type type)
@@ -182,7 +182,7 @@ enum vm_guest_mode {
};
struct vm_shape {
- uint32_t type;
+ u32 type;
uint8_t mode;
uint8_t pad0;
uint16_t pad1;
@@ -365,14 +365,14 @@ static inline int vm_check_cap(struct kvm_vm *vm, long cap)
return ret;
}
-static inline int __vm_enable_cap(struct kvm_vm *vm, uint32_t cap, u64 arg0)
+static inline int __vm_enable_cap(struct kvm_vm *vm, u32 cap, u64 arg0)
{
struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
return __vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
}
-static inline void vm_enable_cap(struct kvm_vm *vm, uint32_t cap, u64 arg0)
+static inline void vm_enable_cap(struct kvm_vm *vm, u32 cap, u64 arg0)
{
struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
@@ -423,8 +423,8 @@ static inline void vm_guest_mem_allocate(struct kvm_vm *vm, u64 gpa, u64 size)
vm_guest_mem_fallocate(vm, gpa, size, false);
}
-void vm_enable_dirty_ring(struct kvm_vm *vm, uint32_t ring_size);
-const char *vm_guest_mode_string(uint32_t i);
+void vm_enable_dirty_ring(struct kvm_vm *vm, u32 ring_size);
+const char *vm_guest_mode_string(u32 i);
void kvm_vm_free(struct kvm_vm *vmp);
void kvm_vm_restart(struct kvm_vm *vmp);
@@ -442,7 +442,7 @@ static inline void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log)
}
static inline void kvm_vm_clear_dirty_log(struct kvm_vm *vm, int slot, void *log,
- u64 first_page, uint32_t num_pages)
+ u64 first_page, u32 num_pages)
{
struct kvm_clear_dirty_log args = {
.dirty_bitmap = log,
@@ -454,7 +454,7 @@ static inline void kvm_vm_clear_dirty_log(struct kvm_vm *vm, int slot, void *log
vm_ioctl(vm, KVM_CLEAR_DIRTY_LOG, &args);
}
-static inline uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm)
+static inline u32 kvm_vm_reset_dirty_ring(struct kvm_vm *vm)
{
return __vm_ioctl(vm, KVM_RESET_DIRTY_RINGS, NULL);
}
@@ -566,24 +566,24 @@ static inline int vm_create_guest_memfd(struct kvm_vm *vm, u64 size, u64 flags)
return fd;
}
-void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
+void vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
u64 gpa, u64 size, void *hva);
-int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
+int __vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
u64 gpa, u64 size, void *hva);
-void vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
+void vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
u64 gpa, u64 size, void *hva,
- uint32_t guest_memfd, u64 guest_memfd_offset);
-int __vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
+ u32 guest_memfd, u64 guest_memfd_offset);
+int __vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
u64 gpa, u64 size, void *hva,
- uint32_t guest_memfd, u64 guest_memfd_offset);
+ u32 guest_memfd, u64 guest_memfd_offset);
void vm_userspace_mem_region_add(struct kvm_vm *vm,
enum vm_mem_backing_src_type src_type,
- u64 guest_paddr, uint32_t slot, u64 npages,
- uint32_t flags);
+ u64 guest_paddr, u32 slot, u64 npages,
+ u32 flags);
void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
- u64 guest_paddr, uint32_t slot, u64 npages,
- uint32_t flags, int guest_memfd_fd, u64 guest_memfd_offset);
+ u64 guest_paddr, u32 slot, u64 npages,
+ u32 flags, int guest_memfd_fd, u64 guest_memfd_offset);
#ifndef vm_arch_has_protected_memory
static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
@@ -592,10 +592,10 @@ static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
}
#endif
-void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
-void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, u64 new_gpa);
-void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
-struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
+void vm_mem_region_set_flags(struct kvm_vm *vm, u32 slot, u32 flags);
+void vm_mem_region_move(struct kvm_vm *vm, u32 slot, u64 new_gpa);
+void vm_mem_region_delete(struct kvm_vm *vm, u32 slot);
+struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, u32 vcpu_id);
void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
gva_t gva_unused_gap(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
gva_t gva_alloc(struct kvm_vm *vm, size_t sz, gva_t vaddr_min);
@@ -636,7 +636,7 @@ static inline int __vcpu_run(struct kvm_vcpu *vcpu)
void vcpu_run_complete_io(struct kvm_vcpu *vcpu);
struct kvm_reg_list *vcpu_get_reg_list(struct kvm_vcpu *vcpu);
-static inline void vcpu_enable_cap(struct kvm_vcpu *vcpu, uint32_t cap,
+static inline void vcpu_enable_cap(struct kvm_vcpu *vcpu, u32 cap,
u64 arg0)
{
struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
@@ -764,18 +764,18 @@ static inline int vcpu_get_stats_fd(struct kvm_vcpu *vcpu)
return fd;
}
-int __kvm_has_device_attr(int dev_fd, uint32_t group, u64 attr);
+int __kvm_has_device_attr(int dev_fd, u32 group, u64 attr);
-static inline void kvm_has_device_attr(int dev_fd, uint32_t group, u64 attr)
+static inline void kvm_has_device_attr(int dev_fd, u32 group, u64 attr)
{
int ret = __kvm_has_device_attr(dev_fd, group, attr);
TEST_ASSERT(!ret, "KVM_HAS_DEVICE_ATTR failed, rc: %i errno: %i", ret, errno);
}
-int __kvm_device_attr_get(int dev_fd, uint32_t group, u64 attr, void *val);
+int __kvm_device_attr_get(int dev_fd, u32 group, u64 attr, void *val);
-static inline void kvm_device_attr_get(int dev_fd, uint32_t group,
+static inline void kvm_device_attr_get(int dev_fd, u32 group,
u64 attr, void *val)
{
int ret = __kvm_device_attr_get(dev_fd, group, attr, val);
@@ -783,9 +783,9 @@ static inline void kvm_device_attr_get(int dev_fd, uint32_t group,
TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_GET_DEVICE_ATTR, ret));
}
-int __kvm_device_attr_set(int dev_fd, uint32_t group, u64 attr, void *val);
+int __kvm_device_attr_set(int dev_fd, u32 group, u64 attr, void *val);
-static inline void kvm_device_attr_set(int dev_fd, uint32_t group,
+static inline void kvm_device_attr_set(int dev_fd, u32 group,
u64 attr, void *val)
{
int ret = __kvm_device_attr_set(dev_fd, group, attr, val);
@@ -793,37 +793,37 @@ static inline void kvm_device_attr_set(int dev_fd, uint32_t group,
TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_SET_DEVICE_ATTR, ret));
}
-static inline int __vcpu_has_device_attr(struct kvm_vcpu *vcpu, uint32_t group,
+static inline int __vcpu_has_device_attr(struct kvm_vcpu *vcpu, u32 group,
u64 attr)
{
return __kvm_has_device_attr(vcpu->fd, group, attr);
}
-static inline void vcpu_has_device_attr(struct kvm_vcpu *vcpu, uint32_t group,
+static inline void vcpu_has_device_attr(struct kvm_vcpu *vcpu, u32 group,
u64 attr)
{
kvm_has_device_attr(vcpu->fd, group, attr);
}
-static inline int __vcpu_device_attr_get(struct kvm_vcpu *vcpu, uint32_t group,
+static inline int __vcpu_device_attr_get(struct kvm_vcpu *vcpu, u32 group,
u64 attr, void *val)
{
return __kvm_device_attr_get(vcpu->fd, group, attr, val);
}
-static inline void vcpu_device_attr_get(struct kvm_vcpu *vcpu, uint32_t group,
+static inline void vcpu_device_attr_get(struct kvm_vcpu *vcpu, u32 group,
u64 attr, void *val)
{
kvm_device_attr_get(vcpu->fd, group, attr, val);
}
-static inline int __vcpu_device_attr_set(struct kvm_vcpu *vcpu, uint32_t group,
+static inline int __vcpu_device_attr_set(struct kvm_vcpu *vcpu, u32 group,
u64 attr, void *val)
{
return __kvm_device_attr_set(vcpu->fd, group, attr, val);
}
-static inline void vcpu_device_attr_set(struct kvm_vcpu *vcpu, uint32_t group,
+static inline void vcpu_device_attr_set(struct kvm_vcpu *vcpu, u32 group,
u64 attr, void *val)
{
kvm_device_attr_set(vcpu->fd, group, attr, val);
@@ -861,27 +861,27 @@ void *vcpu_map_dirty_ring(struct kvm_vcpu *vcpu);
*/
void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...);
-void kvm_irq_line(struct kvm_vm *vm, uint32_t irq, int level);
-int _kvm_irq_line(struct kvm_vm *vm, uint32_t irq, int level);
+void kvm_irq_line(struct kvm_vm *vm, u32 irq, int level);
+int _kvm_irq_line(struct kvm_vm *vm, u32 irq, int level);
#define KVM_MAX_IRQ_ROUTES 4096
struct kvm_irq_routing *kvm_gsi_routing_create(void);
void kvm_gsi_routing_irqchip_add(struct kvm_irq_routing *routing,
- uint32_t gsi, uint32_t pin);
+ u32 gsi, u32 pin);
int _kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
const char *exit_reason_str(unsigned int exit_reason);
-gpa_t vm_phy_page_alloc(struct kvm_vm *vm, gpa_t paddr_min, uint32_t memslot);
+gpa_t vm_phy_page_alloc(struct kvm_vm *vm, gpa_t paddr_min, u32 memslot);
gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- gpa_t paddr_min, uint32_t memslot,
+ gpa_t paddr_min, u32 memslot,
bool protected);
gpa_t vm_alloc_page_table(struct kvm_vm *vm);
static inline gpa_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- gpa_t paddr_min, uint32_t memslot)
+ gpa_t paddr_min, u32 memslot)
{
/*
* By default, allocate memory as protected for VMs that support
@@ -899,7 +899,7 @@ static inline gpa_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
* calculate the amount of memory needed for per-vCPU data, e.g. stacks.
*/
struct kvm_vm *____vm_create(struct vm_shape shape);
-struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
+struct kvm_vm *__vm_create(struct vm_shape shape, u32 nr_runnable_vcpus,
u64 nr_extra_pages);
static inline struct kvm_vm *vm_create_barebones(void)
@@ -917,16 +917,16 @@ static inline struct kvm_vm *vm_create_barebones_type(unsigned long type)
return ____vm_create(shape);
}
-static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
+static inline struct kvm_vm *vm_create(u32 nr_runnable_vcpus)
{
return __vm_create(VM_SHAPE_DEFAULT, nr_runnable_vcpus, 0);
}
-struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
+struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, u32 nr_vcpus,
u64 extra_mem_pages,
void *guest_code, struct kvm_vcpu *vcpus[]);
-static inline struct kvm_vm *vm_create_with_vcpus(uint32_t nr_vcpus,
+static inline struct kvm_vm *vm_create_with_vcpus(u32 nr_vcpus,
void *guest_code,
struct kvm_vcpu *vcpus[])
{
@@ -967,11 +967,11 @@ static inline struct kvm_vm *vm_create_shape_with_one_vcpu(struct vm_shape shape
struct kvm_vcpu *vm_recreate_with_one_vcpu(struct kvm_vm *vm);
-void kvm_set_files_rlimit(uint32_t nr_vcpus);
+void kvm_set_files_rlimit(u32 nr_vcpus);
-void kvm_pin_this_task_to_pcpu(uint32_t pcpu);
+void kvm_pin_this_task_to_pcpu(u32 pcpu);
void kvm_print_vcpu_pinning_help(void);
-void kvm_parse_vcpu_pinning(const char *pcpus_string, uint32_t vcpu_to_pcpu[],
+void kvm_parse_vcpu_pinning(const char *pcpus_string, u32 vcpu_to_pcpu[],
int nr_vcpus);
unsigned long vm_compute_max_gfn(struct kvm_vm *vm);
@@ -1031,10 +1031,10 @@ static inline void vcpu_dump(FILE *stream, struct kvm_vcpu *vcpu,
* vm - Virtual Machine
* vcpu_id - The id of the VCPU to add to the VM.
*/
-struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
+struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id);
void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code);
-static inline struct kvm_vcpu *vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
+static inline struct kvm_vcpu *vm_vcpu_add(struct kvm_vm *vm, u32 vcpu_id,
void *guest_code)
{
struct kvm_vcpu *vcpu = vm_arch_vcpu_add(vm, vcpu_id);
@@ -1045,10 +1045,10 @@ static inline struct kvm_vcpu *vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
}
/* Re-create a vCPU after restarting a VM, e.g. for state save/restore tests. */
-struct kvm_vcpu *vm_arch_vcpu_recreate(struct kvm_vm *vm, uint32_t vcpu_id);
+struct kvm_vcpu *vm_arch_vcpu_recreate(struct kvm_vm *vm, u32 vcpu_id);
static inline struct kvm_vcpu *vm_vcpu_recreate(struct kvm_vm *vm,
- uint32_t vcpu_id)
+ u32 vcpu_id)
{
return vm_arch_vcpu_recreate(vm, vcpu_id);
}
@@ -1147,6 +1147,6 @@ void kvm_arch_vm_post_create(struct kvm_vm *vm);
bool vm_is_gpa_protected(struct kvm_vm *vm, gpa_t paddr);
-uint32_t guest_get_vcpuid(void);
+u32 guest_get_vcpuid(void);
#endif /* SELFTEST_KVM_UTIL_H */
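
To see the converted u32 signatures in context, a bare-bones caller built
from declarations in this header; GUEST_DONE() is assumed from the ucall
helpers and guest_main is a placeholder name:

	static void guest_main(void)
	{
		GUEST_DONE();	/* assumed ucall helper */
	}

	...
	struct kvm_vcpu *vcpus[2];
	struct kvm_vm *vm = vm_create_with_vcpus(2, guest_main, vcpus);

	/* run vcpus, check ucalls, etc. */
	kvm_vm_free(vm);
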
diff --git a/tools/testing/selftests/kvm/include/memstress.h b/tools/testing/selftests/kvm/include/memstress.h
index 71296909302c..e3e4b4d6a27a 100644
--- a/tools/testing/selftests/kvm/include/memstress.h
+++ b/tools/testing/selftests/kvm/include/memstress.h
@@ -35,8 +35,8 @@ struct memstress_args {
u64 gpa;
u64 size;
u64 guest_page_size;
- uint32_t random_seed;
- uint32_t write_percent;
+ u32 random_seed;
+ u32 write_percent;
/* Run vCPUs in L2 instead of L1, if the architecture supports it. */
bool nested;
@@ -45,7 +45,7 @@ struct memstress_args {
/* True if all vCPUs are pinned to pCPUs */
bool pin_vcpus;
/* The vCPU=>pCPU pinning map. Only valid if pin_vcpus is true. */
- uint32_t vcpu_to_pcpu[KVM_MAX_VCPUS];
+ u32 vcpu_to_pcpu[KVM_MAX_VCPUS];
/* Test is done, stop running vCPUs. */
bool stop_vcpus;
@@ -61,12 +61,12 @@ struct kvm_vm *memstress_create_vm(enum vm_guest_mode mode, int nr_vcpus,
bool partition_vcpu_memory_access);
void memstress_destroy_vm(struct kvm_vm *vm);
-void memstress_set_write_percent(struct kvm_vm *vm, uint32_t write_percent);
+void memstress_set_write_percent(struct kvm_vm *vm, u32 write_percent);
void memstress_set_random_access(struct kvm_vm *vm, bool random_access);
void memstress_start_vcpu_threads(int vcpus, void (*vcpu_fn)(struct memstress_vcpu_args *));
void memstress_join_vcpu_threads(int vcpus);
-void memstress_guest_code(uint32_t vcpu_id);
+void memstress_guest_code(u32 vcpu_id);
u64 memstress_nested_pages(int nr_vcpus);
void memstress_setup_nested(struct kvm_vm *vm, int nr_vcpus, struct kvm_vcpu *vcpus[]);
diff --git a/tools/testing/selftests/kvm/include/riscv/arch_timer.h b/tools/testing/selftests/kvm/include/riscv/arch_timer.h
index 66ed7e36a7cb..28ffc014da2a 100644
--- a/tools/testing/selftests/kvm/include/riscv/arch_timer.h
+++ b/tools/testing/selftests/kvm/include/riscv/arch_timer.h
@@ -47,7 +47,7 @@ static inline void timer_irq_disable(void)
csr_clear(CSR_SIE, IE_TIE);
}
-static inline void timer_set_next_cmp_ms(uint32_t msec)
+static inline void timer_set_next_cmp_ms(u32 msec)
{
u64 now_ct = timer_get_cycles();
u64 next_ct = now_ct + msec_to_cycles(msec);
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index e3cc5832c1ad..5608008cfe61 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -90,14 +90,14 @@ struct timespec timespec_elapsed(struct timespec start);
struct timespec timespec_div(struct timespec ts, int divisor);
struct guest_random_state {
- uint32_t seed;
+ u32 seed;
};
-extern uint32_t guest_random_seed;
+extern u32 guest_random_seed;
extern struct guest_random_state guest_rng;
-struct guest_random_state new_guest_random_state(uint32_t seed);
-uint32_t guest_random_u32(struct guest_random_state *state);
+struct guest_random_state new_guest_random_state(u32 seed);
+u32 guest_random_u32(struct guest_random_state *state);
static inline bool __guest_random_bool(struct guest_random_state *state,
uint8_t percent)
@@ -141,7 +141,7 @@ enum vm_mem_backing_src_type {
struct vm_mem_backing_src_alias {
const char *name;
- uint32_t flag;
+ u32 flag;
};
#define MIN_RUN_DELAY_NS 200000UL
@@ -149,9 +149,9 @@ struct vm_mem_backing_src_alias {
bool thp_configured(void);
size_t get_trans_hugepagesz(void);
size_t get_def_hugetlb_pagesz(void);
-const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i);
-size_t get_backing_src_pagesz(uint32_t i);
-bool is_backing_src_hugetlb(uint32_t i);
+const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(u32 i);
+size_t get_backing_src_pagesz(u32 i);
+bool is_backing_src_hugetlb(u32 i);
void backing_src_help(const char *flag);
enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name);
long get_run_delay(void);
@@ -197,7 +197,7 @@ static inline void *align_ptr_up(void *x, size_t size)
int atoi_paranoid(const char *num_str);
-static inline uint32_t atoi_positive(const char *name, const char *num_str)
+static inline u32 atoi_positive(const char *name, const char *num_str)
{
int num = atoi_paranoid(num_str);
@@ -205,7 +205,7 @@ static inline uint32_t atoi_positive(const char *name, const char *num_str)
return num;
}
-static inline uint32_t atoi_non_negative(const char *name, const char *num_str)
+static inline u32 atoi_non_negative(const char *name, const char *num_str)
{
int num = atoi_paranoid(num_str);
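
Tying the guest_random_state conversions together, a minimal sketch of
seeding and drawing values with the declarations above:

	struct guest_random_state rng = new_guest_random_state(guest_random_seed);
	u32 r = guest_random_u32(&rng);	/* next pseudo-random u32 */
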
diff --git a/tools/testing/selftests/kvm/include/timer_test.h b/tools/testing/selftests/kvm/include/timer_test.h
index 9501c6c825e2..b7d5d2c84701 100644
--- a/tools/testing/selftests/kvm/include/timer_test.h
+++ b/tools/testing/selftests/kvm/include/timer_test.h
@@ -18,11 +18,11 @@
/* Timer test cmdline parameters */
struct test_args {
- uint32_t nr_vcpus;
- uint32_t nr_iter;
- uint32_t timer_period_ms;
- uint32_t migration_freq_ms;
- uint32_t timer_err_margin_us;
+ u32 nr_vcpus;
+ u32 nr_iter;
+ u32 timer_period_ms;
+ u32 migration_freq_ms;
+ u32 timer_err_margin_us;
/* Members of struct kvm_arm_counter_offset */
u64 counter_offset;
u64 reserved;
@@ -30,7 +30,7 @@ struct test_args {
/* Shared variables between host and guest */
struct test_vcpu_shared_data {
- uint32_t nr_iter;
+ u32 nr_iter;
int guest_stage;
u64 xcnt;
};
diff --git a/tools/testing/selftests/kvm/include/x86/apic.h b/tools/testing/selftests/kvm/include/x86/apic.h
index 484e9a234346..2d164405e7f2 100644
--- a/tools/testing/selftests/kvm/include/x86/apic.h
+++ b/tools/testing/selftests/kvm/include/x86/apic.h
@@ -72,19 +72,19 @@ void apic_disable(void);
void xapic_enable(void);
void x2apic_enable(void);
-static inline uint32_t get_bsp_flag(void)
+static inline u32 get_bsp_flag(void)
{
return rdmsr(MSR_IA32_APICBASE) & MSR_IA32_APICBASE_BSP;
}
-static inline uint32_t xapic_read_reg(unsigned int reg)
+static inline u32 xapic_read_reg(unsigned int reg)
{
- return ((volatile uint32_t *)APIC_DEFAULT_GPA)[reg >> 2];
+ return ((volatile u32 *)APIC_DEFAULT_GPA)[reg >> 2];
}
-static inline void xapic_write_reg(unsigned int reg, uint32_t val)
+static inline void xapic_write_reg(unsigned int reg, u32 val)
{
- ((volatile uint32_t *)APIC_DEFAULT_GPA)[reg >> 2] = val;
+ ((volatile u32 *)APIC_DEFAULT_GPA)[reg >> 2] = val;
}
static inline u64 x2apic_read_reg(unsigned int reg)
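
Since the xapic accessors above are plain volatile u32 loads/stores into
the MMIO window, reading the local APIC ID looks like the sketch below;
the 0x20 register offset and the bits-31:24 placement are architectural
xAPIC facts, cited here for illustration only:

	u32 reg = xapic_read_reg(0x20 /* APIC_ID, architectural offset */);
	u32 apic_id = reg >> 24;	/* xAPIC keeps the 8-bit ID in bits 31:24 */
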
diff --git a/tools/testing/selftests/kvm/include/x86/evmcs.h b/tools/testing/selftests/kvm/include/x86/evmcs.h
index 5ec5cca6f9e4..3b0f96b881f9 100644
--- a/tools/testing/selftests/kvm/include/x86/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86/evmcs.h
@@ -11,7 +11,7 @@
#include "vmx.h"
#define u16 uint16_t
-#define u32 uint32_t
+#define u32 u32
#define u64 u64
#define EVMCS_VERSION 1
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 72cadb47cd86..8afbb3315c85 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -402,8 +402,8 @@ struct desc64 {
uint16_t base0;
unsigned base1:8, type:4, s:1, dpl:2, p:1;
unsigned limit1:4, avl:1, l:1, db:1, g:1, base2:8;
- uint32_t base3;
- uint32_t zero1;
+ u32 base3;
+ u32 zero1;
} __attribute__((packed));
struct desc_ptr {
@@ -434,7 +434,7 @@ static inline u64 get_desc64_base(const struct desc64 *desc)
static inline u64 rdtsc(void)
{
- uint32_t eax, edx;
+ u32 eax, edx;
u64 tsc_val;
/*
* The lfence is to wait (on Intel CPUs) until all previous
@@ -447,27 +447,27 @@ static inline u64 rdtsc(void)
return tsc_val;
}
-static inline u64 rdtscp(uint32_t *aux)
+static inline u64 rdtscp(u32 *aux)
{
- uint32_t eax, edx;
+ u32 eax, edx;
__asm__ __volatile__("rdtscp" : "=a"(eax), "=d"(edx), "=c"(*aux));
return ((u64)edx) << 32 | eax;
}
-static inline u64 rdmsr(uint32_t msr)
+static inline u64 rdmsr(u32 msr)
{
- uint32_t a, d;
+ u32 a, d;
__asm__ __volatile__("rdmsr" : "=a"(a), "=d"(d) : "c"(msr) : "memory");
return a | ((u64)d << 32);
}
-static inline void wrmsr(uint32_t msr, u64 value)
+static inline void wrmsr(u32 msr, u64 value)
{
- uint32_t a = value;
- uint32_t d = value >> 32;
+ u32 a = value;
+ u32 d = value >> 32;
__asm__ __volatile__("wrmsr" :: "a"(a), "d"(d), "c"(msr) : "memory");
}
@@ -625,14 +625,14 @@ static inline struct desc_ptr get_idt(void)
return idt;
}
-static inline void outl(uint16_t port, uint32_t value)
+static inline void outl(uint16_t port, u32 value)
{
__asm__ __volatile__("outl %%eax, %%dx" : : "d"(port), "a"(value));
}
-static inline void __cpuid(uint32_t function, uint32_t index,
- uint32_t *eax, uint32_t *ebx,
- uint32_t *ecx, uint32_t *edx)
+static inline void __cpuid(u32 function, u32 index,
+ u32 *eax, u32 *ebx,
+ u32 *ecx, u32 *edx)
{
*eax = function;
*ecx = index;
@@ -646,35 +646,35 @@ static inline void __cpuid(uint32_t function, uint32_t index,
: "memory");
}
-static inline void cpuid(uint32_t function,
- uint32_t *eax, uint32_t *ebx,
- uint32_t *ecx, uint32_t *edx)
+static inline void cpuid(u32 function,
+ u32 *eax, u32 *ebx,
+ u32 *ecx, u32 *edx)
{
return __cpuid(function, 0, eax, ebx, ecx, edx);
}
-static inline uint32_t this_cpu_fms(void)
+static inline u32 this_cpu_fms(void)
{
- uint32_t eax, ebx, ecx, edx;
+ u32 eax, ebx, ecx, edx;
cpuid(1, &eax, &ebx, &ecx, &edx);
return eax;
}
-static inline uint32_t this_cpu_family(void)
+static inline u32 this_cpu_family(void)
{
return x86_family(this_cpu_fms());
}
-static inline uint32_t this_cpu_model(void)
+static inline u32 this_cpu_model(void)
{
return x86_model(this_cpu_fms());
}
static inline bool this_cpu_vendor_string_is(const char *vendor)
{
- const uint32_t *chunk = (const uint32_t *)vendor;
- uint32_t eax, ebx, ecx, edx;
+ const u32 *chunk = (const u32 *)vendor;
+ u32 eax, ebx, ecx, edx;
cpuid(0, &eax, &ebx, &ecx, &edx);
return (ebx == chunk[0] && edx == chunk[1] && ecx == chunk[2]);
@@ -693,10 +693,10 @@ static inline bool this_cpu_is_amd(void)
return this_cpu_vendor_string_is("AuthenticAMD");
}
-static inline uint32_t __this_cpu_has(uint32_t function, uint32_t index,
- uint8_t reg, uint8_t lo, uint8_t hi)
+static inline u32 __this_cpu_has(u32 function, u32 index,
+ uint8_t reg, uint8_t lo, uint8_t hi)
{
- uint32_t gprs[4];
+ u32 gprs[4];
__cpuid(function, index,
&gprs[KVM_CPUID_EAX], &gprs[KVM_CPUID_EBX],
@@ -711,7 +711,7 @@ static inline bool this_cpu_has(struct kvm_x86_cpu_feature feature)
feature.reg, feature.bit, feature.bit);
}
-static inline uint32_t this_cpu_property(struct kvm_x86_cpu_property property)
+static inline u32 this_cpu_property(struct kvm_x86_cpu_property property)
{
return __this_cpu_has(property.function, property.index,
property.reg, property.lo_bit, property.hi_bit);
@@ -719,7 +719,7 @@ static inline uint32_t this_cpu_property(struct kvm_x86_cpu_property property)
static __always_inline bool this_cpu_has_p(struct kvm_x86_cpu_property property)
{
- uint32_t max_leaf;
+ u32 max_leaf;
switch (property.function & 0xc0000000) {
case 0:
@@ -739,7 +739,7 @@ static __always_inline bool this_cpu_has_p(struct kvm_x86_cpu_property property)
static inline bool this_pmu_has(struct kvm_x86_pmu_feature feature)
{
- uint32_t nr_bits;
+ u32 nr_bits;
if (feature.f.reg == KVM_CPUID_EBX) {
nr_bits = this_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
@@ -867,7 +867,7 @@ void kvm_x86_state_cleanup(struct kvm_x86_state *state);
const struct kvm_msr_list *kvm_get_msr_index_list(void);
const struct kvm_msr_list *kvm_get_feature_msr_index_list(void);
-bool kvm_msr_is_in_save_restore_list(uint32_t msr_index);
+bool kvm_msr_is_in_save_restore_list(u32 msr_index);
u64 kvm_get_feature_msr(u64 msr_index);
static inline void vcpu_msrs_get(struct kvm_vcpu *vcpu,
@@ -923,20 +923,20 @@ static inline void vcpu_xcrs_set(struct kvm_vcpu *vcpu, struct kvm_xcrs *xcrs)
}
const struct kvm_cpuid_entry2 *get_cpuid_entry(const struct kvm_cpuid2 *cpuid,
- uint32_t function, uint32_t index);
+ u32 function, u32 index);
const struct kvm_cpuid2 *kvm_get_supported_cpuid(void);
-static inline uint32_t kvm_cpu_fms(void)
+static inline u32 kvm_cpu_fms(void)
{
return get_cpuid_entry(kvm_get_supported_cpuid(), 0x1, 0)->eax;
}
-static inline uint32_t kvm_cpu_family(void)
+static inline u32 kvm_cpu_family(void)
{
return x86_family(kvm_cpu_fms());
}
-static inline uint32_t kvm_cpu_model(void)
+static inline u32 kvm_cpu_model(void)
{
return x86_model(kvm_cpu_fms());
}
@@ -949,17 +949,17 @@ static inline bool kvm_cpu_has(struct kvm_x86_cpu_feature feature)
return kvm_cpuid_has(kvm_get_supported_cpuid(), feature);
}
-uint32_t kvm_cpuid_property(const struct kvm_cpuid2 *cpuid,
- struct kvm_x86_cpu_property property);
+u32 kvm_cpuid_property(const struct kvm_cpuid2 *cpuid,
+ struct kvm_x86_cpu_property property);
-static inline uint32_t kvm_cpu_property(struct kvm_x86_cpu_property property)
+static inline u32 kvm_cpu_property(struct kvm_x86_cpu_property property)
{
return kvm_cpuid_property(kvm_get_supported_cpuid(), property);
}
static __always_inline bool kvm_cpu_has_p(struct kvm_x86_cpu_property property)
{
- uint32_t max_leaf;
+ u32 max_leaf;
switch (property.function & 0xc0000000) {
case 0:
@@ -979,7 +979,7 @@ static __always_inline bool kvm_cpu_has_p(struct kvm_x86_cpu_property property)
static inline bool kvm_pmu_has(struct kvm_x86_pmu_feature feature)
{
- uint32_t nr_bits;
+ u32 nr_bits;
if (feature.f.reg == KVM_CPUID_EBX) {
nr_bits = kvm_cpu_property(X86_PROPERTY_PMU_EBX_BIT_VECTOR_LENGTH);
@@ -1031,8 +1031,8 @@ static inline void vcpu_get_cpuid(struct kvm_vcpu *vcpu)
}
static inline struct kvm_cpuid_entry2 *__vcpu_get_cpuid_entry(struct kvm_vcpu *vcpu,
- uint32_t function,
- uint32_t index)
+ u32 function,
+ u32 index)
{
TEST_ASSERT(vcpu->cpuid, "Must do vcpu_init_cpuid() first (or equivalent)");
@@ -1043,7 +1043,7 @@ static inline struct kvm_cpuid_entry2 *__vcpu_get_cpuid_entry(struct kvm_vcpu *v
}
static inline struct kvm_cpuid_entry2 *vcpu_get_cpuid_entry(struct kvm_vcpu *vcpu,
- uint32_t function)
+ u32 function)
{
return __vcpu_get_cpuid_entry(vcpu, function, 0);
}
@@ -1073,10 +1073,10 @@ static inline void vcpu_set_cpuid(struct kvm_vcpu *vcpu)
void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu,
struct kvm_x86_cpu_property property,
- uint32_t value);
+ u32 value);
void vcpu_set_cpuid_maxphyaddr(struct kvm_vcpu *vcpu, uint8_t maxphyaddr);
-void vcpu_clear_cpuid_entry(struct kvm_vcpu *vcpu, uint32_t function);
+void vcpu_clear_cpuid_entry(struct kvm_vcpu *vcpu, u32 function);
static inline bool vcpu_cpuid_has(struct kvm_vcpu *vcpu,
struct kvm_x86_cpu_feature feature)
@@ -1130,7 +1130,7 @@ do { \
* is changing, etc. This is NOT an exhaustive list! The intent is to filter
* out MSRs that are not durable _and_ that a selftest wants to write.
*/
-static inline bool is_durable_msr(uint32_t msr)
+static inline bool is_durable_msr(u32 msr)
{
return msr != MSR_IA32_TSC;
}
@@ -1173,7 +1173,7 @@ struct idt_entry {
uint16_t dpl : 2;
uint16_t p : 1;
uint16_t offset1;
- uint32_t offset2; uint32_t reserved;
+ u32 offset2; u32 reserved;
};
void vm_install_exception_handler(struct kvm_vm *vm, int vector,
@@ -1271,11 +1271,11 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
})
#define BUILD_READ_U64_SAFE_HELPER(insn, _fep, _FEP) \
-static inline uint8_t insn##_safe ##_fep(uint32_t idx, u64 *val) \
+static inline uint8_t insn##_safe ##_fep(u32 idx, u64 *val) \
{ \
u64 error_code; \
uint8_t vector; \
- uint32_t a, d; \
+ u32 a, d; \
\
asm volatile(KVM_ASM_SAFE##_FEP(#insn) \
: "=a"(a), "=d"(d), \
@@ -1299,12 +1299,12 @@ BUILD_READ_U64_SAFE_HELPERS(rdmsr)
BUILD_READ_U64_SAFE_HELPERS(rdpmc)
BUILD_READ_U64_SAFE_HELPERS(xgetbv)
-static inline uint8_t wrmsr_safe(uint32_t msr, u64 val)
+static inline uint8_t wrmsr_safe(u32 msr, u64 val)
{
return kvm_asm_safe("wrmsr", "a"(val & -1u), "d"(val >> 32), "c"(msr));
}
-static inline uint8_t xsetbv_safe(uint32_t index, u64 value)
+static inline uint8_t xsetbv_safe(u32 index, u64 value)
{
u32 eax = value;
u32 edx = value >> 32;
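
The rdmsr()/wrmsr() conversions above keep the architectural edx:eax
split in u32 halves; a quick worked example of the recombination
arithmetic:

	u32 a = 0x89abcdef, d = 0x01234567;	/* illustrative halves */
	u64 v = a | ((u64)d << 32);		/* = 0x0123456789abcdef */
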
diff --git a/tools/testing/selftests/kvm/include/x86/sev.h b/tools/testing/selftests/kvm/include/x86/sev.h
index 02f6324d7e77..fa056d2e1c7e 100644
--- a/tools/testing/selftests/kvm/include/x86/sev.h
+++ b/tools/testing/selftests/kvm/include/x86/sev.h
@@ -27,13 +27,13 @@ enum sev_guest_state {
#define GHCB_MSR_TERM_REQ 0x100
-void sev_vm_launch(struct kvm_vm *vm, uint32_t policy);
+void sev_vm_launch(struct kvm_vm *vm, u32 policy);
void sev_vm_launch_measure(struct kvm_vm *vm, uint8_t *measurement);
void sev_vm_launch_finish(struct kvm_vm *vm);
-struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t type, void *guest_code,
+struct kvm_vm *vm_sev_create_with_one_vcpu(u32 type, void *guest_code,
struct kvm_vcpu **cpu);
-void vm_sev_launch(struct kvm_vm *vm, uint32_t policy, uint8_t *measurement);
+void vm_sev_launch(struct kvm_vm *vm, u32 policy, uint8_t *measurement);
kvm_static_assert(SEV_RET_SUCCESS == 0);
diff --git a/tools/testing/selftests/kvm/include/x86/vmx.h b/tools/testing/selftests/kvm/include/x86/vmx.h
index b5e6931cc979..e1772fb66811 100644
--- a/tools/testing/selftests/kvm/include/x86/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86/vmx.h
@@ -285,8 +285,8 @@ enum vmcs_field {
};
struct vmx_msr_entry {
- uint32_t index;
- uint32_t reserved;
+ u32 index;
+ u32 reserved;
u64 value;
} __attribute__ ((aligned(16)));
@@ -490,7 +490,7 @@ static inline int vmwrite(u64 encoding, u64 value)
return ret;
}
-static inline uint32_t vmcs_revision(void)
+static inline u32 vmcs_revision(void)
{
return rdmsr(MSR_IA32_VMX_BASIC);
}
@@ -564,12 +564,12 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
u64 nested_paddr, u64 paddr, u64 size);
void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint32_t memslot);
+ u32 memslot);
void nested_identity_map_1g(struct vmx_pages *vmx, struct kvm_vm *vm,
u64 addr, u64 size);
bool kvm_cpu_has_ept(void);
void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint32_t eptp_memslot);
+ u32 eptp_memslot);
void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm);
#endif /* SELFTEST_KVM_VMX_H */
diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
index dcd213733604..da3d9e8a0735 100644
--- a/tools/testing/selftests/kvm/kvm_page_table_test.c
+++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
@@ -63,7 +63,7 @@ struct test_args {
static enum test_stage guest_test_stage;
/* Host variables */
-static uint32_t nr_vcpus = 1;
+static u32 nr_vcpus = 1;
static struct test_args test_args;
static enum test_stage *current_stage;
static bool host_quit;
diff --git a/tools/testing/selftests/kvm/lib/arm64/gic.c b/tools/testing/selftests/kvm/lib/arm64/gic.c
index ac3987cdac6d..c16166bcf11b 100644
--- a/tools/testing/selftests/kvm/lib/arm64/gic.c
+++ b/tools/testing/selftests/kvm/lib/arm64/gic.c
@@ -50,7 +50,7 @@ static void gic_dist_init(enum gic_type type, unsigned int nr_cpus)
void gic_init(enum gic_type type, unsigned int nr_cpus)
{
- uint32_t cpu = guest_get_vcpuid();
+ u32 cpu = guest_get_vcpuid();
GUEST_ASSERT(type < GIC_TYPE_MAX);
GUEST_ASSERT(nr_cpus);
diff --git a/tools/testing/selftests/kvm/lib/arm64/gic_private.h b/tools/testing/selftests/kvm/lib/arm64/gic_private.h
index d231bb7594df..b895f235d8a1 100644
--- a/tools/testing/selftests/kvm/lib/arm64/gic_private.h
+++ b/tools/testing/selftests/kvm/lib/arm64/gic_private.h
@@ -13,18 +13,18 @@ struct gic_common_ops {
void (*gic_irq_enable)(unsigned int intid);
void (*gic_irq_disable)(unsigned int intid);
u64 (*gic_read_iar)(void);
- void (*gic_write_eoir)(uint32_t irq);
- void (*gic_write_dir)(uint32_t irq);
+ void (*gic_write_eoir)(u32 irq);
+ void (*gic_write_dir)(u32 irq);
void (*gic_set_eoi_split)(bool split);
void (*gic_set_priority_mask)(u64 mask);
- void (*gic_set_priority)(uint32_t intid, uint32_t prio);
- void (*gic_irq_set_active)(uint32_t intid);
- void (*gic_irq_clear_active)(uint32_t intid);
- bool (*gic_irq_get_active)(uint32_t intid);
- void (*gic_irq_set_pending)(uint32_t intid);
- void (*gic_irq_clear_pending)(uint32_t intid);
- bool (*gic_irq_get_pending)(uint32_t intid);
- void (*gic_irq_set_config)(uint32_t intid, bool is_edge);
+ void (*gic_set_priority)(u32 intid, u32 prio);
+ void (*gic_irq_set_active)(u32 intid);
+ void (*gic_irq_clear_active)(u32 intid);
+ bool (*gic_irq_get_active)(u32 intid);
+ void (*gic_irq_set_pending)(u32 intid);
+ void (*gic_irq_clear_pending)(u32 intid);
+ bool (*gic_irq_get_pending)(u32 intid);
+ void (*gic_irq_set_config)(u32 intid, bool is_edge);
};
extern const struct gic_common_ops gicv3_ops;
diff --git a/tools/testing/selftests/kvm/lib/arm64/gic_v3.c b/tools/testing/selftests/kvm/lib/arm64/gic_v3.c
index 2f5d8a706ce3..092d58803c8c 100644
--- a/tools/testing/selftests/kvm/lib/arm64/gic_v3.c
+++ b/tools/testing/selftests/kvm/lib/arm64/gic_v3.c
@@ -50,13 +50,13 @@ static void gicv3_gicd_wait_for_rwp(void)
}
}
-static inline volatile void *gicr_base_cpu(uint32_t cpu)
+static inline volatile void *gicr_base_cpu(u32 cpu)
{
/* Align all the redistributors sequentially */
return GICR_BASE_GVA + cpu * SZ_64K * 2;
}
-static void gicv3_gicr_wait_for_rwp(uint32_t cpu)
+static void gicv3_gicr_wait_for_rwp(u32 cpu)
{
unsigned int count = 100000; /* 1s */
@@ -66,7 +66,7 @@ static void gicv3_gicr_wait_for_rwp(uint32_t cpu)
}
}
-static void gicv3_wait_for_rwp(uint32_t cpu_or_dist)
+static void gicv3_wait_for_rwp(u32 cpu_or_dist)
{
if (cpu_or_dist & DIST_BIT)
gicv3_gicd_wait_for_rwp();
@@ -99,13 +99,13 @@ static u64 gicv3_read_iar(void)
return irqstat;
}
-static void gicv3_write_eoir(uint32_t irq)
+static void gicv3_write_eoir(u32 irq)
{
write_sysreg_s(irq, SYS_ICC_EOIR1_EL1);
isb();
}
-static void gicv3_write_dir(uint32_t irq)
+static void gicv3_write_dir(u32 irq)
{
write_sysreg_s(irq, SYS_ICC_DIR_EL1);
isb();
@@ -118,7 +118,7 @@ static void gicv3_set_priority_mask(u64 mask)
static void gicv3_set_eoi_split(bool split)
{
- uint32_t val;
+ u32 val;
/*
* All other fields are read-only, so no need to read CTLR first. In
@@ -129,29 +129,29 @@ static void gicv3_set_eoi_split(bool split)
isb();
}
-uint32_t gicv3_reg_readl(uint32_t cpu_or_dist, u64 offset)
+u32 gicv3_reg_readl(u32 cpu_or_dist, u64 offset)
{
volatile void *base = cpu_or_dist & DIST_BIT ? GICD_BASE_GVA
: sgi_base_from_redist(gicr_base_cpu(cpu_or_dist));
return readl(base + offset);
}
-void gicv3_reg_writel(uint32_t cpu_or_dist, u64 offset, uint32_t reg_val)
+void gicv3_reg_writel(u32 cpu_or_dist, u64 offset, u32 reg_val)
{
volatile void *base = cpu_or_dist & DIST_BIT ? GICD_BASE_GVA
: sgi_base_from_redist(gicr_base_cpu(cpu_or_dist));
writel(reg_val, base + offset);
}
-uint32_t gicv3_getl_fields(uint32_t cpu_or_dist, u64 offset, uint32_t mask)
+u32 gicv3_getl_fields(u32 cpu_or_dist, u64 offset, u32 mask)
{
return gicv3_reg_readl(cpu_or_dist, offset) & mask;
}
-void gicv3_setl_fields(uint32_t cpu_or_dist, u64 offset,
- uint32_t mask, uint32_t reg_val)
+void gicv3_setl_fields(u32 cpu_or_dist, u64 offset,
+ u32 mask, u32 reg_val)
{
- uint32_t tmp = gicv3_reg_readl(cpu_or_dist, offset) & ~mask;
+ u32 tmp = gicv3_reg_readl(cpu_or_dist, offset) & ~mask;
tmp |= (reg_val & mask);
gicv3_reg_writel(cpu_or_dist, offset, tmp);
@@ -165,14 +165,14 @@ void gicv3_setl_fields(uint32_t cpu_or_dist, u64 offset,
* map that doesn't implement it; like GICR_WAKER's offset of 0x0014 being
* marked as "Reserved" in the Distributor map.
*/
-static void gicv3_access_reg(uint32_t intid, u64 offset,
- uint32_t reg_bits, uint32_t bits_per_field,
- bool write, uint32_t *val)
+static void gicv3_access_reg(u32 intid, u64 offset,
+ u32 reg_bits, u32 bits_per_field,
+ bool write, u32 *val)
{
- uint32_t cpu = guest_get_vcpuid();
+ u32 cpu = guest_get_vcpuid();
enum gicv3_intid_range intid_range = get_intid_range(intid);
- uint32_t fields_per_reg, index, mask, shift;
- uint32_t cpu_or_dist;
+ u32 fields_per_reg, index, mask, shift;
+ u32 cpu_or_dist;
GUEST_ASSERT(bits_per_field <= reg_bits);
GUEST_ASSERT(!write || *val < (1U << bits_per_field));
@@ -197,32 +197,32 @@ static void gicv3_access_reg(uint32_t intid, u64 offset,
*val = gicv3_getl_fields(cpu_or_dist, offset, mask) >> shift;
}
-static void gicv3_write_reg(uint32_t intid, u64 offset,
- uint32_t reg_bits, uint32_t bits_per_field, uint32_t val)
+static void gicv3_write_reg(u32 intid, u64 offset,
+ u32 reg_bits, u32 bits_per_field, u32 val)
{
gicv3_access_reg(intid, offset, reg_bits,
bits_per_field, true, &val);
}
-static uint32_t gicv3_read_reg(uint32_t intid, u64 offset,
- uint32_t reg_bits, uint32_t bits_per_field)
+static u32 gicv3_read_reg(u32 intid, u64 offset,
+ u32 reg_bits, u32 bits_per_field)
{
- uint32_t val;
+ u32 val;
gicv3_access_reg(intid, offset, reg_bits,
bits_per_field, false, &val);
return val;
}
-static void gicv3_set_priority(uint32_t intid, uint32_t prio)
+static void gicv3_set_priority(u32 intid, u32 prio)
{
gicv3_write_reg(intid, GICD_IPRIORITYR, 32, 8, prio);
}
/* Sets the intid to be level-sensitive or edge-triggered. */
-static void gicv3_irq_set_config(uint32_t intid, bool is_edge)
+static void gicv3_irq_set_config(u32 intid, bool is_edge)
{
- uint32_t val;
+ u32 val;
/* N/A for private interrupts. */
GUEST_ASSERT(get_intid_range(intid) == SPI_RANGE);
@@ -230,57 +230,57 @@ static void gicv3_irq_set_config(uint32_t intid, bool is_edge)
gicv3_write_reg(intid, GICD_ICFGR, 32, 2, val);
}
-static void gicv3_irq_enable(uint32_t intid)
+static void gicv3_irq_enable(u32 intid)
{
bool is_spi = get_intid_range(intid) == SPI_RANGE;
- uint32_t cpu = guest_get_vcpuid();
+ u32 cpu = guest_get_vcpuid();
gicv3_write_reg(intid, GICD_ISENABLER, 32, 1, 1);
gicv3_wait_for_rwp(is_spi ? DIST_BIT : cpu);
}
-static void gicv3_irq_disable(uint32_t intid)
+static void gicv3_irq_disable(u32 intid)
{
bool is_spi = get_intid_range(intid) == SPI_RANGE;
- uint32_t cpu = guest_get_vcpuid();
+ u32 cpu = guest_get_vcpuid();
gicv3_write_reg(intid, GICD_ICENABLER, 32, 1, 1);
gicv3_wait_for_rwp(is_spi ? DIST_BIT : cpu);
}
-static void gicv3_irq_set_active(uint32_t intid)
+static void gicv3_irq_set_active(u32 intid)
{
gicv3_write_reg(intid, GICD_ISACTIVER, 32, 1, 1);
}
-static void gicv3_irq_clear_active(uint32_t intid)
+static void gicv3_irq_clear_active(u32 intid)
{
gicv3_write_reg(intid, GICD_ICACTIVER, 32, 1, 1);
}
-static bool gicv3_irq_get_active(uint32_t intid)
+static bool gicv3_irq_get_active(u32 intid)
{
return gicv3_read_reg(intid, GICD_ISACTIVER, 32, 1);
}
-static void gicv3_irq_set_pending(uint32_t intid)
+static void gicv3_irq_set_pending(u32 intid)
{
gicv3_write_reg(intid, GICD_ISPENDR, 32, 1, 1);
}
-static void gicv3_irq_clear_pending(uint32_t intid)
+static void gicv3_irq_clear_pending(u32 intid)
{
gicv3_write_reg(intid, GICD_ICPENDR, 32, 1, 1);
}
-static bool gicv3_irq_get_pending(uint32_t intid)
+static bool gicv3_irq_get_pending(u32 intid)
{
return gicv3_read_reg(intid, GICD_ISPENDR, 32, 1);
}
static void gicv3_enable_redist(volatile void *redist_base)
{
- uint32_t val = readl(redist_base + GICR_WAKER);
+ u32 val = readl(redist_base + GICR_WAKER);
unsigned int count = 100000; /* 1s */
val &= ~GICR_WAKER_ProcessorSleep;
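
As a concrete trace of the gicv3_access_reg() indexing above, take
gicv3_set_priority()'s GICD_IPRIORITYR access (reg_bits = 32,
bits_per_field = 8) for a hypothetical intid of 42; the derived
quantities follow the usual fields-per-register layout, spelled out here
since the helper's body is outside this hunk:

	u32 fields_per_reg = 32 / 8;	/* 4 priority fields per register */
	u32 index = 42 % 4;		/* = 2 */
	u32 shift = index * 8;		/* = 16 */
	u32 mask = 0xffU << shift;	/* = 0x00ff0000 */
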
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index d7cfd8899b97..01c8ee96b8ec 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -380,7 +380,7 @@ void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code)
vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (u64)guest_code);
}
-static struct kvm_vcpu *__aarch64_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
+static struct kvm_vcpu *__aarch64_vcpu_add(struct kvm_vm *vm, u32 vcpu_id,
struct kvm_vcpu_init *init)
{
size_t stack_size;
@@ -399,7 +399,7 @@ static struct kvm_vcpu *__aarch64_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
return vcpu;
}
-struct kvm_vcpu *aarch64_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
+struct kvm_vcpu *aarch64_vcpu_add(struct kvm_vm *vm, u32 vcpu_id,
struct kvm_vcpu_init *init, void *guest_code)
{
struct kvm_vcpu *vcpu = __aarch64_vcpu_add(vm, vcpu_id, init);
@@ -409,7 +409,7 @@ struct kvm_vcpu *aarch64_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
return vcpu;
}
-struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
+struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
{
return __aarch64_vcpu_add(vm, vcpu_id, NULL);
}
@@ -530,13 +530,13 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
handlers->exception_handlers[vector][0] = handler;
}
-uint32_t guest_get_vcpuid(void)
+u32 guest_get_vcpuid(void)
{
return read_sysreg(tpidr_el1);
}
-static uint32_t max_ipa_for_page_size(uint32_t vm_ipa, uint32_t gran,
- uint32_t not_sup_val, uint32_t ipa52_min_val)
+static u32 max_ipa_for_page_size(u32 vm_ipa, u32 gran,
+ u32 not_sup_val, u32 ipa52_min_val)
{
if (gran == not_sup_val)
return 0;
@@ -546,13 +546,13 @@ static uint32_t max_ipa_for_page_size(uint32_t vm_ipa, uint32_t gran,
return min(vm_ipa, 48U);
}
-void aarch64_get_supported_page_sizes(uint32_t ipa, uint32_t *ipa4k,
- uint32_t *ipa16k, uint32_t *ipa64k)
+void aarch64_get_supported_page_sizes(u32 ipa, u32 *ipa4k,
+ u32 *ipa16k, u32 *ipa64k)
{
struct kvm_vcpu_init preferred_init;
int kvm_fd, vm_fd, vcpu_fd, err;
u64 val;
- uint32_t gran;
+ u32 gran;
struct kvm_one_reg reg = {
.id = KVM_ARM64_SYS_REG(SYS_ID_AA64MMFR0_EL1),
.addr = (u64)&val,
@@ -613,7 +613,7 @@ void aarch64_get_supported_page_sizes(uint32_t ipa, uint32_t *ipa4k,
: "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7")
-void smccc_hvc(uint32_t function_id, u64 arg0, u64 arg1,
+void smccc_hvc(u32 function_id, u64 arg0, u64 arg1,
u64 arg2, u64 arg3, u64 arg4, u64 arg5,
u64 arg6, struct arm_smccc_res *res)
{
@@ -621,7 +621,7 @@ void smccc_hvc(uint32_t function_id, u64 arg0, u64 arg1,
arg6, res);
}
-void smccc_smc(uint32_t function_id, u64 arg0, u64 arg1,
+void smccc_smc(u32 function_id, u64 arg0, u64 arg1,
u64 arg2, u64 arg3, u64 arg4, u64 arg5,
u64 arg6, struct arm_smccc_res *res)
{
diff --git a/tools/testing/selftests/kvm/lib/arm64/vgic.c b/tools/testing/selftests/kvm/lib/arm64/vgic.c
index 63aefbdb1829..87673889c63e 100644
--- a/tools/testing/selftests/kvm/lib/arm64/vgic.c
+++ b/tools/testing/selftests/kvm/lib/arm64/vgic.c
@@ -30,7 +30,7 @@
* redistributor regions of the guest. Since it depends on the number of
* vCPUs for the VM, it must be called after all the vCPUs have been created.
*/
-int vgic_v3_setup(struct kvm_vm *vm, unsigned int nr_vcpus, uint32_t nr_irqs)
+int vgic_v3_setup(struct kvm_vm *vm, unsigned int nr_vcpus, u32 nr_irqs)
{
int gic_fd;
u64 attr;
@@ -80,7 +80,7 @@ int vgic_v3_setup(struct kvm_vm *vm, unsigned int nr_vcpus, uint32_t nr_irqs)
}
/* should only work for level sensitive interrupts */
-int _kvm_irq_set_level_info(int gic_fd, uint32_t intid, int level)
+int _kvm_irq_set_level_info(int gic_fd, u32 intid, int level)
{
u64 attr = 32 * (intid / 32);
u64 index = intid % 32;
@@ -98,16 +98,16 @@ int _kvm_irq_set_level_info(int gic_fd, uint32_t intid, int level)
return ret;
}
-void kvm_irq_set_level_info(int gic_fd, uint32_t intid, int level)
+void kvm_irq_set_level_info(int gic_fd, u32 intid, int level)
{
int ret = _kvm_irq_set_level_info(gic_fd, intid, level);
TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO, ret));
}
-int _kvm_arm_irq_line(struct kvm_vm *vm, uint32_t intid, int level)
+int _kvm_arm_irq_line(struct kvm_vm *vm, u32 intid, int level)
{
- uint32_t irq = intid & KVM_ARM_IRQ_NUM_MASK;
+ u32 irq = intid & KVM_ARM_IRQ_NUM_MASK;
TEST_ASSERT(!INTID_IS_SGI(intid), "KVM_IRQ_LINE's interface itself "
"doesn't allow injecting SGIs. There's no mask for it.");
@@ -120,14 +120,14 @@ int _kvm_arm_irq_line(struct kvm_vm *vm, uint32_t intid, int level)
return _kvm_irq_line(vm, irq, level);
}
-void kvm_arm_irq_line(struct kvm_vm *vm, uint32_t intid, int level)
+void kvm_arm_irq_line(struct kvm_vm *vm, u32 intid, int level)
{
int ret = _kvm_arm_irq_line(vm, intid, level);
TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_IRQ_LINE, ret));
}
-static void vgic_poke_irq(int gic_fd, uint32_t intid, struct kvm_vcpu *vcpu,
+static void vgic_poke_irq(int gic_fd, u32 intid, struct kvm_vcpu *vcpu,
u64 reg_off)
{
u64 reg = intid / 32;
@@ -136,7 +136,7 @@ static void vgic_poke_irq(int gic_fd, uint32_t intid, struct kvm_vcpu *vcpu,
u64 val;
bool intid_is_private = INTID_IS_SGI(intid) || INTID_IS_PPI(intid);
- uint32_t group = intid_is_private ? KVM_DEV_ARM_VGIC_GRP_REDIST_REGS
+ u32 group = intid_is_private ? KVM_DEV_ARM_VGIC_GRP_REDIST_REGS
: KVM_DEV_ARM_VGIC_GRP_DIST_REGS;
if (intid_is_private) {
@@ -159,12 +159,12 @@ static void vgic_poke_irq(int gic_fd, uint32_t intid, struct kvm_vcpu *vcpu,
kvm_device_attr_set(gic_fd, group, attr, &val);
}
-void kvm_irq_write_ispendr(int gic_fd, uint32_t intid, struct kvm_vcpu *vcpu)
+void kvm_irq_write_ispendr(int gic_fd, u32 intid, struct kvm_vcpu *vcpu)
{
vgic_poke_irq(gic_fd, intid, vcpu, GICD_ISPENDR);
}
-void kvm_irq_write_isactiver(int gic_fd, uint32_t intid, struct kvm_vcpu *vcpu)
+void kvm_irq_write_isactiver(int gic_fd, u32 intid, struct kvm_vcpu *vcpu)
{
vgic_poke_irq(gic_fd, intid, vcpu, GICD_ISACTIVER);
}
diff --git a/tools/testing/selftests/kvm/lib/guest_modes.c b/tools/testing/selftests/kvm/lib/guest_modes.c
index b04901e55138..c67cb7b86eb3 100644
--- a/tools/testing/selftests/kvm/lib/guest_modes.c
+++ b/tools/testing/selftests/kvm/lib/guest_modes.c
@@ -18,7 +18,7 @@ void guest_modes_append_default(void)
#else
{
unsigned int limit = kvm_check_cap(KVM_CAP_ARM_VM_IPA_SIZE);
- uint32_t ipa4k, ipa16k, ipa64k;
+ u32 ipa4k, ipa16k, ipa64k;
int i;
aarch64_get_supported_page_sizes(limit, &ipa4k, &ipa16k, &ipa64k);
diff --git a/tools/testing/selftests/kvm/lib/guest_sprintf.c b/tools/testing/selftests/kvm/lib/guest_sprintf.c
index 224de8a3f862..768e12cd8d1d 100644
--- a/tools/testing/selftests/kvm/lib/guest_sprintf.c
+++ b/tools/testing/selftests/kvm/lib/guest_sprintf.c
@@ -35,8 +35,8 @@ static int skip_atoi(const char **s)
({ \
int __res; \
\
- __res = ((u64)n) % (uint32_t) base; \
- n = ((u64)n) / (uint32_t) base; \
+ __res = ((u64)n) % (u32)base; \
+ n = ((u64)n) / (u32)base; \
__res; \
})
@@ -292,7 +292,7 @@ int guest_vsnprintf(char *buf, int n, const char *fmt, va_list args)
} else if (flags & SIGN)
num = va_arg(args, int);
else
- num = va_arg(args, uint32_t);
+ num = va_arg(args, u32);
str = number(str, end, num, base, field_width, precision, flags);
}
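
The divmod helper above peels decimal digits least-significant first; a
short illustrative trace, assuming n = 1234 and base = 10 (how number()
buffers and orders the digits is outside this hunk):

	/* pass 1: __res = 1234 % 10 = 4, n becomes 123
	 * pass 2: __res = 123 % 10 = 3, n becomes 12
	 * pass 3: __res = 2, n becomes 1; pass 4: __res = 1, n becomes 0 */
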
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1b46de455f2d..ade04f83485e 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -20,9 +20,9 @@
#define KVM_UTIL_MIN_PFN 2
-uint32_t guest_random_seed;
+u32 guest_random_seed;
struct guest_random_state guest_rng;
-static uint32_t last_guest_seed;
+static u32 last_guest_seed;
static int vcpu_mmap_sz(void);
@@ -180,7 +180,7 @@ unsigned int kvm_check_cap(long cap)
return (unsigned int)ret;
}
-void vm_enable_dirty_ring(struct kvm_vm *vm, uint32_t ring_size)
+void vm_enable_dirty_ring(struct kvm_vm *vm, u32 ring_size)
{
if (vm_check_cap(vm, KVM_CAP_DIRTY_LOG_RING_ACQ_REL))
vm_enable_cap(vm, KVM_CAP_DIRTY_LOG_RING_ACQ_REL, ring_size);
@@ -204,7 +204,7 @@ static void vm_open(struct kvm_vm *vm)
vm->stats.fd = -1;
}
-const char *vm_guest_mode_string(uint32_t i)
+const char *vm_guest_mode_string(u32 i)
{
static const char * const strings[] = {
[VM_MODE_P52V48_4K] = "PA-bits:52, VA-bits:48, 4K pages",
@@ -374,7 +374,7 @@ struct kvm_vm *____vm_create(struct vm_shape shape)
}
static u64 vm_nr_pages_required(enum vm_guest_mode mode,
- uint32_t nr_runnable_vcpus,
+ u32 nr_runnable_vcpus,
u64 extra_mem_pages)
{
u64 page_size = vm_guest_mode_params[mode].page_size;
@@ -412,7 +412,7 @@ static u64 vm_nr_pages_required(enum vm_guest_mode mode,
return vm_adjust_num_guest_pages(mode, nr_pages);
}
-void kvm_set_files_rlimit(uint32_t nr_vcpus)
+void kvm_set_files_rlimit(u32 nr_vcpus)
{
/*
* Each vCPU will open two file descriptors: the vCPU itself and the
@@ -444,7 +444,7 @@ void kvm_set_files_rlimit(uint32_t nr_vcpus)
}
-struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
+struct kvm_vm *__vm_create(struct vm_shape shape, u32 nr_runnable_vcpus,
u64 nr_extra_pages)
{
u64 nr_pages = vm_nr_pages_required(shape.mode, nr_runnable_vcpus,
@@ -506,7 +506,7 @@ struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
* extra_mem_pages is only used to calculate the maximum page table size,
* no real memory allocation for non-slot0 memory in this function.
*/
-struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
+struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, u32 nr_vcpus,
u64 extra_mem_pages,
void *guest_code, struct kvm_vcpu *vcpus[])
{
@@ -573,7 +573,7 @@ void kvm_vm_restart(struct kvm_vm *vmp)
}
__weak struct kvm_vcpu *vm_arch_vcpu_recreate(struct kvm_vm *vm,
- uint32_t vcpu_id)
+ u32 vcpu_id)
{
return __vm_vcpu_add(vm, vcpu_id);
}
@@ -585,7 +585,7 @@ struct kvm_vcpu *vm_recreate_with_one_vcpu(struct kvm_vm *vm)
return vm_vcpu_recreate(vm, 0);
}
-void kvm_pin_this_task_to_pcpu(uint32_t pcpu)
+void kvm_pin_this_task_to_pcpu(u32 pcpu)
{
cpu_set_t mask;
int r;
@@ -596,9 +596,9 @@ void kvm_pin_this_task_to_pcpu(uint32_t pcpu)
TEST_ASSERT(!r, "sched_setaffinity() failed for pCPU '%u'.", pcpu);
}
-static uint32_t parse_pcpu(const char *cpu_str, const cpu_set_t *allowed_mask)
+static u32 parse_pcpu(const char *cpu_str, const cpu_set_t *allowed_mask)
{
- uint32_t pcpu = atoi_non_negative("CPU number", cpu_str);
+ u32 pcpu = atoi_non_negative("CPU number", cpu_str);
TEST_ASSERT(CPU_ISSET(pcpu, allowed_mask),
"Not allowed to run on pCPU '%d', check cgroups?", pcpu);
@@ -622,7 +622,7 @@ void kvm_print_vcpu_pinning_help(void)
" (default: no pinning)\n", name, name);
}
-void kvm_parse_vcpu_pinning(const char *pcpus_string, uint32_t vcpu_to_pcpu[],
+void kvm_parse_vcpu_pinning(const char *pcpus_string, u32 vcpu_to_pcpu[],
int nr_vcpus)
{
cpu_set_t allowed_mask;
@@ -896,7 +896,7 @@ static void vm_userspace_mem_region_hva_insert(struct rb_root *hva_tree,
}
-int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
+int __vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
u64 gpa, u64 size, void *hva)
{
struct kvm_userspace_memory_region region = {
@@ -910,7 +910,7 @@ int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags
return ioctl(vm->fd, KVM_SET_USER_MEMORY_REGION, &region);
}
-void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
+void vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
u64 gpa, u64 size, void *hva)
{
int ret = __vm_set_user_memory_region(vm, slot, flags, gpa, size, hva);
@@ -923,9 +923,9 @@ void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
__TEST_REQUIRE(kvm_has_cap(KVM_CAP_USER_MEMORY2), \
"KVM selftests now require KVM_SET_USER_MEMORY_REGION2 (introduced in v6.8)")
-int __vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
+int __vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
u64 gpa, u64 size, void *hva,
- uint32_t guest_memfd, u64 guest_memfd_offset)
+ u32 guest_memfd, u64 guest_memfd_offset)
{
struct kvm_userspace_memory_region2 region = {
.slot = slot,
@@ -942,9 +942,9 @@ int __vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flag
return ioctl(vm->fd, KVM_SET_USER_MEMORY_REGION2, &region);
}
-void vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
+void vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
u64 gpa, u64 size, void *hva,
- uint32_t guest_memfd, u64 guest_memfd_offset)
+ u32 guest_memfd, u64 guest_memfd_offset)
{
int ret = __vm_set_user_memory_region2(vm, slot, flags, gpa, size, hva,
guest_memfd, guest_memfd_offset);
@@ -956,8 +956,8 @@ void vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags
/* FIXME: This thing needs to be ripped apart and rewritten. */
void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
- u64 guest_paddr, uint32_t slot, u64 npages,
- uint32_t flags, int guest_memfd, u64 guest_memfd_offset)
+ u64 guest_paddr, u32 slot, u64 npages,
+ u32 flags, int guest_memfd, u64 guest_memfd_offset)
{
int ret;
struct userspace_mem_region *region;
@@ -1075,7 +1075,7 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
if (flags & KVM_MEM_GUEST_MEMFD) {
if (guest_memfd < 0) {
- uint32_t guest_memfd_flags = 0;
+ u32 guest_memfd_flags = 0;
TEST_ASSERT(!guest_memfd_offset,
"Offset must be zero when creating new guest_memfd");
guest_memfd = vm_create_guest_memfd(vm, mem_size, guest_memfd_flags);
@@ -1136,8 +1136,8 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
void vm_userspace_mem_region_add(struct kvm_vm *vm,
enum vm_mem_backing_src_type src_type,
- u64 guest_paddr, uint32_t slot,
- u64 npages, uint32_t flags)
+ u64 guest_paddr, u32 slot,
+ u64 npages, u32 flags)
{
vm_mem_add(vm, src_type, guest_paddr, slot, npages, flags, -1, 0);
}
@@ -1158,7 +1158,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
* memory slot ID).
*/
struct userspace_mem_region *
-memslot2region(struct kvm_vm *vm, uint32_t memslot)
+memslot2region(struct kvm_vm *vm, u32 memslot)
{
struct userspace_mem_region *region;
@@ -1189,7 +1189,7 @@ memslot2region(struct kvm_vm *vm, uint32_t memslot)
* Sets the flags of the memory region specified by the value of slot,
* to the values given by flags.
*/
-void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags)
+void vm_mem_region_set_flags(struct kvm_vm *vm, u32 slot, u32 flags)
{
int ret;
struct userspace_mem_region *region;
@@ -1219,7 +1219,7 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags)
*
* Change the gpa of a memory region.
*/
-void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, u64 new_gpa)
+void vm_mem_region_move(struct kvm_vm *vm, u32 slot, u64 new_gpa)
{
struct userspace_mem_region *region;
int ret;
@@ -1248,7 +1248,7 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, u64 new_gpa)
*
* Delete a memory region.
*/
-void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot)
+void vm_mem_region_delete(struct kvm_vm *vm, u32 slot)
{
struct userspace_mem_region *region = memslot2region(vm, slot);
@@ -1302,7 +1302,7 @@ static int vcpu_mmap_sz(void)
return ret;
}
-static bool vcpu_exists(struct kvm_vm *vm, uint32_t vcpu_id)
+static bool vcpu_exists(struct kvm_vm *vm, u32 vcpu_id)
{
struct kvm_vcpu *vcpu;
@@ -1318,7 +1318,7 @@ static bool vcpu_exists(struct kvm_vm *vm, uint32_t vcpu_id)
* Adds a virtual CPU to the VM specified by vm with the ID given by vcpu_id.
* No additional vCPU setup is done. Returns the vCPU.
*/
-struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
+struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
{
struct kvm_vcpu *vcpu;
@@ -1759,8 +1759,8 @@ struct kvm_reg_list *vcpu_get_reg_list(struct kvm_vcpu *vcpu)
void *vcpu_map_dirty_ring(struct kvm_vcpu *vcpu)
{
- uint32_t page_size = getpagesize();
- uint32_t size = vcpu->vm->dirty_ring_size;
+ u32 page_size = getpagesize();
+ u32 size = vcpu->vm->dirty_ring_size;
TEST_ASSERT(size > 0, "Should enable dirty ring first");
@@ -1790,7 +1790,7 @@ void *vcpu_map_dirty_ring(struct kvm_vcpu *vcpu)
* Device Ioctl
*/
-int __kvm_has_device_attr(int dev_fd, uint32_t group, u64 attr)
+int __kvm_has_device_attr(int dev_fd, u32 group, u64 attr)
{
struct kvm_device_attr attribute = {
.group = group,
@@ -1825,7 +1825,7 @@ int __kvm_create_device(struct kvm_vm *vm, u64 type)
return err ? : create_dev.fd;
}
-int __kvm_device_attr_get(int dev_fd, uint32_t group, u64 attr, void *val)
+int __kvm_device_attr_get(int dev_fd, u32 group, u64 attr, void *val)
{
struct kvm_device_attr kvmattr = {
.group = group,
@@ -1837,7 +1837,7 @@ int __kvm_device_attr_get(int dev_fd, uint32_t group, u64 attr, void *val)
return __kvm_ioctl(dev_fd, KVM_GET_DEVICE_ATTR, &kvmattr);
}
-int __kvm_device_attr_set(int dev_fd, uint32_t group, u64 attr, void *val)
+int __kvm_device_attr_set(int dev_fd, u32 group, u64 attr, void *val)
{
struct kvm_device_attr kvmattr = {
.group = group,
@@ -1853,7 +1853,7 @@ int __kvm_device_attr_set(int dev_fd, uint32_t group, u64 attr, void *val)
* IRQ related functions.
*/
-int _kvm_irq_line(struct kvm_vm *vm, uint32_t irq, int level)
+int _kvm_irq_line(struct kvm_vm *vm, u32 irq, int level)
{
struct kvm_irq_level irq_level = {
.irq = irq,
@@ -1863,7 +1863,7 @@ int _kvm_irq_line(struct kvm_vm *vm, uint32_t irq, int level)
return __vm_ioctl(vm, KVM_IRQ_LINE, &irq_level);
}
-void kvm_irq_line(struct kvm_vm *vm, uint32_t irq, int level)
+void kvm_irq_line(struct kvm_vm *vm, u32 irq, int level)
{
int ret = _kvm_irq_line(vm, irq, level);
@@ -1885,7 +1885,7 @@ struct kvm_irq_routing *kvm_gsi_routing_create(void)
}
void kvm_gsi_routing_irqchip_add(struct kvm_irq_routing *routing,
- uint32_t gsi, uint32_t pin)
+ u32 gsi, u32 pin)
{
int i;
@@ -2070,7 +2070,7 @@ const char *exit_reason_str(unsigned int exit_reason)
* not enough pages are available at or above paddr_min.
*/
gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- gpa_t paddr_min, uint32_t memslot,
+ gpa_t paddr_min, u32 memslot,
bool protected)
{
struct userspace_mem_region *region;
@@ -2115,7 +2115,7 @@ gpa_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
return base * vm->page_size;
}
-gpa_t vm_phy_page_alloc(struct kvm_vm *vm, gpa_t paddr_min, uint32_t memslot)
+gpa_t vm_phy_page_alloc(struct kvm_vm *vm, gpa_t paddr_min, u32 memslot)
{
return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
}
diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index f6657bd34b80..d9b0d8ba232e 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -44,7 +44,7 @@ static struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
* Continuously write to the first 8 bytes of each page in the
* specified region.
*/
-void memstress_guest_code(uint32_t vcpu_idx)
+void memstress_guest_code(u32 vcpu_idx)
{
struct memstress_args *args = &memstress_args;
struct memstress_vcpu_args *vcpu_args = &args->vcpu_args[vcpu_idx];
@@ -236,7 +236,7 @@ void memstress_destroy_vm(struct kvm_vm *vm)
kvm_vm_free(vm);
}
-void memstress_set_write_percent(struct kvm_vm *vm, uint32_t write_percent)
+void memstress_set_write_percent(struct kvm_vm *vm, u32 write_percent)
{
memstress_args.write_percent = write_percent;
sync_global_to_guest(vm, memstress_args.write_percent);
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index df0403adccac..19db0671a390 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -49,7 +49,7 @@ static u64 pte_index_mask[] = {
PGTBL_L3_INDEX_MASK,
};
-static uint32_t pte_index_shift[] = {
+static u32 pte_index_shift[] = {
PGTBL_L0_INDEX_SHIFT,
PGTBL_L1_INDEX_SHIFT,
PGTBL_L2_INDEX_SHIFT,
@@ -295,7 +295,7 @@ void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code)
vcpu_set_reg(vcpu, RISCV_CORE_REG(regs.pc), (unsigned long)guest_code);
}
-struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
+struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
{
int r;
size_t stack_size;
@@ -454,7 +454,7 @@ void vm_install_interrupt_handler(struct kvm_vm *vm, exception_handler_fn handle
handlers->exception_handlers[1][0] = handler;
}
-uint32_t guest_get_vcpuid(void)
+u32 guest_get_vcpuid(void)
{
return csr_read(CSR_SSCRATCH);
}
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index 96f98cdca15b..5445a54b44bb 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -160,7 +160,7 @@ void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code)
vcpu->run->psw_addr = (uintptr_t)guest_code;
}
-struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
+struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
{
size_t stack_size = DEFAULT_STACK_PGS * getpagesize();
u64 stack_vaddr;
diff --git a/tools/testing/selftests/kvm/lib/sparsebit.c b/tools/testing/selftests/kvm/lib/sparsebit.c
index df6d888d71e9..2789d34436e6 100644
--- a/tools/testing/selftests/kvm/lib/sparsebit.c
+++ b/tools/testing/selftests/kvm/lib/sparsebit.c
@@ -80,7 +80,7 @@
* typedef u64 sparsebit_num_t;
*
* sparsebit_idx_t idx;
- * uint32_t mask;
+ * u32 mask;
* sparsebit_num_t num_after;
*
* The idx member contains the bit index of the first bit described by this
@@ -162,7 +162,7 @@
#define DUMP_LINE_MAX 100 /* Does not include indent amount */
-typedef uint32_t mask_t;
+typedef u32 mask_t;
#define MASK_BITS (sizeof(mask_t) * CHAR_BIT)
struct node {
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 06378718d67d..31a3fa50e44a 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -23,15 +23,15 @@
* Park-Miller LCG using standard constants.
*/
-struct guest_random_state new_guest_random_state(uint32_t seed)
+struct guest_random_state new_guest_random_state(u32 seed)
{
struct guest_random_state s = {.seed = seed};
return s;
}
-uint32_t guest_random_u32(struct guest_random_state *state)
+u32 guest_random_u32(struct guest_random_state *state)
{
- state->seed = (u64)state->seed * 48271 % ((uint32_t)(1 << 31) - 1);
+ state->seed = (u64)state->seed * 48271 % ((u32)(1 << 31) - 1);
return state->seed;
}
@@ -198,7 +198,7 @@ size_t get_def_hugetlb_pagesz(void)
#define ANON_FLAGS (MAP_PRIVATE | MAP_ANONYMOUS)
#define ANON_HUGE_FLAGS (ANON_FLAGS | MAP_HUGETLB)
-const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i)
+const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(u32 i)
{
static const struct vm_mem_backing_src_alias aliases[] = {
[VM_MEM_SRC_ANONYMOUS] = {
@@ -290,9 +290,9 @@ const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i)
#define MAP_HUGE_PAGE_SIZE(x) (1ULL << ((x >> MAP_HUGE_SHIFT) & MAP_HUGE_MASK))
-size_t get_backing_src_pagesz(uint32_t i)
+size_t get_backing_src_pagesz(u32 i)
{
- uint32_t flag = vm_mem_backing_src_alias(i)->flag;
+ u32 flag = vm_mem_backing_src_alias(i)->flag;
switch (i) {
case VM_MEM_SRC_ANONYMOUS:
@@ -308,7 +308,7 @@ size_t get_backing_src_pagesz(uint32_t i)
}
}
-bool is_backing_src_hugetlb(uint32_t i)
+bool is_backing_src_hugetlb(u32 i)
{
return !!(vm_mem_backing_src_alias(i)->flag & MAP_HUGETLB);
}
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 33be57ae6807..e3ca7001b436 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -659,7 +659,7 @@ void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code)
vcpu_regs_set(vcpu, &regs);
}
-struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
+struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, u32 vcpu_id)
{
struct kvm_mp_state mp_state;
struct kvm_regs regs;
@@ -710,7 +710,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
return vcpu;
}
-struct kvm_vcpu *vm_arch_vcpu_recreate(struct kvm_vm *vm, uint32_t vcpu_id)
+struct kvm_vcpu *vm_arch_vcpu_recreate(struct kvm_vm *vm, u32 vcpu_id)
{
struct kvm_vcpu *vcpu = __vm_vcpu_add(vm, vcpu_id);
@@ -745,9 +745,9 @@ const struct kvm_cpuid2 *kvm_get_supported_cpuid(void)
return kvm_supported_cpuid;
}
-static uint32_t __kvm_cpu_has(const struct kvm_cpuid2 *cpuid,
- uint32_t function, uint32_t index,
- uint8_t reg, uint8_t lo, uint8_t hi)
+static u32 __kvm_cpu_has(const struct kvm_cpuid2 *cpuid,
+ u32 function, u32 index,
+ uint8_t reg, uint8_t lo, uint8_t hi)
{
const struct kvm_cpuid_entry2 *entry;
int i;
@@ -774,8 +774,8 @@ bool kvm_cpuid_has(const struct kvm_cpuid2 *cpuid,
feature.reg, feature.bit, feature.bit);
}
-uint32_t kvm_cpuid_property(const struct kvm_cpuid2 *cpuid,
- struct kvm_x86_cpu_property property)
+u32 kvm_cpuid_property(const struct kvm_cpuid2 *cpuid,
+ struct kvm_x86_cpu_property property)
{
return __kvm_cpu_has(cpuid, property.function, property.index,
property.reg, property.lo_bit, property.hi_bit);
@@ -857,7 +857,7 @@ void vcpu_init_cpuid(struct kvm_vcpu *vcpu, const struct kvm_cpuid2 *cpuid)
void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu,
struct kvm_x86_cpu_property property,
- uint32_t value)
+ u32 value)
{
struct kvm_cpuid_entry2 *entry;
@@ -872,7 +872,7 @@ void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu,
TEST_ASSERT_EQ(kvm_cpuid_property(vcpu->cpuid, property), value);
}
-void vcpu_clear_cpuid_entry(struct kvm_vcpu *vcpu, uint32_t function)
+void vcpu_clear_cpuid_entry(struct kvm_vcpu *vcpu, u32 function)
{
struct kvm_cpuid_entry2 *entry = vcpu_get_cpuid_entry(vcpu, function);
@@ -1034,7 +1034,7 @@ const struct kvm_msr_list *kvm_get_feature_msr_index_list(void)
return list;
}
-bool kvm_msr_is_in_save_restore_list(uint32_t msr_index)
+bool kvm_msr_is_in_save_restore_list(u32 msr_index)
{
const struct kvm_msr_list *list = kvm_get_msr_index_list();
int i;
@@ -1165,7 +1165,7 @@ void kvm_init_vm_address_properties(struct kvm_vm *vm)
}
const struct kvm_cpuid_entry2 *get_cpuid_entry(const struct kvm_cpuid2 *cpuid,
- uint32_t function, uint32_t index)
+ u32 function, u32 index)
{
int i;
diff --git a/tools/testing/selftests/kvm/lib/x86/sev.c b/tools/testing/selftests/kvm/lib/x86/sev.c
index e677eeeb05f7..dba0aa744561 100644
--- a/tools/testing/selftests/kvm/lib/x86/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86/sev.c
@@ -60,7 +60,7 @@ void sev_es_vm_init(struct kvm_vm *vm)
}
}
-void sev_vm_launch(struct kvm_vm *vm, uint32_t policy)
+void sev_vm_launch(struct kvm_vm *vm, u32 policy)
{
struct kvm_sev_launch_start launch_start = {
.policy = policy,
@@ -112,7 +112,7 @@ void sev_vm_launch_finish(struct kvm_vm *vm)
TEST_ASSERT_EQ(status.state, SEV_GUEST_STATE_RUNNING);
}
-struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t type, void *guest_code,
+struct kvm_vm *vm_sev_create_with_one_vcpu(u32 type, void *guest_code,
struct kvm_vcpu **cpu)
{
struct vm_shape shape = {
@@ -128,7 +128,7 @@ struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t type, void *guest_code,
return vm;
}
-void vm_sev_launch(struct kvm_vm *vm, uint32_t policy, uint8_t *measurement)
+void vm_sev_launch(struct kvm_vm *vm, u32 policy, uint8_t *measurement)
{
sev_vm_launch(vm, policy);
diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index 11f89ffc28bc..8d7b759a403c 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -148,7 +148,7 @@ bool prepare_for_vmx_operation(struct vmx_pages *vmx)
wrmsr(MSR_IA32_FEAT_CTL, feature_control | required);
/* Enter VMX root operation. */
- *(uint32_t *)(vmx->vmxon) = vmcs_revision();
+ *(u32 *)(vmx->vmxon) = vmcs_revision();
if (vmxon(vmx->vmxon_gpa))
return false;
@@ -158,7 +158,7 @@ bool prepare_for_vmx_operation(struct vmx_pages *vmx)
bool load_vmcs(struct vmx_pages *vmx)
{
/* Load a VMCS. */
- *(uint32_t *)(vmx->vmcs) = vmcs_revision();
+ *(u32 *)(vmx->vmcs) = vmcs_revision();
if (vmclear(vmx->vmcs_gpa))
return false;
@@ -166,7 +166,7 @@ bool load_vmcs(struct vmx_pages *vmx)
return false;
/* Setup shadow VMCS, do not load it yet. */
- *(uint32_t *)(vmx->shadow_vmcs) = vmcs_revision() | 0x80000000ul;
+ *(u32 *)(vmx->shadow_vmcs) = vmcs_revision() | 0x80000000ul;
if (vmclear(vmx->shadow_vmcs_gpa))
return false;
@@ -188,7 +188,7 @@ bool ept_1g_pages_supported(void)
*/
static inline void init_vmcs_control_fields(struct vmx_pages *vmx)
{
- uint32_t sec_exec_ctl = 0;
+ u32 sec_exec_ctl = 0;
vmwrite(VIRTUAL_PROCESSOR_ID, 0);
vmwrite(POSTED_INTR_NV, 0);
@@ -248,7 +248,7 @@ static inline void init_vmcs_control_fields(struct vmx_pages *vmx)
*/
static inline void init_vmcs_host_state(void)
{
- uint32_t exit_controls = vmreadz(VM_EXIT_CONTROLS);
+ u32 exit_controls = vmreadz(VM_EXIT_CONTROLS);
vmwrite(HOST_ES_SELECTOR, get_es());
vmwrite(HOST_CS_SELECTOR, get_cs());
@@ -495,7 +495,7 @@ void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
* physical pages in VM.
*/
void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint32_t memslot)
+ u32 memslot)
{
sparsebit_idx_t i, last;
struct userspace_mem_region *region =
@@ -535,7 +535,7 @@ bool kvm_cpu_has_ept(void)
}
void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint32_t eptp_memslot)
+ u32 eptp_memslot)
{
TEST_ASSERT(kvm_cpu_has_ept(), "KVM doesn't support nested EPT");
diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index 75c54c277690..29b2bb605ee6 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -84,7 +84,7 @@ struct vm_data {
struct kvm_vm *vm;
struct kvm_vcpu *vcpu;
pthread_t vcpu_thread;
- uint32_t nslots;
+ u32 nslots;
u64 npages;
u64 pages_per_slot;
void **hva_slots;
@@ -94,7 +94,7 @@ struct vm_data {
};
struct sync_area {
- uint32_t guest_page_size;
+ u32 guest_page_size;
atomic_bool start_flag;
atomic_bool exit_flag;
atomic_bool sync_flag;
@@ -188,9 +188,9 @@ static void wait_for_vcpu(void)
static void *vm_gpa2hva(struct vm_data *data, u64 gpa, u64 *rempages)
{
u64 gpage, pgoffs;
- uint32_t slot, slotoffs;
+ u32 slot, slotoffs;
void *base;
- uint32_t guest_page_size = data->vm->page_size;
+ u32 guest_page_size = data->vm->page_size;
TEST_ASSERT(gpa >= MEM_GPA, "Too low gpa to translate");
TEST_ASSERT(gpa < MEM_GPA + data->npages * guest_page_size,
@@ -219,9 +219,9 @@ static void *vm_gpa2hva(struct vm_data *data, u64 gpa, u64 *rempages)
return (uint8_t *)base + slotoffs * guest_page_size + pgoffs;
}
-static u64 vm_slot2gpa(struct vm_data *data, uint32_t slot)
+static u64 vm_slot2gpa(struct vm_data *data, u32 slot)
{
- uint32_t guest_page_size = data->vm->page_size;
+ u32 guest_page_size = data->vm->page_size;
TEST_ASSERT(slot < data->nslots, "Too high slot number");
@@ -242,7 +242,7 @@ static struct vm_data *alloc_vm(void)
return data;
}
-static bool check_slot_pages(uint32_t host_page_size, uint32_t guest_page_size,
+static bool check_slot_pages(u32 host_page_size, u32 guest_page_size,
u64 pages_per_slot, u64 rempages)
{
if (!pages_per_slot)
@@ -258,9 +258,9 @@ static bool check_slot_pages(uint32_t host_page_size, uint32_t guest_page_size,
}
-static u64 get_max_slots(struct vm_data *data, uint32_t host_page_size)
+static u64 get_max_slots(struct vm_data *data, u32 host_page_size)
{
- uint32_t guest_page_size = data->vm->page_size;
+ u32 guest_page_size = data->vm->page_size;
u64 mempages, pages_per_slot, rempages;
u64 slots;
@@ -286,7 +286,7 @@ static bool prepare_vm(struct vm_data *data, int nslots, u64 *maxslots,
{
u64 mempages, rempages;
u64 guest_addr;
- uint32_t slot, host_page_size, guest_page_size;
+ u32 slot, host_page_size, guest_page_size;
struct timespec tstart;
struct sync_area *sync;
@@ -447,7 +447,7 @@ static bool guest_perform_sync(void)
static void guest_code_test_memslot_move(void)
{
struct sync_area *sync = (typeof(sync))MEM_SYNC_GPA;
- uint32_t page_size = (typeof(page_size))READ_ONCE(sync->guest_page_size);
+ u32 page_size = (typeof(page_size))READ_ONCE(sync->guest_page_size);
uintptr_t base = (typeof(base))READ_ONCE(sync->move_area_ptr);
GUEST_SYNC(0);
@@ -476,7 +476,7 @@ static void guest_code_test_memslot_move(void)
static void guest_code_test_memslot_map(void)
{
struct sync_area *sync = (typeof(sync))MEM_SYNC_GPA;
- uint32_t page_size = (typeof(page_size))READ_ONCE(sync->guest_page_size);
+ u32 page_size = (typeof(page_size))READ_ONCE(sync->guest_page_size);
GUEST_SYNC(0);
@@ -543,7 +543,7 @@ static void guest_code_test_memslot_unmap(void)
static void guest_code_test_memslot_rw(void)
{
struct sync_area *sync = (typeof(sync))MEM_SYNC_GPA;
- uint32_t page_size = (typeof(page_size))READ_ONCE(sync->guest_page_size);
+ u32 page_size = (typeof(page_size))READ_ONCE(sync->guest_page_size);
GUEST_SYNC(0);
@@ -578,7 +578,7 @@ static bool test_memslot_move_prepare(struct vm_data *data,
struct sync_area *sync,
u64 *maxslots, bool isactive)
{
- uint32_t guest_page_size = data->vm->page_size;
+ u32 guest_page_size = data->vm->page_size;
u64 movesrcgpa, movetestgpa;
#ifdef __x86_64__
@@ -638,7 +638,7 @@ static void test_memslot_do_unmap(struct vm_data *data,
u64 offsp, u64 count)
{
u64 gpa, ctr;
- uint32_t guest_page_size = data->vm->page_size;
+ u32 guest_page_size = data->vm->page_size;
for (gpa = MEM_TEST_GPA + offsp * guest_page_size, ctr = 0; ctr < count; ) {
u64 npages;
@@ -664,7 +664,7 @@ static void test_memslot_map_unmap_check(struct vm_data *data,
{
u64 gpa;
u64 *val;
- uint32_t guest_page_size = data->vm->page_size;
+ u32 guest_page_size = data->vm->page_size;
if (!map_unmap_verify)
return;
@@ -679,7 +679,7 @@ static void test_memslot_map_unmap_check(struct vm_data *data,
static void test_memslot_map_loop(struct vm_data *data, struct sync_area *sync)
{
- uint32_t guest_page_size = data->vm->page_size;
+ u32 guest_page_size = data->vm->page_size;
u64 guest_pages = MEM_TEST_MAP_SIZE / guest_page_size;
/*
@@ -719,7 +719,7 @@ static void test_memslot_unmap_loop_common(struct vm_data *data,
struct sync_area *sync,
u64 chunk)
{
- uint32_t guest_page_size = data->vm->page_size;
+ u32 guest_page_size = data->vm->page_size;
u64 guest_pages = MEM_TEST_UNMAP_SIZE / guest_page_size;
u64 ctr;
@@ -745,8 +745,8 @@ static void test_memslot_unmap_loop_common(struct vm_data *data,
static void test_memslot_unmap_loop(struct vm_data *data,
struct sync_area *sync)
{
- uint32_t host_page_size = getpagesize();
- uint32_t guest_page_size = data->vm->page_size;
+ u32 host_page_size = getpagesize();
+ u32 guest_page_size = data->vm->page_size;
u64 guest_chunk_pages = guest_page_size >= host_page_size ?
1 : host_page_size / guest_page_size;
@@ -756,7 +756,7 @@ static void test_memslot_unmap_loop(struct vm_data *data,
static void test_memslot_unmap_loop_chunked(struct vm_data *data,
struct sync_area *sync)
{
- uint32_t guest_page_size = data->vm->page_size;
+ u32 guest_page_size = data->vm->page_size;
u64 guest_chunk_pages = MEM_TEST_UNMAP_CHUNK_SIZE / guest_page_size;
test_memslot_unmap_loop_common(data, sync, guest_chunk_pages);
@@ -765,7 +765,7 @@ static void test_memslot_unmap_loop_chunked(struct vm_data *data,
static void test_memslot_rw_loop(struct vm_data *data, struct sync_area *sync)
{
u64 gptr;
- uint32_t guest_page_size = data->vm->page_size;
+ u32 guest_page_size = data->vm->page_size;
for (gptr = MEM_TEST_GPA + guest_page_size / 2;
gptr < MEM_TEST_GPA + MEM_TEST_SIZE; gptr += guest_page_size)
@@ -923,8 +923,8 @@ static void help(char *name, struct test_args *targs)
static bool check_memory_sizes(void)
{
- uint32_t host_page_size = getpagesize();
- uint32_t guest_page_size = vm_guest_mode_params[VM_MODE_DEFAULT].page_size;
+ u32 host_page_size = getpagesize();
+ u32 guest_page_size = vm_guest_mode_params[VM_MODE_DEFAULT].page_size;
if (host_page_size > SZ_64K || guest_page_size > SZ_64K) {
pr_info("Unsupported page size on host (0x%x) or guest (0x%x)\n",
@@ -960,7 +960,7 @@ static bool check_memory_sizes(void)
static bool parse_args(int argc, char *argv[],
struct test_args *targs)
{
- uint32_t max_mem_slots;
+ u32 max_mem_slots;
int opt;
while ((opt = getopt(argc, argv, "hvdqs:f:e:l:r:")) != -1) {
diff --git a/tools/testing/selftests/kvm/riscv/arch_timer.c b/tools/testing/selftests/kvm/riscv/arch_timer.c
index e8ddb168c13e..b744663588fb 100644
--- a/tools/testing/selftests/kvm/riscv/arch_timer.c
+++ b/tools/testing/selftests/kvm/riscv/arch_timer.c
@@ -19,7 +19,7 @@ static void guest_irq_handler(struct ex_regs *regs)
{
u64 xcnt, xcnt_diff_us, cmp;
unsigned int intid = regs->cause & ~CAUSE_IRQ_FLAG;
- uint32_t cpu = guest_get_vcpuid();
+ u32 cpu = guest_get_vcpuid();
struct test_vcpu_shared_data *shared_data = &vcpu_shared_data[cpu];
timer_irq_disable();
@@ -40,7 +40,7 @@ static void guest_irq_handler(struct ex_regs *regs)
static void guest_run(struct test_vcpu_shared_data *shared_data)
{
- uint32_t irq_iter, config_iter;
+ u32 irq_iter, config_iter;
shared_data->nr_iter = 0;
shared_data->guest_stage = 0;
@@ -66,7 +66,7 @@ static void guest_run(struct test_vcpu_shared_data *shared_data)
static void guest_code(void)
{
- uint32_t cpu = guest_get_vcpuid();
+ u32 cpu = guest_get_vcpuid();
struct test_vcpu_shared_data *shared_data = &vcpu_shared_data[cpu];
timer_irq_disable();
diff --git a/tools/testing/selftests/kvm/s390/memop.c b/tools/testing/selftests/kvm/s390/memop.c
index a6f90821835e..fc640f3c5176 100644
--- a/tools/testing/selftests/kvm/s390/memop.c
+++ b/tools/testing/selftests/kvm/s390/memop.c
@@ -42,11 +42,11 @@ struct mop_desc {
unsigned int _set_flags : 1;
unsigned int _sida_offset : 1;
unsigned int _ar : 1;
- uint32_t size;
+ u32 size;
enum mop_target target;
enum mop_access_mode mode;
void *buf;
- uint32_t sida_offset;
+ u32 sida_offset;
void *old;
uint8_t old_value[16];
bool *cmpxchg_success;
@@ -296,7 +296,7 @@ static void prepare_mem12(void)
TEST_ASSERT(!memcmp(p1, p2, size), "Memory contents do not match!")
static void default_write_read(struct test_info copy_cpu, struct test_info mop_cpu,
- enum mop_target mop_target, uint32_t size, uint8_t key)
+ enum mop_target mop_target, u32 size, uint8_t key)
{
prepare_mem12();
CHECK_N_DO(MOP, mop_cpu, mop_target, WRITE, mem1, size,
@@ -308,7 +308,7 @@ static void default_write_read(struct test_info copy_cpu, struct test_info mop_c
}
static void default_read(struct test_info copy_cpu, struct test_info mop_cpu,
- enum mop_target mop_target, uint32_t size, uint8_t key)
+ enum mop_target mop_target, u32 size, uint8_t key)
{
prepare_mem12();
CHECK_N_DO(MOP, mop_cpu, mop_target, WRITE, mem1, size, GADDR_V(mem1));
@@ -487,7 +487,7 @@ static __uint128_t cut_to_size(int size, __uint128_t val)
case 2:
return (uint16_t)val;
case 4:
- return (uint32_t)val;
+ return (u32)val;
case 8:
return (u64)val;
case 16:
@@ -585,15 +585,15 @@ static bool _cmpxchg(int size, void *target, __uint128_t *old_addr, __uint128_t
switch (size) {
case 4: {
- uint32_t old = *old_addr;
+ u32 old = *old_addr;
asm volatile ("cs %[old],%[new],%[address]"
: [old] "+d" (old),
- [address] "+Q" (*(uint32_t *)(target))
- : [new] "d" ((uint32_t)new)
+ [address] "+Q" (*(u32 *)(target))
+ : [new] "d" ((u32)new)
: "cc"
);
- ret = old == (uint32_t)*old_addr;
+ ret = old == (u32)*old_addr;
*old_addr = old;
return ret;
}
diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index 6c680fcf07a4..730f94cb1e86 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -345,8 +345,8 @@ static void test_zero_memory_regions(void)
static void test_invalid_memory_region_flags(void)
{
- uint32_t supported_flags = KVM_MEM_LOG_DIRTY_PAGES;
- const uint32_t v2_only_flags = KVM_MEM_GUEST_MEMFD;
+ u32 supported_flags = KVM_MEM_LOG_DIRTY_PAGES;
+ const u32 v2_only_flags = KVM_MEM_GUEST_MEMFD;
struct kvm_vm *vm;
int r, i;
@@ -410,8 +410,8 @@ static void test_add_max_memory_regions(void)
{
int ret;
struct kvm_vm *vm;
- uint32_t max_mem_slots;
- uint32_t slot;
+ u32 max_mem_slots;
+ u32 slot;
void *mem, *mem_aligned, *mem_extra;
size_t alignment;
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index 57f32a31d7ac..369d6290dcdc 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -42,7 +42,7 @@ static void check_status(struct kvm_steal_time *st)
static void guest_code(int cpu)
{
struct kvm_steal_time *st = st_gva[cpu];
- uint32_t version;
+ u32 version;
GUEST_ASSERT_EQ(rdmsr(MSR_KVM_STEAL_TIME), ((u64)st_gva[cpu] | KVM_MSR_ENABLED));
@@ -67,7 +67,7 @@ static bool is_steal_time_supported(struct kvm_vcpu *vcpu)
return kvm_cpu_has(X86_FEATURE_KVM_STEAL_TIME);
}
-static void steal_time_init(struct kvm_vcpu *vcpu, uint32_t i)
+static void steal_time_init(struct kvm_vcpu *vcpu, u32 i)
{
int ret;
@@ -82,7 +82,7 @@ static void steal_time_init(struct kvm_vcpu *vcpu, uint32_t i)
vcpu_set_msr(vcpu, MSR_KVM_STEAL_TIME, (ulong)st_gva[i] | KVM_MSR_ENABLED);
}
-static void steal_time_dump(struct kvm_vm *vm, uint32_t vcpu_idx)
+static void steal_time_dump(struct kvm_vm *vm, u32 vcpu_idx)
{
struct kvm_steal_time *st = addr_gva2hva(vm, (ulong)st_gva[vcpu_idx]);
@@ -109,12 +109,12 @@ static void steal_time_dump(struct kvm_vm *vm, uint32_t vcpu_idx)
#define PV_TIME_ST 0xc5000021
struct st_time {
- uint32_t rev;
- uint32_t attr;
+ u32 rev;
+ u32 attr;
u64 st_time;
};
-static s64 smccc(uint32_t func, u64 arg)
+static s64 smccc(u32 func, u64 arg)
{
struct arm_smccc_res res;
@@ -166,7 +166,7 @@ static bool is_steal_time_supported(struct kvm_vcpu *vcpu)
return !__vcpu_ioctl(vcpu, KVM_HAS_DEVICE_ATTR, &dev);
}
-static void steal_time_init(struct kvm_vcpu *vcpu, uint32_t i)
+static void steal_time_init(struct kvm_vcpu *vcpu, u32 i)
{
struct kvm_vm *vm = vcpu->vm;
u64 st_ipa;
@@ -195,7 +195,7 @@ static void steal_time_init(struct kvm_vcpu *vcpu, uint32_t i)
TEST_ASSERT(ret == -1 && errno == EEXIST, "Set IPA twice without EEXIST");
}
-static void steal_time_dump(struct kvm_vm *vm, uint32_t vcpu_idx)
+static void steal_time_dump(struct kvm_vm *vm, u32 vcpu_idx)
{
struct st_time *st = addr_gva2hva(vm, (ulong)st_gva[vcpu_idx]);
@@ -213,8 +213,8 @@ static void steal_time_dump(struct kvm_vm *vm, uint32_t vcpu_idx)
static gpa_t st_gpa[NR_VCPUS];
struct sta_struct {
- uint32_t sequence;
- uint32_t flags;
+ u32 sequence;
+ u32 flags;
u64 steal;
uint8_t preempted;
uint8_t pad[47];
@@ -243,7 +243,7 @@ static void check_status(struct sta_struct *st)
static void guest_code(int cpu)
{
struct sta_struct *st = st_gva[cpu];
- uint32_t sequence;
+ u32 sequence;
long out_val = 0;
bool probe;
@@ -276,7 +276,7 @@ static bool is_steal_time_supported(struct kvm_vcpu *vcpu)
return enabled;
}
-static void steal_time_init(struct kvm_vcpu *vcpu, uint32_t i)
+static void steal_time_init(struct kvm_vcpu *vcpu, u32 i)
{
/* ST_GPA_BASE is identity mapped */
st_gva[i] = (void *)(ST_GPA_BASE + i * STEAL_TIME_SIZE);
@@ -285,7 +285,7 @@ static void steal_time_init(struct kvm_vcpu *vcpu, uint32_t i)
sync_global_to_guest(vcpu->vm, st_gpa[i]);
}
-static void steal_time_dump(struct kvm_vm *vm, uint32_t vcpu_idx)
+static void steal_time_dump(struct kvm_vm *vm, u32 vcpu_idx)
{
struct sta_struct *st = addr_gva2hva(vm, (ulong)st_gva[vcpu_idx]);
int i;
diff --git a/tools/testing/selftests/kvm/x86/amx_test.c b/tools/testing/selftests/kvm/x86/amx_test.c
index b847b1b2d8b9..1e1231e38bf7 100644
--- a/tools/testing/selftests/kvm/x86/amx_test.c
+++ b/tools/testing/selftests/kvm/x86/amx_test.c
@@ -76,8 +76,8 @@ static inline void __tilerelease(void)
static inline void __xsavec(struct xstate *xstate, u64 rfbm)
{
- uint32_t rfbm_lo = rfbm;
- uint32_t rfbm_hi = rfbm >> 32;
+ u32 rfbm_lo = rfbm;
+ u32 rfbm_hi = rfbm >> 32;
asm volatile("xsavec (%%rdi)"
: : "D" (xstate), "a" (rfbm_lo), "d" (rfbm_hi)
diff --git a/tools/testing/selftests/kvm/x86/apic_bus_clock_test.c b/tools/testing/selftests/kvm/x86/apic_bus_clock_test.c
index 81f76c7d5621..404f0028e110 100644
--- a/tools/testing/selftests/kvm/x86/apic_bus_clock_test.c
+++ b/tools/testing/selftests/kvm/x86/apic_bus_clock_test.c
@@ -19,8 +19,8 @@
* timer frequency.
*/
static const struct {
- const uint32_t tdcr;
- const uint32_t divide_count;
+ const u32 tdcr;
+ const u32 divide_count;
} tdcrs[] = {
{0x0, 2},
{0x1, 4},
@@ -42,12 +42,12 @@ static void apic_enable(void)
xapic_enable();
}
-static uint32_t apic_read_reg(unsigned int reg)
+static u32 apic_read_reg(unsigned int reg)
{
return is_x2apic ? x2apic_read_reg(reg) : xapic_read_reg(reg);
}
-static void apic_write_reg(unsigned int reg, uint32_t val)
+static void apic_write_reg(unsigned int reg, u32 val)
{
if (is_x2apic)
x2apic_write_reg(reg, val);
@@ -58,9 +58,9 @@ static void apic_write_reg(unsigned int reg, uint32_t val)
static void apic_guest_code(u64 apic_hz, u64 delay_ms)
{
u64 tsc_hz = guest_tsc_khz * 1000;
- const uint32_t tmict = ~0u;
+ const u32 tmict = ~0u;
u64 tsc0, tsc1, freq;
- uint32_t tmcct;
+ u32 tmcct;
int i;
apic_enable();
diff --git a/tools/testing/selftests/kvm/x86/debug_regs.c b/tools/testing/selftests/kvm/x86/debug_regs.c
index 542a0eac0f32..0dfaf03cd0a0 100644
--- a/tools/testing/selftests/kvm/x86/debug_regs.c
+++ b/tools/testing/selftests/kvm/x86/debug_regs.c
@@ -16,7 +16,7 @@
#define IRQ_VECTOR 0xAA
/* For testing data access debug BP */
-uint32_t guest_value;
+u32 guest_value;
extern unsigned char sw_bp, hw_bp, write_data, ss_start, bd_start;
diff --git a/tools/testing/selftests/kvm/x86/feature_msrs_test.c b/tools/testing/selftests/kvm/x86/feature_msrs_test.c
index a0e54af60544..158550701771 100644
--- a/tools/testing/selftests/kvm/x86/feature_msrs_test.c
+++ b/tools/testing/selftests/kvm/x86/feature_msrs_test.c
@@ -12,7 +12,7 @@
#include "kvm_util.h"
#include "processor.h"
-static bool is_kvm_controlled_msr(uint32_t msr)
+static bool is_kvm_controlled_msr(u32 msr)
{
return msr == MSR_IA32_VMX_CR0_FIXED1 || msr == MSR_IA32_VMX_CR4_FIXED1;
}
@@ -21,7 +21,7 @@ static bool is_kvm_controlled_msr(uint32_t msr)
* For VMX MSRs with a "true" variant, KVM requires userspace to set the "true"
* MSR, and doesn't allow setting the hidden version.
*/
-static bool is_hidden_vmx_msr(uint32_t msr)
+static bool is_hidden_vmx_msr(u32 msr)
{
switch (msr) {
case MSR_IA32_VMX_PINBASED_CTLS:
@@ -34,12 +34,12 @@ static bool is_hidden_vmx_msr(uint32_t msr)
}
}
-static bool is_quirked_msr(uint32_t msr)
+static bool is_quirked_msr(u32 msr)
{
return msr != MSR_AMD64_DE_CFG;
}
-static void test_feature_msr(uint32_t msr)
+static void test_feature_msr(u32 msr)
{
const u64 supported_mask = kvm_get_feature_msr(msr);
u64 reset_value = is_quirked_msr(msr) ? supported_mask : 0;
diff --git a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
index 9fa91b0f168a..9b4b46a0322e 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_evmcs.c
@@ -30,7 +30,7 @@ static void guest_nmi_handler(struct ex_regs *regs)
{
}
-static inline void rdmsr_from_l2(uint32_t msr)
+static inline void rdmsr_from_l2(u32 msr)
{
/* Currently, L1 doesn't preserve GPRs during vmexits. */
__asm__ __volatile__ ("rdmsr" : : "c"(msr) :
diff --git a/tools/testing/selftests/kvm/x86/hyperv_features.c b/tools/testing/selftests/kvm/x86/hyperv_features.c
index c275c6401525..31e568150c98 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_features.c
@@ -22,7 +22,7 @@
KVM_X86_CPU_FEATURE(HYPERV_CPUID_ENLIGHTMENT_INFO, 0, EBX, 0)
struct msr_data {
- uint32_t idx;
+ u32 idx;
bool fault_expected;
bool write;
u64 write_val;
@@ -34,7 +34,7 @@ struct hcall_data {
bool ud_expected;
};
-static bool is_write_only_msr(uint32_t msr)
+static bool is_write_only_msr(u32 msr)
{
return msr == HV_X64_MSR_EOI;
}
diff --git a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
index b7f35424c838..36fedadd7b6c 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_svm_test.c
@@ -21,7 +21,7 @@
#define L2_GUEST_STACK_SIZE 256
/* Exit to L1 from L2 with RDMSR instruction */
-static inline void rdmsr_from_l2(uint32_t msr)
+static inline void rdmsr_from_l2(u32 msr)
{
/* Currently, L1 doesn't preserve GPRs during vmexits. */
__asm__ __volatile__ ("rdmsr" : : "c"(msr) :
diff --git a/tools/testing/selftests/kvm/x86/kvm_pv_test.c b/tools/testing/selftests/kvm/x86/kvm_pv_test.c
index e49ae65f8171..babf0f95165a 100644
--- a/tools/testing/selftests/kvm/x86/kvm_pv_test.c
+++ b/tools/testing/selftests/kvm/x86/kvm_pv_test.c
@@ -13,7 +13,7 @@
#include "processor.h"
struct msr_data {
- uint32_t idx;
+ u32 idx;
const char *name;
};
diff --git a/tools/testing/selftests/kvm/x86/nested_emulation_test.c b/tools/testing/selftests/kvm/x86/nested_emulation_test.c
index d398add21e4c..42fd24567e26 100644
--- a/tools/testing/selftests/kvm/x86/nested_emulation_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_emulation_test.c
@@ -14,7 +14,7 @@ enum {
struct emulated_instruction {
const char name[32];
uint8_t opcode[15];
- uint32_t exit_reason[NR_VIRTUALIZATION_FLAVORS];
+ u32 exit_reason[NR_VIRTUALIZATION_FLAVORS];
};
static struct emulated_instruction instructions[] = {
@@ -36,9 +36,9 @@ static uint8_t kvm_fep[] = { 0x0f, 0x0b, 0x6b, 0x76, 0x6d }; /* ud2 ; .ascii "kv
static uint8_t l2_guest_code[sizeof(kvm_fep) + 15];
static uint8_t *l2_instruction = &l2_guest_code[sizeof(kvm_fep)];
-static uint32_t get_instruction_length(struct emulated_instruction *insn)
+static u32 get_instruction_length(struct emulated_instruction *insn)
{
- uint32_t i;
+ u32 i;
for (i = 0; i < ARRAY_SIZE(insn->opcode) && insn->opcode[i]; i++)
;
@@ -81,8 +81,8 @@ static void guest_code(void *test_data)
for (i = 0; i < ARRAY_SIZE(instructions); i++) {
struct emulated_instruction *insn = &instructions[i];
- uint32_t insn_len = get_instruction_length(insn);
- uint32_t exit_insn_len;
+ u32 insn_len = get_instruction_length(insn);
+ u32 exit_insn_len;
u32 exit_reason;
/*
diff --git a/tools/testing/selftests/kvm/x86/nested_exceptions_test.c b/tools/testing/selftests/kvm/x86/nested_exceptions_test.c
index 646cfb0022b3..186e980aa8ee 100644
--- a/tools/testing/selftests/kvm/x86/nested_exceptions_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_exceptions_test.c
@@ -72,7 +72,7 @@ static void l2_ss_injected_tf_test(void)
}
static void svm_run_l2(struct svm_test_data *svm, void *l2_code, int vector,
- uint32_t error_code)
+ u32 error_code)
{
struct vmcb *vmcb = svm->vmcb;
struct vmcb_control_area *ctrl = &vmcb->control;
@@ -111,7 +111,7 @@ static void l1_svm_code(struct svm_test_data *svm)
GUEST_DONE();
}
-static void vmx_run_l2(void *l2_code, int vector, uint32_t error_code)
+static void vmx_run_l2(void *l2_code, int vector, u32 error_code)
{
GUEST_ASSERT(!vmwrite(GUEST_RIP, (u64)l2_code));
diff --git a/tools/testing/selftests/kvm/x86/pmu_counters_test.c b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
index ef9ed5edf47b..16a2093b14eb 100644
--- a/tools/testing/selftests/kvm/x86/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
@@ -30,7 +30,7 @@
#define NUM_INSNS_RETIRED (NUM_LOOPS * NUM_INSNS_PER_LOOP + NUM_EXTRA_INSNS)
/* Track which architectural events are supported by hardware. */
-static uint32_t hardware_pmu_arch_events;
+static u32 hardware_pmu_arch_events;
static uint8_t kvm_pmu_version;
static bool kvm_has_perf_caps;
@@ -148,7 +148,7 @@ static uint8_t guest_get_pmu_version(void)
* Sanity check that in all cases, the event doesn't count when it's disabled,
* and that KVM correctly emulates the write of an arbitrary value.
*/
-static void guest_assert_event_count(uint8_t idx, uint32_t pmc, uint32_t pmc_msr)
+static void guest_assert_event_count(uint8_t idx, u32 pmc, u32 pmc_msr)
{
u64 count;
@@ -218,7 +218,7 @@ do { \
FEP "xor %%eax, %%eax\n\t" \
FEP "xor %%edx, %%edx\n\t" \
"wrmsr\n\t" \
- :: "a"((uint32_t)_value), "d"(_value >> 32), \
+ :: "a"((u32)_value), "d"(_value >> 32), \
"c"(_msr), "D"(_msr), [m]"m"(kvm_pmu_version) \
); \
} while (0)
@@ -237,8 +237,8 @@ do { \
guest_assert_event_count(_idx, _pmc, _pmc_msr); \
} while (0)
-static void __guest_test_arch_event(uint8_t idx, uint32_t pmc, uint32_t pmc_msr,
- uint32_t ctrl_msr, u64 ctrl_msr_value)
+static void __guest_test_arch_event(uint8_t idx, u32 pmc, u32 pmc_msr,
+ u32 ctrl_msr, u64 ctrl_msr_value)
{
GUEST_TEST_EVENT(idx, pmc, pmc_msr, ctrl_msr, ctrl_msr_value, "");
@@ -248,12 +248,12 @@ static void __guest_test_arch_event(uint8_t idx, uint32_t pmc, uint32_t pmc_msr,
static void guest_test_arch_event(uint8_t idx)
{
- uint32_t nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
- uint32_t pmu_version = guest_get_pmu_version();
+ u32 nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
+ u32 pmu_version = guest_get_pmu_version();
/* PERF_GLOBAL_CTRL exists only for Architectural PMU Version 2+. */
bool guest_has_perf_global_ctrl = pmu_version >= 2;
struct kvm_x86_pmu_feature gp_event, fixed_event;
- uint32_t base_pmc_msr;
+ u32 base_pmc_msr;
unsigned int i;
/* The host side shouldn't invoke this without a guest PMU. */
@@ -352,7 +352,7 @@ __GUEST_ASSERT(expect_gp ? vector == GP_VECTOR : !vector, \
"Expected " #insn "(0x%x) to yield 0x%lx, got 0x%lx", \
msr, expected, val);
-static void guest_test_rdpmc(uint32_t rdpmc_idx, bool expect_success,
+static void guest_test_rdpmc(u32 rdpmc_idx, bool expect_success,
u64 expected_val)
{
uint8_t vector;
@@ -372,8 +372,8 @@ static void guest_test_rdpmc(uint32_t rdpmc_idx, bool expect_success,
GUEST_ASSERT_PMC_VALUE(RDPMC, rdpmc_idx, val, expected_val);
}
-static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters,
- uint8_t nr_counters, uint32_t or_mask)
+static void guest_rd_wr_counters(u32 base_msr, uint8_t nr_possible_counters,
+ uint8_t nr_counters, u32 or_mask)
{
const bool pmu_has_fast_mode = !guest_get_pmu_version();
uint8_t i;
@@ -384,7 +384,7 @@ static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters
* width of the counters.
*/
const u64 test_val = 0xffff;
- const uint32_t msr = base_msr + i;
+ const u32 msr = base_msr + i;
/*
* Fixed counters are supported if the counter is less than the
@@ -400,7 +400,7 @@ static void guest_rd_wr_counters(uint32_t base_msr, uint8_t nr_possible_counters
const u64 expected_val = expect_success ? test_val : 0;
const bool expect_gp = !expect_success && msr != MSR_P6_PERFCTR0 &&
msr != MSR_P6_PERFCTR1;
- uint32_t rdpmc_idx;
+ u32 rdpmc_idx;
uint8_t vector;
u64 val;
@@ -442,7 +442,7 @@ static void guest_test_gp_counters(void)
{
uint8_t pmu_version = guest_get_pmu_version();
uint8_t nr_gp_counters = 0;
- uint32_t base_msr;
+ u32 base_msr;
if (pmu_version)
nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
@@ -542,7 +542,7 @@ static void guest_test_fixed_counters(void)
static void test_fixed_counters(uint8_t pmu_version, u64 perf_capabilities,
uint8_t nr_fixed_counters,
- uint32_t supported_bitmask)
+ u32 supported_bitmask)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
@@ -567,7 +567,7 @@ static void test_intel_counters(void)
uint8_t pmu_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION);
unsigned int i;
uint8_t v, j;
- uint32_t k;
+ u32 k;
const u64 perf_caps[] = {
0,
diff --git a/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
index 86831c590df8..d140fd6b951e 100644
--- a/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
@@ -75,7 +75,7 @@ static void guest_gp_handler(struct ex_regs *regs)
*
* Return on success. GUEST_SYNC(0) on error.
*/
-static void check_msr(uint32_t msr, u64 bits_to_flip)
+static void check_msr(u32 msr, u64 bits_to_flip)
{
u64 v = rdmsr(msr) ^ bits_to_flip;
@@ -89,7 +89,7 @@ static void check_msr(uint32_t msr, u64 bits_to_flip)
GUEST_SYNC(-EIO);
}
-static void run_and_measure_loop(uint32_t msr_base)
+static void run_and_measure_loop(u32 msr_base)
{
const u64 branches_retired = rdmsr(msr_base + 0);
const u64 insn_retired = rdmsr(msr_base + 1);
@@ -375,7 +375,7 @@ static bool use_amd_pmu(void)
static bool supports_event_mem_inst_retired(void)
{
- uint32_t eax, ebx, ecx, edx;
+ u32 eax, ebx, ecx, edx;
cpuid(1, &eax, &ebx, &ecx, &edx);
if (x86_family(eax) == 0x6) {
@@ -412,7 +412,7 @@ static bool supports_event_mem_inst_retired(void)
#define EXCLUDE_MASKED_ENTRY(event_select, mask, match) \
KVM_PMU_ENCODE_MASKED_ENTRY(event_select, mask, match, true)
-static void masked_events_guest_test(uint32_t msr_base)
+static void masked_events_guest_test(u32 msr_base)
{
/*
* The actual value of the counters don't determine the outcome of
@@ -496,7 +496,7 @@ struct masked_events_test {
u64 amd_events[MAX_TEST_EVENTS];
u64 amd_event_end;
const char *msg;
- uint32_t flags;
+ u32 flags;
};
/*
@@ -666,7 +666,7 @@ static int set_pmu_event_filter(struct kvm_vcpu *vcpu,
}
static int set_pmu_single_event_filter(struct kvm_vcpu *vcpu, u64 event,
- uint32_t flags, uint32_t action)
+ u32 flags, u32 action)
{
struct __kvm_pmu_event_filter f = {
.nevents = 1,
@@ -743,7 +743,7 @@ static void intel_run_fixed_counter_guest_code(uint8_t idx)
}
static u64 test_with_fixed_counter_filter(struct kvm_vcpu *vcpu,
- uint32_t action, uint32_t bitmap)
+ u32 action, u32 bitmap)
{
struct __kvm_pmu_event_filter f = {
.action = action,
@@ -755,8 +755,8 @@ static u64 test_with_fixed_counter_filter(struct kvm_vcpu *vcpu,
}
static u64 test_set_gp_and_fixed_event_filter(struct kvm_vcpu *vcpu,
- uint32_t action,
- uint32_t bitmap)
+ u32 action,
+ u32 bitmap)
{
struct __kvm_pmu_event_filter f = base_event_filter;
@@ -771,7 +771,7 @@ static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, uint8_t idx,
uint8_t nr_fixed_counters)
{
unsigned int i;
- uint32_t bitmap;
+ u32 bitmap;
u64 count;
TEST_ASSERT(nr_fixed_counters < sizeof(bitmap) * 8,
diff --git a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
index 7e650895c96f..73f540894f06 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
@@ -366,8 +366,8 @@ static void *__test_mem_conversions(void *__vcpu)
}
}
-static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t nr_vcpus,
- uint32_t nr_memslots)
+static void test_mem_conversions(enum vm_mem_backing_src_type src_type, u32 nr_vcpus,
+ u32 nr_memslots)
{
/*
* Allocate enough memory so that each vCPU's chunk of memory can be
@@ -453,8 +453,8 @@ static void usage(const char *cmd)
int main(int argc, char *argv[])
{
enum vm_mem_backing_src_type src_type = DEFAULT_VM_MEM_SRC;
- uint32_t nr_memslots = 1;
- uint32_t nr_vcpus = 1;
+ u32 nr_memslots = 1;
+ u32 nr_vcpus = 1;
int opt;
TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM));
diff --git a/tools/testing/selftests/kvm/x86/private_mem_kvm_exits_test.c b/tools/testing/selftests/kvm/x86/private_mem_kvm_exits_test.c
index 925040f394de..10db9fe6d906 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_kvm_exits_test.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_kvm_exits_test.c
@@ -27,7 +27,7 @@ static u64 guest_repeatedly_read(void)
return value;
}
-static uint32_t run_vcpu_get_exit_reason(struct kvm_vcpu *vcpu)
+static u32 run_vcpu_get_exit_reason(struct kvm_vcpu *vcpu)
{
int r;
@@ -50,7 +50,7 @@ static void test_private_access_memslot_deleted(void)
struct kvm_vcpu *vcpu;
pthread_t vm_thread;
void *thread_return;
- uint32_t exit_reason;
+ u32 exit_reason;
vm = vm_create_shape_with_one_vcpu(protected_vm_shape, &vcpu,
guest_repeatedly_read);
@@ -72,7 +72,7 @@ static void test_private_access_memslot_deleted(void)
vm_mem_region_delete(vm, EXITS_TEST_SLOT);
pthread_join(vm_thread, &thread_return);
- exit_reason = (uint32_t)(u64)thread_return;
+ exit_reason = (u32)(u64)thread_return;
TEST_ASSERT_EQ(exit_reason, KVM_EXIT_MEMORY_FAULT);
TEST_ASSERT_EQ(vcpu->run->memory_fault.flags, KVM_MEMORY_EXIT_FLAG_PRIVATE);
@@ -86,7 +86,7 @@ static void test_private_access_memslot_not_private(void)
{
struct kvm_vm *vm;
struct kvm_vcpu *vcpu;
- uint32_t exit_reason;
+ u32 exit_reason;
vm = vm_create_shape_with_one_vcpu(protected_vm_shape, &vcpu,
guest_repeatedly_read);
diff --git a/tools/testing/selftests/kvm/x86/set_boot_cpu_id.c b/tools/testing/selftests/kvm/x86/set_boot_cpu_id.c
index 49913784bc82..8e3898646c69 100644
--- a/tools/testing/selftests/kvm/x86/set_boot_cpu_id.c
+++ b/tools/testing/selftests/kvm/x86/set_boot_cpu_id.c
@@ -86,11 +86,11 @@ static void run_vcpu(struct kvm_vcpu *vcpu)
}
}
-static struct kvm_vm *create_vm(uint32_t nr_vcpus, uint32_t bsp_vcpu_id,
+static struct kvm_vm *create_vm(u32 nr_vcpus, u32 bsp_vcpu_id,
struct kvm_vcpu *vcpus[])
{
struct kvm_vm *vm;
- uint32_t i;
+ u32 i;
vm = vm_create(nr_vcpus);
@@ -104,7 +104,7 @@ static struct kvm_vm *create_vm(uint32_t nr_vcpus, uint32_t bsp_vcpu_id,
return vm;
}
-static void run_vm_bsp(uint32_t bsp_vcpu_id)
+static void run_vm_bsp(u32 bsp_vcpu_id)
{
struct kvm_vcpu *vcpus[2];
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/sev_init2_tests.c b/tools/testing/selftests/kvm/x86/sev_init2_tests.c
index 3515b4c0e860..6a405e694459 100644
--- a/tools/testing/selftests/kvm/x86/sev_init2_tests.c
+++ b/tools/testing/selftests/kvm/x86/sev_init2_tests.c
@@ -90,7 +90,7 @@ void test_vm_types(void)
"VM type is KVM_X86_SW_PROTECTED_VM");
}
-void test_flags(uint32_t vm_type)
+void test_flags(u32 vm_type)
{
int i;
@@ -100,7 +100,7 @@ void test_flags(uint32_t vm_type)
"invalid flag");
}
-void test_features(uint32_t vm_type, u64 supported_features)
+void test_features(u32 vm_type, u64 supported_features)
{
int i;
diff --git a/tools/testing/selftests/kvm/x86/sev_smoke_test.c b/tools/testing/selftests/kvm/x86/sev_smoke_test.c
index 7ee7cc1da061..8f7c1b2da31f 100644
--- a/tools/testing/selftests/kvm/x86/sev_smoke_test.c
+++ b/tools/testing/selftests/kvm/x86/sev_smoke_test.c
@@ -62,7 +62,7 @@ static void compare_xsave(u8 *from_host, u8 *from_guest)
abort();
}
-static void test_sync_vmsa(uint32_t policy)
+static void test_sync_vmsa(u32 policy)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
@@ -114,7 +114,7 @@ static void test_sev(void *guest_code, u64 policy)
struct kvm_vm *vm;
struct ucall uc;
- uint32_t type = policy & SEV_POLICY_ES ? KVM_X86_SEV_ES_VM : KVM_X86_SEV_VM;
+ u32 type = policy & SEV_POLICY_ES ? KVM_X86_SEV_ES_VM : KVM_X86_SEV_VM;
vm = vm_sev_create_with_one_vcpu(type, guest_code, &vcpu);
@@ -166,7 +166,7 @@ static void test_sev_es_shutdown(void)
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
- uint32_t type = KVM_X86_SEV_ES_VM;
+ u32 type = KVM_X86_SEV_ES_VM;
vm = vm_sev_create_with_one_vcpu(type, guest_shutdown_code, &vcpu);
diff --git a/tools/testing/selftests/kvm/x86/ucna_injection_test.c b/tools/testing/selftests/kvm/x86/ucna_injection_test.c
index 27aae6c92a38..df1ec8209c76 100644
--- a/tools/testing/selftests/kvm/x86/ucna_injection_test.c
+++ b/tools/testing/selftests/kvm/x86/ucna_injection_test.c
@@ -251,7 +251,7 @@ static void setup_mce_cap(struct kvm_vcpu *vcpu, bool enable_cmci_p)
vcpu_ioctl(vcpu, KVM_X86_SETUP_MCE, &mcg_caps);
}
-static struct kvm_vcpu *create_vcpu_with_mce_cap(struct kvm_vm *vm, uint32_t vcpuid,
+static struct kvm_vcpu *create_vcpu_with_mce_cap(struct kvm_vm *vm, u32 vcpuid,
bool enable_cmci_p, void *guest_code)
{
struct kvm_vcpu *vcpu = vm_vcpu_add(vm, vcpuid, guest_code);
diff --git a/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c b/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c
index 983d1ae0718f..e87e2e8d9c38 100644
--- a/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c
+++ b/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c
@@ -142,9 +142,9 @@ struct kvm_msr_filter no_filter_deny = {
* Note: Force test_rdmsr() to not be inlined to prevent the labels,
* rdmsr_start and rdmsr_end, from being defined multiple times.
*/
-static noinline u64 test_rdmsr(uint32_t msr)
+static noinline u64 test_rdmsr(u32 msr)
{
- uint32_t a, d;
+ u32 a, d;
guest_exception_count = 0;
@@ -158,10 +158,10 @@ static noinline u64 test_rdmsr(uint32_t msr)
* Note: Force test_wrmsr() to not be inlined to prevent the labels,
* wrmsr_start and wrmsr_end, from being defined multiple times.
*/
-static noinline void test_wrmsr(uint32_t msr, u64 value)
+static noinline void test_wrmsr(u32 msr, u64 value)
{
- uint32_t a = value;
- uint32_t d = value >> 32;
+ u32 a = value;
+ u32 d = value >> 32;
guest_exception_count = 0;
@@ -176,9 +176,9 @@ extern char wrmsr_start, wrmsr_end;
* Note: Force test_em_rdmsr() to not be inlined to prevent the labels,
* rdmsr_start and rdmsr_end, from being defined multiple times.
*/
-static noinline u64 test_em_rdmsr(uint32_t msr)
+static noinline u64 test_em_rdmsr(u32 msr)
{
- uint32_t a, d;
+ u32 a, d;
guest_exception_count = 0;
@@ -192,10 +192,10 @@ static noinline u64 test_em_rdmsr(uint32_t msr)
* Note: Force test_em_wrmsr() to not be inlined to prevent the labels,
* wrmsr_start and wrmsr_end, from being defined multiple times.
*/
-static noinline void test_em_wrmsr(uint32_t msr, u64 value)
+static noinline void test_em_wrmsr(u32 msr, u64 value)
{
- uint32_t a = value;
- uint32_t d = value >> 32;
+ u32 a = value;
+ u32 d = value >> 32;
guest_exception_count = 0;
@@ -385,7 +385,7 @@ static void check_for_guest_assert(struct kvm_vcpu *vcpu)
}
}
-static void process_rdmsr(struct kvm_vcpu *vcpu, uint32_t msr_index)
+static void process_rdmsr(struct kvm_vcpu *vcpu, u32 msr_index)
{
struct kvm_run *run = vcpu->run;
@@ -417,7 +417,7 @@ static void process_rdmsr(struct kvm_vcpu *vcpu, uint32_t msr_index)
}
}
-static void process_wrmsr(struct kvm_vcpu *vcpu, uint32_t msr_index)
+static void process_wrmsr(struct kvm_vcpu *vcpu, u32 msr_index)
{
struct kvm_run *run = vcpu->run;
@@ -483,14 +483,14 @@ static u64 process_ucall(struct kvm_vcpu *vcpu)
}
static void run_guest_then_process_rdmsr(struct kvm_vcpu *vcpu,
- uint32_t msr_index)
+ u32 msr_index)
{
vcpu_run(vcpu);
process_rdmsr(vcpu, msr_index);
}
static void run_guest_then_process_wrmsr(struct kvm_vcpu *vcpu,
- uint32_t msr_index)
+ u32 msr_index)
{
vcpu_run(vcpu);
process_wrmsr(vcpu, msr_index);
diff --git a/tools/testing/selftests/kvm/x86/vmx_apic_access_test.c b/tools/testing/selftests/kvm/x86/vmx_apic_access_test.c
index dc5c3d1db346..1720113eae79 100644
--- a/tools/testing/selftests/kvm/x86/vmx_apic_access_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_apic_access_test.c
@@ -38,7 +38,7 @@ static void l1_guest_code(struct vmx_pages *vmx_pages, unsigned long high_gpa)
{
#define L2_GUEST_STACK_SIZE 64
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
- uint32_t control;
+ u32 control;
GUEST_ASSERT(prepare_for_vmx_operation(vmx_pages));
GUEST_ASSERT(load_vmcs(vmx_pages));
diff --git a/tools/testing/selftests/kvm/x86/vmx_msrs_test.c b/tools/testing/selftests/kvm/x86/vmx_msrs_test.c
index d61c8c69ade3..c1e8632a1bb6 100644
--- a/tools/testing/selftests/kvm/x86/vmx_msrs_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_msrs_test.c
@@ -12,8 +12,7 @@
#include "kvm_util.h"
#include "vmx.h"
-static void vmx_fixed1_msr_test(struct kvm_vcpu *vcpu, uint32_t msr_index,
- u64 mask)
+static void vmx_fixed1_msr_test(struct kvm_vcpu *vcpu, u32 msr_index, u64 mask)
{
u64 val = vcpu_get_msr(vcpu, msr_index);
u64 bit;
@@ -26,8 +25,7 @@ static void vmx_fixed1_msr_test(struct kvm_vcpu *vcpu, uint32_t msr_index,
}
}
-static void vmx_fixed0_msr_test(struct kvm_vcpu *vcpu, uint32_t msr_index,
- u64 mask)
+static void vmx_fixed0_msr_test(struct kvm_vcpu *vcpu, u32 msr_index, u64 mask)
{
u64 val = vcpu_get_msr(vcpu, msr_index);
u64 bit;
@@ -40,7 +38,7 @@ static void vmx_fixed0_msr_test(struct kvm_vcpu *vcpu, uint32_t msr_index,
}
}
-static void vmx_fixed0and1_msr_test(struct kvm_vcpu *vcpu, uint32_t msr_index)
+static void vmx_fixed0and1_msr_test(struct kvm_vcpu *vcpu, u32 msr_index)
{
vmx_fixed0_msr_test(vcpu, msr_index, GENMASK_ULL(31, 0));
vmx_fixed1_msr_test(vcpu, msr_index, GENMASK_ULL(63, 32));
diff --git a/tools/testing/selftests/kvm/x86/vmx_nested_tsc_scaling_test.c b/tools/testing/selftests/kvm/x86/vmx_nested_tsc_scaling_test.c
index 43861b96b5a4..8e0af20c594e 100644
--- a/tools/testing/selftests/kvm/x86/vmx_nested_tsc_scaling_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_nested_tsc_scaling_test.c
@@ -82,7 +82,7 @@ static void l2_guest_code(void)
static void l1_guest_code(struct vmx_pages *vmx_pages)
{
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
- uint32_t control;
+ u32 control;
/* check that L1's frequency looks alright before launching L2 */
check_tsc_freq(UCHECK_L1);
diff --git a/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c b/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c
index 450932e4b0c9..f03b831a5025 100644
--- a/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_tsc_adjust_test.c
@@ -76,7 +76,7 @@ static void l1_guest_code(struct vmx_pages *vmx_pages)
{
#define L2_GUEST_STACK_SIZE 64
unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
- uint32_t control;
+ u32 control;
uintptr_t save_cr3;
GUEST_ASSERT(rdtsc() < TSC_ADJUST_VALUE);
diff --git a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
index bd7b51342441..2cacdcd7fc35 100644
--- a/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
+++ b/tools/testing/selftests/kvm/x86/xapic_ipi_test.c
@@ -52,16 +52,16 @@ static volatile u64 ipis_rcvd;
/* Data struct shared between host main thread and vCPUs */
struct test_data_page {
- uint32_t halter_apic_id;
+ u32 halter_apic_id;
volatile u64 hlt_count;
volatile u64 wake_count;
u64 ipis_sent;
u64 migrations_attempted;
u64 migrations_completed;
- uint32_t icr;
- uint32_t icr2;
- uint32_t halter_tpr;
- uint32_t halter_ppr;
+ u32 icr;
+ u32 icr2;
+ u32 halter_tpr;
+ u32 halter_ppr;
/*
* Record local version register as a cross-check that APIC access
@@ -69,7 +69,7 @@ struct test_data_page {
* arch/x86/kvm/lapic.c). If test is failing, check that values match
* to determine whether APIC access exits are working.
*/
- uint32_t halter_lvr;
+ u32 halter_lvr;
};
struct thread_params {
@@ -128,8 +128,8 @@ static void sender_guest_code(struct test_data_page *data)
u64 last_wake_count;
u64 last_hlt_count;
u64 last_ipis_rcvd_count;
- uint32_t icr_val;
- uint32_t icr2_val;
+ u32 icr_val;
+ u32 icr2_val;
u64 tsc_start;
verify_apic_base_addr();
diff --git a/tools/testing/selftests/kvm/x86/xapic_state_test.c b/tools/testing/selftests/kvm/x86/xapic_state_test.c
index 4d610bffbbd2..85798183f04d 100644
--- a/tools/testing/selftests/kvm/x86/xapic_state_test.c
+++ b/tools/testing/selftests/kvm/x86/xapic_state_test.c
@@ -144,7 +144,7 @@ static void test_icr(struct xapic_vcpu *x)
static void __test_apic_id(struct kvm_vcpu *vcpu, u64 apic_base)
{
- uint32_t apic_id, expected;
+ u32 apic_id, expected;
struct kvm_lapic_state xapic;
vcpu_set_msr(vcpu, MSR_IA32_APICBASE, apic_base);
@@ -170,7 +170,7 @@ static void __test_apic_id(struct kvm_vcpu *vcpu, u64 apic_base)
*/
static void test_apic_id(void)
{
- const uint32_t NR_VCPUS = 3;
+ const u32 NR_VCPUS = 3;
struct kvm_vcpu *vcpus[NR_VCPUS];
u64 apic_base;
struct kvm_vm *vm;
diff --git a/tools/testing/selftests/kvm/x86/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86/xen_shinfo_test.c
index 77fcf8345342..974a6c5d3080 100644
--- a/tools/testing/selftests/kvm/x86/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86/xen_shinfo_test.c
@@ -116,13 +116,13 @@ struct pvclock_wall_clock {
} __attribute__((__packed__));
struct vcpu_runstate_info {
- uint32_t state;
+ u32 state;
u64 state_entry_time;
u64 time[5]; /* Extra field for overrun check */
};
struct compat_vcpu_runstate_info {
- uint32_t state;
+ u32 state;
u64 state_entry_time;
u64 time[5];
} __attribute__((__packed__));
@@ -145,7 +145,7 @@ struct shared_info {
unsigned long evtchn_pending[64];
unsigned long evtchn_mask[64];
struct pvclock_wall_clock wc;
- uint32_t wc_sec_hi;
+ u32 wc_sec_hi;
/* arch_shared_info here */
};
--
2.49.0.906.g1f30a19c02-goog
* [PATCH 07/10] KVM: selftests: Use s32 instead of int32_t
2025-05-01 18:32 [PATCH 00/10] KVM: selftests: Convert to kernel-style types David Matlack
` (5 preceding siblings ...)
2025-05-01 18:33 ` [PATCH 06/10] KVM: selftests: Use u32 instead of uint32_t David Matlack
@ 2025-05-01 18:33 ` David Matlack
2025-05-01 18:33 ` [PATCH 08/10] KVM: selftests: Use u16 instead of uint16_t David Matlack
` (4 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: David Matlack @ 2025-05-01 18:33 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Anup Patel, Atish Patra, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, David Hildenbrand,
Sean Christopherson, David Matlack, Andrew Jones, Isaku Yamahata,
Reinette Chatre, Eric Auger, James Houghton, Colin Ian King, kvm,
linux-arm-kernel, kvmarm, kvm-riscv, linux-riscv
Use s32 instead of int32_t to make the KVM selftests code more concise
and more similar to the kernel (since selftests are primarily developed
by kernel developers).
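For reference, the kernel-style names used throughout this series map to
the underlying C types roughly as follows (a sketch based on
include/asm-generic/int-ll64.h, which tools/include mirrors for the
selftests; the exact qualifiers in the tree may differ slightly):

  typedef signed char             s8;
  typedef unsigned char           u8;
  typedef signed short            s16;
  typedef unsigned short          u16;
  typedef signed int              s32;
  typedef unsigned int            u32;
  typedef signed long long        s64;
  typedef unsigned long long      u64;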
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/int32_t/s32/g'
Whitespace was then adjusted by hand to make checkpatch.pl happy.
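One subtlety worth noting (an observation from the patch ordering, not
something the changelog states): each signed conversion runs after its
unsigned counterpart. Since "uint32_t" contains "int32_t" as a substring,
running this commit's substitution before the uint32_t conversion would
have mangled the unsigned type, e.g.:

  /* sed 's/int32_t/s32/g' applied while uint32_t still exists: */
  uint32_t count;   /* would become: us32 count; */

Converting uint32_t to u32 in the previous patch leaves nothing for this
pattern to mis-match.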
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
.../kvm/arm64/arch_timer_edge_cases.c | 24 +++++++++----------
.../selftests/kvm/include/arm64/arch_timer.h | 4 ++--
2 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c b/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
index 2d799823a366..b99eb6b4b314 100644
--- a/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
+++ b/tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
@@ -24,8 +24,8 @@
static const u64 CVAL_MAX = ~0ULL;
/* tval is a signed 32-bit int. */
-static const int32_t TVAL_MAX = INT32_MAX;
-static const int32_t TVAL_MIN = INT32_MIN;
+static const s32 TVAL_MAX = INT32_MAX;
+static const s32 TVAL_MIN = INT32_MIN;
/* After how much time we say there is no IRQ. */
static const u32 TIMEOUT_NO_IRQ_US = 50000;
@@ -354,7 +354,7 @@ static void test_timer_cval(enum arch_timer timer, u64 cval,
test_timer_xval(timer, cval, TIMER_CVAL, wm, reset_state, reset_cnt);
}
-static void test_timer_tval(enum arch_timer timer, int32_t tval,
+static void test_timer_tval(enum arch_timer timer, s32 tval,
irq_wait_method_t wm, bool reset_state,
u64 reset_cnt)
{
@@ -384,10 +384,10 @@ static void test_cval_no_irq(enum arch_timer timer, u64 cval,
test_xval_check_no_irq(timer, cval, usec, TIMER_CVAL, wm);
}
-static void test_tval_no_irq(enum arch_timer timer, int32_t tval, u64 usec,
+static void test_tval_no_irq(enum arch_timer timer, s32 tval, u64 usec,
sleep_method_t wm)
{
- /* tval will be cast to an int32_t in test_xval_check_no_irq */
+ /* tval will be cast to an s32 in test_xval_check_no_irq */
test_xval_check_no_irq(timer, (u64)tval, usec, TIMER_TVAL, wm);
}
@@ -462,7 +462,7 @@ static void test_timers_fired_multiple_times(enum arch_timer timer)
* timeout for the wait: we use the wfi instruction.
*/
static void test_reprogramming_timer(enum arch_timer timer, irq_wait_method_t wm,
- int32_t delta_1_ms, int32_t delta_2_ms)
+ s32 delta_1_ms, s32 delta_2_ms)
{
local_irq_disable();
reset_timer_state(timer, DEF_CNT);
@@ -503,7 +503,7 @@ static void test_reprogram_timers(enum arch_timer timer)
static void test_basic_functionality(enum arch_timer timer)
{
- int32_t tval = (int32_t) msec_to_cycles(test_args.wait_ms);
+ s32 tval = (s32)msec_to_cycles(test_args.wait_ms);
u64 cval = DEF_CNT + msec_to_cycles(test_args.wait_ms);
int i;
@@ -684,7 +684,7 @@ static void test_set_cnt_after_xval_no_irq(enum arch_timer timer,
}
static void test_set_cnt_after_tval(enum arch_timer timer, u64 cnt_1,
- int32_t tval, u64 cnt_2,
+ s32 tval, u64 cnt_2,
irq_wait_method_t wm)
{
test_set_cnt_after_xval(timer, cnt_1, tval, cnt_2, wm, TIMER_TVAL);
@@ -698,7 +698,7 @@ static void test_set_cnt_after_cval(enum arch_timer timer, u64 cnt_1,
}
static void test_set_cnt_after_tval_no_irq(enum arch_timer timer,
- u64 cnt_1, int32_t tval,
+ u64 cnt_1, s32 tval,
u64 cnt_2, sleep_method_t wm)
{
test_set_cnt_after_xval_no_irq(timer, cnt_1, tval, cnt_2, wm,
@@ -717,7 +717,7 @@ static void test_set_cnt_after_cval_no_irq(enum arch_timer timer,
static void test_move_counters_ahead_of_timers(enum arch_timer timer)
{
int i;
- int32_t tval;
+ s32 tval;
for (i = 0; i < ARRAY_SIZE(irq_wait_method); i++) {
irq_wait_method_t wm = irq_wait_method[i];
@@ -758,7 +758,7 @@ static void test_move_counters_behind_timers(enum arch_timer timer)
static void test_timers_in_the_past(enum arch_timer timer)
{
- int32_t tval = -1 * (int32_t) msec_to_cycles(test_args.wait_ms);
+ s32 tval = -1 * (s32)msec_to_cycles(test_args.wait_ms);
u64 cval;
int i;
@@ -794,7 +794,7 @@ static void test_timers_in_the_past(enum arch_timer timer)
static void test_long_timer_delays(enum arch_timer timer)
{
- int32_t tval = (int32_t) msec_to_cycles(test_args.long_wait_ms);
+ s32 tval = (s32)msec_to_cycles(test_args.long_wait_ms);
u64 cval = DEF_CNT + msec_to_cycles(test_args.long_wait_ms);
int i;
diff --git a/tools/testing/selftests/kvm/include/arm64/arch_timer.h b/tools/testing/selftests/kvm/include/arm64/arch_timer.h
index 600ee9163604..9d32c196c7ab 100644
--- a/tools/testing/selftests/kvm/include/arm64/arch_timer.h
+++ b/tools/testing/selftests/kvm/include/arm64/arch_timer.h
@@ -79,7 +79,7 @@ static inline u64 timer_get_cval(enum arch_timer timer)
return 0;
}
-static inline void timer_set_tval(enum arch_timer timer, int32_t tval)
+static inline void timer_set_tval(enum arch_timer timer, s32 tval)
{
switch (timer) {
case VIRTUAL:
@@ -95,7 +95,7 @@ static inline void timer_set_tval(enum arch_timer timer, int32_t tval)
isb();
}
-static inline int32_t timer_get_tval(enum arch_timer timer)
+static inline s32 timer_get_tval(enum arch_timer timer)
{
isb();
switch (timer) {
--
2.49.0.906.g1f30a19c02-goog
* [PATCH 08/10] KVM: selftests: Use u16 instead of uint16_t
2025-05-01 18:32 [PATCH 00/10] KVM: selftests: Convert to kernel-style types David Matlack
` (6 preceding siblings ...)
2025-05-01 18:33 ` [PATCH 07/10] KVM: selftests: Use s32 instead of int32_t David Matlack
@ 2025-05-01 18:33 ` David Matlack
2025-05-01 18:33 ` [PATCH 09/10] KVM: selftests: Use s16 instead of int16_t David Matlack
` (3 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: David Matlack @ 2025-05-01 18:33 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Anup Patel, Atish Patra, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, David Hildenbrand,
Sean Christopherson, David Matlack, Andrew Jones, Isaku Yamahata,
Reinette Chatre, Eric Auger, James Houghton, Colin Ian King, kvm,
linux-arm-kernel, kvmarm, kvm-riscv, linux-riscv
Use u16 instead of uint16_t to make the KVM selftests code more concise
and more similar to the kernel (since selftests are primarily developed
by kernel developers).
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/uint16_t/u16/g'
Whitespace was then adjusted by hand to make checkpatch.pl happy.
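Note the include/x86/evmcs.h hunk below: the blanket substitution turns
"#define u16 uint16_t" into the self-referential "#define u16 u16". This
looks alarming but is harmless, because the C preprocessor never
re-expands a macro name inside its own expansion. A small illustration
(not from the diff):

  #define u16 u16    /* expands to the token "u16" once, then stops */
  u16 evmcs_ver;     /* still a valid declaration */

Whether such now-vacuous defines should simply be dropped is a separate
cleanup question.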
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
.../selftests/kvm/arm64/page_fault_test.c | 2 +-
.../testing/selftests/kvm/include/kvm_util.h | 2 +-
.../testing/selftests/kvm/include/x86/evmcs.h | 2 +-
.../selftests/kvm/include/x86/processor.h | 58 +++++++++----------
.../testing/selftests/kvm/lib/guest_sprintf.c | 2 +-
.../testing/selftests/kvm/lib/x86/processor.c | 8 +--
tools/testing/selftests/kvm/lib/x86/ucall.c | 2 +-
tools/testing/selftests/kvm/lib/x86/vmx.c | 4 +-
tools/testing/selftests/kvm/s390/memop.c | 2 +-
.../selftests/kvm/x86/sync_regs_test.c | 2 +-
10 files changed, 42 insertions(+), 42 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/page_fault_test.c b/tools/testing/selftests/kvm/arm64/page_fault_test.c
index 235582206aee..cb5ada7dd041 100644
--- a/tools/testing/selftests/kvm/arm64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/arm64/page_fault_test.c
@@ -148,7 +148,7 @@ static void guest_at(void)
*/
static void guest_dc_zva(void)
{
- uint16_t val;
+ u16 val;
asm volatile("dc zva, %0" :: "r" (guest_test_memory));
dsb(ish);
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index d76410a0fa1d..271d7a434a4c 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -185,7 +185,7 @@ struct vm_shape {
u32 type;
uint8_t mode;
uint8_t pad0;
- uint16_t pad1;
+ u16 pad1;
};
kvm_static_assert(sizeof(struct vm_shape) == sizeof(u64));
diff --git a/tools/testing/selftests/kvm/include/x86/evmcs.h b/tools/testing/selftests/kvm/include/x86/evmcs.h
index 3b0f96b881f9..be79bda024bf 100644
--- a/tools/testing/selftests/kvm/include/x86/evmcs.h
+++ b/tools/testing/selftests/kvm/include/x86/evmcs.h
@@ -10,7 +10,7 @@
#include "hyperv.h"
#include "vmx.h"
-#define u16 uint16_t
+#define u16 u16
#define u32 u32
#define u64 u64
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 8afbb3315c85..302836e276e0 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -398,8 +398,8 @@ struct gpr64_regs {
};
struct desc64 {
- uint16_t limit0;
- uint16_t base0;
+ u16 limit0;
+ u16 base0;
unsigned base1:8, type:4, s:1, dpl:2, p:1;
unsigned limit1:4, avl:1, l:1, db:1, g:1, base2:8;
u32 base3;
@@ -407,7 +407,7 @@ struct desc64 {
} __attribute__((packed));
struct desc_ptr {
- uint16_t size;
+ u16 size;
u64 address;
} __attribute__((packed));
@@ -473,9 +473,9 @@ static inline void wrmsr(u32 msr, u64 value)
}
-static inline uint16_t inw(uint16_t port)
+static inline u16 inw(u16 port)
{
- uint16_t tmp;
+ u16 tmp;
__asm__ __volatile__("in %%dx, %%ax"
: /* output */ "=a" (tmp)
@@ -484,63 +484,63 @@ static inline uint16_t inw(uint16_t port)
return tmp;
}
-static inline uint16_t get_es(void)
+static inline u16 get_es(void)
{
- uint16_t es;
+ u16 es;
__asm__ __volatile__("mov %%es, %[es]"
: /* output */ [es]"=rm"(es));
return es;
}
-static inline uint16_t get_cs(void)
+static inline u16 get_cs(void)
{
- uint16_t cs;
+ u16 cs;
__asm__ __volatile__("mov %%cs, %[cs]"
: /* output */ [cs]"=rm"(cs));
return cs;
}
-static inline uint16_t get_ss(void)
+static inline u16 get_ss(void)
{
- uint16_t ss;
+ u16 ss;
__asm__ __volatile__("mov %%ss, %[ss]"
: /* output */ [ss]"=rm"(ss));
return ss;
}
-static inline uint16_t get_ds(void)
+static inline u16 get_ds(void)
{
- uint16_t ds;
+ u16 ds;
__asm__ __volatile__("mov %%ds, %[ds]"
: /* output */ [ds]"=rm"(ds));
return ds;
}
-static inline uint16_t get_fs(void)
+static inline u16 get_fs(void)
{
- uint16_t fs;
+ u16 fs;
__asm__ __volatile__("mov %%fs, %[fs]"
: /* output */ [fs]"=rm"(fs));
return fs;
}
-static inline uint16_t get_gs(void)
+static inline u16 get_gs(void)
{
- uint16_t gs;
+ u16 gs;
__asm__ __volatile__("mov %%gs, %[gs]"
: /* output */ [gs]"=rm"(gs));
return gs;
}
-static inline uint16_t get_tr(void)
+static inline u16 get_tr(void)
{
- uint16_t tr;
+ u16 tr;
__asm__ __volatile__("str %[tr]"
: /* output */ [tr]"=rm"(tr));
@@ -625,7 +625,7 @@ static inline struct desc_ptr get_idt(void)
return idt;
}
-static inline void outl(uint16_t port, u32 value)
+static inline void outl(u16 port, u32 value)
{
__asm__ __volatile__("outl %%eax, %%dx" : : "d"(port), "a"(value));
}
@@ -1164,15 +1164,15 @@ struct ex_regs {
};
struct idt_entry {
- uint16_t offset0;
- uint16_t selector;
- uint16_t ist : 3;
- uint16_t : 5;
- uint16_t type : 4;
- uint16_t : 1;
- uint16_t dpl : 2;
- uint16_t p : 1;
- uint16_t offset1;
+ u16 offset0;
+ u16 selector;
+ u16 ist : 3;
+ u16 : 5;
+ u16 type : 4;
+ u16 : 1;
+ u16 dpl : 2;
+ u16 p : 1;
+ u16 offset1;
u32 offset2; u32 reserved;
};
diff --git a/tools/testing/selftests/kvm/lib/guest_sprintf.c b/tools/testing/selftests/kvm/lib/guest_sprintf.c
index 768e12cd8d1d..afbddb53ddd6 100644
--- a/tools/testing/selftests/kvm/lib/guest_sprintf.c
+++ b/tools/testing/selftests/kvm/lib/guest_sprintf.c
@@ -286,7 +286,7 @@ int guest_vsnprintf(char *buf, int n, const char *fmt, va_list args)
if (qualifier == 'l')
num = va_arg(args, u64);
else if (qualifier == 'h') {
- num = (uint16_t)va_arg(args, int);
+ num = (u16)va_arg(args, int);
if (flags & SIGN)
num = (int16_t)num;
} else if (flags & SIGN)
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index e3ca7001b436..7258f9f8f0bf 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -334,7 +334,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
"addr w exec dirty\n",
indent, "");
pml4e_start = addr_gpa2hva(vm, vm->pgd);
- for (uint16_t n1 = 0; n1 <= 0x1ffu; n1++) {
+ for (u16 n1 = 0; n1 <= 0x1ffu; n1++) {
pml4e = &pml4e_start[n1];
if (!(*pml4e & PTE_PRESENT_MASK))
continue;
@@ -346,7 +346,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
!!(*pml4e & PTE_WRITABLE_MASK), !!(*pml4e & PTE_NX_MASK));
pdpe_start = addr_gpa2hva(vm, *pml4e & PHYSICAL_PAGE_MASK);
- for (uint16_t n2 = 0; n2 <= 0x1ffu; n2++) {
+ for (u16 n2 = 0; n2 <= 0x1ffu; n2++) {
pdpe = &pdpe_start[n2];
if (!(*pdpe & PTE_PRESENT_MASK))
continue;
@@ -359,7 +359,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
!!(*pdpe & PTE_NX_MASK));
pde_start = addr_gpa2hva(vm, *pdpe & PHYSICAL_PAGE_MASK);
- for (uint16_t n3 = 0; n3 <= 0x1ffu; n3++) {
+ for (u16 n3 = 0; n3 <= 0x1ffu; n3++) {
pde = &pde_start[n3];
if (!(*pde & PTE_PRESENT_MASK))
continue;
@@ -371,7 +371,7 @@ void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
!!(*pde & PTE_NX_MASK));
pte_start = addr_gpa2hva(vm, *pde & PHYSICAL_PAGE_MASK);
- for (uint16_t n4 = 0; n4 <= 0x1ffu; n4++) {
+ for (u16 n4 = 0; n4 <= 0x1ffu; n4++) {
pte = &pte_start[n4];
if (!(*pte & PTE_PRESENT_MASK))
continue;
diff --git a/tools/testing/selftests/kvm/lib/x86/ucall.c b/tools/testing/selftests/kvm/lib/x86/ucall.c
index 1af2a6880cdf..e7dd5791959b 100644
--- a/tools/testing/selftests/kvm/lib/x86/ucall.c
+++ b/tools/testing/selftests/kvm/lib/x86/ucall.c
@@ -6,7 +6,7 @@
*/
#include "kvm_util.h"
-#define UCALL_PIO_PORT ((uint16_t)0x1000)
+#define UCALL_PIO_PORT ((u16)0x1000)
void ucall_arch_do_ucall(gva_t uc)
{
diff --git a/tools/testing/selftests/kvm/lib/x86/vmx.c b/tools/testing/selftests/kvm/lib/x86/vmx.c
index 8d7b759a403c..52c6ab56a1f3 100644
--- a/tools/testing/selftests/kvm/lib/x86/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86/vmx.c
@@ -44,7 +44,7 @@ struct eptPageTablePointer {
};
int vcpu_enable_evmcs(struct kvm_vcpu *vcpu)
{
- uint16_t evmcs_ver;
+ u16 evmcs_ver;
vcpu_enable_cap(vcpu, KVM_CAP_HYPERV_ENLIGHTENED_VMCS,
(unsigned long)&evmcs_ver);
@@ -399,7 +399,7 @@ void __nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
{
const u64 page_size = PG_LEVEL_SIZE(target_level);
struct eptPageTableEntry *pt = vmx->eptp_hva, *pte;
- uint16_t index;
+ u16 index;
TEST_ASSERT(vm->mode == VM_MODE_PXXV48_4K, "Attempt to use "
"unknown or unsupported guest mode, mode: 0x%x", vm->mode);
diff --git a/tools/testing/selftests/kvm/s390/memop.c b/tools/testing/selftests/kvm/s390/memop.c
index fc640f3c5176..2283ad346746 100644
--- a/tools/testing/selftests/kvm/s390/memop.c
+++ b/tools/testing/selftests/kvm/s390/memop.c
@@ -485,7 +485,7 @@ static __uint128_t cut_to_size(int size, __uint128_t val)
case 1:
return (uint8_t)val;
case 2:
- return (uint16_t)val;
+ return (u16)val;
case 4:
return (u32)val;
case 8:
diff --git a/tools/testing/selftests/kvm/x86/sync_regs_test.c b/tools/testing/selftests/kvm/x86/sync_regs_test.c
index 8fa3948b0170..e0c52321f87c 100644
--- a/tools/testing/selftests/kvm/x86/sync_regs_test.c
+++ b/tools/testing/selftests/kvm/x86/sync_regs_test.c
@@ -20,7 +20,7 @@
#include "kvm_util.h"
#include "processor.h"
-#define UCALL_PIO_PORT ((uint16_t)0x1000)
+#define UCALL_PIO_PORT ((u16)0x1000)
struct ucall uc_none = {
.cmd = UCALL_NONE,
--
2.49.0.906.g1f30a19c02-goog
* [PATCH 09/10] KVM: selftests: Use s16 instead of int16_t
2025-05-01 18:32 [PATCH 00/10] KVM: selftests: Convert to kernel-style types David Matlack
` (7 preceding siblings ...)
2025-05-01 18:33 ` [PATCH 08/10] KVM: selftests: Use u16 instead of uint16_t David Matlack
@ 2025-05-01 18:33 ` David Matlack
2025-05-01 18:33 ` [PATCH 10/10] KVM: selftests: Use u8 instead of uint8_t David Matlack
` (2 subsequent siblings)
11 siblings, 0 replies; 16+ messages in thread
From: David Matlack @ 2025-05-01 18:33 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Anup Patel, Atish Patra, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, David Hildenbrand,
Sean Christopherson, David Matlack, Andrew Jones, Isaku Yamahata,
Reinette Chatre, Eric Auger, James Houghton, Colin Ian King, kvm,
linux-arm-kernel, kvmarm, kvm-riscv, linux-riscv
Use s16 instead of int16_t to make the KVM selftests code more concise
and more similar to the kernel (since selftests are primarily developed
by kernel developers).
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/int16_t/s16/g'
Whitespace was then adjusted by hand to make checkpatch.pl happy.
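The single line this patch touches sits in guest_vsnprintf()'s handling
of the 'h' length modifier. A simplified sketch of what the two casts do
for a %hd argument (variable names are illustrative, and two's-complement
wrapping is assumed, as the kernel does):

  int promoted = -5;        /* varargs promote short to int */
  u64 num = (u16)promoted;  /* truncate to 16 bits: 0xfffb */
  num = (s16)num;           /* sign-extend back: 0xffff...fffb == -5 */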
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
tools/testing/selftests/kvm/lib/guest_sprintf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/lib/guest_sprintf.c b/tools/testing/selftests/kvm/lib/guest_sprintf.c
index afbddb53ddd6..0f6d5c3e060c 100644
--- a/tools/testing/selftests/kvm/lib/guest_sprintf.c
+++ b/tools/testing/selftests/kvm/lib/guest_sprintf.c
@@ -288,7 +288,7 @@ int guest_vsnprintf(char *buf, int n, const char *fmt, va_list args)
else if (qualifier == 'h') {
num = (u16)va_arg(args, int);
if (flags & SIGN)
- num = (int16_t)num;
+ num = (s16)num;
} else if (flags & SIGN)
num = va_arg(args, int);
else
--
2.49.0.906.g1f30a19c02-goog
* [PATCH 10/10] KVM: selftests: Use u8 instead of uint8_t
2025-05-01 18:32 [PATCH 00/10] KVM: selftests: Convert to kernel-style types David Matlack
` (8 preceding siblings ...)
2025-05-01 18:33 ` [PATCH 09/10] KVM: selftests: Use s16 instead of int16_t David Matlack
@ 2025-05-01 18:33 ` David Matlack
2025-05-01 21:03 ` [PATCH 00/10] KVM: selftests: Convert to kernel-style types Sean Christopherson
2025-05-02 9:11 ` Andrew Jones
11 siblings, 0 replies; 16+ messages in thread
From: David Matlack @ 2025-05-01 18:33 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Anup Patel, Atish Patra, Paul Walmsley,
Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Christian Borntraeger,
Janosch Frank, Claudio Imbrenda, David Hildenbrand,
Sean Christopherson, David Matlack, Andrew Jones, Isaku Yamahata,
Reinette Chatre, Eric Auger, James Houghton, Colin Ian King, kvm,
linux-arm-kernel, kvmarm, kvm-riscv, linux-riscv
Use u8 instead of uint8_t to make the KVM selftests code more concise
and more similar to the kernel (since selftests are primarily developed
by kernel developers).
This commit was generated with the following command:
git ls-files tools/testing/selftests/kvm | xargs sed -i 's/uint8_t/u8/g'
Whitespace was then adjusted by hand to make checkpatch.pl happy.
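As a quick sanity check on one of the hunks below, struct vm_shape in
include/kvm_util.h is now expressed entirely in kernel-style types, and
its layout still packs to exactly 8 bytes, which the existing
kvm_static_assert() continues to enforce (offsets annotated here for
illustration only):

  struct vm_shape {
          u32 type;       /* bytes 0-3 */
          u8  mode;       /* byte  4 */
          u8  pad0;       /* byte  5 */
          u16 pad1;       /* bytes 6-7 */
  };
  /* sizeof(struct vm_shape) == 8 == sizeof(u64) */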
No functional change intended.
Signed-off-by: David Matlack <dmatlack@google.com>
---
.../selftests/kvm/arm64/debug-exceptions.c | 16 ++---
.../testing/selftests/kvm/arm64/set_id_regs.c | 6 +-
.../selftests/kvm/arm64/vpmu_counter_access.c | 2 +-
.../testing/selftests/kvm/coalesced_io_test.c | 2 +-
tools/testing/selftests/kvm/get-reg-list.c | 2 +-
.../testing/selftests/kvm/include/kvm_util.h | 14 ++---
.../testing/selftests/kvm/include/test_util.h | 2 +-
.../testing/selftests/kvm/include/x86/apic.h | 6 +-
.../selftests/kvm/include/x86/hyperv.h | 10 ++--
.../selftests/kvm/include/x86/processor.h | 20 +++----
tools/testing/selftests/kvm/include/x86/sev.h | 4 +-
tools/testing/selftests/kvm/include/x86/vmx.h | 12 ++--
.../selftests/kvm/lib/arm64/processor.c | 8 +--
.../testing/selftests/kvm/lib/guest_sprintf.c | 2 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 2 +-
.../selftests/kvm/lib/riscv/processor.c | 6 +-
.../selftests/kvm/lib/s390/processor.c | 8 +--
tools/testing/selftests/kvm/lib/sparsebit.c | 2 +-
.../testing/selftests/kvm/lib/x86/processor.c | 16 ++---
tools/testing/selftests/kvm/lib/x86/sev.c | 4 +-
.../testing/selftests/kvm/memslot_perf_test.c | 2 +-
tools/testing/selftests/kvm/mmu_stress_test.c | 2 +-
tools/testing/selftests/kvm/s390/memop.c | 28 ++++-----
tools/testing/selftests/kvm/s390/resets.c | 2 +-
.../selftests/kvm/s390/shared_zeropage_test.c | 2 +-
tools/testing/selftests/kvm/s390/tprot.c | 12 ++--
.../selftests/kvm/set_memory_region_test.c | 2 +-
tools/testing/selftests/kvm/steal_time.c | 4 +-
.../selftests/kvm/x86/fix_hypercall_test.c | 12 ++--
.../selftests/kvm/x86/flds_emulation.h | 2 +-
.../selftests/kvm/x86/hyperv_features.c | 4 +-
tools/testing/selftests/kvm/x86/kvm_pv_test.c | 2 +-
.../selftests/kvm/x86/nested_emulation_test.c | 8 +--
.../selftests/kvm/x86/platform_info_test.c | 2 +-
.../selftests/kvm/x86/pmu_counters_test.c | 60 +++++++++----------
.../selftests/kvm/x86/pmu_event_filter_test.c | 12 ++--
.../kvm/x86/private_mem_conversions_test.c | 24 ++++----
tools/testing/selftests/kvm/x86/smm_test.c | 2 +-
tools/testing/selftests/kvm/x86/state_test.c | 4 +-
.../selftests/kvm/x86/userspace_io_test.c | 4 +-
.../kvm/x86/userspace_msr_exit_test.c | 14 ++---
.../selftests/kvm/x86/vmx_pmu_caps_test.c | 2 +-
.../selftests/kvm/x86/xen_shinfo_test.c | 4 +-
43 files changed, 177 insertions(+), 177 deletions(-)
diff --git a/tools/testing/selftests/kvm/arm64/debug-exceptions.c b/tools/testing/selftests/kvm/arm64/debug-exceptions.c
index 8576e707b05e..4b5de274584e 100644
--- a/tools/testing/selftests/kvm/arm64/debug-exceptions.c
+++ b/tools/testing/selftests/kvm/arm64/debug-exceptions.c
@@ -102,7 +102,7 @@ GEN_DEBUG_WRITE_REG(dbgwvr)
static void reset_debug_state(void)
{
- uint8_t brps, wrps, i;
+ u8 brps, wrps, i;
u64 dfr0;
asm volatile("msr daifset, #8");
@@ -149,7 +149,7 @@ static void enable_monitor_debug_exceptions(void)
isb();
}
-static void install_wp(uint8_t wpn, u64 addr)
+static void install_wp(u8 wpn, u64 addr)
{
u32 wcr;
@@ -162,7 +162,7 @@ static void install_wp(uint8_t wpn, u64 addr)
enable_monitor_debug_exceptions();
}
-static void install_hw_bp(uint8_t bpn, u64 addr)
+static void install_hw_bp(u8 bpn, u64 addr)
{
u32 bcr;
@@ -174,7 +174,7 @@ static void install_hw_bp(uint8_t bpn, u64 addr)
enable_monitor_debug_exceptions();
}
-static void install_wp_ctx(uint8_t addr_wp, uint8_t ctx_bp, u64 addr,
+static void install_wp_ctx(u8 addr_wp, u8 ctx_bp, u64 addr,
u64 ctx)
{
u32 wcr;
@@ -196,7 +196,7 @@ static void install_wp_ctx(uint8_t addr_wp, uint8_t ctx_bp, u64 addr,
enable_monitor_debug_exceptions();
}
-void install_hw_bp_ctx(uint8_t addr_bp, uint8_t ctx_bp, u64 addr,
+void install_hw_bp_ctx(u8 addr_bp, u8 ctx_bp, u64 addr,
u64 ctx)
{
u32 addr_bcr, ctx_bcr;
@@ -234,7 +234,7 @@ static void install_ss(void)
static volatile char write_data;
-static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
+static void guest_code(u8 bpn, u8 wpn, u8 ctx_bpn)
{
u64 ctx = 0xabcdef; /* a random context number */
@@ -421,7 +421,7 @@ static int debug_version(u64 id_aa64dfr0)
return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), id_aa64dfr0);
}
-static void test_guest_debug_exceptions(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
+static void test_guest_debug_exceptions(u8 bpn, u8 wpn, u8 ctx_bpn)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
@@ -534,7 +534,7 @@ void test_single_step_from_userspace(int test_cnt)
*/
void test_guest_debug_exceptions_all(u64 aa64dfr0)
{
- uint8_t brp_num, wrp_num, ctx_brp_num, normal_brp_num, ctx_brp_base;
+ u8 brp_num, wrp_num, ctx_brp_num, normal_brp_num, ctx_brp_base;
int b, w, c;
/* Number of breakpoints */
diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index 77c197ef4f4a..20fcebe90741 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -30,7 +30,7 @@ struct reg_ftr_bits {
char *name;
bool sign;
enum ftr_type type;
- uint8_t shift;
+ u8 shift;
u64 mask;
/*
* For FTR_EXACT, safe_val is used as the exact safe value.
@@ -347,7 +347,7 @@ u64 get_invalid_value(const struct reg_ftr_bits *ftr_bits, u64 ftr)
static u64 test_reg_set_success(struct kvm_vcpu *vcpu, u64 reg,
const struct reg_ftr_bits *ftr_bits)
{
- uint8_t shift = ftr_bits->shift;
+ u8 shift = ftr_bits->shift;
u64 mask = ftr_bits->mask;
u64 val, new_val, ftr;
@@ -370,7 +370,7 @@ static u64 test_reg_set_success(struct kvm_vcpu *vcpu, u64 reg,
static void test_reg_set_fail(struct kvm_vcpu *vcpu, u64 reg,
const struct reg_ftr_bits *ftr_bits)
{
- uint8_t shift = ftr_bits->shift;
+ u8 shift = ftr_bits->shift;
u64 mask = ftr_bits->mask;
u64 val, old_val, ftr;
int r;
diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
index 986ff950a652..c2af221ca6ca 100644
--- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
@@ -408,7 +408,7 @@ static void guest_code(u64 expected_pmcr_n)
static void create_vpmu_vm(void *guest_code)
{
struct kvm_vcpu_init init;
- uint8_t pmuver, ec;
+ u8 pmuver, ec;
u64 dfr0, irq = 23;
struct kvm_device_attr irq_attr = {
.group = KVM_ARM_VCPU_PMU_V3_CTRL,
diff --git a/tools/testing/selftests/kvm/coalesced_io_test.c b/tools/testing/selftests/kvm/coalesced_io_test.c
index f5ab412d2042..df4ed5e3877c 100644
--- a/tools/testing/selftests/kvm/coalesced_io_test.c
+++ b/tools/testing/selftests/kvm/coalesced_io_test.c
@@ -23,7 +23,7 @@ struct kvm_coalesced_io {
* amount of #ifdeffery and complexity, without having to sacrifice
* verbose error messages.
*/
- uint8_t pio_port;
+ u8 pio_port;
};
static struct kvm_coalesced_io kvm_builtin_io_ring;
diff --git a/tools/testing/selftests/kvm/get-reg-list.c b/tools/testing/selftests/kvm/get-reg-list.c
index 91f05f78e824..9c791fdd7123 100644
--- a/tools/testing/selftests/kvm/get-reg-list.c
+++ b/tools/testing/selftests/kvm/get-reg-list.c
@@ -213,7 +213,7 @@ static void run_test(struct vcpu_reg_list *c)
* since we don't know the capabilities of any new registers.
*/
for_each_present_blessed_reg(i) {
- uint8_t addr[2048 / 8];
+ u8 addr[2048 / 8];
struct kvm_one_reg reg = {
.id = reg_list->reg[i],
.addr = (__u64)&addr,
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 271d7a434a4c..7edffa281c91 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -183,8 +183,8 @@ enum vm_guest_mode {
struct vm_shape {
u32 type;
- uint8_t mode;
- uint8_t pad0;
+ u8 mode;
+ u8 pad0;
u16 pad1;
};
@@ -432,7 +432,7 @@ void kvm_vm_release(struct kvm_vm *vmp);
void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename);
int kvm_memfd_alloc(size_t size, bool hugepages);
-void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
+void vm_dump(FILE *stream, struct kvm_vm *vm, u8 indent);
static inline void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log)
{
@@ -1016,10 +1016,10 @@ vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
void assert_on_unhandled_exception(struct kvm_vcpu *vcpu);
void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu,
- uint8_t indent);
+ u8 indent);
static inline void vcpu_dump(FILE *stream, struct kvm_vcpu *vcpu,
- uint8_t indent)
+ u8 indent)
{
vcpu_arch_dump(stream, vcpu, indent);
}
@@ -1123,9 +1123,9 @@ static inline gpa_t addr_gva2gpa(struct kvm_vm *vm, gva_t gva)
* Dumps to the FILE stream given by @stream, the contents of all the
* virtual translation tables for the VM given by @vm.
*/
-void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, u8 indent);
-static inline void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+static inline void virt_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
{
virt_arch_dump(stream, vm, indent);
}
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 5608008cfe61..c7f6a3ee6793 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -100,7 +100,7 @@ struct guest_random_state new_guest_random_state(u32 seed);
u32 guest_random_u32(struct guest_random_state *state);
static inline bool __guest_random_bool(struct guest_random_state *state,
- uint8_t percent)
+ u8 percent)
{
return (guest_random_u32(state) % 100) < percent;
}
diff --git a/tools/testing/selftests/kvm/include/x86/apic.h b/tools/testing/selftests/kvm/include/x86/apic.h
index 2d164405e7f2..f86bd22f4c16 100644
--- a/tools/testing/selftests/kvm/include/x86/apic.h
+++ b/tools/testing/selftests/kvm/include/x86/apic.h
@@ -92,14 +92,14 @@ static inline u64 x2apic_read_reg(unsigned int reg)
return rdmsr(APIC_BASE_MSR + (reg >> 4));
}
-static inline uint8_t x2apic_write_reg_safe(unsigned int reg, u64 value)
+static inline u8 x2apic_write_reg_safe(unsigned int reg, u64 value)
{
return wrmsr_safe(APIC_BASE_MSR + (reg >> 4), value);
}
static inline void x2apic_write_reg(unsigned int reg, u64 value)
{
- uint8_t fault = x2apic_write_reg_safe(reg, value);
+ u8 fault = x2apic_write_reg_safe(reg, value);
__GUEST_ASSERT(!fault, "Unexpected fault 0x%x on WRMSR(%x) = %lx\n",
fault, APIC_BASE_MSR + (reg >> 4), value);
@@ -107,7 +107,7 @@ static inline void x2apic_write_reg(unsigned int reg, u64 value)
static inline void x2apic_write_reg_fault(unsigned int reg, u64 value)
{
- uint8_t fault = x2apic_write_reg_safe(reg, value);
+ u8 fault = x2apic_write_reg_safe(reg, value);
__GUEST_ASSERT(fault == GP_VECTOR,
"Wanted #GP on WRMSR(%x) = %lx, got 0x%x\n",
diff --git a/tools/testing/selftests/kvm/include/x86/hyperv.h b/tools/testing/selftests/kvm/include/x86/hyperv.h
index 2add2123e37b..78003f5a22f3 100644
--- a/tools/testing/selftests/kvm/include/x86/hyperv.h
+++ b/tools/testing/selftests/kvm/include/x86/hyperv.h
@@ -254,12 +254,12 @@
* Issue a Hyper-V hypercall. Returns exception vector raised or 0, 'hv_status'
* is set to the hypercall status (if no exception occurred).
*/
-static inline uint8_t __hyperv_hypercall(u64 control, gva_t input_address,
- gva_t output_address,
- u64 *hv_status)
+static inline u8 __hyperv_hypercall(u64 control, gva_t input_address,
+ gva_t output_address,
+ u64 *hv_status)
{
u64 error_code;
- uint8_t vector;
+ u8 vector;
/* Note both the hypercall and the "asm safe" clobber r9-r11. */
asm volatile("mov %[output_address], %%r8\n\t"
@@ -278,7 +278,7 @@ static inline void hyperv_hypercall(u64 control, gva_t input_address,
gva_t output_address)
{
u64 hv_status;
- uint8_t vector;
+ u8 vector;
vector = __hyperv_hypercall(control, input_address, output_address, &hv_status);
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 302836e276e0..5598ea6b86df 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -694,7 +694,7 @@ static inline bool this_cpu_is_amd(void)
}
static inline u32 __this_cpu_has(u32 function, u32 index,
- uint8_t reg, uint8_t lo, uint8_t hi)
+ u8 reg, u8 lo, u8 hi)
{
u32 gprs[4];
@@ -1074,7 +1074,7 @@ static inline void vcpu_set_cpuid(struct kvm_vcpu *vcpu)
void vcpu_set_cpuid_property(struct kvm_vcpu *vcpu,
struct kvm_x86_cpu_property property,
u32 value);
-void vcpu_set_cpuid_maxphyaddr(struct kvm_vcpu *vcpu, uint8_t maxphyaddr);
+void vcpu_set_cpuid_maxphyaddr(struct kvm_vcpu *vcpu, u8 maxphyaddr);
void vcpu_clear_cpuid_entry(struct kvm_vcpu *vcpu, u32 function);
@@ -1227,7 +1227,7 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
#define kvm_asm_safe(insn, inputs...) \
({ \
u64 ign_error_code; \
- uint8_t vector; \
+ u8 vector; \
\
asm volatile(KVM_ASM_SAFE(insn) \
: KVM_ASM_SAFE_OUTPUTS(vector, ign_error_code) \
@@ -1238,7 +1238,7 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
#define kvm_asm_safe_ec(insn, error_code, inputs...) \
({ \
- uint8_t vector; \
+ u8 vector; \
\
asm volatile(KVM_ASM_SAFE(insn) \
: KVM_ASM_SAFE_OUTPUTS(vector, error_code) \
@@ -1250,7 +1250,7 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
#define kvm_asm_safe_fep(insn, inputs...) \
({ \
u64 ign_error_code; \
- uint8_t vector; \
+ u8 vector; \
\
asm volatile(KVM_ASM_SAFE_FEP(insn) \
: KVM_ASM_SAFE_OUTPUTS(vector, ign_error_code) \
@@ -1261,7 +1261,7 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
#define kvm_asm_safe_ec_fep(insn, error_code, inputs...) \
({ \
- uint8_t vector; \
+ u8 vector; \
\
asm volatile(KVM_ASM_SAFE_FEP(insn) \
: KVM_ASM_SAFE_OUTPUTS(vector, error_code) \
@@ -1271,10 +1271,10 @@ void vm_install_exception_handler(struct kvm_vm *vm, int vector,
})
#define BUILD_READ_U64_SAFE_HELPER(insn, _fep, _FEP) \
-static inline uint8_t insn##_safe ##_fep(u32 idx, u64 *val) \
+static inline u8 insn##_safe ##_fep(u32 idx, u64 *val) \
{ \
u64 error_code; \
- uint8_t vector; \
+ u8 vector; \
u32 a, d; \
\
asm volatile(KVM_ASM_SAFE##_FEP(#insn) \
@@ -1299,12 +1299,12 @@ BUILD_READ_U64_SAFE_HELPERS(rdmsr)
BUILD_READ_U64_SAFE_HELPERS(rdpmc)
BUILD_READ_U64_SAFE_HELPERS(xgetbv)
-static inline uint8_t wrmsr_safe(u32 msr, u64 val)
+static inline u8 wrmsr_safe(u32 msr, u64 val)
{
return kvm_asm_safe("wrmsr", "a"(val & -1u), "d"(val >> 32), "c"(msr));
}
-static inline uint8_t xsetbv_safe(u32 index, u64 value)
+static inline u8 xsetbv_safe(u32 index, u64 value)
{
u32 eax = value;
u32 edx = value >> 32;
diff --git a/tools/testing/selftests/kvm/include/x86/sev.h b/tools/testing/selftests/kvm/include/x86/sev.h
index fa056d2e1c7e..0eead22b248a 100644
--- a/tools/testing/selftests/kvm/include/x86/sev.h
+++ b/tools/testing/selftests/kvm/include/x86/sev.h
@@ -28,12 +28,12 @@ enum sev_guest_state {
#define GHCB_MSR_TERM_REQ 0x100
void sev_vm_launch(struct kvm_vm *vm, u32 policy);
-void sev_vm_launch_measure(struct kvm_vm *vm, uint8_t *measurement);
+void sev_vm_launch_measure(struct kvm_vm *vm, u8 *measurement);
void sev_vm_launch_finish(struct kvm_vm *vm);
struct kvm_vm *vm_sev_create_with_one_vcpu(u32 type, void *guest_code,
struct kvm_vcpu **cpu);
-void vm_sev_launch(struct kvm_vm *vm, u32 policy, uint8_t *measurement);
+void vm_sev_launch(struct kvm_vm *vm, u32 policy, u8 *measurement);
kvm_static_assert(SEV_RET_SUCCESS == 0);
diff --git a/tools/testing/selftests/kvm/include/x86/vmx.h b/tools/testing/selftests/kvm/include/x86/vmx.h
index e1772fb66811..52bd89390b33 100644
--- a/tools/testing/selftests/kvm/include/x86/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86/vmx.h
@@ -294,7 +294,7 @@ struct vmx_msr_entry {
static inline int vmxon(u64 phys)
{
- uint8_t ret;
+ u8 ret;
__asm__ __volatile__ ("vmxon %[pa]; setna %[ret]"
: [ret]"=rm"(ret)
@@ -311,7 +311,7 @@ static inline void vmxoff(void)
static inline int vmclear(u64 vmcs_pa)
{
- uint8_t ret;
+ u8 ret;
__asm__ __volatile__ ("vmclear %[pa]; setna %[ret]"
: [ret]"=rm"(ret)
@@ -323,7 +323,7 @@ static inline int vmclear(u64 vmcs_pa)
static inline int vmptrld(u64 vmcs_pa)
{
- uint8_t ret;
+ u8 ret;
if (enable_evmcs)
return -1;
@@ -339,7 +339,7 @@ static inline int vmptrld(u64 vmcs_pa)
static inline int vmptrst(u64 *value)
{
u64 tmp;
- uint8_t ret;
+ u8 ret;
if (enable_evmcs)
return evmcs_vmptrst(value);
@@ -450,7 +450,7 @@ static inline void vmcall(void)
static inline int vmread(u64 encoding, u64 *value)
{
u64 tmp;
- uint8_t ret;
+ u8 ret;
if (enable_evmcs)
return evmcs_vmread(encoding, value);
@@ -477,7 +477,7 @@ static inline u64 vmreadz(u64 encoding)
static inline int vmwrite(u64 encoding, u64 value)
{
- uint8_t ret;
+ u8 ret;
if (enable_evmcs)
return evmcs_vmwrite(encoding, value);
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index 01c8ee96b8ec..8369119421e1 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -128,7 +128,7 @@ void virt_arch_pgd_alloc(struct kvm_vm *vm)
static void _virt_pg_map(struct kvm_vm *vm, u64 vaddr, u64 paddr,
u64 flags)
{
- uint8_t attr_idx = flags & (PTE_ATTRINDX_MASK >> PTE_ATTRINDX_SHIFT);
+ u8 attr_idx = flags & (PTE_ATTRINDX_MASK >> PTE_ATTRINDX_SHIFT);
u64 pg_attr;
u64 *ptep;
@@ -230,7 +230,7 @@ gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
}
-static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, u64 page, int level)
+static void pte_dump(FILE *stream, struct kvm_vm *vm, u8 indent, u64 page, int level)
{
#ifdef DEBUG
static const char * const type[] = { "", "pud", "pmd", "pte" };
@@ -249,7 +249,7 @@ static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, u64 page,
#endif
}
-void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
{
int level = 4 - (vm->pgtable_levels - 1);
u64 pgd, *ptep;
@@ -364,7 +364,7 @@ void aarch64_vcpu_setup(struct kvm_vcpu *vcpu, struct kvm_vcpu_init *init)
vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_TPIDR_EL1), vcpu->id);
}
-void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
+void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, u8 indent)
{
u64 pstate, pc;
diff --git a/tools/testing/selftests/kvm/lib/guest_sprintf.c b/tools/testing/selftests/kvm/lib/guest_sprintf.c
index 0f6d5c3e060c..e5fbc39a9312 100644
--- a/tools/testing/selftests/kvm/lib/guest_sprintf.c
+++ b/tools/testing/selftests/kvm/lib/guest_sprintf.c
@@ -216,7 +216,7 @@ int guest_vsnprintf(char *buf, int n, const char *fmt, va_list args)
while (--field_width > 0)
APPEND_BUFFER_SAFE(str, end, ' ');
APPEND_BUFFER_SAFE(str, end,
- (uint8_t)va_arg(args, int));
+ (u8)va_arg(args, int));
while (--field_width > 0)
APPEND_BUFFER_SAFE(str, end, ' ');
continue;
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index ade04f83485e..45c998b5be22 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1935,7 +1935,7 @@ void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing)
* Dumps the current state of the VM given by vm, to the FILE stream
* given by stream.
*/
-void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+void vm_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
{
int ctr;
struct userspace_mem_region *region;
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index 19db0671a390..645c9630a981 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -152,7 +152,7 @@ gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
exit(1);
}
-static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent,
+static void pte_dump(FILE *stream, struct kvm_vm *vm, u8 indent,
u64 page, int level)
{
#ifdef DEBUG
@@ -174,7 +174,7 @@ static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent,
#endif
}
-void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
{
int level = vm->pgtable_levels - 1;
u64 pgd, *ptep;
@@ -217,7 +217,7 @@ void riscv_vcpu_mmu_setup(struct kvm_vcpu *vcpu)
vcpu_set_reg(vcpu, RISCV_GENERAL_CSR_REG(satp), satp);
}
-void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
+void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, u8 indent)
{
struct kvm_riscv_core core;
diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index 5445a54b44bb..4cc212396885 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -111,7 +111,7 @@ gpa_t addr_arch_gva2gpa(struct kvm_vm *vm, gva_t gva)
return (entry[idx] & ~0xffful) + (gva & 0xffful);
}
-static void virt_dump_ptes(FILE *stream, struct kvm_vm *vm, uint8_t indent,
+static void virt_dump_ptes(FILE *stream, struct kvm_vm *vm, u8 indent,
u64 ptea_start)
{
u64 *pte, ptea;
@@ -125,7 +125,7 @@ static void virt_dump_ptes(FILE *stream, struct kvm_vm *vm, uint8_t indent,
}
}
-static void virt_dump_region(FILE *stream, struct kvm_vm *vm, uint8_t indent,
+static void virt_dump_region(FILE *stream, struct kvm_vm *vm, u8 indent,
u64 reg_tab_addr)
{
u64 addr, *entry;
@@ -147,7 +147,7 @@ static void virt_dump_region(FILE *stream, struct kvm_vm *vm, uint8_t indent,
}
}
-void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
{
if (!vm->pgd_created)
return;
@@ -212,7 +212,7 @@ void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
va_end(ap);
}
-void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
+void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, u8 indent)
{
fprintf(stream, "%*spstate: psw: 0x%.16llx:0x%.16llx\n",
indent, "", vcpu->run->psw_mask, vcpu->run->psw_addr);
diff --git a/tools/testing/selftests/kvm/lib/sparsebit.c b/tools/testing/selftests/kvm/lib/sparsebit.c
index 2789d34436e6..7572a5033c37 100644
--- a/tools/testing/selftests/kvm/lib/sparsebit.c
+++ b/tools/testing/selftests/kvm/lib/sparsebit.c
@@ -2074,7 +2074,7 @@ int main(void)
{
s = sparsebit_alloc();
for (;;) {
- uint8_t op = get8() & 0xf;
+ u8 op = get8() & 0xf;
u64 first = get64();
u64 last = get64();
diff --git a/tools/testing/selftests/kvm/lib/x86/processor.c b/tools/testing/selftests/kvm/lib/x86/processor.c
index 7258f9f8f0bf..dfbc1c7199c4 100644
--- a/tools/testing/selftests/kvm/lib/x86/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86/processor.c
@@ -23,7 +23,7 @@ bool host_cpu_is_intel;
bool is_forced_emulation_enabled;
u64 guest_tsc_khz;
-static void regs_dump(FILE *stream, struct kvm_regs *regs, uint8_t indent)
+static void regs_dump(FILE *stream, struct kvm_regs *regs, u8 indent)
{
fprintf(stream, "%*srax: 0x%.16llx rbx: 0x%.16llx "
"rcx: 0x%.16llx rdx: 0x%.16llx\n",
@@ -47,7 +47,7 @@ static void regs_dump(FILE *stream, struct kvm_regs *regs, uint8_t indent)
}
static void segment_dump(FILE *stream, struct kvm_segment *segment,
- uint8_t indent)
+ u8 indent)
{
fprintf(stream, "%*sbase: 0x%.16llx limit: 0x%.8x "
"selector: 0x%.4x type: 0x%.2x\n",
@@ -64,7 +64,7 @@ static void segment_dump(FILE *stream, struct kvm_segment *segment,
}
static void dtable_dump(FILE *stream, struct kvm_dtable *dtable,
- uint8_t indent)
+ u8 indent)
{
fprintf(stream, "%*sbase: 0x%.16llx limit: 0x%.4x "
"padding: 0x%.4x 0x%.4x 0x%.4x\n",
@@ -72,7 +72,7 @@ static void dtable_dump(FILE *stream, struct kvm_dtable *dtable,
dtable->padding[0], dtable->padding[1], dtable->padding[2]);
}
-static void sregs_dump(FILE *stream, struct kvm_sregs *sregs, uint8_t indent)
+static void sregs_dump(FILE *stream, struct kvm_sregs *sregs, u8 indent)
{
unsigned int i;
@@ -318,7 +318,7 @@ u64 *vm_get_page_table_entry(struct kvm_vm *vm, u64 vaddr)
return __vm_get_page_table_entry(vm, vaddr, &level);
}
-void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, u8 indent)
{
u64 *pml4e, *pml4e_start;
u64 *pdpe, *pdpe_start;
@@ -747,7 +747,7 @@ const struct kvm_cpuid2 *kvm_get_supported_cpuid(void)
static u32 __kvm_cpu_has(const struct kvm_cpuid2 *cpuid,
u32 function, u32 index,
- uint8_t reg, uint8_t lo, uint8_t hi)
+ u8 reg, u8 lo, u8 hi)
{
const struct kvm_cpuid_entry2 *entry;
int i;
@@ -965,7 +965,7 @@ void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
va_end(ap);
}
-void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
+void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, u8 indent)
{
struct kvm_regs regs;
struct kvm_sregs sregs;
@@ -1215,7 +1215,7 @@ unsigned long vm_compute_max_gfn(struct kvm_vm *vm)
{
const unsigned long num_ht_pages = 12 << (30 - vm->page_shift); /* 12 GiB */
unsigned long ht_gfn, max_gfn, max_pfn;
- uint8_t maxphyaddr, guest_maxphyaddr;
+ u8 maxphyaddr, guest_maxphyaddr;
/*
* Use "guest MAXPHYADDR" from KVM if it's available. Guest MAXPHYADDR
diff --git a/tools/testing/selftests/kvm/lib/x86/sev.c b/tools/testing/selftests/kvm/lib/x86/sev.c
index dba0aa744561..572245abcdd8 100644
--- a/tools/testing/selftests/kvm/lib/x86/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86/sev.c
@@ -84,7 +84,7 @@ void sev_vm_launch(struct kvm_vm *vm, u32 policy)
vm->arch.is_pt_protected = true;
}
-void sev_vm_launch_measure(struct kvm_vm *vm, uint8_t *measurement)
+void sev_vm_launch_measure(struct kvm_vm *vm, u8 *measurement)
{
struct kvm_sev_launch_measure launch_measure;
struct kvm_sev_guest_status guest_status;
@@ -128,7 +128,7 @@ struct kvm_vm *vm_sev_create_with_one_vcpu(u32 type, void *guest_code,
return vm;
}
-void vm_sev_launch(struct kvm_vm *vm, u32 policy, uint8_t *measurement)
+void vm_sev_launch(struct kvm_vm *vm, u32 policy, u8 *measurement)
{
sev_vm_launch(vm, policy);
diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index 29b2bb605ee6..0d369dbde88f 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -216,7 +216,7 @@ static void *vm_gpa2hva(struct vm_data *data, u64 gpa, u64 *rempages)
}
base = data->hva_slots[slot];
- return (uint8_t *)base + slotoffs * guest_page_size + pgoffs;
+ return (u8 *)base + slotoffs * guest_page_size + pgoffs;
}
static u64 vm_slot2gpa(struct vm_data *data, u32 slot)
diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index 4c3b96dcab21..af8e97d36764 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -346,7 +346,7 @@ int main(int argc, char *argv[])
/* Pre-fault the memory to avoid taking mmap_sem on guest page faults. */
for (i = 0; i < slot_size; i += vm->page_size)
- ((uint8_t *)mem)[i] = 0xaa;
+ ((u8 *)mem)[i] = 0xaa;
gpa = 0;
for (slot = first_slot; slot < max_slots; slot++) {
diff --git a/tools/testing/selftests/kvm/s390/memop.c b/tools/testing/selftests/kvm/s390/memop.c
index 2283ad346746..4f63ff79ee46 100644
--- a/tools/testing/selftests/kvm/s390/memop.c
+++ b/tools/testing/selftests/kvm/s390/memop.c
@@ -48,13 +48,13 @@ struct mop_desc {
void *buf;
u32 sida_offset;
void *old;
- uint8_t old_value[16];
+ u8 old_value[16];
bool *cmpxchg_success;
- uint8_t ar;
- uint8_t key;
+ u8 ar;
+ u8 key;
};
-const uint8_t NO_KEY = 0xff;
+const u8 NO_KEY = 0xff;
static struct kvm_s390_mem_op ksmo_from_desc(struct mop_desc *desc)
{
@@ -230,8 +230,8 @@ static void memop_ioctl(struct test_info info, struct kvm_s390_mem_op *ksmo,
#define CR0_FETCH_PROTECTION_OVERRIDE (1UL << (63 - 38))
#define CR0_STORAGE_PROTECTION_OVERRIDE (1UL << (63 - 39))
-static uint8_t __aligned(PAGE_SIZE) mem1[65536];
-static uint8_t __aligned(PAGE_SIZE) mem2[65536];
+static u8 __aligned(PAGE_SIZE) mem1[65536];
+static u8 __aligned(PAGE_SIZE) mem2[65536];
struct test_default {
struct kvm_vm *kvm_vm;
@@ -296,7 +296,7 @@ static void prepare_mem12(void)
TEST_ASSERT(!memcmp(p1, p2, size), "Memory contents do not match!")
static void default_write_read(struct test_info copy_cpu, struct test_info mop_cpu,
- enum mop_target mop_target, u32 size, uint8_t key)
+ enum mop_target mop_target, u32 size, u8 key)
{
prepare_mem12();
CHECK_N_DO(MOP, mop_cpu, mop_target, WRITE, mem1, size,
@@ -308,7 +308,7 @@ static void default_write_read(struct test_info copy_cpu, struct test_info mop_c
}
static void default_read(struct test_info copy_cpu, struct test_info mop_cpu,
- enum mop_target mop_target, u32 size, uint8_t key)
+ enum mop_target mop_target, u32 size, u8 key)
{
prepare_mem12();
CHECK_N_DO(MOP, mop_cpu, mop_target, WRITE, mem1, size, GADDR_V(mem1));
@@ -318,12 +318,12 @@ static void default_read(struct test_info copy_cpu, struct test_info mop_cpu,
ASSERT_MEM_EQ(mem1, mem2, size);
}
-static void default_cmpxchg(struct test_default *test, uint8_t key)
+static void default_cmpxchg(struct test_default *test, u8 key)
{
for (int size = 1; size <= 16; size *= 2) {
for (int offset = 0; offset < 16; offset += size) {
- uint8_t __aligned(16) new[16] = {};
- uint8_t __aligned(16) old[16];
+ u8 __aligned(16) new[16] = {};
+ u8 __aligned(16) old[16];
bool succ;
prepare_mem12();
@@ -400,7 +400,7 @@ static void test_copy_access_register(void)
kvm_vm_free(t.kvm_vm);
}
-static void set_storage_key_range(void *addr, size_t len, uint8_t key)
+static void set_storage_key_range(void *addr, size_t len, u8 key)
{
uintptr_t _addr, abs, i;
int not_mapped = 0;
@@ -483,7 +483,7 @@ static __uint128_t cut_to_size(int size, __uint128_t val)
{
switch (size) {
case 1:
- return (uint8_t)val;
+ return (u8)val;
case 2:
return (u16)val;
case 4:
@@ -553,7 +553,7 @@ static __uint128_t permutate_bits(bool guest, int i, int size, __uint128_t old)
if (swap) {
int i, j;
__uint128_t new;
- uint8_t byte0, byte1;
+ u8 byte0, byte1;
rand = rand * 3 + 1;
i = rand % size;
diff --git a/tools/testing/selftests/kvm/s390/resets.c b/tools/testing/selftests/kvm/s390/resets.c
index 7a81d07500bd..e3c7a2f148f9 100644
--- a/tools/testing/selftests/kvm/s390/resets.c
+++ b/tools/testing/selftests/kvm/s390/resets.c
@@ -20,7 +20,7 @@
struct kvm_s390_irq buf[ARBITRARY_NON_ZERO_VCPU_ID + LOCAL_IRQS];
-static uint8_t regs_null[512];
+static u8 regs_null[512];
static void guest_code_initial(void)
{
diff --git a/tools/testing/selftests/kvm/s390/shared_zeropage_test.c b/tools/testing/selftests/kvm/s390/shared_zeropage_test.c
index bba0d9a6dcc8..a9e5a01200b8 100644
--- a/tools/testing/selftests/kvm/s390/shared_zeropage_test.c
+++ b/tools/testing/selftests/kvm/s390/shared_zeropage_test.c
@@ -13,7 +13,7 @@
#include "kselftest.h"
#include "ucall_common.h"
-static void set_storage_key(void *addr, uint8_t skey)
+static void set_storage_key(void *addr, u8 skey)
{
asm volatile("sske %0,%1" : : "d" (skey), "a" (addr));
}
diff --git a/tools/testing/selftests/kvm/s390/tprot.c b/tools/testing/selftests/kvm/s390/tprot.c
index ffd5d139082a..4c5d524915d1 100644
--- a/tools/testing/selftests/kvm/s390/tprot.c
+++ b/tools/testing/selftests/kvm/s390/tprot.c
@@ -14,12 +14,12 @@
#define CR0_FETCH_PROTECTION_OVERRIDE (1UL << (63 - 38))
#define CR0_STORAGE_PROTECTION_OVERRIDE (1UL << (63 - 39))
-static __aligned(PAGE_SIZE) uint8_t pages[2][PAGE_SIZE];
-static uint8_t *const page_store_prot = pages[0];
-static uint8_t *const page_fetch_prot = pages[1];
+static __aligned(PAGE_SIZE) u8 pages[2][PAGE_SIZE];
+static u8 *const page_store_prot = pages[0];
+static u8 *const page_fetch_prot = pages[1];
/* Nonzero return value indicates that address not mapped */
-static int set_storage_key(void *addr, uint8_t key)
+static int set_storage_key(void *addr, u8 key)
{
int not_mapped = 0;
@@ -44,7 +44,7 @@ enum permission {
TRANSL_UNAVAIL = 3,
};
-static enum permission test_protection(void *addr, uint8_t key)
+static enum permission test_protection(void *addr, u8 key)
{
u64 mask;
@@ -72,7 +72,7 @@ enum stage {
struct test {
enum stage stage;
void *addr;
- uint8_t key;
+ u8 key;
enum permission expected;
} tests[] = {
/*
diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index 730f94cb1e86..58855e5e0b29 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -564,7 +564,7 @@ static void guest_code_mmio_during_vectoring(void)
set_idt(&idt_desc);
/* Generate a #GP by dereferencing a non-canonical address */
- *((uint8_t *)NONCANONICAL) = 0x1;
+ *((u8 *)NONCANONICAL) = 0x1;
GUEST_ASSERT(0);
}
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index 369d6290dcdc..5bd587098c90 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -216,8 +216,8 @@ struct sta_struct {
u32 sequence;
u32 flags;
u64 steal;
- uint8_t preempted;
- uint8_t pad[47];
+ u8 preempted;
+ u8 pad[47];
} __packed;
static void sta_set_shmem(gpa_t gpa, unsigned long flags)
diff --git a/tools/testing/selftests/kvm/x86/fix_hypercall_test.c b/tools/testing/selftests/kvm/x86/fix_hypercall_test.c
index a2c8202cb80e..5ab8bd042397 100644
--- a/tools/testing/selftests/kvm/x86/fix_hypercall_test.c
+++ b/tools/testing/selftests/kvm/x86/fix_hypercall_test.c
@@ -26,11 +26,11 @@ static void guest_ud_handler(struct ex_regs *regs)
regs->rip += HYPERCALL_INSN_SIZE;
}
-static const uint8_t vmx_vmcall[HYPERCALL_INSN_SIZE] = { 0x0f, 0x01, 0xc1 };
-static const uint8_t svm_vmmcall[HYPERCALL_INSN_SIZE] = { 0x0f, 0x01, 0xd9 };
+static const u8 vmx_vmcall[HYPERCALL_INSN_SIZE] = { 0x0f, 0x01, 0xc1 };
+static const u8 svm_vmmcall[HYPERCALL_INSN_SIZE] = { 0x0f, 0x01, 0xd9 };
-extern uint8_t hypercall_insn[HYPERCALL_INSN_SIZE];
-static u64 do_sched_yield(uint8_t apic_id)
+extern u8 hypercall_insn[HYPERCALL_INSN_SIZE];
+static u64 do_sched_yield(u8 apic_id)
{
u64 ret;
@@ -45,8 +45,8 @@ static u64 do_sched_yield(uint8_t apic_id)
static void guest_main(void)
{
- const uint8_t *native_hypercall_insn;
- const uint8_t *other_hypercall_insn;
+ const u8 *native_hypercall_insn;
+ const u8 *other_hypercall_insn;
u64 ret;
if (host_cpu_is_intel) {
diff --git a/tools/testing/selftests/kvm/x86/flds_emulation.h b/tools/testing/selftests/kvm/x86/flds_emulation.h
index c7e4f08765fb..fd6b6c67199a 100644
--- a/tools/testing/selftests/kvm/x86/flds_emulation.h
+++ b/tools/testing/selftests/kvm/x86/flds_emulation.h
@@ -21,7 +21,7 @@ static inline void handle_flds_emulation_failure_exit(struct kvm_vcpu *vcpu)
{
struct kvm_run *run = vcpu->run;
struct kvm_regs regs;
- uint8_t *insn_bytes;
+ u8 *insn_bytes;
u64 flags;
TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_INTERNAL_ERROR);
diff --git a/tools/testing/selftests/kvm/x86/hyperv_features.c b/tools/testing/selftests/kvm/x86/hyperv_features.c
index 31e568150c98..47759619a879 100644
--- a/tools/testing/selftests/kvm/x86/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86/hyperv_features.c
@@ -41,7 +41,7 @@ static bool is_write_only_msr(u32 msr)
static void guest_msr(struct msr_data *msr)
{
- uint8_t vector = 0;
+ u8 vector = 0;
u64 msr_val = 0;
GUEST_ASSERT(msr->idx);
@@ -85,7 +85,7 @@ static void guest_msr(struct msr_data *msr)
static void guest_hcall(gpa_t pgs_gpa, struct hcall_data *hcall)
{
u64 res, input, output;
- uint8_t vector;
+ u8 vector;
GUEST_ASSERT_NE(hcall->control, 0);
diff --git a/tools/testing/selftests/kvm/x86/kvm_pv_test.c b/tools/testing/selftests/kvm/x86/kvm_pv_test.c
index babf0f95165a..8ed5fa635021 100644
--- a/tools/testing/selftests/kvm/x86/kvm_pv_test.c
+++ b/tools/testing/selftests/kvm/x86/kvm_pv_test.c
@@ -41,7 +41,7 @@ static struct msr_data msrs_to_test[] = {
static void test_msr(struct msr_data *msr)
{
u64 ignored;
- uint8_t vector;
+ u8 vector;
PR_MSR(msr);
diff --git a/tools/testing/selftests/kvm/x86/nested_emulation_test.c b/tools/testing/selftests/kvm/x86/nested_emulation_test.c
index 42fd24567e26..fb7dcbe53ac7 100644
--- a/tools/testing/selftests/kvm/x86/nested_emulation_test.c
+++ b/tools/testing/selftests/kvm/x86/nested_emulation_test.c
@@ -13,7 +13,7 @@ enum {
struct emulated_instruction {
const char name[32];
- uint8_t opcode[15];
+ u8 opcode[15];
u32 exit_reason[NR_VIRTUALIZATION_FLAVORS];
};
@@ -32,9 +32,9 @@ static struct emulated_instruction instructions[] = {
},
};
-static uint8_t kvm_fep[] = { 0x0f, 0x0b, 0x6b, 0x76, 0x6d }; /* ud2 ; .ascii "kvm" */
-static uint8_t l2_guest_code[sizeof(kvm_fep) + 15];
-static uint8_t *l2_instruction = &l2_guest_code[sizeof(kvm_fep)];
+static u8 kvm_fep[] = { 0x0f, 0x0b, 0x6b, 0x76, 0x6d }; /* ud2 ; .ascii "kvm" */
+static u8 l2_guest_code[sizeof(kvm_fep) + 15];
+static u8 *l2_instruction = &l2_guest_code[sizeof(kvm_fep)];
static u32 get_instruction_length(struct emulated_instruction *insn)
{
diff --git a/tools/testing/selftests/kvm/x86/platform_info_test.c b/tools/testing/selftests/kvm/x86/platform_info_test.c
index 86d1ab0db1e8..80bb07e6531c 100644
--- a/tools/testing/selftests/kvm/x86/platform_info_test.c
+++ b/tools/testing/selftests/kvm/x86/platform_info_test.c
@@ -24,7 +24,7 @@
static void guest_code(void)
{
u64 msr_platform_info;
- uint8_t vector;
+ u8 vector;
GUEST_SYNC(true);
msr_platform_info = rdmsr(MSR_PLATFORM_INFO);
diff --git a/tools/testing/selftests/kvm/x86/pmu_counters_test.c b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
index 16a2093b14eb..f7b1c15be748 100644
--- a/tools/testing/selftests/kvm/x86/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86/pmu_counters_test.c
@@ -32,7 +32,7 @@
/* Track which architectural events are supported by hardware. */
static u32 hardware_pmu_arch_events;
-static uint8_t kvm_pmu_version;
+static u8 kvm_pmu_version;
static bool kvm_has_perf_caps;
#define X86_PMU_FEATURE_NULL \
@@ -57,7 +57,7 @@ struct kvm_intel_pmu_event {
* kvm_x86_pmu_feature use syntax that's only valid in function scope, and the
* compiler often thinks the feature definitions aren't compile-time constants.
*/
-static struct kvm_intel_pmu_event intel_event_to_feature(uint8_t idx)
+static struct kvm_intel_pmu_event intel_event_to_feature(u8 idx)
{
const struct kvm_intel_pmu_event __intel_event_to_feature[] = {
[INTEL_ARCH_CPU_CYCLES_INDEX] = { X86_PMU_FEATURE_CPU_CYCLES, X86_PMU_FEATURE_CPU_CYCLES_FIXED },
@@ -84,7 +84,7 @@ static struct kvm_intel_pmu_event intel_event_to_feature(uint8_t idx)
static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
void *guest_code,
- uint8_t pmu_version,
+ u8 pmu_version,
u64 perf_capabilities)
{
struct kvm_vm *vm;
@@ -127,7 +127,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu)
} while (uc.cmd != UCALL_DONE);
}
-static uint8_t guest_get_pmu_version(void)
+static u8 guest_get_pmu_version(void)
{
/*
* Return the effective PMU version, i.e. the minimum between what KVM
@@ -136,7 +136,7 @@ static uint8_t guest_get_pmu_version(void)
* supported by KVM to verify KVM doesn't freak out and do something
* bizarre with an architecturally valid, but unsupported, version.
*/
- return min_t(uint8_t, kvm_pmu_version, this_cpu_property(X86_PROPERTY_PMU_VERSION));
+ return min_t(u8, kvm_pmu_version, this_cpu_property(X86_PROPERTY_PMU_VERSION));
}
/*
@@ -148,7 +148,7 @@ static uint8_t guest_get_pmu_version(void)
* Sanity check that in all cases, the event doesn't count when it's disabled,
* and that KVM correctly emulates the write of an arbitrary value.
*/
-static void guest_assert_event_count(uint8_t idx, u32 pmc, u32 pmc_msr)
+static void guest_assert_event_count(u8 idx, u32 pmc, u32 pmc_msr)
{
u64 count;
@@ -237,7 +237,7 @@ do { \
guest_assert_event_count(_idx, _pmc, _pmc_msr); \
} while (0)
-static void __guest_test_arch_event(uint8_t idx, u32 pmc, u32 pmc_msr,
+static void __guest_test_arch_event(u8 idx, u32 pmc, u32 pmc_msr,
u32 ctrl_msr, u64 ctrl_msr_value)
{
GUEST_TEST_EVENT(idx, pmc, pmc_msr, ctrl_msr, ctrl_msr_value, "");
@@ -246,7 +246,7 @@ static void __guest_test_arch_event(uint8_t idx, u32 pmc, u32 pmc_msr,
GUEST_TEST_EVENT(idx, pmc, pmc_msr, ctrl_msr, ctrl_msr_value, KVM_FEP);
}
-static void guest_test_arch_event(uint8_t idx)
+static void guest_test_arch_event(u8 idx)
{
u32 nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
u32 pmu_version = guest_get_pmu_version();
@@ -302,7 +302,7 @@ static void guest_test_arch_event(uint8_t idx)
static void guest_test_arch_events(void)
{
- uint8_t i;
+ u8 i;
for (i = 0; i < NR_INTEL_ARCH_EVENTS; i++)
guest_test_arch_event(i);
@@ -310,8 +310,8 @@ static void guest_test_arch_events(void)
GUEST_DONE();
}
-static void test_arch_events(uint8_t pmu_version, u64 perf_capabilities,
- uint8_t length, uint8_t unavailable_mask)
+static void test_arch_events(u8 pmu_version, u64 perf_capabilities,
+ u8 length, u8 unavailable_mask)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
@@ -355,7 +355,7 @@ __GUEST_ASSERT(expect_gp ? vector == GP_VECTOR : !vector, \
static void guest_test_rdpmc(u32 rdpmc_idx, bool expect_success,
u64 expected_val)
{
- uint8_t vector;
+ u8 vector;
u64 val;
vector = rdpmc_safe(rdpmc_idx, &val);
@@ -372,11 +372,11 @@ static void guest_test_rdpmc(u32 rdpmc_idx, bool expect_success,
GUEST_ASSERT_PMC_VALUE(RDPMC, rdpmc_idx, val, expected_val);
}
-static void guest_rd_wr_counters(u32 base_msr, uint8_t nr_possible_counters,
- uint8_t nr_counters, u32 or_mask)
+static void guest_rd_wr_counters(u32 base_msr, u8 nr_possible_counters,
+ u8 nr_counters, u32 or_mask)
{
const bool pmu_has_fast_mode = !guest_get_pmu_version();
- uint8_t i;
+ u8 i;
for (i = 0; i < nr_possible_counters; i++) {
/*
@@ -401,7 +401,7 @@ static void guest_rd_wr_counters(u32 base_msr, uint8_t nr_possible_counters,
const bool expect_gp = !expect_success && msr != MSR_P6_PERFCTR0 &&
msr != MSR_P6_PERFCTR1;
u32 rdpmc_idx;
- uint8_t vector;
+ u8 vector;
u64 val;
vector = wrmsr_safe(msr, test_val);
@@ -440,8 +440,8 @@ static void guest_rd_wr_counters(u32 base_msr, uint8_t nr_possible_counters,
static void guest_test_gp_counters(void)
{
- uint8_t pmu_version = guest_get_pmu_version();
- uint8_t nr_gp_counters = 0;
+ u8 pmu_version = guest_get_pmu_version();
+ u8 nr_gp_counters = 0;
u32 base_msr;
if (pmu_version)
@@ -474,8 +474,8 @@ static void guest_test_gp_counters(void)
GUEST_DONE();
}
-static void test_gp_counters(uint8_t pmu_version, u64 perf_capabilities,
- uint8_t nr_gp_counters)
+static void test_gp_counters(u8 pmu_version, u64 perf_capabilities,
+ u8 nr_gp_counters)
{
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
@@ -494,8 +494,8 @@ static void test_gp_counters(uint8_t pmu_version, u64 perf_capabilities,
static void guest_test_fixed_counters(void)
{
u64 supported_bitmask = 0;
- uint8_t nr_fixed_counters = 0;
- uint8_t i;
+ u8 nr_fixed_counters = 0;
+ u8 i;
/* Fixed counters require Architectural vPMU Version 2+. */
if (guest_get_pmu_version() >= 2)
@@ -512,7 +512,7 @@ static void guest_test_fixed_counters(void)
nr_fixed_counters, supported_bitmask);
for (i = 0; i < MAX_NR_FIXED_COUNTERS; i++) {
- uint8_t vector;
+ u8 vector;
u64 val;
if (i >= nr_fixed_counters && !(supported_bitmask & BIT_ULL(i))) {
@@ -540,8 +540,8 @@ static void guest_test_fixed_counters(void)
GUEST_DONE();
}
-static void test_fixed_counters(uint8_t pmu_version, u64 perf_capabilities,
- uint8_t nr_fixed_counters,
+static void test_fixed_counters(u8 pmu_version, u64 perf_capabilities,
+ u8 nr_fixed_counters,
u32 supported_bitmask)
{
struct kvm_vcpu *vcpu;
@@ -562,11 +562,11 @@ static void test_fixed_counters(uint8_t pmu_version, u64 perf_capabilities,
static void test_intel_counters(void)
{
- uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
- uint8_t nr_gp_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
- uint8_t pmu_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION);
+ u8 nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+ u8 nr_gp_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
+ u8 pmu_version = kvm_cpu_property(X86_PROPERTY_PMU_VERSION);
unsigned int i;
- uint8_t v, j;
+ u8 v, j;
u32 k;
const u64 perf_caps[] = {
@@ -579,7 +579,7 @@ static void test_intel_counters(void)
* Intel, i.e. is the last version that is guaranteed to be backwards
* compatible with KVM's existing behavior.
*/
- uint8_t max_pmu_version = max_t(typeof(pmu_version), pmu_version, 5);
+ u8 max_pmu_version = max_t(typeof(pmu_version), pmu_version, 5);
/*
* Detect the existence of events that aren't supported by selftests.
diff --git a/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
index d140fd6b951e..df8d3f2c05f8 100644
--- a/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86/pmu_event_filter_test.c
@@ -682,7 +682,7 @@ static int set_pmu_single_event_filter(struct kvm_vcpu *vcpu, u64 event,
static void test_filter_ioctl(struct kvm_vcpu *vcpu)
{
- uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+ u8 nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
struct __kvm_pmu_event_filter f;
u64 e = ~0ul;
int r;
@@ -726,7 +726,7 @@ static void test_filter_ioctl(struct kvm_vcpu *vcpu)
TEST_ASSERT(!r, "Masking non-existent fixed counters should be allowed");
}
-static void intel_run_fixed_counter_guest_code(uint8_t idx)
+static void intel_run_fixed_counter_guest_code(u8 idx)
{
for (;;) {
wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
@@ -767,8 +767,8 @@ static u64 test_set_gp_and_fixed_event_filter(struct kvm_vcpu *vcpu,
return run_vcpu_to_sync(vcpu);
}
-static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, uint8_t idx,
- uint8_t nr_fixed_counters)
+static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, u8 idx,
+ u8 nr_fixed_counters)
{
unsigned int i;
u32 bitmap;
@@ -812,10 +812,10 @@ static void __test_fixed_counter_bitmap(struct kvm_vcpu *vcpu, uint8_t idx,
static void test_fixed_counter_bitmap(void)
{
- uint8_t nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
+ u8 nr_fixed_counters = kvm_cpu_property(X86_PROPERTY_PMU_NR_FIXED_COUNTERS);
struct kvm_vm *vm;
struct kvm_vcpu *vcpu;
- uint8_t idx;
+ u8 idx;
/*
* Check that pmu_event_filter works as expected when it's applied to
diff --git a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
index 73f540894f06..c412c6ae8356 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
@@ -29,7 +29,7 @@
/* Horrific macro so that the line info is captured accurately :-( */
#define memcmp_g(gpa, pattern, size) \
do { \
- uint8_t *mem = (uint8_t *)gpa; \
+ u8 *mem = (u8 *)gpa; \
size_t i; \
\
for (i = 0; i < size; i++) \
@@ -38,7 +38,7 @@ do { \
pattern, i, gpa + i, mem[i]); \
} while (0)
-static void memcmp_h(uint8_t *mem, u64 gpa, uint8_t pattern, size_t size)
+static void memcmp_h(u8 *mem, u64 gpa, u8 pattern, size_t size)
{
size_t i;
@@ -71,12 +71,12 @@ enum ucall_syncs {
};
static void guest_sync_shared(u64 gpa, u64 size,
- uint8_t current_pattern, uint8_t new_pattern)
+ u8 current_pattern, u8 new_pattern)
{
GUEST_SYNC5(SYNC_SHARED, gpa, size, current_pattern, new_pattern);
}
-static void guest_sync_private(u64 gpa, u64 size, uint8_t pattern)
+static void guest_sync_private(u64 gpa, u64 size, u8 pattern)
{
GUEST_SYNC4(SYNC_PRIVATE, gpa, size, pattern);
}
@@ -121,8 +121,8 @@ struct {
static void guest_test_explicit_conversion(u64 base_gpa, bool do_fallocate)
{
- const uint8_t def_p = 0xaa;
- const uint8_t init_p = 0xcc;
+ const u8 def_p = 0xaa;
+ const u8 init_p = 0xcc;
u64 j;
int i;
@@ -136,10 +136,10 @@ static void guest_test_explicit_conversion(u64 base_gpa, bool do_fallocate)
for (i = 0; i < ARRAY_SIZE(test_ranges); i++) {
u64 gpa = base_gpa + test_ranges[i].offset;
u64 size = test_ranges[i].size;
- uint8_t p1 = 0x11;
- uint8_t p2 = 0x22;
- uint8_t p3 = 0x33;
- uint8_t p4 = 0x44;
+ u8 p1 = 0x11;
+ u8 p2 = 0x22;
+ u8 p3 = 0x33;
+ u8 p4 = 0x44;
/*
* Set the test region to pattern one to differentiate it from
@@ -229,7 +229,7 @@ static void guest_punch_hole(u64 gpa, u64 size)
*/
static void guest_test_punch_hole(u64 base_gpa, bool precise)
{
- const uint8_t init_p = 0xcc;
+ const u8 init_p = 0xcc;
int i;
/*
@@ -347,7 +347,7 @@ static void *__test_mem_conversions(void *__vcpu)
for (i = 0; i < size; i += vm->page_size) {
size_t nr_bytes = min_t(size_t, vm->page_size, size - i);
- uint8_t *hva = addr_gpa2hva(vm, gpa + i);
+ u8 *hva = addr_gpa2hva(vm, gpa + i);
/* In all cases, the host should observe the shared data. */
memcmp_h(hva, gpa + i, uc.args[3], nr_bytes);
diff --git a/tools/testing/selftests/kvm/x86/smm_test.c b/tools/testing/selftests/kvm/x86/smm_test.c
index 32f2cdea4c4f..dd9d7944145e 100644
--- a/tools/testing/selftests/kvm/x86/smm_test.c
+++ b/tools/testing/selftests/kvm/x86/smm_test.c
@@ -36,7 +36,7 @@
* independent subset of asm here.
* SMI handler always report back fixed stage SMRAM_STAGE.
*/
-uint8_t smi_handler[] = {
+u8 smi_handler[] = {
0xb0, SMRAM_STAGE, /* mov $SMRAM_STAGE, %al */
0xe4, SYNC_PORT, /* in $SYNC_PORT, %al */
0x0f, 0xaa, /* rsm */
diff --git a/tools/testing/selftests/kvm/x86/state_test.c b/tools/testing/selftests/kvm/x86/state_test.c
index 151eead91baf..b4b3678c719a 100644
--- a/tools/testing/selftests/kvm/x86/state_test.c
+++ b/tools/testing/selftests/kvm/x86/state_test.c
@@ -141,7 +141,7 @@ static void __attribute__((__flatten__)) guest_code(void *arg)
if (this_cpu_has(X86_FEATURE_XSAVE)) {
u64 supported_xcr0 = this_cpu_supported_xcr0();
- uint8_t buffer[4096];
+ u8 buffer[4096];
memset(buffer, 0xcc, sizeof(buffer));
@@ -296,7 +296,7 @@ int main(int argc, char *argv[])
* supported features, even if something goes awry in saving
* the original snapshot.
*/
- xstate_bv = (void *)&((uint8_t *)state->xsave->region)[512];
+ xstate_bv = (void *)&((u8 *)state->xsave->region)[512];
saved_xstate_bv = *xstate_bv;
vcpuN = __vm_vcpu_add(vm, vcpu->id + 1);
diff --git a/tools/testing/selftests/kvm/x86/userspace_io_test.c b/tools/testing/selftests/kvm/x86/userspace_io_test.c
index 9481cbcf284f..e0272ba73502 100644
--- a/tools/testing/selftests/kvm/x86/userspace_io_test.c
+++ b/tools/testing/selftests/kvm/x86/userspace_io_test.c
@@ -10,7 +10,7 @@
#include "kvm_util.h"
#include "processor.h"
-static void guest_ins_port80(uint8_t *buffer, unsigned int count)
+static void guest_ins_port80(u8 *buffer, unsigned int count)
{
unsigned long end;
@@ -26,7 +26,7 @@ static void guest_ins_port80(uint8_t *buffer, unsigned int count)
static void guest_code(void)
{
- uint8_t buffer[8192];
+ u8 buffer[8192];
int i;
/*
diff --git a/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c b/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c
index e87e2e8d9c38..920f775fe5eb 100644
--- a/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c
+++ b/tools/testing/selftests/kvm/x86/userspace_msr_exit_test.c
@@ -23,21 +23,21 @@ struct kvm_msr_filter filter_allow = {
.nmsrs = 1,
/* Test an MSR the kernel knows about. */
.base = MSR_IA32_XSS,
- .bitmap = (uint8_t*)&deny_bits,
+ .bitmap = (u8 *)&deny_bits,
}, {
.flags = KVM_MSR_FILTER_READ |
KVM_MSR_FILTER_WRITE,
.nmsrs = 1,
/* Test an MSR the kernel doesn't know about. */
.base = MSR_IA32_FLUSH_CMD,
- .bitmap = (uint8_t*)&deny_bits,
+ .bitmap = (u8 *)&deny_bits,
}, {
.flags = KVM_MSR_FILTER_READ |
KVM_MSR_FILTER_WRITE,
.nmsrs = 1,
/* Test a fabricated MSR that no one knows about. */
.base = MSR_NON_EXISTENT,
- .bitmap = (uint8_t*)&deny_bits,
+ .bitmap = (u8 *)&deny_bits,
},
},
};
@@ -49,7 +49,7 @@ struct kvm_msr_filter filter_fs = {
.flags = KVM_MSR_FILTER_READ,
.nmsrs = 1,
.base = MSR_FS_BASE,
- .bitmap = (uint8_t*)&deny_bits,
+ .bitmap = (u8 *)&deny_bits,
},
},
};
@@ -61,7 +61,7 @@ struct kvm_msr_filter filter_gs = {
.flags = KVM_MSR_FILTER_READ,
.nmsrs = 1,
.base = MSR_GS_BASE,
- .bitmap = (uint8_t*)&deny_bits,
+ .bitmap = (u8 *)&deny_bits,
},
},
};
@@ -77,7 +77,7 @@ static u8 bitmap_c0000000[KVM_MSR_FILTER_MAX_BITMAP_SIZE];
static u8 bitmap_c0000000_read[KVM_MSR_FILTER_MAX_BITMAP_SIZE];
static u8 bitmap_deadbeef[1] = { 0x1 };
-static void deny_msr(uint8_t *bitmap, u32 msr)
+static void deny_msr(u8 *bitmap, u32 msr)
{
u32 idx = msr & (KVM_MSR_FILTER_MAX_BITMAP_SIZE - 1);
@@ -724,7 +724,7 @@ static void run_msr_filter_flag_test(struct kvm_vm *vm)
.flags = KVM_MSR_FILTER_READ,
.nmsrs = 1,
.base = 0,
- .bitmap = (uint8_t *)&deny_bits,
+ .bitmap = (u8 *)&deny_bits,
},
},
};
diff --git a/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c b/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c
index 0563bd20621b..d5f1eee6fa2e 100644
--- a/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c
+++ b/tools/testing/selftests/kvm/x86/vmx_pmu_caps_test.c
@@ -53,7 +53,7 @@ static const union perf_capabilities format_caps = {
static void guest_test_perf_capabilities_gp(u64 val)
{
- uint8_t vector = wrmsr_safe(MSR_IA32_PERF_CAPABILITIES, val);
+ u8 vector = wrmsr_safe(MSR_IA32_PERF_CAPABILITIES, val);
__GUEST_ASSERT(vector == GP_VECTOR,
"Expected #GP for value '0x%lx', got vector '0x%x'",
diff --git a/tools/testing/selftests/kvm/x86/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86/xen_shinfo_test.c
index 974a6c5d3080..11ff5ee0782e 100644
--- a/tools/testing/selftests/kvm/x86/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86/xen_shinfo_test.c
@@ -133,8 +133,8 @@ struct arch_vcpu_info {
};
struct vcpu_info {
- uint8_t evtchn_upcall_pending;
- uint8_t evtchn_upcall_mask;
+ u8 evtchn_upcall_pending;
+ u8 evtchn_upcall_mask;
unsigned long evtchn_pending_sel;
struct arch_vcpu_info arch;
struct pvclock_vcpu_time_info time;
--
2.49.0.906.g1f30a19c02-goog
* Re: [PATCH 00/10] KVM: selftests: Convert to kernel-style types
2025-05-01 18:32 [PATCH 00/10] KVM: selftests: Convert to kernel-style types David Matlack
` (9 preceding siblings ...)
2025-05-01 18:33 ` [PATCH 10/10] KVM: selftests: Use u8 instead of uint8_t David Matlack
@ 2025-05-01 21:03 ` Sean Christopherson
2025-10-17 22:38 ` David Matlack
2025-05-02 9:11 ` Andrew Jones
11 siblings, 1 reply; 16+ messages in thread
From: Sean Christopherson @ 2025-05-01 21:03 UTC (permalink / raw)
To: David Matlack
Cc: Paolo Bonzini, Marc Zyngier, Oliver Upton, Joey Gouly,
Suzuki K Poulose, Zenghui Yu, Anup Patel, Atish Patra,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
David Hildenbrand, Andrew Jones, Isaku Yamahata, Reinette Chatre,
Eric Auger, James Houghton, Colin Ian King, kvm, linux-arm-kernel,
kvmarm, kvm-riscv, linux-riscv
On Thu, May 01, 2025, David Matlack wrote:
> This series renames types across all KVM selftests to more align with
> types used in the kernel:
>
> vm_vaddr_t -> gva_t
> vm_paddr_t -> gpa_t
10000% on these.
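(For illustration, a sketch of what the guest-address renames look
like at a call site, assuming the usual kvm_util.h helpers -- not
lines lifted verbatim from the series:
  /* Before: selftests-only typedefs for guest addresses. */
  vm_vaddr_t gva = vm_vaddr_alloc(vm, size, KVM_UTIL_MIN_VADDR);
  vm_paddr_t gpa = addr_gva2gpa(vm, gva);
  /* After: kernel-style names, same underlying 64-bit values. */
  gva_t gva = vm_vaddr_alloc(vm, size, KVM_UTIL_MIN_VADDR);
  gpa_t gpa = addr_gva2gpa(vm, gva);
)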
> uint64_t -> u64
> uint32_t -> u32
> uint16_t -> u16
> uint8_t -> u8
>
> int64_t -> s64
> int32_t -> s32
> int16_t -> s16
> int8_t -> s8
I'm definitely in favor of these renames. I thought I was the only one that
tripped over the uintNN_t stuff; at this point, I've probably lost hours of my
life trying to type those things out.
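(For context: in userspace these short names are just the <stdint.h>
types under kernel spellings; the selftests presumably pick them up
from the tools/include headers rather than defining them locally. A
minimal sketch of the mapping:
  #include <stdint.h>
  /* Roughly what tools/include/linux/types.h provides. */
  typedef uint8_t  u8;
  typedef uint16_t u16;
  typedef uint32_t u32;
  typedef uint64_t u64;
  typedef int8_t  s8;
  typedef int16_t s16;
  typedef int32_t s32;
  typedef int64_t s64;
)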
* Re: [PATCH 00/10] KVM: selftests: Convert to kernel-style types
2025-05-01 18:32 [PATCH 00/10] KVM: selftests: Convert to kernel-style types David Matlack
` (10 preceding siblings ...)
2025-05-01 21:03 ` [PATCH 00/10] KVM: selftests: Convert to kernel-style types Sean Christopherson
@ 2025-05-02 9:11 ` Andrew Jones
11 siblings, 0 replies; 16+ messages in thread
From: Andrew Jones @ 2025-05-02 9:11 UTC (permalink / raw)
To: David Matlack
Cc: Paolo Bonzini, Marc Zyngier, Oliver Upton, Joey Gouly,
Suzuki K Poulose, Zenghui Yu, Anup Patel, Atish Patra,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
David Hildenbrand, Sean Christopherson, Isaku Yamahata,
Reinette Chatre, Eric Auger, James Houghton, Colin Ian King, kvm,
linux-arm-kernel, kvmarm, kvm-riscv, linux-riscv
On Thu, May 01, 2025 at 11:32:54AM -0700, David Matlack wrote:
> This series renames types across all KVM selftests to more align with
> types used in the kernel:
>
> vm_vaddr_t -> gva_t
> vm_paddr_t -> gpa_t
>
> uint64_t -> u64
> uint32_t -> u32
> uint16_t -> u16
> uint8_t -> u8
>
> int64_t -> s64
> int32_t -> s32
> int16_t -> s16
> int8_t -> s8
>
> The goal of this series is to make the KVM selftests code more concise
> (the new type names are shorter) and more similar to the kernel, since
> selftests are developed by kernel developers.
>
> I know broad changes like this series can be difficult to merge and also
> muddies up the git-blame history, so if there isn't appetite for this we
> can drop it. But if there is I would be happy to help with rebasing and
> resolving merge conflicts to get it in.
I don't have a strong preference on this. I'm used to the uint*_t stuff
since I work on QEMU frequently, but the u* stuff is also fine by me.
I guess the biggest downside is the git-blame muddying, but [knock on
wood] we don't typically have a lot of bisecting / bug fixing to do.
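(One partial mitigation, sketched on the assumption that the renames
land as the mechanical 10-patch series above: record those commits in
a blame-ignore file so blame skips straight over them.
  # Hypothetical: collect the rename commits (subjects all start
  # with "KVM: selftests: Use ...") into an ignore file...
  git log --format=%H --grep='KVM: selftests: Use ' \
      -- tools/testing/selftests/kvm > .git-blame-ignore-revs
  # ...then have git blame skip them by default (git 2.23+).
  git config blame.ignoreRevsFile .git-blame-ignore-revs
)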
Thanks,
drew
> [...]
* Re: [PATCH 00/10] KVM: selftests: Convert to kernel-style types
2025-05-01 21:03 ` [PATCH 00/10] KVM: selftests: Convert to kernel-style types Sean Christopherson
@ 2025-10-17 22:38 ` David Matlack
2025-11-14 0:52 ` Sean Christopherson
0 siblings, 1 reply; 16+ messages in thread
From: David Matlack @ 2025-10-17 22:38 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Marc Zyngier, Oliver Upton, Joey Gouly,
Suzuki K Poulose, Zenghui Yu, Anup Patel, Atish Patra,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
David Hildenbrand, Andrew Jones, Isaku Yamahata, Reinette Chatre,
Eric Auger, James Houghton, Colin Ian King, kvm, linux-arm-kernel,
kvmarm, kvm-riscv, linux-riscv
On Thu, May 1, 2025 at 2:03 PM Sean Christopherson <seanjc@google.com> wrote:
> On Thu, May 01, 2025, David Matlack wrote:
> > This series renames types across all KVM selftests to more align with
> > types used in the kernel:
> >
> > vm_vaddr_t -> gva_t
> > vm_paddr_t -> gpa_t
>
> 10000% on these.
>
> > uint64_t -> u64
> > uint32_t -> u32
> > uint16_t -> u16
> > uint8_t -> u8
> >
> > int64_t -> s64
> > int32_t -> s32
> > int16_t -> s16
> > int8_t -> s8
>
> I'm definitely in favor of these renames. I thought I was the only one that
> tripped over the uintNN_t stuff; at this point, I've probably lost hours of my
> life trying to type those things out.
What should the next step be here? I'd be happy to spin a new version
whenever, on whatever base commit you prefer.
* Re: [PATCH 00/10] KVM: selftests: Convert to kernel-style types
2025-10-17 22:38 ` David Matlack
@ 2025-11-14 0:52 ` Sean Christopherson
2025-12-04 0:04 ` David Matlack
0 siblings, 1 reply; 16+ messages in thread
From: Sean Christopherson @ 2025-11-14 0:52 UTC (permalink / raw)
To: David Matlack
Cc: Paolo Bonzini, Marc Zyngier, Oliver Upton, Joey Gouly,
Suzuki K Poulose, Zenghui Yu, Anup Patel, Atish Patra,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
David Hildenbrand, Andrew Jones, Isaku Yamahata, Reinette Chatre,
Eric Auger, James Houghton, Colin Ian King, kvm, linux-arm-kernel,
kvmarm, kvm-riscv, linux-riscv
On Fri, Oct 17, 2025, David Matlack wrote:
> On Thu, May 1, 2025 at 2:03 PM Sean Christopherson <seanjc@google.com> wrote:
> > On Thu, May 01, 2025, David Matlack wrote:
> > > This series renames types across all KVM selftests to more align with
> > > types used in the kernel:
> > >
> > > vm_vaddr_t -> gva_t
> > > vm_paddr_t -> gpa_t
> >
> > 10000% on these.
> >
> > > uint64_t -> u64
> > > uint32_t -> u32
> > > uint16_t -> u16
> > > uint8_t -> u8
> > >
> > > int64_t -> s64
> > > int32_t -> s32
> > > int16_t -> s16
> > > int8_t -> s8
> >
> > I'm definitely in favor of these renames. I thought I was the only one that
> > tripped over the uintNN_t stuff; at this point, I've probably lost hours of my
> > life trying to type those things out.
>
> What should the next step be here? I'd be happy to spin a new version
> whenever, on whatever base commit you prefer.
Sorry for the slow reply, I've had this window sitting open for something like
two weeks.
My slowness is largely because I'm not sure how to land/approach this. I'm 100%
in favor of the renames, it's the timing and coordination I'm unsure of.
In hindsight, it probably would have been best to squeeze it into 6.18, so at least
the most recent LTS wouldn't generate conflicts all over the place. The next
best option would probably be to spin a new version, bribe Paolo to apply it at
the end of the next merge window, and tag the whole thing for stable@ (maybe
limited to 6.18+?) to minimize downstream pain.
* Re: [PATCH 00/10] KVM: selftests: Convert to kernel-style types
2025-11-14 0:52 ` Sean Christopherson
@ 2025-12-04 0:04 ` David Matlack
0 siblings, 0 replies; 16+ messages in thread
From: David Matlack @ 2025-12-04 0:04 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Marc Zyngier, Oliver Upton, Joey Gouly,
Suzuki K Poulose, Zenghui Yu, Anup Patel, Atish Patra,
Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
David Hildenbrand, Andrew Jones, Isaku Yamahata, Reinette Chatre,
Eric Auger, James Houghton, Colin Ian King, kvm, linux-arm-kernel,
kvmarm, kvm-riscv, linux-riscv
On Thu, Nov 13, 2025 at 4:52 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Fri, Oct 17, 2025, David Matlack wrote:
> > On Thu, May 1, 2025 at 2:03 PM Sean Christopherson <seanjc@google.com> wrote:
> > > On Thu, May 01, 2025, David Matlack wrote:
> > > > This series renames types across all KVM selftests to more align with
> > > > types used in the kernel:
> > > >
> > > > vm_vaddr_t -> gva_t
> > > > vm_paddr_t -> gpa_t
> > >
> > > 10000% on these.
> > >
> > > > uint64_t -> u64
> > > > uint32_t -> u32
> > > > uint16_t -> u16
> > > > uint8_t -> u8
> > > >
> > > > int64_t -> s64
> > > > int32_t -> s32
> > > > int16_t -> s16
> > > > int8_t -> s8
> > >
> > > I'm definitely in favor of these renames. I thought I was the only one that
> > > tripped over the uintNN_t stuff; at this point, I've probably lost hours of my
> > > life trying to type those things out.
> >
> > What should the next step be here? I'd be happy to spin a new version
> > whenever, on whatever base commit you prefer.
>
> Sorry for the slow reply, I've had this window sitting open for something like
> two weeks.
No worries, thanks for taking a look :)
>
> My slowness is largely because I'm not sure how to land/approach this. I'm 100%
> in favor of the renames, it's the timing and coordination I'm unsure of.
>
> In hindsight, it probably would have been best to squeeze it into 6.18, so at least
> the most recent LTS wouldn't generate conflicts all over the place. The next
> best option would probably be to spin a new version, bribe Paolo to apply it at
> the end of the next merge window, and tag the whole thing for stable@ (maybe
> limited to 6.18+?) to minimize downstream pain.
With LPC coming up I won't have cycles to post a new version before
the 6.19 merge window closes.
I'm tempted to say let's just wait for the next LTS release and merge
it in then. This is low priority, so I'm fine with waiting.