From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 20 Apr 2026 14:20:02 -0700
In-Reply-To: <20260420212004.3938325-1-seanjc@google.com>
Mime-Version: 1.0
References: <20260420212004.3938325-1-seanjc@google.com>
X-Mailer: git-send-email 2.54.0.rc1.555.g9c883467ad-goog
Message-ID: <20260420212004.3938325-18-seanjc@google.com>
Subject: [PATCH v3 17/19] KVM: selftests: Replace "u64 gpa" with "gpa_t" throughout
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
	Huacai Chen, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
	Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, loongarch@lists.linux.dev,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	linux-kernel@vger.kernel.org, David Matlack
Reply-To: Sean Christopherson
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Use gpa_t instead of u64 for obvious declarations of GPA variables.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 .../testing/selftests/kvm/guest_memfd_test.c  |  2 +-
 .../testing/selftests/kvm/include/kvm_util.h  | 26 +++++++++----------
 .../testing/selftests/kvm/include/memstress.h |  4 +--
 .../selftests/kvm/include/x86/processor.h     |  4 +--
 tools/testing/selftests/kvm/lib/kvm_util.c    | 14 +++++-----
 .../selftests/kvm/lib/s390/processor.c        |  2 +-
 .../kvm/memslot_modification_stress_test.c    |  2 +-
 .../testing/selftests/kvm/memslot_perf_test.c | 10 +++----
 tools/testing/selftests/kvm/mmu_stress_test.c |  4 +--
 .../selftests/kvm/pre_fault_memory_test.c     |  4 +--
 .../selftests/kvm/s390/ucontrol_test.c        |  2 +-
 .../selftests/kvm/set_memory_region_test.c    |  2 +-
 .../kvm/x86/private_mem_conversions_test.c    | 24 ++++++++---------
 .../x86/smaller_maxphyaddr_emulation_test.c   |  2 +-
 .../selftests/kvm/x86/svm_nested_vmcb12_gpa.c |  8 +++---
 15 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index 9cbd3ad7f44a..d6528c6f5e03 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -489,7 +489,7 @@ static void test_guest_memfd_guest(void)
	 * the guest's code, stack, and page tables, and low memory contains
	 * the PCI hole and other MMIO regions that need to be avoided.
	 */
-	const u64 gpa = SZ_4G;
+	const gpa_t gpa = SZ_4G;
	const int slot = 1;

	struct kvm_vcpu *vcpu;
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 0dcfad728edd..0d9f11be9806 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -114,7 +114,7 @@ struct kvm_vm {
	gpa_t ucall_mmio_addr;
	gva_t handlers;
	u32 dirty_ring_size;
-	u64 gpa_tag_mask;
+	gpa_t gpa_tag_mask;

	/*
	 * "mmu" is the guest's stage-1, with a short name because the vast
@@ -418,7 +418,7 @@ static inline void vm_enable_cap(struct kvm_vm *vm, u32 cap, u64 arg0)
	vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
 }

-static inline void vm_set_memory_attributes(struct kvm_vm *vm, u64 gpa,
+static inline void vm_set_memory_attributes(struct kvm_vm *vm, gpa_t gpa,
					    u64 size, u64 attributes)
 {
	struct kvm_memory_attributes attr = {
@@ -439,28 +439,28 @@ static inline void vm_set_memory_attributes(struct kvm_vm *vm, u64 gpa,
 }

-static inline void vm_mem_set_private(struct kvm_vm *vm, u64 gpa,
+static inline void vm_mem_set_private(struct kvm_vm *vm, gpa_t gpa,
				      u64 size)
 {
	vm_set_memory_attributes(vm, gpa, size, KVM_MEMORY_ATTRIBUTE_PRIVATE);
 }

-static inline void vm_mem_set_shared(struct kvm_vm *vm, u64 gpa,
+static inline void vm_mem_set_shared(struct kvm_vm *vm, gpa_t gpa,
				     u64 size)
 {
	vm_set_memory_attributes(vm, gpa, size, 0);
 }

-void vm_guest_mem_fallocate(struct kvm_vm *vm, u64 gpa, u64 size,
+void vm_guest_mem_fallocate(struct kvm_vm *vm, gpa_t gpa, u64 size,
			    bool punch_hole);

-static inline void vm_guest_mem_punch_hole(struct kvm_vm *vm, u64 gpa,
+static inline void vm_guest_mem_punch_hole(struct kvm_vm *vm, gpa_t gpa,
					   u64 size)
 {
	vm_guest_mem_fallocate(vm, gpa, size, true);
 }

-static inline void vm_guest_mem_allocate(struct kvm_vm *vm, u64 gpa,
+static inline void vm_guest_mem_allocate(struct kvm_vm *vm, gpa_t gpa,
					 u64 size)
 {
	vm_guest_mem_fallocate(vm, gpa, size, false);
@@ -685,21 +685,21 @@ static inline int vm_create_guest_memfd(struct kvm_vm *vm, u64 size,
 }

 void vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
-			       u64 gpa, u64 size, void *hva);
+			       gpa_t gpa, u64 size, void *hva);
 int __vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
-				u64 gpa, u64 size, void *hva);
+				gpa_t gpa, u64 size, void *hva);
 void vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
-				u64 gpa, u64 size, void *hva,
+				gpa_t gpa, u64 size, void *hva,
				u32 guest_memfd, u64 guest_memfd_offset);
 int __vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
-				 u64 gpa, u64 size, void *hva,
+				 gpa_t gpa, u64 size, void *hva,
				 u32 guest_memfd, u64 guest_memfd_offset);

 void vm_userspace_mem_region_add(struct kvm_vm *vm,
				 enum vm_mem_backing_src_type src_type,
-				 u64 gpa, u32 slot, u64 npages, u32 flags);
+				 gpa_t gpa, u32 slot, u64 npages, u32 flags);
 void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
-		u64 gpa, u32 slot, u64 npages, u32 flags,
+		gpa_t gpa, u32 slot, u64 npages, u32 flags,
		int guest_memfd_fd, u64 guest_memfd_offset);

 #ifndef vm_arch_has_protected_memory
diff --git a/tools/testing/selftests/kvm/include/memstress.h b/tools/testing/selftests/kvm/include/memstress.h
index abd0dca10283..0d1d6230cc05 100644
--- a/tools/testing/selftests/kvm/include/memstress.h
+++ b/tools/testing/selftests/kvm/include/memstress.h
@@ -20,7 +20,7 @@
 #define MEMSTRESS_MEM_SLOT_INDEX	1

 struct memstress_vcpu_args {
-	u64 gpa;
+	gpa_t gpa;
	gva_t gva;
	u64 pages;

@@ -32,7 +32,7 @@ struct memstress_vcpu_args {
 struct memstress_args {
	struct kvm_vm *vm;
	/* The starting address and size of the guest test region. */
-	u64 gpa;
+	gpa_t gpa;
	u64 size;
	u64 guest_page_size;
	u32 random_seed;
diff --git a/tools/testing/selftests/kvm/include/x86/processor.h b/tools/testing/selftests/kvm/include/x86/processor.h
index 15252e75aaf1..fc7efd722229 100644
--- a/tools/testing/selftests/kvm/include/x86/processor.h
+++ b/tools/testing/selftests/kvm/include/x86/processor.h
@@ -1400,12 +1400,12 @@ u64 kvm_hypercall(u64 nr, u64 a0, u64 a1, u64 a2, u64 a3);
 u64 __xen_hypercall(u64 nr, u64 a0, void *a1);
 void xen_hypercall(u64 nr, u64 a0, void *a1);

-static inline u64 __kvm_hypercall_map_gpa_range(u64 gpa, u64 size, u64 flags)
+static inline u64 __kvm_hypercall_map_gpa_range(gpa_t gpa, u64 size, u64 flags)
 {
	return kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> PAGE_SHIFT, flags, 0);
 }

-static inline void kvm_hypercall_map_gpa_range(u64 gpa, u64 size, u64 flags)
+static inline void kvm_hypercall_map_gpa_range(gpa_t gpa, u64 size, u64 flags)
 {
	u64 ret = __kvm_hypercall_map_gpa_range(gpa, size, flags);

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index e282f9abd4c7..905fa214099d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -919,7 +919,7 @@ static void vm_userspace_mem_region_hva_insert(struct rb_root *hva_tree,

 int __vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
-				u64 gpa, u64 size, void *hva)
+				gpa_t gpa, u64 size, void *hva)
 {
	struct kvm_userspace_memory_region region = {
		.slot = slot,
@@ -933,7 +933,7 @@ int __vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
 }

 void vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
-			       u64 gpa, u64 size, void *hva)
+			       gpa_t gpa, u64 size, void *hva)
 {
	int ret = __vm_set_user_memory_region(vm, slot, flags, gpa, size, hva);

@@ -946,7 +946,7 @@ void vm_set_user_memory_region(struct kvm_vm *vm, u32 slot, u32 flags,
	    "KVM selftests now require KVM_SET_USER_MEMORY_REGION2 (introduced in v6.8)")

 int __vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
-				 u64 gpa, u64 size, void *hva,
+				 gpa_t gpa, u64 size, void *hva,
				 u32 guest_memfd, u64 guest_memfd_offset)
 {
	struct kvm_userspace_memory_region2 region = {
@@ -965,7 +965,7 @@ int __vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
 }

 void vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
-				u64 gpa, u64 size, void *hva,
+				gpa_t gpa, u64 size, void *hva,
				u32 guest_memfd, u64 guest_memfd_offset)
 {
	int ret = __vm_set_user_memory_region2(vm, slot, flags, gpa, size, hva,
@@ -978,7 +978,7 @@ void vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,

 /* FIXME: This thing needs to be ripped apart and rewritten. */
 void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
-		u64 gpa, u32 slot, u64 npages, u32 flags,
+		gpa_t gpa, u32 slot, u64 npages, u32 flags,
		int guest_memfd, u64 guest_memfd_offset)
 {
	int ret;
@@ -1141,7 +1141,7 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,

 void vm_userspace_mem_region_add(struct kvm_vm *vm,
				 enum vm_mem_backing_src_type src_type,
-				 u64 gpa, u32 slot, u64 npages, u32 flags)
+				 gpa_t gpa, u32 slot, u64 npages, u32 flags)
 {
	vm_mem_add(vm, src_type, gpa, slot, npages, flags, -1, 0);
 }
@@ -1278,7 +1278,7 @@ void vm_guest_mem_fallocate(struct kvm_vm *vm, u64 base, u64 size,
	const int mode = FALLOC_FL_KEEP_SIZE | (punch_hole ? FALLOC_FL_PUNCH_HOLE : 0);
	struct userspace_mem_region *region;
	u64 end = base + size;
-	u64 gpa, len;
+	gpa_t gpa, len;
	off_t fd_offset;
	int ret;

diff --git a/tools/testing/selftests/kvm/lib/s390/processor.c b/tools/testing/selftests/kvm/lib/s390/processor.c
index 643e583c804c..77a7b6965812 100644
--- a/tools/testing/selftests/kvm/lib/s390/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390/processor.c
@@ -47,7 +47,7 @@ static u64 virt_alloc_region(struct kvm_vm *vm, int ri)
		| ((ri < 4 ? (PAGES_PER_REGION - 1) : 0) & REGION_ENTRY_LENGTH);
 }

-void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, u64 gpa)
+void virt_arch_pg_map(struct kvm_vm *vm, gva_t gva, gpa_t gpa)
 {
	int ri, idx;
	u64 *entry;
diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index 9d7c4afab961..9c7578a098c3 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -58,7 +58,7 @@ static void add_remove_memslot(struct kvm_vm *vm, useconds_t delay,
			       u64 nr_modifications)
 {
	u64 pages = max_t(int, vm->page_size, getpagesize()) / vm->page_size;
-	u64 gpa;
+	gpa_t gpa;
	int i;

	/*
diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index 51f8be50c7e4..3d02db371422 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -186,9 +186,9 @@ static void wait_for_vcpu(void)
		    "sem_timedwait() failed: %d", errno);
 }

-static void *vm_gpa2hva(struct vm_data *data, u64 gpa, u64 *rempages)
+static void *vm_gpa2hva(struct vm_data *data, gpa_t gpa, u64 *rempages)
 {
-	u64 gpage, pgoffs;
+	gpa_t gpage, pgoffs;
	u32 slot, slotoffs;
	void *base;
	u32 guest_page_size = data->vm->page_size;
@@ -332,7 +332,7 @@ static bool prepare_vm(struct vm_data *data, int nslots, u64 *maxslots,

	for (slot = 1, guest_addr = MEM_GPA; slot <= data->nslots; slot++) {
		u64 npages;
-		u64 gpa;
+		gpa_t gpa;

		npages = data->pages_per_slot;
		if (slot == data->nslots)
@@ -638,7 +638,7 @@ static void test_memslot_move_loop(struct vm_data *data, struct sync_area *sync)
 static void test_memslot_do_unmap(struct vm_data *data,
				  u64 offsp, u64 count)
 {
-	u64 gpa, ctr;
+	gpa_t gpa, ctr;
	u32 guest_page_size = data->vm->page_size;

	for (gpa = MEM_TEST_GPA + offsp * guest_page_size, ctr = 0; ctr < count; ) {
@@ -663,7 +663,7 @@ static void test_memslot_do_unmap(struct vm_data *data,
 static void test_memslot_map_unmap_check(struct vm_data *data,
					 u64 offsp, u64 valexp)
 {
-	u64 gpa;
+	gpa_t gpa;
	u64 *val;
	u32 guest_page_size = data->vm->page_size;

diff --git a/tools/testing/selftests/kvm/mmu_stress_test.c b/tools/testing/selftests/kvm/mmu_stress_test.c
index e0975a5dcff1..54d281419d31 100644
--- a/tools/testing/selftests/kvm/mmu_stress_test.c
+++ b/tools/testing/selftests/kvm/mmu_stress_test.c
@@ -22,7 +22,7 @@ static bool all_vcpus_hit_ro_fault;

 static void guest_code(u64 start_gpa, u64 end_gpa, u64 stride)
 {
-	u64 gpa;
+	gpa_t gpa;
	int i;

	for (i = 0; i < 2; i++) {
@@ -206,7 +206,7 @@ static pthread_t *spawn_workers(struct kvm_vm *vm, struct kvm_vcpu **vcpus,
				u64 start_gpa, u64 end_gpa)
 {
	struct vcpu_info *info;
-	u64 gpa, nr_bytes;
+	gpa_t gpa, nr_bytes;
	pthread_t *threads;
	int i;

diff --git a/tools/testing/selftests/kvm/pre_fault_memory_test.c b/tools/testing/selftests/kvm/pre_fault_memory_test.c
index bfdaaeed3a8c..fcb57fd034e6 100644
--- a/tools/testing/selftests/kvm/pre_fault_memory_test.c
+++ b/tools/testing/selftests/kvm/pre_fault_memory_test.c
@@ -33,7 +33,7 @@ static void guest_code(u64 base_gva)

 struct slot_worker_data {
	struct kvm_vm *vm;
-	u64 gpa;
+	gpa_t gpa;
	u32 flags;
	bool worker_ready;
	bool prefault_ready;
@@ -161,7 +161,7 @@ static void pre_fault_memory(struct kvm_vcpu *vcpu, u64 base_gpa, u64 offset,

 static void __test_pre_fault_memory(unsigned long vm_type, bool private)
 {
-	u64 gpa, gva, alignment, guest_page_size;
+	gpa_t gpa, gva, alignment, guest_page_size;
	const struct vm_shape shape = {
		.mode = VM_MODE_DEFAULT,
		.type = vm_type,
diff --git a/tools/testing/selftests/kvm/s390/ucontrol_test.c b/tools/testing/selftests/kvm/s390/ucontrol_test.c
index dbdee4c39d47..b8c6f37b53e0 100644
--- a/tools/testing/selftests/kvm/s390/ucontrol_test.c
+++ b/tools/testing/selftests/kvm/s390/ucontrol_test.c
@@ -269,7 +269,7 @@ TEST(uc_cap_hpage)
 }

 /* calculate host virtual addr from guest physical addr */
-static void *gpa2hva(FIXTURE_DATA(uc_kvm) *self, u64 gpa)
+static void *gpa2hva(FIXTURE_DATA(uc_kvm) *self, gpa_t gpa)
 {
	return (void *)(self->base_hva - self->base_gpa + gpa);
 }
diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index 5551dd0f9fad..9b919a231c93 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -112,7 +112,7 @@ static struct kvm_vm *spawn_vm(struct kvm_vcpu **vcpu, pthread_t *vcpu_thread,
 {
	struct kvm_vm *vm;
	u64 *hva;
-	u64 gpa;
+	gpa_t gpa;

	vm = vm_create_with_one_vcpu(vcpu, guest_code);

diff --git a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
index 27675d7d04c0..1d2f5d4fd45d 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
@@ -38,7 +38,7 @@ do {									\
			       pattern, i, gpa + i, mem[i]);		\
 } while (0)

-static void memcmp_h(u8 *mem, u64 gpa, u8 pattern, size_t size)
+static void memcmp_h(u8 *mem, gpa_t gpa, u8 pattern, size_t size)
 {
	size_t i;

@@ -70,13 +70,13 @@ enum ucall_syncs {
	SYNC_PRIVATE,
 };

-static void guest_sync_shared(u64 gpa, u64 size,
+static void guest_sync_shared(gpa_t gpa, u64 size,
			      u8 current_pattern, u8 new_pattern)
 {
	GUEST_SYNC5(SYNC_SHARED, gpa, size, current_pattern, new_pattern);
 }

-static void guest_sync_private(u64 gpa, u64 size, u8 pattern)
+static void guest_sync_private(gpa_t gpa, u64 size, u8 pattern)
 {
	GUEST_SYNC4(SYNC_PRIVATE, gpa, size, pattern);
 }
@@ -86,7 +86,7 @@ static void guest_sync_private(u64 gpa, u64 size, u8 pattern)
 #define MAP_GPA_SHARED		BIT(1)
 #define MAP_GPA_DO_FALLOCATE	BIT(2)

-static void guest_map_mem(u64 gpa, u64 size, bool map_shared,
+static void guest_map_mem(gpa_t gpa, u64 size, bool map_shared,
			  bool do_fallocate)
 {
	u64 flags = MAP_GPA_SET_ATTRIBUTES;
@@ -98,12 +98,12 @@ static void guest_map_mem(u64 gpa, u64 size, bool map_shared,
	kvm_hypercall_map_gpa_range(gpa, size, flags);
 }

-static void guest_map_shared(u64 gpa, u64 size, bool do_fallocate)
+static void guest_map_shared(gpa_t gpa, u64 size, bool do_fallocate)
 {
	guest_map_mem(gpa, size, true, do_fallocate);
 }

-static void guest_map_private(u64 gpa, u64 size, bool do_fallocate)
+static void guest_map_private(gpa_t gpa, u64 size, bool do_fallocate)
 {
	guest_map_mem(gpa, size, false, do_fallocate);
 }
@@ -134,7 +134,7 @@ static void guest_test_explicit_conversion(u64 base_gpa, bool do_fallocate)
	memcmp_g(base_gpa, init_p, PER_CPU_DATA_SIZE);

	for (i = 0; i < ARRAY_SIZE(test_ranges); i++) {
-		u64 gpa = base_gpa + test_ranges[i].offset;
+		gpa_t gpa = base_gpa + test_ranges[i].offset;
		u64 size = test_ranges[i].size;
		u8 p1 = 0x11;
		u8 p2 = 0x22;
@@ -214,7 +214,7 @@ static void guest_test_explicit_conversion(u64 base_gpa, bool do_fallocate)
	}
 }

-static void guest_punch_hole(u64 gpa, u64 size)
+static void guest_punch_hole(gpa_t gpa, u64 size)
 {
	/* "Mapping" memory shared via fallocate() is done via PUNCH_HOLE. */
	u64 flags = MAP_GPA_SHARED | MAP_GPA_DO_FALLOCATE;
@@ -239,7 +239,7 @@ static void guest_test_punch_hole(u64 base_gpa, bool precise)
	guest_map_private(base_gpa, PER_CPU_DATA_SIZE, false);

	for (i = 0; i < ARRAY_SIZE(test_ranges); i++) {
-		u64 gpa = base_gpa + test_ranges[i].offset;
+		gpa_t gpa = base_gpa + test_ranges[i].offset;
		u64 size = test_ranges[i].size;

		/*
@@ -289,7 +289,7 @@ static void guest_code(u64 base_gpa)
 static void handle_exit_hypercall(struct kvm_vcpu *vcpu)
 {
	struct kvm_run *run = vcpu->run;
-	u64 gpa = run->hypercall.args[0];
+	gpa_t gpa = run->hypercall.args[0];
	u64 size = run->hypercall.args[1] * PAGE_SIZE;
	bool set_attributes = run->hypercall.args[2] & MAP_GPA_SET_ATTRIBUTES;
	bool map_shared = run->hypercall.args[2] & MAP_GPA_SHARED;
@@ -337,7 +337,7 @@ static void *__test_mem_conversions(void *__vcpu)
		case UCALL_ABORT:
			REPORT_GUEST_ASSERT(uc);
		case UCALL_SYNC: {
-			u64 gpa = uc.args[1];
+			gpa_t gpa = uc.args[1];
			size_t size = uc.args[2];
			size_t i;

@@ -402,7 +402,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, u32 nr_v
					     KVM_MEM_GUEST_MEMFD, memfd, slot_size * i);

	for (i = 0; i < nr_vcpus; i++) {
-		u64 gpa = BASE_DATA_GPA + i * per_cpu_size;
+		gpa_t gpa = BASE_DATA_GPA + i * per_cpu_size;

		vcpu_args_set(vcpus[i], 1, gpa);
diff --git a/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c b/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c
index 27cded643699..3dca85e95478 100644
--- a/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c
+++ b/tools/testing/selftests/kvm/x86/smaller_maxphyaddr_emulation_test.c
@@ -48,7 +48,7 @@ int main(int argc, char *argv[])
	struct kvm_vm *vm;
	struct ucall uc;
	u64 *hva;
-	u64 gpa;
+	gpa_t gpa;
	int rc;

	TEST_REQUIRE(kvm_has_cap(KVM_CAP_SMALLER_MAXPHYADDR));
diff --git a/tools/testing/selftests/kvm/x86/svm_nested_vmcb12_gpa.c b/tools/testing/selftests/kvm/x86/svm_nested_vmcb12_gpa.c
index ae8a10913af7..a4935ce2fb99 100644
--- a/tools/testing/selftests/kvm/x86/svm_nested_vmcb12_gpa.c
+++ b/tools/testing/selftests/kvm/x86/svm_nested_vmcb12_gpa.c
@@ -28,28 +28,28 @@ static void l2_code(void)
	vmcall();
 }

-static void l1_vmrun(struct svm_test_data *svm, u64 gpa)
+static void l1_vmrun(struct svm_test_data *svm, gpa_t gpa)
 {
	generic_svm_setup(svm, l2_code, &l2_guest_stack[L2_GUEST_STACK_SIZE]);

	asm volatile ("vmrun %[gpa]" : : [gpa] "a" (gpa) : "memory");
 }

-static void l1_vmload(struct svm_test_data *svm, u64 gpa)
+static void l1_vmload(struct svm_test_data *svm, gpa_t gpa)
 {
	generic_svm_setup(svm, l2_code, &l2_guest_stack[L2_GUEST_STACK_SIZE]);

	asm volatile ("vmload %[gpa]" : : [gpa] "a" (gpa) : "memory");
 }

-static void l1_vmsave(struct svm_test_data *svm, u64 gpa)
+static void l1_vmsave(struct svm_test_data *svm, gpa_t gpa)
 {
	generic_svm_setup(svm, l2_code, &l2_guest_stack[L2_GUEST_STACK_SIZE]);

	asm volatile ("vmsave %[gpa]" : : [gpa] "a" (gpa) : "memory");
 }

-static void l1_vmexit(struct svm_test_data *svm, u64 gpa)
+static void l1_vmexit(struct svm_test_data *svm, gpa_t gpa)
 {
	generic_svm_setup(svm, l2_code, &l2_guest_stack[L2_GUEST_STACK_SIZE]);
-- 
2.54.0.rc1.555.g9c883467ad-goog