From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ackerley Tng via B4 Relay
Date: Tue, 28 Apr 2026 16:25:26 -0700
Subject: [PATCH RFC v5 31/53] KVM: selftests: Add support for mmap() on
 guest_memfd in core library
X-Mailing-List: linux-trace-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20260428-gmem-inplace-conversion-v5-31-d8608ccfca22@google.com>
References: <20260428-gmem-inplace-conversion-v5-0-d8608ccfca22@google.com>
In-Reply-To: <20260428-gmem-inplace-conversion-v5-0-d8608ccfca22@google.com>
To: aik@amd.com, andrew.jones@linux.dev, binbin.wu@linux.intel.com,
 brauner@kernel.org, chao.p.peng@linux.intel.com, david@kernel.org,
 ira.weiny@intel.com, jmattson@google.com, jthoughton@google.com,
 michael.roth@amd.com, oupton@kernel.org, pankaj.gupta@amd.com,
 qperret@google.com, rick.p.edgecombe@intel.com, rientjes@google.com,
 shivankg@amd.com, steven.price@arm.com, tabba@google.com,
 willy@infradead.org, wyihan@google.com, yan.y.zhao@intel.com,
 forkloop@google.com, pratyush@kernel.org, suzuki.poulose@arm.com,
 aneesh.kumar@kernel.org, Paolo Bonzini, Sean Christopherson,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Steven Rostedt, Masami Hiramatsu,
 Mathieu Desnoyers, Jonathan Corbet, Shuah Khan, Shuah Khan,
 Vishal Annapurve, Andrew Morton, Chris Li, Kairui Song, Kemeng Shi,
 Nhat Pham, Baoquan He, Barry Song, Axel Rasmussen, Yuanchu Xie,
 Wei Xu, Youngjun Park, Qi Zheng, Shakeel Butt, Kiryl Shutsemau,
 Jason Gunthorpe, Vlastimil Babka
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-mm@kvack.org,
 linux-coco@lists.linux.dev, Ackerley Tng
Reply-To: ackerleytng@google.com

From: Sean Christopherson

Accept gmem_flags in vm_mem_add() so that a guest_memfd can be created
within vm_mem_add() itself.

When vm_mem_add() is used to set up a guest_memfd for a memslot, install
the provided (or newly created) gmem_fd as the fd for the user memory
region. This makes the fd available to be mmap()ed just like fds from
other memory sources: mmap() the guest_memfd using the provided
gmem_flags and gmem_offset.

Add a kvm_slot_to_fd() helper to provide convenient access to the file
descriptor of a memslot.

Update existing callers of vm_mem_add() to pass 0 for gmem_flags to
preserve existing behavior.

Signed-off-by: Sean Christopherson
[For guest_memfds, mmap() using gmem_offset instead of 0 all the time.]
Signed-off-by: Ackerley Tng
---
 tools/testing/selftests/kvm/include/kvm_util.h       |  7 ++++++-
 tools/testing/selftests/kvm/lib/kvm_util.c           | 19 +++++++++++--------
 .../selftests/kvm/x86/private_mem_conversions_test.c |  2 +-
 3 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index f19383376ee8e..fb54694e6568b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -700,7 +700,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 	gpa_t gpa, u32 slot, u64 npages, u32 flags);
 void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 		gpa_t gpa, u32 slot, u64 npages, u32 flags,
-		int gmem_fd, u64 gmem_offset);
+		int gmem_fd, u64 gmem_offset, u64 gmem_flags);
 
 #ifndef vm_arch_has_protected_memory
 static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
@@ -732,6 +732,11 @@ void *addr_gva2hva(struct kvm_vm *vm, gva_t gva);
 gpa_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
 void *addr_gpa2alias(struct kvm_vm *vm, gpa_t gpa);
 
+static inline int kvm_slot_to_fd(struct kvm_vm *vm, u32 slot)
+{
+	return memslot2region(vm, slot)->fd;
+}
+
 #ifndef vcpu_arch_put_guest
 #define vcpu_arch_put_guest(mem, val) do { (mem) = (val); } while (0)
 #endif
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 11da9b7546d03..ff301e7c22b2f 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -979,12 +979,13 @@ void vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
 /* FIXME: This thing needs to be ripped apart and rewritten. */
 void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 	       gpa_t gpa, u32 slot, u64 npages, u32 flags,
-	       int gmem_fd, u64 gmem_offset)
+	       int gmem_fd, u64 gmem_offset, u64 gmem_flags)
 {
 	int ret;
 	struct userspace_mem_region *region;
 	size_t backing_src_pagesz = get_backing_src_pagesz(src_type);
 	size_t mem_size = npages * vm->page_size;
+	off_t mmap_offset = 0;
 	size_t alignment = 1;
 
 	TEST_REQUIRE_SET_USER_MEMORY_REGION2();
@@ -1056,8 +1057,6 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 
 	if (flags & KVM_MEM_GUEST_MEMFD) {
 		if (gmem_fd < 0) {
-			u32 gmem_flags = 0;
-
 			TEST_ASSERT(!gmem_offset,
 				    "Offset must be zero when creating new guest_memfd");
 			gmem_fd = vm_create_guest_memfd(vm, mem_size, gmem_flags);
@@ -1078,13 +1077,17 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 	}
 
 	region->fd = -1;
-	if (backing_src_is_shared(src_type))
+	if (flags & KVM_MEM_GUEST_MEMFD && gmem_flags & GUEST_MEMFD_FLAG_MMAP) {
+		region->fd = kvm_dup(gmem_fd);
+		mmap_offset = gmem_offset;
+	} else if (backing_src_is_shared(src_type)) {
 		region->fd = kvm_memfd_alloc(region->mmap_size,
 					     src_type == VM_MEM_SRC_SHARED_HUGETLB);
+	}
 
-	region->mmap_start = kvm_mmap(region->mmap_size, PROT_READ | PROT_WRITE,
-				      vm_mem_backing_src_alias(src_type)->flag,
-				      region->fd);
+	region->mmap_start = __kvm_mmap(region->mmap_size, PROT_READ | PROT_WRITE,
+					vm_mem_backing_src_alias(src_type)->flag,
+					region->fd, mmap_offset);
 
 	TEST_ASSERT(!is_backing_src_hugetlb(src_type) ||
 		    region->mmap_start == align_ptr_up(region->mmap_start, backing_src_pagesz),
@@ -1144,7 +1147,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 			      enum vm_mem_backing_src_type src_type,
 			      gpa_t gpa, u32 slot, u64 npages, u32 flags)
 {
-	vm_mem_add(vm, src_type, gpa, slot, npages, flags, -1, 0);
+	vm_mem_add(vm, src_type, gpa, slot, npages, flags, -1, 0, 0);
 }
 
 /*
diff --git a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
index 1d2f5d4fd45d7..861baff201e78 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
@@ -399,7 +399,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, u32 nr_v
 	for (i = 0; i < nr_memslots; i++)
 		vm_mem_add(vm, src_type, BASE_DATA_GPA + slot_size * i,
 			   BASE_DATA_SLOT + i, slot_size / vm->page_size,
-			   KVM_MEM_GUEST_MEMFD, memfd, slot_size * i);
+			   KVM_MEM_GUEST_MEMFD, memfd, slot_size * i, 0);
 
 	for (i = 0; i < nr_vcpus; i++) {
 		gpa_t gpa = BASE_DATA_GPA + i * per_cpu_size;

-- 
2.54.0.545.g6539524ca2-goog