From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ackerley Tng via B4 Relay
Date: Thu, 07 May 2026 13:22:44 -0700
Subject: [PATCH v6 25/43] KVM: selftests: Add support for mmap() on
 guest_memfd in core library
Precedence: bulk
X-Mailing-List: linux-trace-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20260507-gmem-inplace-conversion-v6-25-91ab5a8b19a4@google.com>
References: <20260507-gmem-inplace-conversion-v6-0-91ab5a8b19a4@google.com>
In-Reply-To: <20260507-gmem-inplace-conversion-v6-0-91ab5a8b19a4@google.com>
To: aik@amd.com, andrew.jones@linux.dev, binbin.wu@linux.intel.com,
 brauner@kernel.org, chao.p.peng@linux.intel.com, david@kernel.org,
 ira.weiny@intel.com, jmattson@google.com, jthoughton@google.com,
 michael.roth@amd.com, oupton@kernel.org, pankaj.gupta@amd.com,
 qperret@google.com, rick.p.edgecombe@intel.com, rientjes@google.com,
 shivankg@amd.com, steven.price@arm.com, tabba@google.com,
 willy@infradead.org, wyihan@google.com, yan.y.zhao@intel.com,
 forkloop@google.com, pratyush@kernel.org, suzuki.poulose@arm.com,
 aneesh.kumar@kernel.org, liam@infradead.org, Paolo Bonzini,
 Sean Christopherson, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 Dave Hansen, x86@kernel.org, "H. Peter Anvin", Steven Rostedt,
 Masami Hiramatsu, Mathieu Desnoyers, Jonathan Corbet, Shuah Khan,
 Shuah Khan, Vishal Annapurve, Andrew Morton, Chris Li, Kairui Song,
 Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Axel Rasmussen,
 Yuanchu Xie, Wei Xu, Youngjun Park, Qi Zheng, Shakeel Butt,
 Kiryl Shutsemau, Jason Gunthorpe, Vlastimil Babka
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-mm@kvack.org,
 linux-coco@lists.linux.dev, Ackerley Tng
Reply-To: ackerleytng@google.com
From: Sean Christopherson

Accept gmem_flags in vm_mem_add() so that a guest_memfd can be created
from within vm_mem_add().

When vm_mem_add() is used to set up a guest_memfd for a memslot, install
the provided (or newly created) gmem_fd as the fd for the user memory
region. This makes the guest_memfd available for mmap() just like fds
from other memory sources; the mmap() of the guest_memfd uses the
provided gmem_flags and gmem_offset.

Add a kvm_slot_to_fd() helper for convenient access to the file
descriptor of a memslot.

Update existing callers of vm_mem_add() to pass 0 for gmem_flags to
preserve existing behavior.

Signed-off-by: Sean Christopherson
[For guest_memfds, mmap() using gmem_offset instead of always 0.]
Signed-off-by: Ackerley Tng
---
 tools/testing/selftests/kvm/include/kvm_util.h       |  7 ++++++-
 tools/testing/selftests/kvm/lib/kvm_util.c           | 19 +++++++++++--------
 .../selftests/kvm/x86/private_mem_conversions_test.c |  2 +-
 3 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index f19383376ee8e..fb54694e6568b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -700,7 +700,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 				 gpa_t gpa, u32 slot, u64 npages, u32 flags);
 void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 		gpa_t gpa, u32 slot, u64 npages, u32 flags,
-		int gmem_fd, u64 gmem_offset);
+		int gmem_fd, u64 gmem_offset, u64 gmem_flags);
 
 #ifndef vm_arch_has_protected_memory
 static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
@@ -732,6 +732,11 @@ void *addr_gva2hva(struct kvm_vm *vm, gva_t gva);
 gpa_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
 void *addr_gpa2alias(struct kvm_vm *vm, gpa_t gpa);
 
+static inline int kvm_slot_to_fd(struct kvm_vm *vm, u32 slot)
+{
+	return memslot2region(vm, slot)->fd;
+}
+
 #ifndef vcpu_arch_put_guest
 #define vcpu_arch_put_guest(mem, val) do { (mem) = (val); } while (0)
 #endif
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 11da9b7546d03..ff301e7c22b2f 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -979,12 +979,13 @@ void vm_set_user_memory_region2(struct kvm_vm *vm, u32 slot, u32 flags,
 /* FIXME: This thing needs to be ripped apart and rewritten. */
 void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 		gpa_t gpa, u32 slot, u64 npages, u32 flags,
-		int gmem_fd, u64 gmem_offset)
+		int gmem_fd, u64 gmem_offset, u64 gmem_flags)
 {
 	int ret;
 	struct userspace_mem_region *region;
 	size_t backing_src_pagesz = get_backing_src_pagesz(src_type);
 	size_t mem_size = npages * vm->page_size;
+	off_t mmap_offset = 0;
 	size_t alignment = 1;
 
 	TEST_REQUIRE_SET_USER_MEMORY_REGION2();
@@ -1056,8 +1057,6 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 
 	if (flags & KVM_MEM_GUEST_MEMFD) {
 		if (gmem_fd < 0) {
-			u32 gmem_flags = 0;
-
 			TEST_ASSERT(!gmem_offset,
 				    "Offset must be zero when creating new guest_memfd");
 			gmem_fd = vm_create_guest_memfd(vm, mem_size, gmem_flags);
@@ -1078,13 +1077,17 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 	}
 
 	region->fd = -1;
-	if (backing_src_is_shared(src_type))
+	if (flags & KVM_MEM_GUEST_MEMFD && gmem_flags & GUEST_MEMFD_FLAG_MMAP) {
+		region->fd = kvm_dup(gmem_fd);
+		mmap_offset = gmem_offset;
+	} else if (backing_src_is_shared(src_type)) {
 		region->fd = kvm_memfd_alloc(region->mmap_size,
 					     src_type == VM_MEM_SRC_SHARED_HUGETLB);
+	}
 
-	region->mmap_start = kvm_mmap(region->mmap_size, PROT_READ | PROT_WRITE,
-				      vm_mem_backing_src_alias(src_type)->flag,
-				      region->fd);
+	region->mmap_start = __kvm_mmap(region->mmap_size, PROT_READ | PROT_WRITE,
+					vm_mem_backing_src_alias(src_type)->flag,
+					region->fd, mmap_offset);
 
 	TEST_ASSERT(!is_backing_src_hugetlb(src_type) ||
 		    region->mmap_start == align_ptr_up(region->mmap_start, backing_src_pagesz),
@@ -1144,7 +1147,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
 				 enum vm_mem_backing_src_type src_type,
 				 gpa_t gpa, u32 slot, u64 npages, u32 flags)
 {
-	vm_mem_add(vm, src_type, gpa, slot, npages, flags, -1, 0);
+	vm_mem_add(vm, src_type, gpa, slot, npages, flags, -1, 0, 0);
 }
 
 /*
diff --git a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
index 1d2f5d4fd45d7..861baff201e78 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
@@ -399,7 +399,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, u32 nr_v
 	for (i = 0; i < nr_memslots; i++)
 		vm_mem_add(vm, src_type, BASE_DATA_GPA + slot_size * i,
 			   BASE_DATA_SLOT + i, slot_size / vm->page_size,
-			   KVM_MEM_GUEST_MEMFD, memfd, slot_size * i);
+			   KVM_MEM_GUEST_MEMFD, memfd, slot_size * i, 0);
 
 	for (i = 0; i < nr_vcpus; i++) {
 		gpa_t gpa = BASE_DATA_GPA + i * per_cpu_size;
-- 
2.54.0.563.g4f69b47b94-goog