From: Ackerley Tng via B4 Relay
Reply-To: ackerleytng@google.com
Date: Thu, 07 May 2026 13:22:42 -0700
Subject: [PATCH v6 23/43] KVM: selftests: Create gmem fd before "regular" fd when adding memslot
Message-Id: <20260507-gmem-inplace-conversion-v6-23-91ab5a8b19a4@google.com>
References: <20260507-gmem-inplace-conversion-v6-0-91ab5a8b19a4@google.com>
In-Reply-To: <20260507-gmem-inplace-conversion-v6-0-91ab5a8b19a4@google.com>
To: aik@amd.com, andrew.jones@linux.dev, binbin.wu@linux.intel.com,
    brauner@kernel.org, chao.p.peng@linux.intel.com, david@kernel.org,
    ira.weiny@intel.com, jmattson@google.com, jthoughton@google.com,
    michael.roth@amd.com, oupton@kernel.org, pankaj.gupta@amd.com,
    qperret@google.com, rick.p.edgecombe@intel.com, rientjes@google.com,
    shivankg@amd.com, steven.price@arm.com, tabba@google.com,
    willy@infradead.org, wyihan@google.com, yan.y.zhao@intel.com,
    forkloop@google.com, pratyush@kernel.org, suzuki.poulose@arm.com,
    aneesh.kumar@kernel.org, liam@infradead.org, Paolo Bonzini,
    Sean Christopherson, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, x86@kernel.org, "H. Peter Anvin", Steven Rostedt,
    Masami Hiramatsu, Mathieu Desnoyers, Jonathan Corbet, Shuah Khan,
    Vishal Annapurve, Andrew Morton, Chris Li, Kairui Song, Kemeng Shi,
    Nhat Pham, Baoquan He, Barry Song, Axel Rasmussen, Yuanchu Xie,
    Wei Xu, Youngjun Park, Qi Zheng, Shakeel Butt, Kiryl Shutsemau,
    Jason Gunthorpe, Vlastimil Babka
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-trace-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kselftest@vger.kernel.org, linux-mm@kvack.org,
    linux-coco@lists.linux.dev, Ackerley Tng

From: Sean Christopherson

When adding a memslot associated with a guest_memfd instance, create/dup
the guest_memfd before creating the "normal" backing file.  This will
allow dup'ing the gmem fd as the normal fd when guest_memfd supports
mmap(), i.e. to make guest_memfd the _only_ backing source for the
memslot.
Signed-off-by: Sean Christopherson
Signed-off-by: Ackerley Tng
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 45 +++++++++++++++---------------
 1 file changed, 23 insertions(+), 22 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 2a76eca7029d3..df73b23a4c66a 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1054,6 +1054,29 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 	if (alignment > 1)
 		region->mmap_size += alignment;
 
+	if (flags & KVM_MEM_GUEST_MEMFD) {
+		if (guest_memfd < 0) {
+			u32 guest_memfd_flags = 0;
+
+			TEST_ASSERT(!guest_memfd_offset,
+				    "Offset must be zero when creating new guest_memfd");
+			guest_memfd = vm_create_guest_memfd(vm, mem_size, guest_memfd_flags);
+		} else {
+			/*
+			 * Install a unique fd for each memslot so that the fd
+			 * can be closed when the region is deleted without
+			 * needing to track if the fd is owned by the framework
+			 * or by the caller.
+			 */
+			guest_memfd = kvm_dup(guest_memfd);
+		}
+
+		region->region.guest_memfd = guest_memfd;
+		region->region.guest_memfd_offset = guest_memfd_offset;
+	} else {
+		region->region.guest_memfd = -1;
+	}
+
 	region->fd = -1;
 	if (backing_src_is_shared(src_type))
 		region->fd = kvm_memfd_alloc(region->mmap_size,
@@ -1083,28 +1106,6 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 
 	region->backing_src_type = src_type;
 
-	if (flags & KVM_MEM_GUEST_MEMFD) {
-		if (guest_memfd < 0) {
-			u32 guest_memfd_flags = 0;
-			TEST_ASSERT(!guest_memfd_offset,
-				    "Offset must be zero when creating new guest_memfd");
-			guest_memfd = vm_create_guest_memfd(vm, mem_size, guest_memfd_flags);
-		} else {
-			/*
-			 * Install a unique fd for each memslot so that the fd
-			 * can be closed when the region is deleted without
-			 * needing to track if the fd is owned by the framework
-			 * or by the caller.
-			 */
-			guest_memfd = kvm_dup(guest_memfd);
-		}
-
-		region->region.guest_memfd = guest_memfd;
-		region->region.guest_memfd_offset = guest_memfd_offset;
-	} else {
-		region->region.guest_memfd = -1;
-	}
-
 	region->unused_phy_pages = sparsebit_alloc();
 	if (vm_arch_has_protected_memory(vm))
 		region->protected_phy_pages = sparsebit_alloc();

-- 
2.54.0.563.g4f69b47b94-goog