From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ackerley Tng via B4 Relay
Date: Tue, 28 Apr 2026 16:25:31 -0700
Subject: [PATCH RFC v5 36/53] KVM: selftests: Test conversion precision in
 guest_memfd
Precedence: bulk
X-Mailing-List: linux-trace-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20260428-gmem-inplace-conversion-v5-36-d8608ccfca22@google.com>
References: <20260428-gmem-inplace-conversion-v5-0-d8608ccfca22@google.com>
In-Reply-To: <20260428-gmem-inplace-conversion-v5-0-d8608ccfca22@google.com>
To: aik@amd.com, andrew.jones@linux.dev, binbin.wu@linux.intel.com,
 brauner@kernel.org, chao.p.peng@linux.intel.com, david@kernel.org,
 ira.weiny@intel.com, jmattson@google.com, jthoughton@google.com,
 michael.roth@amd.com, oupton@kernel.org, pankaj.gupta@amd.com,
 qperret@google.com, rick.p.edgecombe@intel.com, rientjes@google.com,
 shivankg@amd.com, steven.price@arm.com, tabba@google.com,
 willy@infradead.org, wyihan@google.com, yan.y.zhao@intel.com,
 forkloop@google.com, pratyush@kernel.org, suzuki.poulose@arm.com,
 aneesh.kumar@kernel.org, Paolo Bonzini, Sean Christopherson,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Steven Rostedt, Masami Hiramatsu,
 Mathieu Desnoyers, Jonathan Corbet, Shuah Khan, Shuah Khan,
 Vishal Annapurve, Andrew Morton, Chris Li, Kairui Song, Kemeng Shi,
 Nhat Pham, Baoquan He, Barry Song, Axel Rasmussen, Yuanchu Xie,
 Wei Xu, Youngjun Park, Qi Zheng, Shakeel Butt, Kiryl Shutsemau,
 Jason Gunthorpe, Vlastimil Babka
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-mm@kvack.org,
 linux-coco@lists.linux.dev, Ackerley Tng
X-Mailer: b4 0.14.3
Reply-To: ackerleytng@google.com

From: Ackerley Tng

The existing guest_memfd conversion tests only use single-page memory
regions. This provides no coverage for multi-page guest_memfd objects,
specifically whether KVM correctly handles the page index for conversion
operations. An incorrect implementation could, for example, always
operate on the first page regardless of the index provided.

Add a new test case to verify that conversions between private and
shared memory correctly target the specified page within a multi-page
guest_memfd. This test also verifies the precision of memory conversions
by converting a single page and then iterating through all other pages
to ensure they remain in their original state.

To support this test, add a new GMEM_CONVERSION_MULTIPAGE_TEST_INIT_SHARED
macro that handles setting up and tearing down the VM for each page
iteration.
The teardown logic is adjusted to prevent a double-free in this new
scenario.

Signed-off-by: Ackerley Tng
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 .../kvm/x86/guest_memfd_conversions_test.c         | 70 ++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86/guest_memfd_conversions_test.c b/tools/testing/selftests/kvm/x86/guest_memfd_conversions_test.c
index 40ac1b3769af1..25f463bc9da52 100644
--- a/tools/testing/selftests/kvm/x86/guest_memfd_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86/guest_memfd_conversions_test.c
@@ -65,8 +65,13 @@ static void gmem_conversions_do_setup(test_data_t *t, int nr_pages,
 
 static void gmem_conversions_do_teardown(test_data_t *t)
 {
+	/* Use NULL to avoid second free in FIXTURE_TEARDOWN (multipage tests). */
+	if (!t->vcpu)
+		return;
+
 	/* No need to close gmem_fd, it's owned by the VM structure. */
 	kvm_vm_free(t->vcpu->vm);
+	t->vcpu = NULL;
 }
 
 FIXTURE_TEARDOWN(gmem_conversions)
@@ -105,6 +110,29 @@ static void __gmem_conversions_##test(test_data_t *t, int nr_pages)	\
 #define GMEM_CONVERSION_TEST_INIT_SHARED(test)				\
 	__GMEM_CONVERSION_TEST_INIT_SHARED(test, 1)
 
+/*
+ * Repeats test over nr_pages in a guest_memfd of size nr_pages, providing each
+ * test iteration with test_page, the index of the page under test in
+ * guest_memfd. test_page takes values 0..(nr_pages - 1) inclusive.
+ */
+#define GMEM_CONVERSION_MULTIPAGE_TEST_INIT_SHARED(test, __nr_pages)	\
+static void __gmem_conversions_multipage_##test(test_data_t *t, int nr_pages, \
+						const int test_page);	\
+									\
+TEST_F(gmem_conversions, test)						\
+{									\
+	const u64 flags = GUEST_MEMFD_FLAG_MMAP | GUEST_MEMFD_FLAG_INIT_SHARED; \
+	int i;								\
+									\
+	for (i = 0; i < __nr_pages; ++i) {				\
+		gmem_conversions_do_setup(self, __nr_pages, flags);	\
+		__gmem_conversions_multipage_##test(self, __nr_pages, i); \
+		gmem_conversions_do_teardown(self);			\
+	}								\
+}									\
+static void __gmem_conversions_multipage_##test(test_data_t *t, int nr_pages, \
+						const int test_page)
+
 struct guest_check_data {
 	void *mem;
 	char expected_val;
@@ -205,6 +233,48 @@ GMEM_CONVERSION_TEST_INIT_SHARED(init_shared)
 	test_convert_to_shared(t, 0, 'C', 'D', 'E');
 }
 
+/*
+ * Test indexing of pages within guest_memfd, using test data that is a multiple
+ * of page index.
+ */
+GMEM_CONVERSION_MULTIPAGE_TEST_INIT_SHARED(indexing, 4)
+{
+	int i;
+
+	/* Get a char that varies with both i and v. */
+#define f(x, v) ((x << 4) + (v))
+#define r(v) (f(i, v))
+#define c(v) (f(test_page, v))
+
+	/*
+	 * Start with the highest index, to catch any errors when, perhaps, the
+	 * first page is returned even for the last index.
+	 */
+	for (i = nr_pages - 1; i >= 0; --i)
+		test_shared(t, i, 0, r(0), r(2));
+
+	test_convert_to_private(t, test_page, c(2), c(3));
+
+	for (i = 0; i < nr_pages; ++i) {
+		if (i == test_page)
+			test_private(t, i, r(3), r(4));
+		else
+			test_shared(t, i, r(2), r(3), r(4));
+	}
+
+	test_convert_to_shared(t, test_page, c(4), c(5), c(6));
+
+	for (i = 0; i < nr_pages; ++i) {
+		char expected = i == test_page ? r(6) : r(4);
+
+		test_shared(t, i, expected, r(7), r(8));
+	}
+
+#undef c
+#undef r
+#undef f
+}
+
 int main(int argc, char *argv[])
 {
 	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM));
-- 
2.54.0.545.g6539524ca2-goog