From: Ackerley Tng <ackerleytng@google.com>
To: Sean Christopherson <seanjc@google.com>,
Marc Zyngier <maz@kernel.org>,
Oliver Upton <oliver.upton@linux.dev>,
Paolo Bonzini <pbonzini@redhat.com>
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
David Hildenbrand <david@redhat.com>,
Fuad Tabba <tabba@google.com>, Shivank Garg <shivankg@amd.com>,
Ashish Kalra <ashish.kalra@amd.com>,
Vlastimil Babka <vbabka@suse.cz>
Subject: Re: [PATCH v12 11/12] KVM: selftests: Add guest_memfd tests for mmap and NUMA policy support
Date: Thu, 09 Oct 2025 16:08:44 -0700
Message-ID: <diqzcy6vhdvn.fsf@google.com>
In-Reply-To: <20251007221420.344669-12-seanjc@google.com>
Sean Christopherson <seanjc@google.com> writes:
> From: Shivank Garg <shivankg@amd.com>
>
> Add tests for NUMA memory policy binding and NUMA aware allocation in
> guest_memfd. This extends the existing selftests by adding proper
> validation for:
> - KVM guest_memfd set_policy() and get_policy() vm_ops functionality,
> using mbind() and get_mempolicy()
> - NUMA policy application before and after memory allocation
>
> Run the NUMA mbind() test with and without INIT_SHARED, as KVM should allow
> doing mbind(), madvise(), etc. on guest-private memory, e.g. so that
> userspace can set NUMA policy for CoCo VMs.
>
> Run the NUMA allocation test only with INIT_SHARED, i.e. skip it when the
> host can't fault in memory (via direct access, madvise(), etc.), as
> move_pages() returns -ENOENT if a page hasn't been faulted in (it walks
> the host page tables to find the associated folio).
>
> Signed-off-by: Shivank Garg <shivankg@amd.com>
> Tested-by: Ashish Kalra <ashish.kalra@amd.com>
> [sean: don't skip entire test when running on non-NUMA system, test mbind()
> with private memory, provide more info in assert messages]
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> .../testing/selftests/kvm/guest_memfd_test.c | 98 +++++++++++++++++++
> 1 file changed, 98 insertions(+)
>
>
> [...snip...]
>
> +static void test_numa_allocation(int fd, size_t total_size)
> +{
> + unsigned long node0_mask = 1; /* Node 0 */
> + unsigned long node1_mask = 2; /* Node 1 */
> + unsigned long maxnode = 8;
> + void *pages[4];
> + int status[4];
> + char *mem;
> + int i;
> +
> + if (!is_multi_numa_node_system())
> + return;
> +
> + mem = kvm_mmap(total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd);
> +
> + for (i = 0; i < 4; i++)
> + pages[i] = (char *)mem + page_size * i;
> +
> + /* Set NUMA policy after allocation */
> + memset(mem, 0xaa, page_size);
> + kvm_mbind(pages[0], page_size, MPOL_BIND, &node0_mask, maxnode, 0);
> + kvm_fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, page_size);
> +
> + /* Set NUMA policy before allocation */
> + kvm_mbind(pages[0], page_size * 2, MPOL_BIND, &node1_mask, maxnode, 0);
> + kvm_mbind(pages[2], page_size * 2, MPOL_BIND, &node0_mask, maxnode, 0);
> + memset(mem, 0xaa, total_size);
> +
> + /* Validate if pages are allocated on specified NUMA nodes */
> + kvm_move_pages(0, 4, pages, NULL, status, 0);
> + TEST_ASSERT(status[0] == 1, "Expected page 0 on node 1, got it on node %d", status[0]);
> + TEST_ASSERT(status[1] == 1, "Expected page 1 on node 1, got it on node %d", status[1]);
> + TEST_ASSERT(status[2] == 0, "Expected page 2 on node 0, got it on node %d", status[2]);
> + TEST_ASSERT(status[3] == 0, "Expected page 3 on node 0, got it on node %d", status[3]);
> +
> + /* Punch hole for all pages */
> + kvm_fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, total_size);
> +
> + /* Change NUMA policy nodes and reallocate */
> + kvm_mbind(pages[0], page_size * 2, MPOL_BIND, &node0_mask, maxnode, 0);
> + kvm_mbind(pages[2], page_size * 2, MPOL_BIND, &node1_mask, maxnode, 0);
> + memset(mem, 0xaa, total_size);
> +
> + kvm_move_pages(0, 4, pages, NULL, status, 0);
> + TEST_ASSERT(status[0] == 0, "Expected page 0 on node 0, got it on node %d", status[0]);
> + TEST_ASSERT(status[1] == 0, "Expected page 1 on node 0, got it on node %d", status[1]);
> + TEST_ASSERT(status[2] == 1, "Expected page 2 on node 1, got it on node %d", status[2]);
> + TEST_ASSERT(status[3] == 1, "Expected page 3 on node 1, got it on node %d", status[3]);
> +
Related to my comment on patch 5: are we missing a test covering
guest_memfd's interaction with the memory spread page cache feature
provided by the cpuset subsystem?
Perhaps we need tests for:

1. Allocation matches current's mempolicy when no mempolicy is defined
   for the specific indices.
2. During allocation, current's mempolicy can be overridden by a
   mempolicy defined for specific indices.
3. During allocation, both current's mempolicy and the effect of the
   cpuset config can be overridden by a mempolicy defined for specific
   indices.
4. During allocation, with no mempolicy defined for a given index,
   current's mempolicy is overridden by the effect of the cpuset config.
I believe test 4, before patch 5, will show that guest_memfd respects
cpuset config, but after patch 5, will show that guest_memfd no longer
allows cpuset config to override current's mempolicy.
> + kvm_munmap(mem, total_size);
> +}
> +
> static void test_fault_sigbus(int fd, size_t accessible_size, size_t map_size)
> {
> const char val = 0xaa;
> @@ -273,11 +369,13 @@ static void __test_guest_memfd(struct kvm_vm *vm, uint64_t flags)
> if (flags & GUEST_MEMFD_FLAG_INIT_SHARED) {
> gmem_test(mmap_supported, vm, flags);
> gmem_test(fault_overflow, vm, flags);
> + gmem_test(numa_allocation, vm, flags);
> } else {
> gmem_test(fault_private, vm, flags);
> }
>
> gmem_test(mmap_cow, vm, flags);
> + gmem_test(mbind, vm, flags);
> } else {
> gmem_test(mmap_not_supported, vm, flags);
> }
> --
> 2.51.0.710.ga91ca5db03-goog