From: Ackerley Tng <ackerleytng@google.com>
To: kvm@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, x86@kernel.org,
linux-fsdevel@vger.kernel.org
Cc: ackerleytng@google.com, aik@amd.com, ajones@ventanamicro.com,
akpm@linux-foundation.org, amoorthy@google.com,
anthony.yznaga@oracle.com, anup@brainfault.org,
aou@eecs.berkeley.edu, bfoster@redhat.com,
binbin.wu@linux.intel.com, brauner@kernel.org,
catalin.marinas@arm.com, chao.p.peng@intel.com,
chenhuacai@kernel.org, dave.hansen@intel.com, david@redhat.com,
dmatlack@google.com, dwmw@amazon.co.uk, erdemaktas@google.com,
fan.du@intel.com, fvdl@google.com, graf@amazon.com,
haibo1.xu@intel.com, hch@infradead.org, hughd@google.com,
ira.weiny@intel.com, isaku.yamahata@intel.com, jack@suse.cz,
james.morse@arm.com, jarkko@kernel.org, jgg@ziepe.ca,
jgowans@amazon.com, jhubbard@nvidia.com, jroedel@suse.de,
jthoughton@google.com, jun.miao@intel.com, kai.huang@intel.com,
keirf@google.com, kent.overstreet@linux.dev,
kirill.shutemov@intel.com, liam.merwick@oracle.com,
maciej.wieczor-retman@intel.com, mail@maciej.szmigiero.name,
maz@kernel.org, mic@digikod.net, michael.roth@amd.com,
mpe@ellerman.id.au, muchun.song@linux.dev, nikunj@amd.com,
nsaenz@amazon.es, oliver.upton@linux.dev, palmer@dabbelt.com,
pankaj.gupta@amd.com, paul.walmsley@sifive.com,
pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com,
pgonda@google.com, pvorel@suse.cz, qperret@google.com,
quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com,
quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
quic_tsoni@quicinc.com, richard.weiyang@gmail.com,
rick.p.edgecombe@intel.com, rientjes@google.com,
roypat@amazon.co.uk, rppt@kernel.org, seanjc@google.com,
shuah@kernel.org, steven.price@arm.com,
steven.sistare@oracle.com, suzuki.poulose@arm.com,
tabba@google.com, thomas.lendacky@amd.com,
usama.arif@bytedance.com, vannapurve@google.com, vbabka@suse.cz,
viro@zeniv.linux.org.uk, vkuznets@redhat.com,
wei.w.wang@intel.com, will@kernel.org, willy@infradead.org,
xiaoyao.li@intel.com, yan.y.zhao@intel.com, yilun.xu@intel.com,
yuzenghui@huawei.com, zhiquan1.li@intel.com
Subject: [RFC PATCH v2 10/51] KVM: selftests: Refactor vm_mem_add to be more flexible
Date: Wed, 14 May 2025 16:41:49 -0700
Message-ID: <9a9db594cc0e9d059dd30d2415d0346e09065bb6.1747264138.git.ackerleytng@google.com>
In-Reply-To: <cover.1747264138.git.ackerleytng@google.com>
enum vm_mem_backing_src_type encodes too many different possibilities
along different axes: (1) whether to mmap() from an fd, (2) the
granularity of the mapping for THP, and (3) the size of the hugetlb
mapping. It has also yet to be extended to support guest_memfd.
Once guest_memfd supports mmap() and we also want to test mmap()ing
from guest_memfd, the number of combinations makes enumeration in
vm_mem_backing_src_type difficult.
This refactor separates out vm_mem_backing_src_type from
userspace_mem_region. For now, vm_mem_backing_src_type remains a
possible way for tests to specify, on the command line, the
combination of backing memory to test.
vm_mem_add() is now the last place where vm_mem_backing_src_type is
interpreted; there it is used to
1. Check the validity of the requested guest_paddr
2. Align mmap_size appropriately based on the mapping's page size and
   the architecture
3. Install memory appropriately according to the mapping's page size
mmap()ing an alias appears to be specific to userfaultfd tests and
could be refactored out of struct userspace_mem_region and localized
in the userfaultfd tests in the future.
This paves the way for replacing vm_mem_backing_src_type with multiple
command-line flags that would specify backing memory more flexibly.
Future tests are expected to use vm_mem_region_alloc() to allocate a
struct userspace_mem_region, then use more fundamental functions like
vm_mem_region_mmap(), vm_mem_region_madvise_thp(), kvm_create_memfd(),
vm_create_guest_memfd(), and the other helpers used in vm_mem_add() to
flexibly build up a struct userspace_mem_region before finally adding
the region to the VM with vm_mem_region_add().
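For illustration, a future test could compose the new helpers roughly
as follows. This is only a minimal sketch: the helper name and the
slot, gpa and size values are placeholders, not taken from any
existing test.

static void add_shared_memfd_region(struct kvm_vm *vm)
{
        size_t size = 2 * 1024 * 1024;  /* placeholder memslot size */
        struct userspace_mem_region *region;
        int memfd;

        region = vm_mem_region_alloc(vm);

        /* Back the region with a memfd and mmap() it shared. */
        memfd = kvm_create_memfd(size, MFD_CLOEXEC);
        region->fd = memfd;
        vm_mem_region_mmap(region, size, MAP_SHARED, memfd, 0);
        vm_mem_region_install_memory(region, size, getpagesize());

        /* Fill in the memslot parameters and register the region with KVM. */
        region->region.slot = 1;                     /* placeholder slot */
        region->region.flags = 0;
        region->region.guest_phys_addr = 0x10000000; /* placeholder gpa */
        vm_mem_region_add(vm, region);
}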
Change-Id: Ibb37af8a1a3bbb6de776426302433c5d9613ee76
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
.../testing/selftests/kvm/include/kvm_util.h | 29 +-
.../testing/selftests/kvm/include/test_util.h | 2 +
tools/testing/selftests/kvm/lib/kvm_util.c | 429 +++++++++++-------
tools/testing/selftests/kvm/lib/test_util.c | 25 +
4 files changed, 328 insertions(+), 157 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 373912464fb4..853ab68cff79 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -35,11 +35,26 @@ struct userspace_mem_region {
struct sparsebit *protected_phy_pages;
int fd;
off_t offset;
- enum vm_mem_backing_src_type backing_src_type;
+ /*
+ * host_mem is mmap_start aligned upwards to an address suitable for the
+ * architecture. In most cases, host_mem and mmap_start are the same,
+ * except for s390x, where the host address must be aligned to 1M (due
+ * to PGSTEs).
+ */
+#ifdef __s390x__
+#define S390X_HOST_ADDRESS_ALIGNMENT 0x100000
+#endif
void *host_mem;
+ /* host_alias is to mmap_alias as host_mem is to mmap_start */
void *host_alias;
void *mmap_start;
void *mmap_alias;
+ /*
+ * mmap_size is possibly larger than region.memory_size because in some
+ * cases, host_mem has to be adjusted upwards (see comment for host_mem
+ * above). In those cases, mmap_size has to be adjusted upwards so that
+ * enough memory is available in this memslot.
+ */
size_t mmap_size;
struct rb_node gpa_node;
struct rb_node hva_node;
@@ -582,6 +597,18 @@ int __vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flag
uint64_t gpa, uint64_t size, void *hva,
uint32_t guest_memfd, uint64_t guest_memfd_offset);
+struct userspace_mem_region *vm_mem_region_alloc(struct kvm_vm *vm);
+void *vm_mem_region_mmap(struct userspace_mem_region *region, size_t length,
+ int flags, int fd, off_t offset);
+void vm_mem_region_install_memory(struct userspace_mem_region *region,
+ size_t memslot_size, size_t alignment);
+void vm_mem_region_madvise_thp(struct userspace_mem_region *region, int advice);
+int vm_mem_region_install_guest_memfd(struct userspace_mem_region *region,
+ int guest_memfd);
+void *vm_mem_region_mmap_alias(struct userspace_mem_region *region, int flags,
+ size_t alignment);
+void vm_mem_region_add(struct kvm_vm *vm, struct userspace_mem_region *region);
+
void vm_userspace_mem_region_add(struct kvm_vm *vm,
enum vm_mem_backing_src_type src_type,
uint64_t guest_paddr, uint32_t slot, uint64_t npages,
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 77d13d7920cb..b4a03784ac4f 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -149,6 +149,8 @@ size_t get_trans_hugepagesz(void);
size_t get_def_hugetlb_pagesz(void);
const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i);
size_t get_backing_src_pagesz(uint32_t i);
+int backing_src_should_madvise(uint32_t i);
+int get_backing_src_madvise_advice(uint32_t i);
bool is_backing_src_hugetlb(uint32_t i);
void backing_src_help(const char *flag);
enum vm_mem_backing_src_type parse_backing_src_type(const char *type_name);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 815bc45dd8dc..58a3365f479c 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -824,15 +824,12 @@ void kvm_vm_free(struct kvm_vm *vmp)
free(vmp);
}
-int kvm_memfd_alloc(size_t size, bool hugepages)
+int kvm_create_memfd(size_t size, unsigned int flags)
{
- int memfd_flags = MFD_CLOEXEC;
- int fd, r;
+ int fd;
+ int r;
- if (hugepages)
- memfd_flags |= MFD_HUGETLB;
-
- fd = memfd_create("kvm_selftest", memfd_flags);
+ fd = memfd_create("kvm_selftest", flags);
TEST_ASSERT(fd != -1, __KVM_SYSCALL_ERROR("memfd_create()", fd));
r = ftruncate(fd, size);
@@ -844,6 +841,16 @@ int kvm_memfd_alloc(size_t size, bool hugepages)
return fd;
}
+int kvm_memfd_alloc(size_t size, bool hugepages)
+{
+ int memfd_flags = MFD_CLOEXEC;
+
+ if (hugepages)
+ memfd_flags |= MFD_HUGETLB;
+
+ return kvm_create_memfd(size, memfd_flags);
+}
+
static void vm_userspace_mem_region_gpa_insert(struct rb_root *gpa_tree,
struct userspace_mem_region *region)
{
@@ -953,185 +960,295 @@ void vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags
errno, strerror(errno));
}
+/**
+ * Allocates and returns a struct userspace_mem_region.
+ */
+struct userspace_mem_region *vm_mem_region_alloc(struct kvm_vm *vm)
+{
+ struct userspace_mem_region *region;
-/* FIXME: This thing needs to be ripped apart and rewritten. */
-void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
- uint64_t guest_paddr, uint32_t slot, uint64_t npages,
- uint32_t flags, int guest_memfd, uint64_t guest_memfd_offset)
+ /* Allocate and initialize new mem region structure. */
+ region = calloc(1, sizeof(*region));
+ TEST_ASSERT(region != NULL, "Insufficient Memory");
+
+ region->unused_phy_pages = sparsebit_alloc();
+ if (vm_arch_has_protected_memory(vm))
+ region->protected_phy_pages = sparsebit_alloc();
+
+ region->fd = -1;
+ region->region.guest_memfd = -1;
+
+ return region;
+}
+
+static size_t compute_page_size(int mmap_flags, int madvise_advice)
+{
+ if (mmap_flags & MAP_HUGETLB) {
+ int size_flags = (mmap_flags >> MAP_HUGE_SHIFT) & MAP_HUGE_MASK;
+
+ if (!size_flags)
+ return get_def_hugetlb_pagesz();
+
+ return 1ULL << size_flags;
+ }
+
+ return madvise_advice == MADV_HUGEPAGE ? get_trans_hugepagesz() : getpagesize();
+}
+
+/**
+ * Calls mmap() with @length, @flags, @fd, @offset for @region.
+ *
+ * Think of this as the struct userspace_mem_region wrapper for the mmap()
+ * syscall.
+ */
+void *vm_mem_region_mmap(struct userspace_mem_region *region, size_t length,
+ int flags, int fd, off_t offset)
+{
+ void *mem;
+
+ if (flags & MAP_SHARED) {
+ TEST_ASSERT(fd != -1,
+ "Ensure that fd is provided for shared mappings.");
+ TEST_ASSERT(
+ region->fd == fd || region->region.guest_memfd == fd,
+ "Ensure that fd is opened before mmap, and is either "
+ "set up in region->fd or region->region.guest_memfd.");
+ }
+
+ mem = mmap(NULL, length, PROT_READ | PROT_WRITE, flags, fd, offset);
+ TEST_ASSERT(mem != MAP_FAILED, "Couldn't mmap anonymous memory");
+
+ region->mmap_start = mem;
+ region->mmap_size = length;
+ region->offset = offset;
+
+ return mem;
+}
+
+/**
+ * Installs mmap()ed memory in @region->mmap_start as @region->host_mem,
+ * checking constraints.
+ */
+void vm_mem_region_install_memory(struct userspace_mem_region *region,
+ size_t memslot_size, size_t alignment)
+{
+ TEST_ASSERT(region->mmap_size >= memslot_size,
+ "mmap()ed memory insufficient for memslot");
+
+ region->host_mem = align_ptr_up(region->mmap_start, alignment);
+ region->region.userspace_addr = (uint64_t)region->host_mem;
+ region->region.memory_size = memslot_size;
+}
+
+
+/**
+ * Calls madvise with @advice for @region.
+ *
+ * Think of this as the struct userspace_mem_region wrapper for the madvise()
+ * syscall.
+ */
+void vm_mem_region_madvise_thp(struct userspace_mem_region *region, int advice)
{
int ret;
+
+ TEST_ASSERT(
+ region->host_mem && region->mmap_size,
+ "vm_mem_region_madvise_thp() must be called after vm_mem_region_mmap()");
+
+ ret = madvise(region->host_mem, region->mmap_size, advice);
+ TEST_ASSERT(ret == 0, "madvise failed, addr: %p length: 0x%lx",
+ region->host_mem, region->mmap_size);
+}
+
+/**
+ * Installs guest_memfd by setting it up in @region.
+ *
+ * Returns the guest_memfd that was installed in @region.
+ */
+int vm_mem_region_install_guest_memfd(struct userspace_mem_region *region,
+ int guest_memfd)
+{
+ /*
+ * Install a unique fd for each memslot so that the fd can be closed
+ * when the region is deleted without needing to track if the fd is
+ * owned by the framework or by the caller.
+ */
+ guest_memfd = dup(guest_memfd);
+ TEST_ASSERT(guest_memfd >= 0, __KVM_SYSCALL_ERROR("dup()", guest_memfd));
+ region->region.guest_memfd = guest_memfd;
+
+ return guest_memfd;
+}
+
+/**
+ * Calls mmap() to create an alias for mmap()ed memory at region->host_mem,
+ * exactly the same size that was mmap()ed.
+ *
+ * This is used mainly for userfaultfd tests.
+ */
+void *vm_mem_region_mmap_alias(struct userspace_mem_region *region, int flags,
+ size_t alignment)
+{
+ region->mmap_alias = mmap(NULL, region->mmap_size,
+ PROT_READ | PROT_WRITE, flags, region->fd, 0);
+ TEST_ASSERT(region->mmap_alias != MAP_FAILED,
+ __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED));
+
+ region->host_alias = align_ptr_up(region->mmap_alias, alignment);
+
+ return region->host_alias;
+}
+
+static void vm_mem_region_assert_no_duplicate(struct kvm_vm *vm, uint32_t slot,
+ uint64_t gpa, size_t size)
+{
struct userspace_mem_region *region;
- size_t backing_src_pagesz = get_backing_src_pagesz(src_type);
- size_t mem_size = npages * vm->page_size;
- size_t alignment;
-
- TEST_REQUIRE_SET_USER_MEMORY_REGION2();
-
- TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) == npages,
- "Number of guest pages is not compatible with the host. "
- "Try npages=%d", vm_adjust_num_guest_pages(vm->mode, npages));
-
- TEST_ASSERT((guest_paddr % vm->page_size) == 0, "Guest physical "
- "address not on a page boundary.\n"
- " guest_paddr: 0x%lx vm->page_size: 0x%x",
- guest_paddr, vm->page_size);
- TEST_ASSERT((((guest_paddr >> vm->page_shift) + npages) - 1)
- <= vm->max_gfn, "Physical range beyond maximum "
- "supported physical address,\n"
- " guest_paddr: 0x%lx npages: 0x%lx\n"
- " vm->max_gfn: 0x%lx vm->page_size: 0x%x",
- guest_paddr, npages, vm->max_gfn, vm->page_size);
/*
* Confirm a mem region with an overlapping address doesn't
* already exist.
*/
- region = (struct userspace_mem_region *) userspace_mem_region_find(
- vm, guest_paddr, (guest_paddr + npages * vm->page_size) - 1);
- if (region != NULL)
- TEST_FAIL("overlapping userspace_mem_region already "
- "exists\n"
- " requested guest_paddr: 0x%lx npages: 0x%lx "
- "page_size: 0x%x\n"
- " existing guest_paddr: 0x%lx size: 0x%lx",
- guest_paddr, npages, vm->page_size,
- (uint64_t) region->region.guest_phys_addr,
- (uint64_t) region->region.memory_size);
+ region = userspace_mem_region_find(vm, gpa, gpa + size - 1);
+ if (region != NULL) {
+ TEST_FAIL("overlapping userspace_mem_region already exists\n"
+ " requested gpa: 0x%lx size: 0x%lx"
+ " existing gpa: 0x%lx size: 0x%lx",
+ gpa, size,
+ (uint64_t) region->region.guest_phys_addr,
+ (uint64_t) region->region.memory_size);
+ }
/* Confirm no region with the requested slot already exists. */
- hash_for_each_possible(vm->regions.slot_hash, region, slot_node,
- slot) {
+ hash_for_each_possible(vm->regions.slot_hash, region, slot_node, slot) {
if (region->region.slot != slot)
continue;
- TEST_FAIL("A mem region with the requested slot "
- "already exists.\n"
- " requested slot: %u paddr: 0x%lx npages: 0x%lx\n"
- " existing slot: %u paddr: 0x%lx size: 0x%lx",
- slot, guest_paddr, npages,
- region->region.slot,
- (uint64_t) region->region.guest_phys_addr,
- (uint64_t) region->region.memory_size);
+ TEST_FAIL("A mem region with the requested slot already exists.\n"
+ " requested slot: %u paddr: 0x%lx size: 0x%lx\n"
+ " existing slot: %u paddr: 0x%lx size: 0x%lx",
+ slot, gpa, size,
+ region->region.slot,
+ (uint64_t) region->region.guest_phys_addr,
+ (uint64_t) region->region.memory_size);
}
+}
- /* Allocate and initialize new mem region structure. */
- region = calloc(1, sizeof(*region));
- TEST_ASSERT(region != NULL, "Insufficient Memory");
- region->mmap_size = mem_size;
+/**
+ * Add a @region to @vm. All necessary fields in region->region should already
+ * be populated.
+ *
+ * Think of this as the struct userspace_mem_region wrapper for the
+ * KVM_SET_USER_MEMORY_REGION2 ioctl.
+ */
+void vm_mem_region_add(struct kvm_vm *vm, struct userspace_mem_region *region)
+{
+ uint64_t npages;
+ uint64_t gpa;
+ int ret;
-#ifdef __s390x__
- /* On s390x, the host address must be aligned to 1M (due to PGSTEs) */
- alignment = 0x100000;
-#else
- alignment = 1;
-#endif
+ TEST_REQUIRE_SET_USER_MEMORY_REGION2();
- /*
- * When using THP mmap is not guaranteed to returned a hugepage aligned
- * address so we have to pad the mmap. Padding is not needed for HugeTLB
- * because mmap will always return an address aligned to the HugeTLB
- * page size.
- */
- if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
- alignment = max(backing_src_pagesz, alignment);
+ npages = region->region.memory_size / vm->page_size;
+ TEST_ASSERT(vm_adjust_num_guest_pages(vm->mode, npages) == npages,
+ "Number of guest pages is not compatible with the host. "
+ "Try npages=%d", vm_adjust_num_guest_pages(vm->mode, npages));
- TEST_ASSERT_EQ(guest_paddr, align_up(guest_paddr, backing_src_pagesz));
+ gpa = region->region.guest_phys_addr;
+ TEST_ASSERT((gpa % vm->page_size) == 0,
+ "Guest physical address not on a page boundary.\n"
+ " gpa: 0x%lx vm->page_size: 0x%x",
+ gpa, vm->page_size);
+ TEST_ASSERT((((gpa >> vm->page_shift) + npages) - 1) <= vm->max_gfn,
+ "Physical range beyond maximum supported physical address,\n"
+ " gpa: 0x%lx npages: 0x%lx\n"
+ " vm->max_gfn: 0x%lx vm->page_size: 0x%x",
+ gpa, npages, vm->max_gfn, vm->page_size);
- /* Add enough memory to align up if necessary */
- if (alignment > 1)
- region->mmap_size += alignment;
+ vm_mem_region_assert_no_duplicate(vm, region->region.slot, gpa,
+ region->mmap_size);
- region->fd = -1;
- if (backing_src_is_shared(src_type))
- region->fd = kvm_memfd_alloc(region->mmap_size,
- src_type == VM_MEM_SRC_SHARED_HUGETLB);
-
- region->mmap_start = mmap(NULL, region->mmap_size,
- PROT_READ | PROT_WRITE,
- vm_mem_backing_src_alias(src_type)->flag,
- region->fd, 0);
- TEST_ASSERT(region->mmap_start != MAP_FAILED,
- __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED));
-
- TEST_ASSERT(!is_backing_src_hugetlb(src_type) ||
- region->mmap_start == align_ptr_up(region->mmap_start, backing_src_pagesz),
- "mmap_start %p is not aligned to HugeTLB page size 0x%lx",
- region->mmap_start, backing_src_pagesz);
-
- /* Align host address */
- region->host_mem = align_ptr_up(region->mmap_start, alignment);
-
- /* As needed perform madvise */
- if ((src_type == VM_MEM_SRC_ANONYMOUS ||
- src_type == VM_MEM_SRC_ANONYMOUS_THP) && thp_configured()) {
- ret = madvise(region->host_mem, mem_size,
- src_type == VM_MEM_SRC_ANONYMOUS ? MADV_NOHUGEPAGE : MADV_HUGEPAGE);
- TEST_ASSERT(ret == 0, "madvise failed, addr: %p length: 0x%lx src_type: %s",
- region->host_mem, mem_size,
- vm_mem_backing_src_alias(src_type)->name);
- }
-
- region->backing_src_type = src_type;
-
- if (flags & KVM_MEM_GUEST_MEMFD) {
- if (guest_memfd < 0) {
- uint32_t guest_memfd_flags = 0;
- TEST_ASSERT(!guest_memfd_offset,
- "Offset must be zero when creating new guest_memfd");
- guest_memfd = vm_create_guest_memfd(vm, mem_size, guest_memfd_flags);
- } else {
- /*
- * Install a unique fd for each memslot so that the fd
- * can be closed when the region is deleted without
- * needing to track if the fd is owned by the framework
- * or by the caller.
- */
- guest_memfd = dup(guest_memfd);
- TEST_ASSERT(guest_memfd >= 0, __KVM_SYSCALL_ERROR("dup()", guest_memfd));
- }
-
- region->region.guest_memfd = guest_memfd;
- region->region.guest_memfd_offset = guest_memfd_offset;
- } else {
- region->region.guest_memfd = -1;
- }
-
- region->unused_phy_pages = sparsebit_alloc();
- if (vm_arch_has_protected_memory(vm))
- region->protected_phy_pages = sparsebit_alloc();
- sparsebit_set_num(region->unused_phy_pages,
- guest_paddr >> vm->page_shift, npages);
- region->region.slot = slot;
- region->region.flags = flags;
- region->region.guest_phys_addr = guest_paddr;
- region->region.memory_size = npages * vm->page_size;
- region->region.userspace_addr = (uintptr_t) region->host_mem;
ret = __vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION2, &region->region);
TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION2 IOCTL failed,\n"
- " rc: %i errno: %i\n"
- " slot: %u flags: 0x%x\n"
- " guest_phys_addr: 0x%lx size: 0x%lx guest_memfd: %d",
- ret, errno, slot, flags,
- guest_paddr, (uint64_t) region->region.memory_size,
- region->region.guest_memfd);
+ " rc: %i errno: %i\n"
+ " slot: %u flags: 0x%x\n"
+ " guest_phys_addr: 0x%lx size: 0x%llx guest_memfd: %d",
+ ret, errno, region->region.slot, region->region.flags,
+ gpa, region->region.memory_size,
+ region->region.guest_memfd);
+
+ sparsebit_set_num(region->unused_phy_pages, gpa >> vm->page_shift, npages);
/* Add to quick lookup data structures */
vm_userspace_mem_region_gpa_insert(&vm->regions.gpa_tree, region);
vm_userspace_mem_region_hva_insert(&vm->regions.hva_tree, region);
- hash_add(vm->regions.slot_hash, &region->slot_node, slot);
+ hash_add(vm->regions.slot_hash, &region->slot_node, region->region.slot);
+}
- /* If shared memory, create an alias. */
- if (region->fd >= 0) {
- region->mmap_alias = mmap(NULL, region->mmap_size,
- PROT_READ | PROT_WRITE,
- vm_mem_backing_src_alias(src_type)->flag,
- region->fd, 0);
- TEST_ASSERT(region->mmap_alias != MAP_FAILED,
- __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED));
+void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
+ uint64_t guest_paddr, uint32_t slot, uint64_t npages,
+ uint32_t flags, int guest_memfd, uint64_t guest_memfd_offset)
+{
+ struct userspace_mem_region *region;
+ size_t mapping_page_size;
+ size_t memslot_size;
+ int madvise_advice;
+ size_t mmap_size;
+ size_t alignment;
+ int mmap_flags;
+ int memfd;
- /* Align host alias address */
- region->host_alias = align_ptr_up(region->mmap_alias, alignment);
+ memslot_size = npages * vm->page_size;
+
+ mmap_flags = vm_mem_backing_src_alias(src_type)->flag;
+ madvise_advice = get_backing_src_madvise_advice(src_type);
+ mapping_page_size = compute_page_size(mmap_flags, madvise_advice);
+
+ TEST_ASSERT_EQ(guest_paddr, align_up(guest_paddr, mapping_page_size));
+
+ alignment = mapping_page_size;
+#ifdef __s390x__
+ alignment = max(alignment, S390X_HOST_ADDRESS_ALIGNMENT);
+#endif
+
+ region = vm_mem_region_alloc(vm);
+
+ memfd = -1;
+ if (backing_src_is_shared(src_type)) {
+ unsigned int memfd_flags = MFD_CLOEXEC;
+
+ if (src_type == VM_MEM_SRC_SHARED_HUGETLB)
+ memfd_flags |= MFD_HUGETLB;
+
+ memfd = kvm_create_memfd(memslot_size, memfd_flags);
}
+ region->fd = memfd;
+
+ mmap_size = align_up(memslot_size, alignment);
+ vm_mem_region_mmap(region, mmap_size, mmap_flags, memfd, 0);
+ vm_mem_region_install_memory(region, memslot_size, alignment);
+
+ if (backing_src_should_madvise(src_type))
+ vm_mem_region_madvise_thp(region, madvise_advice);
+
+ if (backing_src_is_shared(src_type))
+ vm_mem_region_mmap_alias(region, mmap_flags, alignment);
+
+ if (flags & KVM_MEM_GUEST_MEMFD) {
+ if (guest_memfd < 0) {
+ TEST_ASSERT(
+ guest_memfd_offset == 0,
+ "Offset must be zero when creating new guest_memfd");
+ guest_memfd = vm_create_guest_memfd(vm, memslot_size, 0);
+ }
+
+ vm_mem_region_install_guest_memfd(region, guest_memfd);
+ }
+
+ region->region.slot = slot;
+ region->region.flags = flags;
+ region->region.guest_phys_addr = guest_paddr;
+ region->region.guest_memfd_offset = guest_memfd_offset;
+ vm_mem_region_add(vm, region);
}
void vm_userspace_mem_region_add(struct kvm_vm *vm,
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 8ed0b74ae837..24dc90693afd 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -308,6 +308,31 @@ size_t get_backing_src_pagesz(uint32_t i)
}
}
+int backing_src_should_madvise(uint32_t i)
+{
+ switch (i) {
+ case VM_MEM_SRC_ANONYMOUS:
+ case VM_MEM_SRC_SHMEM:
+ case VM_MEM_SRC_ANONYMOUS_THP:
+ return true;
+ default:
+ return false;
+ }
+}
+
+int get_backing_src_madvise_advice(uint32_t i)
+{
+ switch (i) {
+ case VM_MEM_SRC_ANONYMOUS:
+ case VM_MEM_SRC_SHMEM:
+ return MADV_NOHUGEPAGE;
+ case VM_MEM_SRC_ANONYMOUS_THP:
+ return MADV_HUGEPAGE;
+ default:
+ return 0;
+ }
+}
+
bool is_backing_src_hugetlb(uint32_t i)
{
return !!(vm_mem_backing_src_alias(i)->flag & MAP_HUGETLB);
--
2.49.0.1045.g170613ef41-goog