From: Paolo Bonzini <pbonzini@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>,
Marc Zyngier <maz@kernel.org>,
Oliver Upton <oliver.upton@linux.dev>,
Huacai Chen <chenhuacai@kernel.org>,
Michael Ellerman <mpe@ellerman.id.au>,
Anup Patel <anup@brainfault.org>,
Paul Walmsley <paul.walmsley@sifive.com>,
Palmer Dabbelt <palmer@dabbelt.com>,
Albert Ou <aou@eecs.berkeley.edu>,
Sean Christopherson <seanjc@google.com>,
Alexander Viro <viro@zeniv.linux.org.uk>,
Christian Brauner <brauner@kernel.org>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Andrew Morton <akpm@linux-foundation.org>
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
"Xiaoyao Li" <xiaoyao.li@intel.com>,
"Xu Yilun" <yilun.xu@intel.com>,
"Chao Peng" <chao.p.peng@linux.intel.com>,
"Fuad Tabba" <tabba@google.com>,
"Jarkko Sakkinen" <jarkko@kernel.org>,
"Anish Moorthy" <amoorthy@google.com>,
"David Matlack" <dmatlack@google.com>,
"Yu Zhang" <yu.c.zhang@linux.intel.com>,
"Isaku Yamahata" <isaku.yamahata@intel.com>,
"Mickaël Salaün" <mic@digikod.net>,
"Vlastimil Babka" <vbabka@suse.cz>,
"Vishal Annapurve" <vannapurve@google.com>,
"Ackerley Tng" <ackerleytng@google.com>,
"Maciej Szmigiero" <mail@maciej.szmigiero.name>,
"David Hildenbrand" <david@redhat.com>,
"Quentin Perret" <qperret@google.com>,
"Michael Roth" <michael.roth@amd.com>,
"Wei Wang" <wei.w.wang@intel.com>,
"Liam Merwick" <liam.merwick@oracle.com>,
"Isaku Yamahata" <isaku.yamahata@gmail.com>,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCH 36/34] KVM: Add transparent hugepage support for dedicated guest memory
Date: Sun, 5 Nov 2023 17:30:39 +0100
Message-ID: <20231105163040.14904-37-pbonzini@redhat.com>
In-Reply-To: <20231105163040.14904-1-pbonzini@redhat.com>
From: Sean Christopherson <seanjc@google.com>
Extend guest_memfd to allow backing guest memory with transparent
hugepages. Require userspace to opt in via a flag even though there's no
known/anticipated use case for forcing small pages: because THP support
is optional, an explicit opt-in avoids ending up in a situation where
userspace is unaware that KVM can't provide hugepages.
For simplicity, require the guest_memfd size to be a multiple of the
hugepage size, e.g. so that KVM doesn't need to do bounds checking when
deciding whether or not to allocate a huge folio.
When KVM gets a pfn from guest_memfd and reports the max order, force
order-0 pages if the hugepage is not fully contained by the memslot
binding, e.g. if userspace requested hugepages but punched a hole in the
memslot bindings in order to emulate x86's VGA hole.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20231027182217.3615211-18-seanjc@google.com>
[Allow even with !CONFIG_TRANSPARENT_HUGEPAGE; dropped from the series
for now due to uneasiness about the API. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
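Not part of the patch: a minimal userspace sketch of the opt-in. The
helper name is made up for illustration; it assumes the uapi additions
from this series (struct kvm_create_guest_memfd, KVM_CREATE_GUEST_MEMFD,
KVM_GUEST_MEMFD_ALLOW_HUGEPAGE), a vm_fd obtained from KVM_CREATE_VM, and
omits error handling.

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /*
   * Create a guest_memfd with best-effort THP backing.  With
   * KVM_GUEST_MEMFD_ALLOW_HUGEPAGE set, 'size' must be a multiple of
   * the PMD size (2 MiB on x86), otherwise the ioctl fails with EINVAL.
   */
  static int create_hugepage_gmem(int vm_fd, __u64 size)
  {
          struct kvm_create_guest_memfd args = {
                  .size  = size,
                  .flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE,
          };

          return ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &args);
  }
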
Documentation/virt/kvm/api.rst | 7 ++
include/uapi/linux/kvm.h | 2 +
.../testing/selftests/kvm/guest_memfd_test.c | 15 ++++
tools/testing/selftests/kvm/lib/kvm_util.c | 9 +++
.../kvm/x86_64/private_mem_conversions_test.c | 7 +-
virt/kvm/guest_memfd.c | 70 ++++++++++++++++---
6 files changed, 101 insertions(+), 9 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 38882263278d..c13ede498369 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6318,6 +6318,8 @@ and cannot be resized (guest_memfd files do however support PUNCH_HOLE).
__u64 reserved[6];
};
+ #define KVM_GUEST_MEMFD_ALLOW_HUGEPAGE (1ULL << 0)
+
Conceptually, the inode backing a guest_memfd file represents physical memory,
i.e. is coupled to the virtual machine as a thing, not to a "struct kvm". The
file itself, which is bound to a "struct kvm", is that instance's view of the
@@ -6334,6 +6336,11 @@ most one mapping per page, i.e. binding multiple memory regions to a single
guest_memfd range is not allowed (any number of memory regions can be bound to
a single guest_memfd file, but the bound ranges must not overlap).
+If KVM_GUEST_MEMFD_ALLOW_HUGEPAGE is set in flags, KVM will attempt to allocate
+and map hugepages for the guest_memfd file. This is currently best effort. If
+KVM_GUEST_MEMFD_ALLOW_HUGEPAGE is set, the size must be aligned to the maximum
+transparent hugepage size supported by the kernel.
+
See KVM_SET_USER_MEMORY_REGION2 for additional details.
5. The kvm_run structure
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index e9cb2df67a1d..b4ba4b53b834 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -2316,4 +2316,6 @@ struct kvm_create_guest_memfd {
__u64 reserved[6];
};
+#define KVM_GUEST_MEMFD_ALLOW_HUGEPAGE (1ULL << 0)
+
#endif /* __LINUX_KVM_H */
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ea0ae7e25330..c15de9852316 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -123,6 +123,7 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
{
+ uint64_t valid_flags = 0;
size_t page_size = getpagesize();
uint64_t flag;
size_t size;
@@ -135,9 +136,23 @@ static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
size);
}
+ if (thp_configured()) {
+ for (size = page_size * 2; size < get_trans_hugepagesz(); size += page_size) {
+ fd = __vm_create_guest_memfd(vm, size, KVM_GUEST_MEMFD_ALLOW_HUGEPAGE);
+ TEST_ASSERT(fd == -1 && errno == EINVAL,
+ "guest_memfd() with non-hugepage-aligned page size '0x%lx' should fail with EINVAL",
+ size);
+ }
+
+ valid_flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
+ }
+
for (flag = 1; flag; flag <<= 1) {
uint64_t bit;
+ if (flag & valid_flags)
+ continue;
+
fd = __vm_create_guest_memfd(vm, page_size, flag);
TEST_ASSERT(fd == -1 && errno == EINVAL,
"guest_memfd() with flag '0x%lx' should fail with EINVAL",
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index d05d95cc3693..ed81a00e5df1 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1022,6 +1022,15 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
if (flags & KVM_MEM_GUEST_MEMFD) {
if (guest_memfd < 0) {
uint32_t guest_memfd_flags = 0;
+
+ /*
+ * Allow hugepages for the guest memfd backing if the
+ * "normal" backing is allowed/required to be huge.
+ */
+ if (src_type != VM_MEM_SRC_ANONYMOUS &&
+ src_type != VM_MEM_SRC_SHMEM)
+ guest_memfd_flags |= KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
+
TEST_ASSERT(!guest_memfd_offset,
"Offset must be zero when creating new guest_memfd");
guest_memfd = vm_create_guest_memfd(vm, mem_size, guest_memfd_flags);
diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
index 4d6a37a5d896..f707fd401a4f 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_conversions_test.c
@@ -380,6 +380,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
const size_t slot_size = memfd_size / nr_memslots;
struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
pthread_t threads[KVM_MAX_VCPUS];
+ uint64_t memfd_flags;
struct kvm_vm *vm;
int memfd, i, r;
@@ -395,7 +396,11 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));
- memfd = vm_create_guest_memfd(vm, memfd_size, 0);
+ if (backing_src_can_be_huge(src_type))
+ memfd_flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
+ else
+ memfd_flags = 0;
+ memfd = vm_create_guest_memfd(vm, memfd_size, memfd_flags);
for (i = 0; i < nr_memslots; i++)
vm_mem_add(vm, src_type, BASE_DATA_GPA + slot_size * i,
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index e65f4170425c..3e48e8997626 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -13,14 +13,44 @@ struct kvm_gmem {
struct list_head entry;
};
+static struct folio *kvm_gmem_get_huge_folio(struct inode *inode, pgoff_t index, unsigned order)
+{
+ pgoff_t npages = 1UL << order;
+ pgoff_t huge_index = round_down(index, npages);
+ unsigned long flags = (unsigned long)inode->i_private;
+ struct address_space *mapping = inode->i_mapping;
+ gfp_t gfp = mapping_gfp_mask(mapping);
+ struct folio *folio;
+
+ if (!(flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE))
+ return NULL;
+
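+ /*
+ * If any part of the would-be huge range already has a folio in the
+ * page cache, fall back to an order-0 allocation in the caller rather
+ * than trying to replace or merge the existing entries.
+ */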
+ if (filemap_range_has_page(mapping, (loff_t)huge_index << PAGE_SHIFT,
+ (loff_t)(huge_index + npages - 1) << PAGE_SHIFT))
+ return NULL;
+
+ folio = filemap_alloc_folio(gfp, order);
+ if (!folio)
+ return NULL;
+
+ if (filemap_add_folio(mapping, folio, huge_index, gfp)) {
+ folio_put(folio);
+ return NULL;
+ }
+
+ return folio;
+}
+
static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
{
struct folio *folio;
- /* TODO: Support huge pages. */
- folio = filemap_grab_folio(inode->i_mapping, index);
- if (IS_ERR_OR_NULL(folio))
- return NULL;
+ folio = kvm_gmem_get_huge_folio(inode, index, PMD_ORDER);
+ if (!folio) {
+ folio = filemap_grab_folio(inode->i_mapping, index);
+ if (IS_ERR_OR_NULL(folio))
+ return NULL;
+ }
/*
* Use the up-to-date flag to track whether or not the memory has been
@@ -366,6 +396,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
inode->i_mode |= S_IFREG;
inode->i_size = size;
mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+ mapping_set_large_folios(inode->i_mapping);
mapping_set_unmovable(inode->i_mapping);
/* Unmovable mappings are supposed to be marked unevictable as well. */
WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
@@ -389,7 +420,7 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
{
loff_t size = args->size;
u64 flags = args->flags;
- u64 valid_flags = 0;
+ u64 valid_flags = KVM_GUEST_MEMFD_ALLOW_HUGEPAGE;
if (flags & ~valid_flags)
return -EINVAL;
@@ -397,6 +428,13 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
if (size <= 0 || !PAGE_ALIGNED(size))
return -EINVAL;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ BUILD_BUG_ON(PMD_SIZE != HPAGE_PMD_SIZE);
+#endif
+ if ((flags & KVM_GUEST_MEMFD_ALLOW_HUGEPAGE) &&
+ !IS_ALIGNED(size, PMD_SIZE))
+ return -EINVAL;
+
return __kvm_gmem_create(kvm, size, flags);
}
@@ -491,7 +529,7 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
gfn_t gfn, kvm_pfn_t *pfn, int *max_order)
{
- pgoff_t index = gfn - slot->base_gfn + slot->gmem.pgoff;
+ pgoff_t index, huge_index;
struct kvm_gmem *gmem;
struct folio *folio;
struct page *page;
@@ -504,6 +542,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
gmem = file->private_data;
+ index = gfn - slot->base_gfn + slot->gmem.pgoff;
if (WARN_ON_ONCE(xa_load(&gmem->bindings, index) != slot)) {
r = -EIO;
goto out_fput;
@@ -523,9 +562,24 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
page = folio_file_page(folio, index);
*pfn = page_to_pfn(page);
- if (max_order)
- *max_order = 0;
+ if (!max_order)
+ goto success;
+ *max_order = compound_order(compound_head(page));
+ if (!*max_order)
+ goto success;
+
+ /*
+ * The folio can be mapped with a hugepage if and only if the folio is
+ * fully contained by the range the memslot is bound to. Note, the
+ * caller is responsible for handling gfn alignment, this only deals
+ * with the file binding.
+ */
+ huge_index = round_down(index, 1ull << *max_order);
+ if (huge_index < ALIGN(slot->gmem.pgoff, 1ull << *max_order) ||
+ huge_index + (1ull << *max_order) > slot->gmem.pgoff + slot->npages)
+ *max_order = 0;
+success:
r = 0;
out_unlock:
--
2.39.1
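
Not part of the email: a standalone sketch of the containment check in
kvm_gmem_get_pfn(), with illustrative values. The helper name and the
simplified bounds test (comparing the rounded-down folio start directly
against the binding, which is equivalent for naturally aligned folios)
are this sketch's own, not the patch's.

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * Is the order-'order' folio covering file index 'index' fully inside
   * the memslot binding [pgoff, pgoff + npages)?
   */
  static bool folio_in_binding(uint64_t index, unsigned int order,
                               uint64_t pgoff, uint64_t npages)
  {
          uint64_t n = 1ull << order;
          uint64_t start = index & ~(n - 1);      /* round_down */

          return start >= pgoff && start + n <= pgoff + npages;
  }

  /*
   * With 2 MiB folios (order 9, i.e. 512 pages) and a binding of 768
   * pages at pgoff 0, index 600 lands in the folio spanning [512, 1024),
   * which overruns the binding, so KVM would force *max_order = 0 and
   * map that range with order-0 pages.
   */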