* [PATCH RFC 1/3] KVM: guest_memfd: Extend creation API to support NUMA mempolicy
2024-09-16 16:57 [PATCH RFC 0/3] Add NUMA mempolicy support for KVM guest_memfd Shivank Garg
@ 2024-09-16 16:57 ` Shivank Garg
2024-09-16 16:57 ` [PATCH RFC 2/3] mm: Add mempolicy support to the filemap layer Shivank Garg
2024-09-16 16:57 ` [PATCH RFC 3/3] KVM: guest_memfd: Enforce NUMA mempolicy if available Shivank Garg
2 siblings, 0 replies; 6+ messages in thread
From: Shivank Garg @ 2024-09-16 16:57 UTC (permalink / raw)
To: pbonzini, corbet, akpm, willy
Cc: acme, namhyung, mpe, isaku.yamahata, joel, kvm, linux-doc,
linux-kernel, linux-mm, linux-fsdevel, shivankg, shivansh.dhiman,
bharata, nikunj
From: Shivansh Dhiman <shivansh.dhiman@amd.com>
Extend the guest-memfd creation API to introduce proper NUMA support,
allowing the VMM to set memory policies effectively. The memory policy
defines from which node memory is allocated.
The current implementation of KVM guest-memfd does not honor the settings
provided by the VMM. While mbind() can be used for NUMA policy support in
userspace applications, it does not work for guest-memfd because the memory
is not mapped to userspace.
Currently, SEV-SNP guests use guest-memfd as a memory backend and would
benefit from NUMA support. It enables fine-grained control over memory
allocation, optimizing performance for specific workload requirements.
To apply memory policy on a guest-memfd, extend the KVM_CREATE_GUEST_MEMFD
IOCTL with additional fields related to mempolicy.
- mpol_mode represents the policy mode (default, bind, interleave, or
preferred).
- host_nodes_addr denotes the userspace address of the nodemask, a bit
mask of nodes containing up to maxnode bits.
- The first bit of flags must be set to use mempolicy.
Store the mempolicy struct in the i_private_data field of the memfd's inode
mapping, which is currently unused in the context of guest-memfd.
Signed-off-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
Documentation/virt/kvm/api.rst | 13 ++++++++-
include/linux/mempolicy.h | 4 +++
include/uapi/linux/kvm.h | 5 +++-
mm/mempolicy.c | 52 ++++++++++++++++++++++++++++++++++
tools/include/uapi/linux/kvm.h | 5 +++-
virt/kvm/guest_memfd.c | 21 ++++++++++++--
virt/kvm/kvm_mm.h | 3 ++
7 files changed, 97 insertions(+), 6 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index b3be87489108..dcb61282c773 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6346,7 +6346,10 @@ and cannot be resized (guest_memfd files do however support PUNCH_HOLE).
struct kvm_create_guest_memfd {
__u64 size;
__u64 flags;
- __u64 reserved[6];
+ __u64 host_nodes_addr;
+ __u16 maxnode;
+ __u8 mpol_mode;
+ __u8 reserved[37];
};
Conceptually, the inode backing a guest_memfd file represents physical memory,
@@ -6367,6 +6370,14 @@ a single guest_memfd file, but the bound ranges must not overlap).
See KVM_SET_USER_MEMORY_REGION2 for additional details.
+NUMA memory policy support for KVM guest_memfd allows the host to specify
+memory allocation behavior for guest NUMA nodes, similar to mbind(). If
+KVM_GUEST_MEMFD_NUMA_ENABLE flag is set, memory allocations from the guest
+will use the specified policy and host-nodes for physical memory.
+- mpol_mode refers to the policy mode: default, bind, interleave, or
+ preferred.
+- host_nodes_addr points to bitmask of nodes containing up to maxnode bits.
+
4.143 KVM_PRE_FAULT_MEMORY
---------------------------
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 1add16f21612..468eeda2ec2f 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -299,4 +299,8 @@ static inline bool mpol_is_preferred_many(struct mempolicy *pol)
}
#endif /* CONFIG_NUMA */
+
+struct mempolicy *create_mpol_from_args(unsigned char mode,
+ const unsigned long __user *nmask,
+ unsigned short maxnode);
#endif
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 637efc055145..fda6cbef0a1d 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1561,7 +1561,10 @@ struct kvm_memory_attributes {
struct kvm_create_guest_memfd {
__u64 size;
__u64 flags;
- __u64 reserved[6];
+ __u64 host_nodes_addr;
+ __u16 maxnode;
+ __u8 mpol_mode;
+ __u8 reserved[37];
};
#define KVM_PRE_FAULT_MEMORY _IOWR(KVMIO, 0xd5, struct kvm_pre_fault_memory)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b858e22b259d..9e9450433fcc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -3557,3 +3557,55 @@ static int __init mempolicy_sysfs_init(void)
late_initcall(mempolicy_sysfs_init);
#endif /* CONFIG_SYSFS */
+
+#ifdef CONFIG_KVM_PRIVATE_MEM
+/**
+ * create_mpol_from_args - create a mempolicy structure from args
+ * @mode: NUMA memory policy mode
+ * @nmask: bitmask of NUMA nodes
+ * @maxnode: number of bits in the nodes bitmask
+ *
+ * Create a mempolicy from given nodemask and memory policy such as
+ * default, preferred, interleave or bind.
+ *
+ * Return: error encoded in a pointer or memory policy on success.
+ */
+struct mempolicy *create_mpol_from_args(unsigned char mode,
+ const unsigned long __user *nmask,
+ unsigned short maxnode)
+{
+ struct mm_struct *mm = current->mm;
+ unsigned short mode_flags;
+ struct mempolicy *mpol;
+ nodemask_t nodes;
+ int lmode = mode;
+ int err = -ENOMEM;
+
+ err = sanitize_mpol_flags(&lmode, &mode_flags);
+ if (err)
+ return ERR_PTR(err);
+
+ err = get_nodes(&nodes, nmask, maxnode);
+ if (err)
+ return ERR_PTR(err);
+
+ mpol = mpol_new(mode, mode_flags, &nodes);
+ if (IS_ERR_OR_NULL(mpol))
+ return mpol;
+
+ NODEMASK_SCRATCH(scratch);
+ if (!scratch)
+ return ERR_PTR(-ENOMEM);
+
+ mmap_write_lock(mm);
+ err = mpol_set_nodemask(mpol, &nodes, scratch);
+ mmap_write_unlock(mm);
+ NODEMASK_SCRATCH_FREE(scratch);
+
+ if (err)
+ return ERR_PTR(err);
+
+ return mpol;
+}
+EXPORT_SYMBOL(create_mpol_from_args);
+#endif
diff --git a/tools/include/uapi/linux/kvm.h b/tools/include/uapi/linux/kvm.h
index e5af8c692dc0..e3effcd1e358 100644
--- a/tools/include/uapi/linux/kvm.h
+++ b/tools/include/uapi/linux/kvm.h
@@ -1546,7 +1546,10 @@ struct kvm_memory_attributes {
struct kvm_create_guest_memfd {
__u64 size;
__u64 flags;
- __u64 reserved[6];
+ __u64 host_nodes_addr;
+ __u16 maxnode;
+ __u8 mpol_mode;
+ __u8 reserved[37];
};
#define KVM_PRE_FAULT_MEMORY _IOWR(KVMIO, 0xd5, struct kvm_pre_fault_memory)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index e930014b4bdc..8f1877be4976 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -4,6 +4,7 @@
#include <linux/kvm_host.h>
#include <linux/pagemap.h>
#include <linux/anon_inodes.h>
+#include <linux/mempolicy.h>
#include "kvm_mm.h"
@@ -445,7 +446,8 @@ static const struct inode_operations kvm_gmem_iops = {
.setattr = kvm_gmem_setattr,
};
-static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
+static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags,
+ struct mempolicy *pol)
{
const char *anon_name = "[kvm-gmem]";
struct kvm_gmem *gmem;
@@ -478,6 +480,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
inode->i_private = (void *)(unsigned long)flags;
inode->i_op = &kvm_gmem_iops;
inode->i_mapping->a_ops = &kvm_gmem_aops;
+ inode->i_mapping->i_private_data = (void *)pol;
inode->i_mode |= S_IFREG;
inode->i_size = size;
mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
@@ -505,7 +508,8 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
{
loff_t size = args->size;
u64 flags = args->flags;
- u64 valid_flags = 0;
+ u64 valid_flags = GUEST_MEMFD_NUMA_ENABLE;
+ struct mempolicy *mpol = NULL;
if (flags & ~valid_flags)
return -EINVAL;
@@ -513,7 +517,18 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
if (size <= 0 || !PAGE_ALIGNED(size))
return -EINVAL;
- return __kvm_gmem_create(kvm, size, flags);
+ if (flags & GUEST_MEMFD_NUMA_ENABLE) {
+ unsigned char mode = args->mpol_mode;
+ unsigned short maxnode = args->maxnode;
+ const unsigned long __user *user_nmask =
+ (const unsigned long *)args->host_nodes_addr;
+
+ mpol = create_mpol_from_args(mode, user_nmask, maxnode);
+ if (IS_ERR_OR_NULL(mpol))
+ return PTR_ERR(mpol);
+ }
+
+ return __kvm_gmem_create(kvm, size, flags, mpol);
}
int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index 715f19669d01..3dd8495ae03d 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -36,6 +36,9 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
#endif /* HAVE_KVM_PFNCACHE */
#ifdef CONFIG_KVM_PRIVATE_MEM
+/* Flag to check NUMA policy while creating KVM guest-memfd. */
+#define GUEST_MEMFD_NUMA_ENABLE BIT_ULL(0)
+
void kvm_gmem_init(struct module *module);
int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
--
2.34.1
* [PATCH RFC 2/3] mm: Add mempolicy support to the filemap layer
2024-09-16 16:57 [PATCH RFC 0/3] Add NUMA mempolicy support for KVM guest_memfd Shivank Garg
2024-09-16 16:57 ` [PATCH RFC 1/3] KVM: guest_memfd: Extend creation API to support NUMA mempolicy Shivank Garg
@ 2024-09-16 16:57 ` Shivank Garg
2024-09-16 21:42 ` Matthew Wilcox
2024-09-16 16:57 ` [PATCH RFC 3/3] KVM: guest_memfd: Enforce NUMA mempolicy if available Shivank Garg
2 siblings, 1 reply; 6+ messages in thread
From: Shivank Garg @ 2024-09-16 16:57 UTC (permalink / raw)
To: pbonzini, corbet, akpm, willy
Cc: acme, namhyung, mpe, isaku.yamahata, joel, kvm, linux-doc,
linux-kernel, linux-mm, linux-fsdevel, shivankg, shivansh.dhiman,
bharata, nikunj
From: Shivansh Dhiman <shivansh.dhiman@amd.com>
Introduce mempolicy support to the filemap layer. Add filemap_grab_folio_mpol(),
filemap_alloc_folio_mpol_noprof() and __filemap_get_folio_mpol() APIs that
take a mempolicy struct as an argument.
These APIs are required by VMs using the KVM guest-memfd memory backend for
NUMA-mempolicy-aware allocations.
Signed-off-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
include/linux/pagemap.h | 30 ++++++++++++++++++++++++++++++
mm/filemap.c | 30 +++++++++++++++++++++++++-----
mm/mempolicy.c | 1 +
3 files changed, 56 insertions(+), 5 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index d9c7edb6422b..da7e41a45588 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -564,11 +564,19 @@ static inline void *detach_page_private(struct page *page)
#ifdef CONFIG_NUMA
struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order);
+struct folio *filemap_alloc_folio_mpol_noprof(gfp_t gfp, unsigned int order,
+ struct mempolicy *mpol);
#else
static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
{
return folio_alloc_noprof(gfp, order);
}
+static inline struct folio *filemap_alloc_folio_mpol_noprof(gfp_t gfp,
+ unsigned int order,
+ struct mempolicy *mpol)
+{
+ return filemap_alloc_folio_noprof(gfp, order);
+}
#endif
#define filemap_alloc_folio(...) \
@@ -652,6 +660,8 @@ static inline fgf_t fgf_set_order(size_t size)
void *filemap_get_entry(struct address_space *mapping, pgoff_t index);
struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
fgf_t fgp_flags, gfp_t gfp);
+struct folio *__filemap_get_folio_mpol(struct address_space *mapping,
+ pgoff_t index, fgf_t fgp_flags, gfp_t gfp, struct mempolicy *mpol);
struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
fgf_t fgp_flags, gfp_t gfp);
@@ -710,6 +720,26 @@ static inline struct folio *filemap_grab_folio(struct address_space *mapping,
mapping_gfp_mask(mapping));
}
+/**
+ * filemap_grab_folio_mpol - grab a folio from the page cache
+ * @mapping: The address space to search
+ * @index: The page index
+ * @mpol: The mempolicy to apply
+ *
+ * Same as filemap_grab_folio(), except that it allocates the folio using
+ * given memory policy.
+ *
+ * Return: A found or created folio. ERR_PTR(-ENOMEM) if no folio is found
+ * and failed to create a folio.
+ */
+static inline struct folio *filemap_grab_folio_mpol(struct address_space *mapping,
+ pgoff_t index, struct mempolicy *mpol)
+{
+ return __filemap_get_folio_mpol(mapping, index,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
+ mapping_gfp_mask(mapping), mpol);
+}
+
/**
* find_get_page - find and get a page reference
* @mapping: the address_space to search
diff --git a/mm/filemap.c b/mm/filemap.c
index d62150418b91..a94022e31974 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -990,8 +990,13 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
EXPORT_SYMBOL_GPL(filemap_add_folio);
#ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+struct folio *filemap_alloc_folio_mpol_noprof(gfp_t gfp, unsigned int order,
+ struct mempolicy *mpol)
{
+ if (mpol)
+ return folio_alloc_mpol_noprof(gfp, order, mpol,
+ NO_INTERLEAVE_INDEX, numa_node_id());
+
int n;
struct folio *folio;
@@ -1007,6 +1012,12 @@ struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
}
return folio_alloc_noprof(gfp, order);
}
+EXPORT_SYMBOL(filemap_alloc_folio_mpol_noprof);
+
+struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+{
+ return filemap_alloc_folio_mpol_noprof(gfp, order, NULL);
+}
EXPORT_SYMBOL(filemap_alloc_folio_noprof);
#endif
@@ -1861,11 +1872,12 @@ void *filemap_get_entry(struct address_space *mapping, pgoff_t index)
}
/**
- * __filemap_get_folio - Find and get a reference to a folio.
+ * __filemap_get_folio_mpol - Find and get a reference to a folio.
* @mapping: The address_space to search.
* @index: The page index.
* @fgp_flags: %FGP flags modify how the folio is returned.
* @gfp: Memory allocation flags to use if %FGP_CREAT is specified.
+ * @mpol: The mempolicy to apply.
*
* Looks up the page cache entry at @mapping & @index.
*
@@ -1876,8 +1888,8 @@ void *filemap_get_entry(struct address_space *mapping, pgoff_t index)
*
* Return: The found folio or an ERR_PTR() otherwise.
*/
-struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
- fgf_t fgp_flags, gfp_t gfp)
+struct folio *__filemap_get_folio_mpol(struct address_space *mapping, pgoff_t index,
+ fgf_t fgp_flags, gfp_t gfp, struct mempolicy *mpol)
{
struct folio *folio;
@@ -1947,7 +1959,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
err = -ENOMEM;
if (order > 0)
alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
- folio = filemap_alloc_folio(alloc_gfp, order);
+ folio = filemap_alloc_folio_mpol_noprof(alloc_gfp, order, mpol);
if (!folio)
continue;
@@ -1978,6 +1990,14 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
return ERR_PTR(-ENOENT);
return folio;
}
+EXPORT_SYMBOL(__filemap_get_folio_mpol);
+
+struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
+ fgf_t fgp_flags, gfp_t gfp)
+{
+ return __filemap_get_folio_mpol(mapping, index,
+ fgp_flags, gfp, NULL);
+}
EXPORT_SYMBOL(__filemap_get_folio);
static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 9e9450433fcc..88da732cf2be 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2281,6 +2281,7 @@ struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
return page_rmappable_folio(alloc_pages_mpol_noprof(gfp | __GFP_COMP,
order, pol, ilx, nid));
}
+EXPORT_SYMBOL(folio_alloc_mpol_noprof);
/**
* vma_alloc_folio - Allocate a folio for a VMA.
--
2.34.1
* Re: [PATCH RFC 2/3] mm: Add mempolicy support to the filemap layer
2024-09-16 16:57 ` [PATCH RFC 2/3] mm: Add mempolicy support to the filemap layer Shivank Garg
@ 2024-09-16 21:42 ` Matthew Wilcox
2024-09-17 12:43 ` Shivank Garg
0 siblings, 1 reply; 6+ messages in thread
From: Matthew Wilcox @ 2024-09-16 21:42 UTC (permalink / raw)
To: Shivank Garg
Cc: pbonzini, corbet, akpm, acme, namhyung, mpe, isaku.yamahata, joel,
kvm, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
shivansh.dhiman, bharata, nikunj
On Mon, Sep 16, 2024 at 04:57:42PM +0000, Shivank Garg wrote:
> @@ -652,6 +660,8 @@ static inline fgf_t fgf_set_order(size_t size)
> void *filemap_get_entry(struct address_space *mapping, pgoff_t index);
> struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> fgf_t fgp_flags, gfp_t gfp);
> +struct folio *__filemap_get_folio_mpol(struct address_space *mapping,
> + pgoff_t index, fgf_t fgp_flags, gfp_t gfp, struct mempolicy *mpol);
> struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
> fgf_t fgp_flags, gfp_t gfp);
>
> @@ -710,6 +720,26 @@ static inline struct folio *filemap_grab_folio(struct address_space *mapping,
> mapping_gfp_mask(mapping));
> }
>
> +/**
> + * filemap_grab_folio_mpol - grab a folio from the page cache
> + * @mapping: The address space to search
> + * @index: The page index
> + * @mpol: The mempolicy to apply
> + *
> + * Same as filemap_grab_folio(), except that it allocates the folio using
> + * given memory policy.
> + *
> + * Return: A found or created folio. ERR_PTR(-ENOMEM) if no folio is found
> + * and failed to create a folio.
> + */
> +static inline struct folio *filemap_grab_folio_mpol(struct address_space *mapping,
> + pgoff_t index, struct mempolicy *mpol)
> +{
> + return __filemap_get_folio_mpol(mapping, index,
> + FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
> + mapping_gfp_mask(mapping), mpol);
> +}
This should be conditional on CONFIG_NUMA, just like
filemap_alloc_folio_mpol_noprof() above.
> @@ -1947,7 +1959,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
> err = -ENOMEM;
> if (order > 0)
> alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
> - folio = filemap_alloc_folio(alloc_gfp, order);
> + folio = filemap_alloc_folio_mpol_noprof(alloc_gfp, order, mpol);
Why use the _noprof variant here?
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 9e9450433fcc..88da732cf2be 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2281,6 +2281,7 @@ struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
> return page_rmappable_folio(alloc_pages_mpol_noprof(gfp | __GFP_COMP,
> order, pol, ilx, nid));
> }
> +EXPORT_SYMBOL(folio_alloc_mpol_noprof);
Why does this need to be exported? What module will use it?
* Re: [PATCH RFC 2/3] mm: Add mempolicy support to the filemap layer
2024-09-16 21:42 ` Matthew Wilcox
@ 2024-09-17 12:43 ` Shivank Garg
0 siblings, 0 replies; 6+ messages in thread
From: Shivank Garg @ 2024-09-17 12:43 UTC (permalink / raw)
To: Matthew Wilcox
Cc: pbonzini, corbet, akpm, acme, namhyung, mpe, isaku.yamahata, joel,
kvm, linux-doc, linux-kernel, linux-mm, linux-fsdevel, bharata,
nikunj
Hello Matthew,
Thank you for the review comments.
On 9/17/2024 3:12 AM, Matthew Wilcox wrote:
> On Mon, Sep 16, 2024 at 04:57:42PM +0000, Shivank Garg wrote:
>> +static inline struct folio *filemap_grab_folio_mpol(struct address_space *mapping,
>> + pgoff_t index, struct mempolicy *mpol)
>> +{
>> + return __filemap_get_folio_mpol(mapping, index,
>> + FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
>> + mapping_gfp_mask(mapping), mpol);
>> +}
>
> This should be conditional on CONFIG_NUMA, just like
> filemap_alloc_folio_mpol_noprof() above.
+#ifdef CONFIG_NUMA
static inline struct folio *filemap_grab_folio_mpol(struct address_space *mapping,
pgoff_t index, struct mempolicy *mpol)
{
@@ -739,6 +742,13 @@ static inline struct folio *filemap_grab_folio_mpol(struct address_space *mappin
FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
mapping_gfp_mask(mapping), mpol);
}
+#else
+static inline struct folio *filemap_grab_folio_mpol(struct address_space *mapping,
+ pgoff_t index, struct mempolicy *mpol)
+{
+ return filemap_grab_folio(mapping, index);
+}
+#endif /* CONFIG_NUMA */
>
>> @@ -1947,7 +1959,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
>> err = -ENOMEM;
>> if (order > 0)
>> alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
>> - folio = filemap_alloc_folio(alloc_gfp, order);
>> + folio = filemap_alloc_folio_mpol_noprof(alloc_gfp, order, mpol);
>
> Why use the _noprof variant here?
I've defined the filemap_alloc_folio_mpol variant for using here:
+#define filemap_alloc_folio_mpol(...) \
+ alloc_hooks(filemap_alloc_folio_mpol_noprof(__VA_ARGS__))
+++ b/mm/filemap.c
@@ -1959,7 +1959,7 @@ struct folio *__filemap_get_folio_mpol(struct address_space *mapping, pgoff_t in
err = -ENOMEM;
if (order > 0)
alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
- folio = filemap_alloc_folio_mpol_noprof(alloc_gfp, order, mpol);
+ folio = filemap_alloc_folio_mpol(alloc_gfp, order, mpol);
if (!folio)
>
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index 9e9450433fcc..88da732cf2be 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -2281,6 +2281,7 @@ struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
>> return page_rmappable_folio(alloc_pages_mpol_noprof(gfp | __GFP_COMP,
>> order, pol, ilx, nid));
>> }
>> +EXPORT_SYMBOL(folio_alloc_mpol_noprof);
>
> Why does this need to be exported? What module will use it?

I've removed this EXPORT.
Thank you for the suggestion.
I overlooked those details and will post the suggested changes in the next version of this patchset.
Best Regards,
Shivank
* [PATCH RFC 3/3] KVM: guest_memfd: Enforce NUMA mempolicy if available
2024-09-16 16:57 [PATCH RFC 0/3] Add NUMA mempolicy support for KVM guest_memfd Shivank Garg
2024-09-16 16:57 ` [PATCH RFC 1/3] KVM: guest_memfd: Extend creation API to support NUMA mempolicy Shivank Garg
2024-09-16 16:57 ` [PATCH RFC 2/3] mm: Add mempolicy support to the filemap layer Shivank Garg
@ 2024-09-16 16:57 ` Shivank Garg
2 siblings, 0 replies; 6+ messages in thread
From: Shivank Garg @ 2024-09-16 16:57 UTC (permalink / raw)
To: pbonzini, corbet, akpm, willy
Cc: acme, namhyung, mpe, isaku.yamahata, joel, kvm, linux-doc,
linux-kernel, linux-mm, linux-fsdevel, shivankg, shivansh.dhiman,
bharata, nikunj
From: Shivansh Dhiman <shivansh.dhiman@amd.com>
Enforce memory policy on guest-memfd to provide proper NUMA support.
Previously, guest-memfd allocations followed the local NUMA node in the
absence of a process mempolicy, resulting in arbitrary memory placement.
Moreover, mbind() cannot be used since the memory is not mapped to userspace.
To support NUMA policies, retrieve the mempolicy struct from the
i_private_data field of the memfd's inode mapping. Use
filemap_grab_folio_mpol() to ensure that allocations follow the specified
memory policy.
Signed-off-by: Shivansh Dhiman <shivansh.dhiman@amd.com>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
virt/kvm/guest_memfd.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 8f1877be4976..8553d7069ba8 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -130,12 +130,15 @@ static struct folio *__kvm_gmem_get_folio(struct inode *inode, pgoff_t index,
bool allow_huge)
{
struct folio *folio = NULL;
+ struct mempolicy *mpol;
if (gmem_2m_enabled && allow_huge)
folio = kvm_gmem_get_huge_folio(inode, index, PMD_ORDER);
- if (!folio)
- folio = filemap_grab_folio(inode->i_mapping, index);
+ if (!folio) {
+ mpol = (struct mempolicy *)(inode->i_mapping->i_private_data);
+ folio = filemap_grab_folio_mpol(inode->i_mapping, index, mpol);
+ }
pr_debug("%s: allocate folio with PFN %lx order %d\n",
__func__, folio_pfn(folio), folio_order(folio));
--
2.34.1