From: Lance Yang <lance.yang@linux.dev>
To: Yafang Shao <laoar.shao@gmail.com>
Cc: akpm@linux-foundation.org, david@redhat.com, ziy@nvidia.com,
	baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com,
	Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com,
	dev.jain@arm.com, hannes@cmpxchg.org, usamaarif642@gmail.com,
	gutierrez.asier@huawei-partners.com, willy@infradead.org,
	ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
	ameryhung@gmail.com, rientjes@google.com, corbet@lwn.net,
	21cnbao@gmail.com, shakeel.butt@linux.dev, bpf@vger.kernel.org,
	linux-mm@kvack.org, linux-doc@vger.kernel.org
Subject: Re: [PATCH v7 mm-new 02/10] mm: thp: add support for BPF based THP order selection
Date: Thu, 25 Sep 2025 18:05:36 +0800
Message-ID: <CABzRoyaFv4ciJwcdU=1qQNvSWE_PPQonn7ehE7Zz_PHNHfN4gA@mail.gmail.com>
In-Reply-To: <20250910024447.64788-3-laoar.shao@gmail.com>

On Wed, Sep 10, 2025 at 10:53 AM Yafang Shao <laoar.shao@gmail.com> wrote:
>
> This patch introduces a new BPF struct_ops called bpf_thp_ops for dynamic
> THP tuning. It includes a hook, thp_get_order(), allowing BPF
> programs to influence THP order selection based on factors such as:
> - Workload identity
>   For example, workloads running in specific containers or cgroups.
> - Allocation context
>   Whether the allocation occurs in the page fault, khugepaged, swap, or
>   other paths.
> - VMA's memory advice settings
>   MADV_HUGEPAGE or MADV_NOHUGEPAGE
> - Memory pressure
>   PSI system data or associated cgroup PSI metrics
>
> The kernel API of this new BPF hook is as follows:
>
> /**
>  * @thp_order_fn_t: Get the suggested THP order from a BPF program for allocation
>  * @vma: vm_area_struct associated with the THP allocation
>  * @vma_type: The VMA type: BPF_THP_VM_HUGEPAGE if VM_HUGEPAGE is set,
>  *            BPF_THP_VM_NOHUGEPAGE if VM_NOHUGEPAGE is set, or BPF_THP_VM_NONE
>  *            if neither is set.
>  * @tva_type: TVA type for current @vma
>  * @orders: Bitmask of requested THP orders for this allocation
>  *          - PMD-mapped allocation if PMD_ORDER is set
>  *          - mTHP allocation otherwise
>  *
>  * Return: The suggested THP order from the BPF program for allocation. It will
>  *         not exceed the highest requested order in @orders. Return -1 to
>  *         indicate that the originally requested @orders should remain unchanged.
>  */
> typedef int thp_order_fn_t(struct vm_area_struct *vma,
>                            enum bpf_thp_vma_type vma_type,
>                            enum tva_type tva_type,
>                            unsigned long orders);
>
> Only a single BPF program can be attached at any given time, though it can
> be dynamically updated to adjust the policy. The implementation supports
> anonymous THP, shmem THP, and mTHP, with future extensions planned for
> file-backed THP.
>
> This functionality is only active when system-wide THP is configured to
> madvise or always mode. It remains disabled in never mode. Additionally,
> if THP is explicitly disabled for a specific task via prctl(), this BPF
> functionality will also be unavailable for that task.
>
> This feature requires CONFIG_BPF_GET_THP_ORDER (marked EXPERIMENTAL) to be
> enabled. Note that this capability is currently unstable and may undergo
> significant changes—including potential removal—in future kernel versions.
>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Suggested-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Signed-off-by: Yafang Shao <laoar.shao@gmail.com>

I've tested this patch on my machine, and it works as expected. Using BPF
hooks to control THP is a great step forward!

Tested-by: Lance Yang <lance.yang@linux.dev>
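
For anyone else wanting to poke at it, a minimal policy looks something
like this (a sketch only; the section and map names just follow the usual
struct_ops conventions, they are not mandated by this series):

// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

/* Suppress THP for MADV_NOHUGEPAGE VMAs; otherwise leave the
 * requested @orders untouched by returning -1. */
SEC("struct_ops/thp_get_order")
int BPF_PROG(thp_get_order, struct vm_area_struct *vma,
	     enum bpf_thp_vma_type vma_type,
	     enum tva_type tva_type,
	     unsigned long orders)
{
	if (vma_type == BPF_THP_VM_NOHUGEPAGE)
		return 0;	/* order 0 falls back to base pages */
	return -1;		/* keep the original @orders */
}

SEC(".struct_ops.link")
struct bpf_thp_ops thp_ops = {
	.thp_get_order = (void *)thp_get_order,
};

Returning 0 acts as a "no THP" signal in practice, since BIT(0) is then
cleared by the sysfs order mask in thp_vma_allowable_orders().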

This work also inspires some ideas for another useful hook for THP that I
might propose in the future, once this series is settled and merged ;)

Cheers,
Lance

> ---
>  MAINTAINERS             |   1 +
>  include/linux/huge_mm.h |  26 ++++-
>  mm/Kconfig              |  12 ++
>  mm/Makefile             |   1 +
>  mm/huge_memory_bpf.c    | 243 ++++++++++++++++++++++++++++++++++++++++
>  5 files changed, 280 insertions(+), 3 deletions(-)
>  create mode 100644 mm/huge_memory_bpf.c
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 8fef05bc2224..d055a3c95300 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -16252,6 +16252,7 @@ F:      include/linux/huge_mm.h
>  F:     include/linux/khugepaged.h
>  F:     include/trace/events/huge_memory.h
>  F:     mm/huge_memory.c
> +F:     mm/huge_memory_bpf.c
>  F:     mm/khugepaged.c
>  F:     mm/mm_slot.h
>  F:     tools/testing/selftests/mm/khugepaged.c
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 23f124493c47..f72a5fd04e4f 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -56,6 +56,7 @@ enum transparent_hugepage_flag {
>         TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG,
>         TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG,
>         TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG,
> +       TRANSPARENT_HUGEPAGE_BPF_ATTACHED,      /* BPF prog is attached */
>  };
>
>  struct kobject;
> @@ -270,6 +271,19 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
>                                          enum tva_type type,
>                                          unsigned long orders);
>
> +#ifdef CONFIG_BPF_GET_THP_ORDER
> +unsigned long
> +bpf_hook_thp_get_orders(struct vm_area_struct *vma, vm_flags_t vma_flags,
> +                       enum tva_type type, unsigned long orders);
> +#else
> +static inline unsigned long
> +bpf_hook_thp_get_orders(struct vm_area_struct *vma, vm_flags_t vma_flags,
> +                       enum tva_type tva_flags, unsigned long orders)
> +{
> +       return orders;
> +}
> +#endif
> +
>  /**
>   * thp_vma_allowable_orders - determine hugepage orders that are allowed for vma
>   * @vma:  the vm area to check
> @@ -291,6 +305,12 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
>                                        enum tva_type type,
>                                        unsigned long orders)
>  {
> +       unsigned long bpf_orders;
> +
> +       bpf_orders = bpf_hook_thp_get_orders(vma, vm_flags, type, orders);
> +       if (!bpf_orders)
> +               return 0;
> +
>         /*
>          * Optimization to check if required orders are enabled early. Only
>          * forced collapse ignores sysfs configs.
> @@ -304,12 +324,12 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
>                     ((vm_flags & VM_HUGEPAGE) && hugepage_global_enabled()))
>                         mask |= READ_ONCE(huge_anon_orders_inherit);
>
> -               orders &= mask;
> -               if (!orders)
> +               bpf_orders &= mask;
> +               if (!bpf_orders)
>                         return 0;
>         }
>
> -       return __thp_vma_allowable_orders(vma, vm_flags, type, orders);
> +       return __thp_vma_allowable_orders(vma, vm_flags, type, bpf_orders);
>  }
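
Nice that the BPF suggestion is taken first and then still filtered
through the sysfs-enabled orders, so a program can only narrow the
request, never escalate past what the admin allows. E.g. with only the
PMD size enabled, a program suggesting order 4 yields bpf_orders ==
BIT(4), which the mask clears, and the fault falls back to base pages.
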
>
>  struct thpsize {
> diff --git a/mm/Kconfig b/mm/Kconfig
> index d1ed839ca710..4d89d2158f10 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -896,6 +896,18 @@ config NO_PAGE_MAPCOUNT
>
>           EXPERIMENTAL because the impact of some changes is still unclear.
>
> +config BPF_GET_THP_ORDER
> +       bool "BPF-based THP order selection (EXPERIMENTAL)"
> +       depends on TRANSPARENT_HUGEPAGE && BPF_SYSCALL
> +
> +       help
> +         Enable dynamic THP order selection using BPF programs. This
> +         experimental feature allows custom BPF logic to determine optimal
> +         transparent hugepage allocation sizes at runtime.
> +
> +         WARNING: This feature is unstable and may change in future kernel
> +         versions.
> +
>  endif # TRANSPARENT_HUGEPAGE
>
>  # simple helper to make the code a bit easier to read
> diff --git a/mm/Makefile b/mm/Makefile
> index 21abb3353550..f180332f2ad0 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -99,6 +99,7 @@ obj-$(CONFIG_MIGRATION) += migrate.o
>  obj-$(CONFIG_NUMA) += memory-tiers.o
>  obj-$(CONFIG_DEVICE_MIGRATION) += migrate_device.o
>  obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o
> +obj-$(CONFIG_BPF_GET_THP_ORDER) += huge_memory_bpf.o
>  obj-$(CONFIG_PAGE_COUNTER) += page_counter.o
>  obj-$(CONFIG_MEMCG_V1) += memcontrol-v1.o
>  obj-$(CONFIG_MEMCG) += memcontrol.o vmpressure.o
> diff --git a/mm/huge_memory_bpf.c b/mm/huge_memory_bpf.c
> new file mode 100644
> index 000000000000..525ee22ab598
> --- /dev/null
> +++ b/mm/huge_memory_bpf.c
> @@ -0,0 +1,243 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * BPF-based THP policy management
> + *
> + * Author: Yafang Shao <laoar.shao@gmail.com>
> + */
> +
> +#include <linux/bpf.h>
> +#include <linux/btf.h>
> +#include <linux/huge_mm.h>
> +#include <linux/khugepaged.h>
> +
> +enum bpf_thp_vma_type {
> +       BPF_THP_VM_NONE = 0,
> +       BPF_THP_VM_HUGEPAGE,    /* VM_HUGEPAGE */
> +       BPF_THP_VM_NOHUGEPAGE,  /* VM_NOHUGEPAGE */
> +};
> +
> +/**
> + * @thp_order_fn_t: Get the suggested THP order from a BPF program for allocation
> + * @vma: vm_area_struct associated with the THP allocation
> + * @vma_type: The VMA type: BPF_THP_VM_HUGEPAGE if VM_HUGEPAGE is set,
> + *            BPF_THP_VM_NOHUGEPAGE if VM_NOHUGEPAGE is set, or BPF_THP_VM_NONE
> + *            if neither is set.
> + * @tva_type: TVA type for current @vma
> + * @orders: Bitmask of requested THP orders for this allocation
> + *          - PMD-mapped allocation if PMD_ORDER is set
> + *          - mTHP allocation otherwise
> + *
> + * Return: The suggested THP order from the BPF program for allocation. It will
> + *         not exceed the highest requested order in @orders. Return -1 to
> + *         indicate that the originally requested @orders should remain unchanged.
> + */
> +typedef int thp_order_fn_t(struct vm_area_struct *vma,
> +                          enum bpf_thp_vma_type vma_type,
> +                          enum tva_type tva_type,
> +                          unsigned long orders);
> +
> +struct bpf_thp_ops {
> +       thp_order_fn_t __rcu *thp_get_order;
> +};
> +
> +static struct bpf_thp_ops bpf_thp;
> +static DEFINE_SPINLOCK(thp_ops_lock);
> +
> +/*
> + * Returns the original @orders if no BPF program is attached or if the
> + * suggested order is invalid.
> + */
> +unsigned long bpf_hook_thp_get_orders(struct vm_area_struct *vma,
> +                                     vm_flags_t vma_flags,
> +                                     enum tva_type tva_type,
> +                                     unsigned long orders)
> +{
> +       thp_order_fn_t *bpf_hook_thp_get_order;
> +       unsigned long thp_orders = orders;
> +       enum bpf_thp_vma_type vma_type;
> +       int thp_order;
> +
> +       /* No BPF program is attached */
> +       if (!test_bit(TRANSPARENT_HUGEPAGE_BPF_ATTACHED,
> +                     &transparent_hugepage_flags))
> +               return orders;
> +
> +       if (vma_flags & VM_HUGEPAGE)
> +               vma_type = BPF_THP_VM_HUGEPAGE;
> +       else if (vma_flags & VM_NOHUGEPAGE)
> +               vma_type = BPF_THP_VM_NOHUGEPAGE;
> +       else
> +               vma_type = BPF_THP_VM_NONE;
> +
> +       rcu_read_lock();
> +       bpf_hook_thp_get_order = rcu_dereference(bpf_thp.thp_get_order);
> +       if (!bpf_hook_thp_get_order)
> +               goto out;
> +
> +       thp_order = bpf_hook_thp_get_order(vma, vma_type, tva_type, orders);
> +       if (thp_order < 0)
> +               goto out;
> +       /*
> +        * The maximum requested order is determined by the callsite. E.g.:
> +        * - PMD-mapped THP uses PMD_ORDER
> +        * - mTHP uses (PMD_ORDER - 1)
> +        *
> +        * We must respect this upper bound to avoid undefined behavior. So the
> +        * highest suggested order can't exceed the highest requested order.
> +        */
> +       if (thp_order <= highest_order(orders))
> +               thp_orders = BIT(thp_order);
> +
> +out:
> +       rcu_read_unlock();
> +       return thp_orders;
> +}
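
One subtlety worth noting for program writers: a suggestion above the
highest requested order is not clamped, it is ignored entirely and the
original @orders survive, same as returning -1. Worth keeping in mind
when hardcoding orders.
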
> +
> +static bool bpf_thp_ops_is_valid_access(int off, int size,
> +                                       enum bpf_access_type type,
> +                                       const struct bpf_prog *prog,
> +                                       struct bpf_insn_access_aux *info)
> +{
> +       return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
> +}
> +
> +static const struct bpf_func_proto *
> +bpf_thp_get_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> +{
> +       return bpf_base_func_proto(func_id, prog);
> +}
> +
> +static const struct bpf_verifier_ops thp_bpf_verifier_ops = {
> +       .get_func_proto = bpf_thp_get_func_proto,
> +       .is_valid_access = bpf_thp_ops_is_valid_access,
> +};
> +
> +static int bpf_thp_init(struct btf *btf)
> +{
> +       return 0;
> +}
> +
> +static int bpf_thp_check_member(const struct btf_type *t,
> +                               const struct btf_member *member,
> +                               const struct bpf_prog *prog)
> +{
> +       /* The call site operates under RCU protection. */
> +       if (prog->sleepable)
> +               return -EINVAL;
> +       return 0;
> +}
> +
> +static int bpf_thp_init_member(const struct btf_type *t,
> +                              const struct btf_member *member,
> +                              void *kdata, const void *udata)
> +{
> +       return 0;
> +}
> +
> +static int bpf_thp_reg(void *kdata, struct bpf_link *link)
> +{
> +       struct bpf_thp_ops *ops = kdata;
> +
> +       spin_lock(&thp_ops_lock);
> +       if (test_and_set_bit(TRANSPARENT_HUGEPAGE_BPF_ATTACHED,
> +                            &transparent_hugepage_flags)) {
> +               spin_unlock(&thp_ops_lock);
> +               return -EBUSY;
> +       }
> +       WARN_ON_ONCE(rcu_access_pointer(bpf_thp.thp_get_order));
> +       rcu_assign_pointer(bpf_thp.thp_get_order, ops->thp_get_order);
> +       spin_unlock(&thp_ops_lock);
> +       return 0;
> +}
> +
> +static void bpf_thp_unreg(void *kdata, struct bpf_link *link)
> +{
> +       thp_order_fn_t *old_fn;
> +
> +       spin_lock(&thp_ops_lock);
> +       clear_bit(TRANSPARENT_HUGEPAGE_BPF_ATTACHED, &transparent_hugepage_flags);
> +       old_fn = rcu_replace_pointer(bpf_thp.thp_get_order, NULL,
> +                                    lockdep_is_held(&thp_ops_lock));
> +       WARN_ON_ONCE(!old_fn);
> +       spin_unlock(&thp_ops_lock);
> +
> +       synchronize_rcu();
> +}
> +
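
The attach/detach pairing above is easy to exercise from userspace.
Roughly what a loader looks like (a sketch; "thp_policy" is an
illustrative skeleton name, not part of this series):

	#include <bpf/libbpf.h>
	#include "thp_policy.skel.h"	/* hypothetical generated skeleton */

	static int attach_policy(void)
	{
		struct thp_policy *skel;
		struct bpf_link *link;

		skel = thp_policy__open_and_load();
		if (!skel)
			return -1;

		/* Registers the policy; a second attach anywhere in
		 * the system fails with -EBUSY, matching the
		 * test_and_set_bit() in bpf_thp_reg() above. */
		link = bpf_map__attach_struct_ops(skel->maps.thp_ops);
		if (!link) {
			thp_policy__destroy(skel);
			return -1;
		}

		/* Dropping the link tears the policy down via
		 * bpf_thp_unreg(), which waits out in-flight readers
		 * with synchronize_rcu(). */
		bpf_link__destroy(link);
		thp_policy__destroy(skel);
		return 0;
	}
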
> +static int bpf_thp_update(void *kdata, void *old_kdata, struct bpf_link *link)
> +{
> +       thp_order_fn_t *old_fn, *new_fn;
> +       struct bpf_thp_ops *old = old_kdata;
> +       struct bpf_thp_ops *ops = kdata;
> +       int ret = 0;
> +
> +       if (!ops || !old)
> +               return -EINVAL;
> +
> +       spin_lock(&thp_ops_lock);
> +       /* The prog has already been removed. */
> +       if (!test_bit(TRANSPARENT_HUGEPAGE_BPF_ATTACHED,
> +                     &transparent_hugepage_flags)) {
> +               ret = -ENOENT;
> +               goto out;
> +       }
> +
> +       new_fn = rcu_dereference(ops->thp_get_order);
> +       old_fn = rcu_replace_pointer(bpf_thp.thp_get_order, new_fn,
> +                                    lockdep_is_held(&thp_ops_lock));
> +       WARN_ON_ONCE(!old_fn || !new_fn);
> +
> +out:
> +       spin_unlock(&thp_ops_lock);
> +       if (!ret)
> +               synchronize_rcu();
> +       return ret;
> +}
> +
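
And live replacement goes through the same struct_ops link, so there is
no detach window; something like (again a sketch, skeleton names
illustrative):

	/* Load a second policy and swap it in; bpf_thp_update()
	 * replaces the fn pointer under thp_ops_lock and then
	 * waits via synchronize_rcu(). */
	struct thp_policy_v2 *v2 = thp_policy_v2__open_and_load();
	int err;

	err = bpf_link__update_map(link, v2->maps.thp_ops);
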
> +static int bpf_thp_validate(void *kdata)
> +{
> +       struct bpf_thp_ops *ops = kdata;
> +
> +       if (!ops->thp_get_order) {
> +               pr_err("bpf_thp: required ops isn't implemented\n");
> +               return -EINVAL;
> +       }
> +       return 0;
> +}
> +
> +static int bpf_thp_get_order(struct vm_area_struct *vma,
> +                            enum bpf_thp_vma_type vma_type,
> +                            enum tva_type tva_type,
> +                            unsigned long orders)
> +{
> +       return -1;
> +}
> +
> +static struct bpf_thp_ops __bpf_thp_ops = {
> +       .thp_get_order = (thp_order_fn_t __rcu *)bpf_thp_get_order,
> +};
> +
> +static struct bpf_struct_ops bpf_bpf_thp_ops = {
> +       .verifier_ops = &thp_bpf_verifier_ops,
> +       .init = bpf_thp_init,
> +       .check_member = bpf_thp_check_member,
> +       .init_member = bpf_thp_init_member,
> +       .reg = bpf_thp_reg,
> +       .unreg = bpf_thp_unreg,
> +       .update = bpf_thp_update,
> +       .validate = bpf_thp_validate,
> +       .cfi_stubs = &__bpf_thp_ops,
> +       .owner = THIS_MODULE,
> +       .name = "bpf_thp_ops",
> +};
> +
> +static int __init bpf_thp_ops_init(void)
> +{
> +       int err;
> +
> +       err = register_bpf_struct_ops(&bpf_bpf_thp_ops, bpf_thp_ops);
> +       if (err)
> +               pr_err("bpf_thp: Failed to register struct_ops (%d)\n", err);
> +       return err;
> +}
> +late_initcall(bpf_thp_ops_init);
> --
> 2.47.3
>
>

