From: Bijan Tabatabai <bijan311@gmail.com>
To: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, sj@kernel.org,
akpm@linux-foundation.org, corbet@lwn.net, david@redhat.com,
ziy@nvidia.com, matthew.brost@intel.com, rakie.kim@sk.com,
byungchul@sk.com, gourry@gourry.net,
ying.huang@linux.alibaba.com, apopple@nvidia.com,
bijantabatab@micron.com, venkataravis@micron.com,
emirakhur@micron.com, ajayjoshi@micron.com,
vtavarespetr@micron.com, damon@lists.linux.dev
Subject: Re: [RFC PATCH 0/4] mm/damon: Add DAMOS action to interleave data across nodes
Date: Fri, 13 Jun 2025 11:46:22 -0500
Message-ID: <CAMvvPS4COqinefth9rEB4etJF2erjQa3xfcOGQMtZ-LCUQnwFw@mail.gmail.com>
In-Reply-To: <20250613152517.225529-1-joshua.hahnjy@gmail.com>
Hi Joshua,
On Fri, Jun 13, 2025 at 10:25 AM Joshua Hahn <joshua.hahnjy@gmail.com> wrote:
>
> On Thu, 12 Jun 2025 13:13:26 -0500 Bijan Tabatabai <bijan311@gmail.com> wrote:
>
> > From: Bijan Tabatabai <bijantabatab@micron.com>
> >
> > A recent patch set automatically set the interleave weight for each node
> > according to the node's maximum bandwidth [1]. In another thread, the patch
> > set's author, Joshua Hahn, wondered if/how these weights should be changed
> > if the bandwidth utilization of the system changes [2].
>
> Hi Bijan,
>
> Thank you for this patchset, and thank you for taking an interest in my
> question!
>
> > This patch set adds the mechanism for dynamically changing how application
> > data is interleaved across nodes while leaving the policy of what the
> > interleave weights should be to userspace. It does this by adding a new
> > DAMOS action: DAMOS_INTERLEAVE. We implement DAMOS_INTERLEAVE with both
> > paddr and vaddr operations sets. Using the paddr version is useful for
> > managing page placement globally. Using the vaddr version limits tracking
> > to one process per kdamond instance, but the VA-based tracking better
> > captures spatial locality.
> >
> > DAMOS_INTERLEAVE interleaves pages within a region across nodes using the
> > interleave weights at /sys/kernel/mm/mempolicy/weighted_interleave/node<N>
> > and the page placement algorithm in weighted_interleave_nid via
> > policy_nodemask. We chose to reuse the mempolicy weighted interleave
> > infrastructure to avoid reimplementing code. However, this has the awkward
> > side effect that only pages that are mapped to processes using
> > MPOL_WEIGHTED_INTERLEAVE will be migrated according to new interleave
> > weights. This might be fine because workloads that want their data to be
> > dynamically interleaved will want their newly allocated data to be
> > interleaved at the same ratio.
>
> I think this is generally true. Maybe until a user says that they have a
> usecase where they would like to have a non-weighted-interleave policy
> to allocate pages, but would like to place them according to a set weight,
> we can leave support for other mempolicies out for now.
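Agreed. For concreteness, a workload opts in from userspace roughly as in
the sketch below (illustrative only; error handling is minimal, the
MPOL_WEIGHTED_INTERLEAVE constant needs v6.9+ uapi headers so it is
defined defensively here, and the per-node weights themselves come from
/sys/kernel/mm/mempolicy/weighted_interleave/node<N>):

#include <stdio.h>
#include <numaif.h> /* set_mempolicy(2); link with -lnuma */

#ifndef MPOL_WEIGHTED_INTERLEAVE
#define MPOL_WEIGHTED_INTERLEAVE 6 /* uapi value since v6.9 */
#endif

int main(void)
{
	/* Weighted-interleave this task's future allocations across
	 * nodes 0 and 1; the ratio comes from the sysfs weights above. */
	unsigned long nodemask = (1UL << 0) | (1UL << 1);

	if (set_mempolicy(MPOL_WEIGHTED_INTERLEAVE, &nodemask,
			  sizeof(nodemask) * 8))
		perror("set_mempolicy");

	/* ... allocate and touch memory here ... */
	return 0;
}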
>
> > If exposing policy_nodemask is undesirable, we have two alternative methods
> > for having DAMON access the interleave weights it should use. We would
> > appreciate feedback on which method is preferred.
> > 1. Use mpol_misplaced instead
> > pros: mpol_misplaced is already exposed publicly
> > cons: Would require refactoring mpol_misplaced to take a struct
> > vm_area_struct instead of a struct vm_fault, and refactoring
> > mpol_misplaced and get_vma_policy to take a struct task_struct rather
> > than always using current. It also requires processes to use
> > MPOL_WEIGHTED_INTERLEAVE.
> > 2. Add a new field to struct damos, similar to target_nid for the
> > MIGRATE_HOT/COLD schemes.
> > pros: Keeps changes contained inside DAMON. Would not require processes
> > to use MPOL_WEIGHTED_INTERLEAVE.
> > cons: Duplicates page placement code. Requires discussion on the sysfs
> > interface to use for users to pass in the interleave weights.
>
> Here I agree with SJ's sentiment -- mpol_misplaced is built around
> current and fault contexts, like you pointed out. Perhaps it is best to
> keep the scope of the changes as local as possible :-)
> As for duplicating page placement code, I think that is something we can
> refine over iterations of this patchset, and maybe SJ will have some great
> ideas on how this can best be done as well.
David Hildenbrand responded to this and proposed adding a new function that
just returns the nid a folio should go on based on its mempolicy. I think that's
probably the best way to go for now. I think the common case would want
the weights used by this and mempolicy to be the same. However, if there is
a use case where different weights are desired, I don't mind coming back and
adding that functionality.
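To make that concrete, the rough shape I have in mind looks like the
sketch below. The helper name and signature are placeholders of my own,
not David's proposal verbatim:

/* Hypothetical helper, living next to the other mempolicy code:
 * return the node @folio should be placed on according to the
 * mempolicy covering @vma at @addr, or NUMA_NO_NODE if the policy
 * expresses no preference. */
int mpol_folio_target_node(struct folio *folio, struct vm_area_struct *vma,
			   unsigned long addr);

/* The DAMOS_INTERLEAVE apply path would then reduce to: */
nid = mpol_folio_target_node(folio, vma, addr);
if (nid != NUMA_NO_NODE && nid != folio_nid(folio))
	queue_folio_for_migration(folio, nid); /* placeholder helper */

That would keep the placement decision inside mm/mempolicy.c while DAMON
only consumes a node ID.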
> > This patchset was tested on an AMD machine with a NUMA node with CPUs
> > attached to DDR memory and a cpu-less NUMA node attached to CXL memory.
> > However, this patch set should generalize to other architectures and
> > numbers of NUMA nodes.
>
> I think moving the test results to the cover letter will help reviewers
> better understand the intent of the work. It would also be very helpful
> to include some potential use-cases here. That is,
> what workloads would benefit from placing pages according to a set ratio,
> rather than using existing migration policies that adjust this based on
> hotness / coldness?
Noted. I will be sure to include that in the next revision.
> One such use case that I can think of is using this patchset + weighted
> interleave auto-tuning, which would help alleviate bandwidth limitations
> by ensuring that past the allocation stage, pages are being accessed
> in a way that maximizes the bandwidth usage of the system (at a possible
> latency cost, depending on how bandwidth-bound the workload is).
This was the exact use case I envisioned for this patch set. I talk about it in more
detail in my reply to SeongJae.
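As an aside, it is easy to watch placement converge toward the new weights
from userspace by querying page locations with move_pages(2) in query mode
(nodes == NULL); a minimal sketch:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <numaif.h> /* move_pages(2); link with -lnuma */

/* Print the node each page of buf currently resides on. Pages must
 * have been touched; untouched pages report -ENOENT in status. */
static void report_placement(void *buf, size_t npages)
{
	long psz = sysconf(_SC_PAGESIZE);
	void **addrs = malloc(npages * sizeof(*addrs));
	int *status = malloc(npages * sizeof(*status));
	size_t i;

	for (i = 0; i < npages; i++)
		addrs[i] = (char *)buf + i * psz;

	/* nodes == NULL makes this a pure query; nothing is migrated. */
	if (move_pages(0, npages, addrs, NULL, status, 0) == 0)
		for (i = 0; i < npages; i++)
			printf("page %zu -> node %d\n", i, status[i]);

	free(addrs);
	free(status);
}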
> Thank you again for the amazing patchset! Have a great day :-)
> Joshua
I appreciate you taking the time to respond,
Bijan