From: Gregory Price <gourry@gourry.net>
To: Matthew Wilcox <willy@infradead.org>
Cc: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com>,
linux-fsdevel <linux-fsdevel@vger.kernel.org>,
Amir Goldstein <amir73il@gmail.com>,
Christian Brauner <brauner@kernel.org>, Jan Kara <jack@suse.cz>,
lsf-pc <lsf-pc@lists.linux-foundation.org>,
Bharata B Rao <bharata@amd.com>,
Donet Tom <donettom@linux.ibm.com>,
Aboorva Devarajan <aboorvad@linux.ibm.com>,
linux-mm@kvack.org, Ojaswin Mujoo <ojaswin@linux.ibm.com>
Subject: Re: [LSF/MM/BPF BoF Session] Numa-Aware Placement for Page Cache Pages
Date: Sat, 2 May 2026 15:57:19 +0100 [thread overview]
Message-ID: <afYQz3YJdWB2R-1q@gourry-fedora-PF4VCD3F> (raw)
In-Reply-To: <afNV5wbhsFQJLzxi@casper.infradead.org>
On Thu, Apr 30, 2026 at 02:15:19PM +0100, Matthew Wilcox wrote:
> On Thu, Apr 30, 2026 at 05:03:37PM +0530, Ritesh Harjani (IBM) wrote:
> > Linux already supports memory tiers and there are ongoing discussions around
> > promotion of unmapped page cache pages, which lets kernel do the right thing
> > for userspace page cache pages on a tiered system.
>
> Well, you know my opinion of that idea ...
>
> > So the question is:
> > Do we need a userspace interface for the placement policy of page cache pages on a per file basis?
>
> What do we do if two tasks both "know" the right NUMA placement for the
> inode's data, and they disagree?
>
> > 1. Is there a need for an interface that allows userspace to do per-fd page
> > placement and maybe per-fd page migration?
>
> Ideally, no, the kernel should observe the task and get it right.
>
Out of curiosity, a use case I've been exploring is something like:

    fd = open(path, O_RDWR);
    buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    mbind(buf, len, MPOL_BIND, &device_nodemask, maxnode, 0);
    /* fault file pages directly onto device memory */

This obviously breaks if there are concurrent accessors of the file
using read(): filemap will just allocate the pages on the local node,
so there's a clear race.
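To make the race concrete, a rough sketch (fd, len, and scratch are
just placeholders):

    /* Task B, racing with the mmap()+mbind() above */
    pread(fd, scratch, len, 0);  /* page cache pages get allocated on the local node */

    /* Task A's later faults just map those already-resident local pages,
     * so the MPOL_BIND placement never takes effect for them. */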
Do you think there's a world where we can hang a mempolicy off the
address_space via an fcntl() call with CAP_SYS_NICE?
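Roughly the shape I have in mind, purely as a sketch (the
F_SET_FILE_MEMPOLICY command and struct below are made up; nothing
like them exists today):

    /* hypothetical: a per-inode policy hung off the address_space */
    struct file_mempolicy {
        int           mode;      /* MPOL_BIND or MPOL_PREFERRED only */
        unsigned long nodemask;  /* effectively a single-node mask */
    };

    struct file_mempolicy pol = {
        .mode     = MPOL_BIND,
        .nodemask = 1UL << device_node,
    };

    /* would require CAP_SYS_NICE */
    if (fcntl(fd, F_SET_FILE_MEMPOLICY, &pol) < 0)
        perror("fcntl");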
I haven't quite worked through the full lifetime, since the mempolicy
could end up referencing stale nodes (hotplug, etc.) without plumbing
for that. But it did seem like a somewhat clean abstraction that isn't
specifically a tiering use case.
(I'm not interested in this for anything other than single-node
placement policy or tiering, so no interleave or migration support or
anything.)
~Gregory