From: Jonathan Cameron <jonathan.cameron@huawei.com>
To: Hannes Reinecke <hare@suse.de>
Cc: lsf-pc <lsf-pc@lists.linux-foundation.org>,
<linux-cxl@vger.kernel.org>, <linux-fsdevel@vger.kernel.org>
Subject: Re: [LSF/MM/BPF TOPIC] Strategies for memory deallocation/movement for Dynamic Capacity Pooling
Date: Mon, 13 Apr 2026 16:43:59 +0100 [thread overview]
Message-ID: <20260413164359.00001c86@huawei.com> (raw)
In-Reply-To: <e06deabf-44ad-4f3f-b817-78506c5e3203@suse.de>
On Mon, 30 Mar 2026 09:59:56 +0200
Hannes Reinecke <hare@suse.de> wrote:
> Hi all,
>
> during discussion with our partners about implementing dynamic
> capacity devices (DCD) on CXL, the question has come up whether
> we can somehow 'steer' which memory pages to move.
> The issue is that dynamic capacity devices give us a certain
> freedom in which memory pages to move or deallocate, so ideally
> there would be a strategy for selecting those pages.
Hi Hannes,
Can you talk through your use model a little bit more?
I'm guessing this is about untagged DCD being used in a virtio-mem
like way? Hence you want to clear out a range of DPA space so you can
do a partial release?
I may have completely missed what you are targeting, though, so an
example would be great.
> Should it be per application/cgroup?
> Does it make sense to move individual pages from one application/cgroup
> or would it be better to move all pages from the application/cgroup?
> Should we implement something (e.g. via madvise()) to allow applications
> to influence the policy?
> If so, what would that be?
>
> So quite a few things to discuss; however, I'm not sure whether this
> is too arcane a topic and should rather be directed at venues like
> LPC. But I'll let the PC decide.
Superficially feels a bit arcane, particularly as we are currently
kicking untagged memory into the long grass as there are too many
open questions on how to present it at all (e.g. related to Gregory's
recent work on private nodes). On recent CXL sync calls the proposal
has been to do tagged memory first and only support allocation of
all memory with a given tag in one go and full release.
Anyhow, this sounds like the sort of thing I'm always keen to discuss,
but I'm not going to be at LSFMM this year.
Jonathan
>
> Cheers,
>
> Hannes
Thread overview: 5+ messages
2026-03-30 7:59 [LSF/MM/BPF TOPIC] Strategies for memory deallocation/movement for Dynamic Capacity Pooling Hannes Reinecke
2026-04-13 15:43 ` Jonathan Cameron [this message]
2026-04-13 21:10 ` Gregory Price
2026-04-14 7:08 ` Hannes Reinecke
2026-04-15 0:26 ` Gregory Price