* Re: [LSF/MM/BPF TOPIC][RFC PATCH v4 00/27] Private Memory Nodes (w/ Compressed RAM)
[not found] ` <afIKxG5mJZE6QgpR@gourry-fedora-PF4VCD3F>
@ 2026-05-04 13:08 ` Arun George
2026-05-05 7:45 ` Gregory Price
0 siblings, 1 reply; 5+ messages in thread
From: Arun George @ 2026-05-04 13:08 UTC (permalink / raw)
To: Gregory Price
Cc: lsf-pc, linux-kernel, linux-cxl, cgroups, linux-mm,
linux-trace-kernel, damon, kernel-team, gregkh, rafael, dakr,
dave, jonathan.cameron, dave.jiang, alison.schofield,
vishal.l.verma, ira.weiny, longman, akpm, david, lorenzo.stoakes,
Liam.Howlett, vbabka, rppt, surenb, mhocko, osalvador, ziy,
matthew.brost, joshua.hahnjy, rakie.kim, byungchul, ying.huang,
apopple, axelrasmussen, yuanchu, weixugc, yury.norov, linux,
mhiramat, mathieu.desnoyers, tj, hannes, mkoutny, jackmanb, sj,
baolin.wang, npache, ryan.roberts, dev.jain, baohua, lance.yang,
muchun.song, xu.xin16, chengming.zhou, jannh, linmiaohe,
nao.horiguchi, pfalcato, rientjes, shakeel.butt, riel, harry.yoo,
cl, roman.gushchin, chrisl, kasong, shikemeng, nphamcs, bhe,
zhengqi.arch, terry.bowman, gost.dev, arungeorge05, cpgs
On 29-04-2026 07:12 pm, Gregory Price wrote:
>>
>> Great! I believe a "writable budget" could be an interesting idea that
>> solves the 'bus error' scenarios caused by a device that cannot take
>> any more writes. The write budget could be replenished via the control
>> path, and writes would not proceed without budget available, right?
>>
>
> Write budget is simple:
>
> budget=1 (up to 1 page can be writable)
> 1) swap 1 page -> cram alloc 1 page, put VSWAP_CRAM in PTE
> 2) read-fault -> cram upgrades VSWAP_CRAM to R/O PTE
> 3) write-fault ->
> a) if (writable_cnt < budget) { writable_cnt++; mkwrite(pte); }
> b) else: normal swap semantic -> promote to normal memory
>
> Meanwhile - use ballooning and a simple shrinker to dynamically size the
> region to respond to real compression ratio.
>
>
> All said and done - you get something close to zswap but with R/O
> mappings for all entries, and optional R/W mappings for administrators
> who know something about their workload and can afford to take the risk
> of some amount of capacity being written to uncontended in exchange for
> performance.
>
> The writable-budget is a risk-dial: how much do you trust your workload
> not to spew un/poorly-compressible memory? The write budget is a direct
> measure of that (so take P99.99999 compression ratios, and you can make
> a good chunk of that writable).
>
> ~Gregory
>
>
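The quoted state machine can be sketched as a small user-space model (all names are hypothetical; this illustrates only the transitions, not real kernel PTE handling):

```c
/* User-space sketch of the quoted write-budget state machine.
 * All names are hypothetical; this models only the transitions,
 * not real kernel PTE handling. */
#include <assert.h>

enum pte_state { PTE_VSWAP_CRAM, PTE_RO, PTE_RW, PTE_NORMAL_MEM };

struct cram_node {
	int writable_cnt;	/* pages currently mapped writable */
	int budget;		/* max pages allowed writable at once */
};

/* 1) swap one page into cram: VSWAP_CRAM entry goes in the PTE */
static enum pte_state cram_swap_in(void)
{
	return PTE_VSWAP_CRAM;
}

/* 2) read-fault: upgrade the swap entry to a read-only mapping */
static enum pte_state cram_read_fault(enum pte_state s)
{
	return s == PTE_VSWAP_CRAM ? PTE_RO : s;
}

/* 3) write-fault: map writable while under budget, else promote */
static enum pte_state cram_write_fault(struct cram_node *n, enum pte_state s)
{
	if (s != PTE_RO && s != PTE_VSWAP_CRAM)
		return s;
	if (n->writable_cnt < n->budget) {
		n->writable_cnt++;	/* consume one unit of budget */
		return PTE_RW;		/* mkwrite(pte) */
	}
	return PTE_NORMAL_MEM;		/* normal swap semantic: promote */
}
```

Note that the budget caps how many pages are writable simultaneously; promotion to normal memory is the overflow path.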
I believe we are converging, and I agree with most of the points you
mentioned. This problem can be solved by a 'write-control + write-budget'
approach similar to what you have described, whether we take the swap
path or not.
However, I see this 'write budget' (a budget in terms of the number of
write operations the device can handle, not capacity) as being provided
by the device over the control plane, not by the workloads on the host.
The device can communicate the budget over its control plane
periodically (handled in the specific cram back-end driver, perhaps by
interpreting the device's back-pressure indications into a write-budget
value). Even if the control plane breaks down, the host does not run
into issues; it simply stops writing.
I assume you see this value coming from the workloads. This may be
where our opinions differ.
There are multiple advantages to this value coming from the device:
1) We can modulate the write budget based on the actual compressibility
observed in the device (and thus in the workloads' data), rather than
estimating from the workloads.
2) We don't need capacity modulation, as with ballooning or a shrinker.
3) Even if the control path is broken, the host can write only up to the
available 'write budget', so it won't run into 'bus error' situations.
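A minimal sketch of the device-driven budget described above (the API and names are hypothetical; real back-pressure signaling would be device-specific):

```c
/* Sketch of a device-replenished write budget (hypothetical API).
 * The back-end driver periodically receives a grant from the device's
 * control plane; data-path writes consume it and are simply denied at
 * zero, so a broken control plane degrades to "no further writes"
 * rather than a bus error. */
#include <assert.h>
#include <stdbool.h>

struct cram_backend {
	unsigned int write_budget;	/* write ops remaining, device-granted */
};

/* Control plane: device reports how many writes it can currently absorb,
 * e.g. derived from its back-pressure indications. */
static void cram_budget_replenish(struct cram_backend *b, unsigned int grant)
{
	b->write_budget = grant;
}

/* Data plane: a write proceeds only while budget is available. */
static bool cram_try_write(struct cram_backend *b)
{
	if (b->write_budget == 0)
		return false;	/* deny; caller backs off or promotes */
	b->write_budget--;
	return true;
}
```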
~Arun George
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [LSF/MM/BPF TOPIC][RFC PATCH v4 00/27] Private Memory Nodes (w/ Compressed RAM)
2026-05-04 13:08 ` [LSF/MM/BPF TOPIC][RFC PATCH v4 00/27] Private Memory Nodes (w/ Compressed RAM) Arun George
@ 2026-05-05 7:45 ` Gregory Price
0 siblings, 0 replies; 5+ messages in thread
From: Gregory Price @ 2026-05-05 7:45 UTC (permalink / raw)
To: Arun George
Cc: lsf-pc, linux-kernel, linux-cxl, cgroups, linux-mm,
linux-trace-kernel, damon, kernel-team, gregkh, rafael, dakr,
dave, jonathan.cameron, dave.jiang, alison.schofield,
vishal.l.verma, ira.weiny, longman, akpm, david, lorenzo.stoakes,
Liam.Howlett, vbabka, rppt, surenb, mhocko, osalvador, ziy,
matthew.brost, joshua.hahnjy, rakie.kim, byungchul, ying.huang,
apopple, axelrasmussen, yuanchu, weixugc, yury.norov, linux,
mhiramat, mathieu.desnoyers, tj, hannes, mkoutny, jackmanb, sj,
baolin.wang, npache, ryan.roberts, dev.jain, baohua, lance.yang,
muchun.song, xu.xin16, chengming.zhou, jannh, linmiaohe,
nao.horiguchi, pfalcato, rientjes, shakeel.butt, riel, harry.yoo,
cl, roman.gushchin, chrisl, kasong, shikemeng, nphamcs, bhe,
zhengqi.arch, terry.bowman, gost.dev, arungeorge05, cpgs
On Mon, May 04, 2026 at 06:38:54PM +0530, Arun George wrote:
> On 29-04-2026 07:12 pm, Gregory Price wrote:
>
> But I see this 'write budget' (budget in terms of number of write
> operations that can be handled by the device, not capacity) to be
> provided by the device in control plane; not by the workloads in the host.
>
In the scenario I'm talking about, a "write budget" is defined as the
number of pages that are allowed to be mapped writable in the page
tables at any given time.
> 1) We can modulate the write budget depending on the actual
> compressibility in the device (and so workloads data). We don't have to
> do estimation based on the workloads.
>
Barring the device causing backpressure to increase latency and slow
down writes, modulating a write budget doesn't actually do anything
useful. Once a page is mapped writable - the CPU is free to write to
that page uncontended.
I think a write budget is "doable" but maybe a bit optimistic for an
MVP. There are many corner cases to handle, and I would prefer to see
that as an experimental optimization.
> 2) We don't have to do the capacity modulation - as in ballooning or
> shrinker.
>
You still need capacity modulation in some way to respond to drops in
compression ratio.
~Gregory
* Re: [LSF/MM/BPF TOPIC][RFC PATCH v4 00/27] Private Memory Nodes (w/ Compressed RAM)
[not found] <20260222084842.1824063-1-gourry@gourry.net>
[not found] ` <CGME20260427123800epcas5p1e1a2fed257091b31e2e6c3a7d1b0c2b0@epcas5p1.samsung.com>
@ 2026-05-05 22:21 ` Yiannis Nikolakopoulos
[not found] ` <b704b05e-3e65-4a73-84c0-21557b0cc38f@amd.com>
2026-05-09 16:38 ` [LSF/MM/BPF TOPIC] Private Memory Nodes - follow up Gregory Price
3 siblings, 0 replies; 5+ messages in thread
From: Yiannis Nikolakopoulos @ 2026-05-05 22:21 UTC (permalink / raw)
To: Gregory Price
Cc: lsf-pc, linux-kernel, linux-cxl, cgroups, linux-mm,
linux-trace-kernel, damon, kernel-team, gregkh, rafael, dakr,
dave, jonathan.cameron, dave.jiang, alison.schofield,
vishal.l.verma, Ira Weiny, dan.j.williams, longman, akpm, david,
lorenzo.stoakes, Liam.Howlett, vbabka, rppt, Suren Baghdasaryan,
Michal Hocko, osalvador, ziy, matthew.brost, joshua.hahnjy,
rakie.kim, byungchul, ying.huang, apopple, axelrasmussen, yuanchu,
weixugc, yury.norov, linux, mhiramat, mathieu.desnoyers, tj,
hannes, mkoutny, jackmanb, sj, baolin.wang, npache, ryan.roberts,
dev.jain, baohua, lance.yang, muchun.song, xu.xin16,
chengming.zhou, jannh, linmiaohe, nao.horiguchi, pfalcato,
David Rientjes, shakeel.butt, riel, harry.yoo, cl, roman.gushchin,
chrisl, kasong, shikemeng, nphamcs, bhe, zhengqi.arch,
terry.bowman, Yiannis Nikolakopoulos
> On 22 Feb 2026, at 09:48, Gregory Price <gourry@gourry.net> wrote:
>
> Topic type: MM
>
> Presenter: Gregory Price <gourry@gourry.net>
>
> This series introduces N_MEMORY_PRIVATE, a NUMA node state for memory
> managed by the buddy allocator but excluded from normal allocations.
>
> I present it with an end-to-end Compressed RAM service (mm/cram.c)
> that would otherwise not be possible (or would be considerably more
> difficult, be device-specific, and add to the ZONE_DEVICE boondoggle).
>
>
> TL;DR
> ===
>
> N_MEMORY_PRIVATE is all about isolating NUMA nodes and then punching
> explicit holes in that isolation to do useful things we couldn't do
> before without re-implementing entire portions of mm/ in a driver.
>
>
> /* This is my memory. There are many like it, but this one is mine. */
> rc = add_private_memory_driver_managed(nid, start, size, name, flags,
> online_type, private_context);
>
> page = alloc_pages_node(nid, __GFP_PRIVATE, 0);
>
> /* Ok but I want to do something useful with it */
> static const struct node_private_ops ops = {
> .migrate_to = my_migrate_to,
> .folio_migrate = my_folio_migrate,
> .flags = NP_OPS_MIGRATION | NP_OPS_MEMPOLICY,
> };
> node_private_set_ops(nid, &ops);
>
> /* And now I can use mempolicy with my memory */
> buf = mmap(...);
> mbind(buf, len, mode, private_node, ...);
> buf[0] = 0xdeadbeef; /* Faults onto private node */
>
> /* And to be clear, no one else gets my memory */
> buf2 = malloc(4096); /* Standard allocation */
> buf2[0] = 0xdeadbeef; /* Can never land on private node */
>
> /* But i can choose to migrate it to the private node */
> move_pages(0, 1, &buf, &private_node, NULL, ...);
>
> /* And more fun things like this */
>
>
> Patchwork
> ===
> A fully working branch based on cxl/next can be found here:
> https://github.com/gourryinverse/linux/tree/private_compression
>
> A QEMU device which can inject high/low interrupts can be found here:
> https://github.com/gourryinverse/qemu/tree/compressed_cxl_clean
>
> The additional patches on these branches are CXL and DAX driver
> housecleaning only tangentially relevant to this RFC, so I've
> omitted them for the sake of keeping things somewhat clean
> here. Those patches should (hopefully) be going upstream anyway.
>
> Patches 1-22: Core Private Node Infrastructure
>
> Patch 1: Introduce N_MEMORY_PRIVATE scaffolding
> Patch 2: Introduce __GFP_PRIVATE
> Patch 3: Apply allocation isolation mechanisms
> Patch 4: Add N_MEMORY nodes to private fallback lists
> Patches 5-9: Filter operations not yet supported
> Patch 10: free_folio callback
> Patch 11: split_folio callback
> Patches 12-20: mm/ service opt-ins:
> Migration, Mempolicy, Demotion, Write Protect,
> Reclaim, OOM, NUMA Balancing, Compaction,
> LongTerm Pinning
> Patch 21: memory_failure callback
> Patch 22: Memory hotplug plumbing for private nodes
>
> Patch 23: mm/cram -- Compressed RAM Management
>
> Patches 24-27: CXL Driver examples
> Sysram Regions with Private node support
> Basic Driver Example: (MIGRATION | MEMPOLICY)
> Compression Driver Example (Generic)
>
Hi,
As I think this is about to be discussed at the conference, I thought
I'd share some high-level comments.
I have tested this for some time on a device with compression (after
some necessary fixes for CXL RCD to work, which Greg helped me with).
Overall, the isolation property this provides is something I deem
necessary for this technology. Others are better placed to judge the
MM plumbing itself, but I wanted to say that this functionality is an
important piece of the puzzle from the device/use-case side.
For cram itself, as it stands in this RFC, I think there is still
performance and value left on the table (as noted in the description),
but I fully understand Gregory's premise in approaching it this way.
<snip>
>
> Future CRAM : Loosening the read-only constraint
> ===
>
> The read-only model is safe but conservative. For workloads where
> compressed pages are occasionally written, the promotion fault adds
> latency. A future optimization could allow a tunable fraction of
> compressed pages to be mapped writable, accepting some risk of
> write-driven decompression in exchange for lower overhead.
>
> The private node ops make this straightforward:
>
> - Adjust fixup_migration_pte to selectively skip
> write-protection.
> - Use the backpressure system to either revoke writable mappings,
> deny additional demotions, or evict when device pressure rises.
I have some quick hacks playing with these ideas, but I haven't had the
time to test them thoroughly and get to something robust yet. I saw in
another thread that there is a follow-up cooking which looks interesting.
Thanks Greg for pushing this, and I’m happy to test more on HW in our lab.
Best,
/Yiannis
* Re: [LSF/MM/BPF TOPIC][RFC PATCH v4 00/27] Private Memory Nodes (w/ Compressed RAM)
[not found] ` <b704b05e-3e65-4a73-84c0-21557b0cc38f@amd.com>
@ 2026-05-06 14:43 ` Gregory Price
0 siblings, 0 replies; 5+ messages in thread
From: Gregory Price @ 2026-05-06 14:43 UTC (permalink / raw)
To: Alejandro Lucero Palau
Cc: lsf-pc, linux-kernel, linux-cxl, cgroups, linux-mm,
linux-trace-kernel, damon, kernel-team, gregkh, rafael, dakr,
dave, jonathan.cameron, dave.jiang, alison.schofield,
vishal.l.verma, ira.weiny, dan.j.williams, longman, akpm, david,
lorenzo.stoakes, Liam.Howlett, vbabka, rppt, surenb, mhocko,
osalvador, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, ying.huang, apopple, axelrasmussen, yuanchu, weixugc,
yury.norov, linux, mhiramat, mathieu.desnoyers, tj, hannes,
mkoutny, jackmanb, sj, baolin.wang, npache, ryan.roberts,
dev.jain, baohua, lance.yang, muchun.song, xu.xin16,
chengming.zhou, jannh, linmiaohe, nao.horiguchi, pfalcato,
rientjes, shakeel.butt, riel, harry.yoo, cl, roman.gushchin,
chrisl, kasong, shikemeng, nphamcs, bhe, zhengqi.arch,
terry.bowman
On Wed, Feb 25, 2026 at 12:40:09PM +0000, Alejandro Lucero Palau wrote:
>
> I can see the nid param is just a "preferred nid" with alloc pages. Using
> __GFP_PRIVATE will restrict the allocation to private nodes but I think the
> idea here is:
>
>
> 1) I own this node
>
> 2) Do not give me memory from another private node but from mine.
>
>
I mildly misread this question, apologies.
Multiple private nodes in the nodemask are ignored, because the nodemask
is a filter function for the fallback lists - and private nodes never
show up in the fallback lists (except for their own).
So for example
Nodes: Normal(A,B), Private(C,D)
Fallback lists:
A: [A,B]
B: [B,A]
C: [C,A,B]
D: [D,B,A]
combination | possible result
----------------------------------------------------------------
__GFP_PRIVATE + pref_node(C) + nodemask(NULL) = (C or A or B)
__GFP_PRIVATE + pref_node(C) + nodemask(C,D) = C
__GFP_PRIVATE + pref_node(C) + nodemask(ALL) = C
Basically private nodes are completely ignored in the nodemask, so you
cannot do fallback allocations to other private nodes.
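The filtering described above can be modeled in a few lines of user-space C (node names and list contents follow the example; the allocator walk is a simplification):

```c
/* Model of the fallback-list filtering above: private nodes (C, D) head
 * only their own lists, so a nodemask can never make one private node
 * fall back to another. */
#include <assert.h>
#include <stdbool.h>

#define MAX_NODES 4
enum { A, B, C, D };		/* A, B normal; C, D private */

/* Fallback lists, -1 terminated; private nodes appear only at the
 * head of their own list. */
static const int fallback[MAX_NODES][MAX_NODES] = {
	[A] = { A, B, -1 },
	[B] = { B, A, -1 },
	[C] = { C, A, B, -1 },
	[D] = { D, B, A, -1 },
};

/* Walk pref's fallback list, applying the nodemask as a filter
 * (mask == NULL means no filter); return the first node with memory. */
static int alloc_node(int pref, const bool *mask, const bool *has_free)
{
	for (int i = 0; fallback[pref][i] >= 0; i++) {
		int nid = fallback[pref][i];

		if (mask && !mask[nid])
			continue;
		if (has_free[nid])
			return nid;
	}
	return -1;	/* allocation fails: no eligible node */
}
```

Because D never appears in C's list, masking in both C and D still cannot produce a C-to-D fallback.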
There is no good abstraction (that I have found) to communicate
multi-private-node allocations simply because this would imply needing
private nodes to be in the fallback lists for other nodes.
Maybe there is a possibility of modifying fallback lists explicitly, but
I think that is out of scope for the first implementation.
~Gregory
* Re: [LSF/MM/BPF TOPIC] Private Memory Nodes - follow up
[not found] <20260222084842.1824063-1-gourry@gourry.net>
` (2 preceding siblings ...)
[not found] ` <b704b05e-3e65-4a73-84c0-21557b0cc38f@amd.com>
@ 2026-05-09 16:38 ` Gregory Price
3 siblings, 0 replies; 5+ messages in thread
From: Gregory Price @ 2026-05-09 16:38 UTC (permalink / raw)
To: lsf-pc
Cc: linux-kernel, linux-cxl, cgroups, linux-mm, linux-trace-kernel,
damon, kernel-team, gregkh, rafael, dakr, dave, jonathan.cameron,
dave.jiang, alison.schofield, vishal.l.verma, ira.weiny,
dan.j.williams, longman, akpm, david, lorenzo.stoakes,
Liam.Howlett, vbabka, rppt, surenb, mhocko, osalvador, ziy,
matthew.brost, joshua.hahnjy, rakie.kim, byungchul, ying.huang,
apopple, axelrasmussen, yuanchu, weixugc, yury.norov, linux,
mhiramat, mathieu.desnoyers, tj, hannes, mkoutny, jackmanb, sj,
baolin.wang, npache, ryan.roberts, dev.jain, baohua, lance.yang,
muchun.song, xu.xin16, chengming.zhou, jannh, linmiaohe,
nao.horiguchi, pfalcato, rientjes, shakeel.butt, riel, harry.yoo,
cl, roman.gushchin, chrisl, kasong, shikemeng, nphamcs, bhe,
zhengqi.arch, terry.bowman
Just wanting to follow up post-conference with a few major takeaways,
since I will be a bit sparse during May / early June (so I want to not
forget, and to garner a bit of input on these notes).
If you just want the tl;dr:
0) naming: private -> managed
1) remove global general "possible" and "online" node lists
2) add consistency with "normal" nodes, by opting them all in
to all the new things, and just making that the new normal.
e.g.: node_is_private_managed -> node_is_lru_eligible
3) Have __init add init time nodes to all the lists
Otherwise service/owner must add/enable services.
4) Make folio checks just way more explicit per service
e.g.: folio_is_private_managed -> folio_is_ksm_eligible
5) I still think that w/o __GFP_PRIVATE this will be too fragile,
but we're going to give it a try.
6) No callbacks in the MVP
7) MVP will be, essentially, Buddy + MBind support
Otherwise, more notes below.
~Gregory
<wall of text>
0) Naming is hard. Willy and Liam expressed concern over "private".
We briefly discussed "Managed"
This results in the following changes:
- if (folio_is_zone_device(folio))
+ if (folio_is_managed(folio))
and
+ if (node_is_managed(nid))
and
- N_MEMORY_PRIVATE
+ N_MEMORY_MANAGED
I'm less enthused about the last one, but I'm OK with it.
1) There is a desire to fix possible / online node masks to avoid
bad patterns, and maybe to audit existing nodemask users.
There's one UAPI issue with this, and it is that these masks
are exposed to userland by nature of existing node attributes
(N_MEMORY, N_CPU, N_POSSIBLE, etc).
I'm considering a name change from `possible` -> `init`, because
that's mostly how it is used (initialize some set of per-node
resources during __init, not at runtime). Externally, this set
would still be reported to uapi as possible.
2) There was concern about inconsistency towards nodes.
Along the lines of #1 - I'm thinking about actually adding explicit
service nodelists, which are populated at boot by __init, and by
hotplug if it's a general purpose node.
So we'd end up with things like:
for_each_ksm_node
for_each_lru_node
for_each_x_node
And we would retire such general defines like
for_each_node
for_each_online_node
For any "normal" node, it lands in all the lists.
For the buddy, we would have
for_buddy_node
For the default buddy-node list, and otherwise "managed" nodes would
still be removed from the standard fallback lists.
This means these nodes cannot be reached via nodemask arguments, and
can only be reached by `alloc_pages_node(nid, ...)` nid argument.
I *think* this might resolve the need for __GFP_PRIVATE.
But it's still dependent on system-wide for_each good behavior.
3) How do private nodes get into the lists in the new system?
For any private node, the registering driver (owner) and the managing
service are responsible for adding/removing the nodes from the list.
Example workflow:
0) CXL driver hotplug: add_memory_driver_managed(..., nid, owner)
a) owner=NULL means general purpose node
b) otherwise, reserve nid and (pgdat->owner = owner)
1) hotplug memory onto the node
a) if node is normal, add to all service lists
b) if node is "managed" (private), omit from all lists
2) CXL driver registers node with specific services, e.g.:
cram_register_node(..., nid, owner);
3) Service sets node enabled in appropriate node list, and starts
any appropriate services (kswapd, kcompactd, etc) for that node.
In some cases, nodes would have individual mappings onto services
(cram), in other cases the intent would be to have the memory
otherwise treated as general-purpose, but with special access
patterns (e.g. an LRU node not marked N_MEMORY).
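The workflow above can be sketched as a toy user-space model (service names and helper signatures are hypothetical):

```c
/* Toy model of the registration workflow above. Service names and the
 * helper signatures are hypothetical. */
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAX_NODES 4
enum { SVC_LRU, SVC_KSM, SVC_CRAM, NR_SERVICES };

struct node {
	const void *owner;		/* NULL => general purpose node */
	bool in_list[NR_SERVICES];	/* per-service nodelist membership */
};
static struct node nodes[MAX_NODES];

/* Steps 0/1: hotplug. General nodes join every service list; owned
 * ("managed") nodes join none until their owner opts them in. */
static void add_memory_driver_managed(int nid, const void *owner)
{
	nodes[nid].owner = owner;
	memset(nodes[nid].in_list, 0, sizeof(nodes[nid].in_list));
	if (!owner)
		for (int s = 0; s < NR_SERVICES; s++)
			nodes[nid].in_list[s] = true;
}

/* Steps 2/3: the owning driver registers its node with one specific
 * service, which enables the node in that service's list only. */
static void service_register_node(int nid, int svc, const void *owner)
{
	if (nodes[nid].owner && nodes[nid].owner == owner)
		nodes[nid].in_list[svc] = true;
}
```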
4) There are still concerns about random hooks around the kernel.
My thought is to make this less "random", and more a change
in the way we think about folio operations / node operations
for ALL nodes.
ZONE_DEVICE has a bunch of implicit filtering due to not being
on the LRU - but the intent is to allow flexible LRU membership.
So what if we just made these checks much more explicit overall:
if (folio_is_ksm_eligible(folio)) /* can be merged */
if (folio_is_lru_eligible(folio)) /* managed by lru services */
if (folio_is_demotion_eligible(folio)) /* demotion target */
if (folio_is_mbind_eligible(folio)) /* can be an mbind target */
Rather than rathole over what the set of bits should be, I think it's
more important to determine what the actual operation here will be.
right now I have this defined as essentially:
folio_pgdat(folio)->private.ops.mask & NP_OPT_KSM
But if we generalize to all nodes / all features, it's essentially
a per-pgdat bitmask lookup:
bool folio_is_ksm_eligible(folio) {
return test_bit(N_FEATURE_KSM, folio_pgdat(folio)->features);
}
With the bonus that all ZONE_DEVICE hooks can be sunk into these
checks, so there are many places in mm/ where this becomes essentially
a single-line change.
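As a user-space model of that per-pgdat feature-bitmask lookup (names mirror the sketch above but are hypothetical):

```c
/* User-space model of the per-pgdat feature-bitmask lookup sketched
 * above (names mirror the sketch but are hypothetical). */
#include <assert.h>
#include <stdbool.h>

enum { N_FEATURE_KSM, N_FEATURE_LRU, N_FEATURE_DEMOTION, NR_FEATURES };

struct pgdat {
	unsigned long features;		/* one bit per opted-in service */
};

/* Stand-in for test_bit(); a real folio_is_ksm_eligible() would test
 * the bit on folio_pgdat(folio). */
static bool node_feature(const struct pgdat *pgdat, int feat)
{
	return pgdat->features & (1UL << feat);
}

static bool folio_is_ksm_eligible(const struct pgdat *folio_pgdat)
{
	return node_feature(folio_pgdat, N_FEATURE_KSM);
}
```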
5) Lacking __GFP_PRIVATE, I have concern over fragility.
Previously, __GFP_PRIVATE created a "default opt-out" mechanism.
I *think* the above nodelist changes help here, specifically removing:
for_each_node()
for_each_online_node()
for_each_node_with_cpus()
The problem I foresee is with existing node_state masks, like
node_state((node), N_POSSIBLE)
node_state((node), N_CPU)
This might be tractable, but it may also simply be too fragile.
Right now only 3 or 4 locations use node_state() outside mm/, and
I'm tempted to try to sink these into mm/internal.h instead of
include/linux/nodemask.h. If that becomes unpalatable, then I will
lobby for __GFP_PRIVATE again (I may still anyway :P).
6) No callbacks by default, but nothing technically prevents it.
I was already in the process of killing this. I think mmu_notifier
does *most* of what the callbacks were doing anyway, so we can
probably collapse that.
7) David asked me to limit the MVP to Buddy + MBind support.
There are some odd interactions with pagecache, so that might evolve
too (we may not be able to reliably fault a file directly onto a private
node, TBD - mempolicy does not apply to page cache faults, so it's
just unreliable).
</wall of text>
~Gregory