[RFC] Memory tiering kernel alignment

From: David Rientjes @ 2024-01-25 18:26 UTC
  To: John Hubbard, Zi Yan, Bharata B Rao, Dave Jiang, Aneesh Kumar K.V,
	Huang, Ying, Alistair Popple, Christoph Lameter, Andrew Morton,
	Linus Torvalds, Dave Hansen, Mel Gorman, Jon Grimm, Gregory Price,
	Brian Morris, Wei Xu, Johannes Weiner
  Cc: linux-mm

Hi everybody,

There is a lot of excitement around upcoming CXL type 3 memory expansion
devices and their cost-savings potential.  As the industry starts to
adopt this technology, a key component of strategic planning is how the
upstream Linux kernel will support the various tiered configurations
needed to meet different user needs.  I think it goes without saying
that this is quite interesting to cloud providers as well as other
hyperscalers :)

I think this discussion would benefit from a collaborative approach
between the various stakeholders and interested parties.  This is
partly because there are several different use cases that need
different support models, but also because there is great incentive to
move "with" upstream Linux for this support rather than having multiple
parties bring up their own stacks only to find that they are diverging
from upstream rather than converging with it.

I'd like to learn whether there is interest in forming a "Linux Memory
Tiering Work Group" to share ideas, discuss multi-faceted approaches,
and keep track of work items.

Recent discussions have shown that there is widespread interest in some
very foundational topics for this technology, such as:

 - Decoupling CPU balancing from memory balancing (or obsoleting CPU
   balancing entirely)

   + John Hubbard notes this would be useful for GPUs:

      a) GPUs have their own processors that are invisible to the kernel's
         NUMA "which tasks are active on which NUMA nodes" calculations,
         and

      b) Similar to where CXL is generally going, we have already built
         fully memory-coherent hardware, which include memory-only NUMA
         nodes.

 - In-kernel hot memory abstraction, informed by hardware hinting
   drivers (including some architectures, such as Power10), usable as a
   backend for NUMA Balancing promotion and for other areas of the
   kernel, such as transparent hugepage utilization

 - NUMA and memory tiering enlightenment for accelerators, such as for
   optimal use of GPU memory, which is extremely important for a cloud
   provider (hint hint :)

 - Asynchronous memory promotion, independent of task_numa_fault(),
   while considering the cost of page migration (due to identifying
   cold memory); a sketch of the underlying migration primitive
   follows this list
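
For concreteness, here is a minimal userspace sketch of the migration
primitive that any promotion scheme ultimately drives, using the
existing move_pages(2) syscall via libnuma.  The node numbers are
assumptions for a machine with DRAM on node 0 and a CXL expander on
another node, and an in-kernel asynchronous promoter would of course
use internal machinery rather than this syscall, so treat this purely
as an illustration of the per-page cost being weighed:

    /* Illustrative only: promote one page to an assumed DRAM node.
     * Build with: cc promote.c -lnuma
     */
    #include <numaif.h>   /* move_pages(), MPOL_MF_MOVE */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            long page_size = sysconf(_SC_PAGESIZE);
            void *buf;

            /* Allocate one page and touch it so it is backed. */
            if (posix_memalign(&buf, page_size, page_size))
                    return 1;
            memset(buf, 0, page_size);

            void *pages[1] = { buf };
            int nodes[1] = { 0 };  /* assumed DRAM promotion target */
            int status[1];

            /*
             * pid 0 means the calling process; MPOL_MF_MOVE moves
             * only pages mapped exclusively by this process.
             */
            long ret = move_pages(0, 1, pages, nodes, status,
                                  MPOL_MF_MOVE);
            if (ret < 0) {
                    perror("move_pages");
                    return 1;
            }
            /* status[0] is the node the page now resides on, or a
             * negative errno if it could not be migrated. */
            printf("status[0] = %d\n", status[0]);

            free(buf);
            return 0;
    }

Moving your own pages with MPOL_MF_MOVE needs no special privileges;
the open question above is how the kernel itself should drive the
equivalent asynchronously from hotness information rather than from
the task_numa_fault() path.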

It looks like there is already some interest in such a working group that
would have a biweekly discussion of shared interests with the goal of
accelerating design, development, testing, and division of work:

Alistair Popple
Aneesh Kumar K V
Brian Morris
Christoph Lameter
Dan Williams
Gregory Price
Grimm, Jon
Huang, Ying
Johannes Weiner
John Hubbard
Zi Yan

Specifically for the in-kernel hot memory abstraction topic, Google and
Meta recently published an OCP base specification, "Hyperscale CXL
Tiered Memory Expander Specification", available at
https://drive.google.com/file/d/1fFfU7dFmCyl6V9-9qiakdWaDr9d38ewZ/view?usp=drive_link
that would be great to discuss.

There is also on-going work in the CXL Consortium to standardize some of
the abstractions for CXL 3.1.

If you are interested in this topic and your name doesn't appear above
(I already got you :), please:

 - reply-all to this email to express interest and expand upon the list
   of topics above to represent additional areas of interest that should
   be included, *or*

 - email me privately to express interest to make sure you are included

Perhaps I'm overly optimistic, but one thing that would be absolutely
*amazing* would be if we all had a very clear and understandable vision
for how Linux will support the wide variety of use cases, even before
that work is fully implemented (or even designed), by the time of
LSF/MM/BPF 2024 in May.

Thanks!

