From: Anshuman Khandual <khandual@linux.vnet.ibm.com>
To: Dave Hansen <dave.hansen@intel.com>,
"lsf-pc@lists.linux-foundation.org"
<lsf-pc@lists.linux-foundation.org>
Cc: Linux-MM <linux-mm@kvack.org>
Subject: Re: [LSF/MM TOPIC/ATTEND] Memory Types
Date: Mon, 16 Jan 2017 16:29:10 +0530
Message-ID: <22fbcb9f-f69a-6532-691f-c0f757cf6b8b@linux.vnet.ibm.com>
In-Reply-To: <9a0ae921-34df-db23-a25e-022f189608f4@intel.com>
On 01/16/2017 10:59 AM, Dave Hansen wrote:
> Historically, computers have sped up memory accesses by either adding
> cache (or cache layers), or by moving to faster memory technologies
> (like the DDR3 to DDR4 transition). Today we are seeing new types of
> memory being exposed not as caches, but as RAM [1].
>
> I'd like to discuss how the NUMA APIs are being reused to manage not
> just the physical locality of memory, but the various types. I'd also
> like to discuss the parts of the NUMA API that are a bit lacking to
> manage these types, like the inability to have fallback lists based on
> memory type instead of location.
>
> I believe this needs to be a distinct discussion from Jerome's HMM
> topic. All of the cases we care about are cache-coherent and can be
> treated as "normal" RAM by the VM. The HMM model is for on-device
> memory and is largely managed outside the core VM.
Agreed. In the future, the core VM should be able to deal with these types of
coherent memory directly, as part of the generic NUMA API and page allocator
framework. The memory type must be a factor alongside NUMA distance, both when
placing memory from a NUMA perspective and in the page allocation fallback
sequence. I have been working on a very similar solution called CDM (Coherent
Device Memory), where we change the zonelist building process as well as the
mbind() interface to accommodate a type of coherent memory other than the
existing normal system RAM. Here are the related postings and discussions:
https://lkml.org/lkml/2016/10/24/19 (CDM with modified zonelists)
https://lkml.org/lkml/2016/11/22/339 (CDM with modified cpusets)
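To make the reuse of the existing NUMA API concrete, here is a minimal
userspace sketch (not the CDM patches themselves): it binds an anonymous
mapping to a coherent device node through the unmodified mbind() interface.
Node 1 is only an assumption for illustration; build with -lnuma.

#define _GNU_SOURCE
#include <numaif.h>     /* mbind(), MPOL_BIND */
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	size_t len = 64UL << 20;                /* 64 MB region */
	unsigned long nodemask = 1UL << 1;      /* bit 1: assumed CDM node */

	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Bind the range to node 1; page faults will allocate there. */
	if (mbind(buf, len, MPOL_BIND, &nodemask, 8 * sizeof(nodemask), 0)) {
		perror("mbind");
		return 1;
	}

	memset(buf, 1, len);    /* touch the pages so they are allocated */
	return 0;
}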
Though CDM is named "device" for now, it can very well evolve into a generic
solution that accommodates all kinds of coherent memory (it is that coherence
which warrants treating such memory on par with system RAM in the core VM in
the first place). I would like to attend and discuss this topic.
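As a small illustration of the fallback gap mentioned above: with today's
interface the closest approximation is a preferred policy, whose fallback is
driven purely by NUMA distance rather than by memory type. Again, node 1 is
only assumed to be the device node in this sketch:

#include <numaif.h>     /* set_mempolicy(), MPOL_PREFERRED; link with -lnuma */

/*
 * Prefer the (assumed) coherent device node for this task's allocations.
 * If node 1 runs out of memory, fallback walks the remaining nodes by
 * distance only; there is no way today to express "any coherent device
 * node first, then any normal RAM node".
 */
static long prefer_device_node(void)
{
	unsigned long nodemask = 1UL << 1;      /* bit 1: assumed CDM node */

	return set_mempolicy(MPOL_PREFERRED, &nodemask, 8 * sizeof(nodemask));
}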
Thread overview: 3+ messages
2017-01-16 5:29 [LSF/MM TOPIC/ATTEND] Memory Types Dave Hansen
2017-01-16 10:59 ` Anshuman Khandual [this message]
2017-01-16 22:45 ` John Hubbard