From: Aneesh Kumar K V <aneesh.kumar@linux.ibm.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org, akpm@linux-foundation.org,
Wei Xu <weixugc@google.com>, Huang Ying <ying.huang@intel.com>,
Yang Shi <shy828301@gmail.com>,
Davidlohr Bueso <dave@stgolabs.net>,
Tim C Chen <tim.c.chen@intel.com>,
Michal Hocko <mhocko@kernel.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Hesham Almatary <hesham.almatary@huawei.com>,
Dave Hansen <dave.hansen@intel.com>,
Jonathan Cameron <Jonathan.Cameron@huawei.com>,
Alistair Popple <apopple@nvidia.com>,
Dan Williams <dan.j.williams@intel.com>,
Johannes Weiner <hannes@cmpxchg.org>,
jvgediya.oss@gmail.com
Subject: Re: [PATCH v8 00/12] mm/demotion: Memory tiers and demotion
Date: Tue, 5 Jul 2022 09:47:58 +0530 [thread overview]
Message-ID: <ee0539e9-e123-e871-dae5-30d09e010c76@linux.ibm.com> (raw)
In-Reply-To: <YsMAeU2fwEoysohr@casper.infradead.org>
On 7/4/22 8:30 PM, Matthew Wilcox wrote:
> On Mon, Jul 04, 2022 at 12:36:00PM +0530, Aneesh Kumar K.V wrote:
>> * The current tier initialization code always initializes
>> each memory-only NUMA node into a lower tier. But a memory-only
>> NUMA node may have a high performance memory device (e.g. a DRAM
>> device attached via CXL.mem or a DRAM-backed memory-only node on
>> a virtual machine) and should be put into a higher tier.
>>
>> * The current tier hierarchy always puts CPU nodes into the top
>> tier. But on a system with HBM (e.g. GPU memory) devices, these
>> memory-only HBM NUMA nodes should be in the top tier, and DRAM nodes
>> with CPUs are better to be placed into the next lower tier.
>
> These things that you identify as problems seem perfectly sensible to me.
> Memory which is attached to this CPU has the lowest latency and should
> be preferred over more remote memory, no matter its bandwidth.
Allocation already prefers local memory over remote memory; memory tiers only come into
play during demotion. With the current code, the kernel demotes cold pages from DRAM into
these special device memories because they show up as memory-only NUMA nodes and are
therefore placed in a lower tier. In many cases (e.g. GPU memory) the desired direction is
the opposite: cold pages should be demoted from GPU memory to DRAM, or even to slower memory.
This patchset builds a framework that allows such demotion policies to be expressed.
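To illustrate the direction (this is only a minimal user-space sketch, not the actual
implementation in this series), demotion-target selection based on explicit tiers could
look like the following; the tier assignments, node IDs, and distance table are made-up
example values:

#include <stdio.h>

#define MAX_NODES 4
#define NO_TARGET (-1)

/* Example topology (illustrative values only):
 * node 0: DRAM with CPUs        -> tier 1
 * node 1: GPU HBM, memory-only  -> tier 0 (top tier despite having no CPUs)
 * node 2: CXL-attached DRAM     -> tier 1
 * node 3: slow/PMEM-like memory -> tier 2
 */
static const int node_tier[MAX_NODES] = { 1, 0, 1, 2 };

/* Symmetric NUMA-style distance table (smaller is closer). */
static const int distance[MAX_NODES][MAX_NODES] = {
	{ 10, 20, 30, 40 },
	{ 20, 10, 40, 50 },
	{ 30, 40, 10, 20 },
	{ 40, 50, 20, 10 },
};

/*
 * Pick a demotion target for @node: the closest node that sits in a
 * strictly lower tier (larger tier number in this example). Returns
 * NO_TARGET if the node is already in the lowest populated tier.
 */
static int demotion_target(int node)
{
	int best = NO_TARGET;

	for (int n = 0; n < MAX_NODES; n++) {
		if (n == node || node_tier[n] <= node_tier[node])
			continue;
		if (best == NO_TARGET || distance[node][n] < distance[node][best])
			best = n;
	}
	return best;
}

int main(void)
{
	for (int node = 0; node < MAX_NODES; node++)
		printf("node %d (tier %d) -> demotion target %d\n",
		       node, node_tier[node], demotion_target(node));
	return 0;
}

With the example values above, the memory-only GPU node (node 1) demotes to DRAM (node 0)
rather than the other way around, and DRAM demotes to the slower tier, which is the ordering
argued for in the cover letter.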
-aneesh
Thread overview: 42+ messages
2022-07-04 7:06 [PATCH v8 00/12] mm/demotion: Memory tiers and demotion Aneesh Kumar K.V
2022-07-04 7:06 ` [PATCH v8 01/12] mm/demotion: Add support for explicit memory tiers Aneesh Kumar K.V
2022-07-04 7:06 ` [PATCH v8 02/12] mm/demotion: Move memory demotion related code Aneesh Kumar K.V
2022-07-04 7:06 ` [PATCH v8 03/12] mm/demotion/dax/kmem: Set node's memory tier to MEMORY_TIER_PMEM Aneesh Kumar K.V
2022-07-04 7:06 ` [PATCH v8 04/12] mm/demotion: Add hotplug callbacks to handle new numa node onlined Aneesh Kumar K.V
2022-07-04 7:06 ` [PATCH v8 05/12] mm/demotion: Build demotion targets based on explicit memory tiers Aneesh Kumar K.V
2022-07-04 7:06 ` [PATCH v8 06/12] mm/demotion: Expose memory tier details via sysfs Aneesh Kumar K.V
2022-07-04 7:06 ` [PATCH v8 07/12] mm/demotion: Add per node memory tier attribute to sysfs Aneesh Kumar K.V
2022-07-04 7:06 ` [PATCH v8 08/12] mm/demotion: Add pg_data_t member to track node memory tier details Aneesh Kumar K.V
2022-07-04 7:06 ` [PATCH v8 09/12] mm/demotion: Demote pages according to allocation fallback order Aneesh Kumar K.V
2022-07-04 7:06 ` [PATCH v8 10/12] mm/demotion: Update node_is_toptier to work with memory tiers Aneesh Kumar K.V
2022-07-04 7:06 ` [PATCH v8 11/12] mm/demotion: Add documentation for memory tiering Aneesh Kumar K.V
2022-07-04 7:06 ` [PATCH v8 12/12] mm/demotion: Add sysfs ABI documentation Aneesh Kumar K.V
2022-07-04 15:00 ` [PATCH v8 00/12] mm/demotion: Memory tiers and demotion Matthew Wilcox
2022-07-05 3:45 ` Alistair Popple
2022-07-05 4:17 ` Aneesh Kumar K V [this message]
2022-07-05 4:29 ` Huang, Ying
2022-07-05 5:22 ` Aneesh Kumar K V
2022-07-12 1:16 ` Huang, Ying
2022-07-12 4:42 ` Aneesh Kumar K V
2022-07-12 5:09 ` Aneesh Kumar K V
2022-07-12 18:02 ` Yang Shi
2022-07-13 3:42 ` Huang, Ying
2022-07-13 6:38 ` Wei Xu
2022-07-13 6:39 ` Wei Xu
2022-07-13 7:25 ` Aneesh Kumar K V
2022-07-13 8:20 ` Huang, Ying
2022-07-12 6:59 ` Huang, Ying
2022-07-12 7:31 ` Aneesh Kumar K V
2022-07-12 8:48 ` Huang, Ying
2022-07-12 9:17 ` Aneesh Kumar K V
2022-07-13 2:59 ` Huang, Ying
2022-07-13 6:46 ` Wei Xu
2022-07-13 8:17 ` Huang, Ying
2022-07-19 14:00 ` Jonathan Cameron
2022-07-25 6:02 ` Huang, Ying
2022-07-13 9:44 ` Aneesh Kumar K.V
2022-07-13 9:40 ` Aneesh Kumar K.V
2022-07-14 4:56 ` Huang, Ying
2022-07-14 5:29 ` Aneesh Kumar K V
2022-07-14 7:21 ` Huang, Ying
2022-07-11 15:29 ` Aneesh Kumar K.V