From: Larry Woodman <lwoodman@redhat.com>
To: Christopher Lameter <cl@linux.com>,
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: Michal Hocko <mhocko@kernel.org>,
lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
LKML <linux-kernel@vger.kernel.org>,
linux-nvme@lists.infradead.org
Subject: Re: [LSF/MM ATTEND ] memory reclaim with NUMA rebalancing
Date: Fri, 22 Feb 2019 09:12:15 -0500
Message-ID: <d491fcc4-97c1-168e-e1c5-1106ea77f080@redhat.com>
In-Reply-To: <01000168c431dbc5-65c68c0c-e853-4dda-9eef-8a9346834e59-000000@email.amazonses.com>
On 02/06/2019 02:03 PM, Christopher Lameter wrote:
> On Thu, 31 Jan 2019, Aneesh Kumar K.V wrote:
>
>> I would be interested in this topic too. I would like to
>> understand the API and how it can help exploit the different types of
>> devices we have on OpenCAPI.
Same here; we at Red Hat have quite a bit of experience running on several
large systems (32TB / 128 nodes / 1024 CPUs). Some of these systems have
NVRAM and can operate in memory mode as well as storage mode.
Larry
> So am I. We may want to rethink the whole NUMA API and the way we handle
> different types of memory with their divergent performance
> characteristics.
>
> We need some way to allow a better selection of memory from the kernel
> without creating too much complexity. We have new characteristics to
> cover:
>
> 1. Persistence (NVRAM), or generally a storage device that allows access to
> the medium via a RAM-like interface.
>
> 2. Coprocessor memory that can be shuffled back and forth to a device
> (HMM).
>
> 3. On-device memory (important since PCIe limitations are currently a
> problem; Intel is stuck on PCIe 3, and devices are starting to bypass the
> processor to gain performance).
>
> 4. High-density RAM (GDDR f.e.) with different caching behavior
> and/or different cacheline sizes.
>
> 5. Modifying access characteristics by reserving a slice of a cache (f.e.
> L3) for a specific memory region.
>
> 6. SRAM support (high speed memory on the processor itself or by using
> the processor cache to persist a cacheline)
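
A note on (1): that RAM-like interface is essentially what DAX already
gives userspace today. A minimal sketch, not a recommendation -- it
assumes a hypothetical file /mnt/pmem/log on a filesystem mounted with
-o dax on an NVDIMM namespace, and glibc >= 2.28 for MAP_SYNC:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/pmem/log", O_RDWR | O_CREAT, 0600);
	if (fd < 0 || ftruncate(fd, 4096)) { perror("setup"); return 1; }

	/* MAP_SYNC: the mapping's metadata is durable up front, so
	 * persistence reduces to plain CPU stores (plus cacheline
	 * writeback) -- no write()/fsync() in the data path. */
	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); return 1; }

	strcpy(p, "a plain store, persisted in place");
	munmap(p, 4096);
	return close(fd);
}

Real code would still issue clwb/sfence (or rely on a platform that
flushes caches on power failure) before considering the data durable.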
>
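
Similarly for (5): on x86 the kernel already exposes cache slicing
through resctrl (Intel CAT). A sketch of the control flow only -- the
mount point is the standard /sys/fs/resctrl, but the group name "slice"
and the L3 mask are made-up values for illustration:

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* write a single string to a sysfs-style control file */
static int put(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");
	if (!f) { perror(path); return -1; }
	fprintf(f, "%s\n", val);
	return fclose(f);
}

int main(void)
{
	char pid[16];

	mkdir("/sys/fs/resctrl/slice", 0755);		 /* new resource group */
	put("/sys/fs/resctrl/slice/schemata", "L3:0=f"); /* 4 ways, domain 0 */
	snprintf(pid, sizeof(pid), "%d", getpid());
	return put("/sys/fs/resctrl/slice/tasks", pid);	 /* move ourselves in */
}

What resctrl cannot express is tying the slice to a specific memory
region rather than to tasks/CPUs, which is what (5) is really asking for.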
> And then there is the old NUMA stuff, where only the latency to memory
> varies. But that was a particular solution targeted at scaling SMP systems
> through interconnects, and it was a mostly symmetric approach. The use of
> accelerators etc. and the above characteristics lead to more complex,
> asymmetric memory approaches that may be difficult to manage and use from
> kernel space.
>
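
For contrast, that "old NUMA stuff" boils down to choosing a node number,
i.e. distance is the only characteristic the current API lets callers
express. A minimal libnuma sketch (link with -lnuma; node 1 is an
arbitrary example -- on the machines above it might be an NVRAM-backed
node in memory mode):

#include <numa.h>
#include <stdio.h>

int main(void)
{
	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	/* 1 MiB physically placed on node 1; nothing here can say
	 * "persistent", "device-side", or "different cachelines" */
	void *buf = numa_alloc_onnode(1 << 20, 1);
	if (!buf) { perror("numa_alloc_onnode"); return 1; }

	numa_free(buf, 1 << 20);
	return 0;
}

Everything richer than that node number has to be carried today as
out-of-band knowledge in the application about which node is which.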
Thread overview: 9+ messages
2019-01-30 17:48 [LSF/MM TOPIC] memory reclaim with NUMA rebalancing Michal Hocko
2019-01-30 18:12 ` Keith Busch
2019-01-30 23:53 ` Yang Shi
2019-01-31 6:49 ` [LSF/MM ATTEND ] " Aneesh Kumar K.V
2019-02-06 19:03 ` Christopher Lameter
2019-02-22 13:48 ` Jonathan Cameron
2019-02-22 14:12 ` Larry Woodman [this message]
2019-02-23 13:27 ` Fengguang Wu
2019-02-23 13:42 ` Fengguang Wu