From: Artem Kuzin <artem.kuzin@huawei.com>
To: Dave Hansen <dave.hansen@intel.com>,
	"Christoph Lameter (Ampere)" <cl@gentwo.org>
Cc: <x86@kernel.org>, <tglx@linutronix.de>, <mingo@redhat.com>,
	<bp@alien8.de>, <dave.hansen@linux.intel.com>, <hpa@zytor.com>,
	<luto@kernel.org>, <peterz@infradead.org>,
	<akpm@linux-foundation.org>, <urezki@gmail.com>,
	<hch@infradead.org>, <lstoakes@gmail.com>, <mcgrof@kernel.org>,
	<rmk+kernel@armlinux.org.uk>, <nikita.panov@huawei-partners.com>,
	<alexander.grubnikov@huawei.com>, <stepanov.anatoly@huawei.com>,
	<guohanjun@huawei.com>, <weiyongjun1@huawei.com>,
	<wangkefeng.wang@huawei.com>, <judy.chenhui@huawei.com>,
	<yusongping@huawei.com>, <kang.sun@huawei.com>,
	<linux-mm@kvack.org>, <linux-modules@vger.kernel.org>
Subject: Re: [PATCH RFC 02/12] mm: add config option and per-NUMA node VMS support
Date: Mon, 29 Jan 2024 09:22:59 +0300
Message-ID: <5a7485f8-a070-465d-9e17-e7cd3d2aaa80@huawei.com>
In-Reply-To: <6c5c72ab-01b1-45b4-9a33-529688e449eb@intel.com>


On 1/25/2024 6:07 PM, Dave Hansen wrote:
> On 1/9/24 08:57, Artem Kuzin wrote:
>> We already have per-NUMA node init_mm, but this is not enough.
>> We need this array of pointers in the task struct due to the proper pgd 
>> (per-NUMA node) should be used for threads of process that occupy more 
>> than one NUMA node.
> Let me repeat what Christoph said in a bit more forceful way.
>
> MAX_NUMNODES can be 1024.  You're adding 1023*8 bytes of overhead for
> each process ... everywhere, including on my single node laptop.  That's
> completely unacceptable.  You need to find another way to do this.
>
> I'd suggest just ignoring the problem for now.  Do multi-node processes
> with a later optimization.

Hi Dave, thanks to you and Christoph for the comments. I just gave some details on why this is necessary; I didn't want to push the MAX_NUMNODES solution forward. It is a temporary
thing, and this place definitely needs to be updated in the future.

As for possible options, for now I am thinking about two:
1. an additional config option that limits the number of page tables and the corresponding replicas
2. setting up per-NUMA node page tables and replicas lazily, allocating them on demand (rough sketch below)

But here I need to try and test everything.
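
To make the second option concrete, here is a minimal sketch of what lazy allocation could look like. The mm->numa_pgds array (sized by the runtime nr_node_ids rather than MAX_NUMNODES) and the numa_pgd_get() helper are hypothetical names for illustration, not something from the posted series:

	/*
	 * Sketch only: lazily allocate the replicated pgd for @nid the
	 * first time a thread of this mm runs on that node.  Sizing the
	 * array by nr_node_ids means a single-node machine pays for
	 * exactly one pointer.
	 */
	static pgd_t *numa_pgd_get(struct mm_struct *mm, int nid)
	{
		pgd_t *pgd = READ_ONCE(mm->numa_pgds[nid]);

		if (pgd)
			return pgd;

		pgd = pgd_alloc(mm);
		if (!pgd)
			return NULL;

		/* A sibling thread may have raced us; keep the winner. */
		if (cmpxchg(&mm->numa_pgds[nid], NULL, pgd)) {
			pgd_free(mm, pgd);
			pgd = READ_ONCE(mm->numa_pgds[nid]);
		}
		return pgd;
	}

The first option would essentially be a Kconfig int that caps the replica count at build time; that is simpler, but it still charges a fixed cost on machines with fewer nodes than the cap.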

Thanks!




Thread overview: 24+ messages
2023-12-28 13:10 [PATCH RFC 00/12] x86 NUMA-aware kernel replication artem.kuzin
2023-12-28 13:10 ` [PATCH RFC 01/12] mm: allow per-NUMA node local PUD/PMD allocation artem.kuzin
2023-12-28 13:10 ` [PATCH RFC 02/12] mm: add config option and per-NUMA node VMS support artem.kuzin
2024-01-03 19:43   ` Christoph Lameter (Ampere)
2024-01-09 16:57     ` Artem Kuzin
2024-01-25 15:07       ` Dave Hansen
2024-01-29  6:22         ` Artem Kuzin [this message]
2024-01-30 23:36           ` Dave Hansen
2023-12-28 13:10 ` [PATCH RFC 03/12] mm: per-NUMA node replication core infrastructure artem.kuzin
2023-12-28 13:10 ` [PATCH RFC 04/12] x86: add support of memory protection for NUMA replicas artem.kuzin
2024-01-09  6:46   ` Garg, Shivank
2024-01-09 15:53     ` a00561249@china.huawei.com
2024-01-10  6:19       ` Garg, Shivank
2023-12-28 13:10 ` [PATCH RFC 05/12] x86: enable memory protection for replicated memory artem.kuzin
2023-12-28 13:10 ` [PATCH RFC 06/12] x86: align kernel text and rodata using HUGE_PAGE boundary artem.kuzin
2023-12-28 13:10 ` [PATCH RFC 07/12] x86: enable per-NUMA node kernel text and rodata replication artem.kuzin
2023-12-28 13:10 ` [PATCH RFC 08/12] x86: make kernel text patching aware about replicas artem.kuzin
2023-12-28 13:10 ` [PATCH RFC 09/12] x86: add support of NUMA replication for efi page tables artem.kuzin
2023-12-28 13:10 ` [PATCH RFC 10/12] mm: add replicas allocation support for vmalloc artem.kuzin
2023-12-28 13:10 ` [PATCH RFC 11/12] x86: add kernel modules text and rodata replication support artem.kuzin
2023-12-28 13:10 ` [PATCH RFC 12/12] mm: set memory permissions for BPF handlers replicas artem.kuzin
2024-01-10 10:03 ` [PATCH RFC 00/12] x86 NUMA-aware kernel replication Russell King (Oracle)
2024-01-25  4:30 ` Garg, Shivank
2024-01-29  7:51   ` Artem Kuzin
