From: David Hildenbrand <david@redhat.com>
To: John Hubbard <jhubbard@nvidia.com>, Yan Zhao <yan.y.zhao@intel.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
kvm@vger.kernel.org, pbonzini@redhat.com, seanjc@google.com,
mike.kravetz@oracle.com, apopple@nvidia.com, jgg@nvidia.com,
rppt@kernel.org, akpm@linux-foundation.org, kevin.tian@intel.com,
Mel Gorman <mgorman@techsingularity.net>
Subject: Re: [RFC PATCH v2 0/5] Reduce NUMA balance caused TLB-shootdowns in a VM
Date: Fri, 11 Aug 2023 20:39:46 +0200
Message-ID: <846e9117-1f79-a5e0-1b14-3dba91ab8033@redhat.com>
In-Reply-To: <1ad2c33d-95e1-49ec-acd2-ac02b506974e@nvidia.com>
>> Ah, okay I see, thanks. That's indeed unfortunate.
>
> Sigh. All this difficulty reminds me that this mechanism was created
> in the early days of NUMA. Lately I sometimes wonder whether the
> cost, in complexity and CPU time, is still worth it on today's
> hardware.
>
> But of course I am deeply biased, so don't take that too seriously.
> See below. :)
:)
>>
>>>
>>> Then current KVM will unmap all notified pages from the secondary
>>> MMU in .invalidate_range_start(), which can include pages that are
>>> ultimately never set to PROT_NONE in the primary MMU.
>>>
>>> For VMs with pass-through devices, even though all guest pages are
>>> pinned, KVM still periodically unmaps pages in response to the
>>> .invalidate_range_start() notification from automatic NUMA
>>> balancing, which is a waste.
>>
>> Should we instead disable NUMA hinting for such VMAs (for example,
>> from QEMU/the hypervisor), which knows that any NUMA hinting
>> activity on these ranges would be a complete waste of time? I
>> recall that John H. once mentioned that there are similar issues
>> with GPU memory: NUMA hinting is actually counter-productive and
>> they end up disabling it.
>>
>
> Yes, NUMA balancing is incredibly harmful to performance for GPUs
> and accelerators that map memory... and VMs as well, it seems.
> Basically, anything that has its own processors and page tables
> needs to be left strictly alone by NUMA balancing, because the
> kernel is (still, even today) unaware of what those processors are
> doing, and so it has no way to do productive NUMA balancing.
Is there any existing way we could handle that better on a per-VMA
level, or on the process level? Any magic toggles?
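Something like the completely made-up sketch below is what I have in
mind: a new, hypothetical VM_NO_NUMA_BALANCING flag (set on such VMAs
by the hypervisor, e.g., via a new madvise() hint) that
task_numa_work() would check while walking the VMAs:

/*
 * Hypothetical sketch only -- VM_NO_NUMA_BALANCING does not exist
 * upstream. Skip VMAs that user space (e.g., QEMU, for pinned guest
 * memory) opted out of automatic NUMA balancing:
 */
for_each_vma(vmi, vma) {
        if (vma->vm_flags & VM_NO_NUMA_BALANCING)
                continue;
        /* ... existing change_prot_numa() handling ... */
}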
MMF_HAS_PINNED might be too restrictive. MMF_HAS_PINNED_LONGTERM
might be better, but with things like io_uring it would eventually
still be too restrictive.
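As a sketch (MMF_HAS_PINNED exists today; the helper and
MMF_HAS_PINNED_LONGTERM are made up):

#include <linux/sched/coredump.h>       /* MMF_HAS_PINNED */

/* Hypothetical: should automatic NUMA balancing skip this process? */
static bool mm_skip_numa_balancing(struct mm_struct *mm)
{
        /*
         * MMF_HAS_PINNED is set on the first FOLL_PIN and never
         * cleared, so this also trips on short-term pins. A (made-up)
         * MMF_HAS_PINNED_LONGTERM would only cover FOLL_LONGTERM
         * pins, but io_uring fixed buffers would still set it.
         */
        return test_bit(MMF_HAS_PINNED, &mm->flags);
}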
I recall that setting a mempolicy can prevent automatic NUMA
balancing from becoming active, but that might be undesired.
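For completeness, the user-space side I was thinking of (IIRC an
explicit policy set via mbind() keeps automatic NUMA balancing away
from these pages, at the price of giving up any placement
flexibility):

#include <numaif.h>             /* mbind(); link with -lnuma */

/* Bind the (page-aligned) range [addr, addr + len) to NUMA node 0. */
static long bind_to_node0(void *addr, unsigned long len)
{
        unsigned long nodemask = 1UL << 0;      /* node 0 only */

        return mbind(addr, len, MPOL_BIND, &nodemask,
                     sizeof(nodemask) * 8, 0);
}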
CCing Mel.
--
Cheers,
David / dhildenb