From: Sean Christopherson <seanjc@google.com>
To: Ben Gardon <bgardon@google.com>
Cc: Vipin Sharma <vipinsh@google.com>,
dmatlack@google.com, pbonzini@redhat.com, kvm@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [Patch v2 1/2] KVM: x86/mmu: Allocate page table pages on TDP splits during dirty log enable on the underlying page's numa node
Date: Mon, 5 Dec 2022 18:47:49 +0000 [thread overview]
Message-ID: <Y4481WPLstNidb9X@google.com> (raw)
In-Reply-To: <CANgfPd_sZoW6gRNgs44BbBu4RhwqNPjUO-=biJ++L5d8LpU3zg@mail.gmail.com>
Side topic, the shortlog is way, way too long. The purpose of the shortlog is
to provide a synopsis of the change, not to describe the change in detail.
I also think this patch should be 2/2, with the more generic support added along
with the module param (or capability) in 1/2. E.g. to yield something like
KVM: x86/mmu: Add a module param to make per-vCPU SPTs NUMA aware
KVM: x86/mmu: Honor NUMA awareness for per-VM page table allocations
On Mon, Dec 05, 2022, Ben Gardon wrote:
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 4736d7849c60..0554dfc55553 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -90,6 +90,9 @@ __MODULE_PARM_TYPE(nx_huge_pages_recovery_period_ms, "uint");
> > static bool __read_mostly force_flush_and_sync_on_reuse;
> > module_param_named(flush_on_reuse, force_flush_and_sync_on_reuse, bool, 0644);
> >
> > +static bool __read_mostly numa_aware_pagetable = true;
> > +module_param_named(numa_aware_pagetable, numa_aware_pagetable, bool, 0644);
> > +
>
> I'm usually all for having module params to control things, but in
> this case I don't think it provides much value because whether this
> NUMA optimization is useful or not is going to depend more on VM size
> and workload than anything else. If we wanted to make this
> configurable, a VM capability would probably be a better mechanism so
> that userspace could leave it off when running small,
> non-performance-sensitive VMs
Would we actually want to turn it off in this case? IIUC, @nid is just the
preferred node, i.e. failure to allocate for the preferred @nid will result in
falling back to other nodes, not outright failure. So the pathological worst
case scenario would be that, for a system with VMs that don't care about
performance, all of a node's memory is allocated due to all VMs starting on
that node.
On the flip side, if a system had a mix of VM shapes, I think we'd want even the
performance insensitive VMs to be NUMA aware so that they can be sequestered on
their own node(s), i.e. don't "steal" memory from the VMs that are performance
sensitive and have been affined to a single node.
> and turn it on when running large, multi-node VMs. A whole-host module
> parameter seems overly restrictive.
Thread overview: 17+ messages
2022-12-01 19:57 [Patch v2 0/2] NUMA aware page table allocation Vipin Sharma
2022-12-01 19:57 ` [Patch v2 1/2] KVM: x86/mmu: Allocate page table pages on TDP splits during dirty log enable on the underlying page's numa node Vipin Sharma
2022-12-05 18:24 ` Ben Gardon
2022-12-05 18:47 ` Sean Christopherson [this message]
2022-12-05 18:51 ` Ben Gardon
2022-12-05 21:07 ` Sean Christopherson
2022-12-08 22:44 ` Vipin Sharma
2022-12-01 19:57 ` [Patch v2 2/2] KVM: x86/mmu: Allocate page table pages on NUMA node of underlying pages Vipin Sharma
2022-12-05 18:17 ` Ben Gardon
2022-12-05 23:40 ` Vipin Sharma
2022-12-06 18:17 ` Ben Gardon
[not found] ` <CAHVum0f_6UQvcqWAJxDJyL_LN-6ryAXNuh9xY6nFtLxCOMtoXA@mail.gmail.com>
[not found] ` <CANgfPd-XkHPZyFsPe75WbUrufLpKtdr1Neri1JrrApQrjRLRJw@mail.gmail.com>
[not found] ` <CAHVum0dkKSY9e90xgfBVBHUqntwJOmONK+TYBXFEwg6acvUrAw@mail.gmail.com>
2022-12-07 19:05 ` Vipin Sharma
2022-12-09 0:06 ` David Matlack
2022-12-09 18:47 ` Vipin Sharma
2022-12-09 0:27 ` David Matlack
2022-12-09 18:51 ` Vipin Sharma
2022-12-09 0:21 ` [Patch v2 0/2] NUMA aware page table allocation David Matlack