From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Brijesh Singh <brijesh.singh@amd.com>, Borislav Petkov <bp@suse.de>
Cc: x86@kernel.org, linux-kernel@vger.kernel.org,
kvm@vger.kernel.org, "Tom Lendacky" <thomas.lendacky@amd.com>,
"Thomas Gleixner" <tglx@linutronix.de>,
"H. Peter Anvin" <hpa@zytor.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Radim Krčmář" <rkrcmar@redhat.com>
Subject: Re: [PATCH v6 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
Date: Mon, 10 Sep 2018 08:28:17 -0700 [thread overview]
Message-ID: <1536593297.11460.72.camel@intel.com> (raw)
In-Reply-To: <097eb5f5-2cd9-8b08-32c5-d90c8e0cbb6d@amd.com>

On Mon, 2018-09-10 at 10:10 -0500, Brijesh Singh wrote:
>
> On 09/10/2018 08:29 AM, Sean Christopherson wrote:
> ...
>
> > > > > + */
> > > > > +static struct pvclock_vsyscall_time_info
> > > > > + hv_clock_aux[NR_CPUS] __decrypted_aux;
> > > > Hmm, so worst case that's 64 4K pages:
> > > >
> > > > (8192*32)/4096 = 64 4K pages.
> > > We can minimize the worst-case memory usage. The number of VCPUs
> > > supported by KVM may be less than NR_CPUS, e.g. currently
> > > KVM_MAX_VCPUS is set to 288.
> > KVM_MAX_VCPUS is a property of the host, whereas this code runs in the
> > guest, e.g. KVM_MAX_VCPUS could be 2048 in the host for all we know.
> >
>
> IIRC, at guest creation time qemu checks the host's supported vCPU
> count. If the count is greater than KVM_MAX_VCPUS then it will fail
> to launch the guest (or fail to hot-plug vcpus). In other words, the
> number of vcpus in a KVM guest will never be > KVM_MAX_VCPUS.
>
> Am I missing something?
KVM_MAX_VCPUS is a definition for use in the *host*; it's even defined
in kvm_host.h. The guest's pvclock code won't get magically recompiled
if KVM_MAX_VCPUS is changed in the host. KVM_MAX_VCPUS is an arbitrary
value in the sense that there isn't a fundamental hard limit, i.e. the
value can be changed, either for a custom KVM build or in mainline,
e.g. it was bumped in 2016:
commit 682f732ecf7396e9d6fe24d44738966699fae6c0
Author: Radim Krčmář <rkrcmar@redhat.com>
Date: Tue Jul 12 22:09:29 2016 +0200
KVM: x86: bump MAX_VCPUS to 288
288 is in high demand because of Knights Landing CPU.
We cannot set the limit to 640k, because that would be wasting space.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 074b5c760327..21a40dc7aad6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -34,7 +34,7 @@
#include <asm/asm.h>
#include <asm/kvm_page_track.h>
-#define KVM_MAX_VCPUS 255
+#define KVM_MAX_VCPUS 288
#define KVM_SOFT_MAX_VCPUS 240
#define KVM_USER_MEM_SLOTS 509
/* memory slots that are not exposed to userspace */