From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sean Christopherson
Subject: Re: [PATCH v6 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
Date: Mon, 10 Sep 2018 06:29:12 -0700
Message-ID: <1536586152.11460.40.camel@intel.com>
References: <1536343050-18532-1-git-send-email-brijesh.singh@amd.com>
	<1536343050-18532-6-git-send-email-brijesh.singh@amd.com>
	<20180910122727.GE21815@zn.tnic>
	<026d5ca5-7b77-de6c-477e-ff39f0291ac0@amd.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	Tom Lendacky, Thomas Gleixner, "H. Peter Anvin", Paolo Bonzini,
	Radim Krčmář
To: Brijesh Singh, Borislav Petkov
Return-path:
In-Reply-To: <026d5ca5-7b77-de6c-477e-ff39f0291ac0@amd.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On Mon, 2018-09-10 at 08:15 -0500, Brijesh Singh wrote:
> 
> On 9/10/18 7:27 AM, Borislav Petkov wrote:
> > 
> > On Fri, Sep 07, 2018 at 12:57:30PM -0500, Brijesh Singh wrote:
> > > 
> > > diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
> > > index 376fd3a..6086b56 100644
> > > --- a/arch/x86/kernel/kvmclock.c
> > > +++ b/arch/x86/kernel/kvmclock.c
> > > @@ -65,6 +65,15 @@ static struct pvclock_vsyscall_time_info
> > >  static struct pvclock_wall_clock wall_clock __decrypted;
> > >  static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
> > >  
> > > +#ifdef CONFIG_AMD_MEM_ENCRYPT
> > > +/*
> > > + * The auxiliary array will be used when SEV is active. In non-SEV case,
> > > + * it will be freed by free_decrypted_mem().
> > > + */
> > > +static struct pvclock_vsyscall_time_info
> > > +			hv_clock_aux[NR_CPUS] __decrypted_aux;
> > 
> > Hmm, so worst case that's 64 4K pages:
> > 
> > (8192*32)/4096 = 64 4K pages.
> 
> We can minimize the worst case memory usage. The number of VCPUs
> supported by KVM maybe less than NR_CPUS.
> e.g Currently KVM_MAX_VCPUS is
> set to 288

KVM_MAX_VCPUS is a property of the host, whereas this code runs in the
guest, e.g. KVM_MAX_VCPUS could be 2048 in the host for all we know.

> (288 * 64)/4096 = 4 4K pages.
> 
> (pvclock_vsyscall_time_info is cache aligned so it will be 64 bytes)

Ah, I was wondering why my calculations were always different than
yours.  I was looking at struct pvclock_vcpu_time_info, which is 32
bytes.

> #if NR_CPUS > KVM_MAX_VCPUS
> #define HV_AUX_ARRAY_SIZE  KVM_MAX_VCPUS
> #else
> #define HV_AUX_ARRAY_SIZE  NR_CPUS
> #endif
> 
> static struct pvclock_vsyscall_time_info
>                         hv_clock_aux[HV_AUX_ARRAY_SIZE] __decrypted_aux;