From mboxrd@z Thu Jan  1 00:00:00 1970
From: Borislav Petkov
Subject: Re: [PATCH v5 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
Date: Thu, 6 Sep 2018 17:19:38 +0200
Message-ID: <20180906151938.GD11144@zn.tnic>
References: <1536234182-2809-1-git-send-email-brijesh.singh@amd.com>
 <1536234182-2809-6-git-send-email-brijesh.singh@amd.com>
 <20180906122423.GA11144@zn.tnic>
 <20180906135041.GB32336@linux.intel.com>
 <20180906144342.GB11144@zn.tnic>
 <20180906145639.GA1522@linux.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Cc: Brijesh Singh, x86@kernel.org, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, Tom Lendacky, Thomas Gleixner, "H. Peter Anvin",
 Paolo Bonzini, Radim Krčmář
To: Sean Christopherson
Return-path:
Content-Disposition: inline
In-Reply-To: <20180906145639.GA1522@linux.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On Thu, Sep 06, 2018 at 07:56:40AM -0700, Sean Christopherson wrote:
> Wouldn't that result in @hv_clock_boot being incorrectly freed in the
> !SEV case?

Ok, maybe I'm missing something but why do we need 4K per CPU? Why can't
we map all those pages which contain the clock variable, decrypted, in
all guests' page tables?

Basically (NR_CPUS * sizeof(struct pvclock_vsyscall_time_info)) / 4096
pages.

For the !SEV case then nothing changes.

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)