Subject: Re: [RFC PATCH v5 047/104] KVM: x86/mmu: add a private pointer to struct kvm_mmu_page
From: Kai Huang
To: isaku.yamahata@intel.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@gmail.com, Paolo Bonzini, Jim Mattson, erdemaktas@google.com, Connor Kuehl, Sean Christopherson
Date: Thu, 07 Apr 2022 11:43:30 +1200
In-Reply-To: <499d1fd01b0d1d9a8b46a55bb863afd0c76f1111.1646422845.git.isaku.yamahata@intel.com>
References: <499d1fd01b0d1d9a8b46a55bb863afd0c76f1111.1646422845.git.isaku.yamahata@intel.com>
List-ID: kvm@vger.kernel.org

On Fri, 2022-03-04 at 11:49 -0800, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata
>
> Add a private pointer to kvm_mmu_page for private EPT.
>
> To resolve KVM page fault on private GPA, it will allocate additional page
> for Secure EPT in addition to private EPT. Add memory allocator for it and
> topup its memory allocator before resolving KVM page fault similar to
> shared EPT page. Allocation of those memory will be done for TDP MMU by
> alloc_tdp_mmu_page(). Freeing those memory will be done for TDP MMU on
> behalf of kvm_tdp_mmu_zap_all() called by kvm_mmu_zap_all(). Private EPT
> page needs to carry one more page used for Secure EPT in addition to the
> private EPT page.
> Add private pointer to struct kvm_mmu_page for that purpose and add
> helper functions to allocate/free a page for Secure EPT. Also add helper
> functions to check if a given kvm_mmu_page is private.
>
> Signed-off-by: Isaku Yamahata
> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/mmu/mmu.c          |  9 ++++
>  arch/x86/kvm/mmu/mmu_internal.h | 84 +++++++++++++++++++++++++++++++++
>  arch/x86/kvm/mmu/tdp_mmu.c      |  3 ++
>  4 files changed, 97 insertions(+)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index fcab2337819c..0c8cc7d73371 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -689,6 +689,7 @@ struct kvm_vcpu_arch {
>  	struct kvm_mmu_memory_cache mmu_shadow_page_cache;
>  	struct kvm_mmu_memory_cache mmu_gfn_array_cache;
>  	struct kvm_mmu_memory_cache mmu_page_header_cache;
> +	struct kvm_mmu_memory_cache mmu_private_sp_cache;
>
>  	/*
>  	 * QEMU userspace and the guest each have their own FPU state.
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6e9847b1124b..8def8b97978f 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -758,6 +758,13 @@ static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
>  	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
>  	int start, end, i, r;
>
> +	if (kvm_gfn_stolen_mask(vcpu->kvm)) {

Please get rid of kvm_gfn_stolen_mask().

> +		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_private_sp_cache,
> +					       PT64_ROOT_MAX_LEVEL);
> +		if (r)
> +			return r;
> +	}
> +
>  	if (shadow_init_value)
>  		start = kvm_mmu_memory_cache_nr_free_objects(mc);
>
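
For example, something along these lines could gate the topup instead
(untested sketch; kvm_vm_is_private() below is only a placeholder for
whatever VM-type predicate this series ends up exposing, not an existing
helper):

	/*
	 * Only VMs with private memory (TDs) need the extra Secure-EPT
	 * page, so gate the private cache topup on the VM type rather
	 * than on the stolen-GFN mask.  kvm_vm_is_private() is
	 * illustrative only.
	 */
	if (kvm_vm_is_private(vcpu->kvm)) {
		r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_private_sp_cache,
					       PT64_ROOT_MAX_LEVEL);
		if (r)
			return r;
	}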