From: Binbin Wu <binbin.wu@linux.intel.com>
To: isaku.yamahata@intel.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	isaku.yamahata@gmail.com, Paolo Bonzini <pbonzini@redhat.com>,
	erdemaktas@google.com, Sean Christopherson <seanjc@google.com>,
	Sagi Shahar <sagis@google.com>,
	David Matlack <dmatlack@google.com>,
	Kai Huang <kai.huang@intel.com>,
	Zhi Wang <zhi.wang.linux@gmail.com>,
	chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com,
	Xiaoyao Li <xiaoyao.li@intel.com>
Subject: Re: [PATCH v6 08/16] KVM: TDX: Pin pages via get_page() right before ADD/AUG'ed to TDs
Date: Mon, 20 Nov 2023 19:05:39 +0800	[thread overview]
Message-ID: <c62e8f7e-46ed-47e3-b7ff-231bd1f343e5@linux.intel.com> (raw)
In-Reply-To: <c8d8b880963cc6799b681f7905a956022e47f16f.1699368363.git.isaku.yamahata@intel.com>



On 11/7/2023 11:00 PM, isaku.yamahata@intel.com wrote:
> From: Xiaoyao Li <xiaoyao.li@intel.com>
>
> When kvm_faultin_pfn() is called, it doesn't have the info regarding which
> page level the gfn will be mapped at. Hence it doesn't know whether to pin
> a 4K page or a 2M page.
>
> Move the guest private page pinning logic to right before
> TDH_MEM_PAGE_ADD/AUG(), since at that point the page level is known.
This patch looks strange: the code has nothing to do with the shortlog.
It seems the change in this patch has already been covered by patch 06/16.

Did something go wrong when formatting the patch?

>
> Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> ---
>   arch/x86/kvm/vmx/tdx.c | 15 ++++++++-------
>   1 file changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> index e4167f08b58b..7b81811eb404 100644
> --- a/arch/x86/kvm/vmx/tdx.c
> +++ b/arch/x86/kvm/vmx/tdx.c
> @@ -1454,7 +1454,8 @@ static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
>   	}
>   }
>   
> -static void tdx_unpin(struct kvm *kvm, kvm_pfn_t pfn, int level)
> +static void tdx_unpin(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
> +		      enum pg_level level)
>   {
>   	int i;
>   
> @@ -1476,7 +1477,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
>   
>   	err = tdh_mem_page_aug(kvm_tdx->tdr_pa, gpa, tdx_level, hpa, &out);
>   	if (unlikely(err == TDX_ERROR_SEPT_BUSY)) {
> -		tdx_unpin(kvm, pfn, level);
> +		tdx_unpin(kvm, gfn, pfn, level);
>   		return -EAGAIN;
>   	}
>   	if (unlikely(err == (TDX_EPT_ENTRY_STATE_INCORRECT | TDX_OPERAND_ID_RCX))) {
> @@ -1493,7 +1494,7 @@ static int tdx_sept_page_aug(struct kvm *kvm, gfn_t gfn,
>   	}
>   	if (KVM_BUG_ON(err, kvm)) {
>   		pr_tdx_error(TDH_MEM_PAGE_AUG, err, &out);
> -		tdx_unpin(kvm, pfn, level);
> +		tdx_unpin(kvm, gfn, pfn, level);
>   		return -EIO;
>   	}
>   
> @@ -1529,7 +1530,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
>   	 * always uses vcpu 0's page table and protected by vcpu->mutex).
>   	 */
>   	if (KVM_BUG_ON(kvm_tdx->source_pa == INVALID_PAGE, kvm)) {
> -		tdx_unpin(kvm, pfn, level);
> +		tdx_unpin(kvm, gfn, pfn, level);
>   		return -EINVAL;
>   	}
>   
> @@ -1547,7 +1548,7 @@ static int tdx_sept_page_add(struct kvm *kvm, gfn_t gfn,
>   	} while (unlikely(err == TDX_ERROR_SEPT_BUSY));
>   	if (KVM_BUG_ON(err, kvm)) {
>   		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
> -		tdx_unpin(kvm, pfn, level);
> +		tdx_unpin(kvm, gfn, pfn, level);
>   		return -EIO;
>   	} else if (measure)
>   		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));
> @@ -1600,7 +1601,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
>   		err = tdx_reclaim_page(hpa, level);
>   		if (KVM_BUG_ON(err, kvm))
>   			return -EIO;
> -		tdx_unpin(kvm, pfn, level);
> +		tdx_unpin(kvm, gfn, pfn, level);
>   		return 0;
>   	}
>   
> @@ -1633,7 +1634,7 @@ static int tdx_sept_drop_private_spte(struct kvm *kvm, gfn_t gfn,
>   			r = -EIO;
>   		} else {
>   			tdx_clear_page(hpa, PAGE_SIZE);
> -			tdx_unpin(kvm, pfn + i, PG_LEVEL_4K);
> +			tdx_unpin(kvm, gfn + i, pfn + i, PG_LEVEL_4K);
>   		}
>   		hpa += PAGE_SIZE;
>   	}
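
For reference, a minimal sketch of what the level-aware tdx_unpin() presumably
reduces to (hypothetical body, assuming put_page() undoes the get_page() taken
right before TDH_MEM_PAGE_ADD/AUG; the hunks above only change the signature,
and note the new gfn argument is never used in any of the visible hunks, which
reinforces the point above):

static void tdx_unpin(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
		      enum pg_level level)
{
	int i;

	/*
	 * Drop the per-4K-page references taken via get_page() right
	 * before TDH_MEM_PAGE_ADD/AUG: one put_page() for each 4K
	 * page backing the mapping at this level.
	 */
	for (i = 0; i < KVM_PAGES_PER_HPAGE(level); i++)
		put_page(pfn_to_page(pfn + i));
}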

