linux-hyperv.vger.kernel.org archive mirror
From: "Huang, Kai" <kai.huang@intel.com>
To: "x86@kernel.org" <x86@kernel.org>,
	"Lutomirski, Andy" <luto@kernel.org>,
	"Hansen, Dave" <dave.hansen@intel.com>,
	"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
	"ak@linux.intel.com" <ak@linux.intel.com>,
	"haiyangz@microsoft.com" <haiyangz@microsoft.com>,
	"kirill.shutemov@linux.intel.com"
	<kirill.shutemov@linux.intel.com>,
	"mingo@redhat.com" <mingo@redhat.com>,
	"rostedt@goodmis.org" <rostedt@goodmis.org>,
	"kys@microsoft.com" <kys@microsoft.com>,
	"tglx@linutronix.de" <tglx@linutronix.de>,
	"Cui, Dexuan" <decui@microsoft.com>,
	"mikelley@microsoft.com" <mikelley@microsoft.com>,
	"arnd@arndb.de" <arnd@arndb.de>,
	"nik.borisov@suse.com" <nik.borisov@suse.com>,
	"chu, jane" <jane.chu@oracle.com>,
	"hpa@zytor.com" <hpa@zytor.com>,
	"peterz@infradead.org" <peterz@infradead.org>,
	"wei.liu@kernel.org" <wei.liu@kernel.org>,
	"bp@alien8.de" <bp@alien8.de>, "Luck, Tony" <tony.luck@intel.com>,
	"Christopherson, Sean" <seanjc@google.com>,
	"sathyanarayanan.kuppuswamy@linux.intel.com"
	<sathyanarayanan.kuppuswamy@linux.intel.com>,
	"brijesh.singh@amd.com" <brijesh.singh@amd.com>,
	"Jason@zx2c4.com" <Jason@zx2c4.com>,
	"Williams, Dan J" <dan.j.williams@intel.com>
Cc: "mheslin@redhat.com" <mheslin@redhat.com>,
	"Edgecombe, Rick P" <rick.p.edgecombe@intel.com>,
	"Tianyu.Lan@microsoft.com" <Tianyu.Lan@microsoft.com>,
	"vkuznets@redhat.com" <vkuznets@redhat.com>,
	"Li, Xiaoyao" <xiaoyao.li@intel.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"andavis@redhat.com" <andavis@redhat.com>,
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>
Subject: Re: [PATCH v10 1/2] x86/tdx: Retry partially-completed page conversion hypercalls
Date: Wed, 6 Sep 2023 03:06:26 +0000	[thread overview]
Message-ID: <2a7c27a206244d46d1d7da6f60ccd0cb498beab0.camel@intel.com> (raw)
In-Reply-To: <cb3de236547eefd48f5ea098dfa72a45373257ab.camel@intel.com>

On Wed, 2023-09-06 at 01:19 +0000, Huang, Kai wrote:
> On Fri, 2023-08-11 at 14:48 -0700, Dexuan Cui wrote:
> > TDX guest memory is private by default and the VMM may not access it.
> > However, in cases where the guest needs to share data with the VMM,
> > the guest and the VMM can coordinate to make memory shared between
> > them.
> > 
> > The guest side of this protocol includes the "MapGPA" hypercall.  This
> > call takes a guest physical address range.  The hypercall spec (aka.
> > the GHCI) says that the MapGPA call is allowed to return partial
> > progress in mapping this range and indicate that fact with a special
> > error code.  A guest that sees such partial progress is expected to
> > retry the operation for the portion of the address range that was not
> > completed.
> > 
> > Hyper-V does this partial completion dance when set_memory_decrypted()
> > is called to "decrypt" swiotlb bounce buffers that can be up to 1GB
> > in size.  It is evidently the only VMM that does this, which is why
> > nobody noticed this until now.
> 
> Sorry for commenting late.
> 
> Nit:
> 
> IMHO this patch is doing two separate things together:
> 
> 1) Convert tdx_enc_status_changed() to tdx_map_gpa(), taking a physical address.
> 2) Handle the MapGPA() retry.
> 
> The reason for doing 1), IIUC, is hidden in the second patch: the Hyper-V guest
> code uses vzalloc().  I.e., handling the MapGPA() retry doesn't strictly
> require changing the API to take a PA rather than a VA.
> 
> So to me it's better to split this into two patches and give proper
> justification for each of them.

Sorry, I realized I missed one point in the retry logic below, so feel free to
ignore this comment.

> 
> Also, see below for the retry ...
> 
> > 
> > Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > Reviewed-by: Michael Kelley <mikelley@microsoft.com>
> > Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
> > Signed-off-by: Dexuan Cui <decui@microsoft.com>
> > ---
> >  arch/x86/coco/tdx/tdx.c           | 64 +++++++++++++++++++++++++------
> >  arch/x86/include/asm/shared/tdx.h |  2 +
> >  2 files changed, 54 insertions(+), 12 deletions(-)
> > 
> > Changes in v10:
> >     Dave kindly re-wrote the changelog. No other changes.
> > 
> > diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> > index 1d6b863c42b00..746075d20cd2d 100644
> > --- a/arch/x86/coco/tdx/tdx.c
> > +++ b/arch/x86/coco/tdx/tdx.c
> > @@ -703,14 +703,15 @@ static bool tdx_cache_flush_required(void)
> >  }
> >  
> >  /*
> > - * Inform the VMM of the guest's intent for this physical page: shared with
> > - * the VMM or private to the guest.  The VMM is expected to change its mapping
> > - * of the page in response.
> > + * Notify the VMM about page mapping conversion. More info about ABI
> > + * can be found in TDX Guest-Host-Communication Interface (GHCI),
> > + * section "TDG.VP.VMCALL<MapGPA>".
> >   */
> > -static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
> > +static bool tdx_map_gpa(phys_addr_t start, phys_addr_t end, bool enc)
> >  {
> > -	phys_addr_t start = __pa(vaddr);
> > -	phys_addr_t end   = __pa(vaddr + numpages * PAGE_SIZE);
> > +	/* Retrying the hypercall a second time should succeed; use 3 just in case */
> > +	const int max_retries_per_page = 3;
> > +	int retry_count = 0;
> 
> ... I tried to dig through the full history, but sorry if I am still missing
> something.
> 
> Using 3 is fine if "Retrying the hypercall a second time should succeed" is
> always true.  I assume this is because Hyper-V is able to handle a large
> number of pages in one call?
> 
> That being said, this is purely hypervisor-implementation specific.  Here IIUC
> Linux is trying to define a non-spec-based retry value, which happens to work
> for Hyper-V's *current* implementation.  I am not sure whether that's a good
> idea?  For instance, what happens if Hyper-V is changed in the future to
> reduce the number of pages it can handle in one call?
> 
> Is there any Hyper-V specification defining how many pages it can handle in
> one call?
> 
> What's more, given this function just takes an arbitrary range of pages, it is
> even stranger to use a hard-coded retry count here.  A more reasonable way
> looks to be to let the caller -- who knows how many pages are going to be
> converted and *ideally* also knows which hypervisor is running underneath --
> determine how many pages to convert in one call.
> 
> For instance, any Hyper-V-specific guest code can safely assume how many
> pages Hyper-V is able to handle, and thus can determine how many pages to try
> in one call.
> 
> Just my 2 cents.  And feel free to ignore if all others are fine with the
> current solution in this patch.
> 
> 

Ah, reading the patch again, I had missed the fact that a retry is only
consumed when no forward progress is made.  If any page has been converted by
the hypervisor, retry_count is reset to 0, and this function will loop until
all pages are converted.  So feel free to ignore my above comments.

But would it be better to explicitly call this out in the comment?

/*
 * When the hypercall makes no forward progress, retrying it a second time
 * should succeed in making some progress.  Use 3 just in case.
 */

Also, this "retry w/o forward progress" case seems a little odd, i.e., it is a
special case where the hypervisor cannot convert any pages at all.  Have you
seen this in practice?  How confident can we be that "retrying a second time
should succeed"?
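To double-check my reading of the no-forward-progress case, here is a small
user-space sketch of the loop.  map_gpa_once(), STATUS_RETRY, and the
one-page-per-call stub are my own stand-ins for the real hypercall ABI, not
what the patch actually calls:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * User-space model of the retry flow in tdx_map_gpa(), for discussion only.
 * map_gpa_once() is a hypothetical stand-in for the TDVMCALL<MapGPA>
 * hypercall: this stub converts one page per call and reports the first
 * unconverted address via *fail_paddr, mimicking the partial-completion
 * behaviour described in the changelog.
 */
#define PAGE_SIZE	4096UL
#define STATUS_RETRY	1	/* stand-in for TDVMCALL_STATUS_RETRY */

static int map_gpa_once(unsigned long start, unsigned long end,
			unsigned long *fail_paddr)
{
	if (start + PAGE_SIZE >= end)
		return 0;			/* fully converted */
	*fail_paddr = start + PAGE_SIZE;	/* partial progress */
	return STATUS_RETRY;
}

static bool map_gpa(unsigned long start, unsigned long end)
{
	const int max_retries_per_page = 3;
	int retry_count = 0;
	unsigned long map_fail_paddr;

	while (retry_count < max_retries_per_page) {
		int ret = map_gpa_once(start, end, &map_fail_paddr);

		if (ret != STATUS_RETRY)
			return !ret;

		/* Sanity-check the address returned by the untrusted VMM. */
		if (map_fail_paddr < start || map_fail_paddr >= end)
			return false;

		/* "Consume" a retry only when no forward progress was made. */
		if (map_fail_paddr == start) {
			retry_count++;
			continue;
		}

		/* Forward progress: resume from the failure point ... */
		start = map_fail_paddr;
		/* ... and the retry budget starts over. */
		retry_count = 0;
	}

	return false;
}
```

With this stub every call makes forward progress, so retry_count is reset on
each iteration and the whole range eventually converts; the retry budget is
only spent when the VMM reports the same failure address twice in a row.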

> > +
> > +		if (ret != TDVMCALL_STATUS_RETRY)
> > +			return !ret;
> > +		/*
> > +		 * The guest must retry the operation for the pages in the
> > +		 * region starting at the GPA specified in R11. R11 comes
> > +		 * from the untrusted VMM. Sanity check it.
> > +		 */
> > +		map_fail_paddr = args.r11;
> > +		if (map_fail_paddr < start || map_fail_paddr >= end)
> > +			return false;
> > +
> > +		/* "Consume" a retry without forward progress */
> > +		if (map_fail_paddr == start) {
> > +			retry_count++;
> > +			continue;
> > +		}
> > +
> > +		start = map_fail_paddr;
> > +		retry_count = 0;
> > +	}
> > +
> > +	return false;
> > +}
> > +
> > +/*
> > + * Inform the VMM of the guest's intent for this physical page: shared with
> > + * the VMM or private to the guest.  The VMM is expected to change its mapping
> > + * of the page in response.
> > + */
> > +static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
> > +{
> > +	phys_addr_t start = __pa(vaddr);
> > +	phys_addr_t end   = __pa(vaddr + numpages * PAGE_SIZE);
> > +
> > +	if (!tdx_map_gpa(start, end, enc))
> >  		return false;
> >  
> >  	/* shared->private conversion requires memory to be accepted before use */
> > diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h
> > index 7513b3bb69b7e..22ee23a3f24a6 100644
> > --- a/arch/x86/include/asm/shared/tdx.h
> > +++ b/arch/x86/include/asm/shared/tdx.h
> > @@ -24,6 +24,8 @@
> >  #define TDVMCALL_MAP_GPA		0x10001
> >  #define TDVMCALL_REPORT_FATAL_ERROR	0x10003
> >  
> > +#define TDVMCALL_STATUS_RETRY		1
> > +
> >  #ifndef __ASSEMBLY__
> >  
> >  /*
> 


