From: Bharata B Rao <bharata@linux.ibm.com>
To: Ram Pai <linuxram@us.ibm.com>
Cc: ldufour@linux.ibm.com, cclaudio@linux.ibm.com,
kvm-ppc@vger.kernel.org, sathnaga@linux.vnet.ibm.com,
aneesh.kumar@linux.ibm.com, sukadev@linux.vnet.ibm.com,
linuxppc-dev@lists.ozlabs.org, bauerman@linux.ibm.com,
david@gibson.dropbear.id.au
Subject: Re: [PATCH v3 0/4] Migrate non-migrated pages of a SVM.
Date: Mon, 29 Jun 2020 07:23:30 +0530 [thread overview]
Message-ID: <20200629015330.GC27215@in.ibm.com> (raw)
In-Reply-To: <20200628161149.GA27215@in.ibm.com>

On Sun, Jun 28, 2020 at 09:41:53PM +0530, Bharata B Rao wrote:
> On Fri, Jun 19, 2020 at 03:43:38PM -0700, Ram Pai wrote:
> > The time taken to switch a VM to a Secure-VM increases with the size of the
> > VM. A 100GB VM takes about 7 minutes. This is unacceptable. This linear
> > increase is caused by suboptimal behavior of the Ultravisor and the
> > Hypervisor. The Ultravisor unnecessarily migrates every GFN of the VM from
> > normal memory to secure memory, when it only needs to migrate the necessary
> > and sufficient GFNs.
> >
> > However, when the optimization is incorporated in the Ultravisor, the
> > Hypervisor starts misbehaving. The Hypervisor has an inbuilt assumption that
> > the Ultravisor will explicitly request migration of each and every GFN of
> > the VM. If only the necessary and sufficient GFNs are requested for
> > migration, the Hypervisor continues to manage the remaining GFNs as normal
> > GFNs. This leads to memory corruption, manifested consistently when the SVM
> > reboots.
> >
> > The same is true when a memory slot is hotplugged into an SVM. The
> > Hypervisor expects the Ultravisor to request migration of all GFNs to
> > secure GFNs, but at the same time the Hypervisor is unable to handle any
> > H_SVM_PAGE_IN requests from the Ultravisor made in the context of the
> > UV_REGISTER_MEM_SLOT ucall. This problem manifests as random errors in the
> > SVM when a memory slot is hotplugged.
> >
> > This patch series automatically migrates the non-migrated pages of an SVM,
> > and thus solves the problem.
>
> So this is what I understand as the objective of this patchset:
>
> 1. Getting all the pages into secure memory right when the guest
> transitions into secure mode is expensive. The Ultravisor wants to just
> get the necessary and sufficient pages in and put the onus on the
> Hypervisor to mark the remaining pages (w/o an actual page-in) as secure
> during H_SVM_INIT_DONE.
> 2. During H_SVM_INIT_DONE, you want a way to differentiate the pages that
> are already secure from the pages that are shared and the pages that are
> paged out. For this you are introducing all these new states in the HV.
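
To make sure we are talking about the same thing, here is a rough,
self-contained C model of what I understand the patchset asks the HV to do
at H_SVM_INIT_DONE. All names (gfn_state, svm_init_done(), ...) are made up
for illustration and are not the actual book3s_hv_uvmem.c interfaces:

/* Toy model of per-GFN state tracking in the HV (hypothetical names). */
#include <stdio.h>

enum gfn_state {
	GFN_NORMAL,		/* never touched by the UV */
	GFN_SHARED,		/* explicitly shared by the guest */
	GFN_SECURE_PAGED_IN,	/* UV issued H_SVM_PAGE_IN for it */
	GFN_SECURE_SKIPPED,	/* marked secure at H_SVM_INIT_DONE, no page-in */
};

/*
 * At H_SVM_INIT_DONE, every GFN that the UV neither paged in nor shared
 * is flipped to a secure state without an actual page-in.
 */
static void svm_init_done(enum gfn_state *gfns, unsigned long nr_gfns)
{
	unsigned long gfn;

	for (gfn = 0; gfn < nr_gfns; gfn++) {
		if (gfns[gfn] == GFN_NORMAL)
			gfns[gfn] = GFN_SECURE_SKIPPED;
	}
}

int main(void)
{
	enum gfn_state gfns[8] = {
		GFN_SECURE_PAGED_IN, GFN_SHARED, GFN_NORMAL, GFN_NORMAL,
		GFN_SECURE_PAGED_IN, GFN_NORMAL, GFN_SHARED, GFN_NORMAL,
	};
	int i;

	svm_init_done(gfns, 8);
	for (i = 0; i < 8; i++)
		printf("gfn %d -> state %d\n", i, gfns[i]);
	return 0;
}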
>
> The UV knows about the shared GFNs and maintains their state. Hence let
> the HV send all the pages (minus the already secured ones) via
> H_SVM_PAGE_IN, and if the UV finds any shared pages among them, let it
> fail the uv-page-in call. The HV can then fail the migration for that
> page and the page continues to remain shared. With this, you don't need
> to maintain a secured-GFN state in the HV.
>
> In the unlikely case of sending a paged-out page to the UV during
> H_SVM_INIT_DONE, let the page-in succeed and the HV will fault on it
> again if required. With this, you don't need a state in the HV to
> identify the paged-out-but-encrypted case.
>
> Doesn't the above work?
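
Roughly, what I am suggesting would look like the sketch below. Again,
uv_page_in_gfn() and friends are hypothetical placeholders (not the real
arch/powerpc ultravisor wrappers); the point is only that a shared page
stays shared when the UV rejects the page-in, and a paged-out page that
gets paged in is simply faulted back by the HV later if needed:

/* Toy model of the alternative: no secure-GFN state kept in the HV. */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the real uv-page-in ucall wrapper. */
static bool uv_page_in_gfn(unsigned long gfn, bool uv_knows_it_is_shared)
{
	(void)gfn;
	/* The UV tracks shared GFNs itself and rejects page-in for them. */
	return !uv_knows_it_is_shared;
}

static void svm_init_done(unsigned long nr_gfns, const bool *shared_in_uv)
{
	unsigned long gfn;

	for (gfn = 0; gfn < nr_gfns; gfn++) {
		if (!uv_page_in_gfn(gfn, shared_in_uv[gfn])) {
			/* UV said no: the page is shared, leave it as is. */
			continue;
		}
		/*
		 * Page-in succeeded. If the page had been paged out by
		 * the HV earlier, that is fine too: the HV will simply
		 * fault it in again later if it is ever needed.
		 */
	}
}

int main(void)
{
	const bool shared_in_uv[4] = { false, true, false, false };

	svm_init_done(4, shared_in_uv);
	puts("init-done walk complete (toy model)");
	return 0;
}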

I see that you in fact want to skip the uv-page-in calls from
H_SVM_INIT_DONE altogether, so that would indeed need the extra states in
the HV which you are proposing here.
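
In that case the HV does indeed need something like the per-GFN flags
sketched below to remember what was promised to the UV at H_SVM_INIT_DONE.
The bit names here are only illustrative, not the ones introduced by the
patches:

/* Illustrative per-GFN flags the HV would have to carry (made-up names). */
#include <stdio.h>

#define GFN_F_SECURE	(1u << 0)	/* owned by secure memory */
#define GFN_F_SHARED	(1u << 1)	/* shared between HV and SVM */
#define GFN_F_PAGED_OUT	(1u << 2)	/* secure but currently paged out */

static const char *gfn_describe(unsigned int flags)
{
	if (flags & GFN_F_SHARED)
		return "shared";
	if ((flags & GFN_F_SECURE) && (flags & GFN_F_PAGED_OUT))
		return "secure, paged out (still encrypted)";
	if (flags & GFN_F_SECURE)
		return "secure";
	return "normal";
}

int main(void)
{
	unsigned int examples[] = {
		0,
		GFN_F_SECURE,
		GFN_F_SHARED,
		GFN_F_SECURE | GFN_F_PAGED_OUT,
	};
	unsigned int i;

	for (i = 0; i < sizeof(examples) / sizeof(examples[0]); i++)
		printf("flags 0x%x: %s\n", examples[i], gfn_describe(examples[i]));
	return 0;
}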

Regards,
Bharata.