From: Bharata B Rao <bharata@linux.ibm.com>
To: Ram Pai <linuxram@us.ibm.com>
Cc: rcampbell@nvidia.com, ldufour@linux.ibm.com,
cclaudio@linux.ibm.com, kvm-ppc@vger.kernel.org,
aneesh.kumar@linux.ibm.com, sukadev@linux.vnet.ibm.com,
linuxppc-dev@lists.ozlabs.org, bauerman@linux.ibm.com,
david@gibson.dropbear.id.au
Subject: Re: [PATCH v1 3/4] KVM: PPC: Book3S HV: migrate remaining normal-GFNs to secure-GFNs in H_SVM_INIT_DONE
Date: Tue, 2 Jun 2020 15:36:39 +0530
Message-ID: <20200602100639.GB31382@in.ibm.com>
In-Reply-To: <20200601190535.GA6925@oc0525413822.ibm.com>
On Mon, Jun 01, 2020 at 12:05:35PM -0700, Ram Pai wrote:
> On Mon, Jun 01, 2020 at 05:25:18PM +0530, Bharata B Rao wrote:
> > On Sat, May 30, 2020 at 07:27:50PM -0700, Ram Pai wrote:
> > > H_SVM_INIT_DONE incorrectly assumes that the Ultravisor has explicitly
> > > called H_SVM_PAGE_IN for all secure pages.
> >
> > I don't think that is quite true. HV doesn't assume anything about
> > secure pages by itself.
>
> Yes. Currently, it does not assume anything about secure pages. But I am
> proposing that it should consider all pages (except the shared pages) as
> secure pages, when H_SVM_INIT_DONE is called.
Ok, then maybe also add the proposed changes to the H_SVM_INIT_DONE
documentation.
>
> In other words, HV should treat all pages, except shared pages, as
> secure pages once H_SVM_INIT_DONE is called. And this includes pages
> added subsequently through memory hotplug.
So after H_SVM_INIT_DONE, if HV touches a secure page for any reason
and gets the encrypted contents via page-out, HV drops the device pfn
at that time. What state would we be in then? We have completed
H_SVM_INIT_DONE, but still have a normal (but encrypted) page in HV?
>
> Yes. The Ultravisor can explicitly request the HV to move the pages
> individually. But that slows down the transition too significantly.
> It takes over 20 minutes to transition them for an SVM of size 100G.
>
> With this proposed enhancement, the switch completes in a few seconds.
I think many pages during the initial switch, and most pages of
hotplugged memory, are zero pages, for which we don't issue UV page-in
calls anyway. So is the 20min saving you are observing purely due to
hcall overhead? How about extending the H_SVM_PAGE_IN interface, or
adding a new hcall, to request multiple pages in one request?
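Something along these lines is what I have in mind, purely as an
illustration (kvmppc_h_svm_page_in() is the existing handler in
book3s_hv_uvmem.c, but the multi-page wrapper and its name are made up
here, not an existing interface):

static unsigned long kvmppc_h_svm_page_in_multi(struct kvm *kvm,
		unsigned long gpa, unsigned long npages,
		unsigned long flags, unsigned long page_shift)
{
	unsigned long i, ret = H_SUCCESS;

	/*
	 * One hcall from UV covers npages contiguous GPAs; each page
	 * still goes through the existing single-page path, so only
	 * the hcall entry/exit overhead is amortized.
	 */
	for (i = 0; i < npages; i++) {
		ret = kvmppc_h_svm_page_in(kvm, gpa + (i << page_shift),
					   flags, page_shift);
		if (ret != H_SUCCESS)
			break;
	}

	return ret;
}

That would at least tell us how much of the 20min is spent in the hcall
round-trips versus the per-page migration itself.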
Also, how about requesting bigger page sizes (2M)? Ralph Campbell
had patches that added THP support for the migrate_vma_* calls.
>
> >
> > > These GFNs continue to be
> > > normal GFNs associated with normal PFNs; when in fact, these GFNs should
> > > have been secure GFNs associated with device PFNs.
> >
> > The transition to secure state is driven by SVM/UV, and HV just responds
> > to hcalls by issuing the appropriate uvcalls. SVM/UV is in the best
> > position to determine which pages need to be moved to the secure side;
> > HV just responds and tracks such pages as device private pages.
> >
> > If SVM/UV doesn't get all the pages to the secure side by the time
> > of H_SVM_INIT_DONE, the remaining pages are just normal (shared or
> > otherwise) pages as far as HV is concerned. Why should HV assume that
> > SVM/UV didn't ask for a few pages and hence push those pages during
> > H_SVM_INIT_DONE?
>
> By definition, an SVM is a VM backed by secure pages.
> Hence all pages (except shared ones) must turn secure when a VM switches to SVM.
>
> UV is interested in only certain pages of the VM, which it will
> request explicitly through H_SVM_PAGE_IN. All other pages need not
> be paged in through UV_PAGE_IN; they just need to be switched to
> device pages.
>
> >
> > I think UV should drive the movement of pages into the secure side both
> > for boot-time SVM memory and for hot-plugged memory. HV does the memslot
> > registration uvcall when new memory is plugged in; UV should explicitly
> > get the required pages in at that time instead of expecting HV to drive
> > the same.
> >
> > > +static int uv_migrate_mem_slot(struct kvm *kvm,
> > > + const struct kvm_memory_slot *memslot)
> > > +{
> > > + unsigned long gfn = memslot->base_gfn;
> > > + unsigned long end;
> > > + bool downgrade = false;
> > > + struct vm_area_struct *vma;
> > > + int i, ret = 0;
> > > + unsigned long start = gfn_to_hva(kvm, gfn);
> > > +
> > > + if (kvm_is_error_hva(start))
> > > + return H_STATE;
> > > +
> > > + end = start + (memslot->npages << PAGE_SHIFT);
> > > +
> > > + down_write(&kvm->mm->mmap_sem);
> > > +
> > > + mutex_lock(&kvm->arch.uvmem_lock);
> > > + vma = find_vma_intersection(kvm->mm, start, end);
> > > + if (!vma || vma->vm_start > start || vma->vm_end < end) {
> > > + ret = H_STATE;
> > > + goto out_unlock;
> > > + }
> > > +
> > > + ret = ksm_madvise(vma, vma->vm_start, vma->vm_end,
> > > + MADV_UNMERGEABLE, &vma->vm_flags);
> > > + downgrade_write(&kvm->mm->mmap_sem);
> > > + downgrade = true;
> > > + if (ret) {
> > > + ret = H_STATE;
> > > + goto out_unlock;
> > > + }
> > > +
> > > + for (i = 0; i < memslot->npages; i++, ++gfn) {
> > > + /* skip paged-in pages and shared pages */
> > > + if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL) ||
> > > + kvmppc_gfn_is_uvmem_shared(gfn, kvm))
> > > + continue;
> > > +
> > > + start = gfn_to_hva(kvm, gfn);
> > > + end = start + (1UL << PAGE_SHIFT);
> > > + ret = kvmppc_svm_migrate_page(vma, start, end,
> > > + (gfn << PAGE_SHIFT), kvm, PAGE_SHIFT, false);
> > > +
> > > + if (ret)
> > > + goto out_unlock;
> > > + }
> >
> > Is there a guarantee that the vma you got for the start address remains
> > valid for all the addresses up to the end of the memslot? If not, you
> > should re-get the vma for the current address in each iteration, I suppose.
>
>
> mm->mmap_sem is the semaphore that guards the vma, right? If that
> semaphore is held, can the vma change?
I am not sure the single vma you obtained for the start address would
span the entire address range of the memslot; that range could well be
covered by more than one vma.
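
For illustration, something along these lines is what I am suggesting,
reusing the names from your patch (untested, just a sketch):

	for (i = 0; i < memslot->npages; i++, ++gfn) {
		/* skip paged-in pages and shared pages */
		if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL) ||
		    kvmppc_gfn_is_uvmem_shared(gfn, kvm))
			continue;

		start = gfn_to_hva(kvm, gfn);
		if (kvm_is_error_hva(start)) {
			ret = H_STATE;
			break;
		}
		end = start + (1UL << PAGE_SHIFT);

		/*
		 * The memslot's HVA range may be covered by more than
		 * one vma, so look up the vma for each gfn rather than
		 * reusing the one found for the memslot's start.
		 */
		vma = find_vma_intersection(kvm->mm, start, end);
		if (!vma || vma->vm_start > start || vma->vm_end < end) {
			ret = H_STATE;
			break;
		}

		ret = kvmppc_svm_migrate_page(vma, start, end,
				gfn << PAGE_SHIFT, kvm, PAGE_SHIFT, false);
		if (ret)
			break;
	}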
Regards,
Bharata.