linuxppc-dev.lists.ozlabs.org archive mirror
From: David Gibson <david@gibson.dropbear.id.au>
To: Alexey Kardashevskiy <aik@ozlabs.ru>
Cc: linuxppc-dev@lists.ozlabs.org, Paul Mackerras <paulus@samba.org>,
	Alex Williamson <alex.williamson@redhat.com>,
	kvm-ppc@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH kernel 3/9] KVM: PPC: Use preregistered memory API to access TCE list
Date: Tue, 8 Mar 2016 17:30:18 +1100	[thread overview]
Message-ID: <20160308063018.GA22546@voom.fritz.box> (raw)
In-Reply-To: <56DE6768.4030202@ozlabs.ru>


On Tue, Mar 08, 2016 at 04:47:20PM +1100, Alexey Kardashevskiy wrote:
> On 03/07/2016 05:00 PM, David Gibson wrote:
> >On Mon, Mar 07, 2016 at 02:41:11PM +1100, Alexey Kardashevskiy wrote:
> >>VFIO on sPAPR already implements guest memory pre-registration
> >>when the entire guest RAM gets pinned. This can be used to translate
> >>the physical address of a guest page containing the TCE list
> >>from H_PUT_TCE_INDIRECT.
> >>
> >>This makes use of the pre-registered memory API to access TCE list
> >>pages in order to avoid unnecessary locking on the KVM memory
> >>reverse map.
> >>
> >>Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> >
> >Ok.. so, what's the benefit of not having to lock the rmap?
> 
> Less locking -> less racing == good, no?

Well.. maybe.  The increased difficulty in verifying that the code is
correct isn't always a good price to pay.

> >>---
> >>  arch/powerpc/kvm/book3s_64_vio_hv.c | 86 ++++++++++++++++++++++++++++++-------
> >>  1 file changed, 70 insertions(+), 16 deletions(-)
> >>
> >>diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
> >>index 44be73e..af155f6 100644
> >>--- a/arch/powerpc/kvm/book3s_64_vio_hv.c
> >>+++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
> >>@@ -180,6 +180,38 @@ long kvmppc_gpa_to_ua(struct kvm *kvm, unsigned long gpa,
> >>  EXPORT_SYMBOL_GPL(kvmppc_gpa_to_ua);
> >>
> >>  #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
> >>+static mm_context_t *kvmppc_mm_context(struct kvm_vcpu *vcpu)
> >>+{
> >>+	struct task_struct *task;
> >>+
> >>+	task = vcpu->arch.run_task;
> >>+	if (unlikely(!task || !task->mm))
> >>+		return NULL;
> >>+
> >>+	return &task->mm->context;
> >>+}
> >>+
> >>+static inline bool kvmppc_preregistered(struct kvm_vcpu *vcpu)
> >>+{
> >>+	mm_context_t *mm = kvmppc_mm_context(vcpu);
> >>+
> >>+	if (unlikely(!mm))
> >>+		return false;
> >>+
> >>+	return mm_iommu_preregistered(mm);
> >>+}
> >>+
> >>+static struct mm_iommu_table_group_mem_t *kvmppc_rm_iommu_lookup(
> >>+		struct kvm_vcpu *vcpu, unsigned long ua, unsigned long size)
> >>+{
> >>+	mm_context_t *mm = kvmppc_mm_context(vcpu);
> >>+
> >>+	if (unlikely(!mm))
> >>+		return NULL;
> >>+
> >>+	return mm_iommu_lookup_rm(mm, ua, size);
> >>+}
> >>+
> >>  long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
> >>  		      unsigned long ioba, unsigned long tce)
> >>  {
> >>@@ -261,23 +293,44 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,
> >>  	if (ret != H_SUCCESS)
> >>  		return ret;
> >>
> >>-	if (kvmppc_gpa_to_ua(vcpu->kvm, tce_list, &ua, &rmap))
> >>-		return H_TOO_HARD;
> >>+	if (kvmppc_preregistered(vcpu)) {
> >>+		/*
> >>+		 * We get here if guest memory was pre-registered, which
> >>+		 * is normally the VFIO case, and the gpa->hpa translation
> >>+		 * does not depend on the HPT.
> >>+		 */
> >>+		struct mm_iommu_table_group_mem_t *mem;
> >>
> >>-	rmap = (void *) vmalloc_to_phys(rmap);
> >>+		if (kvmppc_gpa_to_ua(vcpu->kvm, tce_list, &ua, NULL))
> >>+			return H_TOO_HARD;
> >>
> >>-	/*
> >>-	 * Synchronize with the MMU notifier callbacks in
> >>-	 * book3s_64_mmu_hv.c (kvm_unmap_hva_hv etc.).
> >>-	 * While we have the rmap lock, code running on other CPUs
> >>-	 * cannot finish unmapping the host real page that backs
> >>-	 * this guest real page, so we are OK to access the host
> >>-	 * real page.
> >>-	 */
> >>-	lock_rmap(rmap);
> >>-	if (kvmppc_rm_ua_to_hpa(vcpu, ua, &tces)) {
> >>-		ret = H_TOO_HARD;
> >>-		goto unlock_exit;
> >>+		mem = kvmppc_rm_iommu_lookup(vcpu, ua, IOMMU_PAGE_SIZE_4K);
> >>+		if (!mem || mm_iommu_rm_ua_to_hpa(mem, ua, &tces))
> >>+			return H_TOO_HARD;
> >>+	} else {
> >>+		/*
> >>+		 * This is the emulated devices case.
> >>+		 * We do not require memory to be preregistered here,
> >>+		 * so lock the rmap and do __find_linux_pte_or_hugepte().
> >>+		 */
> >>+		if (kvmppc_gpa_to_ua(vcpu->kvm, tce_list, &ua, &rmap))
> >>+			return H_TOO_HARD;
> >>+
> >>+		rmap = (void *) vmalloc_to_phys(rmap);
> >>+
> >>+		/*
> >>+		 * Synchronize with the MMU notifier callbacks in
> >>+		 * book3s_64_mmu_hv.c (kvm_unmap_hva_hv etc.).
> >>+		 * While we have the rmap lock, code running on other CPUs
> >>+		 * cannot finish unmapping the host real page that backs
> >>+		 * this guest real page, so we are OK to access the host
> >>+		 * real page.
> >>+		 */
> >>+		lock_rmap(rmap);
> >>+		if (kvmppc_rm_ua_to_hpa(vcpu, ua, &tces)) {
> >>+			ret = H_TOO_HARD;
> >>+			goto unlock_exit;
> >>+		}
> >>  	}
> >>
> >>  	for (i = 0; i < npages; ++i) {
> >>@@ -291,7 +344,8 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,
> >>  	}
> >>
> >>  unlock_exit:
> >>-	unlock_rmap(rmap);
> >>+	if (rmap)
> >
> >I don't see where rmap is initialized to NULL in the case where it's
> >not being used.
> 
> @rmap is not new to this function, and it has always been initialized to
> NULL as it was returned via a pointer from kvmppc_gpa_to_ua().

This comment confuses me.  Looking closer at the code I see you're
right, and it's initialized to NULL where defined, which I missed.

But that has nothing to do with being returned by pointer from
kvmppc_gpa_to_ua(), since one of your branches in the new code no
longer passes &rmap to that function.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


Thread overview: 46+ messages
2016-03-07  3:41 [PATCH kernel 0/9] KVM, PPC, VFIO: Enable in-kernel acceleration Alexey Kardashevskiy
2016-03-07  3:41 ` [PATCH kernel 1/9] KVM: PPC: Reserve KVM_CAP_SPAPR_TCE_VFIO capability number Alexey Kardashevskiy
2016-03-07  4:58   ` David Gibson
2016-03-07  3:41 ` [PATCH kernel 2/9] powerpc/mmu: Add real mode support for IOMMU preregistered memory Alexey Kardashevskiy
2016-03-07  5:30   ` David Gibson
2016-03-07  3:41 ` [PATCH kernel 3/9] KVM: PPC: Use preregistered memory API to access TCE list Alexey Kardashevskiy
2016-03-07  6:00   ` David Gibson
2016-03-08  5:47     ` Alexey Kardashevskiy
2016-03-08  6:30       ` David Gibson [this message]
2016-03-09  8:55         ` Alexey Kardashevskiy
2016-03-09 23:46           ` David Gibson
2016-03-10  8:33     ` Paul Mackerras
2016-03-10 23:42       ` David Gibson
2016-03-07  3:41 ` [PATCH kernel 4/9] powerpc/powernv/iommu: Add real mode version of xchg() Alexey Kardashevskiy
2016-03-07  6:05   ` David Gibson
2016-03-07  7:32     ` Alexey Kardashevskiy
2016-03-08  4:50       ` David Gibson
2016-03-10  8:43   ` Paul Mackerras
2016-03-10  8:46   ` Paul Mackerras
2016-03-07  3:41 ` [PATCH kernel 5/9] KVM: PPC: Enable IOMMU_API for KVM_BOOK3S_64 permanently Alexey Kardashevskiy
2016-03-07  3:41 ` [PATCH kernel 6/9] KVM: PPC: Associate IOMMU group with guest view of TCE table Alexey Kardashevskiy
2016-03-07  6:25   ` David Gibson
2016-03-07  9:38     ` Alexey Kardashevskiy
2016-03-08  4:55       ` David Gibson
2016-03-07  3:41 ` [PATCH kernel 7/9] KVM: PPC: Create a virtual-mode only TCE table handlers Alexey Kardashevskiy
2016-03-08  6:32   ` David Gibson
2016-03-07  3:41 ` [PATCH kernel 8/9] KVM: PPC: Add in-kernel handling for VFIO Alexey Kardashevskiy
2016-03-08 11:08   ` David Gibson
2016-03-09  8:46     ` Alexey Kardashevskiy
2016-03-10  5:18       ` David Gibson
2016-03-11  2:15         ` Alexey Kardashevskiy
2016-03-15  6:00           ` David Gibson
2016-03-07  3:41 ` [PATCH kernel 9/9] KVM: PPC: VFIO device: support SPAPR TCE Alexey Kardashevskiy
2016-03-09  5:45   ` David Gibson
2016-03-09  9:20     ` Alexey Kardashevskiy
2016-03-10  5:21       ` David Gibson
2016-03-10 23:09         ` Alexey Kardashevskiy
2016-03-15  6:04           ` David Gibson
     [not found]             ` <15389a41428.27cb.1ca38dd7e845b990cd13d431eb58563d@ozlabs.ru>
     [not found]               ` <20160321051932.GJ23586@voom.redhat.com>
2016-03-22  0:34                 ` Alexey Kardashevskiy
2016-03-23  3:03                   ` David Gibson
2016-06-09  6:47                     ` Alexey Kardashevskiy
2016-06-10  6:50                       ` David Gibson
2016-06-14  3:30                         ` Alexey Kardashevskiy
2016-06-15  4:43                           ` David Gibson
2016-04-08  9:13     ` Alexey Kardashevskiy
2016-04-11  3:36       ` David Gibson
