From: Paul Mackerras <paulus@ozlabs.org>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: Alexey Kardashevskiy <aik@ozlabs.ru>,
	linuxppc-dev@lists.ozlabs.org, Alexander Graf <agraf@suse.com>,
	kvm-ppc@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH kernel v3 7/7] KVM: PPC: Add support for multiple-TCE hcalls
Date: Tue, 16 Feb 2016 12:05:56 +1100	[thread overview]
Message-ID: <20160216010555.GA23111@oak.ozlabs.ibm.com> (raw)
In-Reply-To: <20160216004058.GB2269@voom.redhat.com>

On Tue, Feb 16, 2016 at 11:40:58AM +1100, David Gibson wrote:
> On Mon, Feb 15, 2016 at 12:55:09PM +1100, Alexey Kardashevskiy wrote:
> > This adds real and virtual mode handlers for the H_PUT_TCE_INDIRECT and
> > H_STUFF_TCE hypercalls for user space emulated devices such as IBMVIO
> > devices or emulated PCI. These calls allow adding multiple entries
> > (up to 512) into the TCE table in one call, which saves time on
> > transitions between the kernel and user space.
> > 
> > The current implementation of kvmppc_h_stuff_tce() allows it to be
> > executed in both real and virtual modes, so there is a single helper.
> > kvmppc_rm_h_put_tce_indirect() needs to translate the guest address
> > to the host address, and since the translation differs between modes,
> > there are two helpers, one for each mode.
> > 
> > This implements the KVM_CAP_PPC_MULTITCE capability. When present,
> > the kernel will try handling H_PUT_TCE_INDIRECT and H_STUFF_TCE if these
> > are enabled by userspace via KVM_CAP_PPC_ENABLE_HCALL.
> > If they cannot be handled by the kernel, they are passed on to
> > userspace, which still has to have its own implementation of these
> > hcalls.
> > 
> > Both HV and PR-style KVM are supported.
> > 
> > Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
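
For reference, enabling in-kernel handling of these hcalls from userspace
is just a KVM_ENABLE_CAP call per hcall once the capability is advertised.
A rough sketch (not taken from the patch or from any particular VMM; the
helper names and the locally defined PAPR token values are only for
illustration):

	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	#define H_STUFF_TCE		0x138	/* PAPR hcall tokens */
	#define H_PUT_TCE_INDIRECT	0x13C

	/* Ask KVM to handle one hcall in the kernel (VM ioctl). */
	static int enable_hcall(int vmfd, unsigned long token)
	{
		struct kvm_enable_cap cap;

		memset(&cap, 0, sizeof(cap));
		cap.cap = KVM_CAP_PPC_ENABLE_HCALL;
		cap.args[0] = token;	/* which hcall */
		cap.args[1] = 1;	/* 1 = enable, 0 = disable */
		return ioctl(vmfd, KVM_ENABLE_CAP, &cap);
	}

	static void enable_multitce(int kvmfd, int vmfd)
	{
		/* Only useful if the kernel advertises multi-TCE support. */
		if (ioctl(kvmfd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_MULTITCE) <= 0)
			return;

		if (enable_hcall(vmfd, H_PUT_TCE_INDIRECT) < 0 ||
		    enable_hcall(vmfd, H_STUFF_TCE) < 0)
			perror("KVM_ENABLE_CAP(KVM_CAP_PPC_ENABLE_HCALL)");
	}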

[snip]

> > +	idx = srcu_read_lock(&vcpu->kvm->srcu);
> > +	if (kvmppc_gpa_to_ua(vcpu->kvm, tce_list, &ua, NULL)) {
> > +		ret = H_TOO_HARD;
> > +		goto unlock_exit;
> > +	}
> > +	tces = (u64 __user *) ua;
> > +
> > +	for (i = 0; i < npages; ++i) {
> > +		if (get_user(tce, tces + i)) {
> > +			ret = H_PARAMETER;
> 
> I'm trying to work out if H_PARAMETER is really the right thing here.
> 
> If the guest has actually supplied a bad address, I'd expect
> kvmppc_gpa_to_ua() to have picked that up.  So I see two cases here:
> 1) this shouldn't ever happen, in which case a WARN_ON() and
> H_HARDWARE would be better, or 2) this can happen because of something
> concurrently unmapping / swapping out the userspace memory, in which
> case it's not the guest's fault and should probably be H_TOO_HARD.
> 
> Or am I missing something?

The only situations I can see that would cause this to fail here are
an out-of-memory condition or userspace concurrently unmapping the
memory.  If it's just a swapout, the get_user() should bring the page
back in.
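
For concreteness, the variant you're suggesting would look something
like this (just a sketch of the idea, not the code as posted):

	for (i = 0; i < npages; ++i) {
		if (get_user(tce, tces + i)) {
			/*
			 * The page backing the TCE list went away under
			 * us (concurrent unmap or OOM) rather than the
			 * guest passing a bad address, so punt to
			 * userspace instead of failing the guest.
			 */
			ret = H_TOO_HARD;
			goto unlock_exit;
		}
		/* ... validate and store the TCE as in the patch ... */
	}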

[snip]

> > +	rmap = (void *) vmalloc_to_phys(rmap);
> > +
> > +	/*
> > +	 * Synchronize with the MMU notifier callbacks in
> > +	 * book3s_64_mmu_hv.c (kvm_unmap_hva_hv etc.).
> > +	 * While we have the rmap lock, code running on other CPUs
> > +	 * cannot finish unmapping the host real page that backs
> > +	 * this guest real page, so we are OK to access the host
> > +	 * real page.
> > +	 */
> > +	lock_rmap(rmap);
> 
> You don't appear to actually use rmap between the lock and unlock..

No, he doesn't need to.  The effect of taking the lock is to stop the
page from getting unmapped, by stopping the code that would unmap it
from running.  That's what we are trying to explain with the comment
just above the lock_rmap call.  Is the comment not clear enough?  How
would you word it?
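
To spell the intent out, the pattern amounts to this (a simplified
sketch, not the exact code in the patch):

	/*
	 * rmap itself is never dereferenced between lock and unlock.
	 * Holding the rmap lock blocks the MMU notifier callbacks
	 * (kvm_unmap_hva_hv etc. in book3s_64_mmu_hv.c), so the host
	 * real page backing the guest's TCE list cannot be unmapped
	 * while we read the list out of it.
	 */
	lock_rmap(rmap);
	for (i = 0; i < npages; ++i) {
		/* ... read TCE i from the now-pinned list and put it ... */
	}
	unlock_rmap(rmap);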

Paul.


Thread overview: 18+ messages
2016-02-15  1:55 [PATCH kernel v3 0/7] KVM: PPC: Add in-kernel multitce handling Alexey Kardashevskiy
2016-02-15  1:55 ` [PATCH kernel v3 1/7] powerpc: Make vmalloc_to_phys() public Alexey Kardashevskiy
2016-02-15  3:47   ` David Gibson
2016-02-15  1:55 ` [PATCH kernel v3 2/7] KVM: PPC: Rework H_PUT_TCE/H_GET_TCE handlers Alexey Kardashevskiy
2016-02-15  3:53   ` David Gibson
2016-02-15  1:55 ` [PATCH kernel v3 3/7] KVM: PPC: Use RCU for arch.spapr_tce_tables Alexey Kardashevskiy
2016-02-15  1:55 ` [PATCH kernel v3 4/7] KVM: PPC: Account TCE-containing pages in locked_vm Alexey Kardashevskiy
2016-02-15  4:08   ` David Gibson
2016-02-15  1:55 ` [PATCH kernel v3 5/7] KVM: PPC: Replace SPAPR_TCE_SHIFT with IOMMU_PAGE_SHIFT_4K Alexey Kardashevskiy
2016-02-15  1:55 ` [PATCH kernel v3 6/7] KVM: PPC: Move reusable bits of H_PUT_TCE handler to helpers Alexey Kardashevskiy
2016-02-15 22:59   ` David Gibson
2016-02-15  1:55 ` [PATCH kernel v3 7/7] KVM: PPC: Add support for multiple-TCE hcalls Alexey Kardashevskiy
2016-02-16  0:40   ` David Gibson
2016-02-16  1:05     ` Paul Mackerras [this message]
2016-02-16  2:14       ` David Gibson
2016-02-18  2:39   ` Alexey Kardashevskiy
2016-02-29  8:37     ` Paul Mackerras
2016-02-29 11:30 ` [PATCH kernel v3 0/7] KVM: PPC: Add in-kernel multitce handling Paul Mackerras
