From: Alexey Kardashevskiy <aik@au1.ibm.com>
To: Alexey Kardashevskiy <aik@ozlabs.ru>,
Benjamin Herrenschmidt <benh@kernel.crashing.org>,
Laurent Dufour <ldufour@linux.vnet.ibm.com>
Cc: kvm@vger.kernel.org, Gleb Natapov <gleb@kernel.org>,
Alexander Graf <agraf@suse.de>,
kvm-ppc@vger.kernel.org, Paul Mackerras <paulus@samba.org>,
Paolo Bonzini <pbonzini@redhat.com>,
linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH] PPC: KVM: Introduce hypervisor call H_GET_TCE
Date: Sat, 22 Feb 2014 11:48:47 +1100
Message-ID: <5307F3EF.1050200@au1.ibm.com>
In-Reply-To: <5307EF2F.9090100@ozlabs.ru>
On 02/22/2014 11:28 AM, Alexey Kardashevskiy wrote:
> On 02/22/2014 06:23 AM, Benjamin Herrenschmidt wrote:
>> On Fri, 2014-02-21 at 16:31 +0100, Laurent Dufour wrote:
>>> This fix introduces the H_GET_TCE hypervisor call, which is basically the
>>> reverse of H_PUT_TCE, as defined in the Power Architecture Platform
>>> Requirements (PAPR).
>>>
>>> The hcall H_GET_TCE is required by the kdump kernel, which calls it to
>>> retrieve the TCEs set up by the panicking kernel.
>>
>> Alexey, will that work for VFIO ?
>
> Yes.
Oh! My bad, this is _G_et. No, this won't support VFIO, but it should not
break the current "slow" VFIO support upstream.
>> Or are those patches *still* not
>> upstream?
>
> Yes.
This part is still true. I have not been able to get Alex Graf's attention,
even on much simpler things, for several months.
>
>
>>
>>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>>> ---
>>> arch/powerpc/include/asm/kvm_ppc.h | 2 ++
>>> arch/powerpc/kvm/book3s_64_vio_hv.c | 28 ++++++++++++++++++++++++++++
>>> arch/powerpc/kvm/book3s_hv_rmhandlers.S | 2 +-
>>> 3 files changed, 31 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
>>> index fcd53f0..4096f16 100644
>>> --- a/arch/powerpc/include/asm/kvm_ppc.h
>>> +++ b/arch/powerpc/include/asm/kvm_ppc.h
>>> @@ -129,6 +129,8 @@ extern long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
>>> struct kvm_create_spapr_tce *args);
>>> extern long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
>>> unsigned long ioba, unsigned long tce);
>>> +extern long kvmppc_h_get_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
>>> + unsigned long ioba);
>>> extern struct kvm_rma_info *kvm_alloc_rma(void);
>>> extern void kvm_release_rma(struct kvm_rma_info *ri);
>>> extern struct page *kvm_alloc_hpt(unsigned long nr_pages);
>>> diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
>>> index 2c25f54..89e96b3 100644
>>> --- a/arch/powerpc/kvm/book3s_64_vio_hv.c
>>> +++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
>>> @@ -75,3 +75,31 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
>>> return H_TOO_HARD;
>>> }
>>> EXPORT_SYMBOL_GPL(kvmppc_h_put_tce);
>>> +
>>> +long kvmppc_h_get_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
>>> + unsigned long ioba)
>>> +{
>>> + struct kvm *kvm = vcpu->kvm;
>>> + struct kvmppc_spapr_tce_table *stt;
>>> +
>>> + list_for_each_entry(stt, &kvm->arch.spapr_tce_tables, list) {
>>> + if (stt->liobn == liobn) {
>>> + unsigned long idx = ioba >> SPAPR_TCE_SHIFT;
>>> + struct page *page;
>>> + u64 *tbl;
>>> +
>>> + if (ioba >= stt->window_size)
>>> + return H_PARAMETER;
>>> +
>>> + page = stt->pages[idx / TCES_PER_PAGE];
>>> + tbl = (u64 *)page_address(page);
>>> +
>>> + vcpu->arch.gpr[4] = tbl[idx % TCES_PER_PAGE];
>>> + return H_SUCCESS;
>>> + }
>>> + }
>>> +
>>> + /* Didn't find the liobn, punt it to userspace */
>>> + return H_TOO_HARD;
>>> +}
>>> +EXPORT_SYMBOL_GPL(kvmppc_h_get_tce);
>>> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>>> index e66d4ec..7d4fe2a 100644
>>> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>>> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>>> @@ -1758,7 +1758,7 @@ hcall_real_table:
>>> .long 0 /* 0x10 - H_CLEAR_MOD */
>>> .long 0 /* 0x14 - H_CLEAR_REF */
>>> .long .kvmppc_h_protect - hcall_real_table
>>> - .long 0 /* 0x1c - H_GET_TCE */
>>> + .long .kvmppc_h_get_tce - hcall_real_table
>>> .long .kvmppc_h_put_tce - hcall_real_table
>>> .long 0 /* 0x24 - H_SET_SPRG0 */
>>> .long .kvmppc_h_set_dabr - hcall_real_table
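For anyone following along, the lookup in kvmppc_h_get_tce() above is the
same index math H_PUT_TCE already uses. Here is a standalone sketch of it
(my illustration, not part of the patch), assuming the usual
SPAPR_TCE_SHIFT of 12 (one TCE per 4K of the DMA window) and 4K pages
backing the TCE table, so 512 u64 entries per page:

	/*
	 * Sketch of the ioba -> TCE lookup done by kvmppc_h_get_tce().
	 * The constants are assumptions: SPAPR_TCE_SHIFT = 12 and 4K
	 * pages backing the table (512 u64 entries each).
	 */
	#include <stdint.h>

	#define SPAPR_TCE_SHIFT 12
	#define TCES_PER_PAGE   (4096 / sizeof(uint64_t))   /* 512 */

	static uint64_t tce_lookup(uint64_t **pages, unsigned long ioba)
	{
		unsigned long idx = ioba >> SPAPR_TCE_SHIFT; /* TCE index */

		/* pages[] are the backing pages of the guest's TCE table */
		return pages[idx / TCES_PER_PAGE][idx % TCES_PER_PAGE];
	}

Note that the bounds check against stt->window_size has to come before the
lookup, exactly as Laurent has it, or idx could index past the pages[]
array.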
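For completeness, the guest side of this hcall already exists: a kdump
kernel can read a TCE back via the plpar_tce_get() wrapper from
arch/powerpc/include/asm/plpar_wrappers.h. A minimal sketch (the
dump_one_tce() helper and the liobn/ioba values are made up for
illustration):

	#include <linux/printk.h>
	#include <asm/hvcall.h>
	#include <asm/plpar_wrappers.h>

	/* Read one TCE back via H_GET_TCE and log it (illustration only). */
	static void dump_one_tce(unsigned long liobn, unsigned long ioba)
	{
		unsigned long tce;
		long rc = plpar_tce_get(liobn, ioba, &tce);

		if (rc == H_SUCCESS)
			pr_info("TCE at ioba 0x%lx: 0x%lx\n", ioba, tce);
		else
			pr_err("H_GET_TCE failed: rc=%ld\n", rc);
	}

With the patch applied, that call is served by KVM's real-mode handler
table instead of taking the slow exit path, which is exactly what a
crashing/kdump guest wants.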
--
Alexey Kardashevskiy
IBM OzLabs, LTC Team
e-mail: aik@au1.ibm.com
notes: Alexey Kardashevskiy/Australia/IBM