From: David Gibson <david@gibson.dropbear.id.au>
To: Alexey Kardashevskiy <aik@ozlabs.ru>
Cc: linuxppc-dev@lists.ozlabs.org, kvm-ppc@vger.kernel.org,
Paul Mackerras <paulus@ozlabs.org>
Subject: Re: [PATCH kernel 3/4] KVM: PPC: Validate TCEs against preregistered memory page sizes
Date: Thu, 30 Aug 2018 14:03:22 +1000 [thread overview]
Message-ID: <20180830040322.GI2222@umbus.fritz.box> (raw)
In-Reply-To: <20180830031647.34134-4-aik@ozlabs.ru>
On Thu, Aug 30, 2018 at 01:16:46PM +1000, Alexey Kardashevskiy wrote:
> Userspace can request any supported page size for a DMA window, and
> this works fine as long as the mapped memory is backed by pages of
> the same or bigger size; if it is not, mm_iommu_ua_to_hpa{_rm}()
> fails and the tables are not populated with dangerously incorrect
> TCEs.
>
> However, since it is quite easy to misconfigure KVM and we do not
> revert changes already made to TCE tables if an error happens midway,
> it is better to validate the acceptable page sizes before we even
> touch the tables.
>
> This enhances kvmppc_tce_validate() to check the hardware IOMMU page sizes
> against the preregistered memory page sizes.
>
> Since the new check uses real/virtual mode helpers, this renames
> kvmppc_tce_validate() to kvmppc_rm_tce_validate() to handle the real
> mode case and mirrors it for virtual mode under the old name. The
> real mode handler is not reused for virtual mode because:
> 1. it uses _lockless() list traversal primitives instead of RCU;
> 2. the real mode mm_iommu_ua_to_hpa_rm() uses vmalloc_to_phys(),
> which virtual mode does not need, and since on POWER9+radix only the
> virtual mode handlers actually work, we do not want to slow down that
> path even a bit.
>
> This removes EXPORT_SYMBOL_GPL(kvmppc_tce_validate) as the validators
> are static now.
>
> From now on, attempts to map IOMMU pages bigger than allowed will
> result in a KVM exit.
>
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> ---
> arch/powerpc/include/asm/kvm_ppc.h | 2 --
> arch/powerpc/kvm/book3s_64_vio.c | 42 +++++++++++++++++++++++++++++++++++++
> arch/powerpc/kvm/book3s_64_vio_hv.c | 30 +++++++++++++++++++-------
> 3 files changed, 65 insertions(+), 9 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
> index e991821..2f5d431 100644
> --- a/arch/powerpc/include/asm/kvm_ppc.h
> +++ b/arch/powerpc/include/asm/kvm_ppc.h
> @@ -194,8 +194,6 @@ extern struct kvmppc_spapr_tce_table *kvmppc_find_table(
> (iommu_tce_check_ioba((stt)->page_shift, (stt)->offset, \
> (stt)->size, (ioba), (npages)) ? \
> H_PARAMETER : H_SUCCESS)
> -extern long kvmppc_tce_validate(struct kvmppc_spapr_tce_table *tt,
> - unsigned long tce);
> extern long kvmppc_gpa_to_ua(struct kvm *kvm, unsigned long gpa,
> unsigned long *ua, unsigned long **prmap);
> extern void kvmppc_tce_put(struct kvmppc_spapr_tce_table *tt,
> diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
> index 3e8ac98..5cd2a66 100644
> --- a/arch/powerpc/kvm/book3s_64_vio.c
> +++ b/arch/powerpc/kvm/book3s_64_vio.c
> @@ -363,6 +363,41 @@ long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
> return ret;
> }
>
> +static long kvmppc_tce_validate(struct kvmppc_spapr_tce_table *stt,
> + unsigned long tce)
> +{
> + unsigned long gpa = tce & ~(TCE_PCI_READ | TCE_PCI_WRITE);
> + enum dma_data_direction dir = iommu_tce_direction(tce);
> + struct kvmppc_spapr_tce_iommu_table *stit;
> + unsigned long ua = 0;
> +
> + /* Allow userspace to poison TCE table */
> + if (dir == DMA_NONE)
> + return H_SUCCESS;
> +
> + if (iommu_tce_check_gpa(stt->page_shift, gpa))
> + return H_TOO_HARD;
> +
> + if (kvmppc_gpa_to_ua(stt->kvm, tce & ~(TCE_PCI_READ | TCE_PCI_WRITE),
> + &ua, NULL))
> + return H_TOO_HARD;
> +
> + list_for_each_entry_rcu(stit, &stt->iommu_tables, next) {
> + unsigned long hpa = 0;
> + struct mm_iommu_table_group_mem_t *mem;
> + long shift = stit->tbl->it_page_shift;
> +
> + mem = mm_iommu_lookup(stt->kvm->mm, ua, 1ULL << shift);
> + if (!mem)
> + return H_TOO_HARD;
> +
> + if (mm_iommu_ua_to_hpa(mem, ua, shift, &hpa))
> + return H_TOO_HARD;
> + }
> +
> + return H_SUCCESS;
> +}
> +
> static void kvmppc_clear_tce(struct iommu_table *tbl, unsigned long entry)
> {
> unsigned long hpa = 0;
> @@ -602,6 +637,13 @@ long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu,
> }
>
> for (i = 0; i < npages; ++i) {
> +		/*
> +		 * This get_user() may return a different value than the one
> +		 * read a few lines above in the validation loop, but the TCE
> +		 * is translated again a little later anyway; if that fails,
> +		 * we simply stop and return an error, as it is most likely
> +		 * userspace shooting itself in the foot.
> +		 */
> if (get_user(tce, tces + i)) {
> ret = H_TOO_HARD;
> goto unlock_exit;
> diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c
> index 9584d9b..e79ffbb 100644
> --- a/arch/powerpc/kvm/book3s_64_vio_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_vio_hv.c
> @@ -94,14 +94,14 @@ EXPORT_SYMBOL_GPL(kvmppc_find_table);
> * to the table and user space is supposed to process them), we can skip
> * checking other things (such as TCE is a guest RAM address or the page
> * was actually allocated).
> - *
> - * WARNING: This will be called in real-mode on HV KVM and virtual
> - * mode on PR KVM
> */
> -long kvmppc_tce_validate(struct kvmppc_spapr_tce_table *stt, unsigned long tce)
> +static long kvmppc_rm_tce_validate(struct kvmppc_spapr_tce_table *stt,
> + unsigned long tce)
> {
> unsigned long gpa = tce & ~(TCE_PCI_READ | TCE_PCI_WRITE);
> enum dma_data_direction dir = iommu_tce_direction(tce);
> + struct kvmppc_spapr_tce_iommu_table *stit;
> + unsigned long ua = 0;
>
> /* Allow userspace to poison TCE table */
> if (dir == DMA_NONE)
> @@ -110,9 +110,25 @@ long kvmppc_tce_validate(struct kvmppc_spapr_tce_table *stt, unsigned long tce)
> if (iommu_tce_check_gpa(stt->page_shift, gpa))
> return H_PARAMETER;
>
> + if (kvmppc_gpa_to_ua(stt->kvm, tce & ~(TCE_PCI_READ | TCE_PCI_WRITE),
> + &ua, NULL))
> + return H_TOO_HARD;
> +
> + list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
> + unsigned long hpa = 0;
> + struct mm_iommu_table_group_mem_t *mem;
> + long shift = stit->tbl->it_page_shift;
> +
> + mem = mm_iommu_lookup_rm(stt->kvm->mm, ua, 1ULL << shift);
> + if (!mem)
> + return H_TOO_HARD;
> +
> + if (mm_iommu_ua_to_hpa_rm(mem, ua, shift, &hpa))
> + return H_TOO_HARD;
> + }
> +
> return H_SUCCESS;
> }
> -EXPORT_SYMBOL_GPL(kvmppc_tce_validate);
>
> /* Note on the use of page_address() in real mode,
> *
> @@ -345,7 +361,7 @@ long kvmppc_rm_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
> if (ret != H_SUCCESS)
> return ret;
>
> - ret = kvmppc_tce_validate(stt, tce);
> + ret = kvmppc_rm_tce_validate(stt, tce);
> if (ret != H_SUCCESS)
> return ret;
>
> @@ -498,7 +514,7 @@ long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,
> for (i = 0; i < npages; ++i) {
> unsigned long tce = be64_to_cpu(((u64 *)tces)[i]);
>
> - ret = kvmppc_tce_validate(stt, tce);
> + ret = kvmppc_rm_tce_validate(stt, tce);
> if (ret != H_SUCCESS)
> goto unlock_exit;
> }
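For reference, the safety condition the mm_iommu_lookup{_rm}() /
mm_iommu_ua_to_hpa{_rm}() pair is relied on to enforce can be sketched
as the predicate below. This is a standalone illustration, not kernel
code; the names mem_pageshift and tbl_pageshift are made up for the
sketch:

```c
#include <stdbool.h>

/*
 * A TCE pointing at an IOMMU page of 2^tbl_pageshift bytes is only
 * safe if the preregistered memory region backing it is mapped with
 * pages of at least that size (2^mem_pageshift bytes); otherwise the
 * hardware could DMA past the end of the backing page.
 */
static bool tce_pagesize_ok(unsigned int mem_pageshift,
			    unsigned int tbl_pageshift)
{
	/* e.g. a 16MB IOMMU page (shift 24) needs >= 16MB backing pages */
	return mem_pageshift >= tbl_pageshift;
}
```

So a 64K-backed region can serve 4K or 64K IOMMU pages, but a 16MB
IOMMU window over 64K backing pages must now fail validation rather
than silently producing a bad TCE.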
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson