From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from mx1.redhat.com (mx1.redhat.com [209.132.183.28])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by lists.ozlabs.org (Postfix) with ESMTPS id A43111A0629
	for ; Wed, 11 Mar 2015 06:56:58 +1100 (AEDT)
Message-ID: <1426017408.25026.79.camel@redhat.com>
Subject: Re: [PATCH v5 03/29] vfio: powerpc/spapr: Check that TCE page size is equal to it_page_size
From: Alex Williamson
To: Alexey Kardashevskiy
Date: Tue, 10 Mar 2015 13:56:48 -0600
In-Reply-To: <1425910045-26167-4-git-send-email-aik@ozlabs.ru>
References: <1425910045-26167-1-git-send-email-aik@ozlabs.ru>
	<1425910045-26167-4-git-send-email-aik@ozlabs.ru>
Content-Type: text/plain; charset="UTF-8"
Mime-Version: 1.0
Cc: linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, Paul Mackerras,
	linux-kernel@vger.kernel.org
List-Id: Linux on PowerPC Developers Mail List

On Tue, 2015-03-10 at 01:06 +1100, Alexey Kardashevskiy wrote:
> This checks that the TCE table page size is not bigger than the size of
> the page we have just pinned and are going to put the physical address of
> into the table.
>
> Otherwise the hardware gets unwanted access to the physical memory between
> the end of the actual page and the end of the aligned-up TCE page.
>
> Since compound_order() and compound_head() work correctly on non-huge
> pages, there is no need for an additional check whether the page is huge.
>
> Signed-off-by: Alexey Kardashevskiy
> ---
> Changes:
> v4:
> * s/tce_check_page_size/tce_page_is_contained/
> ---
>  drivers/vfio/vfio_iommu_spapr_tce.c | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
>
> diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
> index 756831f..91e7599 100644
> --- a/drivers/vfio/vfio_iommu_spapr_tce.c
> +++ b/drivers/vfio/vfio_iommu_spapr_tce.c
> @@ -49,6 +49,22 @@ struct tce_container {
>  	bool enabled;
>  };
>
> +static bool tce_page_is_contained(struct page *page, unsigned page_shift)
> +{
> +	unsigned shift;
> +
> +	/*
> +	 * Check that the TCE table granularity is not bigger than the size of
> +	 * a page we just found. Otherwise the hardware can get access to
> +	 * a bigger memory chunk than it should.
> +	 */
> +	shift = PAGE_SHIFT + compound_order(compound_head(page));
> +	if (shift >= page_shift)
> +		return true;
> +
> +	return false;

nit, simplified:

	return (PAGE_SHIFT + compound_order(compound_head(page)) >= page_shift);

> +}
> +
>  static int tce_iommu_enable(struct tce_container *container)
>  {
>  	int ret = 0;
> @@ -197,6 +213,12 @@ static long tce_iommu_build(struct tce_container *container,
>  			ret = -EFAULT;
>  			break;
>  		}
> +
> +		if (!tce_page_is_contained(page, tbl->it_page_shift)) {
> +			ret = -EPERM;
> +			break;
> +		}
> +
>  		hva = (unsigned long) page_address(page) +
>  			(tce & IOMMU_PAGE_MASK(tbl) & ~PAGE_MASK);
>