From mboxrd@z Thu Jan 1 00:00:00 1970
From: Robin Murphy
Subject: Re: [PATCH 4/5] iommu/rockchip: add ARM64 cache flush operation for iommu
Date: Mon, 23 May 2016 11:44:14 +0100
Message-ID: <5742DEFE.1040902@arm.com>
References: <1463967439-13354-1-git-send-email-zhengsq@rock-chips.com>
 <1463967439-13354-5-git-send-email-zhengsq@rock-chips.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1463967439-13354-5-git-send-email-zhengsq-TNX95d0MmH7DzftRWevZcw@public.gmane.org>
To: Shunqian Zheng <zhengsq-TNX95d0MmH7DzftRWevZcw@public.gmane.org>,
 joro-zLv9SwRftAIdnm+yROfE0A@public.gmane.org,
 heiko-4mtYJXux2i+zQB+pC5nmwQ@public.gmane.org,
 Catalin Marinas, Mark Rutland
Cc: linux-rockchip-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
 iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org,
 linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, Simon
List-Id: linux-rockchip.vger.kernel.org

On 23/05/16 02:37, Shunqian Zheng wrote:
> From: Simon
>
> Signed-off-by: Simon
> ---
>  drivers/iommu/rockchip-iommu.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
> index 043d18c..1741b65 100644
> --- a/drivers/iommu/rockchip-iommu.c
> +++ b/drivers/iommu/rockchip-iommu.c
> @@ -95,12 +95,16 @@ struct rk_iommu {
>
>  static inline void rk_table_flush(u32 *va, unsigned int count)
>  {
> +#if defined(CONFIG_ARM)
>  	phys_addr_t pa_start = virt_to_phys(va);
>  	phys_addr_t pa_end = virt_to_phys(va + count);
>  	size_t size = pa_end - pa_start;
>
>  	__cpuc_flush_dcache_area(va, size);
>  	outer_flush_range(pa_start, pa_end);
> +#elif defined(CONFIG_ARM64)
> +	__dma_flush_range(va, va + count);
> +#endif

Ugh, please don't use arch-private cache maintenance functions directly
from a driver. Allocating/mapping page tables to be read by the IOMMU is
still DMA, so using the DMA APIs is the correct way to manage them,
*especially* if it needs to work across multiple architectures.

Robin.

> }
>
> static struct rk_iommu_domain *to_rk_domain(struct iommu_domain *dom)
>
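
[For context, a minimal sketch of the DMA-API approach being suggested:
treat each page-table page as a streaming DMA buffer owned by the IOMMU
device, so the DMA layer performs whatever cache maintenance the
architecture actually needs. The helper names (rk_table_alloc,
rk_table_flush) and the use of a whole page per table are illustrative
assumptions, not the driver's real code; "dev" is assumed to be the
IOMMU's struct device.]

```c
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Allocate a zeroed page-table page and hand ownership to the device. */
static u32 *rk_table_alloc(struct device *dev, dma_addr_t *dma)
{
	u32 *va = (u32 *)get_zeroed_page(GFP_ATOMIC | GFP_DMA32);

	if (!va)
		return NULL;

	*dma = dma_map_single(dev, va, PAGE_SIZE, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, *dma)) {
		free_page((unsigned long)va);
		return NULL;
	}
	return va;
}

/*
 * After the CPU updates 'count' entries, make them visible to the
 * IOMMU. On a coherent system this is a no-op; on a non-coherent one
 * the DMA API does the appropriate flush, with no #ifdef CONFIG_ARM*
 * needed in the driver.
 */
static void rk_table_flush(struct device *dev, dma_addr_t dma,
			   unsigned int count)
{
	dma_sync_single_for_device(dev, dma, count * sizeof(u32),
				   DMA_TO_DEVICE);
}
```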