From: Dmitry Osipenko <digetx@gmail.com>
To: Thierry Reding, Jonathan Hunter, Joerg Roedel, Rob Herring,
	Robin Murphy
Cc: linux-tegra@vger.kernel.org, devicetree@vger.kernel.org,
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 05/20] iommu/tegra: gart: Optimize mapping / unmapping performance
Date: Mon, 24 Sep 2018 03:41:38 +0300
Message-ID: <20180924004153.8232-6-digetx@gmail.com>
In-Reply-To: <20180924004153.8232-1-digetx@gmail.com>
References: <20180924004153.8232-1-digetx@gmail.com>

Currently the GART writes one page entry at a time. It is more efficient
to aggregate the writes and flush the bus buffer at the end; this gives a
10-40% map/unmap performance boost (depending on the size of the mapping)
compared to flushing after each page entry update.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
---
 drivers/iommu/tegra-gart.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index f6cf5cd5aaca..86a855c0d031 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -287,7 +287,6 @@ static int gart_iommu_map(struct iommu_domain *domain, unsigned long iova,
 		}
 	}
 	gart_set_pte(gart, iova, GART_PTE(pfn));
-	FLUSH_GART_REGS(gart);
 	spin_unlock_irqrestore(&gart->pte_lock, flags);
 	return 0;
 }
@@ -304,7 +303,6 @@ static size_t gart_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 
 	spin_lock_irqsave(&gart->pte_lock, flags);
 	gart_set_pte(gart, iova, 0);
-	FLUSH_GART_REGS(gart);
 	spin_unlock_irqrestore(&gart->pte_lock, flags);
 	return bytes;
 }
@@ -370,6 +368,14 @@ static int gart_iommu_of_xlate(struct device *dev,
 	return 0;
 }
 
+static void gart_iommu_sync(struct iommu_domain *domain)
+{
+	struct gart_domain *gart_domain = to_gart_domain(domain);
+	struct gart_device *gart = gart_domain->gart;
+
+	FLUSH_GART_REGS(gart);
+}
+
 static const struct iommu_ops gart_iommu_ops = {
 	.capable	= gart_iommu_capable,
 	.domain_alloc	= gart_iommu_domain_alloc,
@@ -384,6 +390,8 @@ static const struct iommu_ops gart_iommu_ops = {
 	.iova_to_phys	= gart_iommu_iova_to_phys,
 	.pgsize_bitmap	= GART_IOMMU_PGSIZES,
 	.of_xlate	= gart_iommu_of_xlate,
+	.iotlb_sync_map	= gart_iommu_sync,
+	.iotlb_sync	= gart_iommu_sync,
 };
 
 static int tegra_gart_suspend(struct device *dev)
-- 
2.19.0
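
For context, a minimal sketch of the call flow this patch relies on,
assuming the IOMMU core drives the callbacks as after the
iotlb_sync_map/iotlb_sync hooks presumably introduced earlier in this
series; the helper name map_then_sync() is illustrative and not a kernel
API:

#include <linux/bitops.h>
#include <linux/iommu.h>

/*
 * Illustrative sketch only (not part of the patch): each per-page
 * ->map() call now just writes a PTE, and a single ->iotlb_sync_map()
 * at the end flushes the GART bus buffer once for the whole range,
 * instead of once per page.
 */
static int map_then_sync(struct iommu_domain *domain, unsigned long iova,
			 phys_addr_t paddr, size_t size, int prot)
{
	const struct iommu_ops *ops = domain->ops;
	size_t pgsize = 1UL << __ffs(domain->pgsize_bitmap);
	size_t done;
	int ret;

	for (done = 0; done < size; done += pgsize) {
		/* gart_iommu_map(): PTE write only, no flush */
		ret = ops->map(domain, iova + done, paddr + done,
			       pgsize, prot);
		if (ret)
			return ret;
	}

	/* one FLUSH_GART_REGS() for the whole mapping, via gart_iommu_sync() */
	if (ops->iotlb_sync_map)
		ops->iotlb_sync_map(domain);

	return 0;
}

The unmap path is symmetrical: gart_set_pte(gart, iova, 0) per page,
followed by a single flush through the .iotlb_sync callback.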