From: manoj.iyer@canonical.com (Manoj Iyer)
To: linux-arm-kernel@lists.infradead.org
Subject: [RESEND,3/3] iommu/dma: Plumb in the per-CPU IOVA caches
Date: Thu, 6 Apr 2017 13:15:37 -0500 (CDT) [thread overview]
Message-ID: <alpine.DEB.2.20.1704061315040.4602@lazy> (raw)
In-Reply-To: <fcb9121e608da4f62e21538f558710fbb12d4a69.1490971180.git.robin.murphy@arm.com>
On Fri, 31 Mar 2017, Robin Murphy wrote:
> With IOVA allocation suitably tidied up, we are finally free to opt in
> to the per-CPU caching mechanism. The caching alone can provide a modest
> improvement over walking the rbtree for weedier systems (iperf3 shows
> ~10% more ethernet throughput on an ARM Juno r1 constrained to a single
> 650MHz Cortex-A53), but the real gain will be in sidestepping the rbtree
> lock contention which larger ARM-based systems with lots of parallel I/O
> are starting to feel the pain of.
>
> Reviewed-by: Nate Watterson <nwatters@codeaurora.org>
> Tested-by: Nate Watterson <nwatters@codeaurora.org>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---
> drivers/iommu/dma-iommu.c | 39 ++++++++++++++++++---------------------
> 1 file changed, 18 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 1b94beb43036..8348f366ddd1 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -361,8 +361,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
> {
> struct iommu_dma_cookie *cookie = domain->iova_cookie;
> struct iova_domain *iovad = &cookie->iovad;
> - unsigned long shift, iova_len;
> - struct iova *iova = NULL;
> + unsigned long shift, iova_len, iova = 0;
>
> if (cookie->type == IOMMU_DMA_MSI_COOKIE) {
> cookie->msi_iova += size;
> @@ -371,41 +370,39 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
>
> shift = iova_shift(iovad);
> iova_len = size >> shift;
> + /*
> + * Freeing non-power-of-two-sized allocations back into the IOVA caches
> + * will come back to bite us badly, so we have to waste a bit of space
> + * rounding up anything cacheable to make sure that can't happen. The
> + * order of the unadjusted size will still match upon freeing.
> + */
> + if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
> + iova_len = roundup_pow_of_two(iova_len);
>
> if (domain->geometry.force_aperture)
> dma_limit = min(dma_limit, domain->geometry.aperture_end);
>
> /* Try to get PCI devices a SAC address */
> if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
> - iova = alloc_iova(iovad, iova_len, DMA_BIT_MASK(32) >> shift,
> - true);
> - /*
> - * Enforce size-alignment to be safe - there could perhaps be an
> - * attribute to control this per-device, or at least per-domain...
> - */
> - if (!iova)
> - iova = alloc_iova(iovad, iova_len, dma_limit >> shift, true);
> + iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);
>
> - return (dma_addr_t)iova->pfn_lo << shift;
> + if (!iova)
> + iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);
> +
> + return (dma_addr_t)iova << shift;
> }
>
> static void iommu_dma_free_iova(struct iommu_dma_cookie *cookie,
> dma_addr_t iova, size_t size)
> {
> struct iova_domain *iovad = &cookie->iovad;
> - struct iova *iova_rbnode;
> + unsigned long shift = iova_shift(iovad);
>
> /* The MSI case is only ever cleaning up its most recent allocation */
> - if (cookie->type == IOMMU_DMA_MSI_COOKIE) {
> + if (cookie->type == IOMMU_DMA_MSI_COOKIE)
> cookie->msi_iova -= size;
> - return;
> - }
> -
> - iova_rbnode = find_iova(iovad, iova_pfn(iovad, iova));
> - if (WARN_ON(!iova_rbnode))
> - return;
> -
> - __free_iova(iovad, iova_rbnode);
> + else
> + free_iova_fast(iovad, iova >> shift, size >> shift);
> }
>
> static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr,
>
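As an aside, the power-of-two rounding added in iommu_dma_alloc_iova() above is worth illustrating: the per-CPU caches are bucketed by allocation order, so the size used at alloc time has to land in the same bucket as the size seen at free time. Below is a minimal standalone sketch of just that rounding decision (not the kernel code itself; IOVA_RANGE_CACHE_MAX_SIZE is assumed to be 6 as in include/linux/iova.h, and roundup_pow_of_two() is open-coded as a stand-in for the kernel helper):

/*
 * Standalone illustration (plain C, no kernel headers) of the rounding
 * performed in iommu_dma_alloc_iova(): cacheable sizes are rounded up to
 * a power of two so that alloc and free resolve to the same cache order.
 */
#include <stdio.h>

#define IOVA_RANGE_CACHE_MAX_SIZE 6   /* assumed, per include/linux/iova.h */

/* open-coded stand-in for the kernel's roundup_pow_of_two() */
static unsigned long roundup_pow_of_two(unsigned long n)
{
        unsigned long p = 1;

        while (p < n)
                p <<= 1;
        return p;
}

static unsigned long cacheable_len(unsigned long iova_len)
{
        /* only sizes below 2^(MAX - 1) pages go through the caches at all */
        if (iova_len < (1UL << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
                return roundup_pow_of_two(iova_len);
        return iova_len;
}

int main(void)
{
        unsigned long len;

        /* e.g. a 3-page request is allocated as a 4-page (order-2) range */
        for (len = 1; len <= 40; len += 13)
                printf("requested %2lu pages -> rounded to %2lu pages\n",
                       len, cacheable_len(len));
        return 0;
}

With that rounding, a request whose unadjusted size is, say, 3 pages occupies a 4-page range, and order 2 is what both the allocation and the later free resolve to, which is the property the comment in the patch relies on.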
This patch series helps resolve an Ubuntu bug in which the Ubuntu Zesty
(4.10-based) kernel reports multi-CPU soft lockups on a QDF2400 SDP:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1680549
The series, together with the following cherry-picks from Linus's tree,
dddd632b072f iommu/dma: Implement PCI allocation optimisation
de84f5f049d9 iommu/dma: Stop getting dma_32bit_pfn wrong
was applied to the Ubuntu Zesty 4.10 kernel (Ubuntu-4.10.0-18.20) and tested
on a QDF2400 SDP.
Tested-by: Manoj Iyer <manoj.iyer@canonical.com>
--
============================
Manoj Iyer
Ubuntu/Canonical
ARM Servers - Cloud
============================
Thread overview: 10+ messages
2017-03-31 14:46 [PATCH RESEND 0/3] IOVA allocation improvements for iommu-dma Robin Murphy
2017-03-31 14:46 ` [PATCH RESEND 1/3] iommu/dma: Convert to address-based allocation Robin Murphy
2017-04-06 18:11 ` [RESEND,1/3] " Manoj Iyer
2017-03-31 14:46 ` [PATCH RESEND 2/3] iommu/dma: Clean up MSI IOVA allocation Robin Murphy
2017-04-06 18:14 ` [RESEND,2/3] " Manoj Iyer
2017-03-31 14:46 ` [PATCH RESEND 3/3] iommu/dma: Plumb in the per-CPU IOVA caches Robin Murphy
2017-04-06 18:15 ` Manoj Iyer [this message]
2017-04-06 18:56 ` [RESEND,3/3] " Robin Murphy
2017-04-07 7:22 ` Nate Watterson
2017-04-03 10:45 ` [PATCH RESEND 0/3] IOVA allocation improvements for iommu-dma Joerg Roedel