From: Vasant Hegde <vasant.hegde@amd.com>
To: Robin Murphy <robin.murphy@arm.com>, joro@8bytes.org
Cc: will@kernel.org, iommu@lists.linux.dev,
linux-kernel@vger.kernel.org,
Linus Torvalds <torvalds@linux-foundation.org>,
Jakub Kicinski <kuba@kernel.org>,
John Garry <john.g.garry@oracle.com>
Subject: Re: [PATCH v4] iommu: Optimise PCI SAC address trick
Date: Tue, 18 Apr 2023 18:35:54 +0530
Message-ID: <30c4dc3d-dc49-c533-3af0-3d804aaf1407@amd.com>
In-Reply-To: <54083e3b-cba3-c719-651d-bf99d6eca16d@arm.com>
Robin,
On 4/18/2023 4:27 PM, Robin Murphy wrote:
> On 2023-04-18 10:23, Vasant Hegde wrote:
>> Robin,
>>
>>
>> On 4/13/2023 7:10 PM, Robin Murphy wrote:
>>> Per the reasoning in commit 4bf7fda4dce2 ("iommu/dma: Add config for
>>> PCI SAC address trick") and its subsequent revert, this mechanism no
>>> longer serves its original purpose, but now only works around broken
>>> hardware/drivers in a way that is unfortunately too impactful to remove.
>>>
>>> This does not, however, prevent us from solving the performance impact
>>> which that workaround has on large-scale systems that don't need it.
>>> Once the 32-bit IOVA space fills up and a workload starts allocating and
>>> freeing on both sides of the boundary, the opportunistic SAC allocation
>>> can then end up spending significant time hunting down scattered
>>> fragments of free 32-bit space, or just reestablishing max32_alloc_size.
>>> This can easily be exacerbated by a change in allocation pattern, such
>>> as by changing the network MTU, which can increase pressure on the
>>> 32-bit space by leaving a large quantity of cached IOVAs which are now
>>> the wrong size to be recycled, but also won't be freed since the
>>> non-opportunistic allocations can still be satisfied from the whole
>>> 64-bit space without triggering the reclaim path.
>>>
>>> However, in the context of a workaround where smaller DMA addresses
>>> aren't simply a preference but a necessity, if we get to that point at
>>> all then in fact it's already the endgame. The nature of the allocator
>>> is currently such that the first IOVA we give to a device after the
>>> 32-bit space runs out will be the highest possible address for that
>>> device, ever. If that works, then great, we know we can optimise for
>>> speed by always allocating from the full range. And if it doesn't, then
>>> the worst has already happened and any brokenness is now showing, so
>>> there's little point in continuing to try to hide it.
>>>
>>> To that end, implement a flag to refine the SAC business into a
>>> per-device policy that can automatically get itself out of the way if
>>> and when it stops being useful.
>>>
>>> CC: Linus Torvalds <torvalds@linux-foundation.org>
>>> CC: Jakub Kicinski <kuba@kernel.org>
>>> Reviewed-by: John Garry <john.g.garry@oracle.com>
>>> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
>>
>> We hit a kernel soft lockup while running a stress-ng test on a system with
>> 384 CPUs and an NVMe disk. This patch helped solve one soft lockup in the
>> allocation path.
>>
>>> ---
>>>
>>> v4: Rebase to use the new bitfield in dev_iommu, expand commit message.
>>>
>>> drivers/iommu/dma-iommu.c | 26 ++++++++++++++++++++------
>>> drivers/iommu/dma-iommu.h | 8 ++++++++
>>> drivers/iommu/iommu.c | 3 +++
>>> include/linux/iommu.h | 2 ++
>>> 4 files changed, 33 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>> index 99b2646cb5c7..9193ad5bc72f 100644
>>> --- a/drivers/iommu/dma-iommu.c
>>> +++ b/drivers/iommu/dma-iommu.c
>>> @@ -630,7 +630,7 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
>>> {
>>> struct iommu_dma_cookie *cookie = domain->iova_cookie;
>>> struct iova_domain *iovad = &cookie->iovad;
>>> - unsigned long shift, iova_len, iova = 0;
>>> + unsigned long shift, iova_len, iova;
>>> if (cookie->type == IOMMU_DMA_MSI_COOKIE) {
>>> cookie->msi_iova += size;
>>> @@ -645,15 +645,29 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
>>> if (domain->geometry.force_aperture)
>>> dma_limit = min(dma_limit, (u64)domain->geometry.aperture_end);
>>> - /* Try to get PCI devices a SAC address */
>>> - if (dma_limit > DMA_BIT_MASK(32) && !iommu_dma_forcedac && dev_is_pci(dev))
>>> + /*
>>> + * Try to use all the 32-bit PCI addresses first. The original SAC vs.
>>> + * DAC reasoning loses relevance with PCIe, but enough hardware and
>>> + * firmware bugs are still lurking out there that it's safest not to
>>> + * venture into the 64-bit space until necessary.
>>> + *
>>> + * If your device goes wrong after seeing the notice then likely either
>>> + * its driver is not setting DMA masks accurately, the hardware has
>>> + * some inherent bug in handling >32-bit addresses, or not all the
>>> + * expected address bits are wired up between the device and the IOMMU.
>>> + */
>>> + if (dma_limit > DMA_BIT_MASK(32) && dev->iommu->pci_32bit_workaround) {
>>> iova = alloc_iova_fast(iovad, iova_len,
>>> DMA_BIT_MASK(32) >> shift, false);
>>> + if (iova)
>>> + goto done;
>>> - if (!iova)
>>> - iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift,
>>> - true);
>>> + dev->iommu->pci_32bit_workaround = false;
>>> + dev_notice(dev, "Using %d-bit DMA addresses\n", bits_per(dma_limit));
>>
>> Maybe dev_notice_once()? Otherwise we may see this message multiple times
>> for the same device, like below:
>
> Oh, that's a bit irritating. Of course multiple threads can reach this
> in parallel, silly me :(
>
> I would really prefer the notice to be once per device rather than once
> globally, since there's clearly no guarantee that the first device to
> hit this case is going to be the one which is liable to go wrong.

Agree. Makes sense.

> Does the (untested) diff below do any better?

Thanks for the patch. I have tested it and it's working fine.
-Vasant