From: Robin Murphy <robin.murphy@arm.com>
To: Peter Geis <pgwipeout@gmail.com>
Cc: "xxm@rock-chips.com" <xxm@rock-chips.com>, hch <hch@lst.de>,
joro <joro@8bytes.org>, will <will@kernel.org>,
iommu <iommu@lists.linux-foundation.org>,
linux-rockchip <linux-rockchip@lists.infradead.org>
Subject: Re: Different type iommus integrated in a SoC
Date: Thu, 3 Jun 2021 13:49:20 +0100 [thread overview]
Message-ID: <144e1c48-96f2-3596-3354-4c023bf6ccc0@arm.com> (raw)
In-Reply-To: <CAMdYzYpiykTtK3CtAN9F4g+f6JasTSsUh54wvAZ_st3C=_LygQ@mail.gmail.com>
On 2021-06-03 13:24, Peter Geis wrote:
> On Thu, Jun 3, 2021 at 8:07 AM Robin Murphy <robin.murphy@arm.com> wrote:
>>
>> On 2021-05-27 03:37, xxm@rock-chips.com wrote:
>>> Hi all,
>>>
>>> I have a SoC with two different types of IOMMU integrated: one is an ARM SMMU, which serves the PCIe/SATA/USB,
>>> and the other is a vendor-specific IOMMU, which serves the display and multimedia devices.
>>>
>>> In the current Linux kernel, the IOMMU framework seems to support only one IOMMU type at runtime; if both types are enabled, only one of them works.
>>> Is there any way to support this kind of SoC?
>>
>> Hooray! I've been forecasting this for years, but the cases we regularly
>> hit with internal FPGA prototyping (or the secret unused MMU-400 I
>> found on RK3288) have never really been a strong enough argument to
>> stand behind.
>>
>> Based on what I remember from looking into this a few years ago,
>> converting *most* of the API to per-device ops (now via dev->iommu) is
>> trivial; the main challenge will be getting the per-device data
>> bootstrapped in iommu_probe_device(), which would probably need to rely
>> on the fwspec and/or list of registered IOMMU instances.
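[Editor's note: the per-device ops bootstrapping Robin describes might look roughly like the following standalone sketch. The structure names (`dev_iommu`, `iommu_probe_device`, the fwspec-style matching) echo the kernel's, but everything here is an illustrative stand-in compiled in userspace, not actual kernel code.]

```c
/* Hypothetical, simplified model of per-device IOMMU ops dispatch:
 * each device's ops are bootstrapped at probe time by matching its
 * firmware description against the list of registered IOMMU
 * instances, instead of relying on a single global ops table. */
#include <stddef.h>
#include <string.h>

struct iommu_ops {
	const char *driver;	/* e.g. "arm-smmu" or "rk-iommu" */
};

/* Per-device IOMMU state, analogous to dev->iommu in the kernel. */
struct dev_iommu {
	const struct iommu_ops *ops;
};

struct device {
	const char *iommu_compatible;	/* stand-in for the fwspec's fwnode */
	struct dev_iommu iommu;
};

/* Registered IOMMU instances the probe path can match against. */
static const struct iommu_ops *registered[2];
static int nr_registered;

static void iommu_register(const struct iommu_ops *ops)
{
	registered[nr_registered++] = ops;
}

/* Bootstrap the per-device data: pick the ops whose instance matches
 * this device's firmware description. */
static int iommu_probe_device(struct device *dev)
{
	for (int i = 0; i < nr_registered; i++) {
		if (!strcmp(registered[i]->driver, dev->iommu_compatible)) {
			dev->iommu.ops = registered[i];
			return 0;
		}
	}
	return -1;	/* no matching IOMMU instance */
}
```

With two instances registered, a PCIe device and a display device each end up with their own driver's ops, which is the crux of supporting mixed IOMMU types in one SoC.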
>>
>> The other notable thing which will need to change is the domain
>> allocation interface, but in practice I think everyone who calls
>> iommu_domain_alloc() today is in fact doing so for a specific device, so
>> I don't think it's as big a problem as it might first appear.
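[Editor's note: the interface change Robin alludes to, a device-aware domain allocation, might be sketched as below. The function name `iommu_domain_alloc_dev` and all structures are illustrative assumptions, again as a userspace stand-in rather than kernel code.]

```c
/* Hypothetical sketch of device-aware domain allocation: dispatch
 * through the device's own ops (set up at probe time), so the right
 * driver allocates the domain even when several IOMMU types coexist. */
#include <stdlib.h>

struct iommu_domain {
	const char *owner;	/* which driver allocated this domain */
};

struct iommu_ops {
	const char *driver;
	struct iommu_domain *(*domain_alloc)(void);
};

struct device {
	const struct iommu_ops *iommu_ops;	/* bootstrapped at probe time */
};

/* Allocate a domain for a specific device, rather than for a "bus". */
static struct iommu_domain *iommu_domain_alloc_dev(struct device *dev)
{
	if (!dev->iommu_ops || !dev->iommu_ops->domain_alloc)
		return NULL;
	return dev->iommu_ops->domain_alloc();
}
```

This matches Robin's observation: since callers of iommu_domain_alloc() in practice have a specific device in mind, passing that device in lets each allocation reach the correct driver.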
>>
>> Robin.
>>
>
> Good Morning Robin,
>
> I think the Tegra group would also be interested in this work.
> AFAIK they have the smmu and the tegra gart and have been trying to
> figure out the runtime handover from the bootloader to the kernel
> without smashing everything and starting over.
No, handoff of live DMA from the bootloader is an entirely unrelated
issue, and there are already several patchsets in flight to address
various parts of that. My understanding of Tegra SoCs is that they use
*either* tegra-gart, tegra-smmu, or arm-smmu depending on the SoC
generation, but the types aren't mixed within any single SoC.
Robin.
Thread overview: 4+ messages
[not found] <2021052710373173260118@rock-chips.com>
2021-06-03 12:05 ` Different type iommus integrated in a SoC Robin Murphy
2021-06-03 12:24 ` Peter Geis
2021-06-03 12:49 ` Robin Murphy [this message]
2021-06-04 15:44 ` joro