From: Martin Kelly <mkelly@xevo.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org,
Julien Grall <julien.grall@linaro.org>,
Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: Re: GPU passthrough on ARM
Date: Fri, 26 Jan 2018 16:41:28 -0800
Message-ID: <ec3f44df-2a4e-f418-f15e-610891cbcb24@xevo.com>
In-Reply-To: <CAPD2p-nYe305YPenOFTQfptVHnNQ2ms=UJ0O7cb4pJHUkeXFrw@mail.gmail.com>
On 01/26/2018 10:13 AM, Oleksandr Tyshchenko wrote:
> Hi, Martin
>
> On Fri, Jan 26, 2018 at 8:05 PM, Oleksandr Tyshchenko
> <olekstysh@gmail.com> wrote:
>> On Fri, Jan 26, 2018 at 3:49 PM, Julien Grall <julien.grall@linaro.org> wrote:
>>> Hi,
>>>
>>> I am CCing Oleksandr. He knows this platform better than me.
>>
>> Hi, Julien.
>>
>> OK, thank you, I will try to provide some pointers.
>>
>>>
>>> Cheers,
>>>
>>> On 26/01/18 00:29, Martin Kelly wrote:
>>>>
>>>> On 01/25/2018 04:17 AM, Julien Grall wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 24/01/18 22:10, Martin Kelly wrote:
>>>>>>
>>>>>> Hi,
>>>>>
>>>>>
>>>>> Hello,
>>>>>
>>>>>> Does anyone know if GPU passthrough is supported on ARM? (e.g. for a GPU
>>>>>> integrated into an ARM SoC). I checked documentation and the code, but I
>>>>>> couldn't tell for sure.
>>>>>>
>>>>>> If so, what are the hardware requirements for it? If not, is it feasible
>>>>>> to do in the future?
>>>>>
>>>>>
>>>>> Xen on Arm supports passthrough of devices integrated into an ARM SoC.
>>>>> In general, we highly recommend having the GPU behind an IOMMU, so that
>>>>> passthrough is fully secure.
>>>>>
>>>>> Does your platform have an IOMMU? If so, which one? Do you know if the
>>>>> GPU is behind it?
>>>>>
>>>>> It would be possible to do passthrough without an IOMMU, but that's more
>>>>> complex and would require some hacks in Xen to make sure the guest memory
>>>>> is direct mapped (i.e. guest physical address = host physical address).
>>>>>
>>>>> For more documentation on how to do it, see [1] and [2].
>>>>>
>>>>> Cheers,
>>>>>
>>>>> [1]
>>>>> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
>>>>> [2] https://wiki.xen.org/images/1/17/Device_passthrough_xen.pdf
>>>>>
>>>>
>>>> Hi Julien,
>>>>
>>>> Thanks very much for the information. I'm looking at the Renesas R-Car H3
>>>> R8A7795, which has an IOMMU (using the Linux ipmmu-vmsa driver in
>>>> drivers/iommu/ipmmu-vmsa.c). Looking at the device tree for it
>>>> (r8a7795.dtsi), it appears you could pass through the display@feb00000 node
>>>> for the DRM driver.
>>>>
>>>> I did notice this patch series, which didn't get merged:
>>>>
>>>> https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg02679.html
>>>>
>>>> Presumably that driver would be needed in Xen.
>>>>
>>>> Are there any gotchas I'm missing? Is GPU passthrough on ARM something
>>>> that is "theoretically doable" or something that has been done already and
>>>> shown to be performant?
>
> I assume the H3 SoC revision you are using is ES2.0, since you mention r8a7795.dtsi.
>
> BTW, what BSP version are you using? I am wondering what your use-case is.
> If you want to keep the GPU in some dedicated domain with no other hardware
> at all, you have to use something like a PV DRM frontend running there and
> a PV DRM backend in the hardware/driver domain.
> Things become much simpler if you pass through all required display
> sub-components as well, so that the "rcar-du" DRM driver is functional.
> Which approach are you looking for?
My BSP and kernel versions are flexible, and I'd be happy to use whatever
works best. The use-case is running OpenCL inside a VM for high-performance
GPGPU. This means performance is critical, and I would go with whatever
solution offers the best performance.
>
> Anyway, in both cases you have to pass through the GPU. For that, several
> things need to be done:
>
> 1. Xen side:
>
> As for the patch series, you are right, you have to base your work on it.
> There are two separate patch series which haven't been upstreamed yet but
> are needed for the passthrough feature to work on R-Car Gen3 SoCs (M3, H3):
>
> https://www.mail-archive.com/xen-devel@lists.xen.org/msg115901.html
> https://www.mail-archive.com/xen-devel@lists.xen.org/msg116038.html
>
> An additional patch is also needed to teach the IPMMU-VMSA driver to handle
> devices which are hooked up to multiple IPMMU caches, since the GPU on the
> H3 SoC is connected to multiple IPMMU caches: PV0 - PV3.
>
> I have created a new branch you can simply base on to get the required
> support in hand:
> repo: https://github.com/otyshchenko1/xen.git branch: ipmmu_next
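>
> For reference, fetching that branch might look like this (a minimal
> sketch, assuming a standard Xen build environment):
>
> # Clone my tree and switch to the branch with the IPMMU patches applied:
> git clone https://github.com/otyshchenko1/xen.git
> cd xen
> git checkout ipmmu_next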
>
> 2. Device trees and guest config file:
>
> 2.1. You have to add the following to the domain 0 device tree:
>
> There is no magic here. This just enables the corresponding IPMMUs, hooks
> the GPU up to them, and notifies Xen that the device is going to be passed
> through.
>
> &gsx {
>     xen,passthrough;
>
>     iommus = <&ipmmu_pv0 0>, <&ipmmu_pv0 1>,
>              <&ipmmu_pv0 2>, <&ipmmu_pv0 3>,
>              <&ipmmu_pv0 4>, <&ipmmu_pv0 5>,
>              <&ipmmu_pv0 6>, <&ipmmu_pv0 7>,
>              <&ipmmu_pv1 0>, <&ipmmu_pv1 1>,
>              <&ipmmu_pv1 2>, <&ipmmu_pv1 3>,
>              <&ipmmu_pv1 4>, <&ipmmu_pv1 5>,
>              <&ipmmu_pv1 6>, <&ipmmu_pv1 7>,
>              <&ipmmu_pv2 0>, <&ipmmu_pv2 1>,
>              <&ipmmu_pv2 2>, <&ipmmu_pv2 3>,
>              <&ipmmu_pv2 4>, <&ipmmu_pv2 5>,
>              <&ipmmu_pv2 6>, <&ipmmu_pv2 7>,
>              <&ipmmu_pv3 0>, <&ipmmu_pv3 1>,
>              <&ipmmu_pv3 2>, <&ipmmu_pv3 3>,
>              <&ipmmu_pv3 4>, <&ipmmu_pv3 5>,
>              <&ipmmu_pv3 6>, <&ipmmu_pv3 7>;
> };
>
> &ipmmu_pv0 {
>     status = "okay";
> };
>
> &ipmmu_pv1 {
>     status = "okay";
> };
>
> &ipmmu_pv2 {
>     status = "okay";
> };
>
> &ipmmu_pv3 {
>     status = "okay";
> };
>
> &ipmmu_mm {
>     status = "okay";
> };
>
> 2.2. You have to add the following to the guest config file:
>
> I might be mistaken here; please use the existing documentation to see how
> a guest config file should be properly modified, for example:
> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
>
> dtdev = [ "/soc/gsx@fd000000" ]
>
> irqs = [ 151 ]
>
> iomem = [ "fd000,40" ]
>
> device_tree = "domU.dtb" <- This is the guest partial device tree.
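>
> Put together, a minimal guest config might look like the sketch below.
> The name, kernel path, memory and vcpus values are placeholders for
> illustration only; adjust them for your setup:
>
> name = "domU"
> kernel = "/path/to/Image"   # placeholder, point at your guest kernel
> memory = 512
> vcpus = 2
> device_tree = "domU.dtb"    # the guest partial device tree from 2.3
> dtdev = [ "/soc/gsx@fd000000" ]
> irqs = [ 151 ]
> iomem = [ "fd000,40" ]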
>
> 2.3. The guest partial device tree I have actually used is:
>
> /dts-v1/;
>
> #include <dt-bindings/interrupt-controller/arm-gic.h>
>
> / {
>     #address-cells = <2>;
>     #size-cells = <2>;
>
>     passthrough {
>         compatible = "simple-bus";
>         ranges;
>
>         #address-cells = <2>;
>         #size-cells = <2>;
>
>         gsx: gsx@fd000000 {
>             compatible = "renesas,gsx";
>             reg = <0 0xfd000000 0 0x3ffff>;
>             interrupts = <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>;
>             /* clocks = <&cpg CPG_MOD 112>; */
>             /* power-domains = <&sysc R8A7795_PD_3DG_E>; */
>         };
>     };
> };
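>
> Since the partial device tree uses #include, it has to be run through the
> C preprocessor before dtc. A sketch of how to build domU.dtb, assuming the
> dt-bindings header comes from a Linux kernel tree at $LINUX:
>
> # Resolve the #include against the kernel headers (path is an assumption):
> cpp -nostdinc -I $LINUX/include -undef -x assembler-with-cpp \
>     domU.dts > domU.pp.dts
> # Compile the preprocessed source into the partial DTB used by the config:
> dtc -I dts -O dtb -o domU.dtb domU.pp.dts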
>
> Hope this helps.
>
Yes, this is certainly very helpful, thank you.
Just to verify, it sounds like this would be HVM mode with the
passthrough being transparent to the guest; is that correct?
Also, do you know if it would be possible to share the GPU across
multiple VMs, or would it be exclusive to a VM? I'm not sure whether the
IOMMU supports this, or whether the DRM driver supports it.