* GPU passthrough on ARM
@ 2018-01-24 22:10 Martin Kelly
2018-01-25 12:17 ` Julien Grall
From: Martin Kelly @ 2018-01-24 22:10 UTC (permalink / raw)
To: xen-devel
Hi,
Does anyone know if GPU passthrough is supported on ARM? (e.g. for a GPU
integrated into an ARM SoC). I checked documentation and the code, but I
couldn't tell for sure.
If so, what are the hardware requirements for it? If not, is it feasible
to do in the future?
Thanks,
Martin
* Re: GPU passthrough on ARM
2018-01-24 22:10 GPU passthrough on ARM Martin Kelly
@ 2018-01-25 12:17 ` Julien Grall
2018-01-26 0:29 ` Martin Kelly
From: Julien Grall @ 2018-01-25 12:17 UTC (permalink / raw)
To: Martin Kelly, xen-devel
On 24/01/18 22:10, Martin Kelly wrote:
> Hi,
Hello,
> Does anyone know if GPU passthrough is supported on ARM? (e.g. for a GPU
> integrated into an ARM SoC). I checked documentation and the code, but I
> couldn't tell for sure.
>
> If so, what are the hardware requirements for it? If not, is it feasible
> to do in the future?
Xen on Arm supports passthrough of devices integrated into an ARM SoC. In
general, we highly recommend having the GPU behind an IOMMU, so that
passthrough is fully secure.
Does your platform have an IOMMU? If so, which one? Do you know if the GPU
is behind it?
It would be possible to do passthrough without an IOMMU, but that is more
complex and would require some hacks in Xen to make sure the guest memory
is direct mapped (i.e. guest physical address = host physical address).
For more documentation on how to do it, see [1] and [2].
Cheers,
[1]
https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
[2] https://wiki.xen.org/images/1/17/Device_passthrough_xen.pdf
>
> Thanks,
> Martin
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel
--
Julien Grall
* Re: GPU passthrough on ARM
2018-01-25 12:17 ` Julien Grall
@ 2018-01-26 0:29 ` Martin Kelly
2018-01-26 13:49 ` Julien Grall
From: Martin Kelly @ 2018-01-26 0:29 UTC (permalink / raw)
To: Julien Grall, xen-devel
On 01/25/2018 04:17 AM, Julien Grall wrote:
>
>
> On 24/01/18 22:10, Martin Kelly wrote:
>> Hi,
>
> Hello,
>
>> Does anyone know if GPU passthrough is supported on ARM? (e.g. for a
>> GPU integrated into an ARM SoC). I checked documentation and the code,
>> but I couldn't tell for sure.
>>
>> If so, what are the hardware requirements for it? If not, is it
>> feasible to do in the future?
>
> Xen Arm supports device integrated into an ARM SoC. In general we highly
> recommend to have the GPU behind an IOMMU. So passthrough would be fully
> secure.
>
> Does your platform has an IOMMU? If so which one? Do you know if the GPU
> is behind it?
>
> It would be possible to do passthrough without IOMMU, but that's more
> complex and would require some hack in Xen to make sure the guest memory
> is direct mapped (e.g guest physical address = host physical address).
>
> For more documentation on how to do it (see [1] and [2]).
>
> Cheers,
>
> [1]
> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
> [2] https://wiki.xen.org/images/1/17/Device_passthrough_xen.pdf
>
Hi Julien,
Thanks very much for the information. I'm looking at the Renesas R-Car
H3 R8A7795, which has an IOMMU (using the Linux ipmmu-vmsa driver in
drivers/iommu/ipmmu-vmsa.c). Looking at the device tree for it
(r8a7795.dtsi), it appears you could pass through the display@feb00000
node for the DRM driver.
I did notice this patch series, which didn't get merged:
https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg02679.html
Presumably that driver would be needed in Xen.
Are there any gotchas I'm missing? Is GPU passthrough on ARM something
that is "theoretically doable" or something that has been done already
and shown to be performant?
Thanks again,
Martin
* Re: GPU passthrough on ARM
2018-01-26 0:29 ` Martin Kelly
@ 2018-01-26 13:49 ` Julien Grall
2018-01-26 18:05 ` Oleksandr Tyshchenko
From: Julien Grall @ 2018-01-26 13:49 UTC (permalink / raw)
To: Martin Kelly, xen-devel, Oleksandr Tyshchenko
Hi,
I am CCing Oleksandr. He knows this platform better than me.
Cheers,
On 26/01/18 00:29, Martin Kelly wrote:
> On 01/25/2018 04:17 AM, Julien Grall wrote:
>>
>>
>> On 24/01/18 22:10, Martin Kelly wrote:
>>> Hi,
>>
>> Hello,
>>
>>> Does anyone know if GPU passthrough is supported on ARM? (e.g. for a
>>> GPU integrated into an ARM SoC). I checked documentation and the
>>> code, but I couldn't tell for sure.
>>>
>>> If so, what are the hardware requirements for it? If not, is it
>>> feasible to do in the future?
>>
>> Xen Arm supports device integrated into an ARM SoC. In general we
>> highly recommend to have the GPU behind an IOMMU. So passthrough would
>> be fully secure.
>>
>> Does your platform has an IOMMU? If so which one? Do you know if the
>> GPU is behind it?
>>
>> It would be possible to do passthrough without IOMMU, but that's more
>> complex and would require some hack in Xen to make sure the guest
>> memory is direct mapped (e.g guest physical address = host physical
>> address).
>>
>> For more documentation on how to do it (see [1] and [2]).
>>
>> Cheers,
>>
>> [1]
>> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
>> [2] https://wiki.xen.org/images/1/17/Device_passthrough_xen.pdf
>>
>
> Hi Julien,
>
> Thanks very much for the information. I'm looking at the Renesas R-Car
> H3 R8A7795, which has an IOMMU (using the Linux ipmmu-vmsa driver in
> drivers/iommu/ipmmu-vmsa.c). Looking at the device tree for it
> (r8a7795.dtsi), it appears you could pass through the display@feb00000
> node for the DRM driver.
>
> I did notice this patch series, which didn't get merged:
>
> https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg02679.html
>
> Presumably that driver would be needed in Xen.
>
> Are there any gotchas I'm missing? Is GPU passthrough on ARM something
> that is "theoretically doable" or something that has been done already
> and shown to be performant?
>
> Thanks again,
> Martin
--
Julien Grall
* Re: GPU passthrough on ARM
2018-01-26 13:49 ` Julien Grall
@ 2018-01-26 18:05 ` Oleksandr Tyshchenko
2018-01-26 18:13 ` Oleksandr Tyshchenko
From: Oleksandr Tyshchenko @ 2018-01-26 18:05 UTC (permalink / raw)
To: Julien Grall; +Cc: Martin Kelly, xen-devel, Oleksandr Tyshchenko
On Fri, Jan 26, 2018 at 3:49 PM, Julien Grall <julien.grall@linaro.org> wrote:
> Hi,
>
> I am CCing Oleksandr. He knows better than me this platform.
Hi, Julien.
OK, thank you, I will try to provide some pointers.
>
> Cheers,
>
> On 26/01/18 00:29, Martin Kelly wrote:
>>
>> On 01/25/2018 04:17 AM, Julien Grall wrote:
>>>
>>>
>>>
>>> On 24/01/18 22:10, Martin Kelly wrote:
>>>>
>>>> Hi,
>>>
>>>
>>> Hello,
>>>
>>>> Does anyone know if GPU passthrough is supported on ARM? (e.g. for a GPU
>>>> integrated into an ARM SoC). I checked documentation and the code, but I
>>>> couldn't tell for sure.
>>>>
>>>> If so, what are the hardware requirements for it? If not, is it feasible
>>>> to do in the future?
>>>
>>>
>>> Xen Arm supports device integrated into an ARM SoC. In general we highly
>>> recommend to have the GPU behind an IOMMU. So passthrough would be fully
>>> secure.
>>>
>>> Does your platform has an IOMMU? If so which one? Do you know if the GPU
>>> is behind it?
>>>
>>> It would be possible to do passthrough without IOMMU, but that's more
>>> complex and would require some hack in Xen to make sure the guest memory is
>>> direct mapped (e.g guest physical address = host physical address).
>>>
>>> For more documentation on how to do it (see [1] and [2]).
>>>
>>> Cheers,
>>>
>>> [1]
>>> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
>>> [2] https://wiki.xen.org/images/1/17/Device_passthrough_xen.pdf
>>>
>>
>> Hi Julien,
>>
>> Thanks very much for the information. I'm looking at the Renesas R-Car H3
>> R8A7795, which has an IOMMU (using the Linux ipmmu-vmsa driver in
>> drivers/iommu/ipmmu-vmsa.c). Looking at the device tree for it
>> (r8a7795.dtsi), it appears you could pass through the display@feb00000 node
>> for the DRM driver.
>>
>> I did notice this patch series, which didn't get merged:
>>
>> https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg02679.html
>>
>> Presumably that driver would be needed in Xen.
>>
>> Are there any gotchas I'm missing? Is GPU passthrough on ARM something
>> that is "theoretically doable" or something that has been done already and
>> shown to be performant?
>>
>> Thanks again,
>> Martin
>
>
> --
> Julien Grall
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel
--
Regards,
Oleksandr Tyshchenko
* Re: GPU passthrough on ARM
2018-01-26 18:05 ` Oleksandr Tyshchenko
@ 2018-01-26 18:13 ` Oleksandr Tyshchenko
2018-01-27 0:41 ` Martin Kelly
From: Oleksandr Tyshchenko @ 2018-01-26 18:13 UTC (permalink / raw)
To: Martin Kelly; +Cc: xen-devel, Julien Grall, Oleksandr Tyshchenko
Hi, Martin
On Fri, Jan 26, 2018 at 8:05 PM, Oleksandr Tyshchenko
<olekstysh@gmail.com> wrote:
> On Fri, Jan 26, 2018 at 3:49 PM, Julien Grall <julien.grall@linaro.org> wrote:
>> Hi,
>>
>> I am CCing Oleksandr. He knows better than me this platform.
>
> Hi, Julien.
>
> OK, thank you, I will try to provide some pointers.
>
>>
>> Cheers,
>>
>> On 26/01/18 00:29, Martin Kelly wrote:
>>>
>>> On 01/25/2018 04:17 AM, Julien Grall wrote:
>>>>
>>>>
>>>>
>>>> On 24/01/18 22:10, Martin Kelly wrote:
>>>>>
>>>>> Hi,
>>>>
>>>>
>>>> Hello,
>>>>
>>>>> Does anyone know if GPU passthrough is supported on ARM? (e.g. for a GPU
>>>>> integrated into an ARM SoC). I checked documentation and the code, but I
>>>>> couldn't tell for sure.
>>>>>
>>>>> If so, what are the hardware requirements for it? If not, is it feasible
>>>>> to do in the future?
>>>>
>>>>
>>>> Xen Arm supports device integrated into an ARM SoC. In general we highly
>>>> recommend to have the GPU behind an IOMMU. So passthrough would be fully
>>>> secure.
>>>>
>>>> Does your platform has an IOMMU? If so which one? Do you know if the GPU
>>>> is behind it?
>>>>
>>>> It would be possible to do passthrough without IOMMU, but that's more
>>>> complex and would require some hack in Xen to make sure the guest memory is
>>>> direct mapped (e.g guest physical address = host physical address).
>>>>
>>>> For more documentation on how to do it (see [1] and [2]).
>>>>
>>>> Cheers,
>>>>
>>>> [1]
>>>> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
>>>> [2] https://wiki.xen.org/images/1/17/Device_passthrough_xen.pdf
>>>>
>>>
>>> Hi Julien,
>>>
>>> Thanks very much for the information. I'm looking at the Renesas R-Car H3
>>> R8A7795, which has an IOMMU (using the Linux ipmmu-vmsa driver in
>>> drivers/iommu/ipmmu-vmsa.c). Looking at the device tree for it
>>> (r8a7795.dtsi), it appears you could pass through the display@feb00000 node
>>> for the DRM driver.
>>>
>>> I did notice this patch series, which didn't get merged:
>>>
>>> https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg02679.html
>>>
>>> Presumably that driver would be needed in Xen.
>>>
>>> Are there any gotchas I'm missing? Is GPU passthrough on ARM something
>>> that is "theoretically doable" or something that has been done already and
>>> shown to be performant?
I assume the H3 SoC version you are using is ES2.0, because of r8a7795.dtsi.
BTW, what BSP version are you using? And I am wondering what your use-case is.
If you want to keep the GPU in some dedicated domain with no other hardware
at all, you have to use something like a PV DRM frontend running there
and a PV DRM backend in the hardware/driver domain.
Things become much simpler if you pass through all the
required display sub-components as well, so that the "rcar-du" DRM driver is
functional.
Which approach are you looking for?
Anyway, in both cases you have to pass through the GPU. For that, the
following steps are needed:
1. Xen side:
As for the patch series, you are right, you have to base on it. There
are two separate patch series which haven't been upstreamed yet,
but which are needed for the passthrough feature to work on R-Car Gen3 SoCs (M3, H3):
https://www.mail-archive.com/xen-devel@lists.xen.org/msg115901.html
https://www.mail-archive.com/xen-devel@lists.xen.org/msg116038.html
Also, an additional patch is needed to teach the IPMMU-VMSA driver to handle
devices which are hooked up to multiple IPMMU caches,
since the GPU on the H3 SoC is connected to multiple IPMMU caches: PV0 - PV3.
I have created a new branch you can simply base on to get the required
support in hand.
repo: https://github.com/otyshchenko1/xen.git branch: ipmmu_next
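For convenience, cloning just that branch should give you everything mentioned
above in one go (the target directory name below is arbitrary):
# Fetch only the ipmmu_next branch carrying the IPMMU/passthrough patches.
git clone -b ipmmu_next https://github.com/otyshchenko1/xen.git xen-ipmmu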
2. Device trees and guest config file:
2.1. You have to add the following to the domain 0 device tree:
There is no magic here. This just enables the corresponding IPMMUs,
hooks the GPU up to them, and notifies Xen
that the device is going to be passed through.
&gsx {
        xen,passthrough;
        iommus = <&ipmmu_pv0 0>, <&ipmmu_pv0 1>,
                 <&ipmmu_pv0 2>, <&ipmmu_pv0 3>,
                 <&ipmmu_pv0 4>, <&ipmmu_pv0 5>,
                 <&ipmmu_pv0 6>, <&ipmmu_pv0 7>,
                 <&ipmmu_pv1 0>, <&ipmmu_pv1 1>,
                 <&ipmmu_pv1 2>, <&ipmmu_pv1 3>,
                 <&ipmmu_pv1 4>, <&ipmmu_pv1 5>,
                 <&ipmmu_pv1 6>, <&ipmmu_pv1 7>,
                 <&ipmmu_pv2 0>, <&ipmmu_pv2 1>,
                 <&ipmmu_pv2 2>, <&ipmmu_pv2 3>,
                 <&ipmmu_pv2 4>, <&ipmmu_pv2 5>,
                 <&ipmmu_pv2 6>, <&ipmmu_pv2 7>,
                 <&ipmmu_pv3 0>, <&ipmmu_pv3 1>,
                 <&ipmmu_pv3 2>, <&ipmmu_pv3 3>,
                 <&ipmmu_pv3 4>, <&ipmmu_pv3 5>,
                 <&ipmmu_pv3 6>, <&ipmmu_pv3 7>;
};

&ipmmu_pv0 {
        status = "okay";
};

&ipmmu_pv1 {
        status = "okay";
};

&ipmmu_pv2 {
        status = "okay";
};

&ipmmu_pv3 {
        status = "okay";
};

&ipmmu_mm {
        status = "okay";
};
2.2. You have to add the following to the guest config file:
I might be mistaken here, so please use the existing documentation to see
how a guest config file should be properly modified, for example
https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
dtdev = [ "/soc/gsx@fd000000" ]
irqs = [ 151 ]
iomem = [ "fd000,40" ]
device_tree = "domU.dtb" <- This is the guest partial device tree.
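For reference, a minimal sketch of how those options could sit in a complete
guest config file. The domain name, memory/vcpus values, kernel path, command
line and disk entry below are illustrative assumptions only, not taken from a
real setup:

# domU.cfg - minimal guest with the GSX GPU passed through (illustrative sketch)
name = "domU"
kernel = "/boot/guest/Image"                    # assumed guest kernel image
extra = "console=hvc0 root=/dev/xvda"
memory = 512
vcpus = 2
disk = [ 'phy:/dev/vg0/domu-rootfs,xvda,w' ]    # assumed rootfs backing device
device_tree = "domU.dtb"                        # the guest partial device tree (see 2.3)
dtdev = [ "/soc/gsx@fd000000" ]
irqs = [ 151 ]
iomem = [ "fd000,40" ]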
2.3. The guest partial device tree I have actually used is:
/dts-v1/;

#include <dt-bindings/interrupt-controller/arm-gic.h>

/ {
        #address-cells = <2>;
        #size-cells = <2>;

        passthrough {
                compatible = "simple-bus";
                ranges;
                #address-cells = <2>;
                #size-cells = <2>;

                gsx: gsx@fd000000 {
                        compatible = "renesas,gsx";
                        reg = <0 0xfd000000 0 0x3ffff>;
                        interrupts = <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>;
                        /*clocks = <&cpg CPG_MOD 112>;*/
                        /*power-domains = <&sysc R8A7795_PD_3DG_E>;*/
                };
        };
};
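Since the partial tree above uses a C preprocessor #include, it has to be run
through cpp before dtc. A rough sketch of how to build it and start the guest,
assuming the include path comes from a Linux kernel tree (paths are
assumptions):

# Preprocess the partial DT (resolves the #include), then compile it with dtc.
cpp -nostdinc -I <path-to-linux>/include -undef -x assembler-with-cpp domU.dts \
    | dtc -I dts -O dtb -o domU.dtb -
# Create the guest using the config from 2.2.
xl create domU.cfg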
Hope this helps.
>>>
>>> Thanks again,
>>> Martin
>>
>>
>> --
>> Julien Grall
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xenproject.org
>> https://lists.xenproject.org/mailman/listinfo/xen-devel
>
>
>
> --
> Regards,
>
> Oleksandr Tyshchenko
--
Regards,
Oleksandr Tyshchenko
* Re: GPU passthrough on ARM
2018-01-26 18:13 ` Oleksandr Tyshchenko
@ 2018-01-27 0:41 ` Martin Kelly
2018-01-29 16:31 ` Oleksandr Tyshchenko
From: Martin Kelly @ 2018-01-27 0:41 UTC (permalink / raw)
To: Oleksandr Tyshchenko; +Cc: xen-devel, Julien Grall, Oleksandr Tyshchenko
On 01/26/2018 10:13 AM, Oleksandr Tyshchenko wrote:
> Hi, Martin
>
> On Fri, Jan 26, 2018 at 8:05 PM, Oleksandr Tyshchenko
> <olekstysh@gmail.com> wrote:
>> On Fri, Jan 26, 2018 at 3:49 PM, Julien Grall <julien.grall@linaro.org> wrote:
>>> Hi,
>>>
>>> I am CCing Oleksandr. He knows better than me this platform.
>>
>> Hi, Julien.
>>
>> OK, thank you, I will try to provide some pointers.
>>
>>>
>>> Cheers,
>>>
>>> On 26/01/18 00:29, Martin Kelly wrote:
>>>>
>>>> On 01/25/2018 04:17 AM, Julien Grall wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 24/01/18 22:10, Martin Kelly wrote:
>>>>>>
>>>>>> Hi,
>>>>>
>>>>>
>>>>> Hello,
>>>>>
>>>>>> Does anyone know if GPU passthrough is supported on ARM? (e.g. for a GPU
>>>>>> integrated into an ARM SoC). I checked documentation and the code, but I
>>>>>> couldn't tell for sure.
>>>>>>
>>>>>> If so, what are the hardware requirements for it? If not, is it feasible
>>>>>> to do in the future?
>>>>>
>>>>>
>>>>> Xen Arm supports device integrated into an ARM SoC. In general we highly
>>>>> recommend to have the GPU behind an IOMMU. So passthrough would be fully
>>>>> secure.
>>>>>
>>>>> Does your platform has an IOMMU? If so which one? Do you know if the GPU
>>>>> is behind it?
>>>>>
>>>>> It would be possible to do passthrough without IOMMU, but that's more
>>>>> complex and would require some hack in Xen to make sure the guest memory is
>>>>> direct mapped (e.g guest physical address = host physical address).
>>>>>
>>>>> For more documentation on how to do it (see [1] and [2]).
>>>>>
>>>>> Cheers,
>>>>>
>>>>> [1]
>>>>> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
>>>>> [2] https://wiki.xen.org/images/1/17/Device_passthrough_xen.pdf
>>>>>
>>>>
>>>> Hi Julien,
>>>>
>>>> Thanks very much for the information. I'm looking at the Renesas R-Car H3
>>>> R8A7795, which has an IOMMU (using the Linux ipmmu-vmsa driver in
>>>> drivers/iommu/ipmmu-vmsa.c). Looking at the device tree for it
>>>> (r8a7795.dtsi), it appears you could pass through the display@feb00000 node
>>>> for the DRM driver.
>>>>
>>>> I did notice this patch series, which didn't get merged:
>>>>
>>>> https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg02679.html
>>>>
>>>> Presumably that driver would be needed in Xen.
>>>>
>>>> Are there any gotchas I'm missing? Is GPU passthrough on ARM something
>>>> that is "theoretically doable" or something that has been done already and
>>>> shown to be performant?
>
> I assume the H3 SoC version you are using is ES2.0, because of r8a7795.dtsi.
>
> BTW, what BSP version are you using? I am wondering what is your use-case?
> If you want to keep GPU in some dedicated domain without no hardware
> at all, you have to use something like PV DRM frontend running here
> and PV DRM backend in the hardware/driver domain.
> The things are going to be much simple if you pass through all
> required display sub-components as well, for the "rcar-du" DRM to be
> functional.
> Which way are you looking for?
My BSP and kernel version are flexible, and I'd be happy to use whatever
works best. The use-case is using OpenCL inside a VM for
high-performance GPGPU. This means that performance is critical, and I
would go with whatever solution offers the best performance.
>
> Anyway, in both cases you have to pass through GPU. For that some
> activities should be done:
>
> 1. Xen side:
>
> As for the patch series, you are right, you have to base on it. There
> are two separate patch series which haven't upstreamed yet,
> but needed for the passthrough feature to work on R-Car Gen3 SoCs (M3, H3).
>
> https://www.mail-archive.com/xen-devel@lists.xen.org/msg115901.html
> https://www.mail-archive.com/xen-devel@lists.xen.org/msg116038.html
>
> Also additional patch is needed to teach IPMMU-VMSA driver to handle
> devices which are hooked up to multiple IPMMU caches.
> Since the GPU on H3 SoC is connected to multiple IPMMU caches: PV0 - PV3.
>
> I have created new branch you can simply base on to get required
> support in hand.
> repo: https://github.com/otyshchenko1/xen.git branch: ipmmu_next
>
> 2. Device trees and guest config file:
>
> 2.1. You have to add following to the domain 0 device tree:
>
> There is no magic here. This is just to enable corresponding IPMMUs,
> hooked up GPU to them and notify Xen
> that device is going to be pass throughed.
>
> &gsx {
> xen,passthrough;
>
> iommus = <&ipmmu_pv0 0>, <&ipmmu_pv0 1>,
> <&ipmmu_pv0 2>, <&ipmmu_pv0 3>,
> <&ipmmu_pv0 4>, <&ipmmu_pv0 5>,
> <&ipmmu_pv0 6>, <&ipmmu_pv0 7>,
> <&ipmmu_pv1 0>, <&ipmmu_pv1 1>,
> <&ipmmu_pv1 2>, <&ipmmu_pv1 3>,
> <&ipmmu_pv1 4>, <&ipmmu_pv1 5>,
> <&ipmmu_pv1 6>, <&ipmmu_pv1 7>,
> <&ipmmu_pv2 0>, <&ipmmu_pv2 1>,
> <&ipmmu_pv2 2>, <&ipmmu_pv2 3>,
> <&ipmmu_pv2 4>, <&ipmmu_pv2 5>,
> <&ipmmu_pv2 6>, <&ipmmu_pv2 7>,
> <&ipmmu_pv3 0>, <&ipmmu_pv3 1>,
> <&ipmmu_pv3 2>, <&ipmmu_pv3 3>,
> <&ipmmu_pv3 4>, <&ipmmu_pv3 5>,
> <&ipmmu_pv3 6>, <&ipmmu_pv3 7>;
> };
>
> &ipmmu_pv0 {
> status = "okay";
> };
>
> &ipmmu_pv1 {
> status = "okay";
> };
>
> &ipmmu_pv2 {
> status = "okay";
> };
>
> &ipmmu_pv3 {
> status = "okay";
> };
>
> &ipmmu_mm {
> status = "okay";
> };
>
> 2.2. You have to add following to the guest config file:
>
> I might mistake here, please use existing documentation to see how a
> guest config file
> should be properly modified. For example
> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
>
> dtdev = [ "/soc/gsx@fd000000" ]
>
> irqs = [ 151 ]
>
> iomem = [ "fd000,40" ]
>
> device_tree = "domU.dtb" <- This is the guest partial device tree.
>
> 2.3 Actually the guest partial device tree I have used is:
>
> /dts-v1/;
>
> #include <dt-bindings/interrupt-controller/arm-gic.h>
>
> / {
> #address-cells = <2>;
> #size-cells = <2>;
>
> passthrough {
> compatible = "simple-bus";
> ranges;
>
> #address-cells = <2>;
> #size-cells = <2>;
>
> gsx: gsx@fd000000 {
> compatible = "renesas,gsx";
> reg = <0 0xfd000000 0 0x3ffff>;
> interrupts = <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>;
> /*clocks = <&cpg CPG_MOD 112>;*/
> /*power-domains = <&sysc R8A7795_PD_3DG_E>;*/
> };
> };
> };
>
> Hope this helps.
>
Yes, this is certainly very helpful, thank you.
Just to verify, it sounds like this would be HVM mode with the
passthrough being transparent to the guest; is that correct?
Also, do you know if it would be possible to share the GPU across
multiple VMs, or would it be exclusive to a VM? I'm not sure whether the
IOMMU supports this, or whether the DRM driver supports it.
* Re: GPU passthrough on ARM
2018-01-27 0:41 ` Martin Kelly
@ 2018-01-29 16:31 ` Oleksandr Tyshchenko
2018-01-30 0:22 ` Martin Kelly
From: Oleksandr Tyshchenko @ 2018-01-29 16:31 UTC (permalink / raw)
To: Martin Kelly; +Cc: xen-devel, Julien Grall, Oleksandr Tyshchenko
Hi
On Sat, Jan 27, 2018 at 2:41 AM, Martin Kelly <mkelly@xevo.com> wrote:
> On 01/26/2018 10:13 AM, Oleksandr Tyshchenko wrote:
>>
>> Hi, Martin
>>
>> On Fri, Jan 26, 2018 at 8:05 PM, Oleksandr Tyshchenko
>> <olekstysh@gmail.com> wrote:
>>>
>>> On Fri, Jan 26, 2018 at 3:49 PM, Julien Grall <julien.grall@linaro.org>
>>> wrote:
>>>>
>>>> Hi,
>>>>
>>>> I am CCing Oleksandr. He knows better than me this platform.
>>>
>>>
>>> Hi, Julien.
>>>
>>> OK, thank you, I will try to provide some pointers.
>>>
>>>>
>>>> Cheers,
>>>>
>>>> On 26/01/18 00:29, Martin Kelly wrote:
>>>>>
>>>>>
>>>>> On 01/25/2018 04:17 AM, Julien Grall wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 24/01/18 22:10, Martin Kelly wrote:
>>>>>>>
>>>>>>>
>>>>>>> Hi,
>>>>>>
>>>>>>
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>>> Does anyone know if GPU passthrough is supported on ARM? (e.g. for a
>>>>>>> GPU
>>>>>>> integrated into an ARM SoC). I checked documentation and the code,
>>>>>>> but I
>>>>>>> couldn't tell for sure.
>>>>>>>
>>>>>>> If so, what are the hardware requirements for it? If not, is it
>>>>>>> feasible
>>>>>>> to do in the future?
>>>>>>
>>>>>>
>>>>>>
>>>>>> Xen Arm supports device integrated into an ARM SoC. In general we
>>>>>> highly
>>>>>> recommend to have the GPU behind an IOMMU. So passthrough would be
>>>>>> fully
>>>>>> secure.
>>>>>>
>>>>>> Does your platform has an IOMMU? If so which one? Do you know if the
>>>>>> GPU
>>>>>> is behind it?
>>>>>>
>>>>>> It would be possible to do passthrough without IOMMU, but that's more
>>>>>> complex and would require some hack in Xen to make sure the guest
>>>>>> memory is
>>>>>> direct mapped (e.g guest physical address = host physical address).
>>>>>>
>>>>>> For more documentation on how to do it (see [1] and [2]).
>>>>>>
>>>>>> Cheers,
>>>>>>
>>>>>> [1]
>>>>>>
>>>>>> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
>>>>>> [2] https://wiki.xen.org/images/1/17/Device_passthrough_xen.pdf
>>>>>>
>>>>>
>>>>> Hi Julien,
>>>>>
>>>>> Thanks very much for the information. I'm looking at the Renesas R-Car
>>>>> H3
>>>>> R8A7795, which has an IOMMU (using the Linux ipmmu-vmsa driver in
>>>>> drivers/iommu/ipmmu-vmsa.c). Looking at the device tree for it
>>>>> (r8a7795.dtsi), it appears you could pass through the display@feb00000
>>>>> node
>>>>> for the DRM driver.
>>>>>
>>>>> I did notice this patch series, which didn't get merged:
>>>>>
>>>>>
>>>>> https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg02679.html
>>>>>
>>>>> Presumably that driver would be needed in Xen.
>>>>>
>>>>> Are there any gotchas I'm missing? Is GPU passthrough on ARM something
>>>>> that is "theoretically doable" or something that has been done already
>>>>> and
>>>>> shown to be performant?
>>
>>
>> I assume the H3 SoC version you are using is ES2.0, because of
>> r8a7795.dtsi.
>>
>> BTW, what BSP version are you using? I am wondering what is your use-case?
>> If you want to keep GPU in some dedicated domain without no hardware
>> at all, you have to use something like PV DRM frontend running here
>> and PV DRM backend in the hardware/driver domain.
>> The things are going to be much simple if you pass through all
>> required display sub-components as well, for the "rcar-du" DRM to be
>> functional.
>> Which way are you looking for?
>
>
> My BSP and kernel version is flexible, and I'd be happy to use whatever
> works best. The use-case is using OpenCL inside a VM for high-performance
> GPGPU.
Sounds indeed interesting.
> This means that performance is critical, and I would go with whatever
> solution offers the best performance.
Oh, I see.
>
>
>>
>> Anyway, in both cases you have to pass through GPU. For that some
>> activities should be done:
>>
>> 1. Xen side:
>>
>> As for the patch series, you are right, you have to base on it. There
>> are two separate patch series which haven't upstreamed yet,
>> but needed for the passthrough feature to work on R-Car Gen3 SoCs (M3,
>> H3).
>>
>> https://www.mail-archive.com/xen-devel@lists.xen.org/msg115901.html
>> https://www.mail-archive.com/xen-devel@lists.xen.org/msg116038.html
>>
>> Also additional patch is needed to teach IPMMU-VMSA driver to handle
>> devices which are hooked up to multiple IPMMU caches.
>> Since the GPU on H3 SoC is connected to multiple IPMMU caches: PV0 - PV3.
>>
>> I have created new branch you can simply base on to get required
>> support in hand.
>> repo: https://github.com/otyshchenko1/xen.git branch: ipmmu_next
>>
>> 2. Device trees and guest config file:
>>
>> 2.1. You have to add following to the domain 0 device tree:
>>
>> There is no magic here. This is just to enable corresponding IPMMUs,
>> hooked up GPU to them and notify Xen
>> that device is going to be pass throughed.
>>
>> &gsx {
>> xen,passthrough;
>>
>> iommus = <&ipmmu_pv0 0>, <&ipmmu_pv0 1>,
>> <&ipmmu_pv0 2>, <&ipmmu_pv0 3>,
>> <&ipmmu_pv0 4>, <&ipmmu_pv0 5>,
>> <&ipmmu_pv0 6>, <&ipmmu_pv0 7>,
>> <&ipmmu_pv1 0>, <&ipmmu_pv1 1>,
>> <&ipmmu_pv1 2>, <&ipmmu_pv1 3>,
>> <&ipmmu_pv1 4>, <&ipmmu_pv1 5>,
>> <&ipmmu_pv1 6>, <&ipmmu_pv1 7>,
>> <&ipmmu_pv2 0>, <&ipmmu_pv2 1>,
>> <&ipmmu_pv2 2>, <&ipmmu_pv2 3>,
>> <&ipmmu_pv2 4>, <&ipmmu_pv2 5>,
>> <&ipmmu_pv2 6>, <&ipmmu_pv2 7>,
>> <&ipmmu_pv3 0>, <&ipmmu_pv3 1>,
>> <&ipmmu_pv3 2>, <&ipmmu_pv3 3>,
>> <&ipmmu_pv3 4>, <&ipmmu_pv3 5>,
>> <&ipmmu_pv3 6>, <&ipmmu_pv3 7>;
>> };
>>
>> &ipmmu_pv0 {
>> status = "okay";
>> };
>>
>> &ipmmu_pv1 {
>> status = "okay";
>> };
>>
>> &ipmmu_pv2 {
>> status = "okay";
>> };
>>
>> &ipmmu_pv3 {
>> status = "okay";
>> };
>>
>> &ipmmu_mm {
>> status = "okay";
>> };
>>
>> 2.2. You have to add following to the guest config file:
>>
>> I might mistake here, please use existing documentation to see how a
>> guest config file
>> should be properly modified. For example
>> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
>>
>> dtdev = [ "/soc/gsx@fd000000" ]
>>
>> irqs = [ 151 ]
>>
>> iomem = [ "fd000,40" ]
>>
>> device_tree = "domU.dtb" <- This is the guest partial device tree.
>>
>> 2.3 Actually the guest partial device tree I have used is:
>>
>> /dts-v1/;
>>
>> #include <dt-bindings/interrupt-controller/arm-gic.h>
>>
>> / {
>> #address-cells = <2>;
>> #size-cells = <2>;
>>
>> passthrough {
>> compatible = "simple-bus";
>> ranges;
>>
>> #address-cells = <2>;
>> #size-cells = <2>;
>>
>> gsx: gsx@fd000000 {
>> compatible = "renesas,gsx";
>> reg = <0 0xfd000000 0 0x3ffff>;
>> interrupts = <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>;
>> /*clocks = <&cpg CPG_MOD 112>;*/
>> /*power-domains = <&sysc R8A7795_PD_3DG_E>;*/
>> };
>> };
>> };
>>
>> Hope this helps.
>>
>
> Yes, this is certainly very helpful, thank you.
>
> Just to verify, it sounds like this would be HVM mode with the passthrough
> being transparent to the guest; is that correct?
I am not sure I understood your question.
>
> Also, do you know if it would be possible to share the GPU across multiple
> VMs, or would it be exclusive to a VM?
> I'm not sure whether the IOMMU
> supports this, or whether the DRM driver supports it.
Xen assigns all devices to domain 0 (the privileged domain) by default.
With the passthrough feature you can assign a device to another VM, and it
will then be exclusive to that VM only.
It is possible to share the GPU across multiple VMs, but that is a
different technique from passthrough.
We have a solution to do so, but it hasn't gone public yet; we are
going to redesign it.
--
Regards,
Oleksandr Tyshchenko
* Re: GPU passthrough on ARM
2018-01-29 16:31 ` Oleksandr Tyshchenko
@ 2018-01-30 0:22 ` Martin Kelly
2018-01-30 19:53 ` Oleksandr Tyshchenko
From: Martin Kelly @ 2018-01-30 0:22 UTC (permalink / raw)
To: Oleksandr Tyshchenko; +Cc: xen-devel, Julien Grall, Oleksandr Tyshchenko
On 01/29/2018 08:31 AM, Oleksandr Tyshchenko wrote:
> Hi
>
> On Sat, Jan 27, 2018 at 2:41 AM, Martin Kelly <mkelly@xevo.com> wrote:
>> On 01/26/2018 10:13 AM, Oleksandr Tyshchenko wrote:
>>>
>>> Hi, Martin
>>>
>>> On Fri, Jan 26, 2018 at 8:05 PM, Oleksandr Tyshchenko
>>> <olekstysh@gmail.com> wrote:
>>>>
>>>> On Fri, Jan 26, 2018 at 3:49 PM, Julien Grall <julien.grall@linaro.org>
>>>> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> I am CCing Oleksandr. He knows better than me this platform.
>>>>
>>>>
>>>> Hi, Julien.
>>>>
>>>> OK, thank you, I will try to provide some pointers.
>>>>
>>>>>
>>>>> Cheers,
>>>>>
>>>>> On 26/01/18 00:29, Martin Kelly wrote:
>>>>>>
>>>>>>
>>>>>> On 01/25/2018 04:17 AM, Julien Grall wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 24/01/18 22:10, Martin Kelly wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> Hi,
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>>> Does anyone know if GPU passthrough is supported on ARM? (e.g. for a
>>>>>>>> GPU
>>>>>>>> integrated into an ARM SoC). I checked documentation and the code,
>>>>>>>> but I
>>>>>>>> couldn't tell for sure.
>>>>>>>>
>>>>>>>> If so, what are the hardware requirements for it? If not, is it
>>>>>>>> feasible
>>>>>>>> to do in the future?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Xen Arm supports device integrated into an ARM SoC. In general we
>>>>>>> highly
>>>>>>> recommend to have the GPU behind an IOMMU. So passthrough would be
>>>>>>> fully
>>>>>>> secure.
>>>>>>>
>>>>>>> Does your platform has an IOMMU? If so which one? Do you know if the
>>>>>>> GPU
>>>>>>> is behind it?
>>>>>>>
>>>>>>> It would be possible to do passthrough without IOMMU, but that's more
>>>>>>> complex and would require some hack in Xen to make sure the guest
>>>>>>> memory is
>>>>>>> direct mapped (e.g guest physical address = host physical address).
>>>>>>>
>>>>>>> For more documentation on how to do it (see [1] and [2]).
>>>>>>>
>>>>>>> Cheers,
>>>>>>>
>>>>>>> [1]
>>>>>>>
>>>>>>> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
>>>>>>> [2] https://wiki.xen.org/images/1/17/Device_passthrough_xen.pdf
>>>>>>>
>>>>>>
>>>>>> Hi Julien,
>>>>>>
>>>>>> Thanks very much for the information. I'm looking at the Renesas R-Car
>>>>>> H3
>>>>>> R8A7795, which has an IOMMU (using the Linux ipmmu-vmsa driver in
>>>>>> drivers/iommu/ipmmu-vmsa.c). Looking at the device tree for it
>>>>>> (r8a7795.dtsi), it appears you could pass through the display@feb00000
>>>>>> node
>>>>>> for the DRM driver.
>>>>>>
>>>>>> I did notice this patch series, which didn't get merged:
>>>>>>
>>>>>>
>>>>>> https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg02679.html
>>>>>>
>>>>>> Presumably that driver would be needed in Xen.
>>>>>>
>>>>>> Are there any gotchas I'm missing? Is GPU passthrough on ARM something
>>>>>> that is "theoretically doable" or something that has been done already
>>>>>> and
>>>>>> shown to be performant?
>>>
>>>
>>> I assume the H3 SoC version you are using is ES2.0, because of
>>> r8a7795.dtsi.
>>>
>>> BTW, what BSP version are you using? I am wondering what is your use-case?
>>> If you want to keep GPU in some dedicated domain without no hardware
>>> at all, you have to use something like PV DRM frontend running here
>>> and PV DRM backend in the hardware/driver domain.
>>> The things are going to be much simple if you pass through all
>>> required display sub-components as well, for the "rcar-du" DRM to be
>>> functional.
>>> Which way are you looking for?
>>
>>
>> My BSP and kernel version is flexible, and I'd be happy to use whatever
>> works best. The use-case is using OpenCL inside a VM for high-performance
>> GPGPU.
> Sounds indeed interesting.
>
>> This means that performance is critical, and I would go with whatever
>> solution offers the best performance.
> Oh, I see.
>
>>
>>
>>>
>>> Anyway, in both cases you have to pass through GPU. For that some
>>> activities should be done:
>>>
>>> 1. Xen side:
>>>
>>> As for the patch series, you are right, you have to base on it. There
>>> are two separate patch series which haven't upstreamed yet,
>>> but needed for the passthrough feature to work on R-Car Gen3 SoCs (M3,
>>> H3).
>>>
>>> https://www.mail-archive.com/xen-devel@lists.xen.org/msg115901.html
>>> https://www.mail-archive.com/xen-devel@lists.xen.org/msg116038.html
>>>
>>> Also additional patch is needed to teach IPMMU-VMSA driver to handle
>>> devices which are hooked up to multiple IPMMU caches.
>>> Since the GPU on H3 SoC is connected to multiple IPMMU caches: PV0 - PV3.
>>>
>>> I have created new branch you can simply base on to get required
>>> support in hand.
>>> repo: https://github.com/otyshchenko1/xen.git branch: ipmmu_next
>>>
>>> 2. Device trees and guest config file:
>>>
>>> 2.1. You have to add following to the domain 0 device tree:
>>>
>>> There is no magic here. This is just to enable corresponding IPMMUs,
>>> hooked up GPU to them and notify Xen
>>> that device is going to be pass throughed.
>>>
>>> &gsx {
>>> xen,passthrough;
>>>
>>> iommus = <&ipmmu_pv0 0>, <&ipmmu_pv0 1>,
>>> <&ipmmu_pv0 2>, <&ipmmu_pv0 3>,
>>> <&ipmmu_pv0 4>, <&ipmmu_pv0 5>,
>>> <&ipmmu_pv0 6>, <&ipmmu_pv0 7>,
>>> <&ipmmu_pv1 0>, <&ipmmu_pv1 1>,
>>> <&ipmmu_pv1 2>, <&ipmmu_pv1 3>,
>>> <&ipmmu_pv1 4>, <&ipmmu_pv1 5>,
>>> <&ipmmu_pv1 6>, <&ipmmu_pv1 7>,
>>> <&ipmmu_pv2 0>, <&ipmmu_pv2 1>,
>>> <&ipmmu_pv2 2>, <&ipmmu_pv2 3>,
>>> <&ipmmu_pv2 4>, <&ipmmu_pv2 5>,
>>> <&ipmmu_pv2 6>, <&ipmmu_pv2 7>,
>>> <&ipmmu_pv3 0>, <&ipmmu_pv3 1>,
>>> <&ipmmu_pv3 2>, <&ipmmu_pv3 3>,
>>> <&ipmmu_pv3 4>, <&ipmmu_pv3 5>,
>>> <&ipmmu_pv3 6>, <&ipmmu_pv3 7>;
>>> };
>>>
>>> &ipmmu_pv0 {
>>> status = "okay";
>>> };
>>>
>>> &ipmmu_pv1 {
>>> status = "okay";
>>> };
>>>
>>> &ipmmu_pv2 {
>>> status = "okay";
>>> };
>>>
>>> &ipmmu_pv3 {
>>> status = "okay";
>>> };
>>>
>>> &ipmmu_mm {
>>> status = "okay";
>>> };
>>>
>>> 2.2. You have to add following to the guest config file:
>>>
>>> I might mistake here, please use existing documentation to see how a
>>> guest config file
>>> should be properly modified. For example
>>> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
>>>
>>> dtdev = [ "/soc/gsx@fd000000" ]
>>>
>>> irqs = [ 151 ]
>>>
>>> iomem = [ "fd000,40" ]
>>>
>>> device_tree = "domU.dtb" <- This is the guest partial device tree.
>>>
>>> 2.3 Actually the guest partial device tree I have used is:
>>>
>>> /dts-v1/;
>>>
>>> #include <dt-bindings/interrupt-controller/arm-gic.h>
>>>
>>> / {
>>> #address-cells = <2>;
>>> #size-cells = <2>;
>>>
>>> passthrough {
>>> compatible = "simple-bus";
>>> ranges;
>>>
>>> #address-cells = <2>;
>>> #size-cells = <2>;
>>>
>>> gsx: gsx@fd000000 {
>>> compatible = "renesas,gsx";
>>> reg = <0 0xfd000000 0 0x3ffff>;
>>> interrupts = <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>;
>>> /*clocks = <&cpg CPG_MOD 112>;*/
>>> /*power-domains = <&sysc R8A7795_PD_3DG_E>;*/
>>> };
>>> };
>>> };
>>>
>>> Hope this helps.
>>>
>>
>> Yes, this is certainly very helpful, thank you.
>>
>> Just to verify, it sounds like this would be HVM mode with the passthrough
>> being transparent to the guest; is that correct?
> I am not sure that understood your question.
>
I'm wondering what the GPU would look like to the guest. Would it be the same
interface as on a native machine, or would I need PV drivers?
>>
>> Also, do you know if it would be possible to share the GPU across multiple
>> VMs, or would it be exclusive to a VM?
>> I'm not sure whether the IOMMU
>> supports this, or whether the DRM driver supports it.
>
> Xen assigns all devices to domain 0 (privileged domain) by default.
> With passthrough feature you can assign device to another VM, and it
> will be exclusive to that VM only.
>
> It is possible to share the GPU across multiple VMs, but it is yet
> another technique than passthrough.
> We have a solution to do so, but it hasn't gone public yet. We are
> going to redesign it.
>
That sounds interesting; do you have any more details you could share?
* Re: GPU passthrough on ARM
2018-01-30 0:22 ` Martin Kelly
@ 2018-01-30 19:53 ` Oleksandr Tyshchenko
2018-01-31 0:37 ` Martin Kelly
From: Oleksandr Tyshchenko @ 2018-01-30 19:53 UTC (permalink / raw)
To: Martin Kelly; +Cc: xen-devel, Julien Grall, Oleksandr Tyshchenko
Hi
On Tue, Jan 30, 2018 at 2:22 AM, Martin Kelly <mkelly@xevo.com> wrote:
> On 01/29/2018 08:31 AM, Oleksandr Tyshchenko wrote:
>>
>> Hi
>>
>> On Sat, Jan 27, 2018 at 2:41 AM, Martin Kelly <mkelly@xevo.com> wrote:
>>>
>>> On 01/26/2018 10:13 AM, Oleksandr Tyshchenko wrote:
>>>>
>>>>
>>>> Hi, Martin
>>>>
>>>> On Fri, Jan 26, 2018 at 8:05 PM, Oleksandr Tyshchenko
>>>> <olekstysh@gmail.com> wrote:
>>>>>
>>>>>
>>>>> On Fri, Jan 26, 2018 at 3:49 PM, Julien Grall <julien.grall@linaro.org>
>>>>> wrote:
>>>>>>
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I am CCing Oleksandr. He knows better than me this platform.
>>>>>
>>>>>
>>>>>
>>>>> Hi, Julien.
>>>>>
>>>>> OK, thank you, I will try to provide some pointers.
>>>>>
>>>>>>
>>>>>> Cheers,
>>>>>>
>>>>>> On 26/01/18 00:29, Martin Kelly wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 01/25/2018 04:17 AM, Julien Grall wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 24/01/18 22:10, Martin Kelly wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Hello,
>>>>>>>>
>>>>>>>>> Does anyone know if GPU passthrough is supported on ARM? (e.g. for
>>>>>>>>> a
>>>>>>>>> GPU
>>>>>>>>> integrated into an ARM SoC). I checked documentation and the code,
>>>>>>>>> but I
>>>>>>>>> couldn't tell for sure.
>>>>>>>>>
>>>>>>>>> If so, what are the hardware requirements for it? If not, is it
>>>>>>>>> feasible
>>>>>>>>> to do in the future?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Xen Arm supports device integrated into an ARM SoC. In general we
>>>>>>>> highly
>>>>>>>> recommend to have the GPU behind an IOMMU. So passthrough would be
>>>>>>>> fully
>>>>>>>> secure.
>>>>>>>>
>>>>>>>> Does your platform has an IOMMU? If so which one? Do you know if the
>>>>>>>> GPU
>>>>>>>> is behind it?
>>>>>>>>
>>>>>>>> It would be possible to do passthrough without IOMMU, but that's
>>>>>>>> more
>>>>>>>> complex and would require some hack in Xen to make sure the guest
>>>>>>>> memory is
>>>>>>>> direct mapped (e.g guest physical address = host physical address).
>>>>>>>>
>>>>>>>> For more documentation on how to do it (see [1] and [2]).
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>>
>>>>>>>> [1]
>>>>>>>>
>>>>>>>>
>>>>>>>> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
>>>>>>>> [2] https://wiki.xen.org/images/1/17/Device_passthrough_xen.pdf
>>>>>>>>
>>>>>>>
>>>>>>> Hi Julien,
>>>>>>>
>>>>>>> Thanks very much for the information. I'm looking at the Renesas
>>>>>>> R-Car
>>>>>>> H3
>>>>>>> R8A7795, which has an IOMMU (using the Linux ipmmu-vmsa driver in
>>>>>>> drivers/iommu/ipmmu-vmsa.c). Looking at the device tree for it
>>>>>>> (r8a7795.dtsi), it appears you could pass through the
>>>>>>> display@feb00000
>>>>>>> node
>>>>>>> for the DRM driver.
>>>>>>>
>>>>>>> I did notice this patch series, which didn't get merged:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg02679.html
>>>>>>>
>>>>>>> Presumably that driver would be needed in Xen.
>>>>>>>
>>>>>>> Are there any gotchas I'm missing? Is GPU passthrough on ARM
>>>>>>> something
>>>>>>> that is "theoretically doable" or something that has been done
>>>>>>> already
>>>>>>> and
>>>>>>> shown to be performant?
>>>>
>>>>
>>>>
>>>> I assume the H3 SoC version you are using is ES2.0, because of
>>>> r8a7795.dtsi.
>>>>
>>>> BTW, what BSP version are you using? I am wondering what is your
>>>> use-case?
>>>> If you want to keep GPU in some dedicated domain without no hardware
>>>> at all, you have to use something like PV DRM frontend running here
>>>> and PV DRM backend in the hardware/driver domain.
>>>> The things are going to be much simple if you pass through all
>>>> required display sub-components as well, for the "rcar-du" DRM to be
>>>> functional.
>>>> Which way are you looking for?
>>>
>>>
>>>
>>> My BSP and kernel version is flexible, and I'd be happy to use whatever
>>> works best. The use-case is using OpenCL inside a VM for high-performance
>>> GPGPU.
>>
>> Sounds indeed interesting.
>>
>>> This means that performance is critical, and I would go with whatever
>>> solution offers the best performance.
>>
>> Oh, I see.
>>
>>>
>>>
>>>>
>>>> Anyway, in both cases you have to pass through GPU. For that some
>>>> activities should be done:
>>>>
>>>> 1. Xen side:
>>>>
>>>> As for the patch series, you are right, you have to base on it. There
>>>> are two separate patch series which haven't upstreamed yet,
>>>> but needed for the passthrough feature to work on R-Car Gen3 SoCs (M3,
>>>> H3).
>>>>
>>>> https://www.mail-archive.com/xen-devel@lists.xen.org/msg115901.html
>>>> https://www.mail-archive.com/xen-devel@lists.xen.org/msg116038.html
>>>>
>>>> Also additional patch is needed to teach IPMMU-VMSA driver to handle
>>>> devices which are hooked up to multiple IPMMU caches.
>>>> Since the GPU on H3 SoC is connected to multiple IPMMU caches: PV0 -
>>>> PV3.
>>>>
>>>> I have created new branch you can simply base on to get required
>>>> support in hand.
>>>> repo: https://github.com/otyshchenko1/xen.git branch: ipmmu_next
>>>>
>>>> 2. Device trees and guest config file:
>>>>
>>>> 2.1. You have to add following to the domain 0 device tree:
>>>>
>>>> There is no magic here. This is just to enable corresponding IPMMUs,
>>>> hooked up GPU to them and notify Xen
>>>> that device is going to be pass throughed.
>>>>
>>>> &gsx {
>>>> xen,passthrough;
>>>>
>>>> iommus = <&ipmmu_pv0 0>, <&ipmmu_pv0 1>,
>>>> <&ipmmu_pv0 2>, <&ipmmu_pv0 3>,
>>>> <&ipmmu_pv0 4>, <&ipmmu_pv0 5>,
>>>> <&ipmmu_pv0 6>, <&ipmmu_pv0 7>,
>>>> <&ipmmu_pv1 0>, <&ipmmu_pv1 1>,
>>>> <&ipmmu_pv1 2>, <&ipmmu_pv1 3>,
>>>> <&ipmmu_pv1 4>, <&ipmmu_pv1 5>,
>>>> <&ipmmu_pv1 6>, <&ipmmu_pv1 7>,
>>>> <&ipmmu_pv2 0>, <&ipmmu_pv2 1>,
>>>> <&ipmmu_pv2 2>, <&ipmmu_pv2 3>,
>>>> <&ipmmu_pv2 4>, <&ipmmu_pv2 5>,
>>>> <&ipmmu_pv2 6>, <&ipmmu_pv2 7>,
>>>> <&ipmmu_pv3 0>, <&ipmmu_pv3 1>,
>>>> <&ipmmu_pv3 2>, <&ipmmu_pv3 3>,
>>>> <&ipmmu_pv3 4>, <&ipmmu_pv3 5>,
>>>> <&ipmmu_pv3 6>, <&ipmmu_pv3 7>;
>>>> };
>>>>
>>>> &ipmmu_pv0 {
>>>> status = "okay";
>>>> };
>>>>
>>>> &ipmmu_pv1 {
>>>> status = "okay";
>>>> };
>>>>
>>>> &ipmmu_pv2 {
>>>> status = "okay";
>>>> };
>>>>
>>>> &ipmmu_pv3 {
>>>> status = "okay";
>>>> };
>>>>
>>>> &ipmmu_mm {
>>>> status = "okay";
>>>> };
>>>>
>>>> 2.2. You have to add following to the guest config file:
>>>>
>>>> I might mistake here, please use existing documentation to see how a
>>>> guest config file
>>>> should be properly modified. For example
>>>>
>>>> https://events.static.linuxfound.org/sites/events/files/slides/talk_5.pdf
>>>>
>>>> dtdev = [ "/soc/gsx@fd000000" ]
>>>>
>>>> irqs = [ 151 ]
>>>>
>>>> iomem = [ "fd000,40" ]
>>>>
>>>> device_tree = "domU.dtb" <- This is the guest partial device tree.
>>>>
>>>> 2.3 Actually the guest partial device tree I have used is:
>>>>
>>>> /dts-v1/;
>>>>
>>>> #include <dt-bindings/interrupt-controller/arm-gic.h>
>>>>
>>>> / {
>>>> #address-cells = <2>;
>>>> #size-cells = <2>;
>>>>
>>>> passthrough {
>>>> compatible = "simple-bus";
>>>> ranges;
>>>>
>>>> #address-cells = <2>;
>>>> #size-cells = <2>;
>>>>
>>>> gsx: gsx@fd000000 {
>>>> compatible = "renesas,gsx";
>>>> reg = <0 0xfd000000 0 0x3ffff>;
>>>> interrupts = <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>;
>>>> /*clocks = <&cpg CPG_MOD 112>;*/
>>>> /*power-domains = <&sysc R8A7795_PD_3DG_E>;*/
>>>> };
>>>> };
>>>> };
>>>>
>>>> Hope this helps.
>>>>
>>>
>>> Yes, this is certainly very helpful, thank you.
>>>
>>> Just to verify, it sounds like this would be HVM mode with the
>>> passthrough
>>> being transparent to the guest; is that correct?
>>
>> I am not sure that understood your question.
>>
>
> I'm wondering what the GPU would like to the guest. Would it be the same
> interface as on a native machine, or would I need PV drivers?
It depends on what the guest domain contains.
If the DomU has both the Display Unit and the GPU assigned, then a PV driver
is not needed; in that case you have the same interface as on a native machine.
But if the DomU has only the GPU assigned, then something like a PV display
frontend is needed there,
plus a PV display backend running in the guest domain which owns the Display Unit.
A possible hint: if the Display Unit has independent CRTCs,
connectors, etc., it might be possible to split it
into separate units and assign them to different guest domains (of
course, the hardware must be splittable).
I think this would avoid the need for PV drivers.
>
>>>
>>> Also, do you know if it would be possible to share the GPU across
>>> multiple
>>> VMs, or would it be exclusive to a VM?
>>> I'm not sure whether the IOMMU
>>> supports this, or whether the DRM driver supports it.
>>
>>
>> Xen assigns all devices to domain 0 (privileged domain) by default.
>> With passthrough feature you can assign device to another VM, and it
>> will be exclusive to that VM only.
>>
>> It is possible to share the GPU across multiple VMs, but it is yet
>> another technique than passthrough.
>> We have a solution to do so, but it hasn't gone public yet. We are
>> going to redesign it.
>>
>
> That sounds interesting; do you have any more details you could share?
I've sent you a PM.
--
Regards,
Oleksandr Tyshchenko
* Re: GPU passthrough on ARM
2018-01-30 19:53 ` Oleksandr Tyshchenko
@ 2018-01-31 0:37 ` Martin Kelly
From: Martin Kelly @ 2018-01-31 0:37 UTC (permalink / raw)
To: Oleksandr Tyshchenko; +Cc: xen-devel, Julien Grall, Oleksandr Tyshchenko
On 01/30/2018 11:53 AM, Oleksandr Tyshchenko wrote:
>>
>> I'm wondering what the GPU would like to the guest. Would it be the same
>> interface as on a native machine, or would I need PV drivers?
>
> It depends on what guest domain contains.
>
> If DomU has both Display Unit and GPU assigned then PV driver is not needed.
> So, here we have the same interface as on a native machine.
>
> But if DomU has only GPU assigned then something like PV display
> frontend is needed here,
> plus PV display backend running in a guest domain which owns Display Unit.
>
> The possible hint is that if Display Unit has independent crtcs,
> connectors, etc it might be possible to split it
> into the separate units and assign them to different guest domains (of
> course, the HW must be splittable).
> I think, this would avoid PV driver usage.
>
That makes sense. Thanks again for your help understanding this better.
end of thread, other threads:[~2018-01-31 0:37 UTC | newest]
Thread overview: 11 messages
2018-01-24 22:10 GPU passthrough on ARM Martin Kelly
2018-01-25 12:17 ` Julien Grall
2018-01-26 0:29 ` Martin Kelly
2018-01-26 13:49 ` Julien Grall
2018-01-26 18:05 ` Oleksandr Tyshchenko
2018-01-26 18:13 ` Oleksandr Tyshchenko
2018-01-27 0:41 ` Martin Kelly
2018-01-29 16:31 ` Oleksandr Tyshchenko
2018-01-30 0:22 ` Martin Kelly
2018-01-30 19:53 ` Oleksandr Tyshchenko
2018-01-31 0:37 ` Martin Kelly