From: Artem Mygaiev <artem_mygaiev@epam.com>
To: Julien Grall <julien.grall@arm.com>,
Stefano Stabellini <sstabellini@kernel.org>,
"lars.kurth@citrix.com" <lars.kurth@citrix.com>,
Andrii Anisov <andrii_anisov@epam.com>,
Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
Paul Durrant <paul.durrant@citrix.com>,
Rich Persaud <persaur@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Notes for xen summit 2018 design session] Graphic virtualization
Date: Fri, 3 Aug 2018 13:32:57 +0300
Message-ID: <aeeb938e-6d1c-1e08-7bfd-9cbbfb69e78a@epam.com>
In-Reply-To: <2c43233d-f140-834c-3620-a36ded9269b4@arm.com>
Hi Julien
On 03.08.18 12:37, Julien Grall wrote:
> On 08/02/2018 04:26 PM, Artem Mygaiev wrote:
>> Hello Julien
>
> Hi Artem,
>
> Thank you for the feedback!
>> On 02.08.18 12:56, Julien Grall wrote:
>>> Hi,
>>>
>>> Sorry for the late posting. The notes were taken by Stefano
>>> Stabellini. Thank you.
>>>
>>> This has some clarifications requested from EPAM regarding PowerVR.
>>>
>>> The existing graphics solutions on Xen today are:
>>> - PV DRM:
>>> * Supports multiple displays per VM
>>> * Based on Grant-tables.
>>> * An improvement over Xen FB, which is based on foreign mappings
>>>
>> The frontend driver will be part of the Linux kernel starting with 4.18:
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/xen?h=v4.18-rc7
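
As an aside, since the protocol is grant-table based, here is a minimal
sketch of the sharing primitive such a frontend relies on (standard Linux
grant-table API, not code from the xen_drm_front driver; error handling
trimmed):

/*
 * Sketch: a PV frontend allocating a page and granting the backend
 * access to it. The grant reference is what gets handed to the
 * backend (typically via XenStore) so it can map the page.
 */
#include <linux/gfp.h>
#include <xen/grant_table.h>
#include <xen/page.h>

static int share_page_with_backend(domid_t backend_domid,
                                   struct page **out_page)
{
    struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
    int ref;

    if (!page)
        return -ENOMEM;

    /* Grant the backend read/write access to this frame. */
    ref = gnttab_grant_foreign_access(backend_domid,
                                      xen_page_to_gfn(page),
                                      0 /* read-write */);
    if (ref < 0) {
        __free_page(page);
        return ref;
    }

    *out_page = page;
    return ref; /* the grant reference to pass to the backend */
}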
>>
> That's good news. Do you know the state of the backend?
>
The PV display backend is implemented as a userspace service and is
available on our GitHub, along with the PV sound backend and a library
for writing userspace backends (everything is GPLv2):
https://github.com/xen-troops/displ_be
https://github.com/xen-troops/snd_be
https://github.com/xen-troops/libxenbe
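
For anyone curious how such userspace backends hook into Xen: the common
pattern is to watch the frontend's XenStore nodes and react to state
changes. A minimal sketch using the plain libxenstore API (this is not
code from displ_be, and the path below is illustrative):

/*
 * Sketch: a userspace backend watching a frontend's "state" node.
 * Build with -lxenstore.
 */
#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

int main(void)
{
    /* Illustrative path; real backends build it from domid/devid. */
    const char *fe_state = "/local/domain/1/device/vdispl/0/state";
    struct xs_handle *xs = xs_open(0);

    if (!xs || !xs_watch(xs, fe_state, "fe-state"))
        return 1;

    for (;;) {
        unsigned int num, len;
        char **ev = xs_read_watch(xs, &num); /* blocks until an event */
        char *val = xs_read(xs, XBT_NULL, ev[XS_WATCH_PATH], &len);

        if (val) {
            printf("frontend state is now %s\n", val); /* XenbusState */
            free(val);
        }
        free(ev);
    }

    xs_close(xs); /* not reached in this sketch */
    return 0;
}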
>>
>>
>>> - Intel GVT: https://01.org/igvt-g
>>> * Based on IOREQ server infrastructure
>>> * Performance is 70% of directly assigned hardware
>>>
>>> - NVIDIA:
>>> * Much more virtualizable
>>> * Provide mappable chunk of PCI BARs.
>>> * Userspace component emulates PCI config space
>>>
>>> Current effort for graphic virtualization on Arm:
>>> - Samsung: They have a PV OpenGL solution. This seems to be fast.
>>
>> This is interesting. Do you know if there is any open benchmark data?
>
> Stefano introduced you to the Samsung speaker. Hopefully we will get
> more details on the benchmark.
If I get some more details, I'll share :)
> Unfortunately, PV OpenGL is not available upstream at the moment. It was
> not clear whether the backend and frontend would ever be upstreamed and
> when.
>
> However, the work looks quite similar to virgil
> (https://virgil3d.github.io/). It is a graphics virtualization solution
> that uses virtio as the transport. I think it would be possible to
> re-use it by just replacing the transport layer.
>
> Another solution is to implement virtio on Xen (see the discussion on
> the last community call).
Do we plan a follow-up discussion on this? I missed the call due to
travel over the last couple of weeks, so I may not have the full picture...
>>> - EPAM:
>>> * PV OpenGL was dismissed because of performance concern
>>> * PV DRM for sharing display
>>> * PowerVR native virtualization (see below)
>>>
>>> PowerVR virtualization:
>>>
>>> Recent PowerVR hardware provides some virtualization support. The
>>> solution is implemented in the firmware. A kernel module is used to talk
>>> to the firmware via shared memory. The toolstack only has to set up a
>>> memory context for each VM.
>>>
>>> ** Recent PowerVR HW has some virtualization support
>>> ** Kernel module
>>>
>>> It was not clear whether an extra pair of frontend/backend was
>>> required along with the PowerVR driver.
>>>
>>> @Action: EPAM, could you clarify it?
>>>
>>
>> No, there are no extra FE/BE drivers for GPU sharing in the case of PowerVR.
>>
>>> Potential solution for upstream:
>>> - PV OpenGL
>>> - vGPU solution outside of the hypervisor (see below)
>>>
>>> vGPU solution outside of the hypervisor:
>>>
>>> A unikernel (or Dom0) based environment could be provided to run
>>> proprietary software.
>>
>> One more option we were discussing is "de-privileged" or "native"
>> applications in Xen:
>> https://lists.xenproject.org/archives/html/xen-devel/2017-04/msg01002.html
>>
>> We are looking into unikernels, too.
>>
>>> The proprietary software would use the IOREQ server infrastructure to
>>> emulate the guest memory regions used by the GPU and make the
>>> scheduling decisions.
>>>
>>
>> We also had an RFC for co-processor (including GPU) management some
>> time ago:
>> https://lists.xenproject.org/archives/html/xen-devel/2016-10/msg01966.html
>>
> If I remember the series correctly, the code may need to trap guest
> accesses to the GPU and manage the GPU itself. There is a fair chance
> that GPU vendors will not want to have that under GPL. So this would
> have to live outside of Xen.
Yes, and that's why we are looking into de-privileged applications and
unikernels: GPU code will have intimate knowledge of GPU internals,
which is the vendor's protected IP.
> This is where the IOREQ infrastructure comes into play. It allows
> forwarding MMIO accesses to an external entity. This entity could be
> proprietary.
I need to look into this... I am not sure we have used it before.
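
For reference, from a quick look at libxendevicemodel, registering such
an external emulator would look roughly like this (a hedged sketch; the
domain id and MMIO range below are placeholders, not real GPU addresses):

/*
 * Sketch: claiming a guest MMIO range with an IOREQ server so that
 * guest accesses get forwarded to this (possibly proprietary) process.
 * Build with -lxendevicemodel.
 */
#include <stdio.h>
#include <stdint.h>
#include <xendevicemodel.h>

int main(void)
{
    domid_t domid = 1;                  /* placeholder guest domain */
    uint64_t mmio_base = 0xfe000000ULL; /* placeholder GPU MMIO base */
    uint64_t mmio_size = 0x10000ULL;
    ioservid_t id;

    xendevicemodel_handle *dmod = xendevicemodel_open(NULL, 0);
    if (!dmod)
        return 1;

    /* Create the IOREQ server; 0 == no buffered ioreq ring. */
    if (xendevicemodel_create_ioreq_server(dmod, domid, 0, &id))
        return 1;

    /* Ask Xen to forward guest accesses in this range to us. */
    if (xendevicemodel_map_io_range_to_ioreq_server(dmod, domid, id,
            1 /* is_mmio */, mmio_base, mmio_base + mmio_size - 1))
        return 1;

    if (xendevicemodel_set_ioreq_server_state(dmod, domid, id, 1))
        return 1;

    printf("ioreq server %u registered\n", (unsigned)id);
    /* A real emulator would now map the ioreq pages and serve requests. */

    xendevicemodel_close(dmod);
    return 0;
}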
--
BR, Artem
Thread overview:
2018-08-02 9:56 [Notes for xen summit 2018 design session] Graphic virtualization Julien Grall
2018-08-02 15:26 ` Artem Mygaiev
2018-08-02 15:29 ` Lars Kurth
2018-08-02 15:54 ` Artem Mygaiev
2018-08-02 15:57 ` Julien Grall
2018-08-02 16:12 ` Artem Mygaiev
2018-08-02 17:53 ` Stefano Stabellini
2018-08-03 9:37 ` Julien Grall
2018-08-03 10:32 ` Artem Mygaiev [this message]
2018-08-03 17:46 ` Stefano Stabellini