* Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
From: Daniel Vetter @ 2017-12-20 9:59 UTC
To: Dongwon Kim
Cc: Intel Graphics Development, linux-kernel@vger.kernel.org,
dri-devel@lists.freedesktop.org, Potrola, MateuszX,
xen-devel@lists.xenproject.org, intel-gvt-dev
On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
> I forgot to include this brief information about this patch series.
>
> This patch series contains the implementation of a new device driver,
> hyper_dmabuf, which provides a method for DMA-BUF sharing across
> different OSes running on the same virtualized platform powered by
> a hypervisor.
>
> Detailed information about this driver is described in a high-level doc
> added by the second patch of the series.
>
> [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
>
> I am attaching 'Overview' section here as a summary.
>
> ------------------------------------------------------------------------------
> Section 1. Overview
> ------------------------------------------------------------------------------
>
> The Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
> where multiple different OS instances need to share the same physical data
> without data copies across VMs.
>
> To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
> exporting VM (the so-called “exporter”) imports a local DMA_BUF from the
> original producer of the buffer, then re-exports it to the importing VM (the
> so-called “importer”) with a unique ID, hyper_dmabuf_id, for the buffer.
>
> Another instance of the Hyper_DMABUF driver on the importer registers
> the hyper_dmabuf_id together with reference information for the shared
> physical pages associated with the DMA_BUF in its database when the export
> happens.
>
> The actual mapping of the DMA_BUF on the importer’s side is done by
> the Hyper_DMABUF driver when user space issues the IOCTL command to access
> the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
> an exporting driver as is; that is, no special configuration is required.
> Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
> exchange.
So I know that most dma-buf implementations (especially lots of importers
in drivers/gpu) break this, but fundamentally only the original exporter
is allowed to know about the underlying pages. There are various scenarios
where a dma-buf isn't backed by anything like a struct page.
So your first step of noodling the underlying struct page out from the
dma-buf is kinda breaking the abstraction, and I think it's not a good
idea to have that. Especially not for sharing across VMs.
I think a better design would be if hyper-dmabuf would be the dma-buf
exporter in both of the VMs, and you'd import it everywhere you want to in
some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
in control of the pages, and a lot of the troubling forwarding you
currently need to do disappears.
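
Roughly what I have in mind (a hand-wavy sketch only; every hyper_* name
below is made up, the point is just who ends up owning the backing pages):

#include <linux/dma-buf.h>
#include <linux/fs.h>

/* hyper-dmabuf allocates/maps the hypervisor-shared pages itself and is
 * the dma-buf exporter in every VM, so the raw pages never leak out to
 * the importing drivers.
 */
struct hyper_buf {
	size_t size;
	/* + page references handed out by the hypervisor */
};

static struct sg_table *hyper_dmabuf_map(struct dma_buf_attachment *at,
					 enum dma_data_direction dir);
static void hyper_dmabuf_unmap(struct dma_buf_attachment *at,
			       struct sg_table *sgt,
			       enum dma_data_direction dir);
static void hyper_dmabuf_release(struct dma_buf *buf);

static const struct dma_buf_ops hyper_dmabuf_ops = {
	.map_dma_buf	= hyper_dmabuf_map,	/* sg_table built from the */
	.unmap_dma_buf	= hyper_dmabuf_unmap,	/* hypervisor-shared pages */
	.release	= hyper_dmabuf_release,
	/* mmap/kmap hooks elided */
};

static struct dma_buf *hyper_dmabuf_export_buf(struct hyper_buf *hbuf)
{
	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);

	exp_info.ops   = &hyper_dmabuf_ops;
	exp_info.size  = hbuf->size;
	exp_info.flags = O_RDWR;
	exp_info.priv  = hbuf;

	return dma_buf_export(&exp_info);
}

The gpu/video drivers in each VM then only ever see a perfectly normal
dma-buf they can attach to, never the pages behind it.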
2nd thing: This seems very much related to what's happening around gvt and
allowing at least the host (in a kvm based VM environment) to be able to
access some of the dma-buf (or well, framebuffers in general) that the
client is using. Adding some mailing lists for that.
-Daniel
>
> ------------------------------------------------------------------------------
>
> There is a git repository at github.com where this series of patches is
> integrated into a Linux kernel tree based on the following commit:
>
> commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
> Author: Linus Torvalds <torvalds@linux-foundation.org>
> Date: Sun Dec 3 11:01:47 2017 -0500
>
> Linux 4.15-rc2
>
> https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
* Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
From: Matt Roper @ 2017-12-26 18:19 UTC
To: Dongwon Kim, linux-kernel@vger.kernel.org,
xen-devel@lists.xenproject.org, Potrola, MateuszX,
dri-devel@lists.freedesktop.org, Intel Graphics Development,
intel-gvt-dev
On Wed, Dec 20, 2017 at 10:59:57AM +0100, Daniel Vetter wrote:
> On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
> > I forgot to include this brief information about this patch series.
> >
> > This patch series contains the implementation of a new device driver,
> > hyper_dmabuf, which provides a method for DMA-BUF sharing across
> > different OSes running on the same virtualized platform powered by
> > a hypervisor.
> >
> > Detailed information about this driver is described in a high-level doc
> > added by the second patch of the series.
> >
> > [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
> >
> > I am attaching 'Overview' section here as a summary.
> >
> > ------------------------------------------------------------------------------
> > Section 1. Overview
> > ------------------------------------------------------------------------------
> >
> > The Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> > Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
> > where multiple different OS instances need to share the same physical data
> > without data copies across VMs.
> >
> > To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
> > exporting VM (the so-called “exporter”) imports a local DMA_BUF from the
> > original producer of the buffer, then re-exports it to the importing VM (the
> > so-called “importer”) with a unique ID, hyper_dmabuf_id, for the buffer.
> >
> > Another instance of the Hyper_DMABUF driver on the importer registers
> > the hyper_dmabuf_id together with reference information for the shared
> > physical pages associated with the DMA_BUF in its database when the export
> > happens.
> >
> > The actual mapping of the DMA_BUF on the importer’s side is done by
> > the Hyper_DMABUF driver when user space issues the IOCTL command to access
> > the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
> > an exporting driver as is; that is, no special configuration is required.
> > Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
> > exchange.
>
> So I know that most dma-buf implementations (especially lots of importers
> in drivers/gpu) break this, but fundamentally only the original exporter
> is allowed to know about the underlying pages. There are various scenarios
> where a dma-buf isn't backed by anything like a struct page.
>
> So your first step of noodling the underlying struct page out from the
> dma-buf is kinda breaking the abstraction, and I think it's not a good
> idea to have that. Especially not for sharing across VMs.
>
> I think a better design would be if hyper-dmabuf would be the dma-buf
> exporter in both of the VMs, and you'd import it everywhere you want to in
> some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
> in control of the pages, and a lot of the troubling forwarding you
> currently need to do disappears.
I think one of the main driving use cases here is for a "local" graphics
compositor inside the VM to accept client buffers from unmodified
applications and then pass those buffers along to a "global" compositor
running in the service domain. This would allow the global compositor
to composite applications running in different virtual machines (and
possibly running under different operating systems).
If we require that hyper-dmabuf always be the exporter, that complicates
things a little bit, since a buffer allocated via regular interfaces (GEM
ioctls or whatever) wouldn't be directly transferable to the global
compositor. For graphics use cases like this, we could probably hide a
lot of the details by modifying/replacing the EGL implementation that
handles the details of buffer allocation. However, if we have
applications that are themselves just passing along externally-allocated
buffers (e.g., images from a camera device), we'd probably need to
modify those applications and/or the drivers they get their content
from.
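
Just to illustrate what I mean by hiding it in the allocation layer,
something like this (a completely hypothetical sketch; the device node,
ioctl number, and request struct below are all invented, and
alloc_gem_prime_fd() stands in for whatever the normal driver path is):

#include <fcntl.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>
#include <unistd.h>

struct hyper_alloc_req {		/* invented uapi */
	unsigned long size;
	int dmabuf_fd;
};
#define HYPER_DMABUF_ALLOC _IOWR('H', 0x00, struct hyper_alloc_req)

extern int alloc_gem_prime_fd(size_t size);	/* normal GEM + PRIME path */

int alloc_client_buffer(size_t size, int cross_vm)
{
	struct hyper_alloc_req req = { .size = size, .dmabuf_fd = -1 };
	int fd;

	if (!cross_vm)
		return alloc_gem_prime_fd(size);

	/* cross-VM case: let hyper-dmabuf own the pages from the start */
	fd = open("/dev/hyper_dmabuf", O_RDWR);
	if (fd < 0)
		return -1;
	if (ioctl(fd, HYPER_DMABUF_ALLOC, &req) < 0)
		req.dmabuf_fd = -1;
	close(fd);

	return req.dmabuf_fd;	/* importable by the local GPU driver */
}

Clients keep calling the same entry point either way; only the allocator
knows which exporter was used.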
Matt
>
> 2nd thing: This seems very much related to what's happening around gvt and
> allowing at least the host (in a kvm based VM environment) to be able to
> access some of the dma-buf (or well, framebuffers in general) that the
> client is using. Adding some mailing lists for that.
> -Daniel
>
> >
> > ------------------------------------------------------------------------------
> >
> > There is a git repository at github.com where this series of patches is
> > integrated into a Linux kernel tree based on the following commit:
> >
> > commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
> > Author: Linus Torvalds <torvalds@linux-foundation.org>
> > Date: Sun Dec 3 11:01:47 2017 -0500
> >
> > Linux 4.15-rc2
> >
> > https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
--
Matt Roper
Graphics Software Engineer
IoTG Platform Enabling & Development
Intel Corporation
(916) 356-2795
* Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
From: Tomeu Vizoso @ 2017-12-29 13:03 UTC
To: Matt Roper
Cc: Dongwon Kim, Intel Graphics Development,
linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
Potrola, MateuszX, xen-devel@lists.xenproject.org, intel-gvt-dev
On 26 December 2017 at 19:19, Matt Roper <matthew.d.roper@intel.com> wrote:
> On Wed, Dec 20, 2017 at 10:59:57AM +0100, Daniel Vetter wrote:
>> On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
>> > I forgot to include this brief information about this patch series.
>> >
>> > This patch series contains the implementation of a new device driver,
>> > hyper_dmabuf, which provides a method for DMA-BUF sharing across
>> > different OSes running on the same virtualized platform powered by
>> > a hypervisor.
>> >
>> > Detailed information about this driver is described in a high-level doc
>> > added by the second patch of the series.
>> >
>> > [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
>> >
>> > I am attaching 'Overview' section here as a summary.
>> >
>> > ------------------------------------------------------------------------------
>> > Section 1. Overview
>> > ------------------------------------------------------------------------------
>> >
>> > The Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
>> > Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
>> > where multiple different OS instances need to share the same physical data
>> > without data copies across VMs.
>> >
>> > To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
>> > exporting VM (the so-called “exporter”) imports a local DMA_BUF from the
>> > original producer of the buffer, then re-exports it to the importing VM (the
>> > so-called “importer”) with a unique ID, hyper_dmabuf_id, for the buffer.
>> >
>> > Another instance of the Hyper_DMABUF driver on the importer registers
>> > the hyper_dmabuf_id together with reference information for the shared
>> > physical pages associated with the DMA_BUF in its database when the export
>> > happens.
>> >
>> > The actual mapping of the DMA_BUF on the importer’s side is done by
>> > the Hyper_DMABUF driver when user space issues the IOCTL command to access
>> > the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
>> > an exporting driver as is; that is, no special configuration is required.
>> > Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
>> > exchange.
>>
>> So I know that most dma-buf implementations (especially lots of importers
>> in drivers/gpu) break this, but fundamentally only the original exporter
>> is allowed to know about the underlying pages. There are various scenarios
>> where a dma-buf isn't backed by anything like a struct page.
>>
>> So your first step of noodling the underlying struct page out from the
>> dma-buf is kinda breaking the abstraction, and I think it's not a good
>> idea to have that. Especially not for sharing across VMs.
>>
>> I think a better design would be if hyper-dmabuf would be the dma-buf
>> exporter in both of the VMs, and you'd import it everywhere you want to in
>> some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
>> in control of the pages, and a lot of the troubling forwarding you
>> currently need to do disappears.
>
> I think one of the main driving use cases here is for a "local" graphics
> compositor inside the VM to accept client buffers from unmodified
> applications and then pass those buffers along to a "global" compositor
> running in the service domain. This would allow the global compositor
> to composite applications running in different virtual machines (and
> possibly running under different operating systems).
>
> If we require that hyper-dmabuf always be the exporter, that complicates
> things a little bit, since a buffer allocated via regular interfaces (GEM
> ioctls or whatever) wouldn't be directly transferable to the global
> compositor. For graphics use cases like this, we could probably hide a
> lot of the details by modifying/replacing the EGL implementation that
> handles the details of buffer allocation. However, if we have
> applications that are themselves just passing along externally-allocated
> buffers (e.g., images from a camera device), we'd probably need to
> modify those applications and/or the drivers they get their content
> from.
There are also non-GPU-rendering clients that pass SHM buffers to the
compositor. For now, a Wayland proxy in the guest is copying the
client-provided buffers to virtio-gpu resources at the appropriate times,
which also need to be copied once more to host memory. It would be great
to reduce the number of copies that this implies.
For more on this effort:
https://patchwork.kernel.org/patch/10134603/
Regards,
Tomeu
>
> Matt
>
>>
>> 2nd thing: This seems very much related to what's happening around gvt and
>> allowing at least the host (in a kvm based VM environment) to be able to
>> access some of the dma-buf (or well, framebuffers in general) that the
>> client is using. Adding some mailing lists for that.
>> -Daniel
>>
>> >
>> > ------------------------------------------------------------------------------
>> >
>> > There is a git repository at github.com where this series of patches is
>> > integrated into a Linux kernel tree based on the following commit:
>> >
>> > commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
>> > Author: Linus Torvalds <torvalds@linux-foundation.org>
>> > Date: Sun Dec 3 11:01:47 2017 -0500
>> >
>> > Linux 4.15-rc2
>> >
>> > https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3
>> >
>>
>> --
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch
>
> --
> Matt Roper
> Graphics Software Engineer
> IoTG Platform Enabling & Development
> Intel Corporation
> (916) 356-2795
* Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of hyper_dmabuf drv
From: Dongwon Kim @ 2018-01-10 23:13 UTC
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
Potrola, MateuszX, dri-devel@lists.freedesktop.org,
Intel Graphics Development, intel-gvt-dev
On Wed, Dec 20, 2017 at 10:59:57AM +0100, Daniel Vetter wrote:
> On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
> > I forgot to include this brief information about this patch series.
> >
> > This patch series contains the implementation of a new device driver,
> > hyper_dmabuf, which provides a method for DMA-BUF sharing across
> > different OSes running on the same virtualized platform powered by
> > a hypervisor.
> >
> > Detailed information about this driver is described in a high-level doc
> > added by the second patch of the series.
> >
> > [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
> >
> > I am attaching 'Overview' section here as a summary.
> >
> > ------------------------------------------------------------------------------
> > Section 1. Overview
> > ------------------------------------------------------------------------------
> >
> > The Hyper_DMABUF driver is a Linux device driver running on multiple Virtual
> > Machines (VMs), which expands DMA-BUF sharing capability to the VM environment
> > where multiple different OS instances need to share the same physical data
> > without data copies across VMs.
> >
> > To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF drv on the
> > exporting VM (the so-called “exporter”) imports a local DMA_BUF from the
> > original producer of the buffer, then re-exports it to the importing VM (the
> > so-called “importer”) with a unique ID, hyper_dmabuf_id, for the buffer.
> >
> > Another instance of the Hyper_DMABUF driver on the importer registers
> > the hyper_dmabuf_id together with reference information for the shared
> > physical pages associated with the DMA_BUF in its database when the export
> > happens.
> >
> > The actual mapping of the DMA_BUF on the importer’s side is done by
> > the Hyper_DMABUF driver when user space issues the IOCTL command to access
> > the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing and
> > an exporting driver as is; that is, no special configuration is required.
> > Consequently, only a single module per VM is needed to enable cross-VM DMA_BUF
> > exchange.
>
> So I know that most dma-buf implementations (especially lots of importers
> in drivers/gpu) break this, but fundamentally only the original exporter
> is allowed to know about the underlying pages. There are various scenarios
> where a dma-buf isn't backed by anything like a struct page.
>
> So your first step of noodling the underlying struct page out from the
> dma-buf is kinda breaking the abstraction, and I think it's not a good
> idea to have that. Especially not for sharing across VMs.
>
> I think a better design would be if hyper-dmabuf would be the dma-buf
> exporter in both of the VMs, and you'd import it everywhere you want to in
> some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is always
> in control of the pages, and a lot of the troubling forwarding you
> currently need to do disappears.
That could be another way to implement dma-buf sharing; however, it would
break the flexibility and transparency that this driver has now. With the
suggested method, two different types of dma-buf would exist in the general
usage model: one is a local dma-buf, a traditional dma-buf that can be
shared only within the same OS instance, and the other is a cross-VM
sharable dma-buf created by the hyper_dmabuf driver.

The problem with this approach is that an application needs to know in
advance whether the contents will be shared across VMs before deciding
what type of dma-buf to create. Otherwise, the application would always
have to use hyper_dmabuf as the exporter for any contents that could
possibly be shared in the future, which I think would require a
significant amount of application changes and would also add an
unnecessary dependency on the hyper_dmabuf driver.
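
To make the flexibility point concrete: with the current model, userspace
can share any dma-buf it already holds, after the fact, roughly like this
(a simplified sketch; the ioctl name and struct below are illustrative
stand-ins, not the exact uapi in the patches):

#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct hyper_dmabuf_export_req {	/* simplified stand-in */
	int dmabuf_fd;			/* any existing local dma-buf */
	int remote_domain;		/* importing VM */
	long hid;			/* returned hyper_dmabuf_id */
};
#define HYPER_DMABUF_EXPORT_REMOTE \
	_IOWR('G', 0x00, struct hyper_dmabuf_export_req)

long share_existing_dmabuf(int hyper_fd, int dmabuf_fd, int remote_domain)
{
	struct hyper_dmabuf_export_req req = {
		.dmabuf_fd = dmabuf_fd,
		.remote_domain = remote_domain,
	};

	if (ioctl(hyper_fd, HYPER_DMABUF_EXPORT_REMOTE, &req) < 0)
		return -1;

	return req.hid;	/* passed out-of-band to the importer side */
}

The key point is that dmabuf_fd can come from any exporter (GEM, v4l2,
etc.) and the sharing decision can be made long after allocation.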
>
> 2nd thing: This seems very much related to what's happening around gvt and
> allowing at least the host (in a kvm based VM environment) to be able to
> access some of the dma-buf (or well, framebuffers in general) that the
> client is using. Adding some mailing lists for that.
I think you are talking about exposing a framebuffer to another domain via
GTT memory sharing. And yes, one of the primary use cases for hyper_dmabuf
is to share a framebuffer or other graphic object across VMs, but it is
designed to do it in a more general way using the existing dma-buf
framework. Also, we wanted to make this feature available for virtually
any sharable contents that can currently be shared via dma-buf locally.
> -Daniel
>
> >
> > ------------------------------------------------------------------------------
> >
> > There is a git repository at github.com where this series of patches is
> > integrated into a Linux kernel tree based on the following commit:
> >
> > commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
> > Author: Linus Torvalds <torvalds@linux-foundation.org>
> > Date: Sun Dec 3 11:01:47 2017 -0500
> >
> > Linux 4.15-rc2
> >
> > https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch