* [RFC PATCH 2/4] gpu: dxgkrnl: hook up dxgkrnl
2020-05-19 16:32 [RFC PATCH 0/4] DirectX on Linux Sasha Levin
@ 2020-05-19 16:32 ` Sasha Levin
2020-05-19 16:32 ` [RFC PATCH 3/4] Drivers: hv: vmbus: " Sasha Levin
` (5 subsequent siblings)
6 siblings, 0 replies; 28+ messages in thread
From: Sasha Levin @ 2020-05-19 16:32 UTC (permalink / raw)
To: alexander.deucher, chris, ville.syrjala, Hawking.Zhang,
tvrtko.ursulin
Cc: Sasha Levin, linux-hyperv, sthemmin, gregkh, haiyangz,
linux-kernel, dri-devel, spronovo, wei.liu, linux-fbdev, iourit,
kys
Connect the dxgkrnl module to the drivers/gpu/ Makefile and Kconfig.
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
drivers/gpu/Makefile | 2 +-
drivers/video/Kconfig | 2 ++
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/Makefile b/drivers/gpu/Makefile
index 835c88318cec..28c22c814494 100644
--- a/drivers/gpu/Makefile
+++ b/drivers/gpu/Makefile
@@ -3,6 +3,6 @@
# taken to initialize them in the correct order. Link order is the only way
# to ensure this currently.
obj-$(CONFIG_TEGRA_HOST1X) += host1x/
-obj-y += drm/ vga/
+obj-y += drm/ vga/ dxgkrnl/
obj-$(CONFIG_IMX_IPUV3_CORE) += ipu-v3/
obj-$(CONFIG_TRACE_GPU_MEM) += trace/
diff --git a/drivers/video/Kconfig b/drivers/video/Kconfig
index 427a993c7f57..362c08778a54 100644
--- a/drivers/video/Kconfig
+++ b/drivers/video/Kconfig
@@ -19,6 +19,8 @@ source "drivers/gpu/ipu-v3/Kconfig"
source "drivers/gpu/drm/Kconfig"
+source "drivers/gpu/dxgkrnl/Kconfig"
+
menu "Frame buffer Devices"
source "drivers/video/fbdev/Kconfig"
endmenu
--
2.25.1
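For reference, the drivers/gpu/dxgkrnl/Kconfig being sourced above is introduced
in patch 1 of the series and is not visible in this hunk. A plausible minimal
sketch of its shape (the exact prompt and help text are assumptions):

config DXGKRNL
        tristate "Microsoft virtual GPU support"
        depends on HYPERV
        help
          This driver exposes a paravirtualized GPU to user mode
          applications running in a virtual machine on a Windows host.
          It enables GPU compute acceleration in environments such as
          WSL (Windows Subsystem for Linux).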
* [RFC PATCH 3/4] Drivers: hv: vmbus: hook up dxgkrnl
2020-05-19 16:32 [RFC PATCH 0/4] DirectX on Linux Sasha Levin
2020-05-19 16:32 ` [RFC PATCH 2/4] gpu: dxgkrnl: hook up dxgkrnl Sasha Levin
@ 2020-05-19 16:32 ` Sasha Levin
2020-05-19 16:32 ` [RFC PATCH 4/4] gpu: dxgkrnl: create a MAINTAINERS entry Sasha Levin
` (4 subsequent siblings)
6 siblings, 0 replies; 28+ messages in thread
From: Sasha Levin @ 2020-05-19 16:32 UTC (permalink / raw)
To: alexander.deucher, chris, ville.syrjala, Hawking.Zhang,
tvrtko.ursulin
Cc: Sasha Levin, linux-hyperv, sthemmin, gregkh, haiyangz,
linux-kernel, dri-devel, spronovo, wei.liu, linux-fbdev, iourit,
kys
Register a new device type with vmbus.
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
include/linux/hyperv.h | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 692c89ccf5df..ad16e9bc676a 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1352,6 +1352,22 @@ void vmbus_free_mmio(resource_size_t start, resource_size_t size);
.guid = GUID_INIT(0xda0a7802, 0xe377, 0x4aac, 0x8e, 0x77, \
0x05, 0x58, 0xeb, 0x10, 0x73, 0xf8)
+/*
+ * GPU paravirtualization global DXGK channel
+ * {DDE9CBC0-5060-4436-9448-EA1254A5D177}
+ */
+#define HV_GPUP_DXGK_GLOBAL_GUID \
+ .guid = GUID_INIT(0xdde9cbc0, 0x5060, 0x4436, 0x94, 0x48, \
+ 0xea, 0x12, 0x54, 0xa5, 0xd1, 0x77)
+
+/*
+ * GPU paravirtualization per virtual GPU DXGK channel
+ * {6E382D18-3336-4F4B-ACC4-2B7703D4DF4A}
+ */
+#define HV_GPUP_DXGK_VGPU_GUID \
+ .guid = GUID_INIT(0x6e382d18, 0x3336, 0x4f4b, 0xac, 0xc4, \
+ 0x2b, 0x77, 0x3, 0xd4, 0xdf, 0x4a)
+
/*
* Synthetic FC GUID
* {2f9bcc4a-0069-4af3-b76b-6fd0be528cda}
--
2.25.1
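For context, the vmbus registration itself lives in patch 1's dxgmodule.c. A
minimal sketch of how a driver would match on the GUIDs added above, following
the standard hv_driver pattern (the probe/remove names are taken from the
series' cover letter; everything else is illustrative):

static const struct hv_vmbus_device_id dxg_id_table[] = {
        { HV_GPUP_DXGK_GLOBAL_GUID },   /* global DXGK channel */
        { HV_GPUP_DXGK_VGPU_GUID },     /* one channel per virtual GPU */
        { }
};

static struct hv_driver dxg_drv = {
        .name = "dxgkrnl",
        .id_table = dxg_id_table,
        .probe = dxg_probe_device,      /* tells the two channel types apart */
        .remove = dxg_remove_device,
};

/* registered at module init via vmbus_driver_register(&dxg_drv) */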
* [RFC PATCH 4/4] gpu: dxgkrnl: create a MAINTAINERS entry
2020-05-19 16:32 [RFC PATCH 0/4] DirectX on Linux Sasha Levin
2020-05-19 16:32 ` [RFC PATCH 2/4] gpu: dxgkrnl: hook up dxgkrnl Sasha Levin
2020-05-19 16:32 ` [RFC PATCH 3/4] Drivers: hv: vmbus: " Sasha Levin
@ 2020-05-19 16:32 ` Sasha Levin
[not found] ` <20200519163234.226513-2-sashal@kernel.org>
` (3 subsequent siblings)
6 siblings, 0 replies; 28+ messages in thread
From: Sasha Levin @ 2020-05-19 16:32 UTC (permalink / raw)
To: alexander.deucher, chris, ville.syrjala, Hawking.Zhang,
tvrtko.ursulin
Cc: Sasha Levin, linux-hyperv, sthemmin, gregkh, haiyangz,
linux-kernel, dri-devel, spronovo, wei.liu, linux-fbdev, iourit,
kys
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
MAINTAINERS | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index e64e5db31497..dccdfadda5df 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4997,6 +4997,13 @@ F: Documentation/filesystems/dnotify.txt
F: fs/notify/dnotify/
F: include/linux/dnotify.h
+DirectX GPU DRIVER
+M: Sasha Levin <sashal@kernel.org>
+M: Iouri Tarassov <iourit@microsoft.com>
+L: linux-hyperv@vger.kernel.org
+S: Supported
+F: drivers/gpu/dxgkrnl/
+
DISK GEOMETRY AND PARTITION HANDLING
M: Andries Brouwer <aeb@cwi.nl>
S: Maintained
--
2.25.1
[parent not found: <20200519163234.226513-2-sashal@kernel.org>]
* Re: [RFC PATCH 1/4] gpu: dxgkrnl: core code
[not found] ` <20200519163234.226513-2-sashal@kernel.org>
@ 2020-05-19 17:19 ` Greg KH
2020-05-19 17:21 ` Greg KH
2020-05-19 17:27 ` Greg KH
2 siblings, 0 replies; 28+ messages in thread
From: Greg KH @ 2020-05-19 17:19 UTC (permalink / raw)
To: Sasha Levin
Cc: linux-hyperv, sthemmin, tvrtko.ursulin, haiyangz, spronovo,
linux-kernel, dri-devel, chris, wei.liu, linux-fbdev, iourit,
alexander.deucher, kys, Hawking.Zhang
On Tue, May 19, 2020 at 12:32:31PM -0400, Sasha Levin wrote:
> +/*
> + * Dxgkrnl Graphics Port Driver ioctl definitions
> + *
> + */
> +
> +#define LX_IOCTL_DIR_WRITE 0x1
> +#define LX_IOCTL_DIR_READ 0x2
> +
> +#define LX_IOCTL_DIR(_ioctl) (((_ioctl) >> 30) & 0x3)
> +#define LX_IOCTL_SIZE(_ioctl) (((_ioctl) >> 16) & 0x3FFF)
> +#define LX_IOCTL_TYPE(_ioctl) (((_ioctl) >> 8) & 0xFF)
> +#define LX_IOCTL_CODE(_ioctl) (((_ioctl) >> 0) & 0xFF)
Why create new ioctl macros, can't the "normal" kernel macros work
properly?
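For comparison, a sketch of the same encoding built from the standard macros in
include/uapi/asm-generic/ioctl.h, reusing the type/code/size values from the
quoted definitions:

#include <linux/ioctl.h>

/* _IOWR() packs direction, size, type and number into one 32-bit value,
 * much like LX_IOCTL() above, but with the kernel's standard bit layout. */
#define LX_DXOPENADAPTERFROMLUID \
        _IOWR(0x47, 0x01, struct d3dkmt_openadapterfromluid)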
> +#define LX_IOCTL(_dir, _size, _type, _code) ( \
> + (((uint)(_dir) & 0x3) << 30) | \
> + (((uint)(_size) & 0x3FFF) << 16) | \
> + (((uint)(_type) & 0xFF) << 8) | \
> + (((uint)(_code) & 0xFF) << 0))
> +
> +#define LX_IO(_type, _code) LX_IOCTL(0, 0, (_type), (_code))
> +#define LX_IOR(_type, _code, _size) \
> + LX_IOCTL(LX_IOCTL_DIR_READ, (_size), (_type), (_code))
> +#define LX_IOW(_type, _code, _size) \
> + LX_IOCTL(LX_IOCTL_DIR_WRITE, (_size), (_type), (_code))
> +#define LX_IOWR(_type, _code, _size) \
> + LX_IOCTL(LX_IOCTL_DIR_WRITE | \
> + LX_IOCTL_DIR_READ, (_size), (_type), (_code))
> +
> +#define LX_DXOPENADAPTERFROMLUID \
> + LX_IOWR(0x47, 0x01, sizeof(struct d3dkmt_openadapterfromluid))
<snip>
These structures do not seem to be all using the correct types for a
"real" ioctl in the kernel, so you will have to fix them all up before
this will work properly.
> +void ioctl_desc_init(void);
Very odd global name you are using here :)
Anyway, neat stuff, glad to see it posted, great work!
greg k-h
* Re: [RFC PATCH 1/4] gpu: dxgkrnl: core code
[not found] ` <20200519163234.226513-2-sashal@kernel.org>
2020-05-19 17:19 ` [RFC PATCH 1/4] gpu: dxgkrnl: core code Greg KH
@ 2020-05-19 17:21 ` Greg KH
2020-05-19 17:45 ` Sasha Levin
2020-05-19 17:27 ` Greg KH
2 siblings, 1 reply; 28+ messages in thread
From: Greg KH @ 2020-05-19 17:21 UTC (permalink / raw)
To: Sasha Levin
Cc: linux-hyperv, sthemmin, tvrtko.ursulin, haiyangz, spronovo,
linux-kernel, dri-devel, chris, wei.liu, linux-fbdev, iourit,
alexander.deucher, kys, Hawking.Zhang
On Tue, May 19, 2020 at 12:32:31PM -0400, Sasha Levin wrote:
> +
> +#define DXGK_MAX_LOCK_DEPTH 64
> +#define W_MAX_PATH 260
We already have a max path number, why use a different one?
> +#define d3dkmt_handle u32
> +#define d3dgpu_virtual_address u64
> +#define winwchar u16
> +#define winhandle u64
> +#define ntstatus int
> +#define winbool u32
> +#define d3dgpu_size_t u64
These are all ripe for a simple search/replace in your editor before you
do your next version :)
thanks,
greg k-h
* Re: [RFC PATCH 1/4] gpu: dxgkrnl: core code
2020-05-19 17:21 ` Greg KH
@ 2020-05-19 17:45 ` Sasha Levin
2020-05-20 6:13 ` Greg KH
0 siblings, 1 reply; 28+ messages in thread
From: Sasha Levin @ 2020-05-19 17:45 UTC (permalink / raw)
To: Greg KH
Cc: linux-hyperv, sthemmin, tvrtko.ursulin, haiyangz, spronovo,
linux-kernel, dri-devel, chris, wei.liu, linux-fbdev, iourit,
alexander.deucher, kys, Hawking.Zhang
On Tue, May 19, 2020 at 07:21:05PM +0200, Greg KH wrote:
>On Tue, May 19, 2020 at 12:32:31PM -0400, Sasha Levin wrote:
>> +
>> +#define DXGK_MAX_LOCK_DEPTH 64
>> +#define W_MAX_PATH 260
>
>We already have a max path number, why use a different one?
It's max path for Windows, not Linux (thus the "W_" prefix) :)
Maybe changing it to WIN_MAX_PATH or such will make it better?
>> +#define d3dkmt_handle u32
>> +#define d3dgpu_virtual_address u64
>> +#define winwchar u16
>> +#define winhandle u64
>> +#define ntstatus int
>> +#define winbool u32
>> +#define d3dgpu_size_t u64
>
>These are all ripe for a simple search/replace in your editor before you
>do your next version :)
I've actually attempted that, and reverted that change, mostly because
the whole 'handle' thing became very confusing.
Note that we have a few 'handles', each with a different size, and thus
calling get_something_something_handle() type of functions becomes very
confusing since it's not clear which handle we're working with in that
case.
With regards to the rest, I wanted to leave stuff like 'winbool' to
document the expected ABI between the Windows and Linux side of things.
Ideally it would be 'bool' or 'u8', but as you see we had to use 'u32'
here which I feel lessens our ability to have the code document itself.
I don't feel too strongly against doing the conversion, and I won't
object to doing it if you do, but just be aware that I've tried it and
preferred to go back (even though our coding style doesn't like this) :)
--
Thanks,
Sasha
* Re: [RFC PATCH 1/4] gpu: dxgkrnl: core code
2020-05-19 17:45 ` Sasha Levin
@ 2020-05-20 6:13 ` Greg KH
0 siblings, 0 replies; 28+ messages in thread
From: Greg KH @ 2020-05-20 6:13 UTC (permalink / raw)
To: Sasha Levin
Cc: linux-hyperv, sthemmin, tvrtko.ursulin, haiyangz, spronovo,
linux-kernel, dri-devel, chris, wei.liu, linux-fbdev, iourit,
alexander.deucher, kys, Hawking.Zhang
On Tue, May 19, 2020 at 01:45:53PM -0400, Sasha Levin wrote:
> On Tue, May 19, 2020 at 07:21:05PM +0200, Greg KH wrote:
> > On Tue, May 19, 2020 at 12:32:31PM -0400, Sasha Levin wrote:
> > > +
> > > +#define DXGK_MAX_LOCK_DEPTH 64
> > > +#define W_MAX_PATH 260
> >
> > We already have a max path number, why use a different one?
>
> It's max path for Windows, not Linux (thus the "W_" prefix) :)
Ah, not obvious :)
> Maybe changing it to WIN_MAX_PATH or such will make it better?
Probably.
> > > +#define d3dkmt_handle u32
> > > +#define d3dgpu_virtual_address u64
> > > +#define winwchar u16
> > > +#define winhandle u64
> > > +#define ntstatus int
> > > +#define winbool u32
> > > +#define d3dgpu_size_t u64
> >
> > These are all ripe for a simple search/replace in your editor before you
> > do your next version :)
>
> I've actually attempted that, and reverted that change, mostly because
> the whole 'handle' thing became very confusing.
Yeah, "handles" in windows can be a mess, with some being pointers and
others just integers. Trying to make a specific typedef for it is
usually the better way overall, that way you can get the compiler to
check for mistakes. These #defines will not really help with that.
But, 'ntstatus' should be ok to just make "int" everywhere, right?
> Note that we have a few 'handles', each with a different size, and thus
> calling get_something_something_handle() type of functions becase very
> confusing since it's not clear what handle we're working with in that
> case.
Yeah, typedefs can help there.
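A minimal sketch of that typedef approach, reusing two of the names from the
quoted #defines: each handle becomes a distinct struct type, so passing one
where the other is expected becomes a compile error instead of a silent mix-up:

typedef struct { u32 v; } d3dkmt_handle;        /* 32-bit D3DKMT handle */
typedef struct { u64 v; } winhandle;            /* opaque 64-bit Windows handle */

/* accessor keeps the raw value available for the VM bus messages */
static inline u32 d3dkmt_handle_value(d3dkmt_handle h) { return h.v; }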
> With regards to the rest, I wanted to leave stuff like 'winbool' to
> document the expected ABI between the Windows and Linux side of things.
> Ideally it would be 'bool' or 'u8', but as you see we had to use 'u32'
> here which I feel lessens our ability to have the code document itself.
'bool' probably will not work as I think it's compiler dependent, __u8
is probably best.
thanks,
greg k-h
* Re: [RFC PATCH 1/4] gpu: dxgkrnl: core code
[not found] ` <20200519163234.226513-2-sashal@kernel.org>
2020-05-19 17:19 ` [RFC PATCH 1/4] gpu: dxgkrnl: core code Greg KH
2020-05-19 17:21 ` Greg KH
@ 2020-05-19 17:27 ` Greg KH
2 siblings, 0 replies; 28+ messages in thread
From: Greg KH @ 2020-05-19 17:27 UTC (permalink / raw)
To: Sasha Levin
Cc: linux-hyperv, sthemmin, tvrtko.ursulin, haiyangz, spronovo,
linux-kernel, dri-devel, chris, wei.liu, linux-fbdev, iourit,
alexander.deucher, kys, Hawking.Zhang
On Tue, May 19, 2020 at 12:32:31PM -0400, Sasha Levin wrote:
> +static int dxgglobal_init_global_channel(struct hv_device *hdev)
> +{
> + int ret = 0;
> +
> + TRACE_DEBUG(1, "%s %x %x", __func__, hdev->vendor_id, hdev->device_id);
> + {
> + TRACE_DEBUG(1, "device type : %pUb\n", &hdev->dev_type);
> + TRACE_DEBUG(1, "device channel: %pUb %p primary: %p\n",
> + &hdev->channel->offermsg.offer.if_type,
> + hdev->channel, hdev->channel->primary_channel);
> + }
> +
> + if (dxgglobal->hdev) {
> + /* This device should appear only once */
> + pr_err("dxgglobal already initialized\n");
> + ret = -EBADE;
> + goto error;
> + }
> +
> + dxgglobal->hdev = hdev;
> +
> + ret = dxgvmbuschannel_init(&dxgglobal->channel, hdev);
> + if (ret) {
> + pr_err("dxgvmbuschannel_init failed: %d\n", ret);
> + goto error;
> + }
> +
> + ret = dxgglobal_getiospace(dxgglobal);
> + if (ret) {
> + pr_err("getiospace failed: %d\n", ret);
> + goto error;
> + }
> +
> + ret = dxgvmb_send_set_iospace_region(dxgglobal->mmiospace_base,
> + dxgglobal->mmiospace_size, 0);
> + if (ret) {
> + pr_err("send_set_iospace_region failed\n");
> + goto error;
> + }
> +
> + hv_set_drvdata(hdev, dxgglobal);
> +
> + if (alloc_chrdev_region(&dxgglobal->device_devt, 0, 1, "dxgkrnl") < 0) {
> + pr_err("alloc_chrdev_region failed\n");
> + ret = -ENODEV;
> + goto error;
> + }
> + dxgglobal->devt_initialized = true;
> + dxgglobal->device_class = class_create(THIS_MODULE, "dxgkdrv");
> + if (dxgglobal->device_class == NULL) {
> + pr_err("class_create failed\n");
> + ret = -ENODEV;
> + goto error;
> + }
> + dxgglobal->device_class->devnode = dxg_devnode;
> + dxgglobal->device = device_create(dxgglobal->device_class, NULL,
> + dxgglobal->device_devt, NULL, "dxg");
> + if (dxgglobal->device == NULL) {
> + pr_err("device_create failed\n");
> + ret = -ENODEV;
> + goto error;
> + }
> + dxgglobaldev = dxgglobal->device;
> + cdev_init(&dxgglobal->device_cdev, &dxgk_fops);
> + ret = cdev_add(&dxgglobal->device_cdev, dxgglobal->device_devt, 1);
> + if (ret < 0) {
> + pr_err("cdev_add failed: %d\n", ret);
> + goto error;
> + }
> + dxgglobal->cdev_initialized = true;
> +
> +error:
> + return ret;
> +}
As you are only asking for a single char dev node, please just use the
misc device API instead of creating your own class and major number on
the fly. It's much simpler, and easier overall to make sure you get all
of the above logic correct.
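A sketch of that simplification: the miscdevice core supplies the class, the
major/minor pair and the device node, so the chrdev/class/cdev sequence above
collapses to a single registration (assuming the dxgk_fops from the quoted
code; the wrapper name is illustrative):

#include <linux/miscdevice.h>

static struct miscdevice dxg_misc = {
        .minor = MISC_DYNAMIC_MINOR,    /* let the core pick a minor */
        .name  = "dxg",                 /* creates /dev/dxg */
        .fops  = &dxgk_fops,
};

static int dxg_register_device(void)
{
        int ret = misc_register(&dxg_misc);

        if (ret)
                pr_err("misc_register failed: %d\n", ret);
        return ret;     /* misc_deregister(&dxg_misc) on the way out */
}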
thanks,
greg k-h
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-19 16:32 [RFC PATCH 0/4] DirectX on Linux Sasha Levin
` (3 preceding siblings ...)
[not found] ` <20200519163234.226513-2-sashal@kernel.org>
@ 2020-05-19 19:21 ` Daniel Vetter
2020-05-19 20:36 ` Sasha Levin
2020-05-19 22:42 ` Dave Airlie
2020-05-20 7:10 ` Thomas Zimmermann
6 siblings, 1 reply; 28+ messages in thread
From: Daniel Vetter @ 2020-05-19 19:21 UTC (permalink / raw)
To: Sasha Levin, Olof Johansson, Jerome Glisse, Jason Ekstrand
Cc: linux-hyperv, Stephen Hemminger, Tvrtko Ursulin, Greg KH,
Haiyang Zhang, Linux Kernel Mailing List, dri-devel,
Wilson, Chris, spronovo, Linux Fbdev development list, iourit,
Alex Deucher, K. Y. Srinivasan, Wei Liu, Hawking Zhang
Hi Sasha
So obviously great that Microsoft is trying to upstream all this, and
very much welcome and all that.
But I guess there's a bunch of rather fundamental issues before we
look into any kind of code details. And that might make this quite a
hard sell for upstream to drivers/gpu subsystem:
- From the blog it sounds like the userspace is all closed. That
includes the hw specific part and compiler chunks, all stuff we've
generally expected to be able to look at in the past for any kind of
other driver. It's even documented here:
https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#open-source-userspace-requirements
What's your plan here?
btw since the main goal here (at least at first) seems to be to get
compute and ML going, the official work-around here is to relabel your
driver as an accelerator driver (just sed -e s/vGPU/vaccel/ over the
entire thing or so) and then Olof and Greg will take it into
drivers/accel ...
- Next up (but that's not really a surprise for a fresh vendor driver)
at a more technical level, this seems to reinvent the world, from
device enumeration (why is this not exposed as /dev/dri/card0 so it
better integrates with existing linux desktop stuff, in case that
becomes a goal ever) down to reinvented kref_put_mutex (and please
look at drm_device->struct_mutex for an example of how bad of a
nightmare that locking pattern is and how many years it took us to
untangle that one.)
- Why DX12 on linux? Looking at this feels like classic divide and
conquer (or well triple E from the 90s), we have vk, we have
drm_syncobj, we have an entire ecosystem of winsys layers that work
across vendors. Is the plan here that we get a dx12 driver for other
hw mesa drivers from you guys, so this is all consistent and we have a
nice linux platform? How does this integrate everywhere else with
linux winsys standards, like dma-buf for passing stuff around,
dma-fence/sync_file/drm_syncobj for syncing, drm_fourcc/modifiers for
some idea how it all meshes together?
- There's been a pile of hallway track/private discussions about
moving on from the buffer-based memory managed model to something more
modern. That relates to your DXLOCK2 question, but there's a lot more
to userspace managed gpu memory residency than just that. monitored
fences are another part. Also, to avoid a platform split we need to
figure out how to tie this back into the dma-buf and dma-fence
(including various uapi flavours) or it'll be made of fail. dx12 has
all that in some form, except 0 integration with the linux stuff we
have (no surprise, since linux isn't windows). Finally if we go to the
trouble of a completely revamped I think ioctls aren't a great idea,
something like iouring (the gossip name is drm_uring) would be a lot
better. Also for easier paravirt we'd need 0 cpu pointers in any such
new interface. Adding a few people who've been involved in these
discussions thus far, mostly under a drm/hmm.ko heading iirc.
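For readers unfamiliar with the kref_put_mutex() locking pattern referenced in
the list above: it drops a reference and, only when the count hits zero, takes
the mutex before invoking the release callback, which must then unlock it. A
sketch with made-up object names:

#include <linux/kref.h>
#include <linux/slab.h>

struct my_dev {
        struct mutex obj_lock;          /* protects the object list */
};

struct my_obj {
        struct kref refcount;
        struct list_head node;
        struct my_dev *dev;
};

static void my_obj_release(struct kref *kref)
{
        struct my_obj *obj = container_of(kref, struct my_obj, refcount);

        /* kref_put_mutex() acquired dev->obj_lock before calling us */
        list_del(&obj->node);
        mutex_unlock(&obj->dev->obj_lock);      /* release must drop the lock */
        kfree(obj);
}

static void my_obj_put(struct my_obj *obj)
{
        kref_put_mutex(&obj->refcount, my_obj_release, &obj->dev->obj_lock);
}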
I think the above are the really big ticket items around what's the
plan here and are we solving even the right problem.
Cheers, Daniel
On Tue, May 19, 2020 at 6:33 PM Sasha Levin <sashal@kernel.org> wrote:
>
> There is a blog post that goes into more detail about the bigger
> picture, and walks through all the required pieces to make this work. It
> is available here:
> https://devblogs.microsoft.com/directx/directx-heart-linux . The rest of
> this cover letter will focus on the Linux Kernel bits.
>
> Overview
> ====
>
> This is the first draft of the Microsoft Virtual GPU (vGPU) driver. The
> driver exposes a paravirtualized GPU to user mode applications running
> in a virtual machine on a Windows host. This enables hardware
> acceleration in environments such as WSL (Windows Subsystem for Linux)
> where the Linux virtual machine is able to share the GPU with the
> Windows host.
>
> The projection is accomplished by exposing the WDDM (Windows Display
> Driver Model) interface as a set of IOCTLs. This allows APIs and user
> mode drivers written against the WDDM GPU abstraction on Windows to be
> ported to run within a Linux environment. This enables the port of the
> D3D12 and DirectML APIs as well as their associated user mode drivers to
> Linux. This also enables third party APIs, such as the popular NVIDIA
> CUDA compute API, to be hardware accelerated within a WSL environment.
>
> Only the rendering/compute aspects of the GPU are projected to the
> virtual machine; no display functionality is exposed. Further, at this
> time there is no presentation integration. So although the D3D12 API
> can be used to render graphics offscreen, there is no path (yet) for
> pixels to flow from the Linux environment back onto the Windows host
> desktop. This GPU stack is effectively side-by-side with the native
> Linux graphics stack.
>
> The driver creates the /dev/dxg device, which can be opened by user mode
> applications, and handles their ioctls. The IOCTL interface to the driver
> is defined in dxgkmthk.h (Dxgkrnl Graphics Port Driver ioctl
> definitions). The interface matches the D3DKMT interface on Windows.
> Ioctls are implemented in ioctl.c.
>
> When a VM starts, hyper-v on the host adds virtual GPU devices to the VM
> via the hyper-v driver. The host offers several VM bus channels to the
> VM: the global channel and one channel per virtual GPU, assigned to the
> VM.
>
> The driver registers with the hyper-v driver (hv_driver) for the arrival
> of VM bus channels. dxg_probe_device recognizes the vGPU channels and
> creates the corresponding objects (dxgadapter for vGPUs and dxgglobal
> for the global channel).
>
> The driver uses the hyper-V VM bus interface to communicate with the
> host. dxgvmbus.c implements the communication interface.
>
> The global channel has 8GB of IO space assigned by the host. This space
> is managed by the host and used to give the guest direct CPU access to
> some allocations. Video memory is allocated on the host except in the
> case of existing_sysmem allocations. The Windows host allocates memory
> for the GPU on behalf of the guest. The Linux guest can access that
> memory by mapping GPU virtual addresses to allocations and then
> referencing those GPU virtual addresses from within GPU command buffers
> submitted to the GPU. For allocations which require CPU access, the
> allocation is mapped by the host into a location in the 8GB of IO space
> reserved in the guest for that purpose. The Windows host uses the nested
> CPU page table to ensure that this guest IO space always maps to the
> correct location for the allocation as it may migrate between dedicated
> GPU memory (e.g. VRAM, firmware reserved DDR) and shared system memory
> (regular DDR) over its lifetime. The Linux guest maps a user mode CPU
> virtual address to an allocation IO space range for direct access by
> user mode APIs and drivers.
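(For the curious: a sketch of how the guest side would reserve such a region
through the existing vmbus helper; vmbus_allocate_mmio() is the real API, while
the wrapper name and the alignment value are assumptions:)

static int dxg_get_iospace_sketch(struct hv_device *hdev,
                                  struct resource **iospace)
{
        /* 8GB region as described above; placed anywhere in MMIO space */
        return vmbus_allocate_mmio(iospace, hdev, 0, -1,
                                   8ULL << 30, 0x100000, false);
}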
>
>
>
> Implementation of LX_DXLOCK2 ioctl
> =================
>
> We would appreciate your feedback on the implementation of the
> LX_DXLOCK2 ioctl.
>
> This ioctl is used to get a CPU address to an allocation, which is
> resident in video/system memory on the host. The way it works:
>
> 1. The driver sends the Lock message to the host
>
> 2. The host allocates space in the VM IO space and maps it to the
> allocation memory
>
> 3. The host returns the address in IO space for the mapped allocation
>
> 4. The driver (in dxg_map_iospace) allocates a user mode virtual address
> range using vm_mmap and maps it to the IO space using
> io_remap_pfn_range()
>
> 5. The VA is returned to the application
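(A condensed sketch of steps 4-5 above; vm_mmap() and io_remap_pfn_range() are
the primitives the text names, the rest of the identifiers are illustrative,
and the locking uses mmap_sem as in kernels of this era:)

static int dxg_map_iospace_sketch(u64 iospace_gpa, u32 size, u64 *out_va)
{
        unsigned long va;
        struct vm_area_struct *vma;
        int ret = -ENOMEM;

        /* step 4a: carve out a user mode virtual address range */
        va = vm_mmap(NULL, 0, size, PROT_READ | PROT_WRITE, MAP_SHARED, 0);
        if (IS_ERR_VALUE(va))
                return (int)va;

        /* step 4b: point it at the host-provided IO space pages */
        down_write(&current->mm->mmap_sem);
        vma = find_vma(current->mm, va);
        if (vma && !io_remap_pfn_range(vma, va, iospace_gpa >> PAGE_SHIFT,
                                       size, vma->vm_page_prot))
                ret = 0;
        up_write(&current->mm->mmap_sem);

        *out_va = va;   /* step 5: handed back to the application */
        return ret;
}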
>
>
>
> Internal objects
> ========
>
> The following objects are created by the driver (defined in dxgkrnl.h):
>
> - dxgadapter - represents a virtual GPU
>
> - dxgprocess - tracks per process state (handle table of created
> objects, list of objects, etc.)
>
> - dxgdevice - a container for other objects (contexts, paging queues,
> allocations, GPU synchronization objects)
>
> - dxgcontext - represents a thread of GPU execution for packet
> scheduling.
>
> - dxghwqueue - represents a thread of GPU execution for hardware scheduling
>
> - dxgallocation - represents a GPU accessible allocation
>
> - dxgsyncobject - represents a GPU synchronization object
>
> - dxgresource - collection of dxgallocation objects
>
> - dxgsharedresource, dxgsharedsyncobj - helper objects to share objects
> between different dxgdevice objects, which can belong to different
> processes
>
>
>
> Object handles
> =======
>
> All GPU objects, created by the driver, are accessible by a handle
> (d3dkmt_handle). Each process has its own handle table, which is
> implemented in hmgr.c. For each API visible object, created by the
> driver, there is an object created on the host. For example, there is a
> dxgprocess object on the host for each dxgprocess object in the VM, etc.
> The object handles have the same value in the host and the VM, which is
> done to avoid translation from the guest handles to the host handles.
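(hmgr.c implements its own table format so that handle values can be kept
identical on both sides; purely to illustrate the lookup shape, a sketch using
a stock idr, with all names hypothetical:)

#include <linux/idr.h>

struct dxgprocess_sketch {
        struct idr handles;     /* idr_init() at process creation */
        struct mutex lock;      /* guards table mutations and lookups */
};

/* resolve the same u32 handle value that userspace and the host use;
 * no guest-to-host handle translation is needed */
static void *handle_to_object(struct dxgprocess_sketch *p, u32 handle)
{
        void *obj;

        mutex_lock(&p->lock);
        obj = idr_find(&p->handles, handle);
        mutex_unlock(&p->lock);
        return obj;
}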
>
>
>
> Signaling CPU events by the host
> ================
>
> The WDDM interface provides a way to signal CPU event objects when
> execution of a context reaches a certain point. The way it is implemented:
>
> - application sends an event_fd via ioctl to the driver
>
> - eventfd_ctx_get is used to get a pointer to the file object
> (eventfd_ctx)
>
> - the pointer is sent to the host via a VM bus message
>
> - when GPU execution reaches a certain point, the host sends a message
> to the VM with the event pointer
>
> - signal_guest_event() handles the messages and eventually
> eventfd_signal() is called.
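(A condensed sketch of the guest side of this flow; eventfd_ctx_fdget(),
eventfd_signal() and eventfd_ctx_put() are the stock kernel API, while the
function names are illustrative:)

#include <linux/eventfd.h>

/* ioctl path: pin the eventfd the application passed in */
static struct eventfd_ctx *pin_user_event(int fd)
{
        return eventfd_ctx_fdget(fd);   /* takes a reference, or ERR_PTR */
}

/* host-to-guest message handler: the host echoes the pointer back once
 * GPU execution reaches the sync point */
static void signal_guest_event_sketch(struct eventfd_ctx *efd)
{
        eventfd_signal(efd, 1); /* wakes the waiting application */
        eventfd_ctx_put(efd);   /* drop the reference taken at submit */
}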
>
>
> Sasha Levin (4):
> gpu: dxgkrnl: core code
> gpu: dxgkrnl: hook up dxgkrnl
> Drivers: hv: vmbus: hook up dxgkrnl
> gpu: dxgkrnl: create a MAINTAINERS entry
>
> MAINTAINERS | 7 +
> drivers/gpu/Makefile | 2 +-
> drivers/gpu/dxgkrnl/Kconfig | 10 +
> drivers/gpu/dxgkrnl/Makefile | 12 +
> drivers/gpu/dxgkrnl/d3dkmthk.h | 1635 +++++++++
> drivers/gpu/dxgkrnl/dxgadapter.c | 1399 ++++++++
> drivers/gpu/dxgkrnl/dxgkrnl.h | 913 ++++++
> drivers/gpu/dxgkrnl/dxgmodule.c | 692 ++++
> drivers/gpu/dxgkrnl/dxgprocess.c | 355 ++
> drivers/gpu/dxgkrnl/dxgvmbus.c | 2955 +++++++++++++++++
> drivers/gpu/dxgkrnl/dxgvmbus.h | 859 +++++
> drivers/gpu/dxgkrnl/hmgr.c | 593 ++++
> drivers/gpu/dxgkrnl/hmgr.h | 107 +
> drivers/gpu/dxgkrnl/ioctl.c | 5269 ++++++++++++++++++++++++++++++
> drivers/gpu/dxgkrnl/misc.c | 280 ++
> drivers/gpu/dxgkrnl/misc.h | 288 ++
> drivers/video/Kconfig | 2 +
> include/linux/hyperv.h | 16 +
> 18 files changed, 15393 insertions(+), 1 deletion(-)
> create mode 100644 drivers/gpu/dxgkrnl/Kconfig
> create mode 100644 drivers/gpu/dxgkrnl/Makefile
> create mode 100644 drivers/gpu/dxgkrnl/d3dkmthk.h
> create mode 100644 drivers/gpu/dxgkrnl/dxgadapter.c
> create mode 100644 drivers/gpu/dxgkrnl/dxgkrnl.h
> create mode 100644 drivers/gpu/dxgkrnl/dxgmodule.c
> create mode 100644 drivers/gpu/dxgkrnl/dxgprocess.c
> create mode 100644 drivers/gpu/dxgkrnl/dxgvmbus.c
> create mode 100644 drivers/gpu/dxgkrnl/dxgvmbus.h
> create mode 100644 drivers/gpu/dxgkrnl/hmgr.c
> create mode 100644 drivers/gpu/dxgkrnl/hmgr.h
> create mode 100644 drivers/gpu/dxgkrnl/ioctl.c
> create mode 100644 drivers/gpu/dxgkrnl/misc.c
> create mode 100644 drivers/gpu/dxgkrnl/misc.h
>
> --
> 2.25.1
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-19 19:21 ` [RFC PATCH 0/4] DirectX on Linux Daniel Vetter
@ 2020-05-19 20:36 ` Sasha Levin
2020-05-20 10:37 ` Jan Engelhardt
2020-06-28 23:39 ` James Hilliard
0 siblings, 2 replies; 28+ messages in thread
From: Sasha Levin @ 2020-05-19 20:36 UTC (permalink / raw)
To: Daniel Vetter
Cc: linux-hyperv, Olof Johansson, Tvrtko Ursulin, Greg KH,
Haiyang Zhang, Linux Kernel Mailing List, dri-devel,
Wilson, Chris, Jerome Glisse, spronovo,
Linux Fbdev development list, Jason Ekstrand, iourit,
Alex Deucher, Stephen Hemminger, K. Y. Srinivasan, Wei Liu,
Hawking Zhang
Hi Daniel,
On Tue, May 19, 2020 at 09:21:15PM +0200, Daniel Vetter wrote:
>Hi Sasha
>
>So obviously great that Microsoft is trying to upstream all this, and
>very much welcome and all that.
>
>But I guess there's a bunch of rather fundamental issues before we
>look into any kind of code details. And that might make this quite a
>hard sell for upstream to drivers/gpu subsystem:
Let me preface my answers by saying that speaking personally I very much
dislike that the userspace is closed and wish I could do something about
it.
>- From the blog it sounds like the userspace is all closed. That
>includes the hw specific part and compiler chunks, all stuff we've
>generally expected to be able to look at in the past for any kind of
>other driver. It's even documented here:
>
>https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#open-source-userspace-requirements
>
>What's your plan here?
Let me answer with a (genuine) question: does this driver have anything
to do with DRM even after we enable graphics on it? I'm still trying to
figure it out.
There is an open source DX12 Galluim driver (that lives here:
https://gitlab.freedesktop.org/kusma/mesa/-/tree/msclc-d3d12) with open
source compiler and so on.
The plan is for Microsoft to provide shims to allow the existing Linux
userspace to interact with DX12; I'll explain below why we had to pipe DX12
all the way into the Linux guest, but this is *not* to introduce DX12
into the Linux world as competition. There is no intent for anyone in
the Linux world to start coding for the DX12 API.
This is why I'm not sure whether this touches DRM on the Linux side of
things. Nothing is actually rendered on Linux but rather piped to
Windows to be done there.
>btw since the main goal here (at least at first) seems to be to get
>compute and ML going, the official work-around here is to relabel your
>driver as an accelerator driver (just sed -e s/vGPU/vaccel/ over the
>entire thing or so) and then Olof and Greg will take it into
>drivers/accel ...
This submission is not a case of "we want it upstream NOW" but rather
"let's work together to figure out how to do it right" :)
I thought about placing this driver in drivers/hyper-v/ given that it's
basically just a pipe between the host and the guest. There is no fancy
logic in this driver. Maybe the right place is indeed drivers/accel or
drivers/hyper-v, but I'd love it if we agree on that rather than doing it
as a workaround and 6 months down the road enabling graphics.
>- Next up (but that's not really a surprise for a fresh vendor driver)
>at a more technical level, this seems to reinvent the world, from
>device enumeration (why is this not exposed as /dev/dri/card0 so it
>better integrates with existing linux desktop stuff, in case that
>becomes a goal ever) down to reinvented kref_put_mutex (and please
>look at drm_device->struct_mutex for an example of how bad of a
>nightmare that locking pattern is and how many years it took us to
>untangle that one.)
I'd maybe note that neither of us here at Microsoft is an expert in the
Linux DRM world. Stuff might have been done in a certain way because we
didn't know better.
>- Why DX12 on linux? Looking at this feels like classic divide and
There is a single use case for this: a WSL2 developer who wants to run
machine learning on his GPU. The developer is working on his laptop,
which is running Windows and that laptop has a single GPU that Windows
is using.
Since the GPU is being used by Windows, we can't assign it directly to
the Linux guest, but instead we can use GPU Partitioning to give the
guest access to the GPU. This means that the guest needs to be able to
"speak" DX12, which is why we pulled DX12 into Linux.
>conquer (or well triple E from the 90s), we have vk, we have
>drm_syncobj, we have an entire ecosystem of winsys layers that work
>across vendors. Is the plan here that we get a dx12 driver for other
>hw mesa drivers from you guys, so this is all consistent and we have a
>nice linux platform? How does this integrate everywhere else with
>linux winsys standards, like dma-buf for passing stuff around,
>dma-fence/sync_file/drm_syncobj for syncing, drm_fourcc/modifiers for
>some idea how it all meshes together?
Let me point you to this blog post that has more information about the
graphics side of things:
https://www.collabora.com/news-and-blog/news-and-events/introducing-opencl-and-opengl-on-directx.html
.
The intent is to wrap DX12 with shims to work with the existing
ecosystem; DX12 isn't a new player on its own and thus isn't trying to
divide/conquer anything.
>- There's been a pile of hallway track/private discussions about
>moving on from the buffer-based memory managed model to something more
>modern. That relates to your DXLOCK2 question, but there's a lot more
>to userspace managed gpu memory residency than just that. monitored
>fences are another part. Also, to avoid a platform split we need to
>figure out how to tie this back into the dma-buf and dma-fence
>(including various uapi flavours) or it'll be made of fail. dx12 has
>all that in some form, except 0 integration with the linux stuff we
>have (no surprise, since linux isn't windows). Finally if we go to the
>trouble of a completely revamped I think ioctls aren't a great idea,
>something like iouring (the gossip name is drm_uring) would be a lot
>better. Also for easier paravirt we'd need 0 cpu pointers in any such
>new interface. Adding a few people who've been involved in these
>discussions thus far, mostly under a drm/hmm.ko heading iirc.
>
>I think the above are the really big ticket items around what's the
>plan here and are we solving even the right problem.
Part of the reason behind this implementation is simplicity. Again, no
objections around moving to uring and doing other improvements.
--
Thanks,
Sasha
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-19 20:36 ` Sasha Levin
@ 2020-05-20 10:37 ` Jan Engelhardt
2020-06-28 23:39 ` James Hilliard
1 sibling, 0 replies; 28+ messages in thread
From: Jan Engelhardt @ 2020-05-20 10:37 UTC (permalink / raw)
To: Sasha Levin
Cc: linux-hyperv, Olof Johansson, Tvrtko Ursulin, Greg KH,
Haiyang Zhang, Linux Kernel Mailing List, dri-devel,
Wilson, Chris, Jerome Glisse, spronovo,
Linux Fbdev development list, Jason Ekstrand, iourit,
Alex Deucher, Stephen Hemminger, K. Y. Srinivasan, Wei Liu,
Hawking Zhang
On Tuesday 2020-05-19 22:36, Sasha Levin wrote:
>
>> - Why DX12 on linux? Looking at this feels like classic divide and
>
> There is a single usecase for this: WSL2 developer who wants to run
> machine learning on his GPU. The developer is working on his laptop,
> which is running Windows and that laptop has a single GPU that Windows
> is using.
It does not feel right conceptually. If the target is a Windows API
(DX12/ML), why bother with Linux environments? Make it a Windows executable,
thereby skipping the WSL translation layer and passthrough.
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-19 20:36 ` Sasha Levin
2020-05-20 10:37 ` Jan Engelhardt
@ 2020-06-28 23:39 ` James Hilliard
1 sibling, 0 replies; 28+ messages in thread
From: James Hilliard @ 2020-06-28 23:39 UTC (permalink / raw)
To: Sasha Levin
Cc: linux-hyperv, Olof Johansson, Tvrtko Ursulin, Greg KH,
Haiyang Zhang, Linux Kernel Mailing List, dri-devel,
Wilson, Chris, Jerome Glisse, spronovo,
Linux Fbdev development list, Jason Ekstrand, iourit,
Alex Deucher, Stephen Hemminger, K. Y. Srinivasan, Wei Liu,
Hawking Zhang
On Tue, May 19, 2020 at 2:36 PM Sasha Levin <sashal@kernel.org> wrote:
>
> Hi Daniel,
>
> On Tue, May 19, 2020 at 09:21:15PM +0200, Daniel Vetter wrote:
> >Hi Sasha
> >
> >So obviously great that Microsoft is trying to upstream all this, and
> >very much welcome and all that.
> >
> >But I guess there's a bunch of rather fundamental issues before we
> >look into any kind of code details. And that might make this quite a
> >hard sell for upstream to drivers/gpu subsystem:
>
> Let me preface my answers by saying that speaking personally I very much
> dislike that the userspace is closed and wish I could do something about
> it.
>
> >- From the blog it sounds like the userspace is all closed. That
> >includes the hw specific part and compiler chunks, all stuff we've
> >generally expected to be able to look at in the past for any kind of
> >other driver. It's even documented here:
> >
> >https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#open-source-userspace-requirements
> >
> >What's your plan here?
>
> Let me answer with a (genuine) question: does this driver have anything
> to do with DRM even after we enable graphics on it? I'm still trying to
> figure it out.
>
> There is an open source DX12 Galluim driver (that lives here:
> https://gitlab.freedesktop.org/kusma/mesa/-/tree/msclc-d3d12) with open
> source compiler and so on.
>
> The plan is for Microsoft to provide shims to allow the existing Linux
> userspace to interact with DX12; I'll explain below why we had to pipe DX12
> all the way into the Linux guest, but this is *not* to introduce DX12
> into the Linux world as competition. There is no intent for anyone in
> the Linux world to start coding for the DX12 API.
If that really is the case, why is Microsoft recommending that developers
break compatibility with native Linux and use the DX12 APIs here:
https://devblogs.microsoft.com/directx/in-the-works-opencl-and-opengl-mapping-layers-to-directx/
Quote:
"Make it easier for developers to port their apps to D3D12. For developers
looking to move from older OpenCL and OpenGL API versions to D3D12,
the open source mapping layers will provide helpful example code on how
to use the D3D12 Translation Layer library."
If developers of applications that use OpenCL and OpenGL APIs were to
follow this advice and transition to D3D12, their applications would no longer
work on Linux systems unless using WSL2. Is Microsoft planning on creating
a D3D12/DirectML frontend that doesn't depend on WSL2?
>
> This is why I'm not sure whether this touches DRM on the Linux side of
> things. Nothing is actually rendered on Linux but rather piped to
> Windows to be done there.
>
> >btw since the main goal here (at least at first) seems to be to get
> >compute and ML going, the official work-around here is to relabel your
> >driver as an accelerator driver (just sed -e s/vGPU/vaccel/ over the
> >entire thing or so) and then Olof and Greg will take it into
> >drivers/accel ...
>
> This submission is not a case of "we want it upstream NOW" but rather
> "let's work together to figure out how to do it right" :)
>
> I thought about placing this driver in drivers/hyper-v/ given that it's
> basically just a pipe between the host and the guest. There is no fancy
> logic in this driver. Maybe the right place is indeed drivers/accel or
> drivers/hyper-v, but I'd love it if we agree on that rather than doing it
> as a workaround and 6 months down the road enabling graphics.
>
> >- Next up (but that's not really a surprise for a fresh vendor driver)
> >at a more technical level, this seems to reinvent the world, from
> >device enumeration (why is this not exposed as /dev/dri/card0 so it
> >better integrates with existing linux desktop stuff, in case that
> >becomes a goal ever) down to reinvented kref_put_mutex (and please
> >look at drm_device->struct_mutex for an example of how bad of a
> >nightmare that locking pattern is and how many years it took us to
> >untangle that one.)
>
> I'd maybe note that neither of us here at Microsoft is an expert in the
> Linux DRM world. Stuff might have been done in a certain way because we
> didn't know better.
>
> >- Why DX12 on linux? Looking at this feels like classic divide and
>
> There is a single use case for this: a WSL2 developer who wants to run
> machine learning on his GPU. The developer is working on his laptop,
> which is running Windows and that laptop has a single GPU that Windows
> is using.
>
> Since the GPU is being used by Windows, we can't assign it directly to
> the Linux guest, but instead we can use GPU Partitioning to give the
> guest access to the GPU. This means that the guest needs to be able to
> "speak" DX12, which is why we pulled DX12 into Linux.
>
> >conquer (or well triple E from the 90s), we have vk, we have
> >drm_syncobj, we have an entire ecosystem of winsys layers that work
> >across vendors. Is the plan here that we get a dx12 driver for other
> >hw mesa drivers from you guys, so this is all consistent and we have a
> >nice linux platform? How does this integrate everywhere else with
> >linux winsys standards, like dma-buf for passing stuff around,
> >dma-fence/sync_file/drm_syncobj for syncing, drm_fourcc/modifiers for
> >some idea how it all meshes together?
>
> Let me point you to this blog post that has more information about the
> graphics side of things:
> https://www.collabora.com/news-and-blog/news-and-events/introducing-opencl-and-opengl-on-directx.html
> .
>
> The intent is to wrap DX12 with shims to work with the existing
> ecosystem; DX12 isn't a new player on its own and thus isn't trying to
> divide/conquer anything.
Shouldn't tensorflow/machine learning be going through the OpenCL
compatibility layer/shims instead of talking directly to DX12/DirectML?
If tensorflow or any other machine learning software uses DX12 APIs
directly then it won't be compatible with Linux unless running on top
of WSL2.
>
> >- There's been a pile of hallway track/private discussions about
> >moving on from the buffer-based memory managed model to something more
> >modern. That relates to your DXLOCK2 question, but there's a lot more
> >to userspace managed gpu memory residency than just that. monitored
> >fences are another part. Also, to avoid a platform split we need to
> >figure out how to tie this back into the dma-buf and dma-fence
> >(including various uapi flavours) or it'll be made of fail. dx12 has
> >all that in some form, except 0 integration with the linux stuff we
> >have (no surprise, since linux isn't windows). Finally if we go to the
> >trouble of a completely revamped uapi I think ioctls aren't a great idea;
> >something like io_uring (the gossip name is drm_uring) would be a lot
> >better. Also for easier paravirt we'd need 0 cpu pointers in any such
> >new interface. Adding a few people who've been involved in these
> >discussions thus far, mostly under a drm/hmm.ko heading iirc.
> >
> >I think the above are the really big ticket items around what's the
> >plan here and are we solving even the right problem.
>
> Part of the reason behind this implementation is simplicity. Again, no
> objections around moving to uring and doing other improvements.
>
> --
> Thanks,
> Sasha
>
>
>
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-19 16:32 [RFC PATCH 0/4] DirectX on Linux Sasha Levin
` (4 preceding siblings ...)
2020-05-19 19:21 ` [RFC PATCH 0/4] DirectX on Linux Daniel Vetter
@ 2020-05-19 22:42 ` Dave Airlie
2020-05-19 23:01 ` Daniel Vetter
2020-05-19 23:12 ` Dave Airlie
2020-05-20 7:10 ` Thomas Zimmermann
6 siblings, 2 replies; 28+ messages in thread
From: Dave Airlie @ 2020-05-19 22:42 UTC (permalink / raw)
To: Sasha Levin
Cc: linux-hyperv, sthemmin, Ursulin, Tvrtko, Greg Kroah-Hartman,
haiyangz, LKML, dri-devel, Chris Wilson, spronovo,
Linux Fbdev development list, iourit, Deucher, Alexander, kys,
wei.liu, Hawking Zhang
On Wed, 20 May 2020 at 02:33, Sasha Levin <sashal@kernel.org> wrote:
>
> There is a blog post that goes into more detail about the bigger
> picture, and walks through all the required pieces to make this work. It
> is available here:
> https://devblogs.microsoft.com/directx/directx-heart-linux . The rest of
> this cover letter will focus on the Linux Kernel bits.
>
> Overview
> ====
>
> This is the first draft of the Microsoft Virtual GPU (vGPU) driver. The
> driver exposes a paravirtualized GPU to user mode applications running
> in a virtual machine on a Windows host. This enables hardware
> acceleration in environments such as WSL (Windows Subsystem for Linux)
> where the Linux virtual machine is able to share the GPU with the
> Windows host.
>
> The projection is accomplished by exposing the WDDM (Windows Display
> Driver Model) interface as a set of IOCTLs. This allows APIs and user
> mode drivers written against the WDDM GPU abstraction on Windows to be
> ported to run within a Linux environment. This enables the port of the
> D3D12 and DirectML APIs as well as their associated user mode drivers to
> Linux. This also enables third party APIs, such as the popular NVIDIA
> CUDA compute API, to be hardware accelerated within a WSL environment.
>
> Only the rendering/compute aspects of the GPU are projected to the
> virtual machine; no display functionality is exposed. Further, at this
> time there is no presentation integration. So although the D3D12 API
> can be used to render graphics offscreen, there is no path (yet) for
> pixels to flow from the Linux environment back onto the Windows host
> desktop. This GPU stack is effectively side-by-side with the native
> Linux graphics stack.
Okay I've had some caffeine and absorbed some more of this.
This is a driver that connects a binary blob interface in the Windows
kernel drivers to a binary blob that you run inside a Linux guest.
It's a binary transport between two binary pieces. Personally this
holds little of interest to me, I can see why it might be nice to have
this upstream, but I don't foresee any other Linux distributor ever
enabling it or having to ship it, it's purely a WSL2 pipe. I'm not
saying I'd be happy to see this in the tree, since I don't see the
value of maintaining it upstream, but it probably should just exist
in a drivers/hyperv type area.
Having said that, I hit one stumbling block:
"Further, at this time there are no presentation integration. "
If we upstream this driver as-is into some hyperv specific place, and
you decide to add presentation integration this is more than likely
going to mean you will want to interact with dma-bufs and dma-fences.
If the driver is hidden away in a hyperv place it's likely we won't
even notice that feature landing until it's too late.
I would like to see a coherent plan for presentation support (not
code, just an architectural diagram), because I think when you
contemplate how that works it will change the picture of how this
driver looks and integrates into the rest of the Linux graphics
ecosystem.
As-is I'd rather this didn't land under my purview, since I don't see
the value this adds to the Linux ecosystem at all, and I think it's
important when putting a burden on upstream that you provide some
value.
Dave.
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-19 22:42 ` Dave Airlie
@ 2020-05-19 23:01 ` Daniel Vetter
2020-05-20 3:47 ` [EXTERNAL] " Steve Pronovost
2020-05-19 23:12 ` Dave Airlie
1 sibling, 1 reply; 28+ messages in thread
From: Daniel Vetter @ 2020-05-19 23:01 UTC (permalink / raw)
To: Dave Airlie
Cc: Sasha Levin, linux-hyperv, Stephen Hemminger, Ursulin, Tvrtko,
Greg Kroah-Hartman, Haiyang Zhang, LKML, dri-devel, Chris Wilson,
spronovo, Linux Fbdev development list, iourit,
Deucher, Alexander, K. Y. Srinivasan, Wei Liu, Hawking Zhang
On Wed, May 20, 2020 at 12:42 AM Dave Airlie <airlied@gmail.com> wrote:
>
> On Wed, 20 May 2020 at 02:33, Sasha Levin <sashal@kernel.org> wrote:
> >
> > There is a blog post that goes into more detail about the bigger
> > picture, and walks through all the required pieces to make this work. It
> > is available here:
> > https://devblogs.microsoft.com/directx/directx-heart-linux . The rest of
> > this cover letter will focus on the Linux Kernel bits.
> >
> > Overview
> > ====
> >
> > This is the first draft of the Microsoft Virtual GPU (vGPU) driver. The
> > driver exposes a paravirtualized GPU to user mode applications running
> > in a virtual machine on a Windows host. This enables hardware
> > acceleration in environments such as WSL (Windows Subsystem for Linux)
> > where the Linux virtual machine is able to share the GPU with the
> > Windows host.
> >
> > The projection is accomplished by exposing the WDDM (Windows Display
> > Driver Model) interface as a set of IOCTLs. This allows APIs and user
> > mode drivers written against the WDDM GPU abstraction on Windows to be
> > ported to run within a Linux environment. This enables the port of the
> > D3D12 and DirectML APIs as well as their associated user mode drivers to
> > Linux. This also enables third party APIs, such as the popular NVIDIA
> > CUDA compute API, to be hardware accelerated within a WSL environment.
> >
> > Only the rendering/compute aspects of the GPU are projected to the
> > virtual machine; no display functionality is exposed. Further, at this
> > time there is no presentation integration. So although the D3D12 API
> > can be used to render graphics offscreen, there is no path (yet) for
> > pixels to flow from the Linux environment back onto the Windows host
> > desktop. This GPU stack is effectively side-by-side with the native
> > Linux graphics stack.
>
> Okay I've had some caffeine and absorbed some more of this.
>
> This is a driver that connects a binary blob interface in the Windows
> kernel drivers to a binary blob that you run inside a Linux guest.
> It's a binary transport between two binary pieces. Personally this
> holds little of interest to me, I can see why it might be nice to have
> this upstream, but I don't foresee any other Linux distributor ever
> enabling it or having to ship it, it's purely a WSL2 pipe. I'm not
> saying I'd be happy to see this in the tree, since I don't see the
> value of maintaining it upstream, but it probably should just exist
> in a drivers/hyperv type area.
Yup as-is (especially with the goal of this being aimed at ml/compute
only) drivers/hyperv sounds a bunch more reasonable than drivers/gpu.
> Having said that, I hit one stumbling block:
> "Further, at this time there are no presentation integration. "
>
> If we upstream this driver as-is into some hyperv specific place, and
> you decide to add presentation integration this is more than likely
> going to mean you will want to interact with dma-bufs and dma-fences.
> If the driver is hidden away in a hyperv place it's likely we won't
> even notice that feature landing until it's too late.
I've recently added regex matches to MAINTAINERS so we'll see
dma_buf/fence/anything show up on dri-devel. So that part is solved
hopefully.
> I would like to see a coherent plan for presentation support (not
> code, just an architectural diagram), because I think when you
> contemplate how that works it will change the picture of how this
> driver looks and integrates into the rest of the Linux graphics
> ecosystem.
Yeah once we have the feature-creep to presentation support all the
integration fun starts, with all the questions about "why does this
not look like any other linux gpu driver". We have that already with
nvidia insisting they just can't implement any of the upstream gpu
uapi we have, but at least they're not in-tree, so not our problem
from an upstream maintainership pov.
But once this dx12 pipe has landed and we want to extend it, it's
still going to have all the "we can't ever release the sources to any
of the parts we usually expect to be open for gpu drivers in upstream"
problems. Then we're stuck at a rather awkward point of why one vendor
gets an exception and all the others don't.
> As-is I'd rather this didn't land under my purview, since I don't see
> the value this adds to the Linux ecosystem at all, and I think it's
> important when putting a burden on upstream that you provide some
> value.
Well there is some in the form of "more hw/platform support". But
given that gpus evolve rather fast, including the entire integration
ecosystem (it's by far not just the hw drivers that move quickly),
that value depreciates a lot faster than for other kernel subsystems.
And all that's left is the pain of not breaking anything without
actually being able to evolve the overall stack in any meaningful way.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
* RE: [EXTERNAL] Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-19 23:01 ` Daniel Vetter
@ 2020-05-20 3:47 ` Steve Pronovost
[not found] ` <CAKMK7uFubAxtMEeCOYtvgjGYtmDVJeXcPFzmRD7t5BUm_GPP0w@mail.gmail.com>
2020-06-16 10:51 ` Pavel Machek
0 siblings, 2 replies; 28+ messages in thread
From: Steve Pronovost @ 2020-05-20 3:47 UTC (permalink / raw)
To: Daniel Vetter, Dave Airlie
Cc: Sasha Levin, linux-hyperv@vger.kernel.org, Stephen Hemminger,
Ursulin, Tvrtko, Greg Kroah-Hartman, Haiyang Zhang, LKML,
dri-devel, Chris Wilson, Wei Liu, Linux Fbdev development list,
Iouri Tarassov, Deucher, Alexander, KY Srinivasan, Hawking Zhang
Hey guys,

Thanks for the discussion. I may not be able to immediately answer all of
your questions, but I'll do my best 😊.

drivers/hyperv sounds like it could be a better location. We weren't too
sure where to put this; we thought /drivers/gpu would be appropriate given
it deals with GPUs, but I get your point... this is a vGPU driver that
really only works when being run under Hyper-V, so drivers/hyperv is likely
more appropriate.

In terms of presentation, I need to clarify a few things. We announced today
that we're also adding support for Linux GUI applications. The way this will
work is roughly as follows. We're writing a Wayland compositor that will
essentially bridge over RDP-RAIL (RAIL=Remote Application Integrated
Locally). We're starting from a Weston base. Weston already has an RDP
backend, but that's for a full desktop remoting scheme: Weston draws a
desktop and remotes it over RDP... and then you can peek at that desktop
using an RDP client on the Windows side. RAIL works differently. In that
case our Wayland compositor no longer paints a desktop... instead it simply
forwards individual visuals / wl_surfaces over the RDP RAIL channel such
that these visuals can be displayed on the Windows desktop. The RDP client
creates a proxy window for each of these top level visuals, and their
content is filled with the data coming over the RDP channel. All pixels are
owned by the RDP server/WSL... so these windows look different than native
windows, as they are painted and themed by WSL. The proxy window on the host
gathers input and injects it back over RDP... This is essentially how
application remoting works on Windows, and this is all publicly documented
as part of the various RDP protocol specifications. As a matter of fact, for
the RDP server on the Weston side we are looking at continuing to leverage
FreeRDP (and providing fixes/enhancements as needed to the public project).

Further, we're looking at improvements down this path to avoid having to
copy the content over the RAIL channel and instead just share/swap buffers
between the guest and the host. We have an extension to the RDP protocol,
called VAIL (Virtualized Application Integrated Locally), which does that
today. Today this is only used in Windows-on-Windows for very specific
scenarios. We're looking at extending the public RDP protocol with these
VAIL extensions to make this an official Microsoft supported protocol, which
would allow us to target this in WSL. We have finished designing this part
in detail. Our goal would be to leverage something along the lines of
wl_drm, dma-buf, dma-fence, etc... This compositor and all our contributions
to FreeRDP will be fully open source, including our design doc. We're not
quite sure yet whether this will be offered as a separate project entirely
distinct from its Weston root... or if we'll propose an extension to Weston
to operate in this mode. We would like to build it such that in theory any
Wayland compositor could add support for this mode of operation if they want
to remote applications to a Windows host (over the network, or on the same
box).

We see /dev/dxg really as a projection of the GPU when running in WSL, such
that the GPU can be shared between WSL and the host... not something that
would coexist "at the same time" with a real DRM GPU.

We have considered the possibility of bringing DX to Linux with no Windows
cord attached. I'm not ready to discuss this at this time 😊... but in the
hypothetical that we were to do this, DX would be running on top of DRI/DRM
on native Linux. We likely would be contributing some changes to DRM to
address areas of divergence and get better mapping for our user mode driver,
but we wouldn't try to shoehorn /dev/dxg into the picture. In that
hypothetical world, we would essentially have DX target DRM on native Linux,
and DX continue to target DXG in WSL to share the GPU with the host. I think
this further reinforces the point you guys were making that the right place
for our current dxgkrnl driver to live in would be /drivers/hyperv/dxgkrnl.
In hindsight, I totally agree 😊.

I think this covers all questions, let me know if I missed anything.

Thanks,
Steve

-----Original Message-----
From: Daniel Vetter <daniel@ffwll.ch>
Sent: Tuesday, May 19, 2020 4:01 PM
To: Dave Airlie <airlied@gmail.com>
Cc: Sasha Levin <sashal@kernel.org>; linux-hyperv@vger.kernel.org; Stephen
Hemminger <sthemmin@microsoft.com>; Ursulin, Tvrtko
<tvrtko.ursulin@intel.com>; Greg Kroah-Hartman <gregkh@linuxfoundation.org>;
Haiyang Zhang <haiyangz@microsoft.com>; LKML <linux-kernel@vger.kernel.org>;
dri-devel <dri-devel@lists.freedesktop.org>; Chris Wilson
<chris@chris-wilson.co.uk>; Steve Pronovost <spronovo@microsoft.com>; Linux
Fbdev development list <linux-fbdev@vger.kernel.org>; Iouri Tarassov
<iourit@microsoft.com>; Deucher, Alexander <alexander.deucher@amd.com>; KY
Srinivasan <kys@microsoft.com>; Wei Liu <wei.liu@kernel.org>; Hawking Zhang
<Hawking.Zhang@amd.com>
Subject: [EXTERNAL] Re: [RFC PATCH 0/4] DirectX on Linux

On Wed, May 20, 2020 at 12:42 AM Dave Airlie <airlied@gmail.com> wrote:
>
> On Wed, 20 May 2020 at 02:33, Sasha Levin <sashal@kernel.org> wrote:
> >
> > There is a blog post that goes into more detail about the bigger
> > picture, and walks through all the required pieces to make this
> > work. It is available here:
> > https://nam06.safelinks.protection.outlook.com/?u
cmw9aHR0cHMlM0ElMkYlMkZkZQ0KPiA+IHZibG9ncy5taWNyb3NvZnQuY29tJTJGZGlyZWN0eCUy
RmRpcmVjdHgtaGVhcnQtbGludXgmYW1wO2RhdGE9MDIlN0MwMSU3Q3Nwcm9ub3ZvJTQwbWljcm9z
b2Z0LmNvbSU3QzNmMThlNDYxOTJiMjRjY2NmNmEwMDhkN2ZjNDg5MDYzJTdDNzJmOTg4YmY4NmYx
NDFhZjkxYWIyZDdjZDAxMWRiNDclN0MxJTdDMCU3QzYzNzI1NTI2MDkxMDczMDI0MyZhbXA7c2Rh
dGE9SVJSa256ZyUyRjZNeXpqM0pYRVNON0dnbU42QWNVVjNEeGhMOTVQJTJCdXR0Q3clM0QmYW1w
O3Jlc2VydmVkPTAgLiBUaGUgcmVzdCBvZiB0aGlzIGNvdmVyIGxldHRlciB3aWxsIGZvY3VzIG9u
IHRoZSBMaW51eCBLZXJuZWwgYml0cy4NCj4gPg0KPiA+IE92ZXJ2aWV3DQo+ID4gPT09PT09PT0N
Cj4gPg0KPiA+IFRoaXMgaXMgdGhlIGZpcnN0IGRyYWZ0IG9mIHRoZSBNaWNyb3NvZnQgVmlydHVh
bCBHUFUgKHZHUFUpIGRyaXZlci4gDQo+ID4gVGhlIGRyaXZlciBleHBvc2VzIGEgcGFyYXZpcnR1
YWxpemVkIEdQVSB0byB1c2VyIG1vZGUgYXBwbGljYXRpb25zIA0KPiA+IHJ1bm5pbmcgaW4gYSB2
aXJ0dWFsIG1hY2hpbmUgb24gYSBXaW5kb3dzIGhvc3QuIFRoaXMgZW5hYmxlcyANCj4gPiBoYXJk
d2FyZSBhY2NlbGVyYXRpb24gaW4gZW52aXJvbm1lbnQgc3VjaCBhcyBXU0wgKFdpbmRvd3MgU3Vi
c3lzdGVtIA0KPiA+IGZvciBMaW51eCkgd2hlcmUgdGhlIExpbnV4IHZpcnR1YWwgbWFjaGluZSBp
cyBhYmxlIHRvIHNoYXJlIHRoZSBHUFUgDQo+ID4gd2l0aCB0aGUgV2luZG93cyBob3N0Lg0KPiA+
DQo+ID4gVGhlIHByb2plY3Rpb24gaXMgYWNjb21wbGlzaGVkIGJ5IGV4cG9zaW5nIHRoZSBXRERN
IChXaW5kb3dzIERpc3BsYXkgDQo+ID4gRHJpdmVyIE1vZGVsKSBpbnRlcmZhY2UgYXMgYSBzZXQg
b2YgSU9DVEwuIFRoaXMgYWxsb3dzIEFQSXMgYW5kIHVzZXIgDQo+ID4gbW9kZSBkcml2ZXIgd3Jp
dHRlbiBhZ2FpbnN0IHRoZSBXRERNIEdQVSBhYnN0cmFjdGlvbiBvbiBXaW5kb3dzIHRvIA0KPiA+
IGJlIHBvcnRlZCB0byBydW4gd2l0aGluIGEgTGludXggZW52aXJvbm1lbnQuIFRoaXMgZW5hYmxl
cyB0aGUgcG9ydCANCj4gPiBvZiB0aGUNCj4gPiBEM0QxMiBhbmQgRGlyZWN0TUwgQVBJcyBhcyB3
ZWxsIGFzIHRoZWlyIGFzc29jaWF0ZWQgdXNlciBtb2RlIGRyaXZlciANCj4gPiB0byBMaW51eC4g
VGhpcyBhbHNvIGVuYWJsZXMgdGhpcmQgcGFydHkgQVBJcywgc3VjaCBhcyB0aGUgcG9wdWxhciAN
Cj4gPiBOVklESUEgQ3VkYSBjb21wdXRlIEFQSSwgdG8gYmUgaGFyZHdhcmUgYWNjZWxlcmF0ZWQg
d2l0aGluIGEgV1NMIGVudmlyb25tZW50Lg0KPiA+DQo+ID4gT25seSB0aGUgcmVuZGVyaW5nL2Nv
bXB1dGUgYXNwZWN0IG9mIHRoZSBHUFUgYXJlIHByb2plY3RlZCB0byB0aGUgDQo+ID4gdmlydHVh
bCBtYWNoaW5lLCBubyBkaXNwbGF5IGZ1bmN0aW9uYWxpdHkgaXMgZXhwb3NlZC4gRnVydGhlciwg
YXQgDQo+ID4gdGhpcyB0aW1lIHRoZXJlIGFyZSBubyBwcmVzZW50YXRpb24gaW50ZWdyYXRpb24u
IFNvIGFsdGhvdWdoIHRoZSANCj4gPiBEM0QxMiBBUEkgY2FuIGJlIHVzZSB0byByZW5kZXIgZ3Jh
cGhpY3Mgb2Zmc2NyZWVuLCB0aGVyZSBpcyBubyBwYXRoIA0KPiA+ICh5ZXQpIGZvciBwaXhlbCB0
byBmbG93IGZyb20gdGhlIExpbnV4IGVudmlyb25tZW50IGJhY2sgb250byB0aGUgDQo+ID4gV2lu
ZG93cyBob3N0IGRlc2t0b3AuIFRoaXMgR1BVIHN0YWNrIGlzIGVmZmVjdGl2ZWx5IHNpZGUtYnkt
c2lkZSANCj4gPiB3aXRoIHRoZSBuYXRpdmUgTGludXggZ3JhcGhpY3Mgc3RhY2suDQo+DQo+IE9r
YXkgSSd2ZSBoYWQgc29tZSBjYWZmaWVuZSBhbmQgYWJzb3JiZWQgc29tZSBtb3JlIG9mIHRoaXMu
DQo+DQo+IFRoaXMgaXMgYSBkcml2ZXIgdGhhdCBjb25uZWN0cyBhIGJpbmFyeSBibG9iIGludGVy
ZmFjZSBpbiB0aGUgV2luZG93cyANCj4ga2VybmVsIGRyaXZlcnMgdG8gYSBiaW5hcnkgYmxvYiB0
aGF0IHlvdSBydW4gaW5zaWRlIGEgTGludXggZ3Vlc3QuDQo+IEl0J3MgYSBiaW5hcnkgdHJhbnNw
b3J0IGJldHdlZW4gdHdvIGJpbmFyeSBwaWVjZXMuIFBlcnNvbmFsbHkgdGhpcyANCj4gaG9sZHMg
bGl0dGxlIG9mIGludGVyZXN0IHRvIG1lLCBJIGNhbiBzZWUgd2h5IGl0IG1pZ2h0IGJlIG5pY2Ug
dG8gaGF2ZSANCj4gdGhpcyB1cHN0cmVhbSwgYnV0IEkgZG9uJ3QgZm9yc2VlIGFueSBvdGhlciBM
aW51eCBkaXN0cmlidXRvciBldmVyIA0KPiBlbmFibGluZyBpdCBvciBoYXZpbmcgdG8gc2hpcCBp
dCwgaXQncyBwdXJlbHkgYSBXU0wyIHBpcGUuIEknbSBub3QgDQo+IHNheWluZyBJJ2QgYmUgaGFw
cHkgdG8gc2VlIHRoaXMgaW4gdGhlIHRyZWUsIHNpbmNlIEkgZG9uJ3Qgc2VlIHRoZSANCj4gdmFs
dWUgb2YgbWFpbnRhaW5pbmcgaXQgdXBzdHJlYW0sIGJ1dCBpdCBwcm9iYWJseSBzaG91bGQganVz
dCBleGlzdHMgDQo+IGluIGEgZHJpdmVycy9oeXBlcnYgdHlwZSBhcmVhLg0KDQpZdXAgYXMtaXMg
KGVzcGVjaWFsbHkgd2l0aCB0aGUgZ29hbCBvZiB0aGlzIGJlaW5nIGFpbWVkIGF0IG1sL2NvbXB1
dGUNCm9ubHkpIGRyaXZlcnMvaHlwZXJ2IHNvdW5kcyBhIGJ1bmNoIG1vcmUgcmVhc29uYWJsZSB0
aGFuIGRyaXZlcnMvZ3B1Lg0KDQo+IEhhdmluZyBzYWlkIHRoYXQsIEkgaGl0IG9uZSBzdHVtYmxp
bmcgYmxvY2s6DQo+ICJGdXJ0aGVyLCBhdCB0aGlzIHRpbWUgdGhlcmUgYXJlIG5vIHByZXNlbnRh
dGlvbiBpbnRlZ3JhdGlvbi4gIg0KPg0KPiBJZiB3ZSB1cHN0cmVhbSB0aGlzIGRyaXZlciBhcy1p
cyBpbnRvIHNvbWUgaHlwZXJ2IHNwZWNpZmljIHBsYWNlLCBhbmQgDQo+IHlvdSBkZWNpZGUgdG8g
YWRkIHByZXNlbnRhdGlvbiBpbnRlZ3JhdGlvbiB0aGlzIGlzIG1vcmUgdGhhbiBsaWtlbHkgDQo+
IGdvaW5nIHRvIG1lYW4geW91IHdpbGwgd2FudCB0byBpbnRlcmFjdCB3aXRoIGRtYS1idWZzIGFu
ZCBkbWEtZmVuY2VzLg0KPiBJZiB0aGUgZHJpdmVyIGlzIGhpZGRlbiBhd2F5IGluIGEgaHlwZXJ2
IHBsYWNlIGl0J3MgbGlrZWx5IHdlIHdvbid0IA0KPiBldmVuIG5vdGljZSB0aGF0IGZlYXR1cmUg
bGFuZGluZyB1bnRpbCBpdCdzIHRvbyBsYXRlLg0KDQpJJ3ZlIHJlY2VudGx5IGFkZGVkIHJlZ2V4
IG1hdGNoZXMgdG8gTUFJTlRBSU5FUlMgc28gd2UnbGwgc2VlIGRtYV9idWYvZmVuY2UvYW55dGhp
bmcgc2hvdyB1cCBvbiBkcmktZGV2ZWwuIFNvIHRoYXQgcGFydCBpcyBzb2x2ZWQgaG9wZWZ1bGx5
Lg0KDQo+IEkgd291bGQgbGlrZSB0byBzZWUgYSBjb2hlcmVudCBwbGFuIGZvciBwcmVzZW50YXRp
b24gc3VwcG9ydCAobm90IA0KPiBjb2RlLCBqdXN0IGFuIGFyY2hpdGVjdHVyYWwgZGlhZ3JhbSks
IGJlY2F1c2UgSSB0aGluayB3aGVuIHlvdSANCj4gY29udGVtcGxhdGUgaG93IHRoYXQgd29ya3Mg
aXQgd2lsbCBjaGFuZ2UgdGhlIHBpY3R1cmUgb2YgaG93IHRoaXMgDQo+IGRyaXZlciBsb29rcyBh
bmQgaW50ZXJncmF0ZXMgaW50byB0aGUgcmVzdCBvZiB0aGUgTGludXggZ3JhcGhpY3MgDQo+IGVj
b3N5c3RlbS4NCg0KWWVhaCBvbmNlIHdlIGhhdmUgdGhlIGZlYXR1cmUtY3JlZXAgdG8gcHJlc2Vu
dGF0aW9uIHN1cHBvcnQgYWxsIHRoZSBpbnRlZ3JhdGlvbiBmdW4gc3RhcnRzLCB3aXRoIGFsbCB0
aGUgcXVlc3Rpb25zIGFib3V0ICJ3aHkgZG9lcyB0aGlzIG5vdCBsb29rIGxpa2UgYW55IG90aGVy
IGxpbnV4IGdwdSBkcml2ZXIiLiBXZSBoYXZlIHRoYXQgYWxyZWFkeSB3aXRoIG52aWRpYSBpbnNp
c3RpbmcgdGhleSBqdXN0IGNhbid0IGltcGxlbWVudCBhbnkgb2YgdGhlIHVwc3RyZWFtIGdwdSB1
YXBpIHdlIGhhdmUsIGJ1dCBhdCBsZWFzdCB0aGV5J3JlIG5vdCBpbi10cmVlLCBzbyBub3Qgb3Vy
IHByb2JsZW0gZnJvbSBhbiB1cHN0cmVhbSBtYWludGFpbmVyc2hpcCBwb3YuDQoNCkJ1dCBvbmNl
IHRoaXMgZHgxMiBwaXBlIGlzIGxhbmRlZCBhbmQgdGhlbiB3ZSB3YW50IHRvIGV4dGVuZCBpdCBp
dCdzIHN0aWxsIGdvaW5nIHRvIGhhdmUgYWxsIHRoZSAid2UgY2FuJ3QgZXZlciByZWxlYXNlIHRo
ZSBzb3VyY2VzIHRvIGFueSBvZiB0aGUgcGFydHMgd2UgdXN1YWxseSBleHBlY3QgdG8gYmUgb3Bl
biBmb3IgZ3B1IGRyaXZlcnMgaW4gdXBzdHJlYW0iDQpwcm9ibGVtcy4gVGhlbiB3ZSdyZSBzdHVj
ayBhdCBhIHJhdGhlciBhd2t3YXJkIHBvaW50IG9mIHdoeSBvbmUgdmVuZG9yIGdldHMgYW4gZXhj
ZXB0aW9uIGFuZCBhbGwgdGhlIG90aGVycyBkb250Lg0KDQo+IEFzLWlzIEknZCByYXRoZXIgdGhp
cyBkaWRuJ3QgbGFuZCB1bmRlciBteSBwdXJ2aWV3LCBzaW5jZSBJIGRvbid0IHNlZSANCj4gdGhl
IHZhbHVlIHRoaXMgYWRkcyB0byB0aGUgTGludXggZWNvc3lzdGVtIGF0IGFsbCwgYW5kIEkgdGhp
bmsgaXQncyANCj4gaW1wb3J0YW50IHdoZW4gcHV0dGluZyBhIGJ1cmRlbiBvbiB1cHN0cmVhbSB0
aGF0IHlvdSBwcm92aWRlIHNvbWUgDQo+IHZhbHVlLg0KDQpXZWxsIHRoZXJlIGlzIHNvbWUgaW4g
dGhlIGZvcm0gb2YgIm1vcmUgaHcvcGxhdGZvcm0gc3VwcG9ydCIuIEJ1dCBnaXZlbiB0aGF0IGdw
dXMgZXZvbHZlZCByYXRoZXIgZmFzdCwgaW5jbHVkaW5nIHRoZSBlbnRpcmUgaW50ZWdyYXRpb24g
ZWNvc3lzdGVtIChpdCdzIGJ5IGZhciBub3QganVzdCB0aGUgaHcgZHJpdmVycyB0aGF0IG1vdmUg
cXVpY2tseSkuIFNvIHRoYXQgdmFsdWUgZGVwcmVjYXRlcyBhIGxvdCBmYXN0ZXIgdGhhbiBmb3Ig
b3RoZXIga2VybmVsIHN1YnN5c3RlbXMuDQpBbmQgYWxsIHRoYXQncyBsZWZ0IGlzIHRoZSBwYWlu
IG9mIG5vdCBicmVha2luZyBhbnl0aGluZyB3aXRob3V0IGFjdHVhbGx5IGJlaW5nIGFibGUgdG8g
ZXZvbHZlIHRoZSBvdmVyYWxsIHN0YWNrIGluIGFueSBtZWFuaW5nZnVsIHdheS4NCi1EYW5pZWwN
Ci0tDQpEYW5pZWwgVmV0dGVyDQpTb2Z0d2FyZSBFbmdpbmVlciwgSW50ZWwgQ29ycG9yYXRpb24N
Cis0MSAoMCkgNzkgMzY1IDU3IDQ4IC0gDQoraHR0cHM6Ly9uYW0wNi5zYWZlbGlua3MucHJvdGVj
dGlvbi5vdXRsb29rLmNvbS8/dXJsPWh0dHAlM0ElMkYlMkZibG9nLmYNCitmd2xsLmNoJTJGJmFt
cDtkYXRhPTAyJTdDMDElN0NzcHJvbm92byU0MG1pY3Jvc29mdC5jb20lN0MzZjE4ZTQ2MTkyYjI0
Yw0KK2NjZjZhMDA4ZDdmYzQ4OTA2MyU3QzcyZjk4OGJmODZmMTQxYWY5MWFiMmQ3Y2QwMTFkYjQ3
JTdDMSU3QzAlN0M2MzcyNTUyDQorNjA5MTA3MzUyMzAmYW1wO3NkYXRhPWhBSVYxd0oyOVdGOUlY
VHZKbTNkcjRTdEN3UHpGMEdkTzJpV1B5Zm5FbGclM0QmYW0NCitwO3Jlc2VydmVkPTANCg=
^ permalink raw reply	[flat|nested] 28+ messages in thread
[parent not found: <CAKMK7uFubAxtMEeCOYtvgjGYtmDVJeXcPFzmRD7t5BUm_GPP0w@mail.gmail.com>]
* Re: [EXTERNAL] Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-20 3:47 ` [EXTERNAL] " Steve Pronovost
[not found] ` <CAKMK7uFubAxtMEeCOYtvgjGYtmDVJeXcPFzmRD7t5BUm_GPP0w@mail.gmail.com>
@ 2020-06-16 10:51 ` Pavel Machek
1 sibling, 0 replies; 28+ messages in thread
From: Pavel Machek @ 2020-06-16 10:51 UTC (permalink / raw)
To: Steve Pronovost
Cc: Sasha Levin, linux-hyperv@vger.kernel.org, Stephen Hemminger,
Ursulin, Tvrtko, Greg Kroah-Hartman, Haiyang Zhang, LKML,
dri-devel, Chris Wilson, Linux Fbdev development list, Wei Liu,
Iouri Tarassov, Deucher, Alexander, KY Srinivasan, Hawking Zhang
Hi!
> Thanks for the discussion. I may not be able to immediately answer all of your questions, but I'll do my best 😊.
>
Could you do something with your email settings? Because this is not how you should use
email on lkml. "[EXTERNAL]" in the subject, top-posting, unwrapped lines...
Thank you,
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-19 22:42 ` Dave Airlie
2020-05-19 23:01 ` Daniel Vetter
@ 2020-05-19 23:12 ` Dave Airlie
2020-06-16 10:51 ` Pavel Machek
1 sibling, 1 reply; 28+ messages in thread
From: Dave Airlie @ 2020-05-19 23:12 UTC (permalink / raw)
To: Sasha Levin
Cc: linux-hyperv, sthemmin, Ursulin, Tvrtko, Greg Kroah-Hartman,
haiyangz, LKML, dri-devel, Chris Wilson, spronovo,
Linux Fbdev development list, iourit, Deucher, Alexander, kys,
wei.liu, Hawking Zhang
On Wed, 20 May 2020 at 08:42, Dave Airlie <airlied@gmail.com> wrote:
>
> On Wed, 20 May 2020 at 02:33, Sasha Levin <sashal@kernel.org> wrote:
> >
> > There is a blog post that goes into more detail about the bigger
> > picture, and walks through all the required pieces to make this work. It
> > is available here:
> > https://devblogs.microsoft.com/directx/directx-heart-linux . The rest of
> > this cover letter will focus on the Linux Kernel bits.
> >
> > Overview
> > ========
> >
> > This is the first draft of the Microsoft Virtual GPU (vGPU) driver. The
> > driver exposes a paravirtualized GPU to user mode applications running
> > in a virtual machine on a Windows host. This enables hardware
> > acceleration in environments such as WSL (Windows Subsystem for Linux)
> > where the Linux virtual machine is able to share the GPU with the
> > Windows host.
> >
> > The projection is accomplished by exposing the WDDM (Windows Display
> > Driver Model) interface as a set of IOCTLs. This allows APIs and user
> > mode drivers written against the WDDM GPU abstraction on Windows to be
> > ported to run within a Linux environment. This enables the port of the
> > D3D12 and DirectML APIs as well as their associated user mode drivers to
> > Linux. This also enables third party APIs, such as the popular NVIDIA
> > Cuda compute API, to be hardware accelerated within a WSL environment.
> >
> > Only the rendering/compute aspects of the GPU are projected to the
> > virtual machine, no display functionality is exposed. Further, at this
> > time there is no presentation integration. So although the D3D12 API
> > can be used to render graphics offscreen, there is no path (yet) for
> > pixels to flow from the Linux environment back onto the Windows host
> > desktop. This GPU stack is effectively side-by-side with the native
> > Linux graphics stack.
>
> Okay I've had some caffeine and absorbed some more of this.
>
> This is a driver that connects a binary blob interface in the Windows
> kernel drivers to a binary blob that you run inside a Linux guest.
> It's a binary transport between two binary pieces. Personally this
> holds little of interest to me, I can see why it might be nice to have
> this upstream, but I don't foresee any other Linux distributor ever
> enabling it or having to ship it, it's purely a WSL2 pipe. I'm not
> saying I'd be happy to see this in the tree, since I don't see the
> value of maintaining it upstream, but it probably should just exist
> in a drivers/hyperv type area.
>
> Having said that, I hit one stumbling block:
> "Further, at this time there are no presentation integration. "
>
> If we upstream this driver as-is into some hyperv specific place, and
> you decide to add presentation integration this is more than likely
> going to mean you will want to interact with dma-bufs and dma-fences.
> If the driver is hidden away in a hyperv place it's likely we won't
> even notice that feature landing until it's too late.
>
> I would like to see a coherent plan for presentation support (not
> code, just an architectural diagram), because I think when you
> contemplate how that works it will change the picture of how this
> driver looks and integrates into the rest of the Linux graphics
> ecosystem.
>
> As-is I'd rather this didn't land under my purview, since I don't see
> the value this adds to the Linux ecosystem at all, and I think it's
> important when putting a burden on upstream that you provide some
> value.
I also have another concern: from a legal standpoint I'd rather not
review the ioctl part of this. I'd probably request other DRI
developers abstain as well.
This is a Windows kernel API being smashed into a Linux driver. I
don't want to be tainted by knowledge of an API that I've no idea of
the legal status of derived works. (Is this all covered patent-wise
under OIN?)
I don't want to ever be accused of designing a Linux kernel API with
ill-gotten D3DKMT knowledge; I feel tainting myself with knowledge of a
proprietary API might cause derived-work issues.
Dave.
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-19 23:12 ` Dave Airlie
@ 2020-06-16 10:51 ` Pavel Machek
2020-06-16 13:21 ` Sasha Levin
0 siblings, 1 reply; 28+ messages in thread
From: Pavel Machek @ 2020-06-16 10:51 UTC (permalink / raw)
To: Dave Airlie
Cc: Sasha Levin, linux-hyperv, sthemmin, Ursulin, Tvrtko,
Greg Kroah-Hartman, haiyangz, LKML, dri-devel, Chris Wilson,
spronovo, Linux Fbdev development list, iourit,
Deucher, Alexander, kys, wei.liu, Hawking Zhang
> > Having said that, I hit one stumbling block:
> > "Further, at this time there are no presentation integration. "
> >
> > If we upstream this driver as-is into some hyperv specific place, and
> > you decide to add presentation integration this is more than likely
> > going to mean you will want to interact with dma-bufs and dma-fences.
> > If the driver is hidden away in a hyperv place it's likely we won't
> > even notice that feature landing until it's too late.
> >
> > I would like to see a coherent plan for presentation support (not
> > code, just an architectural diagram), because I think when you
> > contemplate how that works it will change the picture of how this
> > driver looks and integrates into the rest of the Linux graphics
> > ecosystem.
> >
> > As-is I'd rather this didn't land under my purview, since I don't see
> > the value this adds to the Linux ecosystem at all, and I think it's
> > important when putting a burden on upstream that you provide some
> > value.
>
> I also have another concern: from a legal standpoint I'd rather not
> review the ioctl part of this. I'd probably request other DRI
> developers abstain as well.
>
> This is a Windows kernel API being smashed into a Linux driver. I don't want to be
> tainted by knowledge of an API that I've no idea of the legal status of derived works.
> (Is this all covered patent-wise under OIN?)
If you can't look at it, perhaps it is not suitable for merging into the kernel...?
What would the legal requirements be for this to be "safe to look at"? We should really
require the submitter to meet them...
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-06-16 10:51 ` Pavel Machek
@ 2020-06-16 13:21 ` Sasha Levin
0 siblings, 0 replies; 28+ messages in thread
From: Sasha Levin @ 2020-06-16 13:21 UTC (permalink / raw)
To: Pavel Machek
Cc: linux-hyperv, sthemmin, Ursulin, Tvrtko, Greg Kroah-Hartman,
haiyangz, LKML, dri-devel, Chris Wilson, spronovo,
Linux Fbdev development list, iourit, Deucher, Alexander, kys,
wei.liu, Hawking Zhang
On Tue, Jun 16, 2020 at 12:51:56PM +0200, Pavel Machek wrote:
>> > Having said that, I hit one stumbling block:
>> > "Further, at this time there are no presentation integration. "
>> >
>> > If we upstream this driver as-is into some hyperv specific place, and
>> > you decide to add presentation integration this is more than likely
>> > going to mean you will want to interact with dma-bufs and dma-fences.
>> > If the driver is hidden away in a hyperv place it's likely we won't
>> > even notice that feature landing until it's too late.
>> >
>> > I would like to see a coherent plan for presentation support (not
>> > code, just an architectural diagram), because I think when you
>> > contemplate how that works it will change the picture of how this
>> > driver looks and integrates into the rest of the Linux graphics
>> > ecosystem.
>> >
>> > As-is I'd rather this didn't land under my purview, since I don't see
>> > the value this adds to the Linux ecosystem at all, and I think it's
>> > important when putting a burden on upstream that you provide some
>> > value.
>>
>> I also have another concern: from a legal standpoint I'd rather not
>> review the ioctl part of this. I'd probably request other DRI
>> developers abstain as well.
>>
>> This is a Windows kernel API being smashed into a Linux driver. I don't want to be
>> tainted by knowledge of an API that I've no idea of the legal status of derived works.
>> (Is this all covered patent-wise under OIN?)
>
>If you can't look at it, perhaps it is not suitable for merging into the kernel...?
>
>What would the legal requirements be for this to be "safe to look at"? We should really
>require the submitter to meet them...
Could you walk me through your view on what the function of the
"Signed-off-by" tag is?
--
Thanks,
Sasha
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-19 16:32 [RFC PATCH 0/4] DirectX on Linux Sasha Levin
` (5 preceding siblings ...)
2020-05-19 22:42 ` Dave Airlie
@ 2020-05-20 7:10 ` Thomas Zimmermann
2020-05-20 7:42 ` [EXTERNAL] " Steve Pronovost
2020-06-16 10:51 ` Pavel Machek
6 siblings, 2 replies; 28+ messages in thread
From: Thomas Zimmermann @ 2020-05-20 7:10 UTC (permalink / raw)
To: Sasha Levin, alexander.deucher, chris, ville.syrjala,
Hawking.Zhang, tvrtko.ursulin
Cc: linux-hyperv, sthemmin, gregkh, haiyangz, linux-kernel, dri-devel,
spronovo, wei.liu, linux-fbdev, iourit, kys
[-- Attachment #1.1: Type: text/plain, Size: 9368 bytes --]
Hi
On 19.05.20 at 18:32, Sasha Levin wrote:
> There is a blog post that goes into more detail about the bigger
> picture, and walks through all the required pieces to make this work. It
> is available here:
> https://devblogs.microsoft.com/directx/directx-heart-linux . The rest of
> this cover letter will focus on the Linux Kernel bits.
That's quite a surprise. Thanks for your efforts to contribute.
>
> Overview
> ========
>
> This is the first draft of the Microsoft Virtual GPU (vGPU) driver. The
> driver exposes a paravirtualized GPU to user mode applications running
> in a virtual machine on a Windows host. This enables hardware
> acceleration in environments such as WSL (Windows Subsystem for Linux)
> where the Linux virtual machine is able to share the GPU with the
> Windows host.
>
> The projection is accomplished by exposing the WDDM (Windows Display
> Driver Model) interface as a set of IOCTLs. This allows APIs and user
> mode drivers written against the WDDM GPU abstraction on Windows to be
> ported to run within a Linux environment. This enables the port of the
> D3D12 and DirectML APIs as well as their associated user mode drivers to
> Linux. This also enables third party APIs, such as the popular NVIDIA
> Cuda compute API, to be hardware accelerated within a WSL environment.
>
> Only the rendering/compute aspects of the GPU are projected to the
> virtual machine, no display functionality is exposed. Further, at this
> time there is no presentation integration. So although the D3D12 API
> can be used to render graphics offscreen, there is no path (yet) for
> pixels to flow from the Linux environment back onto the Windows host
> desktop. This GPU stack is effectively side-by-side with the native
> Linux graphics stack.
>
> The driver creates the /dev/dxg device, which can be opened by user mode
> applications and handles their ioctls. The IOCTL interface to the driver
> is defined in d3dkmthk.h (Dxgkrnl Graphics Port Driver ioctl
> definitions). The interface matches the D3DKMT interface on Windows.
> Ioctls are implemented in ioctl.c.
Echoing what others said, you're not making a DRM driver. The driver
should live outside of the DRM code.
I have one question about the driver API: on Windows, DirectX versions
are loosely tied to Windows releases. So I guess you can change the
kernel interface among DirectX versions?
If so, how would this work on Linux in the long term? If there ever is a
DirectX 13 or 14 with incompatible kernel interfaces, how would you plan
to update the Linux driver?
Best regards
Thomas
>
> When a VM starts, hyper-v on the host adds virtual GPU devices to the VM
> via the hyper-v driver. The host offers several VM bus channels to the
> VM: the global channel and one channel per virtual GPU, assigned to the
> VM.
>
> The driver registers with the hyper-v driver (hv_driver) for the arrival
> of VM bus channels. dxg_probe_device recognizes the vGPU channels and
> creates the corresponding objects (dxgadapter for vGPUs and dxgglobal
> for the global channel).
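As a sketch of what this registration can look like, using the standard vmbus driver API and the DXGK channel GUID macros added by the hyperv patch in this series (the probe/remove bodies here are stubs; the real ones build the dxgadapter/dxgglobal objects, so treat this as illustrative rather than the exact code):

    #include <linux/hyperv.h>
    #include <linux/module.h>

    static int dxg_probe_device(struct hv_device *hdev,
                                const struct hv_vmbus_device_id *id)
    {
            /* Real driver: create a dxgadapter for a vGPU channel, or
             * the dxgglobal object for the global channel. */
            return 0;
    }

    static int dxg_remove_device(struct hv_device *hdev)
    {
            return 0;
    }

    /* Match the global and per-vGPU DXGK vmbus channels by GUID. */
    static const struct hv_vmbus_device_id dxg_id_table[] = {
            { HV_GPUP_DXGK_GLOBAL_GUID },
            { HV_GPUP_DXGK_VGPU_GUID },
            { }
    };
    MODULE_DEVICE_TABLE(vmbus, dxg_id_table);

    static struct hv_driver dxg_drv = {
            .name = "dxgkrnl",
            .id_table = dxg_id_table,
            .probe = dxg_probe_device,
            .remove = dxg_remove_device,
    };

    static int __init dxg_init(void)
    {
            /* The vmbus core then calls dxg_probe_device() for every
             * channel the host offers whose GUID matches the table. */
            return vmbus_driver_register(&dxg_drv);
    }
    module_init(dxg_init);

    static void __exit dxg_exit(void)
    {
            vmbus_driver_unregister(&dxg_drv);
    }
    module_exit(dxg_exit);
    MODULE_LICENSE("GPL");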
>
> The driver uses the hyper-V VM bus interface to communicate with the
> host. dxgvmbus.c implements the communication interface.
>
> The global channel has 8GB of IO space assigned by the host. This space
> is managed by the host and used to give the guest direct CPU access to
> some allocations. Video memory is allocated on the host except in the
> case of existing_sysmem allocations. The Windows host allocates memory
> for the GPU on behalf of the guest. The Linux guest can access that
> memory by mapping GPU virtual addresses to allocations and then
> referencing those GPU virtual addresses from within GPU command buffers
> submitted to the GPU. For allocations which require CPU access, the
> allocation is mapped by the host into a location in the 8GB of IO space
> reserved in the guest for that purpose. The Windows host uses the nested
> CPU page table to ensure that this guest IO space always maps to the
> correct location for the allocation as it may migrate between dedicated
> GPU memory (e.g. VRAM, firmware reserved DDR) and shared system memory
> (regular DDR) over its lifetime. The Linux guest maps a user mode CPU
> virtual address to an allocation IO space range for direct access by
> user mode APIs and drivers.
>
>
>
> Implementation of LX_DXLOCK2 ioctl
> ==================================
>
> We would appreciate your feedback on the implementation of the
> LX_DXLOCK2 ioctl.
>
> This ioctl is used to get a CPU address to an allocation, which is
> resident in video/system memory on the host. The way it works:
>
> 1. The driver sends the Lock message to the host
>
> 2. The host allocates space in the VM IO space and maps it to the
> allocation memory
>
> 3. The host returns the address in IO space for the mapped allocation
>
> 4. The driver (in dxg_map_iospace) allocates a user mode virtual address
> range using vm_mmap and maps it to the IO space using
> io_remap_pfn_range()
>
> 5. The VA is returned to the application
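A sketch of the application side of this sequence, assuming the LX_DXLOCK2 ioctl number comes from the series' d3dkmthk.h uapi header; the argument block below is an invented stand-in for the real layout, which also lives in that header:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/types.h>
    #include "d3dkmthk.h"           /* uapi header from this series */

    /* Illustrative stand-in for the real lock2 argument block. */
    struct lock2_args {
            __u64 device;           /* d3dkmt_handle of the device */
            __u64 allocation;       /* d3dkmt_handle to lock */
            void *data;             /* out: user VA of the mapping */
    };

    int lock_allocation(int dxg_fd, struct lock2_args *args)
    {
            /* Steps 1-4 all happen in the driver and on the host;
             * userspace only sees the VA come back (step 5). */
            if (ioctl(dxg_fd, LX_DXLOCK2, args) < 0)
                    return -1;
            printf("allocation mapped at %p\n", args->data);
            return 0;
    }

    /* Usage: int fd = open("/dev/dxg", O_RDWR); fill in the device and
     * allocation handles, then call lock_allocation(fd, &args). */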
>
>
>
> Internal objects
> ================
>
> The following objects are created by the driver (defined in dxgkrnl.h):
>
> - dxgadapter - represents a virtual GPU
>
> - dxgprocess - tracks per process state (handle table of created
> objects, list of objects, etc.)
>
> - dxgdevice - a container for other objects (contexts, paging queues,
> allocations, GPU synchronization objects)
>
> - dxgcontext - represents thread of GPU execution for packet
> scheduling.
>
> - dxghwqueue - represents thread of GPU execution of hardware scheduling
>
> - dxgallocation - represents a GPU accessible allocation
>
> - dxgsyncobject - represents a GPU synchronization object
>
> - dxgresource - collection of dxgallocation objects
>
> - dxgsharedresource, dxgsharedsyncobj - helper objects to share objects
> between different dxgdevice objects, which can belong to different
> processes
>
>
>
> Object handles
> ==============
>
> All GPU objects, created by the driver, are accessible by a handle
> (d3dkmt_handle). Each process has its own handle table, which is
> implemented in hmgr.c. For each API visible object, created by the
> driver, there is an object, created on the host. For example, there is a
> dxgprocess object on the host for each dxgprocess object in the VM, etc.
> The object handles have the same value in the host and the VM, which is
> done to avoid translation from the guest handles to the host handles.
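A minimal sketch of the per-process handle-table idea (field and function names are invented here; the real implementation is hmgr.c from this series). Because the host creates a twin object for each guest object and both sides use the same handle value, a lookup needs no guest-to-host translation:

    /* Assumes handle 0 is reserved as invalid. A d3dkmt_handle indexes
     * the process's table, and the same numeric value names the twin
     * object on the Windows host. */
    struct hmgrtable {
            void **entries;         /* slot i holds the object for handle i */
            unsigned int count;     /* number of slots */
    };

    static void *hmgr_get_object(struct hmgrtable *t, unsigned int handle)
    {
            if (handle == 0 || handle >= t->count)
                    return NULL;
            return t->entries[handle];
    }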
>
>
>
> Signaling CPU events by the host
> ================================
>
> The WDDM interface provides a way to signal CPU event objects when
> execution of a context reaches a certain point. The way it is implemented:
>
> - application sends an event_fd via ioctl to the driver
>
> - eventfd_ctx_get is used to get a pointer to the file object
> (eventfd_ctx)
>
> - the pointer is sent to the host via a VM bus message
>
> - when GPU execution reaches a certain point, the host sends a message
> to the VM with the event pointer
>
> - signal_guest_event() handles the messages and eventually
> eventfd_signal() is called.
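A sketch of the application side of this flow; only the eventfd calls are standard, and the ioctl that hands the fd to dxgkrnl is left as a comment since its exact name and arguments come from d3dkmthk.h/ioctl.c:

    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    int wait_for_gpu(int dxg_fd)
    {
            int efd = eventfd(0, 0);
            if (efd < 0)
                    return -1;

            /* Hand efd to the driver via the relevant signal ioctl on
             * dxg_fd; the driver grabs the eventfd_ctx behind it and
             * forwards the pointer to the host over the VM bus. */

            uint64_t count;
            /* Blocks until the host-triggered eventfd_signal() fires. */
            if (read(efd, &count, sizeof(count)) != sizeof(count)) {
                    close(efd);
                    return -1;
            }
            close(efd);
            return 0;
    }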
>
>
> Sasha Levin (4):
> gpu: dxgkrnl: core code
> gpu: dxgkrnl: hook up dxgkrnl
> Drivers: hv: vmbus: hook up dxgkrnl
> gpu: dxgkrnl: create a MAINTAINERS entry
>
> MAINTAINERS | 7 +
> drivers/gpu/Makefile | 2 +-
> drivers/gpu/dxgkrnl/Kconfig | 10 +
> drivers/gpu/dxgkrnl/Makefile | 12 +
> drivers/gpu/dxgkrnl/d3dkmthk.h | 1635 +++++++++
> drivers/gpu/dxgkrnl/dxgadapter.c | 1399 ++++++++
> drivers/gpu/dxgkrnl/dxgkrnl.h | 913 ++++++
> drivers/gpu/dxgkrnl/dxgmodule.c | 692 ++++
> drivers/gpu/dxgkrnl/dxgprocess.c | 355 ++
> drivers/gpu/dxgkrnl/dxgvmbus.c | 2955 +++++++++++++++++
> drivers/gpu/dxgkrnl/dxgvmbus.h | 859 +++++
> drivers/gpu/dxgkrnl/hmgr.c | 593 ++++
> drivers/gpu/dxgkrnl/hmgr.h | 107 +
> drivers/gpu/dxgkrnl/ioctl.c | 5269 ++++++++++++++++++++++++++++++
> drivers/gpu/dxgkrnl/misc.c | 280 ++
> drivers/gpu/dxgkrnl/misc.h | 288 ++
> drivers/video/Kconfig | 2 +
> include/linux/hyperv.h | 16 +
> 18 files changed, 15393 insertions(+), 1 deletion(-)
> create mode 100644 drivers/gpu/dxgkrnl/Kconfig
> create mode 100644 drivers/gpu/dxgkrnl/Makefile
> create mode 100644 drivers/gpu/dxgkrnl/d3dkmthk.h
> create mode 100644 drivers/gpu/dxgkrnl/dxgadapter.c
> create mode 100644 drivers/gpu/dxgkrnl/dxgkrnl.h
> create mode 100644 drivers/gpu/dxgkrnl/dxgmodule.c
> create mode 100644 drivers/gpu/dxgkrnl/dxgprocess.c
> create mode 100644 drivers/gpu/dxgkrnl/dxgvmbus.c
> create mode 100644 drivers/gpu/dxgkrnl/dxgvmbus.h
> create mode 100644 drivers/gpu/dxgkrnl/hmgr.c
> create mode 100644 drivers/gpu/dxgkrnl/hmgr.h
> create mode 100644 drivers/gpu/dxgkrnl/ioctl.c
> create mode 100644 drivers/gpu/dxgkrnl/misc.c
> create mode 100644 drivers/gpu/dxgkrnl/misc.h
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]
^ permalink raw reply	[flat|nested] 28+ messages in thread
* RE: [EXTERNAL] Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-20 7:10 ` Thomas Zimmermann
@ 2020-05-20 7:42 ` Steve Pronovost
2020-05-20 11:06 ` Thomas Zimmermann
2020-06-16 10:51 ` Pavel Machek
1 sibling, 1 reply; 28+ messages in thread
From: Steve Pronovost @ 2020-05-20 7:42 UTC (permalink / raw)
To: Thomas Zimmermann, Sasha Levin, alexander.deucher@amd.com,
chris@chris-wilson.co.uk, ville.syrjala@linux.intel.com,
Hawking.Zhang@amd.com, tvrtko.ursulin@intel.com
Cc: linux-hyperv@vger.kernel.org, Stephen Hemminger,
gregkh@linuxfoundation.org, Haiyang Zhang,
linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
Max McMullen, wei.liu@kernel.org, linux-fbdev@vger.kernel.org,
Iouri Tarassov, KY Srinivasan
>Echoing what others said, you're not making a DRM driver. The driver should live outside of the DRM code.

Agreed, please see my earlier reply. We'll be moving the driver to a drivers/hyperv node or something similar. Apologies for the confusion here.

> I have one question about the driver API: on Windows, DirectX versions are loosely tied to Windows releases. So I guess you can change the kernel interface among DirectX versions?
> If so, how would this work on Linux in the long term? If there ever is a DirectX 13 or 14 with incompatible kernel interfaces, how would you plan to update the Linux driver?

You should think of the communication over the VM Bus for the vGPU projection as a strongly versioned interface. We will be keeping compatibility with older versions of that interface as it evolves over time so we can continue to run older guests (we already do). This protocol isn't actually tied to the DX API. It is a generic abstraction for the GPU that can be used for any API (for example the NVIDIA CUDA driver that we announced is going over the same protocol to access the GPU).

New versions of user mode DX can either take advantage of or sometimes require new services from this kernel abstraction. This means that pulling a new version of user mode DX can mean having to also pull a new version of this vGPU kernel driver. For WSL, these essentially ship together. The kernel driver ships as part of our WSL2 Linux Kernel integration. User mode DX bits ship with Windows.

-----Original Message-----
From: Thomas Zimmermann <tzimmermann@suse.de>
Sent: Wednesday, May 20, 2020 12:11 AM
To: Sasha Levin <sashal@kernel.org>; alexander.deucher@amd.com; chris@chris-wilson.co.uk; ville.syrjala@linux.intel.com; Hawking.Zhang@amd.com; tvrtko.ursulin@intel.com
Cc: linux-kernel@vger.kernel.org; linux-hyperv@vger.kernel.org; KY Srinivasan <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>; Stephen Hemminger <sthemmin@microsoft.com>; wei.liu@kernel.org; Steve Pronovost <spronovo@microsoft.com>; Iouri Tarassov <iourit@microsoft.com>; dri-devel@lists.freedesktop.org; linux-fbdev@vger.kernel.org; gregkh@linuxfoundation.org
Subject: [EXTERNAL] Re: [RFC PATCH 0/4] DirectX on Linux

[Thomas Zimmermann's quoted message is omitted here; it appears in full earlier in this thread.]
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [EXTERNAL] Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-20 7:42 ` [EXTERNAL] " Steve Pronovost
@ 2020-05-20 11:06 ` Thomas Zimmermann
0 siblings, 0 replies; 28+ messages in thread
From: Thomas Zimmermann @ 2020-05-20 11:06 UTC (permalink / raw)
To: Steve Pronovost, Sasha Levin, alexander.deucher@amd.com,
chris@chris-wilson.co.uk, ville.syrjala@linux.intel.com,
Hawking.Zhang@amd.com, tvrtko.ursulin@intel.com
Cc: linux-hyperv@vger.kernel.org, Stephen Hemminger,
gregkh@linuxfoundation.org, Haiyang Zhang,
linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
Max McMullen, wei.liu@kernel.org, linux-fbdev@vger.kernel.org,
Iouri Tarassov, KY Srinivasan
[-- Attachment #1.1: Type: text/plain, Size: 12424 bytes --]
Hi Steve,
thank you for the fast reply.
Am 20.05.20 um 09:42 schrieb Steve Pronovost:
>> Echoing what others said, you're not making a DRM driver. The driver should live outside of the DRM code.
>
> Agreed, please see my earlier reply. We'll be moving the driver to a drivers/hyperv node or something similar. Apologies for the confusion here.
>
>> I have one question about the driver API: on Windows, DirectX versions are loosely tied to Windows releases. So I guess you can change the kernel interface among DirectX versions?
>> If so, how would this work on Linux in the long term? If there ever is a DirectX 13 or 14 with incompatible kernel interfaces, how would you plan to update the Linux driver?
>
> You should think of the communication over the VM Bus for the vGPU projection as a strongly versioned interface. We will be keeping compatibility with older versions of that interface as it evolves over time so we can continue to run older guests (we already do). This protocol isn't actually tied to the DX API. It is a generic abstraction for the GPU that can be used for any API (for example the NVIDIA CUDA driver that we announced is going over the same protocol to access the GPU).
>
> New versions of user mode DX can either take advantage of or sometimes require new services from this kernel abstraction. This means that pulling a new version of user mode DX can mean having to also pull a new version of this vGPU kernel driver. For WSL, these essentially ship together. The kernel driver ships as part of our WSL2 Linux Kernel integration. User mode DX bits ship with Windows.
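A sketch of what "strongly versioned" typically means at the wire level; the message layout below is invented for illustration, not taken from the series:

    #include <stdint.h>

    /* The guest proposes the highest protocol version it understands;
     * the host answers with the version it will actually speak, which
     * may be lower. Both sides then restrict themselves to messages
     * defined for the accepted version. */
    struct dxgk_version_exchange {
            uint32_t requested_version;     /* set by the guest */
            uint32_t accepted_version;      /* set by the host */
    };

A guest that gets back an older accepted_version simply never sends the newer messages, which is how a newer host keeps running older guests unchanged.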
Just some friendly advice: maintaining a proprietary component within a
Linux environment is tough. You will need a good plan for long-term
interface stability and compatibility with the other components.
Best regards
Thomas
>
> -----Original Message-----
> From: Thomas Zimmermann <tzimmermann@suse.de>
> Sent: Wednesday, May 20, 2020 12:11 AM
> To: Sasha Levin <sashal@kernel.org>; alexander.deucher@amd.com; chris@chris-wilson.co.uk; ville.syrjala@linux.intel.com; Hawking.Zhang@amd.com; tvrtko.ursulin@intel.com
> Cc: linux-kernel@vger.kernel.org; linux-hyperv@vger.kernel.org; KY Srinivasan <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>; Stephen Hemminger <sthemmin@microsoft.com>; wei.liu@kernel.org; Steve Pronovost <spronovo@microsoft.com>; Iouri Tarassov <iourit@microsoft.com>; dri-devel@lists.freedesktop.org; linux-fbdev@vger.kernel.org; gregkh@linuxfoundation.org
> Subject: [EXTERNAL] Re: [RFC PATCH 0/4] DirectX on Linux
>
> Hi
>
> On 19.05.20 at 18:32, Sasha Levin wrote:
>> There is a blog post that goes into more detail about the bigger
>> picture, and walks through all the required pieces to make this work.
>> It is available here:
>> https://devblogs.microsoft.com/directx/directx-heart-linux . The rest
>> of this cover letter will focus on the Linux Kernel bits.
>
> That's quite a surprise. Thanks for your efforts to contribute.
>
>>
>> Overview
>> ========
>>
>> This is the first draft of the Microsoft Virtual GPU (vGPU) driver.
>> The driver exposes a paravirtualized GPU to user mode applications
>> running in a virtual machine on a Windows host. This enables hardware
>> acceleration in environment such as WSL (Windows Subsystem for Linux)
>> where the Linux virtual machine is able to share the GPU with the
>> Windows host.
>>
>> The projection is accomplished by exposing the WDDM (Windows Display
>> Driver Model) interface as a set of IOCTLs. This allows APIs and user
>> mode drivers written against the WDDM GPU abstraction on Windows to be
>> ported to run within a Linux environment. This enables the port of the
>> D3D12 and DirectML APIs as well as their associated user mode drivers
>> to Linux. This also enables third party APIs, such as the popular
>> NVIDIA Cuda compute API, to be hardware accelerated within a WSL environment.
>>
>> Only the rendering/compute aspects of the GPU are projected to the
>> virtual machine, no display functionality is exposed. Further, at this
>> time there is no presentation integration. So although the D3D12 API
>> can be used to render graphics offscreen, there is no path (yet) for
>> pixels to flow from the Linux environment back onto the Windows host
>> desktop. This GPU stack is effectively side-by-side with the native
>> Linux graphics stack.
>>
>> The driver creates the /dev/dxg device, which can be opened by user
>> mode applications and handles their ioctls. The IOCTL interface to the
>> driver is defined in d3dkmthk.h (Dxgkrnl Graphics Port Driver ioctl
>> definitions). The interface matches the D3DKMT interface on Windows.
>> Ioctls are implemented in ioctl.c.
>
> Echoing what others said, you're not making a DRM driver. The driver should live outside of the DRM code.
>
> I have one question about the driver API: on Windows, DirectX versions are loosely tied to Windows releases. So I guess you can change the kernel interface among DirectX versions?
>
> If so, how would this work on Linux in the long term? If there ever is a DirectX 13 or 14 with incompatible kernel interfaces, how would you plan to update the Linux driver?
>
> Best regards
> Thomas
>
>>
>> When a VM starts, hyper-v on the host adds virtual GPU devices to the
>> VM via the hyper-v driver. The host offers several VM bus channels to
>> the
>> VM: the global channel and one channel per virtual GPU, assigned to
>> the VM.
>>
>> The driver registers with the hyper-v driver (hv_driver) for the
>> arrival of VM bus channels. dxg_probe_device recognizes the vGPU
>> channels and creates the corresponding objects (dxgadapter for vGPUs
>> and dxgglobal for the global channel).
>>
>> The driver uses the hyper-V VM bus interface to communicate with the
>> host. dxgvmbus.c implements the communication interface.
>>
>> The global channel has 8GB of IO space assigned by the host. This
>> space is managed by the host and used to give the guest direct CPU
>> access to some allocations. Video memory is allocated on the host
>> except in the case of existing_sysmem allocations. The Windows host
>> allocates memory for the GPU on behalf of the guest. The Linux guest
>> can access that memory by mapping GPU virtual addresses to allocations
>> and then referencing those GPU virtual addresses from within GPU command
>> buffers submitted to the GPU. For allocations which require CPU
>> access, the allocation is mapped by the host into a location in the
>> 8GB of IO space reserved in the guest for that purpose. The Windows
>> host uses the nested CPU page table to ensure that this guest IO space
>> always maps to the correct location for the allocation as it may
>> migrate between dedicated GPU memory (e.g. VRAM, firmware reserved
>> DDR) and shared system memory (regular DDR) over its lifetime. The
>> Linux guest maps a user mode CPU virtual address to an allocation IO
>> space range for direct access by user mode APIs and drivers.
>>
>>
>>
>> Implementation of LX_DXLOCK2 ioctl
>> ==================================
>>
>> We would appreciate your feedback on the implementation of the
>> LX_DXLOCK2 ioctl.
>>
>> This ioctl is used to get a CPU address to an allocation, which is
>> resident in video/system memory on the host. The way it works:
>>
>> 1. The driver sends the Lock message to the host
>>
>> 2. The host allocates space in the VM IO space and maps it to the
>> allocation memory
>>
>> 3. The host returns the address in IO space for the mapped allocation
>>
>> 4. The driver (in dxg_map_iospace) allocates a user mode virtual
>> address range using vm_mmap and maps it to the IO space using
>> io_remap_pfn_range()
>>
>> 5. The VA is returned to the application
>>
>>
>>
>> Internal objects
>> ================
>>
>> The following objects are created by the driver (defined in dxgkrnl.h):
>>
>> - dxgadapter - represents a virtual GPU
>>
>> - dxgprocess - tracks per process state (handle table of created
>> objects, list of objects, etc.)
>>
>> - dxgdevice - a container for other objects (contexts, paging queues,
>> allocations, GPU synchronization objects)
>>
>> - dxgcontext - represents thread of GPU execution for packet
>> scheduling.
>>
>> - dxghwqueue - represents thread of GPU execution of hardware
>> scheduling
>>
>> - dxgallocation - represents a GPU accessible allocation
>>
>> - dxgsyncobject - represents a GPU synchronization object
>>
>> - dxgresource - collection of dxgallocation objects
>>
>> - dxgsharedresource, dxgsharedsyncobj - helper objects to share objects
>> between different dxgdevice objects, which can belong to different
>> processes
>>
>>
>>
>> Object handles
>> ==============
>>
>> All GPU objects created by the driver are accessible by a handle
>> (d3dkmt_handle). Each process has its own handle table, which is
>> implemented in hmgr.c. For each API-visible object created by the
>> driver, there is a corresponding object created on the host; for
>> example, there is a dxgprocess object on the host for each dxgprocess
>> object in the VM, and so on. The object handles have the same value on
>> the host and in the VM, which avoids translating guest handles to host
>> handles (a lookup sketch follows).
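>>
>> A minimal lookup sketch (the real table lives in hmgr.c; the index
>> encoding here is an assumption):
>>
>>     struct hmgrtable_sketch {
>>         void **entries;  /* slot index -> driver object */
>>         u32    count;
>>     };
>>
>>     /* d3dkmt_handle values are plain u32s; because they match the
>>      * host's handles they can be sent in VM bus messages unchanged */
>>     static void *hmgr_lookup_sketch(struct hmgrtable_sketch *t,
>>                                     u32 handle)
>>     {
>>         u32 index = handle & 0x00ffffff;  /* assumed index bits */
>>
>>         if (handle == 0 || index >= t->count)
>>             return NULL;                  /* 0 is the null handle */
>>         return t->entries[index];
>>     }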
>>
>>
>>
>> Signaling CPU events by the host
>> ================================
>>
>> The WDDM interface provides a way to signal CPU event objects when
>> execution of a context reaches a certain point. It is implemented as
>> follows (a sketch follows the list):
>>
>> - the application sends an eventfd via ioctl to the driver
>>
>> - eventfd_ctx_get is used to get a reference to the eventfd context
>> (eventfd_ctx)
>>
>> - the pointer is sent to the host via a VM bus message
>>
>> - when GPU execution reaches a certain point, the host sends a message
>> to the VM with the event pointer
>>
>> - signal_guest_event() handles the message and eventually calls
>> eventfd_signal()
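>>
>> A trimmed sketch of both ends of that flow (eventfd_ctx_fdget,
>> eventfd_signal and eventfd_ctx_put are real kernel APIs of this era;
>> the struct and function names are hypothetical):
>>
>>     #include <linux/eventfd.h>
>>
>>     struct dxg_host_event_sketch {
>>         struct eventfd_ctx *cpu_event;  /* guest-side reference */
>>         u64 event_id;                   /* token shared with host */
>>     };
>>
>>     /* ioctl path: resolve the fd passed in by the application */
>>     static int dxg_register_event_sketch(int fd,
>>                                 struct dxg_host_event_sketch *ev)
>>     {
>>         ev->cpu_event = eventfd_ctx_fdget(fd);
>>         if (IS_ERR(ev->cpu_event))
>>             return PTR_ERR(ev->cpu_event);
>>         /* ... send ev->event_id to the host over the VM bus ... */
>>         return 0;
>>     }
>>
>>     /* VM bus handler: host reports the GPU reached the sync point */
>>     static void dxg_signal_sketch(struct dxg_host_event_sketch *ev)
>>     {
>>         eventfd_signal(ev->cpu_event, 1);
>>         eventfd_ctx_put(ev->cpu_event);
>>     }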
>>
>>
>> Sasha Levin (4):
>> gpu: dxgkrnl: core code
>> gpu: dxgkrnl: hook up dxgkrnl
>> Drivers: hv: vmbus: hook up dxgkrnl
>> gpu: dxgkrnl: create a MAINTAINERS entry
>>
>> MAINTAINERS | 7 +
>> drivers/gpu/Makefile | 2 +-
>> drivers/gpu/dxgkrnl/Kconfig | 10 +
>> drivers/gpu/dxgkrnl/Makefile | 12 +
>> drivers/gpu/dxgkrnl/d3dkmthk.h | 1635 +++++++++
>> drivers/gpu/dxgkrnl/dxgadapter.c | 1399 ++++++++
>> drivers/gpu/dxgkrnl/dxgkrnl.h | 913 ++++++
>> drivers/gpu/dxgkrnl/dxgmodule.c | 692 ++++
>> drivers/gpu/dxgkrnl/dxgprocess.c | 355 ++
>> drivers/gpu/dxgkrnl/dxgvmbus.c | 2955 +++++++++++++++++
>> drivers/gpu/dxgkrnl/dxgvmbus.h | 859 +++++
>> drivers/gpu/dxgkrnl/hmgr.c | 593 ++++
>> drivers/gpu/dxgkrnl/hmgr.h | 107 +
>> drivers/gpu/dxgkrnl/ioctl.c | 5269 ++++++++++++++++++++++++++++++
>> drivers/gpu/dxgkrnl/misc.c | 280 ++
>> drivers/gpu/dxgkrnl/misc.h | 288 ++
>> drivers/video/Kconfig | 2 +
>> include/linux/hyperv.h | 16 +
>> 18 files changed, 15393 insertions(+), 1 deletion(-)
>> create mode 100644 drivers/gpu/dxgkrnl/Kconfig
>> create mode 100644 drivers/gpu/dxgkrnl/Makefile
>> create mode 100644 drivers/gpu/dxgkrnl/d3dkmthk.h
>> create mode 100644 drivers/gpu/dxgkrnl/dxgadapter.c
>> create mode 100644 drivers/gpu/dxgkrnl/dxgkrnl.h
>> create mode 100644 drivers/gpu/dxgkrnl/dxgmodule.c
>> create mode 100644 drivers/gpu/dxgkrnl/dxgprocess.c
>> create mode 100644 drivers/gpu/dxgkrnl/dxgvmbus.c
>> create mode 100644 drivers/gpu/dxgkrnl/dxgvmbus.h
>> create mode 100644 drivers/gpu/dxgkrnl/hmgr.c
>> create mode 100644 drivers/gpu/dxgkrnl/hmgr.h
>> create mode 100644 drivers/gpu/dxgkrnl/ioctl.c
>> create mode 100644 drivers/gpu/dxgkrnl/misc.c
>> create mode 100644 drivers/gpu/dxgkrnl/misc.h
>>
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-05-20 7:10 ` Thomas Zimmermann
2020-05-20 7:42 ` [EXTERNAL] " Steve Pronovost
@ 2020-06-16 10:51 ` Pavel Machek
2020-06-16 13:28 ` Sasha Levin
1 sibling, 1 reply; 28+ messages in thread
From: Pavel Machek @ 2020-06-16 10:51 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: Sasha Levin, linux-hyperv, sthemmin, tvrtko.ursulin, gregkh,
haiyangz, spronovo, linux-kernel, dri-devel, chris, wei.liu,
linux-fbdev, iourit, alexander.deucher, kys, Hawking.Zhang
Hi!
> > The driver creates the /dev/dxg device, which can be opened by user mode
> > application and handles their ioctls. The IOCTL interface to the driver
> > is defined in dxgkmthk.h (Dxgkrnl Graphics Port Driver ioctl
> > definitions). The interface matches the D3DKMT interface on Windows.
> > Ioctls are implemented in ioctl.c.
>
> Echoing what others said, you're not making a DRM driver. The driver should live outside
> of the DRM code.
>
Actually, this sounds to me like "this should not be merged into the
Linux kernel". I mean, we already have a DRM API on Linux. We don't
want another one, do we?
And at the very least... this misses API docs for /dev/dxg. Code can't
really be reviewed without that.
Best regards,
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-06-16 10:51 ` Pavel Machek
@ 2020-06-16 13:28 ` Sasha Levin
2020-06-16 14:41 ` Pavel Machek
0 siblings, 1 reply; 28+ messages in thread
From: Sasha Levin @ 2020-06-16 13:28 UTC (permalink / raw)
To: Pavel Machek
Cc: linux-hyperv, sthemmin, tvrtko.ursulin, gregkh, haiyangz,
spronovo, linux-kernel, dri-devel, chris, linux-fbdev, wei.liu,
Thomas Zimmermann, iourit, alexander.deucher, kys, Hawking.Zhang
On Tue, Jun 16, 2020 at 12:51:13PM +0200, Pavel Machek wrote:
>Hi!
>
>> > The driver creates the /dev/dxg device, which can be opened by user mode
>> > application and handles their ioctls. The IOCTL interface to the driver
>> > is defined in dxgkmthk.h (Dxgkrnl Graphics Port Driver ioctl
>> > definitions). The interface matches the D3DKMT interface on Windows.
>> > Ioctls are implemented in ioctl.c.
>>
>> Echoing what others said, you're not making a DRM driver. The driver should live outside
>> of the DRM code.
>>
>
>Actually, this sounds to me like "this should not be merged into linux kernel". I mean,
>we already have DRM API on Linux. We don't want another one, do we?
This driver doesn't have any display functionality.
>And at the very least... this misses API docs for /dev/dxg. Code can't really
>be reviewed without that.
The docs live here: https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/d3dkmthk/
--
Thanks,
Sasha
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-06-16 13:28 ` Sasha Levin
@ 2020-06-16 14:41 ` Pavel Machek
2020-06-16 16:00 ` Sasha Levin
0 siblings, 1 reply; 28+ messages in thread
From: Pavel Machek @ 2020-06-16 14:41 UTC (permalink / raw)
To: Sasha Levin
Cc: linux-hyperv, sthemmin, tvrtko.ursulin, gregkh, haiyangz,
spronovo, linux-kernel, dri-devel, chris, linux-fbdev, wei.liu,
Thomas Zimmermann, iourit, alexander.deucher, kys, Hawking.Zhang
On Tue 2020-06-16 09:28:19, Sasha Levin wrote:
> On Tue, Jun 16, 2020 at 12:51:13PM +0200, Pavel Machek wrote:
> > Hi!
> >
> > > > The driver creates the /dev/dxg device, which can be opened by user mode
> > > > application and handles their ioctls. The IOCTL interface to the driver
> > > > is defined in dxgkmthk.h (Dxgkrnl Graphics Port Driver ioctl
> > > > definitions). The interface matches the D3DKMT interface on Windows.
> > > > Ioctls are implemented in ioctl.c.
> > >
> > > Echoing what others said, you're not making a DRM driver. The driver should live outside
> > > of the DRM code.
> > >
> >
> > Actually, this sounds to me like "this should not be merged into linux kernel". I mean,
> > we already have DRM API on Linux. We don't want another one, do we?
>
> This driver doesn't have any display functionality.
Graphics cards without displays connected are quite common. I may be
wrong, but I believe we normally handle them using DRM...
> > And at the very least... this misses API docs for /dev/dxg. Code can't really
> > be reviewed without that.
>
> The docs live here: https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/d3dkmthk/
I don't see "/dev/dxg" being mentioned there. Plus, kernel API
documentation should really go to Documentation, and be suitably
licensed.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [RFC PATCH 0/4] DirectX on Linux
2020-06-16 14:41 ` Pavel Machek
@ 2020-06-16 16:00 ` Sasha Levin
0 siblings, 0 replies; 28+ messages in thread
From: Sasha Levin @ 2020-06-16 16:00 UTC (permalink / raw)
To: Pavel Machek
Cc: linux-hyperv, sthemmin, tvrtko.ursulin, gregkh, haiyangz,
spronovo, linux-kernel, dri-devel, chris, linux-fbdev, wei.liu,
Thomas Zimmermann, iourit, alexander.deucher, kys, Hawking.Zhang
On Tue, Jun 16, 2020 at 04:41:22PM +0200, Pavel Machek wrote:
>On Tue 2020-06-16 09:28:19, Sasha Levin wrote:
>> On Tue, Jun 16, 2020 at 12:51:13PM +0200, Pavel Machek wrote:
>> > Hi!
>> >
>> > > > The driver creates the /dev/dxg device, which can be opened by user mode
>> > > > application and handles their ioctls. The IOCTL interface to the driver
>> > > > is defined in dxgkmthk.h (Dxgkrnl Graphics Port Driver ioctl
>> > > > definitions). The interface matches the D3DKMT interface on Windows.
>> > > > Ioctls are implemented in ioctl.c.
>> > >
>> > > Echoing what others said, you're not making a DRM driver. The driver should live outside
>> > > of the DRM code.
>> > >
>> >
>> > Actually, this sounds to me like "this should not be merged into linux kernel". I mean,
>> > we already have DRM API on Linux. We don't want another one, do we?
>>
>> This driver doesn't have any display functionality.
>
>Graphics cards without displays connected are quite common. I may be
>wrong, but I believe we normally handle them using DRM...
This is more similar to the accelerators that live in drivers/misc/
right now.
>> > And at the very least... this misses API docs for /dev/dxg. Code can't really
>> > be reviewed without that.
>>
>> The docs live here: https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/d3dkmthk/
>
>I don't see "/dev/dxg" being mentioned there. Plus, kernel API
Right, this is because this entire codebase is just a pipe to the API
I've linked; it doesn't implement anything new on its own.
>documentation should really go to Documentation, and be suitably
>licensed.
While I don't mind copying the docs into Documentation, I'm concerned
that over time they will diverge from the docs on the website. This is
similar to how other documentation (such as the virtio spec) lives out
of tree to avoid these issues.
W.r.t. the licensing, again: this was sent under GPL2 (note the SPDX
tags in each file), and the patches carry a Signed-off-by from someone
who was a Microsoft employee at the time the patches were sent.
--
Thanks,
Sasha
^ permalink raw reply [flat|nested] 28+ messages in thread