* [PATCH 01/55] drivers: hv: dxgkrnl: Driver initialization and loading
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 02/55] drivers: hv: dxgkrnl: Add VMBus message support, initialize VMBus channels Eric Curtin
` (53 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
- Create skeleton and add basic functionality for the Hyper-V
compute device driver (dxgkrnl).
- Register for PCI and VMBus driver notifications and handle
initialization of VMBus channels.
- Connect the dxgkrnl module to the drivers/hv/ Makefile and Kconfig
- Create a MAINTAINERS entry
A VMBus channel is a communication interface between the Hyper-V guest
and the host. Two types of VMBus channels are used by the driver:
- the global channel
- a channel per virtual compute device
A PCI device is created for each virtual compute device projected
by the host. The device vendor ID is PCI_VENDOR_ID_MICROSOFT and the
device ID is PCI_DEVICE_ID_VIRTUAL_RENDER. dxg_pci_probe_device()
handles the arrival of such devices. The PCI config space of the
virtual compute device holds the LUID of the corresponding virtual
compute device VMBus channel. This is how the compute device adapter
objects are linked to VMBus channels.
The VMBus interface version is negotiated by reading/writing the PCI
config space of the virtual compute device.
IO space is used to handle CPU-accessible compute device
allocations. Hyper-V allocates IO space for the global VMBus channel.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
MAINTAINERS | 7 +
drivers/hv/Kconfig | 2 +
drivers/hv/Makefile | 1 +
drivers/hv/dxgkrnl/Kconfig | 26 ++
drivers/hv/dxgkrnl/Makefile | 5 +
drivers/hv/dxgkrnl/dxgkrnl.h | 155 ++++++++++
drivers/hv/dxgkrnl/dxgmodule.c | 506 +++++++++++++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.c | 92 ++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 19 ++
include/uapi/misc/d3dkmthk.h | 27 ++
10 files changed, 840 insertions(+)
create mode 100644 drivers/hv/dxgkrnl/Kconfig
create mode 100644 drivers/hv/dxgkrnl/Makefile
create mode 100644 drivers/hv/dxgkrnl/dxgkrnl.h
create mode 100644 drivers/hv/dxgkrnl/dxgmodule.c
create mode 100644 drivers/hv/dxgkrnl/dxgvmbus.c
create mode 100644 drivers/hv/dxgkrnl/dxgvmbus.h
create mode 100644 include/uapi/misc/d3dkmthk.h
diff --git a/MAINTAINERS b/MAINTAINERS
index ae4c0cec5073..4fe0b3501931 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9771,6 +9771,13 @@ F: Documentation/devicetree/bindings/mtd/ti,am654-hbmc.yaml
F: drivers/mtd/hyperbus/
F: include/linux/mtd/hyperbus.h
+Hyper-V vGPU DRIVER
+M: Iouri Tarassov <iourit@microsoft.com>
+L: linux-hyperv@vger.kernel.org
+S: Supported
+F: drivers/hv/dxgkrnl/
+F: include/uapi/misc/d3dkmthk.h
+
HYPERVISOR VIRTUAL CONSOLE DRIVER
L: linuxppc-dev@lists.ozlabs.org
S: Odd Fixes
diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig
index 862c47b191af..b16c7701da19 100644
--- a/drivers/hv/Kconfig
+++ b/drivers/hv/Kconfig
@@ -55,4 +55,6 @@ config HYPERV_BALLOON
help
Select this option to enable Hyper-V Balloon driver.
+source "drivers/hv/dxgkrnl/Kconfig"
+
endmenu
diff --git a/drivers/hv/Makefile b/drivers/hv/Makefile
index d76df5c8c2a9..aa1cbdb5d0d2 100644
--- a/drivers/hv/Makefile
+++ b/drivers/hv/Makefile
@@ -2,6 +2,7 @@
obj-$(CONFIG_HYPERV) += hv_vmbus.o
obj-$(CONFIG_HYPERV_UTILS) += hv_utils.o
obj-$(CONFIG_HYPERV_BALLOON) += hv_balloon.o
+obj-$(CONFIG_DXGKRNL) += dxgkrnl/
CFLAGS_hv_trace.o = -I$(src)
CFLAGS_hv_balloon.o = -I$(src)
diff --git a/drivers/hv/dxgkrnl/Kconfig b/drivers/hv/dxgkrnl/Kconfig
new file mode 100644
index 000000000000..bcd92bbff939
--- /dev/null
+++ b/drivers/hv/dxgkrnl/Kconfig
@@ -0,0 +1,26 @@
+# SPDX-License-Identifier: GPL-2.0
+# Configuration for the Hyper-V virtual compute driver (dxgkrnl)
+#
+
+config DXGKRNL
+ tristate "Microsoft Paravirtualized GPU support"
+ depends on HYPERV
+ depends on 64BIT || COMPILE_TEST
+ help
+ This driver supports paravirtualized virtual compute devices exposed
+ by Microsoft Hyper-V when Linux is running inside a virtual machine
+ hosted by Windows. The virtual machine needs to be configured to use
+ host compute adapters. The driver name is dxgkrnl.
+
+ An example of such a virtual machine is a Windows Subsystem for
+ Linux container. When such a container is instantiated, the Windows host
+ assigns compatible host GPU adapters to the container. The corresponding
+ virtual GPU devices appear on the PCI bus in the container. These
+ devices are enumerated and accessed by this driver.
+
+ Communication with the driver is done using the Microsoft libdxcore
+ library, which translates the D3DKMT interface
+ <https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/d3dkmthk/>
+ to the driver IOCTLs. The virtual GPU devices are paravirtualized,
+ which means that access to the hardware is done in the host. The driver
+ communicates with the host using Hyper-V VMBus communication channels.
diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile
new file mode 100644
index 000000000000..76349064b60a
--- /dev/null
+++ b/drivers/hv/dxgkrnl/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+# Makefile for the Hyper-V compute device driver (dxgkrnl).
+
+obj-$(CONFIG_DXGKRNL) += dxgkrnl.o
+dxgkrnl-y := dxgmodule.o dxgvmbus.o
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
new file mode 100644
index 000000000000..f7900840d1ed
--- /dev/null
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -0,0 +1,155 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright (c) 2022, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * Headers for internal objects
+ *
+ */
+
+#ifndef _DXGKRNL_H
+#define _DXGKRNL_H
+
+#include <linux/uuid.h>
+#include <linux/kernel.h>
+#include <linux/mutex.h>
+#include <linux/semaphore.h>
+#include <linux/refcount.h>
+#include <linux/rwsem.h>
+#include <linux/atomic.h>
+#include <linux/spinlock.h>
+#include <linux/gfp.h>
+#include <linux/miscdevice.h>
+#include <linux/pci.h>
+#include <linux/hyperv.h>
+#include <uapi/misc/d3dkmthk.h>
+#include <linux/version.h>
+
+struct dxgadapter;
+
+/*
+ * Driver private data.
+ * A single /dev/dxg device is created per virtual machine.
+ */
+struct dxgdriver {
+ struct dxgglobal *dxgglobal;
+ struct device *dxgdev;
+ struct pci_driver pci_drv;
+ struct hv_driver vmbus_drv;
+};
+extern struct dxgdriver dxgdrv;
+
+#define DXGDEV dxgdrv.dxgdev
+
+struct dxgvmbuschannel {
+ struct vmbus_channel *channel;
+ struct hv_device *hdev;
+ spinlock_t packet_list_mutex;
+ struct list_head packet_list_head;
+ struct kmem_cache *packet_cache;
+ atomic64_t packet_request_id;
+};
+
+int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev);
+void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch);
+void dxgvmbuschannel_receive(void *ctx);
+
+/*
+ * The structure defines an offered vGPU vm bus channel.
+ */
+struct dxgvgpuchannel {
+ struct list_head vgpu_ch_list_entry;
+ struct winluid adapter_luid;
+ struct hv_device *hdev;
+};
+
+struct dxgglobal {
+ struct dxgdriver *drvdata;
+ struct dxgvmbuschannel channel;
+ struct hv_device *hdev;
+ u32 num_adapters;
+ u32 vmbus_ver; /* Interface version */
+ struct resource *mem;
+ u64 mmiospace_base;
+ u64 mmiospace_size;
+ struct miscdevice dxgdevice;
+ struct mutex device_mutex;
+
+ /*
+ * List of the vGPU VM bus channels (dxgvgpuchannel)
+ * Protected by device_mutex
+ */
+ struct list_head vgpu_ch_list_head;
+
+ /* protects access to the global VM bus channel */
+ struct rw_semaphore channel_lock;
+
+ bool global_channel_initialized;
+ bool async_msg_enabled;
+ bool misc_registered;
+ bool pci_registered;
+ bool vmbus_registered;
+};
+
+static inline struct dxgglobal *dxggbl(void)
+{
+ return dxgdrv.dxgglobal;
+}
+
+struct dxgprocess {
+ /* Placeholder */
+};
+
+/*
+ * The convention is that the VMBus instance ID is a GUID, but the host sets
+ * the lower part of the value to the host adapter LUID. The function
+ * provides the necessary conversion.
+ */
+static inline void guid_to_luid(guid_t *guid, struct winluid *luid)
+{
+ *luid = *(struct winluid *)&guid->b[0];
+}
+
+/*
+ * VM bus interface
+ *
+ */
+
+/*
+ * The interface version is used to ensure that the host and the guest use the
+ * same VM bus protocol. It needs to be incremented every time the VM bus
+ * interface changes. DXGK_VMBUS_LAST_COMPATIBLE_INTERFACE_VERSION is
+ * incremented each time the earlier versions of the interface are no longer
+ * compatible with the current version.
+ */
+#define DXGK_VMBUS_INTERFACE_VERSION_OLD 27
+#define DXGK_VMBUS_INTERFACE_VERSION 40
+#define DXGK_VMBUS_LAST_COMPATIBLE_INTERFACE_VERSION 16
+
+#ifdef DEBUG
+
+void dxgk_validate_ioctls(void);
+
+#define DXG_TRACE(fmt, ...) do { \
+ trace_printk(dev_fmt(fmt) "\n", ##__VA_ARGS__); \
+} while (0)
+
+#define DXG_ERR(fmt, ...) do { \
+ dev_err(DXGDEV, fmt, ##__VA_ARGS__); \
+ trace_printk("*** dxgkerror *** " dev_fmt(fmt) "\n", ##__VA_ARGS__); \
+} while (0)
+
+#else
+
+#define DXG_TRACE(...)
+#define DXG_ERR(fmt, ...) do { \
+ dev_err(DXGDEV, fmt, ##__VA_ARGS__); \
+} while (0)
+
+#endif /* DEBUG */
+
+#endif
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
new file mode 100644
index 000000000000..de02edc4d023
--- /dev/null
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -0,0 +1,506 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (c) 2022, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * Interface with Linux kernel, PCI driver and the VM bus driver
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/eventfd.h>
+#include <linux/hyperv.h>
+#include <linux/pci.h>
+#include "dxgkrnl.h"
+
+#define PCI_VENDOR_ID_MICROSOFT 0x1414
+#define PCI_DEVICE_ID_VIRTUAL_RENDER 0x008E
+
+#undef pr_fmt
+#define pr_fmt(fmt) "dxgk: " fmt
+
+/*
+ * Interface from dxgglobal
+ */
+
+struct vmbus_channel *dxgglobal_get_vmbus(void)
+{
+ return dxggbl()->channel.channel;
+}
+
+struct dxgvmbuschannel *dxgglobal_get_dxgvmbuschannel(void)
+{
+ return &dxggbl()->channel;
+}
+
+int dxgglobal_acquire_channel_lock(void)
+{
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ down_read(&dxgglobal->channel_lock);
+ if (dxgglobal->channel.channel == NULL) {
+ up_read(&dxgglobal->channel_lock);
+ DXG_ERR("Failed to acquire global channel lock");
+ return -ENODEV;
+ } else {
+ return 0;
+ }
+
+void dxgglobal_release_channel_lock(void)
+{
+ up_read(&dxggbl()->channel_lock);
+}
+
+const struct file_operations dxgk_fops = {
+ .owner = THIS_MODULE,
+};
+
+/*
+ * Interface with the PCI driver
+ */
+
+/*
+ * Part of the PCI config space of the compute device is used for
+ * configuration data. Reading/writing of the PCI config space is forwarded
+ * to the host.
+ *
+ * Below are offsets in the PCI config spaces for various configuration values.
+ */
+
+/* Compute device VM bus channel instance ID */
+#define DXGK_VMBUS_CHANNEL_ID_OFFSET 192
+
+/* DXGK_VMBUS_INTERFACE_VERSION (u32) */
+#define DXGK_VMBUS_VERSION_OFFSET (DXGK_VMBUS_CHANNEL_ID_OFFSET + \
+ sizeof(guid_t))
+
+/* Luid of the virtual GPU on the host (struct winluid) */
+#define DXGK_VMBUS_VGPU_LUID_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \
+ sizeof(u32))
+
+/* The guest writes its capabilities to this address */
+#define DXGK_VMBUS_GUESTCAPS_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \
+ sizeof(u32))
+
+/* Capabilities of the guest driver, reported to the host */
+struct dxgk_vmbus_guestcaps {
+ union {
+ struct {
+ u32 wsl2 : 1;
+ u32 reserved : 31;
+ };
+ u32 guest_caps;
+ };
+};
+
+/*
+ * A helper function to read PCI config space.
+ */
+static int dxg_pci_read_dwords(struct pci_dev *dev, int offset, int size,
+ void *val)
+{
+ int off = offset;
+ int ret;
+ int i;
+
+ /* Make sure the offset and size are 32 bit aligned */
+ if (offset & 3 || size & 3)
+ return -EINVAL;
+
+ for (i = 0; i < size / sizeof(int); i++) {
+ ret = pci_read_config_dword(dev, off, &((int *)val)[i]);
+ if (ret) {
+ DXG_ERR("Failed to read PCI config: %d", off);
+ return ret;
+ }
+ off += sizeof(int);
+ }
+ return 0;
+}
+
+static int dxg_pci_probe_device(struct pci_dev *dev,
+ const struct pci_device_id *id)
+{
+ int ret;
+ guid_t guid;
+ u32 vmbus_interface_ver = DXGK_VMBUS_INTERFACE_VERSION;
+ struct winluid vgpu_luid = {};
+ struct dxgk_vmbus_guestcaps guest_caps = {.wsl2 = 1};
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ mutex_lock(&dxgglobal->device_mutex);
+
+ if (dxgglobal->vmbus_ver == 0) {
+ /* Report capabilities to the host */
+
+ ret = pci_write_config_dword(dev, DXGK_VMBUS_GUESTCAPS_OFFSET,
+ guest_caps.guest_caps);
+ if (ret)
+ goto cleanup;
+
+ /* Negotiate the VM bus version */
+
+ ret = pci_read_config_dword(dev, DXGK_VMBUS_VERSION_OFFSET,
+ &vmbus_interface_ver);
+ if (ret == 0 && vmbus_interface_ver != 0)
+ dxgglobal->vmbus_ver = vmbus_interface_ver;
+ else
+ dxgglobal->vmbus_ver = DXGK_VMBUS_INTERFACE_VERSION_OLD;
+
+ if (dxgglobal->vmbus_ver < DXGK_VMBUS_INTERFACE_VERSION)
+ goto read_channel_id;
+
+ ret = pci_write_config_dword(dev, DXGK_VMBUS_VERSION_OFFSET,
+ DXGK_VMBUS_INTERFACE_VERSION);
+ if (ret)
+ goto cleanup;
+
+ if (dxgglobal->vmbus_ver > DXGK_VMBUS_INTERFACE_VERSION)
+ dxgglobal->vmbus_ver = DXGK_VMBUS_INTERFACE_VERSION;
+ }
+
+read_channel_id:
+
+ /* Get the VM bus channel ID for the virtual GPU */
+ ret = dxg_pci_read_dwords(dev, DXGK_VMBUS_CHANNEL_ID_OFFSET,
+ sizeof(guid), (int *)&guid);
+ if (ret)
+ goto cleanup;
+
+ if (dxgglobal->vmbus_ver >= DXGK_VMBUS_INTERFACE_VERSION) {
+ ret = dxg_pci_read_dwords(dev, DXGK_VMBUS_VGPU_LUID_OFFSET,
+ sizeof(vgpu_luid), &vgpu_luid);
+ if (ret)
+ goto cleanup;
+ }
+
+ DXG_TRACE("Adapter channel: %pUb", &guid);
+ DXG_TRACE("Vmbus interface version: %d", dxgglobal->vmbus_ver);
+ DXG_TRACE("Host luid: %x-%x", vgpu_luid.b, vgpu_luid.a);
+
+cleanup:
+
+ mutex_unlock(&dxgglobal->device_mutex);
+
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+static void dxg_pci_remove_device(struct pci_dev *dev)
+{
+ /* Placeholder */
+}
+
+static struct pci_device_id dxg_pci_id_table[] = {
+ {
+ .vendor = PCI_VENDOR_ID_MICROSOFT,
+ .device = PCI_DEVICE_ID_VIRTUAL_RENDER,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID
+ },
+ { 0 }
+};
+
+/*
+ * Interface with the VM bus driver
+ */
+
+static int dxgglobal_getiospace(struct dxgglobal *dxgglobal)
+{
+ /* Get mmio space for the global channel */
+ struct hv_device *hdev = dxgglobal->hdev;
+ struct vmbus_channel *channel = hdev->channel;
+ resource_size_t pot_start = 0;
+ resource_size_t pot_end = -1;
+ int ret;
+
+ dxgglobal->mmiospace_size = channel->offermsg.offer.mmio_megabytes;
+ if (dxgglobal->mmiospace_size == 0) {
+ DXG_TRACE("Zero mmio space is offered");
+ return -ENOMEM;
+ }
+ dxgglobal->mmiospace_size <<= 20;
+ DXG_TRACE("mmio offered: %llx", dxgglobal->mmiospace_size);
+
+ ret = vmbus_allocate_mmio(&dxgglobal->mem, hdev, pot_start, pot_end,
+ dxgglobal->mmiospace_size, 0x10000, false);
+ if (ret) {
+ DXG_ERR("Unable to allocate mmio memory: %d", ret);
+ return ret;
+ }
+ dxgglobal->mmiospace_size = dxgglobal->mem->end -
+ dxgglobal->mem->start + 1;
+ dxgglobal->mmiospace_base = dxgglobal->mem->start;
+ DXG_TRACE("mmio allocated %llx %llx %llx %llx",
+ dxgglobal->mmiospace_base, dxgglobal->mmiospace_size,
+ dxgglobal->mem->start, dxgglobal->mem->end);
+
+ return 0;
+}
+
+int dxgglobal_init_global_channel(void)
+{
+ int ret = 0;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ ret = dxgvmbuschannel_init(&dxgglobal->channel, dxgglobal->hdev);
+ if (ret) {
+ DXG_ERR("dxgvmbuschannel_init failed: %d", ret);
+ goto error;
+ }
+
+ ret = dxgglobal_getiospace(dxgglobal);
+ if (ret) {
+ DXG_ERR("getiospace failed: %d", ret);
+ goto error;
+ }
+
+ hv_set_drvdata(dxgglobal->hdev, dxgglobal);
+
+error:
+ return ret;
+}
+
+void dxgglobal_destroy_global_channel(void)
+{
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ down_write(&dxgglobal->channel_lock);
+
+ dxgglobal->global_channel_initialized = false;
+
+ if (dxgglobal->mem) {
+ vmbus_free_mmio(dxgglobal->mmiospace_base,
+ dxgglobal->mmiospace_size);
+ dxgglobal->mem = NULL;
+ }
+
+ dxgvmbuschannel_destroy(&dxgglobal->channel);
+
+ if (dxgglobal->hdev) {
+ hv_set_drvdata(dxgglobal->hdev, NULL);
+ dxgglobal->hdev = NULL;
+ }
+
+ up_write(&dxgglobal->channel_lock);
+}
+
+static const struct hv_vmbus_device_id dxg_vmbus_id_table[] = {
+ /* Per GPU Device GUID */
+ { HV_GPUP_DXGK_VGPU_GUID },
+ /* Global Dxgkgnl channel for the virtual machine */
+ { HV_GPUP_DXGK_GLOBAL_GUID },
+ { }
+};
+
+static int dxg_probe_vmbus(struct hv_device *hdev,
+ const struct hv_vmbus_device_id *dev_id)
+{
+ int ret = 0;
+ struct winluid luid;
+ struct dxgvgpuchannel *vgpuch;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ mutex_lock(&dxgglobal->device_mutex);
+
+ if (uuid_le_cmp(hdev->dev_type, dxg_vmbus_id_table[0].guid) == 0) {
+ /* This is a new virtual GPU channel */
+ guid_to_luid(&hdev->channel->offermsg.offer.if_instance, &luid);
+ DXG_TRACE("vGPU channel: %pUb",
+ &hdev->channel->offermsg.offer.if_instance);
+ vgpuch = kzalloc(sizeof(struct dxgvgpuchannel), GFP_KERNEL);
+ if (vgpuch == NULL) {
+ ret = -ENOMEM;
+ goto error;
+ }
+ vgpuch->adapter_luid = luid;
+ vgpuch->hdev = hdev;
+ list_add_tail(&vgpuch->vgpu_ch_list_entry,
+ &dxgglobal->vgpu_ch_list_head);
+ } else if (uuid_le_cmp(hdev->dev_type,
+ dxg_vmbus_id_table[1].guid) == 0) {
+ /* This is the global Dxgkgnl channel */
+ DXG_TRACE("Global channel: %pUb",
+ &hdev->channel->offermsg.offer.if_instance);
+ if (dxgglobal->hdev) {
+ /* This device should appear only once */
+ DXG_ERR("global channel already exists");
+ ret = -EBADE;
+ goto error;
+ }
+ dxgglobal->hdev = hdev;
+ } else {
+ /* Unknown device type */
+ DXG_ERR("Unknown VM bus device type");
+ ret = -ENODEV;
+ }
+
+error:
+
+ mutex_unlock(&dxgglobal->device_mutex);
+
+ return ret;
+}
+
+static int dxg_remove_vmbus(struct hv_device *hdev)
+{
+ int ret = 0;
+ struct dxgvgpuchannel *vgpu_channel;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ mutex_lock(&dxgglobal->device_mutex);
+
+ if (uuid_le_cmp(hdev->dev_type, dxg_vmbus_id_table[0].guid) == 0) {
+ DXG_TRACE("Remove virtual GPU channel");
+ list_for_each_entry(vgpu_channel,
+ &dxgglobal->vgpu_ch_list_head,
+ vgpu_ch_list_entry) {
+ if (vgpu_channel->hdev == hdev) {
+ list_del(&vgpu_channel->vgpu_ch_list_entry);
+ kfree(vgpu_channel);
+ break;
+ }
+ }
+ } else if (uuid_le_cmp(hdev->dev_type,
+ dxg_vmbus_id_table[1].guid) == 0) {
+ DXG_TRACE("Remove global channel device");
+ dxgglobal_destroy_global_channel();
+ } else {
+ /* Unknown device type */
+ DXG_ERR("Unknown device type");
+ ret = -ENODEV;
+ }
+
+ mutex_unlock(&dxgglobal->device_mutex);
+
+ return ret;
+}
+
+MODULE_DEVICE_TABLE(vmbus, dxg_vmbus_id_table);
+MODULE_DEVICE_TABLE(pci, dxg_pci_id_table);
+
+/*
+ * Global driver data
+ */
+
+struct dxgdriver dxgdrv = {
+ .vmbus_drv.name = KBUILD_MODNAME,
+ .vmbus_drv.id_table = dxg_vmbus_id_table,
+ .vmbus_drv.probe = dxg_probe_vmbus,
+ .vmbus_drv.remove = dxg_remove_vmbus,
+ .vmbus_drv.driver = {
+ .probe_type = PROBE_PREFER_ASYNCHRONOUS,
+ },
+ .pci_drv.name = KBUILD_MODNAME,
+ .pci_drv.id_table = dxg_pci_id_table,
+ .pci_drv.probe = dxg_pci_probe_device,
+ .pci_drv.remove = dxg_pci_remove_device
+};
+
+static struct dxgglobal *dxgglobal_create(void)
+{
+ struct dxgglobal *dxgglobal;
+
+ dxgglobal = kzalloc(sizeof(struct dxgglobal), GFP_KERNEL);
+ if (!dxgglobal)
+ return NULL;
+
+ mutex_init(&dxgglobal->device_mutex);
+
+ INIT_LIST_HEAD(&dxgglobal->vgpu_ch_list_head);
+
+ init_rwsem(&dxgglobal->channel_lock);
+
+ return dxgglobal;
+}
+
+static void dxgglobal_destroy(struct dxgglobal *dxgglobal)
+{
+ if (dxgglobal) {
+ mutex_lock(&dxgglobal->device_mutex);
+ dxgglobal_destroy_global_channel();
+ mutex_unlock(&dxgglobal->device_mutex);
+
+ if (dxgglobal->vmbus_registered)
+ vmbus_driver_unregister(&dxgdrv.vmbus_drv);
+
+ dxgglobal_destroy_global_channel();
+
+ if (dxgglobal->pci_registered)
+ pci_unregister_driver(&dxgdrv.pci_drv);
+
+ if (dxgglobal->misc_registered)
+ misc_deregister(&dxgglobal->dxgdevice);
+
+ dxgglobal->drvdata->dxgdev = NULL;
+
+ kfree(dxgglobal);
+ dxgglobal = NULL;
+ }
+}
+
+static int __init dxg_drv_init(void)
+{
+ int ret;
+ struct dxgglobal *dxgglobal = NULL;
+
+ dxgglobal = dxgglobal_create();
+ if (dxgglobal == NULL) {
+ pr_err("dxgglobal_init failed");
+ ret = -ENOMEM;
+ goto error;
+ }
+ dxgglobal->drvdata = &dxgdrv;
+
+ dxgglobal->dxgdevice.minor = MISC_DYNAMIC_MINOR;
+ dxgglobal->dxgdevice.name = "dxg";
+ dxgglobal->dxgdevice.fops = &dxgk_fops;
+ dxgglobal->dxgdevice.mode = 0666;
+ ret = misc_register(&dxgglobal->dxgdevice);
+ if (ret) {
+ pr_err("misc_register failed: %d", ret);
+ goto error;
+ }
+ dxgglobal->misc_registered = true;
+ dxgdrv.dxgdev = dxgglobal->dxgdevice.this_device;
+ dxgdrv.dxgglobal = dxgglobal;
+
+ ret = vmbus_driver_register(&dxgdrv.vmbus_drv);
+ if (ret) {
+ DXG_ERR("vmbus_driver_register failed: %d", ret);
+ goto error;
+ }
+ dxgglobal->vmbus_registered = true;
+
+ ret = pci_register_driver(&dxgdrv.pci_drv);
+ if (ret) {
+ DXG_ERR("pci_driver_register failed: %d", ret);
+ goto error;
+ }
+ dxgglobal->pci_registered = true;
+
+ return 0;
+
+error:
+ /* This function does the cleanup */
+ dxgglobal_destroy(dxgglobal);
+ dxgdrv.dxgglobal = NULL;
+
+ return ret;
+}
+
+static void __exit dxg_drv_exit(void)
+{
+ dxgglobal_destroy(dxgdrv.dxgglobal);
+}
+
+module_init(dxg_drv_init);
+module_exit(dxg_drv_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual compute device Driver");
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
new file mode 100644
index 000000000000..deb880e34377
--- /dev/null
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -0,0 +1,92 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (c) 2022, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * VM bus interface implementation
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/completion.h>
+#include <linux/slab.h>
+#include <linux/eventfd.h>
+#include <linux/hyperv.h>
+#include <linux/mman.h>
+#include <linux/delay.h>
+#include <linux/pagemap.h>
+#include "dxgkrnl.h"
+#include "dxgvmbus.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "dxgk: " fmt
+
+#define RING_BUFSIZE (256 * 1024)
+
+/*
+ * The structure is used to track VM bus packets, waiting for completion.
+ */
+struct dxgvmbuspacket {
+ struct list_head packet_list_entry;
+ u64 request_id;
+ struct completion wait;
+ void *buffer;
+ u32 buffer_length;
+ int status;
+ bool completed;
+};
+
+int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev)
+{
+ int ret;
+
+ ch->hdev = hdev;
+ spin_lock_init(&ch->packet_list_mutex);
+ INIT_LIST_HEAD(&ch->packet_list_head);
+ atomic64_set(&ch->packet_request_id, 0);
+
+ ch->packet_cache = kmem_cache_create("DXGK packet cache",
+ sizeof(struct dxgvmbuspacket), 0,
+ 0, NULL);
+ if (ch->packet_cache == NULL) {
+ DXG_ERR("packet_cache alloc failed");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 15, 0)
+ hdev->channel->max_pkt_size = DXG_MAX_VM_BUS_PACKET_SIZE;
+#endif
+ ret = vmbus_open(hdev->channel, RING_BUFSIZE, RING_BUFSIZE,
+ NULL, 0, dxgvmbuschannel_receive, ch);
+ if (ret) {
+ DXG_ERR("vmbus_open failed: %d", ret);
+ goto cleanup;
+ }
+
+ ch->channel = hdev->channel;
+
+cleanup:
+
+ return ret;
+}
+
+void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch)
+{
+ kmem_cache_destroy(ch->packet_cache);
+ ch->packet_cache = NULL;
+
+ if (ch->channel) {
+ vmbus_close(ch->channel);
+ ch->channel = NULL;
+ }
+}
+
+/* Receive callback for messages from the host */
+void dxgvmbuschannel_receive(void *ctx)
+{
+}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
new file mode 100644
index 000000000000..6cdca5e03d1f
--- /dev/null
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright (c) 2022, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * VM bus interface with the host definitions
+ *
+ */
+
+#ifndef _DXGVMBUS_H
+#define _DXGVMBUS_H
+
+#define DXG_MAX_VM_BUS_PACKET_SIZE (1024 * 128)
+
+#endif /* _DXGVMBUS_H */
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
new file mode 100644
index 000000000000..5d973604400c
--- /dev/null
+++ b/include/uapi/misc/d3dkmthk.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+
+/*
+ * Copyright (c) 2019, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * User mode WDDM interface definitions
+ *
+ */
+
+#ifndef _D3DKMTHK_H
+#define _D3DKMTHK_H
+
+/*
+ * Matches the Windows LUID definition.
+ * LUID is a locally unique identifier (similar to GUID, but not global),
+ * which is guaranteed to be unique until the computer is rebooted.
+ */
+struct winluid {
+ __u32 a;
+ __u32 b;
+};
+
+#endif /* _D3DKMTHK_H */
* [PATCH 02/55] drivers: hv: dxgkrnl: Add VMBus message support, initialize VMBus channels
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
2026-03-19 20:24 ` [PATCH 01/55] drivers: hv: dxgkrnl: Driver initialization and loading Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 03/55] drivers: hv: dxgkrnl: Creation of dxgadapter object Eric Curtin
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement support for sending/receiving VMBus messages between
the host and the guest.
Initialize the VMBus channels and notify the host about IO space
settings of the VMBus global channel.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 14 ++
drivers/hv/dxgkrnl/dxgmodule.c | 9 +-
drivers/hv/dxgkrnl/dxgvmbus.c | 318 +++++++++++++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 67 +++++++
drivers/hv/dxgkrnl/ioctl.c | 24 +++
drivers/hv/dxgkrnl/misc.h | 72 ++++++++
include/uapi/misc/d3dkmthk.h | 34 ++++
7 files changed, 536 insertions(+), 2 deletions(-)
create mode 100644 drivers/hv/dxgkrnl/ioctl.c
create mode 100644 drivers/hv/dxgkrnl/misc.h
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index f7900840d1ed..52b9e82c51e6 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -28,6 +28,8 @@
#include <linux/hyperv.h>
#include <uapi/misc/d3dkmthk.h>
#include <linux/version.h>
+#include "misc.h"
struct dxgadapter;
@@ -100,6 +102,13 @@ static inline struct dxgglobal *dxggbl(void)
return dxgdrv.dxgglobal;
}
+int dxgglobal_init_global_channel(void);
+void dxgglobal_destroy_global_channel(void);
+struct vmbus_channel *dxgglobal_get_vmbus(void);
+struct dxgvmbuschannel *dxgglobal_get_dxgvmbuschannel(void);
+int dxgglobal_acquire_channel_lock(void);
+void dxgglobal_release_channel_lock(void);
+
struct dxgprocess {
/* Placeholder */
};
@@ -130,6 +139,11 @@ static inline void guid_to_luid(guid_t *guid, struct winluid *luid)
#define DXGK_VMBUS_INTERFACE_VERSION 40
#define DXGK_VMBUS_LAST_COMPATIBLE_INTERFACE_VERSION 16
+void dxgvmb_initialize(void);
+int dxgvmb_send_set_iospace_region(u64 start, u64 len);
+
+int ntstatus2int(struct ntstatus status);
+
#ifdef DEBUG
void dxgk_validate_ioctls(void);
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index de02edc4d023..e55639dc0adc 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -260,6 +260,13 @@ int dxgglobal_init_global_channel(void)
goto error;
}
+ ret = dxgvmb_send_set_iospace_region(dxgglobal->mmiospace_base,
+ dxgglobal->mmiospace_size);
+ if (ret < 0) {
+ DXG_ERR("send_set_iospace_region failed");
+ goto error;
+ }
+
hv_set_drvdata(dxgglobal->hdev, dxgglobal);
error:
@@ -429,8 +436,6 @@ static void dxgglobal_destroy(struct dxgglobal *dxgglobal)
if (dxgglobal->vmbus_registered)
vmbus_driver_unregister(&dxgdrv.vmbus_drv);
- dxgglobal_destroy_global_channel();
-
if (dxgglobal->pci_registered)
pci_unregister_driver(&dxgdrv.pci_drv);
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index deb880e34377..a4365739826a 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -40,6 +40,121 @@ struct dxgvmbuspacket {
bool completed;
};
+struct dxgvmb_ext_header {
+ /* Offset from the start of the message to DXGKVMB_COMMAND_BASE */
+ u32 command_offset;
+ u32 reserved;
+ struct winluid vgpu_luid;
+};
+
+#define VMBUSMESSAGEONSTACK 64
+
+struct dxgvmbusmsg {
+/* Points to the allocated buffer */
+ struct dxgvmb_ext_header *hdr;
+/* Points to dxgkvmb_command_vm_to_host or dxgkvmb_command_vgpu_to_host */
+ void *msg;
+/* The vm bus channel, used to pass the message to the host */
+ struct dxgvmbuschannel *channel;
+/* Message size in bytes including the header and the payload */
+ u32 size;
+/* Buffer used for small messages */
+ char msg_on_stack[VMBUSMESSAGEONSTACK];
+};
+
+struct dxgvmbusmsgres {
+/* Points to the allocated buffer */
+ struct dxgvmb_ext_header *hdr;
+/* Points to dxgkvmb_command_vm_to_host or dxgkvmb_command_vgpu_to_host */
+ void *msg;
+/* The vm bus channel, used to pass the message to the host */
+ struct dxgvmbuschannel *channel;
+/* Message size in bytes including the header, the payload and the result */
+ u32 size;
+/* Result buffer size in bytes */
+ u32 res_size;
+/* Points to the result within the allocated buffer */
+ void *res;
+};
+
+static int init_message(struct dxgvmbusmsg *msg,
+ struct dxgprocess *process, u32 size)
+{
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ bool use_ext_header = dxgglobal->vmbus_ver >=
+ DXGK_VMBUS_INTERFACE_VERSION;
+
+ if (use_ext_header)
+ size += sizeof(struct dxgvmb_ext_header);
+ msg->size = size;
+ if (size <= VMBUSMESSAGEONSTACK) {
+ msg->hdr = (void *)msg->msg_on_stack;
+ memset(msg->hdr, 0, size);
+ } else {
+ msg->hdr = vzalloc(size);
+ if (msg->hdr == NULL)
+ return -ENOMEM;
+ }
+ if (use_ext_header) {
+ msg->msg = (char *)&msg->hdr[1];
+ msg->hdr->command_offset = sizeof(msg->hdr[0]);
+ } else {
+ msg->msg = (char *)msg->hdr;
+ }
+ msg->channel = &dxgglobal->channel;
+ return 0;
+}
+
+static void free_message(struct dxgvmbusmsg *msg, struct dxgprocess *process)
+{
+ if (msg->hdr && (char *)msg->hdr != msg->msg_on_stack)
+ vfree(msg->hdr);
+}
+
+/*
+ * Helper functions
+ */
+
+int ntstatus2int(struct ntstatus status)
+{
+ if (NT_SUCCESS(status))
+ return (int)status.v;
+ switch (status.v) {
+ case STATUS_OBJECT_NAME_COLLISION:
+ return -EEXIST;
+ case STATUS_NO_MEMORY:
+ return -ENOMEM;
+ case STATUS_INVALID_PARAMETER:
+ return -EINVAL;
+ case STATUS_OBJECT_NAME_INVALID:
+ case STATUS_OBJECT_NAME_NOT_FOUND:
+ return -ENOENT;
+ case STATUS_TIMEOUT:
+ return -EAGAIN;
+ case STATUS_BUFFER_TOO_SMALL:
+ return -EOVERFLOW;
+ case STATUS_DEVICE_REMOVED:
+ return -ENODEV;
+ case STATUS_ACCESS_DENIED:
+ return -EACCES;
+ case STATUS_NOT_SUPPORTED:
+ return -EPERM;
+ case STATUS_ILLEGAL_INSTRUCTION:
+ return -EOPNOTSUPP;
+ case STATUS_INVALID_HANDLE:
+ return -EBADF;
+ case STATUS_GRAPHICS_ALLOCATION_BUSY:
+ return -EINPROGRESS;
+ case STATUS_OBJECT_TYPE_MISMATCH:
+ return -EPROTOTYPE;
+ case STATUS_NOT_IMPLEMENTED:
+ return -EPERM;
+ default:
+ return -EINVAL;
+ }
+}
+
int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev)
{
int ret;
@@ -86,7 +201,210 @@ void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch)
}
}
+static void command_vm_to_host_init1(struct dxgkvmb_command_vm_to_host *command,
+ enum dxgkvmb_commandtype_global type)
+{
+ command->command_type = type;
+ command->process.v = 0;
+ command->command_id = 0;
+ command->channel_type = DXGKVMB_VM_TO_HOST;
+}
+
+static void process_inband_packet(struct dxgvmbuschannel *channel,
+ struct vmpacket_descriptor *desc)
+{
+ u32 packet_length = hv_pkt_datalen(desc);
+ struct dxgkvmb_command_host_to_vm *packet;
+
+ if (packet_length < sizeof(struct dxgkvmb_command_host_to_vm)) {
+ DXG_ERR("Invalid global packet");
+ } else {
+ packet = hv_pkt_data(desc);
+ DXG_TRACE("global packet %d",
+ packet->command_type);
+ switch (packet->command_type) {
+ case DXGK_VMBCOMMAND_SIGNALGUESTEVENT:
+ case DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE:
+ break;
+ case DXGK_VMBCOMMAND_SENDWNFNOTIFICATION:
+ break;
+ default:
+ DXG_ERR("unexpected host message %d",
+ packet->command_type);
+ }
+ }
+}
+
+static void process_completion_packet(struct dxgvmbuschannel *channel,
+ struct vmpacket_descriptor *desc)
+{
+ struct dxgvmbuspacket *packet = NULL;
+ struct dxgvmbuspacket *entry;
+ u32 packet_length = hv_pkt_datalen(desc);
+ unsigned long flags;
+
+ spin_lock_irqsave(&channel->packet_list_mutex, flags);
+ list_for_each_entry(entry, &channel->packet_list_head,
+ packet_list_entry) {
+ if (desc->trans_id == entry->request_id) {
+ packet = entry;
+ list_del(&packet->packet_list_entry);
+ packet->completed = true;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&channel->packet_list_mutex, flags);
+ if (packet) {
+ if (packet->buffer_length) {
+ if (packet_length < packet->buffer_length) {
+ DXG_TRACE("invalid size %d Expected:%d",
+ packet_length,
+ packet->buffer_length);
+ packet->status = -EOVERFLOW;
+ } else {
+ memcpy(packet->buffer, hv_pkt_data(desc),
+ packet->buffer_length);
+ }
+ }
+ complete(&packet->wait);
+ } else {
+ DXG_ERR("did not find packet to complete");
+ }
+}
+
/* Receive callback for messages from the host */
void dxgvmbuschannel_receive(void *ctx)
{
+ struct dxgvmbuschannel *channel = ctx;
+ struct vmpacket_descriptor *desc;
+ u32 packet_length = 0;
+
+ foreach_vmbus_pkt(desc, channel->channel) {
+ packet_length = hv_pkt_datalen(desc);
+ DXG_TRACE("next packet (id, size, type): %llu %d %d",
+ desc->trans_id, packet_length, desc->type);
+ if (desc->type == VM_PKT_COMP) {
+ process_completion_packet(channel, desc);
+ } else {
+ if (desc->type != VM_PKT_DATA_INBAND)
+ DXG_ERR("unexpected packet type");
+ else
+ process_inband_packet(channel, desc);
+ }
+ }
+}
+
+int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel,
+ void *command,
+ u32 cmd_size,
+ void *result,
+ u32 result_size)
+{
+ int ret;
+ struct dxgvmbuspacket *packet = NULL;
+
+ if (cmd_size > DXG_MAX_VM_BUS_PACKET_SIZE ||
+ result_size > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ DXG_ERR("%s invalid data size", __func__);
+ return -EINVAL;
+ }
+
+ packet = kmem_cache_alloc(channel->packet_cache, GFP_KERNEL);
+ if (packet == NULL) {
+ DXG_ERR("kmem_cache_alloc failed");
+ return -ENOMEM;
+ }
+
+ packet->request_id = atomic64_inc_return(&channel->packet_request_id);
+ init_completion(&packet->wait);
+ packet->buffer = result;
+ packet->buffer_length = result_size;
+ packet->status = 0;
+ packet->completed = false;
+ spin_lock_irq(&channel->packet_list_mutex);
+ list_add_tail(&packet->packet_list_entry, &channel->packet_list_head);
+ spin_unlock_irq(&channel->packet_list_mutex);
+
+ ret = vmbus_sendpacket(channel->channel, command, cmd_size,
+ packet->request_id, VM_PKT_DATA_INBAND,
+ VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
+ if (ret) {
+ DXG_ERR("vmbus_sendpacket failed: %x", ret);
+ spin_lock_irq(&channel->packet_list_mutex);
+ list_del(&packet->packet_list_entry);
+ spin_unlock_irq(&channel->packet_list_mutex);
+ goto cleanup;
+ }
+
+ DXG_TRACE("waiting completion: %llu", packet->request_id);
+ ret = wait_for_completion_killable(&packet->wait);
+ if (ret) {
+ DXG_ERR("wait_for_completion failed: %x", ret);
+ spin_lock_irq(&channel->packet_list_mutex);
+ if (!packet->completed)
+ list_del(&packet->packet_list_entry);
+ spin_unlock_irq(&channel->packet_list_mutex);
+ goto cleanup;
+ }
+ DXG_TRACE("completion done: %llu %x",
+ packet->request_id, packet->status);
+ ret = packet->status;
+
+cleanup:
+
+ kmem_cache_free(channel->packet_cache, packet);
+ if (ret < 0)
+ DXG_TRACE("Error: %x", ret);
+ return ret;
+}
+
+static int
+dxgvmb_send_sync_msg_ntstatus(struct dxgvmbuschannel *channel,
+ void *command, u32 cmd_size)
+{
+ struct ntstatus status;
+ int ret;
+
+ ret = dxgvmb_send_sync_msg(channel, command, cmd_size,
+ &status, sizeof(status));
+ if (ret >= 0)
+ ret = ntstatus2int(status);
+ return ret;
+}
+
+/*
+ * Global messages to the host
+ */
+
+int dxgvmb_send_set_iospace_region(u64 start, u64 len)
+{
+ int ret;
+ struct dxgkvmb_command_setiospaceregion *command;
+ struct dxgvmbusmsg msg;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ ret = init_message(&msg, NULL, sizeof(*command));
+ if (ret)
+ return ret;
+ command = (void *)msg.msg;
+
+ ret = dxgglobal_acquire_channel_lock();
+ if (ret < 0)
+ goto cleanup;
+
+ command_vm_to_host_init1(&command->hdr,
+ DXGK_VMBCOMMAND_SETIOSPACEREGION);
+ command->start = start;
+ command->length = len;
+ ret = dxgvmb_send_sync_msg_ntstatus(&dxgglobal->channel, msg.hdr,
+ msg.size);
+ if (ret < 0)
+ DXG_ERR("send_set_iospace_region failed %x", ret);
+
+ dxgglobal_release_channel_lock();
+cleanup:
+ free_message(&msg, NULL);
+ if (ret)
+ DXG_TRACE("Error: %d", ret);
+ return ret;
}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 6cdca5e03d1f..b1bdd6039b73 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -16,4 +16,71 @@
#define DXG_MAX_VM_BUS_PACKET_SIZE (1024 * 128)
+enum dxgkvmb_commandchanneltype {
+ DXGKVMB_VGPU_TO_HOST,
+ DXGKVMB_VM_TO_HOST,
+ DXGKVMB_HOST_TO_VM
+};
+
+/*
+ *
+ * Commands sent to the host via the guest global VM bus channel
+ * DXG_GUEST_GLOBAL_VMBUS
+ *
+ */
+
+enum dxgkvmb_commandtype_global {
+ DXGK_VMBCOMMAND_VM_TO_HOST_FIRST = 1000,
+ DXGK_VMBCOMMAND_CREATEPROCESS = DXGK_VMBCOMMAND_VM_TO_HOST_FIRST,
+ DXGK_VMBCOMMAND_DESTROYPROCESS = 1001,
+ DXGK_VMBCOMMAND_OPENSYNCOBJECT = 1002,
+ DXGK_VMBCOMMAND_DESTROYSYNCOBJECT = 1003,
+ DXGK_VMBCOMMAND_CREATENTSHAREDOBJECT = 1004,
+ DXGK_VMBCOMMAND_DESTROYNTSHAREDOBJECT = 1005,
+ DXGK_VMBCOMMAND_SIGNALFENCE = 1006,
+ DXGK_VMBCOMMAND_NOTIFYPROCESSFREEZE = 1007,
+ DXGK_VMBCOMMAND_NOTIFYPROCESSTHAW = 1008,
+ DXGK_VMBCOMMAND_QUERYETWSESSION = 1009,
+ DXGK_VMBCOMMAND_SETIOSPACEREGION = 1010,
+ DXGK_VMBCOMMAND_COMPLETETRANSACTION = 1011,
+ DXGK_VMBCOMMAND_SHAREOBJECTWITHHOST = 1021,
+ DXGK_VMBCOMMAND_INVALID_VM_TO_HOST
+};
+
+/*
+ * Commands sent by the host to the VM
+ */
+enum dxgkvmb_commandtype_host_to_vm {
+ DXGK_VMBCOMMAND_SIGNALGUESTEVENT,
+ DXGK_VMBCOMMAND_PROPAGATEPRESENTHISTORYTOKEN,
+ DXGK_VMBCOMMAND_SETGUESTDATA,
+ DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE,
+ DXGK_VMBCOMMAND_SENDWNFNOTIFICATION,
+ DXGK_VMBCOMMAND_INVALID_HOST_TO_VM
+};
+
+struct dxgkvmb_command_vm_to_host {
+ u64 command_id;
+ struct d3dkmthandle process;
+ enum dxgkvmb_commandchanneltype channel_type;
+ enum dxgkvmb_commandtype_global command_type;
+};
+
+struct dxgkvmb_command_host_to_vm {
+ u64 command_id;
+ struct d3dkmthandle process;
+ u32 channel_type : 8;
+ u32 async_msg : 1;
+ u32 reserved : 23;
+ enum dxgkvmb_commandtype_host_to_vm command_type;
+};
+
+/* Returns ntstatus */
+struct dxgkvmb_command_setiospaceregion {
+ struct dxgkvmb_command_vm_to_host hdr;
+ u64 start;
+ u64 length;
+ u32 shared_page_gpadl;
+};
+
#endif /* _DXGVMBUS_H */
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
new file mode 100644
index 000000000000..23ecd15b0cd7
--- /dev/null
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (c) 2022, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * Ioctl implementation
+ *
+ */
+
+#include <linux/eventfd.h>
+#include <linux/file.h>
+#include <linux/fs.h>
+#include <linux/anon_inodes.h>
+#include <linux/mman.h>
+
+#include "dxgkrnl.h"
+#include "dxgvmbus.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "dxgk: " fmt
diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h
new file mode 100644
index 000000000000..4c6047c32a20
--- /dev/null
+++ b/drivers/hv/dxgkrnl/misc.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright (c) 2022, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * Misc definitions
+ *
+ */
+
+#ifndef _MISC_H_
+#define _MISC_H_
+
+extern const struct d3dkmthandle zerohandle;
+
+/*
+ * Synchronization lock hierarchy.
+ *
+ * The higher the enum value, the higher the lock order.
+ * When a lower lock is held, the higher lock should not be acquired.
+ *
+ * channel_lock
+ * device_mutex
+ */
+
+/*
+ * Some of the Windows return codes, which need to be translated to Linux
+ * IOCTL return codes. Positive values are success codes and need to be
+ * returned from the driver IOCTLs. libdxcore.so depends on returning
+ * specific return codes.
+ */
+#define STATUS_SUCCESS ((int)(0))
+#define STATUS_OBJECT_NAME_INVALID ((int)(0xC0000033L))
+#define STATUS_DEVICE_REMOVED ((int)(0xC00002B6L))
+#define STATUS_INVALID_HANDLE ((int)(0xC0000008L))
+#define STATUS_ILLEGAL_INSTRUCTION ((int)(0xC000001DL))
+#define STATUS_NOT_IMPLEMENTED ((int)(0xC0000002L))
+#define STATUS_PENDING ((int)(0x00000103L))
+#define STATUS_ACCESS_DENIED ((int)(0xC0000022L))
+#define STATUS_BUFFER_TOO_SMALL ((int)(0xC0000023L))
+#define STATUS_OBJECT_TYPE_MISMATCH ((int)(0xC0000024L))
+#define STATUS_GRAPHICS_ALLOCATION_BUSY ((int)(0xC01E0102L))
+#define STATUS_NOT_SUPPORTED ((int)(0xC00000BBL))
+#define STATUS_TIMEOUT ((int)(0x00000102L))
+#define STATUS_INVALID_PARAMETER ((int)(0xC000000DL))
+#define STATUS_NO_MEMORY ((int)(0xC0000017L))
+#define STATUS_OBJECT_NAME_COLLISION ((int)(0xC0000035L))
+#define STATUS_OBJECT_NAME_NOT_FOUND ((int)(0xC0000034L))
+
+
+#define NT_SUCCESS(status) (status.v >= 0)
+
+#ifndef DEBUG
+
+#define DXGKRNL_ASSERT(exp)
+
+#else
+
+#define DXGKRNL_ASSERT(exp) \
+do { \
+ if (!(exp)) { \
+ dump_stack(); \
+ BUG_ON(true); \
+ } \
+} while (0)
+
+#endif /* DEBUG */
+
+#endif /* _MISC_H_ */
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 5d973604400c..2ea04cc02a1f 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -14,6 +14,40 @@
#ifndef _D3DKMTHK_H
#define _D3DKMTHK_H
+/*
+ * This structure matches the definition of D3DKMTHANDLE in Windows.
+ * The handle is opaque in user mode. It is used by user mode applications to
+ * represent kernel mode objects, created by dxgkrnl.
+ */
+struct d3dkmthandle {
+ union {
+ struct {
+ __u32 instance : 6;
+ __u32 index : 24;
+ __u32 unique : 2;
+ };
+ __u32 v;
+ };
+};
+
+/*
+ * VM bus messages return Windows' NTSTATUS, which is an integer where only
+ * negative values indicate a failure. A positive value is a success code and
+ * needs to be returned to user mode as the IOCTL return code. Negative status
+ * codes are converted to Linux error codes.
+ */
+struct ntstatus {
+ union {
+ struct {
+ int code : 16;
+ int facility : 13;
+ int customer : 1;
+ int severity : 2;
+ };
+ int v;
+ };
+};
+
/*
* Matches the Windows LUID definition.
* LUID is a locally unique identifier (similar to GUID, but not global),
* [PATCH 03/55] drivers: hv: dxgkrnl: Creation of dxgadapter object
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
2026-03-19 20:24 ` [PATCH 01/55] drivers: hv: dxgkrnl: Driver initialization and loading Eric Curtin
2026-03-19 20:24 ` [PATCH 02/55] drivers: hv: dxgkrnl: Add VMBus message support, initialize VMBus channels Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 04/55] drivers: hv: dxgkrnl: Opening of /dev/dxg device and dxgprocess creation Eric Curtin
` (51 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Handle creation and destruction of dxgadapter object, which
represents a virtual compute device, projected to the VM by
the host. The dxgadapter object is created when the
corresponding VMBus channel is offered by Hyper-V.
There could be multiple virtual compute device objects, projected
by the host to the VM. They are enumerated by issuing IOCTLs to
the /dev/dxg device.
The adapter object can start functioning only when the global VMBus
channel and the corresponding per device VMBus channel are
initialized. Notifications about arrival of a virtual compute PCI
device and VMBus channels can happen in any order. Therefore,
the initial dxgadapter object state is DXGADAPTER_STATE_WAITING_VMBUS.
A list of VMBus channels and a list of waiting dxgadapter objects
are maintained. When dxgkrnl is notified about a VMBus channel
arrival, it tries to start all adapters that are not started yet.
Properties of the adapter object are determined by sending VMBus
messages to the host over the corresponding VMBus channel.
When the per virtual compute device VMBus channel or the global
channel are destroyed, the adapter object is destroyed.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/Makefile | 2 +-
drivers/hv/dxgkrnl/dxgadapter.c | 170 +++++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgkrnl.h | 85 +++++++++++++
drivers/hv/dxgkrnl/dxgmodule.c | 204 +++++++++++++++++++++++++++++-
drivers/hv/dxgkrnl/dxgvmbus.c | 217 +++++++++++++++++++++++++++++---
drivers/hv/dxgkrnl/dxgvmbus.h | 128 +++++++++++++++++++
drivers/hv/dxgkrnl/misc.c | 37 ++++++
drivers/hv/dxgkrnl/misc.h | 24 +++-
8 files changed, 844 insertions(+), 23 deletions(-)
create mode 100644 drivers/hv/dxgkrnl/dxgadapter.c
create mode 100644 drivers/hv/dxgkrnl/misc.c
diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile
index 76349064b60a..2ed07d877c91 100644
--- a/drivers/hv/dxgkrnl/Makefile
+++ b/drivers/hv/dxgkrnl/Makefile
@@ -2,4 +2,4 @@
# Makefile for the hyper-v compute device driver (dxgkrnl).
obj-$(CONFIG_DXGKRNL) += dxgkrnl.o
-dxgkrnl-y := dxgmodule.o dxgvmbus.o
+dxgkrnl-y := dxgmodule.o misc.o dxgadapter.o ioctl.o dxgvmbus.o
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
new file mode 100644
index 000000000000..07d47699d255
--- /dev/null
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -0,0 +1,170 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (c) 2022, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * Implementation of dxgadapter and its objects
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/hyperv.h>
+#include <linux/pagemap.h>
+#include <linux/eventfd.h>
+
+#include "dxgkrnl.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "dxgk: " fmt
+
+int dxgadapter_set_vmbus(struct dxgadapter *adapter, struct hv_device *hdev)
+{
+ int ret;
+
+ guid_to_luid(&hdev->channel->offermsg.offer.if_instance,
+ &adapter->luid);
+ DXG_TRACE("%x:%x %p %pUb",
+ adapter->luid.b, adapter->luid.a, hdev->channel,
+ &hdev->channel->offermsg.offer.if_instance);
+
+ ret = dxgvmbuschannel_init(&adapter->channel, hdev);
+ if (ret)
+ goto cleanup;
+
+ adapter->channel.adapter = adapter;
+ adapter->hv_dev = hdev;
+
+ ret = dxgvmb_send_open_adapter(adapter);
+ if (ret < 0) {
+ DXG_ERR("dxgvmb_send_open_adapter failed: %d", ret);
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_get_internal_adapter_info(adapter);
+
+cleanup:
+ if (ret)
+ DXG_ERR("Failed to set vmbus: %d", ret);
+ return ret;
+}
+
+void dxgadapter_start(struct dxgadapter *adapter)
+{
+ struct dxgvgpuchannel *ch = NULL;
+ struct dxgvgpuchannel *entry;
+ int ret;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ DXG_TRACE("%x-%x", adapter->luid.a, adapter->luid.b);
+
+ /* Find the corresponding vGPU vm bus channel */
+ list_for_each_entry(entry, &dxgglobal->vgpu_ch_list_head,
+ vgpu_ch_list_entry) {
+ if (memcmp(&adapter->luid,
+ &entry->adapter_luid,
+ sizeof(struct winluid)) == 0) {
+ ch = entry;
+ break;
+ }
+ }
+ if (ch == NULL) {
+ DXG_TRACE("vGPU channel is not ready");
+ return;
+ }
+
+ /* The global channel is initialized when the first adapter starts */
+ if (!dxgglobal->global_channel_initialized) {
+ ret = dxgglobal_init_global_channel();
+ if (ret) {
+ dxgglobal_destroy_global_channel();
+ return;
+ }
+ dxgglobal->global_channel_initialized = true;
+ }
+
+ /* Initialize vGPU vm bus channel */
+ ret = dxgadapter_set_vmbus(adapter, ch->hdev);
+ if (ret) {
+ DXG_ERR("Failed to start adapter %p", adapter);
+ adapter->adapter_state = DXGADAPTER_STATE_STOPPED;
+ return;
+ }
+
+ adapter->adapter_state = DXGADAPTER_STATE_ACTIVE;
+ DXG_TRACE("Adapter started %p", adapter);
+}
+
+void dxgadapter_stop(struct dxgadapter *adapter)
+{
+ bool adapter_stopped = false;
+
+ down_write(&adapter->core_lock);
+ if (!adapter->stopping_adapter)
+ adapter->stopping_adapter = true;
+ else
+ adapter_stopped = true;
+ up_write(&adapter->core_lock);
+
+ if (adapter_stopped)
+ return;
+
+ if (dxgadapter_acquire_lock_exclusive(adapter) == 0) {
+ dxgvmb_send_close_adapter(adapter);
+ dxgadapter_release_lock_exclusive(adapter);
+ }
+ dxgvmbuschannel_destroy(&adapter->channel);
+
+ adapter->adapter_state = DXGADAPTER_STATE_STOPPED;
+}
+
+void dxgadapter_release(struct kref *refcount)
+{
+ struct dxgadapter *adapter;
+
+ adapter = container_of(refcount, struct dxgadapter, adapter_kref);
+ DXG_TRACE("%p", adapter);
+ kfree(adapter);
+}
+
+bool dxgadapter_is_active(struct dxgadapter *adapter)
+{
+ return adapter->adapter_state == DXGADAPTER_STATE_ACTIVE;
+}
+
+int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter)
+{
+ down_write(&adapter->core_lock);
+ if (adapter->adapter_state != DXGADAPTER_STATE_ACTIVE) {
+ dxgadapter_release_lock_exclusive(adapter);
+ return -ENODEV;
+ }
+ return 0;
+}
+
+void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter)
+{
+ down_write(&adapter->core_lock);
+}
+
+void dxgadapter_release_lock_exclusive(struct dxgadapter *adapter)
+{
+ up_write(&adapter->core_lock);
+}
+
+int dxgadapter_acquire_lock_shared(struct dxgadapter *adapter)
+{
+ down_read(&adapter->core_lock);
+ if (adapter->adapter_state == DXGADAPTER_STATE_ACTIVE)
+ return 0;
+ dxgadapter_release_lock_shared(adapter);
+ return -ENODEV;
+}
+
+void dxgadapter_release_lock_shared(struct dxgadapter *adapter)
+{
+ up_read(&adapter->core_lock);
+}
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 52b9e82c51e6..ba2a7c6001aa 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -47,9 +47,39 @@ extern struct dxgdriver dxgdrv;
#define DXGDEV dxgdrv.dxgdev
+struct dxgk_device_types {
+ u32 post_device:1;
+ u32 post_device_certain:1;
+ u32 software_device:1;
+ u32 soft_gpu_device:1;
+ u32 warp_device:1;
+ u32 bdd_device:1;
+ u32 support_miracast:1;
+ u32 mismatched_lda:1;
+ u32 indirect_display_device:1;
+ u32 xbox_one_device:1;
+ u32 child_id_support_dwm_clone:1;
+ u32 child_id_support_dwm_clone2:1;
+ u32 has_internal_panel:1;
+ u32 rfx_vgpu_device:1;
+ u32 virtual_render_device:1;
+ u32 support_preserve_boot_display:1;
+ u32 is_uefi_frame_buffer:1;
+ u32 removable_device:1;
+ u32 virtual_monitor_device:1;
+};
+
+enum dxgobjectstate {
+ DXGOBJECTSTATE_CREATED,
+ DXGOBJECTSTATE_ACTIVE,
+ DXGOBJECTSTATE_STOPPED,
+ DXGOBJECTSTATE_DESTROYED,
+};
+
struct dxgvmbuschannel {
struct vmbus_channel *channel;
struct hv_device *hdev;
+ struct dxgadapter *adapter;
spinlock_t packet_list_mutex;
struct list_head packet_list_head;
struct kmem_cache *packet_cache;
@@ -81,6 +111,10 @@ struct dxgglobal {
struct miscdevice dxgdevice;
struct mutex device_mutex;
+ /* list of created adapters */
+ struct list_head adapter_list_head;
+ struct rw_semaphore adapter_list_lock;
+
/*
* List of the vGPU VM bus channels (dxgvgpuchannel)
* Protected by device_mutex
@@ -102,6 +136,10 @@ static inline struct dxgglobal *dxggbl(void)
return dxgdrv.dxgglobal;
}
+int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid,
+ struct winluid host_vgpu_luid);
+void dxgglobal_acquire_adapter_list_lock(enum dxglockstate state);
+void dxgglobal_release_adapter_list_lock(enum dxglockstate state);
int dxgglobal_init_global_channel(void);
void dxgglobal_destroy_global_channel(void);
struct vmbus_channel *dxgglobal_get_vmbus(void);
@@ -113,6 +151,47 @@ struct dxgprocess {
/* Placeholder */
};
+enum dxgadapter_state {
+ DXGADAPTER_STATE_ACTIVE = 0,
+ DXGADAPTER_STATE_STOPPED = 1,
+ DXGADAPTER_STATE_WAITING_VMBUS = 2,
+};
+
+/*
+ * This object represents the graphics adapter.
+ * Objects which take a reference on the adapter:
+ * - dxgglobal
+ * - adapter handle (struct d3dkmthandle)
+ */
+struct dxgadapter {
+ struct rw_semaphore core_lock;
+ struct kref adapter_kref;
+ /* Entry in the list of adapters in dxgglobal */
+ struct list_head adapter_list_entry;
+ struct pci_dev *pci_dev;
+ struct hv_device *hv_dev;
+ struct dxgvmbuschannel channel;
+ struct d3dkmthandle host_handle;
+ enum dxgadapter_state adapter_state;
+ struct winluid host_adapter_luid;
+ struct winluid host_vgpu_luid;
+ struct winluid luid; /* VM bus channel luid */
+ u16 device_description[80];
+ u16 device_instance_id[WIN_MAX_PATH];
+ bool stopping_adapter;
+};
+
+int dxgadapter_set_vmbus(struct dxgadapter *adapter, struct hv_device *hdev);
+bool dxgadapter_is_active(struct dxgadapter *adapter);
+void dxgadapter_start(struct dxgadapter *adapter);
+void dxgadapter_stop(struct dxgadapter *adapter);
+void dxgadapter_release(struct kref *refcount);
+int dxgadapter_acquire_lock_shared(struct dxgadapter *adapter);
+void dxgadapter_release_lock_shared(struct dxgadapter *adapter);
+int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter);
+void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter);
+void dxgadapter_release_lock_exclusive(struct dxgadapter *adapter);
+
/*
* The convention is that the VMBus instance id is a GUID, but the host sets
* the lower part of the value to the host adapter LUID. The function
@@ -141,6 +220,12 @@ static inline void guid_to_luid(guid_t *guid, struct winluid *luid)
void dxgvmb_initialize(void);
int dxgvmb_send_set_iospace_region(u64 start, u64 len);
+int dxgvmb_send_open_adapter(struct dxgadapter *adapter);
+int dxgvmb_send_close_adapter(struct dxgadapter *adapter);
+int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter);
+int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel,
+ void *command,
+ u32 cmd_size);
int ntstatus2int(struct ntstatus status);
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index e55639dc0adc..ef80b920f010 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -55,6 +55,156 @@ void dxgglobal_release_channel_lock(void)
up_read(&dxggbl()->channel_lock);
}
+void dxgglobal_acquire_adapter_list_lock(enum dxglockstate state)
+{
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ if (state == DXGLOCK_EXCL)
+ down_write(&dxgglobal->adapter_list_lock);
+ else
+ down_read(&dxgglobal->adapter_list_lock);
+}
+
+void dxgglobal_release_adapter_list_lock(enum dxglockstate state)
+{
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ if (state == DXGLOCK_EXCL)
+ up_write(&dxgglobal->adapter_list_lock);
+ else
+ up_read(&dxgglobal->adapter_list_lock);
+}
+
+/*
+ * Returns a pointer to dxgadapter object, which corresponds to the given PCI
+ * device, or NULL.
+ */
+static struct dxgadapter *find_pci_adapter(struct pci_dev *dev)
+{
+ struct dxgadapter *entry;
+ struct dxgadapter *adapter = NULL;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL);
+
+ list_for_each_entry(entry, &dxgglobal->adapter_list_head,
+ adapter_list_entry) {
+ if (dev == entry->pci_dev) {
+ adapter = entry;
+ break;
+ }
+ }
+
+ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL);
+ return adapter;
+}
+
+/*
+ * Returns a pointer to the dxgadapter object which has the given LUID,
+ * or NULL.
+ */
+static struct dxgadapter *find_adapter(struct winluid *luid)
+{
+ struct dxgadapter *entry;
+ struct dxgadapter *adapter = NULL;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL);
+
+ list_for_each_entry(entry, &dxgglobal->adapter_list_head,
+ adapter_list_entry) {
+ if (memcmp(luid, &entry->luid, sizeof(struct winluid)) == 0) {
+ adapter = entry;
+ break;
+ }
+ }
+
+ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL);
+ return adapter;
+}
+
+/*
+ * Creates a new dxgadapter object, which represents a virtual GPU, projected
+ * by the host.
+ * The adapter is in the waiting state. It will become active when the global
+ * VM bus channel and the adapter VM bus channel are created.
+ */
+int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid,
+ struct winluid host_vgpu_luid)
+{
+ struct dxgadapter *adapter;
+ int ret = 0;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ adapter = kzalloc(sizeof(struct dxgadapter), GFP_KERNEL);
+ if (adapter == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ adapter->adapter_state = DXGADAPTER_STATE_WAITING_VMBUS;
+ adapter->host_vgpu_luid = host_vgpu_luid;
+ kref_init(&adapter->adapter_kref);
+ init_rwsem(&adapter->core_lock);
+
+ adapter->pci_dev = dev;
+ guid_to_luid(guid, &adapter->luid);
+
+ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL);
+
+ list_add_tail(&adapter->adapter_list_entry,
+ &dxgglobal->adapter_list_head);
+ dxgglobal->num_adapters++;
+ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL);
+
+ DXG_TRACE("new adapter added %p %x-%x", adapter,
+ adapter->luid.a, adapter->luid.b);
+cleanup:
+ return ret;
+}
+
+/*
+ * Attempts to start dxgadapter objects, which are not active yet.
+ */
+static void dxgglobal_start_adapters(void)
+{
+ struct dxgadapter *adapter;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ if (dxgglobal->hdev == NULL) {
+ DXG_TRACE("Global channel is not ready");
+ return;
+ }
+ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL);
+ list_for_each_entry(adapter, &dxgglobal->adapter_list_head,
+ adapter_list_entry) {
+ if (adapter->adapter_state == DXGADAPTER_STATE_WAITING_VMBUS)
+ dxgadapter_start(adapter);
+ }
+ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL);
+}
+
+/*
+ * Stops the active dxgadapter objects.
+ */
+static void dxgglobal_stop_adapters(void)
+{
+ struct dxgadapter *adapter;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ if (dxgglobal->hdev == NULL) {
+ DXG_TRACE("Global channel is not ready");
+ return;
+ }
+ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL);
+ list_for_each_entry(adapter, &dxgglobal->adapter_list_head,
+ adapter_list_entry) {
+ if (adapter->adapter_state == DXGADAPTER_STATE_ACTIVE)
+ dxgadapter_stop(adapter);
+ }
+ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL);
+}
+
const struct file_operations dxgk_fops = {
.owner = THIS_MODULE,
};
@@ -182,6 +332,15 @@ static int dxg_pci_probe_device(struct pci_dev *dev,
DXG_TRACE("Vmbus interface version: %d", dxgglobal->vmbus_ver);
DXG_TRACE("Host luid: %x-%x", vgpu_luid.b, vgpu_luid.a);
+ /* Create new virtual GPU adapter */
+ ret = dxgglobal_create_adapter(dev, &guid, vgpu_luid);
+ if (ret)
+ goto cleanup;
+
+ /* Attempt to start the adapter in case VM bus channels are created */
+
+ dxgglobal_start_adapters();
+
cleanup:
mutex_unlock(&dxgglobal->device_mutex);
@@ -193,7 +352,25 @@ static int dxg_pci_probe_device(struct pci_dev *dev,
static void dxg_pci_remove_device(struct pci_dev *dev)
{
- /* Placeholder */
+ struct dxgadapter *adapter;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ mutex_lock(&dxgglobal->device_mutex);
+
+ adapter = find_pci_adapter(dev);
+ if (adapter) {
+ dxgglobal_acquire_adapter_list_lock(DXGLOCK_EXCL);
+ list_del(&adapter->adapter_list_entry);
+ dxgglobal->num_adapters--;
+ dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL);
+
+ dxgadapter_stop(adapter);
+ kref_put(&adapter->adapter_kref, dxgadapter_release);
+ } else {
+ DXG_ERR("Failed to find dxgadapter for pcidev");
+ }
+
+ mutex_unlock(&dxgglobal->device_mutex);
}
static struct pci_device_id dxg_pci_id_table[] = {
@@ -297,6 +474,25 @@ void dxgglobal_destroy_global_channel(void)
up_write(&dxgglobal->channel_lock);
}
+static void dxgglobal_stop_adapter_vmbus(struct hv_device *hdev)
+{
+ struct dxgadapter *adapter = NULL;
+ struct winluid luid;
+
+ guid_to_luid(&hdev->channel->offermsg.offer.if_instance, &luid);
+
+ DXG_TRACE("Stopping adapter %x:%x", luid.b, luid.a);
+
+ adapter = find_adapter(&luid);
+
+ if (adapter && adapter->adapter_state == DXGADAPTER_STATE_ACTIVE) {
+ down_write(&adapter->core_lock);
+ dxgvmbuschannel_destroy(&adapter->channel);
+ adapter->adapter_state = DXGADAPTER_STATE_STOPPED;
+ up_write(&adapter->core_lock);
+ }
+}
+
static const struct hv_vmbus_device_id dxg_vmbus_id_table[] = {
/* Per GPU Device GUID */
{ HV_GPUP_DXGK_VGPU_GUID },
@@ -329,6 +525,7 @@ static int dxg_probe_vmbus(struct hv_device *hdev,
vgpuch->hdev = hdev;
list_add_tail(&vgpuch->vgpu_ch_list_entry,
&dxgglobal->vgpu_ch_list_head);
+ dxgglobal_start_adapters();
} else if (uuid_le_cmp(hdev->dev_type,
dxg_vmbus_id_table[1].guid) == 0) {
/* This is the global Dxgkgnl channel */
@@ -341,6 +538,7 @@ static int dxg_probe_vmbus(struct hv_device *hdev,
goto error;
}
dxgglobal->hdev = hdev;
+ dxgglobal_start_adapters();
} else {
/* Unknown device type */
DXG_ERR("Unknown VM bus device type");
@@ -364,6 +562,7 @@ static int dxg_remove_vmbus(struct hv_device *hdev)
if (uuid_le_cmp(hdev->dev_type, dxg_vmbus_id_table[0].guid) == 0) {
DXG_TRACE("Remove virtual GPU channel");
+ dxgglobal_stop_adapter_vmbus(hdev);
list_for_each_entry(vgpu_channel,
&dxgglobal->vgpu_ch_list_head,
vgpu_ch_list_entry) {
@@ -420,6 +619,8 @@ static struct dxgglobal *dxgglobal_create(void)
mutex_init(&dxgglobal->device_mutex);
INIT_LIST_HEAD(&dxgglobal->vgpu_ch_list_head);
+ INIT_LIST_HEAD(&dxgglobal->adapter_list_head);
+ init_rwsem(&dxgglobal->adapter_list_lock);
init_rwsem(&dxgglobal->channel_lock);
@@ -430,6 +631,7 @@ static void dxgglobal_destroy(struct dxgglobal *dxgglobal)
{
if (dxgglobal) {
mutex_lock(&dxgglobal->device_mutex);
+ dxgglobal_stop_adapters();
dxgglobal_destroy_global_channel();
mutex_unlock(&dxgglobal->device_mutex);
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index a4365739826a..6d4b8d9d8d07 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -77,7 +77,7 @@ struct dxgvmbusmsgres {
void *res;
};
-static int init_message(struct dxgvmbusmsg *msg,
+static int init_message(struct dxgvmbusmsg *msg, struct dxgadapter *adapter,
struct dxgprocess *process, u32 size)
{
struct dxgglobal *dxgglobal = dxggbl();
@@ -99,10 +99,15 @@ static int init_message(struct dxgvmbusmsg *msg,
if (use_ext_header) {
msg->msg = (char *)&msg->hdr[1];
msg->hdr->command_offset = sizeof(msg->hdr[0]);
+ if (adapter)
+ msg->hdr->vgpu_luid = adapter->host_vgpu_luid;
} else {
msg->msg = (char *)msg->hdr;
}
- msg->channel = &dxgglobal->channel;
+ if (adapter && !dxgglobal->async_msg_enabled)
+ msg->channel = &adapter->channel;
+ else
+ msg->channel = &dxgglobal->channel;
return 0;
}
@@ -116,6 +121,37 @@ static void free_message(struct dxgvmbusmsg *msg, struct dxgprocess *process)
* Helper functions
*/
+static void command_vm_to_host_init2(struct dxgkvmb_command_vm_to_host *command,
+ enum dxgkvmb_commandtype_global t,
+ struct d3dkmthandle process)
+{
+ command->command_type = t;
+ command->process = process;
+ command->command_id = 0;
+ command->channel_type = DXGKVMB_VM_TO_HOST;
+}
+
+static void command_vgpu_to_host_init1(struct dxgkvmb_command_vgpu_to_host
+ *command,
+ enum dxgkvmb_commandtype type)
+{
+ command->command_type = type;
+ command->process.v = 0;
+ command->command_id = 0;
+ command->channel_type = DXGKVMB_VGPU_TO_HOST;
+}
+
+static void command_vgpu_to_host_init2(struct dxgkvmb_command_vgpu_to_host
+ *command,
+ enum dxgkvmb_commandtype type,
+ struct d3dkmthandle process)
+{
+ command->command_type = type;
+ command->process = process;
+ command->command_id = 0;
+ command->channel_type = DXGKVMB_VGPU_TO_HOST;
+}
+
int ntstatus2int(struct ntstatus status)
{
if (NT_SUCCESS(status))
@@ -216,22 +252,26 @@ static void process_inband_packet(struct dxgvmbuschannel *channel,
u32 packet_length = hv_pkt_datalen(desc);
struct dxgkvmb_command_host_to_vm *packet;
- if (packet_length < sizeof(struct dxgkvmb_command_host_to_vm)) {
- DXG_ERR("Invalid global packet");
- } else {
- packet = hv_pkt_data(desc);
- DXG_TRACE("global packet %d",
- packet->command_type);
- switch (packet->command_type) {
- case DXGK_VMBCOMMAND_SIGNALGUESTEVENT:
- case DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE:
- break;
- case DXGK_VMBCOMMAND_SENDWNFNOTIFICATION:
- break;
- default:
- DXG_ERR("unexpected host message %d",
+ if (channel->adapter == NULL) {
+ if (packet_length < sizeof(struct dxgkvmb_command_host_to_vm)) {
+ DXG_ERR("Invalid global packet");
+ } else {
+ packet = hv_pkt_data(desc);
+ DXG_TRACE("global packet %d",
packet->command_type);
+ switch (packet->command_type) {
+ case DXGK_VMBCOMMAND_SIGNALGUESTEVENT:
+ case DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE:
+ break;
+ case DXGK_VMBCOMMAND_SENDWNFNOTIFICATION:
+ break;
+ default:
+ DXG_ERR("unexpected host message %d",
+ packet->command_type);
+ }
}
+ } else {
+ DXG_ERR("Unexpected packet for adapter channel");
}
}
@@ -279,6 +319,7 @@ void dxgvmbuschannel_receive(void *ctx)
struct vmpacket_descriptor *desc;
u32 packet_length = 0;
+ DXG_TRACE("New adapter message: %p", channel->adapter);
foreach_vmbus_pkt(desc, channel->channel) {
packet_length = hv_pkt_datalen(desc);
DXG_TRACE("next packet (id, size, type): %llu %d %d",
@@ -302,6 +343,8 @@ int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel,
{
int ret;
struct dxgvmbuspacket *packet = NULL;
+ struct dxgkvmb_command_vm_to_host *cmd1;
+ struct dxgkvmb_command_vgpu_to_host *cmd2;
if (cmd_size > DXG_MAX_VM_BUS_PACKET_SIZE ||
result_size > DXG_MAX_VM_BUS_PACKET_SIZE) {
@@ -315,6 +358,16 @@ int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel,
return -ENOMEM;
}
+ if (channel->adapter == NULL) {
+ cmd1 = command;
+ DXG_TRACE("send_sync_msg global: %d %p %d %d",
+ cmd1->command_type, command, cmd_size, result_size);
+ } else {
+ cmd2 = command;
+ DXG_TRACE("send_sync_msg adapter: %d %p %d %d",
+ cmd2->command_type, command, cmd_size, result_size);
+ }
+
packet->request_id = atomic64_inc_return(&channel->packet_request_id);
init_completion(&packet->wait);
packet->buffer = result;
@@ -358,6 +411,41 @@ int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel,
return ret;
}
+int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel,
+ void *command,
+ u32 cmd_size)
+{
+ int ret;
+ int try_count = 0;
+
+ if (cmd_size > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ DXG_ERR("%s invalid data size", __func__);
+ return -EINVAL;
+ }
+
+ if (channel->adapter) {
+ DXG_ERR("Async message sent to the adapter channel");
+ return -EINVAL;
+ }
+
+ do {
+ ret = vmbus_sendpacket(channel->channel, command, cmd_size,
+ 0, VM_PKT_DATA_INBAND, 0);
+ /*
+ * -EAGAIN is returned when the VM bus ring buffer is full.
+ * Wait 2ms to allow the host to process messages and try again.
+ */
+ if (ret == -EAGAIN) {
+ usleep_range(1000, 2000);
+ try_count++;
+ }
+ } while (ret == -EAGAIN && try_count < 5000);
+ if (ret < 0)
+ DXG_ERR("vmbus_sendpacket failed: %d", ret);
+
+ return ret;
+}
+
static int
dxgvmb_send_sync_msg_ntstatus(struct dxgvmbuschannel *channel,
void *command, u32 cmd_size)
@@ -383,7 +471,7 @@ int dxgvmb_send_set_iospace_region(u64 start, u64 len)
struct dxgvmbusmsg msg;
struct dxgglobal *dxgglobal = dxggbl();
- ret = init_message(&msg, NULL, sizeof(*command));
+ ret = init_message(&msg, NULL, NULL, sizeof(*command));
if (ret)
return ret;
command = (void *)msg.msg;
@@ -408,3 +496,98 @@ int dxgvmb_send_set_iospace_region(u64 start, u64 len)
DXG_TRACE("Error: %d", ret);
return ret;
}
+
+/*
+ * Virtual GPU messages to the host
+ */
+
+int dxgvmb_send_open_adapter(struct dxgadapter *adapter)
+{
+ int ret;
+ struct dxgkvmb_command_openadapter *command;
+ struct dxgkvmb_command_openadapter_return result = { };
+ struct dxgvmbusmsg msg;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ ret = init_message(&msg, adapter, NULL, sizeof(*command));
+ if (ret)
+ return ret;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init1(&command->hdr, DXGK_VMBCOMMAND_OPENADAPTER);
+ command->vmbus_interface_version = dxgglobal->vmbus_ver;
+ command->vmbus_last_compatible_interface_version =
+ DXGK_VMBUS_LAST_COMPATIBLE_INTERFACE_VERSION;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ &result, sizeof(result));
+ if (ret < 0)
+ goto cleanup;
+
+ ret = ntstatus2int(result.status);
+ adapter->host_handle = result.host_adapter_handle;
+
+cleanup:
+ free_message(&msg, NULL);
+ if (ret)
+ DXG_ERR("Failed to open adapter: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_close_adapter(struct dxgadapter *adapter)
+{
+ int ret;
+ struct dxgkvmb_command_closeadapter *command;
+ struct dxgvmbusmsg msg;
+
+ ret = init_message(&msg, adapter, NULL, sizeof(*command));
+ if (ret)
+ return ret;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init1(&command->hdr, DXGK_VMBCOMMAND_CLOSEADAPTER);
+ command->host_handle = adapter->host_handle;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ NULL, 0);
+ free_message(&msg, NULL);
+ if (ret)
+ DXG_ERR("Failed to close adapter: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter)
+{
+ int ret;
+ struct dxgkvmb_command_getinternaladapterinfo *command;
+ struct dxgkvmb_command_getinternaladapterinfo_return result = { };
+ struct dxgvmbusmsg msg;
+ u32 result_size = sizeof(result);
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ ret = init_message(&msg, adapter, NULL, sizeof(*command));
+ if (ret)
+ return ret;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init1(&command->hdr,
+ DXGK_VMBCOMMAND_GETINTERNALADAPTERINFO);
+ if (dxgglobal->vmbus_ver < DXGK_VMBUS_INTERFACE_VERSION)
+ result_size -= sizeof(struct winluid);
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ &result, result_size);
+ if (ret >= 0) {
+ adapter->host_adapter_luid = result.host_adapter_luid;
+ adapter->host_vgpu_luid = result.host_vgpu_luid;
+ wcsncpy(adapter->device_description, result.device_description,
+ sizeof(adapter->device_description) / sizeof(u16));
+ wcsncpy(adapter->device_instance_id, result.device_instance_id,
+ sizeof(adapter->device_instance_id) / sizeof(u16));
+ dxgglobal->async_msg_enabled = result.async_msg_enabled != 0;
+ }
+ free_message(&msg, NULL);
+ if (ret)
+ DXG_ERR("Failed to get adapter info: %d", ret);
+ return ret;
+}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index b1bdd6039b73..584cdd3db6c0 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -47,6 +47,83 @@ enum dxgkvmb_commandtype_global {
DXGK_VMBCOMMAND_INVALID_VM_TO_HOST
};
+/*
+ *
+ * Commands, sent to the host via the per adapter VM bus channel
+ * DXG_GUEST_VGPU_VMBUS
+ *
+ */
+
+enum dxgkvmb_commandtype {
+ DXGK_VMBCOMMAND_CREATEDEVICE = 0,
+ DXGK_VMBCOMMAND_DESTROYDEVICE = 1,
+ DXGK_VMBCOMMAND_QUERYADAPTERINFO = 2,
+ DXGK_VMBCOMMAND_DDIQUERYADAPTERINFO = 3,
+ DXGK_VMBCOMMAND_CREATEALLOCATION = 4,
+ DXGK_VMBCOMMAND_DESTROYALLOCATION = 5,
+ DXGK_VMBCOMMAND_CREATECONTEXTVIRTUAL = 6,
+ DXGK_VMBCOMMAND_DESTROYCONTEXT = 7,
+ DXGK_VMBCOMMAND_CREATESYNCOBJECT = 8,
+ DXGK_VMBCOMMAND_CREATEPAGINGQUEUE = 9,
+ DXGK_VMBCOMMAND_DESTROYPAGINGQUEUE = 10,
+ DXGK_VMBCOMMAND_MAKERESIDENT = 11,
+ DXGK_VMBCOMMAND_EVICT = 12,
+ DXGK_VMBCOMMAND_ESCAPE = 13,
+ DXGK_VMBCOMMAND_OPENADAPTER = 14,
+ DXGK_VMBCOMMAND_CLOSEADAPTER = 15,
+ DXGK_VMBCOMMAND_FREEGPUVIRTUALADDRESS = 16,
+ DXGK_VMBCOMMAND_MAPGPUVIRTUALADDRESS = 17,
+ DXGK_VMBCOMMAND_RESERVEGPUVIRTUALADDRESS = 18,
+ DXGK_VMBCOMMAND_UPDATEGPUVIRTUALADDRESS = 19,
+ DXGK_VMBCOMMAND_SUBMITCOMMAND = 20,
+ dxgk_vmbcommand_queryvideomemoryinfo = 21,
+ DXGK_VMBCOMMAND_WAITFORSYNCOBJECTFROMCPU = 22,
+ DXGK_VMBCOMMAND_LOCK2 = 23,
+ DXGK_VMBCOMMAND_UNLOCK2 = 24,
+ DXGK_VMBCOMMAND_WAITFORSYNCOBJECTFROMGPU = 25,
+ DXGK_VMBCOMMAND_SIGNALSYNCOBJECT = 26,
+ DXGK_VMBCOMMAND_SIGNALFENCENTSHAREDBYREF = 27,
+ DXGK_VMBCOMMAND_GETDEVICESTATE = 28,
+ DXGK_VMBCOMMAND_MARKDEVICEASERROR = 29,
+ DXGK_VMBCOMMAND_ADAPTERSTOP = 30,
+ DXGK_VMBCOMMAND_SETQUEUEDLIMIT = 31,
+ DXGK_VMBCOMMAND_OPENRESOURCE = 32,
+ DXGK_VMBCOMMAND_SETCONTEXTSCHEDULINGPRIORITY = 33,
+ DXGK_VMBCOMMAND_PRESENTHISTORYTOKEN = 34,
+ DXGK_VMBCOMMAND_SETREDIRECTEDFLIPFENCEVALUE = 35,
+ DXGK_VMBCOMMAND_GETINTERNALADAPTERINFO = 36,
+ DXGK_VMBCOMMAND_FLUSHHEAPTRANSITIONS = 37,
+ DXGK_VMBCOMMAND_BLT = 38,
+ DXGK_VMBCOMMAND_DDIGETSTANDARDALLOCATIONDRIVERDATA = 39,
+ DXGK_VMBCOMMAND_CDDGDICOMMAND = 40,
+ DXGK_VMBCOMMAND_QUERYALLOCATIONRESIDENCY = 41,
+ DXGK_VMBCOMMAND_FLUSHDEVICE = 42,
+ DXGK_VMBCOMMAND_FLUSHADAPTER = 43,
+ DXGK_VMBCOMMAND_DDIGETNODEMETADATA = 44,
+ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE = 45,
+ DXGK_VMBCOMMAND_ISSYNCOBJECTSIGNALED = 46,
+ DXGK_VMBCOMMAND_CDDSYNCGPUACCESS = 47,
+ DXGK_VMBCOMMAND_QUERYSTATISTICS = 48,
+ DXGK_VMBCOMMAND_CHANGEVIDEOMEMORYRESERVATION = 49,
+ DXGK_VMBCOMMAND_CREATEHWQUEUE = 50,
+ DXGK_VMBCOMMAND_DESTROYHWQUEUE = 51,
+ DXGK_VMBCOMMAND_SUBMITCOMMANDTOHWQUEUE = 52,
+ DXGK_VMBCOMMAND_GETDRIVERSTOREFILE = 53,
+ DXGK_VMBCOMMAND_READDRIVERSTOREFILE = 54,
+ DXGK_VMBCOMMAND_GETNEXTHARDLINK = 55,
+ DXGK_VMBCOMMAND_UPDATEALLOCATIONPROPERTY = 56,
+ DXGK_VMBCOMMAND_OFFERALLOCATIONS = 57,
+ DXGK_VMBCOMMAND_RECLAIMALLOCATIONS = 58,
+ DXGK_VMBCOMMAND_SETALLOCATIONPRIORITY = 59,
+ DXGK_VMBCOMMAND_GETALLOCATIONPRIORITY = 60,
+ DXGK_VMBCOMMAND_GETCONTEXTSCHEDULINGPRIORITY = 61,
+ DXGK_VMBCOMMAND_QUERYCLOCKCALIBRATION = 62,
+ DXGK_VMBCOMMAND_QUERYRESOURCEINFO = 64,
+ DXGK_VMBCOMMAND_LOGEVENT = 65,
+ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMPAGES = 66,
+ DXGK_VMBCOMMAND_INVALID
+};
+
/*
* Commands, sent by the host to the VM
*/
@@ -66,6 +143,15 @@ struct dxgkvmb_command_vm_to_host {
enum dxgkvmb_commandtype_global command_type;
};
+struct dxgkvmb_command_vgpu_to_host {
+ u64 command_id;
+ struct d3dkmthandle process;
+ u32 channel_type : 8;
+ u32 async_msg : 1;
+ u32 reserved : 23;
+ enum dxgkvmb_commandtype command_type;
+};
+
struct dxgkvmb_command_host_to_vm {
u64 command_id;
struct d3dkmthandle process;
@@ -83,4 +169,46 @@ struct dxgkvmb_command_setiospaceregion {
u32 shared_page_gpadl;
};
+struct dxgkvmb_command_openadapter {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ u32 vmbus_interface_version;
+ u32 vmbus_last_compatible_interface_version;
+ struct winluid guest_adapter_luid;
+};
+
+struct dxgkvmb_command_openadapter_return {
+ struct d3dkmthandle host_adapter_handle;
+ struct ntstatus status;
+ u32 vmbus_interface_version;
+ u32 vmbus_last_compatible_interface_version;
+};
+
+struct dxgkvmb_command_closeadapter {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle host_handle;
+};
+
+struct dxgkvmb_command_getinternaladapterinfo {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+};
+
+struct dxgkvmb_command_getinternaladapterinfo_return {
+ struct dxgk_device_types device_types;
+ u32 driver_store_copy_mode;
+ u32 driver_ddi_version;
+ u32 secure_virtual_machine : 1;
+ u32 virtual_machine_reset : 1;
+ u32 is_vail_supported : 1;
+ u32 hw_sch_enabled : 1;
+ u32 hw_sch_capable : 1;
+ u32 va_backed_vm : 1;
+ u32 async_msg_enabled : 1;
+ u32 hw_support_state : 2;
+ u32 reserved : 23;
+ struct winluid host_adapter_luid;
+ u16 device_description[80];
+ u16 device_instance_id[WIN_MAX_PATH];
+ struct winluid host_vgpu_luid;
+};
+
#endif /* _DXGVMBUS_H */
diff --git a/drivers/hv/dxgkrnl/misc.c b/drivers/hv/dxgkrnl/misc.c
new file mode 100644
index 000000000000..cb1e0635bebc
--- /dev/null
+++ b/drivers/hv/dxgkrnl/misc.c
@@ -0,0 +1,37 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (c) 2019, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * Helper functions
+ *
+ */
+
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/uaccess.h>
+
+#include "dxgkrnl.h"
+#include "misc.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "dxgk: " fmt
+
+u16 *wcsncpy(u16 *dest, const u16 *src, size_t n)
+{
+size_t i;
+
+if (n == 0)
+return dest;
+
+for (i = 0; i < n; i++) {
+dest[i] = src[i];
+if (src[i] == 0) {
+i++;
+break;
+}
+}
+/* Truncate if needed and always NUL-terminate the destination */
+dest[i - 1] = 0;
+return dest;
+}
diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h
index 4c6047c32a20..d292e9a9bb7f 100644
--- a/drivers/hv/dxgkrnl/misc.h
+++ b/drivers/hv/dxgkrnl/misc.h
@@ -14,18 +14,34 @@
#ifndef _MISC_H_
#define _MISC_H_
+/* Max characters in Windows path */
+#define WIN_MAX_PATH 260
+
extern const struct d3dkmthandle zerohandle;
/*
* Synchronization lock hierarchy.
*
- * The higher enum value, the higher is the lock order.
- * When a lower lock ois held, the higher lock should not be acquired.
+ * The locks below are listed in order from lowest to highest.
+ * When a lower lock is held, the higher lock should not be acquired.
*
- * channel_lock
- * device_mutex
+ * channel_lock (VMBus channel lock)
+ * fd_mutex
+ * plistmutex (process list mutex)
+ * table_lock (handle table lock)
+ * core_lock (dxgadapter lock)
+ * device_lock (dxgdevice lock)
+ * adapter_list_lock
+ * device_mutex (dxgglobal mutex)
*/
+u16 *wcsncpy(u16 *dest, const u16 *src, size_t n);
+
+enum dxglockstate {
+ DXGLOCK_SHARED,
+ DXGLOCK_EXCL
+};
+
/*
* Some of the Windows return codes, which needs to be translated to Linux
* IOCTL return codes. Positive values are success codes and need to be
* [PATCH 04/55] drivers: hv: dxgkrnl: Opening of /dev/dxg device and dxgprocess creation
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (2 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 03/55] drivers: hv: dxgkrnl: Creation of dxgadapter object Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 05/55] drivers: hv: dxgkrnl: Enumerate and open dxgadapter objects Eric Curtin
` (50 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
- Implement opening of the device (/dev/dxg) file object and creation of
dxgprocess objects.
- Add VM bus messages to create and destroy the host side of a dxgprocess
object.
- Implement the handle manager, which manages d3dkmthandle handles
for the internal process objects. The handles are used by a user mode
client to reference dxgkrnl objects.
A dxgprocess object is created for each process that opens /dev/dxg.
dxgprocess is reference counted, so the existing dxgprocess object is
reused when a process opens the device multiple times. dxgprocess is
destroyed when the file object is released.
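The lookup-or-create lifecycle above can be sketched in plain C. This is a
simplified user-space toy with hypothetical names (proc_get, proc_put,
proc_list), not the driver's code: the driver keys on current->tgid, uses
kref for reference counting, and protects the list with
dxgglobal->plistmutex.

```c
#include <stdlib.h>

/* Toy stand-in for the per-process tracking object (cf. dxgprocess). */
struct proc_entry {
	int tgid;		/* thread-group id the entry is keyed on */
	int refcount;		/* cf. dxgprocess::process_kref */
	struct proc_entry *next;
};

static struct proc_entry *proc_list;	/* cf. dxgglobal->plisthead */

/*
 * Look up the entry for tgid. Take an extra reference if it exists,
 * otherwise create a new entry with refcount 1 and link it in.
 */
struct proc_entry *proc_get(int tgid)
{
	struct proc_entry *e;

	for (e = proc_list; e; e = e->next) {
		if (e->tgid == tgid) {
			e->refcount++;
			return e;
		}
	}
	e = calloc(1, sizeof(*e));
	if (!e)
		return NULL;
	e->tgid = tgid;
	e->refcount = 1;
	e->next = proc_list;
	proc_list = e;
	return e;
}

/* Drop a reference; unlink and free on the last put (cf. dxgprocess_release). */
void proc_put(struct proc_entry *e)
{
	struct proc_entry **pp;

	if (--e->refcount)
		return;
	for (pp = &proc_list; *pp; pp = &(*pp)->next) {
		if (*pp == e) {
			*pp = e->next;
			break;
		}
	}
	free(e);
}
```

A second open from the same thread group returns the existing entry with its
refcount bumped, which is the behavior the driver needs so that one
dxgprocess serves all file objects of a process.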
A corresponding dxgprocess object is created on the host for every
dxgprocess object in the guest.
When a dxgkrnl object is created, in most cases the corresponding
object is also created on the host. The VM references host objects by
handles (d3dkmthandle). The d3dkmthandle values for a host object and
the corresponding VM object are the same. The host handle is allocated
first and its value is assigned to the guest object.
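The shared guest/host handle scheme can be illustrated with a minimal
user-space sketch. The table_assign and table_lookup helpers below are
hypothetical, not the driver's hmgr implementation; the point is only that
the guest records its object under the handle value the host already
allocated, so both sides name the object identically.

```c
#include <stddef.h>

#define TABLE_SIZE 64

/*
 * Toy handle table: maps a d3dkmthandle-style integer handle, chosen
 * by the host, to a guest-side object pointer (cf. hmgrtable).
 */
struct handle_table {
	void *slots[TABLE_SIZE];
};

/*
 * Record a guest object under the host-allocated handle value.
 * Handle 0 is reserved to mean "no handle".
 */
int table_assign(struct handle_table *t, unsigned int host_handle, void *obj)
{
	if (host_handle == 0 || host_handle >= TABLE_SIZE)
		return -1;
	if (t->slots[host_handle])
		return -1;	/* handle value already in use */
	t->slots[host_handle] = obj;
	return 0;
}

/* Translate a handle back to the guest object, as an ioctl handler would. */
void *table_lookup(struct handle_table *t, unsigned int handle)
{
	if (handle == 0 || handle >= TABLE_SIZE)
		return NULL;
	return t->slots[handle];
}
```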
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/Makefile | 2 +-
drivers/hv/dxgkrnl/dxgadapter.c | 72 ++++
drivers/hv/dxgkrnl/dxgkrnl.h | 95 +++++-
drivers/hv/dxgkrnl/dxgmodule.c | 97 ++++++
drivers/hv/dxgkrnl/dxgprocess.c | 262 +++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.c | 164 ++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 36 ++
drivers/hv/dxgkrnl/hmgr.c | 563 ++++++++++++++++++++++++++++++++
drivers/hv/dxgkrnl/hmgr.h | 112 +++++++
drivers/hv/dxgkrnl/ioctl.c | 60 ++++
drivers/hv/dxgkrnl/misc.h | 9 +-
include/uapi/misc/d3dkmthk.h | 103 ++++++
12 files changed, 1569 insertions(+), 6 deletions(-)
create mode 100644 drivers/hv/dxgkrnl/dxgprocess.c
create mode 100644 drivers/hv/dxgkrnl/hmgr.c
create mode 100644 drivers/hv/dxgkrnl/hmgr.h
diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile
index 2ed07d877c91..9d821e83448a 100644
--- a/drivers/hv/dxgkrnl/Makefile
+++ b/drivers/hv/dxgkrnl/Makefile
@@ -2,4 +2,4 @@
# Makefile for the hyper-v compute device driver (dxgkrnl).
obj-$(CONFIG_DXGKRNL) += dxgkrnl.o
-dxgkrnl-y := dxgmodule.o misc.o dxgadapter.o ioctl.o dxgvmbus.o
+dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index 07d47699d255..fa0d6beca157 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -100,6 +100,7 @@ void dxgadapter_start(struct dxgadapter *adapter)
void dxgadapter_stop(struct dxgadapter *adapter)
{
+ struct dxgprocess_adapter *entry;
bool adapter_stopped = false;
down_write(&adapter->core_lock);
@@ -112,6 +113,15 @@ void dxgadapter_stop(struct dxgadapter *adapter)
if (adapter_stopped)
return;
+ dxgglobal_acquire_process_adapter_lock();
+
+ list_for_each_entry(entry, &adapter->adapter_process_list_head,
+ adapter_process_list_entry) {
+ dxgprocess_adapter_stop(entry);
+ }
+
+ dxgglobal_release_process_adapter_lock();
+
if (dxgadapter_acquire_lock_exclusive(adapter) == 0) {
dxgvmb_send_close_adapter(adapter);
dxgadapter_release_lock_exclusive(adapter);
@@ -135,6 +145,21 @@ bool dxgadapter_is_active(struct dxgadapter *adapter)
return adapter->adapter_state == DXGADAPTER_STATE_ACTIVE;
}
+/* Protected by dxgglobal_acquire_process_adapter_lock */
+void dxgadapter_add_process(struct dxgadapter *adapter,
+ struct dxgprocess_adapter *process_info)
+{
+ DXG_TRACE("%p %p", adapter, process_info);
+ list_add_tail(&process_info->adapter_process_list_entry,
+ &adapter->adapter_process_list_head);
+}
+
+void dxgadapter_remove_process(struct dxgprocess_adapter *process_info)
+{
+ DXG_TRACE("%p %p", process_info->adapter, process_info);
+ list_del(&process_info->adapter_process_list_entry);
+}
+
int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter)
{
down_write(&adapter->core_lock);
@@ -168,3 +193,50 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter)
{
up_read(&adapter->core_lock);
}
+
+struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process,
+ struct dxgadapter *adapter)
+{
+ struct dxgprocess_adapter *adapter_info;
+
+ adapter_info = kzalloc(sizeof(*adapter_info), GFP_KERNEL);
+ if (adapter_info) {
+ if (kref_get_unless_zero(&adapter->adapter_kref) == 0) {
+ DXG_ERR("failed to acquire adapter reference");
+ goto cleanup;
+ }
+ adapter_info->adapter = adapter;
+ adapter_info->process = process;
+ adapter_info->refcount = 1;
+ list_add_tail(&adapter_info->process_adapter_list_entry,
+ &process->process_adapter_list_head);
+ dxgadapter_add_process(adapter, adapter_info);
+ }
+ return adapter_info;
+cleanup:
+ kfree(adapter_info);
+ return NULL;
+}
+
+void dxgprocess_adapter_stop(struct dxgprocess_adapter *adapter_info)
+{
+}
+
+void dxgprocess_adapter_destroy(struct dxgprocess_adapter *adapter_info)
+{
+ dxgadapter_remove_process(adapter_info);
+ kref_put(&adapter_info->adapter->adapter_kref, dxgadapter_release);
+ list_del(&adapter_info->process_adapter_list_entry);
+ kfree(adapter_info);
+}
+
+/*
+ * Must be called when dxgglobal::process_adapter_mutex is held
+ */
+void dxgprocess_adapter_release(struct dxgprocess_adapter *adapter_info)
+{
+ adapter_info->refcount--;
+ if (adapter_info->refcount == 0)
+ dxgprocess_adapter_destroy(adapter_info);
+}
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index ba2a7c6001aa..b089d126f801 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -29,8 +29,10 @@
#include <uapi/misc/d3dkmthk.h>
#include <linux/version.h>
#include "misc.h"
+#include "hmgr.h"
#include <uapi/misc/d3dkmthk.h>
+struct dxgprocess;
struct dxgadapter;
/*
@@ -111,6 +113,10 @@ struct dxgglobal {
struct miscdevice dxgdevice;
struct mutex device_mutex;
+ /* list of created processes */
+ struct list_head plisthead;
+ struct mutex plistmutex;
+
/* list of created adapters */
struct list_head adapter_list_head;
struct rw_semaphore adapter_list_lock;
@@ -124,6 +130,9 @@ struct dxgglobal {
/* protects acces to the global VM bus channel */
struct rw_semaphore channel_lock;
+ /* protects the dxgprocess_adapter lists */
+ struct mutex process_adapter_mutex;
+
bool global_channel_initialized;
bool async_msg_enabled;
bool misc_registered;
@@ -144,13 +153,84 @@ int dxgglobal_init_global_channel(void);
void dxgglobal_destroy_global_channel(void);
struct vmbus_channel *dxgglobal_get_vmbus(void);
struct dxgvmbuschannel *dxgglobal_get_dxgvmbuschannel(void);
+void dxgglobal_acquire_process_adapter_lock(void);
+void dxgglobal_release_process_adapter_lock(void);
int dxgglobal_acquire_channel_lock(void);
void dxgglobal_release_channel_lock(void);
+/*
+ * Describes adapter information for each process
+ */
+struct dxgprocess_adapter {
+ /* Entry in dxgadapter::adapter_process_list_head */
+ struct list_head adapter_process_list_entry;
+ /* Entry in dxgprocess::process_adapter_list_head */
+ struct list_head process_adapter_list_entry;
+ struct dxgadapter *adapter;
+ struct dxgprocess *process;
+ int refcount;
+};
+
+struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process,
+ struct dxgadapter
+ *adapter);
+void dxgprocess_adapter_release(struct dxgprocess_adapter *adapter);
+void dxgprocess_adapter_stop(struct dxgprocess_adapter *adapter_info);
+void dxgprocess_adapter_destroy(struct dxgprocess_adapter *adapter_info);
+
+/*
+ * The structure represents a process, which opened the /dev/dxg device.
+ * A corresponding object is created on the host.
+ */
struct dxgprocess {
- /* Placeholder */
+ /*
+ * Process list entry in dxgglobal.
+ * Protected by the dxgglobal->plistmutex.
+ */
+ struct list_head plistentry;
+ pid_t pid;
+ pid_t tgid;
+ /* how many times the process was opened */
+ struct kref process_kref;
+ /*
+ * This handle table is used for all objects except dxgadapter.
+ * Its lock order is higher than the local_handle_table lock.
+ */
+ struct hmgrtable handle_table;
+ /*
+ * This handle table is used for dxgadapter objects.
+ * The handle table lock order is lowest.
+ */
+ struct hmgrtable local_handle_table;
+ /* Handle of the corresponding object on the host */
+ struct d3dkmthandle host_handle;
+
+ /* List of opened adapters (dxgprocess_adapter) */
+ struct list_head process_adapter_list_head;
};
+struct dxgprocess *dxgprocess_create(void);
+void dxgprocess_destroy(struct dxgprocess *process);
+void dxgprocess_release(struct kref *refcount);
+int dxgprocess_open_adapter(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle *handle);
+int dxgprocess_close_adapter(struct dxgprocess *process,
+ struct d3dkmthandle handle);
+struct dxgadapter *dxgprocess_get_adapter(struct dxgprocess *process,
+ struct d3dkmthandle handle);
+struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process,
+ struct d3dkmthandle handle);
+void dxgprocess_ht_lock_shared_down(struct dxgprocess *process);
+void dxgprocess_ht_lock_shared_up(struct dxgprocess *process);
+void dxgprocess_ht_lock_exclusive_down(struct dxgprocess *process);
+void dxgprocess_ht_lock_exclusive_up(struct dxgprocess *process);
+struct dxgprocess_adapter *dxgprocess_get_adapter_info(struct dxgprocess
+ *process,
+ struct dxgadapter
+ *adapter);
+
enum dxgadapter_state {
DXGADAPTER_STATE_ACTIVE = 0,
DXGADAPTER_STATE_STOPPED = 1,
@@ -168,6 +248,8 @@ struct dxgadapter {
struct kref adapter_kref;
/* Entry in the list of adapters in dxgglobal */
struct list_head adapter_list_entry;
+ /* The list of dxgprocess_adapter entries */
+ struct list_head adapter_process_list_head;
struct pci_dev *pci_dev;
struct hv_device *hv_dev;
struct dxgvmbuschannel channel;
@@ -191,6 +273,12 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter);
int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter);
void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter);
void dxgadapter_release_lock_exclusive(struct dxgadapter *adapter);
+void dxgadapter_add_process(struct dxgadapter *adapter,
+ struct dxgprocess_adapter *process_info);
+void dxgadapter_remove_process(struct dxgprocess_adapter *process_info);
+
+long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2);
+long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2);
/*
* The convention is that VNBus instance id is a GUID, but the host sets
@@ -220,9 +308,14 @@ static inline void guid_to_luid(guid_t *guid, struct winluid *luid)
void dxgvmb_initialize(void);
int dxgvmb_send_set_iospace_region(u64 start, u64 len);
+int dxgvmb_send_create_process(struct dxgprocess *process);
+int dxgvmb_send_destroy_process(struct d3dkmthandle process);
int dxgvmb_send_open_adapter(struct dxgadapter *adapter);
int dxgvmb_send_close_adapter(struct dxgadapter *adapter);
int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter);
+int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_queryadapterinfo *args);
int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel,
void *command,
u32 cmd_size);
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index ef80b920f010..17c22001ca6c 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -123,6 +123,20 @@ static struct dxgadapter *find_adapter(struct winluid *luid)
return adapter;
}
+void dxgglobal_acquire_process_adapter_lock(void)
+{
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ mutex_lock(&dxgglobal->process_adapter_mutex);
+}
+
+void dxgglobal_release_process_adapter_lock(void)
+{
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ mutex_unlock(&dxgglobal->process_adapter_mutex);
+}
+
/*
* Creates a new dxgadapter object, which represents a virtual GPU, projected
* by the host.
@@ -147,6 +161,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid,
kref_init(&adapter->adapter_kref);
init_rwsem(&adapter->core_lock);
+ INIT_LIST_HEAD(&adapter->adapter_process_list_head);
adapter->pci_dev = dev;
guid_to_luid(guid, &adapter->luid);
@@ -205,8 +220,87 @@ static void dxgglobal_stop_adapters(void)
dxgglobal_release_adapter_list_lock(DXGLOCK_EXCL);
}
+/*
+ * Returns the dxgprocess for the currently executing process.
+ * A new dxgprocess is created if one doesn't exist yet.
+ */
+static struct dxgprocess *dxgglobal_get_current_process(void)
+{
+ /*
+ * Find the DXG process for the current process.
+ * A new process is created if necessary.
+ */
+ struct dxgprocess *process = NULL;
+ struct dxgprocess *entry = NULL;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ mutex_lock(&dxgglobal->plistmutex);
+ list_for_each_entry(entry, &dxgglobal->plisthead, plistentry) {
+ /* All threads of a process have the same thread group ID */
+ if (entry->tgid == current->tgid) {
+ if (kref_get_unless_zero(&entry->process_kref)) {
+ process = entry;
+ DXG_TRACE("found dxgprocess");
+ } else {
+ DXG_TRACE("process is destroyed");
+ }
+ break;
+ }
+ }
+ mutex_unlock(&dxgglobal->plistmutex);
+
+ if (process == NULL)
+ process = dxgprocess_create();
+
+ return process;
+}
+
+/*
+ * File operations for the /dev/dxg device
+ */
+
+static int dxgk_open(struct inode *n, struct file *f)
+{
+ int ret = 0;
+ struct dxgprocess *process;
+
+ DXG_TRACE("%p %d %d", f, current->pid, current->tgid);
+
+ /* Find/create a dxgprocess structure for this process */
+ process = dxgglobal_get_current_process();
+
+ if (process) {
+ f->private_data = process;
+ } else {
+ DXG_TRACE("cannot create dxgprocess");
+ ret = -EBADF;
+ }
+
+ return ret;
+}
+
+static int dxgk_release(struct inode *n, struct file *f)
+{
+ struct dxgprocess *process;
+
+ process = (struct dxgprocess *)f->private_data;
+ DXG_TRACE("%p, %p", f, process);
+
+ if (process == NULL)
+ return -EINVAL;
+
+ kref_put(&process->process_kref, dxgprocess_release);
+
+ f->private_data = NULL;
+ return 0;
+}
+
const struct file_operations dxgk_fops = {
.owner = THIS_MODULE,
+ .open = dxgk_open,
+ .release = dxgk_release,
+ .compat_ioctl = dxgk_compat_ioctl,
+ .unlocked_ioctl = dxgk_unlocked_ioctl,
};
/*
@@ -616,7 +710,10 @@ static struct dxgglobal *dxgglobal_create(void)
if (!dxgglobal)
return NULL;
+ INIT_LIST_HEAD(&dxgglobal->plisthead);
+ mutex_init(&dxgglobal->plistmutex);
mutex_init(&dxgglobal->device_mutex);
+ mutex_init(&dxgglobal->process_adapter_mutex);
INIT_LIST_HEAD(&dxgglobal->vgpu_ch_list_head);
INIT_LIST_HEAD(&dxgglobal->adapter_list_head);
diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c
new file mode 100644
index 000000000000..ab9a01e3c8c8
--- /dev/null
+++ b/drivers/hv/dxgkrnl/dxgprocess.c
@@ -0,0 +1,262 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (c) 2022, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * DXGPROCESS implementation
+ *
+ */
+
+#include "dxgkrnl.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "dxgk: " fmt
+
+/*
+ * Creates a new dxgprocess object and the corresponding process
+ * object on the host. Acquires dxgglobal->plistmutex internally,
+ * so the caller must not hold it.
+ */
+struct dxgprocess *dxgprocess_create(void)
+{
+ struct dxgprocess *process;
+ int ret;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ process = kzalloc(sizeof(struct dxgprocess), GFP_KERNEL);
+ if (process != NULL) {
+ DXG_TRACE("new dxgprocess created");
+ process->pid = current->pid;
+ process->tgid = current->tgid;
+ ret = dxgvmb_send_create_process(process);
+ if (ret < 0) {
+ DXG_TRACE("send_create_process failed");
+ kfree(process);
+ process = NULL;
+ } else {
+ INIT_LIST_HEAD(&process->plistentry);
+ kref_init(&process->process_kref);
+
+ mutex_lock(&dxgglobal->plistmutex);
+ list_add_tail(&process->plistentry,
+ &dxgglobal->plisthead);
+ mutex_unlock(&dxgglobal->plistmutex);
+
+ hmgrtable_init(&process->handle_table, process);
+ hmgrtable_init(&process->local_handle_table, process);
+ INIT_LIST_HEAD(&process->process_adapter_list_head);
+ }
+ }
+ return process;
+}
+
+void dxgprocess_destroy(struct dxgprocess *process)
+{
+ int i;
+ enum hmgrentry_type t;
+ struct d3dkmthandle h;
+ void *o;
+ struct dxgprocess_adapter *entry;
+ struct dxgprocess_adapter *tmp;
+
+ /* Destroy all adapter state */
+ dxgglobal_acquire_process_adapter_lock();
+ list_for_each_entry_safe(entry, tmp,
+ &process->process_adapter_list_head,
+ process_adapter_list_entry) {
+ dxgprocess_adapter_destroy(entry);
+ }
+ dxgglobal_release_process_adapter_lock();
+
+ i = 0;
+ while (hmgrtable_next_entry(&process->local_handle_table,
+ &i, &t, &h, &o)) {
+ switch (t) {
+ case HMGRENTRY_TYPE_DXGADAPTER:
+ dxgprocess_close_adapter(process, h);
+ break;
+ default:
+ DXG_ERR("invalid entry in handle table %d", t);
+ break;
+ }
+ }
+
+ hmgrtable_destroy(&process->handle_table);
+ hmgrtable_destroy(&process->local_handle_table);
+}
+
+void dxgprocess_release(struct kref *refcount)
+{
+ struct dxgprocess *process;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ process = container_of(refcount, struct dxgprocess, process_kref);
+
+ mutex_lock(&dxgglobal->plistmutex);
+ list_del(&process->plistentry);
+ mutex_unlock(&dxgglobal->plistmutex);
+
+ dxgprocess_destroy(process);
+
+ if (process->host_handle.v)
+ dxgvmb_send_destroy_process(process->host_handle);
+ kfree(process);
+}
+
+struct dxgprocess_adapter *dxgprocess_get_adapter_info(struct dxgprocess
+ *process,
+ struct dxgadapter
+ *adapter)
+{
+ struct dxgprocess_adapter *entry;
+
+ list_for_each_entry(entry, &process->process_adapter_list_head,
+ process_adapter_list_entry) {
+ if (adapter == entry->adapter) {
+ DXG_TRACE("Found process info %p", entry);
+ return entry;
+ }
+ }
+ return NULL;
+}
+
+/*
+ * Dxgprocess takes references on dxgadapter and dxgprocess_adapter.
+ * The caller must hold the process_adapter lock.
+ */
+int dxgprocess_open_adapter(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle *h)
+{
+ int ret = 0;
+ struct dxgprocess_adapter *adapter_info;
+ struct d3dkmthandle handle;
+
+ h->v = 0;
+ adapter_info = dxgprocess_get_adapter_info(process, adapter);
+ if (adapter_info == NULL) {
+ DXG_TRACE("creating new process adapter info");
+ adapter_info = dxgprocess_adapter_create(process, adapter);
+ if (adapter_info == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ } else {
+ adapter_info->refcount++;
+ }
+
+ handle = hmgrtable_alloc_handle_safe(&process->local_handle_table,
+ adapter, HMGRENTRY_TYPE_DXGADAPTER,
+ true);
+ if (handle.v) {
+ *h = handle;
+ } else {
+ DXG_ERR("failed to create adapter handle");
+ ret = -ENOMEM;
+ }
+
+cleanup:
+
+ if (ret < 0) {
+ if (adapter_info)
+ dxgprocess_adapter_release(adapter_info);
+ }
+
+ return ret;
+}
+
+int dxgprocess_close_adapter(struct dxgprocess *process,
+ struct d3dkmthandle handle)
+{
+ struct dxgadapter *adapter;
+ struct dxgprocess_adapter *adapter_info;
+ int ret = 0;
+
+ if (handle.v == 0)
+ return 0;
+
+ hmgrtable_lock(&process->local_handle_table, DXGLOCK_EXCL);
+ adapter = dxgprocess_get_adapter(process, handle);
+ if (adapter)
+ hmgrtable_free_handle(&process->local_handle_table,
+ HMGRENTRY_TYPE_DXGADAPTER, handle);
+ hmgrtable_unlock(&process->local_handle_table, DXGLOCK_EXCL);
+
+ if (adapter) {
+ adapter_info = dxgprocess_get_adapter_info(process, adapter);
+ if (adapter_info) {
+ dxgglobal_acquire_process_adapter_lock();
+ dxgprocess_adapter_release(adapter_info);
+ dxgglobal_release_process_adapter_lock();
+ } else {
+ ret = -EINVAL;
+ }
+ } else {
+ DXG_ERR("Adapter not found %x", handle.v);
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+struct dxgadapter *dxgprocess_get_adapter(struct dxgprocess *process,
+ struct d3dkmthandle handle)
+{
+ struct dxgadapter *adapter;
+
+ adapter = hmgrtable_get_object_by_type(&process->local_handle_table,
+ HMGRENTRY_TYPE_DXGADAPTER,
+ handle);
+ if (adapter == NULL)
+ DXG_ERR("Adapter not found %x", handle.v);
+ return adapter;
+}
+
+/*
+ * Gets the adapter object from the process handle table.
+ * The adapter object is referenced.
+ * The function acquires the handle table lock in shared mode.
+ */
+struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process,
+ struct d3dkmthandle handle)
+{
+ struct dxgadapter *adapter;
+
+ hmgrtable_lock(&process->local_handle_table, DXGLOCK_SHARED);
+ adapter = hmgrtable_get_object_by_type(&process->local_handle_table,
+ HMGRENTRY_TYPE_DXGADAPTER,
+ handle);
+ if (adapter == NULL)
+ DXG_ERR("adapter_by_handle failed %x", handle.v);
+ else if (kref_get_unless_zero(&adapter->adapter_kref) == 0) {
+ DXG_ERR("failed to acquire adapter reference");
+ adapter = NULL;
+ }
+ hmgrtable_unlock(&process->local_handle_table, DXGLOCK_SHARED);
+ return adapter;
+}
+
+void dxgprocess_ht_lock_shared_down(struct dxgprocess *process)
+{
+ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED);
+}
+
+void dxgprocess_ht_lock_shared_up(struct dxgprocess *process)
+{
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED);
+}
+
+void dxgprocess_ht_lock_exclusive_down(struct dxgprocess *process)
+{
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+}
+
+void dxgprocess_ht_lock_exclusive_up(struct dxgprocess *process)
+{
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 6d4b8d9d8d07..0abf45d0d3f7 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -497,6 +497,87 @@ int dxgvmb_send_set_iospace_region(u64 start, u64 len)
return ret;
}
+int dxgvmb_send_create_process(struct dxgprocess *process)
+{
+ int ret;
+ struct dxgkvmb_command_createprocess *command;
+ struct dxgkvmb_command_createprocess_return result = { 0 };
+ struct dxgvmbusmsg msg;
+ char s[WIN_MAX_PATH];
+ int i;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ ret = init_message(&msg, NULL, process, sizeof(*command));
+ if (ret)
+ return ret;
+ command = (void *)msg.msg;
+
+ ret = dxgglobal_acquire_channel_lock();
+ if (ret < 0)
+ goto cleanup;
+
+ command_vm_to_host_init1(&command->hdr, DXGK_VMBCOMMAND_CREATEPROCESS);
+ command->process = process;
+ command->process_id = process->pid;
+ command->linux_process = 1;
+ s[0] = 0;
+ __get_task_comm(s, WIN_MAX_PATH, current);
+ for (i = 0; i < WIN_MAX_PATH; i++) {
+ command->process_name[i] = s[i];
+ if (s[i] == 0)
+ break;
+ }
+
+ ret = dxgvmb_send_sync_msg(&dxgglobal->channel, msg.hdr, msg.size,
+ &result, sizeof(result));
+ if (ret < 0) {
+ DXG_ERR("create_process failed %d", ret);
+ } else if (result.hprocess.v == 0) {
+ DXG_ERR("create_process returned 0 handle");
+ ret = -ENOTRECOVERABLE;
+ } else {
+ process->host_handle = result.hprocess;
+ DXG_TRACE("create_process returned %x",
+ process->host_handle.v);
+ }
+
+ dxgglobal_release_channel_lock();
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_destroy_process(struct d3dkmthandle process)
+{
+ int ret;
+ struct dxgkvmb_command_destroyprocess *command;
+ struct dxgvmbusmsg msg;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ ret = init_message(&msg, NULL, NULL, sizeof(*command));
+ if (ret)
+ return ret;
+ command = (void *)msg.msg;
+
+ ret = dxgglobal_acquire_channel_lock();
+ if (ret < 0)
+ goto cleanup;
+ command_vm_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_DESTROYPROCESS,
+ process);
+ ret = dxgvmb_send_sync_msg_ntstatus(&dxgglobal->channel,
+ msg.hdr, msg.size);
+ dxgglobal_release_channel_lock();
+
+cleanup:
+ free_message(&msg, NULL);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
/*
* Virtual GPU messages to the host
*/
@@ -591,3 +672,86 @@ int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter)
DXG_ERR("Failed to get adapter info: %d", ret);
return ret;
}
+
+int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_queryadapterinfo *args)
+{
+ struct dxgkvmb_command_queryadapterinfo *command;
+ u32 cmd_size = sizeof(*command) + args->private_data_size - 1;
+ int ret;
+ u32 private_data_size;
+ void *private_data;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ ret = copy_from_user(command->private_data,
+ args->private_data, args->private_data_size);
+ if (ret) {
+ DXG_ERR("Failed to copy private data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_QUERYADAPTERINFO,
+ process->host_handle);
+ command->private_data_size = args->private_data_size;
+ command->query_type = args->type;
+
+ if (dxgglobal->vmbus_ver >= DXGK_VMBUS_INTERFACE_VERSION) {
+ private_data = msg.msg;
+ private_data_size = command->private_data_size +
+ sizeof(struct ntstatus);
+ } else {
+ private_data = command->private_data;
+ private_data_size = command->private_data_size;
+ }
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ private_data, private_data_size);
+ if (ret < 0)
+ goto cleanup;
+
+ if (dxgglobal->vmbus_ver >= DXGK_VMBUS_INTERFACE_VERSION) {
+ ret = ntstatus2int(*(struct ntstatus *)private_data);
+ if (ret < 0)
+ goto cleanup;
+ private_data = (char *)private_data + sizeof(struct ntstatus);
+ }
+
+ switch (args->type) {
+ case _KMTQAITYPE_ADAPTERTYPE:
+ case _KMTQAITYPE_ADAPTERTYPE_RENDER:
+ {
+ struct d3dkmt_adaptertype *adapter_type =
+ (void *)private_data;
+ adapter_type->paravirtualized = 1;
+ adapter_type->display_supported = 0;
+ adapter_type->post_device = 0;
+ adapter_type->indirect_display_device = 0;
+ adapter_type->acg_supported = 0;
+ adapter_type->support_set_timings_from_vidpn = 0;
+ break;
+ }
+ default:
+ break;
+ }
+ ret = copy_to_user(args->private_data, private_data,
+ args->private_data_size);
+ if (ret) {
+ DXG_ERR("Failed to copy private data to user");
+ ret = -EINVAL;
+ }
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 584cdd3db6c0..a805a396e083 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -14,7 +14,11 @@
#ifndef _DXGVMBUS_H
#define _DXGVMBUS_H
+struct dxgprocess;
+struct dxgadapter;
+
#define DXG_MAX_VM_BUS_PACKET_SIZE (1024 * 128)
+#define DXG_VM_PROCESS_NAME_LENGTH 260
enum dxgkvmb_commandchanneltype {
DXGKVMB_VGPU_TO_HOST,
@@ -169,6 +173,26 @@ struct dxgkvmb_command_setiospaceregion {
u32 shared_page_gpadl;
};
+struct dxgkvmb_command_createprocess {
+ struct dxgkvmb_command_vm_to_host hdr;
+ void *process;
+ u64 process_id;
+ u16 process_name[DXG_VM_PROCESS_NAME_LENGTH + 1];
+ u8 csrss_process:1;
+ u8 dwm_process:1;
+ u8 wow64_process:1;
+ u8 linux_process:1;
+};
+
+struct dxgkvmb_command_createprocess_return {
+ struct d3dkmthandle hprocess;
+};
+
+/* The command returns ntstatus */
+struct dxgkvmb_command_destroyprocess {
+ struct dxgkvmb_command_vm_to_host hdr;
+};
+
struct dxgkvmb_command_openadapter {
struct dxgkvmb_command_vgpu_to_host hdr;
u32 vmbus_interface_version;
@@ -211,4 +235,16 @@ struct dxgkvmb_command_getinternaladapterinfo_return {
struct winluid host_vgpu_luid;
};
+struct dxgkvmb_command_queryadapterinfo {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ enum kmtqueryadapterinfotype query_type;
+ u32 private_data_size;
+ u8 private_data[1];
+};
+
+struct dxgkvmb_command_queryadapterinfo_return {
+ struct ntstatus status;
+ u8 private_data[1];
+};
+
#endif /* _DXGVMBUS_H */
diff --git a/drivers/hv/dxgkrnl/hmgr.c b/drivers/hv/dxgkrnl/hmgr.c
new file mode 100644
index 000000000000..526b50f46d96
--- /dev/null
+++ b/drivers/hv/dxgkrnl/hmgr.c
@@ -0,0 +1,563 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (c) 2022, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * Handle manager implementation
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/mutex.h>
+#include <linux/rwsem.h>
+
+#include "misc.h"
+#include "dxgkrnl.h"
+#include "hmgr.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "dxgk: " fmt
+
+const struct d3dkmthandle zerohandle;
+
+/*
+ * Handle parameters
+ */
+#define HMGRHANDLE_INSTANCE_BITS 6
+#define HMGRHANDLE_INDEX_BITS 24
+#define HMGRHANDLE_UNIQUE_BITS 2
+
+#define HMGRHANDLE_INSTANCE_SHIFT 0
+#define HMGRHANDLE_INDEX_SHIFT \
+ (HMGRHANDLE_INSTANCE_BITS + HMGRHANDLE_INSTANCE_SHIFT)
+#define HMGRHANDLE_UNIQUE_SHIFT \
+ (HMGRHANDLE_INDEX_BITS + HMGRHANDLE_INDEX_SHIFT)
+
+#define HMGRHANDLE_INSTANCE_MASK \
+ (((1 << HMGRHANDLE_INSTANCE_BITS) - 1) << HMGRHANDLE_INSTANCE_SHIFT)
+#define HMGRHANDLE_INDEX_MASK \
+ (((1 << HMGRHANDLE_INDEX_BITS) - 1) << HMGRHANDLE_INDEX_SHIFT)
+#define HMGRHANDLE_UNIQUE_MASK \
+ (((1 << HMGRHANDLE_UNIQUE_BITS) - 1) << HMGRHANDLE_UNIQUE_SHIFT)
+
+#define HMGRHANDLE_INSTANCE_MAX ((1 << HMGRHANDLE_INSTANCE_BITS) - 1)
+#define HMGRHANDLE_INDEX_MAX ((1 << HMGRHANDLE_INDEX_BITS) - 1)
+#define HMGRHANDLE_UNIQUE_MAX ((1 << HMGRHANDLE_UNIQUE_BITS) - 1)
+
+/*
+ * Handle entry
+ */
+struct hmgrentry {
+ union {
+ void *object;
+ struct {
+ u32 prev_free_index;
+ u32 next_free_index;
+ };
+ };
+ u32 type:HMGRENTRY_TYPE_BITS + 1;
+ u32 unique:HMGRHANDLE_UNIQUE_BITS;
+ u32 instance:HMGRHANDLE_INSTANCE_BITS;
+ u32 destroyed:1;
+};
+
+#define HMGRTABLE_SIZE_INCREMENT 1024
+#define HMGRTABLE_MIN_FREE_ENTRIES 128
+#define HMGRTABLE_INVALID_INDEX (~((1 << HMGRHANDLE_INDEX_BITS) - 1))
+#define HMGRTABLE_SIZE_MAX 0xFFFFFFF
+
+static u32 table_size_increment = HMGRTABLE_SIZE_INCREMENT;
+
+static u32 get_unique(struct d3dkmthandle h)
+{
+ return (h.v & HMGRHANDLE_UNIQUE_MASK) >> HMGRHANDLE_UNIQUE_SHIFT;
+}
+
+static u32 get_index(struct d3dkmthandle h)
+{
+ return (h.v & HMGRHANDLE_INDEX_MASK) >> HMGRHANDLE_INDEX_SHIFT;
+}
+
+static bool is_handle_valid(struct hmgrtable *table, struct d3dkmthandle h,
+ bool ignore_destroyed, enum hmgrentry_type t)
+{
+ u32 index = get_index(h);
+ u32 unique = get_unique(h);
+ struct hmgrentry *entry;
+
+ if (index >= table->table_size) {
+ DXG_ERR("Invalid index %x %d", h.v, index);
+ return false;
+ }
+
+ entry = &table->entry_table[index];
+ if (unique != entry->unique) {
+ DXG_ERR("Invalid unique %x %d %d %d %p",
+ h.v, unique, entry->unique, index, entry->object);
+ return false;
+ }
+
+ if (entry->destroyed && !ignore_destroyed) {
+ DXG_ERR("Invalid destroyed value");
+ return false;
+ }
+
+ if (entry->type == HMGRENTRY_TYPE_FREE) {
+ DXG_ERR("Entry is freed %x %d", h.v, index);
+ return false;
+ }
+
+ if (t != HMGRENTRY_TYPE_FREE && t != entry->type) {
+ DXG_ERR("type mismatch %x %d %d", h.v, t, entry->type);
+ return false;
+ }
+
+ return true;
+}
+
+static struct d3dkmthandle build_handle(u32 index, u32 unique, u32 instance)
+{
+ struct d3dkmthandle handle;
+
+ handle.v = (index << HMGRHANDLE_INDEX_SHIFT) & HMGRHANDLE_INDEX_MASK;
+ handle.v |= (unique << HMGRHANDLE_UNIQUE_SHIFT) &
+ HMGRHANDLE_UNIQUE_MASK;
+ handle.v |= (instance << HMGRHANDLE_INSTANCE_SHIFT) &
+ HMGRHANDLE_INSTANCE_MASK;
+
+ return handle;
+}
+
+inline u32 hmgrtable_get_used_entry_count(struct hmgrtable *table)
+{
+ DXGKRNL_ASSERT(table->table_size >= table->free_count);
+ return (table->table_size - table->free_count);
+}
+
+bool hmgrtable_mark_destroyed(struct hmgrtable *table, struct d3dkmthandle h)
+{
+ if (!is_handle_valid(table, h, false, HMGRENTRY_TYPE_FREE))
+ return false;
+
+ table->entry_table[get_index(h)].destroyed = true;
+ return true;
+}
+
+bool hmgrtable_unmark_destroyed(struct hmgrtable *table, struct d3dkmthandle h)
+{
+ if (!is_handle_valid(table, h, true, HMGRENTRY_TYPE_FREE))
+ return true;
+
+ DXGKRNL_ASSERT(table->entry_table[get_index(h)].destroyed);
+ table->entry_table[get_index(h)].destroyed = 0;
+ return true;
+}
+
+static bool expand_table(struct hmgrtable *table, u32 num_entries)
+{
+ u32 new_table_size;
+ struct hmgrentry *new_entry;
+ u32 table_index;
+ u32 new_free_count;
+ u32 prev_free_index;
+ u32 tail_index = table->free_handle_list_tail;
+
+ /* The tail should point to the last free element in the list */
+ if (table->free_count != 0) {
+ if (tail_index >= table->table_size ||
+ table->entry_table[tail_index].next_free_index !=
+ HMGRTABLE_INVALID_INDEX) {
+ DXG_ERR("corruption");
+ DXG_ERR("tail_index: %x", tail_index);
+ DXG_ERR("table size: %x", table->table_size);
+ DXG_ERR("free_count: %d", table->free_count);
+ DXG_ERR("num_entries: %x", num_entries);
+ return false;
+ }
+ }
+
+ new_free_count = table_size_increment + table->free_count;
+ new_table_size = table->table_size + table_size_increment;
+ if (new_table_size < num_entries) {
+ new_free_count += num_entries - new_table_size;
+ new_table_size = num_entries;
+ }
+
+ if (new_table_size > HMGRHANDLE_INDEX_MAX) {
+ DXG_ERR("Invalid new table size");
+ return false;
+ }
+
+ new_entry = vzalloc(new_table_size * sizeof(struct hmgrentry));
+ if (new_entry == NULL) {
+ DXG_ERR("allocation failed");
+ return false;
+ }
+
+ if (table->entry_table) {
+ memcpy(new_entry, table->entry_table,
+ table->table_size * sizeof(struct hmgrentry));
+ vfree(table->entry_table);
+ } else {
+ table->free_handle_list_head = 0;
+ }
+
+ table->entry_table = new_entry;
+
+ /* Initialize new table entries and add to the free list */
+ table_index = table->table_size;
+
+ prev_free_index = table->free_handle_list_tail;
+
+ while (table_index < new_table_size) {
+ struct hmgrentry *entry = &table->entry_table[table_index];
+
+ entry->prev_free_index = prev_free_index;
+ entry->next_free_index = table_index + 1;
+ entry->type = HMGRENTRY_TYPE_FREE;
+ entry->unique = 1;
+ entry->instance = 0;
+ prev_free_index = table_index;
+
+ table_index++;
+ }
+
+ table->entry_table[table_index - 1].next_free_index =
+ (u32) HMGRTABLE_INVALID_INDEX;
+
+ if (table->free_count != 0) {
+ /* Link the current free list with the new entries */
+ struct hmgrentry *entry;
+
+ entry = &table->entry_table[table->free_handle_list_tail];
+ entry->next_free_index = table->table_size;
+ }
+ table->free_handle_list_tail = new_table_size - 1;
+ if (table->free_handle_list_head == HMGRTABLE_INVALID_INDEX)
+ table->free_handle_list_head = table->table_size;
+
+ table->table_size = new_table_size;
+ table->free_count = new_free_count;
+
+ return true;
+}
+
+void hmgrtable_init(struct hmgrtable *table, struct dxgprocess *process)
+{
+ table->process = process;
+ table->entry_table = NULL;
+ table->table_size = 0;
+ table->free_handle_list_head = HMGRTABLE_INVALID_INDEX;
+ table->free_handle_list_tail = HMGRTABLE_INVALID_INDEX;
+ table->free_count = 0;
+ init_rwsem(&table->table_lock);
+}
+
+void hmgrtable_destroy(struct hmgrtable *table)
+{
+ if (table->entry_table) {
+ vfree(table->entry_table);
+ table->entry_table = NULL;
+ }
+}
+
+void hmgrtable_lock(struct hmgrtable *table, enum dxglockstate state)
+{
+ if (state == DXGLOCK_EXCL)
+ down_write(&table->table_lock);
+ else
+ down_read(&table->table_lock);
+}
+
+void hmgrtable_unlock(struct hmgrtable *table, enum dxglockstate state)
+{
+ if (state == DXGLOCK_EXCL)
+ up_write(&table->table_lock);
+ else
+ up_read(&table->table_lock);
+}
+
+struct d3dkmthandle hmgrtable_alloc_handle(struct hmgrtable *table,
+ void *object,
+ enum hmgrentry_type type,
+ bool make_valid)
+{
+ u32 index;
+ struct hmgrentry *entry;
+ u32 unique;
+
+ DXGKRNL_ASSERT(type <= HMGRENTRY_TYPE_LIMIT);
+ DXGKRNL_ASSERT(type > HMGRENTRY_TYPE_FREE);
+
+ if (table->free_count <= HMGRTABLE_MIN_FREE_ENTRIES) {
+ if (!expand_table(table, 0)) {
+ DXG_ERR("hmgrtable expand_table failed");
+ return zerohandle;
+ }
+ }
+
+ if (table->free_handle_list_head >= table->table_size) {
+ DXG_ERR("hmgrtable corrupted handle table head");
+ return zerohandle;
+ }
+
+ index = table->free_handle_list_head;
+ entry = &table->entry_table[index];
+
+ if (entry->type != HMGRENTRY_TYPE_FREE) {
+ DXG_ERR("hmgrtable expected free handle");
+ return zerohandle;
+ }
+
+ table->free_handle_list_head = entry->next_free_index;
+
+ if (entry->next_free_index != table->free_handle_list_tail) {
+ if (entry->next_free_index >= table->table_size) {
+ DXG_ERR("hmgrtable invalid next free index");
+ return zerohandle;
+ }
+ table->entry_table[entry->next_free_index].prev_free_index =
+ HMGRTABLE_INVALID_INDEX;
+ }
+
+ unique = table->entry_table[index].unique;
+
+ table->entry_table[index].object = object;
+ table->entry_table[index].type = type;
+ table->entry_table[index].instance = 0;
+ table->entry_table[index].destroyed = !make_valid;
+ table->free_count--;
+ DXGKRNL_ASSERT(table->free_count <= table->table_size);
+
+ return build_handle(index, unique, table->entry_table[index].instance);
+}
+
+int hmgrtable_assign_handle_safe(struct hmgrtable *table,
+ void *object,
+ enum hmgrentry_type type,
+ struct d3dkmthandle h)
+{
+ int ret;
+
+ hmgrtable_lock(table, DXGLOCK_EXCL);
+ ret = hmgrtable_assign_handle(table, object, type, h);
+ hmgrtable_unlock(table, DXGLOCK_EXCL);
+ return ret;
+}
+
+int hmgrtable_assign_handle(struct hmgrtable *table, void *object,
+ enum hmgrentry_type type, struct d3dkmthandle h)
+{
+ u32 index = get_index(h);
+ u32 unique = get_unique(h);
+ struct hmgrentry *entry = NULL;
+
+ DXG_TRACE("%x, %d %p, %p", h.v, index, object, table);
+
+ if (index >= HMGRHANDLE_INDEX_MAX) {
+ DXG_ERR("handle index is too big: %x %d", h.v, index);
+ return -EINVAL;
+ }
+
+ if (index >= table->table_size) {
+ u32 new_size = index + table_size_increment;
+
+ if (new_size > HMGRHANDLE_INDEX_MAX)
+ new_size = HMGRHANDLE_INDEX_MAX;
+ if (!expand_table(table, new_size)) {
+ DXG_ERR("failed to expand handle table %d",
+ new_size);
+ return -ENOMEM;
+ }
+ }
+
+ entry = &table->entry_table[index];
+
+ if (entry->type != HMGRENTRY_TYPE_FREE) {
+ DXG_ERR("the entry is not free: %d %x", entry->type,
+ hmgrtable_build_entry_handle(table, index).v);
+ return -EINVAL;
+ }
+
+ if (index != table->free_handle_list_tail) {
+ if (entry->next_free_index >= table->table_size) {
+ DXG_ERR("hmgr: invalid next free index %d",
+ entry->next_free_index);
+ return -EINVAL;
+ }
+ table->entry_table[entry->next_free_index].prev_free_index =
+ entry->prev_free_index;
+ } else {
+ table->free_handle_list_tail = entry->prev_free_index;
+ }
+
+ if (index != table->free_handle_list_head) {
+ if (entry->prev_free_index >= table->table_size) {
+ DXG_ERR("hmgr: invalid prev free index %d",
+ entry->prev_free_index);
+ return -EINVAL;
+ }
+ table->entry_table[entry->prev_free_index].next_free_index =
+ entry->next_free_index;
+ } else {
+ table->free_handle_list_head = entry->next_free_index;
+ }
+
+ entry->prev_free_index = HMGRTABLE_INVALID_INDEX;
+ entry->next_free_index = HMGRTABLE_INVALID_INDEX;
+ entry->object = object;
+ entry->type = type;
+ entry->instance = 0;
+ entry->unique = unique;
+ entry->destroyed = false;
+
+ table->free_count--;
+ DXGKRNL_ASSERT(table->free_count <= table->table_size);
+ return 0;
+}
+
+struct d3dkmthandle hmgrtable_alloc_handle_safe(struct hmgrtable *table,
+ void *obj,
+ enum hmgrentry_type type,
+ bool make_valid)
+{
+ struct d3dkmthandle h;
+
+ hmgrtable_lock(table, DXGLOCK_EXCL);
+ h = hmgrtable_alloc_handle(table, obj, type, make_valid);
+ hmgrtable_unlock(table, DXGLOCK_EXCL);
+ return h;
+}
+
+void hmgrtable_free_handle(struct hmgrtable *table, enum hmgrentry_type t,
+ struct d3dkmthandle h)
+{
+ struct hmgrentry *entry;
+ u32 i = get_index(h);
+
+ DXG_TRACE("%p %x", table, h.v);
+
+ /* Ignore the destroyed flag when checking the handle */
+ if (is_handle_valid(table, h, true, t)) {
+ DXGKRNL_ASSERT(table->free_count < table->table_size);
+ entry = &table->entry_table[i];
+ entry->type = HMGRENTRY_TYPE_FREE;
+ entry->destroyed = 0;
+ if (entry->unique != HMGRHANDLE_UNIQUE_MAX)
+ entry->unique += 1;
+ else
+ entry->unique = 1;
+
+ table->free_count++;
+ DXGKRNL_ASSERT(table->free_count <= table->table_size);
+
+ /*
+ * Insert the index to the free list at the tail.
+ */
+ entry->next_free_index = HMGRTABLE_INVALID_INDEX;
+ entry->prev_free_index = table->free_handle_list_tail;
+ entry = &table->entry_table[table->free_handle_list_tail];
+ entry->next_free_index = i;
+ table->free_handle_list_tail = i;
+ } else {
+ DXG_ERR("Invalid handle to free: %d %x", i, h.v);
+ }
+}
+
+void hmgrtable_free_handle_safe(struct hmgrtable *table, enum hmgrentry_type t,
+ struct d3dkmthandle h)
+{
+ hmgrtable_lock(table, DXGLOCK_EXCL);
+ hmgrtable_free_handle(table, t, h);
+ hmgrtable_unlock(table, DXGLOCK_EXCL);
+}
+
+struct d3dkmthandle hmgrtable_build_entry_handle(struct hmgrtable *table,
+ u32 index)
+{
+ DXGKRNL_ASSERT(index < table->table_size);
+
+ return build_handle(index, table->entry_table[index].unique,
+ table->entry_table[index].instance);
+}
+
+void *hmgrtable_get_object(struct hmgrtable *table, struct d3dkmthandle h)
+{
+ if (!is_handle_valid(table, h, false, HMGRENTRY_TYPE_FREE))
+ return NULL;
+
+ return table->entry_table[get_index(h)].object;
+}
+
+void *hmgrtable_get_object_by_type(struct hmgrtable *table,
+ enum hmgrentry_type type,
+ struct d3dkmthandle h)
+{
+ if (!is_handle_valid(table, h, false, type)) {
+ DXG_ERR("Invalid handle %x", h.v);
+ return NULL;
+ }
+ return table->entry_table[get_index(h)].object;
+}
+
+void *hmgrtable_get_entry_object(struct hmgrtable *table, u32 index)
+{
+ DXGKRNL_ASSERT(index < table->table_size);
+ DXGKRNL_ASSERT(table->entry_table[index].type != HMGRENTRY_TYPE_FREE);
+
+ return table->entry_table[index].object;
+}
+
+static enum hmgrentry_type hmgrtable_get_entry_type(struct hmgrtable *table,
+ u32 index)
+{
+ DXGKRNL_ASSERT(index < table->table_size);
+ return (enum hmgrentry_type)table->entry_table[index].type;
+}
+
+enum hmgrentry_type hmgrtable_get_object_type(struct hmgrtable *table,
+ struct d3dkmthandle h)
+{
+ if (!is_handle_valid(table, h, false, HMGRENTRY_TYPE_FREE))
+ return HMGRENTRY_TYPE_FREE;
+
+ return hmgrtable_get_entry_type(table, get_index(h));
+}
+
+void *hmgrtable_get_object_ignore_destroyed(struct hmgrtable *table,
+ struct d3dkmthandle h,
+ enum hmgrentry_type type)
+{
+ if (!is_handle_valid(table, h, true, type))
+ return NULL;
+ return table->entry_table[get_index(h)].object;
+}
+
+bool hmgrtable_next_entry(struct hmgrtable *tbl,
+ u32 *index,
+ enum hmgrentry_type *type,
+ struct d3dkmthandle *handle,
+ void **object)
+{
+ u32 i;
+ struct hmgrentry *entry;
+
+ for (i = *index; i < tbl->table_size; i++) {
+ entry = &tbl->entry_table[i];
+ if (entry->type != HMGRENTRY_TYPE_FREE) {
+ *index = i + 1;
+ *object = entry->object;
+ *handle = build_handle(i, entry->unique,
+ entry->instance);
+ *type = entry->type;
+ return true;
+ }
+ }
+ return false;
+}
diff --git a/drivers/hv/dxgkrnl/hmgr.h b/drivers/hv/dxgkrnl/hmgr.h
new file mode 100644
index 000000000000..23eec301137f
--- /dev/null
+++ b/drivers/hv/dxgkrnl/hmgr.h
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright (c) 2022, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * Handle manager definitions
+ *
+ */
+
+#ifndef _HMGR_H_
+#define _HMGR_H_
+
+#include "misc.h"
+
+struct hmgrentry;
+
+/*
+ * Handle manager table.
+ *
+ * Implementation notes:
+ * A list of free handles is built on top of the array of table entries.
+ * free_handle_list_head is the index of the first entry in the list.
+ * free_handle_list_tail is the index of an entry in the list, which is
+ * HMGRTABLE_MIN_FREE_ENTRIES from the head. It means that when a handle is
+ * freed, the next time the handle can be re-used is after allocating
+ * HMGRTABLE_MIN_FREE_ENTRIES number of handles.
+ * Handles are allocated from the start of the list and free handles are
+ * inserted after the tail of the list.
+ *
+ */
+struct hmgrtable {
+ struct dxgprocess *process;
+ struct hmgrentry *entry_table;
+ u32 free_handle_list_head;
+ u32 free_handle_list_tail;
+ u32 table_size;
+ u32 free_count;
+ struct rw_semaphore table_lock;
+};
+
+/*
+ * Handle entry data types.
+ */
+#define HMGRENTRY_TYPE_BITS 5
+
+enum hmgrentry_type {
+ HMGRENTRY_TYPE_FREE = 0,
+ HMGRENTRY_TYPE_DXGADAPTER = 1,
+ HMGRENTRY_TYPE_DXGSHAREDRESOURCE = 2,
+ HMGRENTRY_TYPE_DXGDEVICE = 3,
+ HMGRENTRY_TYPE_DXGRESOURCE = 4,
+ HMGRENTRY_TYPE_DXGALLOCATION = 5,
+ HMGRENTRY_TYPE_DXGOVERLAY = 6,
+ HMGRENTRY_TYPE_DXGCONTEXT = 7,
+ HMGRENTRY_TYPE_DXGSYNCOBJECT = 8,
+ HMGRENTRY_TYPE_DXGKEYEDMUTEX = 9,
+ HMGRENTRY_TYPE_DXGPAGINGQUEUE = 10,
+ HMGRENTRY_TYPE_DXGDEVICESYNCOBJECT = 11,
+ HMGRENTRY_TYPE_DXGPROCESS = 12,
+ HMGRENTRY_TYPE_DXGSHAREDVMOBJECT = 13,
+ HMGRENTRY_TYPE_DXGPROTECTEDSESSION = 14,
+ HMGRENTRY_TYPE_DXGHWQUEUE = 15,
+ HMGRENTRY_TYPE_DXGREMOTEBUNDLEOBJECT = 16,
+ HMGRENTRY_TYPE_DXGCOMPOSITIONSURFACEOBJECT = 17,
+ HMGRENTRY_TYPE_DXGCOMPOSITIONSURFACEPROXY = 18,
+ HMGRENTRY_TYPE_DXGTRACKEDWORKLOAD = 19,
+ HMGRENTRY_TYPE_LIMIT = ((1 << HMGRENTRY_TYPE_BITS) - 1),
+ HMGRENTRY_TYPE_MONITOREDFENCE = HMGRENTRY_TYPE_LIMIT + 1,
+};
+
+void hmgrtable_init(struct hmgrtable *tbl, struct dxgprocess *process);
+void hmgrtable_destroy(struct hmgrtable *tbl);
+void hmgrtable_lock(struct hmgrtable *tbl, enum dxglockstate state);
+void hmgrtable_unlock(struct hmgrtable *tbl, enum dxglockstate state);
+struct d3dkmthandle hmgrtable_alloc_handle(struct hmgrtable *tbl, void *object,
+ enum hmgrentry_type t, bool make_valid);
+struct d3dkmthandle hmgrtable_alloc_handle_safe(struct hmgrtable *tbl,
+ void *obj,
+ enum hmgrentry_type t,
+ bool reserve);
+int hmgrtable_assign_handle(struct hmgrtable *tbl, void *obj,
+ enum hmgrentry_type, struct d3dkmthandle h);
+int hmgrtable_assign_handle_safe(struct hmgrtable *tbl, void *obj,
+ enum hmgrentry_type t, struct d3dkmthandle h);
+void hmgrtable_free_handle(struct hmgrtable *tbl, enum hmgrentry_type t,
+ struct d3dkmthandle h);
+void hmgrtable_free_handle_safe(struct hmgrtable *tbl, enum hmgrentry_type t,
+ struct d3dkmthandle h);
+struct d3dkmthandle hmgrtable_build_entry_handle(struct hmgrtable *tbl,
+ u32 index);
+enum hmgrentry_type hmgrtable_get_object_type(struct hmgrtable *tbl,
+ struct d3dkmthandle h);
+void *hmgrtable_get_object(struct hmgrtable *tbl, struct d3dkmthandle h);
+void *hmgrtable_get_object_by_type(struct hmgrtable *tbl, enum hmgrentry_type t,
+ struct d3dkmthandle h);
+void *hmgrtable_get_object_ignore_destroyed(struct hmgrtable *tbl,
+ struct d3dkmthandle h,
+ enum hmgrentry_type t);
+bool hmgrtable_mark_destroyed(struct hmgrtable *tbl, struct d3dkmthandle h);
+bool hmgrtable_unmark_destroyed(struct hmgrtable *tbl, struct d3dkmthandle h);
+void *hmgrtable_get_entry_object(struct hmgrtable *tbl, u32 index);
+bool hmgrtable_next_entry(struct hmgrtable *tbl,
+ u32 *start_index,
+ enum hmgrentry_type *type,
+ struct d3dkmthandle *handle,
+ void **object);
+
+#endif
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 23ecd15b0cd7..60e38d104517 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -22,3 +22,63 @@
#undef pr_fmt
#define pr_fmt(fmt) "dxgk: " fmt
+
+struct ioctl_desc {
+ int (*ioctl_callback)(struct dxgprocess *p, void __user *arg);
+ u32 ioctl;
+ u32 arg_size;
+};
+
+static struct ioctl_desc ioctls[] = {
+
+};
+
+/*
+ * IOCTL processing
+ * The driver IOCTLs return
+ * - 0 in case of success
+ * - positive values, which are Windows NTSTATUS success codes
+ * (for example, STATUS_PENDING)
+ * - negative values, which are Linux error codes
+ */
+static int dxgk_ioctl(struct file *f, unsigned int p1, unsigned long p2)
+{
+ int code = _IOC_NR(p1);
+ int status;
+ struct dxgprocess *process;
+
+ if (code < 1 || code >= ARRAY_SIZE(ioctls)) {
+ DXG_ERR("bad ioctl %x %x %x %x",
+ code, _IOC_TYPE(p1), _IOC_SIZE(p1), _IOC_DIR(p1));
+ return -ENOTTY;
+ }
+ if (ioctls[code].ioctl_callback == NULL) {
+ DXG_ERR("ioctl callback is NULL %x", code);
+ return -ENOTTY;
+ }
+ if (ioctls[code].ioctl != p1) {
+ DXG_ERR("ioctl mismatch. Code: %x User: %x Kernel: %x",
+ code, p1, ioctls[code].ioctl);
+ return -ENOTTY;
+ }
+ process = (struct dxgprocess *)f->private_data;
+ if (process->tgid != current->tgid) {
+ DXG_ERR("Call from a wrong process: %d %d",
+ process->tgid, current->tgid);
+ return -ENOTTY;
+ }
+ status = ioctls[code].ioctl_callback(process, (void __user *)p2);
+ return status;
+}
+
+long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2)
+{
+ DXG_TRACE("compat ioctl %x", p1);
+ return dxgk_ioctl(f, p1, p2);
+}
+
+long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2)
+{
+ DXG_TRACE("unlocked ioctl %x Code:%d", p1, _IOC_NR(p1));
+ return dxgk_ioctl(f, p1, p2);
+}
diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h
index d292e9a9bb7f..dc849a8ed3f2 100644
--- a/drivers/hv/dxgkrnl/misc.h
+++ b/drivers/hv/dxgkrnl/misc.h
@@ -27,10 +27,11 @@ extern const struct d3dkmthandle zerohandle;
*
* channel_lock (VMBus channel lock)
* fd_mutex
- * plistmutex (process list mutex)
- * table_lock (handle table lock)
- * core_lock (dxgadapter lock)
- * device_lock (dxgdevice lock)
+ * plistmutex (process list mutex)
+ * table_lock (handle table lock)
+ * core_lock (dxgadapter lock)
+ * device_lock (dxgdevice lock)
+ * process_adapter_mutex
* adapter_list_lock
* device_mutex (dxgglobal mutex)
*/
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 2ea04cc02a1f..c675d5827ed5 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -58,4 +58,107 @@ struct winluid {
__u32 b;
};
+#define D3DKMT_ADAPTERS_MAX 64
+
+struct d3dkmt_adapterinfo {
+ struct d3dkmthandle adapter_handle;
+ struct winluid adapter_luid;
+ __u32 num_sources;
+ __u32 present_move_regions_preferred;
+};
+
+struct d3dkmt_enumadapters2 {
+ __u32 num_adapters;
+ __u32 reserved;
+#ifdef __KERNEL__
+ struct d3dkmt_adapterinfo *adapters;
+#else
+ __u64 *adapters;
+#endif
+};
+
+struct d3dkmt_closeadapter {
+ struct d3dkmthandle adapter_handle;
+};
+
+struct d3dkmt_openadapterfromluid {
+ struct winluid adapter_luid;
+ struct d3dkmthandle adapter_handle;
+};
+
+struct d3dkmt_adaptertype {
+ union {
+ struct {
+ __u32 render_supported:1;
+ __u32 display_supported:1;
+ __u32 software_device:1;
+ __u32 post_device:1;
+ __u32 hybrid_discrete:1;
+ __u32 hybrid_integrated:1;
+ __u32 indirect_display_device:1;
+ __u32 paravirtualized:1;
+ __u32 acg_supported:1;
+ __u32 support_set_timings_from_vidpn:1;
+ __u32 detachable:1;
+ __u32 compute_only:1;
+ __u32 prototype:1;
+ __u32 reserved:19;
+ };
+ __u32 value;
+ };
+};
+
+enum kmtqueryadapterinfotype {
+ _KMTQAITYPE_UMDRIVERPRIVATE = 0,
+ _KMTQAITYPE_ADAPTERTYPE = 15,
+ _KMTQAITYPE_ADAPTERTYPE_RENDER = 57
+};
+
+struct d3dkmt_queryadapterinfo {
+ struct d3dkmthandle adapter;
+ enum kmtqueryadapterinfotype type;
+#ifdef __KERNEL__
+ void *private_data;
+#else
+ __u64 private_data;
+#endif
+ __u32 private_data_size;
+};
+
+union d3dkmt_enumadapters_filter {
+ struct {
+ __u64 include_compute_only:1;
+ __u64 include_display_only:1;
+ __u64 reserved:62;
+ };
+ __u64 value;
+};
+
+struct d3dkmt_enumadapters3 {
+ union d3dkmt_enumadapters_filter filter;
+ __u32 adapter_count;
+ __u32 reserved;
+#ifdef __KERNEL__
+ struct d3dkmt_adapterinfo *adapters;
+#else
+ __u64 adapters;
+#endif
+};
+
+/*
+ * Dxgkrnl Graphics Port Driver ioctl definitions
+ */
+
+#define LX_DXOPENADAPTERFROMLUID \
+ _IOWR(0x47, 0x01, struct d3dkmt_openadapterfromluid)
+#define LX_DXQUERYADAPTERINFO \
+ _IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
+#define LX_DXENUMADAPTERS2 \
+ _IOWR(0x47, 0x14, struct d3dkmt_enumadapters2)
+#define LX_DXCLOSEADAPTER \
+ _IOWR(0x47, 0x15, struct d3dkmt_closeadapter)
+#define LX_DXENUMADAPTERS3 \
+ _IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3)
+
#endif /* _D3DKMTHK_H */
* [PATCH 05/55] drivers: hv: dxgkrnl: Enumerate and open dxgadapter objects
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (3 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 04/55] drivers: hv: dxgkrnl: Opening of /dev/dxg device and dxgprocess creation Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 06/55] drivers: hv: dxgkrnl: Creation of dxgdevice objects Eric Curtin
` (49 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls to enumerate dxgadapter objects:
- The LX_DXENUMADAPTERS2 ioctl
- The LX_DXENUMADAPTERS3 ioctl
Implement ioctls to open an adapter by LUID and to close an adapter
handle:
- The LX_DXOPENADAPTERFROMLUID ioctl
- The LX_DXCLOSEADAPTER ioctl
Implement the ioctl to query dxgadapter information:
- The LX_DXQUERYADAPTERINFO ioctl
When a dxgadapter is enumerated, it is implicitly opened and
a handle (d3dkmthandle) is created in the current process handle
table. The handle is returned to the caller and can be used
by user mode to reference the VGPU adapter in other ioctls.
The caller is responsible for closing the adapter when it is no
longer used by sending the LX_DXCLOSEADAPTER ioctl.
A dxgprocess has a list of opened dxgadapter objects
(dxgprocess_adapter is used to represent the entry in the list).
A dxgadapter also has a list of dxgprocess_adapter objects.
This is needed for cleanup because either a process or an adapter
could be destroyed first.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgmodule.c | 3 +
drivers/hv/dxgkrnl/ioctl.c | 482 ++++++++++++++++++++++++++++++++-
2 files changed, 484 insertions(+), 1 deletion(-)
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index 17c22001ca6c..fbe1c58ecb46 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -721,6 +721,9 @@ static struct dxgglobal *dxgglobal_create(void)
init_rwsem(&dxgglobal->channel_lock);
+#ifdef DEBUG
+ dxgk_validate_ioctls();
+#endif
return dxgglobal;
}
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 60e38d104517..b08ea9430093 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -29,8 +29,472 @@ struct ioctl_desc {
u32 arg_size;
};
-static struct ioctl_desc ioctls[] = {
+#ifdef DEBUG
+static char *errorstr(int ret)
+{
+ return ret < 0 ? "err" : "";
+}
+#endif
+
+static int dxgkio_open_adapter_from_luid(struct dxgprocess *process,
+ void *__user inargs)
+{
+ struct d3dkmt_openadapterfromluid args;
+ int ret;
+ struct dxgadapter *entry;
+ struct dxgadapter *adapter = NULL;
+ struct d3dkmt_openadapterfromluid *__user result = inargs;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("Failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED);
+ dxgglobal_acquire_process_adapter_lock();
+
+ list_for_each_entry(entry, &dxgglobal->adapter_list_head,
+ adapter_list_entry) {
+ if (dxgadapter_acquire_lock_shared(entry) == 0) {
+ if (*(u64 *) &entry->luid ==
+ *(u64 *) &args.adapter_luid) {
+ ret = dxgprocess_open_adapter(process, entry,
+ &args.adapter_handle);
+
+ if (ret >= 0) {
+ ret = copy_to_user(
+ &result->adapter_handle,
+ &args.adapter_handle,
+ sizeof(struct d3dkmthandle));
+ if (ret)
+ ret = -EINVAL;
+ }
+ adapter = entry;
+ }
+ dxgadapter_release_lock_shared(entry);
+ if (adapter)
+ break;
+ }
+ }
+
+ dxgglobal_release_process_adapter_lock();
+ dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED);
+
+ if (args.adapter_handle.v == 0)
+ ret = -EINVAL;
+
+cleanup:
+
+ if (ret < 0)
+ dxgprocess_close_adapter(process, args.adapter_handle);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkp_enum_adapters(struct dxgprocess *process,
+ union d3dkmt_enumadapters_filter filter,
+ u32 adapter_count_max,
+ struct d3dkmt_adapterinfo *__user info_out,
+ u32 * __user adapter_count_out)
+{
+ int ret = 0;
+ struct dxgadapter *entry;
+ struct d3dkmt_adapterinfo *info = NULL;
+ struct dxgadapter **adapters = NULL;
+ int adapter_count = 0;
+ int i;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ if (info_out == NULL || adapter_count_max == 0) {
+ ret = copy_to_user(adapter_count_out,
+ &dxgglobal->num_adapters, sizeof(u32));
+ if (ret) {
+ DXG_ERR("copy_to_user failed");
+ ret = -EINVAL;
+ }
+ goto cleanup;
+ }
+
+ if (adapter_count_max > 0xFFFF) {
+ DXG_ERR("too many adapters");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ info = vzalloc(sizeof(struct d3dkmt_adapterinfo) * adapter_count_max);
+ if (info == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ adapters = vzalloc(sizeof(struct dxgadapter *) * adapter_count_max);
+ if (adapters == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED);
+ dxgglobal_acquire_process_adapter_lock();
+ list_for_each_entry(entry, &dxgglobal->adapter_list_head,
+ adapter_list_entry) {
+ if (dxgadapter_acquire_lock_shared(entry) == 0) {
+ struct d3dkmt_adapterinfo *inf = &info[adapter_count];
+
+ ret = dxgprocess_open_adapter(process, entry,
+ &inf->adapter_handle);
+ if (ret >= 0) {
+ inf->adapter_luid = entry->luid;
+ adapters[adapter_count] = entry;
+ DXG_TRACE("adapter: %x %x:%x",
+ inf->adapter_handle.v,
+ inf->adapter_luid.b,
+ inf->adapter_luid.a);
+ adapter_count++;
+ }
+ dxgadapter_release_lock_shared(entry);
+ }
+ if (ret < 0)
+ break;
+ }
+
+ dxgglobal_release_process_adapter_lock();
+ dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED);
+
+ if (adapter_count > adapter_count_max) {
+ DXG_TRACE("Too many adapters");
+ /* Report the required count without clobbering the status */
+ if (copy_to_user(adapter_count_out,
+ &dxgglobal->num_adapters, sizeof(u32))) {
+ DXG_ERR("copy_to_user failed");
+ ret = -EINVAL;
+ } else {
+ ret = STATUS_BUFFER_TOO_SMALL;
+ }
+ goto cleanup;
+ }
+
+ ret = copy_to_user(adapter_count_out, &adapter_count,
+ sizeof(adapter_count));
+ if (ret) {
+ DXG_ERR("failed to copy adapter_count");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ ret = copy_to_user(info_out, info, sizeof(info[0]) * adapter_count);
+ if (ret) {
+ DXG_ERR("failed to copy adapter info");
+ ret = -EINVAL;
+ }
+
+cleanup:
+
+ if (ret >= 0) {
+ DXG_TRACE("found %d adapters", adapter_count);
+ goto success;
+ }
+ if (info) {
+ for (i = 0; i < adapter_count; i++)
+ dxgprocess_close_adapter(process,
+ info[i].adapter_handle);
+ }
+success:
+ if (info)
+ vfree(info);
+ if (adapters)
+ vfree(adapters);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_enumadapters2 args;
+ int ret;
+ struct dxgadapter *entry;
+ struct d3dkmt_adapterinfo *info = NULL;
+ struct dxgadapter **adapters = NULL;
+ int adapter_count = 0;
+ int i;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.adapters == NULL) {
+ DXG_TRACE("buffer is NULL");
+ args.num_adapters = dxgglobal->num_adapters;
+ ret = copy_to_user(inargs, &args, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy args to user");
+ ret = -EINVAL;
+ }
+ goto cleanup;
+ }
+ if (args.num_adapters < dxgglobal->num_adapters) {
+ args.num_adapters = dxgglobal->num_adapters;
+ DXG_TRACE("buffer is too small");
+ ret = -EOVERFLOW;
+ goto cleanup;
+ }
+
+ if (args.num_adapters > D3DKMT_ADAPTERS_MAX) {
+ DXG_TRACE("too many adapters");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ info = vzalloc(sizeof(struct d3dkmt_adapterinfo) * args.num_adapters);
+ if (info == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ adapters = vzalloc(sizeof(struct dxgadapter *) * args.num_adapters);
+ if (adapters == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED);
+ dxgglobal_acquire_process_adapter_lock();
+
+ list_for_each_entry(entry, &dxgglobal->adapter_list_head,
+ adapter_list_entry) {
+ if (dxgadapter_acquire_lock_shared(entry) == 0) {
+ struct d3dkmt_adapterinfo *inf = &info[adapter_count];
+
+ ret = dxgprocess_open_adapter(process, entry,
+ &inf->adapter_handle);
+ if (ret >= 0) {
+ inf->adapter_luid = entry->luid;
+ adapters[adapter_count] = entry;
+ DXG_TRACE("adapter: %x %llx",
+ inf->adapter_handle.v,
+ *(u64 *) &inf->adapter_luid);
+ adapter_count++;
+ }
+ dxgadapter_release_lock_shared(entry);
+ }
+ if (ret < 0)
+ break;
+ }
+
+ dxgglobal_release_process_adapter_lock();
+ dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED);
+
+ args.num_adapters = adapter_count;
+
+ ret = copy_to_user(inargs, &args, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy args to user");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ ret = copy_to_user(args.adapters, info,
+ sizeof(info[0]) * args.num_adapters);
+ if (ret) {
+ DXG_ERR("failed to copy adapter info to user");
+ ret = -EINVAL;
+ }
+
+cleanup:
+
+ if (ret < 0) {
+ if (info) {
+ for (i = 0; i < args.num_adapters; i++) {
+ dxgprocess_close_adapter(process,
+ info[i].adapter_handle);
+ }
+ }
+ } else {
+ DXG_TRACE("found %d adapters", args.num_adapters);
+ }
+
+ if (info)
+ vfree(info);
+ if (adapters)
+ vfree(adapters);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_enum_adapters3(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_enumadapters3 args;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgkp_enum_adapters(process, args.filter,
+ args.adapter_count,
+ args.adapters,
+ &((struct d3dkmt_enumadapters3 *)inargs)->
+ adapter_count);
+
+cleanup:
+
+ DXG_TRACE("ioctl: %s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_close_adapter(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmthandle args;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgprocess_close_adapter(process, args);
+ if (ret < 0)
+ DXG_ERR("failed to close adapter: %d", ret);
+
+cleanup:
+
+ DXG_TRACE("ioctl: %s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_query_adapter_info(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_queryadapterinfo args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.private_data_size > DXG_MAX_VM_BUS_PACKET_SIZE ||
+ args.private_data_size == 0) {
+ DXG_ERR("invalid private data size");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ DXG_TRACE("Type: %d Size: %x", args.type, args.private_data_size);
+
+ adapter = dxgprocess_adapter_by_handle(process, args.adapter);
+ if (adapter == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = dxgvmb_send_query_adapter_info(process, adapter, &args);
+
+ dxgadapter_release_lock_shared(adapter);
+
+cleanup:
+
+ if (adapter)
+ kref_put(&adapter->adapter_kref, dxgadapter_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static struct ioctl_desc ioctls[] = {
+/* 0x00 */ {},
+/* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID},
+/* 0x02 */ {},
+/* 0x03 */ {},
+/* 0x04 */ {},
+/* 0x05 */ {},
+/* 0x06 */ {},
+/* 0x07 */ {},
+/* 0x08 */ {},
+/* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO},
+/* 0x0a */ {},
+/* 0x0b */ {},
+/* 0x0c */ {},
+/* 0x0d */ {},
+/* 0x0e */ {},
+/* 0x0f */ {},
+/* 0x10 */ {},
+/* 0x11 */ {},
+/* 0x12 */ {},
+/* 0x13 */ {},
+/* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2},
+/* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER},
+/* 0x16 */ {},
+/* 0x17 */ {},
+/* 0x18 */ {},
+/* 0x19 */ {},
+/* 0x1a */ {},
+/* 0x1b */ {},
+/* 0x1c */ {},
+/* 0x1d */ {},
+/* 0x1e */ {},
+/* 0x1f */ {},
+/* 0x20 */ {},
+/* 0x21 */ {},
+/* 0x22 */ {},
+/* 0x23 */ {},
+/* 0x24 */ {},
+/* 0x25 */ {},
+/* 0x26 */ {},
+/* 0x27 */ {},
+/* 0x28 */ {},
+/* 0x29 */ {},
+/* 0x2a */ {},
+/* 0x2b */ {},
+/* 0x2c */ {},
+/* 0x2d */ {},
+/* 0x2e */ {},
+/* 0x2f */ {},
+/* 0x30 */ {},
+/* 0x31 */ {},
+/* 0x32 */ {},
+/* 0x33 */ {},
+/* 0x34 */ {},
+/* 0x35 */ {},
+/* 0x36 */ {},
+/* 0x37 */ {},
+/* 0x38 */ {},
+/* 0x39 */ {},
+/* 0x3a */ {},
+/* 0x3b */ {},
+/* 0x3c */ {},
+/* 0x3d */ {},
+/* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3},
+/* 0x3f */ {},
+/* 0x40 */ {},
+/* 0x41 */ {},
+/* 0x42 */ {},
+/* 0x43 */ {},
+/* 0x44 */ {},
+/* 0x45 */ {},
};
/*
@@ -82,3 +546,19 @@ long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2)
DXG_TRACE("unlocked ioctl %x Code:%d", p1, _IOC_NR(p1));
return dxgk_ioctl(f, p1, p2);
}
+
+#ifdef DEBUG
+void dxgk_validate_ioctls(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(ioctls); i++) {
+ if (ioctls[i].ioctl && _IOC_NR(ioctls[i].ioctl) != i) {
+ DXG_ERR("Invalid ioctl");
+ DXGKRNL_ASSERT(0);
+ }
+ }
+}
+#endif
* [PATCH 06/55] drivers: hv: dxgkrnl: Creation of dxgdevice objects
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (4 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 05/55] drivers: hv: dxgkrnl: Enumerate and open dxgadapter objects Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 07/55] drivers: hv: dxgkrnl: Creation of dxgcontext objects Eric Curtin
` (48 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls for creation and destruction of dxgdevice
objects:
- the LX_DXCREATEDEVICE ioctl
- the LX_DXDESTROYDEVICE ioctl
A dxgdevice object represents a container of other virtual
compute device objects (allocations, sync objects, contexts,
etc.). It belongs to a dxgadapter object.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 187 ++++++++++++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgkrnl.h | 58 ++++++++++
drivers/hv/dxgkrnl/dxgprocess.c | 43 ++++++++
drivers/hv/dxgkrnl/dxgvmbus.c | 80 ++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 22 ++++
drivers/hv/dxgkrnl/ioctl.c | 130 +++++++++++++++++++++-
drivers/hv/dxgkrnl/misc.h | 8 +-
include/uapi/misc/d3dkmthk.h | 82 ++++++++++++++
8 files changed, 604 insertions(+), 6 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index fa0d6beca157..a9a341716eba 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -194,6 +194,122 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter)
up_read(&adapter->core_lock);
}
+struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter,
+ struct dxgprocess *process)
+{
+ struct dxgdevice *device;
+ int ret;
+
+ device = kzalloc(sizeof(struct dxgdevice), GFP_KERNEL);
+ if (device) {
+ kref_init(&device->device_kref);
+ device->adapter = adapter;
+ device->process = process;
+ kref_get(&adapter->adapter_kref);
+ init_rwsem(&device->device_lock);
+ INIT_LIST_HEAD(&device->pqueue_list_head);
+ device->object_state = DXGOBJECTSTATE_CREATED;
+ device->execution_state = _D3DKMT_DEVICEEXECUTION_ACTIVE;
+
+ ret = dxgprocess_adapter_add_device(process, adapter, device);
+ if (ret < 0) {
+ kref_put(&device->device_kref, dxgdevice_release);
+ device = NULL;
+ }
+ }
+ return device;
+}
+
+void dxgdevice_stop(struct dxgdevice *device)
+{
+}
+
+void dxgdevice_mark_destroyed(struct dxgdevice *device)
+{
+ down_write(&device->device_lock);
+ device->object_state = DXGOBJECTSTATE_DESTROYED;
+ up_write(&device->device_lock);
+}
+
+void dxgdevice_destroy(struct dxgdevice *device)
+{
+ struct dxgprocess *process = device->process;
+ struct dxgadapter *adapter = device->adapter;
+ struct d3dkmthandle device_handle = {};
+
+ DXG_TRACE("Destroying device: %p", device);
+
+ down_write(&device->device_lock);
+
+ if (device->object_state != DXGOBJECTSTATE_ACTIVE)
+ goto cleanup;
+
+ device->object_state = DXGOBJECTSTATE_DESTROYED;
+
+ dxgdevice_stop(device);
+
+ /* Guest handles need to be released before the host handles */
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ if (device->handle_valid) {
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_DXGDEVICE, device->handle);
+ device_handle = device->handle;
+ device->handle_valid = 0;
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+
+ if (device_handle.v) {
+ up_write(&device->device_lock);
+ if (dxgadapter_acquire_lock_shared(adapter) == 0) {
+ dxgvmb_send_destroy_device(adapter, process,
+ device_handle);
+ dxgadapter_release_lock_shared(adapter);
+ }
+ down_write(&device->device_lock);
+ }
+
+cleanup:
+
+ if (device->adapter) {
+ dxgprocess_adapter_remove_device(device);
+ kref_put(&device->adapter->adapter_kref, dxgadapter_release);
+ device->adapter = NULL;
+ }
+
+ up_write(&device->device_lock);
+
+ kref_put(&device->device_kref, dxgdevice_release);
+ DXG_TRACE("Device destroyed");
+}
+
+int dxgdevice_acquire_lock_shared(struct dxgdevice *device)
+{
+ down_read(&device->device_lock);
+ if (!dxgdevice_is_active(device)) {
+ up_read(&device->device_lock);
+ return -ENODEV;
+ }
+ return 0;
+}
+
+void dxgdevice_release_lock_shared(struct dxgdevice *device)
+{
+ up_read(&device->device_lock);
+}
+
+bool dxgdevice_is_active(struct dxgdevice *device)
+{
+ return device->object_state == DXGOBJECTSTATE_ACTIVE;
+}
+
+void dxgdevice_release(struct kref *refcount)
+{
+ struct dxgdevice *device;
+
+ device = container_of(refcount, struct dxgdevice, device_kref);
+ kfree(device);
+}
+
struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process,
struct dxgadapter *adapter)
{
@@ -208,6 +324,8 @@ struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process,
adapter_info->adapter = adapter;
adapter_info->process = process;
adapter_info->refcount = 1;
+ mutex_init(&adapter_info->device_list_mutex);
+ INIT_LIST_HEAD(&adapter_info->device_list_head);
list_add_tail(&adapter_info->process_adapter_list_entry,
&process->process_adapter_list_head);
dxgadapter_add_process(adapter, adapter_info);
@@ -221,10 +339,34 @@ struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process,
void dxgprocess_adapter_stop(struct dxgprocess_adapter *adapter_info)
{
+ struct dxgdevice *device;
+
+ mutex_lock(&adapter_info->device_list_mutex);
+ list_for_each_entry(device, &adapter_info->device_list_head,
+ device_list_entry) {
+ dxgdevice_stop(device);
+ }
+ mutex_unlock(&adapter_info->device_list_mutex);
}
void dxgprocess_adapter_destroy(struct dxgprocess_adapter *adapter_info)
{
+ struct dxgdevice *device;
+
+ mutex_lock(&adapter_info->device_list_mutex);
+ while (!list_empty(&adapter_info->device_list_head)) {
+ device = list_first_entry(&adapter_info->device_list_head,
+ struct dxgdevice, device_list_entry);
+ list_del(&device->device_list_entry);
+ device->device_list_entry.next = NULL;
+ mutex_unlock(&adapter_info->device_list_mutex);
+ dxgvmb_send_flush_device(device,
+ DXGDEVICE_FLUSHSCHEDULER_DEVICE_TERMINATE);
+ dxgdevice_destroy(device);
+ mutex_lock(&adapter_info->device_list_mutex);
+ }
+ mutex_unlock(&adapter_info->device_list_mutex);
+
dxgadapter_remove_process(adapter_info);
kref_put(&adapter_info->adapter->adapter_kref, dxgadapter_release);
list_del(&adapter_info->process_adapter_list_entry);
@@ -240,3 +382,48 @@ void dxgprocess_adapter_release(struct dxgprocess_adapter *adapter_info)
if (adapter_info->refcount == 0)
dxgprocess_adapter_destroy(adapter_info);
}
+
+int dxgprocess_adapter_add_device(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct dxgdevice *device)
+{
+ struct dxgprocess_adapter *entry;
+ struct dxgprocess_adapter *adapter_info = NULL;
+ int ret = 0;
+
+ dxgglobal_acquire_process_adapter_lock();
+
+ list_for_each_entry(entry, &process->process_adapter_list_head,
+ process_adapter_list_entry) {
+ if (entry->adapter == adapter) {
+ adapter_info = entry;
+ break;
+ }
+ }
+ if (adapter_info == NULL) {
+ DXG_ERR("failed to find process adapter info");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ mutex_lock(&adapter_info->device_list_mutex);
+ list_add_tail(&device->device_list_entry,
+ &adapter_info->device_list_head);
+ device->adapter_info = adapter_info;
+ mutex_unlock(&adapter_info->device_list_mutex);
+
+cleanup:
+
+ dxgglobal_release_process_adapter_lock();
+ return ret;
+}
+
+void dxgprocess_adapter_remove_device(struct dxgdevice *device)
+{
+ DXG_TRACE("Removing device: %p", device);
+ mutex_lock(&device->adapter_info->device_list_mutex);
+ if (device->device_list_entry.next) {
+ list_del(&device->device_list_entry);
+ device->device_list_entry.next = NULL;
+ }
+ mutex_unlock(&device->adapter_info->device_list_mutex);
+}
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index b089d126f801..45ac1f25cc5e 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -34,6 +34,7 @@
struct dxgprocess;
struct dxgadapter;
+struct dxgdevice;
/*
* Driver private data.
@@ -71,6 +72,10 @@ struct dxgk_device_types {
u32 virtual_monitor_device:1;
};
+enum dxgdevice_flushschedulerreason {
+ DXGDEVICE_FLUSHSCHEDULER_DEVICE_TERMINATE = 4,
+};
+
enum dxgobjectstate {
DXGOBJECTSTATE_CREATED,
DXGOBJECTSTATE_ACTIVE,
@@ -166,6 +171,9 @@ struct dxgprocess_adapter {
struct list_head adapter_process_list_entry;
/* Entry in dxgprocess::process_adapter_list_head */
struct list_head process_adapter_list_entry;
+ /* List of all dxgdevice objects created for the process on adapter */
+ struct list_head device_list_head;
+ struct mutex device_list_mutex;
struct dxgadapter *adapter;
struct dxgprocess *process;
int refcount;
@@ -175,6 +183,10 @@ struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process,
struct dxgadapter
*adapter);
void dxgprocess_adapter_release(struct dxgprocess_adapter *adapter);
+int dxgprocess_adapter_add_device(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct dxgdevice *device);
+void dxgprocess_adapter_remove_device(struct dxgdevice *device);
void dxgprocess_adapter_stop(struct dxgprocess_adapter *adapter_info);
void dxgprocess_adapter_destroy(struct dxgprocess_adapter *adapter_info);
@@ -222,6 +234,11 @@ struct dxgadapter *dxgprocess_get_adapter(struct dxgprocess *process,
struct d3dkmthandle handle);
struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process,
struct d3dkmthandle handle);
+struct dxgdevice *dxgprocess_device_by_handle(struct dxgprocess *process,
+ struct d3dkmthandle handle);
+struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process,
+ enum hmgrentry_type t,
+ struct d3dkmthandle h);
void dxgprocess_ht_lock_shared_down(struct dxgprocess *process);
void dxgprocess_ht_lock_shared_up(struct dxgprocess *process);
void dxgprocess_ht_lock_exclusive_down(struct dxgprocess *process);
@@ -241,6 +258,7 @@ enum dxgadapter_state {
* This object represents the graphics adapter.
* Objects, which take reference on the adapter:
* - dxgglobal
+ * - dxgdevice
* - adapter handle (struct d3dkmthandle)
*/
struct dxgadapter {
@@ -277,6 +295,38 @@ void dxgadapter_add_process(struct dxgadapter *adapter,
struct dxgprocess_adapter *process_info);
void dxgadapter_remove_process(struct dxgprocess_adapter *process_info);
+/*
+ * This object represents the device object.
+ * The following objects take a reference on the device:
+ * - device handle (struct d3dkmthandle)
+ */
+struct dxgdevice {
+ enum dxgobjectstate object_state;
+ /* Device takes reference on the adapter */
+ struct dxgadapter *adapter;
+ struct dxgprocess_adapter *adapter_info;
+ struct dxgprocess *process;
+ /* Entry in the dxgprocess_adapter device list */
+ struct list_head device_list_entry;
+ struct kref device_kref;
+ /* Protects destruction of the device object */
+ struct rw_semaphore device_lock;
+ /* List of paging queues. Protected by process handle table lock. */
+ struct list_head pqueue_list_head;
+ struct d3dkmthandle handle;
+ enum d3dkmt_deviceexecution_state execution_state;
+ u32 handle_valid;
+};
+
+struct dxgdevice *dxgdevice_create(struct dxgadapter *a, struct dxgprocess *p);
+void dxgdevice_destroy(struct dxgdevice *device);
+void dxgdevice_stop(struct dxgdevice *device);
+void dxgdevice_mark_destroyed(struct dxgdevice *device);
+int dxgdevice_acquire_lock_shared(struct dxgdevice *dev);
+void dxgdevice_release_lock_shared(struct dxgdevice *dev);
+void dxgdevice_release(struct kref *refcount);
+bool dxgdevice_is_active(struct dxgdevice *dev);
+
long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2);
long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2);
@@ -313,6 +363,14 @@ int dxgvmb_send_destroy_process(struct d3dkmthandle process);
int dxgvmb_send_open_adapter(struct dxgadapter *adapter);
int dxgvmb_send_close_adapter(struct dxgadapter *adapter);
int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter);
+struct d3dkmthandle dxgvmb_send_create_device(struct dxgadapter *adapter,
+ struct dxgprocess *process,
+ struct d3dkmt_createdevice *args);
+int dxgvmb_send_destroy_device(struct dxgadapter *adapter,
+ struct dxgprocess *process,
+ struct d3dkmthandle h);
+int dxgvmb_send_flush_device(struct dxgdevice *device,
+ enum dxgdevice_flushschedulerreason reason);
int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryadapterinfo *args);
diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c
index ab9a01e3c8c8..8373f681e822 100644
--- a/drivers/hv/dxgkrnl/dxgprocess.c
+++ b/drivers/hv/dxgkrnl/dxgprocess.c
@@ -241,6 +241,49 @@ struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process,
return adapter;
}
+struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process,
+ enum hmgrentry_type t,
+ struct d3dkmthandle handle)
+{
+ struct dxgdevice *device = NULL;
+ void *obj;
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED);
+ obj = hmgrtable_get_object_by_type(&process->handle_table, t, handle);
+ if (obj) {
+ struct d3dkmthandle device_handle = {};
+
+ switch (t) {
+ case HMGRENTRY_TYPE_DXGDEVICE:
+ device = obj;
+ break;
+ default:
+ DXG_ERR("invalid handle type: %d", t);
+ break;
+ }
+ if (device == NULL)
+ device = hmgrtable_get_object_by_type(
+ &process->handle_table,
+ HMGRENTRY_TYPE_DXGDEVICE,
+ device_handle);
+ if (device)
+ if (kref_get_unless_zero(&device->device_kref) == 0)
+ device = NULL;
+ }
+ if (device == NULL)
+ DXG_ERR("device_by_handle failed: %d %x", t, handle.v);
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED);
+ return device;
+}
+
+struct dxgdevice *dxgprocess_device_by_handle(struct dxgprocess *process,
+ struct d3dkmthandle handle)
+{
+ return dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGDEVICE,
+ handle);
+}
+
void dxgprocess_ht_lock_shared_down(struct dxgprocess *process)
{
hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED);
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 0abf45d0d3f7..73804d11ec49 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -673,6 +673,86 @@ int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter)
return ret;
}
+struct d3dkmthandle dxgvmb_send_create_device(struct dxgadapter *adapter,
+ struct dxgprocess *process,
+ struct d3dkmt_createdevice *args)
+{
+ int ret;
+ struct dxgkvmb_command_createdevice *command;
+ struct dxgkvmb_command_createdevice_return result = { };
+ struct dxgvmbusmsg msg;
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_CREATEDEVICE,
+ process->host_handle);
+ command->flags = args->flags;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ &result, sizeof(result));
+ if (ret < 0)
+ result.device.v = 0;
+ free_message(&msg, process);
+cleanup:
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return result.device;
+}
+
+int dxgvmb_send_destroy_device(struct dxgadapter *adapter,
+ struct dxgprocess *process,
+ struct d3dkmthandle h)
+{
+ int ret;
+ struct dxgkvmb_command_destroydevice *command;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_DESTROYDEVICE,
+ process->host_handle);
+ command->device = h;
+
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_flush_device(struct dxgdevice *device,
+ enum dxgdevice_flushschedulerreason reason)
+{
+ int ret;
+ struct dxgkvmb_command_flushdevice *command;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+ struct dxgprocess *process = device->process;
+
+ ret = init_message(&msg, device->adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_FLUSHDEVICE,
+ process->host_handle);
+ command->device = device->handle;
+ command->reason = reason;
+
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryadapterinfo *args)
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index a805a396e083..4ccf45765954 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -247,4 +247,26 @@ struct dxgkvmb_command_queryadapterinfo_return {
u8 private_data[1];
};
+struct dxgkvmb_command_createdevice {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_createdeviceflags flags;
+ bool cdd_device;
+ void *error_code;
+};
+
+struct dxgkvmb_command_createdevice_return {
+ struct d3dkmthandle device;
+};
+
+struct dxgkvmb_command_destroydevice {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+};
+
+struct dxgkvmb_command_flushdevice {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ enum dxgdevice_flushschedulerreason reason;
+};
+
#endif /* _DXGVMBUS_H */
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index b08ea9430093..405e8b92913e 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -424,10 +424,136 @@ dxgkio_query_adapter_info(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_create_device(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_createdevice args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+ struct d3dkmthandle host_device_handle = {};
+ bool adapter_locked = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ /* The call acquires reference on the adapter */
+ adapter = dxgprocess_adapter_by_handle(process, args.adapter);
+ if (adapter == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgdevice_create(adapter, process);
+ if (device == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0)
+ goto cleanup;
+
+ adapter_locked = true;
+
+ host_device_handle = dxgvmb_send_create_device(adapter, process, &args);
+ if (host_device_handle.v) {
+ ret = copy_to_user(&((struct d3dkmt_createdevice *)inargs)->
+ device, &host_device_handle,
+ sizeof(struct d3dkmthandle));
+ if (ret) {
+ DXG_ERR("failed to copy device handle");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ ret = hmgrtable_assign_handle(&process->handle_table, device,
+ HMGRENTRY_TYPE_DXGDEVICE,
+ host_device_handle);
+ if (ret >= 0) {
+ device->handle = host_device_handle;
+ device->handle_valid = 1;
+ device->object_state = DXGOBJECTSTATE_ACTIVE;
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+ }
+
+cleanup:
+
+ if (ret < 0) {
+ if (host_device_handle.v)
+ dxgvmb_send_destroy_device(adapter, process,
+ host_device_handle);
+ if (device)
+ dxgdevice_destroy(device);
+ }
+
+ if (adapter_locked)
+ dxgadapter_release_lock_shared(adapter);
+
+ if (adapter)
+ kref_put(&adapter->adapter_kref, dxgadapter_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_destroy_device(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_destroydevice args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ device = hmgrtable_get_object_by_type(&process->handle_table,
+ HMGRENTRY_TYPE_DXGDEVICE,
+ args.device);
+ if (device) {
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_DXGDEVICE, args.device);
+ device->handle_valid = 0;
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+
+ if (device == NULL) {
+ DXG_ERR("invalid device handle: %x", args.device.v);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+
+ dxgdevice_destroy(device);
+
+ if (dxgadapter_acquire_lock_shared(adapter) == 0) {
+ dxgvmb_send_destroy_device(adapter, process, args.device);
+ dxgadapter_release_lock_shared(adapter);
+ }
+
+cleanup:
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static struct ioctl_desc ioctls[] = {
/* 0x00 */ {},
/* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID},
-/* 0x02 */ {},
+/* 0x02 */ {dxgkio_create_device, LX_DXCREATEDEVICE},
/* 0x03 */ {},
/* 0x04 */ {},
/* 0x05 */ {},
@@ -450,7 +576,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x16 */ {},
/* 0x17 */ {},
/* 0x18 */ {},
-/* 0x19 */ {},
+/* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE},
/* 0x1a */ {},
/* 0x1b */ {},
/* 0x1c */ {},
diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h
index dc849a8ed3f2..e0bd33b365b0 100644
--- a/drivers/hv/dxgkrnl/misc.h
+++ b/drivers/hv/dxgkrnl/misc.h
@@ -27,10 +27,10 @@ extern const struct d3dkmthandle zerohandle;
*
* channel_lock (VMBus channel lock)
* fd_mutex
- * plistmutex
- * table_lock
- * core_lock
- * device_lock
+ * plistmutex (process list mutex)
+ * table_lock (handle table lock)
+ * core_lock (dxgadapter lock)
+ * device_lock (dxgdevice lock)
* process_adapter_mutex
* adapter_list_lock
* device_mutex (dxgglobal mutex)
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index c675d5827ed5..7414f0f5ce8e 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -86,6 +86,74 @@ struct d3dkmt_openadapterfromluid {
struct d3dkmthandle adapter_handle;
};
+struct d3dddi_allocationlist {
+ struct d3dkmthandle allocation;
+ union {
+ struct {
+ __u32 write_operation :1;
+ __u32 do_not_retire_instance :1;
+ __u32 offer_priority :3;
+ __u32 reserved :27;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dddi_patchlocationlist {
+ __u32 allocation_index;
+ union {
+ struct {
+ __u32 slot_id:24;
+ __u32 reserved:8;
+ };
+ __u32 value;
+ };
+ __u32 driver_id;
+ __u32 allocation_offset;
+ __u32 patch_offset;
+ __u32 split_offset;
+};
+
+struct d3dkmt_createdeviceflags {
+ __u32 legacy_mode:1;
+ __u32 request_vSync:1;
+ __u32 disable_gpu_timeout:1;
+ __u32 gdi_device:1;
+ __u32 reserved:28;
+};
+
+struct d3dkmt_createdevice {
+ struct d3dkmthandle adapter;
+ __u32 reserved3;
+ struct d3dkmt_createdeviceflags flags;
+ struct d3dkmthandle device;
+#ifdef __KERNEL__
+ void *command_buffer;
+#else
+ __u64 command_buffer;
+#endif
+ __u32 command_buffer_size;
+ __u32 reserved;
+#ifdef __KERNEL__
+ struct d3dddi_allocationlist *allocation_list;
+#else
+ __u64 allocation_list;
+#endif
+ __u32 allocation_list_size;
+ __u32 reserved1;
+#ifdef __KERNEL__
+ struct d3dddi_patchlocationlist *patch_location_list;
+#else
+ __u64 patch_location_list;
+#endif
+ __u32 patch_location_list_size;
+ __u32 reserved2;
+};
+
+struct d3dkmt_destroydevice {
+ struct d3dkmthandle device;
+};
+
struct d3dkmt_adaptertype {
union {
struct {
@@ -125,6 +193,16 @@ struct d3dkmt_queryadapterinfo {
__u32 private_data_size;
};
+enum d3dkmt_deviceexecution_state {
+ _D3DKMT_DEVICEEXECUTION_ACTIVE = 1,
+ _D3DKMT_DEVICEEXECUTION_RESET = 2,
+ _D3DKMT_DEVICEEXECUTION_HUNG = 3,
+ _D3DKMT_DEVICEEXECUTION_STOPPED = 4,
+ _D3DKMT_DEVICEEXECUTION_ERROR_OUTOFMEMORY = 5,
+ _D3DKMT_DEVICEEXECUTION_ERROR_DMAFAULT = 6,
+ _D3DKMT_DEVICEEXECUTION_ERROR_DMAPAGEFAULT = 7,
+};
+
union d3dkmt_enumadapters_filter {
struct {
__u64 include_compute_only:1;
@@ -152,12 +230,16 @@ struct d3dkmt_enumadapters3 {
#define LX_DXOPENADAPTERFROMLUID \
_IOWR(0x47, 0x01, struct d3dkmt_openadapterfromluid)
+#define LX_DXCREATEDEVICE \
+ _IOWR(0x47, 0x02, struct d3dkmt_createdevice)
#define LX_DXQUERYADAPTERINFO \
_IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
#define LX_DXENUMADAPTERS2 \
_IOWR(0x47, 0x14, struct d3dkmt_enumadapters2)
#define LX_DXCLOSEADAPTER \
_IOWR(0x47, 0x15, struct d3dkmt_closeadapter)
+#define LX_DXDESTROYDEVICE \
+ _IOWR(0x47, 0x19, struct d3dkmt_destroydevice)
#define LX_DXENUMADAPTERS3 \
_IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3)
* [PATCH 07/55] drivers: hv: dxgkrnl: Creation of dxgcontext objects
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (5 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 06/55] drivers: hv: dxgkrnl: Creation of dxgdevice objects Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 08/55] drivers: hv: dxgkrnl: Creation of compute device allocations and resources Eric Curtin
` (47 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls for creation/destruction of dxgcontext
objects:
- the LX_DXCREATECONTEXTVIRTUAL ioctl
- the LX_DXDESTROYCONTEXT ioctl.
A dxgcontext object represents a compute device execution thread.
Compute device DMA buffers and synchronization operations are
submitted for execution to a dxgcontext. dxgcontext objects
belong to a dxgdevice object.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 103 ++++++++++++++++++++
drivers/hv/dxgkrnl/dxgkrnl.h | 38 ++++++++
drivers/hv/dxgkrnl/dxgprocess.c | 4 +
drivers/hv/dxgkrnl/dxgvmbus.c | 101 ++++++++++++++++++-
drivers/hv/dxgkrnl/dxgvmbus.h | 18 ++++
drivers/hv/dxgkrnl/ioctl.c | 168 +++++++++++++++++++++++++++++++-
drivers/hv/dxgkrnl/misc.h | 1 +
include/uapi/misc/d3dkmthk.h | 47 +++++++++
8 files changed, 477 insertions(+), 3 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index a9a341716eba..cd103e092ac2 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -206,7 +206,9 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter,
device->adapter = adapter;
device->process = process;
kref_get(&adapter->adapter_kref);
+ INIT_LIST_HEAD(&device->context_list_head);
init_rwsem(&device->device_lock);
+ init_rwsem(&device->context_list_lock);
INIT_LIST_HEAD(&device->pqueue_list_head);
device->object_state = DXGOBJECTSTATE_CREATED;
device->execution_state = _D3DKMT_DEVICEEXECUTION_ACTIVE;
@@ -248,6 +250,20 @@ void dxgdevice_destroy(struct dxgdevice *device)
dxgdevice_stop(device);
+ {
+ struct dxgcontext *context;
+ struct dxgcontext *tmp;
+
+ DXG_TRACE("destroying contexts");
+ dxgdevice_acquire_context_list_lock(device);
+ list_for_each_entry_safe(context, tmp,
+ &device->context_list_head,
+ context_list_entry) {
+ dxgcontext_destroy(process, context);
+ }
+ dxgdevice_release_context_list_lock(device);
+ }
+
/* Guest handles need to be released before the host handles */
hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
if (device->handle_valid) {
@@ -302,6 +318,32 @@ bool dxgdevice_is_active(struct dxgdevice *device)
return device->object_state == DXGOBJECTSTATE_ACTIVE;
}
+void dxgdevice_acquire_context_list_lock(struct dxgdevice *device)
+{
+ down_write(&device->context_list_lock);
+}
+
+void dxgdevice_release_context_list_lock(struct dxgdevice *device)
+{
+ up_write(&device->context_list_lock);
+}
+
+void dxgdevice_add_context(struct dxgdevice *device, struct dxgcontext *context)
+{
+ down_write(&device->context_list_lock);
+ list_add_tail(&context->context_list_entry, &device->context_list_head);
+ up_write(&device->context_list_lock);
+}
+
+void dxgdevice_remove_context(struct dxgdevice *device,
+ struct dxgcontext *context)
+{
+ if (context->context_list_entry.next) {
+ list_del(&context->context_list_entry);
+ context->context_list_entry.next = NULL;
+ }
+}
+
void dxgdevice_release(struct kref *refcount)
{
struct dxgdevice *device;
@@ -310,6 +352,67 @@ void dxgdevice_release(struct kref *refcount)
kfree(device);
}
+struct dxgcontext *dxgcontext_create(struct dxgdevice *device)
+{
+ struct dxgcontext *context;
+
+ context = kzalloc(sizeof(struct dxgcontext), GFP_KERNEL);
+ if (context) {
+ kref_init(&context->context_kref);
+ context->device = device;
+ context->process = device->process;
+ context->device_handle = device->handle;
+ kref_get(&device->device_kref);
+ INIT_LIST_HEAD(&context->hwqueue_list_head);
+ init_rwsem(&context->hwqueue_list_lock);
+ dxgdevice_add_context(device, context);
+ context->object_state = DXGOBJECTSTATE_ACTIVE;
+ }
+ return context;
+}
+
+/*
+ * Called when the device context list lock is held
+ */
+void dxgcontext_destroy(struct dxgprocess *process, struct dxgcontext *context)
+{
+ DXG_TRACE("Destroying context %p", context);
+ context->object_state = DXGOBJECTSTATE_DESTROYED;
+ if (context->device) {
+ if (context->handle.v) {
+ hmgrtable_free_handle_safe(&process->handle_table,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ context->handle);
+ }
+ dxgdevice_remove_context(context->device, context);
+ kref_put(&context->device->device_kref, dxgdevice_release);
+ }
+ kref_put(&context->context_kref, dxgcontext_release);
+}
+
+void dxgcontext_destroy_safe(struct dxgprocess *process,
+ struct dxgcontext *context)
+{
+ struct dxgdevice *device = context->device;
+
+ dxgdevice_acquire_context_list_lock(device);
+ dxgcontext_destroy(process, context);
+ dxgdevice_release_context_list_lock(device);
+}
+
+bool dxgcontext_is_active(struct dxgcontext *context)
+{
+ return context->object_state == DXGOBJECTSTATE_ACTIVE;
+}
+
+void dxgcontext_release(struct kref *refcount)
+{
+ struct dxgcontext *context;
+
+ context = container_of(refcount, struct dxgcontext, context_kref);
+ kfree(context);
+}
+
struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process,
struct dxgadapter *adapter)
{
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 45ac1f25cc5e..a3d8d3c9f37d 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -35,6 +35,7 @@
struct dxgprocess;
struct dxgadapter;
struct dxgdevice;
+struct dxgcontext;
/*
* Driver private data.
@@ -298,6 +299,7 @@ void dxgadapter_remove_process(struct dxgprocess_adapter *process_info);
/*
* The object represent the device object.
* The following objects take reference on the device
+ * - dxgcontext
* - device handle (struct d3dkmthandle)
*/
struct dxgdevice {
@@ -311,6 +313,8 @@ struct dxgdevice {
struct kref device_kref;
/* Protects destcruction of the device object */
struct rw_semaphore device_lock;
+ struct rw_semaphore context_list_lock;
+ struct list_head context_list_head;
/* List of paging queues. Protected by process handle table lock. */
struct list_head pqueue_list_head;
struct d3dkmthandle handle;
@@ -325,7 +329,33 @@ void dxgdevice_mark_destroyed(struct dxgdevice *device);
int dxgdevice_acquire_lock_shared(struct dxgdevice *dev);
void dxgdevice_release_lock_shared(struct dxgdevice *dev);
void dxgdevice_release(struct kref *refcount);
+void dxgdevice_add_context(struct dxgdevice *dev, struct dxgcontext *ctx);
+void dxgdevice_remove_context(struct dxgdevice *dev, struct dxgcontext *ctx);
bool dxgdevice_is_active(struct dxgdevice *dev);
+void dxgdevice_acquire_context_list_lock(struct dxgdevice *dev);
+void dxgdevice_release_context_list_lock(struct dxgdevice *dev);
+
+/*
+ * The object represents the execution context of a device.
+ */
+struct dxgcontext {
+ enum dxgobjectstate object_state;
+ struct dxgdevice *device;
+ struct dxgprocess *process;
+ /* entry in the device context list */
+ struct list_head context_list_entry;
+ struct list_head hwqueue_list_head;
+ struct rw_semaphore hwqueue_list_lock;
+ struct kref context_kref;
+ struct d3dkmthandle handle;
+ struct d3dkmthandle device_handle;
+};
+
+struct dxgcontext *dxgcontext_create(struct dxgdevice *dev);
+void dxgcontext_destroy(struct dxgprocess *pr, struct dxgcontext *ctx);
+void dxgcontext_destroy_safe(struct dxgprocess *pr, struct dxgcontext *ctx);
+void dxgcontext_release(struct kref *refcount);
+bool dxgcontext_is_active(struct dxgcontext *ctx);
long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2);
long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2);
@@ -371,6 +401,14 @@ int dxgvmb_send_destroy_device(struct dxgadapter *adapter,
struct d3dkmthandle h);
int dxgvmb_send_flush_device(struct dxgdevice *device,
enum dxgdevice_flushschedulerreason reason);
+struct d3dkmthandle
+dxgvmb_send_create_context(struct dxgadapter *adapter,
+ struct dxgprocess *process,
+ struct d3dkmt_createcontextvirtual
+ *args);
+int dxgvmb_send_destroy_context(struct dxgadapter *adapter,
+ struct dxgprocess *process,
+ struct d3dkmthandle h);
int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryadapterinfo *args);
diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c
index 8373f681e822..ca307beb9a9a 100644
--- a/drivers/hv/dxgkrnl/dxgprocess.c
+++ b/drivers/hv/dxgkrnl/dxgprocess.c
@@ -257,6 +257,10 @@ struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process,
case HMGRENTRY_TYPE_DXGDEVICE:
device = obj;
break;
+ case HMGRENTRY_TYPE_DXGCONTEXT:
+ device_handle =
+ ((struct dxgcontext *)obj)->device_handle;
+ break;
default:
DXG_ERR("invalid handle type: %d", t);
break;
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 73804d11ec49..e66aac7c13cb 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -731,7 +731,7 @@ int dxgvmb_send_flush_device(struct dxgdevice *device,
enum dxgdevice_flushschedulerreason reason)
{
int ret;
- struct dxgkvmb_command_flushdevice *command;
+ struct dxgkvmb_command_flushdevice *command = NULL;
struct dxgvmbusmsg msg = {.hdr = NULL};
struct dxgprocess *process = device->process;
@@ -745,6 +745,105 @@ int dxgvmb_send_flush_device(struct dxgdevice *device,
command->device = device->handle;
command->reason = reason;
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+struct d3dkmthandle
+dxgvmb_send_create_context(struct dxgadapter *adapter,
+ struct dxgprocess *process,
+ struct d3dkmt_createcontextvirtual *args)
+{
+ struct dxgkvmb_command_createcontextvirtual *command = NULL;
+ u32 cmd_size;
+ int ret;
+ struct d3dkmthandle context = {};
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ if (args->priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ DXG_ERR("PrivateDriverDataSize is invalid");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ cmd_size = sizeof(struct dxgkvmb_command_createcontextvirtual) +
+ args->priv_drv_data_size - 1;
+
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_CREATECONTEXTVIRTUAL,
+ process->host_handle);
+ command->device = args->device;
+ command->node_ordinal = args->node_ordinal;
+ command->engine_affinity = args->engine_affinity;
+ command->flags = args->flags;
+ command->client_hint = args->client_hint;
+ command->priv_drv_data_size = args->priv_drv_data_size;
+ if (args->priv_drv_data_size) {
+ ret = copy_from_user(command->priv_drv_data,
+ args->priv_drv_data,
+ args->priv_drv_data_size);
+ if (ret) {
+ DXG_ERR("Failed to copy private data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+ /* Input command is returned back as output */
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ command, cmd_size);
+ if (ret < 0) {
+ goto cleanup;
+ } else {
+ context = command->context;
+ if (args->priv_drv_data_size) {
+ ret = copy_to_user(args->priv_drv_data,
+ command->priv_drv_data,
+ args->priv_drv_data_size);
+ if (ret) {
+ dev_err(DXGDEV,
+ "Failed to copy private data to user");
+ ret = -EINVAL;
+ dxgvmb_send_destroy_context(adapter, process,
+ context);
+ context.v = 0;
+ }
+ }
+ }
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return context;
+}
+
+int dxgvmb_send_destroy_context(struct dxgadapter *adapter,
+ struct dxgprocess *process,
+ struct d3dkmthandle h)
+{
+ int ret;
+ struct dxgkvmb_command_destroycontext *command;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_DESTROYCONTEXT,
+ process->host_handle);
+ command->context = h;
+
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
free_message(&msg, process);
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 4ccf45765954..ebcb7b0f62c1 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -269,4 +269,22 @@ struct dxgkvmb_command_flushdevice {
enum dxgdevice_flushschedulerreason reason;
};
+struct dxgkvmb_command_createcontextvirtual {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle context;
+ struct d3dkmthandle device;
+ u32 node_ordinal;
+ u32 engine_affinity;
+ struct d3dddi_createcontextflags flags;
+ enum d3dkmt_clienthint client_hint;
+ u32 priv_drv_data_size;
+ u8 priv_drv_data[1];
+};
+
+/* The command returns ntstatus */
+struct dxgkvmb_command_destroycontext {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle context;
+};
+
#endif /* _DXGVMBUS_H */
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 405e8b92913e..5d10ebd2ce6a 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -550,13 +550,177 @@ dxgkio_destroy_device(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_create_context_virtual(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_createcontextvirtual args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+ struct dxgcontext *context = NULL;
+ struct d3dkmthandle host_context_handle = {};
+ bool device_lock_acquired = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ /*
+ * The call acquires reference on the device. It is safe to access the
+ * adapter, because the device holds reference on it.
+ */
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0)
+ goto cleanup;
+
+ device_lock_acquired = true;
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ context = dxgcontext_create(device);
+ if (context == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ host_context_handle = dxgvmb_send_create_context(adapter,
+ process, &args);
+ if (host_context_handle.v) {
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ ret = hmgrtable_assign_handle(&process->handle_table, context,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ host_context_handle);
+ if (ret >= 0)
+ context->handle = host_context_handle;
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+ if (ret < 0)
+ goto cleanup;
+ ret = copy_to_user(&((struct d3dkmt_createcontextvirtual *)
+ inargs)->context, &host_context_handle,
+ sizeof(struct d3dkmthandle));
+ if (ret) {
+ DXG_ERR("failed to copy context handle");
+ ret = -EINVAL;
+ }
+ } else {
+ DXG_ERR("invalid host handle");
+ ret = -EINVAL;
+ }
+
+cleanup:
+
+ if (ret < 0) {
+ if (host_context_handle.v) {
+ dxgvmb_send_destroy_context(adapter, process,
+ host_context_handle);
+ }
+ if (context)
+ dxgcontext_destroy_safe(process, context);
+ }
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+
+ if (device) {
+ if (device_lock_acquired)
+ dxgdevice_release_lock_shared(device);
+ kref_put(&device->device_kref, dxgdevice_release);
+ }
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_destroycontext args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ struct dxgcontext *context = NULL;
+ struct dxgdevice *device = NULL;
+ struct d3dkmthandle device_handle = {};
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ context = hmgrtable_get_object_by_type(&process->handle_table,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ args.context);
+ if (context) {
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_DXGCONTEXT, args.context);
+ context->handle.v = 0;
+ device_handle = context->device_handle;
+ context->object_state = DXGOBJECTSTATE_DESTROYED;
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+
+ if (context == NULL) {
+ DXG_ERR("invalid context handle: %x", args.context.v);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ /*
+ * The call acquires reference on the device. It is safe to access the
+ * adapter, because the device holds reference on it.
+ */
+ device = dxgprocess_device_by_handle(process, device_handle);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_destroy_context(adapter, process, args.context);
+
+ dxgcontext_destroy_safe(process, context);
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret);
+ return ret;
+}
+
static struct ioctl_desc ioctls[] = {
/* 0x00 */ {},
/* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID},
/* 0x02 */ {dxgkio_create_device, LX_DXCREATEDEVICE},
/* 0x03 */ {},
-/* 0x04 */ {},
-/* 0x05 */ {},
+/* 0x04 */ {dxgkio_create_context_virtual, LX_DXCREATECONTEXTVIRTUAL},
+/* 0x05 */ {dxgkio_destroy_context, LX_DXDESTROYCONTEXT},
/* 0x06 */ {},
/* 0x07 */ {},
/* 0x08 */ {},
diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h
index e0bd33b365b0..3a9637f0b5e2 100644
--- a/drivers/hv/dxgkrnl/misc.h
+++ b/drivers/hv/dxgkrnl/misc.h
@@ -29,6 +29,7 @@ extern const struct d3dkmthandle zerohandle;
* fd_mutex
* plistmutex (process list mutex)
* table_lock (handle table lock)
+ * context_list_lock
* core_lock (dxgadapter lock)
* device_lock (dxgdevice lock)
* process_adapter_mutex
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 7414f0f5ce8e..4ba0070b061f 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -154,6 +154,49 @@ struct d3dkmt_destroydevice {
struct d3dkmthandle device;
};
+enum d3dkmt_clienthint {
+ _D3DKMT_CLIENTHINT_UNKNOWN = 0,
+ _D3DKMT_CLIENTHINT_OPENGL = 1,
+ _D3DKMT_CLIENTHINT_CDD = 2,
+ _D3DKMT_CLIENTHINT_DX7 = 7,
+ _D3DKMT_CLIENTHINT_DX8 = 8,
+ _D3DKMT_CLIENTHINT_DX9 = 9,
+ _D3DKMT_CLIENTHINT_DX10 = 10,
+};
+
+struct d3dddi_createcontextflags {
+ union {
+ struct {
+ __u32 null_rendering:1;
+ __u32 initial_data:1;
+ __u32 disable_gpu_timeout:1;
+ __u32 synchronization_only:1;
+ __u32 hw_queue_supported:1;
+ __u32 reserved:27;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dkmt_destroycontext {
+ struct d3dkmthandle context;
+};
+
+struct d3dkmt_createcontextvirtual {
+ struct d3dkmthandle device;
+ __u32 node_ordinal;
+ __u32 engine_affinity;
+ struct d3dddi_createcontextflags flags;
+#ifdef __KERNEL__
+ void *priv_drv_data;
+#else
+ __u64 priv_drv_data;
+#endif
+ __u32 priv_drv_data_size;
+ enum d3dkmt_clienthint client_hint;
+ struct d3dkmthandle context;
+};
+
struct d3dkmt_adaptertype {
union {
struct {
@@ -232,6 +275,10 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x01, struct d3dkmt_openadapterfromluid)
#define LX_DXCREATEDEVICE \
_IOWR(0x47, 0x02, struct d3dkmt_createdevice)
+#define LX_DXCREATECONTEXTVIRTUAL \
+ _IOWR(0x47, 0x04, struct d3dkmt_createcontextvirtual)
+#define LX_DXDESTROYCONTEXT \
+ _IOWR(0x47, 0x05, struct d3dkmt_destroycontext)
#define LX_DXQUERYADAPTERINFO \
_IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
#define LX_DXENUMADAPTERS2 \
* [PATCH 08/55] drivers: hv: dxgkrnl: Creation of compute device allocations and resources
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (6 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 07/55] drivers: hv: dxgkrnl: Creation of dxgcontext objects Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 09/55] drivers: hv: dxgkrnl: Creation of compute device sync objects Eric Curtin
` (46 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls to create and destroy virtual compute device
allocations (dxgallocation) and resources (dxgresource):
- the LX_DXCREATEALLOCATION ioctl,
- the LX_DXDESTROYALLOCATION2 ioctl.
Compute device allocations (dxgallocation objects) represent memory
allocations that can be accessed by the device. Allocations can be
created around existing system memory (provided by an application)
or around memory allocated by dxgkrnl on the host.
Compute device resources (dxgresource objects) are containers of
compute device allocations. Allocations can be dynamically added
to and removed from a resource.
Each allocation/resource has associated driver private data, which
is provided during creation.
Each created resource or allocation has a handle (d3dkmthandle),
which is used to reference the corresponding object in other ioctls.
A dxgallocation can be resident (meaning that it is accessible by
the compute device) or evicted. When an allocation is evicted,
its content is stored in the backing store in system memory.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 282 ++++++++++++++
drivers/hv/dxgkrnl/dxgkrnl.h | 113 ++++++
drivers/hv/dxgkrnl/dxgmodule.c | 1 +
drivers/hv/dxgkrnl/dxgvmbus.c | 649 ++++++++++++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 123 ++++++
drivers/hv/dxgkrnl/ioctl.c | 631 ++++++++++++++++++++++++++++++-
drivers/hv/dxgkrnl/misc.h | 3 +
include/uapi/misc/d3dkmthk.h | 204 ++++++++++
8 files changed, 2004 insertions(+), 2 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index cd103e092ac2..402caa81a5db 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -207,8 +207,11 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter,
device->process = process;
kref_get(&adapter->adapter_kref);
INIT_LIST_HEAD(&device->context_list_head);
+ INIT_LIST_HEAD(&device->alloc_list_head);
+ INIT_LIST_HEAD(&device->resource_list_head);
init_rwsem(&device->device_lock);
init_rwsem(&device->context_list_lock);
+ init_rwsem(&device->alloc_list_lock);
INIT_LIST_HEAD(&device->pqueue_list_head);
device->object_state = DXGOBJECTSTATE_CREATED;
device->execution_state = _D3DKMT_DEVICEEXECUTION_ACTIVE;
@@ -224,6 +227,14 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter,
void dxgdevice_stop(struct dxgdevice *device)
{
+ struct dxgallocation *alloc;
+
+ DXG_TRACE("Stopping device: %p", device);
+ dxgdevice_acquire_alloc_list_lock(device);
+ list_for_each_entry(alloc, &device->alloc_list_head, alloc_list_entry) {
+ dxgallocation_stop(alloc);
+ }
+ dxgdevice_release_alloc_list_lock(device);
}
void dxgdevice_mark_destroyed(struct dxgdevice *device)
@@ -250,6 +261,33 @@ void dxgdevice_destroy(struct dxgdevice *device)
dxgdevice_stop(device);
+ dxgdevice_acquire_alloc_list_lock(device);
+
+ {
+ struct dxgallocation *alloc;
+ struct dxgallocation *tmp;
+
+ DXG_TRACE("destroying allocations");
+ list_for_each_entry_safe(alloc, tmp, &device->alloc_list_head,
+ alloc_list_entry) {
+ dxgallocation_destroy(alloc);
+ }
+ }
+
+ {
+ struct dxgresource *resource;
+ struct dxgresource *tmp;
+
+ DXG_TRACE("destroying resources");
+ list_for_each_entry_safe(resource, tmp,
+ &device->resource_list_head,
+ resource_list_entry) {
+ dxgresource_destroy(resource);
+ }
+ }
+
+ dxgdevice_release_alloc_list_lock(device);
+
{
struct dxgcontext *context;
struct dxgcontext *tmp;
@@ -328,6 +366,26 @@ void dxgdevice_release_context_list_lock(struct dxgdevice *device)
up_write(&device->context_list_lock);
}
+void dxgdevice_acquire_alloc_list_lock(struct dxgdevice *device)
+{
+ down_write(&device->alloc_list_lock);
+}
+
+void dxgdevice_release_alloc_list_lock(struct dxgdevice *device)
+{
+ up_write(&device->alloc_list_lock);
+}
+
+void dxgdevice_acquire_alloc_list_lock_shared(struct dxgdevice *device)
+{
+ down_read(&device->alloc_list_lock);
+}
+
+void dxgdevice_release_alloc_list_lock_shared(struct dxgdevice *device)
+{
+ up_read(&device->alloc_list_lock);
+}
+
void dxgdevice_add_context(struct dxgdevice *device, struct dxgcontext *context)
{
down_write(&device->context_list_lock);
@@ -344,6 +402,161 @@ void dxgdevice_remove_context(struct dxgdevice *device,
}
}
+void dxgdevice_add_alloc(struct dxgdevice *device, struct dxgallocation *alloc)
+{
+ dxgdevice_acquire_alloc_list_lock(device);
+ list_add_tail(&alloc->alloc_list_entry, &device->alloc_list_head);
+ kref_get(&device->device_kref);
+ alloc->owner.device = device;
+ dxgdevice_release_alloc_list_lock(device);
+}
+
+void dxgdevice_remove_alloc(struct dxgdevice *device,
+ struct dxgallocation *alloc)
+{
+ if (alloc->alloc_list_entry.next) {
+ list_del(&alloc->alloc_list_entry);
+ alloc->alloc_list_entry.next = NULL;
+ kref_put(&device->device_kref, dxgdevice_release);
+ }
+}
+
+void dxgdevice_remove_alloc_safe(struct dxgdevice *device,
+ struct dxgallocation *alloc)
+{
+ dxgdevice_acquire_alloc_list_lock(device);
+ dxgdevice_remove_alloc(device, alloc);
+ dxgdevice_release_alloc_list_lock(device);
+}
+
+void dxgdevice_add_resource(struct dxgdevice *device, struct dxgresource *res)
+{
+ dxgdevice_acquire_alloc_list_lock(device);
+ list_add_tail(&res->resource_list_entry, &device->resource_list_head);
+ kref_get(&device->device_kref);
+ dxgdevice_release_alloc_list_lock(device);
+}
+
+void dxgdevice_remove_resource(struct dxgdevice *device,
+ struct dxgresource *res)
+{
+ if (res->resource_list_entry.next) {
+ list_del(&res->resource_list_entry);
+ res->resource_list_entry.next = NULL;
+ kref_put(&device->device_kref, dxgdevice_release);
+ }
+}
+
+struct dxgresource *dxgresource_create(struct dxgdevice *device)
+{
+ struct dxgresource *resource;
+
+ resource = kzalloc(sizeof(struct dxgresource), GFP_KERNEL);
+ if (resource) {
+ kref_init(&resource->resource_kref);
+ resource->device = device;
+ resource->process = device->process;
+ resource->object_state = DXGOBJECTSTATE_ACTIVE;
+ mutex_init(&resource->resource_mutex);
+ INIT_LIST_HEAD(&resource->alloc_list_head);
+ dxgdevice_add_resource(device, resource);
+ }
+ return resource;
+}
+
+void dxgresource_free_handle(struct dxgresource *resource)
+{
+ struct dxgallocation *alloc;
+ struct dxgprocess *process;
+
+ if (resource->handle_valid) {
+ process = resource->device->process;
+ hmgrtable_free_handle_safe(&process->handle_table,
+ HMGRENTRY_TYPE_DXGRESOURCE,
+ resource->handle);
+ resource->handle_valid = 0;
+ }
+ list_for_each_entry(alloc, &resource->alloc_list_head,
+ alloc_list_entry) {
+ dxgallocation_free_handle(alloc);
+ }
+}
+
+void dxgresource_destroy(struct dxgresource *resource)
+{
+ /* device->alloc_list_lock is held */
+ struct dxgallocation *alloc;
+ struct dxgallocation *tmp;
+ struct d3dkmt_destroyallocation2 args = { };
+ int destroyed = test_and_set_bit(0, &resource->flags);
+ struct dxgdevice *device = resource->device;
+
+ if (!destroyed) {
+ dxgresource_free_handle(resource);
+ if (resource->handle.v) {
+ args.device = device->handle;
+ args.resource = resource->handle;
+ dxgvmb_send_destroy_allocation(device->process,
+ device, &args, NULL);
+ resource->handle.v = 0;
+ }
+ list_for_each_entry_safe(alloc, tmp, &resource->alloc_list_head,
+ alloc_list_entry) {
+ dxgallocation_destroy(alloc);
+ }
+ dxgdevice_remove_resource(device, resource);
+ }
+ kref_put(&resource->resource_kref, dxgresource_release);
+}
+
+void dxgresource_release(struct kref *refcount)
+{
+ struct dxgresource *resource;
+
+ resource = container_of(refcount, struct dxgresource, resource_kref);
+ kfree(resource);
+}
+
+bool dxgresource_is_active(struct dxgresource *resource)
+{
+ return resource->object_state == DXGOBJECTSTATE_ACTIVE;
+}
+
+int dxgresource_add_alloc(struct dxgresource *resource,
+ struct dxgallocation *alloc)
+{
+ int ret = -ENODEV;
+ struct dxgdevice *device = resource->device;
+
+ dxgdevice_acquire_alloc_list_lock(device);
+ if (dxgresource_is_active(resource)) {
+ list_add_tail(&alloc->alloc_list_entry,
+ &resource->alloc_list_head);
+ alloc->owner.resource = resource;
+ ret = 0;
+ }
+ alloc->resource_owner = 1;
+ dxgdevice_release_alloc_list_lock(device);
+ return ret;
+}
+
+void dxgresource_remove_alloc(struct dxgresource *resource,
+ struct dxgallocation *alloc)
+{
+ if (alloc->alloc_list_entry.next) {
+ list_del(&alloc->alloc_list_entry);
+ alloc->alloc_list_entry.next = NULL;
+ }
+}
+
+void dxgresource_remove_alloc_safe(struct dxgresource *resource,
+ struct dxgallocation *alloc)
+{
+ dxgdevice_acquire_alloc_list_lock(resource->device);
+ dxgresource_remove_alloc(resource, alloc);
+ dxgdevice_release_alloc_list_lock(resource->device);
+}
+
void dxgdevice_release(struct kref *refcount)
{
struct dxgdevice *device;
@@ -413,6 +626,75 @@ void dxgcontext_release(struct kref *refcount)
kfree(context);
}
+struct dxgallocation *dxgallocation_create(struct dxgprocess *process)
+{
+ struct dxgallocation *alloc;
+
+ alloc = kzalloc(sizeof(struct dxgallocation), GFP_KERNEL);
+ if (alloc)
+ alloc->process = process;
+ return alloc;
+}
+
+void dxgallocation_stop(struct dxgallocation *alloc)
+{
+ if (alloc->pages) {
+ release_pages(alloc->pages, alloc->num_pages);
+ vfree(alloc->pages);
+ alloc->pages = NULL;
+ }
+}
+
+void dxgallocation_free_handle(struct dxgallocation *alloc)
+{
+ dxgprocess_ht_lock_exclusive_down(alloc->process);
+ if (alloc->handle_valid) {
+ hmgrtable_free_handle(&alloc->process->handle_table,
+ HMGRENTRY_TYPE_DXGALLOCATION,
+ alloc->alloc_handle);
+ alloc->handle_valid = 0;
+ }
+ dxgprocess_ht_lock_exclusive_up(alloc->process);
+}
+
+void dxgallocation_destroy(struct dxgallocation *alloc)
+{
+ struct dxgprocess *process = alloc->process;
+ struct d3dkmt_destroyallocation2 args = { };
+
+ dxgallocation_stop(alloc);
+ if (alloc->resource_owner)
+ dxgresource_remove_alloc(alloc->owner.resource, alloc);
+ else if (alloc->owner.device)
+ dxgdevice_remove_alloc(alloc->owner.device, alloc);
+ dxgallocation_free_handle(alloc);
+ if (alloc->alloc_handle.v && !alloc->resource_owner) {
+ args.device = alloc->owner.device->handle;
+ args.alloc_count = 1;
+ dxgvmb_send_destroy_allocation(process,
+ alloc->owner.device,
+ &args, &alloc->alloc_handle);
+ }
+#ifdef _MAIN_KERNEL_
+ if (alloc->gpadl.gpadl_handle) {
+ DXG_TRACE("Teardown gpadl %d",
+ alloc->gpadl.gpadl_handle);
+ vmbus_teardown_gpadl(dxgglobal_get_vmbus(), &alloc->gpadl);
+ alloc->gpadl.gpadl_handle = 0;
+ }
+#else
+ if (alloc->gpadl) {
+ DXG_TRACE("Teardown gpadl %d",
+ alloc->gpadl);
+ vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl);
+ alloc->gpadl = 0;
+ }
+#endif
+ if (alloc->priv_drv_data)
+ vfree(alloc->priv_drv_data);
+ kfree(alloc);
+}
+
struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process,
struct dxgadapter *adapter)
{
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index a3d8d3c9f37d..fa053fb6ac9c 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -36,6 +36,8 @@ struct dxgprocess;
struct dxgadapter;
struct dxgdevice;
struct dxgcontext;
+struct dxgallocation;
+struct dxgresource;
/*
* Driver private data.
@@ -269,6 +271,8 @@ struct dxgadapter {
struct list_head adapter_list_entry;
/* The list of dxgprocess_adapter entries */
struct list_head adapter_process_list_head;
+ /* This lock protects shared resource and syncobject lists */
+ struct rw_semaphore shared_resource_list_lock;
struct pci_dev *pci_dev;
struct hv_device *hv_dev;
struct dxgvmbuschannel channel;
@@ -315,6 +319,10 @@ struct dxgdevice {
struct rw_semaphore device_lock;
struct rw_semaphore context_list_lock;
struct list_head context_list_head;
+ /* List of device allocations */
+ struct rw_semaphore alloc_list_lock;
+ struct list_head alloc_list_head;
+ struct list_head resource_list_head;
/* List of paging queues. Protected by process handle table lock. */
struct list_head pqueue_list_head;
struct d3dkmthandle handle;
@@ -331,9 +339,19 @@ void dxgdevice_release_lock_shared(struct dxgdevice *dev);
void dxgdevice_release(struct kref *refcount);
void dxgdevice_add_context(struct dxgdevice *dev, struct dxgcontext *ctx);
void dxgdevice_remove_context(struct dxgdevice *dev, struct dxgcontext *ctx);
+void dxgdevice_add_alloc(struct dxgdevice *dev, struct dxgallocation *a);
+void dxgdevice_remove_alloc(struct dxgdevice *dev, struct dxgallocation *a);
+void dxgdevice_remove_alloc_safe(struct dxgdevice *dev,
+ struct dxgallocation *a);
+void dxgdevice_add_resource(struct dxgdevice *dev, struct dxgresource *res);
+void dxgdevice_remove_resource(struct dxgdevice *dev, struct dxgresource *res);
bool dxgdevice_is_active(struct dxgdevice *dev);
void dxgdevice_acquire_context_list_lock(struct dxgdevice *dev);
void dxgdevice_release_context_list_lock(struct dxgdevice *dev);
+void dxgdevice_acquire_alloc_list_lock(struct dxgdevice *dev);
+void dxgdevice_release_alloc_list_lock(struct dxgdevice *dev);
+void dxgdevice_acquire_alloc_list_lock_shared(struct dxgdevice *dev);
+void dxgdevice_release_alloc_list_lock_shared(struct dxgdevice *dev);
/*
* The object represent the execution context of a device.
@@ -357,6 +375,83 @@ void dxgcontext_destroy_safe(struct dxgprocess *pr, struct dxgcontext *ctx);
void dxgcontext_release(struct kref *refcount);
bool dxgcontext_is_active(struct dxgcontext *ctx);
+struct dxgresource {
+ struct kref resource_kref;
+ enum dxgobjectstate object_state;
+ struct d3dkmthandle handle;
+ struct list_head alloc_list_head;
+ struct list_head resource_list_entry;
+ struct list_head shared_resource_list_entry;
+ struct dxgdevice *device;
+ struct dxgprocess *process;
+ /* Protects adding allocations to resource and resource destruction */
+ struct mutex resource_mutex;
+ u64 private_runtime_handle;
+ union {
+ struct {
+ u32 destroyed:1; /* Must be the first */
+ u32 handle_valid:1;
+ u32 reserved:30;
+ };
+ long flags;
+ };
+};
+
+struct dxgresource *dxgresource_create(struct dxgdevice *dev);
+void dxgresource_destroy(struct dxgresource *res);
+void dxgresource_free_handle(struct dxgresource *res);
+void dxgresource_release(struct kref *refcount);
+int dxgresource_add_alloc(struct dxgresource *res,
+ struct dxgallocation *a);
+void dxgresource_remove_alloc(struct dxgresource *res, struct dxgallocation *a);
+void dxgresource_remove_alloc_safe(struct dxgresource *res,
+ struct dxgallocation *a);
+bool dxgresource_is_active(struct dxgresource *res);
+
+struct privdata {
+ u32 data_size;
+ u8 data[1];
+};
+
+struct dxgallocation {
+ /* Entry in the device list or resource list (when resource exists) */
+ struct list_head alloc_list_entry;
+ /* Allocation owner */
+ union {
+ struct dxgdevice *device;
+ struct dxgresource *resource;
+ } owner;
+ struct dxgprocess *process;
+ /* Pointer to private driver data desc. Used for shared resources */
+ struct privdata *priv_drv_data;
+ struct d3dkmthandle alloc_handle;
+ /* Set to 1 when allocation belongs to resource. */
+ u32 resource_owner:1;
+ /* Set to 1 when the allocation is mapped as cached */
+ u32 cached:1;
+ u32 handle_valid:1;
+ /* GPADL address list for existing sysmem allocations */
+#ifdef _MAIN_KERNEL_
+ struct vmbus_gpadl gpadl;
+#else
+ u32 gpadl;
+#endif
+ /* Number of pages in the 'pages' array */
+ u32 num_pages;
+ /*
+ * CPU address from the existing sysmem allocation, or
+ * mapped to the CPU visible backing store in the IO space
+ */
+ void *cpu_address;
+ /* Describes pages for the existing sysmem allocation */
+ struct page **pages;
+};
+
+struct dxgallocation *dxgallocation_create(struct dxgprocess *process);
+void dxgallocation_stop(struct dxgallocation *a);
+void dxgallocation_destroy(struct dxgallocation *a);
+void dxgallocation_free_handle(struct dxgallocation *a);
+
long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2);
long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2);
@@ -409,9 +504,27 @@ dxgvmb_send_create_context(struct dxgadapter *adapter,
int dxgvmb_send_destroy_context(struct dxgadapter *adapter,
struct dxgprocess *process,
struct d3dkmthandle h);
+int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev,
+ struct d3dkmt_createallocation *args,
+ struct d3dkmt_createallocation *__user inargs,
+ struct dxgresource *res,
+ struct dxgallocation **allocs,
+ struct d3dddi_allocationinfo2 *alloc_info,
+ struct d3dkmt_createstandardallocation *stda);
+int dxgvmb_send_destroy_allocation(struct dxgprocess *pr, struct dxgdevice *dev,
+ struct d3dkmt_destroyallocation2 *args,
+ struct d3dkmthandle *alloc_handles);
int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryadapterinfo *args);
+int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device,
+ enum d3dkmdt_standardallocationtype t,
+ struct d3dkmdt_gdisurfacedata *data,
+ u32 physical_adapter_index,
+ u32 *alloc_priv_driver_size,
+ void *priv_alloc_data,
+ u32 *res_priv_data_size,
+ void *priv_res_data);
int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel,
void *command,
u32 cmd_size);
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index fbe1c58ecb46..053ce6f3e083 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -162,6 +162,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid,
init_rwsem(&adapter->core_lock);
INIT_LIST_HEAD(&adapter->adapter_process_list_head);
+ init_rwsem(&adapter->shared_resource_list_lock);
adapter->pci_dev = dev;
guid_to_luid(guid, &adapter->luid);
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index e66aac7c13cb..14b51a3c6afc 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -111,6 +111,41 @@ static int init_message(struct dxgvmbusmsg *msg, struct dxgadapter *adapter,
return 0;
}
+static int init_message_res(struct dxgvmbusmsgres *msg,
+ struct dxgadapter *adapter,
+ struct dxgprocess *process,
+ u32 size,
+ u32 result_size)
+{
+ struct dxgglobal *dxgglobal = dxggbl();
+ bool use_ext_header = dxgglobal->vmbus_ver >=
+ DXGK_VMBUS_INTERFACE_VERSION;
+
+ if (use_ext_header)
+ size += sizeof(struct dxgvmb_ext_header);
+ msg->size = size;
+ msg->res_size += (result_size + 7) & ~7;
+ size += msg->res_size;
+ msg->hdr = vzalloc(size);
+ if (msg->hdr == NULL) {
+ DXG_ERR("Failed to allocate VM bus message: %d", size);
+ return -ENOMEM;
+ }
+ if (use_ext_header) {
+ msg->msg = (char *)&msg->hdr[1];
+ msg->hdr->command_offset = sizeof(msg->hdr[0]);
+ msg->hdr->vgpu_luid = adapter->host_vgpu_luid;
+ } else {
+ msg->msg = (char *)msg->hdr;
+ }
+ msg->res = (char *)msg->hdr + msg->size;
+ if (dxgglobal->async_msg_enabled)
+ msg->channel = &dxgglobal->channel;
+ else
+ msg->channel = &adapter->channel;
+ return 0;
+}
+
static void free_message(struct dxgvmbusmsg *msg, struct dxgprocess *process)
{
if (msg->hdr && (char *)msg->hdr != msg->msg_on_stack)
@@ -852,6 +887,620 @@ int dxgvmb_send_destroy_context(struct dxgadapter *adapter,
return ret;
}
+static int
+copy_private_data(struct d3dkmt_createallocation *args,
+ struct dxgkvmb_command_createallocation *command,
+ struct d3dddi_allocationinfo2 *input_alloc_info,
+ struct d3dkmt_createstandardallocation *standard_alloc)
+{
+ struct dxgkvmb_command_createallocation_allocinfo *alloc_info;
+ struct d3dddi_allocationinfo2 *input_alloc;
+ int ret = 0;
+ int i;
+ u8 *private_data_dest = (u8 *) &command[1] +
+ (args->alloc_count *
+ sizeof(struct dxgkvmb_command_createallocation_allocinfo));
+
+ if (args->private_runtime_data_size) {
+ ret = copy_from_user(private_data_dest,
+ args->private_runtime_data,
+ args->private_runtime_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy runtime data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ private_data_dest += args->private_runtime_data_size;
+ }
+
+ if (args->flags.standard_allocation) {
+ DXG_TRACE("private data offset %d",
+ (u32) (private_data_dest - (u8 *) command));
+
+ args->priv_drv_data_size = sizeof(*args->standard_allocation);
+ memcpy(private_data_dest, standard_alloc,
+ sizeof(*standard_alloc));
+ private_data_dest += args->priv_drv_data_size;
+ } else if (args->priv_drv_data_size) {
+ ret = copy_from_user(private_data_dest,
+ args->priv_drv_data,
+ args->priv_drv_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy private data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ private_data_dest += args->priv_drv_data_size;
+ }
+
+ alloc_info = (void *)&command[1];
+ input_alloc = input_alloc_info;
+ if (input_alloc_info[0].sysmem)
+ command->flags.existing_sysmem = 1;
+ for (i = 0; i < args->alloc_count; i++) {
+ alloc_info->flags = input_alloc->flags.value;
+ alloc_info->vidpn_source_id = input_alloc->vidpn_source_id;
+ alloc_info->priv_drv_data_size =
+ input_alloc->priv_drv_data_size;
+ if (input_alloc->priv_drv_data_size) {
+ ret = copy_from_user(private_data_dest,
+ input_alloc->priv_drv_data,
+ input_alloc->priv_drv_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy alloc data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ private_data_dest += input_alloc->priv_drv_data_size;
+ }
+ alloc_info++;
+ input_alloc++;
+ }
+
+cleanup:
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+static
+int create_existing_sysmem(struct dxgdevice *device,
+ struct dxgkvmb_command_allocinfo_return *host_alloc,
+ struct dxgallocation *dxgalloc,
+ bool read_only,
+ const void *sysmem)
+{
+ int ret1 = 0;
+ void *kmem = NULL;
+ int ret = 0;
+ struct dxgkvmb_command_setexistingsysmemstore *set_store_command;
+ u64 alloc_size = host_alloc->allocation_size;
+ u32 npages = alloc_size >> PAGE_SHIFT;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, device->adapter, device->process,
+ sizeof(*set_store_command));
+ if (ret)
+ goto cleanup;
+ set_store_command = (void *)msg.msg;
+
+ /*
+ * Create a guest physical address list and set it as the allocation
+ * backing store in the host. This is done after creating the host
+ * allocation, because only now the allocation size is known.
+ */
+
+ DXG_TRACE("Alloc size: %lld", alloc_size);
+
+ dxgalloc->cpu_address = (void *)sysmem;
+ dxgalloc->pages = vzalloc(npages * sizeof(void *));
+ if (dxgalloc->pages == NULL) {
+ DXG_ERR("failed to allocate pages");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ ret1 = get_user_pages_fast((unsigned long)sysmem, npages, !read_only,
+ dxgalloc->pages);
+ if (ret1 != npages) {
+ DXG_ERR("get_user_pages_fast failed: %d", ret1);
+ if (ret1 > 0 && ret1 < npages)
+ release_pages(dxgalloc->pages, ret1);
+ vfree(dxgalloc->pages);
+ dxgalloc->pages = NULL;
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ kmem = vmap(dxgalloc->pages, npages, VM_MAP, PAGE_KERNEL);
+ if (kmem == NULL) {
+ DXG_ERR("vmap failed");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ ret1 = vmbus_establish_gpadl(dxgglobal_get_vmbus(), kmem,
+ alloc_size, &dxgalloc->gpadl);
+ if (ret1) {
+ DXG_ERR("establish_gpadl failed: %d", ret1);
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle);
+
+ command_vgpu_to_host_init2(&set_store_command->hdr,
+ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE,
+ device->process->host_handle);
+ set_store_command->device = device->handle;
+ set_store_command->allocation = host_alloc->allocation;
+#ifdef _MAIN_KERNEL_
+ set_store_command->gpadl = dxgalloc->gpadl.gpadl_handle;
+#else
+ set_store_command->gpadl = dxgalloc->gpadl;
+#endif
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+ if (ret < 0)
+ DXG_ERR("failed to set existing store: %x", ret);
+
+cleanup:
+ if (kmem)
+ vunmap(kmem);
+ free_message(&msg, device->process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+static int
+process_allocation_handles(struct dxgprocess *process,
+ struct dxgdevice *device,
+ struct d3dkmt_createallocation *args,
+ struct dxgkvmb_command_createallocation_return *res,
+ struct dxgallocation **dxgalloc,
+ struct dxgresource *resource)
+{
+ int ret = 0;
+ int i;
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ if (args->flags.create_resource) {
+ ret = hmgrtable_assign_handle(&process->handle_table, resource,
+ HMGRENTRY_TYPE_DXGRESOURCE,
+ res->resource);
+ if (ret < 0) {
+ DXG_ERR("failed to assign resource handle %x",
+ res->resource.v);
+ } else {
+ resource->handle = res->resource;
+ resource->handle_valid = 1;
+ }
+ }
+ for (i = 0; i < args->alloc_count; i++) {
+ struct dxgkvmb_command_allocinfo_return *host_alloc;
+
+ host_alloc = &res->allocation_info[i];
+ ret = hmgrtable_assign_handle(&process->handle_table,
+ dxgalloc[i],
+ HMGRENTRY_TYPE_DXGALLOCATION,
+ host_alloc->allocation);
+ if (ret < 0) {
+ DXG_ERR("failed assign alloc handle %x %d %d",
+ host_alloc->allocation.v,
+ args->alloc_count, i);
+ break;
+ }
+ dxgalloc[i]->alloc_handle = host_alloc->allocation;
+ dxgalloc[i]->handle_valid = 1;
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+static int
+create_local_allocations(struct dxgprocess *process,
+ struct dxgdevice *device,
+ struct d3dkmt_createallocation *args,
+ struct d3dkmt_createallocation *__user input_args,
+ struct d3dddi_allocationinfo2 *alloc_info,
+ struct dxgkvmb_command_createallocation_return *result,
+ struct dxgresource *resource,
+ struct dxgallocation **dxgalloc,
+ u32 destroy_buffer_size)
+{
+ int i;
+ int alloc_count = args->alloc_count;
+ u8 *alloc_private_data = NULL;
+ int ret = 0;
+ int ret1;
+ struct dxgkvmb_command_destroyallocation *destroy_buf;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, device->adapter, process,
+ destroy_buffer_size);
+ if (ret)
+ goto cleanup;
+ destroy_buf = (void *)msg.msg;
+
+ /* Prepare the command to destroy allocation in case of failure */
+ command_vgpu_to_host_init2(&destroy_buf->hdr,
+ DXGK_VMBCOMMAND_DESTROYALLOCATION,
+ process->host_handle);
+ destroy_buf->device = args->device;
+ destroy_buf->resource = args->resource;
+ destroy_buf->alloc_count = alloc_count;
+ destroy_buf->flags.assume_not_in_use = 1;
+ for (i = 0; i < alloc_count; i++) {
+ DXG_TRACE("host allocation: %d %x",
+ i, result->allocation_info[i].allocation.v);
+ destroy_buf->allocations[i] =
+ result->allocation_info[i].allocation;
+ }
+
+ if (args->flags.create_resource) {
+ DXG_TRACE("new resource: %x", result->resource.v);
+ ret = copy_to_user(&input_args->resource, &result->resource,
+ sizeof(struct d3dkmthandle));
+ if (ret) {
+ DXG_ERR("failed to copy resource handle");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+
+ alloc_private_data = (u8 *) result +
+ sizeof(struct dxgkvmb_command_createallocation_return) +
+ sizeof(struct dxgkvmb_command_allocinfo_return) * (alloc_count - 1);
+
+ for (i = 0; i < alloc_count; i++) {
+ struct dxgkvmb_command_allocinfo_return *host_alloc;
+ struct d3dddi_allocationinfo2 *user_alloc;
+
+ host_alloc = &result->allocation_info[i];
+ user_alloc = &alloc_info[i];
+ dxgalloc[i]->num_pages =
+ host_alloc->allocation_size >> PAGE_SHIFT;
+ if (user_alloc->sysmem) {
+ ret = create_existing_sysmem(device, host_alloc,
+ dxgalloc[i],
+ args->flags.read_only != 0,
+ user_alloc->sysmem);
+ if (ret < 0)
+ goto cleanup;
+ }
+ dxgalloc[i]->cached = host_alloc->allocation_flags.cached;
+ if (host_alloc->priv_drv_data_size) {
+ ret = copy_to_user(user_alloc->priv_drv_data,
+ alloc_private_data,
+ host_alloc->priv_drv_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy private data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ alloc_private_data += host_alloc->priv_drv_data_size;
+ }
+ ret = copy_to_user(&args->allocation_info[i].allocation,
+ &host_alloc->allocation,
+ sizeof(struct d3dkmthandle));
+ if (ret) {
+ DXG_ERR("failed to copy alloc handle");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+
+ ret = process_allocation_handles(process, device, args, result,
+ dxgalloc, resource);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = copy_to_user(&input_args->global_share, &args->global_share,
+ sizeof(struct d3dkmthandle));
+ if (ret) {
+ DXG_ERR("failed to copy global share");
+ ret = -EINVAL;
+ }
+
+cleanup:
+
+ if (ret < 0) {
+ /* Free local handles before freeing the handles in the host */
+ dxgdevice_acquire_alloc_list_lock(device);
+ if (dxgalloc)
+ for (i = 0; i < alloc_count; i++)
+ if (dxgalloc[i])
+ dxgallocation_free_handle(dxgalloc[i]);
+ if (resource && args->flags.create_resource)
+ dxgresource_free_handle(resource);
+ dxgdevice_release_alloc_list_lock(device);
+
+ /* Destroy allocations in the host to unmap gpadls */
+ ret1 = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr,
+ msg.size);
+ if (ret1 < 0)
+ DXG_ERR("failed to destroy allocations: %x",
+ ret1);
+
+ dxgdevice_acquire_alloc_list_lock(device);
+ if (dxgalloc) {
+ for (i = 0; i < alloc_count; i++) {
+ if (dxgalloc[i]) {
+ dxgalloc[i]->alloc_handle.v = 0;
+ dxgallocation_destroy(dxgalloc[i]);
+ dxgalloc[i] = NULL;
+ }
+ }
+ }
+ if (resource && args->flags.create_resource) {
+ /*
+ * Prevent the resource memory from freeing.
+ * It will be freed in the top level function.
+ */
+ kref_get(&resource->resource_kref);
+ dxgresource_destroy(resource);
+ }
+ dxgdevice_release_alloc_list_lock(device);
+ }
+
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_create_allocation(struct dxgprocess *process,
+ struct dxgdevice *device,
+ struct d3dkmt_createallocation *args,
+ struct d3dkmt_createallocation *__user
+ input_args,
+ struct dxgresource *resource,
+ struct dxgallocation **dxgalloc,
+ struct d3dddi_allocationinfo2 *alloc_info,
+ struct d3dkmt_createstandardallocation
+ *standard_alloc)
+{
+ struct dxgkvmb_command_createallocation *command = NULL;
+ struct dxgkvmb_command_createallocation_return *result = NULL;
+ int ret = -EINVAL;
+ int i;
+ u32 result_size = 0;
+ u32 cmd_size = 0;
+ u32 destroy_buffer_size = 0;
+ u32 priv_drv_data_size;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ if (args->private_runtime_data_size >= DXG_MAX_VM_BUS_PACKET_SIZE ||
+ args->priv_drv_data_size >= DXG_MAX_VM_BUS_PACKET_SIZE) {
+ ret = -EOVERFLOW;
+ goto cleanup;
+ }
+
+ /*
+ * Preallocate the buffer, which will be used for destruction in case
+ * of a failure
+ */
+ destroy_buffer_size = sizeof(struct dxgkvmb_command_destroyallocation) +
+ args->alloc_count * sizeof(struct d3dkmthandle);
+
+ /* Compute the total private driver size */
+
+ priv_drv_data_size = 0;
+
+ for (i = 0; i < args->alloc_count; i++) {
+ if (alloc_info[i].priv_drv_data_size >=
+ DXG_MAX_VM_BUS_PACKET_SIZE) {
+ ret = -EOVERFLOW;
+ goto cleanup;
+ } else {
+ priv_drv_data_size += alloc_info[i].priv_drv_data_size;
+ }
+ if (priv_drv_data_size >= DXG_MAX_VM_BUS_PACKET_SIZE) {
+ ret = -EOVERFLOW;
+ goto cleanup;
+ }
+ }
+
+ /*
+ * Private driver data for the result includes only per allocation
+ * private data
+ */
+ result_size = sizeof(struct dxgkvmb_command_createallocation_return) +
+ (args->alloc_count - 1) *
+ sizeof(struct dxgkvmb_command_allocinfo_return) +
+ priv_drv_data_size;
+ result = vzalloc(result_size);
+ if (result == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ /* Private drv data for the command includes the global private data */
+ priv_drv_data_size += args->priv_drv_data_size;
+
+ cmd_size = sizeof(struct dxgkvmb_command_createallocation) +
+ args->alloc_count *
+ sizeof(struct dxgkvmb_command_createallocation_allocinfo) +
+ args->private_runtime_data_size + priv_drv_data_size;
+ if (cmd_size > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ ret = -EOVERFLOW;
+ goto cleanup;
+ }
+
+ DXG_TRACE("command size, driver_data_size %d %d %ld %ld",
+ cmd_size, priv_drv_data_size,
+ sizeof(struct dxgkvmb_command_createallocation),
+ sizeof(struct dxgkvmb_command_createallocation_allocinfo));
+
+ ret = init_message(&msg, device->adapter, process,
+ cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_CREATEALLOCATION,
+ process->host_handle);
+ command->device = args->device;
+ command->flags = args->flags;
+ command->resource = args->resource;
+ command->private_runtime_resource_handle =
+ args->private_runtime_resource_handle;
+ command->alloc_count = args->alloc_count;
+ command->private_runtime_data_size = args->private_runtime_data_size;
+ command->priv_drv_data_size = args->priv_drv_data_size;
+
+ ret = copy_private_data(args, command, alloc_info, standard_alloc);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ result, result_size);
+ if (ret < 0) {
+ DXG_ERR("send_create_allocation failed %x", ret);
+ goto cleanup;
+ }
+
+ ret = create_local_allocations(process, device, args, input_args,
+ alloc_info, result, resource, dxgalloc,
+ destroy_buffer_size);
+cleanup:
+
+ if (result)
+ vfree(result);
+ free_message(&msg, process);
+
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_destroy_allocation(struct dxgprocess *process,
+ struct dxgdevice *device,
+ struct d3dkmt_destroyallocation2 *args,
+ struct d3dkmthandle *alloc_handles)
+{
+ struct dxgkvmb_command_destroyallocation *destroy_buffer;
+ u32 destroy_buffer_size;
+ int ret;
+ int allocations_size = args->alloc_count * sizeof(struct d3dkmthandle);
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ destroy_buffer_size = sizeof(struct dxgkvmb_command_destroyallocation) +
+ allocations_size;
+
+ ret = init_message(&msg, device->adapter, process,
+ destroy_buffer_size);
+ if (ret)
+ goto cleanup;
+ destroy_buffer = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&destroy_buffer->hdr,
+ DXGK_VMBCOMMAND_DESTROYALLOCATION,
+ process->host_handle);
+ destroy_buffer->device = args->device;
+ destroy_buffer->resource = args->resource;
+ destroy_buffer->alloc_count = args->alloc_count;
+ destroy_buffer->flags = args->flags;
+ if (allocations_size)
+ memcpy(destroy_buffer->allocations, alloc_handles,
+ allocations_size);
+
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+
+cleanup:
+
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device,
+ enum d3dkmdt_standardallocationtype alloctype,
+ struct d3dkmdt_gdisurfacedata *alloc_data,
+ u32 physical_adapter_index,
+ u32 *alloc_priv_driver_size,
+ void *priv_alloc_data,
+ u32 *res_priv_data_size,
+ void *priv_res_data)
+{
+ struct dxgkvmb_command_getstandardallocprivdata *command;
+ struct dxgkvmb_command_getstandardallocprivdata_return *result = NULL;
+ u32 result_size = sizeof(*result);
+ int ret;
+ struct dxgvmbusmsgres msg = {.hdr = NULL};
+
+ if (priv_alloc_data)
+ result_size += *alloc_priv_driver_size;
+ if (priv_res_data)
+ result_size += *res_priv_data_size;
+ ret = init_message_res(&msg, device->adapter, device->process,
+ sizeof(*command), result_size);
+ if (ret)
+ goto cleanup;
+ command = msg.msg;
+ result = msg.res;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_DDIGETSTANDARDALLOCATIONDRIVERDATA,
+ device->process->host_handle);
+
+ command->alloc_type = alloctype;
+ command->priv_driver_data_size = *alloc_priv_driver_size;
+ command->physical_adapter_index = physical_adapter_index;
+ command->priv_driver_resource_size = *res_priv_data_size;
+ switch (alloctype) {
+ case _D3DKMDT_STANDARDALLOCATION_GDISURFACE:
+ command->gdi_surface = *alloc_data;
+ break;
+ case _D3DKMDT_STANDARDALLOCATION_SHAREDPRIMARYSURFACE:
+ case _D3DKMDT_STANDARDALLOCATION_SHADOWSURFACE:
+ case _D3DKMDT_STANDARDALLOCATION_STAGINGSURFACE:
+ default:
+		DXG_ERR("Invalid standard alloc type");
+		ret = -EINVAL;
+		goto cleanup;
+ }
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ result, msg.res_size);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = ntstatus2int(result->status);
+ if (ret < 0)
+ goto cleanup;
+
+	if (*alloc_priv_driver_size &&
+	    result->priv_driver_data_size != *alloc_priv_driver_size) {
+		DXG_ERR("Priv data size mismatch");
+		ret = -EINVAL;
+		goto cleanup;
+	}
+	if (*res_priv_data_size &&
+	    result->priv_driver_resource_size != *res_priv_data_size) {
+		DXG_ERR("Resource priv data size mismatch");
+		ret = -EINVAL;
+		goto cleanup;
+	}
+ *alloc_priv_driver_size = result->priv_driver_data_size;
+ *res_priv_data_size = result->priv_driver_resource_size;
+ if (priv_alloc_data) {
+ memcpy(priv_alloc_data, &result[1],
+ result->priv_driver_data_size);
+ }
+ if (priv_res_data) {
+ memcpy(priv_res_data,
+ (char *)(&result[1]) + result->priv_driver_data_size,
+ result->priv_driver_resource_size);
+ }
+
+cleanup:
+
+ free_message((struct dxgvmbusmsg *)&msg, device->process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryadapterinfo *args)
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index ebcb7b0f62c1..4b7466d1b9f2 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -173,6 +173,14 @@ struct dxgkvmb_command_setiospaceregion {
u32 shared_page_gpadl;
};
+/* Returns ntstatus */
+struct dxgkvmb_command_setexistingsysmemstore {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ struct d3dkmthandle allocation;
+ u32 gpadl;
+};
+
struct dxgkvmb_command_createprocess {
struct dxgkvmb_command_vm_to_host hdr;
void *process;
@@ -269,6 +277,121 @@ struct dxgkvmb_command_flushdevice {
enum dxgdevice_flushschedulerreason reason;
};
+struct dxgkvmb_command_createallocation_allocinfo {
+ u32 flags;
+ u32 priv_drv_data_size;
+ u32 vidpn_source_id;
+};
+
+struct dxgkvmb_command_createallocation {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ struct d3dkmthandle resource;
+ u32 private_runtime_data_size;
+ u32 priv_drv_data_size;
+ u32 alloc_count;
+ struct d3dkmt_createallocationflags flags;
+ u64 private_runtime_resource_handle;
+ bool make_resident;
+/* dxgkvmb_command_createallocation_allocinfo alloc_info[alloc_count]; */
+/* u8 private_runtime_data[private_runtime_data_size] */
+/* u8 priv_drv_data[] for each alloc_info */
+};
+
+struct dxgkvmb_command_getstandardallocprivdata {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ enum d3dkmdt_standardallocationtype alloc_type;
+ u32 priv_driver_data_size;
+ u32 priv_driver_resource_size;
+ u32 physical_adapter_index;
+ union {
+ struct d3dkmdt_sharedprimarysurfacedata primary;
+ struct d3dkmdt_shadowsurfacedata shadow;
+ struct d3dkmdt_stagingsurfacedata staging;
+ struct d3dkmdt_gdisurfacedata gdi_surface;
+ };
+};
+
+struct dxgkvmb_command_getstandardallocprivdata_return {
+ struct ntstatus status;
+ u32 priv_driver_data_size;
+ u32 priv_driver_resource_size;
+ union {
+ struct d3dkmdt_sharedprimarysurfacedata primary;
+ struct d3dkmdt_shadowsurfacedata shadow;
+ struct d3dkmdt_stagingsurfacedata staging;
+ struct d3dkmdt_gdisurfacedata gdi_surface;
+ };
+/* char alloc_priv_data[priv_driver_data_size]; */
+/* char resource_priv_data[priv_driver_resource_size]; */
+};
+
+struct dxgkarg_describeallocation {
+ u64 allocation;
+ u32 width;
+ u32 height;
+ u32 format;
+ u32 multisample_method;
+ struct d3dddi_rational refresh_rate;
+ u32 private_driver_attribute;
+ u32 flags;
+ u32 rotation;
+};
+
+struct dxgkvmb_allocflags {
+ union {
+ u32 flags;
+ struct {
+ u32 primary:1;
+ u32 cdd_primary:1;
+ u32 dod_primary:1;
+ u32 overlay:1;
+ u32 reserved6:1;
+ u32 capture:1;
+ u32 reserved0:4;
+ u32 reserved1:1;
+ u32 existing_sysmem:1;
+ u32 stereo:1;
+ u32 direct_flip:1;
+ u32 hardware_protected:1;
+ u32 reserved2:1;
+ u32 reserved3:1;
+ u32 reserved4:1;
+ u32 protected:1;
+ u32 cached:1;
+ u32 independent_primary:1;
+ u32 reserved:11;
+ };
+ };
+};
+
+struct dxgkvmb_command_allocinfo_return {
+ struct d3dkmthandle allocation;
+ u32 priv_drv_data_size;
+ struct dxgkvmb_allocflags allocation_flags;
+ u64 allocation_size;
+ struct dxgkarg_describeallocation driver_info;
+};
+
+struct dxgkvmb_command_createallocation_return {
+ struct d3dkmt_createallocationflags flags;
+ struct d3dkmthandle resource;
+ struct d3dkmthandle global_share;
+ u32 vgpu_flags;
+ struct dxgkvmb_command_allocinfo_return allocation_info[1];
+ /* Private driver data for allocations */
+};
+
+/* The command returns ntstatus */
+struct dxgkvmb_command_destroyallocation {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ struct d3dkmthandle resource;
+ u32 alloc_count;
+ struct d3dddicb_destroyallocation2flags flags;
+ struct d3dkmthandle allocations[1];
+};
+
struct dxgkvmb_command_createcontextvirtual {
struct dxgkvmb_command_vgpu_to_host hdr;
struct d3dkmthandle context;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 5d10ebd2ce6a..0eaa577d7ed4 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -714,6 +714,633 @@ dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+get_standard_alloc_priv_data(struct dxgdevice *device,
+ struct d3dkmt_createstandardallocation *alloc_info,
+ u32 *standard_alloc_priv_data_size,
+ void **standard_alloc_priv_data,
+ u32 *standard_res_priv_data_size,
+ void **standard_res_priv_data)
+{
+ int ret;
+ struct d3dkmdt_gdisurfacedata gdi_data = { };
+ u32 priv_data_size = 0;
+ u32 res_priv_data_size = 0;
+ void *priv_data = NULL;
+ void *res_priv_data = NULL;
+
+ gdi_data.type = _D3DKMDT_GDISURFACE_TEXTURE_CROSSADAPTER;
+ gdi_data.width = alloc_info->existing_heap_data.size;
+ gdi_data.height = 1;
+ gdi_data.format = _D3DDDIFMT_UNKNOWN;
+
+ *standard_alloc_priv_data_size = 0;
+ ret = dxgvmb_send_get_stdalloc_data(device,
+ _D3DKMDT_STANDARDALLOCATION_GDISURFACE,
+ &gdi_data, 0,
+ &priv_data_size, NULL,
+ &res_priv_data_size,
+ NULL);
+ if (ret < 0)
+ goto cleanup;
+ DXG_TRACE("Priv data size: %d", priv_data_size);
+ if (priv_data_size == 0) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ priv_data = vzalloc(priv_data_size);
+ if (priv_data == NULL) {
+ ret = -ENOMEM;
+ DXG_ERR("failed to allocate memory for priv data: %d",
+ priv_data_size);
+ goto cleanup;
+ }
+ if (res_priv_data_size) {
+ res_priv_data = vzalloc(res_priv_data_size);
+ if (res_priv_data == NULL) {
+ ret = -ENOMEM;
+ dev_err(DXGDEV,
+ "failed to alloc memory for res priv data: %d",
+ res_priv_data_size);
+ goto cleanup;
+ }
+ }
+ ret = dxgvmb_send_get_stdalloc_data(device,
+ _D3DKMDT_STANDARDALLOCATION_GDISURFACE,
+ &gdi_data, 0,
+ &priv_data_size,
+ priv_data,
+ &res_priv_data_size,
+ res_priv_data);
+ if (ret < 0)
+ goto cleanup;
+ *standard_alloc_priv_data_size = priv_data_size;
+ *standard_alloc_priv_data = priv_data;
+ *standard_res_priv_data_size = res_priv_data_size;
+ *standard_res_priv_data = res_priv_data;
+ priv_data = NULL;
+ res_priv_data = NULL;
+
+cleanup:
+ if (priv_data)
+ vfree(priv_data);
+ if (res_priv_data)
+ vfree(res_priv_data);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+static int
+dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_createallocation args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+ struct d3dddi_allocationinfo2 *alloc_info = NULL;
+ struct d3dkmt_createstandardallocation standard_alloc;
+ u32 alloc_info_size = 0;
+ struct dxgresource *resource = NULL;
+ struct dxgallocation **dxgalloc = NULL;
+ bool resource_mutex_acquired = false;
+ u32 standard_alloc_priv_data_size = 0;
+ void *standard_alloc_priv_data = NULL;
+ u32 res_priv_data_size = 0;
+ void *res_priv_data = NULL;
+ int i;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.alloc_count > D3DKMT_CREATEALLOCATION_MAX ||
+ args.alloc_count == 0) {
+ DXG_ERR("invalid number of allocations to create");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ alloc_info_size = sizeof(struct d3dddi_allocationinfo2) *
+ args.alloc_count;
+ alloc_info = vzalloc(alloc_info_size);
+ if (alloc_info == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ ret = copy_from_user(alloc_info, args.allocation_info,
+ alloc_info_size);
+ if (ret) {
+ DXG_ERR("failed to copy alloc info");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ for (i = 0; i < args.alloc_count; i++) {
+ if (args.flags.standard_allocation) {
+ if (alloc_info[i].priv_drv_data_size != 0) {
+ DXG_ERR("private data size not zero");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+ if (alloc_info[i].priv_drv_data_size >=
+ DXG_MAX_VM_BUS_PACKET_SIZE) {
+			DXG_ERR("private data size too big: %d %d %zu",
+				i, alloc_info[i].priv_drv_data_size,
+				sizeof(alloc_info[0]));
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+
+ if (args.flags.existing_section || args.flags.create_protected) {
+ DXG_ERR("invalid allocation flags");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.flags.standard_allocation) {
+ if (args.standard_allocation == NULL) {
+ DXG_ERR("invalid standard allocation");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ ret = copy_from_user(&standard_alloc,
+ args.standard_allocation,
+ sizeof(standard_alloc));
+ if (ret) {
+ DXG_ERR("failed to copy std alloc data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ if (standard_alloc.type ==
+ _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP) {
+ if (alloc_info[0].sysmem == NULL ||
+ (unsigned long)alloc_info[0].sysmem &
+ (PAGE_SIZE - 1)) {
+ DXG_ERR("invalid sysmem pointer");
+ ret = STATUS_INVALID_PARAMETER;
+ goto cleanup;
+ }
+ if (!args.flags.existing_sysmem) {
+ DXG_ERR("expect existing_sysmem flag");
+ ret = STATUS_INVALID_PARAMETER;
+ goto cleanup;
+ }
+ } else if (standard_alloc.type ==
+ _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER) {
+ if (args.flags.existing_sysmem) {
+ DXG_ERR("existing_sysmem flag invalid");
+ ret = STATUS_INVALID_PARAMETER;
+ goto cleanup;
+
+ }
+ if (alloc_info[0].sysmem != NULL) {
+ DXG_ERR("sysmem should be NULL");
+ ret = STATUS_INVALID_PARAMETER;
+ goto cleanup;
+ }
+ } else {
+ DXG_ERR("invalid standard allocation type");
+ ret = STATUS_INVALID_PARAMETER;
+ goto cleanup;
+ }
+
+ if (args.priv_drv_data_size != 0 ||
+ args.alloc_count != 1 ||
+ standard_alloc.existing_heap_data.size == 0 ||
+ standard_alloc.existing_heap_data.size & (PAGE_SIZE - 1)) {
+ DXG_ERR("invalid standard allocation");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ args.priv_drv_data_size =
+ sizeof(struct d3dkmt_createstandardallocation);
+ }
+
+ if (args.flags.create_shared && !args.flags.create_resource) {
+ DXG_ERR("create_resource must be set for create_shared");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+	/*
+	 * The call acquires a reference on the device. It is safe to access
+	 * the adapter, because the device holds a reference on it.
+	 */
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0) {
+ kref_put(&device->device_kref, dxgdevice_release);
+ device = NULL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ if (args.flags.standard_allocation) {
+ ret = get_standard_alloc_priv_data(device,
+ &standard_alloc,
+ &standard_alloc_priv_data_size,
+ &standard_alloc_priv_data,
+ &res_priv_data_size,
+ &res_priv_data);
+ if (ret < 0)
+ goto cleanup;
+ DXG_TRACE("Alloc private data: %d",
+ standard_alloc_priv_data_size);
+ }
+
+ if (args.flags.create_resource) {
+ resource = dxgresource_create(device);
+ if (resource == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ resource->private_runtime_handle =
+ args.private_runtime_resource_handle;
+ } else {
+ if (args.resource.v) {
+ /* Adding new allocations to the given resource */
+
+ dxgprocess_ht_lock_shared_down(process);
+			resource = hmgrtable_get_object_by_type(
+						&process->handle_table,
+						HMGRENTRY_TYPE_DXGRESOURCE,
+						args.resource);
+			if (resource)
+				kref_get(&resource->resource_kref);
+			dxgprocess_ht_lock_shared_up(process);
+
+ if (resource == NULL || resource->device != device) {
+ DXG_ERR("invalid resource handle %x",
+ args.resource.v);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ /* Synchronize with resource destruction */
+ mutex_lock(&resource->resource_mutex);
+ if (!dxgresource_is_active(resource)) {
+ mutex_unlock(&resource->resource_mutex);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ resource_mutex_acquired = true;
+ }
+ }
+
+ dxgalloc = vzalloc(sizeof(struct dxgallocation *) * args.alloc_count);
+ if (dxgalloc == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ for (i = 0; i < args.alloc_count; i++) {
+ struct dxgallocation *alloc;
+ u32 priv_data_size;
+
+ if (args.flags.standard_allocation)
+ priv_data_size = standard_alloc_priv_data_size;
+ else
+ priv_data_size = alloc_info[i].priv_drv_data_size;
+
+ if (alloc_info[i].sysmem && !args.flags.standard_allocation) {
+ if ((unsigned long)
+ alloc_info[i].sysmem & (PAGE_SIZE - 1)) {
+ DXG_ERR("invalid sysmem alloc %d, %p",
+ i, alloc_info[i].sysmem);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+ if ((alloc_info[0].sysmem == NULL) !=
+ (alloc_info[i].sysmem == NULL)) {
+ DXG_ERR("All allocs must have sysmem pointer");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ dxgalloc[i] = dxgallocation_create(process);
+ if (dxgalloc[i] == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ alloc = dxgalloc[i];
+
+ if (resource) {
+ ret = dxgresource_add_alloc(resource, alloc);
+ if (ret < 0)
+ goto cleanup;
+ } else {
+ dxgdevice_add_alloc(device, alloc);
+ }
+ if (args.flags.create_shared) {
+ /* Remember alloc private data to use it during open */
+ alloc->priv_drv_data = vzalloc(priv_data_size +
+ offsetof(struct privdata, data));
+ if (alloc->priv_drv_data == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ if (args.flags.standard_allocation) {
+ memcpy(alloc->priv_drv_data->data,
+ standard_alloc_priv_data,
+ priv_data_size);
+ } else {
+ ret = copy_from_user(
+ alloc->priv_drv_data->data,
+ alloc_info[i].priv_drv_data,
+ priv_data_size);
+ if (ret) {
+ dev_err(DXGDEV,
+ "failed to copy priv data");
+ ret = -EFAULT;
+ goto cleanup;
+ }
+ }
+ alloc->priv_drv_data->data_size = priv_data_size;
+ }
+ }
+
+ ret = dxgvmb_send_create_allocation(process, device, &args, inargs,
+ resource, dxgalloc, alloc_info,
+ &standard_alloc);
+cleanup:
+
+ if (resource_mutex_acquired) {
+ mutex_unlock(&resource->resource_mutex);
+ kref_put(&resource->resource_kref, dxgresource_release);
+ }
+ if (ret < 0) {
+ if (dxgalloc) {
+ for (i = 0; i < args.alloc_count; i++) {
+ if (dxgalloc[i])
+ dxgallocation_destroy(dxgalloc[i]);
+ }
+ }
+ if (resource && args.flags.create_resource) {
+ dxgresource_destroy(resource);
+ }
+ }
+ if (dxgalloc)
+ vfree(dxgalloc);
+ if (standard_alloc_priv_data)
+ vfree(standard_alloc_priv_data);
+ if (res_priv_data)
+ vfree(res_priv_data);
+ if (alloc_info)
+ vfree(alloc_info);
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+
+ if (device) {
+ dxgdevice_release_lock_shared(device);
+ kref_put(&device->device_kref, dxgdevice_release);
+ }
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int validate_alloc(struct dxgallocation *alloc0,
+ struct dxgallocation *alloc,
+ struct dxgdevice *device,
+ struct d3dkmthandle alloc_handle)
+{
+ u32 fail_reason;
+
+ if (alloc == NULL) {
+ fail_reason = 1;
+ goto cleanup;
+ }
+ if (alloc->resource_owner != alloc0->resource_owner) {
+ fail_reason = 2;
+ goto cleanup;
+ }
+ if (alloc->resource_owner) {
+ if (alloc->owner.resource != alloc0->owner.resource) {
+ fail_reason = 3;
+ goto cleanup;
+ }
+ if (alloc->owner.resource->device != device) {
+ fail_reason = 4;
+ goto cleanup;
+ }
+ } else {
+ if (alloc->owner.device != device) {
+ fail_reason = 6;
+ goto cleanup;
+ }
+ }
+ return 0;
+cleanup:
+ DXG_ERR("Alloc validation failed: reason: %d %x",
+ fail_reason, alloc_handle.v);
+ return -EINVAL;
+}
+
+static int
+dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_destroyallocation2 args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ int ret;
+ struct d3dkmthandle *alloc_handles = NULL;
+ struct dxgallocation **allocs = NULL;
+ struct dxgresource *resource = NULL;
+ int i;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.alloc_count > D3DKMT_CREATEALLOCATION_MAX ||
+ ((args.alloc_count == 0) == (args.resource.v == 0))) {
+ DXG_ERR("invalid number of allocations");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.alloc_count) {
+ u32 handle_size = sizeof(struct d3dkmthandle) *
+ args.alloc_count;
+
+ alloc_handles = vzalloc(handle_size);
+ if (alloc_handles == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ allocs = vzalloc(sizeof(struct dxgallocation *) *
+ args.alloc_count);
+ if (allocs == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ ret = copy_from_user(alloc_handles, args.allocations,
+ handle_size);
+ if (ret) {
+ DXG_ERR("failed to copy alloc handles");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+
+	/*
+	 * The call acquires a reference on the device. It is safe to access
+	 * the adapter, because the device holds a reference on it.
+	 */
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+	/* Acquire the device lock to synchronize with the device destruction */
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0) {
+ kref_put(&device->device_kref, dxgdevice_release);
+ device = NULL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+	/*
+	 * Destroy the local allocation handles first. If the host handle
+	 * were destroyed first, another object could be assigned to the same
+	 * slot in the process handle table and the handle lookup would fail.
+	 */
+ if (args.alloc_count) {
+ dxgprocess_ht_lock_exclusive_down(process);
+ for (i = 0; i < args.alloc_count; i++) {
+ allocs[i] =
+ hmgrtable_get_object_by_type(&process->handle_table,
+ HMGRENTRY_TYPE_DXGALLOCATION,
+ alloc_handles[i]);
+ ret =
+ validate_alloc(allocs[0], allocs[i], device,
+ alloc_handles[i]);
+ if (ret < 0) {
+ dxgprocess_ht_lock_exclusive_up(process);
+ goto cleanup;
+ }
+ }
+ dxgprocess_ht_lock_exclusive_up(process);
+ for (i = 0; i < args.alloc_count; i++)
+ dxgallocation_free_handle(allocs[i]);
+ } else {
+ struct dxgallocation *alloc;
+
+ dxgprocess_ht_lock_exclusive_down(process);
+ resource = hmgrtable_get_object_by_type(&process->handle_table,
+ HMGRENTRY_TYPE_DXGRESOURCE,
+ args.resource);
+ if (resource == NULL) {
+ DXG_ERR("Invalid resource handle: %x",
+ args.resource.v);
+ ret = -EINVAL;
+ } else if (resource->device != device) {
+ DXG_ERR("Resource belongs to wrong device: %x",
+ args.resource.v);
+ ret = -EINVAL;
+ } else {
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_DXGRESOURCE,
+ args.resource);
+ resource->object_state = DXGOBJECTSTATE_DESTROYED;
+ resource->handle.v = 0;
+ resource->handle_valid = 0;
+ }
+ dxgprocess_ht_lock_exclusive_up(process);
+
+ if (ret < 0)
+ goto cleanup;
+
+ dxgdevice_acquire_alloc_list_lock_shared(device);
+ list_for_each_entry(alloc, &resource->alloc_list_head,
+ alloc_list_entry) {
+ dxgallocation_free_handle(alloc);
+ }
+ dxgdevice_release_alloc_list_lock_shared(device);
+ }
+
+ if (args.alloc_count && allocs[0]->resource_owner)
+ resource = allocs[0]->owner.resource;
+
+ if (resource) {
+ kref_get(&resource->resource_kref);
+ mutex_lock(&resource->resource_mutex);
+ }
+
+ ret = dxgvmb_send_destroy_allocation(process, device, &args,
+ alloc_handles);
+
+ /*
+	 * Destroy the allocations after the host has destroyed them.
+ * The allocation gpadl teardown will wait until the host unmaps its
+ * gpadl.
+ */
+ dxgdevice_acquire_alloc_list_lock(device);
+ if (args.alloc_count) {
+ for (i = 0; i < args.alloc_count; i++) {
+ if (allocs[i]) {
+ allocs[i]->alloc_handle.v = 0;
+ dxgallocation_destroy(allocs[i]);
+ }
+ }
+ } else {
+ dxgresource_destroy(resource);
+ }
+ dxgdevice_release_alloc_list_lock(device);
+
+ if (resource) {
+ mutex_unlock(&resource->resource_mutex);
+ kref_put(&resource->resource_kref, dxgresource_release);
+ }
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+
+ if (device) {
+ dxgdevice_release_lock_shared(device);
+ kref_put(&device->device_kref, dxgdevice_release);
+ }
+
+ if (alloc_handles)
+ vfree(alloc_handles);
+
+ if (allocs)
+ vfree(allocs);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static struct ioctl_desc ioctls[] = {
/* 0x00 */ {},
/* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID},
@@ -721,7 +1348,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x03 */ {},
/* 0x04 */ {dxgkio_create_context_virtual, LX_DXCREATECONTEXTVIRTUAL},
/* 0x05 */ {dxgkio_destroy_context, LX_DXDESTROYCONTEXT},
-/* 0x06 */ {},
+/* 0x06 */ {dxgkio_create_allocation, LX_DXCREATEALLOCATION},
/* 0x07 */ {},
/* 0x08 */ {},
/* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO},
@@ -734,7 +1361,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x10 */ {},
/* 0x11 */ {},
/* 0x12 */ {},
-/* 0x13 */ {},
+/* 0x13 */ {dxgkio_destroy_allocation, LX_DXDESTROYALLOCATION2},
/* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2},
/* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER},
/* 0x16 */ {},
diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h
index 3a9637f0b5e2..a51b29a6a68f 100644
--- a/drivers/hv/dxgkrnl/misc.h
+++ b/drivers/hv/dxgkrnl/misc.h
@@ -30,6 +30,9 @@ extern const struct d3dkmthandle zerohandle;
* plistmutex (process list mutex)
* table_lock (handle table lock)
* context_list_lock
+ * alloc_list_lock
+ * resource_mutex
+ * shared_resource_list_lock
* core_lock (dxgadapter lock)
* device_lock (dxgdevice lock)
* process_adapter_mutex
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 4ba0070b061f..cf670b9c4dc2 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -58,6 +58,7 @@ struct winluid {
__u32 b;
};
+#define D3DKMT_CREATEALLOCATION_MAX 1024
#define D3DKMT_ADAPTERS_MAX 64
struct d3dkmt_adapterinfo {
@@ -197,6 +198,205 @@ struct d3dkmt_createcontextvirtual {
struct d3dkmthandle context;
};
+enum d3dkmdt_gdisurfacetype {
+ _D3DKMDT_GDISURFACE_INVALID = 0,
+ _D3DKMDT_GDISURFACE_TEXTURE = 1,
+ _D3DKMDT_GDISURFACE_STAGING_CPUVISIBLE = 2,
+ _D3DKMDT_GDISURFACE_STAGING = 3,
+ _D3DKMDT_GDISURFACE_LOOKUPTABLE = 4,
+ _D3DKMDT_GDISURFACE_EXISTINGSYSMEM = 5,
+ _D3DKMDT_GDISURFACE_TEXTURE_CPUVISIBLE = 6,
+ _D3DKMDT_GDISURFACE_TEXTURE_CROSSADAPTER = 7,
+ _D3DKMDT_GDISURFACE_TEXTURE_CPUVISIBLE_CROSSADAPTER = 8,
+};
+
+struct d3dddi_rational {
+ __u32 numerator;
+ __u32 denominator;
+};
+
+enum d3dddiformat {
+ _D3DDDIFMT_UNKNOWN = 0,
+};
+
+struct d3dkmdt_gdisurfacedata {
+ __u32 width;
+ __u32 height;
+ __u32 format;
+ enum d3dkmdt_gdisurfacetype type;
+ __u32 flags;
+ __u32 pitch;
+};
+
+struct d3dkmdt_stagingsurfacedata {
+ __u32 width;
+ __u32 height;
+ __u32 pitch;
+};
+
+struct d3dkmdt_sharedprimarysurfacedata {
+ __u32 width;
+ __u32 height;
+ enum d3dddiformat format;
+ struct d3dddi_rational refresh_rate;
+ __u32 vidpn_source_id;
+};
+
+struct d3dkmdt_shadowsurfacedata {
+ __u32 width;
+ __u32 height;
+ enum d3dddiformat format;
+ __u32 pitch;
+};
+
+enum d3dkmdt_standardallocationtype {
+ _D3DKMDT_STANDARDALLOCATION_SHAREDPRIMARYSURFACE = 1,
+ _D3DKMDT_STANDARDALLOCATION_SHADOWSURFACE = 2,
+ _D3DKMDT_STANDARDALLOCATION_STAGINGSURFACE = 3,
+ _D3DKMDT_STANDARDALLOCATION_GDISURFACE = 4,
+};
+
+enum d3dkmt_standardallocationtype {
+ _D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1,
+ _D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2,
+};
+
+struct d3dkmt_standardallocation_existingheap {
+ __u64 size;
+};
+
+struct d3dkmt_createstandardallocationflags {
+ union {
+ struct {
+ __u32 reserved:32;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dkmt_createstandardallocation {
+ enum d3dkmt_standardallocationtype type;
+ __u32 reserved;
+ struct d3dkmt_standardallocation_existingheap existing_heap_data;
+ struct d3dkmt_createstandardallocationflags flags;
+ __u32 reserved1;
+};
+
+struct d3dddi_allocationinfo2 {
+ struct d3dkmthandle allocation;
+#ifdef __KERNEL__
+ const void *sysmem;
+#else
+ __u64 sysmem;
+#endif
+#ifdef __KERNEL__
+ void *priv_drv_data;
+#else
+ __u64 priv_drv_data;
+#endif
+ __u32 priv_drv_data_size;
+ __u32 vidpn_source_id;
+ union {
+ struct {
+ __u32 primary:1;
+ __u32 stereo:1;
+ __u32 override_priority:1;
+ __u32 reserved:29;
+ };
+ __u32 value;
+ } flags;
+ __u64 gpu_virtual_address;
+ union {
+ __u32 priority;
+ __u64 unused;
+ };
+ __u64 reserved[5];
+};
+
+struct d3dkmt_createallocationflags {
+ union {
+ struct {
+ __u32 create_resource:1;
+ __u32 create_shared:1;
+ __u32 non_secure:1;
+ __u32 create_protected:1;
+ __u32 restrict_shared_access:1;
+ __u32 existing_sysmem:1;
+ __u32 nt_security_sharing:1;
+ __u32 read_only:1;
+ __u32 create_write_combined:1;
+ __u32 create_cached:1;
+ __u32 swap_chain_back_buffer:1;
+ __u32 cross_adapter:1;
+ __u32 open_cross_adapter:1;
+ __u32 partial_shared_creation:1;
+ __u32 zeroed:1;
+ __u32 write_watch:1;
+ __u32 standard_allocation:1;
+ __u32 existing_section:1;
+ __u32 reserved:14;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dkmt_createallocation {
+ struct d3dkmthandle device;
+ struct d3dkmthandle resource;
+ struct d3dkmthandle global_share;
+ __u32 reserved;
+#ifdef __KERNEL__
+ const void *private_runtime_data;
+#else
+ __u64 private_runtime_data;
+#endif
+ __u32 private_runtime_data_size;
+ __u32 reserved1;
+ union {
+#ifdef __KERNEL__
+ struct d3dkmt_createstandardallocation *standard_allocation;
+ const void *priv_drv_data;
+#else
+ __u64 standard_allocation;
+ __u64 priv_drv_data;
+#endif
+ };
+ __u32 priv_drv_data_size;
+ __u32 alloc_count;
+#ifdef __KERNEL__
+ struct d3dddi_allocationinfo2 *allocation_info;
+#else
+ __u64 allocation_info;
+#endif
+ struct d3dkmt_createallocationflags flags;
+ __u32 reserved2;
+ __u64 private_runtime_resource_handle;
+};
+
+struct d3dddicb_destroyallocation2flags {
+ union {
+ struct {
+ __u32 assume_not_in_use:1;
+ __u32 synchronous_destroy:1;
+ __u32 reserved:29;
+ __u32 system_use_only:1;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dkmt_destroyallocation2 {
+ struct d3dkmthandle device;
+ struct d3dkmthandle resource;
+#ifdef __KERNEL__
+ const struct d3dkmthandle *allocations;
+#else
+ __u64 allocations;
+#endif
+ __u32 alloc_count;
+ struct d3dddicb_destroyallocation2flags flags;
+};
+
struct d3dkmt_adaptertype {
union {
struct {
@@ -279,8 +479,12 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x04, struct d3dkmt_createcontextvirtual)
#define LX_DXDESTROYCONTEXT \
_IOWR(0x47, 0x05, struct d3dkmt_destroycontext)
+#define LX_DXCREATEALLOCATION \
+ _IOWR(0x47, 0x06, struct d3dkmt_createallocation)
#define LX_DXQUERYADAPTERINFO \
_IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
+#define LX_DXDESTROYALLOCATION2 \
+ _IOWR(0x47, 0x13, struct d3dkmt_destroyallocation2)
#define LX_DXENUMADAPTERS2 \
_IOWR(0x47, 0x14, struct d3dkmt_enumadapters2)
#define LX_DXCLOSEADAPTER \
^ permalink raw reply related [flat|nested] 56+ messages in thread* [PATCH 09/55] drivers: hv: dxgkrnl: Creation of compute device sync objects
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (7 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 08/55] drivers: hv: dxgkrnl: Creation of compute device allocations and resources Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 10/55] drivers: hv: dxgkrnl: Operations using " Eric Curtin
` (45 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls to create and destroy compute device sync objects:
- the LX_DXCREATESYNCHRONIZATIONOBJECT ioctl,
- the LX_DXDESTROYSYNCHRONIZATIONOBJECT ioctl.
Compute device synchronization objects are used to synchronize
execution of compute device commands, which are queued to
different execution contexts (dxgcontext objects).
There are several types of sync objects (mutex, monitored
fence, CPU event, fence). A "signal" or a "wait" operation
can be queued to an execution context.
Monitored fence sync objects are particularly important.
A monitored fence object has a fence value, which can be
monitored by the compute device or by the CPU. Therefore, a CPU
virtual address is allocated during object creation to allow
an application to read the fence value. dxg_map_iospace and
dxg_unmap_iospace implement creation of the CPU virtual address.
This is done as follows:
- The host allocates a portion of the guest IO space, which is mapped
to the actual fence value memory on the host
- The host returns the guest IO space address to the guest
- The guest allocates a CPU virtual address and updates page tables
to point to the IO space address
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 184 ++++++++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgkrnl.h | 80 +++++++++++++
drivers/hv/dxgkrnl/dxgmodule.c | 1 +
drivers/hv/dxgkrnl/dxgprocess.c | 16 +++
drivers/hv/dxgkrnl/dxgvmbus.c | 205 ++++++++++++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 20 ++++
drivers/hv/dxgkrnl/ioctl.c | 130 +++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 95 +++++++++++++++
8 files changed, 729 insertions(+), 2 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index 402caa81a5db..d2f2b96527e6 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -160,6 +160,24 @@ void dxgadapter_remove_process(struct dxgprocess_adapter *process_info)
list_del(&process_info->adapter_process_list_entry);
}
+void dxgadapter_add_syncobj(struct dxgadapter *adapter,
+ struct dxgsyncobject *object)
+{
+ down_write(&adapter->shared_resource_list_lock);
+ list_add_tail(&object->syncobj_list_entry, &adapter->syncobj_list_head);
+ up_write(&adapter->shared_resource_list_lock);
+}
+
+void dxgadapter_remove_syncobj(struct dxgsyncobject *object)
+{
+ down_write(&object->adapter->shared_resource_list_lock);
+ if (object->syncobj_list_entry.next) {
+ list_del(&object->syncobj_list_entry);
+ object->syncobj_list_entry.next = NULL;
+ }
+ up_write(&object->adapter->shared_resource_list_lock);
+}
+
int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter)
{
down_write(&adapter->core_lock);
@@ -213,6 +231,7 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter,
init_rwsem(&device->context_list_lock);
init_rwsem(&device->alloc_list_lock);
INIT_LIST_HEAD(&device->pqueue_list_head);
+ INIT_LIST_HEAD(&device->syncobj_list_head);
device->object_state = DXGOBJECTSTATE_CREATED;
device->execution_state = _D3DKMT_DEVICEEXECUTION_ACTIVE;
@@ -228,6 +247,7 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter,
void dxgdevice_stop(struct dxgdevice *device)
{
struct dxgallocation *alloc;
+ struct dxgsyncobject *syncobj;
DXG_TRACE("Destroying device: %p", device);
dxgdevice_acquire_alloc_list_lock(device);
@@ -235,6 +255,14 @@ void dxgdevice_stop(struct dxgdevice *device)
dxgallocation_stop(alloc);
}
dxgdevice_release_alloc_list_lock(device);
+
+ hmgrtable_lock(&device->process->handle_table, DXGLOCK_EXCL);
+ list_for_each_entry(syncobj, &device->syncobj_list_head,
+ syncobj_list_entry) {
+ dxgsyncobject_stop(syncobj);
+ }
+ hmgrtable_unlock(&device->process->handle_table, DXGLOCK_EXCL);
+ DXG_TRACE("Device stopped: %p", device);
}
void dxgdevice_mark_destroyed(struct dxgdevice *device)
@@ -263,6 +291,20 @@ void dxgdevice_destroy(struct dxgdevice *device)
dxgdevice_acquire_alloc_list_lock(device);
+ while (!list_empty(&device->syncobj_list_head)) {
+ struct dxgsyncobject *syncobj =
+ list_first_entry(&device->syncobj_list_head,
+ struct dxgsyncobject,
+ syncobj_list_entry);
+ list_del(&syncobj->syncobj_list_entry);
+ syncobj->syncobj_list_entry.next = NULL;
+ dxgdevice_release_alloc_list_lock(device);
+
+ dxgsyncobject_destroy(process, syncobj);
+
+ dxgdevice_acquire_alloc_list_lock(device);
+ }
+
{
struct dxgallocation *alloc;
struct dxgallocation *tmp;
@@ -565,6 +607,30 @@ void dxgdevice_release(struct kref *refcount)
kfree(device);
}
+void dxgdevice_add_syncobj(struct dxgdevice *device,
+ struct dxgsyncobject *syncobj)
+{
+ dxgdevice_acquire_alloc_list_lock(device);
+ list_add_tail(&syncobj->syncobj_list_entry, &device->syncobj_list_head);
+ kref_get(&syncobj->syncobj_kref);
+ dxgdevice_release_alloc_list_lock(device);
+}
+
+void dxgdevice_remove_syncobj(struct dxgsyncobject *entry)
+{
+ struct dxgdevice *device = entry->device;
+
+ dxgdevice_acquire_alloc_list_lock(device);
+ if (entry->syncobj_list_entry.next) {
+ list_del(&entry->syncobj_list_entry);
+ entry->syncobj_list_entry.next = NULL;
+ kref_put(&entry->syncobj_kref, dxgsyncobject_release);
+ }
+ dxgdevice_release_alloc_list_lock(device);
+ kref_put(&device->device_kref, dxgdevice_release);
+ entry->device = NULL;
+}
+
struct dxgcontext *dxgcontext_create(struct dxgdevice *device)
{
struct dxgcontext *context;
@@ -812,3 +878,121 @@ void dxgprocess_adapter_remove_device(struct dxgdevice *device)
}
mutex_unlock(&device->adapter_info->device_list_mutex);
}
+
+struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process,
+ struct dxgdevice *device,
+ struct dxgadapter *adapter,
+ enum
+ d3dddi_synchronizationobject_type
+ type,
+ struct
+ d3dddi_synchronizationobject_flags
+ flags)
+{
+ struct dxgsyncobject *syncobj;
+
+ syncobj = kzalloc(sizeof(*syncobj), GFP_KERNEL);
+ if (syncobj == NULL)
+ goto cleanup;
+ syncobj->type = type;
+ syncobj->process = process;
+ switch (type) {
+ case _D3DDDI_MONITORED_FENCE:
+ case _D3DDDI_PERIODIC_MONITORED_FENCE:
+ syncobj->monitored_fence = 1;
+ break;
+ default:
+ break;
+ }
+ if (flags.shared) {
+ syncobj->shared = 1;
+ if (!flags.nt_security_sharing) {
+ DXG_ERR("nt_security_sharing must be set");
+ goto cleanup;
+ }
+ }
+
+ kref_init(&syncobj->syncobj_kref);
+
+ if (syncobj->monitored_fence) {
+ syncobj->device = device;
+ syncobj->device_handle = device->handle;
+ kref_get(&device->device_kref);
+ dxgdevice_add_syncobj(device, syncobj);
+ } else {
+ dxgadapter_add_syncobj(adapter, syncobj);
+ }
+ syncobj->adapter = adapter;
+ kref_get(&adapter->adapter_kref);
+
+ DXG_TRACE("Syncobj created: %p", syncobj);
+ return syncobj;
+cleanup:
+ if (syncobj)
+ kfree(syncobj);
+ return NULL;
+}
+
+void dxgsyncobject_destroy(struct dxgprocess *process,
+ struct dxgsyncobject *syncobj)
+{
+ int destroyed;
+
+ DXG_TRACE("Destroying syncobj: %p", syncobj);
+
+ dxgsyncobject_stop(syncobj);
+
+ destroyed = test_and_set_bit(0, &syncobj->flags);
+ if (!destroyed) {
+ DXG_TRACE("Deleting handle: %x", syncobj->handle.v);
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ if (syncobj->handle.v) {
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_DXGSYNCOBJECT,
+ syncobj->handle);
+ syncobj->handle.v = 0;
+ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release);
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+
+ if (syncobj->monitored_fence)
+ dxgdevice_remove_syncobj(syncobj);
+ else
+ dxgadapter_remove_syncobj(syncobj);
+ if (syncobj->adapter) {
+ kref_put(&syncobj->adapter->adapter_kref,
+ dxgadapter_release);
+ syncobj->adapter = NULL;
+ }
+ }
+ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release);
+}
+
+void dxgsyncobject_stop(struct dxgsyncobject *syncobj)
+{
+ int stopped = test_and_set_bit(1, &syncobj->flags);
+
+ if (!stopped) {
+ DXG_TRACE("Stopping syncobj");
+ if (syncobj->monitored_fence) {
+ if (syncobj->mapped_address) {
+ int ret =
+ dxg_unmap_iospace(syncobj->mapped_address,
+ PAGE_SIZE);
+
+ (void)ret;
+ DXG_TRACE("unmap fence %d %p",
+ ret, syncobj->mapped_address);
+ syncobj->mapped_address = NULL;
+ }
+ }
+ }
+}
+
+void dxgsyncobject_release(struct kref *refcount)
+{
+ struct dxgsyncobject *syncobj;
+
+ syncobj = container_of(refcount, struct dxgsyncobject, syncobj_kref);
+ kfree(syncobj);
+}
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index fa053fb6ac9c..1b9410c9152b 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -38,6 +38,7 @@ struct dxgdevice;
struct dxgcontext;
struct dxgallocation;
struct dxgresource;
+struct dxgsyncobject;
/*
* Driver private data.
@@ -100,6 +101,56 @@ int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev);
void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch);
void dxgvmbuschannel_receive(void *ctx);
+/*
+ * This is a GPU synchronization object, which is used to synchronize execution
+ * between GPU contexts/hardware queues or to track GPU execution progress.
+ * A dxgsyncobject is created when somebody creates a syncobject or opens a
+ * shared syncobject.
+ * A syncobject belongs to an adapter, unless it is a cross-adapter object.
+ * Cross-adapter syncobjects are currently not implemented.
+ *
+ * D3DDDI_MONITORED_FENCE and D3DDDI_PERIODIC_MONITORED_FENCE are called
+ * "device" syncobjects, because they belong to a device (dxgdevice).
+ * Device syncobjects are inserted into a list in dxgdevice.
+ *
+ */
+struct dxgsyncobject {
+ struct kref syncobj_kref;
+ enum d3dddi_synchronizationobject_type type;
+ /*
+ * List entry in dxgdevice for device sync objects.
+ * List entry in dxgadapter for other objects
+ */
+ struct list_head syncobj_list_entry;
+ /* Adapter the syncobject belongs to. NULL for stopped sync objects. */
+ struct dxgadapter *adapter;
+ /*
+ * Pointer to the device, which was used to create the object.
+ * This is NULL for non-device syncobjects.
+ */
+ struct dxgdevice *device;
+ struct dxgprocess *process;
+ /* CPU virtual address of the fence value for "device" syncobjects */
+ void *mapped_address;
+ /* Handle in the process handle table */
+ struct d3dkmthandle handle;
+ /* Cached handle of the device. Used to avoid device dereference. */
+ struct d3dkmthandle device_handle;
+ union {
+ struct {
+ /* Must be the first bit */
+ u32 destroyed:1;
+ /* Must be the second bit */
+ u32 stopped:1;
+ /* device syncobject */
+ u32 monitored_fence:1;
+ u32 shared:1;
+ u32 reserved:27;
+ };
+ long flags;
+ };
+};
+
/*
* The structure defines an offered vGPU vm bus channel.
*/
@@ -109,6 +160,20 @@ struct dxgvgpuchannel {
struct hv_device *hdev;
};
+struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process,
+ struct dxgdevice *device,
+ struct dxgadapter *adapter,
+ enum
+ d3dddi_synchronizationobject_type
+ type,
+ struct
+ d3dddi_synchronizationobject_flags
+ flags);
+void dxgsyncobject_destroy(struct dxgprocess *process,
+ struct dxgsyncobject *syncobj);
+void dxgsyncobject_stop(struct dxgsyncobject *syncobj);
+void dxgsyncobject_release(struct kref *refcount);
+
struct dxgglobal {
struct dxgdriver *drvdata;
struct dxgvmbuschannel channel;
@@ -271,6 +336,8 @@ struct dxgadapter {
struct list_head adapter_list_entry;
/* The list of dxgprocess_adapter entries */
struct list_head adapter_process_list_head;
+ /* List of all non-device dxgsyncobject objects */
+ struct list_head syncobj_list_head;
/* This lock protects shared resource and syncobject lists */
struct rw_semaphore shared_resource_list_lock;
struct pci_dev *pci_dev;
@@ -296,6 +363,9 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter);
int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter);
void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter);
void dxgadapter_release_lock_exclusive(struct dxgadapter *adapter);
+void dxgadapter_add_syncobj(struct dxgadapter *adapter,
+ struct dxgsyncobject *so);
+void dxgadapter_remove_syncobj(struct dxgsyncobject *so);
void dxgadapter_add_process(struct dxgadapter *adapter,
struct dxgprocess_adapter *process_info);
void dxgadapter_remove_process(struct dxgprocess_adapter *process_info);
@@ -325,6 +395,7 @@ struct dxgdevice {
struct list_head resource_list_head;
/* List of paging queues. Protected by process handle table lock. */
struct list_head pqueue_list_head;
+ struct list_head syncobj_list_head;
struct d3dkmthandle handle;
enum d3dkmt_deviceexecution_state execution_state;
u32 handle_valid;
@@ -345,6 +416,8 @@ void dxgdevice_remove_alloc_safe(struct dxgdevice *dev,
struct dxgallocation *a);
void dxgdevice_add_resource(struct dxgdevice *dev, struct dxgresource *res);
void dxgdevice_remove_resource(struct dxgdevice *dev, struct dxgresource *res);
+void dxgdevice_add_syncobj(struct dxgdevice *dev, struct dxgsyncobject *so);
+void dxgdevice_remove_syncobj(struct dxgsyncobject *so);
bool dxgdevice_is_active(struct dxgdevice *dev);
void dxgdevice_acquire_context_list_lock(struct dxgdevice *dev);
void dxgdevice_release_context_list_lock(struct dxgdevice *dev);
@@ -455,6 +528,7 @@ void dxgallocation_free_handle(struct dxgallocation *a);
long dxgk_compat_ioctl(struct file *f, unsigned int p1, unsigned long p2);
long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2);
+int dxg_unmap_iospace(void *va, u32 size);
/*
* The convention is that VMBus instance id is a GUID, but the host sets
* the lower part of the value to the host adapter LUID. The function
@@ -514,6 +588,12 @@ int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev,
int dxgvmb_send_destroy_allocation(struct dxgprocess *pr, struct dxgdevice *dev,
struct d3dkmt_destroyallocation2 *args,
struct d3dkmthandle *alloc_handles);
+int dxgvmb_send_create_sync_object(struct dxgprocess *pr,
+ struct dxgadapter *adapter,
+ struct d3dkmt_createsynchronizationobject2
+ *args, struct dxgsyncobject *so);
+int dxgvmb_send_destroy_sync_object(struct dxgprocess *pr,
+ struct d3dkmthandle h);
int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryadapterinfo *args);
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index 053ce6f3e083..9bc8931c5043 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -162,6 +162,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid,
init_rwsem(&adapter->core_lock);
INIT_LIST_HEAD(&adapter->adapter_process_list_head);
+ INIT_LIST_HEAD(&adapter->syncobj_list_head);
init_rwsem(&adapter->shared_resource_list_lock);
adapter->pci_dev = dev;
guid_to_luid(guid, &adapter->luid);
diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c
index ca307beb9a9a..a41985ef438d 100644
--- a/drivers/hv/dxgkrnl/dxgprocess.c
+++ b/drivers/hv/dxgkrnl/dxgprocess.c
@@ -59,6 +59,7 @@ void dxgprocess_destroy(struct dxgprocess *process)
enum hmgrentry_type t;
struct d3dkmthandle h;
void *o;
+ struct dxgsyncobject *syncobj;
struct dxgprocess_adapter *entry;
struct dxgprocess_adapter *tmp;
@@ -84,6 +85,21 @@ void dxgprocess_destroy(struct dxgprocess *process)
}
}
+ i = 0;
+ while (hmgrtable_next_entry(&process->handle_table, &i, &t, &h, &o)) {
+ switch (t) {
+ case HMGRENTRY_TYPE_DXGSYNCOBJECT:
+ DXG_TRACE("Destroy syncobj: %p %d", o, i);
+ syncobj = o;
+ syncobj->handle.v = 0;
+ dxgsyncobject_destroy(process, syncobj);
+ break;
+ default:
+ DXG_ERR("invalid entry in handle table %d", t);
+ break;
+ }
+ }
+
hmgrtable_destroy(&process->handle_table);
hmgrtable_destroy(&process->local_handle_table);
}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 14b51a3c6afc..d323afc85249 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -495,6 +495,88 @@ dxgvmb_send_sync_msg_ntstatus(struct dxgvmbuschannel *channel,
return ret;
}
+static int check_iospace_address(unsigned long address, u32 size)
+{
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ if (address < dxgglobal->mmiospace_base ||
+ size > dxgglobal->mmiospace_size ||
+ address >= (dxgglobal->mmiospace_base +
+ dxgglobal->mmiospace_size - size)) {
+ DXG_ERR("invalid iospace address %lx", address);
+ return -EINVAL;
+ }
+ return 0;
+}
+
+int dxg_unmap_iospace(void *va, u32 size)
+{
+ int ret = 0;
+
+ DXG_TRACE("Unmapping io space: %p %x", va, size);
+
+ /*
+ * When an app calls exit(), dxgkrnl is called to close the device
+ * with current->mm equal to NULL.
+ */
+ if (current->mm) {
+ ret = vm_munmap((unsigned long)va, size);
+ if (ret) {
+ DXG_ERR("vm_munmap failed %d", ret);
+ return -ENOTRECOVERABLE;
+ }
+ }
+ return 0;
+}
+
+static u8 *dxg_map_iospace(u64 iospace_address, u32 size,
+ unsigned long protection, bool cached)
+{
+ struct vm_area_struct *vma;
+ unsigned long va;
+ int ret = 0;
+
+ DXG_TRACE("Mapping io space: %llx %x %lx",
+ iospace_address, size, protection);
+ if (check_iospace_address(iospace_address, size) < 0) {
+ DXG_ERR("invalid address to map");
+ return NULL;
+ }
+
+ va = vm_mmap(NULL, 0, size, protection, MAP_SHARED | MAP_ANONYMOUS, 0);
+ if ((long)va <= 0) {
+ DXG_ERR("vm_mmap failed %lx %d", va, size);
+ return NULL;
+ }
+
+ mmap_read_lock(current->mm);
+ vma = find_vma(current->mm, (unsigned long)va);
+ if (vma) {
+ pgprot_t prot = vma->vm_page_prot;
+
+ if (!cached)
+ prot = pgprot_writecombine(prot);
+ DXG_TRACE("vma: %lx %lx %lx",
+ vma->vm_start, vma->vm_end, va);
+ vma->vm_pgoff = iospace_address >> PAGE_SHIFT;
+ ret = io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+ size, prot);
+ if (ret)
+ DXG_ERR("io_remap_pfn_range failed: %d", ret);
+ } else {
+ DXG_ERR("failed to find vma: %p %lx", vma, va);
+ ret = -ENOMEM;
+ }
+ mmap_read_unlock(current->mm);
+
+ if (ret) {
+ dxg_unmap_iospace((void *)va, size);
+ return NULL;
+ }
+ DXG_TRACE("Mapped VA: %lx", va);
+ return (u8 *) va;
+}
+
/*
* Global messages to the host
*/
@@ -613,6 +695,39 @@ int dxgvmb_send_destroy_process(struct d3dkmthandle process)
return ret;
}
+int dxgvmb_send_destroy_sync_object(struct dxgprocess *process,
+ struct d3dkmthandle sync_object)
+{
+ struct dxgkvmb_command_destroysyncobject *command;
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, NULL, process, sizeof(*command));
+ if (ret)
+ return ret;
+ command = (void *)msg.msg;
+
+ ret = dxgglobal_acquire_channel_lock();
+ if (ret < 0)
+ goto cleanup;
+
+ command_vm_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_DESTROYSYNCOBJECT,
+ process->host_handle);
+ command->sync_object = sync_object;
+
+ ret = dxgvmb_send_sync_msg_ntstatus(dxgglobal_get_dxgvmbuschannel(),
+ msg.hdr, msg.size);
+
+ dxgglobal_release_channel_lock();
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
/*
* Virtual GPU messages to the host
*/
@@ -1023,7 +1138,11 @@ int create_existing_sysmem(struct dxgdevice *device,
ret = -ENOMEM;
goto cleanup;
}
+#ifdef _MAIN_KERNEL_
DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle);
+#else
+ DXG_TRACE("New gpadl %d", dxgalloc->gpadl);
+#endif
command_vgpu_to_host_init2(&set_store_command->hdr,
DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE,
@@ -1501,6 +1620,92 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device,
return ret;
}
+static void set_result(struct d3dkmt_createsynchronizationobject2 *args,
+ u64 fence_gpu_va, u8 *va)
+{
+ args->info.periodic_monitored_fence.fence_gpu_virtual_address =
+ fence_gpu_va;
+ args->info.periodic_monitored_fence.fence_cpu_virtual_address = va;
+}
+
+int
+dxgvmb_send_create_sync_object(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_createsynchronizationobject2 *args,
+ struct dxgsyncobject *syncobj)
+{
+ struct dxgkvmb_command_createsyncobject_return result = { };
+ struct dxgkvmb_command_createsyncobject *command;
+ int ret;
+ u8 *va = 0;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_CREATESYNCOBJECT,
+ process->host_handle);
+ command->args = *args;
+ command->client_hint = 1; /* CLIENTHINT_UMD */
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, &result,
+ sizeof(result));
+ if (ret < 0) {
+ DXG_ERR("failed %d", ret);
+ goto cleanup;
+ }
+ args->sync_object = result.sync_object;
+ if (syncobj->shared) {
+ if (result.global_sync_object.v == 0) {
+ DXG_ERR("shared handle is 0");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ args->info.shared_handle = result.global_sync_object;
+ }
+
+ if (syncobj->monitored_fence) {
+ va = dxg_map_iospace(result.fence_storage_address, PAGE_SIZE,
+ PROT_READ | PROT_WRITE, true);
+ if (va == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ if (args->info.type == _D3DDDI_MONITORED_FENCE) {
+ args->info.monitored_fence.fence_gpu_virtual_address =
+ result.fence_gpu_va;
+ args->info.monitored_fence.fence_cpu_virtual_address =
+ va;
+ {
+ unsigned long value;
+
+ DXG_TRACE("fence cpu va: %p", va);
+ ret = copy_from_user(&value, va,
+ sizeof(u64));
+ if (ret) {
+ DXG_ERR("failed to read fence");
+ ret = -EINVAL;
+ } else {
+ DXG_TRACE("fence value:%lx",
+ value);
+ }
+ }
+ } else {
+ set_result(args, result.fence_gpu_va, va);
+ }
+ syncobj->mapped_address = va;
+ }
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryadapterinfo *args)
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 4b7466d1b9f2..bbf5f31cdf81 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -410,4 +410,24 @@ struct dxgkvmb_command_destroycontext {
struct d3dkmthandle context;
};
+struct dxgkvmb_command_createsyncobject {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_createsynchronizationobject2 args;
+ u32 client_hint;
+};
+
+struct dxgkvmb_command_createsyncobject_return {
+ struct d3dkmthandle sync_object;
+ struct d3dkmthandle global_sync_object;
+ u64 fence_gpu_va;
+ u64 fence_storage_address;
+ u32 fence_storage_offset;
+};
+
+/* The command returns ntstatus */
+struct dxgkvmb_command_destroysyncobject {
+ struct dxgkvmb_command_vm_to_host hdr;
+ struct d3dkmthandle sync_object;
+};
+
#endif /* _DXGVMBUS_H */
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 0eaa577d7ed4..4bba1e209f33 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -1341,6 +1341,132 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs)
+{
+ int ret;
+ struct d3dkmt_createsynchronizationobject2 args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct dxgsyncobject *syncobj = NULL;
+ bool device_lock_acquired = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0)
+ goto cleanup;
+
+ device_lock_acquired = true;
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ syncobj = dxgsyncobject_create(process, device, adapter, args.info.type,
+ args.info.flags);
+ if (syncobj == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_create_sync_object(process, adapter, &args, syncobj);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = copy_to_user(inargs, &args, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy output args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ ret = hmgrtable_assign_handle(&process->handle_table, syncobj,
+ HMGRENTRY_TYPE_DXGSYNCOBJECT,
+ args.sync_object);
+ if (ret >= 0)
+ syncobj->handle = args.sync_object;
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+
+cleanup:
+
+ if (ret < 0) {
+ if (syncobj) {
+ dxgsyncobject_destroy(process, syncobj);
+ if (args.sync_object.v)
+ dxgvmb_send_destroy_sync_object(process,
+ args.sync_object);
+ }
+ }
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device_lock_acquired)
+ dxgdevice_release_lock_shared(device);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_destroysynchronizationobject args;
+ struct dxgsyncobject *syncobj = NULL;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ DXG_TRACE("handle 0x%x", args.sync_object.v);
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ syncobj = hmgrtable_get_object_by_type(&process->handle_table,
+ HMGRENTRY_TYPE_DXGSYNCOBJECT,
+ args.sync_object);
+ if (syncobj) {
+ DXG_TRACE("syncobj 0x%p", syncobj);
+ syncobj->handle.v = 0;
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_DXGSYNCOBJECT,
+ args.sync_object);
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+
+ if (syncobj == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ dxgsyncobject_destroy(process, syncobj);
+
+ ret = dxgvmb_send_destroy_sync_object(process, args.sync_object);
+
+cleanup:
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static struct ioctl_desc ioctls[] = {
/* 0x00 */ {},
/* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID},
@@ -1358,7 +1484,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x0d */ {},
/* 0x0e */ {},
/* 0x0f */ {},
-/* 0x10 */ {},
+/* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT},
/* 0x11 */ {},
/* 0x12 */ {},
/* 0x13 */ {dxgkio_destroy_allocation, LX_DXDESTROYALLOCATION2},
@@ -1371,7 +1497,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x1a */ {},
/* 0x1b */ {},
/* 0x1c */ {},
-/* 0x1d */ {},
+/* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT},
/* 0x1e */ {},
/* 0x1f */ {},
/* 0x20 */ {},
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index cf670b9c4dc2..4e1069f41d76 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -256,6 +256,97 @@ enum d3dkmdt_standardallocationtype {
_D3DKMDT_STANDARDALLOCATION_GDISURFACE = 4,
};
+struct d3dddi_synchronizationobject_flags {
+ union {
+ struct {
+ __u32 shared:1;
+ __u32 nt_security_sharing:1;
+ __u32 cross_adapter:1;
+ __u32 top_of_pipeline:1;
+ __u32 no_signal:1;
+ __u32 no_wait:1;
+ __u32 no_signal_max_value_on_tdr:1;
+ __u32 no_gpu_access:1;
+ __u32 reserved:23;
+ };
+ __u32 value;
+ };
+};
+
+enum d3dddi_synchronizationobject_type {
+ _D3DDDI_SYNCHRONIZATION_MUTEX = 1,
+ _D3DDDI_SEMAPHORE = 2,
+ _D3DDDI_FENCE = 3,
+ _D3DDDI_CPU_NOTIFICATION = 4,
+ _D3DDDI_MONITORED_FENCE = 5,
+ _D3DDDI_PERIODIC_MONITORED_FENCE = 6,
+ _D3DDDI_SYNCHRONIZATION_TYPE_LIMIT
+};
+
+struct d3dddi_synchronizationobjectinfo2 {
+ enum d3dddi_synchronizationobject_type type;
+ struct d3dddi_synchronizationobject_flags flags;
+ union {
+ struct {
+ __u32 initial_state;
+ } synchronization_mutex;
+
+ struct {
+ __u32 max_count;
+ __u32 initial_count;
+ } semaphore;
+
+ struct {
+ __u64 fence_value;
+ } fence;
+
+ struct {
+ __u64 event;
+ } cpu_notification;
+
+ struct {
+ __u64 initial_fence_value;
+#ifdef __KERNEL__
+ void *fence_cpu_virtual_address;
+#else
+ __u64 *fence_cpu_virtual_address;
+#endif
+ __u64 fence_gpu_virtual_address;
+ __u32 engine_affinity;
+ } monitored_fence;
+
+ struct {
+ struct d3dkmthandle adapter;
+ __u32 vidpn_target_id;
+ __u64 time;
+#ifdef __KERNEL__
+ void *fence_cpu_virtual_address;
+#else
+ __u64 fence_cpu_virtual_address;
+#endif
+ __u64 fence_gpu_virtual_address;
+ __u32 engine_affinity;
+ } periodic_monitored_fence;
+
+ struct {
+ __u64 reserved[8];
+ } reserved;
+ };
+ struct d3dkmthandle shared_handle;
+};
+
+struct d3dkmt_createsynchronizationobject2 {
+ struct d3dkmthandle device;
+ __u32 reserved;
+ struct d3dddi_synchronizationobjectinfo2 info;
+ struct d3dkmthandle sync_object;
+ __u32 reserved1;
+};
+
+struct d3dkmt_destroysynchronizationobject {
+ struct d3dkmthandle sync_object;
+};
+
enum d3dkmt_standardallocationtype {
_D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1,
_D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2,
@@ -483,6 +574,8 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x06, struct d3dkmt_createallocation)
#define LX_DXQUERYADAPTERINFO \
_IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
+#define LX_DXCREATESYNCHRONIZATIONOBJECT \
+ _IOWR(0x47, 0x10, struct d3dkmt_createsynchronizationobject2)
#define LX_DXDESTROYALLOCATION2 \
_IOWR(0x47, 0x13, struct d3dkmt_destroyallocation2)
#define LX_DXENUMADAPTERS2 \
@@ -491,6 +584,8 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x15, struct d3dkmt_closeadapter)
#define LX_DXDESTROYDEVICE \
_IOWR(0x47, 0x19, struct d3dkmt_destroydevice)
+#define LX_DXDESTROYSYNCHRONIZATIONOBJECT \
+ _IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject)
#define LX_DXENUMADAPTERS3 \
_IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3)
* [PATCH 10/55] drivers: hv: dxgkrnl: Operations using sync objects
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (8 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 09/55] drivers: hv: dxgkrnl: Creation of compute device sync objects Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 11/55] drivers: hv: dxgkrnl: Sharing of dxgresource objects Eric Curtin
` (44 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls to submit operations with compute device
sync objects:
- the LX_DXSIGNALSYNCHRONIZATIONOBJECT ioctl.
The ioctl is used to submit a signal to a sync object.
- the LX_DXWAITFORSYNCHRONIZATIONOBJECT ioctl.
The ioctl is used to submit a wait for a sync object.
- the LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU ioctl.
The ioctl is used to signal a monitored fence sync object
from a CPU thread.
- the LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU ioctl.
The ioctl is used to submit a signal to a monitored fence
sync object.
- the LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2 ioctl.
The ioctl is used to submit a signal to a monitored fence
sync object.
- the LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU ioctl.
The ioctl is used to submit a wait for a monitored fence
sync object.
Compute device synchronization objects are used to synchronize
execution of DMA buffers between different execution contexts.
Operations with sync objects include "signal" and "wait". A wait
for a sync object is satisfied when the sync object is signaled.
A signal operation can be submitted to a compute device context, or
the sync object can be signaled by a CPU thread.
To improve performance, submitting operations to the host is done
asynchronously when the host supports it.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 38 +-
drivers/hv/dxgkrnl/dxgkrnl.h | 62 +++
drivers/hv/dxgkrnl/dxgmodule.c | 102 ++++-
drivers/hv/dxgkrnl/dxgvmbus.c | 219 +++++++++-
drivers/hv/dxgkrnl/dxgvmbus.h | 48 +++
drivers/hv/dxgkrnl/ioctl.c | 702 +++++++++++++++++++++++++++++++-
drivers/hv/dxgkrnl/misc.h | 2 +
include/uapi/misc/d3dkmthk.h | 159 ++++++++
8 files changed, 1311 insertions(+), 21 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index d2f2b96527e6..04d827a15c54 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -249,7 +249,7 @@ void dxgdevice_stop(struct dxgdevice *device)
struct dxgallocation *alloc;
struct dxgsyncobject *syncobj;
- DXG_TRACE("Destroying device: %p", device);
+ DXG_TRACE("Stopping device: %p", device);
dxgdevice_acquire_alloc_list_lock(device);
list_for_each_entry(alloc, &device->alloc_list_head, alloc_list_entry) {
dxgallocation_stop(alloc);
@@ -743,15 +743,13 @@ void dxgallocation_destroy(struct dxgallocation *alloc)
}
#ifdef _MAIN_KERNEL_
if (alloc->gpadl.gpadl_handle) {
- DXG_TRACE("Teardown gpadl %d",
- alloc->gpadl.gpadl_handle);
+ DXG_TRACE("Teardown gpadl %d", alloc->gpadl.gpadl_handle);
vmbus_teardown_gpadl(dxgglobal_get_vmbus(), &alloc->gpadl);
alloc->gpadl.gpadl_handle = 0;
}
else
if (alloc->gpadl) {
- DXG_TRACE("Teardown gpadl %d",
- alloc->gpadl);
+ DXG_TRACE("Teardown gpadl %d", alloc->gpadl);
vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl);
alloc->gpadl = 0;
}
@@ -901,6 +899,13 @@ struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process,
case _D3DDDI_PERIODIC_MONITORED_FENCE:
syncobj->monitored_fence = 1;
break;
+ case _D3DDDI_CPU_NOTIFICATION:
+ syncobj->cpu_event = 1;
+ syncobj->host_event = kzalloc(sizeof(*syncobj->host_event),
+ GFP_KERNEL);
+ if (syncobj->host_event == NULL)
+ goto cleanup;
+ break;
default:
break;
}
@@ -928,6 +933,8 @@ struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process,
DXG_TRACE("Syncobj created: %p", syncobj);
return syncobj;
cleanup:
+ if (syncobj && syncobj->host_event)
+ kfree(syncobj->host_event);
if (syncobj)
kfree(syncobj);
return NULL;
@@ -937,6 +944,7 @@ void dxgsyncobject_destroy(struct dxgprocess *process,
struct dxgsyncobject *syncobj)
{
int destroyed;
+ struct dxghosteventcpu *host_event;
DXG_TRACE("Destroying syncobj: %p", syncobj);
@@ -955,6 +963,16 @@ void dxgsyncobject_destroy(struct dxgprocess *process,
}
hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+ if (syncobj->cpu_event) {
+ host_event = syncobj->host_event;
+ if (host_event->cpu_event) {
+ eventfd_ctx_put(host_event->cpu_event);
+ if (host_event->hdr.event_id)
+ dxgglobal_remove_host_event(
+ &host_event->hdr);
+ host_event->cpu_event = NULL;
+ }
+ }
if (syncobj->monitored_fence)
dxgdevice_remove_syncobj(syncobj);
else
@@ -971,16 +989,14 @@ void dxgsyncobject_destroy(struct dxgprocess *process,
void dxgsyncobject_stop(struct dxgsyncobject *syncobj)
{
int stopped = test_and_set_bit(1, &syncobj->flags);
+ int ret;
if (!stopped) {
DXG_TRACE("Stopping syncobj");
if (syncobj->monitored_fence) {
if (syncobj->mapped_address) {
- int ret =
- dxg_unmap_iospace(syncobj->mapped_address,
- PAGE_SIZE);
-
- (void)ret;
+ ret = dxg_unmap_iospace(syncobj->mapped_address,
+ PAGE_SIZE);
DXG_TRACE("unmap fence %d %p",
ret, syncobj->mapped_address);
syncobj->mapped_address = NULL;
@@ -994,5 +1010,7 @@ void dxgsyncobject_release(struct kref *refcount)
struct dxgsyncobject *syncobj;
syncobj = container_of(refcount, struct dxgsyncobject, syncobj_kref);
+ if (syncobj->host_event)
+ kfree(syncobj->host_event);
kfree(syncobj);
}
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 1b9410c9152b..8431523f42de 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -101,6 +101,29 @@ int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev);
void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch);
void dxgvmbuschannel_receive(void *ctx);
+/*
+ * This structure describes an event which will be signaled by
+ * a message from the host.
+ */
+enum dxghosteventtype {
+ dxghostevent_cpu_event = 1,
+};
+
+struct dxghostevent {
+ struct list_head host_event_list_entry;
+ u64 event_id;
+ enum dxghosteventtype event_type;
+};
+
+struct dxghosteventcpu {
+ struct dxghostevent hdr;
+ struct dxgprocess *process;
+ struct eventfd_ctx *cpu_event;
+ struct completion *completion_event;
+ bool destroy_after_signal;
+ bool remove_from_list;
+};
+
/*
 * This is a GPU synchronization object, which is used to synchronize execution
 * between GPU contexts/hardware queues or for tracking GPU execution progress.
@@ -130,6 +153,8 @@ struct dxgsyncobject {
*/
struct dxgdevice *device;
struct dxgprocess *process;
+ /* Used by D3DDDI_CPU_NOTIFICATION objects */
+ struct dxghosteventcpu *host_event;
/* CPU virtual address of the fence value for "device" syncobjects */
void *mapped_address;
/* Handle in the process handle table */
@@ -144,6 +169,7 @@ struct dxgsyncobject {
u32 stopped:1;
/* device syncobject */
u32 monitored_fence:1;
+ u32 cpu_event:1;
u32 shared:1;
u32 reserved:27;
};
@@ -206,6 +232,11 @@ struct dxgglobal {
/* protects the dxgprocess_adapter lists */
struct mutex process_adapter_mutex;
+ /* list of events, waiting to be signaled by the host */
+ struct list_head host_event_list_head;
+ spinlock_t host_event_list_mutex;
+ atomic64_t host_event_id;
+
bool global_channel_initialized;
bool async_msg_enabled;
bool misc_registered;
@@ -228,6 +259,11 @@ struct vmbus_channel *dxgglobal_get_vmbus(void);
struct dxgvmbuschannel *dxgglobal_get_dxgvmbuschannel(void);
void dxgglobal_acquire_process_adapter_lock(void);
void dxgglobal_release_process_adapter_lock(void);
+void dxgglobal_add_host_event(struct dxghostevent *hostevent);
+void dxgglobal_remove_host_event(struct dxghostevent *hostevent);
+u64 dxgglobal_new_host_event_id(void);
+void dxgglobal_signal_host_event(u64 event_id);
+struct dxghostevent *dxgglobal_get_host_event(u64 event_id);
int dxgglobal_acquire_channel_lock(void);
void dxgglobal_release_channel_lock(void);
@@ -594,6 +630,31 @@ int dxgvmb_send_create_sync_object(struct dxgprocess *pr,
*args, struct dxgsyncobject *so);
int dxgvmb_send_destroy_sync_object(struct dxgprocess *pr,
struct d3dkmthandle h);
+int dxgvmb_send_signal_sync_object(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dddicb_signalflags flags,
+ u64 legacy_fence_value,
+ struct d3dkmthandle context,
+ u32 object_count,
+ struct d3dkmthandle *object,
+ u32 context_count,
+ struct d3dkmthandle *contexts,
+ u32 fence_count, u64 *fences,
+ struct eventfd_ctx *cpu_event,
+ struct d3dkmthandle device);
+int dxgvmb_send_wait_sync_object_gpu(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle context,
+ u32 object_count,
+ struct d3dkmthandle *objects,
+ u64 *fences,
+ bool legacy_fence);
+int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct
+ d3dkmt_waitforsynchronizationobjectfromcpu
+ *args,
+ u64 cpu_event);
int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryadapterinfo *args);
@@ -609,6 +670,7 @@ int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel,
void *command,
u32 cmd_size);
+void signal_host_cpu_event(struct dxghostevent *eventhdr);
int ntstatus2int(struct ntstatus status);
#ifdef DEBUG
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index 9bc8931c5043..5a5ca8791d27 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -123,6 +123,102 @@ static struct dxgadapter *find_adapter(struct winluid *luid)
return adapter;
}
+void dxgglobal_add_host_event(struct dxghostevent *event)
+{
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ spin_lock_irq(&dxgglobal->host_event_list_mutex);
+ list_add_tail(&event->host_event_list_entry,
+ &dxgglobal->host_event_list_head);
+ spin_unlock_irq(&dxgglobal->host_event_list_mutex);
+}
+
+void dxgglobal_remove_host_event(struct dxghostevent *event)
+{
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ spin_lock_irq(&dxgglobal->host_event_list_mutex);
+ if (event->host_event_list_entry.next != NULL) {
+ list_del(&event->host_event_list_entry);
+ event->host_event_list_entry.next = NULL;
+ }
+ spin_unlock_irq(&dxgglobal->host_event_list_mutex);
+}
+
+void signal_host_cpu_event(struct dxghostevent *eventhdr)
+{
+ struct dxghosteventcpu *event = (struct dxghosteventcpu *)eventhdr;
+
+ if (event->remove_from_list ||
+ event->destroy_after_signal) {
+ list_del(&eventhdr->host_event_list_entry);
+ eventhdr->host_event_list_entry.next = NULL;
+ }
+ if (event->cpu_event) {
+ DXG_TRACE("signal cpu event");
+ eventfd_signal(event->cpu_event, 1);
+ if (event->destroy_after_signal)
+ eventfd_ctx_put(event->cpu_event);
+ } else {
+ DXG_TRACE("signal completion");
+ complete(event->completion_event);
+ }
+ if (event->destroy_after_signal) {
+ DXG_TRACE("destroying event %p", event);
+ kfree(event);
+ }
+}
+
+void dxgglobal_signal_host_event(u64 event_id)
+{
+ struct dxghostevent *event;
+ unsigned long flags;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ DXG_TRACE("Signaling host event %lld", event_id);
+
+ spin_lock_irqsave(&dxgglobal->host_event_list_mutex, flags);
+ list_for_each_entry(event, &dxgglobal->host_event_list_head,
+ host_event_list_entry) {
+ if (event->event_id == event_id) {
+ DXG_TRACE("found event to signal");
+ if (event->event_type == dxghostevent_cpu_event)
+ signal_host_cpu_event(event);
+ else
+ DXG_ERR("Unknown host event type");
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&dxgglobal->host_event_list_mutex, flags);
+}
+
+struct dxghostevent *dxgglobal_get_host_event(u64 event_id)
+{
+ struct dxghostevent *entry;
+ struct dxghostevent *event = NULL;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ spin_lock_irq(&dxgglobal->host_event_list_mutex);
+ list_for_each_entry(entry, &dxgglobal->host_event_list_head,
+ host_event_list_entry) {
+ if (entry->event_id == event_id) {
+ list_del(&entry->host_event_list_entry);
+ entry->host_event_list_entry.next = NULL;
+ event = entry;
+ break;
+ }
+ }
+ spin_unlock_irq(&dxgglobal->host_event_list_mutex);
+ return event;
+}
+
+u64 dxgglobal_new_host_event_id(void)
+{
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ return atomic64_inc_return(&dxgglobal->host_event_id);
+}
+
void dxgglobal_acquire_process_adapter_lock(void)
{
struct dxgglobal *dxgglobal = dxggbl();
@@ -720,12 +816,16 @@ static struct dxgglobal *dxgglobal_create(void)
INIT_LIST_HEAD(&dxgglobal->vgpu_ch_list_head);
INIT_LIST_HEAD(&dxgglobal->adapter_list_head);
init_rwsem(&dxgglobal->adapter_list_lock);
-
init_rwsem(&dxgglobal->channel_lock);
+ INIT_LIST_HEAD(&dxgglobal->host_event_list_head);
+ spin_lock_init(&dxgglobal->host_event_list_mutex);
+ atomic64_set(&dxgglobal->host_event_id, 1);
+
#ifdef DEBUG
dxgk_validate_ioctls();
#endif
+
return dxgglobal;
}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index d323afc85249..6b2dea24a509 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -281,6 +281,22 @@ static void command_vm_to_host_init1(struct dxgkvmb_command_vm_to_host *command,
command->channel_type = DXGKVMB_VM_TO_HOST;
}
+static void signal_guest_event(struct dxgkvmb_command_host_to_vm *packet,
+ u32 packet_length)
+{
+ struct dxgkvmb_command_signalguestevent *command = (void *)packet;
+
+ if (packet_length < sizeof(struct dxgkvmb_command_signalguestevent)) {
+ DXG_ERR("invalid signal guest event packet size");
+ return;
+ }
+ if (command->event == 0) {
+ DXG_ERR("invalid event pointer");
+ return;
+ }
+ dxgglobal_signal_host_event(command->event);
+}
+
static void process_inband_packet(struct dxgvmbuschannel *channel,
struct vmpacket_descriptor *desc)
{
@@ -297,6 +313,7 @@ static void process_inband_packet(struct dxgvmbuschannel *channel,
switch (packet->command_type) {
case DXGK_VMBCOMMAND_SIGNALGUESTEVENT:
case DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE:
+ signal_guest_event(packet, packet_length);
break;
case DXGK_VMBCOMMAND_SENDWNFNOTIFICATION:
break;
@@ -959,7 +976,7 @@ dxgvmb_send_create_context(struct dxgadapter *adapter,
command->priv_drv_data,
args->priv_drv_data_size);
if (ret) {
- dev_err(DXGDEV,
+ DXG_ERR(
"Failed to copy private data to user");
ret = -EINVAL;
dxgvmb_send_destroy_context(adapter, process,
@@ -1706,6 +1723,206 @@ dxgvmb_send_create_sync_object(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_signal_sync_object(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dddicb_signalflags flags,
+ u64 legacy_fence_value,
+ struct d3dkmthandle context,
+ u32 object_count,
+ struct d3dkmthandle __user *objects,
+ u32 context_count,
+ struct d3dkmthandle __user *contexts,
+ u32 fence_count,
+ u64 __user *fences,
+ struct eventfd_ctx *cpu_event_handle,
+ struct d3dkmthandle device)
+{
+ int ret;
+ struct dxgkvmb_command_signalsyncobject *command;
+ u32 object_size = object_count * sizeof(struct d3dkmthandle);
+ u32 context_size = context_count * sizeof(struct d3dkmthandle);
+ u32 fence_size = fences ? fence_count * sizeof(u64) : 0;
+ u8 *current_pos;
+ u32 cmd_size = sizeof(struct dxgkvmb_command_signalsyncobject) +
+ object_size + context_size + fence_size;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ if (context.v)
+ cmd_size += sizeof(struct d3dkmthandle);
+
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_SIGNALSYNCOBJECT,
+ process->host_handle);
+
+ if (flags.enqueue_cpu_event)
+ command->cpu_event_handle = (u64) cpu_event_handle;
+ else
+ command->device = device;
+ command->flags = flags;
+ command->fence_value = legacy_fence_value;
+ command->object_count = object_count;
+ command->context_count = context_count;
+ current_pos = (u8 *) &command[1];
+ ret = copy_from_user(current_pos, objects, object_size);
+ if (ret) {
+ DXG_ERR("Failed to read objects %p %d",
+ objects, object_size);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ current_pos += object_size;
+ if (context.v) {
+ command->context_count++;
+ *(struct d3dkmthandle *) current_pos = context;
+ current_pos += sizeof(struct d3dkmthandle);
+ }
+ if (context_size) {
+ ret = copy_from_user(current_pos, contexts, context_size);
+ if (ret) {
+ DXG_ERR("Failed to read contexts %p %d",
+ contexts, context_size);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ current_pos += context_size;
+ }
+ if (fence_size) {
+ ret = copy_from_user(current_pos, fences, fence_size);
+ if (ret) {
+ DXG_ERR("Failed to read fences %p %d",
+ fences, fence_size);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+
+ if (dxgglobal->async_msg_enabled) {
+ command->hdr.async_msg = 1;
+ ret = dxgvmb_send_async_msg(msg.channel, msg.hdr, msg.size);
+ } else {
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr,
+ msg.size);
+ }
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct
+ d3dkmt_waitforsynchronizationobjectfromcpu
+ *args,
+ u64 cpu_event)
+{
+ int ret = -EINVAL;
+ struct dxgkvmb_command_waitforsyncobjectfromcpu *command;
+ u32 object_size = args->object_count * sizeof(struct d3dkmthandle);
+ u32 fence_size = args->object_count * sizeof(u64);
+ u8 *current_pos;
+ u32 cmd_size = sizeof(*command) + object_size + fence_size;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_WAITFORSYNCOBJECTFROMCPU,
+ process->host_handle);
+ command->device = args->device;
+ command->flags = args->flags;
+ command->object_count = args->object_count;
+ command->guest_event_pointer = (u64) cpu_event;
+ current_pos = (u8 *) &command[1];
+
+ ret = copy_from_user(current_pos, args->objects, object_size);
+ if (ret) {
+ DXG_ERR("failed to copy objects");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ current_pos += object_size;
+ ret = copy_from_user(current_pos, args->fence_values,
+ fence_size);
+ if (ret) {
+ DXG_ERR("failed to copy fences");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_wait_sync_object_gpu(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle context,
+ u32 object_count,
+ struct d3dkmthandle *objects,
+ u64 *fences,
+ bool legacy_fence)
+{
+ int ret;
+ struct dxgkvmb_command_waitforsyncobjectfromgpu *command;
+ u32 fence_size = object_count * sizeof(u64);
+ u32 object_size = object_count * sizeof(struct d3dkmthandle);
+ u8 *current_pos;
+ u32 cmd_size = object_size + fence_size - sizeof(u64) +
+ sizeof(struct dxgkvmb_command_waitforsyncobjectfromgpu);
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ if (object_count == 0 || object_count > D3DDDI_MAX_OBJECT_WAITED_ON) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_WAITFORSYNCOBJECTFROMGPU,
+ process->host_handle);
+ command->context = context;
+ command->object_count = object_count;
+ command->legacy_fence_object = legacy_fence;
+ current_pos = (u8 *) command->fence_values;
+ memcpy(current_pos, fences, fence_size);
+ current_pos += fence_size;
+ memcpy(current_pos, objects, object_size);
+
+ if (dxgglobal->async_msg_enabled) {
+ command->hdr.async_msg = 1;
+ ret = dxgvmb_send_async_msg(msg.channel, msg.hdr, msg.size);
+ } else {
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr,
+ msg.size);
+ }
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryadapterinfo *args)
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index bbf5f31cdf81..89fecbcefbc8 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -165,6 +165,13 @@ struct dxgkvmb_command_host_to_vm {
enum dxgkvmb_commandtype_host_to_vm command_type;
};
+struct dxgkvmb_command_signalguestevent {
+ struct dxgkvmb_command_host_to_vm hdr;
+ u64 event;
+ u64 process_id;
+ bool dereference_event;
+};
+
/* Returns ntstatus */
struct dxgkvmb_command_setiospaceregion {
struct dxgkvmb_command_vm_to_host hdr;
@@ -430,4 +437,45 @@ struct dxgkvmb_command_destroysyncobject {
struct d3dkmthandle sync_object;
};
+/* The command returns ntstatus */
+struct dxgkvmb_command_signalsyncobject {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ u32 object_count;
+ struct d3dddicb_signalflags flags;
+ u32 context_count;
+ u64 fence_value;
+ union {
+ /* Pointer to the guest event object */
+ u64 cpu_event_handle;
+ /* Non-zero when the signal is issued from the CPU */
+ struct d3dkmthandle device;
+ };
+ /* struct d3dkmthandle ObjectHandleArray[object_count] */
+ /* struct d3dkmthandle ContextArray[context_count] */
+ /* u64 MonitoredFenceValueArray[object_count] */
+};
+
+/* The command returns ntstatus */
+struct dxgkvmb_command_waitforsyncobjectfromcpu {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ u32 object_count;
+ struct d3dddi_waitforsynchronizationobjectfromcpu_flags flags;
+ u64 guest_event_pointer;
+ bool dereference_event;
+ /* struct d3dkmthandle ObjectHandleArray[object_count] */
+ /* u64 FenceValueArray [object_count] */
+};
+
+/* The command returns ntstatus */
+struct dxgkvmb_command_waitforsyncobjectfromgpu {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle context;
+ /* Must be 1 when bLegacyFenceObject is TRUE */
+ u32 object_count;
+ bool legacy_fence_object;
+ u64 fence_values[1];
+ /* struct d3dkmthandle ObjectHandles[object_count] */
+};
+
#endif /* _DXGVMBUS_H */
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 4bba1e209f33..0025e1ee2d4d 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -759,7 +759,7 @@ get_standard_alloc_priv_data(struct dxgdevice *device,
res_priv_data = vzalloc(res_priv_data_size);
if (res_priv_data == NULL) {
ret = -ENOMEM;
- dev_err(DXGDEV,
+ DXG_ERR(
"failed to alloc memory for res priv data: %d",
res_priv_data_size);
goto cleanup;
@@ -1065,7 +1065,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
alloc_info[i].priv_drv_data,
priv_data_size);
if (ret) {
- dev_err(DXGDEV,
+ DXG_ERR(
"failed to copy priv data");
ret = -EFAULT;
goto cleanup;
@@ -1348,8 +1348,10 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs)
struct d3dkmt_createsynchronizationobject2 args;
struct dxgdevice *device = NULL;
struct dxgadapter *adapter = NULL;
+ struct eventfd_ctx *event = NULL;
struct dxgsyncobject *syncobj = NULL;
bool device_lock_acquired = false;
+ struct dxghosteventcpu *host_event = NULL;
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
@@ -1384,6 +1386,27 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs)
goto cleanup;
}
+ if (args.info.type == _D3DDDI_CPU_NOTIFICATION) {
+ event = eventfd_ctx_fdget((int)
+ args.info.cpu_notification.event);
+ if (IS_ERR(event)) {
+ DXG_ERR("failed to reference the event");
+ event = NULL;
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ host_event = syncobj->host_event;
+ host_event->hdr.event_id = dxgglobal_new_host_event_id();
+ host_event->cpu_event = event;
+ host_event->remove_from_list = false;
+ host_event->destroy_after_signal = false;
+ host_event->hdr.event_type = dxghostevent_cpu_event;
+ dxgglobal_add_host_event(&host_event->hdr);
+ args.info.cpu_notification.event = host_event->hdr.event_id;
+ DXG_TRACE("creating CPU notification event: %lld",
+ args.info.cpu_notification.event);
+ }
+
ret = dxgvmb_send_create_sync_object(process, adapter, &args, syncobj);
if (ret < 0)
goto cleanup;
@@ -1411,7 +1434,10 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs)
if (args.sync_object.v)
dxgvmb_send_destroy_sync_object(process,
args.sync_object);
+ event = NULL;
}
+ if (event)
+ eventfd_ctx_put(event);
}
if (adapter)
dxgadapter_release_lock_shared(adapter);
@@ -1467,6 +1493,659 @@ dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_signal_sync_object(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_signalsynchronizationobject2 args;
+ struct d3dkmt_signalsynchronizationobject2 *__user in_args = inargs;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ int ret;
+ u32 fence_count = 1;
+ struct eventfd_ctx *event = NULL;
+ struct dxghosteventcpu *host_event = NULL;
+ bool host_event_added = false;
+ u64 host_event_id = 0;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.context_count >= D3DDDI_MAX_BROADCAST_CONTEXT ||
+ args.object_count > D3DDDI_MAX_OBJECT_SIGNALED) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.flags.enqueue_cpu_event) {
+ host_event = kzalloc(sizeof(*host_event), GFP_KERNEL);
+ if (host_event == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ host_event->process = process;
+ event = eventfd_ctx_fdget((int)args.cpu_event_handle);
+ if (IS_ERR(event)) {
+ DXG_ERR("failed to reference the event");
+ event = NULL;
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ fence_count = 0;
+ host_event->cpu_event = event;
+ host_event_id = dxgglobal_new_host_event_id();
+ host_event->hdr.event_type = dxghostevent_cpu_event;
+ host_event->hdr.event_id = host_event_id;
+ host_event->remove_from_list = true;
+ host_event->destroy_after_signal = true;
+ dxgglobal_add_host_event(&host_event->hdr);
+ host_event_added = true;
+ }
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ args.context);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_signal_sync_object(process, adapter,
+ args.flags, args.fence.fence_value,
+ args.context, args.object_count,
+ in_args->object_array,
+ args.context_count,
+ in_args->contexts, fence_count,
+ NULL, (void *)host_event_id,
+ zerohandle);
+
+ /*
+ * When the send operation succeeds, the host event will be destroyed
+ * after signal from the host
+ */
+
+cleanup:
+
+ if (ret < 0) {
+ if (host_event_added) {
+ /* The event might be signaled and destroyed by host */
+ host_event = (struct dxghosteventcpu *)
+ dxgglobal_get_host_event(host_event_id);
+ if (host_event) {
+ eventfd_ctx_put(event);
+ event = NULL;
+ kfree(host_event);
+ host_event = NULL;
+ }
+ }
+ if (event)
+ eventfd_ctx_put(event);
+ if (host_event)
+ kfree(host_event);
+ }
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_signal_sync_object_cpu(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_signalsynchronizationobjectfromcpu args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ if (args.object_count == 0 ||
+ args.object_count > D3DDDI_MAX_OBJECT_SIGNALED) {
+ DXG_TRACE("Invalid object count: %d", args.object_count);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_signal_sync_object(process, adapter,
+ args.flags, 0, zerohandle,
+ args.object_count, args.objects, 0,
+ NULL, args.object_count,
+ args.fence_values, NULL,
+ args.device);
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_signal_sync_object_gpu(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_signalsynchronizationobjectfromgpu args;
+ struct d3dkmt_signalsynchronizationobjectfromgpu *__user user_args =
+ inargs;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct d3dddicb_signalflags flags = { };
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.object_count == 0 ||
+ args.object_count > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ args.context);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_signal_sync_object(process, adapter,
+ flags, 0, zerohandle,
+ args.object_count,
+ args.objects, 1,
+ &user_args->context,
+ args.object_count,
+ args.monitored_fence_values, NULL,
+ zerohandle);
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_signal_sync_object_gpu2(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_signalsynchronizationobjectfromgpu2 args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct d3dkmthandle context_handle;
+ struct eventfd_ctx *event = NULL;
+ u64 *fences = NULL;
+ u32 fence_count = 0;
+ int ret;
+ struct dxghosteventcpu *host_event = NULL;
+ bool host_event_added = false;
+ u64 host_event_id = 0;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.flags.enqueue_cpu_event) {
+ if (args.object_count != 0 || args.cpu_event_handle == 0) {
+ DXG_ERR("Bad input in EnqueueCpuEvent: %d %lld",
+ args.object_count, args.cpu_event_handle);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ } else if (args.object_count == 0 ||
+ args.object_count > DXG_MAX_VM_BUS_PACKET_SIZE ||
+ args.context_count == 0 ||
+ args.context_count > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ DXG_ERR("Invalid input: %d %d",
+ args.object_count, args.context_count);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = copy_from_user(&context_handle, args.contexts,
+ sizeof(struct d3dkmthandle));
+ if (ret) {
+ DXG_ERR("failed to copy context handle");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.flags.enqueue_cpu_event) {
+ host_event = kzalloc(sizeof(*host_event), GFP_KERNEL);
+ if (host_event == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ host_event->process = process;
+ event = eventfd_ctx_fdget((int)args.cpu_event_handle);
+ if (IS_ERR(event)) {
+ DXG_ERR("failed to reference the event");
+ event = NULL;
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ fence_count = 0;
+ host_event->cpu_event = event;
+ host_event_id = dxgglobal_new_host_event_id();
+ host_event->hdr.event_id = host_event_id;
+ host_event->hdr.event_type = dxghostevent_cpu_event;
+ host_event->remove_from_list = true;
+ host_event->destroy_after_signal = true;
+ dxgglobal_add_host_event(&host_event->hdr);
+ host_event_added = true;
+ } else {
+ fences = args.monitored_fence_values;
+ fence_count = args.object_count;
+ }
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ context_handle);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_signal_sync_object(process, adapter,
+ args.flags, 0, zerohandle,
+ args.object_count, args.objects,
+ args.context_count, args.contexts,
+ fence_count, fences,
+ (void *)host_event_id, zerohandle);
+
+cleanup:
+
+ if (ret < 0) {
+ if (host_event_added) {
+ /* The event might be signaled and destroyed by host */
+ host_event = (struct dxghosteventcpu *)
+ dxgglobal_get_host_event(host_event_id);
+ if (host_event) {
+ eventfd_ctx_put(event);
+ event = NULL;
+ kfree(host_event);
+ host_event = NULL;
+ }
+ }
+ if (event)
+ eventfd_ctx_put(event);
+ if (host_event)
+ kfree(host_event);
+ }
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_wait_sync_object(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_waitforsynchronizationobject2 args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.object_count > D3DDDI_MAX_OBJECT_WAITED_ON ||
+ args.object_count == 0) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ args.context);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ DXG_TRACE("Fence value: %lld", args.fence.fence_value);
+ ret = dxgvmb_send_wait_sync_object_gpu(process, adapter,
+ args.context, args.object_count,
+ args.object_array,
+ &args.fence.fence_value, true);
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_wait_sync_object_cpu(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_waitforsynchronizationobjectfromcpu args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct eventfd_ctx *event = NULL;
+ struct dxghosteventcpu host_event = { };
+ struct dxghosteventcpu *async_host_event = NULL;
+ struct completion local_event = { };
+ u64 event_id = 0;
+ int ret;
+ bool host_event_added = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.object_count > DXG_MAX_VM_BUS_PACKET_SIZE ||
+ args.object_count == 0) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.async_event) {
+ async_host_event = kzalloc(sizeof(*async_host_event),
+ GFP_KERNEL);
+ if (async_host_event == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ async_host_event->process = process;
+ event = eventfd_ctx_fdget((int)args.async_event);
+ if (IS_ERR(event)) {
+ DXG_ERR("failed to reference the event");
+ event = NULL;
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ async_host_event->cpu_event = event;
+ async_host_event->hdr.event_id = dxgglobal_new_host_event_id();
+ async_host_event->destroy_after_signal = true;
+ async_host_event->hdr.event_type = dxghostevent_cpu_event;
+ dxgglobal_add_host_event(&async_host_event->hdr);
+ event_id = async_host_event->hdr.event_id;
+ host_event_added = true;
+ } else {
+ init_completion(&local_event);
+ host_event.completion_event = &local_event;
+ host_event.hdr.event_id = dxgglobal_new_host_event_id();
+ host_event.hdr.event_type = dxghostevent_cpu_event;
+ dxgglobal_add_host_event(&host_event.hdr);
+ event_id = host_event.hdr.event_id;
+ }
+
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_wait_sync_object_cpu(process, adapter,
+ &args, event_id);
+ if (ret < 0)
+ goto cleanup;
+
+ if (args.async_event == 0) {
+ dxgadapter_release_lock_shared(adapter);
+ adapter = NULL;
+ ret = wait_for_completion_interruptible(&local_event);
+ if (ret) {
+ DXG_ERR("wait_for_completion_interruptible: %d",
+ ret);
+ ret = -ERESTARTSYS;
+ }
+ }
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+ if (host_event.hdr.event_id)
+ dxgglobal_remove_host_event(&host_event.hdr);
+ if (ret < 0) {
+ if (host_event_added) {
+ async_host_event = (struct dxghosteventcpu *)
+ dxgglobal_get_host_event(event_id);
+ if (async_host_event) {
+ if (async_host_event->hdr.event_type ==
+ dxghostevent_cpu_event) {
+ eventfd_ctx_put(event);
+ event = NULL;
+ kfree(async_host_event);
+ async_host_event = NULL;
+ } else {
+ DXG_ERR("Invalid event type");
+ DXGKRNL_ASSERT(0);
+ }
+ }
+ }
+ if (event)
+ eventfd_ctx_put(event);
+ if (async_host_event)
+ kfree(async_host_event);
+ }
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_waitforsynchronizationobjectfromgpu args;
+ struct dxgcontext *context = NULL;
+ struct d3dkmthandle device_handle = {};
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct dxgsyncobject *syncobj = NULL;
+ struct d3dkmthandle *objects = NULL;
+ u32 object_size;
+ u64 *fences = NULL;
+ int ret;
+ enum hmgrentry_type syncobj_type = HMGRENTRY_TYPE_FREE;
+ bool monitored_fence = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.object_count > DXG_MAX_VM_BUS_PACKET_SIZE ||
+ args.object_count == 0) {
+ DXG_ERR("Invalid object count: %d", args.object_count);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ object_size = sizeof(struct d3dkmthandle) * args.object_count;
+ objects = vzalloc(object_size);
+ if (objects == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ ret = copy_from_user(objects, args.objects, object_size);
+ if (ret) {
+ DXG_ERR("failed to copy objects");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED);
+ context = hmgrtable_get_object_by_type(&process->handle_table,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ args.context);
+ if (context) {
+ device_handle = context->device_handle;
+ syncobj_type =
+ hmgrtable_get_object_type(&process->handle_table,
+ objects[0]);
+ }
+ if (device_handle.v == 0) {
+ DXG_ERR("Invalid context handle: %x", args.context.v);
+ ret = -EINVAL;
+ } else {
+ if (syncobj_type == HMGRENTRY_TYPE_MONITOREDFENCE) {
+ monitored_fence = true;
+ } else if (syncobj_type == HMGRENTRY_TYPE_DXGSYNCOBJECT) {
+ syncobj =
+ hmgrtable_get_object_by_type(&process->handle_table,
+ HMGRENTRY_TYPE_DXGSYNCOBJECT,
+ objects[0]);
+ if (syncobj == NULL) {
+ DXG_ERR("Invalid syncobj: %x",
+ objects[0].v);
+ ret = -EINVAL;
+ } else {
+ monitored_fence = syncobj->monitored_fence;
+ }
+ } else {
+ DXG_ERR("Invalid syncobj type: %x",
+ objects[0].v);
+ ret = -EINVAL;
+ }
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED);
+
+ if (ret < 0)
+ goto cleanup;
+
+ if (monitored_fence) {
+ object_size = sizeof(u64) * args.object_count;
+ fences = vzalloc(object_size);
+ if (fences == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ ret = copy_from_user(fences, args.monitored_fence_values,
+ object_size);
+ if (ret) {
+ DXG_ERR("failed to copy fences");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ } else {
+ fences = &args.fence_value;
+ }
+
+ device = dxgprocess_device_by_handle(process, device_handle);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_wait_sync_object_gpu(process, adapter,
+ args.context, args.object_count,
+ objects, fences,
+ !monitored_fence);
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+ if (objects)
+ vfree(objects);
+ if (fences && fences != &args.fence_value)
+ vfree(fences);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static struct ioctl_desc ioctls[] = {
/* 0x00 */ {},
/* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID},
@@ -1485,8 +2164,8 @@ static struct ioctl_desc ioctls[] = {
/* 0x0e */ {},
/* 0x0f */ {},
/* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT},
-/* 0x11 */ {},
-/* 0x12 */ {},
+/* 0x11 */ {dxgkio_signal_sync_object, LX_DXSIGNALSYNCHRONIZATIONOBJECT},
+/* 0x12 */ {dxgkio_wait_sync_object, LX_DXWAITFORSYNCHRONIZATIONOBJECT},
/* 0x13 */ {dxgkio_destroy_allocation, LX_DXDESTROYALLOCATION2},
/* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2},
/* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER},
@@ -1517,17 +2196,22 @@ static struct ioctl_desc ioctls[] = {
/* 0x2e */ {},
/* 0x2f */ {},
/* 0x30 */ {},
-/* 0x31 */ {},
-/* 0x32 */ {},
-/* 0x33 */ {},
+/* 0x31 */ {dxgkio_signal_sync_object_cpu,
+ LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU},
+/* 0x32 */ {dxgkio_signal_sync_object_gpu,
+ LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU},
+/* 0x33 */ {dxgkio_signal_sync_object_gpu2,
+ LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2},
/* 0x34 */ {},
/* 0x35 */ {},
/* 0x36 */ {},
/* 0x37 */ {},
/* 0x38 */ {},
/* 0x39 */ {},
-/* 0x3a */ {},
-/* 0x3b */ {},
+/* 0x3a */ {dxgkio_wait_sync_object_cpu,
+ LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU},
+/* 0x3b */ {dxgkio_wait_sync_object_gpu,
+ LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU},
/* 0x3c */ {},
/* 0x3d */ {},
/* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3},
diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h
index a51b29a6a68f..ee2ebfdd1c13 100644
--- a/drivers/hv/dxgkrnl/misc.h
+++ b/drivers/hv/dxgkrnl/misc.h
@@ -25,6 +25,8 @@ extern const struct d3dkmthandle zerohandle;
* The locks here are in the order from lowest to highest.
* When a lower lock is held, the higher lock should not be acquired.
*
+ * device_list_mutex
+ * host_event_list_mutex
* channel_lock (VMBus channel lock)
* fd_mutex
* plistmutex (process list mutex)
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 4e1069f41d76..39055b0c1069 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -60,6 +60,9 @@ struct winluid {
#define D3DKMT_CREATEALLOCATION_MAX 1024
#define D3DKMT_ADAPTERS_MAX 64
+#define D3DDDI_MAX_BROADCAST_CONTEXT 64
+#define D3DDDI_MAX_OBJECT_WAITED_ON 32
+#define D3DDDI_MAX_OBJECT_SIGNALED 32
struct d3dkmt_adapterinfo {
struct d3dkmthandle adapter_handle;
@@ -343,6 +346,148 @@ struct d3dkmt_createsynchronizationobject2 {
__u32 reserved1;
};
+struct d3dkmt_waitforsynchronizationobject2 {
+ struct d3dkmthandle context;
+ __u32 object_count;
+ struct d3dkmthandle object_array[D3DDDI_MAX_OBJECT_WAITED_ON];
+ union {
+ struct {
+ __u64 fence_value;
+ } fence;
+ __u64 reserved[8];
+ };
+};
+
+struct d3dddicb_signalflags {
+ union {
+ struct {
+ __u32 signal_at_submission:1;
+ __u32 enqueue_cpu_event:1;
+ __u32 allow_fence_rewind:1;
+ __u32 reserved:28;
+ __u32 DXGK_SIGNAL_FLAG_INTERNAL0:1;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dkmt_signalsynchronizationobject2 {
+ struct d3dkmthandle context;
+ __u32 object_count;
+ struct d3dkmthandle object_array[D3DDDI_MAX_OBJECT_SIGNALED];
+ struct d3dddicb_signalflags flags;
+ __u32 context_count;
+ struct d3dkmthandle contexts[D3DDDI_MAX_BROADCAST_CONTEXT];
+ union {
+ struct {
+ __u64 fence_value;
+ } fence;
+ __u64 cpu_event_handle;
+ __u64 reserved[8];
+ };
+};
+
+struct d3dddi_waitforsynchronizationobjectfromcpu_flags {
+ union {
+ struct {
+ __u32 wait_any:1;
+ __u32 reserved:31;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dkmt_waitforsynchronizationobjectfromcpu {
+ struct d3dkmthandle device;
+ __u32 object_count;
+#ifdef __KERNEL__
+ struct d3dkmthandle *objects;
+ __u64 *fence_values;
+#else
+ __u64 objects;
+ __u64 fence_values;
+#endif
+ __u64 async_event;
+ struct d3dddi_waitforsynchronizationobjectfromcpu_flags flags;
+};
+
+struct d3dkmt_signalsynchronizationobjectfromcpu {
+ struct d3dkmthandle device;
+ __u32 object_count;
+#ifdef __KERNEL__
+ struct d3dkmthandle *objects;
+ __u64 *fence_values;
+#else
+ __u64 objects;
+ __u64 fence_values;
+#endif
+ struct d3dddicb_signalflags flags;
+};
+
+struct d3dkmt_waitforsynchronizationobjectfromgpu {
+ struct d3dkmthandle context;
+ __u32 object_count;
+#ifdef __KERNEL__
+ struct d3dkmthandle *objects;
+#else
+ __u64 objects;
+#endif
+ union {
+#ifdef __KERNEL__
+ __u64 *monitored_fence_values;
+#else
+ __u64 monitored_fence_values;
+#endif
+ __u64 fence_value;
+ __u64 reserved[8];
+ };
+};
+
+struct d3dkmt_signalsynchronizationobjectfromgpu {
+ struct d3dkmthandle context;
+ __u32 object_count;
+#ifdef __KERNEL__
+ struct d3dkmthandle *objects;
+#else
+ __u64 objects;
+#endif
+ union {
+#ifdef __KERNEL__
+ __u64 *monitored_fence_values;
+#else
+ __u64 monitored_fence_values;
+#endif
+ __u64 reserved[8];
+ };
+};
+
+struct d3dkmt_signalsynchronizationobjectfromgpu2 {
+ __u32 object_count;
+ __u32 reserved1;
+#ifdef __KERNEL__
+ struct d3dkmthandle *objects;
+#else
+ __u64 objects;
+#endif
+ struct d3dddicb_signalflags flags;
+ __u32 context_count;
+#ifdef __KERNEL__
+ struct d3dkmthandle *contexts;
+#else
+ __u64 contexts;
+#endif
+ union {
+ __u64 fence_value;
+ __u64 cpu_event_handle;
+#ifdef __KERNEL__
+ __u64 *monitored_fence_values;
+#else
+ __u64 monitored_fence_values;
+#endif
+ __u64 reserved[8];
+ };
+};
+
struct d3dkmt_destroysynchronizationobject {
struct d3dkmthandle sync_object;
};
@@ -576,6 +721,10 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
#define LX_DXCREATESYNCHRONIZATIONOBJECT \
_IOWR(0x47, 0x10, struct d3dkmt_createsynchronizationobject2)
+#define LX_DXSIGNALSYNCHRONIZATIONOBJECT \
+ _IOWR(0x47, 0x11, struct d3dkmt_signalsynchronizationobject2)
+#define LX_DXWAITFORSYNCHRONIZATIONOBJECT \
+ _IOWR(0x47, 0x12, struct d3dkmt_waitforsynchronizationobject2)
#define LX_DXDESTROYALLOCATION2 \
_IOWR(0x47, 0x13, struct d3dkmt_destroyallocation2)
#define LX_DXENUMADAPTERS2 \
@@ -586,6 +735,16 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x19, struct d3dkmt_destroydevice)
#define LX_DXDESTROYSYNCHRONIZATIONOBJECT \
_IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject)
+#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \
+ _IOWR(0x47, 0x31, struct d3dkmt_signalsynchronizationobjectfromcpu)
+#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU \
+ _IOWR(0x47, 0x32, struct d3dkmt_signalsynchronizationobjectfromgpu)
+#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2 \
+ _IOWR(0x47, 0x33, struct d3dkmt_signalsynchronizationobjectfromgpu2)
+#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \
+ _IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu)
+#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \
+ _IOWR(0x47, 0x3b, struct d3dkmt_waitforsynchronizationobjectfromgpu)
#define LX_DXENUMADAPTERS3 \
_IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3)
* [PATCH 11/55] drivers: hv: dxgkrnl: Sharing of dxgresource objects
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (9 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 10/55] drivers: hv: dxgkrnl: Operations using " Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 12/55] drivers: hv: dxgkrnl: Sharing of sync objects Eric Curtin
` (43 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement creation of shared resources and ioctls for sharing
dxgresource objects between processes in the virtual machine.
A dxgresource object is a collection of dxgallocation objects.
The driver API allows addition and removal of allocations to and from
a resource, but restricts these operations for a shared resource: once
a resource is "sealed", allocations can no longer be added or removed.
Resources are shared using file descriptor (FD) handles. The name
"NT handle" is used for compatibility with the Windows implementation.
An FD handle is created by the LX_DXSHAREOBJECTS ioctl and can be sent
to another process using any Linux API.
To use a shared resource object in other ioctls, the object must first
be opened using its FD handle. A resource object is opened by the
LX_DXOPENRESOURCEFROMNTHANDLE ioctl, which returns a d3dkmthandle
value that can be used to reference the resource object.
The LX_DXQUERYRESOURCEINFOFROMNTHANDLE ioctl queries the private
driver data of a shared resource object. This private data is needed
to actually open the object with the LX_DXOPENRESOURCEFROMNTHANDLE
ioctl.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 81 ++++
drivers/hv/dxgkrnl/dxgkrnl.h | 77 ++++
drivers/hv/dxgkrnl/dxgmodule.c | 1 +
drivers/hv/dxgkrnl/dxgvmbus.c | 127 +++++
drivers/hv/dxgkrnl/dxgvmbus.h | 30 ++
drivers/hv/dxgkrnl/ioctl.c | 792 +++++++++++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 96 ++++
7 files changed, 1200 insertions(+), 4 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index 04d827a15c54..26fce9aba4f3 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -160,6 +160,17 @@ void dxgadapter_remove_process(struct dxgprocess_adapter *process_info)
list_del(&process_info->adapter_process_list_entry);
}
+void dxgadapter_remove_shared_resource(struct dxgadapter *adapter,
+ struct dxgsharedresource *object)
+{
+ down_write(&adapter->shared_resource_list_lock);
+ if (object->shared_resource_list_entry.next) {
+ list_del(&object->shared_resource_list_entry);
+ object->shared_resource_list_entry.next = NULL;
+ }
+ up_write(&adapter->shared_resource_list_lock);
+}
+
void dxgadapter_add_syncobj(struct dxgadapter *adapter,
struct dxgsyncobject *object)
{
@@ -489,6 +500,69 @@ void dxgdevice_remove_resource(struct dxgdevice *device,
}
}
+struct dxgsharedresource *dxgsharedresource_create(struct dxgadapter *adapter)
+{
+ struct dxgsharedresource *resource;
+
+ resource = kzalloc(sizeof(*resource), GFP_KERNEL);
+ if (resource) {
+ INIT_LIST_HEAD(&resource->resource_list_head);
+ kref_init(&resource->sresource_kref);
+ mutex_init(&resource->fd_mutex);
+ resource->adapter = adapter;
+ }
+ return resource;
+}
+
+void dxgsharedresource_destroy(struct kref *refcount)
+{
+ struct dxgsharedresource *resource;
+
+ resource = container_of(refcount, struct dxgsharedresource,
+ sresource_kref);
+ if (resource->runtime_private_data)
+ vfree(resource->runtime_private_data);
+ if (resource->resource_private_data)
+ vfree(resource->resource_private_data);
+ if (resource->alloc_private_data_sizes)
+ vfree(resource->alloc_private_data_sizes);
+ if (resource->alloc_private_data)
+ vfree(resource->alloc_private_data);
+ kfree(resource);
+}
+
+void dxgsharedresource_add_resource(struct dxgsharedresource *shared_resource,
+ struct dxgresource *resource)
+{
+ down_write(&shared_resource->adapter->shared_resource_list_lock);
+ DXG_TRACE("Adding resource: %p %p", shared_resource, resource);
+ list_add_tail(&resource->shared_resource_list_entry,
+ &shared_resource->resource_list_head);
+ kref_get(&shared_resource->sresource_kref);
+ kref_get(&resource->resource_kref);
+ resource->shared_owner = shared_resource;
+ up_write(&shared_resource->adapter->shared_resource_list_lock);
+}
+
+void dxgsharedresource_remove_resource(struct dxgsharedresource
+ *shared_resource,
+ struct dxgresource *resource)
+{
+ struct dxgadapter *adapter = shared_resource->adapter;
+
+ down_write(&adapter->shared_resource_list_lock);
+ DXG_TRACE("Removing resource: %p %p", shared_resource, resource);
+ if (resource->shared_resource_list_entry.next) {
+ list_del(&resource->shared_resource_list_entry);
+ resource->shared_resource_list_entry.next = NULL;
+ kref_put(&shared_resource->sresource_kref,
+ dxgsharedresource_destroy);
+ resource->shared_owner = NULL;
+ kref_put(&resource->resource_kref, dxgresource_release);
+ }
+ up_write(&adapter->shared_resource_list_lock);
+}
+
struct dxgresource *dxgresource_create(struct dxgdevice *device)
{
struct dxgresource *resource;
@@ -532,6 +606,7 @@ void dxgresource_destroy(struct dxgresource *resource)
struct d3dkmt_destroyallocation2 args = { };
int destroyed = test_and_set_bit(0, &resource->flags);
struct dxgdevice *device = resource->device;
+ struct dxgsharedresource *shared_resource;
if (!destroyed) {
dxgresource_free_handle(resource);
@@ -547,6 +622,12 @@ void dxgresource_destroy(struct dxgresource *resource)
dxgallocation_destroy(alloc);
}
dxgdevice_remove_resource(device, resource);
+ shared_resource = resource->shared_owner;
+ if (shared_resource) {
+ dxgsharedresource_remove_resource(shared_resource,
+ resource);
+ resource->shared_owner = NULL;
+ }
}
kref_put(&resource->resource_kref, dxgresource_release);
}
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 8431523f42de..0336e1843223 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -38,6 +38,7 @@ struct dxgdevice;
struct dxgcontext;
struct dxgallocation;
struct dxgresource;
+struct dxgsharedresource;
struct dxgsyncobject;
/*
@@ -372,6 +373,8 @@ struct dxgadapter {
struct list_head adapter_list_entry;
/* The list of dxgprocess_adapter entries */
struct list_head adapter_process_list_head;
+ /* List of all dxgsharedresource objects */
+ struct list_head shared_resource_list_head;
/* List of all non-device dxgsyncobject objects */
struct list_head syncobj_list_head;
/* This lock protects shared resource and syncobject lists */
@@ -405,6 +408,8 @@ void dxgadapter_remove_syncobj(struct dxgsyncobject *so);
void dxgadapter_add_process(struct dxgadapter *adapter,
struct dxgprocess_adapter *process_info);
void dxgadapter_remove_process(struct dxgprocess_adapter *process_info);
+void dxgadapter_remove_shared_resource(struct dxgadapter *adapter,
+ struct dxgsharedresource *object);
/*
* The object represent the device object.
@@ -484,6 +489,64 @@ void dxgcontext_destroy_safe(struct dxgprocess *pr, struct dxgcontext *ctx);
void dxgcontext_release(struct kref *refcount);
bool dxgcontext_is_active(struct dxgcontext *ctx);
+/*
+ * A shared resource object is created to track the list of dxgresource objects,
+ * which are opened for the same underlying shared resource.
+ * Objects are shared by using a file descriptor handle.
+ * The FD is created by calling dxgk_share_objects with the handle of the
+ * dxgsharedresource. The FD points to a dxgresource object, which is created
+ * by calling dxgk_open_resource_nt. The dxgresource object is referenced by
+ * the FD.
+ *
+ * The object is referenced by every dxgresource in its list.
+ *
+ */
+struct dxgsharedresource {
+ /* Every dxgresource object in the resource list takes a reference */
+ struct kref sresource_kref;
+ struct dxgadapter *adapter;
+ /* List of dxgresource objects, opened for the shared resource. */
+ /* Protected by dxgadapter::shared_resource_list_lock */
+ struct list_head resource_list_head;
+ /* Entry in the list of dxgsharedresource in dxgadapter */
+ /* Protected by dxgadapter::shared_resource_list_lock */
+ struct list_head shared_resource_list_entry;
+ struct mutex fd_mutex;
+ /* Referenced by file descriptors */
+ int host_shared_handle_nt_reference;
+ /* Corresponding global handle in the host */
+ struct d3dkmthandle host_shared_handle;
+ /*
+ * When the sync object is shared by NT handle, this is the
+ * corresponding handle in the host
+ */
+ struct d3dkmthandle host_shared_handle_nt;
+ /* Values below are computed when the resource is sealed */
+ u32 runtime_private_data_size;
+ u32 alloc_private_data_size;
+ u32 resource_private_data_size;
+ u32 allocation_count;
+ union {
+ struct {
+ /* Cannot add new allocations */
+ u32 sealed:1;
+ u32 reserved:31;
+ };
+ long flags;
+ };
+ u32 *alloc_private_data_sizes;
+ u8 *alloc_private_data;
+ u8 *runtime_private_data;
+ u8 *resource_private_data;
+};
+
+struct dxgsharedresource *dxgsharedresource_create(struct dxgadapter *adapter);
+void dxgsharedresource_destroy(struct kref *refcount);
+void dxgsharedresource_add_resource(struct dxgsharedresource *sres,
+ struct dxgresource *res);
+void dxgsharedresource_remove_resource(struct dxgsharedresource *sres,
+ struct dxgresource *res);
+
struct dxgresource {
struct kref resource_kref;
enum dxgobjectstate object_state;
@@ -504,6 +567,8 @@ struct dxgresource {
};
long flags;
};
+ /* Owner of the shared resource */
+ struct dxgsharedresource *shared_owner;
};
struct dxgresource *dxgresource_create(struct dxgdevice *dev);
@@ -658,6 +723,18 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process,
int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryadapterinfo *args);
+int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process,
+ struct d3dkmthandle object,
+ struct d3dkmthandle *shared_handle);
+int dxgvmb_send_destroy_nt_shared_object(struct d3dkmthandle shared_handle);
+int dxgvmb_send_open_resource(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle device,
+ struct d3dkmthandle global_share,
+ u32 allocation_count,
+ u32 total_priv_drv_data_size,
+ struct d3dkmthandle *resource_handle,
+ struct d3dkmthandle *alloc_handles);
int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device,
enum d3dkmdt_standardallocationtype t,
struct d3dkmdt_gdisurfacedata *data,
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index 5a5ca8791d27..69e221613af9 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -258,6 +258,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid,
init_rwsem(&adapter->core_lock);
INIT_LIST_HEAD(&adapter->adapter_process_list_head);
+ INIT_LIST_HEAD(&adapter->shared_resource_list_head);
INIT_LIST_HEAD(&adapter->syncobj_list_head);
init_rwsem(&adapter->shared_resource_list_lock);
adapter->pci_dev = dev;
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 6b2dea24a509..b3a4377c8b0b 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -712,6 +712,79 @@ int dxgvmb_send_destroy_process(struct d3dkmthandle process)
return ret;
}
+int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process,
+ struct d3dkmthandle object,
+ struct d3dkmthandle *shared_handle)
+{
+ struct dxgkvmb_command_createntsharedobject *command;
+ int ret;
+ struct dxgvmbusmsg msg;
+
+ ret = init_message(&msg, NULL, process, sizeof(*command));
+ if (ret)
+ return ret;
+ command = (void *)msg.msg;
+
+ command_vm_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_CREATENTSHAREDOBJECT,
+ process->host_handle);
+ command->object = object;
+
+ ret = dxgglobal_acquire_channel_lock();
+ if (ret < 0)
+ goto cleanup;
+
+ ret = dxgvmb_send_sync_msg(dxgglobal_get_dxgvmbuschannel(),
+ msg.hdr, msg.size, shared_handle,
+ sizeof(*shared_handle));
+
+ dxgglobal_release_channel_lock();
+
+ if (ret < 0)
+ goto cleanup;
+ if (shared_handle->v == 0) {
+ DXG_ERR("failed to create NT shared object");
+ ret = -ENOTRECOVERABLE;
+ }
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_destroy_nt_shared_object(struct d3dkmthandle shared_handle)
+{
+ struct dxgkvmb_command_destroyntsharedobject *command;
+ int ret;
+ struct dxgvmbusmsg msg;
+
+ ret = init_message(&msg, NULL, NULL, sizeof(*command));
+ if (ret)
+ return ret;
+ command = (void *)msg.msg;
+
+ command_vm_to_host_init1(&command->hdr,
+ DXGK_VMBCOMMAND_DESTROYNTSHAREDOBJECT);
+ command->shared_handle = shared_handle;
+
+ ret = dxgglobal_acquire_channel_lock();
+ if (ret < 0)
+ goto cleanup;
+
+ ret = dxgvmb_send_sync_msg_ntstatus(dxgglobal_get_dxgvmbuschannel(),
+ msg.hdr, msg.size);
+
+ dxgglobal_release_channel_lock();
+
+cleanup:
+ free_message(&msg, NULL);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_destroy_sync_object(struct dxgprocess *process,
struct d3dkmthandle sync_object)
{
@@ -1552,6 +1625,60 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_open_resource(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle device,
+ struct d3dkmthandle global_share,
+ u32 allocation_count,
+ u32 total_priv_drv_data_size,
+ struct d3dkmthandle *resource_handle,
+ struct d3dkmthandle *alloc_handles)
+{
+ struct dxgkvmb_command_openresource *command;
+ struct dxgkvmb_command_openresource_return *result;
+ struct d3dkmthandle *handles;
+ int ret;
+ int i;
+ u32 result_size = allocation_count * sizeof(struct d3dkmthandle) +
+ sizeof(*result);
+ struct dxgvmbusmsgres msg = {.hdr = NULL};
+
+ ret = init_message_res(&msg, adapter, process, sizeof(*command),
+ result_size);
+ if (ret)
+ goto cleanup;
+ command = msg.msg;
+ result = msg.res;
+
+ command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_OPENRESOURCE,
+ process->host_handle);
+ command->device = device;
+ command->nt_security_sharing = 1;
+ command->global_share = global_share;
+ command->allocation_count = allocation_count;
+ command->total_priv_drv_data_size = total_priv_drv_data_size;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ result, msg.res_size);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = ntstatus2int(result->status);
+ if (ret < 0)
+ goto cleanup;
+
+ *resource_handle = result->resource;
+ handles = (struct d3dkmthandle *) &result[1];
+ for (i = 0; i < allocation_count; i++)
+ alloc_handles[i] = handles[i];
+
+cleanup:
+ free_message((struct dxgvmbusmsg *)&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device,
enum d3dkmdt_standardallocationtype alloctype,
struct d3dkmdt_gdisurfacedata *alloc_data,
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 89fecbcefbc8..73d7adac60a1 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -172,6 +172,21 @@ struct dxgkvmb_command_signalguestevent {
bool dereference_event;
};
+/*
+ * The command returns struct d3dkmthandle of a shared object for the
+ * given pre-process object
+ */
+struct dxgkvmb_command_createntsharedobject {
+ struct dxgkvmb_command_vm_to_host hdr;
+ struct d3dkmthandle object;
+};
+
+/* The command returns ntstatus */
+struct dxgkvmb_command_destroyntsharedobject {
+ struct dxgkvmb_command_vm_to_host hdr;
+ struct d3dkmthandle shared_handle;
+};
+
/* Returns ntstatus */
struct dxgkvmb_command_setiospaceregion {
struct dxgkvmb_command_vm_to_host hdr;
@@ -305,6 +320,21 @@ struct dxgkvmb_command_createallocation {
/* u8 priv_drv_data[] for each alloc_info */
};
+struct dxgkvmb_command_openresource {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ bool nt_security_sharing;
+ struct d3dkmthandle global_share;
+ u32 allocation_count;
+ u32 total_priv_drv_data_size;
+};
+
+struct dxgkvmb_command_openresource_return {
+ struct d3dkmthandle resource;
+ struct ntstatus status;
+/* struct d3dkmthandle allocation[allocation_count]; */
+};
+
struct dxgkvmb_command_getstandardallocprivdata {
struct dxgkvmb_command_vgpu_to_host hdr;
enum d3dkmdt_standardallocationtype alloc_type;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 0025e1ee2d4d..abb64f6c3a59 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -36,8 +36,35 @@ static char *errorstr(int ret)
}
#endif
+static int dxgsharedresource_release(struct inode *inode, struct file *file)
+{
+ struct dxgsharedresource *resource = file->private_data;
+
+ DXG_TRACE("Release resource: %p", resource);
+ mutex_lock(&resource->fd_mutex);
+ kref_get(&resource->sresource_kref);
+ resource->host_shared_handle_nt_reference--;
+ if (resource->host_shared_handle_nt_reference == 0) {
+ if (resource->host_shared_handle_nt.v) {
+ dxgvmb_send_destroy_nt_shared_object(
+ resource->host_shared_handle_nt);
+ DXG_TRACE("Resource host_handle_nt destroyed: %x",
+ resource->host_shared_handle_nt.v);
+ resource->host_shared_handle_nt.v = 0;
+ }
+ kref_put(&resource->sresource_kref, dxgsharedresource_destroy);
+ }
+ mutex_unlock(&resource->fd_mutex);
+ kref_put(&resource->sresource_kref, dxgsharedresource_destroy);
+ return 0;
+}
+
+static const struct file_operations dxg_resource_fops = {
+ .release = dxgsharedresource_release,
+};
+
static int dxgkio_open_adapter_from_luid(struct dxgprocess *process,
- void *__user inargs)
+ void *__user inargs)
{
struct d3dkmt_openadapterfromluid args;
int ret;
@@ -212,6 +239,98 @@ dxgkp_enum_adapters(struct dxgprocess *process,
return ret;
}
+static int dxgsharedresource_seal(struct dxgsharedresource *shared_resource)
+{
+ int ret = 0;
+ int i = 0;
+ u8 *private_data;
+ u32 data_size;
+ struct dxgresource *resource;
+ struct dxgallocation *alloc;
+
+ DXG_TRACE("Sealing resource: %p", shared_resource);
+
+ down_write(&shared_resource->adapter->shared_resource_list_lock);
+ if (shared_resource->sealed) {
+ DXG_TRACE("Resource already sealed");
+ goto cleanup;
+ }
+ shared_resource->sealed = 1;
+ if (!list_empty(&shared_resource->resource_list_head)) {
+ resource =
+ list_first_entry(&shared_resource->resource_list_head,
+ struct dxgresource,
+ shared_resource_list_entry);
+ DXG_TRACE("First resource: %p", resource);
+ mutex_lock(&resource->resource_mutex);
+ list_for_each_entry(alloc, &resource->alloc_list_head,
+ alloc_list_entry) {
+ DXG_TRACE("Resource alloc: %p %d", alloc,
+ alloc->priv_drv_data->data_size);
+ shared_resource->allocation_count++;
+ shared_resource->alloc_private_data_size +=
+ alloc->priv_drv_data->data_size;
+ if (shared_resource->alloc_private_data_size <
+ alloc->priv_drv_data->data_size) {
+ DXG_ERR("alloc private data overflow");
+ ret = -EINVAL;
+ goto cleanup1;
+ }
+ }
+ if (shared_resource->alloc_private_data_size == 0) {
+ ret = -EINVAL;
+ goto cleanup1;
+ }
+ shared_resource->alloc_private_data =
+ vzalloc(shared_resource->alloc_private_data_size);
+ if (shared_resource->alloc_private_data == NULL) {
+			ret = -ENOMEM;
+ goto cleanup1;
+ }
+ shared_resource->alloc_private_data_sizes =
+ vzalloc(sizeof(u32)*shared_resource->allocation_count);
+ if (shared_resource->alloc_private_data_sizes == NULL) {
+			ret = -ENOMEM;
+ goto cleanup1;
+ }
+ private_data = shared_resource->alloc_private_data;
+ data_size = shared_resource->alloc_private_data_size;
+ i = 0;
+ list_for_each_entry(alloc, &resource->alloc_list_head,
+ alloc_list_entry) {
+ u32 alloc_data_size = alloc->priv_drv_data->data_size;
+
+ if (alloc_data_size) {
+ if (data_size < alloc_data_size) {
+ dev_err(DXGDEV,
+ "Invalid private data size");
+ ret = -EINVAL;
+ goto cleanup1;
+ }
+ shared_resource->alloc_private_data_sizes[i] =
+ alloc_data_size;
+ memcpy(private_data,
+ alloc->priv_drv_data->data,
+ alloc_data_size);
+ vfree(alloc->priv_drv_data);
+ alloc->priv_drv_data = NULL;
+ private_data += alloc_data_size;
+ data_size -= alloc_data_size;
+ }
+ i++;
+ }
+ if (data_size != 0) {
+ DXG_ERR("Data size mismatch");
+ ret = -EINVAL;
+ }
+cleanup1:
+ mutex_unlock(&resource->resource_mutex);
+ }
+cleanup:
+ up_write(&shared_resource->adapter->shared_resource_list_lock);
+ return ret;
+}
+
static int
dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs)
{
@@ -803,6 +922,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
u32 alloc_info_size = 0;
struct dxgresource *resource = NULL;
struct dxgallocation **dxgalloc = NULL;
+ struct dxgsharedresource *shared_resource = NULL;
bool resource_mutex_acquired = false;
u32 standard_alloc_priv_data_size = 0;
void *standard_alloc_priv_data = NULL;
@@ -973,6 +1093,76 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
}
resource->private_runtime_handle =
args.private_runtime_resource_handle;
+ if (args.flags.create_shared) {
+ if (!args.flags.nt_security_sharing) {
+ dev_err(DXGDEV,
+ "nt_security_sharing must be set");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ shared_resource = dxgsharedresource_create(adapter);
+ if (shared_resource == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ shared_resource->runtime_private_data_size =
+ args.private_runtime_data_size;
+ shared_resource->resource_private_data_size =
+ args.priv_drv_data_size;
+ dxgsharedresource_add_resource(shared_resource,
+ resource);
+ if (args.flags.standard_allocation) {
+ shared_resource->resource_private_data =
+ res_priv_data;
+ shared_resource->resource_private_data_size =
+ res_priv_data_size;
+ res_priv_data = NULL;
+ }
+ if (args.private_runtime_data_size) {
+ shared_resource->runtime_private_data =
+ vzalloc(args.private_runtime_data_size);
+ if (shared_resource->runtime_private_data ==
+ NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ ret = copy_from_user(
+ shared_resource->runtime_private_data,
+ args.private_runtime_data,
+ args.private_runtime_data_size);
+ if (ret) {
+ dev_err(DXGDEV,
+ "failed to copy runtime data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+ if (args.priv_drv_data_size &&
+ !args.flags.standard_allocation) {
+ shared_resource->resource_private_data =
+ vzalloc(args.priv_drv_data_size);
+ if (shared_resource->resource_private_data ==
+ NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ ret = copy_from_user(
+ shared_resource->resource_private_data,
+ args.priv_drv_data,
+ args.priv_drv_data_size);
+ if (ret) {
+ dev_err(DXGDEV,
+ "failed to copy res data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+ }
} else {
if (args.resource.v) {
/* Adding new allocations to the given resource */
@@ -991,6 +1181,12 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
ret = -EINVAL;
goto cleanup;
}
+ if (resource->shared_owner &&
+ resource->shared_owner->sealed) {
+ DXG_ERR("Resource is sealed");
+ ret = -EINVAL;
+ goto cleanup;
+ }
/* Synchronize with resource destruction */
mutex_lock(&resource->resource_mutex);
if (!dxgresource_is_active(resource)) {
@@ -1092,9 +1288,16 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
}
}
if (resource && args.flags.create_resource) {
+ if (shared_resource) {
+ dxgsharedresource_remove_resource
+ (shared_resource, resource);
+ }
dxgresource_destroy(resource);
}
}
+ if (shared_resource)
+ kref_put(&shared_resource->sresource_kref,
+ dxgsharedresource_destroy);
if (dxgalloc)
vfree(dxgalloc);
if (standard_alloc_priv_data)
@@ -1140,6 +1343,10 @@ static int validate_alloc(struct dxgallocation *alloc0,
fail_reason = 4;
goto cleanup;
}
+ if (alloc->owner.resource->shared_owner) {
+ fail_reason = 5;
+ goto cleanup;
+ }
} else {
if (alloc->owner.device != device) {
fail_reason = 6;
@@ -2146,6 +2353,582 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgsharedresource_get_host_nt_handle(struct dxgsharedresource *resource,
+ struct dxgprocess *process,
+ struct d3dkmthandle objecthandle)
+{
+ int ret = 0;
+
+ mutex_lock(&resource->fd_mutex);
+ if (resource->host_shared_handle_nt_reference == 0) {
+ ret = dxgvmb_send_create_nt_shared_object(process,
+ objecthandle,
+ &resource->host_shared_handle_nt);
+ if (ret < 0)
+ goto cleanup;
+ DXG_TRACE("Resource host_shared_handle_nt: %x",
+ resource->host_shared_handle_nt.v);
+ kref_get(&resource->sresource_kref);
+ }
+ resource->host_shared_handle_nt_reference++;
+cleanup:
+ mutex_unlock(&resource->fd_mutex);
+ return ret;
+}
+
+enum dxg_sharedobject_type {
+ DXG_SHARED_RESOURCE
+};
+
+static int get_object_fd(enum dxg_sharedobject_type type,
+ void *object, int *fdout)
+{
+ struct file *file;
+ int fd;
+
+ fd = get_unused_fd_flags(O_CLOEXEC);
+ if (fd < 0) {
+ DXG_ERR("get_unused_fd_flags failed: %x", fd);
+ return -ENOTRECOVERABLE;
+ }
+
+ switch (type) {
+ case DXG_SHARED_RESOURCE:
+ file = anon_inode_getfile("dxgresource",
+ &dxg_resource_fops, object, 0);
+ break;
+ default:
+ put_unused_fd(fd);
+ return -EINVAL;
+ }
+ if (IS_ERR(file)) {
+ DXG_ERR("anon_inode_getfile failed: %x", fd);
+ put_unused_fd(fd);
+ return -ENOTRECOVERABLE;
+ }
+
+ fd_install(fd, file);
+ *fdout = fd;
+ return 0;
+}
+
+static int
+dxgkio_share_objects(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_shareobjects args;
+ enum hmgrentry_type object_type;
+ struct dxgsyncobject *syncobj = NULL;
+ struct dxgresource *resource = NULL;
+ struct dxgsharedresource *shared_resource = NULL;
+ struct d3dkmthandle *handles = NULL;
+ int object_fd = -1;
+ void *obj = NULL;
+ u32 handle_size;
+ int ret;
+ u64 tmp = 0;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.object_count == 0 || args.object_count > 1) {
+ DXG_ERR("invalid object count %d", args.object_count);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ handle_size = args.object_count * sizeof(struct d3dkmthandle);
+
+ handles = vzalloc(handle_size);
+ if (handles == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ ret = copy_from_user(handles, args.objects, handle_size);
+ if (ret) {
+ DXG_ERR("failed to copy object handles");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ DXG_TRACE("Sharing handle: %x", handles[0].v);
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED);
+ object_type = hmgrtable_get_object_type(&process->handle_table,
+ handles[0]);
+ obj = hmgrtable_get_object(&process->handle_table, handles[0]);
+ if (obj == NULL) {
+ DXG_ERR("invalid object handle %x", handles[0].v);
+ ret = -EINVAL;
+ } else {
+ switch (object_type) {
+ case HMGRENTRY_TYPE_DXGRESOURCE:
+ resource = obj;
+ if (resource->shared_owner) {
+ kref_get(&resource->resource_kref);
+ shared_resource = resource->shared_owner;
+ } else {
+ resource = NULL;
+ DXG_ERR("resource object is not shared");
+ ret = -EINVAL;
+ }
+ break;
+ default:
+ DXG_ERR("invalid object type %d", object_type);
+ ret = -EINVAL;
+ break;
+ }
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED);
+
+ if (ret < 0)
+ goto cleanup;
+
+ switch (object_type) {
+ case HMGRENTRY_TYPE_DXGRESOURCE:
+ ret = get_object_fd(DXG_SHARED_RESOURCE, shared_resource,
+ &object_fd);
+ if (ret < 0) {
+ DXG_ERR("get_object_fd failed for resource");
+ goto cleanup;
+ }
+ ret = dxgsharedresource_get_host_nt_handle(shared_resource,
+ process, handles[0]);
+ if (ret < 0) {
+ DXG_ERR("get_host_res_nt_handle failed");
+ goto cleanup;
+ }
+ ret = dxgsharedresource_seal(shared_resource);
+ if (ret < 0) {
+ DXG_ERR("dxgsharedresource_seal failed");
+ goto cleanup;
+ }
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ if (ret < 0)
+ goto cleanup;
+
+ DXG_TRACE("Object FD: %x", object_fd);
+
+ tmp = (u64) object_fd;
+
+ ret = copy_to_user(args.shared_handle, &tmp, sizeof(u64));
+ if (ret) {
+ DXG_ERR("failed to copy shared handle");
+ ret = -EINVAL;
+ }
+
+cleanup:
+ if (ret < 0) {
+ if (object_fd >= 0)
+ put_unused_fd(object_fd);
+ }
+
+ if (handles)
+ vfree(handles);
+
+ if (syncobj)
+ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release);
+
+ if (resource)
+ kref_put(&resource->resource_kref, dxgresource_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_query_resource_info_nt(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_queryresourceinfofromnthandle args;
+ int ret;
+ struct dxgdevice *device = NULL;
+ struct dxgsharedresource *shared_resource = NULL;
+ struct file *file = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ file = fget(args.nt_handle);
+ if (!file) {
+ DXG_ERR("failed to get file from handle: %llx",
+ args.nt_handle);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (file->f_op != &dxg_resource_fops) {
+ DXG_ERR("invalid fd: %llx", args.nt_handle);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ shared_resource = file->private_data;
+ if (shared_resource == NULL) {
+ DXG_ERR("invalid private data: %llx", args.nt_handle);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0) {
+ kref_put(&device->device_kref, dxgdevice_release);
+ device = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgsharedresource_seal(shared_resource);
+ if (ret < 0)
+ goto cleanup;
+
+ args.private_runtime_data_size =
+ shared_resource->runtime_private_data_size;
+ args.resource_priv_drv_data_size =
+ shared_resource->resource_private_data_size;
+ args.allocation_count = shared_resource->allocation_count;
+ args.total_priv_drv_data_size =
+ shared_resource->alloc_private_data_size;
+
+ ret = copy_to_user(inargs, &args, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy output args");
+ ret = -EINVAL;
+ }
+
+cleanup:
+
+ if (file)
+ fput(file);
+ if (device)
+ dxgdevice_release_lock_shared(device);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+assign_resource_handles(struct dxgprocess *process,
+ struct dxgsharedresource *shared_resource,
+ struct d3dkmt_openresourcefromnthandle *args,
+ struct d3dkmthandle resource_handle,
+ struct dxgresource *resource,
+ struct dxgallocation **allocs,
+ struct d3dkmthandle *handles)
+{
+ int ret;
+ int i;
+ u8 *cur_priv_data;
+ u32 total_priv_data_size = 0;
+ struct d3dddi_openallocationinfo2 open_alloc_info = { };
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ ret = hmgrtable_assign_handle(&process->handle_table, resource,
+ HMGRENTRY_TYPE_DXGRESOURCE,
+ resource_handle);
+ if (ret < 0)
+ goto cleanup;
+ resource->handle = resource_handle;
+ resource->handle_valid = 1;
+ cur_priv_data = args->total_priv_drv_data;
+ for (i = 0; i < args->allocation_count; i++) {
+ ret = hmgrtable_assign_handle(&process->handle_table, allocs[i],
+ HMGRENTRY_TYPE_DXGALLOCATION,
+ handles[i]);
+ if (ret < 0)
+ goto cleanup;
+ allocs[i]->alloc_handle = handles[i];
+ allocs[i]->handle_valid = 1;
+ open_alloc_info.allocation = handles[i];
+ if (shared_resource->alloc_private_data_sizes)
+ open_alloc_info.priv_drv_data_size =
+ shared_resource->alloc_private_data_sizes[i];
+ else
+ open_alloc_info.priv_drv_data_size = 0;
+
+ total_priv_data_size += open_alloc_info.priv_drv_data_size;
+ open_alloc_info.priv_drv_data = cur_priv_data;
+ cur_priv_data += open_alloc_info.priv_drv_data_size;
+
+ ret = copy_to_user(&args->open_alloc_info[i],
+ &open_alloc_info,
+ sizeof(open_alloc_info));
+ if (ret) {
+ DXG_ERR("failed to copy alloc info");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+ args->total_priv_drv_data_size = total_priv_data_size;
+cleanup:
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+ if (ret < 0) {
+ for (i = 0; i < args->allocation_count; i++)
+ dxgallocation_free_handle(allocs[i]);
+ dxgresource_free_handle(resource);
+ }
+ return ret;
+}
+
+static int
+open_resource(struct dxgprocess *process,
+ struct d3dkmt_openresourcefromnthandle *args,
+ __user struct d3dkmthandle *res_out,
+ __user u32 *total_driver_data_size_out)
+{
+ int ret = 0;
+ int i;
+ struct d3dkmthandle *alloc_handles = NULL;
+ int alloc_handles_size = sizeof(struct d3dkmthandle) *
+ args->allocation_count;
+ struct dxgsharedresource *shared_resource = NULL;
+ struct dxgresource *resource = NULL;
+ struct dxgallocation **allocs = NULL;
+ struct d3dkmthandle global_share = {};
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct d3dkmthandle resource_handle = {};
+ struct file *file = NULL;
+
+ DXG_TRACE("Opening resource handle: %llx", args->nt_handle);
+
+ file = fget(args->nt_handle);
+ if (!file) {
+ DXG_ERR("failed to get file from handle: %llx",
+ args->nt_handle);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ if (file->f_op != &dxg_resource_fops) {
+ DXG_ERR("invalid fd type: %llx", args->nt_handle);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ shared_resource = file->private_data;
+ if (shared_resource == NULL) {
+ DXG_ERR("invalid private data: %llx", args->nt_handle);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ if (kref_get_unless_zero(&shared_resource->sresource_kref) == 0)
+ shared_resource = NULL;
+ else
+ global_share = shared_resource->host_shared_handle_nt;
+
+ if (shared_resource == NULL) {
+ DXG_ERR("Invalid shared resource handle: %x",
+ (u32)args->nt_handle);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ DXG_TRACE("Shared resource: %p %x", shared_resource,
+ global_share.v);
+
+ device = dxgprocess_device_by_handle(process, args->device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0) {
+ kref_put(&device->device_kref, dxgdevice_release);
+ device = NULL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgsharedresource_seal(shared_resource);
+ if (ret < 0)
+ goto cleanup;
+
+ if (args->allocation_count != shared_resource->allocation_count ||
+ args->private_runtime_data_size <
+ shared_resource->runtime_private_data_size ||
+ args->resource_priv_drv_data_size <
+ shared_resource->resource_private_data_size ||
+ args->total_priv_drv_data_size <
+ shared_resource->alloc_private_data_size) {
+ ret = -EINVAL;
+ DXG_ERR("Invalid data sizes");
+ goto cleanup;
+ }
+
+ alloc_handles = vzalloc(alloc_handles_size);
+ if (alloc_handles == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ allocs = vzalloc(sizeof(void *) * args->allocation_count);
+ if (allocs == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ resource = dxgresource_create(device);
+ if (resource == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ dxgsharedresource_add_resource(shared_resource, resource);
+
+ for (i = 0; i < args->allocation_count; i++) {
+ allocs[i] = dxgallocation_create(process);
+ if (allocs[i] == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ ret = dxgresource_add_alloc(resource, allocs[i]);
+ if (ret < 0)
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_open_resource(process, adapter,
+ device->handle, global_share,
+ args->allocation_count,
+ args->total_priv_drv_data_size,
+ &resource_handle, alloc_handles);
+ if (ret < 0) {
+ DXG_ERR("dxgvmb_send_open_resource failed");
+ goto cleanup;
+ }
+
+ if (shared_resource->runtime_private_data_size) {
+ ret = copy_to_user(args->private_runtime_data,
+ shared_resource->runtime_private_data,
+ shared_resource->runtime_private_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy runtime data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+
+ if (shared_resource->resource_private_data_size) {
+ ret = copy_to_user(args->resource_priv_drv_data,
+ shared_resource->resource_private_data,
+ shared_resource->resource_private_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy resource data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+
+ if (shared_resource->alloc_private_data_size) {
+ ret = copy_to_user(args->total_priv_drv_data,
+ shared_resource->alloc_private_data,
+ shared_resource->alloc_private_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy alloc data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+
+ ret = assign_resource_handles(process, shared_resource, args,
+ resource_handle, resource, allocs,
+ alloc_handles);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = copy_to_user(res_out, &resource_handle,
+ sizeof(struct d3dkmthandle));
+ if (ret) {
+ DXG_ERR("failed to copy resource handle to user");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = copy_to_user(total_driver_data_size_out,
+ &args->total_priv_drv_data_size, sizeof(u32));
+ if (ret) {
+ DXG_ERR("failed to copy total driver data size");
+ ret = -EINVAL;
+ }
+
+cleanup:
+
+ if (ret < 0) {
+ if (resource_handle.v) {
+ struct d3dkmt_destroyallocation2 tmp = { };
+
+ tmp.flags.assume_not_in_use = 1;
+ tmp.device = args->device;
+ tmp.resource = resource_handle;
+ dxgvmb_send_destroy_allocation(process, device,
+ &tmp, NULL);
+ }
+ if (resource)
+ dxgresource_destroy(resource);
+ }
+
+ if (file)
+ fput(file);
+ if (allocs)
+ vfree(allocs);
+ if (shared_resource)
+ kref_put(&shared_resource->sresource_kref,
+ dxgsharedresource_destroy);
+ if (alloc_handles)
+ vfree(alloc_handles);
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ dxgdevice_release_lock_shared(device);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ return ret;
+}
+
+static int
+dxgkio_open_resource_nt(struct dxgprocess *process,
+ void *__user inargs)
+{
+ struct d3dkmt_openresourcefromnthandle args;
+ struct d3dkmt_openresourcefromnthandle __user *args_user = inargs;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = open_resource(process, &args,
+ &args_user->resource,
+ &args_user->total_priv_drv_data_size);
+
+cleanup:
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static struct ioctl_desc ioctls[] = {
/* 0x00 */ {},
/* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID},
@@ -2215,10 +2998,11 @@ static struct ioctl_desc ioctls[] = {
/* 0x3c */ {},
/* 0x3d */ {},
/* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3},
-/* 0x3f */ {},
+/* 0x3f */ {dxgkio_share_objects, LX_DXSHAREOBJECTS},
/* 0x40 */ {},
-/* 0x41 */ {},
-/* 0x42 */ {},
+/* 0x41 */ {dxgkio_query_resource_info_nt,
+ LX_DXQUERYRESOURCEINFOFROMNTHANDLE},
+/* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE},
/* 0x43 */ {},
/* 0x44 */ {},
/* 0x45 */ {},
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 39055b0c1069..f74564cf7ee9 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -682,6 +682,94 @@ enum d3dkmt_deviceexecution_state {
_D3DKMT_DEVICEEXECUTION_ERROR_DMAPAGEFAULT = 7,
};
+struct d3dddi_openallocationinfo2 {
+ struct d3dkmthandle allocation;
+#ifdef __KERNEL__
+ void *priv_drv_data;
+#else
+ __u64 priv_drv_data;
+#endif
+ __u32 priv_drv_data_size;
+ __u64 gpu_va;
+ __u64 reserved[6];
+};
+
+struct d3dkmt_openresourcefromnthandle {
+ struct d3dkmthandle device;
+ __u32 reserved;
+ __u64 nt_handle;
+ __u32 allocation_count;
+ __u32 reserved1;
+#ifdef __KERNEL__
+ struct d3dddi_openallocationinfo2 *open_alloc_info;
+#else
+ __u64 open_alloc_info;
+#endif
+ __u32 private_runtime_data_size;
+ __u32 reserved2;
+#ifdef __KERNEL__
+ void *private_runtime_data;
+#else
+ __u64 private_runtime_data;
+#endif
+ __u32 resource_priv_drv_data_size;
+ __u32 reserved3;
+#ifdef __KERNEL__
+ void *resource_priv_drv_data;
+#else
+ __u64 resource_priv_drv_data;
+#endif
+ __u32 total_priv_drv_data_size;
+#ifdef __KERNEL__
+ void *total_priv_drv_data;
+#else
+ __u64 total_priv_drv_data;
+#endif
+ struct d3dkmthandle resource;
+ struct d3dkmthandle keyed_mutex;
+#ifdef __KERNEL__
+ void *keyed_mutex_private_data;
+#else
+ __u64 keyed_mutex_private_data;
+#endif
+ __u32 keyed_mutex_private_data_size;
+ struct d3dkmthandle sync_object;
+};
+
+struct d3dkmt_queryresourceinfofromnthandle {
+ struct d3dkmthandle device;
+ __u32 reserved;
+ __u64 nt_handle;
+#ifdef __KERNEL__
+ void *private_runtime_data;
+#else
+ __u64 private_runtime_data;
+#endif
+ __u32 private_runtime_data_size;
+ __u32 total_priv_drv_data_size;
+ __u32 resource_priv_drv_data_size;
+ __u32 allocation_count;
+};
+
+struct d3dkmt_shareobjects {
+ __u32 object_count;
+ __u32 reserved;
+#ifdef __KERNEL__
+ const struct d3dkmthandle *objects;
+ void *object_attr; /* security attributes */
+#else
+ __u64 objects;
+ __u64 object_attr;
+#endif
+ __u32 desired_access;
+ __u32 reserved1;
+#ifdef __KERNEL__
+ __u64 *shared_handle; /* output file descriptors */
+#else
+ __u64 shared_handle;
+#endif
+};
+
union d3dkmt_enumadapters_filter {
struct {
__u64 include_compute_only:1;
@@ -747,5 +835,13 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x3b, struct d3dkmt_waitforsynchronizationobjectfromgpu)
#define LX_DXENUMADAPTERS3 \
_IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3)
+#define LX_DXSHAREOBJECTS \
+ _IOWR(0x47, 0x3f, struct d3dkmt_shareobjects)
+#define LX_DXOPENSYNCOBJECTFROMNTHANDLE2 \
+ _IOWR(0x47, 0x40, struct d3dkmt_opensyncobjectfromnthandle2)
+#define LX_DXQUERYRESOURCEINFOFROMNTHANDLE \
+ _IOWR(0x47, 0x41, struct d3dkmt_queryresourceinfofromnthandle)
+#define LX_DXOPENRESOURCEFROMNTHANDLE \
+ _IOWR(0x47, 0x42, struct d3dkmt_openresourcefromnthandle)
#endif /* _D3DKMTHK_H */
* [PATCH 12/55] drivers: hv: dxgkrnl: Sharing of sync objects
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (10 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 11/55] drivers: hv: dxgkrnl: Sharing of dxgresource objects Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 13/55] drivers: hv: dxgkrnl: Creation of paging queue objects Eric Curtin
` (42 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement creation of shared sync objects and the ioctl for sharing
dxgsyncobject objects between processes in the virtual machine.
Sync objects are shared using file descriptor (FD) handles.
The name "NT handle" is used for compatibility with the Windows
implementation. An FD handle is created by the LX_DXSHAREOBJECTS ioctl.
The created FD handle can be sent to another process using any Linux API.
To use a shared sync object in other ioctls, the object needs to be
opened using its FD handle. A sync object is opened by the
LX_DXOPENSYNCOBJECTFROMNTHANDLE2 ioctl, which returns a d3dkmthandle
value.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 181 ++++++++++-
drivers/hv/dxgkrnl/dxgkrnl.h | 96 ++++++
drivers/hv/dxgkrnl/dxgmodule.c | 1 +
drivers/hv/dxgkrnl/dxgprocess.c | 4 +
drivers/hv/dxgkrnl/dxgvmbus.c | 221 +++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 35 ++
drivers/hv/dxgkrnl/ioctl.c | 556 +++++++++++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 93 ++++++
8 files changed, 1181 insertions(+), 6 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index 26fce9aba4f3..f59173f13559 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -171,6 +171,26 @@ void dxgadapter_remove_shared_resource(struct dxgadapter *adapter,
up_write(&adapter->shared_resource_list_lock);
}
+void dxgadapter_add_shared_syncobj(struct dxgadapter *adapter,
+ struct dxgsharedsyncobject *object)
+{
+ down_write(&adapter->shared_resource_list_lock);
+ list_add_tail(&object->adapter_shared_syncobj_list_entry,
+ &adapter->adapter_shared_syncobj_list_head);
+ up_write(&adapter->shared_resource_list_lock);
+}
+
+void dxgadapter_remove_shared_syncobj(struct dxgadapter *adapter,
+ struct dxgsharedsyncobject *object)
+{
+ down_write(&adapter->shared_resource_list_lock);
+ if (object->adapter_shared_syncobj_list_entry.next) {
+ list_del(&object->adapter_shared_syncobj_list_entry);
+ object->adapter_shared_syncobj_list_entry.next = NULL;
+ }
+ up_write(&adapter->shared_resource_list_lock);
+}
+
void dxgadapter_add_syncobj(struct dxgadapter *adapter,
struct dxgsyncobject *object)
{
@@ -622,7 +642,7 @@ void dxgresource_destroy(struct dxgresource *resource)
dxgallocation_destroy(alloc);
}
dxgdevice_remove_resource(device, resource);
- shared_resource = resource->shared_owner;
+ shared_resource = resource->shared_owner;
if (shared_resource) {
dxgsharedresource_remove_resource(shared_resource,
resource);
@@ -736,6 +756,9 @@ struct dxgcontext *dxgcontext_create(struct dxgdevice *device)
*/
void dxgcontext_destroy(struct dxgprocess *process, struct dxgcontext *context)
{
+ struct dxghwqueue *hwqueue;
+ struct dxghwqueue *tmp;
+
DXG_TRACE("Destroying context %p", context);
context->object_state = DXGOBJECTSTATE_DESTROYED;
if (context->device) {
@@ -747,6 +770,10 @@ void dxgcontext_destroy(struct dxgprocess *process, struct dxgcontext *context)
dxgdevice_remove_context(context->device, context);
kref_put(&context->device->device_kref, dxgdevice_release);
}
+ list_for_each_entry_safe(hwqueue, tmp, &context->hwqueue_list_head,
+ hwqueue_list_entry) {
+ dxghwqueue_destroy(process, hwqueue);
+ }
kref_put(&context->context_kref, dxgcontext_release);
}
@@ -773,6 +800,38 @@ void dxgcontext_release(struct kref *refcount)
kfree(context);
}
+int dxgcontext_add_hwqueue(struct dxgcontext *context,
+ struct dxghwqueue *hwqueue)
+{
+ int ret = 0;
+
+ down_write(&context->hwqueue_list_lock);
+ if (dxgcontext_is_active(context))
+ list_add_tail(&hwqueue->hwqueue_list_entry,
+ &context->hwqueue_list_head);
+ else
+ ret = -ENODEV;
+ up_write(&context->hwqueue_list_lock);
+ return ret;
+}
+
+void dxgcontext_remove_hwqueue(struct dxgcontext *context,
+ struct dxghwqueue *hwqueue)
+{
+ if (hwqueue->hwqueue_list_entry.next) {
+ list_del(&hwqueue->hwqueue_list_entry);
+ hwqueue->hwqueue_list_entry.next = NULL;
+ }
+}
+
+void dxgcontext_remove_hwqueue_safe(struct dxgcontext *context,
+ struct dxghwqueue *hwqueue)
+{
+ down_write(&context->hwqueue_list_lock);
+ dxgcontext_remove_hwqueue(context, hwqueue);
+ up_write(&context->hwqueue_list_lock);
+}
+
struct dxgallocation *dxgallocation_create(struct dxgprocess *process)
{
struct dxgallocation *alloc;
@@ -958,6 +1017,63 @@ void dxgprocess_adapter_remove_device(struct dxgdevice *device)
mutex_unlock(&device->adapter_info->device_list_mutex);
}
+struct dxgsharedsyncobject *dxgsharedsyncobj_create(struct dxgadapter *adapter,
+ struct dxgsyncobject *so)
+{
+ struct dxgsharedsyncobject *syncobj;
+
+ syncobj = kzalloc(sizeof(*syncobj), GFP_KERNEL);
+ if (syncobj) {
+ kref_init(&syncobj->ssyncobj_kref);
+ INIT_LIST_HEAD(&syncobj->shared_syncobj_list_head);
+ syncobj->adapter = adapter;
+ syncobj->type = so->type;
+ syncobj->monitored_fence = so->monitored_fence;
+ dxgadapter_add_shared_syncobj(adapter, syncobj);
+ kref_get(&adapter->adapter_kref);
+ init_rwsem(&syncobj->syncobj_list_lock);
+ mutex_init(&syncobj->fd_mutex);
+ }
+ return syncobj;
+}
+
+void dxgsharedsyncobj_release(struct kref *refcount)
+{
+ struct dxgsharedsyncobject *syncobj;
+
+ syncobj = container_of(refcount, struct dxgsharedsyncobject,
+ ssyncobj_kref);
+ DXG_TRACE("Destroying shared sync object %p", syncobj);
+ if (syncobj->adapter) {
+ dxgadapter_remove_shared_syncobj(syncobj->adapter,
+ syncobj);
+ kref_put(&syncobj->adapter->adapter_kref,
+ dxgadapter_release);
+ }
+ kfree(syncobj);
+}
+
+void dxgsharedsyncobj_add_syncobj(struct dxgsharedsyncobject *shared,
+ struct dxgsyncobject *syncobj)
+{
+ DXG_TRACE("Add syncobj 0x%p 0x%p", shared, syncobj);
+ kref_get(&shared->ssyncobj_kref);
+ down_write(&shared->syncobj_list_lock);
+ list_add(&syncobj->shared_syncobj_list_entry,
+ &shared->shared_syncobj_list_head);
+ syncobj->shared_owner = shared;
+ up_write(&shared->syncobj_list_lock);
+}
+
+void dxgsharedsyncobj_remove_syncobj(struct dxgsharedsyncobject *shared,
+ struct dxgsyncobject *syncobj)
+{
+ DXG_TRACE("Remove syncobj 0x%p", shared);
+ down_write(&shared->syncobj_list_lock);
+ list_del(&syncobj->shared_syncobj_list_entry);
+ up_write(&shared->syncobj_list_lock);
+}
+
struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process,
struct dxgdevice *device,
struct dxgadapter *adapter,
@@ -1091,7 +1207,70 @@ void dxgsyncobject_release(struct kref *refcount)
struct dxgsyncobject *syncobj;
syncobj = container_of(refcount, struct dxgsyncobject, syncobj_kref);
+ if (syncobj->shared_owner) {
+ dxgsharedsyncobj_remove_syncobj(syncobj->shared_owner,
+ syncobj);
+ kref_put(&syncobj->shared_owner->ssyncobj_kref,
+ dxgsharedsyncobj_release);
+ }
if (syncobj->host_event)
kfree(syncobj->host_event);
kfree(syncobj);
}
+
+struct dxghwqueue *dxghwqueue_create(struct dxgcontext *context)
+{
+ struct dxgprocess *process = context->device->process;
+ struct dxghwqueue *hwqueue = kzalloc(sizeof(*hwqueue), GFP_KERNEL);
+
+ if (hwqueue) {
+ kref_init(&hwqueue->hwqueue_kref);
+ hwqueue->context = context;
+ hwqueue->process = process;
+ hwqueue->device_handle = context->device->handle;
+ if (dxgcontext_add_hwqueue(context, hwqueue) < 0) {
+ kref_put(&hwqueue->hwqueue_kref, dxghwqueue_release);
+ hwqueue = NULL;
+ } else {
+ kref_get(&context->context_kref);
+ }
+ }
+ return hwqueue;
+}
+
+void dxghwqueue_destroy(struct dxgprocess *process, struct dxghwqueue *hwqueue)
+{
+ DXG_TRACE("Destroying hwqueue %p", hwqueue);
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ if (hwqueue->handle.v) {
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_DXGHWQUEUE,
+ hwqueue->handle);
+ hwqueue->handle.v = 0;
+ }
+ if (hwqueue->progress_fence_sync_object.v) {
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_MONITOREDFENCE,
+ hwqueue->progress_fence_sync_object);
+ hwqueue->progress_fence_sync_object.v = 0;
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+
+ if (hwqueue->progress_fence_mapped_address) {
+ dxg_unmap_iospace(hwqueue->progress_fence_mapped_address,
+ PAGE_SIZE);
+ hwqueue->progress_fence_mapped_address = NULL;
+ }
+ dxgcontext_remove_hwqueue_safe(hwqueue->context, hwqueue);
+
+ kref_put(&hwqueue->context->context_kref, dxgcontext_release);
+ kref_put(&hwqueue->hwqueue_kref, dxghwqueue_release);
+}
+
+void dxghwqueue_release(struct kref *refcount)
+{
+ struct dxghwqueue *hwqueue;
+
+ hwqueue = container_of(refcount, struct dxghwqueue, hwqueue_kref);
+ kfree(hwqueue);
+}
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 0336e1843223..0330352b9c06 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -40,6 +40,8 @@ struct dxgallocation;
struct dxgresource;
struct dxgsharedresource;
struct dxgsyncobject;
+struct dxgsharedsyncobject;
+struct dxghwqueue;
/*
* Driver private data.
@@ -137,6 +139,18 @@ struct dxghosteventcpu {
* "device" syncobject, because they belong to a device (dxgdevice).
* Device syncobjects are inserted to a list in dxgdevice.
*
+ * A syncobject can be "shared", meaning that it can be opened by many
+ * processes.
+ *
+ * Shared syncobjects are inserted to a list in its owner
+ * (dxgsharedsyncobject).
+ * A syncobject can be shared by using a global handle or by using
+ * "NT security handle".
+ * When global handle sharing is used, the handle is created durinig object
+ * creation.
+ * When "NT security" is used, the handle for sharing is create be calling
+ * dxgk_share_objects. On Linux "NT handle" is represented by a file
+ * descriptor. FD points to dxgsharedsyncobject.
*/
struct dxgsyncobject {
struct kref syncobj_kref;
@@ -146,6 +160,8 @@ struct dxgsyncobject {
* List entry in dxgadapter for other objects
*/
struct list_head syncobj_list_entry;
+ /* List entry in the dxgsharedsyncobject object for shared synobjects */
+ struct list_head shared_syncobj_list_entry;
/* Adapter the syncobject belongs to. NULL for stopped sync objects. */
struct dxgadapter *adapter;
/*
@@ -156,6 +172,8 @@ struct dxgsyncobject {
struct dxgprocess *process;
/* Used by D3DDDI_CPU_NOTIFICATION objects */
struct dxghosteventcpu *host_event;
+ /* Owner object for shared syncobjects */
+ struct dxgsharedsyncobject *shared_owner;
/* CPU virtual address of the fence value for "device" syncobjects */
void *mapped_address;
/* Handle in the process handle table */
@@ -187,6 +205,41 @@ struct dxgvgpuchannel {
struct hv_device *hdev;
};
+/*
+ * The object is used as parent of all sync objects, created for a shared
+ * syncobject. When a shared syncobject is created without NT security, the
+ * handle in the global handle table will point to this object.
+ */
+struct dxgsharedsyncobject {
+ struct kref ssyncobj_kref;
+ /* Referenced by file descriptors */
+ int host_shared_handle_nt_reference;
+ /* Corresponding handle in the host global handle table */
+ struct d3dkmthandle host_shared_handle;
+ /*
+ * When the sync object is shared by NT handle, this is the
+ * corresponding handle in the host
+ */
+ struct d3dkmthandle host_shared_handle_nt;
+ /* Protects access to host_shared_handle_nt */
+ struct mutex fd_mutex;
+ struct rw_semaphore syncobj_list_lock;
+ struct list_head shared_syncobj_list_head;
+ struct list_head adapter_shared_syncobj_list_entry;
+ struct dxgadapter *adapter;
+ enum d3dddi_synchronizationobject_type type;
+ u32 monitored_fence:1;
+};
+
+struct dxgsharedsyncobject *dxgsharedsyncobj_create(struct dxgadapter *adapter,
+ struct dxgsyncobject
+ *syncobj);
+void dxgsharedsyncobj_release(struct kref *refcount);
+void dxgsharedsyncobj_add_syncobj(struct dxgsharedsyncobject *sharedsyncobj,
+ struct dxgsyncobject *syncobj);
+void dxgsharedsyncobj_remove_syncobj(struct dxgsharedsyncobject *sharedsyncobj,
+ struct dxgsyncobject *syncobj);
+
struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process,
struct dxgdevice *device,
struct dxgadapter *adapter,
@@ -375,6 +428,8 @@ struct dxgadapter {
struct list_head adapter_process_list_head;
/* List of all dxgsharedresource objects */
struct list_head shared_resource_list_head;
+ /* List of all dxgsharedsyncobject objects */
+ struct list_head adapter_shared_syncobj_list_head;
/* List of all non-device dxgsyncobject objects */
struct list_head syncobj_list_head;
/* This lock protects shared resource and syncobject lists */
@@ -402,6 +457,10 @@ void dxgadapter_release_lock_shared(struct dxgadapter *adapter);
int dxgadapter_acquire_lock_exclusive(struct dxgadapter *adapter);
void dxgadapter_acquire_lock_forced(struct dxgadapter *adapter);
void dxgadapter_release_lock_exclusive(struct dxgadapter *adapter);
+void dxgadapter_add_shared_syncobj(struct dxgadapter *adapter,
+ struct dxgsharedsyncobject *so);
+void dxgadapter_remove_shared_syncobj(struct dxgadapter *adapter,
+ struct dxgsharedsyncobject *so);
void dxgadapter_add_syncobj(struct dxgadapter *adapter,
struct dxgsyncobject *so);
void dxgadapter_remove_syncobj(struct dxgsyncobject *so);
@@ -487,8 +546,32 @@ struct dxgcontext *dxgcontext_create(struct dxgdevice *dev);
void dxgcontext_destroy(struct dxgprocess *pr, struct dxgcontext *ctx);
void dxgcontext_destroy_safe(struct dxgprocess *pr, struct dxgcontext *ctx);
void dxgcontext_release(struct kref *refcount);
+int dxgcontext_add_hwqueue(struct dxgcontext *ctx,
+ struct dxghwqueue *hq);
+void dxgcontext_remove_hwqueue(struct dxgcontext *ctx, struct dxghwqueue *hq);
+void dxgcontext_remove_hwqueue_safe(struct dxgcontext *ctx,
+ struct dxghwqueue *hq);
bool dxgcontext_is_active(struct dxgcontext *ctx);
+/*
+ * The object represents the execution hardware queue of a device.
+ */
+struct dxghwqueue {
+ /* entry in the context hw queue list */
+ struct list_head hwqueue_list_entry;
+ struct kref hwqueue_kref;
+ struct dxgcontext *context;
+ struct dxgprocess *process;
+ struct d3dkmthandle progress_fence_sync_object;
+ struct d3dkmthandle handle;
+ struct d3dkmthandle device_handle;
+ void *progress_fence_mapped_address;
+};
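dxghwqueue follows the same kref pattern as the other driver objects: the creator holds the initial reference, the handle table takes another, and `dxghwqueue_release` frees the object when the count drops to zero. A stripped-down model of that lifecycle, with a plain int in place of the kernel's atomic kref (names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct hwqueue {
	int kref;   /* models hwqueue_kref */
	int *freed; /* lets a caller observe destruction */
};

static struct hwqueue *hwqueue_create(int *freed_flag)
{
	struct hwqueue *hq = calloc(1, sizeof(*hq));

	if (hq) {
		hq->kref = 1; /* reference owned by the creator */
		hq->freed = freed_flag;
	}
	return hq;
}

static void hwqueue_get(struct hwqueue *hq)
{
	hq->kref++;
}

/* Models dxghwqueue_release(): destroy on the last put. */
static void hwqueue_put(struct hwqueue *hq)
{
	if (--hq->kref == 0) {
		*hq->freed = 1;
		free(hq);
	}
}
```

In the driver the put side is `kref_put(&hq->hwqueue_kref, dxghwqueue_release)`; the model only illustrates why both the creator and the handle table must drop their references before the object goes away.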
+
+struct dxghwqueue *dxghwqueue_create(struct dxgcontext *ctx);
+void dxghwqueue_destroy(struct dxgprocess *pr, struct dxghwqueue *hq);
+void dxghwqueue_release(struct kref *refcount);
+
/*
* A shared resource object is created to track the list of dxgresource objects,
* which are opened for the same underlying shared resource.
@@ -720,9 +803,22 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process,
d3dkmt_waitforsynchronizationobjectfromcpu
*args,
u64 cpu_event);
+int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_createhwqueue *args,
+ struct d3dkmt_createhwqueue __user *inargs,
+ struct dxghwqueue *hq);
+int dxgvmb_send_destroy_hwqueue(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle handle);
int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryadapterinfo *args);
+int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process,
+ struct dxgvmbuschannel *channel,
+ struct d3dkmt_opensyncobjectfromnthandle2
+ *args,
+ struct dxgsyncobject *syncobj);
int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process,
struct d3dkmthandle object,
struct d3dkmthandle *shared_handle);
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index 69e221613af9..8cbe1095599f 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -259,6 +259,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid,
INIT_LIST_HEAD(&adapter->adapter_process_list_head);
INIT_LIST_HEAD(&adapter->shared_resource_list_head);
+ INIT_LIST_HEAD(&adapter->adapter_shared_syncobj_list_head);
INIT_LIST_HEAD(&adapter->syncobj_list_head);
init_rwsem(&adapter->shared_resource_list_lock);
adapter->pci_dev = dev;
diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c
index a41985ef438d..4021084ebd78 100644
--- a/drivers/hv/dxgkrnl/dxgprocess.c
+++ b/drivers/hv/dxgkrnl/dxgprocess.c
@@ -277,6 +277,10 @@ struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process,
device_handle =
((struct dxgcontext *)obj)->device_handle;
break;
+ case HMGRENTRY_TYPE_DXGHWQUEUE:
+ device_handle =
+ ((struct dxghwqueue *)obj)->device_handle;
+ break;
default:
DXG_ERR("invalid handle type: %d", t);
break;
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index b3a4377c8b0b..e83600945de1 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -712,6 +712,69 @@ int dxgvmb_send_destroy_process(struct d3dkmthandle process)
return ret;
}
+int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process,
+ struct dxgvmbuschannel *channel,
+ struct d3dkmt_opensyncobjectfromnthandle2
+ *args,
+ struct dxgsyncobject *syncobj)
+{
+ struct dxgkvmb_command_opensyncobject *command;
+ struct dxgkvmb_command_opensyncobject_return result = { };
+ int ret;
+ struct dxgvmbusmsg msg;
+
+ ret = init_message(&msg, NULL, process, sizeof(*command));
+ if (ret)
+ return ret;
+ command = (void *)msg.msg;
+
+ command_vm_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_OPENSYNCOBJECT,
+ process->host_handle);
+ command->device = args->device;
+ command->global_sync_object = syncobj->shared_owner->host_shared_handle;
+ command->flags = args->flags;
+ if (syncobj->monitored_fence)
+ command->engine_affinity =
+ args->monitored_fence.engine_affinity;
+
+ ret = dxgglobal_acquire_channel_lock();
+ if (ret < 0)
+ goto cleanup;
+
+ ret = dxgvmb_send_sync_msg(channel, msg.hdr, msg.size,
+ &result, sizeof(result));
+
+ dxgglobal_release_channel_lock();
+
+ if (ret < 0)
+ goto cleanup;
+
+ ret = ntstatus2int(result.status);
+ if (ret < 0)
+ goto cleanup;
+
+ args->sync_object = result.sync_object;
+ if (syncobj->monitored_fence) {
+ void *va = dxg_map_iospace(result.guest_cpu_physical_address,
+ PAGE_SIZE, PROT_READ | PROT_WRITE,
+ true);
+ if (va == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ args->monitored_fence.fence_value_cpu_va = va;
+ args->monitored_fence.fence_value_gpu_va =
+ result.gpu_virtual_address;
+ syncobj->mapped_address = va;
+ }
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process,
struct d3dkmthandle object,
struct d3dkmthandle *shared_handle)
@@ -2050,6 +2113,164 @@ int dxgvmb_send_wait_sync_object_gpu(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_createhwqueue *args,
+ struct d3dkmt_createhwqueue __user *inargs,
+ struct dxghwqueue *hwqueue)
+{
+ struct dxgkvmb_command_createhwqueue *command = NULL;
+ u32 cmd_size = sizeof(struct dxgkvmb_command_createhwqueue);
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ if (args->priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ DXG_ERR("invalid private driver data size: %d",
+ args->priv_drv_data_size);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args->priv_drv_data_size)
+ cmd_size += args->priv_drv_data_size - 1;
+
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_CREATEHWQUEUE,
+ process->host_handle);
+ command->context = args->context;
+ command->flags = args->flags;
+ command->priv_drv_data_size = args->priv_drv_data_size;
+ if (args->priv_drv_data_size) {
+ ret = copy_from_user(command->priv_drv_data,
+ args->priv_drv_data,
+ args->priv_drv_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy private data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ command, cmd_size);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = ntstatus2int(command->status);
+ if (ret < 0) {
+ DXG_ERR("dxgvmb_send_sync_msg failed: %x",
+ command->status.v);
+ goto cleanup;
+ }
+
+ ret = hmgrtable_assign_handle_safe(&process->handle_table, hwqueue,
+ HMGRENTRY_TYPE_DXGHWQUEUE,
+ command->hwqueue);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = hmgrtable_assign_handle_safe(&process->handle_table,
+ NULL,
+ HMGRENTRY_TYPE_MONITOREDFENCE,
+ command->hwqueue_progress_fence);
+ if (ret < 0)
+ goto cleanup;
+
+ hwqueue->handle = command->hwqueue;
+ hwqueue->progress_fence_sync_object = command->hwqueue_progress_fence;
+
+ hwqueue->progress_fence_mapped_address =
+ dxg_map_iospace((u64)command->hwqueue_progress_fence_cpuva,
+ PAGE_SIZE, PROT_READ | PROT_WRITE, true);
+ if (hwqueue->progress_fence_mapped_address == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ ret = copy_to_user(&inargs->queue, &command->hwqueue,
+ sizeof(struct d3dkmthandle));
+ if (ret < 0) {
+ DXG_ERR("failed to copy hwqueue handle");
+ goto cleanup;
+ }
+ ret = copy_to_user(&inargs->queue_progress_fence,
+ &command->hwqueue_progress_fence,
+ sizeof(struct d3dkmthandle));
+ if (ret < 0) {
+ DXG_ERR("failed to copy progress fence");
+ goto cleanup;
+ }
+ ret = copy_to_user(&inargs->queue_progress_fence_cpu_va,
+ &hwqueue->progress_fence_mapped_address,
+ sizeof(inargs->queue_progress_fence_cpu_va));
+ if (ret < 0) {
+ DXG_ERR("failed to copy fence cpu va");
+ goto cleanup;
+ }
+ ret = copy_to_user(&inargs->queue_progress_fence_gpu_va,
+ &command->hwqueue_progress_fence_gpuva,
+ sizeof(u64));
+ if (ret < 0) {
+ DXG_ERR("failed to copy fence gpu va");
+ goto cleanup;
+ }
+ if (args->priv_drv_data_size) {
+ ret = copy_to_user(args->priv_drv_data,
+ command->priv_drv_data,
+ args->priv_drv_data_size);
+ if (ret < 0)
+ DXG_ERR("failed to copy private data");
+ }
+
+cleanup:
+ if (ret < 0) {
+ DXG_ERR("failed %x", ret);
+ if (hwqueue->handle.v) {
+ hmgrtable_free_handle_safe(&process->handle_table,
+ HMGRENTRY_TYPE_DXGHWQUEUE,
+ hwqueue->handle);
+ hwqueue->handle.v = 0;
+ }
+ if (command && command->hwqueue.v)
+ dxgvmb_send_destroy_hwqueue(process, adapter,
+ command->hwqueue);
+ }
+ free_message(&msg, process);
+ return ret;
+}
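The command sizing in dxgvmb_send_create_hwqueue relies on the legacy one-byte trailing array in `dxgkvmb_command_createhwqueue` (`char priv_drv_data[1]`): one payload byte is already included in `sizeof(*command)`, so it is subtracted back out when private data is appended. A compact model of that arithmetic (the struct here is a simplified stand-in, not the real command layout):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Models a VMBus command with a one-byte trailing payload array. */
struct cmd {
	uint32_t hdr;
	uint32_t priv_drv_data_size;
	char priv_drv_data[1]; /* payload actually extends past the struct */
};

/* Mirrors the driver's sizing: subtract the byte sizeof() already counts. */
static size_t cmd_size_for(uint32_t payload)
{
	size_t size = sizeof(struct cmd);

	if (payload)
		size += payload - 1;
	return size;
}
```

Modern kernel code would use a C99 flexible array member (`char priv_drv_data[]`) and drop the `- 1`, but the `[1]` form matches the wire format shared with the Windows host.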
+
+int dxgvmb_send_destroy_hwqueue(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle handle)
+{
+ int ret;
+ struct dxgkvmb_command_destroyhwqueue *command;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_DESTROYHWQUEUE,
+ process->host_handle);
+ command->hwqueue = handle;
+
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryadapterinfo *args)
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 73d7adac60a1..2e2fd1ae5ec2 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -172,6 +172,21 @@ struct dxgkvmb_command_signalguestevent {
bool dereference_event;
};
+struct dxgkvmb_command_opensyncobject {
+ struct dxgkvmb_command_vm_to_host hdr;
+ struct d3dkmthandle device;
+ struct d3dkmthandle global_sync_object;
+ u32 engine_affinity;
+ struct d3dddi_synchronizationobject_flags flags;
+};
+
+struct dxgkvmb_command_opensyncobject_return {
+ struct d3dkmthandle sync_object;
+ struct ntstatus status;
+ u64 gpu_virtual_address;
+ u64 guest_cpu_physical_address;
+};
+
/*
* The command returns struct d3dkmthandle of a shared object for the
* given pre-process object
@@ -508,4 +523,24 @@ struct dxgkvmb_command_waitforsyncobjectfromgpu {
/* struct d3dkmthandle ObjectHandles[object_count] */
};
+/* Returns the same structure */
+struct dxgkvmb_command_createhwqueue {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct ntstatus status;
+ struct d3dkmthandle hwqueue;
+ struct d3dkmthandle hwqueue_progress_fence;
+ void *hwqueue_progress_fence_cpuva;
+ u64 hwqueue_progress_fence_gpuva;
+ struct d3dkmthandle context;
+ struct d3dddi_createhwqueueflags flags;
+ u32 priv_drv_data_size;
+ char priv_drv_data[1];
+};
+
+/* The command returns ntstatus */
+struct dxgkvmb_command_destroyhwqueue {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle hwqueue;
+};
+
#endif /* _DXGVMBUS_H */
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index abb64f6c3a59..3cfc1c40e0bb 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -36,6 +36,33 @@ static char *errorstr(int ret)
}
#endif
+static int dxgsyncobj_release(struct inode *inode, struct file *file)
+{
+ struct dxgsharedsyncobject *syncobj = file->private_data;
+
+ DXG_TRACE("Release syncobj: %p", syncobj);
+ mutex_lock(&syncobj->fd_mutex);
+ kref_get(&syncobj->ssyncobj_kref);
+ syncobj->host_shared_handle_nt_reference--;
+ if (syncobj->host_shared_handle_nt_reference == 0) {
+ if (syncobj->host_shared_handle_nt.v) {
+ dxgvmb_send_destroy_nt_shared_object(
+ syncobj->host_shared_handle_nt);
+ DXG_TRACE("Syncobj host_handle_nt destroyed: %x",
+ syncobj->host_shared_handle_nt.v);
+ syncobj->host_shared_handle_nt.v = 0;
+ }
+ kref_put(&syncobj->ssyncobj_kref, dxgsharedsyncobj_release);
+ }
+ mutex_unlock(&syncobj->fd_mutex);
+ kref_put(&syncobj->ssyncobj_kref, dxgsharedsyncobj_release);
+ return 0;
+}
+
+static const struct file_operations dxg_syncobj_fops = {
+ .release = dxgsyncobj_release,
+};
+
static int dxgsharedresource_release(struct inode *inode, struct file *file)
{
struct dxgsharedresource *resource = file->private_data;
@@ -833,6 +860,156 @@ dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_create_hwqueue(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_createhwqueue args;
+ struct dxgdevice *device = NULL;
+ struct dxgcontext *context = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct dxghwqueue *hwqueue = NULL;
+ int ret;
+ bool device_lock_acquired = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ /*
+ * The call acquires reference on the device. It is safe to access the
+ * adapter, because the device holds reference on it.
+ */
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ args.context);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0)
+ goto cleanup;
+
+ device_lock_acquired = true;
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED);
+ context = hmgrtable_get_object_by_type(&process->handle_table,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ args.context);
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED);
+
+ if (context == NULL) {
+ DXG_ERR("Invalid context handle %x", args.context.v);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ hwqueue = dxghwqueue_create(context);
+ if (hwqueue == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_create_hwqueue(process, adapter, &args,
+ inargs, hwqueue);
+
+cleanup:
+
+ if (ret < 0 && hwqueue)
+ dxghwqueue_destroy(process, hwqueue);
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+
+ if (device_lock_acquired)
+ dxgdevice_release_lock_shared(device);
+
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int dxgkio_destroy_hwqueue(struct dxgprocess *process,
+ void *__user inargs)
+{
+ struct d3dkmt_destroyhwqueue args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+ struct dxghwqueue *hwqueue = NULL;
+ struct d3dkmthandle device_handle = {};
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ hwqueue = hmgrtable_get_object_by_type(&process->handle_table,
+ HMGRENTRY_TYPE_DXGHWQUEUE,
+ args.queue);
+ if (hwqueue) {
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_DXGHWQUEUE, args.queue);
+ hwqueue->handle.v = 0;
+ device_handle = hwqueue->device_handle;
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+
+ if (hwqueue == NULL) {
+ DXG_ERR("invalid hwqueue handle: %x", args.queue.v);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ /*
+ * The call acquires reference on the device. It is safe to access the
+ * adapter, because the device holds reference on it.
+ */
+ device = dxgprocess_device_by_handle(process, device_handle);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_destroy_hwqueue(process, adapter, args.queue);
+
+ dxghwqueue_destroy(process, hwqueue);
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static int
get_standard_alloc_priv_data(struct dxgdevice *device,
struct d3dkmt_createstandardallocation *alloc_info,
@@ -1548,6 +1725,164 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs)
+{
+ int ret;
+ struct d3dkmt_submitsignalsyncobjectstohwqueue args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct d3dkmthandle hwqueue = {};
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.hwqueue_count > D3DDDI_MAX_BROADCAST_CONTEXT ||
+ args.hwqueue_count == 0) {
+ DXG_ERR("invalid hwqueue count: %d",
+ args.hwqueue_count);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.object_count > D3DDDI_MAX_OBJECT_SIGNALED ||
+ args.object_count == 0) {
+ DXG_ERR("invalid number of syncobjects: %d",
+ args.object_count);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = copy_from_user(&hwqueue, args.hwqueues,
+ sizeof(struct d3dkmthandle));
+ if (ret) {
+ DXG_ERR("failed to copy hwqueue handle");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGHWQUEUE,
+ hwqueue);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_signal_sync_object(process, adapter,
+ args.flags, 0, zerohandle,
+ args.object_count, args.objects,
+ args.hwqueue_count, args.hwqueues,
+ args.object_count,
+ args.fence_values, NULL,
+ zerohandle);
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_submitwaitforsyncobjectstohwqueue args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ int ret;
+ struct d3dkmthandle *objects = NULL;
+ u32 object_size;
+ u64 *fences = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.object_count > D3DDDI_MAX_OBJECT_WAITED_ON ||
+ args.object_count == 0) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ object_size = sizeof(struct d3dkmthandle) * args.object_count;
+ objects = vzalloc(object_size);
+ if (objects == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ ret = copy_from_user(objects, args.objects, object_size);
+ if (ret) {
+ DXG_ERR("failed to copy objects");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ object_size = sizeof(u64) * args.object_count;
+ fences = vzalloc(object_size);
+ if (fences == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ ret = copy_from_user(fences, args.fence_values, object_size);
+ if (ret) {
+ DXG_ERR("failed to copy fence values");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGHWQUEUE,
+ args.hwqueue);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_wait_sync_object_gpu(process, adapter,
+ args.hwqueue, args.object_count,
+ objects, fences, false);
+
+cleanup:
+
+ if (objects)
+ vfree(objects);
+ if (fences)
+ vfree(fences);
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
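Both submit paths above validate the user-supplied counts before computing any allocation size, which keeps the `vzalloc` requests bounded and avoids multiplication overflow. The guard can be modeled as (the cap value here is illustrative; the driver uses D3DDDI_MAX_OBJECT_WAITED_ON):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative cap standing in for D3DDDI_MAX_OBJECT_WAITED_ON. */
#define MAX_OBJECT_WAITED_ON 32u

/* Mirrors the ioctl guard: reject zero and anything above the cap. */
static bool object_count_valid(uint32_t count)
{
	return count > 0 && count <= MAX_OBJECT_WAITED_ON;
}
```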
+
static int
dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs)
{
@@ -1558,6 +1893,7 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs)
struct eventfd_ctx *event = NULL;
struct dxgsyncobject *syncobj = NULL;
bool device_lock_acquired = false;
+ struct dxgsharedsyncobject *syncobjgbl = NULL;
struct dxghosteventcpu *host_event = NULL;
ret = copy_from_user(&args, inargs, sizeof(args));
@@ -1618,6 +1954,22 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs)
if (ret < 0)
goto cleanup;
+ if (args.info.flags.shared) {
+ if (args.info.shared_handle.v == 0) {
+ DXG_ERR("shared handle should not be 0");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ syncobjgbl = dxgsharedsyncobj_create(device->adapter, syncobj);
+ if (syncobjgbl == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ dxgsharedsyncobj_add_syncobj(syncobjgbl, syncobj);
+
+ syncobjgbl->host_shared_handle = args.info.shared_handle;
+ }
+
ret = copy_to_user(inargs, &args, sizeof(args));
if (ret) {
DXG_ERR("failed to copy output args");
@@ -1646,6 +1998,8 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs)
if (event)
eventfd_ctx_put(event);
}
+ if (syncobjgbl)
+ kref_put(&syncobjgbl->ssyncobj_kref, dxgsharedsyncobj_release);
if (adapter)
dxgadapter_release_lock_shared(adapter);
if (device_lock_acquired)
@@ -1700,6 +2054,140 @@ dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_opensyncobjectfromnthandle2 args;
+ struct dxgsyncobject *syncobj = NULL;
+ struct dxgsharedsyncobject *syncobj_fd = NULL;
+ struct file *file = NULL;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct d3dddi_synchronizationobject_flags flags = { };
+ int ret;
+ bool device_lock_acquired = false;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ args.sync_object.v = 0;
+
+ if (args.device.v) {
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ } else {
+ DXG_ERR("device handle is missing");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0)
+ goto cleanup;
+
+ device_lock_acquired = true;
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ file = fget(args.nt_handle);
+ if (!file) {
+ DXG_ERR("failed to get file from handle: %llx",
+ args.nt_handle);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (file->f_op != &dxg_syncobj_fops) {
+ DXG_ERR("invalid fd: %llx", args.nt_handle);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ syncobj_fd = file->private_data;
+ if (syncobj_fd == NULL) {
+ DXG_ERR("invalid private data: %llx", args.nt_handle);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ flags.shared = 1;
+ flags.nt_security_sharing = 1;
+ syncobj = dxgsyncobject_create(process, device, adapter,
+ syncobj_fd->type, flags);
+ if (syncobj == NULL) {
+ DXG_ERR("failed to create sync object");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ dxgsharedsyncobj_add_syncobj(syncobj_fd, syncobj);
+
+ ret = dxgvmb_send_open_sync_object_nt(process, &dxgglobal->channel,
+ &args, syncobj);
+ if (ret < 0) {
+ DXG_ERR("failed to open sync object on host: %x",
+ syncobj_fd->host_shared_handle.v);
+ goto cleanup;
+ }
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ ret = hmgrtable_assign_handle(&process->handle_table, syncobj,
+ HMGRENTRY_TYPE_DXGSYNCOBJECT,
+ args.sync_object);
+ if (ret >= 0) {
+ syncobj->handle = args.sync_object;
+ kref_get(&syncobj->syncobj_kref);
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+
+ if (ret < 0)
+ goto cleanup;
+
+ ret = copy_to_user(inargs, &args, sizeof(args));
+ if (ret == 0)
+ goto success;
+ DXG_ERR("failed to copy output args");
+
+cleanup:
+
+ if (syncobj) {
+ dxgsyncobject_destroy(process, syncobj);
+ syncobj = NULL;
+ }
+
+ if (args.sync_object.v)
+ dxgvmb_send_destroy_sync_object(process, args.sync_object);
+
+success:
+
+ if (file)
+ fput(file);
+ if (syncobj)
+ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release);
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device_lock_acquired)
+ dxgdevice_release_lock_shared(device);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static int
dxgkio_signal_sync_object(struct dxgprocess *process, void *__user inargs)
{
@@ -2353,6 +2841,30 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgsharedsyncobj_get_host_nt_handle(struct dxgsharedsyncobject *syncobj,
+ struct dxgprocess *process,
+ struct d3dkmthandle objecthandle)
+{
+ int ret = 0;
+
+ mutex_lock(&syncobj->fd_mutex);
+ if (syncobj->host_shared_handle_nt_reference == 0) {
+ ret = dxgvmb_send_create_nt_shared_object(process,
+ objecthandle,
+ &syncobj->host_shared_handle_nt);
+ if (ret < 0)
+ goto cleanup;
+ DXG_TRACE("Host_shared_handle_nt: %x",
+ syncobj->host_shared_handle_nt.v);
+ kref_get(&syncobj->ssyncobj_kref);
+ }
+ syncobj->host_shared_handle_nt_reference++;
+cleanup:
+ mutex_unlock(&syncobj->fd_mutex);
+ return ret;
+}
+
static int
dxgsharedresource_get_host_nt_handle(struct dxgsharedresource *resource,
struct dxgprocess *process,
@@ -2378,6 +2890,7 @@ dxgsharedresource_get_host_nt_handle(struct dxgsharedresource *resource,
}
enum dxg_sharedobject_type {
+ DXG_SHARED_SYNCOBJECT,
DXG_SHARED_RESOURCE
};
@@ -2394,6 +2907,10 @@ static int get_object_fd(enum dxg_sharedobject_type type,
}
switch (type) {
+ case DXG_SHARED_SYNCOBJECT:
+ file = anon_inode_getfile("dxgsyncobj",
+ &dxg_syncobj_fops, object, 0);
+ break;
case DXG_SHARED_RESOURCE:
file = anon_inode_getfile("dxgresource",
&dxg_resource_fops, object, 0);
@@ -2419,6 +2936,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs)
enum hmgrentry_type object_type;
struct dxgsyncobject *syncobj = NULL;
struct dxgresource *resource = NULL;
+ struct dxgsharedsyncobject *shared_syncobj = NULL;
struct dxgsharedresource *shared_resource = NULL;
struct d3dkmthandle *handles = NULL;
int object_fd = -1;
@@ -2465,6 +2983,17 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs)
ret = -EINVAL;
} else {
switch (object_type) {
+ case HMGRENTRY_TYPE_DXGSYNCOBJECT:
+ syncobj = obj;
+ if (syncobj->shared) {
+ kref_get(&syncobj->syncobj_kref);
+ shared_syncobj = syncobj->shared_owner;
+ } else {
+ DXG_ERR("sync object is not shared");
+ syncobj = NULL;
+ ret = -EINVAL;
+ }
+ break;
case HMGRENTRY_TYPE_DXGRESOURCE:
resource = obj;
if (resource->shared_owner) {
@@ -2488,6 +3017,21 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs)
goto cleanup;
switch (object_type) {
+ case HMGRENTRY_TYPE_DXGSYNCOBJECT:
+ ret = get_object_fd(DXG_SHARED_SYNCOBJECT, shared_syncobj,
+ &object_fd);
+ if (ret < 0) {
+ DXG_ERR("get_object_fd failed for sync object");
+ goto cleanup;
+ }
+ ret = dxgsharedsyncobj_get_host_nt_handle(shared_syncobj,
+ process,
+ handles[0]);
+ if (ret < 0) {
+ DXG_ERR("get_host_nt_handle failed");
+ goto cleanup;
+ }
+ break;
case HMGRENTRY_TYPE_DXGRESOURCE:
ret = get_object_fd(DXG_SHARED_RESOURCE, shared_resource,
&object_fd);
@@ -2954,10 +3498,10 @@ static struct ioctl_desc ioctls[] = {
/* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER},
/* 0x16 */ {},
/* 0x17 */ {},
-/* 0x18 */ {},
+/* 0x18 */ {dxgkio_create_hwqueue, LX_DXCREATEHWQUEUE},
/* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE},
/* 0x1a */ {},
-/* 0x1b */ {},
+/* 0x1b */ {dxgkio_destroy_hwqueue, LX_DXDESTROYHWQUEUE},
/* 0x1c */ {},
/* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT},
/* 0x1e */ {},
@@ -2986,8 +3530,10 @@ static struct ioctl_desc ioctls[] = {
/* 0x33 */ {dxgkio_signal_sync_object_gpu2,
LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2},
/* 0x34 */ {},
-/* 0x35 */ {},
-/* 0x36 */ {},
+/* 0x35 */ {dxgkio_submit_signal_to_hwqueue,
+ LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE},
+/* 0x36 */ {dxgkio_submit_wait_to_hwqueue,
+ LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE},
/* 0x37 */ {},
/* 0x38 */ {},
/* 0x39 */ {},
@@ -2999,7 +3545,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x3d */ {},
/* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3},
/* 0x3f */ {dxgkio_share_objects, LX_DXSHAREOBJECTS},
-/* 0x40 */ {},
+/* 0x40 */ {dxgkio_open_sync_object_nt, LX_DXOPENSYNCOBJECTFROMNTHANDLE2},
/* 0x41 */ {dxgkio_query_resource_info_nt,
LX_DXQUERYRESOURCEINFOFROMNTHANDLE},
/* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE},
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index f74564cf7ee9..a78252901c8d 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -201,6 +201,16 @@ struct d3dkmt_createcontextvirtual {
struct d3dkmthandle context;
};
+struct d3dddi_createhwqueueflags {
+ union {
+ struct {
+ __u32 disable_gpu_timeout:1;
+ __u32 reserved:31;
+ };
+ __u32 value;
+ };
+};
+
enum d3dkmdt_gdisurfacetype {
_D3DKMDT_GDISURFACE_INVALID = 0,
_D3DKMDT_GDISURFACE_TEXTURE = 1,
@@ -694,6 +704,81 @@ struct d3dddi_openallocationinfo2 {
__u64 reserved[6];
};
+struct d3dkmt_createhwqueue {
+ struct d3dkmthandle context;
+ struct d3dddi_createhwqueueflags flags;
+ __u32 priv_drv_data_size;
+ __u32 reserved;
+#ifdef __KERNEL__
+ void *priv_drv_data;
+#else
+ __u64 priv_drv_data;
+#endif
+ struct d3dkmthandle queue;
+ struct d3dkmthandle queue_progress_fence;
+#ifdef __KERNEL__
+ void *queue_progress_fence_cpu_va;
+#else
+ __u64 queue_progress_fence_cpu_va;
+#endif
+ __u64 queue_progress_fence_gpu_va;
+};
+
+struct d3dkmt_destroyhwqueue {
+ struct d3dkmthandle queue;
+};
+
+struct d3dkmt_submitwaitforsyncobjectstohwqueue {
+ struct d3dkmthandle hwqueue;
+ __u32 object_count;
+#ifdef __KERNEL__
+ struct d3dkmthandle *objects;
+ __u64 *fence_values;
+#else
+ __u64 objects;
+ __u64 fence_values;
+#endif
+};
+
+struct d3dkmt_submitsignalsyncobjectstohwqueue {
+ struct d3dddicb_signalflags flags;
+ __u32 hwqueue_count;
+#ifdef __KERNEL__
+ struct d3dkmthandle *hwqueues;
+#else
+ __u64 hwqueues;
+#endif
+ __u32 object_count;
+ __u32 reserved;
+#ifdef __KERNEL__
+ struct d3dkmthandle *objects;
+ __u64 *fence_values;
+#else
+ __u64 objects;
+ __u64 fence_values;
+#endif
+};
+
+struct d3dkmt_opensyncobjectfromnthandle2 {
+ __u64 nt_handle;
+ struct d3dkmthandle device;
+ struct d3dddi_synchronizationobject_flags flags;
+ struct d3dkmthandle sync_object;
+ __u32 reserved1;
+ union {
+ struct {
+#ifdef __KERNEL__
+ void *fence_value_cpu_va;
+#else
+ __u64 fence_value_cpu_va;
+#endif
+ __u64 fence_value_gpu_va;
+ __u32 engine_affinity;
+ } monitored_fence;
+ __u64 reserved[8];
+ };
+};
+
struct d3dkmt_openresourcefromnthandle {
struct d3dkmthandle device;
__u32 reserved;
@@ -819,6 +904,10 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x14, struct d3dkmt_enumadapters2)
#define LX_DXCLOSEADAPTER \
_IOWR(0x47, 0x15, struct d3dkmt_closeadapter)
+#define LX_DXCREATEHWQUEUE \
+ _IOWR(0x47, 0x18, struct d3dkmt_createhwqueue)
+#define LX_DXDESTROYHWQUEUE \
+ _IOWR(0x47, 0x1b, struct d3dkmt_destroyhwqueue)
#define LX_DXDESTROYDEVICE \
_IOWR(0x47, 0x19, struct d3dkmt_destroydevice)
#define LX_DXDESTROYSYNCHRONIZATIONOBJECT \
@@ -829,6 +918,10 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x32, struct d3dkmt_signalsynchronizationobjectfromgpu)
#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2 \
_IOWR(0x47, 0x33, struct d3dkmt_signalsynchronizationobjectfromgpu2)
+#define LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE \
+ _IOWR(0x47, 0x35, struct d3dkmt_submitsignalsyncobjectstohwqueue)
+#define LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE \
+ _IOWR(0x47, 0x36, struct d3dkmt_submitwaitforsyncobjectstohwqueue)
#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \
_IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu)
#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \
* [PATCH 13/55] drivers: hv: dxgkrnl: Creation of paging queue objects.
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls for creation and destruction of paging queue objects:
- LX_DXCREATEPAGINGQUEUE
- LX_DXDESTROYPAGINGQUEUE
Paging queue objects (dxgpagingqueue) contain the operations that manage
residency of device-accessible allocations. An allocation is resident
when the device has access to it: for example, the allocation resides in
local device memory, or device page tables point to system memory which
is made non-pageable.
Each paging queue has an associated monitored fence sync object, which
is used to detect when a paging operation is completed.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 89 +++++++++++++++
drivers/hv/dxgkrnl/dxgkrnl.h | 24 ++++
drivers/hv/dxgkrnl/dxgprocess.c | 4 +
drivers/hv/dxgkrnl/dxgvmbus.c | 74 +++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 17 +++
drivers/hv/dxgkrnl/ioctl.c | 189 +++++++++++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 27 +++++
7 files changed, 418 insertions(+), 6 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index f59173f13559..410f08768bad 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -278,6 +278,7 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter,
void dxgdevice_stop(struct dxgdevice *device)
{
struct dxgallocation *alloc;
+ struct dxgpagingqueue *pqueue;
struct dxgsyncobject *syncobj;
DXG_TRACE("Stopping device: %p", device);
@@ -288,6 +289,10 @@ void dxgdevice_stop(struct dxgdevice *device)
dxgdevice_release_alloc_list_lock(device);
hmgrtable_lock(&device->process->handle_table, DXGLOCK_EXCL);
+ list_for_each_entry(pqueue, &device->pqueue_list_head,
+ pqueue_list_entry) {
+ dxgpagingqueue_stop(pqueue);
+ }
list_for_each_entry(syncobj, &device->syncobj_list_head,
syncobj_list_entry) {
dxgsyncobject_stop(syncobj);
@@ -375,6 +380,17 @@ void dxgdevice_destroy(struct dxgdevice *device)
dxgdevice_release_context_list_lock(device);
}
+ {
+ struct dxgpagingqueue *tmp;
+ struct dxgpagingqueue *pqueue;
+
+ DXG_TRACE("destroying paging queues");
+ list_for_each_entry_safe(pqueue, tmp, &device->pqueue_list_head,
+ pqueue_list_entry) {
+ dxgpagingqueue_destroy(pqueue);
+ }
+ }
+
/* Guest handles need to be released before the host handles */
hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
if (device->handle_valid) {
@@ -708,6 +724,26 @@ void dxgdevice_release(struct kref *refcount)
kfree(device);
}
+void dxgdevice_add_paging_queue(struct dxgdevice *device,
+ struct dxgpagingqueue *entry)
+{
+ dxgdevice_acquire_alloc_list_lock(device);
+ list_add_tail(&entry->pqueue_list_entry, &device->pqueue_list_head);
+ dxgdevice_release_alloc_list_lock(device);
+}
+
+void dxgdevice_remove_paging_queue(struct dxgpagingqueue *pqueue)
+{
+ struct dxgdevice *device = pqueue->device;
+
+ dxgdevice_acquire_alloc_list_lock(device);
+ if (pqueue->pqueue_list_entry.next) {
+ list_del(&pqueue->pqueue_list_entry);
+ pqueue->pqueue_list_entry.next = NULL;
+ }
+ dxgdevice_release_alloc_list_lock(device);
+}
+
void dxgdevice_add_syncobj(struct dxgdevice *device,
struct dxgsyncobject *syncobj)
{
@@ -899,6 +935,59 @@ else
kfree(alloc);
}
+struct dxgpagingqueue *dxgpagingqueue_create(struct dxgdevice *device)
+{
+ struct dxgpagingqueue *pqueue;
+
+ pqueue = kzalloc(sizeof(*pqueue), GFP_KERNEL);
+ if (pqueue) {
+ pqueue->device = device;
+ pqueue->process = device->process;
+ pqueue->device_handle = device->handle;
+ dxgdevice_add_paging_queue(device, pqueue);
+ }
+ return pqueue;
+}
+
+void dxgpagingqueue_stop(struct dxgpagingqueue *pqueue)
+{
+ int ret;
+
+ if (pqueue->mapped_address) {
+ ret = dxg_unmap_iospace(pqueue->mapped_address, PAGE_SIZE);
+ DXG_TRACE("fence is unmapped %d %p",
+ ret, pqueue->mapped_address);
+ pqueue->mapped_address = NULL;
+ }
+}
+
+void dxgpagingqueue_destroy(struct dxgpagingqueue *pqueue)
+{
+ struct dxgprocess *process = pqueue->process;
+
+ DXG_TRACE("Destroying pqueue %p %x", pqueue, pqueue->handle.v);
+
+ dxgpagingqueue_stop(pqueue);
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ if (pqueue->handle.v) {
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_DXGPAGINGQUEUE,
+ pqueue->handle);
+ pqueue->handle.v = 0;
+ }
+ if (pqueue->syncobj_handle.v) {
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_MONITOREDFENCE,
+ pqueue->syncobj_handle);
+ pqueue->syncobj_handle.v = 0;
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+ if (pqueue->device)
+ dxgdevice_remove_paging_queue(pqueue);
+ kfree(pqueue);
+}
+
struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process,
struct dxgadapter *adapter)
{
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 0330352b9c06..440d1f9b8882 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -104,6 +104,16 @@ int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev);
void dxgvmbuschannel_destroy(struct dxgvmbuschannel *ch);
void dxgvmbuschannel_receive(void *ctx);
+struct dxgpagingqueue {
+ struct dxgdevice *device;
+ struct dxgprocess *process;
+ struct list_head pqueue_list_entry;
+ struct d3dkmthandle device_handle;
+ struct d3dkmthandle handle;
+ struct d3dkmthandle syncobj_handle;
+ void *mapped_address;
+};
+
/*
* The structure describes an event, which will be signaled by
* a message from host.
@@ -127,6 +137,10 @@ struct dxghosteventcpu {
bool remove_from_list;
};
+struct dxgpagingqueue *dxgpagingqueue_create(struct dxgdevice *device);
+void dxgpagingqueue_destroy(struct dxgpagingqueue *pqueue);
+void dxgpagingqueue_stop(struct dxgpagingqueue *pqueue);
+
/*
* This is GPU synchronization object, which is used to synchronize execution
* between GPU contextx/hardware queues or for tracking GPU execution progress.
@@ -516,6 +530,9 @@ void dxgdevice_remove_alloc_safe(struct dxgdevice *dev,
struct dxgallocation *a);
void dxgdevice_add_resource(struct dxgdevice *dev, struct dxgresource *res);
void dxgdevice_remove_resource(struct dxgdevice *dev, struct dxgresource *res);
+void dxgdevice_add_paging_queue(struct dxgdevice *dev,
+ struct dxgpagingqueue *pqueue);
+void dxgdevice_remove_paging_queue(struct dxgpagingqueue *pqueue);
void dxgdevice_add_syncobj(struct dxgdevice *dev, struct dxgsyncobject *so);
void dxgdevice_remove_syncobj(struct dxgsyncobject *so);
bool dxgdevice_is_active(struct dxgdevice *dev);
@@ -762,6 +779,13 @@ dxgvmb_send_create_context(struct dxgadapter *adapter,
int dxgvmb_send_destroy_context(struct dxgadapter *adapter,
struct dxgprocess *process,
struct d3dkmthandle h);
+int dxgvmb_send_create_paging_queue(struct dxgprocess *pr,
+ struct dxgdevice *dev,
+ struct d3dkmt_createpagingqueue *args,
+ struct dxgpagingqueue *pq);
+int dxgvmb_send_destroy_paging_queue(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle h);
int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev,
struct d3dkmt_createallocation *args,
struct d3dkmt_createallocation *__user inargs,
diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c
index 4021084ebd78..5de3f8ccb448 100644
--- a/drivers/hv/dxgkrnl/dxgprocess.c
+++ b/drivers/hv/dxgkrnl/dxgprocess.c
@@ -277,6 +277,10 @@ struct dxgdevice *dxgprocess_device_by_object_handle(struct dxgprocess *process,
device_handle =
((struct dxgcontext *)obj)->device_handle;
break;
+ case HMGRENTRY_TYPE_DXGPAGINGQUEUE:
+ device_handle =
+ ((struct dxgpagingqueue *)obj)->device_handle;
+ break;
case HMGRENTRY_TYPE_DXGHWQUEUE:
device_handle =
((struct dxghwqueue *)obj)->device_handle;
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index e83600945de1..c9c00b288ae0 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1155,6 +1155,80 @@ int dxgvmb_send_destroy_context(struct dxgadapter *adapter,
return ret;
}
+int dxgvmb_send_create_paging_queue(struct dxgprocess *process,
+ struct dxgdevice *device,
+ struct d3dkmt_createpagingqueue *args,
+ struct dxgpagingqueue *pqueue)
+{
+ struct dxgkvmb_command_createpagingqueue_return result;
+ struct dxgkvmb_command_createpagingqueue *command;
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, device->adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_CREATEPAGINGQUEUE,
+ process->host_handle);
+ command->args = *args;
+ args->paging_queue.v = 0;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, &result,
+ sizeof(result));
+ if (ret < 0) {
+ DXG_ERR("send_create_paging_queue failed %x", ret);
+ goto cleanup;
+ }
+
+ args->paging_queue = result.paging_queue;
+ args->sync_object = result.sync_object;
+ args->fence_cpu_virtual_address =
+ dxg_map_iospace(result.fence_storage_physical_address, PAGE_SIZE,
+ PROT_READ | PROT_WRITE, true);
+ if (args->fence_cpu_virtual_address == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ pqueue->mapped_address = args->fence_cpu_virtual_address;
+ pqueue->handle = args->paging_queue;
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_destroy_paging_queue(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle h)
+{
+ int ret;
+ struct dxgkvmb_command_destroypagingqueue *command;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_DESTROYPAGINGQUEUE,
+ process->host_handle);
+ command->paging_queue = h;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, NULL, 0);
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
static int
copy_private_data(struct d3dkmt_createallocation *args,
struct dxgkvmb_command_createallocation *command,
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 2e2fd1ae5ec2..aba075d374c9 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -462,6 +462,23 @@ struct dxgkvmb_command_destroycontext {
struct d3dkmthandle context;
};
+struct dxgkvmb_command_createpagingqueue {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_createpagingqueue args;
+};
+
+struct dxgkvmb_command_createpagingqueue_return {
+ struct d3dkmthandle paging_queue;
+ struct d3dkmthandle sync_object;
+ u64 fence_storage_physical_address;
+ u64 fence_storage_offset;
+};
+
+struct dxgkvmb_command_destroypagingqueue {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle paging_queue;
+};
+
struct dxgkvmb_command_createsyncobject {
struct dxgkvmb_command_vgpu_to_host hdr;
struct d3dkmt_createsynchronizationobject2 args;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 3cfc1c40e0bb..a2d236f5eff5 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -329,7 +329,7 @@ static int dxgsharedresource_seal(struct dxgsharedresource *shared_resource)
if (alloc_data_size) {
if (data_size < alloc_data_size) {
- dev_err(DXGDEV,
+ DXG_ERR(
"Invalid private data size");
ret = -EINVAL;
goto cleanup1;
@@ -1010,6 +1010,183 @@ static int dxgkio_destroy_hwqueue(struct dxgprocess *process,
return ret;
}
+static int
+dxgkio_create_paging_queue(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_createpagingqueue args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct dxgpagingqueue *pqueue = NULL;
+ int ret;
+ struct d3dkmthandle host_handle = {};
+ bool device_lock_acquired = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ /*
+ * The call acquires reference on the device. It is safe to access the
+ * adapter, because the device holds reference on it.
+ */
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0)
+ goto cleanup;
+
+ device_lock_acquired = true;
+ adapter = device->adapter;
+
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ pqueue = dxgpagingqueue_create(device);
+ if (pqueue == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_create_paging_queue(process, device, &args, pqueue);
+ if (ret >= 0) {
+ host_handle = args.paging_queue;
+
+ ret = copy_to_user(inargs, &args, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy output args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ ret = hmgrtable_assign_handle(&process->handle_table, pqueue,
+ HMGRENTRY_TYPE_DXGPAGINGQUEUE,
+ host_handle);
+ if (ret >= 0) {
+ pqueue->handle = host_handle;
+ ret = hmgrtable_assign_handle(&process->handle_table,
+ NULL,
+ HMGRENTRY_TYPE_MONITOREDFENCE,
+ args.sync_object);
+ if (ret >= 0)
+ pqueue->syncobj_handle = args.sync_object;
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+ /* should not fail after this */
+ }
+
+cleanup:
+
+ if (ret < 0) {
+ if (pqueue)
+ dxgpagingqueue_destroy(pqueue);
+ if (host_handle.v)
+ dxgvmb_send_destroy_paging_queue(process,
+ adapter,
+ host_handle);
+ }
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+
+ if (device) {
+ if (device_lock_acquired)
+ dxgdevice_release_lock_shared(device);
+ kref_put(&device->device_kref, dxgdevice_release);
+ }
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_destroy_paging_queue(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dddi_destroypagingqueue args;
+ struct dxgpagingqueue *paging_queue = NULL;
+ int ret;
+ struct d3dkmthandle device_handle = {};
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ paging_queue = hmgrtable_get_object_by_type(&process->handle_table,
+ HMGRENTRY_TYPE_DXGPAGINGQUEUE,
+ args.paging_queue);
+ if (paging_queue) {
+ device_handle = paging_queue->device_handle;
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_DXGPAGINGQUEUE,
+ args.paging_queue);
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_MONITOREDFENCE,
+ paging_queue->syncobj_handle);
+ paging_queue->syncobj_handle.v = 0;
+ paging_queue->handle.v = 0;
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+
+ /*
+ * The call acquires reference on the device. It is safe to access the
+ * adapter, because the device holds reference on it.
+ */
+ if (device_handle.v)
+ device = dxgprocess_device_by_handle(process, device_handle);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0) {
+ kref_put(&device->device_kref, dxgdevice_release);
+ device = NULL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_destroy_paging_queue(process, adapter,
+ args.paging_queue);
+
+ dxgpagingqueue_destroy(paging_queue);
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+
+ if (device) {
+ dxgdevice_release_lock_shared(device);
+ kref_put(&device->device_kref, dxgdevice_release);
+ }
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static int
get_standard_alloc_priv_data(struct dxgdevice *device,
struct d3dkmt_createstandardallocation *alloc_info,
@@ -1272,7 +1449,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
args.private_runtime_resource_handle;
if (args.flags.create_shared) {
if (!args.flags.nt_security_sharing) {
- dev_err(DXGDEV,
+ DXG_ERR(
"nt_security_sharing must be set");
ret = -EINVAL;
goto cleanup;
@@ -1313,7 +1490,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
args.private_runtime_data,
args.private_runtime_data_size);
if (ret) {
- dev_err(DXGDEV,
+ DXG_ERR(
"failed to copy runtime data");
ret = -EINVAL;
goto cleanup;
@@ -1333,7 +1510,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
args.priv_drv_data,
args.priv_drv_data_size);
if (ret) {
- dev_err(DXGDEV,
+ DXG_ERR(
"failed to copy res data");
ret = -EINVAL;
goto cleanup;
@@ -3481,7 +3658,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x04 */ {dxgkio_create_context_virtual, LX_DXCREATECONTEXTVIRTUAL},
/* 0x05 */ {dxgkio_destroy_context, LX_DXDESTROYCONTEXT},
/* 0x06 */ {dxgkio_create_allocation, LX_DXCREATEALLOCATION},
-/* 0x07 */ {},
+/* 0x07 */ {dxgkio_create_paging_queue, LX_DXCREATEPAGINGQUEUE},
/* 0x08 */ {},
/* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO},
/* 0x0a */ {},
@@ -3502,7 +3679,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE},
/* 0x1a */ {},
/* 0x1b */ {dxgkio_destroy_hwqueue, LX_DXDESTROYHWQUEUE},
-/* 0x1c */ {},
+/* 0x1c */ {dxgkio_destroy_paging_queue, LX_DXDESTROYPAGINGQUEUE},
/* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT},
/* 0x1e */ {},
/* 0x1f */ {},
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index a78252901c8d..6ec70852de6e 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -211,6 +211,29 @@ struct d3dddi_createhwqueueflags {
};
};
+enum d3dddi_pagingqueue_priority {
+ _D3DDDI_PAGINGQUEUE_PRIORITY_BELOW_NORMAL = -1,
+ _D3DDDI_PAGINGQUEUE_PRIORITY_NORMAL = 0,
+ _D3DDDI_PAGINGQUEUE_PRIORITY_ABOVE_NORMAL = 1,
+};
+
+struct d3dkmt_createpagingqueue {
+ struct d3dkmthandle device;
+ enum d3dddi_pagingqueue_priority priority;
+ struct d3dkmthandle paging_queue;
+ struct d3dkmthandle sync_object;
+#ifdef __KERNEL__
+ void *fence_cpu_virtual_address;
+#else
+ __u64 fence_cpu_virtual_address;
+#endif
+ __u32 physical_adapter_index;
+};
+
+struct d3dddi_destroypagingqueue {
+ struct d3dkmthandle paging_queue;
+};
+
enum d3dkmdt_gdisurfacetype {
_D3DKMDT_GDISURFACE_INVALID = 0,
_D3DKMDT_GDISURFACE_TEXTURE = 1,
@@ -890,6 +913,8 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x05, struct d3dkmt_destroycontext)
#define LX_DXCREATEALLOCATION \
_IOWR(0x47, 0x06, struct d3dkmt_createallocation)
+#define LX_DXCREATEPAGINGQUEUE \
+ _IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue)
#define LX_DXQUERYADAPTERINFO \
_IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
#define LX_DXCREATESYNCHRONIZATIONOBJECT \
@@ -908,6 +933,8 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x18, struct d3dkmt_createhwqueue)
#define LX_DXDESTROYHWQUEUE \
_IOWR(0x47, 0x1b, struct d3dkmt_destroyhwqueue)
+#define LX_DXDESTROYPAGINGQUEUE \
+ _IOWR(0x47, 0x1c, struct d3dddi_destroypagingqueue)
#define LX_DXDESTROYDEVICE \
_IOWR(0x47, 0x19, struct d3dkmt_destroydevice)
#define LX_DXDESTROYSYNCHRONIZATIONOBJECT \
* [PATCH 14/55] drivers: hv: dxgkrnl: Submit execution commands to the compute device
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls for submission of compute device buffers for execution:
- LX_DXSUBMITCOMMAND
The ioctl submits a command buffer to a device operating in the
"packet scheduling" mode.
- LX_DXSUBMITCOMMANDTOHWQUEUE
The ioctl submits a command buffer to a device operating in the
"hardware scheduling" mode.
To improve performance, both ioctls use asynchronous VMBus messages to
communicate with the host, because these are high-frequency operations.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 6 ++
drivers/hv/dxgkrnl/dxgvmbus.c | 113 ++++++++++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 14 ++++
drivers/hv/dxgkrnl/ioctl.c | 127 +++++++++++++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 58 ++++++++++++++++
5 files changed, 316 insertions(+), 2 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 440d1f9b8882..ab97bc53b124 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -796,6 +796,9 @@ int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev,
int dxgvmb_send_destroy_allocation(struct dxgprocess *pr, struct dxgdevice *dev,
struct d3dkmt_destroyallocation2 *args,
struct d3dkmthandle *alloc_handles);
+int dxgvmb_send_submit_command(struct dxgprocess *pr,
+ struct dxgadapter *adapter,
+ struct d3dkmt_submitcommand *args);
int dxgvmb_send_create_sync_object(struct dxgprocess *pr,
struct dxgadapter *adapter,
struct d3dkmt_createsynchronizationobject2
@@ -838,6 +841,9 @@ int dxgvmb_send_destroy_hwqueue(struct dxgprocess *process,
int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryadapterinfo *args);
+int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_submitcommandtohwqueue *a);
int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process,
struct dxgvmbuschannel *channel,
struct d3dkmt_opensyncobjectfromnthandle2
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index c9c00b288ae0..7cb04fec217e 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1901,6 +1901,61 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device,
return ret;
}
+int dxgvmb_send_submit_command(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_submitcommand *args)
+{
+ int ret;
+ u32 cmd_size;
+ struct dxgkvmb_command_submitcommand *command;
+ u32 hbufsize = args->num_history_buffers * sizeof(struct d3dkmthandle);
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ cmd_size = sizeof(struct dxgkvmb_command_submitcommand) +
+ hbufsize + args->priv_drv_data_size;
+
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ ret = copy_from_user(&command[1], args->history_buffer_array,
+ hbufsize);
+ if (ret) {
+ DXG_ERR("failed to copy history buffer");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ ret = copy_from_user((u8 *) &command[1] + hbufsize,
+ args->priv_drv_data, args->priv_drv_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy history priv data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_SUBMITCOMMAND,
+ process->host_handle);
+ command->args = *args;
+
+ if (dxgglobal->async_msg_enabled) {
+ command->hdr.async_msg = 1;
+ ret = dxgvmb_send_async_msg(msg.channel, msg.hdr, msg.size);
+ } else {
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr,
+ msg.size);
+ }
+
+cleanup:
+
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
static void set_result(struct d3dkmt_createsynchronizationobject2 *args,
u64 fence_gpu_va, u8 *va)
{
@@ -2427,3 +2482,61 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
DXG_TRACE("err: %d", ret);
return ret;
}
+
+int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_submitcommandtohwqueue
+ *args)
+{
+ int ret = -EINVAL;
+ u32 cmd_size;
+ struct dxgkvmb_command_submitcommandtohwqueue *command;
+ u32 primaries_size = args->num_primaries * sizeof(struct d3dkmthandle);
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ cmd_size = sizeof(*command) + args->priv_drv_data_size + primaries_size;
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ if (primaries_size) {
+ ret = copy_from_user(&command[1], args->written_primaries,
+ primaries_size);
+ if (ret) {
+ DXG_ERR("failed to copy primaries handles");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+ if (args->priv_drv_data_size) {
+ ret = copy_from_user((char *)&command[1] + primaries_size,
+ args->priv_drv_data,
+ args->priv_drv_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy private data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_SUBMITCOMMANDTOHWQUEUE,
+ process->host_handle);
+ command->args = *args;
+
+ if (dxgglobal->async_msg_enabled) {
+ command->hdr.async_msg = 1;
+ ret = dxgvmb_send_async_msg(msg.channel, msg.hdr, msg.size);
+ } else {
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr,
+ msg.size);
+ }
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index aba075d374c9..acfdbde09e82 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -314,6 +314,20 @@ struct dxgkvmb_command_flushdevice {
enum dxgdevice_flushschedulerreason reason;
};
+struct dxgkvmb_command_submitcommand {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_submitcommand args;
+ /* HistoryBufferHandles */
+ /* PrivateDriverData */
+};
+
+struct dxgkvmb_command_submitcommandtohwqueue {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_submitcommandtohwqueue args;
+ /* Written primaries */
+ /* PrivateDriverData */
+};
+
struct dxgkvmb_command_createallocation_allocinfo {
u32 flags;
u32 priv_drv_data_size;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index a2d236f5eff5..9128694c8e78 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -1902,6 +1902,129 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_submit_command(struct dxgprocess *process, void *__user inargs)
+{
+ int ret;
+ struct d3dkmt_submitcommand args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.broadcast_context_count > D3DDDI_MAX_BROADCAST_CONTEXT ||
+ args.broadcast_context_count == 0) {
+ DXG_ERR("invalid number of contexts");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ DXG_ERR("invalid private data size");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.num_history_buffers > 1024) {
+ DXG_ERR("invalid number of history buffers");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.num_primaries > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ DXG_ERR("invalid number of primaries");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ args.broadcast_context[0]);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_submit_command(process, adapter, &args);
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_submit_command_to_hwqueue(struct dxgprocess *process, void *__user inargs)
+{
+ int ret;
+ struct d3dkmt_submitcommandtohwqueue args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ DXG_ERR("invalid private data size");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.num_primaries > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ DXG_ERR("invalid number of primaries");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGHWQUEUE,
+ args.hwqueue);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_submit_command_hwqueue(process, adapter, &args);
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static int
dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs)
{
@@ -3666,7 +3789,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x0c */ {},
/* 0x0d */ {},
/* 0x0e */ {},
-/* 0x0f */ {},
+/* 0x0f */ {dxgkio_submit_command, LX_DXSUBMITCOMMAND},
/* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT},
/* 0x11 */ {dxgkio_signal_sync_object, LX_DXSIGNALSYNCHRONIZATIONOBJECT},
/* 0x12 */ {dxgkio_wait_sync_object, LX_DXWAITFORSYNCHRONIZATIONOBJECT},
@@ -3706,7 +3829,7 @@ static struct ioctl_desc ioctls[] = {
LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU},
/* 0x33 */ {dxgkio_signal_sync_object_gpu2,
LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2},
-/* 0x34 */ {},
+/* 0x34 */ {dxgkio_submit_command_to_hwqueue, LX_DXSUBMITCOMMANDTOHWQUEUE},
/* 0x35 */ {dxgkio_submit_signal_to_hwqueue,
LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE},
/* 0x36 */ {dxgkio_submit_wait_to_hwqueue,
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 6ec70852de6e..9238115d165d 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -58,6 +58,8 @@ struct winluid {
__u32 b;
};
+#define D3DDDI_MAX_WRITTEN_PRIMARIES 16
+
#define D3DKMT_CREATEALLOCATION_MAX 1024
#define D3DKMT_ADAPTERS_MAX 64
#define D3DDDI_MAX_BROADCAST_CONTEXT 64
@@ -525,6 +527,58 @@ struct d3dkmt_destroysynchronizationobject {
struct d3dkmthandle sync_object;
};
+struct d3dkmt_submitcommandflags {
+ __u32 null_rendering:1;
+ __u32 present_redirected:1;
+ __u32 reserved:30;
+};
+
+struct d3dkmt_submitcommand {
+ __u64 command_buffer;
+ __u32 command_length;
+ struct d3dkmt_submitcommandflags flags;
+ __u64 present_history_token;
+ __u32 broadcast_context_count;
+ struct d3dkmthandle broadcast_context[D3DDDI_MAX_BROADCAST_CONTEXT];
+ __u32 reserved;
+#ifdef __KERNEL__
+ void *priv_drv_data;
+#else
+ __u64 priv_drv_data;
+#endif
+ __u32 priv_drv_data_size;
+ __u32 num_primaries;
+ struct d3dkmthandle written_primaries[D3DDDI_MAX_WRITTEN_PRIMARIES];
+ __u32 num_history_buffers;
+ __u32 reserved1;
+#ifdef __KERNEL__
+ struct d3dkmthandle *history_buffer_array;
+#else
+ __u64 history_buffer_array;
+#endif
+};
+
+struct d3dkmt_submitcommandtohwqueue {
+ struct d3dkmthandle hwqueue;
+ __u32 reserved;
+ __u64 hwqueue_progress_fence_id;
+ __u64 command_buffer;
+ __u32 command_length;
+ __u32 priv_drv_data_size;
+#ifdef __KERNEL__
+ void *priv_drv_data;
+#else
+ __u64 priv_drv_data;
+#endif
+ __u32 num_primaries;
+ __u32 reserved1;
+#ifdef __KERNEL__
+ struct d3dkmthandle *written_primaries;
+#else
+ __u64 written_primaries;
+#endif
+};
+
enum d3dkmt_standardallocationtype {
_D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1,
_D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2,
@@ -917,6 +971,8 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue)
#define LX_DXQUERYADAPTERINFO \
_IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
+#define LX_DXSUBMITCOMMAND \
+ _IOWR(0x47, 0x0f, struct d3dkmt_submitcommand)
#define LX_DXCREATESYNCHRONIZATIONOBJECT \
_IOWR(0x47, 0x10, struct d3dkmt_createsynchronizationobject2)
#define LX_DXSIGNALSYNCHRONIZATIONOBJECT \
@@ -945,6 +1001,8 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x32, struct d3dkmt_signalsynchronizationobjectfromgpu)
#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2 \
_IOWR(0x47, 0x33, struct d3dkmt_signalsynchronizationobjectfromgpu2)
+#define LX_DXSUBMITCOMMANDTOHWQUEUE \
+ _IOWR(0x47, 0x34, struct d3dkmt_submitcommandtohwqueue)
#define LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE \
_IOWR(0x47, 0x35, struct d3dkmt_submitsignalsyncobjectstohwqueue)
#define LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE \
[PATCH 15/55] drivers: hv: dxgkrnl: Share objects with the host
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement the LX_DXSHAREOBJECTWITHHOST ioctl.
This ioctl is used to create a Windows NT handle on the host
for the given shared object (resource or sync object). The NT
handle is returned to the caller. The caller can then share the NT
handle with a host application that needs to access the object.
The host application can open the shared resource using the NT
handle. This way the guest and the host have access to the same
object.
Fix incorrect handling of error results from copy_from_user().
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 2 ++
drivers/hv/dxgkrnl/dxgvmbus.c | 60 ++++++++++++++++++++++++++++++++---
drivers/hv/dxgkrnl/dxgvmbus.h | 18 +++++++++++
drivers/hv/dxgkrnl/ioctl.c | 38 ++++++++++++++++++++--
include/uapi/misc/d3dkmthk.h | 9 ++++++
5 files changed, 120 insertions(+), 7 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index ab97bc53b124..a39d11d76e41 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -872,6 +872,8 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device,
int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel,
void *command,
u32 cmd_size);
+int dxgvmb_send_share_object_with_host(struct dxgprocess *process,
+ struct d3dkmt_shareobjectwithhost *args);
void signal_host_cpu_event(struct dxghostevent *eventhdr);
int ntstatus2int(struct ntstatus status);
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 7cb04fec217e..67a16de622e0 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -881,6 +881,50 @@ int dxgvmb_send_destroy_sync_object(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_share_object_with_host(struct dxgprocess *process,
+ struct d3dkmt_shareobjectwithhost *args)
+{
+ struct dxgkvmb_command_shareobjectwithhost *command;
+ struct dxgkvmb_command_shareobjectwithhost_return result = {};
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, NULL, process, sizeof(*command));
+ if (ret)
+ return ret;
+ command = (void *)msg.msg;
+
+ ret = dxgglobal_acquire_channel_lock();
+ if (ret < 0)
+ goto cleanup;
+
+ command_vm_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_SHAREOBJECTWITHHOST,
+ process->host_handle);
+ command->device_handle = args->device_handle;
+ command->object_handle = args->object_handle;
+
+ ret = dxgvmb_send_sync_msg(dxgglobal_get_dxgvmbuschannel(),
+ msg.hdr, msg.size, &result, sizeof(result));
+
+ dxgglobal_release_channel_lock();
+
+ if (ret || !NT_SUCCESS(result.status)) {
+ if (ret == 0)
+ ret = ntstatus2int(result.status);
+ DXG_ERR("failed to share object with host: %d %x",
+ ret, result.status.v);
+ goto cleanup;
+ }
+ args->object_vail_nt_handle = result.vail_nt_handle;
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_ERR("err: %d", ret);
+ return ret;
+}
+
/*
* Virtual GPU messages to the host
*/
@@ -2323,37 +2367,43 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
ret = copy_to_user(&inargs->queue, &command->hwqueue,
sizeof(struct d3dkmthandle));
- if (ret < 0) {
+ if (ret) {
DXG_ERR("failed to copy hwqueue handle");
+ ret = -EINVAL;
goto cleanup;
}
ret = copy_to_user(&inargs->queue_progress_fence,
&command->hwqueue_progress_fence,
sizeof(struct d3dkmthandle));
- if (ret < 0) {
+ if (ret) {
DXG_ERR("failed to progress fence");
+ ret = -EINVAL;
goto cleanup;
}
ret = copy_to_user(&inargs->queue_progress_fence_cpu_va,
&hwqueue->progress_fence_mapped_address,
sizeof(inargs->queue_progress_fence_cpu_va));
- if (ret < 0) {
+ if (ret) {
DXG_ERR("failed to copy fence cpu va");
+ ret = -EINVAL;
goto cleanup;
}
ret = copy_to_user(&inargs->queue_progress_fence_gpu_va,
&command->hwqueue_progress_fence_gpuva,
sizeof(u64));
- if (ret < 0) {
+ if (ret) {
DXG_ERR("failed to copy fence gpu va");
+ ret = -EINVAL;
goto cleanup;
}
if (args->priv_drv_data_size) {
ret = copy_to_user(args->priv_drv_data,
command->priv_drv_data,
args->priv_drv_data_size);
- if (ret < 0)
+ if (ret) {
DXG_ERR("failed to copy private data");
+ ret = -EINVAL;
+ }
}
cleanup:
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index acfdbde09e82..c1f693917d99 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -574,4 +574,22 @@ struct dxgkvmb_command_destroyhwqueue {
struct d3dkmthandle hwqueue;
};
+struct dxgkvmb_command_shareobjectwithhost {
+ struct dxgkvmb_command_vm_to_host hdr;
+ struct d3dkmthandle device_handle;
+ struct d3dkmthandle object_handle;
+ u64 reserved;
+};
+
+struct dxgkvmb_command_shareobjectwithhost_return {
+ struct ntstatus status;
+ u32 alignment;
+ u64 vail_nt_handle;
+};
+
+int
+dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel,
+ void *command, u32 command_size, void *result,
+ u32 result_size);
+
#endif /* _DXGVMBUS_H */
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 9128694c8e78..ac052836ce27 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -2460,6 +2460,7 @@ dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs)
if (ret == 0)
goto success;
DXG_ERR("failed to copy output args");
+ ret = -EINVAL;
cleanup:
@@ -3364,8 +3365,10 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs)
tmp = (u64) object_fd;
ret = copy_to_user(args.shared_handle, &tmp, sizeof(u64));
- if (ret < 0)
+ if (ret) {
DXG_ERR("failed to copy shared handle");
+ ret = -EINVAL;
+ }
cleanup:
if (ret < 0) {
@@ -3773,6 +3776,37 @@ dxgkio_open_resource_nt(struct dxgprocess *process,
return ret;
}
+static int
+dxgkio_share_object_with_host(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_shareobjectwithhost args;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_share_object_with_host(process, &args);
+ if (ret) {
+ DXG_ERR("dxgvmb_send_share_object_with_host failed");
+ goto cleanup;
+ }
+
+ ret = copy_to_user(inargs, &args, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy data to user");
+ ret = -EINVAL;
+ }
+
+cleanup:
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static struct ioctl_desc ioctls[] = {
/* 0x00 */ {},
/* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID},
@@ -3850,7 +3884,7 @@ static struct ioctl_desc ioctls[] = {
LX_DXQUERYRESOURCEINFOFROMNTHANDLE},
/* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE},
/* 0x43 */ {},
-/* 0x44 */ {},
+/* 0x44 */ {dxgkio_share_object_with_host, LX_DXSHAREOBJECTWITHHOST},
/* 0x45 */ {},
};
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 9238115d165d..895861505e6e 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -952,6 +952,13 @@ struct d3dkmt_enumadapters3 {
#endif
};
+struct d3dkmt_shareobjectwithhost {
+ struct d3dkmthandle device_handle;
+ struct d3dkmthandle object_handle;
+ __u64 reserved;
+ __u64 object_vail_nt_handle;
+};
+
/*
* Dxgkrnl Graphics Port Driver ioctl definitions
*
@@ -1021,5 +1028,7 @@ struct d3dkmt_enumadapters3 {
_IOWR(0x47, 0x41, struct d3dkmt_queryresourceinfofromnthandle)
#define LX_DXOPENRESOURCEFROMNTHANDLE \
_IOWR(0x47, 0x42, struct d3dkmt_openresourcefromnthandle)
+#define LX_DXSHAREOBJECTWITHHOST \
+ _IOWR(0x47, 0x44, struct d3dkmt_shareobjectwithhost)
#endif /* _D3DKMTHK_H */
[PATCH 16/55] drivers: hv: dxgkrnl: Query the dxgdevice state
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement the ioctl to query the dxgdevice state - LX_DXGETDEVICESTATE.
The IOCTL is used to query the state of the given dxgdevice object (active,
error, etc.).
Queries of the dxgdevice execution state can be high frequency. The
following method is used to avoid sending a synchronous VMBus
message to the host for every call:
- When a dxgdevice is created, a pointer to dxgglobal->device_state_counter
is sent to the host.
- Every time the device state changes on the host, the host sends
an asynchronous message to the guest (DXGK_VMBCOMMAND_SETGUESTDATA) and
the guest increments the device_state_counter value.
- The dxgdevice object has an execution_state_counter member, which is
equal to the dxgglobal->device_state_counter value at the time when
LX_DXGETDEVICESTATE was last processed.
- If execution_state_counter differs from device_state_counter, the
dxgk_vmbcommand_getdevicestate VMBus message is sent to the host.
Otherwise, the cached value is returned to the caller.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 11 ++++
drivers/hv/dxgkrnl/dxgmodule.c | 1 -
drivers/hv/dxgkrnl/dxgvmbus.c | 68 ++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 26 +++++++++
drivers/hv/dxgkrnl/ioctl.c | 66 ++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 101 +++++++++++++++++++++++++++++----
6 files changed, 261 insertions(+), 12 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index a39d11d76e41..b131c3b43838 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -268,12 +268,18 @@ void dxgsyncobject_destroy(struct dxgprocess *process,
void dxgsyncobject_stop(struct dxgsyncobject *syncobj);
void dxgsyncobject_release(struct kref *refcount);
+/*
+ * device_state_counter - incremented every time the execution state of
+ * a DXGDEVICE is changed in the host. Used to optimize access to the
+ * device execution state.
+ */
struct dxgglobal {
struct dxgdriver *drvdata;
struct dxgvmbuschannel channel;
struct hv_device *hdev;
u32 num_adapters;
u32 vmbus_ver; /* Interface version */
+ atomic_t device_state_counter;
struct resource *mem;
u64 mmiospace_base;
u64 mmiospace_size;
@@ -512,6 +518,7 @@ struct dxgdevice {
struct list_head syncobj_list_head;
struct d3dkmthandle handle;
enum d3dkmt_deviceexecution_state execution_state;
+ int execution_state_counter;
u32 handle_valid;
};
@@ -849,6 +856,10 @@ int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process,
struct d3dkmt_opensyncobjectfromnthandle2
*args,
struct dxgsyncobject *syncobj);
+int dxgvmb_send_get_device_state(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_getdevicestate *args,
+ struct d3dkmt_getdevicestate *__user inargs);
int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process,
struct d3dkmthandle object,
struct d3dkmthandle *shared_handle);
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index 8cbe1095599f..5c364a46b65f 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -827,7 +827,6 @@ static struct dxgglobal *dxgglobal_create(void)
#ifdef DEBUG
dxgk_validate_ioctls();
#endif
-
return dxgglobal;
}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 67a16de622e0..ed800dc09180 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -281,6 +281,24 @@ static void command_vm_to_host_init1(struct dxgkvmb_command_vm_to_host *command,
command->channel_type = DXGKVMB_VM_TO_HOST;
}
+static void set_guest_data(struct dxgkvmb_command_host_to_vm *packet,
+ u32 packet_length)
+{
+ struct dxgkvmb_command_setguestdata *command = (void *)packet;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ DXG_TRACE("Setting guest data: %d %d %p %p",
+ command->data_type,
+ command->data32,
+ command->guest_pointer,
+ &dxgglobal->device_state_counter);
+ if (command->data_type == SETGUESTDATA_DATATYPE_DWORD &&
+ command->guest_pointer == &dxgglobal->device_state_counter &&
+ command->data32 != 0) {
+ atomic_inc(&dxgglobal->device_state_counter);
+ }
+}
+
static void signal_guest_event(struct dxgkvmb_command_host_to_vm *packet,
u32 packet_length)
{
@@ -311,6 +329,9 @@ static void process_inband_packet(struct dxgvmbuschannel *channel,
DXG_TRACE("global packet %d",
packet->command_type);
switch (packet->command_type) {
+ case DXGK_VMBCOMMAND_SETGUESTDATA:
+ set_guest_data(packet, packet_length);
+ break;
case DXGK_VMBCOMMAND_SIGNALGUESTEVENT:
case DXGK_VMBCOMMAND_SIGNALGUESTEVENTPASSIVE:
signal_guest_event(packet, packet_length);
@@ -1028,6 +1049,7 @@ struct d3dkmthandle dxgvmb_send_create_device(struct dxgadapter *adapter,
struct dxgkvmb_command_createdevice *command;
struct dxgkvmb_command_createdevice_return result = { };
struct dxgvmbusmsg msg;
+ struct dxgglobal *dxgglobal = dxggbl();
ret = init_message(&msg, adapter, process, sizeof(*command));
if (ret)
@@ -1037,6 +1059,7 @@ struct d3dkmthandle dxgvmb_send_create_device(struct dxgadapter *adapter,
command_vgpu_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_CREATEDEVICE,
process->host_handle);
command->flags = args->flags;
+ command->error_code = &dxgglobal->device_state_counter;
ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
&result, sizeof(result));
@@ -1806,6 +1829,51 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_get_device_state(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_getdevicestate *args,
+ struct d3dkmt_getdevicestate *__user output)
+{
+ int ret;
+ struct dxgkvmb_command_getdevicestate *command;
+ struct dxgkvmb_command_getdevicestate_return result = { };
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_GETDEVICESTATE,
+ process->host_handle);
+ command->args = *args;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ &result, sizeof(result));
+ if (ret < 0)
+ goto cleanup;
+
+ ret = ntstatus2int(result.status);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = copy_to_user(output, &result.args, sizeof(result.args));
+ if (ret) {
+ DXG_ERR("failed to copy output args");
+ ret = -EINVAL;
+ }
+
+ if (args->state_type == _D3DKMT_DEVICESTATE_EXECUTION)
+ args->execution_state = result.args.execution_state;
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_open_resource(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmthandle device,
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index c1f693917d99..6ca1068b0d4c 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -172,6 +172,22 @@ struct dxgkvmb_command_signalguestevent {
bool dereference_event;
};
+enum set_guestdata_type {
+ SETGUESTDATA_DATATYPE_DWORD = 0,
+ SETGUESTDATA_DATATYPE_UINT64 = 1
+};
+
+struct dxgkvmb_command_setguestdata {
+ struct dxgkvmb_command_host_to_vm hdr;
+ void *guest_pointer;
+ union {
+ u64 data64;
+ u32 data32;
+ };
+ u32 dereference : 1;
+ u32 data_type : 4;
+};
+
struct dxgkvmb_command_opensyncobject {
struct dxgkvmb_command_vm_to_host hdr;
struct d3dkmthandle device;
@@ -574,6 +590,16 @@ struct dxgkvmb_command_destroyhwqueue {
struct d3dkmthandle hwqueue;
};
+struct dxgkvmb_command_getdevicestate {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_getdevicestate args;
+};
+
+struct dxgkvmb_command_getdevicestate_return {
+ struct d3dkmt_getdevicestate args;
+ struct ntstatus status;
+};
+
struct dxgkvmb_command_shareobjectwithhost {
struct dxgkvmb_command_vm_to_host hdr;
struct d3dkmthandle device_handle;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index ac052836ce27..26d410fd6e99 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -3142,6 +3142,70 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs)
+{
+ int ret;
+ struct d3dkmt_getdevicestate args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ int global_device_state_counter = 0;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ if (args.state_type == _D3DKMT_DEVICESTATE_EXECUTION) {
+ global_device_state_counter =
+ atomic_read(&dxgglobal->device_state_counter);
+ if (device->execution_state_counter ==
+ global_device_state_counter) {
+ args.execution_state = device->execution_state;
+ ret = copy_to_user(inargs, &args, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy args to user");
+ ret = -EINVAL;
+ }
+ goto cleanup;
+ }
+ }
+
+ ret = dxgvmb_send_get_device_state(process, adapter, &args, inargs);
+
+ if (ret == 0 && args.state_type == _D3DKMT_DEVICESTATE_EXECUTION) {
+ device->execution_state = args.execution_state;
+ device->execution_state_counter = global_device_state_counter;
+ }
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+ if (ret < 0)
+ DXG_ERR("Failed to get device state %x", ret);
+
+ return ret;
+}
+
static int
dxgsharedsyncobj_get_host_nt_handle(struct dxgsharedsyncobject *syncobj,
struct dxgprocess *process,
@@ -3822,7 +3886,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x0b */ {},
/* 0x0c */ {},
/* 0x0d */ {},
-/* 0x0e */ {},
+/* 0x0e */ {dxgkio_get_device_state, LX_DXGETDEVICESTATE},
/* 0x0f */ {dxgkio_submit_command, LX_DXSUBMITCOMMAND},
/* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT},
/* 0x11 */ {dxgkio_signal_sync_object, LX_DXSIGNALSYNCHRONIZATIONOBJECT},
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 895861505e6e..8a013b07e88a 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -236,6 +236,95 @@ struct d3dddi_destroypagingqueue {
struct d3dkmthandle paging_queue;
};
+enum dxgk_render_pipeline_stage {
+ _DXGK_RENDER_PIPELINE_STAGE_UNKNOWN = 0,
+ _DXGK_RENDER_PIPELINE_STAGE_INPUT_ASSEMBLER = 1,
+ _DXGK_RENDER_PIPELINE_STAGE_VERTEX_SHADER = 2,
+ _DXGK_RENDER_PIPELINE_STAGE_GEOMETRY_SHADER = 3,
+ _DXGK_RENDER_PIPELINE_STAGE_STREAM_OUTPUT = 4,
+ _DXGK_RENDER_PIPELINE_STAGE_RASTERIZER = 5,
+ _DXGK_RENDER_PIPELINE_STAGE_PIXEL_SHADER = 6,
+ _DXGK_RENDER_PIPELINE_STAGE_OUTPUT_MERGER = 7,
+};
+
+enum dxgk_page_fault_flags {
+ _DXGK_PAGE_FAULT_WRITE = 0x1,
+ _DXGK_PAGE_FAULT_FENCE_INVALID = 0x2,
+ _DXGK_PAGE_FAULT_ADAPTER_RESET_REQUIRED = 0x4,
+ _DXGK_PAGE_FAULT_ENGINE_RESET_REQUIRED = 0x8,
+ _DXGK_PAGE_FAULT_FATAL_HARDWARE_ERROR = 0x10,
+ _DXGK_PAGE_FAULT_IOMMU = 0x20,
+ _DXGK_PAGE_FAULT_HW_CONTEXT_VALID = 0x40,
+ _DXGK_PAGE_FAULT_PROCESS_HANDLE_VALID = 0x80,
+};
+
+enum dxgk_general_error_code {
+ _DXGK_GENERAL_ERROR_PAGE_FAULT = 0,
+ _DXGK_GENERAL_ERROR_INVALID_INSTRUCTION = 1,
+};
+
+struct dxgk_fault_error_code {
+ union {
+ struct {
+ __u32 is_device_specific_code:1;
+ enum dxgk_general_error_code general_error_code:31;
+ };
+ struct {
+ __u32 is_device_specific_code_reserved_bit:1;
+ __u32 device_specific_code:31;
+ };
+ };
+};
+
+struct d3dkmt_devicereset_state {
+ union {
+ struct {
+ __u32 desktop_switched:1;
+ __u32 reserved:31;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dkmt_devicepagefault_state {
+ __u64 faulted_primitive_api_sequence_number;
+ enum dxgk_render_pipeline_stage faulted_pipeline_stage;
+ __u32 faulted_bind_table_entry;
+ enum dxgk_page_fault_flags page_fault_flags;
+ struct dxgk_fault_error_code fault_error_code;
+ __u64 faulted_virtual_address;
+};
+
+enum d3dkmt_deviceexecution_state {
+ _D3DKMT_DEVICEEXECUTION_ACTIVE = 1,
+ _D3DKMT_DEVICEEXECUTION_RESET = 2,
+ _D3DKMT_DEVICEEXECUTION_HUNG = 3,
+ _D3DKMT_DEVICEEXECUTION_STOPPED = 4,
+ _D3DKMT_DEVICEEXECUTION_ERROR_OUTOFMEMORY = 5,
+ _D3DKMT_DEVICEEXECUTION_ERROR_DMAFAULT = 6,
+ _D3DKMT_DEVICEEXECUTION_ERROR_DMAPAGEFAULT = 7,
+};
+
+enum d3dkmt_devicestate_type {
+ _D3DKMT_DEVICESTATE_EXECUTION = 1,
+ _D3DKMT_DEVICESTATE_PRESENT = 2,
+ _D3DKMT_DEVICESTATE_RESET = 3,
+ _D3DKMT_DEVICESTATE_PRESENT_DWM = 4,
+ _D3DKMT_DEVICESTATE_PAGE_FAULT = 5,
+ _D3DKMT_DEVICESTATE_PRESENT_QUEUE = 6,
+};
+
+struct d3dkmt_getdevicestate {
+ struct d3dkmthandle device;
+ enum d3dkmt_devicestate_type state_type;
+ union {
+ enum d3dkmt_deviceexecution_state execution_state;
+ struct d3dkmt_devicereset_state reset_state;
+ struct d3dkmt_devicepagefault_state page_fault_state;
+ char alignment[48];
+ };
+};
+
enum d3dkmdt_gdisurfacetype {
_D3DKMDT_GDISURFACE_INVALID = 0,
_D3DKMDT_GDISURFACE_TEXTURE = 1,
@@ -759,16 +848,6 @@ struct d3dkmt_queryadapterinfo {
__u32 private_data_size;
};
-enum d3dkmt_deviceexecution_state {
- _D3DKMT_DEVICEEXECUTION_ACTIVE = 1,
- _D3DKMT_DEVICEEXECUTION_RESET = 2,
- _D3DKMT_DEVICEEXECUTION_HUNG = 3,
- _D3DKMT_DEVICEEXECUTION_STOPPED = 4,
- _D3DKMT_DEVICEEXECUTION_ERROR_OUTOFMEMORY = 5,
- _D3DKMT_DEVICEEXECUTION_ERROR_DMAFAULT = 6,
- _D3DKMT_DEVICEEXECUTION_ERROR_DMAPAGEFAULT = 7,
-};
-
struct d3dddi_openallocationinfo2 {
struct d3dkmthandle allocation;
#ifdef __KERNEL__
@@ -978,6 +1057,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue)
#define LX_DXQUERYADAPTERINFO \
_IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
+#define LX_DXGETDEVICESTATE \
+ _IOWR(0x47, 0x0e, struct d3dkmt_getdevicestate)
#define LX_DXSUBMITCOMMAND \
_IOWR(0x47, 0x0f, struct d3dkmt_submitcommand)
#define LX_DXCREATESYNCHRONIZATIONOBJECT \
[PATCH 17/55] drivers: hv: dxgkrnl: Map(unmap) CPU address to device allocation
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls to map/unmap CPU virtual addresses to compute device
allocations - LX_DXLOCK2 and LX_DXUNLOCK2.
The LX_DXLOCK2 ioctl maps a CPU virtual address to a compute device
allocation. The allocation could be located in system memory or local
device memory on the host. When the device allocation is created
from the guest system memory (existing sysmem allocation), the
allocation CPU address is known and is returned to the caller.
For other CPU visible allocations the code flow is the following:
1. A VMBus message is sent to the host to map the allocation.
2. The host allocates a portion of the guest IO space and maps it
to the allocation backing store. The IO space address of the
allocation is returned to the guest.
3. The guest allocates a CPU virtual address and maps it to the IO
space (see the dxg_map_iospace function).
4. The CPU VA is returned to the caller.
cpu_address_mapped and cpu_address_refcount are used to track how
many times an allocation was mapped.
The LX_DXUNLOCK2 ioctl unmaps a CPU virtual address from a compute
device allocation.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 11 +++
drivers/hv/dxgkrnl/dxgkrnl.h | 14 +++
drivers/hv/dxgkrnl/dxgvmbus.c | 107 +++++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 19 ++++
drivers/hv/dxgkrnl/ioctl.c | 160 +++++++++++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 30 ++++++
6 files changed, 339 insertions(+), 2 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index 410f08768bad..23f00db7637e 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -885,6 +885,15 @@ void dxgallocation_stop(struct dxgallocation *alloc)
vfree(alloc->pages);
alloc->pages = NULL;
}
+ dxgprocess_ht_lock_exclusive_down(alloc->process);
+ if (alloc->cpu_address_mapped) {
+ dxg_unmap_iospace(alloc->cpu_address,
+ alloc->num_pages << PAGE_SHIFT);
+ alloc->cpu_address_mapped = false;
+ alloc->cpu_address = NULL;
+ alloc->cpu_address_refcount = 0;
+ }
+ dxgprocess_ht_lock_exclusive_up(alloc->process);
}
void dxgallocation_free_handle(struct dxgallocation *alloc)
@@ -932,6 +941,8 @@ else
#endif
if (alloc->priv_drv_data)
vfree(alloc->priv_drv_data);
+ if (alloc->cpu_address_mapped)
+ pr_err("Alloc IO space is mapped: %p", alloc);
kfree(alloc);
}
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index b131c3b43838..1d6b552f1c1a 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -708,6 +708,8 @@ struct dxgallocation {
struct d3dkmthandle alloc_handle;
/* Set to 1 when allocation belongs to resource. */
u32 resource_owner:1;
+ /* Set to 1 when 'cpu_address' is mapped to the IO space. */
+ u32 cpu_address_mapped:1;
/* Set to 1 when the allocatio is mapped as cached */
u32 cached:1;
u32 handle_valid:1;
@@ -719,6 +721,11 @@ struct dxgallocation {
#endif
/* Number of pages in the 'pages' array */
u32 num_pages;
+ /*
+ * How many times dxgk_lock2 is called for the allocation, which is
+ * mapped to IO space.
+ */
+ u32 cpu_address_refcount;
/*
* CPU address from the existing sysmem allocation, or
* mapped to the CPU visible backing store in the IO space
@@ -837,6 +844,13 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process,
d3dkmt_waitforsynchronizationobjectfromcpu
*args,
u64 cpu_event);
+int dxgvmb_send_lock2(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_lock2 *args,
+ struct d3dkmt_lock2 *__user outargs);
+int dxgvmb_send_unlock2(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_unlock2 *args);
int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_createhwqueue *args,
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index ed800dc09180..a80f84d9065a 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -2354,6 +2354,113 @@ int dxgvmb_send_wait_sync_object_gpu(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_lock2(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_lock2 *args,
+ struct d3dkmt_lock2 *__user outargs)
+{
+ int ret;
+ struct dxgkvmb_command_lock2 *command;
+ struct dxgkvmb_command_lock2_return result = { };
+ struct dxgallocation *alloc = NULL;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_LOCK2, process->host_handle);
+ command->args = *args;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ &result, sizeof(result));
+ if (ret < 0)
+ goto cleanup;
+
+ ret = ntstatus2int(result.status);
+ if (ret < 0)
+ goto cleanup;
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ alloc = hmgrtable_get_object_by_type(&process->handle_table,
+ HMGRENTRY_TYPE_DXGALLOCATION,
+ args->allocation);
+ if (alloc == NULL) {
+ DXG_ERR("invalid alloc");
+ ret = -EINVAL;
+ } else {
+ if (alloc->cpu_address) {
+ args->data = alloc->cpu_address;
+ if (alloc->cpu_address_mapped)
+ alloc->cpu_address_refcount++;
+ } else {
+ u64 offset = (u64)result.cpu_visible_buffer_offset;
+
+ args->data = dxg_map_iospace(offset,
+ alloc->num_pages << PAGE_SHIFT,
+ PROT_READ | PROT_WRITE, alloc->cached);
+ if (args->data) {
+ alloc->cpu_address_refcount = 1;
+ alloc->cpu_address_mapped = true;
+ alloc->cpu_address = args->data;
+ }
+ }
+ if (args->data == NULL) {
+ ret = -ENOMEM;
+ } else {
+ ret = copy_to_user(&outargs->data, &args->data,
+ sizeof(args->data));
+ if (ret) {
+ DXG_ERR("failed to copy data");
+ ret = -EINVAL;
+ alloc->cpu_address_refcount--;
+ if (alloc->cpu_address_refcount == 0) {
+ dxg_unmap_iospace(alloc->cpu_address,
+ alloc->num_pages << PAGE_SHIFT);
+ alloc->cpu_address_mapped = false;
+ alloc->cpu_address = NULL;
+ }
+ }
+ }
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_unlock2(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_unlock2 *args)
+{
+ int ret;
+ struct dxgkvmb_command_unlock2 *command;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_UNLOCK2,
+ process->host_handle);
+ command->args = *args;
+
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_createhwqueue *args,
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 6ca1068b0d4c..447bb1ba391b 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -570,6 +570,25 @@ struct dxgkvmb_command_waitforsyncobjectfromgpu {
/* struct d3dkmthandle ObjectHandles[object_count] */
};
+struct dxgkvmb_command_lock2 {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_lock2 args;
+ bool use_legacy_lock;
+ u32 flags;
+ u32 priv_drv_data;
+};
+
+struct dxgkvmb_command_lock2_return {
+ struct ntstatus status;
+ void *cpu_visible_buffer_offset;
+};
+
+struct dxgkvmb_command_unlock2 {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_unlock2 args;
+ bool use_legacy_unlock;
+};
+
/* Returns the same structure */
struct dxgkvmb_command_createhwqueue {
struct dxgkvmb_command_vgpu_to_host hdr;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 26d410fd6e99..37e218443310 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -3142,6 +3142,162 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_lock2(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_lock2 args;
+ struct d3dkmt_lock2 *__user result = inargs;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+ struct dxgallocation *alloc = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ args.data = NULL;
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ alloc = hmgrtable_get_object_by_type(&process->handle_table,
+ HMGRENTRY_TYPE_DXGALLOCATION,
+ args.allocation);
+ if (alloc == NULL) {
+ ret = -EINVAL;
+ } else {
+ if (alloc->cpu_address) {
+ ret = copy_to_user(&result->data,
+ &alloc->cpu_address,
+ sizeof(args.data));
+ if (ret == 0) {
+ args.data = alloc->cpu_address;
+ if (alloc->cpu_address_mapped)
+ alloc->cpu_address_refcount++;
+ } else {
+ DXG_ERR("Failed to copy cpu address");
+ ret = -EINVAL;
+ }
+ }
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+ if (ret < 0)
+ goto cleanup;
+ if (args.data)
+ goto success;
+
+ /*
+ * The call acquires reference on the device. It is safe to access the
+ * adapter, because the device holds reference on it.
+ */
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_lock2(process, adapter, &args, result);
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+success:
+ DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret);
+ return ret;
+}
+
+static int
+dxgkio_unlock2(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_unlock2 args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+ struct dxgallocation *alloc = NULL;
+ bool done = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ alloc = hmgrtable_get_object_by_type(&process->handle_table,
+ HMGRENTRY_TYPE_DXGALLOCATION,
+ args.allocation);
+ if (alloc == NULL) {
+ ret = -EINVAL;
+ } else {
+ if (alloc->cpu_address == NULL) {
+ DXG_ERR("Allocation is not locked: %p", alloc);
+ ret = -EINVAL;
+ } else if (alloc->cpu_address_mapped) {
+ if (alloc->cpu_address_refcount > 0) {
+ alloc->cpu_address_refcount--;
+ if (alloc->cpu_address_refcount != 0) {
+ done = true;
+ } else {
+ dxg_unmap_iospace(alloc->cpu_address,
+ alloc->num_pages << PAGE_SHIFT);
+ alloc->cpu_address_mapped = false;
+ alloc->cpu_address = NULL;
+ }
+ } else {
+ DXG_ERR("Invalid cpu access refcount");
+ done = true;
+ }
+ }
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+ if (done)
+ goto success;
+ if (ret < 0)
+ goto cleanup;
+
+ /*
+ * The call acquires reference on the device. It is safe to access the
+ * adapter, because the device holds reference on it.
+ */
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_unlock2(process, adapter, &args);
+
+cleanup:
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+success:
+ DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret);
+ return ret;
+}
+
static int
dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs)
{
@@ -3909,7 +4065,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x22 */ {},
/* 0x23 */ {},
/* 0x24 */ {},
-/* 0x25 */ {},
+/* 0x25 */ {dxgkio_lock2, LX_DXLOCK2},
/* 0x26 */ {},
/* 0x27 */ {},
/* 0x28 */ {},
@@ -3932,7 +4088,7 @@ static struct ioctl_desc ioctls[] = {
LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE},
/* 0x36 */ {dxgkio_submit_wait_to_hwqueue,
LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE},
-/* 0x37 */ {},
+/* 0x37 */ {dxgkio_unlock2, LX_DXUNLOCK2},
/* 0x38 */ {},
/* 0x39 */ {},
/* 0x3a */ {dxgkio_wait_sync_object_cpu,
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 8a013b07e88a..b498f09e694d 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -668,6 +668,32 @@ struct d3dkmt_submitcommandtohwqueue {
#endif
};
+struct d3dddicb_lock2flags {
+ union {
+ struct {
+ __u32 reserved:32;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dkmt_lock2 {
+ struct d3dkmthandle device;
+ struct d3dkmthandle allocation;
+ struct d3dddicb_lock2flags flags;
+ __u32 reserved;
+#ifdef __KERNEL__
+ void *data;
+#else
+ __u64 data;
+#endif
+};
+
+struct d3dkmt_unlock2 {
+ struct d3dkmthandle device;
+ struct d3dkmthandle allocation;
+};
+
enum d3dkmt_standardallocationtype {
_D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1,
_D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2,
@@ -1083,6 +1109,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x19, struct d3dkmt_destroydevice)
#define LX_DXDESTROYSYNCHRONIZATIONOBJECT \
_IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject)
+#define LX_DXLOCK2 \
+ _IOWR(0x47, 0x25, struct d3dkmt_lock2)
#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \
_IOWR(0x47, 0x31, struct d3dkmt_signalsynchronizationobjectfromcpu)
#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU \
@@ -1095,6 +1123,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x35, struct d3dkmt_submitsignalsyncobjectstohwqueue)
#define LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE \
_IOWR(0x47, 0x36, struct d3dkmt_submitwaitforsyncobjectstohwqueue)
+#define LX_DXUNLOCK2 \
+ _IOWR(0x47, 0x37, struct d3dkmt_unlock2)
#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \
_IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu)
#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \
* [PATCH 18/55] drivers: hv: dxgkrnl: Manage device allocation properties
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
2026-03-19 20:24 ` [PATCH 17/55] drivers: hv: dxgkrnl: Map(unmap) CPU address to device allocation Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 19/55] drivers: hv: dxgkrnl: Flush heap transitions Eric Curtin
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls to manage properties of a compute device allocation:
- LX_DXUPDATEALLOCPROPERTY,
- LX_DXSETALLOCATIONPRIORITY,
- LX_DXGETALLOCATIONPRIORITY,
- LX_DXQUERYALLOCATIONRESIDENCY,
- LX_DXCHANGEVIDEOMEMORYRESERVATION.
The LX_DXUPDATEALLOCPROPERTY ioctl requests the host to update
various properties of a compute device allocation.
The LX_DXSETALLOCATIONPRIORITY and LX_DXGETALLOCATIONPRIORITY ioctls
are used to set/get the allocation priority, which defines how
important it is for the allocation to reside in local device memory.
The LX_DXQUERYALLOCATIONRESIDENCY ioctl queries whether the allocation
is located in compute device accessible memory.
The LX_DXCHANGEVIDEOMEMORYRESERVATION ioctl changes compute device
memory reservation of an allocation.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 21 +++
drivers/hv/dxgkrnl/dxgvmbus.c | 300 ++++++++++++++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 50 ++++++
drivers/hv/dxgkrnl/ioctl.c | 217 +++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 127 ++++++++++++++
5 files changed, 708 insertions(+), 7 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 1d6b552f1c1a..7fefe4617488 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -851,6 +851,23 @@ int dxgvmb_send_lock2(struct dxgprocess *process,
int dxgvmb_send_unlock2(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_unlock2 *args);
+int dxgvmb_send_update_alloc_property(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dddi_updateallocproperty *args,
+ struct d3dddi_updateallocproperty *__user
+ inargs);
+int dxgvmb_send_set_allocation_priority(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_setallocationpriority *a);
+int dxgvmb_send_get_allocation_priority(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_getallocationpriority *a);
+int dxgvmb_send_change_vidmem_reservation(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle other_process,
+ struct
+ d3dkmt_changevideomemoryreservation
+ *args);
int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_createhwqueue *args,
@@ -870,6 +887,10 @@ int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process,
struct d3dkmt_opensyncobjectfromnthandle2
*args,
struct dxgsyncobject *syncobj);
+int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_queryallocationresidency
+ *args);
int dxgvmb_send_get_device_state(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_getdevicestate *args,
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index a80f84d9065a..dd2c97fee27b 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1829,6 +1829,79 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_queryallocationresidency
+ *args)
+{
+ int ret = -EINVAL;
+ struct dxgkvmb_command_queryallocationresidency *command = NULL;
+ u32 cmd_size = sizeof(*command);
+ u32 alloc_size = 0;
+ u32 result_allocation_size = 0;
+ struct dxgkvmb_command_queryallocationresidency_return *result = NULL;
+ u32 result_size = sizeof(*result);
+ struct dxgvmbusmsgres msg = {.hdr = NULL};
+
+ if (args->allocation_count > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args->allocation_count) {
+ alloc_size = args->allocation_count *
+ sizeof(struct d3dkmthandle);
+ cmd_size += alloc_size;
+ result_allocation_size = args->allocation_count *
+ sizeof(args->residency_status[0]);
+ } else {
+ result_allocation_size = sizeof(args->residency_status[0]);
+ }
+ result_size += result_allocation_size;
+
+ ret = init_message_res(&msg, adapter, process, cmd_size, result_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+ result = msg.res;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_QUERYALLOCATIONRESIDENCY,
+ process->host_handle);
+ command->args = *args;
+ if (alloc_size) {
+ ret = copy_from_user(&command[1], args->allocations,
+ alloc_size);
+ if (ret) {
+ DXG_ERR("failed to copy alloc handles");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ result, msg.res_size);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = ntstatus2int(result->status);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = copy_to_user(args->residency_status, &result[1],
+ result_allocation_size);
+ if (ret) {
+ DXG_ERR("failed to copy residency status");
+ ret = -EINVAL;
+ }
+
+cleanup:
+ free_message((struct dxgvmbusmsg *)&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_get_device_state(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_getdevicestate *args,
@@ -2461,6 +2534,233 @@ int dxgvmb_send_unlock2(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_update_alloc_property(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dddi_updateallocproperty *args,
+ struct d3dddi_updateallocproperty *__user
+ inargs)
+{
+ int ret;
+ int ret1;
+ struct dxgkvmb_command_updateallocationproperty *command;
+ struct dxgkvmb_command_updateallocationproperty_return result = { };
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_UPDATEALLOCATIONPROPERTY,
+ process->host_handle);
+ command->args = *args;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ &result, sizeof(result));
+
+ if (ret < 0)
+ goto cleanup;
+ ret = ntstatus2int(result.status);
+	/* STATUS_PENDING is a success code > 0 */
+ if (ret == STATUS_PENDING) {
+ ret1 = copy_to_user(&inargs->paging_fence_value,
+ &result.paging_fence_value,
+ sizeof(u64));
+ if (ret1) {
+ DXG_ERR("failed to copy paging fence");
+ ret = -EINVAL;
+ }
+ }
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_set_allocation_priority(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_setallocationpriority *args)
+{
+ u32 cmd_size = sizeof(struct dxgkvmb_command_setallocationpriority);
+ u32 alloc_size = 0;
+ u32 priority_size = 0;
+ struct dxgkvmb_command_setallocationpriority *command;
+ int ret;
+ struct d3dkmthandle *allocations;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ if (args->allocation_count > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ if (args->resource.v) {
+ priority_size = sizeof(u32);
+ if (args->allocation_count != 0) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ } else {
+ if (args->allocation_count == 0) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ alloc_size = args->allocation_count *
+ sizeof(struct d3dkmthandle);
+ cmd_size += alloc_size;
+ priority_size = sizeof(u32) * args->allocation_count;
+ }
+ cmd_size += priority_size;
+
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_SETALLOCATIONPRIORITY,
+ process->host_handle);
+ command->device = args->device;
+ command->allocation_count = args->allocation_count;
+ command->resource = args->resource;
+ allocations = (struct d3dkmthandle *) &command[1];
+ ret = copy_from_user(allocations, args->allocation_list,
+ alloc_size);
+ if (ret) {
+ DXG_ERR("failed to copy alloc handle");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ ret = copy_from_user((u8 *) allocations + alloc_size,
+ args->priorities, priority_size);
+ if (ret) {
+ DXG_ERR("failed to copy alloc priority");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_get_allocation_priority(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_getallocationpriority *args)
+{
+ u32 cmd_size = sizeof(struct dxgkvmb_command_getallocationpriority);
+ u32 result_size;
+ u32 alloc_size = 0;
+ u32 priority_size = 0;
+ struct dxgkvmb_command_getallocationpriority *command;
+ struct dxgkvmb_command_getallocationpriority_return *result;
+ int ret;
+ struct d3dkmthandle *allocations;
+ struct dxgvmbusmsgres msg = {.hdr = NULL};
+
+ if (args->allocation_count > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ if (args->resource.v) {
+ priority_size = sizeof(u32);
+ if (args->allocation_count != 0) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ } else {
+ if (args->allocation_count == 0) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ alloc_size = args->allocation_count *
+ sizeof(struct d3dkmthandle);
+ cmd_size += alloc_size;
+ priority_size = sizeof(u32) * args->allocation_count;
+ }
+ result_size = sizeof(*result) + priority_size;
+
+ ret = init_message_res(&msg, adapter, process, cmd_size, result_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+ result = msg.res;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_GETALLOCATIONPRIORITY,
+ process->host_handle);
+ command->device = args->device;
+ command->allocation_count = args->allocation_count;
+ command->resource = args->resource;
+ allocations = (struct d3dkmthandle *) &command[1];
+ ret = copy_from_user(allocations, args->allocation_list,
+ alloc_size);
+ if (ret) {
+ DXG_ERR("failed to copy alloc handles");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr,
+ msg.size + msg.res_size,
+ result, msg.res_size);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = ntstatus2int(result->status);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = copy_to_user(args->priorities,
+ (u8 *) result + sizeof(*result),
+ priority_size);
+ if (ret) {
+ DXG_ERR("failed to copy priorities");
+ ret = -EINVAL;
+ }
+
+cleanup:
+ free_message((struct dxgvmbusmsg *)&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_change_vidmem_reservation(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle other_process,
+ struct
+ d3dkmt_changevideomemoryreservation
+ *args)
+{
+ struct dxgkvmb_command_changevideomemoryreservation *command;
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_CHANGEVIDEOMEMORYRESERVATION,
+ process->host_handle);
+ command->args = *args;
+ command->args.process = other_process.v;
+
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_createhwqueue *args,
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 447bb1ba391b..dbb01b9ab066 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -308,6 +308,29 @@ struct dxgkvmb_command_queryadapterinfo_return {
u8 private_data[1];
};
+/* Returns ntstatus */
+struct dxgkvmb_command_setallocationpriority {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ struct d3dkmthandle resource;
+ u32 allocation_count;
+ /* struct d3dkmthandle allocations[allocation_count or 0]; */
+ /* u32 priorities[allocation_count or 1]; */
+};
+
+struct dxgkvmb_command_getallocationpriority {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ struct d3dkmthandle resource;
+ u32 allocation_count;
+ /* struct d3dkmthandle allocations[allocation_count or 0]; */
+};
+
+struct dxgkvmb_command_getallocationpriority_return {
+ struct ntstatus status;
+ /* u32 priorities[allocation_count or 1]; */
+};
+
struct dxgkvmb_command_createdevice {
struct dxgkvmb_command_vgpu_to_host hdr;
struct d3dkmt_createdeviceflags flags;
@@ -589,6 +612,22 @@ struct dxgkvmb_command_unlock2 {
bool use_legacy_unlock;
};
+struct dxgkvmb_command_updateallocationproperty {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dddi_updateallocproperty args;
+};
+
+struct dxgkvmb_command_updateallocationproperty_return {
+ u64 paging_fence_value;
+ struct ntstatus status;
+};
+
+/* Returns ntstatus */
+struct dxgkvmb_command_changevideomemoryreservation {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_changevideomemoryreservation args;
+};
+
/* Returns the same structure */
struct dxgkvmb_command_createhwqueue {
struct dxgkvmb_command_vgpu_to_host hdr;
@@ -609,6 +648,17 @@ struct dxgkvmb_command_destroyhwqueue {
struct d3dkmthandle hwqueue;
};
+struct dxgkvmb_command_queryallocationresidency {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_queryallocationresidency args;
+ /* struct d3dkmthandle allocations[0 or number of allocations] */
+};
+
+struct dxgkvmb_command_queryallocationresidency_return {
+ struct ntstatus status;
+ /* d3dkmt_allocationresidencystatus[NumAllocations] */
+};
+
struct dxgkvmb_command_getdevicestate {
struct dxgkvmb_command_vgpu_to_host hdr;
struct d3dkmt_getdevicestate args;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 37e218443310..b626e2518ff2 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -3214,7 +3214,7 @@ dxgkio_lock2(struct dxgprocess *process, void *__user inargs)
kref_put(&device->device_kref, dxgdevice_release);
success:
- DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret);
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
return ret;
}
@@ -3294,7 +3294,209 @@ dxgkio_unlock2(struct dxgprocess *process, void *__user inargs)
kref_put(&device->device_kref, dxgdevice_release);
success:
- DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret);
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_update_alloc_property(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dddi_updateallocproperty args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGPAGINGQUEUE,
+ args.paging_queue);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_update_alloc_property(process, adapter,
+ &args, inargs);
+
+cleanup:
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_query_alloc_residency(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_queryallocationresidency args;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if ((args.allocation_count == 0) == (args.resource.v == 0)) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+ ret = dxgvmb_send_query_alloc_residency(process, adapter, &args);
+cleanup:
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_set_allocation_priority(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_setallocationpriority args;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+ ret = dxgvmb_send_set_allocation_priority(process, adapter, &args);
+cleanup:
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_get_allocation_priority(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_getallocationpriority args;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+ ret = dxgvmb_send_get_allocation_priority(process, adapter, &args);
+cleanup:
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_changevideomemoryreservation args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ bool adapter_locked = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.process != 0) {
+ DXG_ERR("setting memory reservation for other process");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = dxgprocess_adapter_by_handle(process, args.adapter);
+ if (adapter == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgadapter_acquire_lock_shared(adapter);
+	if (ret < 0)
+		goto cleanup;
+ adapter_locked = true;
+ args.adapter.v = 0;
+ ret = dxgvmb_send_change_vidmem_reservation(process, adapter,
+ zerohandle, &args);
+
+cleanup:
+
+ if (adapter_locked)
+ dxgadapter_release_lock_shared(adapter);
+ if (adapter)
+ kref_put(&adapter->adapter_kref, dxgadapter_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
return ret;
}
@@ -4050,7 +4252,8 @@ static struct ioctl_desc ioctls[] = {
/* 0x13 */ {dxgkio_destroy_allocation, LX_DXDESTROYALLOCATION2},
/* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2},
/* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER},
-/* 0x16 */ {},
+/* 0x16 */ {dxgkio_change_vidmem_reservation,
+ LX_DXCHANGEVIDEOMEMORYRESERVATION},
/* 0x17 */ {},
/* 0x18 */ {dxgkio_create_hwqueue, LX_DXCREATEHWQUEUE},
/* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE},
@@ -4070,11 +4273,11 @@ static struct ioctl_desc ioctls[] = {
/* 0x27 */ {},
/* 0x28 */ {},
/* 0x29 */ {},
-/* 0x2a */ {},
+/* 0x2a */ {dxgkio_query_alloc_residency, LX_DXQUERYALLOCATIONRESIDENCY},
/* 0x2b */ {},
/* 0x2c */ {},
/* 0x2d */ {},
-/* 0x2e */ {},
+/* 0x2e */ {dxgkio_set_allocation_priority, LX_DXSETALLOCATIONPRIORITY},
/* 0x2f */ {},
/* 0x30 */ {},
/* 0x31 */ {dxgkio_signal_sync_object_cpu,
@@ -4089,13 +4292,13 @@ static struct ioctl_desc ioctls[] = {
/* 0x36 */ {dxgkio_submit_wait_to_hwqueue,
LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE},
/* 0x37 */ {dxgkio_unlock2, LX_DXUNLOCK2},
-/* 0x38 */ {},
+/* 0x38 */ {dxgkio_update_alloc_property, LX_DXUPDATEALLOCPROPERTY},
/* 0x39 */ {},
/* 0x3a */ {dxgkio_wait_sync_object_cpu,
LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU},
/* 0x3b */ {dxgkio_wait_sync_object_gpu,
LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU},
-/* 0x3c */ {},
+/* 0x3c */ {dxgkio_get_allocation_priority, LX_DXGETALLOCATIONPRIORITY},
/* 0x3d */ {},
/* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3},
/* 0x3f */ {dxgkio_share_objects, LX_DXSHAREOBJECTS},
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index b498f09e694d..af381101fd90 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -668,6 +668,63 @@ struct d3dkmt_submitcommandtohwqueue {
#endif
};
+struct d3dkmt_setallocationpriority {
+ struct d3dkmthandle device;
+ struct d3dkmthandle resource;
+#ifdef __KERNEL__
+ const struct d3dkmthandle *allocation_list;
+#else
+ __u64 allocation_list;
+#endif
+ __u32 allocation_count;
+ __u32 reserved;
+#ifdef __KERNEL__
+ const __u32 *priorities;
+#else
+ __u64 priorities;
+#endif
+};
+
+struct d3dkmt_getallocationpriority {
+ struct d3dkmthandle device;
+ struct d3dkmthandle resource;
+#ifdef __KERNEL__
+ const struct d3dkmthandle *allocation_list;
+#else
+ __u64 allocation_list;
+#endif
+ __u32 allocation_count;
+ __u32 reserved;
+#ifdef __KERNEL__
+ __u32 *priorities;
+#else
+ __u64 priorities;
+#endif
+};
+
+enum d3dkmt_allocationresidencystatus {
+ _D3DKMT_ALLOCATIONRESIDENCYSTATUS_RESIDENTINGPUMEMORY = 1,
+ _D3DKMT_ALLOCATIONRESIDENCYSTATUS_RESIDENTINSHAREDMEMORY = 2,
+ _D3DKMT_ALLOCATIONRESIDENCYSTATUS_NOTRESIDENT = 3,
+};
+
+struct d3dkmt_queryallocationresidency {
+ struct d3dkmthandle device;
+ struct d3dkmthandle resource;
+#ifdef __KERNEL__
+ struct d3dkmthandle *allocations;
+#else
+ __u64 allocations;
+#endif
+ __u32 allocation_count;
+ __u32 reserved;
+#ifdef __KERNEL__
+ enum d3dkmt_allocationresidencystatus *residency_status;
+#else
+ __u64 residency_status;
+#endif
+};
+
struct d3dddicb_lock2flags {
union {
struct {
@@ -835,6 +892,11 @@ struct d3dkmt_destroyallocation2 {
struct d3dddicb_destroyallocation2flags flags;
};
+enum d3dkmt_memory_segment_group {
+ _D3DKMT_MEMORY_SEGMENT_GROUP_LOCAL = 0,
+ _D3DKMT_MEMORY_SEGMENT_GROUP_NON_LOCAL = 1
+};
+
struct d3dkmt_adaptertype {
union {
struct {
@@ -886,6 +948,61 @@ struct d3dddi_openallocationinfo2 {
__u64 reserved[6];
};
+struct d3dddi_updateallocproperty_flags {
+ union {
+ struct {
+ __u32 accessed_physically:1;
+ __u32 reserved:31;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dddi_segmentpreference {
+ union {
+ struct {
+ __u32 segment_id0:5;
+ __u32 direction0:1;
+ __u32 segment_id1:5;
+ __u32 direction1:1;
+ __u32 segment_id2:5;
+ __u32 direction2:1;
+ __u32 segment_id3:5;
+ __u32 direction3:1;
+ __u32 segment_id4:5;
+ __u32 direction4:1;
+ __u32 reserved:2;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dddi_updateallocproperty {
+ struct d3dkmthandle paging_queue;
+ struct d3dkmthandle allocation;
+ __u32 supported_segment_set;
+ struct d3dddi_segmentpreference preferred_segment;
+ struct d3dddi_updateallocproperty_flags flags;
+ __u64 paging_fence_value;
+ union {
+ struct {
+ __u32 set_accessed_physically:1;
+ __u32 set_supported_segmentSet:1;
+ __u32 set_preferred_segment:1;
+ __u32 reserved:29;
+ };
+ __u32 property_mask_value;
+ };
+};
+
+struct d3dkmt_changevideomemoryreservation {
+ __u64 process;
+ struct d3dkmthandle adapter;
+ enum d3dkmt_memory_segment_group memory_segment_group;
+ __u64 reservation;
+ __u32 physical_adapter_index;
+};
+
struct d3dkmt_createhwqueue {
struct d3dkmthandle context;
struct d3dddi_createhwqueueflags flags;
@@ -1099,6 +1216,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x14, struct d3dkmt_enumadapters2)
#define LX_DXCLOSEADAPTER \
_IOWR(0x47, 0x15, struct d3dkmt_closeadapter)
+#define LX_DXCHANGEVIDEOMEMORYRESERVATION \
+ _IOWR(0x47, 0x16, struct d3dkmt_changevideomemoryreservation)
#define LX_DXCREATEHWQUEUE \
_IOWR(0x47, 0x18, struct d3dkmt_createhwqueue)
#define LX_DXDESTROYHWQUEUE \
@@ -1111,6 +1230,10 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject)
#define LX_DXLOCK2 \
_IOWR(0x47, 0x25, struct d3dkmt_lock2)
+#define LX_DXQUERYALLOCATIONRESIDENCY \
+ _IOWR(0x47, 0x2a, struct d3dkmt_queryallocationresidency)
+#define LX_DXSETALLOCATIONPRIORITY \
+ _IOWR(0x47, 0x2e, struct d3dkmt_setallocationpriority)
#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \
_IOWR(0x47, 0x31, struct d3dkmt_signalsynchronizationobjectfromcpu)
#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU \
@@ -1125,10 +1248,14 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x36, struct d3dkmt_submitwaitforsyncobjectstohwqueue)
#define LX_DXUNLOCK2 \
_IOWR(0x47, 0x37, struct d3dkmt_unlock2)
+#define LX_DXUPDATEALLOCPROPERTY \
+ _IOWR(0x47, 0x38, struct d3dddi_updateallocproperty)
#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \
_IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu)
#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \
_IOWR(0x47, 0x3b, struct d3dkmt_waitforsynchronizationobjectfromgpu)
+#define LX_DXGETALLOCATIONPRIORITY \
+ _IOWR(0x47, 0x3c, struct d3dkmt_getallocationpriority)
#define LX_DXENUMADAPTERS3 \
_IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3)
#define LX_DXSHAREOBJECTS \
* [PATCH 19/55] drivers: hv: dxgkrnl: Flush heap transitions
@ 2026-03-19 20:24 ` Eric Curtin
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement the ioctl to flush heap transitions
(LX_DXFLUSHHEAPTRANSITIONS).
The ioctl is used to ensure that the video memory manager on the host
flushes all internal operations.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 2 +-
drivers/hv/dxgkrnl/dxgkrnl.h | 3 ++
drivers/hv/dxgkrnl/dxgvmbus.c | 23 ++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 5 ++++
drivers/hv/dxgkrnl/ioctl.c | 49 ++++++++++++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 6 ++++
6 files changed, 86 insertions(+), 2 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index 23f00db7637e..6f763e326a65 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -942,7 +942,7 @@ else
if (alloc->priv_drv_data)
vfree(alloc->priv_drv_data);
if (alloc->cpu_address_mapped)
- pr_err("Alloc IO space is mapped: %p", alloc);
+ DXG_ERR("Alloc IO space is mapped: %p", alloc);
kfree(alloc);
}
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 7fefe4617488..ced9dd294f5f 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -882,6 +882,9 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_submitcommandtohwqueue *a);
+int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_flushheaptransitions *arg);
int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process,
struct dxgvmbuschannel *channel,
struct d3dkmt_opensyncobjectfromnthandle2
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index dd2c97fee27b..928fad5f133b 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1829,6 +1829,29 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_flushheaptransitions *args)
+{
+ struct dxgkvmb_command_flushheaptransitions *command;
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_FLUSHHEAPTRANSITIONS,
+ process->host_handle);
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryallocationresidency
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index dbb01b9ab066..d232eb234e2c 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -367,6 +367,11 @@ struct dxgkvmb_command_submitcommandtohwqueue {
/* PrivateDriverData */
};
+/* Returns ntstatus */
+struct dxgkvmb_command_flushheaptransitions {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+};
+
struct dxgkvmb_command_createallocation_allocinfo {
u32 flags;
u32 priv_drv_data_size;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index b626e2518ff2..8b7d00e4c881 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -3500,6 +3500,53 @@ dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs
return ret;
}
+static int
+dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_flushheaptransitions args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ bool adapter_locked = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = dxgprocess_adapter_by_handle(process, args.adapter);
+ if (adapter == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+ adapter_locked = true;
+
+ args.adapter = adapter->host_handle;
+ ret = dxgvmb_send_flush_heap_transitions(process, adapter, &args);
+ if (ret < 0)
+ goto cleanup;
+ ret = copy_to_user(inargs, &args, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy output args");
+ ret = -EINVAL;
+ }
+
+cleanup:
+
+ if (adapter_locked)
+ dxgadapter_release_lock_shared(adapter);
+ if (adapter)
+ kref_put(&adapter->adapter_kref, dxgadapter_release);
+ return ret;
+}
+
static int
dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs)
{
@@ -4262,7 +4309,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x1c */ {dxgkio_destroy_paging_queue, LX_DXDESTROYPAGINGQUEUE},
/* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT},
/* 0x1e */ {},
-/* 0x1f */ {},
+/* 0x1f */ {dxgkio_flush_heap_transitions, LX_DXFLUSHHEAPTRANSITIONS},
/* 0x20 */ {},
/* 0x21 */ {},
/* 0x22 */ {},
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index af381101fd90..873feb951129 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -936,6 +936,10 @@ struct d3dkmt_queryadapterinfo {
__u32 private_data_size;
};
+struct d3dkmt_flushheaptransitions {
+ struct d3dkmthandle adapter;
+};
+
struct d3dddi_openallocationinfo2 {
struct d3dkmthandle allocation;
#ifdef __KERNEL__
@@ -1228,6 +1232,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x19, struct d3dkmt_destroydevice)
#define LX_DXDESTROYSYNCHRONIZATIONOBJECT \
_IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject)
+#define LX_DXFLUSHHEAPTRANSITIONS \
+ _IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions)
#define LX_DXLOCK2 \
_IOWR(0x47, 0x25, struct d3dkmt_lock2)
#define LX_DXQUERYALLOCATIONRESIDENCY \
* [PATCH 20/55] drivers: hv: dxgkrnl: Query video memory information
@ 2026-03-19 20:24 ` Eric Curtin
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement the ioctl to query video memory information from the host
(LX_DXQUERYVIDEOMEMORYINFO).
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 5 +++
drivers/hv/dxgkrnl/dxgvmbus.c | 64 +++++++++++++++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 14 ++++++++
drivers/hv/dxgkrnl/ioctl.c | 50 ++++++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 13 +++++++
5 files changed, 145 insertions(+), 1 deletion(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index ced9dd294f5f..b6a7288a4177 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -894,6 +894,11 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryallocationresidency
*args);
+int dxgvmb_send_query_vidmem_info(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_queryvideomemoryinfo *args,
+ struct d3dkmt_queryvideomemoryinfo
+ *__user iargs);
int dxgvmb_send_get_device_state(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_getdevicestate *args,
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 928fad5f133b..48ff49456057 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1925,6 +1925,70 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_query_vidmem_info(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_queryvideomemoryinfo *args,
+ struct d3dkmt_queryvideomemoryinfo *__user
+ output)
+{
+ int ret;
+ struct dxgkvmb_command_queryvideomemoryinfo *command;
+ struct dxgkvmb_command_queryvideomemoryinfo_return result = { };
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+ command_vgpu_to_host_init2(&command->hdr,
+ dxgk_vmbcommand_queryvideomemoryinfo,
+ process->host_handle);
+ command->adapter = args->adapter;
+ command->memory_segment_group = args->memory_segment_group;
+ command->physical_adapter_index = args->physical_adapter_index;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ &result, sizeof(result));
+ if (ret < 0)
+ goto cleanup;
+
+ ret = copy_to_user(&output->budget, &result.budget,
+ sizeof(output->budget));
+ if (ret) {
+ pr_err("%s failed to copy budget", __func__);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ ret = copy_to_user(&output->current_usage, &result.current_usage,
+ sizeof(output->current_usage));
+ if (ret) {
+ pr_err("%s failed to copy current usage", __func__);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ ret = copy_to_user(&output->current_reservation,
+ &result.current_reservation,
+ sizeof(output->current_reservation));
+ if (ret) {
+ pr_err("%s failed to copy reservation", __func__);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ ret = copy_to_user(&output->available_for_reservation,
+ &result.available_for_reservation,
+ sizeof(output->available_for_reservation));
+ if (ret) {
+ pr_err("%s failed to copy avail reservation", __func__);
+ ret = -EINVAL;
+ }
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ dev_dbg(DXGDEV, "err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_get_device_state(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_getdevicestate *args,
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index d232eb234e2c..a1549983d50f 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -664,6 +664,20 @@ struct dxgkvmb_command_queryallocationresidency_return {
/* d3dkmt_allocationresidencystatus[NumAllocations] */
};
+struct dxgkvmb_command_queryvideomemoryinfo {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle adapter;
+ enum d3dkmt_memory_segment_group memory_segment_group;
+ u32 physical_adapter_index;
+};
+
+struct dxgkvmb_command_queryvideomemoryinfo_return {
+ u64 budget;
+ u64 current_usage;
+ u64 current_reservation;
+ u64 available_for_reservation;
+};
+
struct dxgkvmb_command_getdevicestate {
struct dxgkvmb_command_vgpu_to_host hdr;
struct d3dkmt_getdevicestate args;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 8b7d00e4c881..e692b127e219 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -3547,6 +3547,54 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_query_vidmem_info(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_queryvideomemoryinfo args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ bool adapter_locked = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.process != 0) {
+ DXG_ERR("query vidmem info from another process");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = dxgprocess_adapter_by_handle(process, args.adapter);
+ if (adapter == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+ adapter_locked = true;
+
+ args.adapter = adapter->host_handle;
+ ret = dxgvmb_send_query_vidmem_info(process, adapter, &args, inargs);
+
+cleanup:
+
+ if (adapter_locked)
+ dxgadapter_release_lock_shared(adapter);
+ if (adapter)
+ kref_put(&adapter->adapter_kref, dxgadapter_release);
+ if (ret < 0)
+ DXG_ERR("failed: %x", ret);
+ return ret;
+}
+
static int
dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs)
{
@@ -4287,7 +4335,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x07 */ {dxgkio_create_paging_queue, LX_DXCREATEPAGINGQUEUE},
/* 0x08 */ {},
/* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO},
-/* 0x0a */ {},
+/* 0x0a */ {dxgkio_query_vidmem_info, LX_DXQUERYVIDEOMEMORYINFO},
/* 0x0b */ {},
/* 0x0c */ {},
/* 0x0d */ {},
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 873feb951129..b7d8b1d91cfc 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -897,6 +897,17 @@ enum d3dkmt_memory_segment_group {
_D3DKMT_MEMORY_SEGMENT_GROUP_NON_LOCAL = 1
};
+struct d3dkmt_queryvideomemoryinfo {
+ __u64 process;
+ struct d3dkmthandle adapter;
+ enum d3dkmt_memory_segment_group memory_segment_group;
+ __u64 budget;
+ __u64 current_usage;
+ __u64 current_reservation;
+ __u64 available_for_reservation;
+ __u32 physical_adapter_index;
+};
+
struct d3dkmt_adaptertype {
union {
struct {
@@ -1204,6 +1215,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue)
#define LX_DXQUERYADAPTERINFO \
_IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
+#define LX_DXQUERYVIDEOMEMORYINFO \
+ _IOWR(0x47, 0x0a, struct d3dkmt_queryvideomemoryinfo)
#define LX_DXGETDEVICESTATE \
_IOWR(0x47, 0x0e, struct d3dkmt_getdevicestate)
#define LX_DXSUBMITCOMMAND \
* [PATCH 21/55] drivers: hv: dxgkrnl: The escape ioctl
@ 2026-03-19 20:24 ` Eric Curtin
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement the escape ioctl (LX_DXESCAPE).
This ioctl is used to exchange private data between the user-mode
compute device driver in the guest and the kernel-mode compute device
driver on the host. It allows the user-mode driver to extend the
virtual compute device API.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 3 ++
drivers/hv/dxgkrnl/dxgvmbus.c | 75 ++++++++++++++++++++++++++++++++---
drivers/hv/dxgkrnl/dxgvmbus.h | 12 ++++++
drivers/hv/dxgkrnl/ioctl.c | 42 +++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 41 +++++++++++++++++++
5 files changed, 167 insertions(+), 6 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index b6a7288a4177..dafc721ed6cf 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -894,6 +894,9 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryallocationresidency
*args);
+int dxgvmb_send_escape(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_escape *args);
int dxgvmb_send_query_vidmem_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryvideomemoryinfo *args,
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 48ff49456057..8bdd49bc7aa6 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1925,6 +1925,70 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_escape(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_escape *args)
+{
+ int ret;
+ struct dxgkvmb_command_escape *command = NULL;
+ u32 cmd_size = sizeof(*command);
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ if (args->priv_drv_data_size > DXG_MAX_VM_BUS_PACKET_SIZE) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ cmd_size = cmd_size - sizeof(args->priv_drv_data[0]) +
+ args->priv_drv_data_size;
+
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_ESCAPE,
+ process->host_handle);
+ command->adapter = args->adapter;
+ command->device = args->device;
+ command->type = args->type;
+ command->flags = args->flags;
+ command->priv_drv_data_size = args->priv_drv_data_size;
+ command->context = args->context;
+ if (args->priv_drv_data_size) {
+ ret = copy_from_user(command->priv_drv_data,
+ args->priv_drv_data,
+ args->priv_drv_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy priv data");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ }
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ command->priv_drv_data,
+ args->priv_drv_data_size);
+ if (ret < 0)
+ goto cleanup;
+
+ if (args->priv_drv_data_size) {
+ ret = copy_to_user(args->priv_drv_data,
+ command->priv_drv_data,
+ args->priv_drv_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy priv data");
+ ret = -EINVAL;
+ }
+ }
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_query_vidmem_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryvideomemoryinfo *args,
@@ -1955,14 +2019,14 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process,
ret = copy_to_user(&output->budget, &result.budget,
sizeof(output->budget));
if (ret) {
- pr_err("%s failed to copy budget", __func__);
+ DXG_ERR("failed to copy budget");
ret = -EINVAL;
goto cleanup;
}
ret = copy_to_user(&output->current_usage, &result.current_usage,
sizeof(output->current_usage));
if (ret) {
- pr_err("%s failed to copy current usage", __func__);
+ DXG_ERR("failed to copy current usage");
ret = -EINVAL;
goto cleanup;
}
@@ -1970,7 +2034,7 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process,
&result.current_reservation,
sizeof(output->current_reservation));
if (ret) {
- pr_err("%s failed to copy reservation", __func__);
+ DXG_ERR("failed to copy reservation");
ret = -EINVAL;
goto cleanup;
}
@@ -1978,14 +2042,14 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process,
&result.available_for_reservation,
sizeof(output->available_for_reservation));
if (ret) {
- pr_err("%s failed to copy avail reservation", __func__);
+ DXG_ERR("failed to copy avail reservation");
ret = -EINVAL;
}
cleanup:
free_message(&msg, process);
if (ret)
- dev_dbg(DXGDEV, "err: %d", ret);
+ DXG_TRACE("err: %d", ret);
return ret;
}
@@ -3152,3 +3216,4 @@ int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process,
DXG_TRACE("err: %d", ret);
return ret;
}
+
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index a1549983d50f..e1c2ed7b1580 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -664,6 +664,18 @@ struct dxgkvmb_command_queryallocationresidency_return {
/* d3dkmt_allocationresidencystatus[NumAllocations] */
};
+/* Returns only private data */
+struct dxgkvmb_command_escape {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle adapter;
+ struct d3dkmthandle device;
+ enum d3dkmt_escapetype type;
+ struct d3dddi_escapeflags flags;
+ u32 priv_drv_data_size;
+ struct d3dkmthandle context;
+ u8 priv_drv_data[1];
+};
+
struct dxgkvmb_command_queryvideomemoryinfo {
struct dxgkvmb_command_vgpu_to_host hdr;
struct d3dkmthandle adapter;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index e692b127e219..78de76abce2d 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -3547,6 +3547,46 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_escape(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_escape args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ bool adapter_locked = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = dxgprocess_adapter_by_handle(process, args.adapter);
+ if (adapter == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+ adapter_locked = true;
+
+ args.adapter = adapter->host_handle;
+ ret = dxgvmb_send_escape(process, adapter, &args);
+
+cleanup:
+
+ if (adapter_locked)
+ dxgadapter_release_lock_shared(adapter);
+ if (adapter)
+ kref_put(&adapter->adapter_kref, dxgadapter_release);
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static int
dxgkio_query_vidmem_info(struct dxgprocess *process, void *__user inargs)
{
@@ -4338,7 +4378,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x0a */ {dxgkio_query_vidmem_info, LX_DXQUERYVIDEOMEMORYINFO},
/* 0x0b */ {},
/* 0x0c */ {},
-/* 0x0d */ {},
+/* 0x0d */ {dxgkio_escape, LX_DXESCAPE},
/* 0x0e */ {dxgkio_get_device_state, LX_DXGETDEVICESTATE},
/* 0x0f */ {dxgkio_submit_command, LX_DXSUBMITCOMMAND},
/* 0x10 */ {dxgkio_create_sync_object, LX_DXCREATESYNCHRONIZATIONOBJECT},
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index b7d8b1d91cfc..749edf28bd43 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -236,6 +236,45 @@ struct d3dddi_destroypagingqueue {
struct d3dkmthandle paging_queue;
};
+enum d3dkmt_escapetype {
+ _D3DKMT_ESCAPE_DRIVERPRIVATE = 0,
+ _D3DKMT_ESCAPE_VIDMM = 1,
+ _D3DKMT_ESCAPE_VIDSCH = 3,
+ _D3DKMT_ESCAPE_DEVICE = 4,
+ _D3DKMT_ESCAPE_DRT_TEST = 8,
+};
+
+struct d3dddi_escapeflags {
+ union {
+ struct {
+ __u32 hardware_access:1;
+ __u32 device_status_query:1;
+ __u32 change_frame_latency:1;
+ __u32 no_adapter_synchronization:1;
+ __u32 reserved:1;
+ __u32 virtual_machine_data:1;
+ __u32 driver_known_escape:1;
+ __u32 driver_common_escape:1;
+ __u32 reserved2:24;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dkmt_escape {
+ struct d3dkmthandle adapter;
+ struct d3dkmthandle device;
+ enum d3dkmt_escapetype type;
+ struct d3dddi_escapeflags flags;
+#ifdef __KERNEL__
+ void *priv_drv_data;
+#else
+ __u64 priv_drv_data;
+#endif
+ __u32 priv_drv_data_size;
+ struct d3dkmthandle context;
+};
+
enum dxgk_render_pipeline_stage {
_DXGK_RENDER_PIPELINE_STAGE_UNKNOWN = 0,
_DXGK_RENDER_PIPELINE_STAGE_INPUT_ASSEMBLER = 1,
@@ -1217,6 +1256,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
#define LX_DXQUERYVIDEOMEMORYINFO \
_IOWR(0x47, 0x0a, struct d3dkmt_queryvideomemoryinfo)
+#define LX_DXESCAPE \
+ _IOWR(0x47, 0x0d, struct d3dkmt_escape)
#define LX_DXGETDEVICESTATE \
_IOWR(0x47, 0x0e, struct d3dkmt_getdevicestate)
#define LX_DXSUBMITCOMMAND \
* [PATCH 22/55] drivers: hv: dxgkrnl: Ioctl to put device to error state
@ 2026-03-19 20:24 ` Eric Curtin
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement the ioctl to put the virtual compute device into the error
state (LX_DXMARKDEVICEASERROR).
This ioctl is used by the user-mode driver when it detects an
unrecoverable error condition.
Once a compute device is in the error state, all subsequent ioctl
calls to the device fail.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 3 +++
drivers/hv/dxgkrnl/dxgvmbus.c | 25 +++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 5 +++++
drivers/hv/dxgkrnl/ioctl.c | 38 ++++++++++++++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 12 +++++++++++
5 files changed, 82 insertions(+), 1 deletion(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index dafc721ed6cf..b454c7430f06 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -856,6 +856,9 @@ int dxgvmb_send_update_alloc_property(struct dxgprocess *process,
struct d3dddi_updateallocproperty *args,
struct d3dddi_updateallocproperty *__user
inargs);
+int dxgvmb_send_mark_device_as_error(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_markdeviceaserror *args);
int dxgvmb_send_set_allocation_priority(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_setallocationpriority *a);
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 8bdd49bc7aa6..f7264b12a477 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -2730,6 +2730,31 @@ int dxgvmb_send_update_alloc_property(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_mark_device_as_error(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_markdeviceaserror *args)
+{
+ struct dxgkvmb_command_markdeviceaserror *command;
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_MARKDEVICEASERROR,
+ process->host_handle);
+ command->args = *args;
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_set_allocation_priority(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_setallocationpriority *args)
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index e1c2ed7b1580..a66e11097bb2 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -627,6 +627,11 @@ struct dxgkvmb_command_updateallocationproperty_return {
struct ntstatus status;
};
+struct dxgkvmb_command_markdeviceaserror {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_markdeviceaserror args;
+};
+
/* Returns ntstatus */
struct dxgkvmb_command_changevideomemoryreservation {
struct dxgkvmb_command_vgpu_to_host hdr;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 78de76abce2d..ce4af610ada7 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -3341,6 +3341,42 @@ dxgkio_update_alloc_property(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_mark_device_as_error(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_markdeviceaserror args;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+ device->execution_state = _D3DKMT_DEVICEEXECUTION_RESET;
+ ret = dxgvmb_send_mark_device_as_error(process, adapter, &args);
+cleanup:
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static int
dxgkio_query_alloc_residency(struct dxgprocess *process, void *__user inargs)
{
@@ -4404,7 +4440,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x23 */ {},
/* 0x24 */ {},
/* 0x25 */ {dxgkio_lock2, LX_DXLOCK2},
-/* 0x26 */ {},
+/* 0x26 */ {dxgkio_mark_device_as_error, LX_DXMARKDEVICEASERROR},
/* 0x27 */ {},
/* 0x28 */ {},
/* 0x29 */ {},
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 749edf28bd43..ce5a638a886d 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -790,6 +790,16 @@ struct d3dkmt_unlock2 {
struct d3dkmthandle allocation;
};
+enum d3dkmt_device_error_reason {
+ _D3DKMT_DEVICE_ERROR_REASON_GENERIC = 0x80000000,
+ _D3DKMT_DEVICE_ERROR_REASON_DRIVER_ERROR = 0x80000006,
+};
+
+struct d3dkmt_markdeviceaserror {
+ struct d3dkmthandle device;
+ enum d3dkmt_device_error_reason reason;
+};
+
enum d3dkmt_standardallocationtype {
_D3DKMT_STANDARDALLOCATIONTYPE_EXISTINGHEAP = 1,
_D3DKMT_STANDARDALLOCATIONTYPE_CROSSADAPTER = 2,
@@ -1290,6 +1300,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions)
#define LX_DXLOCK2 \
_IOWR(0x47, 0x25, struct d3dkmt_lock2)
+#define LX_DXMARKDEVICEASERROR \
+ _IOWR(0x47, 0x26, struct d3dkmt_markdeviceaserror)
#define LX_DXQUERYALLOCATIONRESIDENCY \
_IOWR(0x47, 0x2a, struct d3dkmt_queryallocationresidency)
#define LX_DXSETALLOCATIONPRIORITY \
* [PATCH 23/55] drivers: hv: dxgkrnl: Ioctls to query statistics and clock calibration
2026-03-19 20:24 ` Eric Curtin
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls to query statistics from the VGPU device
(LX_DXQUERYSTATISTICS) and to query clock calibration
(LX_DXQUERYCLOCKCALIBRATION).
The LX_DXQUERYSTATISTICS ioctl is used to query various statistics from
the compute device on the host.
The LX_DXQUERYCLOCKCALIBRATION ioctl queries the compute device clock
and is used for performance monitoring.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 8 +++
drivers/hv/dxgkrnl/dxgvmbus.c | 77 +++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 21 +++++++
drivers/hv/dxgkrnl/ioctl.c | 111 +++++++++++++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 62 +++++++++++++++++++
5 files changed, 277 insertions(+), 2 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index b454c7430f06..a55873bdd9a6 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -885,6 +885,11 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_submitcommandtohwqueue *a);
+int dxgvmb_send_query_clock_calibration(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_queryclockcalibration *a,
+ struct d3dkmt_queryclockcalibration
+ *__user inargs);
int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_flushheaptransitions *arg);
@@ -929,6 +934,9 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device,
void *prive_alloc_data,
u32 *res_priv_data_size,
void *priv_res_data);
+int dxgvmb_send_query_statistics(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_querystatistics *args);
int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel,
void *command,
u32 cmd_size);
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index f7264b12a477..9a1864bb4e14 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1829,6 +1829,48 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_query_clock_calibration(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_queryclockcalibration
+ *args,
+ struct d3dkmt_queryclockcalibration
+ *__user inargs)
+{
+ struct dxgkvmb_command_queryclockcalibration *command;
+ struct dxgkvmb_command_queryclockcalibration_return result;
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_QUERYCLOCKCALIBRATION,
+ process->host_handle);
+ command->args = *args;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ &result, sizeof(result));
+ if (ret < 0)
+ goto cleanup;
+ ret = copy_to_user(&inargs->clock_data, &result.clock_data,
+ sizeof(result.clock_data));
+ if (ret) {
+ pr_err("%s failed to copy clock data", __func__);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ ret = ntstatus2int(result.status);
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_flushheaptransitions *args)
@@ -3242,3 +3284,38 @@ int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_query_statistics(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_querystatistics *args)
+{
+ struct dxgkvmb_command_querystatistics *command;
+ struct dxgkvmb_command_querystatistics_return *result;
+ int ret;
+ struct dxgvmbusmsgres msg = {.hdr = NULL};
+
+ ret = init_message_res(&msg, adapter, process, sizeof(*command),
+ sizeof(*result));
+ if (ret)
+ goto cleanup;
+ command = msg.msg;
+ result = msg.res;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_QUERYSTATISTICS,
+ process->host_handle);
+ command->args = *args;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ result, msg.res_size);
+ if (ret < 0)
+ goto cleanup;
+
+ args->result = result->result;
+ ret = ntstatus2int(result->status);
+
+cleanup:
+ free_message((struct dxgvmbusmsg *)&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index a66e11097bb2..17768ed0e68d 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -372,6 +372,16 @@ struct dxgkvmb_command_flushheaptransitions {
struct dxgkvmb_command_vgpu_to_host hdr;
};
+struct dxgkvmb_command_queryclockcalibration {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_queryclockcalibration args;
+};
+
+struct dxgkvmb_command_queryclockcalibration_return {
+ struct ntstatus status;
+ struct dxgk_gpuclockdata clock_data;
+};
+
struct dxgkvmb_command_createallocation_allocinfo {
u32 flags;
u32 priv_drv_data_size;
@@ -408,6 +418,17 @@ struct dxgkvmb_command_openresource_return {
/* struct d3dkmthandle allocation[allocation_count]; */
};
+struct dxgkvmb_command_querystatistics {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_querystatistics args;
+};
+
+struct dxgkvmb_command_querystatistics_return {
+ struct ntstatus status;
+ u32 reserved;
+ struct d3dkmt_querystatistics_result result;
+};
+
struct dxgkvmb_command_getstandardallocprivdata {
struct dxgkvmb_command_vgpu_to_host hdr;
enum d3dkmdt_standardallocationtype alloc_type;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index ce4af610ada7..4babb21f38a9 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -149,6 +149,65 @@ static int dxgkio_open_adapter_from_luid(struct dxgprocess *process,
return ret;
}
+static int dxgkio_query_statistics(struct dxgprocess *process,
+ void __user *inargs)
+{
+ struct d3dkmt_querystatistics *args;
+ int ret;
+ struct dxgadapter *entry;
+ struct dxgadapter *adapter = NULL;
+ struct winluid tmp;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ args = vzalloc(sizeof(struct d3dkmt_querystatistics));
+ if (args == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
+ ret = copy_from_user(args, inargs, sizeof(*args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED);
+ list_for_each_entry(entry, &dxgglobal->adapter_list_head,
+ adapter_list_entry) {
+ if (dxgadapter_acquire_lock_shared(entry) == 0) {
+ if (*(u64 *) &entry->luid ==
+ *(u64 *) &args->adapter_luid) {
+ adapter = entry;
+ break;
+ }
+ dxgadapter_release_lock_shared(entry);
+ }
+ }
+ dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED);
+ if (adapter) {
+ tmp = args->adapter_luid;
+ args->adapter_luid = adapter->host_adapter_luid;
+ ret = dxgvmb_send_query_statistics(process, adapter, args);
+ if (ret >= 0) {
+ args->adapter_luid = tmp;
+ ret = copy_to_user(inargs, args, sizeof(*args));
+ if (ret) {
+ DXG_ERR("failed to copy args");
+ ret = -EINVAL;
+ }
+ }
+ dxgadapter_release_lock_shared(adapter);
+ }
+
+cleanup:
+ if (args)
+ vfree(args);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static int
dxgkp_enum_adapters(struct dxgprocess *process,
union d3dkmt_enumadapters_filter filter,
@@ -3536,6 +3595,54 @@ dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs
return ret;
}
+static int
+dxgkio_query_clock_calibration(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_queryclockcalibration args;
+ int ret;
+ struct dxgadapter *adapter = NULL;
+ bool adapter_locked = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = dxgprocess_adapter_by_handle(process, args.adapter);
+ if (adapter == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+ adapter_locked = true;
+
+ args.adapter = adapter->host_handle;
+ ret = dxgvmb_send_query_clock_calibration(process, adapter,
+ &args, inargs);
+ if (ret < 0)
+ goto cleanup;
+ ret = copy_to_user(inargs, &args, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy output args");
+ ret = -EINVAL;
+ }
+
+cleanup:
+
+ if (adapter_locked)
+ dxgadapter_release_lock_shared(adapter);
+ if (adapter)
+ kref_put(&adapter->adapter_kref, dxgadapter_release);
+ return ret;
+}
+
static int
dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs)
{
@@ -4470,14 +4577,14 @@ static struct ioctl_desc ioctls[] = {
/* 0x3b */ {dxgkio_wait_sync_object_gpu,
LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU},
/* 0x3c */ {dxgkio_get_allocation_priority, LX_DXGETALLOCATIONPRIORITY},
-/* 0x3d */ {},
+/* 0x3d */ {dxgkio_query_clock_calibration, LX_DXQUERYCLOCKCALIBRATION},
/* 0x3e */ {dxgkio_enum_adapters3, LX_DXENUMADAPTERS3},
/* 0x3f */ {dxgkio_share_objects, LX_DXSHAREOBJECTS},
/* 0x40 */ {dxgkio_open_sync_object_nt, LX_DXOPENSYNCOBJECTFROMNTHANDLE2},
/* 0x41 */ {dxgkio_query_resource_info_nt,
LX_DXQUERYRESOURCEINFOFROMNTHANDLE},
/* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE},
-/* 0x43 */ {},
+/* 0x43 */ {dxgkio_query_statistics, LX_DXQUERYSTATISTICS},
/* 0x44 */ {dxgkio_share_object_with_host, LX_DXSHAREOBJECTWITHHOST},
/* 0x45 */ {},
};
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index ce5a638a886d..ea18242ceb83 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -996,6 +996,34 @@ struct d3dkmt_queryadapterinfo {
__u32 private_data_size;
};
+#pragma pack(push, 1)
+
+struct dxgk_gpuclockdata_flags {
+ union {
+ struct {
+ __u32 context_management_processor:1;
+ __u32 reserved:31;
+ };
+ __u32 value;
+ };
+};
+
+struct dxgk_gpuclockdata {
+ __u64 gpu_frequency;
+ __u64 gpu_clock_counter;
+ __u64 cpu_clock_counter;
+ struct dxgk_gpuclockdata_flags flags;
+} __packed;
+
+struct d3dkmt_queryclockcalibration {
+ struct d3dkmthandle adapter;
+ __u32 node_ordinal;
+ __u32 physical_adapter_index;
+ struct dxgk_gpuclockdata clock_data;
+};
+
+#pragma pack(pop)
+
struct d3dkmt_flushheaptransitions {
struct d3dkmthandle adapter;
};
@@ -1238,6 +1266,36 @@ struct d3dkmt_enumadapters3 {
#endif
};
+enum d3dkmt_querystatistics_type {
+ _D3DKMT_QUERYSTATISTICS_ADAPTER = 0,
+ _D3DKMT_QUERYSTATISTICS_PROCESS = 1,
+ _D3DKMT_QUERYSTATISTICS_PROCESS_ADAPTER = 2,
+ _D3DKMT_QUERYSTATISTICS_SEGMENT = 3,
+ _D3DKMT_QUERYSTATISTICS_PROCESS_SEGMENT = 4,
+ _D3DKMT_QUERYSTATISTICS_NODE = 5,
+ _D3DKMT_QUERYSTATISTICS_PROCESS_NODE = 6,
+ _D3DKMT_QUERYSTATISTICS_VIDPNSOURCE = 7,
+ _D3DKMT_QUERYSTATISTICS_PROCESS_VIDPNSOURCE = 8,
+ _D3DKMT_QUERYSTATISTICS_PROCESS_SEGMENT_GROUP = 9,
+ _D3DKMT_QUERYSTATISTICS_PHYSICAL_ADAPTER = 10,
+};
+
+struct d3dkmt_querystatistics_result {
+ char size[0x308];
+};
+
+struct d3dkmt_querystatistics {
+ union {
+ struct {
+ enum d3dkmt_querystatistics_type type;
+ struct winluid adapter_luid;
+ __u64 process;
+ struct d3dkmt_querystatistics_result result;
+ };
+ char size[0x328];
+ };
+};
+
struct d3dkmt_shareobjectwithhost {
struct d3dkmthandle device_handle;
struct d3dkmthandle object_handle;
@@ -1328,6 +1386,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x3b, struct d3dkmt_waitforsynchronizationobjectfromgpu)
#define LX_DXGETALLOCATIONPRIORITY \
_IOWR(0x47, 0x3c, struct d3dkmt_getallocationpriority)
+#define LX_DXQUERYCLOCKCALIBRATION \
+ _IOWR(0x47, 0x3d, struct d3dkmt_queryclockcalibration)
#define LX_DXENUMADAPTERS3 \
_IOWR(0x47, 0x3e, struct d3dkmt_enumadapters3)
#define LX_DXSHAREOBJECTS \
@@ -1338,6 +1398,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x41, struct d3dkmt_queryresourceinfofromnthandle)
#define LX_DXOPENRESOURCEFROMNTHANDLE \
_IOWR(0x47, 0x42, struct d3dkmt_openresourcefromnthandle)
+#define LX_DXQUERYSTATISTICS \
+ _IOWR(0x47, 0x43, struct d3dkmt_querystatistics)
#define LX_DXSHAREOBJECTWITHHOST \
_IOWR(0x47, 0x44, struct d3dkmt_shareobjectwithhost)
* [PATCH 24/55] drivers: hv: dxgkrnl: Offer and reclaim allocations
2026-03-19 20:24 ` Eric Curtin
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls to offer and reclaim compute device allocations:
- LX_DXOFFERALLOCATIONS,
- LX_DXRECLAIMALLOCATIONS2
When a user mode driver (UMD) does not need to access an allocation,
it can "offer" it by issuing the LX_DXOFFERALLOCATIONS ioctl. This
means that the allocation is not in use and its local device memory
could be evicted. The freed space could be given to another allocation.
When the allocation is needed again, the UMD can attempt to "reclaim"
the allocation by issuing the LX_DXRECLAIMALLOCATIONS2 ioctl. If the
allocation is still not evicted, the reclaim operation succeeds and no
other action is required. If the reclaim operation fails, the caller
must restore the content of the allocation before it can be used by
the device.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 8 +++
drivers/hv/dxgkrnl/dxgvmbus.c | 124 +++++++++++++++++++++++++++++++++-
drivers/hv/dxgkrnl/dxgvmbus.h | 27 ++++++++
drivers/hv/dxgkrnl/ioctl.c | 117 +++++++++++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 67 ++++++++++++++++++
5 files changed, 340 insertions(+), 3 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index a55873bdd9a6..494ea8fb0bb3 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -865,6 +865,14 @@ int dxgvmb_send_set_allocation_priority(struct dxgprocess *process,
int dxgvmb_send_get_allocation_priority(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_getallocationpriority *a);
+int dxgvmb_send_offer_allocations(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_offerallocations *args);
+int dxgvmb_send_reclaim_allocations(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle device,
+ struct d3dkmt_reclaimallocations2 *args,
+ u64 __user *paging_fence_value);
int dxgvmb_send_change_vidmem_reservation(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmthandle other_process,
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 9a1864bb4e14..8448fd78975b 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1858,7 +1858,7 @@ int dxgvmb_send_query_clock_calibration(struct dxgprocess *process,
ret = copy_to_user(&inargs->clock_data, &result.clock_data,
sizeof(result.clock_data));
if (ret) {
- pr_err("%s failed to copy clock data", __func__);
+ DXG_ERR("failed to copy clock data");
ret = -EINVAL;
goto cleanup;
}
@@ -2949,6 +2949,128 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_offer_allocations(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_offerallocations *args)
+{
+ struct dxgkvmb_command_offerallocations *command;
+ int ret = -EINVAL;
+ u32 alloc_size = sizeof(struct d3dkmthandle) * args->allocation_count;
+ u32 cmd_size = sizeof(struct dxgkvmb_command_offerallocations) +
+ alloc_size - sizeof(struct d3dkmthandle);
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_OFFERALLOCATIONS,
+ process->host_handle);
+ command->flags = args->flags;
+ command->priority = args->priority;
+ command->device = args->device;
+ command->allocation_count = args->allocation_count;
+ if (args->resources) {
+ command->resources = true;
+ ret = copy_from_user(command->allocations, args->resources,
+ alloc_size);
+ } else {
+ ret = copy_from_user(command->allocations,
+ args->allocations, alloc_size);
+ }
+ if (ret) {
+ DXG_ERR("failed to copy input handles");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ pr_debug("err: %s %d", __func__, ret);
+ return ret;
+}
+
+int dxgvmb_send_reclaim_allocations(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle device,
+ struct d3dkmt_reclaimallocations2 *args,
+ u64 __user *paging_fence_value)
+{
+ struct dxgkvmb_command_reclaimallocations *command;
+ struct dxgkvmb_command_reclaimallocations_return *result;
+ int ret;
+ u32 alloc_size = sizeof(struct d3dkmthandle) * args->allocation_count;
+ u32 cmd_size = sizeof(struct dxgkvmb_command_reclaimallocations) +
+ alloc_size - sizeof(struct d3dkmthandle);
+ u32 result_size = sizeof(*result);
+ struct dxgvmbusmsgres msg = {.hdr = NULL};
+
+ if (args->results)
+ result_size += (args->allocation_count - 1) *
+ sizeof(enum d3dddi_reclaim_result);
+
+ ret = init_message_res(&msg, adapter, process, cmd_size, result_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+ result = msg.res;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_RECLAIMALLOCATIONS,
+ process->host_handle);
+ command->device = device;
+ command->paging_queue = args->paging_queue;
+ command->allocation_count = args->allocation_count;
+ command->write_results = args->results != NULL;
+ if (args->resources) {
+ command->resources = true;
+ ret = copy_from_user(command->allocations, args->resources,
+ alloc_size);
+ } else {
+ ret = copy_from_user(command->allocations,
+ args->allocations, alloc_size);
+ }
+ if (ret) {
+ DXG_ERR("failed to copy input handles");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ result, msg.res_size);
+ if (ret < 0)
+ goto cleanup;
+ ret = copy_to_user(paging_fence_value,
+ &result->paging_fence_value, sizeof(u64));
+ if (ret) {
+ DXG_ERR("failed to copy paging fence");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = ntstatus2int(result->status);
+ if (NT_SUCCESS(result->status) && args->results) {
+ ret = copy_to_user(args->results, result->discarded,
+ sizeof(result->discarded[0]) *
+ args->allocation_count);
+ if (ret) {
+ DXG_ERR("failed to copy results");
+ ret = -EINVAL;
+ }
+ }
+
+cleanup:
+ free_message((struct dxgvmbusmsg *)&msg, process);
+ if (ret)
+ pr_debug("err: %s %d", __func__, ret);
+ return ret;
+}
+
int dxgvmb_send_change_vidmem_reservation(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmthandle other_process,
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 17768ed0e68d..558c6576a262 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -653,6 +653,33 @@ struct dxgkvmb_command_markdeviceaserror {
struct d3dkmt_markdeviceaserror args;
};
+/* Returns ntstatus */
+struct dxgkvmb_command_offerallocations {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ u32 allocation_count;
+ enum d3dkmt_offer_priority priority;
+ struct d3dkmt_offer_flags flags;
+ bool resources;
+ struct d3dkmthandle allocations[1];
+};
+
+struct dxgkvmb_command_reclaimallocations {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ struct d3dkmthandle paging_queue;
+ u32 allocation_count;
+ bool resources;
+ bool write_results;
+ struct d3dkmthandle allocations[1];
+};
+
+struct dxgkvmb_command_reclaimallocations_return {
+ u64 paging_fence_value;
+ struct ntstatus status;
+ enum d3dddi_reclaim_result discarded[1];
+};
+
/* Returns ntstatus */
struct dxgkvmb_command_changevideomemoryreservation {
struct dxgkvmb_command_vgpu_to_host hdr;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 4babb21f38a9..fa880aa0196a 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -1961,6 +1961,119 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_offer_allocations(struct dxgprocess *process, void *__user inargs)
+{
+ int ret;
+ struct d3dkmt_offerallocations args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.allocation_count > D3DKMT_MAKERESIDENT_ALLOC_MAX ||
+ args.allocation_count == 0) {
+ DXG_ERR("invalid number of allocations");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if ((args.resources == NULL) == (args.allocations == NULL)) {
+ DXG_ERR("invalid pointer to resources/allocations");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_offer_allocations(process, adapter, &args);
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_reclaim_allocations(struct dxgprocess *process, void *__user inargs)
+{
+ int ret;
+ struct d3dkmt_reclaimallocations2 args;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct d3dkmt_reclaimallocations2 * __user in_args = inargs;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.allocation_count > D3DKMT_MAKERESIDENT_ALLOC_MAX ||
+ args.allocation_count == 0) {
+ DXG_ERR("invalid number of allocations");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if ((args.resources == NULL) == (args.allocations == NULL)) {
+ DXG_ERR("invalid pointer to resources/allocations");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGPAGINGQUEUE,
+ args.paging_queue);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_reclaim_allocations(process, adapter,
+ device->handle, &args,
+ &in_args->paging_fence_value);
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static int
dxgkio_submit_command(struct dxgprocess *process, void *__user inargs)
{
@@ -4548,12 +4661,12 @@ static struct ioctl_desc ioctls[] = {
/* 0x24 */ {},
/* 0x25 */ {dxgkio_lock2, LX_DXLOCK2},
/* 0x26 */ {dxgkio_mark_device_as_error, LX_DXMARKDEVICEASERROR},
-/* 0x27 */ {},
+/* 0x27 */ {dxgkio_offer_allocations, LX_DXOFFERALLOCATIONS},
/* 0x28 */ {},
/* 0x29 */ {},
/* 0x2a */ {dxgkio_query_alloc_residency, LX_DXQUERYALLOCATIONRESIDENCY},
/* 0x2b */ {},
-/* 0x2c */ {},
+/* 0x2c */ {dxgkio_reclaim_allocations, LX_DXRECLAIMALLOCATIONS2},
/* 0x2d */ {},
/* 0x2e */ {dxgkio_set_allocation_priority, LX_DXSETALLOCATIONPRIORITY},
/* 0x2f */ {},
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index ea18242ceb83..46b9f6d303bf 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -61,6 +61,7 @@ struct winluid {
#define D3DDDI_MAX_WRITTEN_PRIMARIES 16
#define D3DKMT_CREATEALLOCATION_MAX 1024
+#define D3DKMT_MAKERESIDENT_ALLOC_MAX (1024 * 10)
#define D3DKMT_ADAPTERS_MAX 64
#define D3DDDI_MAX_BROADCAST_CONTEXT 64
#define D3DDDI_MAX_OBJECT_WAITED_ON 32
@@ -1087,6 +1088,68 @@ struct d3dddi_updateallocproperty {
};
};
+enum d3dkmt_offer_priority {
+ _D3DKMT_OFFER_PRIORITY_LOW = 1,
+ _D3DKMT_OFFER_PRIORITY_NORMAL = 2,
+ _D3DKMT_OFFER_PRIORITY_HIGH = 3,
+ _D3DKMT_OFFER_PRIORITY_AUTO = 4,
+};
+
+struct d3dkmt_offer_flags {
+ union {
+ struct {
+ __u32 offer_immediately:1;
+ __u32 allow_decommit:1;
+ __u32 reserved:30;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dkmt_offerallocations {
+ struct d3dkmthandle device;
+ __u32 reserved;
+#ifdef __KERNEL__
+ struct d3dkmthandle *resources;
+ const struct d3dkmthandle *allocations;
+#else
+ __u64 resources;
+ __u64 allocations;
+#endif
+ __u32 allocation_count;
+ enum d3dkmt_offer_priority priority;
+ struct d3dkmt_offer_flags flags;
+ __u32 reserved1;
+};
+
+enum d3dddi_reclaim_result {
+ _D3DDDI_RECLAIM_RESULT_OK = 0,
+ _D3DDDI_RECLAIM_RESULT_DISCARDED = 1,
+ _D3DDDI_RECLAIM_RESULT_NOT_COMMITTED = 2,
+};
+
+struct d3dkmt_reclaimallocations2 {
+ struct d3dkmthandle paging_queue;
+ __u32 allocation_count;
+#ifdef __KERNEL__
+ struct d3dkmthandle *resources;
+ struct d3dkmthandle *allocations;
+#else
+ __u64 resources;
+ __u64 allocations;
+#endif
+ union {
+#ifdef __KERNEL__
+ __u32 *discarded;
+ enum d3dddi_reclaim_result *results;
+#else
+ __u64 discarded;
+ __u64 results;
+#endif
+ };
+ __u64 paging_fence_value;
+};
+
struct d3dkmt_changevideomemoryreservation {
__u64 process;
struct d3dkmthandle adapter;
@@ -1360,8 +1423,12 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x25, struct d3dkmt_lock2)
#define LX_DXMARKDEVICEASERROR \
_IOWR(0x47, 0x26, struct d3dkmt_markdeviceaserror)
+#define LX_DXOFFERALLOCATIONS \
+ _IOWR(0x47, 0x27, struct d3dkmt_offerallocations)
#define LX_DXQUERYALLOCATIONRESIDENCY \
_IOWR(0x47, 0x2a, struct d3dkmt_queryallocationresidency)
+#define LX_DXRECLAIMALLOCATIONS2 \
+ _IOWR(0x47, 0x2c, struct d3dkmt_reclaimallocations2)
#define LX_DXSETALLOCATIONPRIORITY \
_IOWR(0x47, 0x2e, struct d3dkmt_setallocationpriority)
#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \
* [PATCH 25/55] drivers: hv: dxgkrnl: Ioctls to manage scheduling priority
2026-03-19 20:24 ` Eric Curtin
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls to manage compute device scheduling priority:
- LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY
- LX_DXGETCONTEXTSCHEDULINGPRIORITY
- LX_DXSETCONTEXTINPROCESSSCHEDULINGPRIORITY
- LX_DXSETCONTEXTSCHEDULINGPRIORITY
Each compute device execution context has an assigned scheduling
priority. It is used by the compute device scheduler on the host to
pick contexts for execution. There is a global priority and a
priority within a process.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 9 ++
drivers/hv/dxgkrnl/dxgvmbus.c | 67 ++++++++++++-
drivers/hv/dxgkrnl/dxgvmbus.h | 19 ++++
drivers/hv/dxgkrnl/ioctl.c | 177 +++++++++++++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 28 ++++++
5 files changed, 294 insertions(+), 6 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 494ea8fb0bb3..02d10bdcc820 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -865,6 +865,15 @@ int dxgvmb_send_set_allocation_priority(struct dxgprocess *process,
int dxgvmb_send_get_allocation_priority(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_getallocationpriority *a);
+int dxgvmb_send_set_context_sch_priority(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle context,
+ int priority, bool in_process);
+int dxgvmb_send_get_context_sch_priority(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle context,
+ int *priority,
+ bool in_process);
int dxgvmb_send_offer_allocations(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_offerallocations *args);
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 8448fd78975b..9a610d48bed7 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -2949,6 +2949,69 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_set_context_sch_priority(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle context,
+ int priority,
+ bool in_process)
+{
+ struct dxgkvmb_command_setcontextschedulingpriority2 *command;
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_SETCONTEXTSCHEDULINGPRIORITY,
+ process->host_handle);
+ command->context = context;
+ command->priority = priority;
+ command->in_process = in_process;
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_get_context_sch_priority(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmthandle context,
+ int *priority,
+ bool in_process)
+{
+ struct dxgkvmb_command_getcontextschedulingpriority *command;
+ struct dxgkvmb_command_getcontextschedulingpriority_return result = { };
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_GETCONTEXTSCHEDULINGPRIORITY,
+ process->host_handle);
+ command->context = context;
+ command->in_process = in_process;
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ &result, sizeof(result));
+ if (ret >= 0) {
+ ret = ntstatus2int(result.status);
+ *priority = result.priority;
+ }
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_offer_allocations(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_offerallocations *args)
@@ -2991,7 +3054,7 @@ int dxgvmb_send_offer_allocations(struct dxgprocess *process,
cleanup:
free_message(&msg, process);
if (ret)
- pr_debug("err: %s %d", __func__, ret);
+ DXG_TRACE("err: %d", ret);
return ret;
}
@@ -3067,7 +3130,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process,
cleanup:
free_message((struct dxgvmbusmsg *)&msg, process);
if (ret)
- pr_debug("err: %s %d", __func__, ret);
+ DXG_TRACE("err: %d", ret);
return ret;
}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 558c6576a262..509482e1f870 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -331,6 +331,25 @@ struct dxgkvmb_command_getallocationpriority_return {
/* u32 priorities[allocation_count or 1]; */
};
+/* Returns ntstatus */
+struct dxgkvmb_command_setcontextschedulingpriority2 {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle context;
+ int priority;
+ bool in_process;
+};
+
+struct dxgkvmb_command_getcontextschedulingpriority {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle context;
+ bool in_process;
+};
+
+struct dxgkvmb_command_getcontextschedulingpriority_return {
+ struct ntstatus status;
+ int priority;
+};
+
struct dxgkvmb_command_createdevice {
struct dxgkvmb_command_vgpu_to_host hdr;
struct d3dkmt_createdeviceflags flags;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index fa880aa0196a..bc0adebe52ae 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -3660,6 +3660,171 @@ dxgkio_get_allocation_priority(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+set_context_scheduling_priority(struct dxgprocess *process,
+ struct d3dkmthandle hcontext,
+ int priority, bool in_process)
+{
+ int ret = 0;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ hcontext);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+ ret = dxgvmb_send_set_context_sch_priority(process, adapter,
+ hcontext, priority,
+ in_process);
+ if (ret < 0)
+ DXG_ERR("send_set_context_scheduling_priority failed");
+cleanup:
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ return ret;
+}
+
+static int
+dxgkio_set_context_scheduling_priority(struct dxgprocess *process,
+ void *__user inargs)
+{
+ struct d3dkmt_setcontextschedulingpriority args;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = set_context_scheduling_priority(process, args.context,
+ args.priority, false);
+cleanup:
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+get_context_scheduling_priority(struct dxgprocess *process,
+ struct d3dkmthandle hcontext,
+ int __user *priority,
+ bool in_process)
+{
+ int ret;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ int pri = 0;
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ hcontext);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+ ret = dxgvmb_send_get_context_sch_priority(process, adapter,
+ hcontext, &pri, in_process);
+ if (ret < 0)
+ goto cleanup;
+ ret = copy_to_user(priority, &pri, sizeof(pri));
+ if (ret) {
+ DXG_ERR("failed to copy priority to user");
+ ret = -EINVAL;
+ }
+
+cleanup:
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ return ret;
+}
+
+static int
+dxgkio_get_context_scheduling_priority(struct dxgprocess *process,
+ void *__user inargs)
+{
+ struct d3dkmt_getcontextschedulingpriority args;
+ struct d3dkmt_getcontextschedulingpriority __user *input = inargs;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = get_context_scheduling_priority(process, args.context,
+ &input->priority, false);
+cleanup:
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_set_context_process_scheduling_priority(struct dxgprocess *process,
+ void *__user inargs)
+{
+ struct d3dkmt_setcontextinprocessschedulingpriority args;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = set_context_scheduling_priority(process, args.context,
+ args.priority, true);
+cleanup:
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_get_context_process_scheduling_priority(struct dxgprocess *process,
+ void __user *inargs)
+{
+ struct d3dkmt_getcontextinprocessschedulingpriority args;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = get_context_scheduling_priority(process, args.context,
+ &((struct d3dkmt_getcontextinprocessschedulingpriority *)
+ inargs)->priority, true);
+cleanup:
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static int
dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs)
{
@@ -4655,8 +4820,10 @@ static struct ioctl_desc ioctls[] = {
/* 0x1e */ {},
/* 0x1f */ {dxgkio_flush_heap_transitions, LX_DXFLUSHHEAPTRANSITIONS},
/* 0x20 */ {},
-/* 0x21 */ {},
-/* 0x22 */ {},
+/* 0x21 */ {dxgkio_get_context_process_scheduling_priority,
+ LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY},
+/* 0x22 */ {dxgkio_get_context_scheduling_priority,
+ LX_DXGETCONTEXTSCHEDULINGPRIORITY},
/* 0x23 */ {},
/* 0x24 */ {},
/* 0x25 */ {dxgkio_lock2, LX_DXLOCK2},
@@ -4669,8 +4836,10 @@ static struct ioctl_desc ioctls[] = {
/* 0x2c */ {dxgkio_reclaim_allocations, LX_DXRECLAIMALLOCATIONS2},
/* 0x2d */ {},
/* 0x2e */ {dxgkio_set_allocation_priority, LX_DXSETALLOCATIONPRIORITY},
-/* 0x2f */ {},
-/* 0x30 */ {},
+/* 0x2f */ {dxgkio_set_context_process_scheduling_priority,
+ LX_DXSETCONTEXTINPROCESSSCHEDULINGPRIORITY},
+/* 0x30 */ {dxgkio_set_context_scheduling_priority,
+ LX_DXSETCONTEXTSCHEDULINGPRIORITY},
/* 0x31 */ {dxgkio_signal_sync_object_cpu,
LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU},
/* 0x32 */ {dxgkio_signal_sync_object_gpu,
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 46b9f6d303bf..a9bafab97c18 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -708,6 +708,26 @@ struct d3dkmt_submitcommandtohwqueue {
#endif
};
+struct d3dkmt_setcontextschedulingpriority {
+ struct d3dkmthandle context;
+ int priority;
+};
+
+struct d3dkmt_setcontextinprocessschedulingpriority {
+ struct d3dkmthandle context;
+ int priority;
+};
+
+struct d3dkmt_getcontextschedulingpriority {
+ struct d3dkmthandle context;
+ int priority;
+};
+
+struct d3dkmt_getcontextinprocessschedulingpriority {
+ struct d3dkmthandle context;
+ int priority;
+};
+
struct d3dkmt_setallocationpriority {
struct d3dkmthandle device;
struct d3dkmthandle resource;
@@ -1419,6 +1439,10 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject)
#define LX_DXFLUSHHEAPTRANSITIONS \
_IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions)
+#define LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY \
+ _IOWR(0x47, 0x21, struct d3dkmt_getcontextinprocessschedulingpriority)
+#define LX_DXGETCONTEXTSCHEDULINGPRIORITY \
+ _IOWR(0x47, 0x22, struct d3dkmt_getcontextschedulingpriority)
#define LX_DXLOCK2 \
_IOWR(0x47, 0x25, struct d3dkmt_lock2)
#define LX_DXMARKDEVICEASERROR \
@@ -1431,6 +1455,10 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x2c, struct d3dkmt_reclaimallocations2)
#define LX_DXSETALLOCATIONPRIORITY \
_IOWR(0x47, 0x2e, struct d3dkmt_setallocationpriority)
+#define LX_DXSETCONTEXTINPROCESSSCHEDULINGPRIORITY \
+ _IOWR(0x47, 0x2f, struct d3dkmt_setcontextinprocessschedulingpriority)
+#define LX_DXSETCONTEXTSCHEDULINGPRIORITY \
+ _IOWR(0x47, 0x30, struct d3dkmt_setcontextschedulingpriority)
#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMCPU \
_IOWR(0x47, 0x31, struct d3dkmt_signalsynchronizationobjectfromcpu)
#define LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU \
* [PATCH 26/55] drivers: hv: dxgkrnl: Manage residency of allocations
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (24 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 25/55] drivers: hv: dxgkrnl: Ioctls to manage scheduling priority Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 27/55] drivers: hv: dxgkrnl: Manage compute device virtual addresses Eric Curtin
` (28 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls to manage residency of compute device allocations:
- LX_DXMAKERESIDENT,
- LX_DXEVICT.
An allocation is "resident" when the compute device is set up to
access it. This means the allocation resides either in local device
memory or in non-pageable system memory.
The current design does not support on demand compute device page
faulting. An allocation must be resident before the compute device
is allowed to access it.
The LX_DXMAKERESIDENT ioctl instructs the video memory manager to
make the given allocations resident. The operation is submitted to
a paging queue (dxgpagingqueue). When the ioctl returns a "pending"
status, a monitored fence sync object can be used to synchronize
with the completion of the operation.
The LX_DXEVICT ioctl instructs the video memory manager to evict
the given allocations from device-accessible memory.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 4 +
drivers/hv/dxgkrnl/dxgvmbus.c | 98 +++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 27 +++++++
drivers/hv/dxgkrnl/ioctl.c | 141 +++++++++++++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 54 +++++++++++++
5 files changed, 322 insertions(+), 2 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 02d10bdcc820..93c3ceb23865 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -810,6 +810,10 @@ int dxgvmb_send_create_allocation(struct dxgprocess *pr, struct dxgdevice *dev,
int dxgvmb_send_destroy_allocation(struct dxgprocess *pr, struct dxgdevice *dev,
struct d3dkmt_destroyallocation2 *args,
struct d3dkmthandle *alloc_handles);
+int dxgvmb_send_make_resident(struct dxgprocess *pr, struct dxgadapter *adapter,
+ struct d3dddi_makeresident *args);
+int dxgvmb_send_evict(struct dxgprocess *pr, struct dxgadapter *adapter,
+ struct d3dkmt_evict *args);
int dxgvmb_send_submit_command(struct dxgprocess *pr,
struct dxgadapter *adapter,
struct d3dkmt_submitcommand *args);
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 9a610d48bed7..f4c4a7e7ad8b 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -2279,6 +2279,104 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device,
return ret;
}
+int dxgvmb_send_make_resident(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dddi_makeresident *args)
+{
+ int ret;
+ u32 cmd_size;
+ struct dxgkvmb_command_makeresident_return result = { };
+ struct dxgkvmb_command_makeresident *command = NULL;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ cmd_size = (args->alloc_count - 1) * sizeof(struct d3dkmthandle) +
+ sizeof(struct dxgkvmb_command_makeresident);
+
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ ret = copy_from_user(command->allocations, args->allocation_list,
+ args->alloc_count *
+ sizeof(struct d3dkmthandle));
+ if (ret) {
+ DXG_ERR("failed to copy alloc handles");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_MAKERESIDENT,
+ process->host_handle);
+ command->alloc_count = args->alloc_count;
+ command->paging_queue = args->paging_queue;
+ command->flags = args->flags;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ &result, sizeof(result));
+ if (ret < 0) {
+ DXG_ERR("send_make_resident failed %x", ret);
+ goto cleanup;
+ }
+
+ args->paging_fence_value = result.paging_fence_value;
+ args->num_bytes_to_trim = result.num_bytes_to_trim;
+ ret = ntstatus2int(result.status);
+
+cleanup:
+
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_evict(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_evict *args)
+{
+ int ret;
+ u32 cmd_size;
+ struct dxgkvmb_command_evict_return result = { };
+ struct dxgkvmb_command_evict *command = NULL;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ cmd_size = (args->alloc_count - 1) * sizeof(struct d3dkmthandle) +
+ sizeof(struct dxgkvmb_command_evict);
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+ ret = copy_from_user(command->allocations, args->allocations,
+ args->alloc_count *
+ sizeof(struct d3dkmthandle));
+ if (ret) {
+ DXG_ERR("failed to copy alloc handles");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_EVICT, process->host_handle);
+ command->alloc_count = args->alloc_count;
+ command->device = args->device;
+ command->flags = args->flags;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ &result, sizeof(result));
+ if (ret < 0) {
+ DXG_ERR("send_evict failed %x", ret);
+ goto cleanup;
+ }
+ args->num_bytes_to_trim = result.num_bytes_to_trim;
+
+cleanup:
+
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_submit_command(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_submitcommand *args)
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 509482e1f870..23f92ab9f8ad 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -372,6 +372,33 @@ struct dxgkvmb_command_flushdevice {
enum dxgdevice_flushschedulerreason reason;
};
+struct dxgkvmb_command_makeresident {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ struct d3dkmthandle paging_queue;
+ struct d3dddi_makeresident_flags flags;
+ u32 alloc_count;
+ struct d3dkmthandle allocations[1];
+};
+
+struct dxgkvmb_command_makeresident_return {
+ u64 paging_fence_value;
+ u64 num_bytes_to_trim;
+ struct ntstatus status;
+};
+
+struct dxgkvmb_command_evict {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ struct d3dddi_evict_flags flags;
+ u32 alloc_count;
+ struct d3dkmthandle allocations[1];
+};
+
+struct dxgkvmb_command_evict_return {
+ u64 num_bytes_to_trim;
+};
+
struct dxgkvmb_command_submitcommand {
struct dxgkvmb_command_vgpu_to_host hdr;
struct d3dkmt_submitcommand args;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index bc0adebe52ae..2700da51bc01 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -1961,6 +1961,143 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_make_resident(struct dxgprocess *process, void *__user inargs)
+{
+ int ret, ret2;
+ struct d3dddi_makeresident args;
+ struct d3dddi_makeresident *input = inargs;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.alloc_count > D3DKMT_MAKERESIDENT_ALLOC_MAX ||
+ args.alloc_count == 0) {
+ DXG_ERR("invalid number of allocations");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ if (args.paging_queue.v == 0) {
+ DXG_ERR("paging queue is missing");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGPAGINGQUEUE,
+ args.paging_queue);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_make_resident(process, adapter, &args);
+ if (ret < 0)
+ goto cleanup;
+ /* STATUS_PENDING is a success code > 0. It is returned to user mode */
+ if (!(ret == STATUS_PENDING || ret == 0)) {
+ DXG_ERR("Unexpected error %x", ret);
+ goto cleanup;
+ }
+
+ ret2 = copy_to_user(&input->paging_fence_value,
+ &args.paging_fence_value, sizeof(u64));
+ if (ret2) {
+ DXG_ERR("failed to copy paging fence");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret2 = copy_to_user(&input->num_bytes_to_trim,
+ &args.num_bytes_to_trim, sizeof(u64));
+ if (ret2) {
+ DXG_ERR("failed to copy bytes to trim");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+
+ return ret;
+}
+
+static int
+dxgkio_evict(struct dxgprocess *process, void *__user inargs)
+{
+ int ret;
+ struct d3dkmt_evict args;
+ struct d3dkmt_evict *input = inargs;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (args.alloc_count > D3DKMT_MAKERESIDENT_ALLOC_MAX ||
+ args.alloc_count == 0) {
+ DXG_ERR("invalid number of allocations");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_evict(process, adapter, &args);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = copy_to_user(&input->num_bytes_to_trim,
+ &args.num_bytes_to_trim, sizeof(u64));
+ if (ret) {
+ DXG_ERR("failed to copy bytes to trim to user");
+ ret = -EINVAL;
+ }
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static int
dxgkio_offer_allocations(struct dxgprocess *process, void *__user inargs)
{
@@ -4797,7 +4934,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x08 */ {},
/* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO},
/* 0x0a */ {dxgkio_query_vidmem_info, LX_DXQUERYVIDEOMEMORYINFO},
-/* 0x0b */ {},
+/* 0x0b */ {dxgkio_make_resident, LX_DXMAKERESIDENT},
/* 0x0c */ {},
/* 0x0d */ {dxgkio_escape, LX_DXESCAPE},
/* 0x0e */ {dxgkio_get_device_state, LX_DXGETDEVICESTATE},
@@ -4817,7 +4954,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x1b */ {dxgkio_destroy_hwqueue, LX_DXDESTROYHWQUEUE},
/* 0x1c */ {dxgkio_destroy_paging_queue, LX_DXDESTROYPAGINGQUEUE},
/* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT},
-/* 0x1e */ {},
+/* 0x1e */ {dxgkio_evict, LX_DXEVICT},
/* 0x1f */ {dxgkio_flush_heap_transitions, LX_DXFLUSHHEAPTRANSITIONS},
/* 0x20 */ {},
/* 0x21 */ {dxgkio_get_context_process_scheduling_priority,
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index a9bafab97c18..944f9d1e73d6 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -962,6 +962,56 @@ struct d3dkmt_destroyallocation2 {
struct d3dddicb_destroyallocation2flags flags;
};
+struct d3dddi_makeresident_flags {
+ union {
+ struct {
+ __u32 cant_trim_further:1;
+ __u32 must_succeed:1;
+ __u32 reserved:30;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dddi_makeresident {
+ struct d3dkmthandle paging_queue;
+ __u32 alloc_count;
+#ifdef __KERNEL__
+ const struct d3dkmthandle *allocation_list;
+ const __u32 *priority_list;
+#else
+ __u64 allocation_list;
+ __u64 priority_list;
+#endif
+ struct d3dddi_makeresident_flags flags;
+ __u64 paging_fence_value;
+ __u64 num_bytes_to_trim;
+};
+
+struct d3dddi_evict_flags {
+ union {
+ struct {
+ __u32 evict_only_if_necessary:1;
+ __u32 not_written_to:1;
+ __u32 reserved:30;
+ };
+ __u32 value;
+ };
+};
+
+struct d3dkmt_evict {
+ struct d3dkmthandle device;
+ __u32 alloc_count;
+#ifdef __KERNEL__
+ const struct d3dkmthandle *allocations;
+#else
+ __u64 allocations;
+#endif
+ struct d3dddi_evict_flags flags;
+ __u32 reserved;
+ __u64 num_bytes_to_trim;
+};
+
enum d3dkmt_memory_segment_group {
_D3DKMT_MEMORY_SEGMENT_GROUP_LOCAL = 0,
_D3DKMT_MEMORY_SEGMENT_GROUP_NON_LOCAL = 1
@@ -1407,6 +1457,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
#define LX_DXQUERYVIDEOMEMORYINFO \
_IOWR(0x47, 0x0a, struct d3dkmt_queryvideomemoryinfo)
+#define LX_DXMAKERESIDENT \
+ _IOWR(0x47, 0x0b, struct d3dddi_makeresident)
#define LX_DXESCAPE \
_IOWR(0x47, 0x0d, struct d3dkmt_escape)
#define LX_DXGETDEVICESTATE \
@@ -1437,6 +1489,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x19, struct d3dkmt_destroydevice)
#define LX_DXDESTROYSYNCHRONIZATIONOBJECT \
_IOWR(0x47, 0x1d, struct d3dkmt_destroysynchronizationobject)
+#define LX_DXEVICT \
+ _IOWR(0x47, 0x1e, struct d3dkmt_evict)
#define LX_DXFLUSHHEAPTRANSITIONS \
_IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions)
#define LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY \
* [PATCH 27/55] drivers: hv: dxgkrnl: Manage compute device virtual addresses
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (25 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 26/55] drivers: hv: dxgkrnl: Manage residency of allocations Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 28/55] drivers: hv: dxgkrnl: Add support to map guest pages by host Eric Curtin
` (27 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement ioctls to manage compute device virtual addresses (VA):
- LX_DXRESERVEGPUVIRTUALADDRESS,
- LX_DXFREEGPUVIRTUALADDRESS,
- LX_DXMAPGPUVIRTUALADDRESS,
- LX_DXUPDATEGPUVIRTUALADDRESS.
Compute devices access memory by using virtual addresses.
Each process has a dedicated VA space. The video memory manager
on the host is responsible for updating device page tables
before submitting a DMA buffer for execution.
The LX_DXRESERVEGPUVIRTUALADDRESS ioctl reserves a portion of the
process compute device VA space.
The LX_DXMAPGPUVIRTUALADDRESS ioctl reserves a portion of the process
compute device VA space and maps it to the given compute device
allocation.
The LX_DXFREEGPUVIRTUALADDRESS frees the previously reserved portion
of the compute device VA space.
The LX_DXUPDATEGPUVIRTUALADDRESS ioctl adds operations to modify the
compute device VA space to a compute device execution context. It
allows the operations to be queued and synchronized with execution
of other compute device DMA buffers.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 10 ++
drivers/hv/dxgkrnl/dxgvmbus.c | 150 ++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 38 ++++++
drivers/hv/dxgkrnl/ioctl.c | 228 +++++++++++++++++++++++++++++++++-
include/uapi/misc/d3dkmthk.h | 126 +++++++++++++++++++
5 files changed, 548 insertions(+), 4 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 93c3ceb23865..93bc9b41aa41 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -817,6 +817,16 @@ int dxgvmb_send_evict(struct dxgprocess *pr, struct dxgadapter *adapter,
int dxgvmb_send_submit_command(struct dxgprocess *pr,
struct dxgadapter *adapter,
struct d3dkmt_submitcommand *args);
+int dxgvmb_send_map_gpu_va(struct dxgprocess *pr, struct d3dkmthandle h,
+ struct dxgadapter *adapter,
+ struct d3dddi_mapgpuvirtualaddress *args);
+int dxgvmb_send_reserve_gpu_va(struct dxgprocess *pr,
+ struct dxgadapter *adapter,
+ struct d3dddi_reservegpuvirtualaddress *args);
+int dxgvmb_send_free_gpu_va(struct dxgprocess *pr, struct dxgadapter *adapter,
+ struct d3dkmt_freegpuvirtualaddress *args);
+int dxgvmb_send_update_gpu_va(struct dxgprocess *pr, struct dxgadapter *adapter,
+ struct d3dkmt_updategpuvirtualaddress *args);
int dxgvmb_send_create_sync_object(struct dxgprocess *pr,
struct dxgadapter *adapter,
struct d3dkmt_createsynchronizationobject2
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index f4c4a7e7ad8b..425a1ab87bd6 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -2432,6 +2432,156 @@ int dxgvmb_send_submit_command(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_map_gpu_va(struct dxgprocess *process,
+ struct d3dkmthandle device,
+ struct dxgadapter *adapter,
+ struct d3dddi_mapgpuvirtualaddress *args)
+{
+ struct dxgkvmb_command_mapgpuvirtualaddress *command;
+ struct dxgkvmb_command_mapgpuvirtualaddress_return result;
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_MAPGPUVIRTUALADDRESS,
+ process->host_handle);
+ command->args = *args;
+ command->device = device;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, &result,
+ sizeof(result));
+ if (ret < 0)
+ goto cleanup;
+ args->virtual_address = result.virtual_address;
+ args->paging_fence_value = result.paging_fence_value;
+ ret = ntstatus2int(result.status);
+
+cleanup:
+
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_reserve_gpu_va(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dddi_reservegpuvirtualaddress *args)
+{
+ struct dxgkvmb_command_reservegpuvirtualaddress *command;
+ struct dxgkvmb_command_reservegpuvirtualaddress_return result;
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_RESERVEGPUVIRTUALADDRESS,
+ process->host_handle);
+ command->args = *args;
+
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, &result,
+ sizeof(result));
+ args->virtual_address = result.virtual_address;
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_free_gpu_va(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_freegpuvirtualaddress *args)
+{
+ struct dxgkvmb_command_freegpuvirtualaddress *command;
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_FREEGPUVIRTUALADDRESS,
+ process->host_handle);
+ command->args = *args;
+
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
+int dxgvmb_send_update_gpu_va(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_updategpuvirtualaddress *args)
+{
+ struct dxgkvmb_command_updategpuvirtualaddress *command;
+ u32 cmd_size;
+ u32 op_size;
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ if (args->num_operations == 0 ||
+ (DXG_MAX_VM_BUS_PACKET_SIZE /
+ sizeof(struct d3dddi_updategpuvirtualaddress_operation)) <
+ args->num_operations) {
+ ret = -EINVAL;
+ DXG_ERR("Invalid number of operations: %d",
+ args->num_operations);
+ goto cleanup;
+ }
+
+ op_size = args->num_operations *
+ sizeof(struct d3dddi_updategpuvirtualaddress_operation);
+ cmd_size = sizeof(struct dxgkvmb_command_updategpuvirtualaddress) +
+ op_size - sizeof(args->operations[0]);
+
+ ret = init_message(&msg, adapter, process, cmd_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_UPDATEGPUVIRTUALADDRESS,
+ process->host_handle);
+ command->fence_value = args->fence_value;
+ command->device = args->device;
+ command->context = args->context;
+ command->fence_object = args->fence_object;
+ command->num_operations = args->num_operations;
+ command->flags = args->flags.value;
+ ret = copy_from_user(command->operations, args->operations,
+ op_size);
+ if (ret) {
+ DXG_ERR("failed to copy operations");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
static void set_result(struct d3dkmt_createsynchronizationobject2 *args,
u64 fence_gpu_va, u8 *va)
{
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 23f92ab9f8ad..88967ff6a505 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -418,6 +418,44 @@ struct dxgkvmb_command_flushheaptransitions {
struct dxgkvmb_command_vgpu_to_host hdr;
};
+struct dxgkvmb_command_freegpuvirtualaddress {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmt_freegpuvirtualaddress args;
+};
+
+struct dxgkvmb_command_mapgpuvirtualaddress {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dddi_mapgpuvirtualaddress args;
+ struct d3dkmthandle device;
+};
+
+struct dxgkvmb_command_mapgpuvirtualaddress_return {
+ u64 virtual_address;
+ u64 paging_fence_value;
+ struct ntstatus status;
+};
+
+struct dxgkvmb_command_reservegpuvirtualaddress {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dddi_reservegpuvirtualaddress args;
+};
+
+struct dxgkvmb_command_reservegpuvirtualaddress_return {
+ u64 virtual_address;
+ u64 paging_fence_value;
+};
+
+struct dxgkvmb_command_updategpuvirtualaddress {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ u64 fence_value;
+ struct d3dkmthandle device;
+ struct d3dkmthandle context;
+ struct d3dkmthandle fence_object;
+ u32 num_operations;
+ u32 flags;
+ struct d3dddi_updategpuvirtualaddress_operation operations[1];
+};
+
struct dxgkvmb_command_queryclockcalibration {
struct dxgkvmb_command_vgpu_to_host hdr;
struct d3dkmt_queryclockcalibration args;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 2700da51bc01..f6700e974f25 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -2492,6 +2492,226 @@ dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs)
+{
+ int ret, ret2;
+ struct d3dddi_mapgpuvirtualaddress args;
+ struct d3dddi_mapgpuvirtualaddress *input = inargs;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGPAGINGQUEUE,
+ args.paging_queue);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_map_gpu_va(process, zerohandle, adapter, &args);
+ if (ret < 0)
+ goto cleanup;
+ /* STATUS_PENDING is a success code > 0. It is returned to user mode */
+ if (!(ret == STATUS_PENDING || ret == 0)) {
+ DXG_ERR("Unexpected error %x", ret);
+ goto cleanup;
+ }
+
+ ret2 = copy_to_user(&input->paging_fence_value,
+ &args.paging_fence_value, sizeof(u64));
+ if (ret2) {
+ DXG_ERR("failed to copy paging fence to user");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret2 = copy_to_user(&input->virtual_address, &args.virtual_address,
+ sizeof(args.virtual_address));
+ if (ret2) {
+ DXG_ERR("failed to copy va to user");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_reserve_gpu_va(struct dxgprocess *process, void *__user inargs)
+{
+ int ret;
+ struct d3dddi_reservegpuvirtualaddress args;
+ struct d3dddi_reservegpuvirtualaddress *input = inargs;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = dxgprocess_adapter_by_handle(process, args.adapter);
+ if (adapter == NULL) {
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGPAGINGQUEUE,
+ args.adapter);
+ if (device == NULL) {
+ DXG_ERR("invalid adapter or paging queue: 0x%x",
+ args.adapter.v);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ adapter = device->adapter;
+ kref_get(&adapter->adapter_kref);
+ kref_put(&device->device_kref, dxgdevice_release);
+ } else {
+ args.adapter = adapter->host_handle;
+ }
+
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ kref_put(&adapter->adapter_kref, dxgadapter_release);
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_reserve_gpu_va(process, adapter, &args);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = copy_to_user(&input->virtual_address, &args.virtual_address,
+ sizeof(args.virtual_address));
+ if (ret) {
+ DXG_ERR("failed to copy VA to user");
+ ret = -EINVAL;
+ }
+
+cleanup:
+
+ if (adapter) {
+ dxgadapter_release_lock_shared(adapter);
+ kref_put(&adapter->adapter_kref, dxgadapter_release);
+ }
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static int
+dxgkio_free_gpu_va(struct dxgprocess *process, void *__user inargs)
+{
+ int ret;
+ struct d3dkmt_freegpuvirtualaddress args;
+ struct dxgadapter *adapter = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = dxgprocess_adapter_by_handle(process, args.adapter);
+ if (adapter == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ kref_put(&adapter->adapter_kref, dxgadapter_release);
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ args.adapter = adapter->host_handle;
+ ret = dxgvmb_send_free_gpu_va(process, adapter, &args);
+
+cleanup:
+
+ if (adapter) {
+ dxgadapter_release_lock_shared(adapter);
+ kref_put(&adapter->adapter_kref, dxgadapter_release);
+ }
+
+ return ret;
+}
+
+static int
+dxgkio_update_gpu_va(struct dxgprocess *process, void *__user inargs)
+{
+ int ret;
+ struct d3dkmt_updategpuvirtualaddress args;
+ struct d3dkmt_updategpuvirtualaddress *input = inargs;
+ struct dxgadapter *adapter = NULL;
+ struct dxgdevice *device = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_update_gpu_va(process, adapter, &args);
+ if (ret < 0)
+ goto cleanup;
+
+ ret = copy_to_user(&input->fence_value, &args.fence_value,
+ sizeof(args.fence_value));
+ if (ret) {
+ DXG_ERR("failed to copy fence value to user");
+ ret = -EINVAL;
+ }
+
+cleanup:
+
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ kref_put(&device->device_kref, dxgdevice_release);
+
+ return ret;
+}
+
static int
dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs)
{
@@ -4931,11 +5151,11 @@ static struct ioctl_desc ioctls[] = {
/* 0x05 */ {dxgkio_destroy_context, LX_DXDESTROYCONTEXT},
/* 0x06 */ {dxgkio_create_allocation, LX_DXCREATEALLOCATION},
/* 0x07 */ {dxgkio_create_paging_queue, LX_DXCREATEPAGINGQUEUE},
-/* 0x08 */ {},
+/* 0x08 */ {dxgkio_reserve_gpu_va, LX_DXRESERVEGPUVIRTUALADDRESS},
/* 0x09 */ {dxgkio_query_adapter_info, LX_DXQUERYADAPTERINFO},
/* 0x0a */ {dxgkio_query_vidmem_info, LX_DXQUERYVIDEOMEMORYINFO},
/* 0x0b */ {dxgkio_make_resident, LX_DXMAKERESIDENT},
-/* 0x0c */ {},
+/* 0x0c */ {dxgkio_map_gpu_va, LX_DXMAPGPUVIRTUALADDRESS},
/* 0x0d */ {dxgkio_escape, LX_DXESCAPE},
/* 0x0e */ {dxgkio_get_device_state, LX_DXGETDEVICESTATE},
/* 0x0f */ {dxgkio_submit_command, LX_DXSUBMITCOMMAND},
@@ -4956,7 +5176,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x1d */ {dxgkio_destroy_sync_object, LX_DXDESTROYSYNCHRONIZATIONOBJECT},
/* 0x1e */ {dxgkio_evict, LX_DXEVICT},
/* 0x1f */ {dxgkio_flush_heap_transitions, LX_DXFLUSHHEAPTRANSITIONS},
-/* 0x20 */ {},
+/* 0x20 */ {dxgkio_free_gpu_va, LX_DXFREEGPUVIRTUALADDRESS},
/* 0x21 */ {dxgkio_get_context_process_scheduling_priority,
LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY},
/* 0x22 */ {dxgkio_get_context_scheduling_priority,
@@ -4990,7 +5210,7 @@ static struct ioctl_desc ioctls[] = {
LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE},
/* 0x37 */ {dxgkio_unlock2, LX_DXUNLOCK2},
/* 0x38 */ {dxgkio_update_alloc_property, LX_DXUPDATEALLOCPROPERTY},
-/* 0x39 */ {},
+/* 0x39 */ {dxgkio_update_gpu_va, LX_DXUPDATEGPUVIRTUALADDRESS},
/* 0x3a */ {dxgkio_wait_sync_object_cpu,
LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU},
/* 0x3b */ {dxgkio_wait_sync_object_gpu,
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 944f9d1e73d6..1f60f5120e1d 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -1012,6 +1012,124 @@ struct d3dkmt_evict {
__u64 num_bytes_to_trim;
};
+struct d3dddigpuva_protection_type {
+ union {
+ struct {
+ __u64 write:1;
+ __u64 execute:1;
+ __u64 zero:1;
+ __u64 no_access:1;
+ __u64 system_use_only:1;
+ __u64 reserved:59;
+ };
+ __u64 value;
+ };
+};
+
+enum d3dddi_updategpuvirtualaddress_operation_type {
+ _D3DDDI_UPDATEGPUVIRTUALADDRESS_MAP = 0,
+ _D3DDDI_UPDATEGPUVIRTUALADDRESS_UNMAP = 1,
+ _D3DDDI_UPDATEGPUVIRTUALADDRESS_COPY = 2,
+ _D3DDDI_UPDATEGPUVIRTUALADDRESS_MAP_PROTECT = 3,
+};
+
+struct d3dddi_updategpuvirtualaddress_operation {
+ enum d3dddi_updategpuvirtualaddress_operation_type operation;
+ union {
+ struct {
+ __u64 base_address;
+ __u64 size;
+ struct d3dkmthandle allocation;
+ __u64 allocation_offset;
+ __u64 allocation_size;
+ } map;
+ struct {
+ __u64 base_address;
+ __u64 size;
+ struct d3dkmthandle allocation;
+ __u64 allocation_offset;
+ __u64 allocation_size;
+ struct d3dddigpuva_protection_type protection;
+ __u64 driver_protection;
+ } map_protect;
+ struct {
+ __u64 base_address;
+ __u64 size;
+ struct d3dddigpuva_protection_type protection;
+ } unmap;
+ struct {
+ __u64 source_address;
+ __u64 size;
+ __u64 dest_address;
+ } copy;
+ };
+};
+
+enum d3dddigpuva_reservation_type {
+ _D3DDDIGPUVA_RESERVE_NO_ACCESS = 0,
+ _D3DDDIGPUVA_RESERVE_ZERO = 1,
+ _D3DDDIGPUVA_RESERVE_NO_COMMIT = 2
+};
+
+struct d3dkmt_updategpuvirtualaddress {
+ struct d3dkmthandle device;
+ struct d3dkmthandle context;
+ struct d3dkmthandle fence_object;
+ __u32 num_operations;
+#ifdef __KERNEL__
+ struct d3dddi_updategpuvirtualaddress_operation *operations;
+#else
+ __u64 operations;
+#endif
+ __u32 reserved0;
+ __u32 reserved1;
+ __u64 reserved2;
+ __u64 fence_value;
+ union {
+ struct {
+ __u32 do_not_wait:1;
+ __u32 reserved:31;
+ };
+ __u32 value;
+ } flags;
+ __u32 reserved3;
+};
+
+struct d3dddi_mapgpuvirtualaddress {
+ struct d3dkmthandle paging_queue;
+ __u64 base_address;
+ __u64 minimum_address;
+ __u64 maximum_address;
+ struct d3dkmthandle allocation;
+ __u64 offset_in_pages;
+ __u64 size_in_pages;
+ struct d3dddigpuva_protection_type protection;
+ __u64 driver_protection;
+ __u32 reserved0;
+ __u64 reserved1;
+ __u64 virtual_address;
+ __u64 paging_fence_value;
+};
+
+struct d3dddi_reservegpuvirtualaddress {
+ struct d3dkmthandle adapter;
+ __u64 base_address;
+ __u64 minimum_address;
+ __u64 maximum_address;
+ __u64 size;
+ enum d3dddigpuva_reservation_type reservation_type;
+ __u64 driver_protection;
+ __u64 virtual_address;
+ __u64 paging_fence_value;
+};
+
+struct d3dkmt_freegpuvirtualaddress {
+ struct d3dkmthandle adapter;
+ __u32 reserved;
+ __u64 base_address;
+ __u64 size;
+};
+
enum d3dkmt_memory_segment_group {
_D3DKMT_MEMORY_SEGMENT_GROUP_LOCAL = 0,
_D3DKMT_MEMORY_SEGMENT_GROUP_NON_LOCAL = 1
@@ -1453,12 +1571,16 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x06, struct d3dkmt_createallocation)
#define LX_DXCREATEPAGINGQUEUE \
_IOWR(0x47, 0x07, struct d3dkmt_createpagingqueue)
+#define LX_DXRESERVEGPUVIRTUALADDRESS \
+ _IOWR(0x47, 0x08, struct d3dddi_reservegpuvirtualaddress)
#define LX_DXQUERYADAPTERINFO \
_IOWR(0x47, 0x09, struct d3dkmt_queryadapterinfo)
#define LX_DXQUERYVIDEOMEMORYINFO \
_IOWR(0x47, 0x0a, struct d3dkmt_queryvideomemoryinfo)
#define LX_DXMAKERESIDENT \
_IOWR(0x47, 0x0b, struct d3dddi_makeresident)
+#define LX_DXMAPGPUVIRTUALADDRESS \
+ _IOWR(0x47, 0x0c, struct d3dddi_mapgpuvirtualaddress)
#define LX_DXESCAPE \
_IOWR(0x47, 0x0d, struct d3dkmt_escape)
#define LX_DXGETDEVICESTATE \
@@ -1493,6 +1615,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x1e, struct d3dkmt_evict)
#define LX_DXFLUSHHEAPTRANSITIONS \
_IOWR(0x47, 0x1f, struct d3dkmt_flushheaptransitions)
+#define LX_DXFREEGPUVIRTUALADDRESS \
+ _IOWR(0x47, 0x20, struct d3dkmt_freegpuvirtualaddress)
#define LX_DXGETCONTEXTINPROCESSSCHEDULINGPRIORITY \
_IOWR(0x47, 0x21, struct d3dkmt_getcontextinprocessschedulingpriority)
#define LX_DXGETCONTEXTSCHEDULINGPRIORITY \
@@ -1529,6 +1653,8 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x37, struct d3dkmt_unlock2)
#define LX_DXUPDATEALLOCPROPERTY \
_IOWR(0x47, 0x38, struct d3dddi_updateallocproperty)
+#define LX_DXUPDATEGPUVIRTUALADDRESS \
+ _IOWR(0x47, 0x39, struct d3dkmt_updategpuvirtualaddress)
#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMCPU \
_IOWR(0x47, 0x3a, struct d3dkmt_waitforsynchronizationobjectfromcpu)
#define LX_DXWAITFORSYNCHRONIZATIONOBJECTFROMGPU \
^ permalink raw reply related [flat|nested] 56+ messages in thread* [PATCH 28/55] drivers: hv: dxgkrnl: Add support to map guest pages by host
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (26 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 27/55] drivers: hv: dxgkrnl: Manage compute device virtual addresses Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 29/55] drivers: hv: dxgkrnl: Removed struct vmbus_gpadl, which was defined in the main linux branch Eric Curtin
` (26 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement support for mapping guest memory pages by the host.
This removes the Hyper-V limitations of using GPADLs (guest physical
address lists).
Dxgkrnl uses Hyper-V GPADLs to share guest system memory with the
host. This method has limitations:
- a single GPADL can represent only ~32MB of memory
- there is a limit on how much memory the total size of all GPADLs
in a VM can represent.
To avoid these limitations, the host implements mapping of guest
memory pages. The presence of this support is determined by reading
the PCI config space. When the support is enabled, dxgkrnl does not
use GPADLs and instead uses the following code flow:
- memory pages of an existing system memory buffer are pinned
- PFNs of the pages are sent to the host via a VMBus message
- the host maps the PFNs to get access to the memory
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/Makefile | 2 +-
drivers/hv/dxgkrnl/dxgkrnl.h | 1 +
drivers/hv/dxgkrnl/dxgmodule.c | 33 +++++++++-
drivers/hv/dxgkrnl/dxgvmbus.c | 117 ++++++++++++++++++++++++---------
drivers/hv/dxgkrnl/dxgvmbus.h | 10 +++
drivers/hv/dxgkrnl/misc.c | 1 +
6 files changed, 129 insertions(+), 35 deletions(-)
diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile
index 9d821e83448a..fc85a47a6ad5 100644
--- a/drivers/hv/dxgkrnl/Makefile
+++ b/drivers/hv/dxgkrnl/Makefile
@@ -2,4 +2,4 @@
# Makefile for the hyper-v compute device driver (dxgkrnl).
obj-$(CONFIG_DXGKRNL) += dxgkrnl.o
-dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o
+dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 93bc9b41aa41..091dbe999d33 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -316,6 +316,7 @@ struct dxgglobal {
bool misc_registered;
bool pci_registered;
bool vmbus_registered;
+ bool map_guest_pages_enabled;
};
static inline struct dxgglobal *dxggbl(void)
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index 5c364a46b65f..b1b612b90fc1 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -147,7 +147,7 @@ void dxgglobal_remove_host_event(struct dxghostevent *event)
void signal_host_cpu_event(struct dxghostevent *eventhdr)
{
- struct dxghosteventcpu *event = (struct dxghosteventcpu *)eventhdr;
+ struct dxghosteventcpu *event = (struct dxghosteventcpu *)eventhdr;
if (event->remove_from_list ||
event->destroy_after_signal) {
@@ -426,7 +426,11 @@ const struct file_operations dxgk_fops = {
#define DXGK_VMBUS_VGPU_LUID_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \
sizeof(u32))
-/* The guest writes its capabilities to this address */
+/* The host caps (dxgk_vmbus_hostcaps) */
+#define DXGK_VMBUS_HOSTCAPS_OFFSET (DXGK_VMBUS_VGPU_LUID_OFFSET + \
+ sizeof(struct winluid))
+
+/* The guest writes its capabilities to this address */
#define DXGK_VMBUS_GUESTCAPS_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \
sizeof(u32))
@@ -441,6 +445,23 @@ struct dxgk_vmbus_guestcaps {
};
};
+/*
+ * The structure defines features, supported by the host.
+ *
+ * map_guest_memory
+ * Host can map guest memory pages, so the guest can avoid using GPADLs
+ * to represent existing system memory allocations.
+ */
+struct dxgk_vmbus_hostcaps {
+ union {
+ struct {
+ u32 map_guest_memory : 1;
+ u32 reserved : 31;
+ };
+ u32 host_caps;
+ };
+};
+
/*
* A helper function to read PCI config space.
*/
@@ -475,6 +496,7 @@ static int dxg_pci_probe_device(struct pci_dev *dev,
struct winluid vgpu_luid = {};
struct dxgk_vmbus_guestcaps guest_caps = {.wsl2 = 1};
struct dxgglobal *dxgglobal = dxggbl();
+ struct dxgk_vmbus_hostcaps host_caps = {};
mutex_lock(&dxgglobal->device_mutex);
@@ -503,6 +525,13 @@ static int dxg_pci_probe_device(struct pci_dev *dev,
if (ret)
goto cleanup;
+ ret = pci_read_config_dword(dev, DXGK_VMBUS_HOSTCAPS_OFFSET,
+ &host_caps.host_caps);
+ if (ret == 0) {
+ if (host_caps.map_guest_memory)
+ dxgglobal->map_guest_pages_enabled = true;
+ }
+
if (dxgglobal->vmbus_ver > DXGK_VMBUS_INTERFACE_VERSION)
dxgglobal->vmbus_ver = DXGK_VMBUS_INTERFACE_VERSION;
}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 425a1ab87bd6..4d7807909284 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1383,15 +1383,19 @@ int create_existing_sysmem(struct dxgdevice *device,
void *kmem = NULL;
int ret = 0;
struct dxgkvmb_command_setexistingsysmemstore *set_store_command;
+ struct dxgkvmb_command_setexistingsysmempages *set_pages_command;
u64 alloc_size = host_alloc->allocation_size;
u32 npages = alloc_size >> PAGE_SHIFT;
struct dxgvmbusmsg msg = {.hdr = NULL};
-
- ret = init_message(&msg, device->adapter, device->process,
- sizeof(*set_store_command));
- if (ret)
- goto cleanup;
- set_store_command = (void *)msg.msg;
+ const u32 max_pfns_in_message =
+ (DXG_MAX_VM_BUS_PACKET_SIZE - sizeof(*set_pages_command) -
+ PAGE_SIZE) / sizeof(__u64);
+ u32 alloc_offset_in_pages = 0;
+ struct page **page_in;
+ u64 *pfn;
+ u32 pages_to_send;
+ u32 i;
+ struct dxgglobal *dxgglobal = dxggbl();
/*
* Create a guest physical address list and set it as the allocation
@@ -1402,6 +1406,7 @@ int create_existing_sysmem(struct dxgdevice *device,
DXG_TRACE("Alloc size: %lld", alloc_size);
dxgalloc->cpu_address = (void *)sysmem;
+
dxgalloc->pages = vzalloc(npages * sizeof(void *));
if (dxgalloc->pages == NULL) {
DXG_ERR("failed to allocate pages");
@@ -1419,39 +1424,87 @@ int create_existing_sysmem(struct dxgdevice *device,
ret = -ENOMEM;
goto cleanup;
}
- kmem = vmap(dxgalloc->pages, npages, VM_MAP, PAGE_KERNEL);
- if (kmem == NULL) {
- DXG_ERR("vmap failed");
- ret = -ENOMEM;
- goto cleanup;
- }
- ret1 = vmbus_establish_gpadl(dxgglobal_get_vmbus(), kmem,
- alloc_size, &dxgalloc->gpadl);
- if (ret1) {
- DXG_ERR("establish_gpadl failed: %d", ret1);
- ret = -ENOMEM;
- goto cleanup;
- }
+ if (!dxgglobal->map_guest_pages_enabled) {
+ ret = init_message(&msg, device->adapter, device->process,
+ sizeof(*set_store_command));
+ if (ret)
+ goto cleanup;
+ set_store_command = (void *)msg.msg;
+
+ kmem = vmap(dxgalloc->pages, npages, VM_MAP, PAGE_KERNEL);
+ if (kmem == NULL) {
+ DXG_ERR("vmap failed");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ ret1 = vmbus_establish_gpadl(dxgglobal_get_vmbus(), kmem,
+ alloc_size, &dxgalloc->gpadl);
+ if (ret1) {
+ DXG_ERR("establish_gpadl failed: %d", ret1);
+ ret = -ENOMEM;
+ goto cleanup;
+ }
#ifdef _MAIN_KERNEL_
- DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle);
+ DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle);
#else
- DXG_TRACE("New gpadl %d", dxgalloc->gpadl);
+ DXG_TRACE("New gpadl %d", dxgalloc->gpadl);
#endif
- command_vgpu_to_host_init2(&set_store_command->hdr,
- DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE,
- device->process->host_handle);
- set_store_command->device = device->handle;
- set_store_command->device = device->handle;
- set_store_command->allocation = host_alloc->allocation;
+ command_vgpu_to_host_init2(&set_store_command->hdr,
+ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE,
+ device->process->host_handle);
+ set_store_command->device = device->handle;
+ set_store_command->allocation = host_alloc->allocation;
#ifdef _MAIN_KERNEL_
- set_store_command->gpadl = dxgalloc->gpadl.gpadl_handle;
+ set_store_command->gpadl = dxgalloc->gpadl.gpadl_handle;
#else
- set_store_command->gpadl = dxgalloc->gpadl;
+ set_store_command->gpadl = dxgalloc->gpadl;
#endif
- ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
- if (ret < 0)
- DXG_ERR("failed to set existing store: %x", ret);
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr,
+ msg.size);
+ if (ret < 0)
+ DXG_ERR("failed set existing store: %x", ret);
+ } else {
+ /*
+ * Send the list of the allocation PFNs to the host. The host
+ * will map the pages for GPU access.
+ */
+
+ ret = init_message(&msg, device->adapter, device->process,
+ sizeof(*set_pages_command) +
+ max_pfns_in_message * sizeof(u64));
+ if (ret)
+ goto cleanup;
+ set_pages_command = (void *)msg.msg;
+ command_vgpu_to_host_init2(&set_pages_command->hdr,
+ DXGK_VMBCOMMAND_SETEXISTINGSYSMEMPAGES,
+ device->process->host_handle);
+ set_pages_command->device = device->handle;
+ set_pages_command->allocation = host_alloc->allocation;
+
+ page_in = dxgalloc->pages;
+ while (alloc_offset_in_pages < npages) {
+ pfn = (u64 *)((char *)msg.msg +
+ sizeof(*set_pages_command));
+ pages_to_send = min(npages - alloc_offset_in_pages,
+ max_pfns_in_message);
+ set_pages_command->num_pages = pages_to_send;
+ set_pages_command->alloc_offset_in_pages =
+ alloc_offset_in_pages;
+
+ for (i = 0; i < pages_to_send; i++)
+ *pfn++ = page_to_pfn(*page_in++);
+
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel,
+ msg.hdr,
+ msg.size);
+ if (ret < 0) {
+ DXG_ERR("failed set existing pages: %x", ret);
+ break;
+ }
+ alloc_offset_in_pages += pages_to_send;
+ }
+ }
cleanup:
if (kmem)
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 88967ff6a505..b4a98f7c2522 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -234,6 +234,16 @@ struct dxgkvmb_command_setexistingsysmemstore {
u32 gpadl;
};
+/* Returns ntstatus */
+struct dxgkvmb_command_setexistingsysmempages {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ struct d3dkmthandle allocation;
+ u32 num_pages;
+ u32 alloc_offset_in_pages;
+ /* u64 pfn_array[num_pages] */
+};
+
struct dxgkvmb_command_createprocess {
struct dxgkvmb_command_vm_to_host hdr;
void *process;
diff --git a/drivers/hv/dxgkrnl/misc.c b/drivers/hv/dxgkrnl/misc.c
index cb1e0635bebc..4a1309d80ee5 100644
--- a/drivers/hv/dxgkrnl/misc.c
+++ b/drivers/hv/dxgkrnl/misc.c
@@ -35,3 +35,4 @@ u16 *wcsncpy(u16 *dest, const u16 *src, size_t n)
dest[i - 1] = 0;
return dest;
}
+
* [PATCH 29/55] drivers: hv: dxgkrnl: Removed struct vmbus_gpadl, which was defined in the main linux branch
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (27 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 28/55] drivers: hv: dxgkrnl: Add support to map guest pages by host Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 30/55] drivers: hv: dxgkrnl: Remove dxgk_init_ioctls Eric Curtin
` (25 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index 6f763e326a65..236febbc6fca 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -932,7 +932,7 @@ void dxgallocation_destroy(struct dxgallocation *alloc)
vmbus_teardown_gpadl(dxgglobal_get_vmbus(), &alloc->gpadl);
alloc->gpadl.gpadl_handle = 0;
}
-else
+#else
if (alloc->gpadl) {
DXG_TRACE("Teardown gpadl %d", alloc->gpadl);
vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl);
* [PATCH 30/55] drivers: hv: dxgkrnl: Remove dxgk_init_ioctls
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (28 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 29/55] drivers: hv: dxgkrnl: Removed struct vmbus_gpadl, which was defined in the main linux branch Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 31/55] drivers: hv: dxgkrnl: Creation of dxgsyncfile objects Eric Curtin
` (24 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
The array of ioctls is now initialized statically, so the
dxgk_init_ioctls function is no longer needed and is removed.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgmodule.c | 2 +-
drivers/hv/dxgkrnl/ioctl.c | 15 +++++++--------
2 files changed, 8 insertions(+), 9 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index b1b612b90fc1..f1245a9d8826 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -300,7 +300,7 @@ static void dxgglobal_start_adapters(void)
}
/*
- * Stopsthe active dxgadapter objects.
+ * Stop the active dxgadapter objects.
*/
static void dxgglobal_stop_adapters(void)
{
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index f6700e974f25..8732a66040a0 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -26,7 +26,6 @@
struct ioctl_desc {
int (*ioctl_callback)(struct dxgprocess *p, void __user *arg);
u32 ioctl;
- u32 arg_size;
};
#ifdef DEBUG
@@ -91,7 +90,7 @@ static const struct file_operations dxg_resource_fops = {
};
static int dxgkio_open_adapter_from_luid(struct dxgprocess *process,
- void *__user inargs)
+ void *__user inargs)
{
struct d3dkmt_openadapterfromluid args;
int ret;
@@ -1002,7 +1001,7 @@ dxgkio_create_hwqueue(struct dxgprocess *process, void *__user inargs)
}
static int dxgkio_destroy_hwqueue(struct dxgprocess *process,
- void *__user inargs)
+ void *__user inargs)
{
struct d3dkmt_destroyhwqueue args;
int ret;
@@ -2280,7 +2279,8 @@ dxgkio_submit_command(struct dxgprocess *process, void *__user inargs)
}
static int
-dxgkio_submit_command_to_hwqueue(struct dxgprocess *process, void *__user inargs)
+dxgkio_submit_command_to_hwqueue(struct dxgprocess *process,
+ void *__user inargs)
{
int ret;
struct d3dkmt_submitcommandtohwqueue args;
@@ -5087,8 +5087,7 @@ open_resource(struct dxgprocess *process,
}
static int
-dxgkio_open_resource_nt(struct dxgprocess *process,
- void *__user inargs)
+dxgkio_open_resource_nt(struct dxgprocess *process, void *__user inargs)
{
struct d3dkmt_openresourcefromnthandle args;
struct d3dkmt_openresourcefromnthandle *__user args_user = inargs;
@@ -5166,7 +5165,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x14 */ {dxgkio_enum_adapters, LX_DXENUMADAPTERS2},
/* 0x15 */ {dxgkio_close_adapter, LX_DXCLOSEADAPTER},
/* 0x16 */ {dxgkio_change_vidmem_reservation,
- LX_DXCHANGEVIDEOMEMORYRESERVATION},
+ LX_DXCHANGEVIDEOMEMORYRESERVATION},
/* 0x17 */ {},
/* 0x18 */ {dxgkio_create_hwqueue, LX_DXCREATEHWQUEUE},
/* 0x19 */ {dxgkio_destroy_device, LX_DXDESTROYDEVICE},
@@ -5205,7 +5204,7 @@ static struct ioctl_desc ioctls[] = {
LX_DXSIGNALSYNCHRONIZATIONOBJECTFROMGPU2},
/* 0x34 */ {dxgkio_submit_command_to_hwqueue, LX_DXSUBMITCOMMANDTOHWQUEUE},
/* 0x35 */ {dxgkio_submit_signal_to_hwqueue,
- LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE},
+ LX_DXSUBMITSIGNALSYNCOBJECTSTOHWQUEUE},
/* 0x36 */ {dxgkio_submit_wait_to_hwqueue,
LX_DXSUBMITWAITFORSYNCOBJECTSTOHWQUEUE},
/* 0x37 */ {dxgkio_unlock2, LX_DXUNLOCK2},
* [PATCH 31/55] drivers: hv: dxgkrnl: Creation of dxgsyncfile objects
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (29 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 30/55] drivers: hv: dxgkrnl: Remove dxgk_init_ioctls Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 32/55] drivers: hv: dxgkrnl: Use tracing instead of dev_dbg Eric Curtin
` (23 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement the ioctl to create a dxgsyncfile object
(LX_DXCREATESYNCFILE). This object is a wrapper around a monitored
fence sync object and a fence value.
dxgsyncfile is built on top of the Linux sync_file object and
provides a way for user mode to synchronize with the execution of
device DMA packets.
The ioctl creates a dxgsyncfile object for the given GPU
synchronization object and fence value. A file descriptor of the
sync_file object is returned to the caller, who can wait on the
object using poll(). When the underlying GPU synchronization object
is signaled on the host, the host sends a message to the virtual
machine and the sync_file object is signaled.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/Kconfig | 2 +
drivers/hv/dxgkrnl/Makefile | 2 +-
drivers/hv/dxgkrnl/dxgkrnl.h | 2 +
drivers/hv/dxgkrnl/dxgmodule.c | 12 ++
drivers/hv/dxgkrnl/dxgsyncfile.c | 215 +++++++++++++++++++++++++++++++
drivers/hv/dxgkrnl/dxgsyncfile.h | 30 +++++
drivers/hv/dxgkrnl/dxgvmbus.c | 33 +++--
drivers/hv/dxgkrnl/ioctl.c | 5 +-
include/uapi/misc/d3dkmthk.h | 9 ++
9 files changed, 294 insertions(+), 16 deletions(-)
create mode 100644 drivers/hv/dxgkrnl/dxgsyncfile.c
create mode 100644 drivers/hv/dxgkrnl/dxgsyncfile.h
diff --git a/drivers/hv/dxgkrnl/Kconfig b/drivers/hv/dxgkrnl/Kconfig
index bcd92bbff939..782692610887 100644
--- a/drivers/hv/dxgkrnl/Kconfig
+++ b/drivers/hv/dxgkrnl/Kconfig
@@ -6,6 +6,8 @@ config DXGKRNL
tristate "Microsoft Paravirtualized GPU support"
depends on HYPERV
depends on 64BIT || COMPILE_TEST
+ select DMA_SHARED_BUFFER
+ select SYNC_FILE
help
This driver supports paravirtualized virtual compute devices, exposed
by Microsoft Hyper-V when Linux is running inside of a virtual machine
diff --git a/drivers/hv/dxgkrnl/Makefile b/drivers/hv/dxgkrnl/Makefile
index fc85a47a6ad5..89824cda670a 100644
--- a/drivers/hv/dxgkrnl/Makefile
+++ b/drivers/hv/dxgkrnl/Makefile
@@ -2,4 +2,4 @@
# Makefile for the hyper-v compute device driver (dxgkrnl).
obj-$(CONFIG_DXGKRNL) += dxgkrnl.o
-dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o
+dxgkrnl-y := dxgmodule.o hmgr.o misc.o dxgadapter.o ioctl.o dxgvmbus.o dxgprocess.o dxgsyncfile.o
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 091dbe999d33..3a69e3b34e1c 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -120,6 +120,7 @@ struct dxgpagingqueue {
*/
enum dxghosteventtype {
dxghostevent_cpu_event = 1,
+ dxghostevent_dma_fence = 2,
};
struct dxghostevent {
@@ -858,6 +859,7 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process,
struct
d3dkmt_waitforsynchronizationobjectfromcpu
*args,
+ bool user_address,
u64 cpu_event);
int dxgvmb_send_lock2(struct dxgprocess *process,
struct dxgadapter *adapter,
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index f1245a9d8826..af51fcd35697 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -16,6 +16,7 @@
#include <linux/hyperv.h>
#include <linux/pci.h>
#include "dxgkrnl.h"
+#include "dxgsyncfile.h"
#define PCI_VENDOR_ID_MICROSOFT 0x1414
#define PCI_DEVICE_ID_VIRTUAL_RENDER 0x008E
@@ -145,6 +146,15 @@ void dxgglobal_remove_host_event(struct dxghostevent *event)
spin_unlock_irq(&dxgglobal->host_event_list_mutex);
}
+static void signal_dma_fence(struct dxghostevent *eventhdr)
+{
+ struct dxgsyncpoint *event = (struct dxgsyncpoint *)eventhdr;
+
+ event->fence_value++;
+ list_del(&eventhdr->host_event_list_entry);
+ dma_fence_signal(&event->base);
+}
+
void signal_host_cpu_event(struct dxghostevent *eventhdr)
{
struct dxghosteventcpu *event = (struct dxghosteventcpu *)eventhdr;
@@ -184,6 +194,8 @@ void dxgglobal_signal_host_event(u64 event_id)
DXG_TRACE("found event to signal");
if (event->event_type == dxghostevent_cpu_event)
signal_host_cpu_event(event);
+ else if (event->event_type == dxghostevent_dma_fence)
+ signal_dma_fence(event);
else
DXG_ERR("Unknown host event type");
break;
diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.c b/drivers/hv/dxgkrnl/dxgsyncfile.c
new file mode 100644
index 000000000000..88fd78f08fbe
--- /dev/null
+++ b/drivers/hv/dxgkrnl/dxgsyncfile.c
@@ -0,0 +1,215 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (c) 2022, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * Ioctl implementation
+ *
+ */
+
+#include <linux/eventfd.h>
+#include <linux/file.h>
+#include <linux/fs.h>
+#include <linux/anon_inodes.h>
+#include <linux/mman.h>
+
+#include "dxgkrnl.h"
+#include "dxgvmbus.h"
+#include "dxgsyncfile.h"
+
+#undef dev_fmt
+#define dev_fmt(fmt) "dxgk: " fmt
+
+#ifdef DEBUG
+static char *errorstr(int ret)
+{
+ return ret < 0 ? "err" : "";
+}
+#endif
+
+static const struct dma_fence_ops dxgdmafence_ops;
+
+static struct dxgsyncpoint *to_syncpoint(struct dma_fence *fence)
+{
+ if (fence->ops != &dxgdmafence_ops)
+ return NULL;
+ return container_of(fence, struct dxgsyncpoint, base);
+}
+
+int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_createsyncfile args;
+ struct dxgsyncpoint *pt = NULL;
+ int ret = 0;
+ int fd = get_unused_fd_flags(O_CLOEXEC);
+ struct sync_file *sync_file = NULL;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct d3dkmt_waitforsynchronizationobjectfromcpu waitargs = {};
+
+ if (fd < 0) {
+ DXG_ERR("get_unused_fd_flags failed: %d", fd);
+ ret = fd;
+ goto cleanup;
+ }
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EFAULT;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ DXG_ERR("dxgprocess_device_by_handle failed");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0) {
+ DXG_ERR("dxgdevice_acquire_lock_shared failed");
+ device = NULL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ DXG_ERR("dxgadapter_acquire_lock_shared failed");
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ pt = kzalloc(sizeof(*pt), GFP_KERNEL);
+ if (!pt) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ spin_lock_init(&pt->lock);
+ pt->fence_value = args.fence_value;
+ pt->context = dma_fence_context_alloc(1);
+ pt->hdr.event_id = dxgglobal_new_host_event_id();
+ pt->hdr.event_type = dxghostevent_dma_fence;
+ dxgglobal_add_host_event(&pt->hdr);
+
+ dma_fence_init(&pt->base, &dxgdmafence_ops, &pt->lock,
+ pt->context, args.fence_value);
+
+ sync_file = sync_file_create(&pt->base);
+ if (sync_file == NULL) {
+ DXG_ERR("sync_file_create failed");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ dma_fence_put(&pt->base);
+
+ waitargs.device = args.device;
+ waitargs.object_count = 1;
+ waitargs.objects = &args.monitored_fence;
+ waitargs.fence_values = &args.fence_value;
+ ret = dxgvmb_send_wait_sync_object_cpu(process, adapter,
+ &waitargs, false,
+ pt->hdr.event_id);
+ if (ret < 0) {
+ DXG_ERR("dxgvmb_send_wait_sync_object_cpu failed");
+ goto cleanup;
+ }
+
+ args.sync_file_handle = (u64)fd;
+ ret = copy_to_user(inargs, &args, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy output args");
+ ret = -EFAULT;
+ goto cleanup;
+ }
+
+ fd_install(fd, sync_file->file);
+
+cleanup:
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device)
+ dxgdevice_release_lock_shared(device);
+ if (ret) {
+ if (sync_file) {
+ fput(sync_file->file);
+ /* sync_file_release will destroy dma_fence */
+ pt = NULL;
+ }
+ if (pt)
+ dma_fence_put(&pt->base);
+ if (fd >= 0)
+ put_unused_fd(fd);
+ }
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+static const char *dxgdmafence_get_driver_name(struct dma_fence *fence)
+{
+ return "dxgkrnl";
+}
+
+static const char *dxgdmafence_get_timeline_name(struct dma_fence *fence)
+{
+ return "no_timeline";
+}
+
+static void dxgdmafence_release(struct dma_fence *fence)
+{
+ struct dxgsyncpoint *syncpoint;
+
+ syncpoint = to_syncpoint(fence);
+ if (syncpoint) {
+ if (syncpoint->hdr.event_id)
+ dxgglobal_get_host_event(syncpoint->hdr.event_id);
+ kfree(syncpoint);
+ }
+}
+
+static bool dxgdmafence_signaled(struct dma_fence *fence)
+{
+ struct dxgsyncpoint *syncpoint;
+
+ syncpoint = to_syncpoint(fence);
+ if (syncpoint == NULL)
+ return true;
+ return __dma_fence_is_later(syncpoint->fence_value, fence->seqno,
+ fence->ops);
+}
+
+static bool dxgdmafence_enable_signaling(struct dma_fence *fence)
+{
+ return true;
+}
+
+static void dxgdmafence_value_str(struct dma_fence *fence,
+ char *str, int size)
+{
+ snprintf(str, size, "%lld", fence->seqno);
+}
+
+static void dxgdmafence_timeline_value_str(struct dma_fence *fence,
+ char *str, int size)
+{
+ struct dxgsyncpoint *syncpoint;
+
+ syncpoint = to_syncpoint(fence);
+ snprintf(str, size, "%lld", syncpoint->fence_value);
+}
+
+static const struct dma_fence_ops dxgdmafence_ops = {
+ .get_driver_name = dxgdmafence_get_driver_name,
+ .get_timeline_name = dxgdmafence_get_timeline_name,
+ .enable_signaling = dxgdmafence_enable_signaling,
+ .signaled = dxgdmafence_signaled,
+ .release = dxgdmafence_release,
+ .fence_value_str = dxgdmafence_value_str,
+ .timeline_value_str = dxgdmafence_timeline_value_str,
+};
diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.h b/drivers/hv/dxgkrnl/dxgsyncfile.h
new file mode 100644
index 000000000000..207ef9b30f67
--- /dev/null
+++ b/drivers/hv/dxgkrnl/dxgsyncfile.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright (c) 2022, Microsoft Corporation.
+ *
+ * Author:
+ * Iouri Tarassov <iourit@linux.microsoft.com>
+ *
+ * Dxgkrnl Graphics Driver
+ * Headers for sync file objects
+ *
+ */
+
+#ifndef _DXGSYNCFILE_H
+#define _DXGSYNCFILE_H
+
+#include <linux/sync_file.h>
+
+int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs);
+
+struct dxgsyncpoint {
+ struct dxghostevent hdr;
+ struct dma_fence base;
+ u64 fence_value;
+ u64 context;
+ spinlock_t lock;
+ u64 u64;
+};
+
+#endif /* _DXGSYNCFILE_H */
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 4d7807909284..913ea3cabb31 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -2820,6 +2820,7 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process,
struct
d3dkmt_waitforsynchronizationobjectfromcpu
*args,
+ bool user_address,
u64 cpu_event)
{
int ret = -EINVAL;
@@ -2844,19 +2845,25 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process,
command->guest_event_pointer = (u64) cpu_event;
current_pos = (u8 *) &command[1];
- ret = copy_from_user(current_pos, args->objects, object_size);
- if (ret) {
- DXG_ERR("failed to copy objects");
- ret = -EINVAL;
- goto cleanup;
- }
- current_pos += object_size;
- ret = copy_from_user(current_pos, args->fence_values,
- fence_size);
- if (ret) {
- DXG_ERR("failed to copy fences");
- ret = -EINVAL;
- goto cleanup;
+ if (user_address) {
+ ret = copy_from_user(current_pos, args->objects, object_size);
+ if (ret) {
+ DXG_ERR("failed to copy objects");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ current_pos += object_size;
+ ret = copy_from_user(current_pos, args->fence_values,
+ fence_size);
+ if (ret) {
+ DXG_ERR("failed to copy fences");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ } else {
+ memcpy(current_pos, args->objects, object_size);
+ current_pos += object_size;
+ memcpy(current_pos, args->fence_values, fence_size);
}
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 8732a66040a0..6c26aafb0619 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -19,6 +19,7 @@
#include "dxgkrnl.h"
#include "dxgvmbus.h"
+#include "dxgsyncfile.h"
#undef pr_fmt
#define pr_fmt(fmt) "dxgk: " fmt
@@ -3488,7 +3489,7 @@ dxgkio_wait_sync_object_cpu(struct dxgprocess *process, void *__user inargs)
}
ret = dxgvmb_send_wait_sync_object_cpu(process, adapter,
- &args, event_id);
+ &args, true, event_id);
if (ret < 0)
goto cleanup;
@@ -5224,7 +5225,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x42 */ {dxgkio_open_resource_nt, LX_DXOPENRESOURCEFROMNTHANDLE},
/* 0x43 */ {dxgkio_query_statistics, LX_DXQUERYSTATISTICS},
/* 0x44 */ {dxgkio_share_object_with_host, LX_DXSHAREOBJECTWITHHOST},
-/* 0x45 */ {},
+/* 0x45 */ {dxgkio_create_sync_file, LX_DXCREATESYNCFILE},
};
/*
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 1f60f5120e1d..c7f168425dc7 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -1554,6 +1554,13 @@ struct d3dkmt_shareobjectwithhost {
__u64 object_vail_nt_handle;
};
+struct d3dkmt_createsyncfile {
+ struct d3dkmthandle device;
+ struct d3dkmthandle monitored_fence;
+ __u64 fence_value;
+ __u64 sync_file_handle; /* out */
+};
+
/*
* Dxgkrnl Graphics Port Driver ioctl definitions
*
@@ -1677,5 +1684,7 @@ struct d3dkmt_shareobjectwithhost {
_IOWR(0x47, 0x43, struct d3dkmt_querystatistics)
#define LX_DXSHAREOBJECTWITHHOST \
_IOWR(0x47, 0x44, struct d3dkmt_shareobjectwithhost)
+#define LX_DXCREATESYNCFILE \
+ _IOWR(0x47, 0x45, struct d3dkmt_createsyncfile)
#endif /* _D3DKMTHK_H */
* [PATCH 32/55] drivers: hv: dxgkrnl: Use tracing instead of dev_dbg
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (30 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 31/55] drivers: hv: dxgkrnl: Creation of dxgsyncfile objects Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 33/55] drivers: hv: dxgkrnl: Implement D3DKMTWaitSyncFile Eric Curtin
` (22 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 4 ++--
drivers/hv/dxgkrnl/dxgmodule.c | 5 ++++-
drivers/hv/dxgkrnl/dxgprocess.c | 6 +++---
drivers/hv/dxgkrnl/dxgvmbus.c | 4 ++--
drivers/hv/dxgkrnl/hmgr.c | 16 ++++++++--------
drivers/hv/dxgkrnl/ioctl.c | 8 ++++----
drivers/hv/dxgkrnl/misc.c | 4 ++--
7 files changed, 25 insertions(+), 22 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index 236febbc6fca..3d8bec295b87 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -18,8 +18,8 @@
#include "dxgkrnl.h"
-#undef pr_fmt
-#define pr_fmt(fmt) "dxgk: " fmt
+#undef dev_fmt
+#define dev_fmt(fmt) "dxgk: " fmt
int dxgadapter_set_vmbus(struct dxgadapter *adapter, struct hv_device *hdev)
{
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index af51fcd35697..08feae97e845 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -24,6 +24,9 @@
#undef pr_fmt
#define pr_fmt(fmt) "dxgk: " fmt
+#undef dev_fmt
+#define dev_fmt(fmt) "dxgk: " fmt
+
/*
* Interface from dxgglobal
*/
@@ -442,7 +445,7 @@ const struct file_operations dxgk_fops = {
#define DXGK_VMBUS_HOSTCAPS_OFFSET (DXGK_VMBUS_VGPU_LUID_OFFSET + \
sizeof(struct winluid))
-/* The guest writes its capavilities to this adderss */
+/* The guest writes its capabilities to this address */
#define DXGK_VMBUS_GUESTCAPS_OFFSET (DXGK_VMBUS_VERSION_OFFSET + \
sizeof(u32))
diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c
index 5de3f8ccb448..afef196c0588 100644
--- a/drivers/hv/dxgkrnl/dxgprocess.c
+++ b/drivers/hv/dxgkrnl/dxgprocess.c
@@ -13,8 +13,8 @@
#include "dxgkrnl.h"
-#undef pr_fmt
-#define pr_fmt(fmt) "dxgk: " fmt
+#undef dev_fmt
+#define dev_fmt(fmt) "dxgk: " fmt
/*
* Creates a new dxgprocess object
@@ -248,7 +248,7 @@ struct dxgadapter *dxgprocess_adapter_by_handle(struct dxgprocess *process,
HMGRENTRY_TYPE_DXGADAPTER,
handle);
if (adapter == NULL)
- DXG_ERR("adapter_by_handle failed %x", handle.v);
+ DXG_TRACE("adapter_by_handle failed %x", handle.v);
else if (kref_get_unless_zero(&adapter->adapter_kref) == 0) {
DXG_ERR("failed to acquire adapter reference");
adapter = NULL;
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 913ea3cabb31..d53d4254be63 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -22,8 +22,8 @@
#include "dxgkrnl.h"
#include "dxgvmbus.h"
-#undef pr_fmt
-#define pr_fmt(fmt) "dxgk: " fmt
+#undef dev_fmt
+#define dev_fmt(fmt) "dxgk: " fmt
#define RING_BUFSIZE (256 * 1024)
diff --git a/drivers/hv/dxgkrnl/hmgr.c b/drivers/hv/dxgkrnl/hmgr.c
index 526b50f46d96..24101d0091ab 100644
--- a/drivers/hv/dxgkrnl/hmgr.c
+++ b/drivers/hv/dxgkrnl/hmgr.c
@@ -19,8 +19,8 @@
#include "dxgkrnl.h"
#include "hmgr.h"
-#undef pr_fmt
-#define pr_fmt(fmt) "dxgk: " fmt
+#undef dev_fmt
+#define dev_fmt(fmt) "dxgk: " fmt
const struct d3dkmthandle zerohandle;
@@ -90,29 +90,29 @@ static bool is_handle_valid(struct hmgrtable *table, struct d3dkmthandle h,
struct hmgrentry *entry;
if (index >= table->table_size) {
- DXG_ERR("Invalid index %x %d", h.v, index);
+ DXG_TRACE("Invalid index %x %d", h.v, index);
return false;
}
entry = &table->entry_table[index];
if (unique != entry->unique) {
- DXG_ERR("Invalid unique %x %d %d %d %p",
+ DXG_TRACE("Invalid unique %x %d %d %d %p",
h.v, unique, entry->unique, index, entry->object);
return false;
}
if (entry->destroyed && !ignore_destroyed) {
- DXG_ERR("Invalid destroyed value");
+ DXG_TRACE("Invalid destroyed value");
return false;
}
if (entry->type == HMGRENTRY_TYPE_FREE) {
- DXG_ERR("Entry is freed %x %d", h.v, index);
+ DXG_TRACE("Entry is freed %x %d", h.v, index);
return false;
}
if (t != HMGRENTRY_TYPE_FREE && t != entry->type) {
- DXG_ERR("type mismatch %x %d %d", h.v, t, entry->type);
+ DXG_TRACE("type mismatch %x %d %d", h.v, t, entry->type);
return false;
}
@@ -500,7 +500,7 @@ void *hmgrtable_get_object_by_type(struct hmgrtable *table,
struct d3dkmthandle h)
{
if (!is_handle_valid(table, h, false, type)) {
- DXG_ERR("Invalid handle %x", h.v);
+ DXG_TRACE("Invalid handle %x", h.v);
return NULL;
}
return table->entry_table[get_index(h)].object;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 6c26aafb0619..4db23cd55b24 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -21,8 +21,8 @@
#include "dxgvmbus.h"
#include "dxgsyncfile.h"
-#undef pr_fmt
-#define pr_fmt(fmt) "dxgk: " fmt
+#undef dev_fmt
+#define dev_fmt(fmt) "dxgk: " fmt
struct ioctl_desc {
int (*ioctl_callback)(struct dxgprocess *p, void __user *arg);
@@ -556,7 +556,7 @@ dxgkio_enum_adapters3(struct dxgprocess *process, void *__user inargs)
cleanup:
- DXG_TRACE("ioctl: %s %d", errorstr(ret), ret);
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
return ret;
}
@@ -5242,7 +5242,7 @@ static int dxgk_ioctl(struct file *f, unsigned int p1, unsigned long p2)
int status;
struct dxgprocess *process;
- if (code < 1 || code >= ARRAY_SIZE(ioctls)) {
+ if (code < 1 || code >= ARRAY_SIZE(ioctls)) {
DXG_ERR("bad ioctl %x %x %x %x",
code, _IOC_TYPE(p1), _IOC_SIZE(p1), _IOC_DIR(p1));
return -ENOTTY;
diff --git a/drivers/hv/dxgkrnl/misc.c b/drivers/hv/dxgkrnl/misc.c
index 4a1309d80ee5..4bf6fe80d22a 100644
--- a/drivers/hv/dxgkrnl/misc.c
+++ b/drivers/hv/dxgkrnl/misc.c
@@ -18,8 +18,8 @@
#include "dxgkrnl.h"
#include "misc.h"
-#undef pr_fmt
-#define pr_fmt(fmt) "dxgk: " fmt
+#undef dev_fmt
+#define dev_fmt(fmt) "dxgk: " fmt
u16 *wcsncpy(u16 *dest, const u16 *src, size_t n)
{
* [PATCH 33/55] drivers: hv: dxgkrnl: Implement D3DKMTWaitSyncFile
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (31 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 32/55] drivers: hv: dxgkrnl: Use tracing instead of dev_dbg Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 34/55] drivers: hv: dxgkrnl: Improve tracing and return values from copy from user Eric Curtin
` (21 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 11 ++
drivers/hv/dxgkrnl/dxgmodule.c | 7 +-
drivers/hv/dxgkrnl/dxgprocess.c | 12 +-
drivers/hv/dxgkrnl/dxgsyncfile.c | 291 ++++++++++++++++++++++++++++++-
drivers/hv/dxgkrnl/dxgsyncfile.h | 3 +
drivers/hv/dxgkrnl/dxgvmbus.c | 49 ++++++
drivers/hv/dxgkrnl/ioctl.c | 16 +-
include/uapi/misc/d3dkmthk.h | 23 +++
8 files changed, 396 insertions(+), 16 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 3a69e3b34e1c..d92e1348ccfb 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -254,6 +254,10 @@ void dxgsharedsyncobj_add_syncobj(struct dxgsharedsyncobject *sharedsyncobj,
struct dxgsyncobject *syncobj);
void dxgsharedsyncobj_remove_syncobj(struct dxgsharedsyncobject *sharedsyncobj,
struct dxgsyncobject *syncobj);
+int dxgsharedsyncobj_get_host_nt_handle(struct dxgsharedsyncobject *syncobj,
+ struct dxgprocess *process,
+ struct d3dkmthandle objecthandle);
+void dxgsharedsyncobj_put(struct dxgsharedsyncobject *syncobj);
struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process,
struct dxgdevice *device,
@@ -384,6 +388,8 @@ struct dxgprocess {
pid_t tgid;
/* how many time the process was opened */
struct kref process_kref;
+ /* protects the object memory */
+ struct kref process_mem_kref;
/*
* This handle table is used for all objects except dxgadapter
* The handle table lock order is higher than the local_handle_table
@@ -405,6 +411,7 @@ struct dxgprocess {
struct dxgprocess *dxgprocess_create(void);
void dxgprocess_destroy(struct dxgprocess *process);
void dxgprocess_release(struct kref *refcount);
+void dxgprocess_mem_release(struct kref *refcount);
int dxgprocess_open_adapter(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmthandle *handle);
@@ -932,6 +939,10 @@ int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process,
struct d3dkmt_opensyncobjectfromnthandle2
*args,
struct dxgsyncobject *syncobj);
+int dxgvmb_send_open_sync_object(struct dxgprocess *process,
+ struct d3dkmthandle device,
+ struct d3dkmthandle host_shared_syncobj,
+ struct d3dkmthandle *syncobj);
int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryallocationresidency
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index 08feae97e845..5570f35954d4 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -149,10 +149,11 @@ void dxgglobal_remove_host_event(struct dxghostevent *event)
spin_unlock_irq(&dxgglobal->host_event_list_mutex);
}
-static void signal_dma_fence(struct dxghostevent *eventhdr)
+static void dxg_signal_dma_fence(struct dxghostevent *eventhdr)
{
struct dxgsyncpoint *event = (struct dxgsyncpoint *)eventhdr;
+ DXG_TRACE("syncpoint: %px, fence: %lld", event, event->fence_value);
event->fence_value++;
list_del(&eventhdr->host_event_list_entry);
dma_fence_signal(&event->base);
@@ -198,7 +199,7 @@ void dxgglobal_signal_host_event(u64 event_id)
if (event->event_type == dxghostevent_cpu_event)
signal_host_cpu_event(event);
else if (event->event_type == dxghostevent_dma_fence)
- signal_dma_fence(event);
+ dxg_signal_dma_fence(event);
else
DXG_ERR("Unknown host event type");
break;
@@ -355,6 +356,7 @@ static struct dxgprocess *dxgglobal_get_current_process(void)
if (entry->tgid == current->tgid) {
if (kref_get_unless_zero(&entry->process_kref)) {
process = entry;
+ kref_get(&entry->process_mem_kref);
DXG_TRACE("found dxgprocess");
} else {
DXG_TRACE("process is destroyed");
@@ -405,6 +407,7 @@ static int dxgk_release(struct inode *n, struct file *f)
return -EINVAL;
kref_put(&process->process_kref, dxgprocess_release);
+ kref_put(&process->process_mem_kref, dxgprocess_mem_release);
f->private_data = NULL;
return 0;
diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c
index afef196c0588..e77e3a4983f8 100644
--- a/drivers/hv/dxgkrnl/dxgprocess.c
+++ b/drivers/hv/dxgkrnl/dxgprocess.c
@@ -39,6 +39,7 @@ struct dxgprocess *dxgprocess_create(void)
} else {
INIT_LIST_HEAD(&process->plistentry);
kref_init(&process->process_kref);
+ kref_init(&process->process_mem_kref);
mutex_lock(&dxgglobal->plistmutex);
list_add_tail(&process->plistentry,
@@ -117,8 +118,17 @@ void dxgprocess_release(struct kref *refcount)
dxgprocess_destroy(process);
- if (process->host_handle.v)
+ if (process->host_handle.v) {
dxgvmb_send_destroy_process(process->host_handle);
+ process->host_handle.v = 0;
+ }
+}
+
+void dxgprocess_mem_release(struct kref *refcount)
+{
+ struct dxgprocess *process;
+
+ process = container_of(refcount, struct dxgprocess, process_mem_kref);
kfree(process);
}
diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.c b/drivers/hv/dxgkrnl/dxgsyncfile.c
index 88fd78f08fbe..9d5832c90ad7 100644
--- a/drivers/hv/dxgkrnl/dxgsyncfile.c
+++ b/drivers/hv/dxgkrnl/dxgsyncfile.c
@@ -9,6 +9,20 @@
* Dxgkrnl Graphics Driver
* Ioctl implementation
*
+ * dxgsyncpoint:
+ * - pointer to dxgsharedsyncobject
+ * - host_shared_handle_nt_reference incremented
+ * - list of (process, local syncobj d3dkmthandle) pairs
+ * wait for sync file
+ * - get dxgsyncpoint
+ * - if process doesn't have a local syncobj
+ * - create local dxgsyncobject
+ * - send open syncobj to the host
+ * - Send wait for syncobj to the context
+ * dxgsyncpoint destruction
+ * - walk the list of (process, local syncobj)
+ * - destroy syncobj
+ * - remove reference to dxgsharedsyncobject
*/
#include <linux/eventfd.h>
@@ -45,12 +59,15 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs)
struct d3dkmt_createsyncfile args;
struct dxgsyncpoint *pt = NULL;
int ret = 0;
- int fd = get_unused_fd_flags(O_CLOEXEC);
+ int fd;
struct sync_file *sync_file = NULL;
struct dxgdevice *device = NULL;
struct dxgadapter *adapter = NULL;
+ struct dxgsyncobject *syncobj = NULL;
struct d3dkmt_waitforsynchronizationobjectfromcpu waitargs = {};
+ bool device_lock_acquired = false;
+ fd = get_unused_fd_flags(O_CLOEXEC);
if (fd < 0) {
DXG_ERR("get_unused_fd_flags failed: %d", fd);
ret = fd;
@@ -74,9 +91,9 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs)
ret = dxgdevice_acquire_lock_shared(device);
if (ret < 0) {
DXG_ERR("dxgdevice_acquire_lock_shared failed");
- device = NULL;
goto cleanup;
}
+ device_lock_acquired = true;
adapter = device->adapter;
ret = dxgadapter_acquire_lock_shared(adapter);
@@ -109,6 +126,30 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs)
}
dma_fence_put(&pt->base);
+ hmgrtable_lock(&process->handle_table, DXGLOCK_SHARED);
+ syncobj = hmgrtable_get_object(&process->handle_table,
+ args.monitored_fence);
+ if (syncobj == NULL) {
+ DXG_ERR("invalid syncobj handle %x", args.monitored_fence.v);
+ ret = -EINVAL;
+ } else {
+ if (syncobj->shared) {
+ kref_get(&syncobj->syncobj_kref);
+ pt->shared_syncobj = syncobj->shared_owner;
+ }
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_SHARED);
+
+ if (pt->shared_syncobj) {
+ ret = dxgsharedsyncobj_get_host_nt_handle(pt->shared_syncobj,
+ process,
+ args.monitored_fence);
+ if (ret)
+ pt->shared_syncobj = NULL;
+ }
+ if (ret)
+ goto cleanup;
+
waitargs.device = args.device;
waitargs.object_count = 1;
waitargs.objects = &args.monitored_fence;
@@ -132,10 +173,15 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs)
fd_install(fd, sync_file->file);
cleanup:
+ if (syncobj && syncobj->shared)
+ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release);
if (adapter)
dxgadapter_release_lock_shared(adapter);
- if (device)
- dxgdevice_release_lock_shared(device);
+ if (device) {
+ if (device_lock_acquired)
+ dxgdevice_release_lock_shared(device);
+ kref_put(&device->device_kref, dxgdevice_release);
+ }
if (ret) {
if (sync_file) {
fput(sync_file->file);
@@ -151,6 +197,228 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs)
return ret;
}
+int dxgkio_open_syncobj_from_syncfile(struct dxgprocess *process,
+ void *__user inargs)
+{
+ struct d3dkmt_opensyncobjectfromsyncfile args;
+ int ret = 0;
+ struct dxgsyncpoint *pt = NULL;
+ struct dma_fence *dmafence = NULL;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct dxgsyncobject *syncobj = NULL;
+ struct d3dddi_synchronizationobject_flags flags = { };
+ struct d3dkmt_opensyncobjectfromnthandle2 openargs = { };
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EFAULT;
+ goto cleanup;
+ }
+
+ dmafence = sync_file_get_fence(args.sync_file_handle);
+ if (dmafence == NULL) {
+ DXG_ERR("failed to get dmafence from handle: %llx",
+ args.sync_file_handle);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ pt = to_syncpoint(dmafence);
+ if (pt == NULL || pt->shared_syncobj == NULL) {
+ DXG_ERR("Sync object is not shared");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ DXG_ERR("dxgprocess_device_by_handle failed");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0) {
+ DXG_ERR("dxgdevice_acquire_lock_shared failed");
+ kref_put(&device->device_kref, dxgdevice_release);
+ device = NULL;
+ goto cleanup;
+ }
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ DXG_ERR("dxgadapter_acquire_lock_shared failed");
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ flags.shared = 1;
+ flags.nt_security_sharing = 1;
+ syncobj = dxgsyncobject_create(process, device, adapter,
+ _D3DDDI_MONITORED_FENCE, flags);
+ if (syncobj == NULL) {
+ DXG_ERR("failed to create sync object");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ dxgsharedsyncobj_add_syncobj(pt->shared_syncobj, syncobj);
+
+ /* Open the shared syncobj to get a local handle */
+
+ openargs.device = device->handle;
+ openargs.flags.shared = 1;
+ openargs.flags.nt_security_sharing = 1;
+ openargs.flags.no_signal = 1;
+
+ ret = dxgvmb_send_open_sync_object_nt(process,
+ &dxgglobal->channel, &openargs, syncobj);
+ if (ret) {
+ DXG_ERR("Failed to open shared syncobj on host");
+ goto cleanup;
+ }
+
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ ret = hmgrtable_assign_handle(&process->handle_table,
+ syncobj,
+ HMGRENTRY_TYPE_DXGSYNCOBJECT,
+ openargs.sync_object);
+ if (ret == 0) {
+ syncobj->handle = openargs.sync_object;
+ kref_get(&syncobj->syncobj_kref);
+ }
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
+
+ args.syncobj = openargs.sync_object;
+ args.fence_value = pt->fence_value;
+ args.fence_value_cpu_va = openargs.monitored_fence.fence_value_cpu_va;
+ args.fence_value_gpu_va = openargs.monitored_fence.fence_value_gpu_va;
+
+ ret = copy_to_user(inargs, &args, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy output args");
+ ret = -EFAULT;
+ }
+
+cleanup:
+ if (dmafence)
+ dma_fence_put(dmafence);
+ if (ret) {
+ if (syncobj) {
+ dxgsyncobject_destroy(process, syncobj);
+ kref_put(&syncobj->syncobj_kref, dxgsyncobject_release);
+ }
+ }
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device) {
+ dxgdevice_release_lock_shared(device);
+ kref_put(&device->device_kref, dxgdevice_release);
+ }
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
+int dxgkio_wait_sync_file(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_waitsyncfile args;
+ struct dma_fence *dmafence = NULL;
+ int ret = 0;
+ struct dxgsyncpoint *pt = NULL;
+ struct dxgdevice *device = NULL;
+ struct dxgadapter *adapter = NULL;
+ struct d3dkmthandle syncobj_handle = {};
+ bool device_lock_acquired = false;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EFAULT;
+ goto cleanup;
+ }
+
+ dmafence = sync_file_get_fence(args.sync_file_handle);
+ if (dmafence == NULL) {
+ DXG_ERR("failed to get dmafence from handle: %llx",
+ args.sync_file_handle);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ pt = to_syncpoint(dmafence);
+
+ device = dxgprocess_device_by_object_handle(process,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ args.context);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0) {
+ DXG_ERR("dxgdevice_acquire_lock_shared failed");
+ goto cleanup;
+ }
+ device_lock_acquired = true;
+
+ adapter = device->adapter;
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0) {
+ DXG_ERR("dxgadapter_acquire_lock_shared failed");
+ adapter = NULL;
+ goto cleanup;
+ }
+
+ /* Open the shared syncobj to get a local handle */
+ if (pt->shared_syncobj == NULL) {
+ DXG_ERR("Sync object is not shared");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ ret = dxgvmb_send_open_sync_object(process,
+ device->handle,
+ pt->shared_syncobj->host_shared_handle,
+ &syncobj_handle);
+ if (ret) {
+ DXG_ERR("Failed to open shared syncobj on host");
+ goto cleanup;
+ }
+
+ /* Ask the host to insert the syncobj to the context queue */
+ ret = dxgvmb_send_wait_sync_object_gpu(process, adapter,
+ args.context, 1,
+ &syncobj_handle,
+ &pt->fence_value,
+ false);
+ if (ret < 0) {
+ DXG_ERR("dxgvmb_send_wait_sync_object_gpu failed");
+ dxgvmb_send_destroy_sync_object(process, syncobj_handle);
+ goto cleanup;
+ }
+
+ /*
+ * Destroy the local syncobject immediately. This will not unblock
+ * GPU waiters, but will unblock the CPU waiters, which include the
+ * sync file itself.
+ */
+ ret = dxgvmb_send_destroy_sync_object(process, syncobj_handle);
+
+cleanup:
+ if (adapter)
+ dxgadapter_release_lock_shared(adapter);
+ if (device) {
+ if (device_lock_acquired)
+ dxgdevice_release_lock_shared(device);
+ kref_put(&device->device_kref, dxgdevice_release);
+ }
+ if (dmafence)
+ dma_fence_put(dmafence);
+
+ DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ return ret;
+}
+
static const char *dxgdmafence_get_driver_name(struct dma_fence *fence)
{
return "dxgkrnl";
@@ -166,11 +434,16 @@ static void dxgdmafence_release(struct dma_fence *fence)
struct dxgsyncpoint *syncpoint;
syncpoint = to_syncpoint(fence);
- if (syncpoint) {
- if (syncpoint->hdr.event_id)
- dxgglobal_get_host_event(syncpoint->hdr.event_id);
- kfree(syncpoint);
- }
+ if (syncpoint == NULL)
+ return;
+
+ if (syncpoint->hdr.event_id)
+ dxgglobal_get_host_event(syncpoint->hdr.event_id);
+
+ if (syncpoint->shared_syncobj)
+ dxgsharedsyncobj_put(syncpoint->shared_syncobj);
+
+ kfree(syncpoint);
}
static bool dxgdmafence_signaled(struct dma_fence *fence)
diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.h b/drivers/hv/dxgkrnl/dxgsyncfile.h
index 207ef9b30f67..292b7f718987 100644
--- a/drivers/hv/dxgkrnl/dxgsyncfile.h
+++ b/drivers/hv/dxgkrnl/dxgsyncfile.h
@@ -17,10 +17,13 @@
#include <linux/sync_file.h>
int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs);
+int dxgkio_wait_sync_file(struct dxgprocess *process, void *__user inargs);
+int dxgkio_open_syncobj_from_syncfile(struct dxgprocess *p, void *__user args);
struct dxgsyncpoint {
struct dxghostevent hdr;
struct dma_fence base;
+ struct dxgsharedsyncobject *shared_syncobj;
u64 fence_value;
u64 context;
spinlock_t lock;
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index d53d4254be63..36f4d4e84d3e 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -796,6 +796,55 @@ int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_open_sync_object(struct dxgprocess *process,
+ struct d3dkmthandle device,
+ struct d3dkmthandle host_shared_syncobj,
+ struct d3dkmthandle *syncobj)
+{
+ struct dxgkvmb_command_opensyncobject *command;
+ struct dxgkvmb_command_opensyncobject_return result = { };
+ int ret;
+ struct dxgvmbusmsg msg;
+ struct dxgglobal *dxgglobal = dxggbl();
+
+ ret = init_message(&msg, NULL, process, sizeof(*command));
+ if (ret)
+ return ret;
+ command = (void *)msg.msg;
+
+ command_vm_to_host_init2(&command->hdr, DXGK_VMBCOMMAND_OPENSYNCOBJECT,
+ process->host_handle);
+ command->device = device;
+ command->global_sync_object = host_shared_syncobj;
+ command->flags.shared = 1;
+ command->flags.nt_security_sharing = 1;
+ command->flags.no_signal = 1;
+
+ ret = dxgglobal_acquire_channel_lock();
+ if (ret < 0)
+ goto cleanup;
+
+ ret = dxgvmb_send_sync_msg(&dxgglobal->channel, msg.hdr, msg.size,
+ &result, sizeof(result));
+
+ dxgglobal_release_channel_lock();
+
+ if (ret < 0)
+ goto cleanup;
+
+ ret = ntstatus2int(result.status);
+ if (ret < 0)
+ goto cleanup;
+
+ *syncobj = result.sync_object;
+
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process,
struct d3dkmthandle object,
struct d3dkmthandle *shared_handle)
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 4db23cd55b24..622904d5c3a9 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -36,10 +36,8 @@ static char *errorstr(int ret)
}
#endif
-static int dxgsyncobj_release(struct inode *inode, struct file *file)
+void dxgsharedsyncobj_put(struct dxgsharedsyncobject *syncobj)
{
- struct dxgsharedsyncobject *syncobj = file->private_data;
-
DXG_TRACE("Release syncobj: %p", syncobj);
mutex_lock(&syncobj->fd_mutex);
kref_get(&syncobj->ssyncobj_kref);
@@ -56,6 +54,13 @@ static int dxgsyncobj_release(struct inode *inode, struct file *file)
}
mutex_unlock(&syncobj->fd_mutex);
kref_put(&syncobj->ssyncobj_kref, dxgsharedsyncobj_release);
+}
+
+static int dxgsyncobj_release(struct inode *inode, struct file *file)
+{
+ struct dxgsharedsyncobject *syncobj = file->private_data;
+
+ dxgsharedsyncobj_put(syncobj);
return 0;
}
@@ -4478,7 +4483,7 @@ dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs)
return ret;
}
-static int
+int
dxgsharedsyncobj_get_host_nt_handle(struct dxgsharedsyncobject *syncobj,
struct dxgprocess *process,
struct d3dkmthandle objecthandle)
@@ -5226,6 +5231,9 @@ static struct ioctl_desc ioctls[] = {
/* 0x43 */ {dxgkio_query_statistics, LX_DXQUERYSTATISTICS},
/* 0x44 */ {dxgkio_share_object_with_host, LX_DXSHAREOBJECTWITHHOST},
/* 0x45 */ {dxgkio_create_sync_file, LX_DXCREATESYNCFILE},
+/* 0x46 */ {dxgkio_wait_sync_file, LX_DXWAITSYNCFILE},
+/* 0x47 */ {dxgkio_open_syncobj_from_syncfile,
+ LX_DXOPENSYNCOBJECTFROMSYNCFILE},
};
/*
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index c7f168425dc7..1eaa3f038322 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -1561,6 +1561,25 @@ struct d3dkmt_createsyncfile {
__u64 sync_file_handle; /* out */
};
+struct d3dkmt_waitsyncfile {
+ __u64 sync_file_handle;
+ struct d3dkmthandle context;
+ __u32 reserved;
+};
+
+struct d3dkmt_opensyncobjectfromsyncfile {
+ __u64 sync_file_handle;
+ struct d3dkmthandle device;
+ struct d3dkmthandle syncobj; /* out */
+ __u64 fence_value; /* out */
+#ifdef __KERNEL__
+ void *fence_value_cpu_va; /* out */
+#else
+ __u64 fence_value_cpu_va; /* out */
+#endif
+ __u64 fence_value_gpu_va; /* out */
+};
+
/*
* Dxgkrnl Graphics Port Driver ioctl definitions
*
@@ -1686,5 +1705,9 @@ struct d3dkmt_createsyncfile {
_IOWR(0x47, 0x44, struct d3dkmt_shareobjectwithhost)
#define LX_DXCREATESYNCFILE \
_IOWR(0x47, 0x45, struct d3dkmt_createsyncfile)
+#define LX_DXWAITSYNCFILE \
+ _IOWR(0x47, 0x46, struct d3dkmt_waitsyncfile)
+#define LX_DXOPENSYNCOBJECTFROMSYNCFILE \
+ _IOWR(0x47, 0x47, struct d3dkmt_opensyncobjectfromsyncfile)
#endif /* _D3DKMTHK_H */
* [PATCH 34/55] drivers: hv: dxgkrnl: Improve tracing and return values from copy from user
2026-03-19 20:24 ` Eric Curtin
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 17 +-
drivers/hv/dxgkrnl/dxgmodule.c | 1 +
drivers/hv/dxgkrnl/dxgsyncfile.c | 13 +-
drivers/hv/dxgkrnl/dxgvmbus.c | 98 ++++-----
drivers/hv/dxgkrnl/ioctl.c | 327 +++++++++++++++----------------
5 files changed, 225 insertions(+), 231 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index d92e1348ccfb..f63aa6f7a9dc 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -999,18 +999,25 @@ void dxgk_validate_ioctls(void);
trace_printk(dev_fmt(fmt) "\n", ##__VA_ARGS__); \
} while (0)
-#define DXG_ERR(fmt, ...) do { \
- dev_err(DXGDEV, fmt, ##__VA_ARGS__); \
- trace_printk("*** dxgkerror *** " dev_fmt(fmt) "\n", ##__VA_ARGS__); \
+#define DXG_ERR(fmt, ...) do { \
+ dev_err(DXGDEV, "%s: " fmt, __func__, ##__VA_ARGS__); \
+ trace_printk("*** dxgkerror *** " dev_fmt(fmt) "\n", ##__VA_ARGS__); \
} while (0)
#else
#define DXG_TRACE(...)
-#define DXG_ERR(fmt, ...) do { \
- dev_err(DXGDEV, fmt, ##__VA_ARGS__); \
+#define DXG_ERR(fmt, ...) do { \
+ dev_err(DXGDEV, "%s: " fmt, __func__, ##__VA_ARGS__); \
} while (0)
#endif /* DEBUG */
+#define DXG_TRACE_IOCTL_END(ret) do { \
+ if (ret < 0) \
+ DXG_ERR("Ioctl failed: %d", ret); \
+ else \
+ DXG_TRACE("Ioctl returned: %d", ret); \
+} while (0)
+
#endif
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index 5570f35954d4..aa27931a3447 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -961,3 +961,4 @@ module_exit(dxg_drv_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual compute device Driver");
+MODULE_VERSION("2.0.0");
diff --git a/drivers/hv/dxgkrnl/dxgsyncfile.c b/drivers/hv/dxgkrnl/dxgsyncfile.c
index 9d5832c90ad7..f3b3e8dd4568 100644
--- a/drivers/hv/dxgkrnl/dxgsyncfile.c
+++ b/drivers/hv/dxgkrnl/dxgsyncfile.c
@@ -38,13 +38,6 @@
#undef dev_fmt
#define dev_fmt(fmt) "dxgk: " fmt
-#ifdef DEBUG
-static char *errorstr(int ret)
-{
- return ret < 0 ? "err" : "";
-}
-#endif
-
static const struct dma_fence_ops dxgdmafence_ops;
static struct dxgsyncpoint *to_syncpoint(struct dma_fence *fence)
@@ -193,7 +186,7 @@ int dxgkio_create_sync_file(struct dxgprocess *process, void *__user inargs)
if (fd >= 0)
put_unused_fd(fd);
}
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -317,7 +310,7 @@ int dxgkio_open_syncobj_from_syncfile(struct dxgprocess *process,
kref_put(&device->device_kref, dxgdevice_release);
}
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -415,7 +408,7 @@ int dxgkio_wait_sync_file(struct dxgprocess *process, void *__user inargs)
if (dmafence)
dma_fence_put(dmafence);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 36f4d4e84d3e..566ccb6d01c9 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1212,7 +1212,7 @@ dxgvmb_send_create_context(struct dxgadapter *adapter,
args->priv_drv_data_size);
if (ret) {
DXG_ERR("Faled to copy private data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -1230,7 +1230,7 @@ dxgvmb_send_create_context(struct dxgadapter *adapter,
if (ret) {
DXG_ERR(
"Faled to copy private data to user");
- ret = -EINVAL;
+ ret = -EFAULT;
dxgvmb_send_destroy_context(adapter, process,
context);
context.v = 0;
@@ -1365,7 +1365,7 @@ copy_private_data(struct d3dkmt_createallocation *args,
args->private_runtime_data_size);
if (ret) {
DXG_ERR("failed to copy runtime data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
private_data_dest += args->private_runtime_data_size;
@@ -1385,7 +1385,7 @@ copy_private_data(struct d3dkmt_createallocation *args,
args->priv_drv_data_size);
if (ret) {
DXG_ERR("failed to copy private data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
private_data_dest += args->priv_drv_data_size;
@@ -1406,7 +1406,7 @@ copy_private_data(struct d3dkmt_createallocation *args,
input_alloc->priv_drv_data_size);
if (ret) {
DXG_ERR("failed to copy alloc data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
private_data_dest += input_alloc->priv_drv_data_size;
@@ -1658,7 +1658,7 @@ create_local_allocations(struct dxgprocess *process,
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to copy resource handle");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -1690,7 +1690,7 @@ create_local_allocations(struct dxgprocess *process,
host_alloc->priv_drv_data_size);
if (ret) {
DXG_ERR("failed to copy private data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
alloc_private_data += host_alloc->priv_drv_data_size;
@@ -1700,7 +1700,7 @@ create_local_allocations(struct dxgprocess *process,
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to copy alloc handle");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -1714,7 +1714,7 @@ create_local_allocations(struct dxgprocess *process,
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to copy global share");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -1961,7 +1961,7 @@ int dxgvmb_send_query_clock_calibration(struct dxgprocess *process,
sizeof(result.clock_data));
if (ret) {
DXG_ERR("failed to copy clock data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = ntstatus2int(result.status);
@@ -2041,7 +2041,7 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
alloc_size);
if (ret) {
DXG_ERR("failed to copy alloc handles");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -2059,7 +2059,7 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
result_allocation_size);
if (ret) {
DXG_ERR("failed to copy residency status");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -2105,7 +2105,7 @@ int dxgvmb_send_escape(struct dxgprocess *process,
args->priv_drv_data_size);
if (ret) {
DXG_ERR("failed to copy priv data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -2164,14 +2164,14 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process,
sizeof(output->budget));
if (ret) {
DXG_ERR("failed to copy budget");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = copy_to_user(&output->current_usage, &result.current_usage,
sizeof(output->current_usage));
if (ret) {
DXG_ERR("failed to copy current usage");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = copy_to_user(&output->current_reservation,
@@ -2179,7 +2179,7 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process,
sizeof(output->current_reservation));
if (ret) {
DXG_ERR("failed to copy reservation");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = copy_to_user(&output->available_for_reservation,
@@ -2187,7 +2187,7 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process,
sizeof(output->available_for_reservation));
if (ret) {
DXG_ERR("failed to copy avail reservation");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -2229,7 +2229,7 @@ int dxgvmb_send_get_device_state(struct dxgprocess *process,
ret = copy_to_user(output, &result.args, sizeof(result.args));
if (ret) {
DXG_ERR("failed to copy output args");
- ret = -EINVAL;
+ ret = -EFAULT;
}
if (args->state_type == _D3DKMT_DEVICESTATE_EXECUTION)
@@ -2404,7 +2404,7 @@ int dxgvmb_send_make_resident(struct dxgprocess *process,
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to copy alloc handles");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
command_vgpu_to_host_init2(&command->hdr,
@@ -2454,7 +2454,7 @@ int dxgvmb_send_evict(struct dxgprocess *process,
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to copy alloc handles");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
command_vgpu_to_host_init2(&command->hdr,
@@ -2502,14 +2502,14 @@ int dxgvmb_send_submit_command(struct dxgprocess *process,
hbufsize);
if (ret) {
DXG_ERR(" failed to copy history buffer");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = copy_from_user((u8 *) &command[1] + hbufsize,
args->priv_drv_data, args->priv_drv_data_size);
if (ret) {
DXG_ERR("failed to copy history priv data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2671,7 +2671,7 @@ int dxgvmb_send_update_gpu_va(struct dxgprocess *process,
op_size);
if (ret) {
DXG_ERR("failed to copy operations");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2751,7 +2751,7 @@ dxgvmb_send_create_sync_object(struct dxgprocess *process,
sizeof(u64));
if (ret) {
DXG_ERR("failed to read fence");
- ret = -EINVAL;
+ ret = -EFAULT;
} else {
DXG_TRACE("fence value:%lx",
value);
@@ -2820,7 +2820,7 @@ int dxgvmb_send_signal_sync_object(struct dxgprocess *process,
if (ret) {
DXG_ERR("Failed to read objects %p %d",
objects, object_size);
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
current_pos += object_size;
@@ -2834,7 +2834,7 @@ int dxgvmb_send_signal_sync_object(struct dxgprocess *process,
if (ret) {
DXG_ERR("Failed to read contexts %p %d",
contexts, context_size);
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
current_pos += context_size;
@@ -2844,7 +2844,7 @@ int dxgvmb_send_signal_sync_object(struct dxgprocess *process,
if (ret) {
DXG_ERR("Failed to read fences %p %d",
fences, fence_size);
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -2898,7 +2898,7 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process,
ret = copy_from_user(current_pos, args->objects, object_size);
if (ret) {
DXG_ERR("failed to copy objects");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
current_pos += object_size;
@@ -2906,7 +2906,7 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process,
fence_size);
if (ret) {
DXG_ERR("failed to copy fences");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
} else {
@@ -3037,7 +3037,7 @@ int dxgvmb_send_lock2(struct dxgprocess *process,
sizeof(args->data));
if (ret) {
DXG_ERR("failed to copy data");
- ret = -EINVAL;
+ ret = -EFAULT;
alloc->cpu_address_refcount--;
if (alloc->cpu_address_refcount == 0) {
dxg_unmap_iospace(alloc->cpu_address,
@@ -3119,7 +3119,7 @@ int dxgvmb_send_update_alloc_property(struct dxgprocess *process,
sizeof(u64));
if (ret1) {
DXG_ERR("failed to copy paging fence");
- ret = -EINVAL;
+ ret = -EFAULT;
}
}
cleanup:
@@ -3204,14 +3204,14 @@ int dxgvmb_send_set_allocation_priority(struct dxgprocess *process,
alloc_size);
if (ret) {
DXG_ERR("failed to copy alloc handle");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = copy_from_user((u8 *) allocations + alloc_size,
args->priorities, priority_size);
if (ret) {
DXG_ERR("failed to copy alloc priority");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3277,7 +3277,7 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process,
alloc_size);
if (ret) {
DXG_ERR("failed to copy alloc handles");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3296,7 +3296,7 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process,
priority_size);
if (ret) {
DXG_ERR("failed to copy priorities");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -3402,7 +3402,7 @@ int dxgvmb_send_offer_allocations(struct dxgprocess *process,
}
if (ret) {
DXG_ERR("failed to copy input handles");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3457,7 +3457,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process,
}
if (ret) {
DXG_ERR("failed to copy input handles");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3469,7 +3469,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process,
&result->paging_fence_value, sizeof(u64));
if (ret) {
DXG_ERR("failed to copy paging fence");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3480,7 +3480,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process,
args->allocation_count);
if (ret) {
DXG_ERR("failed to copy results");
- ret = -EINVAL;
+ ret = -EFAULT;
}
}
@@ -3559,7 +3559,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
args->priv_drv_data_size);
if (ret) {
DXG_ERR("failed to copy private data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -3604,7 +3604,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to copy hwqueue handle");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = copy_to_user(&inargs->queue_progress_fence,
@@ -3612,7 +3612,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to progress fence");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = copy_to_user(&inargs->queue_progress_fence_cpu_va,
@@ -3620,7 +3620,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
sizeof(inargs->queue_progress_fence_cpu_va));
if (ret) {
DXG_ERR("failed to copy fence cpu va");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = copy_to_user(&inargs->queue_progress_fence_gpu_va,
@@ -3628,7 +3628,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
sizeof(u64));
if (ret) {
DXG_ERR("failed to copy fence gpu va");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
if (args->priv_drv_data_size) {
@@ -3637,7 +3637,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
args->priv_drv_data_size);
if (ret) {
DXG_ERR("failed to copy private data");
- ret = -EINVAL;
+ ret = -EFAULT;
}
}
@@ -3706,7 +3706,7 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
args->private_data, args->private_data_size);
if (ret) {
DXG_ERR("Faled to copy private data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3758,7 +3758,7 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
args->private_data_size);
if (ret) {
DXG_ERR("Faled to copy private data to user");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -3791,7 +3791,7 @@ int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process,
primaries_size);
if (ret) {
DXG_ERR("failed to copy primaries handles");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -3801,7 +3801,7 @@ int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process,
args->priv_drv_data_size);
if (ret) {
DXG_ERR("failed to copy primaries data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 622904d5c3a9..3dc9e76f4f3d 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -29,13 +29,6 @@ struct ioctl_desc {
u32 ioctl;
};
-#ifdef DEBUG
-static char *errorstr(int ret)
-{
- return ret < 0 ? "err" : "";
-}
-#endif
-
void dxgsharedsyncobj_put(struct dxgsharedsyncobject *syncobj)
{
DXG_TRACE("Release syncobj: %p", syncobj);
@@ -108,7 +101,7 @@ static int dxgkio_open_adapter_from_luid(struct dxgprocess *process,
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("Faled to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -129,7 +122,7 @@ static int dxgkio_open_adapter_from_luid(struct dxgprocess *process,
&args.adapter_handle,
sizeof(struct d3dkmthandle));
if (ret)
- ret = -EINVAL;
+ ret = -EFAULT;
}
adapter = entry;
}
@@ -150,7 +143,7 @@ static int dxgkio_open_adapter_from_luid(struct dxgprocess *process,
if (ret < 0)
dxgprocess_close_adapter(process, args.adapter_handle);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -173,7 +166,7 @@ static int dxgkio_query_statistics(struct dxgprocess *process,
ret = copy_from_user(args, inargs, sizeof(*args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -199,7 +192,7 @@ static int dxgkio_query_statistics(struct dxgprocess *process,
ret = copy_to_user(inargs, args, sizeof(*args));
if (ret) {
DXG_ERR("failed to copy args");
- ret = -EINVAL;
+ ret = -EFAULT;
}
}
dxgadapter_release_lock_shared(adapter);
@@ -209,7 +202,7 @@ static int dxgkio_query_statistics(struct dxgprocess *process,
if (args)
vfree(args);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -233,7 +226,7 @@ dxgkp_enum_adapters(struct dxgprocess *process,
&dxgglobal->num_adapters, sizeof(u32));
if (ret) {
DXG_ERR("copy_to_user faled");
- ret = -EINVAL;
+ ret = -EFAULT;
}
goto cleanup;
}
@@ -291,7 +284,7 @@ dxgkp_enum_adapters(struct dxgprocess *process,
&dxgglobal->num_adapters, sizeof(u32));
if (ret) {
DXG_ERR("copy_to_user failed");
- ret = -EINVAL;
+ ret = -EFAULT;
}
goto cleanup;
}
@@ -300,13 +293,13 @@ dxgkp_enum_adapters(struct dxgprocess *process,
sizeof(adapter_count));
if (ret) {
DXG_ERR("failed to copy adapter_count");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = copy_to_user(info_out, info, sizeof(info[0]) * adapter_count);
if (ret) {
DXG_ERR("failed to copy adapter info");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -326,7 +319,7 @@ dxgkp_enum_adapters(struct dxgprocess *process,
if (adapters)
vfree(adapters);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -437,7 +430,7 @@ dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -447,7 +440,7 @@ dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs)
ret = copy_to_user(inargs, &args, sizeof(args));
if (ret) {
DXG_ERR("failed to copy args to user");
- ret = -EINVAL;
+ ret = -EFAULT;
}
goto cleanup;
}
@@ -508,14 +501,14 @@ dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs)
ret = copy_to_user(inargs, &args, sizeof(args));
if (ret) {
DXG_ERR("failed to copy args to user");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = copy_to_user(args.adapters, info,
sizeof(info[0]) * args.num_adapters);
if (ret) {
DXG_ERR("failed to copy adapter info to user");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -536,7 +529,7 @@ dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs)
if (adapters)
vfree(adapters);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -549,7 +542,7 @@ dxgkio_enum_adapters3(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -561,7 +554,7 @@ dxgkio_enum_adapters3(struct dxgprocess *process, void *__user inargs)
cleanup:
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -574,7 +567,7 @@ dxgkio_close_adapter(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -584,7 +577,7 @@ dxgkio_close_adapter(struct dxgprocess *process, void *__user inargs)
cleanup:
- DXG_TRACE("ioctl: %s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -598,7 +591,7 @@ dxgkio_query_adapter_info(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -630,7 +623,7 @@ dxgkio_query_adapter_info(struct dxgprocess *process, void *__user inargs)
if (adapter)
kref_put(&adapter->adapter_kref, dxgadapter_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -647,7 +640,7 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -677,7 +670,7 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs)
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to copy device handle");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -709,7 +702,7 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs)
if (adapter)
kref_put(&adapter->adapter_kref, dxgadapter_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -724,7 +717,7 @@ dxgkio_destroy_device(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -756,7 +749,7 @@ dxgkio_destroy_device(struct dxgprocess *process, void *__user inargs)
cleanup:
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -774,7 +767,7 @@ dxgkio_create_context_virtual(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -824,7 +817,7 @@ dxgkio_create_context_virtual(struct dxgprocess *process, void *__user inargs)
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to copy context handle");
- ret = -EINVAL;
+ ret = -EFAULT;
}
} else {
DXG_ERR("invalid host handle");
@@ -851,7 +844,7 @@ dxgkio_create_context_virtual(struct dxgprocess *process, void *__user inargs)
kref_put(&device->device_kref, dxgdevice_release);
}
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -868,7 +861,7 @@ dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -920,7 +913,7 @@ dxgkio_destroy_context(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %s %d", errorstr(ret), __func__, ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -938,7 +931,7 @@ dxgkio_create_hwqueue(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -1002,7 +995,7 @@ dxgkio_create_hwqueue(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -1019,7 +1012,7 @@ static int dxgkio_destroy_hwqueue(struct dxgprocess *process,
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -1070,7 +1063,7 @@ static int dxgkio_destroy_hwqueue(struct dxgprocess *process,
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -1088,7 +1081,7 @@ dxgkio_create_paging_queue(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -1128,7 +1121,7 @@ dxgkio_create_paging_queue(struct dxgprocess *process, void *__user inargs)
ret = copy_to_user(inargs, &args, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -1169,7 +1162,7 @@ dxgkio_create_paging_queue(struct dxgprocess *process, void *__user inargs)
kref_put(&device->device_kref, dxgdevice_release);
}
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -1186,7 +1179,7 @@ dxgkio_destroy_paging_queue(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -1247,7 +1240,7 @@ dxgkio_destroy_paging_queue(struct dxgprocess *process, void *__user inargs)
kref_put(&device->device_kref, dxgdevice_release);
}
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -1351,7 +1344,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -1373,7 +1366,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
alloc_info_size);
if (ret) {
DXG_ERR("failed to copy alloc info");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -1412,7 +1405,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
sizeof(standard_alloc));
if (ret) {
DXG_ERR("failed to copy std alloc data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
if (standard_alloc.type ==
@@ -1556,7 +1549,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
if (ret) {
DXG_ERR(
"failed to copy runtime data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -1576,7 +1569,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
if (ret) {
DXG_ERR(
"failed to copy res data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -1733,7 +1726,7 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
kref_put(&device->device_kref, dxgdevice_release);
}
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -1793,7 +1786,7 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -1823,7 +1816,7 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs)
handle_size);
if (ret) {
DXG_ERR("failed to copy alloc handles");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -1962,7 +1955,7 @@ dxgkio_destroy_allocation(struct dxgprocess *process, void *__user inargs)
if (allocs)
vfree(allocs);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -1978,7 +1971,7 @@ dxgkio_make_resident(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2022,7 +2015,7 @@ dxgkio_make_resident(struct dxgprocess *process, void *__user inargs)
&args.paging_fence_value, sizeof(u64));
if (ret2) {
DXG_ERR("failed to copy paging fence");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2030,7 +2023,7 @@ dxgkio_make_resident(struct dxgprocess *process, void *__user inargs)
&args.num_bytes_to_trim, sizeof(u64));
if (ret2) {
DXG_ERR("failed to copy bytes to trim");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2041,7 +2034,7 @@ dxgkio_make_resident(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -2058,7 +2051,7 @@ dxgkio_evict(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2090,7 +2083,7 @@ dxgkio_evict(struct dxgprocess *process, void *__user inargs)
&args.num_bytes_to_trim, sizeof(u64));
if (ret) {
DXG_ERR("failed to copy bytes to trim to user");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -2099,7 +2092,7 @@ dxgkio_evict(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -2114,7 +2107,7 @@ dxgkio_offer_allocations(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2153,7 +2146,7 @@ dxgkio_offer_allocations(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -2169,7 +2162,7 @@ dxgkio_reclaim_allocations(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2212,7 +2205,7 @@ dxgkio_reclaim_allocations(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -2227,7 +2220,7 @@ dxgkio_submit_command(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2280,7 +2273,7 @@ dxgkio_submit_command(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -2296,7 +2289,7 @@ dxgkio_submit_command_to_hwqueue(struct dxgprocess *process,
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2336,7 +2329,7 @@ dxgkio_submit_command_to_hwqueue(struct dxgprocess *process,
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -2352,7 +2345,7 @@ dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2376,7 +2369,7 @@ dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs)
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to copy hwqueue handle");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2410,7 +2403,7 @@ dxgkio_submit_signal_to_hwqueue(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -2428,7 +2421,7 @@ dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2447,7 +2440,7 @@ dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(objects, args.objects, object_size);
if (ret) {
DXG_ERR("failed to copy objects");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2460,7 +2453,7 @@ dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(fences, args.fence_values, object_size);
if (ret) {
DXG_ERR("failed to copy fence values");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2494,7 +2487,7 @@ dxgkio_submit_wait_to_hwqueue(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -2510,7 +2503,7 @@ dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2542,7 +2535,7 @@ dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs)
&args.paging_fence_value, sizeof(u64));
if (ret2) {
DXG_ERR("failed to copy paging fence to user");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2550,7 +2543,7 @@ dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs)
sizeof(args.virtual_address));
if (ret2) {
DXG_ERR("failed to copy va to user");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2561,7 +2554,7 @@ dxgkio_map_gpu_va(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -2577,7 +2570,7 @@ dxgkio_reserve_gpu_va(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2614,7 +2607,7 @@ dxgkio_reserve_gpu_va(struct dxgprocess *process, void *__user inargs)
sizeof(args.virtual_address));
if (ret) {
DXG_ERR("failed to copy VA to user");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -2624,7 +2617,7 @@ dxgkio_reserve_gpu_va(struct dxgprocess *process, void *__user inargs)
kref_put(&adapter->adapter_kref, dxgadapter_release);
}
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -2638,7 +2631,7 @@ dxgkio_free_gpu_va(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2680,7 +2673,7 @@ dxgkio_update_gpu_va(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2705,7 +2698,7 @@ dxgkio_update_gpu_va(struct dxgprocess *process, void *__user inargs)
sizeof(args.fence_value));
if (ret) {
DXG_ERR("failed to copy fence value to user");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -2734,7 +2727,7 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2808,7 +2801,7 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs)
ret = copy_to_user(inargs, &args, sizeof(args));
if (ret) {
DXG_ERR("failed to copy output args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2842,7 +2835,7 @@ dxgkio_create_sync_object(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -2856,7 +2849,7 @@ dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2885,7 +2878,7 @@ dxgkio_destroy_sync_object(struct dxgprocess *process, void *__user inargs)
cleanup:
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -2906,7 +2899,7 @@ dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -2995,7 +2988,7 @@ dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs)
if (ret == 0)
goto success;
DXG_ERR("failed to copy output args");
- ret = -EINVAL;
+ ret = -EFAULT;
cleanup:
@@ -3020,7 +3013,7 @@ dxgkio_open_sync_object_nt(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3041,7 +3034,7 @@ dxgkio_signal_sync_object(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3129,7 +3122,7 @@ dxgkio_signal_sync_object(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3144,7 +3137,7 @@ dxgkio_signal_sync_object_cpu(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
if (args.object_count == 0 ||
@@ -3181,7 +3174,7 @@ dxgkio_signal_sync_object_cpu(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3199,7 +3192,7 @@ dxgkio_signal_sync_object_gpu(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3240,7 +3233,7 @@ dxgkio_signal_sync_object_gpu(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3262,7 +3255,7 @@ dxgkio_signal_sync_object_gpu2(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3287,7 +3280,7 @@ dxgkio_signal_sync_object_gpu2(struct dxgprocess *process, void *__user inargs)
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to copy context handle");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3365,7 +3358,7 @@ dxgkio_signal_sync_object_gpu2(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3380,7 +3373,7 @@ dxgkio_wait_sync_object(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3418,7 +3411,7 @@ dxgkio_wait_sync_object(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3439,7 +3432,7 @@ dxgkio_wait_sync_object_cpu(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3540,7 +3533,7 @@ dxgkio_wait_sync_object_cpu(struct dxgprocess *process, void *__user inargs)
kfree(async_host_event);
}
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3563,7 +3556,7 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3583,7 +3576,7 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(objects, args.objects, object_size);
if (ret) {
DXG_ERR("failed to copy objects");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3637,7 +3630,7 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs)
object_size);
if (ret) {
DXG_ERR("failed to copy fences");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
} else {
@@ -3673,7 +3666,7 @@ dxgkio_wait_sync_object_gpu(struct dxgprocess *process, void *__user inargs)
if (fences && fences != &args.fence_value)
vfree(fences);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3690,7 +3683,7 @@ dxgkio_lock2(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3712,7 +3705,7 @@ dxgkio_lock2(struct dxgprocess *process, void *__user inargs)
alloc->cpu_address_refcount++;
} else {
DXG_ERR("Failed to copy cpu address");
- ret = -EINVAL;
+ ret = -EFAULT;
}
}
}
@@ -3749,7 +3742,7 @@ dxgkio_lock2(struct dxgprocess *process, void *__user inargs)
kref_put(&device->device_kref, dxgdevice_release);
success:
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3766,7 +3759,7 @@ dxgkio_unlock2(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3829,7 +3822,7 @@ dxgkio_unlock2(struct dxgprocess *process, void *__user inargs)
kref_put(&device->device_kref, dxgdevice_release);
success:
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3844,7 +3837,7 @@ dxgkio_update_alloc_property(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3872,7 +3865,7 @@ dxgkio_update_alloc_property(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3887,7 +3880,7 @@ dxgkio_mark_device_as_error(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
device = dxgprocess_device_by_handle(process, args.device);
@@ -3908,7 +3901,7 @@ dxgkio_mark_device_as_error(struct dxgprocess *process, void *__user inargs)
dxgadapter_release_lock_shared(adapter);
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3923,7 +3916,7 @@ dxgkio_query_alloc_residency(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -3949,7 +3942,7 @@ dxgkio_query_alloc_residency(struct dxgprocess *process, void *__user inargs)
dxgadapter_release_lock_shared(adapter);
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3964,7 +3957,7 @@ dxgkio_set_allocation_priority(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
device = dxgprocess_device_by_handle(process, args.device);
@@ -3984,7 +3977,7 @@ dxgkio_set_allocation_priority(struct dxgprocess *process, void *__user inargs)
dxgadapter_release_lock_shared(adapter);
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -3999,7 +3992,7 @@ dxgkio_get_allocation_priority(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
device = dxgprocess_device_by_handle(process, args.device);
@@ -4019,7 +4012,7 @@ dxgkio_get_allocation_priority(struct dxgprocess *process, void *__user inargs)
dxgadapter_release_lock_shared(adapter);
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -4069,14 +4062,14 @@ dxgkio_set_context_scheduling_priority(struct dxgprocess *process,
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = set_context_scheduling_priority(process, args.context,
args.priority, false);
cleanup:
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -4111,7 +4104,7 @@ get_context_scheduling_priority(struct dxgprocess *process,
ret = copy_to_user(priority, &pri, sizeof(pri));
if (ret) {
DXG_ERR("failed to copy priority to user");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -4134,14 +4127,14 @@ dxgkio_get_context_scheduling_priority(struct dxgprocess *process,
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = get_context_scheduling_priority(process, args.context,
&input->priority, false);
cleanup:
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -4155,14 +4148,14 @@ dxgkio_set_context_process_scheduling_priority(struct dxgprocess *process,
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
ret = set_context_scheduling_priority(process, args.context,
args.priority, true);
cleanup:
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -4176,7 +4169,7 @@ dxgkio_get_context_process_scheduling_priority(struct dxgprocess *process,
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -4184,7 +4177,7 @@ dxgkio_get_context_process_scheduling_priority(struct dxgprocess *process,
&((struct d3dkmt_getcontextinprocessschedulingpriority *)
inargs)->priority, true);
cleanup:
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -4199,7 +4192,7 @@ dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -4232,7 +4225,7 @@ dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs
if (adapter)
kref_put(&adapter->adapter_kref, dxgadapter_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -4247,7 +4240,7 @@ dxgkio_query_clock_calibration(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -4272,7 +4265,7 @@ dxgkio_query_clock_calibration(struct dxgprocess *process, void *__user inargs)
ret = copy_to_user(inargs, &args, sizeof(args));
if (ret) {
DXG_ERR("failed to copy output args");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -4295,7 +4288,7 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -4319,7 +4312,7 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs)
ret = copy_to_user(inargs, &args, sizeof(args));
if (ret) {
DXG_ERR("failed to copy output args");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -4341,7 +4334,7 @@ dxgkio_escape(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -4367,7 +4360,7 @@ dxgkio_escape(struct dxgprocess *process, void *__user inargs)
dxgadapter_release_lock_shared(adapter);
if (adapter)
kref_put(&adapter->adapter_kref, dxgadapter_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -4382,7 +4375,7 @@ dxgkio_query_vidmem_info(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -4432,7 +4425,7 @@ dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -4458,7 +4451,7 @@ dxgkio_get_device_state(struct dxgprocess *process, void *__user inargs)
ret = copy_to_user(inargs, &args, sizeof(args));
if (ret) {
DXG_ERR("failed to copy args to user");
- ret = -EINVAL;
+ ret = -EFAULT;
}
goto cleanup;
}
@@ -4590,7 +4583,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -4610,7 +4603,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(handles, args.objects, handle_size);
if (ret) {
DXG_ERR("failed to copy object handles");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -4708,7 +4701,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs)
ret = copy_to_user(args.shared_handle, &tmp, sizeof(u64));
if (ret) {
DXG_ERR("failed to copy shared handle");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -4726,7 +4719,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs)
if (resource)
kref_put(&resource->resource_kref, dxgresource_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -4742,7 +4735,7 @@ dxgkio_query_resource_info_nt(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -4795,7 +4788,7 @@ dxgkio_query_resource_info_nt(struct dxgprocess *process, void *__user inargs)
ret = copy_to_user(inargs, &args, sizeof(args));
if (ret) {
DXG_ERR("failed to copy output args");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -4807,7 +4800,7 @@ dxgkio_query_resource_info_nt(struct dxgprocess *process, void *__user inargs)
if (device)
kref_put(&device->device_kref, dxgdevice_release);
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -4859,7 +4852,7 @@ assign_resource_handles(struct dxgprocess *process,
sizeof(open_alloc_info));
if (ret) {
DXG_ERR("failed to copy alloc info");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -5009,7 +5002,7 @@ open_resource(struct dxgprocess *process,
shared_resource->runtime_private_data_size);
if (ret) {
DXG_ERR("failed to copy runtime data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -5020,7 +5013,7 @@ open_resource(struct dxgprocess *process,
shared_resource->resource_private_data_size);
if (ret) {
DXG_ERR("failed to copy resource data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -5031,7 +5024,7 @@ open_resource(struct dxgprocess *process,
shared_resource->alloc_private_data_size);
if (ret) {
DXG_ERR("failed to copy alloc data");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
}
@@ -5046,7 +5039,7 @@ open_resource(struct dxgprocess *process,
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to copy resource handle to user");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -5054,7 +5047,7 @@ open_resource(struct dxgprocess *process,
&args->total_priv_drv_data_size, sizeof(u32));
if (ret) {
DXG_ERR("failed to copy total driver data size");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
@@ -5102,7 +5095,7 @@ dxgkio_open_resource_nt(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -5112,7 +5105,7 @@ dxgkio_open_resource_nt(struct dxgprocess *process, void *__user inargs)
cleanup:
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -5125,7 +5118,7 @@ dxgkio_share_object_with_host(struct dxgprocess *process, void *__user inargs)
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
DXG_ERR("failed to copy input args");
- ret = -EINVAL;
+ ret = -EFAULT;
goto cleanup;
}
@@ -5138,12 +5131,12 @@ dxgkio_share_object_with_host(struct dxgprocess *process, void *__user inargs)
ret = copy_to_user(inargs, &args, sizeof(args));
if (ret) {
DXG_ERR("failed to copy data to user");
- ret = -EINVAL;
+ ret = -EFAULT;
}
cleanup:
- DXG_TRACE("ioctl:%s %d", errorstr(ret), ret);
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
* [PATCH 35/55] drivers: hv: dxgkrnl: Fix synchronization locks
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 19 ++++----
drivers/hv/dxgkrnl/dxgkrnl.h | 8 +++-
drivers/hv/dxgkrnl/dxgmodule.c | 3 +-
drivers/hv/dxgkrnl/dxgprocess.c | 11 +++--
drivers/hv/dxgkrnl/dxgvmbus.c | 85 +++++++++++++++++++++++----------
drivers/hv/dxgkrnl/ioctl.c | 24 ++++++----
drivers/hv/dxgkrnl/misc.h | 1 +
7 files changed, 101 insertions(+), 50 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index 3d8bec295b87..d9d45bd4a31e 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -136,7 +136,7 @@ void dxgadapter_release(struct kref *refcount)
struct dxgadapter *adapter;
adapter = container_of(refcount, struct dxgadapter, adapter_kref);
- DXG_TRACE("%p", adapter);
+ DXG_TRACE("Destroying adapter: %px", adapter);
kfree(adapter);
}
@@ -270,6 +270,8 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter,
if (ret < 0) {
kref_put(&device->device_kref, dxgdevice_release);
device = NULL;
+ } else {
+ DXG_TRACE("dxgdevice created: %px", device);
}
}
return device;
@@ -413,11 +415,8 @@ void dxgdevice_destroy(struct dxgdevice *device)
cleanup:
- if (device->adapter) {
+ if (device->adapter)
dxgprocess_adapter_remove_device(device);
- kref_put(&device->adapter->adapter_kref, dxgadapter_release);
- device->adapter = NULL;
- }
up_write(&device->device_lock);
@@ -721,6 +720,8 @@ void dxgdevice_release(struct kref *refcount)
struct dxgdevice *device;
device = container_of(refcount, struct dxgdevice, device_kref);
+ DXG_TRACE("Destroying device: %px", device);
+ kref_put(&device->adapter->adapter_kref, dxgadapter_release);
kfree(device);
}
@@ -999,6 +1000,9 @@ void dxgpagingqueue_destroy(struct dxgpagingqueue *pqueue)
kfree(pqueue);
}
+/*
+ * Process_adapter_mutex is held.
+ */
struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process,
struct dxgadapter *adapter)
{
@@ -1108,7 +1112,7 @@ int dxgprocess_adapter_add_device(struct dxgprocess *process,
void dxgprocess_adapter_remove_device(struct dxgdevice *device)
{
- DXG_TRACE("Removing device: %p", device);
+ DXG_TRACE("Removing device: %px", device);
mutex_lock(&device->adapter_info->device_list_mutex);
if (device->device_list_entry.next) {
list_del(&device->device_list_entry);
@@ -1147,8 +1151,7 @@ void dxgsharedsyncobj_release(struct kref *refcount)
if (syncobj->adapter) {
dxgadapter_remove_shared_syncobj(syncobj->adapter,
syncobj);
- kref_put(&syncobj->adapter->adapter_kref,
- dxgadapter_release);
+ kref_put(&syncobj->adapter->adapter_kref, dxgadapter_release);
}
kfree(syncobj);
}
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index f63aa6f7a9dc..1b40d6e39085 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -404,7 +404,10 @@ struct dxgprocess {
/* Handle of the corresponding objec on the host */
struct d3dkmthandle host_handle;
- /* List of opened adapters (dxgprocess_adapter) */
+ /*
+ * List of opened adapters (dxgprocess_adapter).
+ * Protected by process_adapter_mutex.
+ */
struct list_head process_adapter_list_head;
};
@@ -451,6 +454,8 @@ enum dxgadapter_state {
struct dxgadapter {
struct rw_semaphore core_lock;
struct kref adapter_kref;
+ /* Protects creation and destruction of dxgdevice objects */
+ struct mutex device_creation_lock;
/* Entry in the list of adapters in dxgglobal */
struct list_head adapter_list_entry;
/* The list of dxgprocess_adapter entries */
@@ -997,6 +1002,7 @@ void dxgk_validate_ioctls(void);
#define DXG_TRACE(fmt, ...) do { \
trace_printk(dev_fmt(fmt) "\n", ##__VA_ARGS__); \
+ dev_dbg(DXGDEV, "%s: " fmt, __func__, ##__VA_ARGS__); \
} while (0)
#define DXG_ERR(fmt, ...) do { \
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index aa27931a3447..f419597f711a 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -272,6 +272,7 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid,
adapter->host_vgpu_luid = host_vgpu_luid;
kref_init(&adapter->adapter_kref);
init_rwsem(&adapter->core_lock);
+ mutex_init(&adapter->device_creation_lock);
INIT_LIST_HEAD(&adapter->adapter_process_list_head);
INIT_LIST_HEAD(&adapter->shared_resource_list_head);
@@ -961,4 +962,4 @@ module_exit(dxg_drv_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual compute device Driver");
-MODULE_VERSION("2.0.0");
+MODULE_VERSION("2.0.1");
diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c
index e77e3a4983f8..fd51fd968049 100644
--- a/drivers/hv/dxgkrnl/dxgprocess.c
+++ b/drivers/hv/dxgkrnl/dxgprocess.c
@@ -214,14 +214,15 @@ int dxgprocess_close_adapter(struct dxgprocess *process,
hmgrtable_unlock(&process->local_handle_table, DXGLOCK_EXCL);
if (adapter) {
+ mutex_lock(&adapter->device_creation_lock);
+ dxgglobal_acquire_process_adapter_lock();
adapter_info = dxgprocess_get_adapter_info(process, adapter);
- if (adapter_info) {
- dxgglobal_acquire_process_adapter_lock();
+ if (adapter_info)
dxgprocess_adapter_release(adapter_info);
- dxgglobal_release_process_adapter_lock();
- } else {
+ else
ret = -EINVAL;
- }
+ dxgglobal_release_process_adapter_lock();
+ mutex_unlock(&adapter->device_creation_lock);
} else {
DXG_ERR("Adapter not found %x", handle.v);
ret = -EINVAL;
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 566ccb6d01c9..8c99f141482e 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1573,8 +1573,27 @@ process_allocation_handles(struct dxgprocess *process,
struct dxgresource *resource)
{
int ret = 0;
- int i;
+ int i = 0;
+ int k;
+ struct dxgkvmb_command_allocinfo_return *host_alloc;
+ /*
+ * Assign handles to the internal objects, so that VMBus messages will be
+ * sent to the host to free them during object destruction.
+ */
+ if (args->flags.create_resource)
+ resource->handle = res->resource;
+ for (i = 0; i < args->alloc_count; i++) {
+ host_alloc = &res->allocation_info[i];
+ dxgalloc[i]->alloc_handle = host_alloc->allocation;
+ }
+
+ /*
+ * Add the handles to the handle table.
+ * In case of failure all assigned handles must be freed.
+ * Once the function returns, the objects can be destroyed by
+ * handle at any time.
+ */
hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
if (args->flags.create_resource) {
ret = hmgrtable_assign_handle(&process->handle_table, resource,
@@ -1583,14 +1602,12 @@ process_allocation_handles(struct dxgprocess *process,
if (ret < 0) {
DXG_ERR("failed to assign resource handle %x",
res->resource.v);
+ goto cleanup;
} else {
- resource->handle = res->resource;
resource->handle_valid = 1;
}
}
for (i = 0; i < args->alloc_count; i++) {
- struct dxgkvmb_command_allocinfo_return *host_alloc;
-
host_alloc = &res->allocation_info[i];
ret = hmgrtable_assign_handle(&process->handle_table,
dxgalloc[i],
@@ -1602,9 +1619,26 @@ process_allocation_handles(struct dxgprocess *process,
args->alloc_count, i);
break;
}
- dxgalloc[i]->alloc_handle = host_alloc->allocation;
dxgalloc[i]->handle_valid = 1;
}
+ if (ret < 0) {
+ if (args->flags.create_resource) {
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_DXGRESOURCE,
+ res->resource);
+ resource->handle_valid = 0;
+ }
+ for (k = 0; k < i; k++) {
+ host_alloc = &res->allocation_info[k];
+ hmgrtable_free_handle(&process->handle_table,
+ HMGRENTRY_TYPE_DXGALLOCATION,
+ host_alloc->allocation);
+ dxgalloc[k]->handle_valid = 0;
+ }
+ }
+
+cleanup:
+
hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
if (ret)
@@ -1705,18 +1739,17 @@ create_local_allocations(struct dxgprocess *process,
}
}
- ret = process_allocation_handles(process, device, args, result,
- dxgalloc, resource);
- if (ret < 0)
- goto cleanup;
-
ret = copy_to_user(&input_args->global_share, &args->global_share,
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to copy global share");
ret = -EFAULT;
+ goto cleanup;
}
+ ret = process_allocation_handles(process, device, args, result,
+ dxgalloc, resource);
+
cleanup:
if (ret < 0) {
@@ -3576,22 +3609,6 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
goto cleanup;
}
- ret = hmgrtable_assign_handle_safe(&process->handle_table, hwqueue,
- HMGRENTRY_TYPE_DXGHWQUEUE,
- command->hwqueue);
- if (ret < 0)
- goto cleanup;
-
- ret = hmgrtable_assign_handle_safe(&process->handle_table,
- NULL,
- HMGRENTRY_TYPE_MONITOREDFENCE,
- command->hwqueue_progress_fence);
- if (ret < 0)
- goto cleanup;
-
- hwqueue->handle = command->hwqueue;
- hwqueue->progress_fence_sync_object = command->hwqueue_progress_fence;
-
hwqueue->progress_fence_mapped_address =
dxg_map_iospace((u64)command->hwqueue_progress_fence_cpuva,
PAGE_SIZE, PROT_READ | PROT_WRITE, true);
@@ -3641,6 +3658,22 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
}
}
+ ret = hmgrtable_assign_handle_safe(&process->handle_table,
+ NULL,
+ HMGRENTRY_TYPE_MONITOREDFENCE,
+ command->hwqueue_progress_fence);
+ if (ret < 0)
+ goto cleanup;
+
+ hwqueue->progress_fence_sync_object = command->hwqueue_progress_fence;
+ hwqueue->handle = command->hwqueue;
+
+ ret = hmgrtable_assign_handle_safe(&process->handle_table, hwqueue,
+ HMGRENTRY_TYPE_DXGHWQUEUE,
+ command->hwqueue);
+ if (ret < 0)
+ hwqueue->handle.v = 0;
+
cleanup:
if (ret < 0) {
DXG_ERR("failed %x", ret);
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 3dc9e76f4f3d..7c72790f917f 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -636,6 +636,7 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs)
struct dxgdevice *device = NULL;
struct d3dkmthandle host_device_handle = {};
bool adapter_locked = false;
+ bool device_creation_locked = false;
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
@@ -651,6 +652,9 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs)
goto cleanup;
}
+ mutex_lock(&adapter->device_creation_lock);
+ device_creation_locked = true;
+
device = dxgdevice_create(adapter, process);
if (device == NULL) {
ret = -ENOMEM;
@@ -699,6 +703,9 @@ dxgkio_create_device(struct dxgprocess *process, void *__user inargs)
if (adapter_locked)
dxgadapter_release_lock_shared(adapter);
+ if (device_creation_locked)
+ mutex_unlock(&adapter->device_creation_lock);
+
if (adapter)
kref_put(&adapter->adapter_kref, dxgadapter_release);
@@ -803,22 +810,21 @@ dxgkio_create_context_virtual(struct dxgprocess *process, void *__user inargs)
host_context_handle = dxgvmb_send_create_context(adapter,
process, &args);
if (host_context_handle.v) {
- hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
- ret = hmgrtable_assign_handle(&process->handle_table, context,
- HMGRENTRY_TYPE_DXGCONTEXT,
- host_context_handle);
- if (ret >= 0)
- context->handle = host_context_handle;
- hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
- if (ret < 0)
- goto cleanup;
ret = copy_to_user(&((struct d3dkmt_createcontextvirtual *)
inargs)->context, &host_context_handle,
sizeof(struct d3dkmthandle));
if (ret) {
DXG_ERR("failed to copy context handle");
ret = -EFAULT;
+ goto cleanup;
}
+ hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
+ ret = hmgrtable_assign_handle(&process->handle_table, context,
+ HMGRENTRY_TYPE_DXGCONTEXT,
+ host_context_handle);
+ if (ret >= 0)
+ context->handle = host_context_handle;
+ hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
} else {
DXG_ERR("invalid host handle");
ret = -EINVAL;
diff --git a/drivers/hv/dxgkrnl/misc.h b/drivers/hv/dxgkrnl/misc.h
index ee2ebfdd1c13..9fcab4ae2c0c 100644
--- a/drivers/hv/dxgkrnl/misc.h
+++ b/drivers/hv/dxgkrnl/misc.h
@@ -38,6 +38,7 @@ extern const struct d3dkmthandle zerohandle;
* core_lock (dxgadapter lock)
* device_lock (dxgdevice lock)
* process_adapter_mutex
+ * device_creation_lock in dxgadapter
* adapter_list_lock
* device_mutex (dxgglobal mutex)
*/
* [PATCH 36/55] drivers: hv: dxgkrnl: Close shared file objects in case of a failure
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/ioctl.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 7c72790f917f..69324510c9e2 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -4536,7 +4536,7 @@ enum dxg_sharedobject_type {
};
static int get_object_fd(enum dxg_sharedobject_type type,
- void *object, int *fdout)
+ void *object, int *fdout, struct file **filp)
{
struct file *file;
int fd;
@@ -4565,8 +4565,8 @@ static int get_object_fd(enum dxg_sharedobject_type type,
return -ENOTRECOVERABLE;
}
- fd_install(fd, file);
*fdout = fd;
+ *filp = file;
return 0;
}
@@ -4581,6 +4581,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs)
struct dxgsharedresource *shared_resource = NULL;
struct d3dkmthandle *handles = NULL;
int object_fd = -1;
+ struct file *filp = NULL;
void *obj = NULL;
u32 handle_size;
int ret;
@@ -4660,7 +4661,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs)
switch (object_type) {
case HMGRENTRY_TYPE_DXGSYNCOBJECT:
ret = get_object_fd(DXG_SHARED_SYNCOBJECT, shared_syncobj,
- &object_fd);
+ &object_fd, &filp);
if (ret < 0) {
DXG_ERR("get_object_fd failed for sync object");
goto cleanup;
@@ -4675,7 +4676,7 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs)
break;
case HMGRENTRY_TYPE_DXGRESOURCE:
ret = get_object_fd(DXG_SHARED_RESOURCE, shared_resource,
- &object_fd);
+ &object_fd, &filp);
if (ret < 0) {
DXG_ERR("get_object_fd failed for resource");
goto cleanup;
@@ -4708,10 +4709,15 @@ dxgkio_share_objects(struct dxgprocess *process, void *__user inargs)
if (ret) {
DXG_ERR("failed to copy shared handle");
ret = -EFAULT;
+ goto cleanup;
}
+ fd_install(object_fd, filp);
+
cleanup:
if (ret < 0) {
+ if (filp)
+ fput(filp);
if (object_fd >= 0)
put_unused_fd(object_fd);
}
* [PATCH 37/55] drivers: hv: dxgkrnl: Added missed NULL check for resource object
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/ioctl.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 69324510c9e2..98350583943e 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -1589,7 +1589,8 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
&process->handle_table,
HMGRENTRY_TYPE_DXGRESOURCE,
args.resource);
- kref_get(&resource->resource_kref);
+ if (resource != NULL)
+ kref_get(&resource->resource_kref);
dxgprocess_ht_lock_shared_up(process);
if (resource == NULL || resource->device != device) {
@@ -1693,10 +1694,8 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
&standard_alloc);
cleanup:
- if (resource_mutex_acquired) {
+ if (resource_mutex_acquired)
mutex_unlock(&resource->resource_mutex);
- kref_put(&resource->resource_kref, dxgresource_release);
- }
if (ret < 0) {
if (dxgalloc) {
for (i = 0; i < args.alloc_count; i++) {
@@ -1727,6 +1726,9 @@ dxgkio_create_allocation(struct dxgprocess *process, void *__user inargs)
if (adapter)
dxgadapter_release_lock_shared(adapter);
+ if (resource && !args.flags.create_resource)
+ kref_put(&resource->resource_kref, dxgresource_release);
+
if (device) {
dxgdevice_release_lock_shared(device);
kref_put(&device->device_kref, dxgdevice_release);
* [PATCH 38/55] drivers: hv: dxgkrnl: Fixed dxgkrnl to build for the 6.1 kernel
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Definition for GPADL was changed from u32 to struct vmbus_gpadl.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 8 --------
drivers/hv/dxgkrnl/dxgkrnl.h | 4 ----
drivers/hv/dxgkrnl/dxgvmbus.c | 8 --------
3 files changed, 20 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index d9d45bd4a31e..bcd19b7267d1 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -927,19 +927,11 @@ void dxgallocation_destroy(struct dxgallocation *alloc)
alloc->owner.device,
&args, &alloc->alloc_handle);
}
-#ifdef _MAIN_KERNEL_
if (alloc->gpadl.gpadl_handle) {
DXG_TRACE("Teardown gpadl %d", alloc->gpadl.gpadl_handle);
vmbus_teardown_gpadl(dxgglobal_get_vmbus(), &alloc->gpadl);
alloc->gpadl.gpadl_handle = 0;
}
-#else
- if (alloc->gpadl) {
- DXG_TRACE("Teardown gpadl %d", alloc->gpadl);
- vmbus_teardown_gpadl(dxgglobal_get_vmbus(), alloc->gpadl);
- alloc->gpadl = 0;
- }
-#endif
if (alloc->priv_drv_data)
vfree(alloc->priv_drv_data);
if (alloc->cpu_address_mapped)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 1b40d6e39085..c5ed23cb90df 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -728,11 +728,7 @@ struct dxgallocation {
u32 cached:1;
u32 handle_valid:1;
/* GPADL address list for existing sysmem allocations */
-#ifdef _MAIN_KERNEL_
struct vmbus_gpadl gpadl;
-#else
- u32 gpadl;
-#endif
/* Number of pages in the 'pages' array */
u32 num_pages;
/*
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 8c99f141482e..eb3f4c5153a6 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1493,22 +1493,14 @@ int create_existing_sysmem(struct dxgdevice *device,
ret = -ENOMEM;
goto cleanup;
}
-#ifdef _MAIN_KERNEL_
DXG_TRACE("New gpadl %d", dxgalloc->gpadl.gpadl_handle);
-#else
- DXG_TRACE("New gpadl %d", dxgalloc->gpadl);
-#endif
command_vgpu_to_host_init2(&set_store_command->hdr,
DXGK_VMBCOMMAND_SETEXISTINGSYSMEMSTORE,
device->process->host_handle);
set_store_command->device = device->handle;
set_store_command->allocation = host_alloc->allocation;
-#ifdef _MAIN_KERNEL_
set_store_command->gpadl = dxgalloc->gpadl.gpadl_handle;
-#else
- set_store_command->gpadl = dxgalloc->gpadl;
-#endif
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr,
msg.size);
if (ret < 0)
* [PATCH 39/55] drivers: hv: dxgkrnl: Added support for compute only adapters
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 1 +
drivers/hv/dxgkrnl/dxgmodule.c | 11 ++++++++++-
drivers/hv/dxgkrnl/dxgvmbus.c | 1 +
drivers/hv/dxgkrnl/ioctl.c | 4 ++++
4 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index c5ed23cb90df..d20489317c0b 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -478,6 +478,7 @@ struct dxgadapter {
struct winluid luid; /* VM bus channel luid */
u16 device_description[80];
u16 device_instance_id[WIN_MAX_PATH];
+ bool compute_only;
bool stopping_adapter;
};
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index f419597f711a..0fafb6167229 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -20,6 +20,7 @@
#define PCI_VENDOR_ID_MICROSOFT 0x1414
#define PCI_DEVICE_ID_VIRTUAL_RENDER 0x008E
+#define PCI_DEVICE_ID_COMPUTE_ACCELERATOR 0x008A
#undef pr_fmt
#define pr_fmt(fmt) "dxgk: " fmt
@@ -270,6 +271,8 @@ int dxgglobal_create_adapter(struct pci_dev *dev, guid_t *guid,
adapter->adapter_state = DXGADAPTER_STATE_WAITING_VMBUS;
adapter->host_vgpu_luid = host_vgpu_luid;
+ if (dev->device == PCI_DEVICE_ID_COMPUTE_ACCELERATOR)
+ adapter->compute_only = true;
kref_init(&adapter->adapter_kref);
init_rwsem(&adapter->core_lock);
mutex_init(&adapter->device_creation_lock);
@@ -622,6 +625,12 @@ static struct pci_device_id dxg_pci_id_table[] = {
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID
},
+ {
+ .vendor = PCI_VENDOR_ID_MICROSOFT,
+ .device = PCI_DEVICE_ID_COMPUTE_ACCELERATOR,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID
+ },
{ 0 }
};
@@ -962,4 +971,4 @@ module_exit(dxg_drv_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual compute device Driver");
-MODULE_VERSION("2.0.1");
+MODULE_VERSION("2.0.2");
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index eb3f4c5153a6..5f17efc937c3 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -3774,6 +3774,7 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
adapter_type->indirect_display_device = 0;
adapter_type->acg_supported = 0;
adapter_type->support_set_timings_from_vidpn = 0;
+ adapter_type->compute_only = !!adapter->compute_only;
break;
}
default:
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 98350583943e..f735b18fcc14 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -254,6 +254,8 @@ dxgkp_enum_adapters(struct dxgprocess *process,
list_for_each_entry(entry, &dxgglobal->adapter_list_head,
adapter_list_entry) {
+ if (entry->compute_only && !filter.include_compute_only)
+ continue;
if (dxgadapter_acquire_lock_shared(entry) == 0) {
struct d3dkmt_adapterinfo *inf = &info[adapter_count];
@@ -474,6 +476,8 @@ dxgkio_enum_adapters(struct dxgprocess *process, void *__user inargs)
list_for_each_entry(entry, &dxgglobal->adapter_list_head,
adapter_list_entry) {
+ if (entry->compute_only)
+ continue;
if (dxgadapter_acquire_lock_shared(entry) == 0) {
struct d3dkmt_adapterinfo *inf = &info[adapter_count];
* [PATCH 40/55] drivers: hv: dxgkrnl: Added implementation for D3DKMTInvalidateCache
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
D3DKMTInvalidateCache is called by user-mode drivers when the device
does not support cache-coherent access to compute device allocations.
It must be called after an allocation has been accessed by the CPU and
is about to be accessed by the device, and vice versa.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 3 +++
drivers/hv/dxgkrnl/dxgvmbus.c | 27 +++++++++++++++++++
drivers/hv/dxgkrnl/dxgvmbus.h | 11 ++++++++
drivers/hv/dxgkrnl/ioctl.c | 49 +++++++++++++++++++++++++++++++++--
include/uapi/misc/d3dkmthk.h | 9 +++++++
5 files changed, 97 insertions(+), 2 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index d20489317c0b..e7d8919b3c01 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -989,6 +989,9 @@ int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel,
u32 cmd_size);
int dxgvmb_send_share_object_with_host(struct dxgprocess *process,
struct d3dkmt_shareobjectwithhost *args);
+int dxgvmb_send_invalidate_cache(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_invalidatecache *args);
void signal_host_cpu_event(struct dxghostevent *eventhdr);
int ntstatus2int(struct ntstatus status);
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 5f17efc937c3..487804ca731a 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -2021,6 +2021,33 @@ int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_invalidate_cache(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_invalidatecache *args)
+{
+ struct dxgkvmb_command_invalidatecache *command;
+ int ret;
+ struct dxgvmbusmsg msg = {.hdr = NULL};
+
+ ret = init_message(&msg, adapter, process, sizeof(*command));
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+ command_vgpu_to_host_init2(&command->hdr,
+ DXGK_VMBCOMMAND_INVALIDATECACHE,
+ process->host_handle);
+ command->device = args->device;
+ command->allocation = args->allocation;
+ command->offset = args->offset;
+ command->length = args->length;
+ ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
+cleanup:
+ free_message(&msg, process);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryallocationresidency
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index b4a98f7c2522..20c562b485de 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -125,6 +125,7 @@ enum dxgkvmb_commandtype {
DXGK_VMBCOMMAND_QUERYRESOURCEINFO = 64,
DXGK_VMBCOMMAND_LOGEVENT = 65,
DXGK_VMBCOMMAND_SETEXISTINGSYSMEMPAGES = 66,
+ DXGK_VMBCOMMAND_INVALIDATECACHE = 67,
DXGK_VMBCOMMAND_INVALID
};
@@ -428,6 +429,16 @@ struct dxgkvmb_command_flushheaptransitions {
struct dxgkvmb_command_vgpu_to_host hdr;
};
+/* Returns ntstatus */
+struct dxgkvmb_command_invalidatecache {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ struct d3dkmthandle device;
+ struct d3dkmthandle allocation;
+ u64 offset;
+ u64 length;
+ u64 reserved;
+};
+
struct dxgkvmb_command_freegpuvirtualaddress {
struct dxgkvmb_command_vgpu_to_host hdr;
struct d3dkmt_freegpuvirtualaddress args;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index f735b18fcc14..56b838a87f09 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -4286,6 +4286,8 @@ dxgkio_query_clock_calibration(struct dxgprocess *process, void *__user inargs)
dxgadapter_release_lock_shared(adapter);
if (adapter)
kref_put(&adapter->adapter_kref, dxgadapter_release);
+
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -4333,6 +4335,49 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs)
dxgadapter_release_lock_shared(adapter);
if (adapter)
kref_put(&adapter->adapter_kref, dxgadapter_release);
+
+ DXG_TRACE_IOCTL_END(ret);
+ return ret;
+}
+
+static int
+dxgkio_invalidate_cache(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_invalidatecache args;
+ int ret;
+ struct dxgdevice *device = NULL;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EFAULT;
+ goto cleanup;
+ }
+
+ device = dxgprocess_device_by_handle(process, args.device);
+ if (device == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ ret = dxgdevice_acquire_lock_shared(device);
+ if (ret < 0) {
+ kref_put(&device->device_kref, dxgdevice_release);
+ device = NULL;
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_invalidate_cache(process, device->adapter,
+ &args);
+
+cleanup:
+
+ if (device) {
+ dxgdevice_release_lock_shared(device);
+ kref_put(&device->device_kref, dxgdevice_release);
+ }
+
+ DXG_TRACE_IOCTL_END(ret);
return ret;
}
@@ -5198,7 +5243,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x22 */ {dxgkio_get_context_scheduling_priority,
LX_DXGETCONTEXTSCHEDULINGPRIORITY},
/* 0x23 */ {},
-/* 0x24 */ {},
+/* 0x24 */ {dxgkio_invalidate_cache, LX_DXINVALIDATECACHE},
/* 0x25 */ {dxgkio_lock2, LX_DXLOCK2},
/* 0x26 */ {dxgkio_mark_device_as_error, LX_DXMARKDEVICEASERROR},
/* 0x27 */ {dxgkio_offer_allocations, LX_DXOFFERALLOCATIONS},
@@ -5243,7 +5288,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x44 */ {dxgkio_share_object_with_host, LX_DXSHAREOBJECTWITHHOST},
/* 0x45 */ {dxgkio_create_sync_file, LX_DXCREATESYNCFILE},
/* 0x46 */ {dxgkio_wait_sync_file, LX_DXWAITSYNCFILE},
-/* 0x46 */ {dxgkio_open_syncobj_from_syncfile,
+/* 0x47 */ {dxgkio_open_syncobj_from_syncfile,
LX_DXOPENSYNCOBJECTFROMSYNCFILE},
};
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 1eaa3f038322..84fa07a46d3c 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -1580,6 +1580,13 @@ struct d3dkmt_opensyncobjectfromsyncfile {
__u64 fence_value_gpu_va; /* out */
};
+struct d3dkmt_invalidatecache {
+ struct d3dkmthandle device;
+ struct d3dkmthandle allocation;
+ __u64 offset;
+ __u64 length;
+};
+
/*
* Dxgkrnl Graphics Port Driver ioctl definitions
*
@@ -1647,6 +1654,8 @@ struct d3dkmt_opensyncobjectfromsyncfile {
_IOWR(0x47, 0x21, struct d3dkmt_getcontextinprocessschedulingpriority)
#define LX_DXGETCONTEXTSCHEDULINGPRIORITY \
_IOWR(0x47, 0x22, struct d3dkmt_getcontextschedulingpriority)
+#define LX_DXINVALIDATECACHE \
+ _IOWR(0x47, 0x24, struct d3dkmt_invalidatecache)
#define LX_DXLOCK2 \
_IOWR(0x47, 0x25, struct d3dkmt_lock2)
#define LX_DXMARKDEVICEASERROR \
* [PATCH 41/55] drivers: hv: dxgkrnl: Handle process ID in D3DKMTQueryStatistics
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
When D3DKMTQueryStatistics specifies a non-zero process ID, it needs to be
translated to the host process handle before sending a message to the host.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 3 +-
drivers/hv/dxgkrnl/dxgprocess.c | 2 +
drivers/hv/dxgkrnl/dxgvmbus.c | 140 ++++++++++++++++----------------
drivers/hv/dxgkrnl/ioctl.c | 39 ++++++++-
4 files changed, 111 insertions(+), 73 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index e7d8919b3c01..6af1e59b0a31 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -386,6 +386,7 @@ struct dxgprocess {
struct list_head plistentry;
pid_t pid;
pid_t tgid;
+ pid_t vpid; /* pid from the current namespace */
/* how many time the process was opened */
struct kref process_kref;
/* protects the object memory */
@@ -981,7 +982,7 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device,
void *prive_alloc_data,
u32 *res_priv_data_size,
void *priv_res_data);
-int dxgvmb_send_query_statistics(struct dxgprocess *process,
+int dxgvmb_send_query_statistics(struct d3dkmthandle host_process_handle,
struct dxgadapter *adapter,
struct d3dkmt_querystatistics *args);
int dxgvmb_send_async_msg(struct dxgvmbuschannel *channel,
diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c
index fd51fd968049..5a4c4cb0c2e8 100644
--- a/drivers/hv/dxgkrnl/dxgprocess.c
+++ b/drivers/hv/dxgkrnl/dxgprocess.c
@@ -12,6 +12,7 @@
*/
#include "dxgkrnl.h"
+#include <linux/sched.h>
#undef dev_fmt
#define dev_fmt(fmt) "dxgk: " fmt
@@ -31,6 +32,7 @@ struct dxgprocess *dxgprocess_create(void)
DXG_TRACE("new dxgprocess created");
process->pid = current->pid;
process->tgid = current->tgid;
+ process->vpid = task_pid_vnr(current);
ret = dxgvmb_send_create_process(process);
if (ret < 0) {
DXG_TRACE("send_create_process failed");
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 487804ca731a..916ed9071656 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -22,6 +22,8 @@
#include "dxgkrnl.h"
#include "dxgvmbus.h"
+#pragma GCC diagnostic ignored "-Warray-bounds"
+
#undef dev_fmt
#define dev_fmt(fmt) "dxgk: " fmt
@@ -113,7 +115,6 @@ static int init_message(struct dxgvmbusmsg *msg, struct dxgadapter *adapter,
static int init_message_res(struct dxgvmbusmsgres *msg,
struct dxgadapter *adapter,
- struct dxgprocess *process,
u32 size,
u32 result_size)
{
@@ -146,7 +147,7 @@ static int init_message_res(struct dxgvmbusmsgres *msg,
return 0;
}
-static void free_message(struct dxgvmbusmsg *msg, struct dxgprocess *process)
+static void free_message(struct dxgvmbusmsg *msg)
{
if (msg->hdr && (char *)msg->hdr != msg->msg_on_stack)
vfree(msg->hdr);
@@ -646,7 +647,7 @@ int dxgvmb_send_set_iospace_region(u64 start, u64 len)
dxgglobal_release_channel_lock();
cleanup:
- free_message(&msg, NULL);
+ free_message(&msg);
if (ret)
DXG_TRACE("Error: %d", ret);
return ret;
@@ -699,7 +700,7 @@ int dxgvmb_send_create_process(struct dxgprocess *process)
dxgglobal_release_channel_lock();
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -727,7 +728,7 @@ int dxgvmb_send_destroy_process(struct d3dkmthandle process)
dxgglobal_release_channel_lock();
cleanup:
- free_message(&msg, NULL);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -790,7 +791,7 @@ int dxgvmb_send_open_sync_object_nt(struct dxgprocess *process,
}
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -839,7 +840,7 @@ int dxgvmb_send_open_sync_object(struct dxgprocess *process,
*syncobj = result.sync_object;
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -881,7 +882,7 @@ int dxgvmb_send_create_nt_shared_object(struct dxgprocess *process,
}
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -912,7 +913,7 @@ int dxgvmb_send_destroy_nt_shared_object(struct d3dkmthandle shared_handle)
dxgglobal_release_channel_lock();
cleanup:
- free_message(&msg, NULL);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -945,7 +946,7 @@ int dxgvmb_send_destroy_sync_object(struct dxgprocess *process,
dxgglobal_release_channel_lock();
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -989,7 +990,7 @@ int dxgvmb_send_share_object_with_host(struct dxgprocess *process,
args->object_vail_nt_handle = result.vail_nt_handle;
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_ERR("err: %d", ret);
return ret;
@@ -1026,7 +1027,7 @@ int dxgvmb_send_open_adapter(struct dxgadapter *adapter)
adapter->host_handle = result.host_adapter_handle;
cleanup:
- free_message(&msg, NULL);
+ free_message(&msg);
if (ret)
DXG_ERR("Failed to open adapter: %d", ret);
return ret;
@@ -1048,7 +1049,7 @@ int dxgvmb_send_close_adapter(struct dxgadapter *adapter)
ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
NULL, 0);
- free_message(&msg, NULL);
+ free_message(&msg);
if (ret)
DXG_ERR("Failed to close adapter: %d", ret);
return ret;
@@ -1084,7 +1085,7 @@ int dxgvmb_send_get_internal_adapter_info(struct dxgadapter *adapter)
sizeof(adapter->device_instance_id) / sizeof(u16));
dxgglobal->async_msg_enabled = result.async_msg_enabled != 0;
}
- free_message(&msg, NULL);
+ free_message(&msg);
if (ret)
DXG_ERR("Failed to get adapter info: %d", ret);
return ret;
@@ -1114,7 +1115,7 @@ struct d3dkmthandle dxgvmb_send_create_device(struct dxgadapter *adapter,
&result, sizeof(result));
if (ret < 0)
result.device.v = 0;
- free_message(&msg, process);
+ free_message(&msg);
cleanup:
if (ret)
DXG_TRACE("err: %d", ret);
@@ -1140,7 +1141,7 @@ int dxgvmb_send_destroy_device(struct dxgadapter *adapter,
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -1167,7 +1168,7 @@ int dxgvmb_send_flush_device(struct dxgdevice *device,
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -1239,7 +1240,7 @@ dxgvmb_send_create_context(struct dxgadapter *adapter,
}
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return context;
@@ -1265,7 +1266,7 @@ int dxgvmb_send_destroy_context(struct dxgadapter *adapter,
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -1312,7 +1313,7 @@ int dxgvmb_send_create_paging_queue(struct dxgprocess *process,
pqueue->handle = args->paging_queue;
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -1339,7 +1340,7 @@ int dxgvmb_send_destroy_paging_queue(struct dxgprocess *process,
ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size, NULL, 0);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -1550,7 +1551,7 @@ int create_existing_sysmem(struct dxgdevice *device,
cleanup:
if (kmem)
vunmap(kmem);
- free_message(&msg, device->process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -1783,7 +1784,7 @@ create_local_allocations(struct dxgprocess *process,
dxgdevice_release_alloc_list_lock(device);
}
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -1908,7 +1909,7 @@ int dxgvmb_send_create_allocation(struct dxgprocess *process,
if (result)
vfree(result);
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
@@ -1950,7 +1951,7 @@ int dxgvmb_send_destroy_allocation(struct dxgprocess *process,
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -1992,7 +1993,7 @@ int dxgvmb_send_query_clock_calibration(struct dxgprocess *process,
ret = ntstatus2int(result.status);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2015,7 +2016,7 @@ int dxgvmb_send_flush_heap_transitions(struct dxgprocess *process,
process->host_handle);
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2042,7 +2043,7 @@ int dxgvmb_send_invalidate_cache(struct dxgprocess *process,
command->length = args->length;
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2078,7 +2079,7 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
}
result_size += result_allocation_size;
- ret = init_message_res(&msg, adapter, process, cmd_size, result_size);
+ ret = init_message_res(&msg, adapter, cmd_size, result_size);
if (ret)
goto cleanup;
command = (void *)msg.msg;
@@ -2115,7 +2116,7 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
}
cleanup:
- free_message((struct dxgvmbusmsg *)&msg, process);
+ free_message((struct dxgvmbusmsg *)&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2179,7 +2180,7 @@ int dxgvmb_send_escape(struct dxgprocess *process,
}
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2243,7 +2244,7 @@ int dxgvmb_send_query_vidmem_info(struct dxgprocess *process,
}
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2288,7 +2289,7 @@ int dxgvmb_send_get_device_state(struct dxgprocess *process,
args->execution_state = result.args.execution_state;
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2312,8 +2313,7 @@ int dxgvmb_send_open_resource(struct dxgprocess *process,
sizeof(*result);
struct dxgvmbusmsgres msg = {.hdr = NULL};
- ret = init_message_res(&msg, adapter, process, sizeof(*command),
- result_size);
+ ret = init_message_res(&msg, adapter, sizeof(*command), result_size);
if (ret)
goto cleanup;
command = msg.msg;
@@ -2342,7 +2342,7 @@ int dxgvmb_send_open_resource(struct dxgprocess *process,
alloc_handles[i] = handles[i];
cleanup:
- free_message((struct dxgvmbusmsg *)&msg, process);
+ free_message((struct dxgvmbusmsg *)&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2367,7 +2367,7 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device,
result_size += *alloc_priv_driver_size;
if (priv_res_data)
result_size += *res_priv_data_size;
- ret = init_message_res(&msg, device->adapter, device->process,
+ ret = init_message_res(&msg, device->adapter,
sizeof(*command), result_size);
if (ret)
goto cleanup;
@@ -2427,7 +2427,7 @@ int dxgvmb_send_get_stdalloc_data(struct dxgdevice *device,
cleanup:
- free_message((struct dxgvmbusmsg *)&msg, device->process);
+ free_message((struct dxgvmbusmsg *)&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2479,7 +2479,7 @@ int dxgvmb_send_make_resident(struct dxgprocess *process,
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2525,7 +2525,7 @@ int dxgvmb_send_evict(struct dxgprocess *process,
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2580,7 +2580,7 @@ int dxgvmb_send_submit_command(struct dxgprocess *process,
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2617,7 +2617,7 @@ int dxgvmb_send_map_gpu_va(struct dxgprocess *process,
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2647,7 +2647,7 @@ int dxgvmb_send_reserve_gpu_va(struct dxgprocess *process,
args->virtual_address = result.virtual_address;
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2674,7 +2674,7 @@ int dxgvmb_send_free_gpu_va(struct dxgprocess *process,
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2730,7 +2730,7 @@ int dxgvmb_send_update_gpu_va(struct dxgprocess *process,
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2816,7 +2816,7 @@ dxgvmb_send_create_sync_object(struct dxgprocess *process,
}
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2910,7 +2910,7 @@ int dxgvmb_send_signal_sync_object(struct dxgprocess *process,
}
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -2970,7 +2970,7 @@ int dxgvmb_send_wait_sync_object_cpu(struct dxgprocess *process,
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3023,7 +3023,7 @@ int dxgvmb_send_wait_sync_object_gpu(struct dxgprocess *process,
}
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3103,7 +3103,7 @@ int dxgvmb_send_lock2(struct dxgprocess *process,
hmgrtable_unlock(&process->handle_table, DXGLOCK_EXCL);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3130,7 +3130,7 @@ int dxgvmb_send_unlock2(struct dxgprocess *process,
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3175,7 +3175,7 @@ int dxgvmb_send_update_alloc_property(struct dxgprocess *process,
}
}
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3200,7 +3200,7 @@ int dxgvmb_send_mark_device_as_error(struct dxgprocess *process,
command->args = *args;
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3270,7 +3270,7 @@ int dxgvmb_send_set_allocation_priority(struct dxgprocess *process,
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3312,7 +3312,7 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process,
}
result_size = sizeof(*result) + priority_size;
- ret = init_message_res(&msg, adapter, process, cmd_size, result_size);
+ ret = init_message_res(&msg, adapter, cmd_size, result_size);
if (ret)
goto cleanup;
command = (void *)msg.msg;
@@ -3352,7 +3352,7 @@ int dxgvmb_send_get_allocation_priority(struct dxgprocess *process,
}
cleanup:
- free_message((struct dxgvmbusmsg *)&msg, process);
+ free_message((struct dxgvmbusmsg *)&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3381,7 +3381,7 @@ int dxgvmb_send_set_context_sch_priority(struct dxgprocess *process,
command->in_process = in_process;
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3415,7 +3415,7 @@ int dxgvmb_send_get_context_sch_priority(struct dxgprocess *process,
*priority = result.priority;
}
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3461,7 +3461,7 @@ int dxgvmb_send_offer_allocations(struct dxgprocess *process,
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3486,7 +3486,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process,
result_size += (args->allocation_count - 1) *
sizeof(enum d3dddi_reclaim_result);
- ret = init_message_res(&msg, adapter, process, cmd_size, result_size);
+ ret = init_message_res(&msg, adapter, cmd_size, result_size);
if (ret)
goto cleanup;
command = (void *)msg.msg;
@@ -3537,7 +3537,7 @@ int dxgvmb_send_reclaim_allocations(struct dxgprocess *process,
}
cleanup:
- free_message((struct dxgvmbusmsg *)&msg, process);
+ free_message((struct dxgvmbusmsg *)&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3567,7 +3567,7 @@ int dxgvmb_send_change_vidmem_reservation(struct dxgprocess *process,
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3706,7 +3706,7 @@ int dxgvmb_send_create_hwqueue(struct dxgprocess *process,
dxgvmb_send_destroy_hwqueue(process, adapter,
command->hwqueue);
}
- free_message(&msg, process);
+ free_message(&msg);
return ret;
}
@@ -3731,7 +3731,7 @@ int dxgvmb_send_destroy_hwqueue(struct dxgprocess *process,
ret = dxgvmb_send_sync_msg_ntstatus(msg.channel, msg.hdr, msg.size);
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3815,7 +3815,7 @@ int dxgvmb_send_query_adapter_info(struct dxgprocess *process,
}
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
@@ -3873,13 +3873,13 @@ int dxgvmb_send_submit_command_hwqueue(struct dxgprocess *process,
}
cleanup:
- free_message(&msg, process);
+ free_message(&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
}
-int dxgvmb_send_query_statistics(struct dxgprocess *process,
+int dxgvmb_send_query_statistics(struct d3dkmthandle host_process_handle,
struct dxgadapter *adapter,
struct d3dkmt_querystatistics *args)
{
@@ -3888,7 +3888,7 @@ int dxgvmb_send_query_statistics(struct dxgprocess *process,
int ret;
struct dxgvmbusmsgres msg = {.hdr = NULL};
- ret = init_message_res(&msg, adapter, process, sizeof(*command),
+ ret = init_message_res(&msg, adapter, sizeof(*command),
sizeof(*result));
if (ret)
goto cleanup;
@@ -3897,7 +3897,7 @@ int dxgvmb_send_query_statistics(struct dxgprocess *process,
command_vgpu_to_host_init2(&command->hdr,
DXGK_VMBCOMMAND_QUERYSTATISTICS,
- process->host_handle);
+ host_process_handle);
command->args = *args;
ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
@@ -3909,7 +3909,7 @@ int dxgvmb_send_query_statistics(struct dxgprocess *process,
ret = ntstatus2int(result->status);
cleanup:
- free_message((struct dxgvmbusmsg *)&msg, process);
+ free_message((struct dxgvmbusmsg *)&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 56b838a87f09..466bef6c14b3 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -147,6 +147,23 @@ static int dxgkio_open_adapter_from_luid(struct dxgprocess *process,
return ret;
}
+static struct d3dkmthandle find_dxgprocess_handle(u64 pid)
+{
+ struct dxgglobal *dxgglobal = dxggbl();
+ struct dxgprocess *entry;
+ struct d3dkmthandle host_handle = {};
+
+ mutex_lock(&dxgglobal->plistmutex);
+ list_for_each_entry(entry, &dxgglobal->plisthead, plistentry) {
+ if (entry->vpid == pid) {
+ host_handle.v = entry->host_handle.v;
+ break;
+ }
+ }
+ mutex_unlock(&dxgglobal->plistmutex);
+ return host_handle;
+}
+
static int dxgkio_query_statistics(struct dxgprocess *process,
void __user *inargs)
{
@@ -156,6 +173,8 @@ static int dxgkio_query_statistics(struct dxgprocess *process,
struct dxgadapter *adapter = NULL;
struct winluid tmp;
struct dxgglobal *dxgglobal = dxggbl();
+ struct d3dkmthandle host_process_handle = process->host_handle;
+ u64 pid;
args = vzalloc(sizeof(struct d3dkmt_querystatistics));
if (args == NULL) {
@@ -170,6 +189,18 @@ static int dxgkio_query_statistics(struct dxgprocess *process,
goto cleanup;
}
+ /* Find the host process handle when needed */
+ pid = args->process;
+ if (pid) {
+ host_process_handle = find_dxgprocess_handle(pid);
+ if (host_process_handle.v == 0) {
+ DXG_ERR("Invalid process ID specified: %lld", pid);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+ args->process = 0;
+ }
+
dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED);
list_for_each_entry(entry, &dxgglobal->adapter_list_head,
adapter_list_entry) {
@@ -186,7 +217,8 @@ static int dxgkio_query_statistics(struct dxgprocess *process,
if (adapter) {
tmp = args->adapter_luid;
args->adapter_luid = adapter->host_adapter_luid;
- ret = dxgvmb_send_query_statistics(process, adapter, args);
+ ret = dxgvmb_send_query_statistics(host_process_handle, adapter,
+ args);
if (ret >= 0) {
args->adapter_luid = tmp;
ret = copy_to_user(inargs, args, sizeof(*args));
@@ -280,7 +312,10 @@ dxgkp_enum_adapters(struct dxgprocess *process,
dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED);
if (adapter_count > adapter_count_max) {
- ret = STATUS_BUFFER_TOO_SMALL;
+ struct ntstatus status;
+
+ status.v = STATUS_BUFFER_TOO_SMALL;
+ ret = ntstatus2int(status);
DXG_TRACE("Too many adapters");
ret = copy_to_user(adapter_count_out,
&dxgglobal->num_adapters, sizeof(u32));
* [PATCH 42/55] drivers: hv: dxgkrnl: Implement the D3DKMTEnumProcesses API
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
D3DKMTEnumProcesses is used to enumerate the PIDs of all processes
that have opened the /dev/dxg device.
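The ioctl copies one 32-bit PID per matching process into the caller's buffer
and fails with a buffer-too-small status once buffer_count entries are filled,
so userspace would typically retry with a larger buffer. The sizing logic can
be sketched in plain C (the names and the error constant below are
illustrative, not the uapi):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define SKETCH_BUFFER_TOO_SMALL (-1) /* stands in for STATUS_BUFFER_TOO_SMALL */

/* Copy up to 'capacity' PIDs from 'pids' into 'out'.
 * On success, store how many were written in *out_count and return 0;
 * if the output buffer cannot hold all of them, return an error. */
static int enum_pids(const uint32_t *pids, size_t npids,
		     uint32_t *out, size_t capacity, size_t *out_count)
{
	size_t nump = 0; /* current number of processes, as in the driver */

	for (size_t i = 0; i < npids; i++) {
		if (nump == capacity)
			return SKETCH_BUFFER_TOO_SMALL;
		out[nump++] = pids[i];
	}
	*out_count = nump;
	return 0;
}
```

The driver additionally skips processes whose pid namespace differs from the
caller's, so only PIDs meaningful in the caller's namespace are reported.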
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 1 +
drivers/hv/dxgkrnl/dxgprocess.c | 2 +
drivers/hv/dxgkrnl/ioctl.c | 81 +++++++++++++++++++++++++++++++++
include/uapi/misc/d3dkmthk.h | 12 +++++
4 files changed, 96 insertions(+)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 6af1e59b0a31..90bcd5377744 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -387,6 +387,7 @@ struct dxgprocess {
pid_t pid;
pid_t tgid;
pid_t vpid; /* pid from the current namespace */
+ struct pid_namespace *nspid; /* namespace id */
/* how many times the process was opened */
struct kref process_kref;
/* protects the object memory */
diff --git a/drivers/hv/dxgkrnl/dxgprocess.c b/drivers/hv/dxgkrnl/dxgprocess.c
index 5a4c4cb0c2e8..9bfd53df1a54 100644
--- a/drivers/hv/dxgkrnl/dxgprocess.c
+++ b/drivers/hv/dxgkrnl/dxgprocess.c
@@ -13,6 +13,7 @@
#include "dxgkrnl.h"
#include <linux/sched.h>
+#include <linux/pid_namespace.h>
#undef dev_fmt
#define dev_fmt(fmt) "dxgk: " fmt
@@ -33,6 +34,7 @@ struct dxgprocess *dxgprocess_create(void)
process->pid = current->pid;
process->tgid = current->tgid;
process->vpid = task_pid_vnr(current);
+ process->nspid = task_active_pid_ns(current);
ret = dxgvmb_send_create_process(process);
if (ret < 0) {
DXG_TRACE("send_create_process failed");
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 466bef6c14b3..24b84be2fb73 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -16,6 +16,7 @@
#include <linux/fs.h>
#include <linux/anon_inodes.h>
#include <linux/mman.h>
+#include <linux/pid_namespace.h>
#include "dxgkrnl.h"
#include "dxgvmbus.h"
@@ -5238,6 +5239,85 @@ dxgkio_share_object_with_host(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_enum_processes(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_enumprocesses args;
+ struct d3dkmt_enumprocesses *__user input = inargs;
+ struct dxgadapter *adapter = NULL;
+ struct dxgadapter *entry;
+ struct dxgglobal *dxgglobal = dxggbl();
+ struct dxgprocess_adapter *pentry;
+ int nump = 0; /* Current number of processes */
+ struct ntstatus status;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EFAULT;
+ goto cleanup;
+ }
+
+ if (args.buffer_count == 0) {
+ DXG_ERR("Invalid buffer count");
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED);
+ dxgglobal_acquire_process_adapter_lock();
+
+ list_for_each_entry(entry, &dxgglobal->adapter_list_head,
+ adapter_list_entry) {
+ if (*(u64 *) &entry->luid == *(u64 *) &args.adapter_luid) {
+ adapter = entry;
+ break;
+ }
+ }
+
+ if (adapter == NULL) {
+ DXG_ERR("Failed to find dxgadapter");
+ ret = -EINVAL;
+ goto cleanup_locks;
+ }
+
+ list_for_each_entry(pentry, &adapter->adapter_process_list_head,
+ adapter_process_list_entry) {
+ if (pentry->process->nspid != task_active_pid_ns(current))
+ continue;
+ if (nump == args.buffer_count) {
+ status.v = STATUS_BUFFER_TOO_SMALL;
+ ret = ntstatus2int(status);
+ goto cleanup_locks;
+ }
+ ret = copy_to_user(&args.buffer[nump], &pentry->process->vpid,
+ sizeof(u32));
+ if (ret) {
+ DXG_ERR("failed to copy data to user");
+ ret = -EFAULT;
+ goto cleanup_locks;
+ }
+ nump++;
+ }
+
+cleanup_locks:
+
+ dxgglobal_release_process_adapter_lock();
+ dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED);
+
+ if (ret == 0) {
+ ret = copy_to_user(&input->buffer_count, &nump, sizeof(u32));
+ if (ret)
+ DXG_ERR("failed to copy buffer count to user");
+ }
+
+cleanup:
+
+ DXG_TRACE_IOCTL_END(ret);
+ return ret;
+}
+
static struct ioctl_desc ioctls[] = {
/* 0x00 */ {},
/* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID},
@@ -5325,6 +5405,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x46 */ {dxgkio_wait_sync_file, LX_DXWAITSYNCFILE},
/* 0x47 */ {dxgkio_open_syncobj_from_syncfile,
LX_DXOPENSYNCOBJECTFROMSYNCFILE},
+/* 0x48 */ {dxgkio_enum_processes, LX_DXENUMPROCESSES},
};
/*
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 84fa07a46d3c..f9f817060fa9 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -1580,6 +1580,16 @@ struct d3dkmt_opensyncobjectfromsyncfile {
__u64 fence_value_gpu_va; /* out */
};
+ struct d3dkmt_enumprocesses {
+ struct winluid adapter_luid;
+#ifdef __KERNEL__
+ __u32 *buffer;
+#else
+ __u64 buffer;
+#endif
+ __u64 buffer_count;
+};
+
struct d3dkmt_invalidatecache {
struct d3dkmthandle device;
struct d3dkmthandle allocation;
@@ -1718,5 +1728,7 @@ struct d3dkmt_invalidatecache {
_IOWR(0x47, 0x46, struct d3dkmt_waitsyncfile)
#define LX_DXOPENSYNCOBJECTFROMSYNCFILE \
_IOWR(0x47, 0x47, struct d3dkmt_opensyncobjectfromsyncfile)
+#define LX_DXENUMPROCESSES \
+ _IOWR(0x47, 0x48, struct d3dkmt_enumprocesses)
#endif /* _D3DKMTHK_H */
* [PATCH 43/55] drivers: hv: dxgkrnl: Implement D3DKMTIsFeatureEnabled API
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
D3DKMTIsFeatureEnabled is used to query whether a particular feature is
supported by the given adapter.
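The query comes in two flavors, an adapter-scoped VMBus command and a global
one, selected by whether an adapter is supplied, and the reworked channel
selection in init_message_res routes accordingly. The decision reduces to the
following behavioral sketch (a simplification of the kernel logic, not the
actual code):

```c
#include <assert.h>
#include <stdbool.h>

enum channel_kind { GLOBAL_CHANNEL, ADAPTER_CHANNEL };

/* Mirrors the channel-selection change in init_message_res(): with no
 * adapter (the new global feature query), or when async messages are
 * enabled, the message goes to the global VMBus channel; otherwise it
 * goes to the per-adapter channel. */
static enum channel_kind pick_channel(bool have_adapter, bool async_msg_enabled)
{
	if (have_adapter && !async_msg_enabled)
		return ADAPTER_CHANNEL;
	return GLOBAL_CHANNEL;
}
```

This is why the NULL-adapter checks are added around the vgpu_luid assignment
as well: the extended header's vgpu_luid is only meaningful for the
adapter-scoped command.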
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 2 ++
drivers/hv/dxgkrnl/dxgvmbus.c | 58 ++++++++++++++++++++++++++++++++---
drivers/hv/dxgkrnl/dxgvmbus.h | 31 +++++++++++++++++++
drivers/hv/dxgkrnl/ioctl.c | 46 +++++++++++++++++++++++++++
include/uapi/misc/d3dkmthk.h | 31 ++++++++++++++++++-
5 files changed, 163 insertions(+), 5 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 90bcd5377744..ebf81cffa289 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -994,6 +994,8 @@ int dxgvmb_send_share_object_with_host(struct dxgprocess *process,
int dxgvmb_send_invalidate_cache(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_invalidatecache *args);
+int dxgvmb_send_is_feature_enabled(struct dxgadapter *adapter,
+ struct d3dkmt_isfeatureenabled *args);
void signal_host_cpu_event(struct dxghostevent *eventhdr);
int ntstatus2int(struct ntstatus status);
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 916ed9071656..2436e1a7bc73 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -135,15 +135,16 @@ static int init_message_res(struct dxgvmbusmsgres *msg,
if (use_ext_header) {
msg->msg = (char *)&msg->hdr[1];
msg->hdr->command_offset = sizeof(msg->hdr[0]);
- msg->hdr->vgpu_luid = adapter->host_vgpu_luid;
+ if (adapter)
+ msg->hdr->vgpu_luid = adapter->host_vgpu_luid;
} else {
msg->msg = (char *)msg->hdr;
}
msg->res = (char *)msg->hdr + msg->size;
- if (dxgglobal->async_msg_enabled)
- msg->channel = &dxgglobal->channel;
- else
+ if (adapter && !dxgglobal->async_msg_enabled)
msg->channel = &adapter->channel;
+ else
+ msg->channel = &dxgglobal->channel;
return 0;
}
@@ -2049,6 +2050,55 @@ int dxgvmb_send_invalidate_cache(struct dxgprocess *process,
return ret;
}
+int dxgvmb_send_is_feature_enabled(struct dxgadapter *adapter,
+ struct d3dkmt_isfeatureenabled *args)
+{
+ int ret;
+ struct dxgkvmb_command_isfeatureenabled_return *result;
+ struct dxgvmbusmsgres msg = {.hdr = NULL};
+ int res_size = sizeof(*result);
+
+ if (adapter) {
+ struct dxgkvmb_command_isfeatureenabled *command;
+
+ ret = init_message_res(&msg, adapter, sizeof(*command),
+ res_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+ command->feature_id = args->feature_id;
+ result = msg.res;
+ command_vgpu_to_host_init1(&command->hdr,
+ DXGK_VMBCOMMAND_ISFEATUREENABLED);
+ } else {
+ struct dxgkvmb_command_isfeatureenabled_gbl *command;
+
+ ret = init_message_res(&msg, adapter, sizeof(*command),
+ res_size);
+ if (ret)
+ goto cleanup;
+ command = (void *)msg.msg;
+ command->feature_id = args->feature_id;
+ result = msg.res;
+ command_vm_to_host_init1(&command->hdr,
+ DXGK_VMBCOMMAND_ISFEATUREENABLED_GLOBAL);
+ }
+ ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
+ result, res_size);
+ if (ret == 0) {
+ ret = ntstatus2int(result->status);
+ if (ret == 0)
+ args->result = result->result;
+ goto cleanup;
+ }
+
+cleanup:
+ free_message((struct dxgvmbusmsg *)&msg);
+ if (ret)
+ DXG_TRACE("err: %d", ret);
+ return ret;
+}
+
int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryallocationresidency
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index 20c562b485de..a7e625b2f896 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -48,6 +48,7 @@ enum dxgkvmb_commandtype_global {
DXGK_VMBCOMMAND_SETIOSPACEREGION = 1010,
DXGK_VMBCOMMAND_COMPLETETRANSACTION = 1011,
DXGK_VMBCOMMAND_SHAREOBJECTWITHHOST = 1021,
+ DXGK_VMBCOMMAND_ISFEATUREENABLED_GLOBAL = 1022,
DXGK_VMBCOMMAND_INVALID_VM_TO_HOST
};
@@ -126,6 +127,7 @@ enum dxgkvmb_commandtype {
DXGK_VMBCOMMAND_LOGEVENT = 65,
DXGK_VMBCOMMAND_SETEXISTINGSYSMEMPAGES = 66,
DXGK_VMBCOMMAND_INVALIDATECACHE = 67,
+ DXGK_VMBCOMMAND_ISFEATUREENABLED = 68,
DXGK_VMBCOMMAND_INVALID
};
@@ -871,6 +873,35 @@ struct dxgkvmb_command_shareobjectwithhost_return {
u64 vail_nt_handle;
};
+struct dxgk_feature_desc {
+ u16 min_supported_version;
+ u16 max_supported_version;
+ struct {
+ u16 supported : 1;
+ u16 virtualization_mode : 3;
+ u16 global : 1;
+ u16 driver_feature : 1;
+ u16 internal : 1;
+ u16 reserved : 9;
+ };
+};
+
+struct dxgkvmb_command_isfeatureenabled {
+ struct dxgkvmb_command_vgpu_to_host hdr;
+ enum dxgk_feature_id feature_id;
+};
+
+struct dxgkvmb_command_isfeatureenabled_gbl {
+ struct dxgkvmb_command_vm_to_host hdr;
+ enum dxgk_feature_id feature_id;
+};
+
+struct dxgkvmb_command_isfeatureenabled_return {
+ struct ntstatus status;
+ struct dxgk_feature_desc descriptor;
+ struct dxgk_isfeatureenabled_result result;
+};
+
int
dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel,
void *command, u32 command_size, void *result,
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 24b84be2fb73..5ff4b27af19d 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -5318,6 +5318,51 @@ dxgkio_enum_processes(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+dxgkio_is_feature_enabled(struct dxgprocess *process, void *__user inargs)
+{
+ struct d3dkmt_isfeatureenabled args;
+ struct dxgadapter *adapter = NULL;
+ struct dxgglobal *dxgglobal = dxggbl();
+ struct d3dkmt_isfeatureenabled *__user uargs = inargs;
+ int ret;
+
+ ret = copy_from_user(&args, inargs, sizeof(args));
+ if (ret) {
+ DXG_ERR("failed to copy input args");
+ ret = -EFAULT;
+ goto cleanup;
+ }
+
+ adapter = dxgprocess_adapter_by_handle(process, args.adapter);
+ if (adapter == NULL) {
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
+ if (adapter) {
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0)
+ goto cleanup;
+ }
+
+ ret = dxgvmb_send_is_feature_enabled(adapter, &args);
+ if (ret)
+ goto cleanup;
+
+ ret = copy_to_user(&uargs->result, &args.result, sizeof(args.result));
+
+cleanup:
+
+ if (adapter) {
+ dxgadapter_release_lock_shared(adapter);
+ kref_put(&adapter->adapter_kref, dxgadapter_release);
+ }
+
+ DXG_TRACE_IOCTL_END(ret);
+ return ret;
+}
+
static struct ioctl_desc ioctls[] = {
/* 0x00 */ {},
/* 0x01 */ {dxgkio_open_adapter_from_luid, LX_DXOPENADAPTERFROMLUID},
@@ -5406,6 +5451,7 @@ static struct ioctl_desc ioctls[] = {
/* 0x47 */ {dxgkio_open_syncobj_from_syncfile,
LX_DXOPENSYNCOBJECTFROMSYNCFILE},
/* 0x48 */ {dxgkio_enum_processes, LX_DXENUMPROCESSES},
+/* 0x49 */ {dxgkio_is_feature_enabled, LX_ISFEATUREENABLED},
};
/*
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index f9f817060fa9..5b345ddaf66e 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -1580,7 +1580,7 @@ struct d3dkmt_opensyncobjectfromsyncfile {
__u64 fence_value_gpu_va; /* out */
};
- struct d3dkmt_enumprocesses {
+struct d3dkmt_enumprocesses {
struct winluid adapter_luid;
#ifdef __KERNEL__
__u32 *buffer;
@@ -1590,6 +1590,33 @@ struct d3dkmt_opensyncobjectfromsyncfile {
__u64 buffer_count;
};
+enum dxgk_feature_id {
+ _DXGK_FEATURE_HWSCH = 0,
+ _DXGK_FEATURE_PAGE_BASED_MEMORY_MANAGER = 32,
+ _DXGK_FEATURE_KERNEL_MODE_TESTING = 33,
+ _DXGK_FEATURE_MAX
+};
+
+struct dxgk_isfeatureenabled_result {
+ __u16 version;
+ union {
+ struct {
+ __u16 enabled : 1;
+ __u16 known_feature : 1;
+ __u16 supported_by_driver : 1;
+ __u16 supported_on_config : 1;
+ __u16 reserved : 12;
+ };
+ __u16 value;
+ };
+};
+
+struct d3dkmt_isfeatureenabled {
+ struct d3dkmthandle adapter;
+ enum dxgk_feature_id feature_id;
+ struct dxgk_isfeatureenabled_result result;
+};
+
struct d3dkmt_invalidatecache {
struct d3dkmthandle device;
struct d3dkmthandle allocation;
@@ -1730,5 +1757,7 @@ struct d3dkmt_invalidatecache {
_IOWR(0x47, 0x47, struct d3dkmt_opensyncobjectfromsyncfile)
#define LX_DXENUMPROCESSES \
_IOWR(0x47, 0x48, struct d3dkmt_enumprocesses)
+#define LX_ISFEATUREENABLED \
+ _IOWR(0x47, 0x49, struct d3dkmt_isfeatureenabled)
#endif /* _D3DKMTHK_H */
^ permalink raw reply related [flat|nested] 56+ messages in thread

* [PATCH 44/55] drivers: hv: dxgkrnl: Implement known escapes
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (42 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 43/55] drivers: hv: dxgkrnl: Implement D3DDKMTIsFeatureEnabled API Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:24 ` [PATCH 45/55] drivers: hv: dxgkrnl: Fixed coding style issues Eric Curtin
` (10 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Implement an escape to build a test command buffer.
Implement other known escapes.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgkrnl.h | 3 +-
drivers/hv/dxgkrnl/dxgvmbus.c | 40 +++++---
drivers/hv/dxgkrnl/ioctl.c | 170 +++++++++++++++++++++++++++++-----
include/uapi/misc/d3dkmthk.h | 31 +++++++
4 files changed, 205 insertions(+), 39 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index ebf81cffa289..9599ec8e0f1d 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -953,7 +953,8 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
*args);
int dxgvmb_send_escape(struct dxgprocess *process,
struct dxgadapter *adapter,
- struct d3dkmt_escape *args);
+ struct d3dkmt_escape *args,
+ bool user_mode);
int dxgvmb_send_query_vidmem_info(struct dxgprocess *process,
struct dxgadapter *adapter,
struct d3dkmt_queryvideomemoryinfo *args,
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 2436e1a7bc73..de28c6162a70 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -2174,7 +2174,8 @@ int dxgvmb_send_query_alloc_residency(struct dxgprocess *process,
int dxgvmb_send_escape(struct dxgprocess *process,
struct dxgadapter *adapter,
- struct d3dkmt_escape *args)
+ struct d3dkmt_escape *args,
+ bool user_mode)
{
int ret;
struct dxgkvmb_command_escape *command = NULL;
@@ -2203,13 +2204,18 @@ int dxgvmb_send_escape(struct dxgprocess *process,
command->priv_drv_data_size = args->priv_drv_data_size;
command->context = args->context;
if (args->priv_drv_data_size) {
- ret = copy_from_user(command->priv_drv_data,
- args->priv_drv_data,
- args->priv_drv_data_size);
- if (ret) {
- DXG_ERR("failed to copy priv data");
- ret = -EFAULT;
- goto cleanup;
+ if (user_mode) {
+ ret = copy_from_user(command->priv_drv_data,
+ args->priv_drv_data,
+ args->priv_drv_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy priv data");
+ ret = -EFAULT;
+ goto cleanup;
+ }
+ } else {
+ memcpy(command->priv_drv_data, args->priv_drv_data,
+ args->priv_drv_data_size);
}
}
@@ -2220,12 +2226,18 @@ int dxgvmb_send_escape(struct dxgprocess *process,
goto cleanup;
if (args->priv_drv_data_size) {
- ret = copy_to_user(args->priv_drv_data,
- command->priv_drv_data,
- args->priv_drv_data_size);
- if (ret) {
- DXG_ERR("failed to copy priv data");
- ret = -EINVAL;
+ if (user_mode) {
+ ret = copy_to_user(args->priv_drv_data,
+ command->priv_drv_data,
+ args->priv_drv_data_size);
+ if (ret) {
+ DXG_ERR("failed to copy priv data");
+ ret = -EINVAL;
+ }
+ } else {
+ memcpy(args->priv_drv_data,
+ command->priv_drv_data,
+ args->priv_drv_data_size);
}
}
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 5ff4b27af19d..f8ca79d098f3 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -4257,10 +4257,8 @@ dxgkio_change_vidmem_reservation(struct dxgprocess *process, void *__user inargs
}
ret = dxgadapter_acquire_lock_shared(adapter);
- if (ret < 0) {
- adapter = NULL;
+ if (ret < 0)
goto cleanup;
- }
adapter_locked = true;
args.adapter.v = 0;
ret = dxgvmb_send_change_vidmem_reservation(process, adapter,
@@ -4299,10 +4297,8 @@ dxgkio_query_clock_calibration(struct dxgprocess *process, void *__user inargs)
}
ret = dxgadapter_acquire_lock_shared(adapter);
- if (ret < 0) {
- adapter = NULL;
+ if (ret < 0)
goto cleanup;
- }
adapter_locked = true;
args.adapter = adapter->host_handle;
@@ -4349,10 +4345,8 @@ dxgkio_flush_heap_transitions(struct dxgprocess *process, void *__user inargs)
}
ret = dxgadapter_acquire_lock_shared(adapter);
- if (ret < 0) {
- adapter = NULL;
+ if (ret < 0)
goto cleanup;
- }
adapter_locked = true;
args.adapter = adapter->host_handle;
@@ -4417,6 +4411,134 @@ dxgkio_invalidate_cache(struct dxgprocess *process, void *__user inargs)
return ret;
}
+static int
+build_test_command_buffer(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_escape *args)
+{
+ int ret;
+ struct d3dddi_buildtestcommandbuffer cmd;
+ struct d3dkmt_escape newargs = *args;
+ u32 buf_size;
+ struct d3dddi_buildtestcommandbuffer *buf = NULL;
+ struct d3dddi_buildtestcommandbuffer *__user ucmd;
+
+ ucmd = args->priv_drv_data;
+ if (args->priv_drv_data_size <
+ sizeof(struct d3dddi_buildtestcommandbuffer)) {
+ DXG_ERR("Invalid private data size");
+ return -EINVAL;
+ }
+ ret = copy_from_user(&cmd, ucmd, sizeof(cmd));
+ if (ret) {
+ DXG_ERR("Failed to copy private data");
+ return -EFAULT;
+ }
+
+ if (cmd.dma_buffer_size < sizeof(u32) ||
+ cmd.dma_buffer_size > D3DDDI_MAXTESTBUFFERSIZE ||
+ cmd.dma_buffer_priv_data_size >
+ D3DDDI_MAXTESTBUFFERPRIVATEDRIVERDATASIZE) {
+ DXG_ERR("Invalid DMA buffer or private data size");
+ return -EINVAL;
+ }
+ /* Allocate a new buffer for the escape call */
+ buf_size = sizeof(struct d3dddi_buildtestcommandbuffer) +
+ cmd.dma_buffer_size +
+ cmd.dma_buffer_priv_data_size;
+ buf = vzalloc(buf_size);
+ if (buf == NULL) {
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+ *buf = cmd;
+ buf->dma_buffer = NULL;
+ buf->dma_buffer_priv_data = NULL;
+
+ /* Replace private data in the escape arguments and call the host */
+ newargs.priv_drv_data = buf;
+ newargs.priv_drv_data_size = buf_size;
+ ret = dxgvmb_send_escape(process, adapter, &newargs, false);
+ if (ret) {
+ DXG_ERR("Host failed escape");
+ goto cleanup;
+ }
+
+ ret = copy_to_user(&ucmd->dma_buffer_size, &buf->dma_buffer_size,
+ sizeof(u32));
+ if (ret) {
+ DXG_ERR("Failed to copy dma size to user");
+ ret = -EFAULT;
+ goto cleanup;
+ }
+ ret = copy_to_user(&ucmd->dma_buffer_priv_data_size,
+ &buf->dma_buffer_priv_data_size,
+ sizeof(u32));
+ if (ret) {
+ DXG_ERR("Failed to copy dma private data size to user");
+ ret = -EFAULT;
+ goto cleanup;
+ }
+ ret = copy_to_user(cmd.dma_buffer, (char *)buf + sizeof(*buf),
+ buf->dma_buffer_size);
+ if (ret) {
+ DXG_ERR("Failed to copy dma buffer to user");
+ ret = -EFAULT;
+ goto cleanup;
+ }
+ if (buf->dma_buffer_priv_data_size) {
+ ret = copy_to_user(cmd.dma_buffer_priv_data,
+ (char *)buf + sizeof(*buf) + cmd.dma_buffer_size,
+ buf->dma_buffer_priv_data_size);
+ if (ret) {
+ DXG_ERR("Failed to copy private data to user");
+ ret = -EFAULT;
+ goto cleanup;
+ }
+ }
+
+cleanup:
+ if (buf)
+ vfree(buf);
+ return ret;
+}
+
+static int
+driver_known_escape(struct dxgprocess *process,
+ struct dxgadapter *adapter,
+ struct d3dkmt_escape *args)
+{
+ enum d3dkmt_escapetype escape_type;
+ int ret = 0;
+
+ if (args->priv_drv_data_size < sizeof(enum d3dddi_knownescapetype))
+ {
+ DXG_ERR("Invalid private data size");
+ return -EINVAL;
+ }
+ ret = copy_from_user(&escape_type, args->priv_drv_data,
+ sizeof(escape_type));
+ if (ret) {
+ DXG_ERR("Failed to read escape type");
+ return -EFAULT;
+ }
+ switch (escape_type) {
+ case _D3DDDI_DRIVERESCAPETYPE_TRANSLATEALLOCATIONHANDLE:
+ case _D3DDDI_DRIVERESCAPETYPE_TRANSLATERESOURCEHANDLE:
+ /*
+ * The host and VM handles are the same
+ */
+ break;
+ case _D3DDDI_DRIVERESCAPETYPE_BUILDTESTCOMMANDBUFFER:
+ ret = build_test_command_buffer(process, adapter, args);
+ break;
+ default:
+ ret = dxgvmb_send_escape(process, adapter, args, true);
+ break;
+ }
+ return ret;
+}
+
static int
dxgkio_escape(struct dxgprocess *process, void *__user inargs)
{
@@ -4438,14 +4560,17 @@ dxgkio_escape(struct dxgprocess *process, void *__user inargs)
}
ret = dxgadapter_acquire_lock_shared(adapter);
- if (ret < 0) {
- adapter = NULL;
+ if (ret < 0)
goto cleanup;
- }
adapter_locked = true;
args.adapter = adapter->host_handle;
- ret = dxgvmb_send_escape(process, adapter, &args);
+
+ if (args.type == _D3DKMT_ESCAPE_DRIVERPRIVATE &&
+ args.flags.driver_known_escape)
+ ret = driver_known_escape(process, adapter, &args);
+ else
+ ret = dxgvmb_send_escape(process, adapter, &args, true);
cleanup:
@@ -4485,10 +4610,8 @@ dxgkio_query_vidmem_info(struct dxgprocess *process, void *__user inargs)
}
ret = dxgadapter_acquire_lock_shared(adapter);
- if (ret < 0) {
- adapter = NULL;
+ if (ret < 0)
goto cleanup;
- }
adapter_locked = true;
args.adapter = adapter->host_handle;
@@ -5323,9 +5446,9 @@ dxgkio_is_feature_enabled(struct dxgprocess *process, void *__user inargs)
{
struct d3dkmt_isfeatureenabled args;
struct dxgadapter *adapter = NULL;
- struct dxgglobal *dxgglobal = dxggbl();
struct d3dkmt_isfeatureenabled *__user uargs = inargs;
int ret;
+ bool adapter_locked = false;
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
@@ -5340,11 +5463,10 @@ dxgkio_is_feature_enabled(struct dxgprocess *process, void *__user inargs)
goto cleanup;
}
- if (adapter) {
- ret = dxgadapter_acquire_lock_shared(adapter);
- if (ret < 0)
- goto cleanup;
- }
+ ret = dxgadapter_acquire_lock_shared(adapter);
+ if (ret < 0)
+ goto cleanup;
+ adapter_locked = true;
ret = dxgvmb_send_is_feature_enabled(adapter, &args);
if (ret)
@@ -5354,10 +5476,10 @@ dxgkio_is_feature_enabled(struct dxgprocess *process, void *__user inargs)
cleanup:
- if (adapter) {
+ if (adapter_locked)
dxgadapter_release_lock_shared(adapter);
+ if (adapter)
kref_put(&adapter->adapter_kref, dxgadapter_release);
- }
DXG_TRACE_IOCTL_END(ret);
return ret;
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index 5b345ddaf66e..db40e8ff40b0 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -237,6 +237,37 @@ struct d3dddi_destroypagingqueue {
struct d3dkmthandle paging_queue;
};
+enum d3dddi_knownescapetype {
+ _D3DDDI_DRIVERESCAPETYPE_TRANSLATEALLOCATIONHANDLE = 0,
+ _D3DDDI_DRIVERESCAPETYPE_TRANSLATERESOURCEHANDLE = 1,
+ _D3DDDI_DRIVERESCAPETYPE_CPUEVENTUSAGE = 2,
+ _D3DDDI_DRIVERESCAPETYPE_BUILDTESTCOMMANDBUFFER = 3,
+};
+
+struct d3dddi_translate_allocation_handle {
+ enum d3dddi_knownescapetype escape_type;
+ struct d3dkmthandle allocation;
+};
+
+struct d3dddi_testcommand {
+ char buffer[72];
+};
+
+#define D3DDDI_MAXTESTBUFFERSIZE 4096
+#define D3DDDI_MAXTESTBUFFERPRIVATEDRIVERDATASIZE 1024
+
+struct d3dddi_buildtestcommandbuffer {
+ enum d3dddi_knownescapetype escape_type;
+ struct d3dkmthandle device;
+ struct d3dkmthandle context;
+ __u32 flags;
+ struct d3dddi_testcommand command;
+ void *dma_buffer;
+ void *dma_buffer_priv_data;
+ __u32 dma_buffer_size;
+ __u32 dma_buffer_priv_data_size;
+};
+
enum d3dkmt_escapetype {
_D3DKMT_ESCAPE_DRIVERPRIVATE = 0,
_D3DKMT_ESCAPE_VIDMM = 1,
* [PATCH 45/55] drivers: hv: dxgkrnl: Fixed coding style issues
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (43 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 44/55] drivers: hv: dxgkrnl: Implement known escapes Eric Curtin
@ 2026-03-19 20:24 ` Eric Curtin
2026-03-19 20:25 ` [PATCH 46/55] drivers: hv: dxgkrnl: Fixed the implementation of D3DKMTQueryClockCalibration Eric Curtin
` (9 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:24 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 12 ++++--------
drivers/hv/dxgkrnl/dxgkrnl.h | 6 +++---
drivers/hv/dxgkrnl/dxgvmbus.c | 2 +-
drivers/hv/dxgkrnl/ioctl.c | 20 +++++++-------------
4 files changed, 15 insertions(+), 25 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index bcd19b7267d1..b8ae8099847b 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -1017,8 +1017,7 @@ struct dxgprocess_adapter *dxgprocess_adapter_create(struct dxgprocess *process,
}
return adapter_info;
cleanup:
- if (adapter_info)
- kfree(adapter_info);
+ kfree(adapter_info);
return NULL;
}
@@ -1225,10 +1224,8 @@ struct dxgsyncobject *dxgsyncobject_create(struct dxgprocess *process,
DXG_TRACE("Syncobj created: %p", syncobj);
return syncobj;
cleanup:
- if (syncobj->host_event)
- kfree(syncobj->host_event);
- if (syncobj)
- kfree(syncobj);
+ kfree(syncobj->host_event);
+ kfree(syncobj);
return NULL;
}
@@ -1308,8 +1305,7 @@ void dxgsyncobject_release(struct kref *refcount)
kref_put(&syncobj->shared_owner->ssyncobj_kref,
dxgsharedsyncobj_release);
}
- if (syncobj->host_event)
- kfree(syncobj->host_event);
+ kfree(syncobj->host_event);
kfree(syncobj);
}
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index 9599ec8e0f1d..a4d0c504668b 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -47,10 +47,10 @@ struct dxghwqueue;
* Driver private data.
* A single /dev/dxg device is created per virtual machine.
*/
-struct dxgdriver{
+struct dxgdriver {
struct dxgglobal *dxgglobal;
- struct device *dxgdev;
- struct pci_driver pci_drv;
+ struct device *dxgdev;
+ struct pci_driver pci_drv;
struct hv_driver vmbus_drv;
};
extern struct dxgdriver dxgdrv;
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index de28c6162a70..215e2f6648e2 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -246,7 +246,7 @@ int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev)
goto cleanup;
}
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(5,15,0)
+#if KERNEL_VERSION(5, 15, 0) <= LINUX_VERSION_CODE
hdev->channel->max_pkt_size = DXG_MAX_VM_BUS_PACKET_SIZE;
#endif
ret = vmbus_open(hdev->channel, RING_BUFSIZE, RING_BUFSIZE,
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index f8ca79d098f3..5ac6dd1f09b9 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -3162,8 +3162,7 @@ dxgkio_signal_sync_object(struct dxgprocess *process, void *__user inargs)
}
if (event)
eventfd_ctx_put(event);
- if (host_event)
- kfree(host_event);
+ kfree(host_event);
}
if (adapter)
dxgadapter_release_lock_shared(adapter);
@@ -3398,8 +3397,7 @@ dxgkio_signal_sync_object_gpu2(struct dxgprocess *process, void *__user inargs)
}
if (event)
eventfd_ctx_put(event);
- if (host_event)
- kfree(host_event);
+ kfree(host_event);
}
if (adapter)
dxgadapter_release_lock_shared(adapter);
@@ -3577,8 +3575,7 @@ dxgkio_wait_sync_object_cpu(struct dxgprocess *process, void *__user inargs)
}
if (event)
eventfd_ctx_put(event);
- if (async_host_event)
- kfree(async_host_event);
+ kfree(async_host_event);
}
DXG_TRACE_IOCTL_END(ret);
@@ -4438,7 +4435,7 @@ build_test_command_buffer(struct dxgprocess *process,
if (cmd.dma_buffer_size < sizeof(u32) ||
cmd.dma_buffer_size > D3DDDI_MAXTESTBUFFERSIZE ||
cmd.dma_buffer_priv_data_size >
- D3DDDI_MAXTESTBUFFERPRIVATEDRIVERDATASIZE) {
+ D3DDDI_MAXTESTBUFFERPRIVATEDRIVERDATASIZE) {
DXG_ERR("Invalid DMA buffer or private data size");
return -EINVAL;
}
@@ -4511,8 +4508,7 @@ driver_known_escape(struct dxgprocess *process,
enum d3dkmt_escapetype escape_type;
int ret = 0;
- if (args->priv_drv_data_size < sizeof(enum d3dddi_knownescapetype))
- {
+ if (args->priv_drv_data_size < sizeof(enum d3dddi_knownescapetype)) {
DXG_ERR("Invalid private data size");
return -EINVAL;
}
@@ -5631,10 +5627,8 @@ void dxgk_validate_ioctls(void)
{
int i;
- for (i=0; i < ARRAY_SIZE(ioctls); i++)
- {
- if (ioctls[i].ioctl && _IOC_NR(ioctls[i].ioctl) != i)
- {
+ for (i = 0; i < ARRAY_SIZE(ioctls); i++) {
+ if (ioctls[i].ioctl && _IOC_NR(ioctls[i].ioctl) != i) {
DXG_ERR("Invalid ioctl");
DXGKRNL_ASSERT(0);
}
* [PATCH 46/55] drivers: hv: dxgkrnl: Fixed the implementation of D3DKMTQueryClockCalibration
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (44 preceding siblings ...)
2026-03-19 20:24 ` [PATCH 45/55] drivers: hv: dxgkrnl: Fixed coding style issues Eric Curtin
@ 2026-03-19 20:25 ` Eric Curtin
2026-03-19 20:25 ` [PATCH 47/55] drivers: hv: dxgkrnl: Retry sending a VM bus packet when there is no place in the ring buffer Eric Curtin
` (8 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:25 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
The result of a VM bus call was not copied to the user output structure.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgvmbus.c | 18 ++++++++++--------
drivers/hv/dxgkrnl/ioctl.c | 5 -----
2 files changed, 10 insertions(+), 13 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 215e2f6648e2..67f55f4bf41d 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1966,14 +1966,16 @@ int dxgvmb_send_query_clock_calibration(struct dxgprocess *process,
*__user inargs)
{
struct dxgkvmb_command_queryclockcalibration *command;
- struct dxgkvmb_command_queryclockcalibration_return result;
+ struct dxgkvmb_command_queryclockcalibration_return *result;
int ret;
- struct dxgvmbusmsg msg = {.hdr = NULL};
+ struct dxgvmbusmsgres msg = {.hdr = NULL};
- ret = init_message(&msg, adapter, process, sizeof(*command));
+ ret = init_message_res(&msg, adapter, sizeof(*command),
+ sizeof(*result));
if (ret)
goto cleanup;
command = (void *)msg.msg;
+ result = msg.res;
command_vgpu_to_host_init2(&command->hdr,
DXGK_VMBCOMMAND_QUERYCLOCKCALIBRATION,
@@ -1981,20 +1983,20 @@ int dxgvmb_send_query_clock_calibration(struct dxgprocess *process,
command->args = *args;
ret = dxgvmb_send_sync_msg(msg.channel, msg.hdr, msg.size,
- &result, sizeof(result));
+ result, sizeof(*result));
if (ret < 0)
goto cleanup;
- ret = copy_to_user(&inargs->clock_data, &result.clock_data,
- sizeof(result.clock_data));
+ ret = copy_to_user(&inargs->clock_data, &result->clock_data,
+ sizeof(result->clock_data));
if (ret) {
DXG_ERR("failed to copy clock data");
ret = -EFAULT;
goto cleanup;
}
- ret = ntstatus2int(result.status);
+ ret = ntstatus2int(result->status);
cleanup:
- free_message(&msg);
+ free_message((struct dxgvmbusmsg *)&msg);
if (ret)
DXG_TRACE("err: %d", ret);
return ret;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index 5ac6dd1f09b9..d91af2e176e9 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -4303,11 +4303,6 @@ dxgkio_query_clock_calibration(struct dxgprocess *process, void *__user inargs)
&args, inargs);
if (ret < 0)
goto cleanup;
- ret = copy_to_user(inargs, &args, sizeof(args));
- if (ret) {
- DXG_ERR("failed to copy output args");
- ret = -EFAULT;
- }
cleanup:
* [PATCH 47/55] drivers: hv: dxgkrnl: Retry sending a VM bus packet when there is no place in the ring buffer
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (45 preceding siblings ...)
2026-03-19 20:25 ` [PATCH 46/55] drivers: hv: dxgkrnl: Fixed the implementation of D3DKMTQueryClockCalibration Eric Curtin
@ 2026-03-19 20:25 ` Eric Curtin
2026-03-19 20:25 ` [PATCH 48/55] drivers: hv: dxgkrnl: Add support for locking a shared allocation by not the owner Eric Curtin
` (7 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:25 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
When D3DKMT requests are sent too quickly, the VM bus ring buffer can be
full when a message is submitted. This change adds a sleep and a retry
count to handle this condition.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgvmbus.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 67f55f4bf41d..467e7707c8c7 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -420,6 +420,7 @@ int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel,
struct dxgvmbuspacket *packet = NULL;
struct dxgkvmb_command_vm_to_host *cmd1;
struct dxgkvmb_command_vgpu_to_host *cmd2;
+ int try_count = 0;
if (cmd_size > DXG_MAX_VM_BUS_PACKET_SIZE ||
result_size > DXG_MAX_VM_BUS_PACKET_SIZE) {
@@ -453,9 +454,19 @@ int dxgvmb_send_sync_msg(struct dxgvmbuschannel *channel,
list_add_tail(&packet->packet_list_entry, &channel->packet_list_head);
spin_unlock_irq(&channel->packet_list_mutex);
- ret = vmbus_sendpacket(channel->channel, command, cmd_size,
- packet->request_id, VM_PKT_DATA_INBAND,
- VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
+ do {
+ ret = vmbus_sendpacket(channel->channel, command, cmd_size,
+ packet->request_id, VM_PKT_DATA_INBAND,
+ VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
+ /*
+ * -EAGAIN is returned when the VM bus ring buffer is full.
+ * Wait 1-2ms to allow the host to process messages and try again.
+ */
+ if (ret == -EAGAIN) {
+ usleep_range(1000, 2000);
+ try_count++;
+ }
+ } while (ret == -EAGAIN && try_count < 50);
if (ret) {
DXG_ERR("vmbus_sendpacket failed: %x", ret);
spin_lock_irq(&channel->packet_list_mutex);
* [PATCH 48/55] drivers: hv: dxgkrnl: Add support for locking a shared allocation by not the owner
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (46 preceding siblings ...)
2026-03-19 20:25 ` [PATCH 47/55] drivers: hv: dxgkrnl: Retry sending a VM bus packet when there is no place in the ring buffer Eric Curtin
@ 2026-03-19 20:25 ` Eric Curtin
2026-03-19 20:25 ` [PATCH 49/55] drivers: hv: dxgkrnl: Fix build breaks when switching to 6.6 kernel due to hv_driver remove callback change Eric Curtin
` (6 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:25 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
WDDM has a restriction that an allocation of a shared resource can be
locked for CPU access only by the resource creator (the owner). This
restriction is removed for system-memory-only allocations, and this
change adds support for that case.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
[kms: forward port to 6.6 from 6.1. No code changes made.]
Signed-off-by: Kelsey Steele <kelseysteele@microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 4 ++--
drivers/hv/dxgkrnl/dxgkrnl.h | 13 ++++++++++++-
drivers/hv/dxgkrnl/ioctl.c | 25 +++++++++++++++++--------
3 files changed, 31 insertions(+), 11 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index b8ae8099847b..cf946e476411 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -559,8 +559,8 @@ void dxgsharedresource_destroy(struct kref *refcount)
vfree(resource->runtime_private_data);
if (resource->resource_private_data)
vfree(resource->resource_private_data);
- if (resource->alloc_private_data_sizes)
- vfree(resource->alloc_private_data_sizes);
+ if (resource->alloc_info)
+ vfree(resource->alloc_info);
if (resource->alloc_private_data)
vfree(resource->alloc_private_data);
kfree(resource);
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index a4d0c504668b..d816a875d5ab 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -613,6 +613,17 @@ struct dxghwqueue *dxghwqueue_create(struct dxgcontext *ctx);
void dxghwqueue_destroy(struct dxgprocess *pr, struct dxghwqueue *hq);
void dxghwqueue_release(struct kref *refcount);
+/*
+ * When a shared resource is created this structure provides information
+ * about every allocation in the resource. It is used when someone opens the
+ * resource and locks its allocation.
+ */
+struct dxgsharedallocdata {
+ u32 private_data_size; /* Size of private data */
+ u32 num_pages; /* Allocation size in pages */
+	bool cached; /* True if the alloc memory is cached */
+};
+
/*
* A shared resource object is created to track the list of dxgresource objects,
* which are opened for the same underlying shared resource.
@@ -658,7 +669,7 @@ struct dxgsharedresource {
};
long flags;
};
- u32 *alloc_private_data_sizes;
+ struct dxgsharedallocdata *alloc_info;
u8 *alloc_private_data;
u8 *runtime_private_data;
u8 *resource_private_data;
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index d91af2e176e9..f8f116a7f87f 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -369,6 +369,7 @@ static int dxgsharedresource_seal(struct dxgsharedresource *shared_resource)
u32 data_size;
struct dxgresource *resource;
struct dxgallocation *alloc;
+ struct dxgsharedallocdata *alloc_info;
DXG_TRACE("Sealing resource: %p", shared_resource);
@@ -409,9 +410,10 @@ static int dxgsharedresource_seal(struct dxgsharedresource *shared_resource)
ret = -EINVAL;
goto cleanup1;
}
- shared_resource->alloc_private_data_sizes =
- vzalloc(sizeof(u32)*shared_resource->allocation_count);
- if (shared_resource->alloc_private_data_sizes == NULL) {
+ shared_resource->alloc_info =
+ vzalloc(sizeof(struct dxgsharedallocdata) *
+ shared_resource->allocation_count);
+ if (shared_resource->alloc_info == NULL) {
ret = -EINVAL;
goto cleanup1;
}
@@ -429,8 +431,10 @@ static int dxgsharedresource_seal(struct dxgsharedresource *shared_resource)
ret = -EINVAL;
goto cleanup1;
}
- shared_resource->alloc_private_data_sizes[i] =
- alloc_data_size;
+ alloc_info = &shared_resource->alloc_info[i];
+ alloc_info->private_data_size = alloc_data_size;
+ alloc_info->num_pages = alloc->num_pages;
+ alloc_info->cached = alloc->cached;
memcpy(private_data,
alloc->priv_drv_data->data,
alloc_data_size);
@@ -5031,6 +5035,7 @@ assign_resource_handles(struct dxgprocess *process,
u8 *cur_priv_data;
u32 total_priv_data_size = 0;
struct d3dddi_openallocationinfo2 open_alloc_info = { };
+ struct dxgsharedallocdata *alloc_info;
hmgrtable_lock(&process->handle_table, DXGLOCK_EXCL);
ret = hmgrtable_assign_handle(&process->handle_table, resource,
@@ -5050,11 +5055,15 @@ assign_resource_handles(struct dxgprocess *process,
allocs[i]->alloc_handle = handles[i];
allocs[i]->handle_valid = 1;
open_alloc_info.allocation = handles[i];
- if (shared_resource->alloc_private_data_sizes)
+ if (shared_resource->alloc_info) {
+ alloc_info = &shared_resource->alloc_info[i];
open_alloc_info.priv_drv_data_size =
- shared_resource->alloc_private_data_sizes[i];
- else
+ alloc_info->private_data_size;
+ allocs[i]->num_pages = alloc_info->num_pages;
+ allocs[i]->cached = alloc_info->cached;
+ } else {
open_alloc_info.priv_drv_data_size = 0;
+ }
total_priv_data_size += open_alloc_info.priv_drv_data_size;
open_alloc_info.priv_drv_data = cur_priv_data;
* [PATCH 49/55] drivers: hv: dxgkrnl: Fix build breaks when switching to 6.6 kernel due to hv_driver remove callback change.
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (47 preceding siblings ...)
2026-03-19 20:25 ` [PATCH 48/55] drivers: hv: dxgkrnl: Add support for locking a shared allocation by not the owner Eric Curtin
@ 2026-03-19 20:25 ` Eric Curtin
2026-03-19 20:25 ` [PATCH 50/55] drivers: hv: dxgkrnl: Fix build breaks when switching to 6.6 kernel due to removed uuid_le_cmp Eric Curtin
` (5 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:25 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
The hv_driver remove callback has been updated to return void instead of int.
dxg_remove_vmbus() in dxgmodule.c needs to be updated to match. See this
commit for more context:
96ec2939620c "Drivers: hv: Make remove callback of hyperv driver void returned"
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
---
drivers/hv/dxgkrnl/dxgmodule.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index 0fafb6167229..5459bd9b82fb 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -803,9 +803,8 @@ static int dxg_probe_vmbus(struct hv_device *hdev,
return ret;
}
-static int dxg_remove_vmbus(struct hv_device *hdev)
+static void dxg_remove_vmbus(struct hv_device *hdev)
{
- int ret = 0;
struct dxgvgpuchannel *vgpu_channel;
struct dxgglobal *dxgglobal = dxggbl();
@@ -830,12 +829,9 @@ static int dxg_remove_vmbus(struct hv_device *hdev)
} else {
/* Unknown device type */
DXG_ERR("Unknown device type");
- ret = -ENODEV;
}
mutex_unlock(&dxgglobal->device_mutex);
-
- return ret;
}
MODULE_DEVICE_TABLE(vmbus, dxg_vmbus_id_table);
* [PATCH 50/55] drivers: hv: dxgkrnl: Fix build breaks when switching to 6.6 kernel due to removed uuid_le_cmp
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (48 preceding siblings ...)
2026-03-19 20:25 ` [PATCH 49/55] drivers: hv: dxgkrnl: Fix build breaks when switching to 6.6 kernel due to hv_driver remove callback change Eric Curtin
@ 2026-03-19 20:25 ` Eric Curtin
2026-03-19 20:25 ` [PATCH 51/55] drivers: hv: dxgkrnl: Implement D3DKMTEnumProcesses to match the Windows implementation Eric Curtin
` (4 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:25 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
uuid_le_cmp was removed and needs to be replaced by guid_equal. The
relevant upstream commits are:
1fb1ea0d9cb8 "mei: Move uuid.h to the MEI namespace"
f5b3c341a46e "mei: Move uuid_le_cmp() to its only user"
5e6a51787fef "uuid: Decouple guid_t and uuid_le types and respective macros"
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
---
drivers/hv/dxgkrnl/dxgmodule.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index 5459bd9b82fb..e3ac70df1b6f 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -762,7 +762,7 @@ static int dxg_probe_vmbus(struct hv_device *hdev,
mutex_lock(&dxgglobal->device_mutex);
- if (uuid_le_cmp(hdev->dev_type, dxg_vmbus_id_table[0].guid) == 0) {
+ if (guid_equal(&hdev->dev_type, &dxg_vmbus_id_table[0].guid)) {
/* This is a new virtual GPU channel */
guid_to_luid(&hdev->channel->offermsg.offer.if_instance, &luid);
DXG_TRACE("vGPU channel: %pUb",
@@ -777,8 +777,7 @@ static int dxg_probe_vmbus(struct hv_device *hdev,
list_add_tail(&vgpuch->vgpu_ch_list_entry,
&dxgglobal->vgpu_ch_list_head);
dxgglobal_start_adapters();
- } else if (uuid_le_cmp(hdev->dev_type,
- dxg_vmbus_id_table[1].guid) == 0) {
+ } else if (guid_equal(&hdev->dev_type, &dxg_vmbus_id_table[1].guid)) {
/* This is the global Dxgkgnl channel */
DXG_TRACE("Global channel: %pUb",
&hdev->channel->offermsg.offer.if_instance);
@@ -810,7 +809,7 @@ static void dxg_remove_vmbus(struct hv_device *hdev)
mutex_lock(&dxgglobal->device_mutex);
- if (uuid_le_cmp(hdev->dev_type, dxg_vmbus_id_table[0].guid) == 0) {
+ if (guid_equal(&hdev->dev_type, &dxg_vmbus_id_table[0].guid)) {
DXG_TRACE("Remove virtual GPU channel");
dxgglobal_stop_adapter_vmbus(hdev);
list_for_each_entry(vgpu_channel,
@@ -822,8 +821,7 @@ static void dxg_remove_vmbus(struct hv_device *hdev)
break;
}
}
- } else if (uuid_le_cmp(hdev->dev_type,
- dxg_vmbus_id_table[1].guid) == 0) {
+ } else if (guid_equal(&hdev->dev_type, &dxg_vmbus_id_table[1].guid)) {
DXG_TRACE("Remove global channel device");
dxgglobal_destroy_global_channel();
} else {
* [PATCH 51/55] drivers: hv: dxgkrnl: Implement D3DKMTEnumProcesses to match the Windows implementation
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (49 preceding siblings ...)
2026-03-19 20:25 ` [PATCH 50/55] drivers: hv: dxgkrnl: Fix build breaks when switching to 6.6 kernel due to removed uuid_le_cmp Eric Curtin
@ 2026-03-19 20:25 ` Eric Curtin
2026-03-19 20:25 ` [PATCH 52/55] drivers: hv: dxgkrnl: Use pin_user_pages instead of get_user_pages for DMA accessible memory Eric Curtin
` (3 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:25 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
On Windows, D3DKMTEnumProcesses returns the number of active processes when
buffer_count is 0 or the input buffer is NULL. The Linux implementation
is updated to match this behavior.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
---
drivers/hv/dxgkrnl/dxgmodule.c | 2 +-
drivers/hv/dxgkrnl/ioctl.c | 29 ++++++++++++++++++-----------
2 files changed, 19 insertions(+), 12 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index e3ac70df1b6f..8f5d6db256a3 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -965,4 +965,4 @@ module_exit(dxg_drv_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual compute device Driver");
-MODULE_VERSION("2.0.2");
+MODULE_VERSION("2.0.3");
diff --git a/drivers/hv/dxgkrnl/ioctl.c b/drivers/hv/dxgkrnl/ioctl.c
index f8f116a7f87f..42f3de31a63c 100644
--- a/drivers/hv/dxgkrnl/ioctl.c
+++ b/drivers/hv/dxgkrnl/ioctl.c
@@ -5373,7 +5373,7 @@ dxgkio_enum_processes(struct dxgprocess *process, void *__user inargs)
struct dxgprocess_adapter *pentry;
int nump = 0; /* Current number of processes*/
struct ntstatus status;
- int ret;
+ int ret, ret1;
ret = copy_from_user(&args, inargs, sizeof(args));
if (ret) {
@@ -5382,12 +5382,6 @@ dxgkio_enum_processes(struct dxgprocess *process, void *__user inargs)
goto cleanup;
}
- if (args.buffer_count == 0) {
- DXG_ERR("Invalid buffer count");
- ret = -EINVAL;
- goto cleanup;
- }
-
dxgglobal_acquire_adapter_list_lock(DXGLOCK_SHARED);
dxgglobal_acquire_process_adapter_lock();
@@ -5405,6 +5399,19 @@ dxgkio_enum_processes(struct dxgprocess *process, void *__user inargs)
goto cleanup_locks;
}
+ list_for_each_entry(pentry, &adapter->adapter_process_list_head,
+ adapter_process_list_entry) {
+ if (pentry->process->nspid == task_active_pid_ns(current))
+ nump++;
+ }
+
+ if (nump > args.buffer_count || args.buffer == NULL) {
+ status.v = STATUS_BUFFER_TOO_SMALL;
+ ret = ntstatus2int(status);
+ goto cleanup_locks;
+ }
+
+ nump = 0;
list_for_each_entry(pentry, &adapter->adapter_process_list_head,
adapter_process_list_entry) {
if (pentry->process->nspid != task_active_pid_ns(current))
@@ -5429,10 +5436,10 @@ dxgkio_enum_processes(struct dxgprocess *process, void *__user inargs)
dxgglobal_release_process_adapter_lock();
dxgglobal_release_adapter_list_lock(DXGLOCK_SHARED);
- if (ret == 0) {
- ret = copy_to_user(&input->buffer_count, &nump, sizeof(u32));
- if (ret)
- DXG_ERR("failed to copy buffer count to user");
+ ret1 = copy_to_user(&input->buffer_count, &nump, sizeof(u32));
+ if (ret1) {
+ DXG_ERR("failed to copy buffer count to user");
+ ret = -EFAULT;
}
cleanup:
* [PATCH 52/55] drivers: hv: dxgkrnl: Use pin_user_pages instead of get_user_pages for DMA accessible memory
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (50 preceding siblings ...)
2026-03-19 20:25 ` [PATCH 51/55] drivers: hv: dxgkrnl: Implement D3DKMTEnumProcesses to match the Windows implementation Eric Curtin
@ 2026-03-19 20:25 ` Eric Curtin
2026-03-19 20:25 ` [PATCH 53/55] drivers: hv: dxgkrnl: Do not print error messages when virtual GPU is not present Eric Curtin
` (2 subsequent siblings)
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:25 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Pages obtained with get_user_pages() can be evicted from memory.
pin_user_pages() must be used for memory that is accessed by DMA.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 2 +-
drivers/hv/dxgkrnl/dxgvmbus.c | 12 ++++++++----
2 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index cf946e476411..c94283b09fa1 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -882,7 +882,7 @@ struct dxgallocation *dxgallocation_create(struct dxgprocess *process)
void dxgallocation_stop(struct dxgallocation *alloc)
{
if (alloc->pages) {
- release_pages(alloc->pages, alloc->num_pages);
+ unpin_user_pages(alloc->pages, alloc->num_pages);
vfree(alloc->pages);
alloc->pages = NULL;
}
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index 467e7707c8c7..abb6d2af89ac 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -1457,6 +1457,7 @@ int create_existing_sysmem(struct dxgdevice *device,
u64 *pfn;
u32 pages_to_send;
u32 i;
+ u32 gup_flags = FOLL_LONGTERM;
struct dxgglobal *dxgglobal = dxggbl();
/*
@@ -1475,12 +1476,15 @@ int create_existing_sysmem(struct dxgdevice *device,
ret = -ENOMEM;
goto cleanup;
}
- ret1 = get_user_pages_fast((unsigned long)sysmem, npages, !read_only,
- dxgalloc->pages);
+ if (!read_only)
+ gup_flags |= FOLL_WRITE;
+ ret1 = pin_user_pages_fast((unsigned long)sysmem, npages, gup_flags,
+ dxgalloc->pages);
if (ret1 != npages) {
DXG_ERR("get_user_pages_fast failed: %d", ret1);
- if (ret1 > 0 && ret1 < npages)
- release_pages(dxgalloc->pages, ret1);
+ if (ret1 > 0 && ret1 < npages) {
+ unpin_user_pages(dxgalloc->pages, ret1);
+ }
vfree(dxgalloc->pages);
dxgalloc->pages = NULL;
ret = -ENOMEM;
* [PATCH 53/55] drivers: hv: dxgkrnl: Do not print error messages when virtual GPU is not present
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (51 preceding siblings ...)
2026-03-19 20:25 ` [PATCH 52/55] drivers: hv: dxgkrnl: Use pin_user_pages instead of get_user_pages for DMA accessible memory Eric Curtin
@ 2026-03-19 20:25 ` Eric Curtin
2026-03-19 20:25 ` [PATCH 54/55] drivers: hv: dxgkrnl: Fix crash at hmgrtable_free_handle Eric Curtin
2026-03-19 20:25 ` [PATCH 55/55] drivers: hv: dxgkrnl: Code cleanup for upstream submission Eric Curtin
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:25 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Iouri Tarassov <iourit@linux.microsoft.com>
Dxgkrnl prints the error message "Failed to acquire global channel lock"
when a process tries to open the /dev/dxg device and there is no
virtual GPU. This message should not be printed in this scenario.
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
---
drivers/hv/dxgkrnl/dxgadapter.c | 2 +-
drivers/hv/dxgkrnl/dxgmodule.c | 4 ++++
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index c94283b09fa1..6d3cabb24e6f 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -78,12 +78,12 @@ void dxgadapter_start(struct dxgadapter *adapter)
/* The global channel is initialized when the first adapter starts */
if (!dxgglobal->global_channel_initialized) {
+ dxgglobal->global_channel_initialized = true;
ret = dxgglobal_init_global_channel();
if (ret) {
dxgglobal_destroy_global_channel();
return;
}
- dxgglobal->global_channel_initialized = true;
}
/* Initialize vGPU vm bus channel */
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index 8f5d6db256a3..c2a4a2a2136f 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -46,9 +46,13 @@ int dxgglobal_acquire_channel_lock(void)
{
struct dxgglobal *dxgglobal = dxggbl();
+ if (!dxgglobal->global_channel_initialized)
+ return -ENODEV;
+
down_read(&dxgglobal->channel_lock);
if (dxgglobal->channel.channel == NULL) {
DXG_ERR("Failed to acquire global channel lock");
+ up_read(&dxgglobal->channel_lock);
return -ENODEV;
} else {
return 0;
* [PATCH 54/55] drivers: hv: dxgkrnl: Fix crash at hmgrtable_free_handle
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (52 preceding siblings ...)
2026-03-19 20:25 ` [PATCH 53/55] drivers: hv: dxgkrnl: Do not print error messages when virtual GPU is not present Eric Curtin
@ 2026-03-19 20:25 ` Eric Curtin
2026-03-19 20:25 ` [PATCH 55/55] drivers: hv: dxgkrnl: Code cleanup for upstream submission Eric Curtin
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:25 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
From: Hideyuki Nagase <hideyukn@microsoft.com>
Fix a potential NULL pointer crash in hmgrtable_free_handle() when
free_handle_list_tail is HMGRTABLE_INVALID_INDEX. Guard the entry
dereference with a bounds check before writing the next_free_index.
Signed-off-by: Hideyuki Nagase <hideyukn@microsoft.com>
---
drivers/hv/dxgkrnl/hmgr.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/drivers/hv/dxgkrnl/hmgr.c b/drivers/hv/dxgkrnl/hmgr.c
index 24101d0091ab..059f94307a0e 100644
--- a/drivers/hv/dxgkrnl/hmgr.c
+++ b/drivers/hv/dxgkrnl/hmgr.c
@@ -462,9 +462,14 @@ void hmgrtable_free_handle(struct hmgrtable *table, enum hmgrentry_type t,
*/
entry->next_free_index = HMGRTABLE_INVALID_INDEX;
entry->prev_free_index = table->free_handle_list_tail;
- entry = &table->entry_table[table->free_handle_list_tail];
- entry->next_free_index = i;
+ if (table->free_handle_list_tail != HMGRTABLE_INVALID_INDEX) {
+ entry = &table->entry_table[table->free_handle_list_tail];
+ entry->next_free_index = i;
+ }
table->free_handle_list_tail = i;
+ if (table->free_handle_list_head == HMGRTABLE_INVALID_INDEX) {
+ table->free_handle_list_head = i;
+ }
} else {
DXG_ERR("Invalid handle to free: %d %x", i, h.v);
}
* [PATCH 55/55] drivers: hv: dxgkrnl: Code cleanup for upstream submission
2026-03-19 20:24 [PATCH v4 00/55] drivers: hv: dxgkrnl: Driver for Hyper-V virtual compute device Eric Curtin
` (53 preceding siblings ...)
2026-03-19 20:25 ` [PATCH 54/55] drivers: hv: dxgkrnl: Fix crash at hmgrtable_free_handle Eric Curtin
@ 2026-03-19 20:25 ` Eric Curtin
54 siblings, 0 replies; 56+ messages in thread
From: Eric Curtin @ 2026-03-19 20:25 UTC (permalink / raw)
To: linux-hyperv; +Cc: linux-kernel, iourit, wei.liu, decui, haiyangz
Address issues raised in previous LKML submission attempts (v1-v3):
- Replace deprecated one-element arrays [1] with C99 flexible arrays []
in dxgvmbus.h and dxgkrnl.h
- Replace %px with %p in DXG_TRACE calls (avoids exposing kernel layout)
- Remove unnecessary braces from single-statement if blocks
- Remove LINUX_VERSION_CODE guard: max_pkt_size exists in all supported kernels
- Remove unused linux/version.h include from dxgkrnl.h
- Fix whitespace (space before tab) in dxgvmbus.h and d3dkmthk.h
- Replace DXG_ERR non-debug macro do{}while(0) with direct dev_err call
- Change -EBADE to -ENODEV for global channel duplicate detection
- Remove MODULE_VERSION as it is not recommended for in-tree drivers
- Add explanatory comment to guid_to_luid() cast
- Update MAINTAINERS email to iourit@linux.microsoft.com
Signed-off-by: Iouri Tarassov <iourit@linux.microsoft.com>
---
MAINTAINERS | 2 +-
drivers/hv/dxgkrnl/dxgadapter.c | 8 ++++----
drivers/hv/dxgkrnl/dxgkrnl.h | 13 +++++--------
drivers/hv/dxgkrnl/dxgmodule.c | 5 ++---
drivers/hv/dxgkrnl/dxgvmbus.c | 5 +----
drivers/hv/dxgkrnl/dxgvmbus.h | 26 +++++++++++++-------------
drivers/hv/dxgkrnl/hmgr.c | 3 +--
include/uapi/misc/d3dkmthk.h | 2 +-
8 files changed, 28 insertions(+), 36 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 4fe0b3501931..493c65a02b80 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9772,7 +9772,7 @@ F: drivers/mtd/hyperbus/
F: include/linux/mtd/hyperbus.h
Hyper-V vGPU DRIVER
-M: Iouri Tarassov <iourit@microsoft.com>
+M: Iouri Tarassov <iourit@linux.microsoft.com>
L: linux-hyperv@vger.kernel.org
S: Supported
F: drivers/hv/dxgkrnl/
diff --git a/drivers/hv/dxgkrnl/dxgadapter.c b/drivers/hv/dxgkrnl/dxgadapter.c
index 6d3cabb24e6f..d395fdcb63fa 100644
--- a/drivers/hv/dxgkrnl/dxgadapter.c
+++ b/drivers/hv/dxgkrnl/dxgadapter.c
@@ -136,7 +136,7 @@ void dxgadapter_release(struct kref *refcount)
struct dxgadapter *adapter;
adapter = container_of(refcount, struct dxgadapter, adapter_kref);
- DXG_TRACE("Destroying adapter: %px", adapter);
+ DXG_TRACE("Destroying adapter: %p", adapter);
kfree(adapter);
}
@@ -271,7 +271,7 @@ struct dxgdevice *dxgdevice_create(struct dxgadapter *adapter,
kref_put(&device->device_kref, dxgdevice_release);
device = NULL;
} else {
- DXG_TRACE("dxgdevice created: %px", device);
+ DXG_TRACE("dxgdevice created: %p", device);
}
}
return device;
@@ -720,7 +720,7 @@ void dxgdevice_release(struct kref *refcount)
struct dxgdevice *device;
device = container_of(refcount, struct dxgdevice, device_kref);
- DXG_TRACE("Destroying device: %px", device);
+ DXG_TRACE("Destroying device: %p", device);
kref_put(&device->adapter->adapter_kref, dxgadapter_release);
kfree(device);
}
@@ -1103,7 +1103,7 @@ int dxgprocess_adapter_add_device(struct dxgprocess *process,
void dxgprocess_adapter_remove_device(struct dxgdevice *device)
{
- DXG_TRACE("Removing device: %px", device);
+ DXG_TRACE("Removing device: %p", device);
mutex_lock(&device->adapter_info->device_list_mutex);
if (device->device_list_entry.next) {
list_del(&device->device_list_entry);
diff --git a/drivers/hv/dxgkrnl/dxgkrnl.h b/drivers/hv/dxgkrnl/dxgkrnl.h
index d816a875d5ab..4a4605f45736 100644
--- a/drivers/hv/dxgkrnl/dxgkrnl.h
+++ b/drivers/hv/dxgkrnl/dxgkrnl.h
@@ -27,7 +27,6 @@
#include <linux/pci.h>
#include <linux/hyperv.h>
#include <uapi/misc/d3dkmthk.h>
-#include <linux/version.h>
#include "misc.h"
#include "hmgr.h"
#include <uapi/misc/d3dkmthk.h>
@@ -719,7 +718,7 @@ bool dxgresource_is_active(struct dxgresource *res);
struct privdata {
u32 data_size;
- u8 data[1];
+ u8 data[];
};
struct dxgallocation {
@@ -769,9 +768,9 @@ long dxgk_unlocked_ioctl(struct file *f, unsigned int p1, unsigned long p2);
int dxg_unmap_iospace(void *va, u32 size);
/*
- * The convention is that VNBus instance id is a GUID, but the host sets
- * the lower part of the value to the host adapter LUID. The function
- * provides the necessary conversion.
+ * The convention is that VMBus instance id is a GUID, but the host sets
+ * the lower part of the value to the host adapter LUID. The cast reads
+ * the first sizeof(winluid) bytes of the GUID as a winluid value.
*/
static inline void guid_to_luid(guid_t *guid, struct winluid *luid)
{
@@ -1029,9 +1028,7 @@ void dxgk_validate_ioctls(void);
#else
#define DXG_TRACE(...)
-#define DXG_ERR(fmt, ...) do { \
- dev_err(DXGDEV, "%s: " fmt, __func__, ##__VA_ARGS__); \
-} while (0)
+#define DXG_ERR(fmt, ...) dev_err(DXGDEV, "%s: " fmt, __func__, ##__VA_ARGS__)
#endif /* DEBUG */
diff --git a/drivers/hv/dxgkrnl/dxgmodule.c b/drivers/hv/dxgkrnl/dxgmodule.c
index c2a4a2a2136f..435dc60511b8 100644
--- a/drivers/hv/dxgkrnl/dxgmodule.c
+++ b/drivers/hv/dxgkrnl/dxgmodule.c
@@ -158,7 +158,7 @@ static void dxg_signal_dma_fence(struct dxghostevent *eventhdr)
{
struct dxgsyncpoint *event = (struct dxgsyncpoint *)eventhdr;
- DXG_TRACE("syncpoint: %px, fence: %lld", event, event->fence_value);
+ DXG_TRACE("syncpoint: %p, fence: %lld", event, event->fence_value);
event->fence_value++;
list_del(&eventhdr->host_event_list_entry);
dma_fence_signal(&event->base);
@@ -788,7 +788,7 @@ static int dxg_probe_vmbus(struct hv_device *hdev,
if (dxgglobal->hdev) {
/* This device should appear only once */
DXG_ERR("global channel already exists");
- ret = -EBADE;
+ ret = -ENODEV;
goto error;
}
dxgglobal->hdev = hdev;
@@ -969,4 +969,3 @@ module_exit(dxg_drv_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Microsoft Dxgkrnl virtual compute device Driver");
-MODULE_VERSION("2.0.3");
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.c b/drivers/hv/dxgkrnl/dxgvmbus.c
index abb6d2af89ac..4b1ccaac440c 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.c
+++ b/drivers/hv/dxgkrnl/dxgvmbus.c
@@ -246,9 +246,7 @@ int dxgvmbuschannel_init(struct dxgvmbuschannel *ch, struct hv_device *hdev)
goto cleanup;
}
-#if KERNEL_VERSION(5, 15, 0) <= LINUX_VERSION_CODE
hdev->channel->max_pkt_size = DXG_MAX_VM_BUS_PACKET_SIZE;
-#endif
ret = vmbus_open(hdev->channel, RING_BUFSIZE, RING_BUFSIZE,
NULL, 0, dxgvmbuschannel_receive, ch);
if (ret) {
@@ -1482,9 +1480,8 @@ int create_existing_sysmem(struct dxgdevice *device,
dxgalloc->pages);
if (ret1 != npages) {
DXG_ERR("get_user_pages_fast failed: %d", ret1);
- if (ret1 > 0 && ret1 < npages) {
+ if (ret1 > 0 && ret1 < npages)
unpin_user_pages(dxgalloc->pages, ret1);
- }
vfree(dxgalloc->pages);
dxgalloc->pages = NULL;
ret = -ENOMEM;
diff --git a/drivers/hv/dxgkrnl/dxgvmbus.h b/drivers/hv/dxgkrnl/dxgvmbus.h
index a7e625b2f896..22246826d2f1 100644
--- a/drivers/hv/dxgkrnl/dxgvmbus.h
+++ b/drivers/hv/dxgkrnl/dxgvmbus.h
@@ -313,12 +313,12 @@ struct dxgkvmb_command_queryadapterinfo {
struct dxgkvmb_command_vgpu_to_host hdr;
enum kmtqueryadapterinfotype query_type;
u32 private_data_size;
- u8 private_data[1];
+ u8 private_data[];
};
struct dxgkvmb_command_queryadapterinfo_return {
struct ntstatus status;
- u8 private_data[1];
+ u8 private_data[];
};
/* Returns ntstatus */
@@ -391,7 +391,7 @@ struct dxgkvmb_command_makeresident {
struct d3dkmthandle paging_queue;
struct d3dddi_makeresident_flags flags;
u32 alloc_count;
- struct d3dkmthandle allocations[1];
+ struct d3dkmthandle allocations[];
};
struct dxgkvmb_command_makeresident_return {
@@ -405,7 +405,7 @@ struct dxgkvmb_command_evict {
struct d3dkmthandle device;
struct d3dddi_evict_flags flags;
u32 alloc_count;
- struct d3dkmthandle allocations[1];
+ struct d3dkmthandle allocations[];
};
struct dxgkvmb_command_evict_return {
@@ -476,7 +476,7 @@ struct dxgkvmb_command_updategpuvirtualaddress {
struct d3dkmthandle fence_object;
u32 num_operations;
u32 flags;
- struct d3dddi_updategpuvirtualaddress_operation operations[1];
+ struct d3dddi_updategpuvirtualaddress_operation operations[];
};
struct dxgkvmb_command_queryclockcalibration {
@@ -627,7 +627,7 @@ struct dxgkvmb_command_destroyallocation {
struct d3dkmthandle resource;
u32 alloc_count;
struct d3dddicb_destroyallocation2flags flags;
- struct d3dkmthandle allocations[1];
+ struct d3dkmthandle allocations[];
};
struct dxgkvmb_command_createcontextvirtual {
@@ -639,7 +639,7 @@ struct dxgkvmb_command_createcontextvirtual {
struct d3dddi_createcontextflags flags;
enum d3dkmt_clienthint client_hint;
u32 priv_drv_data_size;
- u8 priv_drv_data[1];
+ u8 priv_drv_data[];
};
/* The command returns ntstatus */
@@ -768,7 +768,7 @@ struct dxgkvmb_command_offerallocations {
enum d3dkmt_offer_priority priority;
struct d3dkmt_offer_flags flags;
bool resources;
- struct d3dkmthandle allocations[1];
+ struct d3dkmthandle allocations[];
};
struct dxgkvmb_command_reclaimallocations {
@@ -778,13 +778,13 @@ struct dxgkvmb_command_reclaimallocations {
u32 allocation_count;
bool resources;
bool write_results;
- struct d3dkmthandle allocations[1];
+ struct d3dkmthandle allocations[];
};
struct dxgkvmb_command_reclaimallocations_return {
u64 paging_fence_value;
struct ntstatus status;
- enum d3dddi_reclaim_result discarded[1];
+ enum d3dddi_reclaim_result discarded[];
};
/* Returns ntstatus */
@@ -804,7 +804,7 @@ struct dxgkvmb_command_createhwqueue {
struct d3dkmthandle context;
struct d3dddi_createhwqueueflags flags;
u32 priv_drv_data_size;
- char priv_drv_data[1];
+ char priv_drv_data[];
};
/* The command returns ntstatus */
@@ -833,7 +833,7 @@ struct dxgkvmb_command_escape {
struct d3dddi_escapeflags flags;
u32 priv_drv_data_size;
struct d3dkmthandle context;
- u8 priv_drv_data[1];
+ u8 priv_drv_data[];
};
struct dxgkvmb_command_queryvideomemoryinfo {
@@ -879,7 +879,7 @@ struct dxgk_feature_desc {
struct {
u16 supported : 1;
u16 virtualization_mode : 3;
- u16 global : 1;
+ u16 global : 1;
u16 driver_feature : 1;
u16 internal : 1;
u16 reserved : 9;
diff --git a/drivers/hv/dxgkrnl/hmgr.c b/drivers/hv/dxgkrnl/hmgr.c
index 059f94307a0e..95879f59133e 100644
--- a/drivers/hv/dxgkrnl/hmgr.c
+++ b/drivers/hv/dxgkrnl/hmgr.c
@@ -467,9 +467,8 @@ void hmgrtable_free_handle(struct hmgrtable *table, enum hmgrentry_type t,
entry->next_free_index = i;
}
table->free_handle_list_tail = i;
- if (table->free_handle_list_head == HMGRTABLE_INVALID_INDEX) {
+ if (table->free_handle_list_head == HMGRTABLE_INVALID_INDEX)
table->free_handle_list_head = i;
- }
} else {
DXG_ERR("Invalid handle to free: %d %x", i, h.v);
}
diff --git a/include/uapi/misc/d3dkmthk.h b/include/uapi/misc/d3dkmthk.h
index db40e8ff40b0..a58b2513dfd3 100644
--- a/include/uapi/misc/d3dkmthk.h
+++ b/include/uapi/misc/d3dkmthk.h
@@ -1612,7 +1612,7 @@ struct d3dkmt_opensyncobjectfromsyncfile {
};
struct d3dkmt_enumprocesses {
- struct winluid adapter_luid;
+ struct winluid adapter_luid;
#ifdef __KERNEL__
__u32 *buffer;
#else