virtualization.lists.linux-foundation.org archive mirror
 help / color / mirror / Atom feed
* [RFC PATCH 1/3] virtio-gpu api: add blob userptr resource
@ 2024-12-20 10:04 Honglei Huang
  2024-12-20 10:04 ` [RFC PATCH 2/3] drm/virtgpu " Honglei Huang
                   ` (2 more replies)
  0 siblings, 3 replies; 23+ messages in thread
From: Honglei Huang @ 2024-12-20 10:04 UTC (permalink / raw)
  To: Huang Rui, virtualization, linux-kernel, Dmitry Osipenko,
	dri-devel, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Akihiko Odaki
  Cc: Lingshan Zhu, Demi Marie Obenour, Honglei Huang

From: Honglei Huang <Honglei1.Huang@amd.com>

Add a new resource for blob resource, called userptr, used for let
host access guest user space memory, to acquire a simple SVM features
in virtio GPU.

- The capset VIRTIO_GPU_CAPSET_HSAKMT used for context init,
in this series patches only HSAKMT context can use the userptr
feature. HSAKMT is a GPU compute library in HSA stack, like
the role libdrm in mesa stack.
- New flag VIRTIO_GPU_BLOB_FLAG_USE_USERPTR used in blob create
to indicate the blob create ioctl is used for create a userptr
blob resource.

Signed-off-by: Honglei Huang <Honglei1.Huang@amd.com>
---
 include/uapi/linux/virtio_gpu.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/virtio_gpu.h
index b9a9783f0b14..0a6b56acbc13 100644
--- a/include/uapi/linux/virtio_gpu.h
+++ b/include/uapi/linux/virtio_gpu.h
@@ -323,6 +323,7 @@ struct virtio_gpu_cmd_submit {
 
 #define VIRTIO_GPU_CAPSET_VIRGL 1
 #define VIRTIO_GPU_CAPSET_VIRGL2 2
+#define VIRTIO_GPU_CAPSET_HSAKMT 7
 
 /* VIRTIO_GPU_CMD_GET_CAPSET_INFO */
 struct virtio_gpu_get_capset_info {
@@ -415,6 +416,7 @@ struct virtio_gpu_resource_create_blob {
 #define VIRTIO_GPU_BLOB_FLAG_USE_MAPPABLE     0x0001
 #define VIRTIO_GPU_BLOB_FLAG_USE_SHAREABLE    0x0002
 #define VIRTIO_GPU_BLOB_FLAG_USE_CROSS_DEVICE 0x0004
+#define VIRTIO_GPU_BLOB_FLAG_USE_USERPTR      0x0008
 	/* zero is invalid blob mem */
 	__le32 blob_mem;
 	__le32 blob_flags;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [RFC PATCH 2/3] drm/virtgpu api: add blob userptr resource
  2024-12-20 10:04 [RFC PATCH 1/3] virtio-gpu api: add blob userptr resource Honglei Huang
@ 2024-12-20 10:04 ` Honglei Huang
  2024-12-20 10:04 ` [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object Honglei Huang
  2025-02-03  8:25 ` [RFC PATCH 1/3] virtio-gpu api: add blob userptr resource Akihiko Odaki
  2 siblings, 0 replies; 23+ messages in thread
From: Honglei Huang @ 2024-12-20 10:04 UTC (permalink / raw)
  To: Huang Rui, virtualization, linux-kernel, Dmitry Osipenko,
	dri-devel, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Akihiko Odaki
  Cc: Lingshan Zhu, Demi Marie Obenour, Honglei Huang

From: Honglei Huang <Honglei1.Huang@amd.com>

This makes blob userptr resource available to guest userspace.

- Flag VIRTGPU_BLOB_FLAG_USE_USERPTR for guest userspace blob create,
enable this flag to indicate blob userptr resource create.
- New parameter blob_userptr for bypass userspace memory address to
virtio GPU, like other SVM design, virtio GPU needs a userspace
memory for device access.
- New parameter offset used for the already created blob userptr
resource to get the address offset between the create one.

The blob userptr resource is used for SVM feature, in compute side,
this feature is must needed. like in OpenCL SVM feature also called
userptr feature, it is for let device to access userspace memory,
a very basic and important feature to prevent large memory copy between
UMD and KMD.

Signed-off-by: Honglei Huang <Honglei1.Huang@amd.com>
---
 include/uapi/drm/virtgpu_drm.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h
index 2bb2d3a0c7bd..19fced75708c 100644
--- a/include/uapi/drm/virtgpu_drm.h
+++ b/include/uapi/drm/virtgpu_drm.h
@@ -181,6 +181,7 @@ struct drm_virtgpu_resource_create_blob {
 #define VIRTGPU_BLOB_FLAG_USE_MAPPABLE     0x0001
 #define VIRTGPU_BLOB_FLAG_USE_SHAREABLE    0x0002
 #define VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE 0x0004
+#define VIRTGPU_BLOB_FLAG_USE_USERPTR      0x0008
 	/* zero is invalid blob_mem */
 	__u32 blob_mem;
 	__u32 blob_flags;
@@ -196,6 +197,8 @@ struct drm_virtgpu_resource_create_blob {
 	__u32 cmd_size;
 	__u64 cmd;
 	__u64 blob_id;
+	__u64 blob_userptr;
+	__u64 offset;
 };
 
 #define VIRTGPU_CONTEXT_PARAM_CAPSET_ID       0x0001
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2024-12-20 10:04 [RFC PATCH 1/3] virtio-gpu api: add blob userptr resource Honglei Huang
  2024-12-20 10:04 ` [RFC PATCH 2/3] drm/virtgpu " Honglei Huang
@ 2024-12-20 10:04 ` Honglei Huang
  2024-12-20 15:35   ` Simona Vetter
  2025-02-03  8:25 ` [RFC PATCH 1/3] virtio-gpu api: add blob userptr resource Akihiko Odaki
  2 siblings, 1 reply; 23+ messages in thread
From: Honglei Huang @ 2024-12-20 10:04 UTC (permalink / raw)
  To: Huang Rui, virtualization, linux-kernel, Dmitry Osipenko,
	dri-devel, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Akihiko Odaki
  Cc: Lingshan Zhu, Demi Marie Obenour, Honglei Huang

From: Honglei Huang <Honglei1.Huang@amd.com>

A virtio-gpu userptr is based on HMM notifier.
Used for let host access guest userspace memory and
notice the change of userspace memory.
This series patches are in very beginning state,
User space are pinned currently to ensure the host
device memory operations are correct.
The free and unmap operations for userspace can be
handled by MMU notifier this is a simple and basice
SVM feature for this series patches.
The physical PFNS update operations is splited into
two OPs in here. The evicted memories won't be used
anymore but remap into host again to achieve same
effect with hmm_rang_fault.

Signed-off-by: Honglei Huang <Honglei1.Huang@amd.com>
---
 drivers/gpu/drm/virtio/Makefile          |   3 +-
 drivers/gpu/drm/virtio/virtgpu_drv.h     |  72 +++
 drivers/gpu/drm/virtio/virtgpu_ioctl.c   |  25 +-
 drivers/gpu/drm/virtio/virtgpu_kms.c     |   2 +
 drivers/gpu/drm/virtio/virtgpu_object.c  |   5 +
 drivers/gpu/drm/virtio/virtgpu_userptr.c | 738 +++++++++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_vq.c      |  29 +
 7 files changed, 871 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/virtio/virtgpu_userptr.c

diff --git a/drivers/gpu/drm/virtio/Makefile b/drivers/gpu/drm/virtio/Makefile
index d2e1788a8227..351c45e1e7d1 100644
--- a/drivers/gpu/drm/virtio/Makefile
+++ b/drivers/gpu/drm/virtio/Makefile
@@ -2,8 +2,9 @@
 #
 # Makefile for the drm device driver.  This driver provides support for the
 # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
+# virtgpu_userptr.o
 
-virtio-gpu-y := virtgpu_drv.o virtgpu_kms.o virtgpu_gem.o virtgpu_vram.o \
+virtio-gpu-y := virtgpu_drv.o virtgpu_kms.o virtgpu_gem.o virtgpu_vram.o virtgpu_userptr.o \
 	virtgpu_display.o virtgpu_vq.o \
 	virtgpu_fence.o virtgpu_object.o virtgpu_debugfs.o virtgpu_plane.o \
 	virtgpu_ioctl.o virtgpu_prime.o virtgpu_trace_points.o virtgpu_submit.o
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index e1ee17466f6b..5d15c018201d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -31,6 +31,8 @@
 #include <linux/virtio_ids.h>
 #include <linux/virtio_config.h>
 #include <linux/virtio_gpu.h>
+#include <linux/mmu_notifier.h>
+#include <linux/rbtree_types.h>
 
 #include <drm/drm_atomic.h>
 #include <drm/drm_drv.h>
@@ -85,6 +87,7 @@ struct virtio_gpu_object_params {
 	uint32_t blob_mem;
 	uint32_t blob_flags;
 	uint64_t blob_id;
+	uint64_t blob_userptr;
 };
 
 struct virtio_gpu_object {
@@ -112,12 +115,50 @@ struct virtio_gpu_object_vram {
 	struct drm_mm_node vram_node;
 };
 
+enum userptr_work_list_ops {
+	USERPTR_OP_NULL,
+	USERPTR_OP_UNMAP,
+	USERPTR_OP_UPDATE,
+	USERPTR_OP_EVICT,
+};
+
+struct virtio_gpu_object_userptr {
+	struct virtio_gpu_object base;
+
+	struct page **pages;
+	uint64_t userptr_inital_start;
+	uint64_t userptr_start;
+	uint64_t userptr_last;
+	uint32_t npages;
+	uint32_t bo_handle;
+	struct list_head work_list;
+	struct virtio_gpu_device *vgdev;
+	struct mmu_interval_notifier notifier;
+	struct drm_file *file;
+
+	/* for list work */
+	struct mm_struct *mm;
+	enum userptr_work_list_ops op;
+	uint64_t notifier_start;
+	uint64_t notifier_last;
+
+	/* userptr interval tree node */
+	struct interval_tree_node it_node;
+
+	/* in release list work queue */
+	atomic_t in_release;
+	struct mutex lock;
+};
+
 #define to_virtio_gpu_shmem(virtio_gpu_object) \
 	container_of((virtio_gpu_object), struct virtio_gpu_object_shmem, base)
 
 #define to_virtio_gpu_vram(virtio_gpu_object) \
 	container_of((virtio_gpu_object), struct virtio_gpu_object_vram, base)
 
+#define to_virtio_gpu_userptr(virtio_gpu_object) \
+	container_of((virtio_gpu_object), struct virtio_gpu_object_userptr, base)
+
 struct virtio_gpu_object_array {
 	struct ww_acquire_ctx ticket;
 	struct list_head next;
@@ -279,6 +320,16 @@ struct virtio_gpu_fpriv {
 	uint64_t base_fence_ctx;
 	uint64_t ring_idx_mask;
 	struct mutex context_lock;
+
+	/* for userptr mmu notifier invalidate */
+	struct work_struct userptr_invalidate_work;
+	struct list_head userptr_invalidate_list;
+	spinlock_t userptr_invalidate_list_lock;
+
+	/* userptr interval tree */
+	struct rb_root_cached userptrs_tree;
+	struct mutex userptrs_tree_lock;
+
 	char debug_name[DEBUG_NAME_MAX_LEN];
 	bool explicit_debug_name;
 	bool fence_passing_enabled;
@@ -422,6 +473,14 @@ virtio_gpu_cmd_resource_create_blob(struct virtio_gpu_device *vgdev,
 				    struct virtio_gpu_object_params *params,
 				    struct virtio_gpu_mem_entry *ents,
 				    uint32_t nents);
+
+void
+virtio_gpu_cmd_resource_create_userptr(struct virtio_gpu_device *vgdev,
+				    struct virtio_gpu_object *bo,
+				    struct virtio_gpu_object_params *params,
+				    unsigned long *pfns,
+				    uint32_t npfns);
+
 void
 virtio_gpu_cmd_set_scanout_blob(struct virtio_gpu_device *vgdev,
 				uint32_t scanout_id,
@@ -497,4 +556,17 @@ void virtio_gpu_vram_unmap_dma_buf(struct device *dev,
 int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 				struct drm_file *file);
 
+/* virtgpu_userptr.c */
+void virtio_gpu_userptr_set_handle(struct virtio_gpu_object *qobj,
+				   uint32_t handle);
+uint32_t virtio_gpu_userptr_get_handle(struct virtio_gpu_object *qobj);
+void virtio_gpu_userptr_list_work_init(struct virtio_gpu_fpriv *vfpriv);
+void virtio_gpu_userptr_interval_tree_init(struct virtio_gpu_fpriv *vfpriv);
+uint64_t virtio_gpu_userptr_get_offset(struct virtio_gpu_object *qobj,
+				       uint64_t addr);
+bool virtio_gpu_is_userptr(struct virtio_gpu_object *bo);
+int virtio_gpu_userptr_create(struct virtio_gpu_device *vgdev,
+			      struct drm_file *file,
+			      struct virtio_gpu_object_params *params,
+			      struct virtio_gpu_object **bo_ptr);
 #endif
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index d40c7402720d..519278443c66 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -36,7 +36,8 @@
 
 #define VIRTGPU_BLOB_FLAG_USE_MASK (VIRTGPU_BLOB_FLAG_USE_MAPPABLE | \
 				    VIRTGPU_BLOB_FLAG_USE_SHAREABLE | \
-				    VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE)
+				    VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE | \
+				    VIRTGPU_BLOB_FLAG_USE_USERPTR)
 
 /* Must be called with &virtio_gpu_fpriv.struct_mutex held. */
 static void virtio_gpu_create_context_locked(struct virtio_gpu_device *vgdev,
@@ -489,6 +490,7 @@ static int verify_blob(struct virtio_gpu_device *vgdev,
 	params->size = rc_blob->size;
 	params->blob = true;
 	params->blob_flags = rc_blob->blob_flags;
+	params->blob_userptr = rc_blob->blob_userptr;
 	return 0;
 }
 
@@ -528,8 +530,20 @@ static int virtio_gpu_resource_create_blob_ioctl(struct drm_device *dev,
 				      rc_blob->cmd_size, 0);
 	}
 
-	if (guest_blob)
+	if (guest_blob && !params.blob_userptr)
 		ret = virtio_gpu_object_create(vgdev, &params, &bo, NULL);
+	else if (guest_blob && params.blob_userptr) {
+		ret = virtio_gpu_userptr_create(vgdev, file, &params, &bo);
+		if (unlikely(ret < 0))
+			return ret;
+		if (ret > 0) {
+			/* userptr already exist */
+			rc_blob->res_handle = bo->hw_res_handle;
+			rc_blob->bo_handle = virtio_gpu_userptr_get_handle(bo);
+			rc_blob->offset = virtio_gpu_userptr_get_offset(bo, rc_blob->blob_userptr);
+			return ret;
+		}
+	}
 	else if (!guest_blob && host3d_blob)
 		ret = virtio_gpu_vram_create(vgdev, &params, &bo);
 	else
@@ -560,6 +574,9 @@ static int virtio_gpu_resource_create_blob_ioctl(struct drm_device *dev,
 
 	rc_blob->res_handle = bo->hw_res_handle;
 	rc_blob->bo_handle = handle;
+	/* for mmu notifier auto release */
+	if (guest_blob && params.blob_userptr)
+		virtio_gpu_userptr_set_handle(bo, handle);
 
 	/*
 	 * The handle owns the reference now.  But we must drop our
@@ -691,6 +708,10 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev,
 			goto out_unlock;
 		}
 	}
+	if (vfpriv->context_init & VIRTIO_GPU_CAPSET_HSAKMT) {
+		virtio_gpu_userptr_list_work_init(vfpriv);
+		virtio_gpu_userptr_interval_tree_init(vfpriv);
+	}
 
 	virtio_gpu_create_context_locked(vgdev, vfpriv);
 	virtio_gpu_notify(vgdev);
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 9f4617a75edd..3af40ed8936a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -346,6 +346,8 @@ void virtio_gpu_driver_postclose(struct drm_device *dev, struct drm_file *file)
 		return;
 
 	if (vfpriv->context_created) {
+		if (vfpriv->context_init & VIRTIO_GPU_CAPSET_HSAKMT)
+			flush_work(&vfpriv->userptr_invalidate_work);
 		virtio_gpu_cmd_context_destroy(vgdev, vfpriv->ctx_id);
 		virtio_gpu_notify(vgdev);
 	}
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index c7e74cf13022..31659b0a028d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -80,6 +80,11 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
 		drm_gem_free_mmap_offset(&vram->base.base.base);
 		drm_gem_object_release(&vram->base.base.base);
 		kfree(vram);
+	} else if (virtio_gpu_is_userptr(bo)) {
+		struct virtio_gpu_object_userptr *userptr = to_virtio_gpu_userptr(bo);
+
+		drm_gem_object_release(&userptr->base.base.base);
+		kfree(userptr);
 	}
 }
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_userptr.c b/drivers/gpu/drm/virtio/virtgpu_userptr.c
new file mode 100644
index 000000000000..646088f7f72b
--- /dev/null
+++ b/drivers/gpu/drm/virtio/virtgpu_userptr.c
@@ -0,0 +1,738 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+#include "virtgpu_drv.h"
+
+#include <linux/dma-mapping.h>
+
+#include <linux/mm.h>
+#include <linux/pid.h>
+
+#include "drm/drm_gem.h"
+
+#include <linux/mmu_notifier.h>
+
+#define USERPTR_PFNS_NO_CHANGE 0
+#define USERPTR_PFNS_CHANGED 1
+#define USERPTR_PFNS_NONE 2
+
+#define USERPTR_EXISTS 1
+
+static bool
+virtio_gpu_userptr_invalidate(struct mmu_interval_notifier *mn,
+			      const struct mmu_notifier_range *range,
+			      unsigned long cur_seq);
+
+static const struct mmu_interval_notifier_ops virtio_gpu_userptr_mn_ops = {
+	.invalidate = virtio_gpu_userptr_invalidate,
+};
+
+static void virtio_gpu_userptr_unlink(struct virtio_gpu_fpriv *vfpriv,
+				      struct virtio_gpu_object_userptr *userptr)
+{
+	if (userptr->it_node.start != 0 && userptr->it_node.last != 0)
+		interval_tree_remove(&userptr->it_node, &vfpriv->userptrs_tree);
+}
+
+static void virtio_gpu_userptr_free(struct drm_gem_object *obj)
+{
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
+	struct virtio_gpu_object_userptr *userptr = to_virtio_gpu_userptr(bo);
+
+	if (bo->created) {
+		unpin_user_pages_dirty_lock(userptr->pages, userptr->npages,
+					    false);
+		vfree(userptr->pages);
+		userptr->pages = NULL;
+
+		virtio_gpu_cmd_unref_resource(vgdev, bo);
+		virtio_gpu_notify(vgdev);
+
+		return;
+	}
+}
+
+static void virtio_gpu_userptr_object_close(struct drm_gem_object *obj,
+					    struct drm_file *file)
+{
+	virtio_gpu_gem_object_close(obj, file);
+}
+
+static const struct drm_gem_object_funcs virtio_gpu_userptr_funcs = {
+	.open = virtio_gpu_gem_object_open,
+	.close = virtio_gpu_userptr_object_close,
+	.free = virtio_gpu_userptr_free,
+};
+
+bool virtio_gpu_is_userptr(struct virtio_gpu_object *bo)
+{
+	return bo->base.base.funcs == &virtio_gpu_userptr_funcs;
+}
+
+static int
+virtio_gpu_userptr_add_notifier(struct virtio_gpu_object_userptr *userptr,
+				unsigned long start, unsigned long length)
+{
+	int ret = mmu_interval_notifier_insert_locked(
+		&userptr->notifier, current->mm, start, length,
+		&virtio_gpu_userptr_mn_ops);
+
+	if (ret)
+		pr_err("mmu_interval_notifier_insert_locked failed ret: %d\n",
+		       ret);
+	return ret;
+}
+
+uint32_t virtio_gpu_userptr_get_handle(struct virtio_gpu_object *qobj)
+{
+	struct virtio_gpu_object_userptr *userptr = to_virtio_gpu_userptr(qobj);
+
+	return userptr->bo_handle;
+}
+
+uint64_t virtio_gpu_userptr_get_offset(struct virtio_gpu_object *qobj,
+				       uint64_t addr)
+{
+	struct virtio_gpu_object_userptr *userptr = to_virtio_gpu_userptr(qobj);
+	uint64_t userptr_align_down = ALIGN_DOWN(addr, PAGE_SIZE);
+	uint64_t offset = userptr_align_down - userptr->userptr_inital_start;
+	return offset;
+}
+
+void virtio_gpu_userptr_interval_tree_init(struct virtio_gpu_fpriv *vfpriv)
+{
+	vfpriv->userptrs_tree = RB_ROOT_CACHED;
+	mutex_init(&vfpriv->userptrs_tree_lock);
+}
+
+static struct virtio_gpu_object_userptr *
+virtio_gpu_userptr_from_addr_range(struct virtio_gpu_fpriv *vfpriv,
+				   u_int64_t start, u_int64_t last)
+{
+	struct interval_tree_node *node;
+	struct virtio_gpu_object_userptr *userptr = NULL;
+	struct virtio_gpu_object_userptr *ret = NULL;
+	uint64_t userptr_size;
+
+	node = interval_tree_iter_first(&vfpriv->userptrs_tree, start, last);
+
+	while (node) {
+		struct interval_tree_node *next;
+
+		userptr = container_of(node, struct virtio_gpu_object_userptr,
+				       it_node);
+
+		if (start >= userptr->userptr_start &&
+		    last <= userptr->userptr_last &&
+		    !atomic_read(&userptr->in_release) && !userptr->op) {
+			ret = userptr;
+			userptr_size = userptr->userptr_last -
+				       userptr->userptr_start + 1UL;
+			return ret;
+		}
+
+		next = interval_tree_iter_next(node, start, last);
+		node = next;
+	}
+
+	return ret;
+}
+
+static void
+virtio_gpu_userptr_add_interval_tree(struct virtio_gpu_fpriv *vfpriv,
+				     struct virtio_gpu_object_userptr *userptr)
+{
+	userptr->it_node.start = userptr->userptr_start;
+	userptr->it_node.last = userptr->userptr_last;
+	interval_tree_insert(&userptr->it_node, &vfpriv->userptrs_tree);
+}
+
+static void virtio_gpu_userptr_unmap(struct virtio_gpu_object_userptr *userptr)
+{
+	pr_debug(
+		"list work remove userptr: [%llx-%llx], resid: %d bo_handle: %d size: %x\n",
+		userptr->userptr_start, userptr->userptr_last,
+		userptr->base.hw_res_handle, userptr->bo_handle,
+		userptr->npages);
+
+	virtio_gpu_userptr_unlink(userptr->file->driver_priv, userptr);
+	mmu_interval_notifier_remove(&userptr->notifier);
+
+	drm_gem_handle_delete(userptr->file, userptr->bo_handle);
+}
+
+static void virtio_gpu_userptr_update_notifier_and_interval_tree(
+	struct virtio_gpu_object_userptr *userptr)
+{
+	unsigned long start = userptr->notifier.interval_tree.start;
+	unsigned long last = userptr->notifier.interval_tree.last;
+
+	if (userptr->userptr_start == start && userptr->userptr_last == last)
+		return;
+
+	if (start != 0 && last != 0) {
+		virtio_gpu_userptr_unlink(userptr->file->driver_priv, userptr);
+		mmu_interval_notifier_remove(&userptr->notifier);
+	}
+
+	pr_debug(
+		"update userptr: [%lx-%lx]-%lx -> [%llx-%llx]-%llx resid: %d\n",
+		start, last, last - start + 1UL, userptr->userptr_start,
+		userptr->userptr_last,
+		userptr->userptr_last - userptr->userptr_start + 1UL,
+		userptr->base.hw_res_handle);
+
+	virtio_gpu_userptr_add_interval_tree(userptr->file->driver_priv,
+					     userptr);
+	mmu_interval_notifier_insert_locked(
+		&userptr->notifier, userptr->mm, userptr->userptr_start,
+		userptr->userptr_last - userptr->userptr_start + 1UL,
+		&virtio_gpu_userptr_mn_ops);
+
+	userptr->op = 0;
+}
+
+static int virtio_gpu_userptr_split(struct virtio_gpu_object_userptr *userptr,
+				    unsigned long valid_start,
+				    unsigned long valid_last,
+				    struct virtio_gpu_object_userptr **new)
+{
+	uint64_t old_start = userptr->userptr_start;
+	uint64_t old_last = userptr->userptr_last;
+
+	if (old_start != valid_start && old_last != valid_last)
+		return -EINVAL;
+	if (valid_start < old_start || valid_last > old_last)
+		return -EINVAL;
+
+	/* split new userptr is not needed currently, but keep the API parameters here
+	 * for future expansion.
+	 */
+	*new = NULL;
+
+	/* update range */
+	userptr->userptr_start = valid_start;
+	userptr->userptr_last = valid_last;
+
+	return 0;
+}
+
+static void
+virtio_gpu_userptr_update_split(struct virtio_gpu_object_userptr *userptr,
+				unsigned long mn_start, unsigned long mn_last)
+{
+	struct virtio_gpu_object_userptr *head;
+	struct virtio_gpu_object_userptr *tail;
+
+	if (userptr->op == USERPTR_OP_UNMAP)
+		return;
+
+	if (mn_start > userptr->userptr_last ||
+	    mn_last < userptr->userptr_start)
+		return;
+
+	head = tail = userptr;
+	if (mn_start > userptr->userptr_start)
+		virtio_gpu_userptr_split(userptr, userptr->userptr_start,
+					 mn_start - 1UL, &tail);
+	else if (mn_last < userptr->userptr_last)
+		virtio_gpu_userptr_split(userptr, mn_last + 1UL,
+					 userptr->userptr_last, &head);
+
+	/* split tail maybe not needed in virtgpu */
+	/* if (mn_last < userptr->userptr_last) */
+	/* add child userptr maybe not needed in virtgpu */
+}
+
+static void
+virtio_gpu_userptr_add_list_work(struct virtio_gpu_object_userptr *userptr,
+				 int op)
+{
+	struct virtio_gpu_fpriv *vfpriv = userptr->file->driver_priv;
+
+	spin_lock(&vfpriv->userptr_invalidate_list_lock);
+
+	if (!list_empty(&userptr->work_list)) {
+		pr_debug(
+			"update exist userptr userptr: [%llx-%llx] work op to %d\n",
+			userptr->userptr_start, userptr->userptr_last, op);
+		if (op != USERPTR_OP_NULL && userptr->op != USERPTR_OP_UNMAP)
+			userptr->op = op;
+	} else {
+		userptr->op = op;
+		list_add_tail(&userptr->work_list,
+			      &vfpriv->userptr_invalidate_list);
+	}
+
+	spin_unlock(&vfpriv->userptr_invalidate_list_lock);
+}
+
+static int
+virtio_gpu_userptr_check_pfns(struct virtio_gpu_object_userptr *userptr,
+			      struct vm_area_struct *vma, uint64_t start,
+			      uint64_t end)
+{
+	uint64_t addr;
+	int ret;
+	unsigned long pfn;
+	spinlock_t *ptl;
+	pte_t *ptep;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		ret = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
+		if (ret) {
+			pr_debug("follow_pfn in userptr failed, addr: %llx\n",
+				 addr);
+			return USERPTR_PFNS_NONE;
+		}
+		pfn = pte_pfn(ptep_get(ptep));
+		pte_unmap_unlock(ptep, ptl);
+		if (page_to_pfn(
+			    userptr->pages[(addr - userptr->userptr_start) >>
+					   PAGE_SHIFT]) != pfn) {
+			pr_debug("userptr pages not match, addr: %llx\n", addr);
+			return USERPTR_PFNS_CHANGED;
+		}
+	}
+
+	return USERPTR_PFNS_NO_CHANGE;
+}
+
+static int
+virtio_gpu_userptr_check_range(struct virtio_gpu_object_userptr *userptr,
+			       uint64_t notifier_start, uint64_t notifier_last)
+{
+	uint64_t start, end, addr;
+	int r = 0;
+
+	start = notifier_start;
+	end = notifier_last + (1UL << PAGE_SHIFT);
+
+	for (addr = start; !r && addr < end;) {
+		struct vm_area_struct *vma;
+		uint64_t next = 0;
+		uint32_t npages;
+
+		vma = vma_lookup(userptr->mm, addr);
+
+		if (vma) {
+			next = min(vma->vm_end, end);
+			npages = (next - addr) >> PAGE_SHIFT;
+			r = virtio_gpu_userptr_check_pfns(userptr, vma, start,
+							  next);
+			if (r)
+				break;
+		} else {
+			pr_debug("vma not found for addr: %llx\n", addr);
+			r = -EFAULT;
+			break;
+		}
+
+		addr = next;
+	}
+
+	return r;
+}
+
+static void
+virtio_gpu_update_or_remove_userptr(struct virtio_gpu_object_userptr *userptr,
+				    unsigned long start, unsigned long last)
+{
+	if ((userptr->userptr_start) >= start &&
+	    (userptr->userptr_last) <= last) {
+		if (atomic_xchg(&userptr->in_release, 1) == 0) {
+			virtio_gpu_userptr_add_list_work(userptr,
+							 USERPTR_OP_UNMAP);
+		}
+	} else {
+		pr_debug(
+			"mmu notifier: [%lx-%lx]-%lx userptr: [%llx-%llx]-%llx not match need update.\n",
+			start, last, last - start + 1UL, userptr->userptr_start,
+			userptr->userptr_last,
+			userptr->userptr_last - userptr->userptr_start + 1UL);
+		virtio_gpu_userptr_update_split(userptr, start, last);
+		virtio_gpu_userptr_add_list_work(userptr, USERPTR_OP_UPDATE);
+	}
+}
+
+static void virtio_gpu_userptr_evict(struct virtio_gpu_object_userptr *userptr)
+{
+	if (!userptr->notifier_start || !userptr->notifier_last) {
+		pr_err("userptr: [%llx-%llx] not have notifier range\n",
+		       userptr->userptr_start, userptr->userptr_last);
+		return;
+	}
+
+	if (userptr->notifier_start < userptr->userptr_start ||
+	    userptr->notifier_last > userptr->userptr_last) {
+		pr_err("invalid evict param, userptr: [%llx-%llx] notifier: [%llx-%llx] out of range\n",
+		       userptr->userptr_start, userptr->userptr_last,
+		       userptr->notifier_start, userptr->notifier_last);
+		return;
+	}
+
+	if (virtio_gpu_userptr_check_range(userptr, userptr->notifier_start,
+					   userptr->notifier_last)) {
+		pr_debug("userptr: [%llx-%llx], resid: %d check range failed\n",
+			 userptr->userptr_start, userptr->userptr_last,
+			 userptr->base.hw_res_handle);
+		/* add to work list or process here directly, add to work list here */
+		virtio_gpu_update_or_remove_userptr(
+			userptr, userptr->notifier_start,
+			userptr->notifier_last + (1UL << PAGE_SHIFT) - 1UL);
+	}
+
+	userptr->notifier_start = 0;
+	userptr->notifier_last = 0;
+}
+
+static void
+virtio_gpu_userptr_handle_list_work(struct virtio_gpu_object_userptr *userptr)
+{
+	switch (userptr->op) {
+	case USERPTR_OP_NULL:
+		break;
+	case USERPTR_OP_UNMAP:
+		virtio_gpu_userptr_unmap(userptr);
+		break;
+	case USERPTR_OP_UPDATE:
+		virtio_gpu_userptr_update_notifier_and_interval_tree(userptr);
+		break;
+	case USERPTR_OP_EVICT:
+		virtio_gpu_userptr_evict(userptr);
+		break;
+	default:
+		break;
+	}
+}
+
+static void virtio_gpu_userptr_invalidate_work(struct work_struct *work)
+{
+	struct virtio_gpu_fpriv *vfpriv;
+	struct virtio_gpu_object_userptr *userptr;
+	struct mm_struct *mm;
+
+	vfpriv = container_of(work, struct virtio_gpu_fpriv,
+			      userptr_invalidate_work);
+
+	spin_lock(&vfpriv->userptr_invalidate_list_lock);
+	while (!list_empty(&vfpriv->userptr_invalidate_list)) {
+		userptr = list_first_entry(&vfpriv->userptr_invalidate_list,
+					   struct virtio_gpu_object_userptr,
+					   work_list);
+		spin_unlock(&vfpriv->userptr_invalidate_list_lock);
+
+		mm = userptr->mm;
+
+		mmap_write_lock(mm);
+
+		/* Remove from userptr_invalidate_list_lock must inside mmap write lock, cause:
+		 * after remove from list, the work_item.op may be changed by other thread
+		 * like MMU notifier invalidate callback, and maybe add the userptr to work
+		 * list again.
+		 * What will cause use after free or double free bug.
+		 * So need use mmap_write_lock to prevent the invalidate callback triggering then
+		 * remove the from work list to snsure one work item only be handled once.
+		 */
+		spin_lock(&vfpriv->userptr_invalidate_list_lock);
+		list_del_init(&userptr->work_list);
+		spin_unlock(&vfpriv->userptr_invalidate_list_lock);
+
+		mutex_lock(&vfpriv->userptrs_tree_lock);
+
+		virtio_gpu_userptr_handle_list_work(userptr);
+
+		mutex_unlock(&vfpriv->userptrs_tree_lock);
+		mmap_write_unlock(mm);
+
+		spin_lock(&vfpriv->userptr_invalidate_list_lock);
+	}
+	spin_unlock(&vfpriv->userptr_invalidate_list_lock);
+}
+
+void virtio_gpu_userptr_list_work_init(struct virtio_gpu_fpriv *vfpriv)
+{
+	INIT_WORK(&vfpriv->userptr_invalidate_work,
+		  virtio_gpu_userptr_invalidate_work);
+	INIT_LIST_HEAD(&vfpriv->userptr_invalidate_list);
+	spin_lock_init(&vfpriv->userptr_invalidate_list_lock);
+}
+
+static void
+virtio_gpu_userptr_schedule_list_work(struct virtio_gpu_fpriv *vfpriv)
+{
+	spin_lock(&vfpriv->userptr_invalidate_list_lock);
+	if (!list_empty(&vfpriv->userptr_invalidate_list))
+		schedule_work(&vfpriv->userptr_invalidate_work);
+	spin_unlock(&vfpriv->userptr_invalidate_list_lock);
+}
+
+static void virtio_gpu_object_userptr_remove_within_range(
+	struct virtio_gpu_fpriv *vfpriv, u_int64_t start, u_int64_t last,
+	bool check_start, const char *from)
+{
+	struct interval_tree_node *node;
+	struct virtio_gpu_object_userptr *userptr = NULL;
+	uint64_t remove_userptr_size = last - start + 1UL;
+	uint64_t userptr_size;
+
+	mutex_lock(&vfpriv->userptrs_tree_lock);
+
+	node = interval_tree_iter_first(&vfpriv->userptrs_tree, start, last);
+
+	while (node) {
+		struct interval_tree_node *next;
+
+		userptr = container_of(node, struct virtio_gpu_object_userptr,
+				       it_node);
+
+		userptr_size =
+			userptr->userptr_last - userptr->userptr_start + 1UL;
+		if (userptr->userptr_start >= start &&
+		    userptr->userptr_last < last) {
+			if ((!check_start) ||
+			    (check_start && userptr->userptr_start == start)) {
+				if (atomic_xchg(&userptr->in_release, 1) == 0 &&
+				    !userptr->op) {
+					userptr->mm = current->mm;
+					virtio_gpu_userptr_add_list_work(
+						userptr, USERPTR_OP_UNMAP);
+				}
+			}
+		}
+
+		next = interval_tree_iter_next(node, start, last);
+		node = next;
+	}
+	mutex_unlock(&vfpriv->userptrs_tree_lock);
+
+	virtio_gpu_userptr_schedule_list_work(userptr->file->driver_priv);
+}
+
+static bool
+virtio_gpu_userptr_invalidate(struct mmu_interval_notifier *mn,
+			      const struct mmu_notifier_range *range,
+			      unsigned long cur_seq)
+{
+	struct virtio_gpu_object_userptr *userptr;
+	struct virtio_gpu_fpriv *vfpriv;
+	unsigned long start;
+	unsigned long last;
+
+	if (range->event == MMU_NOTIFY_RELEASE)
+		return true;
+	if (!mmget_not_zero(mn->mm))
+		return true;
+
+	start = mn->interval_tree.start;
+	last = mn->interval_tree.last;
+	start = (max(start, range->start) >> PAGE_SHIFT) << PAGE_SHIFT;
+	last = (min(last, range->end - 1UL) >> PAGE_SHIFT) << PAGE_SHIFT;
+
+	userptr = container_of(mn, struct virtio_gpu_object_userptr, notifier);
+	userptr->mm = mn->mm;
+	vfpriv = userptr->file->driver_priv;
+
+	mutex_lock(&userptr->lock);
+	mmu_interval_set_seq(mn, cur_seq);
+
+	pr_debug(
+		"userptr: [%llx-%llx]-%llx notifier: [%lx-%lx]-%lx check: [%lx-%lx]-%lx resid: %d event: %d\n",
+		userptr->userptr_start, userptr->userptr_last,
+		userptr->userptr_last - userptr->userptr_start + 1UL,
+		range->start, range->end - 1UL, range->end - range->start,
+		start, last, last - start + (1UL << PAGE_SHIFT),
+		userptr->base.hw_res_handle, range->event);
+
+	if (userptr->op == USERPTR_OP_UNMAP) {
+		pr_debug(
+			"userptr: [%llx-%llx] resid: %d already in unmap op: %d\n",
+			userptr->userptr_start, userptr->userptr_last,
+			userptr->base.hw_res_handle, userptr->op);
+	} else {
+		switch (range->event) {
+		case MMU_NOTIFY_UNMAP:
+			virtio_gpu_update_or_remove_userptr(
+				userptr, start,
+				last + (1UL << PAGE_SHIFT) - 1UL);
+			break;
+		default:
+			userptr->notifier_start = start;
+			userptr->notifier_last = last;
+			virtio_gpu_userptr_add_list_work(userptr,
+							 USERPTR_OP_EVICT);
+			break;
+		}
+	}
+
+	virtio_gpu_userptr_schedule_list_work(userptr->file->driver_priv);
+
+	mutex_unlock(&userptr->lock);
+	mmput(mn->mm);
+	return true;
+}
+
+static void
+virtio_gpu_userptr_lock_and_flush_work(struct virtio_gpu_fpriv *vfpriv,
+				       struct mm_struct *mm)
+{
+retry_flush_work:
+	flush_work(&vfpriv->userptr_invalidate_work);
+
+	if (list_empty(&vfpriv->userptr_invalidate_list))
+		return;
+
+	goto retry_flush_work;
+}
+
+void virtio_gpu_userptr_set_handle(struct virtio_gpu_object *qobj,
+				   uint32_t handle)
+{
+	struct virtio_gpu_object_userptr *userptr = to_virtio_gpu_userptr(qobj);
+
+	userptr->bo_handle = handle;
+	virtio_gpu_object_userptr_remove_within_range(
+		userptr->file->driver_priv, userptr->userptr_start,
+		userptr->userptr_last, false, __func__);
+	virtio_gpu_userptr_add_notifier(userptr, userptr->userptr_start,
+					userptr->npages << PAGE_SHIFT);
+}
+
+static int virtio_gpu_userptr_init(struct drm_device *dev,
+				   struct drm_file *file,
+				   struct drm_gem_object *obj,
+				   struct virtio_gpu_object_params *params,
+				   unsigned long **p_pfns, uint32_t *p_npfns)
+{
+	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+	struct virtio_gpu_object_userptr *userptr = to_virtio_gpu_userptr(bo);
+	unsigned long page_offset;
+	unsigned long aligned_size;
+	struct page **pages;
+	unsigned int pinned = 0;
+	uint64_t aligned_addr;
+	int ret;
+
+	page_offset = (uint64_t)params->blob_userptr & (PAGE_SIZE - 1UL);
+	aligned_addr = params->blob_userptr - page_offset;
+	aligned_size = roundup(page_offset + params->size, PAGE_SIZE);
+
+	pr_debug(
+		"create userptr addr: %llx size: %lx, aligned: [%llx-%llx]-%lx\n",
+		params->blob_userptr, params->size, aligned_addr,
+		aligned_addr + aligned_size - 1UL, aligned_size);
+
+	params->size = aligned_size;
+
+	drm_gem_private_object_init(dev, obj, aligned_size);
+
+	*p_npfns = aligned_size / PAGE_SIZE;
+	*p_pfns = vmalloc(*p_npfns * sizeof(unsigned long));
+	if (!(*p_pfns)) {
+		pr_err("failed to allocate userptr pfns\n");
+		return -ENOMEM;
+	}
+
+	pages = vmalloc(*p_npfns * sizeof(struct page *));
+	if (!pages)
+		return -ENOMEM;
+
+	userptr->userptr_inital_start = aligned_addr;
+	userptr->userptr_start = aligned_addr;
+	userptr->userptr_last = userptr->userptr_start + aligned_size - 1UL;
+
+	do {
+		unsigned int num_pages = *p_npfns - pinned;
+		uint64_t ptr = userptr->userptr_start + pinned * PAGE_SIZE;
+		struct page **pinned_pages = pages + pinned;
+
+		ret = pin_user_pages_fast(
+			ptr, num_pages, FOLL_WRITE | FOLL_FORCE, pinned_pages);
+
+		if (ret < 0) {
+			pr_err("pin memory failed, addr: 0x%llx\n",
+			       userptr->userptr_start);
+			if (pinned && pages)
+				unpin_user_pages(pages, pinned);
+			userptr->userptr_start = 0;
+			vfree(pages);
+			vfree(*p_pfns);
+			return -ENOMEM;
+		}
+
+		pinned += ret;
+
+	} while (pinned < *p_npfns);
+
+	userptr->pages = pages;
+	userptr->npages = *p_npfns;
+	bo->base.base.size = aligned_size;
+
+	for (int i = 0; i < (*p_npfns); i++)
+		(*p_pfns)[i] = page_to_pfn(pages[i]);
+
+	atomic_set(&userptr->in_release, 0);
+	INIT_LIST_HEAD(&userptr->work_list);
+	mutex_init(&userptr->lock);
+	userptr->vgdev = dev->dev_private;
+	userptr->file = file;
+
+	return 0;
+}
+
+int virtio_gpu_userptr_create(struct virtio_gpu_device *vgdev,
+			      struct drm_file *file,
+			      struct virtio_gpu_object_params *params,
+			      struct virtio_gpu_object **bo_ptr)
+{
+	struct mm_struct *mm = current->mm;
+	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
+	struct drm_gem_object *obj;
+	struct virtio_gpu_object_userptr *userptr;
+	int ret;
+	unsigned long *pfns;
+	uint32_t npfns;
+
+	virtio_gpu_userptr_lock_and_flush_work(vfpriv, mm);
+
+	mutex_lock(&vfpriv->userptrs_tree_lock);
+	userptr = virtio_gpu_userptr_from_addr_range(
+		vfpriv, params->blob_userptr,
+		params->blob_userptr + params->size - 1UL);
+	if (userptr) {
+		*bo_ptr = &userptr->base;
+		mutex_unlock(&vfpriv->userptrs_tree_lock);
+		return USERPTR_EXISTS;
+	}
+
+	userptr = kzalloc(sizeof(*userptr), GFP_KERNEL);
+	if (!userptr)
+		return -ENOMEM;
+
+	obj = &userptr->base.base.base;
+	obj->funcs = &virtio_gpu_userptr_funcs;
+
+	ret = virtio_gpu_userptr_init(vgdev->ddev, file, obj, params, &pfns,
+				      &npfns);
+	if (ret)
+		goto failed_free;
+
+	ret = virtio_gpu_resource_id_get(vgdev, &userptr->base.hw_res_handle);
+	if (ret)
+		goto failed_free;
+
+	virtio_gpu_userptr_add_interval_tree(vfpriv, userptr);
+	/* virtio_gpu_userptr_dump(vfpriv); */
+
+	mutex_unlock(&vfpriv->userptrs_tree_lock);
+
+	virtio_gpu_cmd_resource_create_userptr(vgdev, &userptr->base, params,
+					       pfns, npfns);
+
+	*bo_ptr = &userptr->base;
+	return 0;
+
+failed_free:
+	mutex_unlock(&vfpriv->userptrs_tree_lock);
+	kfree(userptr);
+	return ret;
+}
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 29d462b69bad..2699b85829f4 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -1270,6 +1270,35 @@ virtio_gpu_cmd_resource_create_blob(struct virtio_gpu_device *vgdev,
 	bo->created = true;
 }
 
+void
+virtio_gpu_cmd_resource_create_userptr(struct virtio_gpu_device *vgdev,
+				    struct virtio_gpu_object *bo,
+				    struct virtio_gpu_object_params *params,
+				    unsigned long *pfns,
+				    uint32_t npfns)
+{
+	struct virtio_gpu_resource_create_blob *cmd_p;
+	struct virtio_gpu_vbuffer *vbuf;
+
+	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
+	memset(cmd_p, 0, sizeof(*cmd_p));
+
+	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB);
+	cmd_p->hdr.ctx_id = cpu_to_le32(params->ctx_id);
+	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
+	cmd_p->blob_mem = cpu_to_le32(params->blob_mem);
+	cmd_p->blob_flags = cpu_to_le32(params->blob_flags);
+	cmd_p->blob_id = cpu_to_le64(params->blob_id);
+	cmd_p->size = cpu_to_le64(params->size);
+	cmd_p->nr_entries = cpu_to_le32(npfns);
+
+	vbuf->data_buf = pfns;
+	vbuf->data_size = sizeof(*pfns) * npfns;
+
+	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
+	bo->created = true;
+}
+
 void virtio_gpu_cmd_set_scanout_blob(struct virtio_gpu_device *vgdev,
 				     uint32_t scanout_id,
 				     struct virtio_gpu_object *bo,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2024-12-20 10:04 ` [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object Honglei Huang
@ 2024-12-20 15:35   ` Simona Vetter
  2024-12-22  1:59     ` Demi Marie Obenour
  0 siblings, 1 reply; 23+ messages in thread
From: Simona Vetter @ 2024-12-20 15:35 UTC (permalink / raw)
  To: Honglei Huang
  Cc: Huang Rui, virtualization, linux-kernel, Dmitry Osipenko,
	dri-devel, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Akihiko Odaki, Lingshan Zhu,
	Demi Marie Obenour

On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
> From: Honglei Huang <Honglei1.Huang@amd.com>
> 
> A virtio-gpu userptr is based on HMM notifier.
> Used for let host access guest userspace memory and
> notice the change of userspace memory.
> This series patches are in very beginning state,
> User space are pinned currently to ensure the host
> device memory operations are correct.
> The free and unmap operations for userspace can be
> handled by MMU notifier this is a simple and basice
> SVM feature for this series patches.
> The physical PFNS update operations is splited into
> two OPs in here. The evicted memories won't be used
> anymore but remap into host again to achieve same
> effect with hmm_rang_fault.

So in my opinion there are two ways to implement userptr that make sense:

- pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
  notifier

- unpinnned userptr where you entirely rely on userptr and do not hold any
  page references or page pins at all, for full SVM integration. This
  should use hmm_range_fault ideally, since that's the version that
  doesn't ever grab any page reference pins.

All the in-between variants are imo really bad hacks, whether they hold a
page reference or a temporary page pin (which seems to be what you're
doing here). In much older kernels there was some justification for them,
because strange stuff happened over fork(), but with FOLL_LONGTERM this is
now all sorted out. So there's really only fully pinned, or true svm left
as clean design choices imo.

With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
you?

The other part is that I think we really should extract these into
helpers. Definitely for the pinned variant, that should be really simple.

Cheers, Sima

> 
> Signed-off-by: Honglei Huang <Honglei1.Huang@amd.com>
> ---
>  drivers/gpu/drm/virtio/Makefile          |   3 +-
>  drivers/gpu/drm/virtio/virtgpu_drv.h     |  72 +++
>  drivers/gpu/drm/virtio/virtgpu_ioctl.c   |  25 +-
>  drivers/gpu/drm/virtio/virtgpu_kms.c     |   2 +
>  drivers/gpu/drm/virtio/virtgpu_object.c  |   5 +
>  drivers/gpu/drm/virtio/virtgpu_userptr.c | 738 +++++++++++++++++++++++
>  drivers/gpu/drm/virtio/virtgpu_vq.c      |  29 +
>  7 files changed, 871 insertions(+), 3 deletions(-)
>  create mode 100644 drivers/gpu/drm/virtio/virtgpu_userptr.c
> 
> diff --git a/drivers/gpu/drm/virtio/Makefile b/drivers/gpu/drm/virtio/Makefile
> index d2e1788a8227..351c45e1e7d1 100644
> --- a/drivers/gpu/drm/virtio/Makefile
> +++ b/drivers/gpu/drm/virtio/Makefile
> @@ -2,8 +2,9 @@
>  #
>  # Makefile for the drm device driver.  This driver provides support for the
>  # Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
> +# virtgpu_userptr.o
>  
> -virtio-gpu-y := virtgpu_drv.o virtgpu_kms.o virtgpu_gem.o virtgpu_vram.o \
> +virtio-gpu-y := virtgpu_drv.o virtgpu_kms.o virtgpu_gem.o virtgpu_vram.o virtgpu_userptr.o \
>  	virtgpu_display.o virtgpu_vq.o \
>  	virtgpu_fence.o virtgpu_object.o virtgpu_debugfs.o virtgpu_plane.o \
>  	virtgpu_ioctl.o virtgpu_prime.o virtgpu_trace_points.o virtgpu_submit.o
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index e1ee17466f6b..5d15c018201d 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -31,6 +31,8 @@
>  #include <linux/virtio_ids.h>
>  #include <linux/virtio_config.h>
>  #include <linux/virtio_gpu.h>
> +#include <linux/mmu_notifier.h>
> +#include <linux/rbtree_types.h>
>  
>  #include <drm/drm_atomic.h>
>  #include <drm/drm_drv.h>
> @@ -85,6 +87,7 @@ struct virtio_gpu_object_params {
>  	uint32_t blob_mem;
>  	uint32_t blob_flags;
>  	uint64_t blob_id;
> +	uint64_t blob_userptr;
>  };
>  
>  struct virtio_gpu_object {
> @@ -112,12 +115,50 @@ struct virtio_gpu_object_vram {
>  	struct drm_mm_node vram_node;
>  };
>  
> +enum userptr_work_list_ops {
> +	USERPTR_OP_NULL,
> +	USERPTR_OP_UNMAP,
> +	USERPTR_OP_UPDATE,
> +	USERPTR_OP_EVICT,
> +};
> +
> +struct virtio_gpu_object_userptr {
> +	struct virtio_gpu_object base;
> +
> +	struct page **pages;
> +	uint64_t userptr_inital_start;
> +	uint64_t userptr_start;
> +	uint64_t userptr_last;
> +	uint32_t npages;
> +	uint32_t bo_handle;
> +	struct list_head work_list;
> +	struct virtio_gpu_device *vgdev;
> +	struct mmu_interval_notifier notifier;
> +	struct drm_file *file;
> +
> +	/* for list work */
> +	struct mm_struct *mm;
> +	enum userptr_work_list_ops op;
> +	uint64_t notifier_start;
> +	uint64_t notifier_last;
> +
> +	/* userptr interval tree node */
> +	struct interval_tree_node it_node;
> +
> +	/* in release list work queue */
> +	atomic_t in_release;
> +	struct mutex lock;
> +};
> +
>  #define to_virtio_gpu_shmem(virtio_gpu_object) \
>  	container_of((virtio_gpu_object), struct virtio_gpu_object_shmem, base)
>  
>  #define to_virtio_gpu_vram(virtio_gpu_object) \
>  	container_of((virtio_gpu_object), struct virtio_gpu_object_vram, base)
>  
> +#define to_virtio_gpu_userptr(virtio_gpu_object) \
> +	container_of((virtio_gpu_object), struct virtio_gpu_object_userptr, base)
> +
>  struct virtio_gpu_object_array {
>  	struct ww_acquire_ctx ticket;
>  	struct list_head next;
> @@ -279,6 +320,16 @@ struct virtio_gpu_fpriv {
>  	uint64_t base_fence_ctx;
>  	uint64_t ring_idx_mask;
>  	struct mutex context_lock;
> +
> +	/* for userptr mmu notifier invalidate */
> +	struct work_struct userptr_invalidate_work;
> +	struct list_head userptr_invalidate_list;
> +	spinlock_t userptr_invalidate_list_lock;
> +
> +	/* userptr interval tree */
> +	struct rb_root_cached userptrs_tree;
> +	struct mutex userptrs_tree_lock;
> +
>  	char debug_name[DEBUG_NAME_MAX_LEN];
>  	bool explicit_debug_name;
>  	bool fence_passing_enabled;
> @@ -422,6 +473,14 @@ virtio_gpu_cmd_resource_create_blob(struct virtio_gpu_device *vgdev,
>  				    struct virtio_gpu_object_params *params,
>  				    struct virtio_gpu_mem_entry *ents,
>  				    uint32_t nents);
> +
> +void
> +virtio_gpu_cmd_resource_create_userptr(struct virtio_gpu_device *vgdev,
> +				    struct virtio_gpu_object *bo,
> +				    struct virtio_gpu_object_params *params,
> +				    unsigned long *pfns,
> +				    uint32_t npfns);
> +
>  void
>  virtio_gpu_cmd_set_scanout_blob(struct virtio_gpu_device *vgdev,
>  				uint32_t scanout_id,
> @@ -497,4 +556,17 @@ void virtio_gpu_vram_unmap_dma_buf(struct device *dev,
>  int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
>  				struct drm_file *file);
>  
> +/* virtgpu_userptr.c */
> +void virtio_gpu_userptr_set_handle(struct virtio_gpu_object *qobj,
> +				   uint32_t handle);
> +uint32_t virtio_gpu_userptr_get_handle(struct virtio_gpu_object *qobj);
> +void virtio_gpu_userptr_list_work_init(struct virtio_gpu_fpriv *vfpriv);
> +void virtio_gpu_userptr_interval_tree_init(struct virtio_gpu_fpriv *vfpriv);
> +uint64_t virtio_gpu_userptr_get_offset(struct virtio_gpu_object *qobj,
> +				       uint64_t addr);
> +bool virtio_gpu_is_userptr(struct virtio_gpu_object *bo);
> +int virtio_gpu_userptr_create(struct virtio_gpu_device *vgdev,
> +			      struct drm_file *file,
> +			      struct virtio_gpu_object_params *params,
> +			      struct virtio_gpu_object **bo_ptr);
>  #endif
> diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> index d40c7402720d..519278443c66 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
> @@ -36,7 +36,8 @@
>  
>  #define VIRTGPU_BLOB_FLAG_USE_MASK (VIRTGPU_BLOB_FLAG_USE_MAPPABLE | \
>  				    VIRTGPU_BLOB_FLAG_USE_SHAREABLE | \
> -				    VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE)
> +				    VIRTGPU_BLOB_FLAG_USE_CROSS_DEVICE | \
> +				    VIRTGPU_BLOB_FLAG_USE_USERPTR)
>  
>  /* Must be called with &virtio_gpu_fpriv.struct_mutex held. */
>  static void virtio_gpu_create_context_locked(struct virtio_gpu_device *vgdev,
> @@ -489,6 +490,7 @@ static int verify_blob(struct virtio_gpu_device *vgdev,
>  	params->size = rc_blob->size;
>  	params->blob = true;
>  	params->blob_flags = rc_blob->blob_flags;
> +	params->blob_userptr = rc_blob->blob_userptr;
>  	return 0;
>  }
>  
> @@ -528,8 +530,20 @@ static int virtio_gpu_resource_create_blob_ioctl(struct drm_device *dev,
>  				      rc_blob->cmd_size, 0);
>  	}
>  
> -	if (guest_blob)
> +	if (guest_blob && !params.blob_userptr)
>  		ret = virtio_gpu_object_create(vgdev, &params, &bo, NULL);
> +	else if (guest_blob && params.blob_userptr) {
> +		ret = virtio_gpu_userptr_create(vgdev, file, &params, &bo);
> +		if (unlikely(ret < 0))
> +			return ret;
> +		if (ret > 0) {
> +			/* userptr already exist */
> +			rc_blob->res_handle = bo->hw_res_handle;
> +			rc_blob->bo_handle = virtio_gpu_userptr_get_handle(bo);
> +			rc_blob->offset = virtio_gpu_userptr_get_offset(bo, rc_blob->blob_userptr);
> +			return ret;
> +		}
> +	}
>  	else if (!guest_blob && host3d_blob)
>  		ret = virtio_gpu_vram_create(vgdev, &params, &bo);
>  	else
> @@ -560,6 +574,9 @@ static int virtio_gpu_resource_create_blob_ioctl(struct drm_device *dev,
>  
>  	rc_blob->res_handle = bo->hw_res_handle;
>  	rc_blob->bo_handle = handle;
> +	/* for mmu notifier auto release */
> +	if (guest_blob && params.blob_userptr)
> +		virtio_gpu_userptr_set_handle(bo, handle);
>  
>  	/*
>  	 * The handle owns the reference now.  But we must drop our
> @@ -691,6 +708,10 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev,
>  			goto out_unlock;
>  		}
>  	}
> +	if (vfpriv->context_init & VIRTIO_GPU_CAPSET_HSAKMT) {
> +		virtio_gpu_userptr_list_work_init(vfpriv);
> +		virtio_gpu_userptr_interval_tree_init(vfpriv);
> +	}
>  
>  	virtio_gpu_create_context_locked(vgdev, vfpriv);
>  	virtio_gpu_notify(vgdev);
> diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
> index 9f4617a75edd..3af40ed8936a 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_kms.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
> @@ -346,6 +346,8 @@ void virtio_gpu_driver_postclose(struct drm_device *dev, struct drm_file *file)
>  		return;
>  
>  	if (vfpriv->context_created) {
> +		if (vfpriv->context_init & VIRTIO_GPU_CAPSET_HSAKMT)
> +			flush_work(&vfpriv->userptr_invalidate_work);
>  		virtio_gpu_cmd_context_destroy(vgdev, vfpriv->ctx_id);
>  		virtio_gpu_notify(vgdev);
>  	}
> diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
> index c7e74cf13022..31659b0a028d 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_object.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_object.c
> @@ -80,6 +80,11 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
>  		drm_gem_free_mmap_offset(&vram->base.base.base);
>  		drm_gem_object_release(&vram->base.base.base);
>  		kfree(vram);
> +	} else if (virtio_gpu_is_userptr(bo)) {
> +		struct virtio_gpu_object_userptr *userptr = to_virtio_gpu_userptr(bo);
> +
> +		drm_gem_object_release(&userptr->base.base.base);
> +		kfree(userptr);
>  	}
>  }
>  
> diff --git a/drivers/gpu/drm/virtio/virtgpu_userptr.c b/drivers/gpu/drm/virtio/virtgpu_userptr.c
> new file mode 100644
> index 000000000000..646088f7f72b
> --- /dev/null
> +++ b/drivers/gpu/drm/virtio/virtgpu_userptr.c
> @@ -0,0 +1,738 @@
> +// SPDX-License-Identifier: GPL-2.0 OR MIT
> +#include "virtgpu_drv.h"
> +
> +#include <linux/dma-mapping.h>
> +
> +#include <linux/mm.h>
> +#include <linux/pid.h>
> +
> +#include "drm/drm_gem.h"
> +
> +#include <linux/mmu_notifier.h>
> +
> +#define USERPTR_PFNS_NO_CHANGE 0
> +#define USERPTR_PFNS_CHANGED 1
> +#define USERPTR_PFNS_NONE 2
> +
> +#define USERPTR_EXISTS 1
> +
> +static bool
> +virtio_gpu_userptr_invalidate(struct mmu_interval_notifier *mn,
> +			      const struct mmu_notifier_range *range,
> +			      unsigned long cur_seq);
> +
> +static const struct mmu_interval_notifier_ops virtio_gpu_userptr_mn_ops = {
> +	.invalidate = virtio_gpu_userptr_invalidate,
> +};
> +
> +static void virtio_gpu_userptr_unlink(struct virtio_gpu_fpriv *vfpriv,
> +				      struct virtio_gpu_object_userptr *userptr)
> +{
> +	if (userptr->it_node.start != 0 && userptr->it_node.last != 0)
> +		interval_tree_remove(&userptr->it_node, &vfpriv->userptrs_tree);
> +}
> +
> +static void virtio_gpu_userptr_free(struct drm_gem_object *obj)
> +{
> +	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
> +	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
> +	struct virtio_gpu_object_userptr *userptr = to_virtio_gpu_userptr(bo);
> +
> +	if (bo->created) {
> +		unpin_user_pages_dirty_lock(userptr->pages, userptr->npages,
> +					    false);
> +		vfree(userptr->pages);
> +		userptr->pages = NULL;
> +
> +		virtio_gpu_cmd_unref_resource(vgdev, bo);
> +		virtio_gpu_notify(vgdev);
> +
> +		return;
> +	}
> +}
> +
> +static void virtio_gpu_userptr_object_close(struct drm_gem_object *obj,
> +					    struct drm_file *file)
> +{
> +	virtio_gpu_gem_object_close(obj, file);
> +}
> +
> +static const struct drm_gem_object_funcs virtio_gpu_userptr_funcs = {
> +	.open = virtio_gpu_gem_object_open,
> +	.close = virtio_gpu_userptr_object_close,
> +	.free = virtio_gpu_userptr_free,
> +};
> +
> +bool virtio_gpu_is_userptr(struct virtio_gpu_object *bo)
> +{
> +	return bo->base.base.funcs == &virtio_gpu_userptr_funcs;
> +}
> +
> +static int
> +virtio_gpu_userptr_add_notifier(struct virtio_gpu_object_userptr *userptr,
> +				unsigned long start, unsigned long length)
> +{
> +	int ret = mmu_interval_notifier_insert_locked(
> +		&userptr->notifier, current->mm, start, length,
> +		&virtio_gpu_userptr_mn_ops);
> +
> +	if (ret)
> +		pr_err("mmu_interval_notifier_insert_locked failed ret: %d\n",
> +		       ret);
> +	return ret;
> +}
> +
> +uint32_t virtio_gpu_userptr_get_handle(struct virtio_gpu_object *qobj)
> +{
> +	struct virtio_gpu_object_userptr *userptr = to_virtio_gpu_userptr(qobj);
> +
> +	return userptr->bo_handle;
> +}
> +
> +uint64_t virtio_gpu_userptr_get_offset(struct virtio_gpu_object *qobj,
> +				       uint64_t addr)
> +{
> +	struct virtio_gpu_object_userptr *userptr = to_virtio_gpu_userptr(qobj);
> +	uint64_t userptr_align_down = ALIGN_DOWN(addr, PAGE_SIZE);
> +	uint64_t offset = userptr_align_down - userptr->userptr_inital_start;
> +	return offset;
> +}
> +
> +void virtio_gpu_userptr_interval_tree_init(struct virtio_gpu_fpriv *vfpriv)
> +{
> +	vfpriv->userptrs_tree = RB_ROOT_CACHED;
> +	mutex_init(&vfpriv->userptrs_tree_lock);
> +}
> +
> +static struct virtio_gpu_object_userptr *
> +virtio_gpu_userptr_from_addr_range(struct virtio_gpu_fpriv *vfpriv,
> +				   u_int64_t start, u_int64_t last)
> +{
> +	struct interval_tree_node *node;
> +	struct virtio_gpu_object_userptr *userptr = NULL;
> +	struct virtio_gpu_object_userptr *ret = NULL;
> +	uint64_t userptr_size;
> +
> +	node = interval_tree_iter_first(&vfpriv->userptrs_tree, start, last);
> +
> +	while (node) {
> +		struct interval_tree_node *next;
> +
> +		userptr = container_of(node, struct virtio_gpu_object_userptr,
> +				       it_node);
> +
> +		if (start >= userptr->userptr_start &&
> +		    last <= userptr->userptr_last &&
> +		    !atomic_read(&userptr->in_release) && !userptr->op) {
> +			ret = userptr;
> +			userptr_size = userptr->userptr_last -
> +				       userptr->userptr_start + 1UL;
> +			return ret;
> +		}
> +
> +		next = interval_tree_iter_next(node, start, last);
> +		node = next;
> +	}
> +
> +	return ret;
> +}
> +
> +static void
> +virtio_gpu_userptr_add_interval_tree(struct virtio_gpu_fpriv *vfpriv,
> +				     struct virtio_gpu_object_userptr *userptr)
> +{
> +	userptr->it_node.start = userptr->userptr_start;
> +	userptr->it_node.last = userptr->userptr_last;
> +	interval_tree_insert(&userptr->it_node, &vfpriv->userptrs_tree);
> +}
> +
> +static void virtio_gpu_userptr_unmap(struct virtio_gpu_object_userptr *userptr)
> +{
> +	pr_debug(
> +		"list work remove userptr: [%llx-%llx], resid: %d bo_handle: %d size: %x\n",
> +		userptr->userptr_start, userptr->userptr_last,
> +		userptr->base.hw_res_handle, userptr->bo_handle,
> +		userptr->npages);
> +
> +	virtio_gpu_userptr_unlink(userptr->file->driver_priv, userptr);
> +	mmu_interval_notifier_remove(&userptr->notifier);
> +
> +	drm_gem_handle_delete(userptr->file, userptr->bo_handle);
> +}
> +
> +static void virtio_gpu_userptr_update_notifier_and_interval_tree(
> +	struct virtio_gpu_object_userptr *userptr)
> +{
> +	unsigned long start = userptr->notifier.interval_tree.start;
> +	unsigned long last = userptr->notifier.interval_tree.last;
> +
> +	if (userptr->userptr_start == start && userptr->userptr_last == last)
> +		return;
> +
> +	if (start != 0 && last != 0) {
> +		virtio_gpu_userptr_unlink(userptr->file->driver_priv, userptr);
> +		mmu_interval_notifier_remove(&userptr->notifier);
> +	}
> +
> +	pr_debug(
> +		"update userptr: [%lx-%lx]-%lx -> [%llx-%llx]-%llx resid: %d\n",
> +		start, last, last - start + 1UL, userptr->userptr_start,
> +		userptr->userptr_last,
> +		userptr->userptr_last - userptr->userptr_start + 1UL,
> +		userptr->base.hw_res_handle);
> +
> +	virtio_gpu_userptr_add_interval_tree(userptr->file->driver_priv,
> +					     userptr);
> +	mmu_interval_notifier_insert_locked(
> +		&userptr->notifier, userptr->mm, userptr->userptr_start,
> +		userptr->userptr_last - userptr->userptr_start + 1UL,
> +		&virtio_gpu_userptr_mn_ops);
> +
> +	userptr->op = 0;
> +}
> +
> +static int virtio_gpu_userptr_split(struct virtio_gpu_object_userptr *userptr,
> +				    unsigned long valid_start,
> +				    unsigned long valid_last,
> +				    struct virtio_gpu_object_userptr **new)
> +{
> +	uint64_t old_start = userptr->userptr_start;
> +	uint64_t old_last = userptr->userptr_last;
> +
> +	if (old_start != valid_start && old_last != valid_last)
> +		return -EINVAL;
> +	if (valid_start < old_start || valid_last > old_last)
> +		return -EINVAL;
> +
> +	/* split new userptr is not needed currently, but keep the API parameters here
> +	 * for future expansion.
> +	 */
> +	*new = NULL;
> +
> +	/* update range */
> +	userptr->userptr_start = valid_start;
> +	userptr->userptr_last = valid_last;
> +
> +	return 0;
> +}
> +
> +static void
> +virtio_gpu_userptr_update_split(struct virtio_gpu_object_userptr *userptr,
> +				unsigned long mn_start, unsigned long mn_last)
> +{
> +	struct virtio_gpu_object_userptr *head;
> +	struct virtio_gpu_object_userptr *tail;
> +
> +	if (userptr->op == USERPTR_OP_UNMAP)
> +		return;
> +
> +	if (mn_start > userptr->userptr_last ||
> +	    mn_last < userptr->userptr_start)
> +		return;
> +
> +	head = tail = userptr;
> +	if (mn_start > userptr->userptr_start)
> +		virtio_gpu_userptr_split(userptr, userptr->userptr_start,
> +					 mn_start - 1UL, &tail);
> +	else if (mn_last < userptr->userptr_last)
> +		virtio_gpu_userptr_split(userptr, mn_last + 1UL,
> +					 userptr->userptr_last, &head);
> +
> +	/* split tail maybe not needed in virtgpu */
> +	/* if (mn_last < userptr->userptr_last) */
> +	/* add child userptr maybe not needed in virtgpu */
> +}
> +
> +static void
> +virtio_gpu_userptr_add_list_work(struct virtio_gpu_object_userptr *userptr,
> +				 int op)
> +{
> +	struct virtio_gpu_fpriv *vfpriv = userptr->file->driver_priv;
> +
> +	spin_lock(&vfpriv->userptr_invalidate_list_lock);
> +
> +	if (!list_empty(&userptr->work_list)) {
> +		pr_debug(
> +			"update exist userptr userptr: [%llx-%llx] work op to %d\n",
> +			userptr->userptr_start, userptr->userptr_last, op);
> +		if (op != USERPTR_OP_NULL && userptr->op != USERPTR_OP_UNMAP)
> +			userptr->op = op;
> +	} else {
> +		userptr->op = op;
> +		list_add_tail(&userptr->work_list,
> +			      &vfpriv->userptr_invalidate_list);
> +	}
> +
> +	spin_unlock(&vfpriv->userptr_invalidate_list_lock);
> +}
> +
> +static int
> +virtio_gpu_userptr_check_pfns(struct virtio_gpu_object_userptr *userptr,
> +			      struct vm_area_struct *vma, uint64_t start,
> +			      uint64_t end)
> +{
> +	uint64_t addr;
> +	int ret;
> +	unsigned long pfn;
> +	spinlock_t *ptl;
> +	pte_t *ptep;
> +
> +	for (addr = start; addr < end; addr += PAGE_SIZE) {
> +		ret = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
> +		if (ret) {
> +			pr_debug("follow_pfn in userptr failed, addr: %llx\n",
> +				 addr);
> +			return USERPTR_PFNS_NONE;
> +		}
> +		pfn = pte_pfn(ptep_get(ptep));
> +		pte_unmap_unlock(ptep, ptl);
> +		if (page_to_pfn(
> +			    userptr->pages[(addr - userptr->userptr_start) >>
> +					   PAGE_SHIFT]) != pfn) {
> +			pr_debug("userptr pages not match, addr: %llx\n", addr);
> +			return USERPTR_PFNS_CHANGED;
> +		}
> +	}
> +
> +	return USERPTR_PFNS_NO_CHANGE;
> +}
> +
> +static int
> +virtio_gpu_userptr_check_range(struct virtio_gpu_object_userptr *userptr,
> +			       uint64_t notifier_start, uint64_t notifier_last)
> +{
> +	uint64_t start, end, addr;
> +	int r = 0;
> +
> +	start = notifier_start;
> +	end = notifier_last + (1UL << PAGE_SHIFT);
> +
> +	for (addr = start; !r && addr < end;) {
> +		struct vm_area_struct *vma;
> +		uint64_t next = 0;
> +		uint32_t npages;
> +
> +		vma = vma_lookup(userptr->mm, addr);
> +
> +		if (vma) {
> +			next = min(vma->vm_end, end);
> +			npages = (next - addr) >> PAGE_SHIFT;
> +			r = virtio_gpu_userptr_check_pfns(userptr, vma, start,
> +							  next);
> +			if (r)
> +				break;
> +		} else {
> +			pr_debug("vma not found for addr: %llx\n", addr);
> +			r = -EFAULT;
> +			break;
> +		}
> +
> +		addr = next;
> +	}
> +
> +	return r;
> +}
> +
> +static void
> +virtio_gpu_update_or_remove_userptr(struct virtio_gpu_object_userptr *userptr,
> +				    unsigned long start, unsigned long last)
> +{
> +	if ((userptr->userptr_start) >= start &&
> +	    (userptr->userptr_last) <= last) {
> +		if (atomic_xchg(&userptr->in_release, 1) == 0) {
> +			virtio_gpu_userptr_add_list_work(userptr,
> +							 USERPTR_OP_UNMAP);
> +		}
> +	} else {
> +		pr_debug(
> +			"mmu notifier: [%lx-%lx]-%lx userptr: [%llx-%llx]-%llx not match need update.\n",
> +			start, last, last - start + 1UL, userptr->userptr_start,
> +			userptr->userptr_last,
> +			userptr->userptr_last - userptr->userptr_start + 1UL);
> +		virtio_gpu_userptr_update_split(userptr, start, last);
> +		virtio_gpu_userptr_add_list_work(userptr, USERPTR_OP_UPDATE);
> +	}
> +}
> +
> +static void virtio_gpu_userptr_evict(struct virtio_gpu_object_userptr *userptr)
> +{
> +	if (!userptr->notifier_start || !userptr->notifier_last) {
> +		pr_err("userptr: [%llx-%llx] not have notifier range\n",
> +		       userptr->userptr_start, userptr->userptr_last);
> +		return;
> +	}
> +
> +	if (userptr->notifier_start < userptr->userptr_start ||
> +	    userptr->notifier_last > userptr->userptr_last) {
> +		pr_err("invalid evict param, userptr: [%llx-%llx] notifier: [%llx-%llx] out of range\n",
> +		       userptr->userptr_start, userptr->userptr_last,
> +		       userptr->notifier_start, userptr->notifier_last);
> +		return;
> +	}
> +
> +	if (virtio_gpu_userptr_check_range(userptr, userptr->notifier_start,
> +					   userptr->notifier_last)) {
> +		pr_debug("userptr: [%llx-%llx], resid: %d check range failed\n",
> +			 userptr->userptr_start, userptr->userptr_last,
> +			 userptr->base.hw_res_handle);
> +		/* add to work list or process here directly, add to work list here */
> +		virtio_gpu_update_or_remove_userptr(
> +			userptr, userptr->notifier_start,
> +			userptr->notifier_last + (1UL << PAGE_SHIFT) - 1UL);
> +	}
> +
> +	userptr->notifier_start = 0;
> +	userptr->notifier_last = 0;
> +}
> +
> +static void
> +virtio_gpu_userptr_handle_list_work(struct virtio_gpu_object_userptr *userptr)
> +{
> +	switch (userptr->op) {
> +	case USERPTR_OP_NULL:
> +		break;
> +	case USERPTR_OP_UNMAP:
> +		virtio_gpu_userptr_unmap(userptr);
> +		break;
> +	case USERPTR_OP_UPDATE:
> +		virtio_gpu_userptr_update_notifier_and_interval_tree(userptr);
> +		break;
> +	case USERPTR_OP_EVICT:
> +		virtio_gpu_userptr_evict(userptr);
> +		break;
> +	default:
> +		break;
> +	}
> +}
> +
> +static void virtio_gpu_userptr_invalidate_work(struct work_struct *work)
> +{
> +	struct virtio_gpu_fpriv *vfpriv;
> +	struct virtio_gpu_object_userptr *userptr;
> +	struct mm_struct *mm;
> +
> +	vfpriv = container_of(work, struct virtio_gpu_fpriv,
> +			      userptr_invalidate_work);
> +
> +	spin_lock(&vfpriv->userptr_invalidate_list_lock);
> +	while (!list_empty(&vfpriv->userptr_invalidate_list)) {
> +		userptr = list_first_entry(&vfpriv->userptr_invalidate_list,
> +					   struct virtio_gpu_object_userptr,
> +					   work_list);
> +		spin_unlock(&vfpriv->userptr_invalidate_list_lock);
> +
> +		mm = userptr->mm;
> +
> +		mmap_write_lock(mm);
> +
> +		/* Remove from userptr_invalidate_list_lock must inside mmap write lock, cause:
> +		 * after remove from list, the work_item.op may be changed by other thread
> +		 * like MMU notifier invalidate callback, and maybe add the userptr to work
> +		 * list again.
> +		 * What will cause use after free or double free bug.
> +		 * So need use mmap_write_lock to prevent the invalidate callback triggering then
> +		 * remove the from work list to snsure one work item only be handled once.
> +		 */
> +		spin_lock(&vfpriv->userptr_invalidate_list_lock);
> +		list_del_init(&userptr->work_list);
> +		spin_unlock(&vfpriv->userptr_invalidate_list_lock);
> +
> +		mutex_lock(&vfpriv->userptrs_tree_lock);
> +
> +		virtio_gpu_userptr_handle_list_work(userptr);
> +
> +		mutex_unlock(&vfpriv->userptrs_tree_lock);
> +		mmap_write_unlock(mm);
> +
> +		spin_lock(&vfpriv->userptr_invalidate_list_lock);
> +	}
> +	spin_unlock(&vfpriv->userptr_invalidate_list_lock);
> +}
> +
> +void virtio_gpu_userptr_list_work_init(struct virtio_gpu_fpriv *vfpriv)
> +{
> +	INIT_WORK(&vfpriv->userptr_invalidate_work,
> +		  virtio_gpu_userptr_invalidate_work);
> +	INIT_LIST_HEAD(&vfpriv->userptr_invalidate_list);
> +	spin_lock_init(&vfpriv->userptr_invalidate_list_lock);
> +}
> +
> +static void
> +virtio_gpu_userptr_schedule_list_work(struct virtio_gpu_fpriv *vfpriv)
> +{
> +	spin_lock(&vfpriv->userptr_invalidate_list_lock);
> +	if (!list_empty(&vfpriv->userptr_invalidate_list))
> +		schedule_work(&vfpriv->userptr_invalidate_work);
> +	spin_unlock(&vfpriv->userptr_invalidate_list_lock);
> +}
> +
> +static void virtio_gpu_object_userptr_remove_within_range(
> +	struct virtio_gpu_fpriv *vfpriv, u_int64_t start, u_int64_t last,
> +	bool check_start, const char *from)
> +{
> +	struct interval_tree_node *node;
> +	struct virtio_gpu_object_userptr *userptr = NULL;
> +	uint64_t remove_userptr_size = last - start + 1UL;
> +	uint64_t userptr_size;
> +
> +	mutex_lock(&vfpriv->userptrs_tree_lock);
> +
> +	node = interval_tree_iter_first(&vfpriv->userptrs_tree, start, last);
> +
> +	while (node) {
> +		struct interval_tree_node *next;
> +
> +		userptr = container_of(node, struct virtio_gpu_object_userptr,
> +				       it_node);
> +
> +		userptr_size =
> +			userptr->userptr_last - userptr->userptr_start + 1UL;
> +		if (userptr->userptr_start >= start &&
> +		    userptr->userptr_last < last) {
> +			if ((!check_start) ||
> +			    (check_start && userptr->userptr_start == start)) {
> +				if (atomic_xchg(&userptr->in_release, 1) == 0 &&
> +				    !userptr->op) {
> +					userptr->mm = current->mm;
> +					virtio_gpu_userptr_add_list_work(
> +						userptr, USERPTR_OP_UNMAP);
> +				}
> +			}
> +		}
> +
> +		next = interval_tree_iter_next(node, start, last);
> +		node = next;
> +	}
> +	mutex_unlock(&vfpriv->userptrs_tree_lock);
> +
> +	virtio_gpu_userptr_schedule_list_work(userptr->file->driver_priv);
> +}
> +
> +static bool
> +virtio_gpu_userptr_invalidate(struct mmu_interval_notifier *mn,
> +			      const struct mmu_notifier_range *range,
> +			      unsigned long cur_seq)
> +{
> +	struct virtio_gpu_object_userptr *userptr;
> +	struct virtio_gpu_fpriv *vfpriv;
> +	unsigned long start;
> +	unsigned long last;
> +
> +	if (range->event == MMU_NOTIFY_RELEASE)
> +		return true;
> +	if (!mmget_not_zero(mn->mm))
> +		return true;
> +
> +	start = mn->interval_tree.start;
> +	last = mn->interval_tree.last;
> +	start = (max(start, range->start) >> PAGE_SHIFT) << PAGE_SHIFT;
> +	last = (min(last, range->end - 1UL) >> PAGE_SHIFT) << PAGE_SHIFT;
> +
> +	userptr = container_of(mn, struct virtio_gpu_object_userptr, notifier);
> +	userptr->mm = mn->mm;
> +	vfpriv = userptr->file->driver_priv;
> +
> +	mutex_lock(&userptr->lock);
> +	mmu_interval_set_seq(mn, cur_seq);
> +
> +	pr_debug(
> +		"userptr: [%llx-%llx]-%llx notifier: [%lx-%lx]-%lx check: [%lx-%lx]-%lx resid: %d event: %d\n",
> +		userptr->userptr_start, userptr->userptr_last,
> +		userptr->userptr_last - userptr->userptr_start + 1UL,
> +		range->start, range->end - 1UL, range->end - range->start,
> +		start, last, last - start + (1UL << PAGE_SHIFT),
> +		userptr->base.hw_res_handle, range->event);
> +
> +	if (userptr->op == USERPTR_OP_UNMAP) {
> +		pr_debug(
> +			"userptr: [%llx-%llx] resid: %d already in unmap op: %d\n",
> +			userptr->userptr_start, userptr->userptr_last,
> +			userptr->base.hw_res_handle, userptr->op);
> +	} else {
> +		switch (range->event) {
> +		case MMU_NOTIFY_UNMAP:
> +			virtio_gpu_update_or_remove_userptr(
> +				userptr, start,
> +				last + (1UL << PAGE_SHIFT) - 1UL);
> +			break;
> +		default:
> +			userptr->notifier_start = start;
> +			userptr->notifier_last = last;
> +			virtio_gpu_userptr_add_list_work(userptr,
> +							 USERPTR_OP_EVICT);
> +			break;
> +		}
> +	}
> +
> +	virtio_gpu_userptr_schedule_list_work(userptr->file->driver_priv);
> +
> +	mutex_unlock(&userptr->lock);
> +	mmput(mn->mm);
> +	return true;
> +}
> +
> +static void
> +virtio_gpu_userptr_lock_and_flush_work(struct virtio_gpu_fpriv *vfpriv,
> +				       struct mm_struct *mm)
> +{
> +retry_flush_work:
> +	flush_work(&vfpriv->userptr_invalidate_work);
> +
> +	if (list_empty(&vfpriv->userptr_invalidate_list))
> +		return;
> +
> +	goto retry_flush_work;
> +}
> +
> +void virtio_gpu_userptr_set_handle(struct virtio_gpu_object *qobj,
> +				   uint32_t handle)
> +{
> +	struct virtio_gpu_object_userptr *userptr = to_virtio_gpu_userptr(qobj);
> +
> +	userptr->bo_handle = handle;
> +	virtio_gpu_object_userptr_remove_within_range(
> +		userptr->file->driver_priv, userptr->userptr_start,
> +		userptr->userptr_last, false, __func__);
> +	virtio_gpu_userptr_add_notifier(userptr, userptr->userptr_start,
> +					userptr->npages << PAGE_SHIFT);
> +}
> +
> +static int virtio_gpu_userptr_init(struct drm_device *dev,
> +				   struct drm_file *file,
> +				   struct drm_gem_object *obj,
> +				   struct virtio_gpu_object_params *params,
> +				   unsigned long **p_pfns, uint32_t *p_npfns)
> +{
> +	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
> +	struct virtio_gpu_object_userptr *userptr = to_virtio_gpu_userptr(bo);
> +	unsigned long page_offset;
> +	unsigned long aligned_size;
> +	struct page **pages;
> +	unsigned int pinned = 0;
> +	uint64_t aligned_addr;
> +	int ret;
> +
> +	page_offset = (uint64_t)params->blob_userptr & (PAGE_SIZE - 1UL);
> +	aligned_addr = params->blob_userptr - page_offset;
> +	aligned_size = roundup(page_offset + params->size, PAGE_SIZE);
> +
> +	pr_debug(
> +		"create userptr addr: %llx size: %lx, aligned: [%llx-%llx]-%lx\n",
> +		params->blob_userptr, params->size, aligned_addr,
> +		aligned_addr + aligned_size - 1UL, aligned_size);
> +
> +	params->size = aligned_size;
> +
> +	drm_gem_private_object_init(dev, obj, aligned_size);
> +
> +	*p_npfns = aligned_size / PAGE_SIZE;
> +	*p_pfns = vmalloc(*p_npfns * sizeof(unsigned long));
> +	if (!(*p_pfns)) {
> +		pr_err("failed to allocate userptr pfns\n");
> +		return -ENOMEM;
> +	}
> +
> +	pages = vmalloc(*p_npfns * sizeof(struct page *));
> +	if (!pages)
> +		return -ENOMEM;
> +
> +	userptr->userptr_inital_start = aligned_addr;
> +	userptr->userptr_start = aligned_addr;
> +	userptr->userptr_last = userptr->userptr_start + aligned_size - 1UL;
> +
> +	do {
> +		unsigned int num_pages = *p_npfns - pinned;
> +		uint64_t ptr = userptr->userptr_start + pinned * PAGE_SIZE;
> +		struct page **pinned_pages = pages + pinned;
> +
> +		ret = pin_user_pages_fast(
> +			ptr, num_pages, FOLL_WRITE | FOLL_FORCE, pinned_pages);
> +
> +		if (ret < 0) {
> +			pr_err("pin memory failed, addr: 0x%llx\n",
> +			       userptr->userptr_start);
> +			if (pinned && pages)
> +				unpin_user_pages(pages, pinned);
> +			userptr->userptr_start = 0;
> +			vfree(pages);
> +			vfree(*p_pfns);
> +			return -ENOMEM;
> +		}
> +
> +		pinned += ret;
> +
> +	} while (pinned < *p_npfns);
> +
> +	userptr->pages = pages;
> +	userptr->npages = *p_npfns;
> +	bo->base.base.size = aligned_size;
> +
> +	for (int i = 0; i < (*p_npfns); i++)
> +		(*p_pfns)[i] = page_to_pfn(pages[i]);
> +
> +	atomic_set(&userptr->in_release, 0);
> +	INIT_LIST_HEAD(&userptr->work_list);
> +	mutex_init(&userptr->lock);
> +	userptr->vgdev = dev->dev_private;
> +	userptr->file = file;
> +
> +	return 0;
> +}
> +
> +int virtio_gpu_userptr_create(struct virtio_gpu_device *vgdev,
> +			      struct drm_file *file,
> +			      struct virtio_gpu_object_params *params,
> +			      struct virtio_gpu_object **bo_ptr)
> +{
> +	struct mm_struct *mm = current->mm;
> +	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
> +	struct drm_gem_object *obj;
> +	struct virtio_gpu_object_userptr *userptr;
> +	int ret;
> +	unsigned long *pfns;
> +	uint32_t npfns;
> +
> +	virtio_gpu_userptr_lock_and_flush_work(vfpriv, mm);
> +
> +	mutex_lock(&vfpriv->userptrs_tree_lock);
> +	userptr = virtio_gpu_userptr_from_addr_range(
> +		vfpriv, params->blob_userptr,
> +		params->blob_userptr + params->size - 1UL);
> +	if (userptr) {
> +		*bo_ptr = &userptr->base;
> +		mutex_unlock(&vfpriv->userptrs_tree_lock);
> +		return USERPTR_EXISTS;
> +	}
> +
> +	userptr = kzalloc(sizeof(*userptr), GFP_KERNEL);
> +	if (!userptr)
> +		return -ENOMEM;
> +
> +	obj = &userptr->base.base.base;
> +	obj->funcs = &virtio_gpu_userptr_funcs;
> +
> +	ret = virtio_gpu_userptr_init(vgdev->ddev, file, obj, params, &pfns,
> +				      &npfns);
> +	if (ret)
> +		goto failed_free;
> +
> +	ret = virtio_gpu_resource_id_get(vgdev, &userptr->base.hw_res_handle);
> +	if (ret)
> +		goto failed_free;
> +
> +	virtio_gpu_userptr_add_interval_tree(vfpriv, userptr);
> +	/* virtio_gpu_userptr_dump(vfpriv); */
> +
> +	mutex_unlock(&vfpriv->userptrs_tree_lock);
> +
> +	virtio_gpu_cmd_resource_create_userptr(vgdev, &userptr->base, params,
> +					       pfns, npfns);
> +
> +	*bo_ptr = &userptr->base;
> +	return 0;
> +
> +failed_free:
> +	mutex_unlock(&vfpriv->userptrs_tree_lock);
> +	kfree(userptr);
> +	return ret;
> +}
> diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
> index 29d462b69bad..2699b85829f4 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_vq.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
> @@ -1270,6 +1270,35 @@ virtio_gpu_cmd_resource_create_blob(struct virtio_gpu_device *vgdev,
>  	bo->created = true;
>  }
>  
> +void
> +virtio_gpu_cmd_resource_create_userptr(struct virtio_gpu_device *vgdev,
> +				    struct virtio_gpu_object *bo,
> +				    struct virtio_gpu_object_params *params,
> +				    unsigned long *pfns,
> +				    uint32_t npfns)
> +{
> +	struct virtio_gpu_resource_create_blob *cmd_p;
> +	struct virtio_gpu_vbuffer *vbuf;
> +
> +	cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
> +	memset(cmd_p, 0, sizeof(*cmd_p));
> +
> +	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB);
> +	cmd_p->hdr.ctx_id = cpu_to_le32(params->ctx_id);
> +	cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
> +	cmd_p->blob_mem = cpu_to_le32(params->blob_mem);
> +	cmd_p->blob_flags = cpu_to_le32(params->blob_flags);
> +	cmd_p->blob_id = cpu_to_le64(params->blob_id);
> +	cmd_p->size = cpu_to_le64(params->size);
> +	cmd_p->nr_entries = cpu_to_le32(npfns);
> +
> +	vbuf->data_buf = pfns;
> +	vbuf->data_size = sizeof(*pfns) * npfns;
> +
> +	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
> +	bo->created = true;
> +}
> +
>  void virtio_gpu_cmd_set_scanout_blob(struct virtio_gpu_device *vgdev,
>  				     uint32_t scanout_id,
>  				     struct virtio_gpu_object *bo,
> -- 
> 2.34.1
> 

-- 
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2024-12-20 15:35   ` Simona Vetter
@ 2024-12-22  1:59     ` Demi Marie Obenour
  2024-12-27  2:24       ` Huang, Honglei1
       [not found]       ` <40485636-7ea3-4e34-b4bb-1ccaaadd4e47@amd.com>
  0 siblings, 2 replies; 23+ messages in thread
From: Demi Marie Obenour @ 2024-12-22  1:59 UTC (permalink / raw)
  To: Honglei Huang, Huang Rui, virtualization, linux-kernel,
	Dmitry Osipenko, dri-devel, David Airlie, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Lingshan Zhu

On 12/20/24 10:35 AM, Simona Vetter wrote:
> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>> From: Honglei Huang <Honglei1.Huang@amd.com>
>>
>> A virtio-gpu userptr is based on HMM notifier.
>> Used for let host access guest userspace memory and
>> notice the change of userspace memory.
>> This series patches are in very beginning state,
>> User space are pinned currently to ensure the host
>> device memory operations are correct.
>> The free and unmap operations for userspace can be
>> handled by MMU notifier this is a simple and basice
>> SVM feature for this series patches.
>> The physical PFNS update operations is splited into
>> two OPs in here. The evicted memories won't be used
>> anymore but remap into host again to achieve same
>> effect with hmm_rang_fault.
> 
> So in my opinion there are two ways to implement userptr that make sense:
> 
> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
>   notifier
> 
> - unpinnned userptr where you entirely rely on userptr and do not hold any
>   page references or page pins at all, for full SVM integration. This
>   should use hmm_range_fault ideally, since that's the version that
>   doesn't ever grab any page reference pins.
> 
> All the in-between variants are imo really bad hacks, whether they hold a
> page reference or a temporary page pin (which seems to be what you're
> doing here). In much older kernels there was some justification for them,
> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
> now all sorted out. So there's really only fully pinned, or true svm left
> as clean design choices imo.
> 
> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
> you?

+1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
in complexity that pinning everything avoids.  Furthermore, this avoids the
host having to take action in response to guest memory reclaim requests.
This avoids additional complexity (and thus attack surface) on the host side.
Furthermore, since this is for ROCm and not for graphics, I am less concerned
about supporting systems that require swappable GPU VRAM.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2024-12-22  1:59     ` Demi Marie Obenour
@ 2024-12-27  2:24       ` Huang, Honglei1
  2025-01-08 17:05         ` Simona Vetter
       [not found]       ` <40485636-7ea3-4e34-b4bb-1ccaaadd4e47@amd.com>
  1 sibling, 1 reply; 23+ messages in thread
From: Huang, Honglei1 @ 2024-12-27  2:24 UTC (permalink / raw)
  To: Demi Marie Obenour, Huang Rui, virtualization, linux-kernel,
	Dmitry Osipenko, dri-devel, David Airlie, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Lingshan Zhu


On 2024/12/22 9:59, Demi Marie Obenour wrote:
> On 12/20/24 10:35 AM, Simona Vetter wrote:
>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>>> From: Honglei Huang <Honglei1.Huang@amd.com>
>>>
>>> A virtio-gpu userptr is based on HMM notifier.
>>> Used for let host access guest userspace memory and
>>> notice the change of userspace memory.
>>> This series patches are in very beginning state,
>>> User space are pinned currently to ensure the host
>>> device memory operations are correct.
>>> The free and unmap operations for userspace can be
>>> handled by MMU notifier this is a simple and basice
>>> SVM feature for this series patches.
>>> The physical PFNS update operations is splited into
>>> two OPs in here. The evicted memories won't be used
>>> anymore but remap into host again to achieve same
>>> effect with hmm_rang_fault.
>>
>> So in my opinion there are two ways to implement userptr that make sense:
>>
>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
>>    notifier
>>
>> - unpinnned userptr where you entirely rely on userptr and do not hold any
>>    page references or page pins at all, for full SVM integration. This
>>    should use hmm_range_fault ideally, since that's the version that
>>    doesn't ever grab any page reference pins.
>>
>> All the in-between variants are imo really bad hacks, whether they hold a
>> page reference or a temporary page pin (which seems to be what you're
>> doing here). In much older kernels there was some justification for them,
>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
>> now all sorted out. So there's really only fully pinned, or true svm left
>> as clean design choices imo.
>>
>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
>> you?
> 
> +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
> in complexity that pinning everything avoids.  Furthermore, this avoids the
> host having to take action in response to guest memory reclaim requests.
> This avoids additional complexity (and thus attack surface) on the host side.
> Furthermore, since this is for ROCm and not for graphics, I am less concerned
> about supporting systems that require swappable GPU VRAM.

Hi Sima and Demi,

I totally agree the flag FOLL_LONGTERM is needed, I will add it in next 
version.

And for the first pin variants implementation, the MMU notifier is also 
needed I think.Cause the userptr feature in UMD generally used like 
this: the registering of userptr always is explicitly invoked by user 
code like "registerMemoryToGPU(userptrAddr, ...)", but for the userptr 
release/free, there is no explicit API for it, at least in hsakmt/KFD 
stack. User just need call system call "free(userptrAddr)", then kernel 
driver will release the userptr by MMU notifier callback.Virtio-GPU has 
no other way to know if user has been free the userptr except for MMU 
notifior.And in UMD theres is no way to get the free() operation is 
invoked by user.The only way is use MMU notifier in virtio-GPU driver 
and free the corresponding data in host by some virtio CMDs as far as I 
can see.

And for the second way that is use hmm_range_fault, there is a 
predictable issues as far as I can see, at least in hsakmt/KFD stack. 
That is the memory may migrate when GPU/device is working. In bare 
metal, when memory is migrating KFD driver will pause the compute work 
of the device in mmap_wirte_lock then use hmm_range_fault to remap the 
migrated/evicted memories to GPU then restore the compute work of device 
to ensure the correction of the data. But in virtio-GPU driver the 
migration happen in guest kernel, the evict mmu notifier callback 
happens in guest, a virtio CMD can be used for notify host but as lack 
of mmap_write_lock protection in host kernel, host will hold invalid 
data for a short period of time, this may lead to some issues. And it is 
hard to fix as far as I can see.

I will extract some APIs into helper according to your request, and I 
will refactor the whole userptr implementation, use some callbacks in 
page getting path, let the pin method and hmm_range_fault can be choiced
in this series patches.

Regards,
Honglei


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2024-12-27  2:24       ` Huang, Honglei1
@ 2025-01-08 17:05         ` Simona Vetter
  2025-01-25  0:42           ` Demi Marie Obenour
  2025-01-29 20:54           ` Demi Marie Obenour
  0 siblings, 2 replies; 23+ messages in thread
From: Simona Vetter @ 2025-01-08 17:05 UTC (permalink / raw)
  To: Huang, Honglei1
  Cc: Demi Marie Obenour, Huang Rui, virtualization, linux-kernel,
	Dmitry Osipenko, dri-devel, David Airlie, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Lingshan Zhu

On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
> 
> On 2024/12/22 9:59, Demi Marie Obenour wrote:
> > On 12/20/24 10:35 AM, Simona Vetter wrote:
> > > On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
> > > > From: Honglei Huang <Honglei1.Huang@amd.com>
> > > > 
> > > > A virtio-gpu userptr is based on HMM notifier.
> > > > Used for let host access guest userspace memory and
> > > > notice the change of userspace memory.
> > > > This series patches are in very beginning state,
> > > > User space are pinned currently to ensure the host
> > > > device memory operations are correct.
> > > > The free and unmap operations for userspace can be
> > > > handled by MMU notifier this is a simple and basice
> > > > SVM feature for this series patches.
> > > > The physical PFNS update operations is splited into
> > > > two OPs in here. The evicted memories won't be used
> > > > anymore but remap into host again to achieve same
> > > > effect with hmm_rang_fault.
> > > 
> > > So in my opinion there are two ways to implement userptr that make sense:
> > > 
> > > - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
> > >    notifier
> > > 
> > > - unpinnned userptr where you entirely rely on userptr and do not hold any
> > >    page references or page pins at all, for full SVM integration. This
> > >    should use hmm_range_fault ideally, since that's the version that
> > >    doesn't ever grab any page reference pins.
> > > 
> > > All the in-between variants are imo really bad hacks, whether they hold a
> > > page reference or a temporary page pin (which seems to be what you're
> > > doing here). In much older kernels there was some justification for them,
> > > because strange stuff happened over fork(), but with FOLL_LONGTERM this is
> > > now all sorted out. So there's really only fully pinned, or true svm left
> > > as clean design choices imo.
> > > 
> > > With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
> > > you?
> > 
> > +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
> > in complexity that pinning everything avoids.  Furthermore, this avoids the
> > host having to take action in response to guest memory reclaim requests.
> > This avoids additional complexity (and thus attack surface) on the host side.
> > Furthermore, since this is for ROCm and not for graphics, I am less concerned
> > about supporting systems that require swappable GPU VRAM.
> 
> Hi Sima and Demi,
> 
> I totally agree the flag FOLL_LONGTERM is needed, I will add it in next
> version.
> 
> And for the first pin variants implementation, the MMU notifier is also
> needed I think.Cause the userptr feature in UMD generally used like this:
> the registering of userptr always is explicitly invoked by user code like
> "registerMemoryToGPU(userptrAddr, ...)", but for the userptr release/free,
> there is no explicit API for it, at least in hsakmt/KFD stack. User just
> need call system call "free(userptrAddr)", then kernel driver will release
> the userptr by MMU notifier callback.Virtio-GPU has no other way to know if
> user has been free the userptr except for MMU notifior.And in UMD theres is
> no way to get the free() operation is invoked by user.The only way is use
> MMU notifier in virtio-GPU driver and free the corresponding data in host by
> some virtio CMDs as far as I can see.
> 
> And for the second way that is use hmm_range_fault, there is a predictable
> issues as far as I can see, at least in hsakmt/KFD stack. That is the memory
> may migrate when GPU/device is working. In bare metal, when memory is
> migrating KFD driver will pause the compute work of the device in
> mmap_wirte_lock then use hmm_range_fault to remap the migrated/evicted
> memories to GPU then restore the compute work of device to ensure the
> correction of the data. But in virtio-GPU driver the migration happen in
> guest kernel, the evict mmu notifier callback happens in guest, a virtio CMD
> can be used for notify host but as lack of mmap_write_lock protection in
> host kernel, host will hold invalid data for a short period of time, this
> may lead to some issues. And it is hard to fix as far as I can see.
> 
> I will extract some APIs into helper according to your request, and I will
> refactor the whole userptr implementation, use some callbacks in page
> getting path, let the pin method and hmm_range_fault can be choiced
> in this series patches.

Ok, so if this is for svm, then you need full blast hmm, or the semantics
are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
not work.

The other option is that hsakmt/kfd api is completely busted, and that's
kinda not a kernel problem.
-Sima
-- 
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
       [not found]       ` <40485636-7ea3-4e34-b4bb-1ccaaadd4e47@amd.com>
@ 2025-01-20  9:54         ` Huang, Honglei1
  2025-01-29 19:46           ` Demi Marie Obenour
  2025-02-04 10:45           ` Simona Vetter
  0 siblings, 2 replies; 23+ messages in thread
From: Huang, Honglei1 @ 2025-01-20  9:54 UTC (permalink / raw)
  To: Demi Marie Obenour, Huang Rui, virtualization, linux-kernel,
	Dmitry Osipenko, dri-devel, David Airlie, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Lingshan Zhu

On 2024/12/27 10:02, Huang, Honglei1 wrote:
> On 2024/12/22 9:59, Demi Marie Obenour wrote:
>> On 12/20/24 10:35 AM, Simona Vetter wrote:
>>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>>>> From: Honglei Huang<Honglei1.Huang@amd.com>
>>>>
>>>> A virtio-gpu userptr is based on HMM notifier.
>>>> Used for let host access guest userspace memory and
>>>> notice the change of userspace memory.
>>>> This series patches are in very beginning state,
>>>> User space are pinned currently to ensure the host
>>>> device memory operations are correct.
>>>> The free and unmap operations for userspace can be
>>>> handled by MMU notifier this is a simple and basice
>>>> SVM feature for this series patches.
>>>> The physical PFNS update operations is splited into
>>>> two OPs in here. The evicted memories won't be used
>>>> anymore but remap into host again to achieve same
>>>> effect with hmm_rang_fault.
>>> So in my opinion there are two ways to implement userptr that make sense:
>>>
>>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
>>>    notifier
>>>
>>> - unpinnned userptr where you entirely rely on userptr and do not hold any
>>>    page references or page pins at all, for full SVM integration. This
>>>    should use hmm_range_fault ideally, since that's the version that
>>>    doesn't ever grab any page reference pins.
>>>
>>> All the in-between variants are imo really bad hacks, whether they hold a
>>> page reference or a temporary page pin (which seems to be what you're
>>> doing here). In much older kernels there was some justification for them,
>>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
>>> now all sorted out. So there's really only fully pinned, or true svm left
>>> as clean design choices imo.
>>>
>>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
>>> you?
>> +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
>> in complexity that pinning everything avoids.  Furthermore, this avoids the
>> host having to take action in response to guest memory reclaim requests.
>> This avoids additional complexity (and thus attack surface) on the host side.
>> Furthermore, since this is for ROCm and not for graphics, I am less concerned
>> about supporting systems that require swappable GPU VRAM.
> Hi Sima and Demi, I totally agree the flag FOLL_LONGTERM is needed, I 
> will add it in next version. And for the first pin variants 
> implementation, the MMU notifier is also needed I think.
> Cause the userptr feature in UMD generally used like this: the 
> registering of userptr
> always is explicitly invoked by user code like 
> "registerMemoryToGPU(userptrAddr, ...)",
> but for the userptr release/free, there is no explicit API for it, at 
> least in hsakmt/KFD stack.
> User just need call system call "free(userptrAddr)", thenkernel driver 
> will release the userptr
> by MMU notifier callback.Virtio-GPU has no other way to know if user has 
> been free the userptr
> except for MMU notifior.And in UMD theres is no way to get the free() 
> operation is invoked by user.
> the only way is use MMU notifierin virtio-GPU driver and free the 
> corresponding data in host
> by some virtio CMDs as far as I can see.
> And for the second way that is use hmm_range_fault, there is a 
> predictable issues as far as I can see, at least in hsakmt/KFD stack. 
> That is the memory may migrate when GPU/device is working. In bare 
> metal, when memory is migrating KFD driver will pause the compute work 
> of the device in mmap_wirte_lock then use hmm_range_fault to remap the 
> migrated/evicted memories to GPU then restore the compute work of device 
> to ensure the correction of the data. But in virtio-GPU driver the 
> migration happen in guest kernel, the evict mmu notifier callback 
> happens in guest, a virtio CMD can be used for notify host but as lack 
> of mmap_write_lock protection in host kernel, host will hold invalid 
> data for a short period of time, this may lead to some issues. And it is 
> hard to fix as far as I can see.
> Finally I will extract some APIs into helper according to your request, 
> and I will refactor the whole userptr
> implementation, use some callbacks in page getting path, let the pin 
> method and hmm_range_fault can be choiced
> in this series patches.
> 
> Regards,
> Honglei

Hi Sima,

I modified the code, remove all the MMU nitifior and use 
pin_user_pages_fast only. Under this implementation userptr fully
managed by UMD. We did a performance test, it decreased by 30% in
OpenCL stack in Geekbench6 benmark.
We use AMD V2000 for test:
use MMU notifior + pin_user_pages:
near 13000 score: https://browser.geekbench.com/v6/compute/3257793

use pin_user_pages only:
near 10000 socre: https://browser.geekbench.com/v6/compute/3496228

The code is under clean up, I will send out later.

And I found a another thing, it seems like in intel i915 userptr
implementation, the pin_user_pages is also used in MMU notifior.
Code path is: 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/i915/gem/i915_gem_userptr.c?h=v6.13#:~:text=ret%20%3D%20pin_user_pages_fast(obj%2D%3Euserptr.ptr%20%2B%20pinned%20*%20PAGE_SIZE%2C

Patch set: 
https://lore.kernel.org/all/159353552439.22701.14005121342739071590@emeril.freedesktop.org/T/
https://patchwork.kernel.org/project/intel-gfx/patch/20210323155059.628690-17-maarten.lankhorst@linux.intel.com/#24064663

And I didn't find the hmm_range_fault code path, maybe I missed it?

Regards,
Honglei


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2025-01-08 17:05         ` Simona Vetter
@ 2025-01-25  0:42           ` Demi Marie Obenour
  2025-01-29 19:40             ` Demi Marie Obenour
  2025-01-29 20:54           ` Demi Marie Obenour
  1 sibling, 1 reply; 23+ messages in thread
From: Demi Marie Obenour @ 2025-01-25  0:42 UTC (permalink / raw)
  To: Huang, Honglei1, Huang Rui, virtualization, linux-kernel,
	Dmitry Osipenko, dri-devel, David Airlie, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Lingshan Zhu,
	Simona Vetter

On 1/8/25 12:05 PM, Simona Vetter wrote:
> On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
>>
>> On 2024/12/22 9:59, Demi Marie Obenour wrote:
>>> On 12/20/24 10:35 AM, Simona Vetter wrote:
>>>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>>>>> From: Honglei Huang <Honglei1.Huang@amd.com>
>>>>>
>>>>> A virtio-gpu userptr is based on HMM notifier.
>>>>> Used for let host access guest userspace memory and
>>>>> notice the change of userspace memory.
>>>>> This series patches are in very beginning state,
>>>>> User space are pinned currently to ensure the host
>>>>> device memory operations are correct.
>>>>> The free and unmap operations for userspace can be
>>>>> handled by MMU notifier this is a simple and basice
>>>>> SVM feature for this series patches.
>>>>> The physical PFNS update operations is splited into
>>>>> two OPs in here. The evicted memories won't be used
>>>>> anymore but remap into host again to achieve same
>>>>> effect with hmm_rang_fault.
>>>>
>>>> So in my opinion there are two ways to implement userptr that make sense:
>>>>
>>>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
>>>>    notifier
>>>>
>>>> - unpinnned userptr where you entirely rely on userptr and do not hold any
>>>>    page references or page pins at all, for full SVM integration. This
>>>>    should use hmm_range_fault ideally, since that's the version that
>>>>    doesn't ever grab any page reference pins.
>>>>
>>>> All the in-between variants are imo really bad hacks, whether they hold a
>>>> page reference or a temporary page pin (which seems to be what you're
>>>> doing here). In much older kernels there was some justification for them,
>>>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
>>>> now all sorted out. So there's really only fully pinned, or true svm left
>>>> as clean design choices imo.
>>>>
>>>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
>>>> you?
>>>
>>> +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
>>> in complexity that pinning everything avoids.  Furthermore, this avoids the
>>> host having to take action in response to guest memory reclaim requests.
>>> This avoids additional complexity (and thus attack surface) on the host side.
>>> Furthermore, since this is for ROCm and not for graphics, I am less concerned
>>> about supporting systems that require swappable GPU VRAM.
>>
>> Hi Sima and Demi,
>>
>> I totally agree the flag FOLL_LONGTERM is needed, I will add it in next
>> version.
>>
>> And for the first pin variants implementation, the MMU notifier is also
>> needed I think.Cause the userptr feature in UMD generally used like this:
>> the registering of userptr always is explicitly invoked by user code like
>> "registerMemoryToGPU(userptrAddr, ...)", but for the userptr release/free,
>> there is no explicit API for it, at least in hsakmt/KFD stack. User just
>> need call system call "free(userptrAddr)", then kernel driver will release
>> the userptr by MMU notifier callback.Virtio-GPU has no other way to know if
>> user has been free the userptr except for MMU notifior.And in UMD theres is
>> no way to get the free() operation is invoked by user.The only way is use
>> MMU notifier in virtio-GPU driver and free the corresponding data in host by
>> some virtio CMDs as far as I can see.
>>
>> And for the second way that is use hmm_range_fault, there is a predictable
>> issues as far as I can see, at least in hsakmt/KFD stack. That is the memory
>> may migrate when GPU/device is working. In bare metal, when memory is
>> migrating KFD driver will pause the compute work of the device in
>> mmap_wirte_lock then use hmm_range_fault to remap the migrated/evicted
>> memories to GPU then restore the compute work of device to ensure the
>> correction of the data. But in virtio-GPU driver the migration happen in
>> guest kernel, the evict mmu notifier callback happens in guest, a virtio CMD
>> can be used for notify host but as lack of mmap_write_lock protection in
>> host kernel, host will hold invalid data for a short period of time, this
>> may lead to some issues. And it is hard to fix as far as I can see.
>>
>> I will extract some APIs into helper according to your request, and I will
>> refactor the whole userptr implementation, use some callbacks in page
>> getting path, let the pin method and hmm_range_fault can be choiced
>> in this series patches.
> 
> Ok, so if this is for svm, then you need full blast hmm, or the semantics
> are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
> not work.

Is this still broken in the virtualized case?  Page migration between host
and device memory is completely transparent to the guest kernel, so pinning
guest memory doesn't interfere with the host KMD at all.  In fact, the host
KMD is not even aware of it.

Allowing memory registered with AMDKFD to be pageable *by the guest* seems
like a bad idea to me.  Paging would require a guest <=> host round-trip
for _each_ call to mmu_interval_notifier_ops::invalidate().  That’s going
to be _very_ slow if it happens with any regularity.  Worse, the userspace
VMM will need to be notified if the GPU writes to the pages while the guest
expects them to be stable.  Can this be done with userfaultfd, and if so,
is it even a good idea?

The reason I am not sure that using userfaultfd to notify the guest of
changes is a good idea is that it seems intuitively rather risky.  At a
minimum, it allows the guest to stall host accesses for an arbitrarily
long period of time, which I suspect will make exploiting race conditions
easier.  Furthermore, this seems very prone to deadlocks.  Suppose that
that the guest causes a virtual device to access write-protected memory.
The VMM’s virtual device implementation will cause a userfaultfd
write-protect fault, which will then be passed to the guest to handle.
Suppose that resolving the fault requires allocating memory, which in
turn causes memory reclaim that waits for I/O on the same block device.
If the virtual device is single-threaded, you just deadlocked.  Even
if it is not single-threaded, operations like live migration might never
complete.  It might be possible for userspace to check the cause of a
write-protect fault and break the deadlock, but that is even more
complexity.

With FOLL_LONGTERM, this can’t happen.  The guest will never try to
make the pages clean, so it never needs to write-protect them.  This
means that the host does not need to worry about its device model
stalling forever and that there is no risk of deadlock.  The only thing
I know will break is using writable file-backed memory with SVM, but
that seems like a very, _very_ niche thing to do as there is no
consistency guarantee.  Read-only access would work fine.

> The other option is that hsakmt/kfd api is completely busted, and that's
> kinda not a kernel problem.

My understanding is that it _is_ busted, in that it is tied to address
spaces, not contexts.  If my understanding is correct, the host-side
device model must create a separate process for each guest process that
wants to use KFD.  Otherwise, different guest processes that use the same
GPU virtual address will conflict with each other.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2025-01-25  0:42           ` Demi Marie Obenour
@ 2025-01-29 19:40             ` Demi Marie Obenour
  0 siblings, 0 replies; 23+ messages in thread
From: Demi Marie Obenour @ 2025-01-29 19:40 UTC (permalink / raw)
  To: Huang, Honglei1, Huang Rui, virtualization, linux-kernel,
	Dmitry Osipenko, dri-devel, David Airlie, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Lingshan Zhu,
	Simona Vetter, Xen developer discussion,
	Kernel KVM virtualization development,
	Marek Marczykowski-Górecki

On 1/24/25 7:42 PM, Demi Marie Obenour wrote:
> On 1/8/25 12:05 PM, Simona Vetter wrote:
>> On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
>>>
>>> On 2024/12/22 9:59, Demi Marie Obenour wrote:
>>>> On 12/20/24 10:35 AM, Simona Vetter wrote:
>>>>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>>>>>> From: Honglei Huang <Honglei1.Huang@amd.com>
>>>>>>
>>>>>> A virtio-gpu userptr is based on HMM notifier.
>>>>>> Used for let host access guest userspace memory and
>>>>>> notice the change of userspace memory.
>>>>>> This series patches are in very beginning state,
>>>>>> User space are pinned currently to ensure the host
>>>>>> device memory operations are correct.
>>>>>> The free and unmap operations for userspace can be
>>>>>> handled by MMU notifier this is a simple and basice
>>>>>> SVM feature for this series patches.
>>>>>> The physical PFNS update operations is splited into
>>>>>> two OPs in here. The evicted memories won't be used
>>>>>> anymore but remap into host again to achieve same
>>>>>> effect with hmm_rang_fault.
>>>>>
>>>>> So in my opinion there are two ways to implement userptr that make sense:
>>>>>
>>>>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
>>>>>    notifier
>>>>>
>>>>> - unpinnned userptr where you entirely rely on userptr and do not hold any
>>>>>    page references or page pins at all, for full SVM integration. This
>>>>>    should use hmm_range_fault ideally, since that's the version that
>>>>>    doesn't ever grab any page reference pins.
>>>>>
>>>>> All the in-between variants are imo really bad hacks, whether they hold a
>>>>> page reference or a temporary page pin (which seems to be what you're
>>>>> doing here). In much older kernels there was some justification for them,
>>>>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
>>>>> now all sorted out. So there's really only fully pinned, or true svm left
>>>>> as clean design choices imo.
>>>>>
>>>>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
>>>>> you?
>>>>
>>>> +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
>>>> in complexity that pinning everything avoids.  Furthermore, this avoids the
>>>> host having to take action in response to guest memory reclaim requests.
>>>> This avoids additional complexity (and thus attack surface) on the host side.
>>>> Furthermore, since this is for ROCm and not for graphics, I am less concerned
>>>> about supporting systems that require swappable GPU VRAM.
>>>
>>> Hi Sima and Demi,
>>>
>>> I totally agree the flag FOLL_LONGTERM is needed, I will add it in next
>>> version.
>>>
>>> And for the first pin variants implementation, the MMU notifier is also
>>> needed I think.Cause the userptr feature in UMD generally used like this:
>>> the registering of userptr always is explicitly invoked by user code like
>>> "registerMemoryToGPU(userptrAddr, ...)", but for the userptr release/free,
>>> there is no explicit API for it, at least in hsakmt/KFD stack. User just
>>> need call system call "free(userptrAddr)", then kernel driver will release
>>> the userptr by MMU notifier callback.Virtio-GPU has no other way to know if
>>> user has been free the userptr except for MMU notifior.And in UMD theres is
>>> no way to get the free() operation is invoked by user.The only way is use
>>> MMU notifier in virtio-GPU driver and free the corresponding data in host by
>>> some virtio CMDs as far as I can see.
>>>
>>> And for the second way that is use hmm_range_fault, there is a predictable
>>> issues as far as I can see, at least in hsakmt/KFD stack. That is the memory
>>> may migrate when GPU/device is working. In bare metal, when memory is
>>> migrating KFD driver will pause the compute work of the device in
>>> mmap_wirte_lock then use hmm_range_fault to remap the migrated/evicted
>>> memories to GPU then restore the compute work of device to ensure the
>>> correction of the data. But in virtio-GPU driver the migration happen in
>>> guest kernel, the evict mmu notifier callback happens in guest, a virtio CMD
>>> can be used for notify host but as lack of mmap_write_lock protection in
>>> host kernel, host will hold invalid data for a short period of time, this
>>> may lead to some issues. And it is hard to fix as far as I can see.
>>>
>>> I will extract some APIs into helper according to your request, and I will
>>> refactor the whole userptr implementation, use some callbacks in page
>>> getting path, let the pin method and hmm_range_fault can be choiced
>>> in this series patches.
>>
>> Ok, so if this is for svm, then you need full blast hmm, or the semantics
>> are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
>> not work.
> 
> Is this still broken in the virtualized case?  Page migration between host
> and device memory is completely transparent to the guest kernel, so pinning
> guest memory doesn't interfere with the host KMD at all.  In fact, the host
> KMD is not even aware of it.

To elaborate further:

Memory in a KVM guest is *not* host physical memory, or even host kernel
memory.  It is host *userspace* memory, and in particular, *it is fully pageable*.
There *might* be a few exceptions involving structures that are accessed by
the (physical) CPU, but none of these are relevant here.

This means that memory management works very differently than in the
non-virtualized case.  The host KMD can migrate pages between host memory
and device memory without either the guest kernel or host userspace being
aware that such migration has happened.  This means that pin(FOLL_LONGTERM)
in the guest doesn't pin memory on the host.  Instead, it pins memory in the
*guest*.  The host will continue to migrate pages between host and device
as needed.  I’m no expert on SVM, but I suspect this is the desired behavior.

Xen is significantly trickier, because most guest memory is provided by
the Xen toolstack via the hypervisor and is _not_ pageable.  Therefore,
it cannot be mapped into the GPU without using Xen grant tables.  Since
Xen grants do not support non-cooperative revocation, this requires a
FOLL_LONGTERM pin *anyway*.  Furthermore, granted pages _cannot_ be
migrated from host to device, so unless the GPU is an iGPU all of its
accesses will need to cross the PCI bus.  This will obviously be slow.

The guest can avoid this problem by migrating userptr memory to virtio-GPU
blob objects _before_ pinning it.  Virtio-GPU blob objects are backed by
host userspace memory, so the host can migrate them between device and host
memory just like in the KVM case.  Under KVM, such migration would be be
slightly wasteful but otherwise harmless in the common case.  In the case
where PCI passthrough is also in use, however, it might be necessary even
for KVM guests.  This is because PCI passthrough requires pinned memory,
and pinned memory cannot be migrated to the device.

Since AMD’s automotive use-case uses Xen, and since KVM might also need
page migration, I recommend that the initial implementation _always_
migrate pages to blob objects no matter what the hypervisor is.  Direct
GPU access to guest memory can be implemented as a KVM-specific optimization
later.

Also worth noting is that only pages that have been written need to be
migrated.  If a page hasn't been written, it should not be migrated, because
unwritten pages of a blob objects will read as zero.  However, the migration
should almost certainly be done in 2M chunks, rather than 4K ones.  This is
because the TLBs of at least AMD GPU are optimized for 2M pages, and GPU access
to 4K pages takes a 30% performance penalty.  This nicely matches the penalty
that AMD observed.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2025-01-20  9:54         ` Huang, Honglei1
@ 2025-01-29 19:46           ` Demi Marie Obenour
  2025-02-04 10:45           ` Simona Vetter
  1 sibling, 0 replies; 23+ messages in thread
From: Demi Marie Obenour @ 2025-01-29 19:46 UTC (permalink / raw)
  To: Huang, Honglei1, Huang Rui, virtualization, linux-kernel,
	Dmitry Osipenko, dri-devel, David Airlie, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Lingshan Zhu,
	Christian König

On 1/20/25 4:54 AM, Huang, Honglei1 wrote:
> On 2024/12/27 10:02, Huang, Honglei1 wrote:
>> On 2024/12/22 9:59, Demi Marie Obenour wrote:
>>> On 12/20/24 10:35 AM, Simona Vetter wrote:
>>>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>>>>> From: Honglei Huang<Honglei1.Huang@amd.com>
>>>>>
>>>>> A virtio-gpu userptr is based on HMM notifier.
>>>>> Used for let host access guest userspace memory and
>>>>> notice the change of userspace memory.
>>>>> This series patches are in very beginning state,
>>>>> User space are pinned currently to ensure the host
>>>>> device memory operations are correct.
>>>>> The free and unmap operations for userspace can be
>>>>> handled by MMU notifier this is a simple and basice
>>>>> SVM feature for this series patches.
>>>>> The physical PFNS update operations is splited into
>>>>> two OPs in here. The evicted memories won't be used
>>>>> anymore but remap into host again to achieve same
>>>>> effect with hmm_rang_fault.
>>>> So in my opinion there are two ways to implement userptr that make sense:
>>>>
>>>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
>>>>    notifier
>>>>
>>>> - unpinnned userptr where you entirely rely on userptr and do not hold any
>>>>    page references or page pins at all, for full SVM integration. This
>>>>    should use hmm_range_fault ideally, since that's the version that
>>>>    doesn't ever grab any page reference pins.
>>>>
>>>> All the in-between variants are imo really bad hacks, whether they hold a
>>>> page reference or a temporary page pin (which seems to be what you're
>>>> doing here). In much older kernels there was some justification for them,
>>>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
>>>> now all sorted out. So there's really only fully pinned, or true svm left
>>>> as clean design choices imo.
>>>>
>>>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
>>>> you?
>>> +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
>>> in complexity that pinning everything avoids.  Furthermore, this avoids the
>>> host having to take action in response to guest memory reclaim requests.
>>> This avoids additional complexity (and thus attack surface) on the host side.
>>> Furthermore, since this is for ROCm and not for graphics, I am less concerned
>>> about supporting systems that require swappable GPU VRAM.
>> Hi Sima and Demi, I totally agree the flag FOLL_LONGTERM is needed, I 
>> will add it in next version. And for the first pin variants 
>> implementation, the MMU notifier is also needed I think.
>> Cause the userptr feature in UMD generally used like this: the 
>> registering of userptr
>> always is explicitly invoked by user code like 
>> "registerMemoryToGPU(userptrAddr, ...)",
>> but for the userptr release/free, there is no explicit API for it, at 
>> least in hsakmt/KFD stack.
>> User just need call system call "free(userptrAddr)", thenkernel driver 
>> will release the userptr
>> by MMU notifier callback.Virtio-GPU has no other way to know if user has 
>> been free the userptr
>> except for MMU notifior.And in UMD theres is no way to get the free() 
>> operation is invoked by user.
>> the only way is use MMU notifierin virtio-GPU driver and free the 
>> corresponding data in host
>> by some virtio CMDs as far as I can see.
>> And for the second way that is use hmm_range_fault, there is a 
>> predictable issues as far as I can see, at least in hsakmt/KFD stack. 
>> That is the memory may migrate when GPU/device is working. In bare 
>> metal, when memory is migrating KFD driver will pause the compute work 
>> of the device in mmap_wirte_lock then use hmm_range_fault to remap the 
>> migrated/evicted memories to GPU then restore the compute work of device 
>> to ensure the correction of the data. But in virtio-GPU driver the 
>> migration happen in guest kernel, the evict mmu notifier callback 
>> happens in guest, a virtio CMD can be used for notify host but as lack 
>> of mmap_write_lock protection in host kernel, host will hold invalid 
>> data for a short period of time, this may lead to some issues. And it is 
>> hard to fix as far as I can see.
>> Finally I will extract some APIs into helper according to your request, 
>> and I will refactor the whole userptr
>> implementation, use some callbacks in page getting path, let the pin 
>> method and hmm_range_fault can be choiced
>> in this series patches.
>>
>> Regards,
>> Honglei
> 
> Hi Sima,
> 
> I modified the code, remove all the MMU nitifior and use 
> pin_user_pages_fast only. Under this implementation userptr fully
> managed by UMD. We did a performance test, it decreased by 30% in
> OpenCL stack in Geekbench6 benmark.
> We use AMD V2000 for test:
> use MMU notifior + pin_user_pages:
> near 13000 score: https://browser.geekbench.com/v6/compute/3257793
> 
> use pin_user_pages only:
> near 10000 socre: https://browser.geekbench.com/v6/compute/3496228
> 
> The code is under clean up, I will send out later.
> 
> And I found a another thing, it seems like in intel i915 userptr
> implementation, the pin_user_pages is also used in MMU notifior.
> Code path is: 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/i915/gem/i915_gem_userptr.c?h=v6.13#:~:text=ret%20%3D%20pin_user_pages_fast(obj%2D%3Euserptr.ptr%20%2B%20pinned%20*%20PAGE_SIZE%2C
> 
> Patch set: 
> https://lore.kernel.org/all/159353552439.22701.14005121342739071590@emeril.freedesktop.org/T/
> https://patchwork.kernel.org/project/intel-gfx/patch/20210323155059.628690-17-maarten.lankhorst@linux.intel.com/#24064663
> 
> And I didn't find the hmm_range_fault code path, maybe I missed it?

A 30% performance penalty is consistent with the GPU being forced
to use 4K pages instead of its preferred 2M huge pages.  AMD GPUs
have TLBs that are optimized for 2M pages, so using 4K pages will
cause lots of TLB misses.

CC Christian König who pointed out that a highly fragmented
physical address space is a bad idea if you care about GPU
performance.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2025-01-08 17:05         ` Simona Vetter
  2025-01-25  0:42           ` Demi Marie Obenour
@ 2025-01-29 20:54           ` Demi Marie Obenour
  2025-01-31  0:33             ` Demi Marie Obenour
  1 sibling, 1 reply; 23+ messages in thread
From: Demi Marie Obenour @ 2025-01-29 20:54 UTC (permalink / raw)
  To: Huang, Honglei1, Huang Rui, virtualization, linux-kernel,
	Dmitry Osipenko, dri-devel, David Airlie, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Lingshan Zhu,
	Xen developer discussion, Marek Marczykowski-Górecki,
	Kernel KVM virtualization development

On 1/8/25 12:05 PM, Simona Vetter wrote:
> On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
>>
>> On 2024/12/22 9:59, Demi Marie Obenour wrote:
>>> On 12/20/24 10:35 AM, Simona Vetter wrote:
>>>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>>>>> From: Honglei Huang <Honglei1.Huang@amd.com>
>>>>>
>>>>> A virtio-gpu userptr is based on HMM notifier.
>>>>> Used for let host access guest userspace memory and
>>>>> notice the change of userspace memory.
>>>>> This series patches are in very beginning state,
>>>>> User space are pinned currently to ensure the host
>>>>> device memory operations are correct.
>>>>> The free and unmap operations for userspace can be
>>>>> handled by MMU notifier this is a simple and basice
>>>>> SVM feature for this series patches.
>>>>> The physical PFNS update operations is splited into
>>>>> two OPs in here. The evicted memories won't be used
>>>>> anymore but remap into host again to achieve same
>>>>> effect with hmm_rang_fault.
>>>>
>>>> So in my opinion there are two ways to implement userptr that make sense:
>>>>
>>>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
>>>>    notifier
>>>>
>>>> - unpinnned userptr where you entirely rely on userptr and do not hold any
>>>>    page references or page pins at all, for full SVM integration. This
>>>>    should use hmm_range_fault ideally, since that's the version that
>>>>    doesn't ever grab any page reference pins.
>>>>
>>>> All the in-between variants are imo really bad hacks, whether they hold a
>>>> page reference or a temporary page pin (which seems to be what you're
>>>> doing here). In much older kernels there was some justification for them,
>>>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
>>>> now all sorted out. So there's really only fully pinned, or true svm left
>>>> as clean design choices imo.
>>>>
>>>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
>>>> you?
>>>
>>> +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
>>> in complexity that pinning everything avoids.  Furthermore, this avoids the
>>> host having to take action in response to guest memory reclaim requests.
>>> This avoids additional complexity (and thus attack surface) on the host side.
>>> Furthermore, since this is for ROCm and not for graphics, I am less concerned
>>> about supporting systems that require swappable GPU VRAM.
>>
>> Hi Sima and Demi,
>>
>> I totally agree the flag FOLL_LONGTERM is needed, I will add it in next
>> version.
>>
>> And for the first pin variants implementation, the MMU notifier is also
>> needed I think.Cause the userptr feature in UMD generally used like this:
>> the registering of userptr always is explicitly invoked by user code like
>> "registerMemoryToGPU(userptrAddr, ...)", but for the userptr release/free,
>> there is no explicit API for it, at least in hsakmt/KFD stack. User just
>> need call system call "free(userptrAddr)", then kernel driver will release
>> the userptr by MMU notifier callback.Virtio-GPU has no other way to know if
>> user has been free the userptr except for MMU notifior.And in UMD theres is
>> no way to get the free() operation is invoked by user.The only way is use
>> MMU notifier in virtio-GPU driver and free the corresponding data in host by
>> some virtio CMDs as far as I can see.
>>
>> And for the second way that is use hmm_range_fault, there is a predictable
>> issues as far as I can see, at least in hsakmt/KFD stack. That is the memory
>> may migrate when GPU/device is working. In bare metal, when memory is
>> migrating KFD driver will pause the compute work of the device in
>> mmap_wirte_lock then use hmm_range_fault to remap the migrated/evicted
>> memories to GPU then restore the compute work of device to ensure the
>> correction of the data. But in virtio-GPU driver the migration happen in
>> guest kernel, the evict mmu notifier callback happens in guest, a virtio CMD
>> can be used for notify host but as lack of mmap_write_lock protection in
>> host kernel, host will hold invalid data for a short period of time, this
>> may lead to some issues. And it is hard to fix as far as I can see.
>>
>> I will extract some APIs into helper according to your request, and I will
>> refactor the whole userptr implementation, use some callbacks in page
>> getting path, let the pin method and hmm_range_fault can be choiced
>> in this series patches.
> 
> Ok, so if this is for svm, then you need full blast hmm, or the semantics
> are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
> not work.
> 
> The other option is that hsakmt/kfd api is completely busted, and that's
> kinda not a kernel problem.
> -Sima

On further thought, I believe the driver needs to migrate the pages to
device memory (really a virtio-GPU blob object) *and* take a FOLL_LONGTERM
pin on them.  The reason is that it isn’t possible to migrate these pages
back to "host" memory without unmapping them from the GPU.  For the reasons
I mention in [1], I believe that temporarily revoking access to virtio-GPU
blob objects is not feasible.  Instead, the pages must be treated as if
they are permanently in device memory until guest userspace unmaps them
from the GPU, after which they must be migrated back to host memory.

The problems with other approaches are most obvious if one considers a Xen
guest using a virtio-GPU backend that is not all-powerful.  Normal guest
memory is not accessible to the GPU, and Xen uses the IOMMU to enforce this
restriction.  Therefore, the guest must migrate pages to virtio-GPU blob
objects before the GPU can access them.  From Xen’s perspective, virtio-GPU
blob objects belong to the backend domain, so Xen allows the GPU to access
them.  However, the pages in blob objects _cannot_ be used in Xen grant table
operations, because Xen doesn’t consider them to belong to the guest!
Similarly, if the guest has an assigned PCI device, that device will not
be able to access the blob object’s pages.

I’m no expert on Linux memory management, so I’m not sure how to implement
this behavior.  What I _can_ say is that a blob object is I/O memory, and
behaves somewhat similar to a PCI BAR in a system with no P2PDMA support:
CPU access works, but DMA from other devices does not.  Furthermore, the
memory can’t be used for page tables or granted to other Xen guests, and it
will go away if the device is hot-unplugged.  In fact, if the PCI transport
is used, the blob object is located in the BAR of an (emulated) device.
There are non-PCI transports, though, so assuming that blob objects are
located in a PCI BAR is not a good idea.

The reason that pinning the objects in "device" memory is a reasonable
approach is that the host (or backend, in the Xen case) can still migrate
pages between device and host memory and not allocate backing store for
pages that are never accessed.  Therefore, it is not necessary for every
CPU access to go across the PCIe bus even for dGPUs.  Instead, if guest
CPU accesses are much more frequent than device accesses, the memory will
be migrated to the host side.  It’s up to the virtio-GPU backend
implementation to make sure that this happens.  For KVM, this should be
automatic, but for Xen, this might need additional Xen patches so that
the backend domain is notified when pages are accessed or dirtied.

[1]: https://lore.kernel.org/dri-devel/9572ba57-5552-4543-a3b0-6097520a12a3@gmail.com
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2025-01-29 20:54           ` Demi Marie Obenour
@ 2025-01-31  0:33             ` Demi Marie Obenour
  2025-02-06 10:53               ` Huang, Honglei1
  0 siblings, 1 reply; 23+ messages in thread
From: Demi Marie Obenour @ 2025-01-31  0:33 UTC (permalink / raw)
  To: Demi Marie Obenour, Huang, Honglei1, Huang Rui, virtualization,
	linux-kernel, Dmitry Osipenko, dri-devel, David Airlie,
	Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Akihiko Odaki,
	Lingshan Zhu, Xen developer discussion,
	Marek Marczykowski-Górecki,
	Kernel KVM virtualization development, Xenia Ragiadakou,
	Stefano Stabellini

[-- Attachment #1: Type: text/plain, Size: 7902 bytes --]

On Wed, Jan 29, 2025 at 03:54:59PM -0500, Demi Marie Obenour wrote:
> On 1/8/25 12:05 PM, Simona Vetter wrote:
> > On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
> >>
> >> On 2024/12/22 9:59, Demi Marie Obenour wrote:
> >>> On 12/20/24 10:35 AM, Simona Vetter wrote:
> >>>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
> >>>>> From: Honglei Huang <Honglei1.Huang@amd.com>
> >>>>>
> >>>>> A virtio-gpu userptr is based on HMM notifier.
> >>>>> Used for let host access guest userspace memory and
> >>>>> notice the change of userspace memory.
> >>>>> This series patches are in very beginning state,
> >>>>> User space are pinned currently to ensure the host
> >>>>> device memory operations are correct.
> >>>>> The free and unmap operations for userspace can be
> >>>>> handled by MMU notifier this is a simple and basice
> >>>>> SVM feature for this series patches.
> >>>>> The physical PFNS update operations is splited into
> >>>>> two OPs in here. The evicted memories won't be used
> >>>>> anymore but remap into host again to achieve same
> >>>>> effect with hmm_rang_fault.
> >>>>
> >>>> So in my opinion there are two ways to implement userptr that make sense:
> >>>>
> >>>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
> >>>>    notifier
> >>>>
> >>>> - unpinnned userptr where you entirely rely on userptr and do not hold any
> >>>>    page references or page pins at all, for full SVM integration. This
> >>>>    should use hmm_range_fault ideally, since that's the version that
> >>>>    doesn't ever grab any page reference pins.
> >>>>
> >>>> All the in-between variants are imo really bad hacks, whether they hold a
> >>>> page reference or a temporary page pin (which seems to be what you're
> >>>> doing here). In much older kernels there was some justification for them,
> >>>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
> >>>> now all sorted out. So there's really only fully pinned, or true svm left
> >>>> as clean design choices imo.
> >>>>
> >>>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
> >>>> you?
> >>>
> >>> +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
> >>> in complexity that pinning everything avoids.  Furthermore, this avoids the
> >>> host having to take action in response to guest memory reclaim requests.
> >>> This avoids additional complexity (and thus attack surface) on the host side.
> >>> Furthermore, since this is for ROCm and not for graphics, I am less concerned
> >>> about supporting systems that require swappable GPU VRAM.
> >>
> >> Hi Sima and Demi,
> >>
> >> I totally agree the flag FOLL_LONGTERM is needed, I will add it in next
> >> version.
> >>
> >> And for the first pin variants implementation, the MMU notifier is also
> >> needed I think.Cause the userptr feature in UMD generally used like this:
> >> the registering of userptr always is explicitly invoked by user code like
> >> "registerMemoryToGPU(userptrAddr, ...)", but for the userptr release/free,
> >> there is no explicit API for it, at least in hsakmt/KFD stack. User just
> >> need call system call "free(userptrAddr)", then kernel driver will release
> >> the userptr by MMU notifier callback.Virtio-GPU has no other way to know if
> >> user has been free the userptr except for MMU notifior.And in UMD theres is
> >> no way to get the free() operation is invoked by user.The only way is use
> >> MMU notifier in virtio-GPU driver and free the corresponding data in host by
> >> some virtio CMDs as far as I can see.
> >>
> >> And for the second way that is use hmm_range_fault, there is a predictable
> >> issues as far as I can see, at least in hsakmt/KFD stack. That is the memory
> >> may migrate when GPU/device is working. In bare metal, when memory is
> >> migrating KFD driver will pause the compute work of the device in
> >> mmap_wirte_lock then use hmm_range_fault to remap the migrated/evicted
> >> memories to GPU then restore the compute work of device to ensure the
> >> correction of the data. But in virtio-GPU driver the migration happen in
> >> guest kernel, the evict mmu notifier callback happens in guest, a virtio CMD
> >> can be used for notify host but as lack of mmap_write_lock protection in
> >> host kernel, host will hold invalid data for a short period of time, this
> >> may lead to some issues. And it is hard to fix as far as I can see.
> >>
> >> I will extract some APIs into helper according to your request, and I will
> >> refactor the whole userptr implementation, use some callbacks in page
> >> getting path, let the pin method and hmm_range_fault can be choiced
> >> in this series patches.
> > 
> > Ok, so if this is for svm, then you need full blast hmm, or the semantics
> > are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
> > not work.
> > 
> > The other option is that hsakmt/kfd api is completely busted, and that's
> > kinda not a kernel problem.
> > -Sima
> 
> On further thought, I believe the driver needs to migrate the pages to
> device memory (really a virtio-GPU blob object) *and* take a FOLL_LONGTERM
> pin on them.  The reason is that it isn’t possible to migrate these pages
> back to "host" memory without unmapping them from the GPU.  For the reasons
> I mention in [1], I believe that temporarily revoking access to virtio-GPU
> blob objects is not feasible.  Instead, the pages must be treated as if
> they are permanently in device memory until guest userspace unmaps them
> from the GPU, after which they must be migrated back to host memory.

Discussion on IRC indicates that migration isn't reliable.  This is
because Linux core memory management is largely lock-free for
performance reasons, so there is no way to prevent temporary elevation
of a page's reference count.  A page with an elevated reference count
cannot be migrated.

The only alternative I can think of is for the hypervisor to perform the
migration.  The hypervisor can revoke the guest's access to the page
without the guest's consent or involvement.  The host can then replace
the page with one of its own pages, which might be on the CPU or GPU.
Further migration between the CPU and GPU is controlled by the host
kernel-mode driver (KMD) and host kernel memory management.  The guest
kernel driver must take a FOLL_LONGTERM pin before telling the host to
use the pages, but that is all.

On KVM, this should be essentially automatic, as guest memory really is
just host userspace memory.  On Xen, this requires that the backend
domain can revoke fronted access to _any_ frontend page, or at least
frontend pages that have been granted to the backend.  The backend will
then need to be able to handle page faults for the frontend pages, and
replace the frontend pages with its own pages at will.  Furthermore,
revoking pages that the backend has installed into the frontend must
never fail, because the backend will panic if it does fail.

Sima, is putting guest pages under host kernel control the only option?
I thought that this could be avoided by leaving the pages on the CPU if
migration fails, but that won't work because there will be no way to
migrate them to the GPU later, causing performance problems that would
be impossible to debug.  Is waiting (possibly forever) on migration to
finish an option?  Otherwise, this might mean extra complexity in the
Xen hypervisor, as I do not believe the primitives needed are currently
available.  Specifically, in addition to the primitives discussed at Xen
Project Summit 2024, the backend also needs to intercept access to, and
replace the contents of, arbitrary frontend-controlled pages.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 1/3] virtio-gpu api: add blob userptr resource
  2024-12-20 10:04 [RFC PATCH 1/3] virtio-gpu api: add blob userptr resource Honglei Huang
  2024-12-20 10:04 ` [RFC PATCH 2/3] drm/virtgpu " Honglei Huang
  2024-12-20 10:04 ` [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object Honglei Huang
@ 2025-02-03  8:25 ` Akihiko Odaki
  2025-02-06 10:40   ` Huang, Honglei1
  2 siblings, 1 reply; 23+ messages in thread
From: Akihiko Odaki @ 2025-02-03  8:25 UTC (permalink / raw)
  To: Honglei Huang, Gurchetan Singh, Antonio Caggiano
  Cc: Lingshan Zhu, Demi Marie Obenour, Huang Rui, virtualization,
	linux-kernel, Dmitry Osipenko, dri-devel, David Airlie,
	Gerd Hoffmann, Chia-I Wu, Daniel Vetter

On 2024/12/20 19:04, Honglei Huang wrote:
> From: Honglei Huang <Honglei1.Huang@amd.com>
> 
> Add a new resource for blob resource, called userptr, used for let
> host access guest user space memory, to acquire a simple SVM features
> in virtio GPU.
> 
> - The capset VIRTIO_GPU_CAPSET_HSAKMT used for context init,
> in this series patches only HSAKMT context can use the userptr
> feature. HSAKMT is a GPU compute library in HSA stack, like
> the role libdrm in mesa stack.
> - New flag VIRTIO_GPU_BLOB_FLAG_USE_USERPTR used in blob create
> to indicate the blob create ioctl is used for create a userptr
> blob resource.
> 
> Signed-off-by: Honglei Huang <Honglei1.Huang@amd.com>
> ---
>   include/uapi/linux/virtio_gpu.h | 2 ++
>   1 file changed, 2 insertions(+)
> 
> diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/virtio_gpu.h
> index b9a9783f0b14..0a6b56acbc13 100644
> --- a/include/uapi/linux/virtio_gpu.h
> +++ b/include/uapi/linux/virtio_gpu.h
> @@ -323,6 +323,7 @@ struct virtio_gpu_cmd_submit {
>   
>   #define VIRTIO_GPU_CAPSET_VIRGL 1
>   #define VIRTIO_GPU_CAPSET_VIRGL2 2
> +#define VIRTIO_GPU_CAPSET_HSAKMT 7

The changes to add VIRTIO_GPU_CAPSET_VENUS and VIRTIO_GPU_CAPSET_DRM are 
already merged so this should be rebased for clean apply.

Number 7 is also occupied since 2023:
https://chromium.googlesource.com/crosvm/crosvm/+/e4c1878733937042111fca58899a3a94002bfef0%5E%21/rutabaga_gfx/src/rutabaga_utils.rs

VCL, a proposed VirtIO-GPU OpenCL driver, is also going to use that number:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/31634/diffs?commit_id=55a1a8a32057e83819b046f2de03aca333b052b4

I think you should talk with Antonio Caggiano, who picked the number for 
VCL, to allocate a number without a conflict with VCL.

Gurchetan (the author of Rutabaga change allocating the number), I think 
you should notify the number usage by sending a patch for Linux or 
virtio-spec.

Regards,
Akihiko Odaki

>   
>   /* VIRTIO_GPU_CMD_GET_CAPSET_INFO */
>   struct virtio_gpu_get_capset_info {
> @@ -415,6 +416,7 @@ struct virtio_gpu_resource_create_blob {
>   #define VIRTIO_GPU_BLOB_FLAG_USE_MAPPABLE     0x0001
>   #define VIRTIO_GPU_BLOB_FLAG_USE_SHAREABLE    0x0002
>   #define VIRTIO_GPU_BLOB_FLAG_USE_CROSS_DEVICE 0x0004
> +#define VIRTIO_GPU_BLOB_FLAG_USE_USERPTR      0x0008
>   	/* zero is invalid blob mem */
>   	__le32 blob_mem;
>   	__le32 blob_flags;


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2025-01-20  9:54         ` Huang, Honglei1
  2025-01-29 19:46           ` Demi Marie Obenour
@ 2025-02-04 10:45           ` Simona Vetter
  2025-02-06 11:05             ` Huang, Honglei1
  1 sibling, 1 reply; 23+ messages in thread
From: Simona Vetter @ 2025-02-04 10:45 UTC (permalink / raw)
  To: Huang, Honglei1
  Cc: Demi Marie Obenour, Huang Rui, virtualization, linux-kernel,
	Dmitry Osipenko, dri-devel, David Airlie, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Lingshan Zhu

On Mon, Jan 20, 2025 at 05:54:10PM +0800, Huang, Honglei1 wrote:
> On 2024/12/27 10:02, Huang, Honglei1 wrote:
> > On 2024/12/22 9:59, Demi Marie Obenour wrote:
> > > On 12/20/24 10:35 AM, Simona Vetter wrote:
> > > > On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
> > > > > From: Honglei Huang<Honglei1.Huang@amd.com>
> > > > > 
> > > > > A virtio-gpu userptr is based on HMM notifier.
> > > > > Used for let host access guest userspace memory and
> > > > > notice the change of userspace memory.
> > > > > This series patches are in very beginning state,
> > > > > User space are pinned currently to ensure the host
> > > > > device memory operations are correct.
> > > > > The free and unmap operations for userspace can be
> > > > > handled by MMU notifier this is a simple and basice
> > > > > SVM feature for this series patches.
> > > > > The physical PFNS update operations is splited into
> > > > > two OPs in here. The evicted memories won't be used
> > > > > anymore but remap into host again to achieve same
> > > > > effect with hmm_rang_fault.
> > > > So in my opinion there are two ways to implement userptr that make sense:
> > > > 
> > > > - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
> > > >    notifier
> > > > 
> > > > - unpinnned userptr where you entirely rely on userptr and do not hold any
> > > >    page references or page pins at all, for full SVM integration. This
> > > >    should use hmm_range_fault ideally, since that's the version that
> > > >    doesn't ever grab any page reference pins.
> > > > 
> > > > All the in-between variants are imo really bad hacks, whether they hold a
> > > > page reference or a temporary page pin (which seems to be what you're
> > > > doing here). In much older kernels there was some justification for them,
> > > > because strange stuff happened over fork(), but with FOLL_LONGTERM this is
> > > > now all sorted out. So there's really only fully pinned, or true svm left
> > > > as clean design choices imo.
> > > > 
> > > > With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
> > > > you?
> > > +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
> > > in complexity that pinning everything avoids.  Furthermore, this avoids the
> > > host having to take action in response to guest memory reclaim requests.
> > > This avoids additional complexity (and thus attack surface) on the host side.
> > > Furthermore, since this is for ROCm and not for graphics, I am less concerned
> > > about supporting systems that require swappable GPU VRAM.

I kinda ignored this one, because from my point of view it's all clear.
But I guess better to elaborate some more.

> > Hi Sima and Demi, I totally agree the flag FOLL_LONGTERM is needed, I
> > will add it in next version. And for the first pin variants
> > implementation, the MMU notifier is also needed I think.
> > Cause the userptr feature in UMD generally used like this: the
> > registering of userptr
> > always is explicitly invoked by user code like
> > "registerMemoryToGPU(userptrAddr, ...)",
> > but for the userptr release/free, there is no explicit API for it, at
> > least in hsakmt/KFD stack.
> > User just need call system call "free(userptrAddr)", thenkernel driver
> > will release the userptr
> > by MMU notifier callback.Virtio-GPU has no other way to know if user has
> > been free the userptr
> > except for MMU notifior.And in UMD theres is no way to get the free()
> > operation is invoked by user.
> > the only way is use MMU notifierin virtio-GPU driver and free the
> > corresponding data in host
> > by some virtio CMDs as far as I can see.

Yes that's what I meant that you need real svm/hmm here. You cannot fake
the amdkfd/hsakmt model with pin_user_pages, it fundamentally falls apart.
One case is overcommit, where userspace malloc() a huge amount of virtual
address space, registers it with the gpu, but only uses fairly little of
it. If you pin that all, you run out of memory or at least thrash
performance.

For the hsa/kfd model, you must use hmm+mmu_notifier, or you're breaking
the uapi contract.

> > And for the second way that is use hmm_range_fault, there is a
> > predictable issues as far as I can see, at least in hsakmt/KFD stack.
> > That is the memory may migrate when GPU/device is working. In bare
> > metal, when memory is migrating KFD driver will pause the compute work
> > of the device in mmap_wirte_lock then use hmm_range_fault to remap the
> > migrated/evicted memories to GPU then restore the compute work of device
> > to ensure the correction of the data. But in virtio-GPU driver the
> > migration happen in guest kernel, the evict mmu notifier callback
> > happens in guest, a virtio CMD can be used for notify host but as lack
> > of mmap_write_lock protection in host kernel, host will hold invalid
> > data for a short period of time, this may lead to some issues. And it is
> > hard to fix as far as I can see.

I forgot the exact details, but when I looked amdkfd was taking too big
locks in the migration paths, which results in deadlocks. Those were
worked around by taking even bigger locks, the mmap_write_lock. But that's
a design issue with amdkfd, not something fundamental.

hmm only needs the mmu_notifier callbacks to work, meaning either context
preemption and restarting, or tlb invalidation and gpu page fault
handling. Those you can forward between guest and host with no issues, and
with hw support like pasid/ats in iommu this already happens.

Note that the drm_gpusvm library that's under discussion for xe had the
same issue in version 1 of relying on mmap_write_lock to paper over design
issues. But the recent versions should be fixed. Would be really good to
look at all that. And then probably do a full redo of the svm support for
amd in amdgpu.ko using all these new helper libraries, because my personal
take is that fixing amdkfd is probably more work than just writing new
clean code.

> > Finally I will extract some APIs into helper according to your request,
> > and I will refactor the whole userptr
> > implementation, use some callbacks in page getting path, let the pin
> > method and hmm_range_fault can be choiced
> > in this series patches.
> > 
> > Regards,
> > Honglei
> 
> Hi Sima,
> 
> I modified the code, remove all the MMU nitifior and use pin_user_pages_fast
> only. Under this implementation userptr fully
> managed by UMD. We did a performance test, it decreased by 30% in
> OpenCL stack in Geekbench6 benmark.
> We use AMD V2000 for test:
> use MMU notifior + pin_user_pages:
> near 13000 score: https://browser.geekbench.com/v6/compute/3257793
> 
> use pin_user_pages only:
> near 10000 socre: https://browser.geekbench.com/v6/compute/3496228
> 
> The code is under clean up, I will send out later.

pin_user_pages is fundamentally broken for the hsa memory model, no amount
of benchmarking different flavors of it against each another will change
that.

> And I found a another thing, it seems like in intel i915 userptr
> implementation, the pin_user_pages is also used in MMU notifior.
> Code path is: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/i915/gem/i915_gem_userptr.c?h=v6.13#:~:text=ret%20%3D%20pin_user_pages_fast(obj%2D%3Euserptr.ptr%20%2B%20pinned%20*%20PAGE_SIZE%2C

Yeah i915-gem isn't great code and shouldn't be used as example for
anything. This is Dave&me asked Xe developers to create a lot of the
infrastructure in drm libraries, to make sure we have a solid design here
and not what i915-gem did.

Also note that i915-gem userptr is for buffer based userptr, like for
VK_KHR_external_memory if I remember correctly. This is not the hsa memory
model at all.
> 
> Patch set: https://lore.kernel.org/all/159353552439.22701.14005121342739071590@emeril.freedesktop.org/T/
> https://patchwork.kernel.org/project/intel-gfx/patch/20210323155059.628690-17-maarten.lankhorst@linux.intel.com/#24064663
> 
> And I didn't find the hmm_range_fault code path, maybe I missed it?

i915-gem never supported the svm/hsa memory model in upstream, so yeah
it's not there. For drm/xe see the work from Matt Brost that's currently
under discussion on dri-devel for what it should all look like.

Cheers, Sima

> 
> Regards,
> Honglei
> 

-- 
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 1/3] virtio-gpu api: add blob userptr resource
  2025-02-03  8:25 ` [RFC PATCH 1/3] virtio-gpu api: add blob userptr resource Akihiko Odaki
@ 2025-02-06 10:40   ` Huang, Honglei1
  0 siblings, 0 replies; 23+ messages in thread
From: Huang, Honglei1 @ 2025-02-06 10:40 UTC (permalink / raw)
  To: Akihiko Odaki, Gurchetan Singh, Antonio Caggiano
  Cc: Lingshan Zhu, Demi Marie Obenour, Huang Rui, virtualization,
	linux-kernel, Dmitry Osipenko, dri-devel, David Airlie,
	Gerd Hoffmann, Chia-I Wu, Daniel Vetter

On 2025/2/3 16:25, Akihiko Odaki wrote:
> On 2024/12/20 19:04, Honglei Huang wrote:
>> From: Honglei Huang <Honglei1.Huang@amd.com>
>>
>> Add a new resource for blob resource, called userptr, used for let
>> host access guest user space memory, to acquire a simple SVM features
>> in virtio GPU.
>>
>> - The capset VIRTIO_GPU_CAPSET_HSAKMT used for context init,
>> in this series patches only HSAKMT context can use the userptr
>> feature. HSAKMT is a GPU compute library in HSA stack, like
>> the role libdrm in mesa stack.
>> - New flag VIRTIO_GPU_BLOB_FLAG_USE_USERPTR used in blob create
>> to indicate the blob create ioctl is used for create a userptr
>> blob resource.
>>
>> Signed-off-by: Honglei Huang <Honglei1.Huang@amd.com>
>> ---
>>   include/uapi/linux/virtio_gpu.h | 2 ++
>>   1 file changed, 2 insertions(+)
>>
>> diff --git a/include/uapi/linux/virtio_gpu.h b/include/uapi/linux/ 
>> virtio_gpu.h
>> index b9a9783f0b14..0a6b56acbc13 100644
>> --- a/include/uapi/linux/virtio_gpu.h
>> +++ b/include/uapi/linux/virtio_gpu.h
>> @@ -323,6 +323,7 @@ struct virtio_gpu_cmd_submit {
>>   #define VIRTIO_GPU_CAPSET_VIRGL 1
>>   #define VIRTIO_GPU_CAPSET_VIRGL2 2
>> +#define VIRTIO_GPU_CAPSET_HSAKMT 7
> 
> The changes to add VIRTIO_GPU_CAPSET_VENUS and VIRTIO_GPU_CAPSET_DRM are 
> already merged so this should be rebased for clean apply.
> 
> Number 7 is also occupied since 2023:
> https://chromium.googlesource.com/crosvm/crosvm/+/ 
> e4c1878733937042111fca58899a3a94002bfef0%5E%21/rutabaga_gfx/src/ 
> rutabaga_utils.rs
> 
> VCL, a proposed VirtIO-GPU OpenCL driver, is also going to use that number:
> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/31634/diffs? 
> commit_id=55a1a8a32057e83819b046f2de03aca333b052b4
> 
> I think you should talk with Antonio Caggiano, who picked the number for 
> VCL, to allocate a number without a conflict with VCL.
> 
> Gurchetan (the author of Rutabaga change allocating the number), I think 
> you should notify the number usage by sending a patch for Linux or 
> virtio-spec.
> 
> Regards,
> Akihiko Odaki
> 
>>   /* VIRTIO_GPU_CMD_GET_CAPSET_INFO */
>>   struct virtio_gpu_get_capset_info {
>> @@ -415,6 +416,7 @@ struct virtio_gpu_resource_create_blob {
>>   #define VIRTIO_GPU_BLOB_FLAG_USE_MAPPABLE     0x0001
>>   #define VIRTIO_GPU_BLOB_FLAG_USE_SHAREABLE    0x0002
>>   #define VIRTIO_GPU_BLOB_FLAG_USE_CROSS_DEVICE 0x0004
>> +#define VIRTIO_GPU_BLOB_FLAG_USE_USERPTR      0x0008
>>       /* zero is invalid blob mem */
>>       __le32 blob_mem;
>>       __le32 blob_flags;
> 
Hi Akihiko,

Really thanks for the information, I think maybe using an unused number 
is a better way. I will send a patch to virtio-spec to occupy a unused one.

Regards,
Honglei

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2025-01-31  0:33             ` Demi Marie Obenour
@ 2025-02-06 10:53               ` Huang, Honglei1
  2025-02-06 18:21                 ` Demi Marie Obenour
  0 siblings, 1 reply; 23+ messages in thread
From: Huang, Honglei1 @ 2025-02-06 10:53 UTC (permalink / raw)
  To: Demi Marie Obenour
  Cc: Demi Marie Obenour, Huang Rui, Stefano Stabellini, virtualization,
	linux-kernel, David Airlie, dri-devel, Dmitry Osipenko,
	Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Akihiko Odaki,
	Lingshan Zhu, Xen developer discussion,
	Kernel KVM virtualization development, Xenia Ragiadakou,
	Marek Marczykowski-Górecki

On 2025/1/31 8:33, Demi Marie Obenour wrote:
> On Wed, Jan 29, 2025 at 03:54:59PM -0500, Demi Marie Obenour wrote:
>> On 1/8/25 12:05 PM, Simona Vetter wrote:
>>> On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
>>>>
>>>> On 2024/12/22 9:59, Demi Marie Obenour wrote:
>>>>> On 12/20/24 10:35 AM, Simona Vetter wrote:
>>>>>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>>>>>>> From: Honglei Huang <Honglei1.Huang@amd.com>
>>>>>>>
>>>>>>> A virtio-gpu userptr is based on HMM notifier.
>>>>>>> Used for let host access guest userspace memory and
>>>>>>> notice the change of userspace memory.
>>>>>>> This series patches are in very beginning state,
>>>>>>> User space are pinned currently to ensure the host
>>>>>>> device memory operations are correct.
>>>>>>> The free and unmap operations for userspace can be
>>>>>>> handled by MMU notifier this is a simple and basice
>>>>>>> SVM feature for this series patches.
>>>>>>> The physical PFNS update operations is splited into
>>>>>>> two OPs in here. The evicted memories won't be used
>>>>>>> anymore but remap into host again to achieve same
>>>>>>> effect with hmm_rang_fault.
>>>>>>
>>>>>> So in my opinion there are two ways to implement userptr that make sense:
>>>>>>
>>>>>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
>>>>>>     notifier
>>>>>>
>>>>>> - unpinnned userptr where you entirely rely on userptr and do not hold any
>>>>>>     page references or page pins at all, for full SVM integration. This
>>>>>>     should use hmm_range_fault ideally, since that's the version that
>>>>>>     doesn't ever grab any page reference pins.
>>>>>>
>>>>>> All the in-between variants are imo really bad hacks, whether they hold a
>>>>>> page reference or a temporary page pin (which seems to be what you're
>>>>>> doing here). In much older kernels there was some justification for them,
>>>>>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
>>>>>> now all sorted out. So there's really only fully pinned, or true svm left
>>>>>> as clean design choices imo.
>>>>>>
>>>>>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
>>>>>> you?
>>>>>
>>>>> +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
>>>>> in complexity that pinning everything avoids.  Furthermore, this avoids the
>>>>> host having to take action in response to guest memory reclaim requests.
>>>>> This avoids additional complexity (and thus attack surface) on the host side.
>>>>> Furthermore, since this is for ROCm and not for graphics, I am less concerned
>>>>> about supporting systems that require swappable GPU VRAM.
>>>>
>>>> Hi Sima and Demi,
>>>>
>>>> I totally agree the flag FOLL_LONGTERM is needed, I will add it in next
>>>> version.
>>>>
>>>> And for the first pin variants implementation, the MMU notifier is also
>>>> needed I think.Cause the userptr feature in UMD generally used like this:
>>>> the registering of userptr always is explicitly invoked by user code like
>>>> "registerMemoryToGPU(userptrAddr, ...)", but for the userptr release/free,
>>>> there is no explicit API for it, at least in hsakmt/KFD stack. User just
>>>> need call system call "free(userptrAddr)", then kernel driver will release
>>>> the userptr by MMU notifier callback.Virtio-GPU has no other way to know if
>>>> user has been free the userptr except for MMU notifior.And in UMD theres is
>>>> no way to get the free() operation is invoked by user.The only way is use
>>>> MMU notifier in virtio-GPU driver and free the corresponding data in host by
>>>> some virtio CMDs as far as I can see.
>>>>
>>>> And for the second way that is use hmm_range_fault, there is a predictable
>>>> issues as far as I can see, at least in hsakmt/KFD stack. That is the memory
>>>> may migrate when GPU/device is working. In bare metal, when memory is
>>>> migrating KFD driver will pause the compute work of the device in
>>>> mmap_wirte_lock then use hmm_range_fault to remap the migrated/evicted
>>>> memories to GPU then restore the compute work of device to ensure the
>>>> correction of the data. But in virtio-GPU driver the migration happen in
>>>> guest kernel, the evict mmu notifier callback happens in guest, a virtio CMD
>>>> can be used for notify host but as lack of mmap_write_lock protection in
>>>> host kernel, host will hold invalid data for a short period of time, this
>>>> may lead to some issues. And it is hard to fix as far as I can see.
>>>>
>>>> I will extract some APIs into helper according to your request, and I will
>>>> refactor the whole userptr implementation, use some callbacks in page
>>>> getting path, let the pin method and hmm_range_fault can be choiced
>>>> in this series patches.
>>>
>>> Ok, so if this is for svm, then you need full blast hmm, or the semantics
>>> are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
>>> not work.
>>>
>>> The other option is that hsakmt/kfd api is completely busted, and that's
>>> kinda not a kernel problem.
>>> -Sima
>>
>> On further thought, I believe the driver needs to migrate the pages to
>> device memory (really a virtio-GPU blob object) *and* take a FOLL_LONGTERM
>> pin on them.  The reason is that it isn’t possible to migrate these pages
>> back to "host" memory without unmapping them from the GPU.  For the reasons
>> I mention in [1], I believe that temporarily revoking access to virtio-GPU
>> blob objects is not feasible.  Instead, the pages must be treated as if
>> they are permanently in device memory until guest userspace unmaps them
>> from the GPU, after which they must be migrated back to host memory.
> 
> Discussion on IRC indicates that migration isn't reliable.  This is
> because Linux core memory management is largely lock-free for
> performance reasons, so there is no way to prevent temporary elevation
> of a page's reference count.  A page with an elevated reference count
> cannot be migrated.
> 
> The only alternative I can think of is for the hypervisor to perform the
> migration.  The hypervisor can revoke the guest's access to the page
> without the guest's consent or involvement.  The host can then replace
> the page with one of its own pages, which might be on the CPU or GPU.
> Further migration between the CPU and GPU is controlled by the host
> kernel-mode driver (KMD) and host kernel memory management.  The guest
> kernel driver must take a FOLL_LONGTERM pin before telling the host to
> use the pages, but that is all.
> 
> On KVM, this should be essentially automatic, as guest memory really is
> just host userspace memory.  On Xen, this requires that the backend
> domain can revoke fronted access to _any_ frontend page, or at least
> frontend pages that have been granted to the backend.  The backend will
> then need to be able to handle page faults for the frontend pages, and
> replace the frontend pages with its own pages at will.  Furthermore,
> revoking pages that the backend has installed into the frontend must
> never fail, because the backend will panic if it does fail.
> 
> Sima, is putting guest pages under host kernel control the only option?
> I thought that this could be avoided by leaving the pages on the CPU if
> migration fails, but that won't work because there will be no way to
> migrate them to the GPU later, causing performance problems that would
> be impossible to debug.  Is waiting (possibly forever) on migration to
> finish an option?  Otherwise, this might mean extra complexity in the
> Xen hypervisor, as I do not believe the primitives needed are currently
> available.  Specifically, in addition to the primitives discussed at Xen
> Project Summit 2024, the backend also needs to intercept access to, and
> replace the contents of, arbitrary frontend-controlled pages.

Hi Demi,

I agree that to achieve the complete SVM feature in virtio-GPU, it is 
necessary to have the hypervisor deeply involved and add new features.
It needs solid design, I saw the detailed reply in a another thread, it
is very helpful,looking forward to the response from the Xen/hypervisor 
experts.

So for the current virito-GPU userptr implementation, It can not support 
the full SVM feature, it just can only let GPU access the user space 
memory, maybe can be called by userptr feature. I think I will finish 
this small part firstly and then to try to complete the whole SVM feature.

Regards,
Honglei


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2025-02-04 10:45           ` Simona Vetter
@ 2025-02-06 11:05             ` Huang, Honglei1
  0 siblings, 0 replies; 23+ messages in thread
From: Huang, Honglei1 @ 2025-02-06 11:05 UTC (permalink / raw)
  To: Simona Vetter
  Cc: Demi Marie Obenour, Huang Rui, virtualization, linux-kernel,
	Dmitry Osipenko, dri-devel, David Airlie, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Lingshan Zhu

On 2025/2/4 18:45, Simona Vetter wrote:
> On Mon, Jan 20, 2025 at 05:54:10PM +0800, Huang, Honglei1 wrote:
>> On 2024/12/27 10:02, Huang, Honglei1 wrote:
>>> On 2024/12/22 9:59, Demi Marie Obenour wrote:
>>>> On 12/20/24 10:35 AM, Simona Vetter wrote:
>>>>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>>>>>> From: Honglei Huang<Honglei1.Huang@amd.com>
>>>>>>
>>>>>> A virtio-gpu userptr is based on HMM notifier.
>>>>>> Used for let host access guest userspace memory and
>>>>>> notice the change of userspace memory.
>>>>>> This series patches are in very beginning state,
>>>>>> User space are pinned currently to ensure the host
>>>>>> device memory operations are correct.
>>>>>> The free and unmap operations for userspace can be
>>>>>> handled by MMU notifier this is a simple and basice
>>>>>> SVM feature for this series patches.
>>>>>> The physical PFNS update operations is splited into
>>>>>> two OPs in here. The evicted memories won't be used
>>>>>> anymore but remap into host again to achieve same
>>>>>> effect with hmm_rang_fault.
>>>>> So in my opinion there are two ways to implement userptr that make sense:
>>>>>
>>>>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
>>>>>     notifier
>>>>>
>>>>> - unpinnned userptr where you entirely rely on userptr and do not hold any
>>>>>     page references or page pins at all, for full SVM integration. This
>>>>>     should use hmm_range_fault ideally, since that's the version that
>>>>>     doesn't ever grab any page reference pins.
>>>>>
>>>>> All the in-between variants are imo really bad hacks, whether they hold a
>>>>> page reference or a temporary page pin (which seems to be what you're
>>>>> doing here). In much older kernels there was some justification for them,
>>>>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
>>>>> now all sorted out. So there's really only fully pinned, or true svm left
>>>>> as clean design choices imo.
>>>>>
>>>>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
>>>>> you?
>>>> +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
>>>> in complexity that pinning everything avoids.  Furthermore, this avoids the
>>>> host having to take action in response to guest memory reclaim requests.
>>>> This avoids additional complexity (and thus attack surface) on the host side.
>>>> Furthermore, since this is for ROCm and not for graphics, I am less concerned
>>>> about supporting systems that require swappable GPU VRAM.
> 
> I kinda ignored this one, because from my point of view it's all clear.
> But I guess better to elaborate some more.
> 
>>> Hi Sima and Demi, I totally agree the flag FOLL_LONGTERM is needed, I
>>> will add it in next version. And for the first pin variants
>>> implementation, the MMU notifier is also needed I think.
>>> Cause the userptr feature in UMD generally used like this: the
>>> registering of userptr
>>> always is explicitly invoked by user code like
>>> "registerMemoryToGPU(userptrAddr, ...)",
>>> but for the userptr release/free, there is no explicit API for it, at
>>> least in hsakmt/KFD stack.
>>> User just need call system call "free(userptrAddr)", thenkernel driver
>>> will release the userptr
>>> by MMU notifier callback.Virtio-GPU has no other way to know if user has
>>> been free the userptr
>>> except for MMU notifior.And in UMD theres is no way to get the free()
>>> operation is invoked by user.
>>> the only way is use MMU notifierin virtio-GPU driver and free the
>>> corresponding data in host
>>> by some virtio CMDs as far as I can see.
> 
> Yes that's what I meant that you need real svm/hmm here. You cannot fake
> the amdkfd/hsakmt model with pin_user_pages, it fundamentally falls apart.
> One case is overcommit, where userspace malloc() a huge amount of virtual
> address space, registers it with the gpu, but only uses fairly little of
> it. If you pin that all, you run out of memory or at least thrash
> performance.
> 
> For the hsa/kfd model, you must use hmm+mmu_notifier, or you're breaking
> the uapi contract.
> 
>>> And for the second way that is use hmm_range_fault, there is a
>>> predictable issues as far as I can see, at least in hsakmt/KFD stack.
>>> That is the memory may migrate when GPU/device is working. In bare
>>> metal, when memory is migrating KFD driver will pause the compute work
>>> of the device in mmap_wirte_lock then use hmm_range_fault to remap the
>>> migrated/evicted memories to GPU then restore the compute work of device
>>> to ensure the correction of the data. But in virtio-GPU driver the
>>> migration happen in guest kernel, the evict mmu notifier callback
>>> happens in guest, a virtio CMD can be used for notify host but as lack
>>> of mmap_write_lock protection in host kernel, host will hold invalid
>>> data for a short period of time, this may lead to some issues. And it is
>>> hard to fix as far as I can see.
> 
> I forgot the exact details, but when I looked amdkfd was taking too big
> locks in the migration paths, which results in deadlocks. Those were
> worked around by taking even bigger locks, the mmap_write_lock. But that's
> a design issue with amdkfd, not something fundamental.
> 
> hmm only needs the mmu_notifier callbacks to work, meaning either context
> preemption and restarting, or tlb invalidation and gpu page fault
> handling. Those you can forward between guest and host with no issues, and
> with hw support like pasid/ats in iommu this already happens.
> 
> Note that the drm_gpusvm library that's under discussion for xe had the
> same issue in version 1 of relying on mmap_write_lock to paper over design
> issues. But the recent versions should be fixed. Would be really good to
> look at all that. And then probably do a full redo of the svm support for
> amd in amdgpu.ko using all these new helper libraries, because my personal
> take is that fixing amdkfd is probably more work than just writing new
> clean code.
> 
>>> Finally I will extract some APIs into helper according to your request,
>>> and I will refactor the whole userptr
>>> implementation, use some callbacks in page getting path, let the pin
>>> method and hmm_range_fault can be choiced
>>> in this series patches.
>>>
>>> Regards,
>>> Honglei
>>
>> Hi Sima,
>>
>> I modified the code, remove all the MMU nitifior and use pin_user_pages_fast
>> only. Under this implementation userptr fully
>> managed by UMD. We did a performance test, it decreased by 30% in
>> OpenCL stack in Geekbench6 benmark.
>> We use AMD V2000 for test:
>> use MMU notifior + pin_user_pages:
>> near 13000 score: https://browser.geekbench.com/v6/compute/3257793
>>
>> use pin_user_pages only:
>> near 10000 socre: https://browser.geekbench.com/v6/compute/3496228
>>
>> The code is under clean up, I will send out later.
> 
> pin_user_pages is fundamentally broken for the hsa memory model, no amount
> of benchmarking different flavors of it against each another will change
> that.
> 
>> And I found a another thing, it seems like in intel i915 userptr
>> implementation, the pin_user_pages is also used in MMU notifior.
>> Code path is: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/i915/gem/i915_gem_userptr.c?h=v6.13#:~:text=ret%20%3D%20pin_user_pages_fast(obj%2D%3Euserptr.ptr%20%2B%20pinned%20*%20PAGE_SIZE%2C
> 
> Yeah i915-gem isn't great code and shouldn't be used as example for
> anything. This is Dave&me asked Xe developers to create a lot of the
> infrastructure in drm libraries, to make sure we have a solid design here
> and not what i915-gem did.
> 
> Also note that i915-gem userptr is for buffer based userptr, like for
> VK_KHR_external_memory if I remember correctly. This is not the hsa memory
> model at all.
>>
>> Patch set: https://lore.kernel.org/all/159353552439.22701.14005121342739071590@emeril.freedesktop.org/T/
>> https://patchwork.kernel.org/project/intel-gfx/patch/20210323155059.628690-17-maarten.lankhorst@linux.intel.com/#24064663
>>
>> And I didn't find the hmm_range_fault code path, maybe I missed it?
> 
> i915-gem never supported the svm/hsa memory model in upstream, so yeah
> it's not there. For drm/xe see the work from Matt Brost that's currently
> under discussion on dri-devel for what it should all look like.
> 
> Cheers, Sima
> 
>>
>> Regards,
>> Honglei
>>
> 

Understood, and really sorry for the misunderstanding on SVM feature. 
This patch set is not going to implement the SVM/HSA feature for 
virtio-gpu. Just to let host GPU access the guest userspace memory. I 
will correct the feature description in patches. The full SVM/HSA are 
heavyweight feature, need much more time to implement.
The goal of this patch set is not to achieve the SVM/HSA feature, called 
by "userptr" or "guest blob memory" maybe is better.

Regards,
Honglei



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2025-02-06 10:53               ` Huang, Honglei1
@ 2025-02-06 18:21                 ` Demi Marie Obenour
  2025-02-07 11:07                   ` Huang, Honglei1
  0 siblings, 1 reply; 23+ messages in thread
From: Demi Marie Obenour @ 2025-02-06 18:21 UTC (permalink / raw)
  To: Huang, Honglei1
  Cc: Demi Marie Obenour, Huang Rui, Stefano Stabellini, virtualization,
	linux-kernel, David Airlie, dri-devel, Dmitry Osipenko,
	Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Akihiko Odaki,
	Lingshan Zhu, Xen developer discussion,
	Kernel KVM virtualization development, Xenia Ragiadakou,
	Marek Marczykowski-Górecki

[-- Attachment #1: Type: text/plain, Size: 11250 bytes --]

On Thu, Feb 06, 2025 at 06:53:55PM +0800, Huang, Honglei1 wrote:
> On 2025/1/31 8:33, Demi Marie Obenour wrote:
> > On Wed, Jan 29, 2025 at 03:54:59PM -0500, Demi Marie Obenour wrote:
> > > On 1/8/25 12:05 PM, Simona Vetter wrote:
> > > > On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
> > > > > 
> > > > > On 2024/12/22 9:59, Demi Marie Obenour wrote:
> > > > > > On 12/20/24 10:35 AM, Simona Vetter wrote:
> > > > > > > On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
> > > > > > > > From: Honglei Huang <Honglei1.Huang@amd.com>
> > > > > > > > 
> > > > > > > > A virtio-gpu userptr is based on HMM notifier.
> > > > > > > > Used for let host access guest userspace memory and
> > > > > > > > notice the change of userspace memory.
> > > > > > > > This series patches are in very beginning state,
> > > > > > > > User space are pinned currently to ensure the host
> > > > > > > > device memory operations are correct.
> > > > > > > > The free and unmap operations for userspace can be
> > > > > > > > handled by MMU notifier this is a simple and basice
> > > > > > > > SVM feature for this series patches.
> > > > > > > > The physical PFNS update operations is splited into
> > > > > > > > two OPs in here. The evicted memories won't be used
> > > > > > > > anymore but remap into host again to achieve same
> > > > > > > > effect with hmm_rang_fault.
> > > > > > > 
> > > > > > > So in my opinion there are two ways to implement userptr that make sense:
> > > > > > > 
> > > > > > > - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
> > > > > > >     notifier
> > > > > > > 
> > > > > > > - unpinnned userptr where you entirely rely on userptr and do not hold any
> > > > > > >     page references or page pins at all, for full SVM integration. This
> > > > > > >     should use hmm_range_fault ideally, since that's the version that
> > > > > > >     doesn't ever grab any page reference pins.
> > > > > > > 
> > > > > > > All the in-between variants are imo really bad hacks, whether they hold a
> > > > > > > page reference or a temporary page pin (which seems to be what you're
> > > > > > > doing here). In much older kernels there was some justification for them,
> > > > > > > because strange stuff happened over fork(), but with FOLL_LONGTERM this is
> > > > > > > now all sorted out. So there's really only fully pinned, or true svm left
> > > > > > > as clean design choices imo.
> > > > > > > 
> > > > > > > With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
> > > > > > > you?
> > > > > > 
> > > > > > +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
> > > > > > in complexity that pinning everything avoids.  Furthermore, this avoids the
> > > > > > host having to take action in response to guest memory reclaim requests.
> > > > > > This avoids additional complexity (and thus attack surface) on the host side.
> > > > > > Furthermore, since this is for ROCm and not for graphics, I am less concerned
> > > > > > about supporting systems that require swappable GPU VRAM.
> > > > > 
> > > > > Hi Sima and Demi,
> > > > > 
> > > > > I totally agree the flag FOLL_LONGTERM is needed, I will add it in next
> > > > > version.
> > > > > 
> > > > > And for the first pin variants implementation, the MMU notifier is also
> > > > > needed I think.Cause the userptr feature in UMD generally used like this:
> > > > > the registering of userptr always is explicitly invoked by user code like
> > > > > "registerMemoryToGPU(userptrAddr, ...)", but for the userptr release/free,
> > > > > there is no explicit API for it, at least in hsakmt/KFD stack. User just
> > > > > need call system call "free(userptrAddr)", then kernel driver will release
> > > > > the userptr by MMU notifier callback.Virtio-GPU has no other way to know if
> > > > > user has been free the userptr except for MMU notifior.And in UMD theres is
> > > > > no way to get the free() operation is invoked by user.The only way is use
> > > > > MMU notifier in virtio-GPU driver and free the corresponding data in host by
> > > > > some virtio CMDs as far as I can see.
> > > > > 
> > > > > And for the second way that is use hmm_range_fault, there is a predictable
> > > > > issues as far as I can see, at least in hsakmt/KFD stack. That is the memory
> > > > > may migrate when GPU/device is working. In bare metal, when memory is
> > > > > migrating KFD driver will pause the compute work of the device in
> > > > > mmap_wirte_lock then use hmm_range_fault to remap the migrated/evicted
> > > > > memories to GPU then restore the compute work of device to ensure the
> > > > > correction of the data. But in virtio-GPU driver the migration happen in
> > > > > guest kernel, the evict mmu notifier callback happens in guest, a virtio CMD
> > > > > can be used for notify host but as lack of mmap_write_lock protection in
> > > > > host kernel, host will hold invalid data for a short period of time, this
> > > > > may lead to some issues. And it is hard to fix as far as I can see.
> > > > > 
> > > > > I will extract some APIs into helper according to your request, and I will
> > > > > refactor the whole userptr implementation, use some callbacks in page
> > > > > getting path, let the pin method and hmm_range_fault can be choiced
> > > > > in this series patches.
> > > > 
> > > > Ok, so if this is for svm, then you need full blast hmm, or the semantics
> > > > are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
> > > > not work.
> > > > 
> > > > The other option is that hsakmt/kfd api is completely busted, and that's
> > > > kinda not a kernel problem.
> > > > -Sima
> > > 
> > > On further thought, I believe the driver needs to migrate the pages to
> > > device memory (really a virtio-GPU blob object) *and* take a FOLL_LONGTERM
> > > pin on them.  The reason is that it isn’t possible to migrate these pages
> > > back to "host" memory without unmapping them from the GPU.  For the reasons
> > > I mention in [1], I believe that temporarily revoking access to virtio-GPU
> > > blob objects is not feasible.  Instead, the pages must be treated as if
> > > they are permanently in device memory until guest userspace unmaps them
> > > from the GPU, after which they must be migrated back to host memory.
> > 
> > Discussion on IRC indicates that migration isn't reliable.  This is
> > because Linux core memory management is largely lock-free for
> > performance reasons, so there is no way to prevent temporary elevation
> > of a page's reference count.  A page with an elevated reference count
> > cannot be migrated.
> > 
> > The only alternative I can think of is for the hypervisor to perform the
> > migration.  The hypervisor can revoke the guest's access to the page
> > without the guest's consent or involvement.  The host can then replace
> > the page with one of its own pages, which might be on the CPU or GPU.
> > Further migration between the CPU and GPU is controlled by the host
> > kernel-mode driver (KMD) and host kernel memory management.  The guest
> > kernel driver must take a FOLL_LONGTERM pin before telling the host to
> > use the pages, but that is all.
> > 
> > On KVM, this should be essentially automatic, as guest memory really is
> > just host userspace memory.  On Xen, this requires that the backend
> > domain can revoke fronted access to _any_ frontend page, or at least
> > frontend pages that have been granted to the backend.  The backend will
> > then need to be able to handle page faults for the frontend pages, and
> > replace the frontend pages with its own pages at will.  Furthermore,
> > revoking pages that the backend has installed into the frontend must
> > never fail, because the backend will panic if it does fail.
> > 
> > Sima, is putting guest pages under host kernel control the only option?
> > I thought that this could be avoided by leaving the pages on the CPU if
> > migration fails, but that won't work because there will be no way to
> > migrate them to the GPU later, causing performance problems that would
> > be impossible to debug.  Is waiting (possibly forever) on migration to
> > finish an option?  Otherwise, this might mean extra complexity in the
> > Xen hypervisor, as I do not believe the primitives needed are currently
> > available.  Specifically, in addition to the primitives discussed at Xen
> > Project Summit 2024, the backend also needs to intercept access to, and
> > replace the contents of, arbitrary frontend-controlled pages.
> 
> Hi Demi,
> 
> I agree that to achieve the complete SVM feature in virtio-GPU, it is
> necessary to have the hypervisor deeply involved and add new features.
> It needs solid design, I saw the detailed reply in a another thread, it
> is very helpful,looking forward to the response from the Xen/hypervisor
> experts.

From further discussion with Sima, I suspect that virtio-GPU cannot
support SVM with reasonable performance.  Native contexts have such good
performance for graphics workloads because graphics workloads very rarely
perform blocking waits for host GPU operations to complete, so one can
make all frequently-used operations asynchronous and therefore hide the
guest <=> host latency.  SVM seems to require many synchronous GPU
operations, and I believe those will severely harm performance with
virtio-GPU.

If you need full SVM for your workloads, I recommend using hardware
SR-IOV.  This should allow the guest to perform GPU virtual memory
operations without host involvement, which I expect will be much faster
than paravirtualizing these operations.  Scalable I/O virtualization
might also work, but it might also require paravirtualizing the
performance-critical address-space operations unless the hardware has
stage 2 translation tables.

> So for the current virito-GPU userptr implementation, It can not support the
> full SVM feature, it just can only let GPU access the user space memory,
> maybe can be called by userptr feature. I think I will finish this small
> part firstly and then to try to complete the whole SVM feature.

I think you will still have problems if the host is able to migrate
pages in any way.  This requires that the host install an MMU notifier
for the pages it has received from the guest, which in turn implies that
the host must be able to prevent the guest from accessing the pages.
If the pages are used in grant table operations, this isn't possible.

If you are willing to have the pages be pinned on the host side things
are much simpler.  Such pages will always be in system memory, and will
never be able to migrate to VRAM.  This will result in a performance
penalty and will likely require explicit prefetching by programs using
ROCm, but this may be acceptable for your use-cases.  The performance
penalty is the same as that with XNACK disabled, which is the case for
all RDNA2+ GPUs, so all code that aims to be portable to recent consumer
hardware will have to account for it anyway.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2025-02-06 18:21                 ` Demi Marie Obenour
@ 2025-02-07 11:07                   ` Huang, Honglei1
  2025-02-08  2:30                     ` Demi Marie Obenour
  0 siblings, 1 reply; 23+ messages in thread
From: Huang, Honglei1 @ 2025-02-07 11:07 UTC (permalink / raw)
  To: Demi Marie Obenour
  Cc: Demi Marie Obenour, Huang, Ray, Stabellini, Stefano,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, David Airlie,
	dri-devel@lists.freedesktop.org, Dmitry Osipenko, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Zhu, Lingshan,
	Xen developer discussion, Kernel KVM virtualization development,
	Xenia Ragiadakou, Marek Marczykowski-Górecki

On 2025/2/7 2:21, Demi Marie Obenour wrote:
> On Thu, Feb 06, 2025 at 06:53:55PM +0800, Huang, Honglei1 wrote:
>> On 2025/1/31 8:33, Demi Marie Obenour wrote:
>>> On Wed, Jan 29, 2025 at 03:54:59PM -0500, Demi Marie Obenour wrote:
>>>> On 1/8/25 12:05 PM, Simona Vetter wrote:
>>>>> On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
>>>>>>
>>>>>> On 2024/12/22 9:59, Demi Marie Obenour wrote:
>>>>>>> On 12/20/24 10:35 AM, Simona Vetter wrote:
>>>>>>>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>>>>>>>>> From: Honglei Huang <Honglei1.Huang@amd.com>
>>>>>>>>>
>>>>>>>>> A virtio-gpu userptr is based on HMM notifier.
>>>>>>>>> Used for let host access guest userspace memory and
>>>>>>>>> notice the change of userspace memory.
>>>>>>>>> This series patches are in very beginning state,
>>>>>>>>> User space are pinned currently to ensure the host
>>>>>>>>> device memory operations are correct.
>>>>>>>>> The free and unmap operations for userspace can be
>>>>>>>>> handled by MMU notifier this is a simple and basice
>>>>>>>>> SVM feature for this series patches.
>>>>>>>>> The physical PFNS update operations is splited into
>>>>>>>>> two OPs in here. The evicted memories won't be used
>>>>>>>>> anymore but remap into host again to achieve same
>>>>>>>>> effect with hmm_rang_fault.
>>>>>>>>
>>>>>>>> So in my opinion there are two ways to implement userptr that make sense:
>>>>>>>>
>>>>>>>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
>>>>>>>>      notifier
>>>>>>>>
>>>>>>>> - unpinnned userptr where you entirely rely on userptr and do not hold any
>>>>>>>>      page references or page pins at all, for full SVM integration. This
>>>>>>>>      should use hmm_range_fault ideally, since that's the version that
>>>>>>>>      doesn't ever grab any page reference pins.
>>>>>>>>
>>>>>>>> All the in-between variants are imo really bad hacks, whether they hold a
>>>>>>>> page reference or a temporary page pin (which seems to be what you're
>>>>>>>> doing here). In much older kernels there was some justification for them,
>>>>>>>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
>>>>>>>> now all sorted out. So there's really only fully pinned, or true svm left
>>>>>>>> as clean design choices imo.
>>>>>>>>
>>>>>>>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
>>>>>>>> you?
>>>>>>>
>>>>>>> +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
>>>>>>> in complexity that pinning everything avoids.  Furthermore, this avoids the
>>>>>>> host having to take action in response to guest memory reclaim requests.
>>>>>>> This avoids additional complexity (and thus attack surface) on the host side.
>>>>>>> Furthermore, since this is for ROCm and not for graphics, I am less concerned
>>>>>>> about supporting systems that require swappable GPU VRAM.
>>>>>>
>>>>>> Hi Sima and Demi,
>>>>>>
>>>>>> I totally agree the flag FOLL_LONGTERM is needed, I will add it in next
>>>>>> version.
>>>>>>
>>>>>> And for the first pin variants implementation, the MMU notifier is also
>>>>>> needed I think.Cause the userptr feature in UMD generally used like this:
>>>>>> the registering of userptr always is explicitly invoked by user code like
>>>>>> "registerMemoryToGPU(userptrAddr, ...)", but for the userptr release/free,
>>>>>> there is no explicit API for it, at least in hsakmt/KFD stack. User just
>>>>>> need call system call "free(userptrAddr)", then kernel driver will release
>>>>>> the userptr by MMU notifier callback.Virtio-GPU has no other way to know if
>>>>>> user has been free the userptr except for MMU notifior.And in UMD theres is
>>>>>> no way to get the free() operation is invoked by user.The only way is use
>>>>>> MMU notifier in virtio-GPU driver and free the corresponding data in host by
>>>>>> some virtio CMDs as far as I can see.
>>>>>>
>>>>>> And for the second way that is use hmm_range_fault, there is a predictable
>>>>>> issues as far as I can see, at least in hsakmt/KFD stack. That is the memory
>>>>>> may migrate when GPU/device is working. In bare metal, when memory is
>>>>>> migrating KFD driver will pause the compute work of the device in
>>>>>> mmap_wirte_lock then use hmm_range_fault to remap the migrated/evicted
>>>>>> memories to GPU then restore the compute work of device to ensure the
>>>>>> correction of the data. But in virtio-GPU driver the migration happen in
>>>>>> guest kernel, the evict mmu notifier callback happens in guest, a virtio CMD
>>>>>> can be used for notify host but as lack of mmap_write_lock protection in
>>>>>> host kernel, host will hold invalid data for a short period of time, this
>>>>>> may lead to some issues. And it is hard to fix as far as I can see.
>>>>>>
>>>>>> I will extract some APIs into helper according to your request, and I will
>>>>>> refactor the whole userptr implementation, use some callbacks in page
>>>>>> getting path, let the pin method and hmm_range_fault can be choiced
>>>>>> in this series patches.
>>>>>
>>>>> Ok, so if this is for svm, then you need full blast hmm, or the semantics
>>>>> are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
>>>>> not work.
>>>>>
>>>>> The other option is that hsakmt/kfd api is completely busted, and that's
>>>>> kinda not a kernel problem.
>>>>> -Sima
>>>>
>>>> On further thought, I believe the driver needs to migrate the pages to
>>>> device memory (really a virtio-GPU blob object) *and* take a FOLL_LONGTERM
>>>> pin on them.  The reason is that it isn’t possible to migrate these pages
>>>> back to "host" memory without unmapping them from the GPU.  For the reasons
>>>> I mention in [1], I believe that temporarily revoking access to virtio-GPU
>>>> blob objects is not feasible.  Instead, the pages must be treated as if
>>>> they are permanently in device memory until guest userspace unmaps them
>>>> from the GPU, after which they must be migrated back to host memory.
>>>
>>> Discussion on IRC indicates that migration isn't reliable.  This is
>>> because Linux core memory management is largely lock-free for
>>> performance reasons, so there is no way to prevent temporary elevation
>>> of a page's reference count.  A page with an elevated reference count
>>> cannot be migrated.
>>>
>>> The only alternative I can think of is for the hypervisor to perform the
>>> migration.  The hypervisor can revoke the guest's access to the page
>>> without the guest's consent or involvement.  The host can then replace
>>> the page with one of its own pages, which might be on the CPU or GPU.
>>> Further migration between the CPU and GPU is controlled by the host
>>> kernel-mode driver (KMD) and host kernel memory management.  The guest
>>> kernel driver must take a FOLL_LONGTERM pin before telling the host to
>>> use the pages, but that is all.
>>>
>>> On KVM, this should be essentially automatic, as guest memory really is
>>> just host userspace memory.  On Xen, this requires that the backend
>>> domain can revoke fronted access to _any_ frontend page, or at least
>>> frontend pages that have been granted to the backend.  The backend will
>>> then need to be able to handle page faults for the frontend pages, and
>>> replace the frontend pages with its own pages at will.  Furthermore,
>>> revoking pages that the backend has installed into the frontend must
>>> never fail, because the backend will panic if it does fail.
>>>
>>> Sima, is putting guest pages under host kernel control the only option?
>>> I thought that this could be avoided by leaving the pages on the CPU if
>>> migration fails, but that won't work because there will be no way to
>>> migrate them to the GPU later, causing performance problems that would
>>> be impossible to debug.  Is waiting (possibly forever) on migration to
>>> finish an option?  Otherwise, this might mean extra complexity in the
>>> Xen hypervisor, as I do not believe the primitives needed are currently
>>> available.  Specifically, in addition to the primitives discussed at Xen
>>> Project Summit 2024, the backend also needs to intercept access to, and
>>> replace the contents of, arbitrary frontend-controlled pages.
>>
>> Hi Demi,
>>
>> I agree that to achieve the complete SVM feature in virtio-GPU, it is
>> necessary to have the hypervisor deeply involved and add new features.
>> It needs solid design, I saw the detailed reply in a another thread, it
>> is very helpful,looking forward to the response from the Xen/hypervisor
>> experts.
> 
>  From further discussion with Sima, I suspect that virtio-GPU cannot
> support SVM with reasonable performance.  Native contexts have such good
> performance for graphics workloads because graphics workloads very rarely
> perform blocking waits for host GPU operations to complete, so one can
> make all frequently-used operations asynchronous and therefore hide the
> guest <=> host latency.  SVM seems to require many synchronous GPU
> operations, and I believe those will severely harm performance with
> virtio-GPU.
> 
> If you need full SVM for your workloads, I recommend using hardware
> SR-IOV.  This should allow the guest to perform GPU virtual memory
> operations without host involvement, which I expect will be much faster
> than paravirtualizing these operations.  Scalable I/O virtualization
> might also work, but it might also require paravirtualizing the
> performance-critical address-space operations unless the hardware has
> stage 2 translation tables.
> 

Yes I think so, the SR-IOV or some other hardware virtualization are 
clean design for ROCm/compute currently. But actually those hardware 
features supported solution also have their own limitation, like high 
hardware cost and the performance decreasing caused by different guest 
VMs hardware workload schedule. We are trying a low-cost, 
high-performance virtualization solution, it appears to be difficult to 
support full feature VS SR-IOV at present. But it doesn't prevent us 
from enabling part of functions.

>> So for the current virito-GPU userptr implementation, It can not support the
>> full SVM feature, it just can only let GPU access the user space memory,
>> maybe can be called by userptr feature. I think I will finish this small
>> part firstly and then to try to complete the whole SVM feature.
> 
> I think you will still have problems if the host is able to migrate
> pages in any way.  This requires that the host install an MMU notifier
> for the pages it has received from the guest, which in turn implies that
> the host must be able to prevent the guest from accessing the pages.
> If the pages are used in grant table operations, this isn't possible.
> 
> If you are willing to have the pages be pinned on the host side things
> are much simpler.  Such pages will always be in system memory, and will
> never be able to migrate to VRAM.  This will result in a performance
> penalty and will likely require explicit prefetching by programs using
> ROCm, but this may be acceptable for your use-cases.  The performance
> penalty is the same as that with XNACK disabled, which is the case for
> all RDNA2+ GPUs, so all code that aims to be portable to recent consumer
> hardware will have to account for it anyway.

Totally agreed. Actually memory migrating to VRAM is very common in GFX 
side, but in ROCm/KFD, maybe it can be disabled and not often used as 
far as I know. ROCm/KFD always uses SDMA to transfer or copy data maybe 
this is faster than migrating to VRAM (needs further verification).
But we have some method to workaround it. Really thanks for your reminding.

Regards,
Honglei



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2025-02-07 11:07                   ` Huang, Honglei1
@ 2025-02-08  2:30                     ` Demi Marie Obenour
  2025-02-08  2:43                       ` Demi Marie Obenour
  0 siblings, 1 reply; 23+ messages in thread
From: Demi Marie Obenour @ 2025-02-08  2:30 UTC (permalink / raw)
  To: Huang, Honglei1
  Cc: Demi Marie Obenour, Huang, Ray, Stabellini, Stefano,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, David Airlie,
	dri-devel@lists.freedesktop.org, Dmitry Osipenko, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Zhu, Lingshan,
	Xen developer discussion, Kernel KVM virtualization development,
	Xenia Ragiadakou, Marek Marczykowski-Górecki

[-- Attachment #1: Type: text/plain, Size: 16077 bytes --]

On Fri, Feb 07, 2025 at 07:07:11PM +0800, Huang, Honglei1 wrote:
> On 2025/2/7 2:21, Demi Marie Obenour wrote:
> > On Thu, Feb 06, 2025 at 06:53:55PM +0800, Huang, Honglei1 wrote:
> > > On 2025/1/31 8:33, Demi Marie Obenour wrote:
> > > > On Wed, Jan 29, 2025 at 03:54:59PM -0500, Demi Marie Obenour wrote:
> > > > > On 1/8/25 12:05 PM, Simona Vetter wrote:
> > > > > > On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
> > > > > > > 
> > > > > > > On 2024/12/22 9:59, Demi Marie Obenour wrote:
> > > > > > > > On 12/20/24 10:35 AM, Simona Vetter wrote:
> > > > > > > > > On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
> > > > > > > > > > From: Honglei Huang <Honglei1.Huang@amd.com>
> > > > > > > > > > 
> > > > > > > > > > A virtio-gpu userptr is based on HMM notifier.
> > > > > > > > > > Used for let host access guest userspace memory and
> > > > > > > > > > notice the change of userspace memory.
> > > > > > > > > > This series patches are in very beginning state,
> > > > > > > > > > User space are pinned currently to ensure the host
> > > > > > > > > > device memory operations are correct.
> > > > > > > > > > The free and unmap operations for userspace can be
> > > > > > > > > > handled by MMU notifier this is a simple and basice
> > > > > > > > > > SVM feature for this series patches.
> > > > > > > > > > The physical PFNS update operations is splited into
> > > > > > > > > > two OPs in here. The evicted memories won't be used
> > > > > > > > > > anymore but remap into host again to achieve same
> > > > > > > > > > effect with hmm_rang_fault.
> > > > > > > > > 
> > > > > > > > > So in my opinion there are two ways to implement userptr that make sense:
> > > > > > > > > 
> > > > > > > > > - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
> > > > > > > > >      notifier
> > > > > > > > > 
> > > > > > > > > - unpinnned userptr where you entirely rely on userptr and do not hold any
> > > > > > > > >      page references or page pins at all, for full SVM integration. This
> > > > > > > > >      should use hmm_range_fault ideally, since that's the version that
> > > > > > > > >      doesn't ever grab any page reference pins.
> > > > > > > > > 
> > > > > > > > > All the in-between variants are imo really bad hacks, whether they hold a
> > > > > > > > > page reference or a temporary page pin (which seems to be what you're
> > > > > > > > > doing here). In much older kernels there was some justification for them,
> > > > > > > > > because strange stuff happened over fork(), but with FOLL_LONGTERM this is
> > > > > > > > > now all sorted out. So there's really only fully pinned, or true svm left
> > > > > > > > > as clean design choices imo.
> > > > > > > > > 
> > > > > > > > > With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
> > > > > > > > > you?
> > > > > > > > 
> > > > > > > > +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
> > > > > > > > in complexity that pinning everything avoids.  Furthermore, this avoids the
> > > > > > > > host having to take action in response to guest memory reclaim requests.
> > > > > > > > This avoids additional complexity (and thus attack surface) on the host side.
> > > > > > > > Furthermore, since this is for ROCm and not for graphics, I am less concerned
> > > > > > > > about supporting systems that require swappable GPU VRAM.
> > > > > > > 
> > > > > > > Hi Sima and Demi,
> > > > > > > 
> > > > > > > I totally agree the flag FOLL_LONGTERM is needed, I will add it in next
> > > > > > > version.
> > > > > > > 
> > > > > > > And for the first pin variants implementation, the MMU notifier is also
> > > > > > > needed I think.Cause the userptr feature in UMD generally used like this:
> > > > > > > the registering of userptr always is explicitly invoked by user code like
> > > > > > > "registerMemoryToGPU(userptrAddr, ...)", but for the userptr release/free,
> > > > > > > there is no explicit API for it, at least in hsakmt/KFD stack. User just
> > > > > > > need call system call "free(userptrAddr)", then kernel driver will release
> > > > > > > the userptr by MMU notifier callback.Virtio-GPU has no other way to know if
> > > > > > > user has been free the userptr except for MMU notifior.And in UMD theres is
> > > > > > > no way to get the free() operation is invoked by user.The only way is use
> > > > > > > MMU notifier in virtio-GPU driver and free the corresponding data in host by
> > > > > > > some virtio CMDs as far as I can see.
> > > > > > > 
> > > > > > > And for the second way that is use hmm_range_fault, there is a predictable
> > > > > > > issues as far as I can see, at least in hsakmt/KFD stack. That is the memory
> > > > > > > may migrate when GPU/device is working. In bare metal, when memory is
> > > > > > > migrating KFD driver will pause the compute work of the device in
> > > > > > > mmap_wirte_lock then use hmm_range_fault to remap the migrated/evicted
> > > > > > > memories to GPU then restore the compute work of device to ensure the
> > > > > > > correction of the data. But in virtio-GPU driver the migration happen in
> > > > > > > guest kernel, the evict mmu notifier callback happens in guest, a virtio CMD
> > > > > > > can be used for notify host but as lack of mmap_write_lock protection in
> > > > > > > host kernel, host will hold invalid data for a short period of time, this
> > > > > > > may lead to some issues. And it is hard to fix as far as I can see.
> > > > > > > 
> > > > > > > I will extract some APIs into helper according to your request, and I will
> > > > > > > refactor the whole userptr implementation, use some callbacks in page
> > > > > > > getting path, let the pin method and hmm_range_fault can be choiced
> > > > > > > in this series patches.
> > > > > > 
> > > > > > Ok, so if this is for svm, then you need full blast hmm, or the semantics
> > > > > > are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
> > > > > > not work.
> > > > > > 
> > > > > > The other option is that hsakmt/kfd api is completely busted, and that's
> > > > > > kinda not a kernel problem.
> > > > > > -Sima
> > > > > 
> > > > > On further thought, I believe the driver needs to migrate the pages to
> > > > > device memory (really a virtio-GPU blob object) *and* take a FOLL_LONGTERM
> > > > > pin on them.  The reason is that it isn’t possible to migrate these pages
> > > > > back to "host" memory without unmapping them from the GPU.  For the reasons
> > > > > I mention in [1], I believe that temporarily revoking access to virtio-GPU
> > > > > blob objects is not feasible.  Instead, the pages must be treated as if
> > > > > they are permanently in device memory until guest userspace unmaps them
> > > > > from the GPU, after which they must be migrated back to host memory.
> > > > 
> > > > Discussion on IRC indicates that migration isn't reliable.  This is
> > > > because Linux core memory management is largely lock-free for
> > > > performance reasons, so there is no way to prevent temporary elevation
> > > > of a page's reference count.  A page with an elevated reference count
> > > > cannot be migrated.
> > > > 
> > > > The only alternative I can think of is for the hypervisor to perform the
> > > > migration.  The hypervisor can revoke the guest's access to the page
> > > > without the guest's consent or involvement.  The host can then replace
> > > > the page with one of its own pages, which might be on the CPU or GPU.
> > > > Further migration between the CPU and GPU is controlled by the host
> > > > kernel-mode driver (KMD) and host kernel memory management.  The guest
> > > > kernel driver must take a FOLL_LONGTERM pin before telling the host to
> > > > use the pages, but that is all.
> > > > 
> > > > On KVM, this should be essentially automatic, as guest memory really is
> > > > just host userspace memory.  On Xen, this requires that the backend
> > > > domain can revoke fronted access to _any_ frontend page, or at least
> > > > frontend pages that have been granted to the backend.  The backend will
> > > > then need to be able to handle page faults for the frontend pages, and
> > > > replace the frontend pages with its own pages at will.  Furthermore,
> > > > revoking pages that the backend has installed into the frontend must
> > > > never fail, because the backend will panic if it does fail.
> > > > 
> > > > Sima, is putting guest pages under host kernel control the only option?
> > > > I thought that this could be avoided by leaving the pages on the CPU if
> > > > migration fails, but that won't work because there will be no way to
> > > > migrate them to the GPU later, causing performance problems that would
> > > > be impossible to debug.  Is waiting (possibly forever) on migration to
> > > > finish an option?  Otherwise, this might mean extra complexity in the
> > > > Xen hypervisor, as I do not believe the primitives needed are currently
> > > > available.  Specifically, in addition to the primitives discussed at Xen
> > > > Project Summit 2024, the backend also needs to intercept access to, and
> > > > replace the contents of, arbitrary frontend-controlled pages.
> > > 
> > > Hi Demi,
> > > 
> > > I agree that to achieve the complete SVM feature in virtio-GPU, it is
> > > necessary to have the hypervisor deeply involved and add new features.
> > > It needs solid design, I saw the detailed reply in a another thread, it
> > > is very helpful,looking forward to the response from the Xen/hypervisor
> > > experts.
> > 
> >  From further discussion with Sima, I suspect that virtio-GPU cannot
> > support SVM with reasonable performance.  Native contexts have such good
> > performance for graphics workloads because graphics workloads very rarely
> > perform blocking waits for host GPU operations to complete, so one can
> > make all frequently-used operations asynchronous and therefore hide the
> > guest <=> host latency.  SVM seems to require many synchronous GPU
> > operations, and I believe those will severely harm performance with
> > virtio-GPU.
> > 
> > If you need full SVM for your workloads, I recommend using hardware
> > SR-IOV.  This should allow the guest to perform GPU virtual memory
> > operations without host involvement, which I expect will be much faster
> > than paravirtualizing these operations.  Scalable I/O virtualization
> > might also work, but it might also require paravirtualizing the
> > performance-critical address-space operations unless the hardware has
> > stage 2 translation tables.
> > 
> 
> Yes I think so, the SR-IOV or some other hardware virtualization are clean
> design for ROCm/compute currently. But actually those hardware features
> supported solution also have their own limitation, like high hardware cost
> and the performance decreasing caused by different guest VMs hardware
> workload schedule. We are trying a low-cost, high-performance virtualization
> solution, it appears to be difficult to support full feature VS SR-IOV at
> present. But it doesn't prevent us from enabling part of functions.
> 
> > > So for the current virito-GPU userptr implementation, It can not support the
> > > full SVM feature, it just can only let GPU access the user space memory,
> > > maybe can be called by userptr feature. I think I will finish this small
> > > part firstly and then to try to complete the whole SVM feature.
> > 
> > I think you will still have problems if the host is able to migrate
> > pages in any way.  This requires that the host install an MMU notifier
> > for the pages it has received from the guest, which in turn implies that
> > the host must be able to prevent the guest from accessing the pages.
> > If the pages are used in grant table operations, this isn't possible.
> > 
> > If you are willing to have the pages be pinned on the host side things
> > are much simpler.  Such pages will always be in system memory, and will
> > never be able to migrate to VRAM.  This will result in a performance
> > penalty and will likely require explicit prefetching by programs using
> > ROCm, but this may be acceptable for your use-cases.  The performance
> > penalty is the same as that with XNACK disabled, which is the case for
> > all RDNA2+ GPUs, so all code that aims to be portable to recent consumer
> > hardware will have to account for it anyway.
> 
> Totally agreed. Actually memory migrating to VRAM is very common in GFX
> side, but in ROCm/KFD, maybe it can be disabled and not often used as far as
> I know. ROCm/KFD always uses SDMA to transfer or copy data maybe this is
> faster than migrating to VRAM (needs further verification).
> But we have some method to workaround it. Really thanks for your reminding.

I think you will do okay if you treat virtio-GPU as providing a virtual
GPU with no XNACK support.  XNACK is necessary for migrating pages
between GPU and CPU based on demand, and it is this migration that is
so hard to implement.  Furthermore, I highly doubt that the combination
of AMDKFD and the hardware you are targeting even supports XNACK.

At Xen Project Summit 2024, AMD mentioned that it wanted to enable both
rendering (Vulkan/OpenGL) and compute (ROCm) with virtio-GPU native
contexts under Xen.  The only GPUs for which AMDKFD will enable XNACK
support are GFX9 GPUs, which are GCN and CDNA.  Shipping a GCN GPU in a
new design would be very unusual and CDNA (Instinct) accelerators do not
support graphics, so either AMD is using separate devices for compute
and graphics or the workloads will run with no XNACK support.  Since you
mention HW cost as an important consideration, I suspect the latter.

I believe that the Instinct accelerators that support XNACK also support
SR-IOV, but if you wish to combine XNACK and virtio-GPU, this should be
possible subject to caveats.  The main caveat is that under no
circumstances can the host's Xen driver install an MMU notifier for
pages that the guest can use in grant table operations or DMA.  A driver
that installs an MMU notifier promises that it can block access to
pages in a bounded amount of time, and if the guest can DMA to the pages
or grant them to other domains this is not possible.  Without the Xen
driver installing an MMU notifier, there is no way for the pages to be
migrated to the GPU without risking use-after-free or at least data
corruption.  Instead, one of the following options will be needed:

1. hipMallocManaged() allocates the memory from the backend using the
   Map primitive discussed elsewhere.  Such memory is not mappable in
   the IOMMU (if there is an assigned PCI device) and cannot be used for
   grant table operations.  Memory allocated via system allocators
   (anonymous pages) is not able to be migrated.

2. The frontend uses shadow buffers for all I/O.  This allows the
   backend to use a new Steal primitive to revoke the guest's accesses
   to anonymous pages and handle page faults accordingly.

3. Same as 2 except that the frontend allocates all memory (except
   bounce buffers) from the backend, just like a KVM guest does, rather
   than from the Xen toolstack.

4. The frontend tries to migrate the pages to backend-provided ones, and
   falls back to leaving them pinned on the CPU.  The frontend's MMU
   notifier tells the backend to stop accessing the pages, blocking
   until the backend confirms this.  The frontend then moves the pages
   to its own memory and returns from the notifier.  This may require
   new AMDKFD APIs.

5. Same as 4 except that the frontend uses hmm_range_fault to move the
   pages to the backend in response to GPU page faults.  This requires a
   frontend <-> backend round-trip for each fault (slooooow) so a new
   fast mechanism for this might be needed.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
  2025-02-08  2:30                     ` Demi Marie Obenour
@ 2025-02-08  2:43                       ` Demi Marie Obenour
       [not found]                         ` <d259279c-9989-410f-907d-9bf0b318bc84@amd.com>
  0 siblings, 1 reply; 23+ messages in thread
From: Demi Marie Obenour @ 2025-02-08  2:43 UTC (permalink / raw)
  To: Huang, Honglei1
  Cc: Demi Marie Obenour, Huang, Ray, Stabellini, Stefano,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, David Airlie,
	dri-devel@lists.freedesktop.org, Dmitry Osipenko, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Zhu, Lingshan,
	Xen developer discussion, Kernel KVM virtualization development,
	Xenia Ragiadakou, Marek Marczykowski-Górecki, Simona Vetter

[-- Attachment #1: Type: text/plain, Size: 16752 bytes --]

On Fri, Feb 07, 2025 at 09:30:45PM -0500, Demi Marie Obenour wrote:
> On Fri, Feb 07, 2025 at 07:07:11PM +0800, Huang, Honglei1 wrote:
> > On 2025/2/7 2:21, Demi Marie Obenour wrote:
> > > On Thu, Feb 06, 2025 at 06:53:55PM +0800, Huang, Honglei1 wrote:
> > > > On 2025/1/31 8:33, Demi Marie Obenour wrote:
> > > > > On Wed, Jan 29, 2025 at 03:54:59PM -0500, Demi Marie Obenour wrote:
> > > > > > On 1/8/25 12:05 PM, Simona Vetter wrote:
> > > > > > > On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
> > > > > > > > 
> > > > > > > > On 2024/12/22 9:59, Demi Marie Obenour wrote:
> > > > > > > > > On 12/20/24 10:35 AM, Simona Vetter wrote:
> > > > > > > > > > On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
> > > > > > > > > > > From: Honglei Huang <Honglei1.Huang@amd.com>
> > > > > > > > > > > 
> > > > > > > > > > > A virtio-gpu userptr is based on HMM notifier.
> > > > > > > > > > > Used for let host access guest userspace memory and
> > > > > > > > > > > notice the change of userspace memory.
> > > > > > > > > > > This series patches are in very beginning state,
> > > > > > > > > > > User space are pinned currently to ensure the host
> > > > > > > > > > > device memory operations are correct.
> > > > > > > > > > > The free and unmap operations for userspace can be
> > > > > > > > > > > handled by MMU notifier this is a simple and basice
> > > > > > > > > > > SVM feature for this series patches.
> > > > > > > > > > > The physical PFNS update operations is splited into
> > > > > > > > > > > two OPs in here. The evicted memories won't be used
> > > > > > > > > > > anymore but remap into host again to achieve same
> > > > > > > > > > > effect with hmm_rang_fault.
> > > > > > > > > > 
> > > > > > > > > > So in my opinion there are two ways to implement userptr that make sense:
> > > > > > > > > > 
> > > > > > > > > > - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
> > > > > > > > > >      notifier
> > > > > > > > > > 
> > > > > > > > > > - unpinnned userptr where you entirely rely on userptr and do not hold any
> > > > > > > > > >      page references or page pins at all, for full SVM integration. This
> > > > > > > > > >      should use hmm_range_fault ideally, since that's the version that
> > > > > > > > > >      doesn't ever grab any page reference pins.
> > > > > > > > > > 
> > > > > > > > > > All the in-between variants are imo really bad hacks, whether they hold a
> > > > > > > > > > page reference or a temporary page pin (which seems to be what you're
> > > > > > > > > > doing here). In much older kernels there was some justification for them,
> > > > > > > > > > because strange stuff happened over fork(), but with FOLL_LONGTERM this is
> > > > > > > > > > now all sorted out. So there's really only fully pinned, or true svm left
> > > > > > > > > > as clean design choices imo.
> > > > > > > > > > 
> > > > > > > > > > With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
> > > > > > > > > > you?
> > > > > > > > > 
> > > > > > > > > +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
> > > > > > > > > in complexity that pinning everything avoids.  Furthermore, this avoids the
> > > > > > > > > host having to take action in response to guest memory reclaim requests.
> > > > > > > > > This avoids additional complexity (and thus attack surface) on the host side.
> > > > > > > > > Furthermore, since this is for ROCm and not for graphics, I am less concerned
> > > > > > > > > about supporting systems that require swappable GPU VRAM.
> > > > > > > > 
> > > > > > > > Hi Sima and Demi,
> > > > > > > > 
> > > > > > > > I totally agree the flag FOLL_LONGTERM is needed, I will add it in next
> > > > > > > > version.
> > > > > > > > 
> > > > > > > > And for the first pin variants implementation, the MMU notifier is also
> > > > > > > > needed I think.Cause the userptr feature in UMD generally used like this:
> > > > > > > > the registering of userptr always is explicitly invoked by user code like
> > > > > > > > "registerMemoryToGPU(userptrAddr, ...)", but for the userptr release/free,
> > > > > > > > there is no explicit API for it, at least in hsakmt/KFD stack. User just
> > > > > > > > need call system call "free(userptrAddr)", then kernel driver will release
> > > > > > > > the userptr by MMU notifier callback.Virtio-GPU has no other way to know if
> > > > > > > > user has been free the userptr except for MMU notifior.And in UMD theres is
> > > > > > > > no way to get the free() operation is invoked by user.The only way is use
> > > > > > > > MMU notifier in virtio-GPU driver and free the corresponding data in host by
> > > > > > > > some virtio CMDs as far as I can see.
> > > > > > > > 
> > > > > > > > And for the second way that is use hmm_range_fault, there is a predictable
> > > > > > > > issues as far as I can see, at least in hsakmt/KFD stack. That is the memory
> > > > > > > > may migrate when GPU/device is working. In bare metal, when memory is
> > > > > > > > migrating KFD driver will pause the compute work of the device in
> > > > > > > > mmap_wirte_lock then use hmm_range_fault to remap the migrated/evicted
> > > > > > > > memories to GPU then restore the compute work of device to ensure the
> > > > > > > > correction of the data. But in virtio-GPU driver the migration happen in
> > > > > > > > guest kernel, the evict mmu notifier callback happens in guest, a virtio CMD
> > > > > > > > can be used for notify host but as lack of mmap_write_lock protection in
> > > > > > > > host kernel, host will hold invalid data for a short period of time, this
> > > > > > > > may lead to some issues. And it is hard to fix as far as I can see.
> > > > > > > > 
> > > > > > > > I will extract some APIs into helper according to your request, and I will
> > > > > > > > refactor the whole userptr implementation, use some callbacks in page
> > > > > > > > getting path, let the pin method and hmm_range_fault can be choiced
> > > > > > > > in this series patches.
> > > > > > > 
> > > > > > > Ok, so if this is for svm, then you need full blast hmm, or the semantics
> > > > > > > are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
> > > > > > > not work.
> > > > > > > 
> > > > > > > The other option is that hsakmt/kfd api is completely busted, and that's
> > > > > > > kinda not a kernel problem.
> > > > > > > -Sima
> > > > > > 
> > > > > > On further thought, I believe the driver needs to migrate the pages to
> > > > > > device memory (really a virtio-GPU blob object) *and* take a FOLL_LONGTERM
> > > > > > pin on them.  The reason is that it isn’t possible to migrate these pages
> > > > > > back to "host" memory without unmapping them from the GPU.  For the reasons
> > > > > > I mention in [1], I believe that temporarily revoking access to virtio-GPU
> > > > > > blob objects is not feasible.  Instead, the pages must be treated as if
> > > > > > they are permanently in device memory until guest userspace unmaps them
> > > > > > from the GPU, after which they must be migrated back to host memory.
> > > > > 
> > > > > Discussion on IRC indicates that migration isn't reliable.  This is
> > > > > because Linux core memory management is largely lock-free for
> > > > > performance reasons, so there is no way to prevent temporary elevation
> > > > > of a page's reference count.  A page with an elevated reference count
> > > > > cannot be migrated.
> > > > > 
> > > > > The only alternative I can think of is for the hypervisor to perform the
> > > > > migration.  The hypervisor can revoke the guest's access to the page
> > > > > without the guest's consent or involvement.  The host can then replace
> > > > > the page with one of its own pages, which might be on the CPU or GPU.
> > > > > Further migration between the CPU and GPU is controlled by the host
> > > > > kernel-mode driver (KMD) and host kernel memory management.  The guest
> > > > > kernel driver must take a FOLL_LONGTERM pin before telling the host to
> > > > > use the pages, but that is all.
> > > > > 
> > > > > On KVM, this should be essentially automatic, as guest memory really is
> > > > > just host userspace memory.  On Xen, this requires that the backend
> > > > > domain can revoke fronted access to _any_ frontend page, or at least
> > > > > frontend pages that have been granted to the backend.  The backend will
> > > > > then need to be able to handle page faults for the frontend pages, and
> > > > > replace the frontend pages with its own pages at will.  Furthermore,
> > > > > revoking pages that the backend has installed into the frontend must
> > > > > never fail, because the backend will panic if it does fail.
> > > > > 
> > > > > Sima, is putting guest pages under host kernel control the only option?
> > > > > I thought that this could be avoided by leaving the pages on the CPU if
> > > > > migration fails, but that won't work because there will be no way to
> > > > > migrate them to the GPU later, causing performance problems that would
> > > > > be impossible to debug.  Is waiting (possibly forever) on migration to
> > > > > finish an option?  Otherwise, this might mean extra complexity in the
> > > > > Xen hypervisor, as I do not believe the primitives needed are currently
> > > > > available.  Specifically, in addition to the primitives discussed at Xen
> > > > > Project Summit 2024, the backend also needs to intercept access to, and
> > > > > replace the contents of, arbitrary frontend-controlled pages.
> > > > 
> > > > Hi Demi,
> > > > 
> > > > I agree that to achieve the complete SVM feature in virtio-GPU, it is
> > > > necessary to have the hypervisor deeply involved and add new features.
> > > > It needs solid design, I saw the detailed reply in a another thread, it
> > > > is very helpful,looking forward to the response from the Xen/hypervisor
> > > > experts.
> > > 
> > >  From further discussion with Sima, I suspect that virtio-GPU cannot
> > > support SVM with reasonable performance.  Native contexts have such good
> > > performance for graphics workloads because graphics workloads very rarely
> > > perform blocking waits for host GPU operations to complete, so one can
> > > make all frequently-used operations asynchronous and therefore hide the
> > > guest <=> host latency.  SVM seems to require many synchronous GPU
> > > operations, and I believe those will severely harm performance with
> > > virtio-GPU.
> > > 
> > > If you need full SVM for your workloads, I recommend using hardware
> > > SR-IOV.  This should allow the guest to perform GPU virtual memory
> > > operations without host involvement, which I expect will be much faster
> > > than paravirtualizing these operations.  Scalable I/O virtualization
> > > might also work, but it might also require paravirtualizing the
> > > performance-critical address-space operations unless the hardware has
> > > stage 2 translation tables.
> > > 
> > 
> > Yes I think so, the SR-IOV or some other hardware virtualization are clean
> > design for ROCm/compute currently. But actually those hardware features
> > supported solution also have their own limitation, like high hardware cost
> > and the performance decreasing caused by different guest VMs hardware
> > workload schedule. We are trying a low-cost, high-performance virtualization
> > solution, it appears to be difficult to support full feature VS SR-IOV at
> > present. But it doesn't prevent us from enabling part of functions.
> > 
> > > > So for the current virito-GPU userptr implementation, It can not support the
> > > > full SVM feature, it just can only let GPU access the user space memory,
> > > > maybe can be called by userptr feature. I think I will finish this small
> > > > part firstly and then to try to complete the whole SVM feature.
> > > 
> > > I think you will still have problems if the host is able to migrate
> > > pages in any way.  This requires that the host install an MMU notifier
> > > for the pages it has received from the guest, which in turn implies that
> > > the host must be able to prevent the guest from accessing the pages.
> > > If the pages are used in grant table operations, this isn't possible.
> > > 
> > > If you are willing to have the pages be pinned on the host side things
> > > are much simpler.  Such pages will always be in system memory, and will
> > > never be able to migrate to VRAM.  This will result in a performance
> > > penalty and will likely require explicit prefetching by programs using
> > > ROCm, but this may be acceptable for your use-cases.  The performance
> > > penalty is the same as that with XNACK disabled, which is the case for
> > > all RDNA2+ GPUs, so all code that aims to be portable to recent consumer
> > > hardware will have to account for it anyway.
> > 
> > Totally agreed. Actually memory migrating to VRAM is very common in GFX
> > side, but in ROCm/KFD, maybe it can be disabled and not often used as far as
> > I know. ROCm/KFD always uses SDMA to transfer or copy data maybe this is
> > faster than migrating to VRAM (needs further verification).
> > But we have some method to workaround it. Really thanks for your reminding.
> 
> I think you will do okay if you treat virtio-GPU as providing a virtual
> GPU with no XNACK support.  XNACK is necessary for migrating pages
> between GPU and CPU based on demand, and it is this migration that is
> so hard to implement.  Furthermore, I highly doubt that the combination
> of AMDKFD and the hardware you are targeting even supports XNACK.
> 
> At Xen Project Summit 2024, AMD mentioned that it wanted to enable both
> rendering (Vulkan/OpenGL) and compute (ROCm) with virtio-GPU native
> contexts under Xen.  The only GPUs for which AMDKFD will enable XNACK
> support are GFX9 GPUs, which are GCN and CDNA.  Shipping a GCN GPU in a
> new design would be very unusual and CDNA (Instinct) accelerators do not
> support graphics, so either AMD is using separate devices for compute
> and graphics or the workloads will run with no XNACK support.  Since you
> mention HW cost as an important consideration, I suspect the latter.
> 
> I believe that the Instinct accelerators that support XNACK also support
> SR-IOV, but if you wish to combine XNACK and virtio-GPU, this should be
> possible subject to caveats.  The main caveat is that under no
> circumstances can the host's Xen driver install an MMU notifier for
> pages that the guest can use in grant table operations or DMA.  A driver
> that installs an MMU notifier promises that it can block access to
> pages in a bounded amount of time, and if the guest can DMA to the pages
> or grant them to other domains this is not possible.  Without the Xen
> driver installing an MMU notifier, there is no way for the pages to be
> migrated to the GPU without risking use-after-free or at least data
> corruption.  Instead, one of the following options will be needed:
> 
> 1. hipMallocManaged() allocates the memory from the backend using the
>    Map primitive discussed elsewhere.  Such memory is not mappable in
>    the IOMMU (if there is an assigned PCI device) and cannot be used for
>    grant table operations.  Memory allocated via system allocators
>    (anonymous pages) is not able to be migrated.
> 
> 2. The frontend uses shadow buffers for all I/O.  This allows the
>    backend to use a new Steal primitive to revoke the guest's accesses
>    to anonymous pages and handle page faults accordingly.
> 
> 3. Same as 2 except that the frontend allocates all memory (except
>    bounce buffers) from the backend, just like a KVM guest does, rather
>    than from the Xen toolstack.
> 
> 4. The frontend tries to migrate the pages to backend-provided ones, and
>    falls back to leaving them pinned on the CPU.  The frontend's MMU
>    notifier tells the backend to stop accessing the pages, blocking
>    until the backend confirms this.  The frontend then moves the pages
>    to its own memory and returns from the notifier.  This may require
>    new AMDKFD APIs.
> 
> 5. Same as 4 except that the frontend uses hmm_range_fault to move the
>    pages to the backend in response to GPU page faults.  This requires a
>    frontend <-> backend round-trip for each fault (slooooow) so a new
>    fast mechanism for this might be needed.
> -- 
> Sincerely,
> Demi Marie Obenour (she/her/hers)
> Invisible Things Lab

CC Simona Vetter
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object
       [not found]                         ` <d259279c-9989-410f-907d-9bf0b318bc84@amd.com>
@ 2025-02-08 19:47                           ` Demi Marie Obenour
  0 siblings, 0 replies; 23+ messages in thread
From: Demi Marie Obenour @ 2025-02-08 19:47 UTC (permalink / raw)
  To: Huang, Honglei1
  Cc: Demi Marie Obenour, Huang, Ray, Stabellini, Stefano,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, David Airlie,
	dri-devel@lists.freedesktop.org, Dmitry Osipenko, Gerd Hoffmann,
	Gurchetan Singh, Chia-I Wu, Akihiko Odaki, Zhu, Lingshan,
	Xen developer discussion, Kernel KVM virtualization development,
	Xenia Ragiadakou, Simona Vetter, Marek Marczykowski-Górecki

[-- Attachment #1: Type: text/plain, Size: 20664 bytes --]

On Sat, Feb 08, 2025 at 05:44:14PM +0800, Huang, Honglei1 wrote:
> On 2025/2/8 10:43, Demi Marie Obenour wrote:
> > On Fri, Feb 07, 2025 at 09:30:45PM -0500, Demi Marie Obenour wrote:
> > > On Fri, Feb 07, 2025 at 07:07:11PM +0800, Huang, Honglei1 wrote:
> > > > On 2025/2/7 2:21, Demi Marie Obenour wrote:
> > > > > On Thu, Feb 06, 2025 at 06:53:55PM +0800, Huang, Honglei1 wrote:
> > > > > > On 2025/1/31 8:33, Demi Marie Obenour wrote:
> > > > > > > On Wed, Jan 29, 2025 at 03:54:59PM -0500, Demi Marie Obenour wrote:
> > > > > > > > On 1/8/25 12:05 PM, Simona Vetter wrote:
> > > > > > > > > On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
> > > > > > > > > > 
> > > > > > > > > > On 2024/12/22 9:59, Demi Marie Obenour wrote:
> > > > > > > > > > > On 12/20/24 10:35 AM, Simona Vetter wrote:
> > > > > > > > > > > > On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
> > > > > > > > > > > > > From: Honglei Huang <Honglei1.Huang@amd.com>
> > > > > > > > > > > > > 
> > > > > > > > > > > > > A virtio-gpu userptr is based on HMM notifier.
> > > > > > > > > > > > > Used for let host access guest userspace memory and
> > > > > > > > > > > > > notice the change of userspace memory.
> > > > > > > > > > > > > This series patches are in very beginning state,
> > > > > > > > > > > > > User space are pinned currently to ensure the host
> > > > > > > > > > > > > device memory operations are correct.
> > > > > > > > > > > > > The free and unmap operations for userspace can be
> > > > > > > > > > > > > handled by MMU notifier this is a simple and basice
> > > > > > > > > > > > > SVM feature for this series patches.
> > > > > > > > > > > > > The physical PFNS update operations is splited into
> > > > > > > > > > > > > two OPs in here. The evicted memories won't be used
> > > > > > > > > > > > > anymore but remap into host again to achieve same
> > > > > > > > > > > > > effect with hmm_rang_fault.
> > > > > > > > > > > > 
> > > > > > > > > > > > So in my opinion there are two ways to implement userptr that make sense:
> > > > > > > > > > > > 
> > > > > > > > > > > > - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is not mmu
> > > > > > > > > > > >       notifier
> > > > > > > > > > > > 
> > > > > > > > > > > > - unpinnned userptr where you entirely rely on userptr and do not hold any
> > > > > > > > > > > >       page references or page pins at all, for full SVM integration. This
> > > > > > > > > > > >       should use hmm_range_fault ideally, since that's the version that
> > > > > > > > > > > >       doesn't ever grab any page reference pins.
> > > > > > > > > > > > 
> > > > > > > > > > > > All the in-between variants are imo really bad hacks, whether they hold a
> > > > > > > > > > > > page reference or a temporary page pin (which seems to be what you're
> > > > > > > > > > > > doing here). In much older kernels there was some justification for them,
> > > > > > > > > > > > because strange stuff happened over fork(), but with FOLL_LONGTERM this is
> > > > > > > > > > > > now all sorted out. So there's really only fully pinned, or true svm left
> > > > > > > > > > > > as clean design choices imo.
> > > > > > > > > > > > 
> > > > > > > > > > > > With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
> > > > > > > > > > > > you?
> > > > > > > > > > > 
> > > > > > > > > > > +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
> > > > > > > > > > > in complexity that pinning everything avoids.  Furthermore, this avoids the
> > > > > > > > > > > host having to take action in response to guest memory reclaim requests.
> > > > > > > > > > > This avoids additional complexity (and thus attack surface) on the host side.
> > > > > > > > > > > Furthermore, since this is for ROCm and not for graphics, I am less concerned
> > > > > > > > > > > about supporting systems that require swappable GPU VRAM.
> > > > > > > > > > 
> > > > > > > > > > Hi Sima and Demi,
> > > > > > > > > > 
> > > > > > > > > > I totally agree the flag FOLL_LONGTERM is needed, I will add it in next
> > > > > > > > > > version.
> > > > > > > > > > 
> > > > > > > > > > And for the first pin variants implementation, the MMU notifier is also
> > > > > > > > > > needed I think.Cause the userptr feature in UMD generally used like this:
> > > > > > > > > > the registering of userptr always is explicitly invoked by user code like
> > > > > > > > > > "registerMemoryToGPU(userptrAddr, ...)", but for the userptr release/free,
> > > > > > > > > > there is no explicit API for it, at least in hsakmt/KFD stack. User just
> > > > > > > > > > need call system call "free(userptrAddr)", then kernel driver will release
> > > > > > > > > > the userptr by MMU notifier callback.Virtio-GPU has no other way to know if
> > > > > > > > > > user has been free the userptr except for MMU notifior.And in UMD theres is
> > > > > > > > > > no way to get the free() operation is invoked by user.The only way is use
> > > > > > > > > > MMU notifier in virtio-GPU driver and free the corresponding data in host by
> > > > > > > > > > some virtio CMDs as far as I can see.
> > > > > > > > > > 
> > > > > > > > > > And for the second way that is use hmm_range_fault, there is a predictable
> > > > > > > > > > issues as far as I can see, at least in hsakmt/KFD stack. That is the memory
> > > > > > > > > > may migrate when GPU/device is working. In bare metal, when memory is
> > > > > > > > > > migrating KFD driver will pause the compute work of the device in
> > > > > > > > > > mmap_wirte_lock then use hmm_range_fault to remap the migrated/evicted
> > > > > > > > > > memories to GPU then restore the compute work of device to ensure the
> > > > > > > > > > correction of the data. But in virtio-GPU driver the migration happen in
> > > > > > > > > > guest kernel, the evict mmu notifier callback happens in guest, a virtio CMD
> > > > > > > > > > can be used for notify host but as lack of mmap_write_lock protection in
> > > > > > > > > > host kernel, host will hold invalid data for a short period of time, this
> > > > > > > > > > may lead to some issues. And it is hard to fix as far as I can see.
> > > > > > > > > > 
> > > > > > > > > > I will extract some APIs into helper according to your request, and I will
> > > > > > > > > > refactor the whole userptr implementation, use some callbacks in page
> > > > > > > > > > getting path, let the pin method and hmm_range_fault can be choiced
> > > > > > > > > > in this series patches.
> > > > > > > > > 
> > > > > > > > > Ok, so if this is for svm, then you need full blast hmm, or the semantics
> > > > > > > > > are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
> > > > > > > > > not work.
> > > > > > > > > 
> > > > > > > > > The other option is that hsakmt/kfd api is completely busted, and that's
> > > > > > > > > kinda not a kernel problem.
> > > > > > > > > -Sima
> > > > > > > > 
> > > > > > > > On further thought, I believe the driver needs to migrate the pages to
> > > > > > > > device memory (really a virtio-GPU blob object) *and* take a FOLL_LONGTERM
> > > > > > > > pin on them.  The reason is that it isn’t possible to migrate these pages
> > > > > > > > back to "host" memory without unmapping them from the GPU.  For the reasons
> > > > > > > > I mention in [1], I believe that temporarily revoking access to virtio-GPU
> > > > > > > > blob objects is not feasible.  Instead, the pages must be treated as if
> > > > > > > > they are permanently in device memory until guest userspace unmaps them
> > > > > > > > from the GPU, after which they must be migrated back to host memory.
> > > > > > > 
> > > > > > > Discussion on IRC indicates that migration isn't reliable.  This is
> > > > > > > because Linux core memory management is largely lock-free for
> > > > > > > performance reasons, so there is no way to prevent temporary elevation
> > > > > > > of a page's reference count.  A page with an elevated reference count
> > > > > > > cannot be migrated.
> > > > > > > 
> > > > > > > The only alternative I can think of is for the hypervisor to perform the
> > > > > > > migration.  The hypervisor can revoke the guest's access to the page
> > > > > > > without the guest's consent or involvement.  The host can then replace
> > > > > > > the page with one of its own pages, which might be on the CPU or GPU.
> > > > > > > Further migration between the CPU and GPU is controlled by the host
> > > > > > > kernel-mode driver (KMD) and host kernel memory management.  The guest
> > > > > > > kernel driver must take a FOLL_LONGTERM pin before telling the host to
> > > > > > > use the pages, but that is all.
> > > > > > > 
> > > > > > > On KVM, this should be essentially automatic, as guest memory really is
> > > > > > > just host userspace memory.  On Xen, this requires that the backend
> > > > > > > domain can revoke fronted access to _any_ frontend page, or at least
> > > > > > > frontend pages that have been granted to the backend.  The backend will
> > > > > > > then need to be able to handle page faults for the frontend pages, and
> > > > > > > replace the frontend pages with its own pages at will.  Furthermore,
> > > > > > > revoking pages that the backend has installed into the frontend must
> > > > > > > never fail, because the backend will panic if it does fail.
> > > > > > > 
> > > > > > > Sima, is putting guest pages under host kernel control the only option?
> > > > > > > I thought that this could be avoided by leaving the pages on the CPU if
> > > > > > > migration fails, but that won't work because there will be no way to
> > > > > > > migrate them to the GPU later, causing performance problems that would
> > > > > > > be impossible to debug.  Is waiting (possibly forever) on migration to
> > > > > > > finish an option?  Otherwise, this might mean extra complexity in the
> > > > > > > Xen hypervisor, as I do not believe the primitives needed are currently
> > > > > > > available.  Specifically, in addition to the primitives discussed at Xen
> > > > > > > Project Summit 2024, the backend also needs to intercept access to, and
> > > > > > > replace the contents of, arbitrary frontend-controlled pages.
> > > > > > 
> > > > > > Hi Demi,
> > > > > > 
> > > > > > I agree that to achieve the complete SVM feature in virtio-GPU, it is
> > > > > > necessary to have the hypervisor deeply involved and add new features.
> > > > > > It needs solid design, I saw the detailed reply in a another thread, it
> > > > > > is very helpful,looking forward to the response from the Xen/hypervisor
> > > > > > experts.
> > > > > 
> > > > >   From further discussion with Sima, I suspect that virtio-GPU cannot
> > > > > support SVM with reasonable performance.  Native contexts have such good
> > > > > performance for graphics workloads because graphics workloads very rarely
> > > > > perform blocking waits for host GPU operations to complete, so one can
> > > > > make all frequently-used operations asynchronous and therefore hide the
> > > > > guest <=> host latency.  SVM seems to require many synchronous GPU
> > > > > operations, and I believe those will severely harm performance with
> > > > > virtio-GPU.
> > > > > 
> > > > > If you need full SVM for your workloads, I recommend using hardware
> > > > > SR-IOV.  This should allow the guest to perform GPU virtual memory
> > > > > operations without host involvement, which I expect will be much faster
> > > > > than paravirtualizing these operations.  Scalable I/O virtualization
> > > > > might also work, but it might also require paravirtualizing the
> > > > > performance-critical address-space operations unless the hardware has
> > > > > stage 2 translation tables.
> > > > > 
> > > > 
> > > > Yes I think so, the SR-IOV or some other hardware virtualization are clean
> > > > design for ROCm/compute currently. But actually those hardware features
> > > > supported solution also have their own limitation, like high hardware cost
> > > > and the performance decreasing caused by different guest VMs hardware
> > > > workload schedule. We are trying a low-cost, high-performance virtualization
> > > > solution, it appears to be difficult to support full feature VS SR-IOV at
> > > > present. But it doesn't prevent us from enabling part of functions.
> > > > 
> > > > > > So for the current virito-GPU userptr implementation, It can not support the
> > > > > > full SVM feature, it just can only let GPU access the user space memory,
> > > > > > maybe can be called by userptr feature. I think I will finish this small
> > > > > > part firstly and then to try to complete the whole SVM feature.
> > > > > 
> > > > > I think you will still have problems if the host is able to migrate
> > > > > pages in any way.  This requires that the host install an MMU notifier
> > > > > for the pages it has received from the guest, which in turn implies that
> > > > > the host must be able to prevent the guest from accessing the pages.
> > > > > If the pages are used in grant table operations, this isn't possible.
> > > > > 
> > > > > If you are willing to have the pages be pinned on the host side things
> > > > > are much simpler.  Such pages will always be in system memory, and will
> > > > > never be able to migrate to VRAM.  This will result in a performance
> > > > > penalty and will likely require explicit prefetching by programs using
> > > > > ROCm, but this may be acceptable for your use-cases.  The performance
> > > > > penalty is the same as that with XNACK disabled, which is the case for
> > > > > all RDNA2+ GPUs, so all code that aims to be portable to recent consumer
> > > > > hardware will have to account for it anyway.
> > > > 
> > > > Totally agreed. Actually memory migrating to VRAM is very common in GFX
> > > > side, but in ROCm/KFD, maybe it can be disabled and not often used as far as
> > > > I know. ROCm/KFD always uses SDMA to transfer or copy data maybe this is
> > > > faster than migrating to VRAM (needs further verification).
> > > > But we have some method to workaround it. Really thanks for your reminding.
> > > 
> > > I think you will do okay if you treat virtio-GPU as providing a virtual
> > > GPU with no XNACK support.  XNACK is necessary for migrating pages
> > > between GPU and CPU based on demand, and it is this migration that is
> > > so hard to implement.  Furthermore, I highly doubt that the combination
> > > of AMDKFD and the hardware you are targeting even supports XNACK.
> 
> Yes the goal of this patch set is to support functions without memory
> migration related. It seems like XNACK is hard to support at present.
> 
> > > At Xen Project Summit 2024, AMD mentioned that it wanted to enable both
> > > rendering (Vulkan/OpenGL) and compute (ROCm) with virtio-GPU native
> > > contexts under Xen.  The only GPUs for which AMDKFD will enable XNACK
> > > support are GFX9 GPUs, which are GCN and CDNA.  Shipping a GCN GPU in a
> > > new design would be very unusual and CDNA (Instinct) accelerators do not
> > > support graphics, so either AMD is using separate devices for compute
> > > and graphics or the workloads will run with no XNACK support.  Since you
> > > mention HW cost as an important consideration, I suspect the latter.
> > > 
> > > I believe that the Instinct accelerators that support XNACK also support
> > > SR-IOV, but if you wish to combine XNACK and virtio-GPU, this should be
> > > possible subject to caveats.  The main caveat is that under no
> > > circumstances can the host's Xen driver install an MMU notifier for
> > > pages that the guest can use in grant table operations or DMA.  A driver
> > > that installs an MMU notifier promises that it can block access to
> > > pages in a bounded amount of time, and if the guest can DMA to the pages
> > > or grant them to other domains this is not possible.  Without the Xen
> > > driver installing an MMU notifier, there is no way for the pages to be
> > > migrated to the GPU without risking use-after-free or at least data
> > > corruption.  Instead, one of the following options will be needed:
> > > 
> > > 1. hipMallocManaged() allocates the memory from the backend using the
> > >     Map primitive discussed elsewhere.  Such memory is not mappable in
> > >     the IOMMU (if there is an assigned PCI device) and cannot be used for
> > >     grant table operations.  Memory allocated via system allocators
> > >     (anonymous pages) is not able to be migrated.
> > > 
> > > 2. The frontend uses shadow buffers for all I/O.  This allows the
> > >     backend to use a new Steal primitive to revoke the guest's accesses
> > >     to anonymous pages and handle page faults accordingly.
> > > 
> > > 3. Same as 2 except that the frontend allocates all memory (except
> > >     bounce buffers) from the backend, just like a KVM guest does, rather
> > >     than from the Xen toolstack.
> > > 
> > > 4. The frontend tries to migrate the pages to backend-provided ones, and
> > >     falls back to leaving them pinned on the CPU.  The frontend's MMU
> > >     notifier tells the backend to stop accessing the pages, blocking
> > >     until the backend confirms this.  The frontend then moves the pages
> > >     to its own memory and returns from the notifier.  This may require
> > >     new AMDKFD APIs.
> 
> This step needs new AMDKFD userspace APIs, KFD has some internel APIs for
> it, need exported into UMD. But actually the most challenging parts are
> adding hypervisior primitive, and adding the frontend <-> backend sync
> solution. KFD needs mmap_write_lock to handle migrate/update operations,
> this lock needs be removed or the sync between frontend and backend is hard
> to implement. It maybe needs refactor the KFD SVM, only for this task needs
> a lot of work, and there may be other work that needs to do I haven't
> discovered it yet.
> Before starting all of it, we may need a solid design or another
> clever/compromise solution that may reduce some of the workload. Your
> reminding are professional and detailed, many things I haven't noticed,
> really thanks a lot.

You're welcome.  I am glad to have helped.

I believe the best first step would be to implement Map and Revoke as
described in https://lore.kernel.org/xen-devel/Z6U7yOrMyLZWqPA4@itl-email/T/,
and to submit the code for upstream review.  These primitives should be
sufficient for graphics, and they are are also prerequisites for future
SVM work.  By submitting the code upstream without waiting for
virtio-GPU ROCm support, you will be able to get review of Xen-related
code to happen in parallel with the ROCm work.  This will shorten the
critical path for upstreaming, and will reduce this risk of spending
development resources on designs that would not be acceptable upstream.
Sending the current patches would be a good idea, even if they are known
to have bugs, as this will allow upstream to help you fix them.

Once Map and Revoke are implemented, you should be able to implement
full support for virtio-GPU native contexts in Xen+QEMU.  This unlocks
OpenGL and Vulkan, along with OpenCL via rusticl.  This work can happen
in parallel with the kernel and hypervisor patches being reviewed.  I
also expect supporting ROCm with no XNACK to be feasible here, though I
am not certain as I am not familiar with the details of the AMDKFD
driver.

Finally, XNACK support can be implemented.  This should be the last
step, as it depends on all of the other work and will be a signfiicant
amount of effort in its own right.  It also might not be necessary in
practice, as if one is willing to abandon fine-grained cache coherency
I believe one can achieve equivalent performance without it, at the
expense of additional effort from the programmer.  Due to the limited
hardware support for XNACK, I expect that programs intended to be run
on end-user devices (as opposed to servers) will not require it, and it
might not be possible to achieve reasonable performance with virtio-GPU.
Since my understanding is that AMD's use-cases for virtio-GPU are
primarily in the automotive sector, I am not sure that AMD will gain any
value from enabling it, unless either UDNA devices will support XNACK in
ROCm or automotive OEMs will be including Instinct accelerators in the
cars they make.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2025-02-08 19:48 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-12-20 10:04 [RFC PATCH 1/3] virtio-gpu api: add blob userptr resource Honglei Huang
2024-12-20 10:04 ` [RFC PATCH 2/3] drm/virtgpu " Honglei Huang
2024-12-20 10:04 ` [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object Honglei Huang
2024-12-20 15:35   ` Simona Vetter
2024-12-22  1:59     ` Demi Marie Obenour
2024-12-27  2:24       ` Huang, Honglei1
2025-01-08 17:05         ` Simona Vetter
2025-01-25  0:42           ` Demi Marie Obenour
2025-01-29 19:40             ` Demi Marie Obenour
2025-01-29 20:54           ` Demi Marie Obenour
2025-01-31  0:33             ` Demi Marie Obenour
2025-02-06 10:53               ` Huang, Honglei1
2025-02-06 18:21                 ` Demi Marie Obenour
2025-02-07 11:07                   ` Huang, Honglei1
2025-02-08  2:30                     ` Demi Marie Obenour
2025-02-08  2:43                       ` Demi Marie Obenour
     [not found]                         ` <d259279c-9989-410f-907d-9bf0b318bc84@amd.com>
2025-02-08 19:47                           ` Demi Marie Obenour
     [not found]       ` <40485636-7ea3-4e34-b4bb-1ccaaadd4e47@amd.com>
2025-01-20  9:54         ` Huang, Honglei1
2025-01-29 19:46           ` Demi Marie Obenour
2025-02-04 10:45           ` Simona Vetter
2025-02-06 11:05             ` Huang, Honglei1
2025-02-03  8:25 ` [RFC PATCH 1/3] virtio-gpu api: add blob userptr resource Akihiko Odaki
2025-02-06 10:40   ` Huang, Honglei1

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).