* [PATCH v10 0/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
@ 2025-03-20 15:26 Jonathan Cavitt
  2025-03-20 15:26 ` [PATCH v10 1/5] drm/xe/xe_gt_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Jonathan Cavitt @ 2025-03-20 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek

Add additional information to each VM so it can report up to the first
50 faults it has seen.  Only pagefaults are saved this way for now,
though in the future all fault types should be tracked by the VM for
reporting.

Additionally, of the pagefaults reported, only failed pagefaults are
saved, as successful pagefaults recover silently and do not need to be
reported to userspace.

To allow userspace to access these faults, a new ioctl -
xe_vm_get_property_ioctl - was created.

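For reference, userspace consumption is expected to follow a two-call
pattern: query the required size first, then fetch the fault data.  A
rough, untested sketch (error handling omitted; the names are the ones
introduced by this series):

	struct drm_xe_vm_get_property args = {
		.vm_id = vm_id,
		.property = DRM_XE_VM_GET_PROPERTY_FAULTS,
	};

	/* First call: args.size == 0, so the driver reports the required size. */
	ioctl(fd, DRM_IOCTL_XE_VM_GET_PROPERTY, &args);

	struct xe_vm_fault *faults = calloc(1, args.size);
	args.data = (__u64)(uintptr_t)faults;

	/* Second call: the driver copies out the saved faults. */
	ioctl(fd, DRM_IOCTL_XE_VM_GET_PROPERTY, &args);

	for (unsigned int i = 0; i < args.size / sizeof(*faults); i++)
		printf("fault at 0x%llx, access type %u\n",
		       (unsigned long long)faults[i].address,
		       faults[i].access_type);
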
v2: (Matt Brost)
- Break full ban list request into a separate property.
- Reformat drm_xe_vm_get_property struct.
- Remove need for drm_xe_faults helper struct.
- Separate data pointer and scalar return value in ioctl.
- Get address type on pagefault report and save it to the pagefault.
- Correctly reject writes to read-only VMAs.
- Miscellaneous formatting fixes.

v3: (Matt Brost)
- Only allow querying of failed pagefaults

v4:
- Remove unnecessary size parameter from helper function, as it
  is a property of the arguments. (jcavitt)
- Remove unnecessary copy_from_user (Jianxun)
- Set address_precision to 1 (Jianxun)
- Report max size instead of dynamic size for memory allocation
  purposes.  Total memory usage is reported separately.

v5:
- Return int from xe_vm_get_property_size (Shuicheng)
- Fix memory leak (Shuicheng)
- Remove unnecessary size variable (jcavitt)

v6:
- Free vm after use (Shuicheng)
- Compress pf copy logic (Shuicheng)
- Update fault_unsuccessful before storing (Shuicheng)
- Fix old struct name in comments (Shuicheng)
- Keep first 50 pagefaults instead of last 50 (Jianxun)
- Rename ioctl to xe_vm_get_faults_ioctl (jcavitt)

v7:
- Avoid unnecessary execution by checking MAX_PFS earlier (jcavitt)
- Fix double-locking error (jcavitt)
- Assert kmemdup is successful (Shuicheng)
- Repair and move fill_faults break condition (Dan Carpenter)
- Free vm after use (jcavitt)
- Combine assertions (jcavitt)
- Expand size check in xe_vm_get_faults_ioctl (jcavitt)
- Remove return mask from fill_faults, as return is already -EFAULT or 0
  (jcavitt)

v8:
- Revert back to using drm_xe_vm_get_property_ioctl
- s/Migrate/Move (Michal)
- s/xe_pagefault/xe_gt_pagefault (Michal)
- Create new header file, xe_gt_pagefault_types.h (Michal)
- Add and fix kernel docs (Michal)
- Rename xe_vm.pfs to xe_vm.faults (jcavitt)
- Store fault data and not pagefault in xe_vm faults list (jcavitt)
- Store address, address type, and address precision per fault (jcavitt)
- Store engine class and instance data per fault (Jianxun)
- Properly handle kzalloc error (Michal W)
- s/MAX_PFS/MAX_FAULTS_SAVED_PER_VM (Michal W)
- Store fault level per fault (Michal M)
- Apply better copy_to_user logic (jcavitt)

v9:
- More kernel doc fixes (Michal W, Jianxun)
- Better error handling (jcavitt)

v10:
- Convert enums to defines in regs folder (Michal W)
- Move xe_guc_pagefault_desc to regs folder (Michal W)
- Future-proof size logic for zero-size properties (jcavitt)
- Replace address type extern with access type (Jianxun)
- Add fault type to xe_drm_fault (Jianxun)

Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Cc: Zhang Jianxun <jianxun.zhang@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Michal Wajdeczko <Michal.Wajdeczko@intel.com>
Cc: Michal Mrozek <michal.mrozek@intel.com>

Jonathan Cavitt (5):
  drm/xe/xe_gt_pagefault: Disallow writes to read-only VMAs
  drm/xe/xe_gt_pagefault: Move pagefault struct to header
  drm/xe/uapi: Define drm_xe_vm_get_property
  drm/xe/xe_vm: Add per VM fault info
  drm/xe/xe_vm: Implement xe_vm_get_property_ioctl

 drivers/gpu/drm/xe/regs/xe_pagefault_desc.h |  50 +++++
 drivers/gpu/drm/xe/xe_device.c              |   3 +
 drivers/gpu/drm/xe/xe_gt_pagefault.c        |  67 +++----
 drivers/gpu/drm/xe/xe_gt_pagefault_types.h  |  42 +++++
 drivers/gpu/drm/xe/xe_guc_fwif.h            |  28 ---
 drivers/gpu/drm/xe/xe_vm.c                  | 194 ++++++++++++++++++++
 drivers/gpu/drm/xe/xe_vm.h                  |  11 ++
 drivers/gpu/drm/xe/xe_vm_types.h            |  35 ++++
 include/uapi/drm/xe_drm.h                   |  79 ++++++++
 9 files changed, 448 insertions(+), 61 deletions(-)
 create mode 100644 drivers/gpu/drm/xe/regs/xe_pagefault_desc.h
 create mode 100644 drivers/gpu/drm/xe/xe_gt_pagefault_types.h

-- 
2.43.0



* [PATCH v10 1/5] drm/xe/xe_gt_pagefault: Disallow writes to read-only VMAs
  2025-03-20 15:26 [PATCH v10 0/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
@ 2025-03-20 15:26 ` Jonathan Cavitt
  2025-03-20 15:26 ` [PATCH v10 2/5] drm/xe/xe_gt_pagefault: Move pagefault struct to header Jonathan Cavitt
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Jonathan Cavitt @ 2025-03-20 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek

The page fault handler should reject write/atomic access to read-only
VMAs.  Add code to handle this in handle_pagefault after the VMA lookup.

Fixes: 3d420e9fa848 ("drm/xe: Rework GPU page fault handling")
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_pagefault.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index 9fa11e837dd1..3240890aac07 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -237,6 +237,11 @@ static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf)
 		goto unlock_vm;
 	}
 
+	if (xe_vma_read_only(vma) && pf->access_type != ACCESS_TYPE_READ) {
+		err = -EPERM;
+		goto unlock_vm;
+	}
+
 	atomic = access_is_atomic(pf->access_type);
 
 	if (xe_vma_is_cpu_addr_mirror(vma))
-- 
2.43.0



* [PATCH v10 2/5] drm/xe/xe_gt_pagefault: Move pagefault struct to header
  2025-03-20 15:26 [PATCH v10 0/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
  2025-03-20 15:26 ` [PATCH v10 1/5] drm/xe/xe_gt_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
@ 2025-03-20 15:26 ` Jonathan Cavitt
  2025-03-20 15:26 ` [PATCH v10 3/5] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Jonathan Cavitt @ 2025-03-20 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek

Move the pagefault struct from xe_gt_pagefault.c to the
xe_gt_pagefault_types.h header file, and move the associated access and
fault type values into the regs folder under xe_pagefault_desc.h as
defines.

Since xe_pagefault_desc.h is being created here, also move the
xe_guc_pagefault_desc hardware descriptor format to the new file.

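For context, these descriptor fields are decoded with FIELD_GET() by
the consumer in xe_gt_pagefault.c, roughly along these lines (an
abridged sketch of get_pagefault(), not a verbatim copy):

	pf->fault_level = FIELD_GET(PFD_FAULT_LEVEL, desc->dw0);
	pf->engine_class = FIELD_GET(PFD_ENG_CLASS, desc->dw0);
	pf->access_type = FIELD_GET(PFD_ACCESS_TYPE, desc->dw2);
	pf->fault_type = FIELD_GET(PFD_FAULT_TYPE, desc->dw2);
	pf->page_addr = (u64)(FIELD_GET(PFD_VIRTUAL_ADDR_HI, desc->dw3)) <<
			PFD_VIRTUAL_ADDR_HI_SHIFT;
	pf->page_addr |= FIELD_GET(PFD_VIRTUAL_ADDR_LO, desc->dw2) <<
			 PFD_VIRTUAL_ADDR_LO_SHIFT;
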
v2:
- Normalize names for common header (Matt Brost)

v3:
- s/Migrate/Move (Michal W)
- s/xe_pagefault/xe_gt_pagefault (Michal W)
- Create new header file, xe_gt_pagefault_types.h (Michal W)
- Add kernel docs (Michal W)

v4:
- Fix includes usage (Michal W)
- Reference Bspec (Michal W)

v5:
- Convert enums to defines in regs folder (Michal W)
- Move xe_guc_pagefault_desc to regs folder (Michal W)

Bspec: 77412
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Cc: Michal Wajdeczko <Michal.Wajdeczko@intel.com>
---
 drivers/gpu/drm/xe/regs/xe_pagefault_desc.h | 50 +++++++++++++++++++++
 drivers/gpu/drm/xe/xe_gt_pagefault.c        | 43 ++++--------------
 drivers/gpu/drm/xe/xe_gt_pagefault_types.h  | 42 +++++++++++++++++
 drivers/gpu/drm/xe/xe_guc_fwif.h            | 28 ------------
 4 files changed, 101 insertions(+), 62 deletions(-)
 create mode 100644 drivers/gpu/drm/xe/regs/xe_pagefault_desc.h
 create mode 100644 drivers/gpu/drm/xe/xe_gt_pagefault_types.h

diff --git a/drivers/gpu/drm/xe/regs/xe_pagefault_desc.h b/drivers/gpu/drm/xe/regs/xe_pagefault_desc.h
new file mode 100644
index 000000000000..cfa18cb8e8ac
--- /dev/null
+++ b/drivers/gpu/drm/xe/regs/xe_pagefault_desc.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_PAGEFAULT_DESC_H_
+#define _XE_PAGEFAULT_DESC_H_
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+struct xe_guc_pagefault_desc {
+	u32 dw0;
+#define PFD_FAULT_LEVEL		GENMASK(2, 0)
+#define PFD_SRC_ID		GENMASK(10, 3)
+#define PFD_RSVD_0		GENMASK(17, 11)
+#define XE2_PFD_TRVA_FAULT	BIT(18)
+#define PFD_ENG_INSTANCE	GENMASK(24, 19)
+#define PFD_ENG_CLASS		GENMASK(27, 25)
+#define PFD_PDATA_LO		GENMASK(31, 28)
+
+	u32 dw1;
+#define PFD_PDATA_HI		GENMASK(11, 0)
+#define PFD_PDATA_HI_SHIFT	4
+#define PFD_ASID		GENMASK(31, 12)
+
+	u32 dw2;
+#define PFD_ACCESS_TYPE		GENMASK(1, 0)
+#define PFD_FAULT_TYPE		GENMASK(3, 2)
+#define PFD_VFID		GENMASK(9, 4)
+#define PFD_RSVD_1		BIT(10)
+#define XE3P_PFD_PREFETCH	BIT(11)
+#define PFD_VIRTUAL_ADDR_LO	GENMASK(31, 12)
+#define PFD_VIRTUAL_ADDR_LO_SHIFT 12
+
+	u32 dw3;
+#define PFD_VIRTUAL_ADDR_HI	GENMASK(31, 0)
+#define PFD_VIRTUAL_ADDR_HI_SHIFT 32
+} __packed;
+
+#define FLT_ACCESS_TYPE_READ		0u
+#define FLT_ACCESS_TYPE_WRITE		1u
+#define FLT_ACCESS_TYPE_ATOMIC		2u
+#define FLT_ACCESS_TYPE_RESERVED	3u
+
+#define FLT_TYPE_NOT_PRESENT_FAULT		0u
+#define FLT_TYPE_WRITE_ACCESS_VIOLATION		1u
+#define FLT_TYPE_ATOMIC_ACCESS_VIOLATION	2u
+
+#endif
diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index 3240890aac07..0cedf089a3f2 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -12,8 +12,10 @@
 #include <drm/drm_managed.h>
 
 #include "abi/guc_actions_abi.h"
+#include "regs/xe_pagefault_desc.h"
 #include "xe_bo.h"
 #include "xe_gt.h"
+#include "xe_gt_pagefault_types.h"
 #include "xe_gt_stats.h"
 #include "xe_gt_tlb_invalidation.h"
 #include "xe_guc.h"
@@ -23,33 +25,6 @@
 #include "xe_trace_bo.h"
 #include "xe_vm.h"
 
-struct pagefault {
-	u64 page_addr;
-	u32 asid;
-	u16 pdata;
-	u8 vfid;
-	u8 access_type;
-	u8 fault_type;
-	u8 fault_level;
-	u8 engine_class;
-	u8 engine_instance;
-	u8 fault_unsuccessful;
-	bool trva_fault;
-};
-
-enum access_type {
-	ACCESS_TYPE_READ = 0,
-	ACCESS_TYPE_WRITE = 1,
-	ACCESS_TYPE_ATOMIC = 2,
-	ACCESS_TYPE_RESERVED = 3,
-};
-
-enum fault_type {
-	NOT_PRESENT = 0,
-	WRITE_ACCESS_VIOLATION = 1,
-	ATOMIC_ACCESS_VIOLATION = 2,
-};
-
 struct acc {
 	u64 va_range_base;
 	u32 asid;
@@ -61,9 +36,9 @@ struct acc {
 	u8 engine_instance;
 };
 
-static bool access_is_atomic(enum access_type access_type)
+static bool access_is_atomic(u32 access_type)
 {
-	return access_type == ACCESS_TYPE_ATOMIC;
+	return access_type == FLT_ACCESS_TYPE_ATOMIC;
 }
 
 static bool vma_is_valid(struct xe_tile *tile, struct xe_vma *vma)
@@ -205,7 +180,7 @@ static struct xe_vm *asid_to_vm(struct xe_device *xe, u32 asid)
 	return vm;
 }
 
-static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf)
+static int handle_pagefault(struct xe_gt *gt, struct xe_gt_pagefault *pf)
 {
 	struct xe_device *xe = gt_to_xe(gt);
 	struct xe_vm *vm;
@@ -237,7 +212,7 @@ static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf)
 		goto unlock_vm;
 	}
 
-	if (xe_vma_read_only(vma) && pf->access_type != ACCESS_TYPE_READ) {
+	if (xe_vma_read_only(vma) && pf->access_type != FLT_ACCESS_TYPE_READ) {
 		err = -EPERM;
 		goto unlock_vm;
 	}
@@ -271,7 +246,7 @@ static int send_pagefault_reply(struct xe_guc *guc,
 	return xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action), 0, 0);
 }
 
-static void print_pagefault(struct xe_device *xe, struct pagefault *pf)
+static void print_pagefault(struct xe_device *xe, struct xe_gt_pagefault *pf)
 {
 	drm_dbg(&xe->drm, "\n\tASID: %d\n"
 		 "\tVFID: %d\n"
@@ -291,7 +266,7 @@ static void print_pagefault(struct xe_device *xe, struct pagefault *pf)
 
 #define PF_MSG_LEN_DW	4
 
-static bool get_pagefault(struct pf_queue *pf_queue, struct pagefault *pf)
+static bool get_pagefault(struct pf_queue *pf_queue, struct xe_gt_pagefault *pf)
 {
 	const struct xe_guc_pagefault_desc *desc;
 	bool ret = false;
@@ -378,7 +353,7 @@ static void pf_queue_work_func(struct work_struct *w)
 	struct xe_gt *gt = pf_queue->gt;
 	struct xe_device *xe = gt_to_xe(gt);
 	struct xe_guc_pagefault_reply reply = {};
-	struct pagefault pf = {};
+	struct xe_gt_pagefault pf = {};
 	unsigned long threshold;
 	int ret;
 
diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault_types.h b/drivers/gpu/drm/xe/xe_gt_pagefault_types.h
new file mode 100644
index 000000000000..b7d41b558de3
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault_types.h
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2022-2025 Intel Corporation
+ */
+
+#ifndef _XE_GT_PAGEFAULT_TYPES_H_
+#define _XE_GT_PAGEFAULT_TYPES_H_
+
+#include <linux/types.h>
+
+/**
+ * struct xe_gt_pagefault - Structure of pagefaults returned by the
+ * pagefault handler
+ */
+struct xe_gt_pagefault {
+	/** @page_addr: faulted address of this pagefault */
+	u64 page_addr;
+	/** @asid: ASID of this pagefault */
+	u32 asid;
+	/** @pdata: PDATA of this pagefault */
+	u16 pdata;
+	/** @vfid: VFID of this pagefault */
+	u8 vfid;
+	/** @access_type: access type of this pagefault */
+	u8 access_type;
+	/** @fault_type: fault type of this pagefault */
+	u8 fault_type;
+	/** @fault_level: fault level of this pagefault */
+	u8 fault_level;
+	/** @engine_class: engine class this pagefault was reported on */
+	u8 engine_class;
+	/** @engine_instance: engine instance this pagefault was reported on */
+	u8 engine_instance;
+	/** @fault_unsuccessful: flag for if the pagefault recovered or not */
+	u8 fault_unsuccessful;
+	/** @prefetch: unused */
+	bool prefetch;
+	/** @trva_fault: is set if this is a TRTT fault */
+	bool trva_fault;
+};
+
+#endif
diff --git a/drivers/gpu/drm/xe/xe_guc_fwif.h b/drivers/gpu/drm/xe/xe_guc_fwif.h
index 6f57578b07cb..30ac21bb4f15 100644
--- a/drivers/gpu/drm/xe/xe_guc_fwif.h
+++ b/drivers/gpu/drm/xe/xe_guc_fwif.h
@@ -290,34 +290,6 @@ enum xe_guc_response_desc_type {
 	FAULT_RESPONSE_DESC
 };
 
-struct xe_guc_pagefault_desc {
-	u32 dw0;
-#define PFD_FAULT_LEVEL		GENMASK(2, 0)
-#define PFD_SRC_ID		GENMASK(10, 3)
-#define PFD_RSVD_0		GENMASK(17, 11)
-#define XE2_PFD_TRVA_FAULT	BIT(18)
-#define PFD_ENG_INSTANCE	GENMASK(24, 19)
-#define PFD_ENG_CLASS		GENMASK(27, 25)
-#define PFD_PDATA_LO		GENMASK(31, 28)
-
-	u32 dw1;
-#define PFD_PDATA_HI		GENMASK(11, 0)
-#define PFD_PDATA_HI_SHIFT	4
-#define PFD_ASID		GENMASK(31, 12)
-
-	u32 dw2;
-#define PFD_ACCESS_TYPE		GENMASK(1, 0)
-#define PFD_FAULT_TYPE		GENMASK(3, 2)
-#define PFD_VFID		GENMASK(9, 4)
-#define PFD_RSVD_1		GENMASK(11, 10)
-#define PFD_VIRTUAL_ADDR_LO	GENMASK(31, 12)
-#define PFD_VIRTUAL_ADDR_LO_SHIFT 12
-
-	u32 dw3;
-#define PFD_VIRTUAL_ADDR_HI	GENMASK(31, 0)
-#define PFD_VIRTUAL_ADDR_HI_SHIFT 32
-} __packed;
-
 struct xe_guc_pagefault_reply {
 	u32 dw0;
 #define PFR_VALID		BIT(0)
-- 
2.43.0



* [PATCH v10 3/5] drm/xe/uapi: Define drm_xe_vm_get_property
  2025-03-20 15:26 [PATCH v10 0/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
  2025-03-20 15:26 ` [PATCH v10 1/5] drm/xe/xe_gt_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
  2025-03-20 15:26 ` [PATCH v10 2/5] drm/xe/xe_gt_pagefault: Move pagefault struct to header Jonathan Cavitt
@ 2025-03-20 15:26 ` Jonathan Cavitt
  2025-03-20 15:26 ` [PATCH v10 4/5] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
  2025-03-20 15:26 ` [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
  4 siblings, 0 replies; 13+ messages in thread
From: Jonathan Cavitt @ 2025-03-20 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek

Add initial declarations for the drm_xe_vm_get_property ioctl.

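To illustrate how the new structs are meant to be consumed, a minimal,
untested sketch is below.  It assumes the two-call size handshake from
the cover letter has already filled a buffer of struct xe_vm_fault
entries at faults, with args.size holding the returned byte count:

	unsigned int count = args.size / sizeof(struct xe_vm_fault);

	for (unsigned int i = 0; i < count; i++) {
		struct xe_vm_fault *f = &faults[i];

		/* pad and reserved are zero-filled by the kernel; nothing to read. */
		printf("addr 0x%llx (precision %u), access %u, fault %u, "
		       "level %u, engine %u:%u\n",
		       (unsigned long long)f->address, f->address_precision,
		       f->access_type, f->fault_type, f->fault_level,
		       f->engine_class, f->engine_instance);
	}
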
v2:
- Expand kernel docs for drm_xe_vm_get_property (Jianxun)

v3:
- Remove address type external definitions (Jianxun)
- Add fault type to xe_drm_fault struct (Jianxun)

Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Cc: Zhang Jianxun <jianxun.zhang@intel.com>
---
 include/uapi/drm/xe_drm.h | 79 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)

diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 616916985e3f..5817f246e620 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -81,6 +81,7 @@ extern "C" {
  *  - &DRM_IOCTL_XE_EXEC
  *  - &DRM_IOCTL_XE_WAIT_USER_FENCE
  *  - &DRM_IOCTL_XE_OBSERVATION
+ *  - &DRM_IOCTL_XE_VM_GET_PROPERTY
  */
 
 /*
@@ -102,6 +103,7 @@ extern "C" {
 #define DRM_XE_EXEC			0x09
 #define DRM_XE_WAIT_USER_FENCE		0x0a
 #define DRM_XE_OBSERVATION		0x0b
+#define DRM_XE_VM_GET_PROPERTY		0x0c
 
 /* Must be kept compact -- no holes */
 
@@ -117,6 +119,7 @@ extern "C" {
 #define DRM_IOCTL_XE_EXEC			DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
 #define DRM_IOCTL_XE_WAIT_USER_FENCE		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
 #define DRM_IOCTL_XE_OBSERVATION		DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
+#define DRM_IOCTL_XE_VM_GET_PROPERTY	DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_GET_PROPERTY, struct drm_xe_vm_get_property)
 
 /**
  * DOC: Xe IOCTL Extensions
@@ -1189,6 +1192,82 @@ struct drm_xe_vm_bind {
 	__u64 reserved[2];
 };
 
+/** struct xe_vm_fault - Describes faults for %DRM_XE_VM_GET_PROPERTY_FAULTS */
+struct xe_vm_fault {
+	/** @address: Address of the fault */
+	__u64 address;
+	/** @address_precision: Precision of faulted address */
+	__u32 address_precision;
+	/** @access_type: Type of address access that resulted in fault */
+	__u8 access_type;
+	/** @fault_type: Type of fault reported */
+	__u8 fault_type;
+	/** @fault_level: fault level of the fault */
+	__u8 fault_level;
+	/** @engine_class: class of engine fault was reported on */
+	__u8 engine_class;
+	/** @engine_instance: instance of engine fault was reported on */
+	__u8 engine_instance;
+	/** @pad: MBZ */
+	__u8 pad[7];
+	/** @reserved: MBZ */
+	__u64 reserved[3];
+};
+
+/**
+ * struct drm_xe_vm_get_property - Input of &DRM_IOCTL_XE_VM_GET_PROPERTY
+ *
+ * The user selects the VM to query by setting the vm_id member, and the
+ * property to report by setting the property member to one of the
+ * DRM_XE_VM_GET_PROPERTY_* values.  Together these determine which VM is
+ * queried and which property is returned.
+ *
+ * If size is set to 0, the driver fills it with the required size for
+ * the requested property, and the user is then expected to allocate
+ * that much memory and provide a pointer to the allocated memory via
+ * the data member.  For some properties the required size may be zero,
+ * in which case the value of the property is saved to the value member
+ * and size remains zero on return.
+ *
+ * If size is not zero, then the IOCTL will attempt to copy the requested
+ * property into the data member.
+ *
+ * The IOCTL will return -ENOENT if the VM could not be identified from
+ * the provided VM ID, -EFAULT if the property data could not be copied
+ * to the memory pointed to by the data member, or -EINVAL if the IOCTL
+ * fails for any other reason, such as an invalid size for the property.
+ *
+ * The property member can be:
+ *  - %DRM_XE_VM_GET_PROPERTY_FAULTS
+ */
+struct drm_xe_vm_get_property {
+	/** @extensions: Pointer to the first extension struct, if any */
+	__u64 extensions;
+
+	/** @vm_id: The ID of the VM to query the properties of */
+	__u32 vm_id;
+
+#define DRM_XE_VM_GET_PROPERTY_FAULTS		0
+	/** @property: property to get */
+	__u32 property;
+
+	/** @size: Size to allocate for @data */
+	__u32 size;
+
+	/** @pad: MBZ */
+	__u32 pad;
+
+	union {
+		/** @data: Pointer to user-defined array of flexible size and type */
+		__u64 data;
+		/** @value: Return value for scalar queries */
+		__u64 value;
+	};
+
+	/** @reserved: MBZ */
+	__u64 reserved[3];
+};
+
 /**
  * struct drm_xe_exec_queue_create - Input of &DRM_IOCTL_XE_EXEC_QUEUE_CREATE
  *
-- 
2.43.0



* [PATCH v10 4/5] drm/xe/xe_vm: Add per VM fault info
  2025-03-20 15:26 [PATCH v10 0/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
                   ` (2 preceding siblings ...)
  2025-03-20 15:26 ` [PATCH v10 3/5] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
@ 2025-03-20 15:26 ` Jonathan Cavitt
  2025-03-20 15:26 ` [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
  4 siblings, 0 replies; 13+ messages in thread
From: Jonathan Cavitt @ 2025-03-20 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek

Add additional information to each VM so it can report up to the first
50 faults it has seen.  Only pagefaults are saved this way for now,
though in the future all fault types should be tracked by the VM for
reporting.

Additionally, of the pagefaults reported, only failed pagefaults are
saved, as successful pagefaults recover silently and do not need to be
reported to userspace.

v2:
- Free vm after use (Shuicheng)
- Compress pf copy logic (Shuicheng)
- Update fault_unsuccessful before storing (Shuicheng)
- Fix old struct name in comments (Shuicheng)
- Keep first 50 pagefaults instead of last 50 (Jianxun)

v3:
- Avoid unnecessary execution by checking MAX_PFS earlier (jcavitt)
- Fix double-locking error (jcavitt)
- Assert kmemdup is successful (Shuicheng)

v4:
- Rename xe_vm.pfs to xe_vm.faults (jcavitt)
- Store fault data and not pagefault in xe_vm faults list (jcavitt)
- Store address, address type, and address precision per fault (jcavitt)
- Store engine class and instance data per fault (Jianxun)
- Add and fix kernel docs (Michal W)
- Properly handle kzalloc error (Michal W)
- s/MAX_PFS/MAX_FAULTS_SAVED_PER_VM (Michal W)
- Store fault level per fault (Michal M)

v5:
- Store fault and access type instead of address type (Jianxun)

Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Jianxun Zhang <jianxun.zhang@intel.com>
Cc: Michal Wajdeczko <Michal.Wajdeczko@intel.com>
Cc: Michal Mrozek <michal.mrozek@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_pagefault.c | 21 ++++++++++
 drivers/gpu/drm/xe/xe_vm.c           | 60 ++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_vm.h           |  9 +++++
 drivers/gpu/drm/xe/xe_vm_types.h     | 35 ++++++++++++++++
 4 files changed, 125 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index 0cedf089a3f2..0913668be3fe 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -345,6 +345,26 @@ int xe_guc_pagefault_handler(struct xe_guc *guc, u32 *msg, u32 len)
 	return full ? -ENOSPC : 0;
 }
 
+static void save_pagefault_to_vm(struct xe_device *xe, struct xe_gt_pagefault *pf)
+{
+	struct xe_vm *vm;
+
+	vm = asid_to_vm(xe, pf->asid);
+	if (IS_ERR(vm))
+		return;
+
+	/*
+	 * xe_vm_add_fault_entry_pf() takes vm->faults.lock and already
+	 * limits the number of faults in the fault list to prevent memory
+	 * overuse, so no locking or length check is needed here.  Taking
+	 * vm->faults.lock here as well would deadlock on the nested
+	 * spin_lock() in xe_vm_add_fault_entry_pf().
+	 */
+	xe_vm_add_fault_entry_pf(vm, pf);
+
+	xe_vm_put(vm);
+}
+
 #define USM_QUEUE_MAX_RUNTIME_MS	20
 
 static void pf_queue_work_func(struct work_struct *w)
@@ -364,6 +384,7 @@ static void pf_queue_work_func(struct work_struct *w)
 		if (unlikely(ret)) {
 			print_pagefault(xe, &pf);
 			pf.fault_unsuccessful = 1;
+			save_pagefault_to_vm(xe, &pf);
 			drm_dbg(&xe->drm, "Fault response: Unsuccessful %d\n", ret);
 		}
 
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 60303998bd61..9a627ba17f55 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -28,6 +28,7 @@
 #include "xe_drm_client.h"
 #include "xe_exec_queue.h"
 #include "xe_gt_pagefault.h"
+#include "xe_gt_pagefault_types.h"
 #include "xe_gt_tlb_invalidation.h"
 #include "xe_migrate.h"
 #include "xe_pat.h"
@@ -778,6 +779,60 @@ int xe_vm_userptr_check_repin(struct xe_vm *vm)
 		list_empty_careful(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
 }
 
+/**
+ * xe_vm_add_fault_entry_pf() - Add pagefault to vm fault list
+ * @vm: The VM.
+ * @pf: The pagefault.
+ *
+ * This function takes the data from the pagefault @pf and saves it to @vm->faults.list.
+ *
+ * The function exits silently if the list is full, and reports a warning if the pagefault
+ * could not be saved to the list.
+ */
+void xe_vm_add_fault_entry_pf(struct xe_vm *vm, struct xe_gt_pagefault *pf)
+{
+	struct xe_vm_fault_entry *e = NULL;
+
+	spin_lock(&vm->faults.lock);
+
+	if (vm->faults.len >= MAX_FAULTS_SAVED_PER_VM)
+		goto out;
+
+	e = kzalloc(sizeof(*e), GFP_KERNEL);
+	if (!e) {
+		drm_warn(&vm->xe->drm,
+			 "Could not allocate memory for fault %i!",
+			 vm->faults.len);
+		goto out;
+	}
+
+	e->address = pf->page_addr;
+	e->address_precision = 1;
+	e->access_type = pf->access_type;
+	e->fault_type = pf->fault_type;
+	e->fault_level = pf->fault_level;
+	e->engine_class = pf->engine_class;
+	e->engine_instance = pf->engine_instance;
+
+	list_add_tail(&e->list, &vm->faults.list);
+	vm->faults.len++;
+out:
+	spin_unlock(&vm->faults.lock);
+}
+
+static void xe_vm_clear_fault_entries(struct xe_vm *vm)
+{
+	struct xe_vm_fault_entry *e, *tmp;
+
+	spin_lock(&vm->faults.lock);
+	list_for_each_entry_safe(e, tmp, &vm->faults.list, list) {
+		list_del(&e->list);
+		kfree(e);
+	}
+	vm->faults.len = 0;
+	spin_unlock(&vm->faults.lock);
+}
+
 static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
 {
 	int i;
@@ -1660,6 +1715,9 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
 	init_rwsem(&vm->userptr.notifier_lock);
 	spin_lock_init(&vm->userptr.invalidated_lock);
 
+	INIT_LIST_HEAD(&vm->faults.list);
+	spin_lock_init(&vm->faults.lock);
+
 	ttm_lru_bulk_move_init(&vm->lru_bulk_move);
 
 	INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
@@ -1930,6 +1988,8 @@ void xe_vm_close_and_put(struct xe_vm *vm)
 	}
 	up_write(&xe->usm.lock);
 
+	xe_vm_clear_fault_entries(vm);
+
 	for_each_tile(tile, xe, id)
 		xe_range_fence_tree_fini(&vm->rftree[id]);
 
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 0ef811fc2bde..9bd7e93824da 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -12,6 +12,12 @@
 #include "xe_map.h"
 #include "xe_vm_types.h"
 
+/**
+ * MAX_FAULTS_SAVED_PER_VM - Maximum number of faults each vm can store before future
+ * faults are discarded to prevent memory overuse
+ */
+#define MAX_FAULTS_SAVED_PER_VM	50
+
 struct drm_device;
 struct drm_printer;
 struct drm_file;
@@ -22,6 +28,7 @@ struct dma_fence;
 
 struct xe_exec_queue;
 struct xe_file;
+struct xe_gt_pagefault;
 struct xe_sync_entry;
 struct xe_svm_range;
 struct drm_exec;
@@ -257,6 +264,8 @@ int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma);
 
 int xe_vma_userptr_check_repin(struct xe_userptr_vma *uvma);
 
+void xe_vm_add_fault_entry_pf(struct xe_vm *vm, struct xe_gt_pagefault *pf);
+
 bool xe_vm_validate_should_retry(struct drm_exec *exec, int err, ktime_t *end);
 
 int xe_vm_lock_vma(struct drm_exec *exec, struct xe_vma *vma);
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 84fa41b9fa20..ac1948d00df0 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -19,6 +19,7 @@
 #include "xe_range_fence.h"
 
 struct xe_bo;
+struct xe_gt_pagefault;
 struct xe_svm_range;
 struct xe_sync_entry;
 struct xe_user_fence;
@@ -142,6 +143,28 @@ struct xe_userptr_vma {
 
 struct xe_device;
 
+/**
+ * struct xe_vm_fault_entry - Elements of vm->faults.list
+ * @list: link into @xe_vm.faults.list
+ * @address: address of the fault
+ * @address_precision: precision of faulted address
+ * @access_type: type of access that resulted in fault
+ * @fault_type: type of fault reported
+ * @fault_level: fault level of the fault
+ * @engine_class: class of engine fault was reported on
+ * @engine_instance: instance of engine fault was reported on
+ */
+struct xe_vm_fault_entry {
+	struct list_head list;
+	u64 address;
+	u32 address_precision;
+	u8 access_type;
+	u8 fault_type;
+	u8 fault_level;
+	u8 engine_class;
+	u8 engine_instance;
+};
+
 struct xe_vm {
 	/** @gpuvm: base GPUVM used to track VMAs */
 	struct drm_gpuvm gpuvm;
@@ -305,6 +328,18 @@ struct xe_vm {
 		bool capture_once;
 	} error_capture;
 
+	/**
+	 * @faults: List of all faults associated with this VM
+	 */
+	struct {
+		/** @faults.lock: lock protecting @faults.list */
+		spinlock_t lock;
+		/** @faults.list: list of xe_vm_fault_entry entries */
+		struct list_head list;
+		/** @faults.len: length of @faults.list */
+		unsigned int len;
+	} faults;
+
 	/**
 	 * @tlb_flush_seqno: Required TLB flush seqno for the next exec.
 	 * protected by the vm resv.
-- 
2.43.0



* [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2025-03-20 15:26 [PATCH v10 0/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
                   ` (3 preceding siblings ...)
  2025-03-20 15:26 ` [PATCH v10 4/5] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
@ 2025-03-20 15:26 ` Jonathan Cavitt
  2025-03-21 23:36   ` Raag Jadav
  4 siblings, 1 reply; 13+ messages in thread
From: Jonathan Cavitt @ 2025-03-20 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: saurabhg.gupta, alex.zuo, jonathan.cavitt, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek

Add support for userspace to request a list of observed faults
from a specified VM.

v2:
- Only allow querying of failed pagefaults (Matt Brost)

v3:
- Remove unnecessary size parameter from helper function, as it
  is a property of the arguments. (jcavitt)
- Remove unnecessary copy_from_user (Jianxun)
- Set address_precision to 1 (Jianxun)
- Report max size instead of dynamic size for memory allocation
  purposes.  Total memory usage is reported separately.

v4:
- Return int from xe_vm_get_property_size (Shuicheng)
- Fix memory leak (Shuicheng)
- Remove unnecessary size variable (jcavitt)

v5:
- Rename ioctl to xe_vm_get_faults_ioctl (jcavitt)
- Update fill_property_pfs to eliminate need for kzalloc (Jianxun)

v6:
- Repair and move fill_faults break condition (Dan Carpenter)
- Free vm after use (jcavitt)
- Combine assertions (jcavitt)
- Expand size check in xe_vm_get_faults_ioctl (jcavitt)
- Remove return mask from fill_faults, as return is already -EFAULT or 0
  (jcavitt)

v7:
- Revert back to using xe_vm_get_property_ioctl
- Apply better copy_to_user logic (jcavitt)

v8:
- Fix and clean up error value handling in ioctl (jcavitt)
- Reapply return mask for fill_faults (jcavitt)

v9:
- Future-proof size logic for zero-size properties (jcavitt)
- Add access and fault types (Jianxun)
- Remove address type (Jianxun)

Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Cc: Jianxun Zhang <jianxun.zhang@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
---
 drivers/gpu/drm/xe/xe_device.c |   3 +
 drivers/gpu/drm/xe/xe_vm.c     | 134 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_vm.h     |   2 +
 3 files changed, 139 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index b2f656b2a563..74e510cb0e47 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -194,6 +194,9 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
 	DRM_IOCTL_DEF_DRV(XE_WAIT_USER_FENCE, xe_wait_user_fence_ioctl,
 			  DRM_RENDER_ALLOW),
 	DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
+	DRM_IOCTL_DEF_DRV(XE_VM_GET_PROPERTY, xe_vm_get_property_ioctl,
+			  DRM_RENDER_ALLOW),
+
 };
 
 static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 9a627ba17f55..6b0ac81ae8c5 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -43,6 +43,14 @@
 #include "xe_wa.h"
 #include "xe_hmm.h"
 
+static const u16 xe_to_user_engine_class[] = {
+	[XE_ENGINE_CLASS_RENDER] = DRM_XE_ENGINE_CLASS_RENDER,
+	[XE_ENGINE_CLASS_COPY] = DRM_XE_ENGINE_CLASS_COPY,
+	[XE_ENGINE_CLASS_VIDEO_DECODE] = DRM_XE_ENGINE_CLASS_VIDEO_DECODE,
+	[XE_ENGINE_CLASS_VIDEO_ENHANCE] = DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE,
+	[XE_ENGINE_CLASS_COMPUTE] = DRM_XE_ENGINE_CLASS_COMPUTE,
+};
+
 static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
 {
 	return vm->gpuvm.r_obj;
@@ -3553,6 +3561,132 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	return err;
 }
 
+static int xe_vm_get_property_size(struct xe_vm *vm, u32 property)
+{
+	int size = -EINVAL;
+
+	switch (property) {
+	case DRM_XE_VM_GET_PROPERTY_FAULTS:
+		spin_lock(&vm->faults.lock);
+		size = vm->faults.len * sizeof(struct xe_vm_fault);
+		spin_unlock(&vm->faults.lock);
+		break;
+	default:
+		break;
+	}
+	return size;
+}
+
+static int xe_vm_get_property_verify_size(struct xe_vm *vm, u32 property,
+					  int expected, int actual)
+{
+	switch (property) {
+	case DRM_XE_VM_GET_PROPERTY_FAULTS:
+		/*
+		 * Number of faults may increase between calls to
+		 * xe_vm_get_property_ioctl, so just report the
+		 * number of faults the user requests if it's less
+		 * than or equal to the number of faults in the VM
+		 * fault array.
+		 */
+		if (actual < expected)
+			return -EINVAL;
+		break;
+	default:
+		if (actual != expected)
+			return -EINVAL;
+		break;
+	}
+	return 0;
+}
+
+static int fill_faults(struct xe_vm *vm,
+		       struct drm_xe_vm_get_property *args)
+{
+	struct xe_vm_fault __user *usr_ptr = u64_to_user_ptr(args->data);
+	struct xe_vm_fault store = { 0 };
+	struct xe_vm_fault_entry *entry;
+	int ret = 0, i = 0, count, entry_size;
+
+	entry_size = sizeof(struct xe_vm_fault);
+	count = args->size / entry_size;
+
+	spin_lock(&vm->faults.lock);
+	list_for_each_entry(entry, &vm->faults.list, list) {
+		if (i++ == count)
+			break;
+
+		memset(&store, 0, entry_size);
+
+		store.address = entry->address;
+		store.address_precision = entry->address_precision;
+		store.access_type = entry->access_type;
+		store.fault_type = entry->fault_type;
+		store.fault_level = entry->fault_level;
+		store.engine_class = xe_to_user_engine_class[entry->engine_class];
+		store.engine_instance = entry->engine_instance;
+
+		ret = copy_to_user(usr_ptr, &store, entry_size);
+		if (ret)
+			break;
+
+		usr_ptr++;
+	}
+	spin_unlock(&vm->faults.lock);
+
+	return ret ? -EFAULT : 0;
+}
+
+static int xe_vm_get_property_fill_data(struct xe_vm *vm,
+					struct drm_xe_vm_get_property *args)
+{
+	switch (args->property) {
+	case DRM_XE_VM_GET_PROPERTY_FAULTS:
+		return fill_faults(vm, args);
+	default:
+		break;
+	}
+	return -EINVAL;
+}
+
+int xe_vm_get_property_ioctl(struct drm_device *drm, void *data,
+			     struct drm_file *file)
+{
+	struct xe_device *xe = to_xe_device(drm);
+	struct xe_file *xef = to_xe_file(file);
+	struct drm_xe_vm_get_property *args = data;
+	struct xe_vm *vm;
+	int size, ret = 0;
+
+	if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
+		return -EINVAL;
+
+	vm = xe_vm_lookup(xef, args->vm_id);
+	if (XE_IOCTL_DBG(xe, !vm))
+		return -ENOENT;
+
+	size = xe_vm_get_property_size(vm, args->property);
+
+	if (size < 0) {
+		ret = size;
+		goto put_vm;
+	} else if (!args->size && size) {
+		args->size = size;
+		goto put_vm;
+	}
+
+	ret = xe_vm_get_property_verify_size(vm, args->property,
+					     args->size, size);
+	if (XE_IOCTL_DBG(xe, ret))
+		goto put_vm;
+
+	ret = xe_vm_get_property_fill_data(vm, args);
+
+put_vm:
+	xe_vm_put(vm);
+	return ret;
+}
+
 /**
  * xe_vm_bind_kernel_bo - bind a kernel BO to a VM
  * @vm: VM to bind the BO to
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 9bd7e93824da..63ec22458e04 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -196,6 +196,8 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
 			struct drm_file *file);
 int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
 		     struct drm_file *file);
+int xe_vm_get_property_ioctl(struct drm_device *dev, void *data,
+			     struct drm_file *file);
 
 void xe_vm_close_and_put(struct xe_vm *vm);
 
-- 
2.43.0



* Re: [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2025-03-20 15:26 ` [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
@ 2025-03-21 23:36   ` Raag Jadav
  2025-03-24 16:57     ` Cavitt, Jonathan
  0 siblings, 1 reply; 13+ messages in thread
From: Raag Jadav @ 2025-03-21 23:36 UTC (permalink / raw)
  To: Jonathan Cavitt
  Cc: intel-xe, saurabhg.gupta, alex.zuo, joonas.lahtinen,
	matthew.brost, jianxun.zhang, shuicheng.lin, dri-devel,
	Michal.Wajdeczko, michal.mrozek

On Thu, Mar 20, 2025 at 03:26:15PM +0000, Jonathan Cavitt wrote:
> Add support for userspace to request a list of observed faults
> from a specified VM.

...

> +static int xe_vm_get_property_size(struct xe_vm *vm, u32 property)
> +{
> +	int size = -EINVAL;

Mixing size and error codes is usually received with mixed feelings.

> +
> +	switch (property) {
> +	case DRM_XE_VM_GET_PROPERTY_FAULTS:
> +		spin_lock(&vm->faults.lock);
> +		size = vm->faults.len * sizeof(struct xe_vm_fault);

size_mul() and,
[1] perhaps fill it up into the pointer passed by the caller here?

> +		spin_unlock(&vm->faults.lock);
> +		break;
> +	default:
> +		break;

Do we need the default case?

> +	}
> +	return size;
> +}
> +
> +static int xe_vm_get_property_verify_size(struct xe_vm *vm, u32 property,
> +					  int expected, int actual)
> +{
> +	switch (property) {
> +	case DRM_XE_VM_GET_PROPERTY_FAULTS:

Unless we're expecting more cases (that we confidently know of), there's
not much point in a single-case switch.

> +		/*
> +		 * Number of faults may increase between calls to
> +		 * xe_vm_get_property_ioctl, so just report the
> +		 * number of faults the user requests if it's less
> +		 * than or equal to the number of faults in the VM
> +		 * fault array.
> +		 */
> +		if (actual < expected)
> +			return -EINVAL;
> +		break;
> +	default:
> +		if (actual != expected)
> +			return -EINVAL;
> +		break;
> +	}
> +	return 0;
> +}

...

> +static int xe_vm_get_property_fill_data(struct xe_vm *vm,
> +					struct drm_xe_vm_get_property *args)
> +{
> +	switch (args->property) {
> +	case DRM_XE_VM_GET_PROPERTY_FAULTS:
> +		return fill_faults(vm, args);
> +	default:
> +		break;

Same as above.

> +	}
> +	return -EINVAL;
> +}
> +
> +int xe_vm_get_property_ioctl(struct drm_device *drm, void *data,
> +			     struct drm_file *file)
> +{
> +	struct xe_device *xe = to_xe_device(drm);
> +	struct xe_file *xef = to_xe_file(file);
> +	struct drm_xe_vm_get_property *args = data;
> +	struct xe_vm *vm;
> +	int size, ret = 0;
> +
> +	if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
> +		return -EINVAL;
> +
> +	vm = xe_vm_lookup(xef, args->vm_id);
> +	if (XE_IOCTL_DBG(xe, !vm))
> +		return -ENOENT;
> +
> +	size = xe_vm_get_property_size(vm, args->property);
> +
> +	if (size < 0) {
> +		ret = size;
> +		goto put_vm;
> +	} else if (!args->size && size) {
> +		args->size = size;
> +		goto put_vm;
> +	}

With [1] in place, these gymnastics can be dropped

	ret = xe_vm_get_property_size(vm, args->property, &size);
	if (ret)
		goto put_vm;

> +
> +	ret = xe_vm_get_property_verify_size(vm, args->property,
> +					     args->size, size);
> +	if (XE_IOCTL_DBG(xe, ret))
> +		goto put_vm;
> +
> +	ret = xe_vm_get_property_fill_data(vm, args);
> +
> +put_vm:
> +	xe_vm_put(vm);
> +	return ret;
> +}

Raag


* RE: [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2025-03-21 23:36   ` Raag Jadav
@ 2025-03-24 16:57     ` Cavitt, Jonathan
  2025-03-24 21:25       ` Raag Jadav
  0 siblings, 1 reply; 13+ messages in thread
From: Cavitt, Jonathan @ 2025-03-24 16:57 UTC (permalink / raw)
  To: Jadav, Raag
  Cc: intel-xe@lists.freedesktop.org, Gupta,  saurabhg, Zuo, Alex,
	joonas.lahtinen@linux.intel.com, Brost, Matthew, Zhang, Jianxun,
	Lin, Shuicheng, dri-devel@lists.freedesktop.org,
	Wajdeczko, Michal, Mrozek, Michal, Cavitt, Jonathan

-----Original Message-----
From: Jadav, Raag <raag.jadav@intel.com> 
Sent: Friday, March 21, 2025 4:37 PM
To: Cavitt, Jonathan <jonathan.cavitt@intel.com>
Cc: intel-xe@lists.freedesktop.org; Gupta, saurabhg <saurabhg.gupta@intel.com>; Zuo, Alex <alex.zuo@intel.com>; joonas.lahtinen@linux.intel.com; Brost, Matthew <matthew.brost@intel.com>; Zhang, Jianxun <jianxun.zhang@intel.com>; Lin, Shuicheng <shuicheng.lin@intel.com>; dri-devel@lists.freedesktop.org; Wajdeczko, Michal <Michal.Wajdeczko@intel.com>; Mrozek, Michal <michal.mrozek@intel.com>
Subject: Re: [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
> 
> On Thu, Mar 20, 2025 at 03:26:15PM +0000, Jonathan Cavitt wrote:
> > Add support for userspace to request a list of observed faults
> > from a specified VM.
> 
> ...
> 
> > +static int xe_vm_get_property_size(struct xe_vm *vm, u32 property)
> > +{
> > +	int size = -EINVAL;
> 
> Mixing size and error codes is usually received with mixed feelings.
> 
> > +
> > +	switch (property) {
> > +	case DRM_XE_VM_GET_PROPERTY_FAULTS:
> > +		spin_lock(&vm->faults.lock);
> > +		size = vm->faults.len * sizeof(struct xe_vm_fault);
> 
> size_mul() and,
> [1] perhaps fill it up into the pointer passed by the caller here?

"The pointer passed by the caller".  You mean the args pointer?

We'd still need to check that the args->size value is empty here before overwriting
it, and we'd also still need to return the size to the ioctl so we can verify it's
acceptable later in xe_vm_get_property_verify_size.

Unless you want to merge those two processes together into here?
"""
static int xe_vm_get_property_report_size(struct xe_vm *vm,
				struct drm_xe_vm_get_property *args)
{
	int size;

	switch(args->property) {
	case DRM_XE_VM_GET_PROPERTY_FAULTS:
		spin_lock(&vm->faults.lock);
		size = size_mul(sizeof(struct xe_vm_fault), vm->faults.len);
		spin_unlock(&vm->faults.lock);

		if (args->size)
			/*
			 * Number of faults may increase between calls to
			 * xe_vm_get_property_ioctl, so just report the
			 * number of faults the user requests if it's less
			 * than or equal to the number of faults in the VM
			 * fault array.
			 */
			return args->size <= size ? 0 : -EINVAL;
		else
			args->size = size;
		return 0;
	}
	return -EINVAL;
}
"""

Then, below, we'd need to branch based on the initial state of args->size:

"""
	vm = xe_vm_lookup(xef, args->vm_id);
	if (XE_IOCTL_DBG(xe, !vm))
		return -ENOENT;

	size = args->size;
	ret = xe_vm_get_property_report_size(vm, args);
	/*
	 * Either the xe_vm_get_property_report_size function failed, or
	 * userspace is expected to provide a memory allocation for the
	 * property.  In either case, exit early.
	 */
	if ((args->size && !size) || ret)
		goto put_vm;
"""

Something about this seems a bit cluttered, and it'll only get worse if
we need to add more properties, but maybe this would work.

...

I just looked below.  When you're referring to "the pointer", you're
referring to a new pointer to store the size in, not "the args pointer"...

I guess that would also work, though we'd still need to branch execution
so we can store the new size in args if a size is requested/reported.

> 
> > +		spin_unlock(&vm->faults.lock);
> > +		break;
> > +	default:
> > +		break;
> 
> Do we need the default case?

That's a fair point.  I thought that if I didn't include the default case,
either checkpatch would complain or the switch case would not run
properly, but I double-checked and it seems like it's not necessary.

> 
> > +	}
> > +	return size;
> > +}
> > +
> > +static int xe_vm_get_property_verify_size(struct xe_vm *vm, u32 property,
> > +					  int expected, int actual)
> > +{
> > +	switch (property) {
> > +	case DRM_XE_VM_GET_PROPERTY_FAULTS:
> 
> Unless we're expecting more cases (that we confidently know of), there's
> not much point in a single-case switch.

I guess one could argue that if the property value was anything other than
DRM_XE_VM_GET_PROPERTY_FAULTS, the test would have failed by
xe_vm_get_property_size, so any further checks are unnecessary.

Though given a previous ask (or at least a misinterpretation of a previous ask),
this function probably won't exist for much longer anyways.

> 
> > +		/*
> > +		 * Number of faults may increase between calls to
> > +		 * xe_vm_get_property_ioctl, so just report the
> > +		 * number of faults the user requests if it's less
> > +		 * than or equal to the number of faults in the VM
> > +		 * fault array.
> > +		 */
> > +		if (actual < expected)
> > +			return -EINVAL;
> > +		break;
> > +	default:
> > +		if (actual != expected)
> > +			return -EINVAL;
> > +		break;
> > +	}
> > +	return 0;
> > +}
> 
> ...
> 
> > +static int xe_vm_get_property_fill_data(struct xe_vm *vm,
> > +					struct drm_xe_vm_get_property *args)
> > +{
> > +	switch (args->property) {
> > +	case DRM_XE_VM_GET_PROPERTY_FAULTS:
> > +		return fill_faults(vm, args);
> > +	default:
> > +		break;
> 
> Same as above.

"above" as in "unless we expect more properties, having a single switch case is pointless",
or "above" as in "we don't need a default case if all it's doing is breaking out of the switch"?

> 
> > +	}
> > +	return -EINVAL;
> > +}
> > +
> > +int xe_vm_get_property_ioctl(struct drm_device *drm, void *data,
> > +			     struct drm_file *file)
> > +{
> > +	struct xe_device *xe = to_xe_device(drm);
> > +	struct xe_file *xef = to_xe_file(file);
> > +	struct drm_xe_vm_get_property *args = data;
> > +	struct xe_vm *vm;
> > +	int size, ret = 0;
> > +
> > +	if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
> > +		return -EINVAL;
> > +
> > +	vm = xe_vm_lookup(xef, args->vm_id);
> > +	if (XE_IOCTL_DBG(xe, !vm))
> > +		return -ENOENT;
> > +
> > +	size = xe_vm_get_property_size(vm, args->property);
> > +
> > +	if (size < 0) {
> > +		ret = size;
> > +		goto put_vm;
> > +	} else if (!args->size && size) {
> > +		args->size = size;
> > +		goto put_vm;
> > +	}
> 
> With [1] in place, these gymnastics can be dropped
> 
> 	ret = xe_vm_get_property_size(vm, args->property, &size);
> 	if (ret)
> 		goto put_vm;

We'd still need to branch execution so we can store the new size in args if a
size is requested/reported.
So, it'd actually look something more like:
"""
	ret = xe_vm_get_property_size(vm, args->property, &size);
	if (ret) {
		goto put_vm;
	} else if (!args->size && size) {
		args->size = size;
		goto put_vm;
	}
"""
I won't deny it's cleaner to look at, but it's not particularly more
compact than before.
-Jonathan Cavitt

> 
> > +
> > +	ret = xe_vm_get_property_verify_size(vm, args->property,
> > +					     args->size, size);
> > +	if (XE_IOCTL_DBG(xe, ret))
> > +		goto put_vm;
> > +
> > +	ret = xe_vm_get_property_fill_data(vm, args);
> > +
> > +put_vm:
> > +	xe_vm_put(vm);
> > +	return ret;
> > +}
> 
> Raag
> 


* Re: [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2025-03-24 16:57     ` Cavitt, Jonathan
@ 2025-03-24 21:25       ` Raag Jadav
  2025-03-24 21:31         ` Cavitt, Jonathan
  0 siblings, 1 reply; 13+ messages in thread
From: Raag Jadav @ 2025-03-24 21:25 UTC (permalink / raw)
  To: Cavitt, Jonathan
  Cc: intel-xe@lists.freedesktop.org, Gupta, saurabhg, Zuo, Alex,
	joonas.lahtinen@linux.intel.com, Brost, Matthew, Zhang, Jianxun,
	Lin, Shuicheng, dri-devel@lists.freedesktop.org,
	Wajdeczko, Michal, Mrozek, Michal

On Mon, Mar 24, 2025 at 10:27:08PM +0530, Cavitt, Jonathan wrote:
> From: Jadav, Raag <raag.jadav@intel.com> 
> > On Thu, Mar 20, 2025 at 03:26:15PM +0000, Jonathan Cavitt wrote:
> > > Add support for userspace to request a list of observed faults
> > > from a specified VM.
> > 
> > ...
> > 
> > > +static int xe_vm_get_property_size(struct xe_vm *vm, u32 property)
> > > +{
> > > +	int size = -EINVAL;
> > 
> > Mixing size and error codes is usually received with mixed feelings.
> > 
> > > +
> > > +	switch (property) {
> > > +	case DRM_XE_VM_GET_PROPERTY_FAULTS:
> > > +		spin_lock(&vm->faults.lock);
> > > +		size = vm->faults.len * sizeof(struct xe_vm_fault);
> > 
> > size_mul() and,
> > [1] perhaps fill it up into the pointer passed by the caller here?
> 
> "The pointer passed by the caller".  You mean the args pointer?
> 
> We'd still need to check that the args->size value is empty here before overwriting
> it, and we'd also still need to return the size to the ioctl so we can verify it's
> acceptable later in xe_vm_get_property_verify_size.
> 
> Unless you want to merge those two processes together into here?

The semantics are a bit fuzzy to me. Why do we have a single ioctl for
two different processes? Shouldn't they be handled separately?

Raag


* RE: [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2025-03-24 21:25       ` Raag Jadav
@ 2025-03-24 21:31         ` Cavitt, Jonathan
  2025-03-25  7:19           ` Raag Jadav
  0 siblings, 1 reply; 13+ messages in thread
From: Cavitt, Jonathan @ 2025-03-24 21:31 UTC (permalink / raw)
  To: Jadav, Raag
  Cc: intel-xe@lists.freedesktop.org, Gupta,  saurabhg, Zuo, Alex,
	joonas.lahtinen@linux.intel.com, Brost, Matthew, Zhang, Jianxun,
	Lin, Shuicheng, dri-devel@lists.freedesktop.org,
	Wajdeczko, Michal, Mrozek, Michal, Cavitt, Jonathan

-----Original Message-----
From: Jadav, Raag <raag.jadav@intel.com> 
Sent: Monday, March 24, 2025 2:26 PM
To: Cavitt, Jonathan <jonathan.cavitt@intel.com>
Cc: intel-xe@lists.freedesktop.org; Gupta, saurabhg <saurabhg.gupta@intel.com>; Zuo, Alex <alex.zuo@intel.com>; joonas.lahtinen@linux.intel.com; Brost, Matthew <matthew.brost@intel.com>; Zhang, Jianxun <jianxun.zhang@intel.com>; Lin, Shuicheng <shuicheng.lin@intel.com>; dri-devel@lists.freedesktop.org; Wajdeczko, Michal <Michal.Wajdeczko@intel.com>; Mrozek, Michal <michal.mrozek@intel.com>
Subject: Re: [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
> 
> On Mon, Mar 24, 2025 at 10:27:08PM +0530, Cavitt, Jonathan wrote:
> > From: Jadav, Raag <raag.jadav@intel.com> 
> > > On Thu, Mar 20, 2025 at 03:26:15PM +0000, Jonathan Cavitt wrote:
> > > > Add support for userspace to request a list of observed faults
> > > > from a specified VM.
> > > 
> > > ...
> > > 
> > > > +static int xe_vm_get_property_size(struct xe_vm *vm, u32 property)
> > > > +{
> > > > +	int size = -EINVAL;
> > > 
> > > Mixing size and error codes is usually received with mixed feelings.
> > > 
> > > > +
> > > > +	switch (property) {
> > > > +	case DRM_XE_VM_GET_PROPERTY_FAULTS:
> > > > +		spin_lock(&vm->faults.lock);
> > > > +		size = vm->faults.len * sizeof(struct xe_vm_fault);
> > > 
> > > size_mul() and,
> > > [1] perhaps fill it up into the pointer passed by the caller here?
> > 
> > "The pointer passed by the caller".  You mean the args pointer?
> > 
> > We'd still need to check that the args->size value is empty here before overwriting
> > it, and we'd also still need to return the size to the ioctl so we can verify it's
> > acceptable later in xe_vm_get_property_verify_size.
> > 
> > Unless you want to merge those two processes together into here?
> 
> The semantics are a bit fuzzy to me. Why do we have a single ioctl for
> two different processes? Shouldn't they be handled separately?

No.  Sorry.  Let me clarify.
"two different processes" = getting the size + verifying the size.
-Jonathan Cavitt

> 
> Raag
> 


* Re: [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2025-03-24 21:31         ` Cavitt, Jonathan
@ 2025-03-25  7:19           ` Raag Jadav
  2025-03-25 14:44             ` Cavitt, Jonathan
  0 siblings, 1 reply; 13+ messages in thread
From: Raag Jadav @ 2025-03-25  7:19 UTC (permalink / raw)
  To: Cavitt, Jonathan
  Cc: intel-xe@lists.freedesktop.org, Gupta, saurabhg, Zuo, Alex,
	joonas.lahtinen@linux.intel.com, Brost, Matthew, Zhang, Jianxun,
	Lin, Shuicheng, dri-devel@lists.freedesktop.org,
	Wajdeczko, Michal, Mrozek, Michal

On Tue, Mar 25, 2025 at 03:01:27AM +0530, Cavitt, Jonathan wrote:
> From: Jadav, Raag <raag.jadav@intel.com> 
> > On Mon, Mar 24, 2025 at 10:27:08PM +0530, Cavitt, Jonathan wrote:
> > > From: Jadav, Raag <raag.jadav@intel.com> 
> > > > On Thu, Mar 20, 2025 at 03:26:15PM +0000, Jonathan Cavitt wrote:
> > > > > Add support for userspace to request a list of observed faults
> > > > > from a specified VM.
> > > > 
> > > > ...
> > > > 
> > > > > +static int xe_vm_get_property_size(struct xe_vm *vm, u32 property)
> > > > > +{
> > > > > +	int size = -EINVAL;
> > > > 
> > > > Mixing size and error codes is usually received with mixed feelings.
> > > > 
> > > > > +
> > > > > +	switch (property) {
> > > > > +	case DRM_XE_VM_GET_PROPERTY_FAULTS:
> > > > > +		spin_lock(&vm->faults.lock);
> > > > > +		size = vm->faults.len * sizeof(struct xe_vm_fault);
> > > > 
> > > > size_mul() and,
> > > > [1] perhaps fill it up into the pointer passed by the caller here?
> > > 
> > > "The pointer passed by the caller".  You mean the args pointer?
> > > 
> > > We'd still need to check that the args->size value is empty here before overwriting
> > > it, and we'd also still need to return the size to the ioctl so we can verify it's
> > > acceptable later in xe_vm_get_property_verify_size.
> > > 
> > > Unless you want to merge those two processes together into here?
> > 
> > The semantics are a bit fuzzy to me. Why do we have a single ioctl for
> > two different processes? Shouldn't they be handled separately?
> 
> No.  Sorry.  Let me clarify.
> "two different processes" = getting the size + verifying the size.

Yes, which seems like they should be handled with _FAULT_NUM and
_FAULT_DATA ioctls but I guess we're way past it now.

I'm also not much informed about the history here. Is there a real
usecase behind exposing them? What is the user expected to do with
this information?

Raag


* RE: [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2025-03-25  7:19           ` Raag Jadav
@ 2025-03-25 14:44             ` Cavitt, Jonathan
  2025-03-25 23:36               ` Raag Jadav
  0 siblings, 1 reply; 13+ messages in thread
From: Cavitt, Jonathan @ 2025-03-25 14:44 UTC (permalink / raw)
  To: Jadav, Raag
  Cc: intel-xe@lists.freedesktop.org, Gupta,  saurabhg, Zuo, Alex,
	joonas.lahtinen@linux.intel.com, Brost, Matthew, Zhang, Jianxun,
	Lin, Shuicheng, dri-devel@lists.freedesktop.org,
	Wajdeczko, Michal, Mrozek, Michal

-----Original Message-----
From: Jadav, Raag <raag.jadav@intel.com> 
Sent: Tuesday, March 25, 2025 12:19 AM
To: Cavitt, Jonathan <jonathan.cavitt@intel.com>
Cc: intel-xe@lists.freedesktop.org; Gupta, saurabhg <saurabhg.gupta@intel.com>; Zuo, Alex <alex.zuo@intel.com>; joonas.lahtinen@linux.intel.com; Brost, Matthew <matthew.brost@intel.com>; Zhang, Jianxun <jianxun.zhang@intel.com>; Lin, Shuicheng <shuicheng.lin@intel.com>; dri-devel@lists.freedesktop.org; Wajdeczko, Michal <Michal.Wajdeczko@intel.com>; Mrozek, Michal <michal.mrozek@intel.com>
Subject: Re: [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
> 
> On Tue, Mar 25, 2025 at 03:01:27AM +0530, Cavitt, Jonathan wrote:
> > From: Jadav, Raag <raag.jadav@intel.com> 
> > > On Mon, Mar 24, 2025 at 10:27:08PM +0530, Cavitt, Jonathan wrote:
> > > > From: Jadav, Raag <raag.jadav@intel.com> 
> > > > > On Thu, Mar 20, 2025 at 03:26:15PM +0000, Jonathan Cavitt wrote:
> > > > > > Add support for userspace to request a list of observed faults
> > > > > > from a specified VM.
> > > > > 
> > > > > ...
> > > > > 
> > > > > > +static int xe_vm_get_property_size(struct xe_vm *vm, u32 property)
> > > > > > +{
> > > > > > +	int size = -EINVAL;
> > > > > 
> > > > > Mixing size and error codes is usually received with mixed feelings.
> > > > > 
> > > > > > +
> > > > > > +	switch (property) {
> > > > > > +	case DRM_XE_VM_GET_PROPERTY_FAULTS:
> > > > > > +		spin_lock(&vm->faults.lock);
> > > > > > +		size = vm->faults.len * sizeof(struct xe_vm_fault);
> > > > > 
> > > > > size_mul() and,
> > > > > [1] perhaps fill it up into the pointer passed by the caller here?
> > > > 
> > > > "The pointer passed by the caller".  You mean the args pointer?
> > > > 
> > > > We'd still need to check that the args->size value is empty here before overwriting
> > > > it, and we'd also still need to return the size to the ioctl so we can verify it's
> > > > acceptable later in xe_vm_get_property_verify_size.
> > > > 
> > > > Unless you want to merge those two processes together into here?
> > > 
> > > The semantics are a bit fuzzy to me. Why do we have a single ioctl for
> > > two different processes? Shouldn't they be handled separately?
> > 
> > No.  Sorry.  Let me clarify.
> > "two different processes" = getting the size + verifying the size.
> 
> Yes, which seems like they should be handled with _FAULT_NUM and
> _FAULT_DATA ioctls but I guess we're way past it now.

The current implementation mirrors xe_query.  Should we have separate
queries for getting the size of the query data and getting the data itself
in xe_query?

And just to preempt the question: this cannot be an xe_query because
the size of the returned data depends on the target VM, which cannot
be passed to the xe_query structure on the first pass when calculating
the size.  And just reporting the maximum possible size was rejected
separately. 
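
For context, the intended userspace flow mirrors the usual two-pass
pattern (a sketch only; the ioctl macro and field names are abridged
here and may not match the uapi patch exactly):

	struct drm_xe_vm_get_property args = {
		.vm_id = vm_id,
		.property = DRM_XE_VM_GET_PROPERTY_FAULTS,
		.size = 0,	/* pass 1: kernel reports the required size */
	};

	ioctl(fd, DRM_IOCTL_XE_VM_GET_PROPERTY, &args);
	args.data = (uintptr_t)malloc(args.size);
	ioctl(fd, DRM_IOCTL_XE_VM_GET_PROPERTY, &args);	/* pass 2: fill data */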

> 
> I'm also not well informed about the history here. Is there a real
> use case behind exposing them? What is the user expected to do with
> this information?

This is a request from Vulkan, and is necessary to satisfy the requirements
for one of their interfaces.  Specifically,
https://registry.khronos.org/vulkan/specs/latest/man/html/VK_EXT_device_fault.html
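
For reference, the Vulkan side consumes this through the extension's
two-call idiom, roughly:

	VkDeviceFaultCountsEXT counts = {
		.sType = VK_STRUCTURE_TYPE_DEVICE_FAULT_COUNTS_EXT,
	};
	VkDeviceFaultInfoEXT info = {
		.sType = VK_STRUCTURE_TYPE_DEVICE_FAULT_INFO_EXT,
	};

	vkGetDeviceFaultInfoEXT(device, &counts, NULL);	/* get counts */
	/* allocate info.pAddressInfos etc. per counts, then: */
	vkGetDeviceFaultInfoEXT(device, &counts, &info);	/* get fault data */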
-Jonathan Cavitt

> 
> Raag
> 


* Re: [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl
  2025-03-25 14:44             ` Cavitt, Jonathan
@ 2025-03-25 23:36               ` Raag Jadav
  0 siblings, 0 replies; 13+ messages in thread
From: Raag Jadav @ 2025-03-25 23:36 UTC (permalink / raw)
  To: Cavitt, Jonathan
  Cc: intel-xe@lists.freedesktop.org, Gupta, saurabhg, Zuo, Alex,
	joonas.lahtinen@linux.intel.com, Brost, Matthew, Zhang, Jianxun,
	Lin, Shuicheng, dri-devel@lists.freedesktop.org,
	Wajdeczko, Michal, Mrozek, Michal

On Tue, Mar 25, 2025 at 08:14:13PM +0530, Cavitt, Jonathan wrote:
> From: Jadav, Raag <raag.jadav@intel.com> 
> > On Tue, Mar 25, 2025 at 03:01:27AM +0530, Cavitt, Jonathan wrote:
> > > From: Jadav, Raag <raag.jadav@intel.com> 
> > > > On Mon, Mar 24, 2025 at 10:27:08PM +0530, Cavitt, Jonathan wrote:
> > > > > From: Jadav, Raag <raag.jadav@intel.com> 
> > > > > > On Thu, Mar 20, 2025 at 03:26:15PM +0000, Jonathan Cavitt wrote:
> > > > > > > Add support for userspace to request a list of observed faults
> > > > > > > from a specified VM.
> > > > > > 
> > > > > > ...
> > > > > > 
> > > > > > > +static int xe_vm_get_property_size(struct xe_vm *vm, u32 property)
> > > > > > > +{
> > > > > > > +	int size = -EINVAL;
> > > > > > 
> > > > > > Mixing size and error codes is usually received with mixed feelings.
> > > > > > 
> > > > > > > +
> > > > > > > +	switch (property) {
> > > > > > > +	case DRM_XE_VM_GET_PROPERTY_FAULTS:
> > > > > > > +		spin_lock(&vm->faults.lock);
> > > > > > > +		size = vm->faults.len * sizeof(struct xe_vm_fault);
> > > > > > 
> > > > > > size_mul() and,
> > > > > > [1] perhaps fill it up into the pointer passed by the caller here?
> > > > > 
> > > > > "The pointer passed by the caller".  You mean the args pointer?
> > > > > 
> > > > > We'd still need to check that the args->size value is empty here before overwriting
> > > > > it, and we'd also still need to return the size to the ioctl so we can verify it's
> > > > > acceptable later in xe_vm_get_property_verify_size.
> > > > > 
> > > > > Unless you want to merge those two processes together into here?
> > > > 
> > > > The semantics are a bit fuzzy to me. Why do we have a single ioctl for
> > > > two different processes? Shouldn't they be handled separately?
> > > 
> > > No.  Sorry.  Let me clarify.
> > > "two different processes" = getting the size + verifying the size.
> > 
> > Yes, which seems like they should be handled with _FAULT_NUM and
> > _FAULT_DATA ioctls but I guess we're way past it now.
> 
> The current implementation mirrors xe_query.  Should we have separate
> queries for getting the size of the query data and getting the data itself
> in xe_query?

Let's not break a well established API.

> And just to preempt the question: this cannot be an xe_query because
> the size of the returned data depends on the target VM, which cannot
> be passed to the xe_query structure on the first pass when calculating
> the size.  And just reporting the maximum possible size was rejected
> separately. 

Sure, makes sense.

> > I'm also not well informed about the history here. Is there a real
> > use case behind exposing them? What is the user expected to do with
> > this information?
> 
> This is a request from Vulkan, and is necessary to satisfy the requirements
> for one of their interfaces.  Specifically,
> https://registry.khronos.org/vulkan/specs/latest/man/html/VK_EXT_device_fault.html

It says this should be a consequence of device loss. What are the
criteria for that wrt xe?

A big enough fault will probably result in a coredump. So why not just
reuse it?

Raag


end of thread, other threads: [~2025-03-25 23:36 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-03-20 15:26 [PATCH v10 0/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
2025-03-20 15:26 ` [PATCH v10 1/5] drm/xe/xe_gt_pagefault: Disallow writes to read-only VMAs Jonathan Cavitt
2025-03-20 15:26 ` [PATCH v10 2/5] drm/xe/xe_gt_pagefault: Move pagefault struct to header Jonathan Cavitt
2025-03-20 15:26 ` [PATCH v10 3/5] drm/xe/uapi: Define drm_xe_vm_get_property Jonathan Cavitt
2025-03-20 15:26 ` [PATCH v10 4/5] drm/xe/xe_vm: Add per VM fault info Jonathan Cavitt
2025-03-20 15:26 ` [PATCH v10 5/5] drm/xe/xe_vm: Implement xe_vm_get_property_ioctl Jonathan Cavitt
2025-03-21 23:36   ` Raag Jadav
2025-03-24 16:57     ` Cavitt, Jonathan
2025-03-24 21:25       ` Raag Jadav
2025-03-24 21:31         ` Cavitt, Jonathan
2025-03-25  7:19           ` Raag Jadav
2025-03-25 14:44             ` Cavitt, Jonathan
2025-03-25 23:36               ` Raag Jadav
