Igt-dev Archive on lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH i-g-t v3 00/14] Test coverage for GPU debug support
@ 2024-08-09 12:37 Christoph Manszewski
  2024-08-09 12:38 ` [PATCH i-g-t v3 01/14] drm-uapi/xe: Sync with oa uapi fix Christoph Manszewski
                   ` (19 more replies)
  0 siblings, 20 replies; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:37 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Christoph Manszewski

Hi,

In this series, the eudebug kernel and validation team would like to
add test coverage for the GPU debug support recently proposed as an RFC
(https://patchwork.freedesktop.org/series/136572/).

This series adds 'xe_eudebug' and 'xe_eudebug_online' tests together
with a library that encapsulates common paths in current and future
EU debugger scenarios. It also extends the 'xe_exec_sip' test and
'gpgpu_shader' library.

The aim of the 'xe_eudebug' test is to validate the eudebug resource
tracking and event delivery mechanism. The 'xe_eudebug_online' test is
dedicated to 'online' scenarios, that is, scenarios that exercise
hardware exception handling and thread state manipulation.

The xe_eudebug library provides an abstraction over the debugger and
debuggee processes, an asynchronous event reader, and event log buffers
for post-mortem analysis.

Latest kernel code can be found here:
https://gitlab.freedesktop.org/miku/kernel/-/commits/eudebug-dev

Thank you in advance for any comments and insight.

v2:
 - make sure to include all patches and verify that each individual
 patch compiles (Zbigniew)

v3:
 - fix multiple typos (Dominik Karol),
 - squash subtest and eudebug lib patches (Zbigniew),
 - include uapi sync/fix (Kamil)

Andrzej Hajda (4):
  lib/gpgpu_shader: Add write_on_exception template
  lib/gpgpu_shader: Add set/clear exception register (cr0.1) helpers
  lib/intel_batchbuffer: Add helper to get pointer at specified offset
  lib/gpgpu_shader: Allow enabling illegal opcode exceptions in shader

Christoph Manszewski (5):
  drm-uapi/xe: Sync with oa uapi fix
  lib/xe_ioctl: Add wrapper with vm_bind_op extension parameter
  lib/gpgpu_shader: Extend shader building library
  tests/xe_exec_sip: Extend SIP interaction testing
  tests/xe_live_ktest: Add xe_eudebug live test

Dominik Grzegorzek (4):
  drm-uapi/xe: Sync with eudebug uapi
  lib/xe_eudebug: Introduce eu debug testing framework
  tests/xe_eudebug: Test eudebug resource tracking and manipulation
  tests/xe_eudebug_online: Debug client which runs workloads on EU

Gwan-gyeong Mun (1):
  lib/intel_batchbuffer: Add support for long-running mode execution

 include/drm-uapi/xe_drm.h         |  112 +-
 include/drm-uapi/xe_drm_eudebug.h |  225 +++
 lib/gpgpu_shader.c                |  474 ++++-
 lib/gpgpu_shader.h                |   29 +-
 lib/iga64_generated_codes.c       |  428 ++++-
 lib/intel_batchbuffer.c           |  153 +-
 lib/intel_batchbuffer.h           |   22 +
 lib/meson.build                   |    1 +
 lib/xe/xe_eudebug.c               | 2192 +++++++++++++++++++++++
 lib/xe/xe_eudebug.h               |  206 +++
 lib/xe/xe_ioctl.c                 |   20 +-
 lib/xe/xe_ioctl.h                 |    5 +
 tests/intel/xe_eudebug.c          | 2671 +++++++++++++++++++++++++++++
 tests/intel/xe_eudebug_online.c   | 2203 ++++++++++++++++++++++++
 tests/intel/xe_exec_sip.c         |  332 +++-
 tests/intel/xe_live_ktest.c       |    6 +
 tests/meson.build                 |    2 +
 17 files changed, 9036 insertions(+), 45 deletions(-)
 create mode 100644 include/drm-uapi/xe_drm_eudebug.h
 create mode 100644 lib/xe/xe_eudebug.c
 create mode 100644 lib/xe/xe_eudebug.h
 create mode 100644 tests/intel/xe_eudebug.c
 create mode 100644 tests/intel/xe_eudebug_online.c

-- 
2.34.1


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [PATCH i-g-t v3 01/14] drm-uapi/xe: Sync with oa uapi fix
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-09 14:21   ` Kamil Konieczny
  2024-08-09 12:38 ` [PATCH i-g-t v3 02/14] drm-uapi/xe: Sync with eudebug uapi Christoph Manszewski
                   ` (18 subsequent siblings)
  19 siblings, 1 reply; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Christoph Manszewski, Jonathan Cavitt, Lucas De Marchi

Align with kernel commit f2881dfdaaa9 ("drm/xe/oa/uapi: Make bit masks
unsigned"). Use the built header instead of the raw uapi header.

Cc: Jonathan Cavitt <jonathan.cavitt@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
 include/drm-uapi/xe_drm.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 29425d7fd..f0a450db9 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -3,8 +3,8 @@
  * Copyright © 2023 Intel Corporation
  */
 
-#ifndef _UAPI_XE_DRM_H_
-#define _UAPI_XE_DRM_H_
+#ifndef _XE_DRM_H_
+#define _XE_DRM_H_
 
 #include "drm.h"
 
@@ -134,7 +134,7 @@ extern "C" {
  * redefine the interface more easily than an ever growing struct of
  * increasing complexity, and for large parts of that interface to be
  * entirely optional. The downside is more pointer chasing; chasing across
- * the __user boundary with pointers encapsulated inside u64.
+ * the boundary with pointers encapsulated inside u64.
  *
  * Example chaining:
  *
@@ -1598,10 +1598,10 @@ enum drm_xe_oa_property_id {
 	 * b. Counter select c. Counter size and d. BC report. Also refer to the
 	 * oa_formats array in drivers/gpu/drm/xe/xe_oa.c.
 	 */
-#define DRM_XE_OA_FORMAT_MASK_FMT_TYPE		(0xff << 0)
-#define DRM_XE_OA_FORMAT_MASK_COUNTER_SEL	(0xff << 8)
-#define DRM_XE_OA_FORMAT_MASK_COUNTER_SIZE	(0xff << 16)
-#define DRM_XE_OA_FORMAT_MASK_BC_REPORT		(0xff << 24)
+#define DRM_XE_OA_FORMAT_MASK_FMT_TYPE		(0xffu << 0)
+#define DRM_XE_OA_FORMAT_MASK_COUNTER_SEL	(0xffu << 8)
+#define DRM_XE_OA_FORMAT_MASK_COUNTER_SIZE	(0xffu << 16)
+#define DRM_XE_OA_FORMAT_MASK_BC_REPORT		(0xffu << 24)
 
 	/**
 	 * @DRM_XE_OA_PROPERTY_OA_PERIOD_EXPONENT: Requests periodic OA unit
@@ -1698,4 +1698,4 @@ struct drm_xe_oa_stream_info {
 }
 #endif
 
-#endif /* _UAPI_XE_DRM_H_ */
+#endif /* _XE_DRM_H_ */
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH i-g-t v3 02/14] drm-uapi/xe: Sync with eudebug uapi
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
  2024-08-09 12:38 ` [PATCH i-g-t v3 01/14] drm-uapi/xe: Sync with oa uapi fix Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-20  7:52   ` Zbigniew Kempczyński
  2024-08-09 12:38 ` [PATCH i-g-t v3 03/14] lib/xe_ioctl: Add wrapper with vm_bind_op extension parameter Christoph Manszewski
                   ` (17 subsequent siblings)
  19 siblings, 1 reply; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Mika Kuoppala, Christoph Manszewski

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

Align with kernel commit 09411c6ecbef ("drm/xe/eudebug: Add debug
metadata support for xe_eudebug") from:

https://gitlab.freedesktop.org/miku/kernel.git

which introduces the most recent changes to the eudebug uapi.

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
 include/drm-uapi/xe_drm.h         |  96 ++++++++++++-
 include/drm-uapi/xe_drm_eudebug.h | 225 ++++++++++++++++++++++++++++++
 2 files changed, 319 insertions(+), 2 deletions(-)
 create mode 100644 include/drm-uapi/xe_drm_eudebug.h

diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index f0a450db9..01ffbb8cb 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -102,7 +102,9 @@ extern "C" {
 #define DRM_XE_EXEC			0x09
 #define DRM_XE_WAIT_USER_FENCE		0x0a
 #define DRM_XE_OBSERVATION		0x0b
-
+#define DRM_XE_EUDEBUG_CONNECT		0x0c
+#define DRM_XE_DEBUG_METADATA_CREATE	0x0d
+#define DRM_XE_DEBUG_METADATA_DESTROY	0x0e
 /* Must be kept compact -- no holes */
 
 #define DRM_IOCTL_XE_DEVICE_QUERY		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_DEVICE_QUERY, struct drm_xe_device_query)
@@ -117,6 +119,9 @@ extern "C" {
 #define DRM_IOCTL_XE_EXEC			DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
 #define DRM_IOCTL_XE_WAIT_USER_FENCE		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
 #define DRM_IOCTL_XE_OBSERVATION		DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
+#define DRM_IOCTL_XE_EUDEBUG_CONNECT		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_EUDEBUG_CONNECT, struct drm_xe_eudebug_connect)
+#define DRM_IOCTL_XE_DEBUG_METADATA_CREATE	 DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_DEBUG_METADATA_CREATE, struct drm_xe_debug_metadata_create)
+#define DRM_IOCTL_XE_DEBUG_METADATA_DESTROY	 DRM_IOW(DRM_COMMAND_BASE + DRM_XE_DEBUG_METADATA_DESTROY, struct drm_xe_debug_metadata_destroy)
 
 /**
  * DOC: Xe IOCTL Extensions
@@ -881,6 +886,23 @@ struct drm_xe_vm_destroy {
 	__u64 reserved[2];
 };
 
+struct drm_xe_vm_bind_op_ext_attach_debug {
+	/** @base: base user extension */
+	struct drm_xe_user_extension base;
+
+	/** @id: Debug object id from create metadata */
+	__u64 metadata_id;
+
+	/** @flags: Flags */
+	__u64 flags;
+
+	/** @cookie: Cookie */
+	__u64 cookie;
+
+	/** @reserved: Reserved */
+	__u64 reserved;
+};
+
 /**
  * struct drm_xe_vm_bind_op - run bind operations
  *
@@ -905,7 +927,9 @@ struct drm_xe_vm_destroy {
  *    handle MBZ, and the BO offset MBZ. This flag is intended to
  *    implement VK sparse bindings.
  */
+
 struct drm_xe_vm_bind_op {
+#define XE_VM_BIND_OP_EXTENSIONS_ATTACH_DEBUG 0
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
@@ -1108,7 +1132,8 @@ struct drm_xe_exec_queue_create {
 #define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY		0
 #define   DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY		0
 #define   DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE		1
-
+#define   DRM_XE_EXEC_QUEUE_SET_PROPERTY_EUDEBUG		2
+#define     DRM_XE_EXEC_QUEUE_EUDEBUG_FLAG_ENABLE		(1 << 0)
 	/** @extensions: Pointer to the first extension struct, if any */
 	__u64 extensions;
 
@@ -1694,6 +1719,73 @@ struct drm_xe_oa_stream_info {
 	__u64 reserved[3];
 };
 
+/*
+ * Debugger ABI (ioctl and events) Version History:
+ * 0 - No debugger available
+ * 1 - Initial version
+ */
+#define DRM_XE_EUDEBUG_VERSION 1
+
+struct drm_xe_eudebug_connect {
+	/** @extensions: Pointer to the first extension struct, if any */
+	__u64 extensions;
+
+	__u64 pid; /* input: Target process ID */
+	__u32 flags; /* MBZ */
+
+	__u32 version; /* output: current ABI (ioctl / events) version */
+};
+
+/*
+ * struct drm_xe_debug_metadata_create - Create debug metadata
+ *
+ * Add a region of user memory to be marked as debug metadata.
+ * When the debugger attaches, the metadata regions will be delivered
+ * for debugger. Debugger can then map these regions to help decode
+ * the program state.
+ *
+ * Returns handle to created metadata entry.
+ */
+struct drm_xe_debug_metadata_create {
+	/** @extensions: Pointer to the first extension struct, if any */
+	__u64 extensions;
+
+#define DRM_XE_DEBUG_METADATA_ELF_BINARY     0
+#define DRM_XE_DEBUG_METADATA_PROGRAM_MODULE 1
+#define WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_MODULE_AREA 2
+#define WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_SBA_AREA 3
+#define WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_SIP_AREA 4
+#define WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_NUM (1 + \
+	  WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_SIP_AREA)
+
+	/** @type: Type of metadata */
+	__u64 type;
+
+	/** @user_addr: pointer to start of the metadata */
+	__u64 user_addr;
+
+	/** @len: length, in bytes, of the metadata */
+	__u64 len;
+
+	/** @metadata_id: created metadata handle (out) */
+	__u32 metadata_id;
+};
+
+/**
+ * struct drm_xe_debug_metadata_destroy - Destroy debug metadata
+ *
+ * Destroy debug metadata.
+ */
+struct drm_xe_debug_metadata_destroy {
+	/** @extensions: Pointer to the first extension struct, if any */
+	__u64 extensions;
+
+	/** @metadata_id: metadata handle to destroy */
+	__u32 metadata_id;
+};
+
+#include "xe_drm_eudebug.h"
+
 #if defined(__cplusplus)
 }
 #endif
diff --git a/include/drm-uapi/xe_drm_eudebug.h b/include/drm-uapi/xe_drm_eudebug.h
new file mode 100644
index 000000000..1db737080
--- /dev/null
+++ b/include/drm-uapi/xe_drm_eudebug.h
@@ -0,0 +1,225 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#ifndef _XE_DRM_EUDEBUG_H_
+#define _XE_DRM_EUDEBUG_H_
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+/**
+ * Do a eudebug event read for a debugger connection.
+ *
+ * This ioctl is available in debug version 1.
+ */
+#define DRM_XE_EUDEBUG_IOCTL_READ_EVENT		_IO('j', 0x0)
+#define DRM_XE_EUDEBUG_IOCTL_EU_CONTROL		_IOWR('j', 0x2, struct drm_xe_eudebug_eu_control)
+#define DRM_XE_EUDEBUG_IOCTL_ACK_EVENT		_IOW('j', 0x4, struct drm_xe_eudebug_ack_event)
+#define DRM_XE_EUDEBUG_IOCTL_VM_OPEN		_IOW('j', 0x1, struct drm_xe_eudebug_vm_open)
+#define DRM_XE_EUDEBUG_IOCTL_READ_METADATA	_IOWR('j', 0x3, struct drm_xe_eudebug_read_metadata)
+
+/* XXX: Document events to match their internal counterparts when moved to xe_drm.h */
+struct drm_xe_eudebug_event {
+	__u32 len;
+
+	__u16 type;
+#define DRM_XE_EUDEBUG_EVENT_NONE		0
+#define DRM_XE_EUDEBUG_EVENT_READ		1
+#define DRM_XE_EUDEBUG_EVENT_OPEN		2
+#define DRM_XE_EUDEBUG_EVENT_VM			3
+#define DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE		4
+#define DRM_XE_EUDEBUG_EVENT_EU_ATTENTION	5
+#define DRM_XE_EUDEBUG_EVENT_VM_BIND		6
+#define DRM_XE_EUDEBUG_EVENT_VM_BIND_OP		7
+#define DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE	8
+#define DRM_XE_EUDEBUG_EVENT_METADATA		9
+#define DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA 10
+
+	__u16 flags;
+#define DRM_XE_EUDEBUG_EVENT_CREATE		(1 << 0)
+#define DRM_XE_EUDEBUG_EVENT_DESTROY		(1 << 1)
+#define DRM_XE_EUDEBUG_EVENT_STATE_CHANGE	(1 << 2)
+#define DRM_XE_EUDEBUG_EVENT_NEED_ACK		(1 << 3)
+
+	__u64 seqno;
+	__u64 reserved;
+};
+
+struct drm_xe_eudebug_event_client {
+	struct drm_xe_eudebug_event base;
+
+	__u64 client_handle; /* This is unique per debug connection */
+};
+
+struct drm_xe_eudebug_event_vm {
+	struct drm_xe_eudebug_event base;
+
+	__u64 client_handle;
+	__u64 vm_handle;
+};
+
+struct drm_xe_eudebug_event_exec_queue {
+	struct drm_xe_eudebug_event base;
+
+	__u64 client_handle;
+	__u64 vm_handle;
+	__u64 exec_queue_handle;
+	__u32 engine_class;
+	__u32 width;
+	__u64 lrc_handle[];
+};
+
+struct drm_xe_eudebug_event_eu_attention {
+	struct drm_xe_eudebug_event base;
+
+	__u64 client_handle;
+	__u64 exec_queue_handle;
+	__u64 lrc_handle;
+	__u32 flags;
+	__u32 bitmask_size;
+	__u8 bitmask[];
+};
+
+struct drm_xe_eudebug_eu_control {
+	__u64 client_handle;
+
+#define DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL	0
+#define DRM_XE_EUDEBUG_EU_CONTROL_CMD_STOPPED		1
+#define DRM_XE_EUDEBUG_EU_CONTROL_CMD_RESUME		2
+	__u32 cmd;
+	__u32 flags;
+
+	__u64 seqno;
+
+	__u64 exec_queue_handle;
+	__u64 lrc_handle;
+	__u32 reserved;
+	__u32 bitmask_size;
+	__u64 bitmask_ptr;
+};
+
+/*
+ *  When the client (debuggee) calls vm_bind_ioctl(), the following
+ *  event sequence will be created (for the debugger):
+ *
+ *  ┌───────────────────────┐
+ *  │  EVENT_VM_BIND        ├───────┬─┬─┐
+ *  └───────────────────────┘       │ │ │
+ *      ┌───────────────────────┐   │ │ │
+ *      │ EVENT_VM_BIND_OP #1   ├───┘ │ │
+ *      └───────────────────────┘     │ │
+ *                 ...                │ │
+ *      ┌───────────────────────┐     │ │
+ *      │ EVENT_VM_BIND_OP #n   ├─────┘ │
+ *      └───────────────────────┘       │
+ *                                      │
+ *      ┌───────────────────────┐       │
+ *      │ EVENT_UFENCE          ├───────┘
+ *      └───────────────────────┘
+ *
+ * All the events below VM_BIND will reference the VM_BIND
+ * they associate with, by field .vm_bind_ref_seqno.
+ * event_ufence will only be included if the client attached
+ * a sync of type UFENCE to its vm_bind_ioctl().
+ *
+ * When EVENT_UFENCE is sent by the driver, all the OPs of
+ * the original VM_BIND are completed and the [addr,range]
+ * contained in them are present and modifiable through the
+ * vm accessors. Accessing [addr, range] before the related ufence
+ * event will lead to undefined results, as the actual bind
+ * operations are async and the backing storage might not
+ * be there at the moment of receiving the event.
+ *
+ * Client's UFENCE sync will be held by the driver: client's
+ * drm_xe_wait_ufence will not complete and the value of the ufence
+ * won't appear until ufence is acked by the debugger process calling
+ * DRM_XE_EUDEBUG_IOCTL_ACK_EVENT with the event_ufence.base.seqno.
+ * This will signal the fence, .value will update and the wait will
+ * complete allowing the client to continue.
+ *
+ */
+
+struct drm_xe_eudebug_event_vm_bind {
+	struct drm_xe_eudebug_event base;
+
+	__u64 client_handle;
+	__u64 vm_handle;
+
+	__u32 flags;
+#define DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE (1 << 0)
+
+	__u32 num_binds;
+};
+
+struct drm_xe_eudebug_event_vm_bind_op {
+	struct drm_xe_eudebug_event base;
+	__u64 vm_bind_ref_seqno; /* *_event_vm_bind.base.seqno */
+	__u64 num_extensions;
+
+	__u64 addr; /* XXX: Zero for unmap all? */
+	__u64 range; /* XXX: Zero for unmap all? */
+};
+
+struct drm_xe_eudebug_event_vm_bind_ufence {
+	struct drm_xe_eudebug_event base;
+	__u64 vm_bind_ref_seqno; /* *_event_vm_bind.base.seqno */
+};
+
+struct drm_xe_eudebug_ack_event {
+	__u32 type;
+	__u32 flags; /* MBZ */
+	__u64 seqno;
+};
+
+struct drm_xe_eudebug_vm_open {
+	/** @extensions: Pointer to the first extension struct, if any */
+	__u64 extensions;
+
+	/** @client_handle: id of client */
+	__u64 client_handle;
+
+	/** @vm_handle: id of vm */
+	__u64 vm_handle;
+
+	/** @flags: flags */
+	__u64 flags;
+
+	/** @timeout_ns: Timeout in nanoseconds for operations (fsync) */
+	__u64 timeout_ns;
+};
+
+struct drm_xe_eudebug_read_metadata {
+	__u64 client_handle;
+	__u64 metadata_handle;
+	__u32 flags;
+	__u32 reserved;
+	__u64 ptr;
+	__u64 size;
+};
+
+struct drm_xe_eudebug_event_metadata {
+	struct drm_xe_eudebug_event base;
+
+	__u64 client_handle;
+	__u64 metadata_handle;
+	/* XXX: Refer to xe_drm.h for fields */
+	__u64 type;
+	__u64 len;
+};
+
+struct drm_xe_eudebug_event_vm_bind_op_metadata {
+	struct drm_xe_eudebug_event base;
+	__u64 vm_bind_op_ref_seqno; /* *_event_vm_bind_op.base.seqno */
+
+	__u64 metadata_handle;
+	__u64 metadata_cookie;
+};
+
+#if defined(__cplusplus)
+}
+#endif
+
+#endif
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH i-g-t v3 03/14] lib/xe_ioctl: Add wrapper with vm_bind_op extension parameter
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
  2024-08-09 12:38 ` [PATCH i-g-t v3 01/14] drm-uapi/xe: Sync with oa uapi fix Christoph Manszewski
  2024-08-09 12:38 ` [PATCH i-g-t v3 02/14] drm-uapi/xe: Sync with eudebug uapi Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-20  7:54   ` Zbigniew Kempczyński
  2024-08-09 12:38 ` [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework Christoph Manszewski
                   ` (16 subsequent siblings)
  19 siblings, 1 reply; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Christoph Manszewski, Mika Kuoppala

Currently there is no way to set the drm_xe_vm_bind_op extension
field. Add a vm_bind wrapper that allows passing this field as a
parameter.

Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Reviewed-by: Dominik Karol Piątkowski <dominik.karol.piatkowski@intel.com>
---
 lib/xe/xe_ioctl.c | 20 ++++++++++++++++----
 lib/xe/xe_ioctl.h |  5 +++++
 2 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index ae43ffd15..6d8388918 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -96,15 +96,17 @@ void xe_vm_bind_array(int fd, uint32_t vm, uint32_t exec_queue,
 	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND, &bind), 0);
 }
 
-int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
-		  uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
-		  uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
-		  uint32_t prefetch_region, uint8_t pat_index, uint64_t ext)
+int  ___xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
+		   uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
+		   uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
+		   uint32_t prefetch_region, uint8_t pat_index, uint64_t ext,
+		   uint64_t op_ext)
 {
 	struct drm_xe_vm_bind bind = {
 		.extensions = ext,
 		.vm_id = vm,
 		.num_binds = 1,
+		.bind.extensions = op_ext,
 		.bind.obj = bo,
 		.bind.obj_offset = offset,
 		.bind.range = size,
@@ -125,6 +127,16 @@ int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 	return 0;
 }
 
+int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
+		  uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
+		  uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
+		  uint32_t prefetch_region, uint8_t pat_index, uint64_t ext)
+{
+	return ___xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size, op,
+			     flags, sync, num_syncs, prefetch_region,
+			     pat_index, ext, 0);
+}
+
 void  __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 			  uint64_t offset, uint64_t addr, uint64_t size,
 			  uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index b27c0053f..18cc2b72b 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -20,6 +20,11 @@
 uint32_t xe_cs_prefetch_size(int fd);
 uint64_t xe_bb_size(int fd, uint64_t reqsize);
 uint32_t xe_vm_create(int fd, uint32_t flags, uint64_t ext);
+int  ___xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
+		   uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
+		   uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
+		   uint32_t prefetch_region, uint8_t pat_index, uint64_t ext,
+		   uint64_t op_ext);
 int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
 		  uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
 		  uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (2 preceding siblings ...)
  2024-08-09 12:38 ` [PATCH i-g-t v3 03/14] lib/xe_ioctl: Add wrapper with vm_bind_op extension parameter Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-19  8:30   ` Grzegorzek, Dominik
  2024-08-20  8:14   ` Zbigniew Kempczyński
  2024-08-09 12:38 ` [PATCH i-g-t v3 05/14] tests/xe_eudebug: Test eudebug resource tracking and manipulation Christoph Manszewski
                   ` (15 subsequent siblings)
  19 siblings, 2 replies; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Mika Kuoppala, Christoph Manszewski, Karolina Stolarek

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

Introduce a library which simplifies testing of the eu debug
capability. The library provides event log helpers together with an
asynchronous abstraction for the client process and the debugger
itself.

xe_eudebug_client creates its own process with the user's work
function, and provides mechanisms to synchronize the beginning of
execution and event logging.

xe_eudebug_debugger allows attaching to a given process, provides an
asynchronous thread for event reading, and introduces triggers -
a callback mechanism invoked every time a subscribed event is read.

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
---
 lib/meson.build     |    1 +
 lib/xe/xe_eudebug.c | 2192 +++++++++++++++++++++++++++++++++++++++++++
 lib/xe/xe_eudebug.h |  206 ++++
 3 files changed, 2399 insertions(+)
 create mode 100644 lib/xe/xe_eudebug.c
 create mode 100644 lib/xe/xe_eudebug.h

diff --git a/lib/meson.build b/lib/meson.build
index f711e60a7..969ca4101 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -111,6 +111,7 @@ lib_sources = [
 	'igt_msm.c',
 	'igt_dsc.c',
 	'xe/xe_gt.c',
+	'xe/xe_eudebug.c',
 	'xe/xe_ioctl.c',
 	'xe/xe_mmio.c',
 	'xe/xe_query.c',
diff --git a/lib/xe/xe_eudebug.c b/lib/xe/xe_eudebug.c
new file mode 100644
index 000000000..4eac87476
--- /dev/null
+++ b/lib/xe/xe_eudebug.c
@@ -0,0 +1,2192 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#include <fcntl.h>
+#include <poll.h>
+#include <signal.h>
+#include <sys/select.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "igt.h"
+#include "igt_sysfs.h"
+#include "intel_pat.h"
+#include "xe_eudebug.h"
+#include "xe_ioctl.h"
+
+struct event_trigger {
+	xe_eudebug_trigger_fn fn;
+	int type;
+	struct igt_list_head link;
+};
+
+struct seqno_list_entry {
+	struct igt_list_head link;
+	uint64_t seqno;
+};
+
+struct match_dto {
+	struct drm_xe_eudebug_event *target;
+	struct igt_list_head *seqno_list;
+	uint64_t client_handle;
+	uint32_t filter;
+
+	/* store latest 'EVENT_VM_BIND' seqno */
+	uint64_t *bind_seqno;
+	/* latest vm_bind_op seqno matching bind_seqno */
+	uint64_t *bind_op_seqno;
+};
+
+#define CLIENT_PID  1
+#define CLIENT_RUN  2
+#define CLIENT_FINI 3
+#define CLIENT_STOP 4
+#define CLIENT_STAGE 5
+#define DEBUGGER_STAGE 6
+
+#define DEBUGGER_WORKER_INACTIVE  0
+#define DEBUGGER_WORKER_ACTIVE  1
+#define DEBUGGER_WORKER_QUITTING 2
+
+static const char *type_to_str(unsigned int type)
+{
+	switch (type) {
+	case DRM_XE_EUDEBUG_EVENT_NONE:
+		return "none";
+	case DRM_XE_EUDEBUG_EVENT_READ:
+		return "read";
+	case DRM_XE_EUDEBUG_EVENT_OPEN:
+		return "client";
+	case DRM_XE_EUDEBUG_EVENT_VM:
+		return "vm";
+	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE:
+		return "exec_queue";
+	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION:
+		return "attention";
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND:
+		return "vm_bind";
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP:
+		return "vm_bind_op";
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE:
+		return "vm_bind_ufence";
+	case DRM_XE_EUDEBUG_EVENT_METADATA:
+		return "metadata";
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA:
+		return "vm_bind_op_metadata";
+	}
+
+	return "UNKNOWN";
+}
+
+static const char *event_type_to_str(struct drm_xe_eudebug_event *e, char *buf)
+{
+	sprintf(buf, "%s(%d)", type_to_str(e->type), e->type);
+
+	return buf;
+}
+
+static const char *flags_to_str(unsigned int flags)
+{
+	if (flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
+		if (flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK)
+			return "create|ack";
+		else
+			return "create";
+	}
+	if (flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
+		return "destroy";
+
+	if (flags & DRM_XE_EUDEBUG_EVENT_STATE_CHANGE)
+		return "state-change";
+
+	igt_assert(!(flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK));
+
+	return "flags unknown";
+}
+
+static const char *event_members_to_str(struct drm_xe_eudebug_event *e, char *b)
+{
+	switch (e->type) {
+	case DRM_XE_EUDEBUG_EVENT_OPEN: {
+		struct drm_xe_eudebug_event_client *ec = (struct drm_xe_eudebug_event_client *)e;
+
+		sprintf(b, "handle=%llu", ec->client_handle);
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM: {
+		struct drm_xe_eudebug_event_vm *evm = (struct drm_xe_eudebug_event_vm *)e;
+
+		sprintf(b, "client_handle=%llu, handle=%llu",
+			evm->client_handle, evm->vm_handle);
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
+		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
+
+		sprintf(b, "client_handle=%llu, vm_handle=%llu, "
+			   "exec_queue_handle=%llu, engine_class=%d, exec_queue_width=%d",
+			ee->client_handle, ee->vm_handle,
+			ee->exec_queue_handle, ee->engine_class, ee->width);
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION: {
+		struct drm_xe_eudebug_event_eu_attention *ea = (void *)e;
+
+		sprintf(b, "client_handle=%llu, exec_queue_handle=%llu, "
+			   "lrc_handle=%llu, bitmask_size=%d",
+			ea->client_handle, ea->exec_queue_handle,
+			ea->lrc_handle, ea->bitmask_size);
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
+		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
+
+		sprintf(b, "client_handle=%llu, vm_handle=%llu, flags=0x%x, num_binds=%u",
+			evmb->client_handle, evmb->vm_handle, evmb->flags, evmb->num_binds);
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
+		struct drm_xe_eudebug_event_vm_bind_op *op = (void *)e;
+
+		sprintf(b, "vm_bind_ref_seqno=%lld, addr=%016llx, range=%llu num_extensions=%llu",
+			op->vm_bind_ref_seqno, op->addr, op->range, op->num_extensions);
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
+		struct drm_xe_eudebug_event_vm_bind_ufence *f = (void *)e;
+
+		sprintf(b, "vm_bind_ref_seqno=%lld", f->vm_bind_ref_seqno);
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_METADATA: {
+		struct drm_xe_eudebug_event_metadata *em = (void *)e;
+
+		sprintf(b, "client_handle=%llu, metadata_handle=%llu, type=%llu, len=%llu",
+			em->client_handle, em->metadata_handle, em->type, em->len);
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
+		struct drm_xe_eudebug_event_vm_bind_op_metadata *op = (void *)e;
+
+		sprintf(b, "vm_bind_op_ref_seqno=%lld, metadata_handle=%llu, metadata_cookie=%llu",
+			op->vm_bind_op_ref_seqno, op->metadata_handle, op->metadata_cookie);
+		break;
+	}
+	default:
+		strcpy(b, "<...>");
+	}
+
+	return b;
+}
+
+/**
+ * xe_eudebug_event_to_str:
+ * @e: pointer to event
+ * @buf: target to write string representation of @e
+ * @len: size of target buffer @buf
+ *
+ * Creates string representation for given event.
+ *
+ * Returns: the written input buffer pointed by @buf.
+ */
+const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len)
+{
+	char a[256];
+	char b[256];
+
+	snprintf(buf, len, "(%llu) %15s:%s: %s",
+		 e->seqno,
+		 event_type_to_str(e, a),
+		 flags_to_str(e->flags),
+		 event_members_to_str(e, b));
+
+	return buf;
+}
+
+static void catch_child_failure(void)
+{
+	pid_t pid;
+	int status;
+
+	pid = waitpid(-1, &status, WNOHANG);
+
+	if (pid == 0 || pid == -1)
+		return;
+
+	if (!WIFEXITED(status))
+		return;
+
+	igt_assert_f(WEXITSTATUS(status) == 0, "Client failed!\n");
+}
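For reviewers, the child-reaping logic above is plain POSIX `waitpid()` plus the `WIFEXITED`/`WEXITSTATUS` macros. A minimal standalone sketch (the helper name `forked_child_status` is hypothetical, not part of this library):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with 'code' and reap it, returning the exit
 * status observed by the parent (or -1 on error). Demonstrates the
 * WIFEXITED/WEXITSTATUS handling used by catch_child_failure(). */
static inline int forked_child_status(int code)
{
	pid_t pid = fork();
	int status;

	if (pid == 0)
		_exit(code); /* child terminates immediately */
	if (pid < 0 || waitpid(pid, &status, 0) != pid)
		return -1;

	return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```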
+
+static int safe_pipe_read(int pipe[2], void *buf, int nbytes, int timeout_ms)
+{
+	int ret;
+	int t = 0;
+	struct pollfd fd = {
+		.fd = pipe[0],
+		.events = POLLIN,
+		.revents = 0
+	};
+
+	/* When child fails we may get stuck forever. Check whether
+	 * the child process ended with an error.
+	 */
+	do {
+		const int interval_ms = 1000;
+
+		ret = poll(&fd, 1, interval_ms);
+
+		if (!ret) {
+			catch_child_failure();
+			t += interval_ms;
+		}
+	} while (!ret && t < timeout_ms);
+
+	if (ret > 0)
+		return read(pipe[0], buf, nbytes);
+
+	return 0;
+}
+
+static uint64_t pipe_read(int pipe[2], int timeout_ms)
+{
+	uint64_t in;
+	uint64_t ret;
+
+	ret = safe_pipe_read(pipe, &in, sizeof(in), timeout_ms);
+	igt_assert(ret == sizeof(in));
+
+	return in;
+}
+
+static void pipe_signal(int pipe[2], uint64_t token)
+{
+	igt_assert(write(pipe[1], &token, sizeof(token)) == sizeof(token));
+}
+
+static void pipe_close(int pipe[2])
+{
+	if (pipe[0] != -1)
+		close(pipe[0]);
+
+	if (pipe[1] != -1)
+		close(pipe[1]);
+}
+
+static uint64_t __wait_token(int p[2], const uint64_t token, int timeout_ms)
+{
+	uint64_t in;
+
+	in = pipe_read(p, timeout_ms);
+
+	igt_assert_eq(in, token);
+
+	return pipe_read(p, timeout_ms);
+}
+
+static uint64_t client_wait_token(struct xe_eudebug_client *c,
+				 const uint64_t token)
+{
+	return __wait_token(c->p_in, token, c->timeout_ms);
+}
+
+static uint64_t wait_from_client(struct xe_eudebug_client *c,
+				 const uint64_t token)
+{
+	return __wait_token(c->p_out, token, c->timeout_ms);
+}
+
+static void token_signal(int p[2], const uint64_t token, const uint64_t value)
+{
+	pipe_signal(p, token);
+	pipe_signal(p, value);
+}
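The token protocol implemented by pipe_signal()/token_signal() is simply two consecutive uint64_t writes per message: a token identifying the message, then a payload value. A self-contained model of that handshake over a fresh pipe (the `demo_*` names are hypothetical, for illustration only):

```c
#include <stdint.h>
#include <unistd.h>

/* One message = token word + value word; (uint64_t)-1 flags any failure. */
static inline int demo_signal_token(int wfd, uint64_t token, uint64_t value)
{
	if (write(wfd, &token, sizeof(token)) != (ssize_t)sizeof(token))
		return -1;
	if (write(wfd, &value, sizeof(value)) != (ssize_t)sizeof(value))
		return -1;
	return 0;
}

static inline uint64_t demo_wait_token(int rfd, uint64_t expected)
{
	uint64_t token, value;

	if (read(rfd, &token, sizeof(token)) != (ssize_t)sizeof(token) ||
	    token != expected)
		return (uint64_t)-1;
	if (read(rfd, &value, sizeof(value)) != (ssize_t)sizeof(value))
		return (uint64_t)-1;

	return value;
}

/* Round-trip one message through a fresh pipe and return the payload. */
static inline uint64_t demo_token_roundtrip(uint64_t token, uint64_t value)
{
	int p[2];
	uint64_t out = (uint64_t)-1;

	if (pipe(p))
		return out;

	if (!demo_signal_token(p[1], token, value))
		out = demo_wait_token(p[0], token);

	close(p[0]);
	close(p[1]);
	return out;
}
```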
+
+static void client_signal(struct xe_eudebug_client *c,
+			  const uint64_t token,
+			  const uint64_t value)
+{
+	token_signal(c->p_out, token, value);
+}
+
+static int __xe_eudebug_connect(int fd, pid_t pid, uint32_t flags, uint64_t events)
+{
+	struct drm_xe_eudebug_connect param = {
+		.pid = pid,
+		.flags = flags,
+	};
+	int debugfd;
+
+	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
+
+	if (debugfd < 0)
+		return -errno;
+
+	return debugfd;
+}
+
+static void event_log_write_to_fd(struct xe_eudebug_event_log *l, int fd)
+{
+	igt_assert_eq(write(fd, &l->head, sizeof(l->head)),
+		      sizeof(l->head));
+
+	igt_assert_eq(write(fd, l->log, l->head), l->head);
+}
+
+static void read_all(int fd, void *buf, size_t nbytes)
+{
+	ssize_t remaining_size = nbytes;
+	ssize_t current_size = 0;
+	ssize_t read_size = 0;
+
+	do {
+		read_size = read(fd, buf + current_size, remaining_size);
+		igt_assert_f(read_size >= 0, "read failed: %s\n", strerror(errno));
+
+		current_size += read_size;
+		remaining_size -= read_size;
+	} while (remaining_size > 0 && read_size > 0);
+
+	igt_assert_eq(current_size, nbytes);
+}
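The loop in read_all() exists because a single read() may legally return fewer bytes than requested. A standalone sketch of the same pattern, including EINTR retry, verified through a pipe round-trip (`read_full`/`pipe_roundtrip` are illustrative names, not library API):

```c
#include <errno.h>
#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

/* Loop until 'nbytes' have been read or the stream ends. Returns the
 * number of bytes read, or -1 on a hard error. */
static inline ssize_t read_full(int fd, void *buf, size_t nbytes)
{
	size_t done = 0;

	while (done < nbytes) {
		ssize_t r = read(fd, (char *)buf + done, nbytes - done);

		if (r < 0) {
			if (errno == EINTR)
				continue; /* interrupted, retry */
			return -1;
		}
		if (r == 0)
			break; /* EOF */
		done += r;
	}

	return done;
}

/* Push a buffer through a pipe and read it back in full. */
static inline ssize_t pipe_roundtrip(const void *src, void *dst, size_t n)
{
	int p[2];
	ssize_t got = -1;

	if (pipe(p))
		return -1;

	if (write(p[1], src, n) == (ssize_t)n) {
		close(p[1]); /* writer done, so EOF is well-defined */
		got = read_full(p[0], dst, n);
		close(p[0]);
		return got;
	}

	close(p[0]);
	close(p[1]);
	return -1;
}
```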
+
+static void event_log_read_from_fd(struct xe_eudebug_event_log *l, int fd)
+{
+	read_all(fd, &l->head, sizeof(l->head));
+	igt_assert_lt(l->head, l->max_size);
+
+	read_all(fd, l->log, l->head);
+}
+
+typedef int (*cmp_fn_t)(struct drm_xe_eudebug_event *, void *);
+
+static struct drm_xe_eudebug_event *
+event_cmp(struct xe_eudebug_event_log *l,
+	  struct drm_xe_eudebug_event *current,
+	  cmp_fn_t match,
+	  void *data)
+{
+	struct drm_xe_eudebug_event *e = current;
+
+	xe_eudebug_for_each_event(e, l) {
+		if (match(e, data))
+			return e;
+	}
+
+	return NULL;
+}
+
+static int match_type_and_flags(struct drm_xe_eudebug_event *a, void *data)
+{
+	struct drm_xe_eudebug_event *b = data;
+
+	if (a->type == b->type &&
+	    a->flags == b->flags)
+		return 1;
+
+	return 0;
+}
+
+static int match_fields(struct drm_xe_eudebug_event *a, void *data)
+{
+	struct drm_xe_eudebug_event *b = data;
+	int ret = 0;
+
+	ret = match_type_and_flags(a, data);
+	if (!ret)
+		return ret;
+
+	ret = 0;
+
+	switch (a->type) {
+	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
+		struct drm_xe_eudebug_event_exec_queue *ae = (void *)a;
+		struct drm_xe_eudebug_event_exec_queue *be = (void *)b;
+
+		if (ae->engine_class == be->engine_class && ae->width == be->width)
+			ret = 1;
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
+		struct drm_xe_eudebug_event_vm_bind *ea = (void *)a;
+		struct drm_xe_eudebug_event_vm_bind *eb = (void *)b;
+
+		if (ea->num_binds == eb->num_binds)
+			ret = 1;
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
+		struct drm_xe_eudebug_event_vm_bind_op *ea = (void *)a;
+		struct drm_xe_eudebug_event_vm_bind_op *eb = (void *)b;
+
+		if (ea->addr == eb->addr && ea->range == eb->range &&
+		    ea->num_extensions == eb->num_extensions)
+			ret = 1;
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
+		struct drm_xe_eudebug_event_vm_bind_op_metadata *ea = (void *)a;
+		struct drm_xe_eudebug_event_vm_bind_op_metadata *eb = (void *)b;
+
+		if (ea->metadata_handle == eb->metadata_handle &&
+		    ea->metadata_cookie == eb->metadata_cookie)
+			ret = 1;
+		break;
+	}
+	default:
+		ret = 1;
+		break;
+	}
+
+	return ret;
+}
+
+static int match_client_handle(struct drm_xe_eudebug_event *e, void *data)
+{
+	struct match_dto *md = (void *)data;
+	uint64_t *bind_seqno = md->bind_seqno;
+	uint64_t *bind_op_seqno = md->bind_op_seqno;
+	uint64_t h = md->client_handle;
+
+	if (XE_EUDEBUG_EVENT_IS_FILTERED(e->type, md->filter))
+		return 0;
+
+	switch (e->type) {
+	case DRM_XE_EUDEBUG_EVENT_OPEN: {
+		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
+
+		if (client->client_handle == h)
+			return 1;
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM: {
+		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
+
+		if (vm->client_handle == h)
+			return 1;
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
+		struct drm_xe_eudebug_event_exec_queue *ee = (struct drm_xe_eudebug_event_exec_queue *)e;
+
+		if (ee->client_handle == h)
+			return 1;
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
+		struct drm_xe_eudebug_event_vm_bind *evmb = (struct drm_xe_eudebug_event_vm_bind *)e;
+
+		if (evmb->client_handle == h) {
+			*bind_seqno = evmb->base.seqno;
+			return 1;
+		}
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
+		struct drm_xe_eudebug_event_vm_bind_op *eo = (struct drm_xe_eudebug_event_vm_bind_op *)e;
+
+		if (eo->vm_bind_ref_seqno == *bind_seqno) {
+			*bind_op_seqno = eo->base.seqno;
+			return 1;
+		}
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
+		struct drm_xe_eudebug_event_vm_bind_ufence *ef = (struct drm_xe_eudebug_event_vm_bind_ufence *)e;
+
+		if (ef->vm_bind_ref_seqno == *bind_seqno)
+			return 1;
+
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_METADATA: {
+		struct drm_xe_eudebug_event_metadata *em = (struct drm_xe_eudebug_event_metadata *)e;
+
+		if (em->client_handle == h)
+			return 1;
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
+		struct drm_xe_eudebug_event_vm_bind_op_metadata *eo = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)e;
+
+		if (eo->vm_bind_op_ref_seqno == *bind_op_seqno)
+			return 1;
+		break;
+	}
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static int match_opposite_resource(struct drm_xe_eudebug_event *e, void *data)
+{
+	struct drm_xe_eudebug_event *d = (void *)data;
+	int ret;
+
+	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
+	d->flags &= ~(DRM_XE_EUDEBUG_EVENT_NEED_ACK);
+	ret = match_type_and_flags(e, data);
+	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
+
+	if (!ret)
+		return 0;
+
+	switch (e->type) {
+	case DRM_XE_EUDEBUG_EVENT_OPEN: {
+		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
+		struct drm_xe_eudebug_event_client *filter = (struct drm_xe_eudebug_event_client *)data;
+
+		if (client->client_handle == filter->client_handle)
+			return 1;
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM: {
+		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
+		struct drm_xe_eudebug_event_vm *filter = (struct drm_xe_eudebug_event_vm *)data;
+
+		if (vm->vm_handle == filter->vm_handle)
+			return 1;
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
+		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
+		struct drm_xe_eudebug_event_exec_queue *filter = (struct drm_xe_eudebug_event_exec_queue *)data;
+
+		if (ee->exec_queue_handle == filter->exec_queue_handle)
+			return 1;
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
+		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
+		struct drm_xe_eudebug_event_vm_bind *filter = (struct drm_xe_eudebug_event_vm_bind *)data;
+
+		if (evmb->vm_handle == filter->vm_handle &&
+		    evmb->num_binds == filter->num_binds)
+			return 1;
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
+		struct drm_xe_eudebug_event_vm_bind_op *avmb = (void *)e;
+		struct drm_xe_eudebug_event_vm_bind_op *filter = (struct drm_xe_eudebug_event_vm_bind_op *)data;
+
+		if (avmb->addr == filter->addr &&
+		    avmb->range == filter->range)
+			return 1;
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_METADATA: {
+		struct drm_xe_eudebug_event_metadata *em = (void *)e;
+		struct drm_xe_eudebug_event_metadata *filter = (struct drm_xe_eudebug_event_metadata *)data;
+
+		if (em->metadata_handle == filter->metadata_handle)
+			return 1;
+		break;
+	}
+	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
+		struct drm_xe_eudebug_event_vm_bind_op_metadata *avmb = (void *)e;
+		struct drm_xe_eudebug_event_vm_bind_op_metadata *filter = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)data;
+
+		if (avmb->metadata_handle == filter->metadata_handle &&
+		    avmb->metadata_cookie == filter->metadata_cookie)
+			return 1;
+		break;
+	}
+	default:
+		break;
+	}
+	return 0;
+}
+
+static int match_full(struct drm_xe_eudebug_event *e, void *data)
+{
+	struct seqno_list_entry *sl;
+	struct match_dto *md = (void *)data;
+	int ret = 0;
+
+	ret = match_client_handle(e, md);
+	if (!ret)
+		return 0;
+
+	ret = match_fields(e, md->target);
+	if (!ret)
+		return 0;
+
+	igt_list_for_each_entry(sl, md->seqno_list, link) {
+		if (sl->seqno == e->seqno)
+			return 0;
+	}
+
+	return 1;
+}
+
+static struct drm_xe_eudebug_event *
+event_type_match(struct xe_eudebug_event_log *l,
+		 struct drm_xe_eudebug_event *target,
+		 struct drm_xe_eudebug_event *current)
+{
+	return event_cmp(l, current, match_type_and_flags, target);
+}
+
+static struct drm_xe_eudebug_event *
+client_match(struct xe_eudebug_event_log *l,
+	     uint64_t client_handle,
+	     struct drm_xe_eudebug_event *current,
+	     uint32_t filter,
+	     uint64_t *bind_seqno,
+	     uint64_t *bind_op_seqno)
+{
+	struct match_dto md = {
+		.client_handle = client_handle,
+		.filter = filter,
+		.bind_seqno = bind_seqno,
+		.bind_op_seqno = bind_op_seqno,
+	};
+
+	return event_cmp(l, current, match_client_handle, &md);
+}
+
+static struct drm_xe_eudebug_event *
+opposite_event_match(struct xe_eudebug_event_log *l,
+		    struct drm_xe_eudebug_event *target,
+		    struct drm_xe_eudebug_event *current)
+{
+	return event_cmp(l, current, match_opposite_resource, target);
+}
+
+static struct drm_xe_eudebug_event *
+event_match(struct xe_eudebug_event_log *l,
+	    struct drm_xe_eudebug_event *target,
+	    uint64_t client_handle,
+	    struct igt_list_head *seqno_list,
+	    uint64_t *bind_seqno,
+	    uint64_t *bind_op_seqno)
+{
+	struct match_dto md = {
+		.target = target,
+		.client_handle = client_handle,
+		.seqno_list = seqno_list,
+		.bind_seqno = bind_seqno,
+		.bind_op_seqno = bind_op_seqno,
+	};
+
+	return event_cmp(l, NULL, match_full, &md);
+}
+
+static void compare_client(struct xe_eudebug_event_log *c, struct drm_xe_eudebug_event *_ce,
+			   struct xe_eudebug_event_log *d, struct drm_xe_eudebug_event *_de,
+			   uint32_t filter)
+{
+	struct drm_xe_eudebug_event_client *ce = (void *)_ce;
+	struct drm_xe_eudebug_event_client *de = (void *)_de;
+	uint64_t cbs = 0, dbs = 0, cbso = 0, dbso = 0;
+	struct igt_list_head matched_seqno_list;
+	struct drm_xe_eudebug_event *hc, *hd;
+	struct seqno_list_entry *entry, *tmp;
+
+	igt_assert(ce);
+	igt_assert(de);
+
+	igt_debug("client: %llu -> %llu\n", ce->client_handle, de->client_handle);
+
+	hc = NULL;
+	hd = NULL;
+	IGT_INIT_LIST_HEAD(&matched_seqno_list);
+
+	do {
+		hc = client_match(c, ce->client_handle, hc, filter, &cbs, &cbso);
+		if (!hc)
+			break;
+
+		hd = event_match(d, hc, de->client_handle, &matched_seqno_list, &dbs, &dbso);
+
+		igt_assert_f(hd, "%s (%llu): no matching event type %u found for client %llu\n",
+			     c->name,
+			     hc->seqno,
+			     hc->type,
+			     ce->client_handle);
+
+		igt_debug("comparing %s %llu vs %s %llu\n",
+			  c->name, hc->seqno, d->name, hd->seqno);
+
+		/*
+		 * Store the seqno of the event that was matched above,
+		 * inside 'matched_seqno_list', to avoid it getting matched
+		 * by subsequent 'event_match' calls.
+		 */
+		entry = malloc(sizeof(*entry));
+		igt_assert(entry);
+		entry->seqno = hd->seqno;
+		igt_list_add(&entry->link, &matched_seqno_list);
+	} while (hc);
+
+	igt_list_for_each_entry_safe(entry, tmp, &matched_seqno_list, link)
+		free(entry);
+}
+
+/**
+ * xe_eudebug_event_log_find_seqno:
+ * @l: event log pointer
+ * @seqno: seqno of event to be found
+ *
+ * Finds the event with given seqno in the event log.
+ *
+ * Returns: pointer to the event with given seqno within @l, or NULL if the
+ * seqno is not present.
+ */
+struct drm_xe_eudebug_event *
+xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno)
+{
+	struct drm_xe_eudebug_event *e = NULL, *found = NULL;
+
+	igt_assert_neq(seqno, 0);
+	/*
+	 * Try to catch if seqno is corrupted and prevent too long tests,
+	 * as our post processing of events is not optimized.
+	 */
+	igt_assert_lt(seqno, 10 * 1000 * 1000);
+
+	xe_eudebug_for_each_event(e, l) {
+		if (e->seqno == seqno) {
+			if (found) {
+				igt_warn("Found multiple events with the same seqno %" PRIu64 "\n", seqno);
+				xe_eudebug_event_log_print(l, false);
+				igt_assert(!found);
+			}
+			found = e;
+		}
+	}
+
+	return found;
+}
+
+static void event_log_sort(struct xe_eudebug_event_log *l)
+{
+	struct xe_eudebug_event_log *tmp;
+	struct drm_xe_eudebug_event *e = NULL;
+	uint64_t last_seqno = 0;
+	uint64_t events = 0, added = 0;
+	uint64_t i;
+
+	xe_eudebug_for_each_event(e, l) {
+		if (e->seqno > last_seqno)
+			last_seqno = e->seqno;
+
+		events++;
+	}
+
+	tmp = xe_eudebug_event_log_create("tmp", l->max_size);
+
+	for (i = 1; i <= last_seqno; i++) {
+		e = xe_eudebug_event_log_find_seqno(l, i);
+		if (e) {
+			xe_eudebug_event_log_write(tmp, e);
+			added++;
+		}
+	}
+
+	igt_assert_eq(events, added);
+	igt_assert_eq(tmp->head, l->head);
+
+	memcpy(l->log, tmp->log, tmp->head);
+
+	xe_eudebug_event_log_destroy(tmp);
+}
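The "sort by lookup" approach in event_log_sort() (rebuild the log by searching for each seqno in ascending order) can be sketched on a simplified fixed-size record, just to illustrate the idea; `demo_event`/`sort_by_seqno` are hypothetical names and the real log stores variable-length events:

```c
#include <stdint.h>
#include <string.h>

struct demo_event {
	uint64_t seqno;
	uint32_t payload;
};

/* Rebuild the array by looking up each seqno from 1..max_seqno, like
 * event_log_sort() does via xe_eudebug_event_log_find_seqno(). O(n * max)
 * but simple, and it tolerates gaps in the seqno space. */
static inline void sort_by_seqno(struct demo_event *ev, int n,
				 uint64_t max_seqno)
{
	struct demo_event tmp[16];
	int out = 0;

	if (n > 16)
		return; /* demo-sized scratch buffer only */

	for (uint64_t s = 1; s <= max_seqno && out < n; s++)
		for (int i = 0; i < n; i++)
			if (ev[i].seqno == s)
				tmp[out++] = ev[i];

	memcpy(ev, tmp, out * sizeof(*ev));
}
```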
+
+/**
+ * xe_eudebug_connect:
+ * @fd: Xe file descriptor
+ * @pid: client PID
+ * @flags: connection flags
+ *
+ * Opens the xe eu debugger connection to the process described by @pid
+ *
+ * Returns: the debugger file descriptor (>= 0) on success, -errno otherwise.
+ */
+int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags)
+{
+	int ret;
+	uint64_t events = 0; /* events filtering not supported yet! */
+
+	ret = __xe_eudebug_connect(fd, pid, flags, events);
+
+	return ret;
+}
+
+/**
+ * xe_eudebug_event_log_create:
+ * @name: event log identifier
+ * @max_size: maximum size of created log
+ *
+ * Creates an EU debugger event log with a capacity of @max_size bytes.
+ *
+ * Returns: pointer to the newly created log
+ */
+#define MAX_EVENT_LOG_SIZE (32 * 1024 * 1024)
+struct xe_eudebug_event_log *xe_eudebug_event_log_create(const char *name, unsigned int max_size)
+{
+	struct xe_eudebug_event_log *l;
+
+	l = calloc(1, sizeof(*l));
+	igt_assert(l);
+	l->log = calloc(1, max_size);
+	igt_assert(l->log);
+	l->max_size = max_size;
+	strncpy(l->name, name, sizeof(l->name) - 1);
+	pthread_mutex_init(&l->lock, NULL);
+
+	return l;
+}
+
+/**
+ * xe_eudebug_event_log_destroy:
+ * @l: event log pointer
+ *
+ * Frees given event log @l.
+ */
+void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l)
+{
+	pthread_mutex_destroy(&l->lock);
+	free(l->log);
+	free(l);
+}
+
+/**
+ * xe_eudebug_event_log_write:
+ * @l: event log pointer
+ * @e: event to be written to event log
+ *
+ * Writes event @e to the event log, thread-safe.
+ */
+void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e)
+{
+	igt_assert(e->seqno);
+	/*
+	 * Try to catch if seqno is corrupted and prevent too long tests,
+	 * as our post processing of events is not optimized.
+	 */
+	igt_assert_lt(e->seqno, 10 * 1000 * 1000);
+
+	pthread_mutex_lock(&l->lock);
+	igt_assert_lt(l->head + e->len, l->max_size);
+	memcpy(l->log + l->head, e, e->len);
+	l->head += e->len;
+
+#ifdef DEBUG_LOG
+	igt_info("%s: wrote %u bytes to eventlog, free %u bytes\n",
+		 l->name, e->len, l->max_size - l->head);
+#endif
+	pthread_mutex_unlock(&l->lock);
+}
+
+/**
+ * xe_eudebug_event_log_print:
+ * @l: event log pointer
+ * @debug: when true function uses igt_debug instead of igt_info.
+ *
+ * Prints given event log.
+ */
+void
+xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug)
+{
+	struct drm_xe_eudebug_event *e = NULL;
+	int level = debug ? IGT_LOG_DEBUG : IGT_LOG_INFO;
+	char str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
+
+	igt_log(IGT_LOG_DOMAIN, level,
+		"event log '%s' (%u bytes):\n", l->name, l->head);
+
+	xe_eudebug_for_each_event(e, l) {
+		xe_eudebug_event_to_str(e, str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
+		igt_log(IGT_LOG_DOMAIN, level, "%s\n", str);
+	}
+}
+
+/**
+ * xe_eudebug_event_log_compare:
+ * @a: event log pointer
+ * @b: event log pointer
+ * @filter: mask that represents events to be skipped during comparison, useful
+ * for events like 'VM_BIND' since they can be asymmetric. Note that
+ * 'DRM_XE_EUDEBUG_EVENT_OPEN' will always be matched.
+ *
+ * Compares event logs @a and @b and asserts that the event
+ * sequences match.
+ */
+void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *a, struct xe_eudebug_event_log *b,
+				  uint32_t filter)
+{
+	struct drm_xe_eudebug_event *ae = NULL;
+	struct drm_xe_eudebug_event *be = NULL;
+
+	xe_eudebug_for_each_event(ae, a) {
+		if (ae->type == DRM_XE_EUDEBUG_EVENT_OPEN &&
+		    ae->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
+			be = event_type_match(b, ae, be);
+
+			compare_client(a, ae, b, be, filter);
+			compare_client(b, be, a, ae, filter);
+		}
+	}
+}
+
+/**
+ * xe_eudebug_event_log_match_opposite:
+ * @l: event log pointer
+ * @filter: mask that represents events to be skipped during comparison, useful
+ * for events like 'VM_BIND' since they can be asymmetric
+ *
+ * Matches and asserts content of all opposite events (create vs destroy).
+ */
+void
+xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter)
+{
+	struct drm_xe_eudebug_event *ce = NULL;
+	struct drm_xe_eudebug_event *de = NULL;
+
+	xe_eudebug_for_each_event(ce, l) {
+		if (ce->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
+			uint8_t offset = sizeof(struct drm_xe_eudebug_event);
+			int opposite_matching;
+
+			if (XE_EUDEBUG_EVENT_IS_FILTERED(ce->type, filter))
+				continue;
+
+			/* No opposite matching for binds */
+			if ((ce->type >= DRM_XE_EUDEBUG_EVENT_VM_BIND &&
+			     ce->type <= DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE) ||
+			    ce->type == DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA)
+				continue;
+
+			de = opposite_event_match(l, ce, ce);
+
+			igt_assert_f(de, "no opposite event of type %u found\n", ce->type);
+
+			igt_assert_eq(ce->len, de->len);
+			opposite_matching = memcmp((uint8_t *)de + offset,
+						   (uint8_t *)ce + offset,
+						   de->len - offset) == 0;
+
+			igt_assert_f(opposite_matching,
+				     "%s: create|destroy event not "
+				     "matching (%llu) vs (%llu)\n",
+				     l->name, de->seqno, ce->seqno);
+		}
+	}
+}
+
+static void debugger_run_triggers(struct xe_eudebug_debugger *d,
+				  struct drm_xe_eudebug_event *e)
+{
+	struct event_trigger *t;
+
+	igt_list_for_each_entry(t, &d->triggers, link) {
+		if (e->type == t->type)
+			t->fn(d, e);
+	}
+}
+
+#define MAX_EVENT_SIZE (32 * 1024)
+static int
+xe_eudebug_read_event(int fd, struct drm_xe_eudebug_event *event)
+{
+	int ret;
+
+	event->type = DRM_XE_EUDEBUG_EVENT_READ;
+	event->flags = 0;
+	event->len = MAX_EVENT_SIZE;
+
+	ret = igt_ioctl(fd, DRM_XE_EUDEBUG_IOCTL_READ_EVENT, event);
+	if (ret < 0)
+		return -errno;
+
+	return ret;
+}
+
+static void *debugger_worker_loop(void *data)
+{
+	uint8_t buf[MAX_EVENT_SIZE];
+	struct drm_xe_eudebug_event *e = (void *)buf;
+	struct xe_eudebug_debugger *d = data;
+	struct pollfd p = {
+		.events = POLLIN,
+		.revents = 0,
+	};
+	int timeout_ms = 100, ret;
+
+	igt_assert(d->master_fd >= 0);
+
+	do {
+		p.fd = d->fd;
+		ret = poll(&p, 1, timeout_ms);
+
+		if (ret == -1) {
+			igt_info("poll failed with errno %d\n", errno);
+			break;
+		}
+
+		if (ret == 1 && (p.revents & POLLIN)) {
+			int err = xe_eudebug_read_event(d->fd, e);
+
+			if (!err) {
+				++d->event_count;
+
+				xe_eudebug_event_log_write(d->log, e);
+				debugger_run_triggers(d, e);
+			} else {
+				igt_info("xe_eudebug_read_event failed with %d\n", err);
+			}
+		}
+	} while ((ret && READ_ONCE(d->worker_state) == DEBUGGER_WORKER_QUITTING) ||
+		 READ_ONCE(d->worker_state) == DEBUGGER_WORKER_ACTIVE);
+
+	d->worker_state = DEBUGGER_WORKER_INACTIVE;
+	return NULL;
+}
+
+/**
+ * xe_eudebug_debugger_available:
+ * @fd: Xe file descriptor
+ *
+ * Returns: true if the debugger connection is available, false otherwise.
+ */
+bool xe_eudebug_debugger_available(int fd)
+{
+	struct drm_xe_eudebug_connect param = { .pid = getpid() };
+	int debugfd;
+
+	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
+	if (debugfd >= 0)
+		close(debugfd);
+
+	return debugfd >= 0;
+}
+
+/**
+ * xe_eudebug_debugger_create:
+ * @master_fd: xe client used to open the debugger connection
+ * @flags: flags stored in the debugger structure, to be used at the
+ * caller's discretion, e.g. inside triggers.
+ * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
+ * can be shared between client and debugger. Can be NULL.
+ *
+ * Returns: newly created xe_eudebug_debugger structure with its
+ * event log initialized. Note that to open the connection
+ * you need to call @xe_eudebug_debugger_attach.
+ */
+struct xe_eudebug_debugger *
+xe_eudebug_debugger_create(int master_fd, uint64_t flags, void *data)
+{
+	struct xe_eudebug_debugger *d;
+
+	d = calloc(1, sizeof(*d));
+	igt_assert(d);
+	d->flags = flags;
+	IGT_INIT_LIST_HEAD(&d->triggers);
+	d->log = xe_eudebug_event_log_create("debugger", MAX_EVENT_LOG_SIZE);
+	d->fd = -1;
+	d->master_fd = master_fd;
+	d->ptr = data;
+
+	return d;
+}
+
+static void debugger_destroy_triggers(struct xe_eudebug_debugger *d)
+{
+	struct event_trigger *t, *tmp;
+
+	igt_list_for_each_entry_safe(t, tmp, &d->triggers, link)
+		free(t);
+}
+
+/**
+ * xe_eudebug_debugger_destroy:
+ * @d: pointer to the debugger
+ *
+ * Frees the xe_eudebug_debugger structure pointed to by @d. If the debugger
+ * connection is still open, it terminates it.
+ */
+void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d)
+{
+	if (d->worker_state)
+		xe_eudebug_debugger_stop_worker(d, 1);
+
+	if (d->target_pid)
+		xe_eudebug_debugger_dettach(d);
+
+	xe_eudebug_event_log_destroy(d->log);
+	debugger_destroy_triggers(d);
+	free(d);
+}
+
+/**
+ * xe_eudebug_debugger_attach:
+ * @d: pointer to the debugger
+ * @c: pointer to the client
+ *
+ * Opens the xe eu debugger connection to the process described by @c (c->pid)
+ *
+ * Returns: 0 if the debugger was successfully attached, -errno otherwise.
+ */
+int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d,
+			       struct xe_eudebug_client *c)
+{
+	int ret;
+
+	igt_assert_eq(d->fd, -1);
+	igt_assert_neq(c->pid, 0);
+	ret = xe_eudebug_connect(d->master_fd, c->pid, 0);
+
+	if (ret < 0)
+		return ret;
+
+	d->fd = ret;
+	d->target_pid = c->pid;
+	d->p_client[0] = c->p_in[0];
+	d->p_client[1] = c->p_in[1];
+
+	igt_debug("debugger connected to %lu\n", d->target_pid);
+
+	return 0;
+}
+
+/**
+ * xe_eudebug_debugger_dettach:
+ * @d: pointer to the debugger
+ *
+ * Closes previously opened xe eu debugger connection. Asserts that
+ * the debugger has an active session.
+ */
+void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d)
+{
+	igt_assert(d->target_pid);
+	close(d->fd);
+	d->target_pid = 0;
+	d->fd = -1;
+}
+
+/**
+ * xe_eudebug_debugger_add_trigger:
+ * @d: pointer to the debugger
+ * @type: the type of the event which activates the trigger
+ * @fn: function to be called when an event of @type is read by the debugger.
+ *
+ * Adds function @fn to the list of triggers activated when an event of @type
+ * has been read by the worker.
+ * Note: triggers are executed from the worker thread.
+ */
+void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d,
+				     int type, xe_eudebug_trigger_fn fn)
+{
+	struct event_trigger *t;
+
+	t = calloc(1, sizeof(*t));
+	IGT_INIT_LIST_HEAD(&t->link);
+	t->type = type;
+	t->fn = fn;
+
+	igt_list_add_tail(&t->link, &d->triggers);
+	igt_debug("added trigger %p\n", t);
+}
+
+/**
+ * xe_eudebug_debugger_start_worker:
+ * @d: pointer to the debugger
+ *
+ * Starts the debugger worker. The worker is responsible for reading all
+ * incoming events from the debugger, putting them into the debugger log and
+ * executing the appropriate event triggers. Note that using the debugger's
+ * event log while the worker is running is not safe.
+ */
+void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d)
+{
+	int ret;
+
+	d->worker_state = DEBUGGER_WORKER_ACTIVE;
+	ret = pthread_create(&d->worker_thread, NULL, &debugger_worker_loop, d);
+
+	igt_assert_f(ret == 0, "Debugger worker thread creation failed!\n");
+}
+
+/**
+ * xe_eudebug_debugger_stop_worker:
+ * @d: pointer to the debugger
+ *
+ * Stops the debugger worker. Event log is sorted by seqno after closure.
+ */
+void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d,
+				     int timeout_s)
+{
+	struct timespec t = {};
+	int ret;
+
+	igt_assert(d->worker_state);
+
+	d->worker_state = DEBUGGER_WORKER_QUITTING; /* First time be polite. */
+	igt_assert_eq(clock_gettime(CLOCK_REALTIME, &t), 0);
+	t.tv_sec += timeout_s;
+
+	ret = pthread_timedjoin_np(d->worker_thread, NULL, &t);
+
+	if (ret == ETIMEDOUT) {
+		d->worker_state = DEBUGGER_WORKER_INACTIVE;
+		ret = pthread_join(d->worker_thread, NULL);
+	}
+
+	igt_assert_f(ret == 0 || ret == ESRCH,
+		     "pthread join failed with error %d!\n", ret);
+
+	event_log_sort(d->log);
+}
+
+/**
+ * xe_eudebug_debugger_signal_stage:
+ * @d: pointer to the debugger
+ * @stage: stage to signal
+ *
+ * Signals the client waiting in xe_eudebug_client_wait_stage(),
+ * releasing it to proceed.
+ */
+void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage)
+{
+	token_signal(d->p_client, CLIENT_STAGE, stage);
+}
+
+/**
+ * xe_eudebug_debugger_wait_stage:
+ * @s: pointer to xe_eudebug_session structure
+ * @stage: stage to wait on
+ *
+ * Pauses debugger until the client has signalled the corresponding stage with
+ * xe_eudebug_client_signal_stage. This is only for situations where the actual
+ * event flow is not enough to coordinate between client/debugger and extra sync
+ * mechanism is needed.
+ */
+void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage)
+{
+	u64 stage_in;
+
+	igt_debug("debugger xe client fd: %d pausing for stage %lu\n", s->d->master_fd, stage);
+
+	stage_in = wait_from_client(s->c, DEBUGGER_STAGE);
+	igt_debug("debugger xe client fd: %d got stage %lu, expected stage %lu\n",
+		  s->d->master_fd, stage_in, stage);
+
+	igt_assert_eq(stage_in, stage);
+}
+
+/**
+ * xe_eudebug_client_create:
+ * @master_fd: xe client used to open the debugger connection
+ * @work: function that opens xe device and executes arbitrary workload
+ * @flags: flags stored in the client structure, to be used at the caller's
+ * discretion, e.g. to provide the @work function with an additional switch.
+ * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
+ * can be shared between client and debugger. Accessible via client->ptr.
+ * Can be NULL.
+ *
+ * Forks and creates the client process. @work won't be called until
+ * xe_eudebug_client_start is called.
+ *
+ * Returns: newly created xe_eudebug_client structure with its
+ * event log initialized.
+ */
+struct xe_eudebug_client *xe_eudebug_client_create(int master_fd, xe_eudebug_client_work_fn work,
+						   uint64_t flags, void *data)
+{
+	struct xe_eudebug_client *c;
+
+	c = calloc(1, sizeof(*c));
+	igt_assert(c);
+	c->flags = flags;
+	igt_assert(!pipe(c->p_in));
+	igt_assert(!pipe(c->p_out));
+	c->seqno = 1;
+	c->log = xe_eudebug_event_log_create("client", MAX_EVENT_LOG_SIZE);
+	c->done = 0;
+	c->ptr = data;
+	c->master_fd = master_fd;
+	c->timeout_ms = XE_EUDEBUG_DEFAULT_TIMEOUT_MS;
+
+	igt_fork(child, 1) {
+		int mypid;
+
+		igt_assert_eq(c->pid, 0);
+
+		close(c->p_out[0]);
+		c->p_out[0] = -1;
+		close(c->p_in[1]);
+		c->p_in[1] = -1;
+
+		mypid = getpid();
+		client_signal(c, CLIENT_PID, mypid);
+
+		c->pid = client_wait_token(c, CLIENT_RUN);
+		igt_assert_eq(c->pid, mypid);
+		if (work)
+			work(c);
+
+		client_signal(c, CLIENT_FINI, c->seqno);
+
+		event_log_write_to_fd(c->log, c->p_out[1]);
+
+		c->pid = client_wait_token(c, CLIENT_STOP);
+		igt_assert_eq(c->pid, mypid);
+	}
+
+	close(c->p_out[1]);
+	c->p_out[1] = -1;
+	close(c->p_in[0]);
+	c->p_in[0] = -1;
+
+	c->pid = wait_from_client(c, CLIENT_PID);
+
+	igt_info("client running with pid %d\n", c->pid);
+
+	return c;
+}
+
+/**
+ * xe_eudebug_client_stop:
+ * @c: pointer to xe_eudebug_client structure
+ *
+ * Waits for the end of the client's work and terminates the client process.
+ */
+void xe_eudebug_client_stop(struct xe_eudebug_client *c)
+{
+	if (c->pid) {
+		int waitstatus;
+
+		xe_eudebug_client_wait_done(c);
+
+		token_signal(c->p_in, CLIENT_STOP, c->pid);
+		igt_assert_eq(waitpid(c->pid, &waitstatus, 0),
+			      c->pid);
+		c->pid = 0;
+	}
+}
+
+/**
+ * xe_eudebug_client_destroy:
+ * @c: pointer to xe_eudebug_client structure to be freed
+ *
+ * Frees the @c client structure. Note that it calls xe_eudebug_client_stop()
+ * if the client process has not terminated yet.
+ */
+void xe_eudebug_client_destroy(struct xe_eudebug_client *c)
+{
+	xe_eudebug_client_stop(c);
+	pipe_close(c->p_in);
+	pipe_close(c->p_out);
+	xe_eudebug_event_log_destroy(c->log);
+	free(c);
+}
+
+/**
+ * xe_eudebug_client_get_seqno:
+ * @c: pointer to xe_eudebug_client structure
+ *
+ * Returns the current seqno value of the given client @c and increments it
+ * afterwards (post-increment).
+ *
+ * Returns: the seqno value before incrementing
+ */
+uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c)
+{
+	return c->seqno++;
+}
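Since the post-increment semantics above tripped up the kernel-doc wording, here is the behaviour spelled out on a stripped-down stand-in (`demo_client`/`demo_get_seqno` are hypothetical names): the first call yields the initial value, and the counter advances afterwards.

```c
#include <stdint.h>

struct demo_client {
	uint64_t seqno;
};

/* Mirror of xe_eudebug_client_get_seqno(): returns the current value,
 * then advances the counter (post-increment). */
static inline uint64_t demo_get_seqno(struct demo_client *c)
{
	return c->seqno++;
}
```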
+
+/**
+ * xe_eudebug_client_start:
+ * @c: pointer to xe_eudebug_client structure
+ *
+ * Starts execution of the client's work function within the client's process.
+ */
+void xe_eudebug_client_start(struct xe_eudebug_client *c)
+{
+	token_signal(c->p_in, CLIENT_RUN, c->pid);
+}
+
+/**
+ * xe_eudebug_client_wait_done:
+ * @c: pointer to xe_eudebug_client structure
+ *
+ * Waits for the client's work to end and updates the event log.
+ * Doesn't terminate the client's process.
+ */
+void xe_eudebug_client_wait_done(struct xe_eudebug_client *c)
+{
+	if (!c->done) {
+		c->done = 1;
+		c->seqno = wait_from_client(c, CLIENT_FINI);
+		event_log_read_from_fd(c->log, c->p_out[0]);
+	}
+}
+
+/**
+ * xe_eudebug_client_signal_stage:
+ * @c: pointer to the client
+ * @stage: stage to signal
+ *
+ * Signals the debugger waiting in xe_eudebug_debugger_wait_stage(),
+ * releasing it to proceed.
+ */
+void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage)
+{
+	token_signal(c->p_out, DEBUGGER_STAGE, stage);
+}
+
+/**
+ * xe_eudebug_client_wait_stage:
+ * @c: pointer to xe_eudebug_client structure
+ * @stage: stage to wait on
+ *
+ * Pauses the client until the debugger has signalled the corresponding stage
+ * with xe_eudebug_debugger_signal_stage(). Use this only when the actual
+ * event flow is not enough to coordinate between client and debugger and an
+ * extra sync mechanism is needed.
+void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage)
+{
+	u64 stage_in;
+
+	if (c->done) {
+		igt_warn("client: %d already done before stage %lu\n", c->pid, stage);
+		return;
+	}
+
+	igt_debug("client: %d pausing for stage %lu\n", c->pid, stage);
+
+	stage_in = client_wait_token(c, CLIENT_STAGE);
+	igt_debug("client: %d got stage %lu, expected %lu\n", c->pid, stage_in, stage);
+
+	igt_assert_eq(stage_in, stage);
+}
+
+/**
+ * xe_eudebug_session_create:
+ * @fd: XE file descriptor
+ * @work: function passed to the xe_eudebug_client_create
+ * @flags: flags passed to client and debugger
+ * @test_private: test's data, allocated with MAP_SHARED | MAP_ANONYMOUS,
+ * passed to client and debugger. Can be NULL.
+ *
+ * Creates session together with client and debugger structures.
+ */
+struct xe_eudebug_session *xe_eudebug_session_create(int fd,
+						     xe_eudebug_client_work_fn work,
+						     unsigned int flags,
+						     void *test_private)
+{
+	struct xe_eudebug_session *s;
+
+	s = calloc(1, sizeof(*s));
+	igt_assert(s);
+
+	s->c = xe_eudebug_client_create(fd, work, flags, test_private);
+	s->d = xe_eudebug_debugger_create(fd, flags, test_private);
+	s->flags = flags;
+
+	return s;
+}
+
+/**
+ * xe_eudebug_session_run:
+ * @s: pointer to xe_eudebug_session structure
+ *
+ * Attaches the debugger to the client's process, starts the debugger's
+ * async event reader, starts the client and, once the client finishes,
+ * stops the debugger worker.
+ */
+void xe_eudebug_session_run(struct xe_eudebug_session *s)
+{
+	struct xe_eudebug_debugger *debugger = s->d;
+	struct xe_eudebug_client *client = s->c;
+
+	igt_assert_eq(xe_eudebug_debugger_attach(debugger, client), 0);
+
+	xe_eudebug_debugger_start_worker(debugger);
+
+	xe_eudebug_client_start(client);
+	xe_eudebug_client_wait_done(client);
+
+	xe_eudebug_debugger_stop_worker(debugger, 1);
+
+	xe_eudebug_event_log_print(debugger->log, true);
+	xe_eudebug_event_log_print(client->log, true);
+}
+
+/**
+ * xe_eudebug_session_check:
+ * @s: pointer to xe_eudebug_session structure
+ * @match_opposite: indicates whether check should match all
+ * create and destroy events.
+ * @filter: mask that represents events to be skipped during comparison, useful
+ * for events like 'VM_BIND' since they can be asymmetric
+ *
+ * Validates the debugger's log against the log created by the client.
+ */
+void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter)
+{
+	xe_eudebug_event_log_compare(s->c->log, s->d->log, filter);
+
+	if (match_opposite)
+		xe_eudebug_event_log_match_opposite(s->d->log, filter);
+}
+
+/**
+ * xe_eudebug_session_destroy:
+ * @s: pointer to xe_eudebug_session structure
+ *
+ * Destroys the session together with its debugger and client.
+ */
+void xe_eudebug_session_destroy(struct xe_eudebug_session *s)
+{
+	xe_eudebug_debugger_destroy(s->d);
+	xe_eudebug_client_destroy(s->c);
+
+	free(s);
+}
+
+#define to_base(x) ((struct drm_xe_eudebug_event *)&x)
+
+static void base_event(struct xe_eudebug_client *c,
+		       struct drm_xe_eudebug_event *e,
+		       uint32_t type,
+		       uint32_t flags,
+		       uint64_t size)
+{
+	e->type = type;
+	e->flags = flags;
+	e->seqno = xe_eudebug_client_get_seqno(c);
+	e->len = size;
+}
+
+static void client_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd)
+{
+	struct drm_xe_eudebug_event_client ec;
+
+	base_event(c, to_base(ec), DRM_XE_EUDEBUG_EVENT_OPEN, flags, sizeof(ec));
+
+	ec.client_handle = client_fd;
+
+	xe_eudebug_event_log_write(c->log, (void *)&ec);
+}
+
+static void vm_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd, uint32_t vm_id)
+{
+	struct drm_xe_eudebug_event_vm evm;
+
+	base_event(c, to_base(evm), DRM_XE_EUDEBUG_EVENT_VM, flags, sizeof(evm));
+
+	evm.client_handle = client_fd;
+	evm.vm_handle = vm_id;
+
+	xe_eudebug_event_log_write(c->log, (void *)&evm);
+}
+
+static void exec_queue_event(struct xe_eudebug_client *c, uint32_t flags,
+			     int client_fd, uint32_t vm_id,
+			     uint32_t exec_queue_handle, uint16_t class,
+			     uint16_t width)
+{
+	struct drm_xe_eudebug_event_exec_queue ee;
+
+	base_event(c, to_base(ee), DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
+		   flags, sizeof(ee));
+
+	ee.client_handle = client_fd;
+	ee.vm_handle = vm_id;
+	ee.exec_queue_handle = exec_queue_handle;
+	ee.engine_class = class;
+	ee.width = width;
+
+	xe_eudebug_event_log_write(c->log, (void *)&ee);
+}
+
+static void metadata_event(struct xe_eudebug_client *c, uint32_t flags,
+			   int client_fd, uint32_t id, uint64_t type, uint64_t len)
+{
+	struct drm_xe_eudebug_event_metadata em;
+
+	base_event(c, to_base(em), DRM_XE_EUDEBUG_EVENT_METADATA,
+		   flags, sizeof(em));
+
+	em.client_handle = client_fd;
+	em.metadata_handle = id;
+	em.type = type;
+	em.len = len;
+
+	xe_eudebug_event_log_write(c->log, (void *)&em);
+}
+
+static int enable_getset(int fd, bool *old, bool *new)
+{
+	static const char * const fname = "enable_eudebug";
+	int ret = 0;
+
+	int sysfs, device_fd;
+	bool val_before;
+	struct stat st;
+
+	igt_assert(new || old);
+
+	igt_assert_eq(fstat(fd, &st), 0);
+	sysfs = igt_sysfs_open(fd);
+	if (sysfs < 0)
+		return -1;
+
+	device_fd = openat(sysfs, "device", O_DIRECTORY | O_RDONLY);
+	close(sysfs);
+	if (device_fd < 0)
+		return -1;
+
+	if (!__igt_sysfs_get_boolean(device_fd, fname, &val_before)) {
+		ret = -1;
+		goto out;
+	}
+
+	igt_debug("enable_eudebug before: %d\n", val_before);
+
+	if (old)
+		*old = val_before;
+
+	ret = 0;
+	if (new) {
+		if (__igt_sysfs_set_boolean(device_fd, fname, *new))
+			igt_assert_eq(igt_sysfs_get_boolean(device_fd, fname), *new);
+		else
+			ret = -1;
+	}
+
+out:
+	close(device_fd);
+	return ret;
+}
+
+/**
+ * xe_eudebug_enable
+ * @fd: xe client
+ * @enable: state toggle - true to enable, false to disable
+ *
+ * Enables/disables eudebug capability by writing to
+ * '/sys/class/drm/card<N>/device/enable_eudebug' sysfs entry.
+ *
+ * Returns: previous toggle value, i.e. true when eudebugging was enabled,
+ * false when eudebugging was disabled.
+ */
+bool xe_eudebug_enable(int fd, bool enable)
+{
+	bool old = false;
+	int ret = enable_getset(fd, &old, &enable);
+
+	if (ret) {
+		igt_skip_on(enable);
+		old = false;
+	}
+
+	return old;
+}
+
+/* Eu debugger wrappers around resource creating xe ioctls. */
+
+/**
+ * xe_eudebug_client_open_driver:
+ * @c: pointer to xe_eudebug_client structure
+ *
+ * Calls drm_reopen_driver() on the client's master fd and logs the
+ * corresponding event in the client's event log.
+ *
+ * Returns: valid DRM file descriptor
+ */
+int xe_eudebug_client_open_driver(struct xe_eudebug_client *c)
+{
+	int fd;
+
+	fd = drm_reopen_driver(c->master_fd);
+	client_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd);
+
+	return fd;
+}
+
+/**
+ * xe_eudebug_client_close_driver:
+ * @c: pointer to xe_eudebug_client structure
+ * @fd: xe client
+ *
+ * Calls close driver and logs the corresponding event in
+ * client's event log.
+ */
+void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd)
+{
+	client_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd);
+	close(fd);
+}
+
+/**
+ * xe_eudebug_client_vm_create:
+ * @c: pointer to xe_eudebug_client structure
+ * @fd: xe client
+ * @flags: vm create flags
+ * @ext: pointer to the first user extension
+ *
+ * Calls xe_vm_create() and logs the corresponding event
+ * in the client's event log.
+ *
+ * Returns: valid vm handle
+ */
+uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
+				     uint32_t flags, uint64_t ext)
+{
+	uint32_t vm;
+
+	vm = xe_vm_create(fd, flags, ext);
+	vm_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, vm);
+
+	return vm;
+}
+
+/**
+ * xe_eudebug_client_vm_destroy:
+ * @c: pointer to xe_eudebug_client structure
+ * @fd: xe client
+ * @vm: vm handle
+ *
+ * Calls xe_vm_destroy() and logs the corresponding event in
+ * client's event log.
+ */
+void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm)
+{
+	xe_vm_destroy(fd, vm);
+	vm_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, vm);
+}
+
+/**
+ * xe_eudebug_client_exec_queue_create:
+ * @c: pointer to xe_eudebug_client structure
+ * @fd: xe client
+ * @create: exec_queue create drm struct
+ *
+ * Calls xe exec queue create ioctl and logs the corresponding event in
+ * client's event log.
+ *
+ * Returns: valid exec queue handle
+ */
+uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
+					     struct drm_xe_exec_queue_create *create)
+{
+	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
+
+	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE, create), 0);
+
+	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
+		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create->vm_id,
+				 create->exec_queue_id, class, create->width);
+
+	return create->exec_queue_id;
+}
+
+/**
+ * xe_eudebug_client_exec_queue_destroy:
+ * @c: pointer to xe_eudebug_client structure
+ * @fd: xe client
+ * @create: exec_queue create drm struct which was used for creation
+ *
+ * Calls xe exec_queue destroy ioctl and logs the corresponding event in
+ * client's event log.
+ */
+void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
+					  struct drm_xe_exec_queue_create *create)
+{
+	struct drm_xe_exec_queue_destroy destroy = { .exec_queue_id = create->exec_queue_id, };
+	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
+
+	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
+		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, create->vm_id,
+				 create->exec_queue_id, class, create->width);
+
+	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_DESTROY, &destroy), 0);
+}
+
+/**
+ * xe_eudebug_client_vm_bind_event:
+ * @c: pointer to xe_eudebug_client structure
+ * @event_flags: base event flags
+ * @fd: xe client
+ * @vm: vm handle
+ * @bind_flags: bind flags of the vm_bind event
+ * @num_binds: number of bind operations for the event
+ * @ref_seqno: output, the vm_bind event seqno
+ *
+ * Logs the vm bind event in the client's event log.
+ */
+void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c,
+				     uint32_t event_flags, int fd,
+				     uint32_t vm, uint32_t bind_flags,
+				     uint32_t num_binds, u64 *ref_seqno)
+{
+	struct drm_xe_eudebug_event_vm_bind evmb;
+
+	base_event(c, to_base(evmb), DRM_XE_EUDEBUG_EVENT_VM_BIND,
+		   event_flags, sizeof(evmb));
+	evmb.client_handle = fd;
+	evmb.vm_handle = vm;
+	evmb.flags = bind_flags;
+	evmb.num_binds = num_binds;
+
+	*ref_seqno = evmb.base.seqno;
+
+	xe_eudebug_event_log_write(c->log, (void *)&evmb);
+}
+
+/**
+ * xe_eudebug_client_vm_bind_op_event:
+ * @c: pointer to xe_eudebug_client structure
+ * @event_flags: base event flags
+ * @bind_ref_seqno: base vm bind reference seqno
+ * @op_ref_seqno: output, the vm_bind_op event seqno
+ * @addr: ppgtt address
+ * @size: size of the binding
+ * @num_extensions: number of vm bind op extensions
+ *
+ * Logs vm bind op event in client's event log.
+ */
+void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
+					uint64_t bind_ref_seqno, uint64_t *op_ref_seqno,
+					uint64_t addr, uint64_t range,
+					uint64_t num_extensions)
+{
+	struct drm_xe_eudebug_event_vm_bind_op op;
+
+	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP,
+		   event_flags, sizeof(op));
+	op.vm_bind_ref_seqno = bind_ref_seqno;
+	op.addr = addr;
+	op.range = range;
+	op.num_extensions = num_extensions;
+
+	*op_ref_seqno = op.base.seqno;
+
+	xe_eudebug_event_log_write(c->log, (void *)&op);
+}
+
+/**
+ * xe_eudebug_client_vm_bind_op_metadata_event:
+ * @c: pointer to xe_eudebug_client structure
+ * @event_flags: base event flags
+ * @op_ref_seqno: base vm bind op reference seqno
+ * @metadata_handle: metadata handle
+ * @metadata_cookie: metadata cookie
+ *
+ * Logs vm bind op metadata event in client's event log.
+ */
+void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
+						 uint32_t event_flags, uint64_t op_ref_seqno,
+						 uint64_t metadata_handle, uint64_t metadata_cookie)
+{
+	struct drm_xe_eudebug_event_vm_bind_op_metadata op;
+
+	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA,
+		   event_flags, sizeof(op));
+	op.vm_bind_op_ref_seqno = op_ref_seqno;
+	op.metadata_handle = metadata_handle;
+	op.metadata_cookie = metadata_cookie;
+
+	xe_eudebug_event_log_write(c->log, (void *)&op);
+}
+
+/**
+ * xe_eudebug_client_vm_bind_ufence_event:
+ * @c: pointer to xe_eudebug_client structure
+ * @event_flags: base event flags
+ * @ref_seqno: base vm bind event seqno
+ *
+ * Logs vm bind ufence event in client's event log.
+ */
+void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
+					    uint64_t ref_seqno)
+{
+	struct drm_xe_eudebug_event_vm_bind_ufence f;
+
+	base_event(c, to_base(f), DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+		   event_flags, sizeof(f));
+	f.vm_bind_ref_seqno = ref_seqno;
+
+	xe_eudebug_event_log_write(c->log, (void *)&f);
+}
+
+static bool has_user_fence(const struct drm_xe_sync *sync, uint32_t num_syncs)
+{
+	while (num_syncs--)
+		if (sync[num_syncs].type == DRM_XE_SYNC_TYPE_USER_FENCE)
+			return true;
+
+	return false;
+}
+
+#define for_each_metadata(__m, __ext)					\
+	for ((__m) = from_user_pointer(__ext);				\
+	     (__m);							\
+	     (__m) = from_user_pointer((__m)->base.next_extension))	\
+		if ((__m)->base.name == XE_VM_BIND_OP_EXTENSIONS_ATTACH_DEBUG)
+
+static int  __xe_eudebug_client_vm_bind(struct xe_eudebug_client *c,
+					int fd, uint32_t vm, uint32_t exec_queue,
+					uint32_t bo, uint64_t offset,
+					uint64_t addr, uint64_t size,
+					uint32_t op, uint32_t flags,
+					struct drm_xe_sync *sync,
+					uint32_t num_syncs,
+					uint32_t prefetch_region,
+					uint8_t pat_index, uint64_t op_ext)
+{
+	struct drm_xe_vm_bind_op_ext_attach_debug *metadata;
+	const bool ufence = has_user_fence(sync, num_syncs);
+	const uint32_t bind_flags = ufence ?
+		DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE : 0;
+	uint64_t seqno = 0, op_seqno = 0, num_metadata = 0;
+	uint32_t bind_base_flags = 0;
+	int ret;
+
+	for_each_metadata(metadata, op_ext)
+		num_metadata++;
+
+	switch (op) {
+	case DRM_XE_VM_BIND_OP_MAP:
+		bind_base_flags = DRM_XE_EUDEBUG_EVENT_CREATE;
+		break;
+	case DRM_XE_VM_BIND_OP_UNMAP:
+		bind_base_flags = DRM_XE_EUDEBUG_EVENT_DESTROY;
+		igt_assert_eq(num_metadata, 0);
+		igt_assert_eq(ufence, false);
+		break;
+	default:
+		/* XXX unmap all? */
+		igt_assert(op);
+		break;
+	}
+
+	ret = ___xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
+			    op, flags, sync, num_syncs, prefetch_region,
+			    pat_index, 0, op_ext);
+
+	if (ret)
+		return ret;
+
+	if (!bind_base_flags)
+		return -EINVAL;
+
+	xe_eudebug_client_vm_bind_event(c, DRM_XE_EUDEBUG_EVENT_STATE_CHANGE,
+					fd, vm, bind_flags, 1, &seqno);
+	xe_eudebug_client_vm_bind_op_event(c, bind_base_flags,
+					   seqno, &op_seqno, addr, size,
+					   num_metadata);
+
+	for_each_metadata(metadata, op_ext)
+		xe_eudebug_client_vm_bind_op_metadata_event(c,
+							    DRM_XE_EUDEBUG_EVENT_CREATE,
+							    op_seqno,
+							    metadata->metadata_id,
+							    metadata->cookie);
+	if (ufence)
+		xe_eudebug_client_vm_bind_ufence_event(c, DRM_XE_EUDEBUG_EVENT_CREATE |
+						       DRM_XE_EUDEBUG_EVENT_NEED_ACK,
+						       seqno);
+	return ret;
+}
+
+static void _xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd,
+				       uint32_t vm, uint32_t bo,
+				       uint64_t offset, uint64_t addr, uint64_t size,
+				       uint32_t op,
+				       uint32_t flags,
+				       struct drm_xe_sync *sync,
+				       uint32_t num_syncs,
+				       uint64_t op_ext)
+{
+	const uint32_t exec_queue_id = 0;
+	const uint32_t prefetch_region = 0;
+
+	igt_assert_eq(__xe_eudebug_client_vm_bind(c, fd, vm, exec_queue_id, bo, offset,
+						  addr, size, op, flags,
+						  sync, num_syncs, prefetch_region,
+						  DEFAULT_PAT_INDEX, op_ext),
+		      0);
+}
+
+/**
+ * xe_eudebug_client_vm_bind_flags
+ * @c: pointer to xe_eudebug_client structure
+ * @fd: xe client
+ * @vm: vm handle
+ * @bo: buffer object handle
+ * @offset: offset within buffer object
+ * @addr: ppgtt address
+ * @size: size of the binding
+ * @flags: vm_bind flags
+ * @sync: sync objects
+ * @num_syncs: number of sync objects
+ * @op_ext: BIND_OP extensions
+ *
+ * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
+ */
+void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
+				     uint32_t bo, uint64_t offset,
+				     uint64_t addr, uint64_t size, uint32_t flags,
+				     struct drm_xe_sync *sync, uint32_t num_syncs,
+				     uint64_t op_ext)
+{
+	_xe_eudebug_client_vm_bind(c, fd, vm, bo, offset, addr, size,
+				   DRM_XE_VM_BIND_OP_MAP, flags,
+				   sync, num_syncs, op_ext);
+}
+
+/**
+ * xe_eudebug_client_vm_bind
+ * @c: pointer to xe_eudebug_client structure
+ * @fd: xe client
+ * @vm: vm handle
+ * @bo: buffer object handle
+ * @offset: offset within buffer object
+ * @addr: ppgtt address
+ * @size: size of the binding
+ *
+ * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
+ */
+void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
+			       uint32_t bo, uint64_t offset,
+			       uint64_t addr, uint64_t size)
+{
+	const uint32_t flags = 0;
+	struct drm_xe_sync *sync = NULL;
+	const uint32_t num_syncs = 0;
+	const uint64_t op_ext = 0;
+
+	xe_eudebug_client_vm_bind_flags(c, fd, vm, bo, offset, addr, size,
+					flags,
+					sync, num_syncs, op_ext);
+}
+
+/**
+ * xe_eudebug_client_vm_unbind_flags
+ * @c: pointer to xe_eudebug_client structure
+ * @fd: xe client
+ * @vm: vm handle
+ * @offset: offset
+ * @addr: ppgtt address
+ * @size: size of the binding
+ * @flags: vm_bind flags
+ * @sync: sync objects
+ * @num_syncs: number of sync objects
+ *
+ * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
+ */
+void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
+				       uint32_t vm, uint64_t offset,
+				       uint64_t addr, uint64_t size, uint32_t flags,
+				       struct drm_xe_sync *sync, uint32_t num_syncs)
+{
+	_xe_eudebug_client_vm_bind(c, fd, vm, 0, offset, addr, size,
+				   DRM_XE_VM_BIND_OP_UNMAP, flags,
+				   sync, num_syncs, 0);
+}
+
+/**
+ * xe_eudebug_client_vm_unbind
+ * @c: pointer to xe_eudebug_client structure
+ * @fd: xe client
+ * @vm: vm handle
+ * @offset: offset
+ * @addr: ppgtt address
+ * @size: size of the binding
+ *
+ * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
+ */
+void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
+				 uint64_t offset, uint64_t addr, uint64_t size)
+{
+	const uint32_t flags = 0;
+	struct drm_xe_sync *sync = NULL;
+	const uint32_t num_syncs = 0;
+
+	xe_eudebug_client_vm_unbind_flags(c, fd, vm, offset, addr, size,
+					  flags, sync, num_syncs);
+}
+
+/**
+ * xe_eudebug_client_metadata_create:
+ * @c: pointer to xe_eudebug_client structure
+ * @fd: xe client
+ * @type: debug metadata type
+ * @len: size of @data
+ * @data: debug metadata payload
+ *
+ * Calls xe metadata create ioctl and logs the corresponding event in
+ * client's event log.
+ *
+ * Returns: valid debug metadata id.
+ */
+uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
+					   int type, size_t len, void *data)
+{
+	struct drm_xe_debug_metadata_create create = {
+		.type = type,
+		.user_addr = to_user_pointer(data),
+		.len = len
+	};
+
+	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_CREATE, &create), 0);
+
+	metadata_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create.metadata_id, type, len);
+
+	return create.metadata_id;
+}
+
+/**
+ * xe_eudebug_client_metadata_destroy:
+ * @c: pointer to xe_eudebug_client structure
+ * @fd: xe client
+ * @id: xe debug metadata handle
+ * @type: debug metadata type
+ * @len: size of debug metadata payload
+ *
+ * Calls xe metadata destroy ioctl and logs the corresponding event in
+ * client's event log.
+ */
+void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
+					uint32_t id, int type, size_t len)
+{
+	struct drm_xe_debug_metadata_destroy destroy = { .metadata_id = id };
+
+	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_DESTROY, &destroy), 0);
+
+	metadata_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, id, type, len);
+}
+
+void xe_eudebug_ack_ufence(int debugfd,
+			   const struct drm_xe_eudebug_event_vm_bind_ufence *f)
+{
+	struct drm_xe_eudebug_ack_event ack = { 0, };
+	char event_str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
+
+	ack.type = f->base.type;
+	ack.seqno = f->base.seqno;
+
+	xe_eudebug_event_to_str((void *)f, event_str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
+	igt_debug("delivering ack for event: %s\n", event_str);
+	igt_assert_eq(igt_ioctl(debugfd, DRM_XE_EUDEBUG_IOCTL_ACK_EVENT, &ack), 0);
+}
diff --git a/lib/xe/xe_eudebug.h b/lib/xe/xe_eudebug.h
new file mode 100644
index 000000000..444f5a7b7
--- /dev/null
+++ b/lib/xe/xe_eudebug.h
@@ -0,0 +1,206 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+#include <fcntl.h>
+#include <pthread.h>
+#include <stdint.h>
+#include <xe_drm.h>
+
+#include "igt_list.h"
+
+struct xe_eudebug_event_log {
+	uint8_t *log;
+	unsigned int head;
+	unsigned int max_size;
+	char name[80];
+	pthread_mutex_t lock;
+};
+
+struct xe_eudebug_debugger {
+	int fd;
+	uint64_t flags;
+
+	/* Used to smuggle private data */
+	void *ptr;
+
+	struct xe_eudebug_event_log *log;
+
+	uint64_t event_count;
+
+	uint64_t target_pid;
+
+	struct igt_list_head triggers;
+
+	int master_fd;
+
+	pthread_t worker_thread;
+	int worker_state;
+
+	int p_client[2];
+};
+
+struct xe_eudebug_client {
+	int pid;
+	uint64_t seqno;
+	uint64_t flags;
+
+	/* Used to smuggle private data */
+	void *ptr;
+
+	struct xe_eudebug_event_log *log;
+
+	int done;
+	int p_in[2];
+	int p_out[2];
+
+	/* Used to pickup right device (the one used in debugger) */
+	int master_fd;
+
+	int timeout_ms;
+};
+
+struct xe_eudebug_session {
+	uint64_t flags;
+	struct xe_eudebug_client *c;
+	struct xe_eudebug_debugger *d;
+};
+
+typedef void (*xe_eudebug_client_work_fn)(struct xe_eudebug_client *);
+typedef void (*xe_eudebug_trigger_fn)(struct xe_eudebug_debugger *,
+				      struct drm_xe_eudebug_event *);
+
+#define xe_eudebug_for_each_event(_e, _log) \
+	for ((_e) = (_e) ? (void *)(uint8_t *)(_e) + (_e)->len : \
+		    (void *)(_log)->log; \
+	    (uint8_t *)(_e) < (_log)->log + (_log)->head; \
+	    (_e) = (void *)(uint8_t *)(_e) + (_e)->len)
+
+#define xe_eudebug_assert(d, c)						\
+	do {								\
+		if (!(c)) {						\
+			xe_eudebug_event_log_print((d)->log, true);	\
+			igt_assert(c);					\
+		}							\
+	} while (0)
+
+#define xe_eudebug_assert_f(d, c, f...)					\
+	do {								\
+		if (!(c)) {						\
+			xe_eudebug_event_log_print((d)->log, true);	\
+			igt_assert_f(c, f);				\
+		}							\
+	} while (0)
+
+#define XE_EUDEBUG_EVENT_STRING_MAX_LEN		4096
+
+/*
+ * Default abort timeout to use across xe_eudebug lib and tests if no specific
+ * timeout value is required.
+ */
+#define XE_EUDEBUG_DEFAULT_TIMEOUT_MS		25000ULL
+
+#define XE_EUDEBUG_FILTER_EVENT_NONE		BIT(DRM_XE_EUDEBUG_EVENT_NONE)
+#define XE_EUDEBUG_FILTER_EVENT_READ		BIT(DRM_XE_EUDEBUG_EVENT_READ)
+#define XE_EUDEBUG_FILTER_EVENT_OPEN		BIT(DRM_XE_EUDEBUG_EVENT_OPEN)
+#define XE_EUDEBUG_FILTER_EVENT_VM		BIT(DRM_XE_EUDEBUG_EVENT_VM)
+#define XE_EUDEBUG_FILTER_EVENT_EXEC_QUEUE	BIT(DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE)
+#define XE_EUDEBUG_FILTER_EVENT_EU_ATTENTION	BIT(DRM_XE_EUDEBUG_EVENT_EU_ATTENTION)
+#define XE_EUDEBUG_FILTER_EVENT_VM_BIND		BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND)
+#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP	BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_OP)
+#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE  BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE)
+#define XE_EUDEBUG_FILTER_ALL			GENMASK(DRM_XE_EUDEBUG_EVENT_MAX_EVENT, 0)
+#define XE_EUDEBUG_EVENT_IS_FILTERED(_e, _f)	((1UL << _e) & _f)
+
+int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags);
+const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len);
+struct drm_xe_eudebug_event *
+xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno);
+struct xe_eudebug_event_log *
+xe_eudebug_event_log_create(const char *name, unsigned int max_size);
+void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l);
+void xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug);
+void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *c, struct xe_eudebug_event_log *d,
+				  uint32_t filter);
+void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e);
+void xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter);
+
+bool xe_eudebug_debugger_available(int fd);
+struct xe_eudebug_debugger *
+xe_eudebug_debugger_create(int xe, uint64_t flags, void *data);
+void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d);
+int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d, struct xe_eudebug_client *c);
+void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d);
+void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d, int timeout_s);
+void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d);
+void xe_eudebug_debugger_set_data(struct xe_eudebug_debugger *c, void *ptr);
+void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d, int type,
+				     xe_eudebug_trigger_fn fn);
+void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage);
+void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage);
+
+struct xe_eudebug_client *
+xe_eudebug_client_create(int xe, xe_eudebug_client_work_fn work, uint64_t flags, void *data);
+void xe_eudebug_client_destroy(struct xe_eudebug_client *c);
+void xe_eudebug_client_start(struct xe_eudebug_client *c);
+void xe_eudebug_client_stop(struct xe_eudebug_client *c);
+void xe_eudebug_client_wait_done(struct xe_eudebug_client *c);
+void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage);
+void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage);
+
+uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c);
+void xe_eudebug_client_set_data(struct xe_eudebug_client *c, void *ptr);
+
+bool xe_eudebug_enable(int fd, bool enable);
+
+int xe_eudebug_client_open_driver(struct xe_eudebug_client *c);
+void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd);
+uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
+				     uint32_t flags, uint64_t ext);
+void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm);
+uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
+					     struct drm_xe_exec_queue_create *create);
+void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
+					  struct drm_xe_exec_queue_create *create);
+void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c, uint32_t event_flags, int fd,
+				     uint32_t vm, uint32_t bind_flags,
+				     uint32_t num_ops, uint64_t *ref_seqno);
+void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
+					uint64_t ref_seqno, uint64_t *op_ref_seqno,
+					uint64_t addr, uint64_t range,
+					uint64_t num_extensions);
+void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
+						 uint32_t event_flags, uint64_t op_ref_seqno,
+						 uint64_t metadata_handle, uint64_t metadata_cookie);
+void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
+					    uint64_t ref_seqno);
+void xe_eudebug_ack_ufence(int debugfd,
+			   const struct drm_xe_eudebug_event_vm_bind_ufence *f);
+
+void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
+				     uint32_t bo, uint64_t offset,
+				     uint64_t addr, uint64_t size, uint32_t flags,
+				     struct drm_xe_sync *sync, uint32_t num_syncs,
+				     uint64_t op_ext);
+void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
+			       uint32_t bo, uint64_t offset,
+			       uint64_t addr, uint64_t size);
+void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
+				       uint32_t vm, uint64_t offset,
+				       uint64_t addr, uint64_t size, uint32_t flags,
+				       struct drm_xe_sync *sync, uint32_t num_syncs);
+void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
+				 uint64_t offset, uint64_t addr, uint64_t size);
+
+uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
+					   int type, size_t len, void *data);
+void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
+					uint32_t id, int type, size_t len);
+
+struct xe_eudebug_session *xe_eudebug_session_create(int fd,
+						     xe_eudebug_client_work_fn work,
+						     unsigned int flags,
+						     void *test_private);
+void xe_eudebug_session_destroy(struct xe_eudebug_session *s);
+void xe_eudebug_session_run(struct xe_eudebug_session *s);
+void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH i-g-t v3 05/14] tests/xe_eudebug: Test eudebug resource tracking and manipulation
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (3 preceding siblings ...)
  2024-08-09 12:38 ` [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-09 12:38 ` [PATCH i-g-t v3 06/14] lib/gpgpu_shader: Extend shader building library Christoph Manszewski
                   ` (14 subsequent siblings)
  19 siblings, 0 replies; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Mika Kuoppala, Christoph Manszewski, Karolina Stolarek,
	Jonathan Cavitt

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

For typical debugging under gdb one can identify two main use cases:
accessing and manipulating resources created by the application, and
manipulating thread execution (interrupting and setting breakpoints).

This test adds coverage for the former by checking that:
- the debugger reports the expected events for Xe resources created
by the debugged client,
- the debugger is able to read and write the vm of the debugged client.

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
Signed-off-by: Dominik Karol Piątkowski <dominik.karol.piatkowski@intel.com>
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
---
 tests/intel/xe_eudebug.c | 2671 ++++++++++++++++++++++++++++++++++++++
 tests/meson.build        |    1 +
 2 files changed, 2672 insertions(+)
 create mode 100644 tests/intel/xe_eudebug.c

diff --git a/tests/intel/xe_eudebug.c b/tests/intel/xe_eudebug.c
new file mode 100644
index 000000000..152643850
--- /dev/null
+++ b/tests/intel/xe_eudebug.c
@@ -0,0 +1,2671 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+/**
+ * TEST: Test EU Debugger functionality
+ * Category: Core
+ * Mega feature: EUdebug
+ * Sub-category: EUdebug tests
+ * Functionality: eu debugger framework
+ * Test category: functionality test
+ */
+
+#include <grp.h>
+#include <poll.h>
+#include <pthread.h>
+#include <pwd.h>
+#include <sys/ioctl.h>
+#include <sys/prctl.h>
+
+#include "igt.h"
+#include "intel_pat.h"
+#include "lib/igt_syncobj.h"
+#include "xe/xe_eudebug.h"
+#include "xe/xe_ioctl.h"
+#include "xe/xe_query.h"
+
+/**
+ * SUBTEST: sysfs-toggle
+ * Description:
+ *	Exercise the debugger enable/disable sysfs toggle logic
+ */
+static void test_sysfs_toggle(int fd)
+{
+	xe_eudebug_enable(fd, false);
+	igt_assert(!xe_eudebug_debugger_available(fd));
+
+	xe_eudebug_enable(fd, true);
+	igt_assert(xe_eudebug_debugger_available(fd));
+	xe_eudebug_enable(fd, true);
+	igt_assert(xe_eudebug_debugger_available(fd));
+
+	xe_eudebug_enable(fd, false);
+	igt_assert(!xe_eudebug_debugger_available(fd));
+	xe_eudebug_enable(fd, false);
+	igt_assert(!xe_eudebug_debugger_available(fd));
+
+	xe_eudebug_enable(fd, true);
+	igt_assert(xe_eudebug_debugger_available(fd));
+}
+
+#define STAGE_PRE_DEBUG_RESOURCES_DONE 1
+#define STAGE_DISCOVERY_DONE 2
+
+#define CREATE_VMS (1 << 0)
+#define CREATE_EXEC_QUEUES (1 << 1)
+#define VM_BIND (1 << 2)
+#define VM_BIND_VM_DESTROY (1 << 3)
+#define VM_BIND_EXTENDED (1 << 4)
+#define VM_METADATA (1 << 5)
+#define VM_BIND_METADATA (1 << 6)
+#define VM_BIND_OP_MAP_USERPTR (1 << 7)
+#define TEST_DISCOVERY (1 << 31)
+
+#define PAGE_SIZE 4096
+static struct drm_xe_vm_bind_op_ext_attach_debug *
+basic_vm_bind_metadata_ext_prepare(int fd, struct xe_eudebug_client *c,
+				   uint8_t **data, uint32_t data_size)
+{
+	struct drm_xe_vm_bind_op_ext_attach_debug *ext;
+	int i;
+
+	*data = calloc(data_size, sizeof(**data));
+	igt_assert(*data);
+
+	for (i = 0; i < data_size; i++)
+		(*data)[i] = 0xff & (i + (i > PAGE_SIZE));
+
+	ext = calloc(WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_NUM, sizeof(*ext));
+	igt_assert(ext);
+
+	for (i = 0; i < WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_NUM; i++) {
+		ext[i].base.name = XE_VM_BIND_OP_EXTENSIONS_ATTACH_DEBUG;
+		ext[i].metadata_id = xe_eudebug_client_metadata_create(c, fd, i,
+								       (i + 1) * PAGE_SIZE, *data);
+		ext[i].cookie = i;
+
+		if (i < WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_NUM - 1)
+			ext[i].base.next_extension = to_user_pointer(&ext[i+1]);
+	}
+	return ext;
+}
+
+static void basic_vm_bind_metadata_ext_del(int fd, struct xe_eudebug_client *c,
+					   struct drm_xe_vm_bind_op_ext_attach_debug *ext,
+					   uint8_t *data)
+{
+	for (int i = 0; i < WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_NUM; i++)
+		xe_eudebug_client_metadata_destroy(c, fd, ext[i].metadata_id,
+							   i, (i + 1) * PAGE_SIZE);
+	free(ext);
+	free(data);
+}
+
+static void basic_vm_bind_client(int fd, struct xe_eudebug_client *c)
+{
+	struct drm_xe_vm_bind_op_ext_attach_debug *ext = NULL;
+	uint32_t vm = xe_eudebug_client_vm_create(c, fd, 0, 0);
+	size_t bo_size = xe_get_default_alignment(fd);
+	bool test_discovery = c->flags & TEST_DISCOVERY;
+	bool test_metadata = c->flags & VM_BIND_METADATA;
+	uint32_t bo = xe_bo_create(fd, 0, bo_size,
+				   system_memory(fd), 0);
+	uint64_t addr = 0x1a0000;
+	uint8_t *data = NULL;
+
+	if (test_metadata)
+		ext = basic_vm_bind_metadata_ext_prepare(fd, c, &data, PAGE_SIZE);
+
+	xe_eudebug_client_vm_bind_flags(c, fd, vm, bo, 0, addr,
+					bo_size, 0, NULL, 0, to_user_pointer(ext));
+
+	if (test_discovery) {
+		xe_eudebug_client_signal_stage(c, STAGE_PRE_DEBUG_RESOURCES_DONE);
+		xe_eudebug_client_wait_stage(c, STAGE_DISCOVERY_DONE);
+	}
+
+	xe_eudebug_client_vm_unbind(c, fd, vm, 0, addr, bo_size);
+
+	if (test_metadata)
+		basic_vm_bind_metadata_ext_del(fd, c, ext, data);
+
+	gem_close(fd, bo);
+	xe_eudebug_client_vm_destroy(c, fd, vm);
+}
+
+static void basic_vm_bind_vm_destroy_client(int fd, struct xe_eudebug_client *c)
+{
+	uint32_t vm;
+	size_t bo_size = xe_get_default_alignment(fd);
+	bool test_discovery = c->flags & TEST_DISCOVERY;
+	uint32_t bo = xe_bo_create(fd, 0, bo_size,
+				   system_memory(fd), 0);
+	uint64_t addr = 0x1a0000;
+
+	if (test_discovery) {
+		vm = xe_vm_create(fd, 0, 0);
+
+		xe_vm_bind_async(fd, vm, 0, bo, 0, addr, bo_size, NULL, 0);
+
+		xe_vm_destroy(fd, vm);
+
+		xe_eudebug_client_signal_stage(c, STAGE_PRE_DEBUG_RESOURCES_DONE);
+		xe_eudebug_client_wait_stage(c, STAGE_DISCOVERY_DONE);
+	} else {
+		vm = xe_eudebug_client_vm_create(c, fd, 0, 0);
+		xe_eudebug_client_vm_bind(c, fd, vm, bo, 0, addr, bo_size);
+		xe_eudebug_client_vm_destroy(c, fd, vm);
+	}
+
+	gem_close(fd, bo);
+}
+
+#define BO_ADDR 0x1a0000
+#define BO_ITEMS 4096
+#define MIN_BO_SIZE (BO_ITEMS * sizeof(uint64_t))
+
+union buf_id {
+	uint32_t fd;
+	void *userptr;
+};
+
+struct bind_list {
+	int fd;
+	uint32_t vm;
+	union buf_id *bo;
+	struct drm_xe_vm_bind_op *bind_ops;
+	unsigned int n;
+};
+
+static void *bo_get_ptr(int fd, struct drm_xe_vm_bind_op *o)
+{
+	void *ptr;
+
+	if (o->op != DRM_XE_VM_BIND_OP_MAP_USERPTR)
+		ptr = xe_bo_map(fd, o->obj, o->range);
+	else
+		ptr = (void *)(uintptr_t)o->userptr;
+
+	igt_assert(ptr);
+
+	return ptr;
+}
+
+static void bo_put_ptr(int fd, struct drm_xe_vm_bind_op *o, void *ptr)
+{
+	if (o->op != DRM_XE_VM_BIND_OP_MAP_USERPTR)
+		munmap(ptr, o->range);
+}
+
+static void bo_prime(int fd, struct drm_xe_vm_bind_op *o)
+{
+	uint64_t *d;
+	uint64_t i;
+
+	d = bo_get_ptr(fd, o);
+
+	for (i = 0; i < o->range / sizeof(*d); i++)
+		d[i] = o->addr + i;
+
+	bo_put_ptr(fd, o, d);
+}
+
+static void bo_check(int fd, struct drm_xe_vm_bind_op *o)
+{
+	uint64_t *d;
+	uint64_t i;
+
+	d = bo_get_ptr(fd, o);
+
+	for (i = 0; i < o->range / sizeof(*d); i++)
+		igt_assert_eq(d[i], o->addr + i + 1);
+
+	bo_put_ptr(fd, o, d);
+}
+
+static union buf_id *vm_create_objects(int fd, uint32_t bo_placement, uint32_t vm, unsigned int size,
+				   unsigned int n)
+{
+	union buf_id *bo;
+	unsigned int i;
+
+	bo = calloc(n, sizeof(*bo));
+	igt_assert(bo);
+
+	for (i = 0; i < n; i++) {
+		if (bo_placement) {
+			bo[i].fd = xe_bo_create(fd, vm, size, bo_placement, 0);
+			igt_assert(bo[i].fd);
+		} else {
+			bo[i].userptr = aligned_alloc(PAGE_SIZE, size);
+			igt_assert(bo[i].userptr);
+		}
+	}
+
+	return bo;
+}
+
+static struct bind_list *create_bind_list(int fd, uint32_t bo_placement,
+					  uint32_t vm, unsigned int n,
+					  unsigned int target_size)
+{
+	const unsigned int size_req = target_size ?: MIN_BO_SIZE;
+	const unsigned int bo_size = max_t(bo_size, xe_get_default_alignment(fd), size_req);
+	bool is_userptr = !bo_placement;
+	struct bind_list *bl;
+	unsigned int i;
+
+	bl = malloc(sizeof(*bl));
+	bl->fd = fd;
+	bl->vm = vm;
+	bl->bo = vm_create_objects(fd, bo_placement, vm, bo_size, n);
+	bl->n = n;
+	bl->bind_ops = calloc(n, sizeof(*bl->bind_ops));
+	igt_assert(bl->bind_ops);
+
+	for (i = 0; i < n; i++) {
+		struct drm_xe_vm_bind_op *o = &bl->bind_ops[i];
+
+		if (is_userptr) {
+			o->obj = 0;
+			o->userptr = (uintptr_t)bl->bo[i].userptr;
+			o->op = DRM_XE_VM_BIND_OP_MAP_USERPTR;
+		} else {
+			o->obj = bl->bo[i].fd;
+			o->obj_offset = 0;
+			o->op = DRM_XE_VM_BIND_OP_MAP;
+		}
+
+		o->range = bo_size;
+		o->addr = BO_ADDR + 2 * i * bo_size;
+		o->flags = 0;
+		o->pat_index = intel_get_pat_idx_wb(fd);
+		o->prefetch_mem_region_instance = 0;
+		o->reserved[0] = 0;
+		o->reserved[1] = 0;
+	}
+
+	for (i = 0; i < bl->n; i++) {
+		struct drm_xe_vm_bind_op *o = &bl->bind_ops[i];
+
+		igt_debug("bo %d: addr 0x%llx, range 0x%llx\n", i, o->addr, o->range);
+		bo_prime(fd, o);
+	}
+
+	return bl;
+}
+
+static void do_bind_list(struct xe_eudebug_client *c,
+			 struct bind_list *bl, struct drm_xe_sync *sync)
+{
+	int i;
+	uint64_t ref_seqno = 0, op_ref_seqno = 0;
+
+	xe_vm_bind_array(bl->fd, bl->vm, 0, bl->bind_ops, bl->n, sync, sync ? 1 : 0);
+	xe_eudebug_client_vm_bind_event(c, DRM_XE_EUDEBUG_EVENT_STATE_CHANGE,
+					bl->fd, bl->vm, 0, bl->n, &ref_seqno);
+	for (i = 0; i < bl->n; i++)
+		xe_eudebug_client_vm_bind_op_event(c, DRM_XE_EUDEBUG_EVENT_CREATE,
+						   ref_seqno,
+						   &op_ref_seqno,
+						   bl->bind_ops[i].addr,
+						   bl->bind_ops[i].range,
+						   0);
+
+	if (sync)
+		igt_assert(syncobj_wait(bl->fd, &sync->handle, 1, INT64_MAX, 0, NULL));
+}
+
+static void free_bind_list(struct xe_eudebug_client *c, struct bind_list *bl)
+{
+	unsigned int i;
+
+	for (i = 0; i < bl->n; i++) {
+		igt_debug("%d: checking 0x%llx (%lld)\n",
+			  i, bl->bind_ops[i].addr, bl->bind_ops[i].addr);
+		bo_check(bl->fd, &bl->bind_ops[i]);
+		if (bl->bind_ops[i].op == DRM_XE_VM_BIND_OP_MAP_USERPTR)
+			free(bl->bo[i].userptr);
+		xe_eudebug_client_vm_unbind(c, bl->fd, bl->vm, 0,
+					    bl->bind_ops[i].addr,
+					    bl->bind_ops[i].range);
+	}
+
+	free(bl->bind_ops);
+	free(bl->bo);
+	free(bl);
+}
+
+static void vm_bind_client(int fd, struct xe_eudebug_client *c)
+{
+	uint64_t op_ref_seqno, ref_seqno;
+	struct bind_list *bl;
+	bool test_discovery = c->flags & TEST_DISCOVERY;
+	size_t bo_size = 3 * xe_get_default_alignment(fd);
+	uint32_t bo[2] = {
+		xe_bo_create(fd, 0, bo_size, system_memory(fd), 0),
+		xe_bo_create(fd, 0, bo_size, system_memory(fd), 0),
+	};
+	uint32_t vm = xe_eudebug_client_vm_create(c, fd, 0, 0);
+	uint64_t addr[] = {0x2a0000, 0x3a0000};
+	uint64_t rebind_bo_offset = 2 * bo_size / 3;
+	uint64_t size = bo_size / 3;
+	int i = 0;
+
+	if (test_discovery) {
+		xe_vm_bind_async(fd, vm, 0, bo[0], 0, addr[0], bo_size, NULL, 0);
+
+		xe_vm_unbind_async(fd, vm, 0, 0, addr[0] + size, size, NULL, 0);
+
+		xe_vm_bind_async(fd, vm, 0, bo[1], 0, addr[1], bo_size, NULL, 0);
+
+		xe_vm_bind_async(fd, vm, 0, bo[1], rebind_bo_offset, addr[1], size, NULL, 0);
+
+		bl = create_bind_list(fd, system_memory(fd), vm, 4, 0);
+		xe_vm_bind_array(bl->fd, bl->vm, 0, bl->bind_ops, bl->n, NULL, 0);
+
+		xe_vm_unbind_all_async(fd, vm, 0, bo[0], NULL, 0);
+
+		xe_eudebug_client_vm_bind_event(c, DRM_XE_EUDEBUG_EVENT_STATE_CHANGE,
+						bl->fd, bl->vm, 0, bl->n + 2, &ref_seqno);
+
+		xe_eudebug_client_vm_bind_op_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, ref_seqno,
+						   &op_ref_seqno, addr[1], size, 0);
+		xe_eudebug_client_vm_bind_op_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, ref_seqno,
+						   &op_ref_seqno, addr[1] + size, size * 2, 0);
+
+		for (i = 0; i < bl->n; i++)
+			xe_eudebug_client_vm_bind_op_event(c, DRM_XE_EUDEBUG_EVENT_CREATE,
+							   ref_seqno, &op_ref_seqno,
+							   bl->bind_ops[i].addr,
+							   bl->bind_ops[i].range, 0);
+
+		xe_eudebug_client_signal_stage(c, STAGE_PRE_DEBUG_RESOURCES_DONE);
+		xe_eudebug_client_wait_stage(c, STAGE_DISCOVERY_DONE);
+	} else {
+		xe_eudebug_client_vm_bind(c, fd, vm, bo[0], 0, addr[0], bo_size);
+		xe_eudebug_client_vm_unbind(c, fd, vm, 0, addr[0] + size, size);
+
+		xe_eudebug_client_vm_bind(c, fd, vm, bo[1], 0, addr[1], bo_size);
+		xe_eudebug_client_vm_bind(c, fd, vm, bo[1], rebind_bo_offset, addr[1], size);
+
+		bl = create_bind_list(fd, system_memory(fd), vm, 4, 0);
+		do_bind_list(c, bl, NULL);
+	}
+
+	xe_vm_unbind_all_async(fd, vm, 0, bo[1], NULL, 0);
+
+	xe_eudebug_client_vm_bind_event(c, DRM_XE_EUDEBUG_EVENT_STATE_CHANGE, fd, vm, 0,
+					1, &ref_seqno);
+	xe_eudebug_client_vm_bind_op_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, ref_seqno,
+					   &op_ref_seqno, 0, 0, 0);
+
+	gem_close(fd, bo[0]);
+	gem_close(fd, bo[1]);
+	xe_eudebug_client_vm_destroy(c, fd, vm);
+}
+
+static void run_basic_client(struct xe_eudebug_client *c)
+{
+	int fd, i;
+
+	fd = xe_eudebug_client_open_driver(c);
+	xe_device_get(fd);
+
+	if (c->flags & CREATE_VMS) {
+		const uint32_t flags[] = {
+			DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE,
+			DRM_XE_VM_CREATE_FLAG_LR_MODE,
+		};
+		uint32_t vms[ARRAY_SIZE(flags)];
+
+		for (i = 0; i < ARRAY_SIZE(flags); i++)
+			vms[i] = xe_eudebug_client_vm_create(c, fd, flags[i], 0);
+
+		for (i--; i >= 0; i--)
+			xe_eudebug_client_vm_destroy(c, fd, vms[i]);
+	}
+
+	if (c->flags & CREATE_EXEC_QUEUES) {
+		struct drm_xe_exec_queue_create *create;
+		struct drm_xe_engine_class_instance *hwe;
+		struct drm_xe_engine_class_instance bind_sync = {
+			.engine_class = DRM_XE_ENGINE_CLASS_VM_BIND,
+			.engine_instance = 0,
+		};
+		struct drm_xe_engine_class_instance bind_async = {
+			.engine_class = DRM_XE_ENGINE_CLASS_VM_BIND,
+			.engine_instance = 0,
+		};
+		uint32_t vm;
+
+		create = calloc((xe_number_engines(fd) + 2), sizeof(*create));
+
+		vm = xe_eudebug_client_vm_create(c, fd, 0, 0);
+
+		i = 0;
+		xe_for_each_engine(fd, hwe) {
+			create[i].instances = to_user_pointer(hwe);
+			create[i].vm_id = vm;
+			create[i].width = 1;
+			create[i].num_placements = 1;
+			xe_eudebug_client_exec_queue_create(c, fd, &create[i++]);
+		}
+
+		create[i].instances = to_user_pointer(&bind_sync);
+		create[i].vm_id = vm;
+		create[i].width = 1;
+		create[i].num_placements = 1;
+		xe_eudebug_client_exec_queue_create(c, fd, &create[i++]);
+
+		create[i].instances = to_user_pointer(&bind_async);
+		create[i].vm_id = vm;
+		create[i].width = 1;
+		create[i].num_placements = 1;
+		xe_eudebug_client_exec_queue_create(c, fd, &create[i]);
+
+		for (; i >= 0; i--)
+			xe_eudebug_client_exec_queue_destroy(c, fd, &create[i]);
+
+		xe_eudebug_client_vm_destroy(c, fd, vm);
+	}
+
+	if (c->flags & VM_BIND || c->flags & VM_BIND_METADATA)
+		basic_vm_bind_client(fd, c);
+
+	if (c->flags & VM_BIND_EXTENDED)
+		vm_bind_client(fd, c);
+
+	if (c->flags & VM_BIND_VM_DESTROY)
+		basic_vm_bind_vm_destroy_client(fd, c);
+
+	xe_device_put(fd);
+	xe_eudebug_client_close_driver(c, fd);
+}
+
+static int read_event(int debugfd, struct drm_xe_eudebug_event *event)
+{
+	int ret;
+
+	ret = igt_ioctl(debugfd, DRM_XE_EUDEBUG_IOCTL_READ_EVENT, event);
+	if (ret < 0)
+		return -errno;
+
+	return ret;
+}
+
+static int __read_event(int debugfd, struct drm_xe_eudebug_event *event)
+{
+	int ret;
+
+	ret = ioctl(debugfd, DRM_XE_EUDEBUG_IOCTL_READ_EVENT, event);
+	if (ret < 0)
+		return -errno;
+
+	return ret;
+}
+
+static int poll_event(int fd, int timeout_ms)
+{
+	int ret;
+
+	struct pollfd p = {
+		.fd = fd,
+		.events = POLLIN,
+		.revents = 0,
+	};
+
+	ret = poll(&p, 1, timeout_ms);
+	if (ret == -1)
+		return -errno;
+
+	return ret == 1 && (p.revents & POLLIN);
+}
+
+static int __debug_connect(int fd, int *debugfd, struct drm_xe_eudebug_connect *param)
+{
+	int ret = 0;
+
+	*debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, param);
+
+	if (*debugfd < 0) {
+		ret = -errno;
+		igt_assume(ret != 0);
+	}
+
+	errno = 0;
+	return ret;
+}
+
+/**
+ * SUBTEST: basic-connect
+ * Description:
+ *	Exercise XE_EUDEBUG_CONNECT ioctl with passing
+ *	valid and invalid params.
+ */
+static void test_connect(int fd)
+{
+	struct drm_xe_eudebug_connect param = {};
+	int debugfd, ret;
+	pid_t *pid;
+
+	pid = mmap(NULL, sizeof(pid_t), PROT_READ | PROT_WRITE,
+		   MAP_SHARED | MAP_ANON, -1, 0);
+
+	/* get fresh unrelated pid */
+	igt_fork(child, 1)
+		*pid = getpid();
+
+	igt_waitchildren();
+	param.pid = *pid;
+	munmap(pid, sizeof(pid_t));
+
+	ret = __debug_connect(fd, &debugfd, &param);
+	igt_assert(debugfd == -1);
+	igt_assert_eq(ret, param.pid ? -ENOENT : -EINVAL);
+
+	param.pid = 0;
+	ret = __debug_connect(fd, &debugfd, &param);
+	igt_assert(debugfd == -1);
+	igt_assert_eq(ret, -EINVAL);
+
+	param.pid = getpid();
+	param.version = -1;
+	ret = __debug_connect(fd, &debugfd, &param);
+	igt_assert(debugfd == -1);
+	igt_assert_eq(ret, -EINVAL);
+
+	param.version = 0;
+	param.flags = ~0;
+	ret = __debug_connect(fd, &debugfd, &param);
+	igt_assert(debugfd == -1);
+	igt_assert_eq(ret, -EINVAL);
+
+	param.flags = 0;
+	param.extensions = ~0;
+	ret = __debug_connect(fd, &debugfd, &param);
+	igt_assert(debugfd == -1);
+	igt_assert_eq(ret, -EINVAL);
+
+	param.extensions = 0;
+	ret = __debug_connect(fd, &debugfd, &param);
+	igt_assert_neq(debugfd, -1);
+	igt_assert_eq(ret, 0);
+
+	close(debugfd);
+}
+
+static void switch_user(__uid_t uid, __gid_t gid)
+{
+	struct group *gr;
+	__gid_t gr_v;
+
+	/* Users other than root need to belong to the video group */
+	gr = getgrnam("video");
+	igt_assert(gr);
+
+	/* Drop all */
+	igt_assert_eq(setgroups(1, &gr->gr_gid), 0);
+	igt_assert_eq(setgid(gid), 0);
+	igt_assert_eq(setuid(uid), 0);
+
+	igt_assert_eq(getgroups(1, &gr_v), 1);
+	igt_assert_eq(gr_v, gr->gr_gid);
+	igt_assert_eq(getgid(), gid);
+	igt_assert_eq(getuid(), uid);
+
+	igt_assert_eq(prctl(PR_SET_DUMPABLE, 1L), 0);
+}
+
+/**
+ * SUBTEST: connect-user
+ * Description:
+ *	Verify unprivileged XE_EUDEBUG_CONNECT ioctl.
+ *	Check:
+ *	 - user debugger to user workload connection
+ *	 - user debugger to other user workload connection
+ *	 - user debugger to privileged workload connection
+ */
+static void test_connect_user(int fd)
+{
+	struct drm_xe_eudebug_connect param = {};
+	struct passwd *pwd, *pwd2;
+	const char *user1 = "lp";
+	const char *user2 = "mail";
+	int debugfd, ret, i;
+	int p1[2], p2[2];
+	__uid_t u1, u2;
+	__gid_t g1, g2;
+	int newfd;
+	pid_t pid;
+
+#define NUM_USER_TESTS 4
+#define P_APP 0
+#define P_GDB 1
+	struct conn_user {
+		/* u[0] - process uid, u[1] - gdb uid */
+		__uid_t u[P_GDB + 1];
+		/* g[0] - process gid, g[1] - gdb gid */
+		__gid_t g[P_GDB + 1];
+		/* Expected fd from open */
+		int ret;
+		/* Skip this test case */
+		int skip;
+		const char *desc;
+	} test[NUM_USER_TESTS] = {};
+
+	igt_assert(!pipe(p1));
+	igt_assert(!pipe(p2));
+
+	pwd = getpwnam(user1);
+	igt_require(pwd);
+	u1 = pwd->pw_uid;
+	g1 = pwd->pw_gid;
+
+	/*
+	 * Keep a copy of the fields we need: getpwnam() returns a
+	 * pointer to static storage, which subsequent calls overwrite.
+	 * It returns NULL if the user cannot be found in passwd.
+	 */
+	setpwent();
+	pwd2 = getpwnam(user2);
+	if (pwd2) {
+		u2 = pwd2->pw_uid;
+		g2 = pwd2->pw_gid;
+	}
+
+	test[0].skip = !pwd;
+	test[0].u[P_GDB] = u1;
+	test[0].g[P_GDB] = g1;
+	test[0].ret = -EACCES;
+	test[0].desc = "User GDB to Root App";
+
+	test[1].skip = !pwd;
+	test[1].u[P_APP] = u1;
+	test[1].g[P_APP] = g1;
+	test[1].u[P_GDB] = u1;
+	test[1].g[P_GDB] = g1;
+	test[1].ret = 0;
+	test[1].desc = "User GDB to User App";
+
+	test[2].skip = !pwd;
+	test[2].u[P_APP] = u1;
+	test[2].g[P_APP] = g1;
+	test[2].ret = 0;
+	test[2].desc = "Root GDB to User App";
+
+	test[3].skip = !pwd2;
+	test[3].u[P_APP] = u1;
+	test[3].g[P_APP] = g1;
+	test[3].u[P_GDB] = u2;
+	test[3].g[P_GDB] = g2;
+	test[3].ret = -EACCES;
+	test[3].desc = "User GDB to Other User App";
+
+	if (!pwd2)
+		igt_warn("User %s not available in the system. Skipping subtests: %s.\n",
+			 user2, test[3].desc);
+
+	for (i = 0; i < NUM_USER_TESTS; i++) {
+		if (test[i].skip) {
+			igt_debug("Subtest %s skipped\n", test[i].desc);
+			continue;
+		}
+		igt_debug("Executing connection: %s\n", test[i].desc);
+		igt_fork(child, 2) {
+			if (!child) {
+				if (test[i].u[P_APP])
+					switch_user(test[i].u[P_APP], test[i].g[P_APP]);
+
+				pid = getpid();
+				/* Signal the PID */
+				igt_assert(write(p1[1], &pid, sizeof(pid)) == sizeof(pid));
+				/* wait with exit */
+				igt_assert(read(p2[0], &pid, sizeof(pid)) == sizeof(pid));
+			} else {
+				if (test[i].u[P_GDB])
+					switch_user(test[i].u[P_GDB], test[i].g[P_GDB]);
+
+				igt_assert(read(p1[0], &pid, sizeof(pid)) == sizeof(pid));
+				param.pid = pid;
+
+				newfd = drm_open_driver(DRIVER_XE);
+				ret = __debug_connect(newfd, &debugfd, &param);
+
+				/* Release the app first */
+				igt_assert(write(p2[1], &pid, sizeof(pid)) == sizeof(pid));
+
+				igt_assert_eq(ret, test[i].ret);
+				if (!ret)
+					close(debugfd);
+			}
+		}
+		igt_waitchildren();
+	}
+	close(p1[0]);
+	close(p1[1]);
+	close(p2[0]);
+	close(p2[1]);
+#undef NUM_USER_TESTS
+#undef P_APP
+#undef P_GDB
+}
+
+/**
+ * SUBTEST: basic-close
+ * Description:
+ *	Test whether eudebug can be reattached after closure.
+ */
+static void test_close(int fd)
+{
+	struct drm_xe_eudebug_connect param = {};
+	int debug_fd1, debug_fd2;
+	int fd2;
+
+	param.pid = getpid();
+
+	igt_assert_eq(__debug_connect(fd, &debug_fd1, &param), 0);
+	igt_assert(debug_fd1 >= 0);
+	igt_assert_eq(__debug_connect(fd, &debug_fd2, &param), -EBUSY);
+	igt_assert_eq(debug_fd2, -1);
+
+	close(debug_fd1);
+	fd2 = drm_open_driver(DRIVER_XE);
+
+	igt_assert_eq(__debug_connect(fd2, &debug_fd2, &param), 0);
+	igt_assert(debug_fd2 >= 0);
+	close(fd2);
+	close(debug_fd2);
+}
+
+/**
+ * SUBTEST: basic-read-event
+ * Description:
+ *	Synchronously exercise eu debugger event polling and reading.
+ */
+#define MAX_EVENT_SIZE (32 * 1024)
+static void test_read_event(int fd)
+{
+	struct drm_xe_eudebug_event *event;
+	struct xe_eudebug_debugger *d;
+	struct xe_eudebug_client *c;
+
+	event = malloc(MAX_EVENT_SIZE);
+	igt_assert(event);
+	memset(event, 0, sizeof(*event));
+
+	c = xe_eudebug_client_create(fd, run_basic_client, 0, NULL);
+	d = xe_eudebug_debugger_create(fd, 0, NULL);
+
+	igt_assert_eq(xe_eudebug_debugger_attach(d, c), 0);
+	igt_assert_eq(poll_event(d->fd, 500), 0);
+
+	event->len = 1;
+	event->type = DRM_XE_EUDEBUG_EVENT_NONE;
+	igt_assert_eq(read_event(d->fd, event), -EINVAL);
+
+	event->len = MAX_EVENT_SIZE;
+	event->type = DRM_XE_EUDEBUG_EVENT_NONE;
+	igt_assert_eq(read_event(d->fd, event), -EINVAL);
+
+	xe_eudebug_client_start(c);
+
+	igt_assert_eq(poll_event(d->fd, 500), 1);
+	event->type = DRM_XE_EUDEBUG_EVENT_READ;
+	igt_assert_eq(read_event(d->fd, event), 0);
+
+	igt_assert_eq(poll_event(d->fd, 500), 1);
+	event->len = MAX_EVENT_SIZE;
+	event->flags = 0;
+	event->type = DRM_XE_EUDEBUG_EVENT_READ;
+	igt_assert_eq(read_event(d->fd, event), 0);
+
+	fcntl(d->fd, F_SETFL, fcntl(d->fd, F_GETFL) | O_NONBLOCK);
+	igt_assert(fcntl(d->fd, F_GETFL) & O_NONBLOCK);
+
+	igt_assert_eq(poll_event(d->fd, 500), 0);
+	event->len = MAX_EVENT_SIZE;
+	event->flags = 0;
+	event->type = DRM_XE_EUDEBUG_EVENT_READ;
+	igt_assert_eq(__read_event(d->fd, event), -EAGAIN);
+
+	xe_eudebug_client_wait_done(c);
+	xe_eudebug_client_stop(c);
+
+	igt_assert_eq(poll_event(d->fd, 500), 0);
+	igt_assert_eq(__read_event(d->fd, event), -EAGAIN);
+
+	xe_eudebug_debugger_destroy(d);
+	xe_eudebug_client_destroy(c);
+
+	free(event);
+}
+
+/**
+ * SUBTEST: basic-client
+ * Description:
+ *	Attach the debugger to a process which opens and closes an xe drm client.
+ *
+ * SUBTEST: basic-client-th
+ * Description:
+ *	Create basic client resources (vms) in multiple threads.
+ *
+ * SUBTEST: multiple-sessions
+ * Description:
+ *	Simultaneously attach many debuggers to many processes.
+ *	Each process opens and closes an xe drm client and creates a few resources.
+ *
+ * SUBTEST: basic-%s
+ * Description:
+ *	Attach the debugger to a process which creates and destroys a few %arg[1].
+ *
+ * SUBTEST: basic-vm-bind
+ * Description:
+ *	Attach the debugger to a process that performs synchronous vm bind
+ *	and vm unbind.
+ *
+ * SUBTEST: basic-vm-bind-vm-destroy
+ * Description:
+ *	Attach the debugger to a process that performs vm bind, and destroys
+ *	the vm without unbinding. Make sure that we don't get unbind events.
+ *
+ * SUBTEST: basic-vm-bind-extended
+ * Description:
+ *	Attach the debugger to a process that performs bind, bind array, rebind,
+ *	partial unbind, unbind and unbind all operations.
+ *
+ * SUBTEST: multigpu-basic-client
+ * Description:
+ *	Attach the debugger to a process which opens and closes an xe drm client on all Xe devices.
+ *
+ * SUBTEST: multigpu-basic-client-many
+ * Description:
+ *	Simultaneously attach many debuggers to many processes on all Xe devices.
+ *	Each process opens and closes an xe drm client and creates a few resources.
+ *
+ * arg[1]:
+ *
+ * @vms: vms
+ * @exec-queues: exec queues
+ */
+
+static void test_basic_sessions(int fd, unsigned int flags, int count, bool match_opposite)
+{
+	struct xe_eudebug_session **s;
+	int i;
+
+	s = calloc(count, sizeof(*s));
+
+	igt_assert(s);
+
+	for (i = 0; i < count; i++)
+		s[i] = xe_eudebug_session_create(fd, run_basic_client, flags, NULL);
+
+	for (i = 0; i < count; i++)
+		xe_eudebug_session_run(s[i]);
+
+	for (i = 0; i < count; i++)
+		xe_eudebug_session_check(s[i], match_opposite, 0);
+
+	for (i = 0; i < count; i++)
+		xe_eudebug_session_destroy(s[i]);
+}
+
+/**
+ * SUBTEST: basic-vm-bind-discovery
+ * Description:
+ *	Attach the debugger to a process that performs vm-bind before attaching
+ *	and check if the discovery process reports it.
+ *
+ * SUBTEST: basic-vm-bind-metadata-discovery
+ * Description:
+ *	Attach the debugger to a process that performs vm-bind with metadata attached
+ *	before attaching and check if the discovery process reports it.
+ *
+ * SUBTEST: basic-vm-bind-vm-destroy-discovery
+ * Description:
+ *	Attach the debugger to a process that performs vm bind, and destroys
+ *	the vm without unbinding before attaching. Make sure that we don't get
+ *	any bind/unbind and vm create/destroy events.
+ *
+ * SUBTEST: basic-vm-bind-extended-discovery
+ * Description:
+ *	Attach the debugger to a process that performs bind, bind array, rebind,
+ *	partial unbind, and unbind all operations before attaching. Ensure that
+ *	we get only a single 'VM_BIND' event from the discovery worker.
+ */
+static void test_basic_discovery(int fd, unsigned int flags, bool match_opposite)
+{
+	struct xe_eudebug_debugger *d;
+	struct xe_eudebug_session *s;
+	struct xe_eudebug_client *c;
+
+	s = xe_eudebug_session_create(fd, run_basic_client, flags | TEST_DISCOVERY, NULL);
+
+	c = s->c;
+	d = s->d;
+
+	xe_eudebug_client_start(c);
+	xe_eudebug_debugger_wait_stage(s, STAGE_PRE_DEBUG_RESOURCES_DONE);
+
+	igt_assert_eq(xe_eudebug_debugger_attach(d, c), 0);
+	xe_eudebug_debugger_start_worker(d);
+
+	/* give the worker time to do its job */
+	sleep(2);
+	xe_eudebug_debugger_signal_stage(d, STAGE_DISCOVERY_DONE);
+
+	xe_eudebug_client_wait_done(c);
+
+	xe_eudebug_debugger_stop_worker(d, 1);
+
+	xe_eudebug_event_log_print(d->log, true);
+	xe_eudebug_event_log_print(c->log, true);
+
+	xe_eudebug_session_check(s, match_opposite, 0);
+	xe_eudebug_session_destroy(s);
+}
+
+#define RESOURCE_COUNT 16
+#define PRIMARY_THREAD			(1 << 0)
+#define DISCOVERY_CLOSE_CLIENT		(1 << 1)
+#define DISCOVERY_DESTROY_RESOURCES	(1 << 2)
+#define DISCOVERY_VM_BIND		(1 << 3)
+static void run_discovery_client(struct xe_eudebug_client *c)
+{
+	struct drm_xe_engine_class_instance *hwe = NULL;
+	int fd[RESOURCE_COUNT], i;
+	bool skip_sleep = c->flags & (DISCOVERY_DESTROY_RESOURCES | DISCOVERY_CLOSE_CLIENT);
+	uint64_t addr = 0x1a0000;
+
+	srand(getpid());
+
+	for (i = 0; i < RESOURCE_COUNT; i++) {
+		fd[i] = xe_eudebug_client_open_driver(c);
+
+		if (!i) {
+			bool found = false;
+
+			xe_device_get(fd[0]);
+			xe_for_each_engine(fd[0], hwe) {
+				if (hwe->engine_class == DRM_XE_ENGINE_CLASS_COMPUTE ||
+				    hwe->engine_class == DRM_XE_ENGINE_CLASS_RENDER) {
+					found = true;
+					break;
+				}
+			}
+			igt_assert(found);
+		}
+
+		/*
+		 * Give the debugger a break in the event stream after every
+		 * other client, so it can read the discovery events and
+		 * detach undisturbed.
+		 */
+		if (random() % 2 == 0 && !skip_sleep)
+			sleep(1);
+
+		for (int j = 0; j < RESOURCE_COUNT; j++) {
+			uint32_t vm = xe_eudebug_client_vm_create(c, fd[i], 0, 0);
+			struct drm_xe_exec_queue_create create = {
+				.width = 1,
+				.num_placements = 1,
+				.vm_id = vm,
+				.instances = to_user_pointer(hwe)
+			};
+			const unsigned int bo_size = max_t(bo_size,
+							   xe_get_default_alignment(fd[i]),
+							   MIN_BO_SIZE);
+			uint32_t bo = xe_bo_create(fd[i], 0, bo_size, system_memory(fd[i]), 0);
+
+			xe_eudebug_client_exec_queue_create(c, fd[i], &create);
+
+			if (c->flags & DISCOVERY_VM_BIND) {
+				xe_eudebug_client_vm_bind(c, fd[i], vm, bo, 0, addr, bo_size);
+				addr += 0x100000;
+			}
+
+			if (c->flags & DISCOVERY_DESTROY_RESOURCES) {
+				xe_eudebug_client_exec_queue_destroy(c, fd[i], &create);
+				xe_eudebug_client_vm_destroy(c, fd[i], create.vm_id);
+				gem_close(fd[i], bo);
+			}
+		}
+
+		if (c->flags & DISCOVERY_CLOSE_CLIENT)
+			xe_eudebug_client_close_driver(c, fd[i]);
+	}
+	xe_device_put(fd[0]);
+}
+
+/**
+ * SUBTEST: discovery-%s
+ * Description: Race discovery against %arg[1] and the debugger detach.
+ *
+ * arg[1]:
+ *
+ * @race:		resources creation
+ * @race-vmbind:	vm-bind operations
+ * @empty:		resources destruction
+ * @empty-clients:	client closure
+ */
+static void *discovery_race_thread(void *data)
+{
+	struct {
+		uint64_t client_handle;
+		int vm_count;
+		int exec_queue_count;
+		int vm_bind_op_count;
+	} clients[RESOURCE_COUNT];
+	struct xe_eudebug_session *s = data;
+	int expected = RESOURCE_COUNT * (1 + 2 * RESOURCE_COUNT);
+	const int tries = 100;
+	bool done = false;
+	int ret = 0;
+
+	for (int try = 0; try < tries && !done; try++) {
+
+		ret = xe_eudebug_debugger_attach(s->d, s->c);
+
+		if (ret == -EBUSY) {
+			usleep(100000);
+			continue;
+		}
+
+		igt_assert_eq(ret, 0);
+
+		if (random() % 2) {
+			struct drm_xe_eudebug_event *e = NULL;
+			int i = -1;
+
+			xe_eudebug_debugger_start_worker(s->d);
+			sleep(1);
+			xe_eudebug_debugger_stop_worker(s->d, 1);
+			igt_debug("Resources discovered: %lu\n", s->d->event_count);
+
+			xe_eudebug_for_each_event(e, s->d->log) {
+				if (e->type == DRM_XE_EUDEBUG_EVENT_OPEN) {
+					struct drm_xe_eudebug_event_client *eo = (void *)e;
+
+					if (i >= 0) {
+						igt_assert_eq(clients[i].vm_count,
+							      RESOURCE_COUNT);
+
+						igt_assert_eq(clients[i].exec_queue_count,
+							      RESOURCE_COUNT);
+
+						if (s->c->flags & DISCOVERY_VM_BIND)
+							igt_assert_eq(clients[i].vm_bind_op_count,
+								      RESOURCE_COUNT);
+					}
+
+					igt_assert(++i < RESOURCE_COUNT);
+					clients[i].client_handle = eo->client_handle;
+					clients[i].vm_count = 0;
+					clients[i].exec_queue_count = 0;
+					clients[i].vm_bind_op_count = 0;
+				}
+
+				if (e->type == DRM_XE_EUDEBUG_EVENT_VM)
+					clients[i].vm_count++;
+
+				if (e->type == DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE)
+					clients[i].exec_queue_count++;
+
+				if (e->type == DRM_XE_EUDEBUG_EVENT_VM_BIND_OP)
+					clients[i].vm_bind_op_count++;
+			}
+
+			igt_assert_lte(0, i);
+
+			for (int j = 0; j < i; j++)
+				for (int k = 0; k < i; k++) {
+					if (k == j)
+						continue;
+
+					igt_assert_neq(clients[j].client_handle,
+						       clients[k].client_handle);
+				}
+
+			if (s->d->event_count >= expected)
+				done = true;
+		}
+
+		xe_eudebug_debugger_dettach(s->d);
+		s->d->log->head = 0;
+		s->d->event_count = 0;
+	}
+
+	/* Primary thread must read everything */
+	if (s->flags & PRIMARY_THREAD) {
+		while ((ret = xe_eudebug_debugger_attach(s->d, s->c)) == -EBUSY)
+			usleep(100000);
+
+		igt_assert_eq(ret, 0);
+
+		xe_eudebug_debugger_start_worker(s->d);
+		xe_eudebug_client_wait_done(s->c);
+
+		if (READ_ONCE(s->d->event_count) != expected)
+			sleep(5);
+
+		xe_eudebug_debugger_stop_worker(s->d, 1);
+		xe_eudebug_debugger_dettach(s->d);
+	}
+
+	return NULL;
+}
+
+static void test_race_discovery(int fd, unsigned int flags, int clients)
+{
+	const int debuggers_per_client = 3;
+	int count = clients * debuggers_per_client;
+	struct xe_eudebug_session *sessions, *s;
+	struct xe_eudebug_client *c;
+	pthread_t *threads;
+	int i, j;
+
+	sessions = calloc(count, sizeof(*sessions));
+	threads = calloc(count, sizeof(*threads));
+
+	for (i = 0; i < clients; i++) {
+		c = xe_eudebug_client_create(fd, run_discovery_client, flags, NULL);
+		for (j = 0; j < debuggers_per_client; j++) {
+			s = &sessions[i * debuggers_per_client + j];
+			s->c = c;
+			s->d = xe_eudebug_debugger_create(fd, flags, NULL);
+			s->flags = flags | (!j ? PRIMARY_THREAD : 0);
+		}
+	}
+
+	for (i = 0; i < count; i++) {
+		if (sessions[i].flags & PRIMARY_THREAD)
+			xe_eudebug_client_start(sessions[i].c);
+
+		pthread_create(&threads[i], NULL, discovery_race_thread, &sessions[i]);
+	}
+
+	for (i = 0; i < count; i++)
+		pthread_join(threads[i], NULL);
+
+	for (i = count - 1; i > 0; i--) {
+		if (sessions[i].flags & PRIMARY_THREAD) {
+			igt_assert_eq(sessions[i].c->seqno - 1, sessions[i].d->event_count);
+
+			xe_eudebug_event_log_compare(sessions[0].d->log,
+						     sessions[i].d->log,
+						     XE_EUDEBUG_FILTER_EVENT_VM_BIND);
+
+			xe_eudebug_client_destroy(sessions[i].c);
+		}
+		xe_eudebug_debugger_destroy(sessions[i].d);
+	}
+
+	/* Index 0 holds the reference log, clean it up last */
+	igt_assert_eq(sessions[0].c->seqno - 1, sessions[0].d->event_count);
+	xe_eudebug_client_destroy(sessions[0].c);
+	xe_eudebug_debugger_destroy(sessions[0].d);
+
+	free(threads);
+	free(sessions);
+}
+
+static void *attach_dettach_thread(void *data)
+{
+	struct xe_eudebug_session *s = data;
+	const int tries = 100;
+	int ret = 0;
+
+	for (int try = 0; try < tries; try++) {
+		ret = xe_eudebug_debugger_attach(s->d, s->c);
+
+		if (ret == -EBUSY) {
+			usleep(100000);
+			continue;
+		}
+
+		igt_assert_eq(ret, 0);
+
+		if (random() % 2 == 0) {
+			xe_eudebug_debugger_start_worker(s->d);
+			xe_eudebug_debugger_stop_worker(s->d, 1);
+		}
+
+		xe_eudebug_debugger_dettach(s->d);
+		s->d->log->head = 0;
+		s->d->event_count = 0;
+	}
+
+	return NULL;
+}
+
+static void test_empty_discovery(int fd, unsigned int flags, int clients)
+{
+	struct xe_eudebug_session **s;
+	pthread_t *threads;
+	int i, expected = flags & DISCOVERY_CLOSE_CLIENT ? 0 : RESOURCE_COUNT;
+
+	igt_assert(flags & (DISCOVERY_DESTROY_RESOURCES | DISCOVERY_CLOSE_CLIENT));
+
+	s = calloc(clients, sizeof(struct xe_eudebug_session *));
+	threads = calloc(clients, sizeof(*threads));
+	igt_assert(s);
+	igt_assert(threads);
+
+	for (i = 0; i < clients; i++)
+		s[i] = xe_eudebug_session_create(fd, run_discovery_client, flags, NULL);
+
+	for (i = 0; i < clients; i++) {
+		xe_eudebug_client_start(s[i]->c);
+
+		pthread_create(&threads[i], NULL, attach_dettach_thread, s[i]);
+	}
+
+	for (i = 0; i < clients; i++)
+		pthread_join(threads[i], NULL);
+
+	for (i = 0; i < clients; i++) {
+		xe_eudebug_client_wait_done(s[i]->c);
+		igt_assert_eq(xe_eudebug_debugger_attach(s[i]->d, s[i]->c), 0);
+
+		xe_eudebug_debugger_start_worker(s[i]->d);
+		xe_eudebug_debugger_stop_worker(s[i]->d, 5);
+		xe_eudebug_debugger_dettach(s[i]->d);
+
+		igt_assert_eq(s[i]->d->event_count, expected);
+
+		xe_eudebug_session_destroy(s[i]);
+	}
+
+	free(threads);
+	free(s);
+}
+
+typedef void (*client_run_t)(struct xe_eudebug_client *);
+
+static void test_client_with_trigger(int fd, unsigned int flags, int count,
+				     client_run_t client_fn, int type,
+				     xe_eudebug_trigger_fn trigger_fn,
+				     struct drm_xe_engine_class_instance *hwe,
+				     bool match_opposite, uint32_t event_filter)
+{
+	struct xe_eudebug_session **s;
+	int i;
+
+	s = calloc(count, sizeof(*s));
+
+	igt_assert(s);
+
+	for (i = 0; i < count; i++)
+		s[i] = xe_eudebug_session_create(fd, client_fn, flags, hwe);
+
+	if (trigger_fn)
+		for (i = 0; i < count; i++)
+			xe_eudebug_debugger_add_trigger(s[i]->d, type, trigger_fn);
+
+	for (i = 0; i < count; i++)
+		xe_eudebug_session_run(s[i]);
+
+	for (i = 0; i < count; i++)
+		xe_eudebug_session_check(s[i], match_opposite, event_filter);
+
+	for (i = 0; i < count; i++)
+		xe_eudebug_session_destroy(s[i]);
+
+	free(s);
+}
+
+struct thread_fn_args {
+	struct xe_eudebug_client *client;
+	int fd;
+};
+
+static void *basic_client_th(void *data)
+{
+	struct thread_fn_args *f = data;
+	struct xe_eudebug_client *c = f->client;
+	uint32_t *vms;
+	int fd, i, num_vms;
+
+	fd = f->fd;
+	igt_assert_lte(0, fd);
+
+	xe_device_get(fd);
+
+	num_vms = 2 + random() % 16;
+	vms = calloc(num_vms, sizeof(*vms));
+	igt_assert(vms);
+	igt_debug("Create %d client vms\n", num_vms);
+
+	for (i = 0; i < num_vms; i++)
+		vms[i] = xe_eudebug_client_vm_create(c, fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0);
+
+	for (i = 0; i < num_vms; i++)
+		xe_eudebug_client_vm_destroy(c, fd, vms[i]);
+
+	xe_device_put(fd);
+	free(vms);
+
+	return NULL;
+}
+
+static void run_basic_client_th(struct xe_eudebug_client *c)
+{
+	struct thread_fn_args *args;
+	int i, num_threads, fd;
+	pthread_t *threads;
+
+	args = calloc(1, sizeof(*args));
+	igt_assert(args);
+
+	num_threads = 2 + random() % 16;
+	igt_debug("Run on %d threads\n", num_threads);
+	threads = calloc(num_threads, sizeof(*threads));
+	igt_assert(threads);
+
+	fd = xe_eudebug_client_open_driver(c);
+	args->client = c;
+	args->fd = fd;
+
+	for (i = 0; i < num_threads; i++)
+		pthread_create(&threads[i], NULL, basic_client_th, args);
+
+	for (i = 0; i < num_threads; i++)
+		pthread_join(threads[i], NULL);
+
+	xe_eudebug_client_close_driver(c, fd);
+	free(args);
+	free(threads);
+}
+
+static void test_basic_sessions_th(int fd, unsigned int flags, int num_clients, bool match_opposite)
+{
+	test_client_with_trigger(fd, flags, num_clients, run_basic_client_th,
+				 0, NULL, NULL, match_opposite, 0);
+}
+
+static void vm_access_client(struct xe_eudebug_client *c)
+{
+	struct drm_xe_sync sync = {
+		.type = DRM_XE_SYNC_TYPE_SYNCOBJ, .flags = DRM_XE_SYNC_FLAG_SIGNAL, };
+	struct drm_xe_engine_class_instance *hwe = c->ptr;
+	uint32_t bo_placement;
+	struct bind_list *bl;
+	uint32_t vm;
+	int fd, i, j;
+
+	igt_debug("Using %s\n", xe_engine_class_string(hwe->engine_class));
+
+	fd = xe_eudebug_client_open_driver(c);
+	xe_device_get(fd);
+
+	vm = xe_eudebug_client_vm_create(c, fd, 0, 0);
+
+	if (c->flags & VM_BIND_OP_MAP_USERPTR)
+		bo_placement = 0;
+	else
+		bo_placement = vram_if_possible(fd, hwe->gt_id);
+
+	for (j = 0; j < 5; j++) {
+		unsigned int target_size = MIN_BO_SIZE * (1 << j);
+
+		bl = create_bind_list(fd, bo_placement, vm, 4, target_size);
+		sync.handle = syncobj_create(bl->fd, 0);
+		do_bind_list(c, bl, &sync);
+		syncobj_destroy(bl->fd, sync.handle);
+
+		for (i = 0; i < bl->n; i++)
+			xe_eudebug_client_wait_stage(c, bl->bind_ops[i].addr);
+
+		free_bind_list(c, bl);
+	}
+	xe_eudebug_client_vm_destroy(c, fd, vm);
+
+	xe_device_put(fd);
+	xe_eudebug_client_close_driver(c, fd);
+}
+
+static void debugger_test_vma(struct xe_eudebug_debugger *d,
+			      uint64_t client_handle,
+			      uint64_t vm_handle,
+			      uint64_t va_start,
+			      uint64_t va_length)
+{
+	struct drm_xe_eudebug_vm_open vo = { 0, };
+	uint64_t *v1, *v2;
+	uint64_t items = va_length / sizeof(uint64_t);
+	int fd;
+	int r, i;
+
+	v1 = malloc(va_length);
+	igt_assert(v1);
+	v2 = malloc(va_length);
+	igt_assert(v2);
+
+	vo.client_handle = client_handle;
+	vo.vm_handle = vm_handle;
+
+	fd = igt_ioctl(d->fd, DRM_XE_EUDEBUG_IOCTL_VM_OPEN, &vo);
+	igt_assert_lte(0, fd);
+
+	r = pread(fd, v1, va_length, va_start);
+	igt_assert_eq(r, va_length);
+
+	for (i = 0; i < items; i++)
+		igt_assert_eq(v1[i], va_start + i);
+
+	for (i = 0; i < items; i++)
+		v1[i] = va_start + i + 1;
+
+	r = pwrite(fd, v1, va_length, va_start);
+	igt_assert_eq(r, va_length);
+
+	lseek(fd, va_start, SEEK_SET);
+	r = read(fd, v2, va_length);
+	igt_assert_eq(r, va_length);
+
+	for (i = 0; i < items; i++)
+		igt_assert_eq(v1[i], v2[i]);
+
+	fsync(fd);
+
+	close(fd);
+	free(v1);
+	free(v2);
+}
+
+static void vm_trigger(struct xe_eudebug_debugger *d,
+		       struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_vm_bind_op *eo = (void *)e;
+
+	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
+		struct drm_xe_eudebug_event_vm_bind *eb;
+
+		igt_debug("vm bind op event received with ref %lld, addr 0x%llx, range 0x%llx\n",
+			  eo->vm_bind_ref_seqno,
+			  eo->addr,
+			  eo->range);
+
+		eb = (struct drm_xe_eudebug_event_vm_bind *)
+			xe_eudebug_event_log_find_seqno(d->log, eo->vm_bind_ref_seqno);
+		igt_assert(eb);
+
+		debugger_test_vma(d, eb->client_handle, eb->vm_handle,
+				  eo->addr, eo->range);
+		xe_eudebug_debugger_signal_stage(d, eo->addr);
+	}
+}
+
+/**
+ * SUBTEST: basic-vm-access
+ * Description:
+ *      Exercise XE_EUDEBUG_VM_OPEN with pread and pwrite into the
+ *      vm fd, concerning many different offsets inside the vm,
+ *      and many virtual addresses of the vm_bound object.
+ *
+ * SUBTEST: basic-vm-access-userptr
+ * Description:
+ *      Exercise XE_EUDEBUG_VM_OPEN with pread and pwrite into the
+ *      vm fd, concerning many different offsets inside the vm,
+ *      and many virtual addresses of the vm_bound object, but backed
+ *      by userptr.
+ */
+static void test_vm_access(int fd, unsigned int flags, int num_clients)
+{
+	struct drm_xe_engine_class_instance *hwe;
+
+	xe_for_each_engine(fd, hwe)
+		test_client_with_trigger(fd, flags, num_clients,
+					 vm_access_client,
+					 DRM_XE_EUDEBUG_EVENT_VM_BIND_OP,
+					 vm_trigger, hwe,
+					 false, XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP);
+}
+
+static void debugger_test_vma_parameters(struct xe_eudebug_debugger *d,
+					 uint64_t client_handle,
+					 uint64_t vm_handle,
+					 uint64_t va_start,
+					 uint64_t va_length)
+{
+	struct drm_xe_eudebug_vm_open vo = { 0, };
+	uint64_t *v;
+	uint64_t items = va_length / sizeof(uint64_t);
+	int fd;
+	int r, i;
+
+	v = malloc(va_length);
+	igt_assert(v);
+
+	/* Negative VM open - bad client handle */
+	vo.client_handle = client_handle + 123;
+	vo.vm_handle = vm_handle;
+	fd = igt_ioctl(d->fd, DRM_XE_EUDEBUG_IOCTL_VM_OPEN, &vo);
+	igt_assert(fd < 0);
+
+	/* Negative VM open - bad vm handle */
+	vo.client_handle = client_handle;
+	vo.vm_handle = vm_handle + 123;
+	fd = igt_ioctl(d->fd, DRM_XE_EUDEBUG_IOCTL_VM_OPEN, &vo);
+	igt_assert(fd < 0);
+
+	/* Positive VM open */
+	vo.client_handle = client_handle;
+	vo.vm_handle = vm_handle;
+	fd = igt_ioctl(d->fd, DRM_XE_EUDEBUG_IOCTL_VM_OPEN, &vo);
+	igt_assert_lte(0, fd);
+
+	/* Negative pread - bad fd */
+	r = pread(fd + 123, v, va_length, va_start);
+	igt_assert(r < 0);
+
+	/* Negative pread - offset zero, outside the mapping */
+	r = pread(fd, v, va_length, 0);
+	igt_assert(r < 0);
+
+	/* Negative pread - offset just below the mapping */
+	r = pread(fd, v, va_length, va_start - 1);
+	igt_assert(r < 0);
+
+	/* Positive pread - zero va_length */
+	r = pread(fd, v, 0, va_start);
+	igt_assert_eq(r, 0);
+
+	/* Pread crossing the end of the mapping is truncated to va_length */
+	r = pread(fd, v, va_length + 1, va_start);
+	igt_assert_eq(r, va_length);
+
+	/* Negative pread - offset past the end of the mapping */
+	r = pread(fd, v, 1, va_start + va_length);
+	igt_assert(r < 0);
+
+	/* Positive pread - whole range */
+	r = pread(fd, v, va_length, va_start);
+	igt_assert_eq(r, va_length);
+
+	/* Positive pread */
+	r = pread(fd, v, 1, va_start + va_length - 1);
+	igt_assert_eq(r, 1);
+
+	for (i = 0; i < items; i++)
+		igt_assert_eq(v[i], va_start + i);
+
+	for (i = 0; i < items; i++)
+		v[i] = va_start + i + 1;
+
+	/* Negative pwrite - bad fd */
+	r = pwrite(fd + 123, v, va_length, va_start);
+	igt_assert(r < 0);
+
+	/* Negative pwrite - offset -1, outside the mapping */
+	r = pwrite(fd, v, va_length, -1);
+	igt_assert(r < 0);
+
+	/* Negative pwrite - offset zero, outside the mapping */
+	r = pwrite(fd, v, va_length, 0);
+	igt_assert(r < 0);
+
+	/* Pwrite crossing the end of the mapping is truncated to va_length */
+	r = pwrite(fd, v, va_length + 1, va_start);
+	igt_assert_eq(r, va_length);
+
+	/* Positive pwrite - zero va_length */
+	r = pwrite(fd, v, 0, va_start);
+	igt_assert_eq(r, 0);
+
+	/* Positive pwrite */
+	r = pwrite(fd, v, va_length, va_start);
+	igt_assert_eq(r, va_length);
+	fsync(fd);
+
+	close(fd);
+	free(v);
+}
+
+static void vm_trigger_access_parameters(struct xe_eudebug_debugger *d,
+					 struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_vm_bind_op *eo = (void *)e;
+
+	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
+		struct drm_xe_eudebug_event_vm_bind *eb;
+
+		igt_debug("vm bind op event received with ref %lld, addr 0x%llx, range 0x%llx\n",
+			  eo->vm_bind_ref_seqno,
+			  eo->addr,
+			  eo->range);
+
+		eb = (struct drm_xe_eudebug_event_vm_bind *)
+			xe_eudebug_event_log_find_seqno(d->log, eo->vm_bind_ref_seqno);
+		igt_assert(eb);
+
+		debugger_test_vma_parameters(d, eb->client_handle, eb->vm_handle,
+					     eo->addr, eo->range);
+		xe_eudebug_debugger_signal_stage(d, eo->addr);
+	}
+}
+
+/**
+ * SUBTEST: basic-vm-access-parameters
+ * Description:
+ *      Check negative scenarios of VM_OPEN ioctl and pread/pwrite usage.
+ */
+static void test_vm_access_parameters(int fd, unsigned int flags, int num_clients)
+{
+	struct drm_xe_engine_class_instance *hwe;
+
+	xe_for_each_engine(fd, hwe)
+		test_client_with_trigger(fd, flags, num_clients,
+					 vm_access_client,
+					 DRM_XE_EUDEBUG_EVENT_VM_BIND_OP,
+					 vm_trigger_access_parameters, hwe,
+					 false, XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP);
+}
+
+#define PAGE_SIZE 4096
+#define MDATA_SIZE (WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_NUM * PAGE_SIZE)
+static void metadata_access_client(struct xe_eudebug_client *c)
+{
+	const uint64_t addr = 0x1a0000;
+	struct drm_xe_vm_bind_op_ext_attach_debug *ext;
+	uint8_t *data;
+	size_t bo_size;
+	uint32_t bo, vm;
+	int fd, i;
+
+	fd = xe_eudebug_client_open_driver(c);
+	xe_device_get(fd);
+
+	bo_size = xe_get_default_alignment(fd);
+	vm = xe_eudebug_client_vm_create(c, fd, 0, 0);
+	bo = xe_bo_create(fd, vm, bo_size, system_memory(fd), 0);
+
+	ext = basic_vm_bind_metadata_ext_prepare(fd, c, &data, MDATA_SIZE);
+
+	xe_eudebug_client_vm_bind_flags(c, fd, vm, bo, 0, addr,
+					bo_size, 0, NULL, 0, to_user_pointer(ext));
+
+	for (i = 0; i < WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_NUM; i++)
+		xe_eudebug_client_wait_stage(c, i);
+
+	xe_eudebug_client_vm_unbind(c, fd, vm, 0, addr, bo_size);
+
+	basic_vm_bind_metadata_ext_del(fd, c, ext, data);
+
+	gem_close(fd, bo);
+	xe_eudebug_client_vm_destroy(c, fd, vm);
+
+	xe_device_put(fd);
+	xe_eudebug_client_close_driver(c, fd);
+}
+
+static void debugger_test_metadata(struct xe_eudebug_debugger *d,
+				   uint64_t client_handle,
+				   uint64_t metadata_handle,
+				   uint64_t type,
+				   uint64_t len)
+{
+	struct drm_xe_eudebug_read_metadata rm = {
+		.client_handle = client_handle,
+		.metadata_handle = metadata_handle,
+		.size = len,
+	};
+	uint8_t *data;
+	int i;
+
+	data = malloc(len);
+	igt_assert(data);
+
+	rm.ptr = to_user_pointer(data);
+
+	igt_assert_eq(igt_ioctl(d->fd, DRM_XE_EUDEBUG_IOCTL_READ_METADATA, &rm), 0);
+
+	/* Synthetic check: the test sets a different size per metadata type */
+	igt_assert_eq((type + 1) * PAGE_SIZE, rm.size);
+
+	for (i = 0; i < rm.size; i++)
+		igt_assert_eq(data[i], 0xff & (i + (i > PAGE_SIZE)));
+
+	free(data);
+}
+
+static void metadata_read_trigger(struct xe_eudebug_debugger *d,
+				  struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_metadata *em = (void *)e;
+
+	/* Synthetic check: the test sets a different size per metadata type */
+	igt_assert_eq((em->type + 1) * PAGE_SIZE, em->len);
+
+	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
+		debugger_test_metadata(d, em->client_handle, em->metadata_handle,
+				       em->type, em->len);
+		xe_eudebug_debugger_signal_stage(d, em->type);
+	}
+}
+
+static void metadata_read_on_vm_bind_trigger(struct xe_eudebug_debugger *d,
+					     struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_vm_bind_op_metadata *em = (void *)e;
+	struct drm_xe_eudebug_event_vm_bind_op *eo;
+	struct drm_xe_eudebug_event_vm_bind *eb;
+
+	/* For testing purposes the client sets metadata_cookie = type */
+
+	/*
+	 * Metadata event has a reference to vm-bind-op event which has a reference
+	 * to vm-bind event which contains proper client-handle.
+	 */
+	eo = (struct drm_xe_eudebug_event_vm_bind_op *)
+		xe_eudebug_event_log_find_seqno(d->log, em->vm_bind_op_ref_seqno);
+	igt_assert(eo);
+	eb = (struct drm_xe_eudebug_event_vm_bind *)
+		xe_eudebug_event_log_find_seqno(d->log, eo->vm_bind_ref_seqno);
+	igt_assert(eb);
+
+	debugger_test_metadata(d,
+			       eb->client_handle,
+			       em->metadata_handle,
+			       em->metadata_cookie,
+			       MDATA_SIZE); /* max size */
+
+	xe_eudebug_debugger_signal_stage(d, em->metadata_cookie);
+}
+
+/**
+ * SUBTEST: read-metadata
+ * Description:
+ *      Exercise DRM_XE_EUDEBUG_IOCTL_READ_METADATA and debug metadata create|destroy events.
+ */
+static void test_metadata_read(int fd, unsigned int flags, int num_clients)
+{
+	test_client_with_trigger(fd, flags, num_clients, metadata_access_client,
+				 DRM_XE_EUDEBUG_EVENT_METADATA, metadata_read_trigger,
+				 NULL, true, 0);
+}
+
+/**
+ * SUBTEST: attach-debug-metadata
+ * Description:
+ *      Read debug metadata when vm_bind has it attached.
+ */
+static void test_metadata_attach(int fd, unsigned int flags, int num_clients)
+{
+	test_client_with_trigger(fd, flags, num_clients, metadata_access_client,
+				 DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA,
+				 metadata_read_on_vm_bind_trigger,
+				 NULL, true, 0);
+}
+
+#define STAGE_CLIENT_WAIT_ON_UFENCE_DONE 1337
+
+#define UFENCE_EVENT_COUNT_EXPECTED 4
+#define UFENCE_EVENT_COUNT_MAX 100
+
+struct ufence_bind {
+	struct drm_xe_sync f;
+	uint64_t addr;
+	uint64_t range;
+	uint64_t value;
+	struct {
+		uint64_t vm_sync;
+	} *fence_data;
+};
+
+static void client_wait_ufences(struct xe_eudebug_client *c,
+				int fd, struct ufence_bind *binds, int count)
+{
+	const int64_t default_fence_timeout_ns = MS_TO_NS(500);
+	int64_t timeout_ns;
+	int err;
+
+	/* Ensure that wait on unacked ufence times out */
+	for (int i = 0; i < count; i++) {
+		struct ufence_bind *b = &binds[i];
+
+		timeout_ns = default_fence_timeout_ns;
+		err = __xe_wait_ufence(fd, &b->fence_data->vm_sync, b->f.timeline_value,
+				       0, &timeout_ns);
+		igt_assert_eq(err, -ETIME);
+		igt_assert_neq(b->fence_data->vm_sync, b->f.timeline_value);
+		igt_debug("wait #%d blocked on ack\n", i);
+	}
+
+	/* Wait on fence timed out, now tell the debugger to ack */
+	xe_eudebug_client_signal_stage(c, STAGE_CLIENT_WAIT_ON_UFENCE_DONE);
+
+	/* Check that ack unblocks ufence */
+	for (int i = 0; i < count; i++) {
+		struct ufence_bind *b = &binds[i];
+
+		timeout_ns = MS_TO_NS(XE_EUDEBUG_DEFAULT_TIMEOUT_MS);
+		err = __xe_wait_ufence(fd, &b->fence_data->vm_sync, b->f.timeline_value,
+				       0, &timeout_ns);
+		igt_assert_eq(err, 0);
+		igt_assert_eq(b->fence_data->vm_sync, b->f.timeline_value);
+		igt_debug("wait #%d completed\n", i);
+	}
+}
+
+static struct ufence_bind *create_binds_with_ufence(int fd, int count)
+{
+	struct ufence_bind *binds;
+
+	binds = calloc(count, sizeof(*binds));
+	igt_assert(binds);
+
+	for (int i = 0; i < count; i++) {
+		struct ufence_bind *b = &binds[i];
+
+		b->range = 0x1000;
+		b->addr = 0x100000 + b->range * i;
+		b->fence_data = aligned_alloc(xe_get_default_alignment(fd),
+					      sizeof(*b->fence_data));
+		igt_assert(b->fence_data);
+		memset(b->fence_data, 0, sizeof(*b->fence_data));
+
+		b->f.type = DRM_XE_SYNC_TYPE_USER_FENCE;
+		b->f.flags = DRM_XE_SYNC_FLAG_SIGNAL;
+		b->f.addr = to_user_pointer(&b->fence_data->vm_sync);
+		b->f.timeline_value = UFENCE_EVENT_COUNT_EXPECTED + i;
+	}
+
+	return binds;
+}
+
+static void basic_ufence_client(struct xe_eudebug_client *c)
+{
+	const unsigned int n = UFENCE_EVENT_COUNT_EXPECTED;
+	int fd = xe_eudebug_client_open_driver(c);
+	uint32_t vm = xe_eudebug_client_vm_create(c, fd, 0, 0);
+	size_t bo_size = n * xe_get_default_alignment(fd);
+	uint32_t bo = xe_bo_create(fd, 0, bo_size,
+				   system_memory(fd), 0);
+	struct ufence_bind *binds = create_binds_with_ufence(fd, n);
+
+	for (int i = 0; i < n; i++) {
+		struct ufence_bind *b = &binds[i];
+
+		xe_eudebug_client_vm_bind_flags(c, fd, vm, bo, 0, b->addr, b->range, 0,
+						&b->f, 1, 0);
+	}
+
+	client_wait_ufences(c, fd, binds, n);
+
+	for (int i = 0; i < n; i++) {
+		struct ufence_bind *b = &binds[i];
+
+		xe_eudebug_client_vm_unbind(c, fd, vm, 0, b->addr, b->range);
+	}
+
+	free(binds);
+	gem_close(fd, bo);
+	xe_eudebug_client_vm_destroy(c, fd, vm);
+	xe_eudebug_client_close_driver(c, fd);
+}
+
+struct ufence_priv {
+	struct drm_xe_eudebug_event_vm_bind_ufence ufence_events[UFENCE_EVENT_COUNT_MAX];
+	uint64_t ufence_event_seqno[UFENCE_EVENT_COUNT_MAX];
+	uint64_t ufence_event_vm_addr_start[UFENCE_EVENT_COUNT_MAX];
+	uint64_t ufence_event_vm_addr_range[UFENCE_EVENT_COUNT_MAX];
+	unsigned int ufence_event_count;
+	unsigned int vm_bind_op_count;
+	pthread_mutex_t mutex;
+};
+
+static struct ufence_priv *ufence_priv_create(void)
+{
+	struct ufence_priv *priv;
+
+	priv = mmap(0, ALIGN(sizeof(*priv), PAGE_SIZE),
+		    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
+	igt_assert(priv != MAP_FAILED);
+	memset(priv, 0, sizeof(*priv));
+	pthread_mutex_init(&priv->mutex, NULL);
+
+	return priv;
+}
+
+static void ufence_priv_destroy(struct ufence_priv *priv)
+{
+	munmap(priv, ALIGN(sizeof(*priv), PAGE_SIZE));
+}
+
+static void ack_fences(struct xe_eudebug_debugger *d)
+{
+	struct ufence_priv *priv = d->ptr;
+
+	for (int i = 0; i < priv->ufence_event_count; i++)
+		xe_eudebug_ack_ufence(d->fd, &priv->ufence_events[i]);
+}
+
+static void basic_ufence_trigger(struct xe_eudebug_debugger *d,
+				 struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_vm_bind_ufence *ef = (void *)e;
+	struct ufence_priv *priv = d->ptr;
+
+	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
+		char event_str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
+		struct drm_xe_eudebug_event_vm_bind *eb;
+
+		xe_eudebug_event_to_str(e, event_str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
+		igt_debug("ufence event received: %s\n", event_str);
+
+		xe_eudebug_assert_f(d, priv->ufence_event_count < UFENCE_EVENT_COUNT_EXPECTED,
+				    "surplus ufence event received: %s\n", event_str);
+		xe_eudebug_assert(d, ef->vm_bind_ref_seqno);
+
+		pthread_mutex_lock(&priv->mutex);
+		memcpy(&priv->ufence_events[priv->ufence_event_count++], ef, sizeof(*ef));
+		pthread_mutex_unlock(&priv->mutex);
+
+		eb = (struct drm_xe_eudebug_event_vm_bind *)
+			xe_eudebug_event_log_find_seqno(d->log, ef->vm_bind_ref_seqno);
+		xe_eudebug_assert_f(d, eb, "vm bind event with seqno (%lld) not found\n",
+				    ef->vm_bind_ref_seqno);
+		xe_eudebug_assert_f(d, eb->flags & DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE,
+				    "vm bind event does not have ufence: %s\n", event_str);
+	}
+}
+
+static int wait_for_ufence_events(struct ufence_priv *priv, int timeout_ms)
+{
+	int ret = -ETIMEDOUT;
+
+	igt_for_milliseconds(timeout_ms) {
+		pthread_mutex_lock(&priv->mutex);
+		if (priv->ufence_event_count == UFENCE_EVENT_COUNT_EXPECTED)
+			ret = 0;
+		pthread_mutex_unlock(&priv->mutex);
+
+		if (!ret)
+			break;
+		usleep(1000);
+	}
+
+	return ret;
+}
+
+/**
+ * SUBTEST: basic-vm-bind-ufence
+ * Description:
+ *      Give user fence in application and check if ufence ack works
+ */
+static void test_basic_ufence(int fd, unsigned int flags)
+{
+	struct xe_eudebug_debugger *d;
+	struct xe_eudebug_session *s;
+	struct xe_eudebug_client *c;
+	struct ufence_priv *priv;
+
+	priv = ufence_priv_create();
+	s = xe_eudebug_session_create(fd, basic_ufence_client, flags, priv);
+	c = s->c;
+	d = s->d;
+
+	xe_eudebug_debugger_add_trigger(d,
+					DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					basic_ufence_trigger);
+
+	igt_assert_eq(xe_eudebug_debugger_attach(d, c), 0);
+	xe_eudebug_debugger_start_worker(d);
+	xe_eudebug_client_start(c);
+
+	xe_eudebug_debugger_wait_stage(s, STAGE_CLIENT_WAIT_ON_UFENCE_DONE);
+	xe_eudebug_assert_f(d, wait_for_ufence_events(priv, XE_EUDEBUG_DEFAULT_TIMEOUT_MS) == 0,
+			    "missing ufence events\n");
+	ack_fences(d);
+
+	xe_eudebug_client_wait_done(c);
+	xe_eudebug_debugger_stop_worker(d, 1);
+
+	xe_eudebug_event_log_print(d->log, true);
+	xe_eudebug_event_log_print(c->log, true);
+
+	xe_eudebug_session_check(s, true, XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE);
+
+	xe_eudebug_session_destroy(s);
+	ufence_priv_destroy(priv);
+}
+
+struct vm_bind_clear_thread_priv {
+	struct drm_xe_engine_class_instance *hwe;
+	struct xe_eudebug_client *c;
+	pthread_t thread;
+	uint64_t region;
+	unsigned long sum;
+};
+
+struct vm_bind_clear_priv {
+	unsigned long unbind_count;
+	unsigned long bind_count;
+	unsigned long sum;
+};
+
+static struct vm_bind_clear_priv *vm_bind_clear_priv_create(void)
+{
+	struct vm_bind_clear_priv *priv;
+
+	priv = mmap(0, ALIGN(sizeof(*priv), PAGE_SIZE),
+		    PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
+	igt_assert(priv != MAP_FAILED);
+	memset(priv, 0, sizeof(*priv));
+
+	return priv;
+}
+
+static void vm_bind_clear_priv_destroy(struct vm_bind_clear_priv *priv)
+{
+	munmap(priv, ALIGN(sizeof(*priv), PAGE_SIZE));
+}
+
+static void *vm_bind_clear_thread(void *data)
+{
+	const uint32_t CS_GPR0 = 0x600;
+	const size_t batch_size = 16;
+	struct drm_xe_sync uf_sync = {
+		.type = DRM_XE_SYNC_TYPE_USER_FENCE, .flags = DRM_XE_SYNC_FLAG_SIGNAL,
+	};
+	struct vm_bind_clear_thread_priv *priv = data;
+	int fd = xe_eudebug_client_open_driver(priv->c);
+	uint64_t gtt_size = 1ull << min_t(uint32_t, xe_va_bits(fd), 48);
+	uint32_t vm = xe_eudebug_client_vm_create(priv->c, fd, 0, 0);
+	size_t bo_size = xe_bb_size(fd, batch_size);
+	unsigned long count = 0;
+	uint64_t *fence_data;
+
+	/* init uf_sync */
+	fence_data = aligned_alloc(xe_get_default_alignment(fd), sizeof(*fence_data));
+	igt_assert(fence_data);
+	uf_sync.timeline_value = 1337;
+	uf_sync.addr = to_user_pointer(fence_data);
+
+	igt_debug("Run on: %s%u\n", xe_engine_class_string(priv->hwe->engine_class),
+		  priv->hwe->engine_instance);
+
+	igt_until_timeout(5) {
+		struct drm_xe_exec_queue_create eq_create = { 0 };
+		uint32_t clean_bo = 0;
+		uint32_t batch_bo = 0;
+		uint64_t clean_offset, batch_offset;
+		uint32_t exec_queue;
+		uint32_t *map, *cs;
+		uint64_t delta;
+
+		/* calculate offsets (vma addresses) */
+		batch_offset = (random() * SZ_2M) & (gtt_size - 1);
+		/* XXX: for some platforms/memory regions batch offset '0' can be problematic */
+		if (batch_offset == 0)
+			batch_offset = SZ_2M;
+
+		do {
+			clean_offset = (random() * SZ_2M) & (gtt_size - 1);
+			if (clean_offset == 0)
+				clean_offset = SZ_2M;
+		} while (clean_offset == batch_offset);
+
+		batch_offset += (random() % SZ_2M) & -bo_size;
+		clean_offset += (random() % SZ_2M) & -bo_size;
+
+		delta = (random() % bo_size) & -4;
+
+		/* prepare clean bo */
+		clean_bo = xe_bo_create(fd, vm, bo_size, priv->region,
+					DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
+		memset(fence_data, 0, sizeof(*fence_data));
+		xe_eudebug_client_vm_bind_flags(priv->c, fd, vm, clean_bo, 0, clean_offset, bo_size,
+						0, &uf_sync, 1, 0);
+		xe_wait_ufence(fd, fence_data, uf_sync.timeline_value, 0,
+			       MS_TO_NS(XE_EUDEBUG_DEFAULT_TIMEOUT_MS));
+
+		/* prepare batch bo */
+		batch_bo = xe_bo_create(fd, vm, bo_size, priv->region,
+					DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
+		memset(fence_data, 0, sizeof(*fence_data));
+		xe_eudebug_client_vm_bind_flags(priv->c, fd, vm, batch_bo, 0, batch_offset, bo_size,
+						0, &uf_sync, 1, 0);
+		xe_wait_ufence(fd, fence_data, uf_sync.timeline_value, 0,
+			       MS_TO_NS(XE_EUDEBUG_DEFAULT_TIMEOUT_MS));
+
+		map = xe_bo_map(fd, batch_bo, bo_size);
+
+		cs = map;
+		*cs++ = MI_NOOP | 0xc5a3;
+		*cs++ = MI_LOAD_REGISTER_MEM_CMD | MI_LRI_LRM_CS_MMIO | 2;
+		*cs++ = CS_GPR0;
+		*cs++ = clean_offset + delta;
+		*cs++ = (clean_offset + delta) >> 32;
+		*cs++ = MI_STORE_REGISTER_MEM_CMD | MI_LRI_LRM_CS_MMIO | 2;
+		*cs++ = CS_GPR0;
+		*cs++ = batch_offset;
+		*cs++ = batch_offset >> 32;
+		*cs++ = MI_BATCH_BUFFER_END;
+
+		/* execute batch */
+		eq_create.width = 1;
+		eq_create.num_placements = 1;
+		eq_create.vm_id = vm;
+		eq_create.instances = to_user_pointer(priv->hwe);
+		exec_queue = xe_eudebug_client_exec_queue_create(priv->c, fd, &eq_create);
+		xe_exec_wait(fd, exec_queue, batch_offset);
+
+		igt_assert_eq(*map, 0);
+
+		/* cleanup */
+		xe_eudebug_client_exec_queue_destroy(priv->c, fd, &eq_create);
+		munmap(map, bo_size);
+
+		xe_eudebug_client_vm_unbind(priv->c, fd, vm, 0, batch_offset, bo_size);
+		gem_close(fd, batch_bo);
+
+		xe_eudebug_client_vm_unbind(priv->c, fd, vm, 0, clean_offset, bo_size);
+		gem_close(fd, clean_bo);
+
+		count++;
+	}
+
+	priv->sum = count;
+
+	free(fence_data);
+	xe_eudebug_client_close_driver(priv->c, fd);
+	return NULL;
+}
+
+static void vm_bind_clear_client(struct xe_eudebug_client *c)
+{
+	int fd = xe_eudebug_client_open_driver(c);
+	struct xe_device *xe_dev = xe_device_get(fd);
+	int count = xe_number_engines(fd) * xe_dev->mem_regions->num_mem_regions;
+	uint64_t memreg = all_memory_regions(fd);
+	struct vm_bind_clear_priv *priv = c->ptr;
+	int current = 0;
+	struct drm_xe_engine_class_instance *engine;
+	struct vm_bind_clear_thread_priv *threads;
+	uint64_t region;
+
+	threads = calloc(count, sizeof(*threads));
+	igt_assert(threads);
+	priv->sum = 0;
+
+	xe_for_each_mem_region(fd, memreg, region) {
+		xe_for_each_engine(fd, engine) {
+			threads[current].c = c;
+			threads[current].hwe = engine;
+			threads[current].region = region;
+
+			pthread_create(&threads[current].thread, NULL,
+				       vm_bind_clear_thread, &threads[current]);
+			current++;
+		}
+	}
+
+	for (current = 0; current < count; current++)
+		pthread_join(threads[current].thread, NULL);
+
+	xe_for_each_mem_region(fd, memreg, region) {
+		unsigned long sum = 0;
+
+		for (current = 0; current < count; current++)
+			if (threads[current].region == region)
+				sum += threads[current].sum;
+
+		igt_info("%s sampled %lu objects\n", xe_region_name(region), sum);
+		priv->sum += sum;
+	}
+
+	free(threads);
+	xe_device_put(fd);
+	xe_eudebug_client_close_driver(c, fd);
+}
+
+static void vm_bind_clear_test_trigger(struct xe_eudebug_debugger *d,
+				       struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_vm_bind_op *eo = (void *)e;
+	struct vm_bind_clear_priv *priv = d->ptr;
+
+	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
+		if (random() & 1) {
+			struct drm_xe_eudebug_vm_open vo = { 0, };
+			uint32_t v = 0xc1c1c1c1;
+
+			struct drm_xe_eudebug_event_vm_bind *eb;
+			int fd, delta, r;
+
+			igt_debug("vm bind op event received with ref %lld, addr 0x%llx, range 0x%llx\n",
+				eo->vm_bind_ref_seqno,
+				eo->addr,
+				eo->range);
+
+			eb = (struct drm_xe_eudebug_event_vm_bind *)
+				xe_eudebug_event_log_find_seqno(d->log, eo->vm_bind_ref_seqno);
+			igt_assert(eb);
+
+			vo.client_handle = eb->client_handle;
+			vo.vm_handle = eb->vm_handle;
+
+			fd = igt_ioctl(d->fd, DRM_XE_EUDEBUG_IOCTL_VM_OPEN, &vo);
+			igt_assert_lte(0, fd);
+
+			delta = (random() % eo->range) & -4;
+			r = pread(fd, &v, sizeof(v), eo->addr + delta);
+			igt_assert_eq(r, sizeof(v));
+			igt_assert_eq_u32(v, 0);
+
+			close(fd);
+		}
+		priv->bind_count++;
+	}
+
+	if (e->flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
+		priv->unbind_count++;
+}
+
+static void vm_bind_clear_ack_trigger(struct xe_eudebug_debugger *d,
+				      struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_vm_bind_ufence *ef = (void *)e;
+
+	xe_eudebug_ack_ufence(d->fd, ef);
+}
+
+/**
+ * SUBTEST: vm-bind-clear
+ * Description:
+ *      Check that fresh buffers we vm_bind into the ppGTT are always clear.
+ */
+static void test_vm_bind_clear(int fd)
+{
+	struct vm_bind_clear_priv *priv;
+	struct xe_eudebug_session *s;
+
+	priv = vm_bind_clear_priv_create();
+	s = xe_eudebug_session_create(fd, vm_bind_clear_client, 0, priv);
+
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_OP,
+					vm_bind_clear_test_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					vm_bind_clear_ack_trigger);
+
+	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
+	xe_eudebug_debugger_start_worker(s->d);
+	xe_eudebug_client_start(s->c);
+
+	xe_eudebug_client_wait_done(s->c);
+	xe_eudebug_debugger_stop_worker(s->d, 1);
+
+	igt_assert_eq(priv->bind_count, priv->unbind_count);
+	igt_assert_eq(priv->sum * 2, priv->bind_count);
+
+	xe_eudebug_session_destroy(s);
+	vm_bind_clear_priv_destroy(priv);
+}
+
+#define UFENCE_CLIENT_VM_TEST_VAL_START 0xaaaaaaaa
+#define UFENCE_CLIENT_VM_TEST_VAL_END 0xbbbbbbbb
+
+static void vma_ufence_client(struct xe_eudebug_client *c)
+{
+	const unsigned int n = UFENCE_EVENT_COUNT_EXPECTED;
+	int fd = xe_eudebug_client_open_driver(c);
+	struct ufence_bind *binds = create_binds_with_ufence(fd, n);
+	uint32_t vm = xe_eudebug_client_vm_create(c, fd, 0, 0);
+	size_t bo_size = xe_get_default_alignment(fd);
+	uint64_t items = bo_size / sizeof(uint32_t);
+	uint32_t bo[UFENCE_EVENT_COUNT_EXPECTED];
+	uint32_t *ptr[UFENCE_EVENT_COUNT_EXPECTED];
+
+	for (int i = 0; i < n; i++) {
+		bo[i] = xe_bo_create(fd, 0, bo_size,
+				     system_memory(fd), 0);
+		ptr[i] = xe_bo_map(fd, bo[i], bo_size);
+		igt_assert(ptr[i]);
+		memset(ptr[i], UFENCE_CLIENT_VM_TEST_VAL_START, bo_size);
+	}
+
+	for (int i = 0; i < n; i++)
+		for (int j = 0; j < items; j++)
+			igt_assert_eq(ptr[i][j], UFENCE_CLIENT_VM_TEST_VAL_START);
+
+	for (int i = 0; i < n; i++) {
+		struct ufence_bind *b = &binds[i];
+
+		xe_eudebug_client_vm_bind_flags(c, fd, vm, bo[i], 0, b->addr, b->range, 0,
+						&b->f, 1, 0);
+	}
+
+	/* Wait for acks on ufences */
+	for (int i = 0; i < n; i++) {
+		int err;
+		int64_t timeout_ns;
+		struct ufence_bind *b = &binds[i];
+
+		timeout_ns = MS_TO_NS(XE_EUDEBUG_DEFAULT_TIMEOUT_MS);
+		err = __xe_wait_ufence(fd, &b->fence_data->vm_sync, b->f.timeline_value,
+				       0, &timeout_ns);
+		igt_assert_eq(err, 0);
+		igt_assert_eq(b->fence_data->vm_sync, b->f.timeline_value);
+		igt_debug("wait #%d completed\n", i);
+
+		for (int j = 0; j < items; j++)
+			igt_assert_eq(ptr[i][j], UFENCE_CLIENT_VM_TEST_VAL_END);
+	}
+
+	for (int i = 0; i < n; i++) {
+		struct ufence_bind *b = &binds[i];
+
+		xe_eudebug_client_vm_unbind(c, fd, vm, 0, b->addr, b->range);
+	}
+
+	free(binds);
+
+	for (int i = 0; i < n; i++) {
+		munmap(ptr[i], bo_size);
+		gem_close(fd, bo[i]);
+	}
+
+	xe_eudebug_client_vm_destroy(c, fd, vm);
+	xe_eudebug_client_close_driver(c, fd);
+}
+
+static void debugger_test_vma_ufence(struct xe_eudebug_debugger *d,
+				     uint64_t client_handle,
+				     uint64_t vm_handle,
+				     uint64_t va_start,
+				     uint64_t va_length)
+{
+	struct drm_xe_eudebug_vm_open vo = { 0, };
+	uint32_t *v1, *v2;
+	uint32_t items = va_length / sizeof(uint32_t);
+	int fd;
+	int r, i;
+
+	v1 = malloc(va_length);
+	igt_assert(v1);
+	v2 = malloc(va_length);
+	igt_assert(v2);
+
+	vo.client_handle = client_handle;
+	vo.vm_handle = vm_handle;
+
+	fd = igt_ioctl(d->fd, DRM_XE_EUDEBUG_IOCTL_VM_OPEN, &vo);
+	igt_assert_lte(0, fd);
+
+	r = pread(fd, v1, va_length, va_start);
+	igt_assert_eq(r, va_length);
+
+	for (i = 0; i < items; i++)
+		igt_assert_eq(v1[i], UFENCE_CLIENT_VM_TEST_VAL_START);
+
+	memset(v1, UFENCE_CLIENT_VM_TEST_VAL_END, va_length);
+
+	r = pwrite(fd, v1, va_length, va_start);
+	igt_assert_eq(r, va_length);
+
+	lseek(fd, va_start, SEEK_SET);
+	r = read(fd, v2, va_length);
+	igt_assert_eq(r, va_length);
+
+	for (i = 0; i < items; i++)
+		igt_assert_eq_u64(v1[i], v2[i]);
+
+	fsync(fd);
+
+	close(fd);
+	free(v1);
+	free(v2);
+}
+
+static void vma_ufence_op_trigger(struct xe_eudebug_debugger *d,
+				  struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_vm_bind_op *eo = (void *)e;
+	struct ufence_priv *priv = d->ptr;
+
+	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
+		char event_str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
+		struct drm_xe_eudebug_event_vm_bind *eb;
+		unsigned int op_count = priv->vm_bind_op_count++;
+
+		xe_eudebug_event_to_str(e, event_str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
+		igt_debug("vm bind op event: ref %lld, addr 0x%llx, range 0x%llx, op_count %u\n",
+			  eo->vm_bind_ref_seqno,
+			  eo->addr,
+			  eo->range,
+			  op_count);
+		igt_debug("vm bind op event received: %s\n", event_str);
+		xe_eudebug_assert(d, eo->vm_bind_ref_seqno);
+		eb = (struct drm_xe_eudebug_event_vm_bind *)
+			xe_eudebug_event_log_find_seqno(d->log, eo->vm_bind_ref_seqno);
+
+		xe_eudebug_assert_f(d, eb, "vm bind event with seqno (%lld) not found\n",
+				    eo->vm_bind_ref_seqno);
+		xe_eudebug_assert_f(d, eb->flags & DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE,
+				    "vm bind event does not have ufence: %s\n", event_str);
+
+		priv->ufence_event_seqno[op_count] = eo->vm_bind_ref_seqno;
+		priv->ufence_event_vm_addr_start[op_count] = eo->addr;
+		priv->ufence_event_vm_addr_range[op_count] = eo->range;
+	}
+}
+
+static void vma_ufence_trigger(struct xe_eudebug_debugger *d,
+			       struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_vm_bind_ufence *ef = (void *)e;
+	struct ufence_priv *priv = d->ptr;
+	unsigned int ufence_count = priv->ufence_event_count;
+
+	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
+		char event_str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
+		struct drm_xe_eudebug_event_vm_bind *eb;
+		uint64_t addr = priv->ufence_event_vm_addr_start[ufence_count];
+		uint64_t range = priv->ufence_event_vm_addr_range[ufence_count];
+
+		xe_eudebug_event_to_str(e, event_str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
+		igt_debug("ufence event received: %s\n", event_str);
+
+		xe_eudebug_assert_f(d, priv->ufence_event_count < UFENCE_EVENT_COUNT_EXPECTED,
+				    "surplus ufence event received: %s\n", event_str);
+		xe_eudebug_assert(d, ef->vm_bind_ref_seqno);
+
+		memcpy(&priv->ufence_events[priv->ufence_event_count++], ef, sizeof(*ef));
+
+		eb = (struct drm_xe_eudebug_event_vm_bind *)
+			xe_eudebug_event_log_find_seqno(d->log, ef->vm_bind_ref_seqno);
+		xe_eudebug_assert_f(d, eb, "vm bind event with seqno (%lld) not found\n",
+				    ef->vm_bind_ref_seqno);
+		xe_eudebug_assert_f(d, eb->flags & DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE,
+				    "vm bind event does not have ufence: %s\n", event_str);
+		igt_debug("vm bind ufence event received with ref %lld, addr 0x%lx, range 0x%lx\n",
+			  ef->vm_bind_ref_seqno,
+			  addr,
+			  range);
+		debugger_test_vma_ufence(d, eb->client_handle, eb->vm_handle,
+					 addr, range);
+
+		xe_eudebug_ack_ufence(d->fd, ef);
+	}
+}
+/**
+ * SUBTEST: vma-ufence
+ * Description:
+ *      Intercept vm bind after receiving ufence event, then access target vm and write to it.
+ *      Then check on client side if the write was successful.
+ */
+static void test_vma_ufence(int fd, unsigned int flags)
+{
+	struct xe_eudebug_session *s;
+	struct ufence_priv *priv;
+
+	priv = ufence_priv_create();
+	s = xe_eudebug_session_create(fd, vma_ufence_client, flags, priv);
+
+	xe_eudebug_debugger_add_trigger(s->d,
+					DRM_XE_EUDEBUG_EVENT_VM_BIND_OP,
+					vma_ufence_op_trigger);
+	xe_eudebug_debugger_add_trigger(s->d,
+					DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					vma_ufence_trigger);
+
+	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
+	xe_eudebug_debugger_start_worker(s->d);
+	xe_eudebug_client_start(s->c);
+
+	xe_eudebug_client_wait_done(s->c);
+	xe_eudebug_debugger_stop_worker(s->d, 1);
+
+	xe_eudebug_event_log_print(s->d->log, true);
+	xe_eudebug_event_log_print(s->c->log, true);
+
+	xe_eudebug_session_check(s, true, XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE);
+
+	xe_eudebug_session_destroy(s);
+	ufence_priv_destroy(priv);
+}
+
+igt_main
+{
+	bool was_enabled;
+	bool *multigpu_was_enabled;
+	int fd, gpu_count;
+
+	igt_fixture {
+		fd = drm_open_driver(DRIVER_XE);
+		was_enabled = xe_eudebug_enable(fd, true);
+	}
+
+	igt_subtest("sysfs-toggle")
+		test_sysfs_toggle(fd);
+
+	igt_subtest("basic-connect")
+		test_connect(fd);
+
+	igt_subtest("connect-user")
+		test_connect_user(fd);
+
+	igt_subtest("basic-close")
+		test_close(fd);
+
+	igt_subtest("basic-read-event")
+		test_read_event(fd);
+
+	igt_subtest("basic-client")
+		test_basic_sessions(fd, 0, 1, true);
+
+	igt_subtest("basic-client-th")
+		test_basic_sessions_th(fd, 0, 1, true);
+
+	igt_subtest("basic-vm-access")
+		test_vm_access(fd, 0, 1);
+
+	igt_subtest("basic-vm-access-userptr")
+		test_vm_access(fd, VM_BIND_OP_MAP_USERPTR, 1);
+
+	igt_subtest("basic-vm-access-parameters")
+		test_vm_access_parameters(fd, 0, 1);
+
+	igt_subtest("multiple-sessions")
+		test_basic_sessions(fd, CREATE_VMS | CREATE_EXEC_QUEUES, 4, true);
+
+	igt_subtest("basic-vms")
+		test_basic_sessions(fd, CREATE_VMS, 1, true);
+
+	igt_subtest("basic-exec-queues")
+		test_basic_sessions(fd, CREATE_EXEC_QUEUES, 1, true);
+
+	igt_subtest("basic-vm-bind")
+		test_basic_sessions(fd, VM_BIND, 1, true);
+
+	igt_subtest("basic-vm-bind-ufence")
+		test_basic_ufence(fd, 0);
+
+	igt_subtest("vma-ufence")
+		test_vma_ufence(fd, 0);
+
+	igt_subtest("vm-bind-clear")
+		test_vm_bind_clear(fd);
+
+	igt_subtest("basic-vm-bind-discovery")
+		test_basic_discovery(fd, VM_BIND, true);
+
+	igt_subtest("basic-vm-bind-metadata-discovery")
+		test_basic_discovery(fd, VM_BIND_METADATA, true);
+
+	igt_subtest("basic-vm-bind-vm-destroy")
+		test_basic_sessions(fd, VM_BIND_VM_DESTROY, 1, false);
+
+	igt_subtest("basic-vm-bind-vm-destroy-discovery")
+		test_basic_discovery(fd, VM_BIND_VM_DESTROY, false);
+
+	igt_subtest("basic-vm-bind-extended")
+		test_basic_sessions(fd, VM_BIND_EXTENDED, 1, true);
+
+	igt_subtest("basic-vm-bind-extended-discovery")
+		test_basic_discovery(fd, VM_BIND_EXTENDED, true);
+
+	igt_subtest("read-metadata")
+		test_metadata_read(fd, 0, 1);
+
+	igt_subtest("attach-debug-metadata")
+		test_metadata_attach(fd, 0, 1);
+
+	igt_subtest("discovery-race")
+		test_race_discovery(fd, 0, 4);
+
+	igt_subtest("discovery-race-vmbind")
+		test_race_discovery(fd, DISCOVERY_VM_BIND, 4);
+
+	igt_subtest("discovery-empty")
+		test_empty_discovery(fd, DISCOVERY_CLOSE_CLIENT, 16);
+
+	igt_subtest("discovery-empty-clients")
+		test_empty_discovery(fd, DISCOVERY_DESTROY_RESOURCES, 16);
+
+	igt_fixture {
+		xe_eudebug_enable(fd, was_enabled);
+		drm_close_driver(fd);
+	}
+
+	igt_subtest_group {
+		igt_fixture {
+			gpu_count = drm_prepare_filtered_multigpu(DRIVER_XE);
+			igt_require(gpu_count >= 2);
+
+			multigpu_was_enabled = malloc(gpu_count * sizeof(bool));
+			igt_assert(multigpu_was_enabled);
+			for (int i = 0; i < gpu_count; i++) {
+				fd = drm_open_filtered_card(i);
+				multigpu_was_enabled[i] = xe_eudebug_enable(fd, true);
+				close(fd);
+			}
+		}
+
+		igt_subtest("multigpu-basic-client") {
+			igt_multi_fork(child, gpu_count) {
+				fd = drm_open_filtered_card(child);
+				igt_assert_f(fd > 0, "cannot open gpu-%d, errno=%d\n",
+					     child, errno);
+				igt_assert(is_xe_device(fd));
+
+				test_basic_sessions(fd, 0, 1, true);
+				close(fd);
+			}
+			igt_waitchildren();
+		}
+
+		igt_subtest("multigpu-basic-client-many") {
+			igt_multi_fork(child, gpu_count) {
+				fd = drm_open_filtered_card(child);
+				igt_assert_f(fd > 0, "cannot open gpu-%d, errno=%d\n",
+					     child, errno);
+				igt_assert(is_xe_device(fd));
+
+				test_basic_sessions(fd, 0, 4, true);
+				close(fd);
+			}
+			igt_waitchildren();
+		}
+
+		igt_fixture {
+			for (int i = 0; i < gpu_count; i++) {
+				fd = drm_open_filtered_card(i);
+				xe_eudebug_enable(fd, multigpu_was_enabled[i]);
+				close(fd);
+			}
+			free(multigpu_was_enabled);
+		}
+	}
+}
diff --git a/tests/meson.build b/tests/meson.build
index e649466be..35bf8ed35 100644
--- a/tests/meson.build
+++ b/tests/meson.build
@@ -279,6 +279,7 @@ intel_xe_progs = [
 	'xe_dma_buf_sync',
 	'xe_debugfs',
 	'xe_drm_fdinfo',
+	'xe_eudebug',
 	'xe_evict',
 	'xe_evict_ccs',
 	'xe_exec_atomic',
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH i-g-t v3 06/14] lib/gpgpu_shader: Extend shader building library
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (4 preceding siblings ...)
  2024-08-09 12:38 ` [PATCH i-g-t v3 05/14] tests/xe_eudebug: Test eudebug resource tracking and manipulation Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-09 12:38 ` [PATCH i-g-t v3 07/14] lib/gpgpu_shader: Add write_on_exception template Christoph Manszewski
                   ` (13 subsequent siblings)
  19 siblings, 0 replies; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Christoph Manszewski

Add shader building functions and iga64 code used by eudebug subtests.

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
Signed-off-by: Dominik Karol Piątkowski <dominik.karol.piatkowski@intel.com>
---
 lib/gpgpu_shader.c          | 386 +++++++++++++++++++++++++++++++++++-
 lib/gpgpu_shader.h          |  24 ++-
 lib/iga64_generated_codes.c | 321 +++++++++++++++++++++++++++++-
 3 files changed, 727 insertions(+), 4 deletions(-)

diff --git a/lib/gpgpu_shader.c b/lib/gpgpu_shader.c
index 991455c49..3b16db593 100644
--- a/lib/gpgpu_shader.c
+++ b/lib/gpgpu_shader.c
@@ -7,10 +7,16 @@
 
 #include <i915_drm.h>
 
+#include "igt_map.h"
 #include "ioctl_wrappers.h"
 #include "gpgpu_shader.h"
 #include "gpu_cmds.h"
 
+struct label_entry {
+	uint32_t id;
+	uint32_t offset;
+};
+
 #define IGA64_ARG0 0xc0ded000
 #define IGA64_ARG_MASK 0xffffff00
 
@@ -32,7 +38,7 @@ static void gpgpu_shader_extend(struct gpgpu_shader *shdr)
 	igt_assert(shdr->code);
 }
 
-void
+uint32_t
 __emit_iga64_code(struct gpgpu_shader *shdr, struct iga64_template const *tpls,
 		  int argc, uint32_t *argv)
 {
@@ -60,6 +66,8 @@ __emit_iga64_code(struct gpgpu_shader *shdr, struct iga64_template const *tpls,
 	}
 
 	shdr->size += tpls->size;
+
+	return tpls->size;
 }
 
 static uint32_t fill_sip(struct intel_bb *ibb,
@@ -235,6 +243,7 @@ struct gpgpu_shader *gpgpu_shader_create(int fd)
 	shdr->gen_ver = 100 * info->graphics_ver + info->graphics_rel;
 	shdr->max_size = 16 * 4;
 	shdr->code = malloc(4 * shdr->max_size);
+	shdr->labels = igt_map_create(igt_map_hash_32, igt_map_equal_32);
 	igt_assert(shdr->code);
 	return shdr;
 }
@@ -251,6 +260,70 @@ void gpgpu_shader_destroy(struct gpgpu_shader *shdr)
 	free(shdr);
 }
 
+/**
+ * gpgpu_shader_dump:
+ * @shdr: shader to be printed
+ *
+ * Print shader instructions from @shdr in hex.
+ */
+void gpgpu_shader_dump(struct gpgpu_shader *shdr)
+{
+	for (int i = 0; i < shdr->size / 4; i++)
+		igt_info("0x%08x 0x%08x 0x%08x 0x%08x\n",
+			 shdr->instr[i][0], shdr->instr[i][1],
+			 shdr->instr[i][2], shdr->instr[i][3]);
+}
+
+/**
+ * gpgpu_shader__breakpoint_on:
+ * @shdr: shader to create breakpoint in
+ * @cmd_no: index of the instruction to break on
+ *
+ * Insert a breakpoint on the @cmd_no'th instruction within @shdr.
+ */
+void gpgpu_shader__breakpoint_on(struct gpgpu_shader *shdr, uint32_t cmd_no)
+{
+	shdr->instr[cmd_no][0] |= 1<<30;
+}
+
+/**
+ * gpgpu_shader__breakpoint:
+ * @shdr: shader to create breakpoint in
+ *
+ * Insert a breakpoint on the last instruction in @shdr.
+ */
+void gpgpu_shader__breakpoint(struct gpgpu_shader *shdr)
+{
+	gpgpu_shader__breakpoint_on(shdr, shdr->size / 4 - 1);
+}
+
+/**
+ * gpgpu_shader__wait:
+ * @shdr: shader to be modified
+ *
+ * Append a wait instruction to @shdr. This instruction raises attention
+ * and stops execution.
+ */
+void gpgpu_shader__wait(struct gpgpu_shader *shdr)
+{
+	emit_iga64_code(shdr, sync_host, "	\n\
+(W)	sync.host	        null		\n\
+	");
+}
+
+/**
+ * gpgpu_shader__nop:
+ * @shdr: shader to be modified
+ *
+ * Append a no-op instruction to @shdr.
+ */
+void gpgpu_shader__nop(struct gpgpu_shader *shdr)
+{
+	emit_iga64_code(shdr, nop, "	\n\
+(W)	nop				\n\
+	");
+}
+
 /**
  * gpgpu_shader__eot:
  * @shdr: shader to be modified
@@ -269,6 +342,247 @@ void gpgpu_shader__eot(struct gpgpu_shader *shdr)
 	");
 }
 
+/**
+ * gpgpu_shader__label:
+ * @shdr: shader to be modified
+ * @label_id: id of the label to be created
+ *
+ * Create a label for the last instruction within @shdr.
+ */
+void gpgpu_shader__label(struct gpgpu_shader *shdr, int label_id)
+{
+	struct label_entry *l = malloc(sizeof(*l));
+
+	l->id = label_id;
+	l->offset = shdr->size;
+	igt_map_insert(shdr->labels, &l->id, l);
+}
+
+#define OPCODE(x) (x & 0x7f)
+#define OPCODE_JUMP_INDEXED 0x20
+static void __patch_indexed_jump(struct gpgpu_shader *shdr, int label_id,
+				 uint32_t jump_iga64_size)
+{
+	struct label_entry *l;
+	uint32_t *start, *end, *label;
+	int32_t relative;
+
+	l = igt_map_search(shdr->labels, &label_id);
+	igt_assert(l);
+
+	igt_assert(jump_iga64_size % 4 == 0);
+
+	label = shdr->code + l->offset;
+	end = shdr->code + shdr->size;
+	start = end - jump_iga64_size;
+
+	for (; start < end; start += 4)
+		if (OPCODE(*start) == OPCODE_JUMP_INDEXED) {
+			relative = (label - start) * 4;
+			*(start + 3) = relative;
+			break;
+		}
+}
+
+/**
+ * gpgpu_shader__jump:
+ * @shdr: shader to be modified
+ * @label_id: label to jump to
+ *
+ * Append a jump instruction to @shdr that jumps to the instruction with label @label_id.
+ */
+void gpgpu_shader__jump(struct gpgpu_shader *shdr, int label_id)
+{
+	size_t shader_size;
+
+	shader_size = emit_iga64_code(shdr, jump, "	\n\
+L0:							\n\
+(W)	jmpi        L0					\n\
+	");
+
+	__patch_indexed_jump(shdr, label_id, shader_size);
+}
+
+/**
+ * gpgpu_shader__jump_neq:
+ * @shdr: shader to be modified
+ * @label_id: label to jump to
+ * @y_offset: offset within target buffer in rows
+ * @value: expected value
+ *
+ * Append a conditional jump instruction to @shdr. Jump to the instruction
+ * with label @label_id when @value is not equal to the dword stored at row
+ * @y_offset within the surface.
+ *
+ */
+void gpgpu_shader__jump_neq(struct gpgpu_shader *shdr, int label_id,
+			    uint32_t y_offset, uint32_t value)
+{
+	uint32_t size;
+
+	size = emit_iga64_code(shdr, jump_dw_neq, "					\n\
+L0:											\n\
+(W)		mov (16|M0)              r30.0<1>:ud    0x0:ud				\n\
+#if GEN_VER < 2000 // Media Block Write							\n\
+	// Y offset of the block in rows := thread group id Y				\n\
+(W)		mov (1|M0)               r30.1<1>:ud    ARG(0):ud			\n\
+	// block width [0,63] representing 1 to 64 bytes, we want dword			\n\
+(W)		mov (1|M0)               r30.2<1>:ud    0x3:ud				\n\
+	// FFTID := FFTID from R0 header						\n\
+(W)		mov (1|M0)               r30.4<1>:ud    r0.5<0;1,0>:ud  		\n\
+(W)		send.dc1 (16|M0)         r31     r30      null    0x0	0x2190000	\n\
+#else // Typed 2D Block Store								\n\
+	// Store X and Y block start (160:191 and 192:223)				\n\
+(W)            mov (2|M0)               r30.6<1>:ud    ARG(0):ud			\n\
+	// Store X and Y block size (224:231 and 232:239)				\n\
+(W)            mov (1|M0)               r30.7<1>:ud    0x3:ud				\n\
+(W)            send.tgm (16|M0)         r31     r30    null:0    0x0    0x62100003	\n\
+#endif											\n\
+	// clear the flag register							\n\
+(W)		mov (1|M0)               f0.0<1>:ud    0x0:ud				\n\
+(W)		cmp (1|M0)    (ne)f0.0   null<1>:ud     r31.0<0;1,0>:ud   ARG(1):ud	\n\
+(W&f0.0)	jmpi                     L0						\n\
+	", y_offset, value);
+
+	__patch_indexed_jump(shdr, label_id, size);
+}
+
+/**
+ * gpgpu_shader__loop_begin:
+ * @shdr: shader to be modified
+ * @label_id: id of the label to be created
+ *
+ * Begin a counting loop in @shdr. All subsequent instructions will constitute
+ * the loop body up until 'gpgpu_shader__loop_end' gets called. The first
+ * instruction of the loop will be at label @label_id.
+ */
+void gpgpu_shader__loop_begin(struct gpgpu_shader *shdr, int label_id)
+{
+	emit_iga64_code(shdr, clear_r40, "		\n\
+L0:							\n\
+(W)	mov (1|M0)               r40:ud    0x0:ud	\n\
+	");
+
+	gpgpu_shader__label(shdr, label_id);
+}
+
+/**
+ * gpgpu_shader__loop_end:
+ * @shdr: shader to be modified
+ * @label_id: label id passed to 'gpgpu_shader__loop_begin'
+ * @iter: iteration count
+ *
+ * End loop body in @shdr.
+ */
+void gpgpu_shader__loop_end(struct gpgpu_shader *shdr, int label_id, uint32_t iter)
+{
+	uint32_t size;
+
+	size = emit_iga64_code(shdr, inc_r40_jump_neq, "				\n\
+L0:											\n\
+(W)		add (1|M0)              r40:ud          r40.0<0;1,0>:ud 0x1:ud		\n\
+(W)		mov (1|M0)              f0.0<1>:ud      0x0:ud				\n\
+(W)		cmp (1|M0)    (ne)f0.0   null<1>:ud     r40.0<0;1,0>:ud   ARG(0):ud	\n\
+(W&f0.0)	jmpi                     L0						\n\
+	", iter);
+
+	__patch_indexed_jump(shdr, label_id, size);
+}
+
+/**
+ * gpgpu_shader__common_target_write:
+ * @shdr: shader to be modified
+ * @y_offset: write target offset within target buffer in rows
+ * @value: oword to be written
+ *
+ * Write the oword stored in @value to the target buffer at @y_offset.
+ */
+void gpgpu_shader__common_target_write(struct gpgpu_shader *shdr,
+				       uint32_t y_offset, const uint32_t value[4])
+{
+	emit_iga64_code(shdr, common_target_write, "				\n\
+(W)	mov (16|M0)		r31.0<1>:ud	0x0:ud				\n\
+(W)	mov (1|M0)		r31.0<1>:ud	ARG(1):ud			\n\
+(W)	mov (1|M0)		r31.1<1>:ud	ARG(2):ud			\n\
+(W)	mov (1|M0)		r31.2<1>:ud	ARG(3):ud			\n\
+(W)	mov (1|M0)		r31.3<1>:ud	ARG(4):ud			\n\
+(W)	mov (16|M0)		r30.0<1>:ud	0x0:ud				\n\
+#if GEN_VER < 2000 // Media Block Write						\n\
+	// Y offset of the block in rows					\n\
+(W)	mov (1|M0)		r30.1<1>:ud	ARG(0):ud			\n\
+	// block width [0,63] representing 1 to 64 bytes			\n\
+(W)	mov (1|M0)		r30.2<1>:ud	0xf:ud				\n\
+	// FFTID := FFTID from R0 header					\n\
+(W)	mov (1|M0)		r30.4<1>:ud	r0.5<0;1,0>:ud			\n\
+	// written value							\n\
+(W)	send.dc1 (16|M0)	null	r30	src1_null  0x0	0x40A8000	\n\
+#else	// Typed 2D Block Store							\n\
+	// Store X and Y block start (160:191 and 192:223)			\n\
+(W)	mov (2|M0)              r30.6<1>:ud     ARG(0):ud			\n\
+	// Store X and Y block size (224:231 and 232:239)			\n\
+(W)	mov (1|M0)              r30.7<1>:ud     0xf:ud				\n\
+(W)	send.tgm (16|M0)        null    r30     null:0  0x0     0x64000007	\n\
+#endif										\n\
+	", y_offset, value[0], value[1], value[2], value[3]);
+}
+
+/**
+ * gpgpu_shader__common_target_write_u32:
+ * @shdr: shader to be modified
+ * @y_offset: write target offset within target buffer in rows
+ * @value: dword to be written
+ *
+ * Fill oword at @y_offset with dword stored in @value.
+ */
+void gpgpu_shader__common_target_write_u32(struct gpgpu_shader *shdr,
+					   uint32_t y_offset, uint32_t value)
+{
+	const uint32_t owblock[4] = {
+		value, value, value, value
+	};
+	gpgpu_shader__common_target_write(shdr, y_offset, owblock);
+}
+
+/**
+ * gpgpu_shader__write_aip:
+ * @shdr: shader to be modified
+ * @y_offset: write target offset within the surface in rows
+ *
+ * Write the instruction pointer (AIP, read from cr0.2) to row tg_id_y + @y_offset.
+ */
+void gpgpu_shader__write_aip(struct gpgpu_shader *shdr, uint32_t y_offset)
+{
+	emit_iga64_code(shdr, media_block_write_aip, "				\n\
+	// Payload								\n\
+(W)	mov (1|M0)               r5.0<1>:ud    cr0.2:ud				\n\
+#if GEN_VER < 2000 // Media Block Write						\n\
+	// X offset of the block in bytes := (thread group id X << ARG(0))	\n\
+(W)	shl (1|M0)               r4.0<1>:ud    r0.1<0;1,0>:ud    0x2:ud		\n\
+	// Y offset of the block in rows := thread group id Y			\n\
+(W)	mov (1|M0)               r4.1<1>:ud    r0.6<0;1,0>:ud			\n\
+(W)	add (1|M0)               r4.1<1>:ud    r4.1<0;1,0>:ud    ARG(0):ud	\n\
+	// block width [0,63] representing 1 to 64 bytes			\n\
+(W)	mov (1|M0)               r4.2<1>:ud    0x3:ud				\n\
+	// FFTID := FFTID from R0 header					\n\
+(W)	mov (1|M0)               r4.4<1>:ud    r0.5<0;1,0>:ud			\n\
+(W)	send.dc1 (16|M0)         null     r4   src1_null 0       0x40A8000	\n\
+#else // Typed 2D Block Store							\n\
+	// Load r2.0-3 with tg id X << ARG(0)					\n\
+(W)	shl (1|M0)               r2.0<1>:ud    r0.1<0;1,0>:ud    0x2:ud		\n\
+	// Load r2.4-7 with tg id Y + ARG(1):ud					\n\
+(W)	mov (1|M0)               r2.1<1>:ud    r0.6<0;1,0>:ud			\n\
+(W)	add (1|M0)               r2.1<1>:ud    r2.1<0;1,0>:ud    ARG(0):ud	\n\
+	// payload setup							\n\
+(W)	mov (16|M0)              r4.0<1>:ud    0x0:ud				\n\
+	// Store X and Y block start (160:191 and 192:223)			\n\
+(W)	mov (2|M0)               r4.5<1>:ud    r2.0<2;2,1>:ud			\n\
+	// Store X and Y block max_size (224:231 and 232:239)			\n\
+(W)	mov (1|M0)               r4.7<1>:ud    0x3:ud				\n\
+(W)	send.tgm (16|M0)         null     r4   null:0    0    0x64000007	\n\
+#endif										\n\
+	", y_offset);
+}
+
 /**
  * gpgpu_shader__write_dword:
  * @shdr: shader to be modified
@@ -313,3 +627,73 @@ void gpgpu_shader__write_dword(struct gpgpu_shader *shdr, uint32_t value,
 #endif										\n\
 	", 2, y_offset, 3, value, value, value, value);
 }
+
+/**
+ * gpgpu_shader__end_system_routine:
+ * @shdr: shader to be modified
+ * @breakpoint_suppress: breakpoint suppress flag
+ *
+ * Return from the system routine. To prevent infinite re-entry into the
+ * system routine on a breakpoint, the @breakpoint_suppress flag has to be set.
+ */
+void gpgpu_shader__end_system_routine(struct gpgpu_shader *shdr,
+				      bool breakpoint_suppress)
+{
+	/*
+	 * set breakpoint suppress bit to avoid an endless loop
+	 * when sip was invoked by a breakpoint
+	 */
+	if (breakpoint_suppress)
+		emit_iga64_code(shdr, breakpoint_suppress, "			\n\
+(W)	or  (1|M0)               cr0.0<1>:ud   cr0.0<0;1,0>:ud   0x8000:ud	\n\
+		");
+
+	emit_iga64_code(shdr, end_system_routine, "				\n\
+(W)	and (1|M0)               cr0.1<1>:ud   cr0.1<0;1,0>:ud   ARG(0):ud	\n\
+	// return to an application						\n\
+(W)	and (1|M0)               cr0.0<1>:ud   cr0.0<0;1,0>:ud   0x7FFFFFFD:ud	\n\
+	", 0x7fffff | (1 << 26)); /* clear all exceptions, except read only bit */
+}
+
+/**
+ * gpgpu_shader__end_system_routine_step_if_eq:
+ * @shdr: shader to be modified
+ * @y_offset: offset within target buffer in rows
+ * @value: expected value for single stepping execution
+ *
+ * Return from the system routine. Don't clear the breakpoint exception when
+ * @value equals the dword stored at row @y_offset. This re-triggers the system
+ * routine after the subsequent instruction, resulting in single-step execution.
+ */
+void gpgpu_shader__end_system_routine_step_if_eq(struct gpgpu_shader *shdr,
+						 uint32_t y_offset,
+						 uint32_t value)
+{
+	emit_iga64_code(shdr, end_system_routine_step_if_eq, "				\n\
+(W)		or  (1|M0)               cr0.0<1>:ud   cr0.0<0;1,0>:ud   0x8000:ud	\n\
+(W)		and (1|M0)               cr0.1<1>:ud   cr0.1<0;1,0>:ud   ARG(0):ud	\n\
+(W)		mov (16|M0)              r30.0<1>:ud    0x0:ud				\n\
+#if GEN_VER < 2000 // Media Block Write							\n\
+		// Y offset of the block in rows := thread group id Y			\n\
+(W)		mov (1|M0)               r30.1<1>:ud    ARG(1):ud			\n\
+		// block width [0,63] representing 1 to 64 bytes, we want dword		\n\
+(W)		mov (1|M0)               r30.2<1>:ud    0x3:ud				\n\
+		// FFTID := FFTID from R0 header					\n\
+(W)		mov (1|M0)               r30.4<1>:ud    r0.5<0;1,0>:ud			\n\
+(W)		send.dc1 (16|M0)         r31     r30      null    0x0	0x2190000	\n\
+#else	// Typed 2D Block Store								\n\
+		// Store X and Y block start (160:191 and 192:223)			\n\
+(W)		mov (2|M0)               r30.6<1>:ud    ARG(1):ud			\n\
+		// Store X and Y block size (224:231 and 232:239)			\n\
+(W)		mov (1|M0)               r30.7<1>:ud    0x3:ud				\n\
+(W)		send.tgm (16|M0)         r31     r30    null:0    0x0    0x62100003	\n\
+#endif											\n\
+		// clear the flag register						\n\
+(W)		mov (1|M0)               f0.0<1>:ud    0x0:ud				\n\
+(W)		cmp (1|M0)    (ne)f0.0   null<1>:ud     r31.0<0;1,0>:ud   ARG(2):ud	\n\
+(W&f0.0)	and (1|M0)              cr0.1<1>:ud     cr0.1<0;1,0>:ud   ARG(3):ud	\n\
+		// return to an application						\n\
+(W)		and (1|M0)               cr0.0<1>:ud   cr0.0<0;1,0>:ud   0x7FFFFFFD:ud	\n\
+	", 0x807fffff, /* leave breakpoint exception */
+	y_offset, value, 0x7fffff /* clear all exceptions */ );
+}
diff --git a/lib/gpgpu_shader.h b/lib/gpgpu_shader.h
index 255f93b4d..e4ca0be4c 100644
--- a/lib/gpgpu_shader.h
+++ b/lib/gpgpu_shader.h
@@ -21,6 +21,7 @@ struct gpgpu_shader {
 		uint32_t *code;
 		uint32_t (*instr)[4];
 	};
+	struct igt_map *labels;
 };
 
 struct iga64_template {
@@ -31,7 +32,7 @@ struct iga64_template {
 
 #pragma GCC diagnostic ignored "-Wnested-externs"
 
-void
+uint32_t
 __emit_iga64_code(struct gpgpu_shader *shdr, const struct iga64_template *tpls,
 		  int argc, uint32_t *argv);
 
@@ -56,8 +57,27 @@ void gpgpu_shader_exec(struct intel_bb *ibb,
 		       struct gpgpu_shader *sip,
 		       uint64_t ring, bool explicit_engine);
 
+void gpgpu_shader__wait(struct gpgpu_shader *shdr);
+void gpgpu_shader__breakpoint_on(struct gpgpu_shader *shdr, uint32_t cmd_no);
+void gpgpu_shader__breakpoint(struct gpgpu_shader *shdr);
+void gpgpu_shader__nop(struct gpgpu_shader *shdr);
 void gpgpu_shader__eot(struct gpgpu_shader *shdr);
+void gpgpu_shader__common_target_write(struct gpgpu_shader *shdr,
+				       uint32_t y_offset, const uint32_t value[4]);
+void gpgpu_shader__common_target_write_u32(struct gpgpu_shader *shdr,
+				     uint32_t y_offset, uint32_t value);
+void gpgpu_shader__end_system_routine(struct gpgpu_shader *shdr,
+				      bool breakpoint_suppress);
+void gpgpu_shader__end_system_routine_step_if_eq(struct gpgpu_shader *shdr,
+						 uint32_t y_offset,
+						 uint32_t value);
+void gpgpu_shader__write_aip(struct gpgpu_shader *shdr, uint32_t y_offset);
 void gpgpu_shader__write_dword(struct gpgpu_shader *shdr, uint32_t value,
 			       uint32_t y_offset);
-
+void gpgpu_shader__label(struct gpgpu_shader *shdr, int label_id);
+void gpgpu_shader__jump(struct gpgpu_shader *shdr, int label_id);
+void gpgpu_shader__jump_neq(struct gpgpu_shader *shdr, int label_id,
+			    uint32_t y_offset, uint32_t value);
+void gpgpu_shader__loop_begin(struct gpgpu_shader *shdr, int label_id);
+void gpgpu_shader__loop_end(struct gpgpu_shader *shdr, int label_id, uint32_t iter);
 #endif /* GPGPU_SHADER_H */
diff --git a/lib/iga64_generated_codes.c b/lib/iga64_generated_codes.c
index 6a08c4844..fea436dee 100644
--- a/lib/iga64_generated_codes.c
+++ b/lib/iga64_generated_codes.c
@@ -3,7 +3,95 @@
 
 #include "gpgpu_shader.h"
 
-#define MD5_SUM_IGA64_ASMS 2c503cbfbd7b3043e9a52188ae4da7a8
+#define MD5_SUM_IGA64_ASMS 61a15534954fe7c6bd0e983fbfd54c27
+
+struct iga64_template const iga64_code_end_system_routine_step_if_eq[] = {
+	{ .gen_ver = 2000, .size = 44, .code = (const uint32_t []) {
+		0x80000966, 0x80018220, 0x02008000, 0x00008000,
+		0x80000965, 0x80118220, 0x02008010, 0xc0ded000,
+		0x80100961, 0x1e054220, 0x00000000, 0x00000000,
+		0x80040061, 0x1e654220, 0x00000000, 0xc0ded001,
+		0x80000061, 0x1e754220, 0x00000000, 0x00000003,
+		0x80132031, 0x1f0c0000, 0xd0061e8c, 0x04000000,
+		0x80000061, 0x30014220, 0x00000000, 0x00000000,
+		0x80008070, 0x00018220, 0x22001f04, 0xc0ded002,
+		0x84000965, 0x80118220, 0x02008010, 0xc0ded003,
+		0x80000965, 0x80018220, 0x02008000, 0x7ffffffd,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1272, .size = 48, .code = (const uint32_t []) {
+		0x80000966, 0x80018220, 0x02008000, 0x00008000,
+		0x80000965, 0x80118220, 0x02008010, 0xc0ded000,
+		0x80100961, 0x1e054220, 0x00000000, 0x00000000,
+		0x80000061, 0x1e154220, 0x00000000, 0xc0ded001,
+		0x80000061, 0x1e254220, 0x00000000, 0x00000003,
+		0x80000061, 0x1e450220, 0x00000054, 0x00000000,
+		0x80132031, 0x1f0c0000, 0xc0001e0c, 0x02400000,
+		0x80000061, 0x30014220, 0x00000000, 0x00000000,
+		0x80008070, 0x00018220, 0x22001f04, 0xc0ded002,
+		0x84000965, 0x80118220, 0x02008010, 0xc0ded003,
+		0x80000965, 0x80018220, 0x02008000, 0x7ffffffd,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1250, .size = 52, .code = (const uint32_t []) {
+		0x80000966, 0x80018220, 0x02008000, 0x00008000,
+		0x80000965, 0x80218220, 0x02008020, 0xc0ded000,
+		0x80040961, 0x1e054220, 0x00000000, 0x00000000,
+		0x80000061, 0x1e254220, 0x00000000, 0xc0ded001,
+		0x80000061, 0x1e454220, 0x00000000, 0x00000003,
+		0x80000061, 0x1e850220, 0x000000a4, 0x00000000,
+		0x80001901, 0x00010000, 0x00000000, 0x00000000,
+		0x80044031, 0x1f0c0000, 0xc0001e0c, 0x02400000,
+		0x80000061, 0x30014220, 0x00000000, 0x00000000,
+		0x80002070, 0x00018220, 0x22001f04, 0xc0ded002,
+		0x81000965, 0x80218220, 0x02008020, 0xc0ded003,
+		0x80000965, 0x80018220, 0x02008000, 0x7ffffffd,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 48, .code = (const uint32_t []) {
+		0x80000166, 0x80018220, 0x02008000, 0x00008000,
+		0x80000165, 0x80218220, 0x02008020, 0xc0ded000,
+		0x80040161, 0x1e054220, 0x00000000, 0x00000000,
+		0x80000061, 0x1e254220, 0x00000000, 0xc0ded001,
+		0x80000061, 0x1e454220, 0x00000000, 0x00000003,
+		0x80000061, 0x1e850220, 0x000000a4, 0x00000000,
+		0x80049031, 0x1f0c0000, 0xc0001e0c, 0x02400000,
+		0x80000061, 0x30014220, 0x00000000, 0x00000000,
+		0x80002070, 0x00018220, 0x22001f04, 0xc0ded002,
+		0x81000165, 0x80218220, 0x02008020, 0xc0ded003,
+		0x80000165, 0x80018220, 0x02008000, 0x7ffffffd,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
+
+struct iga64_template const iga64_code_end_system_routine[] = {
+	{ .gen_ver = 1272, .size = 12, .code = (const uint32_t []) {
+		0x80000965, 0x80118220, 0x02008010, 0xc0ded000,
+		0x80000965, 0x80018220, 0x02008000, 0x7ffffffd,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1250, .size = 12, .code = (const uint32_t []) {
+		0x80000965, 0x80218220, 0x02008020, 0xc0ded000,
+		0x80000965, 0x80018220, 0x02008000, 0x7ffffffd,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 12, .code = (const uint32_t []) {
+		0x80000165, 0x80218220, 0x02008020, 0xc0ded000,
+		0x80000165, 0x80018220, 0x02008000, 0x7ffffffd,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
+
+struct iga64_template const iga64_code_breakpoint_suppress[] = {
+	{ .gen_ver = 1250, .size = 8, .code = (const uint32_t []) {
+		0x80000966, 0x80018220, 0x02008000, 0x00008000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 8, .code = (const uint32_t []) {
+		0x80000166, 0x80018220, 0x02008000, 0x00008000,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
 
 struct iga64_template const iga64_code_media_block_write[] = {
 	{ .gen_ver = 2000, .size = 56, .code = (const uint32_t []) {
@@ -86,6 +174,215 @@ struct iga64_template const iga64_code_media_block_write[] = {
 	}}
 };
 
+struct iga64_template const iga64_code_media_block_write_aip[] = {
+	{ .gen_ver = 2000, .size = 44, .code = (const uint32_t []) {
+		0x80000961, 0x05050220, 0x00008020, 0x00000000,
+		0x80000969, 0x02058220, 0x02000014, 0x00000002,
+		0x80000061, 0x02150220, 0x00000064, 0x00000000,
+		0x80001940, 0x02158220, 0x02000214, 0xc0ded000,
+		0x80100061, 0x04054220, 0x00000000, 0x00000000,
+		0x80041a61, 0x04550220, 0x00220205, 0x00000000,
+		0x80000061, 0x04754220, 0x00000000, 0x00000003,
+		0x80132031, 0x00000000, 0xd00e0494, 0x04000000,
+		0x80000001, 0x00010000, 0x20000000, 0x00000000,
+		0x80000001, 0x00010000, 0x30000000, 0x00000000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1272, .size = 40, .code = (const uint32_t []) {
+		0x80000961, 0x05050220, 0x00008020, 0x00000000,
+		0x80000969, 0x04058220, 0x02000014, 0x00000002,
+		0x80000061, 0x04150220, 0x00000064, 0x00000000,
+		0x80001940, 0x04158220, 0x02000414, 0xc0ded000,
+		0x80000061, 0x04254220, 0x00000000, 0x00000003,
+		0x80000061, 0x04450220, 0x00000054, 0x00000000,
+		0x80132031, 0x00000000, 0xc0000414, 0x02a00000,
+		0x80000001, 0x00010000, 0x20000000, 0x00000000,
+		0x80000001, 0x00010000, 0x30000000, 0x00000000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1250, .size = 44, .code = (const uint32_t []) {
+		0x80000961, 0x05050220, 0x00008040, 0x00000000,
+		0x80000969, 0x04058220, 0x02000024, 0x00000002,
+		0x80000061, 0x04250220, 0x000000c4, 0x00000000,
+		0x80001940, 0x04258220, 0x02000424, 0xc0ded000,
+		0x80000061, 0x04454220, 0x00000000, 0x00000003,
+		0x80000061, 0x04850220, 0x000000a4, 0x00000000,
+		0x80001901, 0x00010000, 0x00000000, 0x00000000,
+		0x80044031, 0x00000000, 0xc0000414, 0x02a00000,
+		0x80000001, 0x00010000, 0x20000000, 0x00000000,
+		0x80000001, 0x00010000, 0x30000000, 0x00000000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 40, .code = (const uint32_t []) {
+		0x80000161, 0x05050220, 0x00008040, 0x00000000,
+		0x80000169, 0x04058220, 0x02000024, 0x00000002,
+		0x80000061, 0x04250220, 0x000000c4, 0x00000000,
+		0x80000140, 0x04258220, 0x02000424, 0xc0ded000,
+		0x80000061, 0x04454220, 0x00000000, 0x00000003,
+		0x80000061, 0x04850220, 0x000000a4, 0x00000000,
+		0x80049031, 0x00000000, 0xc0000414, 0x02a00000,
+		0x80000001, 0x00010000, 0x20000000, 0x00000000,
+		0x80000001, 0x00010000, 0x30000000, 0x00000000,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
+
+struct iga64_template const iga64_code_common_target_write[] = {
+	{ .gen_ver = 2000, .size = 48, .code = (const uint32_t []) {
+		0x80100061, 0x1f054220, 0x00000000, 0x00000000,
+		0x80000061, 0x1f054220, 0x00000000, 0xc0ded001,
+		0x80000061, 0x1f154220, 0x00000000, 0xc0ded002,
+		0x80000061, 0x1f254220, 0x00000000, 0xc0ded003,
+		0x80000061, 0x1f354220, 0x00000000, 0xc0ded004,
+		0x80100061, 0x1e054220, 0x00000000, 0x00000000,
+		0x80040061, 0x1e654220, 0x00000000, 0xc0ded000,
+		0x80000061, 0x1e754220, 0x00000000, 0x0000000f,
+		0x80132031, 0x00000000, 0xd00e1e94, 0x04000000,
+		0x80000001, 0x00010000, 0x20000000, 0x00000000,
+		0x80000001, 0x00010000, 0x30000000, 0x00000000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1272, .size = 52, .code = (const uint32_t []) {
+		0x80100061, 0x1f054220, 0x00000000, 0x00000000,
+		0x80000061, 0x1f054220, 0x00000000, 0xc0ded001,
+		0x80000061, 0x1f154220, 0x00000000, 0xc0ded002,
+		0x80000061, 0x1f254220, 0x00000000, 0xc0ded003,
+		0x80000061, 0x1f354220, 0x00000000, 0xc0ded004,
+		0x80100061, 0x1e054220, 0x00000000, 0x00000000,
+		0x80000061, 0x1e154220, 0x00000000, 0xc0ded000,
+		0x80000061, 0x1e254220, 0x00000000, 0x0000000f,
+		0x80000061, 0x1e450220, 0x00000054, 0x00000000,
+		0x80132031, 0x00000000, 0xc0001e14, 0x02a00000,
+		0x80000001, 0x00010000, 0x20000000, 0x00000000,
+		0x80000001, 0x00010000, 0x30000000, 0x00000000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1250, .size = 56, .code = (const uint32_t []) {
+		0x80040061, 0x1f054220, 0x00000000, 0x00000000,
+		0x80000061, 0x1f054220, 0x00000000, 0xc0ded001,
+		0x80000061, 0x1f254220, 0x00000000, 0xc0ded002,
+		0x80000061, 0x1f454220, 0x00000000, 0xc0ded003,
+		0x80000061, 0x1f654220, 0x00000000, 0xc0ded004,
+		0x80040061, 0x1e054220, 0x00000000, 0x00000000,
+		0x80000061, 0x1e254220, 0x00000000, 0xc0ded000,
+		0x80000061, 0x1e454220, 0x00000000, 0x0000000f,
+		0x80000061, 0x1e850220, 0x000000a4, 0x00000000,
+		0x80001901, 0x00010000, 0x00000000, 0x00000000,
+		0x80044031, 0x00000000, 0xc0001e14, 0x02a00000,
+		0x80000001, 0x00010000, 0x20000000, 0x00000000,
+		0x80000001, 0x00010000, 0x30000000, 0x00000000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 52, .code = (const uint32_t []) {
+		0x80040061, 0x1f054220, 0x00000000, 0x00000000,
+		0x80000061, 0x1f054220, 0x00000000, 0xc0ded001,
+		0x80000061, 0x1f254220, 0x00000000, 0xc0ded002,
+		0x80000061, 0x1f454220, 0x00000000, 0xc0ded003,
+		0x80000061, 0x1f654220, 0x00000000, 0xc0ded004,
+		0x80040061, 0x1e054220, 0x00000000, 0x00000000,
+		0x80000061, 0x1e254220, 0x00000000, 0xc0ded000,
+		0x80000061, 0x1e454220, 0x00000000, 0x0000000f,
+		0x80000061, 0x1e850220, 0x000000a4, 0x00000000,
+		0x80049031, 0x00000000, 0xc0001e14, 0x02a00000,
+		0x80000001, 0x00010000, 0x20000000, 0x00000000,
+		0x80000001, 0x00010000, 0x30000000, 0x00000000,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
+
+struct iga64_template const iga64_code_inc_r40_jump_neq[] = {
+	{ .gen_ver = 1272, .size = 20, .code = (const uint32_t []) {
+		0x80000040, 0x28058220, 0x02002804, 0x00000001,
+		0x80000061, 0x30014220, 0x00000000, 0x00000000,
+		0x80001a70, 0x00018220, 0x22002804, 0xc0ded000,
+		0x84000020, 0x00004000, 0x00000000, 0xffffffd0,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1250, .size = 20, .code = (const uint32_t []) {
+		0x80000040, 0x28058220, 0x02002804, 0x00000001,
+		0x80000061, 0x30014220, 0x00000000, 0x00000000,
+		0x80001a70, 0x00018220, 0x22002804, 0xc0ded000,
+		0x81000020, 0x00004000, 0x00000000, 0xffffffd0,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 20, .code = (const uint32_t []) {
+		0x80000040, 0x28058220, 0x02002804, 0x00000001,
+		0x80000061, 0x30014220, 0x00000000, 0x00000000,
+		0x80000270, 0x00018220, 0x22002804, 0xc0ded000,
+		0x81000020, 0x00004000, 0x00000000, 0xffffffd0,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
+
+struct iga64_template const iga64_code_clear_r40[] = {
+	{ .gen_ver = 1250, .size = 8, .code = (const uint32_t []) {
+		0x80000061, 0x28054220, 0x00000000, 0x00000000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 8, .code = (const uint32_t []) {
+		0x80000061, 0x28054220, 0x00000000, 0x00000000,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
+
+struct iga64_template const iga64_code_jump_dw_neq[] = {
+	{ .gen_ver = 2000, .size = 32, .code = (const uint32_t []) {
+		0x80100061, 0x1e054220, 0x00000000, 0x00000000,
+		0x80040061, 0x1e654220, 0x00000000, 0xc0ded000,
+		0x80000061, 0x1e754220, 0x00000000, 0x00000003,
+		0x80132031, 0x1f0c0000, 0xd0061e8c, 0x04000000,
+		0x80000061, 0x30014220, 0x00000000, 0x00000000,
+		0x80008070, 0x00018220, 0x22001f04, 0xc0ded001,
+		0x84000020, 0x00004000, 0x00000000, 0xffffffa0,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1272, .size = 36, .code = (const uint32_t []) {
+		0x80100061, 0x1e054220, 0x00000000, 0x00000000,
+		0x80000061, 0x1e154220, 0x00000000, 0xc0ded000,
+		0x80000061, 0x1e254220, 0x00000000, 0x00000003,
+		0x80000061, 0x1e450220, 0x00000054, 0x00000000,
+		0x80132031, 0x1f0c0000, 0xc0001e0c, 0x02400000,
+		0x80000061, 0x30014220, 0x00000000, 0x00000000,
+		0x80008070, 0x00018220, 0x22001f04, 0xc0ded001,
+		0x84000020, 0x00004000, 0x00000000, 0xffffff90,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1250, .size = 40, .code = (const uint32_t []) {
+		0x80040061, 0x1e054220, 0x00000000, 0x00000000,
+		0x80000061, 0x1e254220, 0x00000000, 0xc0ded000,
+		0x80000061, 0x1e454220, 0x00000000, 0x00000003,
+		0x80000061, 0x1e850220, 0x000000a4, 0x00000000,
+		0x80001901, 0x00010000, 0x00000000, 0x00000000,
+		0x80044031, 0x1f0c0000, 0xc0001e0c, 0x02400000,
+		0x80000061, 0x30014220, 0x00000000, 0x00000000,
+		0x80002070, 0x00018220, 0x22001f04, 0xc0ded001,
+		0x81000020, 0x00004000, 0x00000000, 0xffffff80,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 36, .code = (const uint32_t []) {
+		0x80040061, 0x1e054220, 0x00000000, 0x00000000,
+		0x80000061, 0x1e254220, 0x00000000, 0xc0ded000,
+		0x80000061, 0x1e454220, 0x00000000, 0x00000003,
+		0x80000061, 0x1e850220, 0x000000a4, 0x00000000,
+		0x80049031, 0x1f0c0000, 0xc0001e0c, 0x02400000,
+		0x80000061, 0x30014220, 0x00000000, 0x00000000,
+		0x80002070, 0x00018220, 0x22001f04, 0xc0ded001,
+		0x81000020, 0x00004000, 0x00000000, 0xffffff90,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
+
+struct iga64_template const iga64_code_jump[] = {
+	{ .gen_ver = 1250, .size = 8, .code = (const uint32_t []) {
+		0x80000020, 0x00004000, 0x00000000, 0x00000000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 8, .code = (const uint32_t []) {
+		0x80000020, 0x00004000, 0x00000000, 0x00000000,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
+
 struct iga64_template const iga64_code_eot[] = {
 	{ .gen_ver = 2000, .size = 8, .code = (const uint32_t []) {
 		0x800c0061, 0x70050220, 0x00460005, 0x00000000,
@@ -110,3 +407,25 @@ struct iga64_template const iga64_code_eot[] = {
 		0x80049031, 0x00000004, 0x7020700c, 0x10000000,
 	}}
 };
+
+struct iga64_template const iga64_code_nop[] = {
+	{ .gen_ver = 1250, .size = 8, .code = (const uint32_t []) {
+		0x00000060, 0x00000000, 0x00000000, 0x00000000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 8, .code = (const uint32_t []) {
+		0x00000060, 0x00000000, 0x00000000, 0x00000000,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
+
+struct iga64_template const iga64_code_sync_host[] = {
+	{ .gen_ver = 1250, .size = 8, .code = (const uint32_t []) {
+		0x80000001, 0x00010000, 0xf0000000, 0x00000000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 8, .code = (const uint32_t []) {
+		0x80000001, 0x00010000, 0xf0000000, 0x00000000,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH i-g-t v3 07/14] lib/gpgpu_shader: Add write_on_exception template
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (5 preceding siblings ...)
  2024-08-09 12:38 ` [PATCH i-g-t v3 06/14] lib/gpgpu_shader: Extend shader building library Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-19 10:09   ` Grzegorzek, Dominik
  2024-08-09 12:38 ` [PATCH i-g-t v3 08/14] lib/gpgpu_shader: Add set/clear exception register (cr0.1) helpers Christoph Manszewski
                   ` (12 subsequent siblings)
  19 siblings, 1 reply; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Christoph Manszewski

From: Andrzej Hajda <andrzej.hajda@intel.com>

Writing a specific value to a memory location when an unexpected value is found
in the exception register allows errors to be reported from inside a shader or siplet.

Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
 lib/gpgpu_shader.c          | 56 ++++++++++++++++++++++++++
 lib/gpgpu_shader.h          |  2 +
 lib/iga64_generated_codes.c | 79 ++++++++++++++++++++++++++++++++++++-
 3 files changed, 136 insertions(+), 1 deletion(-)

diff --git a/lib/gpgpu_shader.c b/lib/gpgpu_shader.c
index 3b16db593..a98abc57e 100644
--- a/lib/gpgpu_shader.c
+++ b/lib/gpgpu_shader.c
@@ -628,6 +628,62 @@ void gpgpu_shader__write_dword(struct gpgpu_shader *shdr, uint32_t value,
 	", 2, y_offset, 3, value, value, value, value);
 }
 
+/**
+ * gpgpu_shader__write_on_exception:
+ * @shdr: shader to be modified
+ * @value: dword to be written
+ * @y_offset: write target offset within the surface in rows
+ * @mask: mask to be applied on exception register
+ * @expected: expected value of exception register with @mask applied
+ *
+ * Check whether the bits selected by @mask in the exception register (cr0.1)
+ * equal the provided value: cr0.1 & @mask == @expected. If so, fill the dword
+ * at (row, column/dword) == (tg_id_y + @y_offset, tg_id_x).
+ */
+void gpgpu_shader__write_on_exception(struct gpgpu_shader *shdr, uint32_t value,
+				      uint32_t y_offset, uint32_t mask, uint32_t expected)
+{
+	emit_iga64_code(shdr, write_on_exception, "					\n\
+	// Payload									\n\
+(W)	mov (1|M0)               r5.0<1>:ud    ARG(3):ud				\n\
+(W)	mov (1|M0)               r5.1<1>:ud    ARG(4):ud				\n\
+(W)	mov (1|M0)               r5.2<1>:ud    ARG(5):ud				\n\
+(W)	mov (1|M0)               r5.3<1>:ud    ARG(6):ud				\n\
+#if GEN_VER < 2000 // prepare Media Block Write						\n\
+	// X offset of the block in bytes := (thread group id X << ARG(0))		\n\
+(W)	shl (1|M0)               r4.0<1>:ud    r0.1<0;1,0>:ud    ARG(0):ud		\n\
+	// Y offset of the block in rows := thread group id Y				\n\
+(W)	mov (1|M0)               r4.1<1>:ud    r0.6<0;1,0>:ud				\n\
+(W)	add (1|M0)               r4.1<1>:ud    r4.1<0;1,0>:ud   ARG(1):ud		\n\
+	// block width [0,63] representing 1 to 64 bytes				\n\
+(W)	mov (1|M0)               r4.2<1>:ud    ARG(2):ud				\n\
+	// FFTID := FFTID from R0 header						\n\
+(W)	mov (1|M0)               r4.4<1>:ud    r0.5<0;1,0>:ud				\n\
+#else // prepare Typed 2D Block Store							\n\
+	// Load r2.0-3 with tg id X << ARG(0)						\n\
+(W)	shl (1|M0)               r2.0<1>:ud    r0.1<0;1,0>:ud    ARG(0):ud		\n\
+	// Load r2.4-7 with tg id Y + ARG(1):ud						\n\
+(W)	mov (1|M0)               r2.1<1>:ud    r0.6<0;1,0>:ud				\n\
+(W)	add (1|M0)               r2.1<1>:ud    r2.1<0;1,0>:ud    ARG(1):ud		\n\
+	// payload setup								\n\
+(W)	mov (16|M0)              r4.0<1>:ud    0x0:ud					\n\
+	// Store X and Y block start (160:191 and 192:223)				\n\
+(W)	mov (2|M0)               r4.5<1>:ud    r2.0<2;2,1>:ud				\n\
+	// Store X and Y block max_size (224:231 and 232:239)				\n\
+(W)	mov (1|M0)               r4.7<1>:ud    ARG(2):ud				\n\
+#endif											\n\
+	// Check if masked exception is equal to provided value and write conditionally \n\
+(W)      and (1|M0)              r3.0<1>:ud     cr0.1<0;1,0>:ud ARG(7):ud		\n\
+(W)      mov (1|M0)              f0.0<1>:ud     0x0:ud					\n\
+(W)      cmp (1|M0)     (eq)f0.0 null:ud        r3.0<0;1,0>:ud  ARG(8):ud		\n\
+#if GEN_VER < 2000 // Media Block Write							\n\
+(W&f0.0) send.dc1 (16|M0)        null     r4   src1_null 0    0x40A8000			\n\
+#else // Typed 2D Block Store								\n\
+(W&f0.0) send.tgm (16|M0)        null     r4   null:0    0    0x64000007		\n\
+#endif											\n\
+	", 2, y_offset, 3, value, value, value, value, mask, expected);
+}
+
 /**
  * gpgpu_shader__end_system_routine:
  * @shdr: shader to be modified
diff --git a/lib/gpgpu_shader.h b/lib/gpgpu_shader.h
index e4ca0be4c..76ff4989e 100644
--- a/lib/gpgpu_shader.h
+++ b/lib/gpgpu_shader.h
@@ -74,6 +74,8 @@ void gpgpu_shader__end_system_routine_step_if_eq(struct gpgpu_shader *shdr,
 void gpgpu_shader__write_aip(struct gpgpu_shader *shdr, uint32_t y_offset);
 void gpgpu_shader__write_dword(struct gpgpu_shader *shdr, uint32_t value,
 			       uint32_t y_offset);
+void gpgpu_shader__write_on_exception(struct gpgpu_shader *shdr, uint32_t value,
+			       uint32_t y_offset, uint32_t mask, uint32_t expected);
 void gpgpu_shader__label(struct gpgpu_shader *shdr, int label_id);
 void gpgpu_shader__jump(struct gpgpu_shader *shdr, int label_id);
 void gpgpu_shader__jump_neq(struct gpgpu_shader *shdr, int label_id,
diff --git a/lib/iga64_generated_codes.c b/lib/iga64_generated_codes.c
index fea436dee..71429d442 100644
--- a/lib/iga64_generated_codes.c
+++ b/lib/iga64_generated_codes.c
@@ -3,7 +3,7 @@
 
 #include "gpgpu_shader.h"
 
-#define MD5_SUM_IGA64_ASMS 61a15534954fe7c6bd0e983fbfd54c27
+#define MD5_SUM_IGA64_ASMS 88529cc180578939c0b8c4bb29da7db6
 
 struct iga64_template const iga64_code_end_system_routine_step_if_eq[] = {
 	{ .gen_ver = 2000, .size = 44, .code = (const uint32_t []) {
@@ -93,6 +93,83 @@ struct iga64_template const iga64_code_breakpoint_suppress[] = {
 	}}
 };
 
+struct iga64_template const iga64_code_write_on_exception[] = {
+	{ .gen_ver = 2000, .size = 68, .code = (const uint32_t []) {
+		0x80000061, 0x05054220, 0x00000000, 0xc0ded003,
+		0x80000061, 0x05154220, 0x00000000, 0xc0ded004,
+		0x80000061, 0x05254220, 0x00000000, 0xc0ded005,
+		0x80000061, 0x05354220, 0x00000000, 0xc0ded006,
+		0x80000069, 0x02058220, 0x02000014, 0xc0ded000,
+		0x80000061, 0x02150220, 0x00000064, 0x00000000,
+		0x80001940, 0x02158220, 0x02000214, 0xc0ded001,
+		0x80100061, 0x04054220, 0x00000000, 0x00000000,
+		0x80041a61, 0x04550220, 0x00220205, 0x00000000,
+		0x80000061, 0x04754220, 0x00000000, 0xc0ded002,
+		0x80000965, 0x03058220, 0x02008010, 0xc0ded007,
+		0x80000961, 0x30014220, 0x00000000, 0x00000000,
+		0x80001a70, 0x00018220, 0x12000304, 0xc0ded008,
+		0x84134031, 0x00000000, 0xd00e0494, 0x04000000,
+		0x80000001, 0x00010000, 0x20000000, 0x00000000,
+		0x80000001, 0x00010000, 0x30000000, 0x00000000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1272, .size = 64, .code = (const uint32_t []) {
+		0x80000061, 0x05054220, 0x00000000, 0xc0ded003,
+		0x80000061, 0x05154220, 0x00000000, 0xc0ded004,
+		0x80000061, 0x05254220, 0x00000000, 0xc0ded005,
+		0x80000061, 0x05354220, 0x00000000, 0xc0ded006,
+		0x80000069, 0x04058220, 0x02000014, 0xc0ded000,
+		0x80000061, 0x04150220, 0x00000064, 0x00000000,
+		0x80001940, 0x04158220, 0x02000414, 0xc0ded001,
+		0x80000061, 0x04254220, 0x00000000, 0xc0ded002,
+		0x80000061, 0x04450220, 0x00000054, 0x00000000,
+		0x80000965, 0x03058220, 0x02008010, 0xc0ded007,
+		0x80000961, 0x30014220, 0x00000000, 0x00000000,
+		0x80001a70, 0x00018220, 0x12000304, 0xc0ded008,
+		0x84134031, 0x00000000, 0xc0000414, 0x02a00000,
+		0x80000001, 0x00010000, 0x20000000, 0x00000000,
+		0x80000001, 0x00010000, 0x30000000, 0x00000000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1250, .size = 68, .code = (const uint32_t []) {
+		0x80000061, 0x05054220, 0x00000000, 0xc0ded003,
+		0x80000061, 0x05254220, 0x00000000, 0xc0ded004,
+		0x80000061, 0x05454220, 0x00000000, 0xc0ded005,
+		0x80000061, 0x05654220, 0x00000000, 0xc0ded006,
+		0x80000069, 0x04058220, 0x02000024, 0xc0ded000,
+		0x80000061, 0x04250220, 0x000000c4, 0x00000000,
+		0x80001940, 0x04258220, 0x02000424, 0xc0ded001,
+		0x80000061, 0x04454220, 0x00000000, 0xc0ded002,
+		0x80000061, 0x04850220, 0x000000a4, 0x00000000,
+		0x80000965, 0x03058220, 0x02008020, 0xc0ded007,
+		0x80000961, 0x30014220, 0x00000000, 0x00000000,
+		0x80001a70, 0x00018220, 0x12000304, 0xc0ded008,
+		0x80001a01, 0x00010000, 0x00000000, 0x00000000,
+		0x81044031, 0x00000000, 0xc0000414, 0x02a00000,
+		0x80000001, 0x00010000, 0x20000000, 0x00000000,
+		0x80000001, 0x00010000, 0x30000000, 0x00000000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 64, .code = (const uint32_t []) {
+		0x80000061, 0x05054220, 0x00000000, 0xc0ded003,
+		0x80000061, 0x05254220, 0x00000000, 0xc0ded004,
+		0x80000061, 0x05454220, 0x00000000, 0xc0ded005,
+		0x80000061, 0x05654220, 0x00000000, 0xc0ded006,
+		0x80000069, 0x04058220, 0x02000024, 0xc0ded000,
+		0x80000061, 0x04250220, 0x000000c4, 0x00000000,
+		0x80000140, 0x04258220, 0x02000424, 0xc0ded001,
+		0x80000061, 0x04454220, 0x00000000, 0xc0ded002,
+		0x80000061, 0x04850220, 0x000000a4, 0x00000000,
+		0x80000165, 0x03058220, 0x02008020, 0xc0ded007,
+		0x80000161, 0x30014220, 0x00000000, 0x00000000,
+		0x80000270, 0x00018220, 0x12000304, 0xc0ded008,
+		0x8104a031, 0x00000000, 0xc0000414, 0x02a00000,
+		0x80000001, 0x00010000, 0x20000000, 0x00000000,
+		0x80000001, 0x00010000, 0x30000000, 0x00000000,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
+
 struct iga64_template const iga64_code_media_block_write[] = {
 	{ .gen_ver = 2000, .size = 56, .code = (const uint32_t []) {
 		0x80000061, 0x05054220, 0x00000000, 0xc0ded003,
-- 
2.34.1



* [PATCH i-g-t v3 08/14] lib/gpgpu_shader: Add set/clear exception register (cr0.1) helpers
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (6 preceding siblings ...)
  2024-08-09 12:38 ` [PATCH i-g-t v3 07/14] lib/gpgpu_shader: Add write_on_exception template Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-09 12:38 ` [PATCH i-g-t v3 09/14] lib/intel_batchbuffer: Add helper to get pointer at specified offset Christoph Manszewski
                   ` (11 subsequent siblings)
  19 siblings, 0 replies; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Christoph Manszewski

From: Andrzej Hajda <andrzej.hajda@intel.com>

To allow enabling and handling exceptions from a shader or siplet,
proper helpers must be provided.

Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
 lib/gpgpu_shader.c          | 28 ++++++++++++++++++++++++++++
 lib/gpgpu_shader.h          |  2 ++
 lib/iga64_generated_codes.c | 32 +++++++++++++++++++++++++++++++-
 3 files changed, 61 insertions(+), 1 deletion(-)

diff --git a/lib/gpgpu_shader.c b/lib/gpgpu_shader.c
index a98abc57e..40161c52b 100644
--- a/lib/gpgpu_shader.c
+++ b/lib/gpgpu_shader.c
@@ -628,6 +628,34 @@ void gpgpu_shader__write_dword(struct gpgpu_shader *shdr, uint32_t value,
 	", 2, y_offset, 3, value, value, value, value);
 }
 
+/**
+ * gpgpu_shader__clear_exception:
+ * @shdr: shader to be modified
+ * @value: exception bits to be cleared
+ *
+ * Clear provided bits in exception register: cr0.1 &= ~value.
+ */
+void gpgpu_shader__clear_exception(struct gpgpu_shader *shdr, uint32_t value)
+{
+	emit_iga64_code(shdr, clear_exception, "		\n\
+(W)	and (1|M0) cr0.1<1>:ud cr0.1<0;1,0>:ud ARG(0):ud	\n\
+	", ~value);
+}
+
+/**
+ * gpgpu_shader__set_exception:
+ * @shdr: shader to be modified
+ * @value: exception bits to be set
+ *
+ * Set provided bits in exception register: cr0.1 |= value.
+ */
+void gpgpu_shader__set_exception(struct gpgpu_shader *shdr, uint32_t value)
+{
+	emit_iga64_code(shdr, set_exception, "		\n\
+(W)	or (1|M0) cr0.1<1>:ud cr0.1<0;1,0>:ud ARG(0):ud	\n\
+	", value);
+}
+
 /**
  * gpgpu_shader__write_on_exception:
  * @shdr: shader to be modified
diff --git a/lib/gpgpu_shader.h b/lib/gpgpu_shader.h
index 76ff4989e..0bbeae66f 100644
--- a/lib/gpgpu_shader.h
+++ b/lib/gpgpu_shader.h
@@ -66,6 +66,8 @@ void gpgpu_shader__common_target_write(struct gpgpu_shader *shdr,
 				       uint32_t y_offset, const uint32_t value[4]);
 void gpgpu_shader__common_target_write_u32(struct gpgpu_shader *shdr,
 				     uint32_t y_offset, uint32_t value);
+void gpgpu_shader__clear_exception(struct gpgpu_shader *shdr, uint32_t value);
+void gpgpu_shader__set_exception(struct gpgpu_shader *shdr, uint32_t value);
 void gpgpu_shader__end_system_routine(struct gpgpu_shader *shdr,
 				      bool breakpoint_suppress);
 void gpgpu_shader__end_system_routine_step_if_eq(struct gpgpu_shader *shdr,
diff --git a/lib/iga64_generated_codes.c b/lib/iga64_generated_codes.c
index 71429d442..2cba1b315 100644
--- a/lib/iga64_generated_codes.c
+++ b/lib/iga64_generated_codes.c
@@ -3,7 +3,7 @@
 
 #include "gpgpu_shader.h"
 
-#define MD5_SUM_IGA64_ASMS 88529cc180578939c0b8c4bb29da7db6
+#define MD5_SUM_IGA64_ASMS 6cfe013e2a99076bcdf69ecdbf4333cf
 
 struct iga64_template const iga64_code_end_system_routine_step_if_eq[] = {
 	{ .gen_ver = 2000, .size = 44, .code = (const uint32_t []) {
@@ -170,6 +170,36 @@ struct iga64_template const iga64_code_write_on_exception[] = {
 	}}
 };
 
+struct iga64_template const iga64_code_set_exception[] = {
+	{ .gen_ver = 1272, .size = 8, .code = (const uint32_t []) {
+		0x80000966, 0x80118220, 0x02008010, 0xc0ded000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1250, .size = 8, .code = (const uint32_t []) {
+		0x80000966, 0x80218220, 0x02008020, 0xc0ded000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 8, .code = (const uint32_t []) {
+		0x80000166, 0x80218220, 0x02008020, 0xc0ded000,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
+
+struct iga64_template const iga64_code_clear_exception[] = {
+	{ .gen_ver = 1272, .size = 8, .code = (const uint32_t []) {
+		0x80000965, 0x80118220, 0x02008010, 0xc0ded000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 1250, .size = 8, .code = (const uint32_t []) {
+		0x80000965, 0x80218220, 0x02008020, 0xc0ded000,
+		0x80000901, 0x00010000, 0x00000000, 0x00000000,
+	}},
+	{ .gen_ver = 0, .size = 8, .code = (const uint32_t []) {
+		0x80000165, 0x80218220, 0x02008020, 0xc0ded000,
+		0x80000101, 0x00010000, 0x00000000, 0x00000000,
+	}}
+};
+
 struct iga64_template const iga64_code_media_block_write[] = {
 	{ .gen_ver = 2000, .size = 56, .code = (const uint32_t []) {
 		0x80000061, 0x05054220, 0x00000000, 0xc0ded003,
-- 
2.34.1



* [PATCH i-g-t v3 09/14] lib/intel_batchbuffer: Add helper to get pointer at specified offset
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (7 preceding siblings ...)
  2024-08-09 12:38 ` [PATCH i-g-t v3 08/14] lib/gpgpu_shader: Add set/clear exception register (cr0.1) helpers Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-19 10:10   ` Grzegorzek, Dominik
  2024-08-09 12:38 ` [PATCH i-g-t v3 10/14] lib/gpgpu_shader: Allow enabling illegal opcode exceptions in shader Christoph Manszewski
                   ` (10 subsequent siblings)
  19 siblings, 1 reply; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Christoph Manszewski

From: Andrzej Hajda <andrzej.hajda@intel.com>

The helper will be used to access data placed in the batchbuffer.

Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
 lib/intel_batchbuffer.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index cb32206e5..7caf6c518 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -353,6 +353,11 @@ static inline uint32_t intel_bb_offset(struct intel_bb *ibb)
 	return (uint32_t) ((uint8_t *) ibb->ptr - (uint8_t *) ibb->batch);
 }
 
+static inline void *intel_bb_ptr_get(struct intel_bb *ibb, uint32_t offset)
+{
+	return ((uint8_t *) ibb->batch + offset);
+}
+
 static inline void intel_bb_ptr_set(struct intel_bb *ibb, uint32_t offset)
 {
 	ibb->ptr = (void *) ((uint8_t *) ibb->batch + offset);
-- 
2.34.1



* [PATCH i-g-t v3 10/14] lib/gpgpu_shader: Allow enabling illegal opcode exceptions in shader
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (8 preceding siblings ...)
  2024-08-09 12:38 ` [PATCH i-g-t v3 09/14] lib/intel_batchbuffer: Add helper to get pointer at specified offset Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-19 10:12   ` Grzegorzek, Dominik
  2024-08-09 12:38 ` [PATCH i-g-t v3 11/14] tests/xe_exec_sip: Extend SIP interaction testing Christoph Manszewski
                   ` (9 subsequent siblings)
  19 siblings, 1 reply; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Christoph Manszewski

From: Andrzej Hajda <andrzej.hajda@intel.com>

Illegal opcode exceptions can be enabled via the interface descriptor data
passed to the COMPUTE_WALKER instruction.

Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
 lib/gpgpu_shader.c | 4 ++++
 lib/gpgpu_shader.h | 1 +
 2 files changed, 5 insertions(+)

diff --git a/lib/gpgpu_shader.c b/lib/gpgpu_shader.c
index 40161c52b..e53c61553 100644
--- a/lib/gpgpu_shader.c
+++ b/lib/gpgpu_shader.c
@@ -103,6 +103,7 @@ __xelp_gpgpu_execfunc(struct intel_bb *ibb,
 		      struct gpgpu_shader *sip,
 		      uint64_t ring, bool explicit_engine)
 {
+	struct gen8_interface_descriptor_data *idd;
 	uint32_t interface_descriptor, sip_offset;
 	uint64_t engine;
 
@@ -113,6 +114,8 @@ __xelp_gpgpu_execfunc(struct intel_bb *ibb,
 	interface_descriptor = gen8_fill_interface_descriptor(ibb, target,
 							      shdr->instr,
 							      4 * shdr->size);
+	idd = intel_bb_ptr_get(ibb, interface_descriptor);
+	idd->desc2.illegal_opcode_exception_enable = shdr->illegal_opcode_exception_enable;
 
 	if (sip && sip->size)
 		sip_offset = fill_sip(ibb, sip->instr, 4 * sip->size);
@@ -163,6 +166,7 @@ __xehp_gpgpu_execfunc(struct intel_bb *ibb,
 
 	xehp_fill_interface_descriptor(ibb, target, shdr->instr,
 				       4 * shdr->size, &idd);
+	idd.desc2.illegal_opcode_exception_enable = shdr->illegal_opcode_exception_enable;
 
 	if (sip && sip->size)
 		sip_offset = fill_sip(ibb, sip->instr, 4 * sip->size);
diff --git a/lib/gpgpu_shader.h b/lib/gpgpu_shader.h
index 0bbeae66f..26a117a0b 100644
--- a/lib/gpgpu_shader.h
+++ b/lib/gpgpu_shader.h
@@ -22,6 +22,7 @@ struct gpgpu_shader {
 		uint32_t (*instr)[4];
 	};
 	struct igt_map *labels;
+	bool illegal_opcode_exception_enable;
 };
 
 struct iga64_template {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH i-g-t v3 11/14] tests/xe_exec_sip: Extend SIP interaction testing
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (9 preceding siblings ...)
  2024-08-09 12:38 ` [PATCH i-g-t v3 10/14] lib/gpgpu_shader: Allow enabling illegal opcode exceptions in shader Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-21  9:49   ` Zbigniew Kempczyński
  2024-08-09 12:38 ` [PATCH i-g-t v3 12/14] lib/intel_batchbuffer: Add support for long-running mode execution Christoph Manszewski
                   ` (8 subsequent siblings)
  19 siblings, 1 reply; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Christoph Manszewski, Mika Kuoppala, Karolina Stolarek

Extend the xe_exec_sip test by adding subtests that check SIP interaction
sanity with regard to resets and hardware debugging capabilities such as
breakpoints and invalid instruction exceptions.

Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Signed-off-by: Dominik Karol Piątkowski <dominik.karol.piatkowski@intel.com>
Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
---
 tests/intel/xe_exec_sip.c | 332 +++++++++++++++++++++++++++++++++++---
 1 file changed, 310 insertions(+), 22 deletions(-)

diff --git a/tests/intel/xe_exec_sip.c b/tests/intel/xe_exec_sip.c
index 4b599e7f6..0db0dc4b8 100644
--- a/tests/intel/xe_exec_sip.c
+++ b/tests/intel/xe_exec_sip.c
@@ -21,6 +21,7 @@
 #include "gpgpu_shader.h"
 #include "igt.h"
 #include "igt_sysfs.h"
+#include "xe/xe_eudebug.h"
 #include "xe/xe_ioctl.h"
 #include "xe/xe_query.h"
 
@@ -30,9 +31,29 @@
 #define COLOR_C4 0xc4
 
 #define SHADER_CANARY 0x01010101
+#define SIP_CANARY 0x02020202
 
 #define NSEC_PER_MSEC (1000 * 1000ull)
 
+#define SHADER_BREAKPOINT 0
+#define SHADER_WRITE 1
+#define SHADER_WAIT 2
+#define SHADER_INV_INSTR_DISABLED 3
+#define SHADER_INV_INSTR_THREAD_ENABLED 4
+#define SHADER_INV_INSTR_WALKER_ENABLED 5
+#define SHADER_HANG 6
+#define SIP_WRITE 7
+#define SIP_NULL 8
+#define SIP_WAIT 9
+#define SIP_HEAVY 10
+#define SIP_INV_INSTR 11
+
+#define F_SUBMIT_TWICE	(1 << 0)
+
+/* Control Register cr0.1 bits for exception handling */
+#define ILLEGAL_OPCODE_ENABLE BIT(12)
+#define ILLEGAL_OPCODE_STATUS BIT(28)
+
 static struct intel_buf *
 create_fill_buf(int fd, int width, int height, uint8_t color)
 {
@@ -52,24 +73,109 @@ create_fill_buf(int fd, int width, int height, uint8_t color)
 	return buf;
 }
 
-static struct gpgpu_shader *get_shader(int fd)
+static struct gpgpu_shader *get_shader(int fd, const int shadertype)
 {
 	static struct gpgpu_shader *shader;
+	uint32_t bad;
 
 	shader = gpgpu_shader_create(fd);
+	if (shadertype == SHADER_INV_INSTR_WALKER_ENABLED)
+		shader->illegal_opcode_exception_enable = true;
+
 	gpgpu_shader__write_dword(shader, SHADER_CANARY, 0);
+
+	switch (shadertype) {
+	case SHADER_HANG:
+		gpgpu_shader__label(shader, 0);
+		gpgpu_shader__nop(shader);
+		gpgpu_shader__jump(shader, 0);
+		break;
+	case SHADER_WAIT:
+		gpgpu_shader__wait(shader);
+		break;
+	case SHADER_WRITE:
+		break;
+	case SHADER_BREAKPOINT:
+		gpgpu_shader__nop(shader);
+		gpgpu_shader__breakpoint(shader);
+		break;
+	case SHADER_INV_INSTR_THREAD_ENABLED:
+		gpgpu_shader__set_exception(shader, ILLEGAL_OPCODE_ENABLE);
+		/* fall through */
+	case SHADER_INV_INSTR_DISABLED:
+	case SHADER_INV_INSTR_WALKER_ENABLED:
+		bad = (shadertype == SHADER_INV_INSTR_DISABLED) ? ILLEGAL_OPCODE_ENABLE : 0;
+		gpgpu_shader__write_on_exception(shader, 1, 0, ILLEGAL_OPCODE_ENABLE, bad);
+		gpgpu_shader__nop(shader);
+		gpgpu_shader__nop(shader);
+		/* modify second nop, set only opcode bits[6:0] */
+		shader->instr[shader->size / 4 - 1][0] = 0x7f;
+		/* SIP should clear exception bit */
+		bad = ILLEGAL_OPCODE_STATUS;
+		gpgpu_shader__write_on_exception(shader, 2, 0, ILLEGAL_OPCODE_STATUS, bad);
+		break;
+	}
+
 	gpgpu_shader__eot(shader);
 	return shader;
 }
 
-static uint32_t gpgpu_shader(int fd, struct intel_bb *ibb, unsigned int threads,
-			     unsigned int width, unsigned int height)
+static struct gpgpu_shader *get_sip(int fd, const int siptype,
+				    const int shadertype, unsigned int y_offset)
+{
+	static struct gpgpu_shader *sip;
+
+	if (siptype == SIP_NULL)
+		return NULL;
+
+	sip = gpgpu_shader_create(fd);
+	gpgpu_shader__write_dword(sip, SIP_CANARY, y_offset);
+
+	switch (siptype) {
+	case SIP_WRITE:
+		break;
+	case SIP_WAIT:
+		gpgpu_shader__wait(sip);
+		break;
+	case SIP_HEAVY:
+		/* Depending on the generation, the production SIP executes
+		 * between 145 and 157 instructions. It performs at most 45
+		 * data port writes and 5 data port reads. Make sure our heavy
+		 * SIP is at least twice as heavy as the production one.
+		 */
+		gpgpu_shader__loop_begin(sip, 0);
+		gpgpu_shader__write_dword(sip, 0xdeadbeef, y_offset);
+		gpgpu_shader__write_dword(sip, SIP_CANARY, y_offset);
+		gpgpu_shader__loop_end(sip, 0, 45);
+
+		gpgpu_shader__loop_begin(sip, 1);
+		gpgpu_shader__jump_neq(sip, 1, y_offset, SIP_CANARY);
+		gpgpu_shader__loop_end(sip, 1, 10);
+
+		gpgpu_shader__wait(sip);
+		break;
+	case SIP_INV_INSTR:
+		gpgpu_shader__write_on_exception(sip, 1, y_offset, ILLEGAL_OPCODE_STATUS, 0);
+		break;
+	}
+
+	gpgpu_shader__end_system_routine(sip, shadertype == SHADER_BREAKPOINT);
+	return sip;
+}
+
+static uint32_t gpgpu_shader(int fd, struct intel_bb *ibb, const int shadertype, const int siptype,
+			     unsigned int threads, unsigned int width, unsigned int height)
 {
 	struct intel_buf *buf = create_fill_buf(fd, width, height, COLOR_C4);
-	struct gpgpu_shader *shader = get_shader(fd);
+	struct gpgpu_shader *sip = get_sip(fd, siptype, shadertype, height / 2);
+	struct gpgpu_shader *shader = get_shader(fd, shadertype);
 
-	gpgpu_shader_exec(ibb, buf, 1, threads, shader, NULL, 0, 0);
+	gpgpu_shader_exec(ibb, buf, 1, threads, shader, sip, 0, 0);
+
+	if (sip)
+		gpgpu_shader_destroy(sip);
 	gpgpu_shader_destroy(shader);
+
 	return buf->handle;
 }
 
@@ -83,11 +189,11 @@ static void check_fill_buf(uint8_t *ptr, const int width, const int x,
 		     color, val, x, y);
 }
 
-static void check_buf(int fd, uint32_t handle, int width, int height,
-		      uint8_t poison_c)
+static int check_buf(int fd, uint32_t handle, int width, int height,
+		      int shadertype, int siptype, uint8_t poison_c)
 {
 	unsigned int sz = ALIGN(width * height, 4096);
-	int thread_count = 0;
+	int thread_count = 0, sip_count = 0;
 	uint32_t *ptr;
 	int i, j;
 
@@ -105,9 +211,87 @@ static void check_buf(int fd, uint32_t handle, int width, int height,
 		i = 0;
 	}
 
+	for (i = 0, j = height / 2; j < height; ++j) {
+		if (ptr[j * width / 4] == SIP_CANARY) {
+			++sip_count;
+			i = 4;
+		}
+
+		for (; i < width; i++)
+			check_fill_buf((uint8_t *)ptr, width, i, j, poison_c);
+
+		i = 0;
+	}
+
 	igt_assert(thread_count);
+	if (shadertype == SHADER_INV_INSTR_DISABLED)
+		igt_assert(!sip_count);
+	else if ((siptype != SIP_NULL && xe_eudebug_debugger_available(fd)) ||
+		 (siptype == SIP_INV_INSTR && shadertype != SHADER_INV_INSTR_DISABLED))
+		igt_assert_f(thread_count == sip_count,
+			     "Thread and SIP count mismatch, %d != %d\n",
+			     thread_count, sip_count);
+	else
+		igt_assert(sip_count == 0);
 
 	munmap(ptr, sz);
+
+	return sip_count;
+}
+
+#define USERCOREDUMP_FORMAT "usercoredumps/%d/%d"
+static char *get_latest_usercoredump(int dir)
+{
+	char tmp[256];
+	int i = 1;
+
+	do {
+		snprintf(tmp, sizeof(tmp), USERCOREDUMP_FORMAT, getpid(), i++);
+	} while (igt_sysfs_has_attr(dir, tmp));
+
+	snprintf(tmp, sizeof(tmp), USERCOREDUMP_FORMAT, getpid(), i - 2);
+	return igt_sysfs_get(dir, tmp);
+}
+
+static void check_usercoredump(int fd, int sip, int dispatched)
+{
+	int dir = igt_debugfs_dir(fd);
+	char *usercoredump, *str;
+	unsigned int before, after;
+	char match[256];
+
+	if (sip != SIP_WAIT && sip != SIP_HEAVY)
+		return;
+
+	/* XXX reinstate when offline coredumps are implemented */
+#ifndef XXX_ATTENTIONS_THROUGH_COREDUMPS
+	return;
+#endif
+	usercoredump = get_latest_usercoredump(dir);
+	igt_assert(usercoredump);
+	igt_debug("%s\n", usercoredump);
+
+	snprintf(match, sizeof(match), "PID: %d", getpid());
+	str = strstr(usercoredump, match);
+	igt_assert(str);
+
+	snprintf(match, sizeof(match), "Comm: %s", igt_test_name());
+	str = strstr(str, match);
+	igt_assert(str);
+
+	str = strstr(str, "TD_ATT");
+	igt_assert(str);
+	igt_assert_eq(sscanf(str, "TD_ATT before (%d):", &before), 1);
+	str = strstr(str + 1, "TD_ATT");
+	igt_assert_eq(sscanf(str, "TD_ATT after (%d):", &after), 1);
+
+	igt_info("attentions %d before, %d after\n", before, after);
+
+	igt_assert_eq(before, dispatched);
+	igt_assert_eq(after, dispatched);
+
+	free(usercoredump);
+	close(dir);
 }
 
 static uint64_t
@@ -128,17 +312,58 @@ xe_sysfs_get_job_timeout_ms(int fd, struct drm_xe_engine_class_instance *eci)
  * Description: check basic shader with write operation
  * Run type: BAT
  *
+ * SUBTEST: sanity-after-timeout
+ * Description: check basic shader execution after job timeout
+ *
+ * SUBTEST: wait-writesip-nodebug
+ * Description: verify that we don't enter SIP after wait with debugging disabled.
+ *
+ * SUBTEST: invalidinstr-disabled
+ * Description: Verify that we don't enter SIP after running into an invalid
+ *		instruction when exception is not enabled.
+ *
+ * SUBTEST: invalidinstr-thread-enabled
+ * Description: Verify that we enter SIP after running into an invalid instruction
+ *              when exception is enabled from thread.
+ *
+ * SUBTEST: invalidinstr-walker-enabled
+ * Description: Verify that we enter SIP after running into an invalid instruction
+ *              when exception is enabled from COMPUTE_WALKER.
+ *
+ * SUBTEST: breakpoint-writesip-nodebug
+ * Description: verify that we don't enter SIP after hitting breakpoint in shader
+ *		when debugging is disabled.
+ *
+ * SUBTEST: breakpoint-writesip
+ * Description: Test that we enter SIP after hitting breakpoint in shader.
+ *
+ * SUBTEST: breakpoint-writesip-twice
+ * Description: Test twice that we enter SIP after hitting breakpoint in shader.
+ *
+ * SUBTEST: breakpoint-waitsip
+ * Description: Test that we reset after seeing the attention without the debugger.
+ *
+ * SUBTEST: breakpoint-waitsip-heavy
+ * Description:
+ *	Test that we reset after seeing the attention from a heavy SIP that
+ *	resembles the production one, without the debugger.
  */
-static void test_sip(struct drm_xe_engine_class_instance *eci, uint32_t flags)
+static void test_sip(int shader, int sip, struct drm_xe_engine_class_instance *eci, uint32_t flags)
 {
 	unsigned int threads = 512;
 	unsigned int height = max_t(threads, HEIGHT, threads * 2);
-	uint32_t exec_queue_id, handle, vm_id;
 	unsigned int width = WIDTH;
+	struct drm_xe_ext_set_property ext = {
+		.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+		.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_EUDEBUG,
+		.value = DRM_XE_EXEC_QUEUE_EUDEBUG_FLAG_ENABLE,
+	};
 	struct timespec ts = { };
-	uint64_t timeout;
+	int done = 0;
+	uint32_t exec_queue_id, handle, vm_id;
 	struct intel_bb *ibb;
-	int fd;
+	uint64_t timeout;
+	int dispatched, fd;
 
 	igt_debug("Using %s\n", xe_engine_class_string(eci->engine_class));
 
@@ -152,19 +377,24 @@ static void test_sip(struct drm_xe_engine_class_instance *eci, uint32_t flags)
 	timeout *= NSEC_PER_MSEC;
 	timeout *= igt_run_in_simulation() ? 10 : 1;
 
-	exec_queue_id = xe_exec_queue_create(fd, vm_id, eci, 0);
-	ibb = intel_bb_create_with_context(fd, exec_queue_id, vm_id, NULL, 4096);
+	exec_queue_id = xe_exec_queue_create(fd, vm_id, eci,
+					     xe_eudebug_debugger_available(fd) ?
+					     to_user_pointer(&ext) : 0);
+	do {
+		ibb = intel_bb_create_with_context(fd, exec_queue_id, vm_id, NULL, 4096);
 
-	igt_nsec_elapsed(&ts);
-	handle = gpgpu_shader(fd, ibb, threads, width, height);
+		igt_nsec_elapsed(&ts);
+		handle = gpgpu_shader(fd, ibb, shader, sip, threads, width, height);
 
-	intel_bb_sync(ibb);
-	igt_assert_lt_u64(igt_nsec_elapsed(&ts), timeout);
+		intel_bb_sync(ibb);
+		igt_assert_lt_u64(igt_nsec_elapsed(&ts), timeout);
 
-	check_buf(fd, handle, width, height, COLOR_C4);
+		dispatched = check_buf(fd, handle, width, height, shader, sip, COLOR_C4);
+		check_usercoredump(fd, sip, dispatched);
 
-	gem_close(fd, handle);
-	intel_bb_destroy(ibb);
+		gem_close(fd, handle);
+		intel_bb_destroy(ibb);
+	} while (!done++ && (flags & F_SUBMIT_TWICE));
 
 	xe_exec_queue_destroy(fd, exec_queue_id);
 	xe_vm_destroy(fd, vm_id);
@@ -183,13 +413,71 @@ static void test_sip(struct drm_xe_engine_class_instance *eci, uint32_t flags)
 igt_main
 {
 	struct drm_xe_engine_class_instance *eci;
+	bool was_enabled;
 	int fd;
 
 	igt_fixture
 		fd = drm_open_driver(DRIVER_XE);
 
 	test_render_and_compute("sanity", fd, eci)
-		test_sip(eci, 0);
+		test_sip(SHADER_WRITE, SIP_NULL, eci, 0);
+
+	test_render_and_compute("sanity-after-timeout", fd, eci) {
+		test_sip(SHADER_HANG, SIP_NULL, eci, 0);
+
+		xe_for_each_engine(fd, eci)
+			if (eci->engine_class == DRM_XE_ENGINE_CLASS_RENDER ||
+			    eci->engine_class == DRM_XE_ENGINE_CLASS_COMPUTE)
+				test_sip(SHADER_WRITE, SIP_NULL, eci, 0);
+	}
+
+	/* Debugger disabled (TD_CTL not set) */
+	igt_subtest_group {
+		igt_fixture {
+			was_enabled = xe_eudebug_enable(fd, false);
+			igt_require(!xe_eudebug_debugger_available(fd));
+		}
+
+		test_render_and_compute("wait-writesip-nodebug", fd, eci)
+			test_sip(SHADER_WAIT, SIP_WRITE, eci, 0);
+
+		test_render_and_compute("invalidinstr-disabled", fd, eci)
+			test_sip(SHADER_INV_INSTR_DISABLED, SIP_INV_INSTR, eci, 0);
+
+		test_render_and_compute("invalidinstr-thread-enabled", fd, eci)
+			test_sip(SHADER_INV_INSTR_THREAD_ENABLED, SIP_INV_INSTR, eci, 0);
+
+		test_render_and_compute("invalidinstr-walker-enabled", fd, eci)
+			test_sip(SHADER_INV_INSTR_WALKER_ENABLED, SIP_INV_INSTR, eci, 0);
+
+		test_render_and_compute("breakpoint-writesip-nodebug", fd, eci)
+			test_sip(SHADER_BREAKPOINT, SIP_WRITE, eci, 0);
+
+		igt_fixture
+			xe_eudebug_enable(fd, was_enabled);
+	}
+
+	/* Debugger enabled (TD_CTL set) */
+	igt_subtest_group {
+		igt_fixture {
+			was_enabled = xe_eudebug_enable(fd, true);
+		}
+
+		test_render_and_compute("breakpoint-writesip", fd, eci)
+			test_sip(SHADER_BREAKPOINT, SIP_WRITE, eci, 0);
+
+		test_render_and_compute("breakpoint-writesip-twice", fd, eci)
+			test_sip(SHADER_BREAKPOINT, SIP_WRITE, eci, F_SUBMIT_TWICE);
+
+		test_render_and_compute("breakpoint-waitsip", fd, eci)
+			test_sip(SHADER_BREAKPOINT, SIP_WAIT, eci, 0);
+
+		test_render_and_compute("breakpoint-waitsip-heavy", fd, eci)
+			test_sip(SHADER_BREAKPOINT, SIP_HEAVY, eci, 0);
+
+		igt_fixture
+			xe_eudebug_enable(fd, was_enabled);
+	}
 
 	igt_fixture
 		drm_close_driver(fd);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH i-g-t v3 12/14] lib/intel_batchbuffer: Add support for long-running mode execution
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (10 preceding siblings ...)
  2024-08-09 12:38 ` [PATCH i-g-t v3 11/14] tests/xe_exec_sip: Extend SIP interaction testing Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-09 12:38 ` [PATCH i-g-t v3 13/14] tests/xe_eudebug_online: Debug client which runs workloads on EU Christoph Manszewski
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Christoph Manszewski

From: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>

To execute in long-running (LR) mode, apart from setting the
'DRM_XE_VM_CREATE_FLAG_LR_MODE' flag during vm creation, it is required
to use 'DRM_XE_SYNC_TYPE_USER_FENCE' syncs with the vm_bind and xe_exec
ioctls.

Make it possible to execute batch buffers via intel_bb_exec() in lr mode
by setting the 'lr_mode' field with the supplied setter.

Signed-off-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
 lib/intel_batchbuffer.c | 153 ++++++++++++++++++++++++++++++++++++++--
 lib/intel_batchbuffer.h |  17 +++++
 2 files changed, 165 insertions(+), 5 deletions(-)

diff --git a/lib/intel_batchbuffer.c b/lib/intel_batchbuffer.c
index 824e92831..43bf5b04b 100644
--- a/lib/intel_batchbuffer.c
+++ b/lib/intel_batchbuffer.c
@@ -992,6 +992,7 @@ __intel_bb_create(int fd, uint32_t ctx, uint32_t vm, const intel_ctx_cfg_t *cfg,
 	igt_assert(ibb->batch);
 	ibb->ptr = ibb->batch;
 	ibb->fence = -1;
+	ibb->user_fence_offset = -1;
 
 	/* Cache context configuration */
 	if (cfg) {
@@ -1455,7 +1456,7 @@ int intel_bb_sync(struct intel_bb *ibb)
 {
 	int ret;
 
-	if (ibb->fence < 0 && !ibb->engine_syncobj)
+	if (ibb->fence < 0 && !ibb->engine_syncobj && ibb->user_fence_offset < 0)
 		return 0;
 
 	if (ibb->fence >= 0) {
@@ -1464,10 +1465,28 @@ int intel_bb_sync(struct intel_bb *ibb)
 			close(ibb->fence);
 			ibb->fence = -1;
 		}
-	} else {
-		igt_assert_neq(ibb->engine_syncobj, 0);
+	} else if (ibb->engine_syncobj) {
 		ret = syncobj_wait_err(ibb->fd, &ibb->engine_syncobj,
 				       1, INT64_MAX, 0);
+	} else {
+		int64_t timeout = -1;
+		uint64_t *sync_data;
+		void *map;
+
+		igt_assert(ibb->user_fence_offset >= 0);
+
+		map = xe_bo_map(ibb->fd, ibb->handle, ibb->size);
+		sync_data = (void *)((uint8_t *)map + ibb->user_fence_offset);
+
+		ret = __xe_wait_ufence(ibb->fd, sync_data, ibb->user_fence_value,
+				       ibb->ctx ?: ibb->engine_id, &timeout);
+
+		gem_munmap(map, ibb->size);
+		ibb->user_fence_offset = -1;
+
+		/* Workload finished forcibly, but finished nonetheless */
+		if (ret == -EIO)
+			ret = 0;
 	}
 
 	return ret;
@@ -2441,6 +2460,126 @@ __xe_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
 	return 0;
 }
 
+static int
+__xe_lr_bb_exec(struct intel_bb *ibb, uint64_t flags, bool sync)
+{
+	uint32_t engine = flags & (I915_EXEC_BSD_MASK | I915_EXEC_RING_MASK);
+	uint32_t engine_id;
+#define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
+	/*
+	 * LR mode vm_bind requires to use DRM_XE_SYNC_TYPE_USER_FENCE type sync
+	 * LR mode xe_exec requires to use DRM_XE_SYNC_TYPE_USER_FENCE type sync
+	 */
+	struct drm_xe_sync syncs[2] = {
+		{ .type = DRM_XE_SYNC_TYPE_USER_FENCE,
+		  .flags = DRM_XE_SYNC_FLAG_SIGNAL,
+		  .timeline_value = USER_FENCE_VALUE
+		},
+		{ .type = DRM_XE_SYNC_TYPE_USER_FENCE,
+		  .flags = DRM_XE_SYNC_FLAG_SIGNAL,
+		  .timeline_value = USER_FENCE_VALUE
+		},
+	};
+	struct drm_xe_vm_bind_op *bind_ops;
+	struct {
+		uint64_t vm_sync;
+		uint64_t exec_sync;
+	} *sync_data;
+	uint32_t sync_offset;
+	uint64_t ibb_addr, vm_sync_addr, exec_sync_addr;
+	void *map;
+
+	igt_assert_eq(ibb->num_relocs, 0);
+	igt_assert_eq(ibb->xe_bound, false);
+
+	if (ibb->ctx) {
+		engine_id = ibb->ctx;
+	} else if (ibb->last_engine != engine) {
+		struct drm_xe_engine_class_instance inst = { };
+
+		inst.engine_instance =
+			(flags & I915_EXEC_BSD_MASK) >> I915_EXEC_BSD_SHIFT;
+
+		switch (flags & I915_EXEC_RING_MASK) {
+		case I915_EXEC_DEFAULT:
+		case I915_EXEC_BLT:
+			inst.engine_class = DRM_XE_ENGINE_CLASS_COPY;
+			break;
+		case I915_EXEC_BSD:
+			inst.engine_class = DRM_XE_ENGINE_CLASS_VIDEO_DECODE;
+			break;
+		case I915_EXEC_RENDER:
+			if (xe_has_engine_class(ibb->fd, DRM_XE_ENGINE_CLASS_RENDER))
+				inst.engine_class = DRM_XE_ENGINE_CLASS_RENDER;
+			else
+				inst.engine_class = DRM_XE_ENGINE_CLASS_COMPUTE;
+			break;
+		case I915_EXEC_VEBOX:
+			inst.engine_class = DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE;
+			break;
+		default:
+			igt_assert_f(false, "Unknown engine: %x", (uint32_t) flags);
+		}
+		igt_debug("Run on %s\n", xe_engine_class_string(inst.engine_class));
+
+		if (ibb->engine_id)
+			xe_exec_queue_destroy(ibb->fd, ibb->engine_id);
+
+		ibb->engine_id = engine_id =
+			xe_exec_queue_create(ibb->fd, ibb->vm_id, &inst, 0);
+	} else {
+		engine_id = ibb->engine_id;
+	}
+	ibb->last_engine = engine;
+
+	/* User fence add for sync: sync.addr has a quadword align limitation */
+	intel_bb_ptr_align(ibb, 8);
+	sync_offset = intel_bb_offset(ibb);
+	intel_bb_ptr_add(ibb, sizeof(*sync_data));
+
+	map = xe_bo_map(ibb->fd, ibb->handle, ibb->size);
+	memcpy(map, ibb->batch, ibb->size);
+
+	sync_data = (void *)((uint8_t *)map + sync_offset);
+	/* vm_sync userfence userspace address. */
+	vm_sync_addr = to_user_pointer(&sync_data->vm_sync);
+	ibb_addr = ibb->batch_offset;
+	/* exec_sync userfence ppgtt address. */
+	exec_sync_addr = ibb_addr + sync_offset + sizeof(uint64_t);
+	syncs[0].addr = vm_sync_addr;
+	syncs[1].addr = exec_sync_addr;
+
+	if (ibb->num_objects > 1) {
+		bind_ops = xe_alloc_bind_ops(ibb, DRM_XE_VM_BIND_OP_MAP, 0, 0);
+		xe_vm_bind_array(ibb->fd, ibb->vm_id, 0, bind_ops,
+				 ibb->num_objects, syncs, 1);
+		free(bind_ops);
+	} else {
+		igt_debug("bind: MAP\n");
+		igt_debug("  handle: %u, offset: %llx, size: %llx\n",
+			  ibb->handle, (long long)ibb->batch_offset,
+			  (long long)ibb->size);
+		xe_vm_bind_async(ibb->fd, ibb->vm_id, 0, ibb->handle, 0,
+				 ibb->batch_offset, ibb->size, syncs, 1);
+	}
+
+	/* use default vm_bind_exec_queue */
+	xe_wait_ufence(ibb->fd, &sync_data->vm_sync, USER_FENCE_VALUE, 0, -1);
+	gem_munmap(map, ibb->size);
+
+	ibb->xe_bound = true;
+	ibb->user_fence_value = USER_FENCE_VALUE;
+	ibb->user_fence_offset = sync_offset + sizeof(uint64_t);
+
+	xe_exec_sync(ibb->fd, engine_id, ibb->batch_offset, &syncs[1], 1);
+
+	if (sync)
+		intel_bb_sync(ibb);
+
+	return 0;
+}
+
+
 /*
  * __intel_bb_exec:
  * @ibb: pointer to intel_bb
@@ -2541,8 +2680,12 @@ void intel_bb_exec(struct intel_bb *ibb, uint32_t end_offset,
 
 	if (ibb->driver == INTEL_DRIVER_I915)
 		igt_assert_eq(__intel_bb_exec(ibb, end_offset, flags, sync), 0);
-	else
-		igt_assert_eq(__xe_bb_exec(ibb, flags, sync), 0);
+	else {
+		if (intel_bb_get_lr_mode(ibb))
+			igt_assert_eq(__xe_lr_bb_exec(ibb, flags, sync), 0);
+		else
+			igt_assert_eq(__xe_bb_exec(ibb, flags, sync), 0);
+	}
 }
 
 /**
diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
index 7caf6c518..051c336de 100644
--- a/lib/intel_batchbuffer.h
+++ b/lib/intel_batchbuffer.h
@@ -303,6 +303,11 @@ struct intel_bb {
 	 * is not thread-safe.
 	 */
 	int32_t refcount;
+
+	/* long running mode */
+	bool lr_mode;
+	int64_t user_fence_offset;
+	uint64_t user_fence_value;
 };
 
 struct intel_bb *
@@ -421,6 +426,18 @@ static inline uint32_t intel_bb_pxp_appid(struct intel_bb *ibb)
 	return ibb->pxp.appid;
 }
 
+static inline void intel_bb_set_lr_mode(struct intel_bb *ibb, bool lr_mode)
+{
+	igt_assert(ibb);
+	ibb->lr_mode = lr_mode;
+}
+
+static inline bool intel_bb_get_lr_mode(struct intel_bb *ibb)
+{
+	igt_assert(ibb);
+	return ibb->lr_mode;
+}
+
 struct drm_i915_gem_exec_object2 *
 intel_bb_add_object(struct intel_bb *ibb, uint32_t handle, uint64_t size,
 		    uint64_t offset, uint64_t alignment, bool write);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH i-g-t v3 13/14] tests/xe_eudebug_online: Debug client which runs workloads on EU
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (11 preceding siblings ...)
  2024-08-09 12:38 ` [PATCH i-g-t v3 12/14] lib/intel_batchbuffer: Add support for long-running mode execution Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-09 14:38   ` Kamil Konieczny
  2024-08-19  9:58   ` Grzegorzek, Dominik
  2024-08-09 12:38 ` [PATCH i-g-t v3 14/14] tests/xe_live_ktest: Add xe_eudebug live test Christoph Manszewski
                   ` (6 subsequent siblings)
  19 siblings, 2 replies; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Christoph Manszewski, Karolina Stolarek

From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>

For typical debugging under gdb one can identify two main use cases:
accessing and manipulating resources created by the application, and
manipulating thread execution (interrupting and setting breakpoints).

This test adds coverage for the latter by checking that:
- EU workloads that hit an instruction with the breakpoint bit set will
  halt execution, and the debugger will report this via attention events,
- the debugger is able to interrupt workload execution by issuing a
  'interrupt_all' ioctl call,
- the debugger is able to resume selected workloads that are stopped.

Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Signed-off-by: Dominik Karol Piątkowski <dominik.karol.piatkowski@intel.com>
Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
Signed-off-by: Kolanupaka Naveena <kolanupaka.naveena@intel.com>
---
 tests/intel/xe_eudebug_online.c | 2203 +++++++++++++++++++++++++++++++
 tests/meson.build               |    1 +
 2 files changed, 2204 insertions(+)
 create mode 100644 tests/intel/xe_eudebug_online.c

diff --git a/tests/intel/xe_eudebug_online.c b/tests/intel/xe_eudebug_online.c
new file mode 100644
index 000000000..1c8ac67f1
--- /dev/null
+++ b/tests/intel/xe_eudebug_online.c
@@ -0,0 +1,2203 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+/**
+ * TEST: Tests for eudebug online functionality
+ * Category: Core
+ * Mega feature: EUdebug
+ * Sub-category: EUdebug tests
+ * Functionality: eu kernel debug
+ * Test category: functionality test
+ */
+
+#include "xe/xe_eudebug.h"
+#include "xe/xe_ioctl.h"
+#include "xe/xe_query.h"
+#include "igt.h"
+#include "intel_pat.h"
+#include "intel_mocs.h"
+#include "gpgpu_shader.h"
+
+#define SHADER_NOP			(0 << 0)
+#define SHADER_BREAKPOINT		(1 << 0)
+#define SHADER_LOOP			(1 << 1)
+#define SHADER_SINGLE_STEP		(1 << 2)
+#define SIP_SINGLE_STEP			(1 << 3)
+#define DISABLE_DEBUG_MODE		(1 << 4)
+#define SHADER_N_NOOP_BREAKPOINT	(1 << 5)
+#define SHADER_CACHING_SRAM		(1 << 6)
+#define SHADER_CACHING_VRAM		(1 << 7)
+#define SHADER_MIN_THREADS		(1 << 8)
+#define DO_NOT_EXPECT_CANARIES		(1 << 9)
+#define TRIGGER_RESUME_SINGLE_WALK	(1 << 25)
+#define TRIGGER_RESUME_PARALLEL_WALK	(1 << 26)
+#define TRIGGER_RECONNECT		(1 << 27)
+#define TRIGGER_RESUME_SET_BP		(1 << 28)
+#define TRIGGER_RESUME_DELAYED		(1 << 29)
+#define TRIGGER_RESUME_DSS		(1 << 30)
+#define TRIGGER_RESUME_ONE		(1 << 31)
+
+#define DEBUGGER_REATTACHED	1
+
+#define SHADER_LOOP_N		3
+#define SINGLE_STEP_COUNT	16
+#define STEERING_SINGLE_STEP	0
+#define STEERING_CONTINUE	0x00c0ffee
+#define STEERING_END_LOOP	0xdeadca11
+
+#define CACHING_INIT_VALUE	0xcafe0000
+#define CACHING_POISON_VALUE	0xcafedead
+#define CACHING_VALUE(n)	(CACHING_INIT_VALUE + n)
+
+#define SHADER_CANARY 0x01010101
+
+#define WALKER_X_DIM		4
+#define WALKER_ALIGNMENT	16
+#define SIMD_SIZE		16
+
+#define STARTUP_TIMEOUT_MS	3000
+#define WORKLOAD_DELAY_US	(5000 * 1000)
+
+#define PAGE_SIZE 4096
+
+struct dim_t {
+	uint32_t x;
+	uint32_t y;
+	uint32_t alignment;
+};
+
+static struct dim_t walker_dimensions(int threads)
+{
+	uint32_t x_dim = min_t(x_dim, threads, WALKER_X_DIM);
+	struct dim_t ret = {
+		.x = x_dim,
+		.y = threads / x_dim,
+		.alignment = WALKER_ALIGNMENT
+	};
+
+	return ret;
+}
+
+static struct dim_t surface_dimensions(int threads)
+{
+	struct dim_t ret = walker_dimensions(threads);
+
+	ret.y = max_t(ret.y, threads / ret.x, 4);
+	ret.x *= SIMD_SIZE;
+	ret.alignment *= SIMD_SIZE;
+
+	return ret;
+}
+
+static uint32_t steering_offset(int threads)
+{
+	struct dim_t w = walker_dimensions(threads);
+
+	return ALIGN(w.x, w.alignment) * w.y * 4;
+}
+
+static struct intel_buf *create_uc_buf(int fd, int width, int height)
+{
+	struct intel_buf *buf;
+
+	buf = intel_buf_create_full(buf_ops_create(fd), 0, width/4, height,
+				    32, 0, I915_TILING_NONE, 0, 0, 0,
+				    vram_if_possible(fd, 0),
+				    DEFAULT_PAT_INDEX, DEFAULT_MOCS_INDEX);
+
+	return buf;
+}
+
+static int get_number_of_threads(uint64_t flags)
+{
+	if (flags & SHADER_MIN_THREADS)
+		return 16;
+
+	if (flags & (TRIGGER_RESUME_ONE | TRIGGER_RESUME_SINGLE_WALK |
+		     TRIGGER_RESUME_PARALLEL_WALK | SHADER_CACHING_SRAM | SHADER_CACHING_VRAM))
+		return 32;
+
+	return 512;
+}
+
+static int caching_get_instruction_count(int fd, uint32_t s_dim__x, int flags)
+{
+	uint64_t memory;
+
+	igt_assert((flags & SHADER_CACHING_SRAM) || (flags & SHADER_CACHING_VRAM));
+
+	if (flags & SHADER_CACHING_SRAM)
+		memory = system_memory(fd);
+	else
+		memory = vram_memory(fd, 0);
+
+	/* each instruction writes to given y offset */
+	return (2 * xe_min_page_size(fd, memory)) / s_dim__x;
+}
+
+static struct gpgpu_shader *get_shader(int fd, const unsigned int flags)
+{
+	struct dim_t w_dim = walker_dimensions(get_number_of_threads(flags));
+	struct dim_t s_dim = surface_dimensions(get_number_of_threads(flags));
+	static struct gpgpu_shader *shader;
+
+	shader = gpgpu_shader_create(fd);
+
+	gpgpu_shader__write_dword(shader, SHADER_CANARY, 0);
+	if (flags & SHADER_BREAKPOINT) {
+		gpgpu_shader__nop(shader);
+		gpgpu_shader__breakpoint(shader);
+	} else if (flags & SHADER_LOOP) {
+		gpgpu_shader__label(shader, 0);
+		gpgpu_shader__write_dword(shader, SHADER_CANARY, 0);
+		gpgpu_shader__jump_neq(shader, 0, w_dim.y, STEERING_END_LOOP);
+		gpgpu_shader__write_dword(shader, SHADER_CANARY, 0);
+	} else if (flags & SHADER_SINGLE_STEP) {
+		gpgpu_shader__nop(shader);
+		gpgpu_shader__breakpoint(shader);
+		for (int i = 0; i < SINGLE_STEP_COUNT; i++)
+			gpgpu_shader__nop(shader);
+	} else if (flags & SHADER_N_NOOP_BREAKPOINT) {
+		for (int i = 0; i < SHADER_LOOP_N; i++) {
+			gpgpu_shader__nop(shader);
+			gpgpu_shader__breakpoint(shader);
+		}
+	} else if ((flags & SHADER_CACHING_SRAM) || (flags & SHADER_CACHING_VRAM)) {
+		gpgpu_shader__nop(shader);
+		gpgpu_shader__breakpoint(shader);
+		for (int i = 0; i < caching_get_instruction_count(fd, s_dim.x, flags); i++)
+			gpgpu_shader__common_target_write_u32(shader, s_dim.y + i, CACHING_VALUE(i));
+		gpgpu_shader__nop(shader);
+		gpgpu_shader__breakpoint(shader);
+	}
+
+	gpgpu_shader__eot(shader);
+	return shader;
+}
+
+static struct gpgpu_shader *get_sip(int fd, const unsigned int flags)
+{
+	struct dim_t w_dim = walker_dimensions(get_number_of_threads(flags));
+	static struct gpgpu_shader *sip;
+
+	sip = gpgpu_shader_create(fd);
+	gpgpu_shader__write_aip(sip, 0);
+
+	gpgpu_shader__wait(sip);
+	if (flags & SIP_SINGLE_STEP)
+		gpgpu_shader__end_system_routine_step_if_eq(sip, w_dim.y, 0);
+	else
+		gpgpu_shader__end_system_routine(sip, true);
+	return sip;
+}
+
+static int count_set_bits(void *ptr, size_t size)
+{
+	uint8_t *p = ptr;
+	int count = 0;
+	int i, j;
+
+	for (i = 0; i < size; i++)
+		for (j = 0; j < 8; j++)
+			count += !!(p[i] & (1 << j));
+
+	return count;
+}
+
+static int count_canaries_eq(uint32_t *ptr, struct dim_t w_dim, uint32_t value)
+{
+	int count = 0;
+	int x, y;
+
+	for (x = 0; x < w_dim.x; x++)
+		for (y = 0; y < w_dim.y; y++)
+			if (READ_ONCE(ptr[x + ALIGN(w_dim.x, w_dim.alignment) * y]) == value)
+				count++;
+
+	return count;
+}
+
+static int count_canaries_neq(uint32_t *ptr, struct dim_t w_dim, uint32_t value)
+{
+	return w_dim.x * w_dim.y - count_canaries_eq(ptr, w_dim, value);
+}
+
+static const char *td_ctl_cmd_to_str(uint32_t cmd)
+{
+	switch (cmd) {
+	case DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL:
+		return "interrupt all";
+	case DRM_XE_EUDEBUG_EU_CONTROL_CMD_STOPPED:
+		return "stopped";
+	case DRM_XE_EUDEBUG_EU_CONTROL_CMD_RESUME:
+		return "resume";
+	default:
+		return "unknown command";
+	}
+}
+
+static int __eu_ctl(int debugfd, uint64_t client,
+		    uint64_t exec_queue, uint64_t lrc,
+		    uint8_t *bitmask, uint32_t *bitmask_size,
+		    uint32_t cmd, uint64_t *seqno)
+{
+	struct drm_xe_eudebug_eu_control control = {
+		.client_handle = lower_32_bits(client),
+		.exec_queue_handle = exec_queue,
+		.lrc_handle = lrc,
+		.cmd = cmd,
+		.bitmask_ptr = to_user_pointer(bitmask),
+	};
+	int ret;
+
+	if (bitmask_size)
+		control.bitmask_size = *bitmask_size;
+
+	ret = igt_ioctl(debugfd, DRM_XE_EUDEBUG_IOCTL_EU_CONTROL, &control);
+
+	if (ret < 0)
+		return -errno;
+
+	igt_debug("EU CONTROL[%llu]: %s\n", control.seqno, td_ctl_cmd_to_str(cmd));
+
+	if (bitmask_size)
+		*bitmask_size = control.bitmask_size;
+
+	if (seqno)
+		*seqno = control.seqno;
+
+	return 0;
+}
+
+static uint64_t eu_ctl(int debugfd, uint64_t client,
+		       uint64_t exec_queue, uint64_t lrc,
+		       uint8_t *bitmask, uint32_t *bitmask_size, uint32_t cmd)
+{
+	uint64_t seqno;
+
+	igt_assert_eq(__eu_ctl(debugfd, client, exec_queue, lrc, bitmask,
+			       bitmask_size, cmd, &seqno), 0);
+
+	return seqno;
+}
+
+static bool intel_gen_needs_resume_wa(int fd)
+{
+	const uint32_t id = intel_get_drm_devid(fd);
+
+	return intel_gen(id) == 12 && intel_graphics_ver(id) < IP_VER(12, 55);
+}
+
+static uint64_t eu_ctl_resume(int fd, int debugfd, uint64_t client,
+			      uint64_t exec_queue, uint64_t lrc,
+			      uint8_t *bitmask, uint32_t bitmask_size)
+{
+	int i;
+
+	/* XXX: WA for hsd: 14011332042 */
+	if (intel_gen_needs_resume_wa(fd)) {
+		uint32_t *att_reg_half = (uint32_t *)bitmask;
+
+		for (i = 0; i < bitmask_size / sizeof(uint32_t); i += 2) {
+			att_reg_half[i] |= att_reg_half[i + 1];
+			att_reg_half[i + 1] |= att_reg_half[i];
+		}
+	}
+
+	return eu_ctl(debugfd, client, exec_queue, lrc, bitmask, &bitmask_size,
+		      DRM_XE_EUDEBUG_EU_CONTROL_CMD_RESUME);
+}
+
+static inline uint64_t eu_ctl_stopped(int debugfd, uint64_t client,
+				      uint64_t exec_queue, uint64_t lrc,
+				      uint8_t *bitmask, uint32_t *bitmask_size)
+{
+	return eu_ctl(debugfd, client, exec_queue, lrc, bitmask, bitmask_size,
+		      DRM_XE_EUDEBUG_EU_CONTROL_CMD_STOPPED);
+}
+
+static inline uint64_t eu_ctl_interrupt_all(int debugfd, uint64_t client,
+					    uint64_t exec_queue, uint64_t lrc)
+{
+	return eu_ctl(debugfd, client, exec_queue, lrc, NULL, NULL,
+		      DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL);
+}
+
+struct online_debug_data {
+	pthread_mutex_t mutex;
+	/* client in */
+	struct drm_xe_engine_class_instance hwe;
+	/* client out */
+	int threads_count;
+	/* debugger internals */
+	uint64_t client_handle;
+	uint64_t exec_queue_handle;
+	uint64_t lrc_handle;
+	uint64_t target_offset;
+	size_t target_size;
+	uint64_t bb_offset;
+	size_t bb_size;
+	int vm_fd;
+	uint32_t first_aip;
+	uint64_t *aips_offset_table;
+	uint32_t steps_done;
+	uint8_t *single_step_bitmask;
+	int stepped_threads_count;
+	struct timespec exception_arrived;
+	int last_eu_control_seqno;
+	struct drm_xe_eudebug_event *exception_event;
+};
+
+static struct online_debug_data *
+online_debug_data_create(struct drm_xe_engine_class_instance *hwe)
+{
+	struct online_debug_data *data;
+
+	data = mmap(0, ALIGN(sizeof(*data), PAGE_SIZE),
+		    PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
+	igt_assert(data != MAP_FAILED);
+	memcpy(&data->hwe, hwe, sizeof(*hwe));
+	pthread_mutex_init(&data->mutex, NULL);
+	data->client_handle = -1ULL;
+	data->exec_queue_handle = -1ULL;
+	data->lrc_handle = -1ULL;
+	data->vm_fd = -1;
+	data->stepped_threads_count = -1;
+
+	return data;
+}
+
+static void online_debug_data_destroy(struct online_debug_data *data)
+{
+	free(data->aips_offset_table);
+	munmap(data, ALIGN(sizeof(*data), PAGE_SIZE));
+}
+
+static void eu_attention_debug_trigger(struct xe_eudebug_debugger *d,
+					struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
+	uint32_t *ptr = (uint32_t *) att->bitmask;
+
+	igt_debug("EVENT[%llu] eu-attention; threads=%d "
+		 "client[%llu], exec_queue[%llu], lrc[%llu], bitmask_size[%d]\n",
+		 att->base.seqno, count_set_bits(att->bitmask, att->bitmask_size),
+				att->client_handle, att->exec_queue_handle,
+				att->lrc_handle, att->bitmask_size);
+
+	for (uint32_t i = 0; i < att->bitmask_size / 4; i += 2)
+		igt_debug("bitmask[%d] = 0x%08x%08x\n", i / 2, ptr[i], ptr[i + 1]);
+}
+
+static void eu_attention_reset_trigger(struct xe_eudebug_debugger *d,
+					struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
+	uint32_t *ptr = (uint32_t *) att->bitmask;
+	struct online_debug_data *data = d->ptr;
+
+	igt_debug("EVENT[%llu] eu-attention with reset; threads=%d "
+		 "client[%llu], exec_queue[%llu], lrc[%llu], bitmask_size[%d]\n",
+		 att->base.seqno, count_set_bits(att->bitmask, att->bitmask_size),
+				att->client_handle, att->exec_queue_handle,
+				att->lrc_handle, att->bitmask_size);
+
+	for (uint32_t i = 0; i < att->bitmask_size / 4; i += 2)
+		igt_debug("bitmask[%d] = 0x%08x%08x\n", i / 2, ptr[i], ptr[i + 1]);
+
+	xe_force_gt_reset_async(d->master_fd, data->hwe.gt_id);
+}
+
+static void copy_first_bit(uint8_t *dst, uint8_t *src, int size)
+{
+	bool found = false;
+	int i, j;
+
+	for (i = 0; i < size; i++) {
+		if (found) {
+			dst[i] = 0;
+		} else {
+			uint32_t tmp = src[i]; /* in case dst == src */
+
+			for (j = 0; j < 8; j++) {
+				dst[i] = tmp & (1 << j);
+				if (dst[i]) {
+					found = true;
+					break;
+				}
+			}
+		}
+	}
+}
+
+static void copy_nth_bit(uint8_t *dst, uint8_t *src, int size, int n)
+{
+	int count = 0;
+
+	for (int i = 0; i < size; i++) {
+		uint32_t tmp = src[i];
+		for (int j = 7; j >= 0; j--) {
+			if (tmp & (1 << j)) {
+				count++;
+				if (count == n)
+					dst[i] |= (1 << j);
+				else
+					dst[i] &= ~(1 << j);
+			} else
+				dst[i] &= ~(1 << j);
+		}
+	}
+}
+
+/*
+ * Searches for the first instruction of the kernel. It relies on the
+ * assumption that the shader kernel is placed before the sip within the bb.
+ */
+static uint32_t find_kernel_in_bb(struct gpgpu_shader *kernel,
+				  struct online_debug_data *data)
+{
+	uint32_t *p = kernel->code;
+	size_t sz = 4 * sizeof(uint32_t);
+	uint32_t buf[4];
+	int i;
+
+	for (i = 0; i < data->bb_size; i += sz) {
+		igt_assert_eq(pread(data->vm_fd, &buf, sz, data->bb_offset + i), sz);
+
+		if (memcmp(p, buf, sz) == 0)
+			break;
+	}
+
+	igt_assert(i < data->bb_size);
+
+	return i;
+}
+
+static void set_breakpoint_once(struct xe_eudebug_debugger *d,
+				struct online_debug_data *data)
+{
+	const uint32_t breakpoint_bit = 1 << 30;
+	size_t sz = sizeof(uint32_t);
+	struct gpgpu_shader *kernel;
+	uint32_t aip;
+
+	kernel = get_shader(d->master_fd, d->flags);
+
+	if (data->first_aip) {
+		uint32_t expected = find_kernel_in_bb(kernel, data) + kernel->size * 4 - 0x10;
+
+		igt_assert_eq(pread(data->vm_fd, &aip, sz, data->target_offset), sz);
+		igt_assert_eq_u32(aip, expected);
+	} else {
+		uint32_t instr_usdw;
+
+		igt_assert(data->vm_fd != -1);
+		igt_assert(data->target_size != 0);
+		igt_assert(data->bb_size != 0);
+
+		igt_assert_eq(pread(data->vm_fd, &aip, sz, data->target_offset), sz);
+		data->first_aip = aip;
+
+		aip = find_kernel_in_bb(kernel, data);
+
+		/* set breakpoint on last instruction */
+		aip += kernel->size * 4 - 0x10;
+		igt_assert_eq(pread(data->vm_fd, &instr_usdw, sz,
+				    data->bb_offset + aip), sz);
+		instr_usdw |= breakpoint_bit;
+		igt_assert_eq(pwrite(data->vm_fd, &instr_usdw, sz,
+				     data->bb_offset + aip), sz);
+	}
+
+	gpgpu_shader_destroy(kernel);
+}
+
+static void get_aips_offset_table(struct online_debug_data *data, int threads)
+{
+	size_t sz = sizeof(uint32_t);
+	uint32_t aip;
+	uint32_t first_aip;
+	int table_index = 0;
+
+	if (data->aips_offset_table)
+		return;
+
+	data->aips_offset_table = malloc(threads * sizeof(uint64_t));
+	igt_assert(data->aips_offset_table);
+
+	igt_assert_eq(pread(data->vm_fd, &first_aip, sz, data->target_offset), sz);
+	data->first_aip = first_aip;
+	data->aips_offset_table[table_index++] = 0;
+
+	fsync(data->vm_fd);
+	for (int i = 1; i < data->target_size; i++) {
+		igt_assert_eq(pread(data->vm_fd, &aip, sz, data->target_offset + i), sz);
+		if (aip == first_aip)
+			data->aips_offset_table[table_index++] = i;
+	}
+
+	igt_assert_eq(threads, table_index);
+
+	igt_debug("AIPs offset table:\n");
+	for (int i = 0; i < threads; i++)
+		igt_debug("%lx\n", data->aips_offset_table[i]);
+}
+
+static int get_stepped_threads_count(struct online_debug_data *data, int threads)
+{
+	int count = 0;
+	size_t sz = sizeof(uint32_t);
+	uint32_t aip;
+
+	fsync(data->vm_fd);
+	for (int i = 0; i < threads; i++) {
+		igt_assert_eq(pread(data->vm_fd, &aip, sz,
+				    data->target_offset + data->aips_offset_table[i]), sz);
+		if (aip != data->first_aip) {
+			igt_assert(aip == data->first_aip + 0x10);
+			count++;
+		}
+	}
+
+	return count;
+}
+
+static void save_first_exception_trigger(struct xe_eudebug_debugger *d,
+					 struct drm_xe_eudebug_event *e)
+{
+	struct online_debug_data *data = d->ptr;
+
+	pthread_mutex_lock(&data->mutex);
+	if (!data->exception_event) {
+		igt_gettime(&data->exception_arrived);
+		data->exception_event = igt_memdup(e, e->len);
+	}
+	pthread_mutex_unlock(&data->mutex);
+}
+
+#define MAX_PREEMPT_TIMEOUT 10ull
+static int is_client_resumed;
+static void eu_attention_resume_trigger(struct xe_eudebug_debugger *d,
+					struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
+	struct online_debug_data *data = d->ptr;
+	uint32_t bitmask_size = att->bitmask_size;
+	uint8_t *bitmask;
+	int i;
+
+	if (data->last_eu_control_seqno > att->base.seqno)
+		return;
+
+	bitmask = calloc(1, att->bitmask_size);
+
+	eu_ctl_stopped(d->fd, att->client_handle, att->exec_queue_handle,
+		       att->lrc_handle, bitmask, &bitmask_size);
+	igt_assert(bitmask_size == att->bitmask_size);
+	igt_assert(memcmp(bitmask, att->bitmask, att->bitmask_size) == 0);
+
+	pthread_mutex_lock(&data->mutex);
+	if (igt_nsec_elapsed(&data->exception_arrived) < (MAX_PREEMPT_TIMEOUT + 1) * NSEC_PER_SEC &&
+	    d->flags & TRIGGER_RESUME_DELAYED) {
+		pthread_mutex_unlock(&data->mutex);
+		free(bitmask);
+		return;
+	} else if (d->flags & TRIGGER_RESUME_ONE) {
+		copy_first_bit(bitmask, bitmask, bitmask_size);
+	} else if (d->flags & TRIGGER_RESUME_DSS) {
+		uint64_t *event = (uint64_t *)att->bitmask;
+		uint64_t *resume = (uint64_t *)bitmask;
+
+		memset(bitmask, 0, bitmask_size);
+		for (i = 0; i < att->bitmask_size / sizeof(uint64_t); i++) {
+			if (!event[i])
+				continue;
+
+			resume[i] = event[i];
+			break;
+		}
+	} else if (d->flags & TRIGGER_RESUME_SET_BP) {
+		set_breakpoint_once(d, data);
+	}
+
+	if (d->flags & SHADER_LOOP) {
+		uint32_t threads = get_number_of_threads(d->flags);
+		uint32_t val = STEERING_END_LOOP;
+
+		igt_assert_eq(pwrite(data->vm_fd, &val, sizeof(uint32_t),
+				     data->target_offset + steering_offset(threads)),
+			      sizeof(uint32_t));
+		fsync(data->vm_fd);
+	}
+	pthread_mutex_unlock(&data->mutex);
+
+	data->last_eu_control_seqno = eu_ctl_resume(d->master_fd, d->fd, att->client_handle,
+						    att->exec_queue_handle, att->lrc_handle,
+						    bitmask, att->bitmask_size);
+
+	is_client_resumed = 1;
+	free(bitmask);
+}
+
+static void eu_attention_resume_single_step_trigger(struct xe_eudebug_debugger *d,
+						    struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
+	struct online_debug_data *data = d->ptr;
+	const int threads = get_number_of_threads(d->flags);
+	uint32_t val;
+	size_t sz = sizeof(uint32_t);
+
+	get_aips_offset_table(data, threads);
+
+	if (d->flags & TRIGGER_RESUME_PARALLEL_WALK) {
+		if (data->stepped_threads_count != -1)
+			if (data->steps_done < SINGLE_STEP_COUNT) {
+				int stepped_threads_count_after_resume =
+						get_stepped_threads_count(data, threads);
+				igt_debug("Stepped threads after: %d\n",
+					  stepped_threads_count_after_resume);
+
+				if (stepped_threads_count_after_resume == threads) {
+					data->first_aip += 0x10;
+					data->steps_done++;
+				}
+
+				igt_debug("Shader steps: %d\n", data->steps_done);
+				igt_assert(data->stepped_threads_count == 0);
+				igt_assert(stepped_threads_count_after_resume == threads);
+			}
+
+		if (data->steps_done < SINGLE_STEP_COUNT) {
+			data->stepped_threads_count = get_stepped_threads_count(data, threads);
+			igt_debug("Stepped threads before: %d\n", data->stepped_threads_count);
+		}
+
+		val = data->steps_done < SINGLE_STEP_COUNT ? STEERING_SINGLE_STEP :
+							     STEERING_CONTINUE;
+	} else if (d->flags & TRIGGER_RESUME_SINGLE_WALK) {
+		if (data->stepped_threads_count != -1)
+			if (data->steps_done < 2) {
+				int stepped_threads_count_after_resume =
+						get_stepped_threads_count(data, threads);
+				igt_debug("Stepped threads after: %d\n",
+					  stepped_threads_count_after_resume);
+
+				if (stepped_threads_count_after_resume == threads) {
+					data->first_aip += 0x10;
+					data->steps_done++;
+					free(data->single_step_bitmask);
+					data->single_step_bitmask = 0;
+				}
+
+				igt_debug("Shader steps: %d\n", data->steps_done);
+				igt_assert(data->stepped_threads_count +
+					   (intel_gen_needs_resume_wa(d->master_fd) ? 2 : 1) ==
+					   stepped_threads_count_after_resume);
+			}
+
+		if (data->steps_done < 2) {
+			data->stepped_threads_count = get_stepped_threads_count(data, threads);
+			igt_debug("Stepped threads before: %d\n", data->stepped_threads_count);
+			if (intel_gen_needs_resume_wa(d->master_fd)) {
+				if (!data->single_step_bitmask) {
+					data->single_step_bitmask = malloc(att->bitmask_size *
+									   sizeof(uint8_t));
+					igt_assert(data->single_step_bitmask);
+					memcpy(data->single_step_bitmask, att->bitmask,
+					       att->bitmask_size);
+				}
+
+				copy_first_bit(att->bitmask, data->single_step_bitmask,
+					       att->bitmask_size);
+			} else
+				copy_nth_bit(att->bitmask, att->bitmask, att->bitmask_size,
+					     data->stepped_threads_count + 1);
+		}
+
+		val = data->steps_done < 2 ? STEERING_SINGLE_STEP : STEERING_CONTINUE;
+	}
+
+	igt_assert_eq(pwrite(data->vm_fd, &val, sz,
+			     data->target_offset + steering_offset(threads)), sz);
+	fsync(data->vm_fd);
+
+	eu_ctl_resume(d->master_fd, d->fd, att->client_handle,
+		      att->exec_queue_handle, att->lrc_handle,
+		      att->bitmask, att->bitmask_size);
+
+	if (data->single_step_bitmask)
+		for (int i = 0; i < att->bitmask_size; i++)
+			data->single_step_bitmask[i] &= ~att->bitmask[i];
+}
+
+static void open_trigger(struct xe_eudebug_debugger *d,
+			 struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_client *client = (void *)e;
+	struct online_debug_data *data = d->ptr;
+
+	if (e->flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
+		return;
+
+	pthread_mutex_lock(&data->mutex);
+	data->client_handle = client->client_handle;
+	pthread_mutex_unlock(&data->mutex);
+}
+
+static void exec_queue_trigger(struct xe_eudebug_debugger *d,
+			       struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_exec_queue *eq = (void *)e;
+	struct online_debug_data *data = d->ptr;
+
+	if (e->flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
+		return;
+
+	pthread_mutex_lock(&data->mutex);
+	data->exec_queue_handle = eq->exec_queue_handle;
+	data->lrc_handle = eq->lrc_handle[0];
+	pthread_mutex_unlock(&data->mutex);
+}
+
+static void vm_open_trigger(struct xe_eudebug_debugger *d,
+			    struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_vm *vm = (void *)e;
+	struct online_debug_data *data = d->ptr;
+	struct drm_xe_eudebug_vm_open vo = {
+		.client_handle = vm->client_handle,
+		.vm_handle = vm->vm_handle,
+	};
+	int fd;
+
+	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
+		fd = igt_ioctl(d->fd, DRM_XE_EUDEBUG_IOCTL_VM_OPEN, &vo);
+		igt_assert_lte(0, fd);
+
+		pthread_mutex_lock(&data->mutex);
+		igt_assert(data->vm_fd == -1);
+		data->vm_fd = fd;
+		pthread_mutex_unlock(&data->mutex);
+		return;
+	}
+
+	pthread_mutex_lock(&data->mutex);
+	close(data->vm_fd);
+	data->vm_fd = -1;
+	pthread_mutex_unlock(&data->mutex);
+}
+
+static void read_metadata(struct xe_eudebug_debugger *d,
+			  uint64_t client_handle,
+			  uint64_t metadata_handle,
+			  uint64_t type,
+			  uint64_t len)
+{
+	struct drm_xe_eudebug_read_metadata rm = {
+		.client_handle = client_handle,
+		.metadata_handle = metadata_handle,
+		.size = len,
+	};
+	struct online_debug_data *data = d->ptr;
+	uint64_t *metadata;
+
+	metadata = malloc(len);
+	igt_assert(metadata);
+
+	rm.ptr = to_user_pointer(metadata);
+	igt_assert_eq(igt_ioctl(d->fd, DRM_XE_EUDEBUG_IOCTL_READ_METADATA, &rm), 0);
+
+	pthread_mutex_lock(&data->mutex);
+	switch (type) {
+	case DRM_XE_DEBUG_METADATA_ELF_BINARY:
+		data->bb_offset = metadata[0];
+		data->bb_size = metadata[1];
+		break;
+	case DRM_XE_DEBUG_METADATA_PROGRAM_MODULE:
+		data->target_offset = metadata[0];
+		data->target_size = metadata[1];
+		break;
+	default:
+		break;
+	}
+	pthread_mutex_unlock(&data->mutex);
+
+	free(metadata);
+}
+
+static void create_metadata_trigger(struct xe_eudebug_debugger *d, struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_metadata *em = (void *)e;
+
+	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE)
+		read_metadata(d, em->client_handle, em->metadata_handle, em->type, em->len);
+}
+
+static void overwrite_immediate_value_in_common_target_write(int vm_fd, uint64_t offset,
+							     uint32_t old_val, uint32_t new_val)
+{
+	uint64_t addr = offset;
+	int vals_changed = 0;
+	uint32_t val;
+
+	while (vals_changed < 4) {
+		igt_assert_eq(pread(vm_fd, &val, sizeof(uint32_t), addr), sizeof(uint32_t));
+		if (val == old_val) {
+			igt_debug("val_before_write[%d]: %08x\n", vals_changed, val);
+			igt_assert_eq(pwrite(vm_fd, &new_val, sizeof(uint32_t), addr),
+				      sizeof(uint32_t));
+			igt_assert_eq(pread(vm_fd, &val, sizeof(uint32_t), addr),
+				      sizeof(uint32_t));
+			igt_debug("val_before_fsync[%d]: %08x\n", vals_changed, val);
+			fsync(vm_fd);
+			igt_assert_eq(pread(vm_fd, &val, sizeof(uint32_t), addr),
+				      sizeof(uint32_t));
+			igt_debug("val_after_fsync[%d]: %08x\n", vals_changed, val);
+			igt_assert_eq_u32(val, new_val);
+			vals_changed++;
+		}
+		addr += sizeof(uint32_t);
+	}
+}
+
+static void eu_attention_resume_caching_trigger(struct xe_eudebug_debugger *d,
+						struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
+	struct online_debug_data *data = d->ptr;
+	static int counter = 0;
+	static int kernel_in_bb = 0;
+	struct dim_t s_dim = surface_dimensions(get_number_of_threads(d->flags));
+	int val;
+	uint32_t instr_usdw;
+	struct gpgpu_shader *kernel;
+	const uint32_t breakpoint_bit = 1 << 30;
+	struct gpgpu_shader *shader_preamble;
+	struct gpgpu_shader *shader_write_instr;
+
+	shader_preamble = gpgpu_shader_create(d->master_fd);
+	gpgpu_shader__write_dword(shader_preamble, SHADER_CANARY, 0);
+	gpgpu_shader__nop(shader_preamble);
+	gpgpu_shader__breakpoint(shader_preamble);
+
+	shader_write_instr = gpgpu_shader_create(d->master_fd);
+	gpgpu_shader__common_target_write_u32(shader_write_instr, 0, 0);
+
+	if (!kernel_in_bb) {
+		kernel = get_shader(d->master_fd, d->flags);
+		kernel_in_bb = find_kernel_in_bb(kernel, data);
+		gpgpu_shader_destroy(kernel);
+	}
+
+	/* set breakpoint on next write instruction */
+	if (counter < caching_get_instruction_count(d->master_fd, s_dim.x, d->flags)) {
+		igt_assert_eq(pread(data->vm_fd, &instr_usdw, sizeof(instr_usdw),
+				    data->bb_offset + kernel_in_bb + shader_preamble->size * 4 +
+				    shader_write_instr->size * 4 * counter), sizeof(instr_usdw));
+		instr_usdw |= breakpoint_bit;
+		igt_assert_eq(pwrite(data->vm_fd, &instr_usdw, sizeof(instr_usdw),
+				     data->bb_offset + kernel_in_bb + shader_preamble->size * 4 +
+				     shader_write_instr->size * 4 * counter), sizeof(instr_usdw));
+		fsync(data->vm_fd);
+	}
+
+	/* restore current instruction */
+	if (counter && counter <= caching_get_instruction_count(d->master_fd, s_dim.x, d->flags))
+		overwrite_immediate_value_in_common_target_write(data->vm_fd,
+								 data->bb_offset + kernel_in_bb +
+								 shader_preamble->size * 4 +
+								 shader_write_instr->size * 4 * (counter - 1),
+								 CACHING_POISON_VALUE,
+								 CACHING_VALUE(counter - 1));
+
+	/* poison next instruction */
+	if (counter < caching_get_instruction_count(d->master_fd, s_dim.x, d->flags))
+		overwrite_immediate_value_in_common_target_write(data->vm_fd,
+								 data->bb_offset + kernel_in_bb +
+								 shader_preamble->size * 4 +
+								 shader_write_instr->size * 4 * counter,
+								 CACHING_VALUE(counter),
+								 CACHING_POISON_VALUE);
+
+	gpgpu_shader_destroy(shader_write_instr);
+	gpgpu_shader_destroy(shader_preamble);
+
+	for (int i = 0; i < data->target_size; i += sizeof(uint32_t)) {
+		igt_assert_eq(pread(data->vm_fd, &val, sizeof(val), data->target_offset + i),
+			      sizeof(val));
+		igt_assert_f(val != CACHING_POISON_VALUE, "Poison value found at %04d!\n", i);
+	}
+
+	eu_ctl_resume(d->master_fd, d->fd, att->client_handle,
+		      att->exec_queue_handle, att->lrc_handle,
+		      att->bitmask, att->bitmask_size);
+
+	counter++;
+}
+
+static struct intel_bb *xe_bb_create_on_offset(int fd, uint32_t exec_queue, uint32_t vm,
+					       uint64_t offset, uint32_t size)
+{
+	struct intel_bb *ibb;
+
+	ibb = intel_bb_create_with_context(fd, exec_queue, vm, NULL, size);
+
+	/* update intel bb offset */
+	intel_bb_remove_object(ibb, ibb->handle, ibb->batch_offset, ibb->size);
+	intel_bb_add_object(ibb, ibb->handle, ibb->size, offset, ibb->alignment, false);
+	ibb->batch_offset = offset;
+
+	return ibb;
+}
+
+static size_t get_bb_size(int flags)
+{
+	if ((flags & SHADER_CACHING_SRAM) || (flags & SHADER_CACHING_VRAM))
+		return 32768;
+
+	return 4096;
+}
+
+static void run_online_client(struct xe_eudebug_client *c)
+{
+	int threads = get_number_of_threads(c->flags);
+	const uint64_t target_offset = 0x1a000000;
+	const uint64_t bb_offset = 0x1b000000;
+	const size_t bb_size = get_bb_size(c->flags);
+	struct online_debug_data *data = c->ptr;
+	struct drm_xe_engine_class_instance hwe = data->hwe;
+	struct drm_xe_ext_set_property ext = {
+		.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
+		.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_EUDEBUG,
+		.value = DRM_XE_EXEC_QUEUE_EUDEBUG_FLAG_ENABLE,
+	};
+	struct drm_xe_exec_queue_create create = {
+		.instances = to_user_pointer(&hwe),
+		.width = 1,
+		.num_placements = 1,
+		.extensions = c->flags & DISABLE_DEBUG_MODE ? 0 : to_user_pointer(&ext)
+	};
+	struct dim_t w_dim = walker_dimensions(threads);
+	struct dim_t s_dim = surface_dimensions(threads);
+	struct timespec ts = { };
+	struct gpgpu_shader *sip, *shader;
+	uint32_t metadata_id[2];
+	uint64_t *metadata[2];
+	struct intel_bb *ibb;
+	struct intel_buf *buf;
+	uint32_t *ptr;
+	int fd;
+
+	metadata[0] = calloc(2, sizeof(**metadata));
+	metadata[1] = calloc(2, sizeof(**metadata));
+	igt_assert(metadata[0]);
+	igt_assert(metadata[1]);
+
+	fd = xe_eudebug_client_open_driver(c);
+	xe_device_get(fd);
+
+	/* Additional memory for steering control */
+	if (c->flags & SHADER_LOOP || c->flags & SHADER_SINGLE_STEP)
+		s_dim.y++;
+	/* Additional memory for caching check */
+	if ((c->flags & SHADER_CACHING_SRAM) || (c->flags & SHADER_CACHING_VRAM))
+		s_dim.y += caching_get_instruction_count(fd, s_dim.x, c->flags);
+	buf = create_uc_buf(fd, s_dim.x, s_dim.y);
+
+	buf->addr.offset = target_offset;
+
+	metadata[0][0] = bb_offset;
+	metadata[0][1] = bb_size;
+	metadata[1][0] = target_offset;
+	metadata[1][1] = buf->size;
+	metadata_id[0] = xe_eudebug_client_metadata_create(c, fd, DRM_XE_DEBUG_METADATA_ELF_BINARY,
+							   2 * sizeof(**metadata), metadata[0]);
+	metadata_id[1] = xe_eudebug_client_metadata_create(c, fd,
+							   DRM_XE_DEBUG_METADATA_PROGRAM_MODULE,
+							   2 * sizeof(**metadata), metadata[1]);
+
+	create.vm_id = xe_eudebug_client_vm_create(c, fd, DRM_XE_VM_CREATE_FLAG_LR_MODE, 0);
+	xe_eudebug_client_exec_queue_create(c, fd, &create);
+
+	ibb = xe_bb_create_on_offset(fd, create.exec_queue_id, create.vm_id,
+				     bb_offset, bb_size);
+	intel_bb_set_lr_mode(ibb, true);
+
+	sip = get_sip(fd, c->flags);
+	shader = get_shader(fd, c->flags);
+
+	igt_nsec_elapsed(&ts);
+	gpgpu_shader_exec(ibb, buf, w_dim.x, w_dim.y, shader, sip, 0, 0);
+
+	gpgpu_shader_destroy(sip);
+	gpgpu_shader_destroy(shader);
+
+	intel_bb_sync(ibb);
+
+	if (c->flags & TRIGGER_RECONNECT)
+		xe_eudebug_client_wait_stage(c, DEBUGGER_REATTACHED);
+	else
+		/* Make sure it wasn't the timeout. */
+		igt_assert(igt_nsec_elapsed(&ts) <
+			   XE_EUDEBUG_DEFAULT_TIMEOUT_MS / MSEC_PER_SEC * NSEC_PER_SEC);
+
+	if (!(c->flags & DO_NOT_EXPECT_CANARIES)) {
+		ptr = xe_bo_mmap_ext(fd, buf->handle, buf->size, PROT_READ);
+		data->threads_count = count_canaries_neq(ptr, w_dim, 0);
+		igt_assert_f(data->threads_count, "No canaries found, nothing executed?\n");
+
+		if ((c->flags & SHADER_BREAKPOINT || c->flags & TRIGGER_RESUME_SET_BP ||
+		     c->flags & SHADER_N_NOOP_BREAKPOINT) && !(c->flags & DISABLE_DEBUG_MODE)) {
+			uint32_t aip = ptr[0];
+
+			igt_assert_f(aip != SHADER_CANARY, "Workload executed but breakpoint not hit!\n");
+			igt_assert_eq(count_canaries_eq(ptr, w_dim, aip), data->threads_count);
+			igt_debug("Breakpoint hit in %d threads, AIP=0x%08x\n", data->threads_count, aip);
+		}
+
+		munmap(ptr, buf->size);
+	}
+
+	intel_bb_destroy(ibb);
+
+	xe_eudebug_client_exec_queue_destroy(c, fd, &create);
+	xe_eudebug_client_vm_destroy(c, fd, create.vm_id);
+
+	xe_eudebug_client_metadata_destroy(c, fd, metadata_id[0], DRM_XE_DEBUG_METADATA_ELF_BINARY,
+					   2 * sizeof(**metadata));
+	xe_eudebug_client_metadata_destroy(c, fd, metadata_id[1],
+					   DRM_XE_DEBUG_METADATA_PROGRAM_MODULE,
+					   2 * sizeof(**metadata));
+
+	free(metadata[0]);
+	free(metadata[1]);
+
+	xe_device_put(fd);
+	xe_eudebug_client_close_driver(c, fd);
+}
+
+static bool intel_gen_has_lockstep_eus(int fd)
+{
+	const uint32_t id = intel_get_drm_devid(fd);
+
+	/*
+	 * Lockstep (or, in some parlance, fused) EUs are pairs of EUs that
+	 * work in sync, supposedly on the same clock and with the same control
+	 * flow. Thus for attentions, if the control flow hits a breakpoint,
+	 * both will be excepted into the SIP, but the hardware exposes only
+	 * one attention thread bit per pair. PVC is the first platform
+	 * without lockstepping.
+	 */
+	return !(intel_graphics_ver(id) == IP_VER(12, 60) || intel_gen(id) >= 20);
+}
+
+static int query_attention_bitmask_size(int fd, int gt)
+{
+	const unsigned int threads = 8;
+	struct drm_xe_query_topology_mask *c_dss = NULL, *g_dss = NULL, *eu_per_dss = NULL;
+	struct drm_xe_query_topology_mask *topology;
+	struct drm_xe_device_query query = {
+		.extensions = 0,
+		.query = DRM_XE_DEVICE_QUERY_GT_TOPOLOGY,
+		.size = 0,
+		.data = 0,
+	};
+	int pos = 0, eus;
+	uint8_t *any_dss;
+
+	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
+	igt_assert_neq(query.size, 0);
+
+	topology = malloc(query.size);
+	igt_assert(topology);
+
+	query.data = to_user_pointer(topology);
+	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
+
+	while (query.size >= sizeof(struct drm_xe_query_topology_mask)) {
+		struct drm_xe_query_topology_mask *topo;
+		int sz;
+
+		topo = (struct drm_xe_query_topology_mask *)((unsigned char *)topology + pos);
+		sz = sizeof(struct drm_xe_query_topology_mask) + topo->num_bytes;
+
+		query.size -= sz;
+		pos += sz;
+
+		if (topo->gt_id != gt)
+			continue;
+
+		if (topo->type == DRM_XE_TOPO_DSS_GEOMETRY)
+			g_dss = topo;
+		else if (topo->type == DRM_XE_TOPO_DSS_COMPUTE)
+			c_dss = topo;
+		else if (topo->type == DRM_XE_TOPO_EU_PER_DSS ||
+			 topo->type == DRM_XE_TOPO_SIMD16_EU_PER_DSS)
+			eu_per_dss = topo;
+	}
+
+	igt_assert(g_dss && c_dss && eu_per_dss);
+	igt_assert_eq_u32(c_dss->num_bytes, g_dss->num_bytes);
+
+	any_dss = malloc(c_dss->num_bytes);
+	igt_assert(any_dss);
+
+	for (int i = 0; i < c_dss->num_bytes; i++)
+		any_dss[i] = c_dss->mask[i] | g_dss->mask[i];
+
+	eus = count_set_bits(any_dss, c_dss->num_bytes);
+	eus *= count_set_bits(eu_per_dss->mask, eu_per_dss->num_bytes);
+
+	if (intel_gen_has_lockstep_eus(fd))
+		eus /= 2;
+
+	free(any_dss);
+	free(topology);
+
+	return eus * threads / 8;
+}
+
+static struct drm_xe_eudebug_event_exec_queue *
+match_attention_with_exec_queue(struct xe_eudebug_event_log *log,
+				struct drm_xe_eudebug_event_eu_attention *ea)
+{
+	struct drm_xe_eudebug_event_exec_queue *ee;
+	struct drm_xe_eudebug_event *event = NULL, *current = NULL, *matching_destroy = NULL;
+	int lrc_idx;
+
+	xe_eudebug_for_each_event(event, log) {
+		if (event->type == DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE &&
+		    event->flags == DRM_XE_EUDEBUG_EVENT_CREATE) {
+			ee = (struct drm_xe_eudebug_event_exec_queue *)event;
+
+			if (ee->exec_queue_handle != ea->exec_queue_handle)
+				continue;
+
+			if (ee->client_handle != ea->client_handle)
+				continue;
+
+			for (lrc_idx = 0; lrc_idx < ee->width; lrc_idx++) {
+				if (ee->lrc_handle[lrc_idx] == ea->lrc_handle)
+					break;
+			}
+
+			if (lrc_idx >= ee->width) {
+				igt_debug("No matching lrc handle within matching exec_queue!\n");
+				continue;
+			}
+
+			/* Event logs are sorted by seqno; exec_queues created after the attention cannot match. */
+			if (ea->base.seqno < ee->base.seqno)
+				break;
+
+			/*
+			 * Sanity check that the attention did not arrive on an
+			 * already destroyed exec_queue.
+			 */
+			current = event;
+			xe_eudebug_for_each_event(current, log) {
+				if (current->type == DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE &&
+				    current->flags == DRM_XE_EUDEBUG_EVENT_DESTROY) {
+					uint8_t offset = sizeof(struct drm_xe_eudebug_event);
+
+					if (memcmp((uint8_t *)current + offset,
+						   (uint8_t *)event + offset,
+						   current->len - offset) == 0) {
+						matching_destroy = current;
+					}
+				}
+			}
+
+			if (!matching_destroy || ea->base.seqno > matching_destroy->seqno)
+				continue;
+
+			return ee;
+		}
+	}
+
+	return NULL;
+}
+
+static void online_session_check(struct xe_eudebug_session *s, int flags)
+{
+	struct drm_xe_eudebug_event_eu_attention *ea = NULL;
+	struct drm_xe_eudebug_event *event = NULL;
+	struct online_debug_data *data = s->c->ptr;
+	bool expect_exception = !(flags & DISABLE_DEBUG_MODE);
+	int sum = 0;
+	int bitmask_size;
+
+	xe_eudebug_session_check(s, true, XE_EUDEBUG_FILTER_EVENT_VM_BIND |
+					  XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP |
+					  XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE);
+
+	bitmask_size = query_attention_bitmask_size(s->d->master_fd, data->hwe.gt_id);
+
+	xe_eudebug_for_each_event(event, s->d->log) {
+		if (event->type == DRM_XE_EUDEBUG_EVENT_EU_ATTENTION) {
+			ea = (struct drm_xe_eudebug_event_eu_attention *)event;
+
+			igt_assert(event->flags == DRM_XE_EUDEBUG_EVENT_STATE_CHANGE);
+			igt_assert_eq(ea->bitmask_size, bitmask_size);
+			sum += count_set_bits(ea->bitmask, bitmask_size);
+			igt_assert(match_attention_with_exec_queue(s->d->log, ea));
+		}
+	}
+
+	/*
+	 * We can expect attention to sum up only
+	 * if we have a breakpoint set and we resume all threads always.
+	 */
+	if (flags == SHADER_BREAKPOINT)
+		igt_assert_eq(sum, data->threads_count);
+
+	if (expect_exception)
+		igt_assert(sum > 0);
+	else
+		igt_assert(sum == 0);
+}
+
+static void ufence_ack_trigger(struct xe_eudebug_debugger *d,
+			       struct drm_xe_eudebug_event *e)
+{
+	struct drm_xe_eudebug_event_vm_bind_ufence *ef = (void *)e;
+
+	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE)
+		xe_eudebug_ack_ufence(d->fd, ef);
+}
+
+/**
+ * SUBTEST: basic-breakpoint
+ * Description:
+ *	Check whether KMD sends attention events
+ *	for a workload in debug mode stopped on a breakpoint.
+ *
+ * SUBTEST: breakpoint-not-in-debug-mode
+ * Description:
+ *	Check whether KMD resets the GPU when it spots attention
+ *	coming from a workload not in debug mode.
+ *
+ * SUBTEST: stopped-thread
+ * Description:
+ *	Hits a breakpoint in a runalone workload and
+ *	reads attention for a fixed time.
+ *
+ * SUBTEST: resume-%s
+ * Description:
+ *	Resumes a workload stopped on a breakpoint,
+ *	with the granularity of %arg[1].
+ *
+ *
+ * arg[1]:
+ *
+ * @one:	one thread
+ * @dss:	threads running on one subslice
+ */
+static void test_basic_online(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
+{
+	struct xe_eudebug_session *s;
+	struct online_debug_data *data;
+
+	data = online_debug_data_create(hwe);
+	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
+
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_debug_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_resume_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					ufence_ack_trigger);
+
+	xe_eudebug_session_run(s);
+	online_session_check(s, s->flags);
+
+	xe_eudebug_session_destroy(s);
+	online_debug_data_destroy(data);
+}
+
+/**
+ * SUBTEST: preempt-breakpoint
+ * Description:
+ *	Verify that eu debugger disables preemption timeout to
+ *	prevent reset of workload stopped on breakpoint.
+ */
+static void test_preemption(int fd, struct drm_xe_engine_class_instance *hwe)
+{
+	int flags = SHADER_BREAKPOINT | TRIGGER_RESUME_DELAYED;
+	struct xe_eudebug_session *s;
+	struct online_debug_data *data;
+	struct xe_eudebug_client *other;
+
+	data = online_debug_data_create(hwe);
+	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
+	other = xe_eudebug_client_create(fd, run_online_client, SHADER_NOP, data);
+
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_debug_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_resume_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					ufence_ack_trigger);
+
+	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
+	xe_eudebug_debugger_start_worker(s->d);
+
+	xe_eudebug_client_start(s->c);
+	sleep(1); /* make sure s->c starts first */
+	xe_eudebug_client_start(other);
+
+	xe_eudebug_client_wait_done(s->c);
+	xe_eudebug_client_wait_done(other);
+
+	xe_eudebug_debugger_stop_worker(s->d, 1);
+
+	xe_eudebug_session_destroy(s);
+	xe_eudebug_client_destroy(other);
+
+	igt_assert_f(data->last_eu_control_seqno != 0,
+		     "Workload with breakpoint has ended without resume!\n");
+
+	online_debug_data_destroy(data);
+}
+
+/**
+ * SUBTEST: reset-with-attention
+ * Description:
+ *	Check whether the GPU is usable after a reset with attention raised
+ *	(stopped on a breakpoint), by running the same workload again.
+ */
+static void test_reset_with_attention_online(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
+{
+	struct xe_eudebug_session *s1, *s2;
+	struct online_debug_data *data;
+
+	data = online_debug_data_create(hwe);
+	s1 = xe_eudebug_session_create(fd, run_online_client, flags, data);
+
+	xe_eudebug_debugger_add_trigger(s1->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_reset_trigger);
+	xe_eudebug_debugger_add_trigger(s1->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					ufence_ack_trigger);
+
+	xe_eudebug_session_run(s1);
+	xe_eudebug_session_destroy(s1);
+
+	s2 = xe_eudebug_session_create(fd, run_online_client, flags, data);
+	xe_eudebug_debugger_add_trigger(s2->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_resume_trigger);
+	xe_eudebug_debugger_add_trigger(s2->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					ufence_ack_trigger);
+
+	xe_eudebug_session_run(s2);
+
+	online_session_check(s2, s2->flags);
+
+	xe_eudebug_session_destroy(s2);
+	online_debug_data_destroy(data);
+}
+
+/**
+ * SUBTEST: interrupt-all
+ * Description:
+ *	Schedules an EU workload which should last a few seconds, then
+ *	interrupts all threads, checks whether the attention event came, and
+ *	resumes the stopped threads.
+ *
+ * SUBTEST: interrupt-all-set-breakpoint
+ * Description:
+ *	Schedules an EU workload which should last a few seconds, then
+ *	interrupts all threads. Once the attention event comes, it sets a
+ *	breakpoint on the very next instruction and resumes the stopped
+ *	threads, expecting every thread to hit the breakpoint.
+ */
+static void test_interrupt_all(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
+{
+	struct xe_eudebug_session *s;
+	struct online_debug_data *data;
+	uint32_t val;
+
+	data = online_debug_data_create(hwe);
+	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
+
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
+					open_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
+					exec_queue_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_debug_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_resume_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
+					create_metadata_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					ufence_ack_trigger);
+
+	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
+	xe_eudebug_debugger_start_worker(s->d);
+	xe_eudebug_client_start(s->c);
+
+	/* wait for workload to start */
+	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
+		/* collect needed data from triggers */
+		if (READ_ONCE(data->vm_fd) == -1 || READ_ONCE(data->target_size) == 0)
+			continue;
+
+		if (pread(data->vm_fd, &val, sizeof(val), data->target_offset) == sizeof(val))
+			if (val != 0)
+				break;
+	}
+
+	pthread_mutex_lock(&data->mutex);
+	igt_assert(data->client_handle != -1);
+	igt_assert(data->exec_queue_handle != -1);
+	eu_ctl_interrupt_all(s->d->fd, data->client_handle,
+			     data->exec_queue_handle, data->lrc_handle);
+	pthread_mutex_unlock(&data->mutex);
+
+	xe_eudebug_client_wait_done(s->c);
+
+	xe_eudebug_debugger_stop_worker(s->d, 1);
+
+	xe_eudebug_event_log_print(s->d->log, true);
+	xe_eudebug_event_log_print(s->c->log, true);
+
+	online_session_check(s, s->flags);
+
+	xe_eudebug_session_destroy(s);
+	online_debug_data_destroy(data);
+}
+
+static void reset_debugger_log(struct xe_eudebug_debugger *d)
+{
+	unsigned int max_size;
+	char log_name[80];
+
+	/* Don't pull the rug out from under an active debugger */
+	igt_assert(d->target_pid == 0);
+
+	max_size = d->log->max_size;
+	strncpy(log_name, d->log->name, sizeof(log_name) - 1);
+	log_name[sizeof(log_name) - 1] = '\0';
+	xe_eudebug_event_log_destroy(d->log);
+	d->log = xe_eudebug_event_log_create(log_name, max_size);
+}
+
+/**
+ * SUBTEST: interrupt-other-debuggable
+ * Description:
+ *	Schedules an EU workload with a never-ending loop in runalone mode
+ *	and, while it is not under debug, tries to interrupt all threads
+ *	using a different client attached to the debugger.
+ *
+ * SUBTEST: interrupt-other
+ * Description:
+ *	Schedules an EU workload with a never-ending loop and, while it is
+ *	not configured for debugging, tries to interrupt all threads using
+ *	the client attached to the debugger.
+ */
+static void test_interrupt_other(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
+{
+	struct online_debug_data *data;
+	struct online_debug_data *debugee_data;
+	struct xe_eudebug_session *s;
+	struct xe_eudebug_client *debugee;
+	int debugee_flags = SHADER_LOOP | DO_NOT_EXPECT_CANARIES;
+	int val = 0;
+
+	data = online_debug_data_create(hwe);
+	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
+
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN, open_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
+					exec_queue_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
+					create_metadata_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					ufence_ack_trigger);
+
+	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
+	xe_eudebug_debugger_start_worker(s->d);
+	xe_eudebug_client_start(s->c);
+
+	/* wait for workload to start */
+	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
+		if (READ_ONCE(data->vm_fd) == -1 || READ_ONCE(data->target_size) == 0)
+			continue;
+
+		if (pread(data->vm_fd, &val, sizeof(val), data->target_offset) == sizeof(val))
+			if (val != 0)
+				break;
+	}
+	igt_assert_f(val != 0, "Workload execution has not started\n");
+
+	xe_eudebug_debugger_dettach(s->d);
+	reset_debugger_log(s->d);
+
+	debugee_data = online_debug_data_create(hwe);
+	s->d->ptr = debugee_data;
+	debugee = xe_eudebug_client_create(fd, run_online_client, debugee_flags, debugee_data);
+	igt_assert_eq(xe_eudebug_debugger_attach(s->d, debugee), 0);
+	xe_eudebug_client_start(debugee);
+
+	/* wait for the second workload to start */
+	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
+		if (READ_ONCE(debugee_data->vm_fd) != -1 &&
+		    READ_ONCE(debugee_data->target_size) != 0)
+			break;
+	}
+
+	pthread_mutex_lock(&debugee_data->mutex);
+	igt_assert(debugee_data->client_handle != -1);
+	igt_assert(debugee_data->exec_queue_handle != -1);
+	/*
+	 * Interrupting the other client should return an invalid state
+	 * as it is running in runalone mode.
+	 */
+	igt_assert_eq(__eu_ctl(s->d->fd, debugee_data->client_handle,
+		       debugee_data->exec_queue_handle, debugee_data->lrc_handle,
+		       NULL, 0, DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL, NULL), -EINVAL);
+	pthread_mutex_unlock(&debugee_data->mutex);
+
+	xe_force_gt_reset_async(s->d->master_fd, debugee_data->hwe.gt_id);
+
+	xe_eudebug_client_wait_done(debugee);
+	xe_eudebug_debugger_stop_worker(s->d, 1);
+
+	xe_eudebug_event_log_print(s->d->log, true);
+	xe_eudebug_event_log_print(debugee->log, true);
+
+	xe_eudebug_session_check(s, true, XE_EUDEBUG_FILTER_EVENT_VM_BIND |
+				 XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP |
+				 XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE);
+
+	xe_eudebug_client_destroy(debugee);
+	xe_eudebug_session_destroy(s);
+	online_debug_data_destroy(data);
+	online_debug_data_destroy(debugee_data);
+}
+
+/**
+ * SUBTEST: tdctl-parameters
+ * Description:
+ *	Schedules an EU workload which should last a few seconds, then checks
+ *	negative scenarios of EU_THREADS ioctl usage, interrupts all threads,
+ *	checks whether the attention event came, and resumes the stopped threads.
+ */
+static void test_tdctl_parameters(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
+{
+	struct xe_eudebug_session *s;
+	struct online_debug_data *data;
+	uint32_t val;
+	uint32_t random_command;
+	uint32_t bitmask_size = query_attention_bitmask_size(fd, hwe->gt_id);
+	uint8_t *attention_bitmask = malloc(bitmask_size * sizeof(uint8_t));
+
+	igt_assert(attention_bitmask);
+
+	data = online_debug_data_create(hwe);
+	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
+
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
+					open_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
+					exec_queue_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_debug_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_resume_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
+					create_metadata_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					ufence_ack_trigger);
+
+	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
+	xe_eudebug_debugger_start_worker(s->d);
+	xe_eudebug_client_start(s->c);
+
+	/* wait for workload to start */
+	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
+		/* collect needed data from triggers */
+		if (READ_ONCE(data->vm_fd) == -1 || READ_ONCE(data->target_size) == 0)
+			continue;
+
+		if (pread(data->vm_fd, &val, sizeof(val), data->target_offset) == sizeof(val))
+			if (val != 0)
+				break;
+	}
+
+	pthread_mutex_lock(&data->mutex);
+	igt_assert(data->client_handle != -1);
+	igt_assert(data->exec_queue_handle != -1);
+	igt_assert(data->lrc_handle != -1);
+
+	/* fail on invalid lrc_handle */
+	igt_assert(__eu_ctl(s->d->fd, data->client_handle,
+			    data->exec_queue_handle, data->lrc_handle + 1,
+			    attention_bitmask, &bitmask_size,
+			    DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL, NULL) == -EINVAL);
+
+	/* fail on invalid exec_queue_handle */
+	igt_assert(__eu_ctl(s->d->fd, data->client_handle,
+			    data->exec_queue_handle + 1, data->lrc_handle,
+			    attention_bitmask, &bitmask_size,
+			    DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL, NULL) == -EINVAL);
+
+	/* fail on invalid client */
+	igt_assert(__eu_ctl(s->d->fd, data->client_handle + 1,
+			    data->exec_queue_handle, data->lrc_handle,
+			    attention_bitmask, &bitmask_size,
+			    DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL, NULL) == -EINVAL);
+
+	/*
+	 * bitmask size must be aligned to sizeof(u32) for all commands
+	 * and be zero for interrupt all
+	 */
+	bitmask_size = sizeof(uint32_t) - 1;
+	igt_assert(__eu_ctl(s->d->fd, data->client_handle,
+			    data->exec_queue_handle, data->lrc_handle,
+			    attention_bitmask, &bitmask_size,
+			    DRM_XE_EUDEBUG_EU_CONTROL_CMD_STOPPED, NULL) == -EINVAL);
+	bitmask_size = 0;
+
+	/* fail on invalid command */
+	random_command = random() | (DRM_XE_EUDEBUG_EU_CONTROL_CMD_RESUME + 1);
+	igt_assert(__eu_ctl(s->d->fd, data->client_handle,
+			    data->exec_queue_handle, data->lrc_handle,
+			    attention_bitmask, &bitmask_size, random_command, NULL) == -EINVAL);
+
+	free(attention_bitmask);
+
+	eu_ctl_interrupt_all(s->d->fd, data->client_handle,
+			     data->exec_queue_handle, data->lrc_handle);
+	pthread_mutex_unlock(&data->mutex);
+
+	xe_eudebug_client_wait_done(s->c);
+
+	xe_eudebug_debugger_stop_worker(s->d, 1);
+
+	xe_eudebug_event_log_print(s->d->log, true);
+	xe_eudebug_event_log_print(s->c->log, true);
+
+	online_session_check(s, s->flags);
+
+	xe_eudebug_session_destroy(s);
+	online_debug_data_destroy(data);
+}
+
+static void eu_attention_debugger_detach_trigger(struct xe_eudebug_debugger *d,
+						 struct drm_xe_eudebug_event *event)
+{
+	struct online_debug_data *data = d->ptr;
+	uint64_t c_pid;
+	int ret;
+
+	c_pid = d->target_pid;
+
+	/* Reset VM data so the re-triggered VM open handler works properly */
+	data->vm_fd = -1;
+
+	xe_eudebug_debugger_dettach(d);
+
+	/* Let the KMD scan function notice unhandled EU attention */
+	if (!(d->flags & SHADER_N_NOOP_BREAKPOINT))
+		sleep(1);
+
+	/*
+	 * A new session created by the EU debugger on reconnect restarts
+	 * the seqno, causing issues with log sorting. To avoid that, create
+	 * a new event log.
+	 */
+	reset_debugger_log(d);
+
+	ret = xe_eudebug_connect(d->master_fd, c_pid, 0);
+	igt_assert(ret >= 0);
+	d->fd = ret;
+	d->target_pid = c_pid;
+
+	/* Let the discovery worker discover resources */
+	sleep(2);
+
+	if (!(d->flags & SHADER_N_NOOP_BREAKPOINT))
+		xe_eudebug_debugger_signal_stage(d, DEBUGGER_REATTACHED);
+}
+
+/**
+ * SUBTEST: interrupt-reconnect
+ * Description:
+ *	Schedules an EU workload which should last a few seconds, then
+ *	interrupts all threads and detaches the debugger when attention is
+ *	raised. The test checks whether KMD resets the workload when no
+ *	debugger is attached, and replays the events on discovery.
+ */
+static void test_interrupt_reconnect(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
+{
+	struct drm_xe_eudebug_event *e = NULL;
+	struct online_debug_data *data;
+	struct xe_eudebug_session *s;
+	uint32_t val;
+
+	data = online_debug_data_create(hwe);
+	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
+
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
+					open_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
+					exec_queue_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_debug_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_debugger_detach_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
+					create_metadata_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					ufence_ack_trigger);
+
+	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
+	xe_eudebug_debugger_start_worker(s->d);
+	xe_eudebug_client_start(s->c);
+
+	/* wait for workload to start */
+	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
+		/* collect needed data from triggers */
+		if (READ_ONCE(data->vm_fd) == -1 || READ_ONCE(data->target_size) == 0)
+			continue;
+
+		if (pread(data->vm_fd, &val, sizeof(val), data->target_offset) == sizeof(val))
+			if (val != 0)
+				break;
+	}
+
+	pthread_mutex_lock(&data->mutex);
+	igt_assert(data->client_handle != -1);
+	igt_assert(data->exec_queue_handle != -1);
+	eu_ctl_interrupt_all(s->d->fd, data->client_handle,
+			     data->exec_queue_handle, data->lrc_handle);
+	pthread_mutex_unlock(&data->mutex);
+
+	xe_eudebug_client_wait_done(s->c);
+
+	xe_eudebug_debugger_stop_worker(s->d, 1);
+
+	xe_eudebug_event_log_print(s->d->log, true);
+	xe_eudebug_event_log_print(s->c->log, true);
+
+	xe_eudebug_session_check(s, true, XE_EUDEBUG_FILTER_EVENT_VM_BIND |
+					  XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP |
+					  XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE);
+
+	/* We expect workload reset, so no attention should be raised */
+	xe_eudebug_for_each_event(e, s->d->log)
+		igt_assert(e->type != DRM_XE_EUDEBUG_EVENT_EU_ATTENTION);
+
+	xe_eudebug_session_destroy(s);
+	online_debug_data_destroy(data);
+}
+
+/**
+ * SUBTEST: single-step
+ * Description:
+ *	Schedules an EU workload with 16 nops after a breakpoint, then
+ *	single-steps through the shader, advancing all threads each step and
+ *	checking that all threads advanced on every step.
+ *
+ * SUBTEST: single-step-one
+ * Description:
+ *	Schedules an EU workload with 16 nops after a breakpoint, then
+ *	single-steps through the shader, advancing one thread each step and
+ *	checking that one thread advanced on every step. Due to time
+ *	constraints, only the first two shader instructions after the
+ *	breakpoint are validated.
+ */
+static void test_single_step(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
+{
+	struct xe_eudebug_session *s;
+	struct online_debug_data *data;
+
+	data = online_debug_data_create(hwe);
+	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
+
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
+					open_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_debug_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_resume_single_step_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
+					create_metadata_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					ufence_ack_trigger);
+
+	xe_eudebug_session_run(s);
+	online_session_check(s, s->flags);
+	xe_eudebug_session_destroy(s);
+	online_debug_data_destroy(data);
+}
+
+static void eu_attention_debugger_ndetach_trigger(struct xe_eudebug_debugger *d,
+						 struct drm_xe_eudebug_event *event)
+{
+	static int debugger_detach_count;
+
+	if (debugger_detach_count < (SHADER_LOOP_N - 1)) {
+		/* Make sure the resume command has been issued before detaching the debugger */
+		if (!is_client_resumed)
+			return;
+		eu_attention_debugger_detach_trigger(d, event);
+		debugger_detach_count++;
+	} else {
+		igt_debug("Reached Nth breakpoint hence preventing the debugger detach\n");
+	}
+	is_client_resumed = 0;
+}
+
+/**
+ * SUBTEST: debugger-reopen
+ * Description:
+ *	Check whether the debugger is able to reopen the connection and
+ *	capture the events of already running client.
+ */
+static void test_debugger_reopen(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
+{
+	struct xe_eudebug_session *s;
+	struct online_debug_data *data;
+
+	data = online_debug_data_create(hwe);
+
+	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
+
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_debug_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_resume_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_debugger_ndetach_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					ufence_ack_trigger);
+
+	xe_eudebug_session_run(s);
+
+	xe_eudebug_session_destroy(s);
+	online_debug_data_destroy(data);
+}
+
+/**
+ * SUBTEST: writes-caching-%s
+ * Description:
+ *	Write incrementing values to a 2-page-long target surface, poisoning
+ *	the data one breakpoint before each write instruction and restoring it
+ *	when the breakpoint on the poisoned instruction is hit. Expect never
+ *	to see poison values in the target surface.
+ *
+ *
+ * arg[1]:
+ *
+ * @sram:	Use page size of SRAM
+ * @vram:	Use page size of VRAM
+ */
+static void test_caching(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
+{
+	struct xe_eudebug_session *s;
+	struct online_debug_data *data;
+
+	if (flags & SHADER_CACHING_VRAM)
+		igt_skip_on_f(!xe_has_vram(fd), "Device does not have VRAM.\n");
+
+	data = online_debug_data_create(hwe);
+	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
+
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
+					open_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_debug_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+					eu_attention_resume_caching_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
+					create_metadata_trigger);
+	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+					ufence_ack_trigger);
+
+	xe_eudebug_session_run(s);
+	online_session_check(s, s->flags);
+	xe_eudebug_session_destroy(s);
+	online_debug_data_destroy(data);
+}
+
+static int wait_for_exception(struct online_debug_data *data, int timeout)
+{
+	int ret = -ETIMEDOUT;
+
+	igt_for_milliseconds(timeout) {
+		pthread_mutex_lock(&data->mutex);
+		if ((data->exception_arrived.tv_sec |
+		     data->exception_arrived.tv_nsec) != 0)
+			ret = 0;
+		pthread_mutex_unlock(&data->mutex);
+
+		if (!ret)
+			break;
+		usleep(1000);
+	}
+
+	return ret;
+}
+
+#define is_compute_on_gt(__e, __gt) ((__e->engine_class == DRM_XE_ENGINE_CLASS_RENDER || \
+				      __e->engine_class == DRM_XE_ENGINE_CLASS_COMPUTE) && \
+				      __e->gt_id == __gt)
+
+struct xe_engine_list_entry {
+	struct igt_list_head link;
+	struct drm_xe_engine_class_instance *hwe;
+};
+
+#define MAX_TILES	2
+static int find_suitable_engines(struct drm_xe_engine_class_instance *hwes[GEM_MAX_ENGINES],
+				 int fd, bool many_tiles)
+{
+	struct xe_device *xe_dev;
+	struct drm_xe_engine_class_instance *e;
+	struct xe_engine_list_entry *en, *tmp;
+	struct igt_list_head compute_engines[MAX_TILES];
+	int gt_id;
+	int tile_id, i, engine_count = 0, tile_count = 0;
+
+	xe_dev = xe_device_get(fd);
+
+	for (i = 0; i < MAX_TILES; i++)
+		IGT_INIT_LIST_HEAD(&compute_engines[i]);
+
+	xe_for_each_gt(fd, gt_id) {
+		xe_for_each_engine(fd, e) {
+			if (is_compute_on_gt(e, gt_id)) {
+				tile_id = xe_dev->gt_list->gt_list[gt_id].tile_id;
+
+				en = malloc(sizeof(*en));
+				igt_assert(en);
+				en->hwe = e;
+
+				igt_list_add_tail(&en->link, &compute_engines[tile_id]);
+			}
+		}
+	}
+
+	for (i = 0; i < MAX_TILES; i++) {
+		if (igt_list_empty(&compute_engines[i]))
+			continue;
+
+		if (many_tiles) {
+			en = igt_list_first_entry(&compute_engines[i], en, link);
+			hwes[engine_count++] = en->hwe;
+			tile_count++;
+		} else {
+			if (igt_list_length(&compute_engines[i]) > 1) {
+				igt_list_for_each_entry(en, &compute_engines[i], link)
+					hwes[engine_count++] = en->hwe;
+				break;
+			}
+		}
+	}
+
+	for (i = 0; i < MAX_TILES; i++) {
+		igt_list_for_each_entry_safe(en, tmp, &compute_engines[i], link) {
+			igt_list_del(&en->link);
+			free(en);
+		}
+	}
+
+	if (many_tiles)
+		igt_require_f(tile_count > 1, "Multi-tile scenario requires more than one tile\n");
+
+	return engine_count;
+}
+
+/**
+ * SUBTEST: breakpoint-many-sessions-single-tile
+ * Description:
+ *	Schedules EU workload with preinstalled breakpoint on every compute engine
+ *	available on the tile. Checks if the contexts hit breakpoint in sequence
+ *	and resumes them.
+ *
+ * SUBTEST: breakpoint-many-sessions-tiles
+ * Description:
+ *	Schedules EU workload with preinstalled breakpoint on selected compute
+ *      engines, with one per tile. Checks if each context hit breakpoint and
+ *      resumes them.
+ */
+static void test_many_sessions_on_tiles(int fd, bool multi_tile)
+{
+	int n = 0, flags = SHADER_BREAKPOINT | SHADER_MIN_THREADS;
+	struct xe_eudebug_session *s[GEM_MAX_ENGINES] = {};
+	struct online_debug_data *data[GEM_MAX_ENGINES] = {};
+	struct drm_xe_engine_class_instance *hwe[GEM_MAX_ENGINES] = {};
+	struct drm_xe_eudebug_event_eu_attention *eus;
+	uint64_t current_t, next_t, diff;
+	int i;
+
+	n = find_suitable_engines(hwe, fd, multi_tile);
+
+	igt_require_f(n > 1, "Test requires at least two parallel compute engines!\n");
+
+	for (i = 0; i < n; i++) {
+		data[i] = online_debug_data_create(hwe[i]);
+		s[i] = xe_eudebug_session_create(fd, run_online_client, flags, data[i]);
+
+		xe_eudebug_debugger_add_trigger(s[i]->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+						eu_attention_debug_trigger);
+		xe_eudebug_debugger_add_trigger(s[i]->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
+						save_first_exception_trigger);
+		xe_eudebug_debugger_add_trigger(s[i]->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+						ufence_ack_trigger);
+
+		igt_assert_eq(xe_eudebug_debugger_attach(s[i]->d, s[i]->c), 0);
+
+		xe_eudebug_debugger_start_worker(s[i]->d);
+		xe_eudebug_client_start(s[i]->c);
+	}
+
+	for (i = 0; i < n; i++) {
+		/* XXX: Sometimes racy, expects clients to execute in sequence */
+		igt_assert(!wait_for_exception(data[i], STARTUP_TIMEOUT_MS));
+
+		eus = (struct drm_xe_eudebug_event_eu_attention *)data[i]->exception_event;
+
+		/* Delay all but the last workload to check serialization */
+		if (i < n - 1)
+			usleep(WORKLOAD_DELAY_US);
+
+		eu_ctl_resume(s[i]->d->master_fd, s[i]->d->fd,
+			      eus->client_handle, eus->exec_queue_handle,
+			      eus->lrc_handle, eus->bitmask, eus->bitmask_size);
+		free(eus);
+	}
+
+	for (i = 0; i < n - 1; i++) {
+		/* Convert timestamps to microseconds */
+		current_t = data[i]->exception_arrived.tv_sec * 1000000ULL +
+			    data[i]->exception_arrived.tv_nsec / 1000;
+		next_t = data[i + 1]->exception_arrived.tv_sec * 1000000ULL +
+			 data[i + 1]->exception_arrived.tv_nsec / 1000;
+		diff = current_t < next_t ? next_t - current_t : current_t - next_t;
+
+		if (multi_tile)
+			igt_assert_f(diff < WORKLOAD_DELAY_US,
+				     "Expected to execute workloads concurrently. Actual delay: %lu us\n",
+				     diff);
+		else
+			igt_assert_f(diff >= WORKLOAD_DELAY_US,
+				     "Expected a serialization of workloads. Actual delay: %lu us\n",
+				     diff);
+	}
+
+	for (i = 0; i < n; i++) {
+		xe_eudebug_client_wait_done(s[i]->c);
+		xe_eudebug_debugger_stop_worker(s[i]->d, 1);
+
+		xe_eudebug_event_log_print(s[i]->d->log, true);
+		online_session_check(s[i], flags);
+
+		xe_eudebug_session_destroy(s[i]);
+		online_debug_data_destroy(data[i]);
+	}
+}
+
+static struct drm_xe_engine_class_instance *pick_compute(int fd, int gt)
+{
+	struct drm_xe_engine_class_instance *hwe;
+	int count = 0;
+
+	xe_for_each_engine(fd, hwe)
+		if (is_compute_on_gt(hwe, gt))
+			count++;
+
+	xe_for_each_engine(fd, hwe)
+		if (is_compute_on_gt(hwe, gt) && rand() % count-- == 0)
+			return hwe;
+
+	return NULL;
+}
+
+#define test_gt_render_or_compute(t, __fd, __hwe) \
+	igt_subtest_with_dynamic(t) \
+		for (int gt = 0; (__hwe = pick_compute(__fd, gt)); gt++) \
+			igt_dynamic_f("%s%d", xe_engine_class_string(__hwe->engine_class), __hwe->engine_instance)
+
+igt_main
+{
+	struct drm_xe_engine_class_instance *hwe;
+	bool was_enabled;
+	int fd;
+
+	igt_fixture {
+		fd = drm_open_driver(DRIVER_XE);
+		intel_allocator_multiprocess_start();
+		igt_srandom();
+		was_enabled = xe_eudebug_enable(fd, true);
+	}
+
+	test_gt_render_or_compute("basic-breakpoint", fd, hwe)
+		test_basic_online(fd, hwe, SHADER_BREAKPOINT);
+
+	test_gt_render_or_compute("preempt-breakpoint", fd, hwe)
+		test_preemption(fd, hwe);
+
+	test_gt_render_or_compute("breakpoint-not-in-debug-mode", fd, hwe)
+		test_basic_online(fd, hwe, SHADER_BREAKPOINT | DISABLE_DEBUG_MODE);
+
+	test_gt_render_or_compute("stopped-thread", fd, hwe)
+		test_basic_online(fd, hwe, SHADER_BREAKPOINT | TRIGGER_RESUME_DELAYED);
+
+	test_gt_render_or_compute("resume-one", fd, hwe)
+		test_basic_online(fd, hwe, SHADER_BREAKPOINT | TRIGGER_RESUME_ONE);
+
+	test_gt_render_or_compute("resume-dss", fd, hwe)
+		test_basic_online(fd, hwe, SHADER_BREAKPOINT | TRIGGER_RESUME_DSS);
+
+	test_gt_render_or_compute("interrupt-all", fd, hwe)
+		test_interrupt_all(fd, hwe, SHADER_LOOP);
+
+	test_gt_render_or_compute("interrupt-other-debuggable", fd, hwe)
+		test_interrupt_other(fd, hwe, SHADER_LOOP);
+
+	test_gt_render_or_compute("interrupt-other", fd, hwe)
+		test_interrupt_other(fd, hwe, SHADER_LOOP | DISABLE_DEBUG_MODE);
+
+	test_gt_render_or_compute("interrupt-all-set-breakpoint", fd, hwe)
+		test_interrupt_all(fd, hwe, SHADER_LOOP | TRIGGER_RESUME_SET_BP);
+
+	test_gt_render_or_compute("tdctl-parameters", fd, hwe)
+		test_tdctl_parameters(fd, hwe, SHADER_LOOP);
+
+	test_gt_render_or_compute("reset-with-attention", fd, hwe)
+		test_reset_with_attention_online(fd, hwe, SHADER_BREAKPOINT);
+
+	test_gt_render_or_compute("interrupt-reconnect", fd, hwe)
+		test_interrupt_reconnect(fd, hwe, SHADER_LOOP | TRIGGER_RECONNECT);
+
+	test_gt_render_or_compute("single-step", fd, hwe)
+		test_single_step(fd, hwe, SHADER_SINGLE_STEP | SIP_SINGLE_STEP |
+				 TRIGGER_RESUME_PARALLEL_WALK);
+
+	test_gt_render_or_compute("single-step-one", fd, hwe)
+		test_single_step(fd, hwe, SHADER_SINGLE_STEP | SIP_SINGLE_STEP |
+				 TRIGGER_RESUME_SINGLE_WALK);
+
+	test_gt_render_or_compute("debugger-reopen", fd, hwe)
+		test_debugger_reopen(fd, hwe, SHADER_N_NOOP_BREAKPOINT);
+
+	test_gt_render_or_compute("writes-caching-sram", fd, hwe)
+		test_caching(fd, hwe, SHADER_CACHING_SRAM);
+
+	test_gt_render_or_compute("writes-caching-vram", fd, hwe)
+		test_caching(fd, hwe, SHADER_CACHING_VRAM);
+
+	igt_subtest("breakpoint-many-sessions-single-tile")
+		test_many_sessions_on_tiles(fd, false);
+
+	igt_subtest("breakpoint-many-sessions-tiles")
+		test_many_sessions_on_tiles(fd, true);
+
+	igt_fixture {
+		xe_eudebug_enable(fd, was_enabled);
+
+		intel_allocator_multiprocess_stop();
+		drm_close_driver(fd);
+	}
+}
diff --git a/tests/meson.build b/tests/meson.build
index 35bf8ed35..f18eec7e7 100644
--- a/tests/meson.build
+++ b/tests/meson.build
@@ -280,6 +280,7 @@ intel_xe_progs = [
 	'xe_debugfs',
 	'xe_drm_fdinfo',
 	'xe_eudebug',
+	'xe_eudebug_online',
 	'xe_evict',
 	'xe_evict_ccs',
 	'xe_exec_atomic',
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH i-g-t v3 14/14] tests/xe_live_ktest: Add xe_eudebug live test
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (12 preceding siblings ...)
  2024-08-09 12:38 ` [PATCH i-g-t v3 13/14] tests/xe_eudebug_online: Debug client which runs workloads on EU Christoph Manszewski
@ 2024-08-09 12:38 ` Christoph Manszewski
  2024-08-09 13:36 ` ✗ GitLab.Pipeline: warning for Test coverage for GPU debug support (rev3) Patchwork
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 41+ messages in thread
From: Christoph Manszewski @ 2024-08-09 12:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Kamil Konieczny, Dominik Grzegorzek,
	Maciej Patelczyk, Dominik Karol Piątkowski, Pawel Sikora,
	Andrzej Hajda, Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Christoph Manszewski

xe_eudebug introduces a dedicated kunit test to the live test module.
Add it to the list of live tests to be executed.

Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
 tests/intel/xe_live_ktest.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tests/intel/xe_live_ktest.c b/tests/intel/xe_live_ktest.c
index 4376d5df7..50af97ecc 100644
--- a/tests/intel/xe_live_ktest.c
+++ b/tests/intel/xe_live_ktest.c
@@ -30,6 +30,11 @@
  * Description:
  *	Kernel dynamic selftests to check mocs configuration.
  * Functionality: mocs configuration
+ *
+ * SUBTEST: xe_eudebug
+ * Description:
+ *	Kernel dynamic selftests to check eudebug functionality.
+ * Functionality: eudebug kunit
  */
 
 static const char *live_tests[] = {
@@ -37,6 +42,7 @@ static const char *live_tests[] = {
 	"xe_dma_buf",
 	"xe_migrate",
 	"xe_mocs",
+	"xe_eudebug",
 };
 
 igt_main
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* ✗ GitLab.Pipeline: warning for Test coverage for GPU debug support (rev3)
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (13 preceding siblings ...)
  2024-08-09 12:38 ` [PATCH i-g-t v3 14/14] tests/xe_live_ktest: Add xe_eudebug live test Christoph Manszewski
@ 2024-08-09 13:36 ` Patchwork
  2024-08-09 13:50 ` ✓ CI.xeBAT: success " Patchwork
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 41+ messages in thread
From: Patchwork @ 2024-08-09 13:36 UTC (permalink / raw)
  To: Christoph Manszewski; +Cc: igt-dev

== Series Details ==

Series: Test coverage for GPU debug support (rev3)
URL   : https://patchwork.freedesktop.org/series/136623/
State : warning

== Summary ==

Pipeline status: FAILED.

see https://gitlab.freedesktop.org/gfx-ci/igt-ci-tags/-/pipelines/1245140 for the overview.

build:tests-debian-meson-armhf has failed (https://gitlab.freedesktop.org/gfx-ci/igt-ci-tags/-/jobs/62081847):
              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~                    ~~~~~
  ../lib/igt_core.h:1257:64: note: in definition of macro ‘igt_debug’
   #define igt_debug(f...) igt_log(IGT_LOG_DOMAIN, IGT_LOG_DEBUG, f)
                                                                  ^
  ../lib/xe/xe_eudebug.c: In function ‘xe_eudebug_client_exec_queue_create’:
  ../lib/xe/xe_eudebug.c:1792:20: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
    uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
                      ^
  ../lib/xe/xe_eudebug.c: In function ‘xe_eudebug_client_exec_queue_destroy’:
  ../lib/xe/xe_eudebug.c:1816:20: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
    uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
                      ^
  cc1: some warnings being treated as errors
  ninja: build stopped: subcommand failed.
  section_end:1723210264:step_script
  section_start:1723210264:cleanup_file_variables
  Cleaning up project directory and file based variables
  section_end:1723210265:cleanup_file_variables
  ERROR: Job failed: exit code 1
  

build:tests-debian-meson-mips has failed (https://gitlab.freedesktop.org/gfx-ci/igt-ci-tags/-/jobs/62081849):
              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~                    ~~~~~
  ../lib/igt_core.h:1257:64: note: in definition of macro ‘igt_debug’
   #define igt_debug(f...) igt_log(IGT_LOG_DOMAIN, IGT_LOG_DEBUG, f)
                                                                  ^
  ../lib/xe/xe_eudebug.c: In function ‘xe_eudebug_client_exec_queue_create’:
  ../lib/xe/xe_eudebug.c:1792:20: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
    uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
                      ^
  ../lib/xe/xe_eudebug.c: In function ‘xe_eudebug_client_exec_queue_destroy’:
  ../lib/xe/xe_eudebug.c:1816:20: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
    uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
                      ^
  cc1: some warnings being treated as errors
  ninja: build stopped: subcommand failed.
  section_end:1723210260:step_script
  section_start:1723210260:cleanup_file_variables
  Cleaning up project directory and file based variables
  section_end:1723210261:cleanup_file_variables
  ERROR: Job failed: exit code 1

== Logs ==

For more details see: https://gitlab.freedesktop.org/gfx-ci/igt-ci-tags/-/pipelines/1245140

^ permalink raw reply	[flat|nested] 41+ messages in thread

* ✓ CI.xeBAT: success for Test coverage for GPU debug support (rev3)
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (14 preceding siblings ...)
  2024-08-09 13:36 ` ✗ GitLab.Pipeline: warning for Test coverage for GPU debug support (rev3) Patchwork
@ 2024-08-09 13:50 ` Patchwork
  2024-08-09 14:01 ` ✓ Fi.CI.BAT: " Patchwork
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 41+ messages in thread
From: Patchwork @ 2024-08-09 13:50 UTC (permalink / raw)
  To: Christoph Manszewski; +Cc: igt-dev

[-- Attachment #1: Type: text/plain, Size: 861 bytes --]

== Series Details ==

Series: Test coverage for GPU debug support (rev3)
URL   : https://patchwork.freedesktop.org/series/136623/
State : success

== Summary ==

CI Bug Log - changes from XEIGT_7964_BAT -> XEIGTPW_11551_BAT
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (9 -> 9)
------------------------------

  No changes in participating hosts


Changes
-------

  No changes found


Build changes
-------------

  * IGT: IGT_7964 -> IGTPW_11551

  IGTPW_11551: 11551
  IGT_7964: 0dabf88262c0349d261248638064b97da35369f8 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-1740-8d0f83b389ec64b9f7f9b2ee241fc352346868f1: 8d0f83b389ec64b9f7f9b2ee241fc352346868f1

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/index.html


^ permalink raw reply	[flat|nested] 41+ messages in thread

* ✓ Fi.CI.BAT: success for Test coverage for GPU debug support (rev3)
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (15 preceding siblings ...)
  2024-08-09 13:50 ` ✓ CI.xeBAT: success " Patchwork
@ 2024-08-09 14:01 ` Patchwork
  2024-08-09 14:24 ` [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Kamil Konieczny
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 41+ messages in thread
From: Patchwork @ 2024-08-09 14:01 UTC (permalink / raw)
  To: Christoph Manszewski; +Cc: igt-dev

[-- Attachment #1: Type: text/plain, Size: 3755 bytes --]

== Series Details ==

Series: Test coverage for GPU debug support (rev3)
URL   : https://patchwork.freedesktop.org/series/136623/
State : success

== Summary ==

CI Bug Log - changes from IGT_7964 -> IGTPW_11551
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/index.html

Participating hosts (37 -> 37)
------------------------------

  Additional (3): fi-cfl-8109u fi-bsw-n3050 fi-pnv-d510 
  Missing    (3): fi-glk-j4005 fi-snb-2520m fi-elk-e7500 

Known issues
------------

  Here are the changes found in IGTPW_11551 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_huc_copy@huc-copy:
    - fi-cfl-8109u:       NOTRUN -> [SKIP][1] ([i915#2190])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/fi-cfl-8109u/igt@gem_huc_copy@huc-copy.html

  * igt@gem_lmem_swapping@basic:
    - fi-pnv-d510:        NOTRUN -> [SKIP][2] +32 other tests skip
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/fi-pnv-d510/igt@gem_lmem_swapping@basic.html

  * igt@gem_lmem_swapping@random-engines:
    - fi-bsw-n3050:       NOTRUN -> [SKIP][3] +19 other tests skip
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/fi-bsw-n3050/igt@gem_lmem_swapping@random-engines.html

  * igt@gem_lmem_swapping@verify-random:
    - fi-cfl-8109u:       NOTRUN -> [SKIP][4] ([i915#4613]) +3 other tests skip
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/fi-cfl-8109u/igt@gem_lmem_swapping@verify-random.html

  * igt@i915_selftest@live@hangcheck:
    - bat-arls-2:         [PASS][5] -> [DMESG-WARN][6] ([i915#11349] / [i915#11378])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/bat-arls-2/igt@i915_selftest@live@hangcheck.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/bat-arls-2/igt@i915_selftest@live@hangcheck.html

  * igt@kms_pm_backlight@basic-brightness:
    - fi-cfl-8109u:       NOTRUN -> [SKIP][7] +11 other tests skip
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/fi-cfl-8109u/igt@kms_pm_backlight@basic-brightness.html

  
#### Possible fixes ####

  * igt@i915_selftest@live@gt_lrc:
    - bat-adlp-11:        [INCOMPLETE][8] ([i915#9413]) -> [PASS][9]
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/bat-adlp-11/igt@i915_selftest@live@gt_lrc.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/bat-adlp-11/igt@i915_selftest@live@gt_lrc.html

  * igt@kms_pm_rpm@basic-pci-d3-state:
    - bat-arls-5:         [DMESG-WARN][10] ([i915#11898]) -> [PASS][11]
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/bat-arls-5/igt@kms_pm_rpm@basic-pci-d3-state.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/bat-arls-5/igt@kms_pm_rpm@basic-pci-d3-state.html

  
  [i915#11349]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11349
  [i915#11378]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11378
  [i915#11898]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11898
  [i915#2190]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2190
  [i915#4613]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4613
  [i915#9413]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9413


Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_7964 -> IGTPW_11551

  CI-20190529: 20190529
  CI_DRM_15204: 040fda7980e8fccde054bf16158538ff80223e36 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_11551: 11551
  IGT_7964: 0dabf88262c0349d261248638064b97da35369f8 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/index.html


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 01/14] drm-uapi/xe: Sync with oa uapi fix
  2024-08-09 12:38 ` [PATCH i-g-t v3 01/14] drm-uapi/xe: Sync with oa uapi fix Christoph Manszewski
@ 2024-08-09 14:21   ` Kamil Konieczny
  0 siblings, 0 replies; 41+ messages in thread
From: Kamil Konieczny @ 2024-08-09 14:21 UTC (permalink / raw)
  To: igt-dev
  Cc: Christoph Manszewski, Zbigniew Kempczyński,
	Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun,
	Jonathan Cavitt, Lucas De Marchi

Hi Christoph,
On 2024-08-09 at 14:38:00 +0200, Christoph Manszewski wrote:
> Align with kernel commit f2881dfdaaa9 ("drm/xe/oa/uapi: Make bit masks
> unsigned"). Use built header instead of raw uapi header.
> 
> Cc: Jonathan Cavitt <jonathan.cavitt@intel.com>
> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>

Reviewed-by: Kamil Konieczny <kamil.konieczny@linux.intel.com>

> ---
>  include/drm-uapi/xe_drm.h | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index 29425d7fd..f0a450db9 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -3,8 +3,8 @@
>   * Copyright © 2023 Intel Corporation
>   */
>  
> -#ifndef _UAPI_XE_DRM_H_
> -#define _UAPI_XE_DRM_H_
> +#ifndef _XE_DRM_H_
> +#define _XE_DRM_H_
>  
>  #include "drm.h"
>  
> @@ -134,7 +134,7 @@ extern "C" {
>   * redefine the interface more easily than an ever growing struct of
>   * increasing complexity, and for large parts of that interface to be
>   * entirely optional. The downside is more pointer chasing; chasing across
> - * the __user boundary with pointers encapsulated inside u64.
> + * the boundary with pointers encapsulated inside u64.
>   *
>   * Example chaining:
>   *
> @@ -1598,10 +1598,10 @@ enum drm_xe_oa_property_id {
>  	 * b. Counter select c. Counter size and d. BC report. Also refer to the
>  	 * oa_formats array in drivers/gpu/drm/xe/xe_oa.c.
>  	 */
> -#define DRM_XE_OA_FORMAT_MASK_FMT_TYPE		(0xff << 0)
> -#define DRM_XE_OA_FORMAT_MASK_COUNTER_SEL	(0xff << 8)
> -#define DRM_XE_OA_FORMAT_MASK_COUNTER_SIZE	(0xff << 16)
> -#define DRM_XE_OA_FORMAT_MASK_BC_REPORT		(0xff << 24)
> +#define DRM_XE_OA_FORMAT_MASK_FMT_TYPE		(0xffu << 0)
> +#define DRM_XE_OA_FORMAT_MASK_COUNTER_SEL	(0xffu << 8)
> +#define DRM_XE_OA_FORMAT_MASK_COUNTER_SIZE	(0xffu << 16)
> +#define DRM_XE_OA_FORMAT_MASK_BC_REPORT		(0xffu << 24)
>  
>  	/**
>  	 * @DRM_XE_OA_PROPERTY_OA_PERIOD_EXPONENT: Requests periodic OA unit
> @@ -1698,4 +1698,4 @@ struct drm_xe_oa_stream_info {
>  }
>  #endif
>  
> -#endif /* _UAPI_XE_DRM_H_ */
> +#endif /* _XE_DRM_H_ */
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 00/14] Test coverage for GPU debug support
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (16 preceding siblings ...)
  2024-08-09 14:01 ` ✓ Fi.CI.BAT: " Patchwork
@ 2024-08-09 14:24 ` Kamil Konieczny
  2024-08-09 15:40 ` ✗ CI.xeFULL: failure for Test coverage for GPU debug support (rev3) Patchwork
  2024-08-10 18:30 ` ✗ Fi.CI.IGT: " Patchwork
  19 siblings, 0 replies; 41+ messages in thread
From: Kamil Konieczny @ 2024-08-09 14:24 UTC (permalink / raw)
  To: igt-dev
  Cc: Christoph Manszewski, Zbigniew Kempczyński,
	Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun

Hi Christoph,
On 2024-08-09 at 14:37:59 +0200, Christoph Manszewski wrote:
> Hi,
> 
> In this series the eudebug kernel and validation team would like to
> add test coverage for GPU debug support recently proposed as an RFC.
> (https://patchwork.freedesktop.org/series/136572/)
> 
> This series adds 'xe_eudebug' and 'xe_eudebug_online' tests together
> with a library that encapsulates common paths in current and future
> EU debugger scenarios. It also extends the 'xe_exec_sip' test and
> 'gpgpu_shader' library.
> 
> The aim of the 'xe_eudebug' test is to validate the eudebug resource
> tracking and event delivery mechanism. The 'xe_eudebug_online' test is
> dedicated to 'online' scenarios, that is, scenarios that exercise
> hardware exception handling and thread state manipulation.
> 
> The xe_eudebug library provides an abstraction over debugger and debuggee
> processes, asynchronous event reader, and event log buffers for post-mortem
> analysis.
> 
> Latest kernel code can be found here:
> https://gitlab.freedesktop.org/miku/kernel/-/commits/eudebug-dev
> 
> Thank you in advance for any comments and insight.
> 
> v2:
>  - make sure to include all patches and verify that each individual
>  patch compiles (Zbigniew)
> 
> v3:
>  - fix multiple typos (Dominik Karol),
>  - squash subtest and eudebug lib patches (Zbigniew),
>  - include uapi sync/fix (Kamil)
> 

Please look into the GitLab compilation report, which shows errors for
armhf and mips:

build:tests-debian-meson-armhf has failed (https://gitlab.freedesktop.org/gfx-ci/igt-ci-tags/-/jobs/62081847):

../lib/igt_core.h:1257:64: note: in definition of macro ‘igt_debug’
#define igt_debug(f...) igt_log(IGT_LOG_DOMAIN, IGT_LOG_DEBUG, f)

../lib/xe/xe_eudebug.c: In function ‘xe_eudebug_client_exec_queue_create’:
../lib/xe/xe_eudebug.c:1792:20: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;

../lib/xe/xe_eudebug.c: In function ‘xe_eudebug_client_exec_queue_destroy’:
../lib/xe/xe_eudebug.c:1816:20: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;

cc1: some warnings being treated as errors
ninja: build stopped: subcommand failed.

Regards,
Kamil

> Andrzej Hajda (4):
>   lib/gpgpu_shader: Add write_on_exception template
>   lib/gpgpu_shader: Add set/clear exception register (cr0.1) helpers
>   lib/intel_batchbuffer: Add helper to get pointer at specified offset
>   lib/gpgpu_shader: Allow enabling illegal opcode exceptions in shader
> 
> Christoph Manszewski (5):
>   drm-uapi/xe: Sync with oa uapi fix
>   lib/xe_ioctl: Add wrapper with vm_bind_op extension parameter
>   lib/gpgpu_shader: Extend shader building library
>   tests/xe_exec_sip: Extend SIP interaction testing
>   tests/xe_live_ktest: Add xe_eudebug live test
> 
> Dominik Grzegorzek (4):
>   drm-uapi/xe: Sync with eudebug uapi
>   lib/xe_eudebug: Introduce eu debug testing framework
>   tests/xe_eudebug: Test eudebug resource tracking and manipulation
>   tests/xe_eudebug_online: Debug client which runs workloads on EU
> 
> Gwan-gyeong Mun (1):
>   lib/intel_batchbuffer: Add support for long-running mode execution
> 
>  include/drm-uapi/xe_drm.h         |  112 +-
>  include/drm-uapi/xe_drm_eudebug.h |  225 +++
>  lib/gpgpu_shader.c                |  474 ++++-
>  lib/gpgpu_shader.h                |   29 +-
>  lib/iga64_generated_codes.c       |  428 ++++-
>  lib/intel_batchbuffer.c           |  153 +-
>  lib/intel_batchbuffer.h           |   22 +
>  lib/meson.build                   |    1 +
>  lib/xe/xe_eudebug.c               | 2192 +++++++++++++++++++++++
>  lib/xe/xe_eudebug.h               |  206 +++
>  lib/xe/xe_ioctl.c                 |   20 +-
>  lib/xe/xe_ioctl.h                 |    5 +
>  tests/intel/xe_eudebug.c          | 2671 +++++++++++++++++++++++++++++
>  tests/intel/xe_eudebug_online.c   | 2203 ++++++++++++++++++++++++
>  tests/intel/xe_exec_sip.c         |  332 +++-
>  tests/intel/xe_live_ktest.c       |    6 +
>  tests/meson.build                 |    2 +
>  17 files changed, 9036 insertions(+), 45 deletions(-)
>  create mode 100644 include/drm-uapi/xe_drm_eudebug.h
>  create mode 100644 lib/xe/xe_eudebug.c
>  create mode 100644 lib/xe/xe_eudebug.h
>  create mode 100644 tests/intel/xe_eudebug.c
>  create mode 100644 tests/intel/xe_eudebug_online.c
> 
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 13/14] tests/xe_eudebug_online: Debug client which runs workloads on EU
  2024-08-09 12:38 ` [PATCH i-g-t v3 13/14] tests/xe_eudebug_online: Debug client which runs workloads on EU Christoph Manszewski
@ 2024-08-09 14:38   ` Kamil Konieczny
  2024-08-19 15:31     ` Manszewski, Christoph
  2024-08-19  9:58   ` Grzegorzek, Dominik
  1 sibling, 1 reply; 41+ messages in thread
From: Kamil Konieczny @ 2024-08-09 14:38 UTC (permalink / raw)
  To: igt-dev
  Cc: Christoph Manszewski, Zbigniew Kempczyński,
	Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun

Hi Christoph,
On 2024-08-09 at 14:38:12 +0200, Christoph Manszewski wrote:
> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> 
> For typical debugging under gdb one can specify two main use cases:
> accessing and manipulating resources created by the application, and
> manipulating thread execution (interrupting and setting breakpoints).
> 
> This test adds coverage for the latter by checking that:
> - EU workloads that hit an instruction with the breakpoint bit set will
>   halt execution and the debugger will report this via attention events,
> - the debugger is able to interrupt workload execution by issuing a
>   'interrupt_all' ioctl call,
> - the debugger is able to resume selected workloads that are stopped.
> 
> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
> Signed-off-by: Dominik Karol Piątkowski <dominik.karol.piatkowski@intel.com>
> Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
> Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
> Signed-off-by: Kolanupaka Naveena <kolanupaka.naveena@intel.com>
> ---
>  tests/intel/xe_eudebug_online.c | 2203 +++++++++++++++++++++++++++++++
>  tests/meson.build               |    1 +
>  2 files changed, 2204 insertions(+)
>  create mode 100644 tests/intel/xe_eudebug_online.c
[...cut...]

Please use checkpatch.pl from the Linux kernel; watch out for
whitespace errors, unbalanced braces '{/}' in if...else,
and a few other problems.

A few useful options for checkpatch.pl are in CONTRIBUTE.md

Regards,
Kamil


^ permalink raw reply	[flat|nested] 41+ messages in thread

* ✗ CI.xeFULL: failure for Test coverage for GPU debug support (rev3)
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (17 preceding siblings ...)
  2024-08-09 14:24 ` [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Kamil Konieczny
@ 2024-08-09 15:40 ` Patchwork
  2024-08-10 18:30 ` ✗ Fi.CI.IGT: " Patchwork
  19 siblings, 0 replies; 41+ messages in thread
From: Patchwork @ 2024-08-09 15:40 UTC (permalink / raw)
  To: Christoph Manszewski; +Cc: igt-dev

[-- Attachment #1: Type: text/plain, Size: 78046 bytes --]

== Series Details ==

Series: Test coverage for GPU debug support (rev3)
URL   : https://patchwork.freedesktop.org/series/136623/
State : failure

== Summary ==

CI Bug Log - changes from XEIGT_7964_full -> XEIGTPW_11551_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with XEIGTPW_11551_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in XEIGTPW_11551_full, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (4 -> 4)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in XEIGTPW_11551_full:

### IGT changes ###

#### Possible regressions ####

  * igt@xe_eudebug@basic-connect (NEW):
    - shard-lnl:          NOTRUN -> [SKIP][1] +55 other tests skip
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-1/igt@xe_eudebug@basic-connect.html

  * igt@xe_eudebug@basic-vm-bind-metadata-discovery (NEW):
    - {shard-bmg}:        NOTRUN -> [SKIP][2] +53 other tests skip
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-2/igt@xe_eudebug@basic-vm-bind-metadata-discovery.html

  * igt@xe_eudebug_online@resume-dss (NEW):
    - shard-dg2-set2:     NOTRUN -> [SKIP][3] +7 other tests skip
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@xe_eudebug_online@resume-dss.html

  * igt@xe_oa@oa-regs-whitelisted:
    - shard-lnl:          [PASS][4] -> [FAIL][5]
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-7/igt@xe_oa@oa-regs-whitelisted.html
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-4/igt@xe_oa@oa-regs-whitelisted.html

  * igt@xe_oa@oa-regs-whitelisted@ccs-0:
    - shard-lnl:          NOTRUN -> [FAIL][6]
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-4/igt@xe_oa@oa-regs-whitelisted@ccs-0.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@kms_cursor_edge_walk@256x256-left-edge@pipe-a-dp-2:
    - {shard-bmg}:        [PASS][7] -> [INCOMPLETE][8] +3 other tests incomplete
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-bmg-7/igt@kms_cursor_edge_walk@256x256-left-edge@pipe-a-dp-2.html
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-5/igt@kms_cursor_edge_walk@256x256-left-edge@pipe-a-dp-2.html

  * igt@kms_pm_lpsp@kms-lpsp:
    - {shard-bmg}:        NOTRUN -> [SKIP][9] +1 other test skip
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-2/igt@kms_pm_lpsp@kms-lpsp.html

  * igt@xe_ccs@suspend-resume:
    - {shard-bmg}:        [PASS][10] -> [DMESG-WARN][11] +3 other tests dmesg-warn
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-bmg-2/igt@xe_ccs@suspend-resume.html
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-4/igt@xe_ccs@suspend-resume.html

  * igt@xe_oa@oa-regs-whitelisted:
    - {shard-bmg}:        [PASS][12] -> [FAIL][13]
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-bmg-5/igt@xe_oa@oa-regs-whitelisted.html
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-3/igt@xe_oa@oa-regs-whitelisted.html

  * igt@xe_oa@oa-regs-whitelisted@rcs-0:
    - {shard-bmg}:        NOTRUN -> [FAIL][14]
   [14]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-3/igt@xe_oa@oa-regs-whitelisted@rcs-0.html

  
New tests
---------

  New tests have been introduced between XEIGT_7964_full and XEIGTPW_11551_full:

### New IGT tests (74) ###

  * igt@xe_eudebug@attach-debug-metadata:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-client:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-client-th:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-close:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-connect:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-exec-queues:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-read-event:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-vm-access:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-vm-access-parameters:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-vm-access-userptr:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-vm-bind:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-vm-bind-discovery:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-vm-bind-extended:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-vm-bind-extended-discovery:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-vm-bind-metadata-discovery:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-vm-bind-ufence:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-vm-bind-vm-destroy:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-vm-bind-vm-destroy-discovery:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@basic-vms:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@connect-user:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@discovery-empty:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@discovery-empty-clients:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@discovery-race:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@discovery-race-vmbind:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@multigpu-basic-client:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@multigpu-basic-client-many:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@multiple-sessions:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@read-metadata:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@sysfs-toggle:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@vm-bind-clear:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug@vma-ufence:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@basic-breakpoint:
    - Statuses : 1 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@breakpoint-many-sessions-single-tile:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@breakpoint-many-sessions-tiles:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@breakpoint-not-in-debug-mode:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@debugger-reopen:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@interrupt-all:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@interrupt-all-set-breakpoint:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@interrupt-other:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@interrupt-other-debuggable:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@interrupt-reconnect:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@preempt-breakpoint:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@reset-with-attention:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@resume-dss:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@resume-one:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@single-step:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@single-step-one:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@stopped-thread:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@tdctl-parameters:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@writes-caching-sram:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_eudebug_online@writes-caching-vram:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_exec_sip@breakpoint-waitsip:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_exec_sip@breakpoint-waitsip-heavy:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_exec_sip@breakpoint-writesip:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_exec_sip@breakpoint-writesip-nodebug:
    - Statuses : 3 pass(s)
    - Exec time: [0.01] s

  * igt@xe_exec_sip@breakpoint-writesip-nodebug@drm_xe_engine_class_compute0:
    - Statuses : 3 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@xe_exec_sip@breakpoint-writesip-nodebug@drm_xe_engine_class_render0:
    - Statuses : 3 pass(s)
    - Exec time: [0.00, 0.01] s

  * igt@xe_exec_sip@breakpoint-writesip-twice:
    - Statuses : 3 skip(s)
    - Exec time: [0.0] s

  * igt@xe_exec_sip@invalidinstr-disabled:
    - Statuses : 3 pass(s)
    - Exec time: [0.01] s

  * igt@xe_exec_sip@invalidinstr-disabled@drm_xe_engine_class_compute0:
    - Statuses : 3 pass(s)
    - Exec time: [0.00] s

  * igt@xe_exec_sip@invalidinstr-disabled@drm_xe_engine_class_render0:
    - Statuses : 3 pass(s)
    - Exec time: [0.00] s

  * igt@xe_exec_sip@invalidinstr-thread-enabled:
    - Statuses : 3 pass(s)
    - Exec time: [11.56, 11.72] s

  * igt@xe_exec_sip@invalidinstr-thread-enabled@drm_xe_engine_class_compute0:
    - Statuses : 3 pass(s)
    - Exec time: [5.68, 5.88] s

  * igt@xe_exec_sip@invalidinstr-thread-enabled@drm_xe_engine_class_render0:
    - Statuses : 3 pass(s)
    - Exec time: [5.67, 6.00] s

  * igt@xe_exec_sip@invalidinstr-walker-enabled:
    - Statuses : 3 pass(s)
    - Exec time: [11.63, 11.86] s

  * igt@xe_exec_sip@invalidinstr-walker-enabled@drm_xe_engine_class_compute0:
    - Statuses : 3 pass(s)
    - Exec time: [5.81, 5.89] s

  * igt@xe_exec_sip@invalidinstr-walker-enabled@drm_xe_engine_class_render0:
    - Statuses : 3 pass(s)
    - Exec time: [5.74, 6.04] s

  * igt@xe_exec_sip@sanity-after-timeout:
    - Statuses : 3 pass(s)
    - Exec time: [11.62, 11.77] s

  * igt@xe_exec_sip@sanity-after-timeout@drm_xe_engine_class_compute0:
    - Statuses : 3 pass(s)
    - Exec time: [5.89, 5.90] s

  * igt@xe_exec_sip@sanity-after-timeout@drm_xe_engine_class_render0:
    - Statuses : 3 pass(s)
    - Exec time: [5.73, 5.87] s

  * igt@xe_exec_sip@wait-writesip-nodebug:
    - Statuses : 3 pass(s)
    - Exec time: [11.61, 11.74] s

  * igt@xe_exec_sip@wait-writesip-nodebug@drm_xe_engine_class_compute0:
    - Statuses : 3 pass(s)
    - Exec time: [5.88] s

  * igt@xe_exec_sip@wait-writesip-nodebug@drm_xe_engine_class_render0:
    - Statuses : 3 pass(s)
    - Exec time: [5.73, 5.86] s

  * igt@xe_live_ktest@xe_eudebug:
    - Statuses : 3 skip(s)
    - Exec time: [0.00] s

  

Known issues
------------

  Here are the changes found in XEIGTPW_11551_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
    - shard-dg2-set2:     NOTRUN -> [SKIP][15] ([Intel XE#1201] / [Intel XE#623])
   [15]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-436/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html

  * igt@kms_async_flips@async-flip-with-page-flip-events@pipe-a-edp-1-linear:
    - shard-lnl:          [PASS][16] -> [FAIL][17] ([Intel XE#911]) +3 other tests fail
   [16]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-5/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-a-edp-1-linear.html
   [17]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-8/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-a-edp-1-linear.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels:
    - shard-lnl:          [PASS][18] -> [FAIL][19] ([Intel XE#1426]) +1 other test fail
   [18]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-5/igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels.html
   [19]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-4/igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels@pipe-a-edp-1:
    - shard-lnl:          NOTRUN -> [FAIL][20] ([Intel XE#1426]) +1 other test fail
   [20]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-7/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels@pipe-a-edp-1.html

  * igt@kms_big_fb@4-tiled-32bpp-rotate-90:
    - shard-lnl:          NOTRUN -> [SKIP][21] ([Intel XE#1407]) +3 other tests skip
   [21]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-1/igt@kms_big_fb@4-tiled-32bpp-rotate-90.html

  * igt@kms_big_fb@linear-8bpp-rotate-90:
    - shard-dg2-set2:     NOTRUN -> [SKIP][22] ([Intel XE#1201] / [Intel XE#316])
   [22]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_big_fb@linear-8bpp-rotate-90.html

  * igt@kms_big_fb@y-tiled-8bpp-rotate-180:
    - shard-dg2-set2:     NOTRUN -> [SKIP][23] ([Intel XE#1124] / [Intel XE#1201]) +2 other tests skip
   [23]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@kms_big_fb@y-tiled-8bpp-rotate-180.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip:
    - shard-lnl:          NOTRUN -> [SKIP][24] ([Intel XE#1124]) +4 other tests skip
   [24]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-7/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip.html

  * igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p:
    - shard-dg2-set2:     NOTRUN -> [SKIP][25] ([Intel XE#1201] / [Intel XE#367]) +1 other test skip
   [25]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-463/igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p.html

  * igt@kms_bw@connected-linear-tiling-3-displays-1920x1080p:
    - shard-dg2-set2:     NOTRUN -> [SKIP][26] ([Intel XE#1201] / [Intel XE#2191])
   [26]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-466/igt@kms_bw@connected-linear-tiling-3-displays-1920x1080p.html

  * igt@kms_bw@connected-linear-tiling-3-displays-3840x2160p:
    - shard-lnl:          NOTRUN -> [SKIP][27] ([Intel XE#2191])
   [27]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-6/igt@kms_bw@connected-linear-tiling-3-displays-3840x2160p.html

  * igt@kms_bw@linear-tiling-2-displays-1920x1080p:
    - shard-lnl:          NOTRUN -> [SKIP][28] ([Intel XE#367])
   [28]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-3/igt@kms_bw@linear-tiling-2-displays-1920x1080p.html

  * igt@kms_ccs@bad-aux-stride-4-tiled-mtl-mc-ccs@pipe-a-hdmi-a-6:
    - shard-dg2-set2:     NOTRUN -> [SKIP][29] ([Intel XE#1201] / [Intel XE#787]) +20 other tests skip
   [29]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-mc-ccs@pipe-a-hdmi-a-6.html

  * igt@kms_ccs@bad-rotation-90-y-tiled-ccs:
    - shard-dg2-set2:     NOTRUN -> [SKIP][30] ([Intel XE#455] / [Intel XE#787]) +1 other test skip
   [30]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_ccs@bad-rotation-90-y-tiled-ccs.html

  * igt@kms_ccs@bad-rotation-90-y-tiled-ccs@pipe-b-dp-4:
    - shard-dg2-set2:     NOTRUN -> [SKIP][31] ([Intel XE#787]) +6 other tests skip
   [31]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_ccs@bad-rotation-90-y-tiled-ccs@pipe-b-dp-4.html

  * igt@kms_ccs@crc-primary-rotation-180-4-tiled-dg2-rc-ccs-cc:
    - shard-lnl:          NOTRUN -> [SKIP][32] ([Intel XE#1399]) +3 other tests skip
   [32]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-8/igt@kms_ccs@crc-primary-rotation-180-4-tiled-dg2-rc-ccs-cc.html

  * igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-rc-ccs@pipe-d-dp-4:
    - shard-dg2-set2:     NOTRUN -> [SKIP][33] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#787]) +5 other tests skip
   [33]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-rc-ccs@pipe-d-dp-4.html

  * igt@kms_chamelium_audio@dp-audio:
    - shard-dg2-set2:     NOTRUN -> [SKIP][34] ([Intel XE#1201] / [Intel XE#373]) +2 other tests skip
   [34]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@kms_chamelium_audio@dp-audio.html

  * igt@kms_chamelium_color@ctm-green-to-red:
    - shard-dg2-set2:     NOTRUN -> [SKIP][35] ([Intel XE#1201] / [Intel XE#306])
   [35]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-435/igt@kms_chamelium_color@ctm-green-to-red.html

  * igt@kms_chamelium_hpd@hdmi-hpd:
    - shard-lnl:          NOTRUN -> [SKIP][36] ([Intel XE#373]) +1 other test skip
   [36]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-5/igt@kms_chamelium_hpd@hdmi-hpd.html

  * igt@kms_cursor_crc@cursor-offscreen-max-size:
    - shard-lnl:          NOTRUN -> [SKIP][37] ([Intel XE#1424])
   [37]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-8/igt@kms_cursor_crc@cursor-offscreen-max-size.html

  * igt@kms_cursor_crc@cursor-random-512x512:
    - shard-dg2-set2:     NOTRUN -> [SKIP][38] ([Intel XE#1201] / [Intel XE#308])
   [38]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_cursor_crc@cursor-random-512x512.html

  * igt@kms_cursor_legacy@2x-cursor-vs-flip-atomic:
    - shard-lnl:          NOTRUN -> [SKIP][39] ([Intel XE#309])
   [39]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-3/igt@kms_cursor_legacy@2x-cursor-vs-flip-atomic.html

  * igt@kms_fbcon_fbt@psr-suspend:
    - shard-lnl:          [PASS][40] -> [FAIL][41] ([Intel XE#2028])
   [40]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-5/igt@kms_fbcon_fbt@psr-suspend.html
   [41]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-1/igt@kms_fbcon_fbt@psr-suspend.html

  * igt@kms_feature_discovery@display-3x:
    - shard-lnl:          NOTRUN -> [SKIP][42] ([Intel XE#703])
   [42]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-7/igt@kms_feature_discovery@display-3x.html

  * igt@kms_flip@2x-blocking-absolute-wf_vblank:
    - shard-dg2-set2:     [PASS][43] -> [INCOMPLETE][44] ([Intel XE#1195]) +1 other test incomplete
   [43]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-435/igt@kms_flip@2x-blocking-absolute-wf_vblank.html
   [44]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-435/igt@kms_flip@2x-blocking-absolute-wf_vblank.html
    - shard-lnl:          NOTRUN -> [SKIP][45] ([Intel XE#1421]) +1 other test skip
   [45]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-7/igt@kms_flip@2x-blocking-absolute-wf_vblank.html

  * igt@kms_flip@flip-vs-absolute-wf_vblank:
    - shard-lnl:          [PASS][46] -> [FAIL][47] ([Intel XE#886]) +1 other test fail
   [46]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-3/igt@kms_flip@flip-vs-absolute-wf_vblank.html
   [47]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-3/igt@kms_flip@flip-vs-absolute-wf_vblank.html

  * igt@kms_flip@flip-vs-suspend@a-hdmi-a6:
    - shard-dg2-set2:     [PASS][48] -> [DMESG-WARN][49] ([Intel XE#1551]) +2 other tests dmesg-warn
   [48]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-463/igt@kms_flip@flip-vs-suspend@a-hdmi-a6.html
   [49]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_flip@flip-vs-suspend@a-hdmi-a6.html

  * igt@kms_flip@flip-vs-suspend@d-dp4:
    - shard-dg2-set2:     NOTRUN -> [INCOMPLETE][50] ([Intel XE#2049])
   [50]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_flip@flip-vs-suspend@d-dp4.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-downscaling@pipe-a-default-mode:
    - shard-lnl:          NOTRUN -> [SKIP][51] ([Intel XE#1401]) +1 other test skip
   [51]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-8/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-downscaling@pipe-a-default-mode.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-downscaling:
    - shard-lnl:          NOTRUN -> [SKIP][52] ([Intel XE#1401] / [Intel XE#1745]) +1 other test skip
   [52]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-5/igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-downscaling.html

  * igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-shrfb-draw-render:
    - shard-dg2-set2:     NOTRUN -> [SKIP][53] ([Intel XE#651]) +2 other tests skip
   [53]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-shrfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-spr-indfb-draw-render:
    - shard-dg2-set2:     NOTRUN -> [SKIP][54] ([Intel XE#1201] / [Intel XE#651]) +2 other tests skip
   [54]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-spr-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-slowdraw:
    - shard-lnl:          NOTRUN -> [SKIP][55] ([Intel XE#651]) +2 other tests skip
   [55]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-7/igt@kms_frontbuffer_tracking@fbcdrrs-slowdraw.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-tiling-y:
    - shard-dg2-set2:     NOTRUN -> [SKIP][56] ([Intel XE#1201] / [Intel XE#658])
   [56]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@kms_frontbuffer_tracking@fbcdrrs-tiling-y.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-render:
    - shard-lnl:          NOTRUN -> [SKIP][57] ([Intel XE#656]) +10 other tests skip
   [57]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-4/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-blt:
    - shard-dg2-set2:     NOTRUN -> [SKIP][58] ([Intel XE#1201] / [Intel XE#653]) +2 other tests skip
   [58]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@pipe-fbc-rte:
    - shard-dg2-set2:     NOTRUN -> [SKIP][59] ([Intel XE#1201] / [Intel XE#455]) +1 other test skip
   [59]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_frontbuffer_tracking@pipe-fbc-rte.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt:
    - shard-dg2-set2:     NOTRUN -> [SKIP][60] ([Intel XE#653]) +1 other test skip
   [60]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt.html

  * igt@kms_hdr@invalid-hdr:
    - shard-dg2-set2:     [PASS][61] -> [SKIP][62] ([Intel XE#1201] / [Intel XE#455])
   [61]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-463/igt@kms_hdr@invalid-hdr.html
   [62]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-466/igt@kms_hdr@invalid-hdr.html

  * igt@kms_plane@plane-position-covered:
    - shard-lnl:          [PASS][63] -> [DMESG-FAIL][64] ([Intel XE#324]) +2 other tests dmesg-fail
   [63]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-3/igt@kms_plane@plane-position-covered.html
   [64]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-2/igt@kms_plane@plane-position-covered.html

  * igt@kms_plane@plane-position-hole:
    - shard-lnl:          [PASS][65] -> [DMESG-WARN][66] ([Intel XE#324]) +1 other test dmesg-warn
   [65]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-6/igt@kms_plane@plane-position-hole.html
   [66]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-8/igt@kms_plane@plane-position-hole.html

  * igt@kms_plane_scaling@intel-max-src-size:
    - shard-dg2-set2:     NOTRUN -> [FAIL][67] ([Intel XE#361]) +1 other test fail
   [67]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@kms_plane_scaling@intel-max-src-size.html

  * igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5:
    - shard-lnl:          NOTRUN -> [SKIP][68] ([Intel XE#2318]) +3 other tests skip
   [68]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-7/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5.html

  * igt@kms_pm_dc@dc5-psr:
    - shard-lnl:          [PASS][69] -> [FAIL][70] ([Intel XE#718])
   [69]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-5/igt@kms_pm_dc@dc5-psr.html
   [70]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-1/igt@kms_pm_dc@dc5-psr.html

  * igt@kms_psr2_sf@fbc-cursor-plane-move-continuous-exceed-fully-sf:
    - shard-dg2-set2:     NOTRUN -> [SKIP][71] ([Intel XE#1201] / [Intel XE#1489])
   [71]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-435/igt@kms_psr2_sf@fbc-cursor-plane-move-continuous-exceed-fully-sf.html

  * igt@kms_psr@fbc-psr2-sprite-plane-onoff:
    - shard-dg2-set2:     NOTRUN -> [SKIP][72] ([Intel XE#929])
   [72]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_psr@fbc-psr2-sprite-plane-onoff.html

  * igt@kms_psr@pr-sprite-blt:
    - shard-lnl:          NOTRUN -> [SKIP][73] ([Intel XE#1406])
   [73]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-2/igt@kms_psr@pr-sprite-blt.html

  * igt@kms_psr@psr2-cursor-plane-onoff:
    - shard-dg2-set2:     NOTRUN -> [SKIP][74] ([Intel XE#1201] / [Intel XE#929]) +3 other tests skip
   [74]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-466/igt@kms_psr@psr2-cursor-plane-onoff.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90:
    - shard-dg2-set2:     NOTRUN -> [SKIP][75] ([Intel XE#1201] / [Intel XE#327])
   [75]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-463/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90.html

  * igt@kms_setmode@invalid-clone-exclusive-crtc:
    - shard-lnl:          NOTRUN -> [SKIP][76] ([Intel XE#1435])
   [76]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-4/igt@kms_setmode@invalid-clone-exclusive-crtc.html

  * igt@kms_tiled_display@basic-test-pattern-with-chamelium:
    - shard-lnl:          NOTRUN -> [SKIP][77] ([Intel XE#362])
   [77]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-4/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html

  * igt@kms_vrr@flip-basic-fastset:
    - shard-lnl:          NOTRUN -> [FAIL][78] ([Intel XE#2443]) +1 other test fail
   [78]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-7/igt@kms_vrr@flip-basic-fastset.html

  * igt@sriov_basic@enable-vfs-autoprobe-off:
    - shard-lnl:          NOTRUN -> [SKIP][79] ([Intel XE#1091])
   [79]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-1/igt@sriov_basic@enable-vfs-autoprobe-off.html

  * igt@xe_copy_basic@mem-set-linear-0x369:
    - shard-dg2-set2:     NOTRUN -> [SKIP][80] ([Intel XE#1126])
   [80]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@xe_copy_basic@mem-set-linear-0x369.html

  * igt@xe_eudebug@basic-close (NEW):
    - shard-dg2-set2:     NOTRUN -> [SKIP][81] ([Intel XE#1201]) +45 other tests skip
   [81]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-466/igt@xe_eudebug@basic-close.html

  * igt@xe_evict@evict-beng-mixed-many-threads-small:
    - shard-dg2-set2:     [PASS][82] -> [TIMEOUT][83] ([Intel XE#1473] / [Intel XE#402])
   [82]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-435/igt@xe_evict@evict-beng-mixed-many-threads-small.html
   [83]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-463/igt@xe_evict@evict-beng-mixed-many-threads-small.html

  * igt@xe_evict@evict-threads-large:
    - shard-dg2-set2:     NOTRUN -> [TIMEOUT][84] ([Intel XE#1473] / [Intel XE#392])
   [84]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-436/igt@xe_evict@evict-threads-large.html

  * igt@xe_evict@evict-threads-small:
    - shard-lnl:          NOTRUN -> [SKIP][85] ([Intel XE#688]) +1 other test skip
   [85]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-1/igt@xe_evict@evict-threads-small.html

  * igt@xe_exec_basic@multigpu-once-null-rebind:
    - shard-lnl:          NOTRUN -> [SKIP][86] ([Intel XE#1392]) +1 other test skip
   [86]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-8/igt@xe_exec_basic@multigpu-once-null-rebind.html

  * igt@xe_exec_fault_mode@twice-basic-prefetch:
    - shard-dg2-set2:     NOTRUN -> [SKIP][87] ([Intel XE#1201] / [Intel XE#288]) +6 other tests skip
   [87]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@xe_exec_fault_mode@twice-basic-prefetch.html

  * igt@xe_exec_mix_modes@exec-spinner-interrupted-dma-fence:
    - shard-dg2-set2:     NOTRUN -> [SKIP][88] ([Intel XE#1201] / [Intel XE#2360])
   [88]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-463/igt@xe_exec_mix_modes@exec-spinner-interrupted-dma-fence.html

  * igt@xe_exec_threads@threads-mixed-shared-vm-userptr-invalidate:
    - shard-lnl:          [PASS][89] -> [INCOMPLETE][90] ([Intel XE#1169])
   [89]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-5/igt@xe_exec_threads@threads-mixed-shared-vm-userptr-invalidate.html
   [90]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-8/igt@xe_exec_threads@threads-mixed-shared-vm-userptr-invalidate.html

  * igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit:
    - shard-dg2-set2:     NOTRUN -> [SKIP][91] ([Intel XE#1201] / [Intel XE#2229])
   [91]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit.html
    - shard-lnl:          NOTRUN -> [SKIP][92] ([Intel XE#2229])
   [92]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-5/igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit.html

  * igt@xe_module_load@reload-no-display:
    - shard-dg2-set2:     [PASS][93] -> [DMESG-WARN][94] ([Intel XE#2019])
   [93]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@xe_module_load@reload-no-display.html
   [94]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@xe_module_load@reload-no-display.html

  * igt@xe_oa@missing-sample-flags:
    - shard-dg2-set2:     NOTRUN -> [SKIP][95] ([Intel XE#1201] / [Intel XE#2207]) +1 other test skip
   [95]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@xe_oa@missing-sample-flags.html

  * igt@xe_pat@pat-index-xehpc:
    - shard-lnl:          NOTRUN -> [SKIP][96] ([Intel XE#1420])
   [96]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-8/igt@xe_pat@pat-index-xehpc.html

  * igt@xe_pm@d3hot-mmap-vram:
    - shard-lnl:          NOTRUN -> [SKIP][97] ([Intel XE#1948])
   [97]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-3/igt@xe_pm@d3hot-mmap-vram.html

  * igt@xe_pm@s2idle-mocs:
    - shard-lnl:          NOTRUN -> [FAIL][98] ([Intel XE#2028])
   [98]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-1/igt@xe_pm@s2idle-mocs.html

  * igt@xe_pm@s3-multiple-execs:
    - shard-dg2-set2:     [PASS][99] -> [DMESG-WARN][100] ([Intel XE#1551] / [Intel XE#569])
   [99]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-435/igt@xe_pm@s3-multiple-execs.html
   [100]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-435/igt@xe_pm@s3-multiple-execs.html

  * igt@xe_pm@s4-vm-bind-prefetch:
    - shard-lnl:          [PASS][101] -> [ABORT][102] ([Intel XE#1794])
   [101]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-5/igt@xe_pm@s4-vm-bind-prefetch.html
   [102]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-2/igt@xe_pm@s4-vm-bind-prefetch.html

  * igt@xe_query@multigpu-query-engines:
    - shard-lnl:          NOTRUN -> [SKIP][103] ([Intel XE#944])
   [103]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-8/igt@xe_query@multigpu-query-engines.html

  * igt@xe_query@multigpu-query-oa-units:
    - shard-dg2-set2:     NOTRUN -> [SKIP][104] ([Intel XE#1201] / [Intel XE#944])
   [104]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@xe_query@multigpu-query-oa-units.html

  
#### Possible fixes ####

  * igt@kms_async_flips@alternate-sync-async-flip@pipe-d-dp-2:
    - {shard-bmg}:        [DMESG-WARN][105] ([Intel XE#1033]) -> [PASS][106]
   [105]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-bmg-7/igt@kms_async_flips@alternate-sync-async-flip@pipe-d-dp-2.html
   [106]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-7/igt@kms_async_flips@alternate-sync-async-flip@pipe-d-dp-2.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-fencing:
    - {shard-bmg}:        [FAIL][107] ([Intel XE#1426]) -> [PASS][108] +1 other test pass
   [107]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-bmg-7/igt@kms_atomic_transition@plane-all-modeset-transition-fencing.html
   [108]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-8/igt@kms_atomic_transition@plane-all-modeset-transition-fencing.html

  * igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-edp-1:
    - shard-lnl:          [FAIL][109] ([Intel XE#1426]) -> [PASS][110] +1 other test pass
   [109]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-4/igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-edp-1.html
   [110]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-4/igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-edp-1.html

  * igt@kms_big_fb@4-tiled-64bpp-rotate-180:
    - shard-lnl:          [FAIL][111] ([Intel XE#1659]) -> [PASS][112] +1 other test pass
   [111]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-4/igt@kms_big_fb@4-tiled-64bpp-rotate-180.html
   [112]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-7/igt@kms_big_fb@4-tiled-64bpp-rotate-180.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-xe2-ccs@pipe-a-hdmi-a-3:
    - {shard-bmg}:        [FAIL][113] ([Intel XE#2436]) -> [PASS][114] +1 other test pass
   [113]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-bmg-6/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-xe2-ccs@pipe-a-hdmi-a-3.html
   [114]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-4/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-xe2-ccs@pipe-a-hdmi-a-3.html

  * igt@kms_cursor_legacy@torture-move@pipe-a:
    - shard-dg2-set2:     [DMESG-WARN][115] ([Intel XE#877]) -> [PASS][116] +1 other test pass
   [115]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-466/igt@kms_cursor_legacy@torture-move@pipe-a.html
   [116]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_cursor_legacy@torture-move@pipe-a.html

  * igt@kms_flip@2x-flip-vs-panning:
    - {shard-bmg}:        [DMESG-WARN][117] ([Intel XE#877]) -> [PASS][118]
   [117]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-bmg-8/igt@kms_flip@2x-flip-vs-panning.html
   [118]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-6/igt@kms_flip@2x-flip-vs-panning.html

  * igt@kms_flip@flip-vs-suspend@c-dp4:
    - shard-dg2-set2:     [INCOMPLETE][119] ([Intel XE#1195] / [Intel XE#2049]) -> [PASS][120]
   [119]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-463/igt@kms_flip@flip-vs-suspend@c-dp4.html
   [120]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_flip@flip-vs-suspend@c-dp4.html

  * igt@kms_plane@plane-panning-bottom-right-suspend:
    - shard-lnl:          [FAIL][121] ([Intel XE#2028]) -> [PASS][122] +2 other tests pass
   [121]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-1/igt@kms_plane@plane-panning-bottom-right-suspend.html
   [122]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-3/igt@kms_plane@plane-panning-bottom-right-suspend.html

  * igt@kms_plane_scaling@2x-scaler-multi-pipe:
    - {shard-bmg}:        [DMESG-WARN][123] -> [PASS][124] +4 other tests pass
   [123]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-bmg-7/igt@kms_plane_scaling@2x-scaler-multi-pipe.html
   [124]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-1/igt@kms_plane_scaling@2x-scaler-multi-pipe.html

  * igt@kms_pm_backlight@fade:
    - shard-lnl:          [SKIP][125] ([Intel XE#870]) -> [PASS][126]
   [125]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-3/igt@kms_pm_backlight@fade.html
   [126]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-7/igt@kms_pm_backlight@fade.html

  * igt@kms_pm_rpm@universal-planes-dpms:
    - shard-lnl:          [INCOMPLETE][127] ([Intel XE#1620]) -> [PASS][128]
   [127]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-1/igt@kms_pm_rpm@universal-planes-dpms.html
   [128]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-6/igt@kms_pm_rpm@universal-planes-dpms.html

  * igt@kms_pm_rpm@universal-planes-dpms@plane-32:
    - shard-lnl:          [DMESG-FAIL][129] ([Intel XE#1620]) -> [PASS][130]
   [129]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-1/igt@kms_pm_rpm@universal-planes-dpms@plane-32.html
   [130]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-6/igt@kms_pm_rpm@universal-planes-dpms@plane-32.html

  * igt@kms_rotation_crc@primary-4-tiled-reflect-x-180:
    - {shard-bmg}:        [FAIL][131] -> [PASS][132]
   [131]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-bmg-6/igt@kms_rotation_crc@primary-4-tiled-reflect-x-180.html
   [132]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-5/igt@kms_rotation_crc@primary-4-tiled-reflect-x-180.html

  * igt@kms_universal_plane@cursor-fb-leak@pipe-d-dp-2:
    - {shard-bmg}:        [FAIL][133] ([Intel XE#899]) -> [PASS][134]
   [133]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-bmg-2/igt@kms_universal_plane@cursor-fb-leak@pipe-d-dp-2.html
   [134]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-2/igt@kms_universal_plane@cursor-fb-leak@pipe-d-dp-2.html

  * igt@kms_vblank@ts-continuation-dpms-suspend:
    - shard-lnl:          [INCOMPLETE][135] -> [PASS][136] +1 other test pass
   [135]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-7/igt@kms_vblank@ts-continuation-dpms-suspend.html
   [136]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-4/igt@kms_vblank@ts-continuation-dpms-suspend.html

  * igt@kms_vrr@flip-basic:
    - shard-lnl:          [FAIL][137] ([Intel XE#2443]) -> [PASS][138] +2 other tests pass
   [137]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-8/igt@kms_vrr@flip-basic.html
   [138]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-4/igt@kms_vrr@flip-basic.html

  * igt@kms_vrr@flip-dpms:
    - shard-lnl:          [FAIL][139] -> [PASS][140]
   [139]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-5/igt@kms_vrr@flip-dpms.html
   [140]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-7/igt@kms_vrr@flip-dpms.html

  * igt@xe_evict@evict-beng-threads-large:
    - shard-dg2-set2:     [TIMEOUT][141] ([Intel XE#1473]) -> [PASS][142]
   [141]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@xe_evict@evict-beng-threads-large.html
   [142]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@xe_evict@evict-beng-threads-large.html

  * igt@xe_evict@evict-cm-threads-large:
    - shard-dg2-set2:     [TIMEOUT][143] ([Intel XE#1473] / [Intel XE#392]) -> [PASS][144]
   [143]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-436/igt@xe_evict@evict-cm-threads-large.html
   [144]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-436/igt@xe_evict@evict-cm-threads-large.html

  * igt@xe_evict@evict-small-cm:
    - {shard-bmg}:        [INCOMPLETE][145] -> [PASS][146]
   [145]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-bmg-5/igt@xe_evict@evict-small-cm.html
   [146]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-bmg-2/igt@xe_evict@evict-small-cm.html

  * igt@xe_live_ktest@xe_dma_buf:
    - shard-dg2-set2:     [SKIP][147] ([Intel XE#1192]) -> [PASS][148]
   [147]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@xe_live_ktest@xe_dma_buf.html
   [148]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-436/igt@xe_live_ktest@xe_dma_buf.html

  * igt@xe_live_ktest@xe_migrate:
    - shard-dg2-set2:     [SKIP][149] ([Intel XE#1192] / [Intel XE#1201]) -> [PASS][150]
   [149]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-435/igt@xe_live_ktest@xe_migrate.html
   [150]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@xe_live_ktest@xe_migrate.html

  * igt@xe_module_load@many-reload:
    - shard-dg2-set2:     [FAIL][151] ([Intel XE#2136]) -> [PASS][152]
   [151]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-463/igt@xe_module_load@many-reload.html
   [152]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-435/igt@xe_module_load@many-reload.html

  * igt@xe_pm@s4-multiple-execs:
    - shard-lnl:          [ABORT][153] ([Intel XE#1358] / [Intel XE#1794]) -> [PASS][154]
   [153]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-2/igt@xe_pm@s4-multiple-execs.html
   [154]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-3/igt@xe_pm@s4-multiple-execs.html

  
#### Warnings ####

  * igt@kms_big_fb@4-tiled-8bpp-rotate-270:
    - shard-dg2-set2:     [SKIP][155] ([Intel XE#316]) -> [SKIP][156] ([Intel XE#1201] / [Intel XE#316]) +2 other tests skip
   [155]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_big_fb@4-tiled-8bpp-rotate-270.html
   [156]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-466/igt@kms_big_fb@4-tiled-8bpp-rotate-270.html

  * igt@kms_big_fb@linear-16bpp-rotate-90:
    - shard-dg2-set2:     [SKIP][157] ([Intel XE#1201] / [Intel XE#316]) -> [SKIP][158] ([Intel XE#316]) +3 other tests skip
   [157]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-434/igt@kms_big_fb@linear-16bpp-rotate-90.html
   [158]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_big_fb@linear-16bpp-rotate-90.html

  * igt@kms_big_fb@y-tiled-64bpp-rotate-180:
    - shard-dg2-set2:     [SKIP][159] ([Intel XE#1124] / [Intel XE#1201]) -> [SKIP][160] ([Intel XE#1124]) +8 other tests skip
   [159]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-463/igt@kms_big_fb@y-tiled-64bpp-rotate-180.html
   [160]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_big_fb@y-tiled-64bpp-rotate-180.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0:
    - shard-dg2-set2:     [SKIP][161] ([Intel XE#1124]) -> [SKIP][162] ([Intel XE#1124] / [Intel XE#1201]) +7 other tests skip
   [161]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0.html
   [162]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0.html

  * igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow:
    - shard-dg2-set2:     [SKIP][163] ([Intel XE#607]) -> [SKIP][164] ([Intel XE#1201] / [Intel XE#607]) +1 other test skip
   [163]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html
   [164]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html

  * igt@kms_big_fb@yf-tiled-addfb-size-overflow:
    - shard-dg2-set2:     [SKIP][165] ([Intel XE#1201] / [Intel XE#610]) -> [SKIP][166] ([Intel XE#610])
   [165]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-466/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html
   [166]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html

  * igt@kms_bw@connected-linear-tiling-3-displays-3840x2160p:
    - shard-dg2-set2:     [SKIP][167] ([Intel XE#1201] / [Intel XE#2191]) -> [SKIP][168] ([Intel XE#2191])
   [167]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-435/igt@kms_bw@connected-linear-tiling-3-displays-3840x2160p.html
   [168]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_bw@connected-linear-tiling-3-displays-3840x2160p.html

  * igt@kms_bw@linear-tiling-2-displays-1920x1080p:
    - shard-dg2-set2:     [SKIP][169] ([Intel XE#1201] / [Intel XE#367]) -> [SKIP][170] ([Intel XE#367]) +1 other test skip
   [169]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-436/igt@kms_bw@linear-tiling-2-displays-1920x1080p.html
   [170]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_bw@linear-tiling-2-displays-1920x1080p.html

  * igt@kms_bw@linear-tiling-2-displays-3840x2160p:
    - shard-dg2-set2:     [SKIP][171] ([Intel XE#367]) -> [SKIP][172] ([Intel XE#1201] / [Intel XE#367]) +2 other tests skip
   [171]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_bw@linear-tiling-2-displays-3840x2160p.html
   [172]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_bw@linear-tiling-2-displays-3840x2160p.html

  * igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs:
    - shard-dg2-set2:     [SKIP][173] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#787]) -> [SKIP][174] ([Intel XE#455] / [Intel XE#787]) +9 other tests skip
   [173]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-435/igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs.html
   [174]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs.html

  * igt@kms_ccs@crc-primary-basic-4-tiled-xe2-ccs:
    - shard-dg2-set2:     [SKIP][175] ([Intel XE#1252]) -> [SKIP][176] ([Intel XE#1201] / [Intel XE#1252]) +1 other test skip
   [175]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_ccs@crc-primary-basic-4-tiled-xe2-ccs.html
   [176]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-436/igt@kms_ccs@crc-primary-basic-4-tiled-xe2-ccs.html

  * igt@kms_ccs@crc-primary-basic-y-tiled-gen12-rc-ccs-cc@pipe-a-dp-4:
    - shard-dg2-set2:     [SKIP][177] ([Intel XE#787]) -> [SKIP][178] ([Intel XE#1201] / [Intel XE#787]) +48 other tests skip
   [177]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_ccs@crc-primary-basic-y-tiled-gen12-rc-ccs-cc@pipe-a-dp-4.html
   [178]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_ccs@crc-primary-basic-y-tiled-gen12-rc-ccs-cc@pipe-a-dp-4.html

  * igt@kms_ccs@crc-primary-rotation-180-4-tiled-xe2-ccs:
    - shard-dg2-set2:     [SKIP][179] ([Intel XE#1201] / [Intel XE#1252]) -> [SKIP][180] ([Intel XE#1252])
   [179]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-466/igt@kms_ccs@crc-primary-rotation-180-4-tiled-xe2-ccs.html
   [180]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_ccs@crc-primary-rotation-180-4-tiled-xe2-ccs.html

  * igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-mc-ccs@pipe-d-dp-4:
    - shard-dg2-set2:     [SKIP][181] ([Intel XE#455] / [Intel XE#787]) -> [SKIP][182] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#787]) +13 other tests skip
   [181]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-mc-ccs@pipe-d-dp-4.html
   [182]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-436/igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-mc-ccs@pipe-d-dp-4.html

  * igt@kms_ccs@missing-ccs-buffer-y-tiled-gen12-rc-ccs@pipe-d-hdmi-a-6:
    - shard-dg2-set2:     [SKIP][183] ([Intel XE#1201] / [Intel XE#787]) -> [SKIP][184] ([Intel XE#787]) +34 other tests skip
   [183]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-434/igt@kms_ccs@missing-ccs-buffer-y-tiled-gen12-rc-ccs@pipe-d-hdmi-a-6.html
   [184]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_ccs@missing-ccs-buffer-y-tiled-gen12-rc-ccs@pipe-d-hdmi-a-6.html

  * igt@kms_cdclk@mode-transition@pipe-d-dp-4:
    - shard-dg2-set2:     [SKIP][185] ([Intel XE#314]) -> [SKIP][186] ([Intel XE#1201] / [Intel XE#314]) +3 other tests skip
   [185]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_cdclk@mode-transition@pipe-d-dp-4.html
   [186]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@kms_cdclk@mode-transition@pipe-d-dp-4.html

  * igt@kms_cdclk@plane-scaling@pipe-b-dp-4:
    - shard-dg2-set2:     [SKIP][187] ([Intel XE#1152] / [Intel XE#1201]) -> [SKIP][188] ([Intel XE#1152]) +3 other tests skip
   [187]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-436/igt@kms_cdclk@plane-scaling@pipe-b-dp-4.html
   [188]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_cdclk@plane-scaling@pipe-b-dp-4.html

  * igt@kms_chamelium_color@ctm-0-75:
    - shard-dg2-set2:     [SKIP][189] ([Intel XE#306]) -> [SKIP][190] ([Intel XE#1201] / [Intel XE#306]) +1 other test skip
   [189]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_chamelium_color@ctm-0-75.html
   [190]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-466/igt@kms_chamelium_color@ctm-0-75.html

  * igt@kms_chamelium_color@gamma:
    - shard-dg2-set2:     [SKIP][191] ([Intel XE#1201] / [Intel XE#306]) -> [SKIP][192] ([Intel XE#306]) +1 other test skip
   [191]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-463/igt@kms_chamelium_color@gamma.html
   [192]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_chamelium_color@gamma.html

  * igt@kms_chamelium_edid@dp-edid-change-during-suspend:
    - shard-dg2-set2:     [SKIP][193] ([Intel XE#373]) -> [SKIP][194] ([Intel XE#1201] / [Intel XE#373]) +8 other tests skip
   [193]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_chamelium_edid@dp-edid-change-during-suspend.html
   [194]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@kms_chamelium_edid@dp-edid-change-during-suspend.html

  * igt@kms_chamelium_edid@vga-edid-read:
    - shard-dg2-set2:     [SKIP][195] ([Intel XE#1201] / [Intel XE#373]) -> [SKIP][196] ([Intel XE#373]) +6 other tests skip
   [195]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-436/igt@kms_chamelium_edid@vga-edid-read.html
   [196]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_chamelium_edid@vga-edid-read.html

  * igt@kms_content_protection@dp-mst-type-1:
    - shard-dg2-set2:     [SKIP][197] ([Intel XE#307]) -> [SKIP][198] ([Intel XE#1201] / [Intel XE#307])
   [197]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_content_protection@dp-mst-type-1.html
   [198]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-436/igt@kms_content_protection@dp-mst-type-1.html

  * igt@kms_content_protection@lic-type-0:
    - shard-dg2-set2:     [FAIL][199] ([Intel XE#1178] / [Intel XE#1204]) -> [INCOMPLETE][200] ([Intel XE#1195])
   [199]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-466/igt@kms_content_protection@lic-type-0.html
   [200]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@kms_content_protection@lic-type-0.html

  * igt@kms_content_protection@lic-type-0@pipe-a-dp-4:
    - shard-dg2-set2:     [FAIL][201] ([Intel XE#1204]) -> [INCOMPLETE][202] ([Intel XE#1195])
   [201]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-466/igt@kms_content_protection@lic-type-0@pipe-a-dp-4.html
   [202]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@kms_content_protection@lic-type-0@pipe-a-dp-4.html

  * igt@kms_cursor_crc@cursor-offscreen-512x170:
    - shard-dg2-set2:     [SKIP][203] ([Intel XE#1201] / [Intel XE#308]) -> [SKIP][204] ([Intel XE#308])
   [203]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-433/igt@kms_cursor_crc@cursor-offscreen-512x170.html
   [204]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_cursor_crc@cursor-offscreen-512x170.html

  * igt@kms_cursor_crc@cursor-rapid-movement-512x170:
    - shard-dg2-set2:     [SKIP][205] ([Intel XE#308]) -> [SKIP][206] ([Intel XE#1201] / [Intel XE#308]) +1 other test skip
   [205]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_cursor_crc@cursor-rapid-movement-512x170.html
   [206]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_cursor_crc@cursor-rapid-movement-512x170.html

  * igt@kms_cursor_crc@cursor-sliding-max-size:
    - shard-dg2-set2:     [SKIP][207] ([Intel XE#455]) -> [SKIP][208] ([Intel XE#1201] / [Intel XE#455]) +9 other tests skip
   [207]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_cursor_crc@cursor-sliding-max-size.html
   [208]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-466/igt@kms_cursor_crc@cursor-sliding-max-size.html

  * igt@kms_display_modes@mst-extended-mode-negative:
    - shard-dg2-set2:     [SKIP][209] ([Intel XE#1201] / [Intel XE#307]) -> [SKIP][210] ([Intel XE#307])
   [209]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-435/igt@kms_display_modes@mst-extended-mode-negative.html
   [210]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_display_modes@mst-extended-mode-negative.html

  * igt@kms_feature_discovery@dp-mst:
    - shard-dg2-set2:     [SKIP][211] ([Intel XE#1137]) -> [SKIP][212] ([Intel XE#1137] / [Intel XE#1201])
   [211]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_feature_discovery@dp-mst.html
   [212]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_feature_discovery@dp-mst.html

  * igt@kms_flip@flip-vs-suspend:
    - shard-dg2-set2:     [INCOMPLETE][213] ([Intel XE#1195] / [Intel XE#2049]) -> [INCOMPLETE][214] ([Intel XE#1551] / [Intel XE#2049])
   [213]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-463/igt@kms_flip@flip-vs-suspend.html
   [214]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_flip@flip-vs-suspend.html

  * igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff:
    - shard-dg2-set2:     [SKIP][215] ([Intel XE#651]) -> [SKIP][216] ([Intel XE#1201] / [Intel XE#651]) +19 other tests skip
   [215]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff.html
   [216]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-436/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-pri-indfb-draw-blt:
    - shard-dg2-set2:     [SKIP][217] ([Intel XE#1201] / [Intel XE#651]) -> [SKIP][218] ([Intel XE#651]) +14 other tests skip
   [217]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-433/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-pri-indfb-draw-blt.html
   [218]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-pri-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-slowdraw:
    - shard-dg2-set2:     [SKIP][219] ([Intel XE#1201] / [Intel XE#653]) -> [SKIP][220] ([Intel XE#653]) +17 other tests skip
   [219]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-435/igt@kms_frontbuffer_tracking@fbcpsr-slowdraw.html
   [220]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_frontbuffer_tracking@fbcpsr-slowdraw.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-plflip-blt:
    - shard-dg2-set2:     [SKIP][221] ([Intel XE#653]) -> [SKIP][222] ([Intel XE#1201] / [Intel XE#653]) +22 other tests skip
   [221]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-plflip-blt.html
   [222]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-plflip-blt.html

  * igt@kms_pm_backlight@bad-brightness:
    - shard-dg2-set2:     [SKIP][223] ([Intel XE#870]) -> [SKIP][224] ([Intel XE#1201] / [Intel XE#870])
   [223]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_pm_backlight@bad-brightness.html
   [224]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-433/igt@kms_pm_backlight@bad-brightness.html

  * igt@kms_pm_backlight@fade-with-dpms:
    - shard-dg2-set2:     [SKIP][225] ([Intel XE#1201] / [Intel XE#870]) -> [SKIP][226] ([Intel XE#870])
   [225]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-463/igt@kms_pm_backlight@fade-with-dpms.html
   [226]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_pm_backlight@fade-with-dpms.html

  * igt@kms_psr2_sf@fbc-overlay-primary-update-sf-dmg-area:
    - shard-dg2-set2:     [SKIP][227] ([Intel XE#1201] / [Intel XE#1489]) -> [SKIP][228] ([Intel XE#1489]) +1 other test skip
   [227]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-435/igt@kms_psr2_sf@fbc-overlay-primary-update-sf-dmg-area.html
   [228]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_psr2_sf@fbc-overlay-primary-update-sf-dmg-area.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-big-fb:
    - shard-dg2-set2:     [SKIP][229] ([Intel XE#1489]) -> [SKIP][230] ([Intel XE#1201] / [Intel XE#1489]) +2 other tests skip
   [229]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-big-fb.html
   [230]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-466/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-big-fb.html

  * igt@kms_psr2_su@frontbuffer-xrgb8888:
    - shard-dg2-set2:     [SKIP][231] ([Intel XE#1122] / [Intel XE#1201]) -> [SKIP][232] ([Intel XE#1122])
   [231]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-433/igt@kms_psr2_su@frontbuffer-xrgb8888.html
   [232]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_psr2_su@frontbuffer-xrgb8888.html

  * igt@kms_psr2_su@page_flip-p010:
    - shard-dg2-set2:     [SKIP][233] ([Intel XE#1122]) -> [SKIP][234] ([Intel XE#1122] / [Intel XE#1201])
   [233]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_psr2_su@page_flip-p010.html
   [234]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_psr2_su@page_flip-p010.html

  * igt@kms_psr@fbc-pr-cursor-blt:
    - shard-dg2-set2:     [SKIP][235] ([Intel XE#1201] / [Intel XE#929]) -> [SKIP][236] ([Intel XE#929]) +8 other tests skip
   [235]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-466/igt@kms_psr@fbc-pr-cursor-blt.html
   [236]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_psr@fbc-pr-cursor-blt.html

  * igt@kms_psr@fbc-pr-dpms:
    - shard-dg2-set2:     [SKIP][237] ([Intel XE#929]) -> [SKIP][238] ([Intel XE#1201] / [Intel XE#929]) +7 other tests skip
   [237]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_psr@fbc-pr-dpms.html
   [238]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_psr@fbc-pr-dpms.html

  * igt@kms_psr_stress_test@invalidate-primary-flip-overlay:
    - shard-dg2-set2:     [SKIP][239] ([Intel XE#1149]) -> [SKIP][240] ([Intel XE#1149] / [Intel XE#1201])
   [239]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html
   [240]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-463/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html

  * igt@kms_rotation_crc@primary-y-tiled-reflect-x-90:
    - shard-dg2-set2:     [SKIP][241] ([Intel XE#1201] / [Intel XE#327]) -> [SKIP][242] ([Intel XE#327]) +1 other test skip
   [241]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-434/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html
   [242]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html

  * igt@kms_rotation_crc@sprite-rotation-90-pos-100-0:
    - shard-dg2-set2:     [SKIP][243] ([Intel XE#327]) -> [SKIP][244] ([Intel XE#1201] / [Intel XE#327])
   [243]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_rotation_crc@sprite-rotation-90-pos-100-0.html
   [244]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@kms_rotation_crc@sprite-rotation-90-pos-100-0.html

  * igt@kms_tiled_display@basic-test-pattern:
    - shard-dg2-set2:     [FAIL][245] ([Intel XE#1729]) -> [SKIP][246] ([Intel XE#1201] / [Intel XE#362])
   [245]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-436/igt@kms_tiled_display@basic-test-pattern.html
   [246]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-463/igt@kms_tiled_display@basic-test-pattern.html

  * igt@kms_tiled_display@basic-test-pattern-with-chamelium:
    - shard-dg2-set2:     [SKIP][247] ([Intel XE#1201] / [Intel XE#1500]) -> [SKIP][248] ([Intel XE#1201] / [Intel XE#362])
   [247]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-435/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
   [248]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-463/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html

  * igt@kms_vblank@ts-continuation-suspend:
    - shard-dg2-set2:     [DMESG-WARN][249] ([Intel XE#2019]) -> [DMESG-WARN][250] ([Intel XE#2019] / [Intel XE#2226]) +1 other test dmesg-warn
   [249]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-436/igt@kms_vblank@ts-continuation-suspend.html
   [250]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-463/igt@kms_vblank@ts-continuation-suspend.html

  * igt@kms_vrr@flipline:
    - shard-dg2-set2:     [SKIP][251] ([Intel XE#1201] / [Intel XE#455]) -> [SKIP][252] ([Intel XE#455]) +13 other tests skip
   [251]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-466/igt@kms_vrr@flipline.html
   [252]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_vrr@flipline.html

  * igt@kms_vrr@lobf:
    - shard-dg2-set2:     [SKIP][253] -> [SKIP][254] ([Intel XE#1201])
   [253]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_vrr@lobf.html
   [254]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-436/igt@kms_vrr@lobf.html

  * igt@kms_writeback@writeback-fb-id-xrgb2101010:
    - shard-dg2-set2:     [SKIP][255] ([Intel XE#1201] / [Intel XE#756]) -> [SKIP][256] ([Intel XE#756]) +1 other test skip
   [255]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-466/igt@kms_writeback@writeback-fb-id-xrgb2101010.html
   [256]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@kms_writeback@writeback-fb-id-xrgb2101010.html

  * igt@kms_writeback@writeback-invalid-parameters:
    - shard-dg2-set2:     [SKIP][257] ([Intel XE#756]) -> [SKIP][258] ([Intel XE#1201] / [Intel XE#756])
   [257]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@kms_writeback@writeback-invalid-parameters.html
   [258]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-463/igt@kms_writeback@writeback-invalid-parameters.html

  * igt@xe_copy_basic@mem-copy-linear-0xfffe:
    - shard-dg2-set2:     [SKIP][259] ([Intel XE#1123]) -> [SKIP][260] ([Intel XE#1123] / [Intel XE#1201])
   [259]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@xe_copy_basic@mem-copy-linear-0xfffe.html
   [260]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-463/igt@xe_copy_basic@mem-copy-linear-0xfffe.html

  * igt@xe_exec_fault_mode@once-rebind-imm:
    - shard-dg2-set2:     [SKIP][261] ([Intel XE#288]) -> [SKIP][262] ([Intel XE#1201] / [Intel XE#288]) +12 other tests skip
   [261]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@xe_exec_fault_mode@once-rebind-imm.html
   [262]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-466/igt@xe_exec_fault_mode@once-rebind-imm.html

  * igt@xe_exec_fault_mode@twice-userptr-prefetch:
    - shard-dg2-set2:     [SKIP][263] ([Intel XE#1201] / [Intel XE#288]) -> [SKIP][264] ([Intel XE#288]) +15 other tests skip
   [263]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-466/igt@xe_exec_fault_mode@twice-userptr-prefetch.html
   [264]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@xe_exec_fault_mode@twice-userptr-prefetch.html

  * igt@xe_exec_mix_modes@exec-simple-batch-store-lr:
    - shard-dg2-set2:     [SKIP][265] ([Intel XE#2360]) -> [SKIP][266] ([Intel XE#1201] / [Intel XE#2360])
   [265]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@xe_exec_mix_modes@exec-simple-batch-store-lr.html
   [266]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-435/igt@xe_exec_mix_modes@exec-simple-batch-store-lr.html

  * igt@xe_mmap@small-bar:
    - shard-dg2-set2:     [SKIP][267] ([Intel XE#512]) -> [SKIP][268] ([Intel XE#1201] / [Intel XE#512])
   [267]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@xe_mmap@small-bar.html
   [268]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-434/igt@xe_mmap@small-bar.html

  * igt@xe_oa@non-zero-reason:
    - shard-dg2-set2:     [SKIP][269] ([Intel XE#1201] / [Intel XE#2207]) -> [SKIP][270] ([Intel XE#2207]) +3 other tests skip
   [269]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-435/igt@xe_oa@non-zero-reason.html
   [270]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@xe_oa@non-zero-reason.html

  * igt@xe_oa@polling-small-buf:
    - shard-dg2-set2:     [SKIP][271] ([Intel XE#2207]) -> [SKIP][272] ([Intel XE#1201] / [Intel XE#2207]) +2 other tests skip
   [271]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@xe_oa@polling-small-buf.html
   [272]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-436/igt@xe_oa@polling-small-buf.html

  * igt@xe_pm@d3cold-basic-exec:
    - shard-dg2-set2:     [SKIP][273] ([Intel XE#2284] / [Intel XE#366]) -> [SKIP][274] ([Intel XE#1201] / [Intel XE#2284] / [Intel XE#366]) +2 other tests skip
   [273]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@xe_pm@d3cold-basic-exec.html
   [274]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-466/igt@xe_pm@d3cold-basic-exec.html

  * igt@xe_pm@d3cold-multiple-execs:
    - shard-dg2-set2:     [SKIP][275] ([Intel XE#1201] / [Intel XE#2284] / [Intel XE#366]) -> [SKIP][276] ([Intel XE#2284] / [Intel XE#366])
   [275]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-463/igt@xe_pm@d3cold-multiple-execs.html
   [276]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@xe_pm@d3cold-multiple-execs.html

  * igt@xe_query@multigpu-query-uc-fw-version-guc:
    - shard-dg2-set2:     [SKIP][277] ([Intel XE#944]) -> [SKIP][278] ([Intel XE#1201] / [Intel XE#944]) +2 other tests skip
   [277]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-432/igt@xe_query@multigpu-query-uc-fw-version-guc.html
   [278]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-463/igt@xe_query@multigpu-query-uc-fw-version-guc.html

  * igt@xe_query@multigpu-query-uc-fw-version-huc:
    - shard-dg2-set2:     [SKIP][279] ([Intel XE#1201] / [Intel XE#944]) -> [SKIP][280] ([Intel XE#944]) +1 other test skip
   [279]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-dg2-466/igt@xe_query@multigpu-query-uc-fw-version-huc.html
   [280]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-dg2-432/igt@xe_query@multigpu-query-uc-fw-version-huc.html

  * igt@xe_wedged@wedged-at-any-timeout:
    - shard-lnl:          [DMESG-FAIL][281] ([Intel XE#1760]) -> [DMESG-WARN][282] ([Intel XE#1760])
   [281]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_7964/shard-lnl-3/igt@xe_wedged@wedged-at-any-timeout.html
   [282]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/shard-lnl-3/igt@xe_wedged@wedged-at-any-timeout.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [Intel XE#1033]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1033
  [Intel XE#1091]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1091
  [Intel XE#1122]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1122
  [Intel XE#1123]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1123
  [Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
  [Intel XE#1126]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1126
  [Intel XE#1137]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1137
  [Intel XE#1149]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1149
  [Intel XE#1152]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1152
  [Intel XE#1169]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1169
  [Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
  [Intel XE#1192]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1192
  [Intel XE#1195]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1195
  [Intel XE#1201]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1201
  [Intel XE#1204]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1204
  [Intel XE#1252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1252
  [Intel XE#1358]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1358
  [Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
  [Intel XE#1399]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1399
  [Intel XE#1401]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1401
  [Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
  [Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407
  [Intel XE#1420]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1420
  [Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
  [Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424
  [Intel XE#1426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1426
  [Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
  [Intel XE#1473]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1473
  [Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
  [Intel XE#1500]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1500
  [Intel XE#1551]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1551
  [Intel XE#1620]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1620
  [Intel XE#1659]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1659
  [Intel XE#1695]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1695
  [Intel XE#1729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1729
  [Intel XE#1745]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1745
  [Intel XE#1760]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1760
  [Intel XE#1794]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1794
  [Intel XE#1861]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1861
  [Intel XE#1948]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1948
  [Intel XE#2019]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2019
  [Intel XE#2028]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2028
  [Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
  [Intel XE#2136]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2136
  [Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
  [Intel XE#2207]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2207
  [Intel XE#2226]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2226
  [Intel XE#2229]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2229
  [Intel XE#2233]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2233
  [Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
  [Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
  [Intel XE#2251]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2251
  [Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
  [Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
  [Intel XE#2293]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2293
  [Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
  [Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
  [Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
  [Intel XE#2318]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2318
  [Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
  [Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
  [Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
  [Intel XE#2325]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2325
  [Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
  [Intel XE#2329]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2329
  [Intel XE#2333]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2333
  [Intel XE#2360]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2360
  [Intel XE#2362]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2362
  [Intel XE#2380]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2380
  [Intel XE#2429]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2429
  [Intel XE#2436]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2436
  [Intel XE#2443]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2443
  [Intel XE#2472]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2472
  [Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
  [Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
  [Intel XE#307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/307
  [Intel XE#308]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/308
  [Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309
  [Intel XE#314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/314
  [Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
  [Intel XE#324]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/324
  [Intel XE#327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/327
  [Intel XE#361]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/361
  [Intel XE#362]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/362
  [Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366
  [Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
  [Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
  [Intel XE#392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/392
  [Intel XE#402]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/402
  [Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
  [Intel XE#512]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/512
  [Intel XE#569]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/569
  [Intel XE#607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/607
  [Intel XE#610]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/610
  [Intel XE#623]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/623
  [Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
  [Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
  [Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
  [Intel XE#658]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/658
  [Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
  [Intel XE#703]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/703
  [Intel XE#718]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/718
  [Intel XE#756]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/756
  [Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
  [Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
  [Intel XE#877]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/877
  [Intel XE#886]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/886
  [Intel XE#899]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/899
  [Intel XE#911]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/911
  [Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
  [Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944


Build changes
-------------

  * IGT: IGT_7964 -> IGTPW_11551

  IGTPW_11551: 11551
  IGT_7964: 0dabf88262c0349d261248638064b97da35369f8 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-1740-8d0f83b389ec64b9f7f9b2ee241fc352346868f1: 8d0f83b389ec64b9f7f9b2ee241fc352346868f1

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_11551/index.html



* ✗ Fi.CI.IGT: failure for Test coverage for GPU debug support (rev3)
  2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
                   ` (18 preceding siblings ...)
  2024-08-09 15:40 ` ✗ CI.xeFULL: failure for Test coverage for GPU debug support (rev3) Patchwork
@ 2024-08-10 18:30 ` Patchwork
  19 siblings, 0 replies; 41+ messages in thread
From: Patchwork @ 2024-08-10 18:30 UTC (permalink / raw)
  To: Christoph Manszewski; +Cc: igt-dev


== Series Details ==

Series: Test coverage for GPU debug support (rev3)
URL   : https://patchwork.freedesktop.org/series/136623/
State : failure

== Summary ==

CI Bug Log - changes from IGT_7964_full -> IGTPW_11551_full
===========================================================

Summary
-------

  **FAILURE**

  Serious unknown changes introduced by IGTPW_11551_full must be
  verified manually.

  If you believe the reported changes are unrelated to the changes
  introduced in IGTPW_11551_full, please notify your bug team
  (I915-ci-infra@lists.freedesktop.org) so they can document this new
  failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/index.html

Participating hosts (9 -> 9)
----------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in IGTPW_11551_full:

### IGT changes ###

#### Possible regressions ####

  * igt@gem_ppgtt@blt-vs-render-ctxn:
    - shard-glk:          [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-glk7/igt@gem_ppgtt@blt-vs-render-ctxn.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-glk5/igt@gem_ppgtt@blt-vs-render-ctxn.html

  * igt@i915_pm_rps@thresholds:
    - shard-dg2:          NOTRUN -> [SKIP][3]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@i915_pm_rps@thresholds.html

  * igt@i915_pm_rps@thresholds-park:
    - shard-dg1:          NOTRUN -> [SKIP][4]
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-14/igt@i915_pm_rps@thresholds-park.html

  * igt@kms_color@ctm-0-75:
    - shard-snb:          NOTRUN -> [INCOMPLETE][5]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-snb2/igt@kms_color@ctm-0-75.html

  * igt@kms_flip@2x-blocking-absolute-wf_vblank@ab-vga1-hdmi-a1:
    - shard-snb:          [PASS][6] -> [ABORT][7]
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-snb4/igt@kms_flip@2x-blocking-absolute-wf_vblank@ab-vga1-hdmi-a1.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-snb6/igt@kms_flip@2x-blocking-absolute-wf_vblank@ab-vga1-hdmi-a1.html

  * igt@prime_self_import@export-vs-gem_close-race:
    - shard-rkl:          [PASS][8] -> [FAIL][9]
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-rkl-2/igt@prime_self_import@export-vs-gem_close-race.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-2/igt@prime_self_import@export-vs-gem_close-race.html
    - shard-glk:          [PASS][10] -> [FAIL][11]
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-glk9/igt@prime_self_import@export-vs-gem_close-race.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-glk7/igt@prime_self_import@export-vs-gem_close-race.html

  
Known issues
------------

  Here are the changes found in IGTPW_11551_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@debugfs_test@basic-hwmon:
    - shard-rkl:          NOTRUN -> [SKIP][12] ([i915#9318])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-1/igt@debugfs_test@basic-hwmon.html

  * igt@device_reset@cold-reset-bound:
    - shard-mtlp:         NOTRUN -> [SKIP][13] ([i915#11078]) +1 other test skip
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-5/igt@device_reset@cold-reset-bound.html
    - shard-dg2:          NOTRUN -> [SKIP][14] ([i915#11078])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-10/igt@device_reset@cold-reset-bound.html
    - shard-rkl:          NOTRUN -> [SKIP][15] ([i915#11078])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-4/igt@device_reset@cold-reset-bound.html

  * igt@device_reset@unbind-cold-reset-rebind:
    - shard-dg1:          NOTRUN -> [SKIP][16] ([i915#11078])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-14/igt@device_reset@unbind-cold-reset-rebind.html

  * igt@drm_fdinfo@busy-hang@bcs0:
    - shard-dg1:          NOTRUN -> [SKIP][17] ([i915#8414]) +7 other tests skip
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-15/igt@drm_fdinfo@busy-hang@bcs0.html

  * igt@drm_fdinfo@most-busy-idle-check-all@vecs1:
    - shard-dg2:          NOTRUN -> [SKIP][18] ([i915#8414]) +14 other tests skip
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@drm_fdinfo@most-busy-idle-check-all@vecs1.html

  * igt@drm_fdinfo@virtual-idle:
    - shard-rkl:          NOTRUN -> [FAIL][19] ([i915#11900] / [i915#7742])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-2/igt@drm_fdinfo@virtual-idle.html

  * igt@gem_bad_reloc@negative-reloc-bltcopy:
    - shard-mtlp:         NOTRUN -> [SKIP][20] ([i915#3281]) +9 other tests skip
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-6/igt@gem_bad_reloc@negative-reloc-bltcopy.html

  * igt@gem_basic@multigpu-create-close:
    - shard-dg2:          NOTRUN -> [SKIP][21] ([i915#7697])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@gem_basic@multigpu-create-close.html

  * igt@gem_ctx_persistence@heartbeat-close:
    - shard-dg2:          NOTRUN -> [SKIP][22] ([i915#8555])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@gem_ctx_persistence@heartbeat-close.html

  * igt@gem_ctx_persistence@heartbeat-stop:
    - shard-mtlp:         NOTRUN -> [SKIP][23] ([i915#8555]) +1 other test skip
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-8/igt@gem_ctx_persistence@heartbeat-stop.html

  * igt@gem_ctx_sseu@engines:
    - shard-mtlp:         NOTRUN -> [SKIP][24] ([i915#280])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-8/igt@gem_ctx_sseu@engines.html

  * igt@gem_ctx_sseu@invalid-sseu:
    - shard-rkl:          NOTRUN -> [SKIP][25] ([i915#280])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-4/igt@gem_ctx_sseu@invalid-sseu.html
    - shard-dg1:          NOTRUN -> [SKIP][26] ([i915#280]) +1 other test skip
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@gem_ctx_sseu@invalid-sseu.html

  * igt@gem_eio@hibernate:
    - shard-rkl:          NOTRUN -> [ABORT][27] ([i915#7975] / [i915#8213])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-3/igt@gem_eio@hibernate.html

  * igt@gem_eio@kms:
    - shard-dg2:          [PASS][28] -> [FAIL][29] ([i915#5784])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-6/igt@gem_eio@kms.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-5/igt@gem_eio@kms.html

  * igt@gem_eio@reset-stress:
    - shard-snb:          NOTRUN -> [FAIL][30] ([i915#8898])
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-snb5/igt@gem_eio@reset-stress.html

  * igt@gem_eio@unwedge-stress:
    - shard-dg1:          NOTRUN -> [FAIL][31] ([i915#5784])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_balancer@invalid-bonds:
    - shard-dg2:          NOTRUN -> [SKIP][32] ([i915#4036])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@gem_exec_balancer@invalid-bonds.html

  * igt@gem_exec_balancer@parallel-bb-first:
    - shard-rkl:          NOTRUN -> [SKIP][33] ([i915#4525]) +1 other test skip
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-4/igt@gem_exec_balancer@parallel-bb-first.html

  * igt@gem_exec_big@single:
    - shard-tglu:         [PASS][34] -> [ABORT][35] ([i915#11713])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-tglu-8/igt@gem_exec_big@single.html
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-6/igt@gem_exec_big@single.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-glk:          NOTRUN -> [FAIL][36] ([i915#2846])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-glk1/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-tglu:         NOTRUN -> [FAIL][37] ([i915#2842])
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-7/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-pace:
    - shard-mtlp:         NOTRUN -> [SKIP][38] ([i915#4473] / [i915#4771])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-4/igt@gem_exec_fair@basic-pace.html

  * igt@gem_exec_fair@basic-pace-solo:
    - shard-dg2:          NOTRUN -> [SKIP][39] ([i915#3539]) +2 other tests skip
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@gem_exec_fair@basic-pace-solo.html

  * igt@gem_exec_fair@basic-pace@vecs0:
    - shard-rkl:          [PASS][40] -> [FAIL][41] ([i915#2842]) +2 other tests fail
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-rkl-3/igt@gem_exec_fair@basic-pace@vecs0.html
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-3/igt@gem_exec_fair@basic-pace@vecs0.html

  * igt@gem_exec_fair@basic-throttle@rcs0:
    - shard-glk:          NOTRUN -> [FAIL][42] ([i915#2842]) +2 other tests fail
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-glk8/igt@gem_exec_fair@basic-throttle@rcs0.html

  * igt@gem_exec_fence@submit67:
    - shard-mtlp:         NOTRUN -> [SKIP][43] ([i915#4812])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-2/igt@gem_exec_fence@submit67.html

  * igt@gem_exec_flush@basic-batch-kernel-default-wb:
    - shard-dg1:          NOTRUN -> [SKIP][44] ([i915#3539] / [i915#4852]) +2 other tests skip
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-15/igt@gem_exec_flush@basic-batch-kernel-default-wb.html

  * igt@gem_exec_flush@basic-uc-pro-default:
    - shard-dg2:          NOTRUN -> [SKIP][45] ([i915#3539] / [i915#4852]) +3 other tests skip
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-7/igt@gem_exec_flush@basic-uc-pro-default.html

  * igt@gem_exec_flush@basic-uc-prw-default:
    - shard-dg1:          NOTRUN -> [SKIP][46] ([i915#3539])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@gem_exec_flush@basic-uc-prw-default.html

  * igt@gem_exec_reloc@basic-cpu-read:
    - shard-dg2:          NOTRUN -> [SKIP][47] ([i915#3281]) +11 other tests skip
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-5/igt@gem_exec_reloc@basic-cpu-read.html

  * igt@gem_exec_reloc@basic-gtt-read-noreloc:
    - shard-rkl:          NOTRUN -> [SKIP][48] ([i915#3281]) +7 other tests skip
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-5/igt@gem_exec_reloc@basic-gtt-read-noreloc.html

  * igt@gem_exec_reloc@basic-wc-read:
    - shard-dg1:          NOTRUN -> [SKIP][49] ([i915#3281]) +10 other tests skip
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-13/igt@gem_exec_reloc@basic-wc-read.html

  * igt@gem_exec_schedule@preempt-queue-contexts:
    - shard-dg1:          NOTRUN -> [SKIP][50] ([i915#4812]) +1 other test skip
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-18/igt@gem_exec_schedule@preempt-queue-contexts.html

  * igt@gem_exec_schedule@reorder-wide:
    - shard-dg2:          NOTRUN -> [SKIP][51] ([i915#4537] / [i915#4812]) +1 other test skip
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-5/igt@gem_exec_schedule@reorder-wide.html

  * igt@gem_fence_thrash@bo-write-verify-none:
    - shard-mtlp:         NOTRUN -> [SKIP][52] ([i915#4860]) +3 other tests skip
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-7/igt@gem_fence_thrash@bo-write-verify-none.html

  * igt@gem_fenced_exec_thrash@no-spare-fences-busy-interruptible:
    - shard-dg2:          NOTRUN -> [SKIP][53] ([i915#4860]) +1 other test skip
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-10/igt@gem_fenced_exec_thrash@no-spare-fences-busy-interruptible.html

  * igt@gem_lmem_swapping@heavy-verify-multi:
    - shard-mtlp:         NOTRUN -> [SKIP][54] ([i915#4613]) +3 other tests skip
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-2/igt@gem_lmem_swapping@heavy-verify-multi.html

  * igt@gem_lmem_swapping@heavy-verify-multi-ccs:
    - shard-tglu:         NOTRUN -> [SKIP][55] ([i915#4613]) +1 other test skip
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-7/igt@gem_lmem_swapping@heavy-verify-multi-ccs.html
    - shard-glk:          NOTRUN -> [SKIP][56] ([i915#4613]) +2 other tests skip
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-glk8/igt@gem_lmem_swapping@heavy-verify-multi-ccs.html

  * igt@gem_lmem_swapping@parallel-random-verify:
    - shard-rkl:          NOTRUN -> [SKIP][57] ([i915#4613]) +5 other tests skip
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-4/igt@gem_lmem_swapping@parallel-random-verify.html

  * igt@gem_media_vme:
    - shard-mtlp:         NOTRUN -> [SKIP][58] ([i915#284])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-5/igt@gem_media_vme.html

  * igt@gem_mmap@short-mmap:
    - shard-mtlp:         NOTRUN -> [SKIP][59] ([i915#4083]) +2 other tests skip
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-7/igt@gem_mmap@short-mmap.html

  * igt@gem_mmap_gtt@bad-object:
    - shard-dg2:          NOTRUN -> [SKIP][60] ([i915#4077]) +5 other tests skip
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-7/igt@gem_mmap_gtt@bad-object.html

  * igt@gem_mmap_gtt@coherency:
    - shard-dg1:          NOTRUN -> [SKIP][61] ([i915#4077]) +11 other tests skip
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@gem_mmap_gtt@coherency.html

  * igt@gem_mmap_wc@close:
    - shard-dg2:          NOTRUN -> [SKIP][62] ([i915#4083]) +4 other tests skip
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@gem_mmap_wc@close.html

  * igt@gem_mmap_wc@write-read:
    - shard-dg1:          NOTRUN -> [SKIP][63] ([i915#4083]) +7 other tests skip
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@gem_mmap_wc@write-read.html

  * igt@gem_pread@self:
    - shard-dg2:          NOTRUN -> [SKIP][64] ([i915#3282]) +4 other tests skip
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-6/igt@gem_pread@self.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-rkl:          NOTRUN -> [SKIP][65] ([i915#3282]) +7 other tests skip
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-4/igt@gem_pwrite@basic-exhaustion.html
    - shard-glk:          NOTRUN -> [WARN][66] ([i915#2658])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-glk9/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_pxp@create-valid-protected-context:
    - shard-mtlp:         NOTRUN -> [SKIP][67] ([i915#4270]) +2 other tests skip
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-4/igt@gem_pxp@create-valid-protected-context.html

  * igt@gem_pxp@fail-invalid-protected-context:
    - shard-dg1:          NOTRUN -> [SKIP][68] ([i915#4270]) +1 other test skip
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@gem_pxp@fail-invalid-protected-context.html

  * igt@gem_pxp@protected-raw-src-copy-not-readible:
    - shard-dg2:          NOTRUN -> [SKIP][69] ([i915#4270]) +2 other tests skip
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@gem_pxp@protected-raw-src-copy-not-readible.html
    - shard-rkl:          NOTRUN -> [SKIP][70] ([i915#4270]) +1 other test skip
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-6/igt@gem_pxp@protected-raw-src-copy-not-readible.html
    - shard-tglu:         NOTRUN -> [SKIP][71] ([i915#4270])
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-6/igt@gem_pxp@protected-raw-src-copy-not-readible.html

  * igt@gem_readwrite@beyond-eob:
    - shard-dg1:          NOTRUN -> [SKIP][72] ([i915#3282]) +3 other tests skip
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@gem_readwrite@beyond-eob.html

  * igt@gem_readwrite@new-obj:
    - shard-mtlp:         NOTRUN -> [SKIP][73] ([i915#3282]) +4 other tests skip
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@gem_readwrite@new-obj.html

  * igt@gem_render_copy@y-tiled-mc-ccs-to-vebox-yf-tiled:
    - shard-mtlp:         NOTRUN -> [SKIP][74] ([i915#8428]) +4 other tests skip
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@gem_render_copy@y-tiled-mc-ccs-to-vebox-yf-tiled.html

  * igt@gem_render_copy@y-tiled-to-vebox-linear:
    - shard-dg2:          NOTRUN -> [SKIP][75] ([i915#5190] / [i915#8428]) +6 other tests skip
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-7/igt@gem_render_copy@y-tiled-to-vebox-linear.html

  * igt@gem_softpin@evict-snoop:
    - shard-dg1:          NOTRUN -> [SKIP][76] ([i915#4885]) +1 other test skip
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@gem_softpin@evict-snoop.html
    - shard-mtlp:         NOTRUN -> [SKIP][77] ([i915#4885])
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@gem_softpin@evict-snoop.html

  * igt@gem_tiled_pread_basic:
    - shard-mtlp:         NOTRUN -> [SKIP][78] ([i915#4079]) +1 other test skip
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-1/igt@gem_tiled_pread_basic.html

  * igt@gem_tiling_max_stride:
    - shard-mtlp:         NOTRUN -> [SKIP][79] ([i915#4077]) +7 other tests skip
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-5/igt@gem_tiling_max_stride.html

  * igt@gem_unfence_active_buffers:
    - shard-dg2:          NOTRUN -> [SKIP][80] ([i915#4879])
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-3/igt@gem_unfence_active_buffers.html

  * igt@gem_userptr_blits@map-fixed-invalidate:
    - shard-dg2:          NOTRUN -> [SKIP][81] ([i915#3297] / [i915#4880])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@gem_userptr_blits@map-fixed-invalidate.html

  * igt@gem_userptr_blits@map-fixed-invalidate-busy:
    - shard-mtlp:         NOTRUN -> [SKIP][82] ([i915#3297])
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-2/igt@gem_userptr_blits@map-fixed-invalidate-busy.html

  * igt@gem_userptr_blits@unsync-overlap:
    - shard-dg1:          NOTRUN -> [SKIP][83] ([i915#3297]) +1 other test skip
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-15/igt@gem_userptr_blits@unsync-overlap.html

  * igt@gem_userptr_blits@unsync-unmap:
    - shard-dg2:          NOTRUN -> [SKIP][84] ([i915#3297]) +2 other tests skip
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@gem_userptr_blits@unsync-unmap.html
    - shard-rkl:          NOTRUN -> [SKIP][85] ([i915#3297]) +1 other test skip
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-1/igt@gem_userptr_blits@unsync-unmap.html
    - shard-tglu:         NOTRUN -> [SKIP][86] ([i915#3297])
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-8/igt@gem_userptr_blits@unsync-unmap.html

  * igt@gen9_exec_parse@allowed-single:
    - shard-mtlp:         NOTRUN -> [SKIP][87] ([i915#2856]) +3 other tests skip
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-5/igt@gen9_exec_parse@allowed-single.html

  * igt@gen9_exec_parse@bb-start-out:
    - shard-dg1:          NOTRUN -> [SKIP][88] ([i915#2527]) +1 other test skip
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@gen9_exec_parse@bb-start-out.html

  * igt@gen9_exec_parse@secure-batches:
    - shard-rkl:          NOTRUN -> [SKIP][89] ([i915#2527]) +2 other tests skip
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-4/igt@gen9_exec_parse@secure-batches.html

  * igt@gen9_exec_parse@unaligned-access:
    - shard-dg2:          NOTRUN -> [SKIP][90] ([i915#2856]) +4 other tests skip
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@gen9_exec_parse@unaligned-access.html

  * igt@i915_fb_tiling:
    - shard-mtlp:         NOTRUN -> [SKIP][91] ([i915#4881])
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-2/igt@i915_fb_tiling.html

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-dg1:          [PASS][92] -> [ABORT][93] ([i915#9820])
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg1-16/igt@i915_module_load@reload-with-fault-injection.html
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-15/igt@i915_module_load@reload-with-fault-injection.html

  * igt@i915_pm_rc6_residency@rc6-idle@gt0-vcs0:
    - shard-dg1:          [PASS][94] -> [FAIL][95] ([i915#3591]) +1 other test fail
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg1-15/igt@i915_pm_rc6_residency@rc6-idle@gt0-vcs0.html
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@i915_pm_rc6_residency@rc6-idle@gt0-vcs0.html

  * igt@i915_pm_sseu@full-enable:
    - shard-dg1:          NOTRUN -> [SKIP][96] ([i915#4387])
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@i915_pm_sseu@full-enable.html

  * igt@i915_suspend@basic-s3-without-i915:
    - shard-snb:          [PASS][97] -> [ABORT][98] ([i915#11703])
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-snb2/igt@i915_suspend@basic-s3-without-i915.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-snb2/igt@i915_suspend@basic-s3-without-i915.html

  * igt@intel_hwmon@hwmon-read:
    - shard-rkl:          NOTRUN -> [SKIP][99] ([i915#7707])
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-5/igt@intel_hwmon@hwmon-read.html

  * igt@intel_hwmon@hwmon-write:
    - shard-mtlp:         NOTRUN -> [SKIP][100] ([i915#7707])
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-1/igt@intel_hwmon@hwmon-write.html

  * igt@kms_addfb_basic@addfb25-x-tiled-mismatch-legacy:
    - shard-mtlp:         NOTRUN -> [SKIP][101] ([i915#4212]) +2 other tests skip
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-5/igt@kms_addfb_basic@addfb25-x-tiled-mismatch-legacy.html

  * igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
    - shard-dg2:          NOTRUN -> [SKIP][102] ([i915#5190]) +3 other tests skip
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-5/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html

  * igt@kms_addfb_basic@clobberred-modifier:
    - shard-dg2:          NOTRUN -> [SKIP][103] ([i915#4212]) +2 other tests skip
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-5/igt@kms_addfb_basic@clobberred-modifier.html

  * igt@kms_async_flips@async-flip-with-page-flip-events@pipe-b-hdmi-a-4-y-rc-ccs:
    - shard-dg1:          NOTRUN -> [SKIP][104] ([i915#8709]) +7 other tests skip
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-15/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-b-hdmi-a-4-y-rc-ccs.html

  * igt@kms_async_flips@async-flip-with-page-flip-events@pipe-d-hdmi-a-2-4-mc-ccs:
    - shard-dg2:          NOTRUN -> [SKIP][105] ([i915#8709]) +11 other tests skip
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-3/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-d-hdmi-a-2-4-mc-ccs.html

  * igt@kms_async_flips@invalid-async-flip:
    - shard-mtlp:         NOTRUN -> [SKIP][106] ([i915#6228])
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@kms_async_flips@invalid-async-flip.html

  * igt@kms_atomic@plane-primary-overlay-mutable-zpos:
    - shard-rkl:          NOTRUN -> [SKIP][107] ([i915#9531])
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-2/igt@kms_atomic@plane-primary-overlay-mutable-zpos.html

  * igt@kms_atomic_transition@modeset-transition@2x-outputs:
    - shard-glk:          [PASS][108] -> [FAIL][109] ([i915#11859])
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-glk6/igt@kms_atomic_transition@modeset-transition@2x-outputs.html
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-glk8/igt@kms_atomic_transition@modeset-transition@2x-outputs.html

  * igt@kms_atomic_transition@plane-all-modeset-transition@pipe-a-hdmi-a-1:
    - shard-snb:          [PASS][110] -> [FAIL][111] ([i915#5956])
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-snb7/igt@kms_atomic_transition@plane-all-modeset-transition@pipe-a-hdmi-a-1.html
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-snb7/igt@kms_atomic_transition@plane-all-modeset-transition@pipe-a-hdmi-a-1.html

  * igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-hdmi-a-2:
    - shard-rkl:          [PASS][112] -> [FAIL][113] ([i915#11808])
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-rkl-5/igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-hdmi-a-2.html
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-1/igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-hdmi-a-2.html

  * igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-hdmi-a-4:
    - shard-dg1:          [PASS][114] -> [FAIL][115] ([i915#5956])
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg1-17/igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-hdmi-a-4.html
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-15/igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-hdmi-a-4.html

  * igt@kms_big_fb@4-tiled-8bpp-rotate-0:
    - shard-rkl:          NOTRUN -> [SKIP][116] ([i915#5286]) +6 other tests skip
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-6/igt@kms_big_fb@4-tiled-8bpp-rotate-0.html

  * igt@kms_big_fb@4-tiled-addfb-size-offset-overflow:
    - shard-dg1:          NOTRUN -> [SKIP][117] ([i915#5286])
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@kms_big_fb@4-tiled-addfb-size-offset-overflow.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-async-flip:
    - shard-dg1:          NOTRUN -> [SKIP][118] ([i915#4538] / [i915#5286]) +2 other tests skip
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-18/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-async-flip.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180:
    - shard-tglu:         NOTRUN -> [SKIP][119] ([i915#5286]) +1 other test skip
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-9/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180.html

  * igt@kms_big_fb@x-tiled-32bpp-rotate-270:
    - shard-rkl:          NOTRUN -> [SKIP][120] ([i915#3638]) +4 other tests skip
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-1/igt@kms_big_fb@x-tiled-32bpp-rotate-270.html

  * igt@kms_big_fb@y-tiled-64bpp-rotate-90:
    - shard-dg1:          NOTRUN -> [SKIP][121] ([i915#3638]) +6 other tests skip
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-64bpp-rotate-0:
    - shard-dg2:          NOTRUN -> [SKIP][122] ([i915#4538] / [i915#5190]) +8 other tests skip
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@kms_big_fb@yf-tiled-64bpp-rotate-0.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180:
    - shard-dg1:          NOTRUN -> [SKIP][123] ([i915#4538]) +5 other tests skip
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-13/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180.html

  * igt@kms_big_joiner@basic:
    - shard-dg1:          NOTRUN -> [SKIP][124] ([i915#10656])
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@kms_big_joiner@basic.html
    - shard-mtlp:         NOTRUN -> [SKIP][125] ([i915#10656])
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-5/igt@kms_big_joiner@basic.html

  * igt@kms_big_joiner@basic-force-joiner:
    - shard-dg2:          NOTRUN -> [SKIP][126] ([i915#10656])
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@kms_big_joiner@basic-force-joiner.html

  * igt@kms_ccs@bad-aux-stride-yf-tiled-ccs@pipe-c-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][127] ([i915#6095]) +27 other tests skip
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-8/igt@kms_ccs@bad-aux-stride-yf-tiled-ccs@pipe-c-edp-1.html

  * igt@kms_ccs@bad-rotation-90-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][128] ([i915#6095]) +61 other tests skip
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-5/igt@kms_ccs@bad-rotation-90-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-2.html

  * igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs-cc@pipe-a-dp-4:
    - shard-dg2:          NOTRUN -> [SKIP][129] ([i915#10307] / [i915#6095]) +166 other tests skip
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs-cc@pipe-a-dp-4.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs-cc@pipe-b-hdmi-a-1:
    - shard-tglu:         NOTRUN -> [SKIP][130] ([i915#6095]) +23 other tests skip
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-6/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs-cc@pipe-b-hdmi-a-1.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-a-hdmi-a-3:
    - shard-dg1:          NOTRUN -> [SKIP][131] ([i915#6095]) +95 other tests skip
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-13/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-a-hdmi-a-3.html

  * igt@kms_ccs@random-ccs-data-yf-tiled-ccs@pipe-d-hdmi-a-1:
    - shard-dg2:          NOTRUN -> [SKIP][132] ([i915#10307] / [i915#10434] / [i915#6095]) +4 other tests skip
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-2/igt@kms_ccs@random-ccs-data-yf-tiled-ccs@pipe-d-hdmi-a-1.html

  * igt@kms_cdclk@mode-transition@pipe-a-dp-4:
    - shard-dg2:          NOTRUN -> [SKIP][133] ([i915#11616] / [i915#7213]) +3 other tests skip
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@kms_cdclk@mode-transition@pipe-a-dp-4.html

  * igt@kms_cdclk@mode-transition@pipe-b-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][134] ([i915#7213] / [i915#9010]) +3 other tests skip
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@kms_cdclk@mode-transition@pipe-b-edp-1.html

  * igt@kms_cdclk@plane-scaling:
    - shard-dg1:          NOTRUN -> [SKIP][135] ([i915#3742])
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@kms_cdclk@plane-scaling.html

  * igt@kms_cdclk@plane-scaling@pipe-c-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][136] ([i915#4087]) +3 other tests skip
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@kms_cdclk@plane-scaling@pipe-c-edp-1.html

  * igt@kms_cdclk@plane-scaling@pipe-d-dp-4:
    - shard-dg2:          NOTRUN -> [SKIP][137] ([i915#4087]) +3 other tests skip
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@kms_cdclk@plane-scaling@pipe-d-dp-4.html

  * igt@kms_chamelium_edid@dp-mode-timings:
    - shard-mtlp:         NOTRUN -> [SKIP][138] ([i915#7828]) +6 other tests skip
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-1/igt@kms_chamelium_edid@dp-mode-timings.html

  * igt@kms_chamelium_edid@hdmi-edid-change-during-suspend:
    - shard-rkl:          NOTRUN -> [SKIP][139] ([i915#7828]) +8 other tests skip
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-1/igt@kms_chamelium_edid@hdmi-edid-change-during-suspend.html
    - shard-dg1:          NOTRUN -> [SKIP][140] ([i915#7828]) +6 other tests skip
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-15/igt@kms_chamelium_edid@hdmi-edid-change-during-suspend.html

  * igt@kms_chamelium_frames@dp-crc-fast:
    - shard-dg2:          NOTRUN -> [SKIP][141] ([i915#7828]) +7 other tests skip
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-10/igt@kms_chamelium_frames@dp-crc-fast.html

  * igt@kms_chamelium_hpd@vga-hpd-without-ddc:
    - shard-tglu:         NOTRUN -> [SKIP][142] ([i915#7828]) +3 other tests skip
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-8/igt@kms_chamelium_hpd@vga-hpd-without-ddc.html

  * igt@kms_content_protection@atomic:
    - shard-dg1:          NOTRUN -> [SKIP][143] ([i915#7116] / [i915#9424]) +1 other test skip
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@kms_content_protection@atomic.html

  * igt@kms_content_protection@atomic-dpms:
    - shard-rkl:          NOTRUN -> [SKIP][144] ([i915#7118] / [i915#9424])
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-5/igt@kms_content_protection@atomic-dpms.html

  * igt@kms_content_protection@dp-mst-lic-type-1:
    - shard-dg2:          NOTRUN -> [SKIP][145] ([i915#3299]) +1 other test skip
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-6/igt@kms_content_protection@dp-mst-lic-type-1.html

  * igt@kms_content_protection@legacy:
    - shard-mtlp:         NOTRUN -> [SKIP][146] ([i915#6944] / [i915#9424])
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-2/igt@kms_content_protection@legacy.html

  * igt@kms_content_protection@lic-type-1:
    - shard-dg2:          NOTRUN -> [SKIP][147] ([i915#9424])
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@kms_content_protection@lic-type-1.html
    - shard-tglu:         NOTRUN -> [SKIP][148] ([i915#6944] / [i915#9424])
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-8/igt@kms_content_protection@lic-type-1.html

  * igt@kms_content_protection@mei-interface:
    - shard-rkl:          NOTRUN -> [SKIP][149] ([i915#9424]) +2 other tests skip
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-5/igt@kms_content_protection@mei-interface.html

  * igt@kms_content_protection@srm:
    - shard-mtlp:         NOTRUN -> [SKIP][150] ([i915#6944])
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-6/igt@kms_content_protection@srm.html

  * igt@kms_cursor_crc@cursor-offscreen-32x10:
    - shard-mtlp:         NOTRUN -> [SKIP][151] ([i915#3555] / [i915#8814]) +5 other tests skip
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-5/igt@kms_cursor_crc@cursor-offscreen-32x10.html

  * igt@kms_cursor_crc@cursor-offscreen-512x512:
    - shard-tglu:         NOTRUN -> [SKIP][152] ([i915#11453])
   [152]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-9/igt@kms_cursor_crc@cursor-offscreen-512x512.html

  * igt@kms_cursor_crc@cursor-onscreen-32x10:
    - shard-dg2:          NOTRUN -> [SKIP][153] ([i915#3555]) +1 other test skip
   [153]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-2/igt@kms_cursor_crc@cursor-onscreen-32x10.html

  * igt@kms_cursor_crc@cursor-onscreen-512x170:
    - shard-dg1:          NOTRUN -> [SKIP][154] ([i915#11453]) +2 other tests skip
   [154]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@kms_cursor_crc@cursor-onscreen-512x170.html

  * igt@kms_cursor_crc@cursor-onscreen-512x512:
    - shard-mtlp:         NOTRUN -> [SKIP][155] ([i915#3359]) +1 other test skip
   [155]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-1/igt@kms_cursor_crc@cursor-onscreen-512x512.html

  * igt@kms_cursor_crc@cursor-random-512x512:
    - shard-dg2:          NOTRUN -> [SKIP][156] ([i915#11453]) +1 other test skip
   [156]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-2/igt@kms_cursor_crc@cursor-random-512x512.html

  * igt@kms_cursor_crc@cursor-rapid-movement-32x10:
    - shard-rkl:          NOTRUN -> [SKIP][157] ([i915#3555]) +5 other tests skip
   [157]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-6/igt@kms_cursor_crc@cursor-rapid-movement-32x10.html
    - shard-tglu:         NOTRUN -> [SKIP][158] ([i915#3555]) +3 other tests skip
   [158]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-7/igt@kms_cursor_crc@cursor-rapid-movement-32x10.html

  * igt@kms_cursor_crc@cursor-rapid-movement-64x21:
    - shard-mtlp:         NOTRUN -> [SKIP][159] ([i915#8814]) +1 other test skip
   [159]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-2/igt@kms_cursor_crc@cursor-rapid-movement-64x21.html

  * igt@kms_cursor_crc@cursor-sliding-512x512:
    - shard-rkl:          NOTRUN -> [SKIP][160] ([i915#11453]) +2 other tests skip
   [160]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-2/igt@kms_cursor_crc@cursor-sliding-512x512.html

  * igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions:
    - shard-mtlp:         NOTRUN -> [SKIP][161] +23 other tests skip
   [161]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-2/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions.html

  * igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size:
    - shard-mtlp:         NOTRUN -> [SKIP][162] ([i915#9809]) +3 other tests skip
   [162]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size.html

  * igt@kms_cursor_legacy@modeset-atomic-cursor-hotspot:
    - shard-dg1:          NOTRUN -> [SKIP][163] ([i915#9067])
   [163]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-15/igt@kms_cursor_legacy@modeset-atomic-cursor-hotspot.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions:
    - shard-tglu:         NOTRUN -> [SKIP][164] ([i915#4103])
   [164]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-8/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions.html
    - shard-dg2:          NOTRUN -> [SKIP][165] ([i915#4103] / [i915#4213])
   [165]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions.html
    - shard-rkl:          NOTRUN -> [SKIP][166] ([i915#4103])
   [166]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-6/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size:
    - shard-dg1:          NOTRUN -> [SKIP][167] ([i915#4103] / [i915#4213])
   [167]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-13/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html

  * igt@kms_dirtyfb@psr-dirtyfb-ioctl:
    - shard-rkl:          NOTRUN -> [SKIP][168] ([i915#9723])
   [168]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-3/igt@kms_dirtyfb@psr-dirtyfb-ioctl.html
    - shard-dg1:          NOTRUN -> [SKIP][169] ([i915#9723])
   [169]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@kms_dirtyfb@psr-dirtyfb-ioctl.html

  * igt@kms_draw_crc@draw-method-mmap-gtt:
    - shard-dg2:          NOTRUN -> [SKIP][170] ([i915#8812])
   [170]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@kms_draw_crc@draw-method-mmap-gtt.html

  * igt@kms_dsc@dsc-basic:
    - shard-dg2:          NOTRUN -> [SKIP][171] ([i915#3555] / [i915#3840])
   [171]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-5/igt@kms_dsc@dsc-basic.html

  * igt@kms_dsc@dsc-fractional-bpp-with-bpc:
    - shard-rkl:          NOTRUN -> [SKIP][172] ([i915#3840])
   [172]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-6/igt@kms_dsc@dsc-fractional-bpp-with-bpc.html
    - shard-tglu:         NOTRUN -> [SKIP][173] ([i915#3840])
   [173]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-6/igt@kms_dsc@dsc-fractional-bpp-with-bpc.html

  * igt@kms_dsc@dsc-with-bpc:
    - shard-dg1:          NOTRUN -> [SKIP][174] ([i915#3555] / [i915#3840])
   [174]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-13/igt@kms_dsc@dsc-with-bpc.html

  * igt@kms_dsc@dsc-with-output-formats-with-bpc:
    - shard-dg2:          NOTRUN -> [SKIP][175] ([i915#3840] / [i915#9053])
   [175]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-7/igt@kms_dsc@dsc-with-output-formats-with-bpc.html

  * igt@kms_fbcon_fbt@psr:
    - shard-dg2:          NOTRUN -> [SKIP][176] ([i915#3469]) +1 other test skip
   [176]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-10/igt@kms_fbcon_fbt@psr.html

  * igt@kms_feature_discovery@chamelium:
    - shard-dg2:          NOTRUN -> [SKIP][177] ([i915#4854])
   [177]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-2/igt@kms_feature_discovery@chamelium.html

  * igt@kms_feature_discovery@display-2x:
    - shard-dg2:          NOTRUN -> [SKIP][178] ([i915#1839])
   [178]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-2/igt@kms_feature_discovery@display-2x.html

  * igt@kms_feature_discovery@psr2:
    - shard-rkl:          NOTRUN -> [SKIP][179] ([i915#658])
   [179]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-5/igt@kms_feature_discovery@psr2.html

  * igt@kms_fence_pin_leak:
    - shard-dg1:          NOTRUN -> [SKIP][180] ([i915#4881])
   [180]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-15/igt@kms_fence_pin_leak.html

  * igt@kms_flip@2x-flip-vs-blocking-wf-vblank@ab-vga1-hdmi-a1:
    - shard-snb:          [PASS][181] -> [FAIL][182] ([i915#2122]) +1 other test fail
   [181]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-snb5/igt@kms_flip@2x-flip-vs-blocking-wf-vblank@ab-vga1-hdmi-a1.html
   [182]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-snb7/igt@kms_flip@2x-flip-vs-blocking-wf-vblank@ab-vga1-hdmi-a1.html

  * igt@kms_flip@2x-flip-vs-dpms:
    - shard-rkl:          NOTRUN -> [SKIP][183] +27 other tests skip
   [183]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-3/igt@kms_flip@2x-flip-vs-dpms.html

  * igt@kms_flip@2x-flip-vs-fences-interruptible:
    - shard-tglu:         NOTRUN -> [SKIP][184] ([i915#3637]) +1 other test skip
   [184]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-8/igt@kms_flip@2x-flip-vs-fences-interruptible.html
    - shard-dg2:          NOTRUN -> [SKIP][185] ([i915#8381])
   [185]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-7/igt@kms_flip@2x-flip-vs-fences-interruptible.html

  * igt@kms_flip@2x-flip-vs-suspend:
    - shard-mtlp:         NOTRUN -> [SKIP][186] ([i915#3637]) +4 other tests skip
   [186]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@kms_flip@2x-flip-vs-suspend.html

  * igt@kms_flip@2x-modeset-vs-vblank-race:
    - shard-dg2:          NOTRUN -> [SKIP][187] +23 other tests skip
   [187]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-10/igt@kms_flip@2x-modeset-vs-vblank-race.html

  * igt@kms_flip@2x-wf_vblank-ts-check-interruptible:
    - shard-dg1:          NOTRUN -> [SKIP][188] ([i915#9934]) +8 other tests skip
   [188]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@kms_flip@2x-wf_vblank-ts-check-interruptible.html

  * igt@kms_flip@flip-vs-fences:
    - shard-mtlp:         NOTRUN -> [SKIP][189] ([i915#8381])
   [189]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@kms_flip@flip-vs-fences.html

  * igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-64bpp-4tile-upscaling@pipe-a-valid-mode:
    - shard-rkl:          NOTRUN -> [SKIP][190] ([i915#2672]) +5 other tests skip
   [190]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-4/igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-64bpp-4tile-upscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling@pipe-a-valid-mode:
    - shard-dg1:          NOTRUN -> [SKIP][191] ([i915#2587] / [i915#2672]) +4 other tests skip
   [191]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-upscaling@pipe-a-default-mode:
    - shard-mtlp:         NOTRUN -> [SKIP][192] ([i915#2672]) +3 other tests skip
   [192]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-upscaling@pipe-a-default-mode.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-valid-mode:
    - shard-dg2:          NOTRUN -> [SKIP][193] ([i915#2672]) +3 other tests skip
   [193]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-7/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-upscaling@pipe-a-valid-mode:
    - shard-tglu:         NOTRUN -> [SKIP][194] ([i915#2587] / [i915#2672])
   [194]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-9/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-upscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-downscaling@pipe-a-default-mode:
    - shard-mtlp:         NOTRUN -> [SKIP][195] ([i915#3555] / [i915#8810])
   [195]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-downscaling@pipe-a-default-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-default-mode:
    - shard-mtlp:         NOTRUN -> [SKIP][196] ([i915#2672] / [i915#3555]) +1 other test skip
   [196]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-1/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-default-mode.html

  * igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-indfb-draw-mmap-gtt:
    - shard-mtlp:         NOTRUN -> [SKIP][197] ([i915#8708]) +7 other tests skip
   [197]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-6/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-indfb-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-render:
    - shard-dg1:          NOTRUN -> [SKIP][198] +36 other tests skip
   [198]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbc-rgb565-draw-mmap-gtt:
    - shard-dg1:          NOTRUN -> [SKIP][199] ([i915#8708]) +16 other tests skip
   [199]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-18/igt@kms_frontbuffer_tracking@fbc-rgb565-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@fbc-tiling-y:
    - shard-mtlp:         NOTRUN -> [SKIP][200] ([i915#10055]) +1 other test skip
   [200]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-2/igt@kms_frontbuffer_tracking@fbc-tiling-y.html
    - shard-dg2:          NOTRUN -> [SKIP][201] ([i915#10055])
   [201]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-5/igt@kms_frontbuffer_tracking@fbc-tiling-y.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-mmap-cpu:
    - shard-dg2:          NOTRUN -> [SKIP][202] ([i915#3458]) +12 other tests skip
   [202]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-2/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-cur-indfb-draw-mmap-gtt:
    - shard-rkl:          NOTRUN -> [SKIP][203] ([i915#1825]) +38 other tests skip
   [203]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-3/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-cur-indfb-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-shrfb-draw-render:
    - shard-dg2:          NOTRUN -> [SKIP][204] ([i915#5354]) +35 other tests skip
   [204]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-shrfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-blt:
    - shard-snb:          NOTRUN -> [SKIP][205] +44 other tests skip
   [205]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-snb6/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-fullscreen:
    - shard-tglu:         NOTRUN -> [SKIP][206] +34 other tests skip
   [206]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-7/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-fullscreen.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-shrfb-fliptrack-mmap-gtt:
    - shard-dg2:          NOTRUN -> [SKIP][207] ([i915#8708]) +22 other tests skip
   [207]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-2/igt@kms_frontbuffer_tracking@fbcpsr-2p-shrfb-fliptrack-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-rgb101010-draw-blt:
    - shard-dg1:          NOTRUN -> [SKIP][208] ([i915#3458]) +20 other tests skip
   [208]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@kms_frontbuffer_tracking@fbcpsr-rgb101010-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-tiling-4:
    - shard-rkl:          NOTRUN -> [SKIP][209] ([i915#5439])
   [209]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-tiling-4.html
    - shard-dg1:          NOTRUN -> [SKIP][210] ([i915#5439])
   [210]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-13/igt@kms_frontbuffer_tracking@fbcpsr-tiling-4.html

  * igt@kms_frontbuffer_tracking@pipe-fbc-rte:
    - shard-dg2:          NOTRUN -> [SKIP][211] ([i915#9766])
   [211]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-5/igt@kms_frontbuffer_tracking@pipe-fbc-rte.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-indfb-plflip-blt:
    - shard-rkl:          NOTRUN -> [SKIP][212] ([i915#3023]) +23 other tests skip
   [212]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-4/igt@kms_frontbuffer_tracking@psr-1p-primscrn-indfb-plflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-pgflip-blt:
    - shard-mtlp:         NOTRUN -> [SKIP][213] ([i915#1825]) +29 other tests skip
   [213]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-rgb565-draw-mmap-cpu:
    - shard-dg2:          NOTRUN -> [SKIP][214] ([i915#10433] / [i915#3458]) +2 other tests skip
   [214]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@kms_frontbuffer_tracking@psr-rgb565-draw-mmap-cpu.html

  * igt@kms_hdmi_inject@inject-audio:
    - shard-dg1:          NOTRUN -> [SKIP][215] ([i915#433])
   [215]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-15/igt@kms_hdmi_inject@inject-audio.html

  * igt@kms_hdr@static-toggle:
    - shard-dg1:          NOTRUN -> [SKIP][216] ([i915#3555] / [i915#8228])
   [216]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-15/igt@kms_hdr@static-toggle.html

  * igt@kms_hdr@static-toggle-suspend:
    - shard-dg2:          NOTRUN -> [SKIP][217] ([i915#3555] / [i915#8228]) +1 other test skip
   [217]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-10/igt@kms_hdr@static-toggle-suspend.html

  * igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
    - shard-mtlp:         NOTRUN -> [SKIP][218] ([i915#4816])
   [218]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html

  * igt@kms_panel_fitting@atomic-fastset:
    - shard-dg2:          NOTRUN -> [SKIP][219] ([i915#6301]) +1 other test skip
   [219]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-5/igt@kms_panel_fitting@atomic-fastset.html

  * igt@kms_panel_fitting@legacy:
    - shard-tglu:         NOTRUN -> [SKIP][220] ([i915#6301])
   [220]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-5/igt@kms_panel_fitting@legacy.html
    - shard-rkl:          NOTRUN -> [SKIP][221] ([i915#6301])
   [221]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-5/igt@kms_panel_fitting@legacy.html

  * igt@kms_plane_multiple@tiling-yf:
    - shard-mtlp:         NOTRUN -> [SKIP][222] ([i915#3555] / [i915#8806])
   [222]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@kms_plane_multiple@tiling-yf.html

  * igt@kms_plane_scaling@intel-max-src-size:
    - shard-dg2:          NOTRUN -> [SKIP][223] ([i915#6953] / [i915#9423])
   [223]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@kms_plane_scaling@intel-max-src-size.html

  * igt@kms_plane_scaling@intel-max-src-size@pipe-a-hdmi-a-3:
    - shard-dg1:          NOTRUN -> [FAIL][224] ([i915#8292])
   [224]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-13/igt@kms_plane_scaling@intel-max-src-size@pipe-a-hdmi-a-3.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation@pipe-d-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][225] ([i915#5176]) +3 other tests skip
   [225]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-1/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation@pipe-d-edp-1.html

  * igt@kms_plane_scaling@plane-scaler-unity-scaling-with-rotation@pipe-d-hdmi-a-4:
    - shard-dg1:          NOTRUN -> [SKIP][226] ([i915#9423]) +7 other tests skip
   [226]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@kms_plane_scaling@plane-scaler-unity-scaling-with-rotation@pipe-d-hdmi-a-4.html

  * igt@kms_plane_scaling@plane-scaler-with-clipping-clamping-rotation@pipe-b-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][227] ([i915#9423]) +7 other tests skip
   [227]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-5/igt@kms_plane_scaling@plane-scaler-with-clipping-clamping-rotation@pipe-b-hdmi-a-2.html

  * igt@kms_plane_scaling@plane-upscale-factor-0-25-with-rotation@pipe-d-hdmi-a-1:
    - shard-tglu:         NOTRUN -> [SKIP][228] ([i915#9423]) +3 other tests skip
   [228]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-6/igt@kms_plane_scaling@plane-upscale-factor-0-25-with-rotation@pipe-d-hdmi-a-1.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-a-hdmi-a-3:
    - shard-dg2:          NOTRUN -> [SKIP][229] ([i915#5235] / [i915#9423]) +2 other tests skip
   [229]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-7/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-a-hdmi-a-3.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-b-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][230] ([i915#5235]) +9 other tests skip
   [230]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-4/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-b-edp-1.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-b-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][231] ([i915#5235]) +1 other test skip
   [231]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-6/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-b-hdmi-a-2.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25@pipe-c-hdmi-a-1:
    - shard-tglu:         NOTRUN -> [SKIP][232] ([i915#9728]) +3 other tests skip
   [232]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-5/igt@kms_plane_scaling@planes-downscale-factor-0-25@pipe-c-hdmi-a-1.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-25@pipe-d-hdmi-a-2:
    - shard-dg2:          NOTRUN -> [SKIP][233] ([i915#9423]) +20 other tests skip
   [233]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-3/igt@kms_plane_scaling@planes-downscale-factor-0-25@pipe-d-hdmi-a-2.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-75@pipe-d-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][234] ([i915#3555] / [i915#5235]) +1 other test skip
   [234]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-6/igt@kms_plane_scaling@planes-downscale-factor-0-75@pipe-d-edp-1.html

  * igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25@pipe-b-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][235] ([i915#9728]) +5 other tests skip
   [235]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-5/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-25@pipe-b-hdmi-a-2.html

  * igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-25@pipe-a-hdmi-a-1:
    - shard-glk:          NOTRUN -> [SKIP][236] +190 other tests skip
   [236]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-glk5/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-25@pipe-a-hdmi-a-1.html

  * igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-25@pipe-d-hdmi-a-4:
    - shard-dg1:          NOTRUN -> [SKIP][237] ([i915#9728]) +11 other tests skip
   [237]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-25@pipe-d-hdmi-a-4.html

  * igt@kms_pm_backlight@basic-brightness:
    - shard-rkl:          NOTRUN -> [SKIP][238] ([i915#5354]) +1 other test skip
   [238]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-5/igt@kms_pm_backlight@basic-brightness.html
    - shard-dg1:          NOTRUN -> [SKIP][239] ([i915#5354])
   [239]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-14/igt@kms_pm_backlight@basic-brightness.html

  * igt@kms_pm_dc@dc5-psr:
    - shard-rkl:          NOTRUN -> [SKIP][240] ([i915#9685])
   [240]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-6/igt@kms_pm_dc@dc5-psr.html
    - shard-dg1:          NOTRUN -> [SKIP][241] ([i915#9685])
   [241]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-13/igt@kms_pm_dc@dc5-psr.html

  * igt@kms_pm_dc@dc6-dpms:
    - shard-dg2:          NOTRUN -> [SKIP][242] ([i915#5978])
   [242]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-2/igt@kms_pm_dc@dc6-dpms.html

  * igt@kms_pm_dc@dc9-dpms:
    - shard-tglu:         [PASS][243] -> [SKIP][244] ([i915#4281])
   [243]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-tglu-6/igt@kms_pm_dc@dc9-dpms.html
   [244]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-7/igt@kms_pm_dc@dc9-dpms.html

  * igt@kms_pm_rpm@dpms-lpsp:
    - shard-dg1:          NOTRUN -> [SKIP][245] ([i915#9519]) +1 other test skip
   [245]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@kms_pm_rpm@dpms-lpsp.html

  * igt@kms_pm_rpm@dpms-mode-unset-non-lpsp:
    - shard-tglu:         NOTRUN -> [SKIP][246] ([i915#9519])
   [246]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-5/igt@kms_pm_rpm@dpms-mode-unset-non-lpsp.html

  * igt@kms_pm_rpm@dpms-non-lpsp:
    - shard-dg2:          NOTRUN -> [SKIP][247] ([i915#9519])
   [247]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@kms_pm_rpm@dpms-non-lpsp.html

  * igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait:
    - shard-rkl:          [PASS][248] -> [SKIP][249] ([i915#9519]) +2 other tests skip
   [248]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-rkl-5/igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait.html
   [249]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-2/igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait.html

  * igt@kms_prime@basic-crc-vgem:
    - shard-dg2:          NOTRUN -> [SKIP][250] ([i915#6524] / [i915#6805]) +1 other test skip
   [250]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-7/igt@kms_prime@basic-crc-vgem.html

  * igt@kms_prime@basic-modeset-hybrid:
    - shard-mtlp:         NOTRUN -> [SKIP][251] ([i915#6524]) +1 other test skip
   [251]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-4/igt@kms_prime@basic-modeset-hybrid.html

  * igt@kms_psr2_sf@fbc-cursor-plane-move-continuous-exceed-sf:
    - shard-dg2:          NOTRUN -> [SKIP][252] ([i915#11520]) +5 other tests skip
   [252]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-6/igt@kms_psr2_sf@fbc-cursor-plane-move-continuous-exceed-sf.html
    - shard-rkl:          NOTRUN -> [SKIP][253] ([i915#11520]) +6 other tests skip
   [253]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-5/igt@kms_psr2_sf@fbc-cursor-plane-move-continuous-exceed-sf.html

  * igt@kms_psr2_sf@fbc-cursor-plane-move-continuous-exceed-sf@psr2-pipe-a-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][254] ([i915#9808]) +1 other test skip
   [254]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-5/igt@kms_psr2_sf@fbc-cursor-plane-move-continuous-exceed-sf@psr2-pipe-a-edp-1.html

  * igt@kms_psr2_sf@fbc-overlay-plane-update-sf-dmg-area:
    - shard-dg1:          NOTRUN -> [SKIP][255] ([i915#11520]) +4 other tests skip
   [255]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@kms_psr2_sf@fbc-overlay-plane-update-sf-dmg-area.html

  * igt@kms_psr2_sf@fbc-plane-move-sf-dmg-area:
    - shard-tglu:         NOTRUN -> [SKIP][256] ([i915#11520]) +1 other test skip
   [256]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-5/igt@kms_psr2_sf@fbc-plane-move-sf-dmg-area.html

  * igt@kms_psr2_su@page_flip-xrgb8888:
    - shard-rkl:          NOTRUN -> [SKIP][257] ([i915#9683])
   [257]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-6/igt@kms_psr2_su@page_flip-xrgb8888.html
    - shard-dg1:          NOTRUN -> [SKIP][258] ([i915#9683]) +1 other test skip
   [258]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-13/igt@kms_psr2_su@page_flip-xrgb8888.html

  * igt@kms_psr@fbc-pr-primary-mmap-cpu:
    - shard-mtlp:         NOTRUN -> [SKIP][259] ([i915#9688]) +10 other tests skip
   [259]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@kms_psr@fbc-pr-primary-mmap-cpu.html

  * igt@kms_psr@fbc-pr-primary-mmap-gtt:
    - shard-dg2:          NOTRUN -> [SKIP][260] ([i915#1072] / [i915#9732]) +20 other tests skip
   [260]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-2/igt@kms_psr@fbc-pr-primary-mmap-gtt.html

  * igt@kms_psr@psr-cursor-mmap-gtt:
    - shard-tglu:         NOTRUN -> [SKIP][261] ([i915#9732]) +8 other tests skip
   [261]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-5/igt@kms_psr@psr-cursor-mmap-gtt.html

  * igt@kms_psr@psr-primary-mmap-cpu:
    - shard-dg2:          NOTRUN -> [SKIP][262] ([i915#1072] / [i915#9673] / [i915#9732]) +2 other tests skip
   [262]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@kms_psr@psr-primary-mmap-cpu.html

  * igt@kms_psr@psr-sprite-mmap-cpu:
    - shard-dg1:          NOTRUN -> [SKIP][263] ([i915#1072] / [i915#9732]) +19 other tests skip
   [263]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@kms_psr@psr-sprite-mmap-cpu.html

  * igt@kms_psr@psr2-sprite-mmap-cpu:
    - shard-rkl:          NOTRUN -> [SKIP][264] ([i915#1072] / [i915#9732]) +21 other tests skip
   [264]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-3/igt@kms_psr@psr2-sprite-mmap-cpu.html

  * igt@kms_psr_stress_test@flip-primary-invalidate-overlay:
    - shard-dg2:          NOTRUN -> [SKIP][265] ([i915#9685])
   [265]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@kms_psr_stress_test@flip-primary-invalidate-overlay.html

  * igt@kms_rotation_crc@bad-tiling:
    - shard-mtlp:         NOTRUN -> [SKIP][266] ([i915#4235]) +1 other test skip
   [266]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-2/igt@kms_rotation_crc@bad-tiling.html

  * igt@kms_rotation_crc@primary-rotation-270:
    - shard-dg2:          NOTRUN -> [SKIP][267] ([i915#11131] / [i915#4235])
   [267]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@kms_rotation_crc@primary-rotation-270.html

  * igt@kms_rotation_crc@primary-y-tiled-reflect-x-180:
    - shard-mtlp:         NOTRUN -> [SKIP][268] ([i915#5289])
   [268]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-2/igt@kms_rotation_crc@primary-y-tiled-reflect-x-180.html

  * igt@kms_rotation_crc@primary-y-tiled-reflect-x-90:
    - shard-dg2:          NOTRUN -> [SKIP][269] ([i915#11131] / [i915#5190])
   [269]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-5/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-270:
    - shard-rkl:          NOTRUN -> [SKIP][270] ([i915#5289])
   [270]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-1/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-270.html
    - shard-tglu:         NOTRUN -> [SKIP][271] ([i915#5289])
   [271]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-8/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-270.html

  * igt@kms_scaling_modes@scaling-mode-center:
    - shard-dg1:          NOTRUN -> [SKIP][272] ([i915#3555]) +2 other tests skip
   [272]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-18/igt@kms_scaling_modes@scaling-mode-center.html

  * igt@kms_setmode@basic-clone-single-crtc:
    - shard-mtlp:         NOTRUN -> [SKIP][273] ([i915#3555] / [i915#8809])
   [273]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-3/igt@kms_setmode@basic-clone-single-crtc.html

  * igt@kms_sysfs_edid_timing:
    - shard-dg2:          [PASS][274] -> [FAIL][275] ([IGT#2])
   [274]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-11/igt@kms_sysfs_edid_timing.html
   [275]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@kms_sysfs_edid_timing.html

  * igt@kms_tiled_display@basic-test-pattern-with-chamelium:
    - shard-rkl:          NOTRUN -> [SKIP][276] ([i915#8623])
   [276]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-6/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
    - shard-dg1:          NOTRUN -> [SKIP][277] ([i915#8623])
   [277]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-15/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html

  * igt@kms_universal_plane@cursor-fb-leak@pipe-c-hdmi-a-1:
    - shard-tglu:         [PASS][278] -> [FAIL][279] ([i915#9196]) +1 other test fail
   [278]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-tglu-6/igt@kms_universal_plane@cursor-fb-leak@pipe-c-hdmi-a-1.html
   [279]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-5/igt@kms_universal_plane@cursor-fb-leak@pipe-c-hdmi-a-1.html

  * igt@kms_vrr@flip-basic:
    - shard-mtlp:         NOTRUN -> [SKIP][280] ([i915#3555] / [i915#8808])
   [280]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-1/igt@kms_vrr@flip-basic.html

  * igt@kms_vrr@seamless-rr-switch-virtual:
    - shard-dg1:          NOTRUN -> [SKIP][281] ([i915#9906])
   [281]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@kms_vrr@seamless-rr-switch-virtual.html

  * igt@kms_vrr@seamless-rr-switch-vrr:
    - shard-mtlp:         NOTRUN -> [SKIP][282] ([i915#8808] / [i915#9906])
   [282]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-7/igt@kms_vrr@seamless-rr-switch-vrr.html

  * igt@kms_writeback@writeback-check-output:
    - shard-dg1:          NOTRUN -> [SKIP][283] ([i915#2437])
   [283]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@kms_writeback@writeback-check-output.html

  * igt@kms_writeback@writeback-fb-id-xrgb2101010:
    - shard-rkl:          NOTRUN -> [SKIP][284] ([i915#2437] / [i915#9412])
   [284]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-5/igt@kms_writeback@writeback-fb-id-xrgb2101010.html
    - shard-tglu:         NOTRUN -> [SKIP][285] ([i915#2437] / [i915#9412])
   [285]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-5/igt@kms_writeback@writeback-fb-id-xrgb2101010.html
    - shard-glk:          NOTRUN -> [SKIP][286] ([i915#2437])
   [286]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-glk9/igt@kms_writeback@writeback-fb-id-xrgb2101010.html

  * igt@kms_writeback@writeback-invalid-parameters:
    - shard-mtlp:         NOTRUN -> [SKIP][287] ([i915#2437]) +1 other test skip
   [287]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-5/igt@kms_writeback@writeback-invalid-parameters.html

  * igt@perf@global-sseu-config:
    - shard-mtlp:         NOTRUN -> [SKIP][288] ([i915#7387])
   [288]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-8/igt@perf@global-sseu-config.html

  * igt@perf@mi-rpc:
    - shard-dg1:          NOTRUN -> [SKIP][289] ([i915#2434])
   [289]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@perf@mi-rpc.html

  * igt@perf@non-zero-reason@0-rcs0:
    - shard-dg2:          NOTRUN -> [FAIL][290] ([i915#9100])
   [290]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-10/igt@perf@non-zero-reason@0-rcs0.html

  * igt@perf_pmu@busy-double-start@vecs1:
    - shard-dg2:          NOTRUN -> [FAIL][291] ([i915#4349]) +3 other tests fail
   [291]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-10/igt@perf_pmu@busy-double-start@vecs1.html

  * igt@perf_pmu@frequency@gt0:
    - shard-dg2:          NOTRUN -> [FAIL][292] ([i915#6806])
   [292]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-10/igt@perf_pmu@frequency@gt0.html

  * igt@perf_pmu@rc6-all-gts:
    - shard-dg1:          NOTRUN -> [SKIP][293] ([i915#8516]) +1 other test skip
   [293]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-16/igt@perf_pmu@rc6-all-gts.html
    - shard-rkl:          NOTRUN -> [SKIP][294] ([i915#8516])
   [294]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-3/igt@perf_pmu@rc6-all-gts.html

  * igt@prime_vgem@basic-fence-flip:
    - shard-dg2:          NOTRUN -> [SKIP][295] ([i915#3708])
   [295]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@prime_vgem@basic-fence-flip.html

  * igt@prime_vgem@basic-fence-mmap:
    - shard-dg2:          NOTRUN -> [SKIP][296] ([i915#3708] / [i915#4077])
   [296]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-5/igt@prime_vgem@basic-fence-mmap.html

  * igt@prime_vgem@basic-fence-read:
    - shard-rkl:          NOTRUN -> [SKIP][297] ([i915#3291] / [i915#3708])
   [297]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-6/igt@prime_vgem@basic-fence-read.html

  * igt@prime_vgem@basic-write:
    - shard-mtlp:         NOTRUN -> [SKIP][298] ([i915#10216] / [i915#3708])
   [298]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-1/igt@prime_vgem@basic-write.html

  * igt@prime_vgem@fence-flip-hang:
    - shard-dg1:          NOTRUN -> [SKIP][299] ([i915#3708]) +1 other test skip
   [299]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-17/igt@prime_vgem@fence-flip-hang.html

  * igt@sriov_basic@bind-unbind-vf:
    - shard-dg1:          NOTRUN -> [SKIP][300] ([i915#9917]) +1 other test skip
   [300]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-13/igt@sriov_basic@bind-unbind-vf.html

  * igt@sriov_basic@enable-vfs-autoprobe-on:
    - shard-rkl:          NOTRUN -> [SKIP][301] ([i915#9917])
   [301]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-5/igt@sriov_basic@enable-vfs-autoprobe-on.html

  * igt@syncobj_wait@invalid-wait-zero-handles:
    - shard-rkl:          NOTRUN -> [FAIL][302] ([i915#9781])
   [302]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-6/igt@syncobj_wait@invalid-wait-zero-handles.html
    - shard-dg1:          NOTRUN -> [FAIL][303] ([i915#9781])
   [303]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-13/igt@syncobj_wait@invalid-wait-zero-handles.html
    - shard-glk:          NOTRUN -> [FAIL][304] ([i915#9781])
   [304]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-glk8/igt@syncobj_wait@invalid-wait-zero-handles.html

  
#### Possible fixes ####

  * igt@gem_exec_fair@basic-none@bcs0:
    - shard-rkl:          [FAIL][305] ([i915#2842]) -> [PASS][306]
   [305]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-rkl-5/igt@gem_exec_fair@basic-none@bcs0.html
   [306]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-4/igt@gem_exec_fair@basic-none@bcs0.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-tglu:         [FAIL][307] ([i915#2842]) -> [PASS][308]
   [307]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-tglu-10/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [308]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-8/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-snb:          [ABORT][309] ([i915#9820]) -> [PASS][310]
   [309]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-snb6/igt@i915_module_load@reload-with-fault-injection.html
   [310]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-snb7/igt@i915_module_load@reload-with-fault-injection.html
    - shard-mtlp:         [ABORT][311] ([i915#10131] / [i915#10887] / [i915#9820]) -> [PASS][312]
   [311]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-mtlp-6/igt@i915_module_load@reload-with-fault-injection.html
   [312]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-7/igt@i915_module_load@reload-with-fault-injection.html
    - shard-dg2:          [ABORT][313] ([i915#9820]) -> [PASS][314]
   [313]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-3/igt@i915_module_load@reload-with-fault-injection.html
   [314]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-10/igt@i915_module_load@reload-with-fault-injection.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-fencing@pipe-a-hdmi-a-1:
    - shard-snb:          [FAIL][315] ([i915#5956]) -> [PASS][316]
   [315]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-snb4/igt@kms_atomic_transition@plane-all-modeset-transition-fencing@pipe-a-hdmi-a-1.html
   [316]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-snb5/igt@kms_atomic_transition@plane-all-modeset-transition-fencing@pipe-a-hdmi-a-1.html

  * igt@kms_atomic_transition@plane-all-modeset-transition@pipe-a-hdmi-a-4:
    - shard-dg1:          [FAIL][317] ([i915#5956]) -> [PASS][318]
   [317]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg1-14/igt@kms_atomic_transition@plane-all-modeset-transition@pipe-a-hdmi-a-4.html
   [318]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-15/igt@kms_atomic_transition@plane-all-modeset-transition@pipe-a-hdmi-a-4.html

  * igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-hdmi-a-1:
    - shard-dg2:          [FAIL][319] ([i915#5956]) -> [PASS][320]
   [319]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-2/igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-hdmi-a-1.html
   [320]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-10/igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-hdmi-a-1.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-render:
    - shard-snb:          [SKIP][321] -> [PASS][322] +3 other tests pass
   [321]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-snb4/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-render.html
   [322]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-snb7/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbc-rgb565-draw-pwrite:
    - shard-dg2:          [FAIL][323] ([i915#6880]) -> [PASS][324]
   [323]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-10/igt@kms_frontbuffer_tracking@fbc-rgb565-draw-pwrite.html
   [324]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-10/igt@kms_frontbuffer_tracking@fbc-rgb565-draw-pwrite.html

  * igt@kms_plane@pixel-format@pipe-b-plane-3:
    - shard-mtlp:         [ABORT][325] ([i915#10354]) -> [PASS][326]
   [325]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-mtlp-4/igt@kms_plane@pixel-format@pipe-b-plane-3.html
   [326]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-8/igt@kms_plane@pixel-format@pipe-b-plane-3.html

  * igt@kms_pm_rpm@modeset-lpsp-stress-no-wait:
    - shard-dg2:          [SKIP][327] ([i915#9519]) -> [PASS][328]
   [327]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-5/igt@kms_pm_rpm@modeset-lpsp-stress-no-wait.html
   [328]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-10/igt@kms_pm_rpm@modeset-lpsp-stress-no-wait.html

  * igt@kms_rotation_crc@primary-4-tiled-reflect-x-180:
    - shard-dg2:          [INCOMPLETE][329] -> [PASS][330]
   [329]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-10/igt@kms_rotation_crc@primary-4-tiled-reflect-x-180.html
   [330]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@kms_rotation_crc@primary-4-tiled-reflect-x-180.html

  * igt@kms_universal_plane@cursor-fb-leak@pipe-a-hdmi-a-1:
    - shard-tglu:         [FAIL][331] ([i915#9196]) -> [PASS][332]
   [331]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-tglu-6/igt@kms_universal_plane@cursor-fb-leak@pipe-a-hdmi-a-1.html
   [332]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-tglu-5/igt@kms_universal_plane@cursor-fb-leak@pipe-a-hdmi-a-1.html

  * igt@perf_pmu@busy-double-start@vcs0:
    - shard-dg1:          [FAIL][333] ([i915#4349]) -> [PASS][334] +1 other test pass
   [333]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg1-17/igt@perf_pmu@busy-double-start@vcs0.html
   [334]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg1-18/igt@perf_pmu@busy-double-start@vcs0.html

  * igt@perf_pmu@busy-double-start@vecs0:
    - shard-mtlp:         [FAIL][335] ([i915#4349]) -> [PASS][336] +1 other test pass
   [335]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-mtlp-1/igt@perf_pmu@busy-double-start@vecs0.html
   [336]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-mtlp-5/igt@perf_pmu@busy-double-start@vecs0.html

  
#### Warnings ####

  * igt@gem_lmem_swapping@smem-oom@lmem0:
    - shard-dg2:          [DMESG-WARN][337] ([i915#4936] / [i915#5493]) -> [TIMEOUT][338] ([i915#5493])
   [337]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-8/igt@gem_lmem_swapping@smem-oom@lmem0.html
   [338]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-7/igt@gem_lmem_swapping@smem-oom@lmem0.html

  * igt@kms_content_protection@srm:
    - shard-snb:          [SKIP][339] -> [INCOMPLETE][340] ([i915#8816])
   [339]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-snb6/igt@kms_content_protection@srm.html
   [340]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-snb2/igt@kms_content_protection@srm.html

  * igt@kms_cursor_crc@cursor-offscreen-512x512:
    - shard-dg2:          [SKIP][341] ([i915#11453] / [i915#3359]) -> [SKIP][342] ([i915#11453])
   [341]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-11/igt@kms_cursor_crc@cursor-offscreen-512x512.html
   [342]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-5/igt@kms_cursor_crc@cursor-offscreen-512x512.html

  * igt@kms_cursor_crc@cursor-onscreen-512x170:
    - shard-dg2:          [SKIP][343] ([i915#11453]) -> [SKIP][344] ([i915#11453] / [i915#3359]) +1 other test skip
   [343]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-4/igt@kms_cursor_crc@cursor-onscreen-512x170.html
   [344]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@kms_cursor_crc@cursor-onscreen-512x170.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-indfb-draw-blt:
    - shard-dg2:          [SKIP][345] ([i915#10433] / [i915#3458]) -> [SKIP][346] ([i915#3458])
   [345]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-4/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-indfb-draw-blt.html
   [346]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-3/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@psr-1p-offscren-pri-indfb-draw-mmap-cpu:
    - shard-dg2:          [SKIP][347] ([i915#3458]) -> [SKIP][348] ([i915#10433] / [i915#3458])
   [347]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-8/igt@kms_frontbuffer_tracking@psr-1p-offscren-pri-indfb-draw-mmap-cpu.html
   [348]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-4/igt@kms_frontbuffer_tracking@psr-1p-offscren-pri-indfb-draw-mmap-cpu.html

  * igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
    - shard-rkl:          [SKIP][349] ([i915#4070] / [i915#4816]) -> [SKIP][350] ([i915#4816])
   [349]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-rkl-5/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html
   [350]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-3/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html

  * igt@kms_pm_dc@dc9-dpms:
    - shard-rkl:          [SKIP][351] ([i915#4281]) -> [SKIP][352] ([i915#3361])
   [351]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-rkl-5/igt@kms_pm_dc@dc9-dpms.html
   [352]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-rkl-6/igt@kms_pm_dc@dc9-dpms.html

  * igt@kms_psr@fbc-psr-cursor-plane-move:
    - shard-dg2:          [SKIP][353] ([i915#1072] / [i915#9673] / [i915#9732]) -> [SKIP][354] ([i915#1072] / [i915#9732]) +14 other tests skip
   [353]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-11/igt@kms_psr@fbc-psr-cursor-plane-move.html
   [354]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-7/igt@kms_psr@fbc-psr-cursor-plane-move.html

  * igt@kms_psr@fbc-psr-primary-mmap-gtt:
    - shard-dg2:          [SKIP][355] ([i915#1072] / [i915#9732]) -> [SKIP][356] ([i915#1072] / [i915#9673] / [i915#9732]) +17 other tests skip
   [355]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7964/shard-dg2-10/igt@kms_psr@fbc-psr-primary-mmap-gtt.html
   [356]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/shard-dg2-11/igt@kms_psr@fbc-psr-primary-mmap-gtt.html

  
  [IGT#2]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/2
  [i915#10055]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10055
  [i915#10131]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10131
  [i915#10216]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10216
  [i915#10307]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10307
  [i915#10354]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10354
  [i915#10433]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10433
  [i915#10434]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10434
  [i915#10656]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10656
  [i915#1072]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1072
  [i915#10887]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10887
  [i915#11078]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11078
  [i915#11131]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11131
  [i915#11453]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11453
  [i915#11520]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11520
  [i915#11616]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11616
  [i915#11703]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11703
  [i915#11713]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11713
  [i915#11808]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11808
  [i915#11859]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11859
  [i915#11900]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11900
  [i915#1825]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1825
  [i915#1839]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1839
  [i915#2122]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2122
  [i915#2434]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2434
  [i915#2437]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2437
  [i915#2527]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2527
  [i915#2587]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2587
  [i915#2658]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2658
  [i915#2672]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2672
  [i915#280]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/280
  [i915#284]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/284
  [i915#2842]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2842
  [i915#2846]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2846
  [i915#2856]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2856
  [i915#3023]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3023
  [i915#3281]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3281
  [i915#3282]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3282
  [i915#3291]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3291
  [i915#3297]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3297
  [i915#3299]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3299
  [i915#3359]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3359
  [i915#3361]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3361
  [i915#3458]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3458
  [i915#3469]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3469
  [i915#3539]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3539
  [i915#3555]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3555
  [i915#3591]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3591
  [i915#3637]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3637
  [i915#3638]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3638
  [i915#3708]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3708
  [i915#3742]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3742
  [i915#3840]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3840
  [i915#4036]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4036
  [i915#4070]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4070
  [i915#4077]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4077
  [i915#4079]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4079
  [i915#4083]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4083
  [i915#4087]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4087
  [i915#4103]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4103
  [i915#4212]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4212
  [i915#4213]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4213
  [i915#4235]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4235
  [i915#4270]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4270
  [i915#4281]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4281
  [i915#433]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/433

Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_7964 -> IGTPW_11551

  CI-20190529: 20190529
  CI_DRM_15204: 040fda7980e8fccde054bf16158538ff80223e36 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_11551: 11551
  IGT_7964: 0dabf88262c0349d261248638064b97da35369f8 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_11551/index.html

[-- Attachment #2: Type: text/html, Size: 118774 bytes --]

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework
  2024-08-09 12:38 ` [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework Christoph Manszewski
@ 2024-08-19  8:30   ` Grzegorzek, Dominik
  2024-08-19 15:33     ` Manszewski, Christoph
  2024-08-20  8:14   ` Zbigniew Kempczyński
  1 sibling, 1 reply; 41+ messages in thread
From: Grzegorzek, Dominik @ 2024-08-19  8:30 UTC (permalink / raw)
  To: igt-dev@lists.freedesktop.org, Manszewski, Christoph
  Cc: Patelczyk, Maciej, Hajda, Andrzej, karolina.stolarek@intel.com,
	Kempczynski, Zbigniew, Piatkowski, Dominik Karol, Sikora, Pawel,
	Kuoppala, Mika, Mun, Gwan-gyeong, kamil.konieczny@linux.intel.com,
	mika.kuaoppala@linux.intel.com, Kolanupaka Naveena

On Fri, 2024-08-09 at 14:38 +0200, Christoph Manszewski wrote:
> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> 
> Introduce a library which simplifies testing of the eu debug
> capability. The library provides event log helpers together with an
> asynchronous abstraction for the client process and the debugger
> itself.
> 
> xe_eudebug_client creates its own process running the user's work
> function, and provides mechanisms to synchronize the beginning of
> execution and event logging.
> 
> xe_eudebug_debugger allows attaching to the given process, provides an
> asynchronous thread for event reading and introduces triggers -
> a callback mechanism invoked every time a subscribed event is read.
> 
> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Signed-off-by: Mika Kuoppala <mika.kuaoppala@linux.intel.com>
> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
> Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
> Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
> Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
> ---
>  lib/meson.build     |    1 +
>  lib/xe/xe_eudebug.c | 2192 +++++++++++++++++++++++++++++++++++++++++++
>  lib/xe/xe_eudebug.h |  206 ++++
>  3 files changed, 2399 insertions(+)
>  create mode 100644 lib/xe/xe_eudebug.c
>  create mode 100644 lib/xe/xe_eudebug.h
> 
> diff --git a/lib/meson.build b/lib/meson.build
> index f711e60a7..969ca4101 100644
> --- a/lib/meson.build
> +++ b/lib/meson.build
> @@ -111,6 +111,7 @@ lib_sources = [
>  	'igt_msm.c',
>  	'igt_dsc.c',
>  	'xe/xe_gt.c',
> +	'xe/xe_eudebug.c',
>  	'xe/xe_ioctl.c',
>  	'xe/xe_mmio.c',
>  	'xe/xe_query.c',
> diff --git a/lib/xe/xe_eudebug.c b/lib/xe/xe_eudebug.c
> new file mode 100644
> index 000000000..4eac87476
> --- /dev/null
> +++ b/lib/xe/xe_eudebug.c
> @@ -0,0 +1,2192 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +
> +#include <fcntl.h>
> +#include <poll.h>
> +#include <signal.h>
> +#include <sys/select.h>
> +#include <sys/stat.h>
> +#include <sys/types.h>
> +#include <sys/wait.h>
> +
> +#include "igt.h"
> +#include "igt_sysfs.h"
> +#include "intel_pat.h"
> +#include "xe_eudebug.h"
> +#include "xe_ioctl.h"
> +
> +struct event_trigger {
> +	xe_eudebug_trigger_fn fn;
> +	int type;
> +	struct igt_list_head link;
> +};
> +
> +struct seqno_list_entry {
> +	struct igt_list_head link;
> +	uint64_t seqno;
> +};
> +
> +struct match_dto {
> +	struct drm_xe_eudebug_event *target;
> +	struct igt_list_head *seqno_list;
> +	uint64_t client_handle;
> +	uint32_t filter;
> +
> +	/* store latest 'EVENT_VM_BIND' seqno */
> +	uint64_t *bind_seqno;
> +	/* latest vm_bind_op seqno matching bind_seqno */
> +	uint64_t *bind_op_seqno;
> +};
> +
> +#define CLIENT_PID  1
> +#define CLIENT_RUN  2
> +#define CLIENT_FINI 3
> +#define CLIENT_STOP 4
> +#define CLIENT_STAGE 5
> +#define DEBUGGER_STAGE 6
> +
> +#define DEBUGGER_WORKER_INACTIVE  0
> +#define DEBUGGER_WORKER_ACTIVE  1
> +#define DEBUGGER_WORKER_QUITTING 2
> +
> +static const char *type_to_str(unsigned int type)
> +{
> +	switch (type) {
> +	case DRM_XE_EUDEBUG_EVENT_NONE:
> +		return "none";
> +	case DRM_XE_EUDEBUG_EVENT_READ:
> +		return "read";
> +	case DRM_XE_EUDEBUG_EVENT_OPEN:
> +		return "client";
> +	case DRM_XE_EUDEBUG_EVENT_VM:
> +		return "vm";
> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE:
> +		return "exec_queue";
> +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION:
> +		return "attention";
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND:
> +		return "vm_bind";
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP:
> +		return "vm_bind_op";
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE:
> +		return "vm_bind_ufence";
> +	case DRM_XE_EUDEBUG_EVENT_METADATA:
> +		return "metadata";
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA:
> +		return "vm_bind_op_metadata";
> +	}
> +
> +	return "UNKNOWN";
> +}
> +
> +static const char *event_type_to_str(struct drm_xe_eudebug_event *e, char *buf)
> +{
> +	sprintf(buf, "%s(%d)", type_to_str(e->type), e->type);
> +
> +	return buf;
> +}
> +
> +static const char *flags_to_str(unsigned int flags)
> +{
> +	if (flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> +		if (flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK)
> +			return "create|ack";
> +		else
> +			return "create";
> +	}
> +	if (flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
> +		return "destroy";
> +
> +	if (flags & DRM_XE_EUDEBUG_EVENT_STATE_CHANGE)
> +		return "state-change";
> +
> +	igt_assert(!(flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK));
> +
> +	return "flags unknown";
> +}
> +
> +static const char *event_members_to_str(struct drm_xe_eudebug_event *e, char *b)
> +{
> +	switch (e->type) {
> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> +		struct drm_xe_eudebug_event_client *ec = (struct drm_xe_eudebug_event_client *)e;
> +
> +		sprintf(b, "handle=%llu", ec->client_handle);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM: {
> +		struct drm_xe_eudebug_event_vm *evm = (struct drm_xe_eudebug_event_vm *)e;
> +
> +		sprintf(b, "client_handle=%llu, handle=%llu",
> +			evm->client_handle, evm->vm_handle);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
> +
> +		sprintf(b, "client_handle=%llu, vm_handle=%llu, "
> +			   "exec_queue_handle=%llu, engine_class=%d, exec_queue_width=%d",
> +			ee->client_handle, ee->vm_handle,
> +			ee->exec_queue_handle, ee->engine_class, ee->width);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION: {
> +		struct drm_xe_eudebug_event_eu_attention *ea = (void *)e;
> +
> +		sprintf(b, "client_handle=%llu, exec_queue_handle=%llu, "
> +			   "lrc_handle=%llu, bitmask_size=%d",
> +			ea->client_handle, ea->exec_queue_handle,
> +			ea->lrc_handle, ea->bitmask_size);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
> +
> +		sprintf(b, "client_handle=%llu, vm_handle=%llu, flags=0x%x, num_binds=%u",
> +			evmb->client_handle, evmb->vm_handle, evmb->flags, evmb->num_binds);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> +		struct drm_xe_eudebug_event_vm_bind_op *op = (void *)e;
> +
> +		sprintf(b, "vm_bind_ref_seqno=%lld, addr=%016llx, range=%llu num_extensions=%llu",
> +			op->vm_bind_ref_seqno, op->addr, op->range, op->num_extensions);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
> +		struct drm_xe_eudebug_event_vm_bind_ufence *f = (void *)e;
> +
> +		sprintf(b, "vm_bind_ref_seqno=%lld", f->vm_bind_ref_seqno);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
> +
> +		sprintf(b, "client_handle=%llu, metadata_handle=%llu, type=%llu, len=%llu",
> +			em->client_handle, em->metadata_handle, em->type, em->len);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *op = (void *)e;
> +
> +		sprintf(b, "vm_bind_op_ref_seqno=%lld, metadata_handle=%llu, metadata_cookie=%llu",
> +			op->vm_bind_op_ref_seqno, op->metadata_handle, op->metadata_cookie);
> +		break;
> +	}
> +	default:
> +		strcpy(b, "<...>");
> +	}
> +
> +	return b;
> +}
> +
> +/**
> + * xe_eudebug_event_to_str:
> + * @e: pointer to event
> + * @buf: target to write string representation of @e
> + * @len: size of target buffer @buf
> + *
> + * Creates string representation for given event.
> + *
> + * Returns: the written input buffer pointed by @buf.
> + */
> +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len)
> +{
> +	char a[256];
> +	char b[256];
> +
> +	snprintf(buf, len, "(%llu) %15s:%s: %s",
> +		 e->seqno,
> +		 event_type_to_str(e, a),
> +		 flags_to_str(e->flags),
> +		 event_members_to_str(e, b));
> +
> +	return buf;
> +}
> +
> +static void catch_child_failure(void)
> +{
> +	pid_t pid;
> +	int status;
> +
> +	pid = waitpid(-1, &status, WNOHANG);
> +
> +	if (pid == 0 || pid == -1)
> +		return;
> +
> +	if (!WIFEXITED(status))
> +		return;
> +
> +	igt_assert_f(WEXITSTATUS(status) == 0, "Client failed!\n");
> +}
> +
> +static int safe_pipe_read(int pipe[2], void *buf, int nbytes, int timeout_ms)
> +{
> +	int ret;
> +	int t = 0;
> +	struct pollfd fd = {
> +		.fd = pipe[0],
> +		.events = POLLIN,
> +		.revents = 0
> +	};
> +
> +	/* If the child fails we may get stuck forever, so periodically
> +	 * check whether the child process ended with an error.
> +	 */
> +	do {
> +		const int interval_ms = 1000;
> +
> +		ret = poll(&fd, 1, interval_ms);
> +
> +		if (!ret) {
> +			catch_child_failure();
> +			t += interval_ms;
> +		}
> +	} while (!ret && t < timeout_ms);
> +
> +	if (ret > 0)
> +		return read(pipe[0], buf, nbytes);
> +
> +	return 0;
> +}
> +
> +static uint64_t pipe_read(int pipe[2], int timeout_ms)
> +{
> +	uint64_t in;
> +	uint64_t ret;
> +
> +	ret = safe_pipe_read(pipe, &in, sizeof(in), timeout_ms);
> +	igt_assert(ret == sizeof(in));
> +
> +	return in;
> +}
> +
> +static void pipe_signal(int pipe[2], uint64_t token)
> +{
> +	igt_assert(write(pipe[1], &token, sizeof(token)) == sizeof(token));
> +}
> +
> +static void pipe_close(int pipe[2])
> +{
> +	if (pipe[0] != -1)
> +		close(pipe[0]);
> +
> +	if (pipe[1] != -1)
> +		close(pipe[1]);
> +}
> +
> +static uint64_t __wait_token(int p[2], const uint64_t token, int timeout_ms)
> +{
> +	uint64_t in;
> +
> +	in = pipe_read(p, timeout_ms);
> +
> +	igt_assert_eq(in, token);
> +
> +	return pipe_read(p, timeout_ms);
> +}
> +
> +static uint64_t client_wait_token(struct xe_eudebug_client *c,
> +				 const uint64_t token)
> +{
> +	return __wait_token(c->p_in, token, c->timeout_ms);
> +}
> +
> +static uint64_t wait_from_client(struct xe_eudebug_client *c,
> +				 const uint64_t token)
> +{
> +	return __wait_token(c->p_out, token, c->timeout_ms);
> +}
> +
> +static void token_signal(int p[2], const uint64_t token, const uint64_t value)
> +{
> +	pipe_signal(p, token);
> +	pipe_signal(p, value);
> +}
> +
> +static void client_signal(struct xe_eudebug_client *c,
> +			  const uint64_t token,
> +			  const uint64_t value)
> +{
> +	token_signal(c->p_out, token, value);
> +}
> +
> +static int __xe_eudebug_connect(int fd, pid_t pid, uint32_t flags, uint64_t events)
> +{
> +	struct drm_xe_eudebug_connect param = {
> +		.pid = pid,
> +		.flags = flags,
> +	};
> +	int debugfd;
> +
> +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
> +
> +	if (debugfd < 0)
> +		return -errno;
> +
> +	return debugfd;
> +}
> +
> +static void event_log_write_to_fd(struct xe_eudebug_event_log *l, int fd)
> +{
> +	igt_assert_eq(write(fd, &l->head, sizeof(l->head)),
> +		      sizeof(l->head));
> +
> +	igt_assert_eq(write(fd, l->log, l->head), l->head);
> +}
> +
> +static void read_all(int fd, void *buf, size_t nbytes)
> +{
> +	ssize_t remaining_size = nbytes;
> +	ssize_t current_size = 0;
> +	ssize_t read_size = 0;
> +
> +	do {
> +		read_size = read(fd, buf + current_size, remaining_size);
> +		igt_assert_f(read_size >= 0, "read failed: %s\n", strerror(errno));
> +
> +		current_size += read_size;
> +		remaining_size -= read_size;
> +	} while (remaining_size > 0 && read_size > 0);
> +
> +	igt_assert_eq(current_size, nbytes);
> +}
> +
> +static void event_log_read_from_fd(struct xe_eudebug_event_log *l, int fd)
> +{
> +	read_all(fd, &l->head, sizeof(l->head));
> +	igt_assert_lt(l->head, l->max_size);
> +
> +	read_all(fd, l->log, l->head);
> +}
> +
> +typedef int (*cmp_fn_t)(struct drm_xe_eudebug_event *, void *);
> +
> +static struct drm_xe_eudebug_event *
> +event_cmp(struct xe_eudebug_event_log *l,
> +	  struct drm_xe_eudebug_event *current,
> +	  cmp_fn_t match,
> +	  void *data)
> +{
> +	struct drm_xe_eudebug_event *e = current;
> +
> +	xe_eudebug_for_each_event(e, l) {
> +		if (match(e, data))
> +			return e;
> +	}
> +
> +	return NULL;
> +}
> +
> +static int match_type_and_flags(struct drm_xe_eudebug_event *a, void *data)
> +{
> +	struct drm_xe_eudebug_event *b = data;
> +
> +	if (a->type == b->type &&
> +	    a->flags == b->flags)
> +		return 1;
> +
> +	return 0;
> +}
> +
> +static int match_fields(struct drm_xe_eudebug_event *a, void *data)
> +{
> +	struct drm_xe_eudebug_event *b = data;
> +	int ret = 0;
> +
> +	ret = match_type_and_flags(a, data);
> +	if (!ret)
> +		return ret;
> +
> +	ret = 0;
> +
> +	switch (a->type) {
> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> +		struct drm_xe_eudebug_event_exec_queue *ae = (void *)a;
> +		struct drm_xe_eudebug_event_exec_queue *be = (void *)b;
> +
> +		if (ae->engine_class == be->engine_class && ae->width == be->width)
> +			ret = 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> +		struct drm_xe_eudebug_event_vm_bind *ea = (void *)a;
> +		struct drm_xe_eudebug_event_vm_bind *eb = (void *)b;
> +
> +		if (ea->num_binds == eb->num_binds)
> +			ret = 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> +		struct drm_xe_eudebug_event_vm_bind_op *ea = (void *)a;
> +		struct drm_xe_eudebug_event_vm_bind_op *eb = (void *)b;
> +
> +		if (ea->addr == eb->addr && ea->range == eb->range &&
> +		    ea->num_extensions == eb->num_extensions)
> +			ret = 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *ea = (void *)a;
> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eb = (void *)b;
> +
> +		if (ea->metadata_handle == eb->metadata_handle &&
> +		    ea->metadata_cookie == eb->metadata_cookie)
> +			ret = 1;
> +		break;
> +	}
> +
> +	default:
> +		ret = 1;
> +		break;
> +	}
> +
> +	return ret;
> +}
> +
> +static int match_client_handle(struct drm_xe_eudebug_event *e, void *data)
> +{
> +	struct match_dto *md = (void *)data;
> +	uint64_t *bind_seqno = md->bind_seqno;
> +	uint64_t *bind_op_seqno = md->bind_op_seqno;
> +	uint64_t h = md->client_handle;
> +
> +	if (XE_EUDEBUG_EVENT_IS_FILTERED(e->type, md->filter))
> +		return 0;
> +
> +	switch (e->type) {
> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
> +
> +		if (client->client_handle == h)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM: {
> +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
> +
> +		if (vm->client_handle == h)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> +		struct drm_xe_eudebug_event_exec_queue *ee = (struct drm_xe_eudebug_event_exec_queue *)e;
> +
> +		if (ee->client_handle == h)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> +		struct drm_xe_eudebug_event_vm_bind *evmb = (struct drm_xe_eudebug_event_vm_bind *)e;
> +
> +		if (evmb->client_handle == h) {
> +			*bind_seqno = evmb->base.seqno;
> +			return 1;
> +		}
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> +		struct drm_xe_eudebug_event_vm_bind_op *eo = (struct drm_xe_eudebug_event_vm_bind_op *)e;
> +
> +		if (eo->vm_bind_ref_seqno == *bind_seqno) {
> +			*bind_op_seqno = eo->base.seqno;
> +			return 1;
> +		}
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
> +		struct drm_xe_eudebug_event_vm_bind_ufence *ef = (struct drm_xe_eudebug_event_vm_bind_ufence *)e;
> +
> +		if (ef->vm_bind_ref_seqno == *bind_seqno)
> +			return 1;
> +
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> +		struct drm_xe_eudebug_event_metadata *em = (struct drm_xe_eudebug_event_metadata *)e;
> +
> +		if (em->client_handle == h)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eo = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)e;
> +
> +		if (eo->vm_bind_op_ref_seqno == *bind_op_seqno)
> +			return 1;
> +		break;
> +	}
> +	default:
> +		break;
> +	}
> +
> +	return 0;
> +}
> +
> +static int match_opposite_resource(struct drm_xe_eudebug_event *e, void *data)
> +{
> +	struct drm_xe_eudebug_event *d = (void *)data;
> +	int ret;
> +
> +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
> +	d->flags &= ~(DRM_XE_EUDEBUG_EVENT_NEED_ACK);
> +	ret = match_type_and_flags(e, data);
> +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
> +
> +	if (!ret)
> +		return 0;
> +
> +	switch (e->type) {
> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
> +		struct drm_xe_eudebug_event_client *filter = (struct drm_xe_eudebug_event_client *)data;
> +
> +		if (client->client_handle == filter->client_handle)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM: {
> +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
> +		struct drm_xe_eudebug_event_vm *filter = (struct drm_xe_eudebug_event_vm *)data;
> +
> +		if (vm->vm_handle == filter->vm_handle)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
> +		struct drm_xe_eudebug_event_exec_queue *filter = (struct drm_xe_eudebug_event_exec_queue *)data;
> +
> +		if (ee->exec_queue_handle == filter->exec_queue_handle)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
> +		struct drm_xe_eudebug_event_vm_bind *filter = (struct drm_xe_eudebug_event_vm_bind *)data;
> +
> +		if (evmb->vm_handle == filter->vm_handle &&
> +		    evmb->num_binds == filter->num_binds)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> +		struct drm_xe_eudebug_event_vm_bind_op *avmb = (void *)e;
> +		struct drm_xe_eudebug_event_vm_bind_op *filter = (struct drm_xe_eudebug_event_vm_bind_op *)data;
> +
> +		if (avmb->addr == filter->addr &&
> +		    avmb->range == filter->range)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
> +		struct drm_xe_eudebug_event_metadata *filter = (struct drm_xe_eudebug_event_metadata *)data;
> +
> +		if (em->metadata_handle == filter->metadata_handle)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *avmb = (void *)e;
> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *filter = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)data;
> +
> +		if (avmb->metadata_handle == filter->metadata_handle &&
> +		    avmb->metadata_cookie == filter->metadata_cookie)
> +			return 1;
> +		break;
> +	}
> +
> +	default:
> +		break;
> +	}
> +	return 0;
> +}
> +
> +static int match_full(struct drm_xe_eudebug_event *e, void *data)
> +{
> +	struct seqno_list_entry *sl;
> +	struct match_dto *md = (void *)data;
> +	int ret = 0;
> +
> +	ret = match_client_handle(e, md);
> +	if (!ret)
> +		return 0;
> +
> +	ret = match_fields(e, md->target);
> +	if (!ret)
> +		return 0;
> +
> +	igt_list_for_each_entry(sl, md->seqno_list, link) {
> +		if (sl->seqno == e->seqno)
> +			return 0;
> +	}
> +
> +	return 1;
> +}
> +
> +static struct drm_xe_eudebug_event *
> +event_type_match(struct xe_eudebug_event_log *l,
> +		 struct drm_xe_eudebug_event *target,
> +		 struct drm_xe_eudebug_event *current)
> +{
> +	return event_cmp(l, current, match_type_and_flags, target);
> +}
> +
> +static struct drm_xe_eudebug_event *
> +client_match(struct xe_eudebug_event_log *l,
> +	     uint64_t client_handle,
> +	     struct drm_xe_eudebug_event *current,
> +	     uint32_t filter,
> +	     uint64_t *bind_seqno,
> +	     uint64_t *bind_op_seqno)
> +{
> +	struct match_dto md = {
> +		.client_handle = client_handle,
> +		.filter = filter,
> +		.bind_seqno = bind_seqno,
> +		.bind_op_seqno = bind_op_seqno,
> +	};
> +
> +	return event_cmp(l, current, match_client_handle, &md);
> +}
> +
> +static struct drm_xe_eudebug_event *
> +opposite_event_match(struct xe_eudebug_event_log *l,
> +		    struct drm_xe_eudebug_event *target,
> +		    struct drm_xe_eudebug_event *current)
> +{
> +	return event_cmp(l, current, match_opposite_resource, target);
> +}
> +
> +static struct drm_xe_eudebug_event *
> +event_match(struct xe_eudebug_event_log *l,
> +	    struct drm_xe_eudebug_event *target,
> +	    uint64_t client_handle,
> +	    struct igt_list_head *seqno_list,
> +	    uint64_t *bind_seqno,
> +	    uint64_t *bind_op_seqno)
> +{
> +	struct match_dto md = {
> +		.target = target,
> +		.client_handle = client_handle,
> +		.seqno_list = seqno_list,
> +		.bind_seqno = bind_seqno,
> +		.bind_op_seqno = bind_op_seqno,
> +	};
> +
> +	return event_cmp(l, NULL, match_full, &md);
> +}
> +
> +static void compare_client(struct xe_eudebug_event_log *c, struct drm_xe_eudebug_event *_ce,
> +			   struct xe_eudebug_event_log *d, struct drm_xe_eudebug_event *_de,
> +			   uint32_t filter)
> +{
> +	struct drm_xe_eudebug_event_client *ce = (void *)_ce;
> +	struct drm_xe_eudebug_event_client *de = (void *)_de;
> +	uint64_t cbs = 0, dbs = 0, cbso = 0, dbso = 0;
> +
> +	struct igt_list_head matched_seqno_list;
> +	struct drm_xe_eudebug_event *hc, *hd;
> +	struct seqno_list_entry *entry, *tmp;
> +
> +	igt_assert(ce);
> +	igt_assert(de);
> +
> +	igt_debug("client: %llu -> %llu\n", ce->client_handle, de->client_handle);
> +
> +	hc = NULL;
> +	hd = NULL;
> +	IGT_INIT_LIST_HEAD(&matched_seqno_list);
> +
> +	do {
> +		hc = client_match(c, ce->client_handle, hc, filter, &cbs, &cbso);
> +		if (!hc)
> +			break;
> +
> +		hd = event_match(d, hc, de->client_handle, &matched_seqno_list, &dbs, &dbso);
> +
> +		igt_assert_f(hd, "%s (%llu): no matching event type %u found for client %llu\n",
> +			     c->name,
> +			     hc->seqno,
> +			     hc->type,
> +			     ce->client_handle);
> +
> +		igt_debug("comparing %s %llu vs %s %llu\n",
> +			  c->name, hc->seqno, d->name, hd->seqno);
> +
> +		/*
> +		 * Store the seqno of the event that was matched above,
> +		 * inside 'matched_seqno_list', to avoid it getting matched
> +		 * by subsequent 'event_match' calls.
> +		 */
> +		entry = malloc(sizeof(*entry));
> +		igt_assert(entry);
> +		entry->seqno = hd->seqno;
> +		igt_list_add(&entry->link, &matched_seqno_list);
> +	} while (hc);
> +
> +	igt_list_for_each_entry_safe(entry, tmp, &matched_seqno_list, link)
> +		free(entry);
> +}
> +
> +/**
> + * xe_eudebug_event_log_find_seqno:
> + * @l: event log pointer
> + * @seqno: seqno of event to be found
> + *
> + * Finds the event with given seqno in the event log.
> + *
> + * Returns: pointer to the event with the given seqno within @l, or NULL if
> + * @seqno is not present.
> + */
> +struct drm_xe_eudebug_event *
> +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno)
> +{
> +	struct drm_xe_eudebug_event *e = NULL, *found = NULL;
> +
> +	igt_assert_neq(seqno, 0);
> +	/*
> +	 * Try to catch if seqno is corrupted and prevent too long tests,
> +	 * as our post processing of events is not optimized.
> +	 */
> +	igt_assert_lt(seqno, 10 * 1000 * 1000);
> +
> +	xe_eudebug_for_each_event(e, l) {
> +		if (e->seqno == seqno) {
> +			if (found) {
> +				igt_warn("Found multiple events with the same seqno %" PRIu64 "\n", seqno);
> +				xe_eudebug_event_log_print(l, false);
> +				igt_assert(!found);
> +			}
> +			found = e;
> +		}
> +	}
> +
> +	return found;
> +}
> +
> +static void event_log_sort(struct xe_eudebug_event_log *l)
> +{
> +	struct xe_eudebug_event_log *tmp;
> +	struct drm_xe_eudebug_event *e = NULL;
> +	uint64_t first_seqno = 0;
> +	uint64_t last_seqno = 0;
> +	uint64_t events = 0, added = 0;
> +	uint64_t i;
> +
> +	xe_eudebug_for_each_event(e, l) {
> +		if (e->seqno > last_seqno)
> +			last_seqno = e->seqno;
> +
> +		if (e->seqno < first_seqno)
> +			first_seqno = e->seqno;
> +
> +		events++;
> +	}
> +
> +	tmp = xe_eudebug_event_log_create("tmp", l->max_size);
> +
> +	for (i = 1; i <= last_seqno; i++) {
> +		e = xe_eudebug_event_log_find_seqno(l, i);
> +		if (e) {
> +			xe_eudebug_event_log_write(tmp, e);
> +			added++;
> +		}
> +	}
> +
> +	igt_assert_eq(events, added);
> +	igt_assert_eq(tmp->head, l->head);
> +
> +	memcpy(l->log, tmp->log, tmp->head);
> +
> +	xe_eudebug_event_log_destroy(tmp);
> +}
> +
> +/**
> + * xe_eudebug_connect:
> + * @fd: Xe file descriptor
> + * @pid: client PID
> + * @flags: connection flags
> + *
> + * Opens the xe eu debugger connection to the process described by @pid.
> + *
> + * Returns: the eu debugger file descriptor if successfully attached, -errno otherwise.
> + */
> +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags)
> +{
> +	int ret;
> +	uint64_t events = 0; /* events filtering not supported yet! */
> +
> +	ret = __xe_eudebug_connect(fd, pid, flags, events);
> +
> +	return ret;
> +}
> +
> +/**
> + * xe_eudebug_event_log_create:
> + * @name: event log identifier
> + * @max_size: maximum size of created log
> + *
> + * Creates an EU debugger event log with a buffer of @max_size bytes.
> + *
> + * Returns: pointer to the newly created log.
> + */
> +#define MAX_EVENT_LOG_SIZE (32 * 1024 * 1024)
> +struct xe_eudebug_event_log *xe_eudebug_event_log_create(const char *name, unsigned int max_size)
> +{
> +	struct xe_eudebug_event_log *l;
> +
> +	l = calloc(1, sizeof(*l));
> +	igt_assert(l);
> +	l->log = calloc(1, max_size);
> +	igt_assert(l->log);
> +	l->max_size = max_size;
> +	strncpy(l->name, name, sizeof(l->name) - 1);
> +	pthread_mutex_init(&l->lock, NULL);
> +
> +	return l;
> +}
> +
> +/**
> + * xe_eudebug_event_log_destroy:
> + * @l: event log pointer
> + *
> + * Frees given event log @l.
> + */
> +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l)
> +{
> +	pthread_mutex_destroy(&l->lock);
> +	free(l->log);
> +	free(l);
> +}
> +
> +/**
> + * xe_eudebug_event_log_write:
> + * @l: event log pointer
> + * @e: event to be written to event log
> + *
> + * Writes event @e to the event log, thread-safe.
> + */
> +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e)
> +{
> +	igt_assert(e->seqno);
> +	/*
> +	 * Try to catch if seqno is corrupted and prevent too long tests,
> +	 * as our post processing of events is not optimized.
> +	 */
> +	igt_assert_lt(e->seqno, 10 * 1000 * 1000);
> +
> +	pthread_mutex_lock(&l->lock);
> +	igt_assert_lt(l->head + e->len, l->max_size);
> +	memcpy(l->log + l->head, e, e->len);
> +	l->head += e->len;
> +
> +#ifdef DEBUG_LOG
> +	igt_info("%s: wrote %u bytes to eventlog, free %u bytes\n",
> +		 l->name, e->len, l->max_size - l->head);
> +#endif
> +	pthread_mutex_unlock(&l->lock);
I'm not a fan of #ifdef debug logs. As the event log has been proven in action,
can we just strip it out?
<cut>

Regards,
Dominik


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 13/14] tests/xe_eudebug_online: Debug client which runs workloads on EU
  2024-08-09 12:38 ` [PATCH i-g-t v3 13/14] tests/xe_eudebug_online: Debug client which runs workloads on EU Christoph Manszewski
  2024-08-09 14:38   ` Kamil Konieczny
@ 2024-08-19  9:58   ` Grzegorzek, Dominik
  2024-08-19 15:36     ` Manszewski, Christoph
  1 sibling, 1 reply; 41+ messages in thread
From: Grzegorzek, Dominik @ 2024-08-19  9:58 UTC (permalink / raw)
  To: igt-dev@lists.freedesktop.org, Manszewski, Christoph
  Cc: Patelczyk, Maciej, Hajda, Andrzej, karolina.stolarek@intel.com,
	Kempczynski, Zbigniew, Piatkowski, Dominik Karol, Sikora, Pawel,
	Kuoppala, Mika, Mun, Gwan-gyeong, kamil.konieczny@linux.intel.com,
	Kolanupaka Naveena

On Fri, 2024-08-09 at 14:38 +0200, Christoph Manszewski wrote:
> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> 
> For typical debugging under gdb one can specify two main usecases:
> accessing and manipulating resources created by the application and
> manipulating thread execution (interrupting and setting breakpoints).
> 
> This test adds coverage for the latter by checking that:
> - EU workloads that hit an instruction with the breakpoint bit set will
>   halt execution and the debugger will report this via attention events,
> - the debugger is able to interrupt workload execution by issuing an
>   'interrupt_all' ioctl call,
> - the debugger is able to resume selected workloads that are stopped.
> 
> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
> Signed-off-by: Dominik Karol Piątkowski <dominik.karol.piatkowski@intel.com>
> Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
> Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
> Signed-off-by: Kolanupaka Naveena <kolanupaka.naveena@intel.com>
> ---
>  tests/intel/xe_eudebug_online.c | 2203 +++++++++++++++++++++++++++++++
>  tests/meson.build               |    1 +
>  2 files changed, 2204 insertions(+)
>  create mode 100644 tests/intel/xe_eudebug_online.c
> 
> diff --git a/tests/intel/xe_eudebug_online.c b/tests/intel/xe_eudebug_online.c
> new file mode 100644
> index 000000000..1c8ac67f1
> --- /dev/null
> +++ b/tests/intel/xe_eudebug_online.c
> @@ -0,0 +1,2203 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +
> +/**
> + * TEST: Tests for eudebug online functionality
> + * Category: Core
> + * Mega feature: EUdebug
> + * Sub-category: EUdebug tests
> + * Functionality: eu kernel debug
> + * Test category: functionality test
> + */
> +
> +#include "xe/xe_eudebug.h"
> +#include "xe/xe_ioctl.h"
> +#include "xe/xe_query.h"
> +#include "igt.h"
> +#include "intel_pat.h"
> +#include "intel_mocs.h"
> +#include "gpgpu_shader.h"
> +
> +#define SHADER_NOP			(0 << 0)
> +#define SHADER_BREAKPOINT		(1 << 0)
> +#define SHADER_LOOP			(1 << 1)
> +#define SHADER_SINGLE_STEP		(1 << 2)
> +#define SIP_SINGLE_STEP			(1 << 3)
> +#define DISABLE_DEBUG_MODE		(1 << 4)
> +#define SHADER_N_NOOP_BREAKPOINT	(1 << 5)
> +#define SHADER_CACHING_SRAM		(1 << 6)
> +#define SHADER_CACHING_VRAM		(1 << 7)
> +#define SHADER_MIN_THREADS		(1 << 8)
> +#define DO_NOT_EXPECT_CANARIES		(1 << 9)
> +#define TRIGGER_RESUME_SINGLE_WALK	(1 << 25)
> +#define TRIGGER_RESUME_PARALLEL_WALK	(1 << 26)
> +#define TRIGGER_RECONNECT		(1 << 27)
> +#define TRIGGER_RESUME_SET_BP		(1 << 28)
> +#define TRIGGER_RESUME_DELAYED		(1 << 29)
> +#define TRIGGER_RESUME_DSS		(1 << 30)
> +#define TRIGGER_RESUME_ONE		(1 << 31)
> +
> +#define DEBUGGER_REATTACHED	1
> +
> +#define SHADER_LOOP_N		3
> +#define SINGLE_STEP_COUNT	16
> +#define STEERING_SINGLE_STEP	0
> +#define STEERING_CONTINUE	0x00c0ffee
> +#define STEERING_END_LOOP	0xdeadca11
> +
> +#define CACHING_INIT_VALUE	0xcafe0000
> +#define CACHING_POISON_VALUE	0xcafedead
> +#define CACHING_VALUE(n)	(CACHING_INIT_VALUE + n)
> +
> +#define SHADER_CANARY 0x01010101
> +
> +#define WALKER_X_DIM		4
> +#define WALKER_ALIGNMENT	16
> +#define SIMD_SIZE		16
> +
> +#define STARTUP_TIMEOUT_MS	3000
> +#define WORKLOAD_DELAY_US	(5000 * 1000)
> +
> +#define PAGE_SIZE 4096
> +
> +struct dim_t {
> +	uint32_t x;
> +	uint32_t y;
> +	uint32_t alignment;
> +};
> +
> +static struct dim_t walker_dimensions(int threads)
> +{
> +	uint32_t x_dim = min_t(x_dim, threads, WALKER_X_DIM);
> +	struct dim_t ret = {
> +		.x = x_dim,
> +		.y = threads / x_dim,
> +		.alignment = WALKER_ALIGNMENT
> +	};
> +
> +	return ret;
> +}
> +
> +static struct dim_t surface_dimensions(int threads)
> +{
> +	struct dim_t ret = walker_dimensions(threads);
> +
> +	ret.y = max_t(ret.y, threads/ret.x, 4);
> +	ret.x *= SIMD_SIZE;
> +	ret.alignment *= SIMD_SIZE;
> +
> +	return ret;
> +}
> +
> +static uint32_t steering_offset(int threads)
> +{
> +	struct dim_t w = walker_dimensions(threads);
> +
> +	return ALIGN(w.x, w.alignment) * w.y * 4;
> +}
> +
> +static struct intel_buf *create_uc_buf(int fd, int width, int height)
> +{
> +	struct intel_buf *buf;
> +
> +	buf = intel_buf_create_full(buf_ops_create(fd), 0, width/4, height,
> +				    32, 0, I915_TILING_NONE, 0, 0, 0,
> +				    vram_if_possible(fd, 0),
> +				    DEFAULT_PAT_INDEX, DEFAULT_MOCS_INDEX);
> +
> +	return buf;
> +}
> +
> +static int get_number_of_threads(uint64_t flags)
> +{
> +	if (flags & SHADER_MIN_THREADS)
> +		return 16;
> +
> +	if (flags & (TRIGGER_RESUME_ONE | TRIGGER_RESUME_SINGLE_WALK |
> +		     TRIGGER_RESUME_PARALLEL_WALK | SHADER_CACHING_SRAM | SHADER_CACHING_VRAM))
> +		return 32;
> +
> +	return 512;
> +}
> +
> +static int caching_get_instruction_count(int fd, uint32_t s_dim__x, int flags)
> +{
> +	uint64_t memory;
> +
> +	igt_assert((flags & SHADER_CACHING_SRAM) || (flags & SHADER_CACHING_VRAM));
> +
> +	if (flags & SHADER_CACHING_SRAM)
> +		memory = system_memory(fd);
> +	else
> +		memory = vram_memory(fd, 0);
> +
> +	/* each instruction writes to given y offset */
> +	return (2 * xe_min_page_size(fd, memory)) / s_dim__x;
> +}
> +
> +static struct gpgpu_shader *get_shader(int fd, const unsigned int flags)
> +{
> +	struct dim_t w_dim = walker_dimensions(get_number_of_threads(flags));
> +	struct dim_t s_dim = surface_dimensions(get_number_of_threads(flags));
> +	static struct gpgpu_shader *shader;
> +
> +	shader = gpgpu_shader_create(fd);
> +
> +	gpgpu_shader__write_dword(shader, SHADER_CANARY, 0);
> +	if (flags & SHADER_BREAKPOINT) {
> +		gpgpu_shader__nop(shader);
> +		gpgpu_shader__breakpoint(shader);
> +	} else if (flags & SHADER_LOOP) {
> +		gpgpu_shader__label(shader, 0);
> +		gpgpu_shader__write_dword(shader, SHADER_CANARY, 0);
> +		gpgpu_shader__jump_neq(shader, 0, w_dim.y, STEERING_END_LOOP);
> +		gpgpu_shader__write_dword(shader, SHADER_CANARY, 0);
> +	} else if (flags & SHADER_SINGLE_STEP) {
> +		gpgpu_shader__nop(shader);
> +		gpgpu_shader__breakpoint(shader);
> +		for (int i = 0; i < SINGLE_STEP_COUNT; i++)
> +			gpgpu_shader__nop(shader);
> +	} else if (flags & SHADER_N_NOOP_BREAKPOINT) {
> +		for (int i = 0; i < SHADER_LOOP_N; i++) {
> +			gpgpu_shader__nop(shader);
> +			gpgpu_shader__breakpoint(shader);
> +		}
> +	} else if ((flags & SHADER_CACHING_SRAM) || (flags & SHADER_CACHING_VRAM)) {
> +		gpgpu_shader__nop(shader);
> +		gpgpu_shader__breakpoint(shader);
> +		for (int i = 0; i < caching_get_instruction_count(fd, s_dim.x, flags); i++)
> +			gpgpu_shader__common_target_write_u32(shader, s_dim.y + i, CACHING_VALUE(i));
> +		gpgpu_shader__nop(shader);
> +		gpgpu_shader__breakpoint(shader);
> +	}
> +
> +	gpgpu_shader__eot(shader);
> +	return shader;
> +}
> +
> +static struct gpgpu_shader *get_sip(int fd, const unsigned int flags)
> +{
> +	struct dim_t w_dim = walker_dimensions(get_number_of_threads(flags));
> +	static struct gpgpu_shader *sip;
> +
> +	sip = gpgpu_shader_create(fd);
> +	gpgpu_shader__write_aip(sip, 0);
> +
> +	gpgpu_shader__wait(sip);
> +	if (flags & SIP_SINGLE_STEP)
> +		gpgpu_shader__end_system_routine_step_if_eq(sip, w_dim.y, 0);
> +	else
> +		gpgpu_shader__end_system_routine(sip, true);
> +	return sip;
> +}
> +
> +static int count_set_bits(void *ptr, size_t size)
> +{
> +	uint8_t *p = ptr;
> +	int count = 0;
> +	int i, j;
> +
> +	for (i = 0; i < size; i++)
> +		for (j = 0; j < 8; j++)
> +			count += !!(p[i] & (1 << j));
> +
> +	return count;
> +}
> +
> +static int count_canaries_eq(uint32_t *ptr, struct dim_t w_dim, uint32_t value)
> +{
> +	int count = 0;
> +	int x, y;
> +
> +	for (x = 0; x < w_dim.x; x++)
> +		for (y = 0; y < w_dim.y; y++)
> +			if (READ_ONCE(ptr[x + ALIGN(w_dim.x, w_dim.alignment) * y]) == value)
> +				count++;
> +
> +	return count;
> +}
> +
> +static int count_canaries_neq(uint32_t *ptr, struct dim_t w_dim, uint32_t value)
> +{
> +	return w_dim.x * w_dim.y - count_canaries_eq(ptr, w_dim, value);
> +}
> +
> +static const char *td_ctl_cmd_to_str(uint32_t cmd)
> +{
> +	switch (cmd) {
> +	case DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL:
> +		return "interrupt all";
> +	case DRM_XE_EUDEBUG_EU_CONTROL_CMD_STOPPED:
> +		return "stopped";
> +	case DRM_XE_EUDEBUG_EU_CONTROL_CMD_RESUME:
> +		return "resume";
> +	default:
> +		return "unknown command";
> +	}
> +}
> +
> +static int __eu_ctl(int debugfd, uint64_t client,
> +		    uint64_t exec_queue, uint64_t lrc,
> +		    uint8_t *bitmask, uint32_t *bitmask_size,
> +		    uint32_t cmd, uint64_t *seqno)
> +{
> +	struct drm_xe_eudebug_eu_control control = {
> +		.client_handle = lower_32_bits(client),
> +		.exec_queue_handle = exec_queue,
> +		.lrc_handle = lrc,
> +		.cmd = cmd,
> +		.bitmask_ptr = to_user_pointer(bitmask),
> +	};
> +	int ret;
> +
> +	if (bitmask_size)
> +		control.bitmask_size = *bitmask_size;
> +
> +	ret = igt_ioctl(debugfd, DRM_XE_EUDEBUG_IOCTL_EU_CONTROL, &control);
> +
> +	if (ret < 0)
> +		return -errno;
> +
> +	igt_debug("EU CONTROL[%llu]: %s\n", control.seqno, td_ctl_cmd_to_str(cmd));
> +
> +	if (bitmask_size)
> +		*bitmask_size = control.bitmask_size;
> +
> +	if (seqno)
> +		*seqno = control.seqno;
> +
> +	return 0;
> +}
> +
> +static uint64_t eu_ctl(int debugfd, uint64_t client,
> +		       uint64_t exec_queue, uint64_t lrc,
> +		       uint8_t *bitmask, uint32_t *bitmask_size, uint32_t cmd)
> +{
> +	uint64_t seqno;
> +
> +	igt_assert_eq(__eu_ctl(debugfd, client, exec_queue, lrc, bitmask,
> +			       bitmask_size, cmd, &seqno), 0);
> +
> +	return seqno;
> +}
> +
> +static bool intel_gen_needs_resume_wa(int fd)
> +{
> +	const uint32_t id = intel_get_drm_devid(fd);
> +
> +	return intel_gen(id) == 12 && intel_graphics_ver(id) < IP_VER(12, 55);
> +}
> +
> +static uint64_t eu_ctl_resume(int fd, int debugfd, uint64_t client,
> +			      uint64_t exec_queue, uint64_t lrc,
> +			      uint8_t *bitmask, uint32_t bitmask_size)
> +{
> +	int i;
> +
> +	/* XXX: WA for hsd: 14011332042 */
Remove XXX.
Maybe plain: /* Wa_14011332042 */ ?
> +	if (intel_gen_needs_resume_wa(fd)) {
> +		uint32_t *att_reg_half = (uint32_t *)bitmask;
> +
> +		for (i = 0; i < bitmask_size / sizeof(uint32_t); i += 2) {
> +			att_reg_half[i] |= att_reg_half[i + 1];
> +			att_reg_half[i + 1] |= att_reg_half[i];
> +		}
> +	}
> +
> +	return eu_ctl(debugfd, client, exec_queue, lrc, bitmask, &bitmask_size,
> +		      DRM_XE_EUDEBUG_EU_CONTROL_CMD_RESUME);
> +}
> +
> +static inline uint64_t eu_ctl_stopped(int debugfd, uint64_t client,
> +				      uint64_t exec_queue, uint64_t lrc,
> +				      uint8_t *bitmask, uint32_t *bitmask_size)
> +{
> +	return eu_ctl(debugfd, client, exec_queue, lrc, bitmask, bitmask_size,
> +		      DRM_XE_EUDEBUG_EU_CONTROL_CMD_STOPPED);
> +}
> +
> +static inline uint64_t eu_ctl_interrupt_all(int debugfd, uint64_t client,
> +					    uint64_t exec_queue, uint64_t lrc)
> +{
> +	return eu_ctl(debugfd, client, exec_queue, lrc, NULL, 0,
> +		      DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL);
> +}
> +
> +struct online_debug_data {
> +	pthread_mutex_t mutex;
> +	/* client in */
> +	struct drm_xe_engine_class_instance hwe;
> +	/* client out */
> +	int threads_count;
> +	/* debugger internals */
> +	uint64_t client_handle;
> +	uint64_t exec_queue_handle;
> +	uint64_t lrc_handle;
> +	uint64_t target_offset;
> +	size_t target_size;
> +	uint64_t bb_offset;
> +	size_t bb_size;
> +	int vm_fd;
> +	uint32_t first_aip;
> +	uint64_t *aips_offset_table;
> +	uint32_t steps_done;
> +	uint8_t *single_step_bitmask;
> +	int stepped_threads_count;
> +	struct timespec exception_arrived;
> +	int last_eu_control_seqno;
> +	struct drm_xe_eudebug_event *exception_event;
> +};
> +
> +static struct online_debug_data *
> +online_debug_data_create(struct drm_xe_engine_class_instance *hwe)
> +{
> +	struct online_debug_data *data;
> +
> +	data = mmap(0, ALIGN(sizeof(*data), PAGE_SIZE),
> +		    PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
> +	memcpy(&data->hwe, hwe, sizeof(*hwe));
> +	pthread_mutex_init(&data->mutex, NULL);
> +	data->client_handle = -1ULL;
> +	data->exec_queue_handle = -1ULL;
> +	data->lrc_handle = -1ULL;
> +	data->vm_fd = -1;
> +	data->stepped_threads_count = -1;
> +
> +	return data;
> +}
> +
> +static void online_debug_data_destroy(struct online_debug_data *data)
> +{
> +	free(data->aips_offset_table);
> +	munmap(data, ALIGN(sizeof(*data), PAGE_SIZE));
> +}
> +
> +static void eu_attention_debug_trigger(struct xe_eudebug_debugger *d,
> +					struct drm_xe_eudebug_event *e)
> +{
> +	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
> +	uint32_t *ptr = (uint32_t *) att->bitmask;
> +
> +	igt_debug("EVENT[%llu] eu-attention; threads=%d "
> +		 "client[%llu], exec_queue[%llu], lrc[%llu], bitmask_size[%d]\n",
> +		 att->base.seqno, count_set_bits(att->bitmask, att->bitmask_size),
> +				att->client_handle, att->exec_queue_handle,
> +				att->lrc_handle, att->bitmask_size);
Strange alignment of the continuation lines here.
> +
> +	for (uint32_t i = 0; i < att->bitmask_size/4; i += 2)
> +		igt_debug("bitmask[%d] = 0x%08x%08x\n", i/2, ptr[i], ptr[i+1]);
> +
> +}
> +
> +static void eu_attention_reset_trigger(struct xe_eudebug_debugger *d,
> +					struct drm_xe_eudebug_event *e)
> +{
> +	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
> +	uint32_t *ptr = (uint32_t *) att->bitmask;
> +	struct online_debug_data *data = d->ptr;
> +
> +	igt_debug("EVENT[%llu] eu-attention with reset; threads=%d "
> +		 "client[%llu], exec_queue[%llu], lrc[%llu], bitmask_size[%d]\n",
> +		 att->base.seqno, count_set_bits(att->bitmask, att->bitmask_size),
> +				att->client_handle, att->exec_queue_handle,
> +				att->lrc_handle, att->bitmask_size);
Same alignment issue here as well.
> +
> +	for (uint32_t i = 0; i < att->bitmask_size/4; i += 2)
> +		igt_debug("bitmask[%d] = 0x%08x%08x\n", i/2, ptr[i], ptr[i+1]);
> +
> +	xe_force_gt_reset_async(d->master_fd, data->hwe.gt_id);
> +}
> +
> +static void copy_first_bit(uint8_t *dst, uint8_t *src, int size)
> +{
> +	bool found = false;
> +	int i, j;
> +
> +	for (i = 0; i < size; i++) {
> +		if (found) {
> +			dst[i] = 0;
> +		} else {
> +			uint32_t tmp = src[i]; /* in case dst == src */
> +
> +			for (j = 0; j < 8; j++) {
> +				dst[i] = tmp & (1 << j);
> +				if (dst[i]) {
> +					found = true;
> +					break;
> +				}
> +			}
> +		}
> +	}
> +}
> +
> +static void copy_nth_bit(uint8_t *dst, uint8_t *src, int size, int n)
> +{
> +	int count = 0;
> +
> +	for (int i = 0; i < size; i++) {
> +		uint32_t tmp = src[i];
> +		for (int j = 7; j >= 0; j--) {
> +			if (tmp & (1 << j)) {
> +				count++;
> +				if (count == n)
> +					dst[i] |= (1 << j);
> +				else
> +					dst[i] &= ~(1 << j);
> +			} else
> +				dst[i] &= ~(1 << j);
> +		}
> +	}
> +}
> +
> +/*
> + * Searches for the first instruction. It relies on the assumption
> + * that the shader kernel is placed before the sip within the bb.
> + */
> +static uint32_t find_kernel_in_bb(struct gpgpu_shader *kernel,
> +				  struct online_debug_data *data)
> +{
> +	uint32_t *p = kernel->code;
> +	size_t sz = 4 * sizeof(uint32_t);
> +	uint32_t buf[4];
> +	int i;
> +
> +	for (i = 0; i < data->bb_size; i += sz) {
> +		igt_assert_eq(pread(data->vm_fd, &buf, sz, data->bb_offset + i), sz);
> +
> +		if (memcmp(p, buf, sz) == 0)
> +			break;
> +	}
> +
> +	igt_assert(i < data->bb_size);
> +
> +	return i;
> +}
> +
> +static void set_breakpoint_once(struct xe_eudebug_debugger *d,
> +				struct online_debug_data *data)
> +{
> +	const uint32_t breakpoint_bit = 1 << 30;
> +	size_t sz = sizeof(uint32_t);
> +	struct gpgpu_shader *kernel;
> +	uint32_t aip;
> +
> +	kernel = get_shader(d->master_fd, d->flags);
> +
> +	if (data->first_aip) {
> +		uint32_t expected = find_kernel_in_bb(kernel, data) + kernel->size * 4 - 0x10;
> +
> +		igt_assert_eq(pread(data->vm_fd, &aip, sz, data->target_offset), sz);
> +		igt_assert_eq_u32(aip, expected);
> +	} else {
> +		uint32_t instr_usdw;
> +
> +		igt_assert(data->vm_fd != -1);
> +		igt_assert(data->target_size != 0);
> +		igt_assert(data->bb_size != 0);
> +
> +		igt_assert_eq(pread(data->vm_fd, &aip, sz, data->target_offset), sz);
> +		data->first_aip = aip;
> +
> +		aip = find_kernel_in_bb(kernel, data);
> +
> +		/* set breakpoint on last instruction */
> +		aip += kernel->size * 4 - 0x10;
> +		igt_assert_eq(pread(data->vm_fd, &instr_usdw, sz,
> +				    data->bb_offset + aip), sz);
> +		instr_usdw |= breakpoint_bit;
> +		igt_assert_eq(pwrite(data->vm_fd, &instr_usdw, sz,
> +				     data->bb_offset + aip), sz);
> +
> +	}
> +
> +	gpgpu_shader_destroy(kernel);
> +}
> +
> +static void get_aips_offset_table(struct online_debug_data *data, int threads)
> +{
> +	size_t sz = sizeof(uint32_t);
> +	uint32_t aip;
> +	uint32_t first_aip;
> +	int table_index = 0;
> +
> +	if (data->aips_offset_table)
> +		return;
> +
> +	data->aips_offset_table = malloc(threads * sizeof(uint64_t));
> +	igt_assert(data->aips_offset_table);
> +
> +	igt_assert_eq(pread(data->vm_fd, &first_aip, sz, data->target_offset), sz);
> +	data->first_aip = first_aip;
> +	data->aips_offset_table[table_index++] = 0;
> +
> +	fsync(data->vm_fd);
> +	for (int i = 1; i < data->target_size; i++) {
for (int i = sz; i < data->target_size; i += sz); no need to compare byte by
byte when the instruction pointer is 4 bytes wide.
> +		igt_assert_eq(pread(data->vm_fd, &aip, sz, data->target_offset + i), sz);
> +		if (aip == first_aip)
> +			data->aips_offset_table[table_index++] = i;
> +	}
> +
> +	igt_assert_eq(threads, table_index);
> +
> +	igt_debug("AIPs offset table:\n");
> +	for (int i = 0; i < threads; i++) {
> +		igt_debug("%lx\n", data->aips_offset_table[i]);
> +	}
Redundant braces.
> +}
> +
> +static int get_stepped_threads_count(struct online_debug_data *data, int threads)
> +{
> +	int count = 0;
> +	size_t sz = sizeof(uint32_t);
> +	uint32_t aip;
> +
> +	fsync(data->vm_fd);
> +	for (int i = 0; i < threads; i++) {
> +		igt_assert_eq(pread(data->vm_fd, &aip, sz,
> +				    data->target_offset + data->aips_offset_table[i]), sz);
> +		if (aip != data->first_aip) {
> +			igt_assert(aip == data->first_aip + 0x10);
> +			count++;
> +		}
> +	}
> +
> +	return count;
> +}
> +
> +static void save_first_exception_trigger(struct xe_eudebug_debugger *d,
> +					 struct drm_xe_eudebug_event *e)
> +{
> +	struct online_debug_data *data = d->ptr;
> +
> +	pthread_mutex_lock(&data->mutex);
> +	if (!data->exception_event) {
> +		igt_gettime(&data->exception_arrived);
> +		data->exception_event = igt_memdup(e, e->len);
> +	}
> +	pthread_mutex_unlock(&data->mutex);
> +}
> +
> +#define MAX_PREEMPT_TIMEOUT 10ull
> +static int is_client_resumed;

Why is this a static global variable? It could be part of the data struct.
What's more, the name slightly suggests that once it is set to 1 the client
is running, which may not be true (more than one dispatch or more than one
breakpoint). I would consider changing the name, but I don't have a strong
opinion about that.

> +static void eu_attention_resume_trigger(struct xe_eudebug_debugger *d,
> +					struct drm_xe_eudebug_event *e)
> +{
> +	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
> +	struct online_debug_data *data = d->ptr;
> +	uint32_t bitmask_size = att->bitmask_size;
> +	uint8_t *bitmask;
> +	int i;
> +
> +	if (data->last_eu_control_seqno > att->base.seqno)
> +		return;
> +
> +	bitmask = calloc(1, att->bitmask_size);
> +
> +	eu_ctl_stopped(d->fd, att->client_handle, att->exec_queue_handle,
> +		       att->lrc_handle, bitmask, &bitmask_size);
> +	igt_assert(bitmask_size == att->bitmask_size);
> +	igt_assert(memcmp(bitmask, att->bitmask, att->bitmask_size) == 0);
> +
> +	pthread_mutex_lock(&data->mutex);
> +	if (igt_nsec_elapsed(&data->exception_arrived) < (MAX_PREEMPT_TIMEOUT + 1) * NSEC_PER_SEC &&
> +	    d->flags & TRIGGER_RESUME_DELAYED) {
> +		pthread_mutex_unlock(&data->mutex);
> +		free(bitmask);
> +		return;
> +	} else if (d->flags & TRIGGER_RESUME_ONE) {
> +		copy_first_bit(bitmask, bitmask, bitmask_size);
> +	} else if (d->flags & TRIGGER_RESUME_DSS) {
> +		uint64_t *event = (uint64_t *)att->bitmask;
> +		uint64_t *resume = (uint64_t *)bitmask;
> +
> +		memset(bitmask, 0, bitmask_size);
> +		for (i = 0; i < att->bitmask_size / sizeof(uint64_t); i++) {
> +			if (!event[i])
> +				continue;
> +
> +			resume[i] = event[i];
> +			break;
> +		}
> +	} else if (d->flags & TRIGGER_RESUME_SET_BP) {
> +		set_breakpoint_once(d, data);
> +	}
> +
> +	if (d->flags & SHADER_LOOP) {
> +		uint32_t threads = get_number_of_threads(d->flags);
> +		uint32_t val = STEERING_END_LOOP;
> +
> +		igt_assert_eq(pwrite(data->vm_fd, &val, sizeof(uint32_t),
> +				     data->target_offset + steering_offset(threads)),
> +			      sizeof(uint32_t));
> +		fsync(data->vm_fd);
> +	}
> +	pthread_mutex_unlock(&data->mutex);
> +
> +	data->last_eu_control_seqno = eu_ctl_resume(d->master_fd, d->fd, att->client_handle,
> +						    att->exec_queue_handle, att->lrc_handle,
> +						    bitmask, att->bitmask_size);
> +
> +	is_client_resumed = 1;
> +	free(bitmask);
> +}
> +
> +static void eu_attention_resume_single_step_trigger(struct xe_eudebug_debugger *d,
> +						    struct drm_xe_eudebug_event *e)
> +{
> +	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
> +	struct online_debug_data *data = d->ptr;
> +	const int threads = get_number_of_threads(d->flags);
> +	uint32_t val;
> +	size_t sz = sizeof(uint32_t);
> +
> +	get_aips_offset_table(data, threads);
> +
> +	if (d->flags & TRIGGER_RESUME_PARALLEL_WALK) {
> +		if (data->stepped_threads_count != -1)
> +			if (data->steps_done < SINGLE_STEP_COUNT) {
> +				int stepped_threads_count_after_resume =
> +						get_stepped_threads_count(data, threads);
> +				igt_debug("Stepped threads after: %d\n",
> +					  stepped_threads_count_after_resume);
> +
> +				if (stepped_threads_count_after_resume == threads) {
> +					data->first_aip += 0x10;
> +					data->steps_done++;
> +				}
> +
> +				igt_debug("Shader steps: %d\n", data->steps_done);
> +				igt_assert(data->stepped_threads_count == 0);
> +				igt_assert(stepped_threads_count_after_resume == threads);
> +			}
> +
> +		if (data->steps_done < SINGLE_STEP_COUNT) {
> +			data->stepped_threads_count = get_stepped_threads_count(data, threads);
> +			igt_debug("Stepped threads before: %d\n", data->stepped_threads_count);
> +		}
> +
> +		val = data->steps_done < SINGLE_STEP_COUNT ? STEERING_SINGLE_STEP :
> +							     STEERING_CONTINUE;
> +	} else if (d->flags & TRIGGER_RESUME_SINGLE_WALK) {
> +		if (data->stepped_threads_count != -1)
> +			if (data->steps_done < 2) {
> +				int stepped_threads_count_after_resume =
> +						get_stepped_threads_count(data, threads);
> +				igt_debug("Stepped threads after: %d\n",
> +					  stepped_threads_count_after_resume);
> +
> +				if (stepped_threads_count_after_resume == threads) {
> +					data->first_aip += 0x10;
> +					data->steps_done++;
> +					free(data->single_step_bitmask);
> +					data->single_step_bitmask = 0;
> +				}
> +
> +				igt_debug("Shader steps: %d\n", data->steps_done);
> +				igt_assert(data->stepped_threads_count +
> +					   (intel_gen_needs_resume_wa(d->master_fd) ? 2 : 1) ==
> +					   stepped_threads_count_after_resume);
> +			}
> +
> +		if (data->steps_done < 2) {
> +			data->stepped_threads_count = get_stepped_threads_count(data, threads);
> +			igt_debug("Stepped threads before: %d\n", data->stepped_threads_count);
> +			if (intel_gen_needs_resume_wa(d->master_fd)) {
> +				if (!data->single_step_bitmask) {
> +					data->single_step_bitmask = malloc(att->bitmask_size *
> +									   sizeof(uint8_t));
> +					igt_assert(data->single_step_bitmask);
> +					memcpy(data->single_step_bitmask, att->bitmask,
> +					       att->bitmask_size);
> +				}
> +
> +				copy_first_bit(att->bitmask, data->single_step_bitmask,
> +					       att->bitmask_size);
> +			} else
> +				copy_nth_bit(att->bitmask, att->bitmask, att->bitmask_size,
> +					     data->stepped_threads_count + 1);
> +		}
> +
> +		val = data->steps_done < 2 ? STEERING_SINGLE_STEP : STEERING_CONTINUE;
> +	}
> +
> +	igt_assert_eq(pwrite(data->vm_fd, &val, sz,
> +			     data->target_offset + steering_offset(threads)), sz);
> +	fsync(data->vm_fd);
> +
> +	eu_ctl_resume(d->master_fd, d->fd, att->client_handle,
> +		      att->exec_queue_handle, att->lrc_handle,
> +		      att->bitmask, att->bitmask_size);
> +
> +	if (data->single_step_bitmask)
> +		for (int i = 0; i < att->bitmask_size; i++)
> +			data->single_step_bitmask[i] &= ~att->bitmask[i];
> +}
> +
> +static void open_trigger(struct xe_eudebug_debugger *d,
> +			 struct drm_xe_eudebug_event *e)
> +{
> +	struct drm_xe_eudebug_event_client *client = (void *)e;
> +	struct online_debug_data *data = d->ptr;
> +
> +	if (e->flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
> +		return;
> +
> +	pthread_mutex_lock(&data->mutex);
> +	data->client_handle = client->client_handle;
> +	pthread_mutex_unlock(&data->mutex);
> +}
> +
> +static void exec_queue_trigger(struct xe_eudebug_debugger *d,
> +			       struct drm_xe_eudebug_event *e)
> +{
> +	struct drm_xe_eudebug_event_exec_queue *eq = (void *)e;
> +	struct online_debug_data *data = d->ptr;
> +
> +	if (e->flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
> +		return;
> +
> +	pthread_mutex_lock(&data->mutex);
> +	data->exec_queue_handle = eq->exec_queue_handle;
> +	data->lrc_handle = eq->lrc_handle[0];
> +	pthread_mutex_unlock(&data->mutex);
> +}
> +
> +static void vm_open_trigger(struct xe_eudebug_debugger *d,
> +			    struct drm_xe_eudebug_event *e)
> +{
> +	struct drm_xe_eudebug_event_vm *vm = (void *)e;
> +	struct online_debug_data *data = d->ptr;
> +	struct drm_xe_eudebug_vm_open vo = {
> +		.client_handle = vm->client_handle,
> +		.vm_handle = vm->vm_handle,
> +	};
> +	int fd;
> +
> +	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> +		fd = igt_ioctl(d->fd, DRM_XE_EUDEBUG_IOCTL_VM_OPEN, &vo);
> +		igt_assert_lte(0, fd);
> +
> +		pthread_mutex_lock(&data->mutex);
> +		igt_assert(data->vm_fd == -1);
> +		data->vm_fd = fd;
> +		pthread_mutex_unlock(&data->mutex);
> +		return;
> +	}
> +
> +	pthread_mutex_lock(&data->mutex);
> +	close(data->vm_fd);
> +	data->vm_fd = -1;
> +	pthread_mutex_unlock(&data->mutex);
> +}
> +
> +static void read_metadata(struct xe_eudebug_debugger *d,
> +			  uint64_t client_handle,
> +			  uint64_t metadata_handle,
> +			  uint64_t type,
> +			  uint64_t len)
> +{
> +	struct drm_xe_eudebug_read_metadata rm = {
> +		.client_handle = client_handle,
> +		.metadata_handle = metadata_handle,
> +		.size = len,
> +	};
> +	struct online_debug_data *data = d->ptr;
> +	uint64_t *metadata;
> +
> +	metadata = malloc(len);
> +	igt_assert(metadata);
> +
> +	rm.ptr = to_user_pointer(metadata);
> +	igt_assert_eq(igt_ioctl(d->fd, DRM_XE_EUDEBUG_IOCTL_READ_METADATA, &rm), 0);
> +
> +	pthread_mutex_lock(&data->mutex);
> +	switch (type) {
> +	case DRM_XE_DEBUG_METADATA_ELF_BINARY:
> +		data->bb_offset = metadata[0];
> +		data->bb_size = metadata[1];
> +		break;
> +	case DRM_XE_DEBUG_METADATA_PROGRAM_MODULE:
> +		data->target_offset = metadata[0];
> +		data->target_size = metadata[1];
> +		break;
> +	default:
> +		break;
> +	}
> +	pthread_mutex_unlock(&data->mutex);
> +
> +	free(metadata);
> +}
> +
> +static void create_metadata_trigger(struct xe_eudebug_debugger *d, struct drm_xe_eudebug_event *e)
> +{
> +	struct drm_xe_eudebug_event_metadata *em = (void *)e;
> +
> +	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> +		read_metadata(d, em->client_handle, em->metadata_handle, em->type, em->len);
> +	}
> +}
> +
> +static void overwrite_immediate_value_in_common_target_write(int vm_fd, uint64_t offset,
> +							     uint32_t old_val, uint32_t new_val)
> +{
> +	uint64_t addr = offset;
> +	int vals_changed = 0;
> +	uint32_t val;
> +
> +	while (vals_changed < 4) {
> +		igt_assert_eq(pread(vm_fd, &val, sizeof(uint32_t), addr), sizeof(uint32_t));
> +		if (val == old_val) {
> +			igt_debug("val_before_write[%d]: %08x\n", vals_changed, val);
> +			igt_assert_eq(pwrite(vm_fd, &new_val, sizeof(uint32_t), addr),
> +				      sizeof(uint32_t));
> +			igt_assert_eq(pread(vm_fd, &val, sizeof(uint32_t), addr),
> +				      sizeof(uint32_t));
> +			igt_debug("val_before_fsync[%d]: %08x\n", vals_changed, val);
> +			fsync(vm_fd);
> +			igt_assert_eq(pread(vm_fd, &val, sizeof(uint32_t), addr),
> +				      sizeof(uint32_t));
> +			igt_debug("val_after_fsync[%d]: %08x\n", vals_changed, val);
> +			igt_assert_eq_u32(val, new_val);
> +			vals_changed++;
> +		}
> +		addr += sizeof(uint32_t);
> +	}
> +}
> +
> +static void eu_attention_resume_caching_trigger(struct xe_eudebug_debugger *d,
> +						struct drm_xe_eudebug_event *e)
> +{
> +	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
> +	struct online_debug_data *data = d->ptr;
> +	static int counter = 0;
> +	static int kernel_in_bb = 0;
> +	struct dim_t s_dim = surface_dimensions(get_number_of_threads(d->flags));
> +	int val;
> +	uint32_t instr_usdw;
> +	struct gpgpu_shader *kernel;
> +	const uint32_t breakpoint_bit = 1 << 30;
> +	struct gpgpu_shader *shader_preamble;
> +	struct gpgpu_shader *shader_write_instr;
> +
> +	shader_preamble = gpgpu_shader_create(d->master_fd);
> +	gpgpu_shader__write_dword(shader_preamble, SHADER_CANARY, 0);
> +	gpgpu_shader__nop(shader_preamble);
> +	gpgpu_shader__breakpoint(shader_preamble);
> +
> +	shader_write_instr = gpgpu_shader_create(d->master_fd);
> +	gpgpu_shader__common_target_write_u32(shader_write_instr, 0, 0);
> +
> +	if (!kernel_in_bb) {
> +		kernel = get_shader(d->master_fd, d->flags);
> +		kernel_in_bb = find_kernel_in_bb(kernel, data);
> +		gpgpu_shader_destroy(kernel);
> +	}
> +
> +	/* set breakpoint on next write instruction */
> +	if (counter < caching_get_instruction_count(d->master_fd, s_dim.x, d->flags)) {
> +		igt_assert_eq(pread(data->vm_fd, &instr_usdw, sizeof(instr_usdw),
> +				    data->bb_offset + kernel_in_bb + shader_preamble->size * 4 +
> +				    shader_write_instr->size * 4 * counter), sizeof(instr_usdw));
> +		instr_usdw |= breakpoint_bit;
> +		igt_assert_eq(pwrite(data->vm_fd, &instr_usdw, sizeof(instr_usdw),
> +				     data->bb_offset + kernel_in_bb + shader_preamble->size * 4 +
> +				     shader_write_instr->size * 4 * counter), sizeof(instr_usdw));
> +		fsync(data->vm_fd);
> +	}
> +
> +	/* restore current instruction */
> +	if (counter && counter <= caching_get_instruction_count(d->master_fd, s_dim.x, d->flags))
> +		overwrite_immediate_value_in_common_target_write(data->vm_fd,
> +								 data->bb_offset + kernel_in_bb +
> +								 shader_preamble->size * 4 +
> +								 shader_write_instr->size * 4 * (counter - 1),
> +								 CACHING_POISON_VALUE,
> +								 CACHING_VALUE(counter - 1));
> +
> +	/* poison next instruction */
> +	if (counter < caching_get_instruction_count(d->master_fd, s_dim.x, d->flags))
> +		overwrite_immediate_value_in_common_target_write(data->vm_fd,
> +								 data->bb_offset + kernel_in_bb +
> +								 shader_preamble->size * 4 +
> +								 shader_write_instr->size * 4 * counter,
> +								 CACHING_VALUE(counter),
> +								 CACHING_POISON_VALUE);
> +
> +	gpgpu_shader_destroy(shader_write_instr);
> +	gpgpu_shader_destroy(shader_preamble);
> +
> +	for (int i = 0; i < data->target_size; i += sizeof(uint32_t)) {
> +		igt_assert_eq(pread(data->vm_fd, &val, sizeof(val), data->target_offset + i),
> +			      sizeof(val));
> +		igt_assert_f(val != CACHING_POISON_VALUE, "Poison value found at %04d!\n", i);
> +	}
> +
> +	eu_ctl_resume(d->master_fd, d->fd, att->client_handle,
> +		      att->exec_queue_handle, att->lrc_handle,
> +		      att->bitmask, att->bitmask_size);
> +
> +	counter++;
> +}
> +
> +static struct intel_bb *xe_bb_create_on_offset(int fd, uint32_t exec_queue, uint32_t vm,
> +					       uint64_t offset, uint32_t size)
> +{
> +	struct intel_bb *ibb;
> +
> +	ibb = intel_bb_create_with_context(fd, exec_queue, vm, NULL, size);
> +
> +	/* update intel bb offset */
> +	intel_bb_remove_object(ibb, ibb->handle, ibb->batch_offset, ibb->size);
> +	intel_bb_add_object(ibb, ibb->handle, ibb->size, offset, ibb->alignment, false);
> +	ibb->batch_offset = offset;
> +
> +	return ibb;
> +}
> +
> +static size_t get_bb_size(int flags)
> +{
> +	if ((flags & SHADER_CACHING_SRAM) || (flags & SHADER_CACHING_VRAM))
> +		return 32768;
> +
> +	return 4096;
> +}
> +
> +static void run_online_client(struct xe_eudebug_client *c)
> +{
> +	int threads = get_number_of_threads(c->flags);
> +	const uint64_t target_offset = 0x1a000000;
> +	const uint64_t bb_offset = 0x1b000000;
> +	const size_t bb_size = get_bb_size(c->flags);
> +	struct online_debug_data *data = c->ptr;
> +	struct drm_xe_engine_class_instance hwe = data->hwe;
> +	struct drm_xe_ext_set_property ext = {
> +		.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> +		.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_EUDEBUG,
> +		.value = DRM_XE_EXEC_QUEUE_EUDEBUG_FLAG_ENABLE,
> +	};
> +	struct drm_xe_exec_queue_create create = {
> +		.instances = to_user_pointer(&hwe),
> +		.width = 1,
> +		.num_placements = 1,
> +		.extensions = c->flags & DISABLE_DEBUG_MODE ? 0 : to_user_pointer(&ext)
> +	};
> +	struct dim_t w_dim = walker_dimensions(threads);
> +	struct dim_t s_dim = surface_dimensions(threads);
> +	struct timespec ts = { };
> +	struct gpgpu_shader *sip, *shader;
> +	uint32_t metadata_id[2];
> +	uint64_t *metadata[2];
> +	struct intel_bb *ibb;
> +	struct intel_buf *buf;
> +	uint32_t *ptr;
> +	int fd;
> +
> +	metadata[0] = calloc(2, sizeof(*metadata));
> +	metadata[1] = calloc(2, sizeof(*metadata));
> +	igt_assert(metadata[0]);
> +	igt_assert(metadata[1]);
> +
> +	fd = xe_eudebug_client_open_driver(c);
> +	xe_device_get(fd);
> +
> +	/* Additional memory for steering control */
> +	if ((c->flags & SHADER_LOOP) || (c->flags & SHADER_SINGLE_STEP))
> +		s_dim.y++;
> +	/* Additional memory for caching check */
> +	if ((c->flags & SHADER_CACHING_SRAM) || (c->flags & SHADER_CACHING_VRAM))
> +		s_dim.y += caching_get_instruction_count(fd, s_dim.x, c->flags);
> +	buf = create_uc_buf(fd, s_dim.x, s_dim.y);
> +
> +	buf->addr.offset = target_offset;
> +
> +	metadata[0][0] = bb_offset;
> +	metadata[0][1] = bb_size;
> +	metadata[1][0] = target_offset;
> +	metadata[1][1] = buf->size;
> +	metadata_id[0] = xe_eudebug_client_metadata_create(c, fd, DRM_XE_DEBUG_METADATA_ELF_BINARY,
> +							   2 * sizeof(*metadata), metadata[0]);
> +	metadata_id[1] = xe_eudebug_client_metadata_create(c, fd,
> +							   DRM_XE_DEBUG_METADATA_PROGRAM_MODULE,
> +							   2 * sizeof(*metadata), metadata[1]);
> +
> +	create.vm_id = xe_eudebug_client_vm_create(c, fd, DRM_XE_VM_CREATE_FLAG_LR_MODE, 0);
> +	xe_eudebug_client_exec_queue_create(c, fd, &create);
> +
> +	ibb = xe_bb_create_on_offset(fd, create.exec_queue_id, create.vm_id,
> +				     bb_offset, bb_size);
> +	intel_bb_set_lr_mode(ibb, true);
> +
> +	sip = get_sip(fd, c->flags);
> +	shader = get_shader(fd, c->flags);
> +
> +	igt_nsec_elapsed(&ts);
> +	gpgpu_shader_exec(ibb, buf, w_dim.x, w_dim.y, shader, sip, 0, 0);
> +
> +	gpgpu_shader_destroy(sip);
> +	gpgpu_shader_destroy(shader);
> +
> +	intel_bb_sync(ibb);
> +
> +	if (c->flags & TRIGGER_RECONNECT)
> +		xe_eudebug_client_wait_stage(c, DEBUGGER_REATTACHED);
> +	else
> +		/* Make sure it wasn't the timeout. */
> +		igt_assert(igt_nsec_elapsed(&ts) <
> +			   XE_EUDEBUG_DEFAULT_TIMEOUT_MS / MSEC_PER_SEC * NSEC_PER_SEC);
> +
> +	if (!(c->flags & DO_NOT_EXPECT_CANARIES)) {
> +		ptr = xe_bo_mmap_ext(fd, buf->handle, buf->size, PROT_READ);
> +		data->threads_count = count_canaries_neq(ptr, w_dim, 0);
> +		igt_assert_f(data->threads_count, "No canaries found, nothing executed?\n");
> +
> +		if ((c->flags & SHADER_BREAKPOINT || c->flags & TRIGGER_RESUME_SET_BP ||
> +		     c->flags & SHADER_N_NOOP_BREAKPOINT) && !(c->flags & DISABLE_DEBUG_MODE)) {
> +			uint32_t aip = ptr[0];
> +
> +			igt_assert_f(aip != SHADER_CANARY, "Workload executed but breakpoint not hit!\n");
> +			igt_assert_eq(count_canaries_eq(ptr, w_dim, aip), data->threads_count);
> +			igt_debug("Breakpoint hit in %d threads, AIP=0x%08x\n", data->threads_count, aip);
> +		}
> +
> +		munmap(ptr, buf->size);
> +	}
> +
> +	intel_bb_destroy(ibb);
> +
> +	xe_eudebug_client_exec_queue_destroy(c, fd, &create);
> +	xe_eudebug_client_vm_destroy(c, fd, create.vm_id);
> +
> +	xe_eudebug_client_metadata_destroy(c, fd, metadata_id[0], DRM_XE_DEBUG_METADATA_ELF_BINARY,
> +					   2 * sizeof(*metadata));
> +	xe_eudebug_client_metadata_destroy(c, fd, metadata_id[1],
> +					   DRM_XE_DEBUG_METADATA_PROGRAM_MODULE,
> +					   2 * sizeof(*metadata));
> +
> +	xe_device_put(fd);
> +	xe_eudebug_client_close_driver(c, fd);
> +}
> +
> +static bool intel_gen_has_lockstep_eus(int fd)
> +{
> +	const uint32_t id = intel_get_drm_devid(fd);
> +
> +	/*
> +	 * Lockstep (or, in some parlance, fused) EUs are pairs of EUs that
> +	 * work in sync, supposedly on the same clock and with the same
> +	 * control flow. Thus, when the control flow hits a breakpoint, both
> +	 * are excepted into SIP, and at this level the hardware exposes only
> +	 * a single attention thread bit for the pair. PVC is the first
> +	 * platform without lockstepping.
> +	 */
> +	return !(intel_graphics_ver(id) == IP_VER(12, 60) || intel_gen(id) >= 20);
> +}
> +
> +static int query_attention_bitmask_size(int fd, int gt)
> +{
> +	const unsigned int threads = 8;
> +	struct drm_xe_query_topology_mask *c_dss = NULL, *g_dss = NULL, *eu_per_dss = NULL;
> +	struct drm_xe_query_topology_mask *topology;
> +	struct drm_xe_device_query query = {
> +		.extensions = 0,
> +		.query = DRM_XE_DEVICE_QUERY_GT_TOPOLOGY,
> +		.size = 0,
> +		.data = 0,
> +	};
> +	int pos = 0, eus;
> +	uint8_t *any_dss;
> +
> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
> +	igt_assert_neq(query.size, 0);
> +
> +	topology = malloc(query.size);
> +	igt_assert(topology);
> +
> +	query.data = to_user_pointer(topology);
> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
> +
> +	while (query.size >= sizeof(struct drm_xe_query_topology_mask)) {
> +		struct drm_xe_query_topology_mask *topo;
> +		int sz;
> +
> +		topo = (struct drm_xe_query_topology_mask *)((unsigned char *)topology + pos);
> +		sz = sizeof(struct drm_xe_query_topology_mask) + topo->num_bytes;
> +
> +		query.size -= sz;
> +		pos += sz;
> +
> +		if (topo->gt_id != gt)
> +			continue;
> +
> +		if (topo->type == DRM_XE_TOPO_DSS_GEOMETRY)
> +			g_dss = topo;
> +		else if (topo->type == DRM_XE_TOPO_DSS_COMPUTE)
> +			c_dss = topo;
> +		else if (topo->type == DRM_XE_TOPO_EU_PER_DSS ||
> +			 topo->type == DRM_XE_TOPO_SIMD16_EU_PER_DSS)
> +			eu_per_dss = topo;
> +	}
> +
> +	igt_assert(g_dss && c_dss && eu_per_dss);
> +	igt_assert_eq_u32(c_dss->num_bytes, g_dss->num_bytes);
> +
> +	any_dss = malloc(c_dss->num_bytes);
> +	igt_assert(any_dss);
> +
> +	for (int i = 0; i < c_dss->num_bytes; i++)
> +		any_dss[i] = c_dss->mask[i] | g_dss->mask[i];
> +
> +	eus = count_set_bits(any_dss, c_dss->num_bytes);
> +	eus *= count_set_bits(eu_per_dss->mask, eu_per_dss->num_bytes);
> +
> +	if (intel_gen_has_lockstep_eus(fd))
> +		eus /= 2;
> +
> +	free(any_dss);
> +	free(topology);
> +
> +	return eus * threads / 8;
> +}
> +
> +static struct drm_xe_eudebug_event_exec_queue *
> +match_attention_with_exec_queue(struct xe_eudebug_event_log *log,
> +				struct drm_xe_eudebug_event_eu_attention *ea)
> +{
> +	struct drm_xe_eudebug_event_exec_queue *ee;
> +	struct drm_xe_eudebug_event *event = NULL, *current = NULL, *matching_destroy = NULL;
> +	int lrc_idx;
> +
> +	xe_eudebug_for_each_event(event, log) {
> +		if (event->type == DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE &&
> +		    event->flags == DRM_XE_EUDEBUG_EVENT_CREATE) {
> +			ee = (struct drm_xe_eudebug_event_exec_queue *)event;
> +
> +			if (ee->exec_queue_handle != ea->exec_queue_handle)
> +				continue;
> +
> +			if (ee->client_handle != ea->client_handle)
> +				continue;
> +
> +			for (lrc_idx = 0; lrc_idx < ee->width; lrc_idx++) {
> +				if (ee->lrc_handle[lrc_idx] == ea->lrc_handle)
> +					break;
> +			}
> +
> +			if (lrc_idx >= ee->width) {
> +				igt_debug("No matching lrc handle within matching exec_queue!");
> +				continue;
> +			}
> +
> +			/* Event logs are sorted, so no earlier match can follow; stop. */
> +			if (ea->base.seqno < ee->base.seqno)
> +				break;
> +
> +			/*
> +			 * Sanity check that the attention did not arrive
> +			 * for an already destroyed exec_queue.
> +			 */
> +			current = event;
> +			xe_eudebug_for_each_event(current, log) {
> +				if (current->type == DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE &&
> +				    current->flags == DRM_XE_EUDEBUG_EVENT_DESTROY) {
> +					uint8_t offset = sizeof(struct drm_xe_eudebug_event);
> +
> +					if (memcmp((uint8_t *)current + offset,
> +						   (uint8_t *)event + offset,
> +						   current->len - offset) == 0) {
> +						matching_destroy = current;
> +					}
> +				}
> +			}
> +
> +			if (!matching_destroy || ea->base.seqno > matching_destroy->seqno)
> +				continue;
> +
> +			return ee;
> +		}
> +	}
> +
> +	return NULL;
> +}
> +
> +static void online_session_check(struct xe_eudebug_session *s, int flags)
> +{
> +	struct drm_xe_eudebug_event_eu_attention *ea = NULL;
> +	struct drm_xe_eudebug_event *event = NULL;
> +	struct online_debug_data *data = s->c->ptr;
> +	bool expect_exception = !(flags & DISABLE_DEBUG_MODE);
> +	int sum = 0;
> +	int bitmask_size;
> +
> +	xe_eudebug_session_check(s, true, XE_EUDEBUG_FILTER_EVENT_VM_BIND |
> +					  XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP |
> +					  XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE);
> +
> +	bitmask_size = query_attention_bitmask_size(s->d->master_fd, data->hwe.gt_id);
> +
> +	xe_eudebug_for_each_event(event, s->d->log) {
> +		if (event->type == DRM_XE_EUDEBUG_EVENT_EU_ATTENTION) {
> +			ea = (struct drm_xe_eudebug_event_eu_attention *)event;
> +
> +			igt_assert(event->flags == DRM_XE_EUDEBUG_EVENT_STATE_CHANGE);
> +			igt_assert_eq(ea->bitmask_size, bitmask_size);
> +			sum += count_set_bits(ea->bitmask, bitmask_size);
> +			igt_assert(match_attention_with_exec_queue(s->d->log, ea));
> +		}
> +	}
> +
> +	/*
> +	 * We can expect attention to sum up only
> +	 * if we have a breakpoint set and we resume all threads always.
> +	 */
> +	if (flags == SHADER_BREAKPOINT)
> +		igt_assert_eq(sum, data->threads_count);
> +
> +	if (expect_exception)
> +		igt_assert(sum > 0);
> +	else
> +		igt_assert(sum == 0);
> +}
> +
> +static void ufence_ack_trigger(struct xe_eudebug_debugger *d,
> +			       struct drm_xe_eudebug_event *e)
> +{
> +	struct drm_xe_eudebug_event_vm_bind_ufence *ef = (void *)e;
> +
> +	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE)
> +		xe_eudebug_ack_ufence(d->fd, ef);
> +}
> +
> +/**
> + * SUBTEST: basic-breakpoint
> + * Description:
> + *	Check whether KMD sends attention events
> + *	for workload in debug mode stopped on breakpoint.
> + *
> + * SUBTEST: breakpoint-not-in-debug-mode
> + * Description:
> + *	Check whether KMD resets the GPU when it spots an attention
> + *	coming from workload not in debug mode.
> + *
> + * SUBTEST: stopped-thread
> + * Description:
> + *	Hits breakpoint on runalone workload and
> + *	reads attention for fixed time.
> + *
> + * SUBTEST: resume-%s
> + * Description:
> + *	Resumes stopped on a breakpoint workload
> + *	with granularity of %arg[1].
> + *
> + *
> + * arg[1]:
> + *
> + * @one:	one thread
> + * @dss:	threads running on one subslice
> + */
> +static void test_basic_online(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
> +{
> +	struct xe_eudebug_session *s;
> +	struct online_debug_data *data;
> +
> +	data = online_debug_data_create(hwe);
> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
> +
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_debug_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_resume_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> +					ufence_ack_trigger);
> +
> +	xe_eudebug_session_run(s);
> +	online_session_check(s, s->flags);
> +
> +	xe_eudebug_session_destroy(s);
> +	online_debug_data_destroy(data);
> +}
> +
> +/**
> + * SUBTEST: preempt-breakpoint
> + * Description:
> + *	Verify that eu debugger disables preemption timeout to
> + *	prevent reset of workload stopped on breakpoint.
> + */
> +static void test_preemption(int fd, struct drm_xe_engine_class_instance *hwe)
> +{
> +	int flags = SHADER_BREAKPOINT | TRIGGER_RESUME_DELAYED;
> +	struct xe_eudebug_session *s;
> +	struct online_debug_data *data;
> +	struct xe_eudebug_client *other;
> +
> +	data = online_debug_data_create(hwe);
> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
> +	other = xe_eudebug_client_create(fd, run_online_client, SHADER_NOP, data);
> +
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_debug_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_resume_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> +					ufence_ack_trigger);
> +
> +	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
> +	xe_eudebug_debugger_start_worker(s->d);
> +
> +	xe_eudebug_client_start(s->c);
> +	sleep(1); /* make sure s->c starts first */
> +	xe_eudebug_client_start(other);
> +
> +	xe_eudebug_client_wait_done(s->c);
> +	xe_eudebug_client_wait_done(other);
> +
> +	xe_eudebug_debugger_stop_worker(s->d, 1);
> +
> +	xe_eudebug_session_destroy(s);
> +	xe_eudebug_client_destroy(other);
> +
> +	igt_assert_f(data->last_eu_control_seqno != 0,
> +		     "Workload with breakpoint has ended without resume!\n");
> +
> +	online_debug_data_destroy(data);
> +}
> +
> +/**
> + * SUBTEST: reset-with-attention
> + * Description:
> + *	Check whether GPU is usable after resetting with attention raised
> + *	(stopped on breakpoint) by running the same workload again.
> + */
> +static void test_reset_with_attention_online(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
> +{
> +	struct xe_eudebug_session *s1, *s2;
> +	struct online_debug_data *data;
> +
> +	data = online_debug_data_create(hwe);
> +	s1 = xe_eudebug_session_create(fd, run_online_client, flags, data);
> +
> +	xe_eudebug_debugger_add_trigger(s1->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_reset_trigger);
> +	xe_eudebug_debugger_add_trigger(s1->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> +					ufence_ack_trigger);
> +
> +	xe_eudebug_session_run(s1);
> +	xe_eudebug_session_destroy(s1);
> +
> +	s2 = xe_eudebug_session_create(fd, run_online_client, flags, data);
> +	xe_eudebug_debugger_add_trigger(s2->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_resume_trigger);
> +	xe_eudebug_debugger_add_trigger(s2->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> +					ufence_ack_trigger);
> +
> +	xe_eudebug_session_run(s2);
> +
> +	online_session_check(s2, s2->flags);
> +
> +	xe_eudebug_session_destroy(s2);
> +	online_debug_data_destroy(data);
> +}
> +
> +/**
> + * SUBTEST: interrupt-all
> + * Description:
> + *	Schedules EU workload which should last about a few seconds, then
> + *	interrupts all threads, checks whether attention event came, and
> + *	resumes stopped threads back.
> + *
> + * SUBTEST: interrupt-all-set-breakpoint
> + * Description:
> + *	Schedules EU workload which should last about a few seconds, then
> + *	interrupts all threads, once attention event come it sets breakpoint on
> + *	the very next instruction and resumes stopped threads back. It expects
> + *	that every thread hits the breakpoint.
> + */
> +static void test_interrupt_all(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
> +{
> +	struct xe_eudebug_session *s;
> +	struct online_debug_data *data;
> +	uint32_t val;
> +
> +	data = online_debug_data_create(hwe);
> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
> +
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
> +					open_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
> +					exec_queue_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_debug_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_resume_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
> +					create_metadata_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> +					ufence_ack_trigger);
> +
> +	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
> +	xe_eudebug_debugger_start_worker(s->d);
> +	xe_eudebug_client_start(s->c);
> +
> +	/* wait for workload to start */
> +	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
> +		/* collect needed data from triggers */
> +		if (READ_ONCE(data->vm_fd) == -1 || READ_ONCE(data->target_size) == 0)
> +			continue;
> +
> +		if (pread(data->vm_fd, &val, sizeof(val), data->target_offset) == sizeof(val))
> +			if (val != 0)
> +				break;
> +	}
> +
> +	pthread_mutex_lock(&data->mutex);
> +	igt_assert(data->client_handle != -1);
> +	igt_assert(data->exec_queue_handle != -1);
> +	eu_ctl_interrupt_all(s->d->fd, data->client_handle,
> +			     data->exec_queue_handle, data->lrc_handle);
> +	pthread_mutex_unlock(&data->mutex);
> +
> +	xe_eudebug_client_wait_done(s->c);
> +
> +	xe_eudebug_debugger_stop_worker(s->d, 1);
> +
> +	xe_eudebug_event_log_print(s->d->log, true);
> +	xe_eudebug_event_log_print(s->c->log, true);
> +
> +	online_session_check(s, s->flags);
> +
> +	xe_eudebug_session_destroy(s);
> +	online_debug_data_destroy(data);
> +}
> +
> +static void reset_debugger_log(struct xe_eudebug_debugger *d)
> +{
> +	unsigned int max_size;
> +	char log_name[80];
> +
> +	/* Don't pull the rug out from under an active debugger */
> +	igt_assert(d->target_pid == 0);
> +
> +	max_size = d->log->max_size;
> +	strncpy(log_name, d->log->name, sizeof(log_name) - 1);
> +	log_name[sizeof(log_name) - 1] = '\0';
> +	xe_eudebug_event_log_destroy(d->log);
> +	d->log = xe_eudebug_event_log_create(log_name, max_size);
> +}
> +
> +/**
> + * SUBTEST: interrupt-other-debuggable
> + * Description:
> + *	Schedules an EU workload in runalone mode with a never-ending loop
> + *	and, while it is not under debug, tries to interrupt all threads
> + *	using a different client attached to the debugger.
> + *
> + * SUBTEST: interrupt-other
> + * Description:
> + *	Schedules an EU workload with a never-ending loop and, while it is
> + *	not configured for debugging, tries to interrupt all threads using
> + *	the client attached to the debugger.
> + */
> +static void test_interrupt_other(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
> +{
> +	struct online_debug_data *data;
> +	struct online_debug_data *debugee_data;
> +	struct xe_eudebug_session *s;
> +	struct xe_eudebug_client *debugee;
> +	int debugee_flags = SHADER_LOOP | DO_NOT_EXPECT_CANARIES;
> +	int val = 0;
> +
> +	data = online_debug_data_create(hwe);
> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
> +
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN, open_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
> +					exec_queue_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
> +					create_metadata_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> +					ufence_ack_trigger);
> +
> +	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
> +	xe_eudebug_debugger_start_worker(s->d);
> +	xe_eudebug_client_start(s->c);
> +
> +	/* wait for workload to start */
> +	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
> +		if (READ_ONCE(data->vm_fd) == -1 || READ_ONCE(data->target_size) == 0)
> +			continue;
> +
> +		if (pread(data->vm_fd, &val, sizeof(val), data->target_offset) == sizeof(val))
> +			if (val != 0)
> +				break;
> +	}
> +	igt_assert_f(val != 0, "Workload execution is not yet started\n");
> +
> +	xe_eudebug_debugger_dettach(s->d);
> +	reset_debugger_log(s->d);
> +
> +	debugee_data = online_debug_data_create(hwe);
> +	s->d->ptr = debugee_data;
> +	debugee = xe_eudebug_client_create(fd, run_online_client, debugee_flags, debugee_data);
> +	igt_assert_eq(xe_eudebug_debugger_attach(s->d, debugee), 0);
> +	xe_eudebug_client_start(debugee);
> +
> +	/* wait for the debuggee's triggers to collect the needed data */
> +	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
> +		if (READ_ONCE(debugee_data->vm_fd) != -1 &&
> +		    READ_ONCE(debugee_data->target_size) != 0)
> +			break;
> +	}
> +
> +	pthread_mutex_lock(&debugee_data->mutex);
> +	igt_assert(debugee_data->client_handle != -1);
> +	igt_assert(debugee_data->exec_queue_handle != -1);
> +	/*
> +	 * Interrupting the other client should fail with -EINVAL
> +	 * as it is running in runalone mode.
> +	 */
> +	igt_assert_eq(__eu_ctl(s->d->fd, debugee_data->client_handle,
> +		       debugee_data->exec_queue_handle, debugee_data->lrc_handle,
> +		       NULL, 0, DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL, NULL), -EINVAL);
> +	pthread_mutex_unlock(&debugee_data->mutex);
> +
> +	xe_force_gt_reset_async(s->d->master_fd, debugee_data->hwe.gt_id);
> +
> +	xe_eudebug_client_wait_done(debugee);
> +	xe_eudebug_debugger_stop_worker(s->d, 1);
> +
> +	xe_eudebug_event_log_print(s->d->log, true);
> +	xe_eudebug_event_log_print(debugee->log, true);
> +
> +	xe_eudebug_session_check(s, true, XE_EUDEBUG_FILTER_EVENT_VM_BIND |
> +				 XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP |
> +				 XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE);
> +
> +	xe_eudebug_client_destroy(debugee);
> +	xe_eudebug_session_destroy(s);
> +	online_debug_data_destroy(data);
> +	online_debug_data_destroy(debugee_data);
> +}
> +
> +/**
> + * SUBTEST: tdctl-parameters
> + * Description:
> + *	Schedules EU workload which should last about a few seconds, then
> + *	checks negative scenarios of EU_THREADS ioctl usage, interrupts all threads,
> + *	checks whether attention event came, and resumes stopped threads back.
> + */
> +static void test_tdctl_parameters(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
> +{
> +	struct xe_eudebug_session *s;
> +	struct online_debug_data *data;
> +	uint32_t val;
> +	uint32_t random_command;
> +	uint32_t bitmask_size = query_attention_bitmask_size(fd, hwe->gt_id);
> +	uint8_t *attention_bitmask = malloc(bitmask_size * sizeof(uint8_t));
> +
> +	igt_assert(attention_bitmask);
> +
> +	data = online_debug_data_create(hwe);
> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
> +
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
> +					open_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
> +					exec_queue_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_debug_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_resume_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
> +					create_metadata_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> +					ufence_ack_trigger);
> +
> +	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
> +	xe_eudebug_debugger_start_worker(s->d);
> +	xe_eudebug_client_start(s->c);
> +
> +	/* wait for workload to start */
> +	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
> +		/* collect needed data from triggers */
> +		if (READ_ONCE(data->vm_fd) == -1 || READ_ONCE(data->target_size) == 0)
> +			continue;
> +
> +		if (pread(data->vm_fd, &val, sizeof(val), data->target_offset) == sizeof(val))
> +			if (val != 0)
> +				break;
> +	}
> +
> +	pthread_mutex_lock(&data->mutex);
> +	igt_assert(data->client_handle != -1);
> +	igt_assert(data->exec_queue_handle != -1);
> +	igt_assert(data->lrc_handle != -1);
> +
> +	/* fail on invalid lrc_handle */
> +	igt_assert(__eu_ctl(s->d->fd, data->client_handle,
> +			    data->exec_queue_handle, data->lrc_handle + 1,
> +			    attention_bitmask, &bitmask_size,
> +			    DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL, NULL) == -EINVAL);
> +
> +	/* fail on invalid exec_queue_handle */
> +	igt_assert(__eu_ctl(s->d->fd, data->client_handle,
> +			    data->exec_queue_handle + 1, data->lrc_handle,
> +			    attention_bitmask, &bitmask_size,
> +			    DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL, NULL) == -EINVAL);
> +
> +	/* fail on invalid client */
> +	igt_assert(__eu_ctl(s->d->fd, data->client_handle + 1,
> +			    data->exec_queue_handle, data->lrc_handle,
> +			    attention_bitmask, &bitmask_size,
> +			    DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL, NULL) == -EINVAL);
> +
> +	/*
> +	 * bitmask size must be aligned to sizeof(u32) for all commands
> +	 * and be zero for interrupt all
> +	 */
> +	bitmask_size = sizeof(uint32_t) - 1;
> +	igt_assert(__eu_ctl(s->d->fd, data->client_handle,
> +			    data->exec_queue_handle, data->lrc_handle,
> +			    attention_bitmask, &bitmask_size,
> +			    DRM_XE_EUDEBUG_EU_CONTROL_CMD_STOPPED, NULL) == -EINVAL);
> +	bitmask_size = 0;
> +
> +	/* fail on invalid command */
> +	random_command = random() | (DRM_XE_EUDEBUG_EU_CONTROL_CMD_RESUME + 1);
> +	igt_assert(__eu_ctl(s->d->fd, data->client_handle,
> +			    data->exec_queue_handle, data->lrc_handle,
> +			    attention_bitmask, &bitmask_size, random_command, NULL) == -EINVAL);
> +
> +	free(attention_bitmask);
> +
> +	eu_ctl_interrupt_all(s->d->fd, data->client_handle,
> +			     data->exec_queue_handle, data->lrc_handle);
> +	pthread_mutex_unlock(&data->mutex);
> +
> +	xe_eudebug_client_wait_done(s->c);
> +
> +	xe_eudebug_debugger_stop_worker(s->d, 1);
> +
> +	xe_eudebug_event_log_print(s->d->log, true);
> +	xe_eudebug_event_log_print(s->c->log, true);
> +
> +	online_session_check(s, s->flags);
> +
> +	xe_eudebug_session_destroy(s);
> +	online_debug_data_destroy(data);
> +}
> +
> +static void eu_attention_debugger_detach_trigger(struct xe_eudebug_debugger *d,
> +						 struct drm_xe_eudebug_event *event)
> +{
> +	struct online_debug_data *data = d->ptr;
> +	uint64_t c_pid;
> +	int ret;
> +
> +	c_pid = d->target_pid;
> +
> +	/* Reset VM data so the re-triggered VM open handler works properly */
> +	data->vm_fd = -1;
> +
> +	xe_eudebug_debugger_dettach(d);
> +
> +	/* Let the KMD scan function notice unhandled EU attention */
> +	if (!(d->flags & SHADER_N_NOOP_BREAKPOINT))
> +		sleep(1);
> +
> +	/*
> +	 * A new session created by the EU debugger on reconnect restarts
> +	 * the seqno, causing issues with log sorting. To avoid that, create
> +	 * a new event log.
> +	 */
> +	reset_debugger_log(d);
> +
> +	ret = xe_eudebug_connect(d->master_fd, c_pid, 0);
> +	igt_assert(ret >= 0);
> +	d->fd = ret;
> +	d->target_pid = c_pid;
> +
> +	/* Let the discovery worker discover resources */
> +	sleep(2);
> +
> +	if (!(d->flags & SHADER_N_NOOP_BREAKPOINT))
> +		xe_eudebug_debugger_signal_stage(d, DEBUGGER_REATTACHED);
> +}
> +
> +/**
> + * SUBTEST: interrupt-reconnect
> + * Description:
> + *	Schedules EU workload which should last about a few seconds,
> + *	interrupts all threads and detaches debugger when attention is
> + *	raised. The test checks if KMD resets the workload when there's
> + *	no debugger attached and does the event playback on discovery.
> + */
> +static void test_interrupt_reconnect(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
> +{
> +	struct drm_xe_eudebug_event *e = NULL;
> +	struct online_debug_data *data;
> +	struct xe_eudebug_session *s;
> +	uint32_t val;
> +
> +	data = online_debug_data_create(hwe);
> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
> +
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
> +					open_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
> +					exec_queue_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_debug_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_debugger_detach_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
> +					create_metadata_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> +					ufence_ack_trigger);
> +
> +	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
> +	xe_eudebug_debugger_start_worker(s->d);
> +	xe_eudebug_client_start(s->c);
> +
> +	/* wait for workload to start */
> +	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
> +		/* collect needed data from triggers */
> +		if (READ_ONCE(data->vm_fd) == -1 || READ_ONCE(data->target_size) == 0)
> +			continue;
> +
> +		if (pread(data->vm_fd, &val, sizeof(val), data->target_offset) == sizeof(val))
> +			if (val != 0)
> +				break;
> +	}
> +
> +	pthread_mutex_lock(&data->mutex);
> +	igt_assert(data->client_handle != -1);
> +	igt_assert(data->exec_queue_handle != -1);
> +	eu_ctl_interrupt_all(s->d->fd, data->client_handle,
> +			     data->exec_queue_handle, data->lrc_handle);
> +	pthread_mutex_unlock(&data->mutex);
> +
> +	xe_eudebug_client_wait_done(s->c);
> +
> +	xe_eudebug_debugger_stop_worker(s->d, 1);
> +
> +	xe_eudebug_event_log_print(s->d->log, true);
> +	xe_eudebug_event_log_print(s->c->log, true);
> +
> +	xe_eudebug_session_check(s, true, XE_EUDEBUG_FILTER_EVENT_VM_BIND |
> +					  XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP |
> +					  XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE);
> +
> +	/* We expect workload reset, so no attention should be raised */
> +	xe_eudebug_for_each_event(e, s->d->log)
> +		igt_assert(e->type != DRM_XE_EUDEBUG_EVENT_EU_ATTENTION);
> +
> +	xe_eudebug_session_destroy(s);
> +	online_debug_data_destroy(data);
> +}
> +
> +/**
> + * SUBTEST: single-step
> + * Description:
> + *	Schedules EU workload with 16 nops after breakpoint, then single-steps
> + *	through the shader, advances all threads each step, checking if all
> + *	threads advanced every step.
> + *
> + * SUBTEST: single-step-one
> + * Description:
> + *	Schedules EU workload with 16 nops after breakpoint, then single-steps
> + *	through the shader, advances one thread each step, checking if one
> + *	thread advanced every step. Due to the time constraint, only first two
> + *	shader instructions after breakpoint are validated.
> + */
> +static void test_single_step(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
> +{
> +	struct xe_eudebug_session *s;
> +	struct online_debug_data *data;
> +
> +	data = online_debug_data_create(hwe);
> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
> +
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
> +					open_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_debug_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_resume_single_step_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
> +					create_metadata_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> +					ufence_ack_trigger);
> +
> +	xe_eudebug_session_run(s);
> +	online_session_check(s, s->flags);
> +	xe_eudebug_session_destroy(s);
> +	online_debug_data_destroy(data);
> +}
> +
> +static void eu_attention_debugger_ndetach_trigger(struct xe_eudebug_debugger *d,
> +						 struct drm_xe_eudebug_event *event)
> +{
> +	static int debugger_detach_count;
> +
> +	if (debugger_detach_count < (SHADER_LOOP_N - 1)) {
> +		/* Make sure the resume command has been issued before detaching the debugger */
> +		if (!is_client_resumed)
> +			return;
Not sure what we are trying to achieve here. I think triggers are executed in the same order as
added, so this should be ensured. But anyway, I think we could just reuse
data->last_eu_control_seqno. If it's greater than the seqno of this event, we are free to reattach.

Regards,
Dominik

> +		eu_attention_debugger_detach_trigger(d, event);
> +		debugger_detach_count++;
> +	} else {
> +		igt_debug("Reached the Nth breakpoint, preventing debugger detach\n");
> +	}
> +	is_client_resumed = 0;
> +}
> +
> +/**
> + * SUBTEST: debugger-reopen
> + * Description:
> + *	Check whether the debugger is able to reopen the connection and
> + *	capture the events of an already running client.
> + */
> +static void test_debugger_reopen(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
> +{
> +	struct xe_eudebug_session *s;
> +	struct online_debug_data *data;
> +
> +	data = online_debug_data_create(hwe);
> +
> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
> +
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_debug_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_resume_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_debugger_ndetach_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> +					ufence_ack_trigger);
> +
> +	xe_eudebug_session_run(s);
> +
> +	xe_eudebug_session_destroy(s);
> +	online_debug_data_destroy(data);
> +}
> +
> +/**
> + * SUBTEST: writes-caching-%s
> + * Description:
> + *	Write incrementing values to 2-page-long target surface, poisoning the data one breakpoint
> + *	before each write instruction and restoring it when the poisoned instruction breakpoint
> + *	is hit. Expect to never see poison values in target surface.
> + *
> + *
> + * arg[1]:
> + *
> + * @sram:	Use page size of SRAM
> + * @vram:	Use page size of VRAM
> + */
> +static void test_caching(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
> +{
> +	struct xe_eudebug_session *s;
> +	struct online_debug_data *data;
> +
> +	if (flags & SHADER_CACHING_VRAM)
> +		igt_skip_on_f(!xe_has_vram(fd), "Device does not have VRAM.\n");
> +
> +	data = online_debug_data_create(hwe);
> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
> +
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
> +					open_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_debug_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +					eu_attention_resume_caching_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
> +					create_metadata_trigger);
> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> +					ufence_ack_trigger);
> +
> +	xe_eudebug_session_run(s);
> +	online_session_check(s, s->flags);
> +	xe_eudebug_session_destroy(s);
> +	online_debug_data_destroy(data);
> +}
> +
> +static int wait_for_exception(struct online_debug_data *data, int timeout)
> +{
> +	int ret = -ETIMEDOUT;
> +
> +	igt_for_milliseconds(timeout) {
> +		pthread_mutex_lock(&data->mutex);
> +		if ((data->exception_arrived.tv_sec |
> +		     data->exception_arrived.tv_nsec) != 0)
> +			ret = 0;
> +		pthread_mutex_unlock(&data->mutex);
> +
> +		if (!ret)
> +			break;
> +		usleep(1000);
> +	}
> +
> +	return ret;
> +}
> +
> +#define is_compute_on_gt(__e, __gt) ((__e->engine_class == DRM_XE_ENGINE_CLASS_RENDER || \
> +				      __e->engine_class == DRM_XE_ENGINE_CLASS_COMPUTE) && \
> +				      __e->gt_id == __gt)
> +
> +struct xe_engine_list_entry {
> +	struct igt_list_head link;
> +	struct drm_xe_engine_class_instance *hwe;
> +};
> +
> +#define MAX_TILES	2
> +static int find_suitable_engines(struct drm_xe_engine_class_instance *hwes[GEM_MAX_ENGINES],
> +				 int fd, bool many_tiles)
> +{
> +	struct xe_device *xe_dev;
> +	struct drm_xe_engine_class_instance *e;
> +	struct xe_engine_list_entry *en, *tmp;
> +	struct igt_list_head compute_engines[MAX_TILES];
> +	int gt_id;
> +	int tile_id, i, engine_count = 0, tile_count = 0;
> +
> +	xe_dev = xe_device_get(fd);
> +
> +	for (i = 0; i < MAX_TILES; i++)
> +		IGT_INIT_LIST_HEAD(&compute_engines[i]);
> +
> +	xe_for_each_gt(fd, gt_id) {
> +		xe_for_each_engine(fd, e) {
> +			if (is_compute_on_gt(e, gt_id)) {
> +				tile_id = xe_dev->gt_list->gt_list[gt_id].tile_id;
> +
> +				en = malloc(sizeof(struct xe_engine_list_entry));
> +				en->hwe = e;
> +
> +				igt_list_add_tail(&en->link, &compute_engines[tile_id]);
> +			}
> +		}
> +	}
> +
> +	for (i = 0; i < MAX_TILES; i++) {
> +		if (igt_list_empty(&compute_engines[i]))
> +			continue;
> +
> +		if (many_tiles) {
> +			en = igt_list_first_entry(&compute_engines[i], en, link);
> +			hwes[engine_count++] = en->hwe;
> +			tile_count++;
> +		} else {
> +			if (igt_list_length(&compute_engines[i]) > 1) {
> +				igt_list_for_each_entry(en, &compute_engines[i], link)
> +					hwes[engine_count++] = en->hwe;
> +				break;
> +			}
> +		}
> +	}
> +
> +	for (i = 0; i < MAX_TILES; i++) {
> +		igt_list_for_each_entry_safe(en, tmp, &compute_engines[i], link) {
> +			igt_list_del(&en->link);
> +			free(en);
> +		}
> +	}
> +
> +	if (many_tiles)
> +		igt_require_f(tile_count > 1, "Multi-tile scenario requires more tiles\n");
> +
> +	return engine_count;
> +}
> +
> +/**
> + * SUBTEST: breakpoint-many-sessions-single-tile
> + * Description:
> + *	Schedules EU workload with preinstalled breakpoint on every compute engine
> + *	available on the tile. Checks if the contexts hit breakpoint in sequence
> + *	and resumes them.
> + *
> + * SUBTEST: breakpoint-many-sessions-tiles
> + * Description:
> + *	Schedules EU workload with preinstalled breakpoint on selected compute
> + *      engines, with one per tile. Checks if each context hit breakpoint and
> + *      resumes them.
> + */
> +static void test_many_sessions_on_tiles(int fd, bool multi_tile)
> +{
> +	int n = 0, flags = SHADER_BREAKPOINT | SHADER_MIN_THREADS;
> +	struct xe_eudebug_session *s[GEM_MAX_ENGINES] = {};
> +	struct online_debug_data *data[GEM_MAX_ENGINES] = {};
> +	struct drm_xe_engine_class_instance *hwe[GEM_MAX_ENGINES] = {};
> +	struct drm_xe_eudebug_event_eu_attention *eus;
> +	uint64_t current_t, next_t, diff;
> +	int i;
> +
> +	n = find_suitable_engines(hwe, fd, multi_tile);
> +
> +	igt_require_f(n > 1, "Test requires at least two parallel compute engines!\n");
> +
> +	for (i = 0; i < n; i++) {
> +		data[i] = online_debug_data_create(hwe[i]);
> +		s[i] = xe_eudebug_session_create(fd, run_online_client, flags, data[i]);
> +
> +		xe_eudebug_debugger_add_trigger(s[i]->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +						eu_attention_debug_trigger);
> +		xe_eudebug_debugger_add_trigger(s[i]->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
> +						save_first_exception_trigger);
> +		xe_eudebug_debugger_add_trigger(s[i]->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> +						ufence_ack_trigger);
> +
> +		igt_assert_eq(xe_eudebug_debugger_attach(s[i]->d, s[i]->c), 0);
> +
> +		xe_eudebug_debugger_start_worker(s[i]->d);
> +		xe_eudebug_client_start(s[i]->c);
> +	}
> +
> +	for (i = 0; i < n; i++) {
> +		/* XXX: Sometimes racy, expects clients to execute in sequence */
> +		igt_assert(!wait_for_exception(data[i], STARTUP_TIMEOUT_MS));
> +
> +		eus = (struct drm_xe_eudebug_event_eu_attention *)data[i]->exception_event;
> +
> +		/* Delay all but the last workload to check serialization */
> +		if (i < n - 1)
> +			usleep(WORKLOAD_DELAY_US);
> +
> +		eu_ctl_resume(s[i]->d->master_fd, s[i]->d->fd,
> +			      eus->client_handle, eus->exec_queue_handle,
> +			      eus->lrc_handle, eus->bitmask, eus->bitmask_size);
> +		free(eus);
> +	}
> +
> +	for (i = 0; i < n - 1; i++) {
> +		/* Convert ns timestamps to microseconds (tv_sec is ignored here) */
> +		current_t = data[i]->exception_arrived.tv_nsec / 1000;
> +		next_t = data[i + 1]->exception_arrived.tv_nsec / 1000;
> +		diff = current_t < next_t ? next_t - current_t : current_t - next_t;
> +
> +		if (multi_tile)
> +			igt_assert_f(diff < WORKLOAD_DELAY_US,
> +				     "Expected to execute workloads concurrently. Actual delay: %lu us\n",
> +				     diff);
> +		else
> +			igt_assert_f(diff >= WORKLOAD_DELAY_US,
> +				     "Expected a serialization of workloads. Actual delay: %lu us\n",
> +				     diff);
> +	}
> +
> +	for (i = 0; i < n; i++) {
> +		xe_eudebug_client_wait_done(s[i]->c);
> +		xe_eudebug_debugger_stop_worker(s[i]->d, 1);
> +
> +		xe_eudebug_event_log_print(s[i]->d->log, true);
> +		online_session_check(s[i], flags);
> +
> +		xe_eudebug_session_destroy(s[i]);
> +		online_debug_data_destroy(data[i]);
> +	}
> +}
> +
> +static struct drm_xe_engine_class_instance *pick_compute(int fd, int gt)
> +{
> +	struct drm_xe_engine_class_instance *hwe;
> +	int count = 0;
> +
> +	xe_for_each_engine(fd, hwe)
> +		if (is_compute_on_gt(hwe, gt))
> +			count++;
> +
> +	xe_for_each_engine(fd, hwe)
> +		if (is_compute_on_gt(hwe, gt) && rand() % count-- == 0)
> +			return hwe;
> +
> +	return NULL;
> +}
> +
> +#define test_gt_render_or_compute(t, i915, __hwe) \
> +	igt_subtest_with_dynamic(t) \
> +		for (int gt = 0; (__hwe = pick_compute(i915, gt)); gt++) \
> +			igt_dynamic_f("%s%d", xe_engine_class_string(__hwe->engine_class), __hwe->engine_instance)
> +
> +igt_main
> +{
> +	struct drm_xe_engine_class_instance *hwe;
> +	bool was_enabled;
> +	int fd;
> +
> +	igt_fixture {
> +		fd = drm_open_driver(DRIVER_XE);
> +		intel_allocator_multiprocess_start();
> +		igt_srandom();
> +		was_enabled = xe_eudebug_enable(fd, true);
> +	}
> +
> +	test_gt_render_or_compute("basic-breakpoint", fd, hwe)
> +		test_basic_online(fd, hwe, SHADER_BREAKPOINT);
> +
> +	test_gt_render_or_compute("preempt-breakpoint", fd, hwe)
> +		test_preemption(fd, hwe);
> +
> +	test_gt_render_or_compute("breakpoint-not-in-debug-mode", fd, hwe)
> +		test_basic_online(fd, hwe, SHADER_BREAKPOINT | DISABLE_DEBUG_MODE);
> +
> +	test_gt_render_or_compute("stopped-thread", fd, hwe)
> +		test_basic_online(fd, hwe, SHADER_BREAKPOINT | TRIGGER_RESUME_DELAYED);
> +
> +	test_gt_render_or_compute("resume-one", fd, hwe)
> +		test_basic_online(fd, hwe, SHADER_BREAKPOINT | TRIGGER_RESUME_ONE);
> +
> +	test_gt_render_or_compute("resume-dss", fd, hwe)
> +		test_basic_online(fd, hwe, SHADER_BREAKPOINT | TRIGGER_RESUME_DSS);
> +
> +	test_gt_render_or_compute("interrupt-all", fd, hwe)
> +		test_interrupt_all(fd, hwe, SHADER_LOOP);
> +
> +	test_gt_render_or_compute("interrupt-other-debuggable", fd, hwe)
> +		test_interrupt_other(fd, hwe, SHADER_LOOP);
> +
> +	test_gt_render_or_compute("interrupt-other", fd, hwe)
> +		test_interrupt_other(fd, hwe, SHADER_LOOP | DISABLE_DEBUG_MODE);
> +
> +	test_gt_render_or_compute("interrupt-all-set-breakpoint", fd, hwe)
> +		test_interrupt_all(fd, hwe, SHADER_LOOP | TRIGGER_RESUME_SET_BP);
> +
> +	test_gt_render_or_compute("tdctl-parameters", fd, hwe)
> +		test_tdctl_parameters(fd, hwe, SHADER_LOOP);
> +
> +	test_gt_render_or_compute("reset-with-attention", fd, hwe)
> +		test_reset_with_attention_online(fd, hwe, SHADER_BREAKPOINT);
> +
> +	test_gt_render_or_compute("interrupt-reconnect", fd, hwe)
> +		test_interrupt_reconnect(fd, hwe, SHADER_LOOP | TRIGGER_RECONNECT);
> +
> +	test_gt_render_or_compute("single-step", fd, hwe)
> +		test_single_step(fd, hwe, SHADER_SINGLE_STEP | SIP_SINGLE_STEP |
> +				 TRIGGER_RESUME_PARALLEL_WALK);
> +
> +	test_gt_render_or_compute("single-step-one", fd, hwe)
> +		test_single_step(fd, hwe, SHADER_SINGLE_STEP | SIP_SINGLE_STEP |
> +				 TRIGGER_RESUME_SINGLE_WALK);
> +
> +	test_gt_render_or_compute("debugger-reopen", fd, hwe)
> +		test_debugger_reopen(fd, hwe, SHADER_N_NOOP_BREAKPOINT);
> +
> +	test_gt_render_or_compute("writes-caching-sram", fd, hwe)
> +		test_caching(fd, hwe, SHADER_CACHING_SRAM);
> +
> +	test_gt_render_or_compute("writes-caching-vram", fd, hwe)
> +		test_caching(fd, hwe, SHADER_CACHING_VRAM);
> +
> +	igt_subtest("breakpoint-many-sessions-single-tile")
> +		test_many_sessions_on_tiles(fd, false);
> +
> +	igt_subtest("breakpoint-many-sessions-tiles")
> +		test_many_sessions_on_tiles(fd, true);
> +
> +	igt_fixture {
> +		xe_eudebug_enable(fd, was_enabled);
> +
> +		intel_allocator_multiprocess_stop();
> +		drm_close_driver(fd);
> +	}
> +}
> diff --git a/tests/meson.build b/tests/meson.build
> index 35bf8ed35..f18eec7e7 100644
> --- a/tests/meson.build
> +++ b/tests/meson.build
> @@ -280,6 +280,7 @@ intel_xe_progs = [
>  	'xe_debugfs',
>  	'xe_drm_fdinfo',
>  	'xe_eudebug',
> +	'xe_eudebug_online',
>  	'xe_evict',
>  	'xe_evict_ccs',
>  	'xe_exec_atomic',


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 07/14] lib/gpgpu_shader: Add write_on_exception template
  2024-08-09 12:38 ` [PATCH i-g-t v3 07/14] lib/gpgpu_shader: Add write_on_exception template Christoph Manszewski
@ 2024-08-19 10:09   ` Grzegorzek, Dominik
  0 siblings, 0 replies; 41+ messages in thread
From: Grzegorzek, Dominik @ 2024-08-19 10:09 UTC (permalink / raw)
  To: igt-dev@lists.freedesktop.org, Manszewski, Christoph
  Cc: Patelczyk, Maciej, Hajda, Andrzej, Kempczynski, Zbigniew,
	Piatkowski, Dominik Karol, Sikora, Pawel, Kuoppala, Mika,
	Mun, Gwan-gyeong, kamil.konieczny@linux.intel.com,
	Kolanupaka Naveena

On Fri, 2024-08-09 at 14:38 +0200, Christoph Manszewski wrote:
> From: Andrzej Hajda <andrzej.hajda@intel.com>
> 
> Writing a specific value to a memory location, on an unexpected value in the
> exception register, allows reporting errors from inside a shader or siplet.
> 
> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
> ---
>  lib/gpgpu_shader.c          | 56 ++++++++++++++++++++++++++
>  lib/gpgpu_shader.h          |  2 +
>  lib/iga64_generated_codes.c | 79 ++++++++++++++++++++++++++++++++++++-
>  3 files changed, 136 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/gpgpu_shader.c b/lib/gpgpu_shader.c
> index 3b16db593..a98abc57e 100644
> --- a/lib/gpgpu_shader.c
> +++ b/lib/gpgpu_shader.c
> @@ -628,6 +628,62 @@ void gpgpu_shader__write_dword(struct gpgpu_shader *shdr, uint32_t value,
>  	", 2, y_offset, 3, value, value, value, value);
>  }
>  
> +/**
> + * gpgpu_shader__write_on_exception:
> + * @shdr: shader to be modified
> + * @value: dword to be written
> + * @y_offset: write target offset within the surface in rows
> + * @mask: mask to be applied on exception register
> + * @expected: expected value of exception register with @mask applied
> + *
> + * Check if bits specified by @mask in the exception register (cr0.1) are equal
> + * to the provided ones: cr0.1 & @mask == @expected,
> + * if yes fill dword in (row, column/dword) == (tg_id_y + @y_offset, tg_id_x).
> + */
> +void gpgpu_shader__write_on_exception(struct gpgpu_shader *shdr, uint32_t value,
> +				      uint32_t y_offset, uint32_t mask, uint32_t expected)
> +{
> +	emit_iga64_code(shdr, write_on_exception, "					\n\
> +	// Payload									\n\
> +(W)	mov (1|M0)               r5.0<1>:ud    ARG(3):ud				\n\
> +(W)	mov (1|M0)               r5.1<1>:ud    ARG(4):ud				\n\
> +(W)	mov (1|M0)               r5.2<1>:ud    ARG(5):ud				\n\
> +(W)	mov (1|M0)               r5.3<1>:ud    ARG(6):ud				\n\
> +#if GEN_VER < 2000 // prepare Media Block Write						\n\
> +	// X offset of the block in bytes := (thread group id X << ARG(0))		\n\
> +(W)	shl (1|M0)               r4.0<1>:ud    r0.1<0;1,0>:ud    ARG(0):ud		\n\
> +	// Y offset of the block in rows := thread group id Y				\n\
> +(W)	mov (1|M0)               r4.1<1>:ud    r0.6<0;1,0>:ud				\n\
> +(W)	add (1|M0)               r4.1<1>:ud    r4.1<0;1,0>:ud   ARG(1):ud		\n\
> +	// block width [0,63] representing 1 to 64 bytes				\n\
> +(W)	mov (1|M0)               r4.2<1>:ud    ARG(2):ud				\n\
> +	// FFTID := FFTID from R0 header						\n\
> +(W)	mov (1|M0)               r4.4<1>:ud    r0.5<0;1,0>:ud				\n\
> +#else // prepare Typed 2D Block Store							\n\
> +	// Load r2.0-3 with tg id X << ARG(0)						\n\
> +(W)	shl (1|M0)               r2.0<1>:ud    r0.1<0;1,0>:ud    ARG(0):ud		\n\
> +	// Load r2.4-7 with tg id Y + ARG(1):ud						\n\
> +(W)	mov (1|M0)               r2.1<1>:ud    r0.6<0;1,0>:ud				\n\
> +(W)	add (1|M0)               r2.1<1>:ud    r2.1<0;1,0>:ud    ARG(1):ud		\n\
> +	// payload setup								\n\
> +(W)	mov (16|M0)              r4.0<1>:ud    0x0:ud					\n\

Move the r4.0 clearing into the common code before the #if statement, like in [1].
For GEN_VER < 2000 we depend on a cleared r4.0 register. If it was used before by any
other shader, we may end up with broken code.
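Roughly, the suggested reordering would hoist the clear out of the #else branch so both paths start from a zeroed payload (sketch based on the quoted listing, untested):

```
	// payload setup: clear r4 up front, common to both variants
(W)	mov (16|M0)              r4.0<1>:ud    0x0:ud
#if GEN_VER < 2000 // prepare Media Block Write
	...
#else // prepare Typed 2D Block Store
	...
#endif
```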

[1] https://patchwork.freedesktop.org/patch/608230/?series=137284&rev=2

With that, it is:

Reviewed-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> +	// Store X and Y block start (160:191 and 192:223)				\n\
> +(W)	mov (2|M0)               r4.5<1>:ud    r2.0<2;2,1>:ud				\n\
> +	// Store X and Y block max_size (224:231 and 232:239)				\n\
> +(W)	mov (1|M0)               r4.7<1>:ud    ARG(2):ud				\n\
> +#endif											\n\
> +	// Check if masked exception is equal to provided value and write conditionally \n\
> +(W)      and (1|M0)              r3.0<1>:ud     cr0.1<0;1,0>:ud ARG(7):ud		\n\
> +(W)      mov (1|M0)              f0.0<1>:ud     0x0:ud					\n\
> +(W)      cmp (1|M0)     (eq)f0.0 null:ud        r3.0<0;1,0>:ud  ARG(8):ud		\n\
> +#if GEN_VER < 2000 // Media Block Write							\n\
> +(W&f0.0) send.dc1 (16|M0)        null     r4   src1_null 0    0x40A8000			\n\
> +#else // Typed 2D Block Store								\n\
> +(W&f0.0) send.tgm (16|M0)        null     r4   null:0    0    0x64000007		\n\
> +#endif											\n\
> +	", 2, y_offset, 3, value, value, value, value, mask, expected);
> +}
> +
>  /**
>   * gpgpu_shader__end_system_routine:
>   * @shdr: shader to be modified
> diff --git a/lib/gpgpu_shader.h b/lib/gpgpu_shader.h
> index e4ca0be4c..76ff4989e 100644
> --- a/lib/gpgpu_shader.h
> +++ b/lib/gpgpu_shader.h
> @@ -74,6 +74,8 @@ void gpgpu_shader__end_system_routine_step_if_eq(struct gpgpu_shader *shdr,
>  void gpgpu_shader__write_aip(struct gpgpu_shader *shdr, uint32_t y_offset);
>  void gpgpu_shader__write_dword(struct gpgpu_shader *shdr, uint32_t value,
>  			       uint32_t y_offset);
> +void gpgpu_shader__write_on_exception(struct gpgpu_shader *shdr, uint32_t dw,
> +			       uint32_t y_offset, uint32_t mask, uint32_t value);
>  void gpgpu_shader__label(struct gpgpu_shader *shdr, int label_id);
>  void gpgpu_shader__jump(struct gpgpu_shader *shdr, int label_id);
>  void gpgpu_shader__jump_neq(struct gpgpu_shader *shdr, int label_id,
> diff --git a/lib/iga64_generated_codes.c b/lib/iga64_generated_codes.c
> index fea436dee..71429d442 100644
> --- a/lib/iga64_generated_codes.c
> +++ b/lib/iga64_generated_codes.c
> @@ -3,7 +3,7 @@
>  
>  #include "gpgpu_shader.h"
>  
> -#define MD5_SUM_IGA64_ASMS 61a15534954fe7c6bd0e983fbfd54c27
> +#define MD5_SUM_IGA64_ASMS 88529cc180578939c0b8c4bb29da7db6
>  
>  struct iga64_template const iga64_code_end_system_routine_step_if_eq[] = {
>  	{ .gen_ver = 2000, .size = 44, .code = (const uint32_t []) {
> @@ -93,6 +93,83 @@ struct iga64_template const iga64_code_breakpoint_suppress[] = {
>  	}}
>  };
>  
> +struct iga64_template const iga64_code_write_on_exception[] = {
> +	{ .gen_ver = 2000, .size = 68, .code = (const uint32_t []) {
> +		0x80000061, 0x05054220, 0x00000000, 0xc0ded003,
> +		0x80000061, 0x05154220, 0x00000000, 0xc0ded004,
> +		0x80000061, 0x05254220, 0x00000000, 0xc0ded005,
> +		0x80000061, 0x05354220, 0x00000000, 0xc0ded006,
> +		0x80000069, 0x02058220, 0x02000014, 0xc0ded000,
> +		0x80000061, 0x02150220, 0x00000064, 0x00000000,
> +		0x80001940, 0x02158220, 0x02000214, 0xc0ded001,
> +		0x80100061, 0x04054220, 0x00000000, 0x00000000,
> +		0x80041a61, 0x04550220, 0x00220205, 0x00000000,
> +		0x80000061, 0x04754220, 0x00000000, 0xc0ded002,
> +		0x80000965, 0x03058220, 0x02008010, 0xc0ded007,
> +		0x80000961, 0x30014220, 0x00000000, 0x00000000,
> +		0x80001a70, 0x00018220, 0x12000304, 0xc0ded008,
> +		0x84134031, 0x00000000, 0xd00e0494, 0x04000000,
> +		0x80000001, 0x00010000, 0x20000000, 0x00000000,
> +		0x80000001, 0x00010000, 0x30000000, 0x00000000,
> +		0x80000901, 0x00010000, 0x00000000, 0x00000000,
> +	}},
> +	{ .gen_ver = 1272, .size = 64, .code = (const uint32_t []) {
> +		0x80000061, 0x05054220, 0x00000000, 0xc0ded003,
> +		0x80000061, 0x05154220, 0x00000000, 0xc0ded004,
> +		0x80000061, 0x05254220, 0x00000000, 0xc0ded005,
> +		0x80000061, 0x05354220, 0x00000000, 0xc0ded006,
> +		0x80000069, 0x04058220, 0x02000014, 0xc0ded000,
> +		0x80000061, 0x04150220, 0x00000064, 0x00000000,
> +		0x80001940, 0x04158220, 0x02000414, 0xc0ded001,
> +		0x80000061, 0x04254220, 0x00000000, 0xc0ded002,
> +		0x80000061, 0x04450220, 0x00000054, 0x00000000,
> +		0x80000965, 0x03058220, 0x02008010, 0xc0ded007,
> +		0x80000961, 0x30014220, 0x00000000, 0x00000000,
> +		0x80001a70, 0x00018220, 0x12000304, 0xc0ded008,
> +		0x84134031, 0x00000000, 0xc0000414, 0x02a00000,
> +		0x80000001, 0x00010000, 0x20000000, 0x00000000,
> +		0x80000001, 0x00010000, 0x30000000, 0x00000000,
> +		0x80000901, 0x00010000, 0x00000000, 0x00000000,
> +	}},
> +	{ .gen_ver = 1250, .size = 68, .code = (const uint32_t []) {
> +		0x80000061, 0x05054220, 0x00000000, 0xc0ded003,
> +		0x80000061, 0x05254220, 0x00000000, 0xc0ded004,
> +		0x80000061, 0x05454220, 0x00000000, 0xc0ded005,
> +		0x80000061, 0x05654220, 0x00000000, 0xc0ded006,
> +		0x80000069, 0x04058220, 0x02000024, 0xc0ded000,
> +		0x80000061, 0x04250220, 0x000000c4, 0x00000000,
> +		0x80001940, 0x04258220, 0x02000424, 0xc0ded001,
> +		0x80000061, 0x04454220, 0x00000000, 0xc0ded002,
> +		0x80000061, 0x04850220, 0x000000a4, 0x00000000,
> +		0x80000965, 0x03058220, 0x02008020, 0xc0ded007,
> +		0x80000961, 0x30014220, 0x00000000, 0x00000000,
> +		0x80001a70, 0x00018220, 0x12000304, 0xc0ded008,
> +		0x80001a01, 0x00010000, 0x00000000, 0x00000000,
> +		0x81044031, 0x00000000, 0xc0000414, 0x02a00000,
> +		0x80000001, 0x00010000, 0x20000000, 0x00000000,
> +		0x80000001, 0x00010000, 0x30000000, 0x00000000,
> +		0x80000901, 0x00010000, 0x00000000, 0x00000000,
> +	}},
> +	{ .gen_ver = 0, .size = 64, .code = (const uint32_t []) {
> +		0x80000061, 0x05054220, 0x00000000, 0xc0ded003,
> +		0x80000061, 0x05254220, 0x00000000, 0xc0ded004,
> +		0x80000061, 0x05454220, 0x00000000, 0xc0ded005,
> +		0x80000061, 0x05654220, 0x00000000, 0xc0ded006,
> +		0x80000069, 0x04058220, 0x02000024, 0xc0ded000,
> +		0x80000061, 0x04250220, 0x000000c4, 0x00000000,
> +		0x80000140, 0x04258220, 0x02000424, 0xc0ded001,
> +		0x80000061, 0x04454220, 0x00000000, 0xc0ded002,
> +		0x80000061, 0x04850220, 0x000000a4, 0x00000000,
> +		0x80000165, 0x03058220, 0x02008020, 0xc0ded007,
> +		0x80000161, 0x30014220, 0x00000000, 0x00000000,
> +		0x80000270, 0x00018220, 0x12000304, 0xc0ded008,
> +		0x8104a031, 0x00000000, 0xc0000414, 0x02a00000,
> +		0x80000001, 0x00010000, 0x20000000, 0x00000000,
> +		0x80000001, 0x00010000, 0x30000000, 0x00000000,
> +		0x80000101, 0x00010000, 0x00000000, 0x00000000,
> +	}}
> +};
> +
>  struct iga64_template const iga64_code_media_block_write[] = {
>  	{ .gen_ver = 2000, .size = 56, .code = (const uint32_t []) {
>  		0x80000061, 0x05054220, 0x00000000, 0xc0ded003,



* Re: [PATCH i-g-t v3 09/14] lib/intel_batchbuffer: Add helper to get pointer at specified offset
  2024-08-09 12:38 ` [PATCH i-g-t v3 09/14] lib/intel_batchbuffer: Add helper to get pointer at specified offset Christoph Manszewski
@ 2024-08-19 10:10   ` Grzegorzek, Dominik
  0 siblings, 0 replies; 41+ messages in thread
From: Grzegorzek, Dominik @ 2024-08-19 10:10 UTC (permalink / raw)
  To: igt-dev@lists.freedesktop.org, Manszewski, Christoph
  Cc: Patelczyk, Maciej, Hajda, Andrzej, Kempczynski, Zbigniew,
	Piatkowski, Dominik Karol, Sikora, Pawel, Kuoppala, Mika,
	Mun, Gwan-gyeong, kamil.konieczny@linux.intel.com,
	Kolanupaka Naveena

On Fri, 2024-08-09 at 14:38 +0200, Christoph Manszewski wrote:
> From: Andrzej Hajda <andrzej.hajda@intel.com>
> 
>> The helper will be used to access data placed in the batchbuffer.
> 
> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
> Cc: Mika Kuoppala <mika.kuoppala@intel.com>
> Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Acked-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> ---
>  lib/intel_batchbuffer.h | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/lib/intel_batchbuffer.h b/lib/intel_batchbuffer.h
> index cb32206e5..7caf6c518 100644
> --- a/lib/intel_batchbuffer.h
> +++ b/lib/intel_batchbuffer.h
> @@ -353,6 +353,11 @@ static inline uint32_t intel_bb_offset(struct intel_bb *ibb)
>  	return (uint32_t) ((uint8_t *) ibb->ptr - (uint8_t *) ibb->batch);
>  }
>  
> +static inline void *intel_bb_ptr_get(struct intel_bb *ibb, uint32_t offset)
> +{
> +	return ((uint8_t *) ibb->batch + offset);
> +}
> +
>  static inline void intel_bb_ptr_set(struct intel_bb *ibb, uint32_t offset)
>  {
>  	ibb->ptr = (void *) ((uint8_t *) ibb->batch + offset);



* Re: [PATCH i-g-t v3 10/14] lib/gpgpu_shader: Allow enabling illegal opcode exceptions in shader
  2024-08-09 12:38 ` [PATCH i-g-t v3 10/14] lib/gpgpu_shader: Allow enabling illegal opcode exceptions in shader Christoph Manszewski
@ 2024-08-19 10:12   ` Grzegorzek, Dominik
  0 siblings, 0 replies; 41+ messages in thread
From: Grzegorzek, Dominik @ 2024-08-19 10:12 UTC (permalink / raw)
  To: igt-dev@lists.freedesktop.org, Manszewski, Christoph
  Cc: Patelczyk, Maciej, Hajda, Andrzej, Kempczynski, Zbigniew,
	Piatkowski, Dominik Karol, Sikora, Pawel, Kuoppala, Mika,
	Mun, Gwan-gyeong, kamil.konieczny@linux.intel.com,
	Kolanupaka Naveena

On Fri, 2024-08-09 at 14:38 +0200, Christoph Manszewski wrote:
> From: Andrzej Hajda <andrzej.hajda@intel.com>
> 
>> Illegal opcode exceptions can be enabled in the interface descriptor data
>> passed to the COMPUTE_WALKER instruction.
> 
> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
> Cc: Mika Kuoppala <mika.kuoppala@intel.com>
> Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>

Looks good to me:

Reviewed-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> ---
>  lib/gpgpu_shader.c | 4 ++++
>  lib/gpgpu_shader.h | 1 +
>  2 files changed, 5 insertions(+)
> 
> diff --git a/lib/gpgpu_shader.c b/lib/gpgpu_shader.c
> index 40161c52b..e53c61553 100644
> --- a/lib/gpgpu_shader.c
> +++ b/lib/gpgpu_shader.c
> @@ -103,6 +103,7 @@ __xelp_gpgpu_execfunc(struct intel_bb *ibb,
>  		      struct gpgpu_shader *sip,
>  		      uint64_t ring, bool explicit_engine)
>  {
> +	struct gen8_interface_descriptor_data *idd;
>  	uint32_t interface_descriptor, sip_offset;
>  	uint64_t engine;
>  
> @@ -113,6 +114,8 @@ __xelp_gpgpu_execfunc(struct intel_bb *ibb,
>  	interface_descriptor = gen8_fill_interface_descriptor(ibb, target,
>  							      shdr->instr,
>  							      4 * shdr->size);
> +	idd = intel_bb_ptr_get(ibb, interface_descriptor);
> +	idd->desc2.illegal_opcode_exception_enable = shdr->illegal_opcode_exception_enable;
>  
>  	if (sip && sip->size)
>  		sip_offset = fill_sip(ibb, sip->instr, 4 * sip->size);
> @@ -163,6 +166,7 @@ __xehp_gpgpu_execfunc(struct intel_bb *ibb,
>  
>  	xehp_fill_interface_descriptor(ibb, target, shdr->instr,
>  				       4 * shdr->size, &idd);
> +	idd.desc2.illegal_opcode_exception_enable = shdr->illegal_opcode_exception_enable;
>  
>  	if (sip && sip->size)
>  		sip_offset = fill_sip(ibb, sip->instr, 4 * sip->size);
> diff --git a/lib/gpgpu_shader.h b/lib/gpgpu_shader.h
> index 0bbeae66f..26a117a0b 100644
> --- a/lib/gpgpu_shader.h
> +++ b/lib/gpgpu_shader.h
> @@ -22,6 +22,7 @@ struct gpgpu_shader {
>  		uint32_t (*instr)[4];
>  	};
>  	struct igt_map *labels;
> +	bool illegal_opcode_exception_enable;
>  };
>  
>  struct iga64_template {



* Re: [PATCH i-g-t v3 13/14] tests/xe_eudebug_online: Debug client which runs workloads on EU
  2024-08-09 14:38   ` Kamil Konieczny
@ 2024-08-19 15:31     ` Manszewski, Christoph
  0 siblings, 0 replies; 41+ messages in thread
From: Manszewski, Christoph @ 2024-08-19 15:31 UTC (permalink / raw)
  To: Kamil Konieczny, igt-dev, Zbigniew Kempczyński,
	Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun

Hi Kamil,

On 9.08.2024 16:38, Kamil Konieczny wrote:
> Hi Christoph,
> On 2024-08-09 at 14:38:12 +0200, Christoph Manszewski wrote:
>> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>>
>> For typical debugging under gdb one can specify two main use cases:
>> accessing and manipulating resources created by the application and
>> manipulating thread execution (interrupting and setting breakpoints).
>>
>> This test adds coverage for the latter by checking that:
>> - EU workloads that hit an instruction with the breakpoint bit set will
>>    halt execution and the debugger will report this via attention events,
>> - the debugger is able to interrupt workload execution by issuing a
>>    'interrupt_all' ioctl call,
>> - the debugger is able to resume selected workloads that are stopped.
>>
>> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
>> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
>> Signed-off-by: Dominik Karol Piątkowski <dominik.karol.piatkowski@intel.com>
>> Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
>> Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
>> Signed-off-by: Kolanupaka Naveena <kolanupaka.naveena@intel.com>
>> ---
>>   tests/intel/xe_eudebug_online.c | 2203 +++++++++++++++++++++++++++++++
>>   tests/meson.build               |    1 +
>>   2 files changed, 2204 insertions(+)
>>   create mode 100644 tests/intel/xe_eudebug_online.c
> [...cut...]
> 
> Please use checkpatch.pl from linux kernel, watch out for
> whitespace errors, unbalanced braces '{/}' in if...else
> and few other problems.
> 
> A few useful options for checkpatch.pl are in CONTRIBUTE.md

Thanks, I will address these issues in the next revision.

Christoph

> 
> Regards,
> Kamil
> 


* Re: [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework
  2024-08-19  8:30   ` Grzegorzek, Dominik
@ 2024-08-19 15:33     ` Manszewski, Christoph
  0 siblings, 0 replies; 41+ messages in thread
From: Manszewski, Christoph @ 2024-08-19 15:33 UTC (permalink / raw)
  To: Grzegorzek, Dominik, igt-dev@lists.freedesktop.org
  Cc: Patelczyk, Maciej, Hajda, Andrzej, karolina.stolarek@intel.com,
	Kempczynski, Zbigniew, Piatkowski, Dominik Karol, Sikora, Pawel,
	Kuoppala, Mika, Mun, Gwan-gyeong, kamil.konieczny@linux.intel.com,
	mika.kuaoppala@linux.intel.com, Kolanupaka Naveena

Hi Dominik,

On 19.08.2024 10:30, Grzegorzek, Dominik wrote:
> On Fri, 2024-08-09 at 14:38 +0200, Christoph Manszewski wrote:
>> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>>
>> Introduce a library which simplifies testing of the eu debug capability.
>> The library provides event log helpers together with an asynchronous
>> abstraction for the client process and the debugger itself.
>>
>> xe_eudebug_client creates its own process with the user's work function,
>> and gives mechanisms to synchronize the beginning of execution and event
>> logging.
>>
>> xe_eudebug_debugger allows attaching to a given process, provides an
>> asynchronous thread for event reading, and introduces triggers -
>> a callback mechanism invoked every time a subscribed event is read.
>>
>> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>> Signed-off-by: Mika Kuoppala <mika.kuaoppala@linux.intel.com>
>> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
>> Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
>> Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
>> Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
>> ---
>>   lib/meson.build     |    1 +
>>   lib/xe/xe_eudebug.c | 2192 +++++++++++++++++++++++++++++++++++++++++++
>>   lib/xe/xe_eudebug.h |  206 ++++
>>   3 files changed, 2399 insertions(+)
>>   create mode 100644 lib/xe/xe_eudebug.c
>>   create mode 100644 lib/xe/xe_eudebug.h
>>
>> diff --git a/lib/meson.build b/lib/meson.build
>> index f711e60a7..969ca4101 100644
>> --- a/lib/meson.build
>> +++ b/lib/meson.build
>> @@ -111,6 +111,7 @@ lib_sources = [
>>   	'igt_msm.c',
>>   	'igt_dsc.c',
>>   	'xe/xe_gt.c',
>> +	'xe/xe_eudebug.c',
>>   	'xe/xe_ioctl.c',
>>   	'xe/xe_mmio.c',
>>   	'xe/xe_query.c',
>> diff --git a/lib/xe/xe_eudebug.c b/lib/xe/xe_eudebug.c
>> new file mode 100644
>> index 000000000..4eac87476
>> --- /dev/null
>> +++ b/lib/xe/xe_eudebug.c
>> @@ -0,0 +1,2192 @@
>> +// SPDX-License-Identifier: MIT
>> +/*
>> + * Copyright © 2023 Intel Corporation
>> + */
>> +
>> +#include <fcntl.h>
>> +#include <poll.h>
>> +#include <signal.h>
>> +#include <sys/select.h>
>> +#include <sys/stat.h>
>> +#include <sys/types.h>
>> +#include <sys/wait.h>
>> +
>> +#include "igt.h"
>> +#include "igt_sysfs.h"
>> +#include "intel_pat.h"
>> +#include "xe_eudebug.h"
>> +#include "xe_ioctl.h"
>> +
>> +struct event_trigger {
>> +	xe_eudebug_trigger_fn fn;
>> +	int type;
>> +	struct igt_list_head link;
>> +};
>> +
>> +struct seqno_list_entry {
>> +	struct igt_list_head link;
>> +	uint64_t seqno;
>> +};
>> +
>> +struct match_dto {
>> +	struct drm_xe_eudebug_event *target;
>> +	struct igt_list_head *seqno_list;
>> +	uint64_t client_handle;
>> +	uint32_t filter;
>> +
>> +	/* store latest 'EVENT_VM_BIND' seqno */
>> +	uint64_t *bind_seqno;
>> +	/* latest vm_bind_op seqno matching bind_seqno */
>> +	uint64_t *bind_op_seqno;
>> +};
>> +
>> +#define CLIENT_PID  1
>> +#define CLIENT_RUN  2
>> +#define CLIENT_FINI 3
>> +#define CLIENT_STOP 4
>> +#define CLIENT_STAGE 5
>> +#define DEBUGGER_STAGE 6
>> +
>> +#define DEBUGGER_WORKER_INACTIVE  0
>> +#define DEBUGGER_WORKER_ACTIVE  1
>> +#define DEBUGGER_WORKER_QUITTING 2
>> +
>> +static const char *type_to_str(unsigned int type)
>> +{
>> +	switch (type) {
>> +	case DRM_XE_EUDEBUG_EVENT_NONE:
>> +		return "none";
>> +	case DRM_XE_EUDEBUG_EVENT_READ:
>> +		return "read";
>> +	case DRM_XE_EUDEBUG_EVENT_OPEN:
>> +		return "client";
>> +	case DRM_XE_EUDEBUG_EVENT_VM:
>> +		return "vm";
>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE:
>> +		return "exec_queue";
>> +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION:
>> +		return "attention";
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND:
>> +		return "vm_bind";
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP:
>> +		return "vm_bind_op";
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE:
>> +		return "vm_bind_ufence";
>> +	case DRM_XE_EUDEBUG_EVENT_METADATA:
>> +		return "metadata";
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA:
>> +		return "vm_bind_op_metadata";
>> +	}
>> +
>> +	return "UNKNOWN";
>> +}
>> +
>> +static const char *event_type_to_str(struct drm_xe_eudebug_event *e, char *buf)
>> +{
>> +	sprintf(buf, "%s(%d)", type_to_str(e->type), e->type);
>> +
>> +	return buf;
>> +}
>> +
>> +static const char *flags_to_str(unsigned int flags)
>> +{
>> +	if (flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
>> +		if (flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK)
>> +			return "create|ack";
>> +		else
>> +			return "create";
>> +	}
>> +	if (flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
>> +		return "destroy";
>> +
>> +	if (flags & DRM_XE_EUDEBUG_EVENT_STATE_CHANGE)
>> +		return "state-change";
>> +
>> +	igt_assert(!(flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK));
>> +
>> +	return "flags unknown";
>> +}
>> +
>> +static const char *event_members_to_str(struct drm_xe_eudebug_event *e, char *b)
>> +{
>> +	switch (e->type) {
>> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
>> +		struct drm_xe_eudebug_event_client *ec = (struct drm_xe_eudebug_event_client *)e;
>> +
>> +		sprintf(b, "handle=%llu", ec->client_handle);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM: {
>> +		struct drm_xe_eudebug_event_vm *evm = (struct drm_xe_eudebug_event_vm *)e;
>> +
>> +		sprintf(b, "client_handle=%llu, handle=%llu",
>> +			evm->client_handle, evm->vm_handle);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>> +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
>> +
>> +		sprintf(b, "client_handle=%llu, vm_handle=%llu, "
>> +			   "exec_queue_handle=%llu, engine_class=%d, exec_queue_width=%d",
>> +			ee->client_handle, ee->vm_handle,
>> +			ee->exec_queue_handle, ee->engine_class, ee->width);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION: {
>> +		struct drm_xe_eudebug_event_eu_attention *ea = (void *)e;
>> +
>> +		sprintf(b, "client_handle=%llu, exec_queue_handle=%llu, "
>> +			   "lrc_handle=%llu, bitmask_size=%d",
>> +			ea->client_handle, ea->exec_queue_handle,
>> +			ea->lrc_handle, ea->bitmask_size);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>> +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
>> +
>> +		sprintf(b, "client_handle=%llu, vm_handle=%llu, flags=0x%x, num_binds=%u",
>> +			evmb->client_handle, evmb->vm_handle, evmb->flags, evmb->num_binds);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>> +		struct drm_xe_eudebug_event_vm_bind_op *op = (void *)e;
>> +
>> +		sprintf(b, "vm_bind_ref_seqno=%lld, addr=%016llx, range=%llu num_extensions=%llu",
>> +			op->vm_bind_ref_seqno, op->addr, op->range, op->num_extensions);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
>> +		struct drm_xe_eudebug_event_vm_bind_ufence *f = (void *)e;
>> +
>> +		sprintf(b, "vm_bind_ref_seqno=%lld", f->vm_bind_ref_seqno);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
>> +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
>> +
>> +		sprintf(b, "client_handle=%llu, metadata_handle=%llu, type=%llu, len=%llu",
>> +			em->client_handle, em->metadata_handle, em->type, em->len);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *op = (void *)e;
>> +
>> +		sprintf(b, "vm_bind_op_ref_seqno=%lld, metadata_handle=%llu, metadata_cookie=%llu",
>> +			op->vm_bind_op_ref_seqno, op->metadata_handle, op->metadata_cookie);
>> +		break;
>> +	}
>> +	default:
>> +		strcpy(b, "<...>");
>> +	}
>> +
>> +	return b;
>> +}
>> +
>> +/**
>> + * xe_eudebug_event_to_str:
>> + * @e: pointer to event
>> + * @buf: target to write string representation of @e
>> + * @len: size of target buffer @buf
>> + *
>> + * Creates string representation for given event.
>> + *
>> + * Returns: the written input buffer pointed by @buf.
>> + */
>> +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len)
>> +{
>> +	char a[256];
>> +	char b[256];
>> +
>> +	snprintf(buf, len, "(%llu) %15s:%s: %s",
>> +		 e->seqno,
>> +		 event_type_to_str(e, a),
>> +		 flags_to_str(e->flags),
>> +		 event_members_to_str(e, b));
>> +
>> +	return buf;
>> +}
>> +
>> +static void catch_child_failure(void)
>> +{
>> +	pid_t pid;
>> +	int status;
>> +
>> +	pid = waitpid(-1, &status, WNOHANG);
>> +
>> +	if (pid == 0 || pid == -1)
>> +		return;
>> +
>> +	if (!WIFEXITED(status))
>> +		return;
>> +
>> +	igt_assert_f(WEXITSTATUS(status) == 0, "Client failed!\n");
>> +}
>> +
>> +static int safe_pipe_read(int pipe[2], void *buf, int nbytes, int timeout_ms)
>> +{
>> +	int ret;
>> +	int t = 0;
>> +	struct pollfd fd = {
>> +		.fd = pipe[0],
>> +		.events = POLLIN,
>> +		.revents = 0
>> +	};
>> +
>> +	/* When child fails we may get stuck forever. Check whether
>> +	 * the child process ended with an error.
>> +	 */
>> +	do {
>> +		const int interval_ms = 1000;
>> +
>> +		ret = poll(&fd, 1, interval_ms);
>> +
>> +		if (!ret) {
>> +			catch_child_failure();
>> +			t += interval_ms;
>> +		}
>> +	} while (!ret && t < timeout_ms);
>> +
>> +	if (ret > 0)
>> +		return read(pipe[0], buf, nbytes);
>> +
>> +	return 0;
>> +}
>> +
>> +static uint64_t pipe_read(int pipe[2], int timeout_ms)
>> +{
>> +	uint64_t in;
>> +	uint64_t ret;
>> +
>> +	ret = safe_pipe_read(pipe, &in, sizeof(in), timeout_ms);
>> +	igt_assert(ret == sizeof(in));
>> +
>> +	return in;
>> +}
>> +
>> +static void pipe_signal(int pipe[2], uint64_t token)
>> +{
>> +	igt_assert(write(pipe[1], &token, sizeof(token)) == sizeof(token));
>> +}
>> +
>> +static void pipe_close(int pipe[2])
>> +{
>> +	if (pipe[0] != -1)
>> +		close(pipe[0]);
>> +
>> +	if (pipe[1] != -1)
>> +		close(pipe[1]);
>> +}
>> +
>> +static uint64_t __wait_token(int p[2], const uint64_t token, int timeout_ms)
>> +{
>> +	uint64_t in;
>> +
>> +	in = pipe_read(p, timeout_ms);
>> +
>> +	igt_assert_eq(in, token);
>> +
>> +	return pipe_read(p, timeout_ms);
>> +}
>> +
>> +static uint64_t client_wait_token(struct xe_eudebug_client *c,
>> +				 const uint64_t token)
>> +{
>> +	return __wait_token(c->p_in, token, c->timeout_ms);
>> +}
>> +
>> +static uint64_t wait_from_client(struct xe_eudebug_client *c,
>> +				 const uint64_t token)
>> +{
>> +	return __wait_token(c->p_out, token, c->timeout_ms);
>> +}
>> +
>> +static void token_signal(int p[2], const uint64_t token, const uint64_t value)
>> +{
>> +	pipe_signal(p, token);
>> +	pipe_signal(p, value);
>> +}
>> +
>> +static void client_signal(struct xe_eudebug_client *c,
>> +			  const uint64_t token,
>> +			  const uint64_t value)
>> +{
>> +	token_signal(c->p_out, token, value);
>> +}
>> +
>> +static int __xe_eudebug_connect(int fd, pid_t pid, uint32_t flags, uint64_t events)
>> +{
>> +	struct drm_xe_eudebug_connect param = {
>> +		.pid = pid,
>> +		.flags = flags,
>> +	};
>> +	int debugfd;
>> +
>> +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
>> +
>> +	if (debugfd < 0)
>> +		return -errno;
>> +
>> +	return debugfd;
>> +}
>> +
>> +static void event_log_write_to_fd(struct xe_eudebug_event_log *l, int fd)
>> +{
>> +	igt_assert_eq(write(fd, &l->head, sizeof(l->head)),
>> +		      sizeof(l->head));
>> +
>> +	igt_assert_eq(write(fd, l->log, l->head), l->head);
>> +}
>> +
>> +static void read_all(int fd, void *buf, size_t nbytes)
>> +{
>> +	ssize_t remaining_size = nbytes;
>> +	ssize_t current_size = 0;
>> +	ssize_t read_size = 0;
>> +
>> +	do {
>> +		read_size = read(fd, buf + current_size, remaining_size);
>> +		igt_assert_f(read_size >= 0, "read failed: %s\n", strerror(errno));
>> +
>> +		current_size += read_size;
>> +		remaining_size -= read_size;
>> +	} while (remaining_size > 0 && read_size > 0);
>> +
>> +	igt_assert_eq(current_size, nbytes);
>> +}
>> +
>> +static void event_log_read_from_fd(struct xe_eudebug_event_log *l, int fd)
>> +{
>> +	read_all(fd, &l->head, sizeof(l->head));
>> +	igt_assert_lt(l->head, l->max_size);
>> +
>> +	read_all(fd, l->log, l->head);
>> +}
>> +
>> +typedef int (*cmp_fn_t)(struct drm_xe_eudebug_event *, void *);
>> +
>> +static struct drm_xe_eudebug_event *
>> +event_cmp(struct xe_eudebug_event_log *l,
>> +	  struct drm_xe_eudebug_event *current,
>> +	  cmp_fn_t match,
>> +	  void *data)
>> +{
>> +	struct drm_xe_eudebug_event *e = current;
>> +
>> +	xe_eudebug_for_each_event(e, l) {
>> +		if (match(e, data))
>> +			return e;
>> +	}
>> +
>> +	return NULL;
>> +}
>> +
>> +static int match_type_and_flags(struct drm_xe_eudebug_event *a, void *data)
>> +{
>> +	struct drm_xe_eudebug_event *b = data;
>> +
>> +	if (a->type == b->type &&
>> +	    a->flags == b->flags)
>> +		return 1;
>> +
>> +	return 0;
>> +}
>> +
>> +static int match_fields(struct drm_xe_eudebug_event *a, void *data)
>> +{
>> +	struct drm_xe_eudebug_event *b = data;
>> +	int ret = 0;
>> +
>> +	ret = match_type_and_flags(a, data);
>> +	if (!ret)
>> +		return ret;
>> +
>> +	ret = 0;
>> +
>> +	switch (a->type) {
>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>> +		struct drm_xe_eudebug_event_exec_queue *ae = (void *)a;
>> +		struct drm_xe_eudebug_event_exec_queue *be = (void *)b;
>> +
>> +		if (ae->engine_class == be->engine_class && ae->width == be->width)
>> +			ret = 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>> +		struct drm_xe_eudebug_event_vm_bind *ea = (void *)a;
>> +		struct drm_xe_eudebug_event_vm_bind *eb = (void *)b;
>> +
>> +		if (ea->num_binds == eb->num_binds)
>> +			ret = 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>> +		struct drm_xe_eudebug_event_vm_bind_op *ea = (void *)a;
>> +		struct drm_xe_eudebug_event_vm_bind_op *eb = (void *)b;
>> +
>> +		if (ea->addr == eb->addr && ea->range == eb->range &&
>> +		    ea->num_extensions == eb->num_extensions)
>> +			ret = 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *ea = (void *)a;
>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eb = (void *)b;
>> +
>> +		if (ea->metadata_handle == eb->metadata_handle &&
>> +		    ea->metadata_cookie == eb->metadata_cookie)
>> +			ret = 1;
>> +		break;
>> +	}
>> +
>> +	default:
>> +		ret = 1;
>> +		break;
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +static int match_client_handle(struct drm_xe_eudebug_event *e, void *data)
>> +{
>> +	struct match_dto *md = (void *)data;
>> +	uint64_t *bind_seqno = md->bind_seqno;
>> +	uint64_t *bind_op_seqno = md->bind_op_seqno;
>> +	uint64_t h = md->client_handle;
>> +
>> +	if (XE_EUDEBUG_EVENT_IS_FILTERED(e->type, md->filter))
>> +		return 0;
>> +
>> +	switch (e->type) {
>> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
>> +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
>> +
>> +		if (client->client_handle == h)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM: {
>> +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
>> +
>> +		if (vm->client_handle == h)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>> +		struct drm_xe_eudebug_event_exec_queue *ee = (struct drm_xe_eudebug_event_exec_queue *)e;
>> +
>> +		if (ee->client_handle == h)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>> +		struct drm_xe_eudebug_event_vm_bind *evmb = (struct drm_xe_eudebug_event_vm_bind *)e;
>> +
>> +		if (evmb->client_handle == h) {
>> +			*bind_seqno = evmb->base.seqno;
>> +			return 1;
>> +		}
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>> +		struct drm_xe_eudebug_event_vm_bind_op *eo = (struct drm_xe_eudebug_event_vm_bind_op *)e;
>> +
>> +		if (eo->vm_bind_ref_seqno == *bind_seqno) {
>> +			*bind_op_seqno = eo->base.seqno;
>> +			return 1;
>> +		}
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
>> +		struct drm_xe_eudebug_event_vm_bind_ufence *ef = (struct drm_xe_eudebug_event_vm_bind_ufence *)e;
>> +
>> +		if (ef->vm_bind_ref_seqno == *bind_seqno)
>> +			return 1;
>> +
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
>> +		struct drm_xe_eudebug_event_metadata *em = (struct drm_xe_eudebug_event_metadata *)e;
>> +
>> +		if (em->client_handle == h)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eo = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)e;
>> +
>> +		if (eo->vm_bind_op_ref_seqno == *bind_op_seqno)
>> +			return 1;
>> +		break;
>> +	}
>> +	default:
>> +		break;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int match_opposite_resource(struct drm_xe_eudebug_event *e, void *data)
>> +{
>> +	struct drm_xe_eudebug_event *d = (void *)data;
>> +	int ret;
>> +
>> +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
>> +	d->flags &= ~(DRM_XE_EUDEBUG_EVENT_NEED_ACK);
>> +	ret = match_type_and_flags(e, data);
>> +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
>> +
>> +	if (!ret)
>> +		return 0;
>> +
>> +	switch (e->type) {
>> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
>> +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
>> +		struct drm_xe_eudebug_event_client *filter = (struct drm_xe_eudebug_event_client *)data;
>> +
>> +		if (client->client_handle == filter->client_handle)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM: {
>> +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
>> +		struct drm_xe_eudebug_event_vm *filter = (struct drm_xe_eudebug_event_vm *)data;
>> +
>> +		if (vm->vm_handle == filter->vm_handle)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>> +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
>> +		struct drm_xe_eudebug_event_exec_queue *filter = (struct drm_xe_eudebug_event_exec_queue *)data;
>> +
>> +		if (ee->exec_queue_handle == filter->exec_queue_handle)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>> +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
>> +		struct drm_xe_eudebug_event_vm_bind *filter = (struct drm_xe_eudebug_event_vm_bind *)data;
>> +
>> +		if (evmb->vm_handle == filter->vm_handle &&
>> +		    evmb->num_binds == filter->num_binds)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>> +		struct drm_xe_eudebug_event_vm_bind_op *avmb = (void *)e;
>> +		struct drm_xe_eudebug_event_vm_bind_op *filter = (struct drm_xe_eudebug_event_vm_bind_op *)data;
>> +
>> +		if (avmb->addr == filter->addr &&
>> +		    avmb->range == filter->range)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
>> +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
>> +		struct drm_xe_eudebug_event_metadata *filter = (struct drm_xe_eudebug_event_metadata *)data;
>> +
>> +		if (em->metadata_handle == filter->metadata_handle)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *avmb = (void *)e;
>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *filter = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)data;
>> +
>> +		if (avmb->metadata_handle == filter->metadata_handle &&
>> +		    avmb->metadata_cookie == filter->metadata_cookie)
>> +			return 1;
>> +		break;
>> +	}
>> +
>> +	default:
>> +		break;
>> +	}
>> +	return 0;
>> +}
>> +
>> +static int match_full(struct drm_xe_eudebug_event *e, void *data)
>> +{
>> +	struct seqno_list_entry *sl;
>> +	struct match_dto *md = (void *)data;
>> +	int ret = 0;
>> +
>> +	ret = match_client_handle(e, md);
>> +	if (!ret)
>> +		return 0;
>> +
>> +	ret = match_fields(e, md->target);
>> +	if (!ret)
>> +		return 0;
>> +
>> +	igt_list_for_each_entry(sl, md->seqno_list, link) {
>> +		if (sl->seqno == e->seqno)
>> +			return 0;
>> +	}
>> +
>> +	return 1;
>> +}
>> +
>> +static struct drm_xe_eudebug_event *
>> +event_type_match(struct xe_eudebug_event_log *l,
>> +		 struct drm_xe_eudebug_event *target,
>> +		 struct drm_xe_eudebug_event *current)
>> +{
>> +	return event_cmp(l, current, match_type_and_flags, target);
>> +}
>> +
>> +static struct drm_xe_eudebug_event *
>> +client_match(struct xe_eudebug_event_log *l,
>> +	     uint64_t client_handle,
>> +	     struct drm_xe_eudebug_event *current,
>> +	     uint32_t filter,
>> +	     uint64_t *bind_seqno,
>> +	     uint64_t *bind_op_seqno)
>> +{
>> +	struct match_dto md = {
>> +		.client_handle = client_handle,
>> +		.filter = filter,
>> +		.bind_seqno = bind_seqno,
>> +		.bind_op_seqno = bind_op_seqno,
>> +	};
>> +
>> +	return event_cmp(l, current, match_client_handle, &md);
>> +}
>> +
>> +static struct drm_xe_eudebug_event *
>> +opposite_event_match(struct xe_eudebug_event_log *l,
>> +		    struct drm_xe_eudebug_event *target,
>> +		    struct drm_xe_eudebug_event *current)
>> +{
>> +	return event_cmp(l, current, match_opposite_resource, target);
>> +}
>> +
>> +static struct drm_xe_eudebug_event *
>> +event_match(struct xe_eudebug_event_log *l,
>> +	    struct drm_xe_eudebug_event *target,
>> +	    uint64_t client_handle,
>> +	    struct igt_list_head *seqno_list,
>> +	    uint64_t *bind_seqno,
>> +	    uint64_t *bind_op_seqno)
>> +{
>> +	struct match_dto md = {
>> +		.target = target,
>> +		.client_handle = client_handle,
>> +		.seqno_list = seqno_list,
>> +		.bind_seqno = bind_seqno,
>> +		.bind_op_seqno = bind_op_seqno,
>> +	};
>> +
>> +	return event_cmp(l, NULL, match_full, &md);
>> +}
>> +
>> +static void compare_client(struct xe_eudebug_event_log *c, struct drm_xe_eudebug_event *_ce,
>> +			   struct xe_eudebug_event_log *d, struct drm_xe_eudebug_event *_de,
>> +			   uint32_t filter)
>> +{
>> +	struct drm_xe_eudebug_event_client *ce = (void *)_ce;
>> +	struct drm_xe_eudebug_event_client *de = (void *)_de;
>> +	uint64_t cbs = 0, dbs = 0, cbso = 0, dbso = 0;
>> +	struct igt_list_head matched_seqno_list;
>> +	struct drm_xe_eudebug_event *hc, *hd;
>> +	struct seqno_list_entry *entry, *tmp;
>> +
>> +	igt_assert(ce);
>> +	igt_assert(de);
>> +
>> +	igt_debug("client: %llu -> %llu\n", ce->client_handle, de->client_handle);
>> +
>> +	hc = NULL;
>> +	hd = NULL;
>> +	IGT_INIT_LIST_HEAD(&matched_seqno_list);
>> +
>> +	do {
>> +		hc = client_match(c, ce->client_handle, hc, filter, &cbs, &cbso);
>> +		if (!hc)
>> +			break;
>> +
>> +		hd = event_match(d, hc, de->client_handle, &matched_seqno_list, &dbs, &dbso);
>> +
>> +		igt_assert_f(hd, "%s (%llu): no matching event type %u found for client %llu\n",
>> +			     c->name,
>> +			     hc->seqno,
>> +			     hc->type,
>> +			     ce->client_handle);
>> +
>> +		igt_debug("comparing %s %llu vs %s %llu\n",
>> +			  c->name, hc->seqno, d->name, hd->seqno);
>> +
>> +		/*
>> +		 * Store the seqno of the event that was matched above,
>> +		 * inside 'matched_seqno_list', to avoid it getting matched
>> +		 * by subsequent 'event_match' calls.
>> +		 */
>> +		entry = malloc(sizeof(*entry));
>> +		entry->seqno = hd->seqno;
>> +		igt_list_add(&entry->link, &matched_seqno_list);
>> +	} while (hc);
>> +
>> +	igt_list_for_each_entry_safe(entry, tmp, &matched_seqno_list, link)
>> +		free(entry);
>> +}
>> +
>> +/**
>> + * xe_eudebug_event_log_find_seqno:
>> + * @l: event log pointer
>> + * @seqno: seqno of event to be found
>> + *
>> + * Finds the event with given seqno in the event log.
>> + *
>> + * Returns: pointer to the event with given seqno within @l, or NULL if the
>> + * seqno is not present.
>> + */
>> +struct drm_xe_eudebug_event *
>> +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno)
>> +{
>> +	struct drm_xe_eudebug_event *e = NULL, *found = NULL;
>> +
>> +	igt_assert_neq(seqno, 0);
>> +	/*
>> +	 * Try to catch if seqno is corrupted and prevent too long tests,
>> +	 * as our post processing of events is not optimized.
>> +	 */
>> +	igt_assert_lt(seqno, 10 * 1000 * 1000);
>> +
>> +	xe_eudebug_for_each_event(e, l) {
>> +		if (e->seqno == seqno) {
>> +			if (found) {
>> +				igt_warn("Found multiple events with the same seqno %lu\n", seqno);
>> +				xe_eudebug_event_log_print(l, false);
>> +				igt_assert(!found);
>> +			}
>> +			found = e;
>> +		}
>> +	}
>> +
>> +	return found;
>> +}
>> +
>> +static void event_log_sort(struct xe_eudebug_event_log *l)
>> +{
>> +	struct xe_eudebug_event_log *tmp;
>> +	struct drm_xe_eudebug_event *e = NULL;
>> +	uint64_t first_seqno = 0;
>> +	uint64_t last_seqno = 0;
>> +	uint64_t events = 0, added = 0;
>> +	uint64_t i;
>> +
>> +	xe_eudebug_for_each_event(e, l) {
>> +		if (e->seqno > last_seqno)
>> +			last_seqno = e->seqno;
>> +
>> +		if (e->seqno < first_seqno)
>> +			first_seqno = e->seqno;
>> +
>> +		events++;
>> +	}
>> +
>> +	tmp = xe_eudebug_event_log_create("tmp", l->max_size);
>> +
>> +	for (i = 1; i <= last_seqno; i++) {
>> +		e = xe_eudebug_event_log_find_seqno(l, i);
>> +		if (e) {
>> +			xe_eudebug_event_log_write(tmp, e);
>> +			added++;
>> +		}
>> +	}
>> +
>> +	igt_assert_eq(events, added);
>> +	igt_assert_eq(tmp->head, l->head);
>> +
>> +	memcpy(l->log, tmp->log, tmp->head);
>> +
>> +	xe_eudebug_event_log_destroy(tmp);
>> +}
>> +
>> +/**
>> + * xe_eudebug_connect:
>> + * @fd: Xe file descriptor
>> + * @pid: client PID
>> + * @flags: connection flags
>> + *
>> + * Opens the xe eu debugger connection to the process described by @pid.
>> + *
>> + * Returns: the debugger fd (>= 0) if the debugger was successfully
>> + * attached, -errno otherwise.
>> + */
>> +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags)
>> +{
>> +	int ret;
>> +	uint64_t events = 0; /* events filtering not supported yet! */
>> +
>> +	ret = __xe_eudebug_connect(fd, pid, flags, events);
>> +
>> +	return ret;
>> +}
>> +
>> +/**
>> + * xe_eudebug_event_log_create:
>> + * @name: event log identifier
>> + * @max_size: maximum size of created log
>> + *
>> + * Creates an EU debugger event log with size equal to @max_size.
>> + *
>> + * Returns: pointer to just created log
>> + */
>> +#define MAX_EVENT_LOG_SIZE (32 * 1024 * 1024)
>> +struct xe_eudebug_event_log *xe_eudebug_event_log_create(const char *name, unsigned int max_size)
>> +{
>> +	struct xe_eudebug_event_log *l;
>> +
>> +	l = calloc(1, sizeof(*l));
>> +	igt_assert(l);
>> +	l->log = calloc(1, max_size);
>> +	igt_assert(l->log);
>> +	l->max_size = max_size;
>> +	strncpy(l->name, name, sizeof(l->name) - 1);
>> +	pthread_mutex_init(&l->lock, NULL);
>> +
>> +	return l;
>> +}
>> +
>> +/**
>> + * xe_eudebug_event_log_destroy:
>> + * @l: event log pointer
>> + *
>> + * Frees given event log @l.
>> + */
>> +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l)
>> +{
>> +	pthread_mutex_destroy(&l->lock);
>> +	free(l->log);
>> +	free(l);
>> +}
>> +
>> +/**
>> + * xe_eudebug_event_log_write:
>> + * @l: event log pointer
>> + * @e: event to be written to event log
>> + *
>> + * Writes event @e to the event log, thread-safe.
>> + */
>> +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e)
>> +{
>> +	igt_assert(e->seqno);
>> +	/*
>> +	 * Try to catch if seqno is corrupted and prevent too long tests,
>> +	 * as our post processing of events is not optimized.
>> +	 */
>> +	igt_assert_lt(e->seqno, 10 * 1000 * 1000);
>> +
>> +	pthread_mutex_lock(&l->lock);
>> +	igt_assert_lt(l->head + e->len, l->max_size);
>> +	memcpy(l->log + l->head, e, e->len);
>> +	l->head += e->len;
>> +
>> +#ifdef DEBUG_LOG
>> +	igt_info("%s: wrote %u bytes to eventlog, free %u bytes\n",
>> +		 l->name, e->len, l->max_size - l->head);
>> +#endif
>> +	pthread_mutex_unlock(&l->lock);
> I'm not fan of #ifdef debug logs. As the event log has been proven in action,
> can we just strip it out?

Sure, will do.

Thanks,
Christoph

> <cut>
> 
> Regards,
> Dominik
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 13/14] tests/xe_eudebug_online: Debug client which runs workloads on EU
  2024-08-19  9:58   ` Grzegorzek, Dominik
@ 2024-08-19 15:36     ` Manszewski, Christoph
  0 siblings, 0 replies; 41+ messages in thread
From: Manszewski, Christoph @ 2024-08-19 15:36 UTC (permalink / raw)
  To: Grzegorzek, Dominik, igt-dev@lists.freedesktop.org
  Cc: Patelczyk, Maciej, Hajda, Andrzej, karolina.stolarek@intel.com,
	Kempczynski, Zbigniew, Piatkowski, Dominik Karol, Sikora, Pawel,
	Kuoppala, Mika, Mun, Gwan-gyeong, kamil.konieczny@linux.intel.com,
	Kolanupaka Naveena

Hi Dominik,

On 19.08.2024 11:58, Grzegorzek, Dominik wrote:
> On Fri, 2024-08-09 at 14:38 +0200, Christoph Manszewski wrote:
>> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>>
>> For typical debugging under gdb one can distinguish two main use cases:
>> accessing and manipulating resources created by the application, and
>> manipulating thread execution (interrupting and setting breakpoints).
>>
>> This test adds coverage for the latter by checking that:
>> - EU workloads that hit an instruction with the breakpoint bit set will
>>    halt execution and the debugger will report this via attention events,
>> - the debugger is able to interrupt workload execution by issuing a
>>    'interrupt_all' ioctl call,
>> - the debugger is able to resume selected workloads that are stopped.
>>
>> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>> Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
>> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
>> Signed-off-by: Dominik Karol Piątkowski <dominik.karol.piatkowski@intel.com>
>> Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
>> Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
>> Signed-off-by: Kolanupaka Naveena <kolanupaka.naveena@intel.com>
>> ---
>>   tests/intel/xe_eudebug_online.c | 2203 +++++++++++++++++++++++++++++++
>>   tests/meson.build               |    1 +
>>   2 files changed, 2204 insertions(+)
>>   create mode 100644 tests/intel/xe_eudebug_online.c
>>
>> diff --git a/tests/intel/xe_eudebug_online.c b/tests/intel/xe_eudebug_online.c
>> new file mode 100644
>> index 000000000..1c8ac67f1
>> --- /dev/null
>> +++ b/tests/intel/xe_eudebug_online.c
>> @@ -0,0 +1,2203 @@
>> +// SPDX-License-Identifier: MIT
>> +/*
>> + * Copyright © 2023 Intel Corporation
>> + */
>> +
>> +/**
>> + * TEST: Tests for eudebug online functionality
>> + * Category: Core
>> + * Mega feature: EUdebug
>> + * Sub-category: EUdebug tests
>> + * Functionality: eu kernel debug
>> + * Test category: functionality test
>> + */
>> +
>> +#include "xe/xe_eudebug.h"
>> +#include "xe/xe_ioctl.h"
>> +#include "xe/xe_query.h"
>> +#include "igt.h"
>> +#include "intel_pat.h"
>> +#include "intel_mocs.h"
>> +#include "gpgpu_shader.h"
>> +
>> +#define SHADER_NOP			(0 << 0)
>> +#define SHADER_BREAKPOINT		(1 << 0)
>> +#define SHADER_LOOP			(1 << 1)
>> +#define SHADER_SINGLE_STEP		(1 << 2)
>> +#define SIP_SINGLE_STEP			(1 << 3)
>> +#define DISABLE_DEBUG_MODE		(1 << 4)
>> +#define SHADER_N_NOOP_BREAKPOINT	(1 << 5)
>> +#define SHADER_CACHING_SRAM		(1 << 6)
>> +#define SHADER_CACHING_VRAM		(1 << 7)
>> +#define SHADER_MIN_THREADS		(1 << 8)
>> +#define DO_NOT_EXPECT_CANARIES		(1 << 9)
>> +#define TRIGGER_RESUME_SINGLE_WALK	(1 << 25)
>> +#define TRIGGER_RESUME_PARALLEL_WALK	(1 << 26)
>> +#define TRIGGER_RECONNECT		(1 << 27)
>> +#define TRIGGER_RESUME_SET_BP		(1 << 28)
>> +#define TRIGGER_RESUME_DELAYED		(1 << 29)
>> +#define TRIGGER_RESUME_DSS		(1 << 30)
>> +#define TRIGGER_RESUME_ONE		(1 << 31)
>> +
>> +#define DEBUGGER_REATTACHED	1
>> +
>> +#define SHADER_LOOP_N		3
>> +#define SINGLE_STEP_COUNT	16
>> +#define STEERING_SINGLE_STEP	0
>> +#define STEERING_CONTINUE	0x00c0ffee
>> +#define STEERING_END_LOOP	0xdeadca11
>> +
>> +#define CACHING_INIT_VALUE	0xcafe0000
>> +#define CACHING_POISON_VALUE	0xcafedead
>> +#define CACHING_VALUE(n)	(CACHING_INIT_VALUE + n)
>> +
>> +#define SHADER_CANARY 0x01010101
>> +
>> +#define WALKER_X_DIM		4
>> +#define WALKER_ALIGNMENT	16
>> +#define SIMD_SIZE		16
>> +
>> +#define STARTUP_TIMEOUT_MS	3000
>> +#define WORKLOAD_DELAY_US	(5000 * 1000)
>> +
>> +#define PAGE_SIZE 4096
>> +
>> +struct dim_t {
>> +	uint32_t x;
>> +	uint32_t y;
>> +	uint32_t alignment;
>> +};
>> +
>> +static struct dim_t walker_dimensions(int threads)
>> +{
>> +	uint32_t x_dim = min_t(x_dim, threads, WALKER_X_DIM);
>> +	struct dim_t ret = {
>> +		.x = x_dim,
>> +		.y = threads / x_dim,
>> +		.alignment = WALKER_ALIGNMENT
>> +	};
>> +
>> +	return ret;
>> +}
>> +
>> +static struct dim_t surface_dimensions(int threads)
>> +{
>> +	struct dim_t ret = walker_dimensions(threads);
>> +
>> +	ret.y = max_t(ret.y, threads/ret.x, 4);
>> +	ret.x *= SIMD_SIZE;
>> +	ret.alignment *= SIMD_SIZE;
>> +
>> +	return ret;
>> +}
>> +
>> +static uint32_t steering_offset(int threads)
>> +{
>> +	struct dim_t w = walker_dimensions(threads);
>> +
>> +	return ALIGN(w.x, w.alignment) * w.y * 4;
>> +}
>> +
>> +static struct intel_buf *create_uc_buf(int fd, int width, int height)
>> +{
>> +	struct intel_buf *buf;
>> +
>> +	buf = intel_buf_create_full(buf_ops_create(fd), 0, width/4, height,
>> +				    32, 0, I915_TILING_NONE, 0, 0, 0,
>> +				    vram_if_possible(fd, 0),
>> +				    DEFAULT_PAT_INDEX, DEFAULT_MOCS_INDEX);
>> +
>> +	return buf;
>> +}
>> +
>> +static int get_number_of_threads(uint64_t flags)
>> +{
>> +	if (flags & SHADER_MIN_THREADS)
>> +		return 16;
>> +
>> +	if (flags & (TRIGGER_RESUME_ONE | TRIGGER_RESUME_SINGLE_WALK |
>> +		     TRIGGER_RESUME_PARALLEL_WALK | SHADER_CACHING_SRAM | SHADER_CACHING_VRAM))
>> +		return 32;
>> +
>> +	return 512;
>> +}
>> +
>> +static int caching_get_instruction_count(int fd, uint32_t s_dim__x, int flags)
>> +{
>> +	uint64_t memory;
>> +
>> +	igt_assert((flags & SHADER_CACHING_SRAM) || (flags & SHADER_CACHING_VRAM));
>> +
>> +	if (flags & SHADER_CACHING_SRAM)
>> +		memory = system_memory(fd);
>> +	else
>> +		memory = vram_memory(fd, 0);
>> +
>> +	/* each instruction writes to given y offset */
>> +	return (2 * xe_min_page_size(fd, memory)) / s_dim__x;
>> +}
>> +
>> +static struct gpgpu_shader *get_shader(int fd, const unsigned int flags)
>> +{
>> +	struct dim_t w_dim = walker_dimensions(get_number_of_threads(flags));
>> +	struct dim_t s_dim = surface_dimensions(get_number_of_threads(flags));
>> +	static struct gpgpu_shader *shader;
>> +
>> +	shader = gpgpu_shader_create(fd);
>> +
>> +	gpgpu_shader__write_dword(shader, SHADER_CANARY, 0);
>> +	if (flags & SHADER_BREAKPOINT) {
>> +		gpgpu_shader__nop(shader);
>> +		gpgpu_shader__breakpoint(shader);
>> +	} else if (flags & SHADER_LOOP) {
>> +		gpgpu_shader__label(shader, 0);
>> +		gpgpu_shader__write_dword(shader, SHADER_CANARY, 0);
>> +		gpgpu_shader__jump_neq(shader, 0, w_dim.y, STEERING_END_LOOP);
>> +		gpgpu_shader__write_dword(shader, SHADER_CANARY, 0);
>> +	} else if (flags & SHADER_SINGLE_STEP) {
>> +		gpgpu_shader__nop(shader);
>> +		gpgpu_shader__breakpoint(shader);
>> +		for (int i = 0; i < SINGLE_STEP_COUNT; i++)
>> +			gpgpu_shader__nop(shader);
>> +	} else if (flags & SHADER_N_NOOP_BREAKPOINT) {
>> +		for (int i = 0; i < SHADER_LOOP_N; i++) {
>> +			gpgpu_shader__nop(shader);
>> +			gpgpu_shader__breakpoint(shader);
>> +		}
>> +	} else if ((flags & SHADER_CACHING_SRAM) || (flags & SHADER_CACHING_VRAM)) {
>> +		gpgpu_shader__nop(shader);
>> +		gpgpu_shader__breakpoint(shader);
>> +		for (int i = 0; i < caching_get_instruction_count(fd, s_dim.x, flags); i++)
>> +			gpgpu_shader__common_target_write_u32(shader, s_dim.y + i, CACHING_VALUE(i));
>> +		gpgpu_shader__nop(shader);
>> +		gpgpu_shader__breakpoint(shader);
>> +	}
>> +
>> +	gpgpu_shader__eot(shader);
>> +	return shader;
>> +}
>> +
>> +static struct gpgpu_shader *get_sip(int fd, const unsigned int flags)
>> +{
>> +	struct dim_t w_dim = walker_dimensions(get_number_of_threads(flags));
>> +	static struct gpgpu_shader *sip;
>> +
>> +	sip = gpgpu_shader_create(fd);
>> +	gpgpu_shader__write_aip(sip, 0);
>> +
>> +	gpgpu_shader__wait(sip);
>> +	if (flags & SIP_SINGLE_STEP)
>> +		gpgpu_shader__end_system_routine_step_if_eq(sip, w_dim.y, 0);
>> +	else
>> +		gpgpu_shader__end_system_routine(sip, true);
>> +	return sip;
>> +}
>> +
>> +static int count_set_bits(void *ptr, size_t size)
>> +{
>> +	uint8_t *p = ptr;
>> +	int count = 0;
>> +	int i, j;
>> +
>> +	for (i = 0; i < size; i++)
>> +		for (j = 0; j < 8; j++)
>> +			count += !!(p[i] & (1 << j));
>> +
>> +	return count;
>> +}
>> +
>> +static int count_canaries_eq(uint32_t *ptr, struct dim_t w_dim, uint32_t value)
>> +{
>> +	int count = 0;
>> +	int x, y;
>> +
>> +	for (x = 0; x < w_dim.x; x++)
>> +		for (y = 0; y < w_dim.y; y++)
>> +			if (READ_ONCE(ptr[x + ALIGN(w_dim.x, w_dim.alignment) * y]) == value)
>> +				count++;
>> +
>> +	return count;
>> +}
>> +
>> +static int count_canaries_neq(uint32_t *ptr, struct dim_t w_dim, uint32_t value)
>> +{
>> +	return w_dim.x * w_dim.y - count_canaries_eq(ptr, w_dim, value);
>> +}
>> +
>> +static const char *td_ctl_cmd_to_str(uint32_t cmd)
>> +{
>> +	switch (cmd) {
>> +	case DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL:
>> +		return "interrupt all";
>> +	case DRM_XE_EUDEBUG_EU_CONTROL_CMD_STOPPED:
>> +		return "stopped";
>> +	case DRM_XE_EUDEBUG_EU_CONTROL_CMD_RESUME:
>> +		return "resume";
>> +	default:
>> +		return "unknown command";
>> +	}
>> +}
>> +
>> +static int __eu_ctl(int debugfd, uint64_t client,
>> +		    uint64_t exec_queue, uint64_t lrc,
>> +		    uint8_t *bitmask, uint32_t *bitmask_size,
>> +		    uint32_t cmd, uint64_t *seqno)
>> +{
>> +	struct drm_xe_eudebug_eu_control control = {
>> +		.client_handle = lower_32_bits(client),
>> +		.exec_queue_handle = exec_queue,
>> +		.lrc_handle = lrc,
>> +		.cmd = cmd,
>> +		.bitmask_ptr = to_user_pointer(bitmask),
>> +	};
>> +	int ret;
>> +
>> +	if (bitmask_size)
>> +		control.bitmask_size = *bitmask_size;
>> +
>> +	ret = igt_ioctl(debugfd, DRM_XE_EUDEBUG_IOCTL_EU_CONTROL, &control);
>> +
>> +	if (ret < 0)
>> +		return -errno;
>> +
>> +	igt_debug("EU CONTROL[%llu]: %s\n", control.seqno, td_ctl_cmd_to_str(cmd));
>> +
>> +	if (bitmask_size)
>> +		*bitmask_size = control.bitmask_size;
>> +
>> +	if (seqno)
>> +		*seqno = control.seqno;
>> +
>> +	return 0;
>> +
>> +}
>> +
>> +static uint64_t eu_ctl(int debugfd, uint64_t client,
>> +		       uint64_t exec_queue, uint64_t lrc,
>> +		       uint8_t *bitmask, uint32_t *bitmask_size, uint32_t cmd)
>> +{
>> +	uint64_t seqno;
>> +
>> +	igt_assert_eq(__eu_ctl(debugfd, client, exec_queue, lrc, bitmask,
>> +			       bitmask_size, cmd, &seqno), 0);
>> +
>> +	return seqno;
>> +}
>> +
>> +static bool intel_gen_needs_resume_wa(int fd)
>> +{
>> +	const uint32_t id = intel_get_drm_devid(fd);
>> +
>> +	return intel_gen(id) == 12 && intel_graphics_ver(id) < IP_VER(12, 55);
>> +}
>> +
>> +static uint64_t eu_ctl_resume(int fd, int debugfd, uint64_t client,
>> +			      uint64_t exec_queue, uint64_t lrc,
>> +			      uint8_t *bitmask, uint32_t bitmask_size)
>> +{
>> +	int i;
>> +
>> +	/* XXX: WA for hsd: 14011332042 */
> Remove XXX.
> Maybe plain: /* Wa_14011332042 */ ?
>> +	if (intel_gen_needs_resume_wa(fd)) {
>> +		uint32_t *att_reg_half = (uint32_t *)bitmask;
>> +
>> +		for (i = 0; i < bitmask_size / sizeof(uint32_t); i += 2) {
>> +			att_reg_half[i] |= att_reg_half[i + 1];
>> +			att_reg_half[i + 1] |= att_reg_half[i];
>> +		}
>> +	}
>> +
>> +	return eu_ctl(debugfd, client, exec_queue, lrc, bitmask, &bitmask_size,
>> +		      DRM_XE_EUDEBUG_EU_CONTROL_CMD_RESUME);
>> +}
>> +
>> +static inline uint64_t eu_ctl_stopped(int debugfd, uint64_t client,
>> +				      uint64_t exec_queue, uint64_t lrc,
>> +				      uint8_t *bitmask, uint32_t *bitmask_size)
>> +{
>> +	return eu_ctl(debugfd, client, exec_queue, lrc, bitmask, bitmask_size,
>> +		      DRM_XE_EUDEBUG_EU_CONTROL_CMD_STOPPED);
>> +}
>> +
>> +static inline uint64_t eu_ctl_interrupt_all(int debugfd, uint64_t client,
>> +					    uint64_t exec_queue, uint64_t lrc)
>> +{
>> +	return eu_ctl(debugfd, client, exec_queue, lrc, NULL, 0,
>> +		      DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL);
>> +}
>> +
>> +struct online_debug_data {
>> +	pthread_mutex_t mutex;
>> +	/* client in */
>> +	struct drm_xe_engine_class_instance hwe;
>> +	/* client out */
>> +	int threads_count;
>> +	/* debugger internals */
>> +	uint64_t client_handle;
>> +	uint64_t exec_queue_handle;
>> +	uint64_t lrc_handle;
>> +	uint64_t target_offset;
>> +	size_t target_size;
>> +	uint64_t bb_offset;
>> +	size_t bb_size;
>> +	int vm_fd;
>> +	uint32_t first_aip;
>> +	uint64_t *aips_offset_table;
>> +	uint32_t steps_done;
>> +	uint8_t *single_step_bitmask;
>> +	int stepped_threads_count;
>> +	struct timespec exception_arrived;
>> +	int last_eu_control_seqno;
>> +	struct drm_xe_eudebug_event *exception_event;
>> +};
>> +
>> +static struct online_debug_data *
>> +online_debug_data_create(struct drm_xe_engine_class_instance *hwe)
>> +{
>> +	struct online_debug_data *data;
>> +
>> +	data = mmap(0, ALIGN(sizeof(*data), PAGE_SIZE),
>> +		    PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
>> +	memcpy(&data->hwe, hwe, sizeof(*hwe));
>> +	pthread_mutex_init(&data->mutex, NULL);
>> +	data->client_handle = -1ULL;
>> +	data->exec_queue_handle = -1ULL;
>> +	data->lrc_handle = -1ULL;
>> +	data->vm_fd = -1;
>> +	data->stepped_threads_count = -1;
>> +
>> +	return data;
>> +}
>> +
>> +static void online_debug_data_destroy(struct online_debug_data *data)
>> +{
>> +	free(data->aips_offset_table);
>> +	munmap(data, ALIGN(sizeof(*data), PAGE_SIZE));
>> +}
>> +
>> +static void eu_attention_debug_trigger(struct xe_eudebug_debugger *d,
>> +					struct drm_xe_eudebug_event *e)
>> +{
>> +	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
>> +	uint32_t *ptr = (uint32_t *) att->bitmask;
>> +
>> +	igt_debug("EVENT[%llu] eu-attention; threads=%d "
>> +		 "client[%llu], exec_queue[%llu], lrc[%llu], bitmask_size[%d]\n",
>> +		 att->base.seqno, count_set_bits(att->bitmask, att->bitmask_size),
>> +				att->client_handle, att->exec_queue_handle,
>> +				att->lrc_handle, att->bitmask_size);
> Strange alignment; the continuation lines should line up under the opening parenthesis.
>> +
>> +	for (uint32_t i = 0; i < att->bitmask_size/4; i += 2)
>> +		igt_debug("bitmask[%d] = 0x%08x%08x\n", i/2, ptr[i], ptr[i+1]);
>> +
>> +}
>> +
>> +static void eu_attention_reset_trigger(struct xe_eudebug_debugger *d,
>> +					struct drm_xe_eudebug_event *e)
>> +{
>> +	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
>> +	uint32_t *ptr = (uint32_t *) att->bitmask;
>> +	struct online_debug_data *data = d->ptr;
>> +
>> +	igt_debug("EVENT[%llu] eu-attention with reset; threads=%d "
>> +		 "client[%llu], exec_queue[%llu], lrc[%llu], bitmask_size[%d]\n",
>> +		 att->base.seqno, count_set_bits(att->bitmask, att->bitmask_size),
>> +				att->client_handle, att->exec_queue_handle,
>> +				att->lrc_handle, att->bitmask_size);
> Same alignment issue here as well.
>> +
>> +	for (uint32_t i = 0; i < att->bitmask_size/4; i += 2)
>> +		igt_debug("bitmask[%d] = 0x%08x%08x\n", i/2, ptr[i], ptr[i+1]);
>> +
>> +	xe_force_gt_reset_async(d->master_fd, data->hwe.gt_id);
>> +}
>> +
>> +static void copy_first_bit(uint8_t *dst, uint8_t *src, int size)
>> +{
>> +	bool found = false;
>> +	int i, j;
>> +
>> +	for (i = 0; i < size; i++) {
>> +		if (found) {
>> +			dst[i] = 0;
>> +		} else {
>> +			uint32_t tmp = src[i]; /* in case dst == src */
>> +
>> +			for (j = 0; j < 8; j++) {
>> +				dst[i] = tmp & (1 << j);
>> +				if (dst[i]) {
>> +					found = true;
>> +					break;
>> +				}
>> +			}
>> +		}
>> +	}
>> +}
>> +
>> +static void copy_nth_bit(uint8_t *dst, uint8_t *src, int size, int n)
>> +{
>> +	int count = 0;
>> +
>> +	for (int i = 0; i < size; i++) {
>> +		uint32_t tmp = src[i];
>> +		for (int j = 7; j >= 0; j--) {
>> +			if (tmp & (1 << j)) {
>> +				count++;
>> +				if (count == n)
>> +					dst[i] |= (1 << j);
>> +				else
>> +					dst[i] &= ~(1 << j);
>> +			} else
>> +				dst[i] &= ~(1 << j);
>> +		}
>> +	}
>> +}
>> +
>> +/*
>> + * Searches for the first instruction. It relies on the assumption
>> + * that the shader kernel is placed before the sip within the bb.
>> + */
>> +static uint32_t find_kernel_in_bb(struct gpgpu_shader *kernel,
>> +				  struct online_debug_data *data)
>> +{
>> +	uint32_t *p = kernel->code;
>> +	size_t sz = 4 * sizeof(uint32_t);
>> +	uint32_t buf[4];
>> +	int i;
>> +
>> +	for (i = 0; i < data->bb_size; i += sz) {
>> +		igt_assert_eq(pread(data->vm_fd, &buf, sz, data->bb_offset + i), sz);
>> +
>> +		if (memcmp(p, buf, sz) == 0)
>> +			break;
>> +	}
>> +
>> +	igt_assert(i < data->bb_size);
>> +
>> +	return i;
>> +}
>> +
>> +static void set_breakpoint_once(struct xe_eudebug_debugger *d,
>> +				struct online_debug_data *data)
>> +{
>> +	const uint32_t breakpoint_bit = 1 << 30;
>> +	size_t sz = sizeof(uint32_t);
>> +	struct gpgpu_shader *kernel;
>> +	uint32_t aip;
>> +
>> +	kernel = get_shader(d->master_fd, d->flags);
>> +
>> +	if (data->first_aip) {
>> +		uint32_t expected = find_kernel_in_bb(kernel, data) + kernel->size * 4 - 0x10;
>> +
>> +		igt_assert_eq(pread(data->vm_fd, &aip, sz, data->target_offset), sz);
>> +		igt_assert_eq_u32(aip, expected);
>> +	} else {
>> +		uint32_t instr_usdw;
>> +
>> +		igt_assert(data->vm_fd != -1);
>> +		igt_assert(data->target_size != 0);
>> +		igt_assert(data->bb_size != 0);
>> +
>> +		igt_assert_eq(pread(data->vm_fd, &aip, sz, data->target_offset), sz);
>> +		data->first_aip = aip;
>> +
>> +		aip = find_kernel_in_bb(kernel, data);
>> +
>> +		/* set breakpoint on last instruction */
>> +		aip += kernel->size * 4 - 0x10;
>> +		igt_assert_eq(pread(data->vm_fd, &instr_usdw, sz,
>> +				    data->bb_offset + aip), sz);
>> +		instr_usdw |= breakpoint_bit;
>> +		igt_assert_eq(pwrite(data->vm_fd, &instr_usdw, sz,
>> +				     data->bb_offset + aip), sz);
>> +
>> +	}
>> +
>> +	gpgpu_shader_destroy(kernel);
>> +}
>> +
>> +static void get_aips_offset_table(struct online_debug_data *data, int threads)
>> +{
>> +	size_t sz = sizeof(uint32_t);
>> +	uint32_t aip;
>> +	uint32_t first_aip;
>> +	int table_index = 0;
>> +
>> +	if (data->aips_offset_table)
>> +		return;
>> +
>> +	data->aips_offset_table = malloc(threads * sizeof(uint64_t));
>> +	igt_assert(data->aips_offset_table);
>> +
>> +	igt_assert_eq(pread(data->vm_fd, &first_aip, sz, data->target_offset), sz);
>> +	data->first_aip = first_aip;
>> +	data->aips_offset_table[table_index++] = 0;
>> +
>> +	fsync(data->vm_fd);
>> +	for (int i = 1; i < data->target_size; i++) {
> for (int i = sz; i < data->target_size; i += sz) - no need to compare byte by byte
> when the instruction pointer is 4 bytes wide.
>> +		igt_assert_eq(pread(data->vm_fd, &aip, sz, data->target_offset + i), sz);
>> +		if (aip == first_aip)
>> +			data->aips_offset_table[table_index++] = i;
>> +	}
>> +
>> +	igt_assert_eq(threads, table_index);
>> +
>> +	igt_debug("AIPs offset table:\n");
>> +	for (int i = 0; i < threads; i++) {
>> +		igt_debug("%lx\n", data->aips_offset_table[i]);
>> +	}
> Redundant braces.
>> +}
>> +
>> +static int get_stepped_threads_count(struct online_debug_data *data, int threads)
>> +{
>> +	int count = 0;
>> +	size_t sz = sizeof(uint32_t);
>> +	uint32_t aip;
>> +
>> +	fsync(data->vm_fd);
>> +	for (int i = 0; i < threads; i++) {
>> +		igt_assert_eq(pread(data->vm_fd, &aip, sz,
>> +				    data->target_offset + data->aips_offset_table[i]), sz);
>> +		if (aip != data->first_aip) {
>> +			igt_assert(aip == data->first_aip + 0x10);
>> +			count++;
>> +		}
>> +	}
>> +
>> +	return count;
>> +}
>> +
>> +static void save_first_exception_trigger(struct xe_eudebug_debugger *d,
>> +					 struct drm_xe_eudebug_event *e)
>> +{
>> +	struct online_debug_data *data = d->ptr;
>> +
>> +	pthread_mutex_lock(&data->mutex);
>> +	if (!data->exception_event) {
>> +		igt_gettime(&data->exception_arrived);
>> +		data->exception_event = igt_memdup(e, e->len);
>> +	}
>> +	pthread_mutex_unlock(&data->mutex);
>> +}
>> +
>> +#define MAX_PREEMPT_TIMEOUT 10ull
>> +static int is_client_resumed;
> 
> Why is this a static global variable? It could be part of the data struct.
> What's more, the name slightly suggests that once it is set to 1 the client
> is running, which may not be true (more than one dispatch, or more than one
> breakpoint). I would consider renaming it, but I have no strong opinion on that.
> 
>> +static void eu_attention_resume_trigger(struct xe_eudebug_debugger *d,
>> +					struct drm_xe_eudebug_event *e)
>> +{
>> +	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
>> +	struct online_debug_data *data = d->ptr;
>> +	uint32_t bitmask_size = att->bitmask_size;
>> +	uint8_t *bitmask;
>> +	int i;
>> +
>> +	if (data->last_eu_control_seqno > att->base.seqno)
>> +		return;
>> +
>> +	bitmask = calloc(1, att->bitmask_size);
>> +
>> +	eu_ctl_stopped(d->fd, att->client_handle, att->exec_queue_handle,
>> +		       att->lrc_handle, bitmask, &bitmask_size);
>> +	igt_assert(bitmask_size == att->bitmask_size);
>> +	igt_assert(memcmp(bitmask, att->bitmask, att->bitmask_size) == 0);
>> +
>> +	pthread_mutex_lock(&data->mutex);
>> +	if (igt_nsec_elapsed(&data->exception_arrived) < (MAX_PREEMPT_TIMEOUT + 1) * NSEC_PER_SEC &&
>> +	    d->flags & TRIGGER_RESUME_DELAYED) {
>> +		pthread_mutex_unlock(&data->mutex);
>> +		free(bitmask);
>> +		return;
>> +	} else if (d->flags & TRIGGER_RESUME_ONE) {
>> +		copy_first_bit(bitmask, bitmask, bitmask_size);
>> +	} else if (d->flags & TRIGGER_RESUME_DSS) {
>> +		uint64_t *event = (uint64_t *)att->bitmask;
>> +		uint64_t *resume = (uint64_t *)bitmask;
>> +
>> +		memset(bitmask, 0, bitmask_size);
>> +		for (i = 0; i < att->bitmask_size / sizeof(uint64_t); i++) {
>> +			if (!event[i])
>> +				continue;
>> +
>> +			resume[i] = event[i];
>> +			break;
>> +		}
>> +	} else if (d->flags & TRIGGER_RESUME_SET_BP) {
>> +		set_breakpoint_once(d, data);
>> +	}
>> +
>> +	if (d->flags & SHADER_LOOP) {
>> +		uint32_t threads = get_number_of_threads(d->flags);
>> +		uint32_t val = STEERING_END_LOOP;
>> +
>> +		igt_assert_eq(pwrite(data->vm_fd, &val, sizeof(uint32_t),
>> +				     data->target_offset + steering_offset(threads)),
>> +			      sizeof(uint32_t));
>> +		fsync(data->vm_fd);
>> +	}
>> +	pthread_mutex_unlock(&data->mutex);
>> +
>> +	data->last_eu_control_seqno = eu_ctl_resume(d->master_fd, d->fd, att->client_handle,
>> +						    att->exec_queue_handle, att->lrc_handle,
>> +						    bitmask, att->bitmask_size);
>> +
>> +	is_client_resumed = 1;
>> +	free(bitmask);
>> +}
>> +
>> +static void eu_attention_resume_single_step_trigger(struct xe_eudebug_debugger *d,
>> +						    struct drm_xe_eudebug_event *e)
>> +{
>> +	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
>> +	struct online_debug_data *data = d->ptr;
>> +	const int threads = get_number_of_threads(d->flags);
>> +	uint32_t val;
>> +	size_t sz = sizeof(uint32_t);
>> +
>> +	get_aips_offset_table(data, threads);
>> +
>> +	if (d->flags & TRIGGER_RESUME_PARALLEL_WALK) {
>> +		if (data->stepped_threads_count != -1)
>> +			if (data->steps_done < SINGLE_STEP_COUNT) {
>> +				int stepped_threads_count_after_resume =
>> +						get_stepped_threads_count(data, threads);
>> +				igt_debug("Stepped threads after: %d\n",
>> +					  stepped_threads_count_after_resume);
>> +
>> +				if (stepped_threads_count_after_resume == threads) {
>> +					data->first_aip += 0x10;
>> +					data->steps_done++;
>> +				}
>> +
>> +				igt_debug("Shader steps: %d\n", data->steps_done);
>> +				igt_assert(data->stepped_threads_count == 0);
>> +				igt_assert(stepped_threads_count_after_resume == threads);
>> +			}
>> +
>> +		if (data->steps_done < SINGLE_STEP_COUNT) {
>> +			data->stepped_threads_count = get_stepped_threads_count(data, threads);
>> +			igt_debug("Stepped threads before: %d\n", data->stepped_threads_count);
>> +		}
>> +
>> +		val = data->steps_done < SINGLE_STEP_COUNT ? STEERING_SINGLE_STEP :
>> +							     STEERING_CONTINUE;
>> +	} else if (d->flags & TRIGGER_RESUME_SINGLE_WALK) {
>> +		if (data->stepped_threads_count != -1)
>> +			if (data->steps_done < 2) {
>> +				int stepped_threads_count_after_resume =
>> +						get_stepped_threads_count(data, threads);
>> +				igt_debug("Stepped threads after: %d\n",
>> +					  stepped_threads_count_after_resume);
>> +
>> +				if (stepped_threads_count_after_resume == threads) {
>> +					data->first_aip += 0x10;
>> +					data->steps_done++;
>> +					free(data->single_step_bitmask);
>> +					data->single_step_bitmask = 0;
>> +				}
>> +
>> +				igt_debug("Shader steps: %d\n", data->steps_done);
>> +				igt_assert(data->stepped_threads_count +
>> +					   (intel_gen_needs_resume_wa(d->master_fd) ? 2 : 1) ==
>> +					   stepped_threads_count_after_resume);
>> +			}
>> +
>> +		if (data->steps_done < 2) {
>> +			data->stepped_threads_count = get_stepped_threads_count(data, threads);
>> +			igt_debug("Stepped threads before: %d\n", data->stepped_threads_count);
>> +			if (intel_gen_needs_resume_wa(d->master_fd)) {
>> +				if (!data->single_step_bitmask) {
>> +					data->single_step_bitmask = malloc(att->bitmask_size *
>> +									   sizeof(uint8_t));
>> +					igt_assert(data->single_step_bitmask);
>> +					memcpy(data->single_step_bitmask, att->bitmask,
>> +					       att->bitmask_size);
>> +				}
>> +
>> +				copy_first_bit(att->bitmask, data->single_step_bitmask,
>> +					       att->bitmask_size);
>> +			} else
>> +				copy_nth_bit(att->bitmask, att->bitmask, att->bitmask_size,
>> +					     data->stepped_threads_count + 1);
>> +		}
>> +
>> +		val = data->steps_done < 2 ? STEERING_SINGLE_STEP : STEERING_CONTINUE;
>> +	}
>> +
>> +	igt_assert_eq(pwrite(data->vm_fd, &val, sz,
>> +			     data->target_offset + steering_offset(threads)), sz);
>> +	fsync(data->vm_fd);
>> +
>> +	eu_ctl_resume(d->master_fd, d->fd, att->client_handle,
>> +		      att->exec_queue_handle, att->lrc_handle,
>> +		      att->bitmask, att->bitmask_size);
>> +
>> +	if (data->single_step_bitmask)
>> +		for (int i = 0; i < att->bitmask_size; i++)
>> +			data->single_step_bitmask[i] &= ~att->bitmask[i];
>> +}
>> +
>> +static void open_trigger(struct xe_eudebug_debugger *d,
>> +			 struct drm_xe_eudebug_event *e)
>> +{
>> +	struct drm_xe_eudebug_event_client *client = (void *)e;
>> +	struct online_debug_data *data = d->ptr;
>> +
>> +	if (e->flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
>> +		return;
>> +
>> +	pthread_mutex_lock(&data->mutex);
>> +	data->client_handle = client->client_handle;
>> +	pthread_mutex_unlock(&data->mutex);
>> +}
>> +
>> +static void exec_queue_trigger(struct xe_eudebug_debugger *d,
>> +			       struct drm_xe_eudebug_event *e)
>> +{
>> +	struct drm_xe_eudebug_event_exec_queue *eq = (void *)e;
>> +	struct online_debug_data *data = d->ptr;
>> +
>> +	if (e->flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
>> +		return;
>> +
>> +	pthread_mutex_lock(&data->mutex);
>> +	data->exec_queue_handle = eq->exec_queue_handle;
>> +	data->lrc_handle = eq->lrc_handle[0];
>> +	pthread_mutex_unlock(&data->mutex);
>> +}
>> +
>> +static void vm_open_trigger(struct xe_eudebug_debugger *d,
>> +			    struct drm_xe_eudebug_event *e)
>> +{
>> +	struct drm_xe_eudebug_event_vm *vm = (void *)e;
>> +	struct online_debug_data *data = d->ptr;
>> +	struct drm_xe_eudebug_vm_open vo = {
>> +		.client_handle = vm->client_handle,
>> +		.vm_handle = vm->vm_handle,
>> +	};
>> +	int fd;
>> +
>> +	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
>> +		fd = igt_ioctl(d->fd, DRM_XE_EUDEBUG_IOCTL_VM_OPEN, &vo);
>> +		igt_assert_lte(0, fd);
>> +
>> +		pthread_mutex_lock(&data->mutex);
>> +		igt_assert(data->vm_fd == -1);
>> +		data->vm_fd = fd;
>> +		pthread_mutex_unlock(&data->mutex);
>> +		return;
>> +	}
>> +
>> +	pthread_mutex_lock(&data->mutex);
>> +	close(data->vm_fd);
>> +	data->vm_fd = -1;
>> +	pthread_mutex_unlock(&data->mutex);
>> +}
>> +
>> +static void read_metadata(struct xe_eudebug_debugger *d,
>> +			  uint64_t client_handle,
>> +			  uint64_t metadata_handle,
>> +			  uint64_t type,
>> +			  uint64_t len)
>> +{
>> +	struct drm_xe_eudebug_read_metadata rm = {
>> +		.client_handle = client_handle,
>> +		.metadata_handle = metadata_handle,
>> +		.size = len,
>> +	};
>> +	struct online_debug_data *data = d->ptr;
>> +	uint64_t *metadata;
>> +
>> +	metadata = malloc(len);
>> +	igt_assert(metadata);
>> +
>> +	rm.ptr = to_user_pointer(metadata);
>> +	igt_assert_eq(igt_ioctl(d->fd, DRM_XE_EUDEBUG_IOCTL_READ_METADATA, &rm), 0);
>> +
>> +	pthread_mutex_lock(&data->mutex);
>> +	switch (type) {
>> +	case DRM_XE_DEBUG_METADATA_ELF_BINARY:
>> +		data->bb_offset = metadata[0];
>> +		data->bb_size = metadata[1];
>> +		break;
>> +	case DRM_XE_DEBUG_METADATA_PROGRAM_MODULE:
>> +		data->target_offset = metadata[0];
>> +		data->target_size = metadata[1];
>> +		break;
>> +	default:
>> +		break;
>> +	}
>> +	pthread_mutex_unlock(&data->mutex);
>> +
>> +	free(metadata);
>> +}
>> +
>> +static void create_metadata_trigger(struct xe_eudebug_debugger *d, struct drm_xe_eudebug_event *e)
>> +{
>> +	struct drm_xe_eudebug_event_metadata *em = (void *)e;
>> +
>> +	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
>> +		read_metadata(d, em->client_handle, em->metadata_handle, em->type, em->len);
>> +	}
>> +}
>> +
>> +static void overwrite_immediate_value_in_common_target_write(int vm_fd, uint64_t offset,
>> +							     uint32_t old_val, uint32_t new_val)
>> +{
>> +	uint64_t addr = offset;
>> +	int vals_changed = 0;
>> +	uint32_t val;
>> +
>> +	while (vals_changed < 4) {
>> +		igt_assert_eq(pread(vm_fd, &val, sizeof(uint32_t), addr), sizeof(uint32_t));
>> +		if (val == old_val) {
>> +			igt_debug("val_before_write[%d]: %08x\n", vals_changed, val);
>> +			igt_assert_eq(pwrite(vm_fd, &new_val, sizeof(uint32_t), addr),
>> +				      sizeof(uint32_t));
>> +			igt_assert_eq(pread(vm_fd, &val, sizeof(uint32_t), addr),
>> +				      sizeof(uint32_t));
>> +			igt_debug("val_before_fsync[%d]: %08x\n", vals_changed, val);
>> +			fsync(vm_fd);
>> +			igt_assert_eq(pread(vm_fd, &val, sizeof(uint32_t), addr),
>> +				      sizeof(uint32_t));
>> +			igt_debug("val_after_fsync[%d]: %08x\n", vals_changed, val);
>> +			igt_assert_eq_u32(val, new_val);
>> +			vals_changed++;
>> +		}
>> +		addr += sizeof(uint32_t);
>> +	}
>> +}
>> +
>> +static void eu_attention_resume_caching_trigger(struct xe_eudebug_debugger *d,
>> +						struct drm_xe_eudebug_event *e)
>> +{
>> +	struct drm_xe_eudebug_event_eu_attention *att = (void *) e;
>> +	struct online_debug_data *data = d->ptr;
>> +	static int counter = 0;
>> +	static int kernel_in_bb = 0;
>> +	struct dim_t s_dim = surface_dimensions(get_number_of_threads(d->flags));
>> +	int val;
>> +	uint32_t instr_usdw;
>> +	struct gpgpu_shader *kernel;
>> +	const uint32_t breakpoint_bit = 1 << 30;
>> +	struct gpgpu_shader *shader_preamble;
>> +	struct gpgpu_shader *shader_write_instr;
>> +
>> +	shader_preamble = gpgpu_shader_create(d->master_fd);
>> +	gpgpu_shader__write_dword(shader_preamble, SHADER_CANARY, 0);
>> +	gpgpu_shader__nop(shader_preamble);
>> +	gpgpu_shader__breakpoint(shader_preamble);
>> +
>> +	shader_write_instr = gpgpu_shader_create(d->master_fd);
>> +	gpgpu_shader__common_target_write_u32(shader_write_instr, 0, 0);
>> +
>> +	if (!kernel_in_bb) {
>> +		kernel = get_shader(d->master_fd, d->flags);
>> +		kernel_in_bb = find_kernel_in_bb(kernel, data);
>> +		gpgpu_shader_destroy(kernel);
>> +	}
>> +
>> +	/* set breakpoint on next write instruction */
>> +	if (counter < caching_get_instruction_count(d->master_fd, s_dim.x, d->flags)) {
>> +		igt_assert_eq(pread(data->vm_fd, &instr_usdw, sizeof(instr_usdw),
>> +				    data->bb_offset + kernel_in_bb + shader_preamble->size * 4 +
>> +				    shader_write_instr->size * 4 * counter), sizeof(instr_usdw));
>> +		instr_usdw |= breakpoint_bit;
>> +		igt_assert_eq(pwrite(data->vm_fd, &instr_usdw, sizeof(instr_usdw),
>> +				     data->bb_offset + kernel_in_bb + shader_preamble->size * 4 +
>> +				     shader_write_instr->size * 4 * counter), sizeof(instr_usdw));
>> +		fsync(data->vm_fd);
>> +	}
>> +
>> +	/* restore current instruction */
>> +	if (counter && counter <= caching_get_instruction_count(d->master_fd, s_dim.x, d->flags))
>> +		overwrite_immediate_value_in_common_target_write(data->vm_fd,
>> +								 data->bb_offset + kernel_in_bb +
>> +								 shader_preamble->size * 4 +
>> +								 shader_write_instr->size * 4 * (counter - 1),
>> +								 CACHING_POISON_VALUE,
>> +								 CACHING_VALUE(counter - 1));
>> +
>> +	/* poison next instruction */
>> +	if (counter < caching_get_instruction_count(d->master_fd, s_dim.x, d->flags))
>> +		overwrite_immediate_value_in_common_target_write(data->vm_fd,
>> +								 data->bb_offset + kernel_in_bb +
>> +								 shader_preamble->size * 4 +
>> +								 shader_write_instr->size * 4 * counter,
>> +								 CACHING_VALUE(counter),
>> +								 CACHING_POISON_VALUE);
>> +
>> +	gpgpu_shader_destroy(shader_write_instr);
>> +	gpgpu_shader_destroy(shader_preamble);
>> +
>> +	for (int i = 0; i < data->target_size; i += sizeof(uint32_t)) {
>> +		igt_assert_eq(pread(data->vm_fd, &val, sizeof(val), data->target_offset + i),
>> +			      sizeof(val));
>> +		igt_assert_f(val != CACHING_POISON_VALUE, "Poison value found at %04d!\n", i);
>> +	}
>> +
>> +	eu_ctl_resume(d->master_fd, d->fd, att->client_handle,
>> +		      att->exec_queue_handle, att->lrc_handle,
>> +		      att->bitmask, att->bitmask_size);
>> +
>> +	counter++;
>> +}
>> +
>> +static struct intel_bb *xe_bb_create_on_offset(int fd, uint32_t exec_queue, uint32_t vm,
>> +					       uint64_t offset, uint32_t size)
>> +{
>> +	struct intel_bb *ibb;
>> +
>> +	ibb = intel_bb_create_with_context(fd, exec_queue, vm, NULL, size);
>> +
>> +	/* update intel bb offset */
>> +	intel_bb_remove_object(ibb, ibb->handle, ibb->batch_offset, ibb->size);
>> +	intel_bb_add_object(ibb, ibb->handle, ibb->size, offset, ibb->alignment, false);
>> +	ibb->batch_offset = offset;
>> +
>> +	return ibb;
>> +}
>> +
>> +static size_t get_bb_size(int flags)
>> +{
>> +	if ((flags & SHADER_CACHING_SRAM) || (flags & SHADER_CACHING_VRAM))
>> +		return 32768;
>> +
>> +	return 4096;
>> +}
>> +
>> +static void run_online_client(struct xe_eudebug_client *c)
>> +{
>> +	int threads = get_number_of_threads(c->flags);
>> +	const uint64_t target_offset = 0x1a000000;
>> +	const uint64_t bb_offset = 0x1b000000;
>> +	const size_t bb_size = get_bb_size(c->flags);
>> +	struct online_debug_data *data = c->ptr;
>> +	struct drm_xe_engine_class_instance hwe = data->hwe;
>> +	struct drm_xe_ext_set_property ext = {
>> +		.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
>> +		.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_EUDEBUG,
>> +		.value = DRM_XE_EXEC_QUEUE_EUDEBUG_FLAG_ENABLE,
>> +	};
>> +	struct drm_xe_exec_queue_create create = {
>> +		.instances = to_user_pointer(&hwe),
>> +		.width = 1,
>> +		.num_placements = 1,
>> +		.extensions = c->flags & DISABLE_DEBUG_MODE ? 0 : to_user_pointer(&ext)
>> +	};
>> +	struct dim_t w_dim = walker_dimensions(threads);
>> +	struct dim_t s_dim = surface_dimensions(threads);
>> +	struct timespec ts = { };
>> +	struct gpgpu_shader *sip, *shader;
>> +	uint32_t metadata_id[2];
>> +	uint64_t *metadata[2];
>> +	struct intel_bb *ibb;
>> +	struct intel_buf *buf;
>> +	uint32_t *ptr;
>> +	int fd;
>> +
>> +	metadata[0] = calloc(2, sizeof(*metadata));
>> +	metadata[1] = calloc(2, sizeof(*metadata));
>> +	igt_assert(metadata[0]);
>> +	igt_assert(metadata[1]);
>> +
>> +	fd = xe_eudebug_client_open_driver(c);
>> +	xe_device_get(fd);
>> +
>> +	/* Additional memory for steering control */
>> +	if (c->flags & SHADER_LOOP || c->flags & SHADER_SINGLE_STEP)
>> +		s_dim.y++;
>> +	/* Additional memory for caching check */
>> +	if ((c->flags & SHADER_CACHING_SRAM) || (c->flags & SHADER_CACHING_VRAM))
>> +		s_dim.y += caching_get_instruction_count(fd, s_dim.x, c->flags);
>> +	buf = create_uc_buf(fd, s_dim.x, s_dim.y);
>> +
>> +	buf->addr.offset = target_offset;
>> +
>> +	metadata[0][0] = bb_offset;
>> +	metadata[0][1] = bb_size;
>> +	metadata[1][0] = target_offset;
>> +	metadata[1][1] = buf->size;
>> +	metadata_id[0] = xe_eudebug_client_metadata_create(c, fd, DRM_XE_DEBUG_METADATA_ELF_BINARY,
>> +							   2 * sizeof(*metadata), metadata[0]);
>> +	metadata_id[1] = xe_eudebug_client_metadata_create(c, fd,
>> +							   DRM_XE_DEBUG_METADATA_PROGRAM_MODULE,
>> +							   2 * sizeof(*metadata), metadata[1]);
>> +
>> +	create.vm_id = xe_eudebug_client_vm_create(c, fd, DRM_XE_VM_CREATE_FLAG_LR_MODE, 0);
>> +	xe_eudebug_client_exec_queue_create(c, fd, &create);
>> +
>> +	ibb = xe_bb_create_on_offset(fd, create.exec_queue_id, create.vm_id,
>> +				     bb_offset, bb_size);
>> +	intel_bb_set_lr_mode(ibb, true);
>> +
>> +	sip = get_sip(fd, c->flags);
>> +	shader = get_shader(fd, c->flags);
>> +
>> +	igt_nsec_elapsed(&ts);
>> +	gpgpu_shader_exec(ibb, buf, w_dim.x, w_dim.y, shader, sip, 0, 0);
>> +
>> +	gpgpu_shader_destroy(sip);
>> +	gpgpu_shader_destroy(shader);
>> +
>> +	intel_bb_sync(ibb);
>> +
>> +	if (c->flags & TRIGGER_RECONNECT)
>> +		xe_eudebug_client_wait_stage(c, DEBUGGER_REATTACHED);
>> +	else
>> +		/* Make sure it wasn't the timeout. */
>> +		igt_assert(igt_nsec_elapsed(&ts) <
>> +			   XE_EUDEBUG_DEFAULT_TIMEOUT_MS / MSEC_PER_SEC * NSEC_PER_SEC);
>> +
>> +	if (!(c->flags & DO_NOT_EXPECT_CANARIES)) {
>> +		ptr = xe_bo_mmap_ext(fd, buf->handle, buf->size, PROT_READ);
>> +		data->threads_count = count_canaries_neq(ptr, w_dim, 0);
>> +		igt_assert_f(data->threads_count, "No canaries found, nothing executed?\n");
>> +
>> +		if ((c->flags & SHADER_BREAKPOINT || c->flags & TRIGGER_RESUME_SET_BP ||
>> +		     c->flags & SHADER_N_NOOP_BREAKPOINT) && !(c->flags & DISABLE_DEBUG_MODE)) {
>> +			uint32_t aip = ptr[0];
>> +
>> +			igt_assert_f(aip != SHADER_CANARY, "Workload executed but breakpoint not hit!\n");
>> +			igt_assert_eq(count_canaries_eq(ptr, w_dim, aip), data->threads_count);
>> +			igt_debug("Breakpoint hit in %d threads, AIP=0x%08x\n", data->threads_count, aip);
>> +		}
>> +
>> +		munmap(ptr, buf->size);
>> +	}
>> +
>> +	intel_bb_destroy(ibb);
>> +
>> +	xe_eudebug_client_exec_queue_destroy(c, fd, &create);
>> +	xe_eudebug_client_vm_destroy(c, fd, create.vm_id);
>> +
>> +	xe_eudebug_client_metadata_destroy(c, fd, metadata_id[0], DRM_XE_DEBUG_METADATA_ELF_BINARY,
>> +					   2 * sizeof(*metadata));
>> +	xe_eudebug_client_metadata_destroy(c, fd, metadata_id[1],
>> +					   DRM_XE_DEBUG_METADATA_PROGRAM_MODULE,
>> +					   2 * sizeof(*metadata));
>> +
>> +	xe_device_put(fd);
>> +	xe_eudebug_client_close_driver(c, fd);
>> +}
>> +
>> +static bool intel_gen_has_lockstep_eus(int fd)
>> +{
>> +	const uint32_t id = intel_get_drm_devid(fd);
>> +
>> +	/*
>> +	 * Lockstep (or, in some parlance, fused) EUs are pairs of EUs
>> +	 * that work in sync, supposedly with the same clock and the same
>> +	 * control flow. Thus for attentions, if the control flow hits a
>> +	 * breakpoint, both will be excepted into SIP. At this level, the
>> +	 * hardware has only one attention thread bit per pair. PVC is the
>> +	 * first platform without lockstepping.
>> +	 */
>> +	return !(intel_graphics_ver(id) == IP_VER(12, 60) || intel_gen(id) >= 20);
>> +}
>> +
>> +static int query_attention_bitmask_size(int fd, int gt)
>> +{
>> +	const unsigned int threads = 8;
>> +	struct drm_xe_query_topology_mask *c_dss = NULL, *g_dss = NULL, *eu_per_dss = NULL;
>> +	struct drm_xe_query_topology_mask *topology;
>> +	struct drm_xe_device_query query = {
>> +		.extensions = 0,
>> +		.query = DRM_XE_DEVICE_QUERY_GT_TOPOLOGY,
>> +		.size = 0,
>> +		.data = 0,
>> +	};
>> +	int pos = 0, eus;
>> +	uint8_t *any_dss;
>> +
>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>> +	igt_assert_neq(query.size, 0);
>> +
>> +	topology = malloc(query.size);
>> +	igt_assert(topology);
>> +
>> +	query.data = to_user_pointer(topology);
>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query), 0);
>> +
>> +	while (query.size >= sizeof(struct drm_xe_query_topology_mask)) {
>> +		struct drm_xe_query_topology_mask *topo;
>> +		int sz;
>> +
>> +		topo = (struct drm_xe_query_topology_mask *)((unsigned char *)topology + pos);
>> +		sz = sizeof(struct drm_xe_query_topology_mask) + topo->num_bytes;
>> +
>> +		query.size -= sz;
>> +		pos += sz;
>> +
>> +		if (topo->gt_id != gt)
>> +			continue;
>> +
>> +		if (topo->type == DRM_XE_TOPO_DSS_GEOMETRY)
>> +			g_dss = topo;
>> +		else if (topo->type == DRM_XE_TOPO_DSS_COMPUTE)
>> +			c_dss = topo;
>> +		else if (topo->type == DRM_XE_TOPO_EU_PER_DSS ||
>> +			 topo->type == DRM_XE_TOPO_SIMD16_EU_PER_DSS)
>> +			eu_per_dss = topo;
>> +	}
>> +
>> +	igt_assert(g_dss && c_dss && eu_per_dss);
>> +	igt_assert_eq_u32(c_dss->num_bytes, g_dss->num_bytes);
>> +
>> +	any_dss = malloc(c_dss->num_bytes);
>> +	igt_assert(any_dss);
>> +
>> +	for (int i = 0; i < c_dss->num_bytes; i++)
>> +		any_dss[i] = c_dss->mask[i] | g_dss->mask[i];
>> +
>> +	eus = count_set_bits(any_dss, c_dss->num_bytes);
>> +	eus *= count_set_bits(eu_per_dss->mask, eu_per_dss->num_bytes);
>> +
>> +	if (intel_gen_has_lockstep_eus(fd))
>> +		eus /= 2;
>> +
>> +	free(any_dss);
>> +	free(topology);
>> +
>> +	return eus * threads / 8;
>> +}
>> +
>> +static struct drm_xe_eudebug_event_exec_queue *
>> +match_attention_with_exec_queue(struct xe_eudebug_event_log *log,
>> +				struct drm_xe_eudebug_event_eu_attention *ea)
>> +{
>> +	struct drm_xe_eudebug_event_exec_queue *ee;
>> +	struct drm_xe_eudebug_event *event = NULL, *current = NULL, *matching_destroy = NULL;
>> +	int lrc_idx;
>> +
>> +	xe_eudebug_for_each_event(event, log) {
>> +		if (event->type == DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE &&
>> +		    event->flags == DRM_XE_EUDEBUG_EVENT_CREATE) {
>> +			ee = (struct drm_xe_eudebug_event_exec_queue *)event;
>> +
>> +			if (ee->exec_queue_handle != ea->exec_queue_handle)
>> +				continue;
>> +
>> +			if (ee->client_handle != ea->client_handle)
>> +				continue;
>> +
>> +			for (lrc_idx = 0; lrc_idx < ee->width; lrc_idx++) {
>> +				if (ee->lrc_handle[lrc_idx] == ea->lrc_handle)
>> +					break;
>> +			}
>> +
>> +			if (lrc_idx >= ee->width) {
>> +				igt_debug("No matching lrc handle within matching exec_queue!");
>> +				continue;
>> +			}
>> +
>> +			/* Event logs are sorted, so no later create can match either. */
>> +			if (ea->base.seqno < ee->base.seqno)
>> +				break;
>> +
>> +			/*
>> +			 * Sanity check that the attention did not arrive on an
>> +			 * already destroyed exec_queue.
>> +			 */
>> +			current = event;
>> +			xe_eudebug_for_each_event(current, log) {
>> +				if (current->type == DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE &&
>> +				    current->flags == DRM_XE_EUDEBUG_EVENT_DESTROY) {
>> +					uint8_t offset = sizeof(struct drm_xe_eudebug_event);
>> +
>> +					if (memcmp((uint8_t *)current + offset,
>> +						   (uint8_t *)event + offset,
>> +						   current->len - offset) == 0) {
>> +						matching_destroy = current;
>> +					}
>> +				}
>> +			}
>> +
>> +			if (!matching_destroy || ea->base.seqno > matching_destroy->seqno)
>> +				continue;
>> +
>> +			return ee;
>> +		}
>> +	}
>> +
>> +	return NULL;
>> +}
>> +
>> +static void online_session_check(struct xe_eudebug_session *s, int flags)
>> +{
>> +	struct drm_xe_eudebug_event_eu_attention *ea = NULL;
>> +	struct drm_xe_eudebug_event *event = NULL;
>> +	struct online_debug_data *data = s->c->ptr;
>> +	bool expect_exception = !(flags & DISABLE_DEBUG_MODE);
>> +	int sum = 0;
>> +	int bitmask_size;
>> +
>> +	xe_eudebug_session_check(s, true, XE_EUDEBUG_FILTER_EVENT_VM_BIND |
>> +					  XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP |
>> +					  XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE);
>> +
>> +	bitmask_size = query_attention_bitmask_size(s->d->master_fd, data->hwe.gt_id);
>> +
>> +	xe_eudebug_for_each_event(event, s->d->log) {
>> +		if (event->type == DRM_XE_EUDEBUG_EVENT_EU_ATTENTION) {
>> +			ea = (struct drm_xe_eudebug_event_eu_attention *)event;
>> +
>> +			igt_assert(event->flags == DRM_XE_EUDEBUG_EVENT_STATE_CHANGE);
>> +			igt_assert_eq(ea->bitmask_size, bitmask_size);
>> +			sum += count_set_bits(ea->bitmask, bitmask_size);
>> +			igt_assert(match_attention_with_exec_queue(s->d->log, ea));
>> +		}
>> +	}
>> +
>> +	/*
>> +	 * We can expect the attention bits to sum up to the thread count
>> +	 * only if a breakpoint is set and we always resume all threads.
>> +	 */
>> +	if (flags == SHADER_BREAKPOINT)
>> +		igt_assert_eq(sum, data->threads_count);
>> +
>> +	if (expect_exception)
>> +		igt_assert(sum > 0);
>> +	else
>> +		igt_assert(sum == 0);
>> +}
>> +
>> +static void ufence_ack_trigger(struct xe_eudebug_debugger *d,
>> +			       struct drm_xe_eudebug_event *e)
>> +{
>> +	struct drm_xe_eudebug_event_vm_bind_ufence *ef = (void *)e;
>> +
>> +	if (e->flags & DRM_XE_EUDEBUG_EVENT_CREATE)
>> +		xe_eudebug_ack_ufence(d->fd, ef);
>> +}
>> +
>> +/**
>> + * SUBTEST: basic-breakpoint
>> + * Description:
>> + *	Check whether KMD sends attention events
>> + *	for a workload in debug mode stopped on a breakpoint.
>> + *
>> + * SUBTEST: breakpoint-not-in-debug-mode
>> + * Description:
>> + *	Check whether KMD resets the GPU when it spots an attention
>> + *	coming from a workload not in debug mode.
>> + *
>> + * SUBTEST: stopped-thread
>> + * Description:
>> + *	Hits a breakpoint on a runalone workload and
>> + *	reads attention for a fixed time.
>> + *
>> + * SUBTEST: resume-%s
>> + * Description:
>> + *	Resumes a workload stopped on a breakpoint
>> + *	with the granularity of %arg[1].
>> + *
>> + *
>> + * arg[1]:
>> + *
>> + * @one:	one thread
>> + * @dss:	threads running on one subslice
>> + */
>> +static void test_basic_online(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
>> +{
>> +	struct xe_eudebug_session *s;
>> +	struct online_debug_data *data;
>> +
>> +	data = online_debug_data_create(hwe);
>> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
>> +
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_debug_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_resume_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>> +					ufence_ack_trigger);
>> +
>> +	xe_eudebug_session_run(s);
>> +	online_session_check(s, s->flags);
>> +
>> +	xe_eudebug_session_destroy(s);
>> +	online_debug_data_destroy(data);
>> +}
>> +
>> +/**
>> + * SUBTEST: preempt-breakpoint
>> + * Description:
>> + *	Verify that eu debugger disables preemption timeout to
>> + *	prevent reset of workload stopped on breakpoint.
>> + */
>> +static void test_preemption(int fd, struct drm_xe_engine_class_instance *hwe)
>> +{
>> +	int flags = SHADER_BREAKPOINT | TRIGGER_RESUME_DELAYED;
>> +	struct xe_eudebug_session *s;
>> +	struct online_debug_data *data;
>> +	struct xe_eudebug_client *other;
>> +
>> +	data = online_debug_data_create(hwe);
>> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
>> +	other = xe_eudebug_client_create(fd, run_online_client, SHADER_NOP, data);
>> +
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_debug_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_resume_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>> +					ufence_ack_trigger);
>> +
>> +	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
>> +	xe_eudebug_debugger_start_worker(s->d);
>> +
>> +	xe_eudebug_client_start(s->c);
>> +	sleep(1); /* make sure s->c starts first */
>> +	xe_eudebug_client_start(other);
>> +
>> +	xe_eudebug_client_wait_done(s->c);
>> +	xe_eudebug_client_wait_done(other);
>> +
>> +	xe_eudebug_debugger_stop_worker(s->d, 1);
>> +
>> +	xe_eudebug_session_destroy(s);
>> +	xe_eudebug_client_destroy(other);
>> +
>> +	igt_assert_f(data->last_eu_control_seqno != 0,
>> +		     "Workload with breakpoint has ended without resume!\n");
>> +
>> +	online_debug_data_destroy(data);
>> +}
>> +
>> +/**
>> + * SUBTEST: reset-with-attention
>> + * Description:
>> + *	Check whether GPU is usable after resetting with attention raised
>> + *	(stopped on breakpoint) by running the same workload again.
>> + */
>> +static void test_reset_with_attention_online(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
>> +{
>> +	struct xe_eudebug_session *s1, *s2;
>> +	struct online_debug_data *data;
>> +
>> +	data = online_debug_data_create(hwe);
>> +	s1 = xe_eudebug_session_create(fd, run_online_client, flags, data);
>> +
>> +	xe_eudebug_debugger_add_trigger(s1->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_reset_trigger);
>> +	xe_eudebug_debugger_add_trigger(s1->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>> +					ufence_ack_trigger);
>> +
>> +	xe_eudebug_session_run(s1);
>> +	xe_eudebug_session_destroy(s1);
>> +
>> +	s2 = xe_eudebug_session_create(fd, run_online_client, flags, data);
>> +	xe_eudebug_debugger_add_trigger(s2->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_resume_trigger);
>> +	xe_eudebug_debugger_add_trigger(s2->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>> +					ufence_ack_trigger);
>> +
>> +	xe_eudebug_session_run(s2);
>> +
>> +	online_session_check(s2, s2->flags);
>> +
>> +	xe_eudebug_session_destroy(s2);
>> +	online_debug_data_destroy(data);
>> +}
>> +
>> +/**
>> + * SUBTEST: interrupt-all
>> + * Description:
>> + *	Schedules an EU workload which should last a few seconds, then
>> + *	interrupts all threads, checks whether the attention event arrived,
>> + *	and resumes the stopped threads.
>> + *
>> + * SUBTEST: interrupt-all-set-breakpoint
>> + * Description:
>> + *	Schedules an EU workload which should last a few seconds, then
>> + *	interrupts all threads. Once the attention event arrives, it sets a
>> + *	breakpoint on the very next instruction and resumes the stopped
>> + *	threads. It expects every thread to hit the breakpoint.
>> + */
>> +static void test_interrupt_all(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
>> +{
>> +	struct xe_eudebug_session *s;
>> +	struct online_debug_data *data;
>> +	uint32_t val;
>> +
>> +	data = online_debug_data_create(hwe);
>> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
>> +
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
>> +					open_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
>> +					exec_queue_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_debug_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_resume_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
>> +					create_metadata_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>> +					ufence_ack_trigger);
>> +
>> +	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
>> +	xe_eudebug_debugger_start_worker(s->d);
>> +	xe_eudebug_client_start(s->c);
>> +
>> +	/* wait for workload to start */
>> +	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
>> +		/* collect needed data from triggers */
>> +		if (READ_ONCE(data->vm_fd) == -1 || READ_ONCE(data->target_size) == 0)
>> +			continue;
>> +
>> +		if (pread(data->vm_fd, &val, sizeof(val), data->target_offset) == sizeof(val))
>> +			if (val != 0)
>> +				break;
>> +	}
>> +
>> +	pthread_mutex_lock(&data->mutex);
>> +	igt_assert(data->client_handle != -1);
>> +	igt_assert(data->exec_queue_handle != -1);
>> +	eu_ctl_interrupt_all(s->d->fd, data->client_handle,
>> +			     data->exec_queue_handle, data->lrc_handle);
>> +	pthread_mutex_unlock(&data->mutex);
>> +
>> +	xe_eudebug_client_wait_done(s->c);
>> +
>> +	xe_eudebug_debugger_stop_worker(s->d, 1);
>> +
>> +	xe_eudebug_event_log_print(s->d->log, true);
>> +	xe_eudebug_event_log_print(s->c->log, true);
>> +
>> +	online_session_check(s, s->flags);
>> +
>> +	xe_eudebug_session_destroy(s);
>> +	online_debug_data_destroy(data);
>> +}
>> +
>> +static void reset_debugger_log(struct xe_eudebug_debugger *d)
>> +{
>> +	unsigned int max_size;
>> +	char log_name[80];
>> +
>> +	/* Don't pull the rug out from under an active debugger */
>> +	igt_assert(d->target_pid == 0);
>> +
>> +	max_size = d->log->max_size;
>> +	strncpy(log_name, d->log->name, sizeof(log_name) - 1);
>> +	log_name[sizeof(log_name) - 1] = '\0';
>> +	xe_eudebug_event_log_destroy(d->log);
>> +	d->log = xe_eudebug_event_log_create(log_name, max_size);
>> +}
>> +
>> +/**
>> + * SUBTEST: interrupt-other-debuggable
>> + * Description:
>> + *	Schedules an EU workload in runalone mode with a never-ending loop
>> + *	and, while it is not under debug, tries to interrupt all threads
>> + *	using a different client attached to the debugger.
>> + *
>> + * SUBTEST: interrupt-other
>> + * Description:
>> + *	Schedules an EU workload with a never-ending loop and, while it is
>> + *	not configured for debugging, tries to interrupt all threads using
>> + *	the client attached to the debugger.
>> + */
>> +static void test_interrupt_other(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
>> +{
>> +	struct online_debug_data *data;
>> +	struct online_debug_data *debugee_data;
>> +	struct xe_eudebug_session *s;
>> +	struct xe_eudebug_client *debugee;
>> +	int debugee_flags = SHADER_LOOP | DO_NOT_EXPECT_CANARIES;
>> +	uint32_t val = 0;
>> +
>> +	data = online_debug_data_create(hwe);
>> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
>> +
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN, open_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
>> +					exec_queue_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
>> +					create_metadata_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>> +					ufence_ack_trigger);
>> +
>> +	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
>> +	xe_eudebug_debugger_start_worker(s->d);
>> +	xe_eudebug_client_start(s->c);
>> +
>> +	/* wait for workload to start */
>> +	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
>> +		if (READ_ONCE(data->vm_fd) == -1 || READ_ONCE(data->target_size) == 0)
>> +			continue;
>> +
>> +		if (pread(data->vm_fd, &val, sizeof(val), data->target_offset) == sizeof(val))
>> +			if (val != 0)
>> +				break;
>> +	}
>> +	igt_assert_f(val != 0, "Workload execution has not started\n");
>> +
>> +	xe_eudebug_debugger_dettach(s->d);
>> +	reset_debugger_log(s->d);
>> +
>> +	debugee_data = online_debug_data_create(hwe);
>> +	s->d->ptr = debugee_data;
>> +	debugee = xe_eudebug_client_create(fd, run_online_client, debugee_flags, debugee_data);
>> +	igt_assert_eq(xe_eudebug_debugger_attach(s->d, debugee), 0);
>> +	xe_eudebug_client_start(debugee);
>> +
>> +	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
>> +		if (READ_ONCE(debugee_data->vm_fd) != -1 &&
>> +		    READ_ONCE(debugee_data->target_size) != 0)
>> +			break;
>> +	}
>> +
>> +	pthread_mutex_lock(&debugee_data->mutex);
>> +	igt_assert(debugee_data->client_handle != -1);
>> +	igt_assert(debugee_data->exec_queue_handle != -1);
>> +	/*
>> +	 * Interrupting the other client should return -EINVAL
>> +	 * as it is running in runalone mode.
>> +	 */
>> +	igt_assert_eq(__eu_ctl(s->d->fd, debugee_data->client_handle,
>> +		       debugee_data->exec_queue_handle, debugee_data->lrc_handle,
>> +		       NULL, 0, DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL, NULL), -EINVAL);
>> +	pthread_mutex_unlock(&debugee_data->mutex);
>> +
>> +	xe_force_gt_reset_async(s->d->master_fd, debugee_data->hwe.gt_id);
>> +
>> +	xe_eudebug_client_wait_done(debugee);
>> +	xe_eudebug_debugger_stop_worker(s->d, 1);
>> +
>> +	xe_eudebug_event_log_print(s->d->log, true);
>> +	xe_eudebug_event_log_print(debugee->log, true);
>> +
>> +	xe_eudebug_session_check(s, true, XE_EUDEBUG_FILTER_EVENT_VM_BIND |
>> +				 XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP |
>> +				 XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE);
>> +
>> +	xe_eudebug_client_destroy(debugee);
>> +	xe_eudebug_session_destroy(s);
>> +	online_debug_data_destroy(data);
>> +	online_debug_data_destroy(debugee_data);
>> +}
>> +
>> +/**
>> + * SUBTEST: tdctl-parameters
>> + * Description:
>> + *	Schedules an EU workload which should last a few seconds, then
>> + *	checks negative scenarios of EU_THREADS ioctl usage, interrupts all
>> + *	threads, checks whether the attention event arrived, and resumes the
>> + *	stopped threads.
>> + */
>> +static void test_tdctl_parameters(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
>> +{
>> +	struct xe_eudebug_session *s;
>> +	struct online_debug_data *data;
>> +	uint32_t val;
>> +	uint32_t random_command;
>> +	uint32_t bitmask_size = query_attention_bitmask_size(fd, hwe->gt_id);
>> +	uint8_t *attention_bitmask = malloc(bitmask_size * sizeof(uint8_t));
>> +	igt_assert(attention_bitmask);
>> +
>> +	data = online_debug_data_create(hwe);
>> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
>> +
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
>> +					open_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
>> +					exec_queue_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_debug_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_resume_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
>> +					create_metadata_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>> +					ufence_ack_trigger);
>> +
>> +	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
>> +	xe_eudebug_debugger_start_worker(s->d);
>> +	xe_eudebug_client_start(s->c);
>> +
>> +	/* wait for workload to start */
>> +	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
>> +		/* collect needed data from triggers */
>> +		if (READ_ONCE(data->vm_fd) == -1 || READ_ONCE(data->target_size) == 0)
>> +			continue;
>> +
>> +		if (pread(data->vm_fd, &val, sizeof(val), data->target_offset) == sizeof(val))
>> +			if (val != 0)
>> +				break;
>> +	}
>> +
>> +	pthread_mutex_lock(&data->mutex);
>> +	igt_assert(data->client_handle != -1);
>> +	igt_assert(data->exec_queue_handle != -1);
>> +	igt_assert(data->lrc_handle != -1);
>> +
>> +	/* fail on invalid lrc_handle */
>> +	igt_assert(__eu_ctl(s->d->fd, data->client_handle,
>> +			    data->exec_queue_handle, data->lrc_handle + 1,
>> +			    attention_bitmask, &bitmask_size,
>> +			    DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL, NULL) == -EINVAL);
>> +
>> +	/* fail on invalid exec_queue_handle */
>> +	igt_assert(__eu_ctl(s->d->fd, data->client_handle,
>> +			    data->exec_queue_handle + 1, data->lrc_handle,
>> +			    attention_bitmask, &bitmask_size,
>> +			    DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL, NULL) == -EINVAL);
>> +
>> +	/* fail on invalid client */
>> +	igt_assert(__eu_ctl(s->d->fd, data->client_handle + 1,
>> +			    data->exec_queue_handle, data->lrc_handle,
>> +			    attention_bitmask, &bitmask_size,
>> +			    DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL, NULL) == -EINVAL);
>> +
>> +	/*
>> +	 * bitmask size must be aligned to sizeof(u32) for all commands
>> +	 * and be zero for interrupt all
>> +	 */
>> +	bitmask_size = sizeof(uint32_t) - 1;
>> +	igt_assert(__eu_ctl(s->d->fd, data->client_handle,
>> +			    data->exec_queue_handle, data->lrc_handle,
>> +			    attention_bitmask, &bitmask_size,
>> +			    DRM_XE_EUDEBUG_EU_CONTROL_CMD_STOPPED, NULL) == -EINVAL);
>> +	bitmask_size = 0;
>> +
>> +	/* fail on invalid command */
>> +	random_command = random() | (DRM_XE_EUDEBUG_EU_CONTROL_CMD_RESUME + 1);
>> +	igt_assert(__eu_ctl(s->d->fd, data->client_handle,
>> +			    data->exec_queue_handle, data->lrc_handle,
>> +			    attention_bitmask, &bitmask_size, random_command, NULL) == -EINVAL);
>> +
>> +	free(attention_bitmask);
>> +
>> +	eu_ctl_interrupt_all(s->d->fd, data->client_handle,
>> +			     data->exec_queue_handle, data->lrc_handle);
>> +	pthread_mutex_unlock(&data->mutex);
>> +
>> +	xe_eudebug_client_wait_done(s->c);
>> +
>> +	xe_eudebug_debugger_stop_worker(s->d, 1);
>> +
>> +	xe_eudebug_event_log_print(s->d->log, true);
>> +	xe_eudebug_event_log_print(s->c->log, true);
>> +
>> +	online_session_check(s, s->flags);
>> +
>> +	xe_eudebug_session_destroy(s);
>> +	online_debug_data_destroy(data);
>> +}
>> +
>> +static void eu_attention_debugger_detach_trigger(struct xe_eudebug_debugger *d,
>> +						 struct drm_xe_eudebug_event *event)
>> +{
>> +	struct online_debug_data *data = d->ptr;
>> +	uint64_t c_pid;
>> +	int ret;
>> +
>> +	c_pid = d->target_pid;
>> +
>> +	/* Reset VM data so the re-triggered VM open handler works properly */
>> +	data->vm_fd = -1;
>> +
>> +	xe_eudebug_debugger_dettach(d);
>> +
>> +	/* Let the KMD scan function notice unhandled EU attention */
>> +	if (!(d->flags & SHADER_N_NOOP_BREAKPOINT))
>> +		sleep(1);
>> +
>> +	/*
>> +	 * A new session created by the EU debugger on reconnect restarts
>> +	 * seqnos, causing issues with log sorting. To avoid that, create
>> +	 * a new event log.
>> +	 */
>> +	reset_debugger_log(d);
>> +
>> +	ret = xe_eudebug_connect(d->master_fd, c_pid, 0);
>> +	igt_assert(ret >= 0);
>> +	d->fd = ret;
>> +	d->target_pid = c_pid;
>> +
>> +	/* Let the discovery worker discover resources */
>> +	sleep(2);
>> +
>> +	if (!(d->flags & SHADER_N_NOOP_BREAKPOINT))
>> +		xe_eudebug_debugger_signal_stage(d, DEBUGGER_REATTACHED);
>> +}
>> +
>> +/**
>> + * SUBTEST: interrupt-reconnect
>> + * Description:
>> + *	Schedules an EU workload which should last a few seconds, then
>> + *	interrupts all threads and detaches the debugger when attention is
>> + *	raised. The test checks that KMD resets the workload when no
>> + *	debugger is attached and replays the events on discovery.
>> + */
>> +static void test_interrupt_reconnect(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
>> +{
>> +	struct drm_xe_eudebug_event *e = NULL;
>> +	struct online_debug_data *data;
>> +	struct xe_eudebug_session *s;
>> +	uint32_t val;
>> +
>> +	data = online_debug_data_create(hwe);
>> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
>> +
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
>> +					open_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
>> +					exec_queue_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_debug_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_debugger_detach_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
>> +					create_metadata_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>> +					ufence_ack_trigger);
>> +
>> +	igt_assert_eq(xe_eudebug_debugger_attach(s->d, s->c), 0);
>> +	xe_eudebug_debugger_start_worker(s->d);
>> +	xe_eudebug_client_start(s->c);
>> +
>> +	/* wait for workload to start */
>> +	igt_for_milliseconds(STARTUP_TIMEOUT_MS) {
>> +		/* collect needed data from triggers */
>> +		if (READ_ONCE(data->vm_fd) == -1 || READ_ONCE(data->target_size) == 0)
>> +			continue;
>> +
>> +		if (pread(data->vm_fd, &val, sizeof(val), data->target_offset) == sizeof(val))
>> +			if (val != 0)
>> +				break;
>> +	}
>> +
>> +	pthread_mutex_lock(&data->mutex);
>> +	igt_assert(data->client_handle != -1);
>> +	igt_assert(data->exec_queue_handle != -1);
>> +	eu_ctl_interrupt_all(s->d->fd, data->client_handle,
>> +			     data->exec_queue_handle, data->lrc_handle);
>> +	pthread_mutex_unlock(&data->mutex);
>> +
>> +	xe_eudebug_client_wait_done(s->c);
>> +
>> +	xe_eudebug_debugger_stop_worker(s->d, 1);
>> +
>> +	xe_eudebug_event_log_print(s->d->log, true);
>> +	xe_eudebug_event_log_print(s->c->log, true);
>> +
>> +	xe_eudebug_session_check(s, true, XE_EUDEBUG_FILTER_EVENT_VM_BIND |
>> +					  XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP |
>> +					  XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE);
>> +
>> +	/* We expect workload reset, so no attention should be raised */
>> +	xe_eudebug_for_each_event(e, s->d->log)
>> +		igt_assert(e->type != DRM_XE_EUDEBUG_EVENT_EU_ATTENTION);
>> +
>> +	xe_eudebug_session_destroy(s);
>> +	online_debug_data_destroy(data);
>> +}
>> +
>> +/**
>> + * SUBTEST: single-step
>> + * Description:
>> + *	Schedules EU workload with 16 nops after breakpoint, then single-steps
>> + *	through the shader, advances all threads each step, checking if all
>> + *	threads advanced every step.
>> + *
>> + * SUBTEST: single-step-one
>> + * Description:
>> + *	Schedules EU workload with 16 nops after breakpoint, then single-steps
>> + *	through the shader, advances one thread each step, checking if one
>> + *	thread advanced every step. Due to the time constraint, only the
>> + *	first two shader instructions after the breakpoint are validated.
>> + */
>> +static void test_single_step(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
>> +{
>> +	struct xe_eudebug_session *s;
>> +	struct online_debug_data *data;
>> +
>> +	data = online_debug_data_create(hwe);
>> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
>> +
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
>> +					open_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_debug_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_resume_single_step_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
>> +					create_metadata_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>> +					ufence_ack_trigger);
>> +
>> +	xe_eudebug_session_run(s);
>> +	online_session_check(s, s->flags);
>> +	xe_eudebug_session_destroy(s);
>> +	online_debug_data_destroy(data);
>> +}
>> +
>> +static void eu_attention_debugger_ndetach_trigger(struct xe_eudebug_debugger *d,
>> +						 struct drm_xe_eudebug_event *event)
>> +{
>> +	static int debugger_detach_count;
>> +
>> +	if (debugger_detach_count < (SHADER_LOOP_N - 1)) {
>> +		/* Make sure the resume command has been issued before detaching the debugger */
>> +		if (!is_client_resumed)
>> +			return;
> Not sure what we are trying to achieve here. I think triggers are executed in the same order as
> added, so this should be ensured. But anyway, I think we could just resue data-
>> last_eu_control_seqno. If its greater than seqno of this event we are free to reattach.

Makes sense, thanks - will change it in the next version.

Christoph

> 
> Regards,
> Dominik
> 
>> +		eu_attention_debugger_detach_trigger(d, event);
>> +		debugger_detach_count++;
>> +	} else {
>> +		igt_debug("Reached Nth breakpoint, preventing the debugger detach\n");
>> +	}
>> +	is_client_resumed = 0;
>> +}
>> +
>> +/**
>> + * SUBTEST: debugger-reopen
>> + * Description:
>> + *	Check whether the debugger is able to reopen the connection and
>> + *	capture the events of already running client.
>> + */
>> +static void test_debugger_reopen(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
>> +{
>> +	struct xe_eudebug_session *s;
>> +	struct online_debug_data *data;
>> +
>> +	data = online_debug_data_create(hwe);
>> +
>> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
>> +
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_debug_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_resume_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_debugger_ndetach_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>> +					ufence_ack_trigger);
>> +
>> +	xe_eudebug_session_run(s);
>> +
>> +	xe_eudebug_session_destroy(s);
>> +	online_debug_data_destroy(data);
>> +}
>> +
>> +/**
>> + * SUBTEST: writes-caching-%s
>> + * Description:
>> + *	Write incrementing values to a 2-page-long target surface, poisoning the data one breakpoint
>> + *	before each write instruction and restoring it when the poisoned instruction breakpoint
>> + *	is hit. Expect to never see poison values in the target surface.
>> + *
>> + * arg[1]:
>> + *
>> + * @sram:	Use page size of SRAM
>> + * @vram:	Use page size of VRAM
>> + */
>> +static void test_caching(int fd, struct drm_xe_engine_class_instance *hwe, int flags)
>> +{
>> +	struct xe_eudebug_session *s;
>> +	struct online_debug_data *data;
>> +
>> +	if (flags & SHADER_CACHING_VRAM)
>> +		igt_skip_on_f(!xe_has_vram(fd), "Device does not have VRAM.\n");
>> +
>> +	data = online_debug_data_create(hwe);
>> +	s = xe_eudebug_session_create(fd, run_online_client, flags, data);
>> +
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_OPEN,
>> +					open_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_debug_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +					eu_attention_resume_caching_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM, vm_open_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_METADATA,
>> +					create_metadata_trigger);
>> +	xe_eudebug_debugger_add_trigger(s->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>> +					ufence_ack_trigger);
>> +
>> +	xe_eudebug_session_run(s);
>> +	online_session_check(s, s->flags);
>> +	xe_eudebug_session_destroy(s);
>> +	online_debug_data_destroy(data);
>> +}
>> +
>> +static int wait_for_exception(struct online_debug_data *data, int timeout)
>> +{
>> +	int ret = -ETIMEDOUT;
>> +
>> +	igt_for_milliseconds(timeout) {
>> +		pthread_mutex_lock(&data->mutex);
>> +		if ((data->exception_arrived.tv_sec |
>> +		     data->exception_arrived.tv_nsec) != 0)
>> +			ret = 0;
>> +		pthread_mutex_unlock(&data->mutex);
>> +
>> +		if (!ret)
>> +			break;
>> +		usleep(1000);
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +#define is_compute_on_gt(__e, __gt) ((__e->engine_class == DRM_XE_ENGINE_CLASS_RENDER || \
>> +				      __e->engine_class == DRM_XE_ENGINE_CLASS_COMPUTE) && \
>> +				      __e->gt_id == __gt)
>> +
>> +struct xe_engine_list_entry {
>> +	struct igt_list_head link;
>> +	struct drm_xe_engine_class_instance *hwe;
>> +};
>> +
>> +#define MAX_TILES	2
>> +static int find_suitable_engines(struct drm_xe_engine_class_instance *hwes[GEM_MAX_ENGINES],
>> +				 int fd, bool many_tiles)
>> +{
>> +	struct xe_device *xe_dev;
>> +	struct drm_xe_engine_class_instance *e;
>> +	struct xe_engine_list_entry *en, *tmp;
>> +	struct igt_list_head compute_engines[MAX_TILES];
>> +	int gt_id;
>> +	int tile_id, i, engine_count = 0, tile_count = 0;
>> +
>> +	xe_dev = xe_device_get(fd);
>> +
>> +	for (i = 0; i < MAX_TILES; i++)
>> +		IGT_INIT_LIST_HEAD(&compute_engines[i]);
>> +
>> +	xe_for_each_gt(fd, gt_id) {
>> +		xe_for_each_engine(fd, e) {
>> +			if (is_compute_on_gt(e, gt_id)) {
>> +				tile_id = xe_dev->gt_list->gt_list[gt_id].tile_id;
>> +
>> +				en = malloc(sizeof(struct xe_engine_list_entry));
>> +				en->hwe = e;
>> +
>> +				igt_list_add_tail(&en->link, &compute_engines[tile_id]);
>> +			}
>> +		}
>> +	}
>> +
>> +	for (i = 0; i < MAX_TILES; i++) {
>> +		if (igt_list_empty(&compute_engines[i]))
>> +			continue;
>> +
>> +		if (many_tiles) {
>> +			en = igt_list_first_entry(&compute_engines[i], en, link);
>> +			hwes[engine_count++] = en->hwe;
>> +			tile_count++;
>> +		} else {
>> +			if (igt_list_length(&compute_engines[i]) > 1) {
>> +				igt_list_for_each_entry(en, &compute_engines[i], link)
>> +					hwes[engine_count++] = en->hwe;
>> +				break;
>> +			}
>> +		}
>> +	}
>> +
>> +	for (i = 0; i < MAX_TILES; i++) {
>> +		igt_list_for_each_entry_safe(en, tmp, &compute_engines[i], link) {
>> +			igt_list_del(&en->link);
>> +			free(en);
>> +		}
>> +	}
>> +
>> +	if (many_tiles)
>> +		igt_require_f(tile_count > 1, "Multi-tile scenario requires more than one tile\n");
>> +
>> +	return engine_count;
>> +}
>> +
>> +/**
>> + * SUBTEST: breakpoint-many-sessions-single-tile
>> + * Description:
>> + *	Schedules an EU workload with a preinstalled breakpoint on every compute engine
>> + *	available on the tile. Checks if the contexts hit the breakpoint in sequence
>> + *	and resumes them.
>> + *
>> + * SUBTEST: breakpoint-many-sessions-tiles
>> + * Description:
>> + *	Schedules an EU workload with a preinstalled breakpoint on selected compute
>> + *	engines, one per tile. Checks if each context hit the breakpoint and
>> + *	resumes them.
>> + */
>> +static void test_many_sessions_on_tiles(int fd, bool multi_tile)
>> +{
>> +	int n = 0, flags = SHADER_BREAKPOINT | SHADER_MIN_THREADS;
>> +	struct xe_eudebug_session *s[GEM_MAX_ENGINES] = {};
>> +	struct online_debug_data *data[GEM_MAX_ENGINES] = {};
>> +	struct drm_xe_engine_class_instance *hwe[GEM_MAX_ENGINES] = {};
>> +	struct drm_xe_eudebug_event_eu_attention *eus;
>> +	uint64_t current_t, next_t, diff;
>> +	int i;
>> +
>> +	n = find_suitable_engines(hwe, fd, multi_tile);
>> +
>> +	igt_require_f(n > 1, "Test requires at least two parallel compute engines!\n");
>> +
>> +	for (i = 0; i < n; i++) {
>> +		data[i] = online_debug_data_create(hwe[i]);
>> +		s[i] = xe_eudebug_session_create(fd, run_online_client, flags, data[i]);
>> +
>> +		xe_eudebug_debugger_add_trigger(s[i]->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +						eu_attention_debug_trigger);
>> +		xe_eudebug_debugger_add_trigger(s[i]->d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION,
>> +						save_first_exception_trigger);
>> +		xe_eudebug_debugger_add_trigger(s[i]->d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>> +						ufence_ack_trigger);
>> +
>> +		igt_assert_eq(xe_eudebug_debugger_attach(s[i]->d, s[i]->c), 0);
>> +
>> +		xe_eudebug_debugger_start_worker(s[i]->d);
>> +		xe_eudebug_client_start(s[i]->c);
>> +	}
>> +
>> +	for (i = 0; i < n; i++) {
>> +		/* XXX: Sometimes racy, expects clients to execute in sequence */
>> +		igt_assert(!wait_for_exception(data[i], STARTUP_TIMEOUT_MS));
>> +
>> +		eus = (struct drm_xe_eudebug_event_eu_attention *)data[i]->exception_event;
>> +
>> +		/* Delay all but the last workload to check serialization */
>> +		if (i < n - 1)
>> +			usleep(WORKLOAD_DELAY_US);
>> +
>> +		eu_ctl_resume(s[i]->d->master_fd, s[i]->d->fd,
>> +			      eus->client_handle, eus->exec_queue_handle,
>> +			      eus->lrc_handle, eus->bitmask, eus->bitmask_size);
>> +		free(eus);
>> +	}
>> +
>> +	for (i = 0; i < n - 1; i++) {
>> +		/* Convert timestamps (sec + nsec) to microseconds */
>> +		current_t = data[i]->exception_arrived.tv_sec * 1000000ULL +
>> +			    data[i]->exception_arrived.tv_nsec / 1000;
>> +		next_t = data[i + 1]->exception_arrived.tv_sec * 1000000ULL +
>> +			 data[i + 1]->exception_arrived.tv_nsec / 1000;
>> +		diff = current_t < next_t ? next_t - current_t : current_t - next_t;
>> +
>> +		if (multi_tile)
>> +			igt_assert_f(diff < WORKLOAD_DELAY_US,
>> +				     "Expected to execute workloads concurrently. Actual delay: %lu us\n",
>> +				     diff);
>> +		else
>> +			igt_assert_f(diff >= WORKLOAD_DELAY_US,
>> +				     "Expected a serialization of workloads. Actual delay: %lu us\n",
>> +				     diff);
>> +	}
>> +
>> +	for (i = 0; i < n; i++) {
>> +		xe_eudebug_client_wait_done(s[i]->c);
>> +		xe_eudebug_debugger_stop_worker(s[i]->d, 1);
>> +
>> +		xe_eudebug_event_log_print(s[i]->d->log, true);
>> +		online_session_check(s[i], flags);
>> +
>> +		xe_eudebug_session_destroy(s[i]);
>> +		online_debug_data_destroy(data[i]);
>> +	}
>> +}
>> +
>> +static struct drm_xe_engine_class_instance *pick_compute(int fd, int gt)
>> +{
>> +	struct drm_xe_engine_class_instance *hwe;
>> +	int count = 0;
>> +
>> +	xe_for_each_engine(fd, hwe)
>> +		if (is_compute_on_gt(hwe, gt))
>> +			count++;
>> +
>> +	xe_for_each_engine(fd, hwe)
>> +		if (is_compute_on_gt(hwe, gt) && rand() % count-- == 0)
>> +			return hwe;
>> +
>> +	return NULL;
>> +}
>> +
>> +#define test_gt_render_or_compute(t, i915, __hwe) \
>> +	igt_subtest_with_dynamic(t) \
>> +		for (int gt = 0; (__hwe = pick_compute(i915, gt)); gt++) \
>> +			igt_dynamic_f("%s%d", xe_engine_class_string(__hwe->engine_class), __hwe->engine_instance)
>> +
>> +igt_main
>> +{
>> +	struct drm_xe_engine_class_instance *hwe;
>> +	bool was_enabled;
>> +	int fd;
>> +
>> +	igt_fixture {
>> +		fd = drm_open_driver(DRIVER_XE);
>> +		intel_allocator_multiprocess_start();
>> +		igt_srandom();
>> +		was_enabled = xe_eudebug_enable(fd, true);
>> +	}
>> +
>> +	test_gt_render_or_compute("basic-breakpoint", fd, hwe)
>> +		test_basic_online(fd, hwe, SHADER_BREAKPOINT);
>> +
>> +	test_gt_render_or_compute("preempt-breakpoint", fd, hwe)
>> +		test_preemption(fd, hwe);
>> +
>> +	test_gt_render_or_compute("breakpoint-not-in-debug-mode", fd, hwe)
>> +		test_basic_online(fd, hwe, SHADER_BREAKPOINT | DISABLE_DEBUG_MODE);
>> +
>> +	test_gt_render_or_compute("stopped-thread", fd, hwe)
>> +		test_basic_online(fd, hwe, SHADER_BREAKPOINT | TRIGGER_RESUME_DELAYED);
>> +
>> +	test_gt_render_or_compute("resume-one", fd, hwe)
>> +		test_basic_online(fd, hwe, SHADER_BREAKPOINT | TRIGGER_RESUME_ONE);
>> +
>> +	test_gt_render_or_compute("resume-dss", fd, hwe)
>> +		test_basic_online(fd, hwe, SHADER_BREAKPOINT | TRIGGER_RESUME_DSS);
>> +
>> +	test_gt_render_or_compute("interrupt-all", fd, hwe)
>> +		test_interrupt_all(fd, hwe, SHADER_LOOP);
>> +
>> +	test_gt_render_or_compute("interrupt-other-debuggable", fd, hwe)
>> +		test_interrupt_other(fd, hwe, SHADER_LOOP);
>> +
>> +	test_gt_render_or_compute("interrupt-other", fd, hwe)
>> +		test_interrupt_other(fd, hwe, SHADER_LOOP | DISABLE_DEBUG_MODE);
>> +
>> +	test_gt_render_or_compute("interrupt-all-set-breakpoint", fd, hwe)
>> +		test_interrupt_all(fd, hwe, SHADER_LOOP | TRIGGER_RESUME_SET_BP);
>> +
>> +	test_gt_render_or_compute("tdctl-parameters", fd, hwe)
>> +		test_tdctl_parameters(fd, hwe, SHADER_LOOP);
>> +
>> +	test_gt_render_or_compute("reset-with-attention", fd, hwe)
>> +		test_reset_with_attention_online(fd, hwe, SHADER_BREAKPOINT);
>> +
>> +	test_gt_render_or_compute("interrupt-reconnect", fd, hwe)
>> +		test_interrupt_reconnect(fd, hwe, SHADER_LOOP | TRIGGER_RECONNECT);
>> +
>> +	test_gt_render_or_compute("single-step", fd, hwe)
>> +		test_single_step(fd, hwe, SHADER_SINGLE_STEP | SIP_SINGLE_STEP |
>> +				 TRIGGER_RESUME_PARALLEL_WALK);
>> +
>> +	test_gt_render_or_compute("single-step-one", fd, hwe)
>> +		test_single_step(fd, hwe, SHADER_SINGLE_STEP | SIP_SINGLE_STEP |
>> +				 TRIGGER_RESUME_SINGLE_WALK);
>> +
>> +	test_gt_render_or_compute("debugger-reopen", fd, hwe)
>> +		test_debugger_reopen(fd, hwe, SHADER_N_NOOP_BREAKPOINT);
>> +
>> +	test_gt_render_or_compute("writes-caching-sram", fd, hwe)
>> +		test_caching(fd, hwe, SHADER_CACHING_SRAM);
>> +
>> +	test_gt_render_or_compute("writes-caching-vram", fd, hwe)
>> +		test_caching(fd, hwe, SHADER_CACHING_VRAM);
>> +
>> +	igt_subtest("breakpoint-many-sessions-single-tile")
>> +		test_many_sessions_on_tiles(fd, false);
>> +
>> +	igt_subtest("breakpoint-many-sessions-tiles")
>> +		test_many_sessions_on_tiles(fd, true);
>> +
>> +	igt_fixture {
>> +		xe_eudebug_enable(fd, was_enabled);
>> +
>> +		intel_allocator_multiprocess_stop();
>> +		drm_close_driver(fd);
>> +	}
>> +}
>> diff --git a/tests/meson.build b/tests/meson.build
>> index 35bf8ed35..f18eec7e7 100644
>> --- a/tests/meson.build
>> +++ b/tests/meson.build
>> @@ -280,6 +280,7 @@ intel_xe_progs = [
>>   	'xe_debugfs',
>>   	'xe_drm_fdinfo',
>>   	'xe_eudebug',
>> +	'xe_eudebug_online',
>>   	'xe_evict',
>>   	'xe_evict_ccs',
>>   	'xe_exec_atomic',
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 02/14] drm-uapi/xe: Sync with eudebug uapi
  2024-08-09 12:38 ` [PATCH i-g-t v3 02/14] drm-uapi/xe: Sync with eudebug uapi Christoph Manszewski
@ 2024-08-20  7:52   ` Zbigniew Kempczyński
  0 siblings, 0 replies; 41+ messages in thread
From: Zbigniew Kempczyński @ 2024-08-20  7:52 UTC (permalink / raw)
  To: Christoph Manszewski
  Cc: igt-dev, Kamil Konieczny, Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun, Mika Kuoppala

On Fri, Aug 09, 2024 at 02:38:01PM +0200, Christoph Manszewski wrote:
> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> 
> Align with kernel commit 09411c6ecbef ("drm/xe/eudebug: Add debug
> metadata support for xe_eudebug") from:
> 
> https://gitlab.freedesktop.org/miku/kernel.git
> 
> which introduces most recent changes to the eudebug uapi.
> 
> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
> ---
>  include/drm-uapi/xe_drm.h         |  96 ++++++++++++-
>  include/drm-uapi/xe_drm_eudebug.h | 225 ++++++++++++++++++++++++++++++
>  2 files changed, 319 insertions(+), 2 deletions(-)
>  create mode 100644 include/drm-uapi/xe_drm_eudebug.h
> 
> diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
> index f0a450db9..01ffbb8cb 100644
> --- a/include/drm-uapi/xe_drm.h
> +++ b/include/drm-uapi/xe_drm.h
> @@ -102,7 +102,9 @@ extern "C" {
>  #define DRM_XE_EXEC			0x09
>  #define DRM_XE_WAIT_USER_FENCE		0x0a
>  #define DRM_XE_OBSERVATION		0x0b
> -
> +#define DRM_XE_EUDEBUG_CONNECT		0x0c
> +#define DRM_XE_DEBUG_METADATA_CREATE	0x0d
> +#define DRM_XE_DEBUG_METADATA_DESTROY	0x0e
>  /* Must be kept compact -- no holes */
>  
>  #define DRM_IOCTL_XE_DEVICE_QUERY		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_DEVICE_QUERY, struct drm_xe_device_query)
> @@ -117,6 +119,9 @@ extern "C" {
>  #define DRM_IOCTL_XE_EXEC			DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
>  #define DRM_IOCTL_XE_WAIT_USER_FENCE		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
>  #define DRM_IOCTL_XE_OBSERVATION		DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> +#define DRM_IOCTL_XE_EUDEBUG_CONNECT		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_EUDEBUG_CONNECT, struct drm_xe_eudebug_connect)
> +#define DRM_IOCTL_XE_DEBUG_METADATA_CREATE	 DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_DEBUG_METADATA_CREATE, struct drm_xe_debug_metadata_create)
> +#define DRM_IOCTL_XE_DEBUG_METADATA_DESTROY	 DRM_IOW(DRM_COMMAND_BASE + DRM_XE_DEBUG_METADATA_DESTROY, struct drm_xe_debug_metadata_destroy)
>  
>  /**
>   * DOC: Xe IOCTL Extensions
> @@ -881,6 +886,23 @@ struct drm_xe_vm_destroy {
>  	__u64 reserved[2];
>  };
>  
> +struct drm_xe_vm_bind_op_ext_attach_debug {
> +	/** @base: base user extension */
> +	struct drm_xe_user_extension base;
> +
> +	/** @id: Debug object id from create metadata */
> +	__u64 metadata_id;
> +
> +	/** @flags: Flags */
> +	__u64 flags;
> +
> +	/** @cookie: Cookie */
> +	__u64 cookie;
> +
> +	/** @reserved: Reserved */
> +	__u64 reserved;
> +};
> +
>  /**
>   * struct drm_xe_vm_bind_op - run bind operations
>   *
> @@ -905,7 +927,9 @@ struct drm_xe_vm_destroy {
>   *    handle MBZ, and the BO offset MBZ. This flag is intended to
>   *    implement VK sparse bindings.
>   */
> +
>  struct drm_xe_vm_bind_op {
> +#define XE_VM_BIND_OP_EXTENSIONS_ATTACH_DEBUG 0
>  	/** @extensions: Pointer to the first extension struct, if any */
>  	__u64 extensions;
>  
> @@ -1108,7 +1132,8 @@ struct drm_xe_exec_queue_create {
>  #define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY		0
>  #define   DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY		0
>  #define   DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE		1
> -
> +#define   DRM_XE_EXEC_QUEUE_SET_PROPERTY_EUDEBUG		2
> +#define     DRM_XE_EXEC_QUEUE_EUDEBUG_FLAG_ENABLE		(1 << 0)
>  	/** @extensions: Pointer to the first extension struct, if any */
>  	__u64 extensions;
>  
> @@ -1694,6 +1719,73 @@ struct drm_xe_oa_stream_info {
>  	__u64 reserved[3];
>  };
>  
> +/*
> + * Debugger ABI (ioctl and events) Version History:
> + * 0 - No debugger available
> + * 1 - Initial version
> + */
> +#define DRM_XE_EUDEBUG_VERSION 1
> +
> +struct drm_xe_eudebug_connect {
> +	/** @extensions: Pointer to the first extension struct, if any */
> +	__u64 extensions;
> +
> +	__u64 pid; /* input: Target process ID */
> +	__u32 flags; /* MBZ */
> +
> +	__u32 version; /* output: current ABI (ioctl / events) version */
> +};
> +
> +/*
> + * struct drm_xe_debug_metadata_create - Create debug metadata
> + *
> + * Add a region of user memory to be marked as debug metadata.
> + * When the debugger attaches, the metadata regions will be delivered
> + * to the debugger. The debugger can then map these regions to help
> + * decode the program state.
> + *
> + * Returns a handle to the created metadata entry.
> + */
> +struct drm_xe_debug_metadata_create {
> +	/** @extensions: Pointer to the first extension struct, if any */
> +	__u64 extensions;
> +
> +#define DRM_XE_DEBUG_METADATA_ELF_BINARY     0
> +#define DRM_XE_DEBUG_METADATA_PROGRAM_MODULE 1
> +#define WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_MODULE_AREA 2
> +#define WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_SBA_AREA 3
> +#define WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_SIP_AREA 4
> +#define WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_NUM (1 + \
> +	  WORK_IN_PROGRESS_DRM_XE_DEBUG_METADATA_SIP_AREA)
> +
> +	/** @type: Type of metadata */
> +	__u64 type;
> +
> +	/** @user_addr: pointer to start of the metadata */
> +	__u64 user_addr;
> +
> +	/** @len: length, in bytes, of the metadata */
> +	__u64 len;
> +
> +	/** @metadata_id: created metadata handle (out) */
> +	__u32 metadata_id;
> +};
> +
> +/**
> + * struct drm_xe_debug_metadata_destroy - Destroy debug metadata
> + *
> + * Destroy debug metadata.
> + */
> +struct drm_xe_debug_metadata_destroy {
> +	/** @extensions: Pointer to the first extension struct, if any */
> +	__u64 extensions;
> +
> +	/** @metadata_id: metadata handle to destroy */
> +	__u32 metadata_id;
> +};
> +
> +#include "xe_drm_eudebug.h"
> +

I think we're going toward merging the eudebug igt series behind a
configuration flag, since the kernel uapi is not yet established and
merged, so the above is imo unacceptable.

I think you should:

1. keep xe_drm.h intact, synced with upstream
2. provide xe_drm_eudebug.h which should contain all structure
   definitions
3. eudebug tests should include xe_drm.h and xe_drm_eudebug.h,
   because this is not an official interface and, according to the
   kernel series, may change. Thus any changes inside xe_drm.h
   would be noise.

--
Zbigniew

>  #if defined(__cplusplus)
>  }
>  #endif
> diff --git a/include/drm-uapi/xe_drm_eudebug.h b/include/drm-uapi/xe_drm_eudebug.h
> new file mode 100644
> index 000000000..1db737080
> --- /dev/null
> +++ b/include/drm-uapi/xe_drm_eudebug.h
> @@ -0,0 +1,225 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +
> +#ifndef _XE_DRM_EUDEBUG_H_
> +#define _XE_DRM_EUDEBUG_H_
> +
> +#if defined(__cplusplus)
> +extern "C" {
> +#endif
> +
> +/**
> + * Do a eudebug event read for a debugger connection.
> + *
> + * This ioctl is available in debug version 1.
> + */
> +#define DRM_XE_EUDEBUG_IOCTL_READ_EVENT		_IO('j', 0x0)
> +#define DRM_XE_EUDEBUG_IOCTL_EU_CONTROL		_IOWR('j', 0x2, struct drm_xe_eudebug_eu_control)
> +#define DRM_XE_EUDEBUG_IOCTL_ACK_EVENT		_IOW('j', 0x4, struct drm_xe_eudebug_ack_event)
> +#define DRM_XE_EUDEBUG_IOCTL_VM_OPEN		_IOW('j', 0x1, struct drm_xe_eudebug_vm_open)
> +#define DRM_XE_EUDEBUG_IOCTL_READ_METADATA	_IOWR('j', 0x3, struct drm_xe_eudebug_read_metadata)
> +
> +/* XXX: Document events to match their internal counterparts when moved to xe_drm.h */
> +struct drm_xe_eudebug_event {
> +	__u32 len;
> +
> +	__u16 type;
> +#define DRM_XE_EUDEBUG_EVENT_NONE		0
> +#define DRM_XE_EUDEBUG_EVENT_READ		1
> +#define DRM_XE_EUDEBUG_EVENT_OPEN		2
> +#define DRM_XE_EUDEBUG_EVENT_VM			3
> +#define DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE		4
> +#define DRM_XE_EUDEBUG_EVENT_EU_ATTENTION	5
> +#define DRM_XE_EUDEBUG_EVENT_VM_BIND		6
> +#define DRM_XE_EUDEBUG_EVENT_VM_BIND_OP		7
> +#define DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE	8
> +#define DRM_XE_EUDEBUG_EVENT_METADATA		9
> +#define DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA 10
> +
> +	__u16 flags;
> +#define DRM_XE_EUDEBUG_EVENT_CREATE		(1 << 0)
> +#define DRM_XE_EUDEBUG_EVENT_DESTROY		(1 << 1)
> +#define DRM_XE_EUDEBUG_EVENT_STATE_CHANGE	(1 << 2)
> +#define DRM_XE_EUDEBUG_EVENT_NEED_ACK		(1 << 3)
> +
> +	__u64 seqno;
> +	__u64 reserved;
> +};
> +
> +struct drm_xe_eudebug_event_client {
> +	struct drm_xe_eudebug_event base;
> +
> +	__u64 client_handle; /* This is unique per debug connection */
> +};
> +
> +struct drm_xe_eudebug_event_vm {
> +	struct drm_xe_eudebug_event base;
> +
> +	__u64 client_handle;
> +	__u64 vm_handle;
> +};
> +
> +struct drm_xe_eudebug_event_exec_queue {
> +	struct drm_xe_eudebug_event base;
> +
> +	__u64 client_handle;
> +	__u64 vm_handle;
> +	__u64 exec_queue_handle;
> +	__u32 engine_class;
> +	__u32 width;
> +	__u64 lrc_handle[];
> +};
> +
> +struct drm_xe_eudebug_event_eu_attention {
> +	struct drm_xe_eudebug_event base;
> +
> +	__u64 client_handle;
> +	__u64 exec_queue_handle;
> +	__u64 lrc_handle;
> +	__u32 flags;
> +	__u32 bitmask_size;
> +	__u8 bitmask[];
> +};
> +
> +struct drm_xe_eudebug_eu_control {
> +	__u64 client_handle;
> +
> +#define DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL	0
> +#define DRM_XE_EUDEBUG_EU_CONTROL_CMD_STOPPED		1
> +#define DRM_XE_EUDEBUG_EU_CONTROL_CMD_RESUME		2
> +	__u32 cmd;
> +	__u32 flags;
> +
> +	__u64 seqno;
> +
> +	__u64 exec_queue_handle;
> +	__u64 lrc_handle;
> +	__u32 reserved;
> +	__u32 bitmask_size;
> +	__u64 bitmask_ptr;
> +};
> +
> +/*
> + *  When the client (debuggee) does a vm_bind_ioctl(), the following
> + *  event sequence will be created (for the debugger):
> + *
> + *  ┌───────────────────────┐
> + *  │  EVENT_VM_BIND        ├───────┬─┬─┐
> + *  └───────────────────────┘       │ │ │
> + *      ┌───────────────────────┐   │ │ │
> + *      │ EVENT_VM_BIND_OP #1   ├───┘ │ │
> + *      └───────────────────────┘     │ │
> + *                 ...                │ │
> + *      ┌───────────────────────┐     │ │
> + *      │ EVENT_VM_BIND_OP #n   ├─────┘ │
> + *      └───────────────────────┘       │
> + *                                      │
> + *      ┌───────────────────────┐       │
> + *      │ EVENT_UFENCE          ├───────┘
> + *      └───────────────────────┘
> + *
> + * All the events below VM_BIND will reference the VM_BIND
> + * they are associated with, via the .vm_bind_ref_seqno field.
> + * EVENT_UFENCE will only be included if the client attached
> + * a sync of type UFENCE to its vm_bind_ioctl().
> + *
> + * When EVENT_UFENCE is sent by the driver, all the OPs of
> + * the original VM_BIND are completed and the [addr,range]
> + * contained in them are present and modifiable through the
> + * vm accessors. Accessing [addr, range] before the related ufence
> + * event will lead to undefined results, as the actual bind
> + * operations are async and the backing storage might not
> + * be there at the moment the event is received.
> + *
> + * The client's UFENCE sync will be held by the driver: the client's
> + * drm_xe_wait_ufence will not complete and the value of the ufence
> + * won't appear until the ufence is acked by the debugger process calling
> + * DRM_XE_EUDEBUG_IOCTL_ACK_EVENT with the event_ufence.base.seqno.
> + * This will signal the fence, .value will update and the wait will
> + * complete, allowing the client to continue.
> + *
> + */
> +
> +struct drm_xe_eudebug_event_vm_bind {
> +	struct drm_xe_eudebug_event base;
> +
> +	__u64 client_handle;
> +	__u64 vm_handle;
> +
> +	__u32 flags;
> +#define DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE (1 << 0)
> +
> +	__u32 num_binds;
> +};
> +
> +struct drm_xe_eudebug_event_vm_bind_op {
> +	struct drm_xe_eudebug_event base;
> +	__u64 vm_bind_ref_seqno; /* *_event_vm_bind.base.seqno */
> +	__u64 num_extensions;
> +
> +	__u64 addr; /* XXX: Zero for unmap all? */
> +	__u64 range; /* XXX: Zero for unmap all? */
> +};
> +
> +struct drm_xe_eudebug_event_vm_bind_ufence {
> +	struct drm_xe_eudebug_event base;
> +	__u64 vm_bind_ref_seqno; /* *_event_vm_bind.base.seqno */
> +};
> +
> +struct drm_xe_eudebug_ack_event {
> +	__u32 type;
> +	__u32 flags; /* MBZ */
> +	__u64 seqno;
> +};
> +
> +struct drm_xe_eudebug_vm_open {
> +	/** @extensions: Pointer to the first extension struct, if any */
> +	__u64 extensions;
> +
> +	/** @client_handle: id of client */
> +	__u64 client_handle;
> +
> +	/** @vm_handle: id of vm */
> +	__u64 vm_handle;
> +
> +	/** @flags: flags */
> +	__u64 flags;
> +
> +	/** @timeout_ns: Timeout value in nanoseconds for operations (fsync) */
> +	__u64 timeout_ns;
> +};
> +
> +struct drm_xe_eudebug_read_metadata {
> +	__u64 client_handle;
> +	__u64 metadata_handle;
> +	__u32 flags;
> +	__u32 reserved;
> +	__u64 ptr;
> +	__u64 size;
> +};
> +
> +struct drm_xe_eudebug_event_metadata {
> +	struct drm_xe_eudebug_event base;
> +
> +	__u64 client_handle;
> +	__u64 metadata_handle;
> +	/* XXX: Refer to xe_drm.h for fields */
> +	__u64 type;
> +	__u64 len;
> +};
> +
> +struct drm_xe_eudebug_event_vm_bind_op_metadata {
> +	struct drm_xe_eudebug_event base;
> +	__u64 vm_bind_op_ref_seqno; /* *_event_vm_bind_op.base.seqno */
> +
> +	__u64 metadata_handle;
> +	__u64 metadata_cookie;
> +};
> +
> +#if defined(__cplusplus)
> +}
> +#endif
> +
> +#endif
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 03/14] lib/xe_ioctl: Add wrapper with vm_bind_op extension parameter
  2024-08-09 12:38 ` [PATCH i-g-t v3 03/14] lib/xe_ioctl: Add wrapper with vm_bind_op extension parameter Christoph Manszewski
@ 2024-08-20  7:54   ` Zbigniew Kempczyński
  0 siblings, 0 replies; 41+ messages in thread
From: Zbigniew Kempczyński @ 2024-08-20  7:54 UTC (permalink / raw)
  To: Christoph Manszewski
  Cc: igt-dev, Kamil Konieczny, Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun, Mika Kuoppala

On Fri, Aug 09, 2024 at 02:38:02PM +0200, Christoph Manszewski wrote:
> Currently there is no way to set the drm_xe_vm_bind_op extension field.
> Add a vm_bind wrapper that allows passing this field as a parameter.
> 
> Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
> Reviewed-by: Dominik Karol Piątkowski <dominik.karol.piatkowski@intel.com>
> ---
>  lib/xe/xe_ioctl.c | 20 ++++++++++++++++----
>  lib/xe/xe_ioctl.h |  5 +++++
>  2 files changed, 21 insertions(+), 4 deletions(-)
> 
> diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
> index ae43ffd15..6d8388918 100644
> --- a/lib/xe/xe_ioctl.c
> +++ b/lib/xe/xe_ioctl.c
> @@ -96,15 +96,17 @@ void xe_vm_bind_array(int fd, uint32_t vm, uint32_t exec_queue,
>  	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_VM_BIND, &bind), 0);
>  }
>  
> -int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
> -		  uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
> -		  uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
> -		  uint32_t prefetch_region, uint8_t pat_index, uint64_t ext)
> +int  ___xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
> +		   uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
> +		   uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
> +		   uint32_t prefetch_region, uint8_t pat_index, uint64_t ext,
> +		   uint64_t op_ext)
>  {
>  	struct drm_xe_vm_bind bind = {
>  		.extensions = ext,
>  		.vm_id = vm,
>  		.num_binds = 1,
> +		.bind.extensions = op_ext,
>  		.bind.obj = bo,
>  		.bind.obj_offset = offset,
>  		.bind.range = size,
> @@ -125,6 +127,16 @@ int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
>  	return 0;
>  }
>  
> +int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
> +		  uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
> +		  uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
> +		  uint32_t prefetch_region, uint8_t pat_index, uint64_t ext)
> +{
> +	return ___xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size, op,
> +			     flags, sync, num_syncs, prefetch_region,
> +			     pat_index, ext, 0);
> +}
> +
>  void  __xe_vm_bind_assert(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
>  			  uint64_t offset, uint64_t addr, uint64_t size,
>  			  uint32_t op, uint32_t flags, struct drm_xe_sync *sync,
> diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
> index b27c0053f..18cc2b72b 100644
> --- a/lib/xe/xe_ioctl.h
> +++ b/lib/xe/xe_ioctl.h
> @@ -20,6 +20,11 @@
>  uint32_t xe_cs_prefetch_size(int fd);
>  uint64_t xe_bb_size(int fd, uint64_t reqsize);
>  uint32_t xe_vm_create(int fd, uint32_t flags, uint64_t ext);
> +int  ___xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
> +		   uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
> +		   uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
> +		   uint32_t prefetch_region, uint8_t pat_index, uint64_t ext,
> +		   uint64_t op_ext);
>  int  __xe_vm_bind(int fd, uint32_t vm, uint32_t exec_queue, uint32_t bo,
>  		  uint64_t offset, uint64_t addr, uint64_t size, uint32_t op,
>  		  uint32_t flags, struct drm_xe_sync *sync, uint32_t num_syncs,
> -- 
> 2.34.1
> 

Ok, this is harmless and doesn't depend on eudebug uapi headers.

Reviewed-by: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>

--
Zbigniew


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework
  2024-08-09 12:38 ` [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework Christoph Manszewski
  2024-08-19  8:30   ` Grzegorzek, Dominik
@ 2024-08-20  8:14   ` Zbigniew Kempczyński
  2024-08-20 16:14     ` Manszewski, Christoph
  1 sibling, 1 reply; 41+ messages in thread
From: Zbigniew Kempczyński @ 2024-08-20  8:14 UTC (permalink / raw)
  To: Christoph Manszewski
  Cc: igt-dev, Kamil Konieczny, Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun, Mika Kuoppala,
	Karolina Stolarek

On Fri, Aug 09, 2024 at 02:38:03PM +0200, Christoph Manszewski wrote:
> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> 
> Introduce a library which simplifies testing of the eu debug capability.
> The library provides event log helpers together with an asynchronous
> abstraction for the client process and the debugger itself.
> 
> xe_eudebug_client creates its own process with the user's work function,
> and provides mechanisms to synchronize the beginning of execution and
> event logging.
> 
> xe_eudebug_debugger allows attaching to the given process, provides an
> asynchronous thread for event reading and introduces triggers -
> a callback mechanism invoked every time a subscribed event is read.
> 
> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
> Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
> Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
> Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
> ---
>  lib/meson.build     |    1 +
>  lib/xe/xe_eudebug.c | 2192 +++++++++++++++++++++++++++++++++++++++++++
>  lib/xe/xe_eudebug.h |  206 ++++
>  3 files changed, 2399 insertions(+)
>  create mode 100644 lib/xe/xe_eudebug.c
>  create mode 100644 lib/xe/xe_eudebug.h
> 
> diff --git a/lib/meson.build b/lib/meson.build
> index f711e60a7..969ca4101 100644
> --- a/lib/meson.build
> +++ b/lib/meson.build
> @@ -111,6 +111,7 @@ lib_sources = [
>  	'igt_msm.c',
>  	'igt_dsc.c',
>  	'xe/xe_gt.c',
> +	'xe/xe_eudebug.c',
>  	'xe/xe_ioctl.c',
>  	'xe/xe_mmio.c',
>  	'xe/xe_query.c',

As eudebug is quite a big feature I think it should be separated and
hidden behind a feature flag (check meson_options.txt), let's say
'xe_eudebug', which would be disabled by default. This way you can
develop it upstream even if the kernel side is not officially merged.
I'm pragmatic and I see no reason to block a not-yet-accepted feature,
especially since this would imo speed up development. The final step,
once the kernel change is accepted and merged, would be to sync with
the uapi and remove the local definitions.
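A rough sketch of what I have in mind (the option name and the guard
are mine, not part of the patch):

```meson
# meson_options.txt -- new feature option, disabled by default
option('xe_eudebug', type : 'feature', value : 'disabled',
       description : 'Build the xe eudebug library and tests')

# lib/meson.build -- compile the library only when the option is enabled
if get_option('xe_eudebug').enabled()
	lib_sources += 'xe/xe_eudebug.c'
endif
```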

I look forward to the maintainers' comments on whether my approach is
acceptable.

--
Zbigniew


> diff --git a/lib/xe/xe_eudebug.c b/lib/xe/xe_eudebug.c
> new file mode 100644
> index 000000000..4eac87476
> --- /dev/null
> +++ b/lib/xe/xe_eudebug.c
> @@ -0,0 +1,2192 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +
> +#include <fcntl.h>
> +#include <poll.h>
> +#include <signal.h>
> +#include <sys/select.h>
> +#include <sys/stat.h>
> +#include <sys/types.h>
> +#include <sys/wait.h>
> +
> +#include "igt.h"
> +#include "igt_sysfs.h"
> +#include "intel_pat.h"
> +#include "xe_eudebug.h"
> +#include "xe_ioctl.h"
> +
> +struct event_trigger {
> +	xe_eudebug_trigger_fn fn;
> +	int type;
> +	struct igt_list_head link;
> +};
> +
> +struct seqno_list_entry {
> +	struct igt_list_head link;
> +	uint64_t seqno;
> +};
> +
> +struct match_dto {
> +	struct drm_xe_eudebug_event *target;
> +	struct igt_list_head *seqno_list;
> +	uint64_t client_handle;
> +	uint32_t filter;
> +
> +	/* store latest 'EVENT_VM_BIND' seqno */
> +	uint64_t *bind_seqno;
> +	/* latest vm_bind_op seqno matching bind_seqno */
> +	uint64_t *bind_op_seqno;
> +};
> +
> +#define CLIENT_PID  1
> +#define CLIENT_RUN  2
> +#define CLIENT_FINI 3
> +#define CLIENT_STOP 4
> +#define CLIENT_STAGE 5
> +#define DEBUGGER_STAGE 6
> +
> +#define DEBUGGER_WORKER_INACTIVE  0
> +#define DEBUGGER_WORKER_ACTIVE  1
> +#define DEBUGGER_WORKER_QUITTING 2
> +
> +static const char *type_to_str(unsigned int type)
> +{
> +	switch (type) {
> +	case DRM_XE_EUDEBUG_EVENT_NONE:
> +		return "none";
> +	case DRM_XE_EUDEBUG_EVENT_READ:
> +		return "read";
> +	case DRM_XE_EUDEBUG_EVENT_OPEN:
> +		return "client";
> +	case DRM_XE_EUDEBUG_EVENT_VM:
> +		return "vm";
> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE:
> +		return "exec_queue";
> +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION:
> +		return "attention";
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND:
> +		return "vm_bind";
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP:
> +		return "vm_bind_op";
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE:
> +		return "vm_bind_ufence";
> +	case DRM_XE_EUDEBUG_EVENT_METADATA:
> +		return "metadata";
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA:
> +		return "vm_bind_op_metadata";
> +	}
> +
> +	return "UNKNOWN";
> +}
> +
> +static const char *event_type_to_str(struct drm_xe_eudebug_event *e, char *buf)
> +{
> +	sprintf(buf, "%s(%d)", type_to_str(e->type), e->type);
> +
> +	return buf;
> +}
> +
> +static const char *flags_to_str(unsigned int flags)
> +{
> +	if (flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> +		if (flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK)
> +			return "create|ack";
> +		else
> +			return "create";
> +	}
> +	if (flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
> +		return "destroy";
> +
> +	if (flags & DRM_XE_EUDEBUG_EVENT_STATE_CHANGE)
> +		return "state-change";
> +
> +	igt_assert(!(flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK));
> +
> +	return "flags unknown";
> +}
> +
> +static const char *event_members_to_str(struct drm_xe_eudebug_event *e, char *b)
> +{
> +	switch (e->type) {
> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> +		struct drm_xe_eudebug_event_client *ec = (struct drm_xe_eudebug_event_client *)e;
> +
> +		sprintf(b, "handle=%llu", ec->client_handle);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM: {
> +		struct drm_xe_eudebug_event_vm *evm = (struct drm_xe_eudebug_event_vm *)e;
> +
> +		sprintf(b, "client_handle=%llu, handle=%llu",
> +			evm->client_handle, evm->vm_handle);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
> +
> +		sprintf(b, "client_handle=%llu, vm_handle=%llu, "
> +			   "exec_queue_handle=%llu, engine_class=%d, exec_queue_width=%d",
> +			ee->client_handle, ee->vm_handle,
> +			ee->exec_queue_handle, ee->engine_class, ee->width);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION: {
> +		struct drm_xe_eudebug_event_eu_attention *ea = (void *)e;
> +
> +		sprintf(b, "client_handle=%llu, exec_queue_handle=%llu, "
> +			   "lrc_handle=%llu, bitmask_size=%d",
> +			ea->client_handle, ea->exec_queue_handle,
> +			ea->lrc_handle, ea->bitmask_size);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
> +
> +		sprintf(b, "client_handle=%llu, vm_handle=%llu, flags=0x%x, num_binds=%u",
> +			evmb->client_handle, evmb->vm_handle, evmb->flags, evmb->num_binds);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> +		struct drm_xe_eudebug_event_vm_bind_op *op = (void *)e;
> +
> +		sprintf(b, "vm_bind_ref_seqno=%lld, addr=%016llx, range=%llu num_extensions=%llu",
> +			op->vm_bind_ref_seqno, op->addr, op->range, op->num_extensions);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
> +		struct drm_xe_eudebug_event_vm_bind_ufence *f = (void *)e;
> +
> +		sprintf(b, "vm_bind_ref_seqno=%lld", f->vm_bind_ref_seqno);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
> +
> +		sprintf(b, "client_handle=%llu, metadata_handle=%llu, type=%llu, len=%llu",
> +			em->client_handle, em->metadata_handle, em->type, em->len);
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *op = (void *)e;
> +
> +		sprintf(b, "vm_bind_op_ref_seqno=%lld, metadata_handle=%llu, metadata_cookie=%llu",
> +			op->vm_bind_op_ref_seqno, op->metadata_handle, op->metadata_cookie);
> +		break;
> +	}
> +	default:
> +		strcpy(b, "<...>");
> +	}
> +
> +	return b;
> +}
> +
> +/**
> + * xe_eudebug_event_to_str:
> + * @e: pointer to event
> + * @buf: target to write string representation of @e
> + * @len: size of target buffer @buf
> + *
> + * Creates string representation for given event.
> + *
> + * Returns: the written input buffer pointed by @buf.
> + */
> +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len)
> +{
> +	char a[256];
> +	char b[256];
> +
> +	snprintf(buf, len, "(%llu) %15s:%s: %s",
> +		 e->seqno,
> +		 event_type_to_str(e, a),
> +		 flags_to_str(e->flags),
> +		 event_members_to_str(e, b));
> +
> +	return buf;
> +}
> +
> +static void catch_child_failure(void)
> +{
> +	pid_t pid;
> +	int status;
> +
> +	pid = waitpid(-1, &status, WNOHANG);
> +
> +	if (pid == 0 || pid == -1)
> +		return;
> +
> +	if (!WIFEXITED(status))
> +		return;
> +
> +	igt_assert_f(WEXITSTATUS(status) == 0, "Client failed!\n");
> +}
> +
> +static int safe_pipe_read(int pipe[2], void *buf, int nbytes, int timeout_ms)
> +{
> +	int ret;
> +	int t = 0;
> +	struct pollfd fd = {
> +		.fd = pipe[0],
> +		.events = POLLIN,
> +		.revents = 0
> +	};
> +
> +	/* When child fails we may get stuck forever. Check whether
> +	 * the child process ended with an error.
> +	 */
> +	do {
> +		const int interval_ms = 1000;
> +
> +		ret = poll(&fd, 1, interval_ms);
> +
> +		if (!ret) {
> +			catch_child_failure();
> +			t += interval_ms;
> +		}
> +	} while (!ret && t < timeout_ms);
> +
> +	if (ret > 0)
> +		return read(pipe[0], buf, nbytes);
> +
> +	return 0;
> +}
> +
> +static uint64_t pipe_read(int pipe[2], int timeout_ms)
> +{
> +	uint64_t in;
> +	uint64_t ret;
> +
> +	ret = safe_pipe_read(pipe, &in, sizeof(in), timeout_ms);
> +	igt_assert(ret == sizeof(in));
> +
> +	return in;
> +}
> +
> +static void pipe_signal(int pipe[2], uint64_t token)
> +{
> +	igt_assert(write(pipe[1], &token, sizeof(token)) == sizeof(token));
> +}
> +
> +static void pipe_close(int pipe[2])
> +{
> +	if (pipe[0] != -1)
> +		close(pipe[0]);
> +
> +	if (pipe[1] != -1)
> +		close(pipe[1]);
> +}
> +
> +static uint64_t __wait_token(int p[2], const uint64_t token, int timeout_ms)
> +{
> +	uint64_t in;
> +
> +	in = pipe_read(p, timeout_ms);
> +
> +	igt_assert_eq(in, token);
> +
> +	return pipe_read(p, timeout_ms);
> +}
> +
> +static uint64_t client_wait_token(struct xe_eudebug_client *c,
> +				 const uint64_t token)
> +{
> +	return __wait_token(c->p_in, token, c->timeout_ms);
> +}
> +
> +static uint64_t wait_from_client(struct xe_eudebug_client *c,
> +				 const uint64_t token)
> +{
> +	return __wait_token(c->p_out, token, c->timeout_ms);
> +}
> +
> +static void token_signal(int p[2], const uint64_t token, const uint64_t value)
> +{
> +	pipe_signal(p, token);
> +	pipe_signal(p, value);
> +}
> +
> +static void client_signal(struct xe_eudebug_client *c,
> +			  const uint64_t token,
> +			  const uint64_t value)
> +{
> +	token_signal(c->p_out, token, value);
> +}
> +
> +static int __xe_eudebug_connect(int fd, pid_t pid, uint32_t flags, uint64_t events)
> +{
> +	struct drm_xe_eudebug_connect param = {
> +		.pid = pid,
> +		.flags = flags,
> +	};
> +	int debugfd;
> +
> +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
> +
> +	if (debugfd < 0)
> +		return -errno;
> +
> +	return debugfd;
> +}
> +
> +static void event_log_write_to_fd(struct xe_eudebug_event_log *l, int fd)
> +{
> +	igt_assert_eq(write(fd, &l->head, sizeof(l->head)),
> +		      sizeof(l->head));
> +
> +	igt_assert_eq(write(fd, l->log, l->head), l->head);
> +}
> +
> +static void read_all(int fd, void *buf, size_t nbytes)
> +{
> +	ssize_t remaining_size = nbytes;
> +	ssize_t current_size = 0;
> +	ssize_t read_size = 0;
> +
> +	do {
> +		read_size = read(fd, buf + current_size, remaining_size);
> +		igt_assert_f(read_size >= 0, "read failed: %s\n", strerror(errno));
> +
> +		current_size += read_size;
> +		remaining_size -= read_size;
> +	} while (remaining_size > 0 && read_size > 0);
> +
> +	igt_assert_eq(current_size, nbytes);
> +}
> +
> +static void event_log_read_from_fd(struct xe_eudebug_event_log *l, int fd)
> +{
> +	read_all(fd, &l->head, sizeof(l->head));
> +	igt_assert_lt(l->head, l->max_size);
> +
> +	read_all(fd, l->log, l->head);
> +}
> +
> +typedef int (*cmp_fn_t)(struct drm_xe_eudebug_event *, void *);
> +
> +static struct drm_xe_eudebug_event *
> +event_cmp(struct xe_eudebug_event_log *l,
> +	  struct drm_xe_eudebug_event *current,
> +	  cmp_fn_t match,
> +	  void *data)
> +{
> +	struct drm_xe_eudebug_event *e = current;
> +
> +	xe_eudebug_for_each_event(e, l) {
> +		if (match(e, data))
> +			return e;
> +	}
> +
> +	return NULL;
> +}
> +
> +static int match_type_and_flags(struct drm_xe_eudebug_event *a, void *data)
> +{
> +	struct drm_xe_eudebug_event *b = data;
> +
> +	if (a->type == b->type &&
> +	    a->flags == b->flags)
> +		return 1;
> +
> +	return 0;
> +}
> +
> +static int match_fields(struct drm_xe_eudebug_event *a, void *data)
> +{
> +	struct drm_xe_eudebug_event *b = data;
> +	int ret = 0;
> +
> +	ret = match_type_and_flags(a, data);
> +	if (!ret)
> +		return ret;
> +
> +	ret = 0;
> +
> +	switch (a->type) {
> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> +		struct drm_xe_eudebug_event_exec_queue *ae = (void *)a;
> +		struct drm_xe_eudebug_event_exec_queue *be = (void *)b;
> +
> +		if (ae->engine_class == be->engine_class && ae->width == be->width)
> +			ret = 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> +		struct drm_xe_eudebug_event_vm_bind *ea = (void *)a;
> +		struct drm_xe_eudebug_event_vm_bind *eb = (void *)b;
> +
> +		if (ea->num_binds == eb->num_binds)
> +			ret = 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> +		struct drm_xe_eudebug_event_vm_bind_op *ea = (void *)a;
> +		struct drm_xe_eudebug_event_vm_bind_op *eb = (void *)b;
> +
> +		if (ea->addr == eb->addr && ea->range == eb->range &&
> +		    ea->num_extensions == eb->num_extensions)
> +			ret = 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *ea = (void *)a;
> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eb = (void *)b;
> +
> +		if (ea->metadata_handle == eb->metadata_handle &&
> +		    ea->metadata_cookie == eb->metadata_cookie)
> +			ret = 1;
> +		break;
> +	}
> +
> +	default:
> +		ret = 1;
> +		break;
> +	}
> +
> +	return ret;
> +}
> +
> +static int match_client_handle(struct drm_xe_eudebug_event *e, void *data)
> +{
> +	struct match_dto *md = (void *)data;
> +	uint64_t *bind_seqno = md->bind_seqno;
> +	uint64_t *bind_op_seqno = md->bind_op_seqno;
> +	uint64_t h = md->client_handle;
> +
> +	if (XE_EUDEBUG_EVENT_IS_FILTERED(e->type, md->filter))
> +		return 0;
> +
> +	switch (e->type) {
> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
> +
> +		if (client->client_handle == h)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM: {
> +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
> +
> +		if (vm->client_handle == h)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> +		struct drm_xe_eudebug_event_exec_queue *ee = (struct drm_xe_eudebug_event_exec_queue *)e;
> +
> +		if (ee->client_handle == h)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> +		struct drm_xe_eudebug_event_vm_bind *evmb = (struct drm_xe_eudebug_event_vm_bind *)e;
> +
> +		if (evmb->client_handle == h) {
> +			*bind_seqno = evmb->base.seqno;
> +			return 1;
> +		}
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> +		struct drm_xe_eudebug_event_vm_bind_op *eo = (struct drm_xe_eudebug_event_vm_bind_op *)e;
> +
> +		if (eo->vm_bind_ref_seqno == *bind_seqno) {
> +			*bind_op_seqno = eo->base.seqno;
> +			return 1;
> +		}
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
> +		struct drm_xe_eudebug_event_vm_bind_ufence *ef  = (struct drm_xe_eudebug_event_vm_bind_ufence *)e;
> +
> +		if (ef->vm_bind_ref_seqno == *bind_seqno)
> +			return 1;
> +
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> +		struct drm_xe_eudebug_event_metadata *em = (struct drm_xe_eudebug_event_metadata *)e;
> +
> +		if (em->client_handle == h)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eo = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)e;
> +
> +		if (eo->vm_bind_op_ref_seqno == *bind_op_seqno)
> +			return 1;
> +		break;
> +	}
> +	default:
> +		break;
> +	}
> +
> +	return 0;
> +}
> +
> +static int match_opposite_resource(struct drm_xe_eudebug_event *e, void *data)
> +{
> +
> +	struct drm_xe_eudebug_event *d = (void *)data;
> +	int ret;
> +
> +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
> +	d->flags &= ~(DRM_XE_EUDEBUG_EVENT_NEED_ACK);
> +	ret = match_type_and_flags(e, data);
> +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
> +
> +	if (!ret)
> +		return 0;
> +
> +	switch (e->type) {
> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
> +		struct drm_xe_eudebug_event_client *filter = (struct drm_xe_eudebug_event_client *)data;
> +
> +		if (client->client_handle == filter->client_handle)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM: {
> +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
> +		struct drm_xe_eudebug_event_vm *filter = (struct drm_xe_eudebug_event_vm *)data;
> +
> +		if (vm->vm_handle == filter->vm_handle)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
> +		struct drm_xe_eudebug_event_exec_queue *filter = (struct drm_xe_eudebug_event_exec_queue *)data;
> +
> +		if (ee->exec_queue_handle == filter->exec_queue_handle)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
> +		struct drm_xe_eudebug_event_vm_bind *filter = (struct drm_xe_eudebug_event_vm_bind *)data;
> +
> +		if (evmb->vm_handle == filter->vm_handle &&
> +		    evmb->num_binds == filter->num_binds)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> +		struct drm_xe_eudebug_event_vm_bind_op *avmb = (void *)e;
> +		struct drm_xe_eudebug_event_vm_bind_op *filter = (struct drm_xe_eudebug_event_vm_bind_op *)data;
> +
> +		if (avmb->addr == filter->addr &&
> +		    avmb->range == filter->range)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
> +		struct drm_xe_eudebug_event_metadata *filter = (struct drm_xe_eudebug_event_metadata *)data;
> +
> +		if (em->metadata_handle == filter->metadata_handle)
> +			return 1;
> +		break;
> +	}
> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *avmb = (void *)e;
> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *filter = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)data;
> +
> +		if (avmb->metadata_handle == filter->metadata_handle &&
> +		    avmb->metadata_cookie == filter->metadata_cookie)
> +			return 1;
> +		break;
> +	}
> +
> +	default:
> +		break;
> +	}
> +	return 0;
> +}
> +
> +static int match_full(struct drm_xe_eudebug_event *e, void *data)
> +{
> +	struct seqno_list_entry *sl;
> +
> +	struct match_dto *md = (void *)data;
> +	int ret = 0;
> +
> +	ret = match_client_handle(e, md);
> +	if (!ret)
> +		return 0;
> +
> +	ret = match_fields(e, md->target);
> +	if (!ret)
> +		return 0;
> +
> +	igt_list_for_each_entry(sl, md->seqno_list, link) {
> +		if (sl->seqno == e->seqno)
> +			return 0;
> +	}
> +
> +	return 1;
> +}
> +
> +static struct drm_xe_eudebug_event *
> +event_type_match(struct xe_eudebug_event_log *l,
> +		 struct drm_xe_eudebug_event *target,
> +		 struct drm_xe_eudebug_event *current)
> +{
> +	return event_cmp(l, current, match_type_and_flags, target);
> +}
> +
> +static struct drm_xe_eudebug_event *
> +client_match(struct xe_eudebug_event_log *l,
> +	     uint64_t client_handle,
> +	     struct drm_xe_eudebug_event *current,
> +	     uint32_t filter,
> +	     uint64_t *bind_seqno,
> +	     uint64_t *bind_op_seqno)
> +{
> +	struct match_dto md = {
> +		.client_handle = client_handle,
> +		.filter = filter,
> +		.bind_seqno = bind_seqno,
> +		.bind_op_seqno = bind_op_seqno,
> +	};
> +
> +	return event_cmp(l, current, match_client_handle, &md);
> +}
> +
> +static struct drm_xe_eudebug_event *
> +opposite_event_match(struct xe_eudebug_event_log *l,
> +		    struct drm_xe_eudebug_event *target,
> +		    struct drm_xe_eudebug_event *current)
> +{
> +	return event_cmp(l, current, match_opposite_resource, target);
> +}
> +
> +static struct drm_xe_eudebug_event *
> +event_match(struct xe_eudebug_event_log *l,
> +	    struct drm_xe_eudebug_event *target,
> +	    uint64_t client_handle,
> +	    struct igt_list_head *seqno_list,
> +	    uint64_t *bind_seqno,
> +	    uint64_t *bind_op_seqno)
> +{
> +	struct match_dto md = {
> +		.target = target,
> +		.client_handle = client_handle,
> +		.seqno_list = seqno_list,
> +		.bind_seqno = bind_seqno,
> +		.bind_op_seqno = bind_op_seqno,
> +	};
> +
> +	return event_cmp(l, NULL, match_full, &md);
> +}
> +
> +static void compare_client(struct xe_eudebug_event_log *c, struct drm_xe_eudebug_event *_ce,
> +			   struct xe_eudebug_event_log *d, struct drm_xe_eudebug_event *_de,
> +			   uint32_t filter)
> +{
> +	struct drm_xe_eudebug_event_client *ce = (void *)_ce;
> +	struct drm_xe_eudebug_event_client *de = (void *)_de;
> +	uint64_t cbs = 0, dbs = 0, cbso = 0, dbso = 0;
> +
> +	struct igt_list_head matched_seqno_list;
> +	struct drm_xe_eudebug_event *hc, *hd;
> +	struct seqno_list_entry *entry, *tmp;
> +
> +	igt_assert(ce);
> +	igt_assert(de);
> +
> +	igt_debug("client: %llu -> %llu\n", ce->client_handle, de->client_handle);
> +
> +	hc = NULL;
> +	hd = NULL;
> +	IGT_INIT_LIST_HEAD(&matched_seqno_list);
> +
> +	do {
> +		hc = client_match(c, ce->client_handle, hc, filter, &cbs, &cbso);
> +		if (!hc)
> +			break;
> +
> +		hd = event_match(d, hc, de->client_handle, &matched_seqno_list, &dbs, &dbso);
> +
> +		igt_assert_f(hd, "%s (%llu): no matching event type %u found for client %llu\n",
> +			     c->name,
> +			     hc->seqno,
> +			     hc->type,
> +			     ce->client_handle);
> +
> +		igt_debug("comparing %s %llu vs %s %llu\n",
> +			  c->name, hc->seqno, d->name, hd->seqno);
> +
> +		/*
> +		 * Store the seqno of the event that was matched above,
> +		 * inside 'matched_seqno_list', to avoid it getting matched
> +		 * by subsequent 'event_match' calls.
> +		 */
> +		entry = malloc(sizeof(*entry));
> +		entry->seqno = hd->seqno;
> +		igt_list_add(&entry->link, &matched_seqno_list);
> +	} while (hc);
> +
> +	igt_list_for_each_entry_safe(entry, tmp, &matched_seqno_list, link)
> +		free(entry);
> +}
> +
> +/**
> + * xe_eudebug_event_log_find_seqno:
> + * @l: event log pointer
> + * @seqno: seqno of event to be found
> + *
> + * Finds the event with given seqno in the event log.
> + *
> + * Returns: pointer to the event with given seqno within @l, or NULL if
> + * seqno is not present.
> + */
> +struct drm_xe_eudebug_event *
> +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno)
> +{
> +	struct drm_xe_eudebug_event *e = NULL, *found = NULL;
> +
> +	igt_assert_neq(seqno, 0);
> +	/*
> +	 * Try to catch if seqno is corrupted and prevent too long tests,
> +	 * as our post processing of events is not optimized.
> +	 */
> +	igt_assert_lt(seqno, 10 * 1000 * 1000);
> +
> +	xe_eudebug_for_each_event(e, l) {
> +		if (e->seqno == seqno) {
> +			if (found) {
> +				igt_warn("Found multiple events with the same seqno %lu\n", seqno);
> +				xe_eudebug_event_log_print(l, false);
> +				igt_assert(!found);
> +			}
> +			found = e;
> +		}
> +	}
> +
> +	return found;
> +}
> +
> +static void event_log_sort(struct xe_eudebug_event_log *l)
> +{
> +	struct xe_eudebug_event_log *tmp;
> +	struct drm_xe_eudebug_event *e = NULL;
> +	uint64_t first_seqno = 0;
> +	uint64_t last_seqno = 0;
> +	uint64_t events = 0, added = 0;
> +	uint64_t i;
> +
> +	xe_eudebug_for_each_event(e, l) {
> +		if (e->seqno > last_seqno)
> +			last_seqno = e->seqno;
> +
> +		if (e->seqno < first_seqno)
> +			first_seqno = e->seqno;
> +
> +		events++;
> +	}
> +
> +	tmp = xe_eudebug_event_log_create("tmp", l->max_size);
> +
> +	for (i = 1; i <= last_seqno; i++) {
> +		e = xe_eudebug_event_log_find_seqno(l, i);
> +		if (e) {
> +			xe_eudebug_event_log_write(tmp, e);
> +			added++;
> +		}
> +	}
> +
> +	igt_assert_eq(events, added);
> +	igt_assert_eq(tmp->head, l->head);
> +
> +	memcpy(l->log, tmp->log, tmp->head);
> +
> +	xe_eudebug_event_log_destroy(tmp);
> +}
> +
> +/**
> + * xe_eudebug_connect:
> + * @fd: Xe file descriptor
> + * @pid: client PID
> + * @flags: connection flags
> + *
> + * Opens the xe eu debugger connection to the process described by @pid
> + *
> + * Returns: the debugger fd (>= 0) if successfully attached, -errno otherwise.
> + */
> +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags)
> +{
> +	int ret;
> +	uint64_t events = 0; /* events filtering not supported yet! */
> +
> +	ret = __xe_eudebug_connect(fd, pid, flags, events);
> +
> +	return ret;
> +}
> +
> +/**
> + * xe_eudebug_event_log_create:
> + * @name: event log identifier
> + * @max_size: maximum size of created log
> + *
> + * Creates an EU debugger event log with size equal to @max_size.
> + *
> + * Returns: pointer to just created log
> + */
> +#define MAX_EVENT_LOG_SIZE (32 * 1024 * 1024)
> +struct xe_eudebug_event_log *xe_eudebug_event_log_create(const char *name, unsigned int max_size)
> +{
> +	struct xe_eudebug_event_log *l;
> +
> +	l = calloc(1, sizeof(*l));
> +	igt_assert(l);
> +	l->log = calloc(1, max_size);
> +	igt_assert(l->log);
> +	l->max_size = max_size;
> +	strncpy(l->name, name, sizeof(l->name) - 1);
> +	pthread_mutex_init(&l->lock, NULL);
> +
> +	return l;
> +}
> +
> +/**
> + * xe_eudebug_event_log_destroy:
> + * @l: event log pointer
> + *
> + * Frees given event log @l.
> + */
> +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l)
> +{
> +	pthread_mutex_destroy(&l->lock);
> +	free(l->log);
> +	free(l);
> +}
> +
> +/**
> + * xe_eudebug_event_log_write:
> + * @l: event log pointer
> + * @e: event to be written to event log
> + *
> + * Writes event @e to the event log, thread-safe.
> + */
> +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e)
> +{
> +	igt_assert(e->seqno);
> +	/*
> +	 * Try to catch if seqno is corrupted and prevent too long tests,
> +	 * as our post processing of events is not optimized.
> +	 */
> +	igt_assert_lt(e->seqno, 10 * 1000 * 1000);
> +
> +	pthread_mutex_lock(&l->lock);
> +	igt_assert_lt(l->head + e->len, l->max_size);
> +	memcpy(l->log + l->head, e, e->len);
> +	l->head += e->len;
> +
> +#ifdef DEBUG_LOG
> +	igt_info("%s: wrote %u bytes to eventlog, free %u bytes\n",
> +		 l->name, e->len, l->max_size - l->head);
> +#endif
> +	pthread_mutex_unlock(&l->lock);
> +}
> +
> +/**
> + * xe_eudebug_event_log_print:
> + * @l: event log pointer
> + * @debug: when true function uses igt_debug instead of igt_info.
> + *
> + * Prints given event log.
> + */
> +void
> +xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug)
> +{
> +	struct drm_xe_eudebug_event *e = NULL;
> +	int level = debug ? IGT_LOG_DEBUG : IGT_LOG_INFO;
> +	char str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
> +
> +	igt_log(IGT_LOG_DOMAIN, level,
> +		"event log '%s' (%u bytes):\n", l->name, l->head);
> +
> +	xe_eudebug_for_each_event(e, l) {
> +		xe_eudebug_event_to_str(e, str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
> +		igt_log(IGT_LOG_DOMAIN, level, "%s\n", str);
> +	}
> +}
> +
> +/**
> + * xe_eudebug_event_log_compare:
> + * @a: event log pointer
> + * @b: event log pointer
> + * @filter: mask that represents events to be skipped during comparison, useful
> + * for events like 'VM_BIND' since they can be asymmetric. Note that
> + * 'DRM_XE_EUDEBUG_EVENT_OPEN' will always be matched.
> + *
> + * Compares and asserts event logs @a, @b if the event
> + * sequence matches.
> + */
> +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *a, struct xe_eudebug_event_log *b,
> +				  uint32_t filter)
> +{
> +	struct drm_xe_eudebug_event *ae = NULL;
> +	struct drm_xe_eudebug_event *be = NULL;
> +
> +	xe_eudebug_for_each_event(ae, a) {
> +		if (ae->type == DRM_XE_EUDEBUG_EVENT_OPEN &&
> +		    ae->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> +			be = event_type_match(b, ae, be);
> +
> +			compare_client(a, ae, b, be, filter);
> +			compare_client(b, be, a, ae, filter);
> +		}
> +	}
> +}
> +
> +/**
> + * xe_eudebug_event_log_match_opposite:
> + * @l: event log pointer
> + * @filter: mask that represents events to be skipped during comparison, useful
> + * for events like 'VM_BIND' since they can be asymmetric
> + *
> + * Matches and asserts content of all opposite events (create vs destroy).
> + */
> +void
> +xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter)
> +{
> +	struct drm_xe_eudebug_event *ce = NULL;
> +	struct drm_xe_eudebug_event *de = NULL;
> +
> +	xe_eudebug_for_each_event(ce, l) {
> +		if (ce->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> +			uint8_t offset = sizeof(struct drm_xe_eudebug_event);
> +			int opposite_matching;
> +
> +			if (XE_EUDEBUG_EVENT_IS_FILTERED(ce->type, filter))
> +				continue;
> +
> +			/* No opposite matching for binds */
> +			if ((ce->type >= DRM_XE_EUDEBUG_EVENT_VM_BIND &&
> +			     ce->type <= DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE) ||
> +			    ce->type == DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA)
> +				continue;
> +
> +			de = opposite_event_match(l, ce, ce);
> +
> +			igt_assert_f(de, "no opposite event of type %u found\n", ce->type);
> +
> +			igt_assert_eq(ce->len, de->len);
> +			opposite_matching = memcmp((uint8_t *)de + offset,
> +						   (uint8_t *)ce + offset,
> +						   de->len - offset) == 0;
> +
> +			igt_assert_f(opposite_matching,
> +				     "%s: create|destroy event not "
> +				     "matching (%llu) vs (%llu)\n",
> +				     l->name, de->seqno, ce->seqno);
> +		}
> +	}
> +}
> +
> +static void debugger_run_triggers(struct xe_eudebug_debugger *d,
> +				  struct drm_xe_eudebug_event *e)
> +{
> +	struct event_trigger *t;
> +
> +	igt_list_for_each_entry(t, &d->triggers, link) {
> +		if (e->type == t->type)
> +			t->fn(d, e);
> +	}
> +}
> +
> +#define MAX_EVENT_SIZE (32 * 1024)
> +static int
> +xe_eudebug_read_event(int fd, struct drm_xe_eudebug_event *event)
> +{
> +	int ret;
> +
> +	event->type = DRM_XE_EUDEBUG_EVENT_READ;
> +	event->flags = 0;
> +	event->len = MAX_EVENT_SIZE;
> +
> +	ret = igt_ioctl(fd, DRM_XE_EUDEBUG_IOCTL_READ_EVENT, event);
> +	if (ret < 0)
> +		return -errno;
> +
> +	return ret;
> +}
> +
> +static void *debugger_worker_loop(void *data)
> +{
> +	uint8_t buf[MAX_EVENT_SIZE];
> +	struct drm_xe_eudebug_event *e = (void *)buf;
> +	struct xe_eudebug_debugger *d = data;
> +	struct pollfd p = {
> +		.events = POLLIN,
> +		.revents = 0,
> +	};
> +	int timeout_ms = 100, ret;
> +
> +	igt_assert(d->master_fd >= 0);
> +
> +	do {
> +		p.fd = d->fd;
> +		ret = poll(&p, 1, timeout_ms);
> +
> +		if (ret == -1) {
> +			igt_info("poll failed with errno %d\n", errno);
> +			break;
> +		}
> +
> +		if (ret == 1 && (p.revents & POLLIN)) {
> +			int err = xe_eudebug_read_event(d->fd, e);
> +
> +			if (!err) {
> +				++d->event_count;
> +
> +				xe_eudebug_event_log_write(d->log, e);
> +				debugger_run_triggers(d, e);
> +			} else {
> +				igt_info("xe_eudebug_read_event returned %d\n", err);
> +			}
> +		}
> +	} while ((ret && READ_ONCE(d->worker_state) == DEBUGGER_WORKER_QUITTING) ||
> +		 READ_ONCE(d->worker_state) == DEBUGGER_WORKER_ACTIVE);
> +
> +	d->worker_state = DEBUGGER_WORKER_INACTIVE;
> +	return NULL;
> +}
> +
> +/**
> + * xe_eudebug_debugger_available:
> + * @fd: Xe file descriptor
> + *
> + * Returns: true if the debugger connection is available, false otherwise.
> + */
> +bool xe_eudebug_debugger_available(int fd)
> +{
> +	struct drm_xe_eudebug_connect param = { .pid = getpid() };
> +	int debugfd;
> +
> +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
> +	if (debugfd >= 0)
> +		close(debugfd);
> +
> +	return debugfd >= 0;
> +}
> +
> +/**
> + * xe_eudebug_debugger_create:
> + * @master_fd: xe client used to open the debugger connection
> + * @flags: flags stored in a debugger structure, can be used at will
> + * of the caller, i.e. to be used inside triggers.
> + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
> + * can be shared between client and debugger. Can be NULL.
> + *
> + * Returns: newly created xe_eudebug_debugger structure with its
> + * event log initialized. Note that to open the connection
> + * you need to call @xe_eudebug_debugger_attach.
> + */
> +struct xe_eudebug_debugger *
> +xe_eudebug_debugger_create(int master_fd, uint64_t flags, void *data)
> +{
> +	struct xe_eudebug_debugger *d;
> +
> +	d = calloc(1, sizeof(*d));
> +	igt_assert(d);
> +	d->flags = flags;
> +	IGT_INIT_LIST_HEAD(&d->triggers);
> +	d->log = xe_eudebug_event_log_create("debugger", MAX_EVENT_LOG_SIZE);
> +	d->fd = -1;
> +	d->master_fd = master_fd;
> +	d->ptr = data;
> +
> +	return d;
> +}
> +
> +static void debugger_destroy_triggers(struct xe_eudebug_debugger *d)
> +{
> +	struct event_trigger *t, *tmp;
> +
> +	igt_list_for_each_entry_safe(t, tmp, &d->triggers, link)
> +		free(t);
> +}
> +
> +/**
> + * xe_eudebug_debugger_destroy:
> + * @d: pointer to the debugger
> + *
> + * Frees xe_eudebug_debugger structure pointed by @d. If the debugger
> + * connection was still opened it terminates it.
> + */
> +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d)
> +{
> +	if (d->worker_state)
> +		xe_eudebug_debugger_stop_worker(d, 1);
> +
> +	if (d->target_pid)
> +		xe_eudebug_debugger_dettach(d);
> +
> +	xe_eudebug_event_log_destroy(d->log);
> +	debugger_destroy_triggers(d);
> +	free(d);
> +}
> +
> +/**
> + * xe_eudebug_debugger_attach:
> + * @d: pointer to the debugger
> + * @c: pointer to the client
> + *
> + * Opens the xe eu debugger connection to the process described by @c (c->pid)
> + *
> + * Returns: 0 if the debugger was successfully attached, -errno otherwise.
> + */
> +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d,
> +			       struct xe_eudebug_client *c)
> +{
> +	int ret;
> +
> +	igt_assert_eq(d->fd, -1);
> +	igt_assert_neq(c->pid, 0);
> +	ret = xe_eudebug_connect(d->master_fd, c->pid, 0);
> +
> +	if (ret < 0)
> +		return ret;
> +
> +	d->fd = ret;
> +	d->target_pid = c->pid;
> +	d->p_client[0] = c->p_in[0];
> +	d->p_client[1] = c->p_in[1];
> +
> +	igt_debug("debugger connected to %lu\n", d->target_pid);
> +
> +	return 0;
> +}
> +
> +/**
> + * xe_eudebug_debugger_dettach:
> + * @d: pointer to the debugger
> + *
> + * Closes a previously opened xe eu debugger connection. Asserts that
> + * the debugger has an active session.
> + */
> +void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d)
> +{
> +	igt_assert(d->target_pid);
> +	close(d->fd);
> +	d->target_pid = 0;
> +	d->fd = -1;
> +}
> +
> +/**
> + * xe_eudebug_debugger_add_trigger:
> + * @d: pointer to the debugger
> + * @type: the type of the event which activates the trigger
> + * @fn: function to be called when an event of @type is read by the debugger.
> + *
> + * Adds function @fn to the list of triggers activated when an event of @type
> + * has been read by the worker.
> + * Note: Triggers are executed by the worker.
> + */
> +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d,
> +				     int type, xe_eudebug_trigger_fn fn)
> +{
> +	struct event_trigger *t;
> +
> +	t = calloc(1, sizeof(*t));
> +	IGT_INIT_LIST_HEAD(&t->link);
> +	t->type = type;
> +	t->fn = fn;
> +
> +	igt_list_add_tail(&t->link, &d->triggers);
> +	igt_debug("added trigger %p\n", t);
> +}
> +
> +/**
> + * xe_eudebug_debugger_start_worker:
> + * @d: pointer to the debugger
> + *
> + * Starts the debugger worker. The worker is responsible for reading all
> + * incoming events from the debugger, putting them into the debugger's event
> + * log and executing the appropriate event triggers. Note that using the
> + * debugger's event log while the worker is running is not safe.
> + */
> +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d)
> +{
> +	int ret;
> +
> +	d->worker_state = DEBUGGER_WORKER_ACTIVE;
> +	ret = pthread_create(&d->worker_thread, NULL, &debugger_worker_loop, d);
> +
> +	igt_assert_f(ret == 0, "Debugger worker thread creation failed!\n");
> +}
> +
> +/**
> + * xe_eudebug_debugger_stop_worker:
> + * @d: pointer to the debugger
> + * @timeout_s: seconds to wait for the worker to exit before forcing it
> + *
> + * Stops the debugger worker. The event log is sorted by seqno after closure.
> + */
> +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d,
> +				     int timeout_s)
> +{
> +	struct timespec t = {};
> +	int ret;
> +
> +	igt_assert(d->worker_state);
> +
> +	d->worker_state = DEBUGGER_WORKER_QUITTING; /* First time be polite. */
> +	igt_assert_eq(clock_gettime(CLOCK_REALTIME, &t), 0);
> +	t.tv_sec += timeout_s;
> +
> +	ret = pthread_timedjoin_np(d->worker_thread, NULL, &t);
> +
> +	if (ret == ETIMEDOUT) {
> +		d->worker_state = DEBUGGER_WORKER_INACTIVE;
> +		ret = pthread_join(d->worker_thread, NULL);
> +	}
> +
> +	igt_assert_f(ret == 0 || ret == ESRCH,
> +		     "pthread join failed with error %d!\n", ret);
> +
> +	event_log_sort(d->log);
> +}
> +
> +/**
> + * xe_eudebug_debugger_signal_stage:
> + * @d: pointer to the debugger
> + * @stage: stage to signal
> + *
> + * Signals to client, waiting in xe_eudebug_client_wait_stage(),
> + * releasing it to proceed.
> + */
> +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage)
> +{
> +	token_signal(d->p_client, CLIENT_STAGE, stage);
> +}
> +
> +/**
> + * xe_eudebug_debugger_wait_stage:
> + * @s: pointer to xe_eudebug_session structure
> + * @stage: stage to wait on
> + *
> + * Pauses the debugger until the client has signalled the corresponding stage
> + * with xe_eudebug_client_signal_stage(). This is only for situations where the
> + * actual event flow is not enough to coordinate between client and debugger
> + * and an extra sync mechanism is needed.
> + */
> +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage)
> +{
> +	u64 stage_in;
> +
> +	igt_debug("debugger xe client fd: %d pausing for stage %lu\n", s->d->master_fd, stage);
> +
> +	stage_in = wait_from_client(s->c, DEBUGGER_STAGE);
> +	igt_debug("debugger xe client fd: %d stage %lu, expected %lu\n", s->d->master_fd,
> +		  stage_in, stage);
> +
> +	igt_assert_eq(stage_in, stage);
> +}
> +
> +/**
> + * xe_eudebug_client_create:
> + * @master_fd: xe client used to open the debugger connection
> + * @work: function that opens xe device and executes arbitrary workload
> + * @flags: flags stored in the client structure; can be used at the
> + * caller's discretion, e.g. to provide the @work function an additional switch.
> + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
> + * can be shared between client and debugger. Accessible via client->ptr.
> + * Can be NULL.
> + *
> + * Forks and creates the client process. @work won't be called until
> + * xe_eudebug_client_start() is called.
> + *
> + * Returns: newly created xe_eudebug_client structure with its
> + * event log initialized.
> + */
> +struct xe_eudebug_client *xe_eudebug_client_create(int master_fd, xe_eudebug_client_work_fn work,
> +						   uint64_t flags, void *data)
> +{
> +	struct xe_eudebug_client *c;
> +
> +	c = calloc(1, sizeof(*c));
> +	igt_assert(c);
> +	c->flags = flags;
> +	igt_assert(!pipe(c->p_in));
> +	igt_assert(!pipe(c->p_out));
> +	c->seqno = 1;
> +	c->log = xe_eudebug_event_log_create("client", MAX_EVENT_LOG_SIZE);
> +	c->done = 0;
> +	c->ptr = data;
> +	c->master_fd = master_fd;
> +	c->timeout_ms = XE_EUDEBUG_DEFAULT_TIMEOUT_MS;
> +
> +	igt_fork(child, 1) {
> +		int mypid;
> +
> +		igt_assert_eq(c->pid, 0);
> +
> +		close(c->p_out[0]);
> +		c->p_out[0] = -1;
> +		close(c->p_in[1]);
> +		c->p_in[1] = -1;
> +
> +		mypid = getpid();
> +		client_signal(c, CLIENT_PID, mypid);
> +
> +		c->pid = client_wait_token(c, CLIENT_RUN);
> +		igt_assert_eq(c->pid, mypid);
> +		if (work)
> +			work(c);
> +
> +		client_signal(c, CLIENT_FINI, c->seqno);
> +
> +		event_log_write_to_fd(c->log, c->p_out[1]);
> +
> +		c->pid = client_wait_token(c, CLIENT_STOP);
> +		igt_assert_eq(c->pid, mypid);
> +	}
> +
> +	close(c->p_out[1]);
> +	c->p_out[1] = -1;
> +	close(c->p_in[0]);
> +	c->p_in[0] = -1;
> +
> +	c->pid = wait_from_client(c, CLIENT_PID);
> +
> +	igt_info("client running with pid %d\n", c->pid);
> +
> +	return c;
> +}
> +
> +/**
> + * xe_eudebug_client_stop:
> + * @c: pointer to xe_eudebug_client structure
> + *
> + * Waits for the end of the client's work and exits the process.
> + */
> +void xe_eudebug_client_stop(struct xe_eudebug_client *c)
> +{
> +	if (c->pid) {
> +		int waitstatus;
> +
> +		xe_eudebug_client_wait_done(c);
> +
> +		token_signal(c->p_in, CLIENT_STOP, c->pid);
> +		igt_assert_eq(waitpid(c->pid, &waitstatus, 0),
> +			      c->pid);
> +		c->pid = 0;
> +	}
> +}
> +
> +/**
> + * xe_eudebug_client_destroy:
> + * @c: pointer to xe_eudebug_client structure to be freed
> + *
> + * Frees the @c client structure. Note that it calls xe_eudebug_client_stop() if
> + * the client process has not terminated yet.
> + */
> +void xe_eudebug_client_destroy(struct xe_eudebug_client *c)
> +{
> +	xe_eudebug_client_stop(c);
> +	pipe_close(c->p_in);
> +	pipe_close(c->p_out);
> +	xe_eudebug_event_log_destroy(c->log);
> +	free(c);
> +}
> +
> +/**
> + * xe_eudebug_client_get_seqno:
> + * @c: pointer to xe_eudebug_client structure
> + *
> + * Increments and returns current seqno value of the given client @c
> + *
> + * Returns: incremented seqno
> + */
> +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c)
> +{
> +	return c->seqno++;
> +}
> +
> +/**
> + * xe_eudebug_client_start:
> + * @c: pointer to xe_eudebug_client structure
> + *
> + * Starts execution of the client's work function within the client's process.
> + */
> +void xe_eudebug_client_start(struct xe_eudebug_client *c)
> +{
> +	token_signal(c->p_in, CLIENT_RUN, c->pid);
> +}
> +
> +/**
> + * xe_eudebug_client_wait_done:
> + * @c: pointer to xe_eudebug_client structure
> + *
> + * Waits for the client's work to finish and updates the event log.
> + * Doesn't terminate the client's process yet.
> + */
> +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c)
> +{
> +	if (!c->done) {
> +		c->done = 1;
> +		c->seqno = wait_from_client(c, CLIENT_FINI);
> +		event_log_read_from_fd(c->log, c->p_out[0]);
> +	}
> +}
> +
> +/**
> + * xe_eudebug_client_signal_stage:
> + * @c: pointer to the client
> + * @stage: stage to signal
> + *
> + * Signals to debugger, waiting in xe_eudebug_debugger_wait_stage(),
> + * releasing it to proceed.
> + */
> +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage)
> +{
> +	token_signal(c->p_out, DEBUGGER_STAGE, stage);
> +}
> +
> +/**
> + * xe_eudebug_client_wait_stage:
> + * @c: pointer to xe_eudebug_client structure
> + * @stage: stage to wait on
> + *
> + * Pauses the client until the debugger has signalled the corresponding stage
> + * with xe_eudebug_debugger_signal_stage(). This is only for situations where
> + * the actual event flow is not enough to coordinate between client and
> + * debugger and an extra sync mechanism is needed.
> + */
> +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage)
> +{
> +	u64 stage_in;
> +
> +	if (c->done) {
> +		igt_warn("client: %d already done before %lu\n", c->pid, stage);
> +		return;
> +	}
> +
> +	igt_debug("client: %d pausing for stage %lu\n", c->pid, stage);
> +
> +	stage_in = client_wait_token(c, CLIENT_STAGE);
> +	igt_debug("client: %d stage %lu, expected %lu\n", c->pid, stage_in, stage);
> +
> +	igt_assert_eq(stage_in, stage);
> +}
> +
> +/**
> + * xe_eudebug_session_create:
> + * @fd: XE file descriptor
> + * @work: function passed to the xe_eudebug_client_create
> + * @flags: flags passed to client and debugger
> + * @test_private: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
> + * passed to client and debugger. Can be NULL.
> + *
> + * Creates a session together with its client and debugger structures.
> + *
> + * Returns: newly allocated xe_eudebug_session structure.
> + */
> +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
> +						     xe_eudebug_client_work_fn work,
> +						     unsigned int flags,
> +						     void *test_private)
> +{
> +	struct xe_eudebug_session *s;
> +
> +	s = calloc(1, sizeof(*s));
> +	igt_assert(s);
> +
> +	s->c = xe_eudebug_client_create(fd, work, flags, test_private);
> +	s->d = xe_eudebug_debugger_create(fd, flags, test_private);
> +	s->flags = flags;
> +
> +	return s;
> +}
> +
> +/**
> + * xe_eudebug_session_run:
> + * @s: pointer to xe_eudebug_session structure
> + *
> + * Attaches the debugger to the client's process, starts the debugger's
> + * async event reader, starts the client and, once the client finishes,
> + * stops the debugger worker.
> + */
> +void xe_eudebug_session_run(struct xe_eudebug_session *s)
> +{
> +	struct xe_eudebug_debugger *debugger = s->d;
> +	struct xe_eudebug_client *client = s->c;
> +
> +	igt_assert_eq(xe_eudebug_debugger_attach(debugger, client), 0);
> +
> +	xe_eudebug_debugger_start_worker(debugger);
> +
> +	xe_eudebug_client_start(client);
> +	xe_eudebug_client_wait_done(client);
> +
> +	xe_eudebug_debugger_stop_worker(debugger, 1);
> +
> +	xe_eudebug_event_log_print(debugger->log, true);
> +	xe_eudebug_event_log_print(client->log, true);
> +}
> +
> +/**
> + * xe_eudebug_session_check:
> + * @s: pointer to xe_eudebug_session structure
> + * @match_opposite: indicates whether the check should pair every
> + * create event with a corresponding destroy event.
> + * @filter: mask that represents events to be skipped during comparison, useful
> + * for events like 'VM_BIND' since they can be asymmetric
> + *
> + * Validates the debugger's log against the log created by the client.
> + */
> +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter)
> +{
> +	xe_eudebug_event_log_compare(s->c->log, s->d->log, filter);
> +
> +	if (match_opposite)
> +		xe_eudebug_event_log_match_opposite(s->d->log, filter);
> +}
> +
> +/**
> + * xe_eudebug_session_destroy:
> + * @s: pointer to xe_eudebug_session structure
> + *
> + * Destroys the session together with its debugger and client.
> + */
> +void xe_eudebug_session_destroy(struct xe_eudebug_session *s)
> +{
> +	xe_eudebug_debugger_destroy(s->d);
> +	xe_eudebug_client_destroy(s->c);
> +
> +	free(s);
> +}
> +
> +#define to_base(x) ((struct drm_xe_eudebug_event *)&x)
> +
> +static void base_event(struct xe_eudebug_client *c,
> +		       struct drm_xe_eudebug_event *e,
> +		       uint32_t type,
> +		       uint32_t flags,
> +		       uint64_t size)
> +{
> +	e->type = type;
> +	e->flags = flags;
> +	e->seqno = xe_eudebug_client_get_seqno(c);
> +	e->len = size;
> +}
> +
> +static void client_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd)
> +{
> +	struct drm_xe_eudebug_event_client ec;
> +
> +	base_event(c, to_base(ec), DRM_XE_EUDEBUG_EVENT_OPEN, flags, sizeof(ec));
> +
> +	ec.client_handle = client_fd;
> +
> +	xe_eudebug_event_log_write(c->log, (void *)&ec);
> +}
> +
> +static void vm_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd, uint32_t vm_id)
> +{
> +	struct drm_xe_eudebug_event_vm evm;
> +
> +	base_event(c, to_base(evm), DRM_XE_EUDEBUG_EVENT_VM, flags, sizeof(evm));
> +
> +	evm.client_handle = client_fd;
> +	evm.vm_handle = vm_id;
> +
> +	xe_eudebug_event_log_write(c->log, (void *)&evm);
> +}
> +
> +static void exec_queue_event(struct xe_eudebug_client *c, uint32_t flags,
> +			     int client_fd, uint32_t vm_id,
> +			     uint32_t exec_queue_handle, uint16_t class,
> +			     uint16_t width)
> +{
> +	struct drm_xe_eudebug_event_exec_queue ee;
> +
> +	base_event(c, to_base(ee), DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
> +		   flags, sizeof(ee));
> +
> +	ee.client_handle = client_fd;
> +	ee.vm_handle = vm_id;
> +	ee.exec_queue_handle = exec_queue_handle;
> +	ee.engine_class = class;
> +	ee.width = width;
> +
> +	xe_eudebug_event_log_write(c->log, (void *)&ee);
> +}
> +
> +static void metadata_event(struct xe_eudebug_client *c, uint32_t flags,
> +			   int client_fd, uint32_t id, uint64_t type, uint64_t len)
> +{
> +	struct drm_xe_eudebug_event_metadata em;
> +
> +	base_event(c, to_base(em), DRM_XE_EUDEBUG_EVENT_METADATA,
> +		   flags, sizeof(em));
> +
> +	em.client_handle = client_fd;
> +	em.metadata_handle = id;
> +	em.type = type;
> +	em.len = len;
> +
> +	xe_eudebug_event_log_write(c->log, (void *)&em);
> +}
> +
> +static int enable_getset(int fd, bool *old, bool *new)
> +{
> +	static const char * const fname = "enable_eudebug";
> +	int ret = 0;
> +
> +	int sysfs, device_fd;
> +	bool val_before;
> +	struct stat st;
> +
> +	igt_assert(new || old);
> +
> +	igt_assert_eq(fstat(fd, &st), 0);
> +	sysfs = igt_sysfs_open(fd);
> +	if (sysfs < 0)
> +		return -1;
> +
> +	device_fd = openat(sysfs, "device", O_DIRECTORY | O_RDONLY);
> +	close(sysfs);
> +	if (device_fd < 0)
> +		return -1;
> +
> +	if (!__igt_sysfs_get_boolean(device_fd, fname, &val_before)) {
> +		ret = -1;
> +		goto out;
> +	}
> +
> +	igt_debug("enable_eudebug before: %d\n", val_before);
> +
> +	if (old)
> +		*old = val_before;
> +
> +	ret = 0;
> +	if (new) {
> +		if (__igt_sysfs_set_boolean(device_fd, fname, *new))
> +			igt_assert_eq(igt_sysfs_get_boolean(device_fd, fname), *new);
> +		else
> +			ret = -1;
> +	}
> +
> +out:
> +	close(device_fd);
> +	return ret;
> +}
> +
> +/**
> + * xe_eudebug_enable:
> + * @fd: xe client
> + * @enable: state toggle - true to enable, false to disable
> + *
> + * Enables/disables eudebug capability by writing to
> + * '/sys/class/drm/card<N>/device/enable_eudebug' sysfs entry.
> + *
> + * Returns: previous toggle value, i.e. true when eudebugging was enabled,
> + * false when eudebugging was disabled.
> + */
> +bool xe_eudebug_enable(int fd, bool enable)
> +{
> +	bool old = false;
> +	int ret = enable_getset(fd, &old, &enable);
> +
> +	if (ret) {
> +		igt_skip_on(enable);
> +		old = false;
> +	}
> +
> +	return old;
> +}
> +
> +/* Eu debugger wrappers around resource creating xe ioctls. */
> +
> +/**
> + * xe_eudebug_client_open_driver:
> + * @c: pointer to xe_eudebug_client structure
> + *
> + * Calls drm_reopen_driver() on the client's master fd and logs the
> + * corresponding event in client's event log.
> + *
> + * Returns: valid DRM file descriptor
> + */
> +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c)
> +{
> +	int fd;
> +
> +	fd = drm_reopen_driver(c->master_fd);
> +	client_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd);
> +
> +	return fd;
> +}
> +
> +/**
> + * xe_eudebug_client_close_driver:
> + * @c: pointer to xe_eudebug_client structure
> + * @fd: xe client
> + *
> + * Closes the given driver fd and logs the corresponding event in
> + * client's event log.
> + */
> +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd)
> +{
> +	client_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd);
> +	close(fd);
> +}
> +
> +/**
> + * xe_eudebug_client_vm_create:
> + * @c: pointer to xe_eudebug_client structure
> + * @fd: xe client
> + * @flags: vm create flags
> + * @ext: pointer to the first user extension
> + *
> + * Calls xe_vm_create() and logs the corresponding event
> + * in client's event log.
> + *
> + * Returns: valid vm handle
> + */
> +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
> +				     uint32_t flags, uint64_t ext)
> +{
> +	uint32_t vm;
> +
> +	vm = xe_vm_create(fd, flags, ext);
> +	vm_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, vm);
> +
> +	return vm;
> +}
> +
> +/**
> + * xe_eudebug_client_vm_destroy:
> + * @c: pointer to xe_eudebug_client structure
> + * @fd: xe client
> + * @vm: vm handle
> + *
> + * Calls xe_vm_destroy() and logs the corresponding event in
> + * client's event log.
> + */
> +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm)
> +{
> +	xe_vm_destroy(fd, vm);
> +	vm_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, vm);
> +}
> +
> +/**
> + * xe_eudebug_client_exec_queue_create:
> + * @c: pointer to xe_eudebug_client structure
> + * @fd: xe client
> + * @create: exec_queue create drm struct
> + *
> + * Calls xe exec queue create ioctl and logs the corresponding event in
> + * client's event log.
> + *
> + * Returns: valid exec queue handle
> + */
> +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
> +					     struct drm_xe_exec_queue_create *create)
> +{
> +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
> +
> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE, create), 0);
> +
> +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
> +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create->vm_id,
> +				 create->exec_queue_id, class, create->width);
> +
> +	return create->exec_queue_id;
> +}
> +
> +/**
> + * xe_eudebug_client_exec_queue_destroy:
> + * @c: pointer to xe_eudebug_client structure
> + * @fd: xe client
> + * @create: exec_queue create drm struct which was used for creation
> + *
> + * Calls xe exec_queue destroy ioctl and logs the corresponding event in
> + * client's event log.
> + */
> +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
> +					  struct drm_xe_exec_queue_create *create)
> +{
> +	struct drm_xe_exec_queue_destroy destroy = { .exec_queue_id = create->exec_queue_id, };
> +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
> +
> +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
> +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, create->vm_id,
> +				 create->exec_queue_id, class, create->width);
> +
> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_DESTROY, &destroy), 0);
> +}
> +
> +/**
> + * xe_eudebug_client_vm_bind_event:
> + * @c: pointer to xe_eudebug_client structure
> + * @event_flags: base event flags
> + * @fd: xe client
> + * @vm: vm handle
> + * @bind_flags: bind flags of vm_bind_event
> + * @num_binds: number of bind operations for the event
> + * @ref_seqno: output, the vm bind event's base seqno
> + *
> + * Logs a vm bind event in the client's event log.
> + */
> +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c,
> +				     uint32_t event_flags, int fd,
> +				     uint32_t vm, uint32_t bind_flags,
> +				     uint32_t num_binds, u64 *ref_seqno)
> +{
> +	struct drm_xe_eudebug_event_vm_bind evmb;
> +
> +	base_event(c, to_base(evmb), DRM_XE_EUDEBUG_EVENT_VM_BIND,
> +		   event_flags, sizeof(evmb));
> +	evmb.client_handle = fd;
> +	evmb.vm_handle = vm;
> +	evmb.flags = bind_flags;
> +	evmb.num_binds = num_binds;
> +
> +	*ref_seqno = evmb.base.seqno;
> +
> +	xe_eudebug_event_log_write(c->log, (void *)&evmb);
> +}
> +
> +/**
> + * xe_eudebug_client_vm_bind_op_event:
> + * @c: pointer to xe_eudebug_client structure
> + * @event_flags: base event flags
> + * @bind_ref_seqno: base vm bind reference seqno
> + * @op_ref_seqno: output, the vm_bind_op event seqno
> + * @addr: ppgtt address
> + * @range: size of the binding
> + * @num_extensions: number of vm bind op extensions
> + *
> + * Logs vm bind op event in client's event log.
> + */
> +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
> +					uint64_t bind_ref_seqno, uint64_t *op_ref_seqno,
> +					uint64_t addr, uint64_t range,
> +					uint64_t num_extensions)
> +{
> +	struct drm_xe_eudebug_event_vm_bind_op op;
> +
> +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP,
> +		   event_flags, sizeof(op));
> +	op.vm_bind_ref_seqno = bind_ref_seqno;
> +	op.addr = addr;
> +	op.range = range;
> +	op.num_extensions = num_extensions;
> +
> +	*op_ref_seqno = op.base.seqno;
> +
> +	xe_eudebug_event_log_write(c->log, (void *)&op);
> +}
> +
> +/**
> + * xe_eudebug_client_vm_bind_op_metadata_event:
> + * @c: pointer to xe_eudebug_client structure
> + * @event_flags: base event flags
> + * @op_ref_seqno: base vm bind op reference seqno
> + * @metadata_handle: metadata handle
> + * @metadata_cookie: metadata cookie
> + *
> + * Logs vm bind op metadata event in client's event log.
> + */
> +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
> +						 uint32_t event_flags, uint64_t op_ref_seqno,
> +						 uint64_t metadata_handle, uint64_t metadata_cookie)
> +{
> +	struct drm_xe_eudebug_event_vm_bind_op_metadata op;
> +
> +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA,
> +		   event_flags, sizeof(op));
> +	op.vm_bind_op_ref_seqno = op_ref_seqno;
> +	op.metadata_handle = metadata_handle;
> +	op.metadata_cookie = metadata_cookie;
> +
> +	xe_eudebug_event_log_write(c->log, (void *)&op);
> +}
> +
> +/**
> + * xe_eudebug_client_vm_bind_ufence_event:
> + * @c: pointer to xe_eudebug_client structure
> + * @event_flags: base event flags
> + * @ref_seqno: base vm bind event seqno
> + *
> + * Logs vm bind ufence event in client's event log.
> + */
> +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
> +					    uint64_t ref_seqno)
> +{
> +	struct drm_xe_eudebug_event_vm_bind_ufence f;
> +
> +	base_event(c, to_base(f), DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> +		   event_flags, sizeof(f));
> +	f.vm_bind_ref_seqno = ref_seqno;
> +
> +	xe_eudebug_event_log_write(c->log, (void *)&f);
> +}
> +
> +static bool has_user_fence(const struct drm_xe_sync *sync, uint32_t num_syncs)
> +{
> +	while (num_syncs--)
> +		if (sync[num_syncs].type == DRM_XE_SYNC_TYPE_USER_FENCE)
> +			return true;
> +
> +	return false;
> +}
> +
> +#define for_each_metadata(__m, __ext)					\
> +	for ((__m) = from_user_pointer(__ext);				\
> +	     (__m);							\
> +	     (__m) = from_user_pointer((__m)->base.next_extension))	\
> +		if ((__m)->base.name == XE_VM_BIND_OP_EXTENSIONS_ATTACH_DEBUG)
> +
> +static int  __xe_eudebug_client_vm_bind(struct xe_eudebug_client *c,
> +					int fd, uint32_t vm, uint32_t exec_queue,
> +					uint32_t bo, uint64_t offset,
> +					uint64_t addr, uint64_t size,
> +					uint32_t op, uint32_t flags,
> +					struct drm_xe_sync *sync,
> +					uint32_t num_syncs,
> +					uint32_t prefetch_region,
> +					uint8_t pat_index, uint64_t op_ext)
> +{
> +	struct drm_xe_vm_bind_op_ext_attach_debug *metadata;
> +	const bool ufence = has_user_fence(sync, num_syncs);
> +	const uint32_t bind_flags = ufence ?
> +		DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE : 0;
> +	uint64_t seqno = 0, op_seqno = 0, num_metadata = 0;
> +	uint32_t bind_base_flags = 0;
> +	int ret;
> +
> +	for_each_metadata(metadata, op_ext)
> +		num_metadata++;
> +
> +	switch (op) {
> +	case DRM_XE_VM_BIND_OP_MAP:
> +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_CREATE;
> +		break;
> +	case DRM_XE_VM_BIND_OP_UNMAP:
> +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_DESTROY;
> +		igt_assert_eq(num_metadata, 0);
> +		igt_assert_eq(ufence, false);
> +		break;
> +	default:
> +		/* XXX unmap all? */
> +		igt_assert(op);
> +		break;
> +	}
> +
> +	ret = ___xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
> +			    op, flags, sync, num_syncs, prefetch_region,
> +			    pat_index, 0, op_ext);
> +
> +	if (ret)
> +		return ret;
> +
> +	if (!bind_base_flags)
> +		return -EINVAL;
> +
> +	xe_eudebug_client_vm_bind_event(c, DRM_XE_EUDEBUG_EVENT_STATE_CHANGE,
> +					fd, vm, bind_flags, 1, &seqno);
> +	xe_eudebug_client_vm_bind_op_event(c, bind_base_flags,
> +					   seqno, &op_seqno, addr, size,
> +					   num_metadata);
> +
> +	for_each_metadata(metadata, op_ext)
> +		xe_eudebug_client_vm_bind_op_metadata_event(c,
> +							    DRM_XE_EUDEBUG_EVENT_CREATE,
> +							    op_seqno,
> +							    metadata->metadata_id,
> +							    metadata->cookie);
> +	if (ufence)
> +		xe_eudebug_client_vm_bind_ufence_event(c, DRM_XE_EUDEBUG_EVENT_CREATE |
> +						       DRM_XE_EUDEBUG_EVENT_NEED_ACK,
> +						       seqno);
> +	return ret;
> +}
> +
> +static void _xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd,
> +				       uint32_t vm, uint32_t bo,
> +				       uint64_t offset, uint64_t addr, uint64_t size,
> +				       uint32_t op,
> +				       uint32_t flags,
> +				       struct drm_xe_sync *sync,
> +				       uint32_t num_syncs,
> +				       uint64_t op_ext)
> +{
> +	const uint32_t exec_queue_id = 0;
> +	const uint32_t prefetch_region = 0;
> +
> +	igt_assert_eq(__xe_eudebug_client_vm_bind(c, fd, vm, exec_queue_id, bo, offset,
> +						  addr, size, op, flags,
> +						  sync, num_syncs, prefetch_region,
> +						  DEFAULT_PAT_INDEX, op_ext),
> +		      0);
> +}
> +
> +/**
> + * xe_eudebug_client_vm_bind_flags:
> + * @c: pointer to xe_eudebug_client structure
> + * @fd: xe client
> + * @vm: vm handle
> + * @bo: buffer object handle
> + * @offset: offset within buffer object
> + * @addr: ppgtt address
> + * @size: size of the binding
> + * @flags: vm_bind flags
> + * @sync: sync objects
> + * @num_syncs: number of sync objects
> + * @op_ext: BIND_OP extensions
> + *
> + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
> + */
> +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
> +				     uint32_t bo, uint64_t offset,
> +				     uint64_t addr, uint64_t size, uint32_t flags,
> +				     struct drm_xe_sync *sync, uint32_t num_syncs,
> +				     uint64_t op_ext)
> +{
> +	_xe_eudebug_client_vm_bind(c, fd, vm, bo, offset, addr, size,
> +				   DRM_XE_VM_BIND_OP_MAP, flags,
> +				   sync, num_syncs, op_ext);
> +}
> +
> +/**
> + * xe_eudebug_client_vm_bind:
> + * @c: pointer to xe_eudebug_client structure
> + * @fd: xe client
> + * @vm: vm handle
> + * @bo: buffer object handle
> + * @offset: offset within buffer object
> + * @addr: ppgtt address
> + * @size: size of the binding
> + *
> + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
> + */
> +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> +			       uint32_t bo, uint64_t offset,
> +			       uint64_t addr, uint64_t size)
> +{
> +	const uint32_t flags = 0;
> +	struct drm_xe_sync *sync = NULL;
> +	const uint32_t num_syncs = 0;
> +	const uint64_t op_ext = 0;
> +
> +	xe_eudebug_client_vm_bind_flags(c, fd, vm, bo, offset, addr, size,
> +					flags,
> +					sync, num_syncs, op_ext);
> +}
> +
> +/**
> + * xe_eudebug_client_vm_unbind_flags:
> + * @c: pointer to xe_eudebug_client structure
> + * @fd: xe client
> + * @vm: vm handle
> + * @offset: offset
> + * @addr: ppgtt address
> + * @size: size of the binding
> + * @flags: vm_bind flags
> + * @sync: sync objects
> + * @num_syncs: number of sync objects
> + *
> + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
> + */
> +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
> +				       uint32_t vm, uint64_t offset,
> +				       uint64_t addr, uint64_t size, uint32_t flags,
> +				       struct drm_xe_sync *sync, uint32_t num_syncs)
> +{
> +	_xe_eudebug_client_vm_bind(c, fd, vm, 0, offset, addr, size,
> +				   DRM_XE_VM_BIND_OP_UNMAP, flags,
> +				   sync, num_syncs, 0);
> +}
> +
> +/**
> + * xe_eudebug_client_vm_unbind:
> + * @c: pointer to xe_eudebug_client structure
> + * @fd: xe client
> + * @vm: vm handle
> + * @offset: offset
> + * @addr: ppgtt address
> + * @size: size of the binding
> + *
> + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
> + */
> +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> +				 uint64_t offset, uint64_t addr, uint64_t size)
> +{
> +	const uint32_t flags = 0;
> +	struct drm_xe_sync *sync = NULL;
> +	const uint32_t num_syncs = 0;
> +
> +	xe_eudebug_client_vm_unbind_flags(c, fd, vm, offset, addr, size,
> +					  flags, sync, num_syncs);
> +}
> +
> +/**
> + * xe_eudebug_client_metadata_create:
> + * @c: pointer to xe_eudebug_client structure
> + * @fd: xe client
> + * @type: debug metadata type
> + * @len: size of @data
> + * @data: debug metadata payload
> + *
> + * Calls xe metadata create ioctl and logs the corresponding event in
> + * client's event log.
> + *
> + * Returns: valid debug metadata id.
> + */
> +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
> +					   int type, size_t len, void *data)
> +{
> +	struct drm_xe_debug_metadata_create create = {
> +		.type = type,
> +		.user_addr = to_user_pointer(data),
> +		.len = len
> +	};
> +
> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_CREATE, &create), 0);
> +
> +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create.metadata_id, type, len);
> +
> +	return create.metadata_id;
> +}
> +
> +/**
> + * xe_eudebug_client_metadata_destroy:
> + * @c: pointer to xe_eudebug_client structure
> + * @fd: xe client
> + * @id: xe debug metadata handle
> + * @type: debug metadata type
> + * @len: size of debug metadata payload
> + *
> + * Calls xe metadata destroy ioctl and logs the corresponding event in
> + * client's event log.
> + */
> +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
> +					uint32_t id, int type, size_t len)
> +{
> +	struct drm_xe_debug_metadata_destroy destroy = { .metadata_id = id };
> +
> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_DESTROY, &destroy), 0);
> +
> +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, id, type, len);
> +}
> +
> +/**
> + * xe_eudebug_ack_ufence:
> + * @debugfd: eu debugger connection fd
> + * @f: pointer to the vm bind ufence event to ack
> + *
> + * Acks the given user fence event, allowing the blocked vm bind to proceed.
> + */
> +void xe_eudebug_ack_ufence(int debugfd,
> +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f)
> +{
> +	struct drm_xe_eudebug_ack_event ack = { 0, };
> +	char event_str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
> +
> +	ack.type = f->base.type;
> +	ack.seqno = f->base.seqno;
> +
> +	xe_eudebug_event_to_str((void *)f, event_str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
> +	igt_debug("delivering ack for event: %s\n", event_str);
> +	igt_assert_eq(igt_ioctl(debugfd, DRM_XE_EUDEBUG_IOCTL_ACK_EVENT, &ack), 0);
> +}
> diff --git a/lib/xe/xe_eudebug.h b/lib/xe/xe_eudebug.h
> new file mode 100644
> index 000000000..444f5a7b7
> --- /dev/null
> +++ b/lib/xe/xe_eudebug.h
> @@ -0,0 +1,206 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2023 Intel Corporation
> + */
> +#include <fcntl.h>
> +#include <pthread.h>
> +#include <stdint.h>
> +#include <xe_drm.h>
> +
> +#include "igt_list.h"
> +
> +struct xe_eudebug_event_log {
> +	uint8_t *log;
> +	unsigned int head;
> +	unsigned int max_size;
> +	char name[80];
> +	pthread_mutex_t lock;
> +};
> +
> +struct xe_eudebug_debugger {
> +	int fd;
> +	uint64_t flags;
> +
> +	/* Used to smuggle private data */
> +	void *ptr;
> +
> +	struct xe_eudebug_event_log *log;
> +
> +	uint64_t event_count;
> +
> +	uint64_t target_pid;
> +
> +	struct igt_list_head triggers;
> +
> +	int master_fd;
> +
> +	pthread_t worker_thread;
> +	int worker_state;
> +
> +	int p_client[2];
> +};
> +
> +struct xe_eudebug_client {
> +	int pid;
> +	uint64_t seqno;
> +	uint64_t flags;
> +
> +	/* Used to smuggle private data */
> +	void *ptr;
> +
> +	struct xe_eudebug_event_log *log;
> +
> +	int done;
> +	int p_in[2];
> +	int p_out[2];
> +
> +	/* Used to pick up the right device (the one used in the debugger) */
> +	int master_fd;
> +
> +	int timeout_ms;
> +};
> +
> +struct xe_eudebug_session {
> +	uint64_t flags;
> +	struct xe_eudebug_client *c;
> +	struct xe_eudebug_debugger *d;
> +};
> +
> +typedef void (*xe_eudebug_client_work_fn)(struct xe_eudebug_client *);
> +typedef void (*xe_eudebug_trigger_fn)(struct xe_eudebug_debugger *,
> +				      struct drm_xe_eudebug_event *);
> +
> +#define xe_eudebug_for_each_event(_e, _log) \
> +	for ((_e) = (_e) ? (void *)(uint8_t *)(_e) + (_e)->len : \
> +		    (void *)(_log)->log; \
> +	    (uint8_t *)(_e) < (_log)->log + (_log)->head; \
> +	    (_e) = (void *)(uint8_t *)(_e) + (_e)->len)
> +
> +#define xe_eudebug_assert(d, c)						\
> +	do {								\
> +		if (!(c)) {						\
> +			xe_eudebug_event_log_print((d)->log, true);	\
> +			igt_assert(c);					\
> +		}							\
> +	} while (0)
> +
> +#define xe_eudebug_assert_f(d, c, f...)					\
> +	do {								\
> +		if (!(c)) {						\
> +			xe_eudebug_event_log_print((d)->log, true);	\
> +			igt_assert_f(c, f);				\
> +		}							\
> +	} while (0)
> +
> +#define XE_EUDEBUG_EVENT_STRING_MAX_LEN		4096
> +
> +/*
> + * Default abort timeout to use across xe_eudebug lib and tests if no specific
> + * timeout value is required.
> + */
> +#define XE_EUDEBUG_DEFAULT_TIMEOUT_MS		25000ULL
> +
> +#define XE_EUDEBUG_FILTER_EVENT_NONE		BIT(DRM_XE_EUDEBUG_EVENT_NONE)
> +#define XE_EUDEBUG_FILTER_EVENT_READ		BIT(DRM_XE_EUDEBUG_EVENT_READ)
> +#define XE_EUDEBUG_FILTER_EVENT_OPEN		BIT(DRM_XE_EUDEBUG_EVENT_OPEN)
> +#define XE_EUDEBUG_FILTER_EVENT_VM		BIT(DRM_XE_EUDEBUG_EVENT_VM)
> +#define XE_EUDEBUG_FILTER_EVENT_EXEC_QUEUE	BIT(DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE)
> +#define XE_EUDEBUG_FILTER_EVENT_EU_ATTENTION	BIT(DRM_XE_EUDEBUG_EVENT_EU_ATTENTION)
> +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND		BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND)
> +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP	BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_OP)
> +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE  BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE)
> +#define XE_EUDEBUG_FILTER_ALL			GENMASK(DRM_XE_EUDEBUG_EVENT_MAX_EVENT, 0)
> +#define XE_EUDEBUG_EVENT_IS_FILTERED(_e, _f)	((1UL << (_e)) & (_f))
> +
> +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags);
> +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len);
> +struct drm_xe_eudebug_event *
> +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno);
> +struct xe_eudebug_event_log *
> +xe_eudebug_event_log_create(const char *name, unsigned int max_size);
> +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l);
> +void xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug);
> +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *c, struct xe_eudebug_event_log *d,
> +				  uint32_t filter);
> +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e);
> +void xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter);
> +
> +bool xe_eudebug_debugger_available(int fd);
> +struct xe_eudebug_debugger *
> +xe_eudebug_debugger_create(int xe, uint64_t flags, void *data);
> +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d);
> +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d, struct xe_eudebug_client *c);
> +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d);
> +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d, int timeout_s);
> +void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d);
> +void xe_eudebug_debugger_set_data(struct xe_eudebug_debugger *c, void *ptr);
> +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d, int type,
> +				     xe_eudebug_trigger_fn fn);
> +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage);
> +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage);
> +
> +struct xe_eudebug_client *
> +xe_eudebug_client_create(int xe, xe_eudebug_client_work_fn work, uint64_t flags, void *data);
> +void xe_eudebug_client_destroy(struct xe_eudebug_client *c);
> +void xe_eudebug_client_start(struct xe_eudebug_client *c);
> +void xe_eudebug_client_stop(struct xe_eudebug_client *c);
> +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c);
> +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage);
> +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage);
> +
> +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c);
> +void xe_eudebug_client_set_data(struct xe_eudebug_client *c, void *ptr);
> +
> +bool xe_eudebug_enable(int fd, bool enable);
> +
> +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c);
> +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd);
> +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
> +				     uint32_t flags, uint64_t ext);
> +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm);
> +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
> +					     struct drm_xe_exec_queue_create *create);
> +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
> +					  struct drm_xe_exec_queue_create *create);
> +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c, uint32_t event_flags, int fd,
> +				     uint32_t vm, uint32_t bind_flags,
> +				     uint32_t num_ops, uint64_t *ref_seqno);
> +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
> +					uint64_t ref_seqno, uint64_t *op_ref_seqno,
> +					uint64_t addr, uint64_t range,
> +					uint64_t num_extensions);
> +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
> +						 uint32_t event_flags, uint64_t op_ref_seqno,
> +						 uint64_t metadata_handle, uint64_t metadata_cookie);
> +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
> +					    uint64_t ref_seqno);
> +void xe_eudebug_ack_ufence(int debugfd,
> +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f);
> +
> +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
> +				     uint32_t bo, uint64_t offset,
> +				     uint64_t addr, uint64_t size, uint32_t flags,
> +				     struct drm_xe_sync *sync, uint32_t num_syncs,
> +				     uint64_t op_ext);
> +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> +			       uint32_t bo, uint64_t offset,
> +			       uint64_t addr, uint64_t size);
> +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
> +				       uint32_t vm, uint64_t offset,
> +				       uint64_t addr, uint64_t size, uint32_t flags,
> +				       struct drm_xe_sync *sync, uint32_t num_syncs);
> +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> +				 uint64_t offset, uint64_t addr, uint64_t size);
> +
> +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
> +					   int type, size_t len, void *data);
> +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
> +					uint32_t id, int type, size_t len);
> +
> +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
> +						     xe_eudebug_client_work_fn work,
> +						     unsigned int flags,
> +						     void *test_private);
> +void xe_eudebug_session_destroy(struct xe_eudebug_session *s);
> +void xe_eudebug_session_run(struct xe_eudebug_session *s);
> +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter);
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework
  2024-08-20  8:14   ` Zbigniew Kempczyński
@ 2024-08-20 16:14     ` Manszewski, Christoph
  2024-08-20 17:45       ` Kamil Konieczny
  0 siblings, 1 reply; 41+ messages in thread
From: Manszewski, Christoph @ 2024-08-20 16:14 UTC (permalink / raw)
  To: Zbigniew Kempczyński
  Cc: igt-dev, Kamil Konieczny, Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun, Mika Kuoppala,
	Karolina Stolarek

Hi Zbigniew,

On 20.08.2024 10:14, Zbigniew Kempczyński wrote:
> On Fri, Aug 09, 2024 at 02:38:03PM +0200, Christoph Manszewski wrote:
>> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>>
>> Introduce a library which simplifies testing of the eu debug capability.
>> The library provides event log helpers together with an asynchronous
>> abstraction for the client process and the debugger itself.
>>
>> xe_eudebug_client creates its own process running the user's work
>> function, and provides mechanisms to synchronize the beginning of
>> execution and event logging.
>>
>> xe_eudebug_debugger allows attaching to the given process, provides an
>> asynchronous thread for event reading and introduces triggers -
>> a callback mechanism invoked every time a subscribed event is read.
>>
>> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>> Signed-off-by: Mika Kuoppala <mika.kuaoppala@linux.intel.com>
>> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
>> Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
>> Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
>> Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
>> ---
>>   lib/meson.build     |    1 +
>>   lib/xe/xe_eudebug.c | 2192 +++++++++++++++++++++++++++++++++++++++++++
>>   lib/xe/xe_eudebug.h |  206 ++++
>>   3 files changed, 2399 insertions(+)
>>   create mode 100644 lib/xe/xe_eudebug.c
>>   create mode 100644 lib/xe/xe_eudebug.h
>>
>> diff --git a/lib/meson.build b/lib/meson.build
>> index f711e60a7..969ca4101 100644
>> --- a/lib/meson.build
>> +++ b/lib/meson.build
>> @@ -111,6 +111,7 @@ lib_sources = [
>>   	'igt_msm.c',
>>   	'igt_dsc.c',
>>   	'xe/xe_gt.c',
>> +	'xe/xe_eudebug.c',
>>   	'xe/xe_ioctl.c',
>>   	'xe/xe_mmio.c',
>>   	'xe/xe_query.c',
> 
> As eudebug is quite a big feature I think it should be separated and
> hidden behind a feature flag (check meson_options.txt), let's say
> 'xe_eudebug', which would be disabled by default. This way you can
> develop it upstream even if the kernel side is not officially merged.
> I'm pragmatic and I see no reason to block a not-yet-accepted feature,
> especially since this would imo speed up development. The final step,
> once the kernel change is accepted and merged, would be to sync with
> the uapi and remove the local definitions.
> 
> I look forward to the maintainers' comments on whether my attitude is
> acceptable.

I agree that it is a good idea. The only problem that arises is with 
'xe_exec_sip'. We add a dependency on eudebug to this test - any ideas 
on how to approach this correctly? The only thing that comes to mind is 
conditional compilation with 'ifdef' statements, but that doesn't appear 
to be pretty.

Thanks,
Christoph
> 
> --
> Zbigniew
> 
> 
>> diff --git a/lib/xe/xe_eudebug.c b/lib/xe/xe_eudebug.c
>> new file mode 100644
>> index 000000000..4eac87476
>> --- /dev/null
>> +++ b/lib/xe/xe_eudebug.c
>> @@ -0,0 +1,2192 @@
>> +// SPDX-License-Identifier: MIT
>> +/*
>> + * Copyright © 2023 Intel Corporation
>> + */
>> +
>> +#include <fcntl.h>
>> +#include <poll.h>
>> +#include <signal.h>
>> +#include <sys/select.h>
>> +#include <sys/stat.h>
>> +#include <sys/types.h>
>> +#include <sys/wait.h>
>> +
>> +#include "igt.h"
>> +#include "igt_sysfs.h"
>> +#include "intel_pat.h"
>> +#include "xe_eudebug.h"
>> +#include "xe_ioctl.h"
>> +
>> +struct event_trigger {
>> +	xe_eudebug_trigger_fn fn;
>> +	int type;
>> +	struct igt_list_head link;
>> +};
>> +
>> +struct seqno_list_entry {
>> +	struct igt_list_head link;
>> +	uint64_t seqno;
>> +};
>> +
>> +struct match_dto {
>> +	struct drm_xe_eudebug_event *target;
>> +	struct igt_list_head *seqno_list;
>> +	uint64_t client_handle;
>> +	uint32_t filter;
>> +
>> +	/* store latest 'EVENT_VM_BIND' seqno */
>> +	uint64_t *bind_seqno;
>> +	/* latest vm_bind_op seqno matching bind_seqno */
>> +	uint64_t *bind_op_seqno;
>> +};
>> +
>> +#define CLIENT_PID  1
>> +#define CLIENT_RUN  2
>> +#define CLIENT_FINI 3
>> +#define CLIENT_STOP 4
>> +#define CLIENT_STAGE 5
>> +#define DEBUGGER_STAGE 6
>> +
>> +#define DEBUGGER_WORKER_INACTIVE  0
>> +#define DEBUGGER_WORKER_ACTIVE  1
>> +#define DEBUGGER_WORKER_QUITTING 2
>> +
>> +static const char *type_to_str(unsigned int type)
>> +{
>> +	switch (type) {
>> +	case DRM_XE_EUDEBUG_EVENT_NONE:
>> +		return "none";
>> +	case DRM_XE_EUDEBUG_EVENT_READ:
>> +		return "read";
>> +	case DRM_XE_EUDEBUG_EVENT_OPEN:
>> +		return "client";
>> +	case DRM_XE_EUDEBUG_EVENT_VM:
>> +		return "vm";
>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE:
>> +		return "exec_queue";
>> +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION:
>> +		return "attention";
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND:
>> +		return "vm_bind";
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP:
>> +		return "vm_bind_op";
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE:
>> +		return "vm_bind_ufence";
>> +	case DRM_XE_EUDEBUG_EVENT_METADATA:
>> +		return "metadata";
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA:
>> +		return "vm_bind_op_metadata";
>> +	}
>> +
>> +	return "UNKNOWN";
>> +}
>> +
>> +static const char *event_type_to_str(struct drm_xe_eudebug_event *e, char *buf)
>> +{
>> +	sprintf(buf, "%s(%d)", type_to_str(e->type), e->type);
>> +
>> +	return buf;
>> +}
>> +
>> +static const char *flags_to_str(unsigned int flags)
>> +{
>> +	if (flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
>> +		if (flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK)
>> +			return "create|ack";
>> +		else
>> +			return "create";
>> +	}
>> +	if (flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
>> +		return "destroy";
>> +
>> +	if (flags & DRM_XE_EUDEBUG_EVENT_STATE_CHANGE)
>> +		return "state-change";
>> +
>> +	igt_assert(!(flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK));
>> +
>> +	return "flags unknown";
>> +}
>> +
>> +static const char *event_members_to_str(struct drm_xe_eudebug_event *e, char *b)
>> +{
>> +	switch (e->type) {
>> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
>> +		struct drm_xe_eudebug_event_client *ec = (struct drm_xe_eudebug_event_client *)e;
>> +
>> +		sprintf(b, "handle=%llu", ec->client_handle);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM: {
>> +		struct drm_xe_eudebug_event_vm *evm = (struct drm_xe_eudebug_event_vm *)e;
>> +
>> +		sprintf(b, "client_handle=%llu, handle=%llu",
>> +			evm->client_handle, evm->vm_handle);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>> +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
>> +
>> +		sprintf(b, "client_handle=%llu, vm_handle=%llu, "
>> +			   "exec_queue_handle=%llu, engine_class=%d, exec_queue_width=%d",
>> +			ee->client_handle, ee->vm_handle,
>> +			ee->exec_queue_handle, ee->engine_class, ee->width);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION: {
>> +		struct drm_xe_eudebug_event_eu_attention *ea = (void *)e;
>> +
>> +		sprintf(b, "client_handle=%llu, exec_queue_handle=%llu, "
>> +			   "lrc_handle=%llu, bitmask_size=%d",
>> +			ea->client_handle, ea->exec_queue_handle,
>> +			ea->lrc_handle, ea->bitmask_size);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>> +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
>> +
>> +		sprintf(b, "client_handle=%llu, vm_handle=%llu, flags=0x%x, num_binds=%u",
>> +			evmb->client_handle, evmb->vm_handle, evmb->flags, evmb->num_binds);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>> +		struct drm_xe_eudebug_event_vm_bind_op *op = (void *)e;
>> +
>> +		sprintf(b, "vm_bind_ref_seqno=%lld, addr=%016llx, range=%llu num_extensions=%llu",
>> +			op->vm_bind_ref_seqno, op->addr, op->range, op->num_extensions);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
>> +		struct drm_xe_eudebug_event_vm_bind_ufence *f = (void *)e;
>> +
>> +		sprintf(b, "vm_bind_ref_seqno=%lld", f->vm_bind_ref_seqno);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
>> +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
>> +
>> +		sprintf(b, "client_handle=%llu, metadata_handle=%llu, type=%llu, len=%llu",
>> +			em->client_handle, em->metadata_handle, em->type, em->len);
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *op = (void *)e;
>> +
>> +		sprintf(b, "vm_bind_op_ref_seqno=%lld, metadata_handle=%llu, metadata_cookie=%llu",
>> +			op->vm_bind_op_ref_seqno, op->metadata_handle, op->metadata_cookie);
>> +		break;
>> +	}
>> +	default:
>> +		strcpy(b, "<...>");
>> +	}
>> +
>> +	return b;
>> +}
>> +
>> +/**
>> + * xe_eudebug_event_to_str:
>> + * @e: pointer to event
>> + * @buf: target to write string representation of @e
>> + * @len: size of target buffer @buf
>> + *
>> + * Creates a string representation of the given event.
>> + *
>> + * Returns: the written input buffer pointed to by @buf.
>> + */
>> +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len)
>> +{
>> +	char a[256];
>> +	char b[256];
>> +
>> +	snprintf(buf, len, "(%llu) %15s:%s: %s",
>> +		 e->seqno,
>> +		 event_type_to_str(e, a),
>> +		 flags_to_str(e->flags),
>> +		 event_members_to_str(e, b));
>> +
>> +	return buf;
>> +}
>> +
>> +static void catch_child_failure(void)
>> +{
>> +	pid_t pid;
>> +	int status;
>> +
>> +	pid = waitpid(-1, &status, WNOHANG);
>> +
>> +	if (pid == 0 || pid == -1)
>> +		return;
>> +
>> +	if (!WIFEXITED(status))
>> +		return;
>> +
>> +	igt_assert_f(WEXITSTATUS(status) == 0, "Client failed!\n");
>> +}
>> +
>> +static int safe_pipe_read(int pipe[2], void *buf, int nbytes, int timeout_ms)
>> +{
>> +	int ret;
>> +	int t = 0;
>> +	struct pollfd fd = {
>> +		.fd = pipe[0],
>> +		.events = POLLIN,
>> +		.revents = 0
>> +	};
>> +
>> +	/* When child fails we may get stuck forever. Check whether
>> +	 * the child process ended with an error.
>> +	 */
>> +	do {
>> +		const int interval_ms = 1000;
>> +
>> +		ret = poll(&fd, 1, interval_ms);
>> +
>> +		if (!ret) {
>> +			catch_child_failure();
>> +			t += interval_ms;
>> +		}
>> +	} while (!ret && t < timeout_ms);
>> +
>> +	if (ret > 0)
>> +		return read(pipe[0], buf, nbytes);
>> +
>> +	return 0;
>> +}
>> +
>> +static uint64_t pipe_read(int pipe[2], int timeout_ms)
>> +{
>> +	uint64_t in;
>> +	uint64_t ret;
>> +
>> +	ret = safe_pipe_read(pipe, &in, sizeof(in), timeout_ms);
>> +	igt_assert(ret == sizeof(in));
>> +
>> +	return in;
>> +}
>> +
>> +static void pipe_signal(int pipe[2], uint64_t token)
>> +{
>> +	igt_assert(write(pipe[1], &token, sizeof(token)) == sizeof(token));
>> +}
>> +
>> +static void pipe_close(int pipe[2])
>> +{
>> +	if (pipe[0] != -1)
>> +		close(pipe[0]);
>> +
>> +	if (pipe[1] != -1)
>> +		close(pipe[1]);
>> +}
>> +
>> +static uint64_t __wait_token(int p[2], const uint64_t token, int timeout_ms)
>> +{
>> +	uint64_t in;
>> +
>> +	in = pipe_read(p, timeout_ms);
>> +
>> +	igt_assert_eq(in, token);
>> +
>> +	return pipe_read(p, timeout_ms);
>> +}
>> +
>> +static uint64_t client_wait_token(struct xe_eudebug_client *c,
>> +				 const uint64_t token)
>> +{
>> +	return __wait_token(c->p_in, token, c->timeout_ms);
>> +}
>> +
>> +static uint64_t wait_from_client(struct xe_eudebug_client *c,
>> +				 const uint64_t token)
>> +{
>> +	return __wait_token(c->p_out, token, c->timeout_ms);
>> +}
>> +
>> +static void token_signal(int p[2], const uint64_t token, const uint64_t value)
>> +{
>> +	pipe_signal(p, token);
>> +	pipe_signal(p, value);
>> +}
>> +
>> +static void client_signal(struct xe_eudebug_client *c,
>> +			  const uint64_t token,
>> +			  const uint64_t value)
>> +{
>> +	token_signal(c->p_out, token, value);
>> +}
>> +
>> +static int __xe_eudebug_connect(int fd, pid_t pid, uint32_t flags, uint64_t events)
>> +{
>> +	struct drm_xe_eudebug_connect param = {
>> +		.pid = pid,
>> +		.flags = flags,
>> +	};
>> +	int debugfd;
>> +
>> +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
>> +
>> +	if (debugfd < 0)
>> +		return -errno;
>> +
>> +	return debugfd;
>> +}
>> +
>> +static void event_log_write_to_fd(struct xe_eudebug_event_log *l, int fd)
>> +{
>> +	igt_assert_eq(write(fd, &l->head, sizeof(l->head)),
>> +		      sizeof(l->head));
>> +
>> +	igt_assert_eq(write(fd, l->log, l->head), l->head);
>> +}
>> +
>> +static void read_all(int fd, void *buf, size_t nbytes)
>> +{
>> +	ssize_t remaining_size = nbytes;
>> +	ssize_t current_size = 0;
>> +	ssize_t read_size = 0;
>> +
>> +	do {
>> +		read_size = read(fd, buf + current_size, remaining_size);
>> +		igt_assert_f(read_size >= 0, "read failed: %s\n", strerror(errno));
>> +
>> +		current_size += read_size;
>> +		remaining_size -= read_size;
>> +	} while (remaining_size > 0 && read_size > 0);
>> +
>> +	igt_assert_eq(current_size, nbytes);
>> +}
>> +
>> +static void event_log_read_from_fd(struct xe_eudebug_event_log *l, int fd)
>> +{
>> +	read_all(fd, &l->head, sizeof(l->head));
>> +	igt_assert_lt(l->head, l->max_size);
>> +
>> +	read_all(fd, l->log, l->head);
>> +}
>> +
>> +typedef int (*cmp_fn_t)(struct drm_xe_eudebug_event *, void *);
>> +
>> +static struct drm_xe_eudebug_event *
>> +event_cmp(struct xe_eudebug_event_log *l,
>> +	  struct drm_xe_eudebug_event *current,
>> +	  cmp_fn_t match,
>> +	  void *data)
>> +{
>> +	struct drm_xe_eudebug_event *e = current;
>> +
>> +	xe_eudebug_for_each_event(e, l) {
>> +		if (match(e, data))
>> +			return e;
>> +	}
>> +
>> +	return NULL;
>> +}
>> +
>> +static int match_type_and_flags(struct drm_xe_eudebug_event *a, void *data)
>> +{
>> +	struct drm_xe_eudebug_event *b = data;
>> +
>> +	if (a->type == b->type &&
>> +	    a->flags == b->flags)
>> +		return 1;
>> +
>> +	return 0;
>> +}
>> +
>> +static int match_fields(struct drm_xe_eudebug_event *a, void *data)
>> +{
>> +	struct drm_xe_eudebug_event *b = data;
>> +	int ret = 0;
>> +
>> +	ret = match_type_and_flags(a, data);
>> +	if (!ret)
>> +		return ret;
>> +
>> +	ret = 0;
>> +
>> +	switch (a->type) {
>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>> +		struct drm_xe_eudebug_event_exec_queue *ae = (void *)a;
>> +		struct drm_xe_eudebug_event_exec_queue *be = (void *)b;
>> +
>> +		if (ae->engine_class == be->engine_class && ae->width == be->width)
>> +			ret = 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>> +		struct drm_xe_eudebug_event_vm_bind *ea = (void *)a;
>> +		struct drm_xe_eudebug_event_vm_bind *eb = (void *)b;
>> +
>> +		if (ea->num_binds == eb->num_binds)
>> +			ret = 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>> +		struct drm_xe_eudebug_event_vm_bind_op *ea = (void *)a;
>> +		struct drm_xe_eudebug_event_vm_bind_op *eb = (void *)b;
>> +
>> +		if (ea->addr == eb->addr && ea->range == eb->range &&
>> +		    ea->num_extensions == eb->num_extensions)
>> +			ret = 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *ea = (void *)a;
>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eb = (void *)b;
>> +
>> +		if (ea->metadata_handle == eb->metadata_handle &&
>> +		    ea->metadata_cookie == eb->metadata_cookie)
>> +			ret = 1;
>> +		break;
>> +	}
>> +
>> +	default:
>> +		ret = 1;
>> +		break;
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +static int match_client_handle(struct drm_xe_eudebug_event *e, void *data)
>> +{
>> +	struct match_dto *md = (void *)data;
>> +	uint64_t *bind_seqno = md->bind_seqno;
>> +	uint64_t *bind_op_seqno = md->bind_op_seqno;
>> +	uint64_t h = md->client_handle;
>> +
>> +	if (XE_EUDEBUG_EVENT_IS_FILTERED(e->type, md->filter))
>> +		return 0;
>> +
>> +	switch (e->type) {
>> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
>> +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
>> +
>> +		if (client->client_handle == h)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM: {
>> +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
>> +
>> +		if (vm->client_handle == h)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>> +		struct drm_xe_eudebug_event_exec_queue *ee = (struct drm_xe_eudebug_event_exec_queue *)e;
>> +
>> +		if (ee->client_handle == h)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>> +		struct drm_xe_eudebug_event_vm_bind *evmb = (struct drm_xe_eudebug_event_vm_bind *)e;
>> +
>> +		if (evmb->client_handle == h) {
>> +			*bind_seqno = evmb->base.seqno;
>> +			return 1;
>> +		}
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>> +		struct drm_xe_eudebug_event_vm_bind_op *eo = (struct drm_xe_eudebug_event_vm_bind_op *)e;
>> +
>> +		if (eo->vm_bind_ref_seqno == *bind_seqno) {
>> +			*bind_op_seqno = eo->base.seqno;
>> +			return 1;
>> +		}
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
>> +		struct drm_xe_eudebug_event_vm_bind_ufence *ef  = (struct drm_xe_eudebug_event_vm_bind_ufence *)e;
>> +
>> +		if (ef->vm_bind_ref_seqno == *bind_seqno)
>> +			return 1;
>> +
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
>> +		struct drm_xe_eudebug_event_metadata *em = (struct drm_xe_eudebug_event_metadata *)e;
>> +
>> +		if (em->client_handle == h)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eo = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)e;
>> +
>> +		if (eo->vm_bind_op_ref_seqno == *bind_op_seqno)
>> +			return 1;
>> +		break;
>> +	}
>> +	default:
>> +		break;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int match_opposite_resource(struct drm_xe_eudebug_event *e, void *data)
>> +{
>> +	struct drm_xe_eudebug_event *d = (void *)data;
>> +	int ret;
>> +
>> +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
>> +	d->flags &= ~(DRM_XE_EUDEBUG_EVENT_NEED_ACK);
>> +	ret = match_type_and_flags(e, data);
>> +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
>> +
>> +	if (!ret)
>> +		return 0;
>> +
>> +	switch (e->type) {
>> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
>> +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
>> +		struct drm_xe_eudebug_event_client *filter = (struct drm_xe_eudebug_event_client *)data;
>> +
>> +		if (client->client_handle == filter->client_handle)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM: {
>> +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
>> +		struct drm_xe_eudebug_event_vm *filter = (struct drm_xe_eudebug_event_vm *)data;
>> +
>> +		if (vm->vm_handle == filter->vm_handle)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>> +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
>> +		struct drm_xe_eudebug_event_exec_queue *filter = (struct drm_xe_eudebug_event_exec_queue *)data;
>> +
>> +		if (ee->exec_queue_handle == filter->exec_queue_handle)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>> +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
>> +		struct drm_xe_eudebug_event_vm_bind *filter = (struct drm_xe_eudebug_event_vm_bind *)data;
>> +
>> +		if (evmb->vm_handle == filter->vm_handle &&
>> +		    evmb->num_binds == filter->num_binds)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>> +		struct drm_xe_eudebug_event_vm_bind_op *avmb = (void *)e;
>> +		struct drm_xe_eudebug_event_vm_bind_op *filter = (struct drm_xe_eudebug_event_vm_bind_op *)data;
>> +
>> +		if (avmb->addr == filter->addr &&
>> +		    avmb->range == filter->range)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
>> +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
>> +		struct drm_xe_eudebug_event_metadata *filter = (struct drm_xe_eudebug_event_metadata *)data;
>> +
>> +		if (em->metadata_handle == filter->metadata_handle)
>> +			return 1;
>> +		break;
>> +	}
>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *avmb = (void *)e;
>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *filter = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)data;
>> +
>> +		if (avmb->metadata_handle == filter->metadata_handle &&
>> +		    avmb->metadata_cookie == filter->metadata_cookie)
>> +			return 1;
>> +		break;
>> +	}
>> +
>> +	default:
>> +		break;
>> +	}
>> +	return 0;
>> +}
>> +
>> +static int match_full(struct drm_xe_eudebug_event *e, void *data)
>> +{
>> +	struct seqno_list_entry *sl;
>> +
>> +	struct match_dto *md = (void *)data;
>> +	int ret = 0;
>> +
>> +	ret = match_client_handle(e, md);
>> +	if (!ret)
>> +		return 0;
>> +
>> +	ret = match_fields(e, md->target);
>> +	if (!ret)
>> +		return 0;
>> +
>> +	igt_list_for_each_entry(sl, md->seqno_list, link) {
>> +		if (sl->seqno == e->seqno)
>> +			return 0;
>> +	}
>> +
>> +	return 1;
>> +}
>> +
>> +static struct drm_xe_eudebug_event *
>> +event_type_match(struct xe_eudebug_event_log *l,
>> +		 struct drm_xe_eudebug_event *target,
>> +		 struct drm_xe_eudebug_event *current)
>> +{
>> +	return event_cmp(l, current, match_type_and_flags, target);
>> +}
>> +
>> +static struct drm_xe_eudebug_event *
>> +client_match(struct xe_eudebug_event_log *l,
>> +	     uint64_t client_handle,
>> +	     struct drm_xe_eudebug_event *current,
>> +	     uint32_t filter,
>> +	     uint64_t *bind_seqno,
>> +	     uint64_t *bind_op_seqno)
>> +{
>> +	struct match_dto md = {
>> +		.client_handle = client_handle,
>> +		.filter = filter,
>> +		.bind_seqno = bind_seqno,
>> +		.bind_op_seqno = bind_op_seqno,
>> +	};
>> +
>> +	return event_cmp(l, current, match_client_handle, &md);
>> +}
>> +
>> +static struct drm_xe_eudebug_event *
>> +opposite_event_match(struct xe_eudebug_event_log *l,
>> +		    struct drm_xe_eudebug_event *target,
>> +		    struct drm_xe_eudebug_event *current)
>> +{
>> +	return event_cmp(l, current, match_opposite_resource, target);
>> +}
>> +
>> +static struct drm_xe_eudebug_event *
>> +event_match(struct xe_eudebug_event_log *l,
>> +	    struct drm_xe_eudebug_event *target,
>> +	    uint64_t client_handle,
>> +	    struct igt_list_head *seqno_list,
>> +	    uint64_t *bind_seqno,
>> +	    uint64_t *bind_op_seqno)
>> +{
>> +	struct match_dto md = {
>> +		.target = target,
>> +		.client_handle = client_handle,
>> +		.seqno_list = seqno_list,
>> +		.bind_seqno = bind_seqno,
>> +		.bind_op_seqno = bind_op_seqno,
>> +	};
>> +
>> +	return event_cmp(l, NULL, match_full, &md);
>> +}
>> +
>> +static void compare_client(struct xe_eudebug_event_log *c, struct drm_xe_eudebug_event *_ce,
>> +			   struct xe_eudebug_event_log *d, struct drm_xe_eudebug_event *_de,
>> +			   uint32_t filter)
>> +{
>> +	struct drm_xe_eudebug_event_client *ce = (void *)_ce;
>> +	struct drm_xe_eudebug_event_client *de = (void *)_de;
>> +	uint64_t cbs = 0, dbs = 0, cbso = 0, dbso = 0;
>> +
>> +	struct igt_list_head matched_seqno_list;
>> +	struct drm_xe_eudebug_event *hc, *hd;
>> +	struct seqno_list_entry *entry, *tmp;
>> +
>> +	igt_assert(ce);
>> +	igt_assert(de);
>> +
>> +	igt_debug("client: %llu -> %llu\n", ce->client_handle, de->client_handle);
>> +
>> +	hc = NULL;
>> +	hd = NULL;
>> +	IGT_INIT_LIST_HEAD(&matched_seqno_list);
>> +
>> +	do {
>> +		hc = client_match(c, ce->client_handle, hc, filter, &cbs, &cbso);
>> +		if (!hc)
>> +			break;
>> +
>> +		hd = event_match(d, hc, de->client_handle, &matched_seqno_list, &dbs, &dbso);
>> +
>> +		igt_assert_f(hd, "%s (%llu): no matching event type %u found for client %llu\n",
>> +			     c->name,
>> +			     hc->seqno,
>> +			     hc->type,
>> +			     ce->client_handle);
>> +
>> +		igt_debug("comparing %s %llu vs %s %llu\n",
>> +			  c->name, hc->seqno, d->name, hd->seqno);
>> +
>> +		/*
>> +		 * Store the seqno of the event that was matched above,
>> +		 * inside 'matched_seqno_list', to avoid it getting matched
>> +		 * by subsequent 'event_match' calls.
>> +		 */
>> +		entry = malloc(sizeof(*entry));
>> +		igt_assert(entry);
>> +		entry->seqno = hd->seqno;
>> +		igt_list_add(&entry->link, &matched_seqno_list);
>> +	} while (hc);
>> +
>> +	igt_list_for_each_entry_safe(entry, tmp, &matched_seqno_list, link)
>> +		free(entry);
>> +}
>> +
>> +/**
>> + * xe_eudebug_event_log_find_seqno:
>> + * @l: event log pointer
>> + * @seqno: seqno of event to be found
>> + *
>> + * Finds the event with given seqno in the event log.
>> + *
>> + * Returns: pointer to the event with given seqno within @l, or NULL if
>> + * seqno is not present.
>> + */
>> +struct drm_xe_eudebug_event *
>> +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno)
>> +{
>> +	struct drm_xe_eudebug_event *e = NULL, *found = NULL;
>> +
>> +	igt_assert_neq(seqno, 0);
>> +	/*
>> +	 * Try to catch if seqno is corrupted and prevent too long tests,
>> +	 * as our post processing of events is not optimized.
>> +	 */
>> +	igt_assert_lt(seqno, 10 * 1000 * 1000);
>> +
>> +	xe_eudebug_for_each_event(e, l) {
>> +		if (e->seqno == seqno) {
>> +			if (found) {
>> +				igt_warn("Found multiple events with the same seqno %lu\n", seqno);
>> +				xe_eudebug_event_log_print(l, false);
>> +				igt_assert(!found);
>> +			}
>> +			found = e;
>> +		}
>> +	}
>> +
>> +	return found;
>> +}
>> +
>> +static void event_log_sort(struct xe_eudebug_event_log *l)
>> +{
>> +	struct xe_eudebug_event_log *tmp;
>> +	struct drm_xe_eudebug_event *e = NULL;
>> +	uint64_t last_seqno = 0;
>> +	uint64_t events = 0, added = 0;
>> +	uint64_t i;
>> +
>> +	xe_eudebug_for_each_event(e, l) {
>> +		if (e->seqno > last_seqno)
>> +			last_seqno = e->seqno;
>> +
>> +		events++;
>> +	}
>> +
>> +	tmp = xe_eudebug_event_log_create("tmp", l->max_size);
>> +
>> +	for (i = 1; i <= last_seqno; i++) {
>> +		e = xe_eudebug_event_log_find_seqno(l, i);
>> +		if (e) {
>> +			xe_eudebug_event_log_write(tmp, e);
>> +			added++;
>> +		}
>> +	}
>> +
>> +	igt_assert_eq(events, added);
>> +	igt_assert_eq(tmp->head, l->head);
>> +
>> +	memcpy(l->log, tmp->log, tmp->head);
>> +
>> +	xe_eudebug_event_log_destroy(tmp);
>> +}
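As context: event_log_sort() above is effectively an O(n * max_seqno) selection done by repeated seqno lookup. A minimal standalone sketch of the same idea, assuming a plain array of events (struct ev, find_seqno and sort_by_seqno are illustrative names, not part of the library):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct ev { uint64_t seqno; };

/* Linear lookup of an event by seqno; returns NULL when absent. */
static struct ev *find_seqno(struct ev *log, size_t n, uint64_t seqno)
{
	for (size_t i = 0; i < n; i++)
		if (log[i].seqno == seqno)
			return &log[i];
	return NULL;
}

/* Copy events out in ascending seqno order; returns the number copied,
 * which must equal n when seqnos are unique and dense enough. */
static size_t sort_by_seqno(struct ev *log, size_t n, struct ev *out)
{
	uint64_t last = 0;
	size_t added = 0;

	for (size_t i = 0; i < n; i++)
		if (log[i].seqno > last)
			last = log[i].seqno;

	for (uint64_t s = 1; s <= last; s++) {
		struct ev *e = find_seqno(log, n, s);

		if (e)
			out[added++] = *e;
	}
	return added;
}
```

This mirrors the quadratic scan in the patch; it is fine for test-sized logs, which is presumably why the code caps seqnos at 10 million.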
>> +
>> +/**
>> + * xe_eudebug_connect:
>> + * @fd: Xe file descriptor
>> + * @pid: client PID
>> + * @flags: connection flags
>> + *
>> + * Opens the xe eu debugger connection to the process described by @pid
>> + *
>> + * Returns: 0 if the debugger was successfully attached, -errno otherwise.
>> + */
>> +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags)
>> +{
>> +	int ret;
>> +	uint64_t events = 0; /* events filtering not supported yet! */
>> +
>> +	ret = __xe_eudebug_connect(fd, pid, flags, events);
>> +
>> +	return ret;
>> +}
>> +
>> +/**
>> + * xe_eudebug_event_log_create:
>> + * @name: event log identifier
>> + * @max_size: maximum size of created log
>> + *
>> + * Creates an EU debugger event log with a maximum size of @max_size bytes.
>> + *
>> + * Returns: pointer to just created log
>> + */
>> +#define MAX_EVENT_LOG_SIZE (32 * 1024 * 1024)
>> +struct xe_eudebug_event_log *xe_eudebug_event_log_create(const char *name, unsigned int max_size)
>> +{
>> +	struct xe_eudebug_event_log *l;
>> +
>> +	l = calloc(1, sizeof(*l));
>> +	igt_assert(l);
>> +	l->log = calloc(1, max_size);
>> +	igt_assert(l->log);
>> +	l->max_size = max_size;
>> +	strncpy(l->name, name, sizeof(l->name) - 1);
>> +	pthread_mutex_init(&l->lock, NULL);
>> +
>> +	return l;
>> +}
>> +
>> +/**
>> + * xe_eudebug_event_log_destroy:
>> + * @l: event log pointer
>> + *
>> + * Frees given event log @l.
>> + */
>> +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l)
>> +{
>> +	pthread_mutex_destroy(&l->lock);
>> +	free(l->log);
>> +	free(l);
>> +}
>> +
>> +/**
>> + * xe_eudebug_event_log_write:
>> + * @l: event log pointer
>> + * @e: event to be written to event log
>> + *
>> + * Writes event @e to the event log, thread-safe.
>> + */
>> +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e)
>> +{
>> +	igt_assert(e->seqno);
>> +	/*
>> +	 * Try to catch if seqno is corrupted and prevent too long tests,
>> +	 * as our post processing of events is not optimized.
>> +	 */
>> +	igt_assert_lt(e->seqno, 10 * 1000 * 1000);
>> +
>> +	pthread_mutex_lock(&l->lock);
>> +	igt_assert_lt(l->head + e->len, l->max_size);
>> +	memcpy(l->log + l->head, e, e->len);
>> +	l->head += e->len;
>> +
>> +#ifdef DEBUG_LOG
>> +	igt_info("%s: wrote %u bytes to eventlog, free %u bytes\n",
>> +		 l->name, e->len, l->max_size - l->head);
>> +#endif
>> +	pthread_mutex_unlock(&l->lock);
>> +}
>> +
>> +/**
>> + * xe_eudebug_event_log_print:
>> + * @l: event log pointer
>> + * @debug: when true, the function uses igt_debug instead of igt_info.
>> + *
>> + * Prints given event log.
>> + */
>> +void
>> +xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug)
>> +{
>> +	struct drm_xe_eudebug_event *e = NULL;
>> +	int level = debug ? IGT_LOG_DEBUG : IGT_LOG_INFO;
>> +	char str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
>> +
>> +	igt_log(IGT_LOG_DOMAIN, level,
>> +		"event log '%s' (%u bytes):\n", l->name, l->head);
>> +
>> +	xe_eudebug_for_each_event(e, l) {
>> +		xe_eudebug_event_to_str(e, str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
>> +		igt_log(IGT_LOG_DOMAIN, level, "%s\n", str);
>> +	}
>> +}
>> +
>> +/**
>> + * xe_eudebug_event_log_compare:
>> + * @a: event log pointer
>> + * @b: event log pointer
>> + * @filter: mask that represents events to be skipped during comparison, useful
>> + * for events like 'VM_BIND' since they can be asymmetric. Note that
>> + * 'DRM_XE_EUDEBUG_EVENT_OPEN' will always be matched.
>> + *
>> + * Compares and asserts event logs @a, @b if the event
>> + * sequence matches.
>> + */
>> +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *a, struct xe_eudebug_event_log *b,
>> +				  uint32_t filter)
>> +{
>> +	struct drm_xe_eudebug_event *ae = NULL;
>> +	struct drm_xe_eudebug_event *be = NULL;
>> +
>> +	xe_eudebug_for_each_event(ae, a) {
>> +		if (ae->type == DRM_XE_EUDEBUG_EVENT_OPEN &&
>> +		    ae->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
>> +			be = event_type_match(b, ae, be);
>> +
>> +			compare_client(a, ae, b, be, filter);
>> +			compare_client(b, be, a, ae, filter);
>> +		}
>> +	}
>> +}
>> +
>> +/**
>> + * xe_eudebug_event_log_match_opposite:
>> + * @l: event log pointer
>> + * @filter: mask that represents events to be skipped during comparison, useful
>> + * for events like 'VM_BIND' since they can be asymmetric
>> + *
>> + * Matches and asserts content of all opposite events (create vs destroy).
>> + */
>> +void
>> +xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter)
>> +{
>> +	struct drm_xe_eudebug_event *ce = NULL;
>> +	struct drm_xe_eudebug_event *de = NULL;
>> +
>> +	xe_eudebug_for_each_event(ce, l) {
>> +		if (ce->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
>> +			uint8_t offset = sizeof(struct drm_xe_eudebug_event);
>> +			int opposite_matching;
>> +
>> +			if (XE_EUDEBUG_EVENT_IS_FILTERED(ce->type, filter))
>> +				continue;
>> +
>> +			/* No opposite matching for binds */
>> +			if ((ce->type >= DRM_XE_EUDEBUG_EVENT_VM_BIND &&
>> +			     ce->type <= DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE) ||
>> +			    ce->type == DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA)
>> +				continue;
>> +
>> +			de = opposite_event_match(l, ce, ce);
>> +
>> +			igt_assert_f(de, "no opposite event of type %u found\n", ce->type);
>> +
>> +			igt_assert_eq(ce->len, de->len);
>> +			opposite_matching = memcmp((uint8_t *)de + offset,
>> +						   (uint8_t *)ce + offset,
>> +						   de->len - offset) == 0;
>> +
>> +			igt_assert_f(opposite_matching,
>> +				     "%s: create|destroy event not "
>> +				     "matching (%llu) vs (%llu)\n",
>> +				     l->name, de->seqno, ce->seqno);
>> +		}
>> +	}
>> +}
>> +
>> +static void debugger_run_triggers(struct xe_eudebug_debugger *d,
>> +				  struct drm_xe_eudebug_event *e)
>> +{
>> +	struct event_trigger *t;
>> +
>> +	igt_list_for_each_entry(t, &d->triggers, link) {
>> +		if (e->type == t->type)
>> +			t->fn(d, e);
>> +	}
>> +}
>> +
>> +#define MAX_EVENT_SIZE (32 * 1024)
>> +static int
>> +xe_eudebug_read_event(int fd, struct drm_xe_eudebug_event *event)
>> +{
>> +	int ret;
>> +
>> +	event->type = DRM_XE_EUDEBUG_EVENT_READ;
>> +	event->flags = 0;
>> +	event->len = MAX_EVENT_SIZE;
>> +
>> +	ret = igt_ioctl(fd, DRM_XE_EUDEBUG_IOCTL_READ_EVENT, event);
>> +	if (ret < 0)
>> +		return -errno;
>> +
>> +	return ret;
>> +}
>> +
>> +static void *debugger_worker_loop(void *data)
>> +{
>> +	uint8_t buf[MAX_EVENT_SIZE];
>> +	struct drm_xe_eudebug_event *e = (void *)buf;
>> +	struct xe_eudebug_debugger *d = data;
>> +	struct pollfd p = {
>> +		.events = POLLIN,
>> +		.revents = 0,
>> +	};
>> +	int timeout_ms = 100, ret;
>> +
>> +	igt_assert(d->master_fd >= 0);
>> +
>> +	do {
>> +		p.fd = d->fd;
>> +		ret = poll(&p, 1, timeout_ms);
>> +
>> +		if (ret == -1) {
>> +			igt_info("poll failed with errno %d\n", errno);
>> +			break;
>> +		}
>> +
>> +		if (ret == 1 && (p.revents & POLLIN)) {
>> +			int err = xe_eudebug_read_event(d->fd, e);
>> +
>> +			if (!err) {
>> +				++d->event_count;
>> +
>> +				xe_eudebug_event_log_write(d->log, e);
>> +				debugger_run_triggers(d, e);
>> +			} else {
>> +				igt_info("xe_eudebug_read_event returned %d\n", err);
>> +			}
>> +		}
>> +	} while ((ret && READ_ONCE(d->worker_state) == DEBUGGER_WORKER_QUITTING) ||
>> +		 READ_ONCE(d->worker_state) == DEBUGGER_WORKER_ACTIVE);
>> +
>> +	d->worker_state = DEBUGGER_WORKER_INACTIVE;
>> +	return NULL;
>> +}
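The worker loop above is a standard poll-and-drain pattern: poll with a short timeout, read while POLLIN is set, stop on timeout or hangup. A self-contained sketch of that pattern against a plain file descriptor (drain_events is an illustrative name; the real worker additionally logs each event and runs triggers):

```c
#include <assert.h>
#include <poll.h>
#include <unistd.h>

/* Drain every pending read from @fd, polling with a short timeout;
 * returns the number of successful reads. */
static int drain_events(int fd, int timeout_ms)
{
	struct pollfd p = { .fd = fd, .events = POLLIN };
	char buf[64];
	int count = 0;

	for (;;) {
		int ret = poll(&p, 1, timeout_ms);

		/* timeout, error, or writer hung up with nothing left */
		if (ret <= 0 || !(p.revents & POLLIN))
			break;

		if (read(fd, buf, sizeof(buf)) > 0)
			count++;
	}
	return count;
}
```

The same shape appears in debugger_worker_loop(), where the loop condition additionally consults worker_state so a polite stop request lets in-flight events drain first.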
>> +
>> +/**
>> + * xe_eudebug_debugger_available:
>> + * @fd: Xe file descriptor
>> + *
>> + * Returns: true if the debugger connection is available, false otherwise.
>> + */
>> +bool xe_eudebug_debugger_available(int fd)
>> +{
>> +	struct drm_xe_eudebug_connect param = { .pid = getpid() };
>> +	int debugfd;
>> +
>> +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
>> +	if (debugfd >= 0)
>> +		close(debugfd);
>> +
>> +	return debugfd >= 0;
>> +}
>> +
>> +/**
>> + * xe_eudebug_debugger_create:
>> + * @master_fd: xe client used to open the debugger connection
>> + * @flags: flags stored in a debugger structure, can be used at the caller's
>> + * discretion, e.g. inside triggers.
>> + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
>> + * can be shared between client and debugger. Can be NULL.
>> + *
>> + * Returns: newly created xe_eudebug_debugger structure with its
>> + * event log initialized. Note that to open the connection
>> + * you need to call xe_eudebug_debugger_attach().
>> + */
>> +struct xe_eudebug_debugger *
>> +xe_eudebug_debugger_create(int master_fd, uint64_t flags, void *data)
>> +{
>> +	struct xe_eudebug_debugger *d;
>> +
>> +	d = calloc(1, sizeof(*d));
>> +	igt_assert(d);
>> +	d->flags = flags;
>> +	IGT_INIT_LIST_HEAD(&d->triggers);
>> +	d->log = xe_eudebug_event_log_create("debugger", MAX_EVENT_LOG_SIZE);
>> +	d->fd = -1;
>> +	d->master_fd = master_fd;
>> +	d->ptr = data;
>> +
>> +	return d;
>> +}
>> +
>> +static void debugger_destroy_triggers(struct xe_eudebug_debugger *d)
>> +{
>> +	struct event_trigger *t, *tmp;
>> +
>> +	igt_list_for_each_entry_safe(t, tmp, &d->triggers, link)
>> +		free(t);
>> +}
>> +
>> +/**
>> + * xe_eudebug_debugger_destroy:
>> + * @d: pointer to the debugger
>> + *
>> + * Frees xe_eudebug_debugger structure pointed by @d. If the debugger
>> + * connection was still opened it terminates it.
>> + */
>> +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d)
>> +{
>> +	if (d->worker_state)
>> +		xe_eudebug_debugger_stop_worker(d, 1);
>> +
>> +	if (d->target_pid)
>> +		xe_eudebug_debugger_dettach(d);
>> +
>> +	xe_eudebug_event_log_destroy(d->log);
>> +	debugger_destroy_triggers(d);
>> +	free(d);
>> +}
>> +
>> +/**
>> + * xe_eudebug_debugger_attach:
>> + * @d: pointer to the debugger
>> + * @c: pointer to the client
>> + *
>> + * Opens the xe eu debugger connection to the process described by @c (c->pid)
>> + *
>> + * Returns: 0 if the debugger was successfully attached, -errno otherwise.
>> + */
>> +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d,
>> +			       struct xe_eudebug_client *c)
>> +{
>> +	int ret;
>> +
>> +	igt_assert_eq(d->fd, -1);
>> +	igt_assert_neq(c->pid, 0);
>> +	ret = xe_eudebug_connect(d->master_fd, c->pid, 0);
>> +
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	d->fd = ret;
>> +	d->target_pid = c->pid;
>> +	d->p_client[0] = c->p_in[0];
>> +	d->p_client[1] = c->p_in[1];
>> +
>> +	igt_debug("debugger connected to %lu\n", d->target_pid);
>> +
>> +	return 0;
>> +}
>> +
>> +/**
>> + * xe_eudebug_debugger_dettach:
>> + * @d: pointer to the debugger
>> + *
>> + * Closes previously opened xe eu debugger connection. Asserts that
>> + * the debugger has an active session.
>> + */
>> +void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d)
>> +{
>> +	igt_assert(d->target_pid);
>> +	close(d->fd);
>> +	d->target_pid = 0;
>> +	d->fd = -1;
>> +}
>> +
>> +/**
>> + * xe_eudebug_debugger_add_trigger:
>> + * @d: pointer to the debugger
>> + * @type: the type of the event which activates the trigger
>> + * @fn: function to be called when an event of @type is read by the debugger.
>> + *
>> + * Adds function @fn to the list of triggers activated when an event of
>> + * @type has been read by the worker.
>> + * Note: triggers run in the context of the worker thread.
>> + */
>> +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d,
>> +				     int type, xe_eudebug_trigger_fn fn)
>> +{
>> +	struct event_trigger *t;
>> +
>> +	t = calloc(1, sizeof(*t));
>> +	IGT_INIT_LIST_HEAD(&t->link);
>> +	t->type = type;
>> +	t->fn = fn;
>> +
>> +	igt_list_add_tail(&t->link, &d->triggers);
>> +	igt_debug("added trigger %p\n", t);
>> +}
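The trigger list is a simple type-keyed dispatch: every registered trigger whose type matches the incoming event fires. A minimal standalone analogue (struct trigger, add_trigger and run_triggers here are illustrative, not the IGT types):

```c
#include <assert.h>
#include <stddef.h>

struct trigger {
	int type;
	void (*fn)(int type, void *data);
	struct trigger *next;
};

/* Prepend a trigger; registration order does not matter for dispatch. */
static void add_trigger(struct trigger **head, struct trigger *t)
{
	t->next = *head;
	*head = t;
}

/* Call every trigger registered for @type; returns how many fired. */
static int run_triggers(struct trigger *head, int type, void *data)
{
	int fired = 0;

	for (struct trigger *t = head; t; t = t->next)
		if (t->type == type) {
			t->fn(type, data);
			fired++;
		}
	return fired;
}

/* Example trigger: count how often it was invoked. */
static void count_hit(int type, void *data)
{
	(void)type;
	(*(int *)data)++;
}
```

Multiple triggers may share a type, exactly as debugger_run_triggers() allows; an unmatched event type simply fires nothing.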
>> +
>> +/**
>> + * xe_eudebug_debugger_start_worker:
>> + * @d: pointer to the debugger
>> + *
>> + * Starts the debugger worker. The worker is responsible for reading all
>> + * incoming events from the debugger, putting them into the debugger log and
>> + * executing the appropriate event triggers. Note that using the debugger's
>> + * event log while the worker is running is not safe.
>> + */
>> +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d)
>> +{
>> +	int ret;
>> +
>> +	d->worker_state = DEBUGGER_WORKER_ACTIVE;
>> +	ret = pthread_create(&d->worker_thread, NULL, &debugger_worker_loop, d);
>> +
>> +	igt_assert_f(ret == 0, "Debugger worker thread creation failed!\n");
>> +}
>> +
>> +/**
>> + * xe_eudebug_debugger_stop_worker:
>> + * @d: pointer to the debugger
>> + *
>> + * Stops the debugger worker. Event log is sorted by seqno after closure.
>> + */
>> +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d,
>> +				     int timeout_s)
>> +{
>> +	struct timespec t = {};
>> +	int ret;
>> +
>> +	igt_assert(d->worker_state);
>> +
>> +	d->worker_state = DEBUGGER_WORKER_QUITTING; /* First time be polite. */
>> +	igt_assert_eq(clock_gettime(CLOCK_REALTIME, &t), 0);
>> +	t.tv_sec += timeout_s;
>> +
>> +	ret = pthread_timedjoin_np(d->worker_thread, NULL, &t);
>> +
>> +	if (ret == ETIMEDOUT) {
>> +		d->worker_state = DEBUGGER_WORKER_INACTIVE;
>> +		ret = pthread_join(d->worker_thread, NULL);
>> +	}
>> +
>> +	igt_assert_f(ret == 0 || ret == ESRCH,
>> +		     "pthread join failed with error %d!\n", ret);
>> +
>> +	event_log_sort(d->log);
>> +}
>> +
>> +/**
>> + * xe_eudebug_debugger_signal_stage:
>> + * @d: pointer to the debugger
>> + * @stage: stage to signal
>> + *
>> + * Signals to client, waiting in xe_eudebug_client_wait_stage(),
>> + * releasing it to proceed.
>> + */
>> +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage)
>> +{
>> +	token_signal(d->p_client, CLIENT_STAGE, stage);
>> +}
>> +
>> +/**
>> + * xe_eudebug_debugger_wait_stage:
>> + * @s: pointer to xe_eudebug_session structure
>> + * @stage: stage to wait on
>> + *
>> + * Pauses debugger until the client has signalled the corresponding stage with
>> + * xe_eudebug_client_signal_stage. This is only for situations where the actual
>> + * event flow is not enough to coordinate between client/debugger and an
>> + * extra sync mechanism is needed.
>> + */
>> +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage)
>> +{
>> +	u64 stage_in;
>> +
>> +	igt_debug("debugger xe client fd: %d pausing for stage %lu\n", s->d->master_fd, stage);
>> +
>> +	stage_in = wait_from_client(s->c, DEBUGGER_STAGE);
>> +	igt_debug("debugger xe client fd: %d got stage %lu, expected %lu\n",
>> +		  s->d->master_fd, stage_in, stage);
>> +
>> +	igt_assert_eq(stage_in, stage);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_create:
>> + * @master_fd: xe client used to open the debugger connection
>> + * @work: function that opens xe device and executes arbitrary workload
>> + * @flags: flags stored in a client structure, can be used at the caller's
>> + * discretion, e.g. to provide the @work function with an additional switch.
>> + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
>> + * can be shared between client and debugger. Accessible via client->ptr.
>> + * Can be NULL.
>> + *
>> + * Forks and creates the client process. @work won't be called until
>> + * xe_eudebug_client_start is called.
>> + *
>> + * Returns: newly created xe_eudebug_client structure with its
>> + * event log initialized.
>> + */
>> +struct xe_eudebug_client *xe_eudebug_client_create(int master_fd, xe_eudebug_client_work_fn work,
>> +						   uint64_t flags, void *data)
>> +{
>> +	struct xe_eudebug_client *c;
>> +
>> +	c = calloc(1, sizeof(*c));
>> +	igt_assert(c);
>> +	c->flags = flags;
>> +	igt_assert(!pipe(c->p_in));
>> +	igt_assert(!pipe(c->p_out));
>> +	c->seqno = 1;
>> +	c->log = xe_eudebug_event_log_create("client", MAX_EVENT_LOG_SIZE);
>> +	c->done = 0;
>> +	c->ptr = data;
>> +	c->master_fd = master_fd;
>> +	c->timeout_ms = XE_EUDEBUG_DEFAULT_TIMEOUT_MS;
>> +
>> +	igt_fork(child, 1) {
>> +		int mypid;
>> +
>> +		igt_assert_eq(c->pid, 0);
>> +
>> +		close(c->p_out[0]);
>> +		c->p_out[0] = -1;
>> +		close(c->p_in[1]);
>> +		c->p_in[1] = -1;
>> +
>> +		mypid = getpid();
>> +		client_signal(c, CLIENT_PID, mypid);
>> +
>> +		c->pid = client_wait_token(c, CLIENT_RUN);
>> +		igt_assert_eq(c->pid, mypid);
>> +		if (work)
>> +			work(c);
>> +
>> +		client_signal(c, CLIENT_FINI, c->seqno);
>> +
>> +		event_log_write_to_fd(c->log, c->p_out[1]);
>> +
>> +		c->pid = client_wait_token(c, CLIENT_STOP);
>> +		igt_assert_eq(c->pid, mypid);
>> +	}
>> +
>> +	close(c->p_out[1]);
>> +	c->p_out[1] = -1;
>> +	close(c->p_in[0]);
>> +	c->p_in[0] = -1;
>> +
>> +	c->pid = wait_from_client(c, CLIENT_PID);
>> +
>> +	igt_info("client running with pid %d\n", c->pid);
>> +
>> +	return c;
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_stop:
>> + * @c: pointer to xe_eudebug_client structure
>> + *
>> + * Waits for the end of the client's work and terminates the client process.
>> + */
>> +void xe_eudebug_client_stop(struct xe_eudebug_client *c)
>> +{
>> +	if (c->pid) {
>> +		int waitstatus;
>> +
>> +		xe_eudebug_client_wait_done(c);
>> +
>> +		token_signal(c->p_in, CLIENT_STOP, c->pid);
>> +		igt_assert_eq(waitpid(c->pid, &waitstatus, 0),
>> +			      c->pid);
>> +		c->pid = 0;
>> +	}
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_destroy:
>> + * @c: pointer to xe_eudebug_client structure to be freed
>> + *
>> + * Frees the @c client structure. Note that it calls xe_eudebug_client_stop if
>> + * the client process has not terminated yet.
>> + */
>> +void xe_eudebug_client_destroy(struct xe_eudebug_client *c)
>> +{
>> +	xe_eudebug_client_stop(c);
>> +	pipe_close(c->p_in);
>> +	pipe_close(c->p_out);
>> +	xe_eudebug_event_log_destroy(c->log);
>> +	free(c);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_get_seqno:
>> + * @c: pointer to xe_eudebug_client structure
>> + *
>> + * Returns the current seqno value of the given client @c and increments it.
>> + *
>> + * Returns: the seqno value prior to the increment
>> + */
>> +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c)
>> +{
>> +	return c->seqno++;
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_start:
>> + * @c: pointer to xe_eudebug_client structure
>> + *
>> + * Starts execution of the client's work function within the client's process.
>> + */
>> +void xe_eudebug_client_start(struct xe_eudebug_client *c)
>> +{
>> +	token_signal(c->p_in, CLIENT_RUN, c->pid);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_wait_done:
>> + * @c: pointer to xe_eudebug_client structure
>> + *
>> + * Waits for the client's work to finish and updates the event log.
>> + * Doesn't terminate the client's process yet.
>> + */
>> +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c)
>> +{
>> +	if (!c->done) {
>> +		c->done = 1;
>> +		c->seqno = wait_from_client(c, CLIENT_FINI);
>> +		event_log_read_from_fd(c->log, c->p_out[0]);
>> +	}
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_signal_stage:
>> + * @c: pointer to the client
>> + * @stage: stage to signal
>> + *
>> + * Signals to debugger, waiting in xe_eudebug_debugger_wait_stage(),
>> + * releasing it to proceed.
>> + */
>> +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage)
>> +{
>> +	token_signal(c->p_out, DEBUGGER_STAGE, stage);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_wait_stage:
>> + * @c: pointer to xe_eudebug_client structure
>> + * @stage: stage to wait on
>> + *
>> + * Pauses client until the debugger has signalled the corresponding stage with
>> + * xe_eudebug_debugger_signal_stage. This is only for situations where the
>> + * actual event flow is not enough to coordinate between client/debugger and
>> + * an extra sync mechanism is needed.
>> + */
>> +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage)
>> +{
>> +	u64 stage_in;
>> +
>> +	if (c->done) {
>> +		igt_warn("client: %d already done before %lu\n", c->pid, stage);
>> +		return;
>> +	}
>> +
>> +	igt_debug("client: %d pausing for stage %lu\n", c->pid, stage);
>> +
>> +	stage_in = client_wait_token(c, CLIENT_STAGE);
>> +	igt_debug("client: %d got stage %lu, expected %lu\n", c->pid, stage_in, stage);
>> +
>> +	igt_assert_eq(stage_in, stage);
>> +}
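The stage synchronization above boils down to passing (token, value) pairs over a pipe and asserting the token on the receiving side. A minimal standalone sketch of such a token handshake (token_send and token_wait are illustrative stand-ins for the library's token_signal()/client_wait_token(), whose definitions are outside this hunk):

```c
#include <assert.h>
#include <stdint.h>
#include <unistd.h>

struct token_msg {
	uint64_t token;
	uint64_t value;
};

/* Write a (token, value) pair into the pipe's write end. */
static int token_send(int wfd, uint64_t token, uint64_t value)
{
	struct token_msg m = { .token = token, .value = value };

	return write(wfd, &m, sizeof(m)) == sizeof(m) ? 0 : -1;
}

/* Block on the read end until a message arrives, assert the token
 * matches, and return the payload. */
static uint64_t token_wait(int rfd, uint64_t expected_token)
{
	struct token_msg m;

	assert(read(rfd, &m, sizeof(m)) == sizeof(m));
	assert(m.token == expected_token);
	return m.value;
}
```

Because the pipe blocks the reader, the waiting side pauses until its peer signals, which is exactly how client_wait_stage()/debugger_wait_stage() serialize the two processes outside the normal event flow.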
>> +
>> +
>> +/**
>> + * xe_eudebug_session_create:
>> + * @fd: XE file descriptor
>> + * @work: function passed to the xe_eudebug_client_create
>> + * @flags: flags passed to client and debugger
>> + * @test_private: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
>> + * passed to client and debugger. Can be NULL.
>> + *
>> + * Creates session together with client and debugger structures.
>> + */
>> +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
>> +						     xe_eudebug_client_work_fn work,
>> +						     unsigned int flags,
>> +						     void *test_private)
>> +{
>> +	struct xe_eudebug_session *s;
>> +
>> +	s = calloc(1, sizeof(*s));
>> +	igt_assert(s);
>> +
>> +	s->c = xe_eudebug_client_create(fd, work, flags, test_private);
>> +	s->d = xe_eudebug_debugger_create(fd, flags, test_private);
>> +	s->flags = flags;
>> +
>> +	return s;
>> +}
>> +
>> +/**
>> + * xe_eudebug_session_run:
>> + * @s: pointer to xe_eudebug_session structure
>> + *
>> + * Attaches the debugger to the client's process, starts the debugger's
>> + * async event reader, starts the client and, once the client finishes,
>> + * stops the debugger worker.
>> + */
>> +void xe_eudebug_session_run(struct xe_eudebug_session *s)
>> +{
>> +	struct xe_eudebug_debugger *debugger = s->d;
>> +	struct xe_eudebug_client *client = s->c;
>> +
>> +	igt_assert_eq(xe_eudebug_debugger_attach(debugger, client), 0);
>> +
>> +	xe_eudebug_debugger_start_worker(debugger);
>> +
>> +	xe_eudebug_client_start(client);
>> +	xe_eudebug_client_wait_done(client);
>> +
>> +	xe_eudebug_debugger_stop_worker(debugger, 1);
>> +
>> +	xe_eudebug_event_log_print(debugger->log, true);
>> +	xe_eudebug_event_log_print(client->log, true);
>> +}
>> +
>> +/**
>> + * xe_eudebug_session_check:
>> + * @s: pointer to xe_eudebug_session structure
>> + * @match_opposite: indicates whether check should match all
>> + * create and destroy events.
>> + * @filter: mask that represents events to be skipped during comparison, useful
>> + * for events like 'VM_BIND' since they can be asymmetric
>> + *
>> + * Validates the debugger's log against the log created by the client.
>> + */
>> +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter)
>> +{
>> +	xe_eudebug_event_log_compare(s->c->log, s->d->log, filter);
>> +
>> +	if (match_opposite)
>> +		xe_eudebug_event_log_match_opposite(s->d->log, filter);
>> +}
>> +
>> +/**
>> + * xe_eudebug_session_destroy:
>> + * @s: pointer to xe_eudebug_session structure
>> + *
>> + * Destroy session together with its debugger and client.
>> + */
>> +void xe_eudebug_session_destroy(struct xe_eudebug_session *s)
>> +{
>> +	xe_eudebug_debugger_destroy(s->d);
>> +	xe_eudebug_client_destroy(s->c);
>> +
>> +	free(s);
>> +}
>> +
>> +#define to_base(x) ((struct drm_xe_eudebug_event *)&(x))
>> +
>> +static void base_event(struct xe_eudebug_client *c,
>> +		       struct drm_xe_eudebug_event *e,
>> +		       uint32_t type,
>> +		       uint32_t flags,
>> +		       uint64_t size)
>> +{
>> +	e->type = type;
>> +	e->flags = flags;
>> +	e->seqno = xe_eudebug_client_get_seqno(c);
>> +	e->len = size;
>> +}
>> +
>> +static void client_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd)
>> +{
>> +	struct drm_xe_eudebug_event_client ec;
>> +
>> +	base_event(c, to_base(ec), DRM_XE_EUDEBUG_EVENT_OPEN, flags, sizeof(ec));
>> +
>> +	ec.client_handle = client_fd;
>> +
>> +	xe_eudebug_event_log_write(c->log, (void *)&ec);
>> +}
>> +
>> +static void vm_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd, uint32_t vm_id)
>> +{
>> +	struct drm_xe_eudebug_event_vm evm;
>> +
>> +	base_event(c, to_base(evm), DRM_XE_EUDEBUG_EVENT_VM, flags, sizeof(evm));
>> +
>> +	evm.client_handle = client_fd;
>> +	evm.vm_handle = vm_id;
>> +
>> +	xe_eudebug_event_log_write(c->log, (void *)&evm);
>> +}
>> +
>> +static void exec_queue_event(struct xe_eudebug_client *c, uint32_t flags,
>> +			     int client_fd, uint32_t vm_id,
>> +			     uint32_t exec_queue_handle, uint16_t class,
>> +			     uint16_t width)
>> +{
>> +	struct drm_xe_eudebug_event_exec_queue ee;
>> +
>> +	base_event(c, to_base(ee), DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
>> +		   flags, sizeof(ee));
>> +
>> +	ee.client_handle = client_fd;
>> +	ee.vm_handle = vm_id;
>> +	ee.exec_queue_handle = exec_queue_handle;
>> +	ee.engine_class = class;
>> +	ee.width = width;
>> +
>> +	xe_eudebug_event_log_write(c->log, (void *)&ee);
>> +}
>> +
>> +static void metadata_event(struct xe_eudebug_client *c, uint32_t flags,
>> +			   int client_fd, uint32_t id, uint64_t type, uint64_t len)
>> +{
>> +	struct drm_xe_eudebug_event_metadata em;
>> +
>> +	base_event(c, to_base(em), DRM_XE_EUDEBUG_EVENT_METADATA,
>> +		   flags, sizeof(em));
>> +
>> +	em.client_handle = client_fd;
>> +	em.metadata_handle = id;
>> +	em.type = type;
>> +	em.len = len;
>> +
>> +	xe_eudebug_event_log_write(c->log, (void *)&em);
>> +}
>> +
>> +static int enable_getset(int fd, bool *old, bool *new)
>> +{
>> +	static const char * const fname = "enable_eudebug";
>> +	int ret = 0;
>> +
>> +	int sysfs, device_fd;
>> +	bool val_before;
>> +	struct stat st;
>> +
>> +	igt_assert(new || old);
>> +
>> +	igt_assert_eq(fstat(fd, &st), 0);
>> +	sysfs = igt_sysfs_open(fd);
>> +	if (sysfs < 0)
>> +		return -1;
>> +
>> +	device_fd = openat(sysfs, "device", O_DIRECTORY | O_RDONLY);
>> +	close(sysfs);
>> +	if (device_fd < 0)
>> +		return -1;
>> +
>> +	if (!__igt_sysfs_get_boolean(device_fd, fname, &val_before)) {
>> +		ret = -1;
>> +		goto out;
>> +	}
>> +
>> +	igt_debug("enable_eudebug before: %d\n", val_before);
>> +
>> +	if (old)
>> +		*old = val_before;
>> +
>> +	ret = 0;
>> +	if (new) {
>> +		if (__igt_sysfs_set_boolean(device_fd, fname, *new))
>> +			igt_assert_eq(igt_sysfs_get_boolean(device_fd, fname), *new);
>> +		else
>> +			ret = -1;
>> +	}
>> +
>> +out:
>> +	close(device_fd);
>> +	return ret;
>> +}
>> +
>> +/**
>> + * xe_eudebug_enable:
>> + * @fd: xe client
>> + * @enable: state toggle - true to enable, false to disable
>> + *
>> + * Enables/disables eudebug capability by writing to
>> + * '/sys/class/drm/card<N>/device/enable_eudebug' sysfs entry.
>> + *
>> + * Returns: previous toggle value, i.e. true when eudebugging was enabled,
>> + * false when eudebugging was disabled.
>> + */
>> +bool xe_eudebug_enable(int fd, bool enable)
>> +{
>> +	bool old = false;
>> +	int ret = enable_getset(fd, &old, &enable);
>> +
>> +	if (ret) {
>> +		igt_skip_on(enable);
>> +		old = false;
>> +	}
>> +
>> +	return old;
>> +}
>> +
>> +/* Eu debugger wrappers around resource creating xe ioctls. */
>> +
>> +/**
>> + * xe_eudebug_client_open_driver:
>> + * @c: pointer to xe_eudebug_client structure
>> + *
>> + * Calls drm_reopen_driver() on the client's master fd and logs the
>> + * corresponding event in client's event log.
>> + *
>> + * Returns: valid DRM file descriptor
>> + */
>> +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c)
>> +{
>> +	int fd;
>> +
>> +	fd = drm_reopen_driver(c->master_fd);
>> +	client_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd);
>> +
>> +	return fd;
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_close_driver:
>> + * @c: pointer to xe_eudebug_client structure
>> + * @fd: xe client
>> + *
>> + * Calls close driver and logs the corresponding event in
>> + * client's event log.
>> + */
>> +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd)
>> +{
>> +	client_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd);
>> +	close(fd);
>> +}
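The resource wrappers in this file all follow the same shape: perform the operation, then append a matching CREATE/DESTROY event stamped with the client's next seqno. A standalone sketch of that pattern, assuming a fixed-size in-memory log (all names here are illustrative, not the IGT API; next_handle stands in for the real ioctl result):

```c
#include <assert.h>
#include <stdint.h>

enum { EV_CREATE = 1, EV_DESTROY = 2 };

struct log_entry {
	uint64_t seqno;
	int flags;
	uint32_t handle;
};

struct client {
	uint64_t seqno;
	struct log_entry log[16];
	int head;
};

/* Append an event, stamping it with the client's next seqno. */
static void log_event(struct client *c, int flags, uint32_t handle)
{
	c->log[c->head++] = (struct log_entry){ c->seqno++, flags, handle };
}

/* Wrapper pattern: do the operation, then log the matching event. */
static uint32_t wrapped_create(struct client *c, uint32_t next_handle)
{
	uint32_t handle = next_handle;	/* stands in for the real ioctl */

	log_event(c, EV_CREATE, handle);
	return handle;
}

static void wrapped_destroy(struct client *c, uint32_t handle)
{
	/* the real ioctl would go here, before or after logging as the
	 * wrappers above choose */
	log_event(c, EV_DESTROY, handle);
}
```

Because every wrapper stamps a fresh seqno, the client's log reproduces the order of operations, which is what xe_eudebug_event_log_compare() later checks against the debugger's log.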
>> +
>> +/**
>> + * xe_eudebug_client_vm_create:
>> + * @c: pointer to xe_eudebug_client structure
>> + * @fd: xe client
>> + * @flags: vm create flags
>> + * @ext: pointer to the first user extension
>> + *
>> + * Calls xe_vm_create() and logs the corresponding event
>> + * in client's event log.
>> + *
>> + * Returns: valid vm handle
>> + */
>> +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
>> +				     uint32_t flags, uint64_t ext)
>> +{
>> +	uint32_t vm;
>> +
>> +	vm = xe_vm_create(fd, flags, ext);
>> +	vm_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, vm);
>> +
>> +	return vm;
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_vm_destroy:
>> + * @c: pointer to xe_eudebug_client structure
>> + * @fd: xe client
>> + * @vm: vm handle
>> + *
>> + * Calls xe_vm_destroy() and logs the corresponding event in
>> + * client's event log.
>> + */
>> +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm)
>> +{
>> +	xe_vm_destroy(fd, vm);
>> +	vm_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, vm);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_exec_queue_create:
>> + * @c: pointer to xe_eudebug_client structure
>> + * @fd: xe client
>> + * @create: exec_queue create drm struct
>> + *
>> + * Calls xe exec queue create ioctl and logs the corresponding event in
>> + * client's event log.
>> + *
>> + * Returns: valid exec queue handle
>> + */
>> +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
>> +					     struct drm_xe_exec_queue_create *create)
>> +{
>> +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
>> +
>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE, create), 0);
>> +
>> +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
>> +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create->vm_id,
>> +				 create->exec_queue_id, class, create->width);
>> +
>> +	return create->exec_queue_id;
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_exec_queue_destroy:
>> + * @c: pointer to xe_eudebug_client structure
>> + * @fd: xe client
>> + * @create: exec_queue create drm struct which was used for creation
>> + *
>> + * Calls xe exec_queue destroy ioctl and logs the corresponding event in
>> + * client's event log.
>> + */
>> +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
>> +					  struct drm_xe_exec_queue_create *create)
>> +{
>> +	struct drm_xe_exec_queue_destroy destroy = { .exec_queue_id = create->exec_queue_id, };
>> +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
>> +
>> +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
>> +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, create->vm_id,
>> +				 create->exec_queue_id, class, create->width);
>> +
>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_DESTROY, &destroy), 0);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_vm_bind_event:
>> + * @c: pointer to xe_eudebug_client structure
>> + * @event_flags: base event flags
>> + * @fd: xe client
>> + * @vm: vm handle
>> + * @bind_flags: bind flags of vm_bind_event
>> + * @num_binds: number of bind operations for the event
>> + * @ref_seqno: base vm bind reference seqno
>> + *
>> + * Logs vm bind event in client's event log.
>> + */
>> +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c,
>> +				     uint32_t event_flags, int fd,
>> +				     uint32_t vm, uint32_t bind_flags,
>> +				     uint32_t num_binds, uint64_t *ref_seqno)
>> +{
>> +	struct drm_xe_eudebug_event_vm_bind evmb;
>> +
>> +	base_event(c, to_base(evmb), DRM_XE_EUDEBUG_EVENT_VM_BIND,
>> +		   event_flags, sizeof(evmb));
>> +	evmb.client_handle = fd;
>> +	evmb.vm_handle = vm;
>> +	evmb.flags = bind_flags;
>> +	evmb.num_binds = num_binds;
>> +
>> +	*ref_seqno = evmb.base.seqno;
>> +
>> +	xe_eudebug_event_log_write(c->log, (void *)&evmb);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_vm_bind_op_event:
>> + * @c: pointer to xe_eudebug_client structure
>> + * @event_flags: base event flags
>> + * @bind_ref_seqno: base vm bind reference seqno
>> + * @op_ref_seqno: output, the vm_bind_op event seqno
>> + * @addr: ppgtt address
>> + * @range: size of the binding
>> + * @num_extensions: number of vm bind op extensions
>> + *
>> + * Logs vm bind op event in client's event log.
>> + */
>> +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
>> +					uint64_t bind_ref_seqno, uint64_t *op_ref_seqno,
>> +					uint64_t addr, uint64_t range,
>> +					uint64_t num_extensions)
>> +{
>> +	struct drm_xe_eudebug_event_vm_bind_op op;
>> +
>> +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP,
>> +		   event_flags, sizeof(op));
>> +	op.vm_bind_ref_seqno = bind_ref_seqno;
>> +	op.addr = addr;
>> +	op.range = range;
>> +	op.num_extensions = num_extensions;
>> +
>> +	*op_ref_seqno = op.base.seqno;
>> +
>> +	xe_eudebug_event_log_write(c->log, (void *)&op);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_vm_bind_op_metadata_event:
>> + * @c: pointer to xe_eudebug_client structure
>> + * @event_flags: base event flags
>> + * @op_ref_seqno: base vm bind op reference seqno
>> + * @metadata_handle: metadata handle
>> + * @metadata_cookie: metadata cookie
>> + *
>> + * Logs vm bind op metadata event in client's event log.
>> + */
>> +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
>> +						 uint32_t event_flags, uint64_t op_ref_seqno,
>> +						 uint64_t metadata_handle, uint64_t metadata_cookie)
>> +{
>> +	struct drm_xe_eudebug_event_vm_bind_op_metadata op;
>> +
>> +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA,
>> +		   event_flags, sizeof(op));
>> +	op.vm_bind_op_ref_seqno = op_ref_seqno;
>> +	op.metadata_handle = metadata_handle;
>> +	op.metadata_cookie = metadata_cookie;
>> +
>> +	xe_eudebug_event_log_write(c->log, (void *)&op);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_vm_bind_ufence_event:
>> + * @c: pointer to xe_eudebug_client structure
>> + * @event_flags: base event flags
>> + * @ref_seqno: base vm bind event seqno
>> + *
>> + * Logs vm bind ufence event in client's event log.
>> + */
>> +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
>> +					    uint64_t ref_seqno)
>> +{
>> +	struct drm_xe_eudebug_event_vm_bind_ufence f;
>> +
>> +	base_event(c, to_base(f), DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>> +		   event_flags, sizeof(f));
>> +	f.vm_bind_ref_seqno = ref_seqno;
>> +
>> +	xe_eudebug_event_log_write(c->log, (void *)&f);
>> +}
>> +
>> +static bool has_user_fence(const struct drm_xe_sync *sync, uint32_t num_syncs)
>> +{
>> +	while (num_syncs--)
>> +		if (sync[num_syncs].type == DRM_XE_SYNC_TYPE_USER_FENCE)
>> +			return true;
>> +
>> +	return false;
>> +}
>> +
>> +#define for_each_metadata(__m, __ext)					\
>> +	for ((__m) = from_user_pointer(__ext);				\
>> +	     (__m);							\
>> +	     (__m) = from_user_pointer((__m)->base.next_extension))	\
>> +		if ((__m)->base.name == XE_VM_BIND_OP_EXTENSIONS_ATTACH_DEBUG)
>> +
>> +static int  __xe_eudebug_client_vm_bind(struct xe_eudebug_client *c,
>> +					int fd, uint32_t vm, uint32_t exec_queue,
>> +					uint32_t bo, uint64_t offset,
>> +					uint64_t addr, uint64_t size,
>> +					uint32_t op, uint32_t flags,
>> +					struct drm_xe_sync *sync,
>> +					uint32_t num_syncs,
>> +					uint32_t prefetch_region,
>> +					uint8_t pat_index, uint64_t op_ext)
>> +{
>> +	struct drm_xe_vm_bind_op_ext_attach_debug *metadata;
>> +	const bool ufence = has_user_fence(sync, num_syncs);
>> +	const uint32_t bind_flags = ufence ?
>> +		DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE : 0;
>> +	uint64_t seqno = 0, op_seqno = 0, num_metadata = 0;
>> +	uint32_t bind_base_flags = 0;
>> +	int ret;
>> +
>> +	for_each_metadata(metadata, op_ext)
>> +		num_metadata++;
>> +
>> +	switch (op) {
>> +	case DRM_XE_VM_BIND_OP_MAP:
>> +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_CREATE;
>> +		break;
>> +	case DRM_XE_VM_BIND_OP_UNMAP:
>> +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_DESTROY;
>> +		igt_assert_eq(num_metadata, 0);
>> +		igt_assert_eq(ufence, false);
>> +		break;
>> +	default:
>> +		/* XXX unmap all? */
>> +		igt_assert(op);
>> +		break;
>> +	}
>> +
>> +	ret = ___xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
>> +			    op, flags, sync, num_syncs, prefetch_region,
>> +			    pat_index, 0, op_ext);
>> +
>> +	if (ret)
>> +		return ret;
>> +
>> +	if (!bind_base_flags)
>> +		return -EINVAL;
>> +
>> +	xe_eudebug_client_vm_bind_event(c, DRM_XE_EUDEBUG_EVENT_STATE_CHANGE,
>> +					fd, vm, bind_flags, 1, &seqno);
>> +	xe_eudebug_client_vm_bind_op_event(c, bind_base_flags,
>> +					   seqno, &op_seqno, addr, size,
>> +					   num_metadata);
>> +
>> +	for_each_metadata(metadata, op_ext)
>> +		xe_eudebug_client_vm_bind_op_metadata_event(c,
>> +							    DRM_XE_EUDEBUG_EVENT_CREATE,
>> +							    op_seqno,
>> +							    metadata->metadata_id,
>> +							    metadata->cookie);
>> +	if (ufence)
>> +		xe_eudebug_client_vm_bind_ufence_event(c, DRM_XE_EUDEBUG_EVENT_CREATE |
>> +						       DRM_XE_EUDEBUG_EVENT_NEED_ACK,
>> +						       seqno);
>> +	return ret;
>> +}
>> +
>> +static void _xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd,
>> +				       uint32_t vm, uint32_t bo,
>> +				       uint64_t offset, uint64_t addr, uint64_t size,
>> +				       uint32_t op,
>> +				       uint32_t flags,
>> +				       struct drm_xe_sync *sync,
>> +				       uint32_t num_syncs,
>> +				       uint64_t op_ext)
>> +{
>> +	const uint32_t exec_queue_id = 0;
>> +	const uint32_t prefetch_region = 0;
>> +
>> +	igt_assert_eq(__xe_eudebug_client_vm_bind(c, fd, vm, exec_queue_id, bo, offset,
>> +						  addr, size, op, flags,
>> +						  sync, num_syncs, prefetch_region,
>> +						  DEFAULT_PAT_INDEX, op_ext),
>> +		      0);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_vm_bind_flags
>> + * @c: pointer to xe_eudebug_client structure
>> + * @fd: xe client
>> + * @vm: vm handle
>> + * @bo: buffer object handle
>> + * @offset: offset within buffer object
>> + * @addr: ppgtt address
>> + * @size: size of the binding
>> + * @flags: vm_bind flags
>> + * @sync: sync objects
>> + * @num_syncs: number of sync objects
>> + * @op_ext: BIND_OP extensions
>> + *
>> + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
>> + */
>> +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
>> +				     uint32_t bo, uint64_t offset,
>> +				     uint64_t addr, uint64_t size, uint32_t flags,
>> +				     struct drm_xe_sync *sync, uint32_t num_syncs,
>> +				     uint64_t op_ext)
>> +{
>> +	_xe_eudebug_client_vm_bind(c, fd, vm, bo, offset, addr, size,
>> +				   DRM_XE_VM_BIND_OP_MAP, flags,
>> +				   sync, num_syncs, op_ext);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_vm_bind
>> + * @c: pointer to xe_eudebug_client structure
>> + * @fd: xe client
>> + * @vm: vm handle
>> + * @bo: buffer object handle
>> + * @offset: offset within buffer object
>> + * @addr: ppgtt address
>> + * @size: size of the binding
>> + *
>> + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
>> + */
>> +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
>> +			       uint32_t bo, uint64_t offset,
>> +			       uint64_t addr, uint64_t size)
>> +{
>> +	const uint32_t flags = 0;
>> +	struct drm_xe_sync *sync = NULL;
>> +	const uint32_t num_syncs = 0;
>> +	const uint64_t op_ext = 0;
>> +
>> +	xe_eudebug_client_vm_bind_flags(c, fd, vm, bo, offset, addr, size,
>> +					flags,
>> +					sync, num_syncs, op_ext);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_vm_unbind_flags
>> + * @c: pointer to xe_eudebug_client structure
>> + * @fd: xe client
>> + * @vm: vm handle
>> + * @offset: offset
>> + * @addr: ppgtt address
>> + * @size: size of the binding
>> + * @flags: vm_bind flags
>> + * @sync: sync objects
>> + * @num_syncs: number of sync objects
>> + *
>> + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
>> + */
>> +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
>> +				       uint32_t vm, uint64_t offset,
>> +				       uint64_t addr, uint64_t size, uint32_t flags,
>> +				       struct drm_xe_sync *sync, uint32_t num_syncs)
>> +{
>> +	_xe_eudebug_client_vm_bind(c, fd, vm, 0, offset, addr, size,
>> +				   DRM_XE_VM_BIND_OP_UNMAP, flags,
>> +				   sync, num_syncs, 0);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_vm_unbind
>> + * @c: pointer to xe_eudebug_client structure
>> + * @fd: xe client
>> + * @vm: vm handle
>> + * @offset: offset
>> + * @addr: ppgtt address
>> + * @size: size of the binding
>> + *
>> + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
>> + */
>> +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
>> +				 uint64_t offset, uint64_t addr, uint64_t size)
>> +{
>> +	const uint32_t flags = 0;
>> +	struct drm_xe_sync *sync = NULL;
>> +	const uint32_t num_syncs = 0;
>> +
>> +	xe_eudebug_client_vm_unbind_flags(c, fd, vm, offset, addr, size,
>> +					  flags, sync, num_syncs);
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_metadata_create:
>> + * @c: pointer to xe_eudebug_client structure
>> + * @fd: xe client
>> + * @type: debug metadata type
>> + * @len: size of @data
>> + * @data: debug metadata payload
>> + *
>> + * Calls xe metadata create ioctl and logs the corresponding event in
>> + * client's event log.
>> + *
>> + * Returns: valid debug metadata id.
>> + */
>> +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
>> +					   int type, size_t len, void *data)
>> +{
>> +	struct drm_xe_debug_metadata_create create = {
>> +		.type = type,
>> +		.user_addr = to_user_pointer(data),
>> +		.len = len
>> +	};
>> +
>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_CREATE, &create), 0);
>> +
>> +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create.metadata_id, type, len);
>> +
>> +	return create.metadata_id;
>> +}
>> +
>> +/**
>> + * xe_eudebug_client_metadata_destroy:
>> + * @c: pointer to xe_eudebug_client structure
>> + * @fd: xe client
>> + * @id: xe debug metadata handle
>> + * @type: debug metadata type
>> + * @len: size of debug metadata payload
>> + *
>> + * Calls xe metadata destroy ioctl and logs the corresponding event in
>> + * client's event log.
>> + */
>> +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
>> +					uint32_t id, int type, size_t len)
>> +{
>> +	struct drm_xe_debug_metadata_destroy destroy = { .metadata_id = id };
>> +
>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_DESTROY, &destroy), 0);
>> +
>> +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, id, type, len);
>> +}
>> +
>> +void xe_eudebug_ack_ufence(int debugfd,
>> +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f)
>> +{
>> +	struct drm_xe_eudebug_ack_event ack = { 0, };
>> +	char event_str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
>> +
>> +	ack.type = f->base.type;
>> +	ack.seqno = f->base.seqno;
>> +
>> +	xe_eudebug_event_to_str((void *)f, event_str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
>> +	igt_debug("delivering ack for event: %s\n", event_str);
>> +	igt_assert_eq(igt_ioctl(debugfd, DRM_XE_EUDEBUG_IOCTL_ACK_EVENT, &ack), 0);
>> +}
>> diff --git a/lib/xe/xe_eudebug.h b/lib/xe/xe_eudebug.h
>> new file mode 100644
>> index 000000000..444f5a7b7
>> --- /dev/null
>> +++ b/lib/xe/xe_eudebug.h
>> @@ -0,0 +1,206 @@
>> +/* SPDX-License-Identifier: MIT */
>> +/*
>> + * Copyright © 2023 Intel Corporation
>> + */
>> +#include <fcntl.h>
>> +#include <pthread.h>
>> +#include <stdint.h>
>> +#include <xe_drm.h>
>> +
>> +#include "igt_list.h"
>> +
>> +struct xe_eudebug_event_log {
>> +	uint8_t *log;
>> +	unsigned int head;
>> +	unsigned int max_size;
>> +	char name[80];
>> +	pthread_mutex_t lock;
>> +};
>> +
>> +struct xe_eudebug_debugger {
>> +	int fd;
>> +	uint64_t flags;
>> +
>> +	/* Used to smuggle private data */
>> +	void *ptr;
>> +
>> +	struct xe_eudebug_event_log *log;
>> +
>> +	uint64_t event_count;
>> +
>> +	uint64_t target_pid;
>> +
>> +	struct igt_list_head triggers;
>> +
>> +	int master_fd;
>> +
>> +	pthread_t worker_thread;
>> +	int worker_state;
>> +
>> +	int p_client[2];
>> +};
>> +
>> +struct xe_eudebug_client {
>> +	int pid;
>> +	uint64_t seqno;
>> +	uint64_t flags;
>> +
>> +	/* Used to smuggle private data */
>> +	void *ptr;
>> +
>> +	struct xe_eudebug_event_log *log;
>> +
>> +	int done;
>> +	int p_in[2];
>> +	int p_out[2];
>> +
>> +	/* Used to pickup right device (the one used in debugger) */
>> +	int master_fd;
>> +
>> +	int timeout_ms;
>> +};
>> +
>> +struct xe_eudebug_session {
>> +	uint64_t flags;
>> +	struct xe_eudebug_client *c;
>> +	struct xe_eudebug_debugger *d;
>> +};
>> +
>> +typedef void (*xe_eudebug_client_work_fn)(struct xe_eudebug_client *);
>> +typedef void (*xe_eudebug_trigger_fn)(struct xe_eudebug_debugger *,
>> +				      struct drm_xe_eudebug_event *);
>> +
>> +#define xe_eudebug_for_each_event(_e, _log) \
>> +	for ((_e) = (_e) ? (void *)(uint8_t *)(_e) + (_e)->len : \
>> +		    (void *)(_log)->log; \
>> +	    (uint8_t *)(_e) < (_log)->log + (_log)->head; \
>> +	    (_e) = (void *)(uint8_t *)(_e) + (_e)->len)
>> +
>> +#define xe_eudebug_assert(d, c)						\
>> +	do {								\
>> +		if (!(c)) {						\
>> +			xe_eudebug_event_log_print((d)->log, true);	\
>> +			igt_assert(c);					\
>> +		}							\
>> +	} while (0)
>> +
>> +#define xe_eudebug_assert_f(d, c, f...)					\
>> +	do {								\
>> +		if (!(c)) {						\
>> +			xe_eudebug_event_log_print((d)->log, true);	\
>> +			igt_assert_f(c, f);				\
>> +		}							\
>> +	} while (0)
>> +
>> +#define XE_EUDEBUG_EVENT_STRING_MAX_LEN		4096
>> +
>> +/*
>> + * Default abort timeout to use across xe_eudebug lib and tests if no specific
>> + * timeout value is required.
>> + */
>> +#define XE_EUDEBUG_DEFAULT_TIMEOUT_MS		25000ULL
>> +
>> +#define XE_EUDEBUG_FILTER_EVENT_NONE		BIT(DRM_XE_EUDEBUG_EVENT_NONE)
>> +#define XE_EUDEBUG_FILTER_EVENT_READ		BIT(DRM_XE_EUDEBUG_EVENT_READ)
>> +#define XE_EUDEBUG_FILTER_EVENT_OPEN		BIT(DRM_XE_EUDEBUG_EVENT_OPEN)
>> +#define XE_EUDEBUG_FILTER_EVENT_VM		BIT(DRM_XE_EUDEBUG_EVENT_VM)
>> +#define XE_EUDEBUG_FILTER_EVENT_EXEC_QUEUE	BIT(DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE)
>> +#define XE_EUDEBUG_FILTER_EVENT_EU_ATTENTION	BIT(DRM_XE_EUDEBUG_EVENT_EU_ATTENTION)
>> +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND		BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND)
>> +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP	BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_OP)
>> +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE  BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE)
>> +#define XE_EUDEBUG_FILTER_ALL			GENMASK(DRM_XE_EUDEBUG_EVENT_MAX_EVENT, 0)
>> +#define XE_EUDEBUG_EVENT_IS_FILTERED(_e, _f)	((1UL << (_e)) & (_f))
>> +
>> +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags);
>> +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len);
>> +struct drm_xe_eudebug_event *
>> +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno);
>> +struct xe_eudebug_event_log *
>> +xe_eudebug_event_log_create(const char *name, unsigned int max_size);
>> +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l);
>> +void xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug);
>> +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *c, struct xe_eudebug_event_log *d,
>> +				  uint32_t filter);
>> +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e);
>> +void xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter);
>> +
>> +bool xe_eudebug_debugger_available(int fd);
>> +struct xe_eudebug_debugger *
>> +xe_eudebug_debugger_create(int xe, uint64_t flags, void *data);
>> +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d);
>> +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d, struct xe_eudebug_client *c);
>> +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d);
>> +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d, int timeout_s);
>> +void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d);
>> +void xe_eudebug_debugger_set_data(struct xe_eudebug_debugger *c, void *ptr);
>> +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d, int type,
>> +				     xe_eudebug_trigger_fn fn);
>> +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage);
>> +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage);
>> +
>> +struct xe_eudebug_client *
>> +xe_eudebug_client_create(int xe, xe_eudebug_client_work_fn work, uint64_t flags, void *data);
>> +void xe_eudebug_client_destroy(struct xe_eudebug_client *c);
>> +void xe_eudebug_client_start(struct xe_eudebug_client *c);
>> +void xe_eudebug_client_stop(struct xe_eudebug_client *c);
>> +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c);
>> +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage);
>> +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage);
>> +
>> +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c);
>> +void xe_eudebug_client_set_data(struct xe_eudebug_client *c, void *ptr);
>> +
>> +bool xe_eudebug_enable(int fd, bool enable);
>> +
>> +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c);
>> +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd);
>> +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
>> +				     uint32_t flags, uint64_t ext);
>> +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm);
>> +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
>> +					     struct drm_xe_exec_queue_create *create);
>> +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
>> +					  struct drm_xe_exec_queue_create *create);
>> +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c, uint32_t event_flags, int fd,
>> +				     uint32_t vm, uint32_t bind_flags,
>> +				     uint32_t num_binds, uint64_t *ref_seqno);
>> +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
>> +					uint64_t ref_seqno, uint64_t *op_ref_seqno,
>> +					uint64_t addr, uint64_t range,
>> +					uint64_t num_extensions);
>> +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
>> +						 uint32_t event_flags, uint64_t op_ref_seqno,
>> +						 uint64_t metadata_handle, uint64_t metadata_cookie);
>> +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
>> +					    uint64_t ref_seqno);
>> +void xe_eudebug_ack_ufence(int debugfd,
>> +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f);
>> +
>> +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
>> +				     uint32_t bo, uint64_t offset,
>> +				     uint64_t addr, uint64_t size, uint32_t flags,
>> +				     struct drm_xe_sync *sync, uint32_t num_syncs,
>> +				     uint64_t op_ext);
>> +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
>> +			       uint32_t bo, uint64_t offset,
>> +			       uint64_t addr, uint64_t size);
>> +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
>> +				       uint32_t vm, uint64_t offset,
>> +				       uint64_t addr, uint64_t size, uint32_t flags,
>> +				       struct drm_xe_sync *sync, uint32_t num_syncs);
>> +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
>> +				 uint64_t offset, uint64_t addr, uint64_t size);
>> +
>> +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
>> +					   int type, size_t len, void *data);
>> +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
>> +					uint32_t id, int type, size_t len);
>> +
>> +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
>> +						     xe_eudebug_client_work_fn work,
>> +						     unsigned int flags,
>> +						     void *test_private);
>> +void xe_eudebug_session_destroy(struct xe_eudebug_session *s);
>> +void xe_eudebug_session_run(struct xe_eudebug_session *s);
>> +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter);
>> -- 
>> 2.34.1
>>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework
  2024-08-20 16:14     ` Manszewski, Christoph
@ 2024-08-20 17:45       ` Kamil Konieczny
  2024-08-21  7:05         ` Manszewski, Christoph
  2024-08-21  9:31         ` Zbigniew Kempczyński
  0 siblings, 2 replies; 41+ messages in thread
From: Kamil Konieczny @ 2024-08-20 17:45 UTC (permalink / raw)
  To: igt-dev
  Cc: Manszewski, Christoph, Zbigniew Kempczyński,
	Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun, Mika Kuoppala,
	Karolina Stolarek

Hi Manszewski,
On 2024-08-20 at 18:14:07 +0200, Manszewski, Christoph wrote:
> Hi Zbigniew,
> 
> On 20.08.2024 10:14, Zbigniew Kempczyński wrote:
> > On Fri, Aug 09, 2024 at 02:38:03PM +0200, Christoph Manszewski wrote:
> > > From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > > 
> > > Introduce library which simplifies testing of eu debug capability.
> > > The library provides event log helpers together with an asynchronous
> > > abstraction for the client process and the debugger itself.
> > > 
> > > xe_eudebug_client creates its own process with the user's work function,
> > > and gives mechanisms to synchronize the beginning of execution and event
> > > logging.
> > > 
> > > xe_eudebug_debugger allows attaching to the given process, provides an
> > > asynchronous thread for event reading and introduces triggers -
> > > a callback mechanism triggered every time a subscribed event is read.
> > > 
> > > Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > > Signed-off-by: Mika Kuoppala <mika.kuaoppala@linux.intel.com>
> > > Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
> > > Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
> > > Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
> > > Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
> > > ---
> > >   lib/meson.build     |    1 +
> > >   lib/xe/xe_eudebug.c | 2192 +++++++++++++++++++++++++++++++++++++++++++
> > >   lib/xe/xe_eudebug.h |  206 ++++
> > >   3 files changed, 2399 insertions(+)
> > >   create mode 100644 lib/xe/xe_eudebug.c
> > >   create mode 100644 lib/xe/xe_eudebug.h
> > > 
> > > diff --git a/lib/meson.build b/lib/meson.build
> > > index f711e60a7..969ca4101 100644
> > > --- a/lib/meson.build
> > > +++ b/lib/meson.build
> > > @@ -111,6 +111,7 @@ lib_sources = [
> > >   	'igt_msm.c',
> > >   	'igt_dsc.c',
> > >   	'xe/xe_gt.c',
> > > +	'xe/xe_eudebug.c',
> > >   	'xe/xe_ioctl.c',
> > >   	'xe/xe_mmio.c',
> > >   	'xe/xe_query.c',
> > 
> > As eudebug is quite a big feature, I think it should be separated and
> > hidden behind a feature flag (check meson_options.txt), let's say
> > 'xe_eudebug', which would be disabled by default. This way you can
> > develop it upstream even if the kernel side is not officially merged.
> > I'm pragmatic and I see no reason to block a not-yet-accepted feature,
> > especially since this would imo speed up development. The final step,
> > once the kernel change is accepted and merged, would be to sync with
> > the uapi and remove the local definitions.
> > 
> > I look forward to the maintainers' comments on whether my attitude is acceptable.
> 
> I agree that it is a good idea. The only problem that arises is with
> 'xe_exec_sip': we add a dependency on eudebug to that test - any ideas on
> how to approach this correctly? The only thing that comes to mind is
> conditional compilation with 'ifdef' statements, but that doesn't look
> pretty.

What about adding skips in the added tests if the kernel does not
support eudebug?

This way you can have it without conditional compilation via
ifdef/meson, and also have it compile-time tested (if CI supports
test-with for Xe kernels).

Regards,
Kamil

> 
> Thanks,
> Christoph
> > 
> > --
> > Zbigniew
> > 
> > 
> > > diff --git a/lib/xe/xe_eudebug.c b/lib/xe/xe_eudebug.c
> > > new file mode 100644
> > > index 000000000..4eac87476
> > > --- /dev/null
> > > +++ b/lib/xe/xe_eudebug.c
> > > @@ -0,0 +1,2192 @@
> > > +// SPDX-License-Identifier: MIT
> > > +/*
> > > + * Copyright © 2023 Intel Corporation
> > > + */
> > > +
> > > +#include <fcntl.h>
> > > +#include <poll.h>
> > > +#include <signal.h>
> > > +#include <sys/select.h>
> > > +#include <sys/stat.h>
> > > +#include <sys/types.h>
> > > +#include <sys/wait.h>
> > > +
> > > +#include "igt.h"
> > > +#include "igt_sysfs.h"
> > > +#include "intel_pat.h"
> > > +#include "xe_eudebug.h"
> > > +#include "xe_ioctl.h"
> > > +
> > > +struct event_trigger {
> > > +	xe_eudebug_trigger_fn fn;
> > > +	int type;
> > > +	struct igt_list_head link;
> > > +};
> > > +
> > > +struct seqno_list_entry {
> > > +	struct igt_list_head link;
> > > +	uint64_t seqno;
> > > +};
> > > +
> > > +struct match_dto {
> > > +	struct drm_xe_eudebug_event *target;
> > > +	struct igt_list_head *seqno_list;
> > > +	uint64_t client_handle;
> > > +	uint32_t filter;
> > > +
> > > +	/* store latest 'EVENT_VM_BIND' seqno */
> > > +	uint64_t *bind_seqno;
> > > +	/* latest vm_bind_op seqno matching bind_seqno */
> > > +	uint64_t *bind_op_seqno;
> > > +};
> > > +
> > > +#define CLIENT_PID  1
> > > +#define CLIENT_RUN  2
> > > +#define CLIENT_FINI 3
> > > +#define CLIENT_STOP 4
> > > +#define CLIENT_STAGE 5
> > > +#define DEBUGGER_STAGE 6
> > > +
> > > +#define DEBUGGER_WORKER_INACTIVE  0
> > > +#define DEBUGGER_WORKER_ACTIVE  1
> > > +#define DEBUGGER_WORKER_QUITTING 2
> > > +
> > > +static const char *type_to_str(unsigned int type)
> > > +{
> > > +	switch (type) {
> > > +	case DRM_XE_EUDEBUG_EVENT_NONE:
> > > +		return "none";
> > > +	case DRM_XE_EUDEBUG_EVENT_READ:
> > > +		return "read";
> > > +	case DRM_XE_EUDEBUG_EVENT_OPEN:
> > > +		return "client";
> > > +	case DRM_XE_EUDEBUG_EVENT_VM:
> > > +		return "vm";
> > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE:
> > > +		return "exec_queue";
> > > +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION:
> > > +		return "attention";
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND:
> > > +		return "vm_bind";
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP:
> > > +		return "vm_bind_op";
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE:
> > > +		return "vm_bind_ufence";
> > > +	case DRM_XE_EUDEBUG_EVENT_METADATA:
> > > +		return "metadata";
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA:
> > > +		return "vm_bind_op_metadata";
> > > +	}
> > > +
> > > +	return "UNKNOWN";
> > > +}
> > > +
> > > +static const char *event_type_to_str(struct drm_xe_eudebug_event *e, char *buf)
> > > +{
> > > +	sprintf(buf, "%s(%d)", type_to_str(e->type), e->type);
> > > +
> > > +	return buf;
> > > +}
> > > +
> > > +static const char *flags_to_str(unsigned int flags)
> > > +{
> > > +	if (flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> > > +		if (flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK)
> > > +			return "create|ack";
> > > +		else
> > > +			return "create";
> > > +	}
> > > +	if (flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
> > > +		return "destroy";
> > > +
> > > +	if (flags & DRM_XE_EUDEBUG_EVENT_STATE_CHANGE)
> > > +		return "state-change";
> > > +
> > > +	igt_assert(!(flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK));
> > > +
> > > +	return "flags unknown";
> > > +}
> > > +
> > > +static const char *event_members_to_str(struct drm_xe_eudebug_event *e, char *b)
> > > +{
> > > +	switch (e->type) {
> > > +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> > > +		struct drm_xe_eudebug_event_client *ec = (struct drm_xe_eudebug_event_client *)e;
> > > +
> > > +		sprintf(b, "handle=%llu", ec->client_handle);
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM: {
> > > +		struct drm_xe_eudebug_event_vm *evm = (struct drm_xe_eudebug_event_vm *)e;
> > > +
> > > +		sprintf(b, "client_handle=%llu, handle=%llu",
> > > +			evm->client_handle, evm->vm_handle);
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> > > +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
> > > +
> > > +		sprintf(b, "client_handle=%llu, vm_handle=%llu, "
> > > +			   "exec_queue_handle=%llu, engine_class=%d, exec_queue_width=%d",
> > > +			ee->client_handle, ee->vm_handle,
> > > +			ee->exec_queue_handle, ee->engine_class, ee->width);
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION: {
> > > +		struct drm_xe_eudebug_event_eu_attention *ea = (void *)e;
> > > +
> > > +		sprintf(b, "client_handle=%llu, exec_queue_handle=%llu, "
> > > +			   "lrc_handle=%llu, bitmask_size=%d",
> > > +			ea->client_handle, ea->exec_queue_handle,
> > > +			ea->lrc_handle, ea->bitmask_size);
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> > > +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
> > > +
> > > +		sprintf(b, "client_handle=%llu, vm_handle=%llu, flags=0x%x, num_binds=%u",
> > > +			evmb->client_handle, evmb->vm_handle, evmb->flags, evmb->num_binds);
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> > > +		struct drm_xe_eudebug_event_vm_bind_op *op = (void *)e;
> > > +
> > > +		sprintf(b, "vm_bind_ref_seqno=%lld, addr=%016llx, range=%llu num_extensions=%llu",
> > > +			op->vm_bind_ref_seqno, op->addr, op->range, op->num_extensions);
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
> > > +		struct drm_xe_eudebug_event_vm_bind_ufence *f = (void *)e;
> > > +
> > > +		sprintf(b, "vm_bind_ref_seqno=%lld", f->vm_bind_ref_seqno);
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> > > +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
> > > +
> > > +		sprintf(b, "client_handle=%llu, metadata_handle=%llu, type=%llu, len=%llu",
> > > +			em->client_handle, em->metadata_handle, em->type, em->len);
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *op = (void *)e;
> > > +
> > > +		sprintf(b, "vm_bind_op_ref_seqno=%lld, metadata_handle=%llu, metadata_cookie=%llu",
> > > +			op->vm_bind_op_ref_seqno, op->metadata_handle, op->metadata_cookie);
> > > +		break;
> > > +	}
> > > +	default:
> > > +		strcpy(b, "<...>");
> > > +	}
> > > +
> > > +	return b;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_event_to_str:
> > > + * @e: pointer to event
> > > + * @buf: target to write string representation of @e
> > > + * @len: size of target buffer @buf
> > > + *
> > > + * Creates a string representation of the given event.
> > > + *
> > > + * Returns: the input buffer @buf with the written string.
> > > + */
> > > +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len)
> > > +{
> > > +	char a[256];
> > > +	char b[256];
> > > +
> > > +	snprintf(buf, len, "(%llu) %15s:%s: %s",
> > > +		 e->seqno,
> > > +		 event_type_to_str(e, a),
> > > +		 flags_to_str(e->flags),
> > > +		 event_members_to_str(e, b));
> > > +
> > > +	return buf;
> > > +}
> > > +
> > > +static void catch_child_failure(void)
> > > +{
> > > +	pid_t pid;
> > > +	int status;
> > > +
> > > +	pid = waitpid(-1, &status, WNOHANG);
> > > +
> > > +	if (pid == 0 || pid == -1)
> > > +		return;
> > > +
> > > +	if (!WIFEXITED(status))
> > > +		return;
> > > +
> > > +	igt_assert_f(WEXITSTATUS(status) == 0, "Client failed!\n");
> > > +}
> > > +
> > > +static int safe_pipe_read(int pipe[2], void *buf, int nbytes, int timeout_ms)
> > > +{
> > > +	int ret;
> > > +	int t = 0;
> > > +	struct pollfd fd = {
> > > +		.fd = pipe[0],
> > > +		.events = POLLIN,
> > > +		.revents = 0
> > > +	};
> > > +
> > > +	/* When child fails we may get stuck forever. Check whether
> > > +	 * the child process ended with an error.
> > > +	 */
> > > +	do {
> > > +		const int interval_ms = 1000;
> > > +
> > > +		ret = poll(&fd, 1, interval_ms);
> > > +
> > > +		if (!ret) {
> > > +			catch_child_failure();
> > > +			t += interval_ms;
> > > +		}
> > > +	} while (!ret && t < timeout_ms);
> > > +
> > > +	if (ret > 0)
> > > +		return read(pipe[0], buf, nbytes);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static uint64_t pipe_read(int pipe[2], int timeout_ms)
> > > +{
> > > +	uint64_t in;
> > > +	uint64_t ret;
> > > +
> > > +	ret = safe_pipe_read(pipe, &in, sizeof(in), timeout_ms);
> > > +	igt_assert(ret == sizeof(in));
> > > +
> > > +	return in;
> > > +}
> > > +
> > > +static void pipe_signal(int pipe[2], uint64_t token)
> > > +{
> > > +	igt_assert(write(pipe[1], &token, sizeof(token)) == sizeof(token));
> > > +}
> > > +
> > > +static void pipe_close(int pipe[2])
> > > +{
> > > +	if (pipe[0] != -1)
> > > +		close(pipe[0]);
> > > +
> > > +	if (pipe[1] != -1)
> > > +		close(pipe[1]);
> > > +}
> > > +
> > > +static uint64_t __wait_token(int p[2], const uint64_t token, int timeout_ms)
> > > +{
> > > +	uint64_t in;
> > > +
> > > +	in = pipe_read(p, timeout_ms);
> > > +
> > > +	igt_assert_eq(in, token);
> > > +
> > > +	return pipe_read(p, timeout_ms);
> > > +}
> > > +
> > > +static uint64_t client_wait_token(struct xe_eudebug_client *c,
> > > +				 const uint64_t token)
> > > +{
> > > +	return __wait_token(c->p_in, token, c->timeout_ms);
> > > +}
> > > +
> > > +static uint64_t wait_from_client(struct xe_eudebug_client *c,
> > > +				 const uint64_t token)
> > > +{
> > > +	return __wait_token(c->p_out, token, c->timeout_ms);
> > > +}
> > > +
> > > +static void token_signal(int p[2], const uint64_t token, const uint64_t value)
> > > +{
> > > +	pipe_signal(p, token);
> > > +	pipe_signal(p, value);
> > > +}
> > > +
> > > +static void client_signal(struct xe_eudebug_client *c,
> > > +			  const uint64_t token,
> > > +			  const uint64_t value)
> > > +{
> > > +	token_signal(c->p_out, token, value);
> > > +}
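The `pipe_signal()`/`__wait_token()` pair above implements a simple two-word handshake over a POSIX pipe: the sender writes a token word followed by a value word, and the receiver asserts the token before returning the value. A minimal stand-alone sketch of that protocol (the names here are illustrative, not part of the library):

```c
#include <assert.h>
#include <stdint.h>
#include <unistd.h>

/* Write a (token, value) pair into the pipe's write end. */
static void signal_token(int p[2], uint64_t token, uint64_t value)
{
	assert(write(p[1], &token, sizeof(token)) == sizeof(token));
	assert(write(p[1], &value, sizeof(value)) == sizeof(value));
}

/* Read a pair back, insisting on the expected token, and return the value. */
static uint64_t wait_token(int p[2], uint64_t expected)
{
	uint64_t token, value;

	assert(read(p[0], &token, sizeof(token)) == sizeof(token));
	assert(token == expected);
	assert(read(p[0], &value, sizeof(value)) == sizeof(value));

	return value;
}
```

Because both words are fixed-size, the reader can never see a torn message as long as writes stay below PIPE_BUF; the library additionally polls with a timeout (`safe_pipe_read()`) so a dead child cannot hang the reader forever.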
> > > +
> > > +static int __xe_eudebug_connect(int fd, pid_t pid, uint32_t flags, uint64_t events)
> > > +{
> > > +	struct drm_xe_eudebug_connect param = {
> > > +		.pid = pid,
> > > +		.flags = flags,
> > > +	};
> > > +	int debugfd;
> > > +
> > > +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
> > > +
> > > +	if (debugfd < 0)
> > > +		return -errno;
> > > +
> > > +	return debugfd;
> > > +}
> > > +
> > > +static void event_log_write_to_fd(struct xe_eudebug_event_log *l, int fd)
> > > +{
> > > +	igt_assert_eq(write(fd, &l->head, sizeof(l->head)),
> > > +		      sizeof(l->head));
> > > +
> > > +	igt_assert_eq(write(fd, l->log, l->head), l->head);
> > > +}
> > > +
> > > +static void read_all(int fd, void *buf, size_t nbytes)
> > > +{
> > > +	ssize_t remaining_size = nbytes;
> > > +	ssize_t current_size = 0;
> > > +	ssize_t read_size = 0;
> > > +
> > > +	do {
> > > +		read_size = read(fd, buf + current_size, remaining_size);
> > > +		igt_assert_f(read_size >= 0, "read failed: %s\n", strerror(errno));
> > > +
> > > +		current_size += read_size;
> > > +		remaining_size -= read_size;
> > > +	} while (remaining_size > 0 && read_size > 0);
> > > +
> > > +	igt_assert_eq(current_size, nbytes);
> > > +}
> > > +
> > > +static void event_log_read_from_fd(struct xe_eudebug_event_log *l, int fd)
> > > +{
> > > +	read_all(fd, &l->head, sizeof(l->head));
> > > +	igt_assert_lt(l->head, l->max_size);
> > > +
> > > +	read_all(fd, l->log, l->head);
> > > +}
> > > +
> > > +typedef int (*cmp_fn_t)(struct drm_xe_eudebug_event *, void *);
> > > +
> > > +static struct drm_xe_eudebug_event *
> > > +event_cmp(struct xe_eudebug_event_log *l,
> > > +	  struct drm_xe_eudebug_event *current,
> > > +	  cmp_fn_t match,
> > > +	  void *data)
> > > +{
> > > +	struct drm_xe_eudebug_event *e = current;
> > > +
> > > +	xe_eudebug_for_each_event(e, l) {
> > > +		if (match(e, data))
> > > +			return e;
> > > +	}
> > > +
> > > +	return NULL;
> > > +}
> > > +
> > > +static int match_type_and_flags(struct drm_xe_eudebug_event *a, void *data)
> > > +{
> > > +	struct drm_xe_eudebug_event *b = data;
> > > +
> > > +	if (a->type == b->type &&
> > > +	    a->flags == b->flags)
> > > +		return 1;
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int match_fields(struct drm_xe_eudebug_event *a, void *data)
> > > +{
> > > +	struct drm_xe_eudebug_event *b = data;
> > > +	int ret = 0;
> > > +
> > > +	ret = match_type_and_flags(a, data);
> > > +	if (!ret)
> > > +		return ret;
> > > +
> > > +	ret = 0;
> > > +
> > > +	switch (a->type) {
> > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> > > +		struct drm_xe_eudebug_event_exec_queue *ae = (void *)a;
> > > +		struct drm_xe_eudebug_event_exec_queue *be = (void *)b;
> > > +
> > > +		if (ae->engine_class == be->engine_class && ae->width == be->width)
> > > +			ret = 1;
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> > > +		struct drm_xe_eudebug_event_vm_bind *ea = (void *)a;
> > > +		struct drm_xe_eudebug_event_vm_bind *eb = (void *)b;
> > > +
> > > +		if (ea->num_binds == eb->num_binds)
> > > +			ret = 1;
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> > > +		struct drm_xe_eudebug_event_vm_bind_op *ea = (void *)a;
> > > +		struct drm_xe_eudebug_event_vm_bind_op *eb = (void *)b;
> > > +
> > > +		if (ea->addr == eb->addr && ea->range == eb->range &&
> > > +		    ea->num_extensions == eb->num_extensions)
> > > +			ret = 1;
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *ea = (void *)a;
> > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eb = (void *)b;
> > > +
> > > +		if (ea->metadata_handle == eb->metadata_handle &&
> > > +		    ea->metadata_cookie == eb->metadata_cookie)
> > > +			ret = 1;
> > > +		break;
> > > +	}
> > > +
> > > +	default:
> > > +		ret = 1;
> > > +		break;
> > > +	}
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > +static int match_client_handle(struct drm_xe_eudebug_event *e, void *data)
> > > +{
> > > +	struct match_dto *md = (void *)data;
> > > +	uint64_t *bind_seqno = md->bind_seqno;
> > > +	uint64_t *bind_op_seqno = md->bind_op_seqno;
> > > +	uint64_t h = md->client_handle;
> > > +
> > > +	if (XE_EUDEBUG_EVENT_IS_FILTERED(e->type, md->filter))
> > > +		return 0;
> > > +
> > > +	switch (e->type) {
> > > +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> > > +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
> > > +
> > > +		if (client->client_handle == h)
> > > +			return 1;
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM: {
> > > +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
> > > +
> > > +		if (vm->client_handle == h)
> > > +			return 1;
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> > > +		struct drm_xe_eudebug_event_exec_queue *ee = (struct drm_xe_eudebug_event_exec_queue *)e;
> > > +
> > > +		if (ee->client_handle == h)
> > > +			return 1;
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> > > +		struct drm_xe_eudebug_event_vm_bind *evmb = (struct drm_xe_eudebug_event_vm_bind *)e;
> > > +
> > > +		if (evmb->client_handle == h) {
> > > +			*bind_seqno = evmb->base.seqno;
> > > +			return 1;
> > > +		}
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> > > +		struct drm_xe_eudebug_event_vm_bind_op *eo = (struct drm_xe_eudebug_event_vm_bind_op *)e;
> > > +
> > > +		if (eo->vm_bind_ref_seqno == *bind_seqno) {
> > > +			*bind_op_seqno = eo->base.seqno;
> > > +			return 1;
> > > +		}
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
> > > +		struct drm_xe_eudebug_event_vm_bind_ufence *ef  = (struct drm_xe_eudebug_event_vm_bind_ufence *)e;
> > > +
> > > +		if (ef->vm_bind_ref_seqno == *bind_seqno)
> > > +			return 1;
> > > +
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> > > +		struct drm_xe_eudebug_event_metadata *em = (struct drm_xe_eudebug_event_metadata *)e;
> > > +
> > > +		if (em->client_handle == h)
> > > +			return 1;
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eo = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)e;
> > > +
> > > +		if (eo->vm_bind_op_ref_seqno == *bind_op_seqno)
> > > +			return 1;
> > > +		break;
> > > +	}
> > > +	default:
> > > +		break;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int match_opposite_resource(struct drm_xe_eudebug_event *e, void *data)
> > > +{
> > > +	struct drm_xe_eudebug_event *d = (void *)data;
> > > +	int ret;
> > > +
> > > +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
> > > +	d->flags &= ~(DRM_XE_EUDEBUG_EVENT_NEED_ACK);
> > > +	ret = match_type_and_flags(e, data);
> > > +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
> > > +
> > > +	if (!ret)
> > > +		return 0;
> > > +
> > > +	switch (e->type) {
> > > +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> > > +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
> > > +		struct drm_xe_eudebug_event_client *filter = (struct drm_xe_eudebug_event_client *)data;
> > > +
> > > +		if (client->client_handle == filter->client_handle)
> > > +			return 1;
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM: {
> > > +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
> > > +		struct drm_xe_eudebug_event_vm *filter = (struct drm_xe_eudebug_event_vm *)data;
> > > +
> > > +		if (vm->vm_handle == filter->vm_handle)
> > > +			return 1;
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> > > +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
> > > +		struct drm_xe_eudebug_event_exec_queue *filter = (struct drm_xe_eudebug_event_exec_queue *)data;
> > > +
> > > +		if (ee->exec_queue_handle == filter->exec_queue_handle)
> > > +			return 1;
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> > > +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
> > > +		struct drm_xe_eudebug_event_vm_bind *filter = (struct drm_xe_eudebug_event_vm_bind *)data;
> > > +
> > > +		if (evmb->vm_handle == filter->vm_handle &&
> > > +		    evmb->num_binds == filter->num_binds)
> > > +			return 1;
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> > > +		struct drm_xe_eudebug_event_vm_bind_op *avmb = (void *)e;
> > > +		struct drm_xe_eudebug_event_vm_bind_op *filter = (struct drm_xe_eudebug_event_vm_bind_op *)data;
> > > +
> > > +		if (avmb->addr == filter->addr &&
> > > +		    avmb->range == filter->range)
> > > +			return 1;
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> > > +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
> > > +		struct drm_xe_eudebug_event_metadata *filter = (struct drm_xe_eudebug_event_metadata *)data;
> > > +
> > > +		if (em->metadata_handle == filter->metadata_handle)
> > > +			return 1;
> > > +		break;
> > > +	}
> > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *avmb = (void *)e;
> > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *filter = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)data;
> > > +
> > > +		if (avmb->metadata_handle == filter->metadata_handle &&
> > > +		    avmb->metadata_cookie == filter->metadata_cookie)
> > > +			return 1;
> > > +		break;
> > > +	}
> > > +
> > > +	default:
> > > +		break;
> > > +	}
> > > +	return 0;
> > > +}
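`match_opposite_resource()` relies on the fact that XOR-ing the flags word with `CREATE | DESTROY` flips a create event into the matching destroy event (and back), so one comparator serves both directions; XOR-ing a second time restores the caller's data. The trick in isolation (the flag values below are illustrative, not the uapi ones):

```c
#include <stdint.h>

/* Illustrative flag bits; the real values come from the xe uapi header. */
#define EVENT_CREATE  (1u << 0)
#define EVENT_DESTROY (1u << 1)

/*
 * Toggle a create-flagged word into a destroy-flagged one and vice
 * versa. Works because exactly one of the two bits is ever set.
 */
static uint32_t toggle_create_destroy(uint32_t flags)
{
	return flags ^ (EVENT_CREATE | EVENT_DESTROY);
}
```

The involution property (applying it twice is a no-op) is what lets the matcher mutate `data` in place and undo the change before returning.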
> > > +
> > > +static int match_full(struct drm_xe_eudebug_event *e, void *data)
> > > +{
> > > +	struct seqno_list_entry *sl;
> > > +
> > > +	struct match_dto *md = (void *)data;
> > > +	int ret = 0;
> > > +
> > > +	ret = match_client_handle(e, md);
> > > +	if (!ret)
> > > +		return 0;
> > > +
> > > +	ret = match_fields(e, md->target);
> > > +	if (!ret)
> > > +		return 0;
> > > +
> > > +	igt_list_for_each_entry(sl, md->seqno_list, link) {
> > > +		if (sl->seqno == e->seqno)
> > > +			return 0;
> > > +	}
> > > +
> > > +	return 1;
> > > +}
> > > +
> > > +static struct drm_xe_eudebug_event *
> > > +event_type_match(struct xe_eudebug_event_log *l,
> > > +		 struct drm_xe_eudebug_event *target,
> > > +		 struct drm_xe_eudebug_event *current)
> > > +{
> > > +	return event_cmp(l, current, match_type_and_flags, target);
> > > +}
> > > +
> > > +static struct drm_xe_eudebug_event *
> > > +client_match(struct xe_eudebug_event_log *l,
> > > +	     uint64_t client_handle,
> > > +	     struct drm_xe_eudebug_event *current,
> > > +	     uint32_t filter,
> > > +	     uint64_t *bind_seqno,
> > > +	     uint64_t *bind_op_seqno)
> > > +{
> > > +	struct match_dto md = {
> > > +		.client_handle = client_handle,
> > > +		.filter = filter,
> > > +		.bind_seqno = bind_seqno,
> > > +		.bind_op_seqno = bind_op_seqno,
> > > +	};
> > > +
> > > +	return event_cmp(l, current, match_client_handle, &md);
> > > +}
> > > +
> > > +static struct drm_xe_eudebug_event *
> > > +opposite_event_match(struct xe_eudebug_event_log *l,
> > > +		    struct drm_xe_eudebug_event *target,
> > > +		    struct drm_xe_eudebug_event *current)
> > > +{
> > > +	return event_cmp(l, current, match_opposite_resource, target);
> > > +}
> > > +
> > > +static struct drm_xe_eudebug_event *
> > > +event_match(struct xe_eudebug_event_log *l,
> > > +	    struct drm_xe_eudebug_event *target,
> > > +	    uint64_t client_handle,
> > > +	    struct igt_list_head *seqno_list,
> > > +	    uint64_t *bind_seqno,
> > > +	    uint64_t *bind_op_seqno)
> > > +{
> > > +	struct match_dto md = {
> > > +		.target = target,
> > > +		.client_handle = client_handle,
> > > +		.seqno_list = seqno_list,
> > > +		.bind_seqno = bind_seqno,
> > > +		.bind_op_seqno = bind_op_seqno,
> > > +	};
> > > +
> > > +	return event_cmp(l, NULL, match_full, &md);
> > > +}
> > > +
> > > +static void compare_client(struct xe_eudebug_event_log *c, struct drm_xe_eudebug_event *_ce,
> > > +			   struct xe_eudebug_event_log *d, struct drm_xe_eudebug_event *_de,
> > > +			   uint32_t filter)
> > > +{
> > > +	struct drm_xe_eudebug_event_client *ce = (void *)_ce;
> > > +	struct drm_xe_eudebug_event_client *de = (void *)_de;
> > > +	uint64_t cbs = 0, dbs = 0, cbso = 0, dbso = 0;
> > > +
> > > +	struct igt_list_head matched_seqno_list;
> > > +	struct drm_xe_eudebug_event *hc, *hd;
> > > +	struct seqno_list_entry *entry, *tmp;
> > > +
> > > +	igt_assert(ce);
> > > +	igt_assert(de);
> > > +
> > > +	igt_debug("client: %llu -> %llu\n", ce->client_handle, de->client_handle);
> > > +
> > > +	hc = NULL;
> > > +	hd = NULL;
> > > +	IGT_INIT_LIST_HEAD(&matched_seqno_list);
> > > +
> > > +	do {
> > > +		hc = client_match(c, ce->client_handle, hc, filter, &cbs, &cbso);
> > > +		if (!hc)
> > > +			break;
> > > +
> > > +		hd = event_match(d, hc, de->client_handle, &matched_seqno_list, &dbs, &dbso);
> > > +
> > > +		igt_assert_f(hd, "%s (%llu): no matching event type %u found for client %llu\n",
> > > +			     c->name,
> > > +			     hc->seqno,
> > > +			     hc->type,
> > > +			     ce->client_handle);
> > > +
> > > +		igt_debug("comparing %s %llu vs %s %llu\n",
> > > +			  c->name, hc->seqno, d->name, hd->seqno);
> > > +
> > > +		/*
> > > +		 * Store the seqno of the event that was matched above,
> > > +		 * inside 'matched_seqno_list', to avoid it getting matched
> > > +		 * by subsequent 'event_match' calls.
> > > +		 */
> > > +		entry = malloc(sizeof(*entry));
> > > +		entry->seqno = hd->seqno;
> > > +		igt_list_add(&entry->link, &matched_seqno_list);
> > > +	} while (hc);
> > > +
> > > +	igt_list_for_each_entry_safe(entry, tmp, &matched_seqno_list, link)
> > > +		free(entry);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_event_log_find_seqno:
> > > + * @l: event log pointer
> > > + * @seqno: seqno of event to be found
> > > + *
> > > + * Finds the event with given seqno in the event log.
> > > + *
> > > + * Returns: pointer to the event with the given seqno within @l, or NULL if
> > > + * the seqno is not present.
> > > + */
> > > +struct drm_xe_eudebug_event *
> > > +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno)
> > > +{
> > > +	struct drm_xe_eudebug_event *e = NULL, *found = NULL;
> > > +
> > > +	igt_assert_neq(seqno, 0);
> > > +	/*
> > > +	 * Try to catch if seqno is corrupted and prevent too long tests,
> > > +	 * as our post processing of events is not optimized.
> > > +	 */
> > > +	igt_assert_lt(seqno, 10 * 1000 * 1000);
> > > +
> > > +	xe_eudebug_for_each_event(e, l) {
> > > +		if (e->seqno == seqno) {
> > > +			if (found) {
> > > +				igt_warn("Found multiple events with the same seqno %lu\n", seqno);
> > > +				xe_eudebug_event_log_print(l, false);
> > > +				igt_assert(!found);
> > > +			}
> > > +			found = e;
> > > +		}
> > > +	}
> > > +
> > > +	return found;
> > > +}
> > > +
> > > +static void event_log_sort(struct xe_eudebug_event_log *l)
> > > +{
> > > +	struct xe_eudebug_event_log *tmp;
> > > +	struct drm_xe_eudebug_event *e = NULL;
> > > +	uint64_t last_seqno = 0;
> > > +	uint64_t events = 0, added = 0;
> > > +	uint64_t i;
> > > +
> > > +	xe_eudebug_for_each_event(e, l) {
> > > +		if (e->seqno > last_seqno)
> > > +			last_seqno = e->seqno;
> > > +
> > > +		events++;
> > > +	}
> > > +
> > > +	tmp = xe_eudebug_event_log_create("tmp", l->max_size);
> > > +
> > > +	for (i = 1; i <= last_seqno; i++) {
> > > +		e = xe_eudebug_event_log_find_seqno(l, i);
> > > +		if (e) {
> > > +			xe_eudebug_event_log_write(tmp, e);
> > > +			added++;
> > > +		}
> > > +	}
> > > +
> > > +	igt_assert_eq(events, added);
> > > +	igt_assert_eq(tmp->head, l->head);
> > > +
> > > +	memcpy(l->log, tmp->log, tmp->head);
> > > +
> > > +	xe_eudebug_event_log_destroy(tmp);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_connect:
> > > + * @fd: Xe file descriptor
> > > + * @pid: client PID
> > > + * @flags: connection flags
> > > + *
> > > + * Opens the xe eu debugger connection to the process described by @pid
> > > + *
> > > + * Returns: the debugger connection fd on success, -errno otherwise.
> > > + */
> > > +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags)
> > > +{
> > > +	int ret;
> > > +	uint64_t events = 0; /* events filtering not supported yet! */
> > > +
> > > +	ret = __xe_eudebug_connect(fd, pid, flags, events);
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_event_log_create:
> > > + * @name: event log identifier
> > > + * @max_size: maximum size of created log
> > > + *
> > > + * Creates an EU debugger event log with size equal to @max_size.
> > > + *
> > > + * Returns: pointer to just created log
> > > + */
> > > +#define MAX_EVENT_LOG_SIZE (32 * 1024 * 1024)
> > > +struct xe_eudebug_event_log *xe_eudebug_event_log_create(const char *name, unsigned int max_size)
> > > +{
> > > +	struct xe_eudebug_event_log *l;
> > > +
> > > +	l = calloc(1, sizeof(*l));
> > > +	igt_assert(l);
> > > +	l->log = calloc(1, max_size);
> > > +	igt_assert(l->log);
> > > +	l->max_size = max_size;
> > > +	strncpy(l->name, name, sizeof(l->name) - 1);
> > > +	pthread_mutex_init(&l->lock, NULL);
> > > +
> > > +	return l;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_event_log_destroy:
> > > + * @l: event log pointer
> > > + *
> > > + * Frees given event log @l.
> > > + */
> > > +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l)
> > > +{
> > > +	pthread_mutex_destroy(&l->lock);
> > > +	free(l->log);
> > > +	free(l);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_event_log_write:
> > > + * @l: event log pointer
> > > + * @e: event to be written to event log
> > > + *
> > > + * Writes event @e to the event log, thread-safe.
> > > + */
> > > +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e)
> > > +{
> > > +	igt_assert(e->seqno);
> > > +	/*
> > > +	 * Try to catch if seqno is corrupted and prevent too long tests,
> > > +	 * as our post processing of events is not optimized.
> > > +	 */
> > > +	igt_assert_lt(e->seqno, 10 * 1000 * 1000);
> > > +
> > > +	pthread_mutex_lock(&l->lock);
> > > +	igt_assert_lt(l->head + e->len, l->max_size);
> > > +	memcpy(l->log + l->head, e, e->len);
> > > +	l->head += e->len;
> > > +
> > > +#ifdef DEBUG_LOG
> > > +	igt_info("%s: wrote %u bytes to eventlog, free %u bytes\n",
> > > +		 l->name, e->len, l->max_size - l->head);
> > > +#endif
> > > +	pthread_mutex_unlock(&l->lock);
> > > +}
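`xe_eudebug_event_log_write()` above is a mutex-guarded bump-pointer append into a flat byte buffer: take the lock, assert there is room, `memcpy()` at `head`, advance `head`. Stripped of the IGT plumbing, the pattern looks roughly like this (a hedged sketch, not the library code):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct append_log {
	pthread_mutex_t lock;
	char *buf;
	size_t head;
	size_t max_size;
};

static void log_init(struct append_log *l, size_t max_size)
{
	l->buf = calloc(1, max_size);
	assert(l->buf);
	l->head = 0;
	l->max_size = max_size;
	pthread_mutex_init(&l->lock, NULL);
}

/* Thread-safe append; asserts on overflow instead of growing the buffer. */
static void log_write(struct append_log *l, const void *data, size_t len)
{
	pthread_mutex_lock(&l->lock);
	assert(l->head + len < l->max_size);
	memcpy(l->buf + l->head, data, len);
	l->head += len;
	pthread_mutex_unlock(&l->lock);
}
```

Writes are serialized by the lock, but readers take no lock — which is why the library warns that walking the log while the worker thread is still appending is unsafe.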
> > > +
> > > +/**
> > > + * xe_eudebug_event_log_print:
> > > + * @l: event log pointer
> > > + * @debug: when true function uses igt_debug instead of igt_info.
> > > + *
> > > + * Prints given event log.
> > > + */
> > > +void
> > > +xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug)
> > > +{
> > > +	struct drm_xe_eudebug_event *e = NULL;
> > > +	int level = debug ? IGT_LOG_DEBUG : IGT_LOG_INFO;
> > > +	char str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
> > > +
> > > +	igt_log(IGT_LOG_DOMAIN, level,
> > > +		"event log '%s' (%u bytes):\n", l->name, l->head);
> > > +
> > > +	xe_eudebug_for_each_event(e, l) {
> > > +		xe_eudebug_event_to_str(e, str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
> > > +		igt_log(IGT_LOG_DOMAIN, level, "%s\n", str);
> > > +	}
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_event_log_compare:
> > > + * @a: event log pointer
> > > + * @b: event log pointer
> > > + * @filter: mask that represents events to be skipped during comparison, useful
> > > + * for events like 'VM_BIND' since they can be asymmetric. Note that
> > > + * 'DRM_XE_EUDEBUG_EVENT_OPEN' will always be matched.
> > > + *
> > > + * Compares event logs @a and @b, asserting that their event
> > > + * sequences match.
> > > + */
> > > +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *a, struct xe_eudebug_event_log *b,
> > > +				  uint32_t filter)
> > > +{
> > > +	struct drm_xe_eudebug_event *ae = NULL;
> > > +	struct drm_xe_eudebug_event *be = NULL;
> > > +
> > > +	xe_eudebug_for_each_event(ae, a) {
> > > +		if (ae->type == DRM_XE_EUDEBUG_EVENT_OPEN &&
> > > +		    ae->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> > > +			be = event_type_match(b, ae, be);
> > > +
> > > +			compare_client(a, ae, b, be, filter);
> > > +			compare_client(b, be, a, ae, filter);
> > > +		}
> > > +	}
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_event_log_match_opposite:
> > > + * @l: event log pointer
> > > + * @filter: mask that represents events to be skipped during comparison, useful
> > > + * for events like 'VM_BIND' since they can be asymmetric
> > > + *
> > > + * Matches and asserts content of all opposite events (create vs destroy).
> > > + */
> > > +void
> > > +xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter)
> > > +{
> > > +	struct drm_xe_eudebug_event *ce = NULL;
> > > +	struct drm_xe_eudebug_event *de = NULL;
> > > +
> > > +	xe_eudebug_for_each_event(ce, l) {
> > > +		if (ce->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> > > +			uint8_t offset = sizeof(struct drm_xe_eudebug_event);
> > > +			int opposite_matching;
> > > +
> > > +			if (XE_EUDEBUG_EVENT_IS_FILTERED(ce->type, filter))
> > > +				continue;
> > > +
> > > +			/* No opposite matching for binds */
> > > +			if ((ce->type >= DRM_XE_EUDEBUG_EVENT_VM_BIND &&
> > > +			     ce->type <= DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE) ||
> > > +			    ce->type == DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA)
> > > +				continue;
> > > +
> > > +			de = opposite_event_match(l, ce, ce);
> > > +
> > > +			igt_assert_f(de, "no opposite event of type %u found\n", ce->type);
> > > +
> > > +			igt_assert_eq(ce->len, de->len);
> > > +			opposite_matching = memcmp((uint8_t *)de + offset,
> > > +						   (uint8_t *)ce + offset,
> > > +						   de->len - offset) == 0;
> > > +
> > > +			igt_assert_f(opposite_matching,
> > > +				     "%s: create|destroy event not "
> > > +				     "matching (%llu) vs (%llu)\n",
> > > +				     l->name, de->seqno, ce->seqno);
> > > +		}
> > > +	}
> > > +}
> > > +
> > > +static void debugger_run_triggers(struct xe_eudebug_debugger *d,
> > > +				  struct drm_xe_eudebug_event *e)
> > > +{
> > > +	struct event_trigger *t;
> > > +
> > > +	igt_list_for_each_entry(t, &d->triggers, link) {
> > > +		if (e->type == t->type)
> > > +			t->fn(d, e);
> > > +	}
> > > +}
> > > +
> > > +#define MAX_EVENT_SIZE (32 * 1024)
> > > +static int
> > > +xe_eudebug_read_event(int fd, struct drm_xe_eudebug_event *event)
> > > +{
> > > +	int ret;
> > > +
> > > +	event->type = DRM_XE_EUDEBUG_EVENT_READ;
> > > +	event->flags = 0;
> > > +	event->len = MAX_EVENT_SIZE;
> > > +
> > > +	ret = igt_ioctl(fd, DRM_XE_EUDEBUG_IOCTL_READ_EVENT, event);
> > > +	if (ret < 0)
> > > +		return -errno;
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > +static void *debugger_worker_loop(void *data)
> > > +{
> > > +	uint8_t buf[MAX_EVENT_SIZE];
> > > +	struct drm_xe_eudebug_event *e = (void *)buf;
> > > +	struct xe_eudebug_debugger *d = data;
> > > +	struct pollfd p = {
> > > +		.events = POLLIN,
> > > +		.revents = 0,
> > > +	};
> > > +	int timeout_ms = 100, ret;
> > > +
> > > +	igt_assert(d->master_fd >= 0);
> > > +
> > > +	do {
> > > +		p.fd = d->fd;
> > > +		ret = poll(&p, 1, timeout_ms);
> > > +
> > > +		if (ret == -1) {
> > > +			igt_info("poll failed with errno %d\n", errno);
> > > +			break;
> > > +		}
> > > +
> > > +		if (ret == 1 && (p.revents & POLLIN)) {
> > > +			int err = xe_eudebug_read_event(d->fd, e);
> > > +
> > > +			if (!err) {
> > > +				++d->event_count;
> > > +
> > > +				xe_eudebug_event_log_write(d->log, e);
> > > +				debugger_run_triggers(d, e);
> > > +			} else {
> > > +				igt_info("xe_eudebug_read_event returned %d\n", err);
> > > +			}
> > > +		}
> > > +	} while ((ret && READ_ONCE(d->worker_state) == DEBUGGER_WORKER_QUITTING) ||
> > > +		 READ_ONCE(d->worker_state) == DEBUGGER_WORKER_ACTIVE);
> > > +
> > > +	d->worker_state = DEBUGGER_WORKER_INACTIVE;
> > > +	return NULL;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_debugger_available:
> > > + * @fd: Xe file descriptor
> > > + *
> > > + * Returns: true if a debugger connection is available, false otherwise.
> > > + */
> > > +bool xe_eudebug_debugger_available(int fd)
> > > +{
> > > +	struct drm_xe_eudebug_connect param = { .pid = getpid() };
> > > +	int debugfd;
> > > +
> > > +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
> > > +	if (debugfd >= 0)
> > > +		close(debugfd);
> > > +
> > > +	return debugfd >= 0;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_debugger_create:
> > > + * @master_fd: xe client used to open the debugger connection
> > > + * @flags: flags stored in the debugger structure, free for the caller
> > > + * to use, e.g. inside triggers.
> > > + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
> > > + * can be shared between client and debugger. Can be NULL.
> > > + *
> > > + * Returns: newly created xe_eudebug_debugger structure with its
> > > + * event log initialized. Note that to open the connection you need
> > > + * to call @xe_eudebug_debugger_attach.
> > > + */
> > > +struct xe_eudebug_debugger *
> > > +xe_eudebug_debugger_create(int master_fd, uint64_t flags, void *data)
> > > +{
> > > +	struct xe_eudebug_debugger *d;
> > > +
> > > +	d = calloc(1, sizeof(*d));
> > > +	igt_assert(d);
> > > +	d->flags = flags;
> > > +	IGT_INIT_LIST_HEAD(&d->triggers);
> > > +	d->log = xe_eudebug_event_log_create("debugger", MAX_EVENT_LOG_SIZE);
> > > +	d->fd = -1;
> > > +	d->master_fd = master_fd;
> > > +	d->ptr = data;
> > > +
> > > +	return d;
> > > +}
> > > +
> > > +static void debugger_destroy_triggers(struct xe_eudebug_debugger *d)
> > > +{
> > > +	struct event_trigger *t, *tmp;
> > > +
> > > +	igt_list_for_each_entry_safe(t, tmp, &d->triggers, link)
> > > +		free(t);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_debugger_destroy:
> > > + * @d: pointer to the debugger
> > > + *
> > > + * Frees xe_eudebug_debugger structure pointed by @d. If the debugger
> > > + * connection was still opened it terminates it.
> > > + */
> > > +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d)
> > > +{
> > > +	if (d->worker_state)
> > > +		xe_eudebug_debugger_stop_worker(d, 1);
> > > +
> > > +	if (d->target_pid)
> > > +		xe_eudebug_debugger_dettach(d);
> > > +
> > > +	xe_eudebug_event_log_destroy(d->log);
> > > +	debugger_destroy_triggers(d);
> > > +	free(d);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_debugger_attach:
> > > + * @d: pointer to the debugger
> > > + * @c: pointer to the client
> > > + *
> > > + * Opens the xe eu debugger connection to the process described by @c (c->pid)
> > > + *
> > > + * Returns: 0 if the debugger was successfully attached, -errno otherwise.
> > > + */
> > > +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d,
> > > +			       struct xe_eudebug_client *c)
> > > +{
> > > +	int ret;
> > > +
> > > +	igt_assert_eq(d->fd, -1);
> > > +	igt_assert_neq(c->pid, 0);
> > > +	ret = xe_eudebug_connect(d->master_fd, c->pid, 0);
> > > +
> > > +	if (ret < 0)
> > > +		return ret;
> > > +
> > > +	d->fd = ret;
> > > +	d->target_pid = c->pid;
> > > +	d->p_client[0] = c->p_in[0];
> > > +	d->p_client[1] = c->p_in[1];
> > > +
> > > +	igt_debug("debugger connected to %lu\n", d->target_pid);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_debugger_dettach:
> > > + * @d: pointer to the debugger
> > > + *
> > > + * Closes a previously opened xe eu debugger connection. Asserts that
> > > + * the debugger has an active session.
> > > + */
> > > +void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d)
> > > +{
> > > +	igt_assert(d->target_pid);
> > > +	close(d->fd);
> > > +	d->target_pid = 0;
> > > +	d->fd = -1;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_debugger_add_trigger:
> > > + * @d: pointer to the debugger
> > > + * @type: the type of the event which activates the trigger
> > > + * @fn: function to be called when an event of @type is read by the debugger.
> > > + *
> > > + * Adds function @fn to the list of triggers activated when an event of @type
> > > + * has been read by the worker.
> > > + */
> > > +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d,
> > > +				     int type, xe_eudebug_trigger_fn fn)
> > > +{
> > > +	struct event_trigger *t;
> > > +
> > > +	t = calloc(1, sizeof(*t));
> > > +	igt_assert(t);
> > > +	IGT_INIT_LIST_HEAD(&t->link);
> > > +	t->type = type;
> > > +	t->fn = fn;
> > > +
> > > +	igt_list_add_tail(&t->link, &d->triggers);
> > > +	igt_debug("added trigger %p\n", t);
> > > +}
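For readers outside the IGT tree: the trigger list above is a type-keyed callback registry walked by the worker for every event it reads. A standalone sketch of the idea (a simplified singly linked list, not the actual igt_list API — all names here are illustrative):

```c
#include <stdlib.h>

/* Hypothetical stand-in for the igt_list based trigger registry above:
 * callbacks are kept on a linked list and the worker fires every entry
 * whose type matches the event just read. */
struct trigger {
	struct trigger *next;
	int type;
	void (*fn)(int type, void *data);
};

static void add_trigger(struct trigger **head, int type,
			void (*fn)(int, void *))
{
	struct trigger *t = calloc(1, sizeof(*t));

	if (!t)
		return;
	t->type = type;
	t->fn = fn;
	t->next = *head;
	*head = t;
}

/* Called by the worker for each event it reads. */
static void dispatch_triggers(struct trigger *head, int type, void *data)
{
	for (; head; head = head->next)
		if (head->type == type)
			head->fn(type, data);
}
```

The same shape scales to multiple triggers per event type, which is why a list is used rather than a per-type slot.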
> > > +
> > > +/**
> > > + * xe_eudebug_debugger_start_worker:
> > > + * @d: pointer to the debugger
> > > + *
> > > + * Starts the debugger worker. The worker is responsible for reading all
> > > + * incoming events from the debugger, putting them into the debugger log and
> > > + * executing the appropriate event triggers. Note that using the debugger's
> > > + * event log while the worker is running is not safe.
> > > + */
> > > +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d)
> > > +{
> > > +	int ret;
> > > +
> > > +	d->worker_state = true;
> > > +	ret = pthread_create(&d->worker_thread, NULL, &debugger_worker_loop, d);
> > > +
> > > +	igt_assert_f(ret == 0, "Debugger worker thread creation failed!\n");
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_debugger_stop_worker:
> > > + * @d: pointer to the debugger
> > > + * @timeout_s: time, in seconds, the worker is given to quit on its own
> > > + *
> > > + * Stops the debugger worker. The event log is sorted by seqno after the
> > > + * worker exits.
> > > + */
> > > +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d,
> > > +				     int timeout_s)
> > > +{
> > > +	struct timespec t = {};
> > > +	int ret;
> > > +
> > > +	igt_assert(d->worker_state);
> > > +
> > > +	d->worker_state = DEBUGGER_WORKER_QUITTING; /* First time be polite. */
> > > +	igt_assert_eq(clock_gettime(CLOCK_REALTIME, &t), 0);
> > > +	t.tv_sec += timeout_s;
> > > +
> > > +	ret = pthread_timedjoin_np(d->worker_thread, NULL, &t);
> > > +
> > > +	if (ret == ETIMEDOUT) {
> > > +		d->worker_state = DEBUGGER_WORKER_INACTIVE;
> > > +		ret = pthread_join(d->worker_thread, NULL);
> > > +	}
> > > +
> > > +	igt_assert_f(ret == 0 || ret == ESRCH,
> > > +		     "pthread join failed with error %d!\n", ret);
> > > +
> > > +	event_log_sort(d->log);
> > > +}
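The two-phase shutdown above (flip the state flag, give the worker a grace period, then force it down) can be sketched standalone, which may help when reviewing the timeout handling. This is a minimal sketch under the assumption that the worker polls a state flag; names are illustrative, not the IGT API:

```c
/* Two-phase worker shutdown: first ask politely by flipping the state
 * flag and waiting up to timeout_s seconds, then force the worker to
 * bail out and join unconditionally. */
#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <time.h>

enum { WORKER_INACTIVE, WORKER_ACTIVE, WORKER_QUITTING };

static int stop_worker_two_phase(pthread_t thread, volatile int *state,
				 int timeout_s)
{
	struct timespec t;
	int ret;

	*state = WORKER_QUITTING;	   /* first time be polite */
	clock_gettime(CLOCK_REALTIME, &t); /* timedjoin expects CLOCK_REALTIME */
	t.tv_sec += timeout_s;

	ret = pthread_timedjoin_np(thread, NULL, &t);
	if (ret == ETIMEDOUT) {
		/* Worker did not drain in time; tell it to bail out now. */
		*state = WORKER_INACTIVE;
		ret = pthread_join(thread, NULL);
	}

	return ret;
}
```

Note that `pthread_timedjoin_np()` is a GNU extension and its absolute deadline is measured against CLOCK_REALTIME, matching the `clock_gettime(CLOCK_REALTIME, ...)` call in the patch.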
> > > +
> > > +/**
> > > + * xe_eudebug_debugger_signal_stage:
> > > + * @d: pointer to the debugger
> > > + * @stage: stage to signal
> > > + *
> > > + * Signals to client, waiting in xe_eudebug_client_wait_stage(),
> > > + * releasing it to proceed.
> > > + */
> > > +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage)
> > > +{
> > > +	token_signal(d->p_client, CLIENT_STAGE, stage);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_debugger_wait_stage:
> > > + * @s: pointer to xe_eudebug_session structure
> > > + * @stage: stage to wait on
> > > + *
> > > + * Pauses debugger until the client has signalled the corresponding stage with
> > > + * xe_eudebug_client_signal_stage. This is only for situations where the actual
> > > + * event flow is not enough to coordinate between client/debugger and extra sync
> > > + * mechanism is needed.
> > > + */
> > > +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage)
> > > +{
> > > +	u64 stage_in;
> > > +
> > > +	igt_debug("debugger xe client fd: %d pausing for stage %lu\n", s->d->master_fd, stage);
> > > +
> > > +	stage_in = wait_from_client(s->c, DEBUGGER_STAGE);
> > > +	igt_debug("debugger xe client fd: %d got stage %lu, expected %lu\n", s->d->master_fd,
> > > +		  stage_in, stage);
> > > +
> > > +	igt_assert_eq(stage_in, stage);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_create:
> > > + * @master_fd: xe client used to open the debugger connection
> > > + * @work: function that opens xe device and executes arbitrary workload
> > > + * @flags: flags stored in the client structure, usable at the caller's
> > > + * discretion, e.g. to provide the @work function with an additional switch.
> > > + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
> > > + * can be shared between client and debugger. Accessible via client->ptr.
> > > + * Can be NULL.
> > > + *
> > > + * Forks and creates the client process. @work won't be called until
> > > + * xe_eudebug_client_start is called.
> > > + *
> > > + * Returns: newly created xe_eudebug_client structure with its
> > > + * event log initialized.
> > > + */
> > > +struct xe_eudebug_client *xe_eudebug_client_create(int master_fd, xe_eudebug_client_work_fn work,
> > > +						   uint64_t flags, void *data)
> > > +{
> > > +	struct xe_eudebug_client *c;
> > > +
> > > +	c = calloc(1, sizeof(*c));
> > > +	igt_assert(c);
> > > +	c->flags = flags;
> > > +	igt_assert(!pipe(c->p_in));
> > > +	igt_assert(!pipe(c->p_out));
> > > +	c->seqno = 1;
> > > +	c->log = xe_eudebug_event_log_create("client", MAX_EVENT_LOG_SIZE);
> > > +	c->done = 0;
> > > +	c->ptr = data;
> > > +	c->master_fd = master_fd;
> > > +	c->timeout_ms = XE_EUDEBUG_DEFAULT_TIMEOUT_MS;
> > > +
> > > +	igt_fork(child, 1) {
> > > +		int mypid;
> > > +
> > > +		igt_assert_eq(c->pid, 0);
> > > +
> > > +		close(c->p_out[0]);
> > > +		c->p_out[0] = -1;
> > > +		close(c->p_in[1]);
> > > +		c->p_in[1] = -1;
> > > +
> > > +		mypid = getpid();
> > > +		client_signal(c, CLIENT_PID, mypid);
> > > +
> > > +		c->pid = client_wait_token(c, CLIENT_RUN);
> > > +		igt_assert_eq(c->pid, mypid);
> > > +		if (work)
> > > +			work(c);
> > > +
> > > +		client_signal(c, CLIENT_FINI, c->seqno);
> > > +
> > > +		event_log_write_to_fd(c->log, c->p_out[1]);
> > > +
> > > +		c->pid = client_wait_token(c, CLIENT_STOP);
> > > +		igt_assert_eq(c->pid, mypid);
> > > +	}
> > > +
> > > +	close(c->p_out[1]);
> > > +	c->p_out[1] = -1;
> > > +	close(c->p_in[0]);
> > > +	c->p_in[0] = -1;
> > > +
> > > +	c->pid = wait_from_client(c, CLIENT_PID);
> > > +
> > > +	igt_info("client running with pid %d\n", c->pid);
> > > +
> > > +	return c;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_stop:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + *
> > > + * Waits for the end of the client's work and exits the process.
> > > + */
> > > +void xe_eudebug_client_stop(struct xe_eudebug_client *c)
> > > +{
> > > +	if (c->pid) {
> > > +		int waitstatus;
> > > +
> > > +		xe_eudebug_client_wait_done(c);
> > > +
> > > +		token_signal(c->p_in, CLIENT_STOP, c->pid);
> > > +		igt_assert_eq(waitpid(c->pid, &waitstatus, 0),
> > > +			      c->pid);
> > > +		c->pid = 0;
> > > +	}
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_destroy:
> > > + * @c: pointer to xe_eudebug_client structure to be freed
> > > + *
> > > + * Frees the @c client structure. Note that it calls xe_eudebug_client_stop if
> > > + * the client process has not terminated yet.
> > > + */
> > > +void xe_eudebug_client_destroy(struct xe_eudebug_client *c)
> > > +{
> > > +	xe_eudebug_client_stop(c);
> > > +	pipe_close(c->p_in);
> > > +	pipe_close(c->p_out);
> > > +	xe_eudebug_event_log_destroy(c->log);
> > > +	free(c);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_get_seqno:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + *
> > > + * Increments and returns current seqno value of the given client @c
> > > + *
> > > + * Returns: incremented seqno
> > > + */
> > > +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c)
> > > +{
> > > +	return c->seqno++;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_start:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + *
> > > + * Starts execution of the client's work function within the client's process.
> > > + */
> > > +void xe_eudebug_client_start(struct xe_eudebug_client *c)
> > > +{
> > > +	token_signal(c->p_in, CLIENT_RUN, c->pid);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_wait_done:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + *
> > > + * Waits for the client's work to finish and updates the event log.
> > > + * Doesn't terminate the client's process yet.
> > > + */
> > > +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c)
> > > +{
> > > +	if (!c->done) {
> > > +		c->done = 1;
> > > +		c->seqno = wait_from_client(c, CLIENT_FINI);
> > > +		event_log_read_from_fd(c->log, c->p_out[0]);
> > > +	}
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_signal_stage:
> > > + * @c: pointer to the client
> > > + * @stage: stage to signal
> > > + *
> > > + * Signals to debugger, waiting in xe_eudebug_debugger_wait_stage(),
> > > + * releasing it to proceed.
> > > + */
> > > +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage)
> > > +{
> > > +	token_signal(c->p_out, DEBUGGER_STAGE, stage);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_wait_stage:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @stage: stage to wait on
> > > + *
> > > + * Pauses client until the debugger has signalled the corresponding stage with
> > > + * xe_eudebug_debugger_signal_stage. This is only for situations where the
> > > + * actual event flow is not enough to coordinate between client/debugger and extra
> > > + * sync mechanism is needed.
> > > + *
> > > + */
> > > +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage)
> > > +{
> > > +	u64 stage_in;
> > > +
> > > +	if (c->done) {
> > > +		igt_warn("client: %d already done before %lu\n", c->pid, stage);
> > > +		return;
> > > +	}
> > > +
> > > +	igt_debug("client: %d pausing for stage %lu\n", c->pid, stage);
> > > +
> > > +	stage_in = client_wait_token(c, CLIENT_STAGE);
> > > +	igt_debug("client: %d got stage %lu, expected %lu\n", c->pid, stage_in, stage);
> > > +
> > > +	igt_assert_eq(stage_in, stage);
> > > +}
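The `*_signal_stage()` / `*_wait_stage()` pairs on both sides reduce to a blocking (token, value) handshake over a pipe shared between the forked client and the debugger process. A self-contained sketch of that mechanism (hypothetical names, not the library's `token_signal()`/`client_wait_token()` signatures):

```c
/* Pipe-based token handshake: one side writes a fixed-size
 * (type, value) record, the other blocks until it can read it and
 * checks the type matches what it expected. */
#include <stdint.h>
#include <unistd.h>

struct token {
	uint64_t type;
	uint64_t value;
};

static void token_signal(int pipefd[2], uint64_t type, uint64_t value)
{
	struct token t = { .type = type, .value = value };

	if (write(pipefd[1], &t, sizeof(t)) != (ssize_t)sizeof(t))
		return; /* real code would assert here */
}

static uint64_t token_wait(int pipefd[2], uint64_t expected_type)
{
	struct token t;

	if (read(pipefd[0], &t, sizeof(t)) != (ssize_t)sizeof(t) ||
	    t.type != expected_type)
		return UINT64_MAX; /* real code would assert here */

	return t.value;
}
```

Because both records are smaller than PIPE_BUF, each write is atomic, which is what makes the single pipe safe as a rendezvous point between the two processes.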
> > > +
> > > +/**
> > > + * xe_eudebug_session_create:
> > > + * @fd: XE file descriptor
> > > + * @work: function passed to the xe_eudebug_client_create
> > > + * @flags: flags passed to client and debugger
> > > + * @test_private: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
> > > + * passed to client and debugger. Can be NULL.
> > > + *
> > > + * Creates session together with client and debugger structures.
> > > + */
> > > +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
> > > +						     xe_eudebug_client_work_fn work,
> > > +						     unsigned int flags,
> > > +						     void *test_private)
> > > +{
> > > +	struct xe_eudebug_session *s;
> > > +
> > > +	s = calloc(1, sizeof(*s));
> > > +	igt_assert(s);
> > > +
> > > +	s->c = xe_eudebug_client_create(fd, work, flags, test_private);
> > > +	s->d = xe_eudebug_debugger_create(fd, flags, test_private);
> > > +	s->flags = flags;
> > > +
> > > +	return s;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_session_run:
> > > + * @s: pointer to xe_eudebug_session structure
> > > + *
> > > + * Attaches the debugger to the client's process, starts the debugger's
> > > + * async event reader, starts the client, and once the client finishes,
> > > + * stops the debugger worker.
> > > + */
> > > +void xe_eudebug_session_run(struct xe_eudebug_session *s)
> > > +{
> > > +	struct xe_eudebug_debugger *debugger = s->d;
> > > +	struct xe_eudebug_client *client = s->c;
> > > +
> > > +	igt_assert_eq(xe_eudebug_debugger_attach(debugger, client), 0);
> > > +
> > > +	xe_eudebug_debugger_start_worker(debugger);
> > > +
> > > +	xe_eudebug_client_start(client);
> > > +	xe_eudebug_client_wait_done(client);
> > > +
> > > +	xe_eudebug_debugger_stop_worker(debugger, 1);
> > > +
> > > +	xe_eudebug_event_log_print(debugger->log, true);
> > > +	xe_eudebug_event_log_print(client->log, true);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_session_check:
> > > + * @s: pointer to xe_eudebug_session structure
> > > + * @match_opposite: indicates whether the check should match all
> > > + * create and destroy events.
> > > + * @filter: mask that represents events to be skipped during comparison, useful
> > > + * for events like 'VM_BIND' since they can be asymmetric
> > > + *
> > > + * Validates the debugger's log against the log created by the client.
> > > + */
> > > +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter)
> > > +{
> > > +	xe_eudebug_event_log_compare(s->c->log, s->d->log, filter);
> > > +
> > > +	if (match_opposite)
> > > +		xe_eudebug_event_log_match_opposite(s->d->log, filter);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_session_destroy:
> > > + * @s: pointer to xe_eudebug_session structure
> > > + *
> > > + * Destroy session together with its debugger and client.
> > > + */
> > > +void xe_eudebug_session_destroy(struct xe_eudebug_session *s)
> > > +{
> > > +	xe_eudebug_debugger_destroy(s->d);
> > > +	xe_eudebug_client_destroy(s->c);
> > > +
> > > +	free(s);
> > > +}
> > > +
> > > +#define to_base(x) ((struct drm_xe_eudebug_event *)&x)
> > > +
> > > +static void base_event(struct xe_eudebug_client *c,
> > > +		       struct drm_xe_eudebug_event *e,
> > > +		       uint32_t type,
> > > +		       uint32_t flags,
> > > +		       uint64_t size)
> > > +{
> > > +	e->type = type;
> > > +	e->flags = flags;
> > > +	e->seqno = xe_eudebug_client_get_seqno(c);
> > > +	e->len = size;
> > > +}
> > > +
> > > +static void client_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd)
> > > +{
> > > +	struct drm_xe_eudebug_event_client ec;
> > > +
> > > +	base_event(c, to_base(ec), DRM_XE_EUDEBUG_EVENT_OPEN, flags, sizeof(ec));
> > > +
> > > +	ec.client_handle = client_fd;
> > > +
> > > +	xe_eudebug_event_log_write(c->log, (void *)&ec);
> > > +}
> > > +
> > > +static void vm_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd, uint32_t vm_id)
> > > +{
> > > +	struct drm_xe_eudebug_event_vm evm;
> > > +
> > > +	base_event(c, to_base(evm), DRM_XE_EUDEBUG_EVENT_VM, flags, sizeof(evm));
> > > +
> > > +	evm.client_handle = client_fd;
> > > +	evm.vm_handle = vm_id;
> > > +
> > > +	xe_eudebug_event_log_write(c->log, (void *)&evm);
> > > +}
> > > +
> > > +static void exec_queue_event(struct xe_eudebug_client *c, uint32_t flags,
> > > +			     int client_fd, uint32_t vm_id,
> > > +			     uint32_t exec_queue_handle, uint16_t class,
> > > +			     uint16_t width)
> > > +{
> > > +	struct drm_xe_eudebug_event_exec_queue ee;
> > > +
> > > +	base_event(c, to_base(ee), DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
> > > +		   flags, sizeof(ee));
> > > +
> > > +	ee.client_handle = client_fd;
> > > +	ee.vm_handle = vm_id;
> > > +	ee.exec_queue_handle = exec_queue_handle;
> > > +	ee.engine_class = class;
> > > +	ee.width = width;
> > > +
> > > +	xe_eudebug_event_log_write(c->log, (void *)&ee);
> > > +}
> > > +
> > > +static void metadata_event(struct xe_eudebug_client *c, uint32_t flags,
> > > +			   int client_fd, uint32_t id, uint64_t type, uint64_t len)
> > > +{
> > > +	struct drm_xe_eudebug_event_metadata em;
> > > +
> > > +	base_event(c, to_base(em), DRM_XE_EUDEBUG_EVENT_METADATA,
> > > +		   flags, sizeof(em));
> > > +
> > > +	em.client_handle = client_fd;
> > > +	em.metadata_handle = id;
> > > +	em.type = type;
> > > +	em.len = len;
> > > +
> > > +	xe_eudebug_event_log_write(c->log, (void *)&em);
> > > +}
> > > +
> > > +static int enable_getset(int fd, bool *old, bool *new)
> > > +{
> > > +	static const char * const fname = "enable_eudebug";
> > > +	int ret = 0;
> > > +
> > > +	int sysfs, device_fd;
> > > +	bool val_before;
> > > +	struct stat st;
> > > +
> > > +	igt_assert(new || old);
> > > +
> > > +	igt_assert_eq(fstat(fd, &st), 0);
> > > +	sysfs = igt_sysfs_open(fd);
> > > +	if (sysfs < 0)
> > > +		return -1;
> > > +
> > > +	device_fd = openat(sysfs, "device", O_DIRECTORY | O_RDONLY);
> > > +	close(sysfs);
> > > +	if (device_fd < 0)
> > > +		return -1;
> > > +
> > > +	if (!__igt_sysfs_get_boolean(device_fd, fname, &val_before)) {
> > > +		ret = -1;
> > > +		goto out;
> > > +	}
> > > +
> > > +	igt_debug("enable_eudebug before: %d\n", val_before);
> > > +
> > > +	if (old)
> > > +		*old = val_before;
> > > +
> > > +	ret = 0;
> > > +	if (new) {
> > > +		if (__igt_sysfs_set_boolean(device_fd, fname, *new))
> > > +			igt_assert_eq(igt_sysfs_get_boolean(device_fd, fname), *new);
> > > +		else
> > > +			ret = -1;
> > > +	}
> > > +
> > > +out:
> > > +	close(device_fd);
> > > +	return ret;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_enable:
> > > + * @fd: xe client
> > > + * @enable: state toggle - true to enable, false to disable
> > > + *
> > > + * Enables/disables eudebug capability by writing to
> > > + * '/sys/class/drm/card<N>/device/enable_eudebug' sysfs entry.
> > > + *
> > > + * Returns: previous toggle value, i.e. true when eudebugging was enabled,
> > > + * false when eudebugging was disabled.
> > > + */
> > > +bool xe_eudebug_enable(int fd, bool enable)
> > > +{
> > > +	bool old = false;
> > > +	int ret = enable_getset(fd, &old, &enable);
> > > +
> > > +	if (ret) {
> > > +		igt_skip_on(enable);
> > > +		old = false;
> > > +	}
> > > +
> > > +	return old;
> > > +}
> > > +
> > > +/* EU debugger wrappers around resource-creating xe ioctls. */
> > > +
> > > +/**
> > > + * xe_eudebug_client_open_driver:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + *
> > > + * Calls drm_reopen_driver() and logs the corresponding
> > > + * event in the client's event log.
> > > + *
> > > + * Returns: valid DRM file descriptor
> > > + */
> > > +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c)
> > > +{
> > > +	int fd;
> > > +
> > > +	fd = drm_reopen_driver(c->master_fd);
> > > +	client_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd);
> > > +
> > > +	return fd;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_close_driver:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @fd: xe client
> > > + *
> > > + * Calls close driver and logs the corresponding event in
> > > + * client's event log.
> > > + */
> > > +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd)
> > > +{
> > > +	client_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd);
> > > +	close(fd);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_vm_create:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @fd: xe client
> > > + * @flags: vm create flags
> > > + * @ext: pointer to the first user extension
> > > + *
> > > + * Calls xe_vm_create() and logs corresponding events
> > > + * (including vm set metadata events) in client's event log.
> > > + *
> > > + * Returns: valid vm handle
> > > + */
> > > +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
> > > +				     uint32_t flags, uint64_t ext)
> > > +{
> > > +	uint32_t vm;
> > > +
> > > +	vm = xe_vm_create(fd, flags, ext);
> > > +	vm_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, vm);
> > > +
> > > +	return vm;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_vm_destroy:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @fd: xe client
> > > + * @vm: vm handle
> > > + *
> > > + * Calls xe_vm_destroy() and logs the corresponding event in
> > > + * client's event log.
> > > + */
> > > +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm)
> > > +{
> > > +	xe_vm_destroy(fd, vm);
> > > +	vm_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, vm);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_exec_queue_create:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @fd: xe client
> > > + * @create: exec_queue create drm struct
> > > + *
> > > + * Calls xe exec queue create ioctl and logs the corresponding event in
> > > + * client's event log.
> > > + *
> > > + * Returns: valid exec queue handle
> > > + */
> > > +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
> > > +					     struct drm_xe_exec_queue_create *create)
> > > +{
> > > +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
> > > +
> > > +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE, create), 0);
> > > +
> > > +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
> > > +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create->vm_id,
> > > +				 create->exec_queue_id, class, create->width);
> > > +
> > > +	return create->exec_queue_id;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_exec_queue_destroy:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @fd: xe client
> > > + * @create: exec_queue create drm struct which was used for creation
> > > + *
> > > + * Calls xe exec_queue destroy ioctl and logs the corresponding event in
> > > + * client's event log.
> > > + */
> > > +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
> > > +					  struct drm_xe_exec_queue_create *create)
> > > +{
> > > +	struct drm_xe_exec_queue_destroy destroy = { .exec_queue_id = create->exec_queue_id, };
> > > +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
> > > +
> > > +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
> > > +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, create->vm_id,
> > > +				 create->exec_queue_id, class, create->width);
> > > +
> > > +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_DESTROY, &destroy), 0);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_vm_bind_event:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @event_flags: base event flags
> > > + * @fd: xe client
> > > + * @vm: vm handle
> > > + * @bind_flags: bind flags of vm_bind_event
> > > + * @num_binds: number of bind operations for the event
> > > + * @ref_seqno: base vm bind reference seqno
> > > + *
> > > + * Logs the vm bind event in the client's event log.
> > > + */
> > > +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c,
> > > +				     uint32_t event_flags, int fd,
> > > +				     uint32_t vm, uint32_t bind_flags,
> > > +				     uint32_t num_binds, u64 *ref_seqno)
> > > +{
> > > +	struct drm_xe_eudebug_event_vm_bind evmb;
> > > +
> > > +	base_event(c, to_base(evmb), DRM_XE_EUDEBUG_EVENT_VM_BIND,
> > > +		   event_flags, sizeof(evmb));
> > > +	evmb.client_handle = fd;
> > > +	evmb.vm_handle = vm;
> > > +	evmb.flags = bind_flags;
> > > +	evmb.num_binds = num_binds;
> > > +
> > > +	*ref_seqno = evmb.base.seqno;
> > > +
> > > +	xe_eudebug_event_log_write(c->log, (void *)&evmb);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_vm_bind_op_event:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @event_flags: base event flags
> > > + * @bind_ref_seqno: base vm bind reference seqno
> > > + * @op_ref_seqno: output, the vm_bind_op event seqno
> > > + * @addr: ppgtt address
> > > + * @size: size of the binding
> > > + * @num_extensions: number of vm bind op extensions
> > > + *
> > > + * Logs vm bind op event in client's event log.
> > > + */
> > > +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
> > > +					uint64_t bind_ref_seqno, uint64_t *op_ref_seqno,
> > > +					uint64_t addr, uint64_t range,
> > > +					uint64_t num_extensions)
> > > +{
> > > +	struct drm_xe_eudebug_event_vm_bind_op op;
> > > +
> > > +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP,
> > > +		   event_flags, sizeof(op));
> > > +	op.vm_bind_ref_seqno = bind_ref_seqno;
> > > +	op.addr = addr;
> > > +	op.range = range;
> > > +	op.num_extensions = num_extensions;
> > > +
> > > +	*op_ref_seqno = op.base.seqno;
> > > +
> > > +	xe_eudebug_event_log_write(c->log, (void *)&op);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_vm_bind_op_metadata_event:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @event_flags: base event flags
> > > + * @op_ref_seqno: base vm bind op reference seqno
> > > + * @metadata_handle: metadata handle
> > > + * @metadata_cookie: metadata cookie
> > > + *
> > > + * Logs vm bind op metadata event in client's event log.
> > > + */
> > > +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
> > > +						 uint32_t event_flags, uint64_t op_ref_seqno,
> > > +						 uint64_t metadata_handle, uint64_t metadata_cookie)
> > > +{
> > > +	struct drm_xe_eudebug_event_vm_bind_op_metadata op;
> > > +
> > > +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA,
> > > +		   event_flags, sizeof(op));
> > > +	op.vm_bind_op_ref_seqno = op_ref_seqno;
> > > +	op.metadata_handle = metadata_handle;
> > > +	op.metadata_cookie = metadata_cookie;
> > > +
> > > +	xe_eudebug_event_log_write(c->log, (void *)&op);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_vm_bind_ufence_event:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @event_flags: base event flags
> > > + * @ref_seqno: base vm bind event seqno
> > > + *
> > > + * Logs vm bind ufence event in client's event log.
> > > + */
> > > +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
> > > +					    uint64_t ref_seqno)
> > > +{
> > > +	struct drm_xe_eudebug_event_vm_bind_ufence f;
> > > +
> > > +	base_event(c, to_base(f), DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> > > +		   event_flags, sizeof(f));
> > > +	f.vm_bind_ref_seqno = ref_seqno;
> > > +
> > > +	xe_eudebug_event_log_write(c->log, (void *)&f);
> > > +}
> > > +
> > > +static bool has_user_fence(const struct drm_xe_sync *sync, uint32_t num_syncs)
> > > +{
> > > +	while (num_syncs--)
> > > +		if (sync[num_syncs].type == DRM_XE_SYNC_TYPE_USER_FENCE)
> > > +			return true;
> > > +
> > > +	return false;
> > > +}
> > > +
> > > +#define for_each_metadata(__m, __ext)					\
> > > +	for ((__m) = from_user_pointer(__ext);				\
> > > +	     (__m);							\
> > > +	     (__m) = from_user_pointer((__m)->base.next_extension))	\
> > > +		if ((__m)->base.name == XE_VM_BIND_OP_EXTENSIONS_ATTACH_DEBUG)
> > > +
> > > +static int  __xe_eudebug_client_vm_bind(struct xe_eudebug_client *c,
> > > +					int fd, uint32_t vm, uint32_t exec_queue,
> > > +					uint32_t bo, uint64_t offset,
> > > +					uint64_t addr, uint64_t size,
> > > +					uint32_t op, uint32_t flags,
> > > +					struct drm_xe_sync *sync,
> > > +					uint32_t num_syncs,
> > > +					uint32_t prefetch_region,
> > > +					uint8_t pat_index, uint64_t op_ext)
> > > +{
> > > +	struct drm_xe_vm_bind_op_ext_attach_debug *metadata;
> > > +	const bool ufence = has_user_fence(sync, num_syncs);
> > > +	const uint32_t bind_flags = ufence ?
> > > +		DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE : 0;
> > > +	uint64_t seqno = 0, op_seqno = 0, num_metadata = 0;
> > > +	uint32_t bind_base_flags = 0;
> > > +	int ret;
> > > +
> > > +	for_each_metadata(metadata, op_ext)
> > > +		num_metadata++;
> > > +
> > > +	switch (op) {
> > > +	case DRM_XE_VM_BIND_OP_MAP:
> > > +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_CREATE;
> > > +		break;
> > > +	case DRM_XE_VM_BIND_OP_UNMAP:
> > > +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_DESTROY;
> > > +		igt_assert_eq(num_metadata, 0);
> > > +		igt_assert_eq(ufence, false);
> > > +		break;
> > > +	default:
> > > +		/* XXX unmap all? */
> > > +		igt_assert(op);
> > > +		break;
> > > +	}
> > > +
> > > +	ret = ___xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
> > > +			    op, flags, sync, num_syncs, prefetch_region,
> > > +			    pat_index, 0, op_ext);
> > > +
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	if (!bind_base_flags)
> > > +		return -EINVAL;
> > > +
> > > +	xe_eudebug_client_vm_bind_event(c, DRM_XE_EUDEBUG_EVENT_STATE_CHANGE,
> > > +					fd, vm, bind_flags, 1, &seqno);
> > > +	xe_eudebug_client_vm_bind_op_event(c, bind_base_flags,
> > > +					   seqno, &op_seqno, addr, size,
> > > +					   num_metadata);
> > > +
> > > +	for_each_metadata(metadata, op_ext)
> > > +		xe_eudebug_client_vm_bind_op_metadata_event(c,
> > > +							    DRM_XE_EUDEBUG_EVENT_CREATE,
> > > +							    op_seqno,
> > > +							    metadata->metadata_id,
> > > +							    metadata->cookie);
> > > +	if (ufence)
> > > +		xe_eudebug_client_vm_bind_ufence_event(c, DRM_XE_EUDEBUG_EVENT_CREATE |
> > > +						       DRM_XE_EUDEBUG_EVENT_NEED_ACK,
> > > +						       seqno);
> > > +	return ret;
> > > +}
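The metadata counting in `__xe_eudebug_client_vm_bind()` walks the user-extension chain the same way the kernel does: follow `next_extension` pointers until zero, counting entries whose name matches the debug-metadata attach extension. A standalone sketch with simplified stand-in structures (the real ones are the `drm_xe` uapi structs):

```c
/* Walk a next_extension-linked chain and count entries carrying the
 * debug-metadata attach name, mirroring for_each_metadata() above.
 * ATTACH_DEBUG_NAME stands in for XE_VM_BIND_OP_EXTENSIONS_ATTACH_DEBUG. */
#include <stdint.h>

#define ATTACH_DEBUG_NAME 1

struct user_ext {
	uint64_t next_extension;	/* user pointer to next ext; 0 ends the chain */
	uint32_t name;
};

static unsigned int count_debug_metadata(uint64_t ext)
{
	const struct user_ext *e;
	unsigned int n = 0;

	for (e = (const struct user_ext *)(uintptr_t)ext; e;
	     e = (const struct user_ext *)(uintptr_t)e->next_extension)
		if (e->name == ATTACH_DEBUG_NAME)
			n++;

	return n;
}
```

The count feeds `num_extensions` in the expected VM_BIND_OP event, so the client-side log can be compared one-to-one against what the debugger receives.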
> > > +
> > > +static void _xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd,
> > > +				       uint32_t vm, uint32_t bo,
> > > +				       uint64_t offset, uint64_t addr, uint64_t size,
> > > +				       uint32_t op,
> > > +				       uint32_t flags,
> > > +				       struct drm_xe_sync *sync,
> > > +				       uint32_t num_syncs,
> > > +				       uint64_t op_ext)
> > > +{
> > > +	const uint32_t exec_queue_id = 0;
> > > +	const uint32_t prefetch_region = 0;
> > > +
> > > +	igt_assert_eq(__xe_eudebug_client_vm_bind(c, fd, vm, exec_queue_id, bo, offset,
> > > +						  addr, size, op, flags,
> > > +						  sync, num_syncs, prefetch_region,
> > > +						  DEFAULT_PAT_INDEX, op_ext),
> > > +		      0);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_vm_bind_flags:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @fd: xe client
> > > + * @vm: vm handle
> > > + * @bo: buffer object handle
> > > + * @offset: offset within buffer object
> > > + * @addr: ppgtt address
> > > + * @size: size of the binding
> > > + * @flags: vm_bind flags
> > > + * @sync: sync objects
> > > + * @num_syncs: number of sync objects
> > > + * @op_ext: BIND_OP extensions
> > > + *
> > > + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
> > > + */
> > > +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > +				     uint32_t bo, uint64_t offset,
> > > +				     uint64_t addr, uint64_t size, uint32_t flags,
> > > +				     struct drm_xe_sync *sync, uint32_t num_syncs,
> > > +				     uint64_t op_ext)
> > > +{
> > > +	_xe_eudebug_client_vm_bind(c, fd, vm, bo, offset, addr, size,
> > > +				   DRM_XE_VM_BIND_OP_MAP, flags,
> > > +				   sync, num_syncs, op_ext);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_vm_bind:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @fd: xe client
> > > + * @vm: vm handle
> > > + * @bo: buffer object handle
> > > + * @offset: offset within buffer object
> > > + * @addr: ppgtt address
> > > + * @size: size of the binding
> > > + *
> > > + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
> > > + */
> > > +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > +			       uint32_t bo, uint64_t offset,
> > > +			       uint64_t addr, uint64_t size)
> > > +{
> > > +	const uint32_t flags = 0;
> > > +	struct drm_xe_sync *sync = NULL;
> > > +	const uint32_t num_syncs = 0;
> > > +	const uint64_t op_ext = 0;
> > > +
> > > +	xe_eudebug_client_vm_bind_flags(c, fd, vm, bo, offset, addr, size,
> > > +					flags,
> > > +					sync, num_syncs, op_ext);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_vm_unbind_flags:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @fd: xe client
> > > + * @vm: vm handle
> > > + * @offset: offset
> > > + * @addr: ppgtt address
> > > + * @size: size of the binding
> > > + * @flags: vm_bind flags
> > > + * @sync: sync objects
> > > + * @num_syncs: number of sync objects
> > > + *
> > > + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
> > > + */
> > > +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
> > > +				       uint32_t vm, uint64_t offset,
> > > +				       uint64_t addr, uint64_t size, uint32_t flags,
> > > +				       struct drm_xe_sync *sync, uint32_t num_syncs)
> > > +{
> > > +	_xe_eudebug_client_vm_bind(c, fd, vm, 0, offset, addr, size,
> > > +				   DRM_XE_VM_BIND_OP_UNMAP, flags,
> > > +				   sync, num_syncs, 0);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_vm_unbind:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @fd: xe client
> > > + * @vm: vm handle
> > > + * @offset: offset
> > > + * @addr: ppgtt address
> > > + * @size: size of the binding
> > > + *
> > > + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
> > > + */
> > > +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > +				 uint64_t offset, uint64_t addr, uint64_t size)
> > > +{
> > > +	const uint32_t flags = 0;
> > > +	struct drm_xe_sync *sync = NULL;
> > > +	const uint32_t num_syncs = 0;
> > > +
> > > +	xe_eudebug_client_vm_unbind_flags(c, fd, vm, offset, addr, size,
> > > +					  flags, sync, num_syncs);
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_metadata_create:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @fd: xe client
> > > + * @type: debug metadata type
> > > + * @len: size of @data
> > > + * @data: debug metadata payload
> > > + *
> > > + * Calls xe metadata create ioctl and logs the corresponding event in
> > > + * client's event log.
> > > + *
> > > + * Return: valid debug metadata id.
> > > + */
> > > +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
> > > +					   int type, size_t len, void *data)
> > > +{
> > > +	struct drm_xe_debug_metadata_create create = {
> > > +		.type = type,
> > > +		.user_addr = to_user_pointer(data),
> > > +		.len = len
> > > +	};
> > > +
> > > +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_CREATE, &create), 0);
> > > +
> > > +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create.metadata_id, type, len);
> > > +
> > > +	return create.metadata_id;
> > > +}
> > > +
> > > +/**
> > > + * xe_eudebug_client_metadata_destroy:
> > > + * @c: pointer to xe_eudebug_client structure
> > > + * @fd: xe client
> > > + * @id: xe debug metadata handle
> > > + * @type: debug metadata type
> > > + * @len: size of debug metadata payload
> > > + *
> > > + * Calls xe metadata destroy ioctl and logs the corresponding event in
> > > + * client's event log.
> > > + */
> > > +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
> > > +					uint32_t id, int type, size_t len)
> > > +{
> > > +	struct drm_xe_debug_metadata_destroy destroy = { .metadata_id = id };
> > > +
> > > +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_DESTROY, &destroy), 0);
> > > +
> > > +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, id, type, len);
> > > +}
> > > +
> > > +void xe_eudebug_ack_ufence(int debugfd,
> > > +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f)
> > > +{
> > > +	struct drm_xe_eudebug_ack_event ack = { 0, };
> > > +	char event_str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
> > > +
> > > +	ack.type = f->base.type;
> > > +	ack.seqno = f->base.seqno;
> > > +
> > > +	xe_eudebug_event_to_str((void *)f, event_str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
> > > +	igt_debug("delivering ack for event: %s\n", event_str);
> > > +	igt_assert_eq(igt_ioctl(debugfd, DRM_XE_EUDEBUG_IOCTL_ACK_EVENT, &ack), 0);
> > > +}
> > > diff --git a/lib/xe/xe_eudebug.h b/lib/xe/xe_eudebug.h
> > > new file mode 100644
> > > index 000000000..444f5a7b7
> > > --- /dev/null
> > > +++ b/lib/xe/xe_eudebug.h
> > > @@ -0,0 +1,206 @@
> > > +/* SPDX-License-Identifier: MIT */
> > > +/*
> > > + * Copyright © 2023 Intel Corporation
> > > + */
> > > +#include <fcntl.h>
> > > +#include <pthread.h>
> > > +#include <stdint.h>
> > > +#include <xe_drm.h>
> > > +
> > > +#include "igt_list.h"
> > > +
> > > +struct xe_eudebug_event_log {
> > > +	uint8_t *log;
> > > +	unsigned int head;
> > > +	unsigned int max_size;
> > > +	char name[80];
> > > +	pthread_mutex_t lock;
> > > +};
> > > +
> > > +struct xe_eudebug_debugger {
> > > +	int fd;
> > > +	uint64_t flags;
> > > +
> > > +	/* Used to smuggle private data */
> > > +	void *ptr;
> > > +
> > > +	struct xe_eudebug_event_log *log;
> > > +
> > > +	uint64_t event_count;
> > > +
> > > +	uint64_t target_pid;
> > > +
> > > +	struct igt_list_head triggers;
> > > +
> > > +	int master_fd;
> > > +
> > > +	pthread_t worker_thread;
> > > +	int worker_state;
> > > +
> > > +	int p_client[2];
> > > +};
> > > +
> > > +struct xe_eudebug_client {
> > > +	int pid;
> > > +	uint64_t seqno;
> > > +	uint64_t flags;
> > > +
> > > +	/* Used to smuggle private data */
> > > +	void *ptr;
> > > +
> > > +	struct xe_eudebug_event_log *log;
> > > +
> > > +	int done;
> > > +	int p_in[2];
> > > +	int p_out[2];
> > > +
> > > +	/* Used to pickup right device (the one used in debugger) */
> > > +	int master_fd;
> > > +
> > > +	int timeout_ms;
> > > +};
> > > +
> > > +struct xe_eudebug_session {
> > > +	uint64_t flags;
> > > +	struct xe_eudebug_client *c;
> > > +	struct xe_eudebug_debugger *d;
> > > +};
> > > +
> > > +typedef void (*xe_eudebug_client_work_fn)(struct xe_eudebug_client *);
> > > +typedef void (*xe_eudebug_trigger_fn)(struct xe_eudebug_debugger *,
> > > +				      struct drm_xe_eudebug_event *);
> > > +
> > > +#define xe_eudebug_for_each_event(_e, _log) \
> > > +	for ((_e) = (_e) ? (void *)(uint8_t *)(_e) + (_e)->len : \
> > > +		    (void *)(_log)->log; \
> > > +	    (uint8_t *)(_e) < (_log)->log + (_log)->head; \
> > > +	    (_e) = (void *)(uint8_t *)(_e) + (_e)->len)
> > > +
> > > +#define xe_eudebug_assert(d, c)						\
> > > +	do {								\
> > > +		if (!(c)) {						\
> > > +			xe_eudebug_event_log_print((d)->log, true);	\
> > > +			igt_assert(c);					\
> > > +		}							\
> > > +	} while (0)
> > > +
> > > +#define xe_eudebug_assert_f(d, c, f...)					\
> > > +	do {								\
> > > +		if (!(c)) {						\
> > > +			xe_eudebug_event_log_print((d)->log, true);	\
> > > +			igt_assert_f(c, f);				\
> > > +		}							\
> > > +	} while (0)
> > > +
> > > +#define XE_EUDEBUG_EVENT_STRING_MAX_LEN		4096
> > > +
> > > +/*
> > > + * Default abort timeout to use across xe_eudebug lib and tests if no specific
> > > + * timeout value is required.
> > > + */
> > > +#define XE_EUDEBUG_DEFAULT_TIMEOUT_MS		25000ULL
> > > +
> > > +#define XE_EUDEBUG_FILTER_EVENT_NONE		BIT(DRM_XE_EUDEBUG_EVENT_NONE)
> > > +#define XE_EUDEBUG_FILTER_EVENT_READ		BIT(DRM_XE_EUDEBUG_EVENT_READ)
> > > +#define XE_EUDEBUG_FILTER_EVENT_OPEN		BIT(DRM_XE_EUDEBUG_EVENT_OPEN)
> > > +#define XE_EUDEBUG_FILTER_EVENT_VM		BIT(DRM_XE_EUDEBUG_EVENT_VM)
> > > +#define XE_EUDEBUG_FILTER_EVENT_EXEC_QUEUE	BIT(DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE)
> > > +#define XE_EUDEBUG_FILTER_EVENT_EU_ATTENTION	BIT(DRM_XE_EUDEBUG_EVENT_EU_ATTENTION)
> > > +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND		BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND)
> > > +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP	BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_OP)
> > > +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE  BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE)
> > > +#define XE_EUDEBUG_FILTER_ALL			GENMASK(DRM_XE_EUDEBUG_EVENT_MAX_EVENT, 0)
> > > +#define XE_EUDEBUG_EVENT_IS_FILTERED(_e, _f)	((1UL << _e) & _f)
> > > +
> > > +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags);
> > > +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len);
> > > +struct drm_xe_eudebug_event *
> > > +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno);
> > > +struct xe_eudebug_event_log *
> > > +xe_eudebug_event_log_create(const char *name, unsigned int max_size);
> > > +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l);
> > > +void xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug);
> > > +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *c, struct xe_eudebug_event_log *d,
> > > +				  uint32_t filter);
> > > +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e);
> > > +void xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter);
> > > +
> > > +bool xe_eudebug_debugger_available(int fd);
> > > +struct xe_eudebug_debugger *
> > > +xe_eudebug_debugger_create(int xe, uint64_t flags, void *data);
> > > +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d);
> > > +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d, struct xe_eudebug_client *c);
> > > +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d);
> > > +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d, int timeout_s);
> > > +void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d);
> > > +void xe_eudebug_debugger_set_data(struct xe_eudebug_debugger *c, void *ptr);
> > > +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d, int type,
> > > +				     xe_eudebug_trigger_fn fn);
> > > +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage);
> > > +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage);
> > > +
> > > +struct xe_eudebug_client *
> > > +xe_eudebug_client_create(int xe, xe_eudebug_client_work_fn work, uint64_t flags, void *data);
> > > +void xe_eudebug_client_destroy(struct xe_eudebug_client *c);
> > > +void xe_eudebug_client_start(struct xe_eudebug_client *c);
> > > +void xe_eudebug_client_stop(struct xe_eudebug_client *c);
> > > +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c);
> > > +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage);
> > > +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage);
> > > +
> > > +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c);
> > > +void xe_eudebug_client_set_data(struct xe_eudebug_client *c, void *ptr);
> > > +
> > > +bool xe_eudebug_enable(int fd, bool enable);
> > > +
> > > +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c);
> > > +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd);
> > > +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
> > > +				     uint32_t flags, uint64_t ext);
> > > +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm);
> > > +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
> > > +					     struct drm_xe_exec_queue_create *create);
> > > +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
> > > +					  struct drm_xe_exec_queue_create *create);
> > > +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c, uint32_t event_flags, int fd,
> > > +				     uint32_t vm, uint32_t bind_flags,
> > > +				     uint32_t num_ops, uint64_t *ref_seqno);
> > > +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
> > > +					uint64_t ref_seqno, uint64_t *op_ref_seqno,
> > > +					uint64_t addr, uint64_t range,
> > > +					uint64_t num_extensions);
> > > +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
> > > +						 uint32_t event_flags, uint64_t op_ref_seqno,
> > > +						 uint64_t metadata_handle, uint64_t metadata_cookie);
> > > +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
> > > +					    uint64_t ref_seqno);
> > > +void xe_eudebug_ack_ufence(int debugfd,
> > > +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f);
> > > +
> > > +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > +				     uint32_t bo, uint64_t offset,
> > > +				     uint64_t addr, uint64_t size, uint32_t flags,
> > > +				     struct drm_xe_sync *sync, uint32_t num_syncs,
> > > +				     uint64_t op_ext);
> > > +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > +			       uint32_t bo, uint64_t offset,
> > > +			       uint64_t addr, uint64_t size);
> > > +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
> > > +				       uint32_t vm, uint64_t offset,
> > > +				       uint64_t addr, uint64_t size, uint32_t flags,
> > > +				       struct drm_xe_sync *sync, uint32_t num_syncs);
> > > +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > +				 uint64_t offset, uint64_t addr, uint64_t size);
> > > +
> > > +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
> > > +					   int type, size_t len, void *data);
> > > +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
> > > +					uint32_t id, int type, size_t len);
> > > +
> > > +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
> > > +						     xe_eudebug_client_work_fn work,
> > > +						     unsigned int flags,
> > > +						     void *test_private);
> > > +void xe_eudebug_session_destroy(struct xe_eudebug_session *s);
> > > +void xe_eudebug_session_run(struct xe_eudebug_session *s);
> > > +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter);
> > > -- 
> > > 2.34.1
> > > 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework
  2024-08-20 17:45       ` Kamil Konieczny
@ 2024-08-21  7:05         ` Manszewski, Christoph
  2024-08-21  9:31         ` Zbigniew Kempczyński
  1 sibling, 0 replies; 41+ messages in thread
From: Manszewski, Christoph @ 2024-08-21  7:05 UTC (permalink / raw)
  To: Kamil Konieczny, igt-dev, Zbigniew Kempczyński,
	Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun, Mika Kuoppala,
	Karolina Stolarek

Hi Kamil,

On 20.08.2024 19:45, Kamil Konieczny wrote:
> Hi Manszewski,
> On 2024-08-20 at 18:14:07 +0200, Manszewski, Christoph wrote:
>> Hi Zbigniew,
>>
>> On 20.08.2024 10:14, Zbigniew Kempczyński wrote:
>>> On Fri, Aug 09, 2024 at 02:38:03PM +0200, Christoph Manszewski wrote:
>>>> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>>>>
>>>> Introduce a library which simplifies testing of the eu debug
>>>> capability. The library provides event log helpers together with an
>>>> asynchronous abstraction for the client process and the debugger
>>>> itself.
>>>>
>>>> xe_eudebug_client creates its own process with the user's work
>>>> function, and provides mechanisms to synchronize the beginning of
>>>> execution and event logging.
>>>>
>>>> xe_eudebug_debugger allows attaching to the given process, provides
>>>> an asynchronous thread for event reading and introduces triggers -
>>>> a callback mechanism invoked every time a subscribed event is read.
>>>>
>>>> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>>>> Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
>>>> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
>>>> Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
>>>> Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
>>>> Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
>>>> ---
>>>>    lib/meson.build     |    1 +
>>>>    lib/xe/xe_eudebug.c | 2192 +++++++++++++++++++++++++++++++++++++++++++
>>>>    lib/xe/xe_eudebug.h |  206 ++++
>>>>    3 files changed, 2399 insertions(+)
>>>>    create mode 100644 lib/xe/xe_eudebug.c
>>>>    create mode 100644 lib/xe/xe_eudebug.h
>>>>
>>>> diff --git a/lib/meson.build b/lib/meson.build
>>>> index f711e60a7..969ca4101 100644
>>>> --- a/lib/meson.build
>>>> +++ b/lib/meson.build
>>>> @@ -111,6 +111,7 @@ lib_sources = [
>>>>    	'igt_msm.c',
>>>>    	'igt_dsc.c',
>>>>    	'xe/xe_gt.c',
>>>> +	'xe/xe_eudebug.c',
>>>>    	'xe/xe_ioctl.c',
>>>>    	'xe/xe_mmio.c',
>>>>    	'xe/xe_query.c',
>>>
>>> As eudebug is quite a big feature I think it should be separated and
>>> hidden behind a feature flag (check meson_options.txt), let's say
>>> 'xe_eudebug', which would be disabled by default. This way you can
>>> develop it upstream even if the kernel side is not officially merged.
>>> I'm pragmatic and I see no reason to block a not-yet-accepted feature,
>>> especially as this would imo speed up development. A final step, once
>>> the kernel change is accepted and merged, would be to sync with the
>>> uapi and remove the local definitions.
>>>
>>> I look forward to the maintainers' comments on whether my approach is
>>> acceptable.
>>
>> I agree that it is a good idea. The only problem that arises is for
>> 'xe_exec_sip'. We add a dependency on eudebug to this test - any ideas on
>> how to approach this correctly? The only thing that comes to my mind is
>> conditional compilation with 'ifdef' statements, but it doesn't appear
>> to be pretty.
> 
> What about adding skips in the added tests if the kernel does not support eudebug?

If the kernel doesn't support eudebug, the tests already skip (see
'xe_eudebug_enable' in the test fixtures).

Thanks,
Christoph
> 
> This way you can have it without conditional compilation with
> ifdef/meson and also have it compile-time tested (if CI supports
> test-with for Xe kernels).
> 
> Regards,
> Kamil
> 
>>
>> Thanks,
>> Christoph
>>>
>>> --
>>> Zbigniew
>>>
>>>
>>>> diff --git a/lib/xe/xe_eudebug.c b/lib/xe/xe_eudebug.c
>>>> new file mode 100644
>>>> index 000000000..4eac87476
>>>> --- /dev/null
>>>> +++ b/lib/xe/xe_eudebug.c
>>>> @@ -0,0 +1,2192 @@
>>>> +// SPDX-License-Identifier: MIT
>>>> +/*
>>>> + * Copyright © 2023 Intel Corporation
>>>> + */
>>>> +
>>>> +#include <fcntl.h>
>>>> +#include <poll.h>
>>>> +#include <signal.h>
>>>> +#include <sys/select.h>
>>>> +#include <sys/stat.h>
>>>> +#include <sys/types.h>
>>>> +#include <sys/wait.h>
>>>> +
>>>> +#include "igt.h"
>>>> +#include "igt_sysfs.h"
>>>> +#include "intel_pat.h"
>>>> +#include "xe_eudebug.h"
>>>> +#include "xe_ioctl.h"
>>>> +
>>>> +struct event_trigger {
>>>> +	xe_eudebug_trigger_fn fn;
>>>> +	int type;
>>>> +	struct igt_list_head link;
>>>> +};
>>>> +
>>>> +struct seqno_list_entry {
>>>> +	struct igt_list_head link;
>>>> +	uint64_t seqno;
>>>> +};
>>>> +
>>>> +struct match_dto {
>>>> +	struct drm_xe_eudebug_event *target;
>>>> +	struct igt_list_head *seqno_list;
>>>> +	uint64_t client_handle;
>>>> +	uint32_t filter;
>>>> +
>>>> +	/* store latest 'EVENT_VM_BIND' seqno */
>>>> +	uint64_t *bind_seqno;
>>>> +	/* latest vm_bind_op seqno matching bind_seqno */
>>>> +	uint64_t *bind_op_seqno;
>>>> +};
>>>> +
>>>> +#define CLIENT_PID  1
>>>> +#define CLIENT_RUN  2
>>>> +#define CLIENT_FINI 3
>>>> +#define CLIENT_STOP 4
>>>> +#define CLIENT_STAGE 5
>>>> +#define DEBUGGER_STAGE 6
>>>> +
>>>> +#define DEBUGGER_WORKER_INACTIVE  0
>>>> +#define DEBUGGER_WORKER_ACTIVE  1
>>>> +#define DEBUGGER_WORKER_QUITTING 2
>>>> +
>>>> +static const char *type_to_str(unsigned int type)
>>>> +{
>>>> +	switch (type) {
>>>> +	case DRM_XE_EUDEBUG_EVENT_NONE:
>>>> +		return "none";
>>>> +	case DRM_XE_EUDEBUG_EVENT_READ:
>>>> +		return "read";
>>>> +	case DRM_XE_EUDEBUG_EVENT_OPEN:
>>>> +		return "client";
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM:
>>>> +		return "vm";
>>>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE:
>>>> +		return "exec_queue";
>>>> +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION:
>>>> +		return "attention";
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND:
>>>> +		return "vm_bind";
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP:
>>>> +		return "vm_bind_op";
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE:
>>>> +		return "vm_bind_ufence";
>>>> +	case DRM_XE_EUDEBUG_EVENT_METADATA:
>>>> +		return "metadata";
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA:
>>>> +		return "vm_bind_op_metadata";
>>>> +	}
>>>> +
>>>> +	return "UNKNOWN";
>>>> +}
>>>> +
>>>> +static const char *event_type_to_str(struct drm_xe_eudebug_event *e, char *buf)
>>>> +{
>>>> +	sprintf(buf, "%s(%d)", type_to_str(e->type), e->type);
>>>> +
>>>> +	return buf;
>>>> +}
>>>> +
>>>> +static const char *flags_to_str(unsigned int flags)
>>>> +{
>>>> +	if (flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
>>>> +		if (flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK)
>>>> +			return "create|ack";
>>>> +		else
>>>> +			return "create";
>>>> +	}
>>>> +	if (flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
>>>> +		return "destroy";
>>>> +
>>>> +	if (flags & DRM_XE_EUDEBUG_EVENT_STATE_CHANGE)
>>>> +		return "state-change";
>>>> +
>>>> +	igt_assert(!(flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK));
>>>> +
>>>> +	return "flags unknown";
>>>> +}
>>>> +
>>>> +static const char *event_members_to_str(struct drm_xe_eudebug_event *e, char *b)
>>>> +{
>>>> +	switch (e->type) {
>>>> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
>>>> +		struct drm_xe_eudebug_event_client *ec = (struct drm_xe_eudebug_event_client *)e;
>>>> +
>>>> +		sprintf(b, "handle=%llu", ec->client_handle);
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM: {
>>>> +		struct drm_xe_eudebug_event_vm *evm = (struct drm_xe_eudebug_event_vm *)e;
>>>> +
>>>> +		sprintf(b, "client_handle=%llu, handle=%llu",
>>>> +			evm->client_handle, evm->vm_handle);
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>>>> +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
>>>> +
>>>> +		sprintf(b, "client_handle=%llu, vm_handle=%llu, "
>>>> +			   "exec_queue_handle=%llu, engine_class=%d, exec_queue_width=%d",
>>>> +			ee->client_handle, ee->vm_handle,
>>>> +			ee->exec_queue_handle, ee->engine_class, ee->width);
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION: {
>>>> +		struct drm_xe_eudebug_event_eu_attention *ea = (void *)e;
>>>> +
>>>> +		sprintf(b, "client_handle=%llu, exec_queue_handle=%llu, "
>>>> +			   "lrc_handle=%llu, bitmask_size=%d",
>>>> +			ea->client_handle, ea->exec_queue_handle,
>>>> +			ea->lrc_handle, ea->bitmask_size);
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>>>> +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
>>>> +
>>>> +		sprintf(b, "client_handle=%llu, vm_handle=%llu, flags=0x%x, num_binds=%u",
>>>> +			evmb->client_handle, evmb->vm_handle, evmb->flags, evmb->num_binds);
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>>>> +		struct drm_xe_eudebug_event_vm_bind_op *op = (void *)e;
>>>> +
>>>> +		sprintf(b, "vm_bind_ref_seqno=%lld, addr=%016llx, range=%llu num_extensions=%llu",
>>>> +			op->vm_bind_ref_seqno, op->addr, op->range, op->num_extensions);
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
>>>> +		struct drm_xe_eudebug_event_vm_bind_ufence *f = (void *)e;
>>>> +
>>>> +		sprintf(b, "vm_bind_ref_seqno=%lld", f->vm_bind_ref_seqno);
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
>>>> +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
>>>> +
>>>> +		sprintf(b, "client_handle=%llu, metadata_handle=%llu, type=%llu, len=%llu",
>>>> +			em->client_handle, em->metadata_handle, em->type, em->len);
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>>>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *op = (void *)e;
>>>> +
>>>> +		sprintf(b, "vm_bind_op_ref_seqno=%lld, metadata_handle=%llu, metadata_cookie=%llu",
>>>> +			op->vm_bind_op_ref_seqno, op->metadata_handle, op->metadata_cookie);
>>>> +		break;
>>>> +	}
>>>> +	default:
>>>> +		strcpy(b, "<...>");
>>>> +	}
>>>> +
>>>> +	return b;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_event_to_str:
>>>> + * @e: pointer to event
>>>> + * @buf: target to write string representation of @e
>>>> + * @len: size of target buffer @buf
>>>> + *
>>>> + * Creates string representation for given event.
>>>> + *
>>>> + * Returns: the written input buffer pointed by @buf.
>>>> + */
>>>> +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len)
>>>> +{
>>>> +	char a[256];
>>>> +	char b[256];
>>>> +
>>>> +	snprintf(buf, len, "(%llu) %15s:%s: %s",
>>>> +		 e->seqno,
>>>> +		 event_type_to_str(e, a),
>>>> +		 flags_to_str(e->flags),
>>>> +		 event_members_to_str(e, b));
>>>> +
>>>> +	return buf;
>>>> +}
>>>> +
>>>> +static void catch_child_failure(void)
>>>> +{
>>>> +	pid_t pid;
>>>> +	int status;
>>>> +
>>>> +	pid = waitpid(-1, &status, WNOHANG);
>>>> +
>>>> +	if (pid == 0 || pid == -1)
>>>> +		return;
>>>> +
>>>> +	if (!WIFEXITED(status))
>>>> +		return;
>>>> +
>>>> +	igt_assert_f(WEXITSTATUS(status) == 0, "Client failed!\n");
>>>> +}
>>>> +
>>>> +static int safe_pipe_read(int pipe[2], void *buf, int nbytes, int timeout_ms)
>>>> +{
>>>> +	int ret;
>>>> +	int t = 0;
>>>> +	struct pollfd fd = {
>>>> +		.fd = pipe[0],
>>>> +		.events = POLLIN,
>>>> +		.revents = 0
>>>> +	};
>>>> +
>>>> +	/* When the child fails we may get stuck forever. Check whether
>>>> +	 * the child process ended with an error.
>>>> +	 */
>>>> +	do {
>>>> +		const int interval_ms = 1000;
>>>> +
>>>> +		ret = poll(&fd, 1, interval_ms);
>>>> +
>>>> +		if (!ret) {
>>>> +			catch_child_failure();
>>>> +			t += interval_ms;
>>>> +		}
>>>> +	} while (!ret && t < timeout_ms);
>>>> +
>>>> +	if (ret > 0)
>>>> +		return read(pipe[0], buf, nbytes);
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +static uint64_t pipe_read(int pipe[2], int timeout_ms)
>>>> +{
>>>> +	uint64_t in;
>>>> +	uint64_t ret;
>>>> +
>>>> +	ret = safe_pipe_read(pipe, &in, sizeof(in), timeout_ms);
>>>> +	igt_assert(ret == sizeof(in));
>>>> +
>>>> +	return in;
>>>> +}
>>>> +
>>>> +static void pipe_signal(int pipe[2], uint64_t token)
>>>> +{
>>>> +	igt_assert(write(pipe[1], &token, sizeof(token)) == sizeof(token));
>>>> +}
>>>> +
>>>> +static void pipe_close(int pipe[2])
>>>> +{
>>>> +	if (pipe[0] != -1)
>>>> +		close(pipe[0]);
>>>> +
>>>> +	if (pipe[1] != -1)
>>>> +		close(pipe[1]);
>>>> +}
>>>> +
>>>> +static uint64_t __wait_token(int p[2], const uint64_t token, int timeout_ms)
>>>> +{
>>>> +	uint64_t in;
>>>> +
>>>> +	in = pipe_read(p, timeout_ms);
>>>> +
>>>> +	igt_assert_eq(in, token);
>>>> +
>>>> +	return pipe_read(p, timeout_ms);
>>>> +}
>>>> +
>>>> +static uint64_t client_wait_token(struct xe_eudebug_client *c,
>>>> +				 const uint64_t token)
>>>> +{
>>>> +	return __wait_token(c->p_in, token, c->timeout_ms);
>>>> +}
>>>> +
>>>> +static uint64_t wait_from_client(struct xe_eudebug_client *c,
>>>> +				 const uint64_t token)
>>>> +{
>>>> +	return __wait_token(c->p_out, token, c->timeout_ms);
>>>> +}
>>>> +
>>>> +static void token_signal(int p[2], const uint64_t token, const uint64_t value)
>>>> +{
>>>> +	pipe_signal(p, token);
>>>> +	pipe_signal(p, value);
>>>> +}
>>>> +
>>>> +static void client_signal(struct xe_eudebug_client *c,
>>>> +			  const uint64_t token,
>>>> +			  const uint64_t value)
>>>> +{
>>>> +	token_signal(c->p_out, token, value);
>>>> +}
>>>> +
>>>> +static int __xe_eudebug_connect(int fd, pid_t pid, uint32_t flags, uint64_t events)
>>>> +{
>>>> +	struct drm_xe_eudebug_connect param = {
>>>> +		.pid = pid,
>>>> +		.flags = flags,
>>>> +	};
>>>> +	int debugfd;
>>>> +
>>>> +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
>>>> +
>>>> +	if (debugfd < 0)
>>>> +		return -errno;
>>>> +
>>>> +	return debugfd;
>>>> +}
>>>> +
>>>> +static void event_log_write_to_fd(struct xe_eudebug_event_log *l, int fd)
>>>> +{
>>>> +	igt_assert_eq(write(fd, &l->head, sizeof(l->head)),
>>>> +		      sizeof(l->head));
>>>> +
>>>> +	igt_assert_eq(write(fd, l->log, l->head), l->head);
>>>> +}
>>>> +
>>>> +static void read_all(int fd, void *buf, size_t nbytes)
>>>> +{
>>>> +	ssize_t remaining_size = nbytes;
>>>> +	ssize_t current_size = 0;
>>>> +	ssize_t read_size = 0;
>>>> +
>>>> +	do {
>>>> +		read_size = read(fd, buf + current_size, remaining_size);
>>>> +		igt_assert_f(read_size >= 0, "read failed: %s\n", strerror(errno));
>>>> +
>>>> +		current_size += read_size;
>>>> +		remaining_size -= read_size;
>>>> +	} while (remaining_size > 0 && read_size > 0);
>>>> +
>>>> +	igt_assert_eq(current_size, nbytes);
>>>> +}
>>>> +
>>>> +static void event_log_read_from_fd(struct xe_eudebug_event_log *l, int fd)
>>>> +{
>>>> +	read_all(fd, &l->head, sizeof(l->head));
>>>> +	igt_assert_lt(l->head, l->max_size);
>>>> +
>>>> +	read_all(fd, l->log, l->head);
>>>> +}
>>>> +
>>>> +typedef int (*cmp_fn_t)(struct drm_xe_eudebug_event *, void *);
>>>> +
>>>> +static struct drm_xe_eudebug_event *
>>>> +event_cmp(struct xe_eudebug_event_log *l,
>>>> +	  struct drm_xe_eudebug_event *current,
>>>> +	  cmp_fn_t match,
>>>> +	  void *data)
>>>> +{
>>>> +	struct drm_xe_eudebug_event *e = current;
>>>> +
>>>> +	xe_eudebug_for_each_event(e, l) {
>>>> +		if (match(e, data))
>>>> +			return e;
>>>> +	}
>>>> +
>>>> +	return NULL;
>>>> +}
>>>> +
>>>> +static int match_type_and_flags(struct drm_xe_eudebug_event *a, void *data)
>>>> +{
>>>> +	struct drm_xe_eudebug_event *b = data;
>>>> +
>>>> +	if (a->type == b->type &&
>>>> +	    a->flags == b->flags)
>>>> +		return 1;
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +static int match_fields(struct drm_xe_eudebug_event *a, void *data)
>>>> +{
>>>> +	struct drm_xe_eudebug_event *b = data;
>>>> +	int ret = 0;
>>>> +
>>>> +	ret = match_type_and_flags(a, data);
>>>> +	if (!ret)
>>>> +		return ret;
>>>> +
>>>> +	ret = 0;
>>>> +
>>>> +	switch (a->type) {
>>>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>>>> +		struct drm_xe_eudebug_event_exec_queue *ae = (void *)a;
>>>> +		struct drm_xe_eudebug_event_exec_queue *be = (void *)b;
>>>> +
>>>> +		if (ae->engine_class == be->engine_class && ae->width == be->width)
>>>> +			ret = 1;
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>>>> +		struct drm_xe_eudebug_event_vm_bind *ea = (void *)a;
>>>> +		struct drm_xe_eudebug_event_vm_bind *eb = (void *)b;
>>>> +
>>>> +		if (ea->num_binds == eb->num_binds)
>>>> +			ret = 1;
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>>>> +		struct drm_xe_eudebug_event_vm_bind_op *ea = (void *)a;
>>>> +		struct drm_xe_eudebug_event_vm_bind_op *eb = (void *)b;
>>>> +
>>>> +		if (ea->addr == eb->addr && ea->range == eb->range &&
>>>> +		    ea->num_extensions == eb->num_extensions)
>>>> +			ret = 1;
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>>>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *ea = (void *)a;
>>>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eb = (void *)b;
>>>> +
>>>> +		if (ea->metadata_handle == eb->metadata_handle &&
>>>> +		    ea->metadata_cookie == eb->metadata_cookie)
>>>> +			ret = 1;
>>>> +		break;
>>>> +	}
>>>> +
>>>> +	default:
>>>> +		ret = 1;
>>>> +		break;
>>>> +	}
>>>> +
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +static int match_client_handle(struct drm_xe_eudebug_event *e, void *data)
>>>> +{
>>>> +	struct match_dto *md = (void *)data;
>>>> +	uint64_t *bind_seqno = md->bind_seqno;
>>>> +	uint64_t *bind_op_seqno = md->bind_op_seqno;
>>>> +	uint64_t h = md->client_handle;
>>>> +
>>>> +	if (XE_EUDEBUG_EVENT_IS_FILTERED(e->type, md->filter))
>>>> +		return 0;
>>>> +
>>>> +	switch (e->type) {
>>>> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
>>>> +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
>>>> +
>>>> +		if (client->client_handle == h)
>>>> +			return 1;
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM: {
>>>> +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
>>>> +
>>>> +		if (vm->client_handle == h)
>>>> +			return 1;
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>>>> +		struct drm_xe_eudebug_event_exec_queue *ee = (struct drm_xe_eudebug_event_exec_queue *)e;
>>>> +
>>>> +		if (ee->client_handle == h)
>>>> +			return 1;
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>>>> +		struct drm_xe_eudebug_event_vm_bind *evmb = (struct drm_xe_eudebug_event_vm_bind *)e;
>>>> +
>>>> +		if (evmb->client_handle == h) {
>>>> +			*bind_seqno = evmb->base.seqno;
>>>> +			return 1;
>>>> +		}
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>>>> +		struct drm_xe_eudebug_event_vm_bind_op *eo = (struct drm_xe_eudebug_event_vm_bind_op *)e;
>>>> +
>>>> +		if (eo->vm_bind_ref_seqno == *bind_seqno) {
>>>> +			*bind_op_seqno = eo->base.seqno;
>>>> +			return 1;
>>>> +		}
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
>>>> +		struct drm_xe_eudebug_event_vm_bind_ufence *ef  = (struct drm_xe_eudebug_event_vm_bind_ufence *)e;
>>>> +
>>>> +		if (ef->vm_bind_ref_seqno == *bind_seqno)
>>>> +			return 1;
>>>> +
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
>>>> +		struct drm_xe_eudebug_event_metadata *em = (struct drm_xe_eudebug_event_metadata *)e;
>>>> +
>>>> +		if (em->client_handle == h)
>>>> +			return 1;
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>>>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eo = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)e;
>>>> +
>>>> +		if (eo->vm_bind_op_ref_seqno == *bind_op_seqno)
>>>> +			return 1;
>>>> +		break;
>>>> +	}
>>>> +	default:
>>>> +		break;
>>>> +	}
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +static int match_opposite_resource(struct drm_xe_eudebug_event *e, void *data)
>>>> +{
>>>> +	struct drm_xe_eudebug_event *d = (void *)data;
>>>> +	int ret;
>>>> +
>>>> +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
>>>> +	d->flags &= ~(DRM_XE_EUDEBUG_EVENT_NEED_ACK);
>>>> +	ret = match_type_and_flags(e, data);
>>>> +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
>>>> +
>>>> +	if (!ret)
>>>> +		return 0;
>>>> +
>>>> +	switch (e->type) {
>>>> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
>>>> +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
>>>> +		struct drm_xe_eudebug_event_client *filter = (struct drm_xe_eudebug_event_client *)data;
>>>> +
>>>> +		if (client->client_handle == filter->client_handle)
>>>> +			return 1;
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM: {
>>>> +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
>>>> +		struct drm_xe_eudebug_event_vm *filter = (struct drm_xe_eudebug_event_vm *)data;
>>>> +
>>>> +		if (vm->vm_handle == filter->vm_handle)
>>>> +			return 1;
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>>>> +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
>>>> +		struct drm_xe_eudebug_event_exec_queue *filter = (struct drm_xe_eudebug_event_exec_queue *)data;
>>>> +
>>>> +		if (ee->exec_queue_handle == filter->exec_queue_handle)
>>>> +			return 1;
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>>>> +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
>>>> +		struct drm_xe_eudebug_event_vm_bind *filter = (struct drm_xe_eudebug_event_vm_bind *)data;
>>>> +
>>>> +		if (evmb->vm_handle == filter->vm_handle &&
>>>> +		    evmb->num_binds == filter->num_binds)
>>>> +			return 1;
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>>>> +		struct drm_xe_eudebug_event_vm_bind_op *avmb = (void *)e;
>>>> +		struct drm_xe_eudebug_event_vm_bind_op *filter = (struct drm_xe_eudebug_event_vm_bind_op *)data;
>>>> +
>>>> +		if (avmb->addr == filter->addr &&
>>>> +		    avmb->range == filter->range)
>>>> +			return 1;
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
>>>> +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
>>>> +		struct drm_xe_eudebug_event_metadata *filter = (struct drm_xe_eudebug_event_metadata *)data;
>>>> +
>>>> +		if (em->metadata_handle == filter->metadata_handle)
>>>> +			return 1;
>>>> +		break;
>>>> +	}
>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>>>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *avmb = (void *)e;
>>>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *filter = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)data;
>>>> +
>>>> +		if (avmb->metadata_handle == filter->metadata_handle &&
>>>> +		    avmb->metadata_cookie == filter->metadata_cookie)
>>>> +			return 1;
>>>> +		break;
>>>> +	}
>>>> +
>>>> +	default:
>>>> +		break;
>>>> +	}
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +static int match_full(struct drm_xe_eudebug_event *e, void *data)
>>>> +{
>>>> +	struct seqno_list_entry *sl;
>>>> +	struct match_dto *md = (void *)data;
>>>> +	int ret = 0;
>>>> +
>>>> +	ret = match_client_handle(e, md);
>>>> +	if (!ret)
>>>> +		return 0;
>>>> +
>>>> +	ret = match_fields(e, md->target);
>>>> +	if (!ret)
>>>> +		return 0;
>>>> +
>>>> +	igt_list_for_each_entry(sl, md->seqno_list, link) {
>>>> +		if (sl->seqno == e->seqno)
>>>> +			return 0;
>>>> +	}
>>>> +
>>>> +	return 1;
>>>> +}
>>>> +
>>>> +static struct drm_xe_eudebug_event *
>>>> +event_type_match(struct xe_eudebug_event_log *l,
>>>> +		 struct drm_xe_eudebug_event *target,
>>>> +		 struct drm_xe_eudebug_event *current)
>>>> +{
>>>> +	return event_cmp(l, current, match_type_and_flags, target);
>>>> +}
>>>> +
>>>> +static struct drm_xe_eudebug_event *
>>>> +client_match(struct xe_eudebug_event_log *l,
>>>> +	     uint64_t client_handle,
>>>> +	     struct drm_xe_eudebug_event *current,
>>>> +	     uint32_t filter,
>>>> +	     uint64_t *bind_seqno,
>>>> +	     uint64_t *bind_op_seqno)
>>>> +{
>>>> +	struct match_dto md = {
>>>> +		.client_handle = client_handle,
>>>> +		.filter = filter,
>>>> +		.bind_seqno = bind_seqno,
>>>> +		.bind_op_seqno = bind_op_seqno,
>>>> +	};
>>>> +
>>>> +	return event_cmp(l, current, match_client_handle, &md);
>>>> +}
>>>> +
>>>> +static struct drm_xe_eudebug_event *
>>>> +opposite_event_match(struct xe_eudebug_event_log *l,
>>>> +		    struct drm_xe_eudebug_event *target,
>>>> +		    struct drm_xe_eudebug_event *current)
>>>> +{
>>>> +	return event_cmp(l, current, match_opposite_resource, target);
>>>> +}
>>>> +
>>>> +static struct drm_xe_eudebug_event *
>>>> +event_match(struct xe_eudebug_event_log *l,
>>>> +	    struct drm_xe_eudebug_event *target,
>>>> +	    uint64_t client_handle,
>>>> +	    struct igt_list_head *seqno_list,
>>>> +	    uint64_t *bind_seqno,
>>>> +	    uint64_t *bind_op_seqno)
>>>> +{
>>>> +	struct match_dto md = {
>>>> +		.target = target,
>>>> +		.client_handle = client_handle,
>>>> +		.seqno_list = seqno_list,
>>>> +		.bind_seqno = bind_seqno,
>>>> +		.bind_op_seqno = bind_op_seqno,
>>>> +	};
>>>> +
>>>> +	return event_cmp(l, NULL, match_full, &md);
>>>> +}
>>>> +
>>>> +static void compare_client(struct xe_eudebug_event_log *c, struct drm_xe_eudebug_event *_ce,
>>>> +			   struct xe_eudebug_event_log *d, struct drm_xe_eudebug_event *_de,
>>>> +			   uint32_t filter)
>>>> +{
>>>> +	struct drm_xe_eudebug_event_client *ce = (void *)_ce;
>>>> +	struct drm_xe_eudebug_event_client *de = (void *)_de;
>>>> +	uint64_t cbs = 0, dbs = 0, cbso = 0, dbso = 0;
>>>> +
>>>> +	struct igt_list_head matched_seqno_list;
>>>> +	struct drm_xe_eudebug_event *hc, *hd;
>>>> +	struct seqno_list_entry *entry, *tmp;
>>>> +
>>>> +	igt_assert(ce);
>>>> +	igt_assert(de);
>>>> +
>>>> +	igt_debug("client: %llu -> %llu\n", ce->client_handle, de->client_handle);
>>>> +
>>>> +	hc = NULL;
>>>> +	hd = NULL;
>>>> +	IGT_INIT_LIST_HEAD(&matched_seqno_list);
>>>> +
>>>> +	do {
>>>> +		hc = client_match(c, ce->client_handle, hc, filter, &cbs, &cbso);
>>>> +		if (!hc)
>>>> +			break;
>>>> +
>>>> +		hd = event_match(d, hc, de->client_handle, &matched_seqno_list, &dbs, &dbso);
>>>> +
>>>> +		igt_assert_f(hd, "%s (%llu): no matching event type %u found for client %llu\n",
>>>> +			     c->name,
>>>> +			     hc->seqno,
>>>> +			     hc->type,
>>>> +			     ce->client_handle);
>>>> +
>>>> +		igt_debug("comparing %s %llu vs %s %llu\n",
>>>> +			  c->name, hc->seqno, d->name, hd->seqno);
>>>> +
>>>> +		/*
>>>> +		 * Store the seqno of the event that was matched above,
>>>> +		 * inside 'matched_seqno_list', to avoid it getting matched
>>>> +		 * by subsequent 'event_match' calls.
>>>> +		 */
>>>> +		entry = malloc(sizeof(*entry));
>>>> +		igt_assert(entry);
>>>> +		entry->seqno = hd->seqno;
>>>> +		igt_list_add(&entry->link, &matched_seqno_list);
>>>> +	} while (hc);
>>>> +
>>>> +	igt_list_for_each_entry_safe(entry, tmp, &matched_seqno_list, link)
>>>> +		free(entry);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_event_log_find_seqno:
>>>> + * @l: event log pointer
>>>> + * @seqno: seqno of event to be found
>>>> + *
>>>> + * Finds the event with given seqno in the event log.
>>>> + *
>>>> + * Returns: pointer to the event with given seqno within @l or NULL if
>>>> + * seqno is not present.
>>>> + */
>>>> +struct drm_xe_eudebug_event *
>>>> +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno)
>>>> +{
>>>> +	struct drm_xe_eudebug_event *e = NULL, *found = NULL;
>>>> +
>>>> +	igt_assert_neq(seqno, 0);
>>>> +	/*
>>>> +	 * Try to catch if seqno is corrupted and prevent too long tests,
>>>> +	 * as our post processing of events is not optimized.
>>>> +	 */
>>>> +	igt_assert_lt(seqno, 10 * 1000 * 1000);
>>>> +
>>>> +	xe_eudebug_for_each_event(e, l) {
>>>> +		if (e->seqno == seqno) {
>>>> +			if (found) {
>>>> +				igt_warn("Found multiple events with the same seqno %lu\n", seqno);
>>>> +				xe_eudebug_event_log_print(l, false);
>>>> +				igt_assert(!found);
>>>> +			}
>>>> +			found = e;
>>>> +		}
>>>> +	}
>>>> +
>>>> +	return found;
>>>> +}
>>>> +
>>>> +static void event_log_sort(struct xe_eudebug_event_log *l)
>>>> +{
>>>> +	struct xe_eudebug_event_log *tmp;
>>>> +	struct drm_xe_eudebug_event *e = NULL;
>>>> +	uint64_t first_seqno = 0;
>>>> +	uint64_t last_seqno = 0;
>>>> +	uint64_t events = 0, added = 0;
>>>> +	uint64_t i;
>>>> +
>>>> +	xe_eudebug_for_each_event(e, l) {
>>>> +		if (e->seqno > last_seqno)
>>>> +			last_seqno = e->seqno;
>>>> +
>>>> +		if (e->seqno < first_seqno)
>>>> +			first_seqno = e->seqno;
>>>> +
>>>> +		events++;
>>>> +	}
>>>> +
>>>> +	tmp = xe_eudebug_event_log_create("tmp", l->max_size);
>>>> +
>>>> +	for (i = 1; i <= last_seqno; i++) {
>>>> +		e = xe_eudebug_event_log_find_seqno(l, i);
>>>> +		if (e) {
>>>> +			xe_eudebug_event_log_write(tmp, e);
>>>> +			added++;
>>>> +		}
>>>> +	}
>>>> +
>>>> +	igt_assert_eq(events, added);
>>>> +	igt_assert_eq(tmp->head, l->head);
>>>> +
>>>> +	memcpy(l->log, tmp->log, tmp->head);
>>>> +
>>>> +	xe_eudebug_event_log_destroy(tmp);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_connect:
>>>> + * @fd: Xe file descriptor
>>>> + * @pid: client PID
>>>> + * @flags: connection flags
>>>> + *
>>>> + * Opens the xe eu debugger connection to the process described by @pid
>>>> + *
>>>> + * Returns: 0 if the debugger was successfully attached, -errno otherwise.
>>>> + */
>>>> +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags)
>>>> +{
>>>> +	int ret;
>>>> +	uint64_t events = 0; /* events filtering not supported yet! */
>>>> +
>>>> +	ret = __xe_eudebug_connect(fd, pid, flags, events);
>>>> +
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_event_log_create:
>>>> + * @name: event log identifier
>>>> + * @max_size: maximum size of created log
>>>> + *
>>>> + * Creates an EU debugger event log with a capacity of @max_size bytes.
>>>> + *
>>>> + * Returns: pointer to just created log
>>>> + */
>>>> +#define MAX_EVENT_LOG_SIZE (32 * 1024 * 1024)
>>>> +struct xe_eudebug_event_log *xe_eudebug_event_log_create(const char *name, unsigned int max_size)
>>>> +{
>>>> +	struct xe_eudebug_event_log *l;
>>>> +
>>>> +	l = calloc(1, sizeof(*l));
>>>> +	igt_assert(l);
>>>> +	l->log = calloc(1, max_size);
>>>> +	igt_assert(l->log);
>>>> +	l->max_size = max_size;
>>>> +	strncpy(l->name, name, sizeof(l->name) - 1);
>>>> +	pthread_mutex_init(&l->lock, NULL);
>>>> +
>>>> +	return l;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_event_log_destroy:
>>>> + * @l: event log pointer
>>>> + *
>>>> + * Frees given event log @l.
>>>> + */
>>>> +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l)
>>>> +{
>>>> +	pthread_mutex_destroy(&l->lock);
>>>> +	free(l->log);
>>>> +	free(l);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_event_log_write:
>>>> + * @l: event log pointer
>>>> + * @e: event to be written to event log
>>>> + *
>>>> + * Writes event @e to the event log, thread-safe.
>>>> + */
>>>> +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e)
>>>> +{
>>>> +	igt_assert(e->seqno);
>>>> +	/*
>>>> +	 * Try to catch if seqno is corrupted and prevent too long tests,
>>>> +	 * as our post processing of events is not optimized.
>>>> +	 */
>>>> +	igt_assert_lt(e->seqno, 10 * 1000 * 1000);
>>>> +
>>>> +	pthread_mutex_lock(&l->lock);
>>>> +	igt_assert_lt(l->head + e->len, l->max_size);
>>>> +	memcpy(l->log + l->head, e, e->len);
>>>> +	l->head += e->len;
>>>> +
>>>> +#ifdef DEBUG_LOG
>>>> +	igt_info("%s: wrote %u bytes to eventlog, free %u bytes\n",
>>>> +		 l->name, e->len, l->max_size - l->head);
>>>> +#endif
>>>> +	pthread_mutex_unlock(&l->lock);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_event_log_print:
>>>> + * @l: event log pointer
>>>> + * @debug: when true function uses igt_debug instead of igt_info.
>>>> + *
>>>> + * Prints given event log.
>>>> + */
>>>> +void
>>>> +xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug)
>>>> +{
>>>> +	struct drm_xe_eudebug_event *e = NULL;
>>>> +	int level = debug ? IGT_LOG_DEBUG : IGT_LOG_INFO;
>>>> +	char str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
>>>> +
>>>> +	igt_log(IGT_LOG_DOMAIN, level,
>>>> +		"event log '%s' (%u bytes):\n", l->name, l->head);
>>>> +
>>>> +	xe_eudebug_for_each_event(e, l) {
>>>> +		xe_eudebug_event_to_str(e, str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
>>>> +		igt_log(IGT_LOG_DOMAIN, level, "%s\n", str);
>>>> +	}
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_event_log_compare:
>>>> + * @a: event log pointer
>>>> + * @b: event log pointer
>>>> + * @filter: mask that represents events to be skipped during comparison, useful
>>>> + * for events like 'VM_BIND' since they can be asymmetric. Note that
>>>> + * 'DRM_XE_EUDEBUG_EVENT_OPEN' will always be matched.
>>>> + *
>>>> + * Compares and asserts event logs @a, @b if the event
>>>> + * sequence matches.
>>>> + */
>>>> +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *a, struct xe_eudebug_event_log *b,
>>>> +				  uint32_t filter)
>>>> +{
>>>> +	struct drm_xe_eudebug_event *ae = NULL;
>>>> +	struct drm_xe_eudebug_event *be = NULL;
>>>> +
>>>> +	xe_eudebug_for_each_event(ae, a) {
>>>> +		if (ae->type == DRM_XE_EUDEBUG_EVENT_OPEN &&
>>>> +		    ae->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
>>>> +			be = event_type_match(b, ae, be);
>>>> +
>>>> +			compare_client(a, ae, b, be, filter);
>>>> +			compare_client(b, be, a, ae, filter);
>>>> +		}
>>>> +	}
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_event_log_match_opposite:
>>>> + * @l: event log pointer
>>>> + * @filter: mask that represents events to be skipped during comparison, useful
>>>> + * for events like 'VM_BIND' since they can be asymmetric
>>>> + *
>>>> + * Matches and asserts content of all opposite events (create vs destroy).
>>>> + */
>>>> +void
>>>> +xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter)
>>>> +{
>>>> +	struct drm_xe_eudebug_event *ce = NULL;
>>>> +	struct drm_xe_eudebug_event *de = NULL;
>>>> +
>>>> +	xe_eudebug_for_each_event(ce, l) {
>>>> +		if (ce->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
>>>> +			uint8_t offset = sizeof(struct drm_xe_eudebug_event);
>>>> +			int opposite_matching;
>>>> +
>>>> +			if (XE_EUDEBUG_EVENT_IS_FILTERED(ce->type, filter))
>>>> +				continue;
>>>> +
>>>> +			/* No opposite matching for binds */
>>>> +			if ((ce->type >= DRM_XE_EUDEBUG_EVENT_VM_BIND &&
>>>> +			     ce->type <= DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE) ||
>>>> +			    ce->type == DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA)
>>>> +				continue;
>>>> +
>>>> +			de = opposite_event_match(l, ce, ce);
>>>> +
>>>> +			igt_assert_f(de, "no opposite event of type %u found\n", ce->type);
>>>> +
>>>> +			igt_assert_eq(ce->len, de->len);
>>>> +			opposite_matching = memcmp((uint8_t *)de + offset,
>>>> +						   (uint8_t *)ce + offset,
>>>> +						   de->len - offset) == 0;
>>>> +
>>>> +			igt_assert_f(opposite_matching,
>>>> +				     "%s: create|destroy event not "
>>>> +				     "matching (%llu) vs (%llu)\n",
>>>> +				     l->name, de->seqno, ce->seqno);
>>>> +		}
>>>> +	}
>>>> +}
>>>> +
>>>> +static void debugger_run_triggers(struct xe_eudebug_debugger *d,
>>>> +				  struct drm_xe_eudebug_event *e)
>>>> +{
>>>> +	struct event_trigger *t;
>>>> +
>>>> +	igt_list_for_each_entry(t, &d->triggers, link) {
>>>> +		if (e->type == t->type)
>>>> +			t->fn(d, e);
>>>> +	}
>>>> +}
>>>> +
>>>> +#define MAX_EVENT_SIZE (32 * 1024)
>>>> +static int
>>>> +xe_eudebug_read_event(int fd, struct drm_xe_eudebug_event *event)
>>>> +{
>>>> +	int ret;
>>>> +
>>>> +	event->type = DRM_XE_EUDEBUG_EVENT_READ;
>>>> +	event->flags = 0;
>>>> +	event->len = MAX_EVENT_SIZE;
>>>> +
>>>> +	ret = igt_ioctl(fd, DRM_XE_EUDEBUG_IOCTL_READ_EVENT, event);
>>>> +	if (ret < 0)
>>>> +		return -errno;
>>>> +
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +static void *debugger_worker_loop(void *data)
>>>> +{
>>>> +	uint8_t buf[MAX_EVENT_SIZE];
>>>> +	struct drm_xe_eudebug_event *e = (void *)buf;
>>>> +	struct xe_eudebug_debugger *d = data;
>>>> +	struct pollfd p = {
>>>> +		.events = POLLIN,
>>>> +		.revents = 0,
>>>> +	};
>>>> +	int timeout_ms = 100, ret;
>>>> +
>>>> +	igt_assert(d->master_fd >= 0);
>>>> +
>>>> +	do {
>>>> +		p.fd = d->fd;
>>>> +		ret = poll(&p, 1, timeout_ms);
>>>> +
>>>> +		if (ret == -1) {
>>>> +			igt_info("poll failed with errno %d\n", errno);
>>>> +			break;
>>>> +		}
>>>> +
>>>> +		if (ret == 1 && (p.revents & POLLIN)) {
>>>> +			int err = xe_eudebug_read_event(d->fd, e);
>>>> +
>>>> +			if (!err) {
>>>> +				++d->event_count;
>>>> +
>>>> +				xe_eudebug_event_log_write(d->log, e);
>>>> +				debugger_run_triggers(d, e);
>>>> +			} else {
>>>> +				igt_info("xe_eudebug_read_event returned %d\n", err);
>>>> +			}
>>>> +		}
>>>> +	} while ((ret && READ_ONCE(d->worker_state) == DEBUGGER_WORKER_QUITTING) ||
>>>> +		 READ_ONCE(d->worker_state) == DEBUGGER_WORKER_ACTIVE);
>>>> +
>>>> +	d->worker_state = DEBUGGER_WORKER_INACTIVE;
>>>> +	return NULL;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_debugger_available:
>>>> + * @fd: Xe file descriptor
>>>> + *
>>>> + * Returns: true if the debugger connection is available, false otherwise.
>>>> + */
>>>> +bool xe_eudebug_debugger_available(int fd)
>>>> +{
>>>> +	struct drm_xe_eudebug_connect param = { .pid = getpid() };
>>>> +	int debugfd;
>>>> +
>>>> +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
>>>> +	if (debugfd >= 0)
>>>> +		close(debugfd);
>>>> +
>>>> +	return debugfd >= 0;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_debugger_create:
>>>> + * @master_fd: xe client used to open the debugger connection
>>>> + * @flags: flags stored in a debugger structure, can be used at will
>>>> + * of the caller, i.e. to be used inside triggers.
>>>> + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
>>>> + * can be shared between client and debugger. Can be NULL.
>>>> + *
>>>> + * Returns: newly created xe_eudebug_debugger structure with its
>>>> + * event log initialized. Note that to open the connection
>>>> + * you need to call @xe_eudebug_debugger_attach.
>>>> + */
>>>> +struct xe_eudebug_debugger *
>>>> +xe_eudebug_debugger_create(int master_fd, uint64_t flags, void *data)
>>>> +{
>>>> +	struct xe_eudebug_debugger *d;
>>>> +
>>>> +	d = calloc(1, sizeof(*d));
>>>> +	igt_assert(d);
>>>> +	d->flags = flags;
>>>> +	IGT_INIT_LIST_HEAD(&d->triggers);
>>>> +	d->log = xe_eudebug_event_log_create("debugger", MAX_EVENT_LOG_SIZE);
>>>> +	d->fd = -1;
>>>> +	d->master_fd = master_fd;
>>>> +	d->ptr = data;
>>>> +
>>>> +	return d;
>>>> +}
>>>> +
>>>> +static void debugger_destroy_triggers(struct xe_eudebug_debugger *d)
>>>> +{
>>>> +	struct event_trigger *t, *tmp;
>>>> +
>>>> +	igt_list_for_each_entry_safe(t, tmp, &d->triggers, link)
>>>> +		free(t);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_debugger_destroy:
>>>> + * @d: pointer to the debugger
>>>> + *
>>>> + * Frees xe_eudebug_debugger structure pointed by @d. If the debugger
>>>> + * connection was still opened it terminates it.
>>>> + */
>>>> +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d)
>>>> +{
>>>> +	if (d->worker_state)
>>>> +		xe_eudebug_debugger_stop_worker(d, 1);
>>>> +
>>>> +	if (d->target_pid)
>>>> +		xe_eudebug_debugger_dettach(d);
>>>> +
>>>> +	xe_eudebug_event_log_destroy(d->log);
>>>> +	debugger_destroy_triggers(d);
>>>> +	free(d);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_debugger_attach:
>>>> + * @d: pointer to the debugger
>>>> + * @c: pointer to the client
>>>> + *
>>>> + * Opens the xe eu debugger connection to the process described by @c (c->pid)
>>>> + *
>>>> + * Returns: 0 if the debugger was successfully attached, -errno otherwise.
>>>> + */
>>>> +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d,
>>>> +			       struct xe_eudebug_client *c)
>>>> +{
>>>> +	int ret;
>>>> +
>>>> +	igt_assert_eq(d->fd, -1);
>>>> +	igt_assert_neq(c->pid, 0);
>>>> +	ret = xe_eudebug_connect(d->master_fd, c->pid, 0);
>>>> +
>>>> +	if (ret < 0)
>>>> +		return ret;
>>>> +
>>>> +	d->fd = ret;
>>>> +	d->target_pid = c->pid;
>>>> +	d->p_client[0] = c->p_in[0];
>>>> +	d->p_client[1] = c->p_in[1];
>>>> +
>>>> +	igt_debug("debugger connected to %lu\n", d->target_pid);
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_debugger_dettach:
>>>> + * @d: pointer to the debugger
>>>> + *
>>>> + * Closes a previously opened xe EU debugger connection. Asserts that
>>>> + * the debugger has an active session.
>>>> + */
>>>> +void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d)
>>>> +{
>>>> +	igt_assert(d->target_pid);
>>>> +	close(d->fd);
>>>> +	d->target_pid = 0;
>>>> +	d->fd = -1;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_debugger_add_trigger:
>>>> + * @d: pointer to the debugger
>>>> + * @type: the type of the event which activates the trigger
>>>> + * @fn: function to be called when an event of @type is read by the debugger.
>>>> + *
>>>> + * Adds function @fn to the list of triggers activated when an event of
>>>> + * @type has been read by the worker.
>>>> + * Note: Triggers are activated by the worker.
>>>> + */
>>>> +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d,
>>>> +				     int type, xe_eudebug_trigger_fn fn)
>>>> +{
>>>> +	struct event_trigger *t;
>>>> +
>>>> +	t = calloc(1, sizeof(*t));
>>>> +	igt_assert(t);
>>>> +	IGT_INIT_LIST_HEAD(&t->link);
>>>> +	t->type = type;
>>>> +	t->fn = fn;
>>>> +
>>>> +	igt_list_add_tail(&t->link, &d->triggers);
>>>> +	igt_debug("added trigger %p\n", t);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_debugger_start_worker:
>>>> + * @d: pointer to the debugger
>>>> + *
>>>> + * Starts the debugger worker. The worker is responsible for reading all
>>>> + * incoming events from the debugger, putting them into the debugger log and
>>>> + * executing the appropriate event triggers. Note that using the debugger's
>>>> + * event log while the worker is running is not safe.
>>>> + */
>>>> +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d)
>>>> +{
>>>> +	int ret;
>>>> +
>>>> +	d->worker_state = DEBUGGER_WORKER_ACTIVE;
>>>> +	ret = pthread_create(&d->worker_thread, NULL, &debugger_worker_loop, d);
>>>> +
>>>> +	igt_assert_f(ret == 0, "Debugger worker thread creation failed!\n");
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_debugger_stop_worker:
>>>> + * @d: pointer to the debugger
>>>> + *
>>>> + * Stops the debugger worker. Event log is sorted by seqno after closure.
>>>> + */
>>>> +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d,
>>>> +				     int timeout_s)
>>>> +{
>>>> +	struct timespec t = {};
>>>> +	int ret;
>>>> +
>>>> +	igt_assert(d->worker_state);
>>>> +
>>>> +	d->worker_state = DEBUGGER_WORKER_QUITTING; /* First time be polite. */
>>>> +	igt_assert_eq(clock_gettime(CLOCK_REALTIME, &t), 0);
>>>> +	t.tv_sec += timeout_s;
>>>> +
>>>> +	ret = pthread_timedjoin_np(d->worker_thread, NULL, &t);
>>>> +
>>>> +	if (ret == ETIMEDOUT) {
>>>> +		d->worker_state = DEBUGGER_WORKER_INACTIVE;
>>>> +		ret = pthread_join(d->worker_thread, NULL);
>>>> +	}
>>>> +
>>>> +	igt_assert_f(ret == 0 || ret == ESRCH,
>>>> +		     "pthread join failed with error %d!\n", ret);
>>>> +
>>>> +	event_log_sort(d->log);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_debugger_signal_stage:
>>>> + * @d: pointer to the debugger
>>>> + * @stage: stage to signal
>>>> + *
>>>> + * Signals to client, waiting in xe_eudebug_client_wait_stage(),
>>>> + * releasing it to proceed.
>>>> + */
>>>> +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage)
>>>> +{
>>>> +	token_signal(d->p_client, CLIENT_STAGE, stage);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_debugger_wait_stage:
>>>> + * @s: pointer to xe_eudebug_debugger structure
>>>> + * @stage: stage to wait on
>>>> + *
>>>> + * Pauses debugger until the client has signalled the corresponding stage with
>>>> + * xe_eudebug_client_signal_stage. This is only for situations where the actual
>>>> + * event flow is not enough to coordinate between client/debugger and extra sync
>>>> + * mechanism is needed.
>>>> + */
>>>> +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage)
>>>> +{
>>>> +	u64 stage_in;
>>>> +
>>>> +	igt_debug("debugger xe client fd: %d pausing for stage %lu\n", s->d->master_fd, stage);
>>>> +
>>>> +	stage_in = wait_from_client(s->c, DEBUGGER_STAGE);
>>>> +	igt_debug("debugger xe client fd: %d got stage %lu, expected %lu\n",
>>>> +		  s->d->master_fd, stage_in, stage);
>>>> +
>>>> +	igt_assert_eq(stage_in, stage);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_create:
>>>> + * @master_fd: xe client used to open the debugger connection
>>>> + * @work: function that opens xe device and executes arbitrary workload
>>>> + * @flags: flags stored in a client structure, can be used at will
>>>> + * of the caller, i.e. to provide the @work function an additional switch.
>>>> + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
>>>> + * can be shared between client and debugger. Accessible via client->ptr.
>>>> + * Can be NULL.
>>>> + *
>>>> + * Forks and creates the debugger process. @work won't be called until
>>>> + * xe_eudebug_client_start is called.
>>>> + *
>>>> + * Returns: newly created xe_eudebug_client structure with its
>>>> + * event log initialized.
>>>> + */
>>>> +struct xe_eudebug_client *xe_eudebug_client_create(int master_fd, xe_eudebug_client_work_fn work,
>>>> +						   uint64_t flags, void *data)
>>>> +{
>>>> +	struct xe_eudebug_client *c;
>>>> +
>>>> +	c = calloc(1, sizeof(*c));
>>>> +	igt_assert(c);
>>>> +	c->flags = flags;
>>>> +	igt_assert(!pipe(c->p_in));
>>>> +	igt_assert(!pipe(c->p_out));
>>>> +	c->seqno = 1;
>>>> +	c->log = xe_eudebug_event_log_create("client", MAX_EVENT_LOG_SIZE);
>>>> +	c->done = 0;
>>>> +	c->ptr = data;
>>>> +	c->master_fd = master_fd;
>>>> +	c->timeout_ms = XE_EUDEBUG_DEFAULT_TIMEOUT_MS;
>>>> +
>>>> +	igt_fork(child, 1) {
>>>> +		int mypid;
>>>> +
>>>> +		igt_assert_eq(c->pid, 0);
>>>> +
>>>> +		close(c->p_out[0]);
>>>> +		c->p_out[0] = -1;
>>>> +		close(c->p_in[1]);
>>>> +		c->p_in[1] = -1;
>>>> +
>>>> +		mypid = getpid();
>>>> +		client_signal(c, CLIENT_PID, mypid);
>>>> +
>>>> +		c->pid = client_wait_token(c, CLIENT_RUN);
>>>> +		igt_assert_eq(c->pid, mypid);
>>>> +		if (work)
>>>> +			work(c);
>>>> +
>>>> +		client_signal(c, CLIENT_FINI, c->seqno);
>>>> +
>>>> +		event_log_write_to_fd(c->log, c->p_out[1]);
>>>> +
>>>> +		c->pid = client_wait_token(c, CLIENT_STOP);
>>>> +		igt_assert_eq(c->pid, mypid);
>>>> +	}
>>>> +
>>>> +	close(c->p_out[1]);
>>>> +	c->p_out[1] = -1;
>>>> +	close(c->p_in[0]);
>>>> +	c->p_in[0] = -1;
>>>> +
>>>> +	c->pid = wait_from_client(c, CLIENT_PID);
>>>> +
>>>> +	igt_info("client running with pid %d\n", c->pid);
>>>> +
>>>> +	return c;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_stop:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + *
>>>> + * Waits for the end of the client's work and exits the process.
>>>> + */
>>>> +void xe_eudebug_client_stop(struct xe_eudebug_client *c)
>>>> +{
>>>> +	if (c->pid) {
>>>> +		int waitstatus;
>>>> +
>>>> +		xe_eudebug_client_wait_done(c);
>>>> +
>>>> +		token_signal(c->p_in, CLIENT_STOP, c->pid);
>>>> +		igt_assert_eq(waitpid(c->pid, &waitstatus, 0),
>>>> +			      c->pid);
>>>> +		c->pid = 0;
>>>> +	}
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_destroy:
>>>> + * @c: pointer to xe_eudebug_client structure to be freed
>>>> + *
>>>> + * Frees the @c client structure. Note that it calls xe_eudebug_client_stop if
>>>> + * client process has not terminated yet.
>>>> + */
>>>> +void xe_eudebug_client_destroy(struct xe_eudebug_client *c)
>>>> +{
>>>> +	xe_eudebug_client_stop(c);
>>>> +	pipe_close(c->p_in);
>>>> +	pipe_close(c->p_out);
>>>> +	xe_eudebug_event_log_destroy(c->log);
>>>> +	free(c);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_get_seqno:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + *
>>>> + * Increments and returns current seqno value of the given client @c
>>>> + *
>>>> + * Returns: incremented seqno
>>>> + */
>>>> +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c)
>>>> +{
>>>> +	return c->seqno++;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_start:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + *
>>>> + * Starts execution of client's work function within the client's process.
>>>> + */
>>>> +void xe_eudebug_client_start(struct xe_eudebug_client *c)
>>>> +{
>>>> +	token_signal(c->p_in, CLIENT_RUN, c->pid);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_wait_done:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + *
>>>> + * Waits for the client's work to end and updates the event log.
>>>> + * Doesn't terminate the client's process yet.
>>>> + */
>>>> +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c)
>>>> +{
>>>> +	if (!c->done) {
>>>> +		c->done = 1;
>>>> +		c->seqno = wait_from_client(c, CLIENT_FINI);
>>>> +		event_log_read_from_fd(c->log, c->p_out[0]);
>>>> +	}
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_signal_stage:
>>>> + * @c: pointer to the client
>>>> + * @stage: stage to signal
>>>> + *
>>>> + * Signals to debugger, waiting in xe_eudebug_debugger_wait_stage(),
>>>> + * releasing it to proceed.
>>>> + */
>>>> +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage)
>>>> +{
>>>> +	token_signal(c->p_out, DEBUGGER_STAGE, stage);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_wait_stage:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @stage: stage to wait on
>>>> + *
>>>> + * Pauses client until the debugger has signalled the corresponding stage with
>>>> + * xe_eudebug_debugger_signal_stage. This is only for situations where the
>>>> + * actual event flow is not enough to coordinate between client/debugger and extra
>>>> + * sync mechanism is needed.
>>>> + */
>>>> +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage)
>>>> +{
>>>> +	u64 stage_in;
>>>> +
>>>> +	if (c->done) {
>>>> +		igt_warn("client: %d already done before %lu\n", c->pid, stage);
>>>> +		return;
>>>> +	}
>>>> +
>>>> +	igt_debug("client: %d pausing for stage %lu\n", c->pid, stage);
>>>> +
>>>> +	stage_in = client_wait_token(c, CLIENT_STAGE);
>>>> +	igt_debug("client: %d got stage %lu, expected %lu\n", c->pid, stage_in, stage);
>>>> +
>>>> +	igt_assert_eq(stage_in, stage);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_session_create:
>>>> + * @fd: XE file descriptor
>>>> + * @work: function passed to the xe_eudebug_client_create
>>>> + * @flags: flags passed to client and debugger
>>>> + * @test_private: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
>>>> + * passed to client and debugger. Can be NULL.
>>>> + *
>>>> + * Creates session together with client and debugger structures.
>>>> + */
>>>> +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
>>>> +						     xe_eudebug_client_work_fn work,
>>>> +						     unsigned int flags,
>>>> +						     void *test_private)
>>>> +{
>>>> +	struct xe_eudebug_session *s;
>>>> +
>>>> +	s = calloc(1, sizeof(*s));
>>>> +	igt_assert(s);
>>>> +
>>>> +	s->c = xe_eudebug_client_create(fd, work, flags, test_private);
>>>> +	s->d = xe_eudebug_debugger_create(fd, flags, test_private);
>>>> +	s->flags = flags;
>>>> +
>>>> +	return s;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_session_run:
>>>> + * @s: pointer to xe_eudebug_session structure
>>>> + *
>>>> + * Attaches the debugger to the client's process, starts the debugger's
>>>> + * async event reader, starts the client and, once the client finishes,
>>>> + * stops the debugger worker.
>>>> + */
>>>> +void xe_eudebug_session_run(struct xe_eudebug_session *s)
>>>> +{
>>>> +	struct xe_eudebug_debugger *debugger = s->d;
>>>> +	struct xe_eudebug_client *client = s->c;
>>>> +
>>>> +	igt_assert_eq(xe_eudebug_debugger_attach(debugger, client), 0);
>>>> +
>>>> +	xe_eudebug_debugger_start_worker(debugger);
>>>> +
>>>> +	xe_eudebug_client_start(client);
>>>> +	xe_eudebug_client_wait_done(client);
>>>> +
>>>> +	xe_eudebug_debugger_stop_worker(debugger, 1);
>>>> +
>>>> +	xe_eudebug_event_log_print(debugger->log, true);
>>>> +	xe_eudebug_event_log_print(client->log, true);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_session_check:
>>>> + * @s: pointer to xe_eudebug_session structure
>>>> + * @match_opposite: indicates whether check should match all
>>>> + * create and destroy events.
>>>> + * @filter: mask that represents events to be skipped during comparison, useful
>>>> + * for events like 'VM_BIND' since they can be asymmetric
>>>> + *
>>>> + * Validate debugger's log against the log created by the client.
>>>> + */
>>>> +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter)
>>>> +{
>>>> +	xe_eudebug_event_log_compare(s->c->log, s->d->log, filter);
>>>> +
>>>> +	if (match_opposite)
>>>> +		xe_eudebug_event_log_match_opposite(s->d->log, filter);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_session_destroy:
>>>> + * @s: pointer to xe_eudebug_session structure
>>>> + *
>>>> + * Destroy session together with its debugger and client.
>>>> + */
>>>> +void xe_eudebug_session_destroy(struct xe_eudebug_session *s)
>>>> +{
>>>> +	xe_eudebug_debugger_destroy(s->d);
>>>> +	xe_eudebug_client_destroy(s->c);
>>>> +
>>>> +	free(s);
>>>> +}
>>>> +
>>>> +#define to_base(x) ((struct drm_xe_eudebug_event *)&x)
>>>> +
>>>> +static void base_event(struct xe_eudebug_client *c,
>>>> +		       struct drm_xe_eudebug_event *e,
>>>> +		       uint32_t type,
>>>> +		       uint32_t flags,
>>>> +		       uint64_t size)
>>>> +{
>>>> +	e->type = type;
>>>> +	e->flags = flags;
>>>> +	e->seqno = xe_eudebug_client_get_seqno(c);
>>>> +	e->len = size;
>>>> +}
>>>> +
>>>> +static void client_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd)
>>>> +{
>>>> +	struct drm_xe_eudebug_event_client ec;
>>>> +
>>>> +	base_event(c, to_base(ec), DRM_XE_EUDEBUG_EVENT_OPEN, flags, sizeof(ec));
>>>> +
>>>> +	ec.client_handle = client_fd;
>>>> +
>>>> +	xe_eudebug_event_log_write(c->log, (void *)&ec);
>>>> +}
>>>> +
>>>> +static void vm_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd, uint32_t vm_id)
>>>> +{
>>>> +	struct drm_xe_eudebug_event_vm evm;
>>>> +
>>>> +	base_event(c, to_base(evm), DRM_XE_EUDEBUG_EVENT_VM, flags, sizeof(evm));
>>>> +
>>>> +	evm.client_handle = client_fd;
>>>> +	evm.vm_handle = vm_id;
>>>> +
>>>> +	xe_eudebug_event_log_write(c->log, (void *)&evm);
>>>> +}
>>>> +
>>>> +static void exec_queue_event(struct xe_eudebug_client *c, uint32_t flags,
>>>> +			     int client_fd, uint32_t vm_id,
>>>> +			     uint32_t exec_queue_handle, uint16_t class,
>>>> +			     uint16_t width)
>>>> +{
>>>> +	struct drm_xe_eudebug_event_exec_queue ee;
>>>> +
>>>> +	base_event(c, to_base(ee), DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
>>>> +		   flags, sizeof(ee));
>>>> +
>>>> +	ee.client_handle = client_fd;
>>>> +	ee.vm_handle = vm_id;
>>>> +	ee.exec_queue_handle = exec_queue_handle;
>>>> +	ee.engine_class = class;
>>>> +	ee.width = width;
>>>> +
>>>> +	xe_eudebug_event_log_write(c->log, (void *)&ee);
>>>> +}
>>>> +
>>>> +static void metadata_event(struct xe_eudebug_client *c, uint32_t flags,
>>>> +			   int client_fd, uint32_t id, uint64_t type, uint64_t len)
>>>> +{
>>>> +	struct drm_xe_eudebug_event_metadata em;
>>>> +
>>>> +	base_event(c, to_base(em), DRM_XE_EUDEBUG_EVENT_METADATA,
>>>> +		   flags, sizeof(em));
>>>> +
>>>> +	em.client_handle = client_fd;
>>>> +	em.metadata_handle = id;
>>>> +	em.type = type;
>>>> +	em.len = len;
>>>> +
>>>> +	xe_eudebug_event_log_write(c->log, (void *)&em);
>>>> +}
>>>> +
>>>> +static int enable_getset(int fd, bool *old, bool *new)
>>>> +{
>>>> +	static const char * const fname = "enable_eudebug";
>>>> +	int ret = 0;
>>>> +
>>>> +	int sysfs, device_fd;
>>>> +	bool val_before;
>>>> +	struct stat st;
>>>> +
>>>> +	igt_assert(new || old);
>>>> +
>>>> +	igt_assert_eq(fstat(fd, &st), 0);
>>>> +	sysfs = igt_sysfs_open(fd);
>>>> +	if (sysfs < 0)
>>>> +		return -1;
>>>> +
>>>> +	device_fd = openat(sysfs, "device", O_DIRECTORY | O_RDONLY);
>>>> +	close(sysfs);
>>>> +	if (device_fd < 0)
>>>> +		return -1;
>>>> +
>>>> +	if (!__igt_sysfs_get_boolean(device_fd, fname, &val_before)) {
>>>> +		ret = -1;
>>>> +		goto out;
>>>> +	}
>>>> +
>>>> +	igt_debug("enable_eudebug before: %d\n", val_before);
>>>> +
>>>> +	if (old)
>>>> +		*old = val_before;
>>>> +
>>>> +	ret = 0;
>>>> +	if (new) {
>>>> +		if (__igt_sysfs_set_boolean(device_fd, fname, *new))
>>>> +			igt_assert_eq(igt_sysfs_get_boolean(device_fd, fname), *new);
>>>> +		else
>>>> +			ret = -1;
>>>> +	}
>>>> +
>>>> +out:
>>>> +	close(device_fd);
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_enable
>>>> + * @fd: xe client
>>>> + * @enable: state toggle - true to enable, false to disable
>>>> + *
>>>> + * Enables/disables eudebug capability by writing to
>>>> + * '/sys/class/drm/card<N>/device/enable_eudebug' sysfs entry.
>>>> + *
>>>> + * Returns: previous toggle value, i.e. true when eudebugging was enabled,
>>>> + * false when eudebugging was disabled.
>>>> + */
>>>> +bool xe_eudebug_enable(int fd, bool enable)
>>>> +{
>>>> +	bool old = false;
>>>> +	int ret = enable_getset(fd, &old, &enable);
>>>> +
>>>> +	if (ret) {
>>>> +		igt_skip_on(enable);
>>>> +		old = false;
>>>> +	}
>>>> +
>>>> +	return old;
>>>> +}
>>>> +
>>>> +/* Eu debugger wrappers around resource creating xe ioctls. */
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_open_driver:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + *
>>>> + * Reopens the driver via drm_reopen_driver() on the client's master fd
>>>> + * and logs the corresponding event in client's event log.
>>>> + *
>>>> + * Returns: valid DRM file descriptor
>>>> + */
>>>> +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c)
>>>> +{
>>>> +	int fd;
>>>> +
>>>> +	fd = drm_reopen_driver(c->master_fd);
>>>> +	client_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd);
>>>> +
>>>> +	return fd;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_close_driver:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @fd: xe client
>>>> + *
>>>> + * Closes the driver fd and logs the corresponding event in
>>>> + * client's event log.
>>>> + */
>>>> +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd)
>>>> +{
>>>> +	client_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd);
>>>> +	close(fd);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_vm_create:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @fd: xe client
>>>> + * @flags: vm create flags
>>>> + * @ext: pointer to the first user extension
>>>> + *
>>>> + * Calls xe_vm_create() and logs corresponding events
>>>> + * (including vm set metadata events) in client's event log.
>>>> + *
>>>> + * Returns: valid vm handle
>>>> + */
>>>> +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
>>>> +				     uint32_t flags, uint64_t ext)
>>>> +{
>>>> +	uint32_t vm;
>>>> +
>>>> +	vm = xe_vm_create(fd, flags, ext);
>>>> +	vm_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, vm);
>>>> +
>>>> +	return vm;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_vm_destroy:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @fd: xe client
>>>> + * @vm: vm handle
>>>> + *
>>>> + * Calls xe_vm_destroy() and logs the corresponding event in
>>>> + * client's event log.
>>>> + */
>>>> +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm)
>>>> +{
>>>> +	xe_vm_destroy(fd, vm);
>>>> +	vm_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, vm);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_exec_queue_create:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @fd: xe client
>>>> + * @create: exec_queue create drm struct
>>>> + *
>>>> + * Calls xe exec queue create ioctl and logs the corresponding event in
>>>> + * client's event log.
>>>> + *
>>>> + * Returns: valid exec queue handle
>>>> + */
>>>> +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
>>>> +					     struct drm_xe_exec_queue_create *create)
>>>> +{
>>>> +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
>>>> +
>>>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE, create), 0);
>>>> +
>>>> +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
>>>> +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create->vm_id,
>>>> +				 create->exec_queue_id, class, create->width);
>>>> +
>>>> +	return create->exec_queue_id;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_exec_queue_destroy:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @fd: xe client
>>>> + * @create: exec_queue create drm struct which was used for creation
>>>> + *
>>>> + * Calls xe exec_queue destroy ioctl and logs the corresponding event in
>>>> + * client's event log.
>>>> + */
>>>> +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
>>>> +					  struct drm_xe_exec_queue_create *create)
>>>> +{
>>>> +	struct drm_xe_exec_queue_destroy destroy = { .exec_queue_id = create->exec_queue_id, };
>>>> +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
>>>> +
>>>> +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
>>>> +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, create->vm_id,
>>>> +				 create->exec_queue_id, class, create->width);
>>>> +
>>>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_DESTROY, &destroy), 0);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_vm_bind_event:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @event_flags: base event flags
>>>> + * @fd: xe client
>>>> + * @vm: vm handle
>>>> + * @bind_flags: bind flags of vm_bind_event
>>>> + * @num_binds: number of bind operations for the event
>>>> + * @ref_seqno: output, the base vm bind event seqno
>>>> + *
>>>> + * Logs vm bind event in client's event log.
>>>> + */
>>>> +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c,
>>>> +				     uint32_t event_flags, int fd,
>>>> +				     uint32_t vm, uint32_t bind_flags,
>>>> +				     uint32_t num_binds, u64 *ref_seqno)
>>>> +{
>>>> +	struct drm_xe_eudebug_event_vm_bind evmb;
>>>> +
>>>> +	base_event(c, to_base(evmb), DRM_XE_EUDEBUG_EVENT_VM_BIND,
>>>> +		   event_flags, sizeof(evmb));
>>>> +	evmb.client_handle = fd;
>>>> +	evmb.vm_handle = vm;
>>>> +	evmb.flags = bind_flags;
>>>> +	evmb.num_binds = num_binds;
>>>> +
>>>> +	*ref_seqno = evmb.base.seqno;
>>>> +
>>>> +	xe_eudebug_event_log_write(c->log, (void *)&evmb);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_vm_bind_op_event:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @event_flags: base event flags
>>>> + * @bind_ref_seqno: base vm bind reference seqno
>>>> + * @op_ref_seqno: output, the vm_bind_op event seqno
>>>> + * @addr: ppgtt address
>>>> + * @size: size of the binding
>>>> + * @num_extensions: number of vm bind op extensions
>>>> + *
>>>> + * Logs vm bind op event in client's event log.
>>>> + */
>>>> +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
>>>> +					uint64_t bind_ref_seqno, uint64_t *op_ref_seqno,
>>>> +					uint64_t addr, uint64_t range,
>>>> +					uint64_t num_extensions)
>>>> +{
>>>> +	struct drm_xe_eudebug_event_vm_bind_op op;
>>>> +
>>>> +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP,
>>>> +		   event_flags, sizeof(op));
>>>> +	op.vm_bind_ref_seqno = bind_ref_seqno;
>>>> +	op.addr = addr;
>>>> +	op.range = range;
>>>> +	op.num_extensions = num_extensions;
>>>> +
>>>> +	*op_ref_seqno = op.base.seqno;
>>>> +
>>>> +	xe_eudebug_event_log_write(c->log, (void *)&op);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_vm_bind_op_metadata_event:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @event_flags: base event flags
>>>> + * @op_ref_seqno: base vm bind op reference seqno
>>>> + * @metadata_handle: metadata handle
>>>> + * @metadata_cookie: metadata cookie
>>>> + *
>>>> + * Logs vm bind op metadata event in client's event log.
>>>> + */
>>>> +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
>>>> +						 uint32_t event_flags, uint64_t op_ref_seqno,
>>>> +						 uint64_t metadata_handle, uint64_t metadata_cookie)
>>>> +{
>>>> +	struct drm_xe_eudebug_event_vm_bind_op_metadata op;
>>>> +
>>>> +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA,
>>>> +		   event_flags, sizeof(op));
>>>> +	op.vm_bind_op_ref_seqno = op_ref_seqno;
>>>> +	op.metadata_handle = metadata_handle;
>>>> +	op.metadata_cookie = metadata_cookie;
>>>> +
>>>> +	xe_eudebug_event_log_write(c->log, (void *)&op);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_vm_bind_ufence_event:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @event_flags: base event flags
>>>> + * @ref_seqno: base vm bind event seqno
>>>> + *
>>>> + * Logs vm bind ufence event in client's event log.
>>>> + */
>>>> +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
>>>> +					    uint64_t ref_seqno)
>>>> +{
>>>> +	struct drm_xe_eudebug_event_vm_bind_ufence f;
>>>> +
>>>> +	base_event(c, to_base(f), DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>>>> +		   event_flags, sizeof(f));
>>>> +	f.vm_bind_ref_seqno = ref_seqno;
>>>> +
>>>> +	xe_eudebug_event_log_write(c->log, (void *)&f);
>>>> +}
>>>> +
>>>> +static bool has_user_fence(const struct drm_xe_sync *sync, uint32_t num_syncs)
>>>> +{
>>>> +	while (num_syncs--)
>>>> +		if (sync[num_syncs].type == DRM_XE_SYNC_TYPE_USER_FENCE)
>>>> +			return true;
>>>> +
>>>> +	return false;
>>>> +}
>>>> +
>>>> +#define for_each_metadata(__m, __ext)					\
>>>> +	for ((__m) = from_user_pointer(__ext);				\
>>>> +	     (__m);							\
>>>> +	     (__m) = from_user_pointer((__m)->base.next_extension))	\
>>>> +		if ((__m)->base.name == XE_VM_BIND_OP_EXTENSIONS_ATTACH_DEBUG)
>>>> +
>>>> +static int __xe_eudebug_client_vm_bind(struct xe_eudebug_client *c,
>>>> +					int fd, uint32_t vm, uint32_t exec_queue,
>>>> +					uint32_t bo, uint64_t offset,
>>>> +					uint64_t addr, uint64_t size,
>>>> +					uint32_t op, uint32_t flags,
>>>> +					struct drm_xe_sync *sync,
>>>> +					uint32_t num_syncs,
>>>> +					uint32_t prefetch_region,
>>>> +					uint8_t pat_index, uint64_t op_ext)
>>>> +{
>>>> +	struct drm_xe_vm_bind_op_ext_attach_debug *metadata;
>>>> +	const bool ufence = has_user_fence(sync, num_syncs);
>>>> +	const uint32_t bind_flags = ufence ?
>>>> +		DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE : 0;
>>>> +	uint64_t seqno = 0, op_seqno = 0, num_metadata = 0;
>>>> +	uint32_t bind_base_flags = 0;
>>>> +	int ret;
>>>> +
>>>> +	for_each_metadata(metadata, op_ext)
>>>> +		num_metadata++;
>>>> +
>>>> +	switch (op) {
>>>> +	case DRM_XE_VM_BIND_OP_MAP:
>>>> +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_CREATE;
>>>> +		break;
>>>> +	case DRM_XE_VM_BIND_OP_UNMAP:
>>>> +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_DESTROY;
>>>> +		igt_assert_eq(num_metadata, 0);
>>>> +		igt_assert_eq(ufence, false);
>>>> +		break;
>>>> +	default:
>>>> +		/* XXX unmap all? */
>>>> +		igt_assert(op);
>>>> +		break;
>>>> +	}
>>>> +
>>>> +	ret = ___xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
>>>> +			    op, flags, sync, num_syncs, prefetch_region,
>>>> +			    pat_index, 0, op_ext);
>>>> +
>>>> +	if (ret)
>>>> +		return ret;
>>>> +
>>>> +	if (!bind_base_flags)
>>>> +		return -EINVAL;
>>>> +
>>>> +	xe_eudebug_client_vm_bind_event(c, DRM_XE_EUDEBUG_EVENT_STATE_CHANGE,
>>>> +					fd, vm, bind_flags, 1, &seqno);
>>>> +	xe_eudebug_client_vm_bind_op_event(c, bind_base_flags,
>>>> +					   seqno, &op_seqno, addr, size,
>>>> +					   num_metadata);
>>>> +
>>>> +	for_each_metadata(metadata, op_ext)
>>>> +		xe_eudebug_client_vm_bind_op_metadata_event(c,
>>>> +							    DRM_XE_EUDEBUG_EVENT_CREATE,
>>>> +							    op_seqno,
>>>> +							    metadata->metadata_id,
>>>> +							    metadata->cookie);
>>>> +	if (ufence)
>>>> +		xe_eudebug_client_vm_bind_ufence_event(c, DRM_XE_EUDEBUG_EVENT_CREATE |
>>>> +						       DRM_XE_EUDEBUG_EVENT_NEED_ACK,
>>>> +						       seqno);
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +static void _xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd,
>>>> +				       uint32_t vm, uint32_t bo,
>>>> +				       uint64_t offset, uint64_t addr, uint64_t size,
>>>> +				       uint32_t op,
>>>> +				       uint32_t flags,
>>>> +				       struct drm_xe_sync *sync,
>>>> +				       uint32_t num_syncs,
>>>> +				       uint64_t op_ext)
>>>> +{
>>>> +	const uint32_t exec_queue_id = 0;
>>>> +	const uint32_t prefetch_region = 0;
>>>> +
>>>> +	igt_assert_eq(__xe_eudebug_client_vm_bind(c, fd, vm, exec_queue_id, bo, offset,
>>>> +						  addr, size, op, flags,
>>>> +						  sync, num_syncs, prefetch_region,
>>>> +						  DEFAULT_PAT_INDEX, op_ext),
>>>> +		      0);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_vm_bind_flags
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @fd: xe client
>>>> + * @vm: vm handle
>>>> + * @bo: buffer object handle
>>>> + * @offset: offset within buffer object
>>>> + * @addr: ppgtt address
>>>> + * @size: size of the binding
>>>> + * @flags: vm_bind flags
>>>> + * @sync: sync objects
>>>> + * @num_syncs: number of sync objects
>>>> + * @op_ext: BIND_OP extensions
>>>> + *
>>>> + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
>>>> + */
>>>> +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
>>>> +				     uint32_t bo, uint64_t offset,
>>>> +				     uint64_t addr, uint64_t size, uint32_t flags,
>>>> +				     struct drm_xe_sync *sync, uint32_t num_syncs,
>>>> +				     uint64_t op_ext)
>>>> +{
>>>> +	_xe_eudebug_client_vm_bind(c, fd, vm, bo, offset, addr, size,
>>>> +				   DRM_XE_VM_BIND_OP_MAP, flags,
>>>> +				   sync, num_syncs, op_ext);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_vm_bind
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @fd: xe client
>>>> + * @vm: vm handle
>>>> + * @bo: buffer object handle
>>>> + * @offset: offset within buffer object
>>>> + * @addr: ppgtt address
>>>> + * @size: size of the binding
>>>> + *
>>>> + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
>>>> + */
>>>> +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
>>>> +			       uint32_t bo, uint64_t offset,
>>>> +			       uint64_t addr, uint64_t size)
>>>> +{
>>>> +	const uint32_t flags = 0;
>>>> +	struct drm_xe_sync *sync = NULL;
>>>> +	const uint32_t num_syncs = 0;
>>>> +	const uint64_t op_ext = 0;
>>>> +
>>>> +	xe_eudebug_client_vm_bind_flags(c, fd, vm, bo, offset, addr, size,
>>>> +					flags,
>>>> +					sync, num_syncs, op_ext);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_vm_unbind_flags
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @fd: xe client
>>>> + * @vm: vm handle
>>>> + * @offset: offset
>>>> + * @addr: ppgtt address
>>>> + * @size: size of the binding
>>>> + * @flags: vm_bind flags
>>>> + * @sync: sync objects
>>>> + * @num_syncs: number of sync objects
>>>> + *
>>>> + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
>>>> + */
>>>> +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
>>>> +				       uint32_t vm, uint64_t offset,
>>>> +				       uint64_t addr, uint64_t size, uint32_t flags,
>>>> +				       struct drm_xe_sync *sync, uint32_t num_syncs)
>>>> +{
>>>> +	_xe_eudebug_client_vm_bind(c, fd, vm, 0, offset, addr, size,
>>>> +				   DRM_XE_VM_BIND_OP_UNMAP, flags,
>>>> +				   sync, num_syncs, 0);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_vm_unbind
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @fd: xe client
>>>> + * @vm: vm handle
>>>> + * @offset: offset
>>>> + * @addr: ppgtt address
>>>> + * @size: size of the binding
>>>> + *
>>>> + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
>>>> + */
>>>> +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
>>>> +				 uint64_t offset, uint64_t addr, uint64_t size)
>>>> +{
>>>> +	const uint32_t flags = 0;
>>>> +	struct drm_xe_sync *sync = NULL;
>>>> +	const uint32_t num_syncs = 0;
>>>> +
>>>> +	xe_eudebug_client_vm_unbind_flags(c, fd, vm, offset, addr, size,
>>>> +					  flags, sync, num_syncs);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_metadata_create:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @fd: xe client
>>>> + * @type: debug metadata type
>>>> + * @len: size of @data
>>>> + * @data: debug metadata payload
>>>> + *
>>>> + * Calls xe metadata create ioctl and logs the corresponding event in
>>>> + * client's event log.
>>>> + *
>>>> + * Returns: valid debug metadata id.
>>>> + */
>>>> +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
>>>> +					   int type, size_t len, void *data)
>>>> +{
>>>> +	struct drm_xe_debug_metadata_create create = {
>>>> +		.type = type,
>>>> +		.user_addr = to_user_pointer(data),
>>>> +		.len = len
>>>> +	};
>>>> +
>>>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_CREATE, &create), 0);
>>>> +
>>>> +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create.metadata_id, type, len);
>>>> +
>>>> +	return create.metadata_id;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_eudebug_client_metadata_destroy:
>>>> + * @c: pointer to xe_eudebug_client structure
>>>> + * @fd: xe client
>>>> + * @id: xe debug metadata handle
>>>> + * @type: debug metadata type
>>>> + * @len: size of debug metadata payload
>>>> + *
>>>> + * Calls xe metadata destroy ioctl and logs the corresponding event in
>>>> + * client's event log.
>>>> + */
>>>> +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
>>>> +					uint32_t id, int type, size_t len)
>>>> +{
>>>> +	struct drm_xe_debug_metadata_destroy destroy = { .metadata_id = id };
>>>> +
>>>> +
>>>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_DESTROY, &destroy), 0);
>>>> +
>>>> +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, id, type, len);
>>>> +}
>>>> +
>>>> +void xe_eudebug_ack_ufence(int debugfd,
>>>> +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f)
>>>> +{
>>>> +	struct drm_xe_eudebug_ack_event ack = { 0, };
>>>> +	char event_str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
>>>> +
>>>> +	ack.type = f->base.type;
>>>> +	ack.seqno = f->base.seqno;
>>>> +
>>>> +	xe_eudebug_event_to_str((void *)f, event_str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
>>>> +	igt_debug("delivering ack for event: %s\n", event_str);
>>>> +	igt_assert_eq(igt_ioctl(debugfd, DRM_XE_EUDEBUG_IOCTL_ACK_EVENT, &ack), 0);
>>>> +}
>>>> diff --git a/lib/xe/xe_eudebug.h b/lib/xe/xe_eudebug.h
>>>> new file mode 100644
>>>> index 000000000..444f5a7b7
>>>> --- /dev/null
>>>> +++ b/lib/xe/xe_eudebug.h
>>>> @@ -0,0 +1,206 @@
>>>> +/* SPDX-License-Identifier: MIT */
>>>> +/*
>>>> + * Copyright © 2023 Intel Corporation
>>>> + */
>>>> +#include <fcntl.h>
>>>> +#include <pthread.h>
>>>> +#include <stdint.h>
>>>> +#include <xe_drm.h>
>>>> +
>>>> +#include "igt_list.h"
>>>> +
>>>> +struct xe_eudebug_event_log {
>>>> +	uint8_t *log;
>>>> +	unsigned int head;
>>>> +	unsigned int max_size;
>>>> +	char name[80];
>>>> +	pthread_mutex_t lock;
>>>> +};
>>>> +
>>>> +struct xe_eudebug_debugger {
>>>> +	int fd;
>>>> +	uint64_t flags;
>>>> +
>>>> +	/* Used to smuggle private data */
>>>> +	void *ptr;
>>>> +
>>>> +	struct xe_eudebug_event_log *log;
>>>> +
>>>> +	uint64_t event_count;
>>>> +
>>>> +	uint64_t target_pid;
>>>> +
>>>> +	struct igt_list_head triggers;
>>>> +
>>>> +	int master_fd;
>>>> +
>>>> +	pthread_t worker_thread;
>>>> +	int worker_state;
>>>> +
>>>> +	int p_client[2];
>>>> +};
>>>> +
>>>> +struct xe_eudebug_client {
>>>> +	int pid;
>>>> +	uint64_t seqno;
>>>> +	uint64_t flags;
>>>> +
>>>> +	/* Used to smuggle private data */
>>>> +	void *ptr;
>>>> +
>>>> +	struct xe_eudebug_event_log *log;
>>>> +
>>>> +	int done;
>>>> +	int p_in[2];
>>>> +	int p_out[2];
>>>> +
>>>> +	/* Used to pick up the right device (the one used in the debugger) */
>>>> +	int master_fd;
>>>> +
>>>> +	int timeout_ms;
>>>> +};
>>>> +
>>>> +struct xe_eudebug_session {
>>>> +	uint64_t flags;
>>>> +	struct xe_eudebug_client *c;
>>>> +	struct xe_eudebug_debugger *d;
>>>> +};
>>>> +
>>>> +typedef void (*xe_eudebug_client_work_fn)(struct xe_eudebug_client *);
>>>> +typedef void (*xe_eudebug_trigger_fn)(struct xe_eudebug_debugger *,
>>>> +				      struct drm_xe_eudebug_event *);
>>>> +
>>>> +#define xe_eudebug_for_each_event(_e, _log) \
>>>> +	for ((_e) = (_e) ? (void *)(uint8_t *)(_e) + (_e)->len : \
>>>> +		    (void *)(_log)->log; \
>>>> +	    (uint8_t *)(_e) < (_log)->log + (_log)->head; \
>>>> +	    (_e) = (void *)(uint8_t *)(_e) + (_e)->len)
>>>> +
>>>> +#define xe_eudebug_assert(d, c)						\
>>>> +	do {								\
>>>> +		if (!(c)) {						\
>>>> +			xe_eudebug_event_log_print((d)->log, true);	\
>>>> +			igt_assert(c);					\
>>>> +		}							\
>>>> +	} while (0)
>>>> +
>>>> +#define xe_eudebug_assert_f(d, c, f...)					\
>>>> +	do {								\
>>>> +		if (!(c)) {						\
>>>> +			xe_eudebug_event_log_print((d)->log, true);	\
>>>> +			igt_assert_f(c, f);				\
>>>> +		}							\
>>>> +	} while (0)
>>>> +
>>>> +#define XE_EUDEBUG_EVENT_STRING_MAX_LEN		4096
>>>> +
>>>> +/*
>>>> + * Default abort timeout to use across xe_eudebug lib and tests if no specific
>>>> + * timeout value is required.
>>>> + */
>>>> +#define XE_EUDEBUG_DEFAULT_TIMEOUT_MS		25000ULL
>>>> +
>>>> +#define XE_EUDEBUG_FILTER_EVENT_NONE		BIT(DRM_XE_EUDEBUG_EVENT_NONE)
>>>> +#define XE_EUDEBUG_FILTER_EVENT_READ		BIT(DRM_XE_EUDEBUG_EVENT_READ)
>>>> +#define XE_EUDEBUG_FILTER_EVENT_OPEN		BIT(DRM_XE_EUDEBUG_EVENT_OPEN)
>>>> +#define XE_EUDEBUG_FILTER_EVENT_VM		BIT(DRM_XE_EUDEBUG_EVENT_VM)
>>>> +#define XE_EUDEBUG_FILTER_EVENT_EXEC_QUEUE	BIT(DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE)
>>>> +#define XE_EUDEBUG_FILTER_EVENT_EU_ATTENTION	BIT(DRM_XE_EUDEBUG_EVENT_EU_ATTENTION)
>>>> +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND		BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND)
>>>> +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP	BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_OP)
>>>> +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE  BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE)
>>>> +#define XE_EUDEBUG_FILTER_ALL			GENMASK(DRM_XE_EUDEBUG_EVENT_MAX_EVENT, 0)
>>>> +#define XE_EUDEBUG_EVENT_IS_FILTERED(_e, _f)	((1UL << (_e)) & (_f))
>>>> +
>>>> +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags);
>>>> +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len);
>>>> +struct drm_xe_eudebug_event *
>>>> +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno);
>>>> +struct xe_eudebug_event_log *
>>>> +xe_eudebug_event_log_create(const char *name, unsigned int max_size);
>>>> +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l);
>>>> +void xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug);
>>>> +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *c, struct xe_eudebug_event_log *d,
>>>> +				  uint32_t filter);
>>>> +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e);
>>>> +void xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter);
>>>> +
>>>> +bool xe_eudebug_debugger_available(int fd);
>>>> +struct xe_eudebug_debugger *
>>>> +xe_eudebug_debugger_create(int xe, uint64_t flags, void *data);
>>>> +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d);
>>>> +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d, struct xe_eudebug_client *c);
>>>> +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d);
>>>> +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d, int timeout_s);
>>>> +void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d);
>>>> +void xe_eudebug_debugger_set_data(struct xe_eudebug_debugger *c, void *ptr);
>>>> +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d, int type,
>>>> +				     xe_eudebug_trigger_fn fn);
>>>> +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage);
>>>> +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage);
>>>> +
>>>> +struct xe_eudebug_client *
>>>> +xe_eudebug_client_create(int xe, xe_eudebug_client_work_fn work, uint64_t flags, void *data);
>>>> +void xe_eudebug_client_destroy(struct xe_eudebug_client *c);
>>>> +void xe_eudebug_client_start(struct xe_eudebug_client *c);
>>>> +void xe_eudebug_client_stop(struct xe_eudebug_client *c);
>>>> +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c);
>>>> +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage);
>>>> +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage);
>>>> +
>>>> +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c);
>>>> +void xe_eudebug_client_set_data(struct xe_eudebug_client *c, void *ptr);
>>>> +
>>>> +bool xe_eudebug_enable(int fd, bool enable);
>>>> +
>>>> +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c);
>>>> +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd);
>>>> +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
>>>> +				     uint32_t flags, uint64_t ext);
>>>> +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm);
>>>> +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
>>>> +					     struct drm_xe_exec_queue_create *create);
>>>> +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
>>>> +					  struct drm_xe_exec_queue_create *create);
>>>> +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c, uint32_t event_flags, int fd,
>>>> +				     uint32_t vm, uint32_t bind_flags,
>>>> +				     uint32_t num_ops, uint64_t *ref_seqno);
>>>> +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
>>>> +					uint64_t ref_seqno, uint64_t *op_ref_seqno,
>>>> +					uint64_t addr, uint64_t range,
>>>> +					uint64_t num_extensions);
>>>> +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
>>>> +						 uint32_t event_flags, uint64_t op_ref_seqno,
>>>> +						 uint64_t metadata_handle, uint64_t metadata_cookie);
>>>> +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
>>>> +					    uint64_t ref_seqno);
>>>> +void xe_eudebug_ack_ufence(int debugfd,
>>>> +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f);
>>>> +
>>>> +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
>>>> +				     uint32_t bo, uint64_t offset,
>>>> +				     uint64_t addr, uint64_t size, uint32_t flags,
>>>> +				     struct drm_xe_sync *sync, uint32_t num_syncs,
>>>> +				     uint64_t op_ext);
>>>> +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
>>>> +			       uint32_t bo, uint64_t offset,
>>>> +			       uint64_t addr, uint64_t size);
>>>> +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
>>>> +				       uint32_t vm, uint64_t offset,
>>>> +				       uint64_t addr, uint64_t size, uint32_t flags,
>>>> +				       struct drm_xe_sync *sync, uint32_t num_syncs);
>>>> +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
>>>> +				 uint64_t offset, uint64_t addr, uint64_t size);
>>>> +
>>>> +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
>>>> +					   int type, size_t len, void *data);
>>>> +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
>>>> +					uint32_t id, int type, size_t len);
>>>> +
>>>> +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
>>>> +						     xe_eudebug_client_work_fn work,
>>>> +						     unsigned int flags,
>>>> +						     void *test_private);
>>>> +void xe_eudebug_session_destroy(struct xe_eudebug_session *s);
>>>> +void xe_eudebug_session_run(struct xe_eudebug_session *s);
>>>> +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter);
>>>> -- 
>>>> 2.34.1
>>>>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework
  2024-08-20 17:45       ` Kamil Konieczny
  2024-08-21  7:05         ` Manszewski, Christoph
@ 2024-08-21  9:31         ` Zbigniew Kempczyński
  2024-08-22 15:39           ` Kamil Konieczny
  1 sibling, 1 reply; 41+ messages in thread
From: Zbigniew Kempczyński @ 2024-08-21  9:31 UTC (permalink / raw)
  To: Kamil Konieczny, igt-dev, Manszewski, Christoph,
	Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun, Mika Kuoppala,
	Karolina Stolarek

On Tue, Aug 20, 2024 at 07:45:18PM +0200, Kamil Konieczny wrote:
> Hi Manszewski,
> On 2024-08-20 at 18:14:07 +0200, Manszewski, Christoph wrote:
> > Hi Zbigniew,
> > 
> > On 20.08.2024 10:14, Zbigniew Kempczyński wrote:
> > > On Fri, Aug 09, 2024 at 02:38:03PM +0200, Christoph Manszewski wrote:
> > > > From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > > > 
> > > > Introduce a library which simplifies testing of the EU debug
> > > > capability. The library provides event log helpers together with an
> > > > asynchronous abstraction for the client process and the debugger
> > > > itself.
> > > > 
> > > > xe_eudebug_client creates its own process with the user's work
> > > > function, and provides mechanisms to synchronize the beginning of
> > > > execution and event logging.
> > > > 
> > > > xe_eudebug_debugger allows attaching to the given process, provides
> > > > an asynchronous thread for event reading and introduces triggers -
> > > > a callback mechanism invoked every time a subscribed event is read.
> > > > 
> > > > Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > > > Signed-off-by: Mika Kuoppala <mika.kuaoppala@linux.intel.com>
> > > > Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
> > > > Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
> > > > Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
> > > > Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
> > > > ---
> > > >   lib/meson.build     |    1 +
> > > >   lib/xe/xe_eudebug.c | 2192 +++++++++++++++++++++++++++++++++++++++++++
> > > >   lib/xe/xe_eudebug.h |  206 ++++
> > > >   3 files changed, 2399 insertions(+)
> > > >   create mode 100644 lib/xe/xe_eudebug.c
> > > >   create mode 100644 lib/xe/xe_eudebug.h
> > > > 
> > > > diff --git a/lib/meson.build b/lib/meson.build
> > > > index f711e60a7..969ca4101 100644
> > > > --- a/lib/meson.build
> > > > +++ b/lib/meson.build
> > > > @@ -111,6 +111,7 @@ lib_sources = [
> > > >   	'igt_msm.c',
> > > >   	'igt_dsc.c',
> > > >   	'xe/xe_gt.c',
> > > > +	'xe/xe_eudebug.c',
> > > >   	'xe/xe_ioctl.c',
> > > >   	'xe/xe_mmio.c',
> > > >   	'xe/xe_query.c',
> > > 
> > > As eudebug is quite a big feature I think it should be separated and
> > > hidden behind a feature flag (check meson_options.txt), let's say
> > > 'xe_eudebug', which would be disabled by default. This way you can
> > > develop it upstream even if the kernel side is not officially merged.
> > > I'm pragmatic and I see no reason to block a not-yet-accepted feature,
> > > especially since this would imo speed up development. A final step,
> > > once the kernel change is accepted and merged, would be to sync with
> > > the uapi and remove the local definitions.
> > > 
> > > I look forward to the maintainers' comments on whether my approach is
> > > acceptable.
> > 
> > I agree that it is a good idea. The only problem that arises is with
> > 'xe_exec_sip'. We add a dependency on eudebug to this test - any ideas
> > on how to approach this correctly? The only thing that comes to mind
> > is conditional compilation with 'ifdef' statements, but that doesn't
> > look pretty.
> 
> What about adding skips in the new tests if the kernel does not support
> eudebug?
> 
> This way you can have it without conditional compilation via ifdef/meson
> and also have it compile-time tested (if CI supports test-with for Xe
> kernels).

IMO the simplest approach is to migrate the code to xe_exec_sip_eudebug.c.

--
Zbigniew


> 
> Regards,
> Kamil
> 
> > 
> > Thanks,
> > Christoph
> > > 
> > > --
> > > Zbigniew
> > > 
> > > 
> > > > diff --git a/lib/xe/xe_eudebug.c b/lib/xe/xe_eudebug.c
> > > > new file mode 100644
> > > > index 000000000..4eac87476
> > > > --- /dev/null
> > > > +++ b/lib/xe/xe_eudebug.c
> > > > @@ -0,0 +1,2192 @@
> > > > +// SPDX-License-Identifier: MIT
> > > > +/*
> > > > + * Copyright © 2023 Intel Corporation
> > > > + */
> > > > +
> > > > +#include <fcntl.h>
> > > > +#include <poll.h>
> > > > +#include <signal.h>
> > > > +#include <sys/select.h>
> > > > +#include <sys/stat.h>
> > > > +#include <sys/types.h>
> > > > +#include <sys/wait.h>
> > > > +
> > > > +#include "igt.h"
> > > > +#include "igt_sysfs.h"
> > > > +#include "intel_pat.h"
> > > > +#include "xe_eudebug.h"
> > > > +#include "xe_ioctl.h"
> > > > +
> > > > +struct event_trigger {
> > > > +	xe_eudebug_trigger_fn fn;
> > > > +	int type;
> > > > +	struct igt_list_head link;
> > > > +};
> > > > +
> > > > +struct seqno_list_entry {
> > > > +	struct igt_list_head link;
> > > > +	uint64_t seqno;
> > > > +};
> > > > +
> > > > +struct match_dto {
> > > > +	struct drm_xe_eudebug_event *target;
> > > > +	struct igt_list_head *seqno_list;
> > > > +	uint64_t client_handle;
> > > > +	uint32_t filter;
> > > > +
> > > > +	/* store latest 'EVENT_VM_BIND' seqno */
> > > > +	uint64_t *bind_seqno;
> > > > +	/* latest vm_bind_op seqno matching bind_seqno */
> > > > +	uint64_t *bind_op_seqno;
> > > > +};
> > > > +
> > > > +#define CLIENT_PID  1
> > > > +#define CLIENT_RUN  2
> > > > +#define CLIENT_FINI 3
> > > > +#define CLIENT_STOP 4
> > > > +#define CLIENT_STAGE 5
> > > > +#define DEBUGGER_STAGE 6
> > > > +
> > > > +#define DEBUGGER_WORKER_INACTIVE  0
> > > > +#define DEBUGGER_WORKER_ACTIVE  1
> > > > +#define DEBUGGER_WORKER_QUITTING 2
> > > > +
> > > > +static const char *type_to_str(unsigned int type)
> > > > +{
> > > > +	switch (type) {
> > > > +	case DRM_XE_EUDEBUG_EVENT_NONE:
> > > > +		return "none";
> > > > +	case DRM_XE_EUDEBUG_EVENT_READ:
> > > > +		return "read";
> > > > +	case DRM_XE_EUDEBUG_EVENT_OPEN:
> > > > +		return "client";
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM:
> > > > +		return "vm";
> > > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE:
> > > > +		return "exec_queue";
> > > > +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION:
> > > > +		return "attention";
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND:
> > > > +		return "vm_bind";
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP:
> > > > +		return "vm_bind_op";
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE:
> > > > +		return "vm_bind_ufence";
> > > > +	case DRM_XE_EUDEBUG_EVENT_METADATA:
> > > > +		return "metadata";
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA:
> > > > +		return "vm_bind_op_metadata";
> > > > +	}
> > > > +
> > > > +	return "UNKNOWN";
> > > > +}
> > > > +
> > > > +static const char *event_type_to_str(struct drm_xe_eudebug_event *e, char *buf)
> > > > +{
> > > > +	sprintf(buf, "%s(%d)", type_to_str(e->type), e->type);
> > > > +
> > > > +	return buf;
> > > > +}
> > > > +
> > > > +static const char *flags_to_str(unsigned int flags)
> > > > +{
> > > > +	if (flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> > > > +		if (flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK)
> > > > +			return "create|ack";
> > > > +		else
> > > > +			return "create";
> > > > +	}
> > > > +	if (flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
> > > > +		return "destroy";
> > > > +
> > > > +	if (flags & DRM_XE_EUDEBUG_EVENT_STATE_CHANGE)
> > > > +		return "state-change";
> > > > +
> > > > +	igt_assert(!(flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK));
> > > > +
> > > > +	return "flags unknown";
> > > > +}
> > > > +
> > > > +static const char *event_members_to_str(struct drm_xe_eudebug_event *e, char *b)
> > > > +{
> > > > +	switch (e->type) {
> > > > +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> > > > +		struct drm_xe_eudebug_event_client *ec = (struct drm_xe_eudebug_event_client *)e;
> > > > +
> > > > +		sprintf(b, "handle=%llu", ec->client_handle);
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM: {
> > > > +		struct drm_xe_eudebug_event_vm *evm = (struct drm_xe_eudebug_event_vm *)e;
> > > > +
> > > > +		sprintf(b, "client_handle=%llu, handle=%llu",
> > > > +			evm->client_handle, evm->vm_handle);
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> > > > +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
> > > > +
> > > > +		sprintf(b, "client_handle=%llu, vm_handle=%llu, "
> > > > +			   "exec_queue_handle=%llu, engine_class=%d, exec_queue_width=%d",
> > > > +			ee->client_handle, ee->vm_handle,
> > > > +			ee->exec_queue_handle, ee->engine_class, ee->width);
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION: {
> > > > +		struct drm_xe_eudebug_event_eu_attention *ea = (void *)e;
> > > > +
> > > > +		sprintf(b, "client_handle=%llu, exec_queue_handle=%llu, "
> > > > +			   "lrc_handle=%llu, bitmask_size=%d",
> > > > +			ea->client_handle, ea->exec_queue_handle,
> > > > +			ea->lrc_handle, ea->bitmask_size);
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> > > > +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
> > > > +
> > > > +		sprintf(b, "client_handle=%llu, vm_handle=%llu, flags=0x%x, num_binds=%u",
> > > > +			evmb->client_handle, evmb->vm_handle, evmb->flags, evmb->num_binds);
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> > > > +		struct drm_xe_eudebug_event_vm_bind_op *op = (void *)e;
> > > > +
> > > > +		sprintf(b, "vm_bind_ref_seqno=%lld, addr=%016llx, range=%llu num_extensions=%llu",
> > > > +			op->vm_bind_ref_seqno, op->addr, op->range, op->num_extensions);
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
> > > > +		struct drm_xe_eudebug_event_vm_bind_ufence *f = (void *)e;
> > > > +
> > > > +		sprintf(b, "vm_bind_ref_seqno=%lld", f->vm_bind_ref_seqno);
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> > > > +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
> > > > +
> > > > +		sprintf(b, "client_handle=%llu, metadata_handle=%llu, type=%llu, len=%llu",
> > > > +			em->client_handle, em->metadata_handle, em->type, em->len);
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> > > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *op = (void *)e;
> > > > +
> > > > +		sprintf(b, "vm_bind_op_ref_seqno=%lld, metadata_handle=%llu, metadata_cookie=%llu",
> > > > +			op->vm_bind_op_ref_seqno, op->metadata_handle, op->metadata_cookie);
> > > > +		break;
> > > > +	}
> > > > +	default:
> > > > +		strcpy(b, "<...>");
> > > > +	}
> > > > +
> > > > +	return b;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_event_to_str:
> > > > + * @e: pointer to event
> > > > + * @buf: target to write string representation of @e
> > > > + * @len: size of target buffer @buf
> > > > + *
> > > > + * Creates a string representation of the given event.
> > > > + *
> > > > + * Returns: the input buffer pointed to by @buf.
> > > > + */
> > > > +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len)
> > > > +{
> > > > +	char a[256];
> > > > +	char b[256];
> > > > +
> > > > +	snprintf(buf, len, "(%llu) %15s:%s: %s",
> > > > +		 e->seqno,
> > > > +		 event_type_to_str(e, a),
> > > > +		 flags_to_str(e->flags),
> > > > +		 event_members_to_str(e, b));
> > > > +
> > > > +	return buf;
> > > > +}
> > > > +
> > > > +static void catch_child_failure(void)
> > > > +{
> > > > +	pid_t pid;
> > > > +	int status;
> > > > +
> > > > +	pid = waitpid(-1, &status, WNOHANG);
> > > > +
> > > > +	if (pid == 0 || pid == -1)
> > > > +		return;
> > > > +
> > > > +	if (!WIFEXITED(status))
> > > > +		return;
> > > > +
> > > > +	igt_assert_f(WEXITSTATUS(status) == 0, "Client failed!\n");
> > > > +}
> > > > +
> > > > +static int safe_pipe_read(int pipe[2], void *buf, int nbytes, int timeout_ms)
> > > > +{
> > > > +	int ret;
> > > > +	int t = 0;
> > > > +	struct pollfd fd = {
> > > > +		.fd = pipe[0],
> > > > +		.events = POLLIN,
> > > > +		.revents = 0
> > > > +	};
> > > > +
> > > > +	/* When child fails we may get stuck forever. Check whether
> > > > +	 * the child process ended with an error.
> > > > +	 */
> > > > +	do {
> > > > +		const int interval_ms = 1000;
> > > > +
> > > > +		ret = poll(&fd, 1, interval_ms);
> > > > +
> > > > +		if (!ret) {
> > > > +			catch_child_failure();
> > > > +			t += interval_ms;
> > > > +		}
> > > > +	} while (!ret && t < timeout_ms);
> > > > +
> > > > +	if (ret > 0)
> > > > +		return read(pipe[0], buf, nbytes);
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static uint64_t pipe_read(int pipe[2], int timeout_ms)
> > > > +{
> > > > +	uint64_t in;
> > > > +	uint64_t ret;
> > > > +
> > > > +	ret = safe_pipe_read(pipe, &in, sizeof(in), timeout_ms);
> > > > +	igt_assert(ret == sizeof(in));
> > > > +
> > > > +	return in;
> > > > +}
> > > > +
> > > > +static void pipe_signal(int pipe[2], uint64_t token)
> > > > +{
> > > > +	igt_assert(write(pipe[1], &token, sizeof(token)) == sizeof(token));
> > > > +}
> > > > +
> > > > +static void pipe_close(int pipe[2])
> > > > +{
> > > > +	if (pipe[0] != -1)
> > > > +		close(pipe[0]);
> > > > +
> > > > +	if (pipe[1] != -1)
> > > > +		close(pipe[1]);
> > > > +}
> > > > +
> > > > +static uint64_t __wait_token(int p[2], const uint64_t token, int timeout_ms)
> > > > +{
> > > > +	uint64_t in;
> > > > +
> > > > +	in = pipe_read(p, timeout_ms);
> > > > +
> > > > +	igt_assert_eq(in, token);
> > > > +
> > > > +	return pipe_read(p, timeout_ms);
> > > > +}
> > > > +
> > > > +static uint64_t client_wait_token(struct xe_eudebug_client *c,
> > > > +				 const uint64_t token)
> > > > +{
> > > > +	return __wait_token(c->p_in, token, c->timeout_ms);
> > > > +}
> > > > +
> > > > +static uint64_t wait_from_client(struct xe_eudebug_client *c,
> > > > +				 const uint64_t token)
> > > > +{
> > > > +	return __wait_token(c->p_out, token, c->timeout_ms);
> > > > +}
> > > > +
> > > > +static void token_signal(int p[2], const uint64_t token, const uint64_t value)
> > > > +{
> > > > +	pipe_signal(p, token);
> > > > +	pipe_signal(p, value);
> > > > +}
> > > > +
> > > > +static void client_signal(struct xe_eudebug_client *c,
> > > > +			  const uint64_t token,
> > > > +			  const uint64_t value)
> > > > +{
> > > > +	token_signal(c->p_out, token, value);
> > > > +}
> > > > +
> > > > +static int __xe_eudebug_connect(int fd, pid_t pid, uint32_t flags, uint64_t events)
> > > > +{
> > > > +	struct drm_xe_eudebug_connect param = {
> > > > +		.pid = pid,
> > > > +		.flags = flags,
> > > > +	};
> > > > +	int debugfd;
> > > > +
> > > > +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
> > > > +
> > > > +	if (debugfd < 0)
> > > > +		return -errno;
> > > > +
> > > > +	return debugfd;
> > > > +}
> > > > +
> > > > +static void event_log_write_to_fd(struct xe_eudebug_event_log *l, int fd)
> > > > +{
> > > > +	igt_assert_eq(write(fd, &l->head, sizeof(l->head)),
> > > > +		      sizeof(l->head));
> > > > +
> > > > +	igt_assert_eq(write(fd, l->log, l->head), l->head);
> > > > +}
> > > > +
> > > > +static void read_all(int fd, void *buf, size_t nbytes)
> > > > +{
> > > > +	ssize_t remaining_size = nbytes;
> > > > +	ssize_t current_size = 0;
> > > > +	ssize_t read_size = 0;
> > > > +
> > > > +	do {
> > > > +		read_size = read(fd, buf + current_size, remaining_size);
> > > > +		igt_assert_f(read_size >= 0, "read failed: %s\n", strerror(errno));
> > > > +
> > > > +		current_size += read_size;
> > > > +		remaining_size -= read_size;
> > > > +	} while (remaining_size > 0 && read_size > 0);
> > > > +
> > > > +	igt_assert_eq(current_size, nbytes);
> > > > +}
> > > > +
> > > > +static void event_log_read_from_fd(struct xe_eudebug_event_log *l, int fd)
> > > > +{
> > > > +	read_all(fd, &l->head, sizeof(l->head));
> > > > +	igt_assert_lt(l->head, l->max_size);
> > > > +
> > > > +	read_all(fd, l->log, l->head);
> > > > +}
> > > > +
> > > > +typedef int (*cmp_fn_t)(struct drm_xe_eudebug_event *, void *);
> > > > +
> > > > +static struct drm_xe_eudebug_event *
> > > > +event_cmp(struct xe_eudebug_event_log *l,
> > > > +	  struct drm_xe_eudebug_event *current,
> > > > +	  cmp_fn_t match,
> > > > +	  void *data)
> > > > +{
> > > > +	struct drm_xe_eudebug_event *e = current;
> > > > +
> > > > +	xe_eudebug_for_each_event(e, l) {
> > > > +		if (match(e, data))
> > > > +			return e;
> > > > +	}
> > > > +
> > > > +	return NULL;
> > > > +}
> > > > +
> > > > +static int match_type_and_flags(struct drm_xe_eudebug_event *a, void *data)
> > > > +{
> > > > +	struct drm_xe_eudebug_event *b = data;
> > > > +
> > > > +	if (a->type == b->type &&
> > > > +	    a->flags == b->flags)
> > > > +		return 1;
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static int match_fields(struct drm_xe_eudebug_event *a, void *data)
> > > > +{
> > > > +	struct drm_xe_eudebug_event *b = data;
> > > > +	int ret = 0;
> > > > +
> > > > +	ret = match_type_and_flags(a, data);
> > > > +	if (!ret)
> > > > +		return ret;
> > > > +
> > > > +	ret = 0;
> > > > +
> > > > +	switch (a->type) {
> > > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> > > > +		struct drm_xe_eudebug_event_exec_queue *ae = (void *)a;
> > > > +		struct drm_xe_eudebug_event_exec_queue *be = (void *)b;
> > > > +
> > > > +		if (ae->engine_class == be->engine_class && ae->width == be->width)
> > > > +			ret = 1;
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> > > > +		struct drm_xe_eudebug_event_vm_bind *ea = (void *)a;
> > > > +		struct drm_xe_eudebug_event_vm_bind *eb = (void *)b;
> > > > +
> > > > +		if (ea->num_binds == eb->num_binds)
> > > > +			ret = 1;
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> > > > +		struct drm_xe_eudebug_event_vm_bind_op *ea = (void *)a;
> > > > +		struct drm_xe_eudebug_event_vm_bind_op *eb = (void *)b;
> > > > +
> > > > +		if (ea->addr == eb->addr && ea->range == eb->range &&
> > > > +		    ea->num_extensions == eb->num_extensions)
> > > > +			ret = 1;
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> > > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *ea = (void *)a;
> > > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eb = (void *)b;
> > > > +
> > > > +		if (ea->metadata_handle == eb->metadata_handle &&
> > > > +		    ea->metadata_cookie == eb->metadata_cookie)
> > > > +			ret = 1;
> > > > +		break;
> > > > +	}
> > > > +
> > > > +	default:
> > > > +		ret = 1;
> > > > +		break;
> > > > +	}
> > > > +
> > > > +	return ret;
> > > > +}
> > > > +
> > > > +static int match_client_handle(struct drm_xe_eudebug_event *e, void *data)
> > > > +{
> > > > +	struct match_dto *md = (void *)data;
> > > > +	uint64_t *bind_seqno = md->bind_seqno;
> > > > +	uint64_t *bind_op_seqno = md->bind_op_seqno;
> > > > +	uint64_t h = md->client_handle;
> > > > +
> > > > +	if (XE_EUDEBUG_EVENT_IS_FILTERED(e->type, md->filter))
> > > > +		return 0;
> > > > +
> > > > +	switch (e->type) {
> > > > +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> > > > +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
> > > > +
> > > > +		if (client->client_handle == h)
> > > > +			return 1;
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM: {
> > > > +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
> > > > +
> > > > +		if (vm->client_handle == h)
> > > > +			return 1;
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> > > > +		struct drm_xe_eudebug_event_exec_queue *ee = (struct drm_xe_eudebug_event_exec_queue *)e;
> > > > +
> > > > +		if (ee->client_handle == h)
> > > > +			return 1;
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> > > > +		struct drm_xe_eudebug_event_vm_bind *evmb = (struct drm_xe_eudebug_event_vm_bind *)e;
> > > > +
> > > > +		if (evmb->client_handle == h) {
> > > > +			*bind_seqno = evmb->base.seqno;
> > > > +			return 1;
> > > > +		}
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> > > > +		struct drm_xe_eudebug_event_vm_bind_op *eo = (struct drm_xe_eudebug_event_vm_bind_op *)e;
> > > > +
> > > > +		if (eo->vm_bind_ref_seqno == *bind_seqno) {
> > > > +			*bind_op_seqno = eo->base.seqno;
> > > > +			return 1;
> > > > +		}
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
> > > > +		struct drm_xe_eudebug_event_vm_bind_ufence *ef  = (struct drm_xe_eudebug_event_vm_bind_ufence *)e;
> > > > +
> > > > +		if (ef->vm_bind_ref_seqno == *bind_seqno)
> > > > +			return 1;
> > > > +
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> > > > +		struct drm_xe_eudebug_event_metadata *em = (struct drm_xe_eudebug_event_metadata *)e;
> > > > +
> > > > +		if (em->client_handle == h)
> > > > +			return 1;
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> > > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eo = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)e;
> > > > +
> > > > +		if (eo->vm_bind_op_ref_seqno == *bind_op_seqno)
> > > > +			return 1;
> > > > +		break;
> > > > +	}
> > > > +	default:
> > > > +		break;
> > > > +	}
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static int match_opposite_resource(struct drm_xe_eudebug_event *e, void *data)
> > > > +{
> > > > +
> > > > +	struct drm_xe_eudebug_event *d = (void *)data;
> > > > +	int ret;
> > > > +
> > > > +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
> > > > +	d->flags &= ~(DRM_XE_EUDEBUG_EVENT_NEED_ACK);
> > > > +	ret = match_type_and_flags(e, data);
> > > > +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
> > > > +
> > > > +	if (!ret)
> > > > +		return 0;
> > > > +
> > > > +	switch (e->type) {
> > > > +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> > > > +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
> > > > +		struct drm_xe_eudebug_event_client *filter = (struct drm_xe_eudebug_event_client *)data;
> > > > +
> > > > +		if (client->client_handle == filter->client_handle)
> > > > +			return 1;
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM: {
> > > > +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
> > > > +		struct drm_xe_eudebug_event_vm *filter = (struct drm_xe_eudebug_event_vm *)data;
> > > > +
> > > > +		if (vm->vm_handle == filter->vm_handle)
> > > > +			return 1;
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> > > > +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
> > > > +		struct drm_xe_eudebug_event_exec_queue *filter = (struct drm_xe_eudebug_event_exec_queue *)data;
> > > > +
> > > > +		if (ee->exec_queue_handle == filter->exec_queue_handle)
> > > > +			return 1;
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> > > > +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
> > > > +		struct drm_xe_eudebug_event_vm_bind *filter = (struct drm_xe_eudebug_event_vm_bind *)data;
> > > > +
> > > > +		if (evmb->vm_handle == filter->vm_handle &&
> > > > +		    evmb->num_binds == filter->num_binds)
> > > > +			return 1;
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> > > > +		struct drm_xe_eudebug_event_vm_bind_op *avmb = (void *)e;
> > > > +		struct drm_xe_eudebug_event_vm_bind_op *filter = (struct drm_xe_eudebug_event_vm_bind_op *)data;
> > > > +
> > > > +		if (avmb->addr == filter->addr &&
> > > > +		    avmb->range == filter->range)
> > > > +			return 1;
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> > > > +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
> > > > +		struct drm_xe_eudebug_event_metadata *filter = (struct drm_xe_eudebug_event_metadata *)data;
> > > > +
> > > > +		if (em->metadata_handle == filter->metadata_handle)
> > > > +			return 1;
> > > > +		break;
> > > > +	}
> > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> > > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *avmb = (void *)e;
> > > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *filter = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)data;
> > > > +
> > > > +		if (avmb->metadata_handle == filter->metadata_handle &&
> > > > +		    avmb->metadata_cookie == filter->metadata_cookie)
> > > > +			return 1;
> > > > +		break;
> > > > +	}
> > > > +
> > > > +	default:
> > > > +		break;
> > > > +	}
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +static int match_full(struct drm_xe_eudebug_event *e, void *data)
> > > > +{
> > > > +	struct seqno_list_entry *sl;
> > > > +
> > > > +	struct match_dto *md = (void *)data;
> > > > +	int ret = 0;
> > > > +
> > > > +	ret = match_client_handle(e, md);
> > > > +	if (!ret)
> > > > +		return 0;
> > > > +
> > > > +	ret = match_fields(e, md->target);
> > > > +	if (!ret)
> > > > +		return 0;
> > > > +
> > > > +	igt_list_for_each_entry(sl, md->seqno_list, link) {
> > > > +		if (sl->seqno == e->seqno)
> > > > +			return 0;
> > > > +	}
> > > > +
> > > > +	return 1;
> > > > +}
> > > > +
> > > > +static struct drm_xe_eudebug_event *
> > > > +event_type_match(struct xe_eudebug_event_log *l,
> > > > +		 struct drm_xe_eudebug_event *target,
> > > > +		 struct drm_xe_eudebug_event *current)
> > > > +{
> > > > +	return event_cmp(l, current, match_type_and_flags, target);
> > > > +}
> > > > +
> > > > +static struct drm_xe_eudebug_event *
> > > > +client_match(struct xe_eudebug_event_log *l,
> > > > +	     uint64_t client_handle,
> > > > +	     struct drm_xe_eudebug_event *current,
> > > > +	     uint32_t filter,
> > > > +	     uint64_t *bind_seqno,
> > > > +	     uint64_t *bind_op_seqno)
> > > > +{
> > > > +	struct match_dto md = {
> > > > +		.client_handle = client_handle,
> > > > +		.filter = filter,
> > > > +		.bind_seqno = bind_seqno,
> > > > +		.bind_op_seqno = bind_op_seqno,
> > > > +	};
> > > > +
> > > > +	return event_cmp(l, current, match_client_handle, &md);
> > > > +}
> > > > +
> > > > +static struct drm_xe_eudebug_event *
> > > > +opposite_event_match(struct xe_eudebug_event_log *l,
> > > > +		    struct drm_xe_eudebug_event *target,
> > > > +		    struct drm_xe_eudebug_event *current)
> > > > +{
> > > > +	return event_cmp(l, current, match_opposite_resource, target);
> > > > +}
> > > > +
> > > > +static struct drm_xe_eudebug_event *
> > > > +event_match(struct xe_eudebug_event_log *l,
> > > > +	    struct drm_xe_eudebug_event *target,
> > > > +	    uint64_t client_handle,
> > > > +	    struct igt_list_head *seqno_list,
> > > > +	    uint64_t *bind_seqno,
> > > > +	    uint64_t *bind_op_seqno)
> > > > +{
> > > > +	struct match_dto md = {
> > > > +		.target = target,
> > > > +		.client_handle = client_handle,
> > > > +		.seqno_list = seqno_list,
> > > > +		.bind_seqno = bind_seqno,
> > > > +		.bind_op_seqno = bind_op_seqno,
> > > > +	};
> > > > +
> > > > +	return event_cmp(l, NULL, match_full, &md);
> > > > +}
> > > > +
> > > > +static void compare_client(struct xe_eudebug_event_log *c, struct drm_xe_eudebug_event *_ce,
> > > > +			   struct xe_eudebug_event_log *d, struct drm_xe_eudebug_event *_de,
> > > > +			   uint32_t filter)
> > > > +{
> > > > +	struct drm_xe_eudebug_event_client *ce = (void *)_ce;
> > > > +	struct drm_xe_eudebug_event_client *de = (void *)_de;
> > > > +	uint64_t cbs = 0, dbs = 0, cbso = 0, dbso = 0;
> > > > +
> > > > +	struct igt_list_head matched_seqno_list;
> > > > +	struct drm_xe_eudebug_event *hc, *hd;
> > > > +	struct seqno_list_entry *entry, *tmp;
> > > > +
> > > > +	igt_assert(ce);
> > > > +	igt_assert(de);
> > > > +
> > > > +	igt_debug("client: %llu -> %llu\n", ce->client_handle, de->client_handle);
> > > > +
> > > > +	hc = NULL;
> > > > +	hd = NULL;
> > > > +	IGT_INIT_LIST_HEAD(&matched_seqno_list);
> > > > +
> > > > +	do {
> > > > +		hc = client_match(c, ce->client_handle, hc, filter, &cbs, &cbso);
> > > > +		if (!hc)
> > > > +			break;
> > > > +
> > > > +		hd = event_match(d, hc, de->client_handle, &matched_seqno_list, &dbs, &dbso);
> > > > +
> > > > +		igt_assert_f(hd, "%s (%llu): no matching event type %u found for client %llu\n",
> > > > +			     c->name,
> > > > +			     hc->seqno,
> > > > +			     hc->type,
> > > > +			     ce->client_handle);
> > > > +
> > > > +		igt_debug("comparing %s %llu vs %s %llu\n",
> > > > +			  c->name, hc->seqno, d->name, hd->seqno);
> > > > +
> > > > +		/*
> > > > +		 * Store the seqno of the event that was matched above,
> > > > +		 * inside 'matched_seqno_list', to avoid it getting matched
> > > > +		 * by subsequent 'event_match' calls.
> > > > +		 */
> > > > +		entry = malloc(sizeof(*entry));
> > > > +		entry->seqno = hd->seqno;
> > > > +		igt_list_add(&entry->link, &matched_seqno_list);
> > > > +	} while (hc);
> > > > +
> > > > +	igt_list_for_each_entry_safe(entry, tmp, &matched_seqno_list, link)
> > > > +		free(entry);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_event_log_find_seqno:
> > > > + * @l: event log pointer
> > > > + * @seqno: seqno of event to be found
> > > > + *
> > > > + * Finds the event with given seqno in the event log.
> > > > + *
> > > > + * Returns: pointer to the event with the given seqno within @l, or NULL
> > > > + * if the seqno is not present.
> > > > + */
> > > > +struct drm_xe_eudebug_event *
> > > > +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno)
> > > > +{
> > > > +	struct drm_xe_eudebug_event *e = NULL, *found = NULL;
> > > > +
> > > > +	igt_assert_neq(seqno, 0);
> > > > +	/*
> > > > +	 * Try to catch if seqno is corrupted and prevent too long tests,
> > > > +	 * as our post processing of events is not optimized.
> > > > +	 */
> > > > +	igt_assert_lt(seqno, 10 * 1000 * 1000);
> > > > +
> > > > +	xe_eudebug_for_each_event(e, l) {
> > > > +		if (e->seqno == seqno) {
> > > > +			if (found) {
> > > > +				igt_warn("Found multiple events with the same seqno %" PRIu64 "\n",
> > > > +					 seqno);
> > > > +				xe_eudebug_event_log_print(l, false);
> > > > +				igt_assert(!found);
> > > > +			}
> > > > +			found = e;
> > > > +		}
> > > > +	}
> > > > +
> > > > +	return found;
> > > > +}
> > > > +
> > > > +static void event_log_sort(struct xe_eudebug_event_log *l)
> > > > +{
> > > > +	struct xe_eudebug_event_log *tmp;
> > > > +	struct drm_xe_eudebug_event *e = NULL;
> > > > +	uint64_t first_seqno = 0;
> > > > +	uint64_t last_seqno = 0;
> > > > +	uint64_t events = 0, added = 0;
> > > > +	uint64_t i;
> > > > +
> > > > +	xe_eudebug_for_each_event(e, l) {
> > > > +		if (e->seqno > last_seqno)
> > > > +			last_seqno = e->seqno;
> > > > +
> > > > +		if (e->seqno < first_seqno)
> > > > +			first_seqno = e->seqno;
> > > > +
> > > > +		events++;
> > > > +	}
> > > > +
> > > > +	tmp = xe_eudebug_event_log_create("tmp", l->max_size);
> > > > +
> > > > +	for (i = 1; i <= last_seqno; i++) {
> > > > +		e = xe_eudebug_event_log_find_seqno(l, i);
> > > > +		if (e) {
> > > > +			xe_eudebug_event_log_write(tmp, e);
> > > > +			added++;
> > > > +		}
> > > > +	}
> > > > +
> > > > +	igt_assert_eq(events, added);
> > > > +	igt_assert_eq(tmp->head, l->head);
> > > > +
> > > > +	memcpy(l->log, tmp->log, tmp->head);
> > > > +
> > > > +	xe_eudebug_event_log_destroy(tmp);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_connect:
> > > > + * @fd: Xe file descriptor
> > > > + * @pid: client PID
> > > > + * @flags: connection flags
> > > > + *
> > > > + * Opens the xe eu debugger connection to the process described by @pid
> > > > + *
> > > > + * Returns: the debugger file descriptor on success, -errno otherwise.
> > > > + */
> > > > +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags)
> > > > +{
> > > > +	int ret;
> > > > +	uint64_t events = 0; /* events filtering not supported yet! */
> > > > +
> > > > +	ret = __xe_eudebug_connect(fd, pid, flags, events);
> > > > +
> > > > +	return ret;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_event_log_create:
> > > > + * @name: event log identifier
> > > > + * @max_size: maximum size of created log
> > > > + *
> > > > + * Creates an EU debugger event log able to hold up to @max_size bytes.
> > > > + *
> > > > + * Returns: pointer to the newly created log
> > > > + */
> > > > +#define MAX_EVENT_LOG_SIZE (32 * 1024 * 1024)
> > > > +struct xe_eudebug_event_log *xe_eudebug_event_log_create(const char *name, unsigned int max_size)
> > > > +{
> > > > +	struct xe_eudebug_event_log *l;
> > > > +
> > > > +	l = calloc(1, sizeof(*l));
> > > > +	igt_assert(l);
> > > > +	l->log = calloc(1, max_size);
> > > > +	igt_assert(l->log);
> > > > +	l->max_size = max_size;
> > > > +	strncpy(l->name, name, sizeof(l->name) - 1);
> > > > +	pthread_mutex_init(&l->lock, NULL);
> > > > +
> > > > +	return l;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_event_log_destroy:
> > > > + * @l: event log pointer
> > > > + *
> > > > + * Frees given event log @l.
> > > > + */
> > > > +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l)
> > > > +{
> > > > +	pthread_mutex_destroy(&l->lock);
> > > > +	free(l->log);
> > > > +	free(l);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_event_log_write:
> > > > + * @l: event log pointer
> > > > + * @e: event to be written to event log
> > > > + *
> > > > + * Writes event @e to the event log, thread-safe.
> > > > + */
> > > > +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e)
> > > > +{
> > > > +	igt_assert(e->seqno);
> > > > +	/*
> > > > +	 * Try to catch if seqno is corrupted and prevent too long tests,
> > > > +	 * as our post processing of events is not optimized.
> > > > +	 */
> > > > +	igt_assert_lt(e->seqno, 10 * 1000 * 1000);
> > > > +
> > > > +	pthread_mutex_lock(&l->lock);
> > > > +	igt_assert_lt(l->head + e->len, l->max_size);
> > > > +	memcpy(l->log + l->head, e, e->len);
> > > > +	l->head += e->len;
> > > > +
> > > > +#ifdef DEBUG_LOG
> > > > +	igt_info("%s: wrote %u bytes to eventlog, free %u bytes\n",
> > > > +		 l->name, e->len, l->max_size - l->head);
> > > > +#endif
> > > > +	pthread_mutex_unlock(&l->lock);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_event_log_print:
> > > > + * @l: event log pointer
> > > > + * @debug: when true, the log is printed via igt_debug instead of igt_info.
> > > > + *
> > > > + * Prints given event log.
> > > > + */
> > > > +void
> > > > +xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug)
> > > > +{
> > > > +	struct drm_xe_eudebug_event *e = NULL;
> > > > +	int level = debug ? IGT_LOG_DEBUG : IGT_LOG_INFO;
> > > > +	char str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
> > > > +
> > > > +	igt_log(IGT_LOG_DOMAIN, level,
> > > > +		"event log '%s' (%u bytes):\n", l->name, l->head);
> > > > +
> > > > +	xe_eudebug_for_each_event(e, l) {
> > > > +		xe_eudebug_event_to_str(e, str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
> > > > +		igt_log(IGT_LOG_DOMAIN, level, "%s\n", str);
> > > > +	}
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_event_log_compare:
> > > > + * @a: event log pointer
> > > > + * @b: event log pointer
> > > > + * @filter: mask that represents events to be skipped during comparison, useful
> > > > + * for events like 'VM_BIND' since they can be asymmetric. Note that
> > > > + * 'DRM_XE_EUDEBUG_EVENT_OPEN' will always be matched.
> > > > + *
> > > > + * Compares and asserts event logs @a, @b if the event
> > > > + * sequence matches.
> > > > + */
> > > > +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *a, struct xe_eudebug_event_log *b,
> > > > +				  uint32_t filter)
> > > > +{
> > > > +	struct drm_xe_eudebug_event *ae = NULL;
> > > > +	struct drm_xe_eudebug_event *be = NULL;
> > > > +
> > > > +	xe_eudebug_for_each_event(ae, a) {
> > > > +		if (ae->type == DRM_XE_EUDEBUG_EVENT_OPEN &&
> > > > +		    ae->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> > > > +			be = event_type_match(b, ae, be);
> > > > +
> > > > +			compare_client(a, ae, b, be, filter);
> > > > +			compare_client(b, be, a, ae, filter);
> > > > +		}
> > > > +	}
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_event_log_match_opposite:
> > > > + * @l: event log pointer
> > > > + * @filter: mask that represents events to be skipped during comparison, useful
> > > > + * for events like 'VM_BIND' since they can be asymmetric
> > > > + *
> > > > + * Matches and asserts content of all opposite events (create vs destroy).
> > > > + */
> > > > +void
> > > > +xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter)
> > > > +{
> > > > +	struct drm_xe_eudebug_event *ce = NULL;
> > > > +	struct drm_xe_eudebug_event *de = NULL;
> > > > +
> > > > +	xe_eudebug_for_each_event(ce, l) {
> > > > +		if (ce->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> > > > +			uint8_t offset = sizeof(struct drm_xe_eudebug_event);
> > > > +			int opposite_matching;
> > > > +
> > > > +			if (XE_EUDEBUG_EVENT_IS_FILTERED(ce->type, filter))
> > > > +				continue;
> > > > +
> > > > +			/* No opposite matching for binds */
> > > > +			if ((ce->type >= DRM_XE_EUDEBUG_EVENT_VM_BIND &&
> > > > +			     ce->type <= DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE) ||
> > > > +			    ce->type == DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA)
> > > > +				continue;
> > > > +
> > > > +			de = opposite_event_match(l, ce, ce);
> > > > +
> > > > +			igt_assert_f(de, "no opposite event of type %u found\n", ce->type);
> > > > +
> > > > +			igt_assert_eq(ce->len, de->len);
> > > > +			opposite_matching = memcmp((uint8_t *)de + offset,
> > > > +						   (uint8_t *)ce + offset,
> > > > +						   de->len - offset) == 0;
> > > > +
> > > > +			igt_assert_f(opposite_matching,
> > > > +				     "%s: create|destroy event not "
> > > > +				     "matching (%llu) vs (%llu)\n",
> > > > +				     l->name, de->seqno, ce->seqno);
> > > > +		}
> > > > +	}
> > > > +}
> > > > +
> > > > +static void debugger_run_triggers(struct xe_eudebug_debugger *d,
> > > > +				  struct drm_xe_eudebug_event *e)
> > > > +{
> > > > +	struct event_trigger *t;
> > > > +
> > > > +	igt_list_for_each_entry(t, &d->triggers, link) {
> > > > +		if (e->type == t->type)
> > > > +			t->fn(d, e);
> > > > +	}
> > > > +}
> > > > +
> > > > +#define MAX_EVENT_SIZE (32 * 1024)
> > > > +static int
> > > > +xe_eudebug_read_event(int fd, struct drm_xe_eudebug_event *event)
> > > > +{
> > > > +	int ret;
> > > > +
> > > > +	event->type = DRM_XE_EUDEBUG_EVENT_READ;
> > > > +	event->flags = 0;
> > > > +	event->len = MAX_EVENT_SIZE;
> > > > +
> > > > +	ret = igt_ioctl(fd, DRM_XE_EUDEBUG_IOCTL_READ_EVENT, event);
> > > > +	if (ret < 0)
> > > > +		return -errno;
> > > > +
> > > > +	return ret;
> > > > +}
> > > > +
> > > > +static void *debugger_worker_loop(void *data)
> > > > +{
> > > > +	uint8_t buf[MAX_EVENT_SIZE];
> > > > +	struct drm_xe_eudebug_event *e = (void *)buf;
> > > > +	struct xe_eudebug_debugger *d = data;
> > > > +	struct pollfd p = {
> > > > +		.events = POLLIN,
> > > > +		.revents = 0,
> > > > +	};
> > > > +	int timeout_ms = 100, ret;
> > > > +
> > > > +	igt_assert(d->master_fd >= 0);
> > > > +
> > > > +	do {
> > > > +		p.fd = d->fd;
> > > > +		ret = poll(&p, 1, timeout_ms);
> > > > +
> > > > +		if (ret == -1) {
> > > > +			igt_info("poll failed with errno %d\n", errno);
> > > > +			break;
> > > > +		}
> > > > +
> > > > +		if (ret == 1 && (p.revents & POLLIN)) {
> > > > +			int err = xe_eudebug_read_event(d->fd, e);
> > > > +
> > > > +			if (!err) {
> > > > +				++d->event_count;
> > > > +
> > > > +				xe_eudebug_event_log_write(d->log, e);
> > > > +				debugger_run_triggers(d, e);
> > > > +			} else {
> > > > +				igt_info("xe_eudebug_read_event returned %d\n", err);
> > > > +			}
> > > > +		}
> > > > +	} while ((ret && READ_ONCE(d->worker_state) == DEBUGGER_WORKER_QUITTING) ||
> > > > +		 READ_ONCE(d->worker_state) == DEBUGGER_WORKER_ACTIVE);
> > > > +
> > > > +	d->worker_state = DEBUGGER_WORKER_INACTIVE;
> > > > +	return NULL;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_debugger_available:
> > > > + * @fd: Xe file descriptor
> > > > + *
> > > > + * Returns: true if the debugger connection is available, false otherwise.
> > > > + */
> > > > +bool xe_eudebug_debugger_available(int fd)
> > > > +{
> > > > +	struct drm_xe_eudebug_connect param = { .pid = getpid() };
> > > > +	int debugfd;
> > > > +
> > > > +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
> > > > +	if (debugfd >= 0)
> > > > +		close(debugfd);
> > > > +
> > > > +	return debugfd >= 0;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_debugger_create:
> > > > + * @master_fd: xe client used to open the debugger connection
> > > > + * @flags: flags stored in a debugger structure, usable at the caller's
> > > > + * discretion, e.g. inside triggers.
> > > > + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
> > > > + * can be shared between client and debugger. Can be NULL.
> > > > + *
> > > > + * Returns: newly created xe_eudebug_debugger structure with its
> > > > + * event log initialized. Note that to open the connection
> > > > + * you need to call xe_eudebug_debugger_attach().
> > > > + */
> > > > +struct xe_eudebug_debugger *
> > > > +xe_eudebug_debugger_create(int master_fd, uint64_t flags, void *data)
> > > > +{
> > > > +	struct xe_eudebug_debugger *d;
> > > > +
> > > > +	d = calloc(1, sizeof(*d));
> > > > +	igt_assert(d);
> > > > +	d->flags = flags;
> > > > +	IGT_INIT_LIST_HEAD(&d->triggers);
> > > > +	d->log = xe_eudebug_event_log_create("debugger", MAX_EVENT_LOG_SIZE);
> > > > +	d->fd = -1;
> > > > +	d->master_fd = master_fd;
> > > > +	d->ptr = data;
> > > > +
> > > > +	return d;
> > > > +}
> > > > +
> > > > +static void debugger_destroy_triggers(struct xe_eudebug_debugger *d)
> > > > +{
> > > > +	struct event_trigger *t, *tmp;
> > > > +
> > > > +	igt_list_for_each_entry_safe(t, tmp, &d->triggers, link)
> > > > +		free(t);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_debugger_destroy:
> > > > + * @d: pointer to the debugger
> > > > + *
> > > > + * Frees the xe_eudebug_debugger structure pointed to by @d. If the debugger
> > > > + * connection is still open, it is terminated.
> > > > + */
> > > > +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d)
> > > > +{
> > > > +	if (d->worker_state)
> > > > +		xe_eudebug_debugger_stop_worker(d, 1);
> > > > +
> > > > +	if (d->target_pid)
> > > > +		xe_eudebug_debugger_dettach(d);
> > > > +
> > > > +	xe_eudebug_event_log_destroy(d->log);
> > > > +	debugger_destroy_triggers(d);
> > > > +	free(d);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_debugger_attach:
> > > > + * @d: pointer to the debugger
> > > > + * @c: pointer to the client
> > > > + *
> > > > + * Opens the xe eu debugger connection to the process described by @c (c->pid)
> > > > + *
> > > > + * Returns: 0 if the debugger was successfully attached, -errno otherwise.
> > > > + */
> > > > +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d,
> > > > +			       struct xe_eudebug_client *c)
> > > > +{
> > > > +	int ret;
> > > > +
> > > > +	igt_assert_eq(d->fd, -1);
> > > > +	igt_assert_neq(c->pid, 0);
> > > > +	ret = xe_eudebug_connect(d->master_fd, c->pid, 0);
> > > > +
> > > > +	if (ret < 0)
> > > > +		return ret;
> > > > +
> > > > +	d->fd = ret;
> > > > +	d->target_pid = c->pid;
> > > > +	d->p_client[0] = c->p_in[0];
> > > > +	d->p_client[1] = c->p_in[1];
> > > > +
> > > > +	igt_debug("debugger connected to %lu\n", d->target_pid);
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_debugger_dettach:
> > > > + * @d: pointer to the debugger
> > > > + *
> > > > + * Closes a previously opened xe eu debugger connection. Asserts that
> > > > + * the debugger has an active session.
> > > > + */
> > > > +void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d)
> > > > +{
> > > > +	igt_assert(d->target_pid);
> > > > +	close(d->fd);
> > > > +	d->target_pid = 0;
> > > > +	d->fd = -1;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_debugger_add_trigger:
> > > > + * @d: pointer to the debugger
> > > > + * @type: the type of the event which activates the trigger
> > > > + * @fn: function to be called when event of @type was read by the debugger.
> > > > + *
> > > > + * Adds function @fn to the list of triggers activated when event of @type
> > > > + * has been read by worker.
> > > > + * Note: Triggers are activated by the worker.
> > > > + */
> > > > +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d,
> > > > +				     int type, xe_eudebug_trigger_fn fn)
> > > > +{
> > > > +	struct event_trigger *t;
> > > > +
> > > > +	t = calloc(1, sizeof(*t));
> > > > +	igt_assert(t);
> > > > +	IGT_INIT_LIST_HEAD(&t->link);
> > > > +	t->type = type;
> > > > +	t->fn = fn;
> > > > +
> > > > +	igt_list_add_tail(&t->link, &d->triggers);
> > > > +	igt_debug("added trigger %p\n", t);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_debugger_start_worker:
> > > > + * @d: pointer to the debugger
> > > > + *
> > > > + * Starts the debugger worker. The worker is responsible for reading all
> > > > + * incoming events from the debugger, putting them into the debugger log and
> > > > + * executing the appropriate event triggers. Note that using the debugger's
> > > > + * event log while the worker is running is not safe.
> > > > + */
> > > > +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d)
> > > > +{
> > > > +	int ret;
> > > > +
> > > > +	d->worker_state = DEBUGGER_WORKER_ACTIVE;
> > > > +	ret = pthread_create(&d->worker_thread, NULL, &debugger_worker_loop, d);
> > > > +
> > > > +	igt_assert_f(ret == 0, "Debugger worker thread creation failed!\n");
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_debugger_stop_worker:
> > > > + * @d: pointer to the debugger
> > > > + *
> > > > + * Stops the debugger worker. Event log is sorted by seqno after closure.
> > > > + */
> > > > +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d,
> > > > +				     int timeout_s)
> > > > +{
> > > > +	struct timespec t = {};
> > > > +	int ret;
> > > > +
> > > > +	igt_assert(d->worker_state);
> > > > +
> > > > +	d->worker_state = DEBUGGER_WORKER_QUITTING; /* First time be polite. */
> > > > +	igt_assert_eq(clock_gettime(CLOCK_REALTIME, &t), 0);
> > > > +	t.tv_sec += timeout_s;
> > > > +
> > > > +	ret = pthread_timedjoin_np(d->worker_thread, NULL, &t);
> > > > +
> > > > +	if (ret == ETIMEDOUT) {
> > > > +		d->worker_state = DEBUGGER_WORKER_INACTIVE;
> > > > +		ret = pthread_join(d->worker_thread, NULL);
> > > > +	}
> > > > +
> > > > +	igt_assert_f(ret == 0 || ret == ESRCH,
> > > > +		     "pthread join failed with error %d!\n", ret);
> > > > +
> > > > +	event_log_sort(d->log);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_debugger_signal_stage:
> > > > + * @d: pointer to the debugger
> > > > + * @stage: stage to signal
> > > > + *
> > > > + * Signals to client, waiting in xe_eudebug_client_wait_stage(),
> > > > + * releasing it to proceed.
> > > > + */
> > > > +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage)
> > > > +{
> > > > +	token_signal(d->p_client, CLIENT_STAGE, stage);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_debugger_wait_stage:
> > > > + * @s: pointer to xe_eudebug_session structure
> > > > + * @stage: stage to wait on
> > > > + *
> > > > + * Pauses debugger until the client has signalled the corresponding stage with
> > > > + * xe_eudebug_client_signal_stage. This is only for situations where the actual
> > > > + * event flow is not enough to coordinate between client/debugger and extra sync
> > > > + * mechanism is needed.
> > > > + */
> > > > +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage)
> > > > +{
> > > > +	u64 stage_in;
> > > > +
> > > > +	igt_debug("debugger xe client fd: %d pausing for stage %lu\n", s->d->master_fd, stage);
> > > > +
> > > > +	stage_in = wait_from_client(s->c, DEBUGGER_STAGE);
> > > > +	igt_debug("debugger xe client fd: %d got stage %lu, expected %lu\n",
> > > > +		  s->d->master_fd, stage_in, stage);
> > > > +
> > > > +	igt_assert_eq(stage_in, stage);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_create:
> > > > + * @master_fd: xe client used to open the debugger connection
> > > > + * @work: function that opens xe device and executes arbitrary workload
> > > > + * @flags: flags stored in a client structure, usable at the caller's
> > > > + * discretion, e.g. to provide the @work function with an additional switch.
> > > > + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
> > > > + * can be shared between client and debugger. Accessible via client->ptr.
> > > > + * Can be NULL.
> > > > + *
> > > > + * Forks and creates the client process. @work won't be called until
> > > > + * xe_eudebug_client_start is called.
> > > > + *
> > > > + * Returns: newly created xe_eudebug_client structure with its
> > > > + * event log initialized.
> > > > + */
> > > > +struct xe_eudebug_client *xe_eudebug_client_create(int master_fd, xe_eudebug_client_work_fn work,
> > > > +						   uint64_t flags, void *data)
> > > > +{
> > > > +	struct xe_eudebug_client *c;
> > > > +
> > > > +	c = calloc(1, sizeof(*c));
> > > > +	igt_assert(c);
> > > > +	c->flags = flags;
> > > > +	igt_assert(!pipe(c->p_in));
> > > > +	igt_assert(!pipe(c->p_out));
> > > > +	c->seqno = 1;
> > > > +	c->log = xe_eudebug_event_log_create("client", MAX_EVENT_LOG_SIZE);
> > > > +	c->done = 0;
> > > > +	c->ptr = data;
> > > > +	c->master_fd = master_fd;
> > > > +	c->timeout_ms = XE_EUDEBUG_DEFAULT_TIMEOUT_MS;
> > > > +
> > > > +	igt_fork(child, 1) {
> > > > +		int mypid;
> > > > +
> > > > +		igt_assert_eq(c->pid, 0);
> > > > +
> > > > +		close(c->p_out[0]);
> > > > +		c->p_out[0] = -1;
> > > > +		close(c->p_in[1]);
> > > > +		c->p_in[1] = -1;
> > > > +
> > > > +		mypid = getpid();
> > > > +		client_signal(c, CLIENT_PID, mypid);
> > > > +
> > > > +		c->pid = client_wait_token(c, CLIENT_RUN);
> > > > +		igt_assert_eq(c->pid, mypid);
> > > > +		if (work)
> > > > +			work(c);
> > > > +
> > > > +		client_signal(c, CLIENT_FINI, c->seqno);
> > > > +
> > > > +		event_log_write_to_fd(c->log, c->p_out[1]);
> > > > +
> > > > +		c->pid = client_wait_token(c, CLIENT_STOP);
> > > > +		igt_assert_eq(c->pid, mypid);
> > > > +	}
> > > > +
> > > > +	close(c->p_out[1]);
> > > > +	c->p_out[1] = -1;
> > > > +	close(c->p_in[0]);
> > > > +	c->p_in[0] = -1;
> > > > +
> > > > +	c->pid = wait_from_client(c, CLIENT_PID);
> > > > +
> > > > +	igt_info("client running with pid %d\n", c->pid);
> > > > +
> > > > +	return c;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_stop:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + *
> > > > + * Waits for the end of the client's work and exits the process.
> > > > + */
> > > > +void xe_eudebug_client_stop(struct xe_eudebug_client *c)
> > > > +{
> > > > +	if (c->pid) {
> > > > +		int waitstatus;
> > > > +
> > > > +		xe_eudebug_client_wait_done(c);
> > > > +
> > > > +		token_signal(c->p_in, CLIENT_STOP, c->pid);
> > > > +		igt_assert_eq(waitpid(c->pid, &waitstatus, 0),
> > > > +			      c->pid);
> > > > +		c->pid = 0;
> > > > +	}
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_destroy:
> > > > + * @c: pointer to xe_eudebug_client structure to be freed
> > > > + *
> > > > + * Frees the @c client structure. Note that it calls xe_eudebug_client_stop if
> > > > + * the client process has not terminated yet.
> > > > + */
> > > > +void xe_eudebug_client_destroy(struct xe_eudebug_client *c)
> > > > +{
> > > > +	xe_eudebug_client_stop(c);
> > > > +	pipe_close(c->p_in);
> > > > +	pipe_close(c->p_out);
> > > > +	xe_eudebug_event_log_destroy(c->log);
> > > > +	free(c);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_get_seqno:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + *
> > > > + * Returns the current seqno of the given client @c, then increments it.
> > > > + *
> > > > + * Returns: the seqno before incrementing
> > > > + */
> > > > +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c)
> > > > +{
> > > > +	return c->seqno++;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_start:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + *
> > > > + * Starts execution of the client's work function within the client's process.
> > > > + */
> > > > +void xe_eudebug_client_start(struct xe_eudebug_client *c)
> > > > +{
> > > > +	token_signal(c->p_in, CLIENT_RUN, c->pid);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_wait_done:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + *
> > > > + * Waits for the client's work to end and updates the event log.
> > > > + * Doesn't terminate the client's process yet.
> > > > + */
> > > > +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c)
> > > > +{
> > > > +	if (!c->done) {
> > > > +		c->done = 1;
> > > > +		c->seqno = wait_from_client(c, CLIENT_FINI);
> > > > +		event_log_read_from_fd(c->log, c->p_out[0]);
> > > > +	}
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_signal_stage:
> > > > + * @c: pointer to the client
> > > > + * @stage: stage to signal
> > > > + *
> > > > + * Signals to debugger, waiting in xe_eudebug_debugger_wait_stage(),
> > > > + * releasing it to proceed.
> > > > + */
> > > > +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage)
> > > > +{
> > > > +	token_signal(c->p_out, DEBUGGER_STAGE, stage);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_wait_stage:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @stage: stage to wait on
> > > > + *
> > > > + * Pauses client until the debugger has signalled the corresponding stage with
> > > > + * xe_eudebug_debugger_signal_stage. This is only for situations where the
> > > > + * actual event flow is not enough to coordinate between client/debugger and extra
> > > > + * sync mechanism is needed.
> > > > + *
> > > > + */
> > > > +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage)
> > > > +{
> > > > +	u64 stage_in;
> > > > +
> > > > +	if (c->done) {
> > > > +		igt_warn("client: %d already done before %lu\n", c->pid, stage);
> > > > +		return;
> > > > +	}
> > > > +
> > > > +	igt_debug("client: %d pausing for stage %lu\n", c->pid, stage);
> > > > +
> > > > +	stage_in = client_wait_token(c, CLIENT_STAGE);
> > > > +	igt_debug("client: %d got stage %lu, expected %lu\n", c->pid, stage_in, stage);
> > > > +
> > > > +	igt_assert_eq(stage_in, stage);
> > > > +}
> > > > +
> > > > +
> > > > +/**
> > > > + * xe_eudebug_session_create:
> > > > + * @fd: XE file descriptor
> > > > + * @work: function passed to the xe_eudebug_client_create
> > > > + * @flags: flags passed to client and debugger
> > > > + * @test_private: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
> > > > + * passed to client and debugger. Can be NULL.
> > > > + *
> > > > + * Creates session together with client and debugger structures.
> > > > + */
> > > > +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
> > > > +						     xe_eudebug_client_work_fn work,
> > > > +						     unsigned int flags,
> > > > +						     void *test_private)
> > > > +{
> > > > +	struct xe_eudebug_session *s;
> > > > +
> > > > +	s = calloc(1, sizeof(*s));
> > > > +	igt_assert(s);
> > > > +
> > > > +	s->c = xe_eudebug_client_create(fd, work, flags, test_private);
> > > > +	s->d = xe_eudebug_debugger_create(fd, flags, test_private);
> > > > +	s->flags = flags;
> > > > +
> > > > +	return s;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_session_run:
> > > > + * @s: pointer to xe_eudebug_session structure
> > > > + *
> > > > + * Attaches the debugger to the client's process, starts the debugger's
> > > > + * async event reader, starts the client and, once the client finishes,
> > > > + * stops the debugger worker.
> > > > + */
> > > > +void xe_eudebug_session_run(struct xe_eudebug_session *s)
> > > > +{
> > > > +	struct xe_eudebug_debugger *debugger = s->d;
> > > > +	struct xe_eudebug_client *client = s->c;
> > > > +
> > > > +	igt_assert_eq(xe_eudebug_debugger_attach(debugger, client), 0);
> > > > +
> > > > +	xe_eudebug_debugger_start_worker(debugger);
> > > > +
> > > > +	xe_eudebug_client_start(client);
> > > > +	xe_eudebug_client_wait_done(client);
> > > > +
> > > > +	xe_eudebug_debugger_stop_worker(debugger, 1);
> > > > +
> > > > +	xe_eudebug_event_log_print(debugger->log, true);
> > > > +	xe_eudebug_event_log_print(client->log, true);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_session_check:
> > > > + * @s: pointer to xe_eudebug_session structure
> > > > + * @match_opposite: indicates whether check should match all
> > > > + * create and destroy events.
> > > > + * @filter: mask that represents events to be skipped during comparison, useful
> > > > + * for events like 'VM_BIND' since they can be asymmetric
> > > > + *
> > > > + * Validates the debugger's log against the log created by the client.
> > > > + */
> > > > +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter)
> > > > +{
> > > > +	xe_eudebug_event_log_compare(s->c->log, s->d->log, filter);
> > > > +
> > > > +	if (match_opposite)
> > > > +		xe_eudebug_event_log_match_opposite(s->d->log, filter);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_session_destroy:
> > > > + * @s: pointer to xe_eudebug_session structure
> > > > + *
> > > > + * Destroy session together with its debugger and client.
> > > > + */
> > > > +void xe_eudebug_session_destroy(struct xe_eudebug_session *s)
> > > > +{
> > > > +	xe_eudebug_debugger_destroy(s->d);
> > > > +	xe_eudebug_client_destroy(s->c);
> > > > +
> > > > +	free(s);
> > > > +}
> > > > +
> > > > +#define to_base(x) ((struct drm_xe_eudebug_event *)&x)
> > > > +
> > > > +static void base_event(struct xe_eudebug_client *c,
> > > > +		       struct drm_xe_eudebug_event *e,
> > > > +		       uint32_t type,
> > > > +		       uint32_t flags,
> > > > +		       uint64_t size)
> > > > +{
> > > > +	e->type = type;
> > > > +	e->flags = flags;
> > > > +	e->seqno = xe_eudebug_client_get_seqno(c);
> > > > +	e->len = size;
> > > > +}
> > > > +
> > > > +static void client_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd)
> > > > +{
> > > > +	struct drm_xe_eudebug_event_client ec;
> > > > +
> > > > +	base_event(c, to_base(ec), DRM_XE_EUDEBUG_EVENT_OPEN, flags, sizeof(ec));
> > > > +
> > > > +	ec.client_handle = client_fd;
> > > > +
> > > > +	xe_eudebug_event_log_write(c->log, (void *)&ec);
> > > > +}
> > > > +
> > > > +static void vm_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd, uint32_t vm_id)
> > > > +{
> > > > +	struct drm_xe_eudebug_event_vm evm;
> > > > +
> > > > +	base_event(c, to_base(evm), DRM_XE_EUDEBUG_EVENT_VM, flags, sizeof(evm));
> > > > +
> > > > +	evm.client_handle = client_fd;
> > > > +	evm.vm_handle = vm_id;
> > > > +
> > > > +	xe_eudebug_event_log_write(c->log, (void *)&evm);
> > > > +}
> > > > +
> > > > +static void exec_queue_event(struct xe_eudebug_client *c, uint32_t flags,
> > > > +			     int client_fd, uint32_t vm_id,
> > > > +			     uint32_t exec_queue_handle, uint16_t class,
> > > > +			     uint16_t width)
> > > > +{
> > > > +	struct drm_xe_eudebug_event_exec_queue ee;
> > > > +
> > > > +	base_event(c, to_base(ee), DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
> > > > +		   flags, sizeof(ee));
> > > > +
> > > > +	ee.client_handle = client_fd;
> > > > +	ee.vm_handle = vm_id;
> > > > +	ee.exec_queue_handle = exec_queue_handle;
> > > > +	ee.engine_class = class;
> > > > +	ee.width = width;
> > > > +
> > > > +	xe_eudebug_event_log_write(c->log, (void *)&ee);
> > > > +}
> > > > +
> > > > +static void metadata_event(struct xe_eudebug_client *c, uint32_t flags,
> > > > +			   int client_fd, uint32_t id, uint64_t type, uint64_t len)
> > > > +{
> > > > +	struct drm_xe_eudebug_event_metadata em;
> > > > +
> > > > +	base_event(c, to_base(em), DRM_XE_EUDEBUG_EVENT_METADATA,
> > > > +		   flags, sizeof(em));
> > > > +
> > > > +	em.client_handle = client_fd;
> > > > +	em.metadata_handle = id;
> > > > +	em.type = type;
> > > > +	em.len = len;
> > > > +
> > > > +	xe_eudebug_event_log_write(c->log, (void *)&em);
> > > > +}
> > > > +
> > > > +static int enable_getset(int fd, bool *old, bool *new)
> > > > +{
> > > > +	static const char * const fname = "enable_eudebug";
> > > > +	int ret = 0;
> > > > +
> > > > +	int sysfs, device_fd;
> > > > +	bool val_before;
> > > > +	struct stat st;
> > > > +
> > > > +	igt_assert(new || old);
> > > > +
> > > > +	igt_assert_eq(fstat(fd, &st), 0);
> > > > +	sysfs = igt_sysfs_open(fd);
> > > > +	if (sysfs < 0)
> > > > +		return -1;
> > > > +
> > > > +	device_fd = openat(sysfs, "device", O_DIRECTORY | O_RDONLY);
> > > > +	close(sysfs);
> > > > +	if (device_fd < 0)
> > > > +		return -1;
> > > > +
> > > > +	if (!__igt_sysfs_get_boolean(device_fd, fname, &val_before)) {
> > > > +		ret = -1;
> > > > +		goto out;
> > > > +	}
> > > > +
> > > > +	igt_debug("enable_eudebug before: %d\n", val_before);
> > > > +
> > > > +	if (old)
> > > > +		*old = val_before;
> > > > +
> > > > +	ret = 0;
> > > > +	if (new) {
> > > > +		if (__igt_sysfs_set_boolean(device_fd, fname, *new))
> > > > +			igt_assert_eq(igt_sysfs_get_boolean(device_fd, fname), *new);
> > > > +		else
> > > > +			ret = -1;
> > > > +	}
> > > > +
> > > > +out:
> > > > +	close(device_fd);
> > > > +	return ret;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_enable
> > > > + * @fd: xe client
> > > > + * @enable: state toggle - true to enable, false to disable
> > > > + *
> > > > + * Enables/disables eudebug capability by writing to
> > > > + * '/sys/class/drm/card<N>/device/enable_eudebug' sysfs entry.
> > > > + *
> > > > + * Returns: previous toggle value, i.e. true when eudebugging was enabled,
> > > > + * false when eudebugging was disabled.
> > > > + */
> > > > +bool xe_eudebug_enable(int fd, bool enable)
> > > > +{
> > > > +	bool old = false;
> > > > +	int ret = enable_getset(fd, &old, &enable);
> > > > +
> > > > +	if (ret) {
> > > > +		igt_skip_on(enable);
> > > > +		old = false;
> > > > +	}
> > > > +
> > > > +	return old;
> > > > +}
> > > > +
> > > > +/* Eu debugger wrappers around resource creating xe ioctls. */
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_open_driver:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + *
> > > > + * Calls drm_reopen_driver() and logs the corresponding
> > > > + * event in the client's event log.
> > > > + *
> > > > + * Returns: valid DRM file descriptor
> > > > + */
> > > > +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c)
> > > > +{
> > > > +	int fd;
> > > > +
> > > > +	fd = drm_reopen_driver(c->master_fd);
> > > > +	client_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd);
> > > > +
> > > > +	return fd;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_close_driver:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @fd: xe client
> > > > + *
> > > > + * Calls close driver and logs the corresponding event in
> > > > + * client's event log.
> > > > + */
> > > > +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd)
> > > > +{
> > > > +	client_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd);
> > > > +	close(fd);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_vm_create:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @fd: xe client
> > > > + * @flags: vm create flags
> > > > + * @ext: pointer to the first user extension
> > > > + *
> > > > + * Calls xe_vm_create() and logs corresponding events
> > > > + * (including vm set metadata events) in client's event log.
> > > > + *
> > > > + * Returns: valid vm handle
> > > > + */
> > > > +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
> > > > +				     uint32_t flags, uint64_t ext)
> > > > +{
> > > > +	uint32_t vm;
> > > > +
> > > > +	vm = xe_vm_create(fd, flags, ext);
> > > > +	vm_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, vm);
> > > > +
> > > > +	return vm;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_vm_destroy:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @fd: xe client
> > > > + * @vm: vm handle
> > > > + *
> > > > + * Calls xe_vm_destroy() and logs the corresponding event in
> > > > + * client's event log.
> > > > + */
> > > > +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm)
> > > > +{
> > > > +	xe_vm_destroy(fd, vm);
> > > > +	vm_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, vm);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_exec_queue_create:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @fd: xe client
> > > > + * @create: exec_queue create drm struct
> > > > + *
> > > > + * Calls xe exec queue create ioctl and logs the corresponding event in
> > > > + * client's event log.
> > > > + *
> > > > + * Returns: valid exec queue handle
> > > > + */
> > > > +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
> > > > +					     struct drm_xe_exec_queue_create *create)
> > > > +{
> > > > +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
> > > > +
> > > > +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE, create), 0);
> > > > +
> > > > +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
> > > > +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create->vm_id,
> > > > +				 create->exec_queue_id, class, create->width);
> > > > +
> > > > +	return create->exec_queue_id;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_exec_queue_destroy:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @fd: xe client
> > > > + * @create: exec_queue create drm struct which was used for creation
> > > > + *
> > > > + * Calls xe exec_queue destroy ioctl and logs the corresponding event in
> > > > + * client's event log.
> > > > + */
> > > > +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
> > > > +					  struct drm_xe_exec_queue_create *create)
> > > > +{
> > > > +	struct drm_xe_exec_queue_destroy destroy = { .exec_queue_id = create->exec_queue_id, };
> > > > +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
> > > > +
> > > > +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
> > > > +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, create->vm_id,
> > > > +				 create->exec_queue_id, class, create->width);
> > > > +
> > > > +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_DESTROY, &destroy), 0);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_vm_bind_event:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @event_flags: base event flags
> > > > + * @fd: xe client
> > > > + * @vm: vm handle
> > > > + * @bind_flags: bind flags of vm_bind_event
> > > > + * @num_binds: number of bind operations covered by the event
> > > > + * @ref_seqno: output, the vm bind event seqno
> > > > + *
> > > > + * Logs vm bind event in client's event log.
> > > > + */
> > > > +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c,
> > > > +				     uint32_t event_flags, int fd,
> > > > +				     uint32_t vm, uint32_t bind_flags,
> > > > +				     uint32_t num_binds, uint64_t *ref_seqno)
> > > > +{
> > > > +	struct drm_xe_eudebug_event_vm_bind evmb;
> > > > +
> > > > +	base_event(c, to_base(evmb), DRM_XE_EUDEBUG_EVENT_VM_BIND,
> > > > +		   event_flags, sizeof(evmb));
> > > > +	evmb.client_handle = fd;
> > > > +	evmb.vm_handle = vm;
> > > > +	evmb.flags = bind_flags;
> > > > +	evmb.num_binds = num_binds;
> > > > +
> > > > +	*ref_seqno = evmb.base.seqno;
> > > > +
> > > > +	xe_eudebug_event_log_write(c->log, (void *)&evmb);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_vm_bind_op_event:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @event_flags: base event flags
> > > > + * @bind_ref_seqno: base vm bind reference seqno
> > > > + * @op_ref_seqno: output, the vm_bind_op event seqno
> > > > + * @addr: ppgtt address
> > > > + * @range: size of the binding
> > > > + * @num_extensions: number of vm bind op extensions
> > > > + *
> > > > + * Logs vm bind op event in client's event log.
> > > > + */
> > > > +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
> > > > +					uint64_t bind_ref_seqno, uint64_t *op_ref_seqno,
> > > > +					uint64_t addr, uint64_t range,
> > > > +					uint64_t num_extensions)
> > > > +{
> > > > +	struct drm_xe_eudebug_event_vm_bind_op op;
> > > > +
> > > > +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP,
> > > > +		   event_flags, sizeof(op));
> > > > +	op.vm_bind_ref_seqno = bind_ref_seqno;
> > > > +	op.addr = addr;
> > > > +	op.range = range;
> > > > +	op.num_extensions = num_extensions;
> > > > +
> > > > +	*op_ref_seqno = op.base.seqno;
> > > > +
> > > > +	xe_eudebug_event_log_write(c->log, (void *)&op);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_vm_bind_op_metadata_event:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @event_flags: base event flags
> > > > + * @op_ref_seqno: base vm bind op reference seqno
> > > > + * @metadata_handle: metadata handle
> > > > + * @metadata_cookie: metadata cookie
> > > > + *
> > > > + * Logs vm bind op metadata event in client's event log.
> > > > + */
> > > > +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
> > > > +						 uint32_t event_flags, uint64_t op_ref_seqno,
> > > > +						 uint64_t metadata_handle, uint64_t metadata_cookie)
> > > > +{
> > > > +	struct drm_xe_eudebug_event_vm_bind_op_metadata op;
> > > > +
> > > > +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA,
> > > > +		   event_flags, sizeof(op));
> > > > +	op.vm_bind_op_ref_seqno = op_ref_seqno;
> > > > +	op.metadata_handle = metadata_handle;
> > > > +	op.metadata_cookie = metadata_cookie;
> > > > +
> > > > +	xe_eudebug_event_log_write(c->log, (void *)&op);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_vm_bind_ufence_event:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @event_flags: base event flags
> > > > + * @ref_seqno: base vm bind event seqno
> > > > + *
> > > > + * Logs vm bind ufence event in client's event log.
> > > > + */
> > > > +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
> > > > +					    uint64_t ref_seqno)
> > > > +{
> > > > +	struct drm_xe_eudebug_event_vm_bind_ufence f;
> > > > +
> > > > +	base_event(c, to_base(f), DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> > > > +		   event_flags, sizeof(f));
> > > > +	f.vm_bind_ref_seqno = ref_seqno;
> > > > +
> > > > +	xe_eudebug_event_log_write(c->log, (void *)&f);
> > > > +}
> > > > +
> > > > +static bool has_user_fence(const struct drm_xe_sync *sync, uint32_t num_syncs)
> > > > +{
> > > > +	while (num_syncs--)
> > > > +		if (sync[num_syncs].type == DRM_XE_SYNC_TYPE_USER_FENCE)
> > > > +			return true;
> > > > +
> > > > +	return false;
> > > > +}
> > > > +
> > > > +#define for_each_metadata(__m, __ext)					\
> > > > +	for ((__m) = from_user_pointer(__ext);				\
> > > > +	     (__m);							\
> > > > +	     (__m) = from_user_pointer((__m)->base.next_extension))	\
> > > > +		if ((__m)->base.name == XE_VM_BIND_OP_EXTENSIONS_ATTACH_DEBUG)
> > > > +
> > > > +static int __xe_eudebug_client_vm_bind(struct xe_eudebug_client *c,
> > > > +					int fd, uint32_t vm, uint32_t exec_queue,
> > > > +					uint32_t bo, uint64_t offset,
> > > > +					uint64_t addr, uint64_t size,
> > > > +					uint32_t op, uint32_t flags,
> > > > +					struct drm_xe_sync *sync,
> > > > +					uint32_t num_syncs,
> > > > +					uint32_t prefetch_region,
> > > > +					uint8_t pat_index, uint64_t op_ext)
> > > > +{
> > > > +	struct drm_xe_vm_bind_op_ext_attach_debug *metadata;
> > > > +	const bool ufence = has_user_fence(sync, num_syncs);
> > > > +	const uint32_t bind_flags = ufence ?
> > > > +		DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE : 0;
> > > > +	uint64_t seqno = 0, op_seqno = 0, num_metadata = 0;
> > > > +	uint32_t bind_base_flags = 0;
> > > > +	int ret;
> > > > +
> > > > +	for_each_metadata(metadata, op_ext)
> > > > +		num_metadata++;
> > > > +
> > > > +	switch (op) {
> > > > +	case DRM_XE_VM_BIND_OP_MAP:
> > > > +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_CREATE;
> > > > +		break;
> > > > +	case DRM_XE_VM_BIND_OP_UNMAP:
> > > > +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_DESTROY;
> > > > +		igt_assert_eq(num_metadata, 0);
> > > > +		igt_assert_eq(ufence, false);
> > > > +		break;
> > > > +	default:
> > > > +		/* XXX unmap all? */
> > > > +		igt_assert(op);
> > > > +		break;
> > > > +	}
> > > > +
> > > > +	ret = ___xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
> > > > +			    op, flags, sync, num_syncs, prefetch_region,
> > > > +			    pat_index, 0, op_ext);
> > > > +
> > > > +	if (ret)
> > > > +		return ret;
> > > > +
> > > > +	if (!bind_base_flags)
> > > > +		return -EINVAL;
> > > > +
> > > > +	xe_eudebug_client_vm_bind_event(c, DRM_XE_EUDEBUG_EVENT_STATE_CHANGE,
> > > > +					fd, vm, bind_flags, 1, &seqno);
> > > > +	xe_eudebug_client_vm_bind_op_event(c, bind_base_flags,
> > > > +					   seqno, &op_seqno, addr, size,
> > > > +					   num_metadata);
> > > > +
> > > > +	for_each_metadata(metadata, op_ext)
> > > > +		xe_eudebug_client_vm_bind_op_metadata_event(c,
> > > > +							    DRM_XE_EUDEBUG_EVENT_CREATE,
> > > > +							    op_seqno,
> > > > +							    metadata->metadata_id,
> > > > +							    metadata->cookie);
> > > > +	if (ufence)
> > > > +		xe_eudebug_client_vm_bind_ufence_event(c, DRM_XE_EUDEBUG_EVENT_CREATE |
> > > > +						       DRM_XE_EUDEBUG_EVENT_NEED_ACK,
> > > > +						       seqno);
> > > > +	return ret;
> > > > +}
> > > > +
> > > > +static void _xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd,
> > > > +				       uint32_t vm, uint32_t bo,
> > > > +				       uint64_t offset, uint64_t addr, uint64_t size,
> > > > +				       uint32_t op,
> > > > +				       uint32_t flags,
> > > > +				       struct drm_xe_sync *sync,
> > > > +				       uint32_t num_syncs,
> > > > +				       uint64_t op_ext)
> > > > +{
> > > > +	const uint32_t exec_queue_id = 0;
> > > > +	const uint32_t prefetch_region = 0;
> > > > +
> > > > +	igt_assert_eq(__xe_eudebug_client_vm_bind(c, fd, vm, exec_queue_id, bo, offset,
> > > > +						  addr, size, op, flags,
> > > > +						  sync, num_syncs, prefetch_region,
> > > > +						  DEFAULT_PAT_INDEX, op_ext),
> > > > +		      0);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_vm_bind_flags
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @fd: xe client
> > > > + * @vm: vm handle
> > > > + * @bo: buffer object handle
> > > > + * @offset: offset within buffer object
> > > > + * @addr: ppgtt address
> > > > + * @size: size of the binding
> > > > + * @flags: vm_bind flags
> > > > + * @sync: sync objects
> > > > + * @num_syncs: number of sync objects
> > > > + * @op_ext: BIND_OP extensions
> > > > + *
> > > > + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
> > > > + */
> > > > +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > > +				     uint32_t bo, uint64_t offset,
> > > > +				     uint64_t addr, uint64_t size, uint32_t flags,
> > > > +				     struct drm_xe_sync *sync, uint32_t num_syncs,
> > > > +				     uint64_t op_ext)
> > > > +{
> > > > +	_xe_eudebug_client_vm_bind(c, fd, vm, bo, offset, addr, size,
> > > > +				   DRM_XE_VM_BIND_OP_MAP, flags,
> > > > +				   sync, num_syncs, op_ext);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_vm_bind
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @fd: xe client
> > > > + * @vm: vm handle
> > > > + * @bo: buffer object handle
> > > > + * @offset: offset within buffer object
> > > > + * @addr: ppgtt address
> > > > + * @size: size of the binding
> > > > + *
> > > > + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
> > > > + */
> > > > +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > > +			       uint32_t bo, uint64_t offset,
> > > > +			       uint64_t addr, uint64_t size)
> > > > +{
> > > > +	const uint32_t flags = 0;
> > > > +	struct drm_xe_sync *sync = NULL;
> > > > +	const uint32_t num_syncs = 0;
> > > > +	const uint64_t op_ext = 0;
> > > > +
> > > > +	xe_eudebug_client_vm_bind_flags(c, fd, vm, bo, offset, addr, size,
> > > > +					flags,
> > > > +					sync, num_syncs, op_ext);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_vm_unbind_flags
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @fd: xe client
> > > > + * @vm: vm handle
> > > > + * @offset: offset
> > > > + * @addr: ppgtt address
> > > > + * @size: size of the binding
> > > > + * @flags: vm_bind flags
> > > > + * @sync: sync objects
> > > > + * @num_syncs: number of sync objects
> > > > + *
> > > > + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
> > > > + */
> > > > +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
> > > > +				       uint32_t vm, uint64_t offset,
> > > > +				       uint64_t addr, uint64_t size, uint32_t flags,
> > > > +				       struct drm_xe_sync *sync, uint32_t num_syncs)
> > > > +{
> > > > +	_xe_eudebug_client_vm_bind(c, fd, vm, 0, offset, addr, size,
> > > > +				   DRM_XE_VM_BIND_OP_UNMAP, flags,
> > > > +				   sync, num_syncs, 0);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_vm_unbind
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @fd: xe client
> > > > + * @vm: vm handle
> > > > + * @offset: offset
> > > > + * @addr: ppgtt address
> > > > + * @size: size of the binding
> > > > + *
> > > > + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
> > > > + */
> > > > +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > > +				 uint64_t offset, uint64_t addr, uint64_t size)
> > > > +{
> > > > +	const uint32_t flags = 0;
> > > > +	struct drm_xe_sync *sync = NULL;
> > > > +	const uint32_t num_syncs = 0;
> > > > +
> > > > +	xe_eudebug_client_vm_unbind_flags(c, fd, vm, offset, addr, size,
> > > > +					  flags, sync, num_syncs);
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_metadata_create:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @fd: xe client
> > > > + * @type: debug metadata type
> > > > + * @len: size of @data
> > > > + * @data: debug metadata payload
> > > > + *
> > > > + * Calls xe metadata create ioctl and logs the corresponding event in
> > > > + * client's event log.
> > > > + *
> > > > + * Returns: valid debug metadata id.
> > > > + */
> > > > +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
> > > > +					   int type, size_t len, void *data)
> > > > +{
> > > > +	struct drm_xe_debug_metadata_create create = {
> > > > +		.type = type,
> > > > +		.user_addr = to_user_pointer(data),
> > > > +		.len = len
> > > > +	};
> > > > +
> > > > +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_CREATE, &create), 0);
> > > > +
> > > > +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create.metadata_id, type, len);
> > > > +
> > > > +	return create.metadata_id;
> > > > +}
> > > > +
> > > > +/**
> > > > + * xe_eudebug_client_metadata_destroy:
> > > > + * @c: pointer to xe_eudebug_client structure
> > > > + * @fd: xe client
> > > > + * @id: xe debug metadata handle
> > > > + * @type: debug metadata type
> > > > + * @len: size of debug metadata payload
> > > > + *
> > > > + * Calls xe metadata destroy ioctl and logs the corresponding event in
> > > > + * client's event log.
> > > > + */
> > > > +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
> > > > +					uint32_t id, int type, size_t len)
> > > > +{
> > > > +	struct drm_xe_debug_metadata_destroy destroy = { .metadata_id = id };
> > > > +
> > > > +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_DESTROY, &destroy), 0);
> > > > +
> > > > +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, id, type, len);
> > > > +}
> > > > +
> > > > +void xe_eudebug_ack_ufence(int debugfd,
> > > > +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f)
> > > > +{
> > > > +	struct drm_xe_eudebug_ack_event ack = { 0, };
> > > > +	char event_str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
> > > > +
> > > > +	ack.type = f->base.type;
> > > > +	ack.seqno = f->base.seqno;
> > > > +
> > > > +	xe_eudebug_event_to_str((void *)f, event_str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
> > > > +	igt_debug("delivering ack for event: %s\n", event_str);
> > > > +	igt_assert_eq(igt_ioctl(debugfd, DRM_XE_EUDEBUG_IOCTL_ACK_EVENT, &ack), 0);
> > > > +}
> > > > diff --git a/lib/xe/xe_eudebug.h b/lib/xe/xe_eudebug.h
> > > > new file mode 100644
> > > > index 000000000..444f5a7b7
> > > > --- /dev/null
> > > > +++ b/lib/xe/xe_eudebug.h
> > > > @@ -0,0 +1,206 @@
> > > > +/* SPDX-License-Identifier: MIT */
> > > > +/*
> > > > + * Copyright © 2023 Intel Corporation
> > > > + */
> > > > +#include <fcntl.h>
> > > > +#include <pthread.h>
> > > > +#include <stdint.h>
> > > > +#include <xe_drm.h>
> > > > +
> > > > +#include "igt_list.h"
> > > > +
> > > > +struct xe_eudebug_event_log {
> > > > +	uint8_t *log;
> > > > +	unsigned int head;
> > > > +	unsigned int max_size;
> > > > +	char name[80];
> > > > +	pthread_mutex_t lock;
> > > > +};
> > > > +
> > > > +struct xe_eudebug_debugger {
> > > > +	int fd;
> > > > +	uint64_t flags;
> > > > +
> > > > +	/* Used to smuggle private data */
> > > > +	void *ptr;
> > > > +
> > > > +	struct xe_eudebug_event_log *log;
> > > > +
> > > > +	uint64_t event_count;
> > > > +
> > > > +	uint64_t target_pid;
> > > > +
> > > > +	struct igt_list_head triggers;
> > > > +
> > > > +	int master_fd;
> > > > +
> > > > +	pthread_t worker_thread;
> > > > +	int worker_state;
> > > > +
> > > > +	int p_client[2];
> > > > +};
> > > > +
> > > > +struct xe_eudebug_client {
> > > > +	int pid;
> > > > +	uint64_t seqno;
> > > > +	uint64_t flags;
> > > > +
> > > > +	/* Used to smuggle private data */
> > > > +	void *ptr;
> > > > +
> > > > +	struct xe_eudebug_event_log *log;
> > > > +
> > > > +	int done;
> > > > +	int p_in[2];
> > > > +	int p_out[2];
> > > > +
> > > > +	/* Used to pickup right device (the one used in debugger) */
> > > > +	int master_fd;
> > > > +
> > > > +	int timeout_ms;
> > > > +};
> > > > +
> > > > +struct xe_eudebug_session {
> > > > +	uint64_t flags;
> > > > +	struct xe_eudebug_client *c;
> > > > +	struct xe_eudebug_debugger *d;
> > > > +};
> > > > +
> > > > +typedef void (*xe_eudebug_client_work_fn)(struct xe_eudebug_client *);
> > > > +typedef void (*xe_eudebug_trigger_fn)(struct xe_eudebug_debugger *,
> > > > +				      struct drm_xe_eudebug_event *);
> > > > +
> > > > +#define xe_eudebug_for_each_event(_e, _log) \
> > > > +	for ((_e) = (_e) ? (void *)(uint8_t *)(_e) + (_e)->len : \
> > > > +		    (void *)(_log)->log; \
> > > > +	    (uint8_t *)(_e) < (_log)->log + (_log)->head; \
> > > > +	    (_e) = (void *)(uint8_t *)(_e) + (_e)->len)
> > > > +
> > > > +#define xe_eudebug_assert(d, c)						\
> > > > +	do {								\
> > > > +		if (!(c)) {						\
> > > > +			xe_eudebug_event_log_print((d)->log, true);	\
> > > > +			igt_assert(c);					\
> > > > +		}							\
> > > > +	} while (0)
> > > > +
> > > > +#define xe_eudebug_assert_f(d, c, f...)					\
> > > > +	do {								\
> > > > +		if (!(c)) {						\
> > > > +			xe_eudebug_event_log_print((d)->log, true);	\
> > > > +			igt_assert_f(c, f);				\
> > > > +		}							\
> > > > +	} while (0)
> > > > +
> > > > +#define XE_EUDEBUG_EVENT_STRING_MAX_LEN		4096
> > > > +
> > > > +/*
> > > > + * Default abort timeout to use across xe_eudebug lib and tests if no specific
> > > > + * timeout value is required.
> > > > + */
> > > > +#define XE_EUDEBUG_DEFAULT_TIMEOUT_MS		25000ULL
> > > > +
> > > > +#define XE_EUDEBUG_FILTER_EVENT_NONE		BIT(DRM_XE_EUDEBUG_EVENT_NONE)
> > > > +#define XE_EUDEBUG_FILTER_EVENT_READ		BIT(DRM_XE_EUDEBUG_EVENT_READ)
> > > > +#define XE_EUDEBUG_FILTER_EVENT_OPEN		BIT(DRM_XE_EUDEBUG_EVENT_OPEN)
> > > > +#define XE_EUDEBUG_FILTER_EVENT_VM		BIT(DRM_XE_EUDEBUG_EVENT_VM)
> > > > +#define XE_EUDEBUG_FILTER_EVENT_EXEC_QUEUE	BIT(DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE)
> > > > +#define XE_EUDEBUG_FILTER_EVENT_EU_ATTENTION	BIT(DRM_XE_EUDEBUG_EVENT_EU_ATTENTION)
> > > > +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND		BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND)
> > > > +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP	BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_OP)
> > > > +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE  BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE)
> > > > +#define XE_EUDEBUG_FILTER_ALL			GENMASK(DRM_XE_EUDEBUG_EVENT_MAX_EVENT, 0)
> > > > +#define XE_EUDEBUG_EVENT_IS_FILTERED(_e, _f)	((1UL << (_e)) & (_f))
> > > > +
> > > > +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags);
> > > > +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len);
> > > > +struct drm_xe_eudebug_event *
> > > > +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno);
> > > > +struct xe_eudebug_event_log *
> > > > +xe_eudebug_event_log_create(const char *name, unsigned int max_size);
> > > > +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l);
> > > > +void xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug);
> > > > +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *c, struct xe_eudebug_event_log *d,
> > > > +				  uint32_t filter);
> > > > +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e);
> > > > +void xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter);
> > > > +
> > > > +bool xe_eudebug_debugger_available(int fd);
> > > > +struct xe_eudebug_debugger *
> > > > +xe_eudebug_debugger_create(int xe, uint64_t flags, void *data);
> > > > +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d);
> > > > +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d, struct xe_eudebug_client *c);
> > > > +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d);
> > > > +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d, int timeout_s);
> > > > +void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d);
> > > > +void xe_eudebug_debugger_set_data(struct xe_eudebug_debugger *c, void *ptr);
> > > > +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d, int type,
> > > > +				     xe_eudebug_trigger_fn fn);
> > > > +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage);
> > > > +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage);
> > > > +
> > > > +struct xe_eudebug_client *
> > > > +xe_eudebug_client_create(int xe, xe_eudebug_client_work_fn work, uint64_t flags, void *data);
> > > > +void xe_eudebug_client_destroy(struct xe_eudebug_client *c);
> > > > +void xe_eudebug_client_start(struct xe_eudebug_client *c);
> > > > +void xe_eudebug_client_stop(struct xe_eudebug_client *c);
> > > > +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c);
> > > > +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage);
> > > > +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage);
> > > > +
> > > > +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c);
> > > > +void xe_eudebug_client_set_data(struct xe_eudebug_client *c, void *ptr);
> > > > +
> > > > +bool xe_eudebug_enable(int fd, bool enable);
> > > > +
> > > > +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c);
> > > > +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd);
> > > > +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
> > > > +				     uint32_t flags, uint64_t ext);
> > > > +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm);
> > > > +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
> > > > +					     struct drm_xe_exec_queue_create *create);
> > > > +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
> > > > +					  struct drm_xe_exec_queue_create *create);
> > > > +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c, uint32_t event_flags, int fd,
> > > > +				     uint32_t vm, uint32_t bind_flags,
> > > > +				     uint32_t num_ops, uint64_t *ref_seqno);
> > > > +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
> > > > +					uint64_t ref_seqno, uint64_t *op_ref_seqno,
> > > > +					uint64_t addr, uint64_t range,
> > > > +					uint64_t num_extensions);
> > > > +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
> > > > +						 uint32_t event_flags, uint64_t op_ref_seqno,
> > > > +						 uint64_t metadata_handle, uint64_t metadata_cookie);
> > > > +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
> > > > +					    uint64_t ref_seqno);
> > > > +void xe_eudebug_ack_ufence(int debugfd,
> > > > +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f);
> > > > +
> > > > +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > > +				     uint32_t bo, uint64_t offset,
> > > > +				     uint64_t addr, uint64_t size, uint32_t flags,
> > > > +				     struct drm_xe_sync *sync, uint32_t num_syncs,
> > > > +				     uint64_t op_ext);
> > > > +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > > +			       uint32_t bo, uint64_t offset,
> > > > +			       uint64_t addr, uint64_t size);
> > > > +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
> > > > +				       uint32_t vm, uint64_t offset,
> > > > +				       uint64_t addr, uint64_t size, uint32_t flags,
> > > > +				       struct drm_xe_sync *sync, uint32_t num_syncs);
> > > > +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > > +				 uint64_t offset, uint64_t addr, uint64_t size);
> > > > +
> > > > +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
> > > > +					   int type, size_t len, void *data);
> > > > +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
> > > > +					uint32_t id, int type, size_t len);
> > > > +
> > > > +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
> > > > +						     xe_eudebug_client_work_fn work,
> > > > +						     unsigned int flags,
> > > > +						     void *test_private);
> > > > +void xe_eudebug_session_destroy(struct xe_eudebug_session *s);
> > > > +void xe_eudebug_session_run(struct xe_eudebug_session *s);
> > > > +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter);
> > > > -- 
> > > > 2.34.1
> > > > 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 11/14] tests/xe_exec_sip: Extend SIP interaction testing
  2024-08-09 12:38 ` [PATCH i-g-t v3 11/14] tests/xe_exec_sip: Extend SIP interaction testing Christoph Manszewski
@ 2024-08-21  9:49   ` Zbigniew Kempczyński
  0 siblings, 0 replies; 41+ messages in thread
From: Zbigniew Kempczyński @ 2024-08-21  9:49 UTC (permalink / raw)
  To: Christoph Manszewski
  Cc: igt-dev, Kamil Konieczny, Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun, Mika Kuoppala,
	Karolina Stolarek

On Fri, Aug 09, 2024 at 02:38:10PM +0200, Christoph Manszewski wrote:
> Extend xe_exec_sip test by adding subtests that check SIP interaction
> sanity with regard to resets and hardware debugging capabilities like
> breakpoints and invalid instruction exceptions.
> 
> Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
> Signed-off-by: Dominik Karol Piątkowski <dominik.karol.piatkowski@intel.com>
> Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
> ---
>  tests/intel/xe_exec_sip.c | 332 +++++++++++++++++++++++++++++++++++---
>  1 file changed, 310 insertions(+), 22 deletions(-)
> 
> diff --git a/tests/intel/xe_exec_sip.c b/tests/intel/xe_exec_sip.c
> index 4b599e7f6..0db0dc4b8 100644
> --- a/tests/intel/xe_exec_sip.c
> +++ b/tests/intel/xe_exec_sip.c
> @@ -21,6 +21,7 @@
>  #include "gpgpu_shader.h"
>  #include "igt.h"
>  #include "igt_sysfs.h"
> +#include "xe/xe_eudebug.h"
>  #include "xe/xe_ioctl.h"
>  #include "xe/xe_query.h"
>  
> @@ -30,9 +31,29 @@
>  #define COLOR_C4 0xc4
>  
>  #define SHADER_CANARY 0x01010101
> +#define SIP_CANARY 0x02020202
>  
>  #define NSEC_PER_MSEC (1000 * 1000ull)
>  
> +#define SHADER_BREAKPOINT 0
> +#define SHADER_WRITE 1
> +#define SHADER_WAIT 2
> +#define SHADER_INV_INSTR_DISABLED 3
> +#define SHADER_INV_INSTR_THREAD_ENABLED 4
> +#define SHADER_INV_INSTR_WALKER_ENABLED 5
> +#define SHADER_HANG 6
> +#define SIP_WRITE 7
> +#define SIP_NULL 8
> +#define SIP_WAIT 9
> +#define SIP_HEAVY 10
> +#define SIP_INV_INSTR 11
> +
> +#define F_SUBMIT_TWICE	(1 << 0)
> +
> +/* Control Register cr0.1 bits for exception handling */
> +#define ILLEGAL_OPCODE_ENABLE BIT(12)
> +#define ILLEGAL_OPCODE_STATUS BIT(28)
> +
>  static struct intel_buf *
>  create_fill_buf(int fd, int width, int height, uint8_t color)
>  {
> @@ -52,24 +73,109 @@ create_fill_buf(int fd, int width, int height, uint8_t color)
>  	return buf;
>  }
>  
> -static struct gpgpu_shader *get_shader(int fd)
> +static struct gpgpu_shader *get_shader(int fd, const int shadertype)
>  {
>  	static struct gpgpu_shader *shader;
> +	uint32_t bad;
>  
>  	shader = gpgpu_shader_create(fd);
> +	if (shadertype == SHADER_INV_INSTR_WALKER_ENABLED)
> +		shader->illegal_opcode_exception_enable = true;
> +
>  	gpgpu_shader__write_dword(shader, SHADER_CANARY, 0);
> +
> +	switch (shadertype) {
> +	case SHADER_HANG:
> +		gpgpu_shader__label(shader, 0);
> +		gpgpu_shader__nop(shader);
> +		gpgpu_shader__jump(shader, 0);
> +		break;
> +	case SHADER_WAIT:
> +		gpgpu_shader__wait(shader);
> +		break;
> +	case SHADER_WRITE:
> +		break;
> +	case SHADER_BREAKPOINT:
> +		gpgpu_shader__nop(shader);
> +		gpgpu_shader__breakpoint(shader);
> +		break;
> +	case SHADER_INV_INSTR_THREAD_ENABLED:
> +		gpgpu_shader__set_exception(shader, ILLEGAL_OPCODE_ENABLE);
> +		/* fall through */
> +	case SHADER_INV_INSTR_DISABLED:
> +	case SHADER_INV_INSTR_WALKER_ENABLED:
> +		bad = (shadertype == SHADER_INV_INSTR_DISABLED) ? ILLEGAL_OPCODE_ENABLE : 0;
> +		gpgpu_shader__write_on_exception(shader, 1, 0, ILLEGAL_OPCODE_ENABLE, bad);
> +		gpgpu_shader__nop(shader);
> +		gpgpu_shader__nop(shader);
> +		/* modify second nop, set only opcode bits[6:0] */
> +		shader->instr[shader->size/4 - 1][0] = 0x7f;
> +		/* SIP should clear exception bit */
> +		bad = ILLEGAL_OPCODE_STATUS;
> +		gpgpu_shader__write_on_exception(shader, 2, 0, ILLEGAL_OPCODE_STATUS, bad);
> +		break;
> +	}
> +
>  	gpgpu_shader__eot(shader);
>  	return shader;
>  }
>  
> -static uint32_t gpgpu_shader(int fd, struct intel_bb *ibb, unsigned int threads,
> -			     unsigned int width, unsigned int height)
> +static struct gpgpu_shader *get_sip(int fd, const int siptype,
> +				    const int shadertype, unsigned int y_offset)
> +{
> +	static struct gpgpu_shader *sip;
> +
> +	if (siptype == SIP_NULL)
> +		return NULL;
> +
> +	sip = gpgpu_shader_create(fd);
> +	gpgpu_shader__write_dword(sip, SIP_CANARY, y_offset);
> +
> +	switch (siptype) {
> +	case SIP_WRITE:
> +		break;
> +	case SIP_WAIT:
> +		gpgpu_shader__wait(sip);
> +		break;
> +	case SIP_HEAVY:
> +		/* Depending on the generation, the production SIP
> +		 * executes between 145 and 157 instructions.
> +		 * It performs at most 45 data port writes and 5 data port reads.
> +		 * Make sure our heavy SIP is at least twice as heavy as the production one.
> +		 */
> +		gpgpu_shader__loop_begin(sip, 0);
> +		gpgpu_shader__write_dword(sip, 0xdeadbeef, y_offset);
> +		gpgpu_shader__write_dword(sip, SIP_CANARY, y_offset);
> +		gpgpu_shader__loop_end(sip, 0, 45);
> +
> +		gpgpu_shader__loop_begin(sip, 1);
> +		gpgpu_shader__jump_neq(sip, 1, y_offset, SIP_CANARY);
> +		gpgpu_shader__loop_end(sip, 1, 10);
> +
> +		gpgpu_shader__wait(sip);
> +		break;
> +	case SIP_INV_INSTR:
> +		gpgpu_shader__write_on_exception(sip, 1, y_offset, ILLEGAL_OPCODE_STATUS, 0);
> +		break;
> +	}
> +
> +	gpgpu_shader__end_system_routine(sip, shadertype == SHADER_BREAKPOINT);
> +	return sip;
> +}
> +
> +static uint32_t gpgpu_shader(int fd, struct intel_bb *ibb, const int shadertype, const int siptype,
> +			     unsigned int threads, unsigned int width, unsigned int height)
>  {
>  	struct intel_buf *buf = create_fill_buf(fd, width, height, COLOR_C4);
> -	struct gpgpu_shader *shader = get_shader(fd);
> +	struct gpgpu_shader *sip = get_sip(fd, siptype, shadertype, height / 2);
> +	struct gpgpu_shader *shader = get_shader(fd, shadertype);
>  
> -	gpgpu_shader_exec(ibb, buf, 1, threads, shader, NULL, 0, 0);
> +	gpgpu_shader_exec(ibb, buf, 1, threads, shader, sip, 0, 0);
> +
> +	if (sip)
> +		gpgpu_shader_destroy(sip);
>  	gpgpu_shader_destroy(shader);
> +
>  	return buf->handle;
>  }
>  
> @@ -83,11 +189,11 @@ static void check_fill_buf(uint8_t *ptr, const int width, const int x,
>  		     color, val, x, y);
>  }
>  
> -static void check_buf(int fd, uint32_t handle, int width, int height,
> -		      uint8_t poison_c)
> +static int check_buf(int fd, uint32_t handle, int width, int height,
> +		      int shadertype, int siptype, uint8_t poison_c)
>  {
>  	unsigned int sz = ALIGN(width * height, 4096);
> -	int thread_count = 0;
> +	int thread_count = 0, sip_count = 0;
>  	uint32_t *ptr;
>  	int i, j;
>  
> @@ -105,9 +211,87 @@ static void check_buf(int fd, uint32_t handle, int width, int height,
>  		i = 0;
>  	}
>  
> +	for (i = 0, j = height / 2; j < height; ++j) {
> +		if (ptr[j * width / 4] == SIP_CANARY) {
> +			++sip_count;
> +			i = 4;
> +		}
> +
> +		for (; i < width; i++)
> +			check_fill_buf((uint8_t *)ptr, width, i, j, poison_c);
> +
> +		i = 0;
> +	}
> +
>  	igt_assert(thread_count);
> +	if (shadertype == SHADER_INV_INSTR_DISABLED)
> +		igt_assert(!sip_count);
> +	else if ((siptype != SIP_NULL && xe_eudebug_debugger_available(fd)) ||
> +		 (siptype == SIP_INV_INSTR && shadertype != SHADER_INV_INSTR_DISABLED))
> +		igt_assert_f(thread_count == sip_count,
> +			     "Thread and SIP count mismatch, %d != %d\n",
> +			     thread_count, sip_count);
> +	else
> +		igt_assert(sip_count == 0);
>  
>  	munmap(ptr, sz);
> +
> +	return sip_count;
> +}
> +
> +#define USERCOREDUMP_FORMAT "usercoredumps/%d/%d"
> +static char *get_latest_usercoredump(int dir)
> +{
> +	char tmp[256];
> +	int i = 1;
> +
> +	do {
> +		snprintf(tmp, sizeof(tmp), USERCOREDUMP_FORMAT, getpid(), i++);
> +	} while (igt_sysfs_has_attr(dir, tmp));
> +
> +	snprintf(tmp, sizeof(tmp), USERCOREDUMP_FORMAT, getpid(), i-2);
> +	return igt_sysfs_get(dir, tmp);
> +}
> +
> +static void check_usercoredump(int fd, int sip, int dispatched)
> +{
> +	int dir = igt_debugfs_dir(fd);
> +	char *usercoredump, *str;
> +	unsigned int before, after;
> +	char match[256];
> +
> +	if (sip != SIP_WAIT && sip != SIP_HEAVY)
> +		return;
> +
> +	/* XXX reinstate when offline coredumps are implemented */
> +#ifndef XXX_ATTENTIONS_THROUGH_COREDUMPS
> +	return;
> +#endif
> +	usercoredump = get_latest_usercoredump(dir);
> +	igt_assert(usercoredump);
> +	igt_debug("%s\n", usercoredump);
> +
> +	snprintf(match, sizeof(match), "PID: %d", getpid());
> +	str = strstr(usercoredump, match);
> +	igt_assert(str);
> +
> +	snprintf(match, sizeof(match), "Comm: %s", igt_test_name());
> +	str = strstr(str, match);
> +	igt_assert(str);
> +
> +	str = strstr(str, "TD_ATT");
> +	igt_assert(str);
> +	igt_assert_eq(sscanf(str, "TD_ATT before (%d):", &before), 1);
> +	str = strstr(str + 1, "TD_ATT");
> +	igt_assert_eq(sscanf(str, "TD_ATT after (%d):", &after), 1);
> +
> +	igt_info("attentions %d before, %d after\n", before, after);
> +
> +	igt_assert_eq(before, dispatched);
> +	igt_assert_eq(after, dispatched);
> +
> +	free(usercoredump);
> +	close(dir);
>  }
>  
>  static uint64_t
> @@ -128,17 +312,58 @@ xe_sysfs_get_job_timeout_ms(int fd, struct drm_xe_engine_class_instance *eci)
>   * Description: check basic shader with write operation
>   * Run type: BAT
>   *
> + * SUBTEST: sanity-after-timeout
> + * Description: Check basic shader execution after a job timeout
> + *
> + * SUBTEST: wait-writesip-nodebug
> + * Description: Verify that we don't enter SIP after a wait with debugging disabled.
> + *
> + * SUBTEST: invalidinstr-disabled
> + * Description: Verify that we don't enter SIP after running into an invalid
> + *		instruction when exception is not enabled.
> + *
> + * SUBTEST: invalidinstr-thread-enabled
> + * Description: Verify that we enter SIP after running into an invalid instruction
> + *              when exception is enabled from thread.
> + *
> + * SUBTEST: invalidinstr-walker-enabled
> + * Description: Verify that we enter SIP after running into an invalid instruction
> + *              when exception is enabled from COMPUTE_WALKER.
> + *
> + * SUBTEST: breakpoint-writesip-nodebug
> + * Description: Verify that we don't enter SIP after hitting a breakpoint in the
> + *		shader when debugging is disabled.
> + *
> + * SUBTEST: breakpoint-writesip
> + * Description: Test that we enter SIP after hitting a breakpoint in the shader.
> + *
> + * SUBTEST: breakpoint-writesip-twice
> + * Description: Test twice that we enter SIP after hitting a breakpoint in the shader.
> + *
> + * SUBTEST: breakpoint-waitsip
> + * Description: Test that we reset after seeing the attention without the debugger.
> + *
> + * SUBTEST: breakpoint-waitsip-heavy
> + * Description:
> + *	Test that we reset after seeing the attention from a heavy SIP, which
> + *	resembles the production one, without the debugger.
>   */
> -static void test_sip(struct drm_xe_engine_class_instance *eci, uint32_t flags)
> +static void test_sip(int shader, int sip, struct drm_xe_engine_class_instance *eci, uint32_t flags)
>  {
>  	unsigned int threads = 512;
>  	unsigned int height = max_t(threads, HEIGHT, threads * 2);
> -	uint32_t exec_queue_id, handle, vm_id;
>  	unsigned int width = WIDTH;
> +	struct drm_xe_ext_set_property ext = {
> +		.base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY,
> +		.property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_EUDEBUG,
> +		.value = DRM_XE_EXEC_QUEUE_EUDEBUG_FLAG_ENABLE,
> +	};
>  	struct timespec ts = { };
> -	uint64_t timeout;
> +	int done = 0;
> +	uint32_t exec_queue_id, handle, vm_id;
>  	struct intel_bb *ibb;
> -	int fd;
> +	uint64_t timeout;
> +	int dispatched, fd;
>  
>  	igt_debug("Using %s\n", xe_engine_class_string(eci->engine_class));
>  
> @@ -152,19 +377,24 @@ static void test_sip(struct drm_xe_engine_class_instance *eci, uint32_t flags)
>  	timeout *= NSEC_PER_MSEC;
>  	timeout *= igt_run_in_simulation() ? 10 : 1;
>  
> -	exec_queue_id = xe_exec_queue_create(fd, vm_id, eci, 0);
> -	ibb = intel_bb_create_with_context(fd, exec_queue_id, vm_id, NULL, 4096);
> +	exec_queue_id = xe_exec_queue_create(fd, vm_id, eci,
> +					     xe_eudebug_debugger_available(fd) ?
> +					     to_user_pointer(&ext) : 0);
> +	do {
> +		ibb = intel_bb_create_with_context(fd, exec_queue_id, vm_id, NULL, 4096);
>  
> -	igt_nsec_elapsed(&ts);
> -	handle = gpgpu_shader(fd, ibb, threads, width, height);
> +		igt_nsec_elapsed(&ts);
> +		handle = gpgpu_shader(fd, ibb, shader, sip, threads, width, height);
>  
> -	intel_bb_sync(ibb);
> -	igt_assert_lt_u64(igt_nsec_elapsed(&ts), timeout);
> +		intel_bb_sync(ibb);
> +		igt_assert_lt_u64(igt_nsec_elapsed(&ts), timeout);
>  
> -	check_buf(fd, handle, width, height, COLOR_C4);
> +		dispatched = check_buf(fd, handle, width, height, shader, sip, COLOR_C4);
> +		check_usercoredump(fd, sip, dispatched);
>  
> -	gem_close(fd, handle);
> -	intel_bb_destroy(ibb);
> +		gem_close(fd, handle);
> +		intel_bb_destroy(ibb);
> +	} while (!done++ && (flags & F_SUBMIT_TWICE));
>  
>  	xe_exec_queue_destroy(fd, exec_queue_id);
>  	xe_vm_destroy(fd, vm_id);
> @@ -183,13 +413,71 @@ static void test_sip(struct drm_xe_engine_class_instance *eci, uint32_t flags)
>  igt_main
>  {
>  	struct drm_xe_engine_class_instance *eci;
> +	bool was_enabled;
>  	int fd;
>  
>  	igt_fixture
>  		fd = drm_open_driver(DRIVER_XE);
>  
>  	test_render_and_compute("sanity", fd, eci)
> -		test_sip(eci, 0);
> +		test_sip(SHADER_WRITE, SIP_NULL, eci, 0);
> +
> +	test_render_and_compute("sanity-after-timeout", fd, eci) {
> +		test_sip(SHADER_HANG, SIP_NULL, eci, 0);
> +
> +		xe_for_each_engine(fd, eci)
> +			if (eci->engine_class == DRM_XE_ENGINE_CLASS_RENDER ||
> +			    eci->engine_class == DRM_XE_ENGINE_CLASS_COMPUTE)
> +				test_sip(SHADER_WRITE, SIP_NULL, eci, 0);
> +	}
> +
> +	/* Debugger disabled (TD_CTL not set) */
> +	igt_subtest_group {
> +		igt_fixture {
> +			was_enabled = xe_eudebug_enable(fd, false);
> +			igt_require(!xe_eudebug_debugger_available(fd));
> +		}
> +
> +		test_render_and_compute("wait-writesip-nodebug", fd, eci)
> +			test_sip(SHADER_WAIT, SIP_WRITE, eci, 0);
> +
> +		test_render_and_compute("invalidinstr-disabled", fd, eci)
> +			test_sip(SHADER_INV_INSTR_DISABLED, SIP_INV_INSTR, eci, 0);
> +
> +		test_render_and_compute("invalidinstr-thread-enabled", fd, eci)
> +			test_sip(SHADER_INV_INSTR_THREAD_ENABLED, SIP_INV_INSTR, eci, 0);
> +
> +		test_render_and_compute("invalidinstr-walker-enabled", fd, eci)
> +			test_sip(SHADER_INV_INSTR_WALKER_ENABLED, SIP_INV_INSTR, eci, 0);
> +
> +		test_render_and_compute("breakpoint-writesip-nodebug", fd, eci)
> +			test_sip(SHADER_BREAKPOINT, SIP_WRITE, eci, 0);
> +
> +		igt_fixture
> +			xe_eudebug_enable(fd, was_enabled);
> +	}
> +
> +	/* Debugger enabled (TD_CTL set) */
> +	igt_subtest_group {
> +		igt_fixture {
> +			was_enabled = xe_eudebug_enable(fd, true);

Shouldn't this require the debugger to be available?
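Something along these lines would make the requirement explicit (untested
sketch, mirroring the negated igt_require() in the disabled group above):

```diff
 	igt_subtest_group {
 		igt_fixture {
 			was_enabled = xe_eudebug_enable(fd, true);
+			igt_require(xe_eudebug_debugger_available(fd));
 		}
```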

--
Zbigniew

> +		}
> +
> +		test_render_and_compute("breakpoint-writesip", fd, eci)
> +			test_sip(SHADER_BREAKPOINT, SIP_WRITE, eci, 0);
> +
> +		test_render_and_compute("breakpoint-writesip-twice", fd, eci)
> +			test_sip(SHADER_BREAKPOINT, SIP_WRITE, eci, F_SUBMIT_TWICE);
> +
> +		test_render_and_compute("breakpoint-waitsip", fd, eci)
> +			test_sip(SHADER_BREAKPOINT, SIP_WAIT, eci, 0);
> +
> +		test_render_and_compute("breakpoint-waitsip-heavy", fd, eci)
> +			test_sip(SHADER_BREAKPOINT, SIP_HEAVY, eci, 0);
> +
> +		igt_fixture
> +			xe_eudebug_enable(fd, was_enabled);
> +	}
>  
>  	igt_fixture
>  		drm_close_driver(fd);
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework
  2024-08-21  9:31         ` Zbigniew Kempczyński
@ 2024-08-22 15:39           ` Kamil Konieczny
  2024-08-23  7:58             ` Manszewski, Christoph
  0 siblings, 1 reply; 41+ messages in thread
From: Kamil Konieczny @ 2024-08-22 15:39 UTC (permalink / raw)
  To: igt-dev
  Cc: Zbigniew Kempczyński, Manszewski, Christoph,
	Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun, Mika Kuoppala,
	Karolina Stolarek

Hi Zbigniew,
On 2024-08-21 at 11:31:40 +0200, Zbigniew Kempczyński wrote:
> On Tue, Aug 20, 2024 at 07:45:18PM +0200, Kamil Konieczny wrote:
> > Hi Manszewski,
> > On 2024-08-20 at 18:14:07 +0200, Manszewski, Christoph wrote:
> > > Hi Zbigniew,
> > > 
> > > On 20.08.2024 10:14, Zbigniew Kempczyński wrote:
> > > > On Fri, Aug 09, 2024 at 02:38:03PM +0200, Christoph Manszewski wrote:
> > > > > From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > > > > 
> > > > > Introduce a library which simplifies testing of the EU debug capability.
> > > > > The library provides event log helpers together with an asynchronous
> > > > > abstraction for the client process and the debugger itself.
> > > > > 
> > > > > xe_eudebug_client creates its own process with the user's work function,
> > > > > and provides mechanisms to synchronize the beginning of execution and
> > > > > event logging.
> > > > > 
> > > > > xe_eudebug_debugger allows attaching to the given process, provides an
> > > > > asynchronous thread for event reading, and introduces triggers -
> > > > > a callback mechanism invoked every time a subscribed event is read.
> > > > > 
> > > > > Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> > > > > Signed-off-by: Mika Kuoppala <mika.kuaoppala@linux.intel.com>
> > > > > Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
> > > > > Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
> > > > > Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
> > > > > Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
> > > > > ---
> > > > >   lib/meson.build     |    1 +
> > > > >   lib/xe/xe_eudebug.c | 2192 +++++++++++++++++++++++++++++++++++++++++++
> > > > >   lib/xe/xe_eudebug.h |  206 ++++
> > > > >   3 files changed, 2399 insertions(+)
> > > > >   create mode 100644 lib/xe/xe_eudebug.c
> > > > >   create mode 100644 lib/xe/xe_eudebug.h
> > > > > 
> > > > > diff --git a/lib/meson.build b/lib/meson.build
> > > > > index f711e60a7..969ca4101 100644
> > > > > --- a/lib/meson.build
> > > > > +++ b/lib/meson.build
> > > > > @@ -111,6 +111,7 @@ lib_sources = [
> > > > >   	'igt_msm.c',
> > > > >   	'igt_dsc.c',
> > > > >   	'xe/xe_gt.c',
> > > > > +	'xe/xe_eudebug.c',
> > > > >   	'xe/xe_ioctl.c',
> > > > >   	'xe/xe_mmio.c',
> > > > >   	'xe/xe_query.c',
> > > > 
> > > > As eudebug is quite a big feature, I think it should be separated and
> > > > hidden behind a feature flag (check meson_options.txt), let's say
> > > > 'xe_eudebug', which would be disabled by default. This way you can
> > > > develop it upstream even if the kernel side is not officially merged.
> > > > I'm pragmatic and I see no reason to block a not-yet-accepted feature,
> > > > especially as this would imo speed up development. A final step, when
> > > > the kernel change is accepted and merged, would be to sync with the
> > > > uapi and remove the local definitions.
> > > > 
> > > > I look forward to maintainers' comments on whether my approach is
> > > > acceptable.
> > > 
> > > I agree that it is a good idea. The only problem that arises is for
> > > 'xe_exec_sip'. We add a dependency on eudebug to this test - any ideas on
> > > how to approach this correctly? The only thing that comes to my mind is
> > > conditional compilation with 'ifdef' statements, but it doesn't look
> > > pretty.
> > 
> > What about adding skips in the added tests if the kernel does not support
> > eudebug?
> > 
> > This way you can have it without conditional compilation via ifdef/meson
> > and also have it compile-time tested (if CI supports test-with for Xe
> > kernels).
> 
> IMO simplest is to migrate code to xe_exec_sip_eudebug.c.
> 

What about a shorter name like xe_eudebug_sip.c?
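For illustration, a minimal sketch of what Zbigniew's meson option could look
like (the option name and wording here are hypothetical, not final):

```meson
# meson_options.txt
option('xe_eudebug',
       type : 'feature',
       value : 'disabled',
       description : 'Build eudebug library and tests (experimental kernel uapi)')
```

and then in lib/meson.build the library source would only be compiled when the
option is enabled:

```meson
if get_option('xe_eudebug').enabled()
	lib_sources += 'xe/xe_eudebug.c'
endif
```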

Regards,
Kamil

> --
> Zbigniew
> 
> 
> > 
> > Regards,
> > Kamil
> > 
> > > 
> > > Thanks,
> > > Christoph
> > > > 
> > > > --
> > > > Zbigniew
> > > > 
> > > > 
> > > > > diff --git a/lib/xe/xe_eudebug.c b/lib/xe/xe_eudebug.c
> > > > > new file mode 100644
> > > > > index 000000000..4eac87476
> > > > > --- /dev/null
> > > > > +++ b/lib/xe/xe_eudebug.c
> > > > > @@ -0,0 +1,2192 @@
> > > > > +// SPDX-License-Identifier: MIT
> > > > > +/*
> > > > > + * Copyright © 2023 Intel Corporation
> > > > > + */
> > > > > +
> > > > > +#include <fcntl.h>
> > > > > +#include <poll.h>
> > > > > +#include <signal.h>
> > > > > +#include <sys/select.h>
> > > > > +#include <sys/stat.h>
> > > > > +#include <sys/types.h>
> > > > > +#include <sys/wait.h>
> > > > > +
> > > > > +#include "igt.h"
> > > > > +#include "igt_sysfs.h"
> > > > > +#include "intel_pat.h"
> > > > > +#include "xe_eudebug.h"
> > > > > +#include "xe_ioctl.h"
> > > > > +
> > > > > +struct event_trigger {
> > > > > +	xe_eudebug_trigger_fn fn;
> > > > > +	int type;
> > > > > +	struct igt_list_head link;
> > > > > +};
> > > > > +
> > > > > +struct seqno_list_entry {
> > > > > +	struct igt_list_head link;
> > > > > +	uint64_t seqno;
> > > > > +};
> > > > > +
> > > > > +struct match_dto {
> > > > > +	struct drm_xe_eudebug_event *target;
> > > > > +	struct igt_list_head *seqno_list;
> > > > > +	uint64_t client_handle;
> > > > > +	uint32_t filter;
> > > > > +
> > > > > +	/* store latest 'EVENT_VM_BIND' seqno */
> > > > > +	uint64_t *bind_seqno;
> > > > > +	/* latest vm_bind_op seqno matching bind_seqno */
> > > > > +	uint64_t *bind_op_seqno;
> > > > > +};
> > > > > +
> > > > > +#define CLIENT_PID  1
> > > > > +#define CLIENT_RUN  2
> > > > > +#define CLIENT_FINI 3
> > > > > +#define CLIENT_STOP 4
> > > > > +#define CLIENT_STAGE 5
> > > > > +#define DEBUGGER_STAGE 6
> > > > > +
> > > > > +#define DEBUGGER_WORKER_INACTIVE  0
> > > > > +#define DEBUGGER_WORKER_ACTIVE  1
> > > > > +#define DEBUGGER_WORKER_QUITTING 2
> > > > > +
> > > > > +static const char *type_to_str(unsigned int type)
> > > > > +{
> > > > > +	switch (type) {
> > > > > +	case DRM_XE_EUDEBUG_EVENT_NONE:
> > > > > +		return "none";
> > > > > +	case DRM_XE_EUDEBUG_EVENT_READ:
> > > > > +		return "read";
> > > > > +	case DRM_XE_EUDEBUG_EVENT_OPEN:
> > > > > +		return "client";
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM:
> > > > > +		return "vm";
> > > > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE:
> > > > > +		return "exec_queue";
> > > > > +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION:
> > > > > +		return "attention";
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND:
> > > > > +		return "vm_bind";
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP:
> > > > > +		return "vm_bind_op";
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE:
> > > > > +		return "vm_bind_ufence";
> > > > > +	case DRM_XE_EUDEBUG_EVENT_METADATA:
> > > > > +		return "metadata";
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA:
> > > > > +		return "vm_bind_op_metadata";
> > > > > +	}
> > > > > +
> > > > > +	return "UNKNOWN";
> > > > > +}
> > > > > +
> > > > > +static const char *event_type_to_str(struct drm_xe_eudebug_event *e, char *buf)
> > > > > +{
> > > > > +	sprintf(buf, "%s(%d)", type_to_str(e->type), e->type);
> > > > > +
> > > > > +	return buf;
> > > > > +}
> > > > > +
> > > > > +static const char *flags_to_str(unsigned int flags)
> > > > > +{
> > > > > +	if (flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> > > > > +		if (flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK)
> > > > > +			return "create|ack";
> > > > > +		else
> > > > > +			return "create";
> > > > > +	}
> > > > > +	if (flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
> > > > > +		return "destroy";
> > > > > +
> > > > > +	if (flags & DRM_XE_EUDEBUG_EVENT_STATE_CHANGE)
> > > > > +		return "state-change";
> > > > > +
> > > > > +	igt_assert(!(flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK));
> > > > > +
> > > > > +	return "flags unknown";
> > > > > +}
> > > > > +
> > > > > +static const char *event_members_to_str(struct drm_xe_eudebug_event *e, char *b)
> > > > > +{
> > > > > +	switch (e->type) {
> > > > > +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> > > > > +		struct drm_xe_eudebug_event_client *ec = (struct drm_xe_eudebug_event_client *)e;
> > > > > +
> > > > > +		sprintf(b, "handle=%llu", ec->client_handle);
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM: {
> > > > > +		struct drm_xe_eudebug_event_vm *evm = (struct drm_xe_eudebug_event_vm *)e;
> > > > > +
> > > > > +		sprintf(b, "client_handle=%llu, handle=%llu",
> > > > > +			evm->client_handle, evm->vm_handle);
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> > > > > +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
> > > > > +
> > > > > +		sprintf(b, "client_handle=%llu, vm_handle=%llu, "
> > > > > +			   "exec_queue_handle=%llu, engine_class=%d, exec_queue_width=%d",
> > > > > +			ee->client_handle, ee->vm_handle,
> > > > > +			ee->exec_queue_handle, ee->engine_class, ee->width);
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION: {
> > > > > +		struct drm_xe_eudebug_event_eu_attention *ea = (void *)e;
> > > > > +
> > > > > +		sprintf(b, "client_handle=%llu, exec_queue_handle=%llu, "
> > > > > +			   "lrc_handle=%llu, bitmask_size=%d",
> > > > > +			ea->client_handle, ea->exec_queue_handle,
> > > > > +			ea->lrc_handle, ea->bitmask_size);
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
> > > > > +
> > > > > +		sprintf(b, "client_handle=%llu, vm_handle=%llu, flags=0x%x, num_binds=%u",
> > > > > +			evmb->client_handle, evmb->vm_handle, evmb->flags, evmb->num_binds);
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind_op *op = (void *)e;
> > > > > +
> > > > > +		sprintf(b, "vm_bind_ref_seqno=%lld, addr=%016llx, range=%llu num_extensions=%llu",
> > > > > +			op->vm_bind_ref_seqno, op->addr, op->range, op->num_extensions);
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind_ufence *f = (void *)e;
> > > > > +
> > > > > +		sprintf(b, "vm_bind_ref_seqno=%lld", f->vm_bind_ref_seqno);
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> > > > > +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
> > > > > +
> > > > > +		sprintf(b, "client_handle=%llu, metadata_handle=%llu, type=%llu, len=%llu",
> > > > > +			em->client_handle, em->metadata_handle, em->type, em->len);
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *op = (void *)e;
> > > > > +
> > > > > +		sprintf(b, "vm_bind_op_ref_seqno=%lld, metadata_handle=%llu, metadata_cookie=%llu",
> > > > > +			op->vm_bind_op_ref_seqno, op->metadata_handle, op->metadata_cookie);
> > > > > +		break;
> > > > > +	}
> > > > > +	default:
> > > > > +		strcpy(b, "<...>");
> > > > > +	}
> > > > > +
> > > > > +	return b;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_event_to_str:
> > > > > + * @e: pointer to event
> > > > > + * @buf: target to write string representation of @e
> > > > > + * @len: size of target buffer @buf
> > > > > + *
> > > > > + * Creates string representation for given event.
> > > > > + *
> > > > > + * Returns: the written input buffer pointed by @buf.
> > > > > + */
> > > > > +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len)
> > > > > +{
> > > > > +	char a[256];
> > > > > +	char b[256];
> > > > > +
> > > > > +	snprintf(buf, len, "(%llu) %15s:%s: %s",
> > > > > +		 e->seqno,
> > > > > +		 event_type_to_str(e, a),
> > > > > +		 flags_to_str(e->flags),
> > > > > +		 event_members_to_str(e, b));
> > > > > +
> > > > > +	return buf;
> > > > > +}
> > > > > +
> > > > > +static void catch_child_failure(void)
> > > > > +{
> > > > > +	pid_t pid;
> > > > > +	int status;
> > > > > +
> > > > > +	pid = waitpid(-1, &status, WNOHANG);
> > > > > +
> > > > > +	if (pid == 0 || pid == -1)
> > > > > +		return;
> > > > > +
> > > > > +	if (!WIFEXITED(status))
> > > > > +		return;
> > > > > +
> > > > > +	igt_assert_f(WEXITSTATUS(status) == 0, "Client failed!\n");
> > > > > +}
> > > > > +
> > > > > +static int safe_pipe_read(int pipe[2], void *buf, int nbytes, int timeout_ms)
> > > > > +{
> > > > > +	int ret;
> > > > > +	int t = 0;
> > > > > +	struct pollfd fd = {
> > > > > +		.fd = pipe[0],
> > > > > +		.events = POLLIN,
> > > > > +		.revents = 0
> > > > > +	};
> > > > > +
> > > > > +	/* When the child fails we may get stuck forever. Check whether
> > > > > +	 * the child process ended with an error.
> > > > > +	 */
> > > > > +	do {
> > > > > +		const int interval_ms = 1000;
> > > > > +
> > > > > +		ret = poll(&fd, 1, interval_ms);
> > > > > +
> > > > > +		if (!ret) {
> > > > > +			catch_child_failure();
> > > > > +			t += interval_ms;
> > > > > +		}
> > > > > +	} while (!ret && t < timeout_ms);
> > > > > +
> > > > > +	if (ret > 0)
> > > > > +		return read(pipe[0], buf, nbytes);
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +static uint64_t pipe_read(int pipe[2], int timeout_ms)
> > > > > +{
> > > > > +	uint64_t in;
> > > > > +	uint64_t ret;
> > > > > +
> > > > > +	ret = safe_pipe_read(pipe, &in, sizeof(in), timeout_ms);
> > > > > +	igt_assert(ret == sizeof(in));
> > > > > +
> > > > > +	return in;
> > > > > +}
> > > > > +
> > > > > +static void pipe_signal(int pipe[2], uint64_t token)
> > > > > +{
> > > > > +	igt_assert(write(pipe[1], &token, sizeof(token)) == sizeof(token));
> > > > > +}
> > > > > +
> > > > > +static void pipe_close(int pipe[2])
> > > > > +{
> > > > > +	if (pipe[0] != -1)
> > > > > +		close(pipe[0]);
> > > > > +
> > > > > +	if (pipe[1] != -1)
> > > > > +		close(pipe[1]);
> > > > > +}
> > > > > +
> > > > > +static uint64_t __wait_token(int p[2], const uint64_t token, int timeout_ms)
> > > > > +{
> > > > > +	uint64_t in;
> > > > > +
> > > > > +	in = pipe_read(p, timeout_ms);
> > > > > +
> > > > > +	igt_assert_eq(in, token);
> > > > > +
> > > > > +	return pipe_read(p, timeout_ms);
> > > > > +}
> > > > > +
> > > > > +static uint64_t client_wait_token(struct xe_eudebug_client *c,
> > > > > +				 const uint64_t token)
> > > > > +{
> > > > > +	return __wait_token(c->p_in, token, c->timeout_ms);
> > > > > +}
> > > > > +
> > > > > +static uint64_t wait_from_client(struct xe_eudebug_client *c,
> > > > > +				 const uint64_t token)
> > > > > +{
> > > > > +	return __wait_token(c->p_out, token, c->timeout_ms);
> > > > > +}
> > > > > +
> > > > > +static void token_signal(int p[2], const uint64_t token, const uint64_t value)
> > > > > +{
> > > > > +	pipe_signal(p, token);
> > > > > +	pipe_signal(p, value);
> > > > > +}
> > > > > +
> > > > > +static void client_signal(struct xe_eudebug_client *c,
> > > > > +			  const uint64_t token,
> > > > > +			  const uint64_t value)
> > > > > +{
> > > > > +	token_signal(c->p_out, token, value);
> > > > > +}
> > > > > +
> > > > > +static int __xe_eudebug_connect(int fd, pid_t pid, uint32_t flags, uint64_t events)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_connect param = {
> > > > > +		.pid = pid,
> > > > > +		.flags = flags,
> > > > > +	};
> > > > > +	int debugfd;
> > > > > +
> > > > > +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
> > > > > +
> > > > > +	if (debugfd < 0)
> > > > > +		return -errno;
> > > > > +
> > > > > +	return debugfd;
> > > > > +}
> > > > > +
> > > > > +static void event_log_write_to_fd(struct xe_eudebug_event_log *l, int fd)
> > > > > +{
> > > > > +	igt_assert_eq(write(fd, &l->head, sizeof(l->head)),
> > > > > +		      sizeof(l->head));
> > > > > +
> > > > > +	igt_assert_eq(write(fd, l->log, l->head), l->head);
> > > > > +}
> > > > > +
> > > > > +static void read_all(int fd, void *buf, size_t nbytes)
> > > > > +{
> > > > > +	ssize_t remaining_size = nbytes;
> > > > > +	ssize_t current_size = 0;
> > > > > +	ssize_t read_size = 0;
> > > > > +
> > > > > +	do {
> > > > > +		read_size = read(fd, buf + current_size, remaining_size);
> > > > > +		igt_assert_f(read_size >= 0, "read failed: %s\n", strerror(errno));
> > > > > +
> > > > > +		current_size += read_size;
> > > > > +		remaining_size -= read_size;
> > > > > +	} while (remaining_size > 0 && read_size > 0);
> > > > > +
> > > > > +	igt_assert_eq(current_size, nbytes);
> > > > > +}
> > > > > +
> > > > > +static void event_log_read_from_fd(struct xe_eudebug_event_log *l, int fd)
> > > > > +{
> > > > > +	read_all(fd, &l->head, sizeof(l->head));
> > > > > +	igt_assert_lt(l->head, l->max_size);
> > > > > +
> > > > > +	read_all(fd, l->log, l->head);
> > > > > +}
> > > > > +
> > > > > +typedef int (*cmp_fn_t)(struct drm_xe_eudebug_event *, void *);
> > > > > +
> > > > > +static struct drm_xe_eudebug_event *
> > > > > +event_cmp(struct xe_eudebug_event_log *l,
> > > > > +	  struct drm_xe_eudebug_event *current,
> > > > > +	  cmp_fn_t match,
> > > > > +	  void *data)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event *e = current;
> > > > > +
> > > > > +	xe_eudebug_for_each_event(e, l) {
> > > > > +		if (match(e, data))
> > > > > +			return e;
> > > > > +	}
> > > > > +
> > > > > +	return NULL;
> > > > > +}
> > > > > +
> > > > > +static int match_type_and_flags(struct drm_xe_eudebug_event *a, void *data)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event *b = data;
> > > > > +
> > > > > +	if (a->type == b->type &&
> > > > > +	    a->flags == b->flags)
> > > > > +		return 1;
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +static int match_fields(struct drm_xe_eudebug_event *a, void *data)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event *b = data;
> > > > > +	int ret = 0;
> > > > > +
> > > > > +	ret = match_type_and_flags(a, data);
> > > > > +	if (!ret)
> > > > > +		return ret;
> > > > > +
> > > > > +	ret = 0;
> > > > > +
> > > > > +	switch (a->type) {
> > > > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> > > > > +		struct drm_xe_eudebug_event_exec_queue *ae = (void *)a;
> > > > > +		struct drm_xe_eudebug_event_exec_queue *be = (void *)b;
> > > > > +
> > > > > +		if (ae->engine_class == be->engine_class && ae->width == be->width)
> > > > > +			ret = 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind *ea = (void *)a;
> > > > > +		struct drm_xe_eudebug_event_vm_bind *eb = (void *)b;
> > > > > +
> > > > > +		if (ea->num_binds == eb->num_binds)
> > > > > +			ret = 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind_op *ea = (void *)a;
> > > > > +		struct drm_xe_eudebug_event_vm_bind_op *eb = (void *)b;
> > > > > +
> > > > > +		if (ea->addr == eb->addr && ea->range == eb->range &&
> > > > > +		    ea->num_extensions == eb->num_extensions)
> > > > > +			ret = 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *ea = (void *)a;
> > > > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eb = (void *)b;
> > > > > +
> > > > > +		if (ea->metadata_handle == eb->metadata_handle &&
> > > > > +		    ea->metadata_cookie == eb->metadata_cookie)
> > > > > +			ret = 1;
> > > > > +		break;
> > > > > +	}
> > > > > +
> > > > > +	default:
> > > > > +		ret = 1;
> > > > > +		break;
> > > > > +	}
> > > > > +
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +static int match_client_handle(struct drm_xe_eudebug_event *e, void *data)
> > > > > +{
> > > > > +	struct match_dto *md = (void *)data;
> > > > > +	uint64_t *bind_seqno = md->bind_seqno;
> > > > > +	uint64_t *bind_op_seqno = md->bind_op_seqno;
> > > > > +	uint64_t h = md->client_handle;
> > > > > +
> > > > > +	if (XE_EUDEBUG_EVENT_IS_FILTERED(e->type, md->filter))
> > > > > +		return 0;
> > > > > +
> > > > > +	switch (e->type) {
> > > > > +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> > > > > +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
> > > > > +
> > > > > +		if (client->client_handle == h)
> > > > > +			return 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM: {
> > > > > +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
> > > > > +
> > > > > +		if (vm->client_handle == h)
> > > > > +			return 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> > > > > +		struct drm_xe_eudebug_event_exec_queue *ee = (struct drm_xe_eudebug_event_exec_queue *)e;
> > > > > +
> > > > > +		if (ee->client_handle == h)
> > > > > +			return 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind *evmb = (struct drm_xe_eudebug_event_vm_bind *)e;
> > > > > +
> > > > > +		if (evmb->client_handle == h) {
> > > > > +			*bind_seqno = evmb->base.seqno;
> > > > > +			return 1;
> > > > > +		}
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind_op *eo = (struct drm_xe_eudebug_event_vm_bind_op *)e;
> > > > > +
> > > > > +		if (eo->vm_bind_ref_seqno == *bind_seqno) {
> > > > > +			*bind_op_seqno = eo->base.seqno;
> > > > > +			return 1;
> > > > > +		}
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind_ufence *ef = (struct drm_xe_eudebug_event_vm_bind_ufence *)e;
> > > > > +
> > > > > +		if (ef->vm_bind_ref_seqno == *bind_seqno)
> > > > > +			return 1;
> > > > > +
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> > > > > +		struct drm_xe_eudebug_event_metadata *em = (struct drm_xe_eudebug_event_metadata *)e;
> > > > > +
> > > > > +		if (em->client_handle == h)
> > > > > +			return 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eo = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)e;
> > > > > +
> > > > > +		if (eo->vm_bind_op_ref_seqno == *bind_op_seqno)
> > > > > +			return 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	default:
> > > > > +		break;
> > > > > +	}
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +static int match_opposite_resource(struct drm_xe_eudebug_event *e, void *data)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event *d = (void *)data;
> > > > > +	int ret;
> > > > > +
> > > > > +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
> > > > > +	d->flags &= ~(DRM_XE_EUDEBUG_EVENT_NEED_ACK);
> > > > > +	ret = match_type_and_flags(e, data);
> > > > > +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
> > > > > +
> > > > > +	if (!ret)
> > > > > +		return 0;
> > > > > +
> > > > > +	switch (e->type) {
> > > > > +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
> > > > > +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
> > > > > +		struct drm_xe_eudebug_event_client *filter = (struct drm_xe_eudebug_event_client *)data;
> > > > > +
> > > > > +		if (client->client_handle == filter->client_handle)
> > > > > +			return 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM: {
> > > > > +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
> > > > > +		struct drm_xe_eudebug_event_vm *filter = (struct drm_xe_eudebug_event_vm *)data;
> > > > > +
> > > > > +		if (vm->vm_handle == filter->vm_handle)
> > > > > +			return 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
> > > > > +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
> > > > > +		struct drm_xe_eudebug_event_exec_queue *filter = (struct drm_xe_eudebug_event_exec_queue *)data;
> > > > > +
> > > > > +		if (ee->exec_queue_handle == filter->exec_queue_handle)
> > > > > +			return 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
> > > > > +		struct drm_xe_eudebug_event_vm_bind *filter = (struct drm_xe_eudebug_event_vm_bind *)data;
> > > > > +
> > > > > +		if (evmb->vm_handle == filter->vm_handle &&
> > > > > +		    evmb->num_binds == filter->num_binds)
> > > > > +			return 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind_op *avmb = (void *)e;
> > > > > +		struct drm_xe_eudebug_event_vm_bind_op *filter = (struct drm_xe_eudebug_event_vm_bind_op *)data;
> > > > > +
> > > > > +		if (avmb->addr == filter->addr &&
> > > > > +		    avmb->range == filter->range)
> > > > > +			return 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
> > > > > +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
> > > > > +		struct drm_xe_eudebug_event_metadata *filter = (struct drm_xe_eudebug_event_metadata *)data;
> > > > > +
> > > > > +		if (em->metadata_handle == filter->metadata_handle)
> > > > > +			return 1;
> > > > > +		break;
> > > > > +	}
> > > > > +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
> > > > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *avmb = (void *)e;
> > > > > +		struct drm_xe_eudebug_event_vm_bind_op_metadata *filter = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)data;
> > > > > +
> > > > > +		if (avmb->metadata_handle == filter->metadata_handle &&
> > > > > +		    avmb->metadata_cookie == filter->metadata_cookie)
> > > > > +			return 1;
> > > > > +		break;
> > > > > +	}
> > > > > +
> > > > > +	default:
> > > > > +		break;
> > > > > +	}
> > > > > +	return 0;
> > > > > +}
> > > > > +
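The flag dance at the top of match_opposite_resource() relies on a well-formed event having exactly one of the CREATE/DESTROY bits set, so XOR-ing with both bits flips it to the opposite one while NEED_ACK is dropped before comparing. A minimal standalone sketch of that trick (flag values here are illustrative, not the real uapi ones):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag bits; the real values live in the xe_drm uapi header. */
#define EV_CREATE	(1u << 0)
#define EV_DESTROY	(1u << 1)
#define EV_NEED_ACK	(1u << 2)

/*
 * Turn a CREATE event's flags into the matching DESTROY flags (and vice
 * versa), dropping NEED_ACK, as the matcher does before comparing.
 */
static uint32_t opposite_flags(uint32_t flags)
{
	flags ^= EV_CREATE | EV_DESTROY;
	flags &= ~EV_NEED_ACK;

	return flags;
}
```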
> > > > > +static int match_full(struct drm_xe_eudebug_event *e, void *data)
> > > > > +{
> > > > > +	struct seqno_list_entry *sl;
> > > > > +	struct match_dto *md = (void *)data;
> > > > > +	int ret = 0;
> > > > > +
> > > > > +	ret = match_client_handle(e, md);
> > > > > +	if (!ret)
> > > > > +		return 0;
> > > > > +
> > > > > +	ret = match_fields(e, md->target);
> > > > > +	if (!ret)
> > > > > +		return 0;
> > > > > +
> > > > > +	igt_list_for_each_entry(sl, md->seqno_list, link) {
> > > > > +		if (sl->seqno == e->seqno)
> > > > > +			return 0;
> > > > > +	}
> > > > > +
> > > > > +	return 1;
> > > > > +}
> > > > > +
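match_full() rejects candidates whose seqno is already on the matched list, so each target pairs with a distinct event even when several events have identical fields. A standalone sketch of that dedup step, with illustrative types and names (ev, match_next) in place of the IGT structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct ev {
	uint64_t seqno;
	int type;
};

/* Return true if @seqno was already consumed by an earlier match. */
static bool already_matched(const uint64_t *seen, size_t nseen, uint64_t seqno)
{
	for (size_t i = 0; i < nseen; i++)
		if (seen[i] == seqno)
			return true;

	return false;
}

/* Find the first event of @type whose seqno is not yet in @seen. */
static const struct ev *match_next(const struct ev *log, size_t n, int type,
				   const uint64_t *seen, size_t nseen)
{
	for (size_t i = 0; i < n; i++)
		if (log[i].type == type &&
		    !already_matched(seen, nseen, log[i].seqno))
			return &log[i];

	return NULL;
}
```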
> > > > > +static struct drm_xe_eudebug_event *
> > > > > +event_type_match(struct xe_eudebug_event_log *l,
> > > > > +		 struct drm_xe_eudebug_event *target,
> > > > > +		 struct drm_xe_eudebug_event *current)
> > > > > +{
> > > > > +	return event_cmp(l, current, match_type_and_flags, target);
> > > > > +}
> > > > > +
> > > > > +static struct drm_xe_eudebug_event *
> > > > > +client_match(struct xe_eudebug_event_log *l,
> > > > > +	     uint64_t client_handle,
> > > > > +	     struct drm_xe_eudebug_event *current,
> > > > > +	     uint32_t filter,
> > > > > +	     uint64_t *bind_seqno,
> > > > > +	     uint64_t *bind_op_seqno)
> > > > > +{
> > > > > +	struct match_dto md = {
> > > > > +		.client_handle = client_handle,
> > > > > +		.filter = filter,
> > > > > +		.bind_seqno = bind_seqno,
> > > > > +		.bind_op_seqno = bind_op_seqno,
> > > > > +	};
> > > > > +
> > > > > +	return event_cmp(l, current, match_client_handle, &md);
> > > > > +}
> > > > > +
> > > > > +static struct drm_xe_eudebug_event *
> > > > > +opposite_event_match(struct xe_eudebug_event_log *l,
> > > > > +		    struct drm_xe_eudebug_event *target,
> > > > > +		    struct drm_xe_eudebug_event *current)
> > > > > +{
> > > > > +	return event_cmp(l, current, match_opposite_resource, target);
> > > > > +}
> > > > > +
> > > > > +static struct drm_xe_eudebug_event *
> > > > > +event_match(struct xe_eudebug_event_log *l,
> > > > > +	    struct drm_xe_eudebug_event *target,
> > > > > +	    uint64_t client_handle,
> > > > > +	    struct igt_list_head *seqno_list,
> > > > > +	    uint64_t *bind_seqno,
> > > > > +	    uint64_t *bind_op_seqno)
> > > > > +{
> > > > > +	struct match_dto md = {
> > > > > +		.target = target,
> > > > > +		.client_handle = client_handle,
> > > > > +		.seqno_list = seqno_list,
> > > > > +		.bind_seqno = bind_seqno,
> > > > > +		.bind_op_seqno = bind_op_seqno,
> > > > > +	};
> > > > > +
> > > > > +	return event_cmp(l, NULL, match_full, &md);
> > > > > +}
> > > > > +
> > > > > +static void compare_client(struct xe_eudebug_event_log *c, struct drm_xe_eudebug_event *_ce,
> > > > > +			   struct xe_eudebug_event_log *d, struct drm_xe_eudebug_event *_de,
> > > > > +			   uint32_t filter)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event_client *ce = (void *)_ce;
> > > > > +	struct drm_xe_eudebug_event_client *de = (void *)_de;
> > > > > +	uint64_t cbs = 0, dbs = 0, cbso = 0, dbso = 0;
> > > > > +
> > > > > +	struct igt_list_head matched_seqno_list;
> > > > > +	struct drm_xe_eudebug_event *hc, *hd;
> > > > > +	struct seqno_list_entry *entry, *tmp;
> > > > > +
> > > > > +	igt_assert(ce);
> > > > > +	igt_assert(de);
> > > > > +
> > > > > +	igt_debug("client: %llu -> %llu\n", ce->client_handle, de->client_handle);
> > > > > +
> > > > > +	hc = NULL;
> > > > > +	hd = NULL;
> > > > > +	IGT_INIT_LIST_HEAD(&matched_seqno_list);
> > > > > +
> > > > > +	do {
> > > > > +		hc = client_match(c, ce->client_handle, hc, filter, &cbs, &cbso);
> > > > > +		if (!hc)
> > > > > +			break;
> > > > > +
> > > > > +		hd = event_match(d, hc, de->client_handle, &matched_seqno_list, &dbs, &dbso);
> > > > > +
> > > > > +		igt_assert_f(hd, "%s (%llu): no matching event type %u found for client %llu\n",
> > > > > +			     c->name,
> > > > > +			     hc->seqno,
> > > > > +			     hc->type,
> > > > > +			     ce->client_handle);
> > > > > +
> > > > > +		igt_debug("comparing %s %llu vs %s %llu\n",
> > > > > +			  c->name, hc->seqno, d->name, hd->seqno);
> > > > > +
> > > > > +		/*
> > > > > +		 * Store the seqno of the event that was matched above,
> > > > > +		 * inside 'matched_seqno_list', to avoid it getting matched
> > > > > +		 * by subsequent 'event_match' calls.
> > > > > +		 */
> > > > > +		entry = malloc(sizeof(*entry));
> > > > > +		igt_assert(entry);
> > > > > +		entry->seqno = hd->seqno;
> > > > > +		igt_list_add(&entry->link, &matched_seqno_list);
> > > > > +	} while (hc);
> > > > > +
> > > > > +	igt_list_for_each_entry_safe(entry, tmp, &matched_seqno_list, link)
> > > > > +		free(entry);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_event_log_find_seqno:
> > > > > + * @l: event log pointer
> > > > > + * @seqno: seqno of event to be found
> > > > > + *
> > > > > + * Finds the event with given seqno in the event log.
> > > > > + *
> > > > > + * Returns: pointer to the event with the given seqno within @l, or NULL if
> > > > > + * @seqno is not present.
> > > > > + */
> > > > > +struct drm_xe_eudebug_event *
> > > > > +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event *e = NULL, *found = NULL;
> > > > > +
> > > > > +	igt_assert_neq(seqno, 0);
> > > > > +	/*
> > > > > +	 * Try to catch if seqno is corrupted and prevent too long tests,
> > > > > +	 * as our post processing of events is not optimized.
> > > > > +	 */
> > > > > +	igt_assert_lt(seqno, 10 * 1000 * 1000);
> > > > > +
> > > > > +	xe_eudebug_for_each_event(e, l) {
> > > > > +		if (e->seqno == seqno) {
> > > > > +			if (found) {
> > > > > +				igt_warn("Found multiple events with the same seqno %" PRIu64 "\n", seqno);
> > > > > +				xe_eudebug_event_log_print(l, false);
> > > > > +				igt_assert(!found);
> > > > > +			}
> > > > > +			found = e;
> > > > > +		}
> > > > > +	}
> > > > > +
> > > > > +	return found;
> > > > > +}
> > > > > +
> > > > > +static void event_log_sort(struct xe_eudebug_event_log *l)
> > > > > +{
> > > > > +	struct xe_eudebug_event_log *tmp;
> > > > > +	struct drm_xe_eudebug_event *e = NULL;
> > > > > +	uint64_t last_seqno = 0;
> > > > > +	uint64_t events = 0, added = 0;
> > > > > +	uint64_t i;
> > > > > +
> > > > > +	xe_eudebug_for_each_event(e, l) {
> > > > > +		if (e->seqno > last_seqno)
> > > > > +			last_seqno = e->seqno;
> > > > > +
> > > > > +		events++;
> > > > > +	}
> > > > > +
> > > > > +	tmp = xe_eudebug_event_log_create("tmp", l->max_size);
> > > > > +
> > > > > +	for (i = 1; i <= last_seqno; i++) {
> > > > > +		e = xe_eudebug_event_log_find_seqno(l, i);
> > > > > +		if (e) {
> > > > > +			xe_eudebug_event_log_write(tmp, e);
> > > > > +			added++;
> > > > > +		}
> > > > > +	}
> > > > > +
> > > > > +	igt_assert_eq(events, added);
> > > > > +	igt_assert_eq(tmp->head, l->head);
> > > > > +
> > > > > +	memcpy(l->log, tmp->log, tmp->head);
> > > > > +
> > > > > +	xe_eudebug_event_log_destroy(tmp);
> > > > > +}
> > > > > +
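event_log_sort() rebuilds the log by sweeping seqnos 1..max and appending each event it finds, an O(max * n) but simple stable ordering that also verifies nothing was lost. The same idea over bare integers, as a sketch (sort_by_seqno is an illustrative name):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Copy @in to @out ordered by value, by scanning each candidate seqno
 * from 1 up to the maximum. Returns the number of entries emitted,
 * which must equal @n if every seqno was found exactly once.
 */
static size_t sort_by_seqno(const uint64_t *in, size_t n, uint64_t *out)
{
	uint64_t max = 0;
	size_t added = 0;

	for (size_t i = 0; i < n; i++)
		if (in[i] > max)
			max = in[i];

	for (uint64_t s = 1; s <= max; s++)
		for (size_t i = 0; i < n; i++)
			if (in[i] == s)
				out[added++] = s;

	return added;
}
```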
> > > > > +/**
> > > > > + * xe_eudebug_connect:
> > > > > + * @fd: Xe file descriptor
> > > > > + * @pid: client PID
> > > > > + * @flags: connection flags
> > > > > + *
> > > > > + * Opens the xe eu debugger connection to the process described by @pid.
> > > > > + *
> > > > > + * Returns: debugger connection fd on success, -errno otherwise.
> > > > > + */
> > > > > +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags)
> > > > > +{
> > > > > +	int ret;
> > > > > +	uint64_t events = 0; /* events filtering not supported yet! */
> > > > > +
> > > > > +	ret = __xe_eudebug_connect(fd, pid, flags, events);
> > > > > +
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_event_log_create:
> > > > > + * @name: event log identifier
> > > > > + * @max_size: maximum size of created log
> > > > > + *
> > > > > + * Creates an EU debugger event log of at most @max_size bytes.
> > > > > + *
> > > > > + * Returns: pointer to just created log
> > > > > + */
> > > > > +#define MAX_EVENT_LOG_SIZE (32 * 1024 * 1024)
> > > > > +struct xe_eudebug_event_log *xe_eudebug_event_log_create(const char *name, unsigned int max_size)
> > > > > +{
> > > > > +	struct xe_eudebug_event_log *l;
> > > > > +
> > > > > +	l = calloc(1, sizeof(*l));
> > > > > +	igt_assert(l);
> > > > > +	l->log = calloc(1, max_size);
> > > > > +	igt_assert(l->log);
> > > > > +	l->max_size = max_size;
> > > > > +	strncpy(l->name, name, sizeof(l->name) - 1);
> > > > > +	pthread_mutex_init(&l->lock, NULL);
> > > > > +
> > > > > +	return l;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_event_log_destroy:
> > > > > + * @l: event log pointer
> > > > > + *
> > > > > + * Frees given event log @l.
> > > > > + */
> > > > > +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l)
> > > > > +{
> > > > > +	pthread_mutex_destroy(&l->lock);
> > > > > +	free(l->log);
> > > > > +	free(l);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_event_log_write:
> > > > > + * @l: event log pointer
> > > > > + * @e: event to be written to event log
> > > > > + *
> > > > > + * Writes event @e to the event log, thread-safe.
> > > > > + */
> > > > > +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e)
> > > > > +{
> > > > > +	igt_assert(e->seqno);
> > > > > +	/*
> > > > > +	 * Try to catch if seqno is corrupted and prevent too long tests,
> > > > > +	 * as our post processing of events is not optimized.
> > > > > +	 */
> > > > > +	igt_assert_lt(e->seqno, 10 * 1000 * 1000);
> > > > > +
> > > > > +	pthread_mutex_lock(&l->lock);
> > > > > +	igt_assert_lt(l->head + e->len, l->max_size);
> > > > > +	memcpy(l->log + l->head, e, e->len);
> > > > > +	l->head += e->len;
> > > > > +
> > > > > +#ifdef DEBUG_LOG
> > > > > +	igt_info("%s: wrote %u bytes to eventlog, free %u bytes\n",
> > > > > +		 l->name, e->len, l->max_size - l->head);
> > > > > +#endif
> > > > > +	pthread_mutex_unlock(&l->lock);
> > > > > +}
> > > > > +
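The write path is a classic lock, bounds-check, copy, advance-head append. A self-contained sketch of the same pattern (struct and helper names are illustrative, not the library's):

```c
#include <pthread.h>
#include <string.h>

/* Illustrative stand-in for xe_eudebug_event_log. */
struct ev_log {
	pthread_mutex_t lock;
	unsigned int head, max_size;
	char *buf;
};

/*
 * Append @len bytes under the lock; fail instead of overflowing the
 * backing buffer. Returns 0 on success, -1 when the log is full.
 */
static int ev_log_append(struct ev_log *l, const void *data, unsigned int len)
{
	int ret = -1;

	pthread_mutex_lock(&l->lock);
	if (l->head + len < l->max_size) {
		memcpy(l->buf + l->head, data, len);
		l->head += len;
		ret = 0;
	}
	pthread_mutex_unlock(&l->lock);

	return ret;
}
```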
> > > > > +/**
> > > > > + * xe_eudebug_event_log_print:
> > > > > + * @l: event log pointer
> > > > > + * @debug: when true, log with igt_debug instead of igt_info.
> > > > > + *
> > > > > + * Prints given event log.
> > > > > + */
> > > > > +void
> > > > > +xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event *e = NULL;
> > > > > +	int level = debug ? IGT_LOG_DEBUG : IGT_LOG_INFO;
> > > > > +	char str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
> > > > > +
> > > > > +	igt_log(IGT_LOG_DOMAIN, level,
> > > > > +		"event log '%s' (%u bytes):\n", l->name, l->head);
> > > > > +
> > > > > +	xe_eudebug_for_each_event(e, l) {
> > > > > +		xe_eudebug_event_to_str(e, str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
> > > > > +		igt_log(IGT_LOG_DOMAIN, level, "%s\n", str);
> > > > > +	}
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_event_log_compare:
> > > > > + * @a: event log pointer
> > > > > + * @b: event log pointer
> > > > > + * @filter: mask that represents events to be skipped during comparison, useful
> > > > > + * for events like 'VM_BIND' since they can be asymmetric. Note that
> > > > > + * 'DRM_XE_EUDEBUG_EVENT_OPEN' will always be matched.
> > > > > + *
> > > > > + * Compares and asserts event logs @a, @b if the event
> > > > > + * sequence matches.
> > > > > + */
> > > > > +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *a, struct xe_eudebug_event_log *b,
> > > > > +				  uint32_t filter)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event *ae = NULL;
> > > > > +	struct drm_xe_eudebug_event *be = NULL;
> > > > > +
> > > > > +	xe_eudebug_for_each_event(ae, a) {
> > > > > +		if (ae->type == DRM_XE_EUDEBUG_EVENT_OPEN &&
> > > > > +		    ae->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> > > > > +			be = event_type_match(b, ae, be);
> > > > > +
> > > > > +			compare_client(a, ae, b, be, filter);
> > > > > +			compare_client(b, be, a, ae, filter);
> > > > > +		}
> > > > > +	}
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_event_log_match_opposite:
> > > > > + * @l: event log pointer
> > > > > + * @filter: mask that represents events to be skipped during comparison, useful
> > > > > + * for events like 'VM_BIND' since they can be asymmetric
> > > > > + *
> > > > > + * Matches and asserts content of all opposite events (create vs destroy).
> > > > > + */
> > > > > +void
> > > > > +xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event *ce = NULL;
> > > > > +	struct drm_xe_eudebug_event *de = NULL;
> > > > > +
> > > > > +	xe_eudebug_for_each_event(ce, l) {
> > > > > +		if (ce->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
> > > > > +			uint8_t offset = sizeof(struct drm_xe_eudebug_event);
> > > > > +			int opposite_matching;
> > > > > +
> > > > > +			if (XE_EUDEBUG_EVENT_IS_FILTERED(ce->type, filter))
> > > > > +				continue;
> > > > > +
> > > > > +			/* No opposite matching for binds */
> > > > > +			if ((ce->type >= DRM_XE_EUDEBUG_EVENT_VM_BIND &&
> > > > > +			     ce->type <= DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE) ||
> > > > > +			    ce->type == DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA)
> > > > > +				continue;
> > > > > +
> > > > > +			de = opposite_event_match(l, ce, ce);
> > > > > +
> > > > > +			igt_assert_f(de, "no opposite event of type %u found\n", ce->type);
> > > > > +
> > > > > +			igt_assert_eq(ce->len, de->len);
> > > > > +			opposite_matching = memcmp((uint8_t *)de + offset,
> > > > > +						   (uint8_t *)ce + offset,
> > > > > +						   de->len - offset) == 0;
> > > > > +
> > > > > +			igt_assert_f(opposite_matching,
> > > > > +				     "%s: create|destroy event not "
> > > > > +				     "matching (%llu) vs (%llu)\n",
> > > > > +				     l->name, de->seqno, ce->seqno);
> > > > > +		}
> > > > > +	}
> > > > > +}
> > > > > +
> > > > > +static void debugger_run_triggers(struct xe_eudebug_debugger *d,
> > > > > +				  struct drm_xe_eudebug_event *e)
> > > > > +{
> > > > > +	struct event_trigger *t;
> > > > > +
> > > > > +	igt_list_for_each_entry(t, &d->triggers, link) {
> > > > > +		if (e->type == t->type)
> > > > > +			t->fn(d, e);
> > > > > +	}
> > > > > +}
> > > > > +
> > > > > +#define MAX_EVENT_SIZE (32 * 1024)
> > > > > +static int
> > > > > +xe_eudebug_read_event(int fd, struct drm_xe_eudebug_event *event)
> > > > > +{
> > > > > +	int ret;
> > > > > +
> > > > > +	event->type = DRM_XE_EUDEBUG_EVENT_READ;
> > > > > +	event->flags = 0;
> > > > > +	event->len = MAX_EVENT_SIZE;
> > > > > +
> > > > > +	ret = igt_ioctl(fd, DRM_XE_EUDEBUG_IOCTL_READ_EVENT, event);
> > > > > +	if (ret < 0)
> > > > > +		return -errno;
> > > > > +
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +static void *debugger_worker_loop(void *data)
> > > > > +{
> > > > > +	uint8_t buf[MAX_EVENT_SIZE];
> > > > > +	struct drm_xe_eudebug_event *e = (void *)buf;
> > > > > +	struct xe_eudebug_debugger *d = data;
> > > > > +	struct pollfd p = {
> > > > > +		.events = POLLIN,
> > > > > +		.revents = 0,
> > > > > +	};
> > > > > +	int timeout_ms = 100, ret;
> > > > > +
> > > > > +	igt_assert(d->master_fd >= 0);
> > > > > +
> > > > > +	do {
> > > > > +		p.fd = d->fd;
> > > > > +		ret = poll(&p, 1, timeout_ms);
> > > > > +
> > > > > +		if (ret == -1) {
> > > > > +			igt_info("poll failed with errno %d\n", errno);
> > > > > +			break;
> > > > > +		}
> > > > > +
> > > > > +		if (ret == 1 && (p.revents & POLLIN)) {
> > > > > +			int err = xe_eudebug_read_event(d->fd, e);
> > > > > +
> > > > > +			if (!err) {
> > > > > +				++d->event_count;
> > > > > +
> > > > > +				xe_eudebug_event_log_write(d->log, e);
> > > > > +				debugger_run_triggers(d, e);
> > > > > +			} else {
> > > > > +				igt_info("xe_eudebug_read_event returned %d\n", err);
> > > > > +			}
> > > > > +		}
> > > > > +	} while ((ret && READ_ONCE(d->worker_state) == DEBUGGER_WORKER_QUITTING) ||
> > > > > +		 READ_ONCE(d->worker_state) == DEBUGGER_WORKER_ACTIVE);
> > > > > +
> > > > > +	d->worker_state = DEBUGGER_WORKER_INACTIVE;
> > > > > +	return NULL;
> > > > > +}
> > > > > +
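The worker's core is a poll-with-timeout followed by a read when POLLIN fires, looping until told to quit. Reduced to a plain pipe, the same loop body looks like this (read_one is an illustrative name):

```c
#include <poll.h>
#include <stddef.h>
#include <unistd.h>

/*
 * Poll @fd once for up to @timeout_ms, read one message if ready.
 * Returns the byte count read, 0 on timeout, -1 on error; mirrors the
 * worker loop structure above, minus the quiescing states.
 */
static int read_one(int fd, char *buf, size_t len, int timeout_ms)
{
	struct pollfd p = {
		.fd = fd,
		.events = POLLIN,
	};
	int ret;

	ret = poll(&p, 1, timeout_ms);
	if (ret == 1 && (p.revents & POLLIN))
		return read(fd, buf, len);

	return ret;
}
```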
> > > > > +/**
> > > > > + * xe_eudebug_debugger_available:
> > > > > + * @fd: Xe file descriptor
> > > > > + *
> > > > > + * Returns: true if the debugger connection is available, false otherwise.
> > > > > + */
> > > > > +bool xe_eudebug_debugger_available(int fd)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_connect param = { .pid = getpid() };
> > > > > +	int debugfd;
> > > > > +
> > > > > +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
> > > > > +	if (debugfd >= 0)
> > > > > +		close(debugfd);
> > > > > +
> > > > > +	return debugfd >= 0;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_debugger_create:
> > > > > + * @master_fd: xe client used to open the debugger connection
> > > > > + * @flags: flags stored in the debugger structure, free for the caller
> > > > > + * to use, e.g. inside triggers.
> > > > > + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
> > > > > + * can be shared between client and debugger. Can be NULL.
> > > > > + *
> > > > > + * Returns: newly created xe_eudebug_debugger structure with its
> > > > > + * event log initialized. Note that to open the connection
> > > > > + * you need to call xe_eudebug_debugger_attach().
> > > > > + */
> > > > > +struct xe_eudebug_debugger *
> > > > > +xe_eudebug_debugger_create(int master_fd, uint64_t flags, void *data)
> > > > > +{
> > > > > +	struct xe_eudebug_debugger *d;
> > > > > +
> > > > > +	d = calloc(1, sizeof(*d));
> > > > > +	igt_assert(d);
> > > > > +	d->flags = flags;
> > > > > +	IGT_INIT_LIST_HEAD(&d->triggers);
> > > > > +	d->log = xe_eudebug_event_log_create("debugger", MAX_EVENT_LOG_SIZE);
> > > > > +	d->fd = -1;
> > > > > +	d->master_fd = master_fd;
> > > > > +	d->ptr = data;
> > > > > +
> > > > > +	return d;
> > > > > +}
> > > > > +
> > > > > +static void debugger_destroy_triggers(struct xe_eudebug_debugger *d)
> > > > > +{
> > > > > +	struct event_trigger *t, *tmp;
> > > > > +
> > > > > +	igt_list_for_each_entry_safe(t, tmp, &d->triggers, link)
> > > > > +		free(t);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_debugger_destroy:
> > > > > + * @d: pointer to the debugger
> > > > > + *
> > > > > + * Frees xe_eudebug_debugger structure pointed by @d. If the debugger
> > > > > + * connection was still opened it terminates it.
> > > > > + */
> > > > > +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d)
> > > > > +{
> > > > > +	if (d->worker_state)
> > > > > +		xe_eudebug_debugger_stop_worker(d, 1);
> > > > > +
> > > > > +	if (d->target_pid)
> > > > > +		xe_eudebug_debugger_dettach(d);
> > > > > +
> > > > > +	xe_eudebug_event_log_destroy(d->log);
> > > > > +	debugger_destroy_triggers(d);
> > > > > +	free(d);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_debugger_attach:
> > > > > + * @d: pointer to the debugger
> > > > > + * @c: pointer to the client
> > > > > + *
> > > > > + * Opens the xe eu debugger connection to the process described by @c (c->pid)
> > > > > + *
> > > > > + * Returns: 0 if the debugger was successfully attached, -errno otherwise.
> > > > > + */
> > > > > +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d,
> > > > > +			       struct xe_eudebug_client *c)
> > > > > +{
> > > > > +	int ret;
> > > > > +
> > > > > +	igt_assert_eq(d->fd, -1);
> > > > > +	igt_assert_neq(c->pid, 0);
> > > > > +	ret = xe_eudebug_connect(d->master_fd, c->pid, 0);
> > > > > +
> > > > > +	if (ret < 0)
> > > > > +		return ret;
> > > > > +
> > > > > +	d->fd = ret;
> > > > > +	d->target_pid = c->pid;
> > > > > +	d->p_client[0] = c->p_in[0];
> > > > > +	d->p_client[1] = c->p_in[1];
> > > > > +
> > > > > +	igt_debug("debugger connected to %lu\n", d->target_pid);
> > > > > +
> > > > > +	return 0;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_debugger_dettach:
> > > > > + * @d: pointer to the debugger
> > > > > + *
> > > > > + * Closes previously opened xe eu debugger connection. Asserts if
> > > > > + * the debugger has active session.
> > > > > + */
> > > > > +void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d)
> > > > > +{
> > > > > +	igt_assert(d->target_pid);
> > > > > +	close(d->fd);
> > > > > +	d->target_pid = 0;
> > > > > +	d->fd = -1;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_debugger_add_trigger:
> > > > > + * @d: pointer to the debugger
> > > > > + * @type: the type of the event which activates the trigger
> > > > > + * @fn: function to be called when event of @type was read by the debugger.
> > > > > + *
> > > > > + * Adds function @fn to the list of triggers activated when event of @type
> > > > > + * has been read by worker.
> > > > > + * Note: Triggers are activated by the worker.
> > > > > + */
> > > > > +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d,
> > > > > +				     int type, xe_eudebug_trigger_fn fn)
> > > > > +{
> > > > > +	struct event_trigger *t;
> > > > > +
> > > > > +	t = calloc(1, sizeof(*t));
> > > > > +	igt_assert(t);
> > > > > +	IGT_INIT_LIST_HEAD(&t->link);
> > > > > +	t->type = type;
> > > > > +	t->fn = fn;
> > > > > +
> > > > > +	igt_list_add_tail(&t->link, &d->triggers);
> > > > > +	igt_debug("added trigger %p\n", t);
> > > > > +}
> > > > > +
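debugger_run_triggers() walks the registered list and fires every callback whose type matches the incoming event, so several triggers may hook the same type. A sketch of that dispatch over a plain array (names are illustrative):

```c
#include <stddef.h>

typedef void (*trigger_fn)(int type, void *data);

struct trigger {
	int type;
	trigger_fn fn;
};

/* Fire every registered callback matching @type; return how many fired. */
static int run_triggers(const struct trigger *t, size_t n, int type,
			void *data)
{
	int fired = 0;

	for (size_t i = 0; i < n; i++)
		if (t[i].type == type) {
			t[i].fn(type, data);
			fired++;
		}

	return fired;
}

/* Example callback: counts invocations via @data. */
static void count_cb(int type, void *data)
{
	(void)type;
	++*(int *)data;
}
```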
> > > > > +/**
> > > > > + * xe_eudebug_debugger_start_worker:
> > > > > + * @d: pointer to the debugger
> > > > > + *
> > > > > + * Starts the debugger worker. The worker is responsible for reading all
> > > > > + * incoming events from the debugger, putting them into the debugger log
> > > > > + * and executing the matching event triggers. Note that using the
> > > > > + * debugger's event log while the worker is running is not safe.
> > > > > + */
> > > > > +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d)
> > > > > +{
> > > > > +	int ret;
> > > > > +
> > > > > +	d->worker_state = DEBUGGER_WORKER_ACTIVE;
> > > > > +	ret = pthread_create(&d->worker_thread, NULL, &debugger_worker_loop, d);
> > > > > +
> > > > > +	igt_assert_f(ret == 0, "Debugger worker thread creation failed!\n");
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_debugger_stop_worker:
> > > > > + * @d: pointer to the debugger
> > > > > + *
> > > > > + * Stops the debugger worker. Event log is sorted by seqno after closure.
> > > > > + */
> > > > > +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d,
> > > > > +				     int timeout_s)
> > > > > +{
> > > > > +	struct timespec t = {};
> > > > > +	int ret;
> > > > > +
> > > > > +	igt_assert(d->worker_state);
> > > > > +
> > > > > +	d->worker_state = DEBUGGER_WORKER_QUITTING; /* First time be polite. */
> > > > > +	igt_assert_eq(clock_gettime(CLOCK_REALTIME, &t), 0);
> > > > > +	t.tv_sec += timeout_s;
> > > > > +
> > > > > +	ret = pthread_timedjoin_np(d->worker_thread, NULL, &t);
> > > > > +
> > > > > +	if (ret == ETIMEDOUT) {
> > > > > +		d->worker_state = DEBUGGER_WORKER_INACTIVE;
> > > > > +		ret = pthread_join(d->worker_thread, NULL);
> > > > > +	}
> > > > > +
> > > > > +	igt_assert_f(ret == 0 || ret == ESRCH,
> > > > > +		     "pthread join failed with error %d!\n", ret);
> > > > > +
> > > > > +	event_log_sort(d->log);
> > > > > +}
> > > > > +
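The bounded join above uses the GNU extension pthread_timedjoin_np() with an absolute CLOCK_REALTIME deadline. A minimal Linux-only sketch of that pattern (helper names are illustrative):

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <stddef.h>
#include <time.h>

/*
 * Wait up to @timeout_s for @thread to exit. Returns 0 on a successful
 * join, ETIMEDOUT if the deadline passed, or another errno value.
 */
static int join_with_timeout(pthread_t thread, int timeout_s)
{
	struct timespec t = {};

	clock_gettime(CLOCK_REALTIME, &t);
	t.tv_sec += timeout_s;

	return pthread_timedjoin_np(thread, NULL, &t);
}

/* Trivial worker that exits immediately, for demonstration. */
static void *quick_worker(void *arg)
{
	(void)arg;
	return NULL;
}
```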
> > > > > +/**
> > > > > + * xe_eudebug_debugger_signal_stage:
> > > > > + * @d: pointer to the debugger
> > > > > + * @stage: stage to signal
> > > > > + *
> > > > > + * Signals the client waiting in xe_eudebug_client_wait_stage(),
> > > > > + * releasing it to proceed.
> > > > > + */
> > > > > +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage)
> > > > > +{
> > > > > +	token_signal(d->p_client, CLIENT_STAGE, stage);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_debugger_wait_stage:
> > > > > + * @s: pointer to xe_eudebug_session structure
> > > > > + * @stage: stage to wait on
> > > > > + *
> > > > > + * Pauses debugger until the client has signalled the corresponding stage with
> > > > > + * xe_eudebug_client_signal_stage. This is only for situations where the actual
> > > > > + * event flow is not enough to coordinate between client/debugger and extra sync
> > > > > + * mechanism is needed.
> > > > > + */
> > > > > +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage)
> > > > > +{
> > > > > +	u64 stage_in;
> > > > > +
> > > > > +	igt_debug("debugger xe client fd: %d pausing for stage %lu\n", s->d->master_fd, stage);
> > > > > +
> > > > > +	stage_in = wait_from_client(s->c, DEBUGGER_STAGE);
> > > > > +	igt_debug("debugger xe client fd: %d got stage %lu, expected %lu\n", s->d->master_fd,
> > > > > +		  stage_in, stage);
> > > > > +
> > > > > +	igt_assert_eq(stage_in, stage);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_create:
> > > > > + * @master_fd: xe client used to open the debugger connection
> > > > > + * @work: function that opens xe device and executes arbitrary workload
> > > > > + * @flags: flags stored in a client structure, can be used at will
> > > > > + * of the caller, i.e. to provide the @work function an additional switch.
> > > > > + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
> > > > > + * can be shared between client and debugger. Accessible via client->ptr.
> > > > > + * Can be NULL.
> > > > > + *
> > > > > + * Forks and creates the client process. @work won't be called until
> > > > > + * xe_eudebug_client_start is called.
> > > > > + *
> > > > > + * Returns: newly created xe_eudebug_client structure with its
> > > > > + * event log initialized.
> > > > > + */
> > > > > +struct xe_eudebug_client *xe_eudebug_client_create(int master_fd, xe_eudebug_client_work_fn work,
> > > > > +						   uint64_t flags, void *data)
> > > > > +{
> > > > > +	struct xe_eudebug_client *c;
> > > > > +
> > > > > +	c = calloc(1, sizeof(*c));
> > > > > +	igt_assert(c);
> > > > > +	c->flags = flags;
> > > > > +	igt_assert(!pipe(c->p_in));
> > > > > +	igt_assert(!pipe(c->p_out));
> > > > > +	c->seqno = 1;
> > > > > +	c->log = xe_eudebug_event_log_create("client", MAX_EVENT_LOG_SIZE);
> > > > > +	c->done = 0;
> > > > > +	c->ptr = data;
> > > > > +	c->master_fd = master_fd;
> > > > > +	c->timeout_ms = XE_EUDEBUG_DEFAULT_TIMEOUT_MS;
> > > > > +
> > > > > +	igt_fork(child, 1) {
> > > > > +		int mypid;
> > > > > +
> > > > > +		igt_assert_eq(c->pid, 0);
> > > > > +
> > > > > +		close(c->p_out[0]);
> > > > > +		c->p_out[0] = -1;
> > > > > +		close(c->p_in[1]);
> > > > > +		c->p_in[1] = -1;
> > > > > +
> > > > > +		mypid = getpid();
> > > > > +		client_signal(c, CLIENT_PID, mypid);
> > > > > +
> > > > > +		c->pid = client_wait_token(c, CLIENT_RUN);
> > > > > +		igt_assert_eq(c->pid, mypid);
> > > > > +		if (work)
> > > > > +			work(c);
> > > > > +
> > > > > +		client_signal(c, CLIENT_FINI, c->seqno);
> > > > > +
> > > > > +		event_log_write_to_fd(c->log, c->p_out[1]);
> > > > > +
> > > > > +		c->pid = client_wait_token(c, CLIENT_STOP);
> > > > > +		igt_assert_eq(c->pid, mypid);
> > > > > +	}
> > > > > +
> > > > > +	close(c->p_out[1]);
> > > > > +	c->p_out[1] = -1;
> > > > > +	close(c->p_in[0]);
> > > > > +	c->p_in[0] = -1;
> > > > > +
> > > > > +	c->pid = wait_from_client(c, CLIENT_PID);
> > > > > +
> > > > > +	igt_info("client running with pid %d\n", c->pid);
> > > > > +
> > > > > +	return c;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_stop:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + *
> > > > > + * Waits for the end of the client's work and exits the client process.
> > > > > + */
> > > > > +void xe_eudebug_client_stop(struct xe_eudebug_client *c)
> > > > > +{
> > > > > +	if (c->pid) {
> > > > > +		int waitstatus;
> > > > > +
> > > > > +		xe_eudebug_client_wait_done(c);
> > > > > +
> > > > > +		token_signal(c->p_in, CLIENT_STOP, c->pid);
> > > > > +		igt_assert_eq(waitpid(c->pid, &waitstatus, 0),
> > > > > +			      c->pid);
> > > > > +		c->pid = 0;
> > > > > +	}
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_destroy:
> > > > > + * @c: pointer to xe_eudebug_client structure to be freed
> > > > > + *
> > > > > + * Frees the @c client structure. Note that it calls xe_eudebug_client_stop if
> > > > > + * the client process has not terminated yet.
> > > > > + */
> > > > > +void xe_eudebug_client_destroy(struct xe_eudebug_client *c)
> > > > > +{
> > > > > +	xe_eudebug_client_stop(c);
> > > > > +	pipe_close(c->p_in);
> > > > > +	pipe_close(c->p_out);
> > > > > +	xe_eudebug_event_log_destroy(c->log);
> > > > > +	free(c);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_get_seqno:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + *
> > > > > + * Returns the current seqno value of the given client @c and increments it.
> > > > > + *
> > > > > + * Returns: seqno value prior to the increment
> > > > > + */
> > > > > +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c)
> > > > > +{
> > > > > +	return c->seqno++;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_start:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + *
> > > > > + * Starts execution of the client's work function within the client process.
> > > > > + */
> > > > > +void xe_eudebug_client_start(struct xe_eudebug_client *c)
> > > > > +{
> > > > > +	token_signal(c->p_in, CLIENT_RUN, c->pid);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_wait_done:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + *
> > > > > + * Waits for the client's work to end and updates the event log.
> > > > > + * Doesn't terminate the client process yet.
> > > > > + */
> > > > > +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c)
> > > > > +{
> > > > > +	if (!c->done) {
> > > > > +		c->done = 1;
> > > > > +		c->seqno = wait_from_client(c, CLIENT_FINI);
> > > > > +		event_log_read_from_fd(c->log, c->p_out[0]);
> > > > > +	}
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_signal_stage:
> > > > > + * @c: pointer to the client
> > > > > + * @stage: stage to signal
> > > > > + *
> > > > > + * Signals the debugger waiting in xe_eudebug_debugger_wait_stage(),
> > > > > + * releasing it to proceed.
> > > > > + */
> > > > > +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage)
> > > > > +{
> > > > > +	token_signal(c->p_out, DEBUGGER_STAGE, stage);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_wait_stage:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @stage: stage to wait on
> > > > > + *
> > > > > + * Pauses the client until the debugger has signalled the corresponding stage
> > > > > + * with xe_eudebug_debugger_signal_stage(). This is only for situations where
> > > > > + * the actual event flow is not enough to coordinate between client/debugger
> > > > > + * and an extra sync mechanism is needed.
> > > > > + */
> > > > > +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage)
> > > > > +{
> > > > > +	u64 stage_in;
> > > > > +
> > > > > +	if (c->done) {
> > > > > +		igt_warn("client: %d already done before %lu\n", c->pid, stage);
> > > > > +		return;
> > > > > +	}
> > > > > +
> > > > > +	igt_debug("client: %d pausing for stage %lu\n", c->pid, stage);
> > > > > +
> > > > > +	stage_in = client_wait_token(c, CLIENT_STAGE);
> > > > > +	igt_debug("client: %d stage %lu, expected %lu\n", c->pid, stage_in, stage);
> > > > > +
> > > > > +	igt_assert_eq(stage_in, stage);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_session_create:
> > > > > + * @fd: XE file descriptor
> > > > > + * @work: function passed to the xe_eudebug_client_create
> > > > > + * @flags: flags passed to client and debugger
> > > > > + * @test_private: test's data, allocated with MAP_SHARED | MAP_ANONYMOUS,
> > > > > + * passed to client and debugger. Can be NULL.
> > > > > + *
> > > > > + * Creates session together with client and debugger structures.
> > > > > + */
> > > > > +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
> > > > > +						     xe_eudebug_client_work_fn work,
> > > > > +						     unsigned int flags,
> > > > > +						     void *test_private)
> > > > > +{
> > > > > +	struct xe_eudebug_session *s;
> > > > > +
> > > > > +	s = calloc(1, sizeof(*s));
> > > > > +	igt_assert(s);
> > > > > +
> > > > > +	s->c = xe_eudebug_client_create(fd, work, flags, test_private);
> > > > > +	s->d = xe_eudebug_debugger_create(fd, flags, test_private);
> > > > > +	s->flags = flags;
> > > > > +
> > > > > +	return s;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_session_run:
> > > > > + * @s: pointer to xe_eudebug_session structure
> > > > > + *
> > > > > + * Attaches the debugger to the client process, starts the debugger's
> > > > > + * async event reader, starts the client and, once the client finishes,
> > > > > + * stops the debugger worker.
> > > > > + */
> > > > > +void xe_eudebug_session_run(struct xe_eudebug_session *s)
> > > > > +{
> > > > > +	struct xe_eudebug_debugger *debugger = s->d;
> > > > > +	struct xe_eudebug_client *client = s->c;
> > > > > +
> > > > > +	igt_assert_eq(xe_eudebug_debugger_attach(debugger, client), 0);
> > > > > +
> > > > > +	xe_eudebug_debugger_start_worker(debugger);
> > > > > +
> > > > > +	xe_eudebug_client_start(client);
> > > > > +	xe_eudebug_client_wait_done(client);
> > > > > +
> > > > > +	xe_eudebug_debugger_stop_worker(debugger, 1);
> > > > > +
> > > > > +	xe_eudebug_event_log_print(debugger->log, true);
> > > > > +	xe_eudebug_event_log_print(client->log, true);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_session_check:
> > > > > + * @s: pointer to xe_eudebug_session structure
> > > > > + * @match_opposite: indicates whether the check should match every create
> > > > > + * event with its opposite destroy event.
> > > > > + * @filter: mask that represents events to be skipped during comparison, useful
> > > > > + * for events like 'VM_BIND' since they can be asymmetric
> > > > > + *
> > > > > + * Validates the debugger's log against the log created by the client.
> > > > > + */
> > > > > +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter)
> > > > > +{
> > > > > +	xe_eudebug_event_log_compare(s->c->log, s->d->log, filter);
> > > > > +
> > > > > +	if (match_opposite)
> > > > > +		xe_eudebug_event_log_match_opposite(s->d->log, filter);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_session_destroy:
> > > > > + * @s: pointer to xe_eudebug_session structure
> > > > > + *
> > > > > + * Destroys the session together with its debugger and client.
> > > > > + */
> > > > > +void xe_eudebug_session_destroy(struct xe_eudebug_session *s)
> > > > > +{
> > > > > +	xe_eudebug_debugger_destroy(s->d);
> > > > > +	xe_eudebug_client_destroy(s->c);
> > > > > +
> > > > > +	free(s);
> > > > > +}
> > > > > +
> > > > > +#define to_base(x) ((struct drm_xe_eudebug_event *)&x)
> > > > > +
> > > > > +static void base_event(struct xe_eudebug_client *c,
> > > > > +		       struct drm_xe_eudebug_event *e,
> > > > > +		       uint32_t type,
> > > > > +		       uint32_t flags,
> > > > > +		       uint64_t size)
> > > > > +{
> > > > > +	e->type = type;
> > > > > +	e->flags = flags;
> > > > > +	e->seqno = xe_eudebug_client_get_seqno(c);
> > > > > +	e->len = size;
> > > > > +}
> > > > > +
> > > > > +static void client_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event_client ec;
> > > > > +
> > > > > +	base_event(c, to_base(ec), DRM_XE_EUDEBUG_EVENT_OPEN, flags, sizeof(ec));
> > > > > +
> > > > > +	ec.client_handle = client_fd;
> > > > > +
> > > > > +	xe_eudebug_event_log_write(c->log, (void *)&ec);
> > > > > +}
> > > > > +
> > > > > +static void vm_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd, uint32_t vm_id)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event_vm evm;
> > > > > +
> > > > > +	base_event(c, to_base(evm), DRM_XE_EUDEBUG_EVENT_VM, flags, sizeof(evm));
> > > > > +
> > > > > +	evm.client_handle = client_fd;
> > > > > +	evm.vm_handle = vm_id;
> > > > > +
> > > > > +	xe_eudebug_event_log_write(c->log, (void *)&evm);
> > > > > +}
> > > > > +
> > > > > +static void exec_queue_event(struct xe_eudebug_client *c, uint32_t flags,
> > > > > +			     int client_fd, uint32_t vm_id,
> > > > > +			     uint32_t exec_queue_handle, uint16_t class,
> > > > > +			     uint16_t width)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event_exec_queue ee;
> > > > > +
> > > > > +	base_event(c, to_base(ee), DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
> > > > > +		   flags, sizeof(ee));
> > > > > +
> > > > > +	ee.client_handle = client_fd;
> > > > > +	ee.vm_handle = vm_id;
> > > > > +	ee.exec_queue_handle = exec_queue_handle;
> > > > > +	ee.engine_class = class;
> > > > > +	ee.width = width;
> > > > > +
> > > > > +	xe_eudebug_event_log_write(c->log, (void *)&ee);
> > > > > +}
> > > > > +
> > > > > +static void metadata_event(struct xe_eudebug_client *c, uint32_t flags,
> > > > > +			   int client_fd, uint32_t id, uint64_t type, uint64_t len)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event_metadata em;
> > > > > +
> > > > > +	base_event(c, to_base(em), DRM_XE_EUDEBUG_EVENT_METADATA,
> > > > > +		   flags, sizeof(em));
> > > > > +
> > > > > +	em.client_handle = client_fd;
> > > > > +	em.metadata_handle = id;
> > > > > +	em.type = type;
> > > > > +	em.len = len;
> > > > > +
> > > > > +	xe_eudebug_event_log_write(c->log, (void *)&em);
> > > > > +}
> > > > > +
> > > > > +static int enable_getset(int fd, bool *old, bool *new)
> > > > > +{
> > > > > +	static const char * const fname = "enable_eudebug";
> > > > > +	int ret = 0;
> > > > > +
> > > > > +	int sysfs, device_fd;
> > > > > +	bool val_before;
> > > > > +	struct stat st;
> > > > > +
> > > > > +	igt_assert(new || old);
> > > > > +
> > > > > +	igt_assert_eq(fstat(fd, &st), 0);
> > > > > +	sysfs = igt_sysfs_open(fd);
> > > > > +	if (sysfs < 0)
> > > > > +		return -1;
> > > > > +
> > > > > +	device_fd = openat(sysfs, "device", O_DIRECTORY | O_RDONLY);
> > > > > +	close(sysfs);
> > > > > +	if (device_fd < 0)
> > > > > +		return -1;
> > > > > +
> > > > > +	if (!__igt_sysfs_get_boolean(device_fd, fname, &val_before)) {
> > > > > +		ret = -1;
> > > > > +		goto out;
> > > > > +	}
> > > > > +
> > > > > +	igt_debug("enable_eudebug before: %d\n", val_before);
> > > > > +
> > > > > +	if (old)
> > > > > +		*old = val_before;
> > > > > +
> > > > > +	ret = 0;
> > > > > +	if (new) {
> > > > > +		if (__igt_sysfs_set_boolean(device_fd, fname, *new))
> > > > > +			igt_assert_eq(igt_sysfs_get_boolean(device_fd, fname), *new);
> > > > > +		else
> > > > > +			ret = -1;
> > > > > +	}
> > > > > +
> > > > > +out:
> > > > > +	close(device_fd);
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_enable
> > > > > + * @fd: xe client
> > > > > + * @enable: state toggle - true to enable, false to disable
> > > > > + *
> > > > > + * Enables/disables eudebug capability by writing to
> > > > > + * '/sys/class/drm/card<N>/device/enable_eudebug' sysfs entry.
> > > > > + *
> > > > > + * Returns: previous toggle value, i.e. true when eudebugging was enabled,
> > > > > + * false when eudebugging was disabled.
> > > > > + */
> > > > > +bool xe_eudebug_enable(int fd, bool enable)
> > > > > +{
> > > > > +	bool old = false;
> > > > > +	int ret = enable_getset(fd, &old, &enable);
> > > > > +
> > > > > +	if (ret) {
> > > > > +		igt_skip_on(enable);
> > > > > +		old = false;
> > > > > +	}
> > > > > +
> > > > > +	return old;
> > > > > +}
> > > > > +
> > > > > +/* EU debugger wrappers around resource-creating xe ioctls. */
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_open_driver:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + *
> > > > > + * Calls drm_open_client(DRIVER_XE) and logs the corresponding
> > > > > + * event in client's event log.
> > > > > + *
> > > > > + * Returns: valid DRM file descriptor
> > > > > + */
> > > > > +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c)
> > > > > +{
> > > > > +	int fd;
> > > > > +
> > > > > +	fd = drm_reopen_driver(c->master_fd);
> > > > > +	client_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd);
> > > > > +
> > > > > +	return fd;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_close_driver:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @fd: xe client
> > > > > + *
> > > > > + * Closes the xe client @fd and logs the corresponding event in
> > > > > + * the client's event log.
> > > > > + */
> > > > > +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd)
> > > > > +{
> > > > > +	client_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd);
> > > > > +	close(fd);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_vm_create:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @fd: xe client
> > > > > + * @flags: vm create flags
> > > > > + * @ext: pointer to the first user extension
> > > > > + *
> > > > > + * Calls xe_vm_create() and logs corresponding events
> > > > > + * (including vm set metadata events) in client's event log.
> > > > > + *
> > > > > + * Returns: valid vm handle
> > > > > + */
> > > > > +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
> > > > > +				     uint32_t flags, uint64_t ext)
> > > > > +{
> > > > > +	uint32_t vm;
> > > > > +
> > > > > +	vm = xe_vm_create(fd, flags, ext);
> > > > > +	vm_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, vm);
> > > > > +
> > > > > +	return vm;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_vm_destroy:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @fd: xe client
> > > > > + * @vm: vm handle
> > > > > + *
> > > > > + * Calls xe_vm_destroy() and logs the corresponding event in
> > > > > + * client's event log.
> > > > > + */
> > > > > +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm)
> > > > > +{
> > > > > +	xe_vm_destroy(fd, vm);
> > > > > +	vm_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, vm);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_exec_queue_create:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @fd: xe client
> > > > > + * @create: exec_queue create drm struct
> > > > > + *
> > > > > + * Calls xe exec queue create ioctl and logs the corresponding event in
> > > > > + * client's event log.
> > > > > + *
> > > > > + * Returns: valid exec queue handle
> > > > > + */
> > > > > +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
> > > > > +					     struct drm_xe_exec_queue_create *create)
> > > > > +{
> > > > > +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
> > > > > +
> > > > > +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE, create), 0);
> > > > > +
> > > > > +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
> > > > > +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create->vm_id,
> > > > > +				 create->exec_queue_id, class, create->width);
> > > > > +
> > > > > +	return create->exec_queue_id;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_exec_queue_destroy:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @fd: xe client
> > > > > + * @create: exec_queue create drm struct which was used for creation
> > > > > + *
> > > > > + * Calls xe exec_queue destroy ioctl and logs the corresponding event in
> > > > > + * client's event log.
> > > > > + */
> > > > > +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
> > > > > +					  struct drm_xe_exec_queue_create *create)
> > > > > +{
> > > > > +	struct drm_xe_exec_queue_destroy destroy = { .exec_queue_id = create->exec_queue_id, };
> > > > > +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
> > > > > +
> > > > > +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
> > > > > +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, create->vm_id,
> > > > > +				 create->exec_queue_id, class, create->width);
> > > > > +
> > > > > +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_DESTROY, &destroy), 0);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_vm_bind_event:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @event_flags: base event flags
> > > > > + * @fd: xe client
> > > > > + * @vm: vm handle
> > > > > + * @bind_flags: bind flags of vm_bind_event
> > > > > + * @num_binds: number of bind operations for the event
> > > > > + * @ref_seqno: base vm bind reference seqno
> > > > > + *
> > > > > + * Logs vm bind event in client's event log.
> > > > > + */
> > > > > +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c,
> > > > > +				     uint32_t event_flags, int fd,
> > > > > +				     uint32_t vm, uint32_t bind_flags,
> > > > > +				     uint32_t num_binds, u64 *ref_seqno)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event_vm_bind evmb;
> > > > > +
> > > > > +	base_event(c, to_base(evmb), DRM_XE_EUDEBUG_EVENT_VM_BIND,
> > > > > +		   event_flags, sizeof(evmb));
> > > > > +	evmb.client_handle = fd;
> > > > > +	evmb.vm_handle = vm;
> > > > > +	evmb.flags = bind_flags;
> > > > > +	evmb.num_binds = num_binds;
> > > > > +
> > > > > +	*ref_seqno = evmb.base.seqno;
> > > > > +
> > > > > +	xe_eudebug_event_log_write(c->log, (void *)&evmb);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_vm_bind_op_event:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @event_flags: base event flags
> > > > > + * @bind_ref_seqno: base vm bind reference seqno
> > > > > + * @op_ref_seqno: output, the vm_bind_op event seqno
> > > > > + * @addr: ppgtt address
> > > > > + * @size: size of the binding
> > > > > + * @num_extensions: number of vm bind op extensions
> > > > > + *
> > > > > + * Logs vm bind op event in client's event log.
> > > > > + */
> > > > > +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
> > > > > +					uint64_t bind_ref_seqno, uint64_t *op_ref_seqno,
> > > > > +					uint64_t addr, uint64_t range,
> > > > > +					uint64_t num_extensions)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event_vm_bind_op op;
> > > > > +
> > > > > +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP,
> > > > > +		   event_flags, sizeof(op));
> > > > > +	op.vm_bind_ref_seqno = bind_ref_seqno;
> > > > > +	op.addr = addr;
> > > > > +	op.range = range;
> > > > > +	op.num_extensions = num_extensions;
> > > > > +
> > > > > +	*op_ref_seqno = op.base.seqno;
> > > > > +
> > > > > +	xe_eudebug_event_log_write(c->log, (void *)&op);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_vm_bind_op_metadata_event:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @event_flags: base event flags
> > > > > + * @op_ref_seqno: base vm bind op reference seqno
> > > > > + * @metadata_handle: metadata handle
> > > > > + * @metadata_cookie: metadata cookie
> > > > > + *
> > > > > + * Logs vm bind op metadata event in client's event log.
> > > > > + */
> > > > > +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
> > > > > +						 uint32_t event_flags, uint64_t op_ref_seqno,
> > > > > +						 uint64_t metadata_handle, uint64_t metadata_cookie)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event_vm_bind_op_metadata op;
> > > > > +
> > > > > +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA,
> > > > > +		   event_flags, sizeof(op));
> > > > > +	op.vm_bind_op_ref_seqno = op_ref_seqno;
> > > > > +	op.metadata_handle = metadata_handle;
> > > > > +	op.metadata_cookie = metadata_cookie;
> > > > > +
> > > > > +	xe_eudebug_event_log_write(c->log, (void *)&op);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_vm_bind_ufence_event:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @event_flags: base event flags
> > > > > + * @ref_seqno: base vm bind event seqno
> > > > > + *
> > > > > + * Logs vm bind ufence event in client's event log.
> > > > > + */
> > > > > +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
> > > > > +					    uint64_t ref_seqno)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_event_vm_bind_ufence f;
> > > > > +
> > > > > +	base_event(c, to_base(f), DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
> > > > > +		   event_flags, sizeof(f));
> > > > > +	f.vm_bind_ref_seqno = ref_seqno;
> > > > > +
> > > > > +	xe_eudebug_event_log_write(c->log, (void *)&f);
> > > > > +}
> > > > > +
> > > > > +static bool has_user_fence(const struct drm_xe_sync *sync, uint32_t num_syncs)
> > > > > +{
> > > > > +	while (num_syncs--)
> > > > > +		if (sync[num_syncs].type == DRM_XE_SYNC_TYPE_USER_FENCE)
> > > > > +			return true;
> > > > > +
> > > > > +	return false;
> > > > > +}
> > > > > +
> > > > > +#define for_each_metadata(__m, __ext)					\
> > > > > +	for ((__m) = from_user_pointer(__ext);				\
> > > > > +	     (__m);							\
> > > > > +	     (__m) = from_user_pointer((__m)->base.next_extension))	\
> > > > > +		if ((__m)->base.name == XE_VM_BIND_OP_EXTENSIONS_ATTACH_DEBUG)
> > > > > +
> > > > > +static int  __xe_eudebug_client_vm_bind(struct xe_eudebug_client *c,
> > > > > +					int fd, uint32_t vm, uint32_t exec_queue,
> > > > > +					uint32_t bo, uint64_t offset,
> > > > > +					uint64_t addr, uint64_t size,
> > > > > +					uint32_t op, uint32_t flags,
> > > > > +					struct drm_xe_sync *sync,
> > > > > +					uint32_t num_syncs,
> > > > > +					uint32_t prefetch_region,
> > > > > +					uint8_t pat_index, uint64_t op_ext)
> > > > > +{
> > > > > +	struct drm_xe_vm_bind_op_ext_attach_debug *metadata;
> > > > > +	const bool ufence = has_user_fence(sync, num_syncs);
> > > > > +	const uint32_t bind_flags = ufence ?
> > > > > +		DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE : 0;
> > > > > +	uint64_t seqno = 0, op_seqno = 0, num_metadata = 0;
> > > > > +	uint32_t bind_base_flags = 0;
> > > > > +	int ret;
> > > > > +
> > > > > +	for_each_metadata(metadata, op_ext)
> > > > > +		num_metadata++;
> > > > > +
> > > > > +	switch (op) {
> > > > > +	case DRM_XE_VM_BIND_OP_MAP:
> > > > > +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_CREATE;
> > > > > +		break;
> > > > > +	case DRM_XE_VM_BIND_OP_UNMAP:
> > > > > +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_DESTROY;
> > > > > +		igt_assert_eq(num_metadata, 0);
> > > > > +		igt_assert_eq(ufence, false);
> > > > > +		break;
> > > > > +	default:
> > > > > +		/* XXX unmap all? */
> > > > > +		igt_assert(op);
> > > > > +		break;
> > > > > +	}
> > > > > +
> > > > > +	ret = ___xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
> > > > > +			    op, flags, sync, num_syncs, prefetch_region,
> > > > > +			    pat_index, 0, op_ext);
> > > > > +
> > > > > +	if (ret)
> > > > > +		return ret;
> > > > > +
> > > > > +	if (!bind_base_flags)
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	xe_eudebug_client_vm_bind_event(c, DRM_XE_EUDEBUG_EVENT_STATE_CHANGE,
> > > > > +					fd, vm, bind_flags, 1, &seqno);
> > > > > +	xe_eudebug_client_vm_bind_op_event(c, bind_base_flags,
> > > > > +					   seqno, &op_seqno, addr, size,
> > > > > +					   num_metadata);
> > > > > +
> > > > > +	for_each_metadata(metadata, op_ext)
> > > > > +		xe_eudebug_client_vm_bind_op_metadata_event(c,
> > > > > +							    DRM_XE_EUDEBUG_EVENT_CREATE,
> > > > > +							    op_seqno,
> > > > > +							    metadata->metadata_id,
> > > > > +							    metadata->cookie);
> > > > > +	if (ufence)
> > > > > +		xe_eudebug_client_vm_bind_ufence_event(c, DRM_XE_EUDEBUG_EVENT_CREATE |
> > > > > +						       DRM_XE_EUDEBUG_EVENT_NEED_ACK,
> > > > > +						       seqno);
> > > > > +	return ret;
> > > > > +}
> > > > > +
> > > > > +static void _xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd,
> > > > > +				       uint32_t vm, uint32_t bo,
> > > > > +				       uint64_t offset, uint64_t addr, uint64_t size,
> > > > > +				       uint32_t op,
> > > > > +				       uint32_t flags,
> > > > > +				       struct drm_xe_sync *sync,
> > > > > +				       uint32_t num_syncs,
> > > > > +				       uint64_t op_ext)
> > > > > +{
> > > > > +	const uint32_t exec_queue_id = 0;
> > > > > +	const uint32_t prefetch_region = 0;
> > > > > +
> > > > > +	igt_assert_eq(__xe_eudebug_client_vm_bind(c, fd, vm, exec_queue_id, bo, offset,
> > > > > +						  addr, size, op, flags,
> > > > > +						  sync, num_syncs, prefetch_region,
> > > > > +						  DEFAULT_PAT_INDEX, op_ext),
> > > > > +		      0);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_vm_bind_flags
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @fd: xe client
> > > > > + * @vm: vm handle
> > > > > + * @bo: buffer object handle
> > > > > + * @offset: offset within buffer object
> > > > > + * @addr: ppgtt address
> > > > > + * @size: size of the binding
> > > > > + * @flags: vm_bind flags
> > > > > + * @sync: sync objects
> > > > > + * @num_syncs: number of sync objects
> > > > > + * @op_ext: BIND_OP extensions
> > > > > + *
> > > > > + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
> > > > > + */
> > > > > +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > > > +				     uint32_t bo, uint64_t offset,
> > > > > +				     uint64_t addr, uint64_t size, uint32_t flags,
> > > > > +				     struct drm_xe_sync *sync, uint32_t num_syncs,
> > > > > +				     uint64_t op_ext)
> > > > > +{
> > > > > +	_xe_eudebug_client_vm_bind(c, fd, vm, bo, offset, addr, size,
> > > > > +				   DRM_XE_VM_BIND_OP_MAP, flags,
> > > > > +				   sync, num_syncs, op_ext);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_vm_bind
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @fd: xe client
> > > > > + * @vm: vm handle
> > > > > + * @bo: buffer object handle
> > > > > + * @offset: offset within buffer object
> > > > > + * @addr: ppgtt address
> > > > > + * @size: size of the binding
> > > > > + *
> > > > > + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
> > > > > + */
> > > > > +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > > > +			       uint32_t bo, uint64_t offset,
> > > > > +			       uint64_t addr, uint64_t size)
> > > > > +{
> > > > > +	const uint32_t flags = 0;
> > > > > +	struct drm_xe_sync *sync = NULL;
> > > > > +	const uint32_t num_syncs = 0;
> > > > > +	const uint64_t op_ext = 0;
> > > > > +
> > > > > +	xe_eudebug_client_vm_bind_flags(c, fd, vm, bo, offset, addr, size,
> > > > > +					flags,
> > > > > +					sync, num_syncs, op_ext);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_vm_unbind_flags
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @fd: xe client
> > > > > + * @vm: vm handle
> > > > > + * @offset: offset
> > > > > + * @addr: ppgtt address
> > > > > + * @size: size of the binding
> > > > > + * @flags: vm_bind flags
> > > > > + * @sync: sync objects
> > > > > + * @num_syncs: number of sync objects
> > > > > + *
> > > > > + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
> > > > > + */
> > > > > +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
> > > > > +				       uint32_t vm, uint64_t offset,
> > > > > +				       uint64_t addr, uint64_t size, uint32_t flags,
> > > > > +				       struct drm_xe_sync *sync, uint32_t num_syncs)
> > > > > +{
> > > > > +	_xe_eudebug_client_vm_bind(c, fd, vm, 0, offset, addr, size,
> > > > > +				   DRM_XE_VM_BIND_OP_UNMAP, flags,
> > > > > +				   sync, num_syncs, 0);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_vm_unbind
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @fd: xe client
> > > > > + * @vm: vm handle
> > > > > + * @offset: offset
> > > > > + * @addr: ppgtt address
> > > > > + * @size: size of the binding
> > > > > + *
> > > > > + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
> > > > > + */
> > > > > +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > > > +				 uint64_t offset, uint64_t addr, uint64_t size)
> > > > > +{
> > > > > +	const uint32_t flags = 0;
> > > > > +	struct drm_xe_sync *sync = NULL;
> > > > > +	const uint32_t num_syncs = 0;
> > > > > +
> > > > > +	xe_eudebug_client_vm_unbind_flags(c, fd, vm, offset, addr, size,
> > > > > +					  flags, sync, num_syncs);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_metadata_create:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @fd: xe client
> > > > > + * @type: debug metadata type
> > > > > + * @len: size of @data
> > > > > + * @data: debug metadata payload
> > > > > + *
> > > > > + * Calls xe metadata create ioctl and logs the corresponding event in
> > > > > + * client's event log.
> > > > > + *
> > > > > + * Returns: valid debug metadata id.
> > > > > + */
> > > > > +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
> > > > > +					   int type, size_t len, void *data)
> > > > > +{
> > > > > +	struct drm_xe_debug_metadata_create create = {
> > > > > +		.type = type,
> > > > > +		.user_addr = to_user_pointer(data),
> > > > > +		.len = len
> > > > > +	};
> > > > > +
> > > > > +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_CREATE, &create), 0);
> > > > > +
> > > > > +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create.metadata_id, type, len);
> > > > > +
> > > > > +	return create.metadata_id;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_eudebug_client_metadata_destroy:
> > > > > + * @c: pointer to xe_eudebug_client structure
> > > > > + * @fd: xe client
> > > > > + * @id: xe debug metadata handle
> > > > > + * @type: debug metadata type
> > > > > + * @len: size of debug metadata payload
> > > > > + *
> > > > > + * Calls xe metadata destroy ioctl and logs the corresponding event in
> > > > > + * client's event log.
> > > > > + */
> > > > > +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
> > > > > +					uint32_t id, int type, size_t len)
> > > > > +{
> > > > > +	struct drm_xe_debug_metadata_destroy destroy = { .metadata_id = id };
> > > > > +
> > > > > +
> > > > > +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_DESTROY, &destroy), 0);
> > > > > +
> > > > > +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, id, type, len);
> > > > > +}
> > > > > +
> > > > > +void xe_eudebug_ack_ufence(int debugfd,
> > > > > +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f)
> > > > > +{
> > > > > +	struct drm_xe_eudebug_ack_event ack = { 0, };
> > > > > +	char event_str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
> > > > > +
> > > > > +	ack.type = f->base.type;
> > > > > +	ack.seqno = f->base.seqno;
> > > > > +
> > > > > +	xe_eudebug_event_to_str((void *)f, event_str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
> > > > > +	igt_debug("delivering ack for event: %s\n", event_str);
> > > > > +	igt_assert_eq(igt_ioctl(debugfd, DRM_XE_EUDEBUG_IOCTL_ACK_EVENT, &ack), 0);
> > > > > +}
> > > > > diff --git a/lib/xe/xe_eudebug.h b/lib/xe/xe_eudebug.h
> > > > > new file mode 100644
> > > > > index 000000000..444f5a7b7
> > > > > --- /dev/null
> > > > > +++ b/lib/xe/xe_eudebug.h
> > > > > @@ -0,0 +1,206 @@
> > > > > +/* SPDX-License-Identifier: MIT */
> > > > > +/*
> > > > > + * Copyright © 2023 Intel Corporation
> > > > > + */
> > > > > +#include <fcntl.h>
> > > > > +#include <pthread.h>
> > > > > +#include <stdint.h>
> > > > > +#include <xe_drm.h>
> > > > > +
> > > > > +#include "igt_list.h"
> > > > > +
> > > > > +struct xe_eudebug_event_log {
> > > > > +	uint8_t *log;
> > > > > +	unsigned int head;
> > > > > +	unsigned int max_size;
> > > > > +	char name[80];
> > > > > +	pthread_mutex_t lock;
> > > > > +};
> > > > > +
> > > > > +struct xe_eudebug_debugger {
> > > > > +	int fd;
> > > > > +	uint64_t flags;
> > > > > +
> > > > > +	/* Used to smuggle private data */
> > > > > +	void *ptr;
> > > > > +
> > > > > +	struct xe_eudebug_event_log *log;
> > > > > +
> > > > > +	uint64_t event_count;
> > > > > +
> > > > > +	uint64_t target_pid;
> > > > > +
> > > > > +	struct igt_list_head triggers;
> > > > > +
> > > > > +	int master_fd;
> > > > > +
> > > > > +	pthread_t worker_thread;
> > > > > +	int worker_state;
> > > > > +
> > > > > +	int p_client[2];
> > > > > +};
> > > > > +
> > > > > +struct xe_eudebug_client {
> > > > > +	int pid;
> > > > > +	uint64_t seqno;
> > > > > +	uint64_t flags;
> > > > > +
> > > > > +	/* Used to smuggle private data */
> > > > > +	void *ptr;
> > > > > +
> > > > > +	struct xe_eudebug_event_log *log;
> > > > > +
> > > > > +	int done;
> > > > > +	int p_in[2];
> > > > > +	int p_out[2];
> > > > > +
> > > > > > +	/* Used to pick up the right device (the one used in the debugger) */
> > > > > +	int master_fd;
> > > > > +
> > > > > +	int timeout_ms;
> > > > > +};
> > > > > +
> > > > > +struct xe_eudebug_session {
> > > > > +	uint64_t flags;
> > > > > +	struct xe_eudebug_client *c;
> > > > > +	struct xe_eudebug_debugger *d;
> > > > > +};
> > > > > +
> > > > > +typedef void (*xe_eudebug_client_work_fn)(struct xe_eudebug_client *);
> > > > > +typedef void (*xe_eudebug_trigger_fn)(struct xe_eudebug_debugger *,
> > > > > +				      struct drm_xe_eudebug_event *);
> > > > > +
> > > > > +#define xe_eudebug_for_each_event(_e, _log) \
> > > > > +	for ((_e) = (_e) ? (void *)(uint8_t *)(_e) + (_e)->len : \
> > > > > +		    (void *)(_log)->log; \
> > > > > +	    (uint8_t *)(_e) < (_log)->log + (_log)->head; \
> > > > > +	    (_e) = (void *)(uint8_t *)(_e) + (_e)->len)
> > > > > +
> > > > > +#define xe_eudebug_assert(d, c)						\
> > > > > +	do {								\
> > > > > +		if (!(c)) {						\
> > > > > +			xe_eudebug_event_log_print((d)->log, true);	\
> > > > > +			igt_assert(c);					\
> > > > > +		}							\
> > > > > +	} while (0)
> > > > > +
> > > > > +#define xe_eudebug_assert_f(d, c, f...)					\
> > > > > +	do {								\
> > > > > +		if (!(c)) {						\
> > > > > +			xe_eudebug_event_log_print((d)->log, true);	\
> > > > > +			igt_assert_f(c, f);				\
> > > > > +		}							\
> > > > > +	} while (0)
> > > > > +
> > > > > +#define XE_EUDEBUG_EVENT_STRING_MAX_LEN		4096
> > > > > +
> > > > > +/*
> > > > > + * Default abort timeout to use across xe_eudebug lib and tests if no specific
> > > > > + * timeout value is required.
> > > > > + */
> > > > > +#define XE_EUDEBUG_DEFAULT_TIMEOUT_MS		25000ULL
> > > > > +
> > > > > +#define XE_EUDEBUG_FILTER_EVENT_NONE		BIT(DRM_XE_EUDEBUG_EVENT_NONE)
> > > > > +#define XE_EUDEBUG_FILTER_EVENT_READ		BIT(DRM_XE_EUDEBUG_EVENT_READ)
> > > > > +#define XE_EUDEBUG_FILTER_EVENT_OPEN		BIT(DRM_XE_EUDEBUG_EVENT_OPEN)
> > > > > +#define XE_EUDEBUG_FILTER_EVENT_VM		BIT(DRM_XE_EUDEBUG_EVENT_VM)
> > > > > +#define XE_EUDEBUG_FILTER_EVENT_EXEC_QUEUE	BIT(DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE)
> > > > > +#define XE_EUDEBUG_FILTER_EVENT_EU_ATTENTION	BIT(DRM_XE_EUDEBUG_EVENT_EU_ATTENTION)
> > > > > +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND		BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND)
> > > > > +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP	BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_OP)
> > > > > +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE  BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE)
> > > > > +#define XE_EUDEBUG_FILTER_ALL			GENMASK(DRM_XE_EUDEBUG_EVENT_MAX_EVENT, 0)
> > > > > +#define XE_EUDEBUG_EVENT_IS_FILTERED(_e, _f)	((1UL << _e) & _f)
> > > > > +
> > > > > +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags);
> > > > > +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len);
> > > > > +struct drm_xe_eudebug_event *
> > > > > +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno);
> > > > > +struct xe_eudebug_event_log *
> > > > > +xe_eudebug_event_log_create(const char *name, unsigned int max_size);
> > > > > +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l);
> > > > > +void xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug);
> > > > > +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *c, struct xe_eudebug_event_log *d,
> > > > > +				  uint32_t filter);
> > > > > +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e);
> > > > > +void xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter);
> > > > > +
> > > > > +bool xe_eudebug_debugger_available(int fd);
> > > > > +struct xe_eudebug_debugger *
> > > > > +xe_eudebug_debugger_create(int xe, uint64_t flags, void *data);
> > > > > +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d);
> > > > > +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d, struct xe_eudebug_client *c);
> > > > > +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d);
> > > > > +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d, int timeout_s);
>>>>>> +void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d); /* sic: "dettach" */
> > > > > +void xe_eudebug_debugger_set_data(struct xe_eudebug_debugger *c, void *ptr);
> > > > > +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d, int type,
> > > > > +				     xe_eudebug_trigger_fn fn);
> > > > > +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage);
> > > > > +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage);
> > > > > +
> > > > > +struct xe_eudebug_client *
> > > > > +xe_eudebug_client_create(int xe, xe_eudebug_client_work_fn work, uint64_t flags, void *data);
> > > > > +void xe_eudebug_client_destroy(struct xe_eudebug_client *c);
> > > > > +void xe_eudebug_client_start(struct xe_eudebug_client *c);
> > > > > +void xe_eudebug_client_stop(struct xe_eudebug_client *c);
> > > > > +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c);
> > > > > +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage);
> > > > > +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage);
> > > > > +
> > > > > +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c);
> > > > > +void xe_eudebug_client_set_data(struct xe_eudebug_client *c, void *ptr);
> > > > > +
> > > > > +bool xe_eudebug_enable(int fd, bool enable);
> > > > > +
> > > > > +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c);
> > > > > +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd);
> > > > > +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
> > > > > +				     uint32_t flags, uint64_t ext);
> > > > > +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm);
> > > > > +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
> > > > > +					     struct drm_xe_exec_queue_create *create);
> > > > > +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
> > > > > +					  struct drm_xe_exec_queue_create *create);
> > > > > +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c, uint32_t event_flags, int fd,
> > > > > +				     uint32_t vm, uint32_t bind_flags,
> > > > > +				     uint32_t num_ops, uint64_t *ref_seqno);
> > > > > +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
> > > > > +					uint64_t ref_seqno, uint64_t *op_ref_seqno,
> > > > > +					uint64_t addr, uint64_t range,
> > > > > +					uint64_t num_extensions);
> > > > > +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
> > > > > +						 uint32_t event_flags, uint64_t op_ref_seqno,
> > > > > +						 uint64_t metadata_handle, uint64_t metadata_cookie);
> > > > > +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
> > > > > +					    uint64_t ref_seqno);
> > > > > +void xe_eudebug_ack_ufence(int debugfd,
> > > > > +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f);
> > > > > +
> > > > > +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > > > +				     uint32_t bo, uint64_t offset,
> > > > > +				     uint64_t addr, uint64_t size, uint32_t flags,
> > > > > +				     struct drm_xe_sync *sync, uint32_t num_syncs,
> > > > > +				     uint64_t op_ext);
> > > > > +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > > > +			       uint32_t bo, uint64_t offset,
> > > > > +			       uint64_t addr, uint64_t size);
> > > > > +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
> > > > > +				       uint32_t vm, uint64_t offset,
> > > > > +				       uint64_t addr, uint64_t size, uint32_t flags,
> > > > > +				       struct drm_xe_sync *sync, uint32_t num_syncs);
> > > > > +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
> > > > > +				 uint64_t offset, uint64_t addr, uint64_t size);
> > > > > +
> > > > > +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
> > > > > +					   int type, size_t len, void *data);
> > > > > +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
> > > > > +					uint32_t id, int type, size_t len);
> > > > > +
> > > > > +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
> > > > > +						     xe_eudebug_client_work_fn work,
> > > > > +						     unsigned int flags,
> > > > > +						     void *test_private);
> > > > > +void xe_eudebug_session_destroy(struct xe_eudebug_session *s);
> > > > > +void xe_eudebug_session_run(struct xe_eudebug_session *s);
> > > > > +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter);
> > > > > -- 
> > > > > 2.34.1
> > > > > 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework
  2024-08-22 15:39           ` Kamil Konieczny
@ 2024-08-23  7:58             ` Manszewski, Christoph
  0 siblings, 0 replies; 41+ messages in thread
From: Manszewski, Christoph @ 2024-08-23  7:58 UTC (permalink / raw)
  To: Kamil Konieczny, igt-dev, Zbigniew Kempczyński,
	Dominik Grzegorzek, Maciej Patelczyk,
	Dominik Karol Piątkowski, Pawel Sikora, Andrzej Hajda,
	Kolanupaka Naveena, Mika Kuoppala, Gwan-gyeong Mun, Mika Kuoppala,
	Karolina Stolarek

Hi Kamil,

On 22.08.2024 17:39, Kamil Konieczny wrote:
> Hi Zbigniew,
> On 2024-08-21 at 11:31:40 +0200, Zbigniew Kempczyński wrote:
>> On Tue, Aug 20, 2024 at 07:45:18PM +0200, Kamil Konieczny wrote:
>>> Hi Manszewski,
>>> On 2024-08-20 at 18:14:07 +0200, Manszewski, Christoph wrote:
>>>> Hi Zbigniew,
>>>>
>>>> On 20.08.2024 10:14, Zbigniew Kempczyński wrote:
>>>>> On Fri, Aug 09, 2024 at 02:38:03PM +0200, Christoph Manszewski wrote:
>>>>>> From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>>>>>>
>>>>>> Introduce a library which simplifies testing of the eu debug
>>>>>> capability. The library provides event log helpers together with an
>>>>>> asynchronous abstraction for the client process and the debugger
>>>>>> itself.
>>>>>>
>>>>>> xe_eudebug_client creates its own process running the user's work
>>>>>> function, and provides mechanisms to synchronize the beginning of
>>>>>> execution and event logging.
>>>>>>
>>>>>> xe_eudebug_debugger allows attaching to a given process, provides an
>>>>>> asynchronous thread for event reading and introduces triggers - a
>>>>>> callback mechanism invoked every time a subscribed event is read.
>>>>>>
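Pieced together from the declarations in xe_eudebug.h, a test built on this library would look roughly like the sketch below. Treat it as illustrative only: the client_work body and the test_basic_session name are made up for the example, and the fragment compiles only inside the IGT build.

```c
/* Sketch only: the xe_eudebug_* calls come from lib/xe/xe_eudebug.{c,h};
 * the work callback body is up to the test author. */
static void client_work(struct xe_eudebug_client *c)
{
	int fd = xe_eudebug_client_open_driver(c);
	uint32_t vm = xe_eudebug_client_vm_create(c, fd, 0, 0);

	xe_eudebug_client_vm_destroy(c, fd, vm);
	xe_eudebug_client_close_driver(c, fd);
}

static void test_basic_session(int fd)
{
	struct xe_eudebug_session *s;

	s = xe_eudebug_session_create(fd, client_work, 0, NULL);
	xe_eudebug_session_run(s);

	/* Compare the client-side and debugger-side event logs. */
	xe_eudebug_session_check(s, true, 0);
	xe_eudebug_session_destroy(s);
}
```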
>>>>>> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
>>>>>> Signed-off-by: Mika Kuoppala <mika.kuaoppala@linux.intel.com>
>>>>>> Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
>>>>>> Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
>>>>>> Signed-off-by: Pawel Sikora <pawel.sikora@intel.com>
>>>>>> Signed-off-by: Karolina Stolarek <karolina.stolarek@intel.com>
>>>>>> ---
>>>>>>    lib/meson.build     |    1 +
>>>>>>    lib/xe/xe_eudebug.c | 2192 +++++++++++++++++++++++++++++++++++++++++++
>>>>>>    lib/xe/xe_eudebug.h |  206 ++++
>>>>>>    3 files changed, 2399 insertions(+)
>>>>>>    create mode 100644 lib/xe/xe_eudebug.c
>>>>>>    create mode 100644 lib/xe/xe_eudebug.h
>>>>>>
>>>>>> diff --git a/lib/meson.build b/lib/meson.build
>>>>>> index f711e60a7..969ca4101 100644
>>>>>> --- a/lib/meson.build
>>>>>> +++ b/lib/meson.build
>>>>>> @@ -111,6 +111,7 @@ lib_sources = [
>>>>>>    	'igt_msm.c',
>>>>>>    	'igt_dsc.c',
>>>>>>    	'xe/xe_gt.c',
>>>>>> +	'xe/xe_eudebug.c',
>>>>>>    	'xe/xe_ioctl.c',
>>>>>>    	'xe/xe_mmio.c',
>>>>>>    	'xe/xe_query.c',
>>>>>
>>>>> As eudebug is quite a big feature I think it should be separated and
>>>>> hidden behind a feature flag (check meson_options.txt), let's say
>>>>> 'xe_eudebug', which would be disabled by default. This way you can
>>>>> develop it upstream even if the kernel side is not officially merged.
>>>>> I'm pragmatic and I see no reason to block a not-yet-accepted feature,
>>>>> especially since this would imo speed up development. A final step,
>>>>> once the kernel change is accepted and merged, would be to sync with
>>>>> the uapi and remove the local definitions.
>>>>>
>>>>> I look forward to the maintainers' comments on whether this approach
>>>>> is acceptable.
>>>>
>>>> I agree that it is a good idea. The only problem that arises is for
>>>> 'xe_exec_sip': we add a dependency on eudebug to this test - any ideas
>>>> how to approach this correctly? The only thing that comes to my mind is
>>>> conditional compilation with 'ifdef' statements, but that isn't pretty.
>>>
>>> What about adding skips in the added tests if the kernel does not
>>> support eudebug?
>>>
>>> This way you can have it without conditional compilation via
>>> ifdef/meson and also have it compile-time tested (if CI supports
>>> test-with for Xe kernels).
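For reference, the runtime-skip approach could look roughly like this. This is a sketch, not code from the series: it assumes xe_eudebug_debugger_available() from the new library and standard IGT fixture/skip helpers.

```c
/* Sketch: runtime skip instead of compile-time gating. */
igt_fixture {
	fd = drm_open_driver(DRIVER_XE);
	igt_require_f(xe_eudebug_debugger_available(fd),
		      "eudebug not supported by the kernel\n");
}
```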
>>
>> IMO simplest is to migrate code to xe_exec_sip_eudebug.c.
>>
> 
> What about shorter name like xe_eudebug_sip.c ?

Well, that depends on the motivation. The name proposed by Zbyszek better
underlines the fact that this is a variation of 'xe_exec_sip.c'. And
eventually, after the eudebug feature gets merged into the KMD, I would
assume we could merge those two tests back into one.

Thanks,
Christoph
> 
> Regards,
> Kamil
> 
>> --
>> Zbigniew
>>
>>
>>>
>>> Regards,
>>> Kamil
>>>
>>>>
>>>> Thanks,
>>>> Christoph
>>>>>
>>>>> --
>>>>> Zbigniew
>>>>>
>>>>>
>>>>>> diff --git a/lib/xe/xe_eudebug.c b/lib/xe/xe_eudebug.c
>>>>>> new file mode 100644
>>>>>> index 000000000..4eac87476
>>>>>> --- /dev/null
>>>>>> +++ b/lib/xe/xe_eudebug.c
>>>>>> @@ -0,0 +1,2192 @@
>>>>>> +// SPDX-License-Identifier: MIT
>>>>>> +/*
>>>>>> + * Copyright © 2023 Intel Corporation
>>>>>> + */
>>>>>> +
>>>>>> +#include <fcntl.h>
>>>>>> +#include <poll.h>
>>>>>> +#include <signal.h>
>>>>>> +#include <sys/select.h>
>>>>>> +#include <sys/stat.h>
>>>>>> +#include <sys/types.h>
>>>>>> +#include <sys/wait.h>
>>>>>> +
>>>>>> +#include "igt.h"
>>>>>> +#include "igt_sysfs.h"
>>>>>> +#include "intel_pat.h"
>>>>>> +#include "xe_eudebug.h"
>>>>>> +#include "xe_ioctl.h"
>>>>>> +
>>>>>> +struct event_trigger {
>>>>>> +	xe_eudebug_trigger_fn fn;
>>>>>> +	int type;
>>>>>> +	struct igt_list_head link;
>>>>>> +};
>>>>>> +
>>>>>> +struct seqno_list_entry {
>>>>>> +	struct igt_list_head link;
>>>>>> +	uint64_t seqno;
>>>>>> +};
>>>>>> +
>>>>>> +struct match_dto {
>>>>>> +	struct drm_xe_eudebug_event *target;
>>>>>> +	struct igt_list_head *seqno_list;
>>>>>> +	uint64_t client_handle;
>>>>>> +	uint32_t filter;
>>>>>> +
>>>>>> +	/* store latest 'EVENT_VM_BIND' seqno */
>>>>>> +	uint64_t *bind_seqno;
>>>>>> +	/* latest vm_bind_op seqno matching bind_seqno */
>>>>>> +	uint64_t *bind_op_seqno;
>>>>>> +};
>>>>>> +
>>>>>> +#define CLIENT_PID  1
>>>>>> +#define CLIENT_RUN  2
>>>>>> +#define CLIENT_FINI 3
>>>>>> +#define CLIENT_STOP 4
>>>>>> +#define CLIENT_STAGE 5
>>>>>> +#define DEBUGGER_STAGE 6
>>>>>> +
>>>>>> +#define DEBUGGER_WORKER_INACTIVE  0
>>>>>> +#define DEBUGGER_WORKER_ACTIVE  1
>>>>>> +#define DEBUGGER_WORKER_QUITTING 2
>>>>>> +
>>>>>> +static const char *type_to_str(unsigned int type)
>>>>>> +{
>>>>>> +	switch (type) {
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_NONE:
>>>>>> +		return "none";
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_READ:
>>>>>> +		return "read";
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_OPEN:
>>>>>> +		return "client";
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM:
>>>>>> +		return "vm";
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE:
>>>>>> +		return "exec_queue";
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION:
>>>>>> +		return "attention";
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND:
>>>>>> +		return "vm_bind";
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP:
>>>>>> +		return "vm_bind_op";
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE:
>>>>>> +		return "vm_bind_ufence";
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_METADATA:
>>>>>> +		return "metadata";
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA:
>>>>>> +		return "vm_bind_op_metadata";
>>>>>> +	}
>>>>>> +
>>>>>> +	return "UNKNOWN";
>>>>>> +}
>>>>>> +
>>>>>> +static const char *event_type_to_str(struct drm_xe_eudebug_event *e, char *buf)
>>>>>> +{
>>>>>> +	sprintf(buf, "%s(%d)", type_to_str(e->type), e->type);
>>>>>> +
>>>>>> +	return buf;
>>>>>> +}
>>>>>> +
>>>>>> +static const char *flags_to_str(unsigned int flags)
>>>>>> +{
>>>>>> +	if (flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
>>>>>> +		if (flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK)
>>>>>> +			return "create|ack";
>>>>>> +		else
>>>>>> +			return "create";
>>>>>> +	}
>>>>>> +	if (flags & DRM_XE_EUDEBUG_EVENT_DESTROY)
>>>>>> +		return "destroy";
>>>>>> +
>>>>>> +	if (flags & DRM_XE_EUDEBUG_EVENT_STATE_CHANGE)
>>>>>> +		return "state-change";
>>>>>> +
>>>>>> +	igt_assert(!(flags & DRM_XE_EUDEBUG_EVENT_NEED_ACK));
>>>>>> +
>>>>>> +	return "flags unknown";
>>>>>> +}
>>>>>> +
>>>>>> +static const char *event_members_to_str(struct drm_xe_eudebug_event *e, char *b)
>>>>>> +{
>>>>>> +	switch (e->type) {
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
>>>>>> +		struct drm_xe_eudebug_event_client *ec = (struct drm_xe_eudebug_event_client *)e;
>>>>>> +
>>>>>> +		sprintf(b, "handle=%llu", ec->client_handle);
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM: {
>>>>>> +		struct drm_xe_eudebug_event_vm *evm = (struct drm_xe_eudebug_event_vm *)e;
>>>>>> +
>>>>>> +		sprintf(b, "client_handle=%llu, handle=%llu",
>>>>>> +			evm->client_handle, evm->vm_handle);
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>>>>>> +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
>>>>>> +
>>>>>> +		sprintf(b, "client_handle=%llu, vm_handle=%llu, "
>>>>>> +			   "exec_queue_handle=%llu, engine_class=%d, exec_queue_width=%d",
>>>>>> +			ee->client_handle, ee->vm_handle,
>>>>>> +			ee->exec_queue_handle, ee->engine_class, ee->width);
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_EU_ATTENTION: {
>>>>>> +		struct drm_xe_eudebug_event_eu_attention *ea = (void *)e;
>>>>>> +
>>>>>> +		sprintf(b, "client_handle=%llu, exec_queue_handle=%llu, "
>>>>>> +			   "lrc_handle=%llu, bitmask_size=%d",
>>>>>> +			ea->client_handle, ea->exec_queue_handle,
>>>>>> +			ea->lrc_handle, ea->bitmask_size);
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
>>>>>> +
>>>>>> +		sprintf(b, "client_handle=%llu, vm_handle=%llu, flags=0x%x, num_binds=%u",
>>>>>> +			evmb->client_handle, evmb->vm_handle, evmb->flags, evmb->num_binds);
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_op *op = (void *)e;
>>>>>> +
>>>>>> +		sprintf(b, "vm_bind_ref_seqno=%lld, addr=%016llx, range=%llu num_extensions=%llu",
>>>>>> +			op->vm_bind_ref_seqno, op->addr, op->range, op->num_extensions);
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_ufence *f = (void *)e;
>>>>>> +
>>>>>> +		sprintf(b, "vm_bind_ref_seqno=%lld", f->vm_bind_ref_seqno);
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
>>>>>> +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
>>>>>> +
>>>>>> +		sprintf(b, "client_handle=%llu, metadata_handle=%llu, type=%llu, len=%llu",
>>>>>> +			em->client_handle, em->metadata_handle, em->type, em->len);
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *op = (void *)e;
>>>>>> +
>>>>>> +		sprintf(b, "vm_bind_op_ref_seqno=%lld, metadata_handle=%llu, metadata_cookie=%llu",
>>>>>> +			op->vm_bind_op_ref_seqno, op->metadata_handle, op->metadata_cookie);
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	default:
>>>>>> +		strcpy(b, "<...>");
>>>>>> +	}
>>>>>> +
>>>>>> +	return b;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_event_to_str:
>>>>>> + * @e: pointer to event
>>>>>> + * @buf: target to write string representation of @e
>>>>>> + * @len: size of target buffer @buf
>>>>>> + *
>>>>>> + * Creates string representation for given event.
>>>>>> + *
>>>>>> + * Returns: the written input buffer pointed by @buf.
>>>>>> + */
>>>>>> +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len)
>>>>>> +{
>>>>>> +	char a[256];
>>>>>> +	char b[256];
>>>>>> +
>>>>>> +	snprintf(buf, len, "(%llu) %15s:%s: %s",
>>>>>> +		 e->seqno,
>>>>>> +		 event_type_to_str(e, a),
>>>>>> +		 flags_to_str(e->flags),
>>>>>> +		 event_members_to_str(e, b));
>>>>>> +
>>>>>> +	return buf;
>>>>>> +}
>>>>>> +
>>>>>> +static void catch_child_failure(void)
>>>>>> +{
>>>>>> +	pid_t pid;
>>>>>> +	int status;
>>>>>> +
>>>>>> +	pid = waitpid(-1, &status, WNOHANG);
>>>>>> +
>>>>>> +	if (pid == 0 || pid == -1)
>>>>>> +		return;
>>>>>> +
>>>>>> +	if (!WIFEXITED(status))
>>>>>> +		return;
>>>>>> +
>>>>>> +	igt_assert_f(WEXITSTATUS(status) == 0, "Client failed!\n");
>>>>>> +}
>>>>>> +
>>>>>> +static int safe_pipe_read(int pipe[2], void *buf, int nbytes, int timeout_ms)
>>>>>> +{
>>>>>> +	int ret;
>>>>>> +	int t = 0;
>>>>>> +	struct pollfd fd = {
>>>>>> +		.fd = pipe[0],
>>>>>> +		.events = POLLIN,
>>>>>> +		.revents = 0
>>>>>> +	};
>>>>>> +
>>>>>> +	/* When child fails we may get stuck forever. Check whether
>>>>>> +	 * the child process ended with an error.
>>>>>> +	 */
>>>>>> +	do {
>>>>>> +		const int interval_ms = 1000;
>>>>>> +
>>>>>> +		ret = poll(&fd, 1, interval_ms);
>>>>>> +
>>>>>> +		if (!ret) {
>>>>>> +			catch_child_failure();
>>>>>> +			t += interval_ms;
>>>>>> +		}
>>>>>> +	} while (!ret && t < timeout_ms);
>>>>>> +
>>>>>> +	if (ret > 0)
>>>>>> +		return read(pipe[0], buf, nbytes);
>>>>>> +
>>>>>> +	return 0;
>>>>>> +}
>>>>>> +
>>>>>> +static uint64_t pipe_read(int pipe[2], int timeout_ms)
>>>>>> +{
>>>>>> +	uint64_t in;
>>>>>> +	uint64_t ret;
>>>>>> +
>>>>>> +	ret = safe_pipe_read(pipe, &in, sizeof(in), timeout_ms);
>>>>>> +	igt_assert(ret == sizeof(in));
>>>>>> +
>>>>>> +	return in;
>>>>>> +}
>>>>>> +
>>>>>> +static void pipe_signal(int pipe[2], uint64_t token)
>>>>>> +{
>>>>>> +	igt_assert(write(pipe[1], &token, sizeof(token)) == sizeof(token));
>>>>>> +}
>>>>>> +
>>>>>> +static void pipe_close(int pipe[2])
>>>>>> +{
>>>>>> +	if (pipe[0] != -1)
>>>>>> +		close(pipe[0]);
>>>>>> +
>>>>>> +	if (pipe[1] != -1)
>>>>>> +		close(pipe[1]);
>>>>>> +}
>>>>>> +
>>>>>> +static uint64_t __wait_token(int p[2], const uint64_t token, int timeout_ms)
>>>>>> +{
>>>>>> +	uint64_t in;
>>>>>> +
>>>>>> +	in = pipe_read(p, timeout_ms);
>>>>>> +
>>>>>> +	igt_assert_eq(in, token);
>>>>>> +
>>>>>> +	return pipe_read(p, timeout_ms);
>>>>>> +}
>>>>>> +
>>>>>> +static uint64_t client_wait_token(struct xe_eudebug_client *c,
>>>>>> +				 const uint64_t token)
>>>>>> +{
>>>>>> +	return __wait_token(c->p_in, token, c->timeout_ms);
>>>>>> +}
>>>>>> +
>>>>>> +static uint64_t wait_from_client(struct xe_eudebug_client *c,
>>>>>> +				 const uint64_t token)
>>>>>> +{
>>>>>> +	return __wait_token(c->p_out, token, c->timeout_ms);
>>>>>> +}
>>>>>> +
>>>>>> +static void token_signal(int p[2], const uint64_t token, const uint64_t value)
>>>>>> +{
>>>>>> +	pipe_signal(p, token);
>>>>>> +	pipe_signal(p, value);
>>>>>> +}
>>>>>> +
>>>>>> +static void client_signal(struct xe_eudebug_client *c,
>>>>>> +			  const uint64_t token,
>>>>>> +			  const uint64_t value)
>>>>>> +{
>>>>>> +	token_signal(c->p_out, token, value);
>>>>>> +}
>>>>>> +
>>>>>> +static int __xe_eudebug_connect(int fd, pid_t pid, uint32_t flags, uint64_t events)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_connect param = {
>>>>>> +		.pid = pid,
>>>>>> +		.flags = flags,
>>>>>> +	};
>>>>>> +	int debugfd;
>>>>>> +
>>>>>> +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
>>>>>> +
>>>>>> +	if (debugfd < 0)
>>>>>> +		return -errno;
>>>>>> +
>>>>>> +	return debugfd;
>>>>>> +}
>>>>>> +
>>>>>> +static void event_log_write_to_fd(struct xe_eudebug_event_log *l, int fd)
>>>>>> +{
>>>>>> +	igt_assert_eq(write(fd, &l->head, sizeof(l->head)),
>>>>>> +		      sizeof(l->head));
>>>>>> +
>>>>>> +	igt_assert_eq(write(fd, l->log, l->head), l->head);
>>>>>> +}
>>>>>> +
>>>>>> +static void read_all(int fd, void *buf, size_t nbytes)
>>>>>> +{
>>>>>> +	ssize_t remaining_size = nbytes;
>>>>>> +	ssize_t current_size = 0;
>>>>>> +	ssize_t read_size = 0;
>>>>>> +
>>>>>> +	do {
>>>>>> +		read_size = read(fd, buf + current_size, remaining_size);
>>>>>> +		igt_assert_f(read_size >= 0, "read failed: %s\n", strerror(errno));
>>>>>> +
>>>>>> +		current_size += read_size;
>>>>>> +		remaining_size -= read_size;
>>>>>> +	} while (remaining_size > 0 && read_size > 0);
>>>>>> +
>>>>>> +	igt_assert_eq(current_size, nbytes);
>>>>>> +}
>>>>>> +
>>>>>> +static void event_log_read_from_fd(struct xe_eudebug_event_log *l, int fd)
>>>>>> +{
>>>>>> +	read_all(fd, &l->head, sizeof(l->head));
>>>>>> +	igt_assert_lt(l->head, l->max_size);
>>>>>> +
>>>>>> +	read_all(fd, l->log, l->head);
>>>>>> +}
>>>>>> +
>>>>>> +typedef int (*cmp_fn_t)(struct drm_xe_eudebug_event *, void *);
>>>>>> +
>>>>>> +static struct drm_xe_eudebug_event *
>>>>>> +event_cmp(struct xe_eudebug_event_log *l,
>>>>>> +	  struct drm_xe_eudebug_event *current,
>>>>>> +	  cmp_fn_t match,
>>>>>> +	  void *data)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event *e = current;
>>>>>> +
>>>>>> +	xe_eudebug_for_each_event(e, l) {
>>>>>> +		if (match(e, data))
>>>>>> +			return e;
>>>>>> +	}
>>>>>> +
>>>>>> +	return NULL;
>>>>>> +}
>>>>>> +
>>>>>> +static int match_type_and_flags(struct drm_xe_eudebug_event *a, void *data)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event *b = data;
>>>>>> +
>>>>>> +	if (a->type == b->type &&
>>>>>> +	    a->flags == b->flags)
>>>>>> +		return 1;
>>>>>> +
>>>>>> +	return 0;
>>>>>> +}
>>>>>> +
>>>>>> +static int match_fields(struct drm_xe_eudebug_event *a, void *data)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event *b = data;
>>>>>> +	int ret = 0;
>>>>>> +
>>>>>> +	ret = match_type_and_flags(a, data);
>>>>>> +	if (!ret)
>>>>>> +		return ret;
>>>>>> +
>>>>>> +	ret = 0;
>>>>>> +
>>>>>> +	switch (a->type) {
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>>>>>> +		struct drm_xe_eudebug_event_exec_queue *ae = (void *)a;
>>>>>> +		struct drm_xe_eudebug_event_exec_queue *be = (void *)b;
>>>>>> +
>>>>>> +		if (ae->engine_class == be->engine_class && ae->width == be->width)
>>>>>> +			ret = 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind *ea = (void *)a;
>>>>>> +		struct drm_xe_eudebug_event_vm_bind *eb = (void *)b;
>>>>>> +
>>>>>> +		if (ea->num_binds == eb->num_binds)
>>>>>> +			ret = 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_op *ea = (void *)a;
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_op *eb = (void *)b;
>>>>>> +
>>>>>> +		if (ea->addr == eb->addr && ea->range == eb->range &&
>>>>>> +		    ea->num_extensions == eb->num_extensions)
>>>>>> +			ret = 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *ea = (void *)a;
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eb = (void *)b;
>>>>>> +
>>>>>> +		if (ea->metadata_handle == eb->metadata_handle &&
>>>>>> +		    ea->metadata_cookie == eb->metadata_cookie)
>>>>>> +			ret = 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +
>>>>>> +	default:
>>>>>> +		ret = 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +
>>>>>> +	return ret;
>>>>>> +}
>>>>>> +
>>>>>> +static int match_client_handle(struct drm_xe_eudebug_event *e, void *data)
>>>>>> +{
>>>>>> +	struct match_dto *md = (void *)data;
>>>>>> +	uint64_t *bind_seqno = md->bind_seqno;
>>>>>> +	uint64_t *bind_op_seqno = md->bind_op_seqno;
>>>>>> +	uint64_t h = md->client_handle;
>>>>>> +
>>>>>> +	if (XE_EUDEBUG_EVENT_IS_FILTERED(e->type, md->filter))
>>>>>> +		return 0;
>>>>>> +
>>>>>> +	switch (e->type) {
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
>>>>>> +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
>>>>>> +
>>>>>> +		if (client->client_handle == h)
>>>>>> +			return 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM: {
>>>>>> +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
>>>>>> +
>>>>>> +		if (vm->client_handle == h)
>>>>>> +			return 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>>>>>> +		struct drm_xe_eudebug_event_exec_queue *ee = (struct drm_xe_eudebug_event_exec_queue *)e;
>>>>>> +
>>>>>> +		if (ee->client_handle == h)
>>>>>> +			return 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind *evmb = (struct drm_xe_eudebug_event_vm_bind *)e;
>>>>>> +
>>>>>> +		if (evmb->client_handle == h) {
>>>>>> +			*bind_seqno = evmb->base.seqno;
>>>>>> +			return 1;
>>>>>> +		}
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_op *eo = (struct drm_xe_eudebug_event_vm_bind_op *)e;
>>>>>> +
>>>>>> +		if (eo->vm_bind_ref_seqno == *bind_seqno) {
>>>>>> +			*bind_op_seqno = eo->base.seqno;
>>>>>> +			return 1;
>>>>>> +		}
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_ufence *ef = (struct drm_xe_eudebug_event_vm_bind_ufence *)e;
>>>>>> +
>>>>>> +		if (ef->vm_bind_ref_seqno == *bind_seqno)
>>>>>> +			return 1;
>>>>>> +
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
>>>>>> +		struct drm_xe_eudebug_event_metadata *em = (struct drm_xe_eudebug_event_metadata *)e;
>>>>>> +
>>>>>> +		if (em->client_handle == h)
>>>>>> +			return 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *eo = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)e;
>>>>>> +
>>>>>> +		if (eo->vm_bind_op_ref_seqno == *bind_op_seqno)
>>>>>> +			return 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	default:
>>>>>> +		break;
>>>>>> +	}
>>>>>> +
>>>>>> +	return 0;
>>>>>> +}
>>>>>> +
>>>>>> +static int match_opposite_resource(struct drm_xe_eudebug_event *e, void *data)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event *d = (void *)data;
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
>>>>>> +	d->flags &= ~(DRM_XE_EUDEBUG_EVENT_NEED_ACK);
>>>>>> +	ret = match_type_and_flags(e, data);
>>>>>> +	d->flags ^= DRM_XE_EUDEBUG_EVENT_CREATE | DRM_XE_EUDEBUG_EVENT_DESTROY;
>>>>>> +
>>>>>> +	if (!ret)
>>>>>> +		return 0;
>>>>>> +
>>>>>> +	switch (e->type) {
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_OPEN: {
>>>>>> +		struct drm_xe_eudebug_event_client *client = (struct drm_xe_eudebug_event_client *)e;
>>>>>> +		struct drm_xe_eudebug_event_client *filter = (struct drm_xe_eudebug_event_client *)data;
>>>>>> +
>>>>>> +		if (client->client_handle == filter->client_handle)
>>>>>> +			return 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM: {
>>>>>> +		struct drm_xe_eudebug_event_vm *vm = (struct drm_xe_eudebug_event_vm *)e;
>>>>>> +		struct drm_xe_eudebug_event_vm *filter = (struct drm_xe_eudebug_event_vm *)data;
>>>>>> +
>>>>>> +		if (vm->vm_handle == filter->vm_handle)
>>>>>> +			return 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE: {
>>>>>> +		struct drm_xe_eudebug_event_exec_queue *ee = (void *)e;
>>>>>> +		struct drm_xe_eudebug_event_exec_queue *filter = (struct drm_xe_eudebug_event_exec_queue *)data;
>>>>>> +
>>>>>> +		if (ee->exec_queue_handle == filter->exec_queue_handle)
>>>>>> +			return 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind *evmb = (void *)e;
>>>>>> +		struct drm_xe_eudebug_event_vm_bind *filter = (struct drm_xe_eudebug_event_vm_bind *)data;
>>>>>> +
>>>>>> +		if (evmb->vm_handle == filter->vm_handle &&
>>>>>> +		    evmb->num_binds == filter->num_binds)
>>>>>> +			return 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_op *avmb = (void *)e;
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_op *filter = (struct drm_xe_eudebug_event_vm_bind_op *)data;
>>>>>> +
>>>>>> +		if (avmb->addr == filter->addr &&
>>>>>> +		    avmb->range == filter->range)
>>>>>> +			return 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_METADATA: {
>>>>>> +		struct drm_xe_eudebug_event_metadata *em = (void *)e;
>>>>>> +		struct drm_xe_eudebug_event_metadata *filter = (struct drm_xe_eudebug_event_metadata *)data;
>>>>>> +
>>>>>> +		if (em->metadata_handle == filter->metadata_handle)
>>>>>> +			return 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	case DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA: {
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *avmb = (void *)e;
>>>>>> +		struct drm_xe_eudebug_event_vm_bind_op_metadata *filter = (struct drm_xe_eudebug_event_vm_bind_op_metadata *)data;
>>>>>> +
>>>>>> +		if (avmb->metadata_handle == filter->metadata_handle &&
>>>>>> +		    avmb->metadata_cookie == filter->metadata_cookie)
>>>>>> +			return 1;
>>>>>> +		break;
>>>>>> +	}
>>>>>> +
>>>>>> +	default:
>>>>>> +		break;
>>>>>> +	}
>>>>>> +	return 0;
>>>>>> +}
>>>>>> +
>>>>>> +static int match_full(struct drm_xe_eudebug_event *e, void *data)
>>>>>> +{
>>>>>> +	struct seqno_list_entry *sl;
>>>>>> +
>>>>>> +	struct match_dto *md = (void *)data;
>>>>>> +	int ret = 0;
>>>>>> +
>>>>>> +	ret = match_client_handle(e, md);
>>>>>> +	if (!ret)
>>>>>> +		return 0;
>>>>>> +
>>>>>> +	ret = match_fields(e, md->target);
>>>>>> +	if (!ret)
>>>>>> +		return 0;
>>>>>> +
>>>>>> +	igt_list_for_each_entry(sl, md->seqno_list, link) {
>>>>>> +		if (sl->seqno == e->seqno)
>>>>>> +			return 0;
>>>>>> +	}
>>>>>> +
>>>>>> +	return 1;
>>>>>> +}
>>>>>> +
>>>>>> +static struct drm_xe_eudebug_event *
>>>>>> +event_type_match(struct xe_eudebug_event_log *l,
>>>>>> +		 struct drm_xe_eudebug_event *target,
>>>>>> +		 struct drm_xe_eudebug_event *current)
>>>>>> +{
>>>>>> +	return event_cmp(l, current, match_type_and_flags, target);
>>>>>> +}
>>>>>> +
>>>>>> +static struct drm_xe_eudebug_event *
>>>>>> +client_match(struct xe_eudebug_event_log *l,
>>>>>> +	     uint64_t client_handle,
>>>>>> +	     struct drm_xe_eudebug_event *current,
>>>>>> +	     uint32_t filter,
>>>>>> +	     uint64_t *bind_seqno,
>>>>>> +	     uint64_t *bind_op_seqno)
>>>>>> +{
>>>>>> +	struct match_dto md = {
>>>>>> +		.client_handle = client_handle,
>>>>>> +		.filter = filter,
>>>>>> +		.bind_seqno = bind_seqno,
>>>>>> +		.bind_op_seqno = bind_op_seqno,
>>>>>> +	};
>>>>>> +
>>>>>> +	return event_cmp(l, current, match_client_handle, &md);
>>>>>> +}
>>>>>> +
>>>>>> +static struct drm_xe_eudebug_event *
>>>>>> +opposite_event_match(struct xe_eudebug_event_log *l,
>>>>>> +		    struct drm_xe_eudebug_event *target,
>>>>>> +		    struct drm_xe_eudebug_event *current)
>>>>>> +{
>>>>>> +	return event_cmp(l, current, match_opposite_resource, target);
>>>>>> +}
>>>>>> +
>>>>>> +static struct drm_xe_eudebug_event *
>>>>>> +event_match(struct xe_eudebug_event_log *l,
>>>>>> +	    struct drm_xe_eudebug_event *target,
>>>>>> +	    uint64_t client_handle,
>>>>>> +	    struct igt_list_head *seqno_list,
>>>>>> +	    uint64_t *bind_seqno,
>>>>>> +	    uint64_t *bind_op_seqno)
>>>>>> +{
>>>>>> +	struct match_dto md = {
>>>>>> +		.target = target,
>>>>>> +		.client_handle = client_handle,
>>>>>> +		.seqno_list = seqno_list,
>>>>>> +		.bind_seqno = bind_seqno,
>>>>>> +		.bind_op_seqno = bind_op_seqno,
>>>>>> +	};
>>>>>> +
>>>>>> +	return event_cmp(l, NULL, match_full, &md);
>>>>>> +}
>>>>>> +
>>>>>> +static void compare_client(struct xe_eudebug_event_log *c, struct drm_xe_eudebug_event *_ce,
>>>>>> +			   struct xe_eudebug_event_log *d, struct drm_xe_eudebug_event *_de,
>>>>>> +			   uint32_t filter)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event_client *ce = (void *)_ce;
>>>>>> +	struct drm_xe_eudebug_event_client *de = (void *)_de;
>>>>>> +	uint64_t cbs = 0, dbs = 0, cbso = 0, dbso = 0;
>>>>>> +
>>>>>> +	struct igt_list_head matched_seqno_list;
>>>>>> +	struct drm_xe_eudebug_event *hc, *hd;
>>>>>> +	struct seqno_list_entry *entry, *tmp;
>>>>>> +
>>>>>> +	igt_assert(ce);
>>>>>> +	igt_assert(de);
>>>>>> +
>>>>>> +	igt_debug("client: %llu -> %llu\n", ce->client_handle, de->client_handle);
>>>>>> +
>>>>>> +	hc = NULL;
>>>>>> +	hd = NULL;
>>>>>> +	IGT_INIT_LIST_HEAD(&matched_seqno_list);
>>>>>> +
>>>>>> +	do {
>>>>>> +		hc = client_match(c, ce->client_handle, hc, filter, &cbs, &cbso);
>>>>>> +		if (!hc)
>>>>>> +			break;
>>>>>> +
>>>>>> +		hd = event_match(d, hc, de->client_handle, &matched_seqno_list, &dbs, &dbso);
>>>>>> +
>>>>>> +		igt_assert_f(hd, "%s (%llu): no matching event type %u found for client %llu\n",
>>>>>> +			     c->name,
>>>>>> +			     hc->seqno,
>>>>>> +			     hc->type,
>>>>>> +			     ce->client_handle);
>>>>>> +
>>>>>> +		igt_debug("comparing %s %llu vs %s %llu\n",
>>>>>> +			  c->name, hc->seqno, d->name, hd->seqno);
>>>>>> +
>>>>>> +		/*
>>>>>> +		 * Store the seqno of the event that was matched above,
>>>>>> +		 * inside 'matched_seqno_list', to avoid it getting matched
>>>>>> +		 * by subsequent 'event_match' calls.
>>>>>> +		 */
>>>>>> +		entry = malloc(sizeof(*entry));
>>>>>> +		igt_assert(entry);
>>>>>> +		entry->seqno = hd->seqno;
>>>>>> +		igt_list_add(&entry->link, &matched_seqno_list);
>>>>>> +	} while (hc);
>>>>>> +
>>>>>> +	igt_list_for_each_entry_safe(entry, tmp, &matched_seqno_list, link)
>>>>>> +		free(entry);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_event_log_find_seqno:
>>>>>> + * @l: event log pointer
>>>>>> + * @seqno: seqno of event to be found
>>>>>> + *
>>>>>> + * Finds the event with given seqno in the event log.
>>>>>> + *
>>>>>> + * Returns: pointer to the event with given seqno within @l, or NULL if
>>>>>> + * seqno is not present.
>>>>>> + */
>>>>>> +struct drm_xe_eudebug_event *
>>>>>> +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event *e = NULL, *found = NULL;
>>>>>> +
>>>>>> +	igt_assert_neq(seqno, 0);
>>>>>> +	/*
>>>>>> +	 * Try to catch if seqno is corrupted and prevent too long tests,
>>>>>> +	 * as our post processing of events is not optimized.
>>>>>> +	 */
>>>>>> +	igt_assert_lt(seqno, 10 * 1000 * 1000);
>>>>>> +
>>>>>> +	xe_eudebug_for_each_event(e, l) {
>>>>>> +		if (e->seqno == seqno) {
>>>>>> +			if (found) {
>>>>>> +				igt_warn("Found multiple events with the same seqno %lu\n", seqno);
>>>>>> +				xe_eudebug_event_log_print(l, false);
>>>>>> +				igt_assert(!found);
>>>>>> +			}
>>>>>> +			found = e;
>>>>>> +		}
>>>>>> +	}
>>>>>> +
>>>>>> +	return found;
>>>>>> +}
>>>>>> +
>>>>>> +static void event_log_sort(struct xe_eudebug_event_log *l)
>>>>>> +{
>>>>>> +	struct xe_eudebug_event_log *tmp;
>>>>>> +	struct drm_xe_eudebug_event *e = NULL;
>>>>>> +	uint64_t last_seqno = 0;
>>>>>> +	uint64_t events = 0, added = 0;
>>>>>> +	uint64_t i;
>>>>>> +
>>>>>> +	xe_eudebug_for_each_event(e, l) {
>>>>>> +		if (e->seqno > last_seqno)
>>>>>> +			last_seqno = e->seqno;
>>>>>> +
>>>>>> +		events++;
>>>>>> +	}
>>>>>> +
>>>>>> +	tmp = xe_eudebug_event_log_create("tmp", l->max_size);
>>>>>> +
>>>>>> +	for (i = 1; i <= last_seqno; i++) {
>>>>>> +		e = xe_eudebug_event_log_find_seqno(l, i);
>>>>>> +		if (e) {
>>>>>> +			xe_eudebug_event_log_write(tmp, e);
>>>>>> +			added++;
>>>>>> +		}
>>>>>> +	}
>>>>>> +
>>>>>> +	igt_assert_eq(events, added);
>>>>>> +	igt_assert_eq(tmp->head, l->head);
>>>>>> +
>>>>>> +	memcpy(l->log, tmp->log, tmp->head);
>>>>>> +
>>>>>> +	xe_eudebug_event_log_destroy(tmp);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_connect:
>>>>>> + * @fd: Xe file descriptor
>>>>>> + * @pid: client PID
>>>>>> + * @flags: connection flags
>>>>>> + *
>>>>>> + * Opens the xe eu debugger connection to the process described by @pid
>>>>>> + *
>>>>>> + * Returns: 0 if the debugger was successfully attached, -errno otherwise.
>>>>>> + */
>>>>>> +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags)
>>>>>> +{
>>>>>> +	uint64_t events = 0; /* event filtering not supported yet */
>>>>>> +
>>>>>> +	return __xe_eudebug_connect(fd, pid, flags, events);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_event_log_create:
>>>>>> + * @name: event log identifier
>>>>>> + * @max_size: maximum size of created log
>>>>>> + *
>>>>>> + * Creates an EU debugger event log with size equal to @max_size.
>>>>>> + *
>>>>>> + * Returns: pointer to just created log
>>>>>> + */
>>>>>> +#define MAX_EVENT_LOG_SIZE (32 * 1024 * 1024)
>>>>>> +struct xe_eudebug_event_log *xe_eudebug_event_log_create(const char *name, unsigned int max_size)
>>>>>> +{
>>>>>> +	struct xe_eudebug_event_log *l;
>>>>>> +
>>>>>> +	l = calloc(1, sizeof(*l));
>>>>>> +	igt_assert(l);
>>>>>> +	l->log = calloc(1, max_size);
>>>>>> +	igt_assert(l->log);
>>>>>> +	l->max_size = max_size;
>>>>>> +	strncpy(l->name, name, sizeof(l->name) - 1);
>>>>>> +	pthread_mutex_init(&l->lock, NULL);
>>>>>> +
>>>>>> +	return l;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_event_log_destroy:
>>>>>> + * @l: event log pointer
>>>>>> + *
>>>>>> + * Frees given event log @l.
>>>>>> + */
>>>>>> +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l)
>>>>>> +{
>>>>>> +	pthread_mutex_destroy(&l->lock);
>>>>>> +	free(l->log);
>>>>>> +	free(l);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_event_log_write:
>>>>>> + * @l: event log pointer
>>>>>> + * @e: event to be written to event log
>>>>>> + *
>>>>>> + * Writes event @e to the event log, thread-safe.
>>>>>> + */
>>>>>> +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e)
>>>>>> +{
>>>>>> +	igt_assert(e->seqno);
>>>>>> +	/*
>>>>>> +	 * Try to catch if seqno is corrupted and prevent too long tests,
>>>>>> +	 * as our post processing of events is not optimized.
>>>>>> +	 */
>>>>>> +	igt_assert_lt(e->seqno, 10 * 1000 * 1000);
>>>>>> +
>>>>>> +	pthread_mutex_lock(&l->lock);
>>>>>> +	igt_assert_lt(l->head + e->len, l->max_size);
>>>>>> +	memcpy(l->log + l->head, e, e->len);
>>>>>> +	l->head += e->len;
>>>>>> +
>>>>>> +#ifdef DEBUG_LOG
>>>>>> +	igt_info("%s: wrote %u bytes to eventlog, free %u bytes\n",
>>>>>> +		 l->name, e->len, l->max_size - l->head);
>>>>>> +#endif
>>>>>> +	pthread_mutex_unlock(&l->lock);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_event_log_print:
>>>>>> + * @l: event log pointer
>>>>>> + * @debug: when true function uses igt_debug instead of igt_info.
>>>>>> + *
>>>>>> + * Prints given event log.
>>>>>> + */
>>>>>> +void
>>>>>> +xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event *e = NULL;
>>>>>> +	int level = debug ? IGT_LOG_DEBUG : IGT_LOG_INFO;
>>>>>> +	char str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
>>>>>> +
>>>>>> +	igt_log(IGT_LOG_DOMAIN, level,
>>>>>> +		"event log '%s' (%u bytes):\n", l->name, l->head);
>>>>>> +
>>>>>> +	xe_eudebug_for_each_event(e, l) {
>>>>>> +		xe_eudebug_event_to_str(e, str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
>>>>>> +		igt_log(IGT_LOG_DOMAIN, level, "%s\n", str);
>>>>>> +	}
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_event_log_compare:
>>>>>> + * @a: event log pointer
>>>>>> + * @b: event log pointer
>>>>>> + * @filter: mask that represents events to be skipped during comparison, useful
>>>>>> + * for events like 'VM_BIND' since they can be asymmetric. Note that
>>>>>> + * 'DRM_XE_EUDEBUG_EVENT_OPEN' will always be matched.
>>>>>> + *
>>>>>> + * Compares and asserts event logs @a, @b if the event
>>>>>> + * sequence matches.
>>>>>> + */
>>>>>> +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *a, struct xe_eudebug_event_log *b,
>>>>>> +				  uint32_t filter)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event *ae = NULL;
>>>>>> +	struct drm_xe_eudebug_event *be = NULL;
>>>>>> +
>>>>>> +	xe_eudebug_for_each_event(ae, a) {
>>>>>> +		if (ae->type == DRM_XE_EUDEBUG_EVENT_OPEN &&
>>>>>> +		    ae->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
>>>>>> +			be = event_type_match(b, ae, be);
>>>>>> +
>>>>>> +			compare_client(a, ae, b, be, filter);
>>>>>> +			compare_client(b, be, a, ae, filter);
>>>>>> +		}
>>>>>> +	}
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_event_log_match_opposite:
>>>>>> + * @l: event log pointer
>>>>>> + * @filter: mask that represents events to be skipped during comparison, useful
>>>>>> + * for events like 'VM_BIND' since they can be asymmetric
>>>>>> + *
>>>>>> + * Matches and asserts content of all opposite events (create vs destroy).
>>>>>> + */
>>>>>> +void
>>>>>> +xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event *ce = NULL;
>>>>>> +	struct drm_xe_eudebug_event *de = NULL;
>>>>>> +
>>>>>> +	xe_eudebug_for_each_event(ce, l) {
>>>>>> +		if (ce->flags & DRM_XE_EUDEBUG_EVENT_CREATE) {
>>>>>> +			uint8_t offset = sizeof(struct drm_xe_eudebug_event);
>>>>>> +			int opposite_matching;
>>>>>> +
>>>>>> +			if (XE_EUDEBUG_EVENT_IS_FILTERED(ce->type, filter))
>>>>>> +				continue;
>>>>>> +
>>>>>> +			/* No opposite matching for binds */
>>>>>> +			if ((ce->type >= DRM_XE_EUDEBUG_EVENT_VM_BIND &&
>>>>>> +			     ce->type <= DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE) ||
>>>>>> +			    ce->type == DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA)
>>>>>> +				continue;
>>>>>> +
>>>>>> +			de = opposite_event_match(l, ce, ce);
>>>>>> +
>>>>>> +			igt_assert_f(de, "no opposite event of type %u found\n", ce->type);
>>>>>> +
>>>>>> +			igt_assert_eq(ce->len, de->len);
>>>>>> +			opposite_matching = memcmp((uint8_t *)de + offset,
>>>>>> +						   (uint8_t *)ce + offset,
>>>>>> +						   de->len - offset) == 0;
>>>>>> +
>>>>>> +			igt_assert_f(opposite_matching,
>>>>>> +				     "%s: create|destroy event not "
>>>>>> +				     "matching (%llu) vs (%llu)\n",
>>>>>> +				     l->name, de->seqno, ce->seqno);
>>>>>> +		}
>>>>>> +	}
>>>>>> +}
>>>>>> +
>>>>>> +static void debugger_run_triggers(struct xe_eudebug_debugger *d,
>>>>>> +				  struct drm_xe_eudebug_event *e)
>>>>>> +{
>>>>>> +	struct event_trigger *t;
>>>>>> +
>>>>>> +	igt_list_for_each_entry(t, &d->triggers, link) {
>>>>>> +		if (e->type == t->type)
>>>>>> +			t->fn(d, e);
>>>>>> +	}
>>>>>> +}
>>>>>> +
>>>>>> +#define MAX_EVENT_SIZE (32 * 1024)
>>>>>> +static int
>>>>>> +xe_eudebug_read_event(int fd, struct drm_xe_eudebug_event *event)
>>>>>> +{
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	event->type = DRM_XE_EUDEBUG_EVENT_READ;
>>>>>> +	event->flags = 0;
>>>>>> +	event->len = MAX_EVENT_SIZE;
>>>>>> +
>>>>>> +	ret = igt_ioctl(fd, DRM_XE_EUDEBUG_IOCTL_READ_EVENT, event);
>>>>>> +	if (ret < 0)
>>>>>> +		return -errno;
>>>>>> +
>>>>>> +	return ret;
>>>>>> +}
>>>>>> +
>>>>>> +static void *debugger_worker_loop(void *data)
>>>>>> +{
>>>>>> +	uint8_t buf[MAX_EVENT_SIZE];
>>>>>> +	struct drm_xe_eudebug_event *e = (void *)buf;
>>>>>> +	struct xe_eudebug_debugger *d = data;
>>>>>> +	struct pollfd p = {
>>>>>> +		.events = POLLIN,
>>>>>> +		.revents = 0,
>>>>>> +	};
>>>>>> +	int timeout_ms = 100, ret;
>>>>>> +
>>>>>> +	igt_assert(d->master_fd >= 0);
>>>>>> +
>>>>>> +	do {
>>>>>> +		p.fd = d->fd;
>>>>>> +		ret = poll(&p, 1, timeout_ms);
>>>>>> +
>>>>>> +		if (ret == -1) {
>>>>>> +			igt_info("poll failed with errno %d\n", errno);
>>>>>> +			break;
>>>>>> +		}
>>>>>> +
>>>>>> +		if (ret == 1 && (p.revents & POLLIN)) {
>>>>>> +			int err = xe_eudebug_read_event(d->fd, e);
>>>>>> +
>>>>>> +			if (!err) {
>>>>>> +				++d->event_count;
>>>>>> +
>>>>>> +				xe_eudebug_event_log_write(d->log, e);
>>>>>> +				debugger_run_triggers(d, e);
>>>>>> +			} else {
>>>>>> +				igt_info("xe_eudebug_read_event returned %d\n", err);
>>>>>> +			}
>>>>>> +		}
>>>>>> +	} while ((ret && READ_ONCE(d->worker_state) == DEBUGGER_WORKER_QUITTING) ||
>>>>>> +		 READ_ONCE(d->worker_state) == DEBUGGER_WORKER_ACTIVE);
>>>>>> +
>>>>>> +	d->worker_state = DEBUGGER_WORKER_INACTIVE;
>>>>>> +	return NULL;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_debugger_available:
>>>>>> + * @fd: Xe file descriptor
>>>>>> + *
>>>>>> + * Returns: true if the debugger connection is available, false otherwise.
>>>>>> + */
>>>>>> +bool xe_eudebug_debugger_available(int fd)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_connect param = { .pid = getpid() };
>>>>>> +	int debugfd;
>>>>>> +
>>>>>> +	debugfd = igt_ioctl(fd, DRM_IOCTL_XE_EUDEBUG_CONNECT, &param);
>>>>>> +	if (debugfd >= 0)
>>>>>> +		close(debugfd);
>>>>>> +
>>>>>> +	return debugfd >= 0;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_debugger_create:
>>>>>> + * @master_fd: xe client used to open the debugger connection
>>>>>> + * @flags: flags stored in a debugger structure, can be used at will
>>>>>> + * of the caller, i.e. to be used inside triggers.
>>>>>> + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
>>>>>> + * can be shared between client and debugger. Can be NULL.
>>>>>> + *
>>>>>> + * Returns: newly created xe_eudebug_debugger structure with its
>>>>>> + * event log initialized. Note that to open the connection
>>>>>> + * you need to call @xe_eudebug_debugger_attach.
>>>>>> + */
>>>>>> +struct xe_eudebug_debugger *
>>>>>> +xe_eudebug_debugger_create(int master_fd, uint64_t flags, void *data)
>>>>>> +{
>>>>>> +	struct xe_eudebug_debugger *d;
>>>>>> +
>>>>>> +	d = calloc(1, sizeof(*d));
>>>>>> +	igt_assert(d);
>>>>>> +	d->flags = flags;
>>>>>> +	IGT_INIT_LIST_HEAD(&d->triggers);
>>>>>> +	d->log = xe_eudebug_event_log_create("debugger", MAX_EVENT_LOG_SIZE);
>>>>>> +	d->fd = -1;
>>>>>> +	d->master_fd = master_fd;
>>>>>> +	d->ptr = data;
>>>>>> +
>>>>>> +	return d;
>>>>>> +}
>>>>>> +
>>>>>> +static void debugger_destroy_triggers(struct xe_eudebug_debugger *d)
>>>>>> +{
>>>>>> +	struct event_trigger *t, *tmp;
>>>>>> +
>>>>>> +	igt_list_for_each_entry_safe(t, tmp, &d->triggers, link)
>>>>>> +		free(t);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_debugger_destroy:
>>>>>> + * @d: pointer to the debugger
>>>>>> + *
>>>>>> + * Frees xe_eudebug_debugger structure pointed by @d. If the debugger
>>>>>> + * connection was still opened it terminates it.
>>>>>> + */
>>>>>> +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d)
>>>>>> +{
>>>>>> +	if (d->worker_state)
>>>>>> +		xe_eudebug_debugger_stop_worker(d, 1);
>>>>>> +
>>>>>> +	if (d->target_pid)
>>>>>> +		xe_eudebug_debugger_dettach(d);
>>>>>> +
>>>>>> +	xe_eudebug_event_log_destroy(d->log);
>>>>>> +	debugger_destroy_triggers(d);
>>>>>> +	free(d);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_debugger_attach:
>>>>>> + * @d: pointer to the debugger
>>>>>> + * @c: pointer to the client
>>>>>> + *
>>>>>> + * Opens the xe eu debugger connection to the process described by @c (c->pid)
>>>>>> + *
>>>>>> + * Returns: 0 if the debugger was successfully attached, -errno otherwise.
>>>>>> + */
>>>>>> +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d,
>>>>>> +			       struct xe_eudebug_client *c)
>>>>>> +{
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	igt_assert_eq(d->fd, -1);
>>>>>> +	igt_assert_neq(c->pid, 0);
>>>>>> +	ret = xe_eudebug_connect(d->master_fd, c->pid, 0);
>>>>>> +
>>>>>> +	if (ret < 0)
>>>>>> +		return ret;
>>>>>> +
>>>>>> +	d->fd = ret;
>>>>>> +	d->target_pid = c->pid;
>>>>>> +	d->p_client[0] = c->p_in[0];
>>>>>> +	d->p_client[1] = c->p_in[1];
>>>>>> +
>>>>>> +	igt_debug("debugger connected to %lu\n", d->target_pid);
>>>>>> +
>>>>>> +	return 0;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_debugger_dettach:
>>>>>> + * @d: pointer to the debugger
>>>>>> + *
>>>>>> + * Closes previously opened xe eu debugger connection. Asserts if
>>>>>> + * the debugger has active session.
>>>>>> + */
>>>>>> +void xe_eudebug_debugger_dettach(struct xe_eudebug_debugger *d)
>>>>>> +{
>>>>>> +	igt_assert(d->target_pid);
>>>>>> +	close(d->fd);
>>>>>> +	d->target_pid = 0;
>>>>>> +	d->fd = -1;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_debugger_add_trigger:
>>>>>> + * @d: pointer to the debugger
>>>>>> + * @type: the type of the event which activates the trigger
>>>>>> + * @fn: function to be called when event of @type was read by the debugger.
>>>>>> + *
>>>>>> + * Adds function @fn to the list of triggers activated when event of @type
>>>>>> + * has been read by worker.
>>>>>> + * Note: Triggers are activated by the worker.
>>>>>> + */
>>>>>> +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d,
>>>>>> +				     int type, xe_eudebug_trigger_fn fn)
>>>>>> +{
>>>>>> +	struct event_trigger *t;
>>>>>> +
>>>>>> +	t = calloc(1, sizeof(*t));
>>>>>> +	igt_assert(t);
>>>>>> +	IGT_INIT_LIST_HEAD(&t->link);
>>>>>> +	t->type = type;
>>>>>> +	t->fn = fn;
>>>>>> +
>>>>>> +	igt_list_add_tail(&t->link, &d->triggers);
>>>>>> +	igt_debug("added trigger %p\n", t);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_debugger_start_worker:
>>>>>> + * @d: pointer to the debugger
>>>>>> + *
>>>>>> + * Starts the debugger worker. The worker is responsible for reading all
>>>>>> + * incoming events from the debugger, putting them into the debugger log
>>>>>> + * and executing the appropriate event triggers. Note that using the
>>>>>> + * debugger's event log while the worker is running is not safe.
>>>>>> + */
>>>>>> +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d)
>>>>>> +{
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	d->worker_state = DEBUGGER_WORKER_ACTIVE;
>>>>>> +	ret = pthread_create(&d->worker_thread, NULL, &debugger_worker_loop, d);
>>>>>> +
>>>>>> +	igt_assert_f(ret == 0, "Debugger worker thread creation failed!\n");
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_debugger_stop_worker:
>>>>>> + * @d: pointer to the debugger
>>>>>> + *
>>>>>> + * Stops the debugger worker. Event log is sorted by seqno after closure.
>>>>>> + */
>>>>>> +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d,
>>>>>> +				     int timeout_s)
>>>>>> +{
>>>>>> +	struct timespec t = {};
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	igt_assert(d->worker_state);
>>>>>> +
>>>>>> +	d->worker_state = DEBUGGER_WORKER_QUITTING; /* First time be polite. */
>>>>>> +	igt_assert_eq(clock_gettime(CLOCK_REALTIME, &t), 0);
>>>>>> +	t.tv_sec += timeout_s;
>>>>>> +
>>>>>> +	ret = pthread_timedjoin_np(d->worker_thread, NULL, &t);
>>>>>> +
>>>>>> +	if (ret == ETIMEDOUT) {
>>>>>> +		d->worker_state = DEBUGGER_WORKER_INACTIVE;
>>>>>> +		ret = pthread_join(d->worker_thread, NULL);
>>>>>> +	}
>>>>>> +
>>>>>> +	igt_assert_f(ret == 0 || ret == ESRCH,
>>>>>> +		     "pthread join failed with error %d!\n", ret);
>>>>>> +
>>>>>> +	event_log_sort(d->log);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_debugger_signal_stage:
>>>>>> + * @d: pointer to the debugger
>>>>>> + * @stage: stage to signal
>>>>>> + *
>>>>>> + * Signals to client, waiting in xe_eudebug_client_wait_stage(),
>>>>>> + * releasing it to proceed.
>>>>>> + */
>>>>>> +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage)
>>>>>> +{
>>>>>> +	token_signal(d->p_client, CLIENT_STAGE, stage);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_debugger_wait_stage:
>>>>>> + * @s: pointer to xe_eudebug_session structure
>>>>>> + * @stage: stage to wait on
>>>>>> + *
>>>>>> + * Pauses debugger until the client has signalled the corresponding stage with
>>>>>> + * xe_eudebug_client_signal_stage. This is only for situations where the actual
>>>>>> + * event flow is not enough to coordinate between client/debugger and an extra
>>>>>> + * sync mechanism is needed.
>>>>>> + */
>>>>>> +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage)
>>>>>> +{
>>>>>> +	u64 stage_in;
>>>>>> +
>>>>>> +	igt_debug("debugger xe client fd: %d pausing for stage %lu\n", s->d->master_fd, stage);
>>>>>> +
>>>>>> +	stage_in = wait_from_client(s->c, DEBUGGER_STAGE);
>>>>>> +	igt_debug("debugger xe client fd: %d got stage %lu, expected %lu\n",
>>>>>> +		  s->d->master_fd, stage_in, stage);
>>>>>> +
>>>>>> +	igt_assert_eq(stage_in, stage);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_create:
>>>>>> + * @master_fd: xe client used to open the debugger connection
>>>>>> + * @work: function that opens xe device and executes arbitrary workload
>>>>>> + * @flags: flags stored in a client structure, can be used at will
>>>>>> + * of the caller, i.e. to provide the @work function an additional switch.
>>>>>> + * @data: test's private data, allocated with MAP_SHARED | MAP_ANONYMOUS,
>>>>>> + * can be shared between client and debugger. Accessible via client->ptr.
>>>>>> + * Can be NULL.
>>>>>> + *
>>>>>> + * Forks and creates the debugger process. @work won't be called until
>>>>>> + * xe_eudebug_client_start is called.
>>>>>> + *
>>>>>> + * Returns: newly created xe_eudebug_client structure with its
>>>>>> + * event log initialized.
>>>>>> + */
>>>>>> +struct xe_eudebug_client *xe_eudebug_client_create(int master_fd, xe_eudebug_client_work_fn work,
>>>>>> +						   uint64_t flags, void *data)
>>>>>> +{
>>>>>> +	struct xe_eudebug_client *c;
>>>>>> +
>>>>>> +	c = calloc(1, sizeof(*c));
>>>>>> +	igt_assert(c);
>>>>>> +	c->flags = flags;
>>>>>> +	igt_assert(!pipe(c->p_in));
>>>>>> +	igt_assert(!pipe(c->p_out));
>>>>>> +	c->seqno = 1;
>>>>>> +	c->log = xe_eudebug_event_log_create("client", MAX_EVENT_LOG_SIZE);
>>>>>> +	c->done = 0;
>>>>>> +	c->ptr = data;
>>>>>> +	c->master_fd = master_fd;
>>>>>> +	c->timeout_ms = XE_EUDEBUG_DEFAULT_TIMEOUT_MS;
>>>>>> +
>>>>>> +	igt_fork(child, 1) {
>>>>>> +		int mypid;
>>>>>> +
>>>>>> +		igt_assert_eq(c->pid, 0);
>>>>>> +
>>>>>> +		close(c->p_out[0]);
>>>>>> +		c->p_out[0] = -1;
>>>>>> +		close(c->p_in[1]);
>>>>>> +		c->p_in[1] = -1;
>>>>>> +
>>>>>> +		mypid = getpid();
>>>>>> +		client_signal(c, CLIENT_PID, mypid);
>>>>>> +
>>>>>> +		c->pid = client_wait_token(c, CLIENT_RUN);
>>>>>> +		igt_assert_eq(c->pid, mypid);
>>>>>> +		if (work)
>>>>>> +			work(c);
>>>>>> +
>>>>>> +		client_signal(c, CLIENT_FINI, c->seqno);
>>>>>> +
>>>>>> +		event_log_write_to_fd(c->log, c->p_out[1]);
>>>>>> +
>>>>>> +		c->pid = client_wait_token(c, CLIENT_STOP);
>>>>>> +		igt_assert_eq(c->pid, mypid);
>>>>>> +	}
>>>>>> +
>>>>>> +	close(c->p_out[1]);
>>>>>> +	c->p_out[1] = -1;
>>>>>> +	close(c->p_in[0]);
>>>>>> +	c->p_in[0] = -1;
>>>>>> +
>>>>>> +	c->pid = wait_from_client(c, CLIENT_PID);
>>>>>> +
>>>>>> +	igt_info("client running with pid %d\n", c->pid);
>>>>>> +
>>>>>> +	return c;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_stop:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + *
>>>>>> + * Waits for the end of client's work and exits the process.
>>>>>> + */
>>>>>> +void xe_eudebug_client_stop(struct xe_eudebug_client *c)
>>>>>> +{
>>>>>> +	if (c->pid) {
>>>>>> +		int waitstatus;
>>>>>> +
>>>>>> +		xe_eudebug_client_wait_done(c);
>>>>>> +
>>>>>> +		token_signal(c->p_in, CLIENT_STOP, c->pid);
>>>>>> +		igt_assert_eq(waitpid(c->pid, &waitstatus, 0),
>>>>>> +			      c->pid);
>>>>>> +		c->pid = 0;
>>>>>> +	}
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_destroy:
>>>>>> + * @c: pointer to xe_eudebug_client structure to be freed
>>>>>> + *
>>>>>> + * Frees the @c client structure. Note that it calls xe_eudebug_client_stop if
>>>>>> + * the client process has not terminated yet.
>>>>>> + */
>>>>>> +void xe_eudebug_client_destroy(struct xe_eudebug_client *c)
>>>>>> +{
>>>>>> +	xe_eudebug_client_stop(c);
>>>>>> +	pipe_close(c->p_in);
>>>>>> +	pipe_close(c->p_out);
>>>>>> +	xe_eudebug_event_log_destroy(c->log);
>>>>>> +	free(c);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_get_seqno:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + *
>>>>>> + * Increments and returns current seqno value of the given client @c
>>>>>> + *
>>>>>> + * Returns: incremented seqno
>>>>>> + */
>>>>>> +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c)
>>>>>> +{
>>>>>> +	return c->seqno++;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_start:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + *
>>>>>> + * Starts execution of client's work function within the client's process.
>>>>>> + */
>>>>>> +void xe_eudebug_client_start(struct xe_eudebug_client *c)
>>>>>> +{
>>>>>> +	token_signal(c->p_in, CLIENT_RUN, c->pid);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_wait_done:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + *
>>>>>> + * Waits for the client's work to end and updates the event log.
>>>>>> + * Doesn't terminate the client's process yet.
>>>>>> + */
>>>>>> +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c)
>>>>>> +{
>>>>>> +	if (!c->done) {
>>>>>> +		c->done = 1;
>>>>>> +		c->seqno = wait_from_client(c, CLIENT_FINI);
>>>>>> +		event_log_read_from_fd(c->log, c->p_out[0]);
>>>>>> +	}
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_signal_stage:
>>>>>> + * @c: pointer to the client
>>>>>> + * @stage: stage to signal
>>>>>> + *
>>>>>> + * Signals to the debugger waiting in xe_eudebug_debugger_wait_stage(),
>>>>>> + * releasing it to proceed.
>>>>>> + */
>>>>>> +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage)
>>>>>> +{
>>>>>> +	token_signal(c->p_out, DEBUGGER_STAGE, stage);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_wait_stage:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @stage: stage to wait on
>>>>>> + *
>>>>>> + * Pauses the client until the debugger has signalled the corresponding stage
>>>>>> + * with xe_eudebug_debugger_signal_stage(). This is only for situations where
>>>>>> + * the actual event flow is not enough to coordinate between client and
>>>>>> + * debugger and an extra sync mechanism is needed.
>>>>>> + */
>>>>>> +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage)
>>>>>> +{
>>>>>> +	u64 stage_in;
>>>>>> +
>>>>>> +	if (c->done) {
>>>>>> +		igt_warn("client: %d already done before %lu\n", c->pid, stage);
>>>>>> +		return;
>>>>>> +	}
>>>>>> +
>>>>>> +	igt_debug("client: %d pausing for stage %lu\n", c->pid, stage);
>>>>>> +
>>>>>> +	stage_in = client_wait_token(c, CLIENT_STAGE);
>>>>>> +	igt_debug("client: %d got stage %lu, expected %lu\n", c->pid, stage_in, stage);
>>>>>> +
>>>>>> +	igt_assert_eq(stage_in, stage);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_session_create:
>>>>>> + * @fd: XE file descriptor
>>>>>> + * @work: function passed to the xe_eudebug_client_create
>>>>>> + * @flags: flags passed to client and debugger
>>>>>> + * @test_private: test's data, allocated with MAP_SHARED | MAP_ANONYMOUS,
>>>>>> + * passed to client and debugger. Can be NULL.
>>>>>> + *
>>>>>> + * Creates a session together with its client and debugger structures.
>>>>>> + */
>>>>>> +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
>>>>>> +						     xe_eudebug_client_work_fn work,
>>>>>> +						     unsigned int flags,
>>>>>> +						     void *test_private)
>>>>>> +{
>>>>>> +	struct xe_eudebug_session *s;
>>>>>> +
>>>>>> +	s = calloc(1, sizeof(*s));
>>>>>> +	igt_assert(s);
>>>>>> +
>>>>>> +	s->c = xe_eudebug_client_create(fd, work, flags, test_private);
>>>>>> +	s->d = xe_eudebug_debugger_create(fd, flags, test_private);
>>>>>> +	s->flags = flags;
>>>>>> +
>>>>>> +	return s;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_session_run:
>>>>>> + * @s: pointer to xe_eudebug_session structure
>>>>>> + *
>>>>>> + * Attaches the debugger to the client's process, starts the debugger's
>>>>>> + * async event reader and the client itself, and once the client finishes,
>>>>>> + * stops the debugger worker.
>>>>>> + */
>>>>>> +void xe_eudebug_session_run(struct xe_eudebug_session *s)
>>>>>> +{
>>>>>> +	struct xe_eudebug_debugger *debugger = s->d;
>>>>>> +	struct xe_eudebug_client *client = s->c;
>>>>>> +
>>>>>> +	igt_assert_eq(xe_eudebug_debugger_attach(debugger, client), 0);
>>>>>> +
>>>>>> +	xe_eudebug_debugger_start_worker(debugger);
>>>>>> +
>>>>>> +	xe_eudebug_client_start(client);
>>>>>> +	xe_eudebug_client_wait_done(client);
>>>>>> +
>>>>>> +	xe_eudebug_debugger_stop_worker(debugger, 1);
>>>>>> +
>>>>>> +	xe_eudebug_event_log_print(debugger->log, true);
>>>>>> +	xe_eudebug_event_log_print(client->log, true);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_session_check:
>>>>>> + * @s: pointer to xe_eudebug_session structure
>>>>>> + * @match_opposite: indicates whether the check should match all
>>>>>> + * create and destroy events
>>>>>> + * @filter: mask that represents events to be skipped during comparison, useful
>>>>>> + * for events like 'VM_BIND' since they can be asymmetric
>>>>>> + *
>>>>>> + * Validates the debugger's log against the log created by the client.
>>>>>> + */
>>>>>> +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter)
>>>>>> +{
>>>>>> +	xe_eudebug_event_log_compare(s->c->log, s->d->log, filter);
>>>>>> +
>>>>>> +	if (match_opposite)
>>>>>> +		xe_eudebug_event_log_match_opposite(s->d->log, filter);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_session_destroy:
>>>>>> + * @s: pointer to xe_eudebug_session structure
>>>>>> + *
>>>>>> + * Destroys the session together with its debugger and client.
>>>>>> + */
>>>>>> +void xe_eudebug_session_destroy(struct xe_eudebug_session *s)
>>>>>> +{
>>>>>> +	xe_eudebug_debugger_destroy(s->d);
>>>>>> +	xe_eudebug_client_destroy(s->c);
>>>>>> +
>>>>>> +	free(s);
>>>>>> +}
>>>>>> +
>>>>>> +#define to_base(x) ((struct drm_xe_eudebug_event *)&x)
>>>>>> +
>>>>>> +static void base_event(struct xe_eudebug_client *c,
>>>>>> +		       struct drm_xe_eudebug_event *e,
>>>>>> +		       uint32_t type,
>>>>>> +		       uint32_t flags,
>>>>>> +		       uint64_t size)
>>>>>> +{
>>>>>> +	e->type = type;
>>>>>> +	e->flags = flags;
>>>>>> +	e->seqno = xe_eudebug_client_get_seqno(c);
>>>>>> +	e->len = size;
>>>>>> +}
>>>>>> +
>>>>>> +static void client_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event_client ec;
>>>>>> +
>>>>>> +	base_event(c, to_base(ec), DRM_XE_EUDEBUG_EVENT_OPEN, flags, sizeof(ec));
>>>>>> +
>>>>>> +	ec.client_handle = client_fd;
>>>>>> +
>>>>>> +	xe_eudebug_event_log_write(c->log, (void *)&ec);
>>>>>> +}
>>>>>> +
>>>>>> +static void vm_event(struct xe_eudebug_client *c, uint32_t flags, int client_fd, uint32_t vm_id)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event_vm evm;
>>>>>> +
>>>>>> +	base_event(c, to_base(evm), DRM_XE_EUDEBUG_EVENT_VM, flags, sizeof(evm));
>>>>>> +
>>>>>> +	evm.client_handle = client_fd;
>>>>>> +	evm.vm_handle = vm_id;
>>>>>> +
>>>>>> +	xe_eudebug_event_log_write(c->log, (void *)&evm);
>>>>>> +}
>>>>>> +
>>>>>> +static void exec_queue_event(struct xe_eudebug_client *c, uint32_t flags,
>>>>>> +			     int client_fd, uint32_t vm_id,
>>>>>> +			     uint32_t exec_queue_handle, uint16_t class,
>>>>>> +			     uint16_t width)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event_exec_queue ee;
>>>>>> +
>>>>>> +	base_event(c, to_base(ee), DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
>>>>>> +		   flags, sizeof(ee));
>>>>>> +
>>>>>> +	ee.client_handle = client_fd;
>>>>>> +	ee.vm_handle = vm_id;
>>>>>> +	ee.exec_queue_handle = exec_queue_handle;
>>>>>> +	ee.engine_class = class;
>>>>>> +	ee.width = width;
>>>>>> +
>>>>>> +	xe_eudebug_event_log_write(c->log, (void *)&ee);
>>>>>> +}
>>>>>> +
>>>>>> +static void metadata_event(struct xe_eudebug_client *c, uint32_t flags,
>>>>>> +			   int client_fd, uint32_t id, uint64_t type, uint64_t len)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event_metadata em;
>>>>>> +
>>>>>> +	base_event(c, to_base(em), DRM_XE_EUDEBUG_EVENT_METADATA,
>>>>>> +		   flags, sizeof(em));
>>>>>> +
>>>>>> +	em.client_handle = client_fd;
>>>>>> +	em.metadata_handle = id;
>>>>>> +	em.type = type;
>>>>>> +	em.len = len;
>>>>>> +
>>>>>> +	xe_eudebug_event_log_write(c->log, (void *)&em);
>>>>>> +}
>>>>>> +
>>>>>> +static int enable_getset(int fd, bool *old, bool *new)
>>>>>> +{
>>>>>> +	static const char * const fname = "enable_eudebug";
>>>>>> +	int ret = 0;
>>>>>> +
>>>>>> +	int sysfs, device_fd;
>>>>>> +	bool val_before;
>>>>>> +	struct stat st;
>>>>>> +
>>>>>> +	igt_assert(new || old);
>>>>>> +
>>>>>> +	igt_assert_eq(fstat(fd, &st), 0);
>>>>>> +	sysfs = igt_sysfs_open(fd);
>>>>>> +	if (sysfs < 0)
>>>>>> +		return -1;
>>>>>> +
>>>>>> +	device_fd = openat(sysfs, "device", O_DIRECTORY | O_RDONLY);
>>>>>> +	close(sysfs);
>>>>>> +	if (device_fd < 0)
>>>>>> +		return -1;
>>>>>> +
>>>>>> +	if (!__igt_sysfs_get_boolean(device_fd, fname, &val_before)) {
>>>>>> +		ret = -1;
>>>>>> +		goto out;
>>>>>> +	}
>>>>>> +
>>>>>> +	igt_debug("enable_eudebug before: %d\n", val_before);
>>>>>> +
>>>>>> +	if (old)
>>>>>> +		*old = val_before;
>>>>>> +
>>>>>> +	ret = 0;
>>>>>> +	if (new) {
>>>>>> +		if (__igt_sysfs_set_boolean(device_fd, fname, *new))
>>>>>> +			igt_assert_eq(igt_sysfs_get_boolean(device_fd, fname), *new);
>>>>>> +		else
>>>>>> +			ret = -1;
>>>>>> +	}
>>>>>> +
>>>>>> +out:
>>>>>> +	close(device_fd);
>>>>>> +	return ret;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_enable:
>>>>>> + * @fd: xe client
>>>>>> + * @enable: state toggle - true to enable, false to disable
>>>>>> + *
>>>>>> + * Enables/disables eudebug capability by writing to
>>>>>> + * '/sys/class/drm/card<N>/device/enable_eudebug' sysfs entry.
>>>>>> + *
>>>>>> + * Returns: previous toggle value, i.e. true when eudebugging was enabled,
>>>>>> + * false when eudebugging was disabled.
>>>>>> + */
>>>>>> +bool xe_eudebug_enable(int fd, bool enable)
>>>>>> +{
>>>>>> +	bool old = false;
>>>>>> +	int ret = enable_getset(fd, &old, &enable);
>>>>>> +
>>>>>> +	if (ret) {
>>>>>> +		igt_skip_on(enable);
>>>>>> +		old = false;
>>>>>> +	}
>>>>>> +
>>>>>> +	return old;
>>>>>> +}
>>>>>> +
>>>>>> +/* Eu debugger wrappers around resource creating xe ioctls. */
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_open_driver:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + *
>>>>>> + * Calls drm_reopen_driver() on the client's master fd and logs the
>>>>>> + * corresponding event in client's event log.
>>>>>> + *
>>>>>> + * Returns: valid DRM file descriptor
>>>>>> + */
>>>>>> +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c)
>>>>>> +{
>>>>>> +	int fd;
>>>>>> +
>>>>>> +	fd = drm_reopen_driver(c->master_fd);
>>>>>> +	client_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd);
>>>>>> +
>>>>>> +	return fd;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_close_driver:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @fd: xe client
>>>>>> + *
>>>>>> + * Calls close driver and logs the corresponding event in
>>>>>> + * client's event log.
>>>>>> + */
>>>>>> +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd)
>>>>>> +{
>>>>>> +	client_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd);
>>>>>> +	close(fd);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_vm_create:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @fd: xe client
>>>>>> + * @flags: vm create flags
>>>>>> + * @ext: pointer to the first user extension
>>>>>> + *
>>>>>> + * Calls xe_vm_create() and logs corresponding events
>>>>>> + * (including vm set metadata events) in client's event log.
>>>>>> + *
>>>>>> + * Returns: valid vm handle
>>>>>> + */
>>>>>> +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
>>>>>> +				     uint32_t flags, uint64_t ext)
>>>>>> +{
>>>>>> +	uint32_t vm;
>>>>>> +
>>>>>> +	vm = xe_vm_create(fd, flags, ext);
>>>>>> +	vm_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, vm);
>>>>>> +
>>>>>> +	return vm;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_vm_destroy:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @fd: xe client
>>>>>> + * @vm: vm handle
>>>>>> + *
>>>>>> + * Calls xe_vm_destroy() and logs the corresponding event in
>>>>>> + * client's event log.
>>>>>> + */
>>>>>> +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm)
>>>>>> +{
>>>>>> +	xe_vm_destroy(fd, vm);
>>>>>> +	vm_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, vm);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_exec_queue_create:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @fd: xe client
>>>>>> + * @create: exec_queue create drm struct
>>>>>> + *
>>>>>> + * Calls xe exec queue create ioctl and logs the corresponding event in
>>>>>> + * client's event log.
>>>>>> + *
>>>>>> + * Returns: valid exec queue handle
>>>>>> + */
>>>>>> +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
>>>>>> +					     struct drm_xe_exec_queue_create *create)
>>>>>> +{
>>>>>> +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
>>>>>> +
>>>>>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE, create), 0);
>>>>>> +
>>>>>> +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
>>>>>> +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create->vm_id,
>>>>>> +				 create->exec_queue_id, class, create->width);
>>>>>> +
>>>>>> +	return create->exec_queue_id;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_exec_queue_destroy:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @fd: xe client
>>>>>> + * @create: exec_queue create drm struct which was used for creation
>>>>>> + *
>>>>>> + * Calls xe exec_queue destroy ioctl and logs the corresponding event in
>>>>>> + * client's event log.
>>>>>> + */
>>>>>> +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
>>>>>> +					  struct drm_xe_exec_queue_create *create)
>>>>>> +{
>>>>>> +	struct drm_xe_exec_queue_destroy destroy = { .exec_queue_id = create->exec_queue_id, };
>>>>>> +	uint16_t class = ((struct drm_xe_engine_class_instance *)(create->instances))[0].engine_class;
>>>>>> +
>>>>>> +	if (class == DRM_XE_ENGINE_CLASS_COMPUTE || class == DRM_XE_ENGINE_CLASS_RENDER)
>>>>>> +		exec_queue_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, create->vm_id,
>>>>>> +				 create->exec_queue_id, class, create->width);
>>>>>> +
>>>>>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_DESTROY, &destroy), 0);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_vm_bind_event:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @event_flags: base event flags
>>>>>> + * @fd: xe client
>>>>>> + * @vm: vm handle
>>>>>> + * @bind_flags: bind flags of vm_bind_event
>>>>>> + * @num_binds: number of bind operations for the event
>>>>>> + * @ref_seqno: output, the vm bind event seqno
>>>>>> + *
>>>>>> + * Logs vm bind event in client's event log.
>>>>>> + */
>>>>>> +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c,
>>>>>> +				     uint32_t event_flags, int fd,
>>>>>> +				     uint32_t vm, uint32_t bind_flags,
>>>>>> +				     uint32_t num_binds, u64 *ref_seqno)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event_vm_bind evmb;
>>>>>> +
>>>>>> +	base_event(c, to_base(evmb), DRM_XE_EUDEBUG_EVENT_VM_BIND,
>>>>>> +		   event_flags, sizeof(evmb));
>>>>>> +	evmb.client_handle = fd;
>>>>>> +	evmb.vm_handle = vm;
>>>>>> +	evmb.flags = bind_flags;
>>>>>> +	evmb.num_binds = num_binds;
>>>>>> +
>>>>>> +	*ref_seqno = evmb.base.seqno;
>>>>>> +
>>>>>> +	xe_eudebug_event_log_write(c->log, (void *)&evmb);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_vm_bind_op_event:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @event_flags: base event flags
>>>>>> + * @bind_ref_seqno: base vm bind reference seqno
>>>>>> + * @op_ref_seqno: output, the vm_bind_op event seqno
>>>>>> + * @addr: ppgtt address
>>>>>> + * @range: size of the binding
>>>>>> + * @num_extensions: number of vm bind op extensions
>>>>>> + *
>>>>>> + * Logs vm bind op event in client's event log.
>>>>>> + */
>>>>>> +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
>>>>>> +					uint64_t bind_ref_seqno, uint64_t *op_ref_seqno,
>>>>>> +					uint64_t addr, uint64_t range,
>>>>>> +					uint64_t num_extensions)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event_vm_bind_op op;
>>>>>> +
>>>>>> +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP,
>>>>>> +		   event_flags, sizeof(op));
>>>>>> +	op.vm_bind_ref_seqno = bind_ref_seqno;
>>>>>> +	op.addr = addr;
>>>>>> +	op.range = range;
>>>>>> +	op.num_extensions = num_extensions;
>>>>>> +
>>>>>> +	*op_ref_seqno = op.base.seqno;
>>>>>> +
>>>>>> +	xe_eudebug_event_log_write(c->log, (void *)&op);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_vm_bind_op_metadata_event:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @event_flags: base event flags
>>>>>> + * @op_ref_seqno: base vm bind op reference seqno
>>>>>> + * @metadata_handle: metadata handle
>>>>>> + * @metadata_cookie: metadata cookie
>>>>>> + *
>>>>>> + * Logs vm bind op metadata event in client's event log.
>>>>>> + */
>>>>>> +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
>>>>>> +						 uint32_t event_flags, uint64_t op_ref_seqno,
>>>>>> +						 uint64_t metadata_handle, uint64_t metadata_cookie)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event_vm_bind_op_metadata op;
>>>>>> +
>>>>>> +	base_event(c, to_base(op), DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_METADATA,
>>>>>> +		   event_flags, sizeof(op));
>>>>>> +	op.vm_bind_op_ref_seqno = op_ref_seqno;
>>>>>> +	op.metadata_handle = metadata_handle;
>>>>>> +	op.metadata_cookie = metadata_cookie;
>>>>>> +
>>>>>> +	xe_eudebug_event_log_write(c->log, (void *)&op);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_vm_bind_ufence_event:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @event_flags: base event flags
>>>>>> + * @ref_seqno: base vm bind event seqno
>>>>>> + *
>>>>>> + * Logs vm bind ufence event in client's event log.
>>>>>> + */
>>>>>> +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
>>>>>> +					    uint64_t ref_seqno)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_event_vm_bind_ufence f;
>>>>>> +
>>>>>> +	base_event(c, to_base(f), DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
>>>>>> +		   event_flags, sizeof(f));
>>>>>> +	f.vm_bind_ref_seqno = ref_seqno;
>>>>>> +
>>>>>> +	xe_eudebug_event_log_write(c->log, (void *)&f);
>>>>>> +}
>>>>>> +
>>>>>> +static bool has_user_fence(const struct drm_xe_sync *sync, uint32_t num_syncs)
>>>>>> +{
>>>>>> +	while (num_syncs--)
>>>>>> +		if (sync[num_syncs].type == DRM_XE_SYNC_TYPE_USER_FENCE)
>>>>>> +			return true;
>>>>>> +
>>>>>> +	return false;
>>>>>> +}
>>>>>> +
>>>>>> +#define for_each_metadata(__m, __ext)					\
>>>>>> +	for ((__m) = from_user_pointer(__ext);				\
>>>>>> +	     (__m);							\
>>>>>> +	     (__m) = from_user_pointer((__m)->base.next_extension))	\
>>>>>> +		if ((__m)->base.name == XE_VM_BIND_OP_EXTENSIONS_ATTACH_DEBUG)
>>>>>> +
>>>>>> +static int  __xe_eudebug_client_vm_bind(struct xe_eudebug_client *c,
>>>>>> +					int fd, uint32_t vm, uint32_t exec_queue,
>>>>>> +					uint32_t bo, uint64_t offset,
>>>>>> +					uint64_t addr, uint64_t size,
>>>>>> +					uint32_t op, uint32_t flags,
>>>>>> +					struct drm_xe_sync *sync,
>>>>>> +					uint32_t num_syncs,
>>>>>> +					uint32_t prefetch_region,
>>>>>> +					uint8_t pat_index, uint64_t op_ext)
>>>>>> +{
>>>>>> +	struct drm_xe_vm_bind_op_ext_attach_debug *metadata;
>>>>>> +	const bool ufence = has_user_fence(sync, num_syncs);
>>>>>> +	const uint32_t bind_flags = ufence ?
>>>>>> +		DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE : 0;
>>>>>> +	uint64_t seqno = 0, op_seqno = 0, num_metadata = 0;
>>>>>> +	uint32_t bind_base_flags = 0;
>>>>>> +	int ret;
>>>>>> +
>>>>>> +	for_each_metadata(metadata, op_ext)
>>>>>> +		num_metadata++;
>>>>>> +
>>>>>> +	switch (op) {
>>>>>> +	case DRM_XE_VM_BIND_OP_MAP:
>>>>>> +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_CREATE;
>>>>>> +		break;
>>>>>> +	case DRM_XE_VM_BIND_OP_UNMAP:
>>>>>> +		bind_base_flags = DRM_XE_EUDEBUG_EVENT_DESTROY;
>>>>>> +		igt_assert_eq(num_metadata, 0);
>>>>>> +		igt_assert_eq(ufence, false);
>>>>>> +		break;
>>>>>> +	default:
>>>>>> +		/* XXX unmap all? */
>>>>>> +		igt_assert(op);
>>>>>> +		break;
>>>>>> +	}
>>>>>> +
>>>>>> +	ret = ___xe_vm_bind(fd, vm, exec_queue, bo, offset, addr, size,
>>>>>> +			    op, flags, sync, num_syncs, prefetch_region,
>>>>>> +			    pat_index, 0, op_ext);
>>>>>> +
>>>>>> +	if (ret)
>>>>>> +		return ret;
>>>>>> +
>>>>>> +	if (!bind_base_flags)
>>>>>> +		return -EINVAL;
>>>>>> +
>>>>>> +	xe_eudebug_client_vm_bind_event(c, DRM_XE_EUDEBUG_EVENT_STATE_CHANGE,
>>>>>> +					fd, vm, bind_flags, 1, &seqno);
>>>>>> +	xe_eudebug_client_vm_bind_op_event(c, bind_base_flags,
>>>>>> +					   seqno, &op_seqno, addr, size,
>>>>>> +					   num_metadata);
>>>>>> +
>>>>>> +	for_each_metadata(metadata, op_ext)
>>>>>> +		xe_eudebug_client_vm_bind_op_metadata_event(c,
>>>>>> +							    DRM_XE_EUDEBUG_EVENT_CREATE,
>>>>>> +							    op_seqno,
>>>>>> +							    metadata->metadata_id,
>>>>>> +							    metadata->cookie);
>>>>>> +	if (ufence)
>>>>>> +		xe_eudebug_client_vm_bind_ufence_event(c, DRM_XE_EUDEBUG_EVENT_CREATE |
>>>>>> +						       DRM_XE_EUDEBUG_EVENT_NEED_ACK,
>>>>>> +						       seqno);
>>>>>> +	return ret;
>>>>>> +}
>>>>>> +
>>>>>> +static void _xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd,
>>>>>> +				       uint32_t vm, uint32_t bo,
>>>>>> +				       uint64_t offset, uint64_t addr, uint64_t size,
>>>>>> +				       uint32_t op,
>>>>>> +				       uint32_t flags,
>>>>>> +				       struct drm_xe_sync *sync,
>>>>>> +				       uint32_t num_syncs,
>>>>>> +				       uint64_t op_ext)
>>>>>> +{
>>>>>> +	const uint32_t exec_queue_id = 0;
>>>>>> +	const uint32_t prefetch_region = 0;
>>>>>> +
>>>>>> +	igt_assert_eq(__xe_eudebug_client_vm_bind(c, fd, vm, exec_queue_id, bo, offset,
>>>>>> +						  addr, size, op, flags,
>>>>>> +						  sync, num_syncs, prefetch_region,
>>>>>> +						  DEFAULT_PAT_INDEX, op_ext),
>>>>>> +		      0);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_vm_bind_flags:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @fd: xe client
>>>>>> + * @vm: vm handle
>>>>>> + * @bo: buffer object handle
>>>>>> + * @offset: offset within buffer object
>>>>>> + * @addr: ppgtt address
>>>>>> + * @size: size of the binding
>>>>>> + * @flags: vm_bind flags
>>>>>> + * @sync: sync objects
>>>>>> + * @num_syncs: number of sync objects
>>>>>> + * @op_ext: BIND_OP extensions
>>>>>> + *
>>>>>> + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
>>>>>> + */
>>>>>> +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
>>>>>> +				     uint32_t bo, uint64_t offset,
>>>>>> +				     uint64_t addr, uint64_t size, uint32_t flags,
>>>>>> +				     struct drm_xe_sync *sync, uint32_t num_syncs,
>>>>>> +				     uint64_t op_ext)
>>>>>> +{
>>>>>> +	_xe_eudebug_client_vm_bind(c, fd, vm, bo, offset, addr, size,
>>>>>> +				   DRM_XE_VM_BIND_OP_MAP, flags,
>>>>>> +				   sync, num_syncs, op_ext);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_vm_bind:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @fd: xe client
>>>>>> + * @vm: vm handle
>>>>>> + * @bo: buffer object handle
>>>>>> + * @offset: offset within buffer object
>>>>>> + * @addr: ppgtt address
>>>>>> + * @size: size of the binding
>>>>>> + *
>>>>>> + * Calls xe vm_bind ioctl and logs the corresponding event in client's event log.
>>>>>> + */
>>>>>> +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
>>>>>> +			       uint32_t bo, uint64_t offset,
>>>>>> +			       uint64_t addr, uint64_t size)
>>>>>> +{
>>>>>> +	const uint32_t flags = 0;
>>>>>> +	struct drm_xe_sync *sync = NULL;
>>>>>> +	const uint32_t num_syncs = 0;
>>>>>> +	const uint64_t op_ext = 0;
>>>>>> +
>>>>>> +	xe_eudebug_client_vm_bind_flags(c, fd, vm, bo, offset, addr, size,
>>>>>> +					flags,
>>>>>> +					sync, num_syncs, op_ext);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_vm_unbind_flags:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @fd: xe client
>>>>>> + * @vm: vm handle
>>>>>> + * @offset: offset
>>>>>> + * @addr: ppgtt address
>>>>>> + * @size: size of the binding
>>>>>> + * @flags: vm_bind flags
>>>>>> + * @sync: sync objects
>>>>>> + * @num_syncs: number of sync objects
>>>>>> + *
>>>>>> + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
>>>>>> + */
>>>>>> +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
>>>>>> +				       uint32_t vm, uint64_t offset,
>>>>>> +				       uint64_t addr, uint64_t size, uint32_t flags,
>>>>>> +				       struct drm_xe_sync *sync, uint32_t num_syncs)
>>>>>> +{
>>>>>> +	_xe_eudebug_client_vm_bind(c, fd, vm, 0, offset, addr, size,
>>>>>> +				   DRM_XE_VM_BIND_OP_UNMAP, flags,
>>>>>> +				   sync, num_syncs, 0);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_vm_unbind:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @fd: xe client
>>>>>> + * @vm: vm handle
>>>>>> + * @offset: offset
>>>>>> + * @addr: ppgtt address
>>>>>> + * @size: size of the binding
>>>>>> + *
>>>>>> + * Calls xe vm_unbind ioctl and logs the corresponding event in client's event log.
>>>>>> + */
>>>>>> +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
>>>>>> +				 uint64_t offset, uint64_t addr, uint64_t size)
>>>>>> +{
>>>>>> +	const uint32_t flags = 0;
>>>>>> +	struct drm_xe_sync *sync = NULL;
>>>>>> +	const uint32_t num_syncs = 0;
>>>>>> +
>>>>>> +	xe_eudebug_client_vm_unbind_flags(c, fd, vm, offset, addr, size,
>>>>>> +					  flags, sync, num_syncs);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_metadata_create:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @fd: xe client
>>>>>> + * @type: debug metadata type
>>>>>> + * @len: size of @data
>>>>>> + * @data: debug metadata payload
>>>>>> + *
>>>>>> + * Calls xe metadata create ioctl and logs the corresponding event in
>>>>>> + * client's event log.
>>>>>> + *
>>>>>> + * Returns: valid debug metadata id.
>>>>>> + */
>>>>>> +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
>>>>>> +					   int type, size_t len, void *data)
>>>>>> +{
>>>>>> +	struct drm_xe_debug_metadata_create create = {
>>>>>> +		.type = type,
>>>>>> +		.user_addr = to_user_pointer(data),
>>>>>> +		.len = len
>>>>>> +	};
>>>>>> +
>>>>>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_CREATE, &create), 0);
>>>>>> +
>>>>>> +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_CREATE, fd, create.metadata_id, type, len);
>>>>>> +
>>>>>> +	return create.metadata_id;
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * xe_eudebug_client_metadata_destroy:
>>>>>> + * @c: pointer to xe_eudebug_client structure
>>>>>> + * @fd: xe client
>>>>>> + * @id: xe debug metadata handle
>>>>>> + * @type: debug metadata type
>>>>>> + * @len: size of debug metadata payload
>>>>>> + *
>>>>>> + * Calls xe metadata destroy ioctl and logs the corresponding event in
>>>>>> + * client's event log.
>>>>>> + */
>>>>>> +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
>>>>>> +					uint32_t id, int type, size_t len)
>>>>>> +{
>>>>>> +	struct drm_xe_debug_metadata_destroy destroy = { .metadata_id = id };
>>>>>> +
>>>>>> +	igt_assert_eq(igt_ioctl(fd, DRM_IOCTL_XE_DEBUG_METADATA_DESTROY, &destroy), 0);
>>>>>> +
>>>>>> +	metadata_event(c, DRM_XE_EUDEBUG_EVENT_DESTROY, fd, id, type, len);
>>>>>> +}
>>>>>> +
>>>>>> +void xe_eudebug_ack_ufence(int debugfd,
>>>>>> +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f)
>>>>>> +{
>>>>>> +	struct drm_xe_eudebug_ack_event ack = { 0, };
>>>>>> +	char event_str[XE_EUDEBUG_EVENT_STRING_MAX_LEN];
>>>>>> +
>>>>>> +	ack.type = f->base.type;
>>>>>> +	ack.seqno = f->base.seqno;
>>>>>> +
>>>>>> +	xe_eudebug_event_to_str((void *)f, event_str, XE_EUDEBUG_EVENT_STRING_MAX_LEN);
>>>>>> +	igt_debug("delivering ack for event: %s\n", event_str);
>>>>>> +	igt_assert_eq(igt_ioctl(debugfd, DRM_XE_EUDEBUG_IOCTL_ACK_EVENT, &ack), 0);
>>>>>> +}
>>>>>> diff --git a/lib/xe/xe_eudebug.h b/lib/xe/xe_eudebug.h
>>>>>> new file mode 100644
>>>>>> index 000000000..444f5a7b7
>>>>>> --- /dev/null
>>>>>> +++ b/lib/xe/xe_eudebug.h
>>>>>> @@ -0,0 +1,206 @@
>>>>>> +/* SPDX-License-Identifier: MIT */
>>>>>> +/*
>>>>>> + * Copyright © 2023 Intel Corporation
>>>>>> + */
>>>>>> +#include <fcntl.h>
>>>>>> +#include <pthread.h>
>>>>>> +#include <stdint.h>
>>>>>> +#include <xe_drm.h>
>>>>>> +
>>>>>> +#include "igt_list.h"
>>>>>> +
>>>>>> +struct xe_eudebug_event_log {
>>>>>> +	uint8_t *log;
>>>>>> +	unsigned int head;
>>>>>> +	unsigned int max_size;
>>>>>> +	char name[80];
>>>>>> +	pthread_mutex_t lock;
>>>>>> +};
>>>>>> +
>>>>>> +struct xe_eudebug_debugger {
>>>>>> +	int fd;
>>>>>> +	uint64_t flags;
>>>>>> +
>>>>>> +	/* Used to smuggle private data */
>>>>>> +	void *ptr;
>>>>>> +
>>>>>> +	struct xe_eudebug_event_log *log;
>>>>>> +
>>>>>> +	uint64_t event_count;
>>>>>> +
>>>>>> +	uint64_t target_pid;
>>>>>> +
>>>>>> +	struct igt_list_head triggers;
>>>>>> +
>>>>>> +	int master_fd;
>>>>>> +
>>>>>> +	pthread_t worker_thread;
>>>>>> +	int worker_state;
>>>>>> +
>>>>>> +	int p_client[2];
>>>>>> +};
>>>>>> +
>>>>>> +struct xe_eudebug_client {
>>>>>> +	int pid;
>>>>>> +	uint64_t seqno;
>>>>>> +	uint64_t flags;
>>>>>> +
>>>>>> +	/* Used to smuggle private data */
>>>>>> +	void *ptr;
>>>>>> +
>>>>>> +	struct xe_eudebug_event_log *log;
>>>>>> +
>>>>>> +	int done;
>>>>>> +	int p_in[2];
>>>>>> +	int p_out[2];
>>>>>> +
>>>>>> +	/* Used to pickup right device (the one used in debugger) */
>>>>>> +	int master_fd;
>>>>>> +
>>>>>> +	int timeout_ms;
>>>>>> +};
>>>>>> +
>>>>>> +struct xe_eudebug_session {
>>>>>> +	uint64_t flags;
>>>>>> +	struct xe_eudebug_client *c;
>>>>>> +	struct xe_eudebug_debugger *d;
>>>>>> +};
>>>>>> +
>>>>>> +typedef void (*xe_eudebug_client_work_fn)(struct xe_eudebug_client *);
>>>>>> +typedef void (*xe_eudebug_trigger_fn)(struct xe_eudebug_debugger *,
>>>>>> +				      struct drm_xe_eudebug_event *);
>>>>>> +
>>>>>> +#define xe_eudebug_for_each_event(_e, _log) \
>>>>>> +	for ((_e) = (_e) ? (void *)(uint8_t *)(_e) + (_e)->len : \
>>>>>> +		    (void *)(_log)->log; \
>>>>>> +	    (uint8_t *)(_e) < (_log)->log + (_log)->head; \
>>>>>> +	    (_e) = (void *)(uint8_t *)(_e) + (_e)->len)
>>>>>> +
>>>>>> +#define xe_eudebug_assert(d, c)						\
>>>>>> +	do {								\
>>>>>> +		if (!(c)) {						\
>>>>>> +			xe_eudebug_event_log_print((d)->log, true);	\
>>>>>> +			igt_assert(c);					\
>>>>>> +		}							\
>>>>>> +	} while (0)
>>>>>> +
>>>>>> +#define xe_eudebug_assert_f(d, c, f...)					\
>>>>>> +	do {								\
>>>>>> +		if (!(c)) {						\
>>>>>> +			xe_eudebug_event_log_print((d)->log, true);	\
>>>>>> +			igt_assert_f(c, f);				\
>>>>>> +		}							\
>>>>>> +	} while (0)
>>>>>> +
>>>>>> +#define XE_EUDEBUG_EVENT_STRING_MAX_LEN		4096
>>>>>> +
>>>>>> +/*
>>>>>> + * Default abort timeout to use across xe_eudebug lib and tests if no specific
>>>>>> + * timeout value is required.
>>>>>> + */
>>>>>> +#define XE_EUDEBUG_DEFAULT_TIMEOUT_MS		25000ULL
>>>>>> +
>>>>>> +#define XE_EUDEBUG_FILTER_EVENT_NONE		BIT(DRM_XE_EUDEBUG_EVENT_NONE)
>>>>>> +#define XE_EUDEBUG_FILTER_EVENT_READ		BIT(DRM_XE_EUDEBUG_EVENT_READ)
>>>>>> +#define XE_EUDEBUG_FILTER_EVENT_OPEN		BIT(DRM_XE_EUDEBUG_EVENT_OPEN)
>>>>>> +#define XE_EUDEBUG_FILTER_EVENT_VM		BIT(DRM_XE_EUDEBUG_EVENT_VM)
>>>>>> +#define XE_EUDEBUG_FILTER_EVENT_EXEC_QUEUE	BIT(DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE)
>>>>>> +#define XE_EUDEBUG_FILTER_EVENT_EU_ATTENTION	BIT(DRM_XE_EUDEBUG_EVENT_EU_ATTENTION)
>>>>>> +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND		BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND)
>>>>>> +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_OP	BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_OP)
>>>>>> +#define XE_EUDEBUG_FILTER_EVENT_VM_BIND_UFENCE  BIT(DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE)
>>>>>> +#define XE_EUDEBUG_FILTER_ALL			GENMASK(DRM_XE_EUDEBUG_EVENT_MAX_EVENT, 0)
>>>>>> +#define XE_EUDEBUG_EVENT_IS_FILTERED(_e, _f)	((1UL << _e) & _f)
>>>>>> +
>>>>>> +int xe_eudebug_connect(int fd, pid_t pid, uint32_t flags);
>>>>>> +const char *xe_eudebug_event_to_str(struct drm_xe_eudebug_event *e, char *buf, size_t len);
>>>>>> +struct drm_xe_eudebug_event *
>>>>>> +xe_eudebug_event_log_find_seqno(struct xe_eudebug_event_log *l, uint64_t seqno);
>>>>>> +struct xe_eudebug_event_log *
>>>>>> +xe_eudebug_event_log_create(const char *name, unsigned int max_size);
>>>>>> +void xe_eudebug_event_log_destroy(struct xe_eudebug_event_log *l);
>>>>>> +void xe_eudebug_event_log_print(struct xe_eudebug_event_log *l, bool debug);
>>>>>> +void xe_eudebug_event_log_compare(struct xe_eudebug_event_log *c, struct xe_eudebug_event_log *d,
>>>>>> +				  uint32_t filter);
>>>>>> +void xe_eudebug_event_log_write(struct xe_eudebug_event_log *l, struct drm_xe_eudebug_event *e);
>>>>>> +void xe_eudebug_event_log_match_opposite(struct xe_eudebug_event_log *l, uint32_t filter);
>>>>>> +
>>>>>> +bool xe_eudebug_debugger_available(int fd);
>>>>>> +struct xe_eudebug_debugger *
>>>>>> +xe_eudebug_debugger_create(int xe, uint64_t flags, void *data);
>>>>>> +void xe_eudebug_debugger_destroy(struct xe_eudebug_debugger *d);
>>>>>> +int xe_eudebug_debugger_attach(struct xe_eudebug_debugger *d, struct xe_eudebug_client *c);
>>>>>> +void xe_eudebug_debugger_start_worker(struct xe_eudebug_debugger *d);
>>>>>> +void xe_eudebug_debugger_stop_worker(struct xe_eudebug_debugger *d, int timeout_s);
>>>>>> +void xe_eudebug_debugger_detach(struct xe_eudebug_debugger *d);
>>>>>> +void xe_eudebug_debugger_set_data(struct xe_eudebug_debugger *c, void *ptr);
>>>>>> +void xe_eudebug_debugger_add_trigger(struct xe_eudebug_debugger *d, int type,
>>>>>> +				     xe_eudebug_trigger_fn fn);
>>>>>> +void xe_eudebug_debugger_signal_stage(struct xe_eudebug_debugger *d, uint64_t stage);
>>>>>> +void xe_eudebug_debugger_wait_stage(struct xe_eudebug_session *s, uint64_t stage);
>>>>>> +
>>>>>> +struct xe_eudebug_client *
>>>>>> +xe_eudebug_client_create(int xe, xe_eudebug_client_work_fn work, uint64_t flags, void *data);
>>>>>> +void xe_eudebug_client_destroy(struct xe_eudebug_client *c);
>>>>>> +void xe_eudebug_client_start(struct xe_eudebug_client *c);
>>>>>> +void xe_eudebug_client_stop(struct xe_eudebug_client *c);
>>>>>> +void xe_eudebug_client_wait_done(struct xe_eudebug_client *c);
>>>>>> +void xe_eudebug_client_signal_stage(struct xe_eudebug_client *c, uint64_t stage);
>>>>>> +void xe_eudebug_client_wait_stage(struct xe_eudebug_client *c, uint64_t stage);
>>>>>> +
>>>>>> +uint64_t xe_eudebug_client_get_seqno(struct xe_eudebug_client *c);
>>>>>> +void xe_eudebug_client_set_data(struct xe_eudebug_client *c, void *ptr);
>>>>>> +
>>>>>> +bool xe_eudebug_enable(int fd, bool enable);
>>>>>> +
>>>>>> +int xe_eudebug_client_open_driver(struct xe_eudebug_client *c);
>>>>>> +void xe_eudebug_client_close_driver(struct xe_eudebug_client *c, int fd);
>>>>>> +uint32_t xe_eudebug_client_vm_create(struct xe_eudebug_client *c, int fd,
>>>>>> +				     uint32_t flags, uint64_t ext);
>>>>>> +void xe_eudebug_client_vm_destroy(struct xe_eudebug_client *c, int fd, uint32_t vm);
>>>>>> +uint32_t xe_eudebug_client_exec_queue_create(struct xe_eudebug_client *c, int fd,
>>>>>> +					     struct drm_xe_exec_queue_create *create);
>>>>>> +void xe_eudebug_client_exec_queue_destroy(struct xe_eudebug_client *c, int fd,
>>>>>> +					  struct drm_xe_exec_queue_create *create);
>>>>>> +void xe_eudebug_client_vm_bind_event(struct xe_eudebug_client *c, uint32_t event_flags, int fd,
>>>>>> +				     uint32_t vm, uint32_t bind_flags,
>>>>>> +				     uint32_t num_ops, uint64_t *ref_seqno);
>>>>>> +void xe_eudebug_client_vm_bind_op_event(struct xe_eudebug_client *c, uint32_t event_flags,
>>>>>> +					uint64_t ref_seqno, uint64_t *op_ref_seqno,
>>>>>> +					uint64_t addr, uint64_t range,
>>>>>> +					uint64_t num_extensions);
>>>>>> +void xe_eudebug_client_vm_bind_op_metadata_event(struct xe_eudebug_client *c,
>>>>>> +						 uint32_t event_flags, uint64_t op_ref_seqno,
>>>>>> +						 uint64_t metadata_handle, uint64_t metadata_cookie);
>>>>>> +void xe_eudebug_client_vm_bind_ufence_event(struct xe_eudebug_client *c, uint32_t event_flags,
>>>>>> +					    uint64_t ref_seqno);
>>>>>> +void xe_eudebug_ack_ufence(int debugfd,
>>>>>> +			   const struct drm_xe_eudebug_event_vm_bind_ufence *f);
>>>>>> +
>>>>>> +void xe_eudebug_client_vm_bind_flags(struct xe_eudebug_client *c, int fd, uint32_t vm,
>>>>>> +				     uint32_t bo, uint64_t offset,
>>>>>> +				     uint64_t addr, uint64_t size, uint32_t flags,
>>>>>> +				     struct drm_xe_sync *sync, uint32_t num_syncs,
>>>>>> +				     uint64_t op_ext);
>>>>>> +void xe_eudebug_client_vm_bind(struct xe_eudebug_client *c, int fd, uint32_t vm,
>>>>>> +			       uint32_t bo, uint64_t offset,
>>>>>> +			       uint64_t addr, uint64_t size);
>>>>>> +void xe_eudebug_client_vm_unbind_flags(struct xe_eudebug_client *c, int fd,
>>>>>> +				       uint32_t vm, uint64_t offset,
>>>>>> +				       uint64_t addr, uint64_t size, uint32_t flags,
>>>>>> +				       struct drm_xe_sync *sync, uint32_t num_syncs);
>>>>>> +void xe_eudebug_client_vm_unbind(struct xe_eudebug_client *c, int fd, uint32_t vm,
>>>>>> +				 uint64_t offset, uint64_t addr, uint64_t size);
>>>>>> +
>>>>>> +uint32_t xe_eudebug_client_metadata_create(struct xe_eudebug_client *c, int fd,
>>>>>> +					   int type, size_t len, void *data);
>>>>>> +void xe_eudebug_client_metadata_destroy(struct xe_eudebug_client *c, int fd,
>>>>>> +					uint32_t id, int type, size_t len);
>>>>>> +
>>>>>> +struct xe_eudebug_session *xe_eudebug_session_create(int fd,
>>>>>> +						     xe_eudebug_client_work_fn work,
>>>>>> +						     unsigned int flags,
>>>>>> +						     void *test_private);
>>>>>> +void xe_eudebug_session_destroy(struct xe_eudebug_session *s);
>>>>>> +void xe_eudebug_session_run(struct xe_eudebug_session *s);
>>>>>> +void xe_eudebug_session_check(struct xe_eudebug_session *s, bool match_opposite, uint32_t filter);
>>>>>> -- 
>>>>>> 2.34.1
>>>>>>

^ permalink raw reply	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2024-08-23  7:58 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-09 12:37 [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Christoph Manszewski
2024-08-09 12:38 ` [PATCH i-g-t v3 01/14] drm-uapi/xe: Sync with oa uapi fix Christoph Manszewski
2024-08-09 14:21   ` Kamil Konieczny
2024-08-09 12:38 ` [PATCH i-g-t v3 02/14] drm-uapi/xe: Sync with eudebug uapi Christoph Manszewski
2024-08-20  7:52   ` Zbigniew Kempczyński
2024-08-09 12:38 ` [PATCH i-g-t v3 03/14] lib/xe_ioctl: Add wrapper with vm_bind_op extension parameter Christoph Manszewski
2024-08-20  7:54   ` Zbigniew Kempczyński
2024-08-09 12:38 ` [PATCH i-g-t v3 04/14] lib/xe_eudebug: Introduce eu debug testing framework Christoph Manszewski
2024-08-19  8:30   ` Grzegorzek, Dominik
2024-08-19 15:33     ` Manszewski, Christoph
2024-08-20  8:14   ` Zbigniew Kempczyński
2024-08-20 16:14     ` Manszewski, Christoph
2024-08-20 17:45       ` Kamil Konieczny
2024-08-21  7:05         ` Manszewski, Christoph
2024-08-21  9:31         ` Zbigniew Kempczyński
2024-08-22 15:39           ` Kamil Konieczny
2024-08-23  7:58             ` Manszewski, Christoph
2024-08-09 12:38 ` [PATCH i-g-t v3 05/14] tests/xe_eudebug: Test eudebug resource tracking and manipulation Christoph Manszewski
2024-08-09 12:38 ` [PATCH i-g-t v3 06/14] lib/gpgpu_shader: Extend shader building library Christoph Manszewski
2024-08-09 12:38 ` [PATCH i-g-t v3 07/14] lib/gpgpu_shader: Add write_on_exception template Christoph Manszewski
2024-08-19 10:09   ` Grzegorzek, Dominik
2024-08-09 12:38 ` [PATCH i-g-t v3 08/14] lib/gpgpu_shader: Add set/clear exception register (cr0.1) helpers Christoph Manszewski
2024-08-09 12:38 ` [PATCH i-g-t v3 09/14] lib/intel_batchbuffer: Add helper to get pointer at specified offset Christoph Manszewski
2024-08-19 10:10   ` Grzegorzek, Dominik
2024-08-09 12:38 ` [PATCH i-g-t v3 10/14] lib/gpgpu_shader: Allow enabling illegal opcode exceptions in shader Christoph Manszewski
2024-08-19 10:12   ` Grzegorzek, Dominik
2024-08-09 12:38 ` [PATCH i-g-t v3 11/14] tests/xe_exec_sip: Extend SIP interaction testing Christoph Manszewski
2024-08-21  9:49   ` Zbigniew Kempczyński
2024-08-09 12:38 ` [PATCH i-g-t v3 12/14] lib/intel_batchbuffer: Add support for long-running mode execution Christoph Manszewski
2024-08-09 12:38 ` [PATCH i-g-t v3 13/14] tests/xe_eudebug_online: Debug client which runs workloads on EU Christoph Manszewski
2024-08-09 14:38   ` Kamil Konieczny
2024-08-19 15:31     ` Manszewski, Christoph
2024-08-19  9:58   ` Grzegorzek, Dominik
2024-08-19 15:36     ` Manszewski, Christoph
2024-08-09 12:38 ` [PATCH i-g-t v3 14/14] tests/xe_live_ktest: Add xe_eudebug live test Christoph Manszewski
2024-08-09 13:36 ` ✗ GitLab.Pipeline: warning for Test coverage for GPU debug support (rev3) Patchwork
2024-08-09 13:50 ` ✓ CI.xeBAT: success " Patchwork
2024-08-09 14:01 ` ✓ Fi.CI.BAT: " Patchwork
2024-08-09 14:24 ` [PATCH i-g-t v3 00/14] Test coverage for GPU debug support Kamil Konieczny
2024-08-09 15:40 ` ✗ CI.xeFULL: failure for Test coverage for GPU debug support (rev3) Patchwork
2024-08-10 18:30 ` ✗ Fi.CI.IGT: " Patchwork
