* [RFC PATCH 00/29] UMD direct submission in Xe
@ 2024-11-18 23:37 Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 01/29] dma-fence: Add dma_fence_preempt base class Matthew Brost
` (36 more replies)
0 siblings, 37 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
This is an RFC, or possibly even a proof of concept, for UMD (User Mode
Driver) direct submission in Xe. It is similar to AMD's design [1] [2]
or ARM's design [3], utilizing a uAPI to convert user-space syncs
(memory writes) to kernel-space syncs (DMA fences). It is built around
the existing Xe preemption fences for dynamic memory management, such as
userptr invalidation and buffer object (BO) eviction.
The series also enables mapping a PPGTT-bound submission ring in
non-privileged mode, as well as exposing indirect ring state (ring
head, tail, etc.) and the doorbell to user space, enabling UMD
direct submission.
The target for this series is Mesa, with the goal of enabling UMD direct
submission and removing the submission thread that currently handles
future fences. I've discussed this with Sima and the Intel Mesa team,
and it seems like a reachable target. Most synchronization will be
handled in user space via memory writes and semaphore wait ring
instructions, with only legacy cross-process synchronization (e.g.,
compositors) requiring kernel synchronization (DMA fences).
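As a rough illustration of the sync model above, here is a toy user-space C
sketch (names and layout hypothetical, not Xe uAPI): a user sync is just a
seqno in memory that the producer advances with a plain write, and a waiter
compares against a target. The uAPI's job is bridging this to DMA fences in
both directions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model (not Xe uAPI) of a user-space sync: a 64-bit seqno in memory.
 * Signalling is a plain memory write, as a ring instruction would do; a
 * wait passes once the value reaches the target. The kernel side of the
 * conversion would signal a DMA fence once this condition holds. */
struct user_sync {
	uint64_t value;		/* memory location written on completion */
};

static void user_sync_signal(struct user_sync *s, uint64_t seqno)
{
	s->value = seqno;	/* the "memory write" sync primitive */
}

static bool user_sync_passed(const struct user_sync *s, uint64_t target)
{
	return s->value >= target;
}
```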
The series includes some common patches at the beginning to implement
preemption fences and user fences. The idea of preemption
DMA-reservation slots [4] has been dropped in favor of attaching the
last exported DMA fence to the preemption fence as suggested by AMD.
This is a public checkpoint on the KMD (Kernel Mode Driver) work, which
will be tabled until Intel's Mesa team has the bandwidth to begin the
UMD work. That said, the uAPI is very preliminary and likely to change.
One idea that was discussed is a common user fence interface based
around DRM syncobjs, which will likely be explored further as UMD
engagement begins. Some work for syncing VM binds (kernel operation)
with UMD direct submission is also likely required.
Testing has been done with [5], and the main features—such as basic
submission, dynamic memory management, user-to-kernel sync conversion,
and protection against endless user fences—are working on BMG and LNL.
The GitLab branch [6] has also been pushed for reference.
Any early community feedback is always appreciated.
Matt
[1] https://patchwork.freedesktop.org/series/113675/
[2] https://patchwork.freedesktop.org/series/114385/
[3] https://patchwork.freedesktop.org/series/137924/
[4] https://patchwork.freedesktop.org/series/141129/
[5] https://patchwork.freedesktop.org/series/141518/
[6] https://gitlab.freedesktop.org/mbrost/xe-kernel-driver-umd-submission-post/-/tree/post-11-18-24?ref_type=heads
Matthew Brost (28):
dma-fence: Add dma_fence_preempt base class
dma-fence: Add dma_fence_user_fence
drm/xe: Use dma_fence_preempt base class
drm/xe: Allocate doorbells for UMD exec queues
drm/xe: Add doorbell ID to snapshot capture
drm/xe: Break submission ring out into its own BO
drm/xe: Break indirect ring state out into its own BO
drm/xe: Clear GGTT in xe_bo_restore_kernel
FIXME: drm/xe: Add pad to ring and indirect state
drm/xe: Enable indirect ring on media GT
drm/xe: Don't add pinned mappings to VM bulk move
drm/xe: Add exec queue post init extension processing
drm/xe: Add support for mmapping doorbells to user space
drm/xe: Add support for mmapping submission ring and indirect ring
state to user space
drm/xe/uapi: Define UMD exec queue mapping uAPI
drm/xe: Add usermap exec queue extension
drm/xe: Drop EXEC_QUEUE_FLAG_UMD_SUBMISSION flag
drm/xe: Do not allow usermap exec queues in exec IOCTL
drm/xe: Teach GuC backend to kill usermap queues
drm/xe: Enable preempt fences on usermap queues
drm/xe/uapi: Add uAPI to convert user semaphore to / from drm syncobj
drm/xe: Add user fence IRQ handler
drm/xe: Add xe_hw_fence_user_init
drm/xe: Add a message lock to the Xe GPU scheduler
drm/xe: Always wait on preempt fences in vma_check_userptr
drm/xe: Teach xe_sync layer about drm_xe_semaphore
drm/xe: Add VM convert fence IOCTL
drm/xe: Add user fence TDR
Tejas Upadhyay (1):
drm/xe/mmap: Add mmap support for PCI memory barrier
drivers/dma-buf/Makefile | 2 +-
drivers/dma-buf/dma-fence-preempt.c | 134 ++++++
drivers/dma-buf/dma-fence-user-fence.c | 73 ++++
drivers/gpu/drm/xe/xe_bo.c | 29 +-
drivers/gpu/drm/xe/xe_bo.h | 5 +
drivers/gpu/drm/xe/xe_bo_evict.c | 8 +-
drivers/gpu/drm/xe/xe_device.c | 181 +++++++-
drivers/gpu/drm/xe/xe_device_types.h | 3 +
drivers/gpu/drm/xe/xe_exec.c | 3 +-
drivers/gpu/drm/xe/xe_exec_queue.c | 175 +++++++-
drivers/gpu/drm/xe/xe_exec_queue.h | 5 +
drivers/gpu/drm/xe/xe_exec_queue_types.h | 13 +
drivers/gpu/drm/xe/xe_execlist.c | 2 +-
drivers/gpu/drm/xe/xe_ggtt.c | 19 +-
drivers/gpu/drm/xe/xe_ggtt.h | 2 +
drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 +-
drivers/gpu/drm/xe/xe_gpu_scheduler.h | 12 +-
drivers/gpu/drm/xe/xe_gpu_scheduler_types.h | 2 +
drivers/gpu/drm/xe/xe_guc_exec_queue_types.h | 9 +-
drivers/gpu/drm/xe/xe_guc_submit.c | 177 +++++++-
drivers/gpu/drm/xe/xe_guc_submit_types.h | 2 +
drivers/gpu/drm/xe/xe_hw_engine.c | 4 +-
drivers/gpu/drm/xe/xe_hw_engine_group.c | 4 +-
drivers/gpu/drm/xe/xe_hw_fence.c | 17 +
drivers/gpu/drm/xe/xe_hw_fence.h | 3 +
drivers/gpu/drm/xe/xe_lrc.c | 176 ++++++--
drivers/gpu/drm/xe/xe_lrc.h | 4 +-
drivers/gpu/drm/xe/xe_lrc_types.h | 16 +-
drivers/gpu/drm/xe/xe_pci.c | 1 +
drivers/gpu/drm/xe/xe_preempt_fence.c | 89 ++--
drivers/gpu/drm/xe/xe_preempt_fence.h | 2 +-
drivers/gpu/drm/xe/xe_preempt_fence_types.h | 11 +-
drivers/gpu/drm/xe/xe_pt.c | 5 +-
drivers/gpu/drm/xe/xe_sync.c | 90 ++++
drivers/gpu/drm/xe/xe_sync.h | 8 +
drivers/gpu/drm/xe/xe_sync_types.h | 5 +-
drivers/gpu/drm/xe/xe_vm.c | 423 ++++++++++++++++++-
drivers/gpu/drm/xe/xe_vm.h | 4 +-
drivers/gpu/drm/xe/xe_vm_types.h | 26 ++
include/linux/dma-fence-preempt.h | 56 +++
include/linux/dma-fence-user-fence.h | 31 ++
include/uapi/drm/xe_drm.h | 147 ++++++-
42 files changed, 1798 insertions(+), 199 deletions(-)
create mode 100644 drivers/dma-buf/dma-fence-preempt.c
create mode 100644 drivers/dma-buf/dma-fence-user-fence.c
create mode 100644 include/linux/dma-fence-preempt.h
create mode 100644 include/linux/dma-fence-user-fence.h
--
2.34.1
^ permalink raw reply [flat|nested] 52+ messages in thread
* [RFC PATCH 01/29] dma-fence: Add dma_fence_preempt base class
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-20 13:31 ` Christian König
2024-11-18 23:37 ` [RFC PATCH 02/29] dma-fence: Add dma_fence_user_fence Matthew Brost
` (35 subsequent siblings)
36 siblings, 1 reply; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Add a dma_fence_preempt base class with driver ops to implement
preemption, based on the existing Xe preemption fence implementation.
The signalling path is annotated to help ensure correct driver usage.
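As a rough sketch of the control flow this base class imposes, here is a toy
user-space model (hypothetical names, not the kernel implementation):
enable_signaling issues ops->preempt(), a worker runs ops->preempt_wait(),
the fence is signalled, then ops->preempt_finished() does driver cleanup,
all within the fence signalling critical section.

```c
#include <assert.h>

/* Toy model (plain user-space C, not kernel code) of the dma_fence_preempt
 * control flow. Events are recorded so the ordering can be checked. */
enum toy_event { TOY_PREEMPT, TOY_WAIT, TOY_SIGNAL, TOY_FINISHED };

struct toy_pfence {
	enum toy_event trace[4];
	int n;
	int error;
};

static void toy_record(struct toy_pfence *f, enum toy_event ev)
{
	f->trace[f->n++] = ev;
}

/* Stand-ins for the driver's dma_fence_preempt_ops callbacks. */
static int toy_preempt(struct toy_pfence *f) { toy_record(f, TOY_PREEMPT); return 0; }
static int toy_preempt_wait(struct toy_pfence *f) { toy_record(f, TOY_WAIT); return 0; }
static void toy_preempt_finished(struct toy_pfence *f) { toy_record(f, TOY_FINISHED); }

/* enable_signaling plus the worker, inlined; the real code queues the
 * wait on a WQ_MEM_RECLAIM workqueue instead of calling it directly. */
static void toy_enable_signaling(struct toy_pfence *f)
{
	f->error = toy_preempt(f);
	if (!f->error)
		f->error = toy_preempt_wait(f);
	toy_record(f, TOY_SIGNAL);	/* dma_fence_signal() */
	toy_preempt_finished(f);
}
```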
Cc: Dave Airlie <airlied@redhat.com>
Cc: Simona Vetter <simona.vetter@ffwll.ch>
Cc: Christian Koenig <christian.koenig@amd.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/dma-buf/Makefile | 2 +-
drivers/dma-buf/dma-fence-preempt.c | 133 ++++++++++++++++++++++++++++
include/linux/dma-fence-preempt.h | 56 ++++++++++++
3 files changed, 190 insertions(+), 1 deletion(-)
create mode 100644 drivers/dma-buf/dma-fence-preempt.c
create mode 100644 include/linux/dma-fence-preempt.h
diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index 70ec901edf2c..c25500bb38b5 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
- dma-fence-unwrap.o dma-resv.o
+ dma-fence-preempt.o dma-fence-unwrap.o dma-resv.o
obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
obj-$(CONFIG_DMABUF_HEAPS) += heaps/
obj-$(CONFIG_SYNC_FILE) += sync_file.o
diff --git a/drivers/dma-buf/dma-fence-preempt.c b/drivers/dma-buf/dma-fence-preempt.c
new file mode 100644
index 000000000000..6e6ce7ea7421
--- /dev/null
+++ b/drivers/dma-buf/dma-fence-preempt.c
@@ -0,0 +1,133 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#include <linux/dma-fence-preempt.h>
+#include <linux/dma-resv.h>
+
+static void dma_fence_preempt_work_func(struct work_struct *w)
+{
+ bool cookie = dma_fence_begin_signalling();
+ struct dma_fence_preempt *pfence =
+ container_of(w, typeof(*pfence), work);
+ const struct dma_fence_preempt_ops *ops = pfence->ops;
+ int err = pfence->base.error;
+
+ if (!err) {
+ err = ops->preempt_wait(pfence);
+ if (err)
+ dma_fence_set_error(&pfence->base, err);
+ }
+
+ dma_fence_signal(&pfence->base);
+ ops->preempt_finished(pfence);
+
+ dma_fence_end_signalling(cookie);
+}
+
+static const char *
+dma_fence_preempt_get_driver_name(struct dma_fence *fence)
+{
+ return "dma_fence_preempt";
+}
+
+static const char *
+dma_fence_preempt_get_timeline_name(struct dma_fence *fence)
+{
+ return "ordered";
+}
+
+static void dma_fence_preempt_issue(struct dma_fence_preempt *pfence)
+{
+ int err;
+
+ err = pfence->ops->preempt(pfence);
+ if (err)
+ dma_fence_set_error(&pfence->base, err);
+
+ queue_work(pfence->wq, &pfence->work);
+}
+
+static void dma_fence_preempt_cb(struct dma_fence *fence,
+ struct dma_fence_cb *cb)
+{
+ struct dma_fence_preempt *pfence =
+ container_of(cb, typeof(*pfence), cb);
+
+ dma_fence_preempt_issue(pfence);
+}
+
+static void dma_fence_preempt_delay(struct dma_fence_preempt *pfence)
+{
+ struct dma_fence *fence;
+ int err;
+
+ fence = pfence->ops->preempt_delay(pfence);
+ if (WARN_ON_ONCE(!fence || IS_ERR(fence)))
+ return;
+
+ err = dma_fence_add_callback(fence, &pfence->cb, dma_fence_preempt_cb);
+ if (err == -ENOENT)
+ dma_fence_preempt_issue(pfence);
+}
+
+static bool dma_fence_preempt_enable_signaling(struct dma_fence *fence)
+{
+ struct dma_fence_preempt *pfence =
+ container_of(fence, typeof(*pfence), base);
+
+ if (pfence->ops->preempt_delay)
+ dma_fence_preempt_delay(pfence);
+ else
+ dma_fence_preempt_issue(pfence);
+
+ return true;
+}
+
+static const struct dma_fence_ops preempt_fence_ops = {
+ .get_driver_name = dma_fence_preempt_get_driver_name,
+ .get_timeline_name = dma_fence_preempt_get_timeline_name,
+ .enable_signaling = dma_fence_preempt_enable_signaling,
+};
+
+/**
+ * dma_fence_is_preempt() - Check if a fence is a preempt fence
+ *
+ * @fence: Preempt fence
+ *
+ * Return: True if preempt fence, False otherwise
+ */
+bool dma_fence_is_preempt(const struct dma_fence *fence)
+{
+ return fence->ops == &preempt_fence_ops;
+}
+EXPORT_SYMBOL(dma_fence_is_preempt);
+
+/**
+ * dma_fence_preempt_init() - Initialize preempt fence
+ *
+ * @fence: Preempt fence
+ * @ops: Preempt fence operations
+ * @wq: Work queue for preempt wait, should have WQ_MEM_RECLAIM set
+ * @context: Fence context
+ * @seqno: Fence sequence number
+ */
+void dma_fence_preempt_init(struct dma_fence_preempt *fence,
+ const struct dma_fence_preempt_ops *ops,
+ struct workqueue_struct *wq,
+ u64 context, u64 seqno)
+{
+ /*
+ * XXX: We really want to check wq for WQ_MEM_RECLAIM here but
+ * workqueue_struct is private.
+ */
+
+ fence->ops = ops;
+ fence->wq = wq;
+ INIT_WORK(&fence->work, dma_fence_preempt_work_func);
+ spin_lock_init(&fence->lock);
+ dma_fence_init(&fence->base, &preempt_fence_ops,
+ &fence->lock, context, seqno);
+}
+EXPORT_SYMBOL(dma_fence_preempt_init);
diff --git a/include/linux/dma-fence-preempt.h b/include/linux/dma-fence-preempt.h
new file mode 100644
index 000000000000..28d803f89527
--- /dev/null
+++ b/include/linux/dma-fence-preempt.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#ifndef __LINUX_DMA_FENCE_PREEMPT_H
+#define __LINUX_DMA_FENCE_PREEMPT_H
+
+#include <linux/dma-fence.h>
+#include <linux/workqueue.h>
+
+struct dma_fence_preempt;
+struct dma_resv;
+
+/**
+ * struct dma_fence_preempt_ops - Preempt fence operations
+ *
+ * These functions should be implemented in the driver side.
+ */
+struct dma_fence_preempt_ops {
+ /** @preempt_delay: Return a fence to wait on before issuing preemption */
+ struct dma_fence *(*preempt_delay)(struct dma_fence_preempt *fence);
+ /** @preempt: Preempt execution */
+ int (*preempt)(struct dma_fence_preempt *fence);
+ /** @preempt_wait: Wait for preempt of execution to complete */
+ int (*preempt_wait)(struct dma_fence_preempt *fence);
+ /** @preempt_finished: Signal that the preempt has finished */
+ void (*preempt_finished)(struct dma_fence_preempt *fence);
+};
+
+/**
+ * struct dma_fence_preempt - Embedded preempt fence base class
+ */
+struct dma_fence_preempt {
+ /** @base: Fence base class */
+ struct dma_fence base;
+ /** @lock: Spinlock for fence handling */
+ spinlock_t lock;
+ /** @cb: Callback used for delayed preemption */
+ struct dma_fence_cb cb;
+ /** @ops: Preempt fence operations */
+ const struct dma_fence_preempt_ops *ops;
+ /** @wq: Work queue for preempt wait */
+ struct workqueue_struct *wq;
+ /** @work: Work struct for preempt wait */
+ struct work_struct work;
+};
+
+bool dma_fence_is_preempt(const struct dma_fence *fence);
+
+void dma_fence_preempt_init(struct dma_fence_preempt *fence,
+ const struct dma_fence_preempt_ops *ops,
+ struct workqueue_struct *wq,
+ u64 context, u64 seqno);
+
+#endif
--
2.34.1
* [RFC PATCH 02/29] dma-fence: Add dma_fence_user_fence
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 01/29] dma-fence: Add dma_fence_preempt base class Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-20 13:38 ` Christian König
2024-11-18 23:37 ` [RFC PATCH 03/29] drm/xe: Use dma_fence_preempt base class Matthew Brost
` (34 subsequent siblings)
36 siblings, 1 reply; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Normalize user fence attachment to a DMA fence. A user fence is a simple
seqno write to memory, implemented by attaching a DMA fence callback
that writes out the seqno. The intended use case is importing a DMA
fence into the kernel and exporting a user fence.
Helpers are added to allocate, attach, and free a dma_fence_user_fence.
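A toy user-space model of the attach semantics (hypothetical names, not the
kernel API) might look like the following: when the fence signals, the
attached seqno is written to the mapped location, and if the fence has
already signalled at attach time (the -ENOENT path of
dma_fence_add_callback()) the write happens immediately.

```c
#include <stdint.h>

/* Toy stand-ins for dma_fence / iosys_map; user-space C, not kernel code. */
struct toy_user_fence {
	uint64_t *map;		/* stand-in for the iosys_map */
	uint64_t seqno;
};

struct toy_fence {
	int signaled;
	struct toy_user_fence *uf;	/* attached user fence, if any */
};

static void toy_user_fence_attach(struct toy_fence *f,
				  struct toy_user_fence *uf,
				  uint64_t *map, uint64_t seqno)
{
	uf->map = map;
	uf->seqno = seqno;
	if (f->signaled)
		*map = seqno;	/* fence already done: write out now */
	else
		f->uf = uf;
}

static void toy_fence_signal(struct toy_fence *f)
{
	f->signaled = 1;
	if (f->uf)
		*f->uf->map = f->uf->seqno;	/* the user fence seqno write */
}
```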
Cc: Dave Airlie <airlied@redhat.com>
Cc: Simona Vetter <simona.vetter@ffwll.ch>
Cc: Christian Koenig <christian.koenig@amd.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/dma-buf/Makefile | 2 +-
drivers/dma-buf/dma-fence-user-fence.c | 73 ++++++++++++++++++++++++++
include/linux/dma-fence-user-fence.h | 31 +++++++++++
3 files changed, 105 insertions(+), 1 deletion(-)
create mode 100644 drivers/dma-buf/dma-fence-user-fence.c
create mode 100644 include/linux/dma-fence-user-fence.h
diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index c25500bb38b5..ba9ba339319e 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
- dma-fence-preempt.o dma-fence-unwrap.o dma-resv.o
+ dma-fence-preempt.o dma-fence-unwrap.o dma-fence-user-fence.o dma-resv.o
obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
obj-$(CONFIG_DMABUF_HEAPS) += heaps/
obj-$(CONFIG_SYNC_FILE) += sync_file.o
diff --git a/drivers/dma-buf/dma-fence-user-fence.c b/drivers/dma-buf/dma-fence-user-fence.c
new file mode 100644
index 000000000000..5a4b289bacb8
--- /dev/null
+++ b/drivers/dma-buf/dma-fence-user-fence.c
@@ -0,0 +1,73 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#include <linux/dma-fence-user-fence.h>
+#include <linux/slab.h>
+
+static void user_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
+{
+ struct dma_fence_user_fence *user_fence =
+ container_of(cb, struct dma_fence_user_fence, cb);
+
+ if (user_fence->map.is_iomem)
+ writeq(user_fence->seqno, user_fence->map.vaddr_iomem);
+ else
+ *(u64 *)user_fence->map.vaddr = user_fence->seqno;
+
+ dma_fence_user_fence_free(user_fence);
+}
+
+/**
+ * dma_fence_user_fence_alloc() - Allocate user fence
+ *
+ * Return: Allocated struct dma_fence_user_fence on success, NULL on failure
+ */
+struct dma_fence_user_fence *dma_fence_user_fence_alloc(void)
+{
+ return kmalloc(sizeof(struct dma_fence_user_fence), GFP_KERNEL);
+}
+EXPORT_SYMBOL(dma_fence_user_fence_alloc);
+
+/**
+ * dma_fence_user_fence_free() - Free user fence
+ *
+ * Free a user fence. Should only be called when dma_fence_user_fence_attach()
+ * has not been called, to clean up the original allocation from
+ * dma_fence_user_fence_alloc().
+ */
+void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence)
+{
+ kfree(user_fence);
+}
+EXPORT_SYMBOL(dma_fence_user_fence_free);
+
+/**
+ * dma_fence_user_fence_attach() - Attach user fence to dma-fence
+ *
+ * @fence: fence
+ * @user_fence: user fence
+ * @map: IOSYS map to write seqno to
+ * @seqno: seqno to write to IOSYS map
+ *
+ * Attach a user fence, which is a seqno write to an IOSYS map, to a DMA fence.
+ * The caller must guarantee that the memory in the IOSYS map doesn't move
+ * before the fence signals. This is typically done by installing the DMA fence
+ * into the BO's DMA reservation bookkeeping slot from which the IOSYS map
+ * was derived.
+ */
+void dma_fence_user_fence_attach(struct dma_fence *fence,
+ struct dma_fence_user_fence *user_fence,
+ struct iosys_map *map, u64 seqno)
+{
+ int err;
+
+ user_fence->map = *map;
+ user_fence->seqno = seqno;
+
+ err = dma_fence_add_callback(fence, &user_fence->cb, user_fence_cb);
+ if (err == -ENOENT)
+ user_fence_cb(NULL, &user_fence->cb);
+}
+EXPORT_SYMBOL(dma_fence_user_fence_attach);
diff --git a/include/linux/dma-fence-user-fence.h b/include/linux/dma-fence-user-fence.h
new file mode 100644
index 000000000000..8678129c7d56
--- /dev/null
+++ b/include/linux/dma-fence-user-fence.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#ifndef __LINUX_DMA_FENCE_USER_FENCE_H
+#define __LINUX_DMA_FENCE_USER_FENCE_H
+
+#include <linux/dma-fence.h>
+#include <linux/iosys-map.h>
+
+/** struct dma_fence_user_fence - User fence */
+struct dma_fence_user_fence {
+ /** @cb: dma-fence callback used to attach user fence to dma-fence */
+ struct dma_fence_cb cb;
+ /** @map: IOSYS map to write seqno to */
+ struct iosys_map map;
+ /** @seqno: seqno to write to IOSYS map */
+ u64 seqno;
+};
+
+struct dma_fence_user_fence *dma_fence_user_fence_alloc(void);
+
+void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence);
+
+void dma_fence_user_fence_attach(struct dma_fence *fence,
+ struct dma_fence_user_fence *user_fence,
+ struct iosys_map *map,
+ u64 seqno);
+
+#endif
--
2.34.1
* [RFC PATCH 03/29] drm/xe: Use dma_fence_preempt base class
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 01/29] dma-fence: Add dma_fence_preempt base class Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 02/29] dma-fence: Add dma_fence_user_fence Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 04/29] drm/xe: Allocate doorbells for UMD exec queues Matthew Brost
` (33 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Use the dma_fence_preempt base class in Xe instead of open-coding the
preemption implementation.
Cc: Dave Airlie <airlied@redhat.com>
Cc: Simona Vetter <simona.vetter@ffwll.ch>
Cc: Christian Koenig <christian.koenig@amd.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/dma-buf/dma-fence-preempt.c | 5 +-
drivers/gpu/drm/xe/xe_guc_submit.c | 3 +
drivers/gpu/drm/xe/xe_hw_engine_group.c | 4 +-
drivers/gpu/drm/xe/xe_preempt_fence.c | 80 ++++++---------------
drivers/gpu/drm/xe/xe_preempt_fence.h | 2 +-
drivers/gpu/drm/xe/xe_preempt_fence_types.h | 11 +--
6 files changed, 34 insertions(+), 71 deletions(-)
diff --git a/drivers/dma-buf/dma-fence-preempt.c b/drivers/dma-buf/dma-fence-preempt.c
index 6e6ce7ea7421..bcc5e5cec919 100644
--- a/drivers/dma-buf/dma-fence-preempt.c
+++ b/drivers/dma-buf/dma-fence-preempt.c
@@ -8,11 +8,11 @@
static void dma_fence_preempt_work_func(struct work_struct *w)
{
- bool cookie = dma_fence_begin_signalling();
struct dma_fence_preempt *pfence =
container_of(w, typeof(*pfence), work);
const struct dma_fence_preempt_ops *ops = pfence->ops;
int err = pfence->base.error;
+ bool cookie = dma_fence_begin_signalling();
if (!err) {
err = ops->preempt_wait(pfence);
@@ -23,6 +23,7 @@ static void dma_fence_preempt_work_func(struct work_struct *w)
dma_fence_signal(&pfence->base);
ops->preempt_finished(pfence);
+ /* The entire worker is the signaling path, thus annotate all of it */
dma_fence_end_signalling(cookie);
}
@@ -109,7 +110,7 @@ EXPORT_SYMBOL(dma_fence_is_preempt);
*
* @fence: Preempt fence
* @ops: Preempt fence operations
- * @wq: Work queue for preempt wait, should have WQ_MEM_RECLAIM set
+ * @wq: Work queue for preempt wait, must have WQ_MEM_RECLAIM set
* @context: Fence context
* @seqno: Fence sequence number
*/
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index f9ecee5364d8..58a3f4bb3887 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -1603,6 +1603,9 @@ static int guc_exec_queue_suspend_wait(struct xe_exec_queue *q)
struct xe_guc *guc = exec_queue_to_guc(q);
int ret;
+ if (exec_queue_reset(q) || exec_queue_killed_or_banned_or_wedged(q))
+ return -ECANCELED;
+
/*
* Likely don't need to check exec_queue_killed() as we clear
* suspend_pending upon kill but to be paranoid but races in which
diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.c b/drivers/gpu/drm/xe/xe_hw_engine_group.c
index 82750520a90a..8ed5410c3964 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine_group.c
+++ b/drivers/gpu/drm/xe/xe_hw_engine_group.c
@@ -163,7 +163,7 @@ int xe_hw_engine_group_add_exec_queue(struct xe_hw_engine_group *group, struct x
if (xe_vm_in_fault_mode(q->vm) && group->cur_mode == EXEC_MODE_DMA_FENCE) {
q->ops->suspend(q);
err = q->ops->suspend_wait(q);
- if (err)
+ if (err == -ETIME)
goto err_suspend;
xe_hw_engine_group_resume_faulting_lr_jobs(group);
@@ -236,7 +236,7 @@ static int xe_hw_engine_group_suspend_faulting_lr_jobs(struct xe_hw_engine_group
continue;
err = q->ops->suspend_wait(q);
- if (err)
+ if (err == -ETIME)
goto err_suspend;
}
diff --git a/drivers/gpu/drm/xe/xe_preempt_fence.c b/drivers/gpu/drm/xe/xe_preempt_fence.c
index 83fbeea5aa20..80a8bc82f3cc 100644
--- a/drivers/gpu/drm/xe/xe_preempt_fence.c
+++ b/drivers/gpu/drm/xe/xe_preempt_fence.c
@@ -4,73 +4,40 @@
*/
#include "xe_preempt_fence.h"
-
-#include <linux/slab.h>
-
#include "xe_exec_queue.h"
#include "xe_vm.h"
-static void preempt_fence_work_func(struct work_struct *w)
+static struct xe_exec_queue *to_exec_queue(struct dma_fence_preempt *fence)
{
- bool cookie = dma_fence_begin_signalling();
- struct xe_preempt_fence *pfence =
- container_of(w, typeof(*pfence), preempt_work);
- struct xe_exec_queue *q = pfence->q;
-
- if (pfence->error) {
- dma_fence_set_error(&pfence->base, pfence->error);
- } else if (!q->ops->reset_status(q)) {
- int err = q->ops->suspend_wait(q);
-
- if (err)
- dma_fence_set_error(&pfence->base, err);
- } else {
- dma_fence_set_error(&pfence->base, -ENOENT);
- }
-
- dma_fence_signal(&pfence->base);
- /*
- * Opt for keep everything in the fence critical section. This looks really strange since we
- * have just signalled the fence, however the preempt fences are all signalled via single
- * global ordered-wq, therefore anything that happens in this callback can easily block
- * progress on the entire wq, which itself may prevent other published preempt fences from
- * ever signalling. Therefore try to keep everything here in the callback in the fence
- * critical section. For example if something below grabs a scary lock like vm->lock,
- * lockdep should complain since we also hold that lock whilst waiting on preempt fences to
- * complete.
- */
- xe_vm_queue_rebind_worker(q->vm);
- xe_exec_queue_put(q);
- dma_fence_end_signalling(cookie);
+ return container_of(fence, struct xe_preempt_fence, base)->q;
}
-static const char *
-preempt_fence_get_driver_name(struct dma_fence *fence)
+static int xe_preempt_fence_preempt(struct dma_fence_preempt *fence)
{
- return "xe";
+ struct xe_exec_queue *q = to_exec_queue(fence);
+
+ return q->ops->suspend(q);
}
-static const char *
-preempt_fence_get_timeline_name(struct dma_fence *fence)
+static int xe_preempt_fence_preempt_wait(struct dma_fence_preempt *fence)
{
- return "preempt";
+ struct xe_exec_queue *q = to_exec_queue(fence);
+
+ return q->ops->suspend_wait(q);
}
-static bool preempt_fence_enable_signaling(struct dma_fence *fence)
+static void xe_preempt_fence_preempt_finished(struct dma_fence_preempt *fence)
{
- struct xe_preempt_fence *pfence =
- container_of(fence, typeof(*pfence), base);
- struct xe_exec_queue *q = pfence->q;
+ struct xe_exec_queue *q = to_exec_queue(fence);
- pfence->error = q->ops->suspend(q);
- queue_work(q->vm->xe->preempt_fence_wq, &pfence->preempt_work);
- return true;
+ xe_vm_queue_rebind_worker(q->vm);
+ xe_exec_queue_put(q);
}
-static const struct dma_fence_ops preempt_fence_ops = {
- .get_driver_name = preempt_fence_get_driver_name,
- .get_timeline_name = preempt_fence_get_timeline_name,
- .enable_signaling = preempt_fence_enable_signaling,
+static const struct dma_fence_preempt_ops xe_preempt_fence_ops = {
+ .preempt = xe_preempt_fence_preempt,
+ .preempt_wait = xe_preempt_fence_preempt_wait,
+ .preempt_finished = xe_preempt_fence_preempt_finished,
};
/**
@@ -95,7 +62,6 @@ struct xe_preempt_fence *xe_preempt_fence_alloc(void)
return ERR_PTR(-ENOMEM);
INIT_LIST_HEAD(&pfence->link);
- INIT_WORK(&pfence->preempt_work, preempt_fence_work_func);
return pfence;
}
@@ -134,11 +100,11 @@ xe_preempt_fence_arm(struct xe_preempt_fence *pfence, struct xe_exec_queue *q,
{
list_del_init(&pfence->link);
pfence->q = xe_exec_queue_get(q);
- spin_lock_init(&pfence->lock);
- dma_fence_init(&pfence->base, &preempt_fence_ops,
- &pfence->lock, context, seqno);
- return &pfence->base;
+ dma_fence_preempt_init(&pfence->base, &xe_preempt_fence_ops,
+ q->vm->xe->preempt_fence_wq, context, seqno);
+
+ return &pfence->base.base;
}
/**
@@ -169,5 +135,5 @@ xe_preempt_fence_create(struct xe_exec_queue *q,
bool xe_fence_is_xe_preempt(const struct dma_fence *fence)
{
- return fence->ops == &preempt_fence_ops;
+ return dma_fence_is_preempt(fence);
}
diff --git a/drivers/gpu/drm/xe/xe_preempt_fence.h b/drivers/gpu/drm/xe/xe_preempt_fence.h
index 9406c6fea525..7b56d12c0786 100644
--- a/drivers/gpu/drm/xe/xe_preempt_fence.h
+++ b/drivers/gpu/drm/xe/xe_preempt_fence.h
@@ -25,7 +25,7 @@ xe_preempt_fence_arm(struct xe_preempt_fence *pfence, struct xe_exec_queue *q,
static inline struct xe_preempt_fence *
to_preempt_fence(struct dma_fence *fence)
{
- return container_of(fence, struct xe_preempt_fence, base);
+ return container_of(fence, struct xe_preempt_fence, base.base);
}
/**
diff --git a/drivers/gpu/drm/xe/xe_preempt_fence_types.h b/drivers/gpu/drm/xe/xe_preempt_fence_types.h
index 312c3372a49f..f12b89f7dc35 100644
--- a/drivers/gpu/drm/xe/xe_preempt_fence_types.h
+++ b/drivers/gpu/drm/xe/xe_preempt_fence_types.h
@@ -6,8 +6,7 @@
#ifndef _XE_PREEMPT_FENCE_TYPES_H_
#define _XE_PREEMPT_FENCE_TYPES_H_
-#include <linux/dma-fence.h>
-#include <linux/workqueue.h>
+#include <linux/dma-fence-preempt.h>
struct xe_exec_queue;
@@ -18,17 +17,11 @@ struct xe_exec_queue;
*/
struct xe_preempt_fence {
/** @base: dma fence base */
- struct dma_fence base;
+ struct dma_fence_preempt base;
/** @link: link into list of pending preempt fences */
struct list_head link;
/** @q: exec queue for this preempt fence */
struct xe_exec_queue *q;
- /** @preempt_work: work struct which issues preemption */
- struct work_struct preempt_work;
- /** @lock: dma-fence fence lock */
- spinlock_t lock;
- /** @error: preempt fence is in error state */
- int error;
};
#endif
--
2.34.1
* [RFC PATCH 04/29] drm/xe: Allocate doorbells for UMD exec queues
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (2 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 03/29] drm/xe: Use dma_fence_preempt base class Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 05/29] drm/xe: Add doorbell ID to snapshot capture Matthew Brost
` (32 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Doorbells will be mapped to user space for UMD submission. Add
infrastructure to the GuC submission backend to manage them.
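For illustration, a doorbell ID manager can be as simple as a bitmap
allocator handing out IDs per exec queue and releasing them on queue
teardown. The sketch below is a hypothetical simplification, not the actual
xe_guc_db_mgr code:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of a doorbell ID manager in the spirit of xe_guc_db_mgr (names
 * and sizes hypothetical): a bitmap allocator for doorbell IDs. */
#define TOY_NUM_DOORBELLS 8

struct toy_db_mgr {
	uint32_t used;		/* bit i set => doorbell i allocated */
};

static int toy_db_alloc(struct toy_db_mgr *m)
{
	int i;

	for (i = 0; i < TOY_NUM_DOORBELLS; i++) {
		if (!(m->used & (1u << i))) {
			m->used |= 1u << i;
			return i;
		}
	}
	return -1;		/* all doorbells in use */
}

static void toy_db_release(struct toy_db_mgr *m, int id)
{
	assert(id >= 0 && id < TOY_NUM_DOORBELLS);
	m->used &= ~(1u << id);
}
```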
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec_queue_types.h | 2 +
drivers/gpu/drm/xe/xe_guc_exec_queue_types.h | 7 ++
drivers/gpu/drm/xe/xe_guc_submit.c | 107 +++++++++++++++++--
3 files changed, 106 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index 1158b6062a6c..7f68587d4021 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -83,6 +83,8 @@ struct xe_exec_queue {
#define EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD BIT(3)
/* kernel exec_queue only, set priority to highest level */
#define EXEC_QUEUE_FLAG_HIGH_PRIORITY BIT(4)
+/* queue used for UMD submission */
+#define EXEC_QUEUE_FLAG_UMD_SUBMISSION BIT(5)
/**
* @flags: flags for this exec queue, should statically setup aside from ban
diff --git a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
index 4c39f01e4f52..2d53af75ed75 100644
--- a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
@@ -47,6 +47,13 @@ struct xe_guc_exec_queue {
u16 id;
/** @suspend_wait: wait queue used to wait on pending suspends */
wait_queue_head_t suspend_wait;
+ /** @db: doorbell state */
+ struct {
+ /** @db.id: doorbell ID */
+ int id;
+ /** @db.dpa: doorbell device physical address */
+ u64 dpa;
+ } db;
/** @suspend_pending: a suspend of the exec_queue is pending */
bool suspend_pending;
};
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 58a3f4bb3887..cc7a98c1343e 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -29,6 +29,7 @@
#include "xe_guc.h"
#include "xe_guc_capture.h"
#include "xe_guc_ct.h"
+#include "xe_guc_db_mgr.h"
#include "xe_guc_exec_queue_types.h"
#include "xe_guc_id_mgr.h"
#include "xe_guc_submit_types.h"
@@ -67,6 +68,7 @@ exec_queue_to_guc(struct xe_exec_queue *q)
#define EXEC_QUEUE_STATE_BANNED (1 << 9)
#define EXEC_QUEUE_STATE_CHECK_TIMEOUT (1 << 10)
#define EXEC_QUEUE_STATE_EXTRA_REF (1 << 11)
+#define EXEC_QUEUE_STATE_DB_REGISTERED (1 << 12)
static bool exec_queue_registered(struct xe_exec_queue *q)
{
@@ -218,6 +220,16 @@ static void set_exec_queue_extra_ref(struct xe_exec_queue *q)
atomic_or(EXEC_QUEUE_STATE_EXTRA_REF, &q->guc->state);
}
+static bool exec_queue_doorbell_registered(struct xe_exec_queue *q)
+{
+ return atomic_read(&q->guc->state) & EXEC_QUEUE_STATE_DB_REGISTERED;
+}
+
+static void set_exec_queue_doorbell_registered(struct xe_exec_queue *q)
+{
+ atomic_or(EXEC_QUEUE_STATE_DB_REGISTERED, &q->guc->state);
+}
+
static bool exec_queue_killed_or_banned_or_wedged(struct xe_exec_queue *q)
{
return (atomic_read(&q->guc->state) &
@@ -354,13 +366,6 @@ static int alloc_guc_id(struct xe_guc *guc, struct xe_exec_queue *q)
return ret;
}
-static void release_guc_id(struct xe_guc *guc, struct xe_exec_queue *q)
-{
- mutex_lock(&guc->submission_state.lock);
- __release_guc_id(guc, q, q->width);
- mutex_unlock(&guc->submission_state.lock);
-}
-
struct exec_queue_policy {
u32 count;
struct guc_update_exec_queue_policy h2g;
@@ -1238,7 +1243,13 @@ static void __guc_exec_queue_fini_async(struct work_struct *w)
if (xe_exec_queue_is_lr(q))
cancel_work_sync(&ge->lr_tdr);
- release_guc_id(guc, q);
+
+ mutex_lock(&guc->submission_state.lock);
+ if (q->guc->db.id >= 0)
+ xe_guc_db_mgr_release_id_locked(&guc->dbm, q->guc->db.id);
+ __release_guc_id(guc, q, q->width);
+ mutex_unlock(&guc->submission_state.lock);
+
xe_sched_entity_fini(&ge->entity);
xe_sched_fini(&ge->sched);
@@ -1273,6 +1284,8 @@ static void __guc_exec_queue_fini(struct xe_guc *guc, struct xe_exec_queue *q)
guc_exec_queue_fini_async(q);
}
+static void deallocate_doorbell(struct xe_guc *guc, u16 guc_id);
+
static void __guc_exec_queue_process_msg_cleanup(struct xe_sched_msg *msg)
{
struct xe_exec_queue *q = msg->private_data;
@@ -1281,6 +1294,9 @@ static void __guc_exec_queue_process_msg_cleanup(struct xe_sched_msg *msg)
xe_gt_assert(guc_to_gt(guc), !(q->flags & EXEC_QUEUE_FLAG_PERMANENT));
trace_xe_exec_queue_cleanup_entity(q);
+ if (exec_queue_doorbell_registered(q))
+ deallocate_doorbell(guc, q->guc->id);
+
if (exec_queue_registered(q))
disable_scheduling_deregister(guc, q);
else
@@ -1399,6 +1415,53 @@ static void guc_exec_queue_process_msg(struct xe_sched_msg *msg)
xe_pm_runtime_put(xe);
}
+static int allocate_doorbell(struct xe_guc *guc, u16 guc_id, int doorbell_id,
+ u64 gpa)
+{
+ u32 action[] = {
+ XE_GUC_ACTION_ALLOCATE_DOORBELL,
+ guc_id,
+ doorbell_id,
+ lower_32_bits(gpa),
+ upper_32_bits(gpa),
+ 0,
+ };
+
+ return xe_guc_ct_send_block(&guc->ct, action, ARRAY_SIZE(action));
+}
+
+static void deallocate_doorbell(struct xe_guc *guc, u16 guc_id)
+{
+ u32 action[] = {
+ XE_GUC_ACTION_DEALLOCATE_DOORBELL,
+ guc_id
+ };
+
+ xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action), 0, 0);
+}
+
+#define GUC_MMIO_DB_BAR_OFFSET SZ_4M
+
+static int create_doorbell(struct xe_guc *guc, struct xe_exec_queue *q)
+{
+ int ret;
+
+ set_exec_queue_doorbell_registered(q);
+ xe_guc_submit_reset_wait(guc);
+
+ q->guc->db.dpa = GUC_MMIO_DB_BAR_OFFSET + PAGE_SIZE * q->guc->db.id;
+ register_exec_queue(q);
+ enable_scheduling(q);
+
+ ret = allocate_doorbell(guc, q->guc->id, q->guc->db.id, q->guc->db.dpa);
+ if (ret) {
+ disable_scheduling_deregister(guc, q);
+ return ret;
+ }
+
+ return 0;
+}
+
static const struct drm_sched_backend_ops drm_sched_ops = {
.run_job = guc_exec_queue_run_job,
.free_job = guc_exec_queue_free_job,
@@ -1415,7 +1478,7 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
struct xe_guc *guc = exec_queue_to_guc(q);
struct xe_guc_exec_queue *ge;
long timeout;
- int err, i;
+ int err, i, db_id = 0;
xe_gt_assert(guc_to_gt(guc), xe_device_uc_enabled(guc_to_xe(guc)));
@@ -1458,14 +1521,35 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
if (xe_guc_read_stopped(guc))
xe_sched_stop(sched);
+ q->guc->db.id = -1;
+ if (q->flags & EXEC_QUEUE_FLAG_UMD_SUBMISSION) {
+ db_id = xe_guc_db_mgr_reserve_id_locked(&guc->dbm);
+ if (db_id < 0) {
+ err = db_id;
+ goto err_id;
+ }
+ }
+
mutex_unlock(&guc->submission_state.lock);
+ if (q->flags & EXEC_QUEUE_FLAG_UMD_SUBMISSION) {
+ q->guc->db.id = db_id;
+ err = create_doorbell(guc, q);
+ if (err)
+ goto err_db;
+ }
+
xe_exec_queue_assign_name(q, q->guc->id);
trace_xe_exec_queue_create(q);
return 0;
+err_db:
+ mutex_lock(&guc->submission_state.lock);
+ xe_guc_db_mgr_release_id_locked(&guc->dbm, q->guc->db.id);
+err_id:
+ __release_guc_id(guc, q, q->width);
err_entity:
mutex_unlock(&guc->submission_state.lock);
xe_sched_entity_fini(&ge->entity);
@@ -1699,7 +1783,10 @@ static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
struct xe_sched_job *job = xe_sched_first_pending_job(sched);
bool ban = false;
- if (job) {
+ if (exec_queue_doorbell_registered(q)) {
+ /* TODO: Ban via UMD shim too */
+ ban = true;
+ } else if (job) {
if ((xe_sched_job_started(job) &&
!xe_sched_job_completed(job)) ||
xe_sched_invalidate_job(job, 2)) {
--
2.34.1
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [RFC PATCH 05/29] drm/xe: Add doorbell ID to snapshot capture
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (3 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 04/29] drm/xe: Allocate doorbells for UMD exec queues Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 06/29] drm/xe: Break submission ring out into its own BO Matthew Brost
` (31 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Useful for debugging hangs on exec queues that use doorbells.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_guc_submit.c | 2 ++
drivers/gpu/drm/xe/xe_guc_submit_types.h | 2 ++
2 files changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index cc7a98c1343e..c226c7b3245d 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -2227,6 +2227,7 @@ xe_guc_exec_queue_snapshot_capture(struct xe_exec_queue *q)
return NULL;
snapshot->guc.id = q->guc->id;
+ snapshot->guc.db_id = q->guc->db.id;
memcpy(&snapshot->name, &q->name, sizeof(snapshot->name));
snapshot->class = q->class;
snapshot->logical_mask = q->logical_mask;
@@ -2321,6 +2322,7 @@ xe_guc_exec_queue_snapshot_print(struct xe_guc_submit_exec_queue_snapshot *snaps
drm_printf(p, "\tClass: %d\n", snapshot->class);
drm_printf(p, "\tLogical mask: 0x%x\n", snapshot->logical_mask);
drm_printf(p, "\tWidth: %d\n", snapshot->width);
+ drm_printf(p, "\tDoorbell ID: %d\n", snapshot->guc.db_id);
drm_printf(p, "\tRef: %d\n", snapshot->refcount);
drm_printf(p, "\tTimeout: %ld (ms)\n", snapshot->sched_timeout);
drm_printf(p, "\tTimeslice: %u (us)\n",
diff --git a/drivers/gpu/drm/xe/xe_guc_submit_types.h b/drivers/gpu/drm/xe/xe_guc_submit_types.h
index dc7456c34583..12fef7848b78 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit_types.h
+++ b/drivers/gpu/drm/xe/xe_guc_submit_types.h
@@ -113,6 +113,8 @@ struct xe_guc_submit_exec_queue_snapshot {
u32 wqi_tail;
/** @guc.id: GuC id for this exec_queue */
u16 id;
+ /** @guc.db_id: Doorbell id */
+ u16 db_id;
} guc;
/**
--
2.34.1
* [RFC PATCH 06/29] drm/xe: Break submission ring out into its own BO
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (4 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 05/29] drm/xe: Add doorbell ID to snapshot capture Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 07/29] drm/xe: Break indirect ring state " Matthew Brost
` (30 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Start laying the groundwork for UMD submission. This will allow mmapping
the submission ring to user space.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_lrc.c | 38 +++++++++++++++++++++++++------
drivers/gpu/drm/xe/xe_lrc_types.h | 9 ++++++--
2 files changed, 38 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
index 22e58c6e2a35..758648b6a711 100644
--- a/drivers/gpu/drm/xe/xe_lrc.c
+++ b/drivers/gpu/drm/xe/xe_lrc.c
@@ -632,7 +632,7 @@ static inline u32 __xe_lrc_ring_offset(struct xe_lrc *lrc)
u32 xe_lrc_pphwsp_offset(struct xe_lrc *lrc)
{
- return lrc->ring.size;
+ return 0;
}
/* Make the magic macros work */
@@ -712,7 +712,21 @@ static inline u32 __maybe_unused __xe_lrc_##elem##_ggtt_addr(struct xe_lrc *lrc)
return xe_bo_ggtt_addr(lrc->bo) + __xe_lrc_##elem##_offset(lrc); \
} \
-DECL_MAP_ADDR_HELPERS(ring)
+#define DECL_MAP_RING_ADDR_HELPERS(elem) \
+static inline struct iosys_map __xe_lrc_##elem##_map(struct xe_lrc *lrc) \
+{ \
+ struct iosys_map map = lrc->submission_ring->vmap; \
+\
+ xe_assert(lrc_to_xe(lrc), !iosys_map_is_null(&map)); \
+ iosys_map_incr(&map, __xe_lrc_##elem##_offset(lrc)); \
+ return map; \
+} \
+static inline u32 __maybe_unused __xe_lrc_##elem##_ggtt_addr(struct xe_lrc *lrc) \
+{ \
+ return xe_bo_ggtt_addr(lrc->submission_ring) + __xe_lrc_##elem##_offset(lrc); \
+} \
+
+DECL_MAP_RING_ADDR_HELPERS(ring)
DECL_MAP_ADDR_HELPERS(pphwsp)
DECL_MAP_ADDR_HELPERS(seqno)
DECL_MAP_ADDR_HELPERS(regs)
@@ -722,6 +736,7 @@ DECL_MAP_ADDR_HELPERS(ctx_timestamp)
DECL_MAP_ADDR_HELPERS(parallel)
DECL_MAP_ADDR_HELPERS(indirect_ring)
+#undef DECL_MAP_RING_ADDR_HELPERS
#undef DECL_MAP_ADDR_HELPERS
/**
@@ -866,10 +881,8 @@ static void xe_lrc_set_ppgtt(struct xe_lrc *lrc, struct xe_vm *vm)
static void xe_lrc_finish(struct xe_lrc *lrc)
{
xe_hw_fence_ctx_finish(&lrc->fence_ctx);
- xe_bo_lock(lrc->bo, false);
- xe_bo_unpin(lrc->bo);
- xe_bo_unlock(lrc->bo);
- xe_bo_put(lrc->bo);
+ xe_bo_unpin_map_no_vm(lrc->bo);
+ xe_bo_unpin_map_no_vm(lrc->submission_ring);
}
#define PVC_CTX_ASID (0x2e + 1)
@@ -889,7 +902,7 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
kref_init(&lrc->refcount);
lrc->flags = 0;
- lrc_size = ring_size + xe_gt_lrc_size(gt, hwe->class);
+ lrc_size = xe_gt_lrc_size(gt, hwe->class);
if (xe_gt_has_indirect_ring_state(gt))
lrc->flags |= XE_LRC_FLAG_INDIRECT_RING_STATE;
@@ -905,6 +918,17 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
if (IS_ERR(lrc->bo))
return PTR_ERR(lrc->bo);
+ lrc->submission_ring = xe_bo_create_pin_map(xe, tile, vm, ring_size,
+ ttm_bo_type_kernel,
+ XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+ XE_BO_FLAG_GGTT |
+ XE_BO_FLAG_GGTT_INVALIDATE);
+ if (IS_ERR(lrc->submission_ring)) {
+ err = PTR_ERR(lrc->submission_ring);
+ lrc->submission_ring = NULL;
+ goto err_lrc_finish;
+ }
+
lrc->size = lrc_size;
lrc->tile = gt_to_tile(hwe->gt);
lrc->ring.size = ring_size;
diff --git a/drivers/gpu/drm/xe/xe_lrc_types.h b/drivers/gpu/drm/xe/xe_lrc_types.h
index 71ecb453f811..3ad9ac2d644f 100644
--- a/drivers/gpu/drm/xe/xe_lrc_types.h
+++ b/drivers/gpu/drm/xe/xe_lrc_types.h
@@ -17,11 +17,16 @@ struct xe_bo;
*/
struct xe_lrc {
/**
- * @bo: buffer object (memory) for logical ring context, per process HW
- * status page, and submission ring.
+ * @bo: buffer object (memory) for logical ring context and per process
+ * HW status page.
*/
struct xe_bo *bo;
+ /**
+ * @submission_ring: buffer object (memory) for submission_ring
+ */
+ struct xe_bo *submission_ring;
+
/** @size: size of lrc including any indirect ring state page */
u32 size;
--
2.34.1
* [RFC PATCH 07/29] drm/xe: Break indirect ring state out into its own BO
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (5 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 06/29] drm/xe: Break submission ring out into its own BO Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 08/29] drm/xe: Clear GGTT in xe_bo_restore_kernel Matthew Brost
` (29 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Start laying the groundwork for UMD submission. This will allow mmapping
the indirect ring state to user space.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_lrc.c | 79 ++++++++++++++++++++++---------
drivers/gpu/drm/xe/xe_lrc_types.h | 7 ++-
2 files changed, 63 insertions(+), 23 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
index 758648b6a711..e3c1773191bd 100644
--- a/drivers/gpu/drm/xe/xe_lrc.c
+++ b/drivers/gpu/drm/xe/xe_lrc.c
@@ -74,10 +74,6 @@ size_t xe_gt_lrc_size(struct xe_gt *gt, enum xe_engine_class class)
size = 2 * SZ_4K;
}
- /* Add indirect ring state page */
- if (xe_gt_has_indirect_ring_state(gt))
- size += LRC_INDIRECT_RING_STATE_SIZE;
-
return size;
}
@@ -694,8 +690,7 @@ static u32 __xe_lrc_ctx_timestamp_offset(struct xe_lrc *lrc)
static inline u32 __xe_lrc_indirect_ring_offset(struct xe_lrc *lrc)
{
- /* Indirect ring state page is at the very end of LRC */
- return lrc->size - LRC_INDIRECT_RING_STATE_SIZE;
+ return 0;
}
#define DECL_MAP_ADDR_HELPERS(elem) \
@@ -726,6 +721,20 @@ static inline u32 __maybe_unused __xe_lrc_##elem##_ggtt_addr(struct xe_lrc *lrc)
return xe_bo_ggtt_addr(lrc->submission_ring) + __xe_lrc_##elem##_offset(lrc); \
} \
+#define DECL_MAP_INDIRECT_ADDR_HELPERS(elem) \
+static inline struct iosys_map __xe_lrc_##elem##_map(struct xe_lrc *lrc) \
+{ \
+ struct iosys_map map = lrc->indirect_state->vmap; \
+\
+ xe_assert(lrc_to_xe(lrc), !iosys_map_is_null(&map)); \
+ iosys_map_incr(&map, __xe_lrc_##elem##_offset(lrc)); \
+ return map; \
+} \
+static inline u32 __maybe_unused __xe_lrc_##elem##_ggtt_addr(struct xe_lrc *lrc) \
+{ \
+ return xe_bo_ggtt_addr(lrc->indirect_state) + __xe_lrc_##elem##_offset(lrc); \
+} \
+
DECL_MAP_RING_ADDR_HELPERS(ring)
DECL_MAP_ADDR_HELPERS(pphwsp)
DECL_MAP_ADDR_HELPERS(seqno)
@@ -734,8 +743,9 @@ DECL_MAP_ADDR_HELPERS(start_seqno)
DECL_MAP_ADDR_HELPERS(ctx_job_timestamp)
DECL_MAP_ADDR_HELPERS(ctx_timestamp)
DECL_MAP_ADDR_HELPERS(parallel)
-DECL_MAP_ADDR_HELPERS(indirect_ring)
+DECL_MAP_INDIRECT_ADDR_HELPERS(indirect_ring)
+#undef DECL_MAP_INDIRECT_ADDR_HELPERS
#undef DECL_MAP_RING_ADDR_HELPERS
#undef DECL_MAP_ADDR_HELPERS
@@ -845,25 +855,27 @@ void xe_lrc_write_ctx_reg(struct xe_lrc *lrc, int reg_nr, u32 val)
xe_map_write32(xe, &map, val);
}
-static void *empty_lrc_data(struct xe_hw_engine *hwe)
+static void *empty_lrc_data(struct xe_hw_engine *hwe, bool has_default)
{
struct xe_gt *gt = hwe->gt;
void *data;
u32 *regs;
- data = kzalloc(xe_gt_lrc_size(gt, hwe->class), GFP_KERNEL);
+ data = kzalloc(xe_gt_lrc_size(gt, hwe->class) +
+ LRC_INDIRECT_RING_STATE_SIZE, GFP_KERNEL);
if (!data)
return NULL;
/* 1st page: Per-Process of HW status Page */
- regs = data + LRC_PPHWSP_SIZE;
- set_offsets(regs, reg_offsets(gt_to_xe(gt), hwe->class), hwe);
- set_context_control(regs, hwe);
- set_memory_based_intr(regs, hwe);
- reset_stop_ring(regs, hwe);
+ if (!has_default) {
+ regs = data + LRC_PPHWSP_SIZE;
+ set_offsets(regs, reg_offsets(gt_to_xe(gt), hwe->class), hwe);
+ set_context_control(regs, hwe);
+ set_memory_based_intr(regs, hwe);
+ reset_stop_ring(regs, hwe);
+ }
if (xe_gt_has_indirect_ring_state(gt)) {
- regs = data + xe_gt_lrc_size(gt, hwe->class) -
- LRC_INDIRECT_RING_STATE_SIZE;
+ regs = data + xe_gt_lrc_size(gt, hwe->class);
set_offsets(regs, xe2_indirect_ring_state_offsets, hwe);
}
@@ -883,6 +895,7 @@ static void xe_lrc_finish(struct xe_lrc *lrc)
xe_hw_fence_ctx_finish(&lrc->fence_ctx);
xe_bo_unpin_map_no_vm(lrc->bo);
xe_bo_unpin_map_no_vm(lrc->submission_ring);
+ xe_bo_unpin_map_no_vm(lrc->indirect_state);
}
#define PVC_CTX_ASID (0x2e + 1)
@@ -903,8 +916,6 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
kref_init(&lrc->refcount);
lrc->flags = 0;
lrc_size = xe_gt_lrc_size(gt, hwe->class);
- if (xe_gt_has_indirect_ring_state(gt))
- lrc->flags |= XE_LRC_FLAG_INDIRECT_RING_STATE;
/*
* FIXME: Perma-pinning LRC as we don't yet support moving GGTT address
@@ -929,6 +940,22 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
goto err_lrc_finish;
}
+ if (xe_gt_has_indirect_ring_state(gt)) {
+ lrc->flags |= XE_LRC_FLAG_INDIRECT_RING_STATE;
+
+ lrc->indirect_state = xe_bo_create_pin_map(xe, tile, vm,
+ LRC_INDIRECT_RING_STATE_SIZE,
+ ttm_bo_type_kernel,
+ XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+ XE_BO_FLAG_GGTT |
+ XE_BO_FLAG_GGTT_INVALIDATE);
+ if (IS_ERR(lrc->indirect_state)) {
+ err = PTR_ERR(lrc->indirect_state);
+ lrc->indirect_state = NULL;
+ goto err_lrc_finish;
+ }
+ }
+
lrc->size = lrc_size;
lrc->tile = gt_to_tile(hwe->gt);
lrc->ring.size = ring_size;
@@ -938,8 +965,8 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
xe_hw_fence_ctx_init(&lrc->fence_ctx, hwe->gt,
hwe->fence_irq, hwe->name);
- if (!gt->default_lrc[hwe->class]) {
- init_data = empty_lrc_data(hwe);
+ if (!gt->default_lrc[hwe->class] || xe_gt_has_indirect_ring_state(gt)) {
+ init_data = empty_lrc_data(hwe, !!gt->default_lrc[hwe->class]);
if (!init_data) {
err = -ENOMEM;
goto err_lrc_finish;
@@ -951,7 +978,7 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
* values
*/
map = __xe_lrc_pphwsp_map(lrc);
- if (!init_data) {
+ if (gt->default_lrc[hwe->class]) {
xe_map_memset(xe, &map, 0, 0, LRC_PPHWSP_SIZE); /* PPHWSP */
xe_map_memcpy_to(xe, &map, LRC_PPHWSP_SIZE,
gt->default_lrc[hwe->class] + LRC_PPHWSP_SIZE,
@@ -959,9 +986,17 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
} else {
xe_map_memcpy_to(xe, &map, 0, init_data,
xe_gt_lrc_size(gt, hwe->class));
- kfree(init_data);
}
+ if (xe_gt_has_indirect_ring_state(gt)) {
+ map = __xe_lrc_indirect_ring_map(lrc);
+ xe_map_memcpy_to(xe, &map, 0, init_data +
+ xe_gt_lrc_size(gt, hwe->class),
+ LRC_INDIRECT_RING_STATE_SIZE);
+ }
+
+ kfree(init_data);
+
if (vm) {
xe_lrc_set_ppgtt(lrc, vm);
diff --git a/drivers/gpu/drm/xe/xe_lrc_types.h b/drivers/gpu/drm/xe/xe_lrc_types.h
index 3ad9ac2d644f..3be708c82313 100644
--- a/drivers/gpu/drm/xe/xe_lrc_types.h
+++ b/drivers/gpu/drm/xe/xe_lrc_types.h
@@ -27,7 +27,12 @@ struct xe_lrc {
*/
struct xe_bo *submission_ring;
- /** @size: size of lrc including any indirect ring state page */
+ /**
+ * @indirect_state: buffer object (memory) for indirect state
+ */
+ struct xe_bo *indirect_state;
+
+ /** @size: size of lrc */
u32 size;
/** @tile: tile which this LRC belongs to */
--
2.34.1
* [RFC PATCH 08/29] drm/xe: Clear GGTT in xe_bo_restore_kernel
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (6 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 07/29] drm/xe: Break indirect ring state " Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 09/29] FIXME: drm/xe: Add pad to ring and indirect state Matthew Brost
` (28 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Part of what xe_bo_restore_kernel does is restore BOs' GGTT mappings,
which may have been lost during a power state change. What is missing is
restoring the GGTT entries without BO mappings to a known state (e.g.,
scratch pages). Update xe_bo_restore_kernel to clear the GGTT before
restoring BOs' GGTT mappings.
v2:
- Include missing local change of tile and id variable (CI)
v3:
- Fixed kernel doc (CI)
v4:
- Only clear holes (CI)
Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Cc: <stable@vger.kernel.org> # v6.8+
---
drivers/gpu/drm/xe/xe_bo_evict.c | 8 +++++++-
drivers/gpu/drm/xe/xe_ggtt.c | 19 ++++++++++++++++---
drivers/gpu/drm/xe/xe_ggtt.h | 2 ++
3 files changed, 25 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo_evict.c b/drivers/gpu/drm/xe/xe_bo_evict.c
index 8fb2be061003..d7bb3dbb41d6 100644
--- a/drivers/gpu/drm/xe/xe_bo_evict.c
+++ b/drivers/gpu/drm/xe/xe_bo_evict.c
@@ -123,7 +123,8 @@ int xe_bo_evict_all(struct xe_device *xe)
* @xe: xe device
*
* Move kernel BOs from temporary (typically system) memory to VRAM via CPU. All
- * moves done via TTM calls.
+ moves done via TTM calls. All GGTT mappings are restored too, first by
+ clearing the GGTT to a known state and then restoring individual BOs' GGTT mappings.
*
* This function should be called early, before trying to init the GT, on device
* resume.
@@ -131,8 +132,13 @@ int xe_bo_evict_all(struct xe_device *xe)
int xe_bo_restore_kernel(struct xe_device *xe)
{
struct xe_bo *bo;
+ struct xe_tile *tile;
+ u8 id;
int ret;
+ for_each_tile(tile, xe, id)
+ xe_ggtt_clear(tile->mem.ggtt);
+
spin_lock(&xe->pinned.lock);
for (;;) {
bo = list_first_entry_or_null(&xe->pinned.evicted,
diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c
index 558fac8bb6fb..2fc498b89878 100644
--- a/drivers/gpu/drm/xe/xe_ggtt.c
+++ b/drivers/gpu/drm/xe/xe_ggtt.c
@@ -140,7 +140,7 @@ static void xe_ggtt_set_pte_and_flush(struct xe_ggtt *ggtt, u64 addr, u64 pte)
ggtt_update_access_counter(ggtt);
}
-static void xe_ggtt_clear(struct xe_ggtt *ggtt, u64 start, u64 size)
+static void __xe_ggtt_clear(struct xe_ggtt *ggtt, u64 start, u64 size)
{
u16 pat_index = tile_to_xe(ggtt->tile)->pat.idx[XE_CACHE_WB];
u64 end = start + size - 1;
@@ -160,6 +160,19 @@ static void xe_ggtt_clear(struct xe_ggtt *ggtt, u64 start, u64 size)
}
}
+static void xe_ggtt_initial_clear(struct xe_ggtt *ggtt);
+
+/**
+ * xe_ggtt_clear() - GGTT clear
+ * @ggtt: the &xe_ggtt to be cleared
+ *
+ * Clear all GGTT to a known state
+ */
+void xe_ggtt_clear(struct xe_ggtt *ggtt)
+{
+ xe_ggtt_initial_clear(ggtt);
+}
+
static void ggtt_fini_early(struct drm_device *drm, void *arg)
{
struct xe_ggtt *ggtt = arg;
@@ -277,7 +290,7 @@ static void xe_ggtt_initial_clear(struct xe_ggtt *ggtt)
/* Display may have allocated inside ggtt, so be careful with clearing here */
mutex_lock(&ggtt->lock);
drm_mm_for_each_hole(hole, &ggtt->mm, start, end)
- xe_ggtt_clear(ggtt, start, end - start);
+ __xe_ggtt_clear(ggtt, start, end - start);
xe_ggtt_invalidate(ggtt);
mutex_unlock(&ggtt->lock);
@@ -294,7 +307,7 @@ static void ggtt_node_remove(struct xe_ggtt_node *node)
mutex_lock(&ggtt->lock);
if (bound)
- xe_ggtt_clear(ggtt, node->base.start, node->base.size);
+ __xe_ggtt_clear(ggtt, node->base.start, node->base.size);
drm_mm_remove_node(&node->base);
node->base.size = 0;
mutex_unlock(&ggtt->lock);
diff --git a/drivers/gpu/drm/xe/xe_ggtt.h b/drivers/gpu/drm/xe/xe_ggtt.h
index 27e7d67de004..b7ae440cdebf 100644
--- a/drivers/gpu/drm/xe/xe_ggtt.h
+++ b/drivers/gpu/drm/xe/xe_ggtt.h
@@ -13,6 +13,8 @@ struct drm_printer;
int xe_ggtt_init_early(struct xe_ggtt *ggtt);
int xe_ggtt_init(struct xe_ggtt *ggtt);
+void xe_ggtt_clear(struct xe_ggtt *ggtt);
+
struct xe_ggtt_node *xe_ggtt_node_init(struct xe_ggtt *ggtt);
void xe_ggtt_node_fini(struct xe_ggtt_node *node);
int xe_ggtt_node_insert_balloon(struct xe_ggtt_node *node,
--
2.34.1
* [RFC PATCH 09/29] FIXME: drm/xe: Add pad to ring and indirect state
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (7 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 08/29] drm/xe: Clear GGTT in xe_bo_restore_kernel Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 10/29] drm/xe: Enable indirect ring on media GT Matthew Brost
` (27 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Unsure why, but without this padding, intermittent hangs occur during GuC
context switches.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_lrc.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
index e3c1773191bd..9633e5e700f6 100644
--- a/drivers/gpu/drm/xe/xe_lrc.c
+++ b/drivers/gpu/drm/xe/xe_lrc.c
@@ -929,7 +929,7 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
if (IS_ERR(lrc->bo))
return PTR_ERR(lrc->bo);
- lrc->submission_ring = xe_bo_create_pin_map(xe, tile, vm, ring_size,
+ lrc->submission_ring = xe_bo_create_pin_map(xe, tile, vm, SZ_32K,
ttm_bo_type_kernel,
XE_BO_FLAG_VRAM_IF_DGFX(tile) |
XE_BO_FLAG_GGTT |
@@ -943,8 +943,7 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
if (xe_gt_has_indirect_ring_state(gt)) {
lrc->flags |= XE_LRC_FLAG_INDIRECT_RING_STATE;
- lrc->indirect_state = xe_bo_create_pin_map(xe, tile, vm,
- LRC_INDIRECT_RING_STATE_SIZE,
+ lrc->indirect_state = xe_bo_create_pin_map(xe, tile, vm, SZ_8K,
ttm_bo_type_kernel,
XE_BO_FLAG_VRAM_IF_DGFX(tile) |
XE_BO_FLAG_GGTT |
--
2.34.1
* [RFC PATCH 10/29] drm/xe: Enable indirect ring on media GT
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (8 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 09/29] FIXME: drm/xe: Add pad to ring and indirect state Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 11/29] drm/xe: Don't add pinned mappings to VM bulk move Matthew Brost
` (26 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
The media GT supports indirect ring state, which is required for UMD
submission, so enable it by default.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_pci.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
index 9b81e7d00a86..a27450e63cf9 100644
--- a/drivers/gpu/drm/xe/xe_pci.c
+++ b/drivers/gpu/drm/xe/xe_pci.c
@@ -209,6 +209,7 @@ static const struct xe_media_desc media_xelpmp = {
static const struct xe_media_desc media_xe2 = {
.name = "Xe2_LPM / Xe2_HPM / Xe3_LPM",
+ .has_indirect_ring_state = 1,
.hw_engine_mask =
GENMASK(XE_HW_ENGINE_VCS7, XE_HW_ENGINE_VCS0) |
GENMASK(XE_HW_ENGINE_VECS3, XE_HW_ENGINE_VECS0) |
--
2.34.1
* [RFC PATCH 11/29] drm/xe: Don't add pinned mappings to VM bulk move
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (9 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 10/29] drm/xe: Enable indirect ring on media GT Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 12/29] drm/xe: Add exec queue post init extension processing Matthew Brost
` (25 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
We don't want kernel-pinned resources (ring, indirect state) in the VM's
bulk move, as these are unevictable.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 549866da5cd1..96dbc88b1f55 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1470,6 +1470,9 @@ __xe_bo_create_locked(struct xe_device *xe,
{
struct xe_bo *bo = NULL;
int err;
+ bool want_bulk = vm && !xe_vm_in_fault_mode(vm) &&
+ flags & XE_BO_FLAG_USER &&
+ !(flags & (XE_BO_FLAG_PINNED | XE_BO_FLAG_GGTT));
if (vm)
xe_vm_assert_held(vm);
@@ -1488,9 +1491,7 @@ __xe_bo_create_locked(struct xe_device *xe,
}
bo = ___xe_bo_create_locked(xe, bo, tile, vm ? xe_vm_resv(vm) : NULL,
- vm && !xe_vm_in_fault_mode(vm) &&
- flags & XE_BO_FLAG_USER ?
- &vm->lru_bulk_move : NULL, size,
+ want_bulk ? &vm->lru_bulk_move : NULL, size,
cpu_caching, type, flags);
if (IS_ERR(bo))
return bo;
@@ -1781,9 +1782,6 @@ int xe_bo_pin(struct xe_bo *bo)
struct xe_device *xe = xe_bo_device(bo);
int err;
- /* We currently don't expect user BO to be pinned */
- xe_assert(xe, !xe_bo_is_user(bo));
-
/* Pinned object must be in GGTT or have pinned flag */
xe_assert(xe, bo->flags & (XE_BO_FLAG_PINNED |
XE_BO_FLAG_GGTT));
--
2.34.1
* [RFC PATCH 12/29] drm/xe: Add exec queue post init extension processing
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (10 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 11/29] drm/xe: Don't add pinned mappings to VM bulk move Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 13/29] drm/xe/mmap: Add mmap support for PCI memory barrier Matthew Brost
` (24 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Add exec queue post-init extension processing, which is needed for more
complex extensions that return data to the user.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec_queue.c | 48 ++++++++++++++++++++++++++++++
1 file changed, 48 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index aab9e561153d..f402988b4fc0 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -33,6 +33,8 @@ enum xe_exec_queue_sched_prop {
static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue *q,
u64 extensions, int ext_number);
+static int exec_queue_user_extensions_post_init(struct xe_device *xe, struct xe_exec_queue *q,
+ u64 extensions, int ext_number);
static void __xe_exec_queue_free(struct xe_exec_queue *q)
{
@@ -446,6 +448,10 @@ static const xe_exec_queue_user_extension_fn exec_queue_user_extension_funcs[] =
[DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY] = exec_queue_user_ext_set_property,
};
+static const xe_exec_queue_user_extension_fn exec_queue_user_extension_post_init_funcs[] = {
+ [DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY] = NULL,
+};
+
#define MAX_USER_EXTENSIONS 16
static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue *q,
u64 extensions, int ext_number)
@@ -480,6 +486,42 @@ static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue
return 0;
}
+static int exec_queue_user_extensions_post_init(struct xe_device *xe, struct xe_exec_queue *q,
+ u64 extensions, int ext_number)
+{
+ u64 __user *address = u64_to_user_ptr(extensions);
+ struct drm_xe_user_extension ext;
+ int err;
+ u32 idx;
+
+ if (XE_IOCTL_DBG(xe, ext_number >= MAX_USER_EXTENSIONS))
+ return -E2BIG;
+
+ err = __copy_from_user(&ext, address, sizeof(ext));
+ if (XE_IOCTL_DBG(xe, err))
+ return -EFAULT;
+
+ if (XE_IOCTL_DBG(xe, ext.pad) ||
+ XE_IOCTL_DBG(xe, ext.name >=
+ ARRAY_SIZE(exec_queue_user_extension_post_init_funcs)))
+ return -EINVAL;
+
+ idx = array_index_nospec(ext.name,
+ ARRAY_SIZE(exec_queue_user_extension_post_init_funcs));
+ if (exec_queue_user_extension_post_init_funcs[idx]) {
+ err = exec_queue_user_extension_post_init_funcs[idx](xe, q, extensions);
+ if (XE_IOCTL_DBG(xe, err))
+ return err;
+ }
+
+ if (ext.next_extension)
+ return exec_queue_user_extensions_post_init(xe, q,
+ ext.next_extension,
+ ++ext_number);
+
+ return 0;
+}
+
static u32 calc_validate_logical_mask(struct xe_device *xe, struct xe_gt *gt,
struct drm_xe_engine_class_instance *eci,
u16 width, u16 num_placements)
@@ -647,6 +689,12 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
q->xef = xe_file_get(xef);
+ if (args->extensions) {
+ err = exec_queue_user_extensions_post_init(xe, q, args->extensions, 0);
+ if (err)
+ goto kill_exec_queue;
+ }
+
/* user id alloc must always be last in ioctl to prevent UAF */
err = xa_alloc(&xef->exec_queue.xa, &id, q, xa_limit_32b, GFP_KERNEL);
if (err)
--
2.34.1
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [RFC PATCH 13/29] drm/xe/mmap: Add mmap support for PCI memory barrier
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (11 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 12/29] drm/xe: Add exec queue post init extension processing Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-19 10:00 ` Christian König
2024-11-18 23:37 ` [RFC PATCH 14/29] drm/xe: Add support for mmapping doorbells to user space Matthew Brost
` (23 subsequent siblings)
36 siblings, 1 reply; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
From: Tejas Upadhyay <tejas.upadhyay@intel.com>
In order to avoid having userspace use MI_MEM_FENCE, add a mechanism
for userspace to generate a PCI memory barrier with low overhead (both
an IOCTL call and a write to VRAM would add overhead).
This is implemented by memory-mapping a page as uncached that is backed
by MMIO on the dGPU, thus allowing userspace to write to the page
without invoking an IOCTL. The MMIO region is selected so that it is
not accessible from the PCI bus: the MMIO writes themselves are
ignored, but the PCI memory barrier still takes effect because the MMIO
filtering happens after the memory barrier.
When the specially defined offset is detected in mmap(), the 4K page
containing the last page of the doorbell MMIO range is mapped to
userspace for this purpose.
For the user to query this special offset, add a flag to the
mmap_offset ioctl, which is passed as follows:
struct drm_xe_gem_mmap_offset mmo = {
.handle = 0, /* this must be 0 */
.flags = DRM_XE_MMAP_OFFSET_FLAG_PCI_BARRIER,
};
igt_ioctl(fd, DRM_IOCTL_XE_GEM_MMAP_OFFSET, &mmo);
map = mmap(NULL, size, PROT_WRITE, MAP_SHARED, fd, mmo.offset);
Note: Test coverage for this is added by the IGT series at
https://patchwork.freedesktop.org/series/140368/. The UMD PR
implementing a test will be attached to this patch once it is ready.
V6(MAuld)
- Move physical mmap to fault handler
- Modify kernel-doc and attach UMD PR when ready
V5(MAuld)
- Return invalid early in case of non 4K PAGE_SIZE
- Format kernel-doc and add note for 4K PAGE_SIZE HW limit
V4(MAuld)
- Add kernel-doc for uapi change
- Restrict page size to 4K
V3(MAuld)
- Remove offset definition from UAPI to be able to change it later
- Edit commit message for special flag addition
V2(MAuld)
- Add fault handler with dummy page to handle unplug device
- Add Build check for special offset to be below normal start page
- Test d3hot, mapping seems to be valid in d3hot as well
- Add more info to commit message
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Michal Mrozek <michal.mrozek@intel.com>
Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 16 ++++-
drivers/gpu/drm/xe/xe_bo.h | 2 +
drivers/gpu/drm/xe/xe_device.c | 103 ++++++++++++++++++++++++++++++++-
include/uapi/drm/xe_drm.h | 29 +++++++++-
4 files changed, 147 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 96dbc88b1f55..f948262e607f 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -2138,9 +2138,23 @@ int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
return -EINVAL;
- if (XE_IOCTL_DBG(xe, args->flags))
+ if (XE_IOCTL_DBG(xe, args->flags &
+ ~DRM_XE_MMAP_OFFSET_FLAG_PCI_BARRIER))
return -EINVAL;
+ if (args->flags & DRM_XE_MMAP_OFFSET_FLAG_PCI_BARRIER) {
+ if (XE_IOCTL_DBG(xe, args->handle))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, PAGE_SIZE > SZ_4K))
+ return -EINVAL;
+
+ BUILD_BUG_ON(((XE_PCI_BARRIER_MMAP_OFFSET >> XE_PTE_SHIFT) +
+ SZ_4K) >= DRM_FILE_PAGE_OFFSET_START);
+ args->offset = XE_PCI_BARRIER_MMAP_OFFSET;
+ return 0;
+ }
+
gem_obj = drm_gem_object_lookup(file, args->handle);
if (XE_IOCTL_DBG(xe, !gem_obj))
return -ENOENT;
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index 7fa44a0138b0..e7724965d3f1 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -63,6 +63,8 @@
#define XE_BO_PROPS_INVALID (-1)
+#define XE_PCI_BARRIER_MMAP_OFFSET (0x50 << XE_PTE_SHIFT)
+
struct sg_table;
struct xe_bo *xe_bo_alloc(void);
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 930bb2750e2e..f6069db795e7 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -231,12 +231,113 @@ static long xe_drm_compat_ioctl(struct file *file, unsigned int cmd, unsigned lo
#define xe_drm_compat_ioctl NULL
#endif
+static void barrier_open(struct vm_area_struct *vma)
+{
+ drm_dev_get(vma->vm_private_data);
+}
+
+static void barrier_close(struct vm_area_struct *vma)
+{
+ drm_dev_put(vma->vm_private_data);
+}
+
+static void barrier_release_dummy_page(struct drm_device *dev, void *res)
+{
+ struct page *dummy_page = (struct page *)res;
+
+ __free_page(dummy_page);
+}
+
+static vm_fault_t barrier_fault(struct vm_fault *vmf)
+{
+ struct drm_device *dev = vmf->vma->vm_private_data;
+ struct vm_area_struct *vma = vmf->vma;
+ vm_fault_t ret = VM_FAULT_NOPAGE;
+ pgprot_t prot;
+ int idx;
+
+ prot = vm_get_page_prot(vma->vm_flags);
+
+ if (drm_dev_enter(dev, &idx)) {
+ unsigned long pfn;
+
+#define LAST_DB_PAGE_OFFSET 0x7ff001
+ pfn = PHYS_PFN(pci_resource_start(to_pci_dev(dev->dev), 0) +
+ LAST_DB_PAGE_OFFSET);
+ ret = vmf_insert_pfn_prot(vma, vma->vm_start, pfn,
+ pgprot_noncached(prot));
+ drm_dev_exit(idx);
+ } else {
+ struct page *page;
+
+ /* Allocate new dummy page to map all the VA range in this VMA to it*/
+ page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ if (!page)
+ return VM_FAULT_OOM;
+
+ /* Set the page to be freed using drmm release action */
+ if (drmm_add_action_or_reset(dev, barrier_release_dummy_page, page))
+ return VM_FAULT_OOM;
+
+ ret = vmf_insert_pfn_prot(vma, vma->vm_start, page_to_pfn(page),
+ prot);
+ }
+
+ return ret;
+}
+
+static const struct vm_operations_struct vm_ops_barrier = {
+ .open = barrier_open,
+ .close = barrier_close,
+ .fault = barrier_fault,
+};
+
+static int xe_pci_barrier_mmap(struct file *filp,
+ struct vm_area_struct *vma)
+{
+ struct drm_file *priv = filp->private_data;
+ struct drm_device *dev = priv->minor->dev;
+
+ if (vma->vm_end - vma->vm_start > SZ_4K)
+ return -EINVAL;
+
+ if (is_cow_mapping(vma->vm_flags))
+ return -EINVAL;
+
+ if (vma->vm_flags & (VM_READ | VM_EXEC))
+ return -EINVAL;
+
+ vm_flags_clear(vma, VM_MAYREAD | VM_MAYEXEC);
+ vm_flags_set(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO);
+ vma->vm_ops = &vm_ops_barrier;
+ vma->vm_private_data = dev;
+ drm_dev_get(vma->vm_private_data);
+
+ return 0;
+}
+
+static int xe_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ struct drm_file *priv = filp->private_data;
+ struct drm_device *dev = priv->minor->dev;
+
+ if (drm_dev_is_unplugged(dev))
+ return -ENODEV;
+
+ switch (vma->vm_pgoff) {
+ case XE_PCI_BARRIER_MMAP_OFFSET >> XE_PTE_SHIFT:
+ return xe_pci_barrier_mmap(filp, vma);
+ }
+
+ return drm_gem_mmap(filp, vma);
+}
+
static const struct file_operations xe_driver_fops = {
.owner = THIS_MODULE,
.open = drm_open,
.release = drm_release_noglobal,
.unlocked_ioctl = xe_drm_ioctl,
- .mmap = drm_gem_mmap,
+ .mmap = xe_mmap,
.poll = drm_poll,
.read = drm_read,
.compat_ioctl = xe_drm_compat_ioctl,
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 4a8a4a63e99c..6490b16b1217 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -811,6 +811,32 @@ struct drm_xe_gem_create {
/**
* struct drm_xe_gem_mmap_offset - Input of &DRM_IOCTL_XE_GEM_MMAP_OFFSET
+ *
+ * The @flags can be:
+ * - %DRM_XE_MMAP_OFFSET_FLAG_PCI_BARRIER - For user to query special offset
+ * for use in mmap ioctl. Writing to the returned mmap address will generate a
+ * PCI memory barrier with low overhead (avoiding IOCTL call as well as writing
+ * to VRAM which would also add overhead), acting like an MI_MEM_FENCE
+ * instruction.
+ *
+ * Note: The mmap size can be at most 4K, due to HW limitations. As a result
+ * this interface is only supported on CPU architectures that support 4K page
+ * size. The mmap_offset ioctl will detect this and gracefully return an
+ * error, where userspace is expected to have a different fallback method for
+ * triggering a barrier.
+ *
+ * Roughly the usage would be as follows:
+ *
+ * .. code-block:: C
+ *
+ * struct drm_xe_gem_mmap_offset mmo = {
+ * .handle = 0, // must be set to 0
+ * .flags = DRM_XE_MMAP_OFFSET_FLAG_PCI_BARRIER,
+ * };
+ *
+ * err = ioctl(fd, DRM_IOCTL_XE_GEM_MMAP_OFFSET, &mmo);
+ * map = mmap(NULL, size, PROT_WRITE, MAP_SHARED, fd, mmo.offset);
+ * map[i] = 0xdeadbeaf; // issue barrier
*/
struct drm_xe_gem_mmap_offset {
/** @extensions: Pointer to the first extension struct, if any */
@@ -819,7 +845,8 @@ struct drm_xe_gem_mmap_offset {
/** @handle: Handle for the object being mapped. */
__u32 handle;
- /** @flags: Must be zero */
+#define DRM_XE_MMAP_OFFSET_FLAG_PCI_BARRIER (1 << 0)
+ /** @flags: Flags */
__u32 flags;
/** @offset: The fake offset to use for subsequent mmap call */
--
2.34.1
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [RFC PATCH 14/29] drm/xe: Add support for mmapping doorbells to user space
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (12 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 13/29] drm/xe/mmap: Add mmap support for PCI memory barrier Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 15/29] drm/xe: Add support for mmapping submission ring and indirect ring state " Matthew Brost
` (22 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Doorbells need to be mapped to user space for UMD direct submission;
add support for this.
FIXME: Wildly insecure, as anyone can pick an MMIO doorbell offset; a
unique offset will need to be randomized and tied to the FD. This can
be done in later revs before upstreaming.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_bo.h | 3 ++
drivers/gpu/drm/xe/xe_device.c | 73 ++++++++++++++++++++++++++++++++++
2 files changed, 76 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index e7724965d3f1..2772d42ac057 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -64,6 +64,9 @@
#define XE_BO_PROPS_INVALID (-1)
#define XE_PCI_BARRIER_MMAP_OFFSET (0x50 << XE_PTE_SHIFT)
+#define XE_MMIO_DOORBELL_MMAP_OFFSET (0x100 << XE_PTE_SHIFT)
+#define XE_MMIO_DOORBELL_PFN_START (SZ_4M >> XE_PTE_SHIFT)
+#define XE_MMIO_DOORBELL_PFN_COUNT (256)
struct sg_table;
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index f6069db795e7..bbdff4308b2e 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -316,6 +316,75 @@ static int xe_pci_barrier_mmap(struct file *filp,
return 0;
}
+static vm_fault_t doorbell_fault(struct vm_fault *vmf)
+{
+ struct drm_device *dev = vmf->vma->vm_private_data;
+ struct vm_area_struct *vma = vmf->vma;
+ vm_fault_t ret = VM_FAULT_NOPAGE;
+ pgprot_t prot;
+ int idx;
+
+ prot = vm_get_page_prot(vma->vm_flags);
+
+ if (drm_dev_enter(dev, &idx)) {
+ unsigned long pfn;
+
+ pfn = PHYS_PFN(pci_resource_start(to_pci_dev(dev->dev), 0) +
+ (XE_MMIO_DOORBELL_PFN_START << XE_PTE_SHIFT));
+ pfn += vma->vm_pgoff & (XE_MMIO_DOORBELL_PFN_COUNT - 1);
+
+ ret = vmf_insert_pfn_prot(vma, vma->vm_start, pfn,
+ pgprot_noncached(prot));
+ drm_dev_exit(idx);
+ } else {
+ struct page *page;
+
+ /* Allocate new dummy page to map all the VA range in this VMA to it*/
+ page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ if (!page)
+ return VM_FAULT_OOM;
+
+ /* Set the page to be freed using drmm release action */
+ if (drmm_add_action_or_reset(dev, barrier_release_dummy_page, page))
+ return VM_FAULT_OOM;
+
+ ret = vmf_insert_pfn_prot(vma, vma->vm_start, page_to_pfn(page),
+ prot);
+ }
+
+ return ret;
+}
+
+static const struct vm_operations_struct vm_ops_doorbell = {
+ .open = barrier_open,
+ .close = barrier_close,
+ .fault = doorbell_fault,
+};
+
+static int xe_mmio_doorbell_mmap(struct file *filp,
+ struct vm_area_struct *vma)
+{
+ struct drm_file *priv = filp->private_data;
+ struct drm_device *dev = priv->minor->dev;
+
+ if (vma->vm_end - vma->vm_start > SZ_4K)
+ return -EINVAL;
+
+ if (is_cow_mapping(vma->vm_flags))
+ return -EINVAL;
+
+ if (vma->vm_flags & VM_EXEC)
+ return -EINVAL;
+
+ vm_flags_clear(vma, VM_MAYEXEC);
+ vm_flags_set(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO);
+ vma->vm_ops = &vm_ops_doorbell;
+ vma->vm_private_data = dev;
+ drm_dev_get(vma->vm_private_data);
+
+ return 0;
+}
+
static int xe_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct drm_file *priv = filp->private_data;
@@ -327,6 +396,10 @@ static int xe_mmap(struct file *filp, struct vm_area_struct *vma)
switch (vma->vm_pgoff) {
case XE_PCI_BARRIER_MMAP_OFFSET >> XE_PTE_SHIFT:
return xe_pci_barrier_mmap(filp, vma);
+ case (XE_MMIO_DOORBELL_MMAP_OFFSET >> XE_PTE_SHIFT) ...
+ ((XE_MMIO_DOORBELL_MMAP_OFFSET >> XE_PTE_SHIFT) +
+ XE_MMIO_DOORBELL_PFN_COUNT - 1):
+ return xe_mmio_doorbell_mmap(filp, vma);
}
return drm_gem_mmap(filp, vma);
--
2.34.1
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [RFC PATCH 15/29] drm/xe: Add support for mmapping submission ring and indirect ring state to user space
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (13 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 14/29] drm/xe: Add support for mmapping doorbells to user space Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 16/29] drm/xe/uapi: Define UMD exec queue mapping uAPI Matthew Brost
` (21 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
The ring and indirect ring state need to be mapped to user space for
UMD direct submission; add support for this.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 3 ---
drivers/gpu/drm/xe/xe_exec_queue.c | 2 +-
drivers/gpu/drm/xe/xe_execlist.c | 2 +-
drivers/gpu/drm/xe/xe_lrc.c | 29 ++++++++++++++++++++++-------
drivers/gpu/drm/xe/xe_lrc.h | 4 ++--
5 files changed, 26 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index f948262e607f..a87871f1cb95 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1311,9 +1311,6 @@ struct xe_bo *___xe_bo_create_locked(struct xe_device *xe, struct xe_bo *bo,
size_t aligned_size;
int err;
- /* Only kernel objects should set GT */
- xe_assert(xe, !tile || type == ttm_bo_type_kernel);
-
if (XE_WARN_ON(!size)) {
xe_bo_free(bo);
return ERR_PTR(-EINVAL);
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index f402988b4fc0..aef5b130e7f8 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -119,7 +119,7 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
}
for (i = 0; i < q->width; ++i) {
- q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K);
+ q->lrc[i] = xe_lrc_create(q, q->hwe, q->vm, SZ_16K);
if (IS_ERR(q->lrc[i])) {
err = PTR_ERR(q->lrc[i]);
goto err_unlock;
diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
index a8c416a48812..93f76280d453 100644
--- a/drivers/gpu/drm/xe/xe_execlist.c
+++ b/drivers/gpu/drm/xe/xe_execlist.c
@@ -265,7 +265,7 @@ struct xe_execlist_port *xe_execlist_port_create(struct xe_device *xe,
port->hwe = hwe;
- port->lrc = xe_lrc_create(hwe, NULL, SZ_16K);
+ port->lrc = xe_lrc_create(NULL, hwe, NULL, SZ_16K);
if (IS_ERR(port->lrc)) {
err = PTR_ERR(port->lrc);
goto err;
diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
index 9633e5e700f6..8a79470b52ae 100644
--- a/drivers/gpu/drm/xe/xe_lrc.c
+++ b/drivers/gpu/drm/xe/xe_lrc.c
@@ -901,8 +901,9 @@ static void xe_lrc_finish(struct xe_lrc *lrc)
#define PVC_CTX_ASID (0x2e + 1)
#define PVC_CTX_ACC_CTR_THOLD (0x2a + 1)
-static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
- struct xe_vm *vm, u32 ring_size)
+static int xe_lrc_init(struct xe_lrc *lrc, struct xe_exec_queue *q,
+ struct xe_hw_engine *hwe, struct xe_vm *vm,
+ u32 ring_size)
{
struct xe_gt *gt = hwe->gt;
struct xe_tile *tile = gt_to_tile(gt);
@@ -911,6 +912,11 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
void *init_data = NULL;
u32 arb_enable;
u32 lrc_size;
+ bool user_queue = q && q->flags & EXEC_QUEUE_FLAG_UMD_SUBMISSION;
+ enum ttm_bo_type submit_type = user_queue ? ttm_bo_type_device :
+ ttm_bo_type_kernel;
+ unsigned int submit_flags = user_queue ?
+ XE_BO_FLAG_USER : 0;
int err;
kref_init(&lrc->refcount);
@@ -930,7 +936,8 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
return PTR_ERR(lrc->bo);
lrc->submission_ring = xe_bo_create_pin_map(xe, tile, vm, SZ_32K,
- ttm_bo_type_kernel,
+ submit_type,
+ submit_flags |
XE_BO_FLAG_VRAM_IF_DGFX(tile) |
XE_BO_FLAG_GGTT |
XE_BO_FLAG_GGTT_INVALIDATE);
@@ -944,7 +951,8 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
lrc->flags |= XE_LRC_FLAG_INDIRECT_RING_STATE;
lrc->indirect_state = xe_bo_create_pin_map(xe, tile, vm, SZ_8K,
- ttm_bo_type_kernel,
+ submit_type,
+ submit_flags |
XE_BO_FLAG_VRAM_IF_DGFX(tile) |
XE_BO_FLAG_GGTT |
XE_BO_FLAG_GGTT_INVALIDATE);
@@ -955,6 +963,12 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
}
}
+ /* Wait for clear */
+ if (user_queue)
+ dma_resv_wait_timeout(xe_vm_resv(vm),
+ DMA_RESV_USAGE_KERNEL,
+ false, MAX_SCHEDULE_TIMEOUT);
+
lrc->size = lrc_size;
lrc->tile = gt_to_tile(hwe->gt);
lrc->ring.size = ring_size;
@@ -1060,6 +1074,7 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
/**
* xe_lrc_create - Create a LRC
+ * @q: Execution queue
* @hwe: Hardware Engine
* @vm: The VM (address space)
* @ring_size: LRC ring size
@@ -1069,8 +1084,8 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
* Return pointer to created LRC upon success and an error pointer
* upon failure.
*/
-struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
- u32 ring_size)
+struct xe_lrc *xe_lrc_create(struct xe_exec_queue *q, struct xe_hw_engine *hwe,
+ struct xe_vm *vm, u32 ring_size)
{
struct xe_lrc *lrc;
int err;
@@ -1079,7 +1094,7 @@ struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
if (!lrc)
return ERR_PTR(-ENOMEM);
- err = xe_lrc_init(lrc, hwe, vm, ring_size);
+ err = xe_lrc_init(lrc, q, hwe, vm, ring_size);
if (err) {
kfree(lrc);
return ERR_PTR(err);
diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
index b459dcab8787..23d71283c79d 100644
--- a/drivers/gpu/drm/xe/xe_lrc.h
+++ b/drivers/gpu/drm/xe/xe_lrc.h
@@ -41,8 +41,8 @@ struct xe_lrc_snapshot {
#define LRC_PPHWSP_SCRATCH_ADDR (0x34 * 4)
-struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
- u32 ring_size);
+struct xe_lrc *xe_lrc_create(struct xe_exec_queue *q, struct xe_hw_engine *hwe,
+ struct xe_vm *vm, u32 ring_size);
void xe_lrc_destroy(struct kref *ref);
/**
--
2.34.1
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [RFC PATCH 16/29] drm/xe/uapi: Define UMD exec queue mapping uAPI
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (14 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 15/29] drm/xe: Add support for mmapping submission ring and indirect ring state " Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 17/29] drm/xe: Add usermap exec queue extension Matthew Brost
` (20 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Define the UMD exec queue mapping uAPI. The submit ring, indirect LRC
state (ring head, tail, etc.), and doorbell are securely mapped to user
space. The ring is mapped at a VM PPGTT address, while the indirect LRC
state and doorbell mappings are provided via fake offsets, as with BOs.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
include/uapi/drm/xe_drm.h | 56 +++++++++++++++++++++++++++++++++++++++
1 file changed, 56 insertions(+)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 6490b16b1217..9356a714a2e0 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -1111,6 +1111,61 @@ struct drm_xe_vm_bind {
__u64 reserved[2];
};
+/**
+ * struct drm_xe_exec_queue_ext_usermap
+ */
+struct drm_xe_exec_queue_ext_usermap {
+ /** @base: base user extension */
+ struct drm_xe_user_extension base;
+
+ /** @flags: MBZ */
+ __u32 flags;
+
+ /** @version: Version of usermap */
+#define DRM_XE_EXEC_QUEUE_USERMAP_VERSION_XE2_REV0 0
+ __u32 version;
+
+ /**
+ * @ring_size: The ring size. 4k-2M valid, must be 4k aligned. User
+ * space has to pad allocation / mapping to avoid prefetch faults.
+ * Prefetch size is platform dependent.
+ */
+ __u32 ring_size;
+
+ /** @pad: MBZ */
+ __u32 pad;
+
+ /**
+ * @ring_addr: Ring address mapped within the VM, should be mapped as
+ * UC.
+ */
+ __u64 ring_addr;
+
+ /**
+ * @indirect_ring_state_offset: The fake indirect ring state offset to
+ * use for subsequent mmap call. Always 4k in size.
+ */
+ __u64 indirect_ring_state_offset;
+
+ /**
+ * @doorbell_offset: The fake doorbell offset to use for subsequent mmap
+ * call. Always 4k in size.
+ */
+ __u64 doorbell_offset;
+
+ /** @doorbell_page_offset: The doorbell offset within the mmapped page */
+ __u32 doorbell_page_offset;
+
+ /**
+ * @indirect_ring_state_handle: Indirect ring state buffer object
+ * handle. Allocated by KMD and must be closed by user.
+ */
+ __u32 indirect_ring_state_handle;
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+};
+
/**
* struct drm_xe_exec_queue_create - Input of &DRM_IOCTL_XE_EXEC_QUEUE_CREATE
*
@@ -1138,6 +1193,7 @@ struct drm_xe_exec_queue_create {
#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
+#define DRM_XE_EXEC_QUEUE_EXTENSION_USERMAP 1
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
--
2.34.1
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [RFC PATCH 17/29] drm/xe: Add usermap exec queue extension
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (15 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 16/29] drm/xe/uapi: Define UMD exec queue mapping uAPI Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 18/29] drm/xe: Drop EXEC_QUEUE_FLAG_UMD_SUBMISSION flag Matthew Brost
` (19 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Implement the uAPI which maps submit rings, indirect LRC state, and
doorbells to user space. This is required for UMD direct submission.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec_queue.c | 125 ++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_exec_queue_types.h | 13 +++
drivers/gpu/drm/xe/xe_execlist.c | 2 +-
drivers/gpu/drm/xe/xe_lrc.c | 59 +++++++----
drivers/gpu/drm/xe/xe_lrc.h | 2 +-
5 files changed, 176 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index aef5b130e7f8..c8d45133eb59 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -11,6 +11,7 @@
#include <drm/drm_file.h>
#include <uapi/drm/xe_drm.h>
+#include "xe_bo.h"
#include "xe_device.h"
#include "xe_gt.h"
#include "xe_hw_engine_class_sysfs.h"
@@ -38,12 +39,18 @@ static int exec_queue_user_extensions_post_init(struct xe_device *xe, struct xe_
static void __xe_exec_queue_free(struct xe_exec_queue *q)
{
+ struct xe_device *xe = q->vm ? q->vm->xe : NULL;
+
if (q->vm)
xe_vm_put(q->vm);
if (q->xef)
xe_file_put(q->xef);
+ if (q->usermap)
+ xe_pm_runtime_put(xe);
+
+ kfree(q->usermap);
kfree(q);
}
@@ -110,6 +117,8 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
static int __xe_exec_queue_init(struct xe_exec_queue *q)
{
struct xe_vm *vm = q->vm;
+ u64 ring_addr = q->usermap ? q->usermap->ring_addr : 0;
+ u32 ring_size = q->usermap ? q->usermap->ring_size : SZ_16K;
int i, err;
if (vm) {
@@ -119,7 +128,8 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
}
for (i = 0; i < q->width; ++i) {
- q->lrc[i] = xe_lrc_create(q, q->hwe, q->vm, SZ_16K);
+ q->lrc[i] = xe_lrc_create(q, q->hwe, q->vm, ring_size,
+ ring_addr);
if (IS_ERR(q->lrc[i])) {
err = PTR_ERR(q->lrc[i]);
goto err_unlock;
@@ -444,12 +454,125 @@ typedef int (*xe_exec_queue_user_extension_fn)(struct xe_device *xe,
struct xe_exec_queue *q,
u64 extension);
+static int exec_queue_user_ext_usermap(struct xe_device *xe,
+ struct xe_exec_queue *q,
+ u64 extension)
+{
+ u64 __user *address = u64_to_user_ptr(extension);
+ struct drm_xe_exec_queue_ext_usermap ext;
+ int err;
+
+ /* Just parse args and make sure they are sane */
+
+ if (XE_IOCTL_DBG(xe, !xe_gt_has_indirect_ring_state(q->gt)))
+ return -EOPNOTSUPP;
+
+ if (XE_IOCTL_DBG(xe, q->width != 1))
+ return -EOPNOTSUPP;
+
+ if (XE_IOCTL_DBG(xe, q->flags & (EXEC_QUEUE_FLAG_KERNEL |
+ EXEC_QUEUE_FLAG_PERMANENT |
+ EXEC_QUEUE_FLAG_VM |
+ EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD)))
+ return -EOPNOTSUPP;
+
+ if (XE_IOCTL_DBG(xe, q->width != 1))
+ return -EOPNOTSUPP;
+
+ /*
+ * XXX: More or less free to support this but targeting Mesa for now as
+ * LR mode has ULLS.
+ */
+ if (XE_IOCTL_DBG(xe, xe_vm_in_lr_mode(q->vm)))
+ return -EOPNOTSUPP;
+
+ if (XE_IOCTL_DBG(xe, q->flags & EXEC_QUEUE_FLAG_UMD_SUBMISSION))
+ return -EINVAL;
+
+ err = __copy_from_user(&ext, address, sizeof(ext));
+ if (XE_IOCTL_DBG(xe, err))
+ return -EFAULT;
+
+ if (XE_IOCTL_DBG(xe, ext.reserved[0] || ext.reserved[1]))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, ext.pad))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, ext.flags))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, ext.ring_size < SZ_4K ||
+ ext.ring_size > SZ_2M ||
+ ext.ring_size & ~PAGE_MASK))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, ext.version !=
+ DRM_XE_EXEC_QUEUE_USERMAP_VERSION_XE2_REV0))
+ return -EINVAL;
+
+ q->usermap = kzalloc(sizeof(struct xe_exec_queue_usermap), GFP_KERNEL);
+ if (!q->usermap)
+ return -ENOMEM;
+
+ q->usermap->ring_size = ext.ring_size;
+ q->usermap->ring_addr = ext.ring_addr;
+
+ xe_pm_runtime_get_noresume(xe);
+ q->flags |= EXEC_QUEUE_FLAG_UMD_SUBMISSION;
+
+ return 0;
+}
+
+static int exec_queue_user_ext_post_init_usermap(struct xe_device *xe,
+ struct xe_exec_queue *q,
+ u64 extension)
+{
+ struct drm_xe_exec_queue_ext_usermap ext;
+ struct xe_lrc *lrc = q->lrc[0];
+ u64 __user *address = u64_to_user_ptr(extension);
+ u32 indirect_ring_state_handle;
+ int err;
+
+ err = __copy_from_user(&ext, address, sizeof(ext));
+ if (XE_IOCTL_DBG(xe, err))
+ return -EFAULT;
+
+ err = drm_gem_handle_create(q->xef->drm,
+ &lrc->indirect_state->ttm.base,
+ &indirect_ring_state_handle);
+ if (err)
+ return err;
+
+ ext.indirect_ring_state_offset =
+ drm_vma_node_offset_addr(&lrc->indirect_state->ttm.base.vma_node);
+ ext.indirect_ring_state_handle = indirect_ring_state_handle;
+ ext.doorbell_offset = XE_MMIO_DOORBELL_MMAP_OFFSET +
+ SZ_4K * q->guc->db.id;
+ ext.doorbell_page_offset = 0;
+
+ err = copy_to_user(address, &ext, sizeof(ext));
+ if (XE_IOCTL_DBG(xe, err)) {
+ err = -EFAULT;
+ goto close_handles;
+ }
+
+ return 0;
+
+close_handles:
+ drm_gem_handle_delete(q->xef->drm, indirect_ring_state_handle);
+
+ return err;
+}
+
static const xe_exec_queue_user_extension_fn exec_queue_user_extension_funcs[] = {
[DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY] = exec_queue_user_ext_set_property,
+ [DRM_XE_EXEC_QUEUE_EXTENSION_USERMAP] = exec_queue_user_ext_usermap,
};
static const xe_exec_queue_user_extension_fn exec_queue_user_extension_post_init_funcs[] = {
[DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY] = NULL,
+ [DRM_XE_EXEC_QUEUE_EXTENSION_USERMAP] = exec_queue_user_ext_post_init_usermap,
};
#define MAX_USER_EXTENSIONS 16
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index 7f68587d4021..b30b5ee910fa 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -31,6 +31,16 @@ enum xe_exec_queue_priority {
XE_EXEC_QUEUE_PRIORITY_COUNT
};
+/**
+ * struct xe_exec_queue_usermap - Execution queue usermap (UMD submission)
+ */
+struct xe_exec_queue_usermap {
+ /** @ring_addr: ring address (PPGTT) */
+ u64 ring_addr;
+ /** @ring_size: ring size */
+ u32 ring_size;
+};
+
/**
* struct xe_exec_queue - Execution queue
*
@@ -130,6 +140,9 @@ struct xe_exec_queue {
struct list_head link;
} lr;
+ /** @usermap: user map interface */
+ struct xe_exec_queue_usermap *usermap;
+
/** @ops: submission backend exec queue operations */
const struct xe_exec_queue_ops *ops;
diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
index 93f76280d453..803c84b2e4ed 100644
--- a/drivers/gpu/drm/xe/xe_execlist.c
+++ b/drivers/gpu/drm/xe/xe_execlist.c
@@ -265,7 +265,7 @@ struct xe_execlist_port *xe_execlist_port_create(struct xe_device *xe,
port->hwe = hwe;
- port->lrc = xe_lrc_create(NULL, hwe, NULL, SZ_16K);
+ port->lrc = xe_lrc_create(NULL, hwe, NULL, SZ_16K, 0);
if (IS_ERR(port->lrc)) {
err = PTR_ERR(port->lrc);
goto err;
diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
index 8a79470b52ae..8d5a65724c04 100644
--- a/drivers/gpu/drm/xe/xe_lrc.c
+++ b/drivers/gpu/drm/xe/xe_lrc.c
@@ -903,7 +903,7 @@ static void xe_lrc_finish(struct xe_lrc *lrc)
static int xe_lrc_init(struct xe_lrc *lrc, struct xe_exec_queue *q,
struct xe_hw_engine *hwe, struct xe_vm *vm,
- u32 ring_size)
+ u32 ring_size, u64 ring_addr)
{
struct xe_gt *gt = hwe->gt;
struct xe_tile *tile = gt_to_tile(gt);
@@ -919,6 +919,8 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_exec_queue *q,
XE_BO_FLAG_USER : 0;
int err;
+ xe_assert(xe, (!user_queue && !ring_addr) || (user_queue && ring_addr));
+
kref_init(&lrc->refcount);
lrc->flags = 0;
lrc_size = xe_gt_lrc_size(gt, hwe->class);
@@ -935,16 +937,18 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_exec_queue *q,
if (IS_ERR(lrc->bo))
return PTR_ERR(lrc->bo);
- lrc->submission_ring = xe_bo_create_pin_map(xe, tile, vm, SZ_32K,
- submit_type,
- submit_flags |
- XE_BO_FLAG_VRAM_IF_DGFX(tile) |
- XE_BO_FLAG_GGTT |
- XE_BO_FLAG_GGTT_INVALIDATE);
- if (IS_ERR(lrc->submission_ring)) {
- err = PTR_ERR(lrc->submission_ring);
- lrc->submission_ring = NULL;
- goto err_lrc_finish;
+ if (!user_queue) {
+ lrc->submission_ring = xe_bo_create_pin_map(xe, tile, vm, SZ_32K,
+ submit_type,
+ submit_flags |
+ XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+ XE_BO_FLAG_GGTT |
+ XE_BO_FLAG_GGTT_INVALIDATE);
+ if (IS_ERR(lrc->submission_ring)) {
+ err = PTR_ERR(lrc->submission_ring);
+ lrc->submission_ring = NULL;
+ goto err_lrc_finish;
+ }
}
if (xe_gt_has_indirect_ring_state(gt)) {
@@ -1018,12 +1022,19 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_exec_queue *q,
}
if (xe_gt_has_indirect_ring_state(gt)) {
- xe_lrc_write_ctx_reg(lrc, CTX_INDIRECT_RING_STATE,
- __xe_lrc_indirect_ring_ggtt_addr(lrc));
-
- xe_lrc_write_indirect_ctx_reg(lrc, INDIRECT_CTX_RING_START,
- __xe_lrc_ring_ggtt_addr(lrc));
- xe_lrc_write_indirect_ctx_reg(lrc, INDIRECT_CTX_RING_START_UDW, 0);
+ if (ring_addr) { /* PPGTT */
+ xe_lrc_write_ctx_reg(lrc, CTX_INDIRECT_RING_STATE,
+ __xe_lrc_indirect_ring_ggtt_addr(lrc) | BIT(0));
+ xe_lrc_write_indirect_ctx_reg(lrc, INDIRECT_CTX_RING_START,
+ ring_addr);
+ } else {
+ xe_lrc_write_ctx_reg(lrc, CTX_INDIRECT_RING_STATE,
+ __xe_lrc_indirect_ring_ggtt_addr(lrc));
+ xe_lrc_write_indirect_ctx_reg(lrc, INDIRECT_CTX_RING_START,
+ __xe_lrc_ring_ggtt_addr(lrc));
+ }
+ xe_lrc_write_indirect_ctx_reg(lrc, INDIRECT_CTX_RING_START_UDW,
+ ring_addr >> 32);
xe_lrc_write_indirect_ctx_reg(lrc, INDIRECT_CTX_RING_HEAD, 0);
xe_lrc_write_indirect_ctx_reg(lrc, INDIRECT_CTX_RING_TAIL, lrc->ring.tail);
xe_lrc_write_indirect_ctx_reg(lrc, INDIRECT_CTX_RING_CTL,
@@ -1056,8 +1067,10 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_exec_queue *q,
lrc->desc |= FIELD_PREP(LRC_ENGINE_CLASS, hwe->class);
}
- arb_enable = MI_ARB_ON_OFF | MI_ARB_ENABLE;
- xe_lrc_write_ring(lrc, &arb_enable, sizeof(arb_enable));
+ if (lrc->submission_ring) {
+ arb_enable = MI_ARB_ON_OFF | MI_ARB_ENABLE;
+ xe_lrc_write_ring(lrc, &arb_enable, sizeof(arb_enable));
+ }
map = __xe_lrc_seqno_map(lrc);
xe_map_write32(lrc_to_xe(lrc), &map, lrc->fence_ctx.next_seqno - 1);
@@ -1078,6 +1091,7 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_exec_queue *q,
* @hwe: Hardware Engine
* @vm: The VM (address space)
* @ring_size: LRC ring size
+ * @ring_addr: LRC ring address, only valid for usermap queues
*
* Allocate and initialize the Logical Ring Context (LRC).
*
@@ -1085,7 +1099,7 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_exec_queue *q,
* upon failure.
*/
struct xe_lrc *xe_lrc_create(struct xe_exec_queue *q, struct xe_hw_engine *hwe,
- struct xe_vm *vm, u32 ring_size)
+ struct xe_vm *vm, u32 ring_size, u64 ring_addr)
{
struct xe_lrc *lrc;
int err;
@@ -1094,7 +1108,7 @@ struct xe_lrc *xe_lrc_create(struct xe_exec_queue *q, struct xe_hw_engine *hwe,
if (!lrc)
return ERR_PTR(-ENOMEM);
- err = xe_lrc_init(lrc, q, hwe, vm, ring_size);
+ err = xe_lrc_init(lrc, q, hwe, vm, ring_size, ring_addr);
if (err) {
kfree(lrc);
return ERR_PTR(err);
@@ -1717,7 +1731,8 @@ struct xe_lrc_snapshot *xe_lrc_snapshot_capture(struct xe_lrc *lrc)
xe_vm_get(lrc->bo->vm);
snapshot->context_desc = xe_lrc_ggtt_addr(lrc);
- snapshot->ring_addr = __xe_lrc_ring_ggtt_addr(lrc);
+ snapshot->ring_addr = lrc->submission_ring ?
+ __xe_lrc_ring_ggtt_addr(lrc) : 0;
snapshot->indirect_context_desc = xe_lrc_indirect_ring_ggtt_addr(lrc);
snapshot->head = xe_lrc_ring_head(lrc);
snapshot->tail.internal = lrc->ring.tail;
diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
index 23d71283c79d..a7facfa8bf51 100644
--- a/drivers/gpu/drm/xe/xe_lrc.h
+++ b/drivers/gpu/drm/xe/xe_lrc.h
@@ -42,7 +42,7 @@ struct xe_lrc_snapshot {
#define LRC_PPHWSP_SCRATCH_ADDR (0x34 * 4)
struct xe_lrc *xe_lrc_create(struct xe_exec_queue *q, struct xe_hw_engine *hwe,
- struct xe_vm *vm, u32 ring_size);
+ struct xe_vm *vm, u32 ring_size, u64 ring_addr);
void xe_lrc_destroy(struct kref *ref);
/**
--
2.34.1
^ permalink raw reply related [flat|nested] 52+ messages in thread
* [RFC PATCH 18/29] drm/xe: Drop EXEC_QUEUE_FLAG_UMD_SUBMISSION flag
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (16 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 17/29] drm/xe: Add usermap exec queue extension Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 19/29] drm/xe: Do not allow usermap exec queues in exec IOCTL Matthew Brost
` (18 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Use the xe_exec_queue_is_usermap() helper instead.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec_queue.c | 3 +--
drivers/gpu/drm/xe/xe_exec_queue.h | 5 +++++
drivers/gpu/drm/xe/xe_exec_queue_types.h | 2 --
drivers/gpu/drm/xe/xe_guc_submit.c | 4 ++--
drivers/gpu/drm/xe/xe_lrc.c | 4 ++--
5 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index c8d45133eb59..a22f089ccec6 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -486,7 +486,7 @@ static int exec_queue_user_ext_usermap(struct xe_device *xe,
if (XE_IOCTL_DBG(xe, xe_vm_in_lr_mode(q->vm)))
return -EOPNOTSUPP;
- if (XE_IOCTL_DBG(xe, q->flags & EXEC_QUEUE_FLAG_UMD_SUBMISSION))
+ if (XE_IOCTL_DBG(xe, xe_exec_queue_is_usermap(q)))
return -EINVAL;
err = __copy_from_user(&ext, address, sizeof(ext));
@@ -519,7 +519,6 @@ static int exec_queue_user_ext_usermap(struct xe_device *xe,
q->usermap->ring_addr = ext.ring_addr;
xe_pm_runtime_get_noresume(xe);
- q->flags |= EXEC_QUEUE_FLAG_UMD_SUBMISSION;
return 0;
}
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
index 90c7f73eab88..a4a1dbf5b977 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue.h
@@ -57,6 +57,11 @@ static inline bool xe_exec_queue_is_parallel(struct xe_exec_queue *q)
return q->width > 1;
}
+static inline bool xe_exec_queue_is_usermap(struct xe_exec_queue *q)
+{
+ return !!q->usermap;
+}
+
bool xe_exec_queue_is_lr(struct xe_exec_queue *q);
bool xe_exec_queue_ring_full(struct xe_exec_queue *q);
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index b30b5ee910fa..26ce85b8d163 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -93,8 +93,6 @@ struct xe_exec_queue {
#define EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD BIT(3)
/* kernel exec_queue only, set priority to highest level */
#define EXEC_QUEUE_FLAG_HIGH_PRIORITY BIT(4)
-/* queue used for UMD submission */
-#define EXEC_QUEUE_FLAG_UMD_SUBMISSION BIT(5)
/**
* @flags: flags for this exec queue, should statically setup aside from ban
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index c226c7b3245d..59d2e08797f5 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -1522,7 +1522,7 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
xe_sched_stop(sched);
q->guc->db.id = -1;
- if (q->flags & EXEC_QUEUE_FLAG_UMD_SUBMISSION) {
+ if (xe_exec_queue_is_usermap(q)) {
db_id = xe_guc_db_mgr_reserve_id_locked(&guc->dbm);
if (db_id < 0) {
err = db_id;
@@ -1532,7 +1532,7 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
mutex_unlock(&guc->submission_state.lock);
- if (q->flags & EXEC_QUEUE_FLAG_UMD_SUBMISSION) {
+ if (xe_exec_queue_is_usermap(q)) {
q->guc->db.id = db_id;
err = create_doorbell(guc, q);
if (err)
diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
index 8d5a65724c04..e8675624966d 100644
--- a/drivers/gpu/drm/xe/xe_lrc.c
+++ b/drivers/gpu/drm/xe/xe_lrc.c
@@ -18,7 +18,7 @@
#include "xe_bo.h"
#include "xe_device.h"
#include "xe_drm_client.h"
-#include "xe_exec_queue_types.h"
+#include "xe_exec_queue.h"
#include "xe_gt.h"
#include "xe_gt_printk.h"
#include "xe_hw_fence.h"
@@ -912,7 +912,7 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_exec_queue *q,
void *init_data = NULL;
u32 arb_enable;
u32 lrc_size;
- bool user_queue = q && q->flags & EXEC_QUEUE_FLAG_UMD_SUBMISSION;
+ bool user_queue = q && xe_exec_queue_is_usermap(q);
enum ttm_bo_type submit_type = user_queue ? ttm_bo_type_device :
ttm_bo_type_kernel;
unsigned int submit_flags = user_queue ?
--
2.34.1
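The design choice above — deriving "is usermap" from the presence of the allocated usermap state rather than a separate flag bit — can be illustrated in isolation. The names below are illustrative, not the driver's; this is a minimal sketch of the pattern, which keeps the predicate impossible to get out of sync with the allocation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative reduction of the change: instead of a flag bit that must
 * be set and cleared in lockstep with the usermap state, the property is
 * derived from whether the usermap state pointer was actually allocated. */
struct usermap_state {
	unsigned long ring_addr;
};

struct exec_queue {
	struct usermap_state *usermap;	/* NULL unless the usermap extension was used */
};

static bool exec_queue_is_usermap(const struct exec_queue *q)
{
	return q->usermap != NULL;
}
```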
* [RFC PATCH 19/29] drm/xe: Do not allow usermap exec queues in exec IOCTL
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (17 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 18/29] drm/xe: Drop EXEC_QUEUE_FLAG_UMD_SUBMISSION flag Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 20/29] drm/xe: Teach GuC backend to kill usermap queues Matthew Brost
` (17 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Not supported at the moment; something may be needed for the case where
no doorbells are available.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
index 31cca938956f..898e4718d639 100644
--- a/drivers/gpu/drm/xe/xe_exec.c
+++ b/drivers/gpu/drm/xe/xe_exec.c
@@ -132,7 +132,8 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
if (XE_IOCTL_DBG(xe, !q))
return -ENOENT;
- if (XE_IOCTL_DBG(xe, q->flags & EXEC_QUEUE_FLAG_VM)) {
+ if (XE_IOCTL_DBG(xe, q->flags & EXEC_QUEUE_FLAG_VM ||
+ xe_exec_queue_is_usermap(q))) {
err = -EINVAL;
goto err_exec_queue;
}
--
2.34.1
* [RFC PATCH 20/29] drm/xe: Teach GuC backend to kill usermap queues
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (18 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 19/29] drm/xe: Do not allow usermap exec queues in exec IOCTL Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 21/29] drm/xe: Enable preempt fences on " Matthew Brost
` (16 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Usermap exec queue teardown (kill) differs from that of other exec
queues: no job is available, a doorbell is mapped, and the kill should
be immediate.
A follow-up could unify LR queue cleanup with usermap, but this is kept
as a separate flow for now.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_guc_exec_queue_types.h | 2 +-
drivers/gpu/drm/xe/xe_guc_submit.c | 56 +++++++++++++++++++-
2 files changed, 55 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
index 2d53af75ed75..c6c58e414b19 100644
--- a/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_guc_exec_queue_types.h
@@ -29,7 +29,7 @@ struct xe_guc_exec_queue {
* a message needs to sent through the GPU scheduler but memory
* allocations are not allowed.
*/
-#define MAX_STATIC_MSG_TYPE 3
+#define MAX_STATIC_MSG_TYPE 4
struct xe_sched_msg static_msgs[MAX_STATIC_MSG_TYPE];
/** @lr_tdr: long running TDR worker */
struct work_struct lr_tdr;
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 59d2e08797f5..82071a0ec91e 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -230,6 +230,11 @@ static void set_exec_queue_doorbell_registered(struct xe_exec_queue *q)
atomic_or(EXEC_QUEUE_STATE_DB_REGISTERED, &q->guc->state);
}
+static void clear_exec_queue_doorbell_registered(struct xe_exec_queue *q)
+{
+ atomic_and(~EXEC_QUEUE_STATE_DB_REGISTERED, &q->guc->state);
+}
+
static bool exec_queue_killed_or_banned_or_wedged(struct xe_exec_queue *q)
{
return (atomic_read(&q->guc->state) &
@@ -798,6 +803,8 @@ static void disable_scheduling_deregister(struct xe_guc *guc,
G2H_LEN_DW_DEREGISTER_CONTEXT, 2);
}
+static void guc_exec_queue_kill_user(struct xe_exec_queue *q);
+
static void xe_guc_exec_queue_trigger_cleanup(struct xe_exec_queue *q)
{
struct xe_guc *guc = exec_queue_to_guc(q);
@@ -806,7 +813,9 @@ static void xe_guc_exec_queue_trigger_cleanup(struct xe_exec_queue *q)
/** to wakeup xe_wait_user_fence ioctl if exec queue is reset */
wake_up_all(&xe->ufence_wq);
- if (xe_exec_queue_is_lr(q))
+ if (xe_exec_queue_is_usermap(q))
+ guc_exec_queue_kill_user(q);
+ else if (xe_exec_queue_is_lr(q))
queue_work(guc_to_gt(guc)->ordered_wq, &q->guc->lr_tdr);
else
xe_sched_tdr_queue_imm(&q->guc->sched);
@@ -1294,8 +1303,10 @@ static void __guc_exec_queue_process_msg_cleanup(struct xe_sched_msg *msg)
xe_gt_assert(guc_to_gt(guc), !(q->flags & EXEC_QUEUE_FLAG_PERMANENT));
trace_xe_exec_queue_cleanup_entity(q);
- if (exec_queue_doorbell_registered(q))
+ if (exec_queue_doorbell_registered(q)) {
+ clear_exec_queue_doorbell_registered(q);
deallocate_doorbell(guc, q->guc->id);
+ }
if (exec_queue_registered(q))
disable_scheduling_deregister(guc, q);
@@ -1382,10 +1393,29 @@ static void __guc_exec_queue_process_msg_resume(struct xe_sched_msg *msg)
}
}
+static void __guc_exec_queue_process_msg_kill_user(struct xe_sched_msg *msg)
+{
+ struct xe_exec_queue *q = msg->private_data;
+ struct xe_guc *guc = exec_queue_to_guc(q);
+
+ if (!xe_lrc_ring_is_idle(q->lrc[0]))
+ xe_gt_dbg(q->gt, "Killing non-idle usermap queue: guc_id=%d",
+ q->guc->id);
+
+ if (exec_queue_doorbell_registered(q)) {
+ clear_exec_queue_doorbell_registered(q);
+ deallocate_doorbell(guc, q->guc->id);
+ }
+
+ if (exec_queue_registered(q))
+ disable_scheduling_deregister(guc, q);
+}
+
#define CLEANUP 1 /* Non-zero values to catch uninitialized msg */
#define SET_SCHED_PROPS 2
#define SUSPEND 3
#define RESUME 4
+#define KILL_USER 5
#define OPCODE_MASK 0xf
#define MSG_LOCKED BIT(8)
@@ -1408,6 +1438,9 @@ static void guc_exec_queue_process_msg(struct xe_sched_msg *msg)
case RESUME:
__guc_exec_queue_process_msg_resume(msg);
break;
+ case KILL_USER:
+ __guc_exec_queue_process_msg_kill_user(msg);
+ break;
default:
XE_WARN_ON("Unknown message type");
}
@@ -1600,6 +1633,7 @@ static bool guc_exec_queue_try_add_msg(struct xe_exec_queue *q,
#define STATIC_MSG_CLEANUP 0
#define STATIC_MSG_SUSPEND 1
#define STATIC_MSG_RESUME 2
+#define STATIC_MSG_KILL_USER 3
static void guc_exec_queue_fini(struct xe_exec_queue *q)
{
struct xe_sched_msg *msg = q->guc->static_msgs + STATIC_MSG_CLEANUP;
@@ -1725,6 +1759,24 @@ static void guc_exec_queue_resume(struct xe_exec_queue *q)
xe_sched_msg_unlock(sched);
}
+static void guc_exec_queue_kill_user(struct xe_exec_queue *q)
+{
+ struct xe_gpu_scheduler *sched = &q->guc->sched;
+ struct xe_sched_msg *msg = q->guc->static_msgs + STATIC_MSG_KILL_USER;
+
+ if (exec_queue_extra_ref(q))
+ return;
+
+ set_exec_queue_banned(q);
+
+ xe_sched_msg_lock(sched);
+ if (guc_exec_queue_try_add_msg(q, msg, KILL_USER)) {
+ set_exec_queue_extra_ref(q);
+ xe_exec_queue_get(q);
+ }
+ xe_sched_msg_unlock(sched);
+}
+
static bool guc_exec_queue_reset_status(struct xe_exec_queue *q)
{
return exec_queue_reset(q) || exec_queue_killed_or_banned_or_wedged(q);
--
2.34.1
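The kill path above relies on the scheduler's preallocated static messages (note the `MAX_STATIC_MSG_TYPE` bump and the new `STATIC_MSG_KILL_USER` slot): teardown can run in contexts where memory allocation is not allowed, so each queue embeds one message node per message type and enqueues it by index. A standalone sketch of that pattern, with illustrative names rather than the driver's:

```c
#include <assert.h>
#include <stddef.h>

/* Each queue embeds one preallocated message node per message type, so
 * enqueueing a teardown message can never fail with -ENOMEM. */
enum { MSG_CLEANUP, MSG_SUSPEND, MSG_RESUME, MSG_KILL_USER, MAX_STATIC_MSG };

struct msg {
	int opcode;
	struct msg *next;
	int queued;
};

struct queue {
	struct msg static_msgs[MAX_STATIC_MSG];
	struct msg *head;	/* pending-message list */
};

/* Enqueue the preallocated node for @slot; no allocation happens here.
 * Returns 1 if the message was queued, 0 if it was already pending. */
static int queue_add_static_msg(struct queue *q, int slot, int opcode)
{
	struct msg *m = &q->static_msgs[slot];

	if (m->queued)
		return 0;
	m->opcode = opcode;
	m->queued = 1;
	m->next = q->head;
	q->head = m;
	return 1;
}
```

In the patch, `guc_exec_queue_kill_user()` plays the role of the caller: it takes an extra queue reference only when the message is actually added, mirroring the return-value check here.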
* [RFC PATCH 21/29] drm/xe: Enable preempt fences on usermap queues
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (19 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 20/29] drm/xe: Teach GuC backend to kill usermap queues Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 22/29] drm/xe/uapi: Add uAPI to convert user semaphore to / from drm syncobj Matthew Brost
` (15 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Preempt fences are used by usermap queues to implement dynamic memory
management (BO eviction, userptr invalidation). Enable preempt fences on
usermap queues.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec_queue.c | 3 ++-
drivers/gpu/drm/xe/xe_pt.c | 3 +--
drivers/gpu/drm/xe/xe_vm.c | 18 ++++++++----------
drivers/gpu/drm/xe/xe_vm.h | 2 +-
4 files changed, 12 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index a22f089ccec6..987584090263 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -794,7 +794,8 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
if (IS_ERR(q))
return PTR_ERR(q);
- if (xe_vm_in_preempt_fence_mode(vm)) {
+ if (xe_vm_in_preempt_fence_mode(vm) ||
+ xe_exec_queue_is_usermap(q)) {
q->lr.context = dma_fence_context_alloc(1);
err = xe_vm_add_compute_exec_queue(vm, q);
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 684dc075deac..a75667346ab3 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -1882,8 +1882,7 @@ static void bind_op_commit(struct xe_vm *vm, struct xe_tile *tile,
* the rebind worker
*/
if (pt_update_ops->wait_vm_bookkeep &&
- xe_vm_in_preempt_fence_mode(vm) &&
- !current->mm)
+ vm->preempt.num_exec_queues && !current->mm)
xe_vm_queue_rebind_worker(vm);
}
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 2e67648ed512..16bc1b82d950 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -229,7 +229,8 @@ int xe_vm_add_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q)
int err;
bool wait;
- xe_assert(vm->xe, xe_vm_in_preempt_fence_mode(vm));
+ xe_assert(vm->xe, xe_vm_in_preempt_fence_mode(vm) ||
+ xe_exec_queue_is_usermap(q));
down_write(&vm->lock);
err = drm_gpuvm_exec_lock(&vm_exec);
@@ -280,7 +281,7 @@ int xe_vm_add_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q)
*/
void xe_vm_remove_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q)
{
- if (!xe_vm_in_preempt_fence_mode(vm))
+ if (!xe_vm_in_preempt_fence_mode(vm) && !xe_exec_queue_is_usermap(q))
return;
down_write(&vm->lock);
@@ -487,7 +488,7 @@ static void preempt_rebind_work_func(struct work_struct *w)
long wait;
int __maybe_unused tries = 0;
- xe_assert(vm->xe, xe_vm_in_preempt_fence_mode(vm));
+ xe_assert(vm->xe, !xe_vm_in_fault_mode(vm));
trace_xe_vm_rebind_worker_enter(vm);
down_write(&vm->lock);
@@ -1467,10 +1468,9 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
vm->batch_invalidate_tlb = true;
}
- if (vm->flags & XE_VM_FLAG_LR_MODE) {
- INIT_WORK(&vm->preempt.rebind_work, preempt_rebind_work_func);
+ INIT_WORK(&vm->preempt.rebind_work, preempt_rebind_work_func);
+ if (vm->flags & XE_VM_FLAG_LR_MODE)
vm->batch_invalidate_tlb = false;
- }
/* Fill pt_root after allocating scratch tables */
for_each_tile(tile, xe, id) {
@@ -1543,8 +1543,7 @@ void xe_vm_close_and_put(struct xe_vm *vm)
xe_assert(xe, !vm->preempt.num_exec_queues);
xe_vm_close(vm);
- if (xe_vm_in_preempt_fence_mode(vm))
- flush_work(&vm->preempt.rebind_work);
+ flush_work(&vm->preempt.rebind_work);
down_write(&vm->lock);
for_each_tile(tile, xe, id) {
@@ -1644,8 +1643,7 @@ static void vm_destroy_work_func(struct work_struct *w)
/* xe_vm_close_and_put was not called? */
xe_assert(xe, !vm->size);
- if (xe_vm_in_preempt_fence_mode(vm))
- flush_work(&vm->preempt.rebind_work);
+ flush_work(&vm->preempt.rebind_work);
mutex_destroy(&vm->snap_mutex);
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index c864dba35e1d..4391dbaeba51 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -216,7 +216,7 @@ int xe_vm_invalidate_vma(struct xe_vma *vma);
static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
{
- xe_assert(vm->xe, xe_vm_in_preempt_fence_mode(vm));
+ xe_assert(vm->xe, !xe_vm_in_fault_mode(vm));
queue_work(vm->xe->ordered_wq, &vm->preempt.rebind_work);
}
--
2.34.1
* [RFC PATCH 22/29] drm/xe/uapi: Add uAPI to convert user semaphore to / from drm syncobj
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (20 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 21/29] drm/xe: Enable preempt fences on " Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 23/29] drm/xe: Add user fence IRQ handler Matthew Brost
` (14 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Simple interface to allow user space to share user syncs with kernel
syncs (dma-fences). The idea is that when user syncs are converted to
kernel syncs, preemption is blocked until the kernel sync signals. This
is required to adhere to dma-fencing rules (no memory allocations in the
dma-fence signaling path; resuming after preemption requires memory
allocations).
FIXME: uAPI likely to change, perhaps in a DRM-generic way. Currently
enough for a PoC and to enable initial Mesa development.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
include/uapi/drm/xe_drm.h | 62 +++++++++++++++++++++++++++++++++++++++
1 file changed, 62 insertions(+)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 9356a714a2e0..0cd473d2d91b 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -102,6 +102,7 @@ extern "C" {
#define DRM_XE_EXEC 0x09
#define DRM_XE_WAIT_USER_FENCE 0x0a
#define DRM_XE_OBSERVATION 0x0b
+#define DRM_XE_VM_CONVERT_FENCE 0x0c
/* Must be kept compact -- no holes */
@@ -117,6 +118,7 @@ extern "C" {
#define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
#define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
#define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
+#define DRM_IOCTL_XE_VM_CONVERT_FENCE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_VM_CONVERT_FENCE, struct drm_xe_vm_convert_fence)
/**
* DOC: Xe IOCTL Extensions
@@ -1796,6 +1798,66 @@ struct drm_xe_oa_stream_info {
__u64 reserved[3];
};
+/**
+ * struct drm_xe_semaphore - Semaphore
+ */
+struct drm_xe_semaphore {
+ /**
+ * @handle: Handle for the semaphore. Must be bound to the VM when
+ * passed into drm_xe_vm_convert_fence.
+ */
+ __u32 handle;
+
+ /** @offset: Offset in BO for semaphore, must be QW aligned */
+ __u32 offset;
+
+ /** @seqno: Sequence number of semaphore */
+ __u64 seqno;
+
+ /** @token: Semaphore token - MBZ as not supported yet */
+ __u64 token;
+
+ /** @reserved: reserved for future use */
+ __u64 reserved[2];
+};
+
+/**
+ * struct drm_xe_vm_convert_fence - Convert semaphore to / from syncobj
+ *
+ * DRM_XE_SYNC_FLAG_SIGNAL set indicates semaphore -> syncobj
+ * DRM_XE_SYNC_FLAG_SIGNAL clear indicates syncobj -> semaphore
+ */
+struct drm_xe_vm_convert_fence {
+ /**
+ * @extensions: Pointer to the first extension struct, if any
+ */
+ __u64 extensions;
+
+ /** @vm_id: VM ID */
+ __u32 vm_id;
+
+ /** @flags: Flags - MBZ */
+ __u32 flags;
+
+ /** @pad: MBZ */
+ __u32 pad;
+
+ /**
+ * @num_syncs: Number of struct drm_xe_sync and struct drm_xe_semaphore
+ * in arrays.
+ */
+ __u32 num_syncs;
+
+ /** @syncs: Pointer to struct drm_xe_sync array. */
+ __u64 syncs;
+
+ /** @semaphores: Pointer to struct drm_xe_semaphore array. */
+ __u64 semaphores;
+
+ /** @reserved: reserved for future use */
+ __u64 reserved[2];
+};
+
#if defined(__cplusplus)
}
#endif
--
2.34.1
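The proposed uAPI above can be exercised from user space roughly as follows. This is a hedged sketch: the struct layouts are copied from the patch, but the ioctl itself is an RFC and explicitly subject to change, so only the argument preparation is shown (a real caller would pass the arrays to `DRM_IOCTL_XE_VM_CONVERT_FENCE` on the device fd):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t __u32;
typedef uint64_t __u64;

/* Struct layouts copied from the proposed uAPI above; field semantics
 * may change per the FIXME in the commit message. */
struct drm_xe_semaphore {
	__u32 handle;		/* BO handle, must be bound to the VM */
	__u32 offset;		/* offset in BO, must be QW aligned */
	__u64 seqno;
	__u64 token;		/* MBZ, not supported yet */
	__u64 reserved[2];
};

struct drm_xe_vm_convert_fence {
	__u64 extensions;
	__u32 vm_id;
	__u32 flags;		/* MBZ */
	__u32 pad;		/* MBZ */
	__u32 num_syncs;
	__u64 syncs;		/* pointer to struct drm_xe_sync array */
	__u64 semaphores;	/* pointer to struct drm_xe_semaphore array */
	__u64 reserved[2];
};

/* The semaphore offset must be quadword (8-byte) aligned. */
static int semaphore_offset_valid(__u32 offset)
{
	return (offset & 7) == 0;
}

/* Fill one semaphore entry; reserved/token fields are zeroed (MBZ). */
static void init_semaphore(struct drm_xe_semaphore *sem, __u32 handle,
			   __u32 offset, __u64 seqno)
{
	memset(sem, 0, sizeof(*sem));
	sem->handle = handle;
	sem->offset = offset;
	sem->seqno = seqno;
}
```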
* [RFC PATCH 23/29] drm/xe: Add user fence IRQ handler
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (21 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 22/29] drm/xe/uapi: Add uAPI to convert user semaphore to / from drm syncobj Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 24/29] drm/xe: Add xe_hw_fence_user_init Matthew Brost
` (13 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Imported user fences will not be tied to a specific queue or hardware
engine class. Therefore, a device IRQ handler is needed to signal the
associated exported DMA fences.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 4 ++++
drivers/gpu/drm/xe/xe_device_types.h | 3 +++
drivers/gpu/drm/xe/xe_hw_engine.c | 4 +++-
3 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index bbdff4308b2e..573b5f3df0c8 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -39,6 +39,7 @@
#include "xe_gt_sriov_vf.h"
#include "xe_guc.h"
#include "xe_hw_engine_group.h"
+#include "xe_hw_fence.h"
#include "xe_hwmon.h"
#include "xe_irq.h"
#include "xe_memirq.h"
@@ -902,6 +903,7 @@ int xe_device_probe(struct xe_device *xe)
if (err)
goto err;
+ xe_hw_fence_irq_init(&xe->user_fence_irq);
for_each_gt(gt, xe, id) {
last_gt = id;
@@ -944,6 +946,7 @@ int xe_device_probe(struct xe_device *xe)
xe_oa_fini(xe);
err_fini_gt:
+ xe_hw_fence_irq_finish(&xe->user_fence_irq);
for_each_gt(gt, xe, id) {
if (id < last_gt)
xe_gt_remove(gt);
@@ -979,6 +982,7 @@ void xe_device_remove(struct xe_device *xe)
xe_heci_gsc_fini(xe);
+ xe_hw_fence_irq_finish(&xe->user_fence_irq);
for_each_gt(gt, xe, id)
xe_gt_remove(gt);
}
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index 8592f1b02db1..3ac118c6f85e 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -507,6 +507,9 @@ struct xe_device {
int mode;
} wedged;
+ /** @user_fence_irq: User fence IRQ handler */
+ struct xe_hw_fence_irq user_fence_irq;
+
#ifdef TEST_VM_OPS_ERROR
/**
* @vm_inject_error_position: inject errors at different places in VM
diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
index c4b0dc3be39c..2c9aa5343971 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine.c
+++ b/drivers/gpu/drm/xe/xe_hw_engine.c
@@ -822,8 +822,10 @@ void xe_hw_engine_handle_irq(struct xe_hw_engine *hwe, u16 intr_vec)
if (hwe->irq_handler)
hwe->irq_handler(hwe, intr_vec);
- if (intr_vec & GT_RENDER_USER_INTERRUPT)
+ if (intr_vec & GT_RENDER_USER_INTERRUPT) {
+ xe_hw_fence_irq_run(>_to_xe(hwe->gt)->user_fence_irq);
xe_hw_fence_irq_run(hwe->fence_irq);
+ }
}
/**
--
2.34.1
* [RFC PATCH 24/29] drm/xe: Add xe_hw_fence_user_init
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (22 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 23/29] drm/xe: Add user fence IRQ handler Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 25/29] drm/xe: Add a message lock to the Xe GPU scheduler Matthew Brost
` (12 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Add xe_hw_fence_user_init, which can create a struct xe_hw_fence from
user input rather than from internal LRC state. Used to import user
fences and export them as dma-fences.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_hw_fence.c | 17 +++++++++++++++++
drivers/gpu/drm/xe/xe_hw_fence.h | 3 +++
2 files changed, 20 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_hw_fence.c b/drivers/gpu/drm/xe/xe_hw_fence.c
index 0b4f12be3692..2ea4d8bca6eb 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence.c
+++ b/drivers/gpu/drm/xe/xe_hw_fence.c
@@ -263,3 +263,20 @@ void xe_hw_fence_init(struct dma_fence *fence, struct xe_hw_fence_ctx *ctx,
trace_xe_hw_fence_create(hw_fence);
}
+
+void xe_hw_fence_user_init(struct dma_fence *fence, struct xe_device *xe,
+ struct iosys_map seqno_map, u64 seqno)
+{
+ struct xe_hw_fence *hw_fence =
+ container_of(fence, typeof(*hw_fence), dma);
+
+ hw_fence->xe = xe;
+ snprintf(hw_fence->name, sizeof(hw_fence->name), "user");
+ hw_fence->seqno_map = seqno_map;
+
+ INIT_LIST_HEAD(&hw_fence->irq_link);
+ dma_fence_init(fence, &xe_hw_fence_ops, &xe->user_fence_irq.lock,
+ dma_fence_context_alloc(1), seqno);
+
+ trace_xe_hw_fence_create(hw_fence);
+}
diff --git a/drivers/gpu/drm/xe/xe_hw_fence.h b/drivers/gpu/drm/xe/xe_hw_fence.h
index f13a1c4982c7..76571ef2ef36 100644
--- a/drivers/gpu/drm/xe/xe_hw_fence.h
+++ b/drivers/gpu/drm/xe/xe_hw_fence.h
@@ -30,4 +30,7 @@ void xe_hw_fence_free(struct dma_fence *fence);
void xe_hw_fence_init(struct dma_fence *fence, struct xe_hw_fence_ctx *ctx,
struct iosys_map seqno_map);
+void xe_hw_fence_user_init(struct dma_fence *fence, struct xe_device *xe,
+ struct iosys_map seqno_map, u64 seqno);
+
#endif
--
2.34.1
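The signaling rule for a fence built this way can be sketched in isolation. This is a simplified model (an assumption about the semantics, not driver code): the fence wraps a seqno location in memory and is considered signaled once the value written there reaches the fence's sequence number:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of a user fence: a seqno location written by the
 * GPU/UMD plus a target value; signaled once the write catches up. */
struct user_fence {
	const uint64_t *seqno_map;	/* memory written by the GPU/UMD */
	uint64_t seqno;			/* value to wait for */
};

static int user_fence_signaled(const struct user_fence *f)
{
	return *f->seqno_map >= f->seqno;
}
```

In the patch, the IRQ handler added earlier in the series re-checks fences like this on each user interrupt and signals the exported dma-fence when the comparison becomes true.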
* [RFC PATCH 25/29] drm/xe: Add a message lock to the Xe GPU scheduler
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (23 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 24/29] drm/xe: Add xe_hw_fence_user_init Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 26/29] drm/xe: Always wait on preempt fences in vma_check_userptr Matthew Brost
` (11 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Stop abusing the job list lock for messages; use a dedicated lock. This
lock will soon need to be taken in IRQ context, so use irqsave for
simplicity. This can be tweaked in a follow-up as needed.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 ++++++++++++-------
drivers/gpu/drm/xe/xe_gpu_scheduler.h | 12 ++++--------
drivers/gpu/drm/xe/xe_gpu_scheduler_types.h | 2 ++
drivers/gpu/drm/xe/xe_guc_submit.c | 15 +++++++++------
4 files changed, 27 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
index 50361b4638f9..55ccfb587523 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
@@ -14,25 +14,27 @@ static void xe_sched_process_msg_queue(struct xe_gpu_scheduler *sched)
static void xe_sched_process_msg_queue_if_ready(struct xe_gpu_scheduler *sched)
{
struct xe_sched_msg *msg;
+ unsigned long flags;
- xe_sched_msg_lock(sched);
+ xe_sched_msg_lock(sched, flags);
msg = list_first_entry_or_null(&sched->msgs, struct xe_sched_msg, link);
if (msg)
xe_sched_process_msg_queue(sched);
- xe_sched_msg_unlock(sched);
+ xe_sched_msg_unlock(sched, flags);
}
static struct xe_sched_msg *
xe_sched_get_msg(struct xe_gpu_scheduler *sched)
{
struct xe_sched_msg *msg;
+ unsigned long flags;
- xe_sched_msg_lock(sched);
+ xe_sched_msg_lock(sched, flags);
msg = list_first_entry_or_null(&sched->msgs,
struct xe_sched_msg, link);
if (msg)
list_del_init(&msg->link);
- xe_sched_msg_unlock(sched);
+ xe_sched_msg_unlock(sched, flags);
return msg;
}
@@ -64,6 +66,7 @@ int xe_sched_init(struct xe_gpu_scheduler *sched,
struct device *dev)
{
sched->ops = xe_ops;
+ spin_lock_init(&sched->msg_lock);
INIT_LIST_HEAD(&sched->msgs);
INIT_WORK(&sched->work_process_msg, xe_sched_process_msg_work);
@@ -98,15 +101,17 @@ void xe_sched_submission_resume_tdr(struct xe_gpu_scheduler *sched)
void xe_sched_add_msg(struct xe_gpu_scheduler *sched,
struct xe_sched_msg *msg)
{
- xe_sched_msg_lock(sched);
+ unsigned long flags;
+
+ xe_sched_msg_lock(sched, flags);
xe_sched_add_msg_locked(sched, msg);
- xe_sched_msg_unlock(sched);
+ xe_sched_msg_unlock(sched, flags);
}
void xe_sched_add_msg_locked(struct xe_gpu_scheduler *sched,
struct xe_sched_msg *msg)
{
- lockdep_assert_held(&sched->base.job_list_lock);
+ lockdep_assert_held(&sched->msg_lock);
list_add_tail(&msg->link, &sched->msgs);
xe_sched_process_msg_queue(sched);
diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
index c250ea773491..3238de26dcfe 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
@@ -29,15 +29,11 @@ void xe_sched_add_msg(struct xe_gpu_scheduler *sched,
void xe_sched_add_msg_locked(struct xe_gpu_scheduler *sched,
struct xe_sched_msg *msg);
-static inline void xe_sched_msg_lock(struct xe_gpu_scheduler *sched)
-{
- spin_lock(&sched->base.job_list_lock);
-}
+#define xe_sched_msg_lock(sched, flags) \
+ spin_lock_irqsave(&sched->msg_lock, flags)
-static inline void xe_sched_msg_unlock(struct xe_gpu_scheduler *sched)
-{
- spin_unlock(&sched->base.job_list_lock);
-}
+#define xe_sched_msg_unlock(sched, flags) \
+ spin_unlock_irqrestore(&sched->msg_lock, flags)
static inline void xe_sched_stop(struct xe_gpu_scheduler *sched)
{
diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h b/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h
index 6731b13da8bb..c8e0352ef941 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler_types.h
@@ -47,6 +47,8 @@ struct xe_gpu_scheduler {
const struct xe_sched_backend_ops *ops;
/** @msgs: list of messages to be processed in @work_process_msg */
struct list_head msgs;
+ /** @msg_lock: Lock for messages */
+ spinlock_t msg_lock;
/** @work_process_msg: processes messages */
struct work_struct work_process_msg;
};
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 82071a0ec91e..3efd2000c0a2 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -1704,14 +1704,15 @@ static int guc_exec_queue_suspend(struct xe_exec_queue *q)
{
struct xe_gpu_scheduler *sched = &q->guc->sched;
struct xe_sched_msg *msg = q->guc->static_msgs + STATIC_MSG_SUSPEND;
+ unsigned long flags;
if (exec_queue_killed_or_banned_or_wedged(q))
return -EINVAL;
- xe_sched_msg_lock(sched);
+ xe_sched_msg_lock(sched, flags);
if (guc_exec_queue_try_add_msg(q, msg, SUSPEND))
q->guc->suspend_pending = true;
- xe_sched_msg_unlock(sched);
+ xe_sched_msg_unlock(sched, flags);
return 0;
}
@@ -1751,30 +1752,32 @@ static void guc_exec_queue_resume(struct xe_exec_queue *q)
struct xe_gpu_scheduler *sched = &q->guc->sched;
struct xe_sched_msg *msg = q->guc->static_msgs + STATIC_MSG_RESUME;
struct xe_guc *guc = exec_queue_to_guc(q);
+ unsigned long flags;
xe_gt_assert(guc_to_gt(guc), !q->guc->suspend_pending);
- xe_sched_msg_lock(sched);
+ xe_sched_msg_lock(sched, flags);
guc_exec_queue_try_add_msg(q, msg, RESUME);
- xe_sched_msg_unlock(sched);
+ xe_sched_msg_unlock(sched, flags);
}
static void guc_exec_queue_kill_user(struct xe_exec_queue *q)
{
struct xe_gpu_scheduler *sched = &q->guc->sched;
struct xe_sched_msg *msg = q->guc->static_msgs + STATIC_MSG_KILL_USER;
+ unsigned long flags;
if (exec_queue_extra_ref(q))
return;
set_exec_queue_banned(q);
- xe_sched_msg_lock(sched);
+ xe_sched_msg_lock(sched, flags);
if (guc_exec_queue_try_add_msg(q, msg, KILL_USER)) {
set_exec_queue_extra_ref(q);
xe_exec_queue_get(q);
}
- xe_sched_msg_unlock(sched);
+ xe_sched_msg_unlock(sched, flags);
}
static bool guc_exec_queue_reset_status(struct xe_exec_queue *q)
--
2.34.1
* [RFC PATCH 26/29] drm/xe: Always wait on preempt fences in vma_check_userptr
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (24 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 25/29] drm/xe: Add a message lock to the Xe GPU scheduler Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 27/29] drm/xe: Teach xe_sync layer about drm_xe_semaphore Matthew Brost
` (10 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
The assumption that only a VM in preempt-fence mode has preempt fences
attached is not true; preempt fences can be attached to a dma-resv VM if
user queues are open.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_pt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index a75667346ab3..1efe17b0b1f8 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -1231,7 +1231,7 @@ static int vma_check_userptr(struct xe_vm *vm, struct xe_vma *vma,
&vm->userptr.invalidated);
spin_unlock(&vm->userptr.invalidated_lock);
- if (xe_vm_in_preempt_fence_mode(vm)) {
+ if (vm->preempt.num_exec_queues) {
struct dma_resv_iter cursor;
struct dma_fence *fence;
long err;
--
2.34.1
* [RFC PATCH 27/29] drm/xe: Teach xe_sync layer about drm_xe_semaphore
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (25 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 26/29] drm/xe: Always wait on preempt fences in vma_check_userptr Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 28/29] drm/xe: Add VM convert fence IOCTL Matthew Brost
` (9 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Teach the xe_sync layer about drm_xe_semaphore, which is used to import /
export user fences.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_sync.c | 90 ++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_sync.h | 8 +++
drivers/gpu/drm/xe/xe_sync_types.h | 5 +-
3 files changed, 102 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
index 42f5bebd09e5..ac4510ad52a9 100644
--- a/drivers/gpu/drm/xe/xe_sync.c
+++ b/drivers/gpu/drm/xe/xe_sync.c
@@ -6,6 +6,7 @@
#include "xe_sync.h"
#include <linux/dma-fence-array.h>
+#include <linux/dma-fence-user-fence.h>
#include <linux/kthread.h>
#include <linux/sched/mm.h>
#include <linux/uaccess.h>
@@ -14,11 +15,15 @@
#include <drm/drm_syncobj.h>
#include <uapi/drm/xe_drm.h>
+#include "xe_bo.h"
#include "xe_device_types.h"
#include "xe_exec_queue.h"
+#include "xe_hw_fence.h"
#include "xe_macros.h"
#include "xe_sched_job_types.h"
+#define IS_UNINSTALLED_HW_FENCE BIT(31)
+
struct xe_user_fence {
struct xe_device *xe;
struct kref refcount;
@@ -211,6 +216,74 @@ int xe_sync_entry_parse(struct xe_device *xe, struct xe_file *xef,
return 0;
}
+int xe_sync_semaphore_parse(struct xe_device *xe, struct xe_file *xef,
+ struct xe_sync_entry *sync,
+ struct drm_xe_semaphore __user *semaphore_user,
+ unsigned int flags)
+{
+ struct drm_xe_semaphore semaphore_in;
+ struct drm_gem_object *gem_obj;
+ struct xe_bo *bo;
+
+ if (copy_from_user(&semaphore_in, semaphore_user,
+ sizeof(*semaphore_user)))
+ return -EFAULT;
+
+ if (XE_IOCTL_DBG(xe, semaphore_in.offset & 0x7 ||
+ !semaphore_in.handle || semaphore_in.token ||
+ semaphore_in.reserved[0] || semaphore_in.reserved[1]))
+ return -EINVAL;
+
+ gem_obj = drm_gem_object_lookup(xef->drm, semaphore_in.handle);
+ if (XE_IOCTL_DBG(xe, !gem_obj))
+ return -ENOENT;
+
+ bo = gem_to_xe_bo(gem_obj);
+
+ if (XE_IOCTL_DBG(xe, bo->size < semaphore_in.offset)) {
+ xe_bo_put(bo);
+ return -EINVAL;
+ }
+
+ if (flags & DRM_XE_SYNC_FLAG_SIGNAL) {
+ struct iosys_map vmap;
+ struct dma_fence *fence;
+
+ sync->chain_fence = dma_fence_chain_alloc();
+ if (!sync->chain_fence) {
+ xe_bo_put(bo);
+ return -ENOMEM;
+ }
+
+ fence = xe_hw_fence_alloc();
+ if (IS_ERR(fence)) {
+ xe_bo_put(bo);
+ return PTR_ERR(fence);
+ }
+
+ vmap = bo->vmap;
+ iosys_map_incr(&vmap, semaphore_in.offset);
+
+ xe_hw_fence_user_init(fence, xe, vmap, semaphore_in.seqno);
+ sync->fence = fence;
+ sync->flags = IS_UNINSTALLED_HW_FENCE;
+ } else {
+ sync->user_fence = dma_fence_user_fence_alloc();
+ if (XE_IOCTL_DBG(xe, !sync->user_fence)) {
+ xe_bo_put(bo);
+ return -ENOMEM;
+ }
+
+ sync->addr = semaphore_in.offset;
+ sync->timeline_value = semaphore_in.seqno;
+ sync->flags = DRM_XE_SYNC_FLAG_SIGNAL;
+ }
+ sync->bo = bo;
+
+ return 0;
+}
+
int xe_sync_entry_add_deps(struct xe_sync_entry *sync, struct xe_sched_job *job)
{
if (sync->fence)
@@ -249,17 +322,34 @@ void xe_sync_entry_signal(struct xe_sync_entry *sync, struct dma_fence *fence)
user_fence_put(sync->ufence);
dma_fence_put(fence);
}
+ } else if (sync->user_fence) {
+ struct iosys_map vmap = sync->bo->vmap;
+
+ iosys_map_incr(&vmap, sync->addr);
+ dma_fence_user_fence_attach(fence, sync->user_fence,
+ &vmap, sync->timeline_value);
+ sync->user_fence = NULL;
}
}
+void xe_sync_entry_hw_fence_installed(struct xe_sync_entry *sync)
+{
+ sync->flags &= ~IS_UNINSTALLED_HW_FENCE;
+}
+
void xe_sync_entry_cleanup(struct xe_sync_entry *sync)
{
if (sync->syncobj)
drm_syncobj_put(sync->syncobj);
+ xe_bo_put(sync->bo);
+ if (sync->flags & IS_UNINSTALLED_HW_FENCE)
+ dma_fence_set_error(sync->fence, -ECANCELED);
dma_fence_put(sync->fence);
dma_fence_chain_free(sync->chain_fence);
if (sync->ufence)
user_fence_put(sync->ufence);
+ if (sync->user_fence)
+ dma_fence_user_fence_free(sync->user_fence);
}
/**
diff --git a/drivers/gpu/drm/xe/xe_sync.h b/drivers/gpu/drm/xe/xe_sync.h
index 256ffc1e54dc..fd56929e37cc 100644
--- a/drivers/gpu/drm/xe/xe_sync.h
+++ b/drivers/gpu/drm/xe/xe_sync.h
@@ -8,6 +8,9 @@
#include "xe_sync_types.h"
+struct drm_xe_semaphore;
+struct drm_xe_sync;
+
struct xe_device;
struct xe_exec_queue;
struct xe_file;
@@ -22,10 +25,15 @@ int xe_sync_entry_parse(struct xe_device *xe, struct xe_file *xef,
struct xe_sync_entry *sync,
struct drm_xe_sync __user *sync_user,
unsigned int flags);
+int xe_sync_semaphore_parse(struct xe_device *xe, struct xe_file *xef,
+ struct xe_sync_entry *sync,
+ struct drm_xe_semaphore __user *semaphore_user,
+ unsigned int flags);
int xe_sync_entry_add_deps(struct xe_sync_entry *sync,
struct xe_sched_job *job);
void xe_sync_entry_signal(struct xe_sync_entry *sync,
struct dma_fence *fence);
+void xe_sync_entry_hw_fence_installed(struct xe_sync_entry *sync);
void xe_sync_entry_cleanup(struct xe_sync_entry *sync);
struct dma_fence *
xe_sync_in_fence_get(struct xe_sync_entry *sync, int num_sync,
diff --git a/drivers/gpu/drm/xe/xe_sync_types.h b/drivers/gpu/drm/xe/xe_sync_types.h
index 30ac3f51993b..28e846c29122 100644
--- a/drivers/gpu/drm/xe/xe_sync_types.h
+++ b/drivers/gpu/drm/xe/xe_sync_types.h
@@ -11,14 +11,17 @@
struct drm_syncobj;
struct dma_fence;
struct dma_fence_chain;
-struct drm_xe_sync;
+struct dma_fence_user_fence;
struct user_fence;
+struct xe_bo;
struct xe_sync_entry {
struct drm_syncobj *syncobj;
struct dma_fence *fence;
struct dma_fence_chain *chain_fence;
struct xe_user_fence *ufence;
+ struct dma_fence_user_fence *user_fence;
+ struct xe_bo *bo;
u64 addr;
u64 timeline_value;
u32 type;
--
2.34.1
* [RFC PATCH 28/29] drm/xe: Add VM convert fence IOCTL
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (26 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 27/29] drm/xe: Teach xe_sync layer about drm_xe_semaphore Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 29/29] drm/xe: Add user fence TDR Matthew Brost
` (8 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
Basically a version of the resume worker which also converts user syncs
to kernel syncs (dma-fences) and vice versa. The exported dma-fences in
the conversion guard against preemption, which is required to avoid
breaking the dma-fence rules (no memory allocations in the signaling path
of a dma-fence, while resume requires memory allocations).
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 1 +
drivers/gpu/drm/xe/xe_preempt_fence.c | 9 +
drivers/gpu/drm/xe/xe_vm.c | 247 +++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_vm.h | 2 +
drivers/gpu/drm/xe/xe_vm_types.h | 4 +
5 files changed, 254 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 573b5f3df0c8..56dd26eddd92 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -191,6 +191,7 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
DRM_IOCTL_DEF_DRV(XE_WAIT_USER_FENCE, xe_wait_user_fence_ioctl,
DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(XE_VM_CONVERT_FENCE, xe_vm_convert_fence_ioctl, DRM_RENDER_ALLOW),
};
static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
diff --git a/drivers/gpu/drm/xe/xe_preempt_fence.c b/drivers/gpu/drm/xe/xe_preempt_fence.c
index 80a8bc82f3cc..c225f3cc82a3 100644
--- a/drivers/gpu/drm/xe/xe_preempt_fence.c
+++ b/drivers/gpu/drm/xe/xe_preempt_fence.c
@@ -12,6 +12,14 @@ static struct xe_exec_queue *to_exec_queue(struct dma_fence_preempt *fence)
return container_of(fence, struct xe_preempt_fence, base)->q;
}
+static struct dma_fence *
+xe_preempt_fence_preempt_delay(struct dma_fence_preempt *fence)
+{
+ struct xe_exec_queue *q = to_exec_queue(fence);
+
+ return q->vm->preempt.exported_fence ?: dma_fence_get_stub();
+}
+
static int xe_preempt_fence_preempt(struct dma_fence_preempt *fence)
{
struct xe_exec_queue *q = to_exec_queue(fence);
@@ -35,6 +43,7 @@ static void xe_preempt_fence_preempt_finished(struct dma_fence_preempt *fence)
}
static const struct dma_fence_preempt_ops xe_preempt_fence_ops = {
+ .preempt_delay = xe_preempt_fence_preempt_delay,
.preempt = xe_preempt_fence_preempt,
.preempt_wait = xe_preempt_fence_preempt_wait,
.preempt_finished = xe_preempt_fence_preempt_finished,
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 16bc1b82d950..5078aeea2bd8 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -6,6 +6,7 @@
#include "xe_vm.h"
#include <linux/dma-fence-array.h>
+#include <linux/dma-fence-chain.h>
#include <linux/nospec.h>
#include <drm/drm_exec.h>
@@ -441,29 +442,44 @@ int xe_vm_validate_rebind(struct xe_vm *vm, struct drm_exec *exec,
}
static int xe_preempt_work_begin(struct drm_exec *exec, struct xe_vm *vm,
- bool *done)
+ int extra_fence_count, bool *done)
{
int err;
+ *done = false;
+
err = drm_gpuvm_prepare_vm(&vm->gpuvm, exec, 0);
if (err)
return err;
- if (xe_vm_is_idle(vm)) {
+ if (xe_vm_in_preempt_fence_mode(vm) && xe_vm_is_idle(vm)) {
vm->preempt.rebind_deactivated = true;
*done = true;
return 0;
}
+ err = drm_gpuvm_prepare_objects(&vm->gpuvm, exec, 0);
+ if (err)
+ return err;
+
if (!preempt_fences_waiting(vm)) {
*done = true;
+
+ if (extra_fence_count) {
+ struct drm_gem_object *obj;
+ unsigned long index;
+
+ drm_exec_for_each_locked_object(exec, index, obj) {
+ err = dma_resv_reserve_fences(obj->resv,
+ extra_fence_count);
+ if (err)
+ return err;
+ }
+ }
+
return 0;
}
- err = drm_gpuvm_prepare_objects(&vm->gpuvm, exec, 0);
- if (err)
- return err;
-
err = wait_for_existing_preempt_fences(vm);
if (err)
return err;
@@ -474,7 +490,8 @@ static int xe_preempt_work_begin(struct drm_exec *exec, struct xe_vm *vm,
* The fence reservation here is intended for the new preempt fences
* we attach at the end of the rebind work.
*/
- return xe_vm_validate_rebind(vm, exec, vm->preempt.num_exec_queues);
+ return xe_vm_validate_rebind(vm, exec, vm->preempt.num_exec_queues +
+ extra_fence_count);
}
static void preempt_rebind_work_func(struct work_struct *w)
@@ -509,9 +526,9 @@ static void preempt_rebind_work_func(struct work_struct *w)
drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
drm_exec_until_all_locked(&exec) {
- bool done = false;
+ bool done;
- err = xe_preempt_work_begin(&exec, vm, &done);
+ err = xe_preempt_work_begin(&exec, vm, 0, &done);
drm_exec_retry_on_contention(&exec);
if (err || done) {
drm_exec_fini(&exec);
@@ -1638,6 +1655,7 @@ static void vm_destroy_work_func(struct work_struct *w)
container_of(w, struct xe_vm, destroy_work);
struct xe_device *xe = vm->xe;
struct xe_tile *tile;
+ struct dma_fence *fence;
u8 id;
/* xe_vm_close_and_put was not called? */
@@ -1660,6 +1678,9 @@ static void vm_destroy_work_func(struct work_struct *w)
if (vm->xef)
xe_file_put(vm->xef);
+ dma_fence_chain_for_each(fence, vm->preempt.exported_fence);
+ dma_fence_put(vm->preempt.exported_fence);
+
kfree(vm);
}
@@ -3403,3 +3424,211 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
}
kvfree(snap);
}
+
+static int check_semaphores(struct xe_vm *vm, struct xe_sync_entry *syncs,
+ struct drm_exec *exec, int num_syncs)
+{
+ int i, j;
+
+ for (i = 0; i < num_syncs; ++i) {
+ struct xe_bo *bo = syncs[i].bo;
+ struct drm_gem_object *obj = &bo->ttm.base;
+
+ if (bo->vm == vm)
+ continue;
+
+ for (j = 0; j < exec->num_objects; ++j) {
+ if (obj == exec->objects[j])
+ break;
+ }
+
+ if (j == exec->num_objects)
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int xe_vm_convert_fence_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file)
+{
+ struct xe_device *xe = to_xe_device(dev);
+ struct xe_file *xef = to_xe_file(file);
+ struct drm_xe_vm_convert_fence __user *args = data;
+ struct drm_xe_sync __user *syncs_user;
+ struct drm_xe_semaphore __user *semaphores_user;
+ struct xe_sync_entry *syncs = NULL;
+ struct xe_vm *vm;
+ int err = 0, i, num_syncs = 0;
+ bool done = false;
+ struct drm_exec exec;
+ unsigned int fence_count = 0;
+ LIST_HEAD(preempt_fences);
+ ktime_t end = 0;
+ long wait;
+ int __maybe_unused tries = 0;
+ struct dma_fence *fence, *prev = NULL;
+
+ if (XE_IOCTL_DBG(xe, args->extensions || args->flags ||
+ args->reserved[0] || args->reserved[1] ||
+ args->pad))
+ return -EINVAL;
+
+ vm = xe_vm_lookup(xef, args->vm_id);
+ if (XE_IOCTL_DBG(xe, !vm))
+ return -EINVAL;
+
+ err = down_write_killable(&vm->lock);
+ if (err)
+ goto put_vm;
+
+ if (XE_IOCTL_DBG(xe, xe_vm_is_closed_or_banned(vm))) {
+ err = -ENOENT;
+ goto release_vm_lock;
+ }
+
+ syncs = kcalloc(args->num_syncs * 2, sizeof(*syncs), GFP_KERNEL);
+ if (!syncs) {
+ err = -ENOMEM;
+ goto release_vm_lock;
+ }
+
+ syncs_user = u64_to_user_ptr(args->syncs);
+ semaphores_user = u64_to_user_ptr(args->semaphores);
+ for (i = 0; i < args->num_syncs; i++, num_syncs++) {
+ struct xe_sync_entry *sync = &syncs[i];
+ struct xe_sync_entry *semaphore_sync =
+ &syncs[args->num_syncs + i];
+
+ err = xe_sync_entry_parse(xe, xef, sync, &syncs_user[i],
+ SYNC_PARSE_FLAG_DISALLOW_USER_FENCE);
+ if (err)
+ goto release_syncs;
+
+ err = xe_sync_semaphore_parse(xe, xef, semaphore_sync,
+ &semaphores_user[i],
+ sync->flags);
+ if (err) {
+ xe_sync_entry_cleanup(&syncs[i]);
+ goto release_syncs;
+ }
+ }
+
+retry:
+ if (xe_vm_userptr_check_repin(vm)) {
+ err = xe_vm_userptr_pin(vm);
+ if (err)
+ goto release_syncs;
+ }
+
+ drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
+
+ drm_exec_until_all_locked(&exec) {
+ err = xe_preempt_work_begin(&exec, vm, num_syncs, &done);
+ drm_exec_retry_on_contention(&exec);
+ if (err) {
+ drm_exec_fini(&exec);
+ if (err && xe_vm_validate_should_retry(&exec, err, &end))
+ err = -EAGAIN;
+
+ goto release_syncs;
+ }
+ }
+
+ if (XE_IOCTL_DBG(xe, check_semaphores(vm, syncs + num_syncs,
+ &exec, num_syncs))) {
+ err = -EINVAL;
+ goto out_unlock;
+ }
+
+ if (!done) {
+ err = alloc_preempt_fences(vm, &preempt_fences, &fence_count);
+ if (err)
+ goto out_unlock;
+
+ wait = dma_resv_wait_timeout(xe_vm_resv(vm),
+ DMA_RESV_USAGE_KERNEL,
+ false, MAX_SCHEDULE_TIMEOUT);
+ if (wait <= 0) {
+ err = -ETIME;
+ goto out_unlock;
+ }
+ }
+
+#define retry_required(__tries, __vm) \
+ (IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT) ? \
+ (!(__tries)++ || __xe_vm_userptr_needs_repin(__vm)) : \
+ __xe_vm_userptr_needs_repin(__vm))
+
+ down_read(&vm->userptr.notifier_lock);
+ if (retry_required(tries, vm)) {
+ up_read(&vm->userptr.notifier_lock);
+ err = -EAGAIN;
+ goto out_unlock;
+ }
+
+#undef retry_required
+
+ /* Point of no return. */
+ xe_assert(vm->xe, list_empty(&vm->rebind_list));
+
+ for (i = 0; i < num_syncs; i++) {
+ struct xe_sync_entry *sync = &syncs[i];
+ struct xe_sync_entry *semaphore_sync = &syncs[num_syncs + i];
+
+ if (sync->flags & DRM_XE_SYNC_FLAG_SIGNAL) {
+ xe_sync_entry_signal(sync, semaphore_sync->fence);
+ xe_sync_entry_hw_fence_installed(semaphore_sync);
+
+ dma_fence_put(prev);
+ prev = dma_fence_get(vm->preempt.exported_fence);
+
+ dma_fence_chain_init(semaphore_sync->chain_fence,
+ prev, semaphore_sync->fence,
+ vm->preempt.seqno++);
+
+ vm->preempt.exported_fence =
+ &semaphore_sync->chain_fence->base;
+ semaphore_sync->chain_fence = NULL;
+
+ semaphore_sync->fence = NULL; /* Ref owned by chain */
+ } else {
+ xe_sync_entry_signal(semaphore_sync, sync->fence);
+ drm_gpuvm_resv_add_fence(&vm->gpuvm, &exec,
+ dma_fence_chain_contained(sync->fence),
+ DMA_RESV_USAGE_BOOKKEEP,
+ DMA_RESV_USAGE_BOOKKEEP);
+ }
+ }
+
+ dma_fence_chain_for_each(fence, prev);
+ dma_fence_put(prev);
+
+ if (!done) {
+ spin_lock(&vm->xe->ttm.lru_lock);
+ ttm_lru_bulk_move_tail(&vm->lru_bulk_move);
+ spin_unlock(&vm->xe->ttm.lru_lock);
+
+ arm_preempt_fences(vm, &preempt_fences);
+ resume_and_reinstall_preempt_fences(vm, &exec);
+ }
+ up_read(&vm->userptr.notifier_lock);
+
+out_unlock:
+ drm_exec_fini(&exec);
+release_syncs:
+ while (err != -EAGAIN && num_syncs--) {
+ xe_sync_entry_cleanup(&syncs[num_syncs]);
+ xe_sync_entry_cleanup(&syncs[args->num_syncs + num_syncs]);
+ }
+release_vm_lock:
+ if (err == -EAGAIN)
+ goto retry;
+ up_write(&vm->lock);
+put_vm:
+ xe_vm_put(vm);
+ free_preempt_fences(&preempt_fences);
+ kfree(syncs);
+
+ return err;
+}
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 4391dbaeba51..c1c70239cc91 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -181,6 +181,8 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
+int xe_vm_convert_fence_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file);
void xe_vm_close_and_put(struct xe_vm *vm);
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 7f9a303e51d8..c5cb83722706 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -254,6 +254,10 @@ struct xe_vm {
* BOs
*/
struct work_struct rebind_work;
+ /** @seqno: Seqno of exported dma-fences */
+ u64 seqno;
+ /** @exported_fence: Chain of exported dma-fences */
+ struct dma_fence *exported_fence;
} preempt;
/** @um: unified memory state */
--
2.34.1
* [RFC PATCH 29/29] drm/xe: Add user fence TDR
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (27 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 28/29] drm/xe: Add VM convert fence IOCTL Matthew Brost
@ 2024-11-18 23:37 ` Matthew Brost
2024-11-18 23:55 ` ✓ CI.Patch_applied: success for UMD direct submission in Xe Patchwork
` (7 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Matthew Brost @ 2024-11-18 23:37 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, christian.koenig,
mihail.atanassov, steven.price, shashank.sharma
We cannot let user fences exported as dma-fences run forever. Add a TDR
to protect against this. If the TDR fires, the entire VM is killed, as
the dma-fences are not tied to an individual queue.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 164 +++++++++++++++++++++++++++++--
drivers/gpu/drm/xe/xe_vm_types.h | 22 +++++
2 files changed, 179 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 5078aeea2bd8..8b475e76bfe0 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -30,6 +30,7 @@
#include "xe_exec_queue.h"
#include "xe_gt_pagefault.h"
#include "xe_gt_tlb_invalidation.h"
+#include "xe_hw_fence.h"
#include "xe_migrate.h"
#include "xe_pat.h"
#include "xe_pm.h"
@@ -336,11 +337,15 @@ void xe_vm_kill(struct xe_vm *vm, bool unlocked)
if (unlocked)
xe_vm_lock(vm, false);
- vm->flags |= XE_VM_FLAG_BANNED;
- trace_xe_vm_kill(vm);
+ if (!(vm->flags & XE_VM_FLAG_BANNED)) {
+ vm->flags |= XE_VM_FLAG_BANNED;
+ trace_xe_vm_kill(vm);
- list_for_each_entry(q, &vm->preempt.exec_queues, lr.link)
- q->ops->kill(q);
+ list_for_each_entry(q, &vm->preempt.exec_queues, lr.link)
+ q->ops->kill(q);
+
+ /* TODO: Unmap usermap doorbells */
+ }
if (unlocked)
xe_vm_unlock(vm);
@@ -1393,6 +1398,9 @@ static void xe_vm_free_scratch(struct xe_vm *vm)
}
}
+static void userfence_tdr(struct work_struct *w);
+static void userfence_kill(struct work_struct *w);
+
struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
{
struct drm_gem_object *vm_resv_obj;
@@ -1517,6 +1525,12 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
}
}
+ spin_lock_init(&vm->userfence.lock);
+ INIT_LIST_HEAD(&vm->userfence.pending_list);
+ vm->userfence.timeout = HZ * 5;
+ INIT_DELAYED_WORK(&vm->userfence.tdr, userfence_tdr);
+ INIT_WORK(&vm->userfence.kill_work, userfence_kill);
+
if (number_tiles > 1)
vm->composite_fence_ctx = dma_fence_context_alloc(1);
@@ -1562,6 +1576,9 @@ void xe_vm_close_and_put(struct xe_vm *vm)
xe_vm_close(vm);
flush_work(&vm->preempt.rebind_work);
+ flush_delayed_work(&vm->userfence.tdr);
+ flush_work(&vm->userfence.kill_work);
+
down_write(&vm->lock);
for_each_tile(tile, xe, id) {
if (vm->q[id])
@@ -3449,6 +3466,114 @@ static int check_semaphores(struct xe_vm *vm, struct xe_sync_entry *syncs,
return 0;
}
+struct tdr_item {
+ struct dma_fence *fence;
+ struct xe_vm *vm;
+ struct list_head link;
+ struct dma_fence_cb cb;
+ u64 deadline;
+};
+
+static void userfence_kill(struct work_struct *w)
+{
+ struct xe_vm *vm =
+ container_of(w, struct xe_vm, userfence.kill_work);
+
+ down_write(&vm->lock);
+ xe_vm_kill(vm, true);
+ up_write(&vm->lock);
+}
+
+static void userfence_tdr(struct work_struct *w)
+{
+ struct xe_vm *vm =
+ container_of(w, struct xe_vm, userfence.tdr.work);
+ struct tdr_item *tdr_item;
+ bool timeout = false, cookie = dma_fence_begin_signalling();
+
+ xe_hw_fence_irq_stop(&vm->xe->user_fence_irq);
+
+ spin_lock_irq(&vm->userfence.lock);
+ list_for_each_entry(tdr_item, &vm->userfence.pending_list, link) {
+ if (!dma_fence_is_signaled(tdr_item->fence)) {
+ drm_notice(&vm->xe->drm,
+ "Timedout usermap fence: seqno=%llu, deadline=%llu, jiffies=%llu",
+ tdr_item->fence->seqno, tdr_item->deadline,
+ get_jiffies_64());
+ dma_fence_set_error(tdr_item->fence, -ETIME);
+ timeout = true;
+ vm->userfence.timeout = 0;
+ }
+ }
+ spin_unlock_irq(&vm->userfence.lock);
+
+ xe_hw_fence_irq_start(&vm->xe->user_fence_irq);
+
+ /*
+ * This is dma-fence signaling path so we cannot take the locks requires
+ * to kill a VM. Defer killing to a worker.
+ */
+ if (timeout)
+ schedule_work(&vm->userfence.kill_work);
+
+ dma_fence_end_signalling(cookie);
+}
+
+static void userfence_fence_cb(struct dma_fence *fence,
+ struct dma_fence_cb *cb)
+{
+ struct tdr_item *next, *tdr_item = container_of(cb, struct tdr_item, cb);
+ struct xe_vm *vm = tdr_item->vm;
+ struct xe_gt *gt = xe_device_get_gt(vm->xe, 0);
+
+ if (fence)
+ spin_lock(&vm->userfence.lock);
+ else
+ spin_lock_irq(&vm->userfence.lock);
+
+ list_del(&tdr_item->link);
+ next = list_first_entry_or_null(&vm->userfence.pending_list,
+ typeof(*next), link);
+ if (next)
+ mod_delayed_work(gt->ordered_wq, &vm->userfence.tdr,
+ next->deadline - get_jiffies_64());
+ else
+ cancel_delayed_work(&vm->userfence.tdr);
+
+ if (fence)
+ spin_unlock(&vm->userfence.lock);
+ else
+ spin_unlock_irq(&vm->userfence.lock);
+
+ dma_fence_put(tdr_item->fence);
+ xe_vm_put(tdr_item->vm);
+ kfree(tdr_item);
+}
+
+static void userfence_tdr_add(struct xe_vm *vm, struct tdr_item *tdr_item,
+ struct dma_fence *fence)
+{
+ struct xe_gt *gt = xe_device_get_gt(vm->xe, 0);
+ int ret;
+
+ tdr_item->fence = dma_fence_get(fence);
+ tdr_item->vm = xe_vm_get(vm);
+ INIT_LIST_HEAD(&tdr_item->link);
+ tdr_item->deadline = vm->userfence.timeout + get_jiffies_64();
+
+ spin_lock_irq(&vm->userfence.lock);
+ list_add_tail(&tdr_item->link, &vm->userfence.pending_list);
+ if (list_is_singular(&vm->userfence.pending_list))
+ mod_delayed_work(gt->ordered_wq,
+ &vm->userfence.tdr,
+ vm->userfence.timeout);
+ spin_unlock_irq(&vm->userfence.lock);
+
+ ret = dma_fence_add_callback(fence, &tdr_item->cb, userfence_fence_cb);
+ if (ret == -ENOENT)
+ userfence_fence_cb(NULL, &tdr_item->cb);
+}
+
int xe_vm_convert_fence_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)
{
@@ -3459,6 +3584,7 @@ int xe_vm_convert_fence_ioctl(struct drm_device *dev, void *data,
struct drm_xe_semaphore __user *semaphores_user;
struct xe_sync_entry *syncs = NULL;
struct xe_vm *vm;
+ struct tdr_item **tdr_items = NULL;
int err = 0, i, num_syncs = 0;
bool done = false;
struct drm_exec exec;
@@ -3493,6 +3619,12 @@ int xe_vm_convert_fence_ioctl(struct drm_device *dev, void *data,
goto release_vm_lock;
}
+ tdr_items = kcalloc(args->num_syncs, sizeof(*tdr_items), GFP_KERNEL);
+ if (!tdr_items) {
+ err = -ENOMEM;
+ goto release_vm_lock;
+ }
+
syncs_user = u64_to_user_ptr(args->syncs);
semaphores_user = u64_to_user_ptr(args->semaphores);
for (i = 0; i < args->num_syncs; i++, num_syncs++) {
@@ -3505,6 +3637,15 @@ int xe_vm_convert_fence_ioctl(struct drm_device *dev, void *data,
if (err)
goto release_syncs;
+ if (sync->flags & DRM_XE_SYNC_FLAG_SIGNAL) {
+ tdr_items[i] = kmalloc(sizeof(struct tdr_item),
+ GFP_KERNEL);
+ if (!tdr_items[i]) {
+ err = -ENOMEM;
+ xe_sync_entry_cleanup(&syncs[i]);
+ goto release_syncs;
+ }
+ }
+
err = xe_sync_semaphore_parse(xe, xef, semaphore_sync,
&semaphores_user[i],
sync->flags);
@@ -3591,6 +3732,10 @@ int xe_vm_convert_fence_ioctl(struct drm_device *dev, void *data,
&semaphore_sync->chain_fence->base;
semaphore_sync->chain_fence = NULL;
+ userfence_tdr_add(vm, tdr_items[i],
+ semaphore_sync->fence);
+ tdr_items[i] = NULL;
+
semaphore_sync->fence = NULL; /* Ref owned by chain */
} else {
xe_sync_entry_signal(semaphore_sync, sync->fence);
@@ -3617,9 +3762,13 @@ int xe_vm_convert_fence_ioctl(struct drm_device *dev, void *data,
out_unlock:
drm_exec_fini(&exec);
release_syncs:
- while (err != -EAGAIN && num_syncs--) {
- xe_sync_entry_cleanup(&syncs[num_syncs]);
- xe_sync_entry_cleanup(&syncs[args->num_syncs + num_syncs]);
+ if (err != -EAGAIN) {
+ for (i = 0; i < num_syncs; ++i)
+ kfree(tdr_items[i]);
+ while (num_syncs--) {
+ xe_sync_entry_cleanup(&syncs[num_syncs]);
+ xe_sync_entry_cleanup(&syncs[args->num_syncs + num_syncs]);
+ }
}
release_vm_lock:
if (err == -EAGAIN)
@@ -3629,6 +3778,7 @@ int xe_vm_convert_fence_ioctl(struct drm_device *dev, void *data,
xe_vm_put(vm);
free_preempt_fences(&preempt_fences);
kfree(syncs);
+ kfree(tdr_items);
return err;
}
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index c5cb83722706..49cac5716f72 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -260,6 +260,28 @@ struct xe_vm {
struct dma_fence *exported_fence;
} preempt;
+ /** @userfence: User fence state */
+ struct {
+ /**
+ * @userfence.lock: fence lock
+ */
+ spinlock_t lock;
+ /**
+ * @userfence.pending_list: pending fence list, protected by
+ * userfence.lock
+ */
+ struct list_head pending_list;
+ /** @userfence.tdr: fence TDR */
+ struct delayed_work tdr;
+ /** @userfence.kill_work: worker that kills the VM when the TDR fires */
+ struct work_struct kill_work;
+ /**
+ * @userfence.timeout: Fence timeout period, protected by
+ * userfence.lock
+ */
+ u32 timeout;
+ } userfence;
+
/** @um: unified memory state */
struct {
/** @asid: address space ID, unique to each VM */
--
2.34.1
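For readers following the TDR flow above: userfence_tdr_add() arms a single per-VM delayed work item for the earliest deadline, and userfence_fence_cb() re-arms it for the next list entry (or cancels it) each time a fence signals. A minimal userspace sketch of that re-arm decision, with a hypothetical helper name not taken from the patch:

```c
#include <assert.h>
#include <stddef.h>

/* One pending user fence awaiting its timeout deadline. */
struct pending {
	unsigned long long deadline;	/* absolute expiry, e.g. in jiffies */
	struct pending *next;		/* list kept sorted by deadline */
};

/*
 * Sketch of the decision userfence_fence_cb() makes after unlinking a
 * signalled entry: return the relative delay to re-arm the shared
 * delayed work with, or -1 when the pending list is empty and the work
 * should be cancelled instead.
 */
static long long tdr_rearm_delay(const struct pending *head,
				 unsigned long long now)
{
	if (!head)
		return -1;	/* list empty: cancel_delayed_work() */
	if (head->deadline <= now)
		return 0;	/* deadline already passed: fire at once */
	return (long long)(head->deadline - now);
}
```

Because every entry gets the same per-VM timeout relative to submission time, appending at the tail keeps the list deadline-sorted, so only the head entry ever needs inspecting.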
^ permalink raw reply related [flat|nested] 52+ messages in thread
* ✓ CI.Patch_applied: success for UMD direct submission in Xe
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (28 preceding siblings ...)
2024-11-18 23:37 ` [RFC PATCH 29/29] drm/xe: Add user fence TDR Matthew Brost
@ 2024-11-18 23:55 ` Patchwork
2024-11-18 23:56 ` ✗ CI.checkpatch: warning " Patchwork
` (6 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Patchwork @ 2024-11-18 23:55 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: UMD direct submission in Xe
URL : https://patchwork.freedesktop.org/series/141524/
State : success
== Summary ==
=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: 1fe9a6cc7d13 drm-tip: 2024y-11m-18d-21h-03m-03s UTC integration manifest
=== git am output follows ===
Applying: dma-fence: Add dma_fence_preempt base class
Applying: dma-fence: Add dma_fence_user_fence
Applying: drm/xe: Use dma_fence_preempt base class
Applying: drm/xe: Allocate doorbells for UMD exec queues
Applying: drm/xe: Add doorbell ID to snapshot capture
Applying: drm/xe: Break submission ring out into its own BO
Applying: drm/xe: Break indirect ring state out into its own BO
Applying: drm/xe: Clear GGTT in xe_bo_restore_kernel
Applying: FIXME: drm/xe: Add pad to ring and indirect state
Applying: drm/xe: Enable indirect ring on media GT
Applying: drm/xe: Don't add pinned mappings to VM bulk move
Applying: drm/xe: Add exec queue post init extension processing
Applying: drm/xe/mmap: Add mmap support for PCI memory barrier
Applying: drm/xe: Add support for mmapping doorbells to user space
Applying: drm/xe: Add support for mmapping submission ring and indirect ring state to user space
Applying: drm/xe/uapi: Define UMD exec queue mapping uAPI
Applying: drm/xe: Add usermap exec queue extension
Applying: drm/xe: Drop EXEC_QUEUE_FLAG_UMD_SUBMISSION flag
Applying: drm/xe: Do not allow usermap exec queues in exec IOCTL
Applying: drm/xe: Teach GuC backend to kill usermap queues
Applying: drm/xe: Enable preempt fences on usermap queues
Applying: drm/xe/uapi: Add uAPI to convert user semaphore to / from drm syncobj
Applying: drm/xe: Add user fence IRQ handler
Applying: drm/xe: Add xe_hw_fence_user_init
Applying: drm/xe: Add a message lock to the Xe GPU scheduler
Applying: drm/xe: Always wait on preempt fences in vma_check_userptr
Applying: drm/xe: Teach xe_sync layer about drm_xe_semaphore
Applying: drm/xe: Add VM convert fence IOCTL
Applying: drm/xe: Add user fence TDR
* ✗ CI.checkpatch: warning for UMD direct submission in Xe
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (29 preceding siblings ...)
2024-11-18 23:55 ` ✓ CI.Patch_applied: success for UMD direct submission in Xe Patchwork
@ 2024-11-18 23:56 ` Patchwork
2024-11-18 23:57 ` ✓ CI.KUnit: success " Patchwork
` (5 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Patchwork @ 2024-11-18 23:56 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: UMD direct submission in Xe
URL : https://patchwork.freedesktop.org/series/141524/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
30ab6715fc09baee6cc14cb3c89ad8858688d474
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit fd7041dbc34b697461e84cd7d38b2784e7853a0f
Author: Matthew Brost <matthew.brost@intel.com>
Date: Mon Nov 18 15:37:57 2024 -0800
drm/xe: Add user fence TDR
We cannot let user fences exported as dma-fence run forever. Add a TDR
to protect against this. If the TDR fires the entire VM is killed as
dma-fences are not tied to an individual queue.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
+ /mt/dim checkpatch 1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6 drm-intel
3b27548d741a dma-fence: Add dma_fence_preempt base class
-:29: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#29:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 196 lines checked
f836e556d719 dma-fence: Add dma_fence_user_fence
-:31: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#31:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 111 lines checked
4c039e71cffc drm/xe: Use dma_fence_preempt base class
38341048855b drm/xe: Allocate doorbells for UMD exec queues
b7209cee2419 drm/xe: Add doorbell ID to snapshot capture
b2dee00a61e4 drm/xe: Break submission ring out into its own BO
d95585c2ee8f drm/xe: Break indirect ring state out into its own BO
a30d863635da drm/xe: Clear GGTT in xe_bo_restore_kernel
84944b9c1f1d FIXME: drm/xe: Add pad to ring and indirect state
550fa09a00fe drm/xe: Enable indirect ring on media GT
be690f2647b7 drm/xe: Don't add pinned mappings to VM bulk move
ce8d0d54124a drm/xe: Add exec queue post init extension processing
8ff7df45be74 drm/xe/mmap: Add mmap support for PCI memory barrier
86de49917d5c drm/xe: Add support for mmapping doorbells to user space
b7d96c9304d8 drm/xe: Add support for mmapping submission ring and indirect ring state to user space
1630ab640fee drm/xe/uapi: Define UMD exec queue mapping uAPI
-:8: WARNING:TYPO_SPELLING: 'addres' may be misspelled - perhaps 'address'?
#8:
space. The ring is a VM PPGTT addres, while indirect LRC state and
^^^^^^
-:67: WARNING:BLOCK_COMMENT_STYLE: Block comments should align the * on each line
#67: FILE: include/uapi/drm/xe_drm.h:1160:
+ /**
+ * @indirect_ring_state_handle: Indirect ring state buffer object
-:69: WARNING:BLOCK_COMMENT_STYLE: Block comments should align the * on each line
#69: FILE: include/uapi/drm/xe_drm.h:1162:
+ * handle. Allocated by KMD and must be closed by user.
+ */
total: 0 errors, 3 warnings, 0 checks, 68 lines checked
b2d24cbf3b91 drm/xe: Add usermap exec queue extension
-:122: CHECK:ALLOC_SIZEOF_STRUCT: Prefer kzalloc(sizeof(*q->usermap)...) over kzalloc(sizeof(struct xe_exec_queue_usermap)...)
#122: FILE: drivers/gpu/drm/xe/xe_exec_queue.c:514:
+ q->usermap = kzalloc(sizeof(struct xe_exec_queue_usermap), GFP_KERNEL);
total: 0 errors, 0 warnings, 1 checks, 321 lines checked
b134a5b875c3 drm/xe: Drop EXEC_QUEUE_FLAG_UMD_SUBMISSION flag
-:100: WARNING:ONE_SEMICOLON: Statements terminations use 1 semicolon
#100: FILE: drivers/gpu/drm/xe/xe_lrc.c:915:
+ bool user_queue = q && xe_exec_queue_is_usermap(q);;
total: 0 errors, 1 warnings, 0 checks, 66 lines checked
223fca1c8cba drm/xe: Do not allow usermap exec queues in exec IOCTL
0cb1042c6859 drm/xe: Teach GuC backend to kill usermap queues
-:11: WARNING:TYPO_SPELLING: 'seperate' may be misspelled - perhaps 'separate'?
#11:
a seperate flow for now.
^^^^^^^^
total: 0 errors, 1 warnings, 0 checks, 117 lines checked
470726cbe300 drm/xe: Enable preempt fences on usermap queues
477f8b172fd2 drm/xe/uapi: Add uAPI to convert user semaphore to / from drm syncobj
-:35: WARNING:LONG_LINE: line length of 131 exceeds 100 columns
#35: FILE: include/uapi/drm/xe_drm.h:121:
+#define DRM_IOCTL_XE_VM_CONVERT_FENCE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_VM_CONVERT_FENCE, struct drm_xe_vm_convert_fence)
total: 0 errors, 1 warnings, 0 checks, 80 lines checked
bd5175771179 drm/xe: Add user fence IRQ handler
25422eefe1b9 drm/xe: Add xe_hw_fence_user_init
a5f0a9af98c7 drm/xe: Add a message lock to the Xe GPU scheduler
-:89: CHECK:MACRO_ARG_PRECEDENCE: Macro argument 'sched' may be better as '(sched)' to avoid precedence issues
#89: FILE: drivers/gpu/drm/xe/xe_gpu_scheduler.h:32:
+#define xe_sched_msg_lock(sched, flags) \
+ spin_lock_irqsave(&sched->msg_lock, flags)
-:96: CHECK:MACRO_ARG_PRECEDENCE: Macro argument 'sched' may be better as '(sched)' to avoid precedence issues
#96: FILE: drivers/gpu/drm/xe/xe_gpu_scheduler.h:35:
+#define xe_sched_msg_unlock(sched, flags) \
+ spin_unlock_irqrestore(&sched->msg_lock, flags)
total: 0 errors, 0 warnings, 2 checks, 138 lines checked
27c8b6a6ff59 drm/xe: Always wait on preempt fences in vma_check_userptr
a56a39c0e6d2 drm/xe: Teach xe_sync layer about drm_xe_semaphore
8978b6080bfd drm/xe: Add VM convert fence IOCTL
-:291: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__vm' - possible side-effects?
#291: FILE: drivers/gpu/drm/xe/xe_vm.c:3558:
+#define retry_required(__tries, __vm) \
+ (IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT) ? \
+ (!(__tries)++ || __xe_vm_userptr_needs_repin(__vm)) : \
+ __xe_vm_userptr_needs_repin(__vm))
total: 0 errors, 0 warnings, 1 checks, 350 lines checked
fd7041dbc34b drm/xe: Add user fence TDR
-:30: ERROR:ASSIGN_IN_IF: do not use assignment in if condition
#30: FILE: drivers/gpu/drm/xe/xe_vm.c:340:
+ if (!(vm->flags |= XE_VM_FLAG_BANNED)) {
-:218: CHECK:ALLOC_SIZEOF_STRUCT: Prefer kmalloc(sizeof(*tdr_items[i])...) over kmalloc(sizeof(struct tdr_item)...)
#218: FILE: drivers/gpu/drm/xe/xe_vm.c:3641:
+ tdr_items[i] = kmalloc(sizeof(struct tdr_item),
total: 1 errors, 0 warnings, 1 checks, 265 lines checked
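The ASSIGN_IN_IF error above flags more than style: after `vm->flags |= XE_VM_FLAG_BANNED`, the condition tests the just-updated flags value, which now always includes the nonzero BANNED bit, so `!(...)` can never be true and the branch body is dead. A hedged sketch of the presumably intended test-then-set idiom, using hypothetical names and an assumed bit value:

```c
#include <assert.h>

#define XE_VM_FLAG_BANNED (1u << 0)	/* assumed bit value, for illustration */

struct vm { unsigned int flags; };

/*
 * Ban the VM exactly once: test the flag with `&` before setting it,
 * returning nonzero only on the first call. This avoids the dead branch
 * that `if (!(vm->flags |= XE_VM_FLAG_BANNED))` produces, since the
 * result of |= always has the BANNED bit set.
 */
static int vm_ban_once(struct vm *vm)
{
	if (vm->flags & XE_VM_FLAG_BANNED)
		return 0;		/* already banned, nothing to do */
	vm->flags |= XE_VM_FLAG_BANNED;
	return 1;			/* first time: caller runs the kill path */
}
```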
* ✓ CI.KUnit: success for UMD direct submission in Xe
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (30 preceding siblings ...)
2024-11-18 23:56 ` ✗ CI.checkpatch: warning " Patchwork
@ 2024-11-18 23:57 ` Patchwork
2024-11-19 0:15 ` ✓ CI.Build: " Patchwork
` (4 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Patchwork @ 2024-11-18 23:57 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: UMD direct submission in Xe
URL : https://patchwork.freedesktop.org/series/141524/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[23:56:14] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[23:56:19] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
156 | u64 ioread64_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
163 | u64 ioread64_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
170 | u64 ioread64be_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
178 | u64 ioread64be_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
[23:56:47] Starting KUnit Kernel (1/1)...
[23:56:47] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[23:56:47] =================== guc_dbm (7 subtests) ===================
[23:56:47] [PASSED] test_empty
[23:56:47] [PASSED] test_default
[23:56:47] ======================== test_size ========================
[23:56:47] [PASSED] 4
[23:56:47] [PASSED] 8
[23:56:47] [PASSED] 32
[23:56:47] [PASSED] 256
[23:56:47] ==================== [PASSED] test_size ====================
[23:56:47] ======================= test_reuse ========================
[23:56:47] [PASSED] 4
[23:56:47] [PASSED] 8
[23:56:47] [PASSED] 32
[23:56:47] [PASSED] 256
[23:56:47] =================== [PASSED] test_reuse ====================
[23:56:47] =================== test_range_overlap ====================
[23:56:47] [PASSED] 4
[23:56:47] [PASSED] 8
[23:56:47] [PASSED] 32
[23:56:47] [PASSED] 256
[23:56:47] =============== [PASSED] test_range_overlap ================
[23:56:47] =================== test_range_compact ====================
[23:56:47] [PASSED] 4
[23:56:47] [PASSED] 8
[23:56:47] [PASSED] 32
[23:56:47] [PASSED] 256
[23:56:47] =============== [PASSED] test_range_compact ================
[23:56:47] ==================== test_range_spare =====================
[23:56:47] [PASSED] 4
[23:56:47] [PASSED] 8
[23:56:47] [PASSED] 32
[23:56:47] [PASSED] 256
[23:56:47] ================ [PASSED] test_range_spare =================
[23:56:47] ===================== [PASSED] guc_dbm =====================
[23:56:47] =================== guc_idm (6 subtests) ===================
[23:56:47] [PASSED] bad_init
[23:56:47] [PASSED] no_init
[23:56:47] [PASSED] init_fini
[23:56:47] [PASSED] check_used
[23:56:47] [PASSED] check_quota
[23:56:47] [PASSED] check_all
[23:56:47] ===================== [PASSED] guc_idm =====================
[23:56:47] ================== no_relay (3 subtests) ===================
[23:56:47] [PASSED] xe_drops_guc2pf_if_not_ready
[23:56:47] [PASSED] xe_drops_guc2vf_if_not_ready
[23:56:47] [PASSED] xe_rejects_send_if_not_ready
[23:56:47] ==================== [PASSED] no_relay =====================
[23:56:47] ================== pf_relay (14 subtests) ==================
[23:56:47] [PASSED] pf_rejects_guc2pf_too_short
[23:56:47] [PASSED] pf_rejects_guc2pf_too_long
[23:56:47] [PASSED] pf_rejects_guc2pf_no_payload
[23:56:47] [PASSED] pf_fails_no_payload
[23:56:47] [PASSED] pf_fails_bad_origin
[23:56:47] [PASSED] pf_fails_bad_type
[23:56:47] [PASSED] pf_txn_reports_error
[23:56:47] [PASSED] pf_txn_sends_pf2guc
[23:56:47] [PASSED] pf_sends_pf2guc
[23:56:47] [SKIPPED] pf_loopback_nop
[23:56:47] [SKIPPED] pf_loopback_echo
[23:56:47] [SKIPPED] pf_loopback_fail
[23:56:47] [SKIPPED] pf_loopback_busy
[23:56:47] [SKIPPED] pf_loopback_retry
[23:56:47] ==================== [PASSED] pf_relay =====================
[23:56:47] ================== vf_relay (3 subtests) ===================
[23:56:47] [PASSED] vf_rejects_guc2vf_too_short
[23:56:47] [PASSED] vf_rejects_guc2vf_too_long
[23:56:47] [PASSED] vf_rejects_guc2vf_no_payload
[23:56:47] ==================== [PASSED] vf_relay =====================
[23:56:47] ================= pf_service (11 subtests) =================
[23:56:47] [PASSED] pf_negotiate_any
[23:56:47] [PASSED] pf_negotiate_base_match
[23:56:47] [PASSED] pf_negotiate_base_newer
[23:56:47] [PASSED] pf_negotiate_base_next
[23:56:47] [SKIPPED] pf_negotiate_base_older
[23:56:47] [PASSED] pf_negotiate_base_prev
[23:56:47] [PASSED] pf_negotiate_latest_match
[23:56:47] [PASSED] pf_negotiate_latest_newer
[23:56:47] [PASSED] pf_negotiate_latest_next
[23:56:47] [SKIPPED] pf_negotiate_latest_older
[23:56:47] [SKIPPED] pf_negotiate_latest_prev
[23:56:47] =================== [PASSED] pf_service ====================
[23:56:47] ===================== lmtt (1 subtest) =====================
[23:56:47] ======================== test_ops =========================
[23:56:47] [PASSED] 2-level
[23:56:47] [PASSED] multi-level
[23:56:47] ==================== [PASSED] test_ops =====================
[23:56:47] ====================== [PASSED] lmtt =======================
[23:56:47] =================== xe_mocs (2 subtests) ===================
[23:56:47] ================ xe_live_mocs_kernel_kunit ================
[23:56:47] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[23:56:47] ================ xe_live_mocs_reset_kunit =================
[23:56:47] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[23:56:47] ==================== [SKIPPED] xe_mocs =====================
[23:56:47] ================= xe_migrate (2 subtests) ==================
[23:56:47] ================= xe_migrate_sanity_kunit =================
[23:56:47] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[23:56:47] ================== xe_validate_ccs_kunit ==================
[23:56:47] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[23:56:47] =================== [SKIPPED] xe_migrate ===================
[23:56:47] ================== xe_dma_buf (1 subtest) ==================
[23:56:47] ==================== xe_dma_buf_kunit =====================
[23:56:47] ================ [SKIPPED] xe_dma_buf_kunit ================
[23:56:47] =================== [SKIPPED] xe_dma_buf ===================
[23:56:47] ==================== xe_bo (3 subtests) ====================
[23:56:47] ================== xe_ccs_migrate_kunit ===================
[23:56:47] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[23:56:47] ==================== xe_bo_evict_kunit ====================
[23:56:47] =============== [SKIPPED] xe_bo_evict_kunit ================
[23:56:47] =================== xe_bo_shrink_kunit ====================
[23:56:47] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[23:56:47] ===================== [SKIPPED] xe_bo ======================
[23:56:47] ==================== args (11 subtests) ====================
[23:56:47] [PASSED] count_args_test
[23:56:47] [PASSED] call_args_example
[23:56:47] [PASSED] call_args_test
[23:56:47] [PASSED] drop_first_arg_example
[23:56:47] [PASSED] drop_first_arg_test
[23:56:47] [PASSED] first_arg_example
[23:56:47] [PASSED] first_arg_test
[23:56:47] [PASSED] last_arg_example
[23:56:47] [PASSED] last_arg_test
[23:56:47] [PASSED] pick_arg_example
[23:56:47] [PASSED] sep_comma_example
stty: 'standard input': Inappropriate ioctl for device
[23:56:47] ====================== [PASSED] args =======================
[23:56:47] =================== xe_pci (2 subtests) ====================
[23:56:47] [PASSED] xe_gmdid_graphics_ip
[23:56:47] [PASSED] xe_gmdid_media_ip
[23:56:47] ===================== [PASSED] xe_pci ======================
[23:56:47] =================== xe_rtp (2 subtests) ====================
[23:56:47] =============== xe_rtp_process_to_sr_tests ================
[23:56:47] [PASSED] coalesce-same-reg
[23:56:47] [PASSED] no-match-no-add
[23:56:47] [PASSED] match-or
[23:56:47] [PASSED] match-or-xfail
[23:56:47] [PASSED] no-match-no-add-multiple-rules
[23:56:47] [PASSED] two-regs-two-entries
[23:56:47] [PASSED] clr-one-set-other
[23:56:47] [PASSED] set-field
[23:56:47] [PASSED] conflict-duplicate
[23:56:47] [PASSED] conflict-not-disjoint
[23:56:47] [PASSED] conflict-reg-type
[23:56:47] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[23:56:47] ================== xe_rtp_process_tests ===================
[23:56:47] [PASSED] active1
[23:56:47] [PASSED] active2
[23:56:47] [PASSED] active-inactive
[23:56:47] [PASSED] inactive-active
[23:56:47] [PASSED] inactive-1st_or_active-inactive
[23:56:47] [PASSED] inactive-2nd_or_active-inactive
[23:56:47] [PASSED] inactive-last_or_active-inactive
[23:56:47] [PASSED] inactive-no_or_active-inactive
[23:56:47] ============== [PASSED] xe_rtp_process_tests ===============
[23:56:47] ===================== [PASSED] xe_rtp ======================
[23:56:47] ==================== xe_wa (1 subtest) =====================
[23:56:47] ======================== xe_wa_gt =========================
[23:56:47] [PASSED] TIGERLAKE (B0)
[23:56:47] [PASSED] DG1 (A0)
[23:56:47] [PASSED] DG1 (B0)
[23:56:47] [PASSED] ALDERLAKE_S (A0)
[23:56:47] [PASSED] ALDERLAKE_S (B0)
[23:56:47] [PASSED] ALDERLAKE_S (C0)
[23:56:47] [PASSED] ALDERLAKE_S (D0)
[23:56:47] [PASSED] ALDERLAKE_P (A0)
[23:56:47] [PASSED] ALDERLAKE_P (B0)
[23:56:47] [PASSED] ALDERLAKE_P (C0)
[23:56:47] [PASSED] ALDERLAKE_S_RPLS (D0)
[23:56:47] [PASSED] ALDERLAKE_P_RPLU (E0)
[23:56:47] [PASSED] DG2_G10 (C0)
[23:56:47] [PASSED] DG2_G11 (B1)
[23:56:47] [PASSED] DG2_G12 (A1)
[23:56:47] [PASSED] METEORLAKE (g:A0, m:A0)
[23:56:47] [PASSED] METEORLAKE (g:A0, m:A0)
[23:56:47] [PASSED] METEORLAKE (g:A0, m:A0)
[23:56:47] [PASSED] LUNARLAKE (g:A0, m:A0)
[23:56:47] [PASSED] LUNARLAKE (g:B0, m:A0)
[23:56:47] [PASSED] BATTLEMAGE (g:A0, m:A1)
[23:56:47] ==================== [PASSED] xe_wa_gt =====================
[23:56:47] ====================== [PASSED] xe_wa ======================
[23:56:47] ============================================================
[23:56:47] Testing complete. Ran 122 tests: passed: 106, skipped: 16
[23:56:47] Elapsed time: 33.077s total, 4.388s configuring, 28.422s building, 0.223s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[23:56:48] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[23:56:49] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
156 | u64 ioread64_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
163 | u64 ioread64_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
170 | u64 ioread64be_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
178 | u64 ioread64be_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
[23:57:12] Starting KUnit Kernel (1/1)...
[23:57:12] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[23:57:12] ================== drm_buddy (7 subtests) ==================
[23:57:12] [PASSED] drm_test_buddy_alloc_limit
[23:57:12] [PASSED] drm_test_buddy_alloc_optimistic
[23:57:12] [PASSED] drm_test_buddy_alloc_pessimistic
[23:57:12] [PASSED] drm_test_buddy_alloc_pathological
[23:57:12] [PASSED] drm_test_buddy_alloc_contiguous
[23:57:12] [PASSED] drm_test_buddy_alloc_clear
[23:57:12] [PASSED] drm_test_buddy_alloc_range_bias
[23:57:12] ==================== [PASSED] drm_buddy ====================
[23:57:12] ============= drm_cmdline_parser (40 subtests) =============
[23:57:12] [PASSED] drm_test_cmdline_force_d_only
[23:57:12] [PASSED] drm_test_cmdline_force_D_only_dvi
[23:57:12] [PASSED] drm_test_cmdline_force_D_only_hdmi
[23:57:12] [PASSED] drm_test_cmdline_force_D_only_not_digital
[23:57:12] [PASSED] drm_test_cmdline_force_e_only
[23:57:12] [PASSED] drm_test_cmdline_res
[23:57:12] [PASSED] drm_test_cmdline_res_vesa
[23:57:12] [PASSED] drm_test_cmdline_res_vesa_rblank
[23:57:12] [PASSED] drm_test_cmdline_res_rblank
[23:57:12] [PASSED] drm_test_cmdline_res_bpp
[23:57:12] [PASSED] drm_test_cmdline_res_refresh
[23:57:12] [PASSED] drm_test_cmdline_res_bpp_refresh
[23:57:12] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[23:57:12] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[23:57:12] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[23:57:12] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[23:57:12] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[23:57:12] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[23:57:12] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[23:57:12] [PASSED] drm_test_cmdline_res_margins_force_on
[23:57:12] [PASSED] drm_test_cmdline_res_vesa_margins
[23:57:12] [PASSED] drm_test_cmdline_name
[23:57:12] [PASSED] drm_test_cmdline_name_bpp
[23:57:12] [PASSED] drm_test_cmdline_name_option
[23:57:12] [PASSED] drm_test_cmdline_name_bpp_option
[23:57:12] [PASSED] drm_test_cmdline_rotate_0
[23:57:12] [PASSED] drm_test_cmdline_rotate_90
[23:57:12] [PASSED] drm_test_cmdline_rotate_180
[23:57:12] [PASSED] drm_test_cmdline_rotate_270
[23:57:12] [PASSED] drm_test_cmdline_hmirror
[23:57:12] [PASSED] drm_test_cmdline_vmirror
[23:57:12] [PASSED] drm_test_cmdline_margin_options
[23:57:12] [PASSED] drm_test_cmdline_multiple_options
[23:57:12] [PASSED] drm_test_cmdline_bpp_extra_and_option
[23:57:12] [PASSED] drm_test_cmdline_extra_and_option
[23:57:12] [PASSED] drm_test_cmdline_freestanding_options
[23:57:12] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[23:57:12] [PASSED] drm_test_cmdline_panel_orientation
[23:57:12] ================ drm_test_cmdline_invalid =================
[23:57:12] [PASSED] margin_only
[23:57:12] [PASSED] interlace_only
[23:57:12] [PASSED] res_missing_x
[23:57:12] [PASSED] res_missing_y
[23:57:12] [PASSED] res_bad_y
[23:57:12] [PASSED] res_missing_y_bpp
[23:57:12] [PASSED] res_bad_bpp
[23:57:12] [PASSED] res_bad_refresh
[23:57:12] [PASSED] res_bpp_refresh_force_on_off
[23:57:12] [PASSED] res_invalid_mode
[23:57:12] [PASSED] res_bpp_wrong_place_mode
[23:57:12] [PASSED] name_bpp_refresh
[23:57:12] [PASSED] name_refresh
[23:57:12] [PASSED] name_refresh_wrong_mode
[23:57:12] [PASSED] name_refresh_invalid_mode
[23:57:12] [PASSED] rotate_multiple
[23:57:12] [PASSED] rotate_invalid_val
[23:57:12] [PASSED] rotate_truncated
[23:57:12] [PASSED] invalid_option
[23:57:12] [PASSED] invalid_tv_option
[23:57:12] [PASSED] truncated_tv_option
[23:57:12] ============ [PASSED] drm_test_cmdline_invalid =============
[23:57:12] =============== drm_test_cmdline_tv_options ===============
[23:57:12] [PASSED] NTSC
[23:57:12] [PASSED] NTSC_443
[23:57:12] [PASSED] NTSC_J
[23:57:12] [PASSED] PAL
[23:57:12] [PASSED] PAL_M
[23:57:12] [PASSED] PAL_N
[23:57:12] [PASSED] SECAM
[23:57:12] [PASSED] MONO_525
[23:57:12] [PASSED] MONO_625
[23:57:12] =========== [PASSED] drm_test_cmdline_tv_options ===========
[23:57:12] =============== [PASSED] drm_cmdline_parser ================
[23:57:12] ========== drmm_connector_hdmi_init (19 subtests) ==========
[23:57:12] [PASSED] drm_test_connector_hdmi_init_valid
[23:57:12] [PASSED] drm_test_connector_hdmi_init_bpc_8
[23:57:12] [PASSED] drm_test_connector_hdmi_init_bpc_10
[23:57:12] [PASSED] drm_test_connector_hdmi_init_bpc_12
[23:57:12] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[23:57:12] [PASSED] drm_test_connector_hdmi_init_bpc_null
[23:57:12] [PASSED] drm_test_connector_hdmi_init_formats_empty
[23:57:12] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[23:57:12] [PASSED] drm_test_connector_hdmi_init_null_ddc
[23:57:12] [PASSED] drm_test_connector_hdmi_init_null_product
[23:57:12] [PASSED] drm_test_connector_hdmi_init_null_vendor
[23:57:12] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[23:57:12] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[23:57:12] [PASSED] drm_test_connector_hdmi_init_product_valid
[23:57:12] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[23:57:12] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[23:57:12] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[23:57:12] ========= drm_test_connector_hdmi_init_type_valid =========
[23:57:12] [PASSED] HDMI-A
[23:57:12] [PASSED] HDMI-B
[23:57:12] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[23:57:12] ======== drm_test_connector_hdmi_init_type_invalid ========
[23:57:12] [PASSED] Unknown
[23:57:12] [PASSED] VGA
[23:57:12] [PASSED] DVI-I
[23:57:12] [PASSED] DVI-D
[23:57:12] [PASSED] DVI-A
[23:57:12] [PASSED] Composite
[23:57:12] [PASSED] SVIDEO
[23:57:12] [PASSED] LVDS
[23:57:12] [PASSED] Component
[23:57:12] [PASSED] DIN
[23:57:12] [PASSED] DP
[23:57:12] [PASSED] TV
[23:57:12] [PASSED] eDP
[23:57:12] [PASSED] Virtual
[23:57:12] [PASSED] DSI
[23:57:12] [PASSED] DPI
[23:57:12] [PASSED] Writeback
[23:57:12] [PASSED] SPI
[23:57:12] [PASSED] USB
[23:57:12] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[23:57:12] ============ [PASSED] drmm_connector_hdmi_init =============
[23:57:12] ============= drmm_connector_init (3 subtests) =============
[23:57:12] [PASSED] drm_test_drmm_connector_init
[23:57:12] [PASSED] drm_test_drmm_connector_init_null_ddc
[23:57:12] ========= drm_test_drmm_connector_init_type_valid =========
[23:57:12] [PASSED] Unknown
[23:57:12] [PASSED] VGA
[23:57:12] [PASSED] DVI-I
[23:57:12] [PASSED] DVI-D
[23:57:12] [PASSED] DVI-A
[23:57:12] [PASSED] Composite
[23:57:12] [PASSED] SVIDEO
[23:57:12] [PASSED] LVDS
[23:57:12] [PASSED] Component
[23:57:12] [PASSED] DIN
[23:57:12] [PASSED] DP
[23:57:12] [PASSED] HDMI-A
[23:57:12] [PASSED] HDMI-B
[23:57:12] [PASSED] TV
[23:57:12] [PASSED] eDP
[23:57:12] [PASSED] Virtual
[23:57:12] [PASSED] DSI
[23:57:12] [PASSED] DPI
[23:57:12] [PASSED] Writeback
[23:57:12] [PASSED] SPI
[23:57:12] [PASSED] USB
[23:57:12] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[23:57:12] =============== [PASSED] drmm_connector_init ===============
[23:57:12] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[23:57:12] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[23:57:12] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[23:57:12] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[23:57:12] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[23:57:12] ========== drm_test_get_tv_mode_from_name_valid ===========
[23:57:12] [PASSED] NTSC
[23:57:12] [PASSED] NTSC-443
[23:57:12] [PASSED] NTSC-J
[23:57:12] [PASSED] PAL
[23:57:12] [PASSED] PAL-M
[23:57:12] [PASSED] PAL-N
[23:57:12] [PASSED] SECAM
[23:57:12] [PASSED] Mono
[23:57:12] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[23:57:12] [PASSED] drm_test_get_tv_mode_from_name_truncated
[23:57:12] ============ [PASSED] drm_get_tv_mode_from_name ============
[23:57:12] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[23:57:12] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[23:57:12] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[23:57:12] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[23:57:12] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[23:57:12] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[23:57:12] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[23:57:12] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[23:57:12] [PASSED] VIC 96
[23:57:12] [PASSED] VIC 97
[23:57:12] [PASSED] VIC 101
[23:57:12] [PASSED] VIC 102
[23:57:12] [PASSED] VIC 106
[23:57:12] [PASSED] VIC 107
[23:57:12] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[23:57:12] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[23:57:12] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[23:57:12] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[23:57:12] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[23:57:12] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[23:57:12] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[23:57:12] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[23:57:12] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[23:57:12] [PASSED] Automatic
[23:57:12] [PASSED] Full
[23:57:12] [PASSED] Limited 16:235
[23:57:12] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[23:57:12] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[23:57:12] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[23:57:12] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[23:57:12] === drm_test_drm_hdmi_connector_get_output_format_name ====
[23:57:12] [PASSED] RGB
[23:57:12] [PASSED] YUV 4:2:0
[23:57:12] [PASSED] YUV 4:2:2
[23:57:12] [PASSED] YUV 4:4:4
[23:57:12] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[23:57:12] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[23:57:12] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[23:57:12] ============= drm_damage_helper (21 subtests) ==============
[23:57:12] [PASSED] drm_test_damage_iter_no_damage
[23:57:12] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[23:57:12] [PASSED] drm_test_damage_iter_no_damage_src_moved
[23:57:12] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[23:57:12] [PASSED] drm_test_damage_iter_no_damage_not_visible
[23:57:12] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[23:57:12] [PASSED] drm_test_damage_iter_no_damage_no_fb
[23:57:12] [PASSED] drm_test_damage_iter_simple_damage
[23:57:12] [PASSED] drm_test_damage_iter_single_damage
[23:57:12] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[23:57:12] [PASSED] drm_test_damage_iter_single_damage_outside_src
[23:57:12] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[23:57:12] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[23:57:12] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[23:57:12] [PASSED] drm_test_damage_iter_single_damage_src_moved
[23:57:12] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[23:57:12] [PASSED] drm_test_damage_iter_damage
[23:57:12] [PASSED] drm_test_damage_iter_damage_one_intersect
[23:57:12] [PASSED] drm_test_damage_iter_damage_one_outside
[23:57:12] [PASSED] drm_test_damage_iter_damage_src_moved
[23:57:12] [PASSED] drm_test_damage_iter_damage_not_visible
[23:57:12] ================ [PASSED] drm_damage_helper ================
[23:57:12] ============== drm_dp_mst_helper (3 subtests) ==============
[23:57:12] ============== drm_test_dp_mst_calc_pbn_mode ==============
[23:57:12] [PASSED] Clock 154000 BPP 30 DSC disabled
[23:57:12] [PASSED] Clock 234000 BPP 30 DSC disabled
[23:57:12] [PASSED] Clock 297000 BPP 24 DSC disabled
[23:57:12] [PASSED] Clock 332880 BPP 24 DSC enabled
[23:57:12] [PASSED] Clock 324540 BPP 24 DSC enabled
[23:57:12] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[23:57:12] ============== drm_test_dp_mst_calc_pbn_div ===============
[23:57:12] [PASSED] Link rate 2000000 lane count 4
[23:57:12] [PASSED] Link rate 2000000 lane count 2
[23:57:12] [PASSED] Link rate 2000000 lane count 1
[23:57:12] [PASSED] Link rate 1350000 lane count 4
[23:57:12] [PASSED] Link rate 1350000 lane count 2
[23:57:12] [PASSED] Link rate 1350000 lane count 1
[23:57:12] [PASSED] Link rate 1000000 lane count 4
[23:57:12] [PASSED] Link rate 1000000 lane count 2
[23:57:12] [PASSED] Link rate 1000000 lane count 1
[23:57:12] [PASSED] Link rate 810000 lane count 4
[23:57:12] [PASSED] Link rate 810000 lane count 2
[23:57:12] [PASSED] Link rate 810000 lane count 1
[23:57:12] [PASSED] Link rate 540000 lane count 4
[23:57:12] [PASSED] Link rate 540000 lane count 2
[23:57:12] [PASSED] Link rate 540000 lane count 1
[23:57:12] [PASSED] Link rate 270000 lane count 4
[23:57:12] [PASSED] Link rate 270000 lane count 2
[23:57:12] [PASSED] Link rate 270000 lane count 1
[23:57:12] [PASSED] Link rate 162000 lane count 4
[23:57:12] [PASSED] Link rate 162000 lane count 2
[23:57:12] [PASSED] Link rate 162000 lane count 1
[23:57:12] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[23:57:12] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[23:57:12] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[23:57:12] [PASSED] DP_POWER_UP_PHY with port number
[23:57:12] [PASSED] DP_POWER_DOWN_PHY with port number
[23:57:12] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[23:57:12] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[23:57:12] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[23:57:12] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[23:57:12] [PASSED] DP_QUERY_PAYLOAD with port number
[23:57:12] [PASSED] DP_QUERY_PAYLOAD with VCPI
[23:57:12] [PASSED] DP_REMOTE_DPCD_READ with port number
[23:57:12] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[23:57:12] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[23:57:12] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[23:57:12] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[23:57:12] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[23:57:12] [PASSED] DP_REMOTE_I2C_READ with port number
[23:57:12] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[23:57:12] [PASSED] DP_REMOTE_I2C_READ with transactions array
[23:57:12] [PASSED] DP_REMOTE_I2C_WRITE with port number
[23:57:12] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[23:57:12] [PASSED] DP_REMOTE_I2C_WRITE with data array
[23:57:12] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[23:57:12] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[23:57:12] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[23:57:12] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[23:57:12] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[23:57:12] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[23:57:12] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[23:57:12] ================ [PASSED] drm_dp_mst_helper ================
[23:57:12] ================== drm_exec (7 subtests) ===================
[23:57:12] [PASSED] sanitycheck
[23:57:12] [PASSED] test_lock
[23:57:12] [PASSED] test_lock_unlock
[23:57:12] [PASSED] test_duplicates
[23:57:12] [PASSED] test_prepare
[23:57:12] [PASSED] test_prepare_array
[23:57:12] [PASSED] test_multiple_loops
[23:57:12] ==================== [PASSED] drm_exec =====================
[23:57:12] =========== drm_format_helper_test (17 subtests) ===========
[23:57:12] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[23:57:12] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[23:57:12] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[23:57:12] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[23:57:12] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[23:57:12] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[23:57:12] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[23:57:12] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[23:57:12] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[23:57:12] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[23:57:12] ============== drm_test_fb_xrgb8888_to_mono ===============
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[23:57:12] ==================== drm_test_fb_swab =====================
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ================ [PASSED] drm_test_fb_swab =================
[23:57:12] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[23:57:12] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[23:57:12] [PASSED] single_pixel_source_buffer
[23:57:12] [PASSED] single_pixel_clip_rectangle
[23:57:12] [PASSED] well_known_colors
[23:57:12] [PASSED] destination_pitch
[23:57:12] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[23:57:12] ================= drm_test_fb_clip_offset =================
[23:57:12] [PASSED] pass through
[23:57:12] [PASSED] horizontal offset
[23:57:12] [PASSED] vertical offset
[23:57:12] [PASSED] horizontal and vertical offset
[23:57:12] [PASSED] horizontal offset (custom pitch)
[23:57:12] [PASSED] vertical offset (custom pitch)
[23:57:12] [PASSED] horizontal and vertical offset (custom pitch)
[23:57:12] ============= [PASSED] drm_test_fb_clip_offset =============
[23:57:12] ============== drm_test_fb_build_fourcc_list ==============
[23:57:12] [PASSED] no native formats
[23:57:12] [PASSED] XRGB8888 as native format
[23:57:12] [PASSED] remove duplicates
[23:57:12] [PASSED] convert alpha formats
[23:57:12] [PASSED] random formats
[23:57:12] ========== [PASSED] drm_test_fb_build_fourcc_list ==========
[23:57:12] =================== drm_test_fb_memcpy ====================
[23:57:12] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[23:57:12] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[23:57:12] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[23:57:12] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[23:57:12] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[23:57:12] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[23:57:12] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[23:57:12] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[23:57:12] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[23:57:12] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[23:57:12] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[23:57:12] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[23:57:12] =============== [PASSED] drm_test_fb_memcpy ================
[23:57:12] ============= [PASSED] drm_format_helper_test ==============
[23:57:12] ================= drm_format (18 subtests) =================
[23:57:12] [PASSED] drm_test_format_block_width_invalid
[23:57:12] [PASSED] drm_test_format_block_width_one_plane
[23:57:12] [PASSED] drm_test_format_block_width_two_plane
[23:57:12] [PASSED] drm_test_format_block_width_three_plane
[23:57:12] [PASSED] drm_test_format_block_width_tiled
[23:57:12] [PASSED] drm_test_format_block_height_invalid
[23:57:12] [PASSED] drm_test_format_block_height_one_plane
[23:57:12] [PASSED] drm_test_format_block_height_two_plane
[23:57:12] [PASSED] drm_test_format_block_height_three_plane
[23:57:12] [PASSED] drm_test_format_block_height_tiled
[23:57:12] [PASSED] drm_test_format_min_pitch_invalid
[23:57:12] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[23:57:12] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[23:57:12] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[23:57:12] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[23:57:12] [PASSED] drm_test_format_min_pitch_two_plane
[23:57:12] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[23:57:12] [PASSED] drm_test_format_min_pitch_tiled
[23:57:12] =================== [PASSED] drm_format ====================
[23:57:12] ============== drm_framebuffer (10 subtests) ===============
[23:57:12] ========== drm_test_framebuffer_check_src_coords ==========
[23:57:12] [PASSED] Success: source fits into fb
[23:57:12] [PASSED] Fail: overflowing fb with x-axis coordinate
[23:57:12] [PASSED] Fail: overflowing fb with y-axis coordinate
[23:57:12] [PASSED] Fail: overflowing fb with source width
[23:57:12] [PASSED] Fail: overflowing fb with source height
[23:57:12] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[23:57:12] [PASSED] drm_test_framebuffer_cleanup
[23:57:12] =============== drm_test_framebuffer_create ===============
[23:57:12] [PASSED] ABGR8888 normal sizes
[23:57:12] [PASSED] ABGR8888 max sizes
[23:57:12] [PASSED] ABGR8888 pitch greater than min required
[23:57:12] [PASSED] ABGR8888 pitch less than min required
[23:57:12] [PASSED] ABGR8888 Invalid width
[23:57:12] [PASSED] ABGR8888 Invalid buffer handle
[23:57:12] [PASSED] No pixel format
[23:57:12] [PASSED] ABGR8888 Width 0
[23:57:12] [PASSED] ABGR8888 Height 0
[23:57:12] [PASSED] ABGR8888 Out of bound height * pitch combination
[23:57:12] [PASSED] ABGR8888 Large buffer offset
[23:57:12] [PASSED] ABGR8888 Buffer offset for inexistent plane
[23:57:12] [PASSED] ABGR8888 Invalid flag
[23:57:12] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[23:57:12] [PASSED] ABGR8888 Valid buffer modifier
[23:57:12] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[23:57:12] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[23:57:12] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[23:57:12] [PASSED] NV12 Normal sizes
[23:57:12] [PASSED] NV12 Max sizes
[23:57:12] [PASSED] NV12 Invalid pitch
[23:57:12] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[23:57:12] [PASSED] NV12 different modifier per-plane
[23:57:12] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[23:57:12] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[23:57:12] [PASSED] NV12 Modifier for inexistent plane
[23:57:12] [PASSED] NV12 Handle for inexistent plane
[23:57:12] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[23:57:12] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[23:57:12] [PASSED] YVU420 Normal sizes
[23:57:12] [PASSED] YVU420 Max sizes
[23:57:12] [PASSED] YVU420 Invalid pitch
[23:57:12] [PASSED] YVU420 Different pitches
[23:57:12] [PASSED] YVU420 Different buffer offsets/pitches
[23:57:12] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[23:57:12] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[23:57:12] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[23:57:12] [PASSED] YVU420 Valid modifier
[23:57:12] [PASSED] YVU420 Different modifiers per plane
[23:57:12] [PASSED] YVU420 Modifier for inexistent plane
[23:57:12] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[23:57:12] [PASSED] X0L2 Normal sizes
[23:57:12] [PASSED] X0L2 Max sizes
[23:57:12] [PASSED] X0L2 Invalid pitch
[23:57:12] [PASSED] X0L2 Pitch greater than minimum required
[23:57:12] [PASSED] X0L2 Handle for inexistent plane
[23:57:12] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[23:57:12] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[23:57:12] [PASSED] X0L2 Valid modifier
[23:57:12] [PASSED] X0L2 Modifier for inexistent plane
[23:57:12] =========== [PASSED] drm_test_framebuffer_create ===========
[23:57:12] [PASSED] drm_test_framebuffer_free
[23:57:12] [PASSED] drm_test_framebuffer_init
[23:57:12] [PASSED] drm_test_framebuffer_init_bad_format
[23:57:12] [PASSED] drm_test_framebuffer_init_dev_mismatch
[23:57:12] [PASSED] drm_test_framebuffer_lookup
[23:57:12] [PASSED] drm_test_framebuffer_lookup_inexistent
[23:57:12] [PASSED] drm_test_framebuffer_modifiers_not_supported
[23:57:12] ================= [PASSED] drm_framebuffer =================
[23:57:12] ================ drm_gem_shmem (8 subtests) ================
[23:57:12] [PASSED] drm_gem_shmem_test_obj_create
[23:57:12] [PASSED] drm_gem_shmem_test_obj_create_private
[23:57:12] [PASSED] drm_gem_shmem_test_pin_pages
[23:57:12] [PASSED] drm_gem_shmem_test_vmap
[23:57:12] [PASSED] drm_gem_shmem_test_get_pages_sgt
[23:57:12] [PASSED] drm_gem_shmem_test_get_sg_table
[23:57:12] [PASSED] drm_gem_shmem_test_madvise
[23:57:12] [PASSED] drm_gem_shmem_test_purge
[23:57:12] ================== [PASSED] drm_gem_shmem ==================
[23:57:12] === drm_atomic_helper_connector_hdmi_check (22 subtests) ===
[23:57:12] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[23:57:12] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[23:57:12] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[23:57:12] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[23:57:12] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[23:57:12] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[23:57:12] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[23:57:12] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[23:57:12] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[23:57:12] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback
[23:57:12] [PASSED] drm_test_check_max_tmds_rate_format_fallback
[23:57:12] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[23:57:12] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[23:57:12] [PASSED] drm_test_check_output_bpc_dvi
[23:57:12] [PASSED] drm_test_check_output_bpc_format_vic_1
[23:57:12] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[23:57:12] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[23:57:12] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[23:57:12] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[23:57:12] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[23:57:12] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[23:57:12] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[23:57:12] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[23:57:12] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[23:57:12] [PASSED] drm_test_check_broadcast_rgb_value
[23:57:12] [PASSED] drm_test_check_bpc_8_value
[23:57:12] [PASSED] drm_test_check_bpc_10_value
[23:57:12] [PASSED] drm_test_check_bpc_12_value
[23:57:12] [PASSED] drm_test_check_format_value
[23:57:12] [PASSED] drm_test_check_tmds_char_value
[23:57:12] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[23:57:12] ================= drm_managed (2 subtests) =================
[23:57:12] [PASSED] drm_test_managed_release_action
[23:57:12] [PASSED] drm_test_managed_run_action
[23:57:12] =================== [PASSED] drm_managed ===================
[23:57:12] =================== drm_mm (6 subtests) ====================
[23:57:12] [PASSED] drm_test_mm_init
[23:57:12] [PASSED] drm_test_mm_debug
[23:57:12] [PASSED] drm_test_mm_align32
[23:57:12] [PASSED] drm_test_mm_align64
[23:57:12] [PASSED] drm_test_mm_lowest
[23:57:12] [PASSED] drm_test_mm_highest
[23:57:12] ===================== [PASSED] drm_mm ======================
[23:57:12] ============= drm_modes_analog_tv (5 subtests) =============
[23:57:12] [PASSED] drm_test_modes_analog_tv_mono_576i
[23:57:12] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[23:57:12] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[23:57:12] [PASSED] drm_test_modes_analog_tv_pal_576i
[23:57:12] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[23:57:12] =============== [PASSED] drm_modes_analog_tv ===============
[23:57:12] ============== drm_plane_helper (2 subtests) ===============
[23:57:12] =============== drm_test_check_plane_state ================
[23:57:12] [PASSED] clipping_simple
[23:57:12] [PASSED] clipping_rotate_reflect
[23:57:12] [PASSED] positioning_simple
[23:57:12] [PASSED] upscaling
[23:57:12] [PASSED] downscaling
[23:57:12] [PASSED] rounding1
[23:57:12] [PASSED] rounding2
[23:57:12] [PASSED] rounding3
[23:57:12] [PASSED] rounding4
[23:57:12] =========== [PASSED] drm_test_check_plane_state ============
[23:57:12] =========== drm_test_check_invalid_plane_state ============
[23:57:12] [PASSED] positioning_invalid
[23:57:12] [PASSED] upscaling_invalid
[23:57:12] [PASSED] downscaling_invalid
[23:57:12] ======= [PASSED] drm_test_check_invalid_plane_state ========
[23:57:12] ================ [PASSED] drm_plane_helper =================
[23:57:12] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[23:57:12] ====== drm_test_connector_helper_tv_get_modes_check =======
[23:57:12] [PASSED] None
[23:57:12] [PASSED] PAL
[23:57:12] [PASSED] NTSC
[23:57:12] [PASSED] Both, NTSC Default
[23:57:12] [PASSED] Both, PAL Default
[23:57:12] [PASSED] Both, NTSC Default, with PAL on command-line
[23:57:12] [PASSED] Both, PAL Default, with NTSC on command-line
[23:57:12] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[23:57:12] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[23:57:12] ================== drm_rect (9 subtests) ===================
[23:57:12] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[23:57:12] [PASSED] drm_test_rect_clip_scaled_not_clipped
[23:57:12] [PASSED] drm_test_rect_clip_scaled_clipped
[23:57:12] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[23:57:12] ================= drm_test_rect_intersect =================
[23:57:12] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[23:57:12] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[23:57:12] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[23:57:12] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[23:57:12] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[23:57:12] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[23:57:12] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[23:57:12] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[23:57:12] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[23:57:12] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[23:57:12] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[23:57:12] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[23:57:12] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[23:57:12] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[23:57:12] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[23:57:12] ============= [PASSED] drm_test_rect_intersect =============
[23:57:12] ================ drm_test_rect_calc_hscale ================
[23:57:12] [PASSED] normal use
[23:57:12] [PASSED] out of max range
[23:57:12] [PASSED] out of min range
[23:57:12] [PASSED] zero dst
[23:57:12] [PASSED] negative src
[23:57:12] [PASSED] negative dst
[23:57:12] ============ [PASSED] drm_test_rect_calc_hscale ============
[23:57:12] ================ drm_test_rect_calc_vscale ================
[23:57:12] [PASSED] normal use
[23:57:12] [PASSED] out of max range
[23:57:12] [PASSED] out of min range
[23:57:12] [PASSED] zero dst
[23:57:12] [PASSED] negative src
[23:57:12] [PASSED] negative dst
[23:57:12] ============ [PASSED] drm_test_rect_calc_vscale ============
[23:57:12] ================== drm_test_rect_rotate ===================
[23:57:12] [PASSED] reflect-x
[23:57:12] [PASSED] reflect-y
[23:57:12] [PASSED] rotate-0
[23:57:12] [PASSED] rotate-90
[23:57:12] [PASSED] rotate-180
[23:57:12] [PASSED] rotate-270
[23:57:12] ============== [PASSED] drm_test_rect_rotate ===============
[23:57:12] ================ drm_test_rect_rotate_inv =================
[23:57:12] [PASSED] reflect-x
[23:57:12] [PASSED] reflect-y
[23:57:12] [PASSED] rotate-0
[23:57:12] [PASSED] rotate-90
[23:57:12] [PASSED] rotate-180
[23:57:12] [PASSED] rotate-270
[23:57:12] ============ [PASSED] drm_test_rect_rotate_inv =============
[23:57:12] ==================== [PASSED] drm_rect =====================
[23:57:12] ============================================================
[23:57:12] Testing complete. Ran 526 tests: passed: 526
[23:57:12] Elapsed time: 24.728s total, 1.656s configuring, 22.900s building, 0.171s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[23:57:12] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[23:57:14] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
[23:57:22] Starting KUnit Kernel (1/1)...
[23:57:22] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[23:57:22] ================= ttm_device (5 subtests) ==================
[23:57:22] [PASSED] ttm_device_init_basic
[23:57:22] [PASSED] ttm_device_init_multiple
[23:57:22] [PASSED] ttm_device_fini_basic
[23:57:22] [PASSED] ttm_device_init_no_vma_man
[23:57:22] ================== ttm_device_init_pools ==================
[23:57:22] [PASSED] No DMA allocations, no DMA32 required
[23:57:22] [PASSED] DMA allocations, DMA32 required
[23:57:22] [PASSED] No DMA allocations, DMA32 required
[23:57:22] [PASSED] DMA allocations, no DMA32 required
[23:57:22] ============== [PASSED] ttm_device_init_pools ==============
[23:57:22] =================== [PASSED] ttm_device ====================
[23:57:22] ================== ttm_pool (8 subtests) ===================
[23:57:22] ================== ttm_pool_alloc_basic ===================
[23:57:22] [PASSED] One page
[23:57:22] [PASSED] More than one page
[23:57:22] [PASSED] Above the allocation limit
[23:57:22] [PASSED] One page, with coherent DMA mappings enabled
[23:57:22] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[23:57:22] ============== [PASSED] ttm_pool_alloc_basic ===============
[23:57:22] ============== ttm_pool_alloc_basic_dma_addr ==============
[23:57:22] [PASSED] One page
[23:57:22] [PASSED] More than one page
[23:57:22] [PASSED] Above the allocation limit
[23:57:22] [PASSED] One page, with coherent DMA mappings enabled
[23:57:22] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[23:57:22] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[23:57:22] [PASSED] ttm_pool_alloc_order_caching_match
[23:57:22] [PASSED] ttm_pool_alloc_caching_mismatch
[23:57:22] [PASSED] ttm_pool_alloc_order_mismatch
[23:57:22] [PASSED] ttm_pool_free_dma_alloc
[23:57:22] [PASSED] ttm_pool_free_no_dma_alloc
[23:57:22] [PASSED] ttm_pool_fini_basic
[23:57:22] ==================== [PASSED] ttm_pool =====================
[23:57:22] ================ ttm_resource (8 subtests) =================
[23:57:22] ================= ttm_resource_init_basic =================
[23:57:22] [PASSED] Init resource in TTM_PL_SYSTEM
[23:57:22] [PASSED] Init resource in TTM_PL_VRAM
[23:57:22] [PASSED] Init resource in a private placement
[23:57:22] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[23:57:22] ============= [PASSED] ttm_resource_init_basic =============
[23:57:22] [PASSED] ttm_resource_init_pinned
[23:57:22] [PASSED] ttm_resource_fini_basic
[23:57:22] [PASSED] ttm_resource_manager_init_basic
[23:57:22] [PASSED] ttm_resource_manager_usage_basic
[23:57:22] [PASSED] ttm_resource_manager_set_used_basic
[23:57:22] [PASSED] ttm_sys_man_alloc_basic
[23:57:22] [PASSED] ttm_sys_man_free_basic
[23:57:22] ================== [PASSED] ttm_resource ===================
[23:57:22] =================== ttm_tt (15 subtests) ===================
[23:57:22] ==================== ttm_tt_init_basic ====================
[23:57:22] [PASSED] Page-aligned size
[23:57:22] [PASSED] Extra pages requested
[23:57:22] ================ [PASSED] ttm_tt_init_basic ================
[23:57:22] [PASSED] ttm_tt_init_misaligned
[23:57:22] [PASSED] ttm_tt_fini_basic
[23:57:22] [PASSED] ttm_tt_fini_sg
[23:57:22] [PASSED] ttm_tt_fini_shmem
[23:57:22] [PASSED] ttm_tt_create_basic
[23:57:22] [PASSED] ttm_tt_create_invalid_bo_type
[23:57:22] [PASSED] ttm_tt_create_ttm_exists
[23:57:22] [PASSED] ttm_tt_create_failed
[23:57:22] [PASSED] ttm_tt_destroy_basic
[23:57:22] [PASSED] ttm_tt_populate_null_ttm
[23:57:22] [PASSED] ttm_tt_populate_populated_ttm
[23:57:22] [PASSED] ttm_tt_unpopulate_basic
[23:57:22] [PASSED] ttm_tt_unpopulate_empty_ttm
[23:57:22] [PASSED] ttm_tt_swapin_basic
[23:57:22] ===================== [PASSED] ttm_tt ======================
[23:57:22] =================== ttm_bo (14 subtests) ===================
[23:57:22] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[23:57:22] [PASSED] Cannot be interrupted and sleeps
[23:57:22] [PASSED] Cannot be interrupted, locks straight away
[23:57:22] [PASSED] Can be interrupted, sleeps
[23:57:22] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[23:57:22] [PASSED] ttm_bo_reserve_locked_no_sleep
[23:57:22] [PASSED] ttm_bo_reserve_no_wait_ticket
[23:57:22] [PASSED] ttm_bo_reserve_double_resv
[23:57:22] [PASSED] ttm_bo_reserve_interrupted
[23:57:22] [PASSED] ttm_bo_reserve_deadlock
[23:57:22] [PASSED] ttm_bo_unreserve_basic
[23:57:22] [PASSED] ttm_bo_unreserve_pinned
[23:57:22] [PASSED] ttm_bo_unreserve_bulk
[23:57:22] [PASSED] ttm_bo_put_basic
[23:57:22] [PASSED] ttm_bo_put_shared_resv
[23:57:22] [PASSED] ttm_bo_pin_basic
[23:57:22] [PASSED] ttm_bo_pin_unpin_resource
[23:57:22] [PASSED] ttm_bo_multiple_pin_one_unpin
[23:57:22] ===================== [PASSED] ttm_bo ======================
[23:57:22] ============== ttm_bo_validate (22 subtests) ===============
[23:57:22] ============== ttm_bo_init_reserved_sys_man ===============
[23:57:22] [PASSED] Buffer object for userspace
[23:57:22] [PASSED] Kernel buffer object
[23:57:22] [PASSED] Shared buffer object
[23:57:22] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[23:57:22] ============== ttm_bo_init_reserved_mock_man ==============
[23:57:22] [PASSED] Buffer object for userspace
[23:57:22] [PASSED] Kernel buffer object
[23:57:22] [PASSED] Shared buffer object
[23:57:22] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[23:57:22] [PASSED] ttm_bo_init_reserved_resv
[23:57:22] ================== ttm_bo_validate_basic ==================
[23:57:22] [PASSED] Buffer object for userspace
[23:57:22] [PASSED] Kernel buffer object
[23:57:22] [PASSED] Shared buffer object
[23:57:22] ============== [PASSED] ttm_bo_validate_basic ==============
[23:57:22] [PASSED] ttm_bo_validate_invalid_placement
[23:57:22] ============= ttm_bo_validate_same_placement ==============
[23:57:22] [PASSED] System manager
[23:57:22] [PASSED] VRAM manager
[23:57:22] ========= [PASSED] ttm_bo_validate_same_placement ==========
[23:57:22] [PASSED] ttm_bo_validate_failed_alloc
[23:57:22] [PASSED] ttm_bo_validate_pinned
[23:57:22] [PASSED] ttm_bo_validate_busy_placement
[23:57:22] ================ ttm_bo_validate_multihop =================
[23:57:22] [PASSED] Buffer object for userspace
[23:57:22] [PASSED] Kernel buffer object
[23:57:22] [PASSED] Shared buffer object
[23:57:22] ============ [PASSED] ttm_bo_validate_multihop =============
[23:57:22] ========== ttm_bo_validate_no_placement_signaled ==========
[23:57:22] [PASSED] Buffer object in system domain, no page vector
[23:57:22] [PASSED] Buffer object in system domain with an existing page vector
[23:57:22] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[23:57:22] ======== ttm_bo_validate_no_placement_not_signaled ========
[23:57:22] [PASSED] Buffer object for userspace
[23:57:22] [PASSED] Kernel buffer object
[23:57:22] [PASSED] Shared buffer object
[23:57:22] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[23:57:22] [PASSED] ttm_bo_validate_move_fence_signaled
[23:57:22] ========= ttm_bo_validate_move_fence_not_signaled =========
[23:57:22] [PASSED] Waits for GPU
[23:57:22] [PASSED] Tries to lock straight away
[23:57:22] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[23:57:22] [PASSED] ttm_bo_validate_swapout
[23:57:22] [PASSED] ttm_bo_validate_happy_evict
[23:57:22] [PASSED] ttm_bo_validate_all_pinned_evict
[23:57:22] [PASSED] ttm_bo_validate_allowed_only_evict
[23:57:22] [PASSED] ttm_bo_validate_deleted_evict
[23:57:22] [PASSED] ttm_bo_validate_busy_domain_evict
[23:57:22] [PASSED] ttm_bo_validate_evict_gutting
[23:57:22] [PASSED] ttm_bo_validate_recrusive_evict
stty: 'standard input': Inappropriate ioctl for device
[23:57:22] ================= [PASSED] ttm_bo_validate =================
[23:57:22] ============================================================
[23:57:22] Testing complete. Ran 102 tests: passed: 102
[23:57:22] Elapsed time: 10.014s total, 1.613s configuring, 7.734s building, 0.560s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply [flat|nested] 52+ messages in thread
* ✓ CI.Build: success for UMD direct submission in Xe
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (31 preceding siblings ...)
2024-11-18 23:57 ` ✓ CI.KUnit: success " Patchwork
@ 2024-11-19 0:15 ` Patchwork
2024-11-19 0:17 ` ✗ CI.Hooks: failure " Patchwork
` (3 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Patchwork @ 2024-11-19 0:15 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: UMD direct submission in Xe
URL : https://patchwork.freedesktop.org/series/141524/
State : success
== Summary ==
lib/modules/6.12.0-xe/kernel/arch/x86/events/rapl.ko
lib/modules/6.12.0-xe/kernel/arch/x86/kvm/
lib/modules/6.12.0-xe/kernel/arch/x86/kvm/kvm.ko
lib/modules/6.12.0-xe/kernel/arch/x86/kvm/kvm-intel.ko
lib/modules/6.12.0-xe/kernel/arch/x86/kvm/kvm-amd.ko
lib/modules/6.12.0-xe/kernel/kernel/
lib/modules/6.12.0-xe/kernel/kernel/kheaders.ko
lib/modules/6.12.0-xe/kernel/crypto/
lib/modules/6.12.0-xe/kernel/crypto/ecrdsa_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/xcbc.ko
lib/modules/6.12.0-xe/kernel/crypto/serpent_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/aria_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/crypto_simd.ko
lib/modules/6.12.0-xe/kernel/crypto/adiantum.ko
lib/modules/6.12.0-xe/kernel/crypto/tcrypt.ko
lib/modules/6.12.0-xe/kernel/crypto/crypto_engine.ko
lib/modules/6.12.0-xe/kernel/crypto/zstd.ko
lib/modules/6.12.0-xe/kernel/crypto/asymmetric_keys/
lib/modules/6.12.0-xe/kernel/crypto/asymmetric_keys/pkcs7_test_key.ko
lib/modules/6.12.0-xe/kernel/crypto/asymmetric_keys/pkcs8_key_parser.ko
lib/modules/6.12.0-xe/kernel/crypto/des_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/xctr.ko
lib/modules/6.12.0-xe/kernel/crypto/authenc.ko
lib/modules/6.12.0-xe/kernel/crypto/sm4_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/keywrap.ko
lib/modules/6.12.0-xe/kernel/crypto/camellia_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/sm3.ko
lib/modules/6.12.0-xe/kernel/crypto/pcrypt.ko
lib/modules/6.12.0-xe/kernel/crypto/aegis128.ko
lib/modules/6.12.0-xe/kernel/crypto/af_alg.ko
lib/modules/6.12.0-xe/kernel/crypto/algif_aead.ko
lib/modules/6.12.0-xe/kernel/crypto/cmac.ko
lib/modules/6.12.0-xe/kernel/crypto/sm3_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/aes_ti.ko
lib/modules/6.12.0-xe/kernel/crypto/chacha_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/poly1305_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/nhpoly1305.ko
lib/modules/6.12.0-xe/kernel/crypto/crc32_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/essiv.ko
lib/modules/6.12.0-xe/kernel/crypto/ccm.ko
lib/modules/6.12.0-xe/kernel/crypto/wp512.ko
lib/modules/6.12.0-xe/kernel/crypto/streebog_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/authencesn.ko
lib/modules/6.12.0-xe/kernel/crypto/echainiv.ko
lib/modules/6.12.0-xe/kernel/crypto/lrw.ko
lib/modules/6.12.0-xe/kernel/crypto/cryptd.ko
lib/modules/6.12.0-xe/kernel/crypto/crypto_user.ko
lib/modules/6.12.0-xe/kernel/crypto/algif_hash.ko
lib/modules/6.12.0-xe/kernel/crypto/vmac.ko
lib/modules/6.12.0-xe/kernel/crypto/polyval-generic.ko
lib/modules/6.12.0-xe/kernel/crypto/hctr2.ko
lib/modules/6.12.0-xe/kernel/crypto/842.ko
lib/modules/6.12.0-xe/kernel/crypto/pcbc.ko
lib/modules/6.12.0-xe/kernel/crypto/ansi_cprng.ko
lib/modules/6.12.0-xe/kernel/crypto/cast6_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/twofish_common.ko
lib/modules/6.12.0-xe/kernel/crypto/twofish_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/lz4hc.ko
lib/modules/6.12.0-xe/kernel/crypto/blowfish_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/md4.ko
lib/modules/6.12.0-xe/kernel/crypto/chacha20poly1305.ko
lib/modules/6.12.0-xe/kernel/crypto/curve25519-generic.ko
lib/modules/6.12.0-xe/kernel/crypto/lz4.ko
lib/modules/6.12.0-xe/kernel/crypto/rmd160.ko
lib/modules/6.12.0-xe/kernel/crypto/algif_skcipher.ko
lib/modules/6.12.0-xe/kernel/crypto/cast5_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/fcrypt.ko
lib/modules/6.12.0-xe/kernel/crypto/ecdsa_generic.ko
lib/modules/6.12.0-xe/kernel/crypto/sm4.ko
lib/modules/6.12.0-xe/kernel/crypto/cast_common.ko
lib/modules/6.12.0-xe/kernel/crypto/blowfish_common.ko
lib/modules/6.12.0-xe/kernel/crypto/michael_mic.ko
lib/modules/6.12.0-xe/kernel/crypto/async_tx/
lib/modules/6.12.0-xe/kernel/crypto/async_tx/async_xor.ko
lib/modules/6.12.0-xe/kernel/crypto/async_tx/async_tx.ko
lib/modules/6.12.0-xe/kernel/crypto/async_tx/async_memcpy.ko
lib/modules/6.12.0-xe/kernel/crypto/async_tx/async_pq.ko
lib/modules/6.12.0-xe/kernel/crypto/async_tx/async_raid6_recov.ko
lib/modules/6.12.0-xe/kernel/crypto/algif_rng.ko
lib/modules/6.12.0-xe/kernel/block/
lib/modules/6.12.0-xe/kernel/block/bfq.ko
lib/modules/6.12.0-xe/kernel/block/kyber-iosched.ko
lib/modules/6.12.0-xe/build
lib/modules/6.12.0-xe/modules.alias.bin
lib/modules/6.12.0-xe/modules.builtin
lib/modules/6.12.0-xe/modules.softdep
lib/modules/6.12.0-xe/modules.alias
lib/modules/6.12.0-xe/modules.order
lib/modules/6.12.0-xe/modules.symbols
lib/modules/6.12.0-xe/modules.dep.bin
+ mv kernel-nodebug.tar.gz ..
+ cd ..
+ rm -rf archive
++ date +%s
+ echo -e '\e[0Ksection_end:1731975315:package_x86_64_nodebug\r\e[0K'
+ sync
section_end:1731975315:package_x86_64_nodebug
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* ✗ CI.Hooks: failure for UMD direct submission in Xe
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (32 preceding siblings ...)
2024-11-19 0:15 ` ✓ CI.Build: " Patchwork
@ 2024-11-19 0:17 ` Patchwork
2024-11-19 0:19 ` ✓ CI.checksparse: success " Patchwork
` (2 subsequent siblings)
36 siblings, 0 replies; 52+ messages in thread
From: Patchwork @ 2024-11-19 0:17 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: UMD direct submission in Xe
URL : https://patchwork.freedesktop.org/series/141524/
State : failure
== Summary ==
run-parts: executing /workspace/ci/hooks/00-showenv
+ export
+ grep -Ei '(^|\W)CI_'
declare -x CI_KERNEL_BUILD_DIR="/workspace/kernel/build64-default"
declare -x CI_KERNEL_SRC_DIR="/workspace/kernel"
declare -x CI_TOOLS_SRC_DIR="/workspace/ci"
declare -x CI_WORKSPACE_DIR="/workspace"
run-parts: executing /workspace/ci/hooks/10-build-W1
+ SRC_DIR=/workspace/kernel
+ RESTORE_DISPLAY_CONFIG=0
+ '[' -n /workspace/kernel/build64-default ']'
+ BUILD_DIR=/workspace/kernel/build64-default
+ cd /workspace/kernel
++ nproc
+ make -j48 O=/workspace/kernel/build64-default modules_prepare
make[1]: Entering directory '/workspace/kernel/build64-default'
GEN Makefile
UPD include/config/kernel.release
mkdir -p /workspace/kernel/build64-default/tools/objtool && make O=/workspace/kernel/build64-default subdir=tools/objtool --no-print-directory -C objtool
UPD include/generated/utsrelease.h
CALL ../scripts/checksyscalls.sh
INSTALL libsubcmd_headers
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/exec-cmd.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/help.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/pager.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/parse-options.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/run-command.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/sigchain.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/subcmd-config.o
LD /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd-in.o
AR /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd.a
CC /workspace/kernel/build64-default/tools/objtool/weak.o
CC /workspace/kernel/build64-default/tools/objtool/check.o
CC /workspace/kernel/build64-default/tools/objtool/special.o
CC /workspace/kernel/build64-default/tools/objtool/builtin-check.o
CC /workspace/kernel/build64-default/tools/objtool/elf.o
CC /workspace/kernel/build64-default/tools/objtool/objtool.o
CC /workspace/kernel/build64-default/tools/objtool/orc_gen.o
CC /workspace/kernel/build64-default/tools/objtool/orc_dump.o
CC /workspace/kernel/build64-default/tools/objtool/arch/x86/special.o
CC /workspace/kernel/build64-default/tools/objtool/libstring.o
CC /workspace/kernel/build64-default/tools/objtool/libctype.o
CC /workspace/kernel/build64-default/tools/objtool/str_error_r.o
CC /workspace/kernel/build64-default/tools/objtool/arch/x86/decode.o
CC /workspace/kernel/build64-default/tools/objtool/librbtree.o
CC /workspace/kernel/build64-default/tools/objtool/arch/x86/orc.o
LD /workspace/kernel/build64-default/tools/objtool/arch/x86/objtool-in.o
LD /workspace/kernel/build64-default/tools/objtool/objtool-in.o
LINK /workspace/kernel/build64-default/tools/objtool/objtool
make[1]: Leaving directory '/workspace/kernel/build64-default'
++ nproc
+ make -j48 O=/workspace/kernel/build64-default W=1 drivers/gpu/drm/xe
make[1]: Entering directory '/workspace/kernel/build64-default'
make[2]: Nothing to be done for 'drivers/gpu/drm/xe'.
make[1]: Leaving directory '/workspace/kernel/build64-default'
run-parts: executing /workspace/ci/hooks/11-build-32b
+++ realpath /workspace/ci/hooks/11-build-32b
++ dirname /workspace/ci/hooks/11-build-32b
+ THIS_SCRIPT_DIR=/workspace/ci/hooks
+ SRC_DIR=/workspace/kernel
+ TOOLS_SRC_DIR=/workspace/ci
+ '[' -n /workspace/kernel/build64-default ']'
+ BUILD_DIR=/workspace/kernel/build64-default
+ BUILD_DIR=/workspace/kernel/build64-default/build32
+ cd /workspace/kernel
+ mkdir -p /workspace/kernel/build64-default/build32
++ nproc
+ make -j48 ARCH=i386 O=/workspace/kernel/build64-default/build32 defconfig
make[1]: Entering directory '/workspace/kernel/build64-default/build32'
GEN Makefile
HOSTCC scripts/basic/fixdep
HOSTCC scripts/kconfig/conf.o
HOSTCC scripts/kconfig/confdata.o
HOSTCC scripts/kconfig/expr.o
LEX scripts/kconfig/lexer.lex.c
YACC scripts/kconfig/parser.tab.[ch]
HOSTCC scripts/kconfig/menu.o
HOSTCC scripts/kconfig/preprocess.o
HOSTCC scripts/kconfig/symbol.o
HOSTCC scripts/kconfig/util.o
HOSTCC scripts/kconfig/lexer.lex.o
HOSTCC scripts/kconfig/parser.tab.o
HOSTLD scripts/kconfig/conf
*** Default configuration is based on 'i386_defconfig'
#
# configuration written to .config
#
make[1]: Leaving directory '/workspace/kernel/build64-default/build32'
+ cd /workspace/kernel/build64-default/build32
+ /workspace/kernel/scripts/kconfig/merge_config.sh .config /workspace/ci/kernel/fragments/10-xe.fragment
Using .config as base
Merging /workspace/ci/kernel/fragments/10-xe.fragment
Value of CONFIG_DRM_XE is redefined by fragment /workspace/ci/kernel/fragments/10-xe.fragment:
Previous value: # CONFIG_DRM_XE is not set
New value: CONFIG_DRM_XE=m
GEN Makefile
WARNING: unmet direct dependencies detected for FB_IOMEM_HELPERS
Depends on [n]: HAS_IOMEM [=y] && FB_CORE [=n]
Selected by [m]:
- DRM_XE_DISPLAY [=y] && HAS_IOMEM [=y] && DRM [=y] && DRM_XE [=m] && DRM_XE [=m]=m [=m]
#
# configuration written to .config
#
Value requested for CONFIG_HAVE_UID16 not in final .config
Requested value: CONFIG_HAVE_UID16=y
Actual value:
Value requested for CONFIG_UID16 not in final .config
Requested value: CONFIG_UID16=y
Actual value:
Value requested for CONFIG_X86_32 not in final .config
Requested value: CONFIG_X86_32=y
Actual value:
Value requested for CONFIG_OUTPUT_FORMAT not in final .config
Requested value: CONFIG_OUTPUT_FORMAT="elf32-i386"
Actual value: CONFIG_OUTPUT_FORMAT="elf64-x86-64"
Value requested for CONFIG_ARCH_MMAP_RND_BITS_MIN not in final .config
Requested value: CONFIG_ARCH_MMAP_RND_BITS_MIN=8
Actual value: CONFIG_ARCH_MMAP_RND_BITS_MIN=28
Value requested for CONFIG_ARCH_MMAP_RND_BITS_MAX not in final .config
Requested value: CONFIG_ARCH_MMAP_RND_BITS_MAX=16
Actual value: CONFIG_ARCH_MMAP_RND_BITS_MAX=32
Value requested for CONFIG_PGTABLE_LEVELS not in final .config
Requested value: CONFIG_PGTABLE_LEVELS=2
Actual value: CONFIG_PGTABLE_LEVELS=5
Value requested for CONFIG_X86_BIGSMP not in final .config
Requested value: # CONFIG_X86_BIGSMP is not set
Actual value:
Value requested for CONFIG_X86_INTEL_QUARK not in final .config
Requested value: # CONFIG_X86_INTEL_QUARK is not set
Actual value:
Value requested for CONFIG_X86_RDC321X not in final .config
Requested value: # CONFIG_X86_RDC321X is not set
Actual value:
Value requested for CONFIG_X86_32_NON_STANDARD not in final .config
Requested value: # CONFIG_X86_32_NON_STANDARD is not set
Actual value:
Value requested for CONFIG_X86_32_IRIS not in final .config
Requested value: # CONFIG_X86_32_IRIS is not set
Actual value:
Value requested for CONFIG_M486SX not in final .config
Requested value: # CONFIG_M486SX is not set
Actual value:
Value requested for CONFIG_M486 not in final .config
Requested value: # CONFIG_M486 is not set
Actual value:
Value requested for CONFIG_M586 not in final .config
Requested value: # CONFIG_M586 is not set
Actual value:
Value requested for CONFIG_M586TSC not in final .config
Requested value: # CONFIG_M586TSC is not set
Actual value:
Value requested for CONFIG_M586MMX not in final .config
Requested value: # CONFIG_M586MMX is not set
Actual value:
Value requested for CONFIG_M686 not in final .config
Requested value: CONFIG_M686=y
Actual value:
Value requested for CONFIG_MPENTIUMII not in final .config
Requested value: # CONFIG_MPENTIUMII is not set
Actual value:
Value requested for CONFIG_MPENTIUMIII not in final .config
Requested value: # CONFIG_MPENTIUMIII is not set
Actual value:
Value requested for CONFIG_MPENTIUMM not in final .config
Requested value: # CONFIG_MPENTIUMM is not set
Actual value:
Value requested for CONFIG_MPENTIUM4 not in final .config
Requested value: # CONFIG_MPENTIUM4 is not set
Actual value:
Value requested for CONFIG_MK6 not in final .config
Requested value: # CONFIG_MK6 is not set
Actual value:
Value requested for CONFIG_MK7 not in final .config
Requested value: # CONFIG_MK7 is not set
Actual value:
Value requested for CONFIG_MCRUSOE not in final .config
Requested value: # CONFIG_MCRUSOE is not set
Actual value:
Value requested for CONFIG_MEFFICEON not in final .config
Requested value: # CONFIG_MEFFICEON is not set
Actual value:
Value requested for CONFIG_MWINCHIPC6 not in final .config
Requested value: # CONFIG_MWINCHIPC6 is not set
Actual value:
Value requested for CONFIG_MWINCHIP3D not in final .config
Requested value: # CONFIG_MWINCHIP3D is not set
Actual value:
Value requested for CONFIG_MELAN not in final .config
Requested value: # CONFIG_MELAN is not set
Actual value:
Value requested for CONFIG_MGEODEGX1 not in final .config
Requested value: # CONFIG_MGEODEGX1 is not set
Actual value:
Value requested for CONFIG_MGEODE_LX not in final .config
Requested value: # CONFIG_MGEODE_LX is not set
Actual value:
Value requested for CONFIG_MCYRIXIII not in final .config
Requested value: # CONFIG_MCYRIXIII is not set
Actual value:
Value requested for CONFIG_MVIAC3_2 not in final .config
Requested value: # CONFIG_MVIAC3_2 is not set
Actual value:
Value requested for CONFIG_MVIAC7 not in final .config
Requested value: # CONFIG_MVIAC7 is not set
Actual value:
Value requested for CONFIG_X86_GENERIC not in final .config
Requested value: # CONFIG_X86_GENERIC is not set
Actual value:
Value requested for CONFIG_X86_INTERNODE_CACHE_SHIFT not in final .config
Requested value: CONFIG_X86_INTERNODE_CACHE_SHIFT=5
Actual value: CONFIG_X86_INTERNODE_CACHE_SHIFT=6
Value requested for CONFIG_X86_L1_CACHE_SHIFT not in final .config
Requested value: CONFIG_X86_L1_CACHE_SHIFT=5
Actual value: CONFIG_X86_L1_CACHE_SHIFT=6
Value requested for CONFIG_X86_USE_PPRO_CHECKSUM not in final .config
Requested value: CONFIG_X86_USE_PPRO_CHECKSUM=y
Actual value:
Value requested for CONFIG_X86_MINIMUM_CPU_FAMILY not in final .config
Requested value: CONFIG_X86_MINIMUM_CPU_FAMILY=6
Actual value: CONFIG_X86_MINIMUM_CPU_FAMILY=64
Value requested for CONFIG_CPU_SUP_TRANSMETA_32 not in final .config
Requested value: CONFIG_CPU_SUP_TRANSMETA_32=y
Actual value:
Value requested for CONFIG_CPU_SUP_VORTEX_32 not in final .config
Requested value: CONFIG_CPU_SUP_VORTEX_32=y
Actual value:
Value requested for CONFIG_HPET_TIMER not in final .config
Requested value: # CONFIG_HPET_TIMER is not set
Actual value: CONFIG_HPET_TIMER=y
Value requested for CONFIG_NR_CPUS_RANGE_END not in final .config
Requested value: CONFIG_NR_CPUS_RANGE_END=8
Actual value: CONFIG_NR_CPUS_RANGE_END=512
Value requested for CONFIG_NR_CPUS_DEFAULT not in final .config
Requested value: CONFIG_NR_CPUS_DEFAULT=8
Actual value: CONFIG_NR_CPUS_DEFAULT=64
Value requested for CONFIG_X86_ANCIENT_MCE not in final .config
Requested value: # CONFIG_X86_ANCIENT_MCE is not set
Actual value:
Value requested for CONFIG_X86_LEGACY_VM86 not in final .config
Requested value: # CONFIG_X86_LEGACY_VM86 is not set
Actual value:
Value requested for CONFIG_X86_ESPFIX32 not in final .config
Requested value: CONFIG_X86_ESPFIX32=y
Actual value:
Value requested for CONFIG_TOSHIBA not in final .config
Requested value: # CONFIG_TOSHIBA is not set
Actual value:
Value requested for CONFIG_X86_REBOOTFIXUPS not in final .config
Requested value: # CONFIG_X86_REBOOTFIXUPS is not set
Actual value:
Value requested for CONFIG_MICROCODE_INITRD32 not in final .config
Requested value: CONFIG_MICROCODE_INITRD32=y
Actual value:
Value requested for CONFIG_NOHIGHMEM not in final .config
Requested value: # CONFIG_NOHIGHMEM is not set
Actual value:
Value requested for CONFIG_HIGHMEM4G not in final .config
Requested value: CONFIG_HIGHMEM4G=y
Actual value:
Value requested for CONFIG_HIGHMEM64G not in final .config
Requested value: # CONFIG_HIGHMEM64G is not set
Actual value:
Value requested for CONFIG_VMSPLIT_3G not in final .config
Requested value: CONFIG_VMSPLIT_3G=y
Actual value:
Value requested for CONFIG_VMSPLIT_3G_OPT not in final .config
Requested value: # CONFIG_VMSPLIT_3G_OPT is not set
Actual value:
Value requested for CONFIG_VMSPLIT_2G not in final .config
Requested value: # CONFIG_VMSPLIT_2G is not set
Actual value:
Value requested for CONFIG_VMSPLIT_2G_OPT not in final .config
Requested value: # CONFIG_VMSPLIT_2G_OPT is not set
Actual value:
Value requested for CONFIG_VMSPLIT_1G not in final .config
Requested value: # CONFIG_VMSPLIT_1G is not set
Actual value:
Value requested for CONFIG_PAGE_OFFSET not in final .config
Requested value: CONFIG_PAGE_OFFSET=0xC0000000
Actual value:
Value requested for CONFIG_HIGHMEM not in final .config
Requested value: CONFIG_HIGHMEM=y
Actual value:
Value requested for CONFIG_X86_PAE not in final .config
Requested value: # CONFIG_X86_PAE is not set
Actual value:
Value requested for CONFIG_ARCH_FLATMEM_ENABLE not in final .config
Requested value: CONFIG_ARCH_FLATMEM_ENABLE=y
Actual value:
Value requested for CONFIG_ARCH_SELECT_MEMORY_MODEL not in final .config
Requested value: CONFIG_ARCH_SELECT_MEMORY_MODEL=y
Actual value:
Value requested for CONFIG_ILLEGAL_POINTER_VALUE not in final .config
Requested value: CONFIG_ILLEGAL_POINTER_VALUE=0
Actual value: CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
Value requested for CONFIG_HIGHPTE not in final .config
Requested value: # CONFIG_HIGHPTE is not set
Actual value:
Value requested for CONFIG_COMPAT_VDSO not in final .config
Requested value: # CONFIG_COMPAT_VDSO is not set
Actual value:
Value requested for CONFIG_FUNCTION_PADDING_CFI not in final .config
Requested value: CONFIG_FUNCTION_PADDING_CFI=0
Actual value: CONFIG_FUNCTION_PADDING_CFI=11
Value requested for CONFIG_FUNCTION_PADDING_BYTES not in final .config
Requested value: CONFIG_FUNCTION_PADDING_BYTES=4
Actual value: CONFIG_FUNCTION_PADDING_BYTES=16
Value requested for CONFIG_APM not in final .config
Requested value: # CONFIG_APM is not set
Actual value:
Value requested for CONFIG_X86_POWERNOW_K6 not in final .config
Requested value: # CONFIG_X86_POWERNOW_K6 is not set
Actual value:
Value requested for CONFIG_X86_POWERNOW_K7 not in final .config
Requested value: # CONFIG_X86_POWERNOW_K7 is not set
Actual value:
Value requested for CONFIG_X86_GX_SUSPMOD not in final .config
Requested value: # CONFIG_X86_GX_SUSPMOD is not set
Actual value:
Value requested for CONFIG_X86_SPEEDSTEP_ICH not in final .config
Requested value: # CONFIG_X86_SPEEDSTEP_ICH is not set
Actual value:
Value requested for CONFIG_X86_SPEEDSTEP_SMI not in final .config
Requested value: # CONFIG_X86_SPEEDSTEP_SMI is not set
Actual value:
Value requested for CONFIG_X86_CPUFREQ_NFORCE2 not in final .config
Requested value: # CONFIG_X86_CPUFREQ_NFORCE2 is not set
Actual value:
Value requested for CONFIG_X86_LONGRUN not in final .config
Requested value: # CONFIG_X86_LONGRUN is not set
Actual value:
Value requested for CONFIG_X86_LONGHAUL not in final .config
Requested value: # CONFIG_X86_LONGHAUL is not set
Actual value:
Value requested for CONFIG_X86_E_POWERSAVER not in final .config
Requested value: # CONFIG_X86_E_POWERSAVER is not set
Actual value:
Value requested for CONFIG_PCI_GOBIOS not in final .config
Requested value: # CONFIG_PCI_GOBIOS is not set
Actual value:
Value requested for CONFIG_PCI_GOMMCONFIG not in final .config
Requested value: # CONFIG_PCI_GOMMCONFIG is not set
Actual value:
Value requested for CONFIG_PCI_GODIRECT not in final .config
Requested value: # CONFIG_PCI_GODIRECT is not set
Actual value:
Value requested for CONFIG_PCI_GOANY not in final .config
Requested value: CONFIG_PCI_GOANY=y
Actual value:
Value requested for CONFIG_PCI_BIOS not in final .config
Requested value: CONFIG_PCI_BIOS=y
Actual value:
Value requested for CONFIG_ISA not in final .config
Requested value: # CONFIG_ISA is not set
Actual value:
Value requested for CONFIG_SCx200 not in final .config
Requested value: # CONFIG_SCx200 is not set
Actual value:
Value requested for CONFIG_OLPC not in final .config
Requested value: # CONFIG_OLPC is not set
Actual value:
Value requested for CONFIG_ALIX not in final .config
Requested value: # CONFIG_ALIX is not set
Actual value:
Value requested for CONFIG_NET5501 not in final .config
Requested value: # CONFIG_NET5501 is not set
Actual value:
Value requested for CONFIG_GEOS not in final .config
Requested value: # CONFIG_GEOS is not set
Actual value:
Value requested for CONFIG_COMPAT_32 not in final .config
Requested value: CONFIG_COMPAT_32=y
Actual value:
Value requested for CONFIG_HAVE_ATOMIC_IOMAP not in final .config
Requested value: CONFIG_HAVE_ATOMIC_IOMAP=y
Actual value:
Value requested for CONFIG_ARCH_32BIT_OFF_T not in final .config
Requested value: CONFIG_ARCH_32BIT_OFF_T=y
Actual value:
Value requested for CONFIG_ARCH_WANT_IPC_PARSE_VERSION not in final .config
Requested value: CONFIG_ARCH_WANT_IPC_PARSE_VERSION=y
Actual value:
Value requested for CONFIG_MODULES_USE_ELF_REL not in final .config
Requested value: CONFIG_MODULES_USE_ELF_REL=y
Actual value:
Value requested for CONFIG_ARCH_MMAP_RND_BITS not in final .config
Requested value: CONFIG_ARCH_MMAP_RND_BITS=8
Actual value: CONFIG_ARCH_MMAP_RND_BITS=28
Value requested for CONFIG_CLONE_BACKWARDS not in final .config
Requested value: CONFIG_CLONE_BACKWARDS=y
Actual value:
Value requested for CONFIG_OLD_SIGSUSPEND3 not in final .config
Requested value: CONFIG_OLD_SIGSUSPEND3=y
Actual value:
Value requested for CONFIG_OLD_SIGACTION not in final .config
Requested value: CONFIG_OLD_SIGACTION=y
Actual value:
Value requested for CONFIG_ARCH_SPLIT_ARG64 not in final .config
Requested value: CONFIG_ARCH_SPLIT_ARG64=y
Actual value:
Value requested for CONFIG_FUNCTION_ALIGNMENT not in final .config
Requested value: CONFIG_FUNCTION_ALIGNMENT=4
Actual value: CONFIG_FUNCTION_ALIGNMENT=16
Value requested for CONFIG_SELECT_MEMORY_MODEL not in final .config
Requested value: CONFIG_SELECT_MEMORY_MODEL=y
Actual value:
Value requested for CONFIG_FLATMEM_MANUAL not in final .config
Requested value: CONFIG_FLATMEM_MANUAL=y
Actual value:
Value requested for CONFIG_SPARSEMEM_MANUAL not in final .config
Requested value: # CONFIG_SPARSEMEM_MANUAL is not set
Actual value:
Value requested for CONFIG_FLATMEM not in final .config
Requested value: CONFIG_FLATMEM=y
Actual value:
Value requested for CONFIG_SPARSEMEM_STATIC not in final .config
Requested value: CONFIG_SPARSEMEM_STATIC=y
Actual value:
Value requested for CONFIG_BOUNCE not in final .config
Requested value: CONFIG_BOUNCE=y
Actual value:
Value requested for CONFIG_KMAP_LOCAL not in final .config
Requested value: CONFIG_KMAP_LOCAL=y
Actual value:
Value requested for CONFIG_HOTPLUG_PCI_COMPAQ not in final .config
Requested value: # CONFIG_HOTPLUG_PCI_COMPAQ is not set
Actual value:
Value requested for CONFIG_HOTPLUG_PCI_IBM not in final .config
Requested value: # CONFIG_HOTPLUG_PCI_IBM is not set
Actual value:
Value requested for CONFIG_EFI_CAPSULE_QUIRK_QUARK_CSH not in final .config
Requested value: CONFIG_EFI_CAPSULE_QUIRK_QUARK_CSH=y
Actual value:
Value requested for CONFIG_PCH_PHUB not in final .config
Requested value: # CONFIG_PCH_PHUB is not set
Actual value:
Value requested for CONFIG_SCSI_NSP32 not in final .config
Requested value: # CONFIG_SCSI_NSP32 is not set
Actual value:
Value requested for CONFIG_PATA_CS5520 not in final .config
Requested value: # CONFIG_PATA_CS5520 is not set
Actual value:
Value requested for CONFIG_PATA_CS5530 not in final .config
Requested value: # CONFIG_PATA_CS5530 is not set
Actual value:
Value requested for CONFIG_PATA_CS5535 not in final .config
Requested value: # CONFIG_PATA_CS5535 is not set
Actual value:
Value requested for CONFIG_PATA_CS5536 not in final .config
Requested value: # CONFIG_PATA_CS5536 is not set
Actual value:
Value requested for CONFIG_PATA_SC1200 not in final .config
Requested value: # CONFIG_PATA_SC1200 is not set
Actual value:
Value requested for CONFIG_PCH_GBE not in final .config
Requested value: # CONFIG_PCH_GBE is not set
Actual value:
Value requested for CONFIG_INPUT_WISTRON_BTNS not in final .config
Requested value: # CONFIG_INPUT_WISTRON_BTNS is not set
Actual value:
Value requested for CONFIG_SERIAL_TIMBERDALE not in final .config
Requested value: # CONFIG_SERIAL_TIMBERDALE is not set
Actual value:
Value requested for CONFIG_SERIAL_PCH_UART not in final .config
Requested value: # CONFIG_SERIAL_PCH_UART is not set
Actual value:
Value requested for CONFIG_HW_RANDOM_GEODE not in final .config
Requested value: CONFIG_HW_RANDOM_GEODE=y
Actual value:
Value requested for CONFIG_SONYPI not in final .config
Requested value: # CONFIG_SONYPI is not set
Actual value:
Value requested for CONFIG_PC8736x_GPIO not in final .config
Requested value: # CONFIG_PC8736x_GPIO is not set
Actual value:
Value requested for CONFIG_NSC_GPIO not in final .config
Requested value: # CONFIG_NSC_GPIO is not set
Actual value:
Value requested for CONFIG_I2C_EG20T not in final .config
Requested value: # CONFIG_I2C_EG20T is not set
Actual value:
Value requested for CONFIG_SCx200_ACB not in final .config
Requested value: # CONFIG_SCx200_ACB is not set
Actual value:
Value requested for CONFIG_PTP_1588_CLOCK_PCH not in final .config
Requested value: # CONFIG_PTP_1588_CLOCK_PCH is not set
Actual value:
Value requested for CONFIG_SBC8360_WDT not in final .config
Requested value: # CONFIG_SBC8360_WDT is not set
Actual value:
Value requested for CONFIG_SBC7240_WDT not in final .config
Requested value: # CONFIG_SBC7240_WDT is not set
Actual value:
Value requested for CONFIG_MFD_CS5535 not in final .config
Requested value: # CONFIG_MFD_CS5535 is not set
Actual value:
Value requested for CONFIG_AGP_ALI not in final .config
Requested value: # CONFIG_AGP_ALI is not set
Actual value:
Value requested for CONFIG_AGP_ATI not in final .config
Requested value: # CONFIG_AGP_ATI is not set
Actual value:
Value requested for CONFIG_AGP_AMD not in final .config
Requested value: # CONFIG_AGP_AMD is not set
Actual value:
Value requested for CONFIG_AGP_NVIDIA not in final .config
Requested value: # CONFIG_AGP_NVIDIA is not set
Actual value:
Value requested for CONFIG_AGP_SWORKS not in final .config
Requested value: # CONFIG_AGP_SWORKS is not set
Actual value:
Value requested for CONFIG_AGP_EFFICEON not in final .config
Requested value: # CONFIG_AGP_EFFICEON is not set
Actual value:
Value requested for CONFIG_SND_CS5530 not in final .config
Requested value: # CONFIG_SND_CS5530 is not set
Actual value:
Value requested for CONFIG_SND_CS5535AUDIO not in final .config
Requested value: # CONFIG_SND_CS5535AUDIO is not set
Actual value:
Value requested for CONFIG_SND_SIS7019 not in final .config
Requested value: # CONFIG_SND_SIS7019 is not set
Actual value:
Value requested for CONFIG_LEDS_OT200 not in final .config
Requested value: # CONFIG_LEDS_OT200 is not set
Actual value:
Value requested for CONFIG_PCH_DMA not in final .config
Requested value: # CONFIG_PCH_DMA is not set
Actual value:
Value requested for CONFIG_CLKSRC_I8253 not in final .config
Requested value: CONFIG_CLKSRC_I8253=y
Actual value:
Value requested for CONFIG_MAILBOX not in final .config
Requested value: # CONFIG_MAILBOX is not set
Actual value: CONFIG_MAILBOX=y
Value requested for CONFIG_CRYPTO_SERPENT_SSE2_586 not in final .config
Requested value: # CONFIG_CRYPTO_SERPENT_SSE2_586 is not set
Actual value:
Value requested for CONFIG_CRYPTO_TWOFISH_586 not in final .config
Requested value: # CONFIG_CRYPTO_TWOFISH_586 is not set
Actual value:
Value requested for CONFIG_CRYPTO_DEV_GEODE not in final .config
Requested value: # CONFIG_CRYPTO_DEV_GEODE is not set
Actual value:
Value requested for CONFIG_CRYPTO_DEV_HIFN_795X not in final .config
Requested value: # CONFIG_CRYPTO_DEV_HIFN_795X is not set
Actual value:
Value requested for CONFIG_CRYPTO_LIB_POLY1305_RSIZE not in final .config
Requested value: CONFIG_CRYPTO_LIB_POLY1305_RSIZE=1
Actual value: CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
Value requested for CONFIG_AUDIT_GENERIC not in final .config
Requested value: CONFIG_AUDIT_GENERIC=y
Actual value:
Value requested for CONFIG_GENERIC_VDSO_32 not in final .config
Requested value: CONFIG_GENERIC_VDSO_32=y
Actual value:
Value requested for CONFIG_DEBUG_KMAP_LOCAL not in final .config
Requested value: # CONFIG_DEBUG_KMAP_LOCAL is not set
Actual value:
Value requested for CONFIG_DEBUG_HIGHMEM not in final .config
Requested value: # CONFIG_DEBUG_HIGHMEM is not set
Actual value:
Value requested for CONFIG_HAVE_DEBUG_STACKOVERFLOW not in final .config
Requested value: CONFIG_HAVE_DEBUG_STACKOVERFLOW=y
Actual value:
Value requested for CONFIG_DEBUG_STACKOVERFLOW not in final .config
Requested value: # CONFIG_DEBUG_STACKOVERFLOW is not set
Actual value:
Value requested for CONFIG_HAVE_FUNCTION_GRAPH_TRACER not in final .config
Requested value: CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
Actual value:
Value requested for CONFIG_HAVE_FUNCTION_GRAPH_RETVAL not in final .config
Requested value: CONFIG_HAVE_FUNCTION_GRAPH_RETVAL=y
Actual value:
Value requested for CONFIG_DRM_KUNIT_TEST not in final .config
Requested value: CONFIG_DRM_KUNIT_TEST=m
Actual value:
Value requested for CONFIG_DRM_XE_WERROR not in final .config
Requested value: CONFIG_DRM_XE_WERROR=y
Actual value:
Value requested for CONFIG_DRM_XE_DEBUG not in final .config
Requested value: CONFIG_DRM_XE_DEBUG=y
Actual value:
Value requested for CONFIG_DRM_XE_DEBUG_MEM not in final .config
Requested value: CONFIG_DRM_XE_DEBUG_MEM=y
Actual value:
Value requested for CONFIG_DRM_XE_KUNIT_TEST not in final .config
Requested value: CONFIG_DRM_XE_KUNIT_TEST=m
Actual value:
++ nproc
+ make -j48 ARCH=i386 olddefconfig
GEN Makefile
WARNING: unmet direct dependencies detected for FB_IOMEM_HELPERS
Depends on [n]: HAS_IOMEM [=y] && FB_CORE [=n]
Selected by [m]:
- DRM_XE_DISPLAY [=y] && HAS_IOMEM [=y] && DRM [=y] && DRM_XE [=m] && DRM_XE [=m]=m [=m]
#
# configuration written to .config
#
++ nproc
+ make -j48 ARCH=i386
SYNC include/config/auto.conf.cmd
GEN Makefile
WARNING: unmet direct dependencies detected for FB_IOMEM_HELPERS
Depends on [n]: HAS_IOMEM [=y] && FB_CORE [=n]
Selected by [m]:
- DRM_XE_DISPLAY [=y] && HAS_IOMEM [=y] && DRM [=y] && DRM_XE [=m] && DRM_XE [=m]=m [=m]
GEN Makefile
WRAP arch/x86/include/generated/uapi/asm/bpf_perf_event.h
WRAP arch/x86/include/generated/uapi/asm/errno.h
WRAP arch/x86/include/generated/uapi/asm/fcntl.h
WRAP arch/x86/include/generated/uapi/asm/ioctl.h
WRAP arch/x86/include/generated/uapi/asm/ioctls.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_32.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_64.h
WRAP arch/x86/include/generated/uapi/asm/ipcbuf.h
UPD include/generated/uapi/linux/version.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_x32.h
WRAP arch/x86/include/generated/uapi/asm/param.h
WRAP arch/x86/include/generated/uapi/asm/poll.h
SYSTBL arch/x86/include/generated/asm/syscalls_32.h
WRAP arch/x86/include/generated/uapi/asm/resource.h
WRAP arch/x86/include/generated/uapi/asm/socket.h
WRAP arch/x86/include/generated/uapi/asm/sockios.h
WRAP arch/x86/include/generated/uapi/asm/termbits.h
WRAP arch/x86/include/generated/uapi/asm/termios.h
WRAP arch/x86/include/generated/uapi/asm/types.h
UPD include/generated/compile.h
WRAP arch/x86/include/generated/asm/early_ioremap.h
HOSTCC arch/x86/tools/relocs_32.o
WRAP arch/x86/include/generated/asm/mcs_spinlock.h
HOSTCC arch/x86/tools/relocs_64.o
WRAP arch/x86/include/generated/asm/mmzone.h
WRAP arch/x86/include/generated/asm/irq_regs.h
HOSTCC arch/x86/tools/relocs_common.o
WRAP arch/x86/include/generated/asm/kmap_size.h
WRAP arch/x86/include/generated/asm/local64.h
WRAP arch/x86/include/generated/asm/mmiowb.h
WRAP arch/x86/include/generated/asm/module.lds.h
WRAP arch/x86/include/generated/asm/rwonce.h
HOSTCC scripts/kallsyms
HOSTCC scripts/sorttable
HOSTCC scripts/asn1_compiler
HOSTCC scripts/selinux/genheaders/genheaders
HOSTCC scripts/selinux/mdp/mdp
HOSTLD arch/x86/tools/relocs
UPD include/config/kernel.release
UPD include/generated/utsrelease.h
CC scripts/mod/empty.o
HOSTCC scripts/mod/mk_elfconfig
CC scripts/mod/devicetable-offsets.s
UPD scripts/mod/devicetable-offsets.h
MKELF scripts/mod/elfconfig.h
HOSTCC scripts/mod/modpost.o
HOSTCC scripts/mod/file2alias.o
HOSTCC scripts/mod/sumversion.o
HOSTCC scripts/mod/symsearch.o
HOSTLD scripts/mod/modpost
CC kernel/bounds.s
CHKSHA1 /workspace/kernel/include/linux/atomic/atomic-arch-fallback.h
CHKSHA1 /workspace/kernel/include/linux/atomic/atomic-instrumented.h
CHKSHA1 /workspace/kernel/include/linux/atomic/atomic-long.h
UPD include/generated/timeconst.h
UPD include/generated/bounds.h
CC arch/x86/kernel/asm-offsets.s
UPD include/generated/asm-offsets.h
CALL /workspace/kernel/scripts/checksyscalls.sh
LDS scripts/module.lds
HOSTCC usr/gen_init_cpio
CC init/main.o
CC certs/system_keyring.o
CC init/do_mounts.o
CC init/do_mounts_initrd.o
UPD init/utsversion-tmp.h
CC ipc/util.o
CC init/initramfs.o
CC security/commoncap.o
CC block/bdev.o
CC ipc/msgutil.o
CC init/calibrate.o
CC security/lsm_syscalls.o
AS arch/x86/entry/entry.o
CC mm/filemap.o
CC block/fops.o
CC ipc/msg.o
CC io_uring/io_uring.o
AS arch/x86/lib/atomic64_cx8_32.o
CC security/min_addr.o
AS arch/x86/entry/entry_32.o
AR arch/x86/crypto/built-in.a
CC mm/mempool.o
CC arch/x86/realmode/init.o
CC init/init_task.o
CC arch/x86/power/cpu.o
CC security/keys/gc.o
AR arch/x86/net/built-in.a
GEN security/selinux/flask.h security/selinux/av_permissions.h
CC security/integrity/iint.o
CC mm/oom_kill.o
CC block/partitions/core.o
CC arch/x86/pci/i386.o
CC arch/x86/video/video-common.o
AR arch/x86/entry/vsyscall/built-in.a
AR virt/lib/built-in.a
AR arch/x86/platform/atom/built-in.a
AR drivers/cache/built-in.a
AR arch/x86/virt/svm/built-in.a
CC security/selinux/avc.o
CC arch/x86/mm/pat/set_memory.o
CC arch/x86/events/amd/core.o
CC lib/math/div64.o
CC arch/x86/kernel/fpu/init.o
CC fs/notify/dnotify/dnotify.o
CC net/core/sock.o
AR virt/built-in.a
CC sound/core/seq/seq.o
CC lib/zlib_inflate/inffast.o
AR arch/x86/platform/ce4100/built-in.a
AR drivers/irqchip/built-in.a
AR arch/x86/virt/vmx/built-in.a
CC lib/crypto/mpi/generic_mpih-lshift.o
AS arch/x86/lib/checksum_32.o
CC arch/x86/kernel/cpu/mce/core.o
CC arch/x86/entry/vdso/vma.o
CC arch/x86/entry/vdso/extable.o
AR arch/x86/virt/built-in.a
CC lib/crypto/mpi/generic_mpih-mul1.o
CC arch/x86/platform/efi/memmap.o
AR drivers/bus/mhi/built-in.a
CC kernel/sched/core.o
AR drivers/bus/built-in.a
CC lib/zlib_deflate/deflate.o
AR arch/x86/platform/geode/built-in.a
CC arch/x86/lib/cmdline.o
AR drivers/pwm/built-in.a
CC net/core/request_sock.o
CC crypto/asymmetric_keys/asymmetric_type.o
AR drivers/leds/trigger/built-in.a
AR drivers/leds/blink/built-in.a
AR drivers/leds/simple/built-in.a
CC drivers/leds/led-core.o
AS arch/x86/lib/cmpxchg8b_emu.o
CC lib/math/gcd.o
CC arch/x86/lib/cpu.o
CC lib/zlib_inflate/inflate.o
CC lib/math/lcm.o
CC lib/math/int_log.o
AS arch/x86/realmode/rm/header.o
GEN usr/initramfs_data.cpio
AS arch/x86/realmode/rm/trampoline_32.o
COPY usr/initramfs_inc_data
AS usr/initramfs_data.o
AS arch/x86/realmode/rm/stack.o
AR usr/built-in.a
HOSTCC certs/extract-cert
CC lib/math/int_pow.o
CC block/bio.o
CC arch/x86/kernel/fpu/bugs.o
AS arch/x86/realmode/rm/reboot.o
AS arch/x86/realmode/rm/wakeup_asm.o
CC arch/x86/realmode/rm/wakemain.o
CC lib/math/int_sqrt.o
CC arch/x86/kernel/fpu/core.o
CC arch/x86/realmode/rm/video-mode.o
CC lib/math/reciprocal_div.o
CC sound/core/seq/seq_lock.o
CC lib/math/rational.o
CC security/keys/key.o
CC security/integrity/integrity_audit.o
CC arch/x86/lib/delay.o
AS arch/x86/realmode/rm/copy.o
CC lib/crypto/mpi/generic_mpih-mul2.o
AR arch/x86/video/built-in.a
CC lib/crypto/mpi/generic_mpih-mul3.o
AS arch/x86/realmode/rm/bioscall.o
CC arch/x86/realmode/rm/regs.o
CERT certs/x509_certificate_list
CERT certs/signing_key.x509
AS certs/system_certificates.o
AS arch/x86/lib/getuser.o
AR certs/built-in.a
CC arch/x86/mm/pat/memtype.o
CC drivers/leds/led-class.o
CC arch/x86/mm/pat/memtype_interval.o
CC arch/x86/realmode/rm/video-vga.o
CC arch/x86/kernel/fpu/regset.o
CC arch/x86/pci/init.o
CC arch/x86/kernel/fpu/signal.o
CC arch/x86/kernel/fpu/xstate.o
CC arch/x86/pci/pcbios.o
CC arch/x86/kernel/cpu/mce/severity.o
CC crypto/asymmetric_keys/restrict.o
CC arch/x86/pci/mmconfig_32.o
LDS arch/x86/entry/vdso/vdso32/vdso32.lds
CC arch/x86/realmode/rm/video-vesa.o
CC arch/x86/realmode/rm/video-bios.o
AR fs/notify/dnotify/built-in.a
AS arch/x86/entry/vdso/vdso32/note.o
CC arch/x86/platform/efi/quirks.o
CC arch/x86/kernel/acpi/boot.o
CC fs/notify/inotify/inotify_fsnotify.o
AS arch/x86/entry/vdso/vdso32/system_call.o
CC lib/zlib_inflate/infutil.o
AS arch/x86/entry/vdso/vdso32/sigreturn.o
CC arch/x86/power/hibernate_32.o
CC security/selinux/hooks.o
CC arch/x86/entry/vdso/vdso32/vclock_gettime.o
PASYMS arch/x86/realmode/rm/pasyms.h
CC block/partitions/msdos.o
GEN arch/x86/lib/inat-tables.c
CC lib/zlib_deflate/deftree.o
CC arch/x86/lib/insn-eval.o
LDS arch/x86/realmode/rm/realmode.lds
LD arch/x86/realmode/rm/realmode.elf
RELOCS arch/x86/realmode/rm/realmode.relocs
OBJCOPY arch/x86/realmode/rm/realmode.bin
CC sound/core/seq/seq_clientmgr.o
AS arch/x86/realmode/rmpiggy.o
AR arch/x86/realmode/built-in.a
CC security/keys/keyring.o
AR lib/math/built-in.a
CC arch/x86/mm/init.o
CC ipc/sem.o
CC arch/x86/kernel/apic/apic.o
CC lib/zlib_inflate/inftrees.o
CC arch/x86/platform/efi/efi.o
AR fs/notify/fanotify/built-in.a
CC arch/x86/kernel/cpu/mtrr/mtrr.o
CC fs/notify/fsnotify.o
CC net/ethernet/eth.o
CC arch/x86/events/amd/lbr.o
AR arch/x86/platform/iris/built-in.a
CC drivers/leds/led-triggers.o
CC arch/x86/platform/intel/iosf_mbi.o
CC lib/zlib_inflate/inflate_syms.o
CC lib/crypto/mpi/generic_mpih-rshift.o
CC crypto/asymmetric_keys/signature.o
CC security/selinux/selinuxfs.o
CC arch/x86/lib/insn.o
CC arch/x86/lib/kaslr.o
CC arch/x86/kernel/acpi/sleep.o
CC fs/notify/inotify/inotify_user.o
AR security/integrity/built-in.a
AS arch/x86/kernel/acpi/wakeup_32.o
CC arch/x86/entry/vdso/vdso32/vgetcpu.o
CC block/elevator.o
CC arch/x86/kernel/cpu/mce/genpool.o
CC net/core/skbuff.o
CC arch/x86/pci/direct.o
CC init/version.o
CC lib/zlib_deflate/deflate_syms.o
HOSTCC arch/x86/entry/vdso/vdso2c
CC kernel/locking/mutex.o
AR arch/x86/mm/pat/built-in.a
CC security/keys/keyctl.o
CC security/selinux/netlink.o
AR lib/zlib_inflate/built-in.a
CC arch/x86/kernel/kprobes/core.o
AS arch/x86/power/hibernate_asm_32.o
CC arch/x86/pci/mmconfig-shared.o
CC crypto/asymmetric_keys/public_key.o
CC sound/core/seq/seq_memory.o
CC arch/x86/power/hibernate.o
CC arch/x86/kernel/kprobes/opt.o
LDS arch/x86/kernel/vmlinux.lds
CC crypto/api.o
AR init/built-in.a
CC lib/crypto/memneq.o
CC block/partitions/efi.o
CC arch/x86/pci/fixup.o
CC sound/core/sound.o
CC arch/x86/events/intel/core.o
CC arch/x86/events/zhaoxin/core.o
CC arch/x86/lib/memcpy_32.o
CC lib/crypto/mpi/generic_mpih-sub1.o
CC arch/x86/events/core.o
AS arch/x86/lib/memmove_32.o
CC arch/x86/lib/misc.o
CC arch/x86/kernel/cpu/mtrr/if.o
CC arch/x86/lib/pc-conf-reg.o
AR lib/zlib_deflate/built-in.a
CC arch/x86/events/probe.o
CC arch/x86/entry/vdso/vdso32-setup.o
CC arch/x86/mm/init_32.o
CC sound/core/init.o
AR arch/x86/platform/intel/built-in.a
AR drivers/leds/built-in.a
AR arch/x86/kernel/fpu/built-in.a
CC crypto/cipher.o
AR sound/i2c/other/built-in.a
AR sound/i2c/built-in.a
CC drivers/pci/msi/pcidev_msi.o
CC drivers/pci/pcie/portdrv.o
CC arch/x86/platform/efi/efi_32.o
CC arch/x86/kernel/acpi/cstate.o
AR drivers/pci/pwrctl/built-in.a
CC arch/x86/kernel/apic/apic_common.o
CC arch/x86/events/amd/ibs.o
AS arch/x86/lib/putuser.o
CC arch/x86/kernel/cpu/mce/intel.o
CC arch/x86/kernel/cpu/mce/amd.o
AS arch/x86/lib/retpoline.o
AR arch/x86/platform/intel-mid/built-in.a
CC arch/x86/mm/fault.o
CC arch/x86/lib/string_32.o
CC sound/core/seq/seq_queue.o
CC arch/x86/lib/strstr_32.o
CC sound/core/seq/seq_fifo.o
VDSO arch/x86/entry/vdso/vdso32.so.dbg
CC arch/x86/lib/usercopy.o
CC arch/x86/events/amd/uncore.o
OBJCOPY arch/x86/entry/vdso/vdso32.so
VDSO2C arch/x86/entry/vdso/vdso-image-32.c
ASN.1 crypto/asymmetric_keys/x509.asn1.[ch]
CC arch/x86/entry/vdso/vdso-image-32.o
ASN.1 crypto/asymmetric_keys/x509_akid.asn1.[ch]
CC crypto/asymmetric_keys/x509_loader.o
CC fs/nfs_common/nfsacl.o
AR fs/notify/inotify/built-in.a
CC fs/notify/notification.o
AR arch/x86/power/built-in.a
CC security/selinux/nlmsgtab.o
CC security/selinux/netif.o
CC lib/crypto/mpi/generic_mpih-add1.o
CC drivers/pci/hotplug/pci_hotplug_core.o
CC kernel/locking/semaphore.o
CC arch/x86/kernel/cpu/mce/threshold.o
CC io_uring/opdef.o
CC arch/x86/kernel/cpu/mtrr/generic.o
CC crypto/asymmetric_keys/x509_public_key.o
CC security/keys/permission.o
AR net/ethernet/built-in.a
CC arch/x86/lib/usercopy_32.o
CC arch/x86/kernel/cpu/microcode/core.o
AR arch/x86/entry/vdso/built-in.a
CC arch/x86/entry/syscall_32.o
CC sound/core/seq/seq_prioq.o
AR arch/x86/kernel/kprobes/built-in.a
CC arch/x86/events/utils.o
CC security/keys/process_keys.o
CC arch/x86/kernel/apic/apic_noop.o
AR arch/x86/kernel/acpi/built-in.a
AS arch/x86/platform/efi/efi_stub_32.o
CC drivers/pci/msi/api.o
CC ipc/shm.o
AR block/partitions/built-in.a
CC block/blk-core.o
AR arch/x86/events/zhaoxin/built-in.a
CC arch/x86/pci/acpi.o
CC lib/lzo/lzo1x_compress.o
CC arch/x86/events/intel/bts.o
CC arch/x86/platform/efi/runtime-map.o
CC arch/x86/kernel/cpu/cacheinfo.o
CC drivers/pci/pcie/rcec.o
CC arch/x86/lib/msr-smp.o
AR sound/drivers/opl3/built-in.a
CC security/security.o
AR sound/drivers/opl4/built-in.a
AR sound/isa/ad1816a/built-in.a
AR sound/drivers/mpu401/built-in.a
AR sound/isa/ad1848/built-in.a
AR sound/drivers/vx/built-in.a
AR sound/isa/cs423x/built-in.a
AR sound/drivers/pcsp/built-in.a
AR sound/isa/es1688/built-in.a
AR sound/ppc/built-in.a
AR sound/pci/ac97/built-in.a
AR sound/drivers/built-in.a
CC mm/fadvise.o
AR sound/isa/galaxy/built-in.a
AR sound/pci/ali5451/built-in.a
AR net/802/built-in.a
AR sound/isa/gus/built-in.a
AR sound/pci/asihpi/built-in.a
CC mm/maccess.o
AR sound/isa/msnd/built-in.a
CC lib/crypto/utils.o
AR sound/pci/au88x0/built-in.a
AR sound/isa/opti9xx/built-in.a
AR sound/pci/aw2/built-in.a
AR sound/isa/sb/built-in.a
AR sound/pci/ctxfi/built-in.a
AR sound/isa/wavefront/built-in.a
AR sound/pci/ca0106/built-in.a
AR sound/isa/wss/built-in.a
CC lib/crypto/mpi/mpicoder.o
AR sound/pci/cs46xx/built-in.a
AR sound/isa/built-in.a
CC fs/notify/group.o
AR sound/pci/cs5535audio/built-in.a
CC security/selinux/netnode.o
AR sound/pci/lola/built-in.a
AR sound/arm/built-in.a
AR sound/pci/lx6464es/built-in.a
ASN.1 crypto/asymmetric_keys/pkcs7.asn1.[ch]
AR sound/pci/echoaudio/built-in.a
CC io_uring/kbuf.o
CC lib/crypto/chacha.o
AR sound/pci/emu10k1/built-in.a
CC arch/x86/kernel/apic/ipi.o
CC sound/pci/hda/hda_bind.o
CC fs/nfs_common/grace.o
CC lib/crypto/aes.o
CC block/blk-sysfs.o
CC arch/x86/lib/cache-smp.o
CC crypto/asymmetric_keys/pkcs7_trust.o
CC kernel/locking/rwsem.o
CC arch/x86/lib/msr.o
CC lib/lzo/lzo1x_decompress_safe.o
CC drivers/pci/hotplug/acpi_pcihp.o
CC sound/core/seq/seq_timer.o
CC arch/x86/kernel/cpu/microcode/intel.o
CC security/selinux/netport.o
CC arch/x86/kernel/apic/vector.o
CC arch/x86/entry/common.o
CC drivers/video/console/dummycon.o
CC lib/lz4/lz4_decompress.o
CC drivers/video/backlight/backlight.o
CC drivers/pci/msi/msi.o
CC net/core/datagram.o
AR arch/x86/events/amd/built-in.a
CC fs/notify/mark.o
CC arch/x86/mm/ioremap.o
CC arch/x86/pci/legacy.o
CC crypto/asymmetric_keys/pkcs7_verify.o
AR arch/x86/platform/efi/built-in.a
CC drivers/pci/pcie/aspm.o
AR arch/x86/platform/intel-quark/built-in.a
AR arch/x86/platform/olpc/built-in.a
AR arch/x86/platform/scx200/built-in.a
CC arch/x86/kernel/cpu/mtrr/cleanup.o
AR arch/x86/platform/ts5500/built-in.a
AR arch/x86/platform/uv/built-in.a
AR arch/x86/platform/built-in.a
CC drivers/video/console/vgacon.o
CC net/core/stream.o
AR drivers/pci/controller/dwc/built-in.a
CC fs/notify/fdinfo.o
CC security/keys/request_key.o
AR drivers/pci/controller/mobiveil/built-in.a
CC lib/crypto/mpi/mpi-add.o
CC lib/crypto/mpi/mpi-bit.o
AR drivers/pci/controller/plda/built-in.a
AR drivers/pci/controller/built-in.a
CC arch/x86/kernel/cpu/mtrr/amd.o
CC arch/x86/kernel/cpu/scattered.o
AS arch/x86/kernel/head_32.o
CC security/keys/request_key_auth.o
CC drivers/pci/pcie/pme.o
AR lib/lzo/built-in.a
CC sound/pci/hda/hda_codec.o
CC sound/core/memory.o
AR arch/x86/kernel/cpu/mce/built-in.a
CC arch/x86/kernel/cpu/topology_common.o
CC mm/page-writeback.o
CC security/selinux/status.o
CC fs/nfs_common/common.o
AS arch/x86/lib/msr-reg.o
CC kernel/locking/percpu-rwsem.o
AR drivers/video/fbdev/core/built-in.a
CC drivers/video/aperture.o
AR drivers/video/fbdev/omap/built-in.a
AR drivers/video/fbdev/omap2/omapfb/dss/built-in.a
CC ipc/syscall.o
AR drivers/video/fbdev/omap2/omapfb/displays/built-in.a
CC arch/x86/kernel/apic/init.o
AR drivers/pci/hotplug/built-in.a
AR drivers/video/fbdev/omap2/omapfb/built-in.a
AR drivers/video/fbdev/omap2/built-in.a
AR drivers/idle/built-in.a
AR drivers/video/fbdev/built-in.a
CC security/lsm_audit.o
CC crypto/asymmetric_keys/x509.asn1.o
CC sound/pci/hda/hda_jack.o
CC arch/x86/lib/msr-reg-export.o
CC crypto/asymmetric_keys/x509_akid.asn1.o
CC sound/pci/hda/hda_auto_parser.o
CC crypto/asymmetric_keys/x509_cert_parser.o
CC sound/core/seq/seq_system.o
CC arch/x86/kernel/cpu/microcode/amd.o
CC arch/x86/mm/extable.o
AS arch/x86/entry/thunk.o
CC lib/crypto/mpi/mpi-cmp.o
CC arch/x86/pci/irq.o
AR arch/x86/entry/built-in.a
CC lib/crypto/arc4.o
AS arch/x86/lib/hweight.o
CC io_uring/rsrc.o
CC arch/x86/lib/iomem.o
CC arch/x86/kernel/head32.o
CC arch/x86/kernel/ebda.o
AR drivers/video/backlight/built-in.a
CC arch/x86/kernel/cpu/topology_ext.o
AR sound/sh/built-in.a
CC kernel/power/qos.o
CC kernel/power/main.o
CC security/selinux/ss/ebitmap.o
AR drivers/char/ipmi/built-in.a
CC sound/core/seq/seq_ports.o
CC drivers/pci/msi/irqdomain.o
CC arch/x86/kernel/cpu/mtrr/cyrix.o
CC fs/iomap/trace.o
CC fs/iomap/iter.o
CC fs/iomap/buffered-io.o
CC net/core/scm.o
AR fs/notify/built-in.a
CC lib/crypto/gf128mul.o
CC arch/x86/kernel/cpu/mtrr/centaur.o
CC arch/x86/kernel/platform-quirks.o
CC kernel/locking/spinlock.o
CC crypto/compress.o
AR fs/nfs_common/built-in.a
CC fs/quota/dquot.o
CC crypto/asymmetric_keys/pkcs7.asn1.o
CC arch/x86/lib/atomic64_32.o
CC net/sched/sch_generic.o
CC crypto/asymmetric_keys/pkcs7_parser.o
CC security/keys/user_defined.o
CC arch/x86/lib/inat.o
AR drivers/video/console/built-in.a
AR lib/lz4/built-in.a
AR arch/x86/lib/built-in.a
CC kernel/sched/fair.o
CC kernel/power/console.o
CC net/core/gen_stats.o
CC lib/crypto/mpi/mpi-sub-ui.o
CC drivers/video/cmdline.o
CC ipc/ipc_sysctl.o
CC kernel/locking/osq_lock.o
CC kernel/sched/build_policy.o
CC io_uring/notif.o
AR arch/x86/lib/lib.a
AR drivers/pci/pcie/built-in.a
CC lib/crypto/mpi/mpi-div.o
CC net/sched/sch_mq.o
CC block/blk-flush.o
CC arch/x86/pci/common.o
CC security/keys/proc.o
CC arch/x86/events/intel/ds.o
CC arch/x86/kernel/apic/hw_nmi.o
CC sound/pci/hda/hda_sysfs.o
CC sound/core/seq/seq_info.o
AR arch/x86/kernel/cpu/microcode/built-in.a
CC mm/folio-compat.o
CC crypto/algapi.o
CC kernel/locking/qspinlock.o
CC arch/x86/mm/mmap.o
CC lib/crypto/blake2s.o
CC arch/x86/kernel/cpu/mtrr/legacy.o
CC sound/core/control.o
CC mm/readahead.o
AR sound/synth/emux/built-in.a
AR sound/synth/built-in.a
CC drivers/acpi/acpica/dsargs.o
CC net/sched/sch_frag.o
AR drivers/pci/msi/built-in.a
AR drivers/acpi/pmic/built-in.a
AR drivers/pci/switch/built-in.a
CC drivers/pci/access.o
CC arch/x86/mm/pgtable.o
CC kernel/printk/printk.o
AR crypto/asymmetric_keys/built-in.a
CC arch/x86/events/rapl.o
CC drivers/video/nomodeset.o
CC security/keys/sysctl.o
CC ipc/mqueue.o
CC mm/swap.o
CC io_uring/tctx.o
CC crypto/scatterwalk.o
AR arch/x86/kernel/cpu/mtrr/built-in.a
CC sound/core/seq/seq_dummy.o
CC arch/x86/kernel/cpu/topology_amd.o
CC drivers/pnp/pnpacpi/core.o
CC block/blk-settings.o
CC kernel/power/process.o
CC kernel/locking/rtmutex_api.o
CC kernel/power/suspend.o
CC lib/crypto/mpi/mpi-mod.o
CC arch/x86/events/intel/knc.o
CC drivers/acpi/acpica/dscontrol.o
CC arch/x86/kernel/apic/io_apic.o
CC drivers/video/hdmi.o
CC security/selinux/ss/hashtab.o
CC security/device_cgroup.o
AR drivers/amba/built-in.a
CC sound/pci/hda/hda_controller.o
CC arch/x86/events/intel/lbr.o
CC lib/crypto/blake2s-generic.o
CC arch/x86/events/msr.o
CC net/sched/sch_api.o
CC ipc/namespace.o
CC net/netlink/af_netlink.o
AR net/bpf/built-in.a
CC kernel/locking/qrwlock.o
CC arch/x86/pci/early.o
CC kernel/sched/build_utility.o
CC security/keys/keyctl_pkey.o
CC arch/x86/kernel/cpu/common.o
CC drivers/pnp/core.o
CC drivers/acpi/acpica/dsdebug.o
CC arch/x86/mm/physaddr.o
CC lib/zstd/zstd_decompress_module.o
CC security/selinux/ss/symtab.o
AR sound/core/seq/built-in.a
CC security/selinux/ss/sidtab.o
CC arch/x86/kernel/process_32.o
CC arch/x86/kernel/signal.o
CC drivers/pci/bus.o
CC lib/crypto/mpi/mpi-mul.o
CC arch/x86/kernel/apic/msi.o
CC drivers/pnp/pnpacpi/rsparser.o
AR drivers/clk/actions/built-in.a
AR drivers/clk/analogbits/built-in.a
AR drivers/clk/bcm/built-in.a
AR drivers/clk/imgtec/built-in.a
AR drivers/clk/imx/built-in.a
CC drivers/dma/dw/core.o
AR drivers/clk/ingenic/built-in.a
CC crypto/proc.o
AR drivers/clk/mediatek/built-in.a
CC drivers/dma/hsu/hsu.o
AR drivers/clk/microchip/built-in.a
CC net/netlink/genetlink.o
AR drivers/clk/mstar/built-in.a
AR drivers/clk/mvebu/built-in.a
AR drivers/clk/ralink/built-in.a
CC lib/crypto/mpi/mpih-cmp.o
AR drivers/clk/renesas/built-in.a
AR drivers/clk/socfpga/built-in.a
AR drivers/clk/sophgo/built-in.a
CC drivers/acpi/acpica/dsfield.o
CC fs/proc/task_mmu.o
AR drivers/clk/sprd/built-in.a
AR drivers/clk/starfive/built-in.a
AR drivers/clk/sunxi-ng/built-in.a
AR drivers/clk/ti/built-in.a
CC kernel/printk/printk_safe.o
AR drivers/clk/versatile/built-in.a
CC io_uring/filetable.o
CC net/core/gen_estimator.o
CC net/ethtool/ioctl.o
AR drivers/clk/xilinx/built-in.a
AR drivers/clk/built-in.a
CC block/blk-ioc.o
CC lib/zstd/decompress/huf_decompress.o
AR drivers/video/built-in.a
CC net/netfilter/core.o
CC net/ipv4/netfilter/nf_defrag_ipv4.o
CC fs/iomap/direct-io.o
CC net/ipv4/netfilter/nf_reject_ipv4.o
CC arch/x86/pci/bus_numa.o
AR kernel/locking/built-in.a
CC net/xfrm/xfrm_policy.o
AR security/keys/built-in.a
CC net/ipv4/netfilter/ip_tables.o
CC arch/x86/mm/tlb.o
CC sound/core/misc.o
CC arch/x86/mm/cpu_entry_area.o
CC io_uring/rw.o
CC kernel/power/hibernate.o
CC kernel/irq/irqdesc.o
CC fs/quota/quota_v2.o
AR sound/usb/misc/built-in.a
CC drivers/acpi/acpica/dsinit.o
AR sound/usb/usx2y/built-in.a
CC mm/truncate.o
AR sound/usb/caiaq/built-in.a
AR sound/usb/6fire/built-in.a
CC crypto/aead.o
CC net/netfilter/nf_log.o
AR sound/usb/hiface/built-in.a
CC lib/crypto/mpi/mpih-div.o
AR sound/usb/bcd2000/built-in.a
AR sound/usb/built-in.a
CC block/blk-map.o
AR drivers/dma/idxd/built-in.a
CC drivers/dma/dw/dw.o
CC drivers/pci/probe.o
CC arch/x86/mm/maccess.o
CC mm/vmscan.o
CC ipc/mq_sysctl.o
CC lib/zstd/decompress/zstd_ddict.o
CC fs/iomap/fiemap.o
CC arch/x86/events/intel/p4.o
CC sound/pci/hda/hda_proc.o
AR drivers/pnp/pnpacpi/built-in.a
AR drivers/dma/hsu/built-in.a
CC drivers/pnp/card.o
CC sound/pci/hda/hda_hwdep.o
CC kernel/printk/nbcon.o
CC arch/x86/pci/amd_bus.o
CC drivers/acpi/acpica/dsmethod.o
CC arch/x86/mm/pgprot.o
CC arch/x86/kernel/cpu/rdrand.o
CC sound/core/device.o
CC net/ethtool/common.o
CC fs/proc/inode.o
CC security/selinux/ss/avtab.o
CC arch/x86/kernel/apic/probe_32.o
CC security/selinux/ss/policydb.o
CC drivers/acpi/dptf/int340x_thermal.o
CC net/core/net_namespace.o
AR ipc/built-in.a
CC fs/quota/quota_tree.o
CC arch/x86/kernel/cpu/match.o
CC drivers/acpi/x86/apple.o
CC kernel/irq/handle.o
CC drivers/acpi/x86/cmos_rtc.o
CC drivers/dma/dw/idma32.o
CC fs/iomap/seek.o
CC arch/x86/events/intel/p6.o
CC drivers/acpi/acpica/dsmthdat.o
CC drivers/pnp/driver.o
CC lib/zstd/decompress/zstd_decompress.o
CC drivers/pnp/resource.o
CC crypto/geniv.o
CC arch/x86/mm/pgtable_32.o
CC net/sched/sch_blackhole.o
CC lib/crypto/mpi/mpih-mul.o
CC kernel/irq/manage.o
CC sound/core/info.o
AR arch/x86/kernel/apic/built-in.a
AR drivers/acpi/dptf/built-in.a
CC net/xfrm/xfrm_state.o
CC net/ethtool/netlink.o
AR sound/pci/ice1712/built-in.a
CC net/sched/cls_api.o
CC net/ipv4/route.o
CC fs/iomap/swapfile.o
CC block/blk-merge.o
CC arch/x86/kernel/cpu/bugs.o
AR arch/x86/pci/built-in.a
CC net/xfrm/xfrm_hash.o
AR sound/firewire/built-in.a
CC lib/crypto/mpi/mpi-pow.o
CC sound/core/isadma.o
CC lib/zstd/decompress/zstd_decompress_block.o
CC net/netfilter/nf_queue.o
CC kernel/printk/printk_ringbuffer.o
CC net/core/secure_seq.o
CC net/ipv4/netfilter/iptable_filter.o
CC drivers/acpi/acpica/dsobject.o
CC drivers/acpi/tables.o
CC kernel/power/snapshot.o
CC security/selinux/ss/services.o
CC fs/proc/root.o
CC io_uring/net.o
CC drivers/acpi/x86/lpss.o
CC net/netlink/policy.o
CC sound/pci/hda/hda_intel.o
CC drivers/acpi/acpica/dsopcode.o
CC net/xfrm/xfrm_input.o
CC drivers/dma/dw/acpi.o
AR sound/sparc/built-in.a
CC kernel/rcu/update.o
CC arch/x86/events/intel/pt.o
CC fs/quota/quota.o
CC lib/zstd/zstd_common_module.o
CC arch/x86/events/intel/uncore.o
CC arch/x86/mm/iomap_32.o
CC block/blk-timeout.o
CC drivers/pci/host-bridge.o
CC net/core/flow_dissector.o
CC kernel/printk/sysctl.o
AR fs/iomap/built-in.a
AR sound/spi/built-in.a
CC io_uring/poll.o
AR sound/parisc/built-in.a
CC kernel/irq/spurious.o
CC crypto/lskcipher.o
CC fs/quota/kqid.o
CC sound/core/vmaster.o
CC lib/crypto/mpi/mpiutil.o
CC block/blk-lib.o
CC drivers/acpi/acpica/dspkginit.o
CC drivers/pnp/manager.o
CC fs/proc/base.o
CC mm/shrinker.o
CC lib/xz/xz_dec_syms.o
CC drivers/acpi/x86/s2idle.o
AR kernel/printk/built-in.a
CC arch/x86/kernel/cpu/aperfmperf.o
CC crypto/skcipher.o
AR drivers/dma/dw/built-in.a
AR drivers/dma/amd/built-in.a
AR drivers/dma/mediatek/built-in.a
AR drivers/dma/qcom/built-in.a
AR drivers/dma/stm32/built-in.a
CC net/ethtool/bitset.o
AR drivers/dma/ti/built-in.a
CC arch/x86/mm/hugetlbpage.o
AR drivers/dma/xilinx/built-in.a
CC drivers/dma/dmaengine.o
CC net/ipv4/netfilter/iptable_mangle.o
CC kernel/power/swap.o
AR net/netlink/built-in.a
CC net/ipv4/inetpeer.o
CC drivers/acpi/acpica/dsutils.o
CC net/ethtool/strset.o
AR sound/pcmcia/vx/built-in.a
CC net/xfrm/xfrm_output.o
CC drivers/pci/remove.o
AR sound/pcmcia/pdaudiocf/built-in.a
CC kernel/irq/resend.o
AR sound/pcmcia/built-in.a
CC kernel/irq/chip.o
CC net/netfilter/nf_sockopt.o
CC drivers/acpi/osi.o
CC lib/xz/xz_dec_stream.o
CC sound/core/ctljack.o
AR lib/crypto/mpi/built-in.a
CC lib/crypto/sha1.o
CC drivers/pci/pci.o
CC drivers/pnp/support.o
CC block/blk-mq.o
CC net/unix/af_unix.o
CC net/ipv6/netfilter/ip6_tables.o
CC net/packet/af_packet.o
CC fs/quota/netlink.o
CC drivers/acpi/acpica/dswexec.o
CC arch/x86/kernel/cpu/cpuid-deps.o
CC security/selinux/ss/conditional.o
AR drivers/soc/apple/built-in.a
CC lib/crypto/sha256.o
AR drivers/soc/aspeed/built-in.a
CC arch/x86/mm/dump_pagetables.o
CC sound/core/jack.o
CC net/ipv4/netfilter/ipt_REJECT.o
AR drivers/soc/bcm/built-in.a
AR drivers/soc/fsl/built-in.a
AR drivers/soc/fujitsu/built-in.a
CC drivers/dma/virt-dma.o
AR drivers/soc/hisilicon/built-in.a
CC arch/x86/kernel/cpu/umwait.o
AR drivers/soc/imx/built-in.a
AR sound/pci/hda/built-in.a
CC lib/xz/xz_dec_lzma2.o
AR drivers/soc/ixp4xx/built-in.a
AR sound/pci/korg1212/built-in.a
AR drivers/soc/loongson/built-in.a
CC drivers/acpi/x86/utils.o
AR sound/pci/mixart/built-in.a
CC lib/dim/dim.o
AR drivers/soc/mediatek/built-in.a
CC lib/fonts/fonts.o
AR kernel/sched/built-in.a
AR sound/pci/nm256/built-in.a
CC lib/fonts/font_8x16.o
AR drivers/soc/microchip/built-in.a
AR sound/pci/oxygen/built-in.a
CC lib/dim/net_dim.o
AR drivers/soc/nuvoton/built-in.a
AR sound/pci/pcxhr/built-in.a
CC arch/x86/kernel/signal_32.o
AR drivers/soc/pxa/built-in.a
AR sound/pci/riptide/built-in.a
AR drivers/soc/amlogic/built-in.a
CC net/ipv6/netfilter/ip6table_filter.o
AR sound/pci/rme9652/built-in.a
CC mm/shmem.o
AR drivers/soc/qcom/built-in.a
AR sound/pci/trident/built-in.a
AR drivers/soc/renesas/built-in.a
AR sound/pci/ymfpci/built-in.a
AR drivers/soc/rockchip/built-in.a
AR sound/pci/vx222/built-in.a
AR sound/pci/built-in.a
AR drivers/soc/sunxi/built-in.a
AR drivers/soc/ti/built-in.a
AR drivers/soc/versatile/built-in.a
CC kernel/rcu/sync.o
AR drivers/soc/xilinx/built-in.a
AR drivers/soc/built-in.a
CC kernel/rcu/srcutree.o
CC drivers/pnp/interface.o
CC fs/kernfs/mount.o
CC arch/x86/events/intel/uncore_nhmex.o
CC net/ethtool/linkinfo.o
CC drivers/acpi/acpica/dswload.o
CC crypto/seqiv.o
CC drivers/acpi/osl.o
CC lib/zstd/common/debug.o
CC lib/zstd/common/entropy_common.o
CC kernel/irq/dummychip.o
CC net/unix/garbage.o
CC drivers/virtio/virtio.o
CC net/netfilter/utils.o
AR lib/fonts/built-in.a
CC net/ipv4/protocol.o
CC drivers/dma/acpi-dma.o
CC fs/proc/generic.o
CC drivers/pnp/quirks.o
CC fs/kernfs/inode.o
CC io_uring/eventfd.o
AR lib/crypto/built-in.a
CC lib/zstd/common/error_private.o
CC lib/dim/rdma_dim.o
AR sound/mips/built-in.a
CC lib/zstd/common/fse_decompress.o
CC kernel/irq/devres.o
CC drivers/acpi/x86/blacklist.o
CC kernel/power/user.o
AR fs/quota/built-in.a
CC net/sched/act_api.o
CC net/ipv6/netfilter/ip6table_mangle.o
CC net/sched/sch_fifo.o
CC sound/core/hwdep.o
CC drivers/acpi/acpica/dswload2.o
CC lib/xz/xz_dec_bcj.o
MKCAP arch/x86/kernel/cpu/capflags.c
CC arch/x86/mm/highmem_32.o
CC net/xfrm/xfrm_sysctl.o
CC block/blk-mq-tag.o
CC net/ipv6/af_inet6.o
CC mm/util.o
CC net/core/sysctl_net_core.o
CC net/core/dev.o
CC mm/mmzone.o
CC lib/argv_split.o
CC crypto/echainiv.o
CC net/netfilter/nfnetlink.o
CC [M] net/ipv4/netfilter/iptable_nat.o
CC security/selinux/ss/mls.o
AR lib/dim/built-in.a
CC fs/proc/array.o
CC arch/x86/kernel/traps.o
CC kernel/irq/autoprobe.o
AR sound/soc/built-in.a
AR drivers/acpi/x86/built-in.a
CC kernel/power/poweroff.o
CC kernel/irq/irqdomain.o
CC lib/zstd/common/zstd_common.o
AR net/dsa/built-in.a
CC lib/bug.o
AR lib/zstd/built-in.a
CC drivers/acpi/acpica/dswscope.o
CC net/unix/sysctl_net_unix.o
AR drivers/dma/built-in.a
CC block/blk-stat.o
CC drivers/virtio/virtio_ring.o
CC net/ethtool/linkmodes.o
CC drivers/pnp/system.o
CC io_uring/uring_cmd.o
AR lib/xz/built-in.a
CC fs/kernfs/dir.o
CC sound/core/timer.o
CC drivers/acpi/acpica/dswstate.o
CC drivers/pci/pci-driver.o
CC drivers/pci/search.o
CC arch/x86/events/intel/uncore_snb.o
CC arch/x86/events/intel/uncore_snbep.o
AR kernel/power/built-in.a
CC net/ethtool/rss.o
AR sound/atmel/built-in.a
CC net/ethtool/linkstate.o
CC fs/kernfs/file.o
CC kernel/rcu/tree.o
AR arch/x86/mm/built-in.a
CC arch/x86/kernel/cpu/powerflags.o
CC drivers/acpi/acpica/evevent.o
CC net/ipv6/anycast.o
CC lib/buildid.o
CC kernel/irq/proc.o
CC drivers/pci/rom.o
CC sound/hda/hda_bus_type.o
CC arch/x86/kernel/idt.o
CC crypto/ahash.o
CC arch/x86/kernel/irq.o
CC security/selinux/ss/context.o
CC lib/clz_tab.o
AR drivers/pnp/built-in.a
CC net/ethtool/debug.o
CC fs/proc/fd.o
CC sound/core/hrtimer.o
CC net/xfrm/xfrm_replay.o
CC net/ipv6/netfilter/nf_defrag_ipv6_hooks.o
CC drivers/pci/setup-res.o
CC drivers/acpi/acpica/evgpe.o
CC crypto/shash.o
CC kernel/rcu/rcu_segcblist.o
CC lib/cmdline.o
AR net/ipv4/netfilter/built-in.a
CC drivers/acpi/utils.o
CC security/selinux/netlabel.o
CC net/ipv4/ip_input.o
CC crypto/akcipher.o
CC kernel/irq/migration.o
CC net/netfilter/nfnetlink_log.o
AR net/unix/built-in.a
CC drivers/acpi/acpica/evgpeblk.o
CC sound/hda/hdac_bus.o
CC drivers/virtio/virtio_anchor.o
CC arch/x86/kernel/irq_32.o
AR kernel/livepatch/built-in.a
CC net/ipv4/ip_fragment.o
CC arch/x86/kernel/cpu/topology.o
CC arch/x86/kernel/dumpstack_32.o
CC lib/cpumask.o
CC net/sched/cls_cgroup.o
CC net/xfrm/xfrm_device.o
CC drivers/tty/vt/vt_ioctl.o
CC net/ipv6/ip6_output.o
CC fs/kernfs/symlink.o
CC drivers/virtio/virtio_pci_modern_dev.o
CC fs/proc/proc_tty.o
CC fs/sysfs/file.o
CC io_uring/openclose.o
CC mm/vmstat.o
CC block/blk-mq-sysfs.o
CC lib/ctype.o
AR sound/x86/built-in.a
CC net/netfilter/nf_conntrack_core.o
CC mm/backing-dev.o
CC net/ipv4/ip_forward.o
CC net/xfrm/xfrm_nat_keepalive.o
CC net/ethtool/wol.o
CC drivers/char/hw_random/core.o
CC kernel/irq/cpuhotplug.o
CC drivers/acpi/acpica/evgpeinit.o
AR drivers/iommu/amd/built-in.a
AR drivers/iommu/intel/built-in.a
CC drivers/pci/irq.o
AR drivers/iommu/arm/arm-smmu/built-in.a
AR drivers/iommu/arm/arm-smmu-v3/built-in.a
AR drivers/iommu/arm/built-in.a
CC drivers/pci/vpd.o
AR drivers/iommu/iommufd/built-in.a
CC drivers/iommu/iommu.o
CC drivers/char/agp/backend.o
CC sound/core/pcm.o
CC drivers/iommu/iommu-traces.o
CC drivers/acpi/acpica/evgpeutil.o
CC sound/hda/hdac_device.o
CC lib/dec_and_lock.o
CC net/ethtool/features.o
CC drivers/tty/hvc/hvc_console.o
AR net/packet/built-in.a
CC drivers/tty/serial/8250/8250_core.o
CC drivers/tty/serial/serial_core.o
CC drivers/tty/serial/serial_base_bus.o
CC drivers/tty/serial/8250/8250_platform.o
CC crypto/sig.o
AR sound/xen/built-in.a
CC drivers/virtio/virtio_pci_legacy_dev.o
CC net/ipv4/ip_options.o
CC lib/decompress.o
CC net/sunrpc/auth_gss/auth_gss.o
CC fs/proc/cmdline.o
CC lib/decompress_bunzip2.o
CC drivers/tty/serial/8250/8250_pnp.o
AR fs/kernfs/built-in.a
CC net/netfilter/nf_conntrack_standalone.o
CC net/ipv6/netfilter/nf_conntrack_reasm.o
CC net/xfrm/xfrm_algo.o
CC fs/sysfs/dir.o
CC block/blk-mq-cpumap.o
CC drivers/acpi/acpica/evglock.o
CC drivers/tty/vt/vc_screen.o
CC drivers/acpi/reboot.o
CC kernel/irq/pm.o
CC drivers/char/agp/generic.o
CC net/sched/ematch.o
AR security/selinux/built-in.a
CC drivers/char/hw_random/intel-rng.o
AR security/built-in.a
CC io_uring/sqpoll.o
CC fs/proc/consoles.o
CC arch/x86/kernel/cpu/proc.o
CC drivers/iommu/iommu-sysfs.o
CC sound/core/pcm_native.o
CC sound/hda/hdac_sysfs.o
CC mm/mm_init.o
CC drivers/pci/setup-bus.o
AR sound/virtio/built-in.a
CC kernel/dma/mapping.o
CC drivers/char/mem.o
CC arch/x86/kernel/time.o
CC drivers/tty/serial/serial_ctrl.o
CC drivers/virtio/virtio_pci_modern.o
CC drivers/virtio/virtio_pci_common.o
CC drivers/acpi/acpica/evhandler.o
CC arch/x86/events/intel/uncore_discovery.o
CC drivers/char/agp/isoch.o
CC crypto/kpp.o
AR drivers/gpu/host1x/built-in.a
CC drivers/connector/cn_queue.o
CC fs/sysfs/symlink.o
CC lib/decompress_inflate.o
AR drivers/tty/hvc/built-in.a
CC drivers/acpi/acpica/evmisc.o
CC arch/x86/kernel/ioport.o
CC drivers/pci/vc.o
CC net/core/dev_addr_lists.o
CC net/ethtool/privflags.o
CC drivers/tty/serial/8250/8250_rsa.o
CC drivers/tty/serial/8250/8250_port.o
AR drivers/gpu/drm/tests/built-in.a
AR drivers/gpu/drm/arm/built-in.a
AR drivers/gpu/drm/clients/built-in.a
CC drivers/gpu/drm/display/drm_display_helper_mod.o
CC drivers/gpu/drm/ttm/ttm_tt.o
CC block/blk-mq-sched.o
CC fs/proc/cpuinfo.o
CC kernel/irq/msi.o
CC drivers/gpu/drm/i915/i915_config.o
CC drivers/tty/serial/8250/8250_dma.o
CC drivers/char/hw_random/amd-rng.o
CC fs/sysfs/mount.o
CC net/ipv6/netfilter/nf_reject_ipv6.o
CC drivers/acpi/acpica/evregion.o
CC drivers/tty/vt/selection.o
CC block/ioctl.o
CC drivers/gpu/drm/display/drm_dp_dual_mode_helper.o
CC lib/decompress_unlz4.o
CC sound/sound_core.o
CC net/ipv4/ip_output.o
CC drivers/gpu/drm/i915/i915_driver.o
CC sound/hda/hdac_regmap.o
CC net/xfrm/xfrm_user.o
AR net/sched/built-in.a
CC drivers/gpu/drm/display/drm_dp_helper.o
CC io_uring/xattr.o
CC net/ipv6/netfilter/ip6t_ipv6header.o
CC drivers/char/agp/amd64-agp.o
CC drivers/tty/vt/keyboard.o
CC arch/x86/events/intel/cstate.o
CC kernel/irq/affinity.o
CC fs/proc/devices.o
CC kernel/dma/direct.o
CC drivers/virtio/virtio_pci_legacy.o
CC sound/hda/hdac_controller.o
ASN.1 crypto/rsapubkey.asn1.[ch]
ASN.1 crypto/rsaprivkey.asn1.[ch]
CC drivers/iommu/dma-iommu.o
CC crypto/rsa.o
CC kernel/irq/matrix.o
CC drivers/acpi/acpica/evrgnini.o
CC drivers/pci/mmap.o
CC fs/proc/interrupts.o
CC net/netfilter/nf_conntrack_expect.o
CC drivers/char/hw_random/geode-rng.o
CC drivers/connector/connector.o
CC mm/percpu.o
CC net/ethtool/rings.o
CC lib/decompress_unlzma.o
CC sound/hda/hdac_stream.o
CC drivers/gpu/drm/ttm/ttm_bo.o
CC fs/sysfs/group.o
CC arch/x86/kernel/cpu/feat_ctl.o
CC net/sunrpc/clnt.o
AR drivers/tty/ipwireless/built-in.a
CC drivers/acpi/acpica/evsci.o
AR net/wireless/tests/built-in.a
CC net/wireless/core.o
CC drivers/char/random.o
CC lib/decompress_unlzo.o
CC net/ipv6/netfilter/ip6t_REJECT.o
CC mm/slab_common.o
CC net/ipv6/ip6_input.o
CC drivers/tty/serial/8250/8250_dwlib.o
AR drivers/gpu/vga/built-in.a
CC drivers/tty/tty_io.o
CC net/ethtool/channels.o
AR drivers/gpu/drm/renesas/rcar-du/built-in.a
AR drivers/gpu/drm/omapdrm/built-in.a
AR drivers/gpu/drm/renesas/rz-du/built-in.a
CC net/sunrpc/auth_gss/gss_generic_token.o
CC kernel/entry/common.o
AR drivers/gpu/drm/renesas/built-in.a
CC fs/proc/loadavg.o
CC block/genhd.o
CC drivers/base/power/sysfs.o
CC drivers/virtio/virtio_pci_admin_legacy_io.o
CC crypto/rsa_helper.o
CC drivers/char/agp/intel-agp.o
CC drivers/pci/devres.o
CC arch/x86/kernel/cpu/intel.o
CC crypto/rsa-pkcs1pad.o
AR arch/x86/events/intel/built-in.a
CC drivers/char/hw_random/via-rng.o
AR arch/x86/events/built-in.a
CC drivers/acpi/acpica/evxface.o
CC drivers/acpi/nvs.o
CC io_uring/nop.o
CC drivers/tty/serial/serial_port.o
CC net/ipv4/ip_sockglue.o
CC kernel/dma/ops_helpers.o
CC sound/core/pcm_lib.o
AR kernel/rcu/built-in.a
CC block/ioprio.o
AR fs/sysfs/built-in.a
CC net/ethtool/coalesce.o
CC drivers/tty/vt/vt.o
CC net/sunrpc/xprt.o
CC lib/decompress_unxz.o
CC drivers/gpu/drm/ttm/ttm_bo_util.o
CC drivers/connector/cn_proc.o
AR drivers/char/hw_random/built-in.a
AR drivers/gpu/drm/tilcdc/built-in.a
CC drivers/iommu/iova.o
CC net/core/dst.o
CC fs/proc/meminfo.o
CC drivers/base/power/generic_ops.o
CC net/netfilter/nf_conntrack_helper.o
CC sound/hda/array.o
CC sound/last.o
CC drivers/tty/serial/earlycon.o
CC drivers/tty/serial/8250/8250_pcilib.o
AR kernel/irq/built-in.a
CC drivers/acpi/acpica/evxfevnt.o
CC lib/decompress_unzstd.o
CC drivers/virtio/virtio_input.o
CC net/sunrpc/auth_gss/gss_mech_switch.o
CC drivers/gpu/drm/i915/i915_drm_client.o
CC arch/x86/kernel/dumpstack.o
CC arch/x86/kernel/cpu/tsx.o
CC kernel/dma/remap.o
CC drivers/char/agp/intel-gtt.o
CC sound/hda/hdmi_chmap.o
CC drivers/pci/proc.o
CC crypto/acompress.o
CC drivers/gpu/drm/ttm/ttm_bo_vm.o
CC kernel/module/main.o
AR net/ipv6/netfilter/built-in.a
CC net/ipv4/inet_hashtables.o
CC drivers/gpu/drm/i915/i915_getparam.o
CC fs/devpts/inode.o
CC io_uring/fs.o
CC drivers/base/power/common.o
CC drivers/gpu/drm/display/drm_dp_mst_topology.o
CC kernel/entry/syscall_user_dispatch.o
CC drivers/acpi/acpica/evxfgpe.o
CC fs/netfs/buffered_read.o
CC drivers/tty/n_tty.o
CC block/badblocks.o
CC lib/dump_stack.o
CC net/core/netevent.o
AR net/mac80211/tests/built-in.a
CC net/mac80211/main.o
CC drivers/pci/pci-sysfs.o
CC drivers/tty/serial/8250/8250_early.o
CC arch/x86/kernel/cpu/intel_epb.o
CC fs/proc/stat.o
CC fs/netfs/buffered_write.o
CC fs/netfs/direct_read.o
AR drivers/iommu/built-in.a
COPY drivers/tty/vt/defkeymap.c
CC net/sunrpc/auth_gss/svcauth_gss.o
AR kernel/dma/built-in.a
CC sound/core/pcm_misc.o
CC drivers/char/misc.o
CC net/ethtool/pause.o
CC drivers/virtio/virtio_dma_buf.o
CC net/ipv6/addrconf.o
CC net/sunrpc/socklib.o
CC drivers/acpi/acpica/evxfregn.o
AR drivers/connector/built-in.a
CC drivers/base/power/qos.o
CC drivers/tty/tty_ioctl.o
AR net/xfrm/built-in.a
CC net/ethtool/eee.o
CC net/ipv4/inet_timewait_sock.o
AR kernel/entry/built-in.a
CC drivers/pci/slot.o
CC drivers/gpu/drm/ttm/ttm_module.o
CC crypto/scompress.o
CC arch/x86/kernel/cpu/amd.o
CC drivers/gpu/drm/virtio/virtgpu_drv.o
AR drivers/gpu/drm/imx/built-in.a
CC kernel/module/strict_rwx.o
CC mm/compaction.o
AR fs/devpts/built-in.a
CC sound/hda/trace.o
CC lib/earlycpio.o
CC net/wireless/sysfs.o
CC net/wireless/radiotap.o
AR drivers/char/agp/built-in.a
CC net/sunrpc/xprtsock.o
CC net/netfilter/nf_conntrack_proto.o
CC lib/extable.o
CC fs/proc/uptime.o
CC net/netlabel/netlabel_user.o
CC drivers/acpi/acpica/exconcat.o
CC drivers/gpu/drm/i915/i915_ioctl.o
CC drivers/tty/serial/8250/8250_exar.o
CC io_uring/splice.o
CC drivers/acpi/wakeup.o
CC block/blk-rq-qos.o
CC net/netlabel/netlabel_kapi.o
AR drivers/virtio/built-in.a
CC drivers/gpu/drm/ttm/ttm_execbuf_util.o
CC drivers/char/virtio_console.o
CC drivers/gpu/drm/display/drm_dsc_helper.o
CC sound/core/pcm_memory.o
CC arch/x86/kernel/nmi.o
CC drivers/acpi/acpica/exconfig.o
CC lib/flex_proportions.o
CC net/mac80211/status.o
CC drivers/acpi/acpica/exconvrt.o
CC arch/x86/kernel/cpu/hygon.o
CC drivers/pci/pci-acpi.o
CC block/disk-events.o
CC drivers/gpu/drm/display/drm_hdcp_helper.o
CC lib/idr.o
CC drivers/gpu/drm/virtio/virtgpu_kms.o
CC fs/proc/util.o
CC fs/netfs/direct_write.o
CC drivers/tty/vt/consolemap.o
CC net/ethtool/tsinfo.o
CC net/core/neighbour.o
CC fs/ext4/balloc.o
CC crypto/algboss.o
CC net/netfilter/nf_conntrack_proto_generic.o
CC sound/core/memalloc.o
CC mm/show_mem.o
CC drivers/gpu/drm/ttm/ttm_range_manager.o
CC net/ipv4/inet_connection_sock.o
CC drivers/base/power/runtime.o
CC drivers/char/hpet.o
CC net/netlabel/netlabel_domainhash.o
AR drivers/gpu/drm/i2c/built-in.a
CC net/ethtool/cabletest.o
CC kernel/time/time.o
CC drivers/block/loop.o
CC sound/hda/hdac_component.o
CC drivers/acpi/acpica/excreate.o
CC drivers/block/virtio_blk.o
CC drivers/gpu/drm/i915/i915_irq.o
CC arch/x86/kernel/cpu/centaur.o
CC net/ipv4/tcp.o
CC io_uring/sync.o
CC drivers/tty/serial/8250/8250_lpss.o
CC kernel/module/kmod.o
CC lib/irq_regs.o
CC drivers/gpu/drm/virtio/virtgpu_gem.o
CC fs/proc/version.o
HOSTCC drivers/tty/vt/conmakehash
CC fs/ext4/bitmap.o
CC drivers/pci/iomap.o
CC lib/is_single_threaded.o
CC block/blk-ia-ranges.o
CC net/rfkill/core.o
CC drivers/acpi/acpica/exdebug.o
CC arch/x86/kernel/cpu/transmeta.o
CC net/mac80211/driver-ops.o
CC arch/x86/kernel/ldt.o
CC kernel/futex/core.o
CC net/ethtool/tunnels.o
CC drivers/gpu/drm/ttm/ttm_resource.o
CC net/sunrpc/auth_gss/gss_rpc_upcall.o
CC kernel/futex/syscalls.o
CC drivers/tty/vt/defkeymap.o
CC sound/hda/hdac_i915.o
CC fs/netfs/iterator.o
CC fs/ext4/block_validity.o
CC crypto/testmgr.o
CC drivers/base/power/wakeirq.o
CC drivers/gpu/drm/i915/i915_mitigations.o
CC sound/core/pcm_timer.o
CC drivers/acpi/acpica/exdump.o
CC mm/interval_tree.o
CC fs/proc/softirqs.o
CC lib/klist.o
CONMK drivers/tty/vt/consolemap_deftbl.c
CC drivers/tty/vt/consolemap_deftbl.o
CC net/netfilter/nf_conntrack_proto_tcp.o
AR drivers/tty/vt/built-in.a
CC kernel/module/tree_lookup.o
CC drivers/tty/serial/8250/8250_mid.o
CC drivers/char/nvram.o
CC drivers/acpi/sleep.o
CC kernel/time/timer.o
CC drivers/gpu/drm/virtio/virtgpu_vram.o
CC drivers/pci/quirks.o
CC net/sunrpc/sched.o
CC io_uring/msg_ring.o
CC arch/x86/kernel/cpu/zhaoxin.o
CC drivers/gpu/drm/i915/i915_module.o
CC net/ipv4/tcp_input.o
CC block/early-lookup.o
CC lib/kobject.o
CC drivers/acpi/acpica/exfield.o
CC drivers/gpu/drm/display/drm_hdmi_helper.o
CC net/core/rtnetlink.o
CC net/netlabel/netlabel_addrlist.o
CC drivers/base/power/main.o
CC crypto/cmac.o
CC net/sunrpc/auth_gss/gss_rpc_xdr.o
CC sound/hda/intel-dsp-config.o
CC net/rfkill/input.o
CC net/ipv6/addrlabel.o
CC fs/proc/namespaces.o
CC sound/core/seq_device.o
CC drivers/acpi/device_sysfs.o
AR drivers/block/built-in.a
CC fs/proc/self.o
CC arch/x86/kernel/cpu/vortex.o
CC kernel/cgroup/cgroup.o
CC kernel/module/kallsyms.o
CC drivers/acpi/acpica/exfldio.o
CC mm/list_lru.o
CC drivers/gpu/drm/ttm/ttm_pool.o
CC kernel/futex/pi.o
CC arch/x86/kernel/setup.o
CC drivers/tty/serial/8250/8250_pci.o
CC drivers/acpi/device_pm.o
CC net/wireless/util.o
CC net/ethtool/fec.o
CC fs/netfs/locking.o
AR drivers/char/built-in.a
CC net/ipv6/route.o
CC drivers/gpu/drm/virtio/virtgpu_display.o
CC lib/kobject_uevent.o
CC arch/x86/kernel/cpu/perfctr-watchdog.o
CC crypto/hmac.o
CC drivers/gpu/drm/i915/i915_params.o
CC net/9p/mod.o
CC block/bounce.o
CC drivers/gpu/drm/display/drm_scdc_helper.o
AR net/rfkill/built-in.a
CC net/netfilter/nf_conntrack_proto_udp.o
CC io_uring/advise.o
CC fs/ext4/dir.o
CC drivers/base/firmware_loader/builtin/main.o
CC sound/hda/intel-nhlt.o
CC drivers/base/firmware_loader/main.o
AR sound/core/built-in.a
CC drivers/acpi/acpica/exmisc.o
CC fs/jbd2/transaction.o
CC fs/ramfs/inode.o
CC fs/proc/thread_self.o
CC block/bsg.o
CC mm/workingset.o
CC kernel/module/procfs.o
CC kernel/futex/requeue.o
CC drivers/base/power/wakeup.o
CC net/netlabel/netlabel_mgmt.o
AR drivers/base/firmware_loader/builtin/built-in.a
CC net/mac80211/sta_info.o
CC net/ethtool/eeprom.o
CC net/9p/client.o
CC drivers/acpi/acpica/exmutex.o
CC net/sunrpc/auth_gss/trace.o
CC arch/x86/kernel/cpu/vmware.o
CC crypto/crypto_null.o
CC drivers/gpu/drm/virtio/virtgpu_vq.o
CC block/blk-cgroup.o
CC drivers/gpu/drm/ttm/ttm_device.o
CC kernel/time/hrtimer.o
CC sound/hda/intel-sdw-acpi.o
CC fs/netfs/main.o
CC drivers/tty/tty_ldisc.o
CC drivers/base/regmap/regmap.o
AR drivers/gpu/drm/display/built-in.a
CC net/wireless/reg.o
AR drivers/base/test/built-in.a
CC io_uring/epoll.o
CC drivers/gpu/drm/i915/i915_pci.o
CC kernel/module/sysfs.o
CC fs/proc/proc_sysctl.o
CC drivers/tty/serial/8250/8250_pericom.o
CC fs/hugetlbfs/inode.o
CC net/ipv4/tcp_output.o
CC lib/logic_pio.o
CC fs/netfs/misc.o
CC drivers/acpi/acpica/exnames.o
CC fs/ramfs/file-mmu.o
CC net/core/utils.o
CC arch/x86/kernel/x86_init.o
CC fs/fat/cache.o
CC kernel/cgroup/rstat.o
CC drivers/gpu/drm/ttm/ttm_sys_manager.o
AR drivers/base/firmware_loader/built-in.a
CC kernel/futex/waitwake.o
CC mm/debug.o
CC mm/gup.o
CC fs/ext4/ext4_jbd2.o
CC crypto/md5.o
CC arch/x86/kernel/cpu/hypervisor.o
AR sound/hda/built-in.a
AR sound/built-in.a
CC net/netfilter/nf_conntrack_proto_icmp.o
CC kernel/cgroup/namespace.o
CC drivers/pci/pci-label.o
CC drivers/acpi/acpica/exoparg1.o
CC net/core/link_watch.o
CC net/ethtool/stats.o
CC arch/x86/kernel/cpu/mshyperv.o
CC net/wireless/scan.o
CC drivers/acpi/proc.o
CC lib/maple_tree.o
AR kernel/module/built-in.a
CC lib/memcat_p.o
AR drivers/tty/serial/8250/built-in.a
AR fs/ramfs/built-in.a
AR drivers/tty/serial/built-in.a
CC net/ipv6/ip6_fib.o
CC drivers/gpu/drm/ttm/ttm_agp_backend.o
CC drivers/tty/tty_buffer.o
CC io_uring/statx.o
CC net/netlabel/netlabel_unlabeled.o
CC drivers/base/power/wakeup_stats.o
CC crypto/sha256_generic.o
CC fs/jbd2/commit.o
CC drivers/tty/tty_port.o
CC drivers/gpu/drm/virtio/virtgpu_fence.o
CC drivers/gpu/drm/i915/i915_scatterlist.o
CC fs/fat/dir.o
CC drivers/acpi/acpica/exoparg2.o
AR kernel/futex/built-in.a
CC arch/x86/kernel/cpu/debugfs.o
CC fs/jbd2/recovery.o
CC crypto/sha512_generic.o
CC block/blk-ioprio.o
CC net/core/filter.o
CC drivers/gpu/drm/virtio/virtgpu_object.o
CC drivers/pci/vgaarb.o
AR fs/hugetlbfs/built-in.a
CC kernel/time/timekeeping.o
CC net/sunrpc/auth_gss/gss_krb5_mech.o
CC net/9p/error.o
CC drivers/acpi/bus.o
CC net/core/sock_diag.o
CC drivers/base/power/trace.o
CC kernel/time/ntp.o
CC drivers/acpi/acpica/exoparg3.o
AR drivers/gpu/drm/ttm/built-in.a
CC drivers/acpi/acpica/exoparg6.o
CC drivers/base/regmap/regcache.o
CC kernel/cgroup/cgroup-v1.o
CC fs/proc/proc_net.o
CC fs/proc/kcore.o
CC net/netfilter/nf_conntrack_extend.o
CC arch/x86/kernel/i8259.o
CC fs/netfs/objects.o
CC arch/x86/kernel/cpu/capflags.o
CC io_uring/timeout.o
AR drivers/gpu/drm/panel/built-in.a
AR arch/x86/kernel/cpu/built-in.a
CC fs/fat/fatent.o
AR drivers/gpu/drm/bridge/analogix/built-in.a
AR drivers/gpu/drm/bridge/cadence/built-in.a
CC net/9p/protocol.o
AR drivers/gpu/drm/bridge/imx/built-in.a
AR drivers/gpu/drm/bridge/synopsys/built-in.a
CC drivers/tty/tty_mutex.o
AR drivers/gpu/drm/bridge/built-in.a
CC drivers/gpu/drm/i915/i915_switcheroo.o
CC net/sunrpc/auth_gss/gss_krb5_seal.o
CC net/ethtool/phc_vclocks.o
CC drivers/base/component.o
CC drivers/base/regmap/regcache-rbtree.o
CC drivers/acpi/acpica/exprep.o
CC block/blk-iolatency.o
CC crypto/sha3_generic.o
CC drivers/gpu/drm/virtio/virtgpu_debugfs.o
CC fs/ext4/extents.o
AR drivers/base/power/built-in.a
CC lib/nmi_backtrace.o
CC kernel/trace/trace_clock.o
CC arch/x86/kernel/irqinit.o
CC fs/jbd2/checkpoint.o
CC drivers/acpi/acpica/exregion.o
CC kernel/cgroup/freezer.o
CC drivers/gpu/drm/i915/i915_sysfs.o
CC net/core/dev_ioctl.o
CC mm/mmap_lock.o
CC net/netlabel/netlabel_cipso_v4.o
AR drivers/pci/built-in.a
CC net/ethtool/mm.o
CC drivers/base/regmap/regcache-flat.o
CC drivers/tty/tty_ldsem.o
CC net/dns_resolver/dns_key.o
CC kernel/bpf/core.o
CC crypto/ecb.o
CC fs/isofs/namei.o
CC drivers/base/regmap/regcache-maple.o
CC fs/proc/vmcore.o
CC drivers/gpu/drm/i915/i915_utils.o
CC net/9p/trans_common.o
CC fs/isofs/inode.o
CC kernel/trace/ring_buffer.o
CC mm/highmem.o
CC kernel/events/core.o
CC drivers/acpi/acpica/exresnte.o
CC fs/netfs/read_collect.o
CC drivers/gpu/drm/virtio/virtgpu_plane.o
CC net/netfilter/nf_conntrack_acct.o
CC net/netlabel/netlabel_calipso.o
CC kernel/trace/trace.o
CC io_uring/fdinfo.o
CC net/sunrpc/auth_gss/gss_krb5_unseal.o
CC fs/ext4/extents_status.o
CC kernel/time/clocksource.o
CC fs/isofs/dir.o
CC fs/fat/file.o
CC crypto/cbc.o
CC crypto/ctr.o
CC net/core/tso.o
CC lib/objpool.o
CC fs/fat/inode.o
CC net/ipv6/ipv6_sockglue.o
CC net/dns_resolver/dns_query.o
CC drivers/acpi/acpica/exresolv.o
CC drivers/tty/tty_baudrate.o
CC arch/x86/kernel/jump_label.o
CC net/9p/trans_fd.o
CC fs/isofs/util.o
CC drivers/base/regmap/regmap-debugfs.o
CC fs/proc/kmsg.o
CC io_uring/cancel.o
CC net/sunrpc/auth.o
CC block/blk-iocost.o
CC kernel/cgroup/legacy_freezer.o
CC fs/jbd2/revoke.o
CC net/ipv4/tcp_timer.o
CC drivers/acpi/glue.o
CC crypto/gcm.o
CC net/ethtool/module.o
CC net/mac80211/wep.o
CC drivers/gpu/drm/i915/intel_clock_gating.o
CC drivers/gpu/drm/virtio/virtgpu_ioctl.o
CC drivers/acpi/acpica/exresop.o
AR drivers/gpu/drm/hisilicon/built-in.a
CC drivers/base/core.o
CC mm/memory.o
CC fs/isofs/rock.o
CC fs/nfs/client.o
CC crypto/ccm.o
CC fs/proc/page.o
CC arch/x86/kernel/irq_work.o
CC drivers/tty/tty_jobctrl.o
CC drivers/gpu/drm/virtio/virtgpu_prime.o
AR net/dns_resolver/built-in.a
CC net/ipv4/tcp_ipv4.o
CC net/netfilter/nf_conntrack_seqadj.o
CC net/ipv6/ndisc.o
CC kernel/time/jiffies.o
CC net/sunrpc/auth_null.o
CC kernel/fork.o
CC net/sunrpc/auth_gss/gss_krb5_wrap.o
AR net/netlabel/built-in.a
CC arch/x86/kernel/probe_roms.o
CC drivers/base/bus.o
CC fs/netfs/read_pgpriv2.o
CC drivers/acpi/acpica/exserial.o
CC kernel/cgroup/pids.o
CC block/mq-deadline.o
AR drivers/base/regmap/built-in.a
CC lib/plist.o
CC fs/jbd2/journal.o
CC net/ethtool/cmis_fw_update.o
CC drivers/gpu/drm/i915/intel_device_info.o
CC kernel/time/timer_list.o
CC fs/exportfs/expfs.o
CC io_uring/waitid.o
CC drivers/acpi/acpica/exstore.o
CC fs/lockd/clntlock.o
CC fs/fat/misc.o
CC fs/nls/nls_base.o
CC net/9p/trans_virtio.o
CC drivers/gpu/drm/virtio/virtgpu_trace_points.o
CC fs/isofs/export.o
AR fs/proc/built-in.a
CC fs/lockd/clntproc.o
CC drivers/tty/n_null.o
CC fs/nls/nls_cp437.o
CC net/mac80211/aead_api.o
CC crypto/aes_generic.o
CC drivers/acpi/scan.o
CC kernel/cgroup/rdma.o
CC fs/lockd/clntxdr.o
CC net/wireless/nl80211.o
CC drivers/acpi/acpica/exstoren.o
CC arch/x86/kernel/sys_ia32.o
CC fs/lockd/host.o
CC net/handshake/alert.o
CC lib/radix-tree.o
CC fs/netfs/read_retry.o
CC net/wireless/mlme.o
CC net/sunrpc/auth_gss/gss_krb5_crypto.o
AR fs/exportfs/built-in.a
CC kernel/events/ring_buffer.o
CC kernel/time/timeconv.o
CC fs/nls/nls_ascii.o
CC net/netfilter/nf_conntrack_proto_icmpv6.o
AR kernel/bpf/built-in.a
CC fs/lockd/svc.o
CC fs/isofs/joliet.o
CC drivers/tty/pty.o
CC net/mac80211/wpa.o
CC drivers/acpi/acpica/exstorob.o
CC fs/nls/nls_iso8859-1.o
CC fs/nfs/dir.o
CC net/ethtool/cmis_cdb.o
CC io_uring/register.o
CC fs/fat/nfs.o
CC drivers/gpu/drm/i915/intel_memory_region.o
CC fs/ext4/file.o
CC kernel/cgroup/cpuset.o
CC kernel/time/timecounter.o
CC drivers/gpu/drm/virtio/virtgpu_submit.o
CC net/handshake/genl.o
CC net/handshake/netlink.o
AR drivers/gpu/drm/mxsfb/built-in.a
CC lib/ratelimit.o
CC kernel/time/alarmtimer.o
CC crypto/crc32c_generic.o
CC net/core/sock_reuseport.o
CC kernel/events/callchain.o
CC fs/nls/nls_utf8.o
CC drivers/acpi/acpica/exsystem.o
CC arch/x86/kernel/ksysfs.o
CC net/core/fib_notifier.o
AR net/9p/built-in.a
CC kernel/cgroup/misc.o
CC drivers/acpi/mipi-disco-img.o
CC mm/mincore.o
CC fs/isofs/compress.o
CC drivers/base/dd.o
CC block/kyber-iosched.o
CC crypto/authenc.o
CC net/ipv6/udp.o
AR drivers/gpu/drm/tiny/built-in.a
CC net/ethtool/pse-pd.o
AR fs/nls/built-in.a
CC fs/netfs/write_collect.o
CC drivers/tty/tty_audit.o
CC drivers/acpi/acpica/extrace.o
CC lib/rbtree.o
CC fs/fat/namei_vfat.o
CC net/handshake/request.o
CC net/mac80211/scan.o
CC io_uring/truncate.o
CC kernel/cgroup/debug.o
CC kernel/trace/trace_output.o
CC net/devres.o
CC net/sunrpc/auth_gss/gss_krb5_keys.o
AR drivers/gpu/drm/virtio/built-in.a
CC net/ethtool/plca.o
AR fs/unicode/built-in.a
CC arch/x86/kernel/bootflag.o
CC net/netfilter/nf_conntrack_netlink.o
CC crypto/authencesn.o
CC fs/ext4/fsmap.o
AR drivers/misc/eeprom/built-in.a
CC fs/lockd/svclock.o
AR drivers/misc/cb710/built-in.a
CC lib/seq_buf.o
AR drivers/misc/ti-st/built-in.a
CC drivers/acpi/acpica/exutils.o
AR drivers/misc/lis3lv02d/built-in.a
CC drivers/tty/sysrq.o
CC net/ipv4/tcp_minisocks.o
AR drivers/misc/cardreader/built-in.a
AR drivers/misc/keba/built-in.a
AR drivers/misc/built-in.a
CC drivers/acpi/resource.o
CC drivers/gpu/drm/i915/intel_pcode.o
CC net/socket.o
CC kernel/time/posix-timers.o
CC net/sunrpc/auth_tls.o
CC net/ipv6/udplite.o
CC fs/nfs/file.o
AR fs/isofs/built-in.a
CC io_uring/memmap.o
CC drivers/gpu/drm/i915/intel_region_ttm.o
CC net/sunrpc/auth_unix.o
AR fs/jbd2/built-in.a
AR drivers/gpu/drm/xlnx/built-in.a
CC net/sunrpc/svc.o
CC net/sunrpc/svcsock.o
CC drivers/gpu/drm/i915/intel_runtime_pm.o
CC kernel/events/hw_breakpoint.o
CC drivers/base/syscore.o
CC net/sysctl_net.o
CC drivers/acpi/acpica/hwacpi.o
CC net/handshake/tlshd.o
CC block/blk-mq-pci.o
CC block/blk-mq-virtio.o
CC arch/x86/kernel/e820.o
CC fs/autofs/init.o
CC fs/9p/vfs_super.o
CC lib/siphash.o
AR fs/hostfs/built-in.a
CC drivers/base/driver.o
CC mm/mlock.o
CC drivers/acpi/acpica/hwesleep.o
CC drivers/acpi/acpica/hwgpe.o
CC fs/debugfs/inode.o
CC net/ethtool/phy.o
CC fs/fat/namei_msdos.o
CC fs/netfs/write_issue.o
CC crypto/lzo.o
CC io_uring/io-wq.o
AR net/sunrpc/auth_gss/built-in.a
AR drivers/tty/built-in.a
CC block/blk-mq-debugfs.o
CC kernel/events/uprobes.o
CC lib/string.o
CC net/ipv6/raw.o
CC block/blk-pm.o
CC kernel/time/posix-cpu-timers.o
CC fs/nfs/getroot.o
AR kernel/cgroup/built-in.a
CC net/core/xdp.o
CC net/sunrpc/svcauth.o
CC kernel/time/posix-clock.o
CC io_uring/futex.o
CC fs/ext4/fsync.o
CC fs/autofs/inode.o
CC drivers/acpi/acpica/hwregs.o
CC net/ipv6/icmp.o
CC kernel/time/itimer.o
CC kernel/trace/trace_seq.o
CC lib/timerqueue.o
CC fs/lockd/svcshare.o
CC net/wireless/ibss.o
CC fs/9p/vfs_inode.o
AR drivers/gpu/drm/gud/built-in.a
CC fs/9p/vfs_inode_dotl.o
CC io_uring/napi.o
CC lib/union_find.o
CC drivers/base/class.o
CC lib/vsprintf.o
CC drivers/gpu/drm/i915/intel_sbi.o
CC net/handshake/trace.o
CC net/ipv4/tcp_cong.o
CC lib/win_minmax.o
CC crypto/lzo-rle.o
CC arch/x86/kernel/pci-dma.o
CC drivers/acpi/acpica/hwsleep.o
CC drivers/acpi/acpi_processor.o
CC net/ipv4/tcp_metrics.o
CC kernel/trace/trace_stat.o
CC fs/debugfs/file.o
AR fs/fat/built-in.a
AR drivers/gpu/drm/solomon/built-in.a
CC kernel/time/clockevents.o
CC mm/mmap.o
CC kernel/time/tick-common.o
CC block/holder.o
AR net/ethtool/built-in.a
CC kernel/trace/trace_printk.o
CC [M] drivers/gpu/drm/scheduler/sched_main.o
CC net/mac80211/offchannel.o
CC net/netfilter/nf_conntrack_ftp.o
CC fs/autofs/root.o
CC kernel/trace/pid_list.o
HOSTCC drivers/gpu/drm/xe/xe_gen_wa_oob
CC net/core/flow_offload.o
CC drivers/acpi/acpica/hwvalid.o
CC crypto/rng.o
AR fs/netfs/built-in.a
CC fs/nfs/inode.o
CC fs/tracefs/inode.o
GEN xe_wa_oob.c xe_wa_oob.h
CC arch/x86/kernel/quirks.o
CC [M] drivers/gpu/drm/xe/xe_bb.o
CC [M] fs/efivarfs/inode.o
CC net/ipv4/tcp_fastopen.o
CC drivers/acpi/acpica/hwxface.o
CC fs/lockd/svcproc.o
CC drivers/base/platform.o
CC drivers/base/cpu.o
CC kernel/time/tick-broadcast.o
CC drivers/acpi/processor_core.o
CC drivers/gpu/drm/i915/intel_step.o
CC fs/9p/vfs_addr.o
CC crypto/drbg.o
CC [M] fs/efivarfs/file.o
CC drivers/gpu/drm/i915/intel_uncore.o
AR block/built-in.a
CC net/sunrpc/svcauth_unix.o
CC [M] fs/efivarfs/super.o
CC kernel/exec_domain.o
CC net/ipv4/tcp_rate.o
CC net/mac80211/ht.o
CC drivers/acpi/acpica/hwxfsleep.o
AR kernel/events/built-in.a
CC fs/ext4/hash.o
CC kernel/panic.o
CC drivers/base/firmware.o
CC drivers/gpu/drm/i915/intel_wakeref.o
CC fs/9p/vfs_file.o
AR io_uring/built-in.a
CC arch/x86/kernel/kdebugfs.o
CC kernel/trace/trace_sched_switch.o
CC net/ipv6/mcast.o
CC fs/autofs/symlink.o
CC [M] drivers/gpu/drm/xe/xe_bo.o
CC net/wireless/sme.o
CC arch/x86/kernel/alternative.o
CC fs/9p/vfs_dir.o
CC drivers/gpu/drm/i915/vlv_sideband.o
CC net/mac80211/agg-tx.o
AR net/handshake/built-in.a
CC net/core/gro.o
AR fs/debugfs/built-in.a
CC fs/tracefs/event_inode.o
CC fs/lockd/svcsubs.o
CC drivers/acpi/acpica/hwpci.o
CC fs/autofs/waitq.o
CC fs/9p/vfs_dentry.o
CC kernel/time/tick-broadcast-hrtimer.o
AR drivers/mfd/built-in.a
CC drivers/acpi/acpica/nsaccess.o
CC net/netfilter/nf_conntrack_irc.o
CC net/ipv6/reassembly.o
CC kernel/time/tick-oneshot.o
CC [M] drivers/gpu/drm/scheduler/sched_fence.o
CC arch/x86/kernel/i8253.o
CC drivers/acpi/acpica/nsalloc.o
CC fs/open.o
CC drivers/gpu/drm/i915/vlv_suspend.o
CC crypto/jitterentropy.o
CC mm/mmu_gather.o
CC net/sunrpc/addr.o
CC fs/lockd/mon.o
CC [M] fs/efivarfs/vars.o
CC crypto/jitterentropy-kcapi.o
CC drivers/acpi/processor_pdc.o
CC drivers/acpi/ec.o
CC fs/ext4/ialloc.o
CC drivers/gpu/drm/i915/soc/intel_dram.o
CC fs/ext4/indirect.o
CC net/mac80211/agg-rx.o
CC drivers/base/init.o
CC fs/autofs/expire.o
CC drivers/gpu/drm/drm_atomic.o
CC net/netfilter/nf_conntrack_sip.o
CC net/sunrpc/rpcb_clnt.o
CC fs/read_write.o
CC arch/x86/kernel/hw_breakpoint.o
CC kernel/time/tick-sched.o
CC drivers/acpi/acpica/nsarguments.o
CC fs/9p/v9fs.o
CC lib/xarray.o
CC [M] drivers/gpu/drm/scheduler/sched_entity.o
CC net/core/netdev-genl.o
CC net/ipv4/tcp_recovery.o
CC crypto/ghash-generic.o
CC net/mac80211/vht.o
CC kernel/cpu.o
CC net/wireless/chan.o
AR fs/tracefs/built-in.a
AR drivers/nfc/built-in.a
CC kernel/trace/trace_nop.o
CC drivers/acpi/acpica/nsconvert.o
CC drivers/base/map.o
LD [M] fs/efivarfs/efivarfs.o
CC drivers/gpu/drm/drm_atomic_uapi.o
CC net/wireless/ethtool.o
CC mm/mprotect.o
CC fs/ext4/inline.o
CC kernel/time/timer_migration.o
CC net/ipv6/tcp_ipv6.o
CC net/core/netdev-genl-gen.o
CC drivers/acpi/acpica/nsdump.o
CC crypto/hash_info.o
CC crypto/rsapubkey.asn1.o
CC crypto/rsaprivkey.asn1.o
AR crypto/built-in.a
CC fs/9p/fid.o
CC fs/autofs/dev-ioctl.o
CC drivers/base/devres.o
LD [M] drivers/gpu/drm/scheduler/gpu-sched.o
CC net/mac80211/he.o
CC drivers/gpu/drm/i915/soc/intel_gmch.o
CC fs/lockd/trace.o
CC arch/x86/kernel/tsc.o
CC drivers/acpi/acpica/nseval.o
CC drivers/acpi/dock.o
CC drivers/acpi/acpica/nsinit.o
CC net/wireless/mesh.o
CC net/ipv4/tcp_ulp.o
CC net/core/gso.o
CC net/netfilter/nf_nat_core.o
CC fs/file_table.o
CC [M] drivers/gpu/drm/xe/xe_bo_evict.o
CC fs/lockd/xdr.o
CC kernel/trace/blktrace.o
CC net/sunrpc/timer.o
CC net/netfilter/nf_nat_proto.o
AR drivers/dax/hmem/built-in.a
AR drivers/dax/built-in.a
CC kernel/exit.o
CC fs/lockd/clnt4xdr.o
CC lib/lockref.o
CC drivers/acpi/acpica/nsload.o
CC fs/ext4/inode.o
CC fs/nfs/super.o
CC net/wireless/ap.o
CC net/ipv6/ping.o
CC fs/9p/xattr.o
CC drivers/base/attribute_container.o
CC [M] drivers/gpu/drm/xe/xe_devcoredump.o
CC kernel/time/vsyscall.o
CC drivers/gpu/drm/drm_auth.o
CC lib/bcd.o
CC kernel/time/timekeeping_debug.o
CC net/mac80211/s1g.o
CC mm/mremap.o
AR fs/autofs/built-in.a
CC lib/sort.o
CC drivers/base/transport_class.o
CC [M] drivers/gpu/drm/xe/xe_device.o
CC drivers/acpi/acpica/nsnames.o
CC lib/parser.o
CC drivers/gpu/drm/i915/soc/intel_pch.o
CC net/ipv4/tcp_offload.o
CC fs/nfs/io.o
CC fs/ext4/ioctl.o
CC net/wireless/trace.o
CC net/sunrpc/xdr.o
CC arch/x86/kernel/tsc_msr.o
CC drivers/acpi/acpica/nsobject.o
CC kernel/trace/trace_events.o
CC drivers/gpu/drm/drm_blend.o
CC drivers/base/topology.o
CC drivers/base/container.o
CC kernel/time/namespace.o
CC drivers/gpu/drm/drm_bridge.o
CC net/core/net-sysfs.o
CC fs/nfs/direct.o
CC lib/debug_locks.o
CC drivers/base/property.o
CC drivers/acpi/pci_root.o
CC drivers/gpu/drm/drm_cache.o
AR fs/9p/built-in.a
CC net/core/hotdata.o
CC net/wireless/ocb.o
CC kernel/softirq.o
CC lib/random32.o
CC net/netfilter/nf_nat_helper.o
CC net/sunrpc/sunrpc_syms.o
CC drivers/base/cacheinfo.o
CC net/ipv4/tcp_plb.o
CC fs/lockd/xdr4.o
CC net/ipv6/exthdrs.o
CC arch/x86/kernel/io_delay.o
CC drivers/acpi/acpica/nsparse.o
CC drivers/dma-buf/dma-buf.o
AR drivers/cxl/core/built-in.a
AR drivers/cxl/built-in.a
CC mm/msync.o
CC net/sunrpc/cache.o
CC net/wireless/pmsr.o
CC net/core/netdev_rx_queue.o
CC lib/bust_spinlocks.o
CC drivers/gpu/drm/drm_color_mgmt.o
CC drivers/gpu/drm/i915/soc/intel_rom.o
CC arch/x86/kernel/rtc.o
CC arch/x86/kernel/resource.o
CC fs/nfs/pagelist.o
AS arch/x86/kernel/irqflags.o
CC drivers/acpi/acpica/nspredef.o
AR kernel/time/built-in.a
CC drivers/gpu/drm/i915/i915_memcpy.o
CC drivers/acpi/acpica/nsprepkg.o
CC drivers/acpi/pci_link.o
CC net/mac80211/ibss.o
CC kernel/trace/trace_export.o
CC drivers/acpi/acpica/nsrepair.o
CC [M] drivers/gpu/drm/xe/xe_device_sysfs.o
CC [M] drivers/gpu/drm/xe/xe_dma_buf.o
CC drivers/macintosh/mac_hid.o
CC lib/kasprintf.o
GEN net/wireless/shipped-certs.c
CC mm/page_vma_mapped.o
CC net/core/net-procfs.o
CC fs/super.o
CC net/netfilter/nf_nat_masquerade.o
CC kernel/resource.o
CC net/mac80211/iface.o
CC lib/bitmap.o
CC net/mac80211/link.o
CC drivers/gpu/drm/drm_connector.o
CC drivers/gpu/drm/drm_crtc.o
CC drivers/acpi/acpica/nsrepair2.o
CC net/sunrpc/rpc_pipe.o
CC fs/lockd/svc4proc.o
CC drivers/acpi/pci_irq.o
CC kernel/trace/trace_event_perf.o
CC arch/x86/kernel/static_call.o
CC drivers/base/swnode.o
CC arch/x86/kernel/process.o
CC drivers/acpi/acpica/nssearch.o
CC arch/x86/kernel/ptrace.o
AR drivers/macintosh/built-in.a
CC net/ipv4/datagram.o
CC arch/x86/kernel/tls.o
CC kernel/trace/trace_events_filter.o
CC drivers/acpi/acpica/nsutils.o
CC kernel/trace/trace_events_trigger.o
CC net/sunrpc/sysfs.o
CC drivers/dma-buf/dma-fence.o
CC drivers/gpu/drm/i915/i915_mm.o
CC arch/x86/kernel/step.o
CC arch/x86/kernel/i8237.o
CC mm/pagewalk.o
CC [M] drivers/gpu/drm/xe/xe_drm_client.o
CC fs/nfs/read.o
CC drivers/base/auxiliary.o
CC fs/lockd/procfs.o
AR drivers/scsi/pcmcia/built-in.a
CC drivers/scsi/scsi.o
CC lib/scatterlist.o
CC fs/nfs/symlink.o
CC fs/char_dev.o
CC net/sunrpc/svc_xprt.o
CC net/sunrpc/xprtmultipath.o
CC net/ipv6/datagram.o
CC drivers/acpi/acpica/nswalk.o
CC net/core/netpoll.o
CC fs/nfs/unlink.o
CC drivers/base/devtmpfs.o
CC drivers/scsi/hosts.o
CC lib/list_sort.o
CC net/sunrpc/stats.o
CC drivers/gpu/drm/i915/i915_sw_fence.o
CC net/netfilter/nf_nat_ftp.o
CC drivers/acpi/acpica/nsxfeval.o
CC kernel/trace/trace_eprobe.o
CC drivers/dma-buf/dma-fence-array.o
CC net/sunrpc/sysctl.o
CC mm/pgtable-generic.o
CC [M] drivers/gpu/drm/xe/xe_exec.o
CC drivers/acpi/acpica/nsxfname.o
AR fs/lockd/built-in.a
CC arch/x86/kernel/stacktrace.o
CC drivers/scsi/scsi_ioctl.o
CC drivers/gpu/drm/drm_displayid.o
AR drivers/nvme/common/built-in.a
AR drivers/nvme/host/built-in.a
AR drivers/nvme/target/built-in.a
AR drivers/nvme/built-in.a
CC net/netfilter/nf_nat_irc.o
CC net/netfilter/nf_nat_sip.o
CC net/ipv6/ip6_flowlabel.o
CC net/ipv4/raw.o
CC fs/ext4/mballoc.o
CC kernel/trace/trace_kprobe.o
CC net/core/fib_rules.o
CC lib/uuid.o
CC drivers/dma-buf/dma-fence-chain.o
CC arch/x86/kernel/reboot.o
CC drivers/scsi/scsicam.o
CC mm/rmap.o
CC drivers/gpu/drm/i915/i915_sw_fence_work.o
CC fs/nfs/write.o
CC drivers/gpu/drm/i915/i915_syncmap.o
CC drivers/acpi/acpica/nsxfobj.o
CC fs/nfs/namespace.o
CC drivers/acpi/acpica/psargs.o
CC drivers/base/module.o
CC drivers/gpu/drm/i915/i915_user_extensions.o
CC mm/vmalloc.o
CC net/ipv6/inet6_connection_sock.o
CC lib/iov_iter.o
CC net/ipv4/udp.o
CC fs/ext4/migrate.o
CC drivers/gpu/drm/drm_drv.o
CC drivers/ata/libata-core.o
CC net/wireless/shipped-certs.o
CC net/mac80211/rate.o
CC [M] drivers/gpu/drm/xe/xe_execlist.o
CC lib/clz_ctz.o
CC kernel/trace/error_report-traces.o
AR drivers/net/pse-pd/built-in.a
AR drivers/net/phy/qcom/built-in.a
CC drivers/net/phy/mdio-boardinfo.o
CC fs/ext4/mmp.o
CC drivers/gpu/drm/drm_dumb_buffers.o
CC mm/vma.o
CC drivers/base/auxiliary_sysfs.o
CC drivers/firewire/init_ohci1394_dma.o
CC drivers/ata/libata-scsi.o
CC drivers/gpu/drm/i915/i915_debugfs.o
CC drivers/dma-buf/dma-fence-preempt.o
CC drivers/acpi/acpica/psloop.o
CC drivers/scsi/scsi_error.o
CC drivers/cdrom/cdrom.o
CC drivers/net/phy/stubs.o
CC fs/ext4/move_extent.o
CC drivers/base/devcoredump.o
CC arch/x86/kernel/msr.o
CC [M] drivers/gpu/drm/xe/xe_exec_queue.o
CC drivers/dma-buf/dma-fence-unwrap.o
CC arch/x86/kernel/cpuid.o
CC net/ipv4/udplite.o
AR drivers/auxdisplay/built-in.a
CC net/ipv6/udp_offload.o
CC lib/bsearch.o
CC kernel/trace/power-traces.o
CC net/netfilter/x_tables.o
CC drivers/net/mdio/acpi_mdio.o
CC drivers/acpi/acpica/psobject.o
CC drivers/net/phy/mdio_devres.o
CC drivers/net/mdio/fwnode_mdio.o
CC net/ipv4/udp_offload.o
AR drivers/firewire/built-in.a
CC drivers/acpi/acpica/psopcode.o
CC drivers/base/platform-msi.o
CC drivers/acpi/acpi_apd.o
CC net/netfilter/xt_tcpudp.o
CC [M] drivers/gpu/drm/xe/xe_force_wake.o
CC net/ipv6/seg6.o
CC fs/stat.o
CC net/mac80211/michael.o
CC net/netfilter/xt_CONNSECMARK.o
CC kernel/sysctl.o
CC drivers/dma-buf/dma-fence-user-fence.o
CC net/core/net-traces.o
CC net/core/selftests.o
CC lib/find_bit.o
AR drivers/net/pcs/built-in.a
CC drivers/base/physical_location.o
CC drivers/ata/libata-eh.o
CC drivers/pcmcia/cs.o
CC drivers/acpi/acpica/psopinfo.o
CC drivers/acpi/acpica/psparse.o
CC kernel/trace/rpm-traces.o
CC arch/x86/kernel/early-quirks.o
CC net/ipv6/fib6_notifier.o
CC net/mac80211/tkip.o
CC net/ipv4/arp.o
/workspace/kernel/drivers/dma-buf/dma-fence-user-fence.c: In function ‘user_fence_cb’:
/workspace/kernel/drivers/dma-buf/dma-fence-user-fence.c:15:17: error: implicit declaration of function ‘writeq’; did you mean ‘writel’? [-Werror=implicit-function-declaration]
15 | writeq(user_fence->seqno, user_fence->map.vaddr_iomem);
| ^~~~~~
| writel
CC [M] drivers/gpu/drm/xe/xe_ggtt.o
CC drivers/dma-buf/dma-resv.o
cc1: all warnings being treated as errors
make[4]: *** [/workspace/kernel/scripts/Makefile.build:229: drivers/dma-buf/dma-fence-user-fence.o] Error 1
make[4]: *** Waiting for unfinished jobs....
CC fs/exec.o
CC drivers/gpu/drm/drm_edid.o
CC arch/x86/kernel/smp.o
CC drivers/usb/common/common.o
CC drivers/usb/core/usb.o
AR drivers/usb/phy/built-in.a
AR drivers/net/ethernet/3com/built-in.a
CC drivers/net/ethernet/8390/ne2k-pci.o
CC arch/x86/kernel/smpboot.o
CC drivers/gpu/drm/i915/i915_debugfs_params.o
AR net/sunrpc/built-in.a
CC drivers/acpi/acpica/psscope.o
CC drivers/base/trace.o
CC kernel/trace/trace_dynevent.o
CC drivers/net/phy/phy.o
CC drivers/scsi/scsi_lib.o
AR drivers/net/mdio/built-in.a
AR drivers/net/ethernet/adaptec/built-in.a
CC drivers/gpu/drm/drm_eld.o
CC fs/pipe.o
CC mm/process_vm_access.o
CC fs/ext4/namei.o
CC drivers/usb/common/debug.o
CC drivers/pcmcia/socket_sysfs.o
CC lib/llist.o
CC drivers/net/ethernet/8390/8390.o
CC drivers/pcmcia/cardbus.o
CC drivers/acpi/acpica/pstree.o
CC arch/x86/kernel/tsc_sync.o
CC fs/nfs/mount_clnt.o
CC drivers/gpu/drm/drm_encoder.o
CC drivers/gpu/drm/i915/i915_pmu.o
CC lib/lwq.o
AR drivers/cdrom/built-in.a
CC net/core/ptp_classifier.o
CC fs/ext4/page-io.o
CC fs/ext4/readpage.o
AR drivers/usb/common/built-in.a
make[3]: *** [/workspace/kernel/scripts/Makefile.build:478: drivers/dma-buf] Error 2
CC drivers/scsi/constants.o
make[3]: *** Waiting for unfinished jobs....
CC drivers/usb/mon/mon_main.o
CC net/ipv6/rpl.o
CC fs/nfs/nfstrace.o
CC lib/memweight.o
CC fs/nfs/export.o
CC drivers/gpu/drm/i915/gt/gen2_engine_cs.o
CC lib/kfifo.o
CC [M] drivers/gpu/drm/xe/xe_gpu_scheduler.o
CC drivers/acpi/acpica/psutils.o
AR drivers/base/built-in.a
CC net/mac80211/aes_cmac.o
CC drivers/pcmcia/ds.o
CC mm/page_alloc.o
CC kernel/trace/trace_probe.o
CC net/netfilter/xt_NFLOG.o
CC drivers/usb/core/hub.o
CC net/ipv4/icmp.o
CC drivers/scsi/scsi_lib_dma.o
CC net/ipv4/devinet.o
CC fs/namei.o
CC net/mac80211/aes_gmac.o
CC drivers/net/phy/phy-c45.o
CC arch/x86/kernel/setup_percpu.o
CC fs/nfs/sysfs.o
AR drivers/net/ethernet/agere/built-in.a
CC drivers/ata/libata-transport.o
CC drivers/gpu/drm/drm_file.o
CC fs/ext4/resize.o
CC [M] drivers/gpu/drm/xe/xe_gsc.o
AR drivers/net/wireless/admtek/built-in.a
CC drivers/usb/host/pci-quirks.o
AR drivers/net/wireless/ath/built-in.a
CC drivers/acpi/acpica/pswalk.o
CC drivers/ata/libata-trace.o
AR drivers/net/wireless/atmel/built-in.a
AR drivers/net/wireless/broadcom/built-in.a
AR drivers/net/wireless/intel/built-in.a
AR drivers/net/wireless/intersil/built-in.a
AR drivers/net/wireless/marvell/built-in.a
AR drivers/net/wireless/mediatek/built-in.a
CC drivers/usb/mon/mon_stat.o
AR drivers/net/wireless/microchip/built-in.a
AR drivers/net/wireless/purelifi/built-in.a
AR drivers/net/wireless/quantenna/built-in.a
AR drivers/net/wireless/ralink/built-in.a
AR drivers/net/wireless/realtek/built-in.a
AR drivers/net/wireless/rsi/built-in.a
AR drivers/net/wireless/silabs/built-in.a
CC drivers/ata/libata-sata.o
AR drivers/net/wireless/st/built-in.a
CC drivers/net/phy/phy-core.o
AR drivers/net/wireless/ti/built-in.a
AR drivers/net/usb/built-in.a
CC kernel/capability.o
AR drivers/net/wireless/zydas/built-in.a
CC net/mac80211/fils_aead.o
AR drivers/net/wireless/virtual/built-in.a
AR drivers/net/wireless/built-in.a
CC fs/nfs/fs_context.o
CC lib/percpu-refcount.o
CC net/netfilter/xt_SECMARK.o
AR drivers/net/ethernet/8390/built-in.a
CC arch/x86/kernel/mpparse.o
AR drivers/net/ethernet/alacritech/built-in.a
AR drivers/net/ethernet/alteon/built-in.a
AR drivers/net/ethernet/amazon/built-in.a
CC drivers/acpi/acpica/psxface.o
AR drivers/net/ethernet/amd/built-in.a
AR drivers/net/ethernet/aquantia/built-in.a
CC drivers/usb/class/usblp.o
AR drivers/net/ethernet/arc/built-in.a
CC drivers/usb/storage/scsiglue.o
AR drivers/net/ethernet/asix/built-in.a
CC drivers/usb/mon/mon_text.o
AR drivers/net/ethernet/atheros/built-in.a
CC drivers/gpu/drm/drm_fourcc.o
AR drivers/net/ethernet/cadence/built-in.a
CC net/ipv6/ioam6.o
CC drivers/net/ethernet/broadcom/bnx2.o
CC mm/init-mm.o
CC drivers/acpi/acpi_platform.o
CC net/core/netprio_cgroup.o
CC kernel/trace/trace_uprobe.o
CC drivers/scsi/scsi_scan.o
CC drivers/net/phy/phy_device.o
CC drivers/pcmcia/pcmcia_resource.o
CC drivers/gpu/drm/i915/gt/gen6_engine_cs.o
CC drivers/acpi/acpica/rsaddr.o
CC net/mac80211/cfg.o
CC drivers/usb/host/ehci-hcd.o
CC drivers/net/mii.o
CC fs/nfs/nfsroot.o
CC lib/rhashtable.o
CC [M] drivers/gpu/drm/xe/xe_gsc_debugfs.o
CC [M] drivers/gpu/drm/xe/xe_gsc_proxy.o
CC net/ipv6/sysctl_net_ipv6.o
CC mm/memblock.o
AR drivers/usb/misc/built-in.a
AR drivers/net/ethernet/brocade/built-in.a
CC drivers/pcmcia/cistpl.o
CC net/core/netclassid_cgroup.o
CC drivers/net/phy/linkmode.o
AR drivers/net/ethernet/cavium/common/built-in.a
AR drivers/net/ethernet/cavium/thunder/built-in.a
CC fs/ext4/super.o
CC kernel/trace/rethook.o
CC net/netfilter/xt_TCPMSS.o
AR drivers/net/ethernet/cavium/liquidio/built-in.a
CC drivers/acpi/acpica/rscalc.o
CC net/netfilter/xt_conntrack.o
AR drivers/net/ethernet/cavium/octeon/built-in.a
AR drivers/net/ethernet/cavium/built-in.a
CC drivers/usb/mon/mon_bin.o
CC drivers/pcmcia/pcmcia_cis.o
CC lib/base64.o
CC drivers/usb/host/ehci-pci.o
CC drivers/usb/storage/protocol.o
CC net/mac80211/ethtool.o
CC drivers/ata/libata-sff.o
AR drivers/usb/class/built-in.a
CC arch/x86/kernel/trace_clock.o
CC drivers/gpu/drm/i915/gt/gen6_ppgtt.o
CC net/netfilter/xt_policy.o
CC drivers/net/ethernet/broadcom/tg3.o
CC fs/fcntl.o
CC arch/x86/kernel/trace.o
CC kernel/ptrace.o
CC net/ipv6/xfrm6_policy.o
CC drivers/acpi/acpica/rscreate.o
CC fs/ext4/symlink.o
GEN drivers/scsi/scsi_devinfo_tbl.c
AR drivers/net/ethernet/chelsio/built-in.a
CC drivers/net/loopback.o
CC drivers/pcmcia/rsrc_mgr.o
CC drivers/gpu/drm/i915/gt/gen7_renderclear.o
CC fs/nfs/sysctl.o
CC drivers/scsi/scsi_devinfo.o
CC drivers/usb/core/hcd.o
CC net/ipv4/af_inet.o
CC kernel/user.o
CC [M] drivers/gpu/drm/xe/xe_gsc_submit.o
CC mm/slub.o
CC drivers/usb/storage/transport.o
CC drivers/ata/libata-pmp.o
CC lib/once.o
CC lib/refcount.o
CC drivers/acpi/acpica/rsdumpinfo.o
CC drivers/usb/early/ehci-dbgp.o
CC kernel/signal.o
CC drivers/acpi/acpica/rsinfo.o
CC arch/x86/kernel/rethook.o
AR drivers/usb/mon/built-in.a
CC fs/nfs/nfs3super.o
CC drivers/scsi/scsi_sysctl.o
CC drivers/gpu/drm/drm_framebuffer.o
CC drivers/gpu/drm/i915/gt/gen8_engine_cs.o
AR drivers/net/ethernet/cisco/built-in.a
CC drivers/usb/core/urb.o
AR kernel/trace/built-in.a
CC arch/x86/kernel/vmcore_info_32.o
CC drivers/usb/storage/usb.o
CC drivers/ata/libata-acpi.o
CC drivers/usb/storage/initializers.o
CC drivers/ata/libata-pata-timings.o
CC net/ipv6/xfrm6_state.o
CC lib/rcuref.o
CC drivers/acpi/acpica/rsio.o
CC drivers/usb/core/message.o
CC drivers/pcmcia/rsrc_nonstatic.o
CC drivers/net/phy/phy_link_topology.o
CC [M] drivers/gpu/drm/xe/xe_gt.o
CC net/netfilter/xt_state.o
CC drivers/gpu/drm/i915/gt/gen8_ppgtt.o
CC lib/usercopy.o
CC net/core/dst_cache.o
CC arch/x86/kernel/machine_kexec_32.o
CC net/core/gro_cells.o
AR drivers/net/ethernet/cortina/built-in.a
CC net/core/failover.o
CC drivers/ata/ahci.o
CC drivers/acpi/acpica/rsirq.o
CC drivers/net/netconsole.o
CC drivers/usb/core/driver.o
CC drivers/net/virtio_net.o
CC mm/madvise.o
CC drivers/usb/host/ohci-hcd.o
CC drivers/usb/host/ohci-pci.o
CC drivers/acpi/acpica/rslist.o
CC lib/errseq.o
AR drivers/net/ethernet/dec/tulip/built-in.a
AR drivers/net/ethernet/dec/built-in.a
CC drivers/ata/libahci.o
AR drivers/usb/early/built-in.a
CC drivers/usb/core/config.o
CC drivers/net/phy/mdio_bus.o
CC lib/bucket_locks.o
CC drivers/scsi/scsi_proc.o
CC [M] net/netfilter/nf_log_syslog.o
AR drivers/net/ethernet/dlink/built-in.a
CC drivers/gpu/drm/drm_gem.o
CC net/ipv4/igmp.o
CC fs/ext4/sysfs.o
CC net/mac80211/rx.o
CC drivers/acpi/acpica/rsmemory.o
CC drivers/net/net_failover.o
CC drivers/usb/host/uhci-hcd.o
CC drivers/pcmcia/yenta_socket.o
CC drivers/usb/core/file.o
CC net/ipv6/xfrm6_input.o
CC drivers/usb/storage/sierra_ms.o
CC net/ipv4/fib_frontend.o
CC drivers/scsi/scsi_debugfs.o
AS arch/x86/kernel/relocate_kernel_32.o
CC arch/x86/kernel/crash_dump_32.o
CC net/ipv6/xfrm6_output.o
CC fs/ext4/xattr.o
CC [M] net/netfilter/xt_mark.o
CC drivers/acpi/acpica/rsmisc.o
CC [M] drivers/gpu/drm/xe/xe_gt_ccs_mode.o
CC drivers/gpu/drm/i915/gt/intel_breadcrumbs.o
CC lib/generic-radix-tree.o
CC arch/x86/kernel/crash.o
CC drivers/acpi/acpi_pnp.o
CC fs/nfs/nfs3client.o
CC fs/ioctl.o
AR net/core/built-in.a
CC drivers/ata/ata_piix.o
CC net/ipv6/xfrm6_protocol.o
CC drivers/scsi/scsi_trace.o
CC [M] net/netfilter/xt_nat.o
CC fs/ext4/xattr_hurd.o
CC drivers/usb/storage/option_ms.o
CC arch/x86/kernel/module.o
CC net/mac80211/spectmgmt.o
CC drivers/gpu/drm/drm_ioctl.o
CC drivers/usb/storage/usual-tables.o
CC kernel/sys.o
CC fs/readdir.o
CC drivers/acpi/acpica/rsserial.o
CC drivers/ata/pata_amd.o
CC fs/select.o
CC lib/bitmap-str.o
CC drivers/usb/core/buffer.o
CC [M] net/netfilter/xt_LOG.o
CC net/ipv4/fib_semantics.o
CC mm/page_io.o
CC drivers/net/phy/mdio_device.o
AR net/wireless/built-in.a
CC fs/ext4/xattr_trusted.o
CC mm/swap_state.o
CC drivers/usb/core/sysfs.o
CC [M] drivers/gpu/drm/xe/xe_gt_clock.o
CC fs/nfs/nfs3proc.o
CC drivers/acpi/acpica/rsutils.o
CC net/mac80211/tx.o
CC drivers/usb/host/xhci.o
CC net/ipv6/netfilter.o
AR drivers/usb/storage/built-in.a
CC [M] net/netfilter/xt_MASQUERADE.o
AR drivers/pcmcia/built-in.a
CC drivers/scsi/scsi_logging.o
CC arch/x86/kernel/doublefault_32.o
CC [M] net/netfilter/xt_addrtype.o
CC drivers/gpu/drm/drm_lease.o
CC [M] drivers/gpu/drm/xe/xe_gt_freq.o
CC drivers/scsi/scsi_pm.o
CC drivers/acpi/power.o
CC arch/x86/kernel/early_printk.o
CC fs/nfs/nfs3xdr.o
CC net/ipv6/proc.o
CC lib/string_helpers.o
CC mm/swapfile.o
CC drivers/ata/pata_oldpiix.o
CC fs/nfs/nfs3acl.o
CC drivers/acpi/acpica/rsxface.o
CC net/ipv6/syncookies.o
CC fs/ext4/xattr_user.o
CC drivers/net/phy/swphy.o
CC kernel/umh.o
CC fs/nfs/nfs4proc.o
CC drivers/gpu/drm/i915/gt/intel_context.o
CC drivers/acpi/acpica/tbdata.o
CC drivers/gpu/drm/drm_managed.o
CC lib/hexdump.o
CC lib/kstrtox.o
CC arch/x86/kernel/hpet.o
CC net/ipv6/calipso.o
CC drivers/usb/core/endpoint.o
CC drivers/usb/host/xhci-mem.o
CC fs/dcache.o
CC drivers/gpu/drm/i915/gt/intel_context_sseu.o
AR drivers/net/ethernet/emulex/built-in.a
CC drivers/acpi/event.o
CC fs/nfs/nfs4xdr.o
CC drivers/usb/host/xhci-ext-caps.o
CC drivers/ata/pata_sch.o
CC drivers/gpu/drm/drm_mm.o
CC drivers/scsi/scsi_bsg.o
CC fs/inode.o
CC [M] drivers/gpu/drm/xe/xe_gt_idle.o
CC drivers/acpi/acpica/tbfadt.o
CC fs/nfs/nfs4state.o
CC kernel/workqueue.o
CC drivers/ata/pata_mpiix.o
CC kernel/pid.o
CC fs/ext4/fast_commit.o
CC fs/attr.o
AR drivers/net/ethernet/engleder/built-in.a
CC lib/iomap.o
CC drivers/usb/host/xhci-ring.o
CC drivers/acpi/evged.o
CC mm/swap_slots.o
CC drivers/usb/core/devio.o
CC [M] drivers/gpu/drm/xe/xe_gt_mcr.o
CC drivers/net/phy/fixed_phy.o
CC drivers/net/phy/realtek.o
CC drivers/ata/ata_generic.o
AR drivers/net/ethernet/ezchip/built-in.a
CC drivers/usb/host/xhci-hub.o
CC drivers/gpu/drm/i915/gt/intel_engine_cs.o
AR net/netfilter/built-in.a
CC kernel/task_work.o
CC drivers/scsi/scsi_common.o
CC drivers/acpi/acpica/tbfind.o
CC fs/nfs/nfs4renewd.o
CC mm/dmapool.o
CC drivers/acpi/sysfs.o
CC fs/bad_inode.o
CC arch/x86/kernel/amd_nb.o
CC arch/x86/kernel/kvm.o
CC lib/iomap_copy.o
CC fs/ext4/orphan.o
CC net/mac80211/key.o
CC kernel/extable.o
CC drivers/acpi/acpica/tbinstal.o
AR drivers/net/ethernet/fujitsu/built-in.a
AR drivers/net/ethernet/fungible/built-in.a
CC drivers/acpi/property.o
CC fs/ext4/acl.o
CC drivers/usb/core/notify.o
CC lib/devres.o
CC net/ipv4/fib_trie.o
CC drivers/scsi/scsi_transport_spi.o
CC net/ipv6/ah6.o
CC drivers/gpu/drm/i915/gt/intel_engine_heartbeat.o
CC drivers/acpi/acpica/tbprint.o
CC drivers/usb/host/xhci-dbg.o
CC drivers/gpu/drm/drm_mode_config.o
AR drivers/ata/built-in.a
CC lib/check_signature.o
CC drivers/acpi/debugfs.o
CC [M] drivers/gpu/drm/xe/xe_gt_pagefault.o
CC fs/file.o
CC drivers/gpu/drm/drm_mode_object.o
CC mm/hugetlb.o
CC net/mac80211/util.o
CC net/ipv6/esp6.o
CC lib/interval_tree.o
CC drivers/acpi/acpica/tbutils.o
CC drivers/acpi/acpica/tbxface.o
AR drivers/net/ethernet/google/built-in.a
CC fs/ext4/xattr_security.o
CC net/ipv4/fib_notifier.o
CC drivers/usb/host/xhci-trace.o
CC fs/nfs/nfs4super.o
CC [M] drivers/gpu/drm/xe/xe_gt_sysfs.o
AR drivers/net/phy/built-in.a
CC drivers/gpu/drm/i915/gt/intel_engine_pm.o
CC kernel/params.o
CC drivers/gpu/drm/drm_modes.o
CC lib/assoc_array.o
CC mm/mmu_notifier.o
CC drivers/gpu/drm/drm_modeset_lock.o
CC drivers/acpi/acpica/tbxfload.o
CC drivers/acpi/acpica/tbxfroot.o
AR drivers/net/ethernet/huawei/built-in.a
CC drivers/gpu/drm/i915/gt/intel_engine_user.o
CC net/mac80211/parse.o
CC lib/bitrev.o
CC [M] drivers/gpu/drm/xe/xe_gt_throttle.o
CC drivers/usb/core/generic.o
CC net/ipv6/sit.o
CC net/ipv4/inet_fragment.o
CC kernel/kthread.o
CC drivers/usb/host/xhci-debugfs.o
CC arch/x86/kernel/kvmclock.o
CC mm/migrate.o
CC drivers/gpu/drm/drm_plane.o
CC fs/nfs/nfs4file.o
CC drivers/acpi/acpica/utaddress.o
CC drivers/usb/core/quirks.o
CC kernel/sys_ni.o
CC drivers/scsi/virtio_scsi.o
CC net/mac80211/wme.o
CC drivers/net/ethernet/intel/e1000/e1000_main.o
CC drivers/net/ethernet/intel/e1000e/82571.o
CC drivers/acpi/acpi_lpat.o
CC lib/crc-ccitt.o
CC drivers/gpu/drm/i915/gt/intel_execlists_submission.o
CC drivers/gpu/drm/drm_prime.o
CC drivers/usb/core/devices.o
CC drivers/acpi/acpi_pcc.o
CC drivers/acpi/acpica/utalloc.o
AR drivers/net/ethernet/i825xx/built-in.a
CC drivers/net/ethernet/intel/e1000e/ich8lan.o
CC [M] drivers/gpu/drm/xe/xe_gt_tlb_invalidation.o
CC drivers/gpu/drm/i915/gt/intel_ggtt.o
CC drivers/scsi/sd.o
CC net/ipv4/ping.o
CC drivers/net/ethernet/intel/e1000/e1000_hw.o
CC drivers/net/ethernet/intel/e100.o
CC arch/x86/kernel/paravirt.o
CC net/mac80211/chan.o
AR drivers/net/ethernet/microsoft/built-in.a
CC lib/crc16.o
CC drivers/usb/core/phy.o
CC drivers/gpu/drm/drm_print.o
CC drivers/scsi/sr.o
CC drivers/gpu/drm/i915/gt/intel_ggtt_fencing.o
AR drivers/net/ethernet/litex/built-in.a
CC drivers/acpi/ac.o
CC fs/nfs/delegation.o
CC drivers/scsi/sr_ioctl.o
CC kernel/nsproxy.o
CC drivers/acpi/acpica/utascii.o
CC arch/x86/kernel/pvclock.o
CC drivers/net/ethernet/intel/e1000e/80003es2lan.o
CC net/mac80211/trace.o
HOSTCC lib/gen_crc32table
CC drivers/net/ethernet/intel/e1000/e1000_ethtool.o
CC drivers/usb/host/xhci-pci.o
CC lib/xxhash.o
CC drivers/net/ethernet/intel/e1000/e1000_param.o
CC drivers/acpi/acpica/utbuffer.o
CC net/mac80211/mlme.o
CC fs/nfs/nfs4idmap.o
CC mm/page_counter.o
CC arch/x86/kernel/pcspeaker.o
CC net/ipv6/addrconf_core.o
CC [M] drivers/gpu/drm/xe/xe_gt_topology.o
CC lib/genalloc.o
AR drivers/net/ethernet/marvell/octeon_ep/built-in.a
AR drivers/net/ethernet/marvell/octeon_ep_vf/built-in.a
AR drivers/net/ethernet/marvell/octeontx2/built-in.a
AR drivers/net/ethernet/marvell/prestera/built-in.a
CC drivers/net/ethernet/marvell/sky2.o
CC fs/nfs/callback.o
CC drivers/usb/core/port.o
CC drivers/scsi/sr_vendor.o
CC drivers/scsi/sg.o
CC drivers/acpi/button.o
CC fs/filesystems.o
CC drivers/acpi/acpica/utcksum.o
CC drivers/gpu/drm/drm_property.o
CC net/ipv6/exthdrs_core.o
AR drivers/net/ethernet/mellanox/built-in.a
CC lib/percpu_counter.o
AR drivers/net/ethernet/meta/built-in.a
CC drivers/usb/core/hcd-pci.o
CC net/mac80211/tdls.o
CC arch/x86/kernel/check.o
CC kernel/notifier.o
CC [M] drivers/gpu/drm/xe/xe_guc.o
CC drivers/gpu/drm/drm_rect.o
CC net/ipv4/ip_tunnel_core.o
CC drivers/acpi/fan_core.o
CC fs/nfs/callback_xdr.o
CC drivers/acpi/acpica/utcopy.o
CC kernel/ksysfs.o
CC fs/namespace.o
AR drivers/net/ethernet/micrel/built-in.a
CC drivers/acpi/acpica/utexcep.o
CC arch/x86/kernel/uprobes.o
AR fs/ext4/built-in.a
CC fs/seq_file.o
CC net/ipv6/ip6_checksum.o
CC lib/audit.o
CC drivers/acpi/fan_attr.o
CC drivers/gpu/drm/i915/gt/intel_gt.o
CC net/mac80211/ocb.o
CC drivers/net/ethernet/intel/e1000e/mac.o
CC fs/xattr.o
CC mm/hugetlb_cgroup.o
CC mm/early_ioremap.o
CC fs/nfs/callback_proc.o
CC kernel/cred.o
CC [M] drivers/gpu/drm/xe/xe_guc_ads.o
CC net/ipv4/gre_offload.o
CC net/mac80211/airtime.o
CC drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.o
CC drivers/acpi/acpica/utdebug.o
CC drivers/acpi/fan_hwmon.o
CC drivers/usb/core/usb-acpi.o
CC drivers/net/ethernet/intel/e1000e/manage.o
AR drivers/net/ethernet/microchip/built-in.a
CC [M] drivers/gpu/drm/xe/xe_guc_capture.o
CC drivers/scsi/scsi_sysfs.o
AR drivers/usb/host/built-in.a
CC net/ipv4/metrics.o
CC drivers/gpu/drm/i915/gt/intel_gt_ccs_mode.o
CC net/mac80211/eht.o
CC fs/nfs/nfs4namespace.o
CC drivers/acpi/acpica/utdecode.o
CC drivers/acpi/acpi_video.o
CC mm/secretmem.o
AR drivers/net/ethernet/broadcom/built-in.a
CC drivers/gpu/drm/drm_syncobj.o
CC net/ipv6/ip6_icmp.o
CC lib/syscall.o
CC drivers/acpi/video_detect.o
CC net/ipv6/output_core.o
CC fs/nfs/nfs4getroot.o
CC drivers/gpu/drm/i915/gt/intel_gt_clock_utils.o
CC drivers/net/ethernet/intel/e1000e/nvm.o
CC net/ipv4/netlink.o
CC arch/x86/kernel/perf_regs.o
CC net/ipv6/protocol.o
CC kernel/reboot.o
CC net/ipv6/ip6_offload.o
CC lib/errname.o
CC drivers/acpi/acpica/utdelete.o
AR drivers/net/ethernet/mscc/built-in.a
CC fs/libfs.o
CC net/mac80211/led.o
AR drivers/net/ethernet/intel/e1000/built-in.a
CC [M] drivers/gpu/drm/xe/xe_guc_ct.o
AR drivers/usb/core/built-in.a
AR drivers/usb/built-in.a
CC kernel/async.o
CC [M] drivers/gpu/drm/xe/xe_guc_db_mgr.o
CC drivers/gpu/drm/drm_sysfs.o
AR drivers/net/ethernet/myricom/built-in.a
CC drivers/acpi/acpica/uterror.o
CC net/mac80211/pm.o
CC lib/nlattr.o
CC [M] drivers/gpu/drm/xe/xe_guc_hwconfig.o
CC fs/fs-writeback.o
CC net/ipv6/tcpv6_offload.o
CC mm/hmm.o
CC mm/memfd.o
CC fs/nfs/nfs4client.o
CC drivers/net/ethernet/intel/e1000e/phy.o
CC arch/x86/kernel/tracepoint.o
CC drivers/acpi/acpica/uteval.o
CC net/ipv4/nexthop.o
CC fs/pnode.o
CC drivers/gpu/drm/i915/gt/intel_gt_debugfs.o
CC fs/splice.o
CC drivers/gpu/drm/i915/gt/intel_gt_engines_debugfs.o
CC fs/sync.o
AR drivers/scsi/built-in.a
CC drivers/net/ethernet/intel/e1000e/param.o
CC drivers/gpu/drm/drm_trace_points.o
AR drivers/net/ethernet/neterion/built-in.a
AR drivers/net/ethernet/natsemi/built-in.a
CC net/ipv4/udp_tunnel_stub.o
CC drivers/net/ethernet/intel/e1000e/ethtool.o
CC net/ipv6/exthdrs_offload.o
CC [M] drivers/gpu/drm/xe/xe_guc_id_mgr.o
CC kernel/range.o
CC fs/utimes.o
CC drivers/acpi/processor_driver.o
CC fs/nfs/nfs4session.o
CC drivers/gpu/drm/drm_vblank.o
CC drivers/gpu/drm/i915/gt/intel_gt_irq.o
AR drivers/net/ethernet/netronome/built-in.a
CC mm/ptdump.o
AR drivers/net/ethernet/ni/built-in.a
CC drivers/gpu/drm/i915/gt/intel_gt_mcr.o
CC drivers/net/ethernet/intel/e1000e/netdev.o
CC arch/x86/kernel/itmt.o
CC drivers/acpi/acpica/utglobal.o
CC net/mac80211/rc80211_minstrel_ht.o
CC fs/d_path.o
CC net/ipv4/ip_tunnel.o
CC kernel/smpboot.o
CC net/ipv6/inet6_hashtables.o
CC drivers/gpu/drm/i915/gt/intel_gt_pm.o
CC lib/cpu_rmap.o
CC drivers/net/ethernet/nvidia/forcedeth.o
AR drivers/net/ethernet/marvell/built-in.a
CC drivers/acpi/processor_thermal.o
CC drivers/acpi/acpica/uthex.o
CC arch/x86/kernel/umip.o
CC drivers/net/ethernet/intel/e1000e/ptp.o
CC net/ipv4/sysctl_net_ipv4.o
CC net/ipv4/proc.o
CC drivers/gpu/drm/drm_vblank_work.o
CC mm/execmem.o
CC fs/nfs/dns_resolve.o
CC drivers/acpi/acpica/utids.o
CC drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.o
CC net/mac80211/wbrf.o
AR drivers/net/ethernet/oki-semi/built-in.a
CC net/ipv4/fib_rules.o
CC net/ipv6/mcast_snoop.o
CC [M] drivers/gpu/drm/xe/xe_guc_klv_helpers.o
CC fs/stack.o
CC fs/fs_struct.o
CC kernel/ucount.o
CC fs/statfs.o
CC lib/dynamic_queue_limits.o
CC drivers/acpi/processor_idle.o
CC drivers/gpu/drm/i915/gt/intel_gt_pm_irq.o
CC kernel/regset.o
CC fs/fs_pin.o
CC drivers/acpi/acpica/utinit.o
CC arch/x86/kernel/unwind_frame.o
CC [M] drivers/gpu/drm/xe/xe_guc_log.o
CC lib/glob.o
CC drivers/gpu/drm/i915/gt/intel_gt_requests.o
CC drivers/acpi/acpica/utlock.o
CC fs/nfs/nfs4trace.o
CC lib/strncpy_from_user.o
AR drivers/net/ethernet/packetengines/built-in.a
CC kernel/ksyms_common.o
CC kernel/groups.o
CC net/ipv4/ipmr.o
AR mm/built-in.a
CC drivers/acpi/processor_throttling.o
CC lib/strnlen_user.o
CC [M] drivers/gpu/drm/xe/xe_guc_pc.o
CC fs/nfs/nfs4sysctl.o
CC fs/nsfs.o
AR drivers/net/ethernet/qlogic/built-in.a
CC kernel/kcmp.o
CC fs/fs_types.o
CC lib/net_utils.o
CC drivers/acpi/acpica/utmath.o
CC kernel/freezer.o
CC fs/fs_context.o
AR drivers/net/ethernet/qualcomm/emac/built-in.a
AR drivers/net/ethernet/qualcomm/built-in.a
CC drivers/gpu/drm/i915/gt/intel_gt_sysfs.o
CC drivers/acpi/acpica/utmisc.o
CC [M] drivers/gpu/drm/xe/xe_guc_submit.o
CC drivers/gpu/drm/drm_vma_manager.o
CC drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.o
CC lib/sg_pool.o
AR arch/x86/kernel/built-in.a
CC kernel/profile.o
CC drivers/acpi/processor_perflib.o
AR arch/x86/built-in.a
CC drivers/gpu/drm/drm_writeback.o
CC drivers/net/ethernet/realtek/8139too.o
AR drivers/net/ethernet/renesas/built-in.a
CC [M] drivers/gpu/drm/xe/xe_heci_gsc.o
CC drivers/gpu/drm/drm_panel.o
CC drivers/acpi/acpica/utmutex.o
CC kernel/stacktrace.o
CC drivers/net/ethernet/realtek/r8169_main.o
AR drivers/net/ethernet/rdc/built-in.a
CC drivers/acpi/container.o
CC fs/fs_parser.o
CC kernel/dma.o
AR net/ipv6/built-in.a
CC net/ipv4/ipmr_base.o
CC drivers/net/ethernet/realtek/r8169_firmware.o
CC lib/stackdepot.o
CC fs/fsopen.o
CC drivers/gpu/drm/i915/gt/intel_gtt.o
CC drivers/acpi/acpica/utnonansi.o
CC drivers/gpu/drm/drm_pci.o
CC kernel/smp.o
AR drivers/net/ethernet/rocker/built-in.a
AR drivers/net/ethernet/samsung/built-in.a
CC drivers/net/ethernet/realtek/r8169_phy_config.o
CC drivers/acpi/acpica/utobject.o
CC drivers/gpu/drm/drm_debugfs.o
CC kernel/uid16.o
CC drivers/gpu/drm/i915/gt/intel_llc.o
CC drivers/gpu/drm/i915/gt/intel_lrc.o
CC kernel/kallsyms.o
CC [M] drivers/gpu/drm/xe/xe_hw_engine.o
AR drivers/net/ethernet/seeq/built-in.a
CC drivers/acpi/acpica/utosi.o
CC lib/asn1_decoder.o
CC drivers/gpu/drm/i915/gt/intel_migrate.o
CC [M] drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.o
CC drivers/acpi/thermal_lib.o
GEN lib/oid_registry_data.c
CC drivers/gpu/drm/drm_debugfs_crc.o
CC drivers/gpu/drm/i915/gt/intel_mocs.o
AR drivers/net/ethernet/silan/built-in.a
CC net/ipv4/syncookies.o
CC drivers/gpu/drm/drm_panel_orientation_quirks.o
CC drivers/gpu/drm/drm_buddy.o
CC [M] drivers/gpu/drm/xe/xe_hw_engine_group.o
CC lib/ucs2_string.o
CC kernel/acct.o
CC fs/init.o
CC net/ipv4/tunnel4.o
CC drivers/gpu/drm/drm_gem_shmem_helper.o
CC [M] drivers/gpu/drm/xe/xe_hw_fence.o
CC drivers/acpi/acpica/utownerid.o
CC lib/sbitmap.o
CC [M] drivers/gpu/drm/xe/xe_huc.o
CC [M] drivers/gpu/drm/xe/xe_irq.o
CC net/ipv4/ipconfig.o
CC fs/kernel_read_file.o
CC drivers/gpu/drm/i915/gt/intel_ppgtt.o
CC fs/mnt_idmapping.o
AR drivers/net/ethernet/sis/built-in.a
AR drivers/net/ethernet/sfc/built-in.a
CC drivers/gpu/drm/i915/gt/intel_rc6.o
AR drivers/net/ethernet/smsc/built-in.a
CC fs/remap_range.o
AR drivers/net/ethernet/socionext/built-in.a
CC drivers/gpu/drm/i915/gt/intel_region_lmem.o
CC net/ipv4/netfilter.o
CC kernel/vmcore_info.o
CC [M] drivers/gpu/drm/xe/xe_lrc.o
CC drivers/acpi/acpica/utpredef.o
CC kernel/elfcorehdr.o
CC drivers/gpu/drm/i915/gt/intel_renderstate.o
CC kernel/crash_reserve.o
AR drivers/net/ethernet/stmicro/built-in.a
CC drivers/acpi/thermal.o
CC drivers/gpu/drm/drm_atomic_helper.o
CC drivers/acpi/acpica/utresdecode.o
CC net/ipv4/tcp_cubic.o
CC net/ipv4/tcp_sigpool.o
CC fs/pidfs.o
CC drivers/acpi/acpica/utresrc.o
CC fs/buffer.o
AR drivers/net/ethernet/sun/built-in.a
CC drivers/acpi/acpica/utstate.o
CC fs/mpage.o
CC lib/group_cpus.o
CC net/ipv4/cipso_ipv4.o
CC drivers/gpu/drm/i915/gt/intel_reset.o
CC [M] drivers/gpu/drm/xe/xe_migrate.o
CC fs/proc_namespace.o
AR drivers/net/ethernet/tehuti/built-in.a
CC drivers/gpu/drm/i915/gt/intel_ring.o
CC net/ipv4/xfrm4_policy.o
CC lib/fw_table.o
CC drivers/acpi/nhlt.o
CC drivers/acpi/acpi_memhotplug.o
CC drivers/gpu/drm/drm_atomic_state_helper.o
CC drivers/acpi/acpica/utstring.o
AR drivers/net/ethernet/ti/built-in.a
CC drivers/gpu/drm/i915/gt/intel_ring_submission.o
CC net/ipv4/xfrm4_state.o
CC fs/direct-io.o
AR drivers/net/ethernet/nvidia/built-in.a
CC drivers/acpi/ioapic.o
CC drivers/gpu/drm/drm_crtc_helper.o
CC net/ipv4/xfrm4_input.o
AR lib/lib.a
GEN lib/crc32table.h
AR drivers/net/ethernet/vertexcom/built-in.a
CC drivers/gpu/drm/i915/gt/intel_rps.o
AR drivers/net/ethernet/via/built-in.a
CC [M] drivers/gpu/drm/xe/xe_mmio.o
CC drivers/acpi/acpica/utstrsuppt.o
CC drivers/acpi/battery.o
AR drivers/net/ethernet/wangxun/built-in.a
CC [M] drivers/gpu/drm/xe/xe_mocs.o
CC drivers/acpi/acpica/utstrtoul64.o
CC drivers/gpu/drm/i915/gt/intel_sa_media.o
CC net/ipv4/xfrm4_output.o
CC kernel/kexec_core.o
CC drivers/acpi/acpica/utxface.o
CC drivers/acpi/acpica/utxfinit.o
CC drivers/gpu/drm/drm_damage_helper.o
CC lib/oid_registry.o
CC drivers/acpi/acpica/utxferror.o
CC [M] drivers/gpu/drm/xe/xe_module.o
CC drivers/acpi/bgrt.o
[... ~450 lines of kbuild CC/AR/LD output (net, fs, kernel, drivers/gpu/drm/i915, drivers/gpu/drm/xe) trimmed; no compiler diagnostic appears in this excerpt ...]
LD [M] drivers/gpu/drm/xe/xe.o
AR drivers/gpu/drm/i915/built-in.a
AR drivers/gpu/drm/built-in.a
AR drivers/gpu/built-in.a
make[2]: *** [/workspace/kernel/scripts/Makefile.build:478: drivers] Error 2
make[1]: *** [/workspace/kernel/Makefile:1936: .] Error 2
make: *** [/workspace/kernel/Makefile:224: __sub-make] Error 2
run-parts: /workspace/ci/hooks/11-build-32b exited with return code 2
^ permalink raw reply [flat|nested] 52+ messages in thread
* ✓ CI.checksparse: success for UMD direct submission in Xe
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (33 preceding siblings ...)
2024-11-19 0:17 ` ✗ CI.Hooks: failure " Patchwork
@ 2024-11-19 0:19 ` Patchwork
2024-11-19 0:39 ` ✗ CI.BAT: failure " Patchwork
2024-11-19 11:44 ` ✗ CI.FULL: " Patchwork
36 siblings, 0 replies; 52+ messages in thread
From: Patchwork @ 2024-11-19 0:19 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: UMD direct submission in Xe
URL : https://patchwork.freedesktop.org/series/141524/
State : success
== Summary ==
+ trap cleanup EXIT
+ KERNEL=/kernel
+ MT=/root/linux/maintainer-tools
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools /root/linux/maintainer-tools
Cloning into '/root/linux/maintainer-tools'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ make -C /root/linux/maintainer-tools
make: Entering directory '/root/linux/maintainer-tools'
cc -O2 -g -Wextra -o remap-log remap-log.c
make: Leaving directory '/root/linux/maintainer-tools'
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ /root/linux/maintainer-tools/dim sparse --fast 1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6
/root/linux/maintainer-tools/dim: line 2068: sparse: command not found
Sparse version:
Fast mode used, each commit won't be checked separately.
Okay!
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* ✗ CI.BAT: failure for UMD direct submission in Xe
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (34 preceding siblings ...)
2024-11-19 0:19 ` ✓ CI.checksparse: success " Patchwork
@ 2024-11-19 0:39 ` Patchwork
2024-11-19 11:44 ` ✗ CI.FULL: " Patchwork
36 siblings, 0 replies; 52+ messages in thread
From: Patchwork @ 2024-11-19 0:39 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: UMD direct submission in Xe
URL : https://patchwork.freedesktop.org/series/141524/
State : failure
== Summary ==
CI Bug Log - changes from xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6_BAT -> xe-pw-141524v1_BAT
====================================================
Summary
-------
**WARNING**
Minor unknown changes coming with xe-pw-141524v1_BAT need to be verified
manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-141524v1_BAT, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (9 -> 9)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-141524v1_BAT:
### IGT changes ###
#### Warnings ####
* igt@xe_exec_fault_mode@many-basic:
- bat-lnl-2: [DMESG-FAIL][1] ([Intel XE#3466]) -> [INCOMPLETE][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/bat-lnl-2/igt@xe_exec_fault_mode@many-basic.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/bat-lnl-2/igt@xe_exec_fault_mode@many-basic.html
Known issues
------------
Here are the changes found in xe-pw-141524v1_BAT that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit:
- bat-adlp-vf: NOTRUN -> [SKIP][3] ([Intel XE#2229])
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/bat-adlp-vf/igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit.html
#### Possible fixes ####
* igt@kms_frontbuffer_tracking@basic:
- bat-adlp-7: [DMESG-FAIL][4] ([Intel XE#1033]) -> [PASS][5]
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/bat-adlp-7/igt@kms_frontbuffer_tracking@basic.html
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/bat-adlp-7/igt@kms_frontbuffer_tracking@basic.html
* igt@xe_live_ktest@xe_migrate:
- bat-adlp-vf: [DMESG-FAIL][6] ([Intel XE#358]) -> [PASS][7] +1 other test pass
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/bat-adlp-vf/igt@xe_live_ktest@xe_migrate.html
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/bat-adlp-vf/igt@xe_live_ktest@xe_migrate.html
[Intel XE#1033]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1033
[Intel XE#2229]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2229
[Intel XE#3466]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3466
[Intel XE#358]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/358
Build changes
-------------
* Linux: xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6 -> xe-pw-141524v1
IGT_8115: 4942fc57c20f9cb2195e70991c4e4df03dd3db21 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6: 1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6
xe-pw-141524v1: 141524v1
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/index.html
* Re: [RFC PATCH 13/29] drm/xe/mmap: Add mmap support for PCI memory barrier
2024-11-18 23:37 ` [RFC PATCH 13/29] drm/xe/mmap: Add mmap support for PCI memory barrier Matthew Brost
@ 2024-11-19 10:00 ` Christian König
2024-11-19 11:57 ` Joonas Lahtinen
0 siblings, 1 reply; 52+ messages in thread
From: Christian König @ 2024-11-19 10:00 UTC (permalink / raw)
To: Matthew Brost, intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, mihail.atanassov,
steven.price, shashank.sharma
On 19.11.24 at 00:37, Matthew Brost wrote:
> From: Tejas Upadhyay <tejas.upadhyay@intel.com>
>
> To avoid requiring userspace to use MI_MEM_FENCE, we add a
> mechanism for userspace to generate a PCI memory barrier
> with low overhead (an IOCTL call, or a write to VRAM, would
> both add overhead).
>
> This is implemented by memory-mapping an uncached page that
> is backed by MMIO on the dGPU, allowing userspace to write
> to the page without invoking an IOCTL. The MMIO range is
> chosen so that it is not accessible from the PCI bus: the
> MMIO writes themselves are discarded, but the PCI memory
> barrier still takes effect, since the MMIO filtering happens
> after the barrier has done its work.
>
> When the special, predefined offset is detected in mmap(), we
> map the 4K page containing the last page of the doorbell MMIO
> range to userspace for this purpose.
Well that is quite a hack, but don't you still need a memory barrier
instruction? E.g. m_fence?
And why don't you expose the real doorbell instead of the last (unused?)
page of the MMIO region?
Regards,
Christian.
>
> For userspace to query the special offset, we add a flag to
> the mmap_offset ioctl, which needs to be passed as follows:
> struct drm_xe_gem_mmap_offset mmo = {
> .handle = 0, /* this must be 0 */
> .flags = DRM_XE_MMAP_OFFSET_FLAG_PCI_BARRIER,
> };
> igt_ioctl(fd, DRM_IOCTL_XE_GEM_MMAP_OFFSET, &mmo);
> map = mmap(NULL, size, PROT_WRITE, MAP_SHARED, fd, mmo.offset);
>
> Note: Test coverage for this is added by the IGT series at
> https://patchwork.freedesktop.org/series/140368/. The UMD PR
> implementing a test will be attached to this patch once it is
> ready.
>
> V6(MAuld)
> - Move physical mmap to fault handler
> - Modify kernel-doc and attach UMD PR when ready
> V5(MAuld)
> - Return invalid early in case of non 4K PAGE_SIZE
> - Format kernel-doc and add note for 4K PAGE_SIZE HW limit
> V4(MAuld)
> - Add kernel-doc for uapi change
> - Restrict page size to 4K
> V3(MAuld)
> - Remove offset definition from UAPI to be able to change later
> - Edit commit message for special flag addition
> V2(MAuld)
> - Add fault handler with dummy page to handle unplug device
> - Add Build check for special offset to be below normal start page
> - Test d3hot, mapping seems to be valid in d3hot as well
> - Add more info to commit message
>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Cc: Michal Mrozek <michal.mrozek@intel.com>
> Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com>
> Reviewed-by: Matthew Auld <matthew.auld@intel.com>
> ---
> drivers/gpu/drm/xe/xe_bo.c | 16 ++++-
> drivers/gpu/drm/xe/xe_bo.h | 2 +
> drivers/gpu/drm/xe/xe_device.c | 103 ++++++++++++++++++++++++++++++++-
> include/uapi/drm/xe_drm.h | 29 +++++++++-
> 4 files changed, 147 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 96dbc88b1f55..f948262e607f 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -2138,9 +2138,23 @@ int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
> XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
> return -EINVAL;
>
> - if (XE_IOCTL_DBG(xe, args->flags))
> + if (XE_IOCTL_DBG(xe, args->flags &
> + ~DRM_XE_MMAP_OFFSET_FLAG_PCI_BARRIER))
> return -EINVAL;
>
> + if (args->flags & DRM_XE_MMAP_OFFSET_FLAG_PCI_BARRIER) {
> + if (XE_IOCTL_DBG(xe, args->handle))
> + return -EINVAL;
> +
> + if (XE_IOCTL_DBG(xe, PAGE_SIZE > SZ_4K))
> + return -EINVAL;
> +
> + BUILD_BUG_ON(((XE_PCI_BARRIER_MMAP_OFFSET >> XE_PTE_SHIFT) +
> + SZ_4K) >= DRM_FILE_PAGE_OFFSET_START);
> + args->offset = XE_PCI_BARRIER_MMAP_OFFSET;
> + return 0;
> + }
> +
> gem_obj = drm_gem_object_lookup(file, args->handle);
> if (XE_IOCTL_DBG(xe, !gem_obj))
> return -ENOENT;
> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> index 7fa44a0138b0..e7724965d3f1 100644
> --- a/drivers/gpu/drm/xe/xe_bo.h
> +++ b/drivers/gpu/drm/xe/xe_bo.h
> @@ -63,6 +63,8 @@
>
> #define XE_BO_PROPS_INVALID (-1)
>
> +#define XE_PCI_BARRIER_MMAP_OFFSET (0x50 << XE_PTE_SHIFT)
> +
> struct sg_table;
>
> struct xe_bo *xe_bo_alloc(void);
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 930bb2750e2e..f6069db795e7 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -231,12 +231,113 @@ static long xe_drm_compat_ioctl(struct file *file, unsigned int cmd, unsigned lo
> #define xe_drm_compat_ioctl NULL
> #endif
>
> +static void barrier_open(struct vm_area_struct *vma)
> +{
> + drm_dev_get(vma->vm_private_data);
> +}
> +
> +static void barrier_close(struct vm_area_struct *vma)
> +{
> + drm_dev_put(vma->vm_private_data);
> +}
> +
> +static void barrier_release_dummy_page(struct drm_device *dev, void *res)
> +{
> + struct page *dummy_page = (struct page *)res;
> +
> + __free_page(dummy_page);
> +}
> +
> +static vm_fault_t barrier_fault(struct vm_fault *vmf)
> +{
> + struct drm_device *dev = vmf->vma->vm_private_data;
> + struct vm_area_struct *vma = vmf->vma;
> + vm_fault_t ret = VM_FAULT_NOPAGE;
> + pgprot_t prot;
> + int idx;
> +
> + prot = vm_get_page_prot(vma->vm_flags);
> +
> + if (drm_dev_enter(dev, &idx)) {
> + unsigned long pfn;
> +
> +#define LAST_DB_PAGE_OFFSET 0x7ff001
> + pfn = PHYS_PFN(pci_resource_start(to_pci_dev(dev->dev), 0) +
> + LAST_DB_PAGE_OFFSET);
> + ret = vmf_insert_pfn_prot(vma, vma->vm_start, pfn,
> + pgprot_noncached(prot));
> + drm_dev_exit(idx);
> + } else {
> + struct page *page;
> +
> + /* Allocate new dummy page to map all the VA range in this VMA to it*/
> + page = alloc_page(GFP_KERNEL | __GFP_ZERO);
> + if (!page)
> + return VM_FAULT_OOM;
> +
> + /* Set the page to be freed using drmm release action */
> + if (drmm_add_action_or_reset(dev, barrier_release_dummy_page, page))
> + return VM_FAULT_OOM;
> +
> + ret = vmf_insert_pfn_prot(vma, vma->vm_start, page_to_pfn(page),
> + prot);
> + }
> +
> + return ret;
> +}
> +
> +static const struct vm_operations_struct vm_ops_barrier = {
> + .open = barrier_open,
> + .close = barrier_close,
> + .fault = barrier_fault,
> +};
> +
> +static int xe_pci_barrier_mmap(struct file *filp,
> + struct vm_area_struct *vma)
> +{
> + struct drm_file *priv = filp->private_data;
> + struct drm_device *dev = priv->minor->dev;
> +
> + if (vma->vm_end - vma->vm_start > SZ_4K)
> + return -EINVAL;
> +
> + if (is_cow_mapping(vma->vm_flags))
> + return -EINVAL;
> +
> + if (vma->vm_flags & (VM_READ | VM_EXEC))
> + return -EINVAL;
> +
> + vm_flags_clear(vma, VM_MAYREAD | VM_MAYEXEC);
> + vm_flags_set(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | VM_IO);
> + vma->vm_ops = &vm_ops_barrier;
> + vma->vm_private_data = dev;
> + drm_dev_get(vma->vm_private_data);
> +
> + return 0;
> +}
> +
> +static int xe_mmap(struct file *filp, struct vm_area_struct *vma)
> +{
> + struct drm_file *priv = filp->private_data;
> + struct drm_device *dev = priv->minor->dev;
> +
> + if (drm_dev_is_unplugged(dev))
> + return -ENODEV;
> +
> + switch (vma->vm_pgoff) {
> + case XE_PCI_BARRIER_MMAP_OFFSET >> XE_PTE_SHIFT:
> + return xe_pci_barrier_mmap(filp, vma);
> + }
> +
> + return drm_gem_mmap(filp, vma);
> +}
> +
> static const struct file_operations xe_driver_fops = {
> .owner = THIS_MODULE,
> .open = drm_open,
> .release = drm_release_noglobal,
> .unlocked_ioctl = xe_drm_ioctl,
> - .mmap = drm_gem_mmap,
> + .mmap = xe_mmap,
> .poll = drm_poll,
> .read = drm_read,
> .compat_ioctl = xe_drm_compat_ioctl,
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index 4a8a4a63e99c..6490b16b1217 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -811,6 +811,32 @@ struct drm_xe_gem_create {
>
> /**
> * struct drm_xe_gem_mmap_offset - Input of &DRM_IOCTL_XE_GEM_MMAP_OFFSET
> + *
> + * The @flags can be:
> + * - %DRM_XE_MMAP_OFFSET_FLAG_PCI_BARRIER - Query a special offset for use
> + * in a subsequent mmap call. Writing to the returned mmap address will
> + * generate a PCI memory barrier with low overhead (avoiding both an ioctl
> + * call and a write to VRAM, either of which would add overhead), acting
> + * like an MI_MEM_FENCE instruction.
> + *
> + * Note: The mmap size can be at most 4K, due to HW limitations. As a
> + * result, this interface is only supported on CPU architectures with a 4K
> + * page size. The mmap_offset ioctl will detect this and gracefully return
> + * an error, in which case userspace is expected to fall back to a
> + * different method of triggering a barrier.
> + *
> + * Roughly the usage would be as follows:
> + *
> + * .. code-block:: C
> + *
> + * struct drm_xe_gem_mmap_offset mmo = {
> + * .handle = 0, // must be set to 0
> + * .flags = DRM_XE_MMAP_OFFSET_FLAG_PCI_BARRIER,
> + * };
> + *
> + * err = ioctl(fd, DRM_IOCTL_XE_GEM_MMAP_OFFSET, &mmo);
> + * map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo.offset);
> + * map[0] = 0xdeadbeef; // issue barrier
> */
> struct drm_xe_gem_mmap_offset {
> /** @extensions: Pointer to the first extension struct, if any */
> @@ -819,7 +845,8 @@ struct drm_xe_gem_mmap_offset {
> /** @handle: Handle for the object being mapped. */
> __u32 handle;
>
> - /** @flags: Must be zero */
> +#define DRM_XE_MMAP_OFFSET_FLAG_PCI_BARRIER (1 << 0)
> + /** @flags: Flags, see the DRM_XE_MMAP_OFFSET_FLAG_* defines */
> __u32 flags;
>
> /** @offset: The fake offset to use for subsequent mmap call */
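For reference, the usage snippet in the kernel-doc above can be expanded into a self-contained userspace sketch with the error handling the note calls for. The helper names (`map_pci_barrier()`, `barrier_mmap_len_ok()`) and the graceful-fallback structure are illustrative assumptions, not part of this series; only the uAPI names come from the patch, so the device-dependent path is guarded on the header actually providing them.

```c
/* Userspace sketch of the PCI-barrier mapping flow from the uAPI comment.
 * Assumes the xe_drm.h additions from this patch; helper names are
 * illustrative, not part of the series. */
#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#if defined(__has_include) && __has_include(<drm/xe_drm.h>)
#include <drm/xe_drm.h>
#endif

/* The KMD rejects mappings larger than 4K (the SZ_4K check in
 * xe_pci_barrier_mmap()); validate the length before calling mmap(). */
int barrier_mmap_len_ok(size_t len)
{
	return len > 0 && len <= 4096;
}

#ifdef DRM_IOCTL_XE_GEM_MMAP_OFFSET
/* Query the fake offset and map the write-only barrier page. Returns
 * NULL when the kernel or architecture does not support the barrier,
 * in which case the caller falls back to another barrier method. */
uint32_t *map_pci_barrier(int fd)
{
	struct drm_xe_gem_mmap_offset mmo = {
		.handle = 0, /* must be 0 with the PCI_BARRIER flag */
		.flags = DRM_XE_MMAP_OFFSET_FLAG_PCI_BARRIER,
	};
	void *map;

	if (ioctl(fd, DRM_IOCTL_XE_GEM_MMAP_OFFSET, &mmo))
		return NULL; /* e.g. non-4K base page size */

	/* Write-only mapping: the KMD rejects VM_READ and VM_EXEC. */
	map = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED, fd, mmo.offset);
	return map == MAP_FAILED ? NULL : (uint32_t *)map;
}
#endif
```

On architectures without 4K pages the ioctl fails and `map_pci_barrier()` returns NULL, matching the fallback expectation stated in the kernel-doc note; a successful mapping is write-only, consistent with the VM_READ/VM_EXEC rejection in `xe_pci_barrier_mmap()`.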
* ✗ CI.FULL: failure for UMD direct submission in Xe
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
` (35 preceding siblings ...)
2024-11-19 0:39 ` ✗ CI.BAT: failure " Patchwork
@ 2024-11-19 11:44 ` Patchwork
36 siblings, 0 replies; 52+ messages in thread
From: Patchwork @ 2024-11-19 11:44 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 111641 bytes --]
== Series Details ==
Series: UMD direct submission in Xe
URL : https://patchwork.freedesktop.org/series/141524/
State : failure
== Summary ==
CI Bug Log - changes from xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6_full -> xe-pw-141524v1_full
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-141524v1_full absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-141524v1_full, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (4 -> 4)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-141524v1_full:
### IGT changes ###
#### Possible regressions ####
* igt@kms_atomic_interruptible@legacy-setmode@pipe-a-hdmi-a-3:
- shard-bmg: NOTRUN -> [DMESG-WARN][1]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_atomic_interruptible@legacy-setmode@pipe-a-hdmi-a-3.html
* igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
- shard-bmg: [PASS][2] -> [DMESG-FAIL][3]
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
* igt@kms_color@degamma@pipe-a-dp-2:
- shard-bmg: [PASS][4] -> [SKIP][5]
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@kms_color@degamma@pipe-a-dp-2.html
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_color@degamma@pipe-a-dp-2.html
* igt@kms_flip@plain-flip-fb-recreate-interruptible@b-hdmi-a3:
- shard-bmg: NOTRUN -> [INCOMPLETE][6]
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-4/igt@kms_flip@plain-flip-fb-recreate-interruptible@b-hdmi-a3.html
* igt@kms_flip_tiling@flip-change-tiling@pipe-a-dp-2-x-to-linear:
- shard-bmg: NOTRUN -> [DMESG-FAIL][7]
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-5/igt@kms_flip_tiling@flip-change-tiling@pipe-a-dp-2-x-to-linear.html
* igt@kms_force_connector_basic@force-edid:
- shard-bmg: [PASS][8] -> [DMESG-WARN][9] +8 other tests dmesg-warn
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@kms_force_connector_basic@force-edid.html
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_force_connector_basic@force-edid.html
* igt@kms_psr@psr-sprite-render@edp-1:
- shard-lnl: [PASS][10] -> [DMESG-WARN][11] +10 other tests dmesg-warn
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-8/igt@kms_psr@psr-sprite-render@edp-1.html
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-7/igt@kms_psr@psr-sprite-render@edp-1.html
* igt@xe_ccs@large-ctrl-surf-copy:
- shard-adlp: NOTRUN -> [SKIP][12]
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@xe_ccs@large-ctrl-surf-copy.html
* igt@xe_exec_threads@threads-bal-fd-userptr:
- shard-dg2-set2: [PASS][13] -> [FAIL][14]
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@xe_exec_threads@threads-bal-fd-userptr.html
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-433/igt@xe_exec_threads@threads-bal-fd-userptr.html
* igt@xe_fault_injection@vm-create-fail-xe_vm_create_scratch:
- shard-adlp: [PASS][15] -> [ABORT][16]
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-adlp-1/igt@xe_fault_injection@vm-create-fail-xe_vm_create_scratch.html
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@xe_fault_injection@vm-create-fail-xe_vm_create_scratch.html
- shard-lnl: [PASS][17] -> [ABORT][18] +1 other test abort
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-2/igt@xe_fault_injection@vm-create-fail-xe_vm_create_scratch.html
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-8/igt@xe_fault_injection@vm-create-fail-xe_vm_create_scratch.html
#### Warnings ####
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs:
- shard-dg2-set2: [SKIP][19] ([Intel XE#455] / [Intel XE#787]) -> [SKIP][20] +1 other test skip
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-464/igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs.html
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs.html
* igt@kms_draw_crc@draw-method-mmap-wc@rgb565-4tiled:
- shard-bmg: [DMESG-FAIL][21] ([Intel XE#2705]) -> [FAIL][22]
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-7/igt@kms_draw_crc@draw-method-mmap-wc@rgb565-4tiled.html
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_draw_crc@draw-method-mmap-wc@rgb565-4tiled.html
* igt@kms_draw_crc@draw-method-mmap-wc@xrgb2101010-4tiled:
- shard-bmg: [DMESG-WARN][23] ([Intel XE#2705]) -> [DMESG-FAIL][24]
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-7/igt@kms_draw_crc@draw-method-mmap-wc@xrgb2101010-4tiled.html
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_draw_crc@draw-method-mmap-wc@xrgb2101010-4tiled.html
* igt@kms_flip_tiling@flip-change-tiling@pipe-a-dp-2-x-to-x:
- shard-bmg: [DMESG-FAIL][25] ([Intel XE#3468]) -> [SKIP][26]
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-1/igt@kms_flip_tiling@flip-change-tiling@pipe-a-dp-2-x-to-x.html
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-5/igt@kms_flip_tiling@flip-change-tiling@pipe-a-dp-2-x-to-x.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-render:
- shard-bmg: [FAIL][27] ([Intel XE#2333]) -> [DMESG-FAIL][28]
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-render.html
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-render.html
* igt@kms_plane_cursor@viewport@pipe-a-dp-2-size-256:
- shard-bmg: [DMESG-WARN][29] ([Intel XE#3468]) -> [DMESG-FAIL][30]
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-2/igt@kms_plane_cursor@viewport@pipe-a-dp-2-size-256.html
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-7/igt@kms_plane_cursor@viewport@pipe-a-dp-2-size-256.html
* igt@kms_vblank@ts-continuation-dpms-suspend:
- shard-dg2-set2: [DMESG-WARN][31] ([Intel XE#1727]) -> [DMESG-WARN][32]
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_vblank@ts-continuation-dpms-suspend.html
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-436/igt@kms_vblank@ts-continuation-dpms-suspend.html
* igt@xe_exec_threads@threads-bal-shared-vm-userptr:
- shard-bmg: [DMESG-WARN][33] ([Intel XE#2705]) -> [DMESG-WARN][34]
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-7/igt@xe_exec_threads@threads-bal-shared-vm-userptr.html
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@xe_exec_threads@threads-bal-shared-vm-userptr.html
* igt@xe_fault_injection@vm-create-fail-xe_exec_queue_create_bind:
- shard-bmg: [DMESG-WARN][35] -> [ABORT][36]
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-2/igt@xe_fault_injection@vm-create-fail-xe_exec_queue_create_bind.html
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-7/igt@xe_fault_injection@vm-create-fail-xe_exec_queue_create_bind.html
* igt@xe_fault_injection@vm-create-fail-xe_pt_create:
- shard-adlp: [DMESG-WARN][37] ([Intel XE#3086]) -> [ABORT][38]
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-adlp-4/igt@xe_fault_injection@vm-create-fail-xe_pt_create.html
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-8/igt@xe_fault_injection@vm-create-fail-xe_pt_create.html
- shard-dg2-set2: [DMESG-WARN][39] ([Intel XE#3467]) -> [ABORT][40]
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@xe_fault_injection@vm-create-fail-xe_pt_create.html
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-435/igt@xe_fault_injection@vm-create-fail-xe_pt_create.html
* igt@xe_fault_injection@vm-create-fail-xe_vm_create_scratch:
- shard-bmg: [DMESG-WARN][41] ([Intel XE#3467]) -> [ABORT][42] +1 other test abort
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-3/igt@xe_fault_injection@vm-create-fail-xe_vm_create_scratch.html
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-2/igt@xe_fault_injection@vm-create-fail-xe_vm_create_scratch.html
* igt@xe_pm@d3hot-mmap-vram:
- shard-bmg: [DMESG-WARN][43] ([Intel XE#3468]) -> [INCOMPLETE][44]
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@xe_pm@d3hot-mmap-vram.html
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-8/igt@xe_pm@d3hot-mmap-vram.html
* igt@xe_wedged@wedged-at-any-timeout:
- shard-dg2-set2: [SKIP][45] ([Intel XE#1130]) -> [ABORT][46]
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@xe_wedged@wedged-at-any-timeout.html
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-436/igt@xe_wedged@wedged-at-any-timeout.html
* igt@xe_wedged@wedged-mode-toggle:
- shard-bmg: [DMESG-WARN][47] ([Intel XE#3468]) -> [DMESG-WARN][48]
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-2/igt@xe_wedged@wedged-mode-toggle.html
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-7/igt@xe_wedged@wedged-mode-toggle.html
Known issues
------------
Here are the changes found in xe-pw-141524v1_full that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@core_hotunplug@unbind-rebind:
- shard-dg2-set2: [PASS][49] -> [SKIP][50] ([Intel XE#1885])
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@core_hotunplug@unbind-rebind.html
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@core_hotunplug@unbind-rebind.html
* igt@core_hotunplug@unplug-rescan:
- shard-dg2-set2: NOTRUN -> [SKIP][51] ([Intel XE#1885])
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@core_hotunplug@unplug-rescan.html
* igt@fbdev@info:
- shard-dg2-set2: [PASS][52] -> [SKIP][53] ([Intel XE#2134])
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@fbdev@info.html
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@fbdev@info.html
* igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels:
- shard-bmg: NOTRUN -> [SKIP][54] ([Intel XE#2370])
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels.html
* igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip:
- shard-adlp: NOTRUN -> [DMESG-FAIL][55] ([Intel XE#1033])
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip.html
* igt@kms_big_fb@y-tiled-32bpp-rotate-180:
- shard-bmg: NOTRUN -> [SKIP][56] ([Intel XE#1124])
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_big_fb@y-tiled-32bpp-rotate-180.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip:
- shard-dg2-set2: NOTRUN -> [SKIP][57] ([Intel XE#2136]) +19 other tests skip
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html
* igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p:
- shard-bmg: [PASS][58] -> [SKIP][59] ([Intel XE#2314] / [Intel XE#2894])
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-8/igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p.html
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p.html
* igt@kms_bw@connected-linear-tiling-4-displays-3840x2160p:
- shard-bmg: NOTRUN -> [SKIP][60] ([Intel XE#2314] / [Intel XE#2894])
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_bw@connected-linear-tiling-4-displays-3840x2160p.html
* igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs:
- shard-adlp: NOTRUN -> [SKIP][61] ([Intel XE#455] / [Intel XE#787]) +3 other tests skip
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs.html
* igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs@pipe-c-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][62] ([Intel XE#787]) +5 other tests skip
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs@pipe-c-hdmi-a-1.html
* igt@kms_ccs@crc-primary-rotation-180-4-tiled-dg2-rc-ccs:
- shard-bmg: NOTRUN -> [SKIP][63] ([Intel XE#2887]) +1 other test skip
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_ccs@crc-primary-rotation-180-4-tiled-dg2-rc-ccs.html
* igt@kms_ccs@crc-primary-rotation-180-y-tiled-gen12-rc-ccs@pipe-d-dp-4:
- shard-dg2-set2: NOTRUN -> [SKIP][64] ([Intel XE#455] / [Intel XE#787]) +2 other tests skip
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_ccs@crc-primary-rotation-180-y-tiled-gen12-rc-ccs@pipe-d-dp-4.html
* igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-rc-ccs-cc@pipe-a-dp-4:
- shard-dg2-set2: NOTRUN -> [SKIP][65] ([Intel XE#787]) +20 other tests skip
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_ccs@missing-ccs-buffer-4-tiled-mtl-rc-ccs-cc@pipe-a-dp-4.html
* igt@kms_chamelium_color@ctm-max:
- shard-adlp: NOTRUN -> [SKIP][66] ([Intel XE#306])
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@kms_chamelium_color@ctm-max.html
* igt@kms_chamelium_color@ctm-negative:
- shard-bmg: NOTRUN -> [SKIP][67] ([Intel XE#2325])
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_chamelium_color@ctm-negative.html
* igt@kms_chamelium_frames@hdmi-frame-dump:
- shard-adlp: NOTRUN -> [SKIP][68] ([Intel XE#373])
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@kms_chamelium_frames@hdmi-frame-dump.html
* igt@kms_chamelium_hpd@dp-hpd:
- shard-bmg: NOTRUN -> [SKIP][69] ([Intel XE#2252]) +1 other test skip
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_chamelium_hpd@dp-hpd.html
* igt@kms_content_protection@lic-type-1:
- shard-adlp: NOTRUN -> [SKIP][70] ([Intel XE#455])
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@kms_content_protection@lic-type-1.html
* igt@kms_content_protection@srm:
- shard-bmg: NOTRUN -> [FAIL][71] ([Intel XE#1178]) +1 other test fail
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_content_protection@srm.html
* igt@kms_cursor_crc@cursor-random-512x170:
- shard-bmg: NOTRUN -> [SKIP][72] ([Intel XE#2321])
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_cursor_crc@cursor-random-512x170.html
* igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions:
- shard-dg2-set2: [PASS][73] -> [SKIP][74] ([Intel XE#2423] / [i915#2575]) +100 other tests skip
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions.html
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size:
- shard-bmg: NOTRUN -> [DMESG-WARN][75] ([Intel XE#877])
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
* igt@kms_dp_aux_dev:
- shard-dg2-set2: [PASS][76] -> [SKIP][77] ([Intel XE#2423])
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_dp_aux_dev.html
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_dp_aux_dev.html
* igt@kms_dsc@dsc-with-bpc-formats:
- shard-bmg: NOTRUN -> [SKIP][78] ([Intel XE#2244])
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_dsc@dsc-with-bpc-formats.html
* igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bd-dp2-hdmi-a3:
- shard-bmg: [PASS][79] -> [FAIL][80] ([Intel XE#3486])
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-8/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bd-dp2-hdmi-a3.html
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-4/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@bd-dp2-hdmi-a3.html
* igt@kms_flip@2x-flip-vs-expired-vblank@bd-dp2-hdmi-a3:
- shard-bmg: [PASS][81] -> [FAIL][82] ([Intel XE#2882]) +1 other test fail
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-3/igt@kms_flip@2x-flip-vs-expired-vblank@bd-dp2-hdmi-a3.html
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-2/igt@kms_flip@2x-flip-vs-expired-vblank@bd-dp2-hdmi-a3.html
* igt@kms_flip@2x-flip-vs-panning:
- shard-adlp: NOTRUN -> [SKIP][83] ([Intel XE#310])
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@kms_flip@2x-flip-vs-panning.html
* igt@kms_flip@2x-flip-vs-panning@bd-dp2-hdmi-a3:
- shard-bmg: [PASS][84] -> [DMESG-WARN][85] ([Intel XE#877]) +1 other test dmesg-warn
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@kms_flip@2x-flip-vs-panning@bd-dp2-hdmi-a3.html
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-7/igt@kms_flip@2x-flip-vs-panning@bd-dp2-hdmi-a3.html
* igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset:
- shard-bmg: [PASS][86] -> [SKIP][87] ([Intel XE#2316]) +1 other test skip
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-8/igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset.html
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset.html
* igt@kms_flip@flip-vs-expired-vblank@a-hdmi-a1:
- shard-adlp: [PASS][88] -> [FAIL][89] ([Intel XE#301]) +1 other test fail
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-adlp-1/igt@kms_flip@flip-vs-expired-vblank@a-hdmi-a1.html
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@kms_flip@flip-vs-expired-vblank@a-hdmi-a1.html
* igt@kms_flip@flip-vs-suspend-interruptible:
- shard-adlp: [PASS][90] -> [DMESG-WARN][91] ([Intel XE#2953] / [Intel XE#3086]) +3 other tests dmesg-warn
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-adlp-4/igt@kms_flip@flip-vs-suspend-interruptible.html
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-3/igt@kms_flip@flip-vs-suspend-interruptible.html
* igt@kms_flip@plain-flip-fb-recreate@a-edp1:
- shard-lnl: [PASS][92] -> [FAIL][93] ([Intel XE#886]) +3 other tests fail
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-7/igt@kms_flip@plain-flip-fb-recreate@a-edp1.html
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-2/igt@kms_flip@plain-flip-fb-recreate@a-edp1.html
* igt@kms_flip@plain-flip-fb-recreate@c-edp1:
- shard-lnl: [PASS][94] -> [FAIL][95] ([Intel XE#3149] / [Intel XE#886]) +1 other test fail
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-7/igt@kms_flip@plain-flip-fb-recreate@c-edp1.html
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-2/igt@kms_flip@plain-flip-fb-recreate@c-edp1.html
* igt@kms_frontbuffer_tracking@basic:
- shard-dg2-set2: [PASS][96] -> [SKIP][97] ([Intel XE#2351])
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_frontbuffer_tracking@basic.html
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_frontbuffer_tracking@basic.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-shrfb-pgflip-blt:
- shard-bmg: NOTRUN -> [SKIP][98] ([Intel XE#2311]) +4 other tests skip
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-shrfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-draw-mmap-wc:
- shard-adlp: NOTRUN -> [SKIP][99] ([Intel XE#656]) +3 other tests skip
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-blt:
- shard-bmg: NOTRUN -> [FAIL][100] ([Intel XE#2333]) +1 other test fail
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-shrfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-blt:
- shard-dg2-set2: [PASS][101] -> [SKIP][102] ([Intel XE#2136]) +30 other tests skip
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-blt.html
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-blt:
- shard-dg2-set2: [PASS][103] -> [SKIP][104] ([Intel XE#2136] / [Intel XE#2351]) +14 other tests skip
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-433/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-blt.html
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-move:
- shard-adlp: NOTRUN -> [SKIP][105] ([Intel XE#651])
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-move.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-indfb-draw-render:
- shard-bmg: NOTRUN -> [SKIP][106] ([Intel XE#2313]) +1 other test skip
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-blt:
- shard-dg2-set2: NOTRUN -> [SKIP][107] ([Intel XE#2136] / [Intel XE#2351]) +8 other tests skip
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@psr-1p-offscren-pri-shrfb-draw-render:
- shard-adlp: NOTRUN -> [SKIP][108] ([Intel XE#653]) +1 other test skip
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@kms_frontbuffer_tracking@psr-1p-offscren-pri-shrfb-draw-render.html
* igt@kms_hdr@invalid-hdr:
- shard-dg2-set2: [PASS][109] -> [SKIP][110] ([Intel XE#455])
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_hdr@invalid-hdr.html
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-436/igt@kms_hdr@invalid-hdr.html
* igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12:
- shard-bmg: [PASS][111] -> [DMESG-WARN][112] ([Intel XE#2705])
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12.html
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-nv12.html
* igt@kms_plane@pixel-format-source-clamping:
- shard-lnl: [PASS][113] -> [DMESG-WARN][114] ([Intel XE#2566] / [Intel XE#3466])
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-8/igt@kms_plane@pixel-format-source-clamping.html
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-7/igt@kms_plane@pixel-format-source-clamping.html
* igt@kms_plane@pixel-format-source-clamping@pipe-b-plane-0:
- shard-lnl: [PASS][115] -> [DMESG-WARN][116] ([Intel XE#3466]) +1 other test dmesg-warn
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-8/igt@kms_plane@pixel-format-source-clamping@pipe-b-plane-0.html
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-7/igt@kms_plane@pixel-format-source-clamping@pipe-b-plane-0.html
* igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-b:
- shard-dg2-set2: NOTRUN -> [SKIP][117] ([Intel XE#2763]) +2 other tests skip
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-b.html
* igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-d:
- shard-dg2-set2: NOTRUN -> [SKIP][118] ([Intel XE#2763] / [Intel XE#455])
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-d.html
* igt@kms_plane_scaling@planes-upscale-factor-0-25:
- shard-dg2-set2: NOTRUN -> [SKIP][119] ([Intel XE#2423] / [i915#2575]) +18 other tests skip
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_plane_scaling@planes-upscale-factor-0-25.html
- shard-lnl: [PASS][120] -> [DMESG-WARN][121] ([Intel XE#2566]) +1 other test dmesg-warn
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-8/igt@kms_plane_scaling@planes-upscale-factor-0-25.html
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-7/igt@kms_plane_scaling@planes-upscale-factor-0-25.html
* igt@kms_pm_rpm@cursor:
- shard-dg2-set2: [PASS][122] -> [SKIP][123] ([Intel XE#2446]) +2 other tests skip
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_pm_rpm@cursor.html
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_pm_rpm@cursor.html
* igt@kms_pm_rpm@universal-planes:
- shard-dg2-set2: NOTRUN -> [SKIP][124] ([Intel XE#2446]) +1 other test skip
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_pm_rpm@universal-planes.html
- shard-lnl: [PASS][125] -> [DMESG-WARN][126] ([Intel XE#2042])
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-8/igt@kms_pm_rpm@universal-planes.html
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-7/igt@kms_pm_rpm@universal-planes.html
* igt@kms_pm_rpm@universal-planes@plane-59:
- shard-lnl: [PASS][127] -> [DMESG-WARN][128] ([Intel XE#3514]) +5 other tests dmesg-warn
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-8/igt@kms_pm_rpm@universal-planes@plane-59.html
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-7/igt@kms_pm_rpm@universal-planes@plane-59.html
* igt@kms_psr2_sf@fbc-psr2-overlay-plane-move-continuous-sf:
- shard-bmg: NOTRUN -> [SKIP][129] ([Intel XE#1489])
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_psr2_sf@fbc-psr2-overlay-plane-move-continuous-sf.html
* igt@kms_psr@fbc-psr2-cursor-blt:
- shard-bmg: NOTRUN -> [SKIP][130] ([Intel XE#2234] / [Intel XE#2850])
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_psr@fbc-psr2-cursor-blt.html
* igt@kms_psr@psr2-sprite-blt:
- shard-adlp: NOTRUN -> [SKIP][131] ([Intel XE#2850] / [Intel XE#929]) +2 other tests skip
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@kms_psr@psr2-sprite-blt.html
* igt@kms_tiled_display@basic-test-pattern:
- shard-bmg: NOTRUN -> [SKIP][132] ([Intel XE#2426])
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@kms_tiled_display@basic-test-pattern.html
* igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1:
- shard-lnl: [PASS][133] -> [FAIL][134] ([Intel XE#899]) +1 other test fail
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-3/igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1.html
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-8/igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1.html
* igt@xe_create@multigpu-create-massive-size:
- shard-bmg: NOTRUN -> [SKIP][135] ([Intel XE#2504])
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@xe_create@multigpu-create-massive-size.html
* igt@xe_eudebug@basic-vms:
- shard-bmg: NOTRUN -> [SKIP][136] ([Intel XE#2905])
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@xe_eudebug@basic-vms.html
* igt@xe_eudebug_online@breakpoint-many-sessions-tiles:
- shard-adlp: NOTRUN -> [SKIP][137] ([Intel XE#2905]) +1 other test skip
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@xe_eudebug_online@breakpoint-many-sessions-tiles.html
* igt@xe_evict@evict-mixed-many-threads-small:
- shard-bmg: [PASS][138] -> [TIMEOUT][139] ([Intel XE#1473] / [Intel XE#2472])
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-2/igt@xe_evict@evict-mixed-many-threads-small.html
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-5/igt@xe_evict@evict-mixed-many-threads-small.html
* igt@xe_exec_basic@multigpu-no-exec-userptr:
- shard-bmg: NOTRUN -> [SKIP][140] ([Intel XE#2322])
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@xe_exec_basic@multigpu-no-exec-userptr.html
* igt@xe_exec_basic@no-exec-basic-defer-bind:
- shard-dg2-set2: [PASS][141] -> [SKIP][142] ([Intel XE#1130]) +190 other tests skip
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-433/igt@xe_exec_basic@no-exec-basic-defer-bind.html
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@xe_exec_basic@no-exec-basic-defer-bind.html
* igt@xe_exec_compute_mode@once-bindexecqueue-rebind:
- shard-bmg: NOTRUN -> [DMESG-WARN][143] ([Intel XE#1727])
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-1/igt@xe_exec_compute_mode@once-bindexecqueue-rebind.html
* igt@xe_exec_fault_mode@many-bindexecqueue-prefetch:
- shard-bmg: [PASS][144] -> [DMESG-WARN][145] ([Intel XE#3468]) +6 other tests dmesg-warn
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-1/igt@xe_exec_fault_mode@many-bindexecqueue-prefetch.html
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-8/igt@xe_exec_fault_mode@many-bindexecqueue-prefetch.html
* igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-invalidate-imm:
- shard-adlp: NOTRUN -> [SKIP][146] ([Intel XE#288]) +1 other test skip
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-invalidate-imm.html
* igt@xe_live_ktest@xe_bo@xe_bo_evict_kunit:
- shard-dg2-set2: [PASS][147] -> [SKIP][148] ([Intel XE#2229]) +1 other test skip
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@xe_live_ktest@xe_bo@xe_bo_evict_kunit.html
[148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_live_ktest@xe_bo@xe_bo_evict_kunit.html
* igt@xe_oa@invalid-create-userspace-config:
- shard-adlp: NOTRUN -> [SKIP][149] ([Intel XE#2541]) +1 other test skip
[149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@xe_oa@invalid-create-userspace-config.html
* igt@xe_oa@stress-open-close@rcs-0:
- shard-lnl: NOTRUN -> [DMESG-WARN][150] ([Intel XE#3466])
[150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-5/igt@xe_oa@stress-open-close@rcs-0.html
* igt@xe_pm@d3cold-basic:
- shard-bmg: NOTRUN -> [SKIP][151] ([Intel XE#2284])
[151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@xe_pm@d3cold-basic.html
* igt@xe_pm@s4-basic-exec:
- shard-adlp: [PASS][152] -> [ABORT][153] ([Intel XE#1358] / [Intel XE#1607] / [Intel XE#1794])
[152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-adlp-6/igt@xe_pm@s4-basic-exec.html
[153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-9/igt@xe_pm@s4-basic-exec.html
* igt@xe_pm@s4-vm-bind-prefetch:
- shard-adlp: [PASS][154] -> [ABORT][155] ([Intel XE#1607] / [Intel XE#1794])
[154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-adlp-8/igt@xe_pm@s4-vm-bind-prefetch.html
[155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-9/igt@xe_pm@s4-vm-bind-prefetch.html
* igt@xe_pm@s4-vm-bind-userptr:
- shard-lnl: [PASS][156] -> [ABORT][157] ([Intel XE#1794])
[156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-1/igt@xe_pm@s4-vm-bind-userptr.html
[157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-2/igt@xe_pm@s4-vm-bind-userptr.html
* igt@xe_sriov_flr@flr-vf1-clear:
- shard-adlp: [PASS][158] -> [FAIL][159] ([Intel XE#3507])
[158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-adlp-6/igt@xe_sriov_flr@flr-vf1-clear.html
[159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-8/igt@xe_sriov_flr@flr-vf1-clear.html
* igt@xe_vm@munmap-style-unbind-userptr-inval-many-either-side-partial:
- shard-dg2-set2: NOTRUN -> [SKIP][160] ([Intel XE#1130]) +30 other tests skip
[160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_vm@munmap-style-unbind-userptr-inval-many-either-side-partial.html
#### Possible fixes ####
* igt@core_hotunplug@hotrebind:
- shard-dg2-set2: [SKIP][161] ([Intel XE#1885]) -> [PASS][162]
[161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@core_hotunplug@hotrebind.html
[162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@core_hotunplug@hotrebind.html
* igt@core_setmaster@master-drop-set-shared-fd:
- shard-dg2-set2: [SKIP][163] ([Intel XE#3453]) -> [PASS][164]
[163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@core_setmaster@master-drop-set-shared-fd.html
[164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-436/igt@core_setmaster@master-drop-set-shared-fd.html
* igt@kms_async_flips@async-flip-suspend-resume@pipe-d-dp-4:
- shard-dg2-set2: [FAIL][165] ([Intel XE#3105]) -> [PASS][166]
[165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-435/igt@kms_async_flips@async-flip-suspend-resume@pipe-d-dp-4.html
[166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-464/igt@kms_async_flips@async-flip-suspend-resume@pipe-d-dp-4.html
* igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels:
- shard-lnl: [FAIL][167] ([Intel XE#1426]) -> [PASS][168] +1 other test pass
[167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-7/igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels.html
[168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-2/igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels.html
* igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ad-dp2-hdmi-a3:
- shard-bmg: [FAIL][169] ([Intel XE#3321] / [Intel XE#3486]) -> [PASS][170]
[169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-8/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ad-dp2-hdmi-a3.html
[170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-4/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ad-dp2-hdmi-a3.html
* igt@kms_flip@2x-plain-flip-fb-recreate:
- shard-bmg: [FAIL][171] ([Intel XE#2882]) -> [PASS][172] +1 other test pass
[171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@kms_flip@2x-plain-flip-fb-recreate.html
[172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-8/igt@kms_flip@2x-plain-flip-fb-recreate.html
* igt@kms_flip@flip-vs-expired-vblank@b-edp1:
- shard-lnl: [FAIL][173] ([Intel XE#886]) -> [PASS][174] +1 other test pass
[173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-2/igt@kms_flip@flip-vs-expired-vblank@b-edp1.html
[174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-8/igt@kms_flip@flip-vs-expired-vblank@b-edp1.html
* igt@kms_flip@flip-vs-suspend@a-hdmi-a3:
- shard-bmg: [DMESG-WARN][175] ([Intel XE#1727]) -> [PASS][176]
[175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@kms_flip@flip-vs-suspend@a-hdmi-a3.html
[176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-8/igt@kms_flip@flip-vs-suspend@a-hdmi-a3.html
* igt@kms_flip@flip-vs-suspend@b-hdmi-a1:
- shard-adlp: [DMESG-WARN][177] ([Intel XE#2953] / [Intel XE#3086]) -> [PASS][178] +1 other test pass
[177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-adlp-6/igt@kms_flip@flip-vs-suspend@b-hdmi-a1.html
[178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-6/igt@kms_flip@flip-vs-suspend@b-hdmi-a1.html
* igt@kms_flip@flip-vs-suspend@b-hdmi-a3:
- shard-bmg: [DMESG-FAIL][179] ([Intel XE#3468]) -> [PASS][180] +2 other tests pass
[179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@kms_flip@flip-vs-suspend@b-hdmi-a3.html
[180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-8/igt@kms_flip@flip-vs-suspend@b-hdmi-a3.html
* igt@kms_flip@flip-vs-suspend@c-dp2:
- shard-bmg: [DMESG-FAIL][181] ([Intel XE#1727]) -> [PASS][182] +3 other tests pass
[181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@kms_flip@flip-vs-suspend@c-dp2.html
[182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-8/igt@kms_flip@flip-vs-suspend@c-dp2.html
* igt@kms_flip@flip-vs-suspend@c-edp1:
- shard-lnl: [DMESG-WARN][183] ([Intel XE#2932]) -> [PASS][184] +2 other tests pass
[183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-4/igt@kms_flip@flip-vs-suspend@c-edp1.html
[184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-4/igt@kms_flip@flip-vs-suspend@c-edp1.html
* igt@kms_flip@plain-flip-fb-recreate-interruptible@b-dp2:
- shard-bmg: [INCOMPLETE][185] ([Intel XE#2635]) -> [PASS][186]
[185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-8/igt@kms_flip@plain-flip-fb-recreate-interruptible@b-dp2.html
[186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-4/igt@kms_flip@plain-flip-fb-recreate-interruptible@b-dp2.html
* igt@kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling@pipe-a-valid-mode:
- shard-bmg: [INCOMPLETE][187] ([Intel XE#3468]) -> [PASS][188] +1 other test pass
[187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-7/igt@kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling@pipe-a-valid-mode.html
[188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-1/igt@kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling@pipe-a-valid-mode.html
* igt@kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling:
- shard-dg2-set2: [SKIP][189] ([Intel XE#2136] / [Intel XE#2351]) -> [PASS][190] +2 other tests pass
[189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling.html
[190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-render:
- shard-dg2-set2: [SKIP][191] ([Intel XE#2136]) -> [PASS][192] +3 other tests pass
[191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-render.html
[192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-render.html
* igt@kms_hdr@invalid-hdr:
- shard-bmg: [SKIP][193] ([Intel XE#1503]) -> [PASS][194]
[193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-1/igt@kms_hdr@invalid-hdr.html
[194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-5/igt@kms_hdr@invalid-hdr.html
* igt@kms_plane_scaling@plane-upscale-factor-0-25-with-pixel-format:
- shard-adlp: [DMESG-WARN][195] ([Intel XE#3086]) -> [PASS][196] +3 other tests pass
[195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-adlp-4/igt@kms_plane_scaling@plane-upscale-factor-0-25-with-pixel-format.html
[196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-8/igt@kms_plane_scaling@plane-upscale-factor-0-25-with-pixel-format.html
* igt@kms_plane_scaling@planes-downscale-factor-0-75-upscale-20x20:
- shard-bmg: [DMESG-WARN][197] ([Intel XE#2566]) -> [PASS][198]
[197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-2/igt@kms_plane_scaling@planes-downscale-factor-0-75-upscale-20x20.html
[198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-4/igt@kms_plane_scaling@planes-downscale-factor-0-75-upscale-20x20.html
* igt@kms_prop_blob@blob-prop-lifetime:
- shard-bmg: [DMESG-WARN][199] ([Intel XE#2705]) -> [PASS][200]
[199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-7/igt@kms_prop_blob@blob-prop-lifetime.html
[200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_prop_blob@blob-prop-lifetime.html
* igt@kms_rotation_crc@sprite-rotation-180:
- shard-dg2-set2: [SKIP][201] ([Intel XE#2423] / [i915#2575]) -> [PASS][202] +26 other tests pass
[201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_rotation_crc@sprite-rotation-180.html
[202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_rotation_crc@sprite-rotation-180.html
* igt@kms_universal_plane@cursor-fb-leak@pipe-b-edp-1:
- shard-lnl: [FAIL][203] ([Intel XE#899]) -> [PASS][204]
[203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-3/igt@kms_universal_plane@cursor-fb-leak@pipe-b-edp-1.html
[204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-8/igt@kms_universal_plane@cursor-fb-leak@pipe-b-edp-1.html
* igt@xe_ccs@block-copy-compressed@xmajor-compressed-compfmt0-vram01-vram01:
- shard-bmg: [DMESG-WARN][205] ([Intel XE#3468]) -> [PASS][206] +21 other tests pass
[205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-7/igt@xe_ccs@block-copy-compressed@xmajor-compressed-compfmt0-vram01-vram01.html
[206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@xe_ccs@block-copy-compressed@xmajor-compressed-compfmt0-vram01-vram01.html
* igt@xe_ccs@ctrl-surf-copy-new-ctx:
- shard-dg2-set2: [SKIP][207] ([Intel XE#1130]) -> [PASS][208] +39 other tests pass
[207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@xe_ccs@ctrl-surf-copy-new-ctx.html
[208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@xe_ccs@ctrl-surf-copy-new-ctx.html
* igt@xe_drm_fdinfo@utilization-single-full-load-destroy-queue:
- shard-lnl: [FAIL][209] ([Intel XE#2667]) -> [PASS][210]
[209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-8/igt@xe_drm_fdinfo@utilization-single-full-load-destroy-queue.html
[210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-5/igt@xe_drm_fdinfo@utilization-single-full-load-destroy-queue.html
* igt@xe_evict@evict-mixed-many-threads-small:
- shard-dg2-set2: [TIMEOUT][211] ([Intel XE#1473]) -> [PASS][212]
[211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@xe_evict@evict-mixed-many-threads-small.html
[212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-435/igt@xe_evict@evict-mixed-many-threads-small.html
* igt@xe_live_ktest@xe_bo:
- shard-dg2-set2: [TIMEOUT][213] ([Intel XE#2961] / [Intel XE#3191]) -> [PASS][214] +1 other test pass
[213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@xe_live_ktest@xe_bo.html
[214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_live_ktest@xe_bo.html
* igt@xe_live_ktest@xe_bo@xe_bo_shrink_kunit:
- shard-bmg: [INCOMPLETE][215] ([Intel XE#2998]) -> [PASS][216] +1 other test pass
[215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-2/igt@xe_live_ktest@xe_bo@xe_bo_shrink_kunit.html
[216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-3/igt@xe_live_ktest@xe_bo@xe_bo_shrink_kunit.html
* igt@xe_module_load@unload:
- shard-dg2-set2: [DMESG-WARN][217] ([Intel XE#3467]) -> [PASS][218]
[217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@xe_module_load@unload.html
[218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_module_load@unload.html
* igt@xe_pm@s4-basic:
- shard-adlp: [ABORT][219] ([Intel XE#1358] / [Intel XE#1607]) -> [PASS][220]
[219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-adlp-9/igt@xe_pm@s4-basic.html
[220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-adlp-2/igt@xe_pm@s4-basic.html
* igt@xe_pm_residency@toggle-gt-c6:
- shard-lnl: [FAIL][221] ([Intel XE#958]) -> [PASS][222]
[221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-2/igt@xe_pm_residency@toggle-gt-c6.html
[222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-6/igt@xe_pm_residency@toggle-gt-c6.html
#### Warnings ####
* igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
- shard-dg2-set2: [SKIP][223] ([Intel XE#2423] / [i915#2575]) -> [SKIP][224] ([Intel XE#623])
[223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
[224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
* igt@kms_async_flips@async-flip-suspend-resume:
- shard-dg2-set2: [DMESG-FAIL][225] ([Intel XE#3468]) -> [DMESG-WARN][226] ([Intel XE#3468])
[225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-435/igt@kms_async_flips@async-flip-suspend-resume.html
[226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-464/igt@kms_async_flips@async-flip-suspend-resume.html
* igt@kms_atomic@test-only:
- shard-dg2-set2: [DMESG-WARN][227] ([Intel XE#1727]) -> [SKIP][228] ([Intel XE#2423] / [i915#2575]) +1 other test skip
[227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_atomic@test-only.html
[228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_atomic@test-only.html
* igt@kms_big_fb@4-tiled-64bpp-rotate-180:
- shard-lnl: [INCOMPLETE][229] ([Intel XE#3225]) -> [INCOMPLETE][230] ([Intel XE#3225] / [Intel XE#3466]) +3 other tests incomplete
[229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-lnl-1/igt@kms_big_fb@4-tiled-64bpp-rotate-180.html
[230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-lnl-5/igt@kms_big_fb@4-tiled-64bpp-rotate-180.html
* igt@kms_big_fb@4-tiled-64bpp-rotate-270:
- shard-dg2-set2: [SKIP][231] ([Intel XE#316]) -> [SKIP][232] ([Intel XE#2136] / [Intel XE#2351]) +2 other tests skip
[231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_big_fb@4-tiled-64bpp-rotate-270.html
[232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_big_fb@4-tiled-64bpp-rotate-270.html
* igt@kms_big_fb@4-tiled-8bpp-rotate-180:
- shard-dg2-set2: [DMESG-WARN][233] ([Intel XE#1727]) -> [SKIP][234] ([Intel XE#2136])
[233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@kms_big_fb@4-tiled-8bpp-rotate-180.html
[234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_big_fb@4-tiled-8bpp-rotate-180.html
* igt@kms_big_fb@linear-16bpp-rotate-180:
- shard-bmg: [DMESG-WARN][235] ([Intel XE#3468]) -> [DMESG-FAIL][236] ([Intel XE#3468]) +4 other tests dmesg-fail
[235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-7/igt@kms_big_fb@linear-16bpp-rotate-180.html
[236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-1/igt@kms_big_fb@linear-16bpp-rotate-180.html
* igt@kms_big_fb@x-tiled-64bpp-rotate-0:
- shard-bmg: [DMESG-FAIL][237] ([Intel XE#3468]) -> [INCOMPLETE][238] ([Intel XE#3225] / [Intel XE#3468])
[237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-8/igt@kms_big_fb@x-tiled-64bpp-rotate-0.html
[238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-2/igt@kms_big_fb@x-tiled-64bpp-rotate-0.html
* igt@kms_big_fb@x-tiled-8bpp-rotate-270:
- shard-dg2-set2: [SKIP][239] ([Intel XE#316]) -> [SKIP][240] ([Intel XE#2136]) +1 other test skip
[239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_big_fb@x-tiled-8bpp-rotate-270.html
[240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_big_fb@x-tiled-8bpp-rotate-270.html
* igt@kms_big_fb@y-tiled-64bpp-rotate-90:
- shard-dg2-set2: [SKIP][241] ([Intel XE#1124]) -> [SKIP][242] ([Intel XE#2136] / [Intel XE#2351]) +3 other tests skip
[241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html
[242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html
* igt@kms_big_fb@y-tiled-addfb-size-offset-overflow:
- shard-dg2-set2: [SKIP][243] ([Intel XE#607]) -> [SKIP][244] ([Intel XE#2136] / [Intel XE#2351])
[243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_big_fb@y-tiled-addfb-size-offset-overflow.html
[244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_big_fb@y-tiled-addfb-size-offset-overflow.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip:
- shard-dg2-set2: [SKIP][245] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][246] ([Intel XE#1124])
[245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip.html
[246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip.html
* igt@kms_big_fb@yf-tiled-8bpp-rotate-270:
- shard-dg2-set2: [SKIP][247] ([Intel XE#1124]) -> [SKIP][248] ([Intel XE#2136]) +6 other tests skip
[247]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_big_fb@yf-tiled-8bpp-rotate-270.html
[248]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_big_fb@yf-tiled-8bpp-rotate-270.html
* igt@kms_big_fb@yf-tiled-addfb:
- shard-dg2-set2: [SKIP][249] ([Intel XE#619]) -> [SKIP][250] ([Intel XE#2136] / [Intel XE#2351])
[249]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@kms_big_fb@yf-tiled-addfb.html
[250]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_big_fb@yf-tiled-addfb.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-async-flip:
- shard-dg2-set2: [SKIP][251] ([Intel XE#2136]) -> [SKIP][252] ([Intel XE#1124]) +1 other test skip
[251]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-async-flip.html
[252]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-async-flip.html
* igt@kms_bw@connected-linear-tiling-4-displays-1920x1080p:
- shard-dg2-set2: [SKIP][253] ([Intel XE#2191]) -> [SKIP][254] ([Intel XE#2423] / [i915#2575]) +1 other test skip
[253]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_bw@connected-linear-tiling-4-displays-1920x1080p.html
[254]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_bw@connected-linear-tiling-4-displays-1920x1080p.html
* igt@kms_bw@linear-tiling-1-displays-1920x1080p:
- shard-dg2-set2: [SKIP][255] ([Intel XE#2423] / [i915#2575]) -> [SKIP][256] ([Intel XE#367]) +2 other tests skip
[255]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_bw@linear-tiling-1-displays-1920x1080p.html
[256]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_bw@linear-tiling-1-displays-1920x1080p.html
* igt@kms_bw@linear-tiling-2-displays-3840x2160p:
- shard-dg2-set2: [SKIP][257] ([Intel XE#367]) -> [SKIP][258] ([Intel XE#2423] / [i915#2575]) +3 other tests skip
[257]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_bw@linear-tiling-2-displays-3840x2160p.html
[258]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_bw@linear-tiling-2-displays-3840x2160p.html
* igt@kms_ccs@bad-rotation-90-4-tiled-bmg-ccs:
- shard-dg2-set2: [SKIP][259] ([Intel XE#2907]) -> [SKIP][260] ([Intel XE#2136]) +3 other tests skip
[259]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@kms_ccs@bad-rotation-90-4-tiled-bmg-ccs.html
[260]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_ccs@bad-rotation-90-4-tiled-bmg-ccs.html
* igt@kms_ccs@crc-primary-rotation-180-y-tiled-gen12-rc-ccs:
- shard-dg2-set2: [SKIP][261] ([Intel XE#2136]) -> [SKIP][262] ([Intel XE#455] / [Intel XE#787]) +2 other tests skip
[261]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_ccs@crc-primary-rotation-180-y-tiled-gen12-rc-ccs.html
[262]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_ccs@crc-primary-rotation-180-y-tiled-gen12-rc-ccs.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs:
- shard-dg2-set2: [SKIP][263] -> [SKIP][264] ([Intel XE#3442])
[263]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs.html
[264]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-mc-ccs:
- shard-dg2-set2: [DMESG-WARN][265] -> [SKIP][266] ([Intel XE#2136])
[265]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-mc-ccs.html
[266]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-mc-ccs.html
* igt@kms_ccs@missing-ccs-buffer-yf-tiled-ccs:
- shard-dg2-set2: [SKIP][267] ([Intel XE#455] / [Intel XE#787]) -> [SKIP][268] ([Intel XE#2136] / [Intel XE#2351])
[267]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_ccs@missing-ccs-buffer-yf-tiled-ccs.html
[268]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_ccs@missing-ccs-buffer-yf-tiled-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs:
- shard-dg2-set2: [SKIP][269] ([Intel XE#455] / [Intel XE#787]) -> [SKIP][270] ([Intel XE#2136]) +13 other tests skip
[269]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs.html
[270]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs.html
* igt@kms_chamelium_color@ctm-0-75:
- shard-dg2-set2: [SKIP][271] ([Intel XE#306]) -> [SKIP][272] ([Intel XE#2423] / [i915#2575]) +3 other tests skip
[271]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-433/igt@kms_chamelium_color@ctm-0-75.html
[272]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_chamelium_color@ctm-0-75.html
* igt@kms_chamelium_frames@hdmi-crc-nonplanar-formats:
- shard-dg2-set2: [SKIP][273] ([Intel XE#373]) -> [SKIP][274] ([Intel XE#2423] / [i915#2575]) +11 other tests skip
[273]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_chamelium_frames@hdmi-crc-nonplanar-formats.html
[274]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_chamelium_frames@hdmi-crc-nonplanar-formats.html
* igt@kms_chamelium_hpd@hdmi-hpd:
- shard-dg2-set2: [SKIP][275] ([Intel XE#2423] / [i915#2575]) -> [SKIP][276] ([Intel XE#373]) +3 other tests skip
[275]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_chamelium_hpd@hdmi-hpd.html
[276]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_chamelium_hpd@hdmi-hpd.html
* igt@kms_content_protection@srm:
- shard-dg2-set2: [FAIL][277] ([Intel XE#1178]) -> [SKIP][278] ([Intel XE#2423] / [i915#2575])
[277]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_content_protection@srm.html
[278]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_content_protection@srm.html
* igt@kms_content_protection@uevent:
- shard-dg2-set2: [FAIL][279] ([Intel XE#1188]) -> [SKIP][280] ([Intel XE#2423] / [i915#2575])
[279]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_content_protection@uevent.html
[280]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_content_protection@uevent.html
* igt@kms_cursor_crc@cursor-random-512x170:
- shard-dg2-set2: [SKIP][281] ([Intel XE#308]) -> [SKIP][282] ([Intel XE#2423] / [i915#2575])
[281]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_cursor_crc@cursor-random-512x170.html
[282]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_cursor_crc@cursor-random-512x170.html
* igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size:
- shard-dg2-set2: [SKIP][283] ([Intel XE#323]) -> [SKIP][284] ([Intel XE#2423] / [i915#2575]) +1 other test skip
[283]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-464/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html
[284]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html
* igt@kms_display_modes@mst-extended-mode-negative:
- shard-dg2-set2: [SKIP][285] ([Intel XE#307]) -> [SKIP][286] ([Intel XE#2423] / [i915#2575]) +1 other test skip
[285]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_display_modes@mst-extended-mode-negative.html
[286]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_display_modes@mst-extended-mode-negative.html
* igt@kms_dsc@dsc-with-bpc-formats:
- shard-dg2-set2: [SKIP][287] ([Intel XE#455]) -> [SKIP][288] ([Intel XE#2136] / [Intel XE#2351]) +1 other test skip
[287]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_dsc@dsc-with-bpc-formats.html
[288]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_dsc@dsc-with-bpc-formats.html
* igt@kms_dsc@dsc-with-output-formats:
- shard-dg2-set2: [SKIP][289] ([Intel XE#2136]) -> [SKIP][290] ([Intel XE#455])
[289]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_dsc@dsc-with-output-formats.html
[290]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_dsc@dsc-with-output-formats.html
* igt@kms_feature_discovery@dp-mst:
- shard-dg2-set2: [SKIP][291] ([Intel XE#1137]) -> [SKIP][292] ([Intel XE#2423] / [i915#2575])
[291]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_feature_discovery@dp-mst.html
[292]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_feature_discovery@dp-mst.html
* igt@kms_feature_discovery@psr2:
- shard-dg2-set2: [SKIP][293] ([Intel XE#2423] / [i915#2575]) -> [SKIP][294] ([Intel XE#1135])
[293]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_feature_discovery@psr2.html
[294]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-436/igt@kms_feature_discovery@psr2.html
* igt@kms_flip@flip-vs-expired-vblank-interruptible:
- shard-dg2-set2: [FAIL][295] ([Intel XE#301]) -> [SKIP][296] ([Intel XE#2423] / [i915#2575]) +2 other tests skip
[295]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-464/igt@kms_flip@flip-vs-expired-vblank-interruptible.html
[296]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_flip@flip-vs-expired-vblank-interruptible.html
* igt@kms_flip@flip-vs-panning-interruptible:
- shard-bmg: [DMESG-WARN][297] ([Intel XE#3468]) -> [DMESG-WARN][298] ([Intel XE#2705] / [Intel XE#3468])
[297]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-2/igt@kms_flip@flip-vs-panning-interruptible.html
[298]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-4/igt@kms_flip@flip-vs-panning-interruptible.html
* igt@kms_flip@flip-vs-suspend-interruptible:
- shard-dg2-set2: [DMESG-FAIL][299] ([Intel XE#1727]) -> [SKIP][300] ([Intel XE#2423] / [i915#2575])
[299]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-464/igt@kms_flip@flip-vs-suspend-interruptible.html
[300]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_flip@flip-vs-suspend-interruptible.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling:
- shard-dg2-set2: [SKIP][301] ([Intel XE#455]) -> [SKIP][302] ([Intel XE#2136]) +4 other tests skip
[301]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-433/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html
[302]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html
* igt@kms_flip_tiling@flip-change-tiling@pipe-a-dp-2-x-to-4:
- shard-bmg: [DMESG-FAIL][303] ([Intel XE#3468]) -> [DMESG-FAIL][304] ([Intel XE#2705] / [Intel XE#3468])
[303]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-1/igt@kms_flip_tiling@flip-change-tiling@pipe-a-dp-2-x-to-4.html
[304]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-5/igt@kms_flip_tiling@flip-change-tiling@pipe-a-dp-2-x-to-4.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-cur-indfb-onoff:
- shard-dg2-set2: [SKIP][305] ([Intel XE#651]) -> [SKIP][306] ([Intel XE#2136]) +25 other tests skip
[305]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-464/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-cur-indfb-onoff.html
[306]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-cur-indfb-onoff.html
* igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-move:
- shard-bmg: [SKIP][307] ([Intel XE#2311]) -> [SKIP][308] ([Intel XE#2312]) +1 other test skip
[307]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-move.html
[308]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-move.html
* igt@kms_frontbuffer_tracking@drrs-suspend:
- shard-dg2-set2: [SKIP][309] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][310] ([Intel XE#651]) +3 other tests skip
[309]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_frontbuffer_tracking@drrs-suspend.html
[310]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_frontbuffer_tracking@drrs-suspend.html
* igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-mmap-wc:
- shard-bmg: [INCOMPLETE][311] ([Intel XE#2050]) -> [INCOMPLETE][312] ([Intel XE#2050] / [Intel XE#3468])
[311]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-mmap-wc.html
[312]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-4/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-shrfb-scaledprimary:
- shard-bmg: [DMESG-FAIL][313] ([Intel XE#3468]) -> [FAIL][314] ([Intel XE#2333]) +1 other test fail
[313]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@kms_frontbuffer_tracking@fbc-shrfb-scaledprimary.html
[314]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-8/igt@kms_frontbuffer_tracking@fbc-shrfb-scaledprimary.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-shrfb-pgflip-blt:
- shard-dg2-set2: [SKIP][315] ([Intel XE#2136]) -> [SKIP][316] ([Intel XE#651]) +4 other tests skip
[315]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-shrfb-pgflip-blt.html
[316]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-shrfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-rgb101010-draw-mmap-wc:
- shard-dg2-set2: [SKIP][317] ([Intel XE#651]) -> [SKIP][318] ([Intel XE#2136] / [Intel XE#2351]) +14 other tests skip
[317]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_frontbuffer_tracking@fbcdrrs-rgb101010-draw-mmap-wc.html
[318]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_frontbuffer_tracking@fbcdrrs-rgb101010-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt:
- shard-dg2-set2: [SKIP][319] ([Intel XE#653]) -> [SKIP][320] ([Intel XE#2136]) +32 other tests skip
[319]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt.html
[320]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-move:
- shard-bmg: [SKIP][321] ([Intel XE#2313]) -> [SKIP][322] ([Intel XE#2312]) +2 other tests skip
[321]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-move.html
[322]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-move.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-move:
- shard-dg2-set2: [SKIP][323] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][324] ([Intel XE#653])
[323]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-move.html
[324]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-move.html
* igt@kms_frontbuffer_tracking@fbcpsr-tiling-4:
- shard-dg2-set2: [SKIP][325] ([Intel XE#653]) -> [SKIP][326] ([Intel XE#2136] / [Intel XE#2351]) +8 other tests skip
[325]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-433/igt@kms_frontbuffer_tracking@fbcpsr-tiling-4.html
[326]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_frontbuffer_tracking@fbcpsr-tiling-4.html
* igt@kms_frontbuffer_tracking@fbcpsr-tiling-y:
- shard-dg2-set2: [SKIP][327] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][328] ([Intel XE#658])
[327]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcpsr-tiling-y.html
[328]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_frontbuffer_tracking@fbcpsr-tiling-y.html
* igt@kms_frontbuffer_tracking@psr-slowdraw:
- shard-dg2-set2: [SKIP][329] ([Intel XE#2136]) -> [SKIP][330] ([Intel XE#653]) +7 other tests skip
[329]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_frontbuffer_tracking@psr-slowdraw.html
[330]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_frontbuffer_tracking@psr-slowdraw.html
* igt@kms_getfb@getfb-reject-ccs:
- shard-dg2-set2: [SKIP][331] ([Intel XE#605]) -> [SKIP][332] ([Intel XE#2423] / [i915#2575])
[331]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-433/igt@kms_getfb@getfb-reject-ccs.html
[332]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_getfb@getfb-reject-ccs.html
* igt@kms_joiner@basic-big-joiner:
- shard-dg2-set2: [SKIP][333] ([Intel XE#346]) -> [SKIP][334] ([Intel XE#2136])
[333]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_joiner@basic-big-joiner.html
[334]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_joiner@basic-big-joiner.html
* igt@kms_joiner@basic-force-ultra-joiner:
- shard-dg2-set2: [SKIP][335] ([Intel XE#2925]) -> [SKIP][336] ([Intel XE#2136])
[335]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-464/igt@kms_joiner@basic-force-ultra-joiner.html
[336]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_joiner@basic-force-ultra-joiner.html
* igt@kms_joiner@basic-ultra-joiner:
- shard-dg2-set2: [SKIP][337] ([Intel XE#2927]) -> [SKIP][338] ([Intel XE#2136])
[337]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-433/igt@kms_joiner@basic-ultra-joiner.html
[338]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_joiner@basic-ultra-joiner.html
* igt@kms_lease@lessee-list:
- shard-dg2-set2: [DMESG-WARN][339] -> [SKIP][340] ([Intel XE#2423] / [i915#2575]) +1 other test skip
[339]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_lease@lessee-list.html
[340]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_lease@lessee-list.html
* igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25:
- shard-dg2-set2: [SKIP][341] ([Intel XE#2423] / [i915#2575]) -> [SKIP][342] ([Intel XE#2763] / [Intel XE#455])
[341]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25.html
[342]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25.html
* igt@kms_pm_backlight@bad-brightness:
- shard-dg2-set2: [SKIP][343] ([Intel XE#870]) -> [SKIP][344] ([Intel XE#2136]) +1 other test skip
[343]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_pm_backlight@bad-brightness.html
[344]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_pm_backlight@bad-brightness.html
* igt@kms_pm_dc@dc6-psr:
- shard-dg2-set2: [SKIP][345] ([Intel XE#1129]) -> [SKIP][346] ([Intel XE#2136] / [Intel XE#2351])
[345]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_pm_dc@dc6-psr.html
[346]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_pm_dc@dc6-psr.html
* igt@kms_pm_dc@deep-pkgc:
- shard-dg2-set2: [SKIP][347] ([Intel XE#2136]) -> [SKIP][348] ([Intel XE#908])
[347]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_pm_dc@deep-pkgc.html
[348]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_pm_dc@deep-pkgc.html
* igt@kms_pm_lpsp@kms-lpsp:
- shard-dg2-set2: [FAIL][349] -> [SKIP][350] ([Intel XE#2136])
[349]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@kms_pm_lpsp@kms-lpsp.html
[350]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_pm_lpsp@kms-lpsp.html
* igt@kms_psr2_sf@fbc-psr2-cursor-plane-update-sf:
- shard-dg2-set2: [SKIP][351] ([Intel XE#2136]) -> [SKIP][352] ([Intel XE#1489]) +1 other test skip
[351]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_psr2_sf@fbc-psr2-cursor-plane-update-sf.html
[352]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_psr2_sf@fbc-psr2-cursor-plane-update-sf.html
* igt@kms_psr2_sf@pr-cursor-plane-move-continuous-sf:
- shard-dg2-set2: [SKIP][353] ([Intel XE#1489]) -> [SKIP][354] ([Intel XE#2136]) +9 other tests skip
[353]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-464/igt@kms_psr2_sf@pr-cursor-plane-move-continuous-sf.html
[354]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_psr2_sf@pr-cursor-plane-move-continuous-sf.html
* igt@kms_psr2_su@page_flip-nv12:
- shard-dg2-set2: [SKIP][355] ([Intel XE#1122]) -> [SKIP][356] ([Intel XE#2136])
[355]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_psr2_su@page_flip-nv12.html
[356]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_psr2_su@page_flip-nv12.html
* igt@kms_psr@fbc-pr-sprite-plane-onoff:
- shard-dg2-set2: [SKIP][357] ([Intel XE#2136] / [Intel XE#2351]) -> [SKIP][358] ([Intel XE#2850] / [Intel XE#929]) +2 other tests skip
[357]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_psr@fbc-pr-sprite-plane-onoff.html
[358]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-436/igt@kms_psr@fbc-pr-sprite-plane-onoff.html
* igt@kms_psr@fbc-psr-no-drrs:
- shard-dg2-set2: [SKIP][359] ([Intel XE#2850] / [Intel XE#929]) -> [SKIP][360] ([Intel XE#2136] / [Intel XE#2351]) +5 other tests skip
[359]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@kms_psr@fbc-psr-no-drrs.html
[360]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_psr@fbc-psr-no-drrs.html
* igt@kms_psr@fbc-psr2-dpms:
- shard-dg2-set2: [SKIP][361] ([Intel XE#2850] / [Intel XE#929]) -> [SKIP][362] ([Intel XE#2136]) +12 other tests skip
[361]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-433/igt@kms_psr@fbc-psr2-dpms.html
[362]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_psr@fbc-psr2-dpms.html
* igt@kms_psr@psr2-primary-render:
- shard-dg2-set2: [SKIP][363] ([Intel XE#2136]) -> [SKIP][364] ([Intel XE#2850] / [Intel XE#929]) +3 other tests skip
[363]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_psr@psr2-primary-render.html
[364]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_psr@psr2-primary-render.html
* igt@kms_rotation_crc@primary-y-tiled-reflect-x-0:
- shard-dg2-set2: [SKIP][365] ([Intel XE#1127]) -> [SKIP][366] ([Intel XE#2423] / [i915#2575])
[365]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-464/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html
[366]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html
* igt@kms_rotation_crc@primary-y-tiled-reflect-x-90:
- shard-dg2-set2: [SKIP][367] ([Intel XE#3414]) -> [SKIP][368] ([Intel XE#2423] / [i915#2575]) +1 other test skip
[367]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html
[368]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html
* igt@kms_rotation_crc@sprite-rotation-270:
- shard-dg2-set2: [SKIP][369] ([Intel XE#2423] / [i915#2575]) -> [SKIP][370] ([Intel XE#3414])
[369]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_rotation_crc@sprite-rotation-270.html
[370]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_rotation_crc@sprite-rotation-270.html
* igt@kms_scaling_modes@scaling-mode-full-aspect:
- shard-dg2-set2: [SKIP][371] ([Intel XE#2423] / [i915#2575]) -> [SKIP][372] ([Intel XE#455]) +1 other test skip
[371]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@kms_scaling_modes@scaling-mode-full-aspect.html
[372]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@kms_scaling_modes@scaling-mode-full-aspect.html
* igt@kms_tiled_display@basic-test-pattern:
- shard-dg2-set2: [FAIL][373] ([Intel XE#1729]) -> [SKIP][374] ([Intel XE#2423] / [i915#2575])
[373]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_tiled_display@basic-test-pattern.html
[374]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_tiled_display@basic-test-pattern.html
* igt@kms_tiled_display@basic-test-pattern-with-chamelium:
- shard-dg2-set2: [SKIP][375] ([Intel XE#1500]) -> [SKIP][376] ([Intel XE#2423] / [i915#2575])
[375]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-464/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
[376]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
* igt@kms_tv_load_detect@load-detect:
- shard-dg2-set2: [SKIP][377] ([Intel XE#330]) -> [SKIP][378] ([Intel XE#2423] / [i915#2575])
[377]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-433/igt@kms_tv_load_detect@load-detect.html
[378]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_tv_load_detect@load-detect.html
* igt@kms_vblank@accuracy-idle@pipe-a-dp-2:
- shard-bmg: [DMESG-FAIL][379] ([Intel XE#3468]) -> [DMESG-WARN][380] ([Intel XE#3468]) +2 other tests dmesg-warn
[379]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-7/igt@kms_vblank@accuracy-idle@pipe-a-dp-2.html
[380]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@kms_vblank@accuracy-idle@pipe-a-dp-2.html
* igt@kms_vrr@cmrr:
- shard-dg2-set2: [SKIP][381] ([Intel XE#2168]) -> [SKIP][382] ([Intel XE#2423] / [i915#2575])
[381]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@kms_vrr@cmrr.html
[382]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_vrr@cmrr.html
* igt@kms_vrr@flip-suspend:
- shard-dg2-set2: [SKIP][383] ([Intel XE#455]) -> [SKIP][384] ([Intel XE#2423] / [i915#2575]) +3 other tests skip
[383]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@kms_vrr@flip-suspend.html
[384]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@kms_vrr@flip-suspend.html
* igt@kms_writeback@writeback-pixel-formats:
- shard-dg2-set2: [SKIP][385] ([Intel XE#756]) -> [SKIP][386] ([Intel XE#2423] / [i915#2575])
[385]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-433/igt@kms_writeback@writeback-pixel-formats.html
[386]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@kms_writeback@writeback-pixel-formats.html
* igt@sriov_basic@enable-vfs-bind-unbind-each-numvfs-all:
- shard-dg2-set2: [SKIP][387] ([Intel XE#1091] / [Intel XE#2849]) -> [SKIP][388] ([Intel XE#2423] / [i915#2575])
[387]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@sriov_basic@enable-vfs-bind-unbind-each-numvfs-all.html
[388]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@sriov_basic@enable-vfs-bind-unbind-each-numvfs-all.html
* igt@xe_ccs@suspend-resume:
- shard-dg2-set2: [DMESG-FAIL][389] ([Intel XE#3468]) -> [SKIP][390] ([Intel XE#1130])
[389]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@xe_ccs@suspend-resume.html
[390]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_ccs@suspend-resume.html
* igt@xe_compute_preempt@compute-preempt-many:
- shard-dg2-set2: [SKIP][391] ([Intel XE#1280] / [Intel XE#455]) -> [SKIP][392] ([Intel XE#1130])
[391]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-433/igt@xe_compute_preempt@compute-preempt-many.html
[392]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@xe_compute_preempt@compute-preempt-many.html
* igt@xe_copy_basic@mem-copy-linear-0x3fff:
- shard-dg2-set2: [SKIP][393] ([Intel XE#1123]) -> [SKIP][394] ([Intel XE#1130])
[393]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@xe_copy_basic@mem-copy-linear-0x3fff.html
[394]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_copy_basic@mem-copy-linear-0x3fff.html
* igt@xe_copy_basic@mem-set-linear-0x3fff:
- shard-dg2-set2: [SKIP][395] ([Intel XE#1126]) -> [SKIP][396] ([Intel XE#1130])
[395]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@xe_copy_basic@mem-set-linear-0x3fff.html
[396]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_copy_basic@mem-set-linear-0x3fff.html
* igt@xe_eudebug_online@basic-breakpoint:
- shard-dg2-set2: [SKIP][397] ([Intel XE#1130]) -> [SKIP][398] ([Intel XE#2905]) +1 other test skip
[397]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@xe_eudebug_online@basic-breakpoint.html
[398]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@xe_eudebug_online@basic-breakpoint.html
* igt@xe_eudebug_online@writes-caching-sram-bb-vram-target-sram:
- shard-dg2-set2: [SKIP][399] ([Intel XE#2905]) -> [SKIP][400] ([Intel XE#1130]) +13 other tests skip
[399]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@xe_eudebug_online@writes-caching-sram-bb-vram-target-sram.html
[400]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_eudebug_online@writes-caching-sram-bb-vram-target-sram.html
* igt@xe_evict@evict-mixed-many-threads-large:
- shard-dg2-set2: [TIMEOUT][401] ([Intel XE#1473]) -> [FAIL][402] ([Intel XE#1000])
[401]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-433/igt@xe_evict@evict-mixed-many-threads-large.html
[402]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-464/igt@xe_evict@evict-mixed-many-threads-large.html
* igt@xe_exec_fault_mode@twice-userptr-prefetch:
- shard-dg2-set2: [SKIP][403] ([Intel XE#1130]) -> [SKIP][404] ([Intel XE#288]) +7 other tests skip
[403]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@xe_exec_fault_mode@twice-userptr-prefetch.html
[404]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@xe_exec_fault_mode@twice-userptr-prefetch.html
* igt@xe_exec_fault_mode@twice-userptr-rebind-imm:
- shard-dg2-set2: [SKIP][405] ([Intel XE#288]) -> [SKIP][406] ([Intel XE#1130]) +32 other tests skip
[405]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@xe_exec_fault_mode@twice-userptr-rebind-imm.html
[406]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_exec_fault_mode@twice-userptr-rebind-imm.html
* igt@xe_exercise_blt@fast-copy:
- shard-dg2-set2: [INCOMPLETE][407] ([Intel XE#1195]) -> [SKIP][408] ([Intel XE#1130])
[407]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@xe_exercise_blt@fast-copy.html
[408]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_exercise_blt@fast-copy.html
* igt@xe_fault_injection@inject-fault-probe-function-xe_guc_ads_init:
- shard-dg2-set2: [DMESG-WARN][409] ([Intel XE#3343]) -> [SKIP][410] ([Intel XE#1130])
[409]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@xe_fault_injection@inject-fault-probe-function-xe_guc_ads_init.html
[410]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_fault_injection@inject-fault-probe-function-xe_guc_ads_init.html
* igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_create:
- shard-dg2-set2: [SKIP][411] ([Intel XE#1130]) -> [DMESG-WARN][412] ([Intel XE#3467])
[411]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_create.html
[412]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_create.html
- shard-bmg: [FAIL][413] ([Intel XE#3499]) -> [DMESG-FAIL][414] ([Intel XE#3467])
[413]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-bmg-4/igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_create.html
[414]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-bmg-6/igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_create.html
* igt@xe_fault_injection@vm-create-fail-xe_vm_create_scratch:
- shard-dg2-set2: [DMESG-WARN][415] ([Intel XE#3467]) -> [SKIP][416] ([Intel XE#1130]) +1 other test skip
[415]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@xe_fault_injection@vm-create-fail-xe_vm_create_scratch.html
[416]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@xe_fault_injection@vm-create-fail-xe_vm_create_scratch.html
* igt@xe_media_fill@media-fill:
- shard-dg2-set2: [SKIP][417] ([Intel XE#560]) -> [SKIP][418] ([Intel XE#1130])
[417]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@xe_media_fill@media-fill.html
[418]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@xe_media_fill@media-fill.html
* igt@xe_oa@closed-fd-and-unmapped-access:
- shard-dg2-set2: [SKIP][419] ([Intel XE#1130]) -> [SKIP][420] ([Intel XE#2541]) +1 other test skip
[419]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@xe_oa@closed-fd-and-unmapped-access.html
[420]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-436/igt@xe_oa@closed-fd-and-unmapped-access.html
* igt@xe_oa@mi-rpc:
- shard-dg2-set2: [SKIP][421] ([Intel XE#2541]) -> [SKIP][422] ([Intel XE#1130]) +9 other tests skip
[421]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@xe_oa@mi-rpc.html
[422]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_oa@mi-rpc.html
* igt@xe_peer2peer@read:
- shard-dg2-set2: [FAIL][423] ([Intel XE#1173]) -> [SKIP][424] ([Intel XE#1061])
[423]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-433/igt@xe_peer2peer@read.html
[424]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@xe_peer2peer@read.html
* igt@xe_pm@d3cold-mmap-vram:
- shard-dg2-set2: [SKIP][425] ([Intel XE#1130]) -> [SKIP][426] ([Intel XE#2284] / [Intel XE#366])
[425]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@xe_pm@d3cold-mmap-vram.html
[426]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@xe_pm@d3cold-mmap-vram.html
* igt@xe_pm@d3cold-multiple-execs:
- shard-dg2-set2: [SKIP][427] ([Intel XE#2284] / [Intel XE#366]) -> [SKIP][428] ([Intel XE#1130]) +1 other test skip
[427]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-464/igt@xe_pm@d3cold-multiple-execs.html
[428]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_pm@d3cold-multiple-execs.html
* igt@xe_pm@s2idle-basic:
- shard-dg2-set2: [DMESG-WARN][429] ([Intel XE#3468]) -> [SKIP][430] ([Intel XE#1130])
[429]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@xe_pm@s2idle-basic.html
[430]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@xe_pm@s2idle-basic.html
* igt@xe_pm@s3-vm-bind-userptr:
- shard-dg2-set2: [DMESG-WARN][431] ([Intel XE#569]) -> [SKIP][432] ([Intel XE#1130])
[431]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@xe_pm@s3-vm-bind-userptr.html
[432]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_pm@s3-vm-bind-userptr.html
* igt@xe_pm@s4-vm-bind-unbind-all:
- shard-dg2-set2: [DMESG-WARN][433] ([Intel XE#2280]) -> [SKIP][434] ([Intel XE#1130])
[433]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@xe_pm@s4-vm-bind-unbind-all.html
[434]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_pm@s4-vm-bind-unbind-all.html
* igt@xe_query@multigpu-query-mem-usage:
- shard-dg2-set2: [SKIP][435] ([Intel XE#944]) -> [SKIP][436] ([Intel XE#1130]) +2 other tests skip
[435]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-434/igt@xe_query@multigpu-query-mem-usage.html
[436]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_query@multigpu-query-mem-usage.html
* igt@xe_query@multigpu-query-uc-fw-version-guc:
- shard-dg2-set2: [SKIP][437] ([Intel XE#1130]) -> [SKIP][438] ([Intel XE#944]) +1 other test skip
[437]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@xe_query@multigpu-query-uc-fw-version-guc.html
[438]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-436/igt@xe_query@multigpu-query-uc-fw-version-guc.html
* igt@xe_sriov_flr@flr-vf1-clear:
- shard-dg2-set2: [SKIP][439] ([Intel XE#1130]) -> [SKIP][440] ([Intel XE#3342])
[439]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-466/igt@xe_sriov_flr@flr-vf1-clear.html
[440]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-463/igt@xe_sriov_flr@flr-vf1-clear.html
* igt@xe_vm@large-split-binds-2147483648:
- shard-dg2-set2: [DMESG-WARN][441] -> [SKIP][442] ([Intel XE#1130])
[441]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-463/igt@xe_vm@large-split-binds-2147483648.html
[442]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-434/igt@xe_vm@large-split-binds-2147483648.html
* igt@xe_vm@large-split-misaligned-binds-8388608:
- shard-dg2-set2: [DMESG-WARN][443] ([Intel XE#1727]) -> [SKIP][444] ([Intel XE#1130]) +2 other tests skip
[443]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6/shard-dg2-436/igt@xe_vm@large-split-misaligned-binds-8388608.html
[444]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/shard-dg2-466/igt@xe_vm@large-split-misaligned-binds-8388608.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[Intel XE#1000]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1000
[Intel XE#1033]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1033
[Intel XE#1061]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1061
[Intel XE#1091]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1091
[Intel XE#1122]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1122
[Intel XE#1123]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1123
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1126]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1126
[Intel XE#1127]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1127
[Intel XE#1129]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1129
[Intel XE#1130]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1130
[Intel XE#1135]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1135
[Intel XE#1137]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1137
[Intel XE#1173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1173
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1188]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1188
[Intel XE#1195]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1195
[Intel XE#1280]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1280
[Intel XE#1358]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1358
[Intel XE#1426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1426
[Intel XE#1473]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1473
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1500]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1500
[Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
[Intel XE#1607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1607
[Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
[Intel XE#1729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1729
[Intel XE#1794]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1794
[Intel XE#1885]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1885
[Intel XE#2042]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2042
[Intel XE#2050]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2050
[Intel XE#2134]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2134
[Intel XE#2136]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2136
[Intel XE#2168]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2168
[Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
[Intel XE#2229]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2229
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2280]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2280
[Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
[Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
[Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2325]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2325
[Intel XE#2333]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2333
[Intel XE#2351]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2351
[Intel XE#2370]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2370
[Intel XE#2423]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2423
[Intel XE#2426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2426
[Intel XE#2446]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2446
[Intel XE#2472]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2472
[Intel XE#2504]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2504
[Intel XE#2541]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2541
[Intel XE#2566]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2566
[Intel XE#2635]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2635
[Intel XE#2667]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2667
[Intel XE#2705]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2705
[Intel XE#2763]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2763
[Intel XE#2849]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2849
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
[Intel XE#2882]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2882
[Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
[Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
[Intel XE#2905]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2905
[Intel XE#2907]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2907
[Intel XE#2925]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2925
[Intel XE#2927]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2927
[Intel XE#2932]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2932
[Intel XE#2953]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2953
[Intel XE#2961]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2961
[Intel XE#2998]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2998
[Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
[Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
[Intel XE#307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/307
[Intel XE#308]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/308
[Intel XE#3086]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3086
[Intel XE#310]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/310
[Intel XE#3105]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3105
[Intel XE#3149]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3149
[Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
[Intel XE#3191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3191
[Intel XE#3225]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3225
[Intel XE#323]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/323
[Intel XE#330]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/330
[Intel XE#3321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3321
[Intel XE#3342]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3342
[Intel XE#3343]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3343
[Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
[Intel XE#3442]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3442
[Intel XE#3453]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3453
[Intel XE#346]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/346
[Intel XE#3466]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3466
[Intel XE#3467]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3467
[Intel XE#3468]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3468
[Intel XE#3486]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3486
[Intel XE#3499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3499
[Intel XE#3507]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3507
[Intel XE#3514]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3514
[Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
[Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
[Intel XE#560]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/560
[Intel XE#569]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/569
[Intel XE#605]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/605
[Intel XE#607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/607
[Intel XE#619]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/619
[Intel XE#623]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/623
[Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
[Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
[Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
[Intel XE#658]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/658
[Intel XE#756]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/756
[Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
[Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
[Intel XE#877]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/877
[Intel XE#886]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/886
[Intel XE#899]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/899
[Intel XE#908]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/908
[Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
[Intel XE#958]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/958
[i915#2575]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2575
Build changes
-------------
* Linux: xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6 -> xe-pw-141524v1
IGT_8115: 4942fc57c20f9cb2195e70991c4e4df03dd3db21 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-2249-1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6: 1fe9a6cc7d136c9a34c47ccd6ee5a2b7d02c0bd6
xe-pw-141524v1: 141524v1
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-141524v1/index.html
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [RFC PATCH 13/29] drm/xe/mmap: Add mmap support for PCI memory barrier
2024-11-19 10:00 ` Christian König
@ 2024-11-19 11:57 ` Joonas Lahtinen
2024-11-19 12:42 ` Mrozek, Michal
0 siblings, 1 reply; 52+ messages in thread
From: Joonas Lahtinen @ 2024-11-19 11:57 UTC (permalink / raw)
To: Christian König, Matthew Brost, dri-devel, intel-xe,
Michal Mrozek
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, mihail.atanassov,
steven.price, shashank.sharma
Adding Michal from the compute userspace team for sharing references to
the code.
Quoting Christian König (2024-11-19 12:00:44)
> Am 19.11.24 um 00:37 schrieb Matthew Brost:
> > From: Tejas Upadhyay <tejas.upadhyay@intel.com>
> >
> > In order to avoid having userspace use MI_MEM_FENCE,
> > we are adding a mechanism for userspace to generate a
> > PCI memory barrier with low overhead (avoiding an IOCTL
> > call, as well as the overhead of writing to VRAM).
> >
> > This is implemented by memory-mapping a page as uncached
> > that is backed by MMIO on the dGPU, thus allowing userspace
> > to do a memory write to the page without invoking an IOCTL.
> > We select MMIO that is not accessible from the PCI bus, so
> > that the MMIO writes themselves are ignored, but the PCI
> > memory barrier still takes effect, since the MMIO filtering
> > happens after the memory barrier.
> >
> > When we detect the specially defined offset in mmap(), we map
> > the 4K page containing the last page of the doorbell MMIO range
> > to userspace for this purpose.
>
> Well that is quite a hack, but don't you still need a memory barrier
> instruction? E.g. mfence?
I guess you refer to the userspace usage directions? Yeah, userspace
definitely has to make sure that the write actually propagated to the
PCI bus before it can assume the serialization happens on the GPU. I
think the userspace folks should be able to explain how exactly they
orchestrate that. Michal, can you or somebody else share the respective
lines of code in the userspace driver?
At this time, userspace only enables this on x86, but it could also
support other, more exotic platforms via libpciaccess.
> And why don't you expose the real doorbell instead of the last (unused?)
> page of the MMIO region?
Doorbells are a complete red herring here.
The chosen page just happens to be a full 4K MMIO page where any writes coming over
the PCI bus get dropped (and reads return zero) by the GPU. Such a dummy (from the
CPU's point of view) 4K MMIO page allows doing a CPU write that generates a PCI bus
transaction, where the transaction itself is essentially a NOP. But as the transaction
falls into the MMIO address range, it triggers serialization of the incoming traffic
on the GPU side before being ignored.
Regards, Joonas
* RE: [RFC PATCH 13/29] drm/xe/mmap: Add mmap support for PCI memory barrier
2024-11-19 11:57 ` Joonas Lahtinen
@ 2024-11-19 12:42 ` Mrozek, Michal
2024-12-18 12:59 ` Upadhyay, Tejas
0 siblings, 1 reply; 52+ messages in thread
From: Mrozek, Michal @ 2024-11-19 12:42 UTC (permalink / raw)
To: Joonas Lahtinen, Christian König, Brost, Matthew,
dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Cc: Graunke, Kenneth W, Landwerlin, Lionel G, Souza, Jose,
simona.vetter@ffwll.ch, thomas.hellstrom@linux.intel.com,
boris.brezillon@collabora.com, airlied@gmail.com,
mihail.atanassov@arm.com, steven.price@arm.com,
shashank.sharma@amd.com
"Adding Michal from the compute userspace team for sharing references to the code.
Quoting Christian König (2024-11-19 12:00:44)
> Am 19.11.24 um 00:37 schrieb Matthew Brost:
> > From: Tejas Upadhyay <tejas.upadhyay@intel.com>
> >
> > In order to avoid having userspace use MI_MEM_FENCE, we are
> > adding a mechanism for userspace to generate a PCI memory barrier
> > with low overhead (avoiding an IOCTL call, as well as the overhead
> > of writing to VRAM).
> >
> > This is implemented by memory-mapping a page as uncached that is
> > backed by MMIO on the dGPU, thus allowing userspace to do a memory
> > write to the page without invoking an IOCTL.
> > We select MMIO that is not accessible from the PCI bus, so that the
> > MMIO writes themselves are ignored, but the PCI memory barrier
> > still takes effect, since the MMIO filtering happens after the
> > memory barrier.
> >
> > When we detect the specially defined offset in mmap(), we map the
> > 4K page containing the last page of the doorbell MMIO range to
> > userspace for this purpose.
>
> Well that is quite a hack, but don't you still need a memory barrier
> instruction? E.g. mfence?
I guess you refer to the userspace usage directions? Yeah, userspace definitely has to make sure that the write actually propagated to the PCI bus before it can assume the serialization happens on the GPU. I think the userspace folks should be able to explain how exactly they orchestrate that. Michal, can you or somebody else share the respective lines of code in the userspace driver?
At this time, userspace only enables this on x86, but it could also support other, more exotic platforms via libpciaccess.
> And why don't you expose the real doorbell instead of the last
> (unused?) page of the MMIO region?
Doorbells are a complete red herring here.
The chosen page just happens to be a full 4K MMIO page where any writes coming over the PCI bus get dropped (and reads return zero) by the GPU. Such a dummy (from the CPU's point of view) 4K MMIO page allows doing a CPU write that generates a PCI bus transaction, where the transaction itself is essentially a NOP. But as the transaction falls into the MMIO address range, it triggers serialization of the incoming traffic on the GPU side before being ignored.
Regards, Joonas
"
Here is the appropriate path:
https://github.com/intel/compute-runtime/blob/f589408848128434e410b6b4c2a9107ff78a74e9/shared/source/direct_submission/direct_submission_hw.inl#L437
The flow is as follows:
1. Do updates to the memory shared between CPU and GPU using the WC memory mapping.
2. Emit an sfence instruction to make sure there is no reordering on the CPU side.
3. Emit the pciBarrier write (this patch); this ensures that all earlier transactions are properly ordered from the GPU side.
So the PCI memory barrier is submitted after the sfence instruction, which makes sure that all earlier transactions are properly ordered.
Michal
* Re: [RFC PATCH 01/29] dma-fence: Add dma_fence_preempt base class
2024-11-18 23:37 ` [RFC PATCH 01/29] dma-fence: Add dma_fence_preempt base class Matthew Brost
@ 2024-11-20 13:31 ` Christian König
2024-11-20 17:36 ` Matthew Brost
0 siblings, 1 reply; 52+ messages in thread
From: Christian König @ 2024-11-20 13:31 UTC (permalink / raw)
To: Matthew Brost, intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, mihail.atanassov,
steven.price, shashank.sharma
Am 19.11.24 um 00:37 schrieb Matthew Brost:
> Add a dma_fence_preempt base class with driver ops to implement
> preemption, based on the existing Xe preemptive fence implementation.
>
> Annotated to ensure correct driver usage.
>
> Cc: Dave Airlie <airlied@redhat.com>
> Cc: Simona Vetter <simona.vetter@ffwll.ch>
> Cc: Christian Koenig <christian.koenig@amd.com>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/dma-buf/Makefile | 2 +-
> drivers/dma-buf/dma-fence-preempt.c | 133 ++++++++++++++++++++++++++++
> include/linux/dma-fence-preempt.h | 56 ++++++++++++
> 3 files changed, 190 insertions(+), 1 deletion(-)
> create mode 100644 drivers/dma-buf/dma-fence-preempt.c
> create mode 100644 include/linux/dma-fence-preempt.h
>
> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> index 70ec901edf2c..c25500bb38b5 100644
> --- a/drivers/dma-buf/Makefile
> +++ b/drivers/dma-buf/Makefile
> @@ -1,6 +1,6 @@
> # SPDX-License-Identifier: GPL-2.0-only
> obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
> - dma-fence-unwrap.o dma-resv.o
> + dma-fence-preempt.o dma-fence-unwrap.o dma-resv.o
> obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
> obj-$(CONFIG_DMABUF_HEAPS) += heaps/
> obj-$(CONFIG_SYNC_FILE) += sync_file.o
> diff --git a/drivers/dma-buf/dma-fence-preempt.c b/drivers/dma-buf/dma-fence-preempt.c
> new file mode 100644
> index 000000000000..6e6ce7ea7421
> --- /dev/null
> +++ b/drivers/dma-buf/dma-fence-preempt.c
> @@ -0,0 +1,133 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2024 Intel Corporation
> + */
> +
> +#include <linux/dma-fence-preempt.h>
> +#include <linux/dma-resv.h>
> +
> +static void dma_fence_preempt_work_func(struct work_struct *w)
> +{
> + bool cookie = dma_fence_begin_signalling();
> + struct dma_fence_preempt *pfence =
> + container_of(w, typeof(*pfence), work);
> + const struct dma_fence_preempt_ops *ops = pfence->ops;
> + int err = pfence->base.error;
> +
> + if (!err) {
> + err = ops->preempt_wait(pfence);
> + if (err)
> + dma_fence_set_error(&pfence->base, err);
> + }
> +
> + dma_fence_signal(&pfence->base);
> + ops->preempt_finished(pfence);
Why is that callback useful?
> +
> + dma_fence_end_signalling(cookie);
> +}
> +
> +static const char *
> +dma_fence_preempt_get_driver_name(struct dma_fence *fence)
> +{
> + return "dma_fence_preempt";
> +}
> +
> +static const char *
> +dma_fence_preempt_get_timeline_name(struct dma_fence *fence)
> +{
> + return "ordered";
> +}
> +
> +static void dma_fence_preempt_issue(struct dma_fence_preempt *pfence)
> +{
> + int err;
> +
> + err = pfence->ops->preempt(pfence);
> + if (err)
> + dma_fence_set_error(&pfence->base, err);
> +
> + queue_work(pfence->wq, &pfence->work);
> +}
> +
> +static void dma_fence_preempt_cb(struct dma_fence *fence,
> + struct dma_fence_cb *cb)
> +{
> + struct dma_fence_preempt *pfence =
> + container_of(cb, typeof(*pfence), cb);
> +
> + dma_fence_preempt_issue(pfence);
> +}
> +
> +static void dma_fence_preempt_delay(struct dma_fence_preempt *pfence)
> +{
> + struct dma_fence *fence;
> + int err;
> +
> + fence = pfence->ops->preempt_delay(pfence);
Mhm, why is that useful?
> + if (WARN_ON_ONCE(!fence || IS_ERR(fence)))
> + return;
> +
> + err = dma_fence_add_callback(fence, &pfence->cb, dma_fence_preempt_cb);
You are running into exactly the same bug we had :)
The problem here is that you can't call dma_fence_add_callback() from
the enable_signaling callback. The background is that the
fence_ops->enable_signaling callback is called with the spinlock of the
preemption fence held.
This spinlock can be the same as the one of the user fence, but it could
also be a different one. Either way, calling dma_fence_add_callback()
would make lockdep print a nice warning.
I tried to solve this by changing the dma_fence code to not call
enable_signaling with the lock held; we wanted to do that anyway to
prevent a bunch of issues with driver unload. But I realized that
getting this upstream would take too long.
Long story short, we moved handling the user fence into the work item.
Apart from that looks rather solid to me.
Regards,
Christian.
> + if (err == -ENOENT)
> + dma_fence_preempt_issue(pfence);
> +}
> +
> +static bool dma_fence_preempt_enable_signaling(struct dma_fence *fence)
> +{
> + struct dma_fence_preempt *pfence =
> + container_of(fence, typeof(*pfence), base);
> +
> + if (pfence->ops->preempt_delay)
> + dma_fence_preempt_delay(pfence);
> + else
> + dma_fence_preempt_issue(pfence);
> +
> + return true;
> +}
> +
> +static const struct dma_fence_ops preempt_fence_ops = {
> + .get_driver_name = dma_fence_preempt_get_driver_name,
> + .get_timeline_name = dma_fence_preempt_get_timeline_name,
> + .enable_signaling = dma_fence_preempt_enable_signaling,
> +};
> +
> +/**
> + * dma_fence_is_preempt() - Is preempt fence
> + *
> + * @fence: Preempt fence
> + *
> + * Return: True if preempt fence, False otherwise
> + */
> +bool dma_fence_is_preempt(const struct dma_fence *fence)
> +{
> + return fence->ops == &preempt_fence_ops;
> +}
> +EXPORT_SYMBOL(dma_fence_is_preempt);
> +
> +/**
> + * dma_fence_preempt_init() - Initialize preempt fence
> + *
> + * @fence: Preempt fence
> + * @ops: Preempt fence operations
> + * @wq: Work queue for preempt wait, should have WQ_MEM_RECLAIM set
> + * @context: Fence context
> + * @seqno: Fence sequence number
> + */
> +void dma_fence_preempt_init(struct dma_fence_preempt *fence,
> + const struct dma_fence_preempt_ops *ops,
> + struct workqueue_struct *wq,
> + u64 context, u64 seqno)
> +{
> + /*
> + * XXX: We really want to check wq for WQ_MEM_RECLAIM here but
> + * workqueue_struct is private.
> + */
> +
> + fence->ops = ops;
> + fence->wq = wq;
> + INIT_WORK(&fence->work, dma_fence_preempt_work_func);
> + spin_lock_init(&fence->lock);
> + dma_fence_init(&fence->base, &preempt_fence_ops,
> + &fence->lock, context, seqno);
> +}
> +EXPORT_SYMBOL(dma_fence_preempt_init);
> diff --git a/include/linux/dma-fence-preempt.h b/include/linux/dma-fence-preempt.h
> new file mode 100644
> index 000000000000..28d803f89527
> --- /dev/null
> +++ b/include/linux/dma-fence-preempt.h
> @@ -0,0 +1,56 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2024 Intel Corporation
> + */
> +
> +#ifndef __LINUX_DMA_FENCE_PREEMPT_H
> +#define __LINUX_DMA_FENCE_PREEMPT_H
> +
> +#include <linux/dma-fence.h>
> +#include <linux/workqueue.h>
> +
> +struct dma_fence_preempt;
> +struct dma_resv;
> +
> +/**
> + * struct dma_fence_preempt_ops - Preempt fence operations
> + *
> + * These functions should be implemented in the driver side.
> + */
> +struct dma_fence_preempt_ops {
> + /** @preempt_delay: Preempt execution with a delay */
> + struct dma_fence *(*preempt_delay)(struct dma_fence_preempt *fence);
> + /** @preempt: Preempt execution */
> + int (*preempt)(struct dma_fence_preempt *fence);
> + /** @preempt_wait: Wait for preempt of execution to complete */
> + int (*preempt_wait)(struct dma_fence_preempt *fence);
> + /** @preempt_finished: Signal that the preempt has finished */
> + void (*preempt_finished)(struct dma_fence_preempt *fence);
> +};
> +
> +/**
> + * struct dma_fence_preempt - Embedded preempt fence base class
> + */
> +struct dma_fence_preempt {
> + /** @base: Fence base class */
> + struct dma_fence base;
> + /** @lock: Spinlock for fence handling */
> + spinlock_t lock;
> + /** @cb: Callback preempt delay */
> + struct dma_fence_cb cb;
> + /** @ops: Preempt fence operation */
> + const struct dma_fence_preempt_ops *ops;
> + /** @wq: Work queue for preempt wait */
> + struct workqueue_struct *wq;
> + /** @work: Work struct for preempt wait */
> + struct work_struct work;
> +};
> +
> +bool dma_fence_is_preempt(const struct dma_fence *fence);
> +
> +void dma_fence_preempt_init(struct dma_fence_preempt *fence,
> + const struct dma_fence_preempt_ops *ops,
> + struct workqueue_struct *wq,
> + u64 context, u64 seqno);
> +
> +#endif
* Re: [RFC PATCH 02/29] dma-fence: Add dma_fence_user_fence
2024-11-18 23:37 ` [RFC PATCH 02/29] dma-fence: Add dma_fence_user_fence Matthew Brost
@ 2024-11-20 13:38 ` Christian König
2024-11-20 22:50 ` Matthew Brost
0 siblings, 1 reply; 52+ messages in thread
From: Christian König @ 2024-11-20 13:38 UTC (permalink / raw)
To: Matthew Brost, intel-xe, dri-devel
Cc: kenneth.w.graunke, lionel.g.landwerlin, jose.souza, simona.vetter,
thomas.hellstrom, boris.brezillon, airlied, mihail.atanassov,
steven.price, shashank.sharma
Am 19.11.24 um 00:37 schrieb Matthew Brost:
> Normalize user fence attachment to a DMA fence. A user fence is a simple
> seqno write to memory, implemented by attaching a DMA fence callback
> that writes out the seqno. The intended use case is importing a dma-fence
> into the kernel and exporting a user fence.
>
> Helpers added to allocate, attach, and free a dma_fence_user_fence.
>
> Cc: Dave Airlie <airlied@redhat.com>
> Cc: Simona Vetter <simona.vetter@ffwll.ch>
> Cc: Christian Koenig <christian.koenig@amd.com>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/dma-buf/Makefile | 2 +-
> drivers/dma-buf/dma-fence-user-fence.c | 73 ++++++++++++++++++++++++++
> include/linux/dma-fence-user-fence.h | 31 +++++++++++
> 3 files changed, 105 insertions(+), 1 deletion(-)
> create mode 100644 drivers/dma-buf/dma-fence-user-fence.c
> create mode 100644 include/linux/dma-fence-user-fence.h
>
> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> index c25500bb38b5..ba9ba339319e 100644
> --- a/drivers/dma-buf/Makefile
> +++ b/drivers/dma-buf/Makefile
> @@ -1,6 +1,6 @@
> # SPDX-License-Identifier: GPL-2.0-only
> obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
> - dma-fence-preempt.o dma-fence-unwrap.o dma-resv.o
> + dma-fence-preempt.o dma-fence-unwrap.o dma-fence-user-fence.o dma-resv.o
> obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
> obj-$(CONFIG_DMABUF_HEAPS) += heaps/
> obj-$(CONFIG_SYNC_FILE) += sync_file.o
> diff --git a/drivers/dma-buf/dma-fence-user-fence.c b/drivers/dma-buf/dma-fence-user-fence.c
> new file mode 100644
> index 000000000000..5a4b289bacb8
> --- /dev/null
> +++ b/drivers/dma-buf/dma-fence-user-fence.c
> @@ -0,0 +1,73 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2024 Intel Corporation
> + */
> +
> +#include <linux/dma-fence-user-fence.h>
> +#include <linux/slab.h>
> +
> +static void user_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
> +{
> + struct dma_fence_user_fence *user_fence =
> + container_of(cb, struct dma_fence_user_fence, cb);
> +
> + if (user_fence->map.is_iomem)
> + writeq(user_fence->seqno, user_fence->map.vaddr_iomem);
> + else
> + *(u64 *)user_fence->map.vaddr = user_fence->seqno;
> +
> + dma_fence_user_fence_free(user_fence);
> +}
> +
> +/**
> + * dma_fence_user_fence_alloc() - Allocate user fence
> + *
> + * Return: Allocated struct dma_fence_user_fence on Success, NULL on failure
> + */
> +struct dma_fence_user_fence *dma_fence_user_fence_alloc(void)
> +{
> + return kmalloc(sizeof(struct dma_fence_user_fence), GFP_KERNEL);
> +}
> +EXPORT_SYMBOL(dma_fence_user_fence_alloc);
> +
> +/**
> + * dma_fence_user_fence_free() - Free user fence
> + *
> + * Free user fence. Should only be called if dma_fence_user_fence_attach()
> + * has not been called, to clean up the original allocation from
> + * dma_fence_user_fence_alloc().
> + */
> +void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence)
> +{
> + kfree(user_fence);
We need to give that child a different name, e.g. something like
dma_fence_seq_write.
I was just about to complain that all dma_fence implementations need to
be RCU safe, and only then saw that this isn't a dma_fence implementation.
Question: Why is this useful in the first place? At least AMD HW can
write to basically all memory locations and even registers when a fence
finishes.
Regards,
Christian.
> +}
> +EXPORT_SYMBOL(dma_fence_user_fence_free);
> +
> +/**
> + * dma_fence_user_fence_attach() - Attach user fence to dma-fence
> + *
> + * @fence: fence
> + * @user_fence: user fence
> + * @map: IOSYS map to write seqno to
> + * @seqno: seqno to write to IOSYS map
> + *
> + * Attach a user fence, which is a seqno write to an IOSYS map, to a DMA fence.
> + * The caller must guarantee that the memory in the IOSYS map doesn't move
> + * before the fence signals. This is typically done by installing the DMA fence
> + * into the BO's DMA reservation bookkeeping slot from which the IOSYS was
> + * derived.
> + */
> +void dma_fence_user_fence_attach(struct dma_fence *fence,
> + struct dma_fence_user_fence *user_fence,
> + struct iosys_map *map, u64 seqno)
> +{
> + int err;
> +
> + user_fence->map = *map;
> + user_fence->seqno = seqno;
> +
> + err = dma_fence_add_callback(fence, &user_fence->cb, user_fence_cb);
> + if (err == -ENOENT)
> + user_fence_cb(NULL, &user_fence->cb);
> +}
> +EXPORT_SYMBOL(dma_fence_user_fence_attach);
> diff --git a/include/linux/dma-fence-user-fence.h b/include/linux/dma-fence-user-fence.h
> new file mode 100644
> index 000000000000..8678129c7d56
> --- /dev/null
> +++ b/include/linux/dma-fence-user-fence.h
> @@ -0,0 +1,31 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2024 Intel Corporation
> + */
> +
> +#ifndef __LINUX_DMA_FENCE_USER_FENCE_H
> +#define __LINUX_DMA_FENCE_USER_FENCE_H
> +
> +#include <linux/dma-fence.h>
> +#include <linux/iosys-map.h>
> +
> +/** struct dma_fence_user_fence - User fence */
> +struct dma_fence_user_fence {
> + /** @cb: dma-fence callback used to attach user fence to dma-fence */
> + struct dma_fence_cb cb;
> + /** @map: IOSYS map to write seqno to */
> + struct iosys_map map;
> + /** @seqno: seqno to write to IOSYS map */
> + u64 seqno;
> +};
> +
> +struct dma_fence_user_fence *dma_fence_user_fence_alloc(void);
> +
> +void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence);
> +
> +void dma_fence_user_fence_attach(struct dma_fence *fence,
> + struct dma_fence_user_fence *user_fence,
> + struct iosys_map *map,
> + u64 seqno);
> +
> +#endif
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [RFC PATCH 01/29] dma-fence: Add dma_fence_preempt base class
2024-11-20 13:31 ` Christian König
@ 2024-11-20 17:36 ` Matthew Brost
2024-11-21 10:04 ` Christian König
0 siblings, 1 reply; 52+ messages in thread
From: Matthew Brost @ 2024-11-20 17:36 UTC (permalink / raw)
To: Christian König
Cc: intel-xe, dri-devel, kenneth.w.graunke, lionel.g.landwerlin,
jose.souza, simona.vetter, thomas.hellstrom, boris.brezillon,
airlied, mihail.atanassov, steven.price, shashank.sharma
On Wed, Nov 20, 2024 at 02:31:50PM +0100, Christian König wrote:
> Am 19.11.24 um 00:37 schrieb Matthew Brost:
> > Add a dma_fence_preempt base class with driver ops to implement
> > preemption, based on the existing Xe preemptive fence implementation.
> >
> > Annotated to ensure correct driver usage.
> >
> > Cc: Dave Airlie <airlied@redhat.com>
> > Cc: Simona Vetter <simona.vetter@ffwll.ch>
> > Cc: Christian Koenig <christian.koenig@amd.com>
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > drivers/dma-buf/Makefile | 2 +-
> > drivers/dma-buf/dma-fence-preempt.c | 133 ++++++++++++++++++++++++++++
> > include/linux/dma-fence-preempt.h | 56 ++++++++++++
> > 3 files changed, 190 insertions(+), 1 deletion(-)
> > create mode 100644 drivers/dma-buf/dma-fence-preempt.c
> > create mode 100644 include/linux/dma-fence-preempt.h
> >
> > diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> > index 70ec901edf2c..c25500bb38b5 100644
> > --- a/drivers/dma-buf/Makefile
> > +++ b/drivers/dma-buf/Makefile
> > @@ -1,6 +1,6 @@
> > # SPDX-License-Identifier: GPL-2.0-only
> > obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
> > - dma-fence-unwrap.o dma-resv.o
> > + dma-fence-preempt.o dma-fence-unwrap.o dma-resv.o
> > obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
> > obj-$(CONFIG_DMABUF_HEAPS) += heaps/
> > obj-$(CONFIG_SYNC_FILE) += sync_file.o
> > diff --git a/drivers/dma-buf/dma-fence-preempt.c b/drivers/dma-buf/dma-fence-preempt.c
> > new file mode 100644
> > index 000000000000..6e6ce7ea7421
> > --- /dev/null
> > +++ b/drivers/dma-buf/dma-fence-preempt.c
> > @@ -0,0 +1,133 @@
> > +// SPDX-License-Identifier: MIT
> > +/*
> > + * Copyright © 2024 Intel Corporation
> > + */
> > +
> > +#include <linux/dma-fence-preempt.h>
> > +#include <linux/dma-resv.h>
> > +
> > +static void dma_fence_preempt_work_func(struct work_struct *w)
> > +{
> > + bool cookie = dma_fence_begin_signalling();
> > + struct dma_fence_preempt *pfence =
> > + container_of(w, typeof(*pfence), work);
> > + const struct dma_fence_preempt_ops *ops = pfence->ops;
> > + int err = pfence->base.error;
> > +
> > + if (!err) {
> > + err = ops->preempt_wait(pfence);
> > + if (err)
> > + dma_fence_set_error(&pfence->base, err);
> > + }
> > +
> > + dma_fence_signal(&pfence->base);
> > + ops->preempt_finished(pfence);
>
> Why is that callback useful?
>
In Xe, this is where we kick the resume worker and drop a ref to the
preemption object, which in Xe is an individual queue, and in AMD is
a VM, right? wrt the preemption object, I've reasoned this should work
for either a per-queue or per-VM driver design of preempt fences.
This part could likely be moved into the preempt_wait callback, though
it would get a little goofy in the error case: if preempt_wait is not
called, the driver side would still need to clean up a ref. Maybe I
don't even need a ref - have to think that through - but for general
safety we typically like to take refs whenever a fence references a
different object.
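The ref discipline described above can be modeled in a few lines of
userspace C (a toy sketch, not kernel code; `preempt_obj`, `obj_get`,
`obj_put`, and the `resume_kicked` flag are illustrative names): the
object the fence points at takes a ref when the fence is created, and
`preempt_finished` is the single place that balances it, whether or not
`preempt_wait` ran successfully.

```c
#include <stdatomic.h>

/* Toy model of the object a preempt fence references
 * (a queue in Xe, a VM on AMD). */
struct preempt_obj {
	atomic_int refs;
	int resume_kicked;
};

static void obj_get(struct preempt_obj *o)
{
	atomic_fetch_add(&o->refs, 1);
}

static int obj_put(struct preempt_obj *o)
{
	return atomic_fetch_sub(&o->refs, 1) - 1;
}

/* Models ops->preempt_finished(): always called exactly once per fence,
 * on both the success and error paths, so the ref taken when the fence
 * was created is balanced in exactly one place. */
static void preempt_finished(struct preempt_obj *o)
{
	o->resume_kicked = 1;	/* kick the resume worker */
	obj_put(o);		/* drop the fence's ref */
}
```

Whether the ref is strictly needed is an open question in the thread; the sketch only shows that routing the put through one callback keeps the accounting balanced.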
> > +
> > + dma_fence_end_signalling(cookie);
> > +}
> > +
> > +static const char *
> > +dma_fence_preempt_get_driver_name(struct dma_fence *fence)
> > +{
> > + return "dma_fence_preempt";
> > +}
> > +
> > +static const char *
> > +dma_fence_preempt_get_timeline_name(struct dma_fence *fence)
> > +{
> > + return "ordered";
> > +}
> > +
> > +static void dma_fence_preempt_issue(struct dma_fence_preempt *pfence)
> > +{
> > + int err;
> > +
> > + err = pfence->ops->preempt(pfence);
> > + if (err)
> > + dma_fence_set_error(&pfence->base, err);
> > +
> > + queue_work(pfence->wq, &pfence->work);
> > +}
> > +
> > +static void dma_fence_preempt_cb(struct dma_fence *fence,
> > + struct dma_fence_cb *cb)
> > +{
> > + struct dma_fence_preempt *pfence =
> > + container_of(cb, typeof(*pfence), cb);
> > +
> > + dma_fence_preempt_issue(pfence);
> > +}
> > +
> > +static void dma_fence_preempt_delay(struct dma_fence_preempt *pfence)
> > +{
> > + struct dma_fence *fence;
> > + int err;
> > +
> > + fence = pfence->ops->preempt_delay(pfence);
>
> Mhm, why is that useful?
>
This is for attaching the preempt object's last exported fence, which
needs to be signaled before the preemption is issued. So for purely long-
running VMs, this callback can be NULL. For VMs with user queues + DMA
fences, the driver returns the last fence from the convert-user-fence-
to-dma-fence IOCTL.
I realized my kernel doc doesn't explain this as well as it should. I
have already made it more verbose locally, and hopefully it clearly
explains all of this.
> > + if (WARN_ON_ONCE(!fence || IS_ERR(fence)))
> > + return;
> > +
> > + err = dma_fence_add_callback(fence, &pfence->cb, dma_fence_preempt_cb);
>
> You are running into the exactly same bug we had :)
>
> The problem here is that you can't call dma_fence_add_callback() from the
> enable_signaling callback. Background is that the
> fence_ops->enable_signaling callback is called with the spinlock of the
> preemption fence held.
>
> This spinlock can be the same as the one of the user fence, but it could
> also be a different one. Either way calling dma_fence_add_callback() would
> let lockdep print a nice warning.
>
Hmm, I see the problem if you share a lock between the preempt fence and
the last exported fence, but as long as these locks are separate I don't
see the issue.
The locking order then is:
preempt fence lock -> last exported fence lock.
Lockdep does not explode in Xe, but maybe I can buy that this is a
little unsafe. We could always move preempt_delay to the worker, attach
a CB, and rekick the worker upon the last fence signaling if you think
that is safer. Of course, we could also just directly wait on the
returned last fence in the worker.
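The ordering argument above can be sketched with plain pthread mutexes
(a userspace toy, not the actual dma-fence spinlocks; all names are
illustrative): as long as every path nests the two locks in the same
preempt-lock-then-exported-lock order, and the two locks are never the
same lock, no ABBA deadlock can form.

```c
#include <pthread.h>

/* Fixed order: preempt fence lock first, then the last exported
 * fence's lock. */
static pthread_mutex_t preempt_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t exported_lock = PTHREAD_MUTEX_INITIALIZER;
static int progressed;

static void *worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&preempt_lock);
	/* Nested acquisition, but in the same order on every path. */
	pthread_mutex_lock(&exported_lock);
	progressed++;
	pthread_mutex_unlock(&exported_lock);
	pthread_mutex_unlock(&preempt_lock);
	return 0;
}
```

This only illustrates the ordering rule; lockdep's complaint in the shared-lock case is precisely that it cannot prove the two locks are distinct.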
> I tried to solve this by changing the dma_fence code to not call
> enable_signaling with the lock held, we wanted to do that anyway to prevent
> a bunch of issues with driver unload. But I realized that getting this
> upstream would take too long.
>
> Long story short we moved handling the user fence into the work item.
>
I did run into an issue when trying to make preempt_wait return a
fence + attach a CB, and signal the preempt fence from the CB. I got
locking inversions; I almost worked through them but eventually gave up
and stuck with the worker.
Matt
> Apart from that looks rather solid to me.
>
> Regards,
> Christian.
>
> > + if (err == -ENOENT)
> > + dma_fence_preempt_issue(pfence);
> > +}
> > +
> > +static bool dma_fence_preempt_enable_signaling(struct dma_fence *fence)
> > +{
> > + struct dma_fence_preempt *pfence =
> > + container_of(fence, typeof(*pfence), base);
> > +
> > + if (pfence->ops->preempt_delay)
> > + dma_fence_preempt_delay(pfence);
> > + else
> > + dma_fence_preempt_issue(pfence);
> > +
> > + return true;
> > +}
> > +
> > +static const struct dma_fence_ops preempt_fence_ops = {
> > + .get_driver_name = dma_fence_preempt_get_driver_name,
> > + .get_timeline_name = dma_fence_preempt_get_timeline_name,
> > + .enable_signaling = dma_fence_preempt_enable_signaling,
> > +};
> > +
> > +/**
> > + * dma_fence_is_preempt() - Is preempt fence
> > + *
> > + * @fence: Preempt fence
> > + *
> > + * Return: True if preempt fence, False otherwise
> > + */
> > +bool dma_fence_is_preempt(const struct dma_fence *fence)
> > +{
> > + return fence->ops == &preempt_fence_ops;
> > +}
> > +EXPORT_SYMBOL(dma_fence_is_preempt);
> > +
> > +/**
> > + * dma_fence_preempt_init() - Initialize preempt fence
> > + *
> > + * @fence: Preempt fence
> > + * @ops: Preempt fence operations
> > + * @wq: Work queue for preempt wait, should have WQ_MEM_RECLAIM set
> > + * @context: Fence context
> > + * @seqno: Fence sequence number
> > + */
> > +void dma_fence_preempt_init(struct dma_fence_preempt *fence,
> > + const struct dma_fence_preempt_ops *ops,
> > + struct workqueue_struct *wq,
> > + u64 context, u64 seqno)
> > +{
> > + /*
> > + * XXX: We really want to check wq for WQ_MEM_RECLAIM here but
> > + * workqueue_struct is private.
> > + */
> > +
> > + fence->ops = ops;
> > + fence->wq = wq;
> > + INIT_WORK(&fence->work, dma_fence_preempt_work_func);
> > + spin_lock_init(&fence->lock);
> > + dma_fence_init(&fence->base, &preempt_fence_ops,
> > + &fence->lock, context, seqno);
> > +}
> > +EXPORT_SYMBOL(dma_fence_preempt_init);
> > diff --git a/include/linux/dma-fence-preempt.h b/include/linux/dma-fence-preempt.h
> > new file mode 100644
> > index 000000000000..28d803f89527
> > --- /dev/null
> > +++ b/include/linux/dma-fence-preempt.h
> > @@ -0,0 +1,56 @@
> > +/* SPDX-License-Identifier: MIT */
> > +/*
> > + * Copyright © 2024 Intel Corporation
> > + */
> > +
> > +#ifndef __LINUX_DMA_FENCE_PREEMPT_H
> > +#define __LINUX_DMA_FENCE_PREEMPT_H
> > +
> > +#include <linux/dma-fence.h>
> > +#include <linux/workqueue.h>
> > +
> > +struct dma_fence_preempt;
> > +struct dma_resv;
> > +
> > +/**
> > + * struct dma_fence_preempt_ops - Preempt fence operations
> > + *
> > + * These functions should be implemented in the driver side.
> > + */
> > +struct dma_fence_preempt_ops {
> > + /** @preempt_delay: Preempt execution with a delay */
> > + struct dma_fence *(*preempt_delay)(struct dma_fence_preempt *fence);
> > + /** @preempt: Preempt execution */
> > + int (*preempt)(struct dma_fence_preempt *fence);
> > + /** @preempt_wait: Wait for preempt of execution to complete */
> > + int (*preempt_wait)(struct dma_fence_preempt *fence);
> > + /** @preempt_finished: Signal that the preempt has finished */
> > + void (*preempt_finished)(struct dma_fence_preempt *fence);
> > +};
> > +
> > +/**
> > + * struct dma_fence_preempt - Embedded preempt fence base class
> > + */
> > +struct dma_fence_preempt {
> > + /** @base: Fence base class */
> > + struct dma_fence base;
> > + /** @lock: Spinlock for fence handling */
> > + spinlock_t lock;
> > + /** @cb: Callback preempt delay */
> > + struct dma_fence_cb cb;
> > + /** @ops: Preempt fence operation */
> > + const struct dma_fence_preempt_ops *ops;
> > + /** @wq: Work queue for preempt wait */
> > + struct workqueue_struct *wq;
> > + /** @work: Work struct for preempt wait */
> > + struct work_struct work;
> > +};
> > +
> > +bool dma_fence_is_preempt(const struct dma_fence *fence);
> > +
> > +void dma_fence_preempt_init(struct dma_fence_preempt *fence,
> > + const struct dma_fence_preempt_ops *ops,
> > + struct workqueue_struct *wq,
> > + u64 context, u64 seqno);
> > +
> > +#endif
>
* Re: [RFC PATCH 02/29] dma-fence: Add dma_fence_user_fence
2024-11-20 13:38 ` Christian König
@ 2024-11-20 22:50 ` Matthew Brost
2024-11-21 9:31 ` Christian König
0 siblings, 1 reply; 52+ messages in thread
From: Matthew Brost @ 2024-11-20 22:50 UTC (permalink / raw)
To: Christian König
Cc: intel-xe, dri-devel, kenneth.w.graunke, lionel.g.landwerlin,
jose.souza, simona.vetter, thomas.hellstrom, boris.brezillon,
airlied, mihail.atanassov, steven.price, shashank.sharma
On Wed, Nov 20, 2024 at 02:38:49PM +0100, Christian König wrote:
> Am 19.11.24 um 00:37 schrieb Matthew Brost:
> > Normalize user fence attachment to a DMA fence. A user fence is a simple
> > seqno write to memory, implemented by attaching a DMA fence callback
> > that writes out the seqno. Intended use case is importing a dma-fence
> > into kernel and exporting a user fence.
> >
> > Helpers added to allocate, attach, and free a dma_fence_user_fence.
> >
> > Cc: Dave Airlie <airlied@redhat.com>
> > Cc: Simona Vetter <simona.vetter@ffwll.ch>
> > Cc: Christian Koenig <christian.koenig@amd.com>
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > drivers/dma-buf/Makefile | 2 +-
> > drivers/dma-buf/dma-fence-user-fence.c | 73 ++++++++++++++++++++++++++
> > include/linux/dma-fence-user-fence.h | 31 +++++++++++
> > 3 files changed, 105 insertions(+), 1 deletion(-)
> > create mode 100644 drivers/dma-buf/dma-fence-user-fence.c
> > create mode 100644 include/linux/dma-fence-user-fence.h
> >
> > diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> > index c25500bb38b5..ba9ba339319e 100644
> > --- a/drivers/dma-buf/Makefile
> > +++ b/drivers/dma-buf/Makefile
> > @@ -1,6 +1,6 @@
> > # SPDX-License-Identifier: GPL-2.0-only
> > obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
> > - dma-fence-preempt.o dma-fence-unwrap.o dma-resv.o
> > + dma-fence-preempt.o dma-fence-unwrap.o dma-fence-user-fence.o dma-resv.o
> > obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
> > obj-$(CONFIG_DMABUF_HEAPS) += heaps/
> > obj-$(CONFIG_SYNC_FILE) += sync_file.o
> > diff --git a/drivers/dma-buf/dma-fence-user-fence.c b/drivers/dma-buf/dma-fence-user-fence.c
> > new file mode 100644
> > index 000000000000..5a4b289bacb8
> > --- /dev/null
> > +++ b/drivers/dma-buf/dma-fence-user-fence.c
> > @@ -0,0 +1,73 @@
> > +// SPDX-License-Identifier: MIT
> > +/*
> > + * Copyright © 2024 Intel Corporation
> > + */
> > +
> > +#include <linux/dma-fence-user-fence.h>
> > +#include <linux/slab.h>
> > +
> > +static void user_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
> > +{
> > + struct dma_fence_user_fence *user_fence =
> > + container_of(cb, struct dma_fence_user_fence, cb);
> > +
> > + if (user_fence->map.is_iomem)
> > + writeq(user_fence->seqno, user_fence->map.vaddr_iomem);
> > + else
> > + *(u64 *)user_fence->map.vaddr = user_fence->seqno;
> > +
> > + dma_fence_user_fence_free(user_fence);
> > +}
> > +
> > +/**
> > + * dma_fence_user_fence_alloc() - Allocate user fence
> > + *
> > + * Return: Allocated struct dma_fence_user_fence on Success, NULL on failure
> > + */
> > +struct dma_fence_user_fence *dma_fence_user_fence_alloc(void)
> > +{
> > + return kmalloc(sizeof(struct dma_fence_user_fence), GFP_KERNEL);
> > +}
> > +EXPORT_SYMBOL(dma_fence_user_fence_alloc);
> > +
> > +/**
> > + * dma_fence_user_fence_free() - Free user fence
> > + *
> > + * Free user fence. Should only be called on a user fence if
> > + * dma_fence_user_fence_attach() has not been called, to clean up the original
> > + * allocation from dma_fence_user_fence_alloc().
> > + */
> > +void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence)
> > +{
> > + kfree(user_fence);
>
> We need to give that child a different name, e.g. something like
> dma_fence_seq_write or something like that.
>
Yea, I didn't like this name either. dma_fence_seq_write seems better.
> I was just about to complain that all dma_fence implementations need to be
> RCU safe and only then saw that this isn't a dma_fence implementation.
>
Nope, just a helper to write back a value which user space can find when
a dma-fence signals.
> Question: Why is that useful in the first place? At least AMD HW can write
> to basically all memory locations and even registers when a fence finishes?
>
This could be used in a few places.
1. On VM bind completion, a seqno is written which user jobs can then
wait on via ring instructions. We have something similar to this in Xe
for LR VMs already, but I don't really like that interface - it is user
address + write value. This would be based on a BO + offset, which I
think makes a bit more sense and should perform quite a bit better too.
I haven't wired this up in this series but plan on doing it.
2. Convert an input dma-fence into a seqno writeback when the dma-fence
signals. Again, this seqno is something the user can wait on via ring
instructions.
The flow here would be: a user job needs to wait on an external
dma-fence in a syncobj, syncfile, etc., so it calls the convert-dma-fence-
to-user-fence IOCTL before the submission (patches 22 and 28 in this
series), programs the wait via ring instructions, and then does the user
submission. This avoids blocking on external dma-fences in the
submission path.
I think this makes sense, and having a lightweight helper to normalize
this flow across drivers makes sense too.
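For reference, the attach semantics being discussed can be modeled in a
few lines of userspace C (a toy stand-in for dma_fence_add_callback()
and dma_fence_user_fence_attach(); all types and names here are
illustrative, not the kernel API): the -ENOENT return for an
already-signaled fence is handled by running the seqno-writeback
callback immediately, so the allocation is never leaked.

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Toy fence: either already signaled or holding one pending callback. */
struct toy_fence {
	int signaled;
	void (*cb)(struct toy_fence *f, void *data);
	void *cb_data;
};

struct toy_user_fence {
	uint64_t *slot;		/* stands in for the iosys_map target */
	uint64_t seqno;
};

static void user_fence_cb(struct toy_fence *f, void *data)
{
	struct toy_user_fence *uf = data;

	*uf->slot = uf->seqno;	/* seqno writeback visible to user space */
	free(uf);
}

/* Mirrors dma_fence_add_callback(): -ENOENT if already signaled. */
static int toy_add_callback(struct toy_fence *f,
			    void (*cb)(struct toy_fence *, void *),
			    void *data)
{
	if (f->signaled)
		return -ENOENT;
	f->cb = cb;
	f->cb_data = data;
	return 0;
}

static void toy_signal(struct toy_fence *f)
{
	f->signaled = 1;
	if (f->cb)
		f->cb(f, f->cb_data);
}

static void toy_user_fence_attach(struct toy_fence *f, uint64_t *slot,
				  uint64_t seqno)
{
	struct toy_user_fence *uf = malloc(sizeof(*uf));

	if (!uf)
		return;
	uf->slot = slot;
	uf->seqno = seqno;
	/* Key behavior from the patch: if the fence already signaled,
	 * run the callback immediately instead of leaking the allocation. */
	if (toy_add_callback(f, user_fence_cb, uf))
		user_fence_cb(f, uf);
}
```

The real helper carries the extra iosys_map/writeq distinction for iomem targets, which this sketch collapses into a plain pointer store.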
Matt
> Regards,
> Christian.
>
> > +}
> > +EXPORT_SYMBOL(dma_fence_user_fence_free);
> > +
> > +/**
> > + * dma_fence_user_fence_attach() - Attach user fence to dma-fence
> > + *
> > + * @fence: fence
> > + * @user_fence: user fence
> > + * @map: IOSYS map to write seqno to
> > + * @seqno: seqno to write to IOSYS map
> > + *
> > + * Attach a user fence, which is a seqno write to an IOSYS map, to a DMA fence.
> > + * The caller must guarantee that the memory in the IOSYS map doesn't move
> > + * before the fence signals. This is typically done by installing the DMA fence
> > + * into the BO's DMA reservation bookkeeping slot from which the IOSYS was
> > + * derived.
> > + */
> > +void dma_fence_user_fence_attach(struct dma_fence *fence,
> > + struct dma_fence_user_fence *user_fence,
> > + struct iosys_map *map, u64 seqno)
> > +{
> > + int err;
> > +
> > + user_fence->map = *map;
> > + user_fence->seqno = seqno;
> > +
> > + err = dma_fence_add_callback(fence, &user_fence->cb, user_fence_cb);
> > + if (err == -ENOENT)
> > + user_fence_cb(NULL, &user_fence->cb);
> > +}
> > +EXPORT_SYMBOL(dma_fence_user_fence_attach);
> > diff --git a/include/linux/dma-fence-user-fence.h b/include/linux/dma-fence-user-fence.h
> > new file mode 100644
> > index 000000000000..8678129c7d56
> > --- /dev/null
> > +++ b/include/linux/dma-fence-user-fence.h
> > @@ -0,0 +1,31 @@
> > +/* SPDX-License-Identifier: MIT */
> > +/*
> > + * Copyright © 2024 Intel Corporation
> > + */
> > +
> > +#ifndef __LINUX_DMA_FENCE_USER_FENCE_H
> > +#define __LINUX_DMA_FENCE_USER_FENCE_H
> > +
> > +#include <linux/dma-fence.h>
> > +#include <linux/iosys-map.h>
> > +
> > +/** struct dma_fence_user_fence - User fence */
> > +struct dma_fence_user_fence {
> > + /** @cb: dma-fence callback used to attach user fence to dma-fence */
> > + struct dma_fence_cb cb;
> > + /** @map: IOSYS map to write seqno to */
> > + struct iosys_map map;
> > + /** @seqno: seqno to write to IOSYS map */
> > + u64 seqno;
> > +};
> > +
> > +struct dma_fence_user_fence *dma_fence_user_fence_alloc(void);
> > +
> > +void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence);
> > +
> > +void dma_fence_user_fence_attach(struct dma_fence *fence,
> > + struct dma_fence_user_fence *user_fence,
> > + struct iosys_map *map,
> > + u64 seqno);
> > +
> > +#endif
>
* Re: [RFC PATCH 02/29] dma-fence: Add dma_fence_user_fence
2024-11-20 22:50 ` Matthew Brost
@ 2024-11-21 9:31 ` Christian König
2024-11-22 2:35 ` Matthew Brost
0 siblings, 1 reply; 52+ messages in thread
From: Christian König @ 2024-11-21 9:31 UTC (permalink / raw)
To: Matthew Brost
Cc: intel-xe, dri-devel, kenneth.w.graunke, lionel.g.landwerlin,
jose.souza, simona.vetter, thomas.hellstrom, boris.brezillon,
airlied, mihail.atanassov, steven.price, shashank.sharma
Am 20.11.24 um 23:50 schrieb Matthew Brost:
> On Wed, Nov 20, 2024 at 02:38:49PM +0100, Christian König wrote:
>> Am 19.11.24 um 00:37 schrieb Matthew Brost:
>>> Normalize user fence attachment to a DMA fence. A user fence is a simple
>>> seqno write to memory, implemented by attaching a DMA fence callback
>>> that writes out the seqno. Intended use case is importing a dma-fence
>>> into kernel and exporting a user fence.
>>>
>>> Helpers added to allocate, attach, and free a dma_fence_user_fence.
>>>
>>> Cc: Dave Airlie <airlied@redhat.com>
>>> Cc: Simona Vetter <simona.vetter@ffwll.ch>
>>> Cc: Christian Koenig <christian.koenig@amd.com>
>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>> ---
>>> drivers/dma-buf/Makefile | 2 +-
>>> drivers/dma-buf/dma-fence-user-fence.c | 73 ++++++++++++++++++++++++++
>>> include/linux/dma-fence-user-fence.h | 31 +++++++++++
>>> 3 files changed, 105 insertions(+), 1 deletion(-)
>>> create mode 100644 drivers/dma-buf/dma-fence-user-fence.c
>>> create mode 100644 include/linux/dma-fence-user-fence.h
>>>
>>> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
>>> index c25500bb38b5..ba9ba339319e 100644
>>> --- a/drivers/dma-buf/Makefile
>>> +++ b/drivers/dma-buf/Makefile
>>> @@ -1,6 +1,6 @@
>>> # SPDX-License-Identifier: GPL-2.0-only
>>> obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
>>> - dma-fence-preempt.o dma-fence-unwrap.o dma-resv.o
>>> + dma-fence-preempt.o dma-fence-unwrap.o dma-fence-user-fence.o dma-resv.o
>>> obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
>>> obj-$(CONFIG_DMABUF_HEAPS) += heaps/
>>> obj-$(CONFIG_SYNC_FILE) += sync_file.o
>>> diff --git a/drivers/dma-buf/dma-fence-user-fence.c b/drivers/dma-buf/dma-fence-user-fence.c
>>> new file mode 100644
>>> index 000000000000..5a4b289bacb8
>>> --- /dev/null
>>> +++ b/drivers/dma-buf/dma-fence-user-fence.c
>>> @@ -0,0 +1,73 @@
>>> +// SPDX-License-Identifier: MIT
>>> +/*
>>> + * Copyright © 2024 Intel Corporation
>>> + */
>>> +
>>> +#include <linux/dma-fence-user-fence.h>
>>> +#include <linux/slab.h>
>>> +
>>> +static void user_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
>>> +{
>>> + struct dma_fence_user_fence *user_fence =
>>> + container_of(cb, struct dma_fence_user_fence, cb);
>>> +
>>> + if (user_fence->map.is_iomem)
>>> + writeq(user_fence->seqno, user_fence->map.vaddr_iomem);
>>> + else
>>> + *(u64 *)user_fence->map.vaddr = user_fence->seqno;
>>> +
>>> + dma_fence_user_fence_free(user_fence);
>>> +}
>>> +
>>> +/**
>>> + * dma_fence_user_fence_alloc() - Allocate user fence
>>> + *
>>> + * Return: Allocated struct dma_fence_user_fence on Success, NULL on failure
>>> + */
>>> +struct dma_fence_user_fence *dma_fence_user_fence_alloc(void)
>>> +{
>>> + return kmalloc(sizeof(struct dma_fence_user_fence), GFP_KERNEL);
>>> +}
>>> +EXPORT_SYMBOL(dma_fence_user_fence_alloc);
>>> +
>>> +/**
>>> + * dma_fence_user_fence_free() - Free user fence
>>> + *
>>> + * Free user fence. Should only be called on a user fence if
>>> + * dma_fence_user_fence_attach() has not been called, to clean up the original
>>> + * allocation from dma_fence_user_fence_alloc().
>>> + */
>>> +void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence)
>>> +{
>>> + kfree(user_fence);
>> We need to give that child a different name, e.g. something like
>> dma_fence_seq_write or something like that.
>>
> Yea, I didn't like this name either. dma_fence_seq_write seems better.
>
>> I was just about to complain that all dma_fence implementations need to be
>> RCU safe and only then saw that this isn't a dma_fence implementation.
>>
> Nope, just a helper to back a value which user space can find when a
> dma-fence signals.
>
>> Question: Why is that useful in the first place? At least AMD HW can write
>> to basically all memory locations and even registers when a fence finishes?
>>
> This could be used in a few places.
>
> 1. VM bind completion a seqno is written which user jobs can then wait
> on via ring instructions. We have something similar to this is Xe for LR
> VMs already but I don't really like that interface - it is user address
> + write value. This would be based on a BO + offset which I think makes
> a bit more sense and should perform quite a better too. I haven't wired
> this up in this series but plan to doing this.
>
> 2. Convert an input dma-fence into seqno writeback when the dma-fence
> signals. Again this seqno is something the user can wiat on via ring
> instructions.
>
> The flow here would be, a user job needs to wait on external dma-fence
> in a syncobj, syncfile, etc..., call the convert dma-fence to user fence
> IOCTL before the submission (patch 22, 28 in this series), program the
> wait via ring instructions, and then do the user submission. This would
> avoid blocking on external dma-fences in the submission path.
>
> I think this makes sense and having a light weight helper to normalize
> this flow across drivers makes a bit sense too.
Well, we have pretty much the same concept, but all writes are done by
the hardware and don't go through a round-trip to the CPU.
We have a read-only mapped seq64 area in the kernel-reserved part of the
VM address space.
Through this area the queues can see each other's fence progress, and we
can say things like: BO mapping and TLB flush are finished when this
seq64 increases; please suspend further processing until you see that.
Could be that this is useful for more than Xe, but at least for AMD I
currently don't see it.
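The seq64 scheme described here boils down to waiting on a monotonically
increasing counter rather than on a kernel sync object. A minimal
userspace sketch (illustrative names; on real hardware the advance is a
GPU write and the wait is a semaphore-wait ring instruction):

```c
#include <stdint.h>

/* Monotonic counter living in a kernel-reserved, read-only-to-user
 * area of the VM address space. */
static uint64_t seq64;

/* Hardware bumps this when a mapping/TLB-flush step completes. */
static void seq64_advance(void)
{
	seq64++;
}

/* A queue "waits" by comparing against a target value; no kernel
 * round-trip is involved once the target is known. */
static int seq64_passed(uint64_t target)
{
	return seq64 >= target;
}
```

The >= comparison is what makes the scheme robust: a waiter that arrives late still observes completion, since the counter never moves backwards.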
Regards,
Christian.
>
> Matt
>
>> Regards,
>> Christian.
>>
>>> +}
>>> +EXPORT_SYMBOL(dma_fence_user_fence_free);
>>> +
>>> +/**
>>> + * dma_fence_user_fence_attach() - Attach user fence to dma-fence
>>> + *
>>> + * @fence: fence
>>> + * @user_fence: user fence
>>> + * @map: IOSYS map to write seqno to
>>> + * @seqno: seqno to write to IOSYS map
>>> + *
>>> + * Attach a user fence, which is a seqno write to an IOSYS map, to a DMA fence.
>>> + * The caller must guarantee that the memory in the IOSYS map doesn't move
>>> + * before the fence signals. This is typically done by installing the DMA fence
>>> + * into the BO's DMA reservation bookkeeping slot from which the IOSYS was
>>> + * derived.
>>> + */
>>> +void dma_fence_user_fence_attach(struct dma_fence *fence,
>>> + struct dma_fence_user_fence *user_fence,
>>> + struct iosys_map *map, u64 seqno)
>>> +{
>>> + int err;
>>> +
>>> + user_fence->map = *map;
>>> + user_fence->seqno = seqno;
>>> +
>>> + err = dma_fence_add_callback(fence, &user_fence->cb, user_fence_cb);
>>> + if (err == -ENOENT)
>>> + user_fence_cb(NULL, &user_fence->cb);
>>> +}
>>> +EXPORT_SYMBOL(dma_fence_user_fence_attach);
>>> diff --git a/include/linux/dma-fence-user-fence.h b/include/linux/dma-fence-user-fence.h
>>> new file mode 100644
>>> index 000000000000..8678129c7d56
>>> --- /dev/null
>>> +++ b/include/linux/dma-fence-user-fence.h
>>> @@ -0,0 +1,31 @@
>>> +/* SPDX-License-Identifier: MIT */
>>> +/*
>>> + * Copyright © 2024 Intel Corporation
>>> + */
>>> +
>>> +#ifndef __LINUX_DMA_FENCE_USER_FENCE_H
>>> +#define __LINUX_DMA_FENCE_USER_FENCE_H
>>> +
>>> +#include <linux/dma-fence.h>
>>> +#include <linux/iosys-map.h>
>>> +
>>> +/** struct dma_fence_user_fence - User fence */
>>> +struct dma_fence_user_fence {
>>> + /** @cb: dma-fence callback used to attach user fence to dma-fence */
>>> + struct dma_fence_cb cb;
>>> + /** @map: IOSYS map to write seqno to */
>>> + struct iosys_map map;
>>> + /** @seqno: seqno to write to IOSYS map */
>>> + u64 seqno;
>>> +};
>>> +
>>> +struct dma_fence_user_fence *dma_fence_user_fence_alloc(void);
>>> +
>>> +void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence);
>>> +
>>> +void dma_fence_user_fence_attach(struct dma_fence *fence,
>>> + struct dma_fence_user_fence *user_fence,
>>> + struct iosys_map *map,
>>> + u64 seqno);
>>> +
>>> +#endif
* Re: [RFC PATCH 01/29] dma-fence: Add dma_fence_preempt base class
2024-11-20 17:36 ` Matthew Brost
@ 2024-11-21 10:04 ` Christian König
2024-11-21 18:41 ` Matthew Brost
0 siblings, 1 reply; 52+ messages in thread
From: Christian König @ 2024-11-21 10:04 UTC (permalink / raw)
To: Matthew Brost
Cc: intel-xe, dri-devel, kenneth.w.graunke, lionel.g.landwerlin,
jose.souza, simona.vetter, thomas.hellstrom, boris.brezillon,
airlied, mihail.atanassov, steven.price, shashank.sharma
Am 20.11.24 um 18:36 schrieb Matthew Brost:
> On Wed, Nov 20, 2024 at 02:31:50PM +0100, Christian König wrote:
>> Am 19.11.24 um 00:37 schrieb Matthew Brost:
>>> Add a dma_fence_preempt base class with driver ops to implement
>>> preemption, based on the existing Xe preemptive fence implementation.
>>>
>>> Annotated to ensure correct driver usage.
>>>
>>> Cc: Dave Airlie<airlied@redhat.com>
>>> Cc: Simona Vetter<simona.vetter@ffwll.ch>
>>> Cc: Christian Koenig<christian.koenig@amd.com>
>>> Signed-off-by: Matthew Brost<matthew.brost@intel.com>
>>> ---
>>> drivers/dma-buf/Makefile | 2 +-
>>> drivers/dma-buf/dma-fence-preempt.c | 133 ++++++++++++++++++++++++++++
>>> include/linux/dma-fence-preempt.h | 56 ++++++++++++
>>> 3 files changed, 190 insertions(+), 1 deletion(-)
>>> create mode 100644 drivers/dma-buf/dma-fence-preempt.c
>>> create mode 100644 include/linux/dma-fence-preempt.h
>>>
>>> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
>>> index 70ec901edf2c..c25500bb38b5 100644
>>> --- a/drivers/dma-buf/Makefile
>>> +++ b/drivers/dma-buf/Makefile
>>> @@ -1,6 +1,6 @@
>>> # SPDX-License-Identifier: GPL-2.0-only
>>> obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
>>> - dma-fence-unwrap.o dma-resv.o
>>> + dma-fence-preempt.o dma-fence-unwrap.o dma-resv.o
>>> obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
>>> obj-$(CONFIG_DMABUF_HEAPS) += heaps/
>>> obj-$(CONFIG_SYNC_FILE) += sync_file.o
>>> diff --git a/drivers/dma-buf/dma-fence-preempt.c b/drivers/dma-buf/dma-fence-preempt.c
>>> new file mode 100644
>>> index 000000000000..6e6ce7ea7421
>>> --- /dev/null
>>> +++ b/drivers/dma-buf/dma-fence-preempt.c
>>> @@ -0,0 +1,133 @@
>>> +// SPDX-License-Identifier: MIT
>>> +/*
>>> + * Copyright © 2024 Intel Corporation
>>> + */
>>> +
>>> +#include <linux/dma-fence-preempt.h>
>>> +#include <linux/dma-resv.h>
>>> +
>>> +static void dma_fence_preempt_work_func(struct work_struct *w)
>>> +{
>>> + bool cookie = dma_fence_begin_signalling();
>>> + struct dma_fence_preempt *pfence =
>>> + container_of(w, typeof(*pfence), work);
>>> + const struct dma_fence_preempt_ops *ops = pfence->ops;
>>> + int err = pfence->base.error;
>>> +
>>> + if (!err) {
>>> + err = ops->preempt_wait(pfence);
>>> + if (err)
>>> + dma_fence_set_error(&pfence->base, err);
>>> + }
>>> +
>>> + dma_fence_signal(&pfence->base);
>>> + ops->preempt_finished(pfence);
>> Why is that callback useful?
>>
> In Xe, this is where we kick the resume worker and drop a ref to the
> preemption object, which in Xe is an individual queue, and in AMD it is
> a VM, right?
Correct. The whole VM is preempted since we don't know which queue is
using which BO.
> wrt the preemption object, I've reasoned this should work for
> either a per-queue or per-VM driver design of preempt fences.
>
> This part could likely be moved into the preempt_wait callback, though
> it would get a little goofy in the error case: if preempt_wait is not
> called, the driver side would still need to clean up a ref. Maybe I
> don't even need a ref - have to think that through - but for general
> safety we typically like to take refs whenever a fence references a
> different object.
The tricky part is that at least for us we need to do this *before* the
fence is signaled.
This way we can do something like:
retry:
mutex_lock(&lock);
if (dma_fence_is_signaled(preempt_fence)) {
mutex_unlock(&lock);
flush_work(resume_work);
goto retry;
}
To make sure that we have a valid and working VM before we publish the
user fence anywhere and preempting the VM will wait for this user fence
to complete.
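As a sketch, the check-then-flush dance above can be mocked in plain userspace C. This is only an illustration of the control flow, not kernel code: `vm_mock`, `flush_resume_work`, and `publish_user_fence` are made-up names standing in for `dma_fence_is_signaled()`, `flush_work(resume_work)`, and the real publish path (locking elided as comments):

```c
#include <assert.h>
#include <stdbool.h>

/* Mock state standing in for the preempt fence and resume worker. */
struct vm_mock {
	bool preempt_signaled;      /* dma_fence_is_signaled(preempt_fence) */
	bool vm_valid;              /* VM re-validated by the resume worker */
	bool user_fence_published;
};

/* Stand-in for flush_work(resume_work): resume re-validates the VM
 * and re-arms a fresh, unsignaled preempt fence. */
static void flush_resume_work(struct vm_mock *vm)
{
	vm->vm_valid = true;
	vm->preempt_signaled = false;
}

/* The retry loop: never publish a user fence against a preempted VM. */
static void publish_user_fence(struct vm_mock *vm)
{
	for (;;) {
		/* mutex_lock(&lock) would go here */
		if (vm->preempt_signaled) {
			/* mutex_unlock(&lock); flush_work(resume_work); */
			flush_resume_work(vm);
			continue; /* goto retry */
		}
		vm->user_fence_published = true;
		/* mutex_unlock(&lock) */
		return;
	}
}
```

The invariant the loop enforces is that the user fence is only ever published against a resumed, valid VM.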
>
>>> +
>>> + dma_fence_end_signalling(cookie);
>>> +}
>>> +
>>> +static const char *
>>> +dma_fence_preempt_get_driver_name(struct dma_fence *fence)
>>> +{
>>> + return "dma_fence_preempt";
>>> +}
>>> +
>>> +static const char *
>>> +dma_fence_preempt_get_timeline_name(struct dma_fence *fence)
>>> +{
>>> + return "ordered";
>>> +}
>>> +
>>> +static void dma_fence_preempt_issue(struct dma_fence_preempt *pfence)
>>> +{
>>> + int err;
>>> +
>>> + err = pfence->ops->preempt(pfence);
>>> + if (err)
>>> + dma_fence_set_error(&pfence->base, err);
>>> +
>>> + queue_work(pfence->wq, &pfence->work);
>>> +}
>>> +
>>> +static void dma_fence_preempt_cb(struct dma_fence *fence,
>>> + struct dma_fence_cb *cb)
>>> +{
>>> + struct dma_fence_preempt *pfence =
>>> + container_of(cb, typeof(*pfence), cb);
>>> +
>>> + dma_fence_preempt_issue(pfence);
>>> +}
>>> +
>>> +static void dma_fence_preempt_delay(struct dma_fence_preempt *pfence)
>>> +{
>>> + struct dma_fence *fence;
>>> + int err;
>>> +
>>> + fence = pfence->ops->preempt_delay(pfence);
>> Mhm, why is that useful?
>>
> This is for attaching the preempt object's last exported fence which needs
> to be signaled before the preemption is issued. So for purely long
> running VM's, this function could be NULL. For VM's with user queues +
> dma fences, the driver returns the last fence from the convert user
> fence to dma-fence IOCTL.
>
> I realized my kernel doc doesn't explain this as well as it should, I
> have already made this more verbose locally and hopefully it clearly
> explains all of this.
That part was actually obvious. But I would have expected that to be a push
interface instead of a pull interface.
E.g. the preemption fence would also provide something like a manager
object which has a mutex, the last exported user fence and the necessary
functionality to update this user fence.
The tricky part is really to get this dance right between signaling the
preemption fence and not allowing installing a new user fence before the
resume worker has re-created the VM.
>>> + if (WARN_ON_ONCE(!fence || IS_ERR(fence)))
>>> + return;
>>> +
>>> + err = dma_fence_add_callback(fence, &pfence->cb, dma_fence_preempt_cb);
>> You are running into the exactly same bug we had :)
>>
>> The problem here is that you can't call dma_fence_add_callback() from the
>> enable_signaling callback. Background is that the
>> fence_ops->enable_signaling callback is called with the spinlock of the
>> preemption fence held.
>>
>> This spinlock can be the same as the one of the user fence, but it could
>> also be a different one. Either way calling dma_fence_add_callback() would
>> let lockdep print a nice warning.
>>
> Hmm, I see the problem if you share a lock between the preempt fence and
> last exported fence but as long as these locks are separate I don't see
> the issue.
>
> The locking order then is:
>
> preempt fence lock -> last exported fence lock.
You would need to annotate that as nested lock for lockdep and the
dma_fence framework currently doesn't allow that.
> Lockdep does not explode in Xe but maybe I can buy this is a little
> unsafe. We could always move preempt_delay to the worker, attach a CB,
> and rekick the worker upon the last fence signaling if you think that is
> safer. Of course we could always just directly wait on the returned last
> fence in the worker too.
Yeah, that is basically what we do at the moment, since you also need to
make sure that no new user fence is installed while you wait for the
latest one to signal.
Regards,
Christian.
>
>> I tried to solve this by changing the dma_fence code to not call
>> enable_signaling with the lock held, we wanted to do that anyway to prevent
>> a bunch of issues with driver unload. But I realized that getting this
>> upstream would take too long.
>>
>> Long story short we moved handling the user fence into the work item.
>>
> I did run into an issue when trying to make preempt_wait return a
> fence + attach a CB, and signal this preempt fence from the CB. I got
> locking inversions almost worked through them but eventually gave up and
> stuck with the worker.
>
> Matt
>
>> Apart from that looks rather solid to me.
>>
>> Regards,
>> Christian.
>>
>>> + if (err == -ENOENT)
>>> + dma_fence_preempt_issue(pfence);
>>> +}
>>> +
>>> +static bool dma_fence_preempt_enable_signaling(struct dma_fence *fence)
>>> +{
>>> + struct dma_fence_preempt *pfence =
>>> + container_of(fence, typeof(*pfence), base);
>>> +
>>> + if (pfence->ops->preempt_delay)
>>> + dma_fence_preempt_delay(pfence);
>>> + else
>>> + dma_fence_preempt_issue(pfence);
>>> +
>>> + return true;
>>> +}
>>> +
>>> +static const struct dma_fence_ops preempt_fence_ops = {
>>> + .get_driver_name = dma_fence_preempt_get_driver_name,
>>> + .get_timeline_name = dma_fence_preempt_get_timeline_name,
>>> + .enable_signaling = dma_fence_preempt_enable_signaling,
>>> +};
>>> +
>>> +/**
>>> + * dma_fence_is_preempt() - Is preempt fence
>>> + *
>>> + * @fence: Preempt fence
>>> + *
>>> + * Return: True if preempt fence, False otherwise
>>> + */
>>> +bool dma_fence_is_preempt(const struct dma_fence *fence)
>>> +{
>>> + return fence->ops == &preempt_fence_ops;
>>> +}
>>> +EXPORT_SYMBOL(dma_fence_is_preempt);
>>> +
>>> +/**
>>> + * dma_fence_preempt_init() - Initialize a preempt fence
>>> + *
>>> + * @fence: Preempt fence
>>> + * @ops: Preempt fence operations
>>> + * @wq: Work queue for preempt wait, should have WQ_MEM_RECLAIM set
>>> + * @context: Fence context
>>> + * @seqno: Fence sequence number
>>> + */
>>> +void dma_fence_preempt_init(struct dma_fence_preempt *fence,
>>> + const struct dma_fence_preempt_ops *ops,
>>> + struct workqueue_struct *wq,
>>> + u64 context, u64 seqno)
>>> +{
>>> + /*
>>> + * XXX: We really want to check wq for WQ_MEM_RECLAIM here but
>>> + * workqueue_struct is private.
>>> + */
>>> +
>>> + fence->ops = ops;
>>> + fence->wq = wq;
>>> + INIT_WORK(&fence->work, dma_fence_preempt_work_func);
>>> + spin_lock_init(&fence->lock);
>>> + dma_fence_init(&fence->base, &preempt_fence_ops,
>>> + &fence->lock, context, seqno);
>>> +}
>>> +EXPORT_SYMBOL(dma_fence_preempt_init);
>>> diff --git a/include/linux/dma-fence-preempt.h b/include/linux/dma-fence-preempt.h
>>> new file mode 100644
>>> index 000000000000..28d803f89527
>>> --- /dev/null
>>> +++ b/include/linux/dma-fence-preempt.h
>>> @@ -0,0 +1,56 @@
>>> +/* SPDX-License-Identifier: MIT */
>>> +/*
>>> + * Copyright © 2024 Intel Corporation
>>> + */
>>> +
>>> +#ifndef __LINUX_DMA_FENCE_PREEMPT_H
>>> +#define __LINUX_DMA_FENCE_PREEMPT_H
>>> +
>>> +#include <linux/dma-fence.h>
>>> +#include <linux/workqueue.h>
>>> +
>>> +struct dma_fence_preempt;
>>> +struct dma_resv;
>>> +
>>> +/**
>>> + * struct dma_fence_preempt_ops - Preempt fence operations
>>> + *
>>> + * These functions should be implemented in the driver side.
>>> + */
>>> +struct dma_fence_preempt_ops {
>>> + /** @preempt_delay: Preempt execution with a delay */
>>> + struct dma_fence *(*preempt_delay)(struct dma_fence_preempt *fence);
>>> + /** @preempt: Preempt execution */
>>> + int (*preempt)(struct dma_fence_preempt *fence);
>>> + /** @preempt_wait: Wait for preempt of execution to complete */
>>> + int (*preempt_wait)(struct dma_fence_preempt *fence);
>>> + /** @preempt_finished: Signal that the preempt has finished */
>>> + void (*preempt_finished)(struct dma_fence_preempt *fence);
>>> +};
>>> +
>>> +/**
>>> + * struct dma_fence_preempt - Embedded preempt fence base class
>>> + */
>>> +struct dma_fence_preempt {
>>> + /** @base: Fence base class */
>>> + struct dma_fence base;
>>> + /** @lock: Spinlock for fence handling */
>>> + spinlock_t lock;
>>> + /** @cb: Callback preempt delay */
>>> + struct dma_fence_cb cb;
>>> + /** @ops: Preempt fence operation */
>>> + const struct dma_fence_preempt_ops *ops;
>>> + /** @wq: Work queue for preempt wait */
>>> + struct workqueue_struct *wq;
>>> + /** @work: Work struct for preempt wait */
>>> + struct work_struct work;
>>> +};
>>> +
>>> +bool dma_fence_is_preempt(const struct dma_fence *fence);
>>> +
>>> +void dma_fence_preempt_init(struct dma_fence_preempt *fence,
>>> + const struct dma_fence_preempt_ops *ops,
>>> + struct workqueue_struct *wq,
>>> + u64 context, u64 seqno);
>>> +
>>> +#endif
^ permalink raw reply [flat|nested] 52+ messages in thread
* Re: [RFC PATCH 01/29] dma-fence: Add dma_fence_preempt base class
2024-11-21 10:04 ` Christian König
@ 2024-11-21 18:41 ` Matthew Brost
2024-11-22 10:56 ` Christian König
0 siblings, 1 reply; 52+ messages in thread
From: Matthew Brost @ 2024-11-21 18:41 UTC (permalink / raw)
To: Christian König
Cc: intel-xe, dri-devel, kenneth.w.graunke, lionel.g.landwerlin,
jose.souza, simona.vetter, thomas.hellstrom, boris.brezillon,
airlied, mihail.atanassov, steven.price, shashank.sharma
On Thu, Nov 21, 2024 at 11:04:47AM +0100, Christian König wrote:
> Am 20.11.24 um 18:36 schrieb Matthew Brost:
> > On Wed, Nov 20, 2024 at 02:31:50PM +0100, Christian König wrote:
> > > Am 19.11.24 um 00:37 schrieb Matthew Brost:
> > > > Add a dma_fence_preempt base class with driver ops to implement
> > > > preemption, based on the existing Xe preemptive fence implementation.
> > > >
> > > > Annotated to ensure correct driver usage.
> > > >
> > > > Cc: Dave Airlie<airlied@redhat.com>
> > > > Cc: Simona Vetter<simona.vetter@ffwll.ch>
> > > > Cc: Christian Koenig<christian.koenig@amd.com>
> > > > Signed-off-by: Matthew Brost<matthew.brost@intel.com>
> > > > ---
> > > > drivers/dma-buf/Makefile | 2 +-
> > > > drivers/dma-buf/dma-fence-preempt.c | 133 ++++++++++++++++++++++++++++
> > > > include/linux/dma-fence-preempt.h | 56 ++++++++++++
> > > > 3 files changed, 190 insertions(+), 1 deletion(-)
> > > > create mode 100644 drivers/dma-buf/dma-fence-preempt.c
> > > > create mode 100644 include/linux/dma-fence-preempt.h
> > > >
> > > > diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> > > > index 70ec901edf2c..c25500bb38b5 100644
> > > > --- a/drivers/dma-buf/Makefile
> > > > +++ b/drivers/dma-buf/Makefile
> > > > @@ -1,6 +1,6 @@
> > > > # SPDX-License-Identifier: GPL-2.0-only
> > > > obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
> > > > - dma-fence-unwrap.o dma-resv.o
> > > > + dma-fence-preempt.o dma-fence-unwrap.o dma-resv.o
> > > > obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
> > > > obj-$(CONFIG_DMABUF_HEAPS) += heaps/
> > > > obj-$(CONFIG_SYNC_FILE) += sync_file.o
> > > > diff --git a/drivers/dma-buf/dma-fence-preempt.c b/drivers/dma-buf/dma-fence-preempt.c
> > > > new file mode 100644
> > > > index 000000000000..6e6ce7ea7421
> > > > --- /dev/null
> > > > +++ b/drivers/dma-buf/dma-fence-preempt.c
> > > > @@ -0,0 +1,133 @@
> > > > +// SPDX-License-Identifier: MIT
> > > > +/*
> > > > + * Copyright © 2024 Intel Corporation
> > > > + */
> > > > +
> > > > +#include <linux/dma-fence-preempt.h>
> > > > +#include <linux/dma-resv.h>
> > > > +
> > > > +static void dma_fence_preempt_work_func(struct work_struct *w)
> > > > +{
> > > > + bool cookie = dma_fence_begin_signalling();
> > > > + struct dma_fence_preempt *pfence =
> > > > + container_of(w, typeof(*pfence), work);
> > > > + const struct dma_fence_preempt_ops *ops = pfence->ops;
> > > > + int err = pfence->base.error;
> > > > +
> > > > + if (!err) {
> > > > + err = ops->preempt_wait(pfence);
> > > > + if (err)
> > > > + dma_fence_set_error(&pfence->base, err);
> > > > + }
> > > > +
> > > > + dma_fence_signal(&pfence->base);
> > > > + ops->preempt_finished(pfence);
> > > Why is that callback useful?
> > >
> > In Xe, this is where we kick the resume worker and drop a ref to the
> > preemption object, which in Xe is an individual queue, and in AMD it is
> > a VM, right?
>
> Correct. The whole VM is preempted since we don't know which queue is using
> which BO.
>
Right. In Xe we don't know which queue is using a BO either - we trigger
all queues' preempt fences attached to the VM, effectively preempting the
entire VM. The point here is that a per-VM or per-queue preempt fence is a
driver choice (I can see arguments for both cases), and any base class
shouldn't dictate what a driver wants to do.
> > wrt preemption object, I've reasoned this should work for
> > either a per-queue or per-VM driver design of preempt fences.
> >
> > This part likely could be moved into the preempt_wait callback though
> > but would get a little goofy in the error case if preempt_wait is not
> > called as the driver side would still need to cleanup a ref. Maybe I
> > don't even need a ref though - have to think that through - but for
> > general safety we typically like to take refs whenever a fence
> > references a different object.
>
> The tricky part is that at least for us we need to do this *before* the
> fence is signaled.
Hmm, I'm a little confused by this. Do you think the code as-is is missing
something, or are you opposed to keeping the preempt_finished vfunc?
>
> This way we can do something like:
>
> retry:
> mutex_lock(&lock);
> if (dma_fence_is_signaled(preempt_fence)) {
> mutex_unlock(&lock);
> flush_work(resume_work);
> goto retry;
> }
>
This snippet is from your convert-user-fence-to-dma-fence IOCTL? I think
this makes sense given your design of that IOCTL not actually doing a
resume - I landed on making that IOCTL basically another version of the
resume worker for simplicity, but that may change if we find locking the
entire VM is too costly.
>
> To make sure that we have a valid and working VM before we publish the user
> fence anywhere and preempting the VM will wait for this user fence to
> complete.
>
Agree the VM needs to be in a valid state before publishing a user fence
as a DMA fence, given a resume requires memory allocations, thus breaking
DMA fencing rules.
> >
> > > > +
> > > > + dma_fence_end_signalling(cookie);
> > > > +}
> > > > +
> > > > +static const char *
> > > > +dma_fence_preempt_get_driver_name(struct dma_fence *fence)
> > > > +{
> > > > + return "dma_fence_preempt";
> > > > +}
> > > > +
> > > > +static const char *
> > > > +dma_fence_preempt_get_timeline_name(struct dma_fence *fence)
> > > > +{
> > > > + return "ordered";
> > > > +}
> > > > +
> > > > +static void dma_fence_preempt_issue(struct dma_fence_preempt *pfence)
> > > > +{
> > > > + int err;
> > > > +
> > > > + err = pfence->ops->preempt(pfence);
> > > > + if (err)
> > > > + dma_fence_set_error(&pfence->base, err);
> > > > +
> > > > + queue_work(pfence->wq, &pfence->work);
> > > > +}
> > > > +
> > > > +static void dma_fence_preempt_cb(struct dma_fence *fence,
> > > > + struct dma_fence_cb *cb)
> > > > +{
> > > > + struct dma_fence_preempt *pfence =
> > > > + container_of(cb, typeof(*pfence), cb);
> > > > +
> > > > + dma_fence_preempt_issue(pfence);
> > > > +}
> > > > +
> > > > +static void dma_fence_preempt_delay(struct dma_fence_preempt *pfence)
> > > > +{
> > > > + struct dma_fence *fence;
> > > > + int err;
> > > > +
> > > > + fence = pfence->ops->preempt_delay(pfence);
> > > Mhm, why is that useful?
> > >
> > This is for attaching the preempt object's last exported fence which needs
> > to be signaled before the preemption is issued. So for purely long
> > running VM's, this function could be NULL. For VM's with user queues +
> > dma fences, the driver returns the last fence from the convert user
> > fence to dma-fence IOCTL.
> >
> > I realized my kernel doc doesn't explain this as well as it should, I
> > have already made this more verbose locally and hopefully it clearly
> > explains all of this.
>
> That part was actually obvious. But I would expected that to be push
> interface instead of a pull interface.
>
> E.g. the preemption fence would also provide something like a manager object
> which has a mutex, the last exported user fence and the necessary
> functionality to update this user fence.
>
Hmm, I rather like the pull interface. In Xe this is a dma-fence chain
attached to the VM. It is safe to pull given our convert IOCTL takes the
VM's dma-resv locks / notifier locks before publishing the user fence.

In your design, couldn't you use a spin lock in the last step of
publishing a user fence which checks for SW signaling on the preempt
fence, and if it is enabled, restart the IOCTL waiting on the resume
worker? Then in this vfunc, pull the fence under the spin lock?

Not opposed to a push interface though if you really think that is the way
to go. Quite certain I could make that work for Xe too.
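The pull-vs-push distinction can be sketched in plain C. This is a userspace mock with illustrative names only (`pull_iface`, `push_mgr`, `get_last_fence` are not proposed API); the pull side models the `preempt_delay` vfunc asking the driver for the last exported fence at preempt time, while the push side models a manager object the driver updates on every new user fence export (mutex elided):

```c
#include <assert.h>
#include <stddef.h>

struct fence { int seqno; };

/* Pull: the preempt fence asks the driver for the last exported
 * fence at preempt time via a callback (the preempt_delay vfunc). */
struct pull_iface {
	struct fence *(*get_last_fence)(void *driver_data);
	void *driver_data;
};

/* Push: the driver updates a manager object owned by the preempt
 * fence every time a new user fence is exported. A mutex (elided
 * here) would serialize updates against preemption. */
struct push_mgr {
	struct fence *last_fence;
};

static void push_update(struct push_mgr *mgr, struct fence *f)
{
	mgr->last_fence = f;
}

/* Example pull callback: e.g. the tail of a dma-fence chain on the VM. */
static struct fence *pull_cb(void *data)
{
	return data;
}
```

Either way the preempt path ends up with the same fence; the difference is which side owns the synchronization around updating it.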
> The tricky part is really to get this dance right between signaling the
> preemption fence and not allowing installing a new user fence before the
> resume worker has re-created the VM.
>
Yes, indeed this is tricky.
> > > > + if (WARN_ON_ONCE(!fence || IS_ERR(fence)))
> > > > + return;
> > > > +
> > > > + err = dma_fence_add_callback(fence, &pfence->cb, dma_fence_preempt_cb);
> > > You are running into the exactly same bug we had :)
> > >
> > > The problem here is that you can't call dma_fence_add_callback() from the
> > > enable_signaling callback. Background is that the
> > > fence_ops->enable_signaling callback is called with the spinlock of the
> > > preemption fence held.
> > >
> > > This spinlock can be the same as the one of the user fence, but it could
> > > also be a different one. Either way calling dma_fence_add_callback() would
> > > let lockdep print a nice warning.
> > >
> > Hmm, I see the problem if you share a lock between the preempt fence and
> > last exported fence but as long as these locks are separate I don't see
> > the issue.
> >
> > The locking order then is:
> >
> > preempt fence lock -> last exported fence lock.
>
> You would need to annotate that as nested lock for lockdep and the dma_fence
> framework currently doesn't allow that.
>
This definitely works as is - I've tested this. If the dma-fence's lock
were embedded within the dma-fence, then of course lockdep would complain
without nesting annotations. It isn't though - the spin lock is passed in
as an argument, so lockdep can identify 'preempt fence lock' and 'last
exported fence lock' as independent lock classes.
> > Lockdep does not explode in Xe but maybe I can buy this is a little
> > unsafe. We could always move preempt_delay to the worker, attach a CB,
> > and rekick the worker upon the last fence signaling if you think that is
> > safer. Of course we could always just directly wait on the returned last
> > fence in the worker too.
>
> Yeah, that is basically what we do at the moment, since you also need to
> make sure that no new user fence is installed while you wait for the
> latest one to signal.
>
After I typed this I realized waiting on the 'last fence' in the worker is
a no-go given we want to pipeline preemptions in Xe (e.g., issue all
queues' preemption commands to the firmware in parallel, as these are
async operations which may be fast in some cases and slow in others). I
think having the preempt vfunc invoked directly from a dma-fence CB is a
must.
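The pipelining argument in miniature: with the preempt vfunc issued up front, all N firmware round-trips are in flight before the first wait, instead of serializing issue/wait per queue. A toy single-threaded mock (counters stand in for the async firmware commands; nothing here is real driver code):

```c
#include <assert.h>

#define NQ 4 /* number of queues on the VM */

static int issue_order[NQ], wait_order[NQ];

/* Pipelined shape: all ops->preempt() calls (async commands to the
 * firmware) are issued first, then all ops->preempt_wait() calls,
 * so slow and fast preemptions overlap instead of adding up. */
static void preempt_all_pipelined(void)
{
	int step = 0;
	int i;

	for (i = 0; i < NQ; i++)
		issue_order[i] = step++;  /* ops->preempt(), async */
	for (i = 0; i < NQ; i++)
		wait_order[i] = step++;   /* ops->preempt_wait() */
}
```

The property worth asserting is that every issue happens before any wait, which is what waiting on the 'last fence' inside the worker would break.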
Matt
> Regards,
> Christian.
>
> >
> > > I tried to solve this by changing the dma_fence code to not call
> > > enable_signaling with the lock held, we wanted to do that anyway to prevent
> > > a bunch of issues with driver unload. But I realized that getting this
> > > upstream would take too long.
> > >
> > > Long story short we moved handling the user fence into the work item.
> > >
> > I did run into an issue when trying to make preempt_wait return a
> > fence + attach a CB, and signal this preempt fence from the CB. I got
> > locking inversions almost worked through them but eventually gave up and
> > stuck with the worker.
> >
> > Matt
> >
> > > Apart from that looks rather solid to me.
> > >
> > > Regards,
> > > Christian.
> > >
> > > > + if (err == -ENOENT)
> > > > + dma_fence_preempt_issue(pfence);
> > > > +}
> > > > +
> > > > +static bool dma_fence_preempt_enable_signaling(struct dma_fence *fence)
> > > > +{
> > > > + struct dma_fence_preempt *pfence =
> > > > + container_of(fence, typeof(*pfence), base);
> > > > +
> > > > + if (pfence->ops->preempt_delay)
> > > > + dma_fence_preempt_delay(pfence);
> > > > + else
> > > > + dma_fence_preempt_issue(pfence);
> > > > +
> > > > + return true;
> > > > +}
> > > > +
> > > > +static const struct dma_fence_ops preempt_fence_ops = {
> > > > + .get_driver_name = dma_fence_preempt_get_driver_name,
> > > > + .get_timeline_name = dma_fence_preempt_get_timeline_name,
> > > > + .enable_signaling = dma_fence_preempt_enable_signaling,
> > > > +};
> > > > +
> > > > +/**
> > > > + * dma_fence_is_preempt() - Is preempt fence
> > > > + *
> > > > + * @fence: Preempt fence
> > > > + *
> > > > + * Return: True if preempt fence, False otherwise
> > > > + */
> > > > +bool dma_fence_is_preempt(const struct dma_fence *fence)
> > > > +{
> > > > + return fence->ops == &preempt_fence_ops;
> > > > +}
> > > > +EXPORT_SYMBOL(dma_fence_is_preempt);
> > > > +
> > > > +/**
> > > > + * dma_fence_preempt_init() - Initialize a preempt fence
> > > > + *
> > > > + * @fence: Preempt fence
> > > > + * @ops: Preempt fence operations
> > > > + * @wq: Work queue for preempt wait, should have WQ_MEM_RECLAIM set
> > > > + * @context: Fence context
> > > > + * @seqno: Fence sequence number
> > > > + */
> > > > +void dma_fence_preempt_init(struct dma_fence_preempt *fence,
> > > > + const struct dma_fence_preempt_ops *ops,
> > > > + struct workqueue_struct *wq,
> > > > + u64 context, u64 seqno)
> > > > +{
> > > > + /*
> > > > + * XXX: We really want to check wq for WQ_MEM_RECLAIM here but
> > > > + * workqueue_struct is private.
> > > > + */
> > > > +
> > > > + fence->ops = ops;
> > > > + fence->wq = wq;
> > > > + INIT_WORK(&fence->work, dma_fence_preempt_work_func);
> > > > + spin_lock_init(&fence->lock);
> > > > + dma_fence_init(&fence->base, &preempt_fence_ops,
> > > > + &fence->lock, context, seqno);
> > > > +}
> > > > +EXPORT_SYMBOL(dma_fence_preempt_init);
> > > > diff --git a/include/linux/dma-fence-preempt.h b/include/linux/dma-fence-preempt.h
> > > > new file mode 100644
> > > > index 000000000000..28d803f89527
> > > > --- /dev/null
> > > > +++ b/include/linux/dma-fence-preempt.h
> > > > @@ -0,0 +1,56 @@
> > > > +/* SPDX-License-Identifier: MIT */
> > > > +/*
> > > > + * Copyright © 2024 Intel Corporation
> > > > + */
> > > > +
> > > > +#ifndef __LINUX_DMA_FENCE_PREEMPT_H
> > > > +#define __LINUX_DMA_FENCE_PREEMPT_H
> > > > +
> > > > +#include <linux/dma-fence.h>
> > > > +#include <linux/workqueue.h>
> > > > +
> > > > +struct dma_fence_preempt;
> > > > +struct dma_resv;
> > > > +
> > > > +/**
> > > > + * struct dma_fence_preempt_ops - Preempt fence operations
> > > > + *
> > > > + * These functions should be implemented in the driver side.
> > > > + */
> > > > +struct dma_fence_preempt_ops {
> > > > + /** @preempt_delay: Preempt execution with a delay */
> > > > + struct dma_fence *(*preempt_delay)(struct dma_fence_preempt *fence);
> > > > + /** @preempt: Preempt execution */
> > > > + int (*preempt)(struct dma_fence_preempt *fence);
> > > > + /** @preempt_wait: Wait for preempt of execution to complete */
> > > > + int (*preempt_wait)(struct dma_fence_preempt *fence);
> > > > + /** @preempt_finished: Signal that the preempt has finished */
> > > > + void (*preempt_finished)(struct dma_fence_preempt *fence);
> > > > +};
> > > > +
> > > > +/**
> > > > + * struct dma_fence_preempt - Embedded preempt fence base class
> > > > + */
> > > > +struct dma_fence_preempt {
> > > > + /** @base: Fence base class */
> > > > + struct dma_fence base;
> > > > + /** @lock: Spinlock for fence handling */
> > > > + spinlock_t lock;
> > > > + /** @cb: Callback preempt delay */
> > > > + struct dma_fence_cb cb;
> > > > + /** @ops: Preempt fence operation */
> > > > + const struct dma_fence_preempt_ops *ops;
> > > > + /** @wq: Work queue for preempt wait */
> > > > + struct workqueue_struct *wq;
> > > > + /** @work: Work struct for preempt wait */
> > > > + struct work_struct work;
> > > > +};
> > > > +
> > > > +bool dma_fence_is_preempt(const struct dma_fence *fence);
> > > > +
> > > > +void dma_fence_preempt_init(struct dma_fence_preempt *fence,
> > > > + const struct dma_fence_preempt_ops *ops,
> > > > + struct workqueue_struct *wq,
> > > > + u64 context, u64 seqno);
> > > > +
> > > > +#endif
* Re: [RFC PATCH 02/29] dma-fence: Add dma_fence_user_fence
2024-11-21 9:31 ` Christian König
@ 2024-11-22 2:35 ` Matthew Brost
2024-11-22 10:28 ` Christian König
0 siblings, 1 reply; 52+ messages in thread
From: Matthew Brost @ 2024-11-22 2:35 UTC (permalink / raw)
To: Christian König
Cc: intel-xe, dri-devel, kenneth.w.graunke, lionel.g.landwerlin,
jose.souza, simona.vetter, thomas.hellstrom, boris.brezillon,
airlied, mihail.atanassov, steven.price, shashank.sharma
On Thu, Nov 21, 2024 at 10:31:16AM +0100, Christian König wrote:
> Am 20.11.24 um 23:50 schrieb Matthew Brost:
> > On Wed, Nov 20, 2024 at 02:38:49PM +0100, Christian König wrote:
> > > Am 19.11.24 um 00:37 schrieb Matthew Brost:
> > > > Normalize user fence attachment to a DMA fence. A user fence is a simple
> > > > seqno write to memory, implemented by attaching a DMA fence callback
> > > > that writes out the seqno. Intended use case is importing a dma-fence
> > > > into kernel and exporting a user fence.
> > > >
> > > > Helpers added to allocate, attach, and free a dma_fence_user_fence.
> > > >
> > > > Cc: Dave Airlie <airlied@redhat.com>
> > > > Cc: Simona Vetter <simona.vetter@ffwll.ch>
> > > > Cc: Christian Koenig <christian.koenig@amd.com>
> > > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > > ---
> > > > drivers/dma-buf/Makefile | 2 +-
> > > > drivers/dma-buf/dma-fence-user-fence.c | 73 ++++++++++++++++++++++++++
> > > > include/linux/dma-fence-user-fence.h | 31 +++++++++++
> > > > 3 files changed, 105 insertions(+), 1 deletion(-)
> > > > create mode 100644 drivers/dma-buf/dma-fence-user-fence.c
> > > > create mode 100644 include/linux/dma-fence-user-fence.h
> > > >
> > > > diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> > > > index c25500bb38b5..ba9ba339319e 100644
> > > > --- a/drivers/dma-buf/Makefile
> > > > +++ b/drivers/dma-buf/Makefile
> > > > @@ -1,6 +1,6 @@
> > > > # SPDX-License-Identifier: GPL-2.0-only
> > > > obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
> > > > - dma-fence-preempt.o dma-fence-unwrap.o dma-resv.o
> > > > + dma-fence-preempt.o dma-fence-unwrap.o dma-fence-user-fence.o dma-resv.o
> > > > obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
> > > > obj-$(CONFIG_DMABUF_HEAPS) += heaps/
> > > > obj-$(CONFIG_SYNC_FILE) += sync_file.o
> > > > diff --git a/drivers/dma-buf/dma-fence-user-fence.c b/drivers/dma-buf/dma-fence-user-fence.c
> > > > new file mode 100644
> > > > index 000000000000..5a4b289bacb8
> > > > --- /dev/null
> > > > +++ b/drivers/dma-buf/dma-fence-user-fence.c
> > > > @@ -0,0 +1,73 @@
> > > > +// SPDX-License-Identifier: MIT
> > > > +/*
> > > > + * Copyright © 2024 Intel Corporation
> > > > + */
> > > > +
> > > > +#include <linux/dma-fence-user-fence.h>
> > > > +#include <linux/slab.h>
> > > > +
> > > > +static void user_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
> > > > +{
> > > > + struct dma_fence_user_fence *user_fence =
> > > > + container_of(cb, struct dma_fence_user_fence, cb);
> > > > +
> > > > + if (user_fence->map.is_iomem)
> > > > + writeq(user_fence->seqno, user_fence->map.vaddr_iomem);
> > > > + else
> > > > + *(u64 *)user_fence->map.vaddr = user_fence->seqno;
> > > > +
> > > > + dma_fence_user_fence_free(user_fence);
> > > > +}
> > > > +
> > > > +/**
> > > > + * dma_fence_user_fence_alloc() - Allocate user fence
> > > > + *
> > > > + * Return: Allocated struct dma_fence_user_fence on Success, NULL on failure
> > > > + */
> > > > +struct dma_fence_user_fence *dma_fence_user_fence_alloc(void)
> > > > +{
> > > > + return kmalloc(sizeof(struct dma_fence_user_fence), GFP_KERNEL);
> > > > +}
> > > > +EXPORT_SYMBOL(dma_fence_user_fence_alloc);
> > > > +
> > > > +/**
> > > > + * dma_fence_user_fence_free() - Free user fence
> > > > + *
> > > > + * Free user fence. Should only be called on a user fence if
> > > > + * dma_fence_user_fence_attach is not called to cleanup original allocation from
> > > > + * dma_fence_user_fence_alloc.
> > > > + */
> > > > +void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence)
> > > > +{
> > > > + kfree(user_fence);
> > > We need to give that child a different name, e.g. something like
> > > dma_fence_seq_write or something like that.
> > >
> > Yea, I didn't like this name either. dma_fence_seq_write seems better.
> >
> > > I was just about to complain that all dma_fence implementations need to be
> > > RCU save and only then saw that this isn't a dma_fence implementation.
> > >
> > Nope, just a helper to back a value which user space can find when a
> > dma-fence signals.
> >
> > > Question: Why is that useful in the first place? At least AMD HW can write
> > > to basically all memory locations and even registers when a fence finishes?
> > >
> > This could be used in a few places.
> >
> > 1. VM bind completion a seqno is written which user jobs can then wait
> > on via ring instructions. We have something similar to this is Xe for LR
> > VMs already but I don't really like that interface - it is user address
> > + write value. This would be based on a BO + offset which I think makes
> > a bit more sense and should perform quite a better too. I haven't wired
> > this up in this series but plan to doing this.
> >
> > 2. Convert an input dma-fence into seqno writeback when the dma-fence
> > signals. Again this seqno is something the user can wait on via ring
> > instructions.
> >
> > The flow here would be, a user job needs to wait on external dma-fence
> > in a syncobj, syncfile, etc..., call the convert dma-fence to user fence
> > IOCTL before the submission (patch 22, 28 in this series), program the
> > wait via ring instructions, and then do the user submission. This would
> > avoid blocking on external dma-fences in the submission path.
> >
> > I think this makes sense and having a lightweight helper to normalize
> > this flow across drivers makes a bit of sense too.
>
> Well we have pretty much the same concept, but all writes are done by the
> hardware and not go by a round-trip through the CPU.
>
Hmm, I'm curious how that works on your end. Doesn't the DMA fence
signaling have to go through the kernel?
Yes, of course, in Xe we program seqno writes through the GPU when we
can, but our bind code currently opportunistically bypasses the GPU.
Eventually, I think it will become a 100% CPU operation for various
reasons. Likewise, if a fence is coming from an external process, there
is no GPU job to write the seqno. Of course, we could issue a GPU job to
write the seqno, but this would add latency. In the case of VM bind, we
really want to completely decouple that from the GPU for various reasons
(I can explain why if needed, but it's kind of off-topic).
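The CPU-side writeback being discussed boils down to the `user_fence_cb` in patch 2: when the backing DMA fence signals, the kernel writes a seqno to a memory slot user space can wait on with ring semaphore instructions. A userspace mock of just that step (the iomem branch, iosys_map, and fence plumbing are elided; `user_fence_mock` is an illustrative name):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors struct dma_fence_user_fence: a target slot plus the
 * value to publish when the backing dma-fence signals. */
struct user_fence_mock {
	uint64_t *vaddr;  /* map.vaddr in the patch */
	uint64_t seqno;
};

/* Mirrors the body of user_fence_cb(): on fence signal, write the
 * seqno so user space sees the fence completion as a memory value. */
static void user_fence_signal(struct user_fence_mock *uf)
{
	*uf->vaddr = uf->seqno;
}
```

The real callback additionally frees the `dma_fence_user_fence` and handles iomem mappings via `writeq()`.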
> We have a read only mapped seq64 area in the kernel reserved part of the VM
> address space.
>
> Through this area the queues can see each others fence progress and we can
> say things like BO mapping and TLB flush are finished when this seq64
> increases please suspend further processing until you see that.
>
> Could be that this is useful for more than Xe, but at least for AMD I
> currently don't see that.
>
Ok, we have no other current users, and if you feel it is better to
carry this in Xe in a way that it can be moved to the common layer
later, there’s no issue with that. We have several other components like
this in Xe that are generic but currently live in Xe.
Matt
> Regards,
> Christian.
>
> >
> > Matt
> >
> > > Regards,
> > > Christian.
> > >
> > > > +}
> > > > +EXPORT_SYMBOL(dma_fence_user_fence_free);
> > > > +
> > > > +/**
> > > > + * dma_fence_user_fence_attach() - Attach user fence to dma-fence
> > > > + *
> > > > + * @fence: fence
> > > > + * @user_fence: user fence
> > > > + * @map: IOSYS map to write seqno to
> > > > + * @seqno: seqno to write to IOSYS map
> > > > + *
> > > > + * Attach a user fence, which is a seqno write to an IOSYS map, to a DMA fence.
> > > > + * The caller must guarantee that the memory in the IOSYS map doesn't move
> > > > + * before the fence signals. This is typically done by installing the DMA fence
> > > > + * into the BO's DMA reservation bookkeeping slot from which the IOSYS map
> > > > + * was derived.
> > > > + */
> > > > +void dma_fence_user_fence_attach(struct dma_fence *fence,
> > > > + struct dma_fence_user_fence *user_fence,
> > > > + struct iosys_map *map, u64 seqno)
> > > > +{
> > > > + int err;
> > > > +
> > > > + user_fence->map = *map;
> > > > + user_fence->seqno = seqno;
> > > > +
> > > > + err = dma_fence_add_callback(fence, &user_fence->cb, user_fence_cb);
> > > > + if (err == -ENOENT)
> > > > + user_fence_cb(NULL, &user_fence->cb);
> > > > +}
> > > > +EXPORT_SYMBOL(dma_fence_user_fence_attach);
> > > > diff --git a/include/linux/dma-fence-user-fence.h b/include/linux/dma-fence-user-fence.h
> > > > new file mode 100644
> > > > index 000000000000..8678129c7d56
> > > > --- /dev/null
> > > > +++ b/include/linux/dma-fence-user-fence.h
> > > > @@ -0,0 +1,31 @@
> > > > +/* SPDX-License-Identifier: MIT */
> > > > +/*
> > > > + * Copyright © 2024 Intel Corporation
> > > > + */
> > > > +
> > > > +#ifndef __LINUX_DMA_FENCE_USER_FENCE_H
> > > > +#define __LINUX_DMA_FENCE_USER_FENCE_H
> > > > +
> > > > +#include <linux/dma-fence.h>
> > > > +#include <linux/iosys-map.h>
> > > > +
> > > > +/** struct dma_fence_user_fence - User fence */
> > > > +struct dma_fence_user_fence {
> > > > + /** @cb: dma-fence callback used to attach user fence to dma-fence */
> > > > + struct dma_fence_cb cb;
> > > > + /** @map: IOSYS map to write seqno to */
> > > > + struct iosys_map map;
> > > > + /** @seqno: seqno to write to IOSYS map */
> > > > + u64 seqno;
> > > > +};
> > > > +
> > > > +struct dma_fence_user_fence *dma_fence_user_fence_alloc(void);
> > > > +
> > > > +void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence);
> > > > +
> > > > +void dma_fence_user_fence_attach(struct dma_fence *fence,
> > > > + struct dma_fence_user_fence *user_fence,
> > > > + struct iosys_map *map,
> > > > + u64 seqno);
> > > > +
> > > > +#endif
>
* Re: [RFC PATCH 02/29] dma-fence: Add dma_fence_user_fence
2024-11-22 2:35 ` Matthew Brost
@ 2024-11-22 10:28 ` Christian König
0 siblings, 0 replies; 52+ messages in thread
From: Christian König @ 2024-11-22 10:28 UTC (permalink / raw)
To: Matthew Brost
Cc: intel-xe, dri-devel, kenneth.w.graunke, lionel.g.landwerlin,
jose.souza, simona.vetter, thomas.hellstrom, boris.brezillon,
airlied, mihail.atanassov, steven.price, shashank.sharma
Am 22.11.24 um 03:35 schrieb Matthew Brost:
> [SNIP]
>>> The flow here would be: a user job needs to wait on an external dma-fence
>>> in a syncobj, syncfile, etc.; call the convert-dma-fence-to-user-fence
>>> IOCTL before the submission (patches 22 and 28 in this series), program the
>>> wait via ring instructions, and then do the user submission. This would
>>> avoid blocking on external dma-fences in the submission path.
>>>
>>> I think this makes sense, and having a lightweight helper to normalize
>>> this flow across drivers makes sense too.
>> Well, we have pretty much the same concept, but all writes are done by the
>> hardware and don't go through a round trip on the CPU.
>>
> Hmm, I'm curious how that works on your end. Doesn't the DMA fence
> signaling have to go through the kernel?
No, we have a protected_fence packet which basically writes the current
processing status (RPTR) into a location defined by the kernel driver.
So neither the value nor the location of the write can be manipulated by
userspace.
This way queues can signal their status to each other without going through
a CPU round trip or writing into a shared memory location. Writing into a
memory location can probably be done by any hardware, but that usually has
tons of scheduling implications, e.g. priority inversion.
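As a rough illustration, the scheme of queues watching a kernel-controlled progress counter (seq64/RPTR-style) can be modeled in a few lines of user-space C. `hw_tick()` and `wait_seq64()` are invented names, and the single-threaded "hardware" is obviously a simplification of what actually advances the counter asynchronously:

```c
/* Toy single-threaded model of queues observing a progress counter that
 * only the kernel/hardware side can write. Not AMD's implementation. */
#include <assert.h>
#include <stdint.h>

static uint64_t seq64;		/* read-only to queues, written by "HW" */

static void hw_tick(void)	/* e.g. a BO mapping + TLB flush completes */
{
	seq64++;
}

/* "Queue" side: suspend further processing until the progress is visible. */
static uint64_t wait_seq64(uint64_t threshold)
{
	while (seq64 < threshold)
		hw_tick();	/* in reality: HW advances this asynchronously */
	return seq64;
}
```

Because the counter is mapped read-only into the queues' address space, userspace can observe progress but never forge it, which is the property the protected write gives.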
> Yes, of course, in Xe we program seqno writes through the GPU when we
> can, but our bind code currently opportunistically bypasses the GPU.
> Eventually, I think it will become a 100% CPU operation for various
> reasons. Likewise, if a fence is coming from an external process, there
> is no GPU job to write the seqno.
Good point, for that use case the implementation would be useful for us
as well.
> Of course, we could issue a GPU job to
> write the seqno, but this would add latency. In the case of VM bind, we
> really want to completely decouple that from the GPU for various reasons
> (I can explain why if needed, but it's kind of off-topic).
>
>> We have a read only mapped seq64 area in the kernel reserved part of the VM
>> address space.
>>
>> Through this area the queues can see each other's fence progress, and we can
>> say things like: the BO mapping and TLB flush are finished when this seq64
>> increases, so please suspend further processing until you see that.
>>
>> Could be that this is useful for more than Xe, but at least for AMD I
>> currently don't see that.
>>
> Ok, we have no other current users, and if you feel it is better to
> carry this in Xe in a way that it can be moved to the common layer
> later, there’s no issue with that. We have several other components like
> this in Xe that are generic but currently live in Xe.
It's probably overkill for DMA-buf, but maybe we can put that stuff into
DRM.
Christian.
>
> Matt
>
>> Regards,
>> Christian.
>>
>>> Matt
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>> +}
>>>>> +EXPORT_SYMBOL(dma_fence_user_fence_free);
>>>>> +
>>>>> +/**
>>>>> + * dma_fence_user_fence_attach() - Attach user fence to dma-fence
>>>>> + *
>>>>> + * @fence: fence
>>>>> + * @user_fence: user fence
>>>>> + * @map: IOSYS map to write seqno to
>>>>> + * @seqno: seqno to write to IOSYS map
>>>>> + *
>>>>> + * Attach a user fence, which is a seqno write to an IOSYS map, to a DMA fence.
>>>>> + * The caller must guarantee that the memory in the IOSYS map doesn't move
>>>>> + * before the fence signals. This is typically done by installing the DMA fence
>>>>> + * into the BO's DMA reservation bookkeeping slot from which the IOSYS map
>>>>> + * was derived.
>>>>> + */
>>>>> +void dma_fence_user_fence_attach(struct dma_fence *fence,
>>>>> + struct dma_fence_user_fence *user_fence,
>>>>> + struct iosys_map *map, u64 seqno)
>>>>> +{
>>>>> + int err;
>>>>> +
>>>>> + user_fence->map = *map;
>>>>> + user_fence->seqno = seqno;
>>>>> +
>>>>> + err = dma_fence_add_callback(fence, &user_fence->cb, user_fence_cb);
>>>>> + if (err == -ENOENT)
>>>>> + user_fence_cb(NULL, &user_fence->cb);
>>>>> +}
>>>>> +EXPORT_SYMBOL(dma_fence_user_fence_attach);
>>>>> diff --git a/include/linux/dma-fence-user-fence.h b/include/linux/dma-fence-user-fence.h
>>>>> new file mode 100644
>>>>> index 000000000000..8678129c7d56
>>>>> --- /dev/null
>>>>> +++ b/include/linux/dma-fence-user-fence.h
>>>>> @@ -0,0 +1,31 @@
>>>>> +/* SPDX-License-Identifier: MIT */
>>>>> +/*
>>>>> + * Copyright © 2024 Intel Corporation
>>>>> + */
>>>>> +
>>>>> +#ifndef __LINUX_DMA_FENCE_USER_FENCE_H
>>>>> +#define __LINUX_DMA_FENCE_USER_FENCE_H
>>>>> +
>>>>> +#include <linux/dma-fence.h>
>>>>> +#include <linux/iosys-map.h>
>>>>> +
>>>>> +/** struct dma_fence_user_fence - User fence */
>>>>> +struct dma_fence_user_fence {
>>>>> + /** @cb: dma-fence callback used to attach user fence to dma-fence */
>>>>> + struct dma_fence_cb cb;
>>>>> + /** @map: IOSYS map to write seqno to */
>>>>> + struct iosys_map map;
>>>>> + /** @seqno: seqno to write to IOSYS map */
>>>>> + u64 seqno;
>>>>> +};
>>>>> +
>>>>> +struct dma_fence_user_fence *dma_fence_user_fence_alloc(void);
>>>>> +
>>>>> +void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence);
>>>>> +
>>>>> +void dma_fence_user_fence_attach(struct dma_fence *fence,
>>>>> + struct dma_fence_user_fence *user_fence,
>>>>> + struct iosys_map *map,
>>>>> + u64 seqno);
>>>>> +
>>>>> +#endif
* Re: [RFC PATCH 01/29] dma-fence: Add dma_fence_preempt base class
2024-11-21 18:41 ` Matthew Brost
@ 2024-11-22 10:56 ` Christian König
0 siblings, 0 replies; 52+ messages in thread
From: Christian König @ 2024-11-22 10:56 UTC (permalink / raw)
To: Matthew Brost
Cc: intel-xe, dri-devel, kenneth.w.graunke, lionel.g.landwerlin,
jose.souza, simona.vetter, thomas.hellstrom, boris.brezillon,
airlied, mihail.atanassov, steven.price, shashank.sharma
Am 21.11.24 um 19:41 schrieb Matthew Brost:
> [SNIP]
>>>>> + ops->preempt_finished(pfence);
>>>> Why is that callback useful?
>>>>
>>> In Xe, this is where we kick the resume worker and drop a ref to the
>>> preemption object, which in Xe is an individual queue, and in AMD it is
>>> a VM, right?
>> Correct. The whole VM is preempted since we don't know which queue is using
>> which BO.
>>
> Right. In Xe we don't know which queue is using a BO either - we trigger
> all queues' preempt fences attached to the VM, effectively preempting the
> entire VM. The point here is that a per-VM or per-queue preempt fence is a
> driver choice (I can see arguments for both cases), and any base class
> shouldn't dictate what a driver wants to do.
Oh, I think we need to unify that for drivers or at least find an
interface which works for both cases.
And yeah, I agree there are really good arguments for both directions.
Essentially you want separate preemption fences for each queue, but only
one restore worker.
>>> wrt the preemption object, I've reasoned this should work for
>>> either a per-queue or per-VM driver design of preempt fences.
>>>
>>> This part likely could be moved into the preempt_wait callback, though
>>> it would get a little goofy in the error case if preempt_wait is not
>>> called, as the driver side would still need to clean up a ref. Maybe I
>>> don't even need a ref though - have to think that through - but for
>>> general safety we typically like to take refs whenever a fence
>>> references a different object.
>> The tricky part is that at least for us we need to do this *before* the
>> fence is signaled.
> Hmm, I'm a little confused by this. Do you think the code as-is is missing
> something, or are you opposed to keeping the preempt_finished vfunc?
I think we first need a complete picture of how all that is supposed to work.
When we say we resume only on demand, then this callback would make sense
I think, but at least the AMD solution doesn't do that at the moment.
>> This way we can do something like:
>>
>> retry:
>> 	mutex_lock(&lock);
>> 	if (dma_fence_is_signaled(preempt_fence)) {
>> 		mutex_unlock(&lock);
>> 		flush_work(resume_work);
>> 		goto retry;
>> 	}
>>
> This snippet is from your convert user fence to dma fence IOCTL?
Yes.
> I think
> this makes sense given your design of the convert-user-fence-to-dma-fence
> IOCTL not actually doing a resume - I landed on making that IOCTL basically
> another version of the resume worker for simplicity, but that may change
> if we find locking the entire VM is too costly.
Well, we could also resume on demand (thinking more about it, that's most
likely the better approach), but then I would implement it something like
this:

retry:
	mutex_lock(&lock);
	if (dma_fence_is_signaled(preempt_fence)) {
		mutex_unlock(&lock);
		schedule_work(resume_work);
		flush_work(resume_work);
		goto retry;
	}
This way we don't run into issues when multiple participants try to
resume at the same time. E.g. multiple threads where each one tries to
submit work to different queues at the same time.
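A user-space toy model of this retry loop, with an `atomic_flag` standing in for the mutex and a plain bool standing in for the preempt-fence state (all names are invented; the real code would use the kernel's mutex and work-queue APIs):

```c
/* Toy model of the resume-on-demand retry loop: return with the lock held
 * and the preempt fence guaranteed unsignaled. */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static bool preempt_signaled = true;	/* VM starts preempted */

static void resume_work(void)		/* stand-in for the resume worker */
{
	preempt_signaled = false;	/* queue is runnable again */
}

static void lock_runnable(void)
{
retry:
	while (atomic_flag_test_and_set(&lock))
		;			/* mutex_lock() */
	if (preempt_signaled) {
		atomic_flag_clear(&lock);	/* mutex_unlock() */
		resume_work();		/* schedule_work() + flush_work() */
		goto retry;
	}
}

static bool demo(void)
{
	bool ok;

	lock_runnable();
	ok = !preempt_signaled;		/* safe to publish a user fence here */
	atomic_flag_clear(&lock);
	return ok;
}
```

The point of the loop is exactly what the mail describes: if two threads race to resume, both go through the same lock/check/flush cycle and both end up observing an unsignaled preempt fence before proceeding.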
[SNIP]
>>>>> + fence = pfence->ops->preempt_delay(pfence);
>>>> Mhm, why is that useful?
>>>>
>>> This is for attaching the preempt object's last exported fence, which needs
>>> to be signaled before the preemption is issued. So for purely long
>>> running VM's, this function could be NULL. For VM's with user queues +
>>> dma fences, the driver returns the last fence from the convert user
>>> fence to dma-fence IOCTL.
>>>
>>> I realized my kernel doc doesn't explain this as well as it should, I
>>> have already made this more verbose locally and hopefully it clearly
>>> explains all of this.
>> That part was actually obvious. But I would expect that to be a push
>> interface instead of a pull interface.
>>
>> E.g. the preemption fence would also provide something like a manager object
>> which has a mutex, the last exported user fence and the necessary
>> functionality to update this user fence.
>>
> Hmm, I rather like the pull interface. In Xe this is a dma-fence chain
> attached to the VM. It is safe to pull given our convert IOCTL takes the
> VM's dma-resv / notifier locks before publishing the user fence.
I don't think that will work like this.
Publishing the user fence must be serialized with signaling the
preemption fence. And that serialization can't be done by the dma-resv
lock nor the notifier lock because we can't let a dma_fence signaling
depend on them.
So a separate lock or similar mechanism is a must-have.
> In your design, couldn't you use a spin lock in the last step of publishing
> a user fence which checks for SW signaling on the preempt fence, and if it
> is enabled, restart the IOCTL waiting on the resume worker? Then in this
> vfunc pull the fence under the spin lock?
Yeah that could maybe work, but there is also a different challenge to
keep in mind. See below.
> Not opposed to a push interface though if you really think this the way
> to go. Quite certain I could make that work for Xe too.
A push interface is just easier to validate. Keep in mind that you can
only update the user fence when you can guarantee that the preemption
fence is not signaled nor in the process of signaling.
So when you create a user fence the approach becomes:
1. kmalloc the fence structure.
2. Initialize the fence.
3. Push it into the preemption manager of your queue; this makes sure that
the queue is runnable.
4. Publish the new fence in drm_syncobj, dma_resv, etc.
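The four steps could be sketched in user-space C as follows; `preempt_mgr` and `preempt_mgr_push()` are invented stand-ins for whatever the real preemption-manager interface would look like:

```c
/* Toy sketch of the push-interface sequence for publishing a user fence. */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

struct user_fence { uint64_t seqno; };

struct preempt_mgr {
	atomic_flag lock;		/* stand-in for the manager's mutex */
	bool queue_runnable;
	struct user_fence *last;	/* last exported user fence */
};

/* Step 3: push under the manager's lock. Holding the lock serializes
 * against preempt-fence signaling; setting runnable models resume on
 * demand before the fence is made visible. */
static void preempt_mgr_push(struct preempt_mgr *m, struct user_fence *uf)
{
	while (atomic_flag_test_and_set(&m->lock))
		;
	m->queue_runnable = true;
	m->last = uf;
	atomic_flag_clear(&m->lock);
}

static uint64_t demo(void)
{
	struct preempt_mgr mgr = { ATOMIC_FLAG_INIT, false, NULL };
	struct user_fence *uf;
	uint64_t seqno;

	/* Steps 1 + 2: allocate and initialize the fence. */
	uf = malloc(sizeof(*uf));
	uf->seqno = 7;

	/* Step 3: push into the preemption manager; queue is now runnable. */
	preempt_mgr_push(&mgr, uf);

	/* Step 4: publish in drm_syncobj / dma_resv (not modeled here). */
	seqno = mgr.last->seqno;
	free(uf);
	return seqno;
}
```

The key invariant the ordering buys is that by the time step 4 makes the fence visible to other processes, the queue is already guaranteed runnable, so the preempt fence cannot be signaled underneath the newly published fence.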
> This definitely works as is - I've tested this. If the dma-fence's lock were
> embedded within the dma-fence, then ofc lockdep would complain without
> nesting. It isn't though - the spin lock is passed in as an argument, so
> lockdep can identify the 'preempt fence lock' and 'last exported fence'
> locks as independent.
Ah, good point.
But exactly that passing in of the lock is what we are trying to get away
from, to allow dma_fences to outlive the module that issued them.
>>> Lockdep does not explode in Xe, but maybe I can buy that this is a little
>>> unsafe. We could always move preempt_delay to the worker, attach a CB,
>>> and rekick the worker upon the last fence signaling if you think that is
>>> safer. Of course we could always just directly wait on the returned last
>>> fence in the worker too.
>> Yeah, that is basically what we do at the moment, since you also need to
>> make sure that no new user fence is installed while you wait for the latest
>> one to signal.
>>
> After I typed this I realized that waiting on the 'last fence' in the worker
> is a no-go given we want to pipeline preemptions in Xe (e.g., issue all
> queues' preemption commands to the firmware in parallel, as these are async
> operations which may be fast in some cases and slow in others). I think
> having the preempt vfunc called directly in a dma-fence CB is a must.
At least with our current design that won't work, because you need to
somehow prevent installing new user fences while the preemption fence is
signaling.
Currently we do that by holding a mutex, but you can't hold a mutex and
return from a worker and then drop the mutex again in a different worker
(ok, in theory you can, but that is so strongly discouraged that upstream
would probably reject the code).
Regards,
Christian.
>
> Matt
>
* RE: [RFC PATCH 13/29] drm/xe/mmap: Add mmap support for PCI memory barrier
2024-11-19 12:42 ` Mrozek, Michal
@ 2024-12-18 12:59 ` Upadhyay, Tejas
0 siblings, 0 replies; 52+ messages in thread
From: Upadhyay, Tejas @ 2024-12-18 12:59 UTC (permalink / raw)
To: Mrozek, Michal, Joonas Lahtinen, Christian König,
Brost, Matthew, dri-devel@lists.freedesktop.org,
intel-xe@lists.freedesktop.org
Cc: Graunke, Kenneth W, Landwerlin, Lionel G, Souza, Jose,
simona.vetter@ffwll.ch, thomas.hellstrom@linux.intel.com,
boris.brezillon@collabora.com, airlied@gmail.com,
mihail.atanassov@arm.com, steven.price@arm.com,
shashank.sharma@amd.com
> -----Original Message-----
> From: Intel-xe <intel-xe-bounces@lists.freedesktop.org> On Behalf Of Mrozek,
> Michal
> Sent: Tuesday, November 19, 2024 6:12 PM
> To: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>; Christian König
> <christian.koenig@amd.com>; Brost, Matthew <matthew.brost@intel.com>;
> dri-devel@lists.freedesktop.org; intel-xe@lists.freedesktop.org
> Cc: Graunke, Kenneth W <kenneth.w.graunke@intel.com>; Landwerlin, Lionel
> G <lionel.g.landwerlin@intel.com>; Souza, Jose <jose.souza@intel.com>;
> simona.vetter@ffwll.ch; thomas.hellstrom@linux.intel.com;
> boris.brezillon@collabora.com; airlied@gmail.com;
> mihail.atanassov@arm.com; steven.price@arm.com;
> shashank.sharma@amd.com
> Subject: RE: [RFC PATCH 13/29] drm/xe/mmap: Add mmap support for PCI
> memory barrier
>
> "Adding Michal from the compute userspace team for sharing references to
> the code.
>
> Quoting Christian König (2024-11-19 12:00:44)
> > Am 19.11.24 um 00:37 schrieb Matthew Brost:
> > > From: Tejas Upadhyay <tejas.upadhyay@intel.com>
> > >
> > > In order to avoid having userspace use MI_MEM_FENCE, we are
> > > adding a mechanism for userspace to generate a PCI memory barrier
> > > with low overhead (an IOCTL call, as well as a write to VRAM,
> > > would add some overhead).
> > >
> > > This is implemented by memory-mapping a page as uncached that is
> > > backed by MMIO on the dGPU and thus allowing userspace to do memory
> > > write to the page without invoking an IOCTL.
> > > We are selecting the MMIO so that it is not accessible from the PCI
> > > bus so that the MMIO writes themselves are ignored, but the PCI
> > > memory barrier will still take action as the MMIO filtering will
> > > happen after the memory barrier effect.
> > >
> > > When we detect a specially defined offset in mmap(), we map a 4K
> > > page which contains the last page of the doorbell MMIO range to
> > > userspace for the same purpose.
> >
> > Well that is quite a hack, but don't you still need a memory barrier
> > instruction, e.g. mfence?
>
> I guess you refer to the userspace usage directions? Yeah, the userspace
> definitely has to make sure that the write actually propagated to the PCI bus
> before they can assume the serialization to happen on the GPU. I think the
> userspace folks should be able to explain how exactly they orchestrate that.
> Michal, can you or somebody else share the respective lines of code in the
> userspace driver?
>
> At this time, the userspace only enables this on X86, but could also support
> other more exotic platforms via libpciaccess.
>
> > And why don't you expose the real doorbell instead of the last
> > (unused?) page of the MMIO region?
>
> Doorbells are a complete red herring here.
>
> The chosen page just happens to be a full 4K MMIO page where any writes
> coming over the PCI bus get dropped (and reads return zero) by the GPU. Such
> a dummy (from the CPU point of view) 4K MMIO page allows doing a CPU write
> that generates a PCI bus transaction, where the transaction itself is
> essentially a NOP. But as the transaction falls into the MMIO address range,
> it will trigger a serialization of the incoming traffic on the GPU side,
> before being ignored.
>
> Regards, Joonas
> "
>
> Here is appropriate path:
> https://github.com/intel/compute-runtime/blob/f589408848128434e410b6b4c2a9107ff78a74e9/shared/source/direct_submission/direct_submission_hw.inl#L437
>
> flow is as follows:
> 1. do updates to shared memory between CPU/GPU using WC memory mapping
> 2. emit sfence instruction to make sure there is no reordering on the CPU side
> 3. emit pciBarrier write (this patch); this ensures that all earlier
>    transactions are properly ordered from the GPU side
>
> So the PCI memory barrier is submitted after the sfence instruction, and that
> makes sure that all earlier transactions are properly ordered.
>
> Michal
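A minimal user-space model of Michal's three steps, with a dummy variable standing in for the MMIO barrier page and a C11 release fence standing in for the x86 sfence instruction (this is not the actual compute-runtime code):

```c
/* Toy model of the WC-write / sfence / barrier-write submission flow. */
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t shared_ring_tail;	/* WC-mapped CPU/GPU memory */
static volatile uint32_t pci_barrier_page;	/* stand-in for the MMIO page */

static void submit(uint64_t new_tail)
{
	/* 1. Update the CPU/GPU shared memory (ring contents, tail pointer). */
	atomic_store_explicit(&shared_ring_tail, new_tail,
			      memory_order_relaxed);

	/* 2. sfence: make sure the stores are not reordered on the CPU side.
	 *    A C11 release fence stands in for the real instruction here. */
	atomic_thread_fence(memory_order_release);

	/* 3. Dummy write to the barrier page; the resulting PCI transaction
	 *    serializes earlier traffic on the GPU side, then is dropped. */
	pci_barrier_page = 1;
}

static uint64_t demo(void)
{
	submit(64);
	return atomic_load(&shared_ring_tail);
}
```

The fence between steps 1 and 3 is what makes the trick sound: the barrier write must reach the PCI bus only after the shared-memory updates, or the GPU-side serialization would order nothing.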
https://patchwork.freedesktop.org/patch/629628/ is a separately reviewed submission intended to be merged standalone. It will be merged if there are no objections.
Thanks,
Tejas
>
Thread overview: 52+ messages
2024-11-18 23:37 [RFC PATCH 00/29] UMD direct submission in Xe Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 01/29] dma-fence: Add dma_fence_preempt base class Matthew Brost
2024-11-20 13:31 ` Christian König
2024-11-20 17:36 ` Matthew Brost
2024-11-21 10:04 ` Christian König
2024-11-21 18:41 ` Matthew Brost
2024-11-22 10:56 ` Christian König
2024-11-18 23:37 ` [RFC PATCH 02/29] dma-fence: Add dma_fence_user_fence Matthew Brost
2024-11-20 13:38 ` Christian König
2024-11-20 22:50 ` Matthew Brost
2024-11-21 9:31 ` Christian König
2024-11-22 2:35 ` Matthew Brost
2024-11-22 10:28 ` Christian König
2024-11-18 23:37 ` [RFC PATCH 03/29] drm/xe: Use dma_fence_preempt base class Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 04/29] drm/xe: Allocate doorbells for UMD exec queues Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 05/29] drm/xe: Add doorbell ID to snapshot capture Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 06/29] drm/xe: Break submission ring out into its own BO Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 07/29] drm/xe: Break indirect ring state " Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 08/29] drm/xe: Clear GGTT in xe_bo_restore_kernel Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 09/29] FIXME: drm/xe: Add pad to ring and indirect state Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 10/29] drm/xe: Enable indirect ring on media GT Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 11/29] drm/xe: Don't add pinned mappings to VM bulk move Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 12/29] drm/xe: Add exec queue post init extension processing Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 13/29] drm/xe/mmap: Add mmap support for PCI memory barrier Matthew Brost
2024-11-19 10:00 ` Christian König
2024-11-19 11:57 ` Joonas Lahtinen
2024-11-19 12:42 ` Mrozek, Michal
2024-12-18 12:59 ` Upadhyay, Tejas
2024-11-18 23:37 ` [RFC PATCH 14/29] drm/xe: Add support for mmapping doorbells to user space Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 15/29] drm/xe: Add support for mmapping submission ring and indirect ring state " Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 16/29] drm/xe/uapi: Define UMD exec queue mapping uAPI Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 17/29] drm/xe: Add usermap exec queue extension Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 18/29] drm/xe: Drop EXEC_QUEUE_FLAG_UMD_SUBMISSION flag Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 19/29] drm/xe: Do not allow usermap exec queues in exec IOCTL Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 20/29] drm/xe: Teach GuC backend to kill usermap queues Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 21/29] drm/xe: Enable preempt fences on " Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 22/29] drm/xe/uapi: Add uAPI to convert user semaphore to / from drm syncobj Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 23/29] drm/xe: Add user fence IRQ handler Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 24/29] drm/xe: Add xe_hw_fence_user_init Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 25/29] drm/xe: Add a message lock to the Xe GPU scheduler Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 26/29] drm/xe: Always wait on preempt fences in vma_check_userptr Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 27/29] drm/xe: Teach xe_sync layer about drm_xe_semaphore Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 28/29] drm/xe: Add VM convert fence IOCTL Matthew Brost
2024-11-18 23:37 ` [RFC PATCH 29/29] drm/xe: Add user fence TDR Matthew Brost
2024-11-18 23:55 ` ✓ CI.Patch_applied: success for UMD direct submission in Xe Patchwork
2024-11-18 23:56 ` ✗ CI.checkpatch: warning " Patchwork
2024-11-18 23:57 ` ✓ CI.KUnit: success " Patchwork
2024-11-19 0:15 ` ✓ CI.Build: " Patchwork
2024-11-19 0:17 ` ✗ CI.Hooks: failure " Patchwork
2024-11-19 0:19 ` ✓ CI.checksparse: success " Patchwork
2024-11-19 0:39 ` ✗ CI.BAT: failure " Patchwork
2024-11-19 11:44 ` ✗ CI.FULL: " Patchwork