* [PATCH v2 00/12] Add PXP HWDRM support
@ 2024-08-16 19:00 Daniele Ceraolo Spurio
2024-08-16 19:00 ` [PATCH v2 01/12] drm/xe/pxp: Initialize PXP structure and KCR reg Daniele Ceraolo Spurio
` (20 more replies)
0 siblings, 21 replies; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-16 19:00 UTC (permalink / raw)
To: intel-xe
Cc: Daniele Ceraolo Spurio, José Roberto de Souza, Alan Previn,
Matthew Brost, Thomas Hellström, John Harrison
PXP (Protected Xe Path) allows execution and flip to display of protected
(i.e. encrypted) objects. The HW supports multiple types of PXP, but
this series only introduces support for PXP HWDRM, which is mainly
targeted at encrypting data that is going to be displayed.
Even though we only plan to support 1 type of PXP for now, the interface
has been designed to allow support for other PXP types to be added at a
later point in time.
A user is expected to mark both BOs and exec_queues as using PXP; the
driver will then make sure that PXP is running, that the encryption is
valid and that no execution happens with an outdated encryption.
v2: code cleaned up and fixed while coming out of RFC; addressed review
feedback regarding the interface.
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Alan Previn <alan.previn.teres.alexis@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Daniele Ceraolo Spurio (12):
drm/xe/pxp: Initialize PXP structure and KCR reg
drm/xe/pxp: Allocate PXP execution resources
drm/xe/pxp: Add VCS inline termination support
drm/xe/pxp: Add GSC session invalidation support
drm/xe/pxp: Handle the PXP termination interrupt
drm/xe/pxp: Add GSC session initialization support
drm/xe/pxp: Add support for PXP-using queues
drm/xe/pxp: add a query for PXP status
drm/xe/pxp: Add API to mark a BO as using PXP
drm/xe/pxp: add PXP PM support
drm/xe/pxp: Add PXP debugfs support
drm/xe/pxp: Enable PXP for MTL and LNL
drivers/gpu/drm/xe/Makefile | 3 +
drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h | 40 +
.../xe/compat-i915-headers/pxp/intel_pxp.h | 14 +-
.../gpu/drm/xe/instructions/xe_instr_defs.h | 1 +
.../gpu/drm/xe/instructions/xe_mfx_commands.h | 29 +
.../gpu/drm/xe/instructions/xe_mi_commands.h | 5 +
drivers/gpu/drm/xe/regs/xe_engine_regs.h | 1 +
drivers/gpu/drm/xe/regs/xe_gt_regs.h | 8 +
drivers/gpu/drm/xe/regs/xe_pxp_regs.h | 23 +
drivers/gpu/drm/xe/xe_bo.c | 100 ++-
drivers/gpu/drm/xe/xe_bo.h | 5 +
drivers/gpu/drm/xe/xe_bo_types.h | 3 +
drivers/gpu/drm/xe/xe_debugfs.c | 3 +
drivers/gpu/drm/xe/xe_device.c | 6 +
drivers/gpu/drm/xe/xe_device_types.h | 8 +-
drivers/gpu/drm/xe/xe_exec.c | 6 +
drivers/gpu/drm/xe/xe_exec_queue.c | 61 +-
drivers/gpu/drm/xe/xe_exec_queue.h | 5 +
drivers/gpu/drm/xe/xe_exec_queue_types.h | 8 +
drivers/gpu/drm/xe/xe_hw_engine.c | 2 +-
drivers/gpu/drm/xe/xe_irq.c | 20 +-
drivers/gpu/drm/xe/xe_lrc.c | 16 +-
drivers/gpu/drm/xe/xe_lrc.h | 7 +-
drivers/gpu/drm/xe/xe_pci.c | 4 +
drivers/gpu/drm/xe/xe_pm.c | 42 +-
drivers/gpu/drm/xe/xe_pxp.c | 738 ++++++++++++++++++
drivers/gpu/drm/xe/xe_pxp.h | 33 +
drivers/gpu/drm/xe/xe_pxp_debugfs.c | 120 +++
drivers/gpu/drm/xe/xe_pxp_debugfs.h | 13 +
drivers/gpu/drm/xe/xe_pxp_submit.c | 572 ++++++++++++++
drivers/gpu/drm/xe/xe_pxp_submit.h | 22 +
drivers/gpu/drm/xe/xe_pxp_types.h | 123 +++
drivers/gpu/drm/xe/xe_query.c | 32 +
drivers/gpu/drm/xe/xe_ring_ops.c | 4 +-
drivers/gpu/drm/xe/xe_vm.c | 170 +++-
drivers/gpu/drm/xe/xe_vm.h | 8 +
drivers/gpu/drm/xe/xe_vm_types.h | 1 +
include/uapi/drm/xe_drm.h | 94 ++-
38 files changed, 2307 insertions(+), 43 deletions(-)
create mode 100644 drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
create mode 100644 drivers/gpu/drm/xe/regs/xe_pxp_regs.h
create mode 100644 drivers/gpu/drm/xe/xe_pxp.c
create mode 100644 drivers/gpu/drm/xe/xe_pxp.h
create mode 100644 drivers/gpu/drm/xe/xe_pxp_debugfs.c
create mode 100644 drivers/gpu/drm/xe/xe_pxp_debugfs.h
create mode 100644 drivers/gpu/drm/xe/xe_pxp_submit.c
create mode 100644 drivers/gpu/drm/xe/xe_pxp_submit.h
create mode 100644 drivers/gpu/drm/xe/xe_pxp_types.h
--
2.43.0
* [PATCH v2 01/12] drm/xe/pxp: Initialize PXP structure and KCR reg
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
@ 2024-08-16 19:00 ` Daniele Ceraolo Spurio
2024-10-04 20:29 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 02/12] drm/xe/pxp: Allocate PXP execution resources Daniele Ceraolo Spurio
` (19 subsequent siblings)
20 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-16 19:00 UTC (permalink / raw)
To: intel-xe; +Cc: Daniele Ceraolo Spurio
As the first step towards adding PXP support, hook in the PXP init
function, allocate the PXP structure and initialize the KCR register to
allow PXP HWDRM sessions.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
drivers/gpu/drm/xe/Makefile | 1 +
.../xe/compat-i915-headers/pxp/intel_pxp.h | 4 +-
drivers/gpu/drm/xe/regs/xe_pxp_regs.h | 17 +++
drivers/gpu/drm/xe/xe_device.c | 6 +
drivers/gpu/drm/xe/xe_device_types.h | 8 +-
drivers/gpu/drm/xe/xe_pci.c | 2 +
drivers/gpu/drm/xe/xe_pxp.c | 103 ++++++++++++++++++
drivers/gpu/drm/xe/xe_pxp.h | 15 +++
drivers/gpu/drm/xe/xe_pxp_types.h | 28 +++++
9 files changed, 180 insertions(+), 4 deletions(-)
create mode 100644 drivers/gpu/drm/xe/regs/xe_pxp_regs.h
create mode 100644 drivers/gpu/drm/xe/xe_pxp.c
create mode 100644 drivers/gpu/drm/xe/xe_pxp.h
create mode 100644 drivers/gpu/drm/xe/xe_pxp_types.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index e11392b5dd3d..9e007e59de83 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -83,6 +83,7 @@ xe-y += xe_bb.o \
xe_preempt_fence.o \
xe_pt.o \
xe_pt_walk.o \
+ xe_pxp.o \
xe_query.o \
xe_range_fence.o \
xe_reg_sr.o \
diff --git a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
index c2c30ece8f77..881680727452 100644
--- a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
+++ b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
@@ -10,9 +10,9 @@
#include <linux/types.h>
struct drm_i915_gem_object;
-struct intel_pxp;
+struct xe_pxp;
-static inline int intel_pxp_key_check(struct intel_pxp *pxp,
+static inline int intel_pxp_key_check(struct xe_pxp *pxp,
struct drm_i915_gem_object *obj,
bool assign)
{
diff --git a/drivers/gpu/drm/xe/regs/xe_pxp_regs.h b/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
new file mode 100644
index 000000000000..d67cf210d23d
--- /dev/null
+++ b/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright(c) 2024, Intel Corporation. All rights reserved.
+ */
+
+#ifndef __XE_PXP_REGS_H__
+#define __XE_PXP_REGS_H__
+
+#include "regs/xe_regs.h"
+
+/* The following registers are only valid on platforms with a media GT */
+
+/* KCR enable/disable control */
+#define KCR_INIT XE_REG(0x3860f0)
+#define KCR_INIT_ALLOW_DISPLAY_ME_WRITES REG_BIT(14)
+
+#endif /* __XE_PXP_REGS_H__ */
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 206328387150..807a15c49a81 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -46,6 +46,7 @@
#include "xe_pat.h"
#include "xe_pcode.h"
#include "xe_pm.h"
+#include "xe_pxp.h"
#include "xe_query.h"
#include "xe_sriov.h"
#include "xe_tile.h"
@@ -730,6 +731,11 @@ int xe_device_probe(struct xe_device *xe)
if (err)
goto err_fini_oa;
+ /* A PXP init failure is not fatal */
+ err = xe_pxp_init(xe);
+ if (err && err != -EOPNOTSUPP)
+ drm_err(&xe->drm, "PXP initialization failed: %pe\n", ERR_PTR(err));
+
err = drm_dev_register(&xe->drm, 0);
if (err)
goto err_fini_display;
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index 16a24eadd94b..b00a78be3934 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -35,6 +35,7 @@
struct xe_ggtt;
struct xe_pat_ops;
+struct xe_pxp;
#define XE_BO_INVALID_OFFSET LONG_MAX
@@ -276,6 +277,8 @@ struct xe_device {
u8 has_llc:1;
/** @info.has_mmio_ext: Device has extra MMIO address range */
u8 has_mmio_ext:1;
+ /** @info.has_pxp: Device has PXP support */
+ u8 has_pxp:1;
/** @info.has_range_tlb_invalidation: Has range based TLB invalidations */
u8 has_range_tlb_invalidation:1;
/** @info.has_sriov: Supports SR-IOV */
@@ -480,6 +483,9 @@ struct xe_device {
/** @oa: oa observation subsystem */
struct xe_oa oa;
+ /** @pxp: Encapsulate Protected Xe Path support */
+ struct xe_pxp *pxp;
+
/** @needs_flr_on_fini: requests function-reset on fini */
bool needs_flr_on_fini;
@@ -552,8 +558,6 @@ struct xe_device {
unsigned int czclk_freq;
unsigned int fsb_freq, mem_freq, is_ddr3;
};
-
- void *pxp;
#endif
};
diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
index 3c34b032ebf4..d1453ba20dcd 100644
--- a/drivers/gpu/drm/xe/xe_pci.c
+++ b/drivers/gpu/drm/xe/xe_pci.c
@@ -62,6 +62,7 @@ struct xe_device_desc {
u8 has_heci_cscfi:1;
u8 has_llc:1;
u8 has_mmio_ext:1;
+ u8 has_pxp:1;
u8 has_sriov:1;
u8 skip_guc_pc:1;
u8 skip_mtcfg:1;
@@ -616,6 +617,7 @@ static int xe_info_init_early(struct xe_device *xe,
xe->info.has_heci_cscfi = desc->has_heci_cscfi;
xe->info.has_llc = desc->has_llc;
xe->info.has_mmio_ext = desc->has_mmio_ext;
+ xe->info.has_pxp = desc->has_pxp;
xe->info.has_sriov = desc->has_sriov;
xe->info.skip_guc_pc = desc->skip_guc_pc;
xe->info.skip_mtcfg = desc->skip_mtcfg;
diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
new file mode 100644
index 000000000000..f974f74be1d5
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_pxp.c
@@ -0,0 +1,103 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright(c) 2024 Intel Corporation.
+ */
+
+#include "xe_pxp.h"
+
+#include <drm/drm_managed.h>
+
+#include "xe_device_types.h"
+#include "xe_force_wake.h"
+#include "xe_gt.h"
+#include "xe_gt_types.h"
+#include "xe_mmio.h"
+#include "xe_pxp_types.h"
+#include "xe_uc_fw.h"
+#include "regs/xe_pxp_regs.h"
+
+/**
+ * DOC: PXP
+ *
+ * PXP (Protected Xe Path) allows execution and flip to display of protected
+ * (i.e. encrypted) objects. This feature is currently only supported in
+ * integrated parts.
+ */
+
+static bool pxp_is_supported(const struct xe_device *xe)
+{
+ return xe->info.has_pxp && IS_ENABLED(CONFIG_INTEL_MEI_GSC_PROXY);
+}
+
+static int kcr_pxp_set_status(const struct xe_pxp *pxp, bool enable)
+{
+ u32 val = enable ? _MASKED_BIT_ENABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES) :
+ _MASKED_BIT_DISABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES);
+ int err;
+
+ err = xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GT);
+ if (err)
+ return err;
+
+ xe_mmio_write32(pxp->gt, KCR_INIT, val);
+ XE_WARN_ON(xe_force_wake_put(gt_to_fw(pxp->gt), XE_FW_GT));
+
+ return 0;
+}
+
+static int kcr_pxp_enable(const struct xe_pxp *pxp)
+{
+ return kcr_pxp_set_status(pxp, true);
+}
+
+/**
+ * xe_pxp_init - initialize PXP support
+ * @xe: the xe_device structure
+ *
+ * Initialize the HW state and allocate the objects required for PXP support.
+ * Note that some of the requirements for PXP support (GSC proxy init, HuC auth)
+ * are performed asynchronously as part of the GSC init. PXP can only be used
+ * after both this function and the async worker have completed.
+ *
+ * Returns -EOPNOTSUPP if PXP is not supported, 0 if PXP initialization is
+ * successful, other errno value if there is an error during the init.
+ */
+int xe_pxp_init(struct xe_device *xe)
+{
+ struct xe_gt *gt = xe->tiles[0].media_gt;
+ struct xe_pxp *pxp;
+ int err;
+
+ if (!pxp_is_supported(xe))
+ return -EOPNOTSUPP;
+
+ /* we only support PXP on single tile devices with a media GT */
+ if (xe->info.tile_count > 1 || !gt)
+ return -EOPNOTSUPP;
+
+ /* The GSCCS is required for submissions to the GSC FW */
+ if (!(gt->info.engine_mask & BIT(XE_HW_ENGINE_GSCCS0)))
+ return -EOPNOTSUPP;
+
+ /* PXP requires both GSC and HuC firmwares to be available */
+ if (!xe_uc_fw_is_loadable(&gt->uc.gsc.fw) ||
+ !xe_uc_fw_is_loadable(&gt->uc.huc.fw)) {
+ drm_info(&xe->drm, "skipping PXP init due to missing FW dependencies");
+ return -EOPNOTSUPP;
+ }
+
+ pxp = drmm_kzalloc(&xe->drm, sizeof(struct xe_pxp), GFP_KERNEL);
+ if (!pxp)
+ return -ENOMEM;
+
+ pxp->xe = xe;
+ pxp->gt = gt;
+
+ err = kcr_pxp_enable(pxp);
+ if (err)
+ return err;
+
+ xe->pxp = pxp;
+
+ return 0;
+}
diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
new file mode 100644
index 000000000000..79c951667f13
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_pxp.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright(c) 2024, Intel Corporation. All rights reserved.
+ */
+
+#ifndef __XE_PXP_H__
+#define __XE_PXP_H__
+
+#include <linux/types.h>
+
+struct xe_device;
+
+int xe_pxp_init(struct xe_device *xe);
+
+#endif /* __XE_PXP_H__ */
diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
new file mode 100644
index 000000000000..3a141021972a
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_pxp_types.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright(c) 2024, Intel Corporation. All rights reserved.
+ */
+
+#ifndef __XE_PXP_TYPES_H__
+#define __XE_PXP_TYPES_H__
+
+#include <linux/types.h>
+
+struct xe_device;
+struct xe_gt;
+
+/**
+ * struct xe_pxp - pxp state
+ */
+struct xe_pxp {
+ /** @xe: Backpointer to the xe_device struct */
+ struct xe_device *xe;
+
+ /**
+ * @gt: pointer to the gt that owns the submission-side of PXP
+ * (VDBOX, KCR and GSC)
+ */
+ struct xe_gt *gt;
+};
+
+#endif /* __XE_PXP_TYPES_H__ */
--
2.43.0
* [PATCH v2 02/12] drm/xe/pxp: Allocate PXP execution resources
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
2024-08-16 19:00 ` [PATCH v2 01/12] drm/xe/pxp: Initialize PXP structure and KCR reg Daniele Ceraolo Spurio
@ 2024-08-16 19:00 ` Daniele Ceraolo Spurio
2024-08-19 9:19 ` Jani Nikula
2024-10-04 20:30 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 03/12] drm/xe/pxp: Add VCS inline termination support Daniele Ceraolo Spurio
` (18 subsequent siblings)
20 siblings, 2 replies; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-16 19:00 UTC (permalink / raw)
To: intel-xe; +Cc: Daniele Ceraolo Spurio, Matthew Brost, Thomas Hellström
PXP requires submissions to the HW for the following operations:
1) Key invalidation, done via the VCS engine
2) Communication with the GSC FW for session management, done via the
GSCCS.
Key invalidation submissions are serialized (only 1 termination can be
serviced at a given time) and done via GGTT, so we can allocate a simple
BO and a kernel queue for it.
Submissions for session management are tied to a PXP client (identified
by a unique host_session_id); from the GSC POV this is a user-accessible
construct, so all related submissions must be done via PPGTT. The driver
does not currently support PPGTT submission from within the kernel, so
to add this support, the following changes have been included:
- a new type of kernel-owned VM (marked as GSC), required to ensure we
don't set the device in no-fault mode when we initialize PXP and to
mark the different lock usage with lockdep.
- a new function to map a BO into a VM from within the kernel.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/xe/Makefile | 1 +
drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h | 7 +
drivers/gpu/drm/xe/xe_exec_queue.c | 3 +
drivers/gpu/drm/xe/xe_pxp.c | 25 ++-
drivers/gpu/drm/xe/xe_pxp_submit.c | 201 ++++++++++++++++++
drivers/gpu/drm/xe/xe_pxp_submit.h | 16 ++
drivers/gpu/drm/xe/xe_pxp_types.h | 46 ++++
drivers/gpu/drm/xe/xe_vm.c | 124 ++++++++++-
drivers/gpu/drm/xe/xe_vm.h | 6 +
drivers/gpu/drm/xe/xe_vm_types.h | 1 +
10 files changed, 418 insertions(+), 12 deletions(-)
create mode 100644 drivers/gpu/drm/xe/xe_pxp_submit.c
create mode 100644 drivers/gpu/drm/xe/xe_pxp_submit.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 9e007e59de83..a508b9166b88 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -84,6 +84,7 @@ xe-y += xe_bb.o \
xe_pt.o \
xe_pt_walk.o \
xe_pxp.o \
+ xe_pxp_submit.o \
xe_query.o \
xe_range_fence.o \
xe_reg_sr.o \
diff --git a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
index 57520809e48d..f3c4cf10ba20 100644
--- a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
+++ b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
@@ -6,6 +6,7 @@
#ifndef _ABI_GSC_PXP_COMMANDS_ABI_H
#define _ABI_GSC_PXP_COMMANDS_ABI_H
+#include <linux/sizes.h>
#include <linux/types.h>
/* Heci client ID for PXP commands */
@@ -13,6 +14,12 @@
#define PXP_APIVER(x, y) (((x) & 0xFFFF) << 16 | ((y) & 0xFFFF))
+/*
+ * A PXP sub-section in an HECI packet can be up to 64K big in each direction.
+ * This does not include the top-level GSC header.
+ */
+#define PXP_MAX_PACKET_SIZE SZ_64K
+
/*
* there are a lot of status codes for PXP, but we only define the cross-API
* common ones that we actually can handle in the kernel driver. Other failure
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index 7d170d37fdbe..e98e8794eddf 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -148,6 +148,9 @@ struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *v
struct xe_exec_queue *q;
int err;
+ /* VMs for GSCCS queues (and only those) must have the XE_VM_FLAG_GSC flag */
+ xe_assert(xe, !vm || (!!(vm->flags & XE_VM_FLAG_GSC) == !!(hwe->engine_id == XE_HW_ENGINE_GSCCS0)));
+
q = __xe_exec_queue_alloc(xe, vm, logical_mask, width, hwe, flags,
extensions);
if (IS_ERR(q))
diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
index f974f74be1d5..56bb7d927c07 100644
--- a/drivers/gpu/drm/xe/xe_pxp.c
+++ b/drivers/gpu/drm/xe/xe_pxp.c
@@ -12,6 +12,7 @@
#include "xe_gt.h"
#include "xe_gt_types.h"
#include "xe_mmio.h"
+#include "xe_pxp_submit.h"
#include "xe_pxp_types.h"
#include "xe_uc_fw.h"
#include "regs/xe_pxp_regs.h"
@@ -50,6 +51,20 @@ static int kcr_pxp_enable(const struct xe_pxp *pxp)
return kcr_pxp_set_status(pxp, true);
}
+static int kcr_pxp_disable(const struct xe_pxp *pxp)
+{
+ return kcr_pxp_set_status(pxp, false);
+}
+
+static void pxp_fini(void *arg)
+{
+ struct xe_pxp *pxp = arg;
+
+ xe_pxp_destroy_execution_resources(pxp);
+
+ /* no need to explicitly disable KCR since we're going to do an FLR */
+}
+
/**
* xe_pxp_init - initialize PXP support
* @xe: the xe_device structure
@@ -97,7 +112,15 @@ int xe_pxp_init(struct xe_device *xe)
if (err)
return err;
+ err = xe_pxp_allocate_execution_resources(pxp);
+ if (err)
+ goto kcr_disable;
+
xe->pxp = pxp;
- return 0;
+ return devm_add_action_or_reset(xe->drm.dev, pxp_fini, pxp);
+
+kcr_disable:
+ kcr_pxp_disable(pxp);
+ return err;
}
diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
new file mode 100644
index 000000000000..b777b0765c8a
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
@@ -0,0 +1,201 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright(c) 2024 Intel Corporation.
+ */
+
+#include "xe_pxp_submit.h"
+
+#include <drm/xe_drm.h>
+
+#include "xe_device_types.h"
+#include "xe_bo.h"
+#include "xe_exec_queue.h"
+#include "xe_gsc_submit.h"
+#include "xe_gt.h"
+#include "xe_pxp_types.h"
+#include "xe_vm.h"
+#include "regs/xe_gt_regs.h"
+
+/*
+ * The VCS is used for kernel-owned GGTT submissions to issue key termination.
+ * Terminations are serialized, so we only need a single queue and a single
+ * batch.
+ */
+static int allocate_vcs_execution_resources(struct xe_pxp *pxp)
+{
+ struct xe_gt *gt = pxp->gt;
+ struct xe_device *xe = pxp->xe;
+ struct xe_tile *tile = gt_to_tile(gt);
+ struct xe_hw_engine *hwe;
+ struct xe_exec_queue *q;
+ struct xe_bo *bo;
+ int err;
+
+ hwe = xe_gt_hw_engine(gt, XE_ENGINE_CLASS_VIDEO_DECODE, 0, true);
+ if (!hwe)
+ return -ENODEV;
+
+ q = xe_exec_queue_create(xe, NULL, BIT(hwe->logical_instance), 1, hwe,
+ EXEC_QUEUE_FLAG_KERNEL | EXEC_QUEUE_FLAG_PERMANENT, 0);
+ if (IS_ERR(q))
+ return PTR_ERR(q);
+
+ /*
+ * Each termination is 16 DWORDS, so 4K is enough to contain a
+ * termination for each session.
+ */
+ bo = xe_bo_create_pin_map(xe, tile, 0, SZ_4K, ttm_bo_type_kernel,
+ XE_BO_FLAG_SYSTEM | XE_BO_FLAG_PINNED | XE_BO_FLAG_GGTT);
+ if (IS_ERR(bo)) {
+ err = PTR_ERR(bo);
+ goto out_queue;
+ }
+
+ pxp->vcs_exec.q = q;
+ pxp->vcs_exec.bo = bo;
+
+ return 0;
+
+out_queue:
+ xe_exec_queue_put(q);
+ return err;
+}
+
+static void destroy_vcs_execution_resources(struct xe_pxp *pxp)
+{
+ if (pxp->vcs_exec.bo)
+ xe_bo_unpin_map_no_vm(pxp->vcs_exec.bo);
+
+ if (pxp->vcs_exec.q)
+ xe_exec_queue_put(pxp->vcs_exec.q);
+}
+
+#define PXP_BB_SIZE XE_PAGE_SIZE
+static int allocate_gsc_client_resources(struct xe_gt *gt,
+ struct xe_pxp_gsc_client_resources *gsc_res,
+ size_t inout_size)
+{
+ struct xe_tile *tile = gt_to_tile(gt);
+ struct xe_device *xe = tile_to_xe(tile);
+ struct xe_hw_engine *hwe;
+ struct xe_vm *vm;
+ struct xe_bo *bo;
+ struct xe_exec_queue *q;
+ struct dma_fence *fence;
+ long timeout;
+ int err = 0;
+
+ hwe = xe_gt_hw_engine(gt, XE_ENGINE_CLASS_OTHER, OTHER_GSC_INSTANCE, false);
+
+ /* we shouldn't reach here if the GSC engine is not available */
+ xe_assert(xe, hwe);
+
+ /* PXP instructions must be issued from PPGTT */
+ vm = xe_vm_create(xe, XE_VM_FLAG_GSC);
+ if (IS_ERR(vm))
+ return PTR_ERR(vm);
+
+ /* We allocate a single object for the batch and the in/out memory */
+ xe_vm_lock(vm, false);
+ bo = xe_bo_create_pin_map(xe, tile, vm, PXP_BB_SIZE + inout_size * 2,
+ ttm_bo_type_kernel,
+ XE_BO_FLAG_SYSTEM | XE_BO_FLAG_PINNED | XE_BO_FLAG_NEEDS_UC);
+ xe_vm_unlock(vm);
+ if (IS_ERR(bo)) {
+ err = PTR_ERR(bo);
+ goto vm_out;
+ }
+
+ fence = xe_vm_bind_bo(vm, bo, NULL, 0, XE_CACHE_WB);
+ if (IS_ERR(fence)) {
+ err = PTR_ERR(fence);
+ goto bo_out;
+ }
+
+ timeout = dma_fence_wait_timeout(fence, false, HZ);
+ dma_fence_put(fence);
+ if (timeout <= 0) {
+ err = timeout ?: -ETIME;
+ goto bo_out;
+ }
+
+ q = xe_exec_queue_create(xe, vm, BIT(hwe->logical_instance), 1, hwe,
+ EXEC_QUEUE_FLAG_KERNEL |
+ EXEC_QUEUE_FLAG_PERMANENT, 0);
+ if (IS_ERR(q)) {
+ err = PTR_ERR(q);
+ goto bo_out;
+ }
+
+ gsc_res->vm = vm;
+ gsc_res->bo = bo;
+ gsc_res->inout_size = inout_size;
+ gsc_res->batch = IOSYS_MAP_INIT_OFFSET(&bo->vmap, 0);
+ gsc_res->msg_in = IOSYS_MAP_INIT_OFFSET(&bo->vmap, PXP_BB_SIZE);
+ gsc_res->msg_out = IOSYS_MAP_INIT_OFFSET(&bo->vmap, PXP_BB_SIZE + inout_size);
+ gsc_res->q = q;
+
+ /* initialize host-session-handle (for all Xe-to-gsc-firmware PXP cmds) */
+ gsc_res->host_session_handle = xe_gsc_create_host_session_id();
+
+ return 0;
+
+bo_out:
+ xe_bo_unpin_map_no_vm(bo);
+vm_out:
+ xe_vm_close_and_put(vm);
+
+ return err;
+}
+
+static void destroy_gsc_client_resources(struct xe_pxp_gsc_client_resources *gsc_res)
+{
+ if (!gsc_res->q)
+ return;
+
+ xe_exec_queue_put(gsc_res->q);
+ xe_bo_unpin_map_no_vm(gsc_res->bo);
+ xe_vm_close_and_put(gsc_res->vm);
+}
+
+/**
+ * xe_pxp_allocate_execution_resources - Allocate PXP submission objects
+ * @pxp: the xe_pxp structure
+ *
+ * Allocates exec_queues objects for VCS and GSCCS submission. The GSCCS
+ * submissions are done via PPGTT, so this function allocates a VM for it and
+ * maps the object into it.
+ *
+ * Returns 0 if the allocation and mapping is successful, an errno value
+ * otherwise.
+ */
+int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp)
+{
+ int err;
+
+ err = allocate_vcs_execution_resources(pxp);
+ if (err)
+ return err;
+
+ /*
+ * PXP commands can require a lot of BO space (see PXP_MAX_PACKET_SIZE),
+ * but we currently only support a subset of commands that are small
+ * (< 20 dwords), so a single page is enough for now.
+ */
+ err = allocate_gsc_client_resources(pxp->gt, &pxp->gsc_res, XE_PAGE_SIZE);
+ if (err)
+ goto destroy_vcs_context;
+
+ return 0;
+
+destroy_vcs_context:
+ destroy_vcs_execution_resources(pxp);
+ return err;
+}
+
+void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp)
+{
+ destroy_gsc_client_resources(&pxp->gsc_res);
+ destroy_vcs_execution_resources(pxp);
+}
+
diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.h b/drivers/gpu/drm/xe/xe_pxp_submit.h
new file mode 100644
index 000000000000..1a971fadc081
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_pxp_submit.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright(c) 2024, Intel Corporation. All rights reserved.
+ */
+
+#ifndef __XE_PXP_SUBMIT_H__
+#define __XE_PXP_SUBMIT_H__
+
+#include <linux/types.h>
+
+struct xe_pxp;
+
+int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp);
+void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp);
+
+#endif /* __XE_PXP_SUBMIT_H__ */
diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
index 3a141021972a..3463caaad101 100644
--- a/drivers/gpu/drm/xe/xe_pxp_types.h
+++ b/drivers/gpu/drm/xe/xe_pxp_types.h
@@ -6,10 +6,45 @@
#ifndef __XE_PXP_TYPES_H__
#define __XE_PXP_TYPES_H__
+#include <linux/iosys-map.h>
#include <linux/types.h>
+struct xe_bo;
+struct xe_exec_queue;
struct xe_device;
struct xe_gt;
+struct xe_vm;
+
+/**
+ * struct xe_pxp_gsc_client_resources - resources for GSC submission by a PXP
+ * client. The GSC FW supports multiple GSC clients active at the same time.
+ */
+struct xe_pxp_gsc_client_resources {
+ /**
+ * @host_session_handle: handle used to identify the client in messages
+ * sent to the GSC firmware.
+ */
+ u64 host_session_handle;
+ /** @vm: VM used for PXP submissions to the GSCCS */
+ struct xe_vm *vm;
+ /** @q: GSCCS exec queue for PXP submissions */
+ struct xe_exec_queue *q;
+
+ /**
+ * @bo: BO used for submissions to the GSCCS and GSC FW. It includes
+ * space for the GSCCS batch and the input/output buffers read/written
+ * by the FW
+ */
+ struct xe_bo *bo;
+ /** @inout_size: size of the msg_in and msg_out sections */
+ u32 inout_size;
+ /** @batch: iosys_map to the batch memory within the BO */
+ struct iosys_map batch;
+ /** @msg_in: iosys_map to the input memory within the BO */
+ struct iosys_map msg_in;
+ /** @msg_out: iosys_map to the output memory within the BO */
+ struct iosys_map msg_out;
+};
/**
* struct xe_pxp - pxp state
@@ -23,6 +58,17 @@ struct xe_pxp {
* (VDBOX, KCR and GSC)
*/
struct xe_gt *gt;
+
+ /** @vcs_exec: kernel-owned objects for PXP submissions to the VCS */
+ struct {
+ /** @vcs_exec.q: kernel-owned VCS exec queue used for PXP terminations */
+ struct xe_exec_queue *q;
+ /** @vcs_exec.bo: BO used for submissions to the VCS */
+ struct xe_bo *bo;
+ } vcs_exec;
+
+ /** @gsc_res: kernel-owned objects for PXP submissions to the GSCCS */
+ struct xe_pxp_gsc_client_resources gsc_res;
};
#endif /* __XE_PXP_TYPES_H__ */
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 6dd76f77b504..56f105797ae6 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1381,6 +1381,15 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
struct xe_tile *tile;
u8 id;
+ /*
+ * All GSC VMs are owned by the kernel and can also only be used on
+ * the GSCCS. We don't want a kernel-owned VM to put the device in
+ * either fault or not fault mode, so we need to exclude the GSC VMs
+ * from that count; this is only safe if we ensure that all GSC VMs are
+ * non-faulting.
+ */
+ xe_assert(xe, !((flags & XE_VM_FLAG_GSC) && (flags & XE_VM_FLAG_FAULT_MODE)));
+
vm = kzalloc(sizeof(*vm), GFP_KERNEL);
if (!vm)
return ERR_PTR(-ENOMEM);
@@ -1391,7 +1400,21 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
vm->flags = flags;
- init_rwsem(&vm->lock);
+ /*
+ * GSC VMs are kernel-owned, only used for PXP ops and can be
+ * manipulated under the PXP mutex. However, the PXP mutex can be taken
+ * under a user-VM lock when the PXP session is started at exec_queue
+ * creation time. Those are different VMs and therefore there is no risk
+ * of deadlock, but we need to tell lockdep that this is the case or it
+ * will print a warning.
+ */
+ if (flags & XE_VM_FLAG_GSC) {
+ static struct lock_class_key gsc_vm_key;
+
+ __init_rwsem(&vm->lock, "gsc_vm", &gsc_vm_key);
+ } else {
+ init_rwsem(&vm->lock);
+ }
mutex_init(&vm->snap_mutex);
INIT_LIST_HEAD(&vm->rebind_list);
@@ -1510,7 +1533,7 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
mutex_lock(&xe->usm.lock);
if (flags & XE_VM_FLAG_FAULT_MODE)
xe->usm.num_vm_in_fault_mode++;
- else if (!(flags & XE_VM_FLAG_MIGRATION))
+ else if (!(flags & (XE_VM_FLAG_MIGRATION | XE_VM_FLAG_GSC)))
xe->usm.num_vm_in_non_fault_mode++;
mutex_unlock(&xe->usm.lock);
@@ -2694,11 +2717,10 @@ static void vm_bind_ioctl_ops_fini(struct xe_vm *vm, struct xe_vma_ops *vops,
for (i = 0; i < vops->num_syncs; i++)
xe_sync_entry_signal(vops->syncs + i, fence);
xe_exec_queue_last_fence_set(wait_exec_queue, vm, fence);
- dma_fence_put(fence);
}
-static int vm_bind_ioctl_ops_execute(struct xe_vm *vm,
- struct xe_vma_ops *vops)
+static struct dma_fence *vm_bind_ioctl_ops_execute(struct xe_vm *vm,
+ struct xe_vma_ops *vops)
{
struct drm_exec exec;
struct dma_fence *fence;
@@ -2711,21 +2733,21 @@ static int vm_bind_ioctl_ops_execute(struct xe_vm *vm,
drm_exec_until_all_locked(&exec) {
err = vm_bind_ioctl_ops_lock_and_prep(&exec, vm, vops);
drm_exec_retry_on_contention(&exec);
- if (err)
+ if (err) {
+ fence = ERR_PTR(err);
goto unlock;
+ }
fence = ops_execute(vm, vops);
- if (IS_ERR(fence)) {
- err = PTR_ERR(fence);
+ if (IS_ERR(fence))
goto unlock;
- }
vm_bind_ioctl_ops_fini(vm, vops, fence);
}
unlock:
drm_exec_fini(&exec);
- return err;
+ return fence;
}
#define SUPPORTED_FLAGS_STUB \
@@ -2946,6 +2968,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
struct xe_sync_entry *syncs = NULL;
struct drm_xe_vm_bind_op *bind_ops;
struct xe_vma_ops vops;
+ struct dma_fence *fence;
int err;
int i;
@@ -3108,7 +3131,11 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
if (err)
goto unwind_ops;
- err = vm_bind_ioctl_ops_execute(vm, &vops);
+ fence = vm_bind_ioctl_ops_execute(vm, &vops);
+ if (IS_ERR(fence))
+ err = PTR_ERR(fence);
+ else
+ dma_fence_put(fence);
unwind_ops:
if (err && err != -ENODATA)
@@ -3142,6 +3169,81 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
return err;
}
+/**
+ * xe_vm_bind_bo - bind a kernel BO to a VM
+ * @vm: VM to bind the BO to
+ * @bo: BO to bind
+ * @q: exec queue to use for the bind (optional)
+ * @addr: address at which to bind the BO
+ * @cache_lvl: PAT cache level to use
+ *
+ * Execute a VM bind map operation on a kernel-owned BO to bind it into a
+ * kernel-owned VM.
+ *
+ * Returns a dma_fence to track the binding completion if the job to do so was
+ * successfully submitted, an error pointer otherwise.
+ */
+struct dma_fence *xe_vm_bind_bo(struct xe_vm *vm, struct xe_bo *bo,
+ struct xe_exec_queue *q, u64 addr,
+ enum xe_cache_level cache_lvl)
+{
+ struct xe_vma_ops vops;
+ struct drm_gpuva_ops *ops = NULL;
+ struct dma_fence *fence;
+ int err;
+
+ xe_bo_get(bo);
+ xe_vm_get(vm);
+ if (q)
+ xe_exec_queue_get(q);
+
+ down_write(&vm->lock);
+
+ xe_vma_ops_init(&vops, vm, q, NULL, 0);
+
+ ops = vm_bind_ioctl_ops_create(vm, bo, 0, addr, bo->size,
+ DRM_XE_VM_BIND_OP_MAP, 0, 0,
+ vm->xe->pat.idx[cache_lvl]);
+ if (IS_ERR(ops)) {
+ err = PTR_ERR(ops);
+ goto release_vm_lock;
+ }
+
+ err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
+ if (err)
+ goto release_vm_lock;
+
+ xe_assert(vm->xe, !list_empty(&vops.list));
+
+ err = xe_vma_ops_alloc(&vops, false);
+ if (err)
+ goto unwind_ops;
+
+ fence = vm_bind_ioctl_ops_execute(vm, &vops);
+ if (IS_ERR(fence))
+ err = PTR_ERR(fence);
+
+unwind_ops:
+ if (err && err != -ENODATA)
+ vm_bind_ioctl_ops_unwind(vm, &ops, 1);
+
+ xe_vma_ops_fini(&vops);
+ drm_gpuva_ops_free(&vm->gpuvm, ops);
+
+release_vm_lock:
+ up_write(&vm->lock);
+
+ if (q)
+ xe_exec_queue_put(q);
+ xe_vm_put(vm);
+ xe_bo_put(bo);
+
+ if (err)
+ fence = ERR_PTR(err);
+
+ return fence;
+}
+
/**
* xe_vm_lock() - Lock the vm's dma_resv object
* @vm: The struct xe_vm whose lock is to be locked
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index c864dba35e1d..bfc19e8113c3 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -19,6 +19,8 @@ struct drm_file;
struct ttm_buffer_object;
struct ttm_validate_buffer;
+struct dma_fence;
+
struct xe_exec_queue;
struct xe_file;
struct xe_sync_entry;
@@ -248,6 +250,10 @@ int xe_vm_lock_vma(struct drm_exec *exec, struct xe_vma *vma);
int xe_vm_validate_rebind(struct xe_vm *vm, struct drm_exec *exec,
unsigned int num_fences);
+struct dma_fence *xe_vm_bind_bo(struct xe_vm *vm, struct xe_bo *bo,
+ struct xe_exec_queue *q, u64 addr,
+ enum xe_cache_level cache_lvl);
+
/**
* xe_vm_resv() - Return's the vm's reservation object
* @vm: The vm
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 7f9a303e51d8..52467b9b5348 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -164,6 +164,7 @@ struct xe_vm {
#define XE_VM_FLAG_BANNED BIT(5)
#define XE_VM_FLAG_TILE_ID(flags) FIELD_GET(GENMASK(7, 6), flags)
#define XE_VM_FLAG_SET_TILE_ID(tile) FIELD_PREP(GENMASK(7, 6), (tile)->id)
+#define XE_VM_FLAG_GSC BIT(8)
unsigned long flags;
/** @composite_fence_ctx: context composite fence */
--
2.43.0
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v2 03/12] drm/xe/pxp: Add VCS inline termination support
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
2024-08-16 19:00 ` [PATCH v2 01/12] drm/xe/pxp: Initialize PXP structure and KCR reg Daniele Ceraolo Spurio
2024-08-16 19:00 ` [PATCH v2 02/12] drm/xe/pxp: Allocate PXP execution resources Daniele Ceraolo Spurio
@ 2024-08-16 19:00 ` Daniele Ceraolo Spurio
2024-10-04 22:25 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 04/12] drm/xe/pxp: Add GSC session invalidation support Daniele Ceraolo Spurio
` (17 subsequent siblings)
20 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-16 19:00 UTC (permalink / raw)
To: intel-xe; +Cc: Daniele Ceraolo Spurio
The key termination is done with a specific submission to the VCS
engine.
Note that this patch is meant to be squashed with the follow-up patches
that implement the other pieces of the termination flow. It is separate
for now for ease of review.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
.../gpu/drm/xe/instructions/xe_instr_defs.h | 1 +
.../gpu/drm/xe/instructions/xe_mfx_commands.h | 29 +++++
.../gpu/drm/xe/instructions/xe_mi_commands.h | 5 +
drivers/gpu/drm/xe/xe_lrc.h | 3 +-
drivers/gpu/drm/xe/xe_pxp_submit.c | 108 ++++++++++++++++++
drivers/gpu/drm/xe/xe_pxp_submit.h | 2 +
drivers/gpu/drm/xe/xe_ring_ops.c | 4 +-
7 files changed, 149 insertions(+), 3 deletions(-)
create mode 100644 drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
diff --git a/drivers/gpu/drm/xe/instructions/xe_instr_defs.h b/drivers/gpu/drm/xe/instructions/xe_instr_defs.h
index fd2ce7ace510..e559969468c4 100644
--- a/drivers/gpu/drm/xe/instructions/xe_instr_defs.h
+++ b/drivers/gpu/drm/xe/instructions/xe_instr_defs.h
@@ -16,6 +16,7 @@
#define XE_INSTR_CMD_TYPE GENMASK(31, 29)
#define XE_INSTR_MI REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x0)
#define XE_INSTR_GSC REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x2)
+#define XE_INSTR_VIDEOPIPE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x3)
#define XE_INSTR_GFXPIPE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x3)
#define XE_INSTR_GFX_STATE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x4)
diff --git a/drivers/gpu/drm/xe/instructions/xe_mfx_commands.h b/drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
new file mode 100644
index 000000000000..686ca3b1d9e8
--- /dev/null
+++ b/drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#ifndef _XE_MFX_COMMANDS_H_
+#define _XE_MFX_COMMANDS_H_
+
+#include "instructions/xe_instr_defs.h"
+
+#define MFX_CMD_SUBTYPE REG_GENMASK(28, 27) /* a.k.a. cmd pipe */
+#define MFX_CMD_OPCODE REG_GENMASK(26, 24)
+#define MFX_CMD_SUB_OPCODE REG_GENMASK(23, 16)
+#define MFX_FLAGS_AND_LEN REG_GENMASK(15, 0)
+
+#define XE_MFX_INSTR(subtype, op, sub_op, flags) \
+ (XE_INSTR_VIDEOPIPE | \
+ REG_FIELD_PREP(MFX_CMD_SUBTYPE, subtype) | \
+ REG_FIELD_PREP(MFX_CMD_OPCODE, op) | \
+ REG_FIELD_PREP(MFX_CMD_SUB_OPCODE, sub_op) | \
+ REG_FIELD_PREP(MFX_FLAGS_AND_LEN, flags))
+
+#define MFX_WAIT XE_MFX_INSTR(1, 0, 0, 0)
+#define MFX_WAIT_DW0_PXP_SYNC_CONTROL_FLAG REG_BIT(9)
+#define MFX_WAIT_DW0_MFX_SYNC_CONTROL_FLAG REG_BIT(8)
+
+#define CRYPTO_KEY_EXCHANGE XE_MFX_INSTR(2, 6, 9, 0)
+
+#endif
diff --git a/drivers/gpu/drm/xe/instructions/xe_mi_commands.h b/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
index 10ec2920d31b..167fb0f742de 100644
--- a/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
+++ b/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
@@ -48,6 +48,7 @@
#define MI_LRI_LEN(x) (((x) & 0xff) + 1)
#define MI_FLUSH_DW __MI_INSTR(0x26)
+#define MI_FLUSH_DW_PROTECTED_MEM_EN REG_BIT(22)
#define MI_FLUSH_DW_STORE_INDEX REG_BIT(21)
#define MI_INVALIDATE_TLB REG_BIT(18)
#define MI_FLUSH_DW_CCS REG_BIT(16)
@@ -66,4 +67,8 @@
#define MI_BATCH_BUFFER_START __MI_INSTR(0x31)
+#define MI_SET_APPID __MI_INSTR(0x0e)
+#define MI_SET_APPID_SESSION_ID_MASK REG_GENMASK(6, 0)
+#define MI_SET_APPID_SESSION_ID(x) REG_FIELD_PREP(MI_SET_APPID_SESSION_ID_MASK, x)
+
#endif
diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
index c24542e89318..d411c3fbcbc6 100644
--- a/drivers/gpu/drm/xe/xe_lrc.h
+++ b/drivers/gpu/drm/xe/xe_lrc.h
@@ -20,7 +20,8 @@ struct xe_lrc;
struct xe_lrc_snapshot;
struct xe_vm;
-#define LRC_PPHWSP_SCRATCH_ADDR (0x34 * 4)
+#define LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR (0x34 * 4)
+#define LRC_PPHWSP_PXP_INVAL_SCRATCH_ADDR (0x40 * 4)
struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
u32 ring_size);
diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
index b777b0765c8a..3b69dcc0a00f 100644
--- a/drivers/gpu/drm/xe/xe_pxp_submit.c
+++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
@@ -6,14 +6,20 @@
#include "xe_pxp_submit.h"
#include <drm/xe_drm.h>
+#include <linux/delay.h>
#include "xe_device_types.h"
+#include "xe_bb.h"
#include "xe_bo.h"
#include "xe_exec_queue.h"
#include "xe_gsc_submit.h"
#include "xe_gt.h"
+#include "xe_lrc.h"
#include "xe_pxp_types.h"
+#include "xe_sched_job.h"
#include "xe_vm.h"
+#include "instructions/xe_mfx_commands.h"
+#include "instructions/xe_mi_commands.h"
#include "regs/xe_gt_regs.h"
/*
@@ -199,3 +205,105 @@ void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp)
destroy_vcs_execution_resources(pxp);
}
+#define emit_cmd(xe_, map_, offset_, val_) \
+ xe_map_wr(xe_, map_, (offset_) * sizeof(u32), u32, val_)
+
+/* stall until prior PXP and MFX/HCP/HUC objects are completed */
+#define MFX_WAIT_PXP (MFX_WAIT | \
+ MFX_WAIT_DW0_PXP_SYNC_CONTROL_FLAG | \
+ MFX_WAIT_DW0_MFX_SYNC_CONTROL_FLAG)
+static u32 pxp_emit_wait(struct xe_device *xe, struct iosys_map *batch, u32 offset)
+{
+ /* wait for cmds to go through */
+ emit_cmd(xe, batch, offset++, MFX_WAIT_PXP);
+ emit_cmd(xe, batch, offset++, 0);
+
+ return offset;
+}
+
+static u32 pxp_emit_session_selection(struct xe_device *xe, struct iosys_map *batch,
+ u32 offset, u32 idx)
+{
+ offset = pxp_emit_wait(xe, batch, offset);
+
+ /* pxp off */
+ emit_cmd(xe, batch, offset++, MI_FLUSH_DW | MI_FLUSH_IMM_DW);
+ emit_cmd(xe, batch, offset++, 0);
+ emit_cmd(xe, batch, offset++, 0);
+ emit_cmd(xe, batch, offset++, 0);
+
+ /* select session */
+ emit_cmd(xe, batch, offset++, MI_SET_APPID | MI_SET_APPID_SESSION_ID(idx));
+ emit_cmd(xe, batch, offset++, MFX_WAIT_PXP);
+
+ /* pxp on */
+ emit_cmd(xe, batch, offset++, MI_FLUSH_DW |
+ MI_FLUSH_DW_PROTECTED_MEM_EN |
+ MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX |
+ MI_FLUSH_IMM_DW);
+ emit_cmd(xe, batch, offset++, LRC_PPHWSP_PXP_INVAL_SCRATCH_ADDR |
+ MI_FLUSH_DW_USE_GTT);
+ emit_cmd(xe, batch, offset++, 0);
+ emit_cmd(xe, batch, offset++, 0);
+
+ offset = pxp_emit_wait(xe, batch, offset);
+
+ return offset;
+}
+
+static u32 pxp_emit_inline_termination(struct xe_device *xe,
+ struct iosys_map *batch, u32 offset)
+{
+ /* session inline termination */
+ emit_cmd(xe, batch, offset++, CRYPTO_KEY_EXCHANGE);
+ emit_cmd(xe, batch, offset++, 0);
+
+ return offset;
+}
+
+static u32 pxp_emit_session_termination(struct xe_device *xe, struct iosys_map *batch,
+ u32 offset, u32 idx)
+{
+ offset = pxp_emit_session_selection(xe, batch, offset, idx);
+ offset = pxp_emit_inline_termination(xe, batch, offset);
+
+ return offset;
+}
+
+/**
+ * xe_pxp_submit_session_termination - submits a PXP inline termination
+ * @pxp: the xe_pxp structure
+ * @id: the session to terminate
+ *
+ * Emit an inline termination via the VCS engine to terminate a session.
+ *
+ * Returns 0 if the submission is successful, an errno value otherwise.
+ */
+int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id)
+{
+ struct xe_sched_job *job;
+ struct dma_fence *fence;
+ long timeout;
+ u32 offset = 0;
+ u64 addr = xe_bo_ggtt_addr(pxp->vcs_exec.bo);
+
+ offset = pxp_emit_session_termination(pxp->xe, &pxp->vcs_exec.bo->vmap, offset, id);
+ offset = pxp_emit_wait(pxp->xe, &pxp->vcs_exec.bo->vmap, offset);
+ emit_cmd(pxp->xe, &pxp->vcs_exec.bo->vmap, offset, MI_BATCH_BUFFER_END);
+
+ job = xe_sched_job_create(pxp->vcs_exec.q, &addr);
+ if (IS_ERR(job))
+ return PTR_ERR(job);
+
+ xe_sched_job_arm(job);
+ fence = dma_fence_get(&job->drm.s_fence->finished);
+ xe_sched_job_push(job);
+
+ timeout = dma_fence_wait_timeout(fence, false, HZ);
+
+ dma_fence_put(fence);
+ if (timeout <= 0)
+ return -EAGAIN;
+
+ return 0;
+}
diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.h b/drivers/gpu/drm/xe/xe_pxp_submit.h
index 1a971fadc081..4ee8c0acfed9 100644
--- a/drivers/gpu/drm/xe/xe_pxp_submit.h
+++ b/drivers/gpu/drm/xe/xe_pxp_submit.h
@@ -13,4 +13,6 @@ struct xe_pxp;
int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp);
void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp);
+int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id);
+
#endif /* __XE_PXP_SUBMIT_H__ */
diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
index 0be4f489d3e1..a4b5a0f68a32 100644
--- a/drivers/gpu/drm/xe/xe_ring_ops.c
+++ b/drivers/gpu/drm/xe/xe_ring_ops.c
@@ -118,7 +118,7 @@ static int emit_flush_invalidate(u32 flag, u32 *dw, int i)
dw[i++] |= MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_IMM_DW |
MI_FLUSH_DW_STORE_INDEX;
- dw[i++] = LRC_PPHWSP_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT;
+ dw[i++] = LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT;
dw[i++] = 0;
dw[i++] = ~0U;
@@ -156,7 +156,7 @@ static int emit_pipe_invalidate(u32 mask_flags, bool invalidate_tlb, u32 *dw,
flags &= ~mask_flags;
- return emit_pipe_control(dw, i, 0, flags, LRC_PPHWSP_SCRATCH_ADDR, 0);
+ return emit_pipe_control(dw, i, 0, flags, LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR, 0);
}
static int emit_store_imm_ppgtt_posted(u64 addr, u64 value,
--
2.43.0
* [PATCH v2 04/12] drm/xe/pxp: Add GSC session invalidation support
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (2 preceding siblings ...)
2024-08-16 19:00 ` [PATCH v2 03/12] drm/xe/pxp: Add VCS inline termination support Daniele Ceraolo Spurio
@ 2024-08-16 19:00 ` Daniele Ceraolo Spurio
2024-10-07 20:05 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 05/12] drm/xe/pxp: Handle the PXP termination interrupt Daniele Ceraolo Spurio
` (16 subsequent siblings)
20 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-16 19:00 UTC (permalink / raw)
To: intel-xe; +Cc: Daniele Ceraolo Spurio
After a session is terminated, we need to inform the GSC so that it can
clean up its side of the allocation. This is done by sending an
invalidation command with the session ID.
Note that this patch is meant to be squashed with the follow-up patches
that implement the other pieces of the termination flow. It is separate
for now for ease of review.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h | 12 +
drivers/gpu/drm/xe/xe_pxp_submit.c | 215 ++++++++++++++++++
drivers/gpu/drm/xe/xe_pxp_submit.h | 3 +
3 files changed, 230 insertions(+)
diff --git a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
index f3c4cf10ba20..4a59c564a0d0 100644
--- a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
+++ b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
@@ -49,6 +49,7 @@ struct pxp_cmd_header {
u32 buffer_len;
} __packed;
+#define PXP43_CMDID_INVALIDATE_STREAM_KEY 0x00000007
#define PXP43_CMDID_NEW_HUC_AUTH 0x0000003F /* MTL+ */
/* PXP-Input-Packet: HUC Auth-only */
@@ -63,4 +64,15 @@ struct pxp43_huc_auth_out {
struct pxp_cmd_header header;
} __packed;
+/* PXP-Input-Packet: Invalidate Stream Key */
+struct pxp43_inv_stream_key_in {
+ struct pxp_cmd_header header;
+ u32 rsvd[3];
+} __packed;
+
+/* PXP-Output-Packet: Invalidate Stream Key */
+struct pxp43_inv_stream_key_out {
+ struct pxp_cmd_header header;
+ u32 rsvd;
+} __packed;
#endif
diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
index 3b69dcc0a00f..41684d666376 100644
--- a/drivers/gpu/drm/xe/xe_pxp_submit.c
+++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
@@ -15,9 +15,13 @@
#include "xe_gsc_submit.h"
#include "xe_gt.h"
#include "xe_lrc.h"
+#include "xe_map.h"
#include "xe_pxp_types.h"
#include "xe_sched_job.h"
#include "xe_vm.h"
+#include "abi/gsc_command_header_abi.h"
+#include "abi/gsc_pxp_commands_abi.h"
+#include "instructions/xe_gsc_commands.h"
#include "instructions/xe_mfx_commands.h"
#include "instructions/xe_mi_commands.h"
#include "regs/xe_gt_regs.h"
@@ -307,3 +311,214 @@ int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id)
return 0;
}
+
+static bool
+is_fw_err_platform_config(u32 type)
+{
+ switch (type) {
+ case PXP_STATUS_ERROR_API_VERSION:
+ case PXP_STATUS_PLATFCONFIG_KF1_NOVERIF:
+ case PXP_STATUS_PLATFCONFIG_KF1_BAD:
+ return true;
+ default:
+ break;
+ }
+ return false;
+}
+
+static const char *
+fw_err_to_string(u32 type)
+{
+ switch (type) {
+ case PXP_STATUS_ERROR_API_VERSION:
+ return "ERR_API_VERSION";
+ case PXP_STATUS_NOT_READY:
+ return "ERR_NOT_READY";
+ case PXP_STATUS_PLATFCONFIG_KF1_NOVERIF:
+ case PXP_STATUS_PLATFCONFIG_KF1_BAD:
+ return "ERR_PLATFORM_CONFIG";
+ default:
+ break;
+ }
+ return NULL;
+}
+
+static int pxp_pkt_submit(struct xe_exec_queue *q, u64 batch_addr)
+{
+ struct xe_gt *gt = q->gt;
+ struct xe_device *xe = gt_to_xe(gt);
+ struct xe_sched_job *job;
+ struct dma_fence *fence;
+ long timeout;
+
+ xe_assert(xe, q->hwe->engine_id == XE_HW_ENGINE_GSCCS0);
+
+ job = xe_sched_job_create(q, &batch_addr);
+ if (IS_ERR(job))
+ return PTR_ERR(job);
+
+ xe_sched_job_arm(job);
+ fence = dma_fence_get(&job->drm.s_fence->finished);
+ xe_sched_job_push(job);
+
+ timeout = dma_fence_wait_timeout(fence, false, HZ);
+ dma_fence_put(fence);
+ if (timeout < 0)
+ return timeout;
+ else if (!timeout)
+ return -ETIME;
+
+ return 0;
+}
+
+static void emit_pxp_heci_cmd(struct xe_device *xe, struct iosys_map *batch,
+ u64 addr_in, u32 size_in, u64 addr_out, u32 size_out)
+{
+ u32 len = 0;
+
+ xe_map_wr(xe, batch, len++ * sizeof(u32), u32, GSC_HECI_CMD_PKT);
+ xe_map_wr(xe, batch, len++ * sizeof(u32), u32, lower_32_bits(addr_in));
+ xe_map_wr(xe, batch, len++ * sizeof(u32), u32, upper_32_bits(addr_in));
+ xe_map_wr(xe, batch, len++ * sizeof(u32), u32, size_in);
+ xe_map_wr(xe, batch, len++ * sizeof(u32), u32, lower_32_bits(addr_out));
+ xe_map_wr(xe, batch, len++ * sizeof(u32), u32, upper_32_bits(addr_out));
+ xe_map_wr(xe, batch, len++ * sizeof(u32), u32, size_out);
+ xe_map_wr(xe, batch, len++ * sizeof(u32), u32, 0);
+ xe_map_wr(xe, batch, len++ * sizeof(u32), u32, MI_BATCH_BUFFER_END);
+}
+
+#define GSC_PENDING_RETRY_MAXCOUNT 40
+#define GSC_PENDING_RETRY_PAUSE_MS 50
+static int gsccs_send_message(struct xe_pxp_gsc_client_resources *gsc_res,
+ void *msg_in, size_t msg_in_size,
+ void *msg_out, size_t msg_out_size_max)
+{
+ struct xe_device *xe = gsc_res->vm->xe;
+ const size_t max_msg_size = gsc_res->inout_size - sizeof(struct intel_gsc_mtl_header);
+ u32 wr_offset = 0;
+ u32 rd_offset = 0;
+ u32 reply_size;
+ u32 min_reply_size = 0;
+ int ret = 0;
+ int retry = GSC_PENDING_RETRY_MAXCOUNT;
+
+ if (msg_in_size > max_msg_size || msg_out_size_max > max_msg_size)
+ return -ENOSPC;
+
+ wr_offset = xe_gsc_emit_header(xe, &gsc_res->msg_in, 0,
+ HECI_MEADDRESS_PXP,
+ gsc_res->host_session_handle,
+ msg_in_size);
+
+ /* NOTE: zero-size packets are used for session cleanup */
+ if (msg_in && msg_in_size) {
+ xe_map_memcpy_to(xe, &gsc_res->msg_in, wr_offset,
+ msg_in, msg_in_size);
+ min_reply_size = sizeof(struct pxp_cmd_header);
+ }
+
+ /* Make sure the reply header does not contain stale data */
+ xe_gsc_poison_header(xe, &gsc_res->msg_out, 0);
+
+ emit_pxp_heci_cmd(xe, &gsc_res->batch, PXP_BB_SIZE,
+ wr_offset + msg_in_size, PXP_BB_SIZE + gsc_res->inout_size,
+ msg_out_size_max + wr_offset);
+
+ xe_device_wmb(xe);
+
+ do {
+ ret = pxp_pkt_submit(gsc_res->q, 0);
+ if (ret)
+ break;
+
+ if (xe_gsc_check_and_update_pending(xe, &gsc_res->msg_in, 0,
+ &gsc_res->msg_out, 0)) {
+ ret = -EAGAIN;
+ msleep(GSC_PENDING_RETRY_PAUSE_MS);
+ }
+ } while (--retry && ret == -EAGAIN);
+
+ if (ret) {
+ drm_err(&xe->drm, "failed to submit GSC PXP message: %d\n", ret);
+ return ret;
+ }
+
+ ret = xe_gsc_read_out_header(xe, &gsc_res->msg_out, 0,
+ min_reply_size, &rd_offset);
+ if (ret) {
+ drm_err(&xe->drm, "invalid GSC reply for PXP (err=%d)\n", ret);
+ return ret;
+ }
+
+ if (msg_out && min_reply_size) {
+ reply_size = xe_map_rd_field(xe, &gsc_res->msg_out, rd_offset,
+ struct pxp_cmd_header, buffer_len);
+ reply_size += sizeof(struct pxp_cmd_header);
+
+ if (reply_size > msg_out_size_max) {
+ drm_warn(&xe->drm, "caller with insufficient PXP reply size %u (%zu)\n",
+ reply_size, msg_out_size_max);
+ reply_size = msg_out_size_max;
+ }
+
+ xe_map_memcpy_from(xe, msg_out, &gsc_res->msg_out,
+ rd_offset, reply_size);
+ }
+
+ xe_gsc_poison_header(xe, &gsc_res->msg_in, 0);
+
+ return ret;
+}
+
+/**
+ * xe_pxp_submit_session_invalidation - submits a PXP GSC invalidation
+ * @gsc_res: the pxp client resources
+ * @id: the session to invalidate
+ *
+ * Submit a message to the GSC FW to notify it that a session has been
+ * terminated and is therefore invalid.
+ *
+ * Returns 0 if the submission is successful, an errno value otherwise.
+ */
+int xe_pxp_submit_session_invalidation(struct xe_pxp_gsc_client_resources *gsc_res,
+ u32 id)
+{
+ struct xe_device *xe = gsc_res->vm->xe;
+ struct pxp43_inv_stream_key_in msg_in = {0};
+ struct pxp43_inv_stream_key_out msg_out = {0};
+ int ret = 0;
+
+ /*
+ * Stream key invalidation reuses the same version 4.2 input/output
+ * command format but firmware requires 4.3 API interaction
+ */
+ msg_in.header.api_version = PXP_APIVER(4, 3);
+ msg_in.header.command_id = PXP43_CMDID_INVALIDATE_STREAM_KEY;
+ msg_in.header.buffer_len = sizeof(msg_in) - sizeof(msg_in.header);
+
+ msg_in.header.stream_id = FIELD_PREP(PXP_CMDHDR_EXTDATA_SESSION_VALID, 1);
+ msg_in.header.stream_id |= FIELD_PREP(PXP_CMDHDR_EXTDATA_APP_TYPE, 0);
+ msg_in.header.stream_id |= FIELD_PREP(PXP_CMDHDR_EXTDATA_SESSION_ID, id);
+
+ ret = gsccs_send_message(gsc_res, &msg_in, sizeof(msg_in),
+ &msg_out, sizeof(msg_out));
+ if (ret) {
+ drm_err(&xe->drm, "Failed to inv-stream-key-%u, ret=[%d]\n",
+ id, ret);
+ } else if (msg_out.header.status != 0) {
+ if (is_fw_err_platform_config(msg_out.header.status)) {
+ drm_info_once(&xe->drm,
+ "PXP inv-stream-key-%u failed due to BIOS/SOC :0x%08x:%s\n",
+ id, msg_out.header.status,
+ fw_err_to_string(msg_out.header.status));
+ } else {
+ drm_dbg(&xe->drm, "PXP inv-stream-key-%u failed 0x%08x:%s:\n",
+ id, msg_out.header.status,
+ fw_err_to_string(msg_out.header.status));
+ drm_dbg(&xe->drm, " cmd-detail: ID=[0x%08x],API-Ver-[0x%08x]\n",
+ msg_in.header.command_id, msg_in.header.api_version);
+ }
+ }
+
+ return ret;
+}
diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.h b/drivers/gpu/drm/xe/xe_pxp_submit.h
index 4ee8c0acfed9..48fdc9b09116 100644
--- a/drivers/gpu/drm/xe/xe_pxp_submit.h
+++ b/drivers/gpu/drm/xe/xe_pxp_submit.h
@@ -9,10 +9,13 @@
#include <linux/types.h>
struct xe_pxp;
+struct xe_pxp_gsc_client_resources;
int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp);
void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp);
int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id);
+int xe_pxp_submit_session_invalidation(struct xe_pxp_gsc_client_resources *gsc_res,
+ u32 id);
#endif /* __XE_PXP_SUBMIT_H__ */
--
2.43.0
* [PATCH v2 05/12] drm/xe/pxp: Handle the PXP termination interrupt
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (3 preceding siblings ...)
2024-08-16 19:00 ` [PATCH v2 04/12] drm/xe/pxp: Add GSC session invalidation support Daniele Ceraolo Spurio
@ 2024-08-16 19:00 ` Daniele Ceraolo Spurio
2024-10-08 0:34 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 06/12] drm/xe/pxp: Add GSC session initialization support Daniele Ceraolo Spurio
` (15 subsequent siblings)
20 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-16 19:00 UTC (permalink / raw)
To: intel-xe; +Cc: Daniele Ceraolo Spurio
When something happens to the session, the HW generates a termination
interrupt. In reply to this, the driver is required to submit an inline
session termination via the VCS, trigger the global termination and
notify the GSC FW that the session is now invalid.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
drivers/gpu/drm/xe/regs/xe_gt_regs.h | 8 ++
drivers/gpu/drm/xe/regs/xe_pxp_regs.h | 6 ++
drivers/gpu/drm/xe/xe_irq.c | 20 +++-
drivers/gpu/drm/xe/xe_pxp.c | 138 +++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_pxp.h | 3 +
drivers/gpu/drm/xe/xe_pxp_types.h | 13 +++
6 files changed, 184 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/regs/xe_gt_regs.h b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
index 0d1a4a9f4e11..9e9c20f1f1f4 100644
--- a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+++ b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
@@ -570,6 +570,7 @@
#define ENGINE1_MASK REG_GENMASK(31, 16)
#define ENGINE0_MASK REG_GENMASK(15, 0)
#define GPM_WGBOXPERF_INTR_ENABLE XE_REG(0x19003c, XE_REG_OPTION_VF)
+#define CRYPTO_RSVD_INTR_ENABLE XE_REG(0x190040)
#define GUNIT_GSC_INTR_ENABLE XE_REG(0x190044, XE_REG_OPTION_VF)
#define CCS_RSVD_INTR_ENABLE XE_REG(0x190048, XE_REG_OPTION_VF)
@@ -580,6 +581,7 @@
#define INTR_ENGINE_INTR(x) REG_FIELD_GET(GENMASK(15, 0), x)
#define OTHER_GUC_INSTANCE 0
#define OTHER_GSC_HECI2_INSTANCE 3
+#define OTHER_KCR_INSTANCE 4
#define OTHER_GSC_INSTANCE 6
#define IIR_REG_SELECTOR(x) XE_REG(0x190070 + ((x) * 4), XE_REG_OPTION_VF)
@@ -591,6 +593,7 @@
#define HECI2_RSVD_INTR_MASK XE_REG(0x1900e4)
#define GUC_SG_INTR_MASK XE_REG(0x1900e8, XE_REG_OPTION_VF)
#define GPM_WGBOXPERF_INTR_MASK XE_REG(0x1900ec, XE_REG_OPTION_VF)
+#define CRYPTO_RSVD_INTR_MASK XE_REG(0x1900f0)
#define GUNIT_GSC_INTR_MASK XE_REG(0x1900f4, XE_REG_OPTION_VF)
#define CCS0_CCS1_INTR_MASK XE_REG(0x190100)
#define CCS2_CCS3_INTR_MASK XE_REG(0x190104)
@@ -605,4 +608,9 @@
#define GT_CS_MASTER_ERROR_INTERRUPT REG_BIT(3)
#define GT_RENDER_USER_INTERRUPT REG_BIT(0)
+/* irqs for OTHER_KCR_INSTANCE */
+#define KCR_PXP_STATE_TERMINATED_INTERRUPT REG_BIT(1)
+#define KCR_APP_TERMINATED_PER_FW_REQ_INTERRUPT REG_BIT(2)
+#define KCR_PXP_STATE_RESET_COMPLETE_INTERRUPT REG_BIT(3)
+
#endif
diff --git a/drivers/gpu/drm/xe/regs/xe_pxp_regs.h b/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
index d67cf210d23d..aa158938b42e 100644
--- a/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
+++ b/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
@@ -14,4 +14,10 @@
#define KCR_INIT XE_REG(0x3860f0)
#define KCR_INIT_ALLOW_DISPLAY_ME_WRITES REG_BIT(14)
+/* KCR hwdrm session in play status 0-31 */
+#define KCR_SIP XE_REG(0x386260)
+
+/* PXP global terminate register for session termination */
+#define KCR_GLOBAL_TERMINATE XE_REG(0x3860f8)
+
#endif /* __XE_PXP_REGS_H__ */
diff --git a/drivers/gpu/drm/xe/xe_irq.c b/drivers/gpu/drm/xe/xe_irq.c
index 5f2c368c35ad..f11d9a740627 100644
--- a/drivers/gpu/drm/xe/xe_irq.c
+++ b/drivers/gpu/drm/xe/xe_irq.c
@@ -20,6 +20,7 @@
#include "xe_hw_engine.h"
#include "xe_memirq.h"
#include "xe_mmio.h"
+#include "xe_pxp.h"
#include "xe_sriov.h"
/*
@@ -202,6 +203,15 @@ void xe_irq_enable_hwe(struct xe_gt *gt)
}
if (heci_mask)
xe_mmio_write32(gt, HECI2_RSVD_INTR_MASK, ~(heci_mask << 16));
+
+ if (xe_pxp_is_supported(xe)) {
+ u32 kcr_mask = KCR_PXP_STATE_TERMINATED_INTERRUPT |
+ KCR_APP_TERMINATED_PER_FW_REQ_INTERRUPT |
+ KCR_PXP_STATE_RESET_COMPLETE_INTERRUPT;
+
+ xe_mmio_write32(gt, CRYPTO_RSVD_INTR_ENABLE, kcr_mask << 16);
+ xe_mmio_write32(gt, CRYPTO_RSVD_INTR_MASK, ~(kcr_mask << 16));
+ }
}
}
@@ -324,9 +334,15 @@ static void gt_irq_handler(struct xe_tile *tile,
}
if (class == XE_ENGINE_CLASS_OTHER) {
- /* HECI GSCFI interrupts come from outside of GT */
+ /*
+ * HECI GSCFI interrupts come from outside of GT.
+ * KCR irqs come from inside GT but are handled
+ * by the global PXP subsystem.
+ */
if (HAS_HECI_GSCFI(xe) && instance == OTHER_GSC_INSTANCE)
xe_heci_gsc_irq_handler(xe, intr_vec);
+ else if (instance == OTHER_KCR_INSTANCE)
+ xe_pxp_irq_handler(xe, intr_vec);
else
gt_other_irq_handler(engine_gt, instance, intr_vec);
}
@@ -512,6 +528,8 @@ static void gt_irq_reset(struct xe_tile *tile)
xe_mmio_write32(mmio, GUNIT_GSC_INTR_ENABLE, 0);
xe_mmio_write32(mmio, GUNIT_GSC_INTR_MASK, ~0);
xe_mmio_write32(mmio, HECI2_RSVD_INTR_MASK, ~0);
+ xe_mmio_write32(mmio, CRYPTO_RSVD_INTR_ENABLE, 0);
+ xe_mmio_write32(mmio, CRYPTO_RSVD_INTR_MASK, ~0);
}
xe_mmio_write32(mmio, GPM_WGBOXPERF_INTR_ENABLE, 0);
diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
index 56bb7d927c07..382eb0cb0018 100644
--- a/drivers/gpu/drm/xe/xe_pxp.c
+++ b/drivers/gpu/drm/xe/xe_pxp.c
@@ -12,9 +12,11 @@
#include "xe_gt.h"
#include "xe_gt_types.h"
#include "xe_mmio.h"
+#include "xe_pm.h"
#include "xe_pxp_submit.h"
#include "xe_pxp_types.h"
#include "xe_uc_fw.h"
+#include "regs/xe_gt_regs.h"
#include "regs/xe_pxp_regs.h"
/**
@@ -25,11 +27,133 @@
* integrated parts.
*/
-static bool pxp_is_supported(const struct xe_device *xe)
+#define ARB_SESSION 0xF /* TODO: move to UAPI */
+
+bool xe_pxp_is_supported(const struct xe_device *xe)
{
return xe->info.has_pxp && IS_ENABLED(CONFIG_INTEL_MEI_GSC_PROXY);
}
+static bool pxp_is_enabled(const struct xe_pxp *pxp)
+{
+ return pxp;
+}
+
+static int pxp_wait_for_session_state(struct xe_pxp *pxp, u32 id, bool in_play)
+{
+ struct xe_gt *gt = pxp->gt;
+ u32 mask = BIT(id);
+ int ret;
+
+ ret = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ if (ret)
+ return ret;
+
+ ret = xe_mmio_wait32(gt, KCR_SIP, mask, in_play ? mask : 0,
+ 250, NULL, false);
+ xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
+
+ return ret;
+}
+
+static void pxp_terminate(struct xe_pxp *pxp)
+{
+ int ret = 0;
+ struct xe_device *xe = pxp->xe;
+ struct xe_gt *gt = pxp->gt;
+
+ drm_dbg(&xe->drm, "Terminating PXP\n");
+
+ /* terminate the hw session */
+ ret = xe_pxp_submit_session_termination(pxp, ARB_SESSION);
+ if (ret)
+ goto out;
+
+ ret = pxp_wait_for_session_state(pxp, ARB_SESSION, false);
+ if (ret)
+ goto out;
+
+ /* Trigger full HW cleanup */
+ XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));
+ xe_mmio_write32(gt, KCR_GLOBAL_TERMINATE, 1);
+ xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
+
+ /* now we can tell the GSC to clean up its own state */
+ ret = xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
+
+out:
+ if (ret)
+ drm_err(&xe->drm, "PXP termination failed: %pe\n", ERR_PTR(ret));
+}
+
+static void pxp_terminate_complete(struct xe_pxp *pxp)
+{
+ /* TODO mark the session as ready to start */
+}
+
+static void pxp_irq_work(struct work_struct *work)
+{
+ struct xe_pxp *pxp = container_of(work, typeof(*pxp), irq.work);
+ struct xe_device *xe = pxp->xe;
+ u32 events = 0;
+
+ spin_lock_irq(&xe->irq.lock);
+ events = pxp->irq.events;
+ pxp->irq.events = 0;
+ spin_unlock_irq(&xe->irq.lock);
+
+ if (!events)
+ return;
+
+ /*
+ * If we're processing a termination irq while suspending then don't
+ * bother, we're going to re-init everything on resume anyway.
+ */
+ if ((events & PXP_TERMINATION_REQUEST) && !xe_pm_runtime_get_if_active(xe))
+ return;
+
+ if (events & PXP_TERMINATION_REQUEST) {
+ events &= ~PXP_TERMINATION_COMPLETE;
+ pxp_terminate(pxp);
+ }
+
+ if (events & PXP_TERMINATION_COMPLETE)
+ pxp_terminate_complete(pxp);
+
+ if (events & PXP_TERMINATION_REQUEST)
+ xe_pm_runtime_put(xe);
+}
+
+/**
+ * xe_pxp_irq_handler - Handles PXP interrupts.
+ * @xe: pointer to the xe_device structure
+ * @iir: interrupt vector
+ */
+void xe_pxp_irq_handler(struct xe_device *xe, u16 iir)
+{
+ struct xe_pxp *pxp = xe->pxp;
+
+ if (!pxp_is_enabled(pxp)) {
+ drm_err(&xe->drm, "PXP irq 0x%x received with PXP disabled!\n", iir);
+ return;
+ }
+
+ lockdep_assert_held(&xe->irq.lock);
+
+ if (unlikely(!iir))
+ return;
+
+ if (iir & (KCR_PXP_STATE_TERMINATED_INTERRUPT |
+ KCR_APP_TERMINATED_PER_FW_REQ_INTERRUPT))
+ pxp->irq.events |= PXP_TERMINATION_REQUEST;
+
+ if (iir & KCR_PXP_STATE_RESET_COMPLETE_INTERRUPT)
+ pxp->irq.events |= PXP_TERMINATION_COMPLETE;
+
+ if (pxp->irq.events)
+ queue_work(pxp->irq.wq, &pxp->irq.work);
+}
+
static int kcr_pxp_set_status(const struct xe_pxp *pxp, bool enable)
{
u32 val = enable ? _MASKED_BIT_ENABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES) :
@@ -60,6 +184,7 @@ static void pxp_fini(void *arg)
{
struct xe_pxp *pxp = arg;
+ destroy_workqueue(pxp->irq.wq);
xe_pxp_destroy_execution_resources(pxp);
/* no need to explicitly disable KCR since we're going to do an FLR */
@@ -83,7 +208,7 @@ int xe_pxp_init(struct xe_device *xe)
struct xe_pxp *pxp;
int err;
- if (!pxp_is_supported(xe))
+ if (!xe_pxp_is_supported(xe))
return -EOPNOTSUPP;
/* we only support PXP on single tile devices with a media GT */
@@ -105,12 +230,17 @@ int xe_pxp_init(struct xe_device *xe)
if (!pxp)
return -ENOMEM;
+ INIT_WORK(&pxp->irq.work, pxp_irq_work);
pxp->xe = xe;
pxp->gt = gt;
+ pxp->irq.wq = alloc_ordered_workqueue("pxp-wq", 0);
+ if (!pxp->irq.wq)
+ return -ENOMEM;
+
err = kcr_pxp_enable(pxp);
if (err)
- return err;
+ goto out_wq;
err = xe_pxp_allocate_execution_resources(pxp);
if (err)
@@ -122,5 +252,7 @@ int xe_pxp_init(struct xe_device *xe)
kcr_disable:
kcr_pxp_disable(pxp);
+out_wq:
+ destroy_workqueue(pxp->irq.wq);
return err;
}
diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
index 79c951667f13..81bafe2714ff 100644
--- a/drivers/gpu/drm/xe/xe_pxp.h
+++ b/drivers/gpu/drm/xe/xe_pxp.h
@@ -10,6 +10,9 @@
struct xe_device;
+bool xe_pxp_is_supported(const struct xe_device *xe);
+
int xe_pxp_init(struct xe_device *xe);
+void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
#endif /* __XE_PXP_H__ */
diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
index 3463caaad101..d5cf8faed7be 100644
--- a/drivers/gpu/drm/xe/xe_pxp_types.h
+++ b/drivers/gpu/drm/xe/xe_pxp_types.h
@@ -8,6 +8,7 @@
#include <linux/iosys-map.h>
#include <linux/types.h>
+#include <linux/workqueue.h>
struct xe_bo;
struct xe_exec_queue;
@@ -69,6 +70,18 @@ struct xe_pxp {
/** @gsc_exec: kernel-owned objects for PXP submissions to the GSCCS */
struct xe_pxp_gsc_client_resources gsc_res;
+
+ /** @irq: wrapper for the worker and queue used for PXP irq support */
+ struct {
+ /** @irq.work: worker that manages irq events. */
+ struct work_struct work;
+ /** @irq.wq: workqueue on which to queue the irq work. */
+ struct workqueue_struct *wq;
+ /** @irq.events: pending events, protected with xe->irq.lock. */
+ u32 events;
+#define PXP_TERMINATION_REQUEST BIT(0)
+#define PXP_TERMINATION_COMPLETE BIT(1)
+ } irq;
};
#endif /* __XE_PXP_TYPES_H__ */
--
2.43.0
* [PATCH v2 06/12] drm/xe/pxp: Add GSC session initialization support
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (4 preceding siblings ...)
2024-08-16 19:00 ` [PATCH v2 05/12] drm/xe/pxp: Handle the PXP termination interrupt Daniele Ceraolo Spurio
@ 2024-08-16 19:00 ` Daniele Ceraolo Spurio
2024-10-08 18:43 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 07/12] drm/xe/pxp: Add support for PXP-using queues Daniele Ceraolo Spurio
` (14 subsequent siblings)
20 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-16 19:00 UTC (permalink / raw)
To: intel-xe; +Cc: Daniele Ceraolo Spurio
A session is initialized (i.e. started) by sending a message to the GSC.
Note that this patch is meant to be squashed with the follow-up patches
that implement the other pieces of the session initialization and queue
setup flow. It is separate for now for ease of review.
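For reference, the stream_id packing used by this message can be sketched as a
stand-alone C snippet. This is illustrative only, not the kernel code: the bit
positions mirror the PXP43_INIT_SESSION_* defines added by this patch, and the
helper name is made up for the example.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: bit layout of header.stream_id for the version 4.3
 * Init PXP session message, mirroring the PXP43_INIT_SESSION_* defines.
 */
#define PXP43_INIT_SESSION_VALID        (1u << 0)  /* BIT(0) */
#define PXP43_INIT_SESSION_APPTYPE      (1u << 1)  /* BIT(1) */
#define PXP43_INIT_SESSION_APPID_SHIFT  2          /* GENMASK(17, 2) */
#define PXP43_INIT_SESSION_APPID_WIDTH  16

/* made-up helper: pack a session id the way the patch does with FIELD_PREP */
static inline uint32_t pack_init_session_stream_id(uint32_t session_id)
{
	uint32_t appid_mask = (1u << PXP43_INIT_SESSION_APPID_WIDTH) - 1;

	/* session id goes in APPID, VALID is set, APPTYPE is left at 0 */
	return ((session_id & appid_mask) << PXP43_INIT_SESSION_APPID_SHIFT) |
	       PXP43_INIT_SESSION_VALID;
}
```

For the ARB session (id 0xf) this yields 0x3d: VALID in bit 0 plus 0xf in the
APPID field at bits 17:2.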
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h | 21 ++++++++
drivers/gpu/drm/xe/xe_pxp_submit.c | 50 +++++++++++++++++++
drivers/gpu/drm/xe/xe_pxp_submit.h | 1 +
3 files changed, 72 insertions(+)
diff --git a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
index 4a59c564a0d0..734feb38f570 100644
--- a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
+++ b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
@@ -50,6 +50,7 @@ struct pxp_cmd_header {
} __packed;
#define PXP43_CMDID_INVALIDATE_STREAM_KEY 0x00000007
+#define PXP43_CMDID_INIT_SESSION 0x00000036
#define PXP43_CMDID_NEW_HUC_AUTH 0x0000003F /* MTL+ */
/* PXP-Input-Packet: HUC Auth-only */
@@ -64,6 +65,26 @@ struct pxp43_huc_auth_out {
struct pxp_cmd_header header;
} __packed;
+/* PXP-Input-Packet: Init PXP session */
+struct pxp43_create_arb_in {
+ struct pxp_cmd_header header;
+ /* header.stream_id fields for version 4.3 of Init PXP session: */
+ #define PXP43_INIT_SESSION_VALID BIT(0)
+ #define PXP43_INIT_SESSION_APPTYPE BIT(1)
+ #define PXP43_INIT_SESSION_APPID GENMASK(17, 2)
+ u32 protection_mode;
+ #define PXP43_INIT_SESSION_PROTECTION_ARB 0x2
+ u32 sub_session_id;
+ u32 init_flags;
+ u32 rsvd[12];
+} __packed;
+
+/* PXP-Output-Packet: Init PXP session */
+struct pxp43_create_arb_out {
+ struct pxp_cmd_header header;
+ u32 rsvd[8];
+} __packed;
+
/* PXP-Input-Packet: Invalidate Stream Key */
struct pxp43_inv_stream_key_in {
struct pxp_cmd_header header;
diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
index 41684d666376..c9258c861556 100644
--- a/drivers/gpu/drm/xe/xe_pxp_submit.c
+++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
@@ -26,6 +26,8 @@
#include "instructions/xe_mi_commands.h"
#include "regs/xe_gt_regs.h"
+#define ARB_SESSION 0xF /* TODO: move to UAPI */
+
/*
* The VCS is used for kernel-owned GGTT submissions to issue key termination.
* Terminations are serialized, so we only need a single queue and a single
@@ -470,6 +472,54 @@ static int gsccs_send_message(struct xe_pxp_gsc_client_resources *gsc_res,
return ret;
}
+/**
+ * xe_pxp_submit_session_init - submits a PXP GSC session initialization
+ * @gsc_res: the pxp client resources
+ * @id: the session to initialize
+ *
+ * Submit a message to the GSC FW to initialize (i.e. start) a PXP session.
+ *
+ * Returns 0 if the submission is successful, an errno value otherwise.
+ */
+int xe_pxp_submit_session_init(struct xe_pxp_gsc_client_resources *gsc_res, u32 id)
+{
+ struct xe_device *xe = gsc_res->vm->xe;
+ struct pxp43_create_arb_in msg_in = {0};
+ struct pxp43_create_arb_out msg_out = {0};
+ int ret;
+
+ msg_in.header.api_version = PXP_APIVER(4, 3);
+ msg_in.header.command_id = PXP43_CMDID_INIT_SESSION;
+ msg_in.header.stream_id = (FIELD_PREP(PXP43_INIT_SESSION_APPID, id) |
+ FIELD_PREP(PXP43_INIT_SESSION_VALID, 1) |
+ FIELD_PREP(PXP43_INIT_SESSION_APPTYPE, 0));
+ msg_in.header.buffer_len = sizeof(msg_in) - sizeof(msg_in.header);
+
+ if (id == ARB_SESSION)
+ msg_in.protection_mode = PXP43_INIT_SESSION_PROTECTION_ARB;
+
+ ret = gsccs_send_message(gsc_res, &msg_in, sizeof(msg_in),
+ &msg_out, sizeof(msg_out));
+ if (ret) {
+ drm_err(&xe->drm, "Failed to init session %d, ret=[%d]\n", id, ret);
+ } else if (msg_out.header.status != 0) {
+ if (is_fw_err_platform_config(msg_out.header.status)) {
+ drm_info_once(&xe->drm,
+ "PXP init-session-%d failed due to BIOS/SOC:0x%08x:%s\n",
+ id, msg_out.header.status,
+ fw_err_to_string(msg_out.header.status));
+ } else {
+ drm_dbg(&xe->drm, "PXP init-session-%d failed 0x%08x:%s:\n",
+ id, msg_out.header.status,
+ fw_err_to_string(msg_out.header.status));
+ drm_dbg(&xe->drm, " cmd-detail: ID=[0x%08x],API-Ver-[0x%08x]\n",
+ msg_in.header.command_id, msg_in.header.api_version);
+ }
+ }
+
+ return ret;
+}
+
/**
* xe_pxp_submit_session_invalidation - submits a PXP GSC invalidation
* @gsc_res: the pxp client resources
diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.h b/drivers/gpu/drm/xe/xe_pxp_submit.h
index 48fdc9b09116..c9efda02f4b0 100644
--- a/drivers/gpu/drm/xe/xe_pxp_submit.h
+++ b/drivers/gpu/drm/xe/xe_pxp_submit.h
@@ -14,6 +14,7 @@ struct xe_pxp_gsc_client_resources;
int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp);
void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp);
+int xe_pxp_submit_session_init(struct xe_pxp_gsc_client_resources *gsc_res, u32 id);
int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id);
int xe_pxp_submit_session_invalidation(struct xe_pxp_gsc_client_resources *gsc_res,
u32 id);
--
2.43.0
* [PATCH v2 07/12] drm/xe/pxp: Add support for PXP-using queues
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (5 preceding siblings ...)
2024-08-16 19:00 ` [PATCH v2 06/12] drm/xe/pxp: Add GSC session initialization support Daniele Ceraolo Spurio
@ 2024-08-16 19:00 ` Daniele Ceraolo Spurio
2024-10-08 23:55 ` John Harrison
2024-10-09 10:07 ` Jani Nikula
2024-08-16 19:00 ` [PATCH v2 08/12] drm/xe/pxp: add a query for PXP status Daniele Ceraolo Spurio
` (13 subsequent siblings)
20 siblings, 2 replies; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-16 19:00 UTC (permalink / raw)
To: intel-xe; +Cc: Daniele Ceraolo Spurio
Userspace is required to mark a queue as using PXP to guarantee that the
PXP instructions will work. When a PXP queue is created, the driver will
do the following:
- Start the default PXP session if it is not already running;
- set the relevant bits in the context control register;
- assign an rpm ref to the queue to keep for its lifetime (this is
required because PXP HWDRM sessions are killed by the HW suspend flow).
When a PXP invalidation occurs, all the PXP queues will be killed.
On submission of a valid PXP queue, the driver will validate all
encrypted objects mapped to the VM to ensure they were encrypted with
the current key.
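From the userspace side, marking a queue amounts to chaining one set-property
extension into the exec_queue create ioctl. The sketch below is a simplified
stand-in: the struct layout is an approximation of the real
drm_xe_ext_set_property uAPI struct, while the property and type values match
the defines added by this patch.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY  0
#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE   2
#define DRM_XE_PXP_TYPE_HWDRM                     1

/* simplified stand-in for the real drm_xe_ext_set_property layout */
struct ext_set_property {
	uint64_t next_extension;  /* 0 terminates the extension chain */
	uint32_t name;            /* which extension this is */
	uint32_t pad;
	uint32_t property;        /* which property to set */
	uint32_t pad2;
	uint64_t value;           /* the property value */
};

/* fill the extension that marks an exec_queue as using a PXP HWDRM session */
static void init_pxp_hwdrm_ext(struct ext_set_property *ext)
{
	memset(ext, 0, sizeof(*ext));
	ext->name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY;
	ext->property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE;
	ext->value = DRM_XE_PXP_TYPE_HWDRM;
}
```

A client would then point drm_xe_exec_queue_create.extensions at this struct
before calling the create ioctl.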
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
drivers/gpu/drm/xe/regs/xe_engine_regs.h | 1 +
drivers/gpu/drm/xe/xe_exec_queue.c | 58 ++++-
drivers/gpu/drm/xe/xe_exec_queue.h | 5 +
drivers/gpu/drm/xe/xe_exec_queue_types.h | 8 +
drivers/gpu/drm/xe/xe_hw_engine.c | 2 +-
drivers/gpu/drm/xe/xe_lrc.c | 16 +-
drivers/gpu/drm/xe/xe_lrc.h | 4 +-
drivers/gpu/drm/xe/xe_pxp.c | 295 ++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_pxp.h | 7 +
drivers/gpu/drm/xe/xe_pxp_submit.c | 4 +-
drivers/gpu/drm/xe/xe_pxp_types.h | 26 ++
include/uapi/drm/xe_drm.h | 40 ++-
12 files changed, 450 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/xe/regs/xe_engine_regs.h b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
index 81b71903675e..3692e887f503 100644
--- a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+++ b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
@@ -130,6 +130,7 @@
#define RING_EXECLIST_STATUS_HI(base) XE_REG((base) + 0x234 + 4)
#define RING_CONTEXT_CONTROL(base) XE_REG((base) + 0x244, XE_REG_OPTION_MASKED)
+#define CTX_CTRL_PXP_ENABLE REG_BIT(10)
#define CTX_CTRL_OAC_CONTEXT_ENABLE REG_BIT(8)
#define CTX_CTRL_RUN_ALONE REG_BIT(7)
#define CTX_CTRL_INDIRECT_RING_STATE_ENABLE REG_BIT(4)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index e98e8794eddf..504ba4aa2357 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -22,6 +22,8 @@
#include "xe_ring_ops_types.h"
#include "xe_trace.h"
#include "xe_vm.h"
+#include "xe_pxp.h"
+#include "xe_pxp_types.h"
enum xe_exec_queue_sched_prop {
XE_EXEC_QUEUE_JOB_TIMEOUT = 0,
@@ -35,6 +37,8 @@ static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue
static void __xe_exec_queue_free(struct xe_exec_queue *q)
{
+ if (xe_exec_queue_uses_pxp(q))
+ xe_pxp_exec_queue_remove(gt_to_xe(q->gt)->pxp, q);
if (q->vm)
xe_vm_put(q->vm);
@@ -73,6 +77,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
q->ops = gt->exec_queue_ops;
INIT_LIST_HEAD(&q->lr.link);
INIT_LIST_HEAD(&q->multi_gt_link);
+ INIT_LIST_HEAD(&q->pxp.link);
q->sched_props.timeslice_us = hwe->eclass->sched_props.timeslice_us;
q->sched_props.preempt_timeout_us =
@@ -107,6 +112,21 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
{
struct xe_vm *vm = q->vm;
int i, err;
+ u32 flags = 0;
+
+ /*
+ * PXP workloads executing on RCS or CCS must run in isolation (i.e. no
+ * other workload can use the EUs at the same time). On MTL this is done
+ * by setting the RUNALONE bit in the LRC, while starting on Xe2 there
+ * is a dedicated bit for it.
+ */
+ if (xe_exec_queue_uses_pxp(q) &&
+ (q->class == XE_ENGINE_CLASS_RENDER || q->class == XE_ENGINE_CLASS_COMPUTE)) {
+ if (GRAPHICS_VER(gt_to_xe(q->gt)) >= 20)
+ flags |= XE_LRC_CREATE_PXP;
+ else
+ flags |= XE_LRC_CREATE_RUNALONE;
+ }
if (vm) {
err = xe_vm_lock(vm, true);
@@ -115,7 +135,7 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
}
for (i = 0; i < q->width; ++i) {
- q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K);
+ q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K, flags);
if (IS_ERR(q->lrc[i])) {
err = PTR_ERR(q->lrc[i]);
goto err_unlock;
@@ -160,6 +180,17 @@ struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *v
if (err)
goto err_post_alloc;
+ /*
+ * we can only add the queue to the PXP list after the init is complete,
+ * because the PXP termination can call exec_queue_kill and that will
+ * go bad if the queue is only half-initialized.
+ */
+ if (xe_exec_queue_uses_pxp(q)) {
+ err = xe_pxp_exec_queue_add(xe->pxp, q);
+ if (err)
+ goto err_post_alloc;
+ }
+
return q;
err_post_alloc:
@@ -197,6 +228,9 @@ void xe_exec_queue_destroy(struct kref *ref)
struct xe_exec_queue *q = container_of(ref, struct xe_exec_queue, refcount);
struct xe_exec_queue *eq, *next;
+ if (xe_exec_queue_uses_pxp(q))
+ xe_pxp_exec_queue_remove(gt_to_xe(q->gt)->pxp, q);
+
xe_exec_queue_last_fence_put_unlocked(q);
if (!(q->flags & EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD)) {
list_for_each_entry_safe(eq, next, &q->multi_gt_list,
@@ -343,6 +377,24 @@ static int exec_queue_set_timeslice(struct xe_device *xe, struct xe_exec_queue *
return 0;
}
+static int
+exec_queue_set_pxp_type(struct xe_device *xe, struct xe_exec_queue *q, u64 value)
+{
+ BUILD_BUG_ON(DRM_XE_PXP_TYPE_NONE != 0);
+
+ if (value == DRM_XE_PXP_TYPE_NONE)
+ return 0;
+
+ if (!xe_pxp_is_enabled(xe->pxp))
+ return -ENODEV;
+
+ /* we only support HWDRM sessions right now */
+ if (XE_IOCTL_DBG(xe, value != DRM_XE_PXP_TYPE_HWDRM))
+ return -EINVAL;
+
+ return xe_pxp_exec_queue_set_type(xe->pxp, q, DRM_XE_PXP_TYPE_HWDRM);
+}
+
typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
struct xe_exec_queue *q,
u64 value);
@@ -350,6 +402,7 @@ typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
[DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY] = exec_queue_set_priority,
[DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE] = exec_queue_set_timeslice,
+ [DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE] = exec_queue_set_pxp_type,
};
static int exec_queue_user_ext_set_property(struct xe_device *xe,
@@ -369,7 +422,8 @@ static int exec_queue_user_ext_set_property(struct xe_device *xe,
ARRAY_SIZE(exec_queue_set_property_funcs)) ||
XE_IOCTL_DBG(xe, ext.pad) ||
XE_IOCTL_DBG(xe, ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY &&
- ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE))
+ ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE &&
+ ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE))
return -EINVAL;
idx = array_index_nospec(ext.property, ARRAY_SIZE(exec_queue_set_property_funcs));
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
index ded77b0f3b90..7fa97719667a 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue.h
@@ -53,6 +53,11 @@ static inline bool xe_exec_queue_is_parallel(struct xe_exec_queue *q)
return q->width > 1;
}
+static inline bool xe_exec_queue_uses_pxp(struct xe_exec_queue *q)
+{
+ return q->pxp.type;
+}
+
bool xe_exec_queue_is_lr(struct xe_exec_queue *q);
bool xe_exec_queue_ring_full(struct xe_exec_queue *q);
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index 1408b02eea53..28b56217f1df 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -130,6 +130,14 @@ struct xe_exec_queue {
spinlock_t lock;
} lr;
+ /** @pxp: PXP info tracking */
+ struct {
+ /** @pxp.type: PXP session type used by this queue */
+ u8 type;
+ /** @pxp.link: link into the list of PXP exec queues */
+ struct list_head link;
+ } pxp;
+
/** @ops: submission backend exec queue operations */
const struct xe_exec_queue_ops *ops;
diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
index e195022ca836..469932e7d7a6 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine.c
+++ b/drivers/gpu/drm/xe/xe_hw_engine.c
@@ -557,7 +557,7 @@ static int hw_engine_init(struct xe_gt *gt, struct xe_hw_engine *hwe,
goto err_name;
}
- hwe->kernel_lrc = xe_lrc_create(hwe, NULL, SZ_16K);
+ hwe->kernel_lrc = xe_lrc_create(hwe, NULL, SZ_16K, 0);
if (IS_ERR(hwe->kernel_lrc)) {
err = PTR_ERR(hwe->kernel_lrc);
goto err_hwsp;
diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
index 974a9cd8c379..4f3e676db646 100644
--- a/drivers/gpu/drm/xe/xe_lrc.c
+++ b/drivers/gpu/drm/xe/xe_lrc.c
@@ -893,7 +893,7 @@ static void xe_lrc_finish(struct xe_lrc *lrc)
#define PVC_CTX_ACC_CTR_THOLD (0x2a + 1)
static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
- struct xe_vm *vm, u32 ring_size)
+ struct xe_vm *vm, u32 ring_size, u32 init_flags)
{
struct xe_gt *gt = hwe->gt;
struct xe_tile *tile = gt_to_tile(gt);
@@ -981,6 +981,16 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
RING_CTL_SIZE(lrc->ring.size) | RING_VALID);
}
+ if (init_flags & XE_LRC_CREATE_RUNALONE)
+ xe_lrc_write_ctx_reg(lrc, CTX_CONTEXT_CONTROL,
+ xe_lrc_read_ctx_reg(lrc, CTX_CONTEXT_CONTROL) |
+ _MASKED_BIT_ENABLE(CTX_CTRL_RUN_ALONE));
+
+ if (init_flags & XE_LRC_CREATE_PXP)
+ xe_lrc_write_ctx_reg(lrc, CTX_CONTEXT_CONTROL,
+ xe_lrc_read_ctx_reg(lrc, CTX_CONTEXT_CONTROL) |
+ _MASKED_BIT_ENABLE(CTX_CTRL_PXP_ENABLE));
+
xe_lrc_write_ctx_reg(lrc, CTX_TIMESTAMP, 0);
if (xe->info.has_asid && vm)
@@ -1029,7 +1039,7 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
* upon failure.
*/
struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
- u32 ring_size)
+ u32 ring_size, u32 flags)
{
struct xe_lrc *lrc;
int err;
@@ -1038,7 +1048,7 @@ struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
if (!lrc)
return ERR_PTR(-ENOMEM);
- err = xe_lrc_init(lrc, hwe, vm, ring_size);
+ err = xe_lrc_init(lrc, hwe, vm, ring_size, flags);
if (err) {
kfree(lrc);
return ERR_PTR(err);
diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
index d411c3fbcbc6..cc8091bba2a0 100644
--- a/drivers/gpu/drm/xe/xe_lrc.h
+++ b/drivers/gpu/drm/xe/xe_lrc.h
@@ -23,8 +23,10 @@ struct xe_vm;
#define LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR (0x34 * 4)
#define LRC_PPHWSP_PXP_INVAL_SCRATCH_ADDR (0x40 * 4)
+#define XE_LRC_CREATE_RUNALONE 0x1
+#define XE_LRC_CREATE_PXP 0x2
struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
- u32 ring_size);
+ u32 ring_size, u32 flags);
void xe_lrc_destroy(struct kref *ref);
/**
diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
index 382eb0cb0018..acdc25c8e8a1 100644
--- a/drivers/gpu/drm/xe/xe_pxp.c
+++ b/drivers/gpu/drm/xe/xe_pxp.c
@@ -6,11 +6,17 @@
#include "xe_pxp.h"
#include <drm/drm_managed.h>
+#include <drm/xe_drm.h>
#include "xe_device_types.h"
+#include "xe_exec_queue.h"
+#include "xe_exec_queue_types.h"
#include "xe_force_wake.h"
+#include "xe_guc_submit.h"
+#include "xe_gsc_proxy.h"
#include "xe_gt.h"
#include "xe_gt_types.h"
+#include "xe_huc.h"
#include "xe_mmio.h"
#include "xe_pm.h"
#include "xe_pxp_submit.h"
@@ -27,18 +33,45 @@
* integrated parts.
*/
-#define ARB_SESSION 0xF /* TODO: move to UAPI */
+#define ARB_SESSION DRM_XE_PXP_HWDRM_DEFAULT_SESSION /* shorter define */
bool xe_pxp_is_supported(const struct xe_device *xe)
{
return xe->info.has_pxp && IS_ENABLED(CONFIG_INTEL_MEI_GSC_PROXY);
}
-static bool pxp_is_enabled(const struct xe_pxp *pxp)
+bool xe_pxp_is_enabled(const struct xe_pxp *pxp)
{
return pxp;
}
+static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
+{
+ bool ready;
+
+ XE_WARN_ON(xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GSC));
+
+ /* PXP requires both HuC authentication via GSC and GSC proxy initialized */
+ ready = xe_huc_is_authenticated(&pxp->gt->uc.huc, XE_HUC_AUTH_VIA_GSC) &&
+ xe_gsc_proxy_init_done(&pxp->gt->uc.gsc);
+
+ xe_force_wake_put(gt_to_fw(pxp->gt), XE_FW_GSC);
+
+ return ready;
+}
+
+static bool pxp_session_is_in_play(struct xe_pxp *pxp, u32 id)
+{
+ struct xe_gt *gt = pxp->gt;
+ u32 sip = 0;
+
+ XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));
+ sip = xe_mmio_read32(gt, KCR_SIP);
+ xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
+
+ return sip & BIT(id);
+}
+
static int pxp_wait_for_session_state(struct xe_pxp *pxp, u32 id, bool in_play)
{
struct xe_gt *gt = pxp->gt;
@@ -56,12 +89,30 @@ static int pxp_wait_for_session_state(struct xe_pxp *pxp, u32 id, bool in_play)
return ret;
}
+static void pxp_invalidate_queues(struct xe_pxp *pxp);
+
static void pxp_terminate(struct xe_pxp *pxp)
{
int ret = 0;
struct xe_device *xe = pxp->xe;
struct xe_gt *gt = pxp->gt;
+ pxp_invalidate_queues(pxp);
+
+ /*
+ * If we have a termination already in progress, we need to wait for
+ * it to complete before queueing another one. We update the state
+ * to signal that another termination is required and leave it to the
+ * pxp_start() call to take care of it.
+ */
+ if (!completion_done(&pxp->termination)) {
+ pxp->status = XE_PXP_NEEDS_TERMINATION;
+ return;
+ }
+
+ reinit_completion(&pxp->termination);
+ pxp->status = XE_PXP_TERMINATION_IN_PROGRESS;
+
drm_dbg(&xe->drm, "Terminating PXP\n");
/* terminate the hw session */
@@ -82,13 +133,32 @@ static void pxp_terminate(struct xe_pxp *pxp)
ret = xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
out:
- if (ret)
+ if (ret) {
drm_err(&xe->drm, "PXP termination failed: %pe\n", ERR_PTR(ret));
+ pxp->status = XE_PXP_ERROR;
+ complete_all(&pxp->termination);
+ }
}
static void pxp_terminate_complete(struct xe_pxp *pxp)
{
- /* TODO mark the session as ready to start */
+ /*
+ * We expect PXP to be in one of 2 states when we get here:
+ * - XE_PXP_TERMINATION_IN_PROGRESS: a single termination event was
+ * requested and it is now completing, so we're ready to start.
+ * - XE_PXP_NEEDS_TERMINATION: a second termination was requested while
+ * the first one was still being processed; we don't update the state
+ * in this case so the pxp_start code will automatically issue that
+ * second termination.
+ */
+ if (pxp->status == XE_PXP_TERMINATION_IN_PROGRESS)
+ pxp->status = XE_PXP_READY_TO_START;
+ else if (pxp->status != XE_PXP_NEEDS_TERMINATION)
+ drm_err(&pxp->xe->drm,
+ "PXP termination complete while status was %u\n",
+ pxp->status);
+
+ complete_all(&pxp->termination);
}
static void pxp_irq_work(struct work_struct *work)
@@ -112,6 +182,8 @@ static void pxp_irq_work(struct work_struct *work)
if ((events & PXP_TERMINATION_REQUEST) && !xe_pm_runtime_get_if_active(xe))
return;
+ mutex_lock(&pxp->mutex);
+
if (events & PXP_TERMINATION_REQUEST) {
events &= ~PXP_TERMINATION_COMPLETE;
pxp_terminate(pxp);
@@ -120,6 +192,8 @@ static void pxp_irq_work(struct work_struct *work)
if (events & PXP_TERMINATION_COMPLETE)
pxp_terminate_complete(pxp);
+ mutex_unlock(&pxp->mutex);
+
if (events & PXP_TERMINATION_REQUEST)
xe_pm_runtime_put(xe);
}
@@ -133,7 +207,7 @@ void xe_pxp_irq_handler(struct xe_device *xe, u16 iir)
{
struct xe_pxp *pxp = xe->pxp;
- if (!pxp_is_enabled(pxp)) {
+ if (!xe_pxp_is_enabled(pxp)) {
drm_err(&xe->drm, "PXP irq 0x%x received with PXP disabled!\n", iir);
return;
}
@@ -230,10 +304,22 @@ int xe_pxp_init(struct xe_device *xe)
if (!pxp)
return -ENOMEM;
+ INIT_LIST_HEAD(&pxp->queues.list);
+ spin_lock_init(&pxp->queues.lock);
INIT_WORK(&pxp->irq.work, pxp_irq_work);
pxp->xe = xe;
pxp->gt = gt;
+ /*
+ * we'll use the completion to check if there is a termination pending,
+ * so we start it as completed and we reinit it when a termination
+ * is triggered.
+ */
+ init_completion(&pxp->termination);
+ complete_all(&pxp->termination);
+
+ mutex_init(&pxp->mutex);
+
pxp->irq.wq = alloc_ordered_workqueue("pxp-wq", 0);
if (!pxp->irq.wq)
return -ENOMEM;
@@ -256,3 +342,202 @@ int xe_pxp_init(struct xe_device *xe)
destroy_workqueue(pxp->irq.wq);
return err;
}
+
+static int __pxp_start_arb_session(struct xe_pxp *pxp)
+{
+ int ret;
+
+ if (pxp_session_is_in_play(pxp, ARB_SESSION))
+ return -EEXIST;
+
+ ret = xe_pxp_submit_session_init(&pxp->gsc_res, ARB_SESSION);
+ if (ret) {
+ drm_err(&pxp->xe->drm, "Failed to init PXP arb session\n");
+ goto out;
+ }
+
+ ret = pxp_wait_for_session_state(pxp, ARB_SESSION, true);
+ if (ret) {
+ drm_err(&pxp->xe->drm, "PXP ARB session failed to go in play\n");
+ goto out;
+ }
+
+ drm_dbg(&pxp->xe->drm, "PXP ARB session is active\n");
+
+out:
+ if (!ret)
+ pxp->status = XE_PXP_ACTIVE;
+ else
+ pxp->status = XE_PXP_ERROR;
+
+ return ret;
+}
+
+/**
+ * xe_pxp_exec_queue_set_type - Mark a queue as using PXP
+ * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
+ * @q: the queue to mark as using PXP
+ * @type: the type of PXP session this queue will use
+ *
+ * Returns 0 if the selected PXP type is supported, -ENODEV otherwise.
+ */
+int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 type)
+{
+ if (!xe_pxp_is_enabled(pxp))
+ return -ENODEV;
+
+ /* we only support HWDRM sessions right now */
+ xe_assert(pxp->xe, type == DRM_XE_PXP_TYPE_HWDRM);
+
+ q->pxp.type = type;
+
+ return 0;
+}
+
+/**
+ * xe_pxp_exec_queue_add - add a queue to the PXP list
+ * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
+ * @q: the queue to add to the list
+ *
+ * If PXP is enabled and the prerequisites are done, start the PXP ARB
+ * session (if not already running) and add the queue to the PXP list. Note
+ * that the queue must have previously been marked as using PXP with
+ * xe_pxp_exec_queue_set_type.
+ *
+ * Returns 0 if the PXP ARB session is running and the queue is in the list,
+ * -ENODEV if PXP is disabled, -EBUSY if the PXP prerequisites are not done,
+ * other errno value if something goes wrong during the session start.
+ */
+#define PXP_TERMINATION_TIMEOUT_MS 500
+int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q)
+{
+ int ret = 0;
+
+ if (!xe_pxp_is_enabled(pxp))
+ return -ENODEV;
+
+ /* we only support HWDRM sessions right now */
+ xe_assert(pxp->xe, q->pxp.type == DRM_XE_PXP_TYPE_HWDRM);
+
+ /*
+ * Runtime suspend kills PXP, so we need to block it while we have
+ * active queues that use PXP
+ */
+ xe_pm_runtime_get(pxp->xe);
+
+ if (!pxp_prerequisites_done(pxp)) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+wait_for_termination:
+ /*
+ * if there is a termination in progress, wait for it.
+ * We need to wait outside the lock because the completion is done from
+ * within the lock
+ */
+ if (!wait_for_completion_timeout(&pxp->termination,
+ msecs_to_jiffies(PXP_TERMINATION_TIMEOUT_MS)))
+ return -ETIMEDOUT;
+
+ mutex_lock(&pxp->mutex);
+
+ /*
+ * check if a new termination was issued between the above check and
+ * grabbing the mutex
+ */
+ if (!completion_done(&pxp->termination)) {
+ mutex_unlock(&pxp->mutex);
+ goto wait_for_termination;
+ }
+
+ /* If PXP is not already active, turn it on */
+ switch (pxp->status) {
+ case XE_PXP_ERROR:
+ ret = -EIO;
+ break;
+ case XE_PXP_ACTIVE:
+ break;
+ case XE_PXP_READY_TO_START:
+ ret = __pxp_start_arb_session(pxp);
+ break;
+ case XE_PXP_NEEDS_TERMINATION:
+ pxp_terminate(pxp);
+ mutex_unlock(&pxp->mutex);
+ goto wait_for_termination;
+ default:
+ drm_err(&pxp->xe->drm, "unexpected state during PXP start: %u", pxp->status);
+ ret = -EIO;
+ break;
+ }
+
+ /* If everything went ok, add the queue to the list */
+ if (!ret) {
+ spin_lock_irq(&pxp->queues.lock);
+ list_add_tail(&q->pxp.link, &pxp->queues.list);
+ spin_unlock_irq(&pxp->queues.lock);
+ }
+
+ mutex_unlock(&pxp->mutex);
+
+out:
+ /*
+ * in the successful case the PM ref is released from
+ * xe_pxp_exec_queue_remove
+ */
+ if (ret)
+ xe_pm_runtime_put(pxp->xe);
+
+ return ret;
+}
+
+/**
+ * xe_pxp_exec_queue_remove - remove a queue from the PXP list
+ * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
+ * @q: the queue to remove from the list
+ *
+ * If PXP is enabled and the exec_queue is in the list, the queue will be
+ * removed from the list and its PM reference will be released. It is safe to
+ * call this function multiple times for the same queue.
+ */
+void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q)
+{
+ bool need_pm_put = false;
+
+ if (!xe_pxp_is_enabled(pxp))
+ return;
+
+ spin_lock_irq(&pxp->queues.lock);
+
+ if (!list_empty(&q->pxp.link)) {
+ list_del_init(&q->pxp.link);
+ need_pm_put = true;
+ }
+
+ q->pxp.type = DRM_XE_PXP_TYPE_NONE;
+
+ spin_unlock_irq(&pxp->queues.lock);
+
+ if (need_pm_put)
+ xe_pm_runtime_put(pxp->xe);
+}
+
+static void pxp_invalidate_queues(struct xe_pxp *pxp)
+{
+ struct xe_exec_queue *tmp, *q;
+
+ spin_lock_irq(&pxp->queues.lock);
+
+ list_for_each_entry(tmp, &pxp->queues.list, pxp.link) {
+ q = xe_exec_queue_get_unless_zero(tmp);
+
+ if (!q)
+ continue;
+
+ xe_exec_queue_kill(q);
+ xe_exec_queue_put(q);
+ }
+
+ spin_unlock_irq(&pxp->queues.lock);
+}
+
diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
index 81bafe2714ff..2e0ab186072a 100644
--- a/drivers/gpu/drm/xe/xe_pxp.h
+++ b/drivers/gpu/drm/xe/xe_pxp.h
@@ -9,10 +9,17 @@
#include <linux/types.h>
struct xe_device;
+struct xe_exec_queue;
+struct xe_pxp;
bool xe_pxp_is_supported(const struct xe_device *xe);
+bool xe_pxp_is_enabled(const struct xe_pxp *pxp);
int xe_pxp_init(struct xe_device *xe);
void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
+int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 type);
+int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q);
+void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q);
+
#endif /* __XE_PXP_H__ */
diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
index c9258c861556..becffa6dfd4c 100644
--- a/drivers/gpu/drm/xe/xe_pxp_submit.c
+++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
@@ -26,8 +26,6 @@
#include "instructions/xe_mi_commands.h"
#include "regs/xe_gt_regs.h"
-#define ARB_SESSION 0xF /* TODO: move to UAPI */
-
/*
* The VCS is used for kernel-owned GGTT submissions to issue key termination.
* Terminations are serialized, so we only need a single queue and a single
@@ -495,7 +493,7 @@ int xe_pxp_submit_session_init(struct xe_pxp_gsc_client_resources *gsc_res, u32
FIELD_PREP(PXP43_INIT_SESSION_APPTYPE, 0));
msg_in.header.buffer_len = sizeof(msg_in) - sizeof(msg_in.header);
- if (id == ARB_SESSION)
+ if (id == DRM_XE_PXP_HWDRM_DEFAULT_SESSION)
msg_in.protection_mode = PXP43_INIT_SESSION_PROTECTION_ARB;
ret = gsccs_send_message(gsc_res, &msg_in, sizeof(msg_in),
diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
index d5cf8faed7be..eb6a0183320a 100644
--- a/drivers/gpu/drm/xe/xe_pxp_types.h
+++ b/drivers/gpu/drm/xe/xe_pxp_types.h
@@ -6,7 +6,10 @@
#ifndef __XE_PXP_TYPES_H__
#define __XE_PXP_TYPES_H__
+#include <linux/completion.h>
#include <linux/iosys-map.h>
+#include <linux/mutex.h>
+#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/workqueue.h>
@@ -16,6 +19,14 @@ struct xe_device;
struct xe_gt;
struct xe_vm;
+enum xe_pxp_status {
+ XE_PXP_ERROR = -1,
+ XE_PXP_NEEDS_TERMINATION = 0, /* starting status */
+ XE_PXP_TERMINATION_IN_PROGRESS,
+ XE_PXP_READY_TO_START,
+ XE_PXP_ACTIVE
+};
+
/**
* struct xe_pxp_gsc_client_resources - resources for GSC submission by a PXP
* client. The GSC FW supports multiple GSC clients active at the same time.
@@ -82,6 +93,21 @@ struct xe_pxp {
#define PXP_TERMINATION_REQUEST BIT(0)
#define PXP_TERMINATION_COMPLETE BIT(1)
} irq;
+
+ /** @mutex: protects the pxp status and the queue list */
+ struct mutex mutex;
+ /** @status: the current pxp status */
+ enum xe_pxp_status status;
+ /** @termination: completion struct that tracks terminations */
+ struct completion termination;
+
+ /** @queues: management of exec_queues that use PXP */
+ struct {
+ /** @queues.lock: spinlock protecting the queue management */
+ spinlock_t lock;
+ /** @queues.list: list of exec_queues that use PXP */
+ struct list_head list;
+ } queues;
};
#endif /* __XE_PXP_TYPES_H__ */
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index b6fbe4988f2e..5f4d08123672 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -1085,6 +1085,24 @@ struct drm_xe_vm_bind {
/**
* struct drm_xe_exec_queue_create - Input of &DRM_IOCTL_XE_EXEC_QUEUE_CREATE
*
+ * This ioctl supports setting the following properties via the
+ * %DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY extension, which uses the
+ * generic @drm_xe_ext_set_property struct:
+ *
+ * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY - set the queue priority.
+ * CAP_SYS_NICE is required to set a value above normal.
+ * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE - set the queue timeslice
+ * duration.
+ * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE - set the type of PXP session
+ * this queue will be used with. Valid values are listed in enum
+ * drm_xe_pxp_session_type. %DRM_XE_PXP_TYPE_NONE is the default behavior, so
+ * there is no need to explicitly set that. When a queue of type
+ * %DRM_XE_PXP_TYPE_HWDRM is created, the PXP default HWDRM session
+ * (%DRM_XE_PXP_HWDRM_DEFAULT_SESSION) will be started, if it isn't already running.
+ * Given that going into a power-saving state kills PXP HWDRM sessions,
+ * runtime PM will be blocked while queues of this type are alive.
+ * All PXP queues will be killed if a PXP invalidation event occurs.
+ *
* The example below shows how to use @drm_xe_exec_queue_create to create
* a simple exec_queue (no parallel submission) of class
* &DRM_XE_ENGINE_CLASS_RENDER.
@@ -1108,7 +1126,7 @@ struct drm_xe_exec_queue_create {
#define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
-
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE 2
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
@@ -1694,6 +1712,26 @@ struct drm_xe_oa_stream_info {
__u64 reserved[3];
};
+/**
+ * enum drm_xe_pxp_session_type - Supported PXP session types.
+ *
+ * We currently only support HWDRM sessions, which are used for protected
+ * content that ends up being displayed, but the HW supports multiple types, so
+ * we might extend support in the future.
+ */
+enum drm_xe_pxp_session_type {
+ /** @DRM_XE_PXP_TYPE_NONE: PXP not used */
+ DRM_XE_PXP_TYPE_NONE = 0,
+ /**
+ * @DRM_XE_PXP_TYPE_HWDRM: HWDRM sessions are used for content that ends
+ * up on the display.
+ */
+ DRM_XE_PXP_TYPE_HWDRM = 1,
+};
+
+/* ID of the protected content session managed by Xe when PXP is active */
+#define DRM_XE_PXP_HWDRM_DEFAULT_SESSION 0xf
+
#if defined(__cplusplus)
}
#endif
--
2.43.0
^ permalink raw reply related [flat|nested] 54+ messages in thread
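[Editor's sketch] A userspace caller would opt a new exec queue into PXP by chaining a set-property extension onto the create ioctl. The snippet below is illustrative only: the struct layouts are mirrored locally as an assumption about the uapi in include/uapi/drm/xe_drm.h, and the actual DRM_IOCTL_XE_EXEC_QUEUE_CREATE call is elided.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Local mirrors of the uapi structs (assumed layouts, for illustration). */
struct drm_xe_user_extension {
	uint64_t next_extension; /* pointer to next extension, 0 terminates */
	uint32_t name;
	uint32_t pad;
};

struct drm_xe_ext_set_property {
	struct drm_xe_user_extension base;
	uint32_t property;
	uint32_t pad;
	uint64_t value;
	uint64_t reserved[2];
};

#define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE  2
#define DRM_XE_PXP_TYPE_HWDRM                    1

/* Fill a set-property extension requesting an HWDRM-capable queue. */
static void set_pxp_hwdrm(struct drm_xe_ext_set_property *ext)
{
	memset(ext, 0, sizeof(*ext));
	ext->base.name = DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY;
	ext->property = DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE;
	ext->value = DRM_XE_PXP_TYPE_HWDRM;
}
```

Before issuing the ioctl, drm_xe_exec_queue_create.extensions would point at this struct (cast to __u64); further extensions chain through base.next_extension.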
* [PATCH v2 08/12] drm/xe/pxp: add a query for PXP status
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (6 preceding siblings ...)
2024-08-16 19:00 ` [PATCH v2 07/12] drm/xe/pxp: Add support for PXP-using queues Daniele Ceraolo Spurio
@ 2024-08-16 19:00 ` Daniele Ceraolo Spurio
2024-10-09 0:09 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 09/12] drm/xe/pxp: Add API to mark a BO as using PXP Daniele Ceraolo Spurio
` (12 subsequent siblings)
20 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-16 19:00 UTC (permalink / raw)
To: intel-xe; +Cc: Daniele Ceraolo Spurio, José Roberto de Souza
PXP prerequisites (SW proxy and HuC auth via GSC) are completed
asynchronously from driver load, which means that userspace can start
submitting before we're ready to start a PXP session. Therefore, we need
a query that userspace can use to check not only if PXP is supported but
also to wait until the prerequisites are done.
v2: Improve doc, do not report TYPE_NONE as supported (José)
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: José Roberto de Souza <jose.souza@intel.com>
---
drivers/gpu/drm/xe/xe_pxp.c | 33 +++++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_pxp.h | 1 +
drivers/gpu/drm/xe/xe_query.c | 32 ++++++++++++++++++++++++++++++++
include/uapi/drm/xe_drm.h | 35 +++++++++++++++++++++++++++++++++++
4 files changed, 101 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
index acdc25c8e8a1..ca4302af4ced 100644
--- a/drivers/gpu/drm/xe/xe_pxp.c
+++ b/drivers/gpu/drm/xe/xe_pxp.c
@@ -60,6 +60,39 @@ static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
return ready;
}
+/**
+ * xe_pxp_get_readiness_status - check whether PXP is ready for userspace use
+ * @pxp: the xe_pxp pointer (can be NULL if PXP is disabled)
+ *
+ * This function is used for status queries from userspace, so the returned
+ * value follows the uapi (see drm_xe_query_pxp_status)
+ *
+ * Returns: 0 if PXP is not ready yet, 1 if it is ready, an errno value if PXP
+ * is not supported/enabled or if something went wrong in the initialization of
+ * the prerequisites.
+ */
+int xe_pxp_get_readiness_status(struct xe_pxp *pxp)
+{
+ int ret = 0;
+
+ if (!xe_pxp_is_enabled(pxp))
+ return -ENODEV;
+
+ /* if the GSC or HuC FW are in an error state, PXP will never work */
+ if (xe_uc_fw_status_to_error(pxp->gt->uc.huc.fw.status) ||
+ xe_uc_fw_status_to_error(pxp->gt->uc.gsc.fw.status))
+ return -EIO;
+
+ xe_pm_runtime_get(pxp->xe);
+
+ /* PXP requires both HuC loaded and GSC proxy initialized */
+ if (pxp_prerequisites_done(pxp))
+ ret = 1;
+
+ xe_pm_runtime_put(pxp->xe);
+ return ret;
+}
+
static bool pxp_session_is_in_play(struct xe_pxp *pxp, u32 id)
{
struct xe_gt *gt = pxp->gt;
diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
index 2e0ab186072a..868813cc84b9 100644
--- a/drivers/gpu/drm/xe/xe_pxp.h
+++ b/drivers/gpu/drm/xe/xe_pxp.h
@@ -14,6 +14,7 @@ struct xe_pxp;
bool xe_pxp_is_supported(const struct xe_device *xe);
bool xe_pxp_is_enabled(const struct xe_pxp *pxp);
+int xe_pxp_get_readiness_status(struct xe_pxp *pxp);
int xe_pxp_init(struct xe_device *xe);
void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
index 73ef6e4c2dc9..a1e297234972 100644
--- a/drivers/gpu/drm/xe/xe_query.c
+++ b/drivers/gpu/drm/xe/xe_query.c
@@ -22,6 +22,7 @@
#include "xe_guc_hwconfig.h"
#include "xe_macros.h"
#include "xe_mmio.h"
+#include "xe_pxp.h"
#include "xe_ttm_vram_mgr.h"
static const u16 xe_to_user_engine_class[] = {
@@ -680,6 +681,36 @@ static int query_oa_units(struct xe_device *xe,
return ret ? -EFAULT : 0;
}
+static int query_pxp_status(struct xe_device *xe, struct drm_xe_device_query *query)
+{
+ struct drm_xe_query_pxp_status __user *query_ptr = u64_to_user_ptr(query->data);
+ size_t size = sizeof(struct drm_xe_query_pxp_status);
+ struct drm_xe_query_pxp_status resp;
+ int ret;
+
+ if (query->size == 0) {
+ query->size = size;
+ return 0;
+ } else if (XE_IOCTL_DBG(xe, query->size != size)) {
+ return -EINVAL;
+ }
+
+ if (copy_from_user(&resp, query_ptr, size))
+ return -EFAULT;
+
+ ret = xe_pxp_get_readiness_status(xe->pxp);
+ if (ret < 0)
+ return ret;
+
+ resp.status = ret;
+ resp.supported_session_types = BIT(DRM_XE_PXP_TYPE_HWDRM);
+
+ if (copy_to_user(query_ptr, &resp, size))
+ return -EFAULT;
+
+ return 0;
+}
+
static int (* const xe_query_funcs[])(struct xe_device *xe,
struct drm_xe_device_query *query) = {
query_engines,
@@ -691,6 +722,7 @@ static int (* const xe_query_funcs[])(struct xe_device *xe,
query_engine_cycles,
query_uc_fw_version,
query_oa_units,
+ query_pxp_status,
};
int xe_query_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 5f4d08123672..9972ceb3fbfb 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -627,6 +627,39 @@ struct drm_xe_query_uc_fw_version {
__u64 reserved;
};
+/**
+ * struct drm_xe_query_pxp_status - query if PXP is ready
+ *
+ * If PXP is enabled and no fatal error has occurred, the status will be set to
+ * one of the following values:
+ * 0: PXP init still in progress
+ * 1: PXP init complete
+ *
+ * If PXP is not enabled or something has gone wrong, the query will fail
+ * with one of the following error codes:
+ * -ENODEV: PXP not supported or disabled;
+ * -EIO: fatal error occurred during init, so PXP will never be enabled;
+ * -EINVAL: incorrect value provided as part of the query;
+ * -EFAULT: error copying the memory between kernel and userspace.
+ *
+ * The status can only be 0 in the first few seconds after driver load. If
+ * everything works as expected, the status will transition to init complete in
+ * less than 1 second, while in case of errors the driver might take longer to
+ * start returning an error code, but it should still take less than 10 seconds.
+ *
+ * The supported session type bitmask is based on the values in
+ * enum drm_xe_pxp_session_type. TYPE_NONE is always supported and therefore
+ * is not reported in the bitmask.
+ *
+ */
+struct drm_xe_query_pxp_status {
+ /** @status: current PXP status */
+ __u32 status;
+
+ /** @supported_session_types: bitmask of supported PXP session types */
+ __u32 supported_session_types;
+};
+
/**
* struct drm_xe_device_query - Input of &DRM_IOCTL_XE_DEVICE_QUERY - main
* structure to query device information
@@ -646,6 +679,7 @@ struct drm_xe_query_uc_fw_version {
* attributes.
* - %DRM_XE_DEVICE_QUERY_GT_TOPOLOGY
* - %DRM_XE_DEVICE_QUERY_ENGINE_CYCLES
+ * - %DRM_XE_DEVICE_QUERY_PXP_STATUS
*
* If size is set to 0, the driver fills it with the required size for
* the requested type of data to query. If size is equal to the required
@@ -698,6 +732,7 @@ struct drm_xe_device_query {
#define DRM_XE_DEVICE_QUERY_ENGINE_CYCLES 6
#define DRM_XE_DEVICE_QUERY_UC_FW_VERSION 7
#define DRM_XE_DEVICE_QUERY_OA_UNITS 8
+#define DRM_XE_DEVICE_QUERY_PXP_STATUS 9
/** @query: The type of data to query */
__u32 query;
--
2.43.0
^ permalink raw reply related [flat|nested] 54+ messages in thread
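[Editor's sketch] The intended userspace handling of the query result can be modeled with a small helper. This is hypothetical code, not part of the series: the struct layout is mirrored locally as an assumption, and the DRM_IOCTL_XE_DEVICE_QUERY two-call pattern itself is elided.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Local mirror of the new uapi struct (assumed layout, for illustration). */
struct drm_xe_query_pxp_status {
	uint32_t status;                  /* 0 = init in progress, 1 = ready */
	uint32_t supported_session_types; /* bitmask over drm_xe_pxp_session_type */
};

#define DRM_XE_PXP_TYPE_HWDRM 1

/*
 * Interpret one query result: 1 = ready for HWDRM sessions, 0 = keep
 * polling, negative = PXP unusable (kernel errno, e.g. -ENODEV or -EIO).
 */
static int pxp_hwdrm_ready(int ioctl_ret, const struct drm_xe_query_pxp_status *q)
{
	if (ioctl_ret < 0)
		return ioctl_ret;
	if (!(q->supported_session_types & (1u << DRM_XE_PXP_TYPE_HWDRM)))
		return -ENOTSUP; /* PXP is up, but no HWDRM sessions offered */
	return q->status == 1;
}
```

Per the doc comment above, a caller polling this helper should expect it to settle (to 1 or to an error) within roughly ten seconds of driver load.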
* [PATCH v2 09/12] drm/xe/pxp: Add API to mark a BO as using PXP
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (7 preceding siblings ...)
2024-08-16 19:00 ` [PATCH v2 08/12] drm/xe/pxp: add a query for PXP status Daniele Ceraolo Spurio
@ 2024-08-16 19:00 ` Daniele Ceraolo Spurio
2024-10-09 0:42 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 10/12] drm/xe/pxp: add PXP PM support Daniele Ceraolo Spurio
` (11 subsequent siblings)
20 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-16 19:00 UTC (permalink / raw)
To: intel-xe; +Cc: Daniele Ceraolo Spurio, Matthew Brost, Thomas Hellström
The driver needs to know if a BO is encrypted with PXP to enable the
display decryption at flip time.
Furthermore, we want to keep track of the status of the encryption and
reject any operation that involves a BO that is encrypted using an old
key. There are two points in time where such checks can kick in:
1 - at VM bind time, all operations except for unmapping will be
rejected if the key used to encrypt the BO is no longer valid. This
check is opt-in via a new VM_BIND flag, to avoid a scenario where a
malicious app purposely shares an invalid BO with the compositor (or
other app) and causes an error there.
2 - at job submission time, if the queue is marked as using PXP, all
objects bound to the VM will be checked and the submission will be
rejected if any of them was encrypted with a key that is no longer
valid.
Note that there is no risk of leaking the encrypted data if a user does
not opt-in to those checks; the only consequence is that the user will
not realize that the encryption key has changed and that the data is no
longer valid.
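[Editor's sketch] The bind-time check described in point 1 can be modeled in isolation as follows. This is not the kernel code: the DRM_XE_VM_BIND_OP values and flag bit are assumed here for illustration only.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Assumed uapi values, mirrored locally for illustration only. */
#define DRM_XE_VM_BIND_OP_MAP         0x0
#define DRM_XE_VM_BIND_OP_UNMAP       0x1
#define DRM_XE_VM_BIND_OP_UNMAP_ALL   0x3
#define DRM_XE_VM_BIND_FLAG_CHECK_PXP (1u << 4)

/*
 * Model of the bind-time validation: a protected BO with a stale key is
 * rejected only if userspace opted in via CHECK_PXP, and unmap operations
 * are always allowed so a stale BO can still be cleaned up.
 */
static int validate_pxp_bind(uint32_t flags, uint32_t op,
			     int bo_protected, int key_valid)
{
	if (!(flags & DRM_XE_VM_BIND_FLAG_CHECK_PXP) || !bo_protected)
		return 0;
	if (op == DRM_XE_VM_BIND_OP_UNMAP || op == DRM_XE_VM_BIND_OP_UNMAP_ALL)
		return 0;
	return key_valid ? 0 : -ENOEXEC;
}
```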
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h | 10 +-
drivers/gpu/drm/xe/xe_bo.c | 100 +++++++++++++++++-
drivers/gpu/drm/xe/xe_bo.h | 5 +
drivers/gpu/drm/xe/xe_bo_types.h | 3 +
drivers/gpu/drm/xe/xe_exec.c | 6 ++
drivers/gpu/drm/xe/xe_pxp.c | 74 +++++++++++++
drivers/gpu/drm/xe/xe_pxp.h | 4 +
drivers/gpu/drm/xe/xe_pxp_types.h | 3 +
drivers/gpu/drm/xe/xe_vm.c | 46 +++++++-
drivers/gpu/drm/xe/xe_vm.h | 2 +
include/uapi/drm/xe_drm.h | 19 ++++
11 files changed, 265 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
index 881680727452..d8682f781619 100644
--- a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
+++ b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
@@ -9,6 +9,9 @@
#include <linux/errno.h>
#include <linux/types.h>
+#include "xe_bo.h"
+#include "xe_pxp.h"
+
struct drm_i915_gem_object;
struct xe_pxp;
@@ -16,13 +19,16 @@ static inline int intel_pxp_key_check(struct xe_pxp *pxp,
struct drm_i915_gem_object *obj,
bool assign)
{
- return -ENODEV;
+ if (assign)
+ return -EINVAL;
+
+ return xe_pxp_key_check(pxp, obj);
}
static inline bool
i915_gem_object_is_protected(const struct drm_i915_gem_object *obj)
{
- return false;
+ return xe_bo_is_protected(obj);
}
#endif
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 56a089aa3916..0f591b7d93b1 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -6,6 +6,7 @@
#include "xe_bo.h"
#include <linux/dma-buf.h>
+#include <linux/nospec.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem_ttm_helper.h>
@@ -24,6 +25,7 @@
#include "xe_migrate.h"
#include "xe_pm.h"
#include "xe_preempt_fence.h"
+#include "xe_pxp.h"
#include "xe_res_cursor.h"
#include "xe_trace_bo.h"
#include "xe_ttm_stolen_mgr.h"
@@ -1949,6 +1951,95 @@ void xe_bo_vunmap(struct xe_bo *bo)
__xe_bo_vunmap(bo);
}
+static int gem_create_set_pxp_type(struct xe_device *xe, struct xe_bo *bo, u64 value)
+{
+ if (value == DRM_XE_PXP_TYPE_NONE)
+ return 0;
+
+ /* we only support DRM_XE_PXP_TYPE_HWDRM for now */
+ if (XE_IOCTL_DBG(xe, value != DRM_XE_PXP_TYPE_HWDRM))
+ return -EINVAL;
+
+ xe_pxp_key_assign(xe->pxp, bo);
+
+ return 0;
+}
+
+typedef int (*xe_gem_create_set_property_fn)(struct xe_device *xe,
+ struct xe_bo *bo,
+ u64 value);
+
+static const xe_gem_create_set_property_fn gem_create_set_property_funcs[] = {
+ [DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY] = gem_create_set_pxp_type,
+};
+
+static int gem_create_user_ext_set_property(struct xe_device *xe,
+ struct xe_bo *bo,
+ u64 extension)
+{
+ u64 __user *address = u64_to_user_ptr(extension);
+ struct drm_xe_ext_set_property ext;
+ int err;
+ u32 idx;
+
+ err = __copy_from_user(&ext, address, sizeof(ext));
+ if (XE_IOCTL_DBG(xe, err))
+ return -EFAULT;
+
+ if (XE_IOCTL_DBG(xe, ext.property >=
+ ARRAY_SIZE(gem_create_set_property_funcs)) ||
+ XE_IOCTL_DBG(xe, ext.pad) ||
+ XE_IOCTL_DBG(xe, ext.property != DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY))
+ return -EINVAL;
+
+ idx = array_index_nospec(ext.property, ARRAY_SIZE(gem_create_set_property_funcs));
+ if (!gem_create_set_property_funcs[idx])
+ return -EINVAL;
+
+ return gem_create_set_property_funcs[idx](xe, bo, ext.value);
+}
+
+typedef int (*xe_gem_create_user_extension_fn)(struct xe_device *xe,
+ struct xe_bo *bo,
+ u64 extension);
+
+static const xe_gem_create_user_extension_fn gem_create_user_extension_funcs[] = {
+ [DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY] = gem_create_user_ext_set_property,
+};
+
+#define MAX_USER_EXTENSIONS 16
+static int gem_create_user_extensions(struct xe_device *xe, struct xe_bo *bo,
+ u64 extensions, int ext_number)
+{
+ u64 __user *address = u64_to_user_ptr(extensions);
+ struct drm_xe_user_extension ext;
+ int err;
+ u32 idx;
+
+ if (XE_IOCTL_DBG(xe, ext_number >= MAX_USER_EXTENSIONS))
+ return -E2BIG;
+
+ err = __copy_from_user(&ext, address, sizeof(ext));
+ if (XE_IOCTL_DBG(xe, err))
+ return -EFAULT;
+
+ if (XE_IOCTL_DBG(xe, ext.pad) ||
+ XE_IOCTL_DBG(xe, ext.name >= ARRAY_SIZE(gem_create_user_extension_funcs)))
+ return -EINVAL;
+
+ idx = array_index_nospec(ext.name,
+ ARRAY_SIZE(gem_create_user_extension_funcs));
+ err = gem_create_user_extension_funcs[idx](xe, bo, extensions);
+ if (XE_IOCTL_DBG(xe, err))
+ return err;
+
+ if (ext.next_extension)
+ return gem_create_user_extensions(xe, bo, ext.next_extension,
+ ++ext_number);
+
+ return 0;
+}
+
int xe_gem_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)
{
@@ -1961,8 +2052,7 @@ int xe_gem_create_ioctl(struct drm_device *dev, void *data,
u32 handle;
int err;
- if (XE_IOCTL_DBG(xe, args->extensions) ||
- XE_IOCTL_DBG(xe, args->pad[0] || args->pad[1] || args->pad[2]) ||
+ if (XE_IOCTL_DBG(xe, args->pad[0] || args->pad[1] || args->pad[2]) ||
XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
return -EINVAL;
@@ -2037,6 +2127,12 @@ int xe_gem_create_ioctl(struct drm_device *dev, void *data,
goto out_vm;
}
+ if (args->extensions) {
+ err = gem_create_user_extensions(xe, bo, args->extensions, 0);
+ if (err)
+ goto out_bulk;
+ }
+
err = drm_gem_handle_create(file, &bo->ttm.base, &handle);
if (err)
goto out_bulk;
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index 1c9dc8adaaa3..721f7dc35aac 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -171,6 +171,11 @@ static inline bool xe_bo_is_pinned(struct xe_bo *bo)
return bo->ttm.pin_count;
}
+static inline bool xe_bo_is_protected(const struct xe_bo *bo)
+{
+ return bo->pxp_key_instance;
+}
+
static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
{
if (likely(bo)) {
diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
index ebc8abf7930a..8668e0374b18 100644
--- a/drivers/gpu/drm/xe/xe_bo_types.h
+++ b/drivers/gpu/drm/xe/xe_bo_types.h
@@ -56,6 +56,9 @@ struct xe_bo {
*/
struct list_head client_link;
#endif
+ /** @pxp_key_instance: key instance this bo was created against (if any) */
+ u32 pxp_key_instance;
+
/** @freed: List node for delayed put. */
struct llist_node freed;
/** @update_index: Update index if PT BO */
diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
index f36980aa26e6..aa4f2fe2e131 100644
--- a/drivers/gpu/drm/xe/xe_exec.c
+++ b/drivers/gpu/drm/xe/xe_exec.c
@@ -250,6 +250,12 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
goto err_exec;
}
+ if (xe_exec_queue_uses_pxp(q)) {
+ err = xe_vm_validate_protected(q->vm);
+ if (err)
+ goto err_exec;
+ }
+
job = xe_sched_job_create(q, xe_exec_queue_is_parallel(q) ?
addresses : &args->address);
if (IS_ERR(job)) {
diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
index ca4302af4ced..640e62d1d5d7 100644
--- a/drivers/gpu/drm/xe/xe_pxp.c
+++ b/drivers/gpu/drm/xe/xe_pxp.c
@@ -8,6 +8,8 @@
#include <drm/drm_managed.h>
#include <drm/xe_drm.h>
+#include "xe_bo.h"
+#include "xe_bo_types.h"
#include "xe_device_types.h"
#include "xe_exec_queue.h"
#include "xe_exec_queue_types.h"
@@ -132,6 +134,9 @@ static void pxp_terminate(struct xe_pxp *pxp)
pxp_invalidate_queues(pxp);
+ if (pxp->status == XE_PXP_ACTIVE)
+ pxp->key_instance++;
+
/*
* If we have a termination already in progress, we need to wait for
* it to complete before queueing another one. We update the state
@@ -343,6 +348,8 @@ int xe_pxp_init(struct xe_device *xe)
pxp->xe = xe;
pxp->gt = gt;
+ pxp->key_instance = 1;
+
/*
* we'll use the completion to check if there is a termination pending,
* so we start it as completed and we reinit it when a termination
@@ -574,3 +581,70 @@ static void pxp_invalidate_queues(struct xe_pxp *pxp)
spin_unlock_irq(&pxp->queues.lock);
}
+/**
+ * xe_pxp_key_assign - mark a BO as using the current PXP key iteration
+ * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
+ * @bo: the BO to mark
+ *
+ * Returns: -ENODEV if PXP is disabled, 0 otherwise.
+ */
+int xe_pxp_key_assign(struct xe_pxp *pxp, struct xe_bo *bo)
+{
+ if (!xe_pxp_is_enabled(pxp))
+ return -ENODEV;
+
+ xe_assert(pxp->xe, !bo->pxp_key_instance);
+
+ /*
+ * Note that the PXP key handling is inherently racy, because the key
+ * can theoretically change at any time (although it's unlikely to do
+ * so without triggers), even right after we copy it. Taking a lock
+ * wouldn't help because the value might still change as soon as we
+ * release the lock.
+ * Userspace needs to handle the fact that their BOs can go invalid at
+ * any point.
+ */
+ bo->pxp_key_instance = pxp->key_instance;
+
+ return 0;
+}
+
+/**
+ * xe_pxp_key_check - check if the key used by a BO is valid
+ * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
+ * @bo: the BO we want to check
+ *
+ * Checks whether a BO was encrypted with the current key or an obsolete one.
+ *
+ * Returns: 0 if the key is valid, -ENODEV if PXP is disabled, -EINVAL if the
+ * BO is not using PXP, -ENOEXEC if the key is not valid.
+ */
+int xe_pxp_key_check(struct xe_pxp *pxp, struct xe_bo *bo)
+{
+ if (!xe_pxp_is_enabled(pxp))
+ return -ENODEV;
+
+ if (!xe_bo_is_protected(bo))
+ return -EINVAL;
+
+ xe_assert(pxp->xe, bo->pxp_key_instance);
+
+ /*
+ * Note that the PXP key handling is inherently racy, because the key
+ * can theoretically change at any time (although it's unlikely to do
+ * so without triggers), even right after we check it. Taking a lock
+ * wouldn't help because the value might still change as soon as we
+ * release the lock.
+ * We mitigate the risk by checking the key at multiple points (on each
+ * submission involving the BO and right before flipping it on the
+ * display), but there is still a very small chance that we could
+ * operate on an invalid BO for a single submission or a single frame
+ * flip. This is a compromise made to protect the encrypted data (which
+ * is what the key termination is for).
+ */
+ if (bo->pxp_key_instance != pxp->key_instance)
+ return -ENOEXEC;
+
+ return 0;
+}
+
diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
index 868813cc84b9..2d22a6e6ab27 100644
--- a/drivers/gpu/drm/xe/xe_pxp.h
+++ b/drivers/gpu/drm/xe/xe_pxp.h
@@ -8,6 +8,7 @@
#include <linux/types.h>
+struct xe_bo;
struct xe_device;
struct xe_exec_queue;
struct xe_pxp;
@@ -23,4 +24,7 @@ int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 t
int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q);
void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q);
+int xe_pxp_key_assign(struct xe_pxp *pxp, struct xe_bo *bo);
+int xe_pxp_key_check(struct xe_pxp *pxp, struct xe_bo *bo);
+
#endif /* __XE_PXP_H__ */
diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
index eb6a0183320a..1bb747837f86 100644
--- a/drivers/gpu/drm/xe/xe_pxp_types.h
+++ b/drivers/gpu/drm/xe/xe_pxp_types.h
@@ -108,6 +108,9 @@ struct xe_pxp {
/** @queues.list: list of exec_queues that use PXP */
struct list_head list;
} queues;
+
+ /** @key_instance: keep track of the current iteration of the PXP key */
+ u32 key_instance;
};
#endif /* __XE_PXP_TYPES_H__ */
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 56f105797ae6..1011d643ebb8 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -34,6 +34,7 @@
#include "xe_pm.h"
#include "xe_preempt_fence.h"
#include "xe_pt.h"
+#include "xe_pxp.h"
#include "xe_res_cursor.h"
#include "xe_sync.h"
#include "xe_trace_bo.h"
@@ -2754,7 +2755,8 @@ static struct dma_fence *vm_bind_ioctl_ops_execute(struct xe_vm *vm,
(DRM_XE_VM_BIND_FLAG_READONLY | \
DRM_XE_VM_BIND_FLAG_IMMEDIATE | \
DRM_XE_VM_BIND_FLAG_NULL | \
- DRM_XE_VM_BIND_FLAG_DUMPABLE)
+ DRM_XE_VM_BIND_FLAG_DUMPABLE | \
+ DRM_XE_VM_BIND_FLAG_CHECK_PXP)
#ifdef TEST_VM_OPS_ERROR
#define SUPPORTED_FLAGS (SUPPORTED_FLAGS_STUB | FORCE_OP_ERROR)
@@ -2916,7 +2918,7 @@ static void xe_vma_ops_init(struct xe_vma_ops *vops, struct xe_vm *vm,
static int xe_vm_bind_ioctl_validate_bo(struct xe_device *xe, struct xe_bo *bo,
u64 addr, u64 range, u64 obj_offset,
- u16 pat_index)
+ u16 pat_index, u32 op, u32 bind_flags)
{
u16 coh_mode;
@@ -2951,6 +2953,12 @@ static int xe_vm_bind_ioctl_validate_bo(struct xe_device *xe, struct xe_bo *bo,
return -EINVAL;
}
+ /* If a BO is protected it must be valid to be mapped */
+ if ((bind_flags & DRM_XE_VM_BIND_FLAG_CHECK_PXP) && xe_bo_is_protected(bo) &&
+ op != DRM_XE_VM_BIND_OP_UNMAP && op != DRM_XE_VM_BIND_OP_UNMAP_ALL)
+ if (XE_IOCTL_DBG(xe, xe_pxp_key_check(xe->pxp, bo) != 0))
+ return -ENOEXEC;
+
return 0;
}
@@ -3038,6 +3046,8 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
u32 obj = bind_ops[i].obj;
u64 obj_offset = bind_ops[i].obj_offset;
u16 pat_index = bind_ops[i].pat_index;
+ u32 op = bind_ops[i].op;
+ u32 bind_flags = bind_ops[i].flags;
if (!obj)
continue;
@@ -3050,7 +3060,8 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
bos[i] = gem_to_xe_bo(gem_obj);
err = xe_vm_bind_ioctl_validate_bo(xe, bos[i], addr, range,
- obj_offset, pat_index);
+ obj_offset, pat_index, op,
+ bind_flags);
if (err)
goto put_obj;
}
@@ -3343,6 +3354,35 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
return ret;
}
+int xe_vm_validate_protected(struct xe_vm *vm)
+{
+ struct drm_gpuva *gpuva;
+ int err = 0;
+
+ if (!vm)
+ return -ENODEV;
+
+ mutex_lock(&vm->snap_mutex);
+
+ drm_gpuvm_for_each_va(gpuva, &vm->gpuvm) {
+ struct xe_vma *vma = gpuva_to_vma(gpuva);
+ struct xe_bo *bo = vma->gpuva.gem.obj ?
+ gem_to_xe_bo(vma->gpuva.gem.obj) : NULL;
+
+ if (!bo)
+ continue;
+
+ if (xe_bo_is_protected(bo)) {
+ err = xe_pxp_key_check(vm->xe->pxp, bo);
+ if (err)
+ break;
+ }
+ }
+
+ mutex_unlock(&vm->snap_mutex);
+ return err;
+}
+
struct xe_vm_snapshot {
unsigned long num_snaps;
struct {
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index bfc19e8113c3..dd51c9790dab 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -216,6 +216,8 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma,
int xe_vm_invalidate_vma(struct xe_vma *vma);
+int xe_vm_validate_protected(struct xe_vm *vm);
+
static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
{
xe_assert(vm->xe, xe_vm_in_preempt_fence_mode(vm));
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 9972ceb3fbfb..335febe03e40 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -776,8 +776,23 @@ struct drm_xe_device_query {
* - %DRM_XE_GEM_CPU_CACHING_WC - Allocate the pages as write-combined. This
* is uncached. Scanout surfaces should likely use this. All objects
* that can be placed in VRAM must use this.
+ *
+ * This ioctl supports setting the following properties via the
+ * %DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY extension, which uses the
+ * generic @drm_xe_ext_set_property struct:
+ *
+ * - %DRM_XE_GEM_CREATE_SET_PROPERTY_PXP_TYPE - set the type of PXP session
+ * this object will be used with. Valid values are listed in enum
+ * drm_xe_pxp_session_type. %DRM_XE_PXP_TYPE_NONE is the default behavior, so
there is no need to explicitly set that. Objects used with a session of type
+ * %DRM_XE_PXP_TYPE_HWDRM will be marked as invalid if a PXP invalidation
+ * event occurs after their creation. Attempting to flip an invalid object
+ * will cause a black frame to be displayed instead. Submissions with invalid
+ * objects mapped in the VM will be rejected.
*/
struct drm_xe_gem_create {
+#define DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY 0
+#define DRM_XE_GEM_CREATE_SET_PROPERTY_PXP_TYPE 0
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
@@ -939,6 +954,9 @@ struct drm_xe_vm_destroy {
* will only be valid for DRM_XE_VM_BIND_OP_MAP operations, the BO
* handle MBZ, and the BO offset MBZ. This flag is intended to
* implement VK sparse bindings.
+ * - %DRM_XE_VM_BIND_FLAG_CHECK_PXP - If the object is encrypted via PXP,
+ * reject the binding if the encryption key is no longer valid. This
+ * flag has no effect on BOs that are not marked as using PXP.
*/
struct drm_xe_vm_bind_op {
/** @extensions: Pointer to the first extension struct, if any */
@@ -1029,6 +1047,7 @@ struct drm_xe_vm_bind_op {
#define DRM_XE_VM_BIND_FLAG_IMMEDIATE (1 << 1)
#define DRM_XE_VM_BIND_FLAG_NULL (1 << 2)
#define DRM_XE_VM_BIND_FLAG_DUMPABLE (1 << 3)
+#define DRM_XE_VM_BIND_FLAG_CHECK_PXP (1 << 4)
/** @flags: Bind flags */
__u32 flags;
--
2.43.0
^ permalink raw reply related [flat|nested] 54+ messages in thread
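[Editor's sketch] The invalidation scheme in this patch boils down to comparing a per-BO key instance against a global counter. Below is a standalone model of the xe_pxp_key_assign()/xe_pxp_key_check() logic, not the kernel code itself (the enabled checks, asserts and locking are left out):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Minimal model of the PXP key-instance bookkeeping. */
struct pxp_model { uint32_t key_instance; };     /* bumped when an active key dies */
struct bo_model  { uint32_t pxp_key_instance; }; /* 0 means "not a PXP object" */

static void key_assign(const struct pxp_model *pxp, struct bo_model *bo)
{
	bo->pxp_key_instance = pxp->key_instance;
}

static int key_check(const struct pxp_model *pxp, const struct bo_model *bo)
{
	if (!bo->pxp_key_instance)
		return -EINVAL;  /* BO is not using PXP */
	if (bo->pxp_key_instance != pxp->key_instance)
		return -ENOEXEC; /* encrypted with an obsolete key */
	return 0;
}
```

A single increment of the global counter (done on termination of an active session) is enough to invalidate every previously assigned BO at once, which is why no per-BO bookkeeping is needed on termination.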
* [PATCH v2 10/12] drm/xe/pxp: add PXP PM support
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (8 preceding siblings ...)
2024-08-16 19:00 ` [PATCH v2 09/12] drm/xe/pxp: Add API to mark a BO as using PXP Daniele Ceraolo Spurio
@ 2024-08-16 19:00 ` Daniele Ceraolo Spurio
2024-08-26 21:55 ` Daniele Ceraolo Spurio
2024-10-09 1:12 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 11/12] drm/xe/pxp: Add PXP debugfs support Daniele Ceraolo Spurio
` (10 subsequent siblings)
20 siblings, 2 replies; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-16 19:00 UTC (permalink / raw)
To: intel-xe; +Cc: Daniele Ceraolo Spurio
The HW suspend flow kills all PXP HWDRM sessions, so if there was any
PXP activity before the suspend we need to trigger a full termination on
suspend.
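[Editor's sketch] The suspend/resume state handling this patch adds can be modeled in isolation as below. This is a simplified sketch, not the kernel code: the real xe_pxp.c also takes pxp->mutex, waits on the termination completion and handles the ERROR state.

```c
#include <assert.h>
#include <stdint.h>

enum pxp_status {
	PXP_READY_TO_START,
	PXP_ACTIVE,
	PXP_NEEDS_TERMINATION,
	PXP_SUSPENDED,
};

struct pxp_model {
	enum pxp_status status;
	uint32_t key_instance;              /* bumped when an active key dies */
	uint32_t last_suspend_key_instance; /* snapshot taken at suspend time */
};

static void model_terminate(struct pxp_model *p)
{
	if (p->status == PXP_ACTIVE)
		p->key_instance++;          /* BOs using the old key go invalid */
	if (p->status == PXP_SUSPENDED)
		return;                     /* deferred: handled on resume */
	p->status = PXP_NEEDS_TERMINATION;
}

static void model_pm_suspend(struct pxp_model *p)
{
	switch (p->status) {
	case PXP_NEEDS_TERMINATION:
		/* if PXP was never used since the last suspend, skip cleanup */
		if (p->key_instance == p->last_suspend_key_instance)
			break;
		/* fallthrough */
	case PXP_ACTIVE:
		model_terminate(p);
		break;
	default:
		break;                      /* READY_TO_START, SUSPENDED: nothing */
	}
	p->status = PXP_SUSPENDED;
	p->last_suspend_key_instance = p->key_instance;
}

static void model_pm_resume(struct pxp_model *p)
{
	/* the HW suspend flow killed any HWDRM session */
	p->status = PXP_NEEDS_TERMINATION;
}
```

The last_suspend_key_instance snapshot is what lets a suspend/resume cycle with no PXP activity in between skip the (expensive) termination entirely.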
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
drivers/gpu/drm/xe/xe_pm.c | 42 +++++++++++---
drivers/gpu/drm/xe/xe_pxp.c | 92 ++++++++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_pxp.h | 3 +
drivers/gpu/drm/xe/xe_pxp_types.h | 9 ++-
4 files changed, 134 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
index 9f3c14fd9f33..1e1f87ec03a2 100644
--- a/drivers/gpu/drm/xe/xe_pm.c
+++ b/drivers/gpu/drm/xe/xe_pm.c
@@ -20,6 +20,7 @@
#include "xe_guc.h"
#include "xe_irq.h"
#include "xe_pcode.h"
+#include "xe_pxp.h"
#include "xe_trace.h"
#include "xe_wa.h"
@@ -90,22 +91,24 @@ int xe_pm_suspend(struct xe_device *xe)
drm_dbg(&xe->drm, "Suspending device\n");
trace_xe_pm_suspend(xe, __builtin_return_address(0));
+ err = xe_pxp_pm_suspend(xe->pxp);
+ if (err)
+ goto err;
+
for_each_gt(gt, xe, id)
xe_gt_suspend_prepare(gt);
/* FIXME: Super racey... */
err = xe_bo_evict_all(xe);
if (err)
- goto err;
+ goto err_pxp;
xe_display_pm_suspend(xe, false);
for_each_gt(gt, xe, id) {
err = xe_gt_suspend(gt);
- if (err) {
- xe_display_pm_resume(xe, false);
- goto err;
- }
+ if (err)
+ goto err_display;
}
xe_irq_suspend(xe);
@@ -114,6 +117,11 @@ int xe_pm_suspend(struct xe_device *xe)
drm_dbg(&xe->drm, "Device suspended\n");
return 0;
+
+err_display:
+ xe_display_pm_resume(xe, false);
+err_pxp:
+ xe_pxp_pm_resume(xe->pxp);
err:
drm_dbg(&xe->drm, "Device suspend failed %d\n", err);
return err;
@@ -163,6 +171,8 @@ int xe_pm_resume(struct xe_device *xe)
if (err)
goto err;
+ xe_pxp_pm_resume(xe->pxp);
+
drm_dbg(&xe->drm, "Device resumed\n");
return 0;
err:
@@ -356,6 +366,10 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
*/
lock_map_acquire(&xe_pm_runtime_lockdep_map);
+ err = xe_pxp_pm_suspend(xe->pxp);
+ if (err)
+ goto out;
+
/*
* Applying lock for entire list op as xe_ttm_bo_destroy and xe_bo_move_notify
* also checks and deletes bo entry from user fault list.
@@ -369,23 +383,30 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
if (xe->d3cold.allowed) {
err = xe_bo_evict_all(xe);
if (err)
- goto out;
+ goto out_pxp;
xe_display_pm_suspend(xe, true);
}
for_each_gt(gt, xe, id) {
err = xe_gt_suspend(gt);
if (err)
- goto out;
+ goto out_display;
}
xe_irq_suspend(xe);
if (xe->d3cold.allowed)
xe_display_pm_suspend_late(xe);
+
+ lock_map_release(&xe_pm_runtime_lockdep_map);
+ xe_pm_write_callback_task(xe, NULL);
+ return 0;
+
+out_display:
+ xe_display_pm_resume(xe, true);
+out_pxp:
+ xe_pxp_pm_resume(xe->pxp);
out:
- if (err)
- xe_display_pm_resume(xe, true);
lock_map_release(&xe_pm_runtime_lockdep_map);
xe_pm_write_callback_task(xe, NULL);
return err;
@@ -436,6 +457,9 @@ int xe_pm_runtime_resume(struct xe_device *xe)
if (err)
goto out;
}
+
+ xe_pxp_pm_resume(xe->pxp);
+
out:
lock_map_release(&xe_pm_runtime_lockdep_map);
xe_pm_write_callback_task(xe, NULL);
diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
index 640e62d1d5d7..78373cbbe0d4 100644
--- a/drivers/gpu/drm/xe/xe_pxp.c
+++ b/drivers/gpu/drm/xe/xe_pxp.c
@@ -137,6 +137,13 @@ static void pxp_terminate(struct xe_pxp *pxp)
if (pxp->status == XE_PXP_ACTIVE)
pxp->key_instance++;
+ /*
+ * we'll mark the status as needing termination on resume, so no need to
+ * emit a termination now.
+ */
+ if (pxp->status == XE_PXP_SUSPENDED)
+ return;
+
/*
* If we have a termination already in progress, we need to wait for
* it to complete before queueing another one. We update the state
@@ -181,17 +188,19 @@ static void pxp_terminate(struct xe_pxp *pxp)
static void pxp_terminate_complete(struct xe_pxp *pxp)
{
/*
- * We expect PXP to be in one of 2 states when we get here:
+ * We expect PXP to be in one of 3 states when we get here:
* - XE_PXP_TERMINATION_IN_PROGRESS: a single termination event was
* requested and it is now completing, so we're ready to start.
* - XE_PXP_NEEDS_TERMINATION: a second termination was requested while
* the first one was still being processed; we don't update the state
* in this case so the pxp_start code will automatically issue that
* second termination.
+ * - XE_PXP_SUSPENDED: PXP is now suspended, so we defer everything to
+ * when we come back on resume.
*/
if (pxp->status == XE_PXP_TERMINATION_IN_PROGRESS)
pxp->status = XE_PXP_READY_TO_START;
- else if (pxp->status != XE_PXP_NEEDS_TERMINATION)
+ else if (pxp->status != XE_PXP_NEEDS_TERMINATION && pxp->status != XE_PXP_SUSPENDED)
drm_err(&pxp->xe->drm,
"PXP termination complete while status was %u\n",
pxp->status);
@@ -505,6 +514,7 @@ int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q)
pxp_terminate(pxp);
mutex_unlock(&pxp->mutex);
goto wait_for_termination;
+ case XE_PXP_SUSPENDED:
default:
drm_err(&pxp->xe->drm, "unexpected state during PXP start: %u", pxp->status);
ret = -EIO;
@@ -648,3 +658,81 @@ int xe_pxp_key_check(struct xe_pxp *pxp, struct xe_bo *bo)
return 0;
}
+int xe_pxp_pm_suspend(struct xe_pxp *pxp)
+{
+ int ret = 0;
+
+ if (!xe_pxp_is_enabled(pxp))
+ return 0;
+
+ mutex_lock(&pxp->mutex);
+
+ /* if the termination is already in progress, no need to re-emit it */
+ if (!completion_done(&pxp->termination))
+ goto mark_suspended;
+
+ switch (pxp->status) {
+ case XE_PXP_ERROR:
+ case XE_PXP_READY_TO_START:
+ case XE_PXP_SUSPENDED:
+ /* nothing to cleanup */
+ break;
+ case XE_PXP_NEEDS_TERMINATION:
+ /* If PXP was never used we can skip the cleanup */
+ if (pxp->key_instance == pxp->last_suspend_key_instance)
+ break;
+ fallthrough;
+ case XE_PXP_ACTIVE:
+ pxp_terminate(pxp);
+ break;
+ default:
+ drm_err(&pxp->xe->drm, "unexpected state during PXP suspend: %u",
+ pxp->status);
+ ret = -EIO;
+ goto out;
+ }
+
+mark_suspended:
+ /*
+ * We set this even if we were in error state, hoping the suspend clears
+ * the error. Worst case, we fail again and go back to the error state.
+ */
+ pxp->status = XE_PXP_SUSPENDED;
+
+ mutex_unlock(&pxp->mutex);
+
+ /*
+ * If there is a termination in progress, wait for it.
+ * We need to wait outside the lock because the completion is done from
+ * within the lock.
+ */
+ if (!wait_for_completion_timeout(&pxp->termination,
+ msecs_to_jiffies(PXP_TERMINATION_TIMEOUT_MS)))
+ ret = -ETIMEDOUT;
+
+ pxp->last_suspend_key_instance = pxp->key_instance;
+
+out:
+ return ret;
+}
+
+void xe_pxp_pm_resume(struct xe_pxp *pxp)
+{
+ int err;
+
+ if (!xe_pxp_is_enabled(pxp))
+ return;
+
+ err = kcr_pxp_enable(pxp);
+
+ mutex_lock(&pxp->mutex);
+
+ xe_assert(pxp->xe, pxp->status == XE_PXP_SUSPENDED);
+
+ if (err)
+ pxp->status = XE_PXP_ERROR;
+ else
+ pxp->status = XE_PXP_NEEDS_TERMINATION;
+
+ mutex_unlock(&pxp->mutex);
+}
diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
index 2d22a6e6ab27..af32c2616641 100644
--- a/drivers/gpu/drm/xe/xe_pxp.h
+++ b/drivers/gpu/drm/xe/xe_pxp.h
@@ -20,6 +20,9 @@ int xe_pxp_get_readiness_status(struct xe_pxp *pxp);
int xe_pxp_init(struct xe_device *xe);
void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
+int xe_pxp_pm_suspend(struct xe_pxp *pxp);
+void xe_pxp_pm_resume(struct xe_pxp *pxp);
+
int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 type);
int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q);
void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q);
diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
index 1bb747837f86..942f2fa40a58 100644
--- a/drivers/gpu/drm/xe/xe_pxp_types.h
+++ b/drivers/gpu/drm/xe/xe_pxp_types.h
@@ -24,7 +24,8 @@ enum xe_pxp_status {
XE_PXP_NEEDS_TERMINATION = 0, /* starting status */
XE_PXP_TERMINATION_IN_PROGRESS,
XE_PXP_READY_TO_START,
- XE_PXP_ACTIVE
+ XE_PXP_ACTIVE,
+ XE_PXP_SUSPENDED
};
/**
@@ -111,6 +112,12 @@ struct xe_pxp {
/** @key_instance: keep track of the current iteration of the PXP key */
u32 key_instance;
+ /**
+ * @last_suspend_key_instance: value of key_instance at the last
+ * suspend. Used to check if any PXP session has been created between
+ * suspend cycles.
+ */
+ u32 last_suspend_key_instance;
};
#endif /* __XE_PXP_TYPES_H__ */
--
2.43.0
^ permalink raw reply related [flat|nested] 54+ messages in thread
* [PATCH v2 11/12] drm/xe/pxp: Add PXP debugfs support
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (9 preceding siblings ...)
2024-08-16 19:00 ` [PATCH v2 10/12] drm/xe/pxp: add PXP PM support Daniele Ceraolo Spurio
@ 2024-08-16 19:00 ` Daniele Ceraolo Spurio
2024-10-09 1:26 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 12/12] drm/xe/pxp: Enable PXP for MTL and LNL Daniele Ceraolo Spurio
` (9 subsequent siblings)
20 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-16 19:00 UTC (permalink / raw)
To: intel-xe; +Cc: Daniele Ceraolo Spurio
This patch introduces two PXP debugfs entries:
- info: prints the current PXP status and key instance
- terminate: simulates a termination interrupt
The first is useful for debugging, while the second can be used to test
the termination flow.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
drivers/gpu/drm/xe/Makefile | 1 +
drivers/gpu/drm/xe/xe_debugfs.c | 3 +
drivers/gpu/drm/xe/xe_pxp_debugfs.c | 120 ++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_pxp_debugfs.h | 13 +++
4 files changed, 137 insertions(+)
create mode 100644 drivers/gpu/drm/xe/xe_pxp_debugfs.c
create mode 100644 drivers/gpu/drm/xe/xe_pxp_debugfs.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index a508b9166b88..7cc65f419710 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -84,6 +84,7 @@ xe-y += xe_bb.o \
xe_pt.o \
xe_pt_walk.o \
xe_pxp.o \
+ xe_pxp_debugfs.o \
xe_pxp_submit.o \
xe_query.o \
xe_range_fence.o \
diff --git a/drivers/gpu/drm/xe/xe_debugfs.c b/drivers/gpu/drm/xe/xe_debugfs.c
index 1011e5d281fa..a04f9c2d886b 100644
--- a/drivers/gpu/drm/xe/xe_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_debugfs.c
@@ -17,6 +17,7 @@
#include "xe_gt_printk.h"
#include "xe_guc_ads.h"
#include "xe_pm.h"
+#include "xe_pxp_debugfs.h"
#include "xe_sriov.h"
#include "xe_step.h"
@@ -214,6 +215,8 @@ void xe_debugfs_register(struct xe_device *xe)
for_each_gt(gt, xe, id)
xe_gt_debugfs_register(gt);
+ xe_pxp_debugfs_register(xe->pxp);
+
#ifdef CONFIG_FAULT_INJECTION
fault_create_debugfs_attr("fail_gt_reset", root, >_reset_failure);
#endif
diff --git a/drivers/gpu/drm/xe/xe_pxp_debugfs.c b/drivers/gpu/drm/xe/xe_pxp_debugfs.c
new file mode 100644
index 000000000000..00c8179a9f0f
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_pxp_debugfs.c
@@ -0,0 +1,120 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#include "xe_pxp_debugfs.h"
+
+#include <linux/debugfs.h>
+
+#include <drm/drm_debugfs.h>
+#include <drm/drm_managed.h>
+#include <drm/drm_print.h>
+
+#include "xe_device.h"
+#include "xe_pxp.h"
+#include "xe_pxp_types.h"
+#include "regs/xe_gt_regs.h"
+
+static struct xe_pxp *node_to_pxp(struct drm_info_node *node)
+{
+ return node->info_ent->data;
+}
+
+static const char *pxp_status_to_str(struct xe_pxp *pxp)
+{
+ lockdep_assert_held(&pxp->mutex);
+
+ switch (pxp->status) {
+ case XE_PXP_ERROR:
+ return "error";
+ case XE_PXP_NEEDS_TERMINATION:
+ return "needs termination";
+ case XE_PXP_TERMINATION_IN_PROGRESS:
+ return "termination in progress";
+ case XE_PXP_READY_TO_START:
+ return "ready to start";
+ case XE_PXP_ACTIVE:
+ return "active";
+ case XE_PXP_SUSPENDED:
+ return "suspended";
+ default:
+ return "unknown";
+ }
+}
+
+static int pxp_info(struct seq_file *m, void *data)
+{
+ struct xe_pxp *pxp = node_to_pxp(m->private);
+ struct drm_printer p = drm_seq_file_printer(m);
+ const char *status;
+
+ if (!xe_pxp_is_enabled(pxp))
+ return -ENODEV;
+
+ mutex_lock(&pxp->mutex);
+ status = pxp_status_to_str(pxp);
+ mutex_unlock(&pxp->mutex);
+
+ drm_printf(&p, "status: %s\n", status);
+ drm_printf(&p, "instance counter: %u\n", pxp->key_instance);
+
+ return 0;
+}
+
+static int pxp_terminate(struct seq_file *m, void *data)
+{
+ struct xe_pxp *pxp = node_to_pxp(m->private);
+ struct drm_printer p = drm_seq_file_printer(m);
+
+ if (!xe_pxp_is_enabled(pxp))
+ return -ENODEV;
+
+ /* simulate a termination interrupt */
+ spin_lock_irq(&pxp->xe->irq.lock);
+ xe_pxp_irq_handler(pxp->xe, KCR_PXP_STATE_TERMINATED_INTERRUPT);
+ spin_unlock_irq(&pxp->xe->irq.lock);
+
+ drm_printf(&p, "Termination queued\n");
+
+ return 0;
+}
+
+static const struct drm_info_list debugfs_list[] = {
+ {"info", pxp_info, 0},
+ {"terminate", pxp_terminate, 0},
+};
+
+void xe_pxp_debugfs_register(struct xe_pxp *pxp)
+{
+ struct drm_minor *minor;
+ struct drm_info_list *local;
+ struct dentry *root;
+ int i;
+
+ if (!xe_pxp_is_enabled(pxp))
+ return;
+
+ minor = pxp->xe->drm.primary;
+ if (!minor->debugfs_root)
+ return;
+
+#define DEBUGFS_SIZE (ARRAY_SIZE(debugfs_list) * sizeof(struct drm_info_list))
+ local = drmm_kmalloc(&pxp->xe->drm, DEBUGFS_SIZE, GFP_KERNEL);
+ if (!local)
+ return;
+
+ memcpy(local, debugfs_list, DEBUGFS_SIZE);
+#undef DEBUGFS_SIZE
+
+ for (i = 0; i < ARRAY_SIZE(debugfs_list); ++i)
+ local[i].data = pxp;
+
+ root = debugfs_create_dir("pxp", minor->debugfs_root);
+ if (IS_ERR(root))
+ return;
+
+ drm_debugfs_create_files(local,
+ ARRAY_SIZE(debugfs_list),
+ root, minor);
+}
diff --git a/drivers/gpu/drm/xe/xe_pxp_debugfs.h b/drivers/gpu/drm/xe/xe_pxp_debugfs.h
new file mode 100644
index 000000000000..988466aad50b
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_pxp_debugfs.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#ifndef __XE_PXP_DEBUGFS_H__
+#define __XE_PXP_DEBUGFS_H__
+
+struct xe_pxp;
+
+void xe_pxp_debugfs_register(struct xe_pxp *pxp);
+
+#endif /* __XE_PXP_DEBUGFS_H__ */
--
2.43.0
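The two entries added by this patch can be exercised from userspace; a hypothetical usage sketch follows, assuming debugfs is mounted at /sys/kernel/debug and the xe device is DRM minor 0 (both paths are assumptions, not part of the patch):

```shell
# Hypothetical paths: the debugfs mount point and DRM minor are assumptions.
PXP_DIR=/sys/kernel/debug/dri/0/pxp

# Print the current PXP status and key instance counter.
if [ -r "$PXP_DIR/info" ]; then
    cat "$PXP_DIR/info"
else
    echo "PXP debugfs not available"
fi

# Simulate a termination interrupt to test the termination flow.
if [ -r "$PXP_DIR/terminate" ]; then
    cat "$PXP_DIR/terminate"
fi
```

Reading `terminate` queues a termination just as a real KCR interrupt would, so the `info` status can be polled afterwards to watch the state machine recover.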
* [PATCH v2 12/12] drm/xe/pxp: Enable PXP for MTL and LNL
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (10 preceding siblings ...)
2024-08-16 19:00 ` [PATCH v2 11/12] drm/xe/pxp: Add PXP debugfs support Daniele Ceraolo Spurio
@ 2024-08-16 19:00 ` Daniele Ceraolo Spurio
2024-10-09 1:27 ` John Harrison
2024-08-16 19:06 ` ✓ CI.Patch_applied: success for Add PXP HWDRM support (rev2) Patchwork
` (8 subsequent siblings)
20 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-16 19:00 UTC (permalink / raw)
To: intel-xe; +Cc: Daniele Ceraolo Spurio
Now that all the pieces are there, we can turn the feature on.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
drivers/gpu/drm/xe/xe_pci.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
index d1453ba20dcd..0807e8a11585 100644
--- a/drivers/gpu/drm/xe/xe_pci.c
+++ b/drivers/gpu/drm/xe/xe_pci.c
@@ -338,11 +338,13 @@ static const struct xe_device_desc mtl_desc = {
.require_force_probe = true,
PLATFORM(METEORLAKE),
.has_display = true,
+ .has_pxp = true,
};
static const struct xe_device_desc lnl_desc = {
PLATFORM(LUNARLAKE),
.has_display = true,
+ .has_pxp = true,
.require_force_probe = true,
};
--
2.43.0
* ✓ CI.Patch_applied: success for Add PXP HWDRM support (rev2)
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (11 preceding siblings ...)
2024-08-16 19:00 ` [PATCH v2 12/12] drm/xe/pxp: Enable PXP for MTL and LNL Daniele Ceraolo Spurio
@ 2024-08-16 19:06 ` Patchwork
2024-08-16 19:07 ` ✗ CI.checkpatch: warning " Patchwork
` (7 subsequent siblings)
20 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-08-16 19:06 UTC (permalink / raw)
To: Daniele Ceraolo Spurio; +Cc: intel-xe
== Series Details ==
Series: Add PXP HWDRM support (rev2)
URL : https://patchwork.freedesktop.org/series/136052/
State : success
== Summary ==
=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: d6dac3db1993 drm-tip: 2024y-08m-16d-18h-30m-38s UTC integration manifest
=== git am output follows ===
.git/rebase-apply/patch:323: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
.git/rebase-apply/patch:660: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
.git/rebase-apply/patch:336: new blank line at EOF.
+
warning: 1 line adds whitespace errors.
Applying: drm/xe/pxp: Initialize PXP structure and KCR reg
Applying: drm/xe/pxp: Allocate PXP execution resources
Applying: drm/xe/pxp: Add VCS inline termination support
Applying: drm/xe/pxp: Add GSC session invalidation support
Applying: drm/xe/pxp: Handle the PXP termination interrupt
Applying: drm/xe/pxp: Add GSC session initialization support
Applying: drm/xe/pxp: Add spport for PXP-using queues
Applying: drm/xe/pxp: add a query for PXP status
Applying: drm/xe/pxp: Add API to mark a BO as using PXP
Applying: drm/xe/pxp: add PXP PM support
Applying: drm/xe/pxp: Add PXP debugfs support
Applying: drm/xe/pxp: Enable PXP for MTL and LNL
* ✗ CI.checkpatch: warning for Add PXP HWDRM support (rev2)
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (12 preceding siblings ...)
2024-08-16 19:06 ` ✓ CI.Patch_applied: success for Add PXP HWDRM support (rev2) Patchwork
@ 2024-08-16 19:07 ` Patchwork
2024-08-16 19:08 ` ✓ CI.KUnit: success " Patchwork
` (6 subsequent siblings)
20 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-08-16 19:07 UTC (permalink / raw)
To: Daniele Ceraolo Spurio; +Cc: intel-xe
== Series Details ==
Series: Add PXP HWDRM support (rev2)
URL : https://patchwork.freedesktop.org/series/136052/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
9fe5037901cabbcdf27a6fe0dfb047ca1474d363
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 4ac4eba87c34f724cace81d81d94b806d9b3175e
Author: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Date: Fri Aug 16 12:00:21 2024 -0700
drm/xe/pxp: Enable PXP for MTL and LNL
Now that are the pieces are there, we can turn the feature on.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
+ /mt/dim checkpatch d6dac3db19935f5939cbb033eea30c90bdf3888c drm-intel
f524ad9b903d drm/xe/pxp: Initialize PXP structure and KCR reg
-:41: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#41:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 245 lines checked
d3ea2bf3ab04 drm/xe/pxp: Allocate PXP execution resources
-:80: WARNING:LONG_LINE: line length of 108 exceeds 100 columns
#80: FILE: drivers/gpu/drm/xe/xe_exec_queue.c:152:
+ xe_assert(xe, !vm || (!!(vm->flags & XE_VM_FLAG_GSC) == !!(hwe->engine_id == XE_HW_ENGINE_GSCCS0)));
-:136: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#136:
new file mode 100644
total: 0 errors, 2 warnings, 0 checks, 566 lines checked
e842580936d5 drm/xe/pxp: Add VCS inline termination support
-:28: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#28:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 207 lines checked
788fb23fdf2f drm/xe/pxp: Add GSC session invalidation support
84b2360b4fd7 drm/xe/pxp: Handle the PXP termination interrupt
6d8ed35f74b3 drm/xe/pxp: Add GSC session initialization support
1b3ccfc32f59 drm/xe/pxp: Add spport for PXP-using queues
99ddd25e86e2 drm/xe/pxp: add a query for PXP status
e4cdd52a24ab drm/xe/pxp: Add API to mark a BO as using PXP
82608cec8960 drm/xe/pxp: add PXP PM support
21fc45006a80 drm/xe/pxp: Add PXP debugfs support
-:50: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#50:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 155 lines checked
4ac4eba87c34 drm/xe/pxp: Enable PXP for MTL and LNL
* ✓ CI.KUnit: success for Add PXP HWDRM support (rev2)
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (13 preceding siblings ...)
2024-08-16 19:07 ` ✗ CI.checkpatch: warning " Patchwork
@ 2024-08-16 19:08 ` Patchwork
2024-08-16 19:23 ` ✓ CI.Build: " Patchwork
` (5 subsequent siblings)
20 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-08-16 19:08 UTC (permalink / raw)
To: Daniele Ceraolo Spurio; +Cc: intel-xe
== Series Details ==
Series: Add PXP HWDRM support (rev2)
URL : https://patchwork.freedesktop.org/series/136052/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[19:07:21] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[19:07:25] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
156 | u64 ioread64_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
163 | u64 ioread64_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
170 | u64 ioread64be_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
178 | u64 ioread64be_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
[19:07:51] Starting KUnit Kernel (1/1)...
[19:07:51] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[19:07:51] =================== guc_dbm (7 subtests) ===================
[19:07:51] [PASSED] test_empty
[19:07:51] [PASSED] test_default
[19:07:51] ======================== test_size ========================
[19:07:51] [PASSED] 4
[19:07:51] [PASSED] 8
[19:07:51] [PASSED] 32
[19:07:51] [PASSED] 256
[19:07:51] ==================== [PASSED] test_size ====================
[19:07:51] ======================= test_reuse ========================
[19:07:51] [PASSED] 4
[19:07:51] [PASSED] 8
[19:07:51] [PASSED] 32
[19:07:51] [PASSED] 256
[19:07:51] =================== [PASSED] test_reuse ====================
[19:07:51] =================== test_range_overlap ====================
[19:07:51] [PASSED] 4
[19:07:51] [PASSED] 8
[19:07:51] [PASSED] 32
[19:07:51] [PASSED] 256
[19:07:51] =============== [PASSED] test_range_overlap ================
[19:07:51] =================== test_range_compact ====================
[19:07:51] [PASSED] 4
[19:07:51] [PASSED] 8
[19:07:51] [PASSED] 32
[19:07:51] [PASSED] 256
[19:07:51] =============== [PASSED] test_range_compact ================
[19:07:51] ==================== test_range_spare =====================
[19:07:51] [PASSED] 4
[19:07:51] [PASSED] 8
[19:07:51] [PASSED] 32
[19:07:51] [PASSED] 256
[19:07:51] ================ [PASSED] test_range_spare =================
[19:07:51] ===================== [PASSED] guc_dbm =====================
[19:07:51] =================== guc_idm (6 subtests) ===================
[19:07:51] [PASSED] bad_init
[19:07:51] [PASSED] no_init
[19:07:51] [PASSED] init_fini
[19:07:51] [PASSED] check_used
[19:07:51] [PASSED] check_quota
[19:07:51] [PASSED] check_all
[19:07:51] ===================== [PASSED] guc_idm =====================
[19:07:51] ================== no_relay (3 subtests) ===================
[19:07:51] [PASSED] xe_drops_guc2pf_if_not_ready
[19:07:51] [PASSED] xe_drops_guc2vf_if_not_ready
[19:07:51] [PASSED] xe_rejects_send_if_not_ready
[19:07:51] ==================== [PASSED] no_relay =====================
[19:07:51] ================== pf_relay (14 subtests) ==================
[19:07:51] [PASSED] pf_rejects_guc2pf_too_short
[19:07:51] [PASSED] pf_rejects_guc2pf_too_long
[19:07:51] [PASSED] pf_rejects_guc2pf_no_payload
[19:07:51] [PASSED] pf_fails_no_payload
[19:07:51] [PASSED] pf_fails_bad_origin
[19:07:51] [PASSED] pf_fails_bad_type
[19:07:51] [PASSED] pf_txn_reports_error
[19:07:51] [PASSED] pf_txn_sends_pf2guc
[19:07:51] [PASSED] pf_sends_pf2guc
[19:07:51] [SKIPPED] pf_loopback_nop
[19:07:51] [SKIPPED] pf_loopback_echo
[19:07:51] [SKIPPED] pf_loopback_fail
[19:07:51] [SKIPPED] pf_loopback_busy
[19:07:51] [SKIPPED] pf_loopback_retry
[19:07:51] ==================== [PASSED] pf_relay =====================
[19:07:51] ================== vf_relay (3 subtests) ===================
[19:07:51] [PASSED] vf_rejects_guc2vf_too_short
[19:07:51] [PASSED] vf_rejects_guc2vf_too_long
[19:07:51] [PASSED] vf_rejects_guc2vf_no_payload
[19:07:51] ==================== [PASSED] vf_relay =====================
[19:07:51] ================= pf_service (11 subtests) =================
[19:07:51] [PASSED] pf_negotiate_any
[19:07:51] [PASSED] pf_negotiate_base_match
[19:07:51] [PASSED] pf_negotiate_base_newer
[19:07:51] [PASSED] pf_negotiate_base_next
[19:07:51] [SKIPPED] pf_negotiate_base_older
[19:07:51] [PASSED] pf_negotiate_base_prev
[19:07:51] [PASSED] pf_negotiate_latest_match
[19:07:51] [PASSED] pf_negotiate_latest_newer
[19:07:51] [PASSED] pf_negotiate_latest_next
[19:07:51] [SKIPPED] pf_negotiate_latest_older
[19:07:51] [SKIPPED] pf_negotiate_latest_prev
[19:07:51] =================== [PASSED] pf_service ====================
[19:07:51] ===================== lmtt (1 subtest) =====================
[19:07:51] ======================== test_ops =========================
[19:07:51] [PASSED] 2-level
[19:07:51] [PASSED] multi-level
[19:07:51] ==================== [PASSED] test_ops =====================
[19:07:51] ====================== [PASSED] lmtt =======================
[19:07:51] =================== xe_mocs (2 subtests) ===================
[19:07:51] ================ xe_live_mocs_kernel_kunit ================
[19:07:51] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[19:07:51] ================ xe_live_mocs_reset_kunit =================
[19:07:51] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[19:07:51] ==================== [SKIPPED] xe_mocs =====================
[19:07:51] ================= xe_migrate (2 subtests) ==================
[19:07:51] ================= xe_migrate_sanity_kunit =================
[19:07:51] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[19:07:51] ================== xe_validate_ccs_kunit ==================
[19:07:51] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[19:07:51] =================== [SKIPPED] xe_migrate ===================
[19:07:51] ================== xe_dma_buf (1 subtest) ==================
[19:07:51] ==================== xe_dma_buf_kunit =====================
[19:07:51] ================ [SKIPPED] xe_dma_buf_kunit ================
[19:07:51] =================== [SKIPPED] xe_dma_buf ===================
[19:07:51] ==================== xe_bo (2 subtests) ====================
[19:07:51] ================== xe_ccs_migrate_kunit ===================
[19:07:51] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[19:07:51] ==================== xe_bo_evict_kunit ====================
[19:07:51] =============== [SKIPPED] xe_bo_evict_kunit ================
[19:07:51] ===================== [SKIPPED] xe_bo ======================
[19:07:51] ==================== args (11 subtests) ====================
[19:07:51] [PASSED] count_args_test
[19:07:51] [PASSED] call_args_example
[19:07:51] [PASSED] call_args_test
[19:07:51] [PASSED] drop_first_arg_example
[19:07:51] [PASSED] drop_first_arg_test
[19:07:51] [PASSED] first_arg_example
[19:07:51] [PASSED] first_arg_test
[19:07:51] [PASSED] last_arg_example
[19:07:51] [PASSED] last_arg_test
[19:07:51] [PASSED] pick_arg_example
[19:07:51] [PASSED] sep_comma_example
[19:07:51] ====================== [PASSED] args =======================
[19:07:51] =================== xe_pci (2 subtests) ====================
stty: 'standard input': Inappropriate ioctl for device
[19:07:51] [PASSED] xe_gmdid_graphics_ip
[19:07:51] [PASSED] xe_gmdid_media_ip
[19:07:51] ===================== [PASSED] xe_pci ======================
[19:07:51] =================== xe_rtp (2 subtests) ====================
[19:07:51] =============== xe_rtp_process_to_sr_tests ================
[19:07:51] [PASSED] coalesce-same-reg
[19:07:51] [PASSED] no-match-no-add
[19:07:51] [PASSED] match-or
[19:07:51] [PASSED] match-or-xfail
[19:07:51] [PASSED] no-match-no-add-multiple-rules
[19:07:51] [PASSED] two-regs-two-entries
[19:07:51] [PASSED] clr-one-set-other
[19:07:51] [PASSED] set-field
[19:07:51] [PASSED] conflict-duplicate
[19:07:51] [PASSED] conflict-not-disjoint
[19:07:51] [PASSED] conflict-reg-type
[19:07:51] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[19:07:51] ================== xe_rtp_process_tests ===================
[19:07:51] [PASSED] active1
[19:07:51] [PASSED] active2
[19:07:51] [PASSED] active-inactive
[19:07:51] [PASSED] inactive-active
[19:07:51] [PASSED] inactive-1st_or_active-inactive
[19:07:51] [PASSED] inactive-2nd_or_active-inactive
[19:07:51] [PASSED] inactive-last_or_active-inactive
[19:07:51] [PASSED] inactive-no_or_active-inactive
[19:07:51] ============== [PASSED] xe_rtp_process_tests ===============
[19:07:51] ===================== [PASSED] xe_rtp ======================
[19:07:51] ==================== xe_wa (1 subtest) =====================
[19:07:51] ======================== xe_wa_gt =========================
[19:07:51] [PASSED] TIGERLAKE (B0)
[19:07:51] [PASSED] DG1 (A0)
[19:07:51] [PASSED] DG1 (B0)
[19:07:51] [PASSED] ALDERLAKE_S (A0)
[19:07:51] [PASSED] ALDERLAKE_S (B0)
[19:07:51] [PASSED] ALDERLAKE_S (C0)
[19:07:51] [PASSED] ALDERLAKE_S (D0)
[19:07:51] [PASSED] ALDERLAKE_P (A0)
[19:07:51] [PASSED] ALDERLAKE_P (B0)
[19:07:51] [PASSED] ALDERLAKE_P (C0)
[19:07:51] [PASSED] ALDERLAKE_S_RPLS (D0)
[19:07:51] [PASSED] ALDERLAKE_P_RPLU (E0)
[19:07:51] [PASSED] DG2_G10 (C0)
[19:07:51] [PASSED] DG2_G11 (B1)
[19:07:51] [PASSED] DG2_G12 (A1)
[19:07:51] [PASSED] METEORLAKE (g:A0, m:A0)
[19:07:51] [PASSED] METEORLAKE (g:A0, m:A0)
[19:07:51] [PASSED] METEORLAKE (g:A0, m:A0)
[19:07:51] [PASSED] LUNARLAKE (g:A0, m:A0)
[19:07:51] [PASSED] LUNARLAKE (g:B0, m:A0)
[19:07:51] [PASSED] BATTLEMAGE (g:A0, m:A1)
[19:07:51] ==================== [PASSED] xe_wa_gt =====================
[19:07:51] ====================== [PASSED] xe_wa ======================
[19:07:51] ============================================================
[19:07:51] Testing complete. Ran 121 tests: passed: 106, skipped: 15
[19:07:51] Elapsed time: 30.104s total, 4.142s configuring, 25.741s building, 0.189s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[19:07:51] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[19:07:53] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
156 | u64 ioread64_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
163 | u64 ioread64_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
170 | u64 ioread64be_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
178 | u64 ioread64be_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
[19:08:14] Starting KUnit Kernel (1/1)...
[19:08:14] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[19:08:14] ============ drm_test_pick_cmdline (2 subtests) ============
[19:08:14] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[19:08:14] =============== drm_test_pick_cmdline_named ===============
[19:08:14] [PASSED] NTSC
[19:08:14] [PASSED] NTSC-J
[19:08:14] [PASSED] PAL
[19:08:14] [PASSED] PAL-M
[19:08:14] =========== [PASSED] drm_test_pick_cmdline_named ===========
[19:08:14] ============== [PASSED] drm_test_pick_cmdline ==============
[19:08:14] ================== drm_buddy (7 subtests) ==================
[19:08:14] [PASSED] drm_test_buddy_alloc_limit
[19:08:14] [PASSED] drm_test_buddy_alloc_optimistic
[19:08:14] [PASSED] drm_test_buddy_alloc_pessimistic
[19:08:14] [PASSED] drm_test_buddy_alloc_pathological
[19:08:14] [PASSED] drm_test_buddy_alloc_contiguous
[19:08:14] [PASSED] drm_test_buddy_alloc_clear
[19:08:14] [PASSED] drm_test_buddy_alloc_range_bias
[19:08:14] ==================== [PASSED] drm_buddy ====================
[19:08:14] ============= drm_cmdline_parser (40 subtests) =============
[19:08:14] [PASSED] drm_test_cmdline_force_d_only
[19:08:14] [PASSED] drm_test_cmdline_force_D_only_dvi
[19:08:14] [PASSED] drm_test_cmdline_force_D_only_hdmi
[19:08:14] [PASSED] drm_test_cmdline_force_D_only_not_digital
[19:08:14] [PASSED] drm_test_cmdline_force_e_only
[19:08:14] [PASSED] drm_test_cmdline_res
[19:08:14] [PASSED] drm_test_cmdline_res_vesa
[19:08:14] [PASSED] drm_test_cmdline_res_vesa_rblank
[19:08:14] [PASSED] drm_test_cmdline_res_rblank
[19:08:14] [PASSED] drm_test_cmdline_res_bpp
[19:08:14] [PASSED] drm_test_cmdline_res_refresh
[19:08:14] [PASSED] drm_test_cmdline_res_bpp_refresh
[19:08:14] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[19:08:14] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[19:08:14] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[19:08:14] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[19:08:14] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[19:08:14] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[19:08:14] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[19:08:14] [PASSED] drm_test_cmdline_res_margins_force_on
[19:08:14] [PASSED] drm_test_cmdline_res_vesa_margins
[19:08:14] [PASSED] drm_test_cmdline_name
[19:08:14] [PASSED] drm_test_cmdline_name_bpp
[19:08:14] [PASSED] drm_test_cmdline_name_option
[19:08:14] [PASSED] drm_test_cmdline_name_bpp_option
[19:08:14] [PASSED] drm_test_cmdline_rotate_0
[19:08:14] [PASSED] drm_test_cmdline_rotate_90
[19:08:14] [PASSED] drm_test_cmdline_rotate_180
[19:08:14] [PASSED] drm_test_cmdline_rotate_270
[19:08:14] [PASSED] drm_test_cmdline_hmirror
[19:08:14] [PASSED] drm_test_cmdline_vmirror
[19:08:14] [PASSED] drm_test_cmdline_margin_options
[19:08:14] [PASSED] drm_test_cmdline_multiple_options
[19:08:14] [PASSED] drm_test_cmdline_bpp_extra_and_option
[19:08:14] [PASSED] drm_test_cmdline_extra_and_option
[19:08:14] [PASSED] drm_test_cmdline_freestanding_options
[19:08:14] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[19:08:14] [PASSED] drm_test_cmdline_panel_orientation
[19:08:14] ================ drm_test_cmdline_invalid =================
[19:08:14] [PASSED] margin_only
[19:08:14] [PASSED] interlace_only
[19:08:14] [PASSED] res_missing_x
[19:08:14] [PASSED] res_missing_y
[19:08:14] [PASSED] res_bad_y
[19:08:14] [PASSED] res_missing_y_bpp
[19:08:14] [PASSED] res_bad_bpp
[19:08:14] [PASSED] res_bad_refresh
[19:08:14] [PASSED] res_bpp_refresh_force_on_off
[19:08:14] [PASSED] res_invalid_mode
[19:08:14] [PASSED] res_bpp_wrong_place_mode
[19:08:14] [PASSED] name_bpp_refresh
[19:08:14] [PASSED] name_refresh
[19:08:14] [PASSED] name_refresh_wrong_mode
[19:08:14] [PASSED] name_refresh_invalid_mode
[19:08:14] [PASSED] rotate_multiple
[19:08:14] [PASSED] rotate_invalid_val
[19:08:14] [PASSED] rotate_truncated
[19:08:14] [PASSED] invalid_option
[19:08:14] [PASSED] invalid_tv_option
[19:08:14] [PASSED] truncated_tv_option
[19:08:14] ============ [PASSED] drm_test_cmdline_invalid =============
[19:08:14] =============== drm_test_cmdline_tv_options ===============
[19:08:14] [PASSED] NTSC
[19:08:14] [PASSED] NTSC_443
[19:08:14] [PASSED] NTSC_J
[19:08:14] [PASSED] PAL
[19:08:14] [PASSED] PAL_M
[19:08:14] [PASSED] PAL_N
[19:08:14] [PASSED] SECAM
[19:08:14] [PASSED] MONO_525
[19:08:14] [PASSED] MONO_625
[19:08:14] =========== [PASSED] drm_test_cmdline_tv_options ===========
[19:08:14] =============== [PASSED] drm_cmdline_parser ================
[19:08:14] ========== drmm_connector_hdmi_init (19 subtests) ==========
[19:08:14] [PASSED] drm_test_connector_hdmi_init_valid
[19:08:14] [PASSED] drm_test_connector_hdmi_init_bpc_8
[19:08:14] [PASSED] drm_test_connector_hdmi_init_bpc_10
[19:08:14] [PASSED] drm_test_connector_hdmi_init_bpc_12
[19:08:14] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[19:08:14] [PASSED] drm_test_connector_hdmi_init_bpc_null
[19:08:14] [PASSED] drm_test_connector_hdmi_init_formats_empty
[19:08:14] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[19:08:14] [PASSED] drm_test_connector_hdmi_init_null_ddc
[19:08:14] [PASSED] drm_test_connector_hdmi_init_null_product
[19:08:14] [PASSED] drm_test_connector_hdmi_init_null_vendor
[19:08:14] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[19:08:14] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[19:08:14] [PASSED] drm_test_connector_hdmi_init_product_valid
[19:08:14] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[19:08:14] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[19:08:14] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[19:08:14] ========= drm_test_connector_hdmi_init_type_valid =========
[19:08:14] [PASSED] HDMI-A
[19:08:14] [PASSED] HDMI-B
[19:08:14] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[19:08:14] ======== drm_test_connector_hdmi_init_type_invalid ========
[19:08:14] [PASSED] Unknown
[19:08:14] [PASSED] VGA
[19:08:14] [PASSED] DVI-I
[19:08:14] [PASSED] DVI-D
[19:08:14] [PASSED] DVI-A
[19:08:14] [PASSED] Composite
[19:08:14] [PASSED] SVIDEO
[19:08:14] [PASSED] LVDS
[19:08:14] [PASSED] Component
[19:08:14] [PASSED] DIN
[19:08:14] [PASSED] DP
[19:08:14] [PASSED] TV
[19:08:14] [PASSED] eDP
[19:08:14] [PASSED] Virtual
[19:08:14] [PASSED] DSI
[19:08:14] [PASSED] DPI
[19:08:14] [PASSED] Writeback
[19:08:14] [PASSED] SPI
[19:08:14] [PASSED] USB
[19:08:14] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[19:08:14] ============ [PASSED] drmm_connector_hdmi_init =============
[19:08:14] ============= drmm_connector_init (3 subtests) =============
[19:08:14] [PASSED] drm_test_drmm_connector_init
[19:08:14] [PASSED] drm_test_drmm_connector_init_null_ddc
[19:08:14] ========= drm_test_drmm_connector_init_type_valid =========
[19:08:14] [PASSED] Unknown
[19:08:14] [PASSED] VGA
[19:08:14] [PASSED] DVI-I
[19:08:14] [PASSED] DVI-D
[19:08:14] [PASSED] DVI-A
[19:08:14] [PASSED] Composite
[19:08:14] [PASSED] SVIDEO
[19:08:14] [PASSED] LVDS
[19:08:14] [PASSED] Component
[19:08:14] [PASSED] DIN
[19:08:14] [PASSED] DP
[19:08:14] [PASSED] HDMI-A
[19:08:14] [PASSED] HDMI-B
[19:08:14] [PASSED] TV
[19:08:14] [PASSED] eDP
[19:08:14] [PASSED] Virtual
[19:08:14] [PASSED] DSI
[19:08:14] [PASSED] DPI
[19:08:14] [PASSED] Writeback
[19:08:14] [PASSED] SPI
[19:08:14] [PASSED] USB
[19:08:14] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[19:08:14] =============== [PASSED] drmm_connector_init ===============
[19:08:14] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[19:08:14] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[19:08:14] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[19:08:14] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[19:08:14] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[19:08:14] ========== drm_test_get_tv_mode_from_name_valid ===========
[19:08:14] [PASSED] NTSC
[19:08:14] [PASSED] NTSC-443
[19:08:14] [PASSED] NTSC-J
[19:08:14] [PASSED] PAL
[19:08:14] [PASSED] PAL-M
[19:08:14] [PASSED] PAL-N
[19:08:14] [PASSED] SECAM
[19:08:14] [PASSED] Mono
[19:08:14] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[19:08:14] [PASSED] drm_test_get_tv_mode_from_name_truncated
[19:08:14] ============ [PASSED] drm_get_tv_mode_from_name ============
[19:08:14] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[19:08:14] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[19:08:14] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[19:08:14] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[19:08:14] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[19:08:14] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[19:08:14] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[19:08:14] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[19:08:14] [PASSED] VIC 96
[19:08:14] [PASSED] VIC 97
[19:08:14] [PASSED] VIC 101
[19:08:14] [PASSED] VIC 102
[19:08:14] [PASSED] VIC 106
[19:08:14] [PASSED] VIC 107
[19:08:14] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[19:08:14] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[19:08:14] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[19:08:14] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[19:08:14] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[19:08:14] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[19:08:14] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[19:08:14] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[19:08:14] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[19:08:14] [PASSED] Automatic
[19:08:14] [PASSED] Full
[19:08:14] [PASSED] Limited 16:235
[19:08:14] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[19:08:14] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[19:08:14] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[19:08:14] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[19:08:14] === drm_test_drm_hdmi_connector_get_output_format_name ====
[19:08:14] [PASSED] RGB
[19:08:14] [PASSED] YUV 4:2:0
[19:08:14] [PASSED] YUV 4:2:2
[19:08:14] [PASSED] YUV 4:4:4
[19:08:14] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[19:08:14] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[19:08:14] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[19:08:14] ============= drm_damage_helper (21 subtests) ==============
[19:08:14] [PASSED] drm_test_damage_iter_no_damage
[19:08:14] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[19:08:14] [PASSED] drm_test_damage_iter_no_damage_src_moved
[19:08:14] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[19:08:14] [PASSED] drm_test_damage_iter_no_damage_not_visible
[19:08:14] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[19:08:14] [PASSED] drm_test_damage_iter_no_damage_no_fb
[19:08:14] [PASSED] drm_test_damage_iter_simple_damage
[19:08:14] [PASSED] drm_test_damage_iter_single_damage
[19:08:14] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[19:08:14] [PASSED] drm_test_damage_iter_single_damage_outside_src
[19:08:14] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[19:08:14] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[19:08:14] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[19:08:14] [PASSED] drm_test_damage_iter_single_damage_src_moved
[19:08:14] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[19:08:14] [PASSED] drm_test_damage_iter_damage
[19:08:14] [PASSED] drm_test_damage_iter_damage_one_intersect
[19:08:14] [PASSED] drm_test_damage_iter_damage_one_outside
[19:08:14] [PASSED] drm_test_damage_iter_damage_src_moved
[19:08:14] [PASSED] drm_test_damage_iter_damage_not_visible
[19:08:14] ================ [PASSED] drm_damage_helper ================
[19:08:14] ============== drm_dp_mst_helper (3 subtests) ==============
[19:08:14] ============== drm_test_dp_mst_calc_pbn_mode ==============
[19:08:14] [PASSED] Clock 154000 BPP 30 DSC disabled
[19:08:14] [PASSED] Clock 234000 BPP 30 DSC disabled
[19:08:14] [PASSED] Clock 297000 BPP 24 DSC disabled
[19:08:14] [PASSED] Clock 332880 BPP 24 DSC enabled
[19:08:14] [PASSED] Clock 324540 BPP 24 DSC enabled
[19:08:14] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[19:08:14] ============== drm_test_dp_mst_calc_pbn_div ===============
[19:08:14] [PASSED] Link rate 2000000 lane count 4
[19:08:14] [PASSED] Link rate 2000000 lane count 2
[19:08:14] [PASSED] Link rate 2000000 lane count 1
[19:08:14] [PASSED] Link rate 1350000 lane count 4
[19:08:14] [PASSED] Link rate 1350000 lane count 2
[19:08:14] [PASSED] Link rate 1350000 lane count 1
[19:08:14] [PASSED] Link rate 1000000 lane count 4
[19:08:14] [PASSED] Link rate 1000000 lane count 2
[19:08:14] [PASSED] Link rate 1000000 lane count 1
[19:08:14] [PASSED] Link rate 810000 lane count 4
[19:08:14] [PASSED] Link rate 810000 lane count 2
[19:08:14] [PASSED] Link rate 810000 lane count 1
[19:08:14] [PASSED] Link rate 540000 lane count 4
[19:08:14] [PASSED] Link rate 540000 lane count 2
[19:08:14] [PASSED] Link rate 540000 lane count 1
[19:08:14] [PASSED] Link rate 270000 lane count 4
[19:08:14] [PASSED] Link rate 270000 lane count 2
[19:08:14] [PASSED] Link rate 270000 lane count 1
[19:08:14] [PASSED] Link rate 162000 lane count 4
[19:08:14] [PASSED] Link rate 162000 lane count 2
[19:08:14] [PASSED] Link rate 162000 lane count 1
[19:08:14] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[19:08:14] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[19:08:14] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[19:08:14] [PASSED] DP_POWER_UP_PHY with port number
[19:08:14] [PASSED] DP_POWER_DOWN_PHY with port number
[19:08:14] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[19:08:14] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[19:08:14] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[19:08:14] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[19:08:14] [PASSED] DP_QUERY_PAYLOAD with port number
[19:08:14] [PASSED] DP_QUERY_PAYLOAD with VCPI
[19:08:14] [PASSED] DP_REMOTE_DPCD_READ with port number
[19:08:14] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[19:08:14] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[19:08:14] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[19:08:14] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[19:08:14] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[19:08:14] [PASSED] DP_REMOTE_I2C_READ with port number
[19:08:14] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[19:08:14] [PASSED] DP_REMOTE_I2C_READ with transactions array
[19:08:14] [PASSED] DP_REMOTE_I2C_WRITE with port number
[19:08:14] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[19:08:14] [PASSED] DP_REMOTE_I2C_WRITE with data array
[19:08:14] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[19:08:14] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[19:08:14] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[19:08:14] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[19:08:14] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[19:08:14] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[19:08:14] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[19:08:14] ================ [PASSED] drm_dp_mst_helper ================
[19:08:14] ================== drm_exec (7 subtests) ===================
[19:08:14] [PASSED] sanitycheck
[19:08:14] [PASSED] test_lock
[19:08:14] [PASSED] test_lock_unlock
[19:08:14] [PASSED] test_duplicates
[19:08:14] [PASSED] test_prepare
[19:08:14] [PASSED] test_prepare_array
[19:08:14] [PASSED] test_multiple_loops
[19:08:14] ==================== [PASSED] drm_exec =====================
[19:08:14] =========== drm_format_helper_test (17 subtests) ===========
[19:08:14] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[19:08:14] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[19:08:14] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[19:08:14] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[19:08:14] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[19:08:14] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[19:08:14] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[19:08:14] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[19:08:14] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[19:08:14] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[19:08:14] ============== drm_test_fb_xrgb8888_to_mono ===============
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[19:08:14] ==================== drm_test_fb_swab =====================
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ================ [PASSED] drm_test_fb_swab =================
[19:08:14] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[19:08:14] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[19:08:14] [PASSED] single_pixel_source_buffer
[19:08:14] [PASSED] single_pixel_clip_rectangle
[19:08:14] [PASSED] well_known_colors
[19:08:14] [PASSED] destination_pitch
[19:08:14] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[19:08:14] ================= drm_test_fb_clip_offset =================
[19:08:14] [PASSED] pass through
[19:08:14] [PASSED] horizontal offset
[19:08:14] [PASSED] vertical offset
[19:08:14] [PASSED] horizontal and vertical offset
[19:08:14] [PASSED] horizontal offset (custom pitch)
[19:08:14] [PASSED] vertical offset (custom pitch)
[19:08:14] [PASSED] horizontal and vertical offset (custom pitch)
[19:08:14] ============= [PASSED] drm_test_fb_clip_offset =============
[19:08:14] ============== drm_test_fb_build_fourcc_list ==============
[19:08:14] [PASSED] no native formats
[19:08:14] [PASSED] XRGB8888 as native format
[19:08:14] [PASSED] remove duplicates
[19:08:14] [PASSED] convert alpha formats
[19:08:14] [PASSED] random formats
[19:08:14] ========== [PASSED] drm_test_fb_build_fourcc_list ==========
[19:08:14] =================== drm_test_fb_memcpy ====================
[19:08:14] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[19:08:14] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[19:08:14] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[19:08:14] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[19:08:14] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[19:08:14] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[19:08:14] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[19:08:14] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[19:08:14] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[19:08:14] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[19:08:14] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[19:08:14] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[19:08:14] =============== [PASSED] drm_test_fb_memcpy ================
[19:08:14] ============= [PASSED] drm_format_helper_test ==============
[19:08:14] ================= drm_format (18 subtests) =================
[19:08:14] [PASSED] drm_test_format_block_width_invalid
[19:08:14] [PASSED] drm_test_format_block_width_one_plane
[19:08:14] [PASSED] drm_test_format_block_width_two_plane
[19:08:14] [PASSED] drm_test_format_block_width_three_plane
[19:08:14] [PASSED] drm_test_format_block_width_tiled
[19:08:14] [PASSED] drm_test_format_block_height_invalid
[19:08:14] [PASSED] drm_test_format_block_height_one_plane
[19:08:14] [PASSED] drm_test_format_block_height_two_plane
[19:08:14] [PASSED] drm_test_format_block_height_three_plane
[19:08:14] [PASSED] drm_test_format_block_height_tiled
[19:08:14] [PASSED] drm_test_format_min_pitch_invalid
[19:08:14] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[19:08:14] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[19:08:14] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[19:08:14] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[19:08:14] [PASSED] drm_test_format_min_pitch_two_plane
[19:08:14] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[19:08:14] [PASSED] drm_test_format_min_pitch_tiled
[19:08:14] =================== [PASSED] drm_format ====================
[19:08:14] =============== drm_framebuffer (1 subtest) ================
[19:08:14] =============== drm_test_framebuffer_create ===============
[19:08:14] [PASSED] ABGR8888 normal sizes
[19:08:14] [PASSED] ABGR8888 max sizes
[19:08:14] [PASSED] ABGR8888 pitch greater than min required
[19:08:14] [PASSED] ABGR8888 pitch less than min required
[19:08:14] [PASSED] ABGR8888 Invalid width
[19:08:14] [PASSED] ABGR8888 Invalid buffer handle
[19:08:14] [PASSED] No pixel format
[19:08:14] [PASSED] ABGR8888 Width 0
[19:08:14] [PASSED] ABGR8888 Height 0
[19:08:14] [PASSED] ABGR8888 Out of bound height * pitch combination
[19:08:14] [PASSED] ABGR8888 Large buffer offset
[19:08:14] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[19:08:14] [PASSED] ABGR8888 Valid buffer modifier
[19:08:14] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[19:08:14] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[19:08:14] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[19:08:14] [PASSED] NV12 Normal sizes
[19:08:14] [PASSED] NV12 Max sizes
[19:08:14] [PASSED] NV12 Invalid pitch
[19:08:14] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[19:08:14] [PASSED] NV12 different modifier per-plane
[19:08:14] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[19:08:14] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[19:08:14] [PASSED] NV12 Modifier for inexistent plane
[19:08:14] [PASSED] NV12 Handle for inexistent plane
[19:08:14] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[19:08:14] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[19:08:14] [PASSED] YVU420 Normal sizes
[19:08:14] [PASSED] YVU420 Max sizes
[19:08:14] [PASSED] YVU420 Invalid pitch
[19:08:14] [PASSED] YVU420 Different pitches
[19:08:14] [PASSED] YVU420 Different buffer offsets/pitches
[19:08:14] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[19:08:14] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[19:08:14] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[19:08:14] [PASSED] YVU420 Valid modifier
[19:08:14] [PASSED] YVU420 Different modifiers per plane
[19:08:14] [PASSED] YVU420 Modifier for inexistent plane
[19:08:14] [PASSED] X0L2 Normal sizes
[19:08:14] [PASSED] X0L2 Max sizes
[19:08:14] [PASSED] X0L2 Invalid pitch
[19:08:14] [PASSED] X0L2 Pitch greater than minimum required
[19:08:14] [PASSED] X0L2 Handle for inexistent plane
[19:08:14] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[19:08:14] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[19:08:14] [PASSED] X0L2 Valid modifier
[19:08:14] [PASSED] X0L2 Modifier for inexistent plane
[19:08:14] =========== [PASSED] drm_test_framebuffer_create ===========
[19:08:14] ================= [PASSED] drm_framebuffer =================
[19:08:14] ================ drm_gem_shmem (8 subtests) ================
[19:08:14] [PASSED] drm_gem_shmem_test_obj_create
[19:08:14] [PASSED] drm_gem_shmem_test_obj_create_private
[19:08:14] [PASSED] drm_gem_shmem_test_pin_pages
[19:08:14] [PASSED] drm_gem_shmem_test_vmap
[19:08:14] [PASSED] drm_gem_shmem_test_get_pages_sgt
[19:08:14] [PASSED] drm_gem_shmem_test_get_sg_table
[19:08:14] [PASSED] drm_gem_shmem_test_madvise
[19:08:14] [PASSED] drm_gem_shmem_test_purge
[19:08:14] ================== [PASSED] drm_gem_shmem ==================
[19:08:14] === drm_atomic_helper_connector_hdmi_check (22 subtests) ===
[19:08:14] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[19:08:14] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[19:08:14] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[19:08:14] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[19:08:14] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[19:08:14] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[19:08:14] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[19:08:14] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[19:08:14] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[19:08:14] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback
[19:08:14] [PASSED] drm_test_check_max_tmds_rate_format_fallback
[19:08:14] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[19:08:14] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[19:08:14] [PASSED] drm_test_check_output_bpc_dvi
[19:08:14] [PASSED] drm_test_check_output_bpc_format_vic_1
[19:08:14] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[19:08:14] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[19:08:14] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[19:08:14] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[19:08:14] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[19:08:14] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[19:08:14] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[19:08:14] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[19:08:14] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[19:08:14] [PASSED] drm_test_check_broadcast_rgb_value
[19:08:14] [PASSED] drm_test_check_bpc_8_value
[19:08:14] [PASSED] drm_test_check_bpc_10_value
[19:08:14] [PASSED] drm_test_check_bpc_12_value
[19:08:14] [PASSED] drm_test_check_format_value
[19:08:14] [PASSED] drm_test_check_tmds_char_value
[19:08:14] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[19:08:14] ================= drm_managed (2 subtests) =================
[19:08:14] [PASSED] drm_test_managed_release_action
[19:08:14] [PASSED] drm_test_managed_run_action
[19:08:14] =================== [PASSED] drm_managed ===================
[19:08:14] =================== drm_mm (6 subtests) ====================
[19:08:14] [PASSED] drm_test_mm_init
[19:08:14] [PASSED] drm_test_mm_debug
[19:08:14] [PASSED] drm_test_mm_align32
[19:08:14] [PASSED] drm_test_mm_align64
[19:08:14] [PASSED] drm_test_mm_lowest
[19:08:14] [PASSED] drm_test_mm_highest
[19:08:14] ===================== [PASSED] drm_mm ======================
[19:08:14] ============= drm_modes_analog_tv (5 subtests) =============
[19:08:14] [PASSED] drm_test_modes_analog_tv_mono_576i
[19:08:14] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[19:08:14] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[19:08:14] [PASSED] drm_test_modes_analog_tv_pal_576i
[19:08:14] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[19:08:14] =============== [PASSED] drm_modes_analog_tv ===============
[19:08:14] ============== drm_plane_helper (2 subtests) ===============
[19:08:14] =============== drm_test_check_plane_state ================
[19:08:14] [PASSED] clipping_simple
[19:08:14] [PASSED] clipping_rotate_reflect
[19:08:14] [PASSED] positioning_simple
[19:08:14] [PASSED] upscaling
[19:08:14] [PASSED] downscaling
[19:08:14] [PASSED] rounding1
[19:08:14] [PASSED] rounding2
[19:08:14] [PASSED] rounding3
[19:08:14] [PASSED] rounding4
[19:08:14] =========== [PASSED] drm_test_check_plane_state ============
[19:08:14] =========== drm_test_check_invalid_plane_state ============
[19:08:14] [PASSED] positioning_invalid
[19:08:14] [PASSED] upscaling_invalid
[19:08:14] [PASSED] downscaling_invalid
[19:08:14] ======= [PASSED] drm_test_check_invalid_plane_state ========
[19:08:14] ================ [PASSED] drm_plane_helper =================
[19:08:14] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[19:08:14] ====== drm_test_connector_helper_tv_get_modes_check =======
[19:08:14] [PASSED] None
[19:08:14] [PASSED] PAL
[19:08:14] [PASSED] NTSC
[19:08:14] [PASSED] Both, NTSC Default
[19:08:14] [PASSED] Both, PAL Default
[19:08:14] [PASSED] Both, NTSC Default, with PAL on command-line
[19:08:14] [PASSED] Both, PAL Default, with NTSC on command-line
[19:08:14] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[19:08:14] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[19:08:14] ================== drm_rect (9 subtests) ===================
[19:08:14] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[19:08:14] [PASSED] drm_test_rect_clip_scaled_not_clipped
[19:08:14] [PASSED] drm_test_rect_clip_scaled_clipped
[19:08:14] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[19:08:14] ================= drm_test_rect_intersect =================
[19:08:14] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[19:08:14] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[19:08:14] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[19:08:14] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[19:08:14] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[19:08:14] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[19:08:14] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[19:08:14] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[19:08:14] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[19:08:14] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[19:08:14] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[19:08:14] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[19:08:14] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[19:08:14] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[19:08:14] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[19:08:14] ============= [PASSED] drm_test_rect_intersect =============
[19:08:14] ================ drm_test_rect_calc_hscale ================
[19:08:14] [PASSED] normal use
[19:08:14] [PASSED] out of max range
[19:08:14] [PASSED] out of min range
[19:08:14] [PASSED] zero dst
[19:08:14] [PASSED] negative src
[19:08:14] [PASSED] negative dst
[19:08:14] ============ [PASSED] drm_test_rect_calc_hscale ============
[19:08:14] ================ drm_test_rect_calc_vscale ================
[19:08:14] [PASSED] normal use
[19:08:14] [PASSED] out of max range
[19:08:14] [PASSED] out of min range
[19:08:14] [PASSED] zero dst
[19:08:14] [PASSED] negative src
[19:08:14] [PASSED] negative dst
[19:08:14] ============ [PASSED] drm_test_rect_calc_vscale ============
[19:08:14] ================== drm_test_rect_rotate ===================
[19:08:14] [PASSED] reflect-x
[19:08:14] [PASSED] reflect-y
[19:08:14] [PASSED] rotate-0
[19:08:14] [PASSED] rotate-90
[19:08:14] [PASSED] rotate-180
[19:08:14] [PASSED] rotate-270
[19:08:14] ============== [PASSED] drm_test_rect_rotate ===============
[19:08:14] ================ drm_test_rect_rotate_inv =================
[19:08:14] [PASSED] reflect-x
[19:08:14] [PASSED] reflect-y
[19:08:14] [PASSED] rotate-0
[19:08:14] [PASSED] rotate-90
[19:08:14] [PASSED] rotate-180
[19:08:14] [PASSED] rotate-270
[19:08:14] ============ [PASSED] drm_test_rect_rotate_inv =============
[19:08:14] ==================== [PASSED] drm_rect =====================
[19:08:14] ============================================================
[19:08:14] Testing complete. Ran 515 tests: passed: 515
[19:08:14] Elapsed time: 23.256s total, 1.711s configuring, 21.374s building, 0.152s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[19:08:14] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[19:08:16] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make ARCH=um O=.kunit --jobs=48
[19:08:25] Starting KUnit Kernel (1/1)...
[19:08:25] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[19:08:25] ================= ttm_device (5 subtests) ==================
[19:08:25] [PASSED] ttm_device_init_basic
[19:08:25] [PASSED] ttm_device_init_multiple
[19:08:25] [PASSED] ttm_device_fini_basic
[19:08:25] [PASSED] ttm_device_init_no_vma_man
[19:08:25] ================== ttm_device_init_pools ==================
[19:08:25] [PASSED] No DMA allocations, no DMA32 required
[19:08:25] [PASSED] DMA allocations, DMA32 required
[19:08:25] [PASSED] No DMA allocations, DMA32 required
[19:08:25] [PASSED] DMA allocations, no DMA32 required
[19:08:25] ============== [PASSED] ttm_device_init_pools ==============
[19:08:25] =================== [PASSED] ttm_device ====================
[19:08:25] ================== ttm_pool (8 subtests) ===================
[19:08:25] ================== ttm_pool_alloc_basic ===================
[19:08:25] [PASSED] One page
[19:08:25] [PASSED] More than one page
[19:08:25] [PASSED] Above the allocation limit
[19:08:25] [PASSED] One page, with coherent DMA mappings enabled
[19:08:25] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[19:08:25] ============== [PASSED] ttm_pool_alloc_basic ===============
[19:08:25] ============== ttm_pool_alloc_basic_dma_addr ==============
[19:08:25] [PASSED] One page
[19:08:25] [PASSED] More than one page
[19:08:25] [PASSED] Above the allocation limit
[19:08:25] [PASSED] One page, with coherent DMA mappings enabled
[19:08:25] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[19:08:25] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[19:08:25] [PASSED] ttm_pool_alloc_order_caching_match
[19:08:25] [PASSED] ttm_pool_alloc_caching_mismatch
[19:08:25] [PASSED] ttm_pool_alloc_order_mismatch
[19:08:25] [PASSED] ttm_pool_free_dma_alloc
[19:08:25] [PASSED] ttm_pool_free_no_dma_alloc
[19:08:25] [PASSED] ttm_pool_fini_basic
[19:08:25] ==================== [PASSED] ttm_pool =====================
[19:08:25] ================ ttm_resource (8 subtests) =================
[19:08:25] ================= ttm_resource_init_basic =================
[19:08:25] [PASSED] Init resource in TTM_PL_SYSTEM
[19:08:25] [PASSED] Init resource in TTM_PL_VRAM
[19:08:25] [PASSED] Init resource in a private placement
[19:08:25] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[19:08:25] ============= [PASSED] ttm_resource_init_basic =============
[19:08:25] [PASSED] ttm_resource_init_pinned
[19:08:25] [PASSED] ttm_resource_fini_basic
[19:08:25] [PASSED] ttm_resource_manager_init_basic
[19:08:25] [PASSED] ttm_resource_manager_usage_basic
[19:08:25] [PASSED] ttm_resource_manager_set_used_basic
[19:08:25] [PASSED] ttm_sys_man_alloc_basic
[19:08:25] [PASSED] ttm_sys_man_free_basic
[19:08:25] ================== [PASSED] ttm_resource ===================
[19:08:25] =================== ttm_tt (15 subtests) ===================
[19:08:25] ==================== ttm_tt_init_basic ====================
[19:08:25] [PASSED] Page-aligned size
[19:08:25] [PASSED] Extra pages requested
[19:08:25] ================ [PASSED] ttm_tt_init_basic ================
[19:08:25] [PASSED] ttm_tt_init_misaligned
[19:08:25] [PASSED] ttm_tt_fini_basic
[19:08:25] [PASSED] ttm_tt_fini_sg
[19:08:25] [PASSED] ttm_tt_fini_shmem
[19:08:25] [PASSED] ttm_tt_create_basic
[19:08:25] [PASSED] ttm_tt_create_invalid_bo_type
[19:08:25] [PASSED] ttm_tt_create_ttm_exists
[19:08:25] [PASSED] ttm_tt_create_failed
[19:08:25] [PASSED] ttm_tt_destroy_basic
[19:08:25] [PASSED] ttm_tt_populate_null_ttm
[19:08:25] [PASSED] ttm_tt_populate_populated_ttm
[19:08:25] [PASSED] ttm_tt_unpopulate_basic
[19:08:25] [PASSED] ttm_tt_unpopulate_empty_ttm
[19:08:25] [PASSED] ttm_tt_swapin_basic
[19:08:25] ===================== [PASSED] ttm_tt ======================
[19:08:25] =================== ttm_bo (14 subtests) ===================
[19:08:25] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[19:08:25] [PASSED] Cannot be interrupted and sleeps
[19:08:25] [PASSED] Cannot be interrupted, locks straight away
[19:08:25] [PASSED] Can be interrupted, sleeps
[19:08:25] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[19:08:25] [PASSED] ttm_bo_reserve_locked_no_sleep
[19:08:25] [PASSED] ttm_bo_reserve_no_wait_ticket
[19:08:25] [PASSED] ttm_bo_reserve_double_resv
[19:08:25] [PASSED] ttm_bo_reserve_interrupted
[19:08:25] [PASSED] ttm_bo_reserve_deadlock
[19:08:25] [PASSED] ttm_bo_unreserve_basic
[19:08:25] [PASSED] ttm_bo_unreserve_pinned
[19:08:25] [PASSED] ttm_bo_unreserve_bulk
[19:08:25] [PASSED] ttm_bo_put_basic
[19:08:25] [PASSED] ttm_bo_put_shared_resv
[19:08:25] [PASSED] ttm_bo_pin_basic
[19:08:25] [PASSED] ttm_bo_pin_unpin_resource
[19:08:25] [PASSED] ttm_bo_multiple_pin_one_unpin
[19:08:25] ===================== [PASSED] ttm_bo ======================
[19:08:25] ============== ttm_bo_validate (22 subtests) ===============
[19:08:25] ============== ttm_bo_init_reserved_sys_man ===============
[19:08:25] [PASSED] Buffer object for userspace
[19:08:25] [PASSED] Kernel buffer object
[19:08:25] [PASSED] Shared buffer object
[19:08:25] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[19:08:25] ============== ttm_bo_init_reserved_mock_man ==============
[19:08:25] [PASSED] Buffer object for userspace
[19:08:25] [PASSED] Kernel buffer object
[19:08:25] [PASSED] Shared buffer object
[19:08:25] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[19:08:25] [PASSED] ttm_bo_init_reserved_resv
[19:08:25] ================== ttm_bo_validate_basic ==================
[19:08:25] [PASSED] Buffer object for userspace
[19:08:25] [PASSED] Kernel buffer object
[19:08:25] [PASSED] Shared buffer object
[19:08:25] ============== [PASSED] ttm_bo_validate_basic ==============
[19:08:25] [PASSED] ttm_bo_validate_invalid_placement
[19:08:25] ============= ttm_bo_validate_same_placement ==============
[19:08:25] [PASSED] System manager
[19:08:25] [PASSED] VRAM manager
[19:08:25] ========= [PASSED] ttm_bo_validate_same_placement ==========
[19:08:25] [PASSED] ttm_bo_validate_failed_alloc
[19:08:25] [PASSED] ttm_bo_validate_pinned
[19:08:25] [PASSED] ttm_bo_validate_busy_placement
[19:08:25] ================ ttm_bo_validate_multihop =================
[19:08:25] [PASSED] Buffer object for userspace
[19:08:25] [PASSED] Kernel buffer object
[19:08:25] [PASSED] Shared buffer object
[19:08:25] ============ [PASSED] ttm_bo_validate_multihop =============
[19:08:25] ========== ttm_bo_validate_no_placement_signaled ==========
[19:08:25] [PASSED] Buffer object in system domain, no page vector
[19:08:25] [PASSED] Buffer object in system domain with an existing page vector
[19:08:25] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[19:08:25] ======== ttm_bo_validate_no_placement_not_signaled ========
[19:08:25] [PASSED] Buffer object for userspace
[19:08:25] [PASSED] Kernel buffer object
[19:08:25] [PASSED] Shared buffer object
[19:08:25] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[19:08:25] [PASSED] ttm_bo_validate_move_fence_signaled
[19:08:25] ========= ttm_bo_validate_move_fence_not_signaled =========
[19:08:25] [PASSED] Waits for GPU
[19:08:25] [PASSED] Tries to lock straight away
[19:08:25] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[19:08:25] [PASSED] ttm_bo_validate_swapout
[19:08:25] [PASSED] ttm_bo_validate_happy_evict
[19:08:25] [PASSED] ttm_bo_validate_all_pinned_evict
[19:08:25] [PASSED] ttm_bo_validate_allowed_only_evict
[19:08:25] [PASSED] ttm_bo_validate_deleted_evict
[19:08:25] [PASSED] ttm_bo_validate_busy_domain_evict
[19:08:25] [PASSED] ttm_bo_validate_evict_gutting
[19:08:25] [PASSED] ttm_bo_validate_recrusive_evict
stty: 'standard input': Inappropriate ioctl for device
[19:08:25] ================= [PASSED] ttm_bo_validate =================
[19:08:25] ============================================================
[19:08:25] Testing complete. Ran 102 tests: passed: 102
[19:08:26] Elapsed time: 11.323s total, 1.745s configuring, 8.907s building, 0.559s running
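The TTM run above is driven by the in-tree kunitconfig passed via `--kunitconfig`. The file's contents are not shown in this log; a plausible sketch of what such a `drivers/gpu/drm/ttm/tests/.kunitconfig` needs to enable (the exact option names are an assumption, not taken from this log) would be:

```kconfig
# Hypothetical sketch of a TTM KUnit config fragment -- option names
# are assumed, not shown in the log above
CONFIG_KUNIT=y
CONFIG_DRM=y
CONFIG_DRM_KUNIT_TEST_HELPERS=y
CONFIG_DRM_TTM_KUNIT_TEST=y
```

kunit.py merges this fragment into the UM kernel's `.config` (via the `make ARCH=um O=.kunit olddefconfig` step visible above) before building and booting the test kernel.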
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* ✓ CI.Build: success for Add PXP HWDRM support (rev2)
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
@ 2024-08-16 19:23 ` Patchwork
From: Patchwork @ 2024-08-16 19:23 UTC (permalink / raw)
To: Daniele Ceraolo Spurio; +Cc: intel-xe
== Series Details ==
Series: Add PXP HWDRM support (rev2)
URL : https://patchwork.freedesktop.org/series/136052/
State : success
== Summary ==
lib/modules/6.11.0-rc3-xe/kernel/sound/core/seq/
lib/modules/6.11.0-rc3-xe/kernel/sound/core/seq/snd-seq.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/core/snd-seq-device.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/core/snd-hwdep.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/core/snd.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/core/snd-pcm.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/core/snd-compress.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/core/snd-timer.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soundcore.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/intel/
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/intel/atom/
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/intel/atom/snd-soc-sst-atom-hifi2-platform.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/intel/atom/sst/
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/intel/atom/sst/snd-intel-sst-acpi.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/intel/atom/sst/snd-intel-sst-core.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/intel/common/
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/intel/common/snd-soc-acpi-intel-match.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/amd/
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/amd/snd-acp-config.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/intel/
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/intel/snd-sof-pci-intel-tgl.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/intel/snd-sof-intel-hda-mlink.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/intel/snd-sof-pci-intel-cnl.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/intel/snd-sof-pci-intel-lnl.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/intel/snd-sof-intel-hda-common.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/intel/snd-sof-intel-hda-generic.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/intel/snd-sof-intel-hda.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/intel/snd-sof-pci-intel-mtl.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/amd/
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/amd/snd-sof-amd-renoir.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/amd/snd-sof-amd-acp.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/snd-sof-utils.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/snd-sof-pci.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/snd-sof.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/snd-sof-probes.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/xtensa/
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/sof/xtensa/snd-sof-xtensa-dsp.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/snd-soc-core.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/snd-soc-acpi.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/codecs/
lib/modules/6.11.0-rc3-xe/kernel/sound/soc/codecs/snd-soc-hdac-hda.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/hda/
lib/modules/6.11.0-rc3-xe/kernel/sound/hda/snd-intel-sdw-acpi.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/hda/ext/
lib/modules/6.11.0-rc3-xe/kernel/sound/hda/ext/snd-hda-ext-core.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/hda/snd-intel-dspcfg.ko
lib/modules/6.11.0-rc3-xe/kernel/sound/hda/snd-hda-core.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/kernel/
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/kernel/msr.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/kernel/cpuid.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/crypto/
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/crypto/sha512-ssse3.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/crypto/crct10dif-pclmul.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/crypto/ghash-clmulni-intel.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/crypto/sha1-ssse3.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/crypto/crc32-pclmul.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/crypto/sha256-ssse3.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/crypto/aesni-intel.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/crypto/polyval-clmulni.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/events/
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/events/intel/
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/events/intel/intel-cstate.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/events/rapl.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/kvm/
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/kvm/kvm.ko
lib/modules/6.11.0-rc3-xe/kernel/arch/x86/kvm/kvm-intel.ko
lib/modules/6.11.0-rc3-xe/kernel/crypto/
lib/modules/6.11.0-rc3-xe/kernel/crypto/crypto_simd.ko
lib/modules/6.11.0-rc3-xe/kernel/crypto/cmac.ko
lib/modules/6.11.0-rc3-xe/kernel/crypto/ccm.ko
lib/modules/6.11.0-rc3-xe/kernel/crypto/cryptd.ko
lib/modules/6.11.0-rc3-xe/kernel/crypto/polyval-generic.ko
lib/modules/6.11.0-rc3-xe/kernel/crypto/async_tx/
lib/modules/6.11.0-rc3-xe/kernel/crypto/async_tx/async_xor.ko
lib/modules/6.11.0-rc3-xe/kernel/crypto/async_tx/async_tx.ko
lib/modules/6.11.0-rc3-xe/kernel/crypto/async_tx/async_memcpy.ko
lib/modules/6.11.0-rc3-xe/kernel/crypto/async_tx/async_pq.ko
lib/modules/6.11.0-rc3-xe/kernel/crypto/async_tx/async_raid6_recov.ko
lib/modules/6.11.0-rc3-xe/build
lib/modules/6.11.0-rc3-xe/modules.alias.bin
lib/modules/6.11.0-rc3-xe/modules.builtin
lib/modules/6.11.0-rc3-xe/modules.softdep
lib/modules/6.11.0-rc3-xe/modules.alias
lib/modules/6.11.0-rc3-xe/modules.order
lib/modules/6.11.0-rc3-xe/modules.symbols
lib/modules/6.11.0-rc3-xe/modules.dep.bin
+ mv kernel-nodebug.tar.gz ..
+ cd ..
+ rm -rf archive
++ date +%s
+ echo -e '\e[0Ksection_end:1723836185:package_x86_64_nodebug\r\e[0K'
+ sync
^[[0Ksection_end:1723836185:package_x86_64_nodebug
^[[0K
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* ✗ CI.Hooks: failure for Add PXP HWDRM support (rev2)
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
@ 2024-08-16 19:25 ` Patchwork
From: Patchwork @ 2024-08-16 19:25 UTC (permalink / raw)
To: Daniele Ceraolo Spurio; +Cc: intel-xe
== Series Details ==
Series: Add PXP HWDRM support (rev2)
URL : https://patchwork.freedesktop.org/series/136052/
State : failure
== Summary ==
run-parts: executing /workspace/ci/hooks/00-showenv
+ export
+ grep -Ei '(^|\W)CI_'
declare -x CI_KERNEL_BUILD_DIR="/workspace/kernel/build64-default"
declare -x CI_KERNEL_SRC_DIR="/workspace/kernel"
declare -x CI_TOOLS_SRC_DIR="/workspace/ci"
declare -x CI_WORKSPACE_DIR="/workspace"
run-parts: executing /workspace/ci/hooks/10-build-W1
+ SRC_DIR=/workspace/kernel
+ RESTORE_DISPLAY_CONFIG=0
+ '[' -n /workspace/kernel/build64-default ']'
+ BUILD_DIR=/workspace/kernel/build64-default
+ cd /workspace/kernel
++ nproc
+ make -j48 O=/workspace/kernel/build64-default modules_prepare
make[1]: Entering directory '/workspace/kernel/build64-default'
GEN Makefile
UPD include/generated/compile.h
UPD include/config/kernel.release
mkdir -p /workspace/kernel/build64-default/tools/objtool && make O=/workspace/kernel/build64-default subdir=tools/objtool --no-print-directory -C objtool
UPD include/generated/utsrelease.h
CALL ../scripts/checksyscalls.sh
HOSTCC /workspace/kernel/build64-default/tools/objtool/fixdep.o
HOSTLD /workspace/kernel/build64-default/tools/objtool/fixdep-in.o
LINK /workspace/kernel/build64-default/tools/objtool/fixdep
INSTALL libsubcmd_headers
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/exec-cmd.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/help.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/pager.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/parse-options.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/run-command.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/sigchain.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/subcmd-config.o
LD /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd-in.o
AR /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd.a
CC /workspace/kernel/build64-default/tools/objtool/weak.o
CC /workspace/kernel/build64-default/tools/objtool/check.o
CC /workspace/kernel/build64-default/tools/objtool/special.o
CC /workspace/kernel/build64-default/tools/objtool/builtin-check.o
CC /workspace/kernel/build64-default/tools/objtool/elf.o
CC /workspace/kernel/build64-default/tools/objtool/objtool.o
CC /workspace/kernel/build64-default/tools/objtool/orc_gen.o
CC /workspace/kernel/build64-default/tools/objtool/orc_dump.o
CC /workspace/kernel/build64-default/tools/objtool/libstring.o
CC /workspace/kernel/build64-default/tools/objtool/libctype.o
CC /workspace/kernel/build64-default/tools/objtool/str_error_r.o
CC /workspace/kernel/build64-default/tools/objtool/librbtree.o
CC /workspace/kernel/build64-default/tools/objtool/arch/x86/special.o
CC /workspace/kernel/build64-default/tools/objtool/arch/x86/decode.o
CC /workspace/kernel/build64-default/tools/objtool/arch/x86/orc.o
LD /workspace/kernel/build64-default/tools/objtool/arch/x86/objtool-in.o
LD /workspace/kernel/build64-default/tools/objtool/objtool-in.o
LINK /workspace/kernel/build64-default/tools/objtool/objtool
make[1]: Leaving directory '/workspace/kernel/build64-default'
++ nproc
+ make -j48 O=/workspace/kernel/build64-default W=1 drivers/gpu/drm/xe
make[1]: Entering directory '/workspace/kernel/build64-default'
make[2]: Nothing to be done for 'drivers/gpu/drm/xe'.
make[1]: Leaving directory '/workspace/kernel/build64-default'
run-parts: executing /workspace/ci/hooks/11-build-32b
+++ realpath /workspace/ci/hooks/11-build-32b
++ dirname /workspace/ci/hooks/11-build-32b
+ THIS_SCRIPT_DIR=/workspace/ci/hooks
+ SRC_DIR=/workspace/kernel
+ TOOLS_SRC_DIR=/workspace/ci
+ '[' -n /workspace/kernel/build64-default ']'
+ BUILD_DIR=/workspace/kernel/build64-default
+ BUILD_DIR=/workspace/kernel/build64-default/build32
+ cd /workspace/kernel
+ mkdir -p /workspace/kernel/build64-default/build32
++ nproc
+ make -j48 ARCH=i386 O=/workspace/kernel/build64-default/build32 defconfig
make[1]: Entering directory '/workspace/kernel/build64-default/build32'
GEN Makefile
HOSTCC scripts/basic/fixdep
HOSTCC scripts/kconfig/conf.o
HOSTCC scripts/kconfig/confdata.o
HOSTCC scripts/kconfig/expr.o
LEX scripts/kconfig/lexer.lex.c
YACC scripts/kconfig/parser.tab.[ch]
HOSTCC scripts/kconfig/menu.o
HOSTCC scripts/kconfig/preprocess.o
HOSTCC scripts/kconfig/symbol.o
HOSTCC scripts/kconfig/util.o
HOSTCC scripts/kconfig/lexer.lex.o
HOSTCC scripts/kconfig/parser.tab.o
HOSTLD scripts/kconfig/conf
*** Default configuration is based on 'i386_defconfig'
#
# configuration written to .config
#
make[1]: Leaving directory '/workspace/kernel/build64-default/build32'
+ cd /workspace/kernel/build64-default/build32
+ /workspace/kernel/scripts/kconfig/merge_config.sh .config /workspace/ci/kernel/10-xe.fragment
Using .config as base
Merging /workspace/ci/kernel/10-xe.fragment
Value of CONFIG_DRM_XE is redefined by fragment /workspace/ci/kernel/10-xe.fragment:
Previous value: # CONFIG_DRM_XE is not set
New value: CONFIG_DRM_XE=m
Value of CONFIG_SND_DEBUG is redefined by fragment /workspace/ci/kernel/10-xe.fragment:
Previous value: # CONFIG_SND_DEBUG is not set
New value: CONFIG_SND_DEBUG=y
Value of CONFIG_SND_HDA_INTEL is redefined by fragment /workspace/ci/kernel/10-xe.fragment:
Previous value: CONFIG_SND_HDA_INTEL=y
New value: CONFIG_SND_HDA_INTEL=m
Value of CONFIG_SND_HDA_CODEC_HDMI is redefined by fragment /workspace/ci/kernel/10-xe.fragment:
Previous value: # CONFIG_SND_HDA_CODEC_HDMI is not set
New value: CONFIG_SND_HDA_CODEC_HDMI=m
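The redefinition warnings above reveal what the CI fragment sets. Reconstructed from only those warnings (the real `10-xe.fragment` may contain more options than this log reports), the fragment looks roughly like:

```kconfig
# Sketch of /workspace/ci/kernel/10-xe.fragment, reconstructed from the
# merge_config.sh redefinition warnings above -- not the actual file
CONFIG_DRM_XE=m
CONFIG_SND_DEBUG=y
CONFIG_SND_HDA_INTEL=m
CONFIG_SND_HDA_CODEC_HDMI=m
```

`merge_config.sh` applies each fragment on top of the base `.config`, warns when a fragment overrides an existing value, and afterwards verifies that every requested symbol survived `olddefconfig` — which is why the long list of "Value requested ... not in final .config" messages follows: the i386 defconfig symbols cannot all be honored when the merge is evaluated against the 64-bit toolchain defaults.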
GEN Makefile
WARNING: unmet direct dependencies detected for FB_IOMEM_HELPERS
Depends on [n]: HAS_IOMEM [=y] && FB_CORE [=n]
Selected by [m]:
- DRM_XE_DISPLAY [=y] && HAS_IOMEM [=y] && DRM [=y] && DRM_XE [=m] && DRM_XE [=m]=m [=m]
#
# configuration written to .config
#
Value requested for CONFIG_HAVE_UID16 not in final .config
Requested value: CONFIG_HAVE_UID16=y
Actual value:
Value requested for CONFIG_UID16 not in final .config
Requested value: CONFIG_UID16=y
Actual value:
Value requested for CONFIG_X86_32 not in final .config
Requested value: CONFIG_X86_32=y
Actual value:
Value requested for CONFIG_OUTPUT_FORMAT not in final .config
Requested value: CONFIG_OUTPUT_FORMAT="elf32-i386"
Actual value: CONFIG_OUTPUT_FORMAT="elf64-x86-64"
Value requested for CONFIG_ARCH_MMAP_RND_BITS_MIN not in final .config
Requested value: CONFIG_ARCH_MMAP_RND_BITS_MIN=8
Actual value: CONFIG_ARCH_MMAP_RND_BITS_MIN=28
Value requested for CONFIG_ARCH_MMAP_RND_BITS_MAX not in final .config
Requested value: CONFIG_ARCH_MMAP_RND_BITS_MAX=16
Actual value: CONFIG_ARCH_MMAP_RND_BITS_MAX=32
Value requested for CONFIG_PGTABLE_LEVELS not in final .config
Requested value: CONFIG_PGTABLE_LEVELS=2
Actual value: CONFIG_PGTABLE_LEVELS=5
Value requested for CONFIG_X86_BIGSMP not in final .config
Requested value: # CONFIG_X86_BIGSMP is not set
Actual value:
Value requested for CONFIG_X86_INTEL_QUARK not in final .config
Requested value: # CONFIG_X86_INTEL_QUARK is not set
Actual value:
Value requested for CONFIG_X86_RDC321X not in final .config
Requested value: # CONFIG_X86_RDC321X is not set
Actual value:
Value requested for CONFIG_X86_32_NON_STANDARD not in final .config
Requested value: # CONFIG_X86_32_NON_STANDARD is not set
Actual value:
Value requested for CONFIG_X86_32_IRIS not in final .config
Requested value: # CONFIG_X86_32_IRIS is not set
Actual value:
Value requested for CONFIG_M486SX not in final .config
Requested value: # CONFIG_M486SX is not set
Actual value:
Value requested for CONFIG_M486 not in final .config
Requested value: # CONFIG_M486 is not set
Actual value:
Value requested for CONFIG_M586 not in final .config
Requested value: # CONFIG_M586 is not set
Actual value:
Value requested for CONFIG_M586TSC not in final .config
Requested value: # CONFIG_M586TSC is not set
Actual value:
Value requested for CONFIG_M586MMX not in final .config
Requested value: # CONFIG_M586MMX is not set
Actual value:
Value requested for CONFIG_M686 not in final .config
Requested value: CONFIG_M686=y
Actual value:
Value requested for CONFIG_MPENTIUMII not in final .config
Requested value: # CONFIG_MPENTIUMII is not set
Actual value:
Value requested for CONFIG_MPENTIUMIII not in final .config
Requested value: # CONFIG_MPENTIUMIII is not set
Actual value:
Value requested for CONFIG_MPENTIUMM not in final .config
Requested value: # CONFIG_MPENTIUMM is not set
Actual value:
Value requested for CONFIG_MPENTIUM4 not in final .config
Requested value: # CONFIG_MPENTIUM4 is not set
Actual value:
Value requested for CONFIG_MK6 not in final .config
Requested value: # CONFIG_MK6 is not set
Actual value:
Value requested for CONFIG_MK7 not in final .config
Requested value: # CONFIG_MK7 is not set
Actual value:
Value requested for CONFIG_MCRUSOE not in final .config
Requested value: # CONFIG_MCRUSOE is not set
Actual value:
Value requested for CONFIG_MEFFICEON not in final .config
Requested value: # CONFIG_MEFFICEON is not set
Actual value:
Value requested for CONFIG_MWINCHIPC6 not in final .config
Requested value: # CONFIG_MWINCHIPC6 is not set
Actual value:
Value requested for CONFIG_MWINCHIP3D not in final .config
Requested value: # CONFIG_MWINCHIP3D is not set
Actual value:
Value requested for CONFIG_MELAN not in final .config
Requested value: # CONFIG_MELAN is not set
Actual value:
Value requested for CONFIG_MGEODEGX1 not in final .config
Requested value: # CONFIG_MGEODEGX1 is not set
Actual value:
Value requested for CONFIG_MGEODE_LX not in final .config
Requested value: # CONFIG_MGEODE_LX is not set
Actual value:
Value requested for CONFIG_MCYRIXIII not in final .config
Requested value: # CONFIG_MCYRIXIII is not set
Actual value:
Value requested for CONFIG_MVIAC3_2 not in final .config
Requested value: # CONFIG_MVIAC3_2 is not set
Actual value:
Value requested for CONFIG_MVIAC7 not in final .config
Requested value: # CONFIG_MVIAC7 is not set
Actual value:
Value requested for CONFIG_X86_GENERIC not in final .config
Requested value: # CONFIG_X86_GENERIC is not set
Actual value:
Value requested for CONFIG_X86_INTERNODE_CACHE_SHIFT not in final .config
Requested value: CONFIG_X86_INTERNODE_CACHE_SHIFT=5
Actual value: CONFIG_X86_INTERNODE_CACHE_SHIFT=6
Value requested for CONFIG_X86_L1_CACHE_SHIFT not in final .config
Requested value: CONFIG_X86_L1_CACHE_SHIFT=5
Actual value: CONFIG_X86_L1_CACHE_SHIFT=6
Value requested for CONFIG_X86_USE_PPRO_CHECKSUM not in final .config
Requested value: CONFIG_X86_USE_PPRO_CHECKSUM=y
Actual value:
Value requested for CONFIG_X86_MINIMUM_CPU_FAMILY not in final .config
Requested value: CONFIG_X86_MINIMUM_CPU_FAMILY=6
Actual value: CONFIG_X86_MINIMUM_CPU_FAMILY=64
Value requested for CONFIG_CPU_SUP_TRANSMETA_32 not in final .config
Requested value: CONFIG_CPU_SUP_TRANSMETA_32=y
Actual value:
Value requested for CONFIG_CPU_SUP_VORTEX_32 not in final .config
Requested value: CONFIG_CPU_SUP_VORTEX_32=y
Actual value:
Value requested for CONFIG_HPET_TIMER not in final .config
Requested value: # CONFIG_HPET_TIMER is not set
Actual value: CONFIG_HPET_TIMER=y
Value requested for CONFIG_NR_CPUS_RANGE_END not in final .config
Requested value: CONFIG_NR_CPUS_RANGE_END=8
Actual value: CONFIG_NR_CPUS_RANGE_END=512
Value requested for CONFIG_NR_CPUS_DEFAULT not in final .config
Requested value: CONFIG_NR_CPUS_DEFAULT=8
Actual value: CONFIG_NR_CPUS_DEFAULT=64
Value requested for CONFIG_X86_ANCIENT_MCE not in final .config
Requested value: # CONFIG_X86_ANCIENT_MCE is not set
Actual value:
Value requested for CONFIG_X86_LEGACY_VM86 not in final .config
Requested value: # CONFIG_X86_LEGACY_VM86 is not set
Actual value:
Value requested for CONFIG_X86_ESPFIX32 not in final .config
Requested value: CONFIG_X86_ESPFIX32=y
Actual value:
Value requested for CONFIG_TOSHIBA not in final .config
Requested value: # CONFIG_TOSHIBA is not set
Actual value:
Value requested for CONFIG_X86_REBOOTFIXUPS not in final .config
Requested value: # CONFIG_X86_REBOOTFIXUPS is not set
Actual value:
Value requested for CONFIG_MICROCODE_INITRD32 not in final .config
Requested value: CONFIG_MICROCODE_INITRD32=y
Actual value:
Value requested for CONFIG_NOHIGHMEM not in final .config
Requested value: # CONFIG_NOHIGHMEM is not set
Actual value:
Value requested for CONFIG_HIGHMEM4G not in final .config
Requested value: CONFIG_HIGHMEM4G=y
Actual value:
Value requested for CONFIG_HIGHMEM64G not in final .config
Requested value: # CONFIG_HIGHMEM64G is not set
Actual value:
Value requested for CONFIG_VMSPLIT_3G not in final .config
Requested value: CONFIG_VMSPLIT_3G=y
Actual value:
Value requested for CONFIG_VMSPLIT_3G_OPT not in final .config
Requested value: # CONFIG_VMSPLIT_3G_OPT is not set
Actual value:
Value requested for CONFIG_VMSPLIT_2G not in final .config
Requested value: # CONFIG_VMSPLIT_2G is not set
Actual value:
Value requested for CONFIG_VMSPLIT_2G_OPT not in final .config
Requested value: # CONFIG_VMSPLIT_2G_OPT is not set
Actual value:
Value requested for CONFIG_VMSPLIT_1G not in final .config
Requested value: # CONFIG_VMSPLIT_1G is not set
Actual value:
Value requested for CONFIG_PAGE_OFFSET not in final .config
Requested value: CONFIG_PAGE_OFFSET=0xC0000000
Actual value:
Value requested for CONFIG_HIGHMEM not in final .config
Requested value: CONFIG_HIGHMEM=y
Actual value:
Value requested for CONFIG_X86_PAE not in final .config
Requested value: # CONFIG_X86_PAE is not set
Actual value:
Value requested for CONFIG_ARCH_FLATMEM_ENABLE not in final .config
Requested value: CONFIG_ARCH_FLATMEM_ENABLE=y
Actual value:
Value requested for CONFIG_ARCH_SELECT_MEMORY_MODEL not in final .config
Requested value: CONFIG_ARCH_SELECT_MEMORY_MODEL=y
Actual value:
Value requested for CONFIG_ILLEGAL_POINTER_VALUE not in final .config
Requested value: CONFIG_ILLEGAL_POINTER_VALUE=0
Actual value: CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
Value requested for CONFIG_HIGHPTE not in final .config
Requested value: # CONFIG_HIGHPTE is not set
Actual value:
Value requested for CONFIG_COMPAT_VDSO not in final .config
Requested value: # CONFIG_COMPAT_VDSO is not set
Actual value:
Value requested for CONFIG_FUNCTION_PADDING_CFI not in final .config
Requested value: CONFIG_FUNCTION_PADDING_CFI=0
Actual value: CONFIG_FUNCTION_PADDING_CFI=11
Value requested for CONFIG_FUNCTION_PADDING_BYTES not in final .config
Requested value: CONFIG_FUNCTION_PADDING_BYTES=4
Actual value: CONFIG_FUNCTION_PADDING_BYTES=16
Value requested for CONFIG_APM not in final .config
Requested value: # CONFIG_APM is not set
Actual value:
Value requested for CONFIG_X86_POWERNOW_K6 not in final .config
Requested value: # CONFIG_X86_POWERNOW_K6 is not set
Actual value:
Value requested for CONFIG_X86_POWERNOW_K7 not in final .config
Requested value: # CONFIG_X86_POWERNOW_K7 is not set
Actual value:
Value requested for CONFIG_X86_GX_SUSPMOD not in final .config
Requested value: # CONFIG_X86_GX_SUSPMOD is not set
Actual value:
Value requested for CONFIG_X86_SPEEDSTEP_ICH not in final .config
Requested value: # CONFIG_X86_SPEEDSTEP_ICH is not set
Actual value:
Value requested for CONFIG_X86_SPEEDSTEP_SMI not in final .config
Requested value: # CONFIG_X86_SPEEDSTEP_SMI is not set
Actual value:
Value requested for CONFIG_X86_CPUFREQ_NFORCE2 not in final .config
Requested value: # CONFIG_X86_CPUFREQ_NFORCE2 is not set
Actual value:
Value requested for CONFIG_X86_LONGRUN not in final .config
Requested value: # CONFIG_X86_LONGRUN is not set
Actual value:
Value requested for CONFIG_X86_LONGHAUL not in final .config
Requested value: # CONFIG_X86_LONGHAUL is not set
Actual value:
Value requested for CONFIG_X86_E_POWERSAVER not in final .config
Requested value: # CONFIG_X86_E_POWERSAVER is not set
Actual value:
Value requested for CONFIG_PCI_GOBIOS not in final .config
Requested value: # CONFIG_PCI_GOBIOS is not set
Actual value:
Value requested for CONFIG_PCI_GOMMCONFIG not in final .config
Requested value: # CONFIG_PCI_GOMMCONFIG is not set
Actual value:
Value requested for CONFIG_PCI_GODIRECT not in final .config
Requested value: # CONFIG_PCI_GODIRECT is not set
Actual value:
Value requested for CONFIG_PCI_GOANY not in final .config
Requested value: CONFIG_PCI_GOANY=y
Actual value:
Value requested for CONFIG_PCI_BIOS not in final .config
Requested value: CONFIG_PCI_BIOS=y
Actual value:
Value requested for CONFIG_ISA not in final .config
Requested value: # CONFIG_ISA is not set
Actual value:
Value requested for CONFIG_SCx200 not in final .config
Requested value: # CONFIG_SCx200 is not set
Actual value:
Value requested for CONFIG_OLPC not in final .config
Requested value: # CONFIG_OLPC is not set
Actual value:
Value requested for CONFIG_ALIX not in final .config
Requested value: # CONFIG_ALIX is not set
Actual value:
Value requested for CONFIG_NET5501 not in final .config
Requested value: # CONFIG_NET5501 is not set
Actual value:
Value requested for CONFIG_GEOS not in final .config
Requested value: # CONFIG_GEOS is not set
Actual value:
Value requested for CONFIG_COMPAT_32 not in final .config
Requested value: CONFIG_COMPAT_32=y
Actual value:
Value requested for CONFIG_HAVE_ATOMIC_IOMAP not in final .config
Requested value: CONFIG_HAVE_ATOMIC_IOMAP=y
Actual value:
Value requested for CONFIG_ARCH_32BIT_OFF_T not in final .config
Requested value: CONFIG_ARCH_32BIT_OFF_T=y
Actual value:
Value requested for CONFIG_ARCH_WANT_IPC_PARSE_VERSION not in final .config
Requested value: CONFIG_ARCH_WANT_IPC_PARSE_VERSION=y
Actual value:
Value requested for CONFIG_MODULES_USE_ELF_REL not in final .config
Requested value: CONFIG_MODULES_USE_ELF_REL=y
Actual value:
Value requested for CONFIG_ARCH_MMAP_RND_BITS not in final .config
Requested value: CONFIG_ARCH_MMAP_RND_BITS=8
Actual value: CONFIG_ARCH_MMAP_RND_BITS=28
Value requested for CONFIG_CLONE_BACKWARDS not in final .config
Requested value: CONFIG_CLONE_BACKWARDS=y
Actual value:
Value requested for CONFIG_OLD_SIGSUSPEND3 not in final .config
Requested value: CONFIG_OLD_SIGSUSPEND3=y
Actual value:
Value requested for CONFIG_OLD_SIGACTION not in final .config
Requested value: CONFIG_OLD_SIGACTION=y
Actual value:
Value requested for CONFIG_ARCH_SPLIT_ARG64 not in final .config
Requested value: CONFIG_ARCH_SPLIT_ARG64=y
Actual value:
Value requested for CONFIG_FUNCTION_ALIGNMENT not in final .config
Requested value: CONFIG_FUNCTION_ALIGNMENT=4
Actual value: CONFIG_FUNCTION_ALIGNMENT=16
Value requested for CONFIG_SELECT_MEMORY_MODEL not in final .config
Requested value: CONFIG_SELECT_MEMORY_MODEL=y
Actual value:
Value requested for CONFIG_FLATMEM_MANUAL not in final .config
Requested value: CONFIG_FLATMEM_MANUAL=y
Actual value:
Value requested for CONFIG_SPARSEMEM_MANUAL not in final .config
Requested value: # CONFIG_SPARSEMEM_MANUAL is not set
Actual value:
Value requested for CONFIG_FLATMEM not in final .config
Requested value: CONFIG_FLATMEM=y
Actual value:
Value requested for CONFIG_SPARSEMEM_STATIC not in final .config
Requested value: CONFIG_SPARSEMEM_STATIC=y
Actual value:
Value requested for CONFIG_BOUNCE not in final .config
Requested value: CONFIG_BOUNCE=y
Actual value:
Value requested for CONFIG_KMAP_LOCAL not in final .config
Requested value: CONFIG_KMAP_LOCAL=y
Actual value:
Value requested for CONFIG_HOTPLUG_PCI_COMPAQ not in final .config
Requested value: # CONFIG_HOTPLUG_PCI_COMPAQ is not set
Actual value:
Value requested for CONFIG_HOTPLUG_PCI_IBM not in final .config
Requested value: # CONFIG_HOTPLUG_PCI_IBM is not set
Actual value:
Value requested for CONFIG_EFI_CAPSULE_QUIRK_QUARK_CSH not in final .config
Requested value: CONFIG_EFI_CAPSULE_QUIRK_QUARK_CSH=y
Actual value:
Value requested for CONFIG_PCH_PHUB not in final .config
Requested value: # CONFIG_PCH_PHUB is not set
Actual value:
Value requested for CONFIG_SCSI_NSP32 not in final .config
Requested value: # CONFIG_SCSI_NSP32 is not set
Actual value:
Value requested for CONFIG_PATA_CS5520 not in final .config
Requested value: # CONFIG_PATA_CS5520 is not set
Actual value:
Value requested for CONFIG_PATA_CS5530 not in final .config
Requested value: # CONFIG_PATA_CS5530 is not set
Actual value:
Value requested for CONFIG_PATA_CS5535 not in final .config
Requested value: # CONFIG_PATA_CS5535 is not set
Actual value:
Value requested for CONFIG_PATA_CS5536 not in final .config
Requested value: # CONFIG_PATA_CS5536 is not set
Actual value:
Value requested for CONFIG_PATA_SC1200 not in final .config
Requested value: # CONFIG_PATA_SC1200 is not set
Actual value:
Value requested for CONFIG_PCH_GBE not in final .config
Requested value: # CONFIG_PCH_GBE is not set
Actual value:
Value requested for CONFIG_INPUT_WISTRON_BTNS not in final .config
Requested value: # CONFIG_INPUT_WISTRON_BTNS is not set
Actual value:
Value requested for CONFIG_SERIAL_TIMBERDALE not in final .config
Requested value: # CONFIG_SERIAL_TIMBERDALE is not set
Actual value:
Value requested for CONFIG_SERIAL_PCH_UART not in final .config
Requested value: # CONFIG_SERIAL_PCH_UART is not set
Actual value:
Value requested for CONFIG_HW_RANDOM_GEODE not in final .config
Requested value: CONFIG_HW_RANDOM_GEODE=y
Actual value:
Value requested for CONFIG_SONYPI not in final .config
Requested value: # CONFIG_SONYPI is not set
Actual value:
Value requested for CONFIG_PC8736x_GPIO not in final .config
Requested value: # CONFIG_PC8736x_GPIO is not set
Actual value:
Value requested for CONFIG_NSC_GPIO not in final .config
Requested value: # CONFIG_NSC_GPIO is not set
Actual value:
Value requested for CONFIG_I2C_EG20T not in final .config
Requested value: # CONFIG_I2C_EG20T is not set
Actual value:
Value requested for CONFIG_SCx200_ACB not in final .config
Requested value: # CONFIG_SCx200_ACB is not set
Actual value:
Value requested for CONFIG_PTP_1588_CLOCK_PCH not in final .config
Requested value: # CONFIG_PTP_1588_CLOCK_PCH is not set
Actual value:
Value requested for CONFIG_SBC8360_WDT not in final .config
Requested value: # CONFIG_SBC8360_WDT is not set
Actual value:
Value requested for CONFIG_SBC7240_WDT not in final .config
Requested value: # CONFIG_SBC7240_WDT is not set
Actual value:
Value requested for CONFIG_MFD_CS5535 not in final .config
Requested value: # CONFIG_MFD_CS5535 is not set
Actual value:
Value requested for CONFIG_AGP_ALI not in final .config
Requested value: # CONFIG_AGP_ALI is not set
Actual value:
Value requested for CONFIG_AGP_ATI not in final .config
Requested value: # CONFIG_AGP_ATI is not set
Actual value:
Value requested for CONFIG_AGP_AMD not in final .config
Requested value: # CONFIG_AGP_AMD is not set
Actual value:
Value requested for CONFIG_AGP_NVIDIA not in final .config
Requested value: # CONFIG_AGP_NVIDIA is not set
Actual value:
Value requested for CONFIG_AGP_SWORKS not in final .config
Requested value: # CONFIG_AGP_SWORKS is not set
Actual value:
Value requested for CONFIG_AGP_EFFICEON not in final .config
Requested value: # CONFIG_AGP_EFFICEON is not set
Actual value:
Value requested for CONFIG_SND_PCM not in final .config
Requested value: CONFIG_SND_PCM=y
Actual value: CONFIG_SND_PCM=m
Value requested for CONFIG_SND_HWDEP not in final .config
Requested value: CONFIG_SND_HWDEP=y
Actual value: CONFIG_SND_HWDEP=m
Value requested for CONFIG_SND_DYNAMIC_MINORS not in final .config
Requested value: # CONFIG_SND_DYNAMIC_MINORS is not set
Actual value: CONFIG_SND_DYNAMIC_MINORS=y
Value requested for CONFIG_SND_CS5530 not in final .config
Requested value: # CONFIG_SND_CS5530 is not set
Actual value:
Value requested for CONFIG_SND_CS5535AUDIO not in final .config
Requested value: # CONFIG_SND_CS5535AUDIO is not set
Actual value:
Value requested for CONFIG_SND_SIS7019 not in final .config
Requested value: # CONFIG_SND_SIS7019 is not set
Actual value:
Value requested for CONFIG_SND_HDA not in final .config
Requested value: CONFIG_SND_HDA=y
Actual value: CONFIG_SND_HDA=m
Value requested for CONFIG_SND_HDA_CORE not in final .config
Requested value: CONFIG_SND_HDA_CORE=y
Actual value: CONFIG_SND_HDA_CORE=m
Value requested for CONFIG_SND_INTEL_DSP_CONFIG not in final .config
Requested value: CONFIG_SND_INTEL_DSP_CONFIG=y
Actual value: CONFIG_SND_INTEL_DSP_CONFIG=m
Value requested for CONFIG_SND_INTEL_SOUNDWIRE_ACPI not in final .config
Requested value: CONFIG_SND_INTEL_SOUNDWIRE_ACPI=y
Actual value: CONFIG_SND_INTEL_SOUNDWIRE_ACPI=m
Value requested for CONFIG_LEDS_OT200 not in final .config
Requested value: # CONFIG_LEDS_OT200 is not set
Actual value:
Value requested for CONFIG_PCH_DMA not in final .config
Requested value: # CONFIG_PCH_DMA is not set
Actual value:
Value requested for CONFIG_CLKSRC_I8253 not in final .config
Requested value: CONFIG_CLKSRC_I8253=y
Actual value:
Value requested for CONFIG_MAILBOX not in final .config
Requested value: # CONFIG_MAILBOX is not set
Actual value: CONFIG_MAILBOX=y
Value requested for CONFIG_CRYPTO_SERPENT_SSE2_586 not in final .config
Requested value: # CONFIG_CRYPTO_SERPENT_SSE2_586 is not set
Actual value:
Value requested for CONFIG_CRYPTO_TWOFISH_586 not in final .config
Requested value: # CONFIG_CRYPTO_TWOFISH_586 is not set
Actual value:
Value requested for CONFIG_CRYPTO_DEV_GEODE not in final .config
Requested value: # CONFIG_CRYPTO_DEV_GEODE is not set
Actual value:
Value requested for CONFIG_CRYPTO_DEV_HIFN_795X not in final .config
Requested value: # CONFIG_CRYPTO_DEV_HIFN_795X is not set
Actual value:
Value requested for CONFIG_CRYPTO_LIB_POLY1305_RSIZE not in final .config
Requested value: CONFIG_CRYPTO_LIB_POLY1305_RSIZE=1
Actual value: CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
Value requested for CONFIG_AUDIT_GENERIC not in final .config
Requested value: CONFIG_AUDIT_GENERIC=y
Actual value:
Value requested for CONFIG_GENERIC_VDSO_32 not in final .config
Requested value: CONFIG_GENERIC_VDSO_32=y
Actual value:
Value requested for CONFIG_DEBUG_KMAP_LOCAL not in final .config
Requested value: # CONFIG_DEBUG_KMAP_LOCAL is not set
Actual value:
Value requested for CONFIG_DEBUG_HIGHMEM not in final .config
Requested value: # CONFIG_DEBUG_HIGHMEM is not set
Actual value:
Value requested for CONFIG_HAVE_DEBUG_STACKOVERFLOW not in final .config
Requested value: CONFIG_HAVE_DEBUG_STACKOVERFLOW=y
Actual value:
Value requested for CONFIG_DEBUG_STACKOVERFLOW not in final .config
Requested value: # CONFIG_DEBUG_STACKOVERFLOW is not set
Actual value:
Value requested for CONFIG_HAVE_FUNCTION_GRAPH_TRACER not in final .config
Requested value: CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
Actual value:
Value requested for CONFIG_HAVE_FUNCTION_GRAPH_RETVAL not in final .config
Requested value: CONFIG_HAVE_FUNCTION_GRAPH_RETVAL=y
Actual value:
Value requested for CONFIG_DRM_KUNIT_TEST not in final .config
Requested value: CONFIG_DRM_KUNIT_TEST=m
Actual value:
Value requested for CONFIG_DRM_XE_WERROR not in final .config
Requested value: CONFIG_DRM_XE_WERROR=y
Actual value:
Value requested for CONFIG_DRM_XE_DEBUG not in final .config
Requested value: CONFIG_DRM_XE_DEBUG=y
Actual value:
Value requested for CONFIG_DRM_XE_DEBUG_MEM not in final .config
Requested value: CONFIG_DRM_XE_DEBUG_MEM=y
Actual value:
Value requested for CONFIG_DRM_XE_KUNIT_TEST not in final .config
Requested value: CONFIG_DRM_XE_KUNIT_TEST=m
Actual value:
++ nproc
+ make -j48 ARCH=i386 olddefconfig
GEN Makefile
WARNING: unmet direct dependencies detected for FB_IOMEM_HELPERS
Depends on [n]: HAS_IOMEM [=y] && FB_CORE [=n]
Selected by [m]:
- DRM_XE_DISPLAY [=y] && HAS_IOMEM [=y] && DRM [=y] && DRM_XE [=m] && DRM_XE [=m]=m [=m]
#
# configuration written to .config
#
++ nproc
+ make -j48 ARCH=i386
SYNC include/config/auto.conf.cmd
GEN Makefile
WARNING: unmet direct dependencies detected for FB_IOMEM_HELPERS
Depends on [n]: HAS_IOMEM [=y] && FB_CORE [=n]
Selected by [m]:
- DRM_XE_DISPLAY [=y] && HAS_IOMEM [=y] && DRM [=y] && DRM_XE [=m] && DRM_XE [=m]=m [=m]
GEN Makefile
WRAP arch/x86/include/generated/uapi/asm/bpf_perf_event.h
UPD include/generated/uapi/linux/version.h
WRAP arch/x86/include/generated/uapi/asm/errno.h
WRAP arch/x86/include/generated/uapi/asm/fcntl.h
WRAP arch/x86/include/generated/uapi/asm/ioctl.h
WRAP arch/x86/include/generated/uapi/asm/ioctls.h
WRAP arch/x86/include/generated/uapi/asm/ipcbuf.h
WRAP arch/x86/include/generated/uapi/asm/param.h
WRAP arch/x86/include/generated/uapi/asm/poll.h
WRAP arch/x86/include/generated/uapi/asm/resource.h
WRAP arch/x86/include/generated/uapi/asm/socket.h
WRAP arch/x86/include/generated/uapi/asm/sockios.h
WRAP arch/x86/include/generated/uapi/asm/termbits.h
WRAP arch/x86/include/generated/uapi/asm/termios.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_32.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_64.h
WRAP arch/x86/include/generated/uapi/asm/types.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_x32.h
SYSTBL arch/x86/include/generated/asm/syscalls_32.h
UPD include/generated/compile.h
WRAP arch/x86/include/generated/asm/early_ioremap.h
HOSTCC arch/x86/tools/relocs_32.o
WRAP arch/x86/include/generated/asm/mcs_spinlock.h
HOSTCC arch/x86/tools/relocs_64.o
WRAP arch/x86/include/generated/asm/irq_regs.h
WRAP arch/x86/include/generated/asm/kmap_size.h
HOSTCC arch/x86/tools/relocs_common.o
WRAP arch/x86/include/generated/asm/local64.h
WRAP arch/x86/include/generated/asm/mmiowb.h
HOSTCC scripts/kallsyms
HOSTCC scripts/sorttable
WRAP arch/x86/include/generated/asm/rwonce.h
WRAP arch/x86/include/generated/asm/module.lds.h
HOSTCC scripts/asn1_compiler
WRAP arch/x86/include/generated/asm/unaligned.h
HOSTCC scripts/selinux/genheaders/genheaders
HOSTCC scripts/selinux/mdp/mdp
HOSTLD arch/x86/tools/relocs
UPD include/config/kernel.release
UPD include/generated/utsrelease.h
CC scripts/mod/empty.o
HOSTCC scripts/mod/mk_elfconfig
CC scripts/mod/devicetable-offsets.s
UPD scripts/mod/devicetable-offsets.h
MKELF scripts/mod/elfconfig.h
HOSTCC scripts/mod/modpost.o
HOSTCC scripts/mod/file2alias.o
HOSTCC scripts/mod/sumversion.o
HOSTCC scripts/mod/symsearch.o
HOSTLD scripts/mod/modpost
CC kernel/bounds.s
CHKSHA1 /workspace/kernel/include/linux/atomic/atomic-arch-fallback.h
CHKSHA1 /workspace/kernel/include/linux/atomic/atomic-instrumented.h
CHKSHA1 /workspace/kernel/include/linux/atomic/atomic-long.h
UPD include/generated/timeconst.h
UPD include/generated/bounds.h
CC arch/x86/kernel/asm-offsets.s
UPD include/generated/asm-offsets.h
CALL /workspace/kernel/scripts/checksyscalls.sh
LDS scripts/module.lds
HOSTCC usr/gen_init_cpio
CC security/commoncap.o
CC security/lsm_syscalls.o
CC security/min_addr.o
CC certs/system_keyring.o
CC security/security.o
CC init/main.o
CC init/do_mounts.o
CC security/lsm_audit.o
CC security/device_cgroup.o
CC init/do_mounts_initrd.o
CC ipc/util.o
CC arch/x86/power/cpu.o
AS arch/x86/lib/atomic64_cx8_32.o
CC init/initramfs.o
CC arch/x86/pci/i386.o
CC io_uring/io_uring.o
CC arch/x86/power/hibernate_32.o
CC ipc/msgutil.o
CC security/integrity/iint.o
CC mm/filemap.o
CC arch/x86/video/video-common.o
CC init/calibrate.o
CC security/keys/gc.o
UPD init/utsversion-tmp.h
AS arch/x86/power/hibernate_asm_32.o
AS arch/x86/lib/checksum_32.o
CC io_uring/opdef.o
GEN security/selinux/flask.h security/selinux/av_permissions.h
AR arch/x86/crypto/built-in.a
AR virt/lib/built-in.a
CC arch/x86/realmode/init.o
CC block/partitions/core.o
CC arch/x86/events/amd/core.o
CC net/core/sock.o
AR arch/x86/net/built-in.a
CC net/ethernet/eth.o
CC fs/notify/dnotify/dnotify.o
CC lib/math/div64.o
AR virt/built-in.a
CC arch/x86/mm/pat/set_memory.o
CC security/selinux/avc.o
AR arch/x86/platform/atom/built-in.a
AR drivers/irqchip/built-in.a
AR drivers/cache/built-in.a
CC arch/x86/kernel/fpu/init.o
CC sound/core/seq/seq.o
CC arch/x86/kernel/fpu/bugs.o
AR arch/x86/virt/svm/built-in.a
CC arch/x86/kernel/fpu/core.o
CC arch/x86/lib/cmdline.o
CC kernel/locking/mutex.o
HOSTCC certs/extract-cert
CC arch/x86/entry/vdso/vma.o
AR arch/x86/platform/ce4100/built-in.a
AR arch/x86/virt/vmx/built-in.a
CC kernel/power/qos.o
AR arch/x86/virt/built-in.a
CC kernel/sched/core.o
CC fs/notify/inotify/inotify_fsnotify.o
AR drivers/bus/mhi/built-in.a
CC arch/x86/platform/efi/memmap.o
AR drivers/bus/built-in.a
AR drivers/pwm/built-in.a
CC crypto/asymmetric_keys/asymmetric_type.o
AR arch/x86/platform/geode/built-in.a
CC fs/notify/inotify/inotify_user.o
CC crypto/asymmetric_keys/restrict.o
CC drivers/pci/msi/pcidev_msi.o
CC lib/math/gcd.o
AS arch/x86/lib/cmpxchg8b_emu.o
CC arch/x86/lib/cpu.o
CC lib/math/lcm.o
CC kernel/power/main.o
GEN usr/initramfs_data.cpio
CC lib/math/int_log.o
CC init/init_task.o
COPY usr/initramfs_inc_data
CERT certs/x509_certificate_list
AS usr/initramfs_data.o
CERT certs/signing_key.x509
CC arch/x86/platform/efi/quirks.o
AR usr/built-in.a
AS certs/system_certificates.o
CC arch/x86/platform/efi/efi.o
CC lib/math/int_pow.o
AR certs/built-in.a
CC lib/math/int_sqrt.o
CC arch/x86/entry/vdso/extable.o
CC sound/core/seq/seq_lock.o
CC lib/math/reciprocal_div.o
AS arch/x86/realmode/rm/header.o
AR arch/x86/video/built-in.a
AR net/802/built-in.a
CC init/version.o
CC block/bdev.o
AR sound/i2c/other/built-in.a
AR sound/i2c/built-in.a
AS arch/x86/realmode/rm/trampoline_32.o
CC lib/math/rational.o
LDS arch/x86/entry/vdso/vdso32/vdso32.lds
CC security/integrity/integrity_audit.o
CC arch/x86/power/hibernate.o
CC io_uring/kbuf.o
CC arch/x86/lib/delay.o
AS arch/x86/lib/getuser.o
CC arch/x86/pci/init.o
CC sound/core/sound.o
AS arch/x86/realmode/rm/stack.o
CC arch/x86/events/intel/core.o
AS arch/x86/realmode/rm/reboot.o
AS arch/x86/entry/vdso/vdso32/note.o
CC fs/nfs_common/nfsacl.o
AR fs/notify/dnotify/built-in.a
CC drivers/pci/msi/api.o
AS arch/x86/realmode/rm/wakeup_asm.o
CC fs/nfs_common/grace.o
CC security/keys/key.o
AS arch/x86/entry/vdso/vdso32/system_call.o
CC crypto/asymmetric_keys/signature.o
CC arch/x86/realmode/rm/wakemain.o
CC arch/x86/realmode/rm/video-mode.o
CC crypto/asymmetric_keys/public_key.o
CC drivers/pci/msi/msi.o
CC kernel/power/console.o
CC block/partitions/msdos.o
ASN.1 crypto/asymmetric_keys/x509.asn1.[ch]
CC block/partitions/efi.o
AS arch/x86/entry/vdso/vdso32/sigreturn.o
AR arch/x86/entry/vsyscall/built-in.a
AS arch/x86/entry/entry.o
CC sound/core/seq/seq_clientmgr.o
AS arch/x86/entry/entry_32.o
AS arch/x86/realmode/rm/copy.o
AS arch/x86/realmode/rm/bioscall.o
CC arch/x86/pci/pcbios.o
GEN arch/x86/lib/inat-tables.c
CC sound/core/init.o
CC arch/x86/mm/init.o
CC arch/x86/realmode/rm/regs.o
CC arch/x86/realmode/rm/video-vga.o
AR lib/math/built-in.a
AR fs/notify/inotify/built-in.a
AR fs/notify/fanotify/built-in.a
CC arch/x86/lib/insn-eval.o
CC lib/crypto/memneq.o
CC lib/zlib_inflate/inffast.o
CC lib/crypto/mpi/generic_mpih-lshift.o
CC fs/notify/fsnotify.o
CC arch/x86/realmode/rm/video-vesa.o
CC fs/notify/notification.o
CC lib/crypto/utils.o
CC ipc/msg.o
CC lib/crypto/chacha.o
CC arch/x86/events/amd/lbr.o
CC arch/x86/realmode/rm/video-bios.o
CC kernel/power/process.o
CC arch/x86/entry/vdso/vdso32/vclock_gettime.o
CC arch/x86/kernel/fpu/regset.o
AR net/ethernet/built-in.a
CC arch/x86/mm/pat/memtype.o
CC kernel/locking/semaphore.o
CC arch/x86/events/amd/ibs.o
CC arch/x86/kernel/fpu/signal.o
CC arch/x86/kernel/fpu/xstate.o
CC security/selinux/hooks.o
CC arch/x86/entry/syscall_32.o
CC lib/zlib_inflate/inflate.o
PASYMS arch/x86/realmode/rm/pasyms.h
ASN.1 crypto/asymmetric_keys/x509_akid.asn1.[ch]
AR arch/x86/power/built-in.a
CC arch/x86/platform/efi/efi_32.o
LDS arch/x86/realmode/rm/realmode.lds
CC kernel/printk/printk.o
CC kernel/irq/irqdesc.o
CC kernel/locking/rwsem.o
LD arch/x86/realmode/rm/realmode.elf
AR security/integrity/built-in.a
RELOCS arch/x86/realmode/rm/realmode.relocs
OBJCOPY arch/x86/realmode/rm/realmode.bin
AR init/built-in.a
AS arch/x86/realmode/rmpiggy.o
CC lib/zlib_deflate/deflate.o
CC kernel/locking/percpu-rwsem.o
CC crypto/asymmetric_keys/x509_loader.o
AR arch/x86/realmode/built-in.a
AR fs/nfs_common/built-in.a
CC fs/notify/group.o
AR sound/drivers/opl3/built-in.a
CC arch/x86/mm/pat/memtype_interval.o
CC drivers/pci/pcie/portdrv.o
CC fs/iomap/trace.o
AR arch/x86/platform/iris/built-in.a
AR drivers/pci/pwrctl/built-in.a
AR sound/drivers/opl4/built-in.a
CC fs/iomap/iter.o
CC arch/x86/platform/intel/iosf_mbi.o
CC drivers/video/console/dummycon.o
AR sound/drivers/mpu401/built-in.a
CC security/selinux/selinuxfs.o
AR sound/drivers/vx/built-in.a
CC fs/iomap/buffered-io.o
AR sound/drivers/pcsp/built-in.a
CC arch/x86/pci/mmconfig_32.o
CC crypto/asymmetric_keys/x509_public_key.o
AR sound/drivers/built-in.a
CC lib/crypto/mpi/generic_mpih-mul1.o
CC lib/crypto/mpi/generic_mpih-mul2.o
CC security/keys/keyring.o
AR block/partitions/built-in.a
CC lib/crypto/aes.o
CC block/fops.o
CC drivers/pci/msi/irqdomain.o
CC block/bio.o
CC lib/zlib_deflate/deftree.o
CC kernel/irq/handle.o
CC kernel/irq/manage.o
CC arch/x86/entry/vdso/vdso32/vgetcpu.o
CC arch/x86/lib/insn.o
CC lib/zlib_inflate/infutil.o
HOSTCC arch/x86/entry/vdso/vdso2c
CC arch/x86/lib/kaslr.o
AS arch/x86/platform/efi/efi_stub_32.o
CC arch/x86/lib/memcpy_32.o
CC drivers/pci/pcie/rcec.o
CC kernel/locking/spinlock.o
AS arch/x86/lib/memmove_32.o
CC arch/x86/platform/efi/runtime-map.o
CC drivers/pci/pcie/aspm.o
CC arch/x86/events/amd/uncore.o
AR arch/x86/mm/pat/built-in.a
CC kernel/irq/spurious.o
CC arch/x86/mm/init_32.o
CC fs/notify/mark.o
CC kernel/rcu/update.o
AR kernel/livepatch/built-in.a
CC sound/core/seq/seq_memory.o
CC fs/quota/dquot.o
CC ipc/sem.o
CC fs/proc/task_mmu.o
CC fs/notify/fdinfo.o
CC drivers/video/console/vgacon.o
CC arch/x86/lib/misc.o
AR arch/x86/platform/intel/built-in.a
ASN.1 crypto/asymmetric_keys/pkcs7.asn1.[ch]
CC fs/proc/inode.o
CC crypto/asymmetric_keys/pkcs7_trust.o
CC crypto/asymmetric_keys/pkcs7_verify.o
CC arch/x86/lib/pc-conf-reg.o
AS arch/x86/lib/putuser.o
CC arch/x86/pci/direct.o
CC fs/proc/root.o
CC lib/crypto/mpi/generic_mpih-mul3.o
CC lib/zlib_inflate/inftrees.o
CC fs/proc/base.o
CC sound/core/memory.o
CC kernel/power/suspend.o
CC arch/x86/entry/vdso/vdso32-setup.o
CC kernel/locking/osq_lock.o
CC lib/zlib_deflate/deflate_syms.o
AR arch/x86/kernel/fpu/built-in.a
CC fs/iomap/direct-io.o
AR arch/x86/platform/intel-mid/built-in.a
AR sound/isa/ad1816a/built-in.a
CC arch/x86/kernel/cpu/mce/core.o
CC fs/iomap/fiemap.o
AS arch/x86/lib/retpoline.o
CC arch/x86/kernel/cpu/mce/severity.o
CC lib/zlib_inflate/inflate_syms.o
AR sound/isa/ad1848/built-in.a
AR drivers/pci/msi/built-in.a
AR sound/isa/cs423x/built-in.a
CC arch/x86/kernel/cpu/mce/genpool.o
CC arch/x86/lib/string_32.o
AR sound/isa/es1688/built-in.a
CC fs/iomap/seek.o
CC mm/mempool.o
CC fs/iomap/swapfile.o
CC arch/x86/lib/strstr_32.o
AR sound/isa/galaxy/built-in.a
CC kernel/locking/qspinlock.o
AR sound/isa/gus/built-in.a
AR sound/isa/msnd/built-in.a
VDSO arch/x86/entry/vdso/vdso32.so.dbg
CC arch/x86/lib/usercopy.o
AR sound/isa/opti9xx/built-in.a
CC lib/crypto/arc4.o
OBJCOPY arch/x86/entry/vdso/vdso32.so
VDSO2C arch/x86/entry/vdso/vdso-image-32.c
CC arch/x86/entry/vdso/vdso-image-32.o
AR sound/isa/sb/built-in.a
CC kernel/locking/rtmutex_api.o
CC crypto/asymmetric_keys/x509.asn1.o
AR sound/isa/wavefront/built-in.a
AR arch/x86/platform/efi/built-in.a
AR lib/zlib_deflate/built-in.a
CC kernel/locking/qrwlock.o
CC crypto/asymmetric_keys/x509_akid.asn1.o
AR sound/isa/wss/built-in.a
AR arch/x86/platform/intel-quark/built-in.a
CC lib/crypto/mpi/generic_mpih-rshift.o
AR sound/isa/built-in.a
AR lib/zlib_inflate/built-in.a
CC net/core/request_sock.o
AR arch/x86/platform/olpc/built-in.a
CC crypto/asymmetric_keys/x509_cert_parser.o
AR arch/x86/platform/scx200/built-in.a
CC security/keys/keyctl.o
CC arch/x86/kernel/cpu/mtrr/mtrr.o
CC arch/x86/kernel/cpu/mtrr/if.o
CC arch/x86/kernel/cpu/mtrr/generic.o
CC drivers/pci/hotplug/pci_hotplug_core.o
AR arch/x86/platform/ts5500/built-in.a
AR arch/x86/platform/uv/built-in.a
AR arch/x86/platform/built-in.a
CC sound/core/seq/seq_queue.o
AR arch/x86/entry/vdso/built-in.a
CC arch/x86/entry/common.o
CC arch/x86/pci/mmconfig-shared.o
CC sound/core/seq/seq_fifo.o
CC sound/core/seq/seq_prioq.o
CC io_uring/rsrc.o
CC arch/x86/mm/fault.o
AR fs/notify/built-in.a
CC arch/x86/lib/usercopy_32.o
CC sound/core/control.o
CC lib/crypto/mpi/generic_mpih-sub1.o
AR arch/x86/events/amd/built-in.a
CC kernel/rcu/sync.o
AR drivers/pci/switch/built-in.a
AR drivers/pci/controller/dwc/built-in.a
CC kernel/printk/printk_safe.o
CC ipc/shm.o
AR drivers/pci/controller/mobiveil/built-in.a
CC drivers/pci/pcie/pme.o
AR drivers/pci/controller/plda/built-in.a
CC kernel/printk/nbcon.o
AR drivers/pci/controller/built-in.a
AR drivers/video/console/built-in.a
CC kernel/printk/printk_ringbuffer.o
CC sound/core/misc.o
CC drivers/video/backlight/backlight.o
CC kernel/irq/resend.o
AR drivers/video/fbdev/core/built-in.a
CC kernel/printk/sysctl.o
CC arch/x86/events/zhaoxin/core.o
CC arch/x86/events/core.o
CC kernel/irq/chip.o
AR drivers/video/fbdev/omap/built-in.a
CC arch/x86/lib/msr-smp.o
CC mm/oom_kill.o
CC arch/x86/events/probe.o
CC arch/x86/kernel/cpu/mce/intel.o
CC crypto/asymmetric_keys/pkcs7.asn1.o
CC block/elevator.o
AR drivers/video/fbdev/omap2/omapfb/dss/built-in.a
AR drivers/video/fbdev/omap2/omapfb/displays/built-in.a
CC crypto/asymmetric_keys/pkcs7_parser.o
AR drivers/video/fbdev/omap2/omapfb/built-in.a
CC kernel/power/hibernate.o
CC arch/x86/events/utils.o
AR drivers/video/fbdev/omap2/built-in.a
CC sound/core/device.o
CC sound/core/info.o
AR fs/iomap/built-in.a
AR drivers/video/fbdev/built-in.a
CC drivers/video/aperture.o
CC io_uring/notif.o
CC arch/x86/mm/ioremap.o
CC drivers/pci/hotplug/acpi_pcihp.o
CC drivers/pci/access.o
CC sound/core/seq/seq_timer.o
CC lib/crypto/mpi/generic_mpih-add1.o
CC arch/x86/events/intel/bts.o
CC arch/x86/lib/cache-smp.o
AR kernel/locking/built-in.a
CC drivers/pci/bus.o
CC arch/x86/events/intel/ds.o
CC net/core/skbuff.o
CC drivers/pci/probe.o
CC kernel/irq/dummychip.o
CC kernel/rcu/srcutree.o
AR kernel/printk/built-in.a
AS arch/x86/entry/thunk.o
CC lib/crypto/mpi/ec.o
CC lib/crypto/mpi/mpicoder.o
CC fs/proc/generic.o
CC arch/x86/kernel/cpu/mtrr/cleanup.o
CC kernel/dma/mapping.o
AR drivers/pci/pcie/built-in.a
CC arch/x86/pci/fixup.o
AR arch/x86/entry/built-in.a
CC kernel/entry/common.o
CC arch/x86/lib/msr.o
CC kernel/dma/direct.o
AS arch/x86/lib/msr-reg.o
CC ipc/syscall.o
CC arch/x86/lib/msr-reg-export.o
AR crypto/asymmetric_keys/built-in.a
CC fs/quota/quota_v2.o
CC security/keys/permission.o
CC kernel/sched/fair.o
CC crypto/api.o
AR drivers/video/backlight/built-in.a
CC kernel/sched/build_policy.o
CC kernel/sched/build_utility.o
CC arch/x86/kernel/cpu/mtrr/amd.o
CC arch/x86/kernel/cpu/mce/amd.o
CC mm/fadvise.o
CC lib/crypto/gf128mul.o
CC kernel/module/main.o
CC kernel/irq/devres.o
CC lib/crypto/blake2s.o
CC kernel/module/strict_rwx.o
CC lib/crypto/blake2s-generic.o
CC lib/crypto/mpi/mpi-add.o
CC lib/crypto/mpi/mpi-bit.o
AR drivers/pci/hotplug/built-in.a
CC sound/core/seq/seq_system.o
CC drivers/video/cmdline.o
CC sound/core/seq/seq_ports.o
CC lib/crypto/sha1.o
CC lib/crypto/mpi/mpi-cmp.o
CC lib/crypto/mpi/mpi-sub-ui.o
CC sound/core/seq/seq_info.o
AR arch/x86/events/zhaoxin/built-in.a
CC arch/x86/mm/extable.o
CC arch/x86/events/intel/knc.o
CC arch/x86/kernel/cpu/mtrr/cyrix.o
CC kernel/dma/ops_helpers.o
CC fs/proc/array.o
CC security/selinux/netlink.o
CC block/blk-core.o
CC kernel/irq/autoprobe.o
CC kernel/dma/dummy.o
CC security/keys/process_keys.o
CC security/selinux/nlmsgtab.o
CC fs/quota/quota_tree.o
CC ipc/ipc_sysctl.o
CC net/core/datagram.o
CC sound/core/isadma.o
CC kernel/power/snapshot.o
CC kernel/power/swap.o
CC io_uring/tctx.o
CC crypto/cipher.o
CC kernel/power/user.o
CC kernel/module/kmod.o
CC lib/crypto/mpi/mpi-div.o
CC kernel/time/time.o
CC kernel/rcu/tree.o
CC kernel/entry/syscall_user_dispatch.o
CC arch/x86/pci/acpi.o
CC lib/crypto/mpi/mpi-inv.o
CC kernel/dma/remap.o
CC kernel/irq/irqdomain.o
CC fs/quota/quota.o
CC mm/maccess.o
AS arch/x86/lib/hweight.o
CC security/selinux/netif.o
CC kernel/rcu/rcu_segcblist.o
CC kernel/irq/proc.o
CC security/keys/request_key.o
CC mm/page-writeback.o
CC arch/x86/lib/iomem.o
CC arch/x86/kernel/cpu/microcode/core.o
CC arch/x86/kernel/cpu/mtrr/centaur.o
CC arch/x86/kernel/cpu/microcode/intel.o
CC drivers/pci/host-bridge.o
CC drivers/video/nomodeset.o
CC drivers/pci/remove.o
CC ipc/mqueue.o
CC sound/core/seq/seq_dummy.o
CC fs/proc/fd.o
CC fs/proc/proc_tty.o
CC kernel/futex/core.o
CC arch/x86/mm/mmap.o
CC kernel/cgroup/cgroup.o
CC crypto/compress.o
CC arch/x86/events/intel/lbr.o
CC kernel/cgroup/rstat.o
CC arch/x86/kernel/cpu/microcode/amd.o
CC arch/x86/lib/atomic64_32.o
AR kernel/entry/built-in.a
CC kernel/cgroup/namespace.o
CC kernel/cgroup/cgroup-v1.o
CC drivers/video/hdmi.o
CC kernel/trace/trace_clock.o
CC lib/crypto/mpi/mpi-mod.o
AR kernel/dma/built-in.a
CC kernel/bpf/core.o
CC lib/crypto/mpi/mpi-mul.o
CC arch/x86/lib/inat.o
CC security/selinux/netnode.o
CC kernel/events/core.o
CC arch/x86/kernel/cpu/mtrr/legacy.o
CC security/selinux/netport.o
CC arch/x86/pci/legacy.o
AR arch/x86/lib/built-in.a
CC fs/quota/kqid.o
CC arch/x86/kernel/cpu/mce/threshold.o
CC mm/folio-compat.o
AR arch/x86/lib/lib.a
CC io_uring/filetable.o
CC crypto/algapi.o
CC kernel/time/timer.o
CC fs/kernfs/mount.o
CC arch/x86/mm/pgtable.o
CC arch/x86/events/intel/p4.o
CC kernel/time/hrtimer.o
AR sound/core/seq/built-in.a
CC security/keys/request_key_auth.o
CC drivers/pci/pci.o
CC sound/core/vmaster.o
CC fs/sysfs/file.o
CC sound/core/ctljack.o
CC kernel/trace/ring_buffer.o
CC sound/core/jack.o
CC kernel/irq/migration.o
AR arch/x86/kernel/cpu/mtrr/built-in.a
CC kernel/irq/cpuhotplug.o
CC fs/proc/cmdline.o
CC crypto/scatterwalk.o
CC crypto/proc.o
CC kernel/futex/syscalls.o
CC crypto/aead.o
CC lib/crypto/mpi/mpih-cmp.o
CC crypto/geniv.o
CC kernel/module/tree_lookup.o
CC arch/x86/pci/irq.o
AR arch/x86/kernel/cpu/microcode/built-in.a
CC ipc/namespace.o
CC fs/quota/netlink.o
AR drivers/video/built-in.a
CC fs/sysfs/dir.o
CC kernel/power/poweroff.o
CC ipc/mq_sysctl.o
CC block/blk-sysfs.o
CC kernel/irq/pm.o
CC sound/core/timer.o
CC mm/readahead.o
CC fs/kernfs/inode.o
CC kernel/irq/msi.o
CC security/keys/user_defined.o
CC arch/x86/mm/physaddr.o
CC kernel/irq/affinity.o
CC fs/proc/consoles.o
CC block/blk-flush.o
CC arch/x86/mm/tlb.o
CC io_uring/rw.o
AR kernel/power/built-in.a
CC fs/kernfs/dir.o
CC kernel/trace/trace.o
CC io_uring/net.o
CC fs/kernfs/file.o
CC security/selinux/status.o
CC kernel/trace/trace_output.o
CC io_uring/poll.o
CC lib/crypto/mpi/mpih-div.o
CC security/keys/proc.o
CC mm/swap.o
CC fs/kernfs/symlink.o
CC arch/x86/events/intel/p6.o
AR arch/x86/kernel/cpu/mce/built-in.a
CC kernel/module/kallsyms.o
CC fs/sysfs/symlink.o
AR ipc/built-in.a
CC arch/x86/kernel/cpu/cacheinfo.o
CC arch/x86/events/intel/pt.o
CC arch/x86/events/intel/uncore.o
CC arch/x86/events/intel/uncore_nhmex.o
CC crypto/lskcipher.o
CC kernel/irq/matrix.o
CC kernel/futex/pi.o
AR fs/quota/built-in.a
CC mm/truncate.o
CC mm/vmscan.o
CC security/keys/sysctl.o
CC crypto/skcipher.o
CC crypto/seqiv.o
CC fs/proc/cpuinfo.o
CC crypto/echainiv.o
CC crypto/ahash.o
CC fs/proc/devices.o
CC arch/x86/pci/common.o
CC block/blk-settings.o
CC arch/x86/pci/early.o
CC mm/shrinker.o
CC mm/shmem.o
CC fs/sysfs/mount.o
CC kernel/module/procfs.o
CC lib/crypto/mpi/mpih-mul.o
CC kernel/module/sysfs.o
CC kernel/cgroup/freezer.o
CC kernel/time/timekeeping.o
CC fs/devpts/inode.o
CC arch/x86/mm/cpu_entry_area.o
CC security/keys/keyctl_pkey.o
CC kernel/cgroup/legacy_freezer.o
CC security/selinux/ss/ebitmap.o
CC kernel/futex/requeue.o
AR fs/kernfs/built-in.a
CC net/core/stream.o
CC lib/crypto/mpi/mpi-pow.o
CC mm/util.o
CC sound/core/hrtimer.o
CC arch/x86/pci/bus_numa.o
CC arch/x86/kernel/cpu/scattered.o
CC crypto/shash.o
CC arch/x86/kernel/cpu/topology_common.o
CC arch/x86/pci/amd_bus.o
CC fs/proc/interrupts.o
CC crypto/akcipher.o
CC mm/mmzone.o
AR kernel/bpf/built-in.a
CC arch/x86/events/intel/uncore_snb.o
CC io_uring/eventfd.o
CC fs/proc/loadavg.o
CC fs/proc/meminfo.o
CC lib/crypto/mpi/mpiutil.o
CC arch/x86/events/intel/uncore_snbep.o
CC arch/x86/events/intel/uncore_discovery.o
AR kernel/irq/built-in.a
CC crypto/sig.o
AR kernel/sched/built-in.a
CC kernel/trace/trace_seq.o
CC io_uring/uring_cmd.o
AR sound/pci/ac97/built-in.a
CC kernel/trace/trace_stat.o
CC block/blk-ioc.o
CC sound/core/seq_device.o
AR kernel/module/built-in.a
CC mm/vmstat.o
AR sound/pci/ali5451/built-in.a
CC fs/sysfs/group.o
CC mm/backing-dev.o
CC arch/x86/mm/maccess.o
AR sound/pci/asihpi/built-in.a
AR security/keys/built-in.a
CC io_uring/openclose.o
CC crypto/kpp.o
AR sound/pci/au88x0/built-in.a
CC security/selinux/ss/hashtab.o
CC drivers/pci/pci-driver.o
AR sound/pci/aw2/built-in.a
CC arch/x86/kernel/cpu/topology_ext.o
CC security/selinux/ss/symtab.o
CC arch/x86/kernel/cpu/topology_amd.o
CC drivers/pci/search.o
AR sound/pci/ctxfi/built-in.a
AR fs/devpts/built-in.a
CC arch/x86/mm/pgprot.o
CC arch/x86/mm/pgtable_32.o
CC kernel/futex/waitwake.o
AR sound/pci/ca0106/built-in.a
AR kernel/rcu/built-in.a
CC arch/x86/mm/iomap_32.o
CC net/core/scm.o
AR sound/pci/cs46xx/built-in.a
CC fs/netfs/buffered_read.o
CC arch/x86/kernel/acpi/boot.o
AR sound/pci/cs5535audio/built-in.a
CC kernel/fork.o
CC fs/ext4/balloc.o
CC arch/x86/events/rapl.o
AR sound/pci/lola/built-in.a
CC kernel/exec_domain.o
CC arch/x86/events/msr.o
CC arch/x86/kernel/acpi/sleep.o
AR sound/pci/lx6464es/built-in.a
AR sound/pci/echoaudio/built-in.a
AR arch/x86/pci/built-in.a
AR sound/pci/emu10k1/built-in.a
AR lib/crypto/mpi/built-in.a
CC fs/netfs/buffered_write.o
CC lib/crypto/sha256.o
AR sound/pci/hda/built-in.a
CC fs/jbd2/transaction.o
CC fs/jbd2/commit.o
CC fs/proc/stat.o
AR sound/pci/ice1712/built-in.a
CC [M] sound/pci/hda/hda_bind.o
AS arch/x86/kernel/acpi/wakeup_32.o
CC arch/x86/kernel/acpi/cstate.o
CC arch/x86/kernel/cpu/common.o
CC kernel/time/ntp.o
CC fs/jbd2/recovery.o
CC net/core/gen_stats.o
AR drivers/idle/built-in.a
CC [M] sound/core/hwdep.o
CC kernel/cgroup/pids.o
CC net/sched/sch_generic.o
CC io_uring/sqpoll.o
CC security/selinux/ss/sidtab.o
CC net/netlink/af_netlink.o
AR fs/sysfs/built-in.a
CC io_uring/xattr.o
CC block/blk-map.o
CC block/blk-merge.o
CC block/blk-timeout.o
CC io_uring/nop.o
CC security/selinux/ss/avtab.o
CC arch/x86/mm/hugetlbpage.o
CC arch/x86/mm/dump_pagetables.o
CC arch/x86/mm/highmem_32.o
ASN.1 crypto/rsapubkey.asn1.[ch]
CC arch/x86/events/intel/cstate.o
ASN.1 crypto/rsaprivkey.asn1.[ch]
CC crypto/rsa.o
AR kernel/futex/built-in.a
CC fs/netfs/direct_read.o
CC kernel/panic.o
CC kernel/cpu.o
CC arch/x86/kernel/cpu/rdrand.o
CC security/selinux/ss/policydb.o
CC security/selinux/ss/services.o
AR lib/crypto/built-in.a
CC lib/lzo/lzo1x_compress.o
CC fs/proc/uptime.o
CC security/selinux/ss/conditional.o
CC arch/x86/kernel/cpu/match.o
CC lib/lzo/lzo1x_decompress_safe.o
CC kernel/cgroup/rdma.o
CC [M] sound/pci/hda/hda_codec.o
CC drivers/pci/rom.o
CC kernel/exit.o
CC [M] sound/core/pcm.o
CC kernel/time/clocksource.o
CC kernel/softirq.o
AR arch/x86/kernel/acpi/built-in.a
CC kernel/resource.o
CC kernel/time/jiffies.o
CC security/selinux/ss/mls.o
AR sound/pci/korg1212/built-in.a
CC arch/x86/kernel/apic/apic.o
CC mm/mm_init.o
CC arch/x86/kernel/apic/apic_common.o
CC arch/x86/kernel/kprobes/core.o
CC mm/percpu.o
CC security/selinux/ss/context.o
CC net/core/gen_estimator.o
AR arch/x86/mm/built-in.a
CC kernel/events/ring_buffer.o
CC block/blk-lib.o
CC io_uring/fs.o
LDS arch/x86/kernel/vmlinux.lds
CC fs/ramfs/inode.o
CC crypto/rsa_helper.o
AS arch/x86/kernel/head_32.o
CC arch/x86/kernel/head32.o
CC kernel/trace/trace_printk.o
CC block/blk-mq.o
CC arch/x86/kernel/kprobes/opt.o
AR lib/lzo/built-in.a
CC fs/proc/util.o
CC crypto/rsa-pkcs1pad.o
CC lib/lz4/lz4_decompress.o
CC crypto/acompress.o
CC kernel/cgroup/cpuset.o
CC crypto/scompress.o
AR net/bpf/built-in.a
CC fs/netfs/direct_write.o
CC net/ethtool/ioctl.o
CC kernel/sysctl.o
CC security/selinux/netlabel.o
CC net/sched/sch_mq.o
CC arch/x86/kernel/cpu/bugs.o
CC drivers/pci/setup-res.o
AR arch/x86/events/intel/built-in.a
CC fs/jbd2/checkpoint.o
CC drivers/pci/irq.o
AR arch/x86/events/built-in.a
CC io_uring/splice.o
CC drivers/pci/vpd.o
CC [M] sound/core/pcm_native.o
CC net/core/net_namespace.o
CC kernel/time/timer_list.o
CC fs/proc/version.o
CC block/blk-mq-tag.o
CC mm/slab_common.o
CC fs/ext4/bitmap.o
CC block/blk-stat.o
CC fs/ramfs/file-mmu.o
CC [M] sound/pci/hda/hda_jack.o
CC fs/netfs/io.o
CC net/netfilter/core.o
CC [M] sound/pci/hda/hda_auto_parser.o
CC net/netfilter/nf_log.o
CC kernel/trace/pid_list.o
AR arch/x86/kernel/kprobes/built-in.a
CC net/netfilter/nf_queue.o
CC arch/x86/kernel/ebda.o
CC block/blk-mq-sysfs.o
CC kernel/events/callchain.o
CC kernel/events/hw_breakpoint.o
CC arch/x86/kernel/platform-quirks.o
CC kernel/events/uprobes.o
CC io_uring/sync.o
CC net/ipv4/netfilter/nf_defrag_ipv4.o
CC io_uring/msg_ring.o
CC crypto/algboss.o
CC net/netfilter/nf_sockopt.o
CC net/netfilter/utils.o
CC fs/proc/softirqs.o
CC net/netfilter/nfnetlink.o
CC net/sched/sch_frag.o
AR lib/lz4/built-in.a
CC net/netlink/genetlink.o
CC kernel/time/timeconv.o
CC lib/zstd/zstd_decompress_module.o
CC drivers/pci/setup-bus.o
CC crypto/testmgr.o
CC fs/jbd2/revoke.o
CC lib/zstd/decompress/huf_decompress.o
AR fs/ramfs/built-in.a
CC kernel/capability.o
CC kernel/ptrace.o
CC fs/ext4/block_validity.o
CC net/netlink/policy.o
CC net/sched/sch_api.o
CC drivers/pci/vc.o
CC drivers/pci/mmap.o
CC net/netfilter/nfnetlink_log.o
CC arch/x86/kernel/apic/apic_noop.o
CC arch/x86/kernel/process_32.o
CC arch/x86/kernel/signal.o
CC kernel/time/timecounter.o
CC arch/x86/kernel/cpu/aperfmperf.o
CC arch/x86/kernel/cpu/cpuid-deps.o
CC arch/x86/kernel/cpu/umwait.o
CC kernel/trace/trace_sched_switch.o
CC fs/proc/namespaces.o
CC lib/zstd/decompress/zstd_ddict.o
AR security/selinux/built-in.a
CC lib/zstd/decompress/zstd_decompress.o
AR security/built-in.a
CC kernel/time/alarmtimer.o
CC [M] sound/pci/hda/hda_sysfs.o
CC [M] sound/pci/hda/hda_controller.o
CC lib/zstd/decompress/zstd_decompress_block.o
CC kernel/time/posix-timers.o
CC net/core/secure_seq.o
CC kernel/user.o
CC fs/proc/self.o
CC io_uring/advise.o
CC fs/proc/thread_self.o
CC fs/netfs/iterator.o
CC kernel/trace/trace_nop.o
CC crypto/cmac.o
CC arch/x86/kernel/apic/ipi.o
CC drivers/pci/devres.o
CC io_uring/epoll.o
CC net/ipv4/netfilter/nf_reject_ipv4.o
CC [M] sound/pci/hda/hda_proc.o
MKCAP arch/x86/kernel/cpu/capflags.c
CC drivers/pci/proc.o
CC fs/jbd2/journal.o
CC block/blk-mq-cpumap.o
CC [M] sound/pci/hda/hda_hwdep.o
CC fs/proc/proc_sysctl.o
CC fs/ext4/dir.o
CC mm/compaction.o
CC kernel/cgroup/misc.o
CC kernel/cgroup/debug.o
CC mm/show_mem.o
CC arch/x86/kernel/cpu/powerflags.o
CC arch/x86/kernel/cpu/topology.o
CC net/netfilter/nf_conntrack_core.o
CC kernel/signal.o
CC lib/xz/xz_dec_syms.o
AR sound/pci/nm256/built-in.a
CC fs/netfs/locking.o
AR sound/pci/mixart/built-in.a
CC block/blk-mq-sched.o
CC [M] sound/core/pcm_lib.o
CC net/ethtool/common.o
CC fs/netfs/main.o
CC fs/netfs/misc.o
CC crypto/hmac.o
AR kernel/events/built-in.a
CC kernel/sys.o
CC crypto/crypto_null.o
CC mm/shmem_quota.o
CC arch/x86/kernel/apic/vector.o
CC kernel/umh.o
CC mm/interval_tree.o
CC mm/list_lru.o
CC arch/x86/kernel/apic/init.o
CC net/sched/sch_blackhole.o
CC io_uring/statx.o
CC lib/xz/xz_dec_stream.o
AR net/netlink/built-in.a
CC lib/xz/xz_dec_lzma2.o
CC lib/xz/xz_dec_bcj.o
CC net/ethtool/netlink.o
CC fs/proc/proc_net.o
CC drivers/pci/pci-sysfs.o
CC drivers/pci/slot.o
CC kernel/workqueue.o
CC kernel/pid.o
CC kernel/task_work.o
CC kernel/trace/blktrace.o
CC kernel/trace/trace_events.o
CC kernel/trace/trace_export.o
CC net/core/flow_dissector.o
CC kernel/time/posix-cpu-timers.o
CC net/sched/cls_api.o
AR kernel/cgroup/built-in.a
CC fs/proc/kcore.o
CC fs/proc/vmcore.o
CC kernel/time/posix-clock.o
CC [M] sound/pci/hda/patch_hdmi.o
CC [M] sound/pci/hda/hda_eld.o
CC block/ioctl.o
CC block/genhd.o
CC crypto/md5.o
CC mm/workingset.o
CC fs/ext4/ext4_jbd2.o
CC [M] sound/pci/hda/hda_intel.o
CC net/sched/act_api.o
CC net/ipv4/netfilter/ip_tables.o
CC net/core/sysctl_net_core.o
CC net/sched/sch_fifo.o
CC net/sched/cls_cgroup.o
CC net/sched/ematch.o
CC net/ipv4/netfilter/iptable_filter.o
CC lib/dim/dim.o
CC net/xfrm/xfrm_policy.o
AR lib/xz/built-in.a
CC lib/fonts/fonts.o
CC net/unix/af_unix.o
CC io_uring/timeout.o
CC net/xfrm/xfrm_state.o
CC net/xfrm/xfrm_hash.o
CC kernel/trace/trace_event_perf.o
CC kernel/time/itimer.o
CC net/unix/garbage.o
CC kernel/trace/trace_events_filter.o
CC crypto/sha256_generic.o
CC lib/zstd/zstd_common_module.o
CC lib/dim/net_dim.o
CC kernel/time/clockevents.o
CC lib/fonts/font_8x16.o
CC [M] sound/core/pcm_misc.o
CC drivers/pci/pci-acpi.o
CC net/ethtool/bitset.o
CC lib/argv_split.o
CC fs/netfs/objects.o
CC fs/netfs/write_collect.o
CC lib/zstd/common/debug.o
CC lib/zstd/common/entropy_common.o
CC kernel/extable.o
CC block/ioprio.o
CC arch/x86/kernel/apic/hw_nmi.o
CC kernel/params.o
CC fs/proc/kmsg.o
CC arch/x86/kernel/apic/io_apic.o
CC drivers/pci/iomap.o
CC net/unix/sysctl_net_unix.o
AR lib/fonts/built-in.a
CC lib/zstd/common/error_private.o
CC mm/debug.o
CC net/ethtool/strset.o
CC lib/zstd/common/fse_decompress.o
CC crypto/sha512_generic.o
CC crypto/sha3_generic.o
CC fs/proc/page.o
CC drivers/pci/quirks.o
LD [M] sound/pci/hda/snd-hda-codec.o
CC net/xfrm/xfrm_input.o
CC lib/zstd/common/zstd_common.o
CC net/netfilter/nf_conntrack_standalone.o
CC block/badblocks.o
CC [M] sound/core/pcm_memory.o
CC [M] sound/core/memalloc.o
CC net/netfilter/nf_conntrack_expect.o
AR fs/jbd2/built-in.a
CC io_uring/fdinfo.o
CC net/xfrm/xfrm_output.o
CC net/xfrm/xfrm_sysctl.o
CC net/xfrm/xfrm_replay.o
CC mm/gup.o
CC kernel/time/tick-common.o
CC lib/bug.o
CC lib/buildid.o
CC lib/dim/rdma_dim.o
CC lib/clz_tab.o
CC crypto/ecb.o
CC crypto/cbc.o
CC net/ipv4/netfilter/iptable_mangle.o
CC net/ipv4/netfilter/ipt_REJECT.o
CC [M] net/ipv4/netfilter/iptable_nat.o
AR lib/zstd/built-in.a
LD [M] sound/pci/hda/snd-hda-intel.o
CC kernel/time/tick-broadcast.o
CC lib/cmdline.o
CC kernel/time/tick-broadcast-hrtimer.o
LD [M] sound/pci/hda/snd-hda-codec-hdmi.o
AR sound/pci/oxygen/built-in.a
AR sound/pci/pcxhr/built-in.a
CC net/core/dev.o
AR sound/pci/riptide/built-in.a
CC net/ethtool/linkinfo.o
AR sound/pci/rme9652/built-in.a
CC net/ethtool/linkmodes.o
AR sound/pci/trident/built-in.a
CC arch/x86/kernel/apic/msi.o
AR sound/pci/ymfpci/built-in.a
CC fs/netfs/write_issue.o
AR sound/pci/vx222/built-in.a
AR sound/pci/built-in.a
CC fs/ext4/extents.o
CC fs/hugetlbfs/inode.o
CC fs/ext4/extents_status.o
AR fs/proc/built-in.a
AR lib/dim/built-in.a
CC fs/fat/cache.o
CC fs/isofs/namei.o
CC kernel/trace/trace_events_trigger.o
CC fs/fat/dir.o
CC kernel/trace/trace_eprobe.o
CC kernel/trace/trace_kprobe.o
CC fs/fat/fatent.o
CC lib/cpumask.o
CC fs/nfs/client.o
CC fs/fat/file.o
CC crypto/ctr.o
CC kernel/kthread.o
CC block/blk-rq-qos.o
CC block/disk-events.o
CC fs/fat/inode.o
CC kernel/trace/error_report-traces.o
CC [M] sound/core/pcm_timer.o
CC kernel/trace/power-traces.o
CC fs/nfs/dir.o
CC io_uring/cancel.o
CC arch/x86/kernel/apic/probe_32.o
AR net/sched/built-in.a
CC drivers/pci/pci-label.o
CC fs/exportfs/expfs.o
CC lib/ctype.o
CC mm/mmap_lock.o
CC fs/nfs/file.o
CC fs/isofs/inode.o
LD [M] sound/core/snd-hwdep.o
CC kernel/time/tick-oneshot.o
CC crypto/gcm.o
CC drivers/pci/vgaarb.o
CC lib/dec_and_lock.o
CC fs/nfs/getroot.o
AR net/unix/built-in.a
CC fs/nfs/inode.o
CC block/blk-ia-ranges.o
CC block/early-lookup.o
AR net/ipv4/netfilter/built-in.a
CC net/netfilter/nf_conntrack_helper.o
CC net/netfilter/nf_conntrack_proto.o
CC net/ipv4/route.o
CC net/xfrm/xfrm_device.o
CC io_uring/waitid.o
AR arch/x86/kernel/apic/built-in.a
CC net/netfilter/nf_conntrack_proto_generic.o
CC fs/nfs/super.o
CC net/ethtool/rss.o
CC net/ethtool/linkstate.o
CC lib/decompress.o
AR sound/core/built-in.a
CC io_uring/register.o
LD [M] sound/core/snd-pcm.o
AR sound/ppc/built-in.a
CC lib/decompress_bunzip2.o
AR sound/arm/built-in.a
CC arch/x86/kernel/signal_32.o
AR sound/sh/built-in.a
AR fs/netfs/built-in.a
AR sound/synth/emux/built-in.a
CC kernel/trace/rpm-traces.o
AR sound/usb/misc/built-in.a
AR sound/synth/built-in.a
AR sound/usb/usx2y/built-in.a
AR sound/usb/caiaq/built-in.a
AR sound/usb/6fire/built-in.a
CC kernel/trace/trace_dynevent.o
AR fs/exportfs/built-in.a
AR sound/usb/hiface/built-in.a
CC block/bounce.o
CC lib/decompress_inflate.o
AR sound/usb/bcd2000/built-in.a
CC kernel/time/tick-sched.o
CC kernel/time/timer_migration.o
CC kernel/time/vsyscall.o
AR sound/usb/built-in.a
CC fs/lockd/clntlock.o
CC net/ipv4/inetpeer.o
CC kernel/trace/trace_probe.o
AR fs/hugetlbfs/built-in.a
AR sound/firewire/built-in.a
CC net/ipv4/protocol.o
AR sound/sparc/built-in.a
AR sound/spi/built-in.a
AR sound/parisc/built-in.a
CC lib/decompress_unlz4.o
CC net/ethtool/debug.o
AR sound/pcmcia/vx/built-in.a
CC kernel/sys_ni.o
AR sound/pcmcia/pdaudiocf/built-in.a
AR sound/pcmcia/built-in.a
CC kernel/nsproxy.o
CC mm/highmem.o
AR sound/mips/built-in.a
AR sound/soc/built-in.a
AR sound/atmel/built-in.a
AR sound/hda/built-in.a
CC io_uring/truncate.o
CC net/ipv6/netfilter/ip6_tables.o
CC [M] sound/hda/hda_bus_type.o
CC block/bsg.o
CC net/ipv6/netfilter/ip6table_filter.o
CC crypto/ccm.o
CC net/ipv6/af_inet6.o
CC net/ipv6/netfilter/ip6table_mangle.o
CC fs/fat/misc.o
AR drivers/pci/built-in.a
CC net/ipv6/anycast.o
CC net/ipv6/netfilter/nf_defrag_ipv6_hooks.o
AR drivers/char/ipmi/built-in.a
CC drivers/acpi/acpica/dsargs.o
CC mm/memory.o
CC drivers/pnp/pnpacpi/core.o
CC kernel/time/timekeeping_debug.o
CC drivers/pnp/pnpacpi/rsparser.o
CC net/xfrm/xfrm_nat_keepalive.o
CC fs/isofs/dir.o
CC net/xfrm/xfrm_algo.o
CC fs/isofs/util.o
CC lib/decompress_unlzma.o
CC kernel/time/namespace.o
CC mm/mincore.o
CC kernel/trace/trace_uprobe.o
CC mm/mlock.o
CC io_uring/memmap.o
CC kernel/notifier.o
CC drivers/acpi/acpica/dscontrol.o
CC drivers/acpi/acpica/dsdebug.o
CC net/ethtool/wol.o
CC [M] sound/hda/hdac_bus.o
CC net/netfilter/nf_conntrack_proto_tcp.o
CC fs/lockd/clntproc.o
CC block/blk-cgroup.o
CC drivers/pnp/core.o
AR drivers/amba/built-in.a
CC fs/nfs/io.o
AR drivers/acpi/pmic/built-in.a
CC block/blk-ioprio.o
CC drivers/acpi/x86/apple.o
CC drivers/acpi/dptf/int340x_thermal.o
CC block/blk-iolatency.o
CC block/blk-iocost.o
CC drivers/acpi/x86/cmos_rtc.o
AR drivers/clk/actions/built-in.a
CC block/mq-deadline.o
CC drivers/dma/dw/core.o
AR drivers/clk/analogbits/built-in.a
CC block/kyber-iosched.o
CC fs/fat/nfs.o
AR drivers/clk/bcm/built-in.a
CC crypto/aes_generic.o
AR drivers/clk/imgtec/built-in.a
AR drivers/soc/apple/built-in.a
CC fs/isofs/rock.o
CC crypto/crc32c_generic.o
AR drivers/clk/imx/built-in.a
AR drivers/soc/aspeed/built-in.a
AR drivers/soc/bcm/built-in.a
AR drivers/clk/ingenic/built-in.a
CC drivers/acpi/acpica/dsfield.o
AR drivers/soc/fsl/built-in.a
AR drivers/clk/mediatek/built-in.a
AR drivers/soc/fujitsu/built-in.a
CC lib/decompress_unlzo.o
AR drivers/clk/microchip/built-in.a
AR drivers/soc/hisilicon/built-in.a
AR drivers/clk/mstar/built-in.a
AR drivers/soc/imx/built-in.a
AR drivers/clk/mvebu/built-in.a
AR drivers/soc/ixp4xx/built-in.a
AR drivers/clk/ralink/built-in.a
CC drivers/acpi/x86/lpss.o
AR drivers/soc/loongson/built-in.a
AR drivers/pnp/pnpacpi/built-in.a
AR drivers/clk/renesas/built-in.a
AR drivers/soc/mediatek/built-in.a
CC drivers/pnp/card.o
AR drivers/soc/microchip/built-in.a
AR drivers/clk/socfpga/built-in.a
AR kernel/time/built-in.a
CC drivers/pnp/driver.o
CC kernel/trace/rethook.o
CC net/ipv6/ip6_output.o
AR drivers/clk/sophgo/built-in.a
AR drivers/soc/nuvoton/built-in.a
CC mm/mmap.o
AR drivers/soc/pxa/built-in.a
CC io_uring/io-wq.o
CC net/xfrm/xfrm_user.o
AR drivers/clk/sprd/built-in.a
AR drivers/acpi/dptf/built-in.a
AR drivers/soc/amlogic/built-in.a
AR drivers/clk/starfive/built-in.a
CC net/ipv6/netfilter/nf_conntrack_reasm.o
CC drivers/acpi/tables.o
AR drivers/clk/sunxi-ng/built-in.a
AR drivers/soc/qcom/built-in.a
AR drivers/soc/renesas/built-in.a
CC drivers/dma/dw/dw.o
AR drivers/clk/ti/built-in.a
AR drivers/soc/rockchip/built-in.a
CC fs/nls/nls_base.o
AR drivers/clk/versatile/built-in.a
AR drivers/soc/sunxi/built-in.a
CC drivers/pnp/resource.o
CC lib/decompress_unxz.o
AR drivers/clk/xilinx/built-in.a
CC lib/decompress_unzstd.o
CC lib/dump_stack.o
AR drivers/soc/ti/built-in.a
AR drivers/clk/built-in.a
CC [M] sound/hda/hdac_device.o
AR drivers/soc/xilinx/built-in.a
AR drivers/soc/built-in.a
CC [M] sound/hda/hdac_sysfs.o
CC [M] sound/hda/hdac_regmap.o
CC [M] sound/hda/hdac_controller.o
CC drivers/acpi/acpica/dsinit.o
CC net/core/dev_addr_lists.o
CC net/core/dst.o
AR sound/x86/built-in.a
CC net/ethtool/features.o
CC net/core/netevent.o
CC net/netfilter/nf_conntrack_proto_udp.o
CC arch/x86/kernel/cpu/proc.o
CC block/blk-mq-pci.o
CC block/blk-mq-virtio.o
CC mm/mmu_gather.o
CC crypto/authenc.o
CC net/ipv4/ip_input.o
CC fs/fat/namei_vfat.o
CC crypto/authencesn.o
CC fs/nls/nls_cp437.o
CC fs/isofs/export.o
CC drivers/acpi/acpica/dsmethod.o
CC drivers/dma/dw/idma32.o
CC drivers/acpi/x86/s2idle.o
CC crypto/lzo.o
CC block/blk-mq-debugfs.o
CC net/ipv4/ip_fragment.o
CC net/packet/af_packet.o
CC net/ipv4/ip_forward.o
CC fs/lockd/clntxdr.o
CC lib/earlycpio.o
CC fs/nfs/direct.o
CC lib/extable.o
CC net/ipv4/ip_options.o
CC drivers/dma/dw/acpi.o
AR kernel/trace/built-in.a
CC drivers/acpi/x86/utils.o
CC fs/nls/nls_ascii.o
CC net/core/neighbour.o
CC kernel/ksysfs.o
CC drivers/acpi/osi.o
CC fs/nfs/pagelist.o
CC fs/ext4/file.o
CC drivers/acpi/acpica/dsmthdat.o
CC [M] sound/hda/hdac_stream.o
CC drivers/acpi/acpica/dsobject.o
CC fs/ext4/fsmap.o
CC arch/x86/kernel/cpu/feat_ctl.o
CC drivers/pnp/manager.o
CC net/netfilter/nf_conntrack_proto_icmp.o
CC drivers/acpi/x86/blacklist.o
CC net/ipv6/ip6_input.o
CC drivers/acpi/osl.o
CC crypto/lzo-rle.o
CC fs/isofs/joliet.o
CC fs/nfs/read.o
CC lib/flex_proportions.o
CC net/ethtool/privflags.o
CC net/ipv6/netfilter/nf_reject_ipv6.o
AR net/dsa/built-in.a
CC drivers/dma/hsu/hsu.o
CC fs/lockd/host.o
CC fs/nls/nls_iso8859-1.o
CC io_uring/futex.o
CC drivers/acpi/utils.o
CC fs/lockd/svc.o
CC fs/lockd/svclock.o
CC net/netfilter/nf_conntrack_extend.o
CC net/ethtool/rings.o
CC net/core/rtnetlink.o
CC drivers/acpi/reboot.o
CC drivers/pnp/support.o
CC drivers/acpi/acpica/dsopcode.o
CC fs/nls/nls_utf8.o
CC net/core/utils.o
AR drivers/dma/dw/built-in.a
CC crypto/rng.o
AR drivers/acpi/x86/built-in.a
CC fs/fat/namei_msdos.o
CC kernel/cred.o
CC kernel/reboot.o
CC kernel/async.o
CC lib/idr.o
CC mm/mprotect.o
CC drivers/virtio/virtio.o
CC drivers/pnp/interface.o
CC drivers/pnp/quirks.o
CC fs/isofs/compress.o
AR fs/nls/built-in.a
CC drivers/pnp/system.o
CC drivers/acpi/acpica/dspkginit.o
CC drivers/tty/vt/vt_ioctl.o
CC drivers/char/hw_random/core.o
AR drivers/iommu/amd/built-in.a
CC [M] sound/hda/array.o
CC block/blk-pm.o
AR drivers/iommu/intel/built-in.a
AR drivers/iommu/arm/arm-smmu/built-in.a
AR drivers/iommu/arm/arm-smmu-v3/built-in.a
AR drivers/iommu/arm/built-in.a
AR drivers/gpu/host1x/built-in.a
AR drivers/iommu/iommufd/built-in.a
CC drivers/iommu/iommu.o
CC net/ipv4/ip_output.o
AR drivers/dma/hsu/built-in.a
CC drivers/char/agp/backend.o
CC drivers/char/agp/generic.o
CC net/ipv6/addrconf.o
CC drivers/acpi/nvs.o
CC lib/irq_regs.o
AR drivers/dma/idxd/built-in.a
CC net/ethtool/channels.o
AR drivers/dma/mediatek/built-in.a
AR drivers/dma/qcom/built-in.a
CC net/ethtool/coalesce.o
AR drivers/gpu/vga/built-in.a
AR drivers/dma/stm32/built-in.a
AR drivers/gpu/drm/tests/built-in.a
CC net/ipv6/addrlabel.o
CC net/ipv6/route.o
AR drivers/dma/ti/built-in.a
AR drivers/gpu/drm/arm/built-in.a
AR drivers/dma/xilinx/built-in.a
CC drivers/acpi/acpica/dsutils.o
CC drivers/gpu/drm/display/drm_display_helper_mod.o
AR net/xfrm/built-in.a
CC drivers/dma/dmaengine.o
CC lib/is_single_threaded.o
CC drivers/dma/virt-dma.o
CC net/ipv6/ip6_fib.o
CC io_uring/napi.o
CC drivers/gpu/drm/ttm/ttm_tt.o
CC crypto/drbg.o
CC net/ipv6/ipv6_sockglue.o
CC drivers/acpi/acpica/dswexec.o
CC net/netfilter/nf_conntrack_acct.o
CC drivers/virtio/virtio_ring.o
CC drivers/gpu/drm/ttm/ttm_bo.o
AR fs/fat/built-in.a
CC drivers/gpu/drm/ttm/ttm_bo_util.o
AR fs/unicode/built-in.a
AR drivers/pnp/built-in.a
CC net/ipv6/ndisc.o
CC [M] sound/hda/hdmi_chmap.o
CC net/ipv6/netfilter/ip6t_ipv6header.o
CC net/netfilter/nf_conntrack_seqadj.o
CC fs/lockd/svcshare.o
CC fs/lockd/svcproc.o
AR fs/isofs/built-in.a
CC block/holder.o
CC drivers/gpu/drm/ttm/ttm_bo_vm.o
CC drivers/gpu/drm/display/drm_dp_dual_mode_helper.o
CC kernel/range.o
CC drivers/acpi/acpica/dswload.o
CC drivers/char/hw_random/intel-rng.o
CC mm/mremap.o
CC drivers/gpu/drm/i915/i915_config.o
CC lib/klist.o
CC net/sunrpc/auth_gss/auth_gss.o
CC drivers/gpu/drm/display/drm_dp_helper.o
CC kernel/smpboot.o
AR drivers/gpu/drm/renesas/rcar-du/built-in.a
CC drivers/tty/vt/vc_screen.o
AR drivers/gpu/drm/renesas/rz-du/built-in.a
CC kernel/ucount.o
AR drivers/gpu/drm/renesas/built-in.a
CC drivers/acpi/acpica/dswload2.o
CC drivers/gpu/drm/ttm/ttm_module.o
CC fs/ext4/fsync.o
AR drivers/gpu/drm/omapdrm/built-in.a
CC drivers/gpu/drm/i915/i915_driver.o
CC arch/x86/kernel/cpu/intel.o
CC lib/kobject.o
CC drivers/iommu/iommu-traces.o
CC drivers/char/agp/isoch.o
CC drivers/gpu/drm/i915/i915_drm_client.o
CC fs/nfs/symlink.o
AR net/wireless/tests/built-in.a
CC drivers/tty/hvc/hvc_console.o
AR net/mac80211/tests/built-in.a
CC net/mac80211/main.o
CC net/wireless/core.o
CC crypto/jitterentropy.o
AR block/built-in.a
CC crypto/jitterentropy-kcapi.o
CC net/ethtool/pause.o
CC net/ethtool/eee.o
CC kernel/regset.o
CC drivers/acpi/acpica/dswscope.o
CC drivers/acpi/acpica/dswstate.o
CC drivers/char/hw_random/amd-rng.o
CC drivers/char/hw_random/geode-rng.o
CC drivers/char/hw_random/via-rng.o
CC net/ipv4/ip_sockglue.o
CC net/ipv4/inet_hashtables.o
CC drivers/char/agp/amd64-agp.o
CC [M] sound/hda/trace.o
CC drivers/dma/acpi-dma.o
CC drivers/gpu/drm/ttm/ttm_execbuf_util.o
AR net/packet/built-in.a
CC drivers/gpu/drm/display/drm_dp_mst_topology.o
CC drivers/gpu/drm/ttm/ttm_range_manager.o
CC fs/ext4/hash.o
CC net/sunrpc/auth_gss/gss_generic_token.o
AR io_uring/built-in.a
CC drivers/acpi/wakeup.o
CC drivers/tty/vt/selection.o
CC lib/kobject_uevent.o
CC kernel/ksyms_common.o
CC drivers/acpi/sleep.o
CC net/ipv6/netfilter/ip6t_REJECT.o
CC fs/lockd/svcsubs.o
CC net/ipv6/udp.o
CC net/netfilter/nf_conntrack_proto_icmpv6.o
CC mm/msync.o
CC [M] sound/hda/hdac_component.o
CC drivers/acpi/acpica/evevent.o
CC crypto/ghash-generic.o
CC arch/x86/kernel/cpu/tsx.o
CC drivers/virtio/virtio_anchor.o
AR drivers/char/hw_random/built-in.a
CC drivers/virtio/virtio_pci_modern_dev.o
CC drivers/connector/cn_queue.o
CC drivers/iommu/iommu-sysfs.o
CC drivers/iommu/dma-iommu.o
CC fs/ext4/ialloc.o
AR drivers/tty/hvc/built-in.a
CC drivers/gpu/drm/i915/i915_getparam.o
CC drivers/acpi/device_sysfs.o
CC drivers/base/power/sysfs.o
CC drivers/connector/connector.o
CC drivers/gpu/drm/ttm/ttm_resource.o
CC drivers/gpu/drm/ttm/ttm_pool.o
CC fs/nfs/unlink.o
CC drivers/acpi/acpica/evgpe.o
AR drivers/dma/built-in.a
CC drivers/char/agp/intel-agp.o
CC net/ipv6/udplite.o
CC crypto/hash_info.o
CC net/ethtool/tsinfo.o
CC fs/ext4/indirect.o
CC drivers/block/loop.o
CC drivers/block/virtio_blk.o
CC drivers/base/power/generic_ops.o
CC crypto/rsapubkey.asn1.o
CC crypto/rsaprivkey.asn1.o
CC kernel/groups.o
CC kernel/kcmp.o
AR crypto/built-in.a
CC drivers/gpu/drm/i915/i915_ioctl.o
CC drivers/tty/vt/keyboard.o
CC fs/nfs/write.o
CC mm/page_vma_mapped.o
CC mm/pagewalk.o
CC [M] sound/hda/hdac_i915.o
CC drivers/connector/cn_proc.o
CC net/ethtool/cabletest.o
CC drivers/gpu/drm/i915/i915_irq.o
CC kernel/freezer.o
CC net/mac80211/status.o
CC drivers/acpi/acpica/evgpeblk.o
CC net/mac80211/driver-ops.o
CC fs/lockd/mon.o
CC drivers/virtio/virtio_pci_legacy_dev.o
AR sound/xen/built-in.a
CC lib/logic_pio.o
CC drivers/virtio/virtio_pci_modern.o
AR net/ipv6/netfilter/built-in.a
CC drivers/base/power/common.o
CC lib/maple_tree.o
CC net/sunrpc/auth_gss/gss_mech_switch.o
CC net/core/link_watch.o
CC net/sunrpc/auth_gss/svcauth_gss.o
CC drivers/acpi/acpica/evgpeinit.o
CC net/netfilter/nf_conntrack_netlink.o
CC drivers/char/agp/intel-gtt.o
CC kernel/profile.o
CC drivers/gpu/drm/ttm/ttm_device.o
AR sound/virtio/built-in.a
CC drivers/acpi/device_pm.o
CC fs/lockd/trace.o
CC drivers/gpu/drm/ttm/ttm_sys_manager.o
CC net/netfilter/nf_conntrack_ftp.o
CC fs/autofs/init.o
CC net/core/filter.o
CC [M] sound/hda/intel-dsp-config.o
CC [M] sound/hda/intel-nhlt.o
CC fs/9p/vfs_super.o
CC mm/pgtable-generic.o
CC net/ipv4/inet_timewait_sock.o
CC net/netlabel/netlabel_user.o
CC net/netlabel/netlabel_kapi.o
CC drivers/acpi/acpica/evgpeutil.o
CC net/netlabel/netlabel_domainhash.o
CC drivers/base/power/qos.o
CC drivers/gpu/drm/display/drm_dsc_helper.o
CC drivers/iommu/iova.o
CC drivers/base/power/runtime.o
CC mm/rmap.o
CC arch/x86/kernel/cpu/intel_epb.o
CC net/ethtool/tunnels.o
AR drivers/connector/built-in.a
CC drivers/gpu/drm/ttm/ttm_agp_backend.o
AR drivers/block/built-in.a
AR drivers/misc/eeprom/built-in.a
CC fs/nfs/namespace.o
AR drivers/misc/cb710/built-in.a
CC kernel/stacktrace.o
CC drivers/virtio/virtio_pci_common.o
AR drivers/misc/ti-st/built-in.a
CC mm/vmalloc.o
CC drivers/acpi/acpica/evglock.o
AR drivers/misc/lis3lv02d/built-in.a
CC drivers/acpi/acpica/evhandler.o
AR drivers/misc/cardreader/built-in.a
CC fs/autofs/inode.o
AR drivers/mfd/built-in.a
CC net/ipv6/raw.o
AR drivers/misc/keba/built-in.a
CC drivers/tty/vt/vt.o
AR drivers/misc/built-in.a
AR drivers/nfc/built-in.a
CC drivers/base/firmware_loader/builtin/main.o
CC [M] sound/hda/intel-sdw-acpi.o
AR drivers/dax/hmem/built-in.a
CC drivers/dma-buf/dma-buf.o
AR drivers/dax/built-in.a
CC arch/x86/kernel/cpu/amd.o
CC fs/9p/vfs_inode.o
AR drivers/char/agp/built-in.a
CC fs/9p/vfs_inode_dotl.o
CC drivers/gpu/drm/i915/i915_mitigations.o
CC drivers/gpu/drm/i915/i915_module.o
AR drivers/cxl/core/built-in.a
CC drivers/char/mem.o
AR drivers/cxl/built-in.a
CC drivers/gpu/drm/display/drm_hdcp_helper.o
CC drivers/macintosh/mac_hid.o
CC drivers/acpi/proc.o
CC drivers/gpu/drm/i915/i915_params.o
AR drivers/scsi/pcmcia/built-in.a
CC drivers/gpu/drm/i915/i915_pci.o
CC drivers/acpi/acpica/evmisc.o
CC fs/lockd/xdr.o
CC drivers/scsi/scsi.o
CC drivers/gpu/drm/i915/i915_scatterlist.o
CC drivers/acpi/bus.o
AR drivers/base/firmware_loader/builtin/built-in.a
CC drivers/base/power/wakeirq.o
CC arch/x86/kernel/cpu/hygon.o
CC drivers/base/firmware_loader/main.o
AR drivers/gpu/drm/ttm/built-in.a
CC net/core/sock_diag.o
CC kernel/dma.o
AR drivers/iommu/built-in.a
CC net/mac80211/sta_info.o
LD [M] sound/hda/snd-hda-core.o
LD [M] sound/hda/snd-intel-dspcfg.o
LD [M] sound/hda/snd-intel-sdw-acpi.o
CC net/netfilter/nf_conntrack_irc.o
CC net/ipv4/inet_connection_sock.o
CC sound/sound_core.o
CC net/wireless/sysfs.o
CC net/wireless/radiotap.o
CC drivers/virtio/virtio_pci_legacy.o
CC drivers/virtio/virtio_pci_admin_legacy_io.o
CC drivers/base/power/main.o
CC drivers/acpi/acpica/evregion.o
CC drivers/base/power/wakeup.o
CC fs/autofs/root.o
CC net/netlabel/netlabel_addrlist.o
CC fs/ext4/inline.o
CC drivers/acpi/acpica/evrgnini.o
CC net/ethtool/fec.o
AR drivers/macintosh/built-in.a
CC fs/nfs/mount_clnt.o
AR drivers/nvme/common/built-in.a
CC fs/nfs/nfstrace.o
CC kernel/smp.o
AR drivers/nvme/host/built-in.a
CC net/ipv6/icmp.o
AR drivers/nvme/target/built-in.a
CC drivers/gpu/drm/display/drm_hdmi_helper.o
AR drivers/nvme/built-in.a
CC net/sunrpc/auth_gss/gss_rpc_upcall.o
CC fs/9p/vfs_addr.o
CC drivers/char/random.o
CC arch/x86/kernel/cpu/centaur.o
CC drivers/tty/serial/8250/8250_core.o
CC sound/last.o
CC arch/x86/kernel/cpu/transmeta.o
CC drivers/tty/serial/serial_core.o
CC drivers/gpu/drm/display/drm_scdc_helper.o
CC net/netfilter/nf_conntrack_sip.o
CC drivers/dma-buf/dma-fence.o
CC drivers/char/misc.o
CC net/ethtool/eeprom.o
CC net/ethtool/stats.o
CC drivers/acpi/acpica/evsci.o
CC fs/lockd/clnt4xdr.o
CC drivers/virtio/virtio_input.o
CC drivers/gpu/drm/i915/i915_suspend.o
CC net/wireless/util.o
CC fs/lockd/xdr4.o
CC net/netfilter/nf_nat_core.o
AR drivers/base/firmware_loader/built-in.a
CC fs/lockd/svc4proc.o
AR sound/built-in.a
CC net/netfilter/nf_nat_proto.o
CC net/mac80211/wep.o
CC drivers/ata/libata-core.o
AR drivers/net/phy/qcom/built-in.a
CC drivers/firewire/init_ohci1394_dma.o
CC drivers/net/phy/mdio-boardinfo.o
CC net/wireless/reg.o
CC drivers/scsi/hosts.o
CC drivers/acpi/acpica/evxface.o
CC drivers/net/phy/stubs.o
CC arch/x86/kernel/cpu/zhaoxin.o
AR drivers/net/pse-pd/built-in.a
CC fs/autofs/symlink.o
CC arch/x86/kernel/cpu/vortex.o
AR drivers/gpu/drm/tilcdc/built-in.a
CC drivers/base/regmap/regmap.o
CC arch/x86/kernel/cpu/perfctr-watchdog.o
CC fs/9p/vfs_file.o
CC drivers/base/regmap/regcache.o
CC net/netlabel/netlabel_mgmt.o
CC net/ethtool/phc_vclocks.o
CC drivers/net/mdio/acpi_mdio.o
AR drivers/gpu/drm/display/built-in.a
CC drivers/net/mdio/fwnode_mdio.o
CC drivers/tty/serial/8250/8250_platform.o
CC drivers/virtio/virtio_dma_buf.o
CC drivers/base/power/wakeup_stats.o
CC kernel/uid16.o
CC arch/x86/kernel/cpu/vmware.o
COPY drivers/tty/vt/defkeymap.c
CC drivers/tty/vt/consolemap.o
CC arch/x86/kernel/cpu/hypervisor.o
CC arch/x86/kernel/cpu/mshyperv.o
CC drivers/acpi/acpica/evxfevnt.o
AR drivers/firewire/built-in.a
CC net/sunrpc/auth_gss/gss_rpc_xdr.o
CC net/ethtool/mm.o
CC drivers/dma-buf/dma-fence-array.o
CC fs/autofs/waitq.o
CC fs/ext4/inode.o
CC drivers/char/virtio_console.o
CC fs/9p/vfs_dir.o
CC fs/9p/vfs_dentry.o
CC fs/ext4/ioctl.o
CC fs/autofs/expire.o
CC drivers/scsi/scsi_ioctl.o
CC net/ipv6/mcast.o
CC drivers/gpu/drm/i915/i915_switcheroo.o
CC fs/autofs/dev-ioctl.o
CC drivers/net/phy/mdio_devres.o
CC fs/lockd/procfs.o
CC drivers/cdrom/cdrom.o
CC net/ipv4/tcp.o
AR drivers/auxdisplay/built-in.a
CC drivers/acpi/acpica/evxfgpe.o
CC net/ipv4/tcp_input.o
CC drivers/base/regmap/regcache-rbtree.o
CC mm/process_vm_access.o
CC drivers/net/phy/phy.o
CC drivers/base/power/trace.o
AR drivers/virtio/built-in.a
CC drivers/tty/serial/8250/8250_pnp.o
CC drivers/base/regmap/regcache-flat.o
CC arch/x86/kernel/cpu/debugfs.o
CC net/ipv4/tcp_output.o
CC net/mac80211/aead_api.o
CC net/mac80211/wpa.o
CC drivers/dma-buf/dma-fence-chain.o
CC net/ipv6/reassembly.o
AR drivers/net/mdio/built-in.a
CC net/ipv6/tcp_ipv6.o
CC fs/nfs/export.o
CC kernel/kallsyms.o
CC drivers/gpu/drm/virtio/virtgpu_drv.o
CC fs/nfs/sysfs.o
CC net/netfilter/nf_nat_helper.o
CC fs/9p/v9fs.o
CC net/netlabel/netlabel_unlabeled.o
CC drivers/acpi/acpica/evxfregn.o
CC net/ethtool/module.o
CC net/netfilter/nf_nat_masquerade.o
HOSTCC drivers/tty/vt/conmakehash
CC drivers/tty/vt/defkeymap.o
CC net/ethtool/cmis_fw_update.o
AR fs/lockd/built-in.a
CC net/wireless/scan.o
CC net/ethtool/cmis_cdb.o
CC net/wireless/nl80211.o
AR fs/autofs/built-in.a
CC drivers/scsi/scsicam.o
CC drivers/scsi/scsi_error.o
CC drivers/char/hpet.o
AR fs/hostfs/built-in.a
AR drivers/base/power/built-in.a
CC net/sunrpc/auth_gss/trace.o
CC arch/x86/kernel/cpu/capflags.o
CC drivers/net/phy/phy-c45.o
AR drivers/base/test/built-in.a
CC net/sunrpc/auth_gss/gss_krb5_mech.o
AR arch/x86/kernel/cpu/built-in.a
CC mm/page_alloc.o
CC arch/x86/kernel/traps.o
CC arch/x86/kernel/idt.o
CC drivers/gpu/drm/i915/i915_sysfs.o
CC drivers/tty/serial/8250/8250_rsa.o
CONMK drivers/tty/vt/consolemap_deftbl.c
CC drivers/tty/vt/consolemap_deftbl.o
CC drivers/dma-buf/dma-fence-unwrap.o
CC drivers/tty/serial/8250/8250_port.o
AR drivers/tty/vt/built-in.a
CC drivers/ata/libata-scsi.o
CC drivers/acpi/acpica/exconcat.o
CC drivers/base/component.o
CC drivers/gpu/drm/virtio/virtgpu_kms.o
CC fs/9p/fid.o
CC lib/memcat_p.o
CC drivers/net/phy/phy-core.o
CC drivers/dma-buf/dma-resv.o
CC drivers/acpi/acpica/exconfig.o
CC drivers/scsi/scsi_lib.o
CC net/mac80211/scan.o
CC drivers/tty/serial/8250/8250_dma.o
AR drivers/cdrom/built-in.a
AR drivers/tty/ipwireless/built-in.a
CC kernel/acct.o
CC drivers/tty/serial/serial_base_bus.o
CC drivers/tty/serial/serial_ctrl.o
CC drivers/char/nvram.o
CC net/netlabel/netlabel_cipso_v4.o
CC net/sunrpc/auth_gss/gss_krb5_seal.o
CC drivers/base/regmap/regcache-maple.o
CC drivers/tty/serial/8250/8250_dwlib.o
CC net/rfkill/core.o
CC net/9p/mod.o
CC drivers/tty/serial/8250/8250_pcilib.o
CC lib/nmi_backtrace.o
CC net/ethtool/pse-pd.o
CC net/9p/client.o
CC drivers/base/core.o
CC net/netfilter/nf_nat_ftp.o
CC drivers/gpu/drm/i915/i915_utils.o
CC arch/x86/kernel/irq.o
CC drivers/tty/serial/8250/8250_early.o
CC drivers/acpi/acpica/exconvrt.o
CC drivers/gpu/drm/virtio/virtgpu_gem.o
CC fs/9p/xattr.o
CC drivers/pcmcia/cs.o
CC drivers/tty/serial/8250/8250_exar.o
CC net/9p/error.o
CC fs/ext4/mballoc.o
CC drivers/gpu/drm/i915/intel_clock_gating.o
CC arch/x86/kernel/irq_32.o
CC drivers/dma-buf/sync_file.o
CC fs/ext4/migrate.o
CC fs/ext4/mmp.o
CC fs/ext4/move_extent.o
CC fs/nfs/fs_context.o
CC arch/x86/kernel/dumpstack_32.o
CC drivers/base/regmap/regmap-debugfs.o
CC kernel/vmcore_info.o
AR drivers/char/built-in.a
CC drivers/acpi/acpica/excreate.o
CC lib/objpool.o
CC fs/debugfs/inode.o
CC drivers/tty/serial/serial_port.o
CC drivers/net/phy/phy_device.o
CC drivers/acpi/glue.o
CC drivers/acpi/scan.o
CC drivers/net/phy/linkmode.o
CC net/sunrpc/clnt.o
CC net/rfkill/input.o
CC drivers/scsi/constants.o
CC drivers/gpu/drm/virtio/virtgpu_vram.o
AR fs/9p/built-in.a
CC drivers/gpu/drm/virtio/virtgpu_display.o
CC net/ethtool/plca.o
CC drivers/acpi/acpica/exdebug.o
CC net/netlabel/netlabel_calipso.o
CC net/ipv6/ping.o
CC net/sunrpc/auth_gss/gss_krb5_unseal.o
CC drivers/gpu/drm/virtio/virtgpu_vq.o
CC net/sunrpc/auth_gss/gss_krb5_wrap.o
CC drivers/ata/libata-eh.o
CC fs/tracefs/inode.o
CC drivers/pcmcia/socket_sysfs.o
CC lib/plist.o
CC net/netfilter/nf_nat_irc.o
CC fs/tracefs/event_inode.o
CC drivers/ata/libata-transport.o
AR drivers/dma-buf/built-in.a
CC lib/radix-tree.o
CC kernel/elfcorehdr.o
CC drivers/pcmcia/cardbus.o
AR drivers/base/regmap/built-in.a
CC net/netfilter/nf_nat_sip.o
CC drivers/tty/serial/8250/8250_lpss.o
CC drivers/tty/serial/8250/8250_mid.o
CC fs/ext4/namei.o
CC net/ipv4/tcp_timer.o
CC drivers/acpi/acpica/exdump.o
AR net/rfkill/built-in.a
CC drivers/scsi/scsi_lib_dma.o
CC net/9p/protocol.o
CC net/ipv4/tcp_ipv4.o
CC net/netfilter/x_tables.o
CC drivers/scsi/scsi_scan.o
CC net/core/dev_ioctl.o
CC net/dns_resolver/dns_key.o
CC fs/debugfs/file.o
CC drivers/gpu/drm/i915/intel_device_info.o
GEN drivers/scsi/scsi_devinfo_tbl.c
CC net/dns_resolver/dns_query.o
CC drivers/scsi/scsi_devinfo.o
CC drivers/scsi/scsi_sysctl.o
CC drivers/gpu/drm/virtio/virtgpu_fence.o
CC net/devres.o
CC net/netfilter/xt_tcpudp.o
CC net/handshake/alert.o
CC drivers/acpi/acpica/exfield.o
CC arch/x86/kernel/time.o
CC net/sunrpc/auth_gss/gss_krb5_crypto.o
CC kernel/crash_reserve.o
CC drivers/pcmcia/ds.o
CC arch/x86/kernel/ioport.o
CC mm/init-mm.o
CC net/handshake/genl.o
AR net/ethtool/built-in.a
CC drivers/gpu/drm/virtio/virtgpu_object.o
AR net/netlabel/built-in.a
CC drivers/gpu/drm/virtio/virtgpu_debugfs.o
CC drivers/gpu/drm/virtio/virtgpu_plane.o
CC drivers/gpu/drm/virtio/virtgpu_ioctl.o
CC drivers/tty/serial/8250/8250_pci.o
CC lib/ratelimit.o
CC drivers/gpu/drm/virtio/virtgpu_prime.o
CC drivers/tty/serial/8250/8250_pericom.o
CC net/ipv6/exthdrs.o
CC drivers/acpi/acpica/exfldio.o
AR fs/tracefs/built-in.a
CC drivers/pcmcia/pcmcia_resource.o
CC net/netfilter/xt_CONNSECMARK.o
CC net/netfilter/xt_NFLOG.o
CC drivers/pcmcia/cistpl.o
CC net/9p/trans_common.o
CC drivers/usb/common/common.o
CC drivers/base/bus.o
AR net/dns_resolver/built-in.a
CC fs/ext4/page-io.o
CC lib/rbtree.o
CC kernel/kexec_core.o
CC kernel/crash_core.o
CC drivers/net/phy/mdio_bus.o
CC mm/memblock.o
CC kernel/kexec.o
CC drivers/usb/core/usb.o
CC net/mac80211/offchannel.o
CC drivers/usb/core/hub.o
CC drivers/usb/core/hcd.o
CC net/socket.o
AR fs/debugfs/built-in.a
CC drivers/gpu/drm/i915/intel_memory_region.o
CC drivers/scsi/scsi_proc.o
CC fs/nfs/nfsroot.o
CC arch/x86/kernel/dumpstack.o
CC drivers/acpi/acpica/exmisc.o
CC net/core/tso.o
CC lib/seq_buf.o
CC drivers/acpi/acpica/exmutex.o
CC mm/slub.o
CC mm/madvise.o
CC mm/page_io.o
CC mm/swap_state.o
CC drivers/scsi/scsi_debugfs.o
CC net/9p/trans_fd.o
CC drivers/scsi/scsi_trace.o
CC drivers/scsi/scsi_logging.o
CC net/sunrpc/auth_gss/gss_krb5_keys.o
AR drivers/usb/phy/built-in.a
CC arch/x86/kernel/nmi.o
CC kernel/utsname.o
CC net/mac80211/ht.o
CC net/handshake/netlink.o
CC drivers/usb/common/debug.o
CC drivers/gpu/drm/virtio/virtgpu_trace_points.o
CC drivers/base/dd.o
CC net/handshake/request.o
CC kernel/pid_namespace.o
CC net/handshake/tlshd.o
AR drivers/usb/common/built-in.a
CC net/ipv4/tcp_minisocks.o
CC drivers/gpu/drm/virtio/virtgpu_submit.o
CC drivers/acpi/acpica/exnames.o
CC net/netfilter/xt_SECMARK.o
CC kernel/stop_machine.o
CC lib/siphash.o
CC net/netfilter/xt_TCPMSS.o
AR drivers/net/pcs/built-in.a
CC lib/string.o
CC kernel/audit.o
AR drivers/tty/serial/8250/built-in.a
CC kernel/auditfilter.o
CC lib/timerqueue.o
CC drivers/tty/serial/earlycon.o
CC kernel/auditsc.o
CC net/sunrpc/xprt.o
CC fs/ext4/readpage.o
CC drivers/pcmcia/pcmcia_cis.o
CC drivers/ata/libata-trace.o
CC drivers/ata/libata-sata.o
CC drivers/acpi/acpica/exoparg1.o
CC drivers/usb/mon/mon_main.o
CC drivers/usb/mon/mon_stat.o
CC drivers/gpu/drm/i915/intel_pcode.o
CC drivers/scsi/scsi_pm.o
CC net/ipv6/datagram.o
CC fs/nfs/sysctl.o
CC drivers/net/phy/mdio_device.o
CC drivers/net/phy/swphy.o
AR net/sunrpc/auth_gss/built-in.a
CC drivers/net/phy/fixed_phy.o
CC drivers/net/phy/realtek.o
CC net/core/sock_reuseport.o
CC drivers/usb/core/urb.o
CC arch/x86/kernel/ldt.o
CC mm/swapfile.o
CC lib/vsprintf.o
CC arch/x86/kernel/setup.o
CC net/handshake/trace.o
CC drivers/base/syscore.o
CC kernel/audit_watch.o
AR drivers/gpu/drm/virtio/built-in.a
AR drivers/net/ethernet/3com/built-in.a
CC drivers/net/ethernet/8390/ne2k-pci.o
CC net/netfilter/xt_conntrack.o
AR drivers/net/wireless/admtek/built-in.a
CC drivers/acpi/acpica/exoparg2.o
CC drivers/usb/core/message.o
CC kernel/audit_fsnotify.o
CC net/ipv6/ip6_flowlabel.o
AR drivers/net/wireless/ath/built-in.a
CC net/9p/trans_virtio.o
AR drivers/tty/serial/built-in.a
AR drivers/net/wireless/atmel/built-in.a
CC drivers/tty/tty_io.o
CC drivers/ata/libata-sff.o
AR drivers/net/wireless/broadcom/built-in.a
AR drivers/net/wireless/intel/built-in.a
AR drivers/net/wireless/intersil/built-in.a
AR drivers/net/wireless/marvell/built-in.a
AR drivers/net/usb/built-in.a
CC mm/swap_slots.o
AR drivers/net/wireless/mediatek/built-in.a
CC drivers/usb/mon/mon_text.o
AR drivers/net/wireless/microchip/built-in.a
AR drivers/net/wireless/purelifi/built-in.a
CC net/netfilter/xt_policy.o
AR drivers/net/wireless/quantenna/built-in.a
AR drivers/net/wireless/ralink/built-in.a
CC drivers/acpi/mipi-disco-img.o
AR drivers/net/wireless/realtek/built-in.a
CC drivers/scsi/scsi_bsg.o
CC drivers/pcmcia/rsrc_mgr.o
AR drivers/net/wireless/rsi/built-in.a
AR drivers/net/wireless/silabs/built-in.a
AR drivers/net/wireless/st/built-in.a
AR drivers/net/wireless/ti/built-in.a
CC drivers/acpi/acpica/exoparg3.o
CC drivers/tty/n_tty.o
AR drivers/net/wireless/zydas/built-in.a
CC drivers/acpi/acpica/exoparg6.o
CC net/netfilter/xt_state.o
AR drivers/net/wireless/virtual/built-in.a
AR drivers/net/wireless/built-in.a
CC drivers/acpi/acpica/exprep.o
CC net/ipv4/tcp_cong.o
CC drivers/gpu/drm/i915/intel_region_ttm.o
CC mm/dmapool.o
CC drivers/pcmcia/rsrc_nonstatic.o
CC drivers/base/driver.o
CC drivers/base/class.o
CC fs/nfs/nfs3super.o
CC lib/win_minmax.o
CC drivers/input/serio/serio.o
CC fs/ext4/resize.o
CC drivers/input/serio/i8042.o
CC drivers/pcmcia/yenta_socket.o
AR drivers/net/phy/built-in.a
CC drivers/net/ethernet/8390/8390.o
CC drivers/net/mii.o
CC net/ipv4/tcp_metrics.o
CC drivers/ata/libata-pmp.o
CC drivers/ata/libata-acpi.o
CC net/ipv6/inet6_connection_sock.o
AR drivers/gpu/drm/imx/built-in.a
CC drivers/base/platform.o
CC arch/x86/kernel/x86_init.o
CC kernel/audit_tree.o
CC drivers/acpi/acpica/exregion.o
CC net/ipv6/udp_offload.o
CC net/core/fib_notifier.o
CC drivers/input/serio/serport.o
CC drivers/scsi/scsi_common.o
CC drivers/tty/tty_ioctl.o
CC fs/open.o
CC drivers/usb/mon/mon_bin.o
CC drivers/usb/core/driver.o
CC [M] fs/efivarfs/inode.o
CC drivers/usb/core/config.o
CC drivers/base/cpu.o
CC drivers/base/firmware.o
CC drivers/usb/core/file.o
CC mm/hugetlb.o
CC drivers/acpi/resource.o
AR net/9p/built-in.a
AR net/handshake/built-in.a
CC lib/xarray.o
CC net/sysctl_net.o
CC drivers/input/serio/libps2.o
CC lib/lockref.o
CC [M] net/netfilter/nf_log_syslog.o
CC drivers/acpi/acpica/exresnte.o
CC drivers/acpi/acpica/exresolv.o
CC drivers/acpi/acpica/exresop.o
CC drivers/acpi/acpica/exserial.o
CC drivers/acpi/acpi_processor.o
CC drivers/gpu/drm/i915/intel_runtime_pm.o
CC drivers/scsi/scsi_transport_spi.o
CC net/mac80211/agg-tx.o
CC arch/x86/kernel/i8259.o
CC [M] fs/efivarfs/file.o
CC fs/nfs/nfs3client.o
CC [M] net/netfilter/xt_mark.o
CC lib/bcd.o
CC drivers/scsi/virtio_scsi.o
CC lib/sort.o
CC drivers/scsi/sd.o
AR drivers/net/ethernet/adaptec/built-in.a
CC mm/mmu_notifier.o
AR drivers/net/ethernet/agere/built-in.a
AR drivers/net/ethernet/alacritech/built-in.a
CC drivers/scsi/sr.o
AR drivers/gpu/drm/i2c/built-in.a
CC net/wireless/mlme.o
CC drivers/input/keyboard/atkbd.o
CC drivers/input/mouse/psmouse-base.o
AR drivers/net/ethernet/8390/built-in.a
CC drivers/tty/tty_ldisc.o
CC net/core/xdp.o
AR drivers/input/tablet/built-in.a
AR drivers/input/joystick/built-in.a
CC fs/nfs/nfs3proc.o
AR drivers/net/ethernet/alteon/built-in.a
CC drivers/base/init.o
AR drivers/input/touchscreen/built-in.a
CC [M] net/netfilter/xt_nat.o
CC net/ipv6/seg6.o
CC drivers/rtc/lib.o
CC [M] net/netfilter/xt_LOG.o
AR drivers/net/ethernet/amazon/built-in.a
CC [M] net/netfilter/xt_MASQUERADE.o
CC drivers/acpi/acpica/exstore.o
AR drivers/net/ethernet/amd/built-in.a
AR drivers/net/ethernet/aquantia/built-in.a
CC drivers/ata/libata-pata-timings.o
CC drivers/net/loopback.o
AR drivers/pcmcia/built-in.a
AR drivers/net/ethernet/arc/built-in.a
AR drivers/net/ethernet/asix/built-in.a
CC drivers/acpi/processor_core.o
CC drivers/tty/tty_buffer.o
CC arch/x86/kernel/irqinit.o
AR drivers/net/ethernet/atheros/built-in.a
CC kernel/kprobes.o
AR drivers/net/ethernet/cadence/built-in.a
CC drivers/rtc/class.o
AR drivers/usb/mon/built-in.a
CC drivers/i2c/algos/i2c-algo-bit.o
CC drivers/ata/ahci.o
CC drivers/net/ethernet/broadcom/bnx2.o
CC drivers/ata/libahci.o
AR drivers/input/serio/built-in.a
CC drivers/acpi/processor_pdc.o
CC drivers/usb/core/buffer.o
CC drivers/usb/host/pci-quirks.o
CC drivers/net/netconsole.o
CC [M] fs/efivarfs/super.o
CC net/ipv4/tcp_fastopen.o
CC net/sunrpc/socklib.o
CC drivers/acpi/acpica/exstoren.o
CC fs/nfs/nfs3xdr.o
CC fs/ext4/super.o
CC net/ipv4/tcp_rate.o
CC [M] fs/efivarfs/vars.o
CC drivers/base/map.o
CC net/ipv4/tcp_recovery.o
CC lib/parser.o
CC fs/read_write.o
CC drivers/acpi/ec.o
CC net/ipv6/fib6_notifier.o
CC drivers/usb/host/ehci-hcd.o
CC [M] net/netfilter/xt_addrtype.o
CC drivers/rtc/interface.o
CC drivers/usb/core/sysfs.o
CC drivers/ata/ata_piix.o
CC drivers/gpu/drm/i915/intel_sbi.o
CC net/sunrpc/xprtsock.o
CC drivers/net/virtio_net.o
CC drivers/tty/tty_port.o
CC drivers/acpi/acpica/exstorob.o
CC net/wireless/ibss.o
AR drivers/input/keyboard/built-in.a
CC drivers/gpu/drm/i915/intel_step.o
CC drivers/gpu/drm/i915/intel_uncore.o
CC drivers/input/mouse/synaptics.o
CC arch/x86/kernel/jump_label.o
CC drivers/scsi/sr_ioctl.o
AR drivers/input/misc/built-in.a
CC drivers/usb/core/endpoint.o
AR drivers/i2c/algos/built-in.a
CC drivers/scsi/sr_vendor.o
CC drivers/i2c/busses/i2c-i801.o
CC lib/debug_locks.o
CC drivers/base/devres.o
AR drivers/net/ethernet/brocade/built-in.a
CC drivers/scsi/sg.o
AR drivers/i2c/muxes/built-in.a
CC drivers/input/input.o
CC drivers/input/input-compat.o
CC arch/x86/kernel/irq_work.o
CC arch/x86/kernel/probe_roms.o
CC drivers/usb/core/devio.o
CC drivers/acpi/acpica/exsystem.o
CC drivers/acpi/acpica/extrace.o
CC lib/random32.o
LD [M] fs/efivarfs/efivarfs.o
CC drivers/usb/core/notify.o
CC drivers/input/mouse/focaltech.o
CC mm/migrate.o
CC net/core/flow_offload.o
CC arch/x86/kernel/sys_ia32.o
CC kernel/seccomp.o
CC net/ipv6/rpl.o
CC fs/nfs/nfs3acl.o
CC drivers/tty/tty_mutex.o
CC lib/bust_spinlocks.o
CC net/ipv4/tcp_ulp.o
CC fs/nfs/nfs4proc.o
CC drivers/acpi/acpica/exutils.o
CC drivers/ata/pata_amd.o
CC net/ipv4/tcp_offload.o
CC drivers/base/attribute_container.o
CC net/ipv4/tcp_plb.o
CC net/wireless/sme.o
CC drivers/tty/tty_ldsem.o
CC mm/page_counter.o
CC drivers/input/mouse/alps.o
CC drivers/input/input-mt.o
CC drivers/acpi/acpica/hwacpi.o
CC net/wireless/chan.o
CC net/ipv4/datagram.o
CC drivers/ata/pata_oldpiix.o
CC net/core/gro.o
CC fs/file_table.o
CC net/ipv6/ioam6.o
AR net/netfilter/built-in.a
CC drivers/scsi/scsi_sysfs.o
CC net/ipv6/sysctl_net_ipv6.o
CC drivers/acpi/acpica/hwesleep.o
CC drivers/rtc/nvmem.o
CC drivers/gpu/drm/i915/intel_wakeref.o
CC net/mac80211/agg-rx.o
CC drivers/gpu/drm/i915/vlv_sideband.o
CC drivers/gpu/drm/i915/vlv_suspend.o
AR drivers/i2c/busses/built-in.a
CC drivers/gpu/drm/i915/soc/intel_dram.o
CC drivers/i2c/i2c-boardinfo.o
CC lib/kasprintf.o
CC drivers/gpu/drm/i915/soc/intel_gmch.o
CC drivers/base/transport_class.o
CC arch/x86/kernel/ksysfs.o
CC drivers/ata/pata_sch.o
CC drivers/tty/tty_baudrate.o
CC drivers/input/input-poller.o
CC net/core/netdev-genl.o
CC drivers/acpi/acpica/hwgpe.o
CC fs/nfs/nfs4xdr.o
CC net/sunrpc/sched.o
CC drivers/input/mouse/byd.o
CC drivers/input/mouse/logips2pp.o
CC net/mac80211/vht.o
CC lib/bitmap.o
CC drivers/rtc/dev.o
CC drivers/acpi/dock.o
CC drivers/base/topology.o
CC arch/x86/kernel/bootflag.o
CC net/ipv4/raw.o
CC mm/hugetlb_cgroup.o
CC drivers/rtc/proc.o
AR drivers/gpu/drm/panel/built-in.a
CC fs/ext4/symlink.o
CC drivers/acpi/acpica/hwregs.o
CC net/ipv4/udp.o
CC drivers/i2c/i2c-core-base.o
CC drivers/i2c/i2c-core-smbus.o
CC drivers/usb/host/ehci-pci.o
CC drivers/tty/tty_jobctrl.o
CC mm/early_ioremap.o
CC kernel/relay.o
CC drivers/usb/core/generic.o
CC net/ipv4/udplite.o
CC drivers/gpu/drm/i915/soc/intel_pch.o
CC drivers/ata/pata_mpiix.o
CC drivers/i2c/i2c-core-acpi.o
CC drivers/i2c/i2c-smbus.o
CC net/wireless/ethtool.o
AR drivers/net/ethernet/cavium/common/built-in.a
CC drivers/net/ethernet/broadcom/tg3.o
AR drivers/net/ethernet/chelsio/built-in.a
CC drivers/input/ff-core.o
CC drivers/input/mouse/lifebook.o
AR drivers/scsi/built-in.a
CC drivers/input/mouse/trackpoint.o
CC drivers/input/mouse/cypress_ps2.o
CC drivers/base/container.o
AR drivers/net/ethernet/cavium/thunder/built-in.a
AR drivers/net/ethernet/cisco/built-in.a
CC arch/x86/kernel/e820.o
AR drivers/net/ethernet/cavium/liquidio/built-in.a
CC arch/x86/kernel/pci-dma.o
AR drivers/net/ethernet/cavium/octeon/built-in.a
AR drivers/net/ethernet/cavium/built-in.a
CC drivers/rtc/sysfs.o
CC lib/scatterlist.o
CC drivers/rtc/rtc-mc146818-lib.o
AR drivers/gpu/drm/bridge/analogix/built-in.a
CC net/wireless/mesh.o
CC drivers/acpi/acpica/hwsleep.o
AR drivers/gpu/drm/bridge/cadence/built-in.a
CC drivers/acpi/acpica/hwvalid.o
CC drivers/usb/core/quirks.o
AR drivers/gpu/drm/bridge/imx/built-in.a
AR drivers/gpu/drm/hisilicon/built-in.a
CC drivers/usb/host/ohci-hcd.o
AR drivers/gpu/drm/bridge/synopsys/built-in.a
CC net/ipv6/xfrm6_policy.o
AR drivers/gpu/drm/bridge/built-in.a
CC drivers/usb/host/ohci-pci.o
CC drivers/usb/host/uhci-hcd.o
CC drivers/base/property.o
CC drivers/usb/core/devices.o
CC drivers/usb/host/xhci.o
CC net/core/netdev-genl-gen.o
CC mm/secretmem.o
CC drivers/acpi/acpica/hwxface.o
CC drivers/tty/n_null.o
CC drivers/acpi/acpica/hwxfsleep.o
CC drivers/gpu/drm/i915/i915_memcpy.o
CC net/wireless/ap.o
CC drivers/gpu/drm/i915/i915_mm.o
CC drivers/input/mouse/psmouse-smbus.o
CC drivers/ata/ata_generic.o
CC mm/hmm.o
CC drivers/base/cacheinfo.o
CC kernel/utsname_sysctl.o
CC net/wireless/trace.o
CC kernel/delayacct.o
CC drivers/gpu/drm/i915/i915_sw_fence.o
CC kernel/taskstats.o
CC drivers/rtc/rtc-cmos.o
CC drivers/base/swnode.o
CC mm/memfd.o
CC drivers/base/auxiliary.o
CC drivers/usb/core/phy.o
CC drivers/gpu/drm/i915/i915_sw_fence_work.o
CC kernel/tsacct.o
CC drivers/acpi/acpica/hwpci.o
CC drivers/acpi/acpica/nsaccess.o
CC drivers/acpi/pci_root.o
CC net/sunrpc/auth.o
CC net/sunrpc/auth_null.o
CC drivers/tty/pty.o
CC lib/list_sort.o
CC arch/x86/kernel/quirks.o
CC arch/x86/kernel/kdebugfs.o
CC lib/uuid.o
AR drivers/net/ethernet/cortina/built-in.a
CC net/core/gso.o
CC drivers/usb/core/port.o
CC lib/iov_iter.o
CC net/sunrpc/auth_tls.o
CC drivers/usb/core/hcd-pci.o
CC drivers/usb/host/xhci-mem.o
AR drivers/i3c/built-in.a
CC drivers/usb/host/xhci-ext-caps.o
CC net/ipv6/xfrm6_state.o
CC drivers/gpu/drm/i915/i915_syncmap.o
CC lib/clz_ctz.o
AR drivers/media/i2c/built-in.a
AR drivers/ata/built-in.a
AR drivers/input/mouse/built-in.a
AR drivers/i2c/built-in.a
CC drivers/acpi/acpica/nsalloc.o
CC mm/ptdump.o
CC drivers/net/net_failover.o
CC drivers/input/touchscreen.o
AR drivers/media/tuners/built-in.a
AR drivers/media/rc/keymaps/built-in.a
CC arch/x86/kernel/alternative.o
AR drivers/media/rc/built-in.a
AR drivers/media/common/b2c2/built-in.a
CC arch/x86/kernel/i8253.o
CC net/mac80211/he.o
CC drivers/base/devtmpfs.o
AR drivers/media/common/saa7146/built-in.a
AR drivers/media/common/siano/built-in.a
CC drivers/tty/tty_audit.o
CC fs/super.o
AR drivers/media/common/v4l2-tpg/built-in.a
AR drivers/media/common/videobuf2/built-in.a
AR drivers/media/common/built-in.a
CC net/sunrpc/auth_unix.o
CC drivers/acpi/acpica/nsarguments.o
AR drivers/media/platform/allegro-dvt/built-in.a
CC drivers/acpi/pci_link.o
CC drivers/acpi/pci_irq.o
AR drivers/media/platform/amlogic/meson-ge2d/built-in.a
AR drivers/media/platform/amlogic/built-in.a
CC net/ipv4/udp_offload.o
CC net/ipv4/arp.o
AR drivers/rtc/built-in.a
AR drivers/media/platform/amphion/built-in.a
CC kernel/tracepoint.o
CC drivers/acpi/acpica/nsconvert.o
AR drivers/media/platform/aspeed/built-in.a
CC drivers/usb/class/usblp.o
AR drivers/media/platform/atmel/built-in.a
CC drivers/usb/storage/scsiglue.o
AR drivers/media/platform/broadcom/built-in.a
CC drivers/gpu/drm/i915/i915_user_extensions.o
AR drivers/usb/misc/built-in.a
CC drivers/usb/storage/protocol.o
AR drivers/media/platform/cadence/built-in.a
CC drivers/acpi/acpica/nsdump.o
CC drivers/usb/early/ehci-dbgp.o
CC drivers/acpi/acpica/nseval.o
AR drivers/media/platform/chips-media/coda/built-in.a
AR drivers/media/platform/chips-media/wave5/built-in.a
CC drivers/acpi/acpi_apd.o
AR drivers/media/platform/chips-media/built-in.a
CC drivers/acpi/acpi_platform.o
CC net/sunrpc/svc.o
CC drivers/input/ff-memless.o
CC mm/execmem.o
CC drivers/input/sparse-keymap.o
AR drivers/media/platform/imagination/built-in.a
AR drivers/media/platform/intel/built-in.a
AR drivers/media/pci/ttpci/built-in.a
AR drivers/media/platform/marvell/built-in.a
AR drivers/gpu/drm/mxsfb/built-in.a
AR drivers/media/pci/b2c2/built-in.a
AR drivers/media/platform/mediatek/jpeg/built-in.a
AR drivers/media/pci/pluto2/built-in.a
CC net/ipv4/icmp.o
AR drivers/media/usb/b2c2/built-in.a
AR drivers/media/pci/dm1105/built-in.a
AR drivers/media/platform/mediatek/mdp/built-in.a
AR drivers/media/usb/dvb-usb/built-in.a
CC drivers/usb/core/usb-acpi.o
AR drivers/media/pci/pt1/built-in.a
AR drivers/media/platform/mediatek/vcodec/common/built-in.a
AR drivers/media/pci/pt3/built-in.a
AR drivers/media/platform/mediatek/vcodec/encoder/built-in.a
AR drivers/media/usb/dvb-usb-v2/built-in.a
AR drivers/gpu/drm/tiny/built-in.a
AR drivers/media/pci/mantis/built-in.a
AR drivers/media/platform/mediatek/vcodec/decoder/built-in.a
AR drivers/media/usb/s2255/built-in.a
CC drivers/gpu/drm/i915/i915_debugfs.o
AR drivers/media/platform/mediatek/vcodec/built-in.a
CC drivers/tty/sysrq.o
AR drivers/media/pci/ngene/built-in.a
AR drivers/gpu/drm/xlnx/built-in.a
AR drivers/media/usb/siano/built-in.a
AR drivers/media/pci/ddbridge/built-in.a
AR drivers/media/platform/mediatek/vpu/built-in.a
CC drivers/usb/storage/transport.o
AR drivers/media/pci/saa7146/built-in.a
AR drivers/media/platform/mediatek/mdp3/built-in.a
AR drivers/media/usb/ttusb-budget/built-in.a
AR drivers/media/usb/ttusb-dec/built-in.a
AR drivers/media/platform/mediatek/built-in.a
CC net/sunrpc/svcsock.o
CC drivers/acpi/acpi_pnp.o
AR drivers/media/usb/built-in.a
AR drivers/media/pci/smipcie/built-in.a
CC drivers/acpi/acpica/nsinit.o
AR drivers/media/platform/microchip/built-in.a
CC net/core/net-sysfs.o
AR drivers/media/platform/nuvoton/built-in.a
CC drivers/base/module.o
CC net/ipv6/xfrm6_input.o
AR drivers/media/pci/netup_unidvb/built-in.a
AR drivers/media/platform/nxp/dw100/built-in.a
AR drivers/media/platform/nxp/imx-jpeg/built-in.a
AR drivers/media/platform/nvidia/tegra-vde/built-in.a
CC net/wireless/ocb.o
AR drivers/media/pci/intel/ipu3/built-in.a
AR drivers/media/platform/nvidia/built-in.a
AR drivers/media/platform/nxp/imx8-isi/built-in.a
AR drivers/media/platform/nxp/built-in.a
CC drivers/acpi/power.o
CC drivers/acpi/event.o
CC kernel/irq_work.o
AR drivers/media/pci/intel/ivsc/built-in.a
AR drivers/media/pci/intel/built-in.a
CC fs/nfs/nfs4state.o
AR drivers/media/platform/qcom/camss/built-in.a
CC kernel/static_call.o
AR drivers/media/pci/built-in.a
CC arch/x86/kernel/hw_breakpoint.o
CC net/ipv6/xfrm6_output.o
AR drivers/media/platform/qcom/venus/built-in.a
CC net/ipv4/devinet.o
CC net/mac80211/s1g.o
AR drivers/media/platform/qcom/built-in.a
AR drivers/media/platform/raspberrypi/pisp_be/built-in.a
AR drivers/net/ethernet/dec/tulip/built-in.a
CC fs/nfs/nfs4renewd.o
AR drivers/media/platform/raspberrypi/built-in.a
AR mm/built-in.a
AR drivers/net/ethernet/dec/built-in.a
AR drivers/media/firewire/built-in.a
AR drivers/media/spi/built-in.a
AR drivers/media/mmc/siano/built-in.a
CC fs/nfs/nfs4super.o
AR drivers/media/mmc/built-in.a
AR drivers/net/ethernet/emulex/built-in.a
AR drivers/net/ethernet/dlink/built-in.a
CC net/ipv6/xfrm6_protocol.o
CC net/sunrpc/svcauth.o
AR drivers/usb/class/built-in.a
AR drivers/media/platform/rockchip/rga/built-in.a
AR drivers/media/platform/rockchip/rkisp1/built-in.a
AR drivers/media/test-drivers/built-in.a
AR drivers/media/platform/renesas/rcar-vin/built-in.a
AR drivers/media/platform/renesas/rzg2l-cru/built-in.a
CC drivers/input/vivaldi-fmap.o
CC kernel/padata.o
AR drivers/media/platform/rockchip/built-in.a
AR drivers/media/platform/renesas/vsp1/built-in.a
AR drivers/net/ethernet/engleder/built-in.a
CC drivers/input/input-leds.o
CC drivers/input/evdev.o
AR drivers/media/platform/renesas/built-in.a
CC drivers/acpi/evged.o
CC drivers/acpi/sysfs.o
CC kernel/jump_label.o
CC fs/nfs/nfs4file.o
CC drivers/acpi/acpica/nsload.o
CC net/sunrpc/svcauth_unix.o
AR drivers/media/platform/samsung/exynos-gsc/built-in.a
CC drivers/base/auxiliary_sysfs.o
AR drivers/usb/early/built-in.a
AR drivers/media/platform/samsung/exynos4-is/built-in.a
CC fs/nfs/delegation.o
CC drivers/acpi/property.o
AR drivers/media/platform/samsung/s3c-camif/built-in.a
AR drivers/media/platform/samsung/s5p-g2d/built-in.a
AR drivers/usb/core/built-in.a
AR drivers/media/platform/samsung/s5p-jpeg/built-in.a
CC drivers/acpi/debugfs.o
AR drivers/pps/clients/built-in.a
AR drivers/media/platform/samsung/s5p-mfc/built-in.a
AR drivers/media/platform/samsung/built-in.a
AR drivers/pps/generators/built-in.a
CC drivers/pps/pps.o
AR drivers/media/platform/st/sti/bdisp/built-in.a
CC drivers/acpi/acpi_lpat.o
CC lib/bsearch.o
AR drivers/media/platform/st/sti/c8sectpfe/built-in.a
CC fs/char_dev.o
AR drivers/media/platform/st/sti/delta/built-in.a
CC drivers/ptp/ptp_clock.o
CC drivers/ptp/ptp_chardev.o
AR drivers/media/platform/st/sti/hva/built-in.a
CC drivers/acpi/acpica/nsnames.o
AR drivers/media/platform/st/stm32/built-in.a
AR drivers/tty/built-in.a
CC drivers/base/devcoredump.o
CC net/ipv6/netfilter.o
CC drivers/base/platform-msi.o
AR drivers/media/platform/st/built-in.a
CC drivers/usb/storage/usb.o
CC fs/ext4/sysfs.o
CC drivers/power/supply/power_supply_core.o
AR drivers/media/platform/sunxi/sun4i-csi/built-in.a
CC drivers/usb/host/xhci-ring.o
CC fs/stat.o
AR drivers/media/platform/sunxi/sun6i-csi/built-in.a
CC net/sunrpc/addr.o
AR drivers/media/platform/sunxi/sun6i-mipi-csi2/built-in.a
AR drivers/gpu/drm/gud/built-in.a
CC arch/x86/kernel/tsc.o
CC arch/x86/kernel/tsc_msr.o
AR drivers/media/platform/sunxi/sun8i-a83t-mipi-csi2/built-in.a
AR drivers/media/platform/sunxi/sun8i-di/built-in.a
AR drivers/media/platform/sunxi/sun8i-rotate/built-in.a
AR drivers/gpu/drm/solomon/built-in.a
CC kernel/context_tracking.o
AR drivers/media/platform/sunxi/built-in.a
CC drivers/pps/kapi.o
CC drivers/pps/sysfs.o
AR drivers/net/ethernet/ezchip/built-in.a
CC [M] drivers/gpu/drm/scheduler/sched_main.o
CC drivers/gpu/drm/i915/i915_debugfs_params.o
CC [M] drivers/gpu/drm/scheduler/sched_fence.o
CC kernel/iomem.o
CC [M] drivers/gpu/drm/scheduler/sched_entity.o
CC drivers/usb/host/xhci-hub.o
AR drivers/media/platform/ti/am437x/built-in.a
CC drivers/acpi/acpica/nsobject.o
CC drivers/acpi/acpica/nsparse.o
CC lib/find_bit.o
AR drivers/media/platform/ti/cal/built-in.a
AR drivers/input/built-in.a
CC net/core/hotdata.o
CC drivers/acpi/acpi_pcc.o
AR drivers/media/platform/ti/vpe/built-in.a
AR drivers/media/platform/ti/davinci/built-in.a
CC kernel/rseq.o
CC drivers/usb/storage/initializers.o
AR drivers/media/platform/ti/j721e-csi2rx/built-in.a
AR drivers/media/platform/ti/omap/built-in.a
AR drivers/media/platform/ti/omap3isp/built-in.a
CC drivers/usb/storage/sierra_ms.o
AR drivers/media/platform/ti/built-in.a
CC drivers/usb/storage/option_ms.o
CC drivers/base/physical_location.o
AR drivers/media/platform/verisilicon/built-in.a
CC drivers/hwmon/hwmon.o
AR drivers/media/platform/via/built-in.a
AR drivers/thermal/broadcom/built-in.a
AR drivers/media/platform/xilinx/built-in.a
CC lib/llist.o
AR drivers/thermal/renesas/built-in.a
CC arch/x86/kernel/io_delay.o
CC lib/lwq.o
AR drivers/media/platform/built-in.a
AR drivers/thermal/samsung/built-in.a
CC arch/x86/kernel/rtc.o
CC drivers/thermal/intel/intel_tcc.o
CC drivers/acpi/acpica/nspredef.o
CC drivers/gpu/drm/i915/i915_pmu.o
AR drivers/net/ethernet/fujitsu/built-in.a
CC fs/ext4/xattr.o
AR drivers/watchdog/built-in.a
CC drivers/power/supply/power_supply_sysfs.o
CC drivers/power/supply/power_supply_leds.o
CC net/ipv4/af_inet.o
CC drivers/ptp/ptp_sysfs.o
AR drivers/media/built-in.a
AR drivers/pps/built-in.a
CC arch/x86/kernel/resource.o
CC lib/memweight.o
CC lib/kfifo.o
CC drivers/acpi/ac.o
CC drivers/base/trace.o
CC drivers/md/md.o
CC drivers/md/md-bitmap.o
CC drivers/cpufreq/cpufreq.o
CC drivers/thermal/intel/therm_throt.o
CC [M] drivers/thermal/intel/x86_pkg_temp_thermal.o
CC drivers/usb/storage/usual-tables.o
CC drivers/ptp/ptp_vclock.o
CC lib/percpu-refcount.o
CC net/ipv6/proc.o
CC net/sunrpc/rpcb_clnt.o
CC net/ipv6/syncookies.o
AR drivers/mmc/built-in.a
CC drivers/cpuidle/governors/menu.o
CC drivers/acpi/button.o
CC lib/rhashtable.o
CC drivers/acpi/acpica/nsprepkg.o
CC drivers/cpuidle/governors/haltpoll.o
CC lib/base64.o
CC drivers/gpu/drm/i915/gt/gen2_engine_cs.o
AR drivers/ufs/built-in.a
CC drivers/ptp/ptp_kvm_x86.o
AR drivers/net/ethernet/fungible/built-in.a
CC net/core/net-procfs.o
CC lib/once.o
AR drivers/net/ethernet/google/built-in.a
AS arch/x86/kernel/irqflags.o
CC lib/refcount.o
CC lib/rcuref.o
CC drivers/power/supply/power_supply_hwmon.o
AR kernel/built-in.a
CC arch/x86/kernel/static_call.o
CC fs/nfs/nfs4idmap.o
HOSTCC drivers/gpu/drm/xe/xe_gen_wa_oob
CC net/wireless/pmsr.o
CC lib/usercopy.o
CC drivers/usb/host/xhci-dbg.o
CC net/sunrpc/timer.o
LD [M] drivers/gpu/drm/scheduler/gpu-sched.o
CC lib/errseq.o
CC net/sunrpc/xdr.o
GEN net/wireless/shipped-certs.c
CC drivers/acpi/acpica/nsrepair.o
CC arch/x86/kernel/process.o
AR drivers/usb/storage/built-in.a
CC net/mac80211/ibss.o
GEN xe_wa_oob.c xe_wa_oob.h
CC [M] drivers/gpu/drm/xe/xe_bb.o
AR drivers/base/built-in.a
CC arch/x86/kernel/ptrace.o
CC drivers/acpi/acpica/nsrepair2.o
CC net/ipv4/igmp.o
CC [M] drivers/gpu/drm/xe/xe_bo.o
CC fs/ext4/xattr_hurd.o
CC drivers/gpu/drm/drm_aperture.o
CC net/ipv4/fib_frontend.o
CC drivers/acpi/fan_core.o
CC net/ipv6/calipso.o
CC drivers/acpi/acpica/nssearch.o
CC fs/nfs/callback.o
CC net/sunrpc/sunrpc_syms.o
AR drivers/hwmon/built-in.a
CC fs/nfs/callback_xdr.o
CC net/sunrpc/cache.o
CC lib/bucket_locks.o
CC lib/generic-radix-tree.o
CC fs/ext4/xattr_trusted.o
AR drivers/power/supply/built-in.a
CC drivers/acpi/acpica/nsutils.o
AR drivers/power/built-in.a
CC drivers/gpu/drm/i915/gt/gen6_engine_cs.o
CC drivers/acpi/acpica/nswalk.o
AR drivers/leds/trigger/built-in.a
CC drivers/ptp/ptp_kvm_common.o
AR drivers/thermal/intel/built-in.a
AR drivers/leds/blink/built-in.a
CC drivers/usb/host/xhci-trace.o
AR drivers/thermal/st/built-in.a
AR drivers/thermal/tegra/built-in.a
AR drivers/leds/simple/built-in.a
AR drivers/thermal/qcom/built-in.a
CC drivers/gpu/drm/drm_atomic.o
CC drivers/acpi/fan_attr.o
CC drivers/leds/led-core.o
CC drivers/acpi/fan_hwmon.o
CC drivers/acpi/acpi_video.o
AR drivers/cpuidle/governors/built-in.a
CC drivers/cpuidle/cpuidle.o
AR drivers/thermal/mediatek/built-in.a
CC net/ipv4/fib_semantics.o
CC net/ipv4/fib_trie.o
CC drivers/thermal/thermal_core.o
CC drivers/thermal/thermal_sysfs.o
CC drivers/thermal/thermal_trip.o
CC net/mac80211/iface.o
CC drivers/cpuidle/driver.o
CC [M] drivers/gpu/drm/xe/xe_bo_evict.o
CC net/ipv4/fib_notifier.o
CC drivers/thermal/thermal_helpers.o
CC lib/bitmap-str.o
CC drivers/acpi/acpica/nsxfeval.o
CC net/core/netpoll.o
CC fs/ext4/xattr_user.o
CC net/sunrpc/rpc_pipe.o
CC fs/ext4/fast_commit.o
CC drivers/gpu/drm/drm_atomic_uapi.o
CC net/ipv4/inet_fragment.o
CC arch/x86/kernel/tls.o
CC drivers/cpuidle/governor.o
CC fs/ext4/orphan.o
CC drivers/leds/led-class.o
CC drivers/leds/led-triggers.o
CC net/sunrpc/sysfs.o
AR drivers/ptp/built-in.a
CC arch/x86/kernel/step.o
CC drivers/thermal/thermal_hwmon.o
CC net/sunrpc/svc_xprt.o
CC drivers/cpufreq/freq_table.o
CC arch/x86/kernel/i8237.o
CC drivers/usb/host/xhci-debugfs.o
CC drivers/acpi/acpica/nsxfname.o
CC drivers/thermal/gov_step_wise.o
CC drivers/acpi/video_detect.o
CC drivers/gpu/drm/i915/gt/gen6_ppgtt.o
CC lib/string_helpers.o
CC fs/nfs/callback_proc.o
CC drivers/cpuidle/sysfs.o
CC drivers/cpuidle/poll_state.o
CC drivers/acpi/acpica/nsxfobj.o
CC drivers/cpuidle/cpuidle-haltpoll.o
AR drivers/firmware/arm_ffa/built-in.a
CC arch/x86/kernel/stacktrace.o
CC net/ipv6/ah6.o
AR drivers/firmware/arm_scmi/built-in.a
CC drivers/thermal/gov_user_space.o
AR drivers/firmware/broadcom/built-in.a
CC [M] drivers/gpu/drm/xe/xe_devcoredump.o
AR drivers/firmware/cirrus/built-in.a
CC drivers/cpufreq/cpufreq_performance.o
AR drivers/firmware/meson/built-in.a
CC lib/hexdump.o
CC drivers/acpi/acpica/psargs.o
CC drivers/firmware/efi/efi-bgrt.o
AR drivers/net/ethernet/huawei/built-in.a
AR drivers/firmware/microchip/built-in.a
CC net/sunrpc/xprtmultipath.o
CC drivers/firmware/efi/efi.o
CC drivers/cpufreq/cpufreq_userspace.o
CC net/sunrpc/stats.o
CC net/sunrpc/sysctl.o
AR drivers/crypto/stm32/built-in.a
AR drivers/leds/built-in.a
CC drivers/firmware/efi/vars.o
CC drivers/firmware/efi/reboot.o
AR drivers/crypto/xilinx/built-in.a
CC drivers/firmware/efi/libstub/efi-stub-helper.o
CC drivers/clocksource/acpi_pm.o
AR drivers/crypto/hisilicon/built-in.a
AR drivers/crypto/intel/keembay/built-in.a
CC lib/kstrtox.o
AR drivers/crypto/intel/ixp4xx/built-in.a
AR drivers/crypto/intel/built-in.a
CC drivers/cpufreq/cpufreq_ondemand.o
CC drivers/firmware/efi/memattr.o
AR drivers/crypto/starfive/built-in.a
CC [M] drivers/gpu/drm/xe/xe_device.o
AR drivers/crypto/built-in.a
CC drivers/md/md-autodetect.o
CC drivers/firmware/efi/tpm.o
AR drivers/thermal/built-in.a
CC arch/x86/kernel/reboot.o
AR drivers/cpuidle/built-in.a
CC [M] drivers/gpu/drm/xe/xe_device_sysfs.o
CC fs/ext4/acl.o
CC drivers/cpufreq/cpufreq_governor.o
CC drivers/acpi/acpica/psloop.o
CC net/core/fib_rules.o
CC fs/nfs/nfs4namespace.o
CC arch/x86/kernel/msr.o
CC drivers/net/ethernet/intel/e1000/e1000_main.o
AR drivers/net/ethernet/i825xx/built-in.a
CC drivers/acpi/acpica/psobject.o
CC drivers/net/ethernet/intel/e1000e/82571.o
AR drivers/net/ethernet/microsoft/built-in.a
CC arch/x86/kernel/cpuid.o
CC drivers/acpi/processor_driver.o
CC lib/iomap.o
CC drivers/net/ethernet/intel/e100.o
CC drivers/acpi/acpica/psopcode.o
CC arch/x86/kernel/early-quirks.o
CC drivers/md/dm.o
AR drivers/net/ethernet/litex/built-in.a
CC drivers/acpi/processor_thermal.o
CC net/core/net-traces.o
CC net/ipv4/ping.o
CC drivers/clocksource/i8253.o
CC drivers/md/dm-table.o
CC drivers/usb/host/xhci-pci.o
CC drivers/acpi/acpica/psopinfo.o
CC drivers/acpi/processor_idle.o
CC drivers/net/ethernet/intel/e1000e/ich8lan.o
CC drivers/firmware/efi/libstub/gop.o
CC drivers/firmware/efi/libstub/secureboot.o
CC drivers/cpufreq/cpufreq_governor_attr_set.o
CC net/core/selftests.o
CC drivers/gpu/drm/i915/gt/gen7_renderclear.o
CC drivers/acpi/acpica/psparse.o
CC drivers/firmware/efi/memmap.o
CC [M] drivers/gpu/drm/xe/xe_dma_buf.o
CC drivers/firmware/efi/capsule.o
CC drivers/md/dm-target.o
CC net/ipv6/esp6.o
CC net/ipv6/sit.o
CC net/ipv6/addrconf_core.o
CC arch/x86/kernel/smp.o
CC [M] drivers/gpu/drm/xe/xe_drm_client.o
AR drivers/clocksource/built-in.a
CC [M] drivers/gpu/drm/xe/xe_exec.o
CC drivers/firmware/efi/libstub/tpm.o
CC net/mac80211/link.o
CC net/ipv6/exthdrs_core.o
CC drivers/acpi/processor_throttling.o
CC drivers/acpi/acpica/psscope.o
CC net/ipv4/ip_tunnel_core.o
CC drivers/net/ethernet/intel/e1000/e1000_hw.o
CC lib/iomap_copy.o
CC drivers/acpi/processor_perflib.o
CC drivers/gpu/drm/drm_auth.o
CC drivers/hid/usbhid/hid-core.o
CC drivers/cpufreq/acpi-cpufreq.o
CC drivers/md/dm-linear.o
CC drivers/cpufreq/amd-pstate.o
CC lib/devres.o
CC drivers/cpufreq/amd-pstate-trace.o
CC drivers/acpi/acpica/pstree.o
CC fs/nfs/nfs4getroot.o
CC fs/ext4/xattr_security.o
CC drivers/acpi/acpica/psutils.o
CC drivers/acpi/acpica/pswalk.o
AR drivers/net/ethernet/broadcom/built-in.a
CC net/wireless/shipped-certs.o
CC [M] drivers/gpu/drm/xe/xe_execlist.o
CC [M] drivers/gpu/drm/xe/xe_exec_queue.o
CC arch/x86/kernel/smpboot.o
CC arch/x86/kernel/tsc_sync.o
CC drivers/gpu/drm/drm_blend.o
CC drivers/firmware/efi/libstub/file.o
CC drivers/gpu/drm/i915/gt/gen8_engine_cs.o
CC drivers/firmware/efi/libstub/mem.o
CC drivers/acpi/container.o
CC [M] drivers/gpu/drm/xe/xe_force_wake.o
CC drivers/gpu/drm/drm_bridge.o
CC drivers/firmware/efi/esrt.o
CC lib/check_signature.o
CC drivers/acpi/acpica/psxface.o
CC drivers/gpu/drm/drm_cache.o
CC drivers/firmware/efi/runtime-wrappers.o
CC lib/interval_tree.o
CC net/ipv4/gre_offload.o
AR drivers/firmware/imx/built-in.a
AR drivers/firmware/psci/built-in.a
CC net/ipv4/metrics.o
AR drivers/firmware/qcom/built-in.a
CC drivers/firmware/efi/capsule-loader.o
CC drivers/acpi/thermal_lib.o
CC lib/assoc_array.o
AR drivers/firmware/smccc/built-in.a
AR net/sunrpc/built-in.a
CC drivers/acpi/thermal.o
CC net/ipv6/ip6_checksum.o
AR drivers/firmware/tegra/built-in.a
CC net/ipv6/ip6_icmp.o
CC drivers/md/dm-stripe.o
CC lib/bitrev.o
AR drivers/usb/host/built-in.a
CC lib/crc-ccitt.o
AR drivers/usb/built-in.a
CC net/core/ptp_classifier.o
CC drivers/md/dm-ioctl.o
CC drivers/md/dm-io.o
CC drivers/md/dm-kcopyd.o
CC fs/nfs/nfs4client.o
AR fs/ext4/built-in.a
CC drivers/hid/hid-core.o
CC drivers/hid/hid-input.o
CC drivers/acpi/acpica/rsaddr.o
CC drivers/acpi/acpica/rscalc.o
CC drivers/hid/hid-quirks.o
CC drivers/hid/hid-debug.o
CC drivers/acpi/nhlt.o
CC drivers/gpu/drm/drm_client.o
CC arch/x86/kernel/setup_percpu.o
CC net/core/netprio_cgroup.o
CC drivers/firmware/efi/libstub/random.o
CC drivers/cpufreq/intel_pstate.o
CC drivers/firmware/efi/libstub/randomalloc.o
CC drivers/hid/usbhid/hiddev.o
CC drivers/firmware/dmi_scan.o
CC drivers/md/dm-sysfs.o
CC drivers/gpu/drm/drm_client_modeset.o
AR drivers/firmware/xilinx/built-in.a
CC drivers/gpu/drm/drm_color_mgmt.o
CC net/ipv6/output_core.o
CC net/ipv6/protocol.o
CC net/ipv6/ip6_offload.o
CC drivers/firmware/efi/earlycon.o
CC drivers/net/ethernet/intel/e1000/e1000_ethtool.o
CC [M] drivers/gpu/drm/xe/xe_ggtt.o
CC drivers/net/ethernet/intel/e1000/e1000_param.o
CC drivers/hid/hidraw.o
CC drivers/firmware/efi/libstub/pci.o
CC lib/crc16.o
CC drivers/acpi/acpica/rscreate.o
CC drivers/acpi/acpica/rsdumpinfo.o
CC arch/x86/kernel/mpparse.o
CC [M] drivers/gpu/drm/xe/xe_gpu_scheduler.o
CC drivers/hid/hid-generic.o
CC drivers/acpi/acpi_memhotplug.o
CC drivers/net/ethernet/intel/e1000e/80003es2lan.o
CC arch/x86/kernel/trace_clock.o
CC drivers/gpu/drm/drm_connector.o
CC [M] drivers/gpu/drm/xe/xe_gsc.o
CC arch/x86/kernel/trace.o
CC net/ipv6/tcpv6_offload.o
CC net/ipv4/netlink.o
CC drivers/net/ethernet/intel/e1000e/mac.o
CC drivers/firmware/dmi-id.o
CC drivers/hid/hid-a4tech.o
CC drivers/net/ethernet/intel/e1000e/manage.o
CC drivers/acpi/acpica/rsinfo.o
CC drivers/hid/hid-apple.o
CC drivers/gpu/drm/i915/gt/gen8_ppgtt.o
CC drivers/gpu/drm/i915/gt/intel_breadcrumbs.o
HOSTCC lib/gen_crc32table
CC drivers/gpu/drm/drm_crtc.o
CC drivers/hid/hid-belkin.o
CC drivers/gpu/drm/i915/gt/intel_context.o
CC drivers/net/ethernet/intel/e1000e/nvm.o
AR drivers/firmware/efi/built-in.a
CC net/ipv6/exthdrs_offload.o
CC drivers/acpi/acpica/rsio.o
CC drivers/acpi/acpica/rsirq.o
CC lib/xxhash.o
CC fs/exec.o
CC drivers/acpi/ioapic.o
CC drivers/acpi/battery.o
CC arch/x86/kernel/rethook.o
CC fs/nfs/nfs4session.o
CC drivers/hid/usbhid/hid-pidff.o
CC drivers/firmware/efi/libstub/skip_spaces.o
CC [M] drivers/gpu/drm/xe/xe_gsc_proxy.o
CC fs/nfs/dns_resolve.o
CC drivers/acpi/acpica/rslist.o
CC drivers/firmware/efi/libstub/lib-cmdline.o
CC drivers/firmware/memmap.o
CC arch/x86/kernel/vmcore_info_32.o
CC net/mac80211/rate.o
CC drivers/acpi/acpica/rsmemory.o
CC net/ipv6/inet6_hashtables.o
CC net/ipv6/mcast_snoop.o
CC drivers/firmware/efi/libstub/lib-ctype.o
CC drivers/acpi/acpica/rsmisc.o
CC drivers/gpu/drm/drm_displayid.o
CC [M] drivers/gpu/drm/xe/xe_gsc_submit.o
CC drivers/firmware/efi/libstub/alignedmem.o
CC drivers/gpu/drm/i915/gt/intel_context_sseu.o
CC drivers/md/dm-stats.o
AR net/wireless/built-in.a
CC arch/x86/kernel/machine_kexec_32.o
CC fs/pipe.o
CC drivers/firmware/efi/libstub/relocate.o
AR drivers/net/ethernet/marvell/octeon_ep/built-in.a
CC [M] drivers/gpu/drm/xe/xe_gt.o
AR drivers/net/ethernet/marvell/octeon_ep_vf/built-in.a
AR drivers/platform/x86/amd/built-in.a
AR drivers/net/ethernet/marvell/octeontx2/built-in.a
CC drivers/platform/x86/wmi.o
AR drivers/net/ethernet/marvell/prestera/built-in.a
AR drivers/platform/x86/intel/built-in.a
CC drivers/net/ethernet/marvell/sky2.o
CC drivers/gpu/drm/drm_drv.o
CC drivers/gpu/drm/drm_dumb_buffers.o
CC lib/genalloc.o
CC lib/percpu_counter.o
CC lib/audit.o
CC drivers/acpi/bgrt.o
CC drivers/acpi/acpica/rsserial.o
CC net/ipv4/nexthop.o
CC net/ipv4/udp_tunnel_stub.o
CC net/ipv4/ip_tunnel.o
CC drivers/gpu/drm/drm_edid.o
CC net/ipv4/sysctl_net_ipv4.o
CC drivers/firmware/efi/libstub/printk.o
CC drivers/acpi/spcr.o
CC drivers/acpi/acpica/rsutils.o
CC drivers/firmware/efi/libstub/vsprintf.o
CC drivers/acpi/acpica/rsxface.o
CC drivers/gpu/drm/drm_eld.o
AR drivers/net/ethernet/intel/e1000/built-in.a
CC net/core/netclassid_cgroup.o
CC drivers/hid/hid-cherry.o
CC net/core/dst_cache.o
CC lib/syscall.o
CC drivers/gpu/drm/i915/gt/intel_engine_cs.o
CC [M] drivers/gpu/drm/xe/xe_gt_ccs_mode.o
CC [M] drivers/gpu/drm/xe/xe_gt_clock.o
CC drivers/net/ethernet/intel/e1000e/phy.o
CC net/core/gro_cells.o
AR drivers/platform/surface/built-in.a
CC fs/nfs/nfs4trace.o
CC drivers/platform/x86/wmi-bmof.o
CC lib/errname.o
AR drivers/cpufreq/built-in.a
AR drivers/net/ethernet/mellanox/built-in.a
CC drivers/mailbox/mailbox.o
CC drivers/firmware/efi/libstub/x86-stub.o
CC fs/namei.o
CC drivers/mailbox/pcc.o
CC lib/nlattr.o
CC [M] drivers/gpu/drm/xe/xe_gt_freq.o
AR drivers/hid/usbhid/built-in.a
AR drivers/net/ethernet/meta/built-in.a
CC lib/cpu_rmap.o
AR drivers/perf/built-in.a
CC drivers/acpi/acpica/tbdata.o
CC drivers/hid/hid-chicony.o
CC drivers/gpu/drm/i915/gt/intel_engine_heartbeat.o
CC drivers/gpu/drm/drm_encoder.o
CC drivers/gpu/drm/drm_file.o
AS arch/x86/kernel/relocate_kernel_32.o
CC drivers/firmware/efi/libstub/smbios.o
CC [M] drivers/gpu/drm/xe/xe_gt_idle.o
CC fs/nfs/nfs4sysctl.o
CC arch/x86/kernel/crash_dump_32.o
CC net/mac80211/michael.o
CC drivers/hid/hid-cypress.o
AR drivers/net/ethernet/micrel/built-in.a
CC lib/dynamic_queue_limits.o
AR drivers/hwtracing/intel_th/built-in.a
CC lib/glob.o
CC lib/strncpy_from_user.o
CC drivers/gpu/drm/i915/gt/intel_engine_pm.o
CC drivers/gpu/drm/drm_fourcc.o
CC fs/fcntl.o
CC drivers/acpi/acpica/tbfadt.o
CC [M] drivers/gpu/drm/xe/xe_gt_mcr.o
CC drivers/gpu/drm/drm_framebuffer.o
AR net/ipv6/built-in.a
CC net/core/failover.o
CC drivers/gpu/drm/drm_gem.o
CC drivers/platform/x86/eeepc-laptop.o
CC arch/x86/kernel/crash.o
CC lib/strnlen_user.o
CC lib/net_utils.o
CC net/ipv4/proc.o
CC drivers/acpi/acpica/tbfind.o
AR drivers/net/ethernet/microchip/built-in.a
CC arch/x86/kernel/module.o
AR drivers/mailbox/built-in.a
CC drivers/hid/hid-ezkey.o
CC drivers/platform/x86/p2sb.o
CC fs/ioctl.o
AR drivers/android/built-in.a
CC fs/readdir.o
CC drivers/md/dm-rq.o
CC drivers/gpu/drm/i915/gt/intel_engine_user.o
CC net/mac80211/tkip.o
CC lib/sg_pool.o
CC fs/select.o
CC drivers/acpi/acpica/tbinstal.o
CC drivers/gpu/drm/drm_ioctl.o
CC drivers/gpu/drm/drm_lease.o
CC [M] drivers/gpu/drm/xe/xe_gt_pagefault.o
AR drivers/net/ethernet/mscc/built-in.a
CC net/mac80211/aes_cmac.o
CC [M] drivers/gpu/drm/xe/xe_gt_sysfs.o
STUBCPY drivers/firmware/efi/libstub/alignedmem.stub.o
CC drivers/hid/hid-gyration.o
STUBCPY drivers/firmware/efi/libstub/efi-stub-helper.stub.o
CC fs/dcache.o
STUBCPY drivers/firmware/efi/libstub/file.stub.o
STUBCPY drivers/firmware/efi/libstub/gop.stub.o
STUBCPY drivers/firmware/efi/libstub/lib-cmdline.stub.o
STUBCPY drivers/firmware/efi/libstub/lib-ctype.stub.o
AR drivers/nvmem/layouts/built-in.a
STUBCPY drivers/firmware/efi/libstub/mem.stub.o
CC drivers/acpi/acpica/tbprint.o
STUBCPY drivers/firmware/efi/libstub/pci.stub.o
STUBCPY drivers/firmware/efi/libstub/printk.stub.o
STUBCPY drivers/firmware/efi/libstub/random.stub.o
STUBCPY drivers/firmware/efi/libstub/randomalloc.stub.o
CC drivers/nvmem/core.o
CC lib/stackdepot.o
STUBCPY drivers/firmware/efi/libstub/relocate.stub.o
CC lib/asn1_decoder.o
STUBCPY drivers/firmware/efi/libstub/secureboot.stub.o
CC net/mac80211/aes_gmac.o
STUBCPY drivers/firmware/efi/libstub/skip_spaces.stub.o
STUBCPY drivers/firmware/efi/libstub/smbios.stub.o
STUBCPY drivers/firmware/efi/libstub/tpm.stub.o
CC drivers/gpu/drm/drm_managed.o
CC drivers/gpu/drm/drm_mm.o
AR drivers/net/ethernet/myricom/built-in.a
STUBCPY drivers/firmware/efi/libstub/vsprintf.stub.o
CC drivers/hid/hid-ite.o
CC drivers/net/ethernet/intel/e1000e/param.o
STUBCPY drivers/firmware/efi/libstub/x86-stub.stub.o
AR drivers/net/ethernet/natsemi/built-in.a
AR drivers/net/ethernet/neterion/built-in.a
GEN lib/oid_registry_data.c
AR drivers/firmware/efi/libstub/lib.a
CC net/ipv4/fib_rules.o
CC drivers/md/dm-io-rewind.o
CC arch/x86/kernel/doublefault_32.o
AR drivers/net/ethernet/netronome/built-in.a
AR drivers/firmware/built-in.a
CC arch/x86/kernel/early_printk.o
CC lib/ucs2_string.o
CC drivers/acpi/acpica/tbutils.o
CC lib/sbitmap.o
CC net/ipv4/ipmr.o
CC lib/group_cpus.o
CC lib/fw_table.o
AR net/core/built-in.a
CC fs/inode.o
CC fs/attr.o
CC [M] drivers/gpu/drm/xe/xe_gt_throttle.o
CC arch/x86/kernel/hpet.o
CC drivers/gpu/drm/drm_mode_config.o
CC drivers/md/dm-builtin.o
AR drivers/platform/x86/built-in.a
CC drivers/acpi/acpica/tbxface.o
AR drivers/platform/built-in.a
CC drivers/hid/hid-kensington.o
CC drivers/gpu/drm/i915/gt/intel_execlists_submission.o
CC drivers/gpu/drm/i915/gt/intel_ggtt.o
AR lib/lib.a
CC drivers/net/ethernet/intel/e1000e/ethtool.o
CC drivers/gpu/drm/i915/gt/intel_ggtt_fencing.o
GEN lib/crc32table.h
CC lib/oid_registry.o
CC drivers/hid/hid-lg.o
CC drivers/hid/hid-lgff.o
CC net/mac80211/fils_aead.o
CC net/mac80211/cfg.o
CC drivers/net/ethernet/intel/e1000e/netdev.o
CC drivers/gpu/drm/drm_mode_object.o
CC fs/bad_inode.o
AR drivers/net/ethernet/ni/built-in.a
CC drivers/gpu/drm/drm_modes.o
CC drivers/net/ethernet/intel/e1000e/ptp.o
CC net/mac80211/ethtool.o
CC drivers/net/ethernet/nvidia/forcedeth.o
CC net/mac80211/rx.o
CC net/mac80211/spectmgmt.o
CC net/mac80211/tx.o
CC arch/x86/kernel/amd_nb.o
CC net/ipv4/ipmr_base.o
CC drivers/acpi/acpica/tbxfload.o
CC net/ipv4/syncookies.o
CC drivers/gpu/drm/i915/gt/intel_gt.o
CC drivers/gpu/drm/drm_modeset_lock.o
AR drivers/nvmem/built-in.a
CC drivers/gpu/drm/drm_plane.o
CC net/ipv4/tunnel4.o
AR drivers/net/ethernet/oki-semi/built-in.a
CC lib/crc32.o
CC arch/x86/kernel/kvm.o
CC net/mac80211/key.o
CC drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.o
CC [M] drivers/gpu/drm/xe/xe_gt_tlb_invalidation.o
CC [M] drivers/gpu/drm/xe/xe_gt_topology.o
CC drivers/hid/hid-lg4ff.o
CC drivers/hid/hid-lg-g15.o
CC fs/file.o
AR drivers/net/ethernet/marvell/built-in.a
CC fs/filesystems.o
CC drivers/md/dm-raid1.o
CC drivers/md/dm-log.o
CC drivers/md/dm-region-hash.o
CC drivers/gpu/drm/i915/gt/intel_gt_ccs_mode.o
CC net/ipv4/ipconfig.o
CC arch/x86/kernel/kvmclock.o
CC drivers/acpi/acpica/tbxfroot.o
CC drivers/hid/hid-microsoft.o
CC net/ipv4/netfilter.o
CC arch/x86/kernel/paravirt.o
AR lib/built-in.a
CC arch/x86/kernel/pcspeaker.o
CC arch/x86/kernel/pvclock.o
CC drivers/gpu/drm/drm_prime.o
CC drivers/hid/hid-monterey.o
CC fs/namespace.o
AR drivers/net/ethernet/packetengines/built-in.a
CC drivers/gpu/drm/i915/gt/intel_gt_clock_utils.o
CC drivers/hid/hid-ntrig.o
AR drivers/net/ethernet/qlogic/built-in.a
CC net/mac80211/util.o
CC drivers/acpi/acpica/utaddress.o
AR drivers/net/ethernet/qualcomm/emac/built-in.a
AR drivers/net/ethernet/qualcomm/built-in.a
CC drivers/net/ethernet/realtek/8139too.o
CC drivers/acpi/acpica/utalloc.o
AR drivers/net/ethernet/renesas/built-in.a
CC drivers/gpu/drm/drm_print.o
CC drivers/hid/hid-pl.o
CC arch/x86/kernel/check.o
AR fs/nfs/built-in.a
CC net/ipv4/tcp_cubic.o
CC drivers/md/dm-zero.o
CC drivers/gpu/drm/i915/gt/intel_gt_debugfs.o
CC drivers/gpu/drm/i915/gt/intel_gt_engines_debugfs.o
CC drivers/net/ethernet/realtek/r8169_main.o
CC fs/seq_file.o
CC net/ipv4/tcp_sigpool.o
AR drivers/net/ethernet/rdc/built-in.a
CC net/ipv4/cipso_ipv4.o
CC net/ipv4/xfrm4_policy.o
CC drivers/gpu/drm/drm_property.o
CC net/ipv4/xfrm4_state.o
CC drivers/acpi/acpica/utascii.o
CC [M] drivers/gpu/drm/xe/xe_guc.o
CC fs/xattr.o
CC [M] drivers/gpu/drm/xe/xe_guc_ads.o
CC net/mac80211/parse.o
CC drivers/hid/hid-petalynx.o
CC [M] drivers/gpu/drm/xe/xe_guc_ct.o
CC drivers/hid/hid-redragon.o
CC drivers/net/ethernet/realtek/r8169_firmware.o
CC fs/libfs.o
CC drivers/acpi/acpica/utbuffer.o
CC drivers/hid/hid-samsung.o
CC net/ipv4/xfrm4_input.o
CC drivers/hid/hid-sony.o
CC [M] drivers/gpu/drm/xe/xe_guc_db_mgr.o
CC [M] drivers/gpu/drm/xe/xe_guc_hwconfig.o
CC [M] drivers/gpu/drm/xe/xe_guc_id_mgr.o
CC net/ipv4/xfrm4_output.o
CC drivers/acpi/acpica/utcksum.o
CC net/ipv4/xfrm4_protocol.o
CC drivers/gpu/drm/i915/gt/intel_gt_irq.o
CC arch/x86/kernel/uprobes.o
AR drivers/md/built-in.a
CC arch/x86/kernel/perf_regs.o
CC arch/x86/kernel/tracepoint.o
CC drivers/hid/hid-sunplus.o
CC drivers/acpi/acpica/utcopy.o
CC drivers/hid/hid-topseed.o
CC [M] drivers/gpu/drm/xe/xe_guc_klv_helpers.o
AR drivers/net/ethernet/rocker/built-in.a
AR drivers/net/ethernet/samsung/built-in.a
CC fs/fs-writeback.o
CC drivers/gpu/drm/drm_syncobj.o
CC fs/pnode.o
CC drivers/gpu/drm/i915/gt/intel_gt_mcr.o
CC drivers/gpu/drm/i915/gt/intel_gt_pm.o
CC drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.o
CC [M] drivers/gpu/drm/xe/xe_guc_log.o
CC drivers/acpi/acpica/utexcep.o
CC drivers/gpu/drm/drm_sysfs.o
CC fs/splice.o
CC net/mac80211/wme.o
CC net/mac80211/chan.o
CC arch/x86/kernel/itmt.o
CC [M] drivers/gpu/drm/xe/xe_guc_pc.o
CC arch/x86/kernel/umip.o
CC [M] drivers/gpu/drm/xe/xe_guc_submit.o
CC [M] drivers/gpu/drm/xe/xe_heci_gsc.o
AR drivers/net/ethernet/seeq/built-in.a
CC [M] drivers/gpu/drm/xe/xe_hw_engine.o
CC drivers/net/ethernet/realtek/r8169_phy_config.o
CC drivers/gpu/drm/i915/gt/intel_gt_pm_irq.o
CC drivers/gpu/drm/i915/gt/intel_gt_requests.o
CC fs/sync.o
CC fs/utimes.o
CC drivers/gpu/drm/i915/gt/intel_gt_sysfs.o
CC drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.o
CC [M] drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.o
CC arch/x86/kernel/unwind_frame.o
CC fs/d_path.o
CC drivers/acpi/acpica/utdebug.o
CC fs/stack.o
CC [M] drivers/gpu/drm/xe/xe_hw_fence.o
CC [M] drivers/gpu/drm/xe/xe_huc.o
AR drivers/hid/built-in.a
AR drivers/net/ethernet/nvidia/built-in.a
CC [M] drivers/gpu/drm/xe/xe_irq.o
AR drivers/net/ethernet/silan/built-in.a
AR drivers/net/ethernet/sis/built-in.a
CC fs/fs_struct.o
AR drivers/net/ethernet/sfc/built-in.a
CC drivers/gpu/drm/drm_trace_points.o
AR drivers/net/ethernet/smsc/built-in.a
CC [M] drivers/gpu/drm/xe/xe_lrc.o
CC drivers/gpu/drm/i915/gt/intel_gtt.o
AR drivers/net/ethernet/socionext/built-in.a
CC net/mac80211/trace.o
AR drivers/net/ethernet/stmicro/built-in.a
CC net/mac80211/mlme.o
CC drivers/gpu/drm/i915/gt/intel_llc.o
AR drivers/net/ethernet/sun/built-in.a
AR drivers/net/ethernet/tehuti/built-in.a
CC [M] drivers/gpu/drm/xe/xe_migrate.o
AR net/ipv4/built-in.a
CC drivers/acpi/acpica/utdecode.o
CC drivers/acpi/acpica/utdelete.o
CC net/mac80211/tdls.o
AR drivers/net/ethernet/ti/built-in.a
AR drivers/net/ethernet/vertexcom/built-in.a
AR drivers/net/ethernet/via/built-in.a
AR drivers/net/ethernet/wangxun/built-in.a
CC drivers/gpu/drm/drm_vblank.o
AR drivers/net/ethernet/xilinx/built-in.a
AR drivers/net/ethernet/wiznet/built-in.a
AR drivers/net/ethernet/xircom/built-in.a
CC fs/statfs.o
AR drivers/net/ethernet/synopsys/built-in.a
CC [M] drivers/gpu/drm/xe/xe_mmio.o
CC [M] drivers/gpu/drm/xe/xe_mocs.o
CC drivers/gpu/drm/drm_vblank_work.o
AR drivers/net/ethernet/pensando/built-in.a
CC [M] drivers/gpu/drm/xe/xe_module.o
CC drivers/gpu/drm/drm_vma_manager.o
AR arch/x86/kernel/built-in.a
CC drivers/gpu/drm/drm_writeback.o
CC fs/fs_pin.o
CC fs/nsfs.o
CC drivers/gpu/drm/i915/gt/intel_lrc.o
AR arch/x86/built-in.a
CC drivers/gpu/drm/i915/gt/intel_migrate.o
CC drivers/acpi/acpica/uterror.o
CC [M] drivers/gpu/drm/xe/xe_oa.o
CC drivers/acpi/acpica/uteval.o
CC drivers/acpi/acpica/utglobal.o
CC drivers/gpu/drm/i915/gt/intel_mocs.o
CC drivers/gpu/drm/i915/gt/intel_ppgtt.o
CC net/mac80211/ocb.o
CC fs/fs_types.o
CC net/mac80211/airtime.o
CC net/mac80211/eht.o
CC drivers/gpu/drm/i915/gt/intel_rc6.o
AR drivers/net/ethernet/intel/e1000e/built-in.a
CC fs/fs_context.o
AR drivers/net/ethernet/intel/built-in.a
CC fs/fs_parser.o
CC fs/fsopen.o
CC [M] drivers/gpu/drm/xe/xe_observation.o
CC drivers/gpu/drm/drm_panel.o
CC [M] drivers/gpu/drm/xe/xe_pat.o
CC drivers/gpu/drm/i915/gt/intel_region_lmem.o
CC fs/init.o
CC [M] drivers/gpu/drm/xe/xe_pci.o
CC drivers/acpi/acpica/uthex.o
CC [M] drivers/gpu/drm/xe/xe_pcode.o
CC drivers/acpi/acpica/utids.o
CC drivers/acpi/acpica/utinit.o
AR drivers/net/ethernet/realtek/built-in.a
CC fs/kernel_read_file.o
CC drivers/gpu/drm/drm_pci.o
AR drivers/net/ethernet/built-in.a
CC fs/mnt_idmapping.o
CC drivers/gpu/drm/i915/gt/intel_renderstate.o
CC drivers/gpu/drm/drm_debugfs.o
CC fs/remap_range.o
CC drivers/gpu/drm/drm_debugfs_crc.o
CC [M] drivers/gpu/drm/xe/xe_pm.o
CC fs/pidfs.o
CC net/mac80211/led.o
CC drivers/acpi/acpica/utlock.o
CC net/mac80211/pm.o
AR drivers/net/built-in.a
CC drivers/acpi/acpica/utmath.o
CC drivers/gpu/drm/i915/gt/intel_reset.o
CC drivers/gpu/drm/i915/gt/intel_ring.o
CC drivers/acpi/acpica/utmisc.o
CC fs/buffer.o
CC [M] drivers/gpu/drm/xe/xe_preempt_fence.o
CC [M] drivers/gpu/drm/xe/xe_pt.o
CC drivers/gpu/drm/i915/gt/intel_ring_submission.o
CC fs/mpage.o
CC fs/proc_namespace.o
CC fs/direct-io.o
CC net/mac80211/rc80211_minstrel_ht.o
CC fs/eventpoll.o
CC drivers/acpi/acpica/utmutex.o
CC fs/anon_inodes.o
CC drivers/gpu/drm/drm_panel_orientation_quirks.o
CC fs/signalfd.o
CC fs/timerfd.o
CC fs/eventfd.o
CC net/mac80211/wbrf.o
CC fs/aio.o
CC fs/locks.o
CC drivers/acpi/acpica/utnonansi.o
CC drivers/gpu/drm/i915/gt/intel_rps.o
CC drivers/gpu/drm/drm_buddy.o
CC [M] drivers/gpu/drm/xe/xe_pt_walk.o
CC drivers/gpu/drm/i915/gt/intel_sa_media.o
CC drivers/gpu/drm/i915/gt/intel_sseu.o
CC [M] drivers/gpu/drm/xe/xe_pxp.o
CC drivers/gpu/drm/drm_gem_shmem_helper.o
CC [M] drivers/gpu/drm/xe/xe_pxp_debugfs.o
CC drivers/gpu/drm/i915/gt/intel_sseu_debugfs.o
CC drivers/acpi/acpica/utobject.o
CC [M] drivers/gpu/drm/xe/xe_pxp_submit.o
CC drivers/gpu/drm/i915/gt/intel_timeline.o
CC fs/binfmt_misc.o
CC drivers/acpi/acpica/utosi.o
CC fs/binfmt_script.o
CC drivers/acpi/acpica/utownerid.o
CC [M] drivers/gpu/drm/xe/xe_query.o
CC [M] drivers/gpu/drm/xe/xe_range_fence.o
CC drivers/gpu/drm/i915/gt/intel_tlb.o
CC [M] drivers/gpu/drm/xe/xe_reg_sr.o
CC drivers/gpu/drm/i915/gt/intel_wopcm.o
CC drivers/gpu/drm/i915/gt/intel_workarounds.o
CC drivers/gpu/drm/drm_atomic_helper.o
CC drivers/gpu/drm/i915/gt/shmem_utils.o
CC fs/binfmt_elf.o
CC [M] drivers/gpu/drm/xe/xe_reg_whitelist.o
CC [M] drivers/gpu/drm/xe/xe_rtp.o
CC [M] drivers/gpu/drm/xe/xe_ring_ops.o
CC fs/mbcache.o
CC drivers/acpi/acpica/utpredef.o
CC drivers/gpu/drm/i915/gt/sysfs_engines.o
CC drivers/acpi/acpica/utresdecode.o
CC fs/posix_acl.o
CC drivers/acpi/acpica/utresrc.o
CC fs/coredump.o
CC fs/drop_caches.o
CC drivers/gpu/drm/drm_atomic_state_helper.o
CC drivers/gpu/drm/drm_bridge_connector.o
CC fs/sysctls.o
CC fs/fhandle.o
CC drivers/gpu/drm/i915/gt/intel_ggtt_gmch.o
CC [M] drivers/gpu/drm/xe/xe_sa.o
CC [M] drivers/gpu/drm/xe/xe_sched_job.o
CC drivers/gpu/drm/i915/gt/gen6_renderstate.o
In file included from /workspace/kernel/include/linux/device.h:15,
from /workspace/kernel/include/linux/pci.h:37,
from /workspace/kernel/drivers/gpu/drm/xe/xe_device_types.h:9,
from /workspace/kernel/drivers/gpu/drm/xe/xe_pxp_submit.c:11:
/workspace/kernel/drivers/gpu/drm/xe/xe_pxp_submit.c: In function ‘gsccs_send_message’:
/workspace/kernel/include/drm/drm_print.h:522:47: error: format ‘%ld’ expects argument of type ‘long int’, but argument 4 has type ‘size_t’ {aka ‘unsigned int’} [-Werror=format=]
522 | dev_##level##type((drm) ? (drm)->dev : NULL, "[drm] " fmt, ##__VA_ARGS__)
| ^~~~~~~~
/workspace/kernel/include/linux/dev_printk.h:110:16: note: in definition of macro ‘dev_printk_index_wrap’
110 | _p_func(dev, fmt, ##__VA_ARGS__); \
| ^~~
/workspace/kernel/include/linux/dev_printk.h:156:54: note: in expansion of macro ‘dev_fmt’
156 | dev_printk_index_wrap(_dev_warn, KERN_WARNING, dev, dev_fmt(fmt), ##__VA_ARGS__)
| ^~~~~~~
/workspace/kernel/include/drm/drm_print.h:522:2: note: in expansion of macro ‘dev_warn’
522 | dev_##level##type((drm) ? (drm)->dev : NULL, "[drm] " fmt, ##__VA_ARGS__)
| ^~~~
/workspace/kernel/include/drm/drm_print.h:532:2: note: in expansion of macro ‘__drm_printk’
532 | __drm_printk((drm), warn,, fmt, ##__VA_ARGS__)
| ^~~~~~~~~~~~
/workspace/kernel/drivers/gpu/drm/xe/xe_pxp_submit.c:459:4: note: in expansion of macro ‘drm_warn’
459 | drm_warn(&xe->drm, "caller with insufficient PXP reply size %u (%ld)\n",
| ^~~~~~~~
/workspace/kernel/drivers/gpu/drm/xe/xe_pxp_submit.c:459:70: note: format string is defined here
459 | drm_warn(&xe->drm, "caller with insufficient PXP reply size %u (%ld)\n",
| ~~^
| |
| long int
| %d
CC drivers/gpu/drm/drm_crtc_helper.o
CC [M] drivers/gpu/drm/xe/xe_step.o
CC drivers/gpu/drm/i915/gt/gen7_renderstate.o
CC drivers/acpi/acpica/utstate.o
CC [M] drivers/gpu/drm/xe/xe_sync.o
CC [M] drivers/gpu/drm/xe/xe_tile.o
CC [M] drivers/gpu/drm/xe/xe_tile_sysfs.o
CC [M] drivers/gpu/drm/xe/xe_trace.o
CC drivers/gpu/drm/i915/gt/gen8_renderstate.o
CC drivers/gpu/drm/drm_damage_helper.o
cc1: all warnings being treated as errors
CC drivers/gpu/drm/drm_encoder_slave.o
CC drivers/gpu/drm/i915/gt/gen9_renderstate.o
make[6]: *** [/workspace/kernel/scripts/Makefile.build:244: drivers/gpu/drm/xe/xe_pxp_submit.o] Error 1
make[6]: *** Waiting for unfinished jobs....
CC drivers/gpu/drm/i915/gem/i915_gem_busy.o
CC drivers/gpu/drm/i915/gem/i915_gem_clflush.o
CC drivers/gpu/drm/drm_flip_work.o
CC drivers/gpu/drm/i915/gem/i915_gem_context.o
CC drivers/gpu/drm/i915/gem/i915_gem_create.o
CC drivers/gpu/drm/drm_format_helper.o
CC drivers/gpu/drm/drm_gem_atomic_helper.o
CC drivers/gpu/drm/i915/gem/i915_gem_dmabuf.o
CC drivers/acpi/acpica/utstring.o
CC drivers/acpi/acpica/utstrsuppt.o
CC drivers/acpi/acpica/utstrtoul64.o
CC drivers/acpi/acpica/utxface.o
CC drivers/gpu/drm/i915/gem/i915_gem_domain.o
CC drivers/gpu/drm/i915/gem/i915_gem_execbuffer.o
CC drivers/gpu/drm/i915/gem/i915_gem_internal.o
CC drivers/gpu/drm/drm_gem_framebuffer_helper.o
CC drivers/gpu/drm/i915/gem/i915_gem_lmem.o
CC drivers/gpu/drm/i915/gem/i915_gem_mman.o
CC drivers/gpu/drm/i915/gem/i915_gem_object.o
CC drivers/gpu/drm/drm_kms_helper_common.o
CC drivers/gpu/drm/i915/gem/i915_gem_pages.o
CC drivers/gpu/drm/i915/gem/i915_gem_phys.o
CC drivers/gpu/drm/i915/gem/i915_gem_pm.o
CC drivers/gpu/drm/drm_modeset_helper.o
CC drivers/gpu/drm/i915/gem/i915_gem_region.o
CC drivers/gpu/drm/i915/gem/i915_gem_shmem.o
CC drivers/gpu/drm/drm_plane_helper.o
CC drivers/gpu/drm/drm_probe_helper.o
CC drivers/gpu/drm/i915/gem/i915_gem_shrinker.o
CC drivers/gpu/drm/drm_rect.o
CC drivers/gpu/drm/drm_self_refresh_helper.o
CC drivers/gpu/drm/drm_simple_kms_helper.o
CC drivers/acpi/acpica/utxfinit.o
CC drivers/acpi/acpica/utxferror.o
CC drivers/gpu/drm/i915/gem/i915_gem_stolen.o
CC drivers/gpu/drm/i915/gem/i915_gem_throttle.o
CC drivers/gpu/drm/bridge/panel.o
CC drivers/acpi/acpica/utxfmutex.o
CC drivers/gpu/drm/i915/gem/i915_gem_tiling.o
CC drivers/gpu/drm/i915/gem/i915_gem_ttm.o
CC drivers/gpu/drm/drm_mipi_dsi.o
CC [M] drivers/gpu/drm/drm_exec.o
CC [M] drivers/gpu/drm/drm_gpuvm.o
CC drivers/gpu/drm/i915/gem/i915_gem_ttm_move.o
CC drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.o
CC [M] drivers/gpu/drm/drm_suballoc.o
CC drivers/gpu/drm/i915/gem/i915_gem_userptr.o
CC drivers/gpu/drm/i915/gem/i915_gem_wait.o
CC [M] drivers/gpu/drm/drm_gem_ttm_helper.o
CC drivers/gpu/drm/i915/gem/i915_gemfs.o
CC drivers/gpu/drm/i915/i915_active.o
CC drivers/gpu/drm/i915/i915_cmd_parser.o
CC drivers/gpu/drm/i915/i915_deps.o
AR drivers/acpi/acpica/built-in.a
AR drivers/acpi/built-in.a
CC drivers/gpu/drm/i915/i915_gem.o
AR fs/built-in.a
CC drivers/gpu/drm/i915/i915_gem_evict.o
CC drivers/gpu/drm/i915/i915_gem_gtt.o
CC drivers/gpu/drm/i915/i915_gem_ww.o
CC drivers/gpu/drm/i915/i915_query.o
CC drivers/gpu/drm/i915/i915_request.o
CC drivers/gpu/drm/i915/i915_scheduler.o
CC drivers/gpu/drm/i915/i915_trace_points.o
CC drivers/gpu/drm/i915/i915_ttm_buddy_manager.o
CC drivers/gpu/drm/i915/i915_vma.o
CC drivers/gpu/drm/i915/i915_vma_resource.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_fw.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_proxy.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_uc_debugfs.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_uc_heci_cmd_submit.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_ads.o
LD [M] drivers/gpu/drm/drm_suballoc_helper.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_capture.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_ct.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_fw.o
LD [M] drivers/gpu/drm/drm_ttm_helper.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_hwconfig.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_log.o
make[5]: *** [/workspace/kernel/scripts/Makefile.build:485: drivers/gpu/drm/xe] Error 2
make[5]: *** Waiting for unfinished jobs....
CC drivers/gpu/drm/i915/gt/uc/intel_guc_log_debugfs.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_rc.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_submission.o
CC drivers/gpu/drm/i915/gt/uc/intel_huc.o
CC drivers/gpu/drm/i915/gt/uc/intel_huc_debugfs.o
CC drivers/gpu/drm/i915/gt/uc/intel_huc_fw.o
CC drivers/gpu/drm/i915/gt/uc/intel_uc.o
CC drivers/gpu/drm/i915/gt/uc/intel_uc_debugfs.o
CC drivers/gpu/drm/i915/gt/uc/intel_uc_fw.o
CC drivers/gpu/drm/i915/gt/intel_gsc.o
CC drivers/gpu/drm/i915/i915_hwmon.o
CC drivers/gpu/drm/i915/display/hsw_ips.o
CC drivers/gpu/drm/i915/display/i9xx_plane.o
CC drivers/gpu/drm/i915/display/i9xx_wm.o
CC drivers/gpu/drm/i915/display/intel_alpm.o
CC drivers/gpu/drm/i915/display/intel_atomic.o
CC drivers/gpu/drm/i915/display/intel_atomic_plane.o
CC drivers/gpu/drm/i915/display/intel_audio.o
CC drivers/gpu/drm/i915/display/intel_bios.o
CC drivers/gpu/drm/i915/display/intel_bw.o
CC drivers/gpu/drm/i915/display/intel_cdclk.o
CC drivers/gpu/drm/i915/display/intel_color.o
CC drivers/gpu/drm/i915/display/intel_combo_phy.o
CC drivers/gpu/drm/i915/display/intel_connector.o
CC drivers/gpu/drm/i915/display/intel_crtc.o
CC drivers/gpu/drm/i915/display/intel_crtc_state_dump.o
CC drivers/gpu/drm/i915/display/intel_cursor.o
CC drivers/gpu/drm/i915/display/intel_display.o
CC drivers/gpu/drm/i915/display/intel_display_driver.o
CC drivers/gpu/drm/i915/display/intel_display_irq.o
CC drivers/gpu/drm/i915/display/intel_display_params.o
CC drivers/gpu/drm/i915/display/intel_display_power.o
CC drivers/gpu/drm/i915/display/intel_display_power_map.o
CC drivers/gpu/drm/i915/display/intel_display_power_well.o
CC drivers/gpu/drm/i915/display/intel_display_reset.o
CC drivers/gpu/drm/i915/display/intel_display_rps.o
CC drivers/gpu/drm/i915/display/intel_display_wa.o
CC drivers/gpu/drm/i915/display/intel_dmc.o
CC drivers/gpu/drm/i915/display/intel_dmc_wl.o
CC drivers/gpu/drm/i915/display/intel_dpio_phy.o
CC drivers/gpu/drm/i915/display/intel_dpll.o
CC drivers/gpu/drm/i915/display/intel_dpll_mgr.o
CC drivers/gpu/drm/i915/display/intel_dpt.o
CC drivers/gpu/drm/i915/display/intel_dpt_common.o
CC drivers/gpu/drm/i915/display/intel_drrs.o
CC drivers/gpu/drm/i915/display/intel_dsb.o
CC drivers/gpu/drm/i915/display/intel_dsb_buffer.o
CC drivers/gpu/drm/i915/display/intel_fb.o
CC drivers/gpu/drm/i915/display/intel_fb_bo.o
CC drivers/gpu/drm/i915/display/intel_fb_pin.o
CC drivers/gpu/drm/i915/display/intel_fbc.o
CC drivers/gpu/drm/i915/display/intel_fdi.o
CC drivers/gpu/drm/i915/display/intel_fifo_underrun.o
CC drivers/gpu/drm/i915/display/intel_frontbuffer.o
CC drivers/gpu/drm/i915/display/intel_global_state.o
CC drivers/gpu/drm/i915/display/intel_hdcp.o
CC drivers/gpu/drm/i915/display/intel_hdcp_gsc.o
CC drivers/gpu/drm/i915/display/intel_hdcp_gsc_message.o
CC drivers/gpu/drm/i915/display/intel_hotplug.o
CC drivers/gpu/drm/i915/display/intel_hotplug_irq.o
CC drivers/gpu/drm/i915/display/intel_hti.o
CC drivers/gpu/drm/i915/display/intel_link_bw.o
CC drivers/gpu/drm/i915/display/intel_load_detect.o
CC drivers/gpu/drm/i915/display/intel_lpe_audio.o
CC drivers/gpu/drm/i915/display/intel_modeset_lock.o
CC drivers/gpu/drm/i915/display/intel_modeset_setup.o
CC drivers/gpu/drm/i915/display/intel_modeset_verify.o
CC drivers/gpu/drm/i915/display/intel_overlay.o
CC drivers/gpu/drm/i915/display/intel_pch_display.o
CC drivers/gpu/drm/i915/display/intel_pch_refclk.o
CC drivers/gpu/drm/i915/display/intel_plane_initial.o
CC drivers/gpu/drm/i915/display/intel_pmdemand.o
CC drivers/gpu/drm/i915/display/intel_psr.o
CC drivers/gpu/drm/i915/display/intel_quirks.o
CC drivers/gpu/drm/i915/display/intel_sprite.o
CC drivers/gpu/drm/i915/display/intel_sprite_uapi.o
CC drivers/gpu/drm/i915/display/intel_tc.o
CC drivers/gpu/drm/i915/display/intel_vblank.o
CC drivers/gpu/drm/i915/display/intel_vga.o
CC drivers/gpu/drm/i915/display/intel_wm.o
CC drivers/gpu/drm/i915/display/skl_scaler.o
CC drivers/gpu/drm/i915/display/skl_universal_plane.o
CC drivers/gpu/drm/i915/display/skl_watermark.o
CC drivers/gpu/drm/i915/display/intel_acpi.o
AR net/mac80211/built-in.a
AR net/built-in.a
CC drivers/gpu/drm/i915/display/intel_opregion.o
CC drivers/gpu/drm/i915/display/intel_display_debugfs.o
CC drivers/gpu/drm/i915/display/intel_display_debugfs_params.o
CC drivers/gpu/drm/i915/display/intel_pipe_crc.o
CC drivers/gpu/drm/i915/display/dvo_ch7017.o
CC drivers/gpu/drm/i915/display/dvo_ch7xxx.o
CC drivers/gpu/drm/i915/display/dvo_ivch.o
CC drivers/gpu/drm/i915/display/dvo_ns2501.o
CC drivers/gpu/drm/i915/display/dvo_sil164.o
CC drivers/gpu/drm/i915/display/dvo_tfp410.o
CC drivers/gpu/drm/i915/display/g4x_dp.o
CC drivers/gpu/drm/i915/display/g4x_hdmi.o
CC drivers/gpu/drm/i915/display/icl_dsi.o
CC drivers/gpu/drm/i915/display/intel_backlight.o
CC drivers/gpu/drm/i915/display/intel_crt.o
CC drivers/gpu/drm/i915/display/intel_cx0_phy.o
CC drivers/gpu/drm/i915/display/intel_ddi.o
CC drivers/gpu/drm/i915/display/intel_ddi_buf_trans.o
CC drivers/gpu/drm/i915/display/intel_display_device.o
CC drivers/gpu/drm/i915/display/intel_display_trace.o
CC drivers/gpu/drm/i915/display/intel_dkl_phy.o
CC drivers/gpu/drm/i915/display/intel_dp.o
CC drivers/gpu/drm/i915/display/intel_dp_aux.o
CC drivers/gpu/drm/i915/display/intel_dp_aux_backlight.o
CC drivers/gpu/drm/i915/display/intel_dp_hdcp.o
CC drivers/gpu/drm/i915/display/intel_dp_link_training.o
CC drivers/gpu/drm/i915/display/intel_dp_mst.o
CC drivers/gpu/drm/i915/display/intel_dsi.o
CC drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.o
CC drivers/gpu/drm/i915/display/intel_dsi_vbt.o
CC drivers/gpu/drm/i915/display/intel_dvo.o
CC drivers/gpu/drm/i915/display/intel_encoder.o
CC drivers/gpu/drm/i915/display/intel_gmbus.o
CC drivers/gpu/drm/i915/display/intel_hdmi.o
CC drivers/gpu/drm/i915/display/intel_lspcon.o
CC drivers/gpu/drm/i915/display/intel_lvds.o
CC drivers/gpu/drm/i915/display/intel_panel.o
CC drivers/gpu/drm/i915/display/intel_pps.o
CC drivers/gpu/drm/i915/display/intel_qp_tables.o
CC drivers/gpu/drm/i915/display/intel_sdvo.o
CC drivers/gpu/drm/i915/display/intel_snps_phy.o
CC drivers/gpu/drm/i915/display/intel_tv.o
CC drivers/gpu/drm/i915/display/intel_vdsc.o
CC drivers/gpu/drm/i915/display/intel_vrr.o
CC drivers/gpu/drm/i915/display/vlv_dsi.o
CC drivers/gpu/drm/i915/display/vlv_dsi_pll.o
CC drivers/gpu/drm/i915/i915_perf.o
CC drivers/gpu/drm/i915/pxp/intel_pxp.o
CC drivers/gpu/drm/i915/pxp/intel_pxp_huc.o
CC drivers/gpu/drm/i915/pxp/intel_pxp_tee.o
CC drivers/gpu/drm/i915/i915_gpu_error.o
CC drivers/gpu/drm/i915/i915_vgpu.o
AR drivers/gpu/drm/i915/built-in.a
make[4]: *** [/workspace/kernel/scripts/Makefile.build:485: drivers/gpu/drm] Error 2
make[3]: *** [/workspace/kernel/scripts/Makefile.build:485: drivers/gpu] Error 2
make[2]: *** [/workspace/kernel/scripts/Makefile.build:485: drivers] Error 2
make[1]: *** [/workspace/kernel/Makefile:1925: .] Error 2
make: *** [/workspace/kernel/Makefile:224: __sub-make] Error 2
run-parts: /workspace/ci/hooks/11-build-32b exited with return code 2
^ permalink raw reply [flat|nested] 54+ messages in thread
* ✓ CI.checksparse: success for Add PXP HWDRM support (rev2)
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (16 preceding siblings ...)
2024-08-16 19:25 ` ✗ CI.Hooks: failure " Patchwork
@ 2024-08-16 19:27 ` Patchwork
2024-08-16 20:11 ` ✗ CI.BAT: failure " Patchwork
` (2 subsequent siblings)
20 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-08-16 19:27 UTC (permalink / raw)
To: Daniele Ceraolo Spurio; +Cc: intel-xe
== Series Details ==
Series: Add PXP HWDRM support (rev2)
URL : https://patchwork.freedesktop.org/series/136052/
State : success
== Summary ==
+ trap cleanup EXIT
+ KERNEL=/kernel
+ MT=/root/linux/maintainer-tools
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools /root/linux/maintainer-tools
Cloning into '/root/linux/maintainer-tools'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ make -C /root/linux/maintainer-tools
make: Entering directory '/root/linux/maintainer-tools'
cc -O2 -g -Wextra -o remap-log remap-log.c
make: Leaving directory '/root/linux/maintainer-tools'
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ /root/linux/maintainer-tools/dim sparse --fast d6dac3db19935f5939cbb033eea30c90bdf3888c
Sparse version: 0.6.1 (Ubuntu: 0.6.1-2build1)
Fast mode used, each commit won't be checked separately.
Okay!
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* ✗ CI.BAT: failure for Add PXP HWDRM support (rev2)
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (17 preceding siblings ...)
2024-08-16 19:27 ` ✓ CI.checksparse: success " Patchwork
@ 2024-08-16 20:11 ` Patchwork
2024-08-17 4:53 ` ✗ CI.FULL: " Patchwork
2024-08-19 14:33 ` [PATCH v2 00/12] Add PXP HWDRM support Souza, Jose
20 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-08-16 20:11 UTC (permalink / raw)
To: Daniele Ceraolo Spurio; +Cc: intel-xe
== Series Details ==
Series: Add PXP HWDRM support (rev2)
URL : https://patchwork.freedesktop.org/series/136052/
State : failure
== Summary ==
CI Bug Log - changes from xe-1785-479cde039b423852ee120f3832ca74ea45b646fd_BAT -> xe-pw-136052v2_BAT
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-136052v2_BAT absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-136052v2_BAT, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (8 -> 8)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-136052v2_BAT:
### IGT changes ###
#### Possible regressions ####
* igt@xe_create@create-invalid-mbz:
- bat-atsm-2: [PASS][1] -> [FAIL][2] +1 other test fail
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/bat-atsm-2/igt@xe_create@create-invalid-mbz.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/bat-atsm-2/igt@xe_create@create-invalid-mbz.html
- bat-adlp-vf: [PASS][3] -> [FAIL][4] +1 other test fail
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/bat-adlp-vf/igt@xe_create@create-invalid-mbz.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/bat-adlp-vf/igt@xe_create@create-invalid-mbz.html
- bat-adlp-7: [PASS][5] -> [FAIL][6] +1 other test fail
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/bat-adlp-7/igt@xe_create@create-invalid-mbz.html
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/bat-adlp-7/igt@xe_create@create-invalid-mbz.html
- bat-bmg-1: [PASS][7] -> [FAIL][8] +1 other test fail
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/bat-bmg-1/igt@xe_create@create-invalid-mbz.html
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/bat-bmg-1/igt@xe_create@create-invalid-mbz.html
- bat-lnl-2: [PASS][9] -> [FAIL][10] +1 other test fail
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/bat-lnl-2/igt@xe_create@create-invalid-mbz.html
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/bat-lnl-2/igt@xe_create@create-invalid-mbz.html
* igt@xe_exec_queue_property@invalid-property:
- bat-dg2-oem2: [PASS][11] -> [FAIL][12] +1 other test fail
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/bat-dg2-oem2/igt@xe_exec_queue_property@invalid-property.html
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/bat-dg2-oem2/igt@xe_exec_queue_property@invalid-property.html
- bat-lnl-1: [PASS][13] -> [FAIL][14] +1 other test fail
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/bat-lnl-1/igt@xe_exec_queue_property@invalid-property.html
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/bat-lnl-1/igt@xe_exec_queue_property@invalid-property.html
#### Suppressed ####
The following results come from untrusted machines, tests, or statuses.
They do not affect the overall result.
* igt@xe_create@create-invalid-mbz:
- {bat-bmg-2}: [PASS][15] -> [FAIL][16] +1 other test fail
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/bat-bmg-2/igt@xe_create@create-invalid-mbz.html
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/bat-bmg-2/igt@xe_create@create-invalid-mbz.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
Build changes
-------------
* Linux: xe-1785-479cde039b423852ee120f3832ca74ea45b646fd -> xe-pw-136052v2
IGT_7973: 9c3a20d0403a2fe80bde618de5c2ef83b7e08d50 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-1785-479cde039b423852ee120f3832ca74ea45b646fd: 479cde039b423852ee120f3832ca74ea45b646fd
xe-pw-136052v2: 136052v2
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/index.html
* ✗ CI.FULL: failure for Add PXP HWDRM support (rev2)
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (18 preceding siblings ...)
2024-08-16 20:11 ` ✗ CI.BAT: failure " Patchwork
@ 2024-08-17 4:53 ` Patchwork
2024-08-19 14:33 ` [PATCH v2 00/12] Add PXP HWDRM support Souza, Jose
20 siblings, 0 replies; 54+ messages in thread
From: Patchwork @ 2024-08-17 4:53 UTC (permalink / raw)
To: Daniele Ceraolo Spurio; +Cc: intel-xe
== Series Details ==
Series: Add PXP HWDRM support (rev2)
URL : https://patchwork.freedesktop.org/series/136052/
State : failure
== Summary ==
CI Bug Log - changes from xe-1785-479cde039b423852ee120f3832ca74ea45b646fd_full -> xe-pw-136052v2_full
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-136052v2_full absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-136052v2_full, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (4 -> 4)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-136052v2_full:
### IGT changes ###
#### Possible regressions ####
* igt@xe_exec_queue_property@invalid-property:
- shard-dg2-set2: [PASS][1] -> [FAIL][2] +2 other tests fail
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-466/igt@xe_exec_queue_property@invalid-property.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-433/igt@xe_exec_queue_property@invalid-property.html
- shard-lnl: [PASS][3] -> [FAIL][4] +1 other test fail
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-2/igt@xe_exec_queue_property@invalid-property.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-6/igt@xe_exec_queue_property@invalid-property.html
* igt@xe_vm@bind-flag-invalid:
- shard-adlp: [PASS][5] -> [FAIL][6] +2 other tests fail
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-adlp-8/igt@xe_vm@bind-flag-invalid.html
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-8/igt@xe_vm@bind-flag-invalid.html
#### Warnings ####
* igt@kms_pipe_crc_basic@suspend-read-crc@pipe-a-edp-1:
- shard-lnl: [FAIL][7] ([Intel XE#2028]) -> [INCOMPLETE][8] +1 other test incomplete
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-1/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-a-edp-1.html
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-4/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-a-edp-1.html
#### Suppressed ####
The following results come from untrusted machines, tests, or statuses.
They do not affect the overall result.
* igt@xe_vm@bind-flag-invalid:
- {shard-bmg}: [PASS][9] -> [FAIL][10] +2 other tests fail
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-bmg-1/igt@xe_vm@bind-flag-invalid.html
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-bmg-1/igt@xe_vm@bind-flag-invalid.html
Known issues
------------
Here are the changes found in xe-pw-136052v2_full that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@kms_addfb_basic@invalid-smem-bo-on-discrete:
- shard-adlp: NOTRUN -> [SKIP][11] ([Intel XE#1201] / [Intel XE#660])
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_addfb_basic@invalid-smem-bo-on-discrete.html
- shard-lnl: NOTRUN -> [SKIP][12] ([Intel XE#660])
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_addfb_basic@invalid-smem-bo-on-discrete.html
* igt@kms_async_flips@alternate-sync-async-flip:
- shard-adlp: [PASS][13] -> [FAIL][14] ([Intel XE#827]) +1 other test fail
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-adlp-4/igt@kms_async_flips@alternate-sync-async-flip.html
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-4/igt@kms_async_flips@alternate-sync-async-flip.html
* igt@kms_big_fb@4-tiled-addfb-size-offset-overflow:
- shard-adlp: NOTRUN -> [SKIP][15] ([Intel XE#1201] / [Intel XE#607])
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_big_fb@4-tiled-addfb-size-offset-overflow.html
* igt@kms_big_fb@4-tiled-addfb-size-overflow:
- shard-adlp: NOTRUN -> [SKIP][16] ([Intel XE#1201] / [Intel XE#610])
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_big_fb@4-tiled-addfb-size-overflow.html
* igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0:
- shard-lnl: [PASS][17] -> [FAIL][18] ([Intel XE#1659]) +1 other test fail
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-3/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0.html
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-4/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0.html
* igt@kms_big_fb@linear-64bpp-rotate-180:
- shard-lnl: NOTRUN -> [DMESG-WARN][19] ([Intel XE#1725])
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_big_fb@linear-64bpp-rotate-180.html
* igt@kms_big_fb@x-tiled-8bpp-rotate-0:
- shard-adlp: NOTRUN -> [FAIL][20] ([Intel XE#1874]) +1 other test fail
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_big_fb@x-tiled-8bpp-rotate-0.html
* igt@kms_big_fb@y-tiled-32bpp-rotate-270:
- shard-adlp: NOTRUN -> [SKIP][21] ([Intel XE#1201] / [Intel XE#316])
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_big_fb@y-tiled-32bpp-rotate-270.html
* igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-async-flip:
- shard-adlp: NOTRUN -> [FAIL][22] ([Intel XE#1231])
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-async-flip.html
* igt@kms_big_fb@yf-tiled-8bpp-rotate-270:
- shard-lnl: NOTRUN -> [SKIP][23] ([Intel XE#1124]) +1 other test skip
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_big_fb@yf-tiled-8bpp-rotate-270.html
* igt@kms_big_fb@yf-tiled-addfb:
- shard-adlp: NOTRUN -> [SKIP][24] ([Intel XE#1201] / [Intel XE#619])
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_big_fb@yf-tiled-addfb.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0:
- shard-adlp: NOTRUN -> [SKIP][25] ([Intel XE#1124] / [Intel XE#1201]) +2 other tests skip
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0.html
* igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p:
- shard-adlp: NOTRUN -> [SKIP][26] ([Intel XE#1201] / [Intel XE#2191])
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p.html
- shard-lnl: NOTRUN -> [SKIP][27] ([Intel XE#2191])
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p.html
* igt@kms_bw@linear-tiling-2-displays-1920x1080p:
- shard-adlp: NOTRUN -> [SKIP][28] ([Intel XE#1201] / [Intel XE#367]) +1 other test skip
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_bw@linear-tiling-2-displays-1920x1080p.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs-cc:
- shard-lnl: NOTRUN -> [SKIP][29] ([Intel XE#1399])
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs-cc.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs-cc@pipe-b-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][30] ([Intel XE#1201] / [Intel XE#787]) +17 other tests skip
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs-cc@pipe-b-hdmi-a-1.html
* igt@kms_ccs@random-ccs-data-y-tiled-ccs:
- shard-adlp: NOTRUN -> [SKIP][31] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#787]) +11 other tests skip
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_ccs@random-ccs-data-y-tiled-ccs.html
* igt@kms_cdclk@plane-scaling:
- shard-lnl: NOTRUN -> [SKIP][32] ([Intel XE#1152]) +3 other tests skip
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_cdclk@plane-scaling.html
- shard-adlp: NOTRUN -> [SKIP][33] ([Intel XE#1152] / [Intel XE#1201] / [Intel XE#455])
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_cdclk@plane-scaling.html
* igt@kms_cdclk@plane-scaling@pipe-a-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][34] ([Intel XE#1152] / [Intel XE#1201]) +2 other tests skip
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_cdclk@plane-scaling@pipe-a-hdmi-a-1.html
* igt@kms_chamelium_hpd@vga-hpd-fast:
- shard-adlp: NOTRUN -> [SKIP][35] ([Intel XE#1201] / [Intel XE#373]) +3 other tests skip
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_chamelium_hpd@vga-hpd-fast.html
- shard-lnl: NOTRUN -> [SKIP][36] ([Intel XE#373]) +1 other test skip
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_chamelium_hpd@vga-hpd-fast.html
* igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy:
- shard-adlp: NOTRUN -> [SKIP][37] ([Intel XE#1201] / [Intel XE#309]) +1 other test skip
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_cursor_legacy@2x-long-flip-vs-cursor-legacy.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-varying-size:
- shard-lnl: NOTRUN -> [SKIP][38] ([Intel XE#309])
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_cursor_legacy@cursorb-vs-flipa-varying-size.html
* igt@kms_feature_discovery@chamelium:
- shard-adlp: NOTRUN -> [SKIP][39] ([Intel XE#1201] / [Intel XE#701])
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_feature_discovery@chamelium.html
* igt@kms_flip@2x-plain-flip-interruptible:
- shard-adlp: NOTRUN -> [SKIP][40] ([Intel XE#1201] / [Intel XE#310])
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_flip@2x-plain-flip-interruptible.html
- shard-lnl: NOTRUN -> [SKIP][41] ([Intel XE#1421])
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_flip@2x-plain-flip-interruptible.html
* igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling:
- shard-adlp: NOTRUN -> [SKIP][42] ([Intel XE#1201] / [Intel XE#455]) +6 other tests skip
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling.html
- shard-lnl: NOTRUN -> [SKIP][43] ([Intel XE#1401] / [Intel XE#1745])
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling.html
* igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling@pipe-a-default-mode:
- shard-lnl: NOTRUN -> [SKIP][44] ([Intel XE#1401])
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling@pipe-a-default-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-valid-mode:
- shard-adlp: NOTRUN -> [DMESG-FAIL][45] ([Intel XE#324]) +1 other test dmesg-fail
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-valid-mode.html
* igt@kms_flip_tiling@flip-change-tiling@pipe-d-hdmi-a-1-y-to-y:
- shard-adlp: [PASS][46] -> [FAIL][47] ([Intel XE#1874]) +3 other tests fail
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-adlp-1/igt@kms_flip_tiling@flip-change-tiling@pipe-d-hdmi-a-1-y-to-y.html
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-6/igt@kms_flip_tiling@flip-change-tiling@pipe-d-hdmi-a-1-y-to-y.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-fullscreen:
- shard-adlp: NOTRUN -> [SKIP][48] ([Intel XE#1201] / [Intel XE#656]) +14 other tests skip
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-fullscreen.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscren-pri-shrfb-draw-blt:
- shard-adlp: NOTRUN -> [SKIP][49] ([Intel XE#1201] / [Intel XE#651]) +5 other tests skip
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscren-pri-shrfb-draw-blt.html
- shard-lnl: NOTRUN -> [SKIP][50] ([Intel XE#651]) +1 other test skip
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscren-pri-shrfb-draw-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-blt:
- shard-lnl: NOTRUN -> [SKIP][51] ([Intel XE#656]) +3 other tests skip
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@psr-rgb565-draw-mmap-wc:
- shard-adlp: NOTRUN -> [SKIP][52] ([Intel XE#1201] / [Intel XE#653]) +5 other tests skip
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_frontbuffer_tracking@psr-rgb565-draw-mmap-wc.html
* igt@kms_hdmi_inject@inject-4k:
- shard-lnl: NOTRUN -> [SKIP][53] ([Intel XE#1470])
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_hdmi_inject@inject-4k.html
* igt@kms_hdr@invalid-hdr:
- shard-dg2-set2: [PASS][54] -> [SKIP][55] ([Intel XE#1201] / [Intel XE#455])
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-463/igt@kms_hdr@invalid-hdr.html
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-433/igt@kms_hdr@invalid-hdr.html
* igt@kms_plane@pixel-format:
- shard-adlp: NOTRUN -> [INCOMPLETE][56] ([Intel XE#1035] / [Intel XE#1195])
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_plane@pixel-format.html
* igt@kms_plane@pixel-format@pipe-a-plane-3:
- shard-adlp: NOTRUN -> [WARN][57] ([Intel XE#2078]) +1 other test warn
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_plane@pixel-format@pipe-a-plane-3.html
* igt@kms_plane@plane-position-hole-dpms@pipe-b-plane-3:
- shard-lnl: [PASS][58] -> [DMESG-WARN][59] ([Intel XE#324]) +2 other tests dmesg-warn
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-3/igt@kms_plane@plane-position-hole-dpms@pipe-b-plane-3.html
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-2/igt@kms_plane@plane-position-hole-dpms@pipe-b-plane-3.html
* igt@kms_plane_lowres@tiling-none@pipe-b-edp-1:
- shard-lnl: NOTRUN -> [SKIP][60] ([Intel XE#599]) +3 other tests skip
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_plane_lowres@tiling-none@pipe-b-edp-1.html
* igt@kms_plane_scaling@intel-max-src-size@pipe-a-hdmi-a-6:
- shard-dg2-set2: [PASS][61] -> [FAIL][62] ([Intel XE#361])
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_plane_scaling@intel-max-src-size@pipe-a-hdmi-a-6.html
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_plane_scaling@intel-max-src-size@pipe-a-hdmi-a-6.html
* igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25:
- shard-lnl: NOTRUN -> [SKIP][63] ([Intel XE#2318]) +3 other tests skip
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25.html
* igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-a-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][64] ([Intel XE#1201] / [Intel XE#2318]) +2 other tests skip
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-a-hdmi-a-1.html
* igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-d-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][65] ([Intel XE#1201] / [Intel XE#2318] / [Intel XE#455]) +1 other test skip
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_plane_scaling@planes-unity-scaling-downscale-factor-0-25@pipe-d-hdmi-a-1.html
* igt@kms_pm_backlight@fade-with-dpms:
- shard-adlp: NOTRUN -> [SKIP][66] ([Intel XE#1201] / [Intel XE#870])
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_pm_backlight@fade-with-dpms.html
* igt@kms_pm_dc@dc3co-vpb-simulation:
- shard-lnl: NOTRUN -> [SKIP][67] ([Intel XE#736])
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_pm_dc@dc3co-vpb-simulation.html
* igt@kms_pm_dc@dc9-dpms:
- shard-lnl: [PASS][68] -> [DMESG-WARN][69] ([Intel XE#1705]) +8 other tests dmesg-warn
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-7/igt@kms_pm_dc@dc9-dpms.html
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-8/igt@kms_pm_dc@dc9-dpms.html
* igt@kms_pm_rpm@universal-planes-dpms:
- shard-lnl: [PASS][70] -> [DMESG-WARN][71] ([Intel XE#1705] / [Intel XE#2042])
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-3/igt@kms_pm_rpm@universal-planes-dpms.html
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-8/igt@kms_pm_rpm@universal-planes-dpms.html
* igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area:
- shard-adlp: NOTRUN -> [SKIP][72] ([Intel XE#1201])
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area.html
* igt@kms_psr2_su@page_flip-p010:
- shard-adlp: NOTRUN -> [SKIP][73] ([Intel XE#1122] / [Intel XE#1201]) +1 other test skip
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_psr2_su@page_flip-p010.html
* igt@kms_psr@fbc-pr-suspend:
- shard-lnl: NOTRUN -> [SKIP][74] ([Intel XE#1406])
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@kms_psr@fbc-pr-suspend.html
* igt@kms_psr@psr-cursor-plane-onoff:
- shard-adlp: NOTRUN -> [SKIP][75] ([Intel XE#1201] / [Intel XE#929]) +4 other tests skip
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_psr@psr-cursor-plane-onoff.html
* igt@kms_universal_plane@cursor-fb-leak@pipe-b-hdmi-a-1:
- shard-adlp: [PASS][76] -> [FAIL][77] ([Intel XE#899])
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-adlp-1/igt@kms_universal_plane@cursor-fb-leak@pipe-b-hdmi-a-1.html
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-4/igt@kms_universal_plane@cursor-fb-leak@pipe-b-hdmi-a-1.html
* igt@kms_writeback@writeback-fb-id:
- shard-adlp: NOTRUN -> [SKIP][78] ([Intel XE#1201] / [Intel XE#756])
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@kms_writeback@writeback-fb-id.html
* igt@xe_evict@evict-beng-large-multi-vm-cm:
- shard-dg2-set2: [PASS][79] -> [FAIL][80] ([Intel XE#1600])
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-434/igt@xe_evict@evict-beng-large-multi-vm-cm.html
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-466/igt@xe_evict@evict-beng-large-multi-vm-cm.html
* igt@xe_evict@evict-beng-mixed-threads-large:
- shard-adlp: NOTRUN -> [SKIP][81] ([Intel XE#1201] / [Intel XE#261]) +2 other tests skip
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@xe_evict@evict-beng-mixed-threads-large.html
- shard-dg2-set2: [PASS][82] -> [FAIL][83] ([Intel XE#1000])
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-463/igt@xe_evict@evict-beng-mixed-threads-large.html
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-466/igt@xe_evict@evict-beng-mixed-threads-large.html
- shard-lnl: NOTRUN -> [SKIP][84] ([Intel XE#688]) +1 other test skip
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@xe_evict@evict-beng-mixed-threads-large.html
* igt@xe_evict@evict-small-external-cm:
- shard-adlp: NOTRUN -> [SKIP][85] ([Intel XE#1201] / [Intel XE#261] / [Intel XE#688])
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@xe_evict@evict-small-external-cm.html
* igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-rebind:
- shard-adlp: NOTRUN -> [SKIP][86] ([Intel XE#1201] / [Intel XE#1392]) +1 other test skip
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-rebind.html
- shard-lnl: NOTRUN -> [SKIP][87] ([Intel XE#1392]) +1 other test skip
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-rebind.html
* igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-invalidate-race-imm:
- shard-adlp: NOTRUN -> [SKIP][88] ([Intel XE#1201] / [Intel XE#288]) +8 other tests skip
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-invalidate-race-imm.html
* igt@xe_live_ktest@xe_mocs:
- shard-lnl: [PASS][89] -> [SKIP][90] ([Intel XE#1192])
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-7/igt@xe_live_ktest@xe_mocs.html
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-6/igt@xe_live_ktest@xe_mocs.html
* igt@xe_oa@invalid-oa-format-id:
- shard-adlp: NOTRUN -> [SKIP][91] ([Intel XE#1201] / [Intel XE#2541]) +1 other test skip
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@xe_oa@invalid-oa-format-id.html
* igt@xe_pm@s2idle-exec-after:
- shard-lnl: [PASS][92] -> [FAIL][93] ([Intel XE#2028]) +1 other test fail
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-8/igt@xe_pm@s2idle-exec-after.html
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-1/igt@xe_pm@s2idle-exec-after.html
* igt@xe_pm@s4-vm-bind-prefetch:
- shard-adlp: [PASS][94] -> [ABORT][95] ([Intel XE#1607] / [Intel XE#1794])
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-adlp-2/igt@xe_pm@s4-vm-bind-prefetch.html
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-9/igt@xe_pm@s4-vm-bind-prefetch.html
* igt@xe_pm@s4-vm-bind-unbind-all:
- shard-adlp: [PASS][96] -> [ABORT][97] ([Intel XE#1794])
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-adlp-2/igt@xe_pm@s4-vm-bind-unbind-all.html
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-9/igt@xe_pm@s4-vm-bind-unbind-all.html
* igt@xe_pm@s4-vm-bind-userptr:
- shard-lnl: [PASS][98] -> [ABORT][99] ([Intel XE#1794])
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-3/igt@xe_pm@s4-vm-bind-userptr.html
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-2/igt@xe_pm@s4-vm-bind-userptr.html
* igt@xe_query@multigpu-query-oa-units:
- shard-adlp: NOTRUN -> [SKIP][100] ([Intel XE#1201] / [Intel XE#944]) +1 other test skip
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@xe_query@multigpu-query-oa-units.html
#### Possible fixes ####
* igt@kms_async_flips@alternate-sync-async-flip:
- shard-dg2-set2: [FAIL][101] ([Intel XE#827]) -> [PASS][102] +1 other test pass
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-435/igt@kms_async_flips@alternate-sync-async-flip.html
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-434/igt@kms_async_flips@alternate-sync-async-flip.html
* igt@kms_async_flips@alternate-sync-async-flip@pipe-d-dp-2:
- {shard-bmg}: [DMESG-WARN][103] ([Intel XE#1033]) -> [PASS][104]
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-bmg-2/igt@kms_async_flips@alternate-sync-async-flip@pipe-d-dp-2.html
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-bmg-8/igt@kms_async_flips@alternate-sync-async-flip@pipe-d-dp-2.html
* igt@kms_async_flips@async-flip-with-page-flip-events@pipe-d-hdmi-a-1-x:
- shard-adlp: [DMESG-WARN][105] ([Intel XE#1033]) -> [PASS][106]
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-adlp-6/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-d-hdmi-a-1-x.html
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-4/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-d-hdmi-a-1-x.html
* igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-edp-1:
- shard-lnl: [FAIL][107] ([Intel XE#1426]) -> [PASS][108] +1 other test pass
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-8/igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-edp-1.html
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-8/igt@kms_atomic_transition@plane-toggle-modeset-transition@pipe-a-edp-1.html
* igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip:
- shard-lnl: [FAIL][109] ([Intel XE#1659]) -> [PASS][110]
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-8/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip.html
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-1/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-xe2-ccs@pipe-a-hdmi-a-3:
- {shard-bmg}: [DMESG-WARN][111] ([Intel XE#877]) -> [PASS][112] +3 other tests pass
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-bmg-5/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-xe2-ccs@pipe-a-hdmi-a-3.html
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-bmg-1/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-xe2-ccs@pipe-a-hdmi-a-3.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-xe2-ccs@pipe-b-dp-2:
- {shard-bmg}: [FAIL][113] ([Intel XE#2436]) -> [PASS][114]
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-bmg-5/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-xe2-ccs@pipe-b-dp-2.html
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-bmg-1/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-xe2-ccs@pipe-b-dp-2.html
* igt@kms_fbcon_fbt@psr-suspend:
- shard-lnl: [FAIL][115] ([Intel XE#2028]) -> [PASS][116]
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-1/igt@kms_fbcon_fbt@psr-suspend.html
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-4/igt@kms_fbcon_fbt@psr-suspend.html
* igt@kms_plane@plane-position-covered:
- shard-lnl: [DMESG-WARN][117] ([Intel XE#324]) -> [PASS][118] +3 other tests pass
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-2/igt@kms_plane@plane-position-covered.html
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-1/igt@kms_plane@plane-position-covered.html
* igt@kms_plane_lowres@tiling-y@pipe-d-hdmi-a-1:
- shard-adlp: [FAIL][119] ([Intel XE#1874]) -> [PASS][120] +2 other tests pass
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-adlp-6/igt@kms_plane_lowres@tiling-y@pipe-d-hdmi-a-1.html
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-6/igt@kms_plane_lowres@tiling-y@pipe-d-hdmi-a-1.html
* igt@kms_universal_plane@cursor-fb-leak@pipe-a-hdmi-a-1:
- shard-adlp: [FAIL][121] ([Intel XE#899]) -> [PASS][122]
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-adlp-1/igt@kms_universal_plane@cursor-fb-leak@pipe-a-hdmi-a-1.html
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-4/igt@kms_universal_plane@cursor-fb-leak@pipe-a-hdmi-a-1.html
* igt@xe_evict@evict-threads-large:
- shard-dg2-set2: [INCOMPLETE][123] ([Intel XE#1195] / [Intel XE#1473]) -> [PASS][124]
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-463/igt@xe_evict@evict-threads-large.html
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-463/igt@xe_evict@evict-threads-large.html
* igt@xe_pm@s2idle-d3hot-basic-exec:
- shard-lnl: [INCOMPLETE][125] ([Intel XE#1358] / [Intel XE#1616]) -> [PASS][126]
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-5/igt@xe_pm@s2idle-d3hot-basic-exec.html
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-7/igt@xe_pm@s2idle-d3hot-basic-exec.html
* igt@xe_pm@s4-basic:
- shard-adlp: [ABORT][127] ([Intel XE#1358] / [Intel XE#1607]) -> [PASS][128] +1 other test pass
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-adlp-9/igt@xe_pm@s4-basic.html
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-1/igt@xe_pm@s4-basic.html
* igt@xe_pm_residency@toggle-gt-c6:
- shard-lnl: [FAIL][129] ([Intel XE#958]) -> [PASS][130]
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-lnl-8/igt@xe_pm_residency@toggle-gt-c6.html
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-lnl-6/igt@xe_pm_residency@toggle-gt-c6.html
#### Warnings ####
* igt@kms_async_flips@async-flip-with-page-flip-events@pipe-b-hdmi-a-6-4-mc-ccs:
- shard-dg2-set2: [SKIP][131] ([Intel XE#1201] / [Intel XE#801]) -> [SKIP][132] ([Intel XE#801]) +23 other tests skip
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-b-hdmi-a-6-4-mc-ccs.html
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-b-hdmi-a-6-4-mc-ccs.html
* igt@kms_big_fb@4-tiled-8bpp-rotate-270:
- shard-dg2-set2: [SKIP][133] ([Intel XE#1201] / [Intel XE#316]) -> [SKIP][134] ([Intel XE#316]) +5 other tests skip
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_big_fb@4-tiled-8bpp-rotate-270.html
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_big_fb@4-tiled-8bpp-rotate-270.html
* igt@kms_big_fb@x-tiled-64bpp-rotate-90:
- shard-dg2-set2: [SKIP][135] ([Intel XE#316]) -> [SKIP][136] ([Intel XE#1201] / [Intel XE#316])
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_big_fb@x-tiled-64bpp-rotate-90.html
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_big_fb@x-tiled-64bpp-rotate-90.html
* igt@kms_big_fb@y-tiled-addfb-size-overflow:
- shard-dg2-set2: [SKIP][137] ([Intel XE#610]) -> [SKIP][138] ([Intel XE#1201] / [Intel XE#610]) +1 other test skip
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_big_fb@y-tiled-addfb-size-overflow.html
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_big_fb@y-tiled-addfb-size-overflow.html
* igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip:
- shard-adlp: [DMESG-FAIL][139] ([Intel XE#324]) -> [FAIL][140] ([Intel XE#1231]) +3 other tests fail
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-adlp-1/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip.html
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-adlp-4/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip.html
* igt@kms_big_fb@yf-tiled-32bpp-rotate-180:
- shard-dg2-set2: [SKIP][141] ([Intel XE#1124] / [Intel XE#1201]) -> [SKIP][142] ([Intel XE#1124]) +10 other tests skip
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_big_fb@yf-tiled-32bpp-rotate-180.html
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_big_fb@yf-tiled-32bpp-rotate-180.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip:
- shard-dg2-set2: [SKIP][143] ([Intel XE#1124]) -> [SKIP][144] ([Intel XE#1124] / [Intel XE#1201]) +5 other tests skip
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html
* igt@kms_bw@connected-linear-tiling-1-displays-2560x1440p:
- shard-dg2-set2: [SKIP][145] ([Intel XE#367]) -> [SKIP][146] ([Intel XE#1201] / [Intel XE#367]) +1 other test skip
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_bw@connected-linear-tiling-1-displays-2560x1440p.html
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_bw@connected-linear-tiling-1-displays-2560x1440p.html
* igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p:
- shard-dg2-set2: [SKIP][147] ([Intel XE#1201] / [Intel XE#2191]) -> [SKIP][148] ([Intel XE#2191]) +1 other test skip
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p.html
[148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p.html
* igt@kms_bw@connected-linear-tiling-4-displays-3840x2160p:
- shard-dg2-set2: [SKIP][149] ([Intel XE#2191]) -> [SKIP][150] ([Intel XE#1201] / [Intel XE#2191])
[149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_bw@connected-linear-tiling-4-displays-3840x2160p.html
[150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_bw@connected-linear-tiling-4-displays-3840x2160p.html
* igt@kms_bw@linear-tiling-1-displays-1920x1080p:
- shard-dg2-set2: [SKIP][151] ([Intel XE#1201] / [Intel XE#367]) -> [SKIP][152] ([Intel XE#367]) +1 other test skip
[151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_bw@linear-tiling-1-displays-1920x1080p.html
[152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_bw@linear-tiling-1-displays-1920x1080p.html
* igt@kms_ccs@bad-aux-stride-y-tiled-gen12-rc-ccs-cc@pipe-d-dp-4:
- shard-dg2-set2: [SKIP][153] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#787]) -> [SKIP][154] ([Intel XE#455] / [Intel XE#787]) +15 other tests skip
[153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_ccs@bad-aux-stride-y-tiled-gen12-rc-ccs-cc@pipe-d-dp-4.html
[154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_ccs@bad-aux-stride-y-tiled-gen12-rc-ccs-cc@pipe-d-dp-4.html
* igt@kms_ccs@bad-pixel-format-yf-tiled-ccs:
- shard-dg2-set2: [SKIP][155] ([Intel XE#455] / [Intel XE#787]) -> [SKIP][156] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#787]) +19 other tests skip
[155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_ccs@bad-pixel-format-yf-tiled-ccs.html
[156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_ccs@bad-pixel-format-yf-tiled-ccs.html
* igt@kms_ccs@crc-primary-basic-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-6:
- shard-dg2-set2: [SKIP][157] ([Intel XE#787]) -> [SKIP][158] ([Intel XE#1201] / [Intel XE#787]) +69 other tests skip
[157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-6.html
[158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-6.html
* igt@kms_ccs@crc-primary-basic-4-tiled-xe2-ccs:
- shard-dg2-set2: [SKIP][159] ([Intel XE#1201] / [Intel XE#1252]) -> [SKIP][160] ([Intel XE#1252]) +2 other tests skip
[159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_ccs@crc-primary-basic-4-tiled-xe2-ccs.html
[160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_ccs@crc-primary-basic-4-tiled-xe2-ccs.html
* igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-mc-ccs@pipe-b-hdmi-a-6:
- shard-dg2-set2: [SKIP][161] ([Intel XE#1201] / [Intel XE#787]) -> [SKIP][162] ([Intel XE#787]) +55 other tests skip
[161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-mc-ccs@pipe-b-hdmi-a-6.html
[162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-mc-ccs@pipe-b-hdmi-a-6.html
* igt@kms_ccs@random-ccs-data-4-tiled-xe2-ccs:
- shard-dg2-set2: [SKIP][163] ([Intel XE#1252]) -> [SKIP][164] ([Intel XE#1201] / [Intel XE#1252])
[163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_ccs@random-ccs-data-4-tiled-xe2-ccs.html
[164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_ccs@random-ccs-data-4-tiled-xe2-ccs.html
* igt@kms_cdclk@mode-transition-all-outputs:
- shard-dg2-set2: [SKIP][165] ([Intel XE#1201] / [Intel XE#314]) -> [SKIP][166] ([Intel XE#314])
[165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_cdclk@mode-transition-all-outputs.html
[166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_cdclk@mode-transition-all-outputs.html
* igt@kms_chamelium_color@ctm-0-75:
- shard-dg2-set2: [SKIP][167] ([Intel XE#1201] / [Intel XE#306]) -> [SKIP][168] ([Intel XE#306]) +2 other tests skip
[167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_chamelium_color@ctm-0-75.html
[168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_chamelium_color@ctm-0-75.html
* igt@kms_chamelium_color@degamma:
- shard-dg2-set2: [SKIP][169] ([Intel XE#306]) -> [SKIP][170] ([Intel XE#1201] / [Intel XE#306]) +1 other test skip
[169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_chamelium_color@degamma.html
[170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_chamelium_color@degamma.html
* igt@kms_chamelium_frames@hdmi-crc-nonplanar-formats:
- shard-dg2-set2: [SKIP][171] ([Intel XE#1201] / [Intel XE#373]) -> [SKIP][172] ([Intel XE#373]) +9 other tests skip
[171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_chamelium_frames@hdmi-crc-nonplanar-formats.html
[172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_chamelium_frames@hdmi-crc-nonplanar-formats.html
* igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode:
- shard-dg2-set2: [SKIP][173] ([Intel XE#373]) -> [SKIP][174] ([Intel XE#1201] / [Intel XE#373]) +5 other tests skip
[173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode.html
[174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode.html
* igt@kms_content_protection@dp-mst-type-1:
- shard-dg2-set2: [SKIP][175] ([Intel XE#307]) -> [SKIP][176] ([Intel XE#1201] / [Intel XE#307])
[175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_content_protection@dp-mst-type-1.html
[176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_content_protection@dp-mst-type-1.html
* igt@kms_cursor_crc@cursor-offscreen-512x512:
- shard-dg2-set2: [SKIP][177] ([Intel XE#1201] / [Intel XE#308]) -> [SKIP][178] ([Intel XE#308]) +3 other tests skip
[177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_cursor_crc@cursor-offscreen-512x512.html
[178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_cursor_crc@cursor-offscreen-512x512.html
* igt@kms_cursor_crc@cursor-random-512x512:
- shard-dg2-set2: [SKIP][179] ([Intel XE#308]) -> [SKIP][180] ([Intel XE#1201] / [Intel XE#308])
[179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_cursor_crc@cursor-random-512x512.html
[180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_cursor_crc@cursor-random-512x512.html
* igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle:
- shard-dg2-set2: [SKIP][181] ([Intel XE#323]) -> [SKIP][182] ([Intel XE#1201] / [Intel XE#323])
[181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html
[182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html
* igt@kms_feature_discovery@dp-mst:
- shard-dg2-set2: [SKIP][183] ([Intel XE#1137]) -> [SKIP][184] ([Intel XE#1137] / [Intel XE#1201])
[183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_feature_discovery@dp-mst.html
[184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_feature_discovery@dp-mst.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling:
- shard-dg2-set2: [SKIP][185] ([Intel XE#1201] / [Intel XE#455]) -> [SKIP][186] ([Intel XE#455]) +17 other tests skip
[185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling.html
[186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling:
- shard-dg2-set2: [SKIP][187] ([Intel XE#455]) -> [SKIP][188] ([Intel XE#1201] / [Intel XE#455]) +10 other tests skip
[187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html
[188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html
* igt@kms_frontbuffer_tracking@drrs-indfb-scaledprimary:
- shard-dg2-set2: [SKIP][189] ([Intel XE#651]) -> [SKIP][190] ([Intel XE#1201] / [Intel XE#651]) +22 other tests skip
[189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_frontbuffer_tracking@drrs-indfb-scaledprimary.html
[190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_frontbuffer_tracking@drrs-indfb-scaledprimary.html
* igt@kms_frontbuffer_tracking@drrs-suspend:
- shard-dg2-set2: [SKIP][191] ([Intel XE#1201] / [Intel XE#651]) -> [SKIP][192] ([Intel XE#651]) +30 other tests skip
[191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_frontbuffer_tracking@drrs-suspend.html
[192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_frontbuffer_tracking@drrs-suspend.html
* igt@kms_frontbuffer_tracking@fbc-tiling-y:
- shard-dg2-set2: [SKIP][193] ([Intel XE#1201] / [Intel XE#658]) -> [SKIP][194] ([Intel XE#658])
[193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_frontbuffer_tracking@fbc-tiling-y.html
[194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_frontbuffer_tracking@fbc-tiling-y.html
* igt@kms_frontbuffer_tracking@fbcpsr-slowdraw:
- shard-dg2-set2: [SKIP][195] ([Intel XE#1201] / [Intel XE#653]) -> [SKIP][196] ([Intel XE#653]) +28 other tests skip
[195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_frontbuffer_tracking@fbcpsr-slowdraw.html
[196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_frontbuffer_tracking@fbcpsr-slowdraw.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt:
- shard-dg2-set2: [SKIP][197] ([Intel XE#653]) -> [SKIP][198] ([Intel XE#1201] / [Intel XE#653]) +20 other tests skip
[197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt.html
[198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt.html
* igt@kms_getfb@getfb-reject-ccs:
- shard-dg2-set2: [SKIP][199] ([Intel XE#1201] / [Intel XE#605]) -> [SKIP][200] ([Intel XE#605])
[199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_getfb@getfb-reject-ccs.html
[200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_getfb@getfb-reject-ccs.html
* igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format:
- shard-dg2-set2: [SKIP][201] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#498]) -> [SKIP][202] ([Intel XE#455] / [Intel XE#498]) +1 other test skip
[201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format.html
[202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format.html
* igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format@pipe-c-hdmi-a-6:
- shard-dg2-set2: [SKIP][203] ([Intel XE#1201] / [Intel XE#498]) -> [SKIP][204] ([Intel XE#498]) +2 other tests skip
[203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format@pipe-c-hdmi-a-6.html
[204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-pixel-format@pipe-c-hdmi-a-6.html
* igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation:
- shard-dg2-set2: [SKIP][205] ([Intel XE#455] / [Intel XE#498]) -> [SKIP][206] ([Intel XE#1201] / [Intel XE#455] / [Intel XE#498]) +1 other test skip
[205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation.html
[206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation.html
* igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation@pipe-b-hdmi-a-6:
- shard-dg2-set2: [SKIP][207] ([Intel XE#498]) -> [SKIP][208] ([Intel XE#1201] / [Intel XE#498]) +2 other tests skip
[207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation@pipe-b-hdmi-a-6.html
[208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_plane_scaling@plane-downscale-factor-0-25-with-rotation@pipe-b-hdmi-a-6.html
* igt@kms_plane_scaling@planes-downscale-factor-0-25-unity-scaling:
- shard-dg2-set2: [SKIP][209] ([Intel XE#1201] / [Intel XE#2318] / [Intel XE#455]) -> [SKIP][210] ([Intel XE#2318] / [Intel XE#455]) +5 other tests skip
[209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_plane_scaling@planes-downscale-factor-0-25-unity-scaling.html
[210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_plane_scaling@planes-downscale-factor-0-25-unity-scaling.html
* igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-25@pipe-c-hdmi-a-6:
- shard-dg2-set2: [SKIP][211] ([Intel XE#1201] / [Intel XE#2318]) -> [SKIP][212] ([Intel XE#2318]) +8 other tests skip
[211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-25@pipe-c-hdmi-a-6.html
[212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-25@pipe-c-hdmi-a-6.html
* igt@kms_pm_backlight@fade-with-suspend:
- shard-dg2-set2: [SKIP][213] ([Intel XE#1201] / [Intel XE#870]) -> [SKIP][214] ([Intel XE#870]) +1 other test skip
[213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_pm_backlight@fade-with-suspend.html
[214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_pm_backlight@fade-with-suspend.html
* igt@kms_pm_dc@deep-pkgc:
- shard-dg2-set2: [SKIP][215] ([Intel XE#908]) -> [SKIP][216] ([Intel XE#1201] / [Intel XE#908])
[215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_pm_dc@deep-pkgc.html
[216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_pm_dc@deep-pkgc.html
* igt@kms_psr2_sf@fbc-cursor-plane-move-continuous-exceed-fully-sf:
- shard-dg2-set2: [SKIP][217] ([Intel XE#1201] / [Intel XE#1489]) -> [SKIP][218] ([Intel XE#1489]) +4 other tests skip
[217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_psr2_sf@fbc-cursor-plane-move-continuous-exceed-fully-sf.html
[218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_psr2_sf@fbc-cursor-plane-move-continuous-exceed-fully-sf.html
* igt@kms_psr2_sf@overlay-plane-move-continuous-exceed-sf:
- shard-dg2-set2: [SKIP][219] ([Intel XE#1489]) -> [SKIP][220] ([Intel XE#1201] / [Intel XE#1489]) +1 other test skip
[219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_psr2_sf@overlay-plane-move-continuous-exceed-sf.html
[220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_psr2_sf@overlay-plane-move-continuous-exceed-sf.html
* igt@kms_psr2_su@frontbuffer-xrgb8888:
- shard-dg2-set2: [SKIP][221] ([Intel XE#1122] / [Intel XE#1201]) -> [SKIP][222] ([Intel XE#1122])
[221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_psr2_su@frontbuffer-xrgb8888.html
[222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_psr2_su@frontbuffer-xrgb8888.html
* igt@kms_psr@fbc-psr2-sprite-plane-move:
- shard-dg2-set2: [SKIP][223] ([Intel XE#1201] / [Intel XE#929]) -> [SKIP][224] ([Intel XE#929]) +13 other tests skip
[223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_psr@fbc-psr2-sprite-plane-move.html
[224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_psr@fbc-psr2-sprite-plane-move.html
* igt@kms_psr@psr2-primary-render:
- shard-dg2-set2: [SKIP][225] ([Intel XE#929]) -> [SKIP][226] ([Intel XE#1201] / [Intel XE#929]) +10 other tests skip
[225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_psr@psr2-primary-render.html
[226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_psr@psr2-primary-render.html
* igt@kms_rotation_crc@bad-tiling:
- shard-dg2-set2: [SKIP][227] ([Intel XE#1201] / [Intel XE#327]) -> [SKIP][228] ([Intel XE#327]) +3 other tests skip
[227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_rotation_crc@bad-tiling.html
[228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_rotation_crc@bad-tiling.html
* igt@kms_rotation_crc@primary-y-tiled-reflect-x-0:
- shard-dg2-set2: [SKIP][229] ([Intel XE#1127] / [Intel XE#1201]) -> [SKIP][230] ([Intel XE#1127]) +1 other test skip
[229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html
[230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html
* igt@kms_rotation_crc@sprite-rotation-270:
- shard-dg2-set2: [SKIP][231] ([Intel XE#327]) -> [SKIP][232] ([Intel XE#1201] / [Intel XE#327]) +1 other test skip
[231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_rotation_crc@sprite-rotation-270.html
[232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_rotation_crc@sprite-rotation-270.html
* igt@kms_tiled_display@basic-test-pattern-with-chamelium:
- shard-dg2-set2: [SKIP][233] ([Intel XE#1201] / [Intel XE#1500]) -> [SKIP][234] ([Intel XE#1201] / [Intel XE#362])
[233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-434/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
[234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-463/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
* igt@kms_tv_load_detect@load-detect:
- shard-dg2-set2: [SKIP][235] ([Intel XE#330]) -> [SKIP][236] ([Intel XE#1201] / [Intel XE#330])
[235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_tv_load_detect@load-detect.html
[236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_tv_load_detect@load-detect.html
* igt@kms_vrr@cmrr:
- shard-dg2-set2: [SKIP][237] ([Intel XE#1201] / [Intel XE#2168]) -> [SKIP][238] ([Intel XE#2168])
[237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@kms_vrr@cmrr.html
[238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@kms_vrr@cmrr.html
* igt@kms_writeback@writeback-pixel-formats:
- shard-dg2-set2: [SKIP][239] ([Intel XE#756]) -> [SKIP][240] ([Intel XE#1201] / [Intel XE#756]) +1 other test skip
[239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@kms_writeback@writeback-pixel-formats.html
[240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@kms_writeback@writeback-pixel-formats.html
* igt@sriov_basic@enable-vfs-bind-unbind-each-numvfs-all:
- shard-dg2-set2: [SKIP][241] ([Intel XE#1091] / [Intel XE#1201]) -> [SKIP][242] ([Intel XE#1091])
[241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@sriov_basic@enable-vfs-bind-unbind-each-numvfs-all.html
[242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@sriov_basic@enable-vfs-bind-unbind-each-numvfs-all.html
* igt@xe_compute_preempt@compute-preempt-many:
- shard-dg2-set2: [SKIP][243] ([Intel XE#1201] / [Intel XE#1280] / [Intel XE#455]) -> [SKIP][244] ([Intel XE#1280] / [Intel XE#455]) +1 other test skip
[243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@xe_compute_preempt@compute-preempt-many.html
[244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@xe_compute_preempt@compute-preempt-many.html
* igt@xe_copy_basic@mem-copy-linear-0x369:
- shard-dg2-set2: [SKIP][245] ([Intel XE#1123]) -> [SKIP][246] ([Intel XE#1123] / [Intel XE#1201])
[245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@xe_copy_basic@mem-copy-linear-0x369.html
[246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@xe_copy_basic@mem-copy-linear-0x369.html
* igt@xe_copy_basic@mem-copy-linear-0xfffe:
- shard-dg2-set2: [SKIP][247] ([Intel XE#1123] / [Intel XE#1201]) -> [SKIP][248] ([Intel XE#1123])
[247]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@xe_copy_basic@mem-copy-linear-0xfffe.html
[248]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@xe_copy_basic@mem-copy-linear-0xfffe.html
* igt@xe_copy_basic@mem-set-linear-0x3fff:
- shard-dg2-set2: [SKIP][249] ([Intel XE#1126] / [Intel XE#1201]) -> [SKIP][250] ([Intel XE#1126])
[249]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@xe_copy_basic@mem-set-linear-0x3fff.html
[250]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@xe_copy_basic@mem-set-linear-0x3fff.html
* igt@xe_create@multigpu-create-massive-size:
- shard-dg2-set2: [SKIP][251] ([Intel XE#944]) -> [SKIP][252] ([Intel XE#1201] / [Intel XE#944]) +1 other test skip
[251]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@xe_create@multigpu-create-massive-size.html
[252]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@xe_create@multigpu-create-massive-size.html
* igt@xe_evict@evict-mixed-many-threads-small:
- shard-dg2-set2: [INCOMPLETE][253] ([Intel XE#1195] / [Intel XE#1473]) -> [INCOMPLETE][254] ([Intel XE#1473])
[253]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@xe_evict@evict-mixed-many-threads-small.html
[254]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@xe_evict@evict-mixed-many-threads-small.html
* igt@xe_evict@evict-mixed-threads-large:
- shard-dg2-set2: [INCOMPLETE][255] ([Intel XE#1473]) -> [INCOMPLETE][256] ([Intel XE#1195] / [Intel XE#1473])
[255]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@xe_evict@evict-mixed-threads-large.html
[256]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@xe_evict@evict-mixed-threads-large.html
* igt@xe_exec_fault_mode@many-basic:
- shard-dg2-set2: [SKIP][257] ([Intel XE#288]) -> [SKIP][258] ([Intel XE#1201] / [Intel XE#288]) +17 other tests skip
[257]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@xe_exec_fault_mode@many-basic.html
[258]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@xe_exec_fault_mode@many-basic.html
* igt@xe_exec_fault_mode@once-invalid-userptr-fault:
- shard-dg2-set2: [SKIP][259] ([Intel XE#1201] / [Intel XE#288]) -> [SKIP][260] ([Intel XE#288]) +23 other tests skip
[259]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@xe_exec_fault_mode@once-invalid-userptr-fault.html
[260]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@xe_exec_fault_mode@once-invalid-userptr-fault.html
* igt@xe_exec_mix_modes@exec-simple-batch-store-dma-fence:
- shard-dg2-set2: [SKIP][261] ([Intel XE#1201] / [Intel XE#2360]) -> [SKIP][262] ([Intel XE#2360])
[261]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@xe_exec_mix_modes@exec-simple-batch-store-dma-fence.html
[262]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@xe_exec_mix_modes@exec-simple-batch-store-dma-fence.html
* igt@xe_exec_mix_modes@exec-spinner-interrupted-dma-fence:
- shard-dg2-set2: [SKIP][263] ([Intel XE#2360]) -> [SKIP][264] ([Intel XE#1201] / [Intel XE#2360])
[263]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@xe_exec_mix_modes@exec-spinner-interrupted-dma-fence.html
[264]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@xe_exec_mix_modes@exec-spinner-interrupted-dma-fence.html
* igt@xe_huc_copy@huc_copy:
- shard-dg2-set2: [SKIP][265] ([Intel XE#255]) -> [SKIP][266] ([Intel XE#1201] / [Intel XE#255])
[265]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@xe_huc_copy@huc_copy.html
[266]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@xe_huc_copy@huc_copy.html
* igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit:
- shard-dg2-set2: [SKIP][267] ([Intel XE#2229]) -> [SKIP][268] ([Intel XE#1201] / [Intel XE#2229])
[267]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit.html
[268]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit.html
* igt@xe_mmap@small-bar:
- shard-dg2-set2: [SKIP][269] ([Intel XE#1201] / [Intel XE#512]) -> [SKIP][270] ([Intel XE#512])
[269]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@xe_mmap@small-bar.html
[270]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@xe_mmap@small-bar.html
* igt@xe_module_load@load:
- shard-dg2-set2: [SKIP][271] ([Intel XE#378]) -> [SKIP][272] ([Intel XE#1201] / [Intel XE#378])
[271]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@xe_module_load@load.html
[272]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@xe_module_load@load.html
* igt@xe_oa@closed-fd-and-unmapped-access:
- shard-dg2-set2: [SKIP][273] ([Intel XE#1201] / [Intel XE#2541]) -> [SKIP][274] ([Intel XE#2541]) +6 other tests skip
[273]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@xe_oa@closed-fd-and-unmapped-access.html
[274]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@xe_oa@closed-fd-and-unmapped-access.html
* igt@xe_oa@whitelisted-registers-userspace-config:
- shard-dg2-set2: [SKIP][275] ([Intel XE#2541]) -> [SKIP][276] ([Intel XE#1201] / [Intel XE#2541]) +3 other tests skip
[275]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@xe_oa@whitelisted-registers-userspace-config.html
[276]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@xe_oa@whitelisted-registers-userspace-config.html
* igt@xe_pat@pat-index-xehpc:
- shard-dg2-set2: [SKIP][277] ([Intel XE#979]) -> [SKIP][278] ([Intel XE#1201] / [Intel XE#979])
[277]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@xe_pat@pat-index-xehpc.html
[278]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@xe_pat@pat-index-xehpc.html
* igt@xe_pat@pat-index-xelpg:
- shard-dg2-set2: [SKIP][279] ([Intel XE#1201] / [Intel XE#979]) -> [SKIP][280] ([Intel XE#979])
[279]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@xe_pat@pat-index-xelpg.html
[280]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@xe_pat@pat-index-xelpg.html
* igt@xe_pm@d3cold-mmap-system:
- shard-dg2-set2: [SKIP][281] ([Intel XE#2284] / [Intel XE#366]) -> [SKIP][282] ([Intel XE#1201] / [Intel XE#2284] / [Intel XE#366]) +1 other test skip
[281]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-432/igt@xe_pm@d3cold-mmap-system.html
[282]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-435/igt@xe_pm@d3cold-mmap-system.html
* igt@xe_pm@s2idle-d3cold-basic-exec:
- shard-dg2-set2: [SKIP][283] ([Intel XE#1201] / [Intel XE#2284] / [Intel XE#366]) -> [SKIP][284] ([Intel XE#2284] / [Intel XE#366])
[283]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@xe_pm@s2idle-d3cold-basic-exec.html
[284]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@xe_pm@s2idle-d3cold-basic-exec.html
* igt@xe_query@multigpu-query-gt-list:
- shard-dg2-set2: [SKIP][285] ([Intel XE#1201] / [Intel XE#944]) -> [SKIP][286] ([Intel XE#944])
[285]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-1785-479cde039b423852ee120f3832ca74ea45b646fd/shard-dg2-433/igt@xe_query@multigpu-query-gt-list.html
[286]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/shard-dg2-432/igt@xe_query@multigpu-query-gt-list.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[Intel XE#1000]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1000
[Intel XE#1033]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1033
[Intel XE#1035]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1035
[Intel XE#1091]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1091
[Intel XE#1122]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1122
[Intel XE#1123]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1123
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1126]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1126
[Intel XE#1127]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1127
[Intel XE#1137]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1137
[Intel XE#1152]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1152
[Intel XE#1192]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1192
[Intel XE#1195]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1195
[Intel XE#1201]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1201
[Intel XE#1231]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1231
[Intel XE#1252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1252
[Intel XE#1280]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1280
[Intel XE#1358]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1358
[Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
[Intel XE#1399]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1399
[Intel XE#1401]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1401
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
[Intel XE#1426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1426
[Intel XE#1470]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1470
[Intel XE#1473]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1473
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1500]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1500
[Intel XE#1600]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1600
[Intel XE#1607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1607
[Intel XE#1616]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1616
[Intel XE#1659]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1659
[Intel XE#1705]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1705
[Intel XE#1725]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1725
[Intel XE#1745]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1745
[Intel XE#1794]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1794
[Intel XE#1874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1874
[Intel XE#2028]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2028
[Intel XE#2042]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2042
[Intel XE#2058]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2058
[Intel XE#2078]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2078
[Intel XE#2168]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2168
[Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
[Intel XE#2229]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2229
[Intel XE#2251]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2251
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2318]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2318
[Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
[Intel XE#2360]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2360
[Intel XE#2436]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2436
[Intel XE#2472]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2472
[Intel XE#2541]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2541
[Intel XE#255]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/255
[Intel XE#261]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/261
[Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
[Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
[Intel XE#307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/307
[Intel XE#308]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/308
[Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309
[Intel XE#310]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/310
[Intel XE#314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/314
[Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
[Intel XE#323]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/323
[Intel XE#324]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/324
[Intel XE#327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/327
[Intel XE#330]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/330
[Intel XE#361]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/361
[Intel XE#362]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/362
[Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
[Intel XE#378]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/378
[Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
[Intel XE#498]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/498
[Intel XE#512]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/512
[Intel XE#599]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/599
[Intel XE#605]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/605
[Intel XE#607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/607
[Intel XE#610]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/610
[Intel XE#619]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/619
[Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
[Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
[Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
[Intel XE#658]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/658
[Intel XE#660]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/660
[Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
[Intel XE#701]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/701
[Intel XE#736]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/736
[Intel XE#756]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/756
[Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
[Intel XE#801]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/801
[Intel XE#827]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/827
[Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
[Intel XE#877]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/877
[Intel XE#899]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/899
[Intel XE#908]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/908
[Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
[Intel XE#958]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/958
[Intel XE#979]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/979
Build changes
-------------
* Linux: xe-1785-479cde039b423852ee120f3832ca74ea45b646fd -> xe-pw-136052v2
IGT_7973: 9c3a20d0403a2fe80bde618de5c2ef83b7e08d50 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-1785-479cde039b423852ee120f3832ca74ea45b646fd: 479cde039b423852ee120f3832ca74ea45b646fd
xe-pw-136052v2: 136052v2
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-136052v2/index.html
[-- Attachment #2: Type: text/html, Size: 100760 bytes --]
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 02/12] drm/xe/pxp: Allocate PXP execution resources
2024-08-16 19:00 ` [PATCH v2 02/12] drm/xe/pxp: Allocate PXP execution resources Daniele Ceraolo Spurio
@ 2024-08-19 9:19 ` Jani Nikula
2024-10-04 20:30 ` John Harrison
1 sibling, 0 replies; 54+ messages in thread
From: Jani Nikula @ 2024-08-19 9:19 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
Cc: Daniele Ceraolo Spurio, Matthew Brost, Thomas Hellström,
Lucas De Marchi
On Fri, 16 Aug 2024, Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> wrote:
> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
> new file mode 100644
> index 000000000000..b777b0765c8a
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
> @@ -0,0 +1,201 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright(c) 2024 Intel Corporation.
> + */
> +
> +#include "xe_pxp_submit.h"
> +
> +#include <drm/xe_drm.h>
include/drm/xe_drm.h does not exist... but include/uapi/drm/xe_drm.h
does.
This seems to be prevalent in xe. Why does xe not use #include
<uapi/drm/xe_drm.h>?
BR,
Jani.
--
Jani Nikula, Intel
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 00/12] Add PXP HWDRM support
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
` (19 preceding siblings ...)
2024-08-17 4:53 ` ✗ CI.FULL: " Patchwork
@ 2024-08-19 14:33 ` Souza, Jose
20 siblings, 0 replies; 54+ messages in thread
From: Souza, Jose @ 2024-08-19 14:33 UTC (permalink / raw)
To: intel-xe@lists.freedesktop.org, Ceraolo Spurio, Daniele
Cc: Brost, Matthew, Harrison, John C, Teres Alexis, Alan Previn,
thomas.hellstrom@linux.intel.com
On Fri, 2024-08-16 at 12:00 -0700, Daniele Ceraolo Spurio wrote:
> PXP (Protected Xe Path) allows execution and flip to display of protected
> (i.e. encrypted) objects. The HW supports multiple types of PXP, but
> this series only introduces support for PXP HWDRM, which is mainly
> targeted at encrypting data that is going to be displayed.
>
> Even though we only plan to support 1 type of PXP for now, the interface
> has been designed to allow support for other PXP types to be added at a
> later point in time.
>
> A user is expected to mark both BO and exec_queues as using PXP and the
> driver will make sure that PXP is running, that the encryption is
> valid and that no execution happens with an outdated encryption.
>
> v2: code cleaned up and fixed while coming out of RFC, addressed review
> feedback in regards to the interface.
uAPI is Acked-by: José Roberto de Souza <jose.souza@intel.com>
Here is the Mesa side: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30723
>
> Cc: José Roberto de Souza <jose.souza@intel.com>
> Cc: Alan Previn <alan.previn.teres.alexis@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: John Harrison <John.C.Harrison@Intel.com>
>
> Daniele Ceraolo Spurio (12):
> drm/xe/pxp: Initialize PXP structure and KCR reg
> drm/xe/pxp: Allocate PXP execution resources
> drm/xe/pxp: Add VCS inline termination support
> drm/xe/pxp: Add GSC session invalidation support
> drm/xe/pxp: Handle the PXP termination interrupt
> drm/xe/pxp: Add GSC session initialization support
> drm/xe/pxp: Add spport for PXP-using queues
> drm/xe/pxp: add a query for PXP status
> drm/xe/pxp: Add API to mark a BO as using PXP
> drm/xe/pxp: add PXP PM support
> drm/xe/pxp: Add PXP debugfs support
> drm/xe/pxp: Enable PXP for MTL and LNL
>
> drivers/gpu/drm/xe/Makefile | 3 +
> drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h | 40 +
> .../xe/compat-i915-headers/pxp/intel_pxp.h | 14 +-
> .../gpu/drm/xe/instructions/xe_instr_defs.h | 1 +
> .../gpu/drm/xe/instructions/xe_mfx_commands.h | 29 +
> .../gpu/drm/xe/instructions/xe_mi_commands.h | 5 +
> drivers/gpu/drm/xe/regs/xe_engine_regs.h | 1 +
> drivers/gpu/drm/xe/regs/xe_gt_regs.h | 8 +
> drivers/gpu/drm/xe/regs/xe_pxp_regs.h | 23 +
> drivers/gpu/drm/xe/xe_bo.c | 100 ++-
> drivers/gpu/drm/xe/xe_bo.h | 5 +
> drivers/gpu/drm/xe/xe_bo_types.h | 3 +
> drivers/gpu/drm/xe/xe_debugfs.c | 3 +
> drivers/gpu/drm/xe/xe_device.c | 6 +
> drivers/gpu/drm/xe/xe_device_types.h | 8 +-
> drivers/gpu/drm/xe/xe_exec.c | 6 +
> drivers/gpu/drm/xe/xe_exec_queue.c | 61 +-
> drivers/gpu/drm/xe/xe_exec_queue.h | 5 +
> drivers/gpu/drm/xe/xe_exec_queue_types.h | 8 +
> drivers/gpu/drm/xe/xe_hw_engine.c | 2 +-
> drivers/gpu/drm/xe/xe_irq.c | 20 +-
> drivers/gpu/drm/xe/xe_lrc.c | 16 +-
> drivers/gpu/drm/xe/xe_lrc.h | 7 +-
> drivers/gpu/drm/xe/xe_pci.c | 4 +
> drivers/gpu/drm/xe/xe_pm.c | 42 +-
> drivers/gpu/drm/xe/xe_pxp.c | 738 ++++++++++++++++++
> drivers/gpu/drm/xe/xe_pxp.h | 33 +
> drivers/gpu/drm/xe/xe_pxp_debugfs.c | 120 +++
> drivers/gpu/drm/xe/xe_pxp_debugfs.h | 13 +
> drivers/gpu/drm/xe/xe_pxp_submit.c | 572 ++++++++++++++
> drivers/gpu/drm/xe/xe_pxp_submit.h | 22 +
> drivers/gpu/drm/xe/xe_pxp_types.h | 123 +++
> drivers/gpu/drm/xe/xe_query.c | 32 +
> drivers/gpu/drm/xe/xe_ring_ops.c | 4 +-
> drivers/gpu/drm/xe/xe_vm.c | 170 +++-
> drivers/gpu/drm/xe/xe_vm.h | 8 +
> drivers/gpu/drm/xe/xe_vm_types.h | 1 +
> include/uapi/drm/xe_drm.h | 94 ++-
> 38 files changed, 2307 insertions(+), 43 deletions(-)
> create mode 100644 drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
> create mode 100644 drivers/gpu/drm/xe/regs/xe_pxp_regs.h
> create mode 100644 drivers/gpu/drm/xe/xe_pxp.c
> create mode 100644 drivers/gpu/drm/xe/xe_pxp.h
> create mode 100644 drivers/gpu/drm/xe/xe_pxp_debugfs.c
> create mode 100644 drivers/gpu/drm/xe/xe_pxp_debugfs.h
> create mode 100644 drivers/gpu/drm/xe/xe_pxp_submit.c
> create mode 100644 drivers/gpu/drm/xe/xe_pxp_submit.h
> create mode 100644 drivers/gpu/drm/xe/xe_pxp_types.h
>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 10/12] drm/xe/pxp: add PXP PM support
2024-08-16 19:00 ` [PATCH v2 10/12] drm/xe/pxp: add PXP PM support Daniele Ceraolo Spurio
@ 2024-08-26 21:55 ` Daniele Ceraolo Spurio
2024-10-09 1:12 ` John Harrison
1 sibling, 0 replies; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-08-26 21:55 UTC (permalink / raw)
To: intel-xe@lists.freedesktop.org, John Harrison
On 8/16/2024 12:00 PM, Daniele Ceraolo Spurio wrote:
> The HW suspend flow kills all PXP HWDRM sessions, so if there was any
> PXP activity before the suspend we need to trigger a full termination on
> suspend.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> ---
> drivers/gpu/drm/xe/xe_pm.c | 42 +++++++++++---
> drivers/gpu/drm/xe/xe_pxp.c | 92 ++++++++++++++++++++++++++++++-
> drivers/gpu/drm/xe/xe_pxp.h | 3 +
> drivers/gpu/drm/xe/xe_pxp_types.h | 9 ++-
> 4 files changed, 134 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
> index 9f3c14fd9f33..1e1f87ec03a2 100644
> --- a/drivers/gpu/drm/xe/xe_pm.c
> +++ b/drivers/gpu/drm/xe/xe_pm.c
> @@ -20,6 +20,7 @@
> #include "xe_guc.h"
> #include "xe_irq.h"
> #include "xe_pcode.h"
> +#include "xe_pxp.h"
> #include "xe_trace.h"
> #include "xe_wa.h"
>
> @@ -90,22 +91,24 @@ int xe_pm_suspend(struct xe_device *xe)
> drm_dbg(&xe->drm, "Suspending device\n");
> trace_xe_pm_suspend(xe, __builtin_return_address(0));
>
> + err = xe_pxp_pm_suspend(xe->pxp);
> + if (err)
> + goto err;
> +
> for_each_gt(gt, xe, id)
> xe_gt_suspend_prepare(gt);
>
> /* FIXME: Super racey... */
> err = xe_bo_evict_all(xe);
> if (err)
> - goto err;
> + goto err_pxp;
>
> xe_display_pm_suspend(xe, false);
>
> for_each_gt(gt, xe, id) {
> err = xe_gt_suspend(gt);
> - if (err) {
> - xe_display_pm_resume(xe, false);
> - goto err;
> - }
> + if (err)
> + goto err_display;
> }
>
> xe_irq_suspend(xe);
> @@ -114,6 +117,11 @@ int xe_pm_suspend(struct xe_device *xe)
>
> drm_dbg(&xe->drm, "Device suspended\n");
> return 0;
> +
> +err_display:
> + xe_display_pm_resume(xe, false);
> +err_pxp:
> + xe_pxp_pm_resume(xe->pxp);
> err:
> drm_dbg(&xe->drm, "Device suspend failed %d\n", err);
> return err;
> @@ -163,6 +171,8 @@ int xe_pm_resume(struct xe_device *xe)
> if (err)
> goto err;
>
> + xe_pxp_pm_resume(xe->pxp);
> +
> drm_dbg(&xe->drm, "Device resumed\n");
> return 0;
> err:
> @@ -356,6 +366,10 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
> */
> lock_map_acquire(&xe_pm_runtime_lockdep_map);
>
> + err = xe_pxp_pm_suspend(xe->pxp);
> + if (err)
> + goto out;
> +
> /*
> * Applying lock for entire list op as xe_ttm_bo_destroy and xe_bo_move_notify
> * also checks and delets bo entry from user fault list.
> @@ -369,23 +383,30 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
> if (xe->d3cold.allowed) {
> err = xe_bo_evict_all(xe);
> if (err)
> - goto out;
> + goto out_pxp;
> xe_display_pm_suspend(xe, true);
> }
>
> for_each_gt(gt, xe, id) {
> err = xe_gt_suspend(gt);
> if (err)
> - goto out;
> + goto out_display;
> }
>
> xe_irq_suspend(xe);
>
> if (xe->d3cold.allowed)
> xe_display_pm_suspend_late(xe);
> +
> + lock_map_release(&xe_pm_runtime_lockdep_map);
> + xe_pm_write_callback_task(xe, NULL);
> + return 0;
> +
> +out_display:
> + xe_display_pm_resume(xe, true);
> +out_pxp:
> + xe_pxp_pm_resume(xe->pxp);
> out:
> - if (err)
> - xe_display_pm_resume(xe, true);
> lock_map_release(&xe_pm_runtime_lockdep_map);
> xe_pm_write_callback_task(xe, NULL);
> return err;
> @@ -436,6 +457,9 @@ int xe_pm_runtime_resume(struct xe_device *xe)
> if (err)
> goto out;
> }
> +
> + xe_pxp_pm_resume(xe->pxp);
> +
> out:
> lock_map_release(&xe_pm_runtime_lockdep_map);
> xe_pm_write_callback_task(xe, NULL);
> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
> index 640e62d1d5d7..78373cbbe0d4 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.c
> +++ b/drivers/gpu/drm/xe/xe_pxp.c
> @@ -137,6 +137,13 @@ static void pxp_terminate(struct xe_pxp *pxp)
> if (pxp->status == XE_PXP_ACTIVE)
> pxp->key_instance++;
>
> + /*
> + * we'll mark the status as needing termination on resume, so no need to
> + * emit a termination now.
> + */
> + if (pxp->status == XE_PXP_SUSPENDED)
> + return;
> +
> /*
> * If we have a termination already in progress, we need to wait for
> * it to complete before queueing another one. We update the state
> @@ -181,17 +188,19 @@ static void pxp_terminate(struct xe_pxp *pxp)
> static void pxp_terminate_complete(struct xe_pxp *pxp)
> {
> /*
> - * We expect PXP to be in one of 2 states when we get here:
> + * We expect PXP to be in one of 3 states when we get here:
> * - XE_PXP_TERMINATION_IN_PROGRESS: a single termination event was
> * requested and it is now completing, so we're ready to start.
> * - XE_PXP_NEEDS_TERMINATION: a second termination was requested while
> * the first one was still being processed; we don't update the state
> * in this case so the pxp_start code will automatically issue that
> * second termination.
> + * - XE_PXP_SUSPENDED: PXP is now suspended, so we defer everything to
> + * when we come back on resume.
> */
> if (pxp->status == XE_PXP_TERMINATION_IN_PROGRESS)
> pxp->status = XE_PXP_READY_TO_START;
> - else if (pxp->status != XE_PXP_NEEDS_TERMINATION)
> + else if (pxp->status != XE_PXP_NEEDS_TERMINATION && pxp->status != XE_PXP_SUSPENDED)
> drm_err(&pxp->xe->drm,
> "PXP termination complete while status was %u\n",
> pxp->status);
> @@ -505,6 +514,7 @@ int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q)
> pxp_terminate(pxp);
> mutex_unlock(&pxp->mutex);
> goto wait_for_termination;
> + case XE_PXP_SUSPENDED:
> default:
> drm_err(&pxp->xe->drm, "unexpected state during PXP start: %u", pxp->status);
> ret = -EIO;
> @@ -648,3 +658,81 @@ int xe_pxp_key_check(struct xe_pxp *pxp, struct xe_bo *bo)
> return 0;
> }
>
> +int xe_pxp_pm_suspend(struct xe_pxp *pxp)
> +{
> + int ret = 0;
> +
> + if (!xe_pxp_is_enabled(pxp))
> + return 0;
> +
> + mutex_lock(&pxp->mutex);
> +
> + /* if the termination is already in progress, no need to re-emit it */
> + if (!completion_done(&pxp->termination))
> + goto mark_suspended;
> +
> + switch (pxp->status) {
> + case XE_PXP_ERROR:
> + case XE_PXP_READY_TO_START:
> + case XE_PXP_SUSPENDED:
> + /* nothing to cleanup */
> + break;
> + case XE_PXP_NEEDS_TERMINATION:
> + /* If PXP was never used we can skip the cleanup */
> + if (pxp->key_instance == pxp->last_suspend_key_instance)
> + break;
> + fallthrough;
> + case XE_PXP_ACTIVE:
> + pxp_terminate(pxp);
We don't actually need to do a full termination on runtime suspend, so
in that scenario we can switch this to simply mark the queues as
invalid and then do the termination on resume. This also fixes a bug in
this patch where the submission code tries to take a noresume pm ref
from within the rpm suspend path.
Daniele
> + break;
> + default:
> + drm_err(&pxp->xe->drm, "unexpected state during PXP suspend: %u",
> + pxp->status);
> + ret = -EIO;
> + goto out;
> + }
> +
> +mark_suspended:
> + /*
> + * We set this even if we were in error state, hoping the suspend clears
> + * the error. Worse case we fail again and go in error state again.
> + */
> + pxp->status = XE_PXP_SUSPENDED;
> +
> + mutex_unlock(&pxp->mutex);
> +
> + /*
> + * if there is a termination in progress, wait for it.
> + * We need to wait outside the lock because the completion is done from
> + * within the lock
> + */
> + if (!wait_for_completion_timeout(&pxp->termination,
> + msecs_to_jiffies(PXP_TERMINATION_TIMEOUT_MS)))
> + ret = -ETIMEDOUT;
> +
> + pxp->last_suspend_key_instance = pxp->key_instance;
> +
> +out:
> + return ret;
> +}
> +
> +void xe_pxp_pm_resume(struct xe_pxp *pxp)
> +{
> + int err;
> +
> + if (!xe_pxp_is_enabled(pxp))
> + return;
> +
> + err = kcr_pxp_enable(pxp);
> +
> + mutex_lock(&pxp->mutex);
> +
> + xe_assert(pxp->xe, pxp->status == XE_PXP_SUSPENDED);
> +
> + if (err)
> + pxp->status = XE_PXP_ERROR;
> + else
> + pxp->status = XE_PXP_NEEDS_TERMINATION;
> +
> + mutex_unlock(&pxp->mutex);
> +}
> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
> index 2d22a6e6ab27..af32c2616641 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.h
> +++ b/drivers/gpu/drm/xe/xe_pxp.h
> @@ -20,6 +20,9 @@ int xe_pxp_get_readiness_status(struct xe_pxp *pxp);
> int xe_pxp_init(struct xe_device *xe);
> void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
>
> +int xe_pxp_pm_suspend(struct xe_pxp *pxp);
> +void xe_pxp_pm_resume(struct xe_pxp *pxp);
> +
> int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 type);
> int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q);
> void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q);
> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
> index 1bb747837f86..942f2fa40a58 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
> @@ -24,7 +24,8 @@ enum xe_pxp_status {
> XE_PXP_NEEDS_TERMINATION = 0, /* starting status */
> XE_PXP_TERMINATION_IN_PROGRESS,
> XE_PXP_READY_TO_START,
> - XE_PXP_ACTIVE
> + XE_PXP_ACTIVE,
> + XE_PXP_SUSPENDED
> };
>
> /**
> @@ -111,6 +112,12 @@ struct xe_pxp {
>
> /** @key_instance: keep track of the current iteration of the PXP key */
> u32 key_instance;
> + /**
> + * @last_suspend_key_instance: value of key_instance at the last
> + * suspend. Used to check if any PXP session has been created between
> + * suspend cycles.
> + */
> + u32 last_suspend_key_instance;
> };
>
> #endif /* __XE_PXP_TYPES_H__ */
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 01/12] drm/xe/pxp: Initialize PXP structure and KCR reg
2024-08-16 19:00 ` [PATCH v2 01/12] drm/xe/pxp: Initialize PXP structure and KCR reg Daniele Ceraolo Spurio
@ 2024-10-04 20:29 ` John Harrison
0 siblings, 0 replies; 54+ messages in thread
From: John Harrison @ 2024-10-04 20:29 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
> As the first step towards adding PXP support, hook in the PXP init
> function, allocate the PXP structure and initialize the KCR register to
> allow PXP HWDRM sessions.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> ---
> drivers/gpu/drm/xe/Makefile | 1 +
> .../xe/compat-i915-headers/pxp/intel_pxp.h | 4 +-
> drivers/gpu/drm/xe/regs/xe_pxp_regs.h | 17 +++
> drivers/gpu/drm/xe/xe_device.c | 6 +
> drivers/gpu/drm/xe/xe_device_types.h | 8 +-
> drivers/gpu/drm/xe/xe_pci.c | 2 +
> drivers/gpu/drm/xe/xe_pxp.c | 103 ++++++++++++++++++
> drivers/gpu/drm/xe/xe_pxp.h | 15 +++
> drivers/gpu/drm/xe/xe_pxp_types.h | 28 +++++
> 9 files changed, 180 insertions(+), 4 deletions(-)
> create mode 100644 drivers/gpu/drm/xe/regs/xe_pxp_regs.h
> create mode 100644 drivers/gpu/drm/xe/xe_pxp.c
> create mode 100644 drivers/gpu/drm/xe/xe_pxp.h
> create mode 100644 drivers/gpu/drm/xe/xe_pxp_types.h
>
> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
> index e11392b5dd3d..9e007e59de83 100644
> --- a/drivers/gpu/drm/xe/Makefile
> +++ b/drivers/gpu/drm/xe/Makefile
> @@ -83,6 +83,7 @@ xe-y += xe_bb.o \
> xe_preempt_fence.o \
> xe_pt.o \
> xe_pt_walk.o \
> + xe_pxp.o \
> xe_query.o \
> xe_range_fence.o \
> xe_reg_sr.o \
> diff --git a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
> index c2c30ece8f77..881680727452 100644
> --- a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
> +++ b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
> @@ -10,9 +10,9 @@
> #include <linux/types.h>
>
> struct drm_i915_gem_object;
> -struct intel_pxp;
> +struct xe_pxp;
>
> -static inline int intel_pxp_key_check(struct intel_pxp *pxp,
> +static inline int intel_pxp_key_check(struct xe_pxp *pxp,
> struct drm_i915_gem_object *obj,
> bool assign)
> {
> diff --git a/drivers/gpu/drm/xe/regs/xe_pxp_regs.h b/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
> new file mode 100644
> index 000000000000..d67cf210d23d
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
> @@ -0,0 +1,17 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright(c) 2024, Intel Corporation. All rights reserved.
> + */
> +
> +#ifndef __XE_PXP_REGS_H__
> +#define __XE_PXP_REGS_H__
> +
> +#include "regs/xe_regs.h"
> +
> +/* The following registers are only valid on platforms with a media GT */
> +
> +/* KCR enable/disable control */
> +#define KCR_INIT XE_REG(0x3860f0)
> +#define KCR_INIT_ALLOW_DISPLAY_ME_WRITES REG_BIT(14)
> +
> +#endif /* __XE_PXP_REGS_H__ */
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 206328387150..807a15c49a81 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -46,6 +46,7 @@
> #include "xe_pat.h"
> #include "xe_pcode.h"
> #include "xe_pm.h"
> +#include "xe_pxp.h"
> #include "xe_query.h"
> #include "xe_sriov.h"
> #include "xe_tile.h"
> @@ -730,6 +731,11 @@ int xe_device_probe(struct xe_device *xe)
> if (err)
> goto err_fini_oa;
>
> + /* A PXP init failure is not fatal */
> + err = xe_pxp_init(xe);
> + if (err && err != -EOPNOTSUPP)
> + drm_err(&xe->drm, "PXP initialization failed: %pe\n", ERR_PTR(err));
> +
> err = drm_dev_register(&xe->drm, 0);
> if (err)
> goto err_fini_display;
> diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
> index 16a24eadd94b..b00a78be3934 100644
> --- a/drivers/gpu/drm/xe/xe_device_types.h
> +++ b/drivers/gpu/drm/xe/xe_device_types.h
> @@ -35,6 +35,7 @@
>
> struct xe_ggtt;
> struct xe_pat_ops;
> +struct xe_pxp;
>
> #define XE_BO_INVALID_OFFSET LONG_MAX
>
> @@ -276,6 +277,8 @@ struct xe_device {
> u8 has_llc:1;
> /** @info.has_mmio_ext: Device has extra MMIO address range */
> u8 has_mmio_ext:1;
> + /** @info.has_pxp: Device has PXP support */
> + u8 has_pxp:1;
> /** @info.has_range_tlb_invalidation: Has range based TLB invalidations */
> u8 has_range_tlb_invalidation:1;
> /** @info.has_sriov: Supports SR-IOV */
> @@ -480,6 +483,9 @@ struct xe_device {
> /** @oa: oa observation subsystem */
> struct xe_oa oa;
>
> + /** @pxp: Encapsulate Protected Xe Path support */
> + struct xe_pxp *pxp;
> +
> /** @needs_flr_on_fini: requests function-reset on fini */
> bool needs_flr_on_fini;
>
> @@ -552,8 +558,6 @@ struct xe_device {
> unsigned int czclk_freq;
> unsigned int fsb_freq, mem_freq, is_ddr3;
> };
> -
> - void *pxp;
> #endif
> };
>
> diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
> index 3c34b032ebf4..d1453ba20dcd 100644
> --- a/drivers/gpu/drm/xe/xe_pci.c
> +++ b/drivers/gpu/drm/xe/xe_pci.c
> @@ -62,6 +62,7 @@ struct xe_device_desc {
> u8 has_heci_cscfi:1;
> u8 has_llc:1;
> u8 has_mmio_ext:1;
> + u8 has_pxp:1;
> u8 has_sriov:1;
> u8 skip_guc_pc:1;
> u8 skip_mtcfg:1;
> @@ -616,6 +617,7 @@ static int xe_info_init_early(struct xe_device *xe,
> xe->info.has_heci_cscfi = desc->has_heci_cscfi;
> xe->info.has_llc = desc->has_llc;
> xe->info.has_mmio_ext = desc->has_mmio_ext;
> + xe->info.has_pxp = desc->has_pxp;
> xe->info.has_sriov = desc->has_sriov;
> xe->info.skip_guc_pc = desc->skip_guc_pc;
> xe->info.skip_mtcfg = desc->skip_mtcfg;
> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
> new file mode 100644
> index 000000000000..f974f74be1d5
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_pxp.c
> @@ -0,0 +1,103 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright(c) 2024 Intel Corporation.
> + */
> +
> +#include "xe_pxp.h"
> +
> +#include <drm/drm_managed.h>
> +
> +#include "xe_device_types.h"
> +#include "xe_force_wake.h"
> +#include "xe_gt.h"
> +#include "xe_gt_types.h"
> +#include "xe_mmio.h"
> +#include "xe_pxp_types.h"
> +#include "xe_uc_fw.h"
> +#include "regs/xe_pxp_regs.h"
> +
> +/**
> + * DOC: PXP
> + *
> + * PXP (Protected Xe Path) allows execution and flip to display of protected
> + * (i.e. encrypted) objects. This feature is currently only supported in
> + * integrated parts.
> + */
> +
> +static bool pxp_is_supported(const struct xe_device *xe)
> +{
> + return xe->info.has_pxp && IS_ENABLED(CONFIG_INTEL_MEI_GSC_PROXY);
> +}
> +
> +static int kcr_pxp_set_status(const struct xe_pxp *pxp, bool enable)
> +{
> + u32 val = enable ? _MASKED_BIT_ENABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES) :
> + _MASKED_BIT_DISABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES);
> + int err;
> +
> + err = xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GT);
> + if (err)
> + return err;
> +
> + xe_mmio_write32(pxp->gt, KCR_INIT, val);
> + XE_WARN_ON(xe_force_wake_put(gt_to_fw(pxp->gt), XE_FW_GT));
> +
> + return 0;
> +}
> +
> +static int kcr_pxp_enable(const struct xe_pxp *pxp)
> +{
> + return kcr_pxp_set_status(pxp, true);
> +}
> +
> +/**
> + * xe_pxp_init - initialize PXP support
> + * @xe: the xe_device structure
> + *
> + * Initialize the HW state and allocate the objects required for PXP support.
> + * Note that some of the requirement for PXP support (GSC proxy init, HuC auth)
> + * are performed asynchronously as part of the GSC init. PXP can only be used
> + * after both this function and the async worker have completed.
> + *
> + * Returns -EOPNOTSUPP if PXP is not supported, 0 if PXP initialization is
> + * successful, other errno value if there is an error during the init.
> + */
> +int xe_pxp_init(struct xe_device *xe)
> +{
> + struct xe_gt *gt = xe->tiles[0].media_gt;
> + struct xe_pxp *pxp;
> + int err;
> +
> + if (!pxp_is_supported(xe))
> + return -EOPNOTSUPP;
> +
> + /* we only support PXP on single tile devices with a media GT */
> + if (xe->info.tile_count > 1 || !gt)
> + return -EOPNOTSUPP;
> +
> + /* The GSCCS is required for submissions to the GSC FW */
> + if (!(gt->info.engine_mask & BIT(XE_HW_ENGINE_GSCCS0)))
> + return -EOPNOTSUPP;
> +
> + /* PXP requires both GSC and HuC firmwares to be available */
> + if (!xe_uc_fw_is_loadable(>->uc.gsc.fw) ||
> + !xe_uc_fw_is_loadable(>->uc.huc.fw)) {
> + drm_info(&xe->drm, "skipping PXP init due to missing FW dependencies");
> + return -EOPNOTSUPP;
> + }
> +
> + pxp = drmm_kzalloc(&xe->drm, sizeof(struct xe_pxp), GFP_KERNEL);
> + if (!pxp)
> + return -ENOMEM;
> +
> + pxp->xe = xe;
> + pxp->gt = gt;
> +
> + err = kcr_pxp_enable(pxp);
> + if (err)
> + return err;
Won't this leak the pxp object? DRM itself will clean it up eventually
on module unload but it should really be freed up here explicitly?
> +
> + xe->pxp = pxp;
> +
> + return 0;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
> new file mode 100644
> index 000000000000..79c951667f13
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_pxp.h
> @@ -0,0 +1,15 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright(c) 2024, Intel Corporation. All rights reserved.
> + */
> +
> +#ifndef __XE_PXP_H__
> +#define __XE_PXP_H__
> +
> +#include <linux/types.h>
Is this actually needed? The only types in use are the locally defined
struct and 'int'.
> +
> +struct xe_device;
> +
> +int xe_pxp_init(struct xe_device *xe);
> +
> +#endif /* __XE_PXP_H__ */
> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
> new file mode 100644
> index 000000000000..3a141021972a
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
> @@ -0,0 +1,28 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright(c) 2024, Intel Corporation. All rights reserved.
> + */
> +
> +#ifndef __XE_PXP_TYPES_H__
> +#define __XE_PXP_TYPES_H__
> +
> +#include <linux/types.h>
As above.
John.
> +
> +struct xe_device;
> +struct xe_gt;
> +
> +/**
> + * struct xe_pxp - pxp state
> + */
> +struct xe_pxp {
> + /** @xe: Backpointer to the xe_device struct */
> + struct xe_device *xe;
> +
> + /**
> + * @gt: pointer to the gt that owns the submission-side of PXP
> + * (VDBOX, KCR and GSC)
> + */
> + struct xe_gt *gt;
> +};
> +
> +#endif /* __XE_PXP_TYPES_H__ */
* Re: [PATCH v2 02/12] drm/xe/pxp: Allocate PXP execution resources
2024-08-16 19:00 ` [PATCH v2 02/12] drm/xe/pxp: Allocate PXP execution resources Daniele Ceraolo Spurio
2024-08-19 9:19 ` Jani Nikula
@ 2024-10-04 20:30 ` John Harrison
2024-11-06 22:25 ` Daniele Ceraolo Spurio
1 sibling, 1 reply; 54+ messages in thread
From: John Harrison @ 2024-10-04 20:30 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe; +Cc: Matthew Brost, Thomas Hellström
On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
> PXP requires submissions to the HW for the following operations
>
> 1) Key invalidation, done via the VCS engine
> 2) Communication with the GSC FW for session management, done via the
> GSCCS.
>
> Key invalidation submissions are serialized (only 1 termination can be
> serviced at a given time) and done via GGTT, so we can allocate a simple
> BO and a kernel queue for it.
>
> Submission for session management are tied to a PXP client (identified
Submissions are or submission is
> by a unique host_session_id); from the GSC POV this is a user-accessible
> construct, so all related submission must be done via PPGTT. The driver
> does not currently support PPGTT submission from within the kernek, so
kernel
> to add this support, the following changes have been included:
>
> - a new type of kernel-owned VM (marked as GSC), required to ensure we
> don't set the device in no-fault mode when we initialize PXP and to
> mark the different lock usage with lockdep.
> - a new function to map a BO into a VM from within the kernel.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/Makefile | 1 +
> drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h | 7 +
> drivers/gpu/drm/xe/xe_exec_queue.c | 3 +
> drivers/gpu/drm/xe/xe_pxp.c | 25 ++-
> drivers/gpu/drm/xe/xe_pxp_submit.c | 201 ++++++++++++++++++
> drivers/gpu/drm/xe/xe_pxp_submit.h | 16 ++
> drivers/gpu/drm/xe/xe_pxp_types.h | 46 ++++
> drivers/gpu/drm/xe/xe_vm.c | 124 ++++++++++-
> drivers/gpu/drm/xe/xe_vm.h | 6 +
> drivers/gpu/drm/xe/xe_vm_types.h | 1 +
> 10 files changed, 418 insertions(+), 12 deletions(-)
> create mode 100644 drivers/gpu/drm/xe/xe_pxp_submit.c
> create mode 100644 drivers/gpu/drm/xe/xe_pxp_submit.h
>
> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
> index 9e007e59de83..a508b9166b88 100644
> --- a/drivers/gpu/drm/xe/Makefile
> +++ b/drivers/gpu/drm/xe/Makefile
> @@ -84,6 +84,7 @@ xe-y += xe_bb.o \
> xe_pt.o \
> xe_pt_walk.o \
> xe_pxp.o \
> + xe_pxp_submit.o \
> xe_query.o \
> xe_range_fence.o \
> xe_reg_sr.o \
> diff --git a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
> index 57520809e48d..f3c4cf10ba20 100644
> --- a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
> +++ b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
> @@ -6,6 +6,7 @@
> #ifndef _ABI_GSC_PXP_COMMANDS_ABI_H
> #define _ABI_GSC_PXP_COMMANDS_ABI_H
>
> +#include <linux/sizes.h>
> #include <linux/types.h>
>
> /* Heci client ID for PXP commands */
> @@ -13,6 +14,12 @@
>
> #define PXP_APIVER(x, y) (((x) & 0xFFFF) << 16 | ((y) & 0xFFFF))
>
> +/*
> + * A PXP sub-section in an HECI packet can be up to 64K big in each direction.
> + * This does not include the top-level GSC header.
> + */
> +#define PXP_MAX_PACKET_SIZE SZ_64K
> +
> /*
> * there are a lot of status codes for PXP, but we only define the cross-API
> * common ones that we actually can handle in the kernel driver. Other failure
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index 7d170d37fdbe..e98e8794eddf 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -148,6 +148,9 @@ struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *v
> struct xe_exec_queue *q;
> int err;
>
> + /* VMs for GSCCS queues (and only those) must have the XE_VM_FLAG_GSC flag */
> + xe_assert(xe, !vm || (!!(vm->flags & XE_VM_FLAG_GSC) == !!(hwe->engine_id == XE_HW_ENGINE_GSCCS0)));
> +
> q = __xe_exec_queue_alloc(xe, vm, logical_mask, width, hwe, flags,
> extensions);
> if (IS_ERR(q))
> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
> index f974f74be1d5..56bb7d927c07 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.c
> +++ b/drivers/gpu/drm/xe/xe_pxp.c
> @@ -12,6 +12,7 @@
> #include "xe_gt.h"
> #include "xe_gt_types.h"
> #include "xe_mmio.h"
> +#include "xe_pxp_submit.h"
> #include "xe_pxp_types.h"
> #include "xe_uc_fw.h"
> #include "regs/xe_pxp_regs.h"
> @@ -50,6 +51,20 @@ static int kcr_pxp_enable(const struct xe_pxp *pxp)
> return kcr_pxp_set_status(pxp, true);
> }
>
> +static int kcr_pxp_disable(const struct xe_pxp *pxp)
> +{
> + return kcr_pxp_set_status(pxp, false);
> +}
> +
> +static void pxp_fini(void *arg)
> +{
> + struct xe_pxp *pxp = arg;
> +
> + xe_pxp_destroy_execution_resources(pxp);
> +
> + /* no need to explicitly disable KCR since we're going to do an FLR */
> +}
> +
> /**
> * xe_pxp_init - initialize PXP support
> * @xe: the xe_device structure
> @@ -97,7 +112,15 @@ int xe_pxp_init(struct xe_device *xe)
> if (err)
> return err;
>
> + err = xe_pxp_allocate_execution_resources(pxp);
> + if (err)
> + goto kcr_disable;
> +
> xe->pxp = pxp;
>
> - return 0;
> + return devm_add_action_or_reset(xe->drm.dev, pxp_fini, pxp);
> +
> +kcr_disable:
> + kcr_pxp_disable(pxp);
> + return err;
Again, this will leak the pxp object until the driver is unloaded.
> }
> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
> new file mode 100644
> index 000000000000..b777b0765c8a
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
> @@ -0,0 +1,201 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright(c) 2024 Intel Corporation.
> + */
> +
> +#include "xe_pxp_submit.h"
> +
> +#include <drm/xe_drm.h>
> +
> +#include "xe_device_types.h"
> +#include "xe_bo.h"
> +#include "xe_exec_queue.h"
> +#include "xe_gsc_submit.h"
> +#include "xe_gt.h"
> +#include "xe_pxp_types.h"
> +#include "xe_vm.h"
> +#include "regs/xe_gt_regs.h"
> +
> +/*
> + * The VCS is used for kernel-owned GGTT submissions to issue key termination.
> + * Terminations are serialized, so we only need a single queue and a single
> + * batch.
> + */
> +static int allocate_vcs_execution_resources(struct xe_pxp *pxp)
> +{
> + struct xe_gt *gt = pxp->gt;
> + struct xe_device *xe = pxp->xe;
> + struct xe_tile *tile = gt_to_tile(gt);
> + struct xe_hw_engine *hwe;
> + struct xe_exec_queue *q;
> + struct xe_bo *bo;
> + int err;
> +
> + hwe = xe_gt_hw_engine(gt, XE_ENGINE_CLASS_VIDEO_DECODE, 0, true);
> + if (!hwe)
> + return -ENODEV;
> +
> + q = xe_exec_queue_create(xe, NULL, BIT(hwe->logical_instance), 1, hwe,
> + EXEC_QUEUE_FLAG_KERNEL | EXEC_QUEUE_FLAG_PERMANENT, 0);
> + if (IS_ERR(q))
> + return PTR_ERR(q);
> +
> + /*
> + * Each termination is 16 DWORDS, so 4K is enough to contain a
> + * termination for each session.
> + */
> + bo = xe_bo_create_pin_map(xe, tile, 0, SZ_4K, ttm_bo_type_kernel,
> + XE_BO_FLAG_SYSTEM | XE_BO_FLAG_PINNED | XE_BO_FLAG_GGTT);
> + if (IS_ERR(bo)) {
> + err = PTR_ERR(bo);
> + goto out_queue;
> + }
> +
> + pxp->vcs_exec.q = q;
> + pxp->vcs_exec.bo = bo;
> +
> + return 0;
> +
> +out_queue:
> + xe_exec_queue_put(q);
> + return err;
> +}
> +
> +static void destroy_vcs_execution_resources(struct xe_pxp *pxp)
> +{
> + if (pxp->vcs_exec.bo)
> + xe_bo_unpin_map_no_vm(pxp->vcs_exec.bo);
> +
> + if (pxp->vcs_exec.q)
> + xe_exec_queue_put(pxp->vcs_exec.q);
> +}
> +
> +#define PXP_BB_SIZE XE_PAGE_SIZE
> +static int allocate_gsc_client_resources(struct xe_gt *gt,
> + struct xe_pxp_gsc_client_resources *gsc_res,
> + size_t inout_size)
> +{
> + struct xe_tile *tile = gt_to_tile(gt);
> + struct xe_device *xe = tile_to_xe(tile);
> + struct xe_hw_engine *hwe;
> + struct xe_vm *vm;
> + struct xe_bo *bo;
> + struct xe_exec_queue *q;
> + struct dma_fence *fence;
> + long timeout;
> + int err = 0;
> +
> + hwe = xe_gt_hw_engine(gt, XE_ENGINE_CLASS_OTHER, OTHER_GSC_INSTANCE, false);
> +
> + /* we shouldn't reach here if the GSC engine is not available */
> + xe_assert(xe, hwe);
> +
> + /* PXP instructions must be issued from PPGTT */
> + vm = xe_vm_create(xe, XE_VM_FLAG_GSC);
> + if (IS_ERR(vm))
> + return PTR_ERR(vm);
> +
> + /* We allocate a single object for the batch and the in/out memory */
> + xe_vm_lock(vm, false);
> + bo = xe_bo_create_pin_map(xe, tile, vm, PXP_BB_SIZE + inout_size * 2,
> + ttm_bo_type_kernel,
> + XE_BO_FLAG_SYSTEM | XE_BO_FLAG_PINNED | XE_BO_FLAG_NEEDS_UC);
> + xe_vm_unlock(vm);
> + if (IS_ERR(bo)) {
> + err = PTR_ERR(bo);
> + goto vm_out;
> + }
> +
> + fence = xe_vm_bind_bo(vm, bo, NULL, 0, XE_CACHE_WB);
> + if (IS_ERR(fence)) {
> + err = PTR_ERR(fence);
> + goto bo_out;
> + }
> +
> + timeout = dma_fence_wait_timeout(fence, false, HZ);
> + dma_fence_put(fence);
> + if (timeout <= 0) {
> + err = timeout ?: -ETIME;
> + goto bo_out;
> + }
> +
> + q = xe_exec_queue_create(xe, vm, BIT(hwe->logical_instance), 1, hwe,
> + EXEC_QUEUE_FLAG_KERNEL |
> + EXEC_QUEUE_FLAG_PERMANENT, 0);
> + if (IS_ERR(q)) {
> + err = PTR_ERR(q);
> + goto bo_out;
> + }
> +
> + gsc_res->vm = vm;
> + gsc_res->bo = bo;
> + gsc_res->inout_size = inout_size;
> + gsc_res->batch = IOSYS_MAP_INIT_OFFSET(&bo->vmap, 0);
> + gsc_res->msg_in = IOSYS_MAP_INIT_OFFSET(&bo->vmap, PXP_BB_SIZE);
> + gsc_res->msg_out = IOSYS_MAP_INIT_OFFSET(&bo->vmap, PXP_BB_SIZE + inout_size);
> + gsc_res->q = q;
> +
> + /* initialize host-session-handle (for all Xe-to-gsc-firmware PXP cmds) */
> + gsc_res->host_session_handle = xe_gsc_create_host_session_id();
> +
> + return 0;
> +
> +bo_out:
> + xe_bo_unpin_map_no_vm(bo);
> +vm_out:
> + xe_vm_close_and_put(vm);
> +
> + return err;
> +}
> +
> +static void destroy_gsc_client_resources(struct xe_pxp_gsc_client_resources *gsc_res)
> +{
> + if (!gsc_res->q)
> + return;
> +
> + xe_exec_queue_put(gsc_res->q);
> + xe_bo_unpin_map_no_vm(gsc_res->bo);
> + xe_vm_close_and_put(gsc_res->vm);
> +}
> +
> +/**
> + * xe_pxp_allocate_execution_resources - Allocate PXP submission objects
> + * @pxp: the xe_pxp structure
> + *
> + * Allocates exec_queues objects for VCS and GSCCS submission. The GSCCS
> + * submissions are done via PPGTT, so this function allocates a VM for it and
> + * maps the object into it.
> + *
> + * Returns 0 if the allocation and mapping is successful, an errno value
> + * otherwise.
> + */
> +int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp)
> +{
> + int err;
> +
> + err = allocate_vcs_execution_resources(pxp);
> + if (err)
> + return err;
> +
> + /*
> + * PXP commands can require a lot of BO space (see PXP_MAX_PACKET_SIZE),
> + * but we currently only support a subset of commands that are small
> + * (< 20 dwords), so a single page is enough for now.
> + */
> + err = allocate_gsc_client_resources(pxp->gt, &pxp->gsc_res, XE_PAGE_SIZE);
> + if (err)
> + goto destroy_vcs_context;
> +
> + return 0;
> +
> +destroy_vcs_context:
> + destroy_vcs_execution_resources(pxp);
> + return err;
> +}
> +
> +void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp)
> +{
> + destroy_gsc_client_resources(&pxp->gsc_res);
> + destroy_vcs_execution_resources(pxp);
> +}
> +
> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.h b/drivers/gpu/drm/xe/xe_pxp_submit.h
> new file mode 100644
> index 000000000000..1a971fadc081
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright(c) 2024, Intel Corporation. All rights reserved.
> + */
> +
> +#ifndef __XE_PXP_SUBMIT_H__
> +#define __XE_PXP_SUBMIT_H__
> +
> +#include <linux/types.h>
Also not necessary?
> +
> +struct xe_pxp;
> +
> +int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp);
> +void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp);
> +
> +#endif /* __XE_PXP_SUBMIT_H__ */
> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
> index 3a141021972a..3463caaad101 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
> @@ -6,10 +6,45 @@
> #ifndef __XE_PXP_TYPES_H__
> #define __XE_PXP_TYPES_H__
>
> +#include <linux/iosys-map.h>
> #include <linux/types.h>
>
> +struct xe_bo;
> +struct xe_exec_queue;
> struct xe_device;
> struct xe_gt;
> +struct xe_vm;
> +
> +/**
> + * struct xe_pxp_gsc_client_resources - resources for GSC submission by a PXP
> + * client. The GSC FW supports multiple GSC clients active at the same time.
> + */
> +struct xe_pxp_gsc_client_resources {
> + /**
> + * @host_session_handle: handle used to identify the client in messages
> + * sent to the GSC firmware.
> + */
> + u64 host_session_handle;
> + /** @vm: VM used for PXP submissions to the GSCCS */
> + struct xe_vm *vm;
> + /** @q: GSCCS exec queue for PXP submissions */
> + struct xe_exec_queue *q;
> +
> + /**
> + * @bo: BO used for submissions to the GSCCS and GSC FW. It includes
> + * space for the GSCCS batch and the input/output buffers read/written
> + * by the FW
> + */
> + struct xe_bo *bo;
> + /** @inout_size: size of the msg_in and msg_out sections */
Maybe 'size of each of the in/out sections individually' or something to
remove ambiguity about this being the total size of the two combined
(which is how I would read it).
> + u32 inout_size;
> + /** @batch: iosys_map to the batch memory within the BO */
> + struct iosys_map batch;
> + /** @msg_in: iosys_map to the input memory within the BO */
> + struct iosys_map msg_in;
> + /** @msg_out: iosys_map to the output memory within the BO */
> + struct iosys_map msg_out;
> +};
>
> /**
> * struct xe_pxp - pxp state
> @@ -23,6 +58,17 @@ struct xe_pxp {
> * (VDBOX, KCR and GSC)
> */
> struct xe_gt *gt;
> +
> + /** @vcs_exec: kernel-owned objects for PXP submissions to the VCS */
> + struct {
> + /** @vcs_exec.q: kernel-owned VCS exec queue used for PXP terminations */
> + struct xe_exec_queue *q;
> + /** @vcs_exec.bo: BO used for submissions to the VCS */
> + struct xe_bo *bo;
> + } vcs_exec;
> +
> + /** @gsc_res: kernel-owned objects for PXP submissions to the GSCCS */
> + struct xe_pxp_gsc_client_resources gsc_res;
> };
>
> #endif /* __XE_PXP_TYPES_H__ */
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 6dd76f77b504..56f105797ae6 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -1381,6 +1381,15 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
> struct xe_tile *tile;
> u8 id;
>
> + /*
> + * All GSC VMs are owned by the kernel and can also only be used on
> + * the GSCCS. We don't want a kernel-owned VM to put the device in
> + * either fault or not fault mode, so we need to exclude the GSC VMs
> + * from that count; this is only safe if we ensure that all GSC VMs are
> + * non-faulting.
> + */
> + xe_assert(xe, !((flags & XE_VM_FLAG_GSC) && (flags & XE_VM_FLAG_FAULT_MODE)));
> +
> vm = kzalloc(sizeof(*vm), GFP_KERNEL);
> if (!vm)
> return ERR_PTR(-ENOMEM);
> @@ -1391,7 +1400,21 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
>
> vm->flags = flags;
>
> - init_rwsem(&vm->lock);
> + /**
> + * GSC VMs are kernel-owned, only used for PXP ops and can be
> + * manipulated under the PXP mutex. However, the PXP mutex can be taken
Is that 'can be (but don't have to be) manipulated' or 'can only be
manipulated'?
> + * under a user-VM lock when the PXP session is started at exec_queue
> + * creation time. Those are different VMs and therefore there is no risk
> + * of deadlock, but we need to tell lockdep that this is the case or it
> + * will print a warning.
> + */
> + if (flags & XE_VM_FLAG_GSC) {
> + static struct lock_class_key gsc_vm_key;
> +
> + __init_rwsem(&vm->lock, "gsc_vm", &gsc_vm_key);
> + } else {
> + init_rwsem(&vm->lock);
> + }
> mutex_init(&vm->snap_mutex);
>
> INIT_LIST_HEAD(&vm->rebind_list);
> @@ -1510,7 +1533,7 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
> mutex_lock(&xe->usm.lock);
> if (flags & XE_VM_FLAG_FAULT_MODE)
> xe->usm.num_vm_in_fault_mode++;
> - else if (!(flags & XE_VM_FLAG_MIGRATION))
> + else if (!(flags & (XE_VM_FLAG_MIGRATION | XE_VM_FLAG_GSC)))
> xe->usm.num_vm_in_non_fault_mode++;
> mutex_unlock(&xe->usm.lock);
>
> @@ -2694,11 +2717,10 @@ static void vm_bind_ioctl_ops_fini(struct xe_vm *vm, struct xe_vma_ops *vops,
> for (i = 0; i < vops->num_syncs; i++)
> xe_sync_entry_signal(vops->syncs + i, fence);
> xe_exec_queue_last_fence_set(wait_exec_queue, vm, fence);
> - dma_fence_put(fence);
> }
>
> -static int vm_bind_ioctl_ops_execute(struct xe_vm *vm,
> - struct xe_vma_ops *vops)
> +static struct dma_fence *vm_bind_ioctl_ops_execute(struct xe_vm *vm,
> + struct xe_vma_ops *vops)
Rather than changing the internals, is it not possible to just call
xe_exec_queue_last_fence_get() after vm_bind_ioctl_ops_execute has returned?
> {
> struct drm_exec exec;
> struct dma_fence *fence;
> @@ -2711,21 +2733,21 @@ static int vm_bind_ioctl_ops_execute(struct xe_vm *vm,
> drm_exec_until_all_locked(&exec) {
> err = vm_bind_ioctl_ops_lock_and_prep(&exec, vm, vops);
> drm_exec_retry_on_contention(&exec);
> - if (err)
> + if (err) {
> + fence = ERR_PTR(err);
> goto unlock;
> + }
>
> fence = ops_execute(vm, vops);
> - if (IS_ERR(fence)) {
> - err = PTR_ERR(fence);
> + if (IS_ERR(fence))
> goto unlock;
> - }
>
> vm_bind_ioctl_ops_fini(vm, vops, fence);
> }
>
> unlock:
> drm_exec_fini(&exec);
> - return err;
> + return fence;
> }
>
> #define SUPPORTED_FLAGS_STUB \
> @@ -2946,6 +2968,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> struct xe_sync_entry *syncs = NULL;
> struct drm_xe_vm_bind_op *bind_ops;
> struct xe_vma_ops vops;
> + struct dma_fence *fence;
> int err;
> int i;
>
> @@ -3108,7 +3131,11 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> if (err)
> goto unwind_ops;
>
> - err = vm_bind_ioctl_ops_execute(vm, &vops);
> + fence = vm_bind_ioctl_ops_execute(vm, &vops);
> + if (IS_ERR(fence))
> + err = PTR_ERR(fence);
> + else
> + dma_fence_put(fence);
There isn't a new fence get in vm_bind_ioctl_ops_execute(). The change
in return value is the only difference in behaviour. So why is an extra
put required?
>
> unwind_ops:
> if (err && err != -ENODATA)
> @@ -3142,6 +3169,81 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> return err;
> }
>
> +/**
> + * xe_vm_bind_bo - bind a kernel BO to a VM
> + * @vm: VM to bind the BO to
> + * @bo: BO to bind
> + * @q: exec queue to use for the bind (optional)
> + * @addr: address at which to bind the BO
> + * @cache_lvl: PAT cache level to use
> + *
> + * Execute a VM bind map operation on a kernel-owned BO to bind it into a
> + * kernel-owned VM.
> + *
> + * Returns a dma_fence to track the binding completion if the job to do so was
> + * successfully submitted, an error pointer otherwise.
> + */
> +struct dma_fence *xe_vm_bind_bo(struct xe_vm *vm, struct xe_bo *bo,
> + struct xe_exec_queue *q, u64 addr,
> + enum xe_cache_level cache_lvl)
Should this have '_kernel_' in the name given the description of
kernel-owned BO to kernel-owned VM?
John.
> +{
> + struct xe_vma_ops vops;
> + struct drm_gpuva_ops *ops = NULL;
> + struct dma_fence *fence;
> + int err;
> +
> + xe_bo_get(bo);
> + xe_vm_get(vm);
> + if (q)
> + xe_exec_queue_get(q);
> +
> + down_write(&vm->lock);
> +
> + xe_vma_ops_init(&vops, vm, q, NULL, 0);
> +
> + ops = vm_bind_ioctl_ops_create(vm, bo, 0, addr, bo->size,
> + DRM_XE_VM_BIND_OP_MAP, 0, 0,
> + vm->xe->pat.idx[cache_lvl]);
> + if (IS_ERR(ops)) {
> + err = PTR_ERR(ops);
> + goto release_vm_lock;
> + }
> +
> + err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
> + if (err)
> + goto release_vm_lock;
> +
> + xe_assert(vm->xe, !list_empty(&vops.list));
> +
> + err = xe_vma_ops_alloc(&vops, false);
> + if (err)
> + goto unwind_ops;
> +
> + fence = vm_bind_ioctl_ops_execute(vm, &vops);
> + if (IS_ERR(fence))
> + err = PTR_ERR(fence);
> +
> +unwind_ops:
> + if (err && err != -ENODATA)
> + vm_bind_ioctl_ops_unwind(vm, &ops, 1);
> +
> + xe_vma_ops_fini(&vops);
> + drm_gpuva_ops_free(&vm->gpuvm, ops);
> +
> +release_vm_lock:
> + up_write(&vm->lock);
> +
> + if (q)
> + xe_exec_queue_put(q);
> + xe_vm_put(vm);
> + xe_bo_put(bo);
> +
> + if (err)
> + fence = ERR_PTR(err);
> +
> + return fence;
> +}
> +
> /**
> * xe_vm_lock() - Lock the vm's dma_resv object
> * @vm: The struct xe_vm whose lock is to be locked
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index c864dba35e1d..bfc19e8113c3 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -19,6 +19,8 @@ struct drm_file;
> struct ttm_buffer_object;
> struct ttm_validate_buffer;
>
> +struct dma_fence;
> +
> struct xe_exec_queue;
> struct xe_file;
> struct xe_sync_entry;
> @@ -248,6 +250,10 @@ int xe_vm_lock_vma(struct drm_exec *exec, struct xe_vma *vma);
> int xe_vm_validate_rebind(struct xe_vm *vm, struct drm_exec *exec,
> unsigned int num_fences);
>
> +struct dma_fence *xe_vm_bind_bo(struct xe_vm *vm, struct xe_bo *bo,
> + struct xe_exec_queue *q, u64 addr,
> + enum xe_cache_level cache_lvl);
> +
> /**
> * xe_vm_resv() - Return's the vm's reservation object
> * @vm: The vm
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index 7f9a303e51d8..52467b9b5348 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -164,6 +164,7 @@ struct xe_vm {
> #define XE_VM_FLAG_BANNED BIT(5)
> #define XE_VM_FLAG_TILE_ID(flags) FIELD_GET(GENMASK(7, 6), flags)
> #define XE_VM_FLAG_SET_TILE_ID(tile) FIELD_PREP(GENMASK(7, 6), (tile)->id)
> +#define XE_VM_FLAG_GSC BIT(8)
> unsigned long flags;
>
> /** @composite_fence_ctx: context composite fence */
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 03/12] drm/xe/pxp: Add VCS inline termination support
2024-08-16 19:00 ` [PATCH v2 03/12] drm/xe/pxp: Add VCS inline termination support Daniele Ceraolo Spurio
@ 2024-10-04 22:25 ` John Harrison
2024-11-06 23:49 ` Daniele Ceraolo Spurio
0 siblings, 1 reply; 54+ messages in thread
From: John Harrison @ 2024-10-04 22:25 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
> The key termination is done with a specific submission to the VCS
> engine.
>
> Note that this patch is meant to be squashed with the follow-up patches
> that implement the other pieces of the termination flow. It is separate
> for now for ease of review.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> ---
> .../gpu/drm/xe/instructions/xe_instr_defs.h | 1 +
> .../gpu/drm/xe/instructions/xe_mfx_commands.h | 29 +++++
> .../gpu/drm/xe/instructions/xe_mi_commands.h | 5 +
> drivers/gpu/drm/xe/xe_lrc.h | 3 +-
> drivers/gpu/drm/xe/xe_pxp_submit.c | 108 ++++++++++++++++++
> drivers/gpu/drm/xe/xe_pxp_submit.h | 2 +
> drivers/gpu/drm/xe/xe_ring_ops.c | 4 +-
> 7 files changed, 149 insertions(+), 3 deletions(-)
> create mode 100644 drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
>
> diff --git a/drivers/gpu/drm/xe/instructions/xe_instr_defs.h b/drivers/gpu/drm/xe/instructions/xe_instr_defs.h
> index fd2ce7ace510..e559969468c4 100644
> --- a/drivers/gpu/drm/xe/instructions/xe_instr_defs.h
> +++ b/drivers/gpu/drm/xe/instructions/xe_instr_defs.h
> @@ -16,6 +16,7 @@
> #define XE_INSTR_CMD_TYPE GENMASK(31, 29)
> #define XE_INSTR_MI REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x0)
> #define XE_INSTR_GSC REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x2)
> +#define XE_INSTR_VIDEOPIPE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x3)
> #define XE_INSTR_GFXPIPE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x3)
> #define XE_INSTR_GFX_STATE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x4)
>
> diff --git a/drivers/gpu/drm/xe/instructions/xe_mfx_commands.h b/drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
> new file mode 100644
> index 000000000000..686ca3b1d9e8
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
> @@ -0,0 +1,29 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2024 Intel Corporation
> + */
> +
> +#ifndef _XE_MFX_COMMANDS_H_
> +#define _XE_MFX_COMMANDS_H_
> +
> +#include "instructions/xe_instr_defs.h"
> +
> +#define MFX_CMD_SUBTYPE REG_GENMASK(28, 27) /* A.K.A cmd pipe */
> +#define MFX_CMD_OPCODE REG_GENMASK(26, 24)
> +#define MFX_CMD_SUB_OPCODE REG_GENMASK(23, 16)
> +#define MFX_FLAGS_AND_LEN REG_GENMASK(15, 0)
> +
> +#define XE_MFX_INSTR(subtype, op, sub_op, flags) \
> + (XE_INSTR_VIDEOPIPE | \
> + REG_FIELD_PREP(MFX_CMD_SUBTYPE, subtype) | \
> + REG_FIELD_PREP(MFX_CMD_OPCODE, op) | \
> + REG_FIELD_PREP(MFX_CMD_SUB_OPCODE, sub_op) | \
> + REG_FIELD_PREP(MFX_FLAGS_AND_LEN, flags))
> +
> +#define MFX_WAIT XE_MFX_INSTR(1, 0, 0, 0)
> +#define MFX_WAIT_DW0_PXP_SYNC_CONTROL_FLAG REG_BIT(9)
> +#define MFX_WAIT_DW0_MFX_SYNC_CONTROL_FLAG REG_BIT(8)
> +
> +#define CRYPTO_KEY_EXCHANGE XE_MFX_INSTR(2, 6, 9, 0)
> +
> +#endif
> diff --git a/drivers/gpu/drm/xe/instructions/xe_mi_commands.h b/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
> index 10ec2920d31b..167fb0f742de 100644
> --- a/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
> +++ b/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
> @@ -48,6 +48,7 @@
> #define MI_LRI_LEN(x) (((x) & 0xff) + 1)
>
> #define MI_FLUSH_DW __MI_INSTR(0x26)
> +#define MI_FLUSH_DW_PROTECTED_MEM_EN REG_BIT(22)
> #define MI_FLUSH_DW_STORE_INDEX REG_BIT(21)
> #define MI_INVALIDATE_TLB REG_BIT(18)
> #define MI_FLUSH_DW_CCS REG_BIT(16)
> @@ -66,4 +67,8 @@
>
> #define MI_BATCH_BUFFER_START __MI_INSTR(0x31)
>
> +#define MI_SET_APPID __MI_INSTR(0x0e)
> +#define MI_SET_APPID_SESSION_ID_MASK REG_GENMASK(6, 0)
> +#define MI_SET_APPID_SESSION_ID(x) REG_FIELD_PREP(MI_SET_APPID_SESSION_ID_MASK, x)
> +
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
> index c24542e89318..d411c3fbcbc6 100644
> --- a/drivers/gpu/drm/xe/xe_lrc.h
> +++ b/drivers/gpu/drm/xe/xe_lrc.h
> @@ -20,7 +20,8 @@ struct xe_lrc;
> struct xe_lrc_snapshot;
> struct xe_vm;
>
> -#define LRC_PPHWSP_SCRATCH_ADDR (0x34 * 4)
> +#define LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR (0x34 * 4)
> +#define LRC_PPHWSP_PXP_INVAL_SCRATCH_ADDR (0x40 * 4)
>
> struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
> u32 ring_size);
> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
> index b777b0765c8a..3b69dcc0a00f 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_submit.c
> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
> @@ -6,14 +6,20 @@
> #include "xe_pxp_submit.h"
>
> #include <drm/xe_drm.h>
> +#include <linux/delay.h>
>
> #include "xe_device_types.h"
> +#include "xe_bb.h"
> #include "xe_bo.h"
> #include "xe_exec_queue.h"
> #include "xe_gsc_submit.h"
> #include "xe_gt.h"
> +#include "xe_lrc.h"
> #include "xe_pxp_types.h"
> +#include "xe_sched_job.h"
> #include "xe_vm.h"
> +#include "instructions/xe_mfx_commands.h"
> +#include "instructions/xe_mi_commands.h"
> #include "regs/xe_gt_regs.h"
>
> /*
> @@ -199,3 +205,105 @@ void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp)
> destroy_vcs_execution_resources(pxp);
> }
>
> +#define emit_cmd(xe_, map_, offset_, val_) \
> + xe_map_wr(xe_, map_, (offset_) * sizeof(u32), u32, val_)
> +
> +/* stall until prior PXP and MFX/HCP/HUC objects are cmopleted */
completed
> +#define MFX_WAIT_PXP (MFX_WAIT | \
> + MFX_WAIT_DW0_PXP_SYNC_CONTROL_FLAG | \
> + MFX_WAIT_DW0_MFX_SYNC_CONTROL_FLAG)
Why define an XE_MFX_INSTR macro that takes a flags word only to OR the
flags in manually outside the macro?
> +static u32 pxp_emit_wait(struct xe_device *xe, struct iosys_map *batch, u32 offset)
> +{
> + /* wait for cmds to go through */
> + emit_cmd(xe, batch, offset++, MFX_WAIT_PXP);
> + emit_cmd(xe, batch, offset++, 0);
This zero is just padding to ensure 64-bit alignment of subsequent instructions?
> +
> + return offset;
> +}
> +
> +static u32 pxp_emit_session_selection(struct xe_device *xe, struct iosys_map *batch,
> + u32 offset, u32 idx)
> +{
> + offset = pxp_emit_wait(xe, batch, offset);
> +
> + /* pxp off */
> + emit_cmd(xe, batch, offset++, MI_FLUSH_DW | MI_FLUSH_IMM_DW);
> + emit_cmd(xe, batch, offset++, 0);
> + emit_cmd(xe, batch, offset++, 0);
> + emit_cmd(xe, batch, offset++, 0);
> +
> + /* select session */
> + emit_cmd(xe, batch, offset++, MI_SET_APPID | MI_SET_APPID_SESSION_ID(idx));
> + emit_cmd(xe, batch, offset++, MFX_WAIT_PXP);
Seems odd to define a helper function to emit this instruction and then
only use it for some instances.
> +
> + /* pxp on */
> + emit_cmd(xe, batch, offset++, MI_FLUSH_DW |
> + MI_FLUSH_DW_PROTECTED_MEM_EN |
> + MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_DW_STORE_INDEX |
> + MI_FLUSH_IMM_DW);
> + emit_cmd(xe, batch, offset++, LRC_PPHWSP_PXP_INVAL_SCRATCH_ADDR |
> + MI_FLUSH_DW_USE_GTT);
> + emit_cmd(xe, batch, offset++, 0);
> + emit_cmd(xe, batch, offset++, 0);
> +
> + offset = pxp_emit_wait(xe, batch, offset);
> +
> + return offset;
> +}
> +
> +static u32 pxp_emit_inline_termination(struct xe_device *xe,
> + struct iosys_map *batch, u32 offset)
> +{
> + /* session inline termination */
> + emit_cmd(xe, batch, offset++, CRYPTO_KEY_EXCHANGE);
> + emit_cmd(xe, batch, offset++, 0);
> +
> + return offset;
> +}
> +
> +static u32 pxp_emit_session_termination(struct xe_device *xe, struct iosys_map *batch,
> + u32 offset, u32 idx)
> +{
> + offset = pxp_emit_session_selection(xe, batch, offset, idx);
> + offset = pxp_emit_inline_termination(xe, batch, offset);
> +
> + return offset;
> +}
> +
> +/**
> + * xe_pxp_submit_session_termination - submits a PXP inline termination
> + * @pxp: the xe_pxp structure
> + * @id: the session to terminate
> + *
> + * Emit an inline termination via the VCS engine to terminate a session.
> + *
> + * Returns 0 if the submission is successful, an errno value otherwise.
> + */
> +int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id)
> +{
> + struct xe_sched_job *job;
> + struct dma_fence *fence;
> + long timeout;
> + u32 offset = 0;
> + u64 addr = xe_bo_ggtt_addr(pxp->vcs_exec.bo);
> +
> + offset = pxp_emit_session_termination(pxp->xe, &pxp->vcs_exec.bo->vmap, offset, id);
> + offset = pxp_emit_wait(pxp->xe, &pxp->vcs_exec.bo->vmap, offset);
> + emit_cmd(pxp->xe, &pxp->vcs_exec.bo->vmap, offset, MI_BATCH_BUFFER_END);
> +
> + job = xe_sched_job_create(pxp->vcs_exec.q, &addr);
Double space.
> + if (IS_ERR(job))
> + return PTR_ERR(job);
> +
> + xe_sched_job_arm(job);
> + fence = dma_fence_get(&job->drm.s_fence->finished);
> + xe_sched_job_push(job);
> +
> + timeout = dma_fence_wait_timeout(fence, false, HZ);
> +
> + dma_fence_put(fence);
> + if (timeout <= 0)
> + return -EAGAIN;
Does it not matter what the error was? Why/how would this fail in a way
that needs to be re-tried?
Although looking at the later patches, the return value from this
function is just treated as a pass/fail bool anyway. So why bother
munging it at all?
John.
> +
> + return 0;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.h b/drivers/gpu/drm/xe/xe_pxp_submit.h
> index 1a971fadc081..4ee8c0acfed9 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_submit.h
> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.h
> @@ -13,4 +13,6 @@ struct xe_pxp;
> int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp);
> void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp);
>
> +int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id);
> +
> #endif /* __XE_PXP_SUBMIT_H__ */
> diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
> index 0be4f489d3e1..a4b5a0f68a32 100644
> --- a/drivers/gpu/drm/xe/xe_ring_ops.c
> +++ b/drivers/gpu/drm/xe/xe_ring_ops.c
> @@ -118,7 +118,7 @@ static int emit_flush_invalidate(u32 flag, u32 *dw, int i)
> dw[i++] |= MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW | MI_FLUSH_IMM_DW |
> MI_FLUSH_DW_STORE_INDEX;
>
> - dw[i++] = LRC_PPHWSP_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT;
> + dw[i++] = LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT;
> dw[i++] = 0;
> dw[i++] = ~0U;
>
> @@ -156,7 +156,7 @@ static int emit_pipe_invalidate(u32 mask_flags, bool invalidate_tlb, u32 *dw,
>
> flags &= ~mask_flags;
>
> - return emit_pipe_control(dw, i, 0, flags, LRC_PPHWSP_SCRATCH_ADDR, 0);
> + return emit_pipe_control(dw, i, 0, flags, LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR, 0);
> }
>
> static int emit_store_imm_ppgtt_posted(u64 addr, u64 value,
* Re: [PATCH v2 04/12] drm/xe/pxp: Add GSC session invalidation support
2024-08-16 19:00 ` [PATCH v2 04/12] drm/xe/pxp: Add GSC session invalidation support Daniele Ceraolo Spurio
@ 2024-10-07 20:05 ` John Harrison
2024-11-07 0:15 ` Daniele Ceraolo Spurio
0 siblings, 1 reply; 54+ messages in thread
From: John Harrison @ 2024-10-07 20:05 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
> After a session is terminated, we need to inform the GSC so that it can
> clean up its side of the allocation. This is done by sending an
> invalidation command with the session ID.
>
> Note that this patch is meant to be squashed with the follow-up patches
> that implement the other pieces of the termination flow. It is separate
> for now for ease of review.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> ---
> drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h | 12 +
> drivers/gpu/drm/xe/xe_pxp_submit.c | 215 ++++++++++++++++++
> drivers/gpu/drm/xe/xe_pxp_submit.h | 3 +
> 3 files changed, 230 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
> index f3c4cf10ba20..4a59c564a0d0 100644
> --- a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
> +++ b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
> @@ -49,6 +49,7 @@ struct pxp_cmd_header {
> u32 buffer_len;
> } __packed;
>
> +#define PXP43_CMDID_INVALIDATE_STREAM_KEY 0x00000007
> #define PXP43_CMDID_NEW_HUC_AUTH 0x0000003F /* MTL+ */
>
> /* PXP-Input-Packet: HUC Auth-only */
> @@ -63,4 +64,15 @@ struct pxp43_huc_auth_out {
> struct pxp_cmd_header header;
> } __packed;
>
> +/* PXP-Input-Packet: Invalidate Stream Key */
> +struct pxp43_inv_stream_key_in {
> + struct pxp_cmd_header header;
> + u32 rsvd[3];
> +} __packed;
> +
> +/* PXP-Output-Packet: Invalidate Stream Key */
> +struct pxp43_inv_stream_key_out {
> + struct pxp_cmd_header header;
> + u32 rsvd;
> +} __packed;
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
> index 3b69dcc0a00f..41684d666376 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_submit.c
> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
> @@ -15,9 +15,13 @@
> #include "xe_gsc_submit.h"
> #include "xe_gt.h"
> #include "xe_lrc.h"
> +#include "xe_map.h"
> #include "xe_pxp_types.h"
> #include "xe_sched_job.h"
> #include "xe_vm.h"
> +#include "abi/gsc_command_header_abi.h"
> +#include "abi/gsc_pxp_commands_abi.h"
> +#include "instructions/xe_gsc_commands.h"
> #include "instructions/xe_mfx_commands.h"
> #include "instructions/xe_mi_commands.h"
> #include "regs/xe_gt_regs.h"
> @@ -307,3 +311,214 @@ int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id)
>
> return 0;
> }
> +
> +static bool
> +is_fw_err_platform_config(u32 type)
> +{
> + switch (type) {
> + case PXP_STATUS_ERROR_API_VERSION:
> + case PXP_STATUS_PLATFCONFIG_KF1_NOVERIF:
> + case PXP_STATUS_PLATFCONFIG_KF1_BAD:
> + return true;
> + default:
> + break;
> + }
> + return false;
> +}
> +
> +static const char *
> +fw_err_to_string(u32 type)
> +{
> + switch (type) {
> + case PXP_STATUS_ERROR_API_VERSION:
> + return "ERR_API_VERSION";
> + case PXP_STATUS_NOT_READY:
> + return "ERR_NOT_READY";
> + case PXP_STATUS_PLATFCONFIG_KF1_NOVERIF:
Is it not worth having a separate string for this error?
> + case PXP_STATUS_PLATFCONFIG_KF1_BAD:
> + return "ERR_PLATFORM_CONFIG";
> + default:
> + break;
> + }
> + return NULL;
> +}
> +
> +static int pxp_pkt_submit(struct xe_exec_queue *q, u64 batch_addr)
> +{
> + struct xe_gt *gt = q->gt;
> + struct xe_device *xe = gt_to_xe(gt);
> + struct xe_sched_job *job;
> + struct dma_fence *fence;
> + long timeout;
> +
> + xe_assert(xe, q->hwe->engine_id == XE_HW_ENGINE_GSCCS0);
> +
> + job = xe_sched_job_create(q, &batch_addr);
Double space.
> + if (IS_ERR(job))
> + return PTR_ERR(job);
> +
> + xe_sched_job_arm(job);
> + fence = dma_fence_get(&job->drm.s_fence->finished);
> + xe_sched_job_push(job);
> +
> + timeout = dma_fence_wait_timeout(fence, false, HZ);
> + dma_fence_put(fence);
> + if (timeout < 0)
> + return timeout;
> + else if (!timeout)
> + return -ETIME;
> +
> + return 0;
> +}
> +
> +static void emit_pxp_heci_cmd(struct xe_device *xe, struct iosys_map *batch,
> + u64 addr_in, u32 size_in, u64 addr_out, u32 size_out)
> +{
> + u32 len = 0;
> +
> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32, GSC_HECI_CMD_PKT);
> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32, lower_32_bits(addr_in));
> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32, upper_32_bits(addr_in));
> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32, size_in);
> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32, lower_32_bits(addr_out));
> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32, upper_32_bits(addr_out));
> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32, size_out);
> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32, 0);
> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32, MI_BATCH_BUFFER_END);
> +}
> +
> +#define GSC_PENDING_RETRY_MAXCOUNT 40
> +#define GSC_PENDING_RETRY_PAUSE_MS 50
> +static int gsccs_send_message(struct xe_pxp_gsc_client_resources *gsc_res,
> + void *msg_in, size_t msg_in_size,
> + void *msg_out, size_t msg_out_size_max)
> +{
> + struct xe_device *xe = gsc_res->vm->xe;
> + const size_t max_msg_size = gsc_res->inout_size - sizeof(struct intel_gsc_mtl_header);
> + u32 wr_offset = 0;
> + u32 rd_offset = 0;
The initialisation is not necessary here. rd_offset has the appearance of
requiring it but doesn't really, and wr_offset is re-assigned almost
immediately.
> + u32 reply_size;
> + u32 min_reply_size = 0;
> + int ret = 0;
Also not necessary to be pre-initialised.
> + int retry = GSC_PENDING_RETRY_MAXCOUNT;
> +
> + if (msg_in_size > max_msg_size || msg_out_size_max > max_msg_size)
> + return -ENOSPC;
> +
> + wr_offset = xe_gsc_emit_header(xe, &gsc_res->msg_in, 0,
> + HECI_MEADDRESS_PXP,
> + gsc_res->host_session_handle,
> + msg_in_size);
> +
> + /* NOTE: zero size packets are used for session-cleanups */
> + if (msg_in && msg_in_size) {
> + xe_map_memcpy_to(xe, &gsc_res->msg_in, wr_offset,
> + msg_in, msg_in_size);
> + min_reply_size = sizeof(struct pxp_cmd_header);
> + }
> +
> + /* Make sure the reply header does not contain stale data */
> + xe_gsc_poison_header(xe, &gsc_res->msg_out, 0);
> +
> + emit_pxp_heci_cmd(xe, &gsc_res->batch, PXP_BB_SIZE,
> + wr_offset + msg_in_size, PXP_BB_SIZE + gsc_res->inout_size,
> + msg_out_size_max + wr_offset);
Is this correct? It is passing in the batch buffer allocation size as
the address. Shouldn't there be some kind of base address included? The
in/out buffer is after the BB in the same allocation but that allocation
is not guaranteed to be at address zero, is it?
Also, it would be more consistent to use 'wr_offset + out_size_max' to
match the input calculation rather than flipping the terms around.
> +
> + xe_device_wmb(xe);
> +
Might be worth a comment here to say why retries are required and how
many/how long is expected normally versus worst case?
> + do {
> + ret = pxp_pkt_submit(gsc_res->q, 0);
> + if (ret)
> + break;
> +
> + if (xe_gsc_check_and_update_pending(xe, &gsc_res->msg_in, 0,
> + &gsc_res->msg_out, 0)) {
> + ret = -EAGAIN;
> + msleep(GSC_PENDING_RETRY_PAUSE_MS);
> + }
> + } while (--retry && ret == -EAGAIN);
> +
> + if (ret) {
> + drm_err(&xe->drm, "failed to submit GSC PXP message: %d\n", ret);
> + return ret;
> + }
> +
> + ret = xe_gsc_read_out_header(xe, &gsc_res->msg_out, 0,
> + min_reply_size, &rd_offset);
> + if (ret) {
> + drm_err(&xe->drm, "invalid GSC reply for PXP (err=%d)\n", ret);
Should be %pe for the error code?
> + return ret;
> + }
> +
> + if (msg_out && min_reply_size) {
> + reply_size = xe_map_rd_field(xe, &gsc_res->msg_out, rd_offset,
> + struct pxp_cmd_header, buffer_len);
> + reply_size += sizeof(struct pxp_cmd_header);
> +
> + if (reply_size > msg_out_size_max) {
> + drm_warn(&xe->drm, "caller with insufficient PXP reply size %u (%ld)\n",
> + reply_size, msg_out_size_max);
I would maybe go with 'reply size overflow'. Took me a moment to work
out why 'size > max' becomes 'insufficient reply size'.
> + reply_size = msg_out_size_max;
Is it useful to return a partial message?
> + }
> +
> + xe_map_memcpy_from(xe, msg_out, &gsc_res->msg_out,
> + rd_offset, reply_size);
> + }
> +
> + xe_gsc_poison_header(xe, &gsc_res->msg_in, 0);
> +
> + return ret;
> +}
> +
> +/**
> + * xe_pxp_submit_session_invalidation - submits a PXP GSC invalidation
> + * @gsc_res: the pxp client resources
> + * @id: the session to invalidate
> + *
> + * Submit a message to the GSC FW to notify it that a session has been
> + * terminated and is therefore invalid.
> + *
> + * Returns 0 if the submission is successful, an errno value otherwise.
> + */
> +int xe_pxp_submit_session_invalidation(struct xe_pxp_gsc_client_resources *gsc_res,
> + u32 id)
Is this really over 100 columns if not wrapped?
> +{
> + struct xe_device *xe = gsc_res->vm->xe;
> + struct pxp43_inv_stream_key_in msg_in = {0};
> + struct pxp43_inv_stream_key_out msg_out = {0};
> + int ret = 0;
> +
> + /*
> + * Stream key invalidation reuses the same version 4.2 input/output
> + * command format but firmware requires 4.3 API interaction
> + */
> + msg_in.header.api_version = PXP_APIVER(4, 3);
> + msg_in.header.command_id = PXP43_CMDID_INVALIDATE_STREAM_KEY;
> + msg_in.header.buffer_len = sizeof(msg_in) - sizeof(msg_in.header);
> +
> + msg_in.header.stream_id = FIELD_PREP(PXP_CMDHDR_EXTDATA_SESSION_VALID, 1);
> + msg_in.header.stream_id |= FIELD_PREP(PXP_CMDHDR_EXTDATA_APP_TYPE, 0);
> + msg_in.header.stream_id |= FIELD_PREP(PXP_CMDHDR_EXTDATA_SESSION_ID, id);
> +
> + ret = gsccs_send_message(gsc_res, &msg_in, sizeof(msg_in),
> + &msg_out, sizeof(msg_out));
> + if (ret) {
> + drm_err(&xe->drm, "Failed to inv-stream-key-%u, ret=[%d]\n",
Would be clearer to say "failed to invalidate stream-key-%u"? The
message currently reads as "failed to <name-of-variable>", which doesn't
make much sense. Same comment for the other two prints below.
Also, %pe for the return code?
John.
> + id, ret);
> + } else if (msg_out.header.status != 0) {
> + if (is_fw_err_platform_config(msg_out.header.status)) {
> + drm_info_once(&xe->drm,
> + "PXP inv-stream-key-%u failed due to BIOS/SOC :0x%08x:%s\n",
> + id, msg_out.header.status,
> + fw_err_to_string(msg_out.header.status));
> + } else {
> + drm_dbg(&xe->drm, "PXP inv-stream-key-%u failed 0x%08x:%s:\n",
> + id, msg_out.header.status,
> + fw_err_to_string(msg_out.header.status));
> + drm_dbg(&xe->drm, " cmd-detail: ID=[0x%08x],API-Ver-[0x%08x]\n",
> + msg_in.header.command_id, msg_in.header.api_version);
> + }
> + }
> +
> + return ret;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.h b/drivers/gpu/drm/xe/xe_pxp_submit.h
> index 4ee8c0acfed9..48fdc9b09116 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_submit.h
> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.h
> @@ -9,10 +9,13 @@
> #include <linux/types.h>
>
> struct xe_pxp;
> +struct xe_pxp_gsc_client_resources;
>
> int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp);
> void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp);
>
> int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id);
> +int xe_pxp_submit_session_invalidation(struct xe_pxp_gsc_client_resources *gsc_res,
> + u32 id);
>
> #endif /* __XE_PXP_SUBMIT_H__ */
* Re: [PATCH v2 05/12] drm/xe/pxp: Handle the PXP termination interrupt
2024-08-16 19:00 ` [PATCH v2 05/12] drm/xe/pxp: Handle the PXP termination interrupt Daniele Ceraolo Spurio
@ 2024-10-08 0:34 ` John Harrison
2024-11-07 0:33 ` Daniele Ceraolo Spurio
0 siblings, 1 reply; 54+ messages in thread
From: John Harrison @ 2024-10-08 0:34 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
> When something happen to the session, the HW generates a termination
> interrupt. In reply to this, the driver is required to submit an inline
> session termination via the VCS, trigger the global termination and
> notify the GSC FW that the session is now invalid.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> ---
> drivers/gpu/drm/xe/regs/xe_gt_regs.h | 8 ++
> drivers/gpu/drm/xe/regs/xe_pxp_regs.h | 6 ++
> drivers/gpu/drm/xe/xe_irq.c | 20 +++-
> drivers/gpu/drm/xe/xe_pxp.c | 138 +++++++++++++++++++++++++-
> drivers/gpu/drm/xe/xe_pxp.h | 3 +
> drivers/gpu/drm/xe/xe_pxp_types.h | 13 +++
> 6 files changed, 184 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/regs/xe_gt_regs.h b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
> index 0d1a4a9f4e11..9e9c20f1f1f4 100644
> --- a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
> +++ b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
> @@ -570,6 +570,7 @@
> #define ENGINE1_MASK REG_GENMASK(31, 16)
> #define ENGINE0_MASK REG_GENMASK(15, 0)
> #define GPM_WGBOXPERF_INTR_ENABLE XE_REG(0x19003c, XE_REG_OPTION_VF)
> +#define CRYPTO_RSVD_INTR_ENABLE XE_REG(0x190040)
> #define GUNIT_GSC_INTR_ENABLE XE_REG(0x190044, XE_REG_OPTION_VF)
> #define CCS_RSVD_INTR_ENABLE XE_REG(0x190048, XE_REG_OPTION_VF)
>
> @@ -580,6 +581,7 @@
> #define INTR_ENGINE_INTR(x) REG_FIELD_GET(GENMASK(15, 0), x)
> #define OTHER_GUC_INSTANCE 0
> #define OTHER_GSC_HECI2_INSTANCE 3
> +#define OTHER_KCR_INSTANCE 4
> #define OTHER_GSC_INSTANCE 6
>
> #define IIR_REG_SELECTOR(x) XE_REG(0x190070 + ((x) * 4), XE_REG_OPTION_VF)
> @@ -591,6 +593,7 @@
> #define HECI2_RSVD_INTR_MASK XE_REG(0x1900e4)
> #define GUC_SG_INTR_MASK XE_REG(0x1900e8, XE_REG_OPTION_VF)
> #define GPM_WGBOXPERF_INTR_MASK XE_REG(0x1900ec, XE_REG_OPTION_VF)
> +#define CRYPTO_RSVD_INTR_MASK XE_REG(0x1900f0)
> #define GUNIT_GSC_INTR_MASK XE_REG(0x1900f4, XE_REG_OPTION_VF)
> #define CCS0_CCS1_INTR_MASK XE_REG(0x190100)
> #define CCS2_CCS3_INTR_MASK XE_REG(0x190104)
> @@ -605,4 +608,9 @@
> #define GT_CS_MASTER_ERROR_INTERRUPT REG_BIT(3)
> #define GT_RENDER_USER_INTERRUPT REG_BIT(0)
>
> +/* irqs for OTHER_KCR_INSTANCE */
> +#define KCR_PXP_STATE_TERMINATED_INTERRUPT REG_BIT(1)
> +#define KCR_APP_TERMINATED_PER_FW_REQ_INTERRUPT REG_BIT(2)
> +#define KCR_PXP_STATE_RESET_COMPLETE_INTERRUPT REG_BIT(3)
> +
> #endif
> diff --git a/drivers/gpu/drm/xe/regs/xe_pxp_regs.h b/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
> index d67cf210d23d..aa158938b42e 100644
> --- a/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
> +++ b/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
> @@ -14,4 +14,10 @@
> #define KCR_INIT XE_REG(0x3860f0)
> #define KCR_INIT_ALLOW_DISPLAY_ME_WRITES REG_BIT(14)
>
> +/* KCR hwdrm session in play status 0-31 */
> +#define KCR_SIP XE_REG(0x386260)
> +
> +/* PXP global terminate register for session termination */
> +#define KCR_GLOBAL_TERMINATE XE_REG(0x3860f8)
> +
> #endif /* __XE_PXP_REGS_H__ */
> diff --git a/drivers/gpu/drm/xe/xe_irq.c b/drivers/gpu/drm/xe/xe_irq.c
> index 5f2c368c35ad..f11d9a740627 100644
> --- a/drivers/gpu/drm/xe/xe_irq.c
> +++ b/drivers/gpu/drm/xe/xe_irq.c
> @@ -20,6 +20,7 @@
> #include "xe_hw_engine.h"
> #include "xe_memirq.h"
> #include "xe_mmio.h"
> +#include "xe_pxp.h"
> #include "xe_sriov.h"
>
> /*
> @@ -202,6 +203,15 @@ void xe_irq_enable_hwe(struct xe_gt *gt)
> }
> if (heci_mask)
> xe_mmio_write32(gt, HECI2_RSVD_INTR_MASK, ~(heci_mask << 16));
> +
> + if (xe_pxp_is_supported(xe)) {
> + u32 kcr_mask = KCR_PXP_STATE_TERMINATED_INTERRUPT |
> + KCR_APP_TERMINATED_PER_FW_REQ_INTERRUPT |
> + KCR_PXP_STATE_RESET_COMPLETE_INTERRUPT;
> +
> + xe_mmio_write32(gt, CRYPTO_RSVD_INTR_ENABLE, kcr_mask << 16);
> + xe_mmio_write32(gt, CRYPTO_RSVD_INTR_MASK, ~(kcr_mask << 16));
> + }
> }
> }
>
> @@ -324,9 +334,15 @@ static void gt_irq_handler(struct xe_tile *tile,
> }
>
> if (class == XE_ENGINE_CLASS_OTHER) {
> - /* HECI GSCFI interrupts come from outside of GT */
> + /*
> + * HECI GSCFI interrupts come from outside of GT.
> + * KCR irqs come from inside GT but are handled
> + * by the global PXP subsystem.
> + */
> if (HAS_HECI_GSCFI(xe) && instance == OTHER_GSC_INSTANCE)
> xe_heci_gsc_irq_handler(xe, intr_vec);
> + else if (instance == OTHER_KCR_INSTANCE)
> + xe_pxp_irq_handler(xe, intr_vec);
> else
> gt_other_irq_handler(engine_gt, instance, intr_vec);
> }
> @@ -512,6 +528,8 @@ static void gt_irq_reset(struct xe_tile *tile)
> xe_mmio_write32(mmio, GUNIT_GSC_INTR_ENABLE, 0);
> xe_mmio_write32(mmio, GUNIT_GSC_INTR_MASK, ~0);
> xe_mmio_write32(mmio, HECI2_RSVD_INTR_MASK, ~0);
> + xe_mmio_write32(mmio, CRYPTO_RSVD_INTR_ENABLE, 0);
> + xe_mmio_write32(mmio, CRYPTO_RSVD_INTR_MASK, ~0);
> }
>
> xe_mmio_write32(mmio, GPM_WGBOXPERF_INTR_ENABLE, 0);
> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
> index 56bb7d927c07..382eb0cb0018 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.c
> +++ b/drivers/gpu/drm/xe/xe_pxp.c
> @@ -12,9 +12,11 @@
> #include "xe_gt.h"
> #include "xe_gt_types.h"
> #include "xe_mmio.h"
> +#include "xe_pm.h"
> #include "xe_pxp_submit.h"
> #include "xe_pxp_types.h"
> #include "xe_uc_fw.h"
> +#include "regs/xe_gt_regs.h"
> #include "regs/xe_pxp_regs.h"
>
> /**
> @@ -25,11 +27,133 @@
> * integrated parts.
> */
>
> -static bool pxp_is_supported(const struct xe_device *xe)
> +#define ARB_SESSION 0xF /* TODO: move to UAPI */
> +
> +bool xe_pxp_is_supported(const struct xe_device *xe)
> {
> return xe->info.has_pxp && IS_ENABLED(CONFIG_INTEL_MEI_GSC_PROXY);
> }
>
> +static bool pxp_is_enabled(const struct xe_pxp *pxp)
> +{
> + return pxp;
> +}
> +
> +static int pxp_wait_for_session_state(struct xe_pxp *pxp, u32 id, bool in_play)
> +{
> + struct xe_gt *gt = pxp->gt;
> + u32 mask = BIT(id);
> + int ret;
> +
> + ret = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
> + if (ret)
> + return ret;
> +
> + ret = xe_mmio_wait32(gt, KCR_SIP, mask, in_play ? mask : 0,
> + 250, NULL, false);
> + xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
> +
> + return ret;
> +}
> +
> +static void pxp_terminate(struct xe_pxp *pxp)
> +{
> + int ret = 0;
> + struct xe_device *xe = pxp->xe;
> + struct xe_gt *gt = pxp->gt;
> +
> + drm_dbg(&xe->drm, "Terminating PXP\n");
> +
> + /* terminate the hw session */
> + ret = xe_pxp_submit_session_termination(pxp, ARB_SESSION);
> + if (ret)
> + goto out;
> +
> + ret = pxp_wait_for_session_state(pxp, ARB_SESSION, false);
> + if (ret)
> + goto out;
> +
> + /* Trigger full HW cleanup */
> + XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));
Why WARN here but no explicit message at all if the earlier force wake
fails? And is it safe to keep going if the fw did fail?
Also, given two identical, back-to-back fw get/put sets, would it not be
more efficient to have pxp_terminate do the get and share that across
the two register accesses? It would also remove the issue with a failed fw
halfway through causing problems due to not wanting to abort.
> + xe_mmio_write32(gt, KCR_GLOBAL_TERMINATE, 1);
BSpec description for KCR_GLOBAL_TERMINATE says you need to check
KCR_SIP_GCD rather than KCR_SIP_MEDIA for bits 0-15 being 0, whereas the
KCR_SIP being checked above is KCR_SIP_MEDIA only.
> + xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
> +
> + /* now we can tell the GSC to clean up its own state */
> + ret = xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
> +
> +out:
> + if (ret)
> + drm_err(&xe->drm, "PXP termination failed: %pe\n", ERR_PTR(ret));
> +}
> +
> +static void pxp_terminate_complete(struct xe_pxp *pxp)
> +{
> + /* TODO mark the session as ready to start */
> +}
> +
> +static void pxp_irq_work(struct work_struct *work)
> +{
> + struct xe_pxp *pxp = container_of(work, typeof(*pxp), irq.work);
> + struct xe_device *xe = pxp->xe;
> + u32 events = 0;
> +
> + spin_lock_irq(&xe->irq.lock);
> + events = pxp->irq.events;
> + pxp->irq.events = 0;
> + spin_unlock_irq(&xe->irq.lock);
> +
> + if (!events)
> + return;
> +
> + /*
> + * If we're processing a termination irq while suspending then don't
> + * bother, we're going to re-init everything on resume anyway.
> + */
> + if ((events & PXP_TERMINATION_REQUEST) && !xe_pm_runtime_get_if_active(xe))
> + return;
I assume it is not possible to have both REQUEST and COMPLETE set at the
same time? I.e. is it possible for this early exit to cause a lost
termination complete call?
John.
> +
> + if (events & PXP_TERMINATION_REQUEST) {
> + events &= ~PXP_TERMINATION_COMPLETE;
> + pxp_terminate(pxp);
> + }
> +
> + if (events & PXP_TERMINATION_COMPLETE)
> + pxp_terminate_complete(pxp);
> +
> + if (events & PXP_TERMINATION_REQUEST)
> + xe_pm_runtime_put(xe);
> +}
> +
> +/**
> + * xe_pxp_irq_handler - Handles PXP interrupts.
> + * @pxp: pointer to pxp struct
> + * @iir: interrupt vector
> + */
> +void xe_pxp_irq_handler(struct xe_device *xe, u16 iir)
> +{
> + struct xe_pxp *pxp = xe->pxp;
> +
> + if (!pxp_is_enabled(pxp)) {
> + drm_err(&xe->drm, "PXP irq 0x%x received with PXP disabled!\n", iir);
> + return;
> + }
> +
> + lockdep_assert_held(&xe->irq.lock);
> +
> + if (unlikely(!iir))
> + return;
> +
> + if (iir & (KCR_PXP_STATE_TERMINATED_INTERRUPT |
> + KCR_APP_TERMINATED_PER_FW_REQ_INTERRUPT))
> + pxp->irq.events |= PXP_TERMINATION_REQUEST;
> +
> + if (iir & KCR_PXP_STATE_RESET_COMPLETE_INTERRUPT)
> + pxp->irq.events |= PXP_TERMINATION_COMPLETE;
> +
> + if (pxp->irq.events)
> + queue_work(pxp->irq.wq, &pxp->irq.work);
> +}
> +
> static int kcr_pxp_set_status(const struct xe_pxp *pxp, bool enable)
> {
> u32 val = enable ? _MASKED_BIT_ENABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES) :
> @@ -60,6 +184,7 @@ static void pxp_fini(void *arg)
> {
> struct xe_pxp *pxp = arg;
>
> + destroy_workqueue(pxp->irq.wq);
> xe_pxp_destroy_execution_resources(pxp);
>
> /* no need to explicitly disable KCR since we're going to do an FLR */
> @@ -83,7 +208,7 @@ int xe_pxp_init(struct xe_device *xe)
> struct xe_pxp *pxp;
> int err;
>
> - if (!pxp_is_supported(xe))
> + if (!xe_pxp_is_supported(xe))
> return -EOPNOTSUPP;
>
> /* we only support PXP on single tile devices with a media GT */
> @@ -105,12 +230,17 @@ int xe_pxp_init(struct xe_device *xe)
> if (!pxp)
> return -ENOMEM;
>
> + INIT_WORK(&pxp->irq.work, pxp_irq_work);
> pxp->xe = xe;
> pxp->gt = gt;
>
> + pxp->irq.wq = alloc_ordered_workqueue("pxp-wq", 0);
> + if (!pxp->irq.wq)
> + return -ENOMEM;
> +
> err = kcr_pxp_enable(pxp);
> if (err)
> - return err;
> + goto out_wq;
>
> err = xe_pxp_allocate_execution_resources(pxp);
> if (err)
> @@ -122,5 +252,7 @@ int xe_pxp_init(struct xe_device *xe)
>
> kcr_disable:
> kcr_pxp_disable(pxp);
> +out_wq:
> + destroy_workqueue(pxp->irq.wq);
> return err;
> }
> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
> index 79c951667f13..81bafe2714ff 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.h
> +++ b/drivers/gpu/drm/xe/xe_pxp.h
> @@ -10,6 +10,9 @@
>
> struct xe_device;
>
> +bool xe_pxp_is_supported(const struct xe_device *xe);
> +
> int xe_pxp_init(struct xe_device *xe);
> +void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
>
> #endif /* __XE_PXP_H__ */
> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
> index 3463caaad101..d5cf8faed7be 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
> @@ -8,6 +8,7 @@
>
> #include <linux/iosys-map.h>
> #include <linux/types.h>
> +#include <linux/workqueue.h>
>
> struct xe_bo;
> struct xe_exec_queue;
> @@ -69,6 +70,18 @@ struct xe_pxp {
>
> /** @gsc_exec: kernel-owned objects for PXP submissions to the GSCCS */
> struct xe_pxp_gsc_client_resources gsc_res;
> +
> + /** @irq: wrapper for the worker and queue used for PXP irq support */
> + struct {
> + /** @irq.work: worker that manages irq events. */
> + struct work_struct work;
> + /** @irq.wq: workqueue on which to queue the irq work. */
> + struct workqueue_struct *wq;
> + /** @irq.events: pending events, protected with xe->irq.lock. */
> + u32 events;
> +#define PXP_TERMINATION_REQUEST BIT(0)
> +#define PXP_TERMINATION_COMPLETE BIT(1)
> + } irq;
> };
>
> #endif /* __XE_PXP_TYPES_H__ */
* Re: [PATCH v2 06/12] drm/xe/pxp: Add GSC session initialization support
2024-08-16 19:00 ` [PATCH v2 06/12] drm/xe/pxp: Add GSC session initialization support Daniele Ceraolo Spurio
@ 2024-10-08 18:43 ` John Harrison
2024-11-07 22:37 ` Daniele Ceraolo Spurio
0 siblings, 1 reply; 54+ messages in thread
From: John Harrison @ 2024-10-08 18:43 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
> A session is initialized (i.e. started) by sending a message to the GSC.
>
> Note that this patch is meant to be squashed with the follow-up patches
> that implement the other pieces of the session initialization and queue
> setup flow. It is separate for now for ease of review.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> ---
> drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h | 21 ++++++++
> drivers/gpu/drm/xe/xe_pxp_submit.c | 50 +++++++++++++++++++
> drivers/gpu/drm/xe/xe_pxp_submit.h | 1 +
> 3 files changed, 72 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
> index 4a59c564a0d0..734feb38f570 100644
> --- a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
> +++ b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
> @@ -50,6 +50,7 @@ struct pxp_cmd_header {
> } __packed;
>
> #define PXP43_CMDID_INVALIDATE_STREAM_KEY 0x00000007
> +#define PXP43_CMDID_INIT_SESSION 0x00000036
> #define PXP43_CMDID_NEW_HUC_AUTH 0x0000003F /* MTL+ */
>
> /* PXP-Input-Packet: HUC Auth-only */
> @@ -64,6 +65,26 @@ struct pxp43_huc_auth_out {
> struct pxp_cmd_header header;
> } __packed;
>
> +/* PXP-Input-Packet: Init PXP session */
> +struct pxp43_create_arb_in {
> + struct pxp_cmd_header header;
> + /* header.stream_id fields for version 4.3 of Init PXP session: */
> + #define PXP43_INIT_SESSION_VALID BIT(0)
> + #define PXP43_INIT_SESSION_APPTYPE BIT(1)
> + #define PXP43_INIT_SESSION_APPID GENMASK(17, 2)
> + u32 protection_mode;
> + #define PXP43_INIT_SESSION_PROTECTION_ARB 0x2
> + u32 sub_session_id;
> + u32 init_flags;
> + u32 rsvd[12];
> +} __packed;
> +
> +/* PXP-Output-Packet: Init PXP session */
> +struct pxp43_create_arb_out {
> + struct pxp_cmd_header header;
> + u32 rsvd[8];
> +} __packed;
> +
> /* PXP-Input-Packet: Invalidate Stream Key */
> struct pxp43_inv_stream_key_in {
> struct pxp_cmd_header header;
> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
> index 41684d666376..c9258c861556 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_submit.c
> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
> @@ -26,6 +26,8 @@
> #include "instructions/xe_mi_commands.h"
> #include "regs/xe_gt_regs.h"
>
> +#define ARB_SESSION 0xF /* TODO: move to UAPI */
This same define is now in two separate source files? Even if it can't
be moved to the UAPI header yet, it should at least be in an internal
header rather than being replicated.
> +
> /*
> * The VCS is used for kernel-owned GGTT submissions to issue key termination.
> * Terminations are serialized, so we only need a single queue and a single
> @@ -470,6 +472,54 @@ static int gsccs_send_message(struct xe_pxp_gsc_client_resources *gsc_res,
> return ret;
> }
>
> +/**
> + * xe_pxp_submit_session_init - submits a PXP GSC session initialization
> + * @gsc_res: the pxp client resources
> + * @id: the session to initialize
> + *
> + * Submit a message to the GSC FW to initialize (i.e. start) a PXP session.
> + *
> + * Returns 0 if the submission is successful, an errno value otherwise.
> + */
> +int xe_pxp_submit_session_init(struct xe_pxp_gsc_client_resources *gsc_res, u32 id)
> +{
> + struct xe_device *xe = gsc_res->vm->xe;
> + struct pxp43_create_arb_in msg_in = {0};
> + struct pxp43_create_arb_out msg_out = {0};
> + int ret;
> +
> + msg_in.header.api_version = PXP_APIVER(4, 3);
> + msg_in.header.command_id = PXP43_CMDID_INIT_SESSION;
> + msg_in.header.stream_id = (FIELD_PREP(PXP43_INIT_SESSION_APPID, id) |
> + FIELD_PREP(PXP43_INIT_SESSION_VALID, 1) |
> + FIELD_PREP(PXP43_INIT_SESSION_APPTYPE, 0));
> + msg_in.header.buffer_len = sizeof(msg_in) - sizeof(msg_in.header);
> +
> + if (id == ARB_SESSION)
> + msg_in.protection_mode = PXP43_INIT_SESSION_PROTECTION_ARB;
> +
> + ret = gsccs_send_message(gsc_res, &msg_in, sizeof(msg_in),
> + &msg_out, sizeof(msg_out));
> + if (ret) {
> + drm_err(&xe->drm, "Failed to init session %d, ret=[%d]\n", id, ret);
%pe for error code
> + } else if (msg_out.header.status != 0) {
> + if (is_fw_err_platform_config(msg_out.header.status)) {
> + drm_info_once(&xe->drm,
> + "PXP init-session-%d failed due to BIOS/SOC:0x%08x:%s\n",
Style mis-match - "init session %d" in the first error but then
"init-session-%d" in this one and the one below (I prefer the first one,
which actually looks like an operation rather than a variable).
> + id, msg_out.header.status,
> + fw_err_to_string(msg_out.header.status));
> + } else {
> + drm_dbg(&xe->drm, "PXP init-session-%d failed 0x%08x:%st:\n",
> + id, msg_out.header.status,
> + fw_err_to_string(msg_out.header.status));
> + drm_dbg(&xe->drm, " cmd-detail: ID=[0x%08x],API-Ver-[0x%08x]\n",
More mis-matching message styles - 'SOC:%s:%s' vs 'ID=[%x]', neither of
which is the normal format for kernel messages.
John.
> + msg_in.header.command_id, msg_in.header.api_version);
> + }
> + }
> +
> + return ret;
> +}
> +
> /**
> * xe_pxp_submit_session_invalidation - submits a PXP GSC invalidation
> * @gsc_res: the pxp client resources
> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.h b/drivers/gpu/drm/xe/xe_pxp_submit.h
> index 48fdc9b09116..c9efda02f4b0 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_submit.h
> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.h
> @@ -14,6 +14,7 @@ struct xe_pxp_gsc_client_resources;
> int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp);
> void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp);
>
> +int xe_pxp_submit_session_init(struct xe_pxp_gsc_client_resources *gsc_res, u32 id);
> int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id);
> int xe_pxp_submit_session_invalidation(struct xe_pxp_gsc_client_resources *gsc_res,
> u32 id);
* Re: [PATCH v2 07/12] drm/xe/pxp: Add spport for PXP-using queues
2024-08-16 19:00 ` [PATCH v2 07/12] drm/xe/pxp: Add spport for PXP-using queues Daniele Ceraolo Spurio
@ 2024-10-08 23:55 ` John Harrison
2024-11-07 23:57 ` Daniele Ceraolo Spurio
2024-10-09 10:07 ` Jani Nikula
1 sibling, 1 reply; 54+ messages in thread
From: John Harrison @ 2024-10-08 23:55 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
> Userspace is required to mark a queue as using PXP to guarantee that the
> PXP instructions will work. When a PXP queue is created, the driver will
> do the following:
> - Start the default PXP session if it is not already running;
> - set the relevant bits in the context control register;
> - assign an rpm ref to the queue to keep for its lifetime (this is
> required because PXP HWDRM sessions are killed by the HW suspend flow).
>
> When a PXP invalidation occurs, all the PXP queue will be killed.
"all the PXP queue" -> should be 'queues' or should not say 'all'?
> On submission of a valid PXP queue, the driver will validate all
> encrypted objects mapped to the VM to ensure they were encrypted with
> the current key.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> ---
> drivers/gpu/drm/xe/regs/xe_engine_regs.h | 1 +
> drivers/gpu/drm/xe/xe_exec_queue.c | 58 ++++-
> drivers/gpu/drm/xe/xe_exec_queue.h | 5 +
> drivers/gpu/drm/xe/xe_exec_queue_types.h | 8 +
> drivers/gpu/drm/xe/xe_hw_engine.c | 2 +-
> drivers/gpu/drm/xe/xe_lrc.c | 16 +-
> drivers/gpu/drm/xe/xe_lrc.h | 4 +-
> drivers/gpu/drm/xe/xe_pxp.c | 295 ++++++++++++++++++++++-
> drivers/gpu/drm/xe/xe_pxp.h | 7 +
> drivers/gpu/drm/xe/xe_pxp_submit.c | 4 +-
> drivers/gpu/drm/xe/xe_pxp_types.h | 26 ++
> include/uapi/drm/xe_drm.h | 40 ++-
> 12 files changed, 450 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/regs/xe_engine_regs.h b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
> index 81b71903675e..3692e887f503 100644
> --- a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
> +++ b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
> @@ -130,6 +130,7 @@
> #define RING_EXECLIST_STATUS_HI(base) XE_REG((base) + 0x234 + 4)
>
> #define RING_CONTEXT_CONTROL(base) XE_REG((base) + 0x244, XE_REG_OPTION_MASKED)
> +#define CTX_CTRL_PXP_ENABLE REG_BIT(10)
> #define CTX_CTRL_OAC_CONTEXT_ENABLE REG_BIT(8)
> #define CTX_CTRL_RUN_ALONE REG_BIT(7)
> #define CTX_CTRL_INDIRECT_RING_STATE_ENABLE REG_BIT(4)
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index e98e8794eddf..504ba4aa2357 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -22,6 +22,8 @@
> #include "xe_ring_ops_types.h"
> #include "xe_trace.h"
> #include "xe_vm.h"
> +#include "xe_pxp.h"
> +#include "xe_pxp_types.h"
>
> enum xe_exec_queue_sched_prop {
> XE_EXEC_QUEUE_JOB_TIMEOUT = 0,
> @@ -35,6 +37,8 @@ static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue
>
> static void __xe_exec_queue_free(struct xe_exec_queue *q)
> {
> + if (xe_exec_queue_uses_pxp(q))
> + xe_pxp_exec_queue_remove(gt_to_xe(q->gt)->pxp, q);
> if (q->vm)
> xe_vm_put(q->vm);
>
> @@ -73,6 +77,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
> q->ops = gt->exec_queue_ops;
> INIT_LIST_HEAD(&q->lr.link);
> INIT_LIST_HEAD(&q->multi_gt_link);
> + INIT_LIST_HEAD(&q->pxp.link);
>
> q->sched_props.timeslice_us = hwe->eclass->sched_props.timeslice_us;
> q->sched_props.preempt_timeout_us =
> @@ -107,6 +112,21 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
> {
> struct xe_vm *vm = q->vm;
> int i, err;
> + u32 flags = 0;
> +
> + /*
> + * PXP workloads executing on RCS or CCS must run in isolation (i.e. no
> + * other workload can use the EUs at the same time). On MTL this is done
> + * by setting the RUNALONE bit in the LRC, while starting on Xe2 there
> + * is a dedicated bit for it.
> + */
> + if (xe_exec_queue_uses_pxp(q) &&
> + (q->class == XE_ENGINE_CLASS_RENDER || q->class == XE_ENGINE_CLASS_COMPUTE)) {
> + if (GRAPHICS_VER(gt_to_xe(q->gt)) >= 20)
> + flags |= XE_LRC_CREATE_PXP;
> + else
> + flags |= XE_LRC_CREATE_RUNALONE;
> + }
>
> if (vm) {
> err = xe_vm_lock(vm, true);
> @@ -115,7 +135,7 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
> }
>
> for (i = 0; i < q->width; ++i) {
> - q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K);
> + q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K, flags);
> if (IS_ERR(q->lrc[i])) {
> err = PTR_ERR(q->lrc[i]);
> goto err_unlock;
> @@ -160,6 +180,17 @@ struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *v
> if (err)
> goto err_post_alloc;
>
> + /*
> + * we can only add the queue to the PXP list after the init is complete,
> + * because the PXP termination can call exec_queue_kill and that will
> + * go bad if the queue is only half-initialized.
> + */
Not following how this comment relates to this code block. The comment
implies there should be a wait of some kind.
> + if (xe_exec_queue_uses_pxp(q)) {
> + err = xe_pxp_exec_queue_add(xe->pxp, q);
> + if (err)
> + goto err_post_alloc;
> + }
> +
> return q;
>
> err_post_alloc:
> @@ -197,6 +228,9 @@ void xe_exec_queue_destroy(struct kref *ref)
> struct xe_exec_queue *q = container_of(ref, struct xe_exec_queue, refcount);
> struct xe_exec_queue *eq, *next;
>
> + if (xe_exec_queue_uses_pxp(q))
> + xe_pxp_exec_queue_remove(gt_to_xe(q->gt)->pxp, q);
> +
> xe_exec_queue_last_fence_put_unlocked(q);
> if (!(q->flags & EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD)) {
> list_for_each_entry_safe(eq, next, &q->multi_gt_list,
> @@ -343,6 +377,24 @@ static int exec_queue_set_timeslice(struct xe_device *xe, struct xe_exec_queue *
> return 0;
> }
>
> +static int
> +exec_queue_set_pxp_type(struct xe_device *xe, struct xe_exec_queue *q, u64 value)
> +{
> + BUILD_BUG_ON(DRM_XE_PXP_TYPE_NONE != 0);
Why a build bug for something that is a simple 'enum { X=0 }'? It's not
like there is some complex macro calculation that could be broken by
some seemingly unrelated change.
> +
> + if (value == DRM_XE_PXP_TYPE_NONE)
> + return 0;
This doesn't need to shut any existing PXP down? Is it not possible to
dynamically change the type?
> +
> + if (!xe_pxp_is_enabled(xe->pxp))
> + return -ENODEV;
> +
> + /* we only support HWDRM sessions right now */
> + if (XE_IOCTL_DBG(xe, value != DRM_XE_PXP_TYPE_HWDRM))
> + return -EINVAL;
> +
> + return xe_pxp_exec_queue_set_type(xe->pxp, q, DRM_XE_PXP_TYPE_HWDRM);
> +}
> +
> typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
> struct xe_exec_queue *q,
> u64 value);
> @@ -350,6 +402,7 @@ typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
> static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
> [DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY] = exec_queue_set_priority,
> [DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE] = exec_queue_set_timeslice,
> + [DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE] = exec_queue_set_pxp_type,
> };
>
> static int exec_queue_user_ext_set_property(struct xe_device *xe,
> @@ -369,7 +422,8 @@ static int exec_queue_user_ext_set_property(struct xe_device *xe,
> ARRAY_SIZE(exec_queue_set_property_funcs)) ||
> XE_IOCTL_DBG(xe, ext.pad) ||
> XE_IOCTL_DBG(xe, ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY &&
> - ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE))
> + ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE &&
> + ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE))
> return -EINVAL;
>
> idx = array_index_nospec(ext.property, ARRAY_SIZE(exec_queue_set_property_funcs));
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
> index ded77b0f3b90..7fa97719667a 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
> @@ -53,6 +53,11 @@ static inline bool xe_exec_queue_is_parallel(struct xe_exec_queue *q)
> return q->width > 1;
> }
>
> +static inline bool xe_exec_queue_uses_pxp(struct xe_exec_queue *q)
> +{
> + return q->pxp.type;
> +}
> +
> bool xe_exec_queue_is_lr(struct xe_exec_queue *q);
>
> bool xe_exec_queue_ring_full(struct xe_exec_queue *q);
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> index 1408b02eea53..28b56217f1df 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> @@ -130,6 +130,14 @@ struct xe_exec_queue {
> spinlock_t lock;
> } lr;
>
> + /** @pxp: PXP info tracking */
> + struct {
> + /** @pxp.type: PXP session type used by this queue */
> + u8 type;
> + /** @pxp.link: link into the list of PXP exec queues */
> + struct list_head link;
> + } pxp;
> +
> /** @ops: submission backend exec queue operations */
> const struct xe_exec_queue_ops *ops;
>
> diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
> index e195022ca836..469932e7d7a6 100644
> --- a/drivers/gpu/drm/xe/xe_hw_engine.c
> +++ b/drivers/gpu/drm/xe/xe_hw_engine.c
> @@ -557,7 +557,7 @@ static int hw_engine_init(struct xe_gt *gt, struct xe_hw_engine *hwe,
> goto err_name;
> }
>
> - hwe->kernel_lrc = xe_lrc_create(hwe, NULL, SZ_16K);
> + hwe->kernel_lrc = xe_lrc_create(hwe, NULL, SZ_16K, 0);
> if (IS_ERR(hwe->kernel_lrc)) {
> err = PTR_ERR(hwe->kernel_lrc);
> goto err_hwsp;
> diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
> index 974a9cd8c379..4f3e676db646 100644
> --- a/drivers/gpu/drm/xe/xe_lrc.c
> +++ b/drivers/gpu/drm/xe/xe_lrc.c
> @@ -893,7 +893,7 @@ static void xe_lrc_finish(struct xe_lrc *lrc)
> #define PVC_CTX_ACC_CTR_THOLD (0x2a + 1)
>
> static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
> - struct xe_vm *vm, u32 ring_size)
> + struct xe_vm *vm, u32 ring_size, u32 init_flags)
> {
> struct xe_gt *gt = hwe->gt;
> struct xe_tile *tile = gt_to_tile(gt);
> @@ -981,6 +981,16 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
> RING_CTL_SIZE(lrc->ring.size) | RING_VALID);
> }
>
> + if (init_flags & XE_LRC_CREATE_RUNALONE)
> + xe_lrc_write_ctx_reg(lrc, CTX_CONTEXT_CONTROL,
> + xe_lrc_read_ctx_reg(lrc, CTX_CONTEXT_CONTROL) |
> + _MASKED_BIT_ENABLE(CTX_CTRL_RUN_ALONE));
> +
> + if (init_flags & XE_LRC_CREATE_PXP)
> + xe_lrc_write_ctx_reg(lrc, CTX_CONTEXT_CONTROL,
> + xe_lrc_read_ctx_reg(lrc, CTX_CONTEXT_CONTROL) |
> + _MASKED_BIT_ENABLE(CTX_CTRL_PXP_ENABLE));
> +
> xe_lrc_write_ctx_reg(lrc, CTX_TIMESTAMP, 0);
>
> if (xe->info.has_asid && vm)
> @@ -1029,7 +1039,7 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
> * upon failure.
> */
> struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
> - u32 ring_size)
> + u32 ring_size, u32 flags)
> {
> struct xe_lrc *lrc;
> int err;
> @@ -1038,7 +1048,7 @@ struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
> if (!lrc)
> return ERR_PTR(-ENOMEM);
>
> - err = xe_lrc_init(lrc, hwe, vm, ring_size);
> + err = xe_lrc_init(lrc, hwe, vm, ring_size, flags);
> if (err) {
> kfree(lrc);
> return ERR_PTR(err);
> diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
> index d411c3fbcbc6..cc8091bba2a0 100644
> --- a/drivers/gpu/drm/xe/xe_lrc.h
> +++ b/drivers/gpu/drm/xe/xe_lrc.h
> @@ -23,8 +23,10 @@ struct xe_vm;
> #define LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR (0x34 * 4)
> #define LRC_PPHWSP_PXP_INVAL_SCRATCH_ADDR (0x40 * 4)
>
> +#define XE_LRC_CREATE_RUNALONE 0x1
> +#define XE_LRC_CREATE_PXP 0x2
> struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
> - u32 ring_size);
> + u32 ring_size, u32 flags);
> void xe_lrc_destroy(struct kref *ref);
>
> /**
> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
> index 382eb0cb0018..acdc25c8e8a1 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.c
> +++ b/drivers/gpu/drm/xe/xe_pxp.c
> @@ -6,11 +6,17 @@
> #include "xe_pxp.h"
>
> #include <drm/drm_managed.h>
> +#include <drm/xe_drm.h>
>
> #include "xe_device_types.h"
> +#include "xe_exec_queue.h"
> +#include "xe_exec_queue_types.h"
> #include "xe_force_wake.h"
> +#include "xe_guc_submit.h"
> +#include "xe_gsc_proxy.h"
> #include "xe_gt.h"
> #include "xe_gt_types.h"
> +#include "xe_huc.h"
> #include "xe_mmio.h"
> #include "xe_pm.h"
> #include "xe_pxp_submit.h"
> @@ -27,18 +33,45 @@
> * integrated parts.
> */
>
> -#define ARB_SESSION 0xF /* TODO: move to UAPI */
> +#define ARB_SESSION DRM_XE_PXP_HWDRM_DEFAULT_SESSION /* shorter define */
Is this really worthwhile?
>
> bool xe_pxp_is_supported(const struct xe_device *xe)
> {
> return xe->info.has_pxp && IS_ENABLED(CONFIG_INTEL_MEI_GSC_PROXY);
> }
>
> -static bool pxp_is_enabled(const struct xe_pxp *pxp)
> +bool xe_pxp_is_enabled(const struct xe_pxp *pxp)
> {
> return pxp;
> }
>
> +static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
> +{
> + bool ready;
> +
> + XE_WARN_ON(xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GSC));
Again, why warn on this fw and then proceed anyway when others silently
return an error code to the layer above?
> +
> + /* PXP requires both HuC authentication via GSC and GSC proxy initialized */
> + ready = xe_huc_is_authenticated(&pxp->gt->uc.huc, XE_HUC_AUTH_VIA_GSC) &&
> + xe_gsc_proxy_init_done(&pxp->gt->uc.gsc);
> +
> + xe_force_wake_put(gt_to_fw(pxp->gt), XE_FW_GSC);
> +
> + return ready;
> +}
> +
> +static bool pxp_session_is_in_play(struct xe_pxp *pxp, u32 id)
> +{
> + struct xe_gt *gt = pxp->gt;
> + u32 sip = 0;
> +
> + XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));
Same as above.
> + sip = xe_mmio_read32(gt, KCR_SIP);
> + xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
> +
> + return sip & BIT(id);
> +}
> +
> static int pxp_wait_for_session_state(struct xe_pxp *pxp, u32 id, bool in_play)
> {
> struct xe_gt *gt = pxp->gt;
> @@ -56,12 +89,30 @@ static int pxp_wait_for_session_state(struct xe_pxp *pxp, u32 id, bool in_play)
> return ret;
> }
>
> +static void pxp_invalidate_queues(struct xe_pxp *pxp);
> +
> static void pxp_terminate(struct xe_pxp *pxp)
> {
> int ret = 0;
> struct xe_device *xe = pxp->xe;
> struct xe_gt *gt = pxp->gt;
Should add a "lockdep_assert_held(pxp->mutex)" here?
>
> + pxp_invalidate_queues(pxp);
> +
> + /*
> + * If we have a termination already in progress, we need to wait for
> + * it to complete before queueing another one. We update the state
> + * to signal that another termination is required and leave it to the
> + * pxp_start() call to take care of it.
> + */
> + if (!completion_done(&pxp->termination)) {
> + pxp->status = XE_PXP_NEEDS_TERMINATION;
> + return;
> + }
> +
> + reinit_completion(&pxp->termination);
> + pxp->status = XE_PXP_TERMINATION_IN_PROGRESS;
> +
> drm_dbg(&xe->drm, "Terminating PXP\n");
>
> /* terminate the hw session */
> @@ -82,13 +133,32 @@ static void pxp_terminate(struct xe_pxp *pxp)
> ret = xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
>
> out:
> - if (ret)
> + if (ret) {
> drm_err(&xe->drm, "PXP termination failed: %pe\n", ERR_PTR(ret));
> + pxp->status = XE_PXP_ERROR;
> + complete_all(&pxp->termination);
> + }
> }
>
> static void pxp_terminate_complete(struct xe_pxp *pxp)
> {
> - /* TODO mark the session as ready to start */
> + /*
> + * We expect PXP to be in one of 2 states when we get here:
> + * - XE_PXP_TERMINATION_IN_PROGRESS: a single termination event was
> + * requested and it is now completing, so we're ready to start.
> + * - XE_PXP_NEEDS_TERMINATION: a second termination was requested while
> + * the first one was still being processed; we don't update the state
> + * in this case so the pxp_start code will automatically issue that
> + * second termination.
> + */
> + if (pxp->status == XE_PXP_TERMINATION_IN_PROGRESS)
> + pxp->status = XE_PXP_READY_TO_START;
> + else if (pxp->status != XE_PXP_NEEDS_TERMINATION)
> + drm_err(&pxp->xe->drm,
> + "PXP termination complete while status was %u\n",
> + pxp->status);
> +
> + complete_all(&pxp->termination);
> }
>
> static void pxp_irq_work(struct work_struct *work)
> @@ -112,6 +182,8 @@ static void pxp_irq_work(struct work_struct *work)
> if ((events & PXP_TERMINATION_REQUEST) && !xe_pm_runtime_get_if_active(xe))
> return;
>
> + mutex_lock(&pxp->mutex);
> +
> if (events & PXP_TERMINATION_REQUEST) {
> events &= ~PXP_TERMINATION_COMPLETE;
> pxp_terminate(pxp);
> @@ -120,6 +192,8 @@ static void pxp_irq_work(struct work_struct *work)
> if (events & PXP_TERMINATION_COMPLETE)
> pxp_terminate_complete(pxp);
>
> + mutex_unlock(&pxp->mutex);
> +
> if (events & PXP_TERMINATION_REQUEST)
> xe_pm_runtime_put(xe);
> }
> @@ -133,7 +207,7 @@ void xe_pxp_irq_handler(struct xe_device *xe, u16 iir)
> {
> struct xe_pxp *pxp = xe->pxp;
>
> - if (!pxp_is_enabled(pxp)) {
> + if (!xe_pxp_is_enabled(pxp)) {
> drm_err(&xe->drm, "PXP irq 0x%x received with PXP disabled!\n", iir);
> return;
> }
> @@ -230,10 +304,22 @@ int xe_pxp_init(struct xe_device *xe)
> if (!pxp)
> return -ENOMEM;
>
> + INIT_LIST_HEAD(&pxp->queues.list);
> + spin_lock_init(&pxp->queues.lock);
> INIT_WORK(&pxp->irq.work, pxp_irq_work);
> pxp->xe = xe;
> pxp->gt = gt;
>
> + /*
> + * we'll use the completion to check if there is a termination pending,
> + * so we start it as completed and we reinit it when a termination
> + * is triggered.
> + */
> + init_completion(&pxp->termination);
> + complete_all(&pxp->termination);
> +
> + mutex_init(&pxp->mutex);
> +
> pxp->irq.wq = alloc_ordered_workqueue("pxp-wq", 0);
> if (!pxp->irq.wq)
> return -ENOMEM;
> @@ -256,3 +342,202 @@ int xe_pxp_init(struct xe_device *xe)
> destroy_workqueue(pxp->irq.wq);
> return err;
> }
> +
> +static int __pxp_start_arb_session(struct xe_pxp *pxp)
> +{
> + int ret;
> +
> + if (pxp_session_is_in_play(pxp, ARB_SESSION))
> + return -EEXIST;
> +
> + ret = xe_pxp_submit_session_init(&pxp->gsc_res, ARB_SESSION);
> + if (ret) {
> + drm_err(&pxp->xe->drm, "Failed to init PXP arb session\n");
> + goto out;
> + }
> +
> + ret = pxp_wait_for_session_state(pxp, ARB_SESSION, true);
> + if (ret) {
> + drm_err(&pxp->xe->drm, "PXP ARB session failed to go in play\n");
> + goto out;
> + }
> +
> + drm_dbg(&pxp->xe->drm, "PXP ARB session is active\n");
> +
> +out:
> + if (!ret)
> + pxp->status = XE_PXP_ACTIVE;
> + else
> + pxp->status = XE_PXP_ERROR;
> +
> + return ret;
> +}
> +
> +/**
> + * xe_pxp_exec_queue_set_type - Mark a queue as using PXP
> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
> + * @q: the queue to mark as using PXP
> + * @type: the type of PXP session this queue will use
> + *
> + * Returns 0 if the selected PXP type is supported, -ENODEV otherwise.
> + */
> +int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 type)
> +{
> + if (!xe_pxp_is_enabled(pxp))
> + return -ENODEV;
> +
> + /* we only support HWDRM sessions right now */
> + xe_assert(pxp->xe, type == DRM_XE_PXP_TYPE_HWDRM);
> +
> + q->pxp.type = type;
> +
> + return 0;
> +}
> +
> +/**
> + * xe_pxp_exec_queue_add - add a queue to the PXP list
> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
> + * @q: the queue to add to the list
> + *
> + * If PXP is enabled and the prerequisites are done, start the PXP ARB
> + * session (if not already running) and add the queue to the PXP list. Note
> + * that the queue must have previously been marked as using PXP with
> + * xe_pxp_exec_queue_set_type.
> + *
> + * Returns 0 if the PXP ARB session is running and the queue is in the list,
> + * -ENODEV if PXP is disabled, -EBUSY if the PXP prerequisites are not done,
> + * other errno value if something goes wrong during the session start.
> + */
> +#define PXP_TERMINATION_TIMEOUT_MS 500
> +int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q)
> +{
> + int ret = 0;
> +
> + if (!xe_pxp_is_enabled(pxp))
> + return -ENODEV;
> +
> + /* we only support HWDRM sessions right now */
> + xe_assert(pxp->xe, q->pxp.type == DRM_XE_PXP_TYPE_HWDRM);
> +
> + /*
> + * Runtime suspend kills PXP, so we need to turn it off while we have
> + * active queues that use PXP
> + */
> + xe_pm_runtime_get(pxp->xe);
> +
> + if (!pxp_prerequisites_done(pxp)) {
> + ret = -EBUSY;
Wouldn't EAGAIN be more appropriate? The pre-reqs here are the GSC
firmware load, which is guaranteed to be in progress or done (or dead?),
in which case it is just a matter of re-trying until the firmware init
completes?
> + goto out;
> + }
> +
> +wait_for_termination:
> + /*
> + * if there is a termination in progress, wait for it.
> + * We need to wait outside the lock because the completion is done from
> + * within the lock
> + */
> + if (!wait_for_completion_timeout(&pxp->termination,
> + msecs_to_jiffies(PXP_TERMINATION_TIMEOUT_MS)))
> + return -ETIMEDOUT;
> +
> + mutex_lock(&pxp->mutex);
> +
> + /*
> + * check if a new termination was issued between the above check and
> + * grabbing the mutex
> + */
> + if (!completion_done(&pxp->termination)) {
> + mutex_unlock(&pxp->mutex);
> + goto wait_for_termination;
> + }
> +
> + /* If PXP is not already active, turn it on */
> + switch (pxp->status) {
> + case XE_PXP_ERROR:
> + ret = -EIO;
> + break;
> + case XE_PXP_ACTIVE:
> + break;
> + case XE_PXP_READY_TO_START:
> + ret = __pxp_start_arb_session(pxp);
> + break;
> + case XE_PXP_NEEDS_TERMINATION:
> + pxp_terminate(pxp);
> + mutex_unlock(&pxp->mutex);
> + goto wait_for_termination;
> + default:
> + drm_err(&pxp->xe->drm, "unexpected state during PXP start: %u", pxp->status);
> + ret = -EIO;
> + break;
> + }
> +
> + /* If everything went ok, add the queue to the list */
> + if (!ret) {
> + spin_lock_irq(&pxp->queues.lock);
> + list_add_tail(&q->pxp.link, &pxp->queues.list);
> + spin_unlock_irq(&pxp->queues.lock);
> + }
> +
> + mutex_unlock(&pxp->mutex);
> +
> +out:
> + /*
> + * in the successful case the PM ref is released from
> + * xe_pxp_exec_queue_remove
> + */
> + if (ret)
> + xe_pm_runtime_put(pxp->xe);
Does the runtime PM get/put need to be mutex protected as well? Is it
possible for two xe_pxp_exec_queue_add() calls to be running concurrently?
> +
> + return ret;
> +}
> +
> +/**
> + * xe_pxp_exec_queue_remove - remove a queue from the PXP list
> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
> + * @q: the queue to remove from the list
> + *
> + * If PXP is enabled and the exec_queue is in the list, the queue will be
> + * removed from the list and its PM reference will be released. It is safe to
> + * call this function multiple times for the same queue.
> + */
> +void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q)
> +{
> + bool need_pm_put = false;
> +
> + if (!xe_pxp_is_enabled(pxp))
> + return;
> +
> + spin_lock_irq(&pxp->queues.lock);
> +
> + if (!list_empty(&q->pxp.link)) {
> + list_del_init(&q->pxp.link);
> + need_pm_put = true;
> + }
> +
> + q->pxp.type = DRM_XE_PXP_TYPE_NONE;
> +
> + spin_unlock_irq(&pxp->queues.lock);
> +
> + if (need_pm_put)
> + xe_pm_runtime_put(pxp->xe);
> +}
> +
> +static void pxp_invalidate_queues(struct xe_pxp *pxp)
> +{
> + struct xe_exec_queue *tmp, *q;
> +
> + spin_lock_irq(&pxp->queues.lock);
> +
> + list_for_each_entry(tmp, &pxp->queues.list, pxp.link) {
Double space.
> + q = xe_exec_queue_get_unless_zero(tmp);
> +
> + if (!q)
> + continue;
> +
> + xe_exec_queue_kill(q);
> + xe_exec_queue_put(q);
> + }
This doesn't need to empty the list out as well?
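If it should, something like the below (untested, with `next` being a second struct xe_exec_queue pointer for the _safe variant) would do it under the same lock:

```
	list_for_each_entry_safe(tmp, next, &pxp->queues.list, pxp.link) {
		q = xe_exec_queue_get_unless_zero(tmp);
		if (!q)
			continue;

		list_del_init(&tmp->pxp.link);
		xe_exec_queue_kill(q);
		xe_exec_queue_put(q);
	}
```

Although note that unlinking here would also make xe_pxp_exec_queue_remove() see an empty link and skip its xe_pm_runtime_put(), so the PM ref handling would need adjusting too if you go that way.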
> +
> + spin_unlock_irq(&pxp->queues.lock);
> +}
> +
> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
> index 81bafe2714ff..2e0ab186072a 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.h
> +++ b/drivers/gpu/drm/xe/xe_pxp.h
> @@ -9,10 +9,17 @@
> #include <linux/types.h>
>
> struct xe_device;
> +struct xe_exec_queue;
> +struct xe_pxp;
>
> bool xe_pxp_is_supported(const struct xe_device *xe);
> +bool xe_pxp_is_enabled(const struct xe_pxp *pxp);
>
> int xe_pxp_init(struct xe_device *xe);
> void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
>
> +int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 type);
> +int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q);
> +void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q);
> +
> #endif /* __XE_PXP_H__ */
> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
> index c9258c861556..becffa6dfd4c 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_submit.c
> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
> @@ -26,8 +26,6 @@
> #include "instructions/xe_mi_commands.h"
> #include "regs/xe_gt_regs.h"
>
> -#define ARB_SESSION 0xF /* TODO: move to UAPI */
> -
> /*
> * The VCS is used for kernel-owned GGTT submissions to issue key termination.
> * Terminations are serialized, so we only need a single queue and a single
> @@ -495,7 +493,7 @@ int xe_pxp_submit_session_init(struct xe_pxp_gsc_client_resources *gsc_res, u32
> FIELD_PREP(PXP43_INIT_SESSION_APPTYPE, 0));
> msg_in.header.buffer_len = sizeof(msg_in) - sizeof(msg_in.header);
>
> - if (id == ARB_SESSION)
> + if (id == DRM_XE_PXP_HWDRM_DEFAULT_SESSION)
Would have been clearer to just use the correct name from the start.
> msg_in.protection_mode = PXP43_INIT_SESSION_PROTECTION_ARB;
>
> ret = gsccs_send_message(gsc_res, &msg_in, sizeof(msg_in),
> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
> index d5cf8faed7be..eb6a0183320a 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
> @@ -6,7 +6,10 @@
> #ifndef __XE_PXP_TYPES_H__
> #define __XE_PXP_TYPES_H__
>
> +#include <linux/completion.h>
> #include <linux/iosys-map.h>
> +#include <linux/mutex.h>
> +#include <linux/spinlock.h>
> #include <linux/types.h>
> #include <linux/workqueue.h>
>
> @@ -16,6 +19,14 @@ struct xe_device;
> struct xe_gt;
> struct xe_vm;
>
> +enum xe_pxp_status {
> + XE_PXP_ERROR = -1,
> + XE_PXP_NEEDS_TERMINATION = 0, /* starting status */
> + XE_PXP_TERMINATION_IN_PROGRESS,
> + XE_PXP_READY_TO_START,
> + XE_PXP_ACTIVE
> +};
> +
> /**
> * struct xe_pxp_gsc_client_resources - resources for GSC submission by a PXP
> * client. The GSC FW supports multiple GSC clients active at the same time.
> @@ -82,6 +93,21 @@ struct xe_pxp {
> #define PXP_TERMINATION_REQUEST BIT(0)
> #define PXP_TERMINATION_COMPLETE BIT(1)
> } irq;
> +
> + /** @mutex: protects the pxp status and the queue list */
> + struct mutex mutex;
> + /** @status: the current pxp status */
> + enum xe_pxp_status status;
> + /** @termination: completion struct that tracks terminations */
> + struct completion termination;
> +
> + /** @queues: management of exec_queues that use PXP */
> + struct {
> + /** @queues.lock: spinlock protecting the queue management */
> + spinlock_t lock;
> + /** @queues.list: list of exec_queues that use PXP */
> + struct list_head list;
> + } queues;
> };
>
> #endif /* __XE_PXP_TYPES_H__ */
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index b6fbe4988f2e..5f4d08123672 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -1085,6 +1085,24 @@ struct drm_xe_vm_bind {
> /**
> * struct drm_xe_exec_queue_create - Input of &DRM_IOCTL_XE_EXEC_QUEUE_CREATE
> *
> + * This ioctl supports setting the following properties via the
> + * %DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY extension, which uses the
> + * generic @drm_xe_ext_set_property struct:
> + *
> + * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY - set the queue priority.
> + * CAP_SYS_NICE is required to set a value above normal.
> + * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE - set the queue timeslice
> + * duration.
Units would be helpful.
> + * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE - set the type of PXP session
> + * this queue will be used with. Valid values are listed in enum
> + * drm_xe_pxp_session_type. %DRM_XE_PXP_TYPE_NONE is the default behavior, so
> + * there is no need to explicitly set that. When a queue of type
> + * %DRM_XE_PXP_TYPE_HWDRM is created, the PXP default HWDRM session
> + * (%XE_PXP_HWDRM_DEFAULT_SESSION) will be started, if it isn't already running.
> + * Given that going into a power-saving state kills PXP HWDRM sessions,
> + * runtime PM will be blocked while queues of this type are alive.
> + * All PXP queues will be killed if a PXP invalidation event occurs.
Seems odd to say 'values are listed in ...' and then go on to describe
each type and provide extra information about them. Seems like the extra
details should be part of the enum documentation instead of here?
John.
> + *
> * The example below shows how to use @drm_xe_exec_queue_create to create
> * a simple exec_queue (no parallel submission) of class
> * &DRM_XE_ENGINE_CLASS_RENDER.
> @@ -1108,7 +1126,7 @@ struct drm_xe_exec_queue_create {
> #define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
> -
> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE 2
> /** @extensions: Pointer to the first extension struct, if any */
> __u64 extensions;
>
> @@ -1694,6 +1712,26 @@ struct drm_xe_oa_stream_info {
> __u64 reserved[3];
> };
>
> +/**
> + * enum drm_xe_pxp_session_type - Supported PXP session types.
> + *
> + * We currently only support HWDRM sessions, which are used for protected
> + * content that ends up being displayed, but the HW supports multiple types, so
> + * we might extend support in the future.
> + */
> +enum drm_xe_pxp_session_type {
> + /** @DRM_XE_PXP_TYPE_NONE: PXP not used */
> + DRM_XE_PXP_TYPE_NONE = 0,
> + /**
> + * @DRM_XE_PXP_TYPE_HWDRM: HWDRM sessions are used for content that ends
> + * up on the display.
> + */
> + DRM_XE_PXP_TYPE_HWDRM = 1,
> +};
> +
> +/* ID of the protected content session managed by Xe when PXP is active */
> +#define DRM_XE_PXP_HWDRM_DEFAULT_SESSION 0xf
> +
> #if defined(__cplusplus)
> }
> #endif
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 08/12] drm/xe/pxp: add a query for PXP status
2024-08-16 19:00 ` [PATCH v2 08/12] drm/xe/pxp: add a query for PXP status Daniele Ceraolo Spurio
@ 2024-10-09 0:09 ` John Harrison
2024-11-12 21:29 ` Daniele Ceraolo Spurio
0 siblings, 1 reply; 54+ messages in thread
From: John Harrison @ 2024-10-09 0:09 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe; +Cc: José Roberto de Souza
On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
> PXP prerequisites (SW proxy and HuC auth via GSC) are completed
> asynchronously from driver load, which means that userspace can start
> submitting before we're ready to start a PXP session. Therefore, we need
> a query that userspace can use to check not only if PXP is supported by
by -> but?
> also to wait until the prerequisites are done.
>
> v2: Improve doc, do not report TYPE_NONE as supported (José)
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> Cc: José Roberto de Souza <jose.souza@intel.com>
> ---
> drivers/gpu/drm/xe/xe_pxp.c | 33 +++++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_pxp.h | 1 +
> drivers/gpu/drm/xe/xe_query.c | 32 ++++++++++++++++++++++++++++++++
> include/uapi/drm/xe_drm.h | 35 +++++++++++++++++++++++++++++++++++
> 4 files changed, 101 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
> index acdc25c8e8a1..ca4302af4ced 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.c
> +++ b/drivers/gpu/drm/xe/xe_pxp.c
> @@ -60,6 +60,39 @@ static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
> return ready;
> }
>
> +/**
> + * xe_pxp_get_readiness_status - check whether PXP is ready for userspace use
> + * @pxp: the xe_pxp pointer (can be NULL if PXP is disabled)
> + *
> + * This function is used for status query from userspace, so the returned value
value -> values
> + * follow the uapi (see drm_xe_query_pxp_status)
> + *
> + * Returns: 0 if PXP is not ready yet, 1 if it is ready, an errno value if PXP
> + * is not supported/enabled or if something went wrong in the initialization of
> + * the prerequisites.
You have two independent statements regarding the return code. Would be
better to just have the "Returns: ..." paragraph but include a statement
that these values are as defined in the UAPI.
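E.g. something like (wording only a sketch):

```
 * Returns: the PXP readiness status, using the uapi values from
 * &struct drm_xe_query_pxp_status: 0 if PXP is not ready yet, 1 if it is
 * ready, or a negative errno if PXP is not supported/enabled or if the
 * initialization of the prerequisites failed.
```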
> + */
> +int xe_pxp_get_readiness_status(struct xe_pxp *pxp)
> +{
> + int ret = 0;
> +
> + if (!xe_pxp_is_enabled(pxp))
> + return -ENODEV;
> +
> + /* if the GSC or HuC FW are in an error state, PXP will never work */
> + if (xe_uc_fw_status_to_error(pxp->gt->uc.huc.fw.status) ||
> + xe_uc_fw_status_to_error(pxp->gt->uc.gsc.fw.status))
> + return -EIO;
> +
> + xe_pm_runtime_get(pxp->xe);
> +
> + /* PXP requires both HuC loaded and GSC proxy initialized */
> + if (pxp_prerequisites_done(pxp))
> + ret = 1;
> +
> + xe_pm_runtime_put(pxp->xe);
> + return ret;
> +}
> +
> static bool pxp_session_is_in_play(struct xe_pxp *pxp, u32 id)
> {
> struct xe_gt *gt = pxp->gt;
> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
> index 2e0ab186072a..868813cc84b9 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.h
> +++ b/drivers/gpu/drm/xe/xe_pxp.h
> @@ -14,6 +14,7 @@ struct xe_pxp;
>
> bool xe_pxp_is_supported(const struct xe_device *xe);
> bool xe_pxp_is_enabled(const struct xe_pxp *pxp);
> +int xe_pxp_get_readiness_status(struct xe_pxp *pxp);
>
> int xe_pxp_init(struct xe_device *xe);
> void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
> diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
> index 73ef6e4c2dc9..a1e297234972 100644
> --- a/drivers/gpu/drm/xe/xe_query.c
> +++ b/drivers/gpu/drm/xe/xe_query.c
> @@ -22,6 +22,7 @@
> #include "xe_guc_hwconfig.h"
> #include "xe_macros.h"
> #include "xe_mmio.h"
> +#include "xe_pxp.h"
> #include "xe_ttm_vram_mgr.h"
>
> static const u16 xe_to_user_engine_class[] = {
> @@ -680,6 +681,36 @@ static int query_oa_units(struct xe_device *xe,
> return ret ? -EFAULT : 0;
> }
>
> +static int query_pxp_status(struct xe_device *xe, struct drm_xe_device_query *query)
> +{
> + struct drm_xe_query_pxp_status __user *query_ptr = u64_to_user_ptr(query->data);
> + size_t size = sizeof(struct drm_xe_query_pxp_status);
> + struct drm_xe_query_pxp_status resp;
> + int ret;
> +
> + if (query->size == 0) {
> + query->size = size;
> + return 0;
> + } else if (XE_IOCTL_DBG(xe, query->size != size)) {
Do we not allow structures to grow in future versions? In a backwards
compatible way, that is.
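The usual forward-compatible pattern is to treat the queried size as a minimum rather than requiring an exact match, so an old kernel can still service a newer, larger struct and just fill the part it knows about. A standalone sketch of that handshake (hypothetical helper, not the actual driver code):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/*
 * Hypothetical model of the query-size handshake: size 0 reports the
 * kernel's struct size; otherwise the call succeeds as long as the
 * user buffer is at least that big, and only kernel_size bytes would
 * ever be written back.
 */
static int negotiate_query_size(size_t *user_size, size_t kernel_size)
{
	if (*user_size == 0) {
		/* First call: report the size the kernel would fill. */
		*user_size = kernel_size;
		return 0;
	}

	/* Accept any buffer at least as large as what we know about. */
	if (*user_size < kernel_size)
		return -EINVAL;

	return 0;
}
```

Whether that flexibility is wanted for this query is of course a uapi decision; the current code's exact-match check would reject a future, grown struct from newer userspace.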
> + return -EINVAL;
> + }
> +
> + if (copy_from_user(&resp, query_ptr, size))
> + return -EFAULT;
Why copy in the data from the user side only to overwrite everything in
the structure?
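Agreed; a zero init instead (untested) would also avoid copying any uninitialized stack bytes back out via the later copy_to_user():

```
	struct drm_xe_query_pxp_status resp = {};
```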
> +
> + ret = xe_pxp_get_readiness_status(xe->pxp);
> + if (ret < 0)
> + return ret;
> +
> + resp.status = ret;
> + resp.supported_session_types = BIT(DRM_XE_PXP_TYPE_HWDRM);
> +
> + if (copy_to_user(query_ptr, &resp, size))
> + return -EFAULT;
> +
> + return 0;
> +}
> +
> static int (* const xe_query_funcs[])(struct xe_device *xe,
> struct drm_xe_device_query *query) = {
> query_engines,
> @@ -691,6 +722,7 @@ static int (* const xe_query_funcs[])(struct xe_device *xe,
> query_engine_cycles,
> query_uc_fw_version,
> query_oa_units,
> + query_pxp_status,
> };
>
> int xe_query_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index 5f4d08123672..9972ceb3fbfb 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -627,6 +627,39 @@ struct drm_xe_query_uc_fw_version {
> __u64 reserved;
> };
>
> +/**
> + * struct drm_xe_query_pxp_status - query if PXP is ready
> + *
> + * If PXP is enabled and no fatal error as occurred, the status will be set to
as -> has
> + * one of the following values:
> + * 0: PXP init still in progress
> + * 1: PXP init complete
> + *
> + * If PXP is not enabled or something has gone wrong, the query will be failed
> + * with one of the following error codes:
> + * -ENODEV: PXP not supported or disabled;
> + * -EIO: fatal error occurred during init, so PXP will never be enabled;
> + * -EINVAL: incorrect value provided as part of the query;
> + * -EFAULT: error copying the memory between kernel and userspace.
Currently, could also be copying from user to kernel. Although that copy
seems unnecessary.
John.
> + *
> + * The status can only be 0 in the first few seconds after driver load. If
> + * everything works as expected, the status will transition to init complete in
> + * less than 1 second, while in case of errors the driver might take longer to
> + * start returning an error code, but it should still take less than 10 seconds.
> + *
> + * The supported session type bitmask is based on the values in
> + * enum drm_xe_pxp_session_type. TYPE_NONE is always supported and therefore
> + * is not reported in the bitmask.
> + *
> + */
> +struct drm_xe_query_pxp_status {
> + /** @status: current PXP status */
> + __u32 status;
> +
> + /** @supported_session_types: bitmask of supported PXP session types */
> + __u32 supported_session_types;
> +};
> +
> /**
> * struct drm_xe_device_query - Input of &DRM_IOCTL_XE_DEVICE_QUERY - main
> * structure to query device information
> @@ -646,6 +679,7 @@ struct drm_xe_query_uc_fw_version {
> * attributes.
> * - %DRM_XE_DEVICE_QUERY_GT_TOPOLOGY
> * - %DRM_XE_DEVICE_QUERY_ENGINE_CYCLES
> + * - %DRM_XE_DEVICE_QUERY_PXP_STATUS
> *
> * If size is set to 0, the driver fills it with the required size for
> * the requested type of data to query. If size is equal to the required
> @@ -698,6 +732,7 @@ struct drm_xe_device_query {
> #define DRM_XE_DEVICE_QUERY_ENGINE_CYCLES 6
> #define DRM_XE_DEVICE_QUERY_UC_FW_VERSION 7
> #define DRM_XE_DEVICE_QUERY_OA_UNITS 8
> +#define DRM_XE_DEVICE_QUERY_PXP_STATUS 9
> /** @query: The type of data to query */
> __u32 query;
>
* Re: [PATCH v2 09/12] drm/xe/pxp: Add API to mark a BO as using PXP
2024-08-16 19:00 ` [PATCH v2 09/12] drm/xe/pxp: Add API to mark a BO as using PXP Daniele Ceraolo Spurio
@ 2024-10-09 0:42 ` John Harrison
2024-11-12 22:23 ` Daniele Ceraolo Spurio
0 siblings, 1 reply; 54+ messages in thread
From: John Harrison @ 2024-10-09 0:42 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe; +Cc: Matthew Brost, Thomas Hellström
On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
> The driver needs to know if a BO is encrypted with PXP to enable the
> display decryption at flip time.
> Furthermore, we want to keep track of the status of the encryption and
> reject any operation that involves a BO that is encrypted using an old
> key. There are two points in time where such checks can kick in:
>
> 1 - at VM bind time, all operations except for unmapping will be
> rejected if the key used to encrypt the BO is no longer valid. This
> check is opt-in via a new VM_BIND flag, to avoid a scenario where a
> malicious app purposely shares an invalid BO with the compositor (or
> other app) and cause an error there.
Not following the last statement here.
>
> 2 - at job submission time, if the queue is marked as using PXP, all
> objects bound to the VM will be checked and the submission will be
> rejected if any of them was encrypted with a key that is no longer
> valid.
>
> Note that there is no risk of leaking the encrypted data if a user does
> not opt-in to those checks; the only consequence is that the user will
> not realize that the encryption key is changed and that the data is no
> longer valid.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> .../xe/compat-i915-headers/pxp/intel_pxp.h | 10 +-
> drivers/gpu/drm/xe/xe_bo.c | 100 +++++++++++++++++-
> drivers/gpu/drm/xe/xe_bo.h | 5 +
> drivers/gpu/drm/xe/xe_bo_types.h | 3 +
> drivers/gpu/drm/xe/xe_exec.c | 6 ++
> drivers/gpu/drm/xe/xe_pxp.c | 74 +++++++++++++
> drivers/gpu/drm/xe/xe_pxp.h | 4 +
> drivers/gpu/drm/xe/xe_pxp_types.h | 3 +
> drivers/gpu/drm/xe/xe_vm.c | 46 +++++++-
> drivers/gpu/drm/xe/xe_vm.h | 2 +
> include/uapi/drm/xe_drm.h | 19 ++++
> 11 files changed, 265 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
> index 881680727452..d8682f781619 100644
> --- a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
> +++ b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
> @@ -9,6 +9,9 @@
> #include <linux/errno.h>
> #include <linux/types.h>
>
> +#include "xe_bo.h"
> +#include "xe_pxp.h"
> +
> struct drm_i915_gem_object;
> struct xe_pxp;
>
> @@ -16,13 +19,16 @@ static inline int intel_pxp_key_check(struct xe_pxp *pxp,
> struct drm_i915_gem_object *obj,
> bool assign)
> {
> - return -ENODEV;
> + if (assign)
> + return -EINVAL;
What does 'assign' mean and why is it always invalid?
> +
> + return xe_pxp_key_check(pxp, obj);
> }
>
> static inline bool
> i915_gem_object_is_protected(const struct drm_i915_gem_object *obj)
> {
> - return false;
> + return xe_bo_is_protected(obj);
> }
>
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 56a089aa3916..0f591b7d93b1 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -6,6 +6,7 @@
> #include "xe_bo.h"
>
> #include <linux/dma-buf.h>
> +#include <linux/nospec.h>
>
> #include <drm/drm_drv.h>
> #include <drm/drm_gem_ttm_helper.h>
> @@ -24,6 +25,7 @@
> #include "xe_migrate.h"
> #include "xe_pm.h"
> #include "xe_preempt_fence.h"
> +#include "xe_pxp.h"
> #include "xe_res_cursor.h"
> #include "xe_trace_bo.h"
> #include "xe_ttm_stolen_mgr.h"
> @@ -1949,6 +1951,95 @@ void xe_bo_vunmap(struct xe_bo *bo)
> __xe_bo_vunmap(bo);
> }
>
> +static int gem_create_set_pxp_type(struct xe_device *xe, struct xe_bo *bo, u64 value)
> +{
> + if (value == DRM_XE_PXP_TYPE_NONE)
> + return 0;
> +
> + /* we only support DRM_XE_PXP_TYPE_HWDRM for now */
> + if (XE_IOCTL_DBG(xe, value != DRM_XE_PXP_TYPE_HWDRM))
> + return -EINVAL;
> +
> + xe_pxp_key_assign(xe->pxp, bo);
> +
> + return 0;
> +}
> +
> +typedef int (*xe_gem_create_set_property_fn)(struct xe_device *xe,
> + struct xe_bo *bo,
> + u64 value);
> +
> +static const xe_gem_create_set_property_fn gem_create_set_property_funcs[] = {
> + [DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY] = gem_create_set_pxp_type,
> +};
> +
> +static int gem_create_user_ext_set_property(struct xe_device *xe,
> + struct xe_bo *bo,
> + u64 extension)
> +{
> + u64 __user *address = u64_to_user_ptr(extension);
> + struct drm_xe_ext_set_property ext;
> + int err;
> + u32 idx;
> +
> + err = __copy_from_user(&ext, address, sizeof(ext));
> + if (XE_IOCTL_DBG(xe, err))
> + return -EFAULT;
> +
> + if (XE_IOCTL_DBG(xe, ext.property >=
> + ARRAY_SIZE(gem_create_set_property_funcs)) ||
> + XE_IOCTL_DBG(xe, ext.pad) ||
> + XE_IOCTL_DBG(xe, ext.property != DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY))
Two overlapping checks on the same field in the same if statement seem
unnecessary.
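Given that the funcs array only has the one entry, the equality check already implies the bounds check, i.e. (untested):

```
	if (XE_IOCTL_DBG(xe, ext.property != DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY) ||
	    XE_IOCTL_DBG(xe, ext.pad))
		return -EINVAL;
```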
> + return -EINVAL;
> +
> + idx = array_index_nospec(ext.property, ARRAY_SIZE(gem_create_set_property_funcs));
> + if (!gem_create_set_property_funcs[idx])
> + return -EINVAL;
> +
> + return gem_create_set_property_funcs[idx](xe, bo, ext.value);
> +}
> +
> +typedef int (*xe_gem_create_user_extension_fn)(struct xe_device *xe,
> + struct xe_bo *bo,
> + u64 extension);
> +
> +static const xe_gem_create_user_extension_fn gem_create_user_extension_funcs[] = {
> + [DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY] = gem_create_user_ext_set_property,
> +};
> +
> +#define MAX_USER_EXTENSIONS 16
> +static int gem_create_user_extensions(struct xe_device *xe, struct xe_bo *bo,
> + u64 extensions, int ext_number)
> +{
> + u64 __user *address = u64_to_user_ptr(extensions);
> + struct drm_xe_user_extension ext;
> + int err;
> + u32 idx;
> +
> + if (XE_IOCTL_DBG(xe, ext_number >= MAX_USER_EXTENSIONS))
> + return -E2BIG;
> +
> + err = __copy_from_user(&ext, address, sizeof(ext));
> + if (XE_IOCTL_DBG(xe, err))
> + return -EFAULT;
> +
> + if (XE_IOCTL_DBG(xe, ext.pad) ||
> + XE_IOCTL_DBG(xe, ext.name >= ARRAY_SIZE(gem_create_user_extension_funcs)))
> + return -EINVAL;
> +
> + idx = array_index_nospec(ext.name,
> + ARRAY_SIZE(gem_create_user_extension_funcs));
> + err = gem_create_user_extension_funcs[idx](xe, bo, extensions);
> + if (XE_IOCTL_DBG(xe, err))
> + return err;
> +
> + if (ext.next_extension)
> + return gem_create_user_extensions(xe, bo, ext.next_extension,
> + ++ext_number);
> +
> + return 0;
> +}
> +
> int xe_gem_create_ioctl(struct drm_device *dev, void *data,
> struct drm_file *file)
> {
> @@ -1961,8 +2052,7 @@ int xe_gem_create_ioctl(struct drm_device *dev, void *data,
> u32 handle;
> int err;
>
> - if (XE_IOCTL_DBG(xe, args->extensions) ||
> - XE_IOCTL_DBG(xe, args->pad[0] || args->pad[1] || args->pad[2]) ||
> + if (XE_IOCTL_DBG(xe, args->pad[0] || args->pad[1] || args->pad[2]) ||
> XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
> return -EINVAL;
>
> @@ -2037,6 +2127,12 @@ int xe_gem_create_ioctl(struct drm_device *dev, void *data,
> goto out_vm;
> }
>
> + if (args->extensions) {
> + err = gem_create_user_extensions(xe, bo, args->extensions, 0);
> + if (err)
> + goto out_bulk;
> + }
> +
> err = drm_gem_handle_create(file, &bo->ttm.base, &handle);
> if (err)
> goto out_bulk;
> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> index 1c9dc8adaaa3..721f7dc35aac 100644
> --- a/drivers/gpu/drm/xe/xe_bo.h
> +++ b/drivers/gpu/drm/xe/xe_bo.h
> @@ -171,6 +171,11 @@ static inline bool xe_bo_is_pinned(struct xe_bo *bo)
> return bo->ttm.pin_count;
> }
>
> +static inline bool xe_bo_is_protected(const struct xe_bo *bo)
> +{
> + return bo->pxp_key_instance;
> +}
> +
> static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
> {
> if (likely(bo)) {
> diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
> index ebc8abf7930a..8668e0374b18 100644
> --- a/drivers/gpu/drm/xe/xe_bo_types.h
> +++ b/drivers/gpu/drm/xe/xe_bo_types.h
> @@ -56,6 +56,9 @@ struct xe_bo {
> */
> struct list_head client_link;
> #endif
> + /** @pxp_key_instance: key instance this bo was created against (if any) */
> + u32 pxp_key_instance;
> +
> /** @freed: List node for delayed put. */
> struct llist_node freed;
> /** @update_index: Update index if PT BO */
> diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
> index f36980aa26e6..aa4f2fe2e131 100644
> --- a/drivers/gpu/drm/xe/xe_exec.c
> +++ b/drivers/gpu/drm/xe/xe_exec.c
> @@ -250,6 +250,12 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> goto err_exec;
> }
>
> + if (xe_exec_queue_uses_pxp(q)) {
> + err = xe_vm_validate_protected(q->vm);
> + if (err)
> + goto err_exec;
> + }
> +
> job = xe_sched_job_create(q, xe_exec_queue_is_parallel(q) ?
> addresses : &args->address);
> if (IS_ERR(job)) {
> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
> index ca4302af4ced..640e62d1d5d7 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.c
> +++ b/drivers/gpu/drm/xe/xe_pxp.c
> @@ -8,6 +8,8 @@
> #include <drm/drm_managed.h>
> #include <drm/xe_drm.h>
>
> +#include "xe_bo.h"
> +#include "xe_bo_types.h"
> #include "xe_device_types.h"
> #include "xe_exec_queue.h"
> #include "xe_exec_queue_types.h"
> @@ -132,6 +134,9 @@ static void pxp_terminate(struct xe_pxp *pxp)
>
> pxp_invalidate_queues(pxp);
>
> + if (pxp->status == XE_PXP_ACTIVE)
> + pxp->key_instance++;
> +
> /*
> * If we have a termination already in progress, we need to wait for
> * it to complete before queueing another one. We update the state
> @@ -343,6 +348,8 @@ int xe_pxp_init(struct xe_device *xe)
> pxp->xe = xe;
> pxp->gt = gt;
>
> + pxp->key_instance = 1;
> +
> /*
> * we'll use the completion to check if there is a termination pending,
> * so we start it as completed and we reinit it when a termination
> @@ -574,3 +581,70 @@ static void pxp_invalidate_queues(struct xe_pxp *pxp)
> spin_unlock_irq(&pxp->queues.lock);
> }
>
> +/**
> + * xe_pxp_key_assign - mark a BO as using the current PXP key iteration
> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
> + * @bo: the BO to mark
> + *
> + * Returns: -ENODEV if PXP is disabled, 0 otherwise.
> + */
> +int xe_pxp_key_assign(struct xe_pxp *pxp, struct xe_bo *bo)
> +{
> + if (!xe_pxp_is_enabled(pxp))
> + return -ENODEV;
> +
> + xe_assert(pxp->xe, !bo->pxp_key_instance);
> +
> + /*
> + * Note that the PXP key handling is inherently racey, because the key
> + * can theoretically change at any time (although it's unlikely to do
> + * so without triggers), even right after we copy it. Taking a lock
> + * wouldn't help because the value might still change as soon as we
> + * release the lock.
> + * Userspace needs to handle the fact that their BOs can go invalid at
> + * any point.
> + */
> + bo->pxp_key_instance = pxp->key_instance;
> +
> + return 0;
> +}
> +
> +/**
> + * xe_pxp_key_check - check if the key used by a BO is valid
> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
> + * @bo: the BO we want to check
> + *
> + * Checks whether a BO was encrypted with the current key or an obsolete one.
> + *
> + * Returns: 0 if the key is valid, -ENODEV if PXP is disabled, -EINVAL if the
> + * BO is not using PXP, -ENOEXEC if the key is not valid.
> + */
> +int xe_pxp_key_check(struct xe_pxp *pxp, struct xe_bo *bo)
> +{
> + if (!xe_pxp_is_enabled(pxp))
> + return -ENODEV;
> +
> + if (!xe_bo_is_protected(bo))
> + return -EINVAL;
> +
> + xe_assert(pxp->xe, bo->pxp_key_instance);
> +
> + /*
> +	 * Note that the PXP key handling is inherently racy, because the key
> + * can theoretically change at any time (although it's unlikely to do
> + * so without triggers), even right after we check it. Taking a lock
> + * wouldn't help because the value might still change as soon as we
> + * release the lock.
> + * We mitigate the risk by checking the key at multiple points (on each
> + * submission involving the BO and right before flipping it on the
> + * display), but there is still a very small chance that we could
> + * operate on an invalid BO for a single submission or a single frame
> + * flip. This is a compromise made to protect the encrypted data (which
> + * is what the key termination is for).
> + */
> + if (bo->pxp_key_instance != pxp->key_instance)
And the possibility that the key_instance value has wrapped around and
is valid again is considered not a problem? Using a bo with a bad key
potentially results in garbage being displayed but nothing worse than that?
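For reference, the wrap is exact: with a u32 counter a stale instance reads as valid again after precisely 2^32 terminations. A standalone model of the comparison (hypothetical, just to illustrate the arithmetic):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of the key check: a BO snapshots the instance at creation time
 * and later submissions compare it against the current one. 0 means
 * "key still valid" (the driver returns -ENOEXEC on mismatch).
 */
static int key_check(uint32_t bo_instance, uint32_t current_instance)
{
	return bo_instance == current_instance ? 0 : -1;
}
```

That said, hitting 2^32 terminations within one driver load doesn't seem realistically reachable, so this is arguably a theoretical concern only.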
> + return -ENOEXEC;
> +
> + return 0;
> +}
> +
> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
> index 868813cc84b9..2d22a6e6ab27 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.h
> +++ b/drivers/gpu/drm/xe/xe_pxp.h
> @@ -8,6 +8,7 @@
>
> #include <linux/types.h>
>
> +struct xe_bo;
> struct xe_device;
> struct xe_exec_queue;
> struct xe_pxp;
> @@ -23,4 +24,7 @@ int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 t
> int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q);
> void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q);
>
> +int xe_pxp_key_assign(struct xe_pxp *pxp, struct xe_bo *bo);
> +int xe_pxp_key_check(struct xe_pxp *pxp, struct xe_bo *bo);
> +
> #endif /* __XE_PXP_H__ */
> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
> index eb6a0183320a..1bb747837f86 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
> @@ -108,6 +108,9 @@ struct xe_pxp {
> /** @queues.list: list of exec_queues that use PXP */
> struct list_head list;
> } queues;
> +
> + /** @key_instance: keep track of the current iteration of the PXP key */
> + u32 key_instance;
> };
>
> #endif /* __XE_PXP_TYPES_H__ */
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 56f105797ae6..1011d643ebb8 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -34,6 +34,7 @@
> #include "xe_pm.h"
> #include "xe_preempt_fence.h"
> #include "xe_pt.h"
> +#include "xe_pxp.h"
> #include "xe_res_cursor.h"
> #include "xe_sync.h"
> #include "xe_trace_bo.h"
> @@ -2754,7 +2755,8 @@ static struct dma_fence *vm_bind_ioctl_ops_execute(struct xe_vm *vm,
> (DRM_XE_VM_BIND_FLAG_READONLY | \
> DRM_XE_VM_BIND_FLAG_IMMEDIATE | \
> DRM_XE_VM_BIND_FLAG_NULL | \
> - DRM_XE_VM_BIND_FLAG_DUMPABLE)
> + DRM_XE_VM_BIND_FLAG_DUMPABLE | \
> + DRM_XE_VM_BIND_FLAG_CHECK_PXP)
>
> #ifdef TEST_VM_OPS_ERROR
> #define SUPPORTED_FLAGS (SUPPORTED_FLAGS_STUB | FORCE_OP_ERROR)
> @@ -2916,7 +2918,7 @@ static void xe_vma_ops_init(struct xe_vma_ops *vops, struct xe_vm *vm,
>
> static int xe_vm_bind_ioctl_validate_bo(struct xe_device *xe, struct xe_bo *bo,
> u64 addr, u64 range, u64 obj_offset,
> - u16 pat_index)
> + u16 pat_index, u32 op, u32 bind_flags)
> {
> u16 coh_mode;
>
> @@ -2951,6 +2953,12 @@ static int xe_vm_bind_ioctl_validate_bo(struct xe_device *xe, struct xe_bo *bo,
> return -EINVAL;
> }
>
> + /* If a BO is protected it must be valid to be mapped */
"is protected it can only be mapped if the key is still valid". The
above can be read as saying the BO must be mappable, which isn't the
same thing.
> + if ((bind_flags & DRM_XE_VM_BIND_FLAG_CHECK_PXP) && xe_bo_is_protected(bo) &&
> + op != DRM_XE_VM_BIND_OP_UNMAP && op != DRM_XE_VM_BIND_OP_UNMAP_ALL)
> + if (XE_IOCTL_DBG(xe, xe_pxp_key_check(xe->pxp, bo) != 0))
> + return -ENOEXEC;
> +
> return 0;
> }
>
> @@ -3038,6 +3046,8 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> u32 obj = bind_ops[i].obj;
> u64 obj_offset = bind_ops[i].obj_offset;
> u16 pat_index = bind_ops[i].pat_index;
> + u32 op = bind_ops[i].op;
> + u32 bind_flags = bind_ops[i].flags;
>
> if (!obj)
> continue;
> @@ -3050,7 +3060,8 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> bos[i] = gem_to_xe_bo(gem_obj);
>
> err = xe_vm_bind_ioctl_validate_bo(xe, bos[i], addr, range,
> - obj_offset, pat_index);
> + obj_offset, pat_index, op,
> + bind_flags);
> if (err)
> goto put_obj;
> }
> @@ -3343,6 +3354,35 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
> return ret;
> }
>
> +int xe_vm_validate_protected(struct xe_vm *vm)
> +{
> + struct drm_gpuva *gpuva;
> + int err = 0;
> +
> + if (!vm)
> + return -ENODEV;
> +
> + mutex_lock(&vm->snap_mutex);
> +
> + drm_gpuvm_for_each_va(gpuva, &vm->gpuvm) {
> + struct xe_vma *vma = gpuva_to_vma(gpuva);
> + struct xe_bo *bo = vma->gpuva.gem.obj ?
> + gem_to_xe_bo(vma->gpuva.gem.obj) : NULL;
> +
> + if (!bo)
> + continue;
> +
> + if (xe_bo_is_protected(bo)) {
> + err = xe_pxp_key_check(vm->xe->pxp, bo);
> + if (err)
> + break;
> + }
> + }
> +
> + mutex_unlock(&vm->snap_mutex);
> + return err;
> +}
> +
> struct xe_vm_snapshot {
> unsigned long num_snaps;
> struct {
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index bfc19e8113c3..dd51c9790dab 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -216,6 +216,8 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma,
>
> int xe_vm_invalidate_vma(struct xe_vma *vma);
>
> +int xe_vm_validate_protected(struct xe_vm *vm);
> +
> static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
> {
> xe_assert(vm->xe, xe_vm_in_preempt_fence_mode(vm));
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index 9972ceb3fbfb..335febe03e40 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -776,8 +776,23 @@ struct drm_xe_device_query {
> * - %DRM_XE_GEM_CPU_CACHING_WC - Allocate the pages as write-combined. This
> * is uncached. Scanout surfaces should likely use this. All objects
> * that can be placed in VRAM must use this.
> + *
> + * This ioctl supports setting the following properties via the
> + * %DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY extension, which uses the
> + * generic @drm_xe_ext_set_property struct:
> + *
> + * - %DRM_XE_GEM_CREATE_SET_PROPERTY_PXP_TYPE - set the type of PXP session
> + * this object will be used with. Valid values are listed in enum
> + * drm_xe_pxp_session_type. %DRM_XE_PXP_TYPE_NONE is the default behavior, so
> + * there is no need to explicitly set that. Objects used with session of type
> + * %DRM_XE_PXP_TYPE_HWDRM will be marked as invalid if a PXP invalidation
> + * event occurs after their creation. Attempting to flip an invalid object
> + * will cause a black frame to be displayed instead. Submissions with invalid
> + * objects mapped in the VM will be rejected.
Again, it seems like the per-type descriptions should be collected together
in the type enum.
John.
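A sketch of what collecting those per-type behaviours in the enum's kerneldoc could look like. The wording is illustrative rather than final uapi text, and only DRM_XE_PXP_TYPE_NONE = 0 is confirmed by the BUILD_BUG_ON in the exec-queue patch; the HWDRM value is assumed here:

```c
/**
 * enum drm_xe_pxp_session_type - Type of PXP session an object/queue uses
 * @DRM_XE_PXP_TYPE_NONE: the object is not tied to any PXP session
 *	(default, so there is no need to set it explicitly).
 * @DRM_XE_PXP_TYPE_HWDRM: the object is tied to a HWDRM session. It is
 *	marked invalid if a PXP invalidation event occurs after its
 *	creation; flipping an invalid object displays a black frame, and
 *	submissions with invalid objects mapped in the VM are rejected.
 */
enum drm_xe_pxp_session_type {
	DRM_XE_PXP_TYPE_NONE = 0,
	DRM_XE_PXP_TYPE_HWDRM = 1,	/* value assumed for illustration */
};
```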
> */
> struct drm_xe_gem_create {
> +#define DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY 0
> +#define DRM_XE_GEM_CREATE_SET_PROPERTY_PXP_TYPE 0
> /** @extensions: Pointer to the first extension struct, if any */
> __u64 extensions;
>
> @@ -939,6 +954,9 @@ struct drm_xe_vm_destroy {
> * will only be valid for DRM_XE_VM_BIND_OP_MAP operations, the BO
> * handle MBZ, and the BO offset MBZ. This flag is intended to
> * implement VK sparse bindings.
> + * - %DRM_XE_VM_BIND_FLAG_CHECK_PXP - If the object is encrypted via PXP,
> + * reject the binding if the encryption key is no longer valid. This
> + * flag has no effect on BOs that are not marked as using PXP.
> */
> struct drm_xe_vm_bind_op {
> /** @extensions: Pointer to the first extension struct, if any */
> @@ -1029,6 +1047,7 @@ struct drm_xe_vm_bind_op {
> #define DRM_XE_VM_BIND_FLAG_IMMEDIATE (1 << 1)
> #define DRM_XE_VM_BIND_FLAG_NULL (1 << 2)
> #define DRM_XE_VM_BIND_FLAG_DUMPABLE (1 << 3)
> +#define DRM_XE_VM_BIND_FLAG_CHECK_PXP (1 << 4)
> /** @flags: Bind flags */
> __u32 flags;
>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 10/12] drm/xe/pxp: add PXP PM support
2024-08-16 19:00 ` [PATCH v2 10/12] drm/xe/pxp: add PXP PM support Daniele Ceraolo Spurio
2024-08-26 21:55 ` Daniele Ceraolo Spurio
@ 2024-10-09 1:12 ` John Harrison
2024-11-12 22:27 ` Daniele Ceraolo Spurio
1 sibling, 1 reply; 54+ messages in thread
From: John Harrison @ 2024-10-09 1:12 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
> The HW suspend flow kills all PXP HWDRM sessions, so if there was any
> PXP activity before the suspend we need to trigger a full termination on
> suspend.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> ---
> drivers/gpu/drm/xe/xe_pm.c | 42 +++++++++++---
> drivers/gpu/drm/xe/xe_pxp.c | 92 ++++++++++++++++++++++++++++++-
> drivers/gpu/drm/xe/xe_pxp.h | 3 +
> drivers/gpu/drm/xe/xe_pxp_types.h | 9 ++-
> 4 files changed, 134 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
> index 9f3c14fd9f33..1e1f87ec03a2 100644
> --- a/drivers/gpu/drm/xe/xe_pm.c
> +++ b/drivers/gpu/drm/xe/xe_pm.c
> @@ -20,6 +20,7 @@
> #include "xe_guc.h"
> #include "xe_irq.h"
> #include "xe_pcode.h"
> +#include "xe_pxp.h"
> #include "xe_trace.h"
> #include "xe_wa.h"
>
> @@ -90,22 +91,24 @@ int xe_pm_suspend(struct xe_device *xe)
> drm_dbg(&xe->drm, "Suspending device\n");
> trace_xe_pm_suspend(xe, __builtin_return_address(0));
>
> + err = xe_pxp_pm_suspend(xe->pxp);
> + if (err)
> + goto err;
> +
> for_each_gt(gt, xe, id)
> xe_gt_suspend_prepare(gt);
>
> /* FIXME: Super racey... */
> err = xe_bo_evict_all(xe);
> if (err)
> - goto err;
> + goto err_pxp;
>
> xe_display_pm_suspend(xe, false);
>
> for_each_gt(gt, xe, id) {
> err = xe_gt_suspend(gt);
> - if (err) {
> - xe_display_pm_resume(xe, false);
> - goto err;
> - }
> + if (err)
> + goto err_display;
> }
>
> xe_irq_suspend(xe);
> @@ -114,6 +117,11 @@ int xe_pm_suspend(struct xe_device *xe)
>
> drm_dbg(&xe->drm, "Device suspended\n");
> return 0;
> +
> +err_display:
> + xe_display_pm_resume(xe, false);
> +err_pxp:
> + xe_pxp_pm_resume(xe->pxp);
> err:
> drm_dbg(&xe->drm, "Device suspend failed %d\n", err);
> return err;
> @@ -163,6 +171,8 @@ int xe_pm_resume(struct xe_device *xe)
> if (err)
> goto err;
>
> + xe_pxp_pm_resume(xe->pxp);
> +
> drm_dbg(&xe->drm, "Device resumed\n");
> return 0;
> err:
> @@ -356,6 +366,10 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
> */
> lock_map_acquire(&xe_pm_runtime_lockdep_map);
>
> + err = xe_pxp_pm_suspend(xe->pxp);
> + if (err)
> + goto out;
> +
> /*
> * Applying lock for entire list op as xe_ttm_bo_destroy and xe_bo_move_notify
> * also checks and delets bo entry from user fault list.
> @@ -369,23 +383,30 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
> if (xe->d3cold.allowed) {
> err = xe_bo_evict_all(xe);
> if (err)
> - goto out;
> + goto out_pxp;
> xe_display_pm_suspend(xe, true);
> }
>
> for_each_gt(gt, xe, id) {
> err = xe_gt_suspend(gt);
> if (err)
> - goto out;
> + goto out_display;
> }
>
> xe_irq_suspend(xe);
>
> if (xe->d3cold.allowed)
> xe_display_pm_suspend_late(xe);
> +
> + lock_map_release(&xe_pm_runtime_lockdep_map);
> + xe_pm_write_callback_task(xe, NULL);
> + return 0;
> +
> +out_display:
> + xe_display_pm_resume(xe, true);
> +out_pxp:
> + xe_pxp_pm_resume(xe->pxp);
> out:
> - if (err)
> - xe_display_pm_resume(xe, true);
> lock_map_release(&xe_pm_runtime_lockdep_map);
> xe_pm_write_callback_task(xe, NULL);
> return err;
> @@ -436,6 +457,9 @@ int xe_pm_runtime_resume(struct xe_device *xe)
> if (err)
> goto out;
> }
> +
> + xe_pxp_pm_resume(xe->pxp);
> +
> out:
> lock_map_release(&xe_pm_runtime_lockdep_map);
> xe_pm_write_callback_task(xe, NULL);
> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
> index 640e62d1d5d7..78373cbbe0d4 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.c
> +++ b/drivers/gpu/drm/xe/xe_pxp.c
> @@ -137,6 +137,13 @@ static void pxp_terminate(struct xe_pxp *pxp)
> if (pxp->status == XE_PXP_ACTIVE)
> pxp->key_instance++;
>
> + /*
> + * we'll mark the status as needing termination on resume, so no need to
> + * emit a termination now.
> + */
> + if (pxp->status == XE_PXP_SUSPENDED)
> + return;
> +
> /*
> * If we have a termination already in progress, we need to wait for
> * it to complete before queueing another one. We update the state
> @@ -181,17 +188,19 @@ static void pxp_terminate(struct xe_pxp *pxp)
> static void pxp_terminate_complete(struct xe_pxp *pxp)
> {
> /*
> - * We expect PXP to be in one of 2 states when we get here:
> + * We expect PXP to be in one of 3 states when we get here:
> * - XE_PXP_TERMINATION_IN_PROGRESS: a single termination event was
> * requested and it is now completing, so we're ready to start.
> * - XE_PXP_NEEDS_TERMINATION: a second termination was requested while
> * the first one was still being processed; we don't update the state
> * in this case so the pxp_start code will automatically issue that
> * second termination.
> + * - XE_PXP_SUSPENDED: PXP is now suspended, so we defer everything to
> + * when we come back on resume.
> */
> if (pxp->status == XE_PXP_TERMINATION_IN_PROGRESS)
> pxp->status = XE_PXP_READY_TO_START;
> - else if (pxp->status != XE_PXP_NEEDS_TERMINATION)
> + else if (pxp->status != XE_PXP_NEEDS_TERMINATION && pxp->status != XE_PXP_SUSPENDED)
> drm_err(&pxp->xe->drm,
> "PXP termination complete while status was %u\n",
> pxp->status);
> @@ -505,6 +514,7 @@ int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q)
> pxp_terminate(pxp);
> mutex_unlock(&pxp->mutex);
> goto wait_for_termination;
> + case XE_PXP_SUSPENDED:
> default:
> drm_err(&pxp->xe->drm, "unexpected state during PXP start: %u", pxp->status);
> ret = -EIO;
> @@ -648,3 +658,81 @@ int xe_pxp_key_check(struct xe_pxp *pxp, struct xe_bo *bo)
> return 0;
> }
>
> +int xe_pxp_pm_suspend(struct xe_pxp *pxp)
> +{
> + int ret = 0;
> +
> + if (!xe_pxp_is_enabled(pxp))
> + return 0;
> +
> + mutex_lock(&pxp->mutex);
> +
> + /* if the termination is already in progress, no need to re-emit it */
> + if (!completion_done(&pxp->termination))
> + goto mark_suspended;
> +
> + switch (pxp->status) {
> + case XE_PXP_ERROR:
> + case XE_PXP_READY_TO_START:
> + case XE_PXP_SUSPENDED:
> + /* nothing to cleanup */
> + break;
> + case XE_PXP_NEEDS_TERMINATION:
> + /* If PXP was never used we can skip the cleanup */
> + if (pxp->key_instance == pxp->last_suspend_key_instance)
Again, there is the possibility of this comparison being confused by a
key_instance rollover.
> + break;
> + fallthrough;
> + case XE_PXP_ACTIVE:
> + pxp_terminate(pxp);
> + break;
> + default:
> + drm_err(&pxp->xe->drm, "unexpected state during PXP suspend: %u",
> + pxp->status);
> + ret = -EIO;
> + goto out;
> + }
> +
> +mark_suspended:
> + /*
> + * We set this even if we were in error state, hoping the suspend clears
> + * the error. Worse case we fail again and go in error state again.
> + */
> + pxp->status = XE_PXP_SUSPENDED;
> +
> + mutex_unlock(&pxp->mutex);
> +
> + /*
> + * if there is a termination in progress, wait for it.
> + * We need to wait outside the lock because the completion is done from
> + * within the lock
> + */
> + if (!wait_for_completion_timeout(&pxp->termination,
> + msecs_to_jiffies(PXP_TERMINATION_TIMEOUT_MS)))
> + ret = -ETIMEDOUT;
> +
> + pxp->last_suspend_key_instance = pxp->key_instance;
> +
> +out:
> + return ret;
> +}
> +
> +void xe_pxp_pm_resume(struct xe_pxp *pxp)
> +{
> + int err;
> +
> + if (!xe_pxp_is_enabled(pxp))
> + return;
> +
> + err = kcr_pxp_enable(pxp);
> +
> + mutex_lock(&pxp->mutex);
> +
> + xe_assert(pxp->xe, pxp->status == XE_PXP_SUSPENDED);
> +
> + if (err)
> + pxp->status = XE_PXP_ERROR;
> + else
> + pxp->status = XE_PXP_NEEDS_TERMINATION;
> +
> + mutex_unlock(&pxp->mutex);
> +}
> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
> index 2d22a6e6ab27..af32c2616641 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.h
> +++ b/drivers/gpu/drm/xe/xe_pxp.h
> @@ -20,6 +20,9 @@ int xe_pxp_get_readiness_status(struct xe_pxp *pxp);
> int xe_pxp_init(struct xe_device *xe);
> void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
>
> +int xe_pxp_pm_suspend(struct xe_pxp *pxp);
> +void xe_pxp_pm_resume(struct xe_pxp *pxp);
> +
> int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 type);
> int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q);
> void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q);
> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
> index 1bb747837f86..942f2fa40a58 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
> @@ -24,7 +24,8 @@ enum xe_pxp_status {
> XE_PXP_NEEDS_TERMINATION = 0, /* starting status */
> XE_PXP_TERMINATION_IN_PROGRESS,
> XE_PXP_READY_TO_START,
> - XE_PXP_ACTIVE
> + XE_PXP_ACTIVE,
You can add a trailing comma even on the last enum value to avoid such
unnecessary deltas.
John.
> + XE_PXP_SUSPENDED
> };
>
> /**
> @@ -111,6 +112,12 @@ struct xe_pxp {
>
> /** @key_instance: keep track of the current iteration of the PXP key */
> u32 key_instance;
> + /**
> + * @last_suspend_key_instance: value of key_instance at the last
> + * suspend. Used to check if any PXP session has been created between
> + * suspend cycles.
> + */
> + u32 last_suspend_key_instance;
> };
>
> #endif /* __XE_PXP_TYPES_H__ */
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 11/12] drm/xe/pxp: Add PXP debugfs support
2024-08-16 19:00 ` [PATCH v2 11/12] drm/xe/pxp: Add PXP debugfs support Daniele Ceraolo Spurio
@ 2024-10-09 1:26 ` John Harrison
0 siblings, 0 replies; 54+ messages in thread
From: John Harrison @ 2024-10-09 1:26 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
> This patch introduces 2 PXP debugfs entries:
>
> - info: prints the current PXP status and key instance
> - terminate: simulate a termination interrupt
>
> The first one is useful for debug, while the second one can be used for
> testing the termination flow.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> ---
> drivers/gpu/drm/xe/Makefile | 1 +
> drivers/gpu/drm/xe/xe_debugfs.c | 3 +
> drivers/gpu/drm/xe/xe_pxp_debugfs.c | 120 ++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_pxp_debugfs.h | 13 +++
> 4 files changed, 137 insertions(+)
> create mode 100644 drivers/gpu/drm/xe/xe_pxp_debugfs.c
> create mode 100644 drivers/gpu/drm/xe/xe_pxp_debugfs.h
>
> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
> index a508b9166b88..7cc65f419710 100644
> --- a/drivers/gpu/drm/xe/Makefile
> +++ b/drivers/gpu/drm/xe/Makefile
> @@ -84,6 +84,7 @@ xe-y += xe_bb.o \
> xe_pt.o \
> xe_pt_walk.o \
> xe_pxp.o \
> + xe_pxp_debugfs.o \
> xe_pxp_submit.o \
> xe_query.o \
> xe_range_fence.o \
> diff --git a/drivers/gpu/drm/xe/xe_debugfs.c b/drivers/gpu/drm/xe/xe_debugfs.c
> index 1011e5d281fa..a04f9c2d886b 100644
> --- a/drivers/gpu/drm/xe/xe_debugfs.c
> +++ b/drivers/gpu/drm/xe/xe_debugfs.c
> @@ -17,6 +17,7 @@
> #include "xe_gt_printk.h"
> #include "xe_guc_ads.h"
> #include "xe_pm.h"
> +#include "xe_pxp_debugfs.h"
> #include "xe_sriov.h"
> #include "xe_step.h"
>
> @@ -214,6 +215,8 @@ void xe_debugfs_register(struct xe_device *xe)
> for_each_gt(gt, xe, id)
> xe_gt_debugfs_register(gt);
>
> + xe_pxp_debugfs_register(xe->pxp);
> +
> #ifdef CONFIG_FAULT_INJECTION
> fault_create_debugfs_attr("fail_gt_reset", root, &gt_reset_failure);
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_pxp_debugfs.c b/drivers/gpu/drm/xe/xe_pxp_debugfs.c
> new file mode 100644
> index 000000000000..00c8179a9f0f
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_pxp_debugfs.c
> @@ -0,0 +1,120 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2024 Intel Corporation
> + */
> +
> +#include "xe_pxp_debugfs.h"
> +
> +#include <linux/debugfs.h>
> +
> +#include <drm/drm_debugfs.h>
> +#include <drm/drm_managed.h>
> +#include <drm/drm_print.h>
> +
> +#include "xe_device.h"
> +#include "xe_pxp.h"
> +#include "xe_pxp_types.h"
> +#include "regs/xe_gt_regs.h"
> +
> +static struct xe_pxp *node_to_pxp(struct drm_info_node *node)
> +{
> + return node->info_ent->data;
> +}
> +
> +static const char *pxp_status_to_str(struct xe_pxp *pxp)
> +{
> + lockdep_assert_held(&pxp->mutex);
> +
> + switch (pxp->status) {
> + case XE_PXP_ERROR:
> + return "error";
> + case XE_PXP_NEEDS_TERMINATION:
> + return "needs termination";
> + case XE_PXP_TERMINATION_IN_PROGRESS:
> + return "termination in progress";
> + case XE_PXP_READY_TO_START:
> + return "ready to start";
> + case XE_PXP_ACTIVE:
> + return "active";
> + case XE_PXP_SUSPENDED:
> + return "suspended";
> + default:
> + return "unknown";
> + }
> +};
> +
> +static int pxp_info(struct seq_file *m, void *data)
> +{
> + struct xe_pxp *pxp = node_to_pxp(m->private);
> + struct drm_printer p = drm_seq_file_printer(m);
> + const char *status;
> +
> + if (!xe_pxp_is_enabled(pxp))
> + return -ENODEV;
> +
> + mutex_lock(&pxp->mutex);
> + status = pxp_status_to_str(pxp);
> + mutex_unlock(&pxp->mutex);
> +
> + drm_printf(&p, "status: %s\n", status);
> + drm_printf(&p, "instance counter: %u\n", pxp->key_instance);
Maybe cache this inside the mutex lock as well? That way the instance and
the status are at least a consistent pair at one point in time, even if
things might have changed since.
John.
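A minimal sketch of that suggestion, reusing the names from the quoted patch (the extra local is the only change):

```c
/* Sketch only: snapshot both fields under pxp->mutex so the printed
 * status/instance pair is internally consistent at a single point in
 * time, even if the state changes before the prints land. */
mutex_lock(&pxp->mutex);
status = pxp_status_to_str(pxp);
instance = pxp->key_instance;	/* new u32 local */
mutex_unlock(&pxp->mutex);

drm_printf(&p, "status: %s\n", status);
drm_printf(&p, "instance counter: %u\n", instance);
```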
> +
> + return 0;
> +}
> +
> +static int pxp_terminate(struct seq_file *m, void *data)
> +{
> + struct xe_pxp *pxp = node_to_pxp(m->private);
> + struct drm_printer p = drm_seq_file_printer(m);
> +
> + if (!xe_pxp_is_enabled(pxp))
> + return -ENODEV;
> +
> + /* simulate a termination interrupt */
> + spin_lock_irq(&pxp->xe->irq.lock);
> + xe_pxp_irq_handler(pxp->xe, KCR_PXP_STATE_TERMINATED_INTERRUPT);
> + spin_unlock_irq(&pxp->xe->irq.lock);
> +
> + drm_printf(&p, "Termination queued\n");
> +
> + return 0;
> +}
> +
> +static const struct drm_info_list debugfs_list[] = {
> + {"info", pxp_info, 0},
> + {"terminate", pxp_terminate, 0},
> +};
> +
> +void xe_pxp_debugfs_register(struct xe_pxp *pxp)
> +{
> + struct drm_minor *minor;
> + struct drm_info_list *local;
> + struct dentry *root;
> + int i;
> +
> + if (!xe_pxp_is_enabled(pxp))
> + return;
> +
> + minor = pxp->xe->drm.primary;
> + if (!minor->debugfs_root)
> + return;
> +
> +#define DEBUGFS_SIZE (ARRAY_SIZE(debugfs_list) * sizeof(struct drm_info_list))
> + local = drmm_kmalloc(&pxp->xe->drm, DEBUGFS_SIZE, GFP_KERNEL);
> + if (!local)
> + return;
> +
> + memcpy(local, debugfs_list, DEBUGFS_SIZE);
> +#undef DEBUGFS_SIZE
> +
> + for (i = 0; i < ARRAY_SIZE(debugfs_list); ++i)
> + local[i].data = pxp;
> +
> + root = debugfs_create_dir("pxp", minor->debugfs_root);
> + if (IS_ERR(root))
> + return;
> +
> + drm_debugfs_create_files(local,
> + ARRAY_SIZE(debugfs_list),
> + root, minor);
> +}
> diff --git a/drivers/gpu/drm/xe/xe_pxp_debugfs.h b/drivers/gpu/drm/xe/xe_pxp_debugfs.h
> new file mode 100644
> index 000000000000..988466aad50b
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_pxp_debugfs.h
> @@ -0,0 +1,13 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2024 Intel Corporation
> + */
> +
> +#ifndef __XE_PXP_DEBUGFS_H__
> +#define __XE_PXP_DEBUGFS_H__
> +
> +struct xe_pxp;
> +
> +void xe_pxp_debugfs_register(struct xe_pxp *pxp);
> +
> +#endif /* __XE_PXP_DEBUGFS_H__ */
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 12/12] drm/xe/pxp: Enable PXP for MTL and LNL
2024-08-16 19:00 ` [PATCH v2 12/12] drm/xe/pxp: Enable PXP for MTL and LNL Daniele Ceraolo Spurio
@ 2024-10-09 1:27 ` John Harrison
0 siblings, 0 replies; 54+ messages in thread
From: John Harrison @ 2024-10-09 1:27 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
> Now that all the pieces are there, we can turn the feature on.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
> ---
> drivers/gpu/drm/xe/xe_pci.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
> index d1453ba20dcd..0807e8a11585 100644
> --- a/drivers/gpu/drm/xe/xe_pci.c
> +++ b/drivers/gpu/drm/xe/xe_pci.c
> @@ -338,11 +338,13 @@ static const struct xe_device_desc mtl_desc = {
> .require_force_probe = true,
> PLATFORM(METEORLAKE),
> .has_display = true,
> + .has_pxp = true,
> };
>
> static const struct xe_device_desc lnl_desc = {
> PLATFORM(LUNARLAKE),
> .has_display = true,
> + .has_pxp = true,
> .require_force_probe = true,
> };
>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 07/12] drm/xe/pxp: Add spport for PXP-using queues
2024-08-16 19:00 ` [PATCH v2 07/12] drm/xe/pxp: Add spport for PXP-using queues Daniele Ceraolo Spurio
2024-10-08 23:55 ` John Harrison
@ 2024-10-09 10:07 ` Jani Nikula
1 sibling, 0 replies; 54+ messages in thread
From: Jani Nikula @ 2024-10-09 10:07 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe; +Cc: Daniele Ceraolo Spurio
On Fri, 16 Aug 2024, Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> wrote:
> Userspace is required to mark a queue as using PXP to guarantee that the
> PXP instructions will work. When a PXP queue is created, the driver will
> do the following:
> - Start the default PXP session if it is not already running;
> - set the relevant bits in the context control register;
> - assign an rpm ref to the queue to keep for its lifetime (this is
> required because PXP HWDRM sessions are killed by the HW suspend flow).
>
> When a PXP invalidation occurs, all the PXP queues will be killed.
> On submission of a valid PXP queue, the driver will validate all
> encrypted objects mapped to the VM to ensure they were encrypted with
> the current key.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> ---
> drivers/gpu/drm/xe/regs/xe_engine_regs.h | 1 +
> drivers/gpu/drm/xe/xe_exec_queue.c | 58 ++++-
> drivers/gpu/drm/xe/xe_exec_queue.h | 5 +
> drivers/gpu/drm/xe/xe_exec_queue_types.h | 8 +
> drivers/gpu/drm/xe/xe_hw_engine.c | 2 +-
> drivers/gpu/drm/xe/xe_lrc.c | 16 +-
> drivers/gpu/drm/xe/xe_lrc.h | 4 +-
> drivers/gpu/drm/xe/xe_pxp.c | 295 ++++++++++++++++++++++-
> drivers/gpu/drm/xe/xe_pxp.h | 7 +
> drivers/gpu/drm/xe/xe_pxp_submit.c | 4 +-
> drivers/gpu/drm/xe/xe_pxp_types.h | 26 ++
> include/uapi/drm/xe_drm.h | 40 ++-
> 12 files changed, 450 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/regs/xe_engine_regs.h b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
> index 81b71903675e..3692e887f503 100644
> --- a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
> +++ b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
> @@ -130,6 +130,7 @@
> #define RING_EXECLIST_STATUS_HI(base) XE_REG((base) + 0x234 + 4)
>
> #define RING_CONTEXT_CONTROL(base) XE_REG((base) + 0x244, XE_REG_OPTION_MASKED)
> +#define CTX_CTRL_PXP_ENABLE REG_BIT(10)
> #define CTX_CTRL_OAC_CONTEXT_ENABLE REG_BIT(8)
> #define CTX_CTRL_RUN_ALONE REG_BIT(7)
> #define CTX_CTRL_INDIRECT_RING_STATE_ENABLE REG_BIT(4)
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index e98e8794eddf..504ba4aa2357 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -22,6 +22,8 @@
> #include "xe_ring_ops_types.h"
> #include "xe_trace.h"
> #include "xe_vm.h"
> +#include "xe_pxp.h"
> +#include "xe_pxp_types.h"
Why would you need to look inside xe_pxp_types.h? Isn't that supposed to
be the private implementation details for xe_pxp_*.c?
I know C makes it hard to enforce hiding the details when it also needs
to be shared across multiple .c files (i.e. you can't make it fully
opaque). But you should try. The moment you look at the guts from the
outside, it's a precedent for everyone else that it's fine to bypass the
interfaces and just poke at the data directly.
BR,
Jani.
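A sketch of the accessor pattern Jani is describing (the helper name here is hypothetical, chosen only for illustration): keep the struct definition private to the xe_pxp_*.c files and export a small query from xe_pxp.h instead of letting callers include xe_pxp_types.h.

```c
/* xe_pxp.h — public interface; the struct stays opaque to callers. */
struct xe_pxp;
bool xe_pxp_session_is_active(const struct xe_pxp *pxp);

/* xe_pxp.c — the only translation unit that includes xe_pxp_types.h
 * and is therefore allowed to look at the struct layout. */
bool xe_pxp_session_is_active(const struct xe_pxp *pxp)
{
	return pxp && pxp->status == XE_PXP_ACTIVE;
}
```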
>
> enum xe_exec_queue_sched_prop {
> XE_EXEC_QUEUE_JOB_TIMEOUT = 0,
> @@ -35,6 +37,8 @@ static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue
>
> static void __xe_exec_queue_free(struct xe_exec_queue *q)
> {
> + if (xe_exec_queue_uses_pxp(q))
> + xe_pxp_exec_queue_remove(gt_to_xe(q->gt)->pxp, q);
> if (q->vm)
> xe_vm_put(q->vm);
>
> @@ -73,6 +77,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
> q->ops = gt->exec_queue_ops;
> INIT_LIST_HEAD(&q->lr.link);
> INIT_LIST_HEAD(&q->multi_gt_link);
> + INIT_LIST_HEAD(&q->pxp.link);
>
> q->sched_props.timeslice_us = hwe->eclass->sched_props.timeslice_us;
> q->sched_props.preempt_timeout_us =
> @@ -107,6 +112,21 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
> {
> struct xe_vm *vm = q->vm;
> int i, err;
> + u32 flags = 0;
> +
> + /*
> + * PXP workloads executing on RCS or CCS must run in isolation (i.e. no
> + * other workload can use the EUs at the same time). On MTL this is done
> + * by setting the RUNALONE bit in the LRC, while starting on Xe2 there
> + * is a dedicated bit for it.
> + */
> + if (xe_exec_queue_uses_pxp(q) &&
> + (q->class == XE_ENGINE_CLASS_RENDER || q->class == XE_ENGINE_CLASS_COMPUTE)) {
> + if (GRAPHICS_VER(gt_to_xe(q->gt)) >= 20)
> + flags |= XE_LRC_CREATE_PXP;
> + else
> + flags |= XE_LRC_CREATE_RUNALONE;
> + }
>
> if (vm) {
> err = xe_vm_lock(vm, true);
> @@ -115,7 +135,7 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
> }
>
> for (i = 0; i < q->width; ++i) {
> - q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K);
> + q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K, flags);
> if (IS_ERR(q->lrc[i])) {
> err = PTR_ERR(q->lrc[i]);
> goto err_unlock;
> @@ -160,6 +180,17 @@ struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *v
> if (err)
> goto err_post_alloc;
>
> + /*
> + * we can only add the queue to the PXP list after the init is complete,
> + * because the PXP termination can call exec_queue_kill and that will
> + * go bad if the queue is only half-initialized.
> + */
> + if (xe_exec_queue_uses_pxp(q)) {
> + err = xe_pxp_exec_queue_add(xe->pxp, q);
> + if (err)
> + goto err_post_alloc;
> + }
> +
> return q;
>
> err_post_alloc:
> @@ -197,6 +228,9 @@ void xe_exec_queue_destroy(struct kref *ref)
> struct xe_exec_queue *q = container_of(ref, struct xe_exec_queue, refcount);
> struct xe_exec_queue *eq, *next;
>
> + if (xe_exec_queue_uses_pxp(q))
> + xe_pxp_exec_queue_remove(gt_to_xe(q->gt)->pxp, q);
> +
> xe_exec_queue_last_fence_put_unlocked(q);
> if (!(q->flags & EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD)) {
> list_for_each_entry_safe(eq, next, &q->multi_gt_list,
> @@ -343,6 +377,24 @@ static int exec_queue_set_timeslice(struct xe_device *xe, struct xe_exec_queue *
> return 0;
> }
>
> +static int
> +exec_queue_set_pxp_type(struct xe_device *xe, struct xe_exec_queue *q, u64 value)
> +{
> + BUILD_BUG_ON(DRM_XE_PXP_TYPE_NONE != 0);
> +
> + if (value == DRM_XE_PXP_TYPE_NONE)
> + return 0;
> +
> + if (!xe_pxp_is_enabled(xe->pxp))
> + return -ENODEV;
> +
> + /* we only support HWDRM sessions right now */
> + if (XE_IOCTL_DBG(xe, value != DRM_XE_PXP_TYPE_HWDRM))
> + return -EINVAL;
> +
> + return xe_pxp_exec_queue_set_type(xe->pxp, q, DRM_XE_PXP_TYPE_HWDRM);
> +}
> +
> typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
> struct xe_exec_queue *q,
> u64 value);
> @@ -350,6 +402,7 @@ typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
> static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
> [DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY] = exec_queue_set_priority,
> [DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE] = exec_queue_set_timeslice,
> + [DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE] = exec_queue_set_pxp_type,
> };
>
> static int exec_queue_user_ext_set_property(struct xe_device *xe,
> @@ -369,7 +422,8 @@ static int exec_queue_user_ext_set_property(struct xe_device *xe,
> ARRAY_SIZE(exec_queue_set_property_funcs)) ||
> XE_IOCTL_DBG(xe, ext.pad) ||
> XE_IOCTL_DBG(xe, ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY &&
> - ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE))
> + ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE &&
> + ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE))
> return -EINVAL;
>
> idx = array_index_nospec(ext.property, ARRAY_SIZE(exec_queue_set_property_funcs));
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
> index ded77b0f3b90..7fa97719667a 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
> @@ -53,6 +53,11 @@ static inline bool xe_exec_queue_is_parallel(struct xe_exec_queue *q)
> return q->width > 1;
> }
>
> +static inline bool xe_exec_queue_uses_pxp(struct xe_exec_queue *q)
> +{
> + return q->pxp.type;
> +}
> +
> bool xe_exec_queue_is_lr(struct xe_exec_queue *q);
>
> bool xe_exec_queue_ring_full(struct xe_exec_queue *q);
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> index 1408b02eea53..28b56217f1df 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> @@ -130,6 +130,14 @@ struct xe_exec_queue {
> spinlock_t lock;
> } lr;
>
> + /** @pxp: PXP info tracking */
> + struct {
> + /** @pxp.type: PXP session type used by this queue */
> + u8 type;
> + /** @pxp.link: link into the list of PXP exec queues */
> + struct list_head link;
> + } pxp;
> +
> /** @ops: submission backend exec queue operations */
> const struct xe_exec_queue_ops *ops;
>
> diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
> index e195022ca836..469932e7d7a6 100644
> --- a/drivers/gpu/drm/xe/xe_hw_engine.c
> +++ b/drivers/gpu/drm/xe/xe_hw_engine.c
> @@ -557,7 +557,7 @@ static int hw_engine_init(struct xe_gt *gt, struct xe_hw_engine *hwe,
> goto err_name;
> }
>
> - hwe->kernel_lrc = xe_lrc_create(hwe, NULL, SZ_16K);
> + hwe->kernel_lrc = xe_lrc_create(hwe, NULL, SZ_16K, 0);
> if (IS_ERR(hwe->kernel_lrc)) {
> err = PTR_ERR(hwe->kernel_lrc);
> goto err_hwsp;
> diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
> index 974a9cd8c379..4f3e676db646 100644
> --- a/drivers/gpu/drm/xe/xe_lrc.c
> +++ b/drivers/gpu/drm/xe/xe_lrc.c
> @@ -893,7 +893,7 @@ static void xe_lrc_finish(struct xe_lrc *lrc)
> #define PVC_CTX_ACC_CTR_THOLD (0x2a + 1)
>
> static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
> - struct xe_vm *vm, u32 ring_size)
> + struct xe_vm *vm, u32 ring_size, u32 init_flags)
> {
> struct xe_gt *gt = hwe->gt;
> struct xe_tile *tile = gt_to_tile(gt);
> @@ -981,6 +981,16 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
> RING_CTL_SIZE(lrc->ring.size) | RING_VALID);
> }
>
> + if (init_flags & XE_LRC_CREATE_RUNALONE)
> + xe_lrc_write_ctx_reg(lrc, CTX_CONTEXT_CONTROL,
> + xe_lrc_read_ctx_reg(lrc, CTX_CONTEXT_CONTROL) |
> + _MASKED_BIT_ENABLE(CTX_CTRL_RUN_ALONE));
> +
> + if (init_flags & XE_LRC_CREATE_PXP)
> + xe_lrc_write_ctx_reg(lrc, CTX_CONTEXT_CONTROL,
> + xe_lrc_read_ctx_reg(lrc, CTX_CONTEXT_CONTROL) |
> + _MASKED_BIT_ENABLE(CTX_CTRL_PXP_ENABLE));
> +
> xe_lrc_write_ctx_reg(lrc, CTX_TIMESTAMP, 0);
>
> if (xe->info.has_asid && vm)
> @@ -1029,7 +1039,7 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
> * upon failure.
> */
> struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
> - u32 ring_size)
> + u32 ring_size, u32 flags)
> {
> struct xe_lrc *lrc;
> int err;
> @@ -1038,7 +1048,7 @@ struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
> if (!lrc)
> return ERR_PTR(-ENOMEM);
>
> - err = xe_lrc_init(lrc, hwe, vm, ring_size);
> + err = xe_lrc_init(lrc, hwe, vm, ring_size, flags);
> if (err) {
> kfree(lrc);
> return ERR_PTR(err);
> diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
> index d411c3fbcbc6..cc8091bba2a0 100644
> --- a/drivers/gpu/drm/xe/xe_lrc.h
> +++ b/drivers/gpu/drm/xe/xe_lrc.h
> @@ -23,8 +23,10 @@ struct xe_vm;
> #define LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR (0x34 * 4)
> #define LRC_PPHWSP_PXP_INVAL_SCRATCH_ADDR (0x40 * 4)
>
> +#define XE_LRC_CREATE_RUNALONE 0x1
> +#define XE_LRC_CREATE_PXP 0x2
> struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
> - u32 ring_size);
> + u32 ring_size, u32 flags);
> void xe_lrc_destroy(struct kref *ref);
>
> /**
> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
> index 382eb0cb0018..acdc25c8e8a1 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.c
> +++ b/drivers/gpu/drm/xe/xe_pxp.c
> @@ -6,11 +6,17 @@
> #include "xe_pxp.h"
>
> #include <drm/drm_managed.h>
> +#include <drm/xe_drm.h>
>
> #include "xe_device_types.h"
> +#include "xe_exec_queue.h"
> +#include "xe_exec_queue_types.h"
> #include "xe_force_wake.h"
> +#include "xe_guc_submit.h"
> +#include "xe_gsc_proxy.h"
> #include "xe_gt.h"
> #include "xe_gt_types.h"
> +#include "xe_huc.h"
> #include "xe_mmio.h"
> #include "xe_pm.h"
> #include "xe_pxp_submit.h"
> @@ -27,18 +33,45 @@
> * integrated parts.
> */
>
> -#define ARB_SESSION 0xF /* TODO: move to UAPI */
> +#define ARB_SESSION DRM_XE_PXP_HWDRM_DEFAULT_SESSION /* shorter define */
>
> bool xe_pxp_is_supported(const struct xe_device *xe)
> {
> return xe->info.has_pxp && IS_ENABLED(CONFIG_INTEL_MEI_GSC_PROXY);
> }
>
> -static bool pxp_is_enabled(const struct xe_pxp *pxp)
> +bool xe_pxp_is_enabled(const struct xe_pxp *pxp)
> {
> return pxp;
> }
>
> +static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
> +{
> + bool ready;
> +
> + XE_WARN_ON(xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GSC));
> +
> + /* PXP requires both HuC authentication via GSC and GSC proxy initialized */
> + ready = xe_huc_is_authenticated(&pxp->gt->uc.huc, XE_HUC_AUTH_VIA_GSC) &&
> + xe_gsc_proxy_init_done(&pxp->gt->uc.gsc);
> +
> + xe_force_wake_put(gt_to_fw(pxp->gt), XE_FW_GSC);
> +
> + return ready;
> +}
> +
> +static bool pxp_session_is_in_play(struct xe_pxp *pxp, u32 id)
> +{
> + struct xe_gt *gt = pxp->gt;
> + u32 sip = 0;
> +
> + XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));
> + sip = xe_mmio_read32(gt, KCR_SIP);
> + xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
> +
> + return sip & BIT(id);
> +}
> +
> static int pxp_wait_for_session_state(struct xe_pxp *pxp, u32 id, bool in_play)
> {
> struct xe_gt *gt = pxp->gt;
> @@ -56,12 +89,30 @@ static int pxp_wait_for_session_state(struct xe_pxp *pxp, u32 id, bool in_play)
> return ret;
> }
>
> +static void pxp_invalidate_queues(struct xe_pxp *pxp);
> +
> static void pxp_terminate(struct xe_pxp *pxp)
> {
> int ret = 0;
> struct xe_device *xe = pxp->xe;
> struct xe_gt *gt = pxp->gt;
>
> + pxp_invalidate_queues(pxp);
> +
> + /*
> + * If we have a termination already in progress, we need to wait for
> + * it to complete before queueing another one. We update the state
> + * to signal that another termination is required and leave it to the
> + * pxp_start() call to take care of it.
> + */
> + if (!completion_done(&pxp->termination)) {
> + pxp->status = XE_PXP_NEEDS_TERMINATION;
> + return;
> + }
> +
> + reinit_completion(&pxp->termination);
> + pxp->status = XE_PXP_TERMINATION_IN_PROGRESS;
> +
> drm_dbg(&xe->drm, "Terminating PXP\n");
>
> /* terminate the hw session */
> @@ -82,13 +133,32 @@ static void pxp_terminate(struct xe_pxp *pxp)
> ret = xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
>
> out:
> - if (ret)
> + if (ret) {
> drm_err(&xe->drm, "PXP termination failed: %pe\n", ERR_PTR(ret));
> + pxp->status = XE_PXP_ERROR;
> + complete_all(&pxp->termination);
> + }
> }
>
> static void pxp_terminate_complete(struct xe_pxp *pxp)
> {
> - /* TODO mark the session as ready to start */
> + /*
> + * We expect PXP to be in one of 2 states when we get here:
> + * - XE_PXP_TERMINATION_IN_PROGRESS: a single termination event was
> + * requested and it is now completing, so we're ready to start.
> + * - XE_PXP_NEEDS_TERMINATION: a second termination was requested while
> + * the first one was still being processed; we don't update the state
> + * in this case so the pxp_start code will automatically issue that
> + * second termination.
> + */
> + if (pxp->status == XE_PXP_TERMINATION_IN_PROGRESS)
> + pxp->status = XE_PXP_READY_TO_START;
> + else if (pxp->status != XE_PXP_NEEDS_TERMINATION)
> + drm_err(&pxp->xe->drm,
> + "PXP termination complete while status was %u\n",
> + pxp->status);
> +
> + complete_all(&pxp->termination);
> }
>
> static void pxp_irq_work(struct work_struct *work)
> @@ -112,6 +182,8 @@ static void pxp_irq_work(struct work_struct *work)
> if ((events & PXP_TERMINATION_REQUEST) && !xe_pm_runtime_get_if_active(xe))
> return;
>
> + mutex_lock(&pxp->mutex);
> +
> if (events & PXP_TERMINATION_REQUEST) {
> events &= ~PXP_TERMINATION_COMPLETE;
> pxp_terminate(pxp);
> @@ -120,6 +192,8 @@ static void pxp_irq_work(struct work_struct *work)
> if (events & PXP_TERMINATION_COMPLETE)
> pxp_terminate_complete(pxp);
>
> + mutex_unlock(&pxp->mutex);
> +
> if (events & PXP_TERMINATION_REQUEST)
> xe_pm_runtime_put(xe);
> }
> @@ -133,7 +207,7 @@ void xe_pxp_irq_handler(struct xe_device *xe, u16 iir)
> {
> struct xe_pxp *pxp = xe->pxp;
>
> - if (!pxp_is_enabled(pxp)) {
> + if (!xe_pxp_is_enabled(pxp)) {
> drm_err(&xe->drm, "PXP irq 0x%x received with PXP disabled!\n", iir);
> return;
> }
> @@ -230,10 +304,22 @@ int xe_pxp_init(struct xe_device *xe)
> if (!pxp)
> return -ENOMEM;
>
> + INIT_LIST_HEAD(&pxp->queues.list);
> + spin_lock_init(&pxp->queues.lock);
> INIT_WORK(&pxp->irq.work, pxp_irq_work);
> pxp->xe = xe;
> pxp->gt = gt;
>
> + /*
> + * we'll use the completion to check if there is a termination pending,
> + * so we start it as completed and we reinit it when a termination
> + * is triggered.
> + */
> + init_completion(&pxp->termination);
> + complete_all(&pxp->termination);
> +
> + mutex_init(&pxp->mutex);
> +
> pxp->irq.wq = alloc_ordered_workqueue("pxp-wq", 0);
> if (!pxp->irq.wq)
> return -ENOMEM;
> @@ -256,3 +342,202 @@ int xe_pxp_init(struct xe_device *xe)
> destroy_workqueue(pxp->irq.wq);
> return err;
> }
> +
> +static int __pxp_start_arb_session(struct xe_pxp *pxp)
> +{
> + int ret;
> +
> + if (pxp_session_is_in_play(pxp, ARB_SESSION))
> + return -EEXIST;
> +
> + ret = xe_pxp_submit_session_init(&pxp->gsc_res, ARB_SESSION);
> + if (ret) {
> + drm_err(&pxp->xe->drm, "Failed to init PXP arb session\n");
> + goto out;
> + }
> +
> + ret = pxp_wait_for_session_state(pxp, ARB_SESSION, true);
> + if (ret) {
> + drm_err(&pxp->xe->drm, "PXP ARB session failed to go in play\n");
> + goto out;
> + }
> +
> + drm_dbg(&pxp->xe->drm, "PXP ARB session is active\n");
> +
> +out:
> + if (!ret)
> + pxp->status = XE_PXP_ACTIVE;
> + else
> + pxp->status = XE_PXP_ERROR;
> +
> + return ret;
> +}
> +
> +/**
> + * xe_pxp_exec_queue_set_type - Mark a queue as using PXP
> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
> + * @q: the queue to mark as using PXP
> + * @type: the type of PXP session this queue will use
> + *
> + * Returns 0 if the selected PXP type is supported, -ENODEV otherwise.
> + */
> +int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 type)
> +{
> + if (!xe_pxp_is_enabled(pxp))
> + return -ENODEV;
> +
> + /* we only support HWDRM sessions right now */
> + xe_assert(pxp->xe, type == DRM_XE_PXP_TYPE_HWDRM);
> +
> + q->pxp.type = type;
> +
> + return 0;
> +}
> +
> +/**
> + * xe_pxp_exec_queue_add - add a queue to the PXP list
> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
> + * @q: the queue to add to the list
> + *
> + * If PXP is enabled and the prerequisites are done, start the PXP ARB
> + * session (if not already running) and add the queue to the PXP list. Note
> + * that the queue must have previously been marked as using PXP with
> + * xe_pxp_exec_queue_set_type.
> + *
> + * Returns 0 if the PXP ARB session is running and the queue is in the list,
> + * -ENODEV if PXP is disabled, -EBUSY if the PXP prerequisites are not done,
> + * other errno value if something goes wrong during the session start.
> + */
> +#define PXP_TERMINATION_TIMEOUT_MS 500
> +int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q)
> +{
> + int ret = 0;
> +
> + if (!xe_pxp_is_enabled(pxp))
> + return -ENODEV;
> +
> + /* we only support HWDRM sessions right now */
> + xe_assert(pxp->xe, q->pxp.type == DRM_XE_PXP_TYPE_HWDRM);
> +
> + /*
> + * Runtime suspend kills PXP, so we need to turn it off while we have
> + * active queues that use PXP
> + */
> + xe_pm_runtime_get(pxp->xe);
> +
> + if (!pxp_prerequisites_done(pxp)) {
> + ret = -EBUSY;
> + goto out;
> + }
> +
> +wait_for_termination:
> + /*
> + * if there is a termination in progress, wait for it.
> + * We need to wait outside the lock because the completion is done from
> + * within the lock
> + */
> + if (!wait_for_completion_timeout(&pxp->termination,
> + msecs_to_jiffies(PXP_TERMINATION_TIMEOUT_MS)))
> + return -ETIMEDOUT;
> +
> + mutex_lock(&pxp->mutex);
> +
> + /*
> + * check if a new termination was issued between the above check and
> + * grabbing the mutex
> + */
> + if (!completion_done(&pxp->termination)) {
> + mutex_unlock(&pxp->mutex);
> + goto wait_for_termination;
> + }
> +
> + /* If PXP is not already active, turn it on */
> + switch (pxp->status) {
> + case XE_PXP_ERROR:
> + ret = -EIO;
> + break;
> + case XE_PXP_ACTIVE:
> + break;
> + case XE_PXP_READY_TO_START:
> + ret = __pxp_start_arb_session(pxp);
> + break;
> + case XE_PXP_NEEDS_TERMINATION:
> + pxp_terminate(pxp);
> + mutex_unlock(&pxp->mutex);
> + goto wait_for_termination;
> + default:
> + drm_err(&pxp->xe->drm, "unexpected state during PXP start: %u", pxp->status);
> + ret = -EIO;
> + break;
> + }
> +
> + /* If everything went ok, add the queue to the list */
> + if (!ret) {
> + spin_lock_irq(&pxp->queues.lock);
> + list_add_tail(&q->pxp.link, &pxp->queues.list);
> + spin_unlock_irq(&pxp->queues.lock);
> + }
> +
> + mutex_unlock(&pxp->mutex);
> +
> +out:
> + /*
> + * in the successful case the PM ref is released from
> + * xe_pxp_exec_queue_remove
> + */
> + if (ret)
> + xe_pm_runtime_put(pxp->xe);
> +
> + return ret;
> +}
> +
> +/**
> + * xe_pxp_exec_queue_remove - remove a queue from the PXP list
> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
> + * @q: the queue to remove from the list
> + *
> + * If PXP is enabled and the exec_queue is in the list, the queue will be
> + * removed from the list and its PM reference will be released. It is safe to
> + * call this function multiple times for the same queue.
> + */
> +void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q)
> +{
> + bool need_pm_put = false;
> +
> + if (!xe_pxp_is_enabled(pxp))
> + return;
> +
> + spin_lock_irq(&pxp->queues.lock);
> +
> + if (!list_empty(&q->pxp.link)) {
> + list_del_init(&q->pxp.link);
> + need_pm_put = true;
> + }
> +
> + q->pxp.type = DRM_XE_PXP_TYPE_NONE;
> +
> + spin_unlock_irq(&pxp->queues.lock);
> +
> + if (need_pm_put)
> + xe_pm_runtime_put(pxp->xe);
> +}
> +
> +static void pxp_invalidate_queues(struct xe_pxp *pxp)
> +{
> + struct xe_exec_queue *tmp, *q;
> +
> + spin_lock_irq(&pxp->queues.lock);
> +
> + list_for_each_entry(tmp, &pxp->queues.list, pxp.link) {
> + q = xe_exec_queue_get_unless_zero(tmp);
> +
> + if (!q)
> + continue;
> +
> + xe_exec_queue_kill(q);
> + xe_exec_queue_put(q);
> + }
> +
> + spin_unlock_irq(&pxp->queues.lock);
> +}
> +
> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
> index 81bafe2714ff..2e0ab186072a 100644
> --- a/drivers/gpu/drm/xe/xe_pxp.h
> +++ b/drivers/gpu/drm/xe/xe_pxp.h
> @@ -9,10 +9,17 @@
> #include <linux/types.h>
>
> struct xe_device;
> +struct xe_exec_queue;
> +struct xe_pxp;
>
> bool xe_pxp_is_supported(const struct xe_device *xe);
> +bool xe_pxp_is_enabled(const struct xe_pxp *pxp);
>
> int xe_pxp_init(struct xe_device *xe);
> void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
>
> +int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 type);
> +int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q);
> +void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q);
> +
> #endif /* __XE_PXP_H__ */
> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
> index c9258c861556..becffa6dfd4c 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_submit.c
> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
> @@ -26,8 +26,6 @@
> #include "instructions/xe_mi_commands.h"
> #include "regs/xe_gt_regs.h"
>
> -#define ARB_SESSION 0xF /* TODO: move to UAPI */
> -
> /*
> * The VCS is used for kernel-owned GGTT submissions to issue key termination.
> * Terminations are serialized, so we only need a single queue and a single
> @@ -495,7 +493,7 @@ int xe_pxp_submit_session_init(struct xe_pxp_gsc_client_resources *gsc_res, u32
> FIELD_PREP(PXP43_INIT_SESSION_APPTYPE, 0));
> msg_in.header.buffer_len = sizeof(msg_in) - sizeof(msg_in.header);
>
> - if (id == ARB_SESSION)
> + if (id == DRM_XE_PXP_HWDRM_DEFAULT_SESSION)
> msg_in.protection_mode = PXP43_INIT_SESSION_PROTECTION_ARB;
>
> ret = gsccs_send_message(gsc_res, &msg_in, sizeof(msg_in),
> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
> index d5cf8faed7be..eb6a0183320a 100644
> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
> @@ -6,7 +6,10 @@
> #ifndef __XE_PXP_TYPES_H__
> #define __XE_PXP_TYPES_H__
>
> +#include <linux/completion.h>
> #include <linux/iosys-map.h>
> +#include <linux/mutex.h>
> +#include <linux/spinlock.h>
> #include <linux/types.h>
> #include <linux/workqueue.h>
>
> @@ -16,6 +19,14 @@ struct xe_device;
> struct xe_gt;
> struct xe_vm;
>
> +enum xe_pxp_status {
> + XE_PXP_ERROR = -1,
> + XE_PXP_NEEDS_TERMINATION = 0, /* starting status */
> + XE_PXP_TERMINATION_IN_PROGRESS,
> + XE_PXP_READY_TO_START,
> + XE_PXP_ACTIVE
> +};
> +
> /**
> * struct xe_pxp_gsc_client_resources - resources for GSC submission by a PXP
> * client. The GSC FW supports multiple GSC client active at the same time.
> @@ -82,6 +93,21 @@ struct xe_pxp {
> #define PXP_TERMINATION_REQUEST BIT(0)
> #define PXP_TERMINATION_COMPLETE BIT(1)
> } irq;
> +
> + /** @mutex: protects the pxp status and the queue list */
> + struct mutex mutex;
> + /** @status: the current pxp status */
> + enum xe_pxp_status status;
> + /** @termination: completion struct that tracks terminations */
> + struct completion termination;
> +
> + /** @queues: management of exec_queues that use PXP */
> + struct {
> + /** @queues.lock: spinlock protecting the queue management */
> + spinlock_t lock;
> + /** @queues.list: list of exec_queues that use PXP */
> + struct list_head list;
> + } queues;
> };
>
> #endif /* __XE_PXP_TYPES_H__ */
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index b6fbe4988f2e..5f4d08123672 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -1085,6 +1085,24 @@ struct drm_xe_vm_bind {
> /**
> * struct drm_xe_exec_queue_create - Input of &DRM_IOCTL_XE_EXEC_QUEUE_CREATE
> *
> + * This ioctl supports setting the following properties via the
> + * %DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY extension, which uses the
> + * generic @drm_xe_ext_set_property struct:
> + *
> + * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY - set the queue priority.
> + * CAP_SYS_NICE is required to set a value above normal.
> + * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE - set the queue timeslice
> + * duration.
> + * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE - set the type of PXP session
> + * this queue will be used with. Valid values are listed in enum
> + * drm_xe_pxp_session_type. %DRM_XE_PXP_TYPE_NONE is the default behavior, so
> + * there is no need to explicitly set that. When a queue of type
> + * %DRM_XE_PXP_TYPE_HWDRM is created, the PXP default HWDRM session
> + * (%DRM_XE_PXP_HWDRM_DEFAULT_SESSION) will be started, if it isn't already running.
> + * Given that going into a power-saving state kills PXP HWDRM sessions,
> + * runtime PM will be blocked while queues of this type are alive.
> + * All PXP queues will be killed if a PXP invalidation event occurs.
> + *
> * The example below shows how to use @drm_xe_exec_queue_create to create
> * a simple exec_queue (no parallel submission) of class
> * &DRM_XE_ENGINE_CLASS_RENDER.
> @@ -1108,7 +1126,7 @@ struct drm_xe_exec_queue_create {
> #define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
> -
> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE 2
> /** @extensions: Pointer to the first extension struct, if any */
> __u64 extensions;
>
> @@ -1694,6 +1712,26 @@ struct drm_xe_oa_stream_info {
> __u64 reserved[3];
> };
>
> +/**
> + * enum drm_xe_pxp_session_type - Supported PXP session types.
> + *
> + * We currently only support HWDRM sessions, which are used for protected
> + * content that ends up being displayed, but the HW supports multiple types, so
> + * we might extend support in the future.
> + */
> +enum drm_xe_pxp_session_type {
> + /** @DRM_XE_PXP_TYPE_NONE: PXP not used */
> + DRM_XE_PXP_TYPE_NONE = 0,
> + /**
> + * @DRM_XE_PXP_TYPE_HWDRM: HWDRM sessions are used for content that ends
> + * up on the display.
> + */
> + DRM_XE_PXP_TYPE_HWDRM = 1,
> +};
> +
> +/* ID of the protected content session managed by Xe when PXP is active */
> +#define DRM_XE_PXP_HWDRM_DEFAULT_SESSION 0xf
> +
> #if defined(__cplusplus)
> }
> #endif
--
Jani Nikula, Intel
* Re: [PATCH v2 02/12] drm/xe/pxp: Allocate PXP execution resources
2024-10-04 20:30 ` John Harrison
@ 2024-11-06 22:25 ` Daniele Ceraolo Spurio
0 siblings, 0 replies; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-11-06 22:25 UTC (permalink / raw)
To: John Harrison, intel-xe; +Cc: Matthew Brost, Thomas Hellström
<snip>
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 6dd76f77b504..56f105797ae6 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -1381,6 +1381,15 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
>> struct xe_tile *tile;
>> u8 id;
>> + /*
>> + * All GSC VMs are owned by the kernel and can also only be used on
>> + * the GSCCS. We don't want a kernel-owned VM to put the device in
>> + * either fault or not fault mode, so we need to exclude the GSC VMs
>> + * from that count; this is only safe if we ensure that all GSC VMs are
>> + * non-faulting.
>> + */
>> + xe_assert(xe, !((flags & XE_VM_FLAG_GSC) && (flags & XE_VM_FLAG_FAULT_MODE)));
>> +
>> vm = kzalloc(sizeof(*vm), GFP_KERNEL);
>> if (!vm)
>> return ERR_PTR(-ENOMEM);
>> @@ -1391,7 +1400,21 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
>> vm->flags = flags;
>> - init_rwsem(&vm->lock);
>> + /**
>> + * GSC VMs are kernel-owned, only used for PXP ops and can be
>> + * manipulated under the PXP mutex. However, the PXP mutex can be taken
> Is that 'can be (but don't have to be) manipulated' or 'can only be
> manipulated'?
The first one, will reword.
>
>> + * under a user-VM lock when the PXP session is started at exec_queue
>> + * creation time. Those are different VMs and therefore there is no risk
>> + * of deadlock, but we need to tell lockdep that this is the case or it
>> + * will print a warning.
>> + */
>> + if (flags & XE_VM_FLAG_GSC) {
>> + static struct lock_class_key gsc_vm_key;
>> +
>> + __init_rwsem(&vm->lock, "gsc_vm", &gsc_vm_key);
>> + } else {
>> + init_rwsem(&vm->lock);
>> + }
>> mutex_init(&vm->snap_mutex);
>> INIT_LIST_HEAD(&vm->rebind_list);
>> @@ -1510,7 +1533,7 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
>> mutex_lock(&xe->usm.lock);
>> if (flags & XE_VM_FLAG_FAULT_MODE)
>> xe->usm.num_vm_in_fault_mode++;
>> - else if (!(flags & XE_VM_FLAG_MIGRATION))
>> + else if (!(flags & (XE_VM_FLAG_MIGRATION | XE_VM_FLAG_GSC)))
>> xe->usm.num_vm_in_non_fault_mode++;
>> mutex_unlock(&xe->usm.lock);
>> @@ -2694,11 +2717,10 @@ static void vm_bind_ioctl_ops_fini(struct xe_vm *vm, struct xe_vma_ops *vops,
>> for (i = 0; i < vops->num_syncs; i++)
>> xe_sync_entry_signal(vops->syncs + i, fence);
>> xe_exec_queue_last_fence_set(wait_exec_queue, vm, fence);
>> - dma_fence_put(fence);
>> }
>> -static int vm_bind_ioctl_ops_execute(struct xe_vm *vm,
>> - struct xe_vma_ops *vops)
>> +static struct dma_fence *vm_bind_ioctl_ops_execute(struct xe_vm *vm,
>> + struct xe_vma_ops *vops)
> Rather than changing the internals, is it not possible to just call
> xe_exec_queue_last_fence_get() after vm_bind_ioctl_ops_execute has
> returned?
If you have multiple bind ops, the last fence can be overwritten, so we
might end up waiting for a different (newer) allocation.
>
>> {
>> struct drm_exec exec;
>> struct dma_fence *fence;
>> @@ -2711,21 +2733,21 @@ static int vm_bind_ioctl_ops_execute(struct xe_vm *vm,
>> drm_exec_until_all_locked(&exec) {
>> err = vm_bind_ioctl_ops_lock_and_prep(&exec, vm, vops);
>> drm_exec_retry_on_contention(&exec);
>> - if (err)
>> + if (err) {
>> + fence = ERR_PTR(err);
>> goto unlock;
>> + }
>> fence = ops_execute(vm, vops);
>> - if (IS_ERR(fence)) {
>> - err = PTR_ERR(fence);
>> + if (IS_ERR(fence))
>> goto unlock;
>> - }
>> vm_bind_ioctl_ops_fini(vm, vops, fence);
>> }
>> unlock:
>> drm_exec_fini(&exec);
>> - return err;
>> + return fence;
>> }
>> #define SUPPORTED_FLAGS_STUB \
>> @@ -2946,6 +2968,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>> struct xe_sync_entry *syncs = NULL;
>> struct drm_xe_vm_bind_op *bind_ops;
>> struct xe_vma_ops vops;
>> + struct dma_fence *fence;
>> int err;
>> int i;
>> @@ -3108,7 +3131,11 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>> if (err)
>> goto unwind_ops;
>> - err = vm_bind_ioctl_ops_execute(vm, &vops);
>> + fence = vm_bind_ioctl_ops_execute(vm, &vops);
>> + if (IS_ERR(fence))
>> + err = PTR_ERR(fence);
>> + else
>> + dma_fence_put(fence);
> There isn't a new fence get in vm_bind_ioctl_ops_execute(). The change
> in return value is the only difference in behaviour. So why is an
> extra put required?
I've removed a dma_fence_put from vm_bind_ioctl_ops_fini (which is
called from ops_execute), so this is not extra.
>
>> unwind_ops:
>> if (err && err != -ENODATA)
>> @@ -3142,6 +3169,81 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>> return err;
>> }
>> +/**
>> + * xe_vm_bind_bo - bind a kernel BO to a VM
>> + * @vm: VM to bind the BO to
>> + * @bo: BO to bind
>> + * @q: exec queue to use for the bind (optional)
>> + * @addr: address at which to bind the BO
>> + * @cache_lvl: PAT cache level to use
>> + *
>> + * Execute a VM bind map operation on a kernel-owned BO to bind it into a
>> + * kernel-owned VM.
>> + *
>> + * Returns a dma_fence to track the binding completion if the job to do so was
>> + * successfully submitted, an error pointer otherwise.
>> + */
>> +struct dma_fence *xe_vm_bind_bo(struct xe_vm *vm, struct xe_bo *bo,
>> + struct xe_exec_queue *q, u64 addr,
>> + enum xe_cache_level cache_lvl)
> Should this have '_kernel_' in the name given the description of
> kernel-owned BO to kernel-owned VM?
will rename.
Daniele
>
> John.
>
>> +{
>> + struct xe_vma_ops vops;
>> + struct drm_gpuva_ops *ops = NULL;
>> + struct dma_fence *fence;
>> + int err;
>> +
>> + xe_bo_get(bo);
>> + xe_vm_get(vm);
>> + if (q)
>> + xe_exec_queue_get(q);
>> +
>> + down_write(&vm->lock);
>> +
>> + xe_vma_ops_init(&vops, vm, q, NULL, 0);
>> +
>> + ops = vm_bind_ioctl_ops_create(vm, bo, 0, addr, bo->size,
>> + DRM_XE_VM_BIND_OP_MAP, 0, 0,
>> + vm->xe->pat.idx[cache_lvl]);
>> + if (IS_ERR(ops)) {
>> + err = PTR_ERR(ops);
>> + goto release_vm_lock;
>> + }
>> +
>> + err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
>> + if (err)
>> + goto release_vm_lock;
>> +
>> + xe_assert(vm->xe, !list_empty(&vops.list));
>> +
>> + err = xe_vma_ops_alloc(&vops, false);
>> + if (err)
>> + goto unwind_ops;
>> +
>> + fence = vm_bind_ioctl_ops_execute(vm, &vops);
>> + if (IS_ERR(fence))
>> + err = PTR_ERR(fence);
>> +
>> +unwind_ops:
>> + if (err && err != -ENODATA)
>> + vm_bind_ioctl_ops_unwind(vm, &ops, 1);
>> +
>> + xe_vma_ops_fini(&vops);
>> + drm_gpuva_ops_free(&vm->gpuvm, ops);
>> +
>> +release_vm_lock:
>> + up_write(&vm->lock);
>> +
>> + if (q)
>> + xe_exec_queue_put(q);
>> + xe_vm_put(vm);
>> + xe_bo_put(bo);
>> +
>> + if (err)
>> + fence = ERR_PTR(err);
>> +
>> + return fence;
>> +}
>> +
>> /**
>> * xe_vm_lock() - Lock the vm's dma_resv object
>> * @vm: The struct xe_vm whose lock is to be locked
>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>> index c864dba35e1d..bfc19e8113c3 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.h
>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>> @@ -19,6 +19,8 @@ struct drm_file;
>> struct ttm_buffer_object;
>> struct ttm_validate_buffer;
>> +struct dma_fence;
>> +
>> struct xe_exec_queue;
>> struct xe_file;
>> struct xe_sync_entry;
>> @@ -248,6 +250,10 @@ int xe_vm_lock_vma(struct drm_exec *exec, struct xe_vma *vma);
>> int xe_vm_validate_rebind(struct xe_vm *vm, struct drm_exec *exec,
>> unsigned int num_fences);
>> +struct dma_fence *xe_vm_bind_bo(struct xe_vm *vm, struct xe_bo *bo,
>> + struct xe_exec_queue *q, u64 addr,
>> + enum xe_cache_level cache_lvl);
>> +
>> /**
>> * xe_vm_resv() - Return's the vm's reservation object
>> * @vm: The vm
>> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
>> index 7f9a303e51d8..52467b9b5348 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_types.h
>> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
>> @@ -164,6 +164,7 @@ struct xe_vm {
>> #define XE_VM_FLAG_BANNED BIT(5)
>> #define XE_VM_FLAG_TILE_ID(flags) FIELD_GET(GENMASK(7, 6), flags)
>> +#define XE_VM_FLAG_SET_TILE_ID(tile) FIELD_PREP(GENMASK(7, 6), (tile)->id)
>> +#define XE_VM_FLAG_GSC BIT(8)
>> unsigned long flags;
>> /** @composite_fence_ctx: context composite fence */
* Re: [PATCH v2 03/12] drm/xe/pxp: Add VCS inline termination support
2024-10-04 22:25 ` John Harrison
@ 2024-11-06 23:49 ` Daniele Ceraolo Spurio
2024-11-14 18:46 ` John Harrison
0 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-11-06 23:49 UTC (permalink / raw)
To: John Harrison, intel-xe
On 10/4/24 15:25, John Harrison wrote:
> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>> The key termination is done with a specific submission to the VCS
>> engine.
>>
>> Note that this patch is meant to be squashed with the follow-up patches
>> that implement the other pieces of the termination flow. It is separate
>> for now for ease of review.
>>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> ---
>> .../gpu/drm/xe/instructions/xe_instr_defs.h | 1 +
>> .../gpu/drm/xe/instructions/xe_mfx_commands.h | 29 +++++
>> .../gpu/drm/xe/instructions/xe_mi_commands.h | 5 +
>> drivers/gpu/drm/xe/xe_lrc.h | 3 +-
>> drivers/gpu/drm/xe/xe_pxp_submit.c | 108 ++++++++++++++++++
>> drivers/gpu/drm/xe/xe_pxp_submit.h | 2 +
>> drivers/gpu/drm/xe/xe_ring_ops.c | 4 +-
>> 7 files changed, 149 insertions(+), 3 deletions(-)
>> create mode 100644 drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
>>
>> diff --git a/drivers/gpu/drm/xe/instructions/xe_instr_defs.h b/drivers/gpu/drm/xe/instructions/xe_instr_defs.h
>> index fd2ce7ace510..e559969468c4 100644
>> --- a/drivers/gpu/drm/xe/instructions/xe_instr_defs.h
>> +++ b/drivers/gpu/drm/xe/instructions/xe_instr_defs.h
>> @@ -16,6 +16,7 @@
>> #define XE_INSTR_CMD_TYPE GENMASK(31, 29)
>> #define XE_INSTR_MI REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x0)
>> #define XE_INSTR_GSC REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x2)
>> +#define XE_INSTR_VIDEOPIPE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x3)
>> #define XE_INSTR_GFXPIPE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x3)
>> #define XE_INSTR_GFX_STATE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x4)
>> diff --git a/drivers/gpu/drm/xe/instructions/xe_mfx_commands.h b/drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
>> new file mode 100644
>> index 000000000000..686ca3b1d9e8
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
>> @@ -0,0 +1,29 @@
>> +/* SPDX-License-Identifier: MIT */
>> +/*
>> + * Copyright © 2024 Intel Corporation
>> + */
>> +
>> +#ifndef _XE_MFX_COMMANDS_H_
>> +#define _XE_MFX_COMMANDS_H_
>> +
>> +#include "instructions/xe_instr_defs.h"
>> +
>> +#define MFX_CMD_SUBTYPE REG_GENMASK(28, 27) /* A.K.A cmd pipe */
>> +#define MFX_CMD_OPCODE REG_GENMASK(26, 24)
>> +#define MFX_CMD_SUB_OPCODE REG_GENMASK(23, 16)
>> +#define MFX_FLAGS_AND_LEN REG_GENMASK(15, 0)
>> +
>> +#define XE_MFX_INSTR(subtype, op, sub_op, flags) \
>> + (XE_INSTR_VIDEOPIPE | \
>> + REG_FIELD_PREP(MFX_CMD_SUBTYPE, subtype) | \
>> + REG_FIELD_PREP(MFX_CMD_OPCODE, op) | \
>> + REG_FIELD_PREP(MFX_CMD_SUB_OPCODE, sub_op) | \
>> + REG_FIELD_PREP(MFX_FLAGS_AND_LEN, flags))
>> +
>> +#define MFX_WAIT XE_MFX_INSTR(1, 0, 0, 0)
>> +#define MFX_WAIT_DW0_PXP_SYNC_CONTROL_FLAG REG_BIT(9)
>> +#define MFX_WAIT_DW0_MFX_SYNC_CONTROL_FLAG REG_BIT(8)
>> +
>> +#define CRYPTO_KEY_EXCHANGE XE_MFX_INSTR(2, 6, 9, 0)
>> +
>> +#endif
>> diff --git a/drivers/gpu/drm/xe/instructions/xe_mi_commands.h b/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
>> index 10ec2920d31b..167fb0f742de 100644
>> --- a/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
>> +++ b/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
>> @@ -48,6 +48,7 @@
>> #define MI_LRI_LEN(x) (((x) & 0xff) + 1)
>> #define MI_FLUSH_DW __MI_INSTR(0x26)
>> +#define MI_FLUSH_DW_PROTECTED_MEM_EN REG_BIT(22)
>> #define MI_FLUSH_DW_STORE_INDEX REG_BIT(21)
>> #define MI_INVALIDATE_TLB REG_BIT(18)
>> #define MI_FLUSH_DW_CCS REG_BIT(16)
>> @@ -66,4 +67,8 @@
>> #define MI_BATCH_BUFFER_START __MI_INSTR(0x31)
>> +#define MI_SET_APPID __MI_INSTR(0x0e)
>> +#define MI_SET_APPID_SESSION_ID_MASK REG_GENMASK(6, 0)
>> +#define MI_SET_APPID_SESSION_ID(x) REG_FIELD_PREP(MI_SET_APPID_SESSION_ID_MASK, x)
>> +
>> #endif
>> diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
>> index c24542e89318..d411c3fbcbc6 100644
>> --- a/drivers/gpu/drm/xe/xe_lrc.h
>> +++ b/drivers/gpu/drm/xe/xe_lrc.h
>> @@ -20,7 +20,8 @@ struct xe_lrc;
>> struct xe_lrc_snapshot;
>> struct xe_vm;
>> -#define LRC_PPHWSP_SCRATCH_ADDR (0x34 * 4)
>> +#define LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR (0x34 * 4)
>> +#define LRC_PPHWSP_PXP_INVAL_SCRATCH_ADDR (0x40 * 4)
>> struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
>> u32 ring_size);
>> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
>> index b777b0765c8a..3b69dcc0a00f 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp_submit.c
>> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
>> @@ -6,14 +6,20 @@
>> #include "xe_pxp_submit.h"
>> #include <drm/xe_drm.h>
>> +#include <linux/delay.h>
>> #include "xe_device_types.h"
>> +#include "xe_bb.h"
>> #include "xe_bo.h"
>> #include "xe_exec_queue.h"
>> #include "xe_gsc_submit.h"
>> #include "xe_gt.h"
>> +#include "xe_lrc.h"
>> #include "xe_pxp_types.h"
>> +#include "xe_sched_job.h"
>> #include "xe_vm.h"
>> +#include "instructions/xe_mfx_commands.h"
>> +#include "instructions/xe_mi_commands.h"
>> #include "regs/xe_gt_regs.h"
>> /*
>> @@ -199,3 +205,105 @@ void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp)
>> destroy_vcs_execution_resources(pxp);
>> }
>> +#define emit_cmd(xe_, map_, offset_, val_) \
>> + xe_map_wr(xe_, map_, (offset_) * sizeof(u32), u32, val_)
>> +
>> +/* stall until prior PXP and MFX/HCP/HUC objects are cmopleted */
> completed
>
>> +#define MFX_WAIT_PXP (MFX_WAIT | \
>> + MFX_WAIT_DW0_PXP_SYNC_CONTROL_FLAG | \
>> + MFX_WAIT_DW0_MFX_SYNC_CONTROL_FLAG)
> Why define an XE_MFX_INSTR macro that takes a flags word only to
> manually OR the flags in outside of the macro?
Maybe flags is a misnomer here, but I couldn't think of anything
clearer. Some commands have bits that are always set within that flags
field, so for those we would add them at definition time; other
commands, like MFX_WAIT, do instead have optional flags, which we can
set as needed in the code.
Given that I haven't defined any XE_MFX_INSTR command that requires a
value in the flags field, I'll just remove that field from the
XE_MFX_INSTR macro for now to make things clearer.
>
>> +static u32 pxp_emit_wait(struct xe_device *xe, struct iosys_map *batch, u32 offset)
>> +{
>> + /* wait for cmds to go through */
>> + emit_cmd(xe, batch, offset++, MFX_WAIT_PXP);
>> + emit_cmd(xe, batch, offset++, 0);
> This zero is just padding to ensure 64bit alignment of future
> instructions?
yes
>
>> +
>> + return offset;
>> +}
>> +
>> +static u32 pxp_emit_session_selection(struct xe_device *xe, struct iosys_map *batch,
>> + u32 offset, u32 idx)
>> +{
>> + offset = pxp_emit_wait(xe, batch, offset);
>> +
>> + /* pxp off */
>> + emit_cmd(xe, batch, offset++, MI_FLUSH_DW | MI_FLUSH_IMM_DW);
>> + emit_cmd(xe, batch, offset++, 0);
>> + emit_cmd(xe, batch, offset++, 0);
>> + emit_cmd(xe, batch, offset++, 0);
>> +
>> + /* select session */
>> + emit_cmd(xe, batch, offset++, MI_SET_APPID | MI_SET_APPID_SESSION_ID(idx));
>> + emit_cmd(xe, batch, offset++, MFX_WAIT_PXP);
> Seems odd to define a helper function to emit this instruction and
> then only use it for some instances.
I didn't want the extra padding here or to make the function
conditionally add it only when needed. I'll just add it; it's not like
we don't have enough memory (the BO is 1 page and we only use a few
dwords for the termination).
>
>> +
>> + /* pxp on */
>> + emit_cmd(xe, batch, offset++, MI_FLUSH_DW |
>> + MI_FLUSH_DW_PROTECTED_MEM_EN |
>> + MI_FLUSH_DW_OP_STOREDW |
>> MI_FLUSH_DW_STORE_INDEX |
>> + MI_FLUSH_IMM_DW);
>> + emit_cmd(xe, batch, offset++, LRC_PPHWSP_PXP_INVAL_SCRATCH_ADDR |
>> + MI_FLUSH_DW_USE_GTT);
>> + emit_cmd(xe, batch, offset++, 0);
>> + emit_cmd(xe, batch, offset++, 0);
>> +
>> + offset = pxp_emit_wait(xe, batch, offset);
>> +
>> + return offset;
>> +}
>> +
>> +static u32 pxp_emit_inline_termination(struct xe_device *xe,
>> + struct iosys_map *batch, u32 offset)
>> +{
>> + /* session inline termination */
>> + emit_cmd(xe, batch, offset++, CRYPTO_KEY_EXCHANGE);
>> + emit_cmd(xe, batch, offset++, 0);
>> +
>> + return offset;
>> +}
>> +
>> +static u32 pxp_emit_session_termination(struct xe_device *xe, struct
>> iosys_map *batch,
>> + u32 offset, u32 idx)
>> +{
>> + offset = pxp_emit_session_selection(xe, batch, offset, idx);
>> + offset = pxp_emit_inline_termination(xe, batch, offset);
>> +
>> + return offset;
>> +}
>> +
>> +/**
>> + * xe_pxp_submit_session_termination - submits a PXP inline termination
>> + * @pxp: the xe_pxp structure
>> + * @id: the session to terminate
>> + *
>> + * Emit an inline termination via the VCS engine to terminate a
>> session.
>> + *
>> + * Returns 0 if the submission is successful, an errno value otherwise.
>> + */
>> +int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id)
>> +{
>> + struct xe_sched_job *job;
>> + struct dma_fence *fence;
>> + long timeout;
>> + u32 offset = 0;
>> + u64 addr = xe_bo_ggtt_addr(pxp->vcs_exec.bo);
>> +
>> + offset = pxp_emit_session_termination(pxp->xe,
>> &pxp->vcs_exec.bo->vmap, offset, id);
>> + offset = pxp_emit_wait(pxp->xe, &pxp->vcs_exec.bo->vmap, offset);
>> + emit_cmd(pxp->xe, &pxp->vcs_exec.bo->vmap, offset,
>> MI_BATCH_BUFFER_END);
>> +
>> + job = xe_sched_job_create(pxp->vcs_exec.q, &addr);
> Double space
>
>> + if (IS_ERR(job))
>> + return PTR_ERR(job);
>> +
>> + xe_sched_job_arm(job);
>> + fence = dma_fence_get(&job->drm.s_fence->finished);
>> + xe_sched_job_push(job);
>> +
>> + timeout = dma_fence_wait_timeout(fence, false, HZ);
>> +
>> + dma_fence_put(fence);
>> + if (timeout <= 0)
>> + return -EAGAIN;
> Does it not matter what the error was? Why/how would this fail in a
> way that needs to be re-tried?
>
> Although looking at the later patches, the return value from this
> function is just treated as a pass/fail bool anyway. So why bother
> munging it at all?
dma_fence_wait_timeout() returns 0 when it times out, so can't return
that directly. I'll change it to return timeout if it's negative and
-ETIME if the wait actually timed out.
Daniele
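The return-value mapping described here — propagate a negative error, convert a zero (timed out) result, treat positive as success — can be sketched as a small helper (a userspace approximation; `wait_result_to_errno` is a hypothetical name):

```c
#include <errno.h>
#include <assert.h>

/*
 * dma_fence_wait_timeout()-style calls return the remaining timeout
 * (> 0) on success, 0 on timeout and a negative errno on error, so a
 * zero result must be converted before being returned as a status.
 */
static long wait_result_to_errno(long timeout)
{
	if (timeout < 0)
		return timeout;		/* propagate the real error */
	if (timeout == 0)
		return -ETIME;		/* the wait timed out */

	return 0;			/* completed in time */
}
```

This matches the shape used by pxp_pkt_submit() in the follow-up patch.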
>
> John.
>
>> +
>> + return 0;
>> +}
>> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.h
>> b/drivers/gpu/drm/xe/xe_pxp_submit.h
>> index 1a971fadc081..4ee8c0acfed9 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp_submit.h
>> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.h
>> @@ -13,4 +13,6 @@ struct xe_pxp;
>> int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp);
>> void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp);
>> +int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id);
>> +
>> #endif /* __XE_PXP_SUBMIT_H__ */
>> diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c
>> b/drivers/gpu/drm/xe/xe_ring_ops.c
>> index 0be4f489d3e1..a4b5a0f68a32 100644
>> --- a/drivers/gpu/drm/xe/xe_ring_ops.c
>> +++ b/drivers/gpu/drm/xe/xe_ring_ops.c
>> @@ -118,7 +118,7 @@ static int emit_flush_invalidate(u32 flag, u32
>> *dw, int i)
>> dw[i++] |= MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW |
>> MI_FLUSH_IMM_DW |
>> MI_FLUSH_DW_STORE_INDEX;
>> - dw[i++] = LRC_PPHWSP_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT;
>> + dw[i++] = LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR |
>> MI_FLUSH_DW_USE_GTT;
>> dw[i++] = 0;
>> dw[i++] = ~0U;
>> @@ -156,7 +156,7 @@ static int emit_pipe_invalidate(u32 mask_flags,
>> bool invalidate_tlb, u32 *dw,
>> flags &= ~mask_flags;
>> - return emit_pipe_control(dw, i, 0, flags,
>> LRC_PPHWSP_SCRATCH_ADDR, 0);
>> + return emit_pipe_control(dw, i, 0, flags,
>> LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR, 0);
>> }
>> static int emit_store_imm_ppgtt_posted(u64 addr, u64 value,
>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 04/12] drm/xe/pxp: Add GSC session invalidation support
2024-10-07 20:05 ` John Harrison
@ 2024-11-07 0:15 ` Daniele Ceraolo Spurio
0 siblings, 0 replies; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-11-07 0:15 UTC (permalink / raw)
To: John Harrison, intel-xe
On 10/7/24 13:05, John Harrison wrote:
> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>> After a session is terminated, we need to inform the GSC so that it can
>> clean up its side of the allocation. This is done by sending an
>> invalidation command with the session ID.
>>
>> Note that this patch is meant to be squashed with the follow-up patches
>> that implement the other pieces of the termination flow. It is separate
>> for now for ease of review.
>>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> ---
>> drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h | 12 +
>> drivers/gpu/drm/xe/xe_pxp_submit.c | 215 ++++++++++++++++++
>> drivers/gpu/drm/xe/xe_pxp_submit.h | 3 +
>> 3 files changed, 230 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
>> b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
>> index f3c4cf10ba20..4a59c564a0d0 100644
>> --- a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
>> +++ b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
>> @@ -49,6 +49,7 @@ struct pxp_cmd_header {
>> u32 buffer_len;
>> } __packed;
>> +#define PXP43_CMDID_INVALIDATE_STREAM_KEY 0x00000007
>> #define PXP43_CMDID_NEW_HUC_AUTH 0x0000003F /* MTL+ */
>> /* PXP-Input-Packet: HUC Auth-only */
>> @@ -63,4 +64,15 @@ struct pxp43_huc_auth_out {
>> struct pxp_cmd_header header;
>> } __packed;
>> +/* PXP-Input-Packet: Invalidate Stream Key */
>> +struct pxp43_inv_stream_key_in {
>> + struct pxp_cmd_header header;
>> + u32 rsvd[3];
>> +} __packed;
>> +
>> +/* PXP-Output-Packet: Invalidate Stream Key */
>> +struct pxp43_inv_stream_key_out {
>> + struct pxp_cmd_header header;
>> + u32 rsvd;
>> +} __packed;
>> #endif
>> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c
>> b/drivers/gpu/drm/xe/xe_pxp_submit.c
>> index 3b69dcc0a00f..41684d666376 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp_submit.c
>> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
>> @@ -15,9 +15,13 @@
>> #include "xe_gsc_submit.h"
>> #include "xe_gt.h"
>> #include "xe_lrc.h"
>> +#include "xe_map.h"
>> #include "xe_pxp_types.h"
>> #include "xe_sched_job.h"
>> #include "xe_vm.h"
>> +#include "abi/gsc_command_header_abi.h"
>> +#include "abi/gsc_pxp_commands_abi.h"
>> +#include "instructions/xe_gsc_commands.h"
>> #include "instructions/xe_mfx_commands.h"
>> #include "instructions/xe_mi_commands.h"
>> #include "regs/xe_gt_regs.h"
>> @@ -307,3 +311,214 @@ int xe_pxp_submit_session_termination(struct
>> xe_pxp *pxp, u32 id)
>> return 0;
>> }
>> +
>> +static bool
>> +is_fw_err_platform_config(u32 type)
>> +{
>> + switch (type) {
>> + case PXP_STATUS_ERROR_API_VERSION:
>> + case PXP_STATUS_PLATFCONFIG_KF1_NOVERIF:
>> + case PXP_STATUS_PLATFCONFIG_KF1_BAD:
>> + return true;
>> + default:
>> + break;
>> + }
>> + return false;
>> +}
>> +
>> +static const char *
>> +fw_err_to_string(u32 type)
>> +{
>> + switch (type) {
>> + case PXP_STATUS_ERROR_API_VERSION:
>> + return "ERR_API_VERSION";
>> + case PXP_STATUS_NOT_READY:
>> + return "ERR_NOT_READY";
>> + case PXP_STATUS_PLATFCONFIG_KF1_NOVERIF:
> Is it not worth having a separate string for this error?
Not really. The aim here is to communicate that the platform doesn't
have a valid configuration (which in theory should never happen without
a debug BIOS); it doesn't really matter which specific error it is. We
also print the error code anyway.
>
>> + case PXP_STATUS_PLATFCONFIG_KF1_BAD:
>> + return "ERR_PLATFORM_CONFIG";
>> + default:
>> + break;
>> + }
>> + return NULL;
>> +}
>> +
>> +static int pxp_pkt_submit(struct xe_exec_queue *q, u64 batch_addr)
>> +{
>> + struct xe_gt *gt = q->gt;
>> + struct xe_device *xe = gt_to_xe(gt);
>> + struct xe_sched_job *job;
>> + struct dma_fence *fence;
>> + long timeout;
>> +
>> + xe_assert(xe, q->hwe->engine_id == XE_HW_ENGINE_GSCCS0);
>> +
>> + job = xe_sched_job_create(q, &batch_addr);
> Double space.
>
>> + if (IS_ERR(job))
>> + return PTR_ERR(job);
>> +
>> + xe_sched_job_arm(job);
>> + fence = dma_fence_get(&job->drm.s_fence->finished);
>> + xe_sched_job_push(job);
>> +
>> + timeout = dma_fence_wait_timeout(fence, false, HZ);
>> + dma_fence_put(fence);
>> + if (timeout < 0)
>> + return timeout;
>> + else if (!timeout)
>> + return -ETIME;
>> +
>> + return 0;
>> +}
>> +
>> +static void emit_pxp_heci_cmd(struct xe_device *xe, struct iosys_map
>> *batch,
>> + u64 addr_in, u32 size_in, u64 addr_out, u32 size_out)
>> +{
>> + u32 len = 0;
>> +
>> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32, GSC_HECI_CMD_PKT);
>> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32,
>> lower_32_bits(addr_in));
>> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32,
>> upper_32_bits(addr_in));
>> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32, size_in);
>> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32,
>> lower_32_bits(addr_out));
>> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32,
>> upper_32_bits(addr_out));
>> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32, size_out);
>> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32, 0);
>> + xe_map_wr(xe, batch, len++ * sizeof(u32), u32,
>> MI_BATCH_BUFFER_END);
>> +}
>> +
>> +#define GSC_PENDING_RETRY_MAXCOUNT 40
>> +#define GSC_PENDING_RETRY_PAUSE_MS 50
>> +static int gsccs_send_message(struct xe_pxp_gsc_client_resources
>> *gsc_res,
>> + void *msg_in, size_t msg_in_size,
>> + void *msg_out, size_t msg_out_size_max)
>> +{
>> + struct xe_device *xe = gsc_res->vm->xe;
>> + const size_t max_msg_size = gsc_res->inout_size - sizeof(struct
>> intel_gsc_mtl_header);
>> + u32 wr_offset = 0;
>> + u32 rd_offset = 0;
> The initialisation is not necessary here. rd_offset has the appearance
> of requiring it but doesn't really, and wr_offset is re-assigned
> almost immediately.
>
>> + u32 reply_size;
>> + u32 min_reply_size = 0;
>> + int ret = 0;
> Also not necessary to be pre-initialised.
>
>> + int retry = GSC_PENDING_RETRY_MAXCOUNT;
>> +
>> + if (msg_in_size > max_msg_size || msg_out_size_max > max_msg_size)
>> + return -ENOSPC;
>> +
>> + wr_offset = xe_gsc_emit_header(xe, &gsc_res->msg_in, 0,
>> + HECI_MEADDRESS_PXP,
>> + gsc_res->host_session_handle,
>> + msg_in_size);
>> +
>> + /* NOTE: zero size packets are used for session-cleanups */
>> + if (msg_in && msg_in_size) {
>> + xe_map_memcpy_to(xe, &gsc_res->msg_in, wr_offset,
>> + msg_in, msg_in_size);
>> + min_reply_size = sizeof(struct pxp_cmd_header);
>> + }
>> +
>> + /* Make sure the reply header does not contain stale data */
>> + xe_gsc_poison_header(xe, &gsc_res->msg_out, 0);
>> +
>> + emit_pxp_heci_cmd(xe, &gsc_res->batch, PXP_BB_SIZE,
>> + wr_offset + msg_in_size, PXP_BB_SIZE +
>> gsc_res->inout_size,
>> + msg_out_size_max + wr_offset);
> Is this correct? It is passing in the batch buffer allocation size as
> the address. Shouldn't there be some kind of base address included?
> The in/out buffer is after the BB in the same allocation but that
> allocation is not guaranteed to be at address zero, is it?
It is guaranteed to be at address zero, because when we call bind_bo()
we specify the address. I can make that a define if you think it'd make
things clearer.
>
> Also, it would be more consistent to use 'wr_offset + out_size_max' to
> match the input calculation rather than flipping the terms around.
>
>> +
>> + xe_device_wmb(xe);
>> +
> Might be worth a comment here to say why retries are required and how
> many/how long is expected normally versus worst case?
will do.
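The bounded-retry shape used in the loop below can be sketched in isolation (a userspace approximation; `submit_with_retries` and `fake_submit` are hypothetical stand-ins for the real submission path):

```c
#include <errno.h>
#include <assert.h>

#define GSC_PENDING_RETRY_MAXCOUNT 40

/* stand-in for pxp_pkt_submit() + the pending check: pretend the GSC
 * reports "still pending" twice before succeeding */
static int fake_calls;
static int fake_submit(void *ctx)
{
	(void)ctx;
	return ++fake_calls < 3 ? -EAGAIN : 0;
}

/*
 * Retry only while the operation reports "still pending" (-EAGAIN),
 * give up after a fixed number of attempts, and stop immediately on
 * any other error. The real code sleeps between attempts.
 */
static int submit_with_retries(int (*submit_fn)(void *), void *ctx)
{
	int retry = GSC_PENDING_RETRY_MAXCOUNT;
	int ret;

	do {
		ret = submit_fn(ctx);
		/* a real caller would msleep() here before retrying */
	} while (--retry && ret == -EAGAIN);

	return ret;
}
```

If the retries are exhausted while still pending, the caller sees the final -EAGAIN, matching the loop below.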
>
>> + do {
>> + ret = pxp_pkt_submit(gsc_res->q, 0);
>> + if (ret)
>> + break;
>> +
>> + if (xe_gsc_check_and_update_pending(xe, &gsc_res->msg_in, 0,
>> + &gsc_res->msg_out, 0)) {
>> + ret = -EAGAIN;
>> + msleep(GSC_PENDING_RETRY_PAUSE_MS);
>> + }
>> + } while (--retry && ret == -EAGAIN);
>> +
>> + if (ret) {
>> + drm_err(&xe->drm, "failed to submit GSC PXP message: %d\n",
>> ret);
>> + return ret;
>> + }
>> +
>> + ret = xe_gsc_read_out_header(xe, &gsc_res->msg_out, 0,
>> + min_reply_size, &rd_offset);
>> + if (ret) {
>> + drm_err(&xe->drm, "invalid GSC reply for PXP (err=%d)\n", ret);
> Should be %pe for the error code?
>
>> + return ret;
>> + }
>> +
>> + if (msg_out && min_reply_size) {
>> + reply_size = xe_map_rd_field(xe, &gsc_res->msg_out, rd_offset,
>> + struct pxp_cmd_header, buffer_len);
>> + reply_size += sizeof(struct pxp_cmd_header);
>> +
>> + if (reply_size > msg_out_size_max) {
>> + drm_warn(&xe->drm, "caller with insufficient PXP reply
>> size %u (%ld)\n",
>> + reply_size, msg_out_size_max);
> I would maybe go with 'reply size overflow'. Took me a moment to work
> out why 'size > max' becomes 'insufficient reply size'.
>
>> + reply_size = msg_out_size_max;
> Is it useful to return a partial message?
The caller can check the header and see if there is any error set there.
>
>> + }
>> +
>> + xe_map_memcpy_from(xe, msg_out, &gsc_res->msg_out,
>> + rd_offset, reply_size);
>> + }
>> +
>> + xe_gsc_poison_header(xe, &gsc_res->msg_in, 0);
>> +
>> + return ret;
>> +}
>> +
>> +/**
>> + * xe_pxp_submit_session_invalidation - submits a PXP GSC invalidation
>> + * @gsc_res: the pxp client resources
>> + * @id: the session to invalidate
>> + *
>> + * Submit a message to the GSC FW to notify it that a session has been
>> + * terminated and is therefore invalid.
>> + *
>> + * Returns 0 if the submission is successful, an errno value otherwise.
>> + */
>> +int xe_pxp_submit_session_invalidation(struct
>> xe_pxp_gsc_client_resources *gsc_res,
>> + u32 id)
> Is this really over 100 columns if not wrapped?
nope, will remove the wrapping
>
>> +{
>> + struct xe_device *xe = gsc_res->vm->xe;
>> + struct pxp43_inv_stream_key_in msg_in = {0};
>> + struct pxp43_inv_stream_key_out msg_out = {0};
>> + int ret = 0;
>> +
>> + /*
>> + * Stream key invalidation reuses the same version 4.2 input/output
>> + * command format but firmware requires 4.3 API interaction
>> + */
>> + msg_in.header.api_version = PXP_APIVER(4, 3);
>> + msg_in.header.command_id = PXP43_CMDID_INVALIDATE_STREAM_KEY;
>> + msg_in.header.buffer_len = sizeof(msg_in) - sizeof(msg_in.header);
>> +
>> + msg_in.header.stream_id =
>> FIELD_PREP(PXP_CMDHDR_EXTDATA_SESSION_VALID, 1);
>> + msg_in.header.stream_id |=
>> FIELD_PREP(PXP_CMDHDR_EXTDATA_APP_TYPE, 0);
>> + msg_in.header.stream_id |=
>> FIELD_PREP(PXP_CMDHDR_EXTDATA_SESSION_ID, id);
>> +
>> + ret = gsccs_send_message(gsc_res, &msg_in, sizeof(msg_in),
>> + &msg_out, sizeof(msg_out));
>> + if (ret) {
>> + drm_err(&xe->drm, "Failed to inv-stream-key-%u, ret=[%d]\n",
> Would be clearer to say "failed to invalidate stream-key-%u"? The
> message currently reads as "failed to <name-of-variable>" which doesn't
> make much sense. Same comment for the other two prints below.
>
> Also, %pe for the return code?
Ack.
Daniele
>
> John.
>
>> + id, ret);
>> + } else if (msg_out.header.status != 0) {
>> + if (is_fw_err_platform_config(msg_out.header.status)) {
>> + drm_info_once(&xe->drm,
>> + "PXP inv-stream-key-%u failed due to BIOS/SOC
>> :0x%08x:%s\n",
>> + id, msg_out.header.status,
>> + fw_err_to_string(msg_out.header.status));
>> + } else {
>> + drm_dbg(&xe->drm, "PXP inv-stream-key-%u failed
>> 0x%08x:%s:\n",
>> + id, msg_out.header.status,
>> + fw_err_to_string(msg_out.header.status));
>> + drm_dbg(&xe->drm, " cmd-detail:
>> ID=[0x%08x],API-Ver-[0x%08x]\n",
>> + msg_in.header.command_id, msg_in.header.api_version);
>> + }
>> + }
>> +
>> + return ret;
>> +}
>> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.h
>> b/drivers/gpu/drm/xe/xe_pxp_submit.h
>> index 4ee8c0acfed9..48fdc9b09116 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp_submit.h
>> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.h
>> @@ -9,10 +9,13 @@
>> #include <linux/types.h>
>> struct xe_pxp;
>> +struct xe_pxp_gsc_client_resources;
>> int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp);
>> void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp);
>> int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id);
>> +int xe_pxp_submit_session_invalidation(struct
>> xe_pxp_gsc_client_resources *gsc_res,
>> + u32 id);
>> #endif /* __XE_PXP_SUBMIT_H__ */
>
* Re: [PATCH v2 05/12] drm/xe/pxp: Handle the PXP termination interrupt
2024-10-08 0:34 ` John Harrison
@ 2024-11-07 0:33 ` Daniele Ceraolo Spurio
2024-11-14 19:46 ` John Harrison
0 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-11-07 0:33 UTC (permalink / raw)
To: John Harrison, intel-xe
On 10/7/24 17:34, John Harrison wrote:
> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>> When something happens to the session, the HW generates a termination
>> interrupt. In reply to this, the driver is required to submit an inline
>> session termination via the VCS, trigger the global termination and
>> notify the GSC FW that the session is now invalid.
>>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> ---
>> drivers/gpu/drm/xe/regs/xe_gt_regs.h | 8 ++
>> drivers/gpu/drm/xe/regs/xe_pxp_regs.h | 6 ++
>> drivers/gpu/drm/xe/xe_irq.c | 20 +++-
>> drivers/gpu/drm/xe/xe_pxp.c | 138 +++++++++++++++++++++++++-
>> drivers/gpu/drm/xe/xe_pxp.h | 3 +
>> drivers/gpu/drm/xe/xe_pxp_types.h | 13 +++
>> 6 files changed, 184 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
>> b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
>> index 0d1a4a9f4e11..9e9c20f1f1f4 100644
>> --- a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
>> +++ b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
>> @@ -570,6 +570,7 @@
>> #define ENGINE1_MASK REG_GENMASK(31, 16)
>> #define ENGINE0_MASK REG_GENMASK(15, 0)
>> #define GPM_WGBOXPERF_INTR_ENABLE XE_REG(0x19003c,
>> XE_REG_OPTION_VF)
>> +#define CRYPTO_RSVD_INTR_ENABLE XE_REG(0x190040)
>> #define GUNIT_GSC_INTR_ENABLE XE_REG(0x190044,
>> XE_REG_OPTION_VF)
>> #define CCS_RSVD_INTR_ENABLE XE_REG(0x190048,
>> XE_REG_OPTION_VF)
>> @@ -580,6 +581,7 @@
>> #define INTR_ENGINE_INTR(x) REG_FIELD_GET(GENMASK(15, 0), x)
>> #define OTHER_GUC_INSTANCE 0
>> #define OTHER_GSC_HECI2_INSTANCE 3
>> +#define OTHER_KCR_INSTANCE 4
>> #define OTHER_GSC_INSTANCE 6
>> #define IIR_REG_SELECTOR(x) XE_REG(0x190070 + ((x) *
>> 4), XE_REG_OPTION_VF)
>> @@ -591,6 +593,7 @@
>> #define HECI2_RSVD_INTR_MASK XE_REG(0x1900e4)
>> #define GUC_SG_INTR_MASK XE_REG(0x1900e8, XE_REG_OPTION_VF)
>> #define GPM_WGBOXPERF_INTR_MASK XE_REG(0x1900ec,
>> XE_REG_OPTION_VF)
>> +#define CRYPTO_RSVD_INTR_MASK XE_REG(0x1900f0)
>> #define GUNIT_GSC_INTR_MASK XE_REG(0x1900f4,
>> XE_REG_OPTION_VF)
>> #define CCS0_CCS1_INTR_MASK XE_REG(0x190100)
>> #define CCS2_CCS3_INTR_MASK XE_REG(0x190104)
>> @@ -605,4 +608,9 @@
>> #define GT_CS_MASTER_ERROR_INTERRUPT REG_BIT(3)
>> #define GT_RENDER_USER_INTERRUPT REG_BIT(0)
>> +/* irqs for OTHER_KCR_INSTANCE */
>> +#define KCR_PXP_STATE_TERMINATED_INTERRUPT REG_BIT(1)
>> +#define KCR_APP_TERMINATED_PER_FW_REQ_INTERRUPT REG_BIT(2)
>> +#define KCR_PXP_STATE_RESET_COMPLETE_INTERRUPT REG_BIT(3)
>> +
>> #endif
>> diff --git a/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
>> b/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
>> index d67cf210d23d..aa158938b42e 100644
>> --- a/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
>> +++ b/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
>> @@ -14,4 +14,10 @@
>> #define KCR_INIT XE_REG(0x3860f0)
>> #define KCR_INIT_ALLOW_DISPLAY_ME_WRITES REG_BIT(14)
>> +/* KCR hwdrm session in play status 0-31 */
>> +#define KCR_SIP XE_REG(0x386260)
>> +
>> +/* PXP global terminate register for session termination */
>> +#define KCR_GLOBAL_TERMINATE XE_REG(0x3860f8)
>> +
>> #endif /* __XE_PXP_REGS_H__ */
>> diff --git a/drivers/gpu/drm/xe/xe_irq.c b/drivers/gpu/drm/xe/xe_irq.c
>> index 5f2c368c35ad..f11d9a740627 100644
>> --- a/drivers/gpu/drm/xe/xe_irq.c
>> +++ b/drivers/gpu/drm/xe/xe_irq.c
>> @@ -20,6 +20,7 @@
>> #include "xe_hw_engine.h"
>> #include "xe_memirq.h"
>> #include "xe_mmio.h"
>> +#include "xe_pxp.h"
>> #include "xe_sriov.h"
>> /*
>> @@ -202,6 +203,15 @@ void xe_irq_enable_hwe(struct xe_gt *gt)
>> }
>> if (heci_mask)
>> xe_mmio_write32(gt, HECI2_RSVD_INTR_MASK, ~(heci_mask
>> << 16));
>> +
>> + if (xe_pxp_is_supported(xe)) {
>> + u32 kcr_mask = KCR_PXP_STATE_TERMINATED_INTERRUPT |
>> + KCR_APP_TERMINATED_PER_FW_REQ_INTERRUPT |
>> + KCR_PXP_STATE_RESET_COMPLETE_INTERRUPT;
>> +
>> + xe_mmio_write32(gt, CRYPTO_RSVD_INTR_ENABLE, kcr_mask <<
>> 16);
>> + xe_mmio_write32(gt, CRYPTO_RSVD_INTR_MASK, ~(kcr_mask <<
>> 16));
>> + }
>> }
>> }
>> @@ -324,9 +334,15 @@ static void gt_irq_handler(struct xe_tile *tile,
>> }
>> if (class == XE_ENGINE_CLASS_OTHER) {
>> - /* HECI GSCFI interrupts come from outside of GT */
>> + /*
>> + * HECI GSCFI interrupts come from outside of GT.
>> + * KCR irqs come from inside GT but are handled
>> + * by the global PXP subsystem.
>> + */
>> if (HAS_HECI_GSCFI(xe) && instance ==
>> OTHER_GSC_INSTANCE)
>> xe_heci_gsc_irq_handler(xe, intr_vec);
>> + else if (instance == OTHER_KCR_INSTANCE)
>> + xe_pxp_irq_handler(xe, intr_vec);
>> else
>> gt_other_irq_handler(engine_gt, instance,
>> intr_vec);
>> }
>> @@ -512,6 +528,8 @@ static void gt_irq_reset(struct xe_tile *tile)
>> xe_mmio_write32(mmio, GUNIT_GSC_INTR_ENABLE, 0);
>> xe_mmio_write32(mmio, GUNIT_GSC_INTR_MASK, ~0);
>> xe_mmio_write32(mmio, HECI2_RSVD_INTR_MASK, ~0);
>> + xe_mmio_write32(mmio, CRYPTO_RSVD_INTR_ENABLE, 0);
>> + xe_mmio_write32(mmio, CRYPTO_RSVD_INTR_MASK, ~0);
>> }
>> xe_mmio_write32(mmio, GPM_WGBOXPERF_INTR_ENABLE, 0);
>> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
>> index 56bb7d927c07..382eb0cb0018 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp.c
>> +++ b/drivers/gpu/drm/xe/xe_pxp.c
>> @@ -12,9 +12,11 @@
>> #include "xe_gt.h"
>> #include "xe_gt_types.h"
>> #include "xe_mmio.h"
>> +#include "xe_pm.h"
>> #include "xe_pxp_submit.h"
>> #include "xe_pxp_types.h"
>> #include "xe_uc_fw.h"
>> +#include "regs/xe_gt_regs.h"
>> #include "regs/xe_pxp_regs.h"
>> /**
>> @@ -25,11 +27,133 @@
>> * integrated parts.
>> */
>> -static bool pxp_is_supported(const struct xe_device *xe)
>> +#define ARB_SESSION 0xF /* TODO: move to UAPI */
>> +
>> +bool xe_pxp_is_supported(const struct xe_device *xe)
>> {
>> return xe->info.has_pxp && IS_ENABLED(CONFIG_INTEL_MEI_GSC_PROXY);
>> }
>> +static bool pxp_is_enabled(const struct xe_pxp *pxp)
>> +{
>> + return pxp;
>> +}
>> +
>> +static int pxp_wait_for_session_state(struct xe_pxp *pxp, u32 id,
>> bool in_play)
>> +{
>> + struct xe_gt *gt = pxp->gt;
>> + u32 mask = BIT(id);
>> + int ret;
>> +
>> + ret = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>> + if (ret)
>> + return ret;
>> +
>> + ret = xe_mmio_wait32(gt, KCR_SIP, mask, in_play ? mask : 0,
>> + 250, NULL, false);
>> + xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
>> +
>> + return ret;
>> +}
>> +
>> +static void pxp_terminate(struct xe_pxp *pxp)
>> +{
>> + int ret = 0;
>> + struct xe_device *xe = pxp->xe;
>> + struct xe_gt *gt = pxp->gt;
>> +
>> + drm_dbg(&xe->drm, "Terminating PXP\n");
>> +
>> + /* terminate the hw session */
>> + ret = xe_pxp_submit_session_termination(pxp, ARB_SESSION);
>> + if (ret)
>> + goto out;
>> +
>> + ret = pxp_wait_for_session_state(pxp, ARB_SESSION, false);
>> + if (ret)
>> + goto out;
>> +
>> + /* Trigger full HW cleanup */
>> + XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));
> Why WARN here but no explicit message at all if the earlier force wake
> fails? And is it safe to keep going if the fw did fail?
The idea was that if we know the state is good enough to terminate, we
could try to do it even with a forcewake error; worst case it doesn't
work. If we don't know the state, we can't attempt a termination at all.
>
> Also, given two identical, back-to-back fw get/put sets, would it not
> be more efficient to have pxp_terminate do the get and share that
> across the two register access? It would also remove the issue with
> failed fw half way through causing problems due to not wanting to abort.
will do.
>
>> + xe_mmio_write32(gt, KCR_GLOBAL_TERMINATE, 1);
> BSpec description for KCR_GLOBAL_TERMINATE says need to check
> KCR_SIP_GCD rather than KCR_SIP_MEDIA for bits 0-15 being 0. Whereas
> the KCR_SIP being checked above is KCR_SIP_MEDIA only.
That's just the spec not being super clear. The description is common
between the render and the media copies of the registers, but you need
to check the version of the registers on the actual GT you're doing the
termination on.
>
>> + xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
>> +
>> + /* now we can tell the GSC to clean up its own state */
>> + ret = xe_pxp_submit_session_invalidation(&pxp->gsc_res,
>> ARB_SESSION);
>> +
>> +out:
>> + if (ret)
>> + drm_err(&xe->drm, "PXP termination failed: %pe\n",
>> ERR_PTR(ret));
>> +}
>> +
>> +static void pxp_terminate_complete(struct xe_pxp *pxp)
>> +{
>> + /* TODO mark the session as ready to start */
>> +}
>> +
>> +static void pxp_irq_work(struct work_struct *work)
>> +{
>> + struct xe_pxp *pxp = container_of(work, typeof(*pxp), irq.work);
>> + struct xe_device *xe = pxp->xe;
>> + u32 events = 0;
>> +
>> + spin_lock_irq(&xe->irq.lock);
>> + events = pxp->irq.events;
>> + pxp->irq.events = 0;
>> + spin_unlock_irq(&xe->irq.lock);
>> +
>> + if (!events)
>> + return;
>> +
>> + /*
>> + * If we're processing a termination irq while suspending then
>> don't
>> + * bother, we're going to re-init everything on resume anyway.
>> + */
>> + if ((events & PXP_TERMINATION_REQUEST) &&
>> !xe_pm_runtime_get_if_active(xe))
>> + return;
> I assume it is not possible to have both REQUEST and COMPLETE set at
> the same time? I.e. is it possible for this early exit to cause a lost
> termination complete call?
A complete is only received in response to the submission of a
termination request, so they should never be set at the same time.
It doesn't really matter either way, since we submit a termination on resume anyway.
Daniele
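The snapshot-and-clear pattern used by pxp_irq_work() — take the event mask under the lock, zero it, then act on the private copy so the irq handler can accumulate fresh events concurrently — can be sketched outside the kernel like this (the pthread mutex stands in for xe->irq.lock; helper names are hypothetical):

```c
#include <pthread.h>
#include <stdint.h>
#include <assert.h>

#define PXP_TERMINATION_REQUEST  (1u << 0)
#define PXP_TERMINATION_COMPLETE (1u << 1)

static pthread_mutex_t irq_lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t pending_events;

/* irq-handler side: accumulate events under the lock */
static void post_events(uint32_t ev)
{
	pthread_mutex_lock(&irq_lock);
	pending_events |= ev;
	pthread_mutex_unlock(&irq_lock);
}

/* worker side: snapshot and clear, then process the private copy */
static uint32_t take_pending_events(void)
{
	uint32_t events;

	pthread_mutex_lock(&irq_lock);
	events = pending_events;
	pending_events = 0;
	pthread_mutex_unlock(&irq_lock);

	return events;
}
```

Because the mask is cleared inside the critical section, an event posted while the worker is running is never lost — it just triggers another pass.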
>
> John.
>
>> +
>> + if (events & PXP_TERMINATION_REQUEST) {
>> + events &= ~PXP_TERMINATION_COMPLETE;
>> + pxp_terminate(pxp);
>> + }
>> +
>> + if (events & PXP_TERMINATION_COMPLETE)
>> + pxp_terminate_complete(pxp);
>> +
>> + if (events & PXP_TERMINATION_REQUEST)
>> + xe_pm_runtime_put(xe);
>> +}
>> +
>> +/**
>> + * xe_pxp_irq_handler - Handles PXP interrupts.
>> + * @pxp: pointer to pxp struct
>> + * @iir: interrupt vector
>> + */
>> +void xe_pxp_irq_handler(struct xe_device *xe, u16 iir)
>> +{
>> + struct xe_pxp *pxp = xe->pxp;
>> +
>> + if (!pxp_is_enabled(pxp)) {
>> + drm_err(&xe->drm, "PXP irq 0x%x received with PXP
>> disabled!\n", iir);
>> + return;
>> + }
>> +
>> + lockdep_assert_held(&xe->irq.lock);
>> +
>> + if (unlikely(!iir))
>> + return;
>> +
>> + if (iir & (KCR_PXP_STATE_TERMINATED_INTERRUPT |
>> + KCR_APP_TERMINATED_PER_FW_REQ_INTERRUPT))
>> + pxp->irq.events |= PXP_TERMINATION_REQUEST;
>> +
>> + if (iir & KCR_PXP_STATE_RESET_COMPLETE_INTERRUPT)
>> + pxp->irq.events |= PXP_TERMINATION_COMPLETE;
>> +
>> + if (pxp->irq.events)
>> + queue_work(pxp->irq.wq, &pxp->irq.work);
>> +}
>> +
>> static int kcr_pxp_set_status(const struct xe_pxp *pxp, bool enable)
>> {
>> u32 val = enable ?
>> _MASKED_BIT_ENABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES) :
>> @@ -60,6 +184,7 @@ static void pxp_fini(void *arg)
>> {
>> struct xe_pxp *pxp = arg;
>> + destroy_workqueue(pxp->irq.wq);
>> xe_pxp_destroy_execution_resources(pxp);
>> /* no need to explicitly disable KCR since we're going to do
>> an FLR */
>> @@ -83,7 +208,7 @@ int xe_pxp_init(struct xe_device *xe)
>> struct xe_pxp *pxp;
>> int err;
>> - if (!pxp_is_supported(xe))
>> + if (!xe_pxp_is_supported(xe))
>> return -EOPNOTSUPP;
>> /* we only support PXP on single tile devices with a media GT */
>> @@ -105,12 +230,17 @@ int xe_pxp_init(struct xe_device *xe)
>> if (!pxp)
>> return -ENOMEM;
>> + INIT_WORK(&pxp->irq.work, pxp_irq_work);
>> pxp->xe = xe;
>> pxp->gt = gt;
>> + pxp->irq.wq = alloc_ordered_workqueue("pxp-wq", 0);
>> + if (!pxp->irq.wq)
>> + return -ENOMEM;
>> +
>> err = kcr_pxp_enable(pxp);
>> if (err)
>> - return err;
>> + goto out_wq;
>> err = xe_pxp_allocate_execution_resources(pxp);
>> if (err)
>> @@ -122,5 +252,7 @@ int xe_pxp_init(struct xe_device *xe)
>> kcr_disable:
>> kcr_pxp_disable(pxp);
>> +out_wq:
>> + destroy_workqueue(pxp->irq.wq);
>> return err;
>> }
>> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
>> index 79c951667f13..81bafe2714ff 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp.h
>> +++ b/drivers/gpu/drm/xe/xe_pxp.h
>> @@ -10,6 +10,9 @@
>> struct xe_device;
>> +bool xe_pxp_is_supported(const struct xe_device *xe);
>> +
>> int xe_pxp_init(struct xe_device *xe);
>> +void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
>> #endif /* __XE_PXP_H__ */
>> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h
>> b/drivers/gpu/drm/xe/xe_pxp_types.h
>> index 3463caaad101..d5cf8faed7be 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
>> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
>> @@ -8,6 +8,7 @@
>> #include <linux/iosys-map.h>
>> #include <linux/types.h>
>> +#include <linux/workqueue.h>
>> struct xe_bo;
>> struct xe_exec_queue;
>> @@ -69,6 +70,18 @@ struct xe_pxp {
>> /** @gsc_exec: kernel-owned objects for PXP submissions to
>> the GSCCS */
>> struct xe_pxp_gsc_client_resources gsc_res;
>> +
>> + /** @irq: wrapper for the worker and queue used for PXP irq
>> support */
>> + struct {
>> + /** @irq.work: worker that manages irq events. */
>> + struct work_struct work;
>> + /** @irq.wq: workqueue on which to queue the irq work. */
>> + struct workqueue_struct *wq;
>> + /** @irq.events: pending events, protected with
>> xe->irq.lock. */
>> + u32 events;
>> +#define PXP_TERMINATION_REQUEST BIT(0)
>> +#define PXP_TERMINATION_COMPLETE BIT(1)
>> + } irq;
>> };
>> #endif /* __XE_PXP_TYPES_H__ */
>
* Re: [PATCH v2 06/12] drm/xe/pxp: Add GSC session initialization support
2024-10-08 18:43 ` John Harrison
@ 2024-11-07 22:37 ` Daniele Ceraolo Spurio
2024-11-14 20:36 ` John Harrison
0 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-11-07 22:37 UTC (permalink / raw)
To: John Harrison, intel-xe
On 10/8/2024 11:43 AM, John Harrison wrote:
> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>> A session is initialized (i.e. started) by sending a message to the GSC.
>>
>> Note that this patch is meant to be squashed with the follow-up patches
>> that implement the other pieces of the session initialization and queue
>> setup flow. It is separate for now for ease of review.
>>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> ---
>> drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h | 21 ++++++++
>> drivers/gpu/drm/xe/xe_pxp_submit.c | 50 +++++++++++++++++++
>> drivers/gpu/drm/xe/xe_pxp_submit.h | 1 +
>> 3 files changed, 72 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
>> b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
>> index 4a59c564a0d0..734feb38f570 100644
>> --- a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
>> +++ b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
>> @@ -50,6 +50,7 @@ struct pxp_cmd_header {
>> } __packed;
>> #define PXP43_CMDID_INVALIDATE_STREAM_KEY 0x00000007
>> +#define PXP43_CMDID_INIT_SESSION 0x00000036
>> #define PXP43_CMDID_NEW_HUC_AUTH 0x0000003F /* MTL+ */
>> /* PXP-Input-Packet: HUC Auth-only */
>> @@ -64,6 +65,26 @@ struct pxp43_huc_auth_out {
>> struct pxp_cmd_header header;
>> } __packed;
>> +/* PXP-Input-Packet: Init PXP session */
>> +struct pxp43_create_arb_in {
>> + struct pxp_cmd_header header;
>> + /* header.stream_id fields for version 4.3 of Init PXP
>> session: */
>> + #define PXP43_INIT_SESSION_VALID BIT(0)
>> + #define PXP43_INIT_SESSION_APPTYPE BIT(1)
>> + #define PXP43_INIT_SESSION_APPID GENMASK(17, 2)
>> + u32 protection_mode;
>> + #define PXP43_INIT_SESSION_PROTECTION_ARB 0x2
>> + u32 sub_session_id;
>> + u32 init_flags;
>> + u32 rsvd[12];
>> +} __packed;
>> +
>> +/* PXP-Input-Packet: Init PXP session */
>> +struct pxp43_create_arb_out {
>> + struct pxp_cmd_header header;
>> + u32 rsvd[8];
>> +} __packed;
>> +
>> /* PXP-Input-Packet: Invalidate Stream Key */
>> struct pxp43_inv_stream_key_in {
>> struct pxp_cmd_header header;
>> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c
>> b/drivers/gpu/drm/xe/xe_pxp_submit.c
>> index 41684d666376..c9258c861556 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp_submit.c
>> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
>> @@ -26,6 +26,8 @@
>> #include "instructions/xe_mi_commands.h"
>> #include "regs/xe_gt_regs.h"
>> +#define ARB_SESSION 0xF /* TODO: move to UAPI */
> This same define is now in two separate source files? Even if it can't
> be moved to the UAPI header yet it should at least be in an internal
> header rather than being replicated.
I thought about it, but couldn't find a clean solution. This define
would belong in pxp.h, but that's not included from this file and I
didn't want to add the include just for a define that is going away a
few patches later. I could put it in pxp_types.h or pxp_submit.h, but if
it needs to be in the wrong place anyway I thought I might as well just
duplicate it.
Any preference?
>
>> +
>> /*
>> * The VCS is used for kernel-owned GGTT submissions to issue key
>> termination.
>> * Terminations are serialized, so we only need a single queue and
>> a single
>> @@ -470,6 +472,54 @@ static int gsccs_send_message(struct
>> xe_pxp_gsc_client_resources *gsc_res,
>> return ret;
>> }
>> +/**
>> + * xe_pxp_submit_session_init - submits a PXP GSC session
>> initialization
>> + * @gsc_res: the pxp client resources
>> + * @id: the session to initialize
>> + *
>> + * Submit a message to the GSC FW to initialize (i.e. start) a PXP
>> session.
>> + *
>> + * Returns 0 if the submission is successful, an errno value otherwise.
>> + */
>> +int xe_pxp_submit_session_init(struct xe_pxp_gsc_client_resources
>> *gsc_res, u32 id)
>> +{
>> + struct xe_device *xe = gsc_res->vm->xe;
>> + struct pxp43_create_arb_in msg_in = {0};
>> + struct pxp43_create_arb_out msg_out = {0};
>> + int ret;
>> +
>> + msg_in.header.api_version = PXP_APIVER(4, 3);
>> + msg_in.header.command_id = PXP43_CMDID_INIT_SESSION;
>> + msg_in.header.stream_id = (FIELD_PREP(PXP43_INIT_SESSION_APPID,
>> id) |
>> + FIELD_PREP(PXP43_INIT_SESSION_VALID, 1) |
>> + FIELD_PREP(PXP43_INIT_SESSION_APPTYPE, 0));
>> + msg_in.header.buffer_len = sizeof(msg_in) - sizeof(msg_in.header);
>> +
>> + if (id == ARB_SESSION)
>> + msg_in.protection_mode = PXP43_INIT_SESSION_PROTECTION_ARB;
>> +
>> + ret = gsccs_send_message(gsc_res, &msg_in, sizeof(msg_in),
>> + &msg_out, sizeof(msg_out));
>> + if (ret) {
>> + drm_err(&xe->drm, "Failed to init session %d, ret=[%d]\n",
>> id, ret);
> %pe for error code
>
>> + } else if (msg_out.header.status != 0) {
>> + if (is_fw_err_platform_config(msg_out.header.status)) {
>> + drm_info_once(&xe->drm,
>> + "PXP init-session-%d failed due to
>> BIOS/SOC:0x%08x:%s\n",
> Style mis-match - "init session %d" in the first error but then
> "init-session-%d" in this one and the one below (I prefer the first
> one that actually looks like an operation rather than variable).
>
>> + id, msg_out.header.status,
>> + fw_err_to_string(msg_out.header.status));
>> + } else {
>> + drm_dbg(&xe->drm, "PXP init-session-%d failed
>> 0x%08x:%st:\n",
>> + id, msg_out.header.status,
>> + fw_err_to_string(msg_out.header.status));
>> + drm_dbg(&xe->drm, " cmd-detail:
>> ID=[0x%08x],API-Ver-[0x%08x]\n",
> More mis-matching message styles - 'SOC:%s:%s' vs 'ID=[%x]'. Neither
> of which is the normal format of kernel messages.
Those I've copied straight from i915. Will reword.
Daniele
>
> John.
>
>> + msg_in.header.command_id, msg_in.header.api_version);
>> + }
>> + }
>> +
>> + return ret;
>> +}
>> +
>> /**
>> * xe_pxp_submit_session_invalidation - submits a PXP GSC invalidation
>> * @gsc_res: the pxp client resources
>> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.h
>> b/drivers/gpu/drm/xe/xe_pxp_submit.h
>> index 48fdc9b09116..c9efda02f4b0 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp_submit.h
>> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.h
>> @@ -14,6 +14,7 @@ struct xe_pxp_gsc_client_resources;
>> int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp);
>> void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp);
>> +int xe_pxp_submit_session_init(struct xe_pxp_gsc_client_resources
>> *gsc_res, u32 id);
>> int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id);
>> int xe_pxp_submit_session_invalidation(struct
>> xe_pxp_gsc_client_resources *gsc_res,
>> u32 id);
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 07/12] drm/xe/pxp: Add support for PXP-using queues
2024-10-08 23:55 ` John Harrison
@ 2024-11-07 23:57 ` Daniele Ceraolo Spurio
2024-11-14 21:20 ` John Harrison
0 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-11-07 23:57 UTC (permalink / raw)
To: John Harrison, intel-xe
On 10/8/2024 4:55 PM, John Harrison wrote:
> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>> Userspace is required to mark a queue as using PXP to guarantee that the
>> PXP instructions will work. When a PXP queue is created, the driver will
>> do the following:
>> - Start the default PXP session if it is not already running;
>> - set the relevant bits in the context control register;
>> - assign an rpm ref to the queue to keep for its lifetime (this is
>> required because PXP HWDRM sessions are killed by the HW suspend
>> flow).
>>
>> When a PXP invalidation occurs, all the PXP queue will be killed.
> "all the PXP queue" -> should be 'queues' or should not say 'all'?
>
>> On submission of a valid PXP queue, the driver will validate all
>> encrypted objects mapped to the VM to ensure they were encrypted with
>> the current key.
>>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> ---
>> drivers/gpu/drm/xe/regs/xe_engine_regs.h | 1 +
>> drivers/gpu/drm/xe/xe_exec_queue.c | 58 ++++-
>> drivers/gpu/drm/xe/xe_exec_queue.h | 5 +
>> drivers/gpu/drm/xe/xe_exec_queue_types.h | 8 +
>> drivers/gpu/drm/xe/xe_hw_engine.c | 2 +-
>> drivers/gpu/drm/xe/xe_lrc.c | 16 +-
>> drivers/gpu/drm/xe/xe_lrc.h | 4 +-
>> drivers/gpu/drm/xe/xe_pxp.c | 295 ++++++++++++++++++++++-
>> drivers/gpu/drm/xe/xe_pxp.h | 7 +
>> drivers/gpu/drm/xe/xe_pxp_submit.c | 4 +-
>> drivers/gpu/drm/xe/xe_pxp_types.h | 26 ++
>> include/uapi/drm/xe_drm.h | 40 ++-
>> 12 files changed, 450 insertions(+), 16 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
>> b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
>> index 81b71903675e..3692e887f503 100644
>> --- a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
>> +++ b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
>> @@ -130,6 +130,7 @@
>> #define RING_EXECLIST_STATUS_HI(base) XE_REG((base) + 0x234
>> + 4)
>> #define RING_CONTEXT_CONTROL(base) XE_REG((base) + 0x244,
>> XE_REG_OPTION_MASKED)
>> +#define CTX_CTRL_PXP_ENABLE REG_BIT(10)
>> #define CTX_CTRL_OAC_CONTEXT_ENABLE REG_BIT(8)
>> #define CTX_CTRL_RUN_ALONE REG_BIT(7)
>> #define CTX_CTRL_INDIRECT_RING_STATE_ENABLE REG_BIT(4)
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c
>> b/drivers/gpu/drm/xe/xe_exec_queue.c
>> index e98e8794eddf..504ba4aa2357 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
>> @@ -22,6 +22,8 @@
>> #include "xe_ring_ops_types.h"
>> #include "xe_trace.h"
>> #include "xe_vm.h"
>> +#include "xe_pxp.h"
>> +#include "xe_pxp_types.h"
>> enum xe_exec_queue_sched_prop {
>> XE_EXEC_QUEUE_JOB_TIMEOUT = 0,
>> @@ -35,6 +37,8 @@ static int exec_queue_user_extensions(struct
>> xe_device *xe, struct xe_exec_queue
>> static void __xe_exec_queue_free(struct xe_exec_queue *q)
>> {
>> + if (xe_exec_queue_uses_pxp(q))
>> + xe_pxp_exec_queue_remove(gt_to_xe(q->gt)->pxp, q);
>> if (q->vm)
>> xe_vm_put(q->vm);
>> @@ -73,6 +77,7 @@ static struct xe_exec_queue
>> *__xe_exec_queue_alloc(struct xe_device *xe,
>> q->ops = gt->exec_queue_ops;
>> INIT_LIST_HEAD(&q->lr.link);
>> INIT_LIST_HEAD(&q->multi_gt_link);
>> + INIT_LIST_HEAD(&q->pxp.link);
>> q->sched_props.timeslice_us =
>> hwe->eclass->sched_props.timeslice_us;
>> q->sched_props.preempt_timeout_us =
>> @@ -107,6 +112,21 @@ static int __xe_exec_queue_init(struct
>> xe_exec_queue *q)
>> {
>> struct xe_vm *vm = q->vm;
>> int i, err;
>> + u32 flags = 0;
>> +
>> + /*
>> + * PXP workloads executing on RCS or CCS must run in isolation
>> (i.e. no
>> + * other workload can use the EUs at the same time). On MTL this
>> is done
>> + * by setting the RUNALONE bit in the LRC, while starting on Xe2
>> there
>> + * is a dedicated bit for it.
>> + */
>> + if (xe_exec_queue_uses_pxp(q) &&
>> + (q->class == XE_ENGINE_CLASS_RENDER || q->class ==
>> XE_ENGINE_CLASS_COMPUTE)) {
>> + if (GRAPHICS_VER(gt_to_xe(q->gt)) >= 20)
>> + flags |= XE_LRC_CREATE_PXP;
>> + else
>> + flags |= XE_LRC_CREATE_RUNALONE;
>> + }
>> if (vm) {
>> err = xe_vm_lock(vm, true);
>> @@ -115,7 +135,7 @@ static int __xe_exec_queue_init(struct
>> xe_exec_queue *q)
>> }
>> for (i = 0; i < q->width; ++i) {
>> - q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K);
>> + q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K, flags);
>> if (IS_ERR(q->lrc[i])) {
>> err = PTR_ERR(q->lrc[i]);
>> goto err_unlock;
>> @@ -160,6 +180,17 @@ struct xe_exec_queue
>> *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *v
>> if (err)
>> goto err_post_alloc;
>> + /*
>> + * we can only add the queue to the PXP list after the init is
>> complete,
>> + * because the PXP termination can call exec_queue_kill and that
>> will
>> + * go bad if the queue is only half-initialized.
>> + */
> Not following how this comment relates to this code block. The comment
> implies there should be a wait of some kind.
We set the PXP type for the queue as part of the extension handling in
__xe_exec_queue_alloc. This comment was supposed to indicate that we
can't add the queue to the list back there because the init is not
complete yet, so we do it here instead. I'll add the explanation about
the extension handling to the comment.
>
>> + if (xe_exec_queue_uses_pxp(q)) {
>> + err = xe_pxp_exec_queue_add(xe->pxp, q);
>> + if (err)
>> + goto err_post_alloc;
>> + }
>> +
>> return q;
>> err_post_alloc:
>> @@ -197,6 +228,9 @@ void xe_exec_queue_destroy(struct kref *ref)
>> struct xe_exec_queue *q = container_of(ref, struct
>> xe_exec_queue, refcount);
>> struct xe_exec_queue *eq, *next;
>> + if (xe_exec_queue_uses_pxp(q))
>> + xe_pxp_exec_queue_remove(gt_to_xe(q->gt)->pxp, q);
>> +
>> xe_exec_queue_last_fence_put_unlocked(q);
>> if (!(q->flags & EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD)) {
>> list_for_each_entry_safe(eq, next, &q->multi_gt_list,
>> @@ -343,6 +377,24 @@ static int exec_queue_set_timeslice(struct
>> xe_device *xe, struct xe_exec_queue *
>> return 0;
>> }
>> +static int
>> +exec_queue_set_pxp_type(struct xe_device *xe, struct xe_exec_queue
>> *q, u64 value)
>> +{
>> + BUILD_BUG_ON(DRM_XE_PXP_TYPE_NONE != 0);
> Why a build bug for something that is a simple 'enum { X=0 }'? It's
> not like there is some complex macro calculation that could be broken
> by some seemingly unrelated change.
This was more to make sure that the default value for the extension was
0. Given that this is UAPI and therefore can't change anyway, I'll drop
the BUG_ON
>
>> +
>> + if (value == DRM_XE_PXP_TYPE_NONE)
>> + return 0;
> This doesn't need to shut any existing PXP down? Is it not possible to
> dynamically change the type?
No, this can only be set at queue creation time
>
>> +
>> + if (!xe_pxp_is_enabled(xe->pxp))
>> + return -ENODEV;
>> +
>> + /* we only support HWDRM sessions right now */
>> + if (XE_IOCTL_DBG(xe, value != DRM_XE_PXP_TYPE_HWDRM))
>> + return -EINVAL;
>> +
>> + return xe_pxp_exec_queue_set_type(xe->pxp, q,
>> DRM_XE_PXP_TYPE_HWDRM);
>> +}
>> +
>> typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
>> struct xe_exec_queue *q,
>> u64 value);
>> @@ -350,6 +402,7 @@ typedef int
>> (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
>> static const xe_exec_queue_set_property_fn
>> exec_queue_set_property_funcs[] = {
>> [DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY] =
>> exec_queue_set_priority,
>> [DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE] =
>> exec_queue_set_timeslice,
>> + [DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE] =
>> exec_queue_set_pxp_type,
>> };
>> static int exec_queue_user_ext_set_property(struct xe_device *xe,
>> @@ -369,7 +422,8 @@ static int
>> exec_queue_user_ext_set_property(struct xe_device *xe,
>> ARRAY_SIZE(exec_queue_set_property_funcs)) ||
>> XE_IOCTL_DBG(xe, ext.pad) ||
>> XE_IOCTL_DBG(xe, ext.property !=
>> DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY &&
>> - ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE))
>> + ext.property !=
>> DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE &&
>> + ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE))
>> return -EINVAL;
>> idx = array_index_nospec(ext.property,
>> ARRAY_SIZE(exec_queue_set_property_funcs));
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h
>> b/drivers/gpu/drm/xe/xe_exec_queue.h
>> index ded77b0f3b90..7fa97719667a 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue.h
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
>> @@ -53,6 +53,11 @@ static inline bool
>> xe_exec_queue_is_parallel(struct xe_exec_queue *q)
>> return q->width > 1;
>> }
>> +static inline bool xe_exec_queue_uses_pxp(struct xe_exec_queue *q)
>> +{
>> + return q->pxp.type;
>> +}
>> +
>> bool xe_exec_queue_is_lr(struct xe_exec_queue *q);
>> bool xe_exec_queue_ring_full(struct xe_exec_queue *q);
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> b/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> index 1408b02eea53..28b56217f1df 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> @@ -130,6 +130,14 @@ struct xe_exec_queue {
>> spinlock_t lock;
>> } lr;
>> + /** @pxp: PXP info tracking */
>> + struct {
>> + /** @pxp.type: PXP session type used by this queue */
>> + u8 type;
>> + /** @pxp.link: link into the list of PXP exec queues */
>> + struct list_head link;
>> + } pxp;
>> +
>> /** @ops: submission backend exec queue operations */
>> const struct xe_exec_queue_ops *ops;
>> diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c
>> b/drivers/gpu/drm/xe/xe_hw_engine.c
>> index e195022ca836..469932e7d7a6 100644
>> --- a/drivers/gpu/drm/xe/xe_hw_engine.c
>> +++ b/drivers/gpu/drm/xe/xe_hw_engine.c
>> @@ -557,7 +557,7 @@ static int hw_engine_init(struct xe_gt *gt,
>> struct xe_hw_engine *hwe,
>> goto err_name;
>> }
>> - hwe->kernel_lrc = xe_lrc_create(hwe, NULL, SZ_16K);
>> + hwe->kernel_lrc = xe_lrc_create(hwe, NULL, SZ_16K, 0);
>> if (IS_ERR(hwe->kernel_lrc)) {
>> err = PTR_ERR(hwe->kernel_lrc);
>> goto err_hwsp;
>> diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
>> index 974a9cd8c379..4f3e676db646 100644
>> --- a/drivers/gpu/drm/xe/xe_lrc.c
>> +++ b/drivers/gpu/drm/xe/xe_lrc.c
>> @@ -893,7 +893,7 @@ static void xe_lrc_finish(struct xe_lrc *lrc)
>> #define PVC_CTX_ACC_CTR_THOLD (0x2a + 1)
>> static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
>> - struct xe_vm *vm, u32 ring_size)
>> + struct xe_vm *vm, u32 ring_size, u32 init_flags)
>> {
>> struct xe_gt *gt = hwe->gt;
>> struct xe_tile *tile = gt_to_tile(gt);
>> @@ -981,6 +981,16 @@ static int xe_lrc_init(struct xe_lrc *lrc,
>> struct xe_hw_engine *hwe,
>> RING_CTL_SIZE(lrc->ring.size) | RING_VALID);
>> }
>> + if (init_flags & XE_LRC_CREATE_RUNALONE)
>> + xe_lrc_write_ctx_reg(lrc, CTX_CONTEXT_CONTROL,
>> + xe_lrc_read_ctx_reg(lrc, CTX_CONTEXT_CONTROL) |
>> + _MASKED_BIT_ENABLE(CTX_CTRL_RUN_ALONE));
>> +
>> + if (init_flags & XE_LRC_CREATE_PXP)
>> + xe_lrc_write_ctx_reg(lrc, CTX_CONTEXT_CONTROL,
>> + xe_lrc_read_ctx_reg(lrc, CTX_CONTEXT_CONTROL) |
>> + _MASKED_BIT_ENABLE(CTX_CTRL_PXP_ENABLE));
>> +
>> xe_lrc_write_ctx_reg(lrc, CTX_TIMESTAMP, 0);
>> if (xe->info.has_asid && vm)
>> @@ -1029,7 +1039,7 @@ static int xe_lrc_init(struct xe_lrc *lrc,
>> struct xe_hw_engine *hwe,
>> * upon failure.
>> */
>> struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm
>> *vm,
>> - u32 ring_size)
>> + u32 ring_size, u32 flags)
>> {
>> struct xe_lrc *lrc;
>> int err;
>> @@ -1038,7 +1048,7 @@ struct xe_lrc *xe_lrc_create(struct
>> xe_hw_engine *hwe, struct xe_vm *vm,
>> if (!lrc)
>> return ERR_PTR(-ENOMEM);
>> - err = xe_lrc_init(lrc, hwe, vm, ring_size);
>> + err = xe_lrc_init(lrc, hwe, vm, ring_size, flags);
>> if (err) {
>> kfree(lrc);
>> return ERR_PTR(err);
>> diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
>> index d411c3fbcbc6..cc8091bba2a0 100644
>> --- a/drivers/gpu/drm/xe/xe_lrc.h
>> +++ b/drivers/gpu/drm/xe/xe_lrc.h
>> @@ -23,8 +23,10 @@ struct xe_vm;
>> #define LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR (0x34 * 4)
>> #define LRC_PPHWSP_PXP_INVAL_SCRATCH_ADDR (0x40 * 4)
>> +#define XE_LRC_CREATE_RUNALONE 0x1
>> +#define XE_LRC_CREATE_PXP 0x2
>> struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm
>> *vm,
>> - u32 ring_size);
>> + u32 ring_size, u32 flags);
>> void xe_lrc_destroy(struct kref *ref);
>> /**
>> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
>> index 382eb0cb0018..acdc25c8e8a1 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp.c
>> +++ b/drivers/gpu/drm/xe/xe_pxp.c
>> @@ -6,11 +6,17 @@
>> #include "xe_pxp.h"
>> #include <drm/drm_managed.h>
>> +#include <drm/xe_drm.h>
>> #include "xe_device_types.h"
>> +#include "xe_exec_queue.h"
>> +#include "xe_exec_queue_types.h"
>> #include "xe_force_wake.h"
>> +#include "xe_guc_submit.h"
>> +#include "xe_gsc_proxy.h"
>> #include "xe_gt.h"
>> #include "xe_gt_types.h"
>> +#include "xe_huc.h"
>> #include "xe_mmio.h"
>> #include "xe_pm.h"
>> #include "xe_pxp_submit.h"
>> @@ -27,18 +33,45 @@
>> * integrated parts.
>> */
>> -#define ARB_SESSION 0xF /* TODO: move to UAPI */
>> +#define ARB_SESSION DRM_XE_PXP_HWDRM_DEFAULT_SESSION /* shorter
>> define */
> Is this really worthwhile?
The define is used enough times in this file that IMO it's worth having a
shorter version for readability.
>
>> bool xe_pxp_is_supported(const struct xe_device *xe)
>> {
>> return xe->info.has_pxp && IS_ENABLED(CONFIG_INTEL_MEI_GSC_PROXY);
>> }
>> -static bool pxp_is_enabled(const struct xe_pxp *pxp)
>> +bool xe_pxp_is_enabled(const struct xe_pxp *pxp)
>> {
>> return pxp;
>> }
>> +static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
>> +{
>> + bool ready;
>> +
>> + XE_WARN_ON(xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GSC));
> Again, why warn on this fw and then proceed anyway when others
> silently return an error code to the layer above?
In this case because I wanted to have this function return a bool, so
can't escalate the error. In the unlikely case that this fails, the
caller will consider PXP not ready, which will be escalated out. If
forcewake doesn't work the GT is non-functional anyway, so reporting PXP
as not ready is not going to make things worse.
>
>> +
>> + /* PXP requires both HuC authentication via GSC and GSC proxy
>> initialized */
>> + ready = xe_huc_is_authenticated(&pxp->gt->uc.huc,
>> XE_HUC_AUTH_VIA_GSC) &&
>> + xe_gsc_proxy_init_done(&pxp->gt->uc.gsc);
>> +
>> + xe_force_wake_put(gt_to_fw(pxp->gt), XE_FW_GSC);
>> +
>> + return ready;
>> +}
>> +
>> +static bool pxp_session_is_in_play(struct xe_pxp *pxp, u32 id)
>> +{
>> + struct xe_gt *gt = pxp->gt;
>> + u32 sip = 0;
>> +
>> + XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));
> Same as above.
Same reasoning, just that we'll report that PXP failed to start if this
fails, which again is not going to make things worse if the GT is broken.
>
>> + sip = xe_mmio_read32(gt, KCR_SIP);
>> + xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
>> +
>> + return sip & BIT(id);
>> +}
>> +
>> static int pxp_wait_for_session_state(struct xe_pxp *pxp, u32 id,
>> bool in_play)
>> {
>> struct xe_gt *gt = pxp->gt;
>> @@ -56,12 +89,30 @@ static int pxp_wait_for_session_state(struct
>> xe_pxp *pxp, u32 id, bool in_play)
>> return ret;
>> }
>> +static void pxp_invalidate_queues(struct xe_pxp *pxp);
>> +
>> static void pxp_terminate(struct xe_pxp *pxp)
>> {
>> int ret = 0;
>> struct xe_device *xe = pxp->xe;
>> struct xe_gt *gt = pxp->gt;
> Should add a "lockdep_assert_held(pxp->mutex)" here?
I'll add it in
>
>> + pxp_invalidate_queues(pxp);
>> +
>> + /*
>> + * If we have a termination already in progress, we need to wait
>> for
>> + * it to complete before queueing another one. We update the state
>> + * to signal that another termination is required and leave it
>> to the
>> + * pxp_start() call to take care of it.
>> + */
>> + if (!completion_done(&pxp->termination)) {
>> + pxp->status = XE_PXP_NEEDS_TERMINATION;
>> + return;
>> + }
>> +
>> + reinit_completion(&pxp->termination);
>> + pxp->status = XE_PXP_TERMINATION_IN_PROGRESS;
>> +
>> drm_dbg(&xe->drm, "Terminating PXP\n");
>> /* terminate the hw session */
>> @@ -82,13 +133,32 @@ static void pxp_terminate(struct xe_pxp *pxp)
>> ret = xe_pxp_submit_session_invalidation(&pxp->gsc_res,
>> ARB_SESSION);
>> out:
>> - if (ret)
>> + if (ret) {
>> drm_err(&xe->drm, "PXP termination failed: %pe\n",
>> ERR_PTR(ret));
>> + pxp->status = XE_PXP_ERROR;
>> + complete_all(&pxp->termination);
>> + }
>> }
>> static void pxp_terminate_complete(struct xe_pxp *pxp)
>> {
>> - /* TODO mark the session as ready to start */
>> + /*
>> + * We expect PXP to be in one of 2 states when we get here:
>> + * - XE_PXP_TERMINATION_IN_PROGRESS: a single termination event was
>> + * requested and it is now completing, so we're ready to start.
>> + * - XE_PXP_NEEDS_TERMINATION: a second termination was
>> requested while
>> + * the first one was still being processed; we don't update the
>> state
>> + * in this case so the pxp_start code will automatically issue that
>> + * second termination.
>> + */
>> + if (pxp->status == XE_PXP_TERMINATION_IN_PROGRESS)
>> + pxp->status = XE_PXP_READY_TO_START;
>> + else if (pxp->status != XE_PXP_NEEDS_TERMINATION)
>> + drm_err(&pxp->xe->drm,
>> + "PXP termination complete while status was %u\n",
>> + pxp->status);
>> +
>> + complete_all(&pxp->termination);
>> }
>> static void pxp_irq_work(struct work_struct *work)
>> @@ -112,6 +182,8 @@ static void pxp_irq_work(struct work_struct *work)
>> if ((events & PXP_TERMINATION_REQUEST) &&
>> !xe_pm_runtime_get_if_active(xe))
>> return;
>> + mutex_lock(&pxp->mutex);
>> +
>> if (events & PXP_TERMINATION_REQUEST) {
>> events &= ~PXP_TERMINATION_COMPLETE;
>> pxp_terminate(pxp);
>> @@ -120,6 +192,8 @@ static void pxp_irq_work(struct work_struct *work)
>> if (events & PXP_TERMINATION_COMPLETE)
>> pxp_terminate_complete(pxp);
>> + mutex_unlock(&pxp->mutex);
>> +
>> if (events & PXP_TERMINATION_REQUEST)
>> xe_pm_runtime_put(xe);
>> }
>> @@ -133,7 +207,7 @@ void xe_pxp_irq_handler(struct xe_device *xe, u16
>> iir)
>> {
>> struct xe_pxp *pxp = xe->pxp;
>> - if (!pxp_is_enabled(pxp)) {
>> + if (!xe_pxp_is_enabled(pxp)) {
>> drm_err(&xe->drm, "PXP irq 0x%x received with PXP
>> disabled!\n", iir);
>> return;
>> }
>> @@ -230,10 +304,22 @@ int xe_pxp_init(struct xe_device *xe)
>> if (!pxp)
>> return -ENOMEM;
>> + INIT_LIST_HEAD(&pxp->queues.list);
>> + spin_lock_init(&pxp->queues.lock);
>> INIT_WORK(&pxp->irq.work, pxp_irq_work);
>> pxp->xe = xe;
>> pxp->gt = gt;
>> + /*
>> + * we'll use the completion to check if there is a termination
>> pending,
>> + * so we start it as completed and we reinit it when a termination
>> + * is triggered.
>> + */
>> + init_completion(&pxp->termination);
>> + complete_all(&pxp->termination);
>> +
>> + mutex_init(&pxp->mutex);
>> +
>> pxp->irq.wq = alloc_ordered_workqueue("pxp-wq", 0);
>> if (!pxp->irq.wq)
>> return -ENOMEM;
>> @@ -256,3 +342,202 @@ int xe_pxp_init(struct xe_device *xe)
>> destroy_workqueue(pxp->irq.wq);
>> return err;
>> }
>> +
>> +static int __pxp_start_arb_session(struct xe_pxp *pxp)
>> +{
>> + int ret;
>> +
>> + if (pxp_session_is_in_play(pxp, ARB_SESSION))
>> + return -EEXIST;
>> +
>> + ret = xe_pxp_submit_session_init(&pxp->gsc_res, ARB_SESSION);
>> + if (ret) {
>> + drm_err(&pxp->xe->drm, "Failed to init PXP arb session\n");
>> + goto out;
>> + }
>> +
>> + ret = pxp_wait_for_session_state(pxp, ARB_SESSION, true);
>> + if (ret) {
>> + drm_err(&pxp->xe->drm, "PXP ARB session failed to go in
>> play\n");
>> + goto out;
>> + }
>> +
>> + drm_dbg(&pxp->xe->drm, "PXP ARB session is active\n");
>> +
>> +out:
>> + if (!ret)
>> + pxp->status = XE_PXP_ACTIVE;
>> + else
>> + pxp->status = XE_PXP_ERROR;
>> +
>> + return ret;
>> +}
>> +
>> +/**
>> + * xe_pxp_exec_queue_set_type - Mark a queue as using PXP
>> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
>> + * @q: the queue to mark as using PXP
>> + * @type: the type of PXP session this queue will use
>> + *
>> + * Returns 0 if the selected PXP type is supported, -ENODEV otherwise.
>> + */
>> +int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct
>> xe_exec_queue *q, u8 type)
>> +{
>> + if (!xe_pxp_is_enabled(pxp))
>> + return -ENODEV;
>> +
>> + /* we only support HWDRM sessions right now */
>> + xe_assert(pxp->xe, type == DRM_XE_PXP_TYPE_HWDRM);
>> +
>> + q->pxp.type = type;
>> +
>> + return 0;
>> +}
>> +
>> +/**
>> + * xe_pxp_exec_queue_add - add a queue to the PXP list
>> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
>> + * @q: the queue to add to the list
>> + *
>> + * If PXP is enabled and the prerequisites are done, start the PXP ARB
>> + * session (if not already running) and add the queue to the PXP
>> list. Note
>> + * that the queue must have previously been marked as using PXP with
>> + * xe_pxp_exec_queue_set_type.
>> + *
>> + * Returns 0 if the PXP ARB session is running and the queue is in
>> the list,
>> + * -ENODEV if PXP is disabled, -EBUSY if the PXP prerequisites are
>> not done,
>> + * other errno value if something goes wrong during the session start.
>> + */
>> +#define PXP_TERMINATION_TIMEOUT_MS 500
>> +int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q)
>> +{
>> + int ret = 0;
>> +
>> + if (!xe_pxp_is_enabled(pxp))
>> + return -ENODEV;
>> +
>> + /* we only support HWDRM sessions right now */
>> + xe_assert(pxp->xe, q->pxp.type == DRM_XE_PXP_TYPE_HWDRM);
>> +
>> + /*
>> + * Runtime suspend kills PXP, so we need to turn it off while we
>> have
>> + * active queues that use PXP
>> + */
>> + xe_pm_runtime_get(pxp->xe);
>> +
>> + if (!pxp_prerequisites_done(pxp)) {
>> + ret = -EBUSY;
> Wouldn't EAGAIN be more appropriate? The pre-reqs here are the GSC
> firmware load which is guaranteed to in progress or done (or dead?),
> in which case it is just a matter or re-trying until the firmware init
> completes?
Userspace tends to retry immediately when we return -EAGAIN. This wait
can take several hundred ms and I didn't want userspace to just keep
retrying in a tight loop for that long, so I used a different error
code. This was also discussed on the mesa review here:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30723#note_2622269
>
>> + goto out;
>> + }
>> +
>> +wait_for_termination:
>> + /*
>> + * if there is a termination in progress, wait for it.
>> + * We need to wait outside the lock because the completion is
>> done from
>> + * within the lock
>> + */
>> + if (!wait_for_completion_timeout(&pxp->termination,
>> + msecs_to_jiffies(PXP_TERMINATION_TIMEOUT_MS)))
>> + return -ETIMEDOUT;
>> +
>> + mutex_lock(&pxp->mutex);
>> +
>> + /*
>> + * check if a new termination was issued between the above check
>> and
>> + * grabbing the mutex
>> + */
>> + if (!completion_done(&pxp->termination)) {
>> + mutex_unlock(&pxp->mutex);
>> + goto wait_for_termination;
>> + }
>> +
>> + /* If PXP is not already active, turn it on */
>> + switch (pxp->status) {
>> + case XE_PXP_ERROR:
>> + ret = -EIO;
>> + break;
>> + case XE_PXP_ACTIVE:
>> + break;
>> + case XE_PXP_READY_TO_START:
>> + ret = __pxp_start_arb_session(pxp);
>> + break;
>> + case XE_PXP_NEEDS_TERMINATION:
>> + pxp_terminate(pxp);
>> + mutex_unlock(&pxp->mutex);
>> + goto wait_for_termination;
>> + default:
>> + drm_err(&pxp->xe->drm, "unexpected state during PXP start:
>> %u", pxp->status);
>> + ret = -EIO;
>> + break;
>> + }
>> +
>> + /* If everything went ok, add the queue to the list */
>> + if (!ret) {
>> + spin_lock_irq(&pxp->queues.lock);
>> + list_add_tail(&q->pxp.link, &pxp->queues.list);
>> + spin_unlock_irq(&pxp->queues.lock);
>> + }
>> +
>> + mutex_unlock(&pxp->mutex);
>> +
>> +out:
>> + /*
>> + * in the successful case the PM ref is released from
>> + * xe_pxp_exec_queue_remove
>> + */
>> + if (ret)
>> + xe_pm_runtime_put(pxp->xe);
> Does the runtime PM get/put need to be mutex protected as well? Is it
> possible for two xe_pxp_exec_queue_add() calls to be running
> concurrently?
It is possible to have two xe_pxp_exec_queue_add running concurrently,
but that shouldn't matter with the pm_put. Am I not seeing a race?
>
>> +
>> + return ret;
>> +}
>> +
>> +/**
>> + * xe_pxp_exec_queue_remove - remove a queue from the PXP list
>> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
>> + * @q: the queue to remove from the list
>> + *
>> + * If PXP is enabled and the exec_queue is in the list, the queue
>> will be
>> + * removed from the list and its PM reference will be released. It
>> is safe to
>> + * call this function multiple times for the same queue.
>> + */
>> +void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct
>> xe_exec_queue *q)
>> +{
>> + bool need_pm_put = false;
>> +
>> + if (!xe_pxp_is_enabled(pxp))
>> + return;
>> +
>> + spin_lock_irq(&pxp->queues.lock);
>> +
>> + if (!list_empty(&q->pxp.link)) {
>> + list_del_init(&q->pxp.link);
>> + need_pm_put = true;
>> + }
>> +
>> + q->pxp.type = DRM_XE_PXP_TYPE_NONE;
>> +
>> + spin_unlock_irq(&pxp->queues.lock);
>> +
>> + if (need_pm_put)
>> + xe_pm_runtime_put(pxp->xe);
>> +}
>> +
>> +static void pxp_invalidate_queues(struct xe_pxp *pxp)
>> +{
>> + struct xe_exec_queue *tmp, *q;
>> +
>> + spin_lock_irq(&pxp->queues.lock);
>> +
>> + list_for_each_entry(tmp, &pxp->queues.list, pxp.link) {
> Double space.
>
>> + q = xe_exec_queue_get_unless_zero(tmp);
>> +
>> + if (!q)
>> + continue;
>> +
>> + xe_exec_queue_kill(q);
>> + xe_exec_queue_put(q);
>> + }
> This doesn't need to empty the list out as well?
It's not strictly necessary, because it is ok to kill a queue multiple
times. Given the PM handling required as part of removing a queue from
the list and the fact that it needs to happen outside the lock (see
xe_pxp_exec_queue_remove), my thought was that it'd be easier to just
not do it here and leave it to when the queue is cleaned up.
>
>> +
>> + spin_unlock_irq(&pxp->queues.lock);
>> +}
>> +
>> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
>> index 81bafe2714ff..2e0ab186072a 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp.h
>> +++ b/drivers/gpu/drm/xe/xe_pxp.h
>> @@ -9,10 +9,17 @@
>> #include <linux/types.h>
>> struct xe_device;
>> +struct xe_exec_queue;
>> +struct xe_pxp;
>> bool xe_pxp_is_supported(const struct xe_device *xe);
>> +bool xe_pxp_is_enabled(const struct xe_pxp *pxp);
>> int xe_pxp_init(struct xe_device *xe);
>> void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
>> +int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct
>> xe_exec_queue *q, u8 type);
>> +int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q);
>> +void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct
>> xe_exec_queue *q);
>> +
>> #endif /* __XE_PXP_H__ */
>> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c
>> b/drivers/gpu/drm/xe/xe_pxp_submit.c
>> index c9258c861556..becffa6dfd4c 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp_submit.c
>> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
>> @@ -26,8 +26,6 @@
>> #include "instructions/xe_mi_commands.h"
>> #include "regs/xe_gt_regs.h"
>> -#define ARB_SESSION 0xF /* TODO: move to UAPI */
>> -
>> /*
>> * The VCS is used for kernel-owned GGTT submissions to issue key termination.
>> * Terminations are serialized, so we only need a single queue and a single
>> @@ -495,7 +493,7 @@ int xe_pxp_submit_session_init(struct
>> xe_pxp_gsc_client_resources *gsc_res, u32
>> FIELD_PREP(PXP43_INIT_SESSION_APPTYPE, 0));
>> msg_in.header.buffer_len = sizeof(msg_in) - sizeof(msg_in.header);
>> - if (id == ARB_SESSION)
>> + if (id == DRM_XE_PXP_HWDRM_DEFAULT_SESSION)
> Would have been clearer to just use the correct name from the start.
You mean define DRM_XE_PXP_HWDRM_DEFAULT_SESSION locally in the earlier
patch, and then move it to the uapi without a rename?
>
>> msg_in.protection_mode = PXP43_INIT_SESSION_PROTECTION_ARB;
>> ret = gsccs_send_message(gsc_res, &msg_in, sizeof(msg_in),
>> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h
>> b/drivers/gpu/drm/xe/xe_pxp_types.h
>> index d5cf8faed7be..eb6a0183320a 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
>> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
>> @@ -6,7 +6,10 @@
>> #ifndef __XE_PXP_TYPES_H__
>> #define __XE_PXP_TYPES_H__
>> +#include <linux/completion.h>
>> #include <linux/iosys-map.h>
>> +#include <linux/mutex.h>
>> +#include <linux/spinlock.h>
>> #include <linux/types.h>
>> #include <linux/workqueue.h>
>> @@ -16,6 +19,14 @@ struct xe_device;
>> struct xe_gt;
>> struct xe_vm;
>> +enum xe_pxp_status {
>> + XE_PXP_ERROR = -1,
>> + XE_PXP_NEEDS_TERMINATION = 0, /* starting status */
>> + XE_PXP_TERMINATION_IN_PROGRESS,
>> + XE_PXP_READY_TO_START,
>> + XE_PXP_ACTIVE
>> +};
>> +
>> /**
>> * struct xe_pxp_gsc_client_resources - resources for GSC
>> submission by a PXP
>> * client. The GSC FW supports multiple GSC client active at the
>> same time.
>> @@ -82,6 +93,21 @@ struct xe_pxp {
>> #define PXP_TERMINATION_REQUEST BIT(0)
>> #define PXP_TERMINATION_COMPLETE BIT(1)
>> } irq;
>> +
>> + /** @mutex: protects the pxp status and the queue list */
>> + struct mutex mutex;
>> + /** @status: the current pxp status */
>> + enum xe_pxp_status status;
>> + /** @termination: completion struct that tracks terminations */
>> + struct completion termination;
>> +
>> + /** @queues: management of exec_queues that use PXP */
>> + struct {
>> + /** @queues.lock: spinlock protecting the queue management */
>> + spinlock_t lock;
>> + /** @queues.list: list of exec_queues that use PXP */
>> + struct list_head list;
>> + } queues;
>> };
>> #endif /* __XE_PXP_TYPES_H__ */
>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>> index b6fbe4988f2e..5f4d08123672 100644
>> --- a/include/uapi/drm/xe_drm.h
>> +++ b/include/uapi/drm/xe_drm.h
>> @@ -1085,6 +1085,24 @@ struct drm_xe_vm_bind {
>> /**
>> * struct drm_xe_exec_queue_create - Input of
>> &DRM_IOCTL_XE_EXEC_QUEUE_CREATE
>> *
>> + * This ioctl supports setting the following properties via the
>> + * %DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY extension, which uses the
>> + * generic @drm_xe_ext_set_property struct:
>> + *
>> + * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY - set the queue
>> priority.
>> + * CAP_SYS_NICE is required to set a value above normal.
>> + * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE - set the queue
>> timeslice
>> + * duration.
> Units would be helpful.
I have no idea what they are. I only added this documentation because it
seemed unclean to only add the part about PXP.
>
>> + * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE - set the type of PXP
>> session
>> + * this queue will be used with. Valid values are listed in enum
>> + * drm_xe_pxp_session_type. %DRM_XE_PXP_TYPE_NONE is the default
>> behavior, so
>> + * there is no need to explicitly set that. When a queue of type
>> + * %DRM_XE_PXP_TYPE_HWDRM is created, the PXP default HWDRM session
>> + * (%XE_PXP_HWDRM_DEFAULT_SESSION) will be started, if isn't
>> already running.
>> + * Given that going into a power-saving state kills PXP HWDRM
>> sessions,
>> + * runtime PM will be blocked while queues of this type are alive.
>> + * All PXP queues will be killed if a PXP invalidation event occurs.
> Seems odd to say 'values are listed in ...' and then go on to describe
> each type and provide extra information about them. Seems like the
> extra details should be part of the enum documentation instead of here?
This is documentation specific to how this ioctl handles those values,
so it belongs here. The 'values are listed in ...' sentence was about
being future-proof, so that if we extend the enum later we don't
necessarily need to add any extra explanation here.
Daniele
>
> John.
>
>> + *
>> * The example below shows how to use @drm_xe_exec_queue_create to
>> create
>> * a simple exec_queue (no parallel submission) of class
>> * &DRM_XE_ENGINE_CLASS_RENDER.
>> @@ -1108,7 +1126,7 @@ struct drm_xe_exec_queue_create {
>> #define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
>> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
>> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
>> -
>> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE 2
>> /** @extensions: Pointer to the first extension struct, if any */
>> __u64 extensions;
>> @@ -1694,6 +1712,26 @@ struct drm_xe_oa_stream_info {
>> __u64 reserved[3];
>> };
>> +/**
>> + * enum drm_xe_pxp_session_type - Supported PXP session types.
>> + *
>> + * We currently only support HWDRM sessions, which are used for
>> protected
>> + * content that ends up being displayed, but the HW supports
>> multiple types, so
>> + * we might extend support in the future.
>> + */
>> +enum drm_xe_pxp_session_type {
>> + /** @DRM_XE_PXP_TYPE_NONE: PXP not used */
>> + DRM_XE_PXP_TYPE_NONE = 0,
>> + /**
>> + * @DRM_XE_PXP_TYPE_HWDRM: HWDRM sessions are used for content
>> that ends
>> + * up on the display.
>> + */
>> + DRM_XE_PXP_TYPE_HWDRM = 1,
>> +};
>> +
>> +/* ID of the protected content session managed by Xe when PXP is
>> active */
>> +#define DRM_XE_PXP_HWDRM_DEFAULT_SESSION 0xf
>> +
>> #if defined(__cplusplus)
>> }
>> #endif
* Re: [PATCH v2 08/12] drm/xe/pxp: add a query for PXP status
2024-10-09 0:09 ` John Harrison
@ 2024-11-12 21:29 ` Daniele Ceraolo Spurio
0 siblings, 0 replies; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-11-12 21:29 UTC (permalink / raw)
To: John Harrison, intel-xe; +Cc: José Roberto de Souza
On 10/8/24 17:09, John Harrison wrote:
> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>> PXP prerequisites (SW proxy and HuC auth via GSC) are completed
>> asynchronously from driver load, which means that userspace can start
>> submitting before we're ready to start a PXP session. Therefore, we need
>> a query that userspace can use to check not only if PXP is supported by
> by -> but?
>
>> also to wait until the prerequisites are done.
>>
>> v2: Improve doc, do not report TYPE_NONE as supported (José)
>>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> Cc: José Roberto de Souza <jose.souza@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_pxp.c | 33 +++++++++++++++++++++++++++++++++
>> drivers/gpu/drm/xe/xe_pxp.h | 1 +
>> drivers/gpu/drm/xe/xe_query.c | 32 ++++++++++++++++++++++++++++++++
>> include/uapi/drm/xe_drm.h | 35 +++++++++++++++++++++++++++++++++++
>> 4 files changed, 101 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
>> index acdc25c8e8a1..ca4302af4ced 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp.c
>> +++ b/drivers/gpu/drm/xe/xe_pxp.c
>> @@ -60,6 +60,39 @@ static bool pxp_prerequisites_done(const struct
>> xe_pxp *pxp)
>> return ready;
>> }
>> +/**
>> + * xe_pxp_get_readiness_status - check whether PXP is ready for userspace use
>> + * @pxp: the xe_pxp pointer (can be NULL if PXP is disabled)
>> + *
>> + * This function is used for status query from userspace, so the
>> returned value
> value -> values
>
>> + * follow the uapi (see drm_xe_query_pxp_status)
>> + *
>> + * Returns: 0 if PXP is not ready yet, 1 if it is ready, an errno
>> value if PXP
>> + * is not supported/enabled or if something went wrong in the
>> initialization of
>> + * the prerequisites.
> You have two independent statements regarding the return code. Would
> be better to just have the "Returns: ..." paragraph but include a
> statement that these values are as defined in the UAPI.
>
>> + */
>> +int xe_pxp_get_readiness_status(struct xe_pxp *pxp)
>> +{
>> + int ret = 0;
>> +
>> + if (!xe_pxp_is_enabled(pxp))
>> + return -ENODEV;
>> +
>> + /* if the GSC or HuC FW are in an error state, PXP will never
>> work */
>> + if (xe_uc_fw_status_to_error(pxp->gt->uc.huc.fw.status) ||
>> + xe_uc_fw_status_to_error(pxp->gt->uc.gsc.fw.status))
>> + return -EIO;
>> +
>> + xe_pm_runtime_get(pxp->xe);
>> +
>> + /* PXP requires both HuC loaded and GSC proxy initialized */
>> + if (pxp_prerequisites_done(pxp))
>> + ret = 1;
>> +
>> + xe_pm_runtime_put(pxp->xe);
>> + return ret;
>> +}
>> +
>> static bool pxp_session_is_in_play(struct xe_pxp *pxp, u32 id)
>> {
>> struct xe_gt *gt = pxp->gt;
>> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
>> index 2e0ab186072a..868813cc84b9 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp.h
>> +++ b/drivers/gpu/drm/xe/xe_pxp.h
>> @@ -14,6 +14,7 @@ struct xe_pxp;
>> bool xe_pxp_is_supported(const struct xe_device *xe);
>> bool xe_pxp_is_enabled(const struct xe_pxp *pxp);
>> +int xe_pxp_get_readiness_status(struct xe_pxp *pxp);
>> int xe_pxp_init(struct xe_device *xe);
>> void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
>> diff --git a/drivers/gpu/drm/xe/xe_query.c
>> b/drivers/gpu/drm/xe/xe_query.c
>> index 73ef6e4c2dc9..a1e297234972 100644
>> --- a/drivers/gpu/drm/xe/xe_query.c
>> +++ b/drivers/gpu/drm/xe/xe_query.c
>> @@ -22,6 +22,7 @@
>> #include "xe_guc_hwconfig.h"
>> #include "xe_macros.h"
>> #include "xe_mmio.h"
>> +#include "xe_pxp.h"
>> #include "xe_ttm_vram_mgr.h"
>> static const u16 xe_to_user_engine_class[] = {
>> @@ -680,6 +681,36 @@ static int query_oa_units(struct xe_device *xe,
>> return ret ? -EFAULT : 0;
>> }
>> +static int query_pxp_status(struct xe_device *xe, struct drm_xe_device_query *query)
>> +{
>> + struct drm_xe_query_pxp_status __user *query_ptr =
>> u64_to_user_ptr(query->data);
>> + size_t size = sizeof(struct drm_xe_query_pxp_status);
>> + struct drm_xe_query_pxp_status resp;
>> + int ret;
>> +
>> + if (query->size == 0) {
>> + query->size = size;
>> + return 0;
>> + } else if (XE_IOCTL_DBG(xe, query->size != size)) {
> Do we not allow structures to grow in future versions? In a backwards
> compatible way, that is.
We do. The user is expected to first call the ioctl with size 0, get the
actual size, and then use that in a second ioctl call to get the actual
information. So even if the size is updated, userspace should update with
it. All the other query ioctls are coded the same way.
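The two-call convention can be sketched in isolation like this (fake_query is a stand-in for the real DRM_IOCTL_XE_DEVICE_QUERY handler, and the struct is a simplified model, not the real uapi layout):

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

struct fake_pxp_status {
	uint32_t status;
	uint32_t supported_session_types;
};

/* "Kernel" side: size 0 reports the required size, any other mismatch is
 * rejected, and a matching size fills in the data. */
static int fake_query(uint32_t *size, struct fake_pxp_status *data)
{
	const uint32_t required = sizeof(struct fake_pxp_status);

	if (*size == 0) {
		*size = required;
		return 0;
	}
	if (*size != required)
		return -EINVAL;

	data->status = 1;                        /* init complete */
	data->supported_session_types = 1u << 1; /* illustrative bitmask */
	return 0;
}

/* "Userspace" side: discover the size first, then fetch the data with it. */
static int fake_query_pxp(struct fake_pxp_status *out)
{
	uint32_t size = 0;
	int ret = fake_query(&size, NULL);

	if (ret)
		return ret;
	/* a real client would allocate 'size' bytes here if the uapi grew */
	return fake_query(&size, out);
}
```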
>
>> + return -EINVAL;
>> + }
>> +
>> + if (copy_from_user(&resp, query_ptr, size))
>> + return -EFAULT;
> Why copy in the data from the user side only to overwrite everything
> in the structure?
true, will drop.
>
>> +
>> + ret = xe_pxp_get_readiness_status(xe->pxp);
>> + if (ret < 0)
>> + return ret;
>> +
>> + resp.status = ret;
>> + resp.supported_session_types = BIT(DRM_XE_PXP_TYPE_HWDRM);
>> +
>> + if (copy_to_user(query_ptr, &resp, size))
>> + return -EFAULT;
>> +
>> + return 0;
>> +}
>> +
>> static int (* const xe_query_funcs[])(struct xe_device *xe,
>> struct drm_xe_device_query *query) = {
>> query_engines,
>> @@ -691,6 +722,7 @@ static int (* const xe_query_funcs[])(struct
>> xe_device *xe,
>> query_engine_cycles,
>> query_uc_fw_version,
>> query_oa_units,
>> + query_pxp_status,
>> };
>> int xe_query_ioctl(struct drm_device *dev, void *data, struct
>> drm_file *file)
>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>> index 5f4d08123672..9972ceb3fbfb 100644
>> --- a/include/uapi/drm/xe_drm.h
>> +++ b/include/uapi/drm/xe_drm.h
>> @@ -627,6 +627,39 @@ struct drm_xe_query_uc_fw_version {
>> __u64 reserved;
>> };
>> +/**
>> + * struct drm_xe_query_pxp_status - query if PXP is ready
>> + *
>> + * If PXP is enabled and no fatal error as occurred, the status will
>> be set to
> as -> has
>
>> + * one of the following values:
>> + * 0: PXP init still in progress
>> + * 1: PXP init complete
>> + *
>> + * If PXP is not enabled or something has gone wrong, the query will
>> be failed
>> + * with one of the following error codes:
>> + * -ENODEV: PXP not supported or disabled;
>> + * -EIO: fatal error occurred during init, so PXP will never be
>> enabled;
>> + * -EINVAL: incorrect value provided as part of the query;
>> + * -EFAULT: error copying the memory between kernel and userspace.
> Currently, could also be copying from user to kernel. Although that
> copy seems unnecessary.
I meant this to cover both directions, but I've dropped the user-to-kernel copy anyway.
Daniele
>
> John.
>
>> + *
>> + * The status can only be 0 in the first few seconds after driver
>> load. If
>> + * everything works as expected, the status will transition to init
>> complete in
>> + * less than 1 second, while in case of errors the driver might take
>> longer to
>> + * start returning an error code, but it should still take less than
>> 10 seconds.
>> + *
>> + * The supported session type bitmask is based on the values in
>> + * enum drm_xe_pxp_session_type. TYPE_NONE is always supported and
>> therefore
>> + * is not reported in the bitmask.
>> + *
>> + */
>> +struct drm_xe_query_pxp_status {
>> + /** @status: current PXP status */
>> + __u32 status;
>> +
>> + /** @supported_session_types: bitmask of supported PXP session
>> types */
>> + __u32 supported_session_types;
>> +};
>> +
>> /**
>> * struct drm_xe_device_query - Input of &DRM_IOCTL_XE_DEVICE_QUERY
>> - main
>> * structure to query device information
>> @@ -646,6 +679,7 @@ struct drm_xe_query_uc_fw_version {
>> * attributes.
>> * - %DRM_XE_DEVICE_QUERY_GT_TOPOLOGY
>> * - %DRM_XE_DEVICE_QUERY_ENGINE_CYCLES
>> + * - %DRM_XE_DEVICE_QUERY_PXP_STATUS
>> *
>> * If size is set to 0, the driver fills it with the required size for
>> * the requested type of data to query. If size is equal to the
>> required
>> @@ -698,6 +732,7 @@ struct drm_xe_device_query {
>> #define DRM_XE_DEVICE_QUERY_ENGINE_CYCLES 6
>> #define DRM_XE_DEVICE_QUERY_UC_FW_VERSION 7
>> #define DRM_XE_DEVICE_QUERY_OA_UNITS 8
>> +#define DRM_XE_DEVICE_QUERY_PXP_STATUS 9
>> /** @query: The type of data to query */
>> __u32 query;
>
* Re: [PATCH v2 09/12] drm/xe/pxp: Add API to mark a BO as using PXP
2024-10-09 0:42 ` John Harrison
@ 2024-11-12 22:23 ` Daniele Ceraolo Spurio
2024-11-15 17:49 ` John Harrison
0 siblings, 1 reply; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-11-12 22:23 UTC (permalink / raw)
To: John Harrison, intel-xe; +Cc: Matthew Brost, Thomas Hellström
On 10/8/24 17:42, John Harrison wrote:
> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>> The driver needs to know if a BO is encrypted with PXP to enable the
>> display decryption at flip time.
>> Furthermore, we want to keep track of the status of the encryption and
>> reject any operation that involves a BO that is encrypted using an old
>> key. There are two points in time where such checks can kick in:
>>
>> 1 - at VM bind time, all operations except for unmapping will be
>> rejected if the key used to encrypt the BO is no longer valid. This
>> check is opt-in via a new VM_BIND flag, to avoid a scenario where a
>> malicious app purposely shares an invalid BO with the compositor
>> (or
>> other app) and cause an error there.
> Not following the last statement here.
If we always reject VM_BIND on invalid BOs, a malicious app can
intentionally pass an invalid BO to the compositor to cause its VM_BIND
call to fail. The compositor might not have any knowledge of PXP and
therefore not be able to handle such an error, so we definitely need to
avoid this scenario; therefore, the check on the object validity is
opt-in. Any suggestion on how to reword it?
Note that the worst that can happen if the check is skipped is that we
display garbage; there is no risk of leaking the protected data.
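The opt-in behavior can be modeled like this (the flag and return values are illustrative stand-ins for DRM_XE_VM_BIND_FLAG_CHECK_PXP and -ENOEXEC, not the real uapi constants):

```c
#include <stdbool.h>
#include <stdint.h>

#define FAKE_BIND_FLAG_CHECK_PXP (1u << 0)

struct fake_bo {
	bool pxp_protected;
	bool key_valid;
};

/*
 * The stale-key check only runs when the caller explicitly opts in, so a
 * PXP-unaware compositor (which never sets the flag) cannot be made to
 * fail by being handed an invalidated BO.
 */
static int fake_validate_bind(const struct fake_bo *bo, uint32_t flags)
{
	if ((flags & FAKE_BIND_FLAG_CHECK_PXP) &&
	    bo->pxp_protected && !bo->key_valid)
		return -1; /* stands in for -ENOEXEC */
	return 0;
}
```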
>
>>
>> 2 - at job submission time, if the queue is marked as using PXP, all
>> objects bound to the VM will be checked and the submission will be
>> rejected if any of them was encrypted with a key that is no longer
>> valid.
>>
>> Note that there is no risk of leaking the encrypted data if a user does
>> not opt-in to those checks; the only consequence is that the user will
>> not realize that the encryption key is changed and that the data is no
>> longer valid.
>>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> ---
>> .../xe/compat-i915-headers/pxp/intel_pxp.h | 10 +-
>> drivers/gpu/drm/xe/xe_bo.c | 100 +++++++++++++++++-
>> drivers/gpu/drm/xe/xe_bo.h | 5 +
>> drivers/gpu/drm/xe/xe_bo_types.h | 3 +
>> drivers/gpu/drm/xe/xe_exec.c | 6 ++
>> drivers/gpu/drm/xe/xe_pxp.c | 74 +++++++++++++
>> drivers/gpu/drm/xe/xe_pxp.h | 4 +
>> drivers/gpu/drm/xe/xe_pxp_types.h | 3 +
>> drivers/gpu/drm/xe/xe_vm.c | 46 +++++++-
>> drivers/gpu/drm/xe/xe_vm.h | 2 +
>> include/uapi/drm/xe_drm.h | 19 ++++
>> 11 files changed, 265 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
>> b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
>> index 881680727452..d8682f781619 100644
>> --- a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
>> +++ b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
>> @@ -9,6 +9,9 @@
>> #include <linux/errno.h>
>> #include <linux/types.h>
>> +#include "xe_bo.h"
>> +#include "xe_pxp.h"
>> +
>> struct drm_i915_gem_object;
>> struct xe_pxp;
>> @@ -16,13 +19,16 @@ static inline int intel_pxp_key_check(struct
>> xe_pxp *pxp,
>> struct drm_i915_gem_object *obj,
>> bool assign)
>> {
>> - return -ENODEV;
>> + if (assign)
>> + return -EINVAL;
> What does 'assign' mean and why is it always invalid?
In i915 we used the same function to both assign the key at first
submission (assign=true) and to check it later on (assign=false). This
header is for compatibility with the display code and the expectation is
that the display code will never assign a key and only check it.
>
>> +
>> + return xe_pxp_key_check(pxp, obj);
>> }
>> static inline bool
>> i915_gem_object_is_protected(const struct drm_i915_gem_object *obj)
>> {
>> - return false;
>> + return xe_bo_is_protected(obj);
>> }
>> #endif
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index 56a089aa3916..0f591b7d93b1 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -6,6 +6,7 @@
>> #include "xe_bo.h"
>> #include <linux/dma-buf.h>
>> +#include <linux/nospec.h>
>> #include <drm/drm_drv.h>
>> #include <drm/drm_gem_ttm_helper.h>
>> @@ -24,6 +25,7 @@
>> #include "xe_migrate.h"
>> #include "xe_pm.h"
>> #include "xe_preempt_fence.h"
>> +#include "xe_pxp.h"
>> #include "xe_res_cursor.h"
>> #include "xe_trace_bo.h"
>> #include "xe_ttm_stolen_mgr.h"
>> @@ -1949,6 +1951,95 @@ void xe_bo_vunmap(struct xe_bo *bo)
>> __xe_bo_vunmap(bo);
>> }
>> +static int gem_create_set_pxp_type(struct xe_device *xe, struct xe_bo *bo, u64 value)
>> +{
>> + if (value == DRM_XE_PXP_TYPE_NONE)
>> + return 0;
>> +
>> + /* we only support DRM_XE_PXP_TYPE_HWDRM for now */
>> + if (XE_IOCTL_DBG(xe, value != DRM_XE_PXP_TYPE_HWDRM))
>> + return -EINVAL;
>> +
>> + xe_pxp_key_assign(xe->pxp, bo);
>> +
>> + return 0;
>> +}
>> +
>> +typedef int (*xe_gem_create_set_property_fn)(struct xe_device *xe,
>> + struct xe_bo *bo,
>> + u64 value);
>> +
>> +static const xe_gem_create_set_property_fn
>> gem_create_set_property_funcs[] = {
>> + [DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY] =
>> gem_create_set_pxp_type,
>> +};
>> +
>> +static int gem_create_user_ext_set_property(struct xe_device *xe,
>> + struct xe_bo *bo,
>> + u64 extension)
>> +{
>> + u64 __user *address = u64_to_user_ptr(extension);
>> + struct drm_xe_ext_set_property ext;
>> + int err;
>> + u32 idx;
>> +
>> + err = __copy_from_user(&ext, address, sizeof(ext));
>> + if (XE_IOCTL_DBG(xe, err))
>> + return -EFAULT;
>> +
>> + if (XE_IOCTL_DBG(xe, ext.property >=
>> + ARRAY_SIZE(gem_create_set_property_funcs)) ||
>> + XE_IOCTL_DBG(xe, ext.pad) ||
>> + XE_IOCTL_DBG(xe, ext.property !=
>> DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY))
> Two overlapping checks on the same field in the same if statement
> seems unnecessary.
I've followed the same approach as the existing
exec_queue_user_ext_set_property for consistency.
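For reference, the extension-chain walk that both paths implement looks roughly like this (simplified stand-ins for drm_xe_user_extension / drm_xe_ext_set_property; the real layouts are in xe_drm.h):

```c
#include <stdint.h>

struct fake_user_extension {
	uint64_t next_extension; /* pointer to the next ext; 0 ends the chain */
	uint32_t name;
	uint32_t pad;
};

struct fake_ext_set_property {
	struct fake_user_extension base;
	uint32_t property;
	uint32_t pad;
	uint64_t value;
};

#define FAKE_MAX_USER_EXTENSIONS 16

/* Walk a chain the way the kernel does, bailing out if it is too long.
 * Returns the number of extensions, or -1 for an over-long chain. */
static int fake_count_chain(uint64_t extensions)
{
	int n = 0;

	while (extensions) {
		const struct fake_user_extension *ext =
			(const struct fake_user_extension *)(uintptr_t)extensions;

		if (++n > FAKE_MAX_USER_EXTENSIONS)
			return -1;
		extensions = ext->next_extension;
	}
	return n;
}
```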
>
>> + return -EINVAL;
>> +
>> + idx = array_index_nospec(ext.property,
>> ARRAY_SIZE(gem_create_set_property_funcs));
>> + if (!gem_create_set_property_funcs[idx])
>> + return -EINVAL;
>> +
>> + return gem_create_set_property_funcs[idx](xe, bo, ext.value);
>> +}
>> +
>> +typedef int (*xe_gem_create_user_extension_fn)(struct xe_device *xe,
>> + struct xe_bo *bo,
>> + u64 extension);
>> +
>> +static const xe_gem_create_user_extension_fn
>> gem_create_user_extension_funcs[] = {
>> + [DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY] =
>> gem_create_user_ext_set_property,
>> +};
>> +
>> +#define MAX_USER_EXTENSIONS 16
>> +static int gem_create_user_extensions(struct xe_device *xe, struct
>> xe_bo *bo,
>> + u64 extensions, int ext_number)
>> +{
>> + u64 __user *address = u64_to_user_ptr(extensions);
>> + struct drm_xe_user_extension ext;
>> + int err;
>> + u32 idx;
>> +
>> + if (XE_IOCTL_DBG(xe, ext_number >= MAX_USER_EXTENSIONS))
>> + return -E2BIG;
>> +
>> + err = __copy_from_user(&ext, address, sizeof(ext));
>> + if (XE_IOCTL_DBG(xe, err))
>> + return -EFAULT;
>> +
>> + if (XE_IOCTL_DBG(xe, ext.pad) ||
>> + XE_IOCTL_DBG(xe, ext.name >=
>> ARRAY_SIZE(gem_create_user_extension_funcs)))
>> + return -EINVAL;
>> +
>> + idx = array_index_nospec(ext.name,
>> + ARRAY_SIZE(gem_create_user_extension_funcs));
>> + err = gem_create_user_extension_funcs[idx](xe, bo, extensions);
>> + if (XE_IOCTL_DBG(xe, err))
>> + return err;
>> +
>> + if (ext.next_extension)
>> + return gem_create_user_extensions(xe, bo, ext.next_extension,
>> + ++ext_number);
>> +
>> + return 0;
>> +}
>> +
>> int xe_gem_create_ioctl(struct drm_device *dev, void *data,
>> struct drm_file *file)
>> {
>> @@ -1961,8 +2052,7 @@ int xe_gem_create_ioctl(struct drm_device *dev,
>> void *data,
>> u32 handle;
>> int err;
>> - if (XE_IOCTL_DBG(xe, args->extensions) ||
>> - XE_IOCTL_DBG(xe, args->pad[0] || args->pad[1] ||
>> args->pad[2]) ||
>> + if (XE_IOCTL_DBG(xe, args->pad[0] || args->pad[1] ||
>> args->pad[2]) ||
>> XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
>> return -EINVAL;
>> @@ -2037,6 +2127,12 @@ int xe_gem_create_ioctl(struct drm_device
>> *dev, void *data,
>> goto out_vm;
>> }
>> + if (args->extensions) {
>> + err = gem_create_user_extensions(xe, bo, args->extensions, 0);
>> + if (err)
>> + goto out_bulk;
>> + }
>> +
>> err = drm_gem_handle_create(file, &bo->ttm.base, &handle);
>> if (err)
>> goto out_bulk;
>> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
>> index 1c9dc8adaaa3..721f7dc35aac 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.h
>> +++ b/drivers/gpu/drm/xe/xe_bo.h
>> @@ -171,6 +171,11 @@ static inline bool xe_bo_is_pinned(struct xe_bo
>> *bo)
>> return bo->ttm.pin_count;
>> }
>> +static inline bool xe_bo_is_protected(const struct xe_bo *bo)
>> +{
>> + return bo->pxp_key_instance;
>> +}
>> +
>> static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
>> {
>> if (likely(bo)) {
>> diff --git a/drivers/gpu/drm/xe/xe_bo_types.h
>> b/drivers/gpu/drm/xe/xe_bo_types.h
>> index ebc8abf7930a..8668e0374b18 100644
>> --- a/drivers/gpu/drm/xe/xe_bo_types.h
>> +++ b/drivers/gpu/drm/xe/xe_bo_types.h
>> @@ -56,6 +56,9 @@ struct xe_bo {
>> */
>> struct list_head client_link;
>> #endif
>> + /** @pxp_key_instance: key instance this bo was created against
>> (if any) */
>> + u32 pxp_key_instance;
>> +
>> /** @freed: List node for delayed put. */
>> struct llist_node freed;
>> /** @update_index: Update index if PT BO */
>> diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
>> index f36980aa26e6..aa4f2fe2e131 100644
>> --- a/drivers/gpu/drm/xe/xe_exec.c
>> +++ b/drivers/gpu/drm/xe/xe_exec.c
>> @@ -250,6 +250,12 @@ int xe_exec_ioctl(struct drm_device *dev, void
>> *data, struct drm_file *file)
>> goto err_exec;
>> }
>> + if (xe_exec_queue_uses_pxp(q)) {
>> + err = xe_vm_validate_protected(q->vm);
>> + if (err)
>> + goto err_exec;
>> + }
>> +
>> job = xe_sched_job_create(q, xe_exec_queue_is_parallel(q) ?
>> addresses : &args->address);
>> if (IS_ERR(job)) {
>> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
>> index ca4302af4ced..640e62d1d5d7 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp.c
>> +++ b/drivers/gpu/drm/xe/xe_pxp.c
>> @@ -8,6 +8,8 @@
>> #include <drm/drm_managed.h>
>> #include <drm/xe_drm.h>
>> +#include "xe_bo.h"
>> +#include "xe_bo_types.h"
>> #include "xe_device_types.h"
>> #include "xe_exec_queue.h"
>> #include "xe_exec_queue_types.h"
>> @@ -132,6 +134,9 @@ static void pxp_terminate(struct xe_pxp *pxp)
>> pxp_invalidate_queues(pxp);
>> + if (pxp->status == XE_PXP_ACTIVE)
>> + pxp->key_instance++;
>> +
>> /*
>> * If we have a termination already in progress, we need to
>> wait for
>> * it to complete before queueing another one. We update the state
>> @@ -343,6 +348,8 @@ int xe_pxp_init(struct xe_device *xe)
>> pxp->xe = xe;
>> pxp->gt = gt;
>> + pxp->key_instance = 1;
>> +
>> /*
>> * we'll use the completion to check if there is a termination
>> pending,
>> * so we start it as completed and we reinit it when a termination
>> @@ -574,3 +581,70 @@ static void pxp_invalidate_queues(struct xe_pxp
>> *pxp)
>> spin_unlock_irq(&pxp->queues.lock);
>> }
>> +/**
>> + * xe_pxp_key_assign - mark a BO as using the current PXP key iteration
>> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
>> + * @bo: the BO to mark
>> + *
>> + * Returns: -ENODEV if PXP is disabled, 0 otherwise.
>> + */
>> +int xe_pxp_key_assign(struct xe_pxp *pxp, struct xe_bo *bo)
>> +{
>> + if (!xe_pxp_is_enabled(pxp))
>> + return -ENODEV;
>> +
>> + xe_assert(pxp->xe, !bo->pxp_key_instance);
>> +
>> + /*
>> + * Note that the PXP key handling is inherently racey, because
>> the key
>> + * can theoretically change at any time (although it's unlikely
>> to do
>> + * so without triggers), even right after we copy it. Taking a lock
>> + * wouldn't help because the value might still change as soon as we
>> + * release the lock.
>> + * Userspace needs to handle the fact that their BOs can go
>> invalid at
>> + * any point.
>> + */
>> + bo->pxp_key_instance = pxp->key_instance;
>> +
>> + return 0;
>> +}
>> +
>> +/**
>> + * xe_pxp_key_check - check if the key used by a BO is valid
>> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
>> + * @bo: the BO we want to check
>> + *
>> + * Checks whether a BO was encrypted with the current key or an
>> obsolete one.
>> + *
>> + * Returns: 0 if the key is valid, -ENODEV if PXP is disabled,
>> -EINVAL if the
>> + * BO is not using PXP, -ENOEXEC if the key is not valid.
>> + */
>> +int xe_pxp_key_check(struct xe_pxp *pxp, struct xe_bo *bo)
>> +{
>> + if (!xe_pxp_is_enabled(pxp))
>> + return -ENODEV;
>> +
>> + if (!xe_bo_is_protected(bo))
>> + return -EINVAL;
>> +
>> + xe_assert(pxp->xe, bo->pxp_key_instance);
>> +
>> + /*
>> + * Note that the PXP key handling is inherently racey, because
>> the key
>> + * can theoretically change at any time (although it's unlikely
>> to do
>> + * so without triggers), even right after we check it. Taking a
>> lock
>> + * wouldn't help because the value might still change as soon as we
>> + * release the lock.
>> + * We mitigate the risk by checking the key at multiple points
>> (on each
>> + * submission involving the BO and right before flipping it on the
>> + * display), but there is still a very small chance that we could
>> + * operate on an invalid BO for a single submission or a single
>> frame
>> + * flip. This is a compromise made to protect the encrypted data
>> (which
>> + * is what the key termination is for).
>> + */
>> + if (bo->pxp_key_instance != pxp->key_instance)
> And the possibility that the key_instance value has wrapped around and
> is valid again is considered not a problem? Using a bo with a bad key
> potentially results in garbage being displayed but nothing worse than
> that?
Considering that the instance variable is a u32, even if we had one
invalidation per second (which is extremely unlikely unless someone is
actively attacking the system in a loop), it would take far too long for
the value to actually wrap. And yes on the second question.
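The timescale is easy to sanity-check with a back-of-the-envelope computation (a sketch, not driver code):

```c
#include <stdint.h>

/* Years until a u32 counter wraps at a given invalidation rate:
 * 2^32 / rate / seconds-per-year. At one invalidation per second this
 * works out to roughly 136 years. */
static double fake_years_to_wrap(double invalidations_per_sec)
{
	const double seconds_per_year = 365.25 * 24.0 * 60.0 * 60.0;

	return (((double)UINT32_MAX + 1.0) / invalidations_per_sec) /
	       seconds_per_year;
}
```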
>
>> + return -ENOEXEC;
>> +
>> + return 0;
>> +}
>> +
>> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
>> index 868813cc84b9..2d22a6e6ab27 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp.h
>> +++ b/drivers/gpu/drm/xe/xe_pxp.h
>> @@ -8,6 +8,7 @@
>> #include <linux/types.h>
>> +struct xe_bo;
>> struct xe_device;
>> struct xe_exec_queue;
>> struct xe_pxp;
>> @@ -23,4 +24,7 @@ int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp,
>> struct xe_exec_queue *q, u8 t
>> int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue
>> *q);
>> void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q);
>> +int xe_pxp_key_assign(struct xe_pxp *pxp, struct xe_bo *bo);
>> +int xe_pxp_key_check(struct xe_pxp *pxp, struct xe_bo *bo);
>> +
>> #endif /* __XE_PXP_H__ */
>> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h
>> b/drivers/gpu/drm/xe/xe_pxp_types.h
>> index eb6a0183320a..1bb747837f86 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
>> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
>> @@ -108,6 +108,9 @@ struct xe_pxp {
>> /** @queues.list: list of exec_queues that use PXP */
>> struct list_head list;
>> } queues;
>> +
>> + /** @key_instance: keep track of the current iteration of the
>> PXP key */
>> + u32 key_instance;
>> };
>> #endif /* __XE_PXP_TYPES_H__ */
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 56f105797ae6..1011d643ebb8 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -34,6 +34,7 @@
>> #include "xe_pm.h"
>> #include "xe_preempt_fence.h"
>> #include "xe_pt.h"
>> +#include "xe_pxp.h"
>> #include "xe_res_cursor.h"
>> #include "xe_sync.h"
>> #include "xe_trace_bo.h"
>> @@ -2754,7 +2755,8 @@ static struct dma_fence
>> *vm_bind_ioctl_ops_execute(struct xe_vm *vm,
>> (DRM_XE_VM_BIND_FLAG_READONLY | \
>> DRM_XE_VM_BIND_FLAG_IMMEDIATE | \
>> DRM_XE_VM_BIND_FLAG_NULL | \
>> - DRM_XE_VM_BIND_FLAG_DUMPABLE)
>> + DRM_XE_VM_BIND_FLAG_DUMPABLE | \
>> + DRM_XE_VM_BIND_FLAG_CHECK_PXP)
>> #ifdef TEST_VM_OPS_ERROR
>> #define SUPPORTED_FLAGS (SUPPORTED_FLAGS_STUB | FORCE_OP_ERROR)
>> @@ -2916,7 +2918,7 @@ static void xe_vma_ops_init(struct xe_vma_ops
>> *vops, struct xe_vm *vm,
>> static int xe_vm_bind_ioctl_validate_bo(struct xe_device *xe,
>> struct xe_bo *bo,
>> u64 addr, u64 range, u64 obj_offset,
>> - u16 pat_index)
>> + u16 pat_index, u32 op, u32 bind_flags)
>> {
>> u16 coh_mode;
>> @@ -2951,6 +2953,12 @@ static int
>> xe_vm_bind_ioctl_validate_bo(struct xe_device *xe, struct xe_bo *bo,
>> return -EINVAL;
>> }
>> + /* If a BO is protected it must be valid to be mapped */
> "is protected it can only be mapped if the key is still valid". The
> above can be read as saying the BO must be mappable, which isn't the
> same thing.
will update.
>
>> + if ((bind_flags & DRM_XE_VM_BIND_FLAG_CHECK_PXP) &&
>> xe_bo_is_protected(bo) &&
>> + op != DRM_XE_VM_BIND_OP_UNMAP && op !=
>> DRM_XE_VM_BIND_OP_UNMAP_ALL)
>> + if (XE_IOCTL_DBG(xe, xe_pxp_key_check(xe->pxp, bo) != 0))
>> + return -ENOEXEC;
>> +
>> return 0;
>> }
>> @@ -3038,6 +3046,8 @@ int xe_vm_bind_ioctl(struct drm_device *dev,
>> void *data, struct drm_file *file)
>> u32 obj = bind_ops[i].obj;
>> u64 obj_offset = bind_ops[i].obj_offset;
>> u16 pat_index = bind_ops[i].pat_index;
>> + u32 op = bind_ops[i].op;
>> + u32 bind_flags = bind_ops[i].flags;
>> if (!obj)
>> continue;
>> @@ -3050,7 +3060,8 @@ int xe_vm_bind_ioctl(struct drm_device *dev,
>> void *data, struct drm_file *file)
>> bos[i] = gem_to_xe_bo(gem_obj);
>> err = xe_vm_bind_ioctl_validate_bo(xe, bos[i], addr, range,
>> - obj_offset, pat_index);
>> + obj_offset, pat_index, op,
>> + bind_flags);
>> if (err)
>> goto put_obj;
>> }
>> @@ -3343,6 +3354,35 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
>> return ret;
>> }
>> +int xe_vm_validate_protected(struct xe_vm *vm)
>> +{
>> + struct drm_gpuva *gpuva;
>> + int err = 0;
>> +
>> + if (!vm)
>> + return -ENODEV;
>> +
>> + mutex_lock(&vm->snap_mutex);
>> +
>> + drm_gpuvm_for_each_va(gpuva, &vm->gpuvm) {
>> + struct xe_vma *vma = gpuva_to_vma(gpuva);
>> + struct xe_bo *bo = vma->gpuva.gem.obj ?
>> + gem_to_xe_bo(vma->gpuva.gem.obj) : NULL;
>> +
>> + if (!bo)
>> + continue;
>> +
>> + if (xe_bo_is_protected(bo)) {
>> + err = xe_pxp_key_check(vm->xe->pxp, bo);
>> + if (err)
>> + break;
>> + }
>> + }
>> +
>> + mutex_unlock(&vm->snap_mutex);
>> + return err;
>> +}
>> +
>> struct xe_vm_snapshot {
>> unsigned long num_snaps;
>> struct {
>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>> index bfc19e8113c3..dd51c9790dab 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.h
>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>> @@ -216,6 +216,8 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm,
>> struct xe_vma *vma,
>> int xe_vm_invalidate_vma(struct xe_vma *vma);
>> +int xe_vm_validate_protected(struct xe_vm *vm);
>> +
>> static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
>> {
>> xe_assert(vm->xe, xe_vm_in_preempt_fence_mode(vm));
>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>> index 9972ceb3fbfb..335febe03e40 100644
>> --- a/include/uapi/drm/xe_drm.h
>> +++ b/include/uapi/drm/xe_drm.h
>> @@ -776,8 +776,23 @@ struct drm_xe_device_query {
>> * - %DRM_XE_GEM_CPU_CACHING_WC - Allocate the pages as
>> write-combined. This
>> * is uncached. Scanout surfaces should likely use this. All
>> objects
>> * that can be placed in VRAM must use this.
>> + *
>> + * This ioctl supports setting the following properties via the
>> + * %DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY extension, which uses the
>> + * generic @drm_xe_ext_set_property struct:
>> + *
>> + * - %DRM_XE_GEM_CREATE_SET_PROPERTY_PXP_TYPE - set the type of PXP
>> session
>> + * this object will be used with. Valid values are listed in enum
>> + * drm_xe_pxp_session_type. %DRM_XE_PXP_TYPE_NONE is the default
>> behavior, so
>> + * there is no need to explicitly set that. Objects used with
>> session of type
>> + * %DRM_XE_PXP_TYPE_HWDRM will be marked as invalid if a PXP
>> invalidation
>> + * event occurs after their creation. Attempting to flip an
>> invalid object
>> + * will cause a black frame to be displayed instead. Submissions
>> with invalid
>> + * objects mapped in the VM will be rejected.
> Again, seems like the per type descriptions should be collected
> together in the type enum.
This is how this ioctl handles those values, so IMO they belong here.
Daniele
>
> John.
>
>> */
>> struct drm_xe_gem_create {
>> +#define DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY 0
>> +#define DRM_XE_GEM_CREATE_SET_PROPERTY_PXP_TYPE 0
>> /** @extensions: Pointer to the first extension struct, if any */
>> __u64 extensions;
>> @@ -939,6 +954,9 @@ struct drm_xe_vm_destroy {
>> * will only be valid for DRM_XE_VM_BIND_OP_MAP operations, the BO
>> * handle MBZ, and the BO offset MBZ. This flag is intended to
>> * implement VK sparse bindings.
>> + * - %DRM_XE_VM_BIND_FLAG_CHECK_PXP - If the object is encrypted
>> via PXP,
>> + * reject the binding if the encryption key is no longer valid. This
>> + * flag has no effect on BOs that are not marked as using PXP.
>> */
>> struct drm_xe_vm_bind_op {
>> /** @extensions: Pointer to the first extension struct, if any */
>> @@ -1029,6 +1047,7 @@ struct drm_xe_vm_bind_op {
>> #define DRM_XE_VM_BIND_FLAG_IMMEDIATE (1 << 1)
>> #define DRM_XE_VM_BIND_FLAG_NULL (1 << 2)
>> #define DRM_XE_VM_BIND_FLAG_DUMPABLE (1 << 3)
>> +#define DRM_XE_VM_BIND_FLAG_CHECK_PXP (1 << 4)
>> /** @flags: Bind flags */
>> __u32 flags;
>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 10/12] drm/xe/pxp: add PXP PM support
2024-10-09 1:12 ` John Harrison
@ 2024-11-12 22:27 ` Daniele Ceraolo Spurio
0 siblings, 0 replies; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-11-12 22:27 UTC (permalink / raw)
To: John Harrison, intel-xe
On 10/8/24 18:12, John Harrison wrote:
> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>> The HW suspend flow kills all PXP HWDRM sessions, so if there was any
>> PXP activity before the suspend we need to trigger a full termination on
>> suspend.
>>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_pm.c | 42 +++++++++++---
>> drivers/gpu/drm/xe/xe_pxp.c | 92 ++++++++++++++++++++++++++++++-
>> drivers/gpu/drm/xe/xe_pxp.h | 3 +
>> drivers/gpu/drm/xe/xe_pxp_types.h | 9 ++-
>> 4 files changed, 134 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
>> index 9f3c14fd9f33..1e1f87ec03a2 100644
>> --- a/drivers/gpu/drm/xe/xe_pm.c
>> +++ b/drivers/gpu/drm/xe/xe_pm.c
>> @@ -20,6 +20,7 @@
>> #include "xe_guc.h"
>> #include "xe_irq.h"
>> #include "xe_pcode.h"
>> +#include "xe_pxp.h"
>> #include "xe_trace.h"
>> #include "xe_wa.h"
>> @@ -90,22 +91,24 @@ int xe_pm_suspend(struct xe_device *xe)
>> drm_dbg(&xe->drm, "Suspending device\n");
>> trace_xe_pm_suspend(xe, __builtin_return_address(0));
>> + err = xe_pxp_pm_suspend(xe->pxp);
>> + if (err)
>> + goto err;
>> +
>> for_each_gt(gt, xe, id)
>> xe_gt_suspend_prepare(gt);
>> /* FIXME: Super racey... */
>> err = xe_bo_evict_all(xe);
>> if (err)
>> - goto err;
>> + goto err_pxp;
>> xe_display_pm_suspend(xe, false);
>> for_each_gt(gt, xe, id) {
>> err = xe_gt_suspend(gt);
>> - if (err) {
>> - xe_display_pm_resume(xe, false);
>> - goto err;
>> - }
>> + if (err)
>> + goto err_display;
>> }
>> xe_irq_suspend(xe);
>> @@ -114,6 +117,11 @@ int xe_pm_suspend(struct xe_device *xe)
>> drm_dbg(&xe->drm, "Device suspended\n");
>> return 0;
>> +
>> +err_display:
>> + xe_display_pm_resume(xe, false);
>> +err_pxp:
>> + xe_pxp_pm_resume(xe->pxp);
>> err:
>> drm_dbg(&xe->drm, "Device suspend failed %d\n", err);
>> return err;
>> @@ -163,6 +171,8 @@ int xe_pm_resume(struct xe_device *xe)
>> if (err)
>> goto err;
>> + xe_pxp_pm_resume(xe->pxp);
>> +
>> drm_dbg(&xe->drm, "Device resumed\n");
>> return 0;
>> err:
>> @@ -356,6 +366,10 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
>> */
>> lock_map_acquire(&xe_pm_runtime_lockdep_map);
>> + err = xe_pxp_pm_suspend(xe->pxp);
>> + if (err)
>> + goto out;
>> +
>> /*
>> * Applying lock for entire list op as xe_ttm_bo_destroy and
>> xe_bo_move_notify
>> also checks and deletes bo entry from user fault list.
>> @@ -369,23 +383,30 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
>> if (xe->d3cold.allowed) {
>> err = xe_bo_evict_all(xe);
>> if (err)
>> - goto out;
>> + goto out_pxp;
>> xe_display_pm_suspend(xe, true);
>> }
>> for_each_gt(gt, xe, id) {
>> err = xe_gt_suspend(gt);
>> if (err)
>> - goto out;
>> + goto out_display;
>> }
>> xe_irq_suspend(xe);
>> if (xe->d3cold.allowed)
>> xe_display_pm_suspend_late(xe);
>> +
>> + lock_map_release(&xe_pm_runtime_lockdep_map);
>> + xe_pm_write_callback_task(xe, NULL);
>> + return 0;
>> +
>> +out_display:
>> + xe_display_pm_resume(xe, true);
>> +out_pxp:
>> + xe_pxp_pm_resume(xe->pxp);
>> out:
>> - if (err)
>> - xe_display_pm_resume(xe, true);
>> lock_map_release(&xe_pm_runtime_lockdep_map);
>> xe_pm_write_callback_task(xe, NULL);
>> return err;
>> @@ -436,6 +457,9 @@ int xe_pm_runtime_resume(struct xe_device *xe)
>> if (err)
>> goto out;
>> }
>> +
>> + xe_pxp_pm_resume(xe->pxp);
>> +
>> out:
>> lock_map_release(&xe_pm_runtime_lockdep_map);
>> xe_pm_write_callback_task(xe, NULL);
>> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
>> index 640e62d1d5d7..78373cbbe0d4 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp.c
>> +++ b/drivers/gpu/drm/xe/xe_pxp.c
>> @@ -137,6 +137,13 @@ static void pxp_terminate(struct xe_pxp *pxp)
>> if (pxp->status == XE_PXP_ACTIVE)
>> pxp->key_instance++;
>> + /*
>> + * we'll mark the status as needing termination on resume, so no
>> need to
>> + * emit a termination now.
>> + */
>> + if (pxp->status == XE_PXP_SUSPENDED)
>> + return;
>> +
>> /*
>> * If we have a termination already in progress, we need to
>> wait for
>> * it to complete before queueing another one. We update the state
>> @@ -181,17 +188,19 @@ static void pxp_terminate(struct xe_pxp *pxp)
>> static void pxp_terminate_complete(struct xe_pxp *pxp)
>> {
>> /*
>> - * We expect PXP to be in one of 2 states when we get here:
>> + * We expect PXP to be in one of 3 states when we get here:
>> * - XE_PXP_TERMINATION_IN_PROGRESS: a single termination event
>> was
>> * requested and it is now completing, so we're ready to start.
>> * - XE_PXP_NEEDS_TERMINATION: a second termination was
>> requested while
>> * the first one was still being processed; we don't update the
>> state
>> * in this case so the pxp_start code will automatically issue
>> that
>> * second termination.
>> + * - XE_PXP_SUSPENDED: PXP is now suspended, so we defer
>> everything to
>> + * when we come back on resume.
>> */
>> if (pxp->status == XE_PXP_TERMINATION_IN_PROGRESS)
>> pxp->status = XE_PXP_READY_TO_START;
>> - else if (pxp->status != XE_PXP_NEEDS_TERMINATION)
>> + else if (pxp->status != XE_PXP_NEEDS_TERMINATION && pxp->status
>> != XE_PXP_SUSPENDED)
>> drm_err(&pxp->xe->drm,
>> "PXP termination complete while status was %u\n",
>> pxp->status);
>> @@ -505,6 +514,7 @@ int xe_pxp_exec_queue_add(struct xe_pxp *pxp,
>> struct xe_exec_queue *q)
>> pxp_terminate(pxp);
>> mutex_unlock(&pxp->mutex);
>> goto wait_for_termination;
>> + case XE_PXP_SUSPENDED:
>> default:
>> drm_err(&pxp->xe->drm, "unexpected state during PXP start:
>> %u", pxp->status);
>> ret = -EIO;
>> @@ -648,3 +658,81 @@ int xe_pxp_key_check(struct xe_pxp *pxp, struct
>> xe_bo *bo)
>> return 0;
>> }
>> +int xe_pxp_pm_suspend(struct xe_pxp *pxp)
>> +{
>> + int ret = 0;
>> +
>> + if (!xe_pxp_is_enabled(pxp))
>> + return 0;
>> +
>> + mutex_lock(&pxp->mutex);
>> +
>> + /* if the termination is already in progress, no need to re-emit
>> it */
>> + if (!completion_done(&pxp->termination))
>> + goto mark_suspended;
>> +
>> + switch (pxp->status) {
>> + case XE_PXP_ERROR:
>> + case XE_PXP_READY_TO_START:
>> + case XE_PXP_SUSPENDED:
>> + /* nothing to cleanup */
>> + break;
>> + case XE_PXP_NEEDS_TERMINATION:
>> + /* If PXP was never used we can skip the cleanup */
>> + if (pxp->key_instance == pxp->last_suspend_key_instance)
> Again, there is the possibility of this being confused by key_instance
> roll over.
I don't believe it is possible for it to actually roll over even if the
system was never rebooted in its lifetime.
Daniele
>
>> + break;
>> + fallthrough;
>> + case XE_PXP_ACTIVE:
>> + pxp_terminate(pxp);
>> + break;
>> + default:
>> + drm_err(&pxp->xe->drm, "unexpected state during PXP suspend:
>> %u",
>> + pxp->status);
>> + ret = -EIO;
>> + goto out;
>> + }
>> +
>> +mark_suspended:
>> + /*
>> + * We set this even if we were in error state, hoping the
>> suspend clears
>> + * the error. Worst case we fail again and go in error state again.
>> + */
>> + pxp->status = XE_PXP_SUSPENDED;
>> +
>> + mutex_unlock(&pxp->mutex);
>> +
>> + /*
>> + * if there is a termination in progress, wait for it.
>> + * We need to wait outside the lock because the completion is
>> done from
>> + * within the lock
>> + */
>> + if (!wait_for_completion_timeout(&pxp->termination,
>> + msecs_to_jiffies(PXP_TERMINATION_TIMEOUT_MS)))
>> + ret = -ETIMEDOUT;
>> +
>> + pxp->last_suspend_key_instance = pxp->key_instance;
>> +
>> +out:
>> + return ret;
>> +}
>> +
>> +void xe_pxp_pm_resume(struct xe_pxp *pxp)
>> +{
>> + int err;
>> +
>> + if (!xe_pxp_is_enabled(pxp))
>> + return;
>> +
>> + err = kcr_pxp_enable(pxp);
>> +
>> + mutex_lock(&pxp->mutex);
>> +
>> + xe_assert(pxp->xe, pxp->status == XE_PXP_SUSPENDED);
>> +
>> + if (err)
>> + pxp->status = XE_PXP_ERROR;
>> + else
>> + pxp->status = XE_PXP_NEEDS_TERMINATION;
>> +
>> + mutex_unlock(&pxp->mutex);
>> +}
>> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
>> index 2d22a6e6ab27..af32c2616641 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp.h
>> +++ b/drivers/gpu/drm/xe/xe_pxp.h
>> @@ -20,6 +20,9 @@ int xe_pxp_get_readiness_status(struct xe_pxp *pxp);
>> int xe_pxp_init(struct xe_device *xe);
>> void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
>> +int xe_pxp_pm_suspend(struct xe_pxp *pxp);
>> +void xe_pxp_pm_resume(struct xe_pxp *pxp);
>> +
>> int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct
>> xe_exec_queue *q, u8 type);
>> int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue
>> *q);
>> void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct
>> xe_exec_queue *q);
>> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h
>> b/drivers/gpu/drm/xe/xe_pxp_types.h
>> index 1bb747837f86..942f2fa40a58 100644
>> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
>> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
>> @@ -24,7 +24,8 @@ enum xe_pxp_status {
>> XE_PXP_NEEDS_TERMINATION = 0, /* starting status */
>> XE_PXP_TERMINATION_IN_PROGRESS,
>> XE_PXP_READY_TO_START,
>> - XE_PXP_ACTIVE
>> + XE_PXP_ACTIVE,
> You can add a trailing comma even on the last enum value to avoid such
> unnecessary deltas.
>
> John.
>
>> + XE_PXP_SUSPENDED
>> };
>> /**
>> @@ -111,6 +112,12 @@ struct xe_pxp {
>> /** @key_instance: keep track of the current iteration of the
>> PXP key */
>> u32 key_instance;
>> + /**
>> + * @last_suspend_key_instance: value of key_instance at the last
>> + * suspend. Used to check if any PXP session has been created
>> between
>> + * suspend cycles.
>> + */
>> + u32 last_suspend_key_instance;
>> };
>> #endif /* __XE_PXP_TYPES_H__ */
>
* Re: [PATCH v2 03/12] drm/xe/pxp: Add VCS inline termination support
2024-11-06 23:49 ` Daniele Ceraolo Spurio
@ 2024-11-14 18:46 ` John Harrison
0 siblings, 0 replies; 54+ messages in thread
From: John Harrison @ 2024-11-14 18:46 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
On 11/6/2024 15:49, Daniele Ceraolo Spurio wrote:
> On 10/4/24 15:25, John Harrison wrote:
>> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>>> The key termination is done with a specific submission to the VCS
>>> engine.
>>>
>>> Note that this patch is meant to be squashed with the follow-up patches
>>> that implement the other pieces of the termination flow. It is separate
>>> for now for ease of review.
>>>
>>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>>> ---
>>> .../gpu/drm/xe/instructions/xe_instr_defs.h | 1 +
>>> .../gpu/drm/xe/instructions/xe_mfx_commands.h | 29 +++++
>>> .../gpu/drm/xe/instructions/xe_mi_commands.h | 5 +
>>> drivers/gpu/drm/xe/xe_lrc.h | 3 +-
>>> drivers/gpu/drm/xe/xe_pxp_submit.c | 108
>>> ++++++++++++++++++
>>> drivers/gpu/drm/xe/xe_pxp_submit.h | 2 +
>>> drivers/gpu/drm/xe/xe_ring_ops.c | 4 +-
>>> 7 files changed, 149 insertions(+), 3 deletions(-)
>>> create mode 100644 drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
>>>
>>> diff --git a/drivers/gpu/drm/xe/instructions/xe_instr_defs.h
>>> b/drivers/gpu/drm/xe/instructions/xe_instr_defs.h
>>> index fd2ce7ace510..e559969468c4 100644
>>> --- a/drivers/gpu/drm/xe/instructions/xe_instr_defs.h
>>> +++ b/drivers/gpu/drm/xe/instructions/xe_instr_defs.h
>>> @@ -16,6 +16,7 @@
>>> #define XE_INSTR_CMD_TYPE GENMASK(31, 29)
>>> #define XE_INSTR_MI REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x0)
>>> #define XE_INSTR_GSC REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x2)
>>> +#define XE_INSTR_VIDEOPIPE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x3)
>>> #define XE_INSTR_GFXPIPE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x3)
>>> #define XE_INSTR_GFX_STATE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x4)
>>> diff --git a/drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
>>> b/drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
>>> new file mode 100644
>>> index 000000000000..686ca3b1d9e8
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/xe/instructions/xe_mfx_commands.h
>>> @@ -0,0 +1,29 @@
>>> +/* SPDX-License-Identifier: MIT */
>>> +/*
>>> + * Copyright © 2024 Intel Corporation
>>> + */
>>> +
>>> +#ifndef _XE_MFX_COMMANDS_H_
>>> +#define _XE_MFX_COMMANDS_H_
>>> +
>>> +#include "instructions/xe_instr_defs.h"
>>> +
>>> +#define MFX_CMD_SUBTYPE REG_GENMASK(28, 27) /* A.K.A cmd
>>> pipe */
>>> +#define MFX_CMD_OPCODE REG_GENMASK(26, 24)
>>> +#define MFX_CMD_SUB_OPCODE REG_GENMASK(23, 16)
>>> +#define MFX_FLAGS_AND_LEN REG_GENMASK(15, 0)
>>> +
>>> +#define XE_MFX_INSTR(subtype, op, sub_op, flags) \
>>> + (XE_INSTR_VIDEOPIPE | \
>>> + REG_FIELD_PREP(MFX_CMD_SUBTYPE, subtype) | \
>>> + REG_FIELD_PREP(MFX_CMD_OPCODE, op) | \
>>> + REG_FIELD_PREP(MFX_CMD_SUB_OPCODE, sub_op) | \
>>> + REG_FIELD_PREP(MFX_FLAGS_AND_LEN, flags))
>>> +
>>> +#define MFX_WAIT XE_MFX_INSTR(1, 0, 0, 0)
>>> +#define MFX_WAIT_DW0_PXP_SYNC_CONTROL_FLAG REG_BIT(9)
>>> +#define MFX_WAIT_DW0_MFX_SYNC_CONTROL_FLAG REG_BIT(8)
>>> +
>>> +#define CRYPTO_KEY_EXCHANGE XE_MFX_INSTR(2, 6, 9, 0)
>>> +
>>> +#endif
>>> diff --git a/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
>>> b/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
>>> index 10ec2920d31b..167fb0f742de 100644
>>> --- a/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
>>> +++ b/drivers/gpu/drm/xe/instructions/xe_mi_commands.h
>>> @@ -48,6 +48,7 @@
>>> #define MI_LRI_LEN(x) (((x) & 0xff) + 1)
>>> #define MI_FLUSH_DW __MI_INSTR(0x26)
>>> +#define MI_FLUSH_DW_PROTECTED_MEM_EN REG_BIT(22)
>>> #define MI_FLUSH_DW_STORE_INDEX REG_BIT(21)
>>> #define MI_INVALIDATE_TLB REG_BIT(18)
>>> #define MI_FLUSH_DW_CCS REG_BIT(16)
>>> @@ -66,4 +67,8 @@
>>> #define MI_BATCH_BUFFER_START __MI_INSTR(0x31)
>>> +#define MI_SET_APPID __MI_INSTR(0x0e)
>>> +#define MI_SET_APPID_SESSION_ID_MASK REG_GENMASK(6, 0)
>>> +#define MI_SET_APPID_SESSION_ID(x)
>>> REG_FIELD_PREP(MI_SET_APPID_SESSION_ID_MASK, x)
>>> +
>>> #endif
>>> diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
>>> index c24542e89318..d411c3fbcbc6 100644
>>> --- a/drivers/gpu/drm/xe/xe_lrc.h
>>> +++ b/drivers/gpu/drm/xe/xe_lrc.h
>>> @@ -20,7 +20,8 @@ struct xe_lrc;
>>> struct xe_lrc_snapshot;
>>> struct xe_vm;
>>> -#define LRC_PPHWSP_SCRATCH_ADDR (0x34 * 4)
>>> +#define LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR (0x34 * 4)
>>> +#define LRC_PPHWSP_PXP_INVAL_SCRATCH_ADDR (0x40 * 4)
>>> struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct
>>> xe_vm *vm,
>>> u32 ring_size);
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c
>>> b/drivers/gpu/drm/xe/xe_pxp_submit.c
>>> index b777b0765c8a..3b69dcc0a00f 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp_submit.c
>>> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
>>> @@ -6,14 +6,20 @@
>>> #include "xe_pxp_submit.h"
>>> #include <drm/xe_drm.h>
>>> +#include <linux/delay.h>
>>> #include "xe_device_types.h"
>>> +#include "xe_bb.h"
>>> #include "xe_bo.h"
>>> #include "xe_exec_queue.h"
>>> #include "xe_gsc_submit.h"
>>> #include "xe_gt.h"
>>> +#include "xe_lrc.h"
>>> #include "xe_pxp_types.h"
>>> +#include "xe_sched_job.h"
>>> #include "xe_vm.h"
>>> +#include "instructions/xe_mfx_commands.h"
>>> +#include "instructions/xe_mi_commands.h"
>>> #include "regs/xe_gt_regs.h"
>>> /*
>>> @@ -199,3 +205,105 @@ void xe_pxp_destroy_execution_resources(struct
>>> xe_pxp *pxp)
>>> destroy_vcs_execution_resources(pxp);
>>> }
>>> +#define emit_cmd(xe_, map_, offset_, val_) \
>>> + xe_map_wr(xe_, map_, (offset_) * sizeof(u32), u32, val_)
>>> +
>>> +/* stall until prior PXP and MFX/HCP/HUC objects are cmopleted */
>> completed
>>
>>> +#define MFX_WAIT_PXP (MFX_WAIT | \
>>> + MFX_WAIT_DW0_PXP_SYNC_CONTROL_FLAG | \
>>> + MFX_WAIT_DW0_MFX_SYNC_CONTROL_FLAG)
>> Why define an XE_MFX_INSTR macro that takes a flags word only to
>> manually OR the flags in outside of the macro?
>
> Maybe flags is a misnomer here, but I couldn't think of anything
> clearer. Some commands have bits that are always set within that flags
> field, so for those we would add them at definition time; other
> commands, like MFX_WAIT, do instead have optional flags, which we can
> set as needed in the code.
>
> Given that I haven't defined any XE_MFX_INSTR command that requires a
> value in the flags field, I'll just remove that field from the
> XE_MFX_INSTR macro for now to make things clearer.
>
>
>>
>>> +static u32 pxp_emit_wait(struct xe_device *xe, struct iosys_map
>>> *batch, u32 offset)
>>> +{
>>> + /* wait for cmds to go through */
>>> + emit_cmd(xe, batch, offset++, MFX_WAIT_PXP);
>>> + emit_cmd(xe, batch, offset++, 0);
>> This zero is just padding to ensure 64bit alignment of future
>> instructions?
>
> yes
>
>
>>
>>> +
>>> + return offset;
>>> +}
>>> +
>>> +static u32 pxp_emit_session_selection(struct xe_device *xe, struct
>>> iosys_map *batch,
>>> + u32 offset, u32 idx)
>>> +{
>>> + offset = pxp_emit_wait(xe, batch, offset);
>>> +
>>> + /* pxp off */
>>> + emit_cmd(xe, batch, offset++, MI_FLUSH_DW | MI_FLUSH_IMM_DW);
>>> + emit_cmd(xe, batch, offset++, 0);
>>> + emit_cmd(xe, batch, offset++, 0);
>>> + emit_cmd(xe, batch, offset++, 0);
>>> +
>>> + /* select session */
>>> + emit_cmd(xe, batch, offset++, MI_SET_APPID |
>>> MI_SET_APPID_SESSION_ID(idx));
>>> + emit_cmd(xe, batch, offset++, MFX_WAIT_PXP);
>> Seems odd to define a helper function to emit this instruction and
>> then only use it for some instances.
>
>
> I didn't want the extra padding here or to make the function
> conditionally add it only when needed. I'll just add it, it's not like
> we don't have enough memory (the BO is 1 page and we only use a few
> dwords for the termination).
I guess what we really want is an emit function that looks at the size
of the instruction being added and pre-pads as appropriate. Then you
don't need to worry about optional padding after an instruction
depending upon what may or may not happen next! But probably not worth
the complication. As you say, a few extra words of padding in a
non-critical buffer is no biggie.
John.
>
>
>>
>>> +
>>> + /* pxp on */
>>> + emit_cmd(xe, batch, offset++, MI_FLUSH_DW |
>>> + MI_FLUSH_DW_PROTECTED_MEM_EN |
>>> + MI_FLUSH_DW_OP_STOREDW |
>>> MI_FLUSH_DW_STORE_INDEX |
>>> + MI_FLUSH_IMM_DW);
>>> + emit_cmd(xe, batch, offset++, LRC_PPHWSP_PXP_INVAL_SCRATCH_ADDR |
>>> + MI_FLUSH_DW_USE_GTT);
>>> + emit_cmd(xe, batch, offset++, 0);
>>> + emit_cmd(xe, batch, offset++, 0);
>>> +
>>> + offset = pxp_emit_wait(xe, batch, offset);
>>> +
>>> + return offset;
>>> +}
>>> +
>>> +static u32 pxp_emit_inline_termination(struct xe_device *xe,
>>> + struct iosys_map *batch, u32 offset)
>>> +{
>>> + /* session inline termination */
>>> + emit_cmd(xe, batch, offset++, CRYPTO_KEY_EXCHANGE);
>>> + emit_cmd(xe, batch, offset++, 0);
>>> +
>>> + return offset;
>>> +}
>>> +
>>> +static u32 pxp_emit_session_termination(struct xe_device *xe,
>>> struct iosys_map *batch,
>>> + u32 offset, u32 idx)
>>> +{
>>> + offset = pxp_emit_session_selection(xe, batch, offset, idx);
>>> + offset = pxp_emit_inline_termination(xe, batch, offset);
>>> +
>>> + return offset;
>>> +}
>>> +
>>> +/**
>>> + * xe_pxp_submit_session_termination - submits a PXP inline
>>> termination
>>> + * @pxp: the xe_pxp structure
>>> + * @id: the session to terminate
>>> + *
>>> + * Emit an inline termination via the VCS engine to terminate a
>>> session.
>>> + *
>>> + * Returns 0 if the submission is successful, an errno value
>>> otherwise.
>>> + */
>>> +int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id)
>>> +{
>>> + struct xe_sched_job *job;
>>> + struct dma_fence *fence;
>>> + long timeout;
>>> + u32 offset = 0;
>>> + u64 addr = xe_bo_ggtt_addr(pxp->vcs_exec.bo);
>>> +
>>> + offset = pxp_emit_session_termination(pxp->xe,
>>> &pxp->vcs_exec.bo->vmap, offset, id);
>>> + offset = pxp_emit_wait(pxp->xe, &pxp->vcs_exec.bo->vmap, offset);
>>> + emit_cmd(pxp->xe, &pxp->vcs_exec.bo->vmap, offset,
>>> MI_BATCH_BUFFER_END);
>>> +
>>> + job = xe_sched_job_create(pxp->vcs_exec.q, &addr);
>> Double space
>>
>>> + if (IS_ERR(job))
>>> + return PTR_ERR(job);
>>> +
>>> + xe_sched_job_arm(job);
>>> + fence = dma_fence_get(&job->drm.s_fence->finished);
>>> + xe_sched_job_push(job);
>>> +
>>> + timeout = dma_fence_wait_timeout(fence, false, HZ);
>>> +
>>> + dma_fence_put(fence);
>>> + if (timeout <= 0)
>>> + return -EAGAIN;
>> Does it not matter what the error was? Why/how would this fail in a
>> way that needs to be re-tried?
>>
>> Although looking at the later patches, the return value from this
>> function is just treated as a pass/fail bool anyway. So why bother
>> munging it at all?
>
>
> Timeout can be 0 on an actual timeout, so we can't return it directly
> (0 would read as success). I'll change it to return the timeout value if
> negative and an explicit error if it actually timed out.
>
> Daniele
>
>
>>
>> John.
>>
>>> +
>>> + return 0;
>>> +}
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.h
>>> b/drivers/gpu/drm/xe/xe_pxp_submit.h
>>> index 1a971fadc081..4ee8c0acfed9 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp_submit.h
>>> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.h
>>> @@ -13,4 +13,6 @@ struct xe_pxp;
>>> int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp);
>>> void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp);
>>> +int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id);
>>> +
>>> #endif /* __XE_PXP_SUBMIT_H__ */
>>> diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c
>>> b/drivers/gpu/drm/xe/xe_ring_ops.c
>>> index 0be4f489d3e1..a4b5a0f68a32 100644
>>> --- a/drivers/gpu/drm/xe/xe_ring_ops.c
>>> +++ b/drivers/gpu/drm/xe/xe_ring_ops.c
>>> @@ -118,7 +118,7 @@ static int emit_flush_invalidate(u32 flag, u32
>>> *dw, int i)
>>> dw[i++] |= MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW |
>>> MI_FLUSH_IMM_DW |
>>> MI_FLUSH_DW_STORE_INDEX;
>>> - dw[i++] = LRC_PPHWSP_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT;
>>> + dw[i++] = LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR |
>>> MI_FLUSH_DW_USE_GTT;
>>> dw[i++] = 0;
>>> dw[i++] = ~0U;
>>> @@ -156,7 +156,7 @@ static int emit_pipe_invalidate(u32
>>> mask_flags, bool invalidate_tlb, u32 *dw,
>>> flags &= ~mask_flags;
>>> - return emit_pipe_control(dw, i, 0, flags,
>>> LRC_PPHWSP_SCRATCH_ADDR, 0);
>>> + return emit_pipe_control(dw, i, 0, flags,
>>> LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR, 0);
>>> }
>>> static int emit_store_imm_ppgtt_posted(u64 addr, u64 value,
>>
* Re: [PATCH v2 05/12] drm/xe/pxp: Handle the PXP termination interrupt
2024-11-07 0:33 ` Daniele Ceraolo Spurio
@ 2024-11-14 19:46 ` John Harrison
0 siblings, 0 replies; 54+ messages in thread
From: John Harrison @ 2024-11-14 19:46 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
On 11/6/2024 16:33, Daniele Ceraolo Spurio wrote:
> On 10/7/24 17:34, John Harrison wrote:
>> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>>> When something happens to the session, the HW generates a termination
>>> interrupt. In reply to this, the driver is required to submit an inline
>>> session termination via the VCS, trigger the global termination and
>>> notify the GSC FW that the session is now invalid.
>>>
>>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>>> ---
>>> drivers/gpu/drm/xe/regs/xe_gt_regs.h | 8 ++
>>> drivers/gpu/drm/xe/regs/xe_pxp_regs.h | 6 ++
>>> drivers/gpu/drm/xe/xe_irq.c | 20 +++-
>>> drivers/gpu/drm/xe/xe_pxp.c | 138
>>> +++++++++++++++++++++++++-
>>> drivers/gpu/drm/xe/xe_pxp.h | 3 +
>>> drivers/gpu/drm/xe/xe_pxp_types.h | 13 +++
>>> 6 files changed, 184 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
>>> b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
>>> index 0d1a4a9f4e11..9e9c20f1f1f4 100644
>>> --- a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
>>> +++ b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
>>> @@ -570,6 +570,7 @@
>>> #define ENGINE1_MASK REG_GENMASK(31, 16)
>>> #define ENGINE0_MASK REG_GENMASK(15, 0)
>>> #define GPM_WGBOXPERF_INTR_ENABLE XE_REG(0x19003c,
>>> XE_REG_OPTION_VF)
>>> +#define CRYPTO_RSVD_INTR_ENABLE XE_REG(0x190040)
>>> #define GUNIT_GSC_INTR_ENABLE XE_REG(0x190044,
>>> XE_REG_OPTION_VF)
>>> #define CCS_RSVD_INTR_ENABLE XE_REG(0x190048,
>>> XE_REG_OPTION_VF)
>>> @@ -580,6 +581,7 @@
>>> #define INTR_ENGINE_INTR(x) REG_FIELD_GET(GENMASK(15, 0), x)
>>> #define OTHER_GUC_INSTANCE 0
>>> #define OTHER_GSC_HECI2_INSTANCE 3
>>> +#define OTHER_KCR_INSTANCE 4
>>> #define OTHER_GSC_INSTANCE 6
>>> #define IIR_REG_SELECTOR(x) XE_REG(0x190070 + ((x) *
>>> 4), XE_REG_OPTION_VF)
>>> @@ -591,6 +593,7 @@
>>> #define HECI2_RSVD_INTR_MASK XE_REG(0x1900e4)
>>> #define GUC_SG_INTR_MASK XE_REG(0x1900e8,
>>> XE_REG_OPTION_VF)
>>> #define GPM_WGBOXPERF_INTR_MASK XE_REG(0x1900ec,
>>> XE_REG_OPTION_VF)
>>> +#define CRYPTO_RSVD_INTR_MASK XE_REG(0x1900f0)
>>> #define GUNIT_GSC_INTR_MASK XE_REG(0x1900f4,
>>> XE_REG_OPTION_VF)
>>> #define CCS0_CCS1_INTR_MASK XE_REG(0x190100)
>>> #define CCS2_CCS3_INTR_MASK XE_REG(0x190104)
>>> @@ -605,4 +608,9 @@
>>> #define GT_CS_MASTER_ERROR_INTERRUPT REG_BIT(3)
>>> #define GT_RENDER_USER_INTERRUPT REG_BIT(0)
>>> +/* irqs for OTHER_KCR_INSTANCE */
>>> +#define KCR_PXP_STATE_TERMINATED_INTERRUPT REG_BIT(1)
>>> +#define KCR_APP_TERMINATED_PER_FW_REQ_INTERRUPT REG_BIT(2)
>>> +#define KCR_PXP_STATE_RESET_COMPLETE_INTERRUPT REG_BIT(3)
>>> +
>>> #endif
>>> diff --git a/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
>>> b/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
>>> index d67cf210d23d..aa158938b42e 100644
>>> --- a/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
>>> +++ b/drivers/gpu/drm/xe/regs/xe_pxp_regs.h
>>> @@ -14,4 +14,10 @@
>>> #define KCR_INIT XE_REG(0x3860f0)
>>> #define KCR_INIT_ALLOW_DISPLAY_ME_WRITES REG_BIT(14)
>>> +/* KCR hwdrm session in play status 0-31 */
>>> +#define KCR_SIP XE_REG(0x386260)
>>> +
>>> +/* PXP global terminate register for session termination */
>>> +#define KCR_GLOBAL_TERMINATE XE_REG(0x3860f8)
>>> +
>>> #endif /* __XE_PXP_REGS_H__ */
>>> diff --git a/drivers/gpu/drm/xe/xe_irq.c b/drivers/gpu/drm/xe/xe_irq.c
>>> index 5f2c368c35ad..f11d9a740627 100644
>>> --- a/drivers/gpu/drm/xe/xe_irq.c
>>> +++ b/drivers/gpu/drm/xe/xe_irq.c
>>> @@ -20,6 +20,7 @@
>>> #include "xe_hw_engine.h"
>>> #include "xe_memirq.h"
>>> #include "xe_mmio.h"
>>> +#include "xe_pxp.h"
>>> #include "xe_sriov.h"
>>> /*
>>> @@ -202,6 +203,15 @@ void xe_irq_enable_hwe(struct xe_gt *gt)
>>> }
>>> if (heci_mask)
>>> xe_mmio_write32(gt, HECI2_RSVD_INTR_MASK, ~(heci_mask
>>> << 16));
>>> +
>>> + if (xe_pxp_is_supported(xe)) {
>>> + u32 kcr_mask = KCR_PXP_STATE_TERMINATED_INTERRUPT |
>>> + KCR_APP_TERMINATED_PER_FW_REQ_INTERRUPT |
>>> + KCR_PXP_STATE_RESET_COMPLETE_INTERRUPT;
>>> +
>>> + xe_mmio_write32(gt, CRYPTO_RSVD_INTR_ENABLE, kcr_mask
>>> << 16);
>>> + xe_mmio_write32(gt, CRYPTO_RSVD_INTR_MASK, ~(kcr_mask
>>> << 16));
>>> + }
>>> }
>>> }
>>> @@ -324,9 +334,15 @@ static void gt_irq_handler(struct xe_tile *tile,
>>> }
>>> if (class == XE_ENGINE_CLASS_OTHER) {
>>> - /* HECI GSCFI interrupts come from outside of GT */
>>> + /*
>>> + * HECI GSCFI interrupts come from outside of GT.
>>> + * KCR irqs come from inside GT but are handled
>>> + * by the global PXP subsystem.
>>> + */
>>> if (HAS_HECI_GSCFI(xe) && instance ==
>>> OTHER_GSC_INSTANCE)
>>> xe_heci_gsc_irq_handler(xe, intr_vec);
>>> + else if (instance == OTHER_KCR_INSTANCE)
>>> + xe_pxp_irq_handler(xe, intr_vec);
>>> else
>>> gt_other_irq_handler(engine_gt, instance,
>>> intr_vec);
>>> }
>>> @@ -512,6 +528,8 @@ static void gt_irq_reset(struct xe_tile *tile)
>>> xe_mmio_write32(mmio, GUNIT_GSC_INTR_ENABLE, 0);
>>> xe_mmio_write32(mmio, GUNIT_GSC_INTR_MASK, ~0);
>>> xe_mmio_write32(mmio, HECI2_RSVD_INTR_MASK, ~0);
>>> + xe_mmio_write32(mmio, CRYPTO_RSVD_INTR_ENABLE, 0);
>>> + xe_mmio_write32(mmio, CRYPTO_RSVD_INTR_MASK, ~0);
>>> }
>>> xe_mmio_write32(mmio, GPM_WGBOXPERF_INTR_ENABLE, 0);
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
>>> index 56bb7d927c07..382eb0cb0018 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp.c
>>> +++ b/drivers/gpu/drm/xe/xe_pxp.c
>>> @@ -12,9 +12,11 @@
>>> #include "xe_gt.h"
>>> #include "xe_gt_types.h"
>>> #include "xe_mmio.h"
>>> +#include "xe_pm.h"
>>> #include "xe_pxp_submit.h"
>>> #include "xe_pxp_types.h"
>>> #include "xe_uc_fw.h"
>>> +#include "regs/xe_gt_regs.h"
>>> #include "regs/xe_pxp_regs.h"
>>> /**
>>> @@ -25,11 +27,133 @@
>>> * integrated parts.
>>> */
>>> -static bool pxp_is_supported(const struct xe_device *xe)
>>> +#define ARB_SESSION 0xF /* TODO: move to UAPI */
>>> +
>>> +bool xe_pxp_is_supported(const struct xe_device *xe)
>>> {
>>> return xe->info.has_pxp &&
>>> IS_ENABLED(CONFIG_INTEL_MEI_GSC_PROXY);
>>> }
>>> +static bool pxp_is_enabled(const struct xe_pxp *pxp)
>>> +{
>>> + return pxp;
>>> +}
>>> +
>>> +static int pxp_wait_for_session_state(struct xe_pxp *pxp, u32 id, bool in_play)
>>> +{
>>> + struct xe_gt *gt = pxp->gt;
>>> + u32 mask = BIT(id);
>>> + int ret;
>>> +
>>> + ret = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>>> + if (ret)
>>> + return ret;
>>> +
>>> + ret = xe_mmio_wait32(gt, KCR_SIP, mask, in_play ? mask : 0,
>>> + 250, NULL, false);
>>> + xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
>>> +
>>> + return ret;
>>> +}
>>> +
>>> +static void pxp_terminate(struct xe_pxp *pxp)
>>> +{
>>> + int ret = 0;
>>> + struct xe_device *xe = pxp->xe;
>>> + struct xe_gt *gt = pxp->gt;
>>> +
>>> + drm_dbg(&xe->drm, "Terminating PXP\n");
>>> +
>>> + /* terminate the hw session */
>>> + ret = xe_pxp_submit_session_termination(pxp, ARB_SESSION);
>>> + if (ret)
>>> + goto out;
>>> +
>>> + ret = pxp_wait_for_session_state(pxp, ARB_SESSION, false);
>>> + if (ret)
>>> + goto out;
>>> +
>>> + /* Trigger full HW cleanup */
>>> + XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));
>> Why WARN here but no explicit message at all if the earlier force
>> wake fails? And is it safe to keep going if the fw did fail?
>
> The idea was that if we know the state is good enough to terminate, we
> can still attempt the termination even after a forcewake error; worst
> case it doesn't work. If we don't know the state, we can't attempt a
> termination at all.
>
That needs a comment to describe the reasoning for the different behaviour.
Also, note that xe_force_wake_get() doesn't work the same any more. See
"drm/xe/gt: Update handling of xe_force_wake_get return". It now returns
a domain reference that must be passed back in to the put call.
John.
>
>>
>> Also, given two identical, back-to-back fw get/put sets, would it not
>> be more efficient to have pxp_terminate do the get and share that
>> across the two register access? It would also remove the issue with
>> failed fw half way through causing problems due to not wanting to abort.
>
> will do.
>
>
>>
>>> + xe_mmio_write32(gt, KCR_GLOBAL_TERMINATE, 1);
>> BSpec description for KCR_GLOBAL_TERMINATE says need to check
>> KCR_SIP_GCD rather than KCR_SIP_MEDIA for bits 0-15 being 0. Whereas
>> the KCR_SIP being checked above is KCR_SIP_MEDIA only.
>
> That's just the spec not being super clear. The description is common
> between the render and the media copies of the registers, but you need
> to check the version of the registers on the actual GT you're doing
> the termination on.
>
>
>>
>>> + xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
>>> +
>>> + /* now we can tell the GSC to clean up its own state */
>>> + ret = xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
>>> +
>>> +out:
>>> + if (ret)
>>> + drm_err(&xe->drm, "PXP termination failed: %pe\n", ERR_PTR(ret));
>>> +}
>>> +
>>> +static void pxp_terminate_complete(struct xe_pxp *pxp)
>>> +{
>>> + /* TODO mark the session as ready to start */
>>> +}
>>> +
>>> +static void pxp_irq_work(struct work_struct *work)
>>> +{
>>> + struct xe_pxp *pxp = container_of(work, typeof(*pxp), irq.work);
>>> + struct xe_device *xe = pxp->xe;
>>> + u32 events = 0;
>>> +
>>> + spin_lock_irq(&xe->irq.lock);
>>> + events = pxp->irq.events;
>>> + pxp->irq.events = 0;
>>> + spin_unlock_irq(&xe->irq.lock);
>>> +
>>> + if (!events)
>>> + return;
>>> +
>>> + /*
>>> + * If we're processing a termination irq while suspending then don't
>>> + * bother, we're going to re-init everything on resume anyway.
>>> + */
>>> + if ((events & PXP_TERMINATION_REQUEST) && !xe_pm_runtime_get_if_active(xe))
>>> + return;
>> I assume it is not possible to have both REQUEST and COMPLETE set at
>> the same time? I.e. is it possible for this early exit to cause a
>> lost termination complete call?
>
> A complete is only received in response to the submission of a
> termination request, so the two should never be set at the same time.
> It doesn't really matter either way, since we submit a termination on
> resume.
>
> Daniele
>
>
>>
>> John.
>>
>>> +
>>> + if (events & PXP_TERMINATION_REQUEST) {
>>> + events &= ~PXP_TERMINATION_COMPLETE;
>>> + pxp_terminate(pxp);
>>> + }
>>> +
>>> + if (events & PXP_TERMINATION_COMPLETE)
>>> + pxp_terminate_complete(pxp);
>>> +
>>> + if (events & PXP_TERMINATION_REQUEST)
>>> + xe_pm_runtime_put(xe);
>>> +}
>>> +
>>> +/**
>>> + * xe_pxp_irq_handler - Handles PXP interrupts.
>>> + * @xe: pointer to the xe_device structure
>>> + * @iir: interrupt vector
>>> + */
>>> +void xe_pxp_irq_handler(struct xe_device *xe, u16 iir)
>>> +{
>>> + struct xe_pxp *pxp = xe->pxp;
>>> +
>>> + if (!pxp_is_enabled(pxp)) {
>>> + drm_err(&xe->drm, "PXP irq 0x%x received with PXP disabled!\n", iir);
>>> + return;
>>> + }
>>> +
>>> + lockdep_assert_held(&xe->irq.lock);
>>> +
>>> + if (unlikely(!iir))
>>> + return;
>>> +
>>> + if (iir & (KCR_PXP_STATE_TERMINATED_INTERRUPT |
>>> + KCR_APP_TERMINATED_PER_FW_REQ_INTERRUPT))
>>> + pxp->irq.events |= PXP_TERMINATION_REQUEST;
>>> +
>>> + if (iir & KCR_PXP_STATE_RESET_COMPLETE_INTERRUPT)
>>> + pxp->irq.events |= PXP_TERMINATION_COMPLETE;
>>> +
>>> + if (pxp->irq.events)
>>> + queue_work(pxp->irq.wq, &pxp->irq.work);
>>> +}
>>> +
>>> static int kcr_pxp_set_status(const struct xe_pxp *pxp, bool enable)
>>> {
>>> u32 val = enable ?
>>> _MASKED_BIT_ENABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES) :
>>> @@ -60,6 +184,7 @@ static void pxp_fini(void *arg)
>>> {
>>> struct xe_pxp *pxp = arg;
>>> + destroy_workqueue(pxp->irq.wq);
>>> xe_pxp_destroy_execution_resources(pxp);
>>> /* no need to explicitly disable KCR since we're going to do an FLR */
>>> @@ -83,7 +208,7 @@ int xe_pxp_init(struct xe_device *xe)
>>> struct xe_pxp *pxp;
>>> int err;
>>> - if (!pxp_is_supported(xe))
>>> + if (!xe_pxp_is_supported(xe))
>>> return -EOPNOTSUPP;
>>> /* we only support PXP on single tile devices with a media GT */
>>> @@ -105,12 +230,17 @@ int xe_pxp_init(struct xe_device *xe)
>>> if (!pxp)
>>> return -ENOMEM;
>>> + INIT_WORK(&pxp->irq.work, pxp_irq_work);
>>> pxp->xe = xe;
>>> pxp->gt = gt;
>>> + pxp->irq.wq = alloc_ordered_workqueue("pxp-wq", 0);
>>> + if (!pxp->irq.wq)
>>> + return -ENOMEM;
>>> +
>>> err = kcr_pxp_enable(pxp);
>>> if (err)
>>> - return err;
>>> + goto out_wq;
>>> err = xe_pxp_allocate_execution_resources(pxp);
>>> if (err)
>>> @@ -122,5 +252,7 @@ int xe_pxp_init(struct xe_device *xe)
>>> kcr_disable:
>>> kcr_pxp_disable(pxp);
>>> +out_wq:
>>> + destroy_workqueue(pxp->irq.wq);
>>> return err;
>>> }
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
>>> index 79c951667f13..81bafe2714ff 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp.h
>>> +++ b/drivers/gpu/drm/xe/xe_pxp.h
>>> @@ -10,6 +10,9 @@
>>> struct xe_device;
>>> +bool xe_pxp_is_supported(const struct xe_device *xe);
>>> +
>>> int xe_pxp_init(struct xe_device *xe);
>>> +void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
>>> #endif /* __XE_PXP_H__ */
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
>>> index 3463caaad101..d5cf8faed7be 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
>>> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
>>> @@ -8,6 +8,7 @@
>>> #include <linux/iosys-map.h>
>>> #include <linux/types.h>
>>> +#include <linux/workqueue.h>
>>> struct xe_bo;
>>> struct xe_exec_queue;
>>> @@ -69,6 +70,18 @@ struct xe_pxp {
>>> /** @gsc_exec: kernel-owned objects for PXP submissions to the GSCCS */
>>> struct xe_pxp_gsc_client_resources gsc_res;
>>> +
>>> + /** @irq: wrapper for the worker and queue used for PXP irq support */
>>> + struct {
>>> + /** @irq.work: worker that manages irq events. */
>>> + struct work_struct work;
>>> + /** @irq.wq: workqueue on which to queue the irq work. */
>>> + struct workqueue_struct *wq;
>>> + /** @irq.events: pending events, protected with xe->irq.lock. */
>>> + u32 events;
>>> +#define PXP_TERMINATION_REQUEST BIT(0)
>>> +#define PXP_TERMINATION_COMPLETE BIT(1)
>>> + } irq;
>>> };
>>> #endif /* __XE_PXP_TYPES_H__ */
>>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 06/12] drm/xe/pxp: Add GSC session initialization support
2024-11-07 22:37 ` Daniele Ceraolo Spurio
@ 2024-11-14 20:36 ` John Harrison
0 siblings, 0 replies; 54+ messages in thread
From: John Harrison @ 2024-11-14 20:36 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
On 11/7/2024 14:37, Daniele Ceraolo Spurio wrote:
> On 10/8/2024 11:43 AM, John Harrison wrote:
>> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>>> A session is initialized (i.e. started) by sending a message to the
>>> GSC.
>>>
>>> Note that this patch is meant to be squashed with the follow-up patches
>>> that implement the other pieces of the session initialization and queue
>>> setup flow. It is separate for now for ease of review.
>>>
>>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>>> ---
>>> drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h | 21 ++++++++
>>> drivers/gpu/drm/xe/xe_pxp_submit.c | 50 +++++++
>>> drivers/gpu/drm/xe/xe_pxp_submit.h | 1 +
>>> 3 files changed, 72 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
>>> index 4a59c564a0d0..734feb38f570 100644
>>> --- a/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
>>> +++ b/drivers/gpu/drm/xe/abi/gsc_pxp_commands_abi.h
>>> @@ -50,6 +50,7 @@ struct pxp_cmd_header {
>>> } __packed;
>>> #define PXP43_CMDID_INVALIDATE_STREAM_KEY 0x00000007
>>> +#define PXP43_CMDID_INIT_SESSION 0x00000036
>>> #define PXP43_CMDID_NEW_HUC_AUTH 0x0000003F /* MTL+ */
>>> /* PXP-Input-Packet: HUC Auth-only */
>>> @@ -64,6 +65,26 @@ struct pxp43_huc_auth_out {
>>> struct pxp_cmd_header header;
>>> } __packed;
>>> +/* PXP-Input-Packet: Init PXP session */
>>> +struct pxp43_create_arb_in {
>>> + struct pxp_cmd_header header;
>>> + /* header.stream_id fields for version 4.3 of Init PXP session: */
>>> + #define PXP43_INIT_SESSION_VALID BIT(0)
>>> + #define PXP43_INIT_SESSION_APPTYPE BIT(1)
>>> + #define PXP43_INIT_SESSION_APPID GENMASK(17, 2)
>>> + u32 protection_mode;
>>> + #define PXP43_INIT_SESSION_PROTECTION_ARB 0x2
>>> + u32 sub_session_id;
>>> + u32 init_flags;
>>> + u32 rsvd[12];
>>> +} __packed;
>>> +
>>> +/* PXP-Output-Packet: Init PXP session */
>>> +struct pxp43_create_arb_out {
>>> + struct pxp_cmd_header header;
>>> + u32 rsvd[8];
>>> +} __packed;
>>> +
>>> /* PXP-Input-Packet: Invalidate Stream Key */
>>> struct pxp43_inv_stream_key_in {
>>> struct pxp_cmd_header header;
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
>>> index 41684d666376..c9258c861556 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp_submit.c
>>> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
>>> @@ -26,6 +26,8 @@
>>> #include "instructions/xe_mi_commands.h"
>>> #include "regs/xe_gt_regs.h"
>>> +#define ARB_SESSION 0xF /* TODO: move to UAPI */
>> This same define is now in two separate source files? Even if it
>> can't be moved to the UAPI header yet it should at least be in an
>> internal header rather than being replicated.
>
> I thought about it, but couldn't find a clean solution. This define
> would belong in pxp.h, but that's not included from this file and I
> didn't want to add the include just for a define that is going away a
> few patches later. I could put it in pxp_types.h or pxp_submit.h, but
> if it needs to be in the wrong place anyway I thought I might as well
> just duplicate it.
> Any preference?
Yeah, I saw that it disappeared again later but it still feels wrong to
be defining the same thing in multiple places. Also, as per comment in
later patch, it would also be cleaner to use the correct name from the
start rather than having extra deltas to rename it later.
Not entirely convinced it can't be added to the correct header file in
advance of the rest of the API arriving. It's not like it's tentative or
won't be landing in some much later patch series. But failing that,
pxp.h seems like the right place. Is adding and removing a #include
really worse than adding and removing a #define?
John.
>
>>
>>> +
>>> /*
>>> * The VCS is used for kernel-owned GGTT submissions to issue key
>>> termination.
>>> * Terminations are serialized, so we only need a single queue and
>>> a single
>>> @@ -470,6 +472,54 @@ static int gsccs_send_message(struct xe_pxp_gsc_client_resources *gsc_res,
>>> return ret;
>>> }
>>> +/**
>>> + * xe_pxp_submit_session_init - submits a PXP GSC session initialization
>>> + * @gsc_res: the pxp client resources
>>> + * @id: the session to initialize
>>> + *
>>> + * Submit a message to the GSC FW to initialize (i.e. start) a PXP session.
>>> + *
>>> + * Returns 0 if the submission is successful, an errno value otherwise.
>>> + */
>>> +int xe_pxp_submit_session_init(struct xe_pxp_gsc_client_resources *gsc_res, u32 id)
>>> +{
>>> + struct xe_device *xe = gsc_res->vm->xe;
>>> + struct pxp43_create_arb_in msg_in = {0};
>>> + struct pxp43_create_arb_out msg_out = {0};
>>> + int ret;
>>> +
>>> + msg_in.header.api_version = PXP_APIVER(4, 3);
>>> + msg_in.header.command_id = PXP43_CMDID_INIT_SESSION;
>>> + msg_in.header.stream_id = (FIELD_PREP(PXP43_INIT_SESSION_APPID, id) |
>>> + FIELD_PREP(PXP43_INIT_SESSION_VALID, 1) |
>>> + FIELD_PREP(PXP43_INIT_SESSION_APPTYPE, 0));
>>> + msg_in.header.buffer_len = sizeof(msg_in) - sizeof(msg_in.header);
>>> +
>>> + if (id == ARB_SESSION)
>>> + msg_in.protection_mode = PXP43_INIT_SESSION_PROTECTION_ARB;
>>> +
>>> + ret = gsccs_send_message(gsc_res, &msg_in, sizeof(msg_in),
>>> + &msg_out, sizeof(msg_out));
>>> + if (ret) {
>>> + drm_err(&xe->drm, "Failed to init session %d, ret=[%d]\n", id, ret);
>> %pe for error code
>>
>>> + } else if (msg_out.header.status != 0) {
>>> + if (is_fw_err_platform_config(msg_out.header.status)) {
>>> + drm_info_once(&xe->drm,
>>> + "PXP init-session-%d failed due to BIOS/SOC:0x%08x:%s\n",
>> Style mis-match - "init session %d" in the first error but then
>> "init-session-%d" in this one and the one below (I prefer the first
>> one that actually looks like an operation rather than variable).
>>
>>> + id, msg_out.header.status,
>>> + fw_err_to_string(msg_out.header.status));
>>> + } else {
>>> + drm_dbg(&xe->drm, "PXP init-session-%d failed 0x%08x:%st:\n",
>>> + id, msg_out.header.status,
>>> + fw_err_to_string(msg_out.header.status));
>>> + drm_dbg(&xe->drm, " cmd-detail: ID=[0x%08x],API-Ver-[0x%08x]\n",
>> More mis-matching message styles - 'SOC:%s:%s' vs 'ID=[%x]'. Neither
>> of which is the normal format of kernel messages.
>
> Those I've copied straight from i915. Will reword.
>
> Daniele
>
>>
>> John.
>>
>>> + msg_in.header.command_id, msg_in.header.api_version);
>>> + }
>>> + }
>>> +
>>> + return ret;
>>> +}
>>> +
>>> /**
>>> * xe_pxp_submit_session_invalidation - submits a PXP GSC invalidation
>>> * @gsc_res: the pxp client resources
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.h b/drivers/gpu/drm/xe/xe_pxp_submit.h
>>> index 48fdc9b09116..c9efda02f4b0 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp_submit.h
>>> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.h
>>> @@ -14,6 +14,7 @@ struct xe_pxp_gsc_client_resources;
>>> int xe_pxp_allocate_execution_resources(struct xe_pxp *pxp);
>>> void xe_pxp_destroy_execution_resources(struct xe_pxp *pxp);
>>> +int xe_pxp_submit_session_init(struct xe_pxp_gsc_client_resources *gsc_res, u32 id);
>>> int xe_pxp_submit_session_termination(struct xe_pxp *pxp, u32 id);
>>> int xe_pxp_submit_session_invalidation(struct xe_pxp_gsc_client_resources *gsc_res,
>>> u32 id);
>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 07/12] drm/xe/pxp: Add support for PXP-using queues
2024-11-07 23:57 ` Daniele Ceraolo Spurio
@ 2024-11-14 21:20 ` John Harrison
2024-11-14 21:39 ` Daniele Ceraolo Spurio
2024-11-15 0:47 ` Daniele Ceraolo Spurio
0 siblings, 2 replies; 54+ messages in thread
From: John Harrison @ 2024-11-14 21:20 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe
On 11/7/2024 15:57, Daniele Ceraolo Spurio wrote:
> On 10/8/2024 4:55 PM, John Harrison wrote:
>> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>>> Userspace is required to mark a queue as using PXP to guarantee that
>>> the
>>> PXP instructions will work. When a PXP queue is created, the driver
>>> will
>>> do the following:
>>> - Start the default PXP session if it is not already running;
>>> - set the relevant bits in the context control register;
>>> - assign an rpm ref to the queue to keep for its lifetime (this is
>>> required because PXP HWDRM sessions are killed by the HW suspend
>>> flow).
>>>
>>> When a PXP invalidation occurs, all the PXP queue will be killed.
>> "all the PXP queue" -> should be 'queues' or should not say 'all'?
>>
>>> On submission of a valid PXP queue, the driver will validate all
>>> encrypted objects mapped to the VM to ensured they were encrypted with
>>> the current key.
>>>
>>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>>> ---
>>> drivers/gpu/drm/xe/regs/xe_engine_regs.h | 1 +
>>> drivers/gpu/drm/xe/xe_exec_queue.c | 58 ++++-
>>> drivers/gpu/drm/xe/xe_exec_queue.h | 5 +
>>> drivers/gpu/drm/xe/xe_exec_queue_types.h | 8 +
>>> drivers/gpu/drm/xe/xe_hw_engine.c | 2 +-
>>> drivers/gpu/drm/xe/xe_lrc.c | 16 +-
>>> drivers/gpu/drm/xe/xe_lrc.h | 4 +-
>>> drivers/gpu/drm/xe/xe_pxp.c | 295 ++++++++++++++++++++++-
>>> drivers/gpu/drm/xe/xe_pxp.h | 7 +
>>> drivers/gpu/drm/xe/xe_pxp_submit.c | 4 +-
>>> drivers/gpu/drm/xe/xe_pxp_types.h | 26 ++
>>> include/uapi/drm/xe_drm.h | 40 ++-
>>> 12 files changed, 450 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/xe/regs/xe_engine_regs.h b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
>>> index 81b71903675e..3692e887f503 100644
>>> --- a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
>>> +++ b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
>>> @@ -130,6 +130,7 @@
>>> #define RING_EXECLIST_STATUS_HI(base) XE_REG((base) + 0x234 + 4)
>>> #define RING_CONTEXT_CONTROL(base) XE_REG((base) + 0x244, XE_REG_OPTION_MASKED)
>>> +#define CTX_CTRL_PXP_ENABLE REG_BIT(10)
>>> #define CTX_CTRL_OAC_CONTEXT_ENABLE REG_BIT(8)
>>> #define CTX_CTRL_RUN_ALONE REG_BIT(7)
>>> #define CTX_CTRL_INDIRECT_RING_STATE_ENABLE REG_BIT(4)
>>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
>>> index e98e8794eddf..504ba4aa2357 100644
>>> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
>>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
>>> @@ -22,6 +22,8 @@
>>> #include "xe_ring_ops_types.h"
>>> #include "xe_trace.h"
>>> #include "xe_vm.h"
>>> +#include "xe_pxp.h"
>>> +#include "xe_pxp_types.h"
>>> enum xe_exec_queue_sched_prop {
>>> XE_EXEC_QUEUE_JOB_TIMEOUT = 0,
>>> @@ -35,6 +37,8 @@ static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue
>>> static void __xe_exec_queue_free(struct xe_exec_queue *q)
>>> {
>>> + if (xe_exec_queue_uses_pxp(q))
>>> + xe_pxp_exec_queue_remove(gt_to_xe(q->gt)->pxp, q);
>>> if (q->vm)
>>> xe_vm_put(q->vm);
>>> @@ -73,6 +77,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
>>> q->ops = gt->exec_queue_ops;
>>> INIT_LIST_HEAD(&q->lr.link);
>>> INIT_LIST_HEAD(&q->multi_gt_link);
>>> + INIT_LIST_HEAD(&q->pxp.link);
>>> q->sched_props.timeslice_us = hwe->eclass->sched_props.timeslice_us;
>>> q->sched_props.preempt_timeout_us =
>>> @@ -107,6 +112,21 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
>>> {
>>> struct xe_vm *vm = q->vm;
>>> int i, err;
>>> + u32 flags = 0;
>>> +
>>> + /*
>>> + * PXP workloads executing on RCS or CCS must run in isolation (i.e. no
>>> + * other workload can use the EUs at the same time). On MTL this is done
>>> + * by setting the RUNALONE bit in the LRC, while starting on Xe2 there
>>> + * is a dedicated bit for it.
>>> + */
>>> + if (xe_exec_queue_uses_pxp(q) &&
>>> + (q->class == XE_ENGINE_CLASS_RENDER || q->class == XE_ENGINE_CLASS_COMPUTE)) {
>>> + if (GRAPHICS_VER(gt_to_xe(q->gt)) >= 20)
>>> + flags |= XE_LRC_CREATE_PXP;
>>> + else
>>> + flags |= XE_LRC_CREATE_RUNALONE;
>>> + }
>>> if (vm) {
>>> err = xe_vm_lock(vm, true);
>>> @@ -115,7 +135,7 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
>>> }
>>> for (i = 0; i < q->width; ++i) {
>>> - q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K);
>>> + q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K, flags);
>>> if (IS_ERR(q->lrc[i])) {
>>> err = PTR_ERR(q->lrc[i]);
>>> goto err_unlock;
>>> @@ -160,6 +180,17 @@ struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *v
>>> if (err)
>>> goto err_post_alloc;
>>> + /*
>>> + * we can only add the queue to the PXP list after the init is complete,
>>> + * because the PXP termination can call exec_queue_kill and that will
>>> + * go bad if the queue is only half-initialized.
>>> + */
>> Not following how this comment relates to this code block. The
>> comment implies there should be a wait of some kind.
>
> We set the PXP type for the queue as part of the extension handling in
> __xe_exec_queue_alloc. This comment was supposed to indicate that we
> can't add the queue to the list back there because the init is not
> complete yet, so we do it here instead. I'll add the explanation about
> the extension handling to the comment.
>
>>
>>> + if (xe_exec_queue_uses_pxp(q)) {
>>> + err = xe_pxp_exec_queue_add(xe->pxp, q);
>>> + if (err)
>>> + goto err_post_alloc;
>>> + }
>>> +
>>> return q;
>>> err_post_alloc:
>>> @@ -197,6 +228,9 @@ void xe_exec_queue_destroy(struct kref *ref)
>>> struct xe_exec_queue *q = container_of(ref, struct xe_exec_queue, refcount);
>>> struct xe_exec_queue *eq, *next;
>>> + if (xe_exec_queue_uses_pxp(q))
>>> + xe_pxp_exec_queue_remove(gt_to_xe(q->gt)->pxp, q);
>>> +
>>> xe_exec_queue_last_fence_put_unlocked(q);
>>> if (!(q->flags & EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD)) {
>>> list_for_each_entry_safe(eq, next, &q->multi_gt_list,
>>> @@ -343,6 +377,24 @@ static int exec_queue_set_timeslice(struct xe_device *xe, struct xe_exec_queue *
>>> return 0;
>>> }
>>> +static int
>>> +exec_queue_set_pxp_type(struct xe_device *xe, struct xe_exec_queue *q, u64 value)
>>> +{
>>> + BUILD_BUG_ON(DRM_XE_PXP_TYPE_NONE != 0);
>> Why a build bug for something that is a simple 'enum { X=0 }'? It's
>> not like there is some complex macro calculation that could be broken
>> by some seemingly unrelated change.
>
> This was more to make sure that the default value for the extension
> was 0. Given that this is UAPI and therefore can't change anyway, I'll
> drop the BUG_ON.
>
>>
>>> +
>>> + if (value == DRM_XE_PXP_TYPE_NONE)
>>> + return 0;
>> This doesn't need to shut any existing PXP down? Is it not possible
>> to dynamically change the type?
>
> No, this can only be set at queue creation time
Would be good to add a comment about that? Maybe even an assert or
something to ensure this is not called post creation?
>
>>
>>> +
>>> + if (!xe_pxp_is_enabled(xe->pxp))
>>> + return -ENODEV;
>>> +
>>> + /* we only support HWDRM sessions right now */
>>> + if (XE_IOCTL_DBG(xe, value != DRM_XE_PXP_TYPE_HWDRM))
>>> + return -EINVAL;
>>> +
>>> + return xe_pxp_exec_queue_set_type(xe->pxp, q, DRM_XE_PXP_TYPE_HWDRM);
>>> +}
>>> +
>>> typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
>>> struct xe_exec_queue *q,
>>> u64 value);
>>> @@ -350,6 +402,7 @@ typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
>>> static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
>>> [DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY] = exec_queue_set_priority,
>>> [DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE] = exec_queue_set_timeslice,
>>> + [DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE] = exec_queue_set_pxp_type,
>>> };
>>> static int exec_queue_user_ext_set_property(struct xe_device *xe,
>>> @@ -369,7 +422,8 @@ static int exec_queue_user_ext_set_property(struct xe_device *xe,
>>> ARRAY_SIZE(exec_queue_set_property_funcs)) ||
>>> XE_IOCTL_DBG(xe, ext.pad) ||
>>> XE_IOCTL_DBG(xe, ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY &&
>>> - ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE))
>>> + ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE &&
>>> + ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE))
>>> return -EINVAL;
>>> idx = array_index_nospec(ext.property, ARRAY_SIZE(exec_queue_set_property_funcs));
>>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
>>> index ded77b0f3b90..7fa97719667a 100644
>>> --- a/drivers/gpu/drm/xe/xe_exec_queue.h
>>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
>>> @@ -53,6 +53,11 @@ static inline bool xe_exec_queue_is_parallel(struct xe_exec_queue *q)
>>> return q->width > 1;
>>> }
>>> +static inline bool xe_exec_queue_uses_pxp(struct xe_exec_queue *q)
>>> +{
>>> + return q->pxp.type;
>>> +}
>>> +
>>> bool xe_exec_queue_is_lr(struct xe_exec_queue *q);
>>> bool xe_exec_queue_ring_full(struct xe_exec_queue *q);
>>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
>>> index 1408b02eea53..28b56217f1df 100644
>>> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
>>> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
>>> @@ -130,6 +130,14 @@ struct xe_exec_queue {
>>> spinlock_t lock;
>>> } lr;
>>> + /** @pxp: PXP info tracking */
>>> + struct {
>>> + /** @pxp.type: PXP session type used by this queue */
>>> + u8 type;
>>> + /** @pxp.link: link into the list of PXP exec queues */
>>> + struct list_head link;
>>> + } pxp;
>>> +
>>> /** @ops: submission backend exec queue operations */
>>> const struct xe_exec_queue_ops *ops;
>>> diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c
>>> b/drivers/gpu/drm/xe/xe_hw_engine.c
>>> index e195022ca836..469932e7d7a6 100644
>>> --- a/drivers/gpu/drm/xe/xe_hw_engine.c
>>> +++ b/drivers/gpu/drm/xe/xe_hw_engine.c
>>> @@ -557,7 +557,7 @@ static int hw_engine_init(struct xe_gt *gt, struct xe_hw_engine *hwe,
>>> goto err_name;
>>> }
>>> - hwe->kernel_lrc = xe_lrc_create(hwe, NULL, SZ_16K);
>>> + hwe->kernel_lrc = xe_lrc_create(hwe, NULL, SZ_16K, 0);
>>> if (IS_ERR(hwe->kernel_lrc)) {
>>> err = PTR_ERR(hwe->kernel_lrc);
>>> goto err_hwsp;
>>> diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
>>> index 974a9cd8c379..4f3e676db646 100644
>>> --- a/drivers/gpu/drm/xe/xe_lrc.c
>>> +++ b/drivers/gpu/drm/xe/xe_lrc.c
>>> @@ -893,7 +893,7 @@ static void xe_lrc_finish(struct xe_lrc *lrc)
>>> #define PVC_CTX_ACC_CTR_THOLD (0x2a + 1)
>>> static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
>>> - struct xe_vm *vm, u32 ring_size)
>>> + struct xe_vm *vm, u32 ring_size, u32 init_flags)
>>> {
>>> struct xe_gt *gt = hwe->gt;
>>> struct xe_tile *tile = gt_to_tile(gt);
>>> @@ -981,6 +981,16 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
>>> RING_CTL_SIZE(lrc->ring.size) | RING_VALID);
>>> }
>>> + if (init_flags & XE_LRC_CREATE_RUNALONE)
>>> + xe_lrc_write_ctx_reg(lrc, CTX_CONTEXT_CONTROL,
>>> + xe_lrc_read_ctx_reg(lrc, CTX_CONTEXT_CONTROL) |
>>> + _MASKED_BIT_ENABLE(CTX_CTRL_RUN_ALONE));
>>> +
>>> + if (init_flags & XE_LRC_CREATE_PXP)
>>> + xe_lrc_write_ctx_reg(lrc, CTX_CONTEXT_CONTROL,
>>> + xe_lrc_read_ctx_reg(lrc, CTX_CONTEXT_CONTROL) |
>>> + _MASKED_BIT_ENABLE(CTX_CTRL_PXP_ENABLE));
>>> +
>>> xe_lrc_write_ctx_reg(lrc, CTX_TIMESTAMP, 0);
>>> if (xe->info.has_asid && vm)
>>> @@ -1029,7 +1039,7 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
>>> * upon failure.
>>> */
>>> struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
>>> - u32 ring_size)
>>> + u32 ring_size, u32 flags)
>>> {
>>> struct xe_lrc *lrc;
>>> int err;
>>> @@ -1038,7 +1048,7 @@ struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
>>> if (!lrc)
>>> return ERR_PTR(-ENOMEM);
>>> - err = xe_lrc_init(lrc, hwe, vm, ring_size);
>>> + err = xe_lrc_init(lrc, hwe, vm, ring_size, flags);
>>> if (err) {
>>> kfree(lrc);
>>> return ERR_PTR(err);
>>> diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
>>> index d411c3fbcbc6..cc8091bba2a0 100644
>>> --- a/drivers/gpu/drm/xe/xe_lrc.h
>>> +++ b/drivers/gpu/drm/xe/xe_lrc.h
>>> @@ -23,8 +23,10 @@ struct xe_vm;
>>> #define LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR (0x34 * 4)
>>> #define LRC_PPHWSP_PXP_INVAL_SCRATCH_ADDR (0x40 * 4)
>>> +#define XE_LRC_CREATE_RUNALONE 0x1
>>> +#define XE_LRC_CREATE_PXP 0x2
>>> struct xe_lrc *xe_lrc_create(struct xe_hw_engine *hwe, struct xe_vm *vm,
>>> - u32 ring_size);
>>> + u32 ring_size, u32 flags);
>>> void xe_lrc_destroy(struct kref *ref);
>>> /**
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
>>> index 382eb0cb0018..acdc25c8e8a1 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp.c
>>> +++ b/drivers/gpu/drm/xe/xe_pxp.c
>>> @@ -6,11 +6,17 @@
>>> #include "xe_pxp.h"
>>> #include <drm/drm_managed.h>
>>> +#include <drm/xe_drm.h>
>>> #include "xe_device_types.h"
>>> +#include "xe_exec_queue.h"
>>> +#include "xe_exec_queue_types.h"
>>> #include "xe_force_wake.h"
>>> +#include "xe_guc_submit.h"
>>> +#include "xe_gsc_proxy.h"
>>> #include "xe_gt.h"
>>> #include "xe_gt_types.h"
>>> +#include "xe_huc.h"
>>> #include "xe_mmio.h"
>>> #include "xe_pm.h"
>>> #include "xe_pxp_submit.h"
>>> @@ -27,18 +33,45 @@
>>> * integrated parts.
>>> */
>>> -#define ARB_SESSION 0xF /* TODO: move to UAPI */
>>> +#define ARB_SESSION DRM_XE_PXP_HWDRM_DEFAULT_SESSION /* shorter define */
>> Is this really worthwhile?
>
> The define is used enough times in this file that IMO it's worth having
> a shorter version for readability.
>
>>
>>> bool xe_pxp_is_supported(const struct xe_device *xe)
>>> {
>>> return xe->info.has_pxp &&
>>> IS_ENABLED(CONFIG_INTEL_MEI_GSC_PROXY);
>>> }
>>> -static bool pxp_is_enabled(const struct xe_pxp *pxp)
>>> +bool xe_pxp_is_enabled(const struct xe_pxp *pxp)
>>> {
>>> return pxp;
>>> }
>>> +static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
>>> +{
>>> + bool ready;
>>> +
>>> + XE_WARN_ON(xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GSC));
>> Again, why warn on this fw and then proceed anyway when others
>> silently return an error code to the layer above?
>
> In this case it's because I wanted this function to return a bool, so I
> can't escalate the error. In the unlikely case that this fails, the
> caller will consider PXP not ready, which will be escalated out. If
> forcewake doesn't work the GT is non-functional anyway, so reporting
> PXP as not ready is not going to make things worse.
As per other comment, note that the fw_get return type has changed for
both here and below.
>
>>
>>> +
>>> + /* PXP requires both HuC authentication via GSC and GSC proxy initialized */
>>> + ready = xe_huc_is_authenticated(&pxp->gt->uc.huc, XE_HUC_AUTH_VIA_GSC) &&
>>> + xe_gsc_proxy_init_done(&pxp->gt->uc.gsc);
>>> +
>>> + xe_force_wake_put(gt_to_fw(pxp->gt), XE_FW_GSC);
>>> +
>>> + return ready;
>>> +}
>>> +
>>> +static bool pxp_session_is_in_play(struct xe_pxp *pxp, u32 id)
>>> +{
>>> + struct xe_gt *gt = pxp->gt;
>>> + u32 sip = 0;
>>> +
>>> + XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));
>> Same as above.
>
> Same reasoning, just that we'll report that PXP failed to start if
> this fails, which again is not going to make things worse if the GT
> is broken.
>
>>
>>> + sip = xe_mmio_read32(gt, KCR_SIP);
>>> + xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
>>> +
>>> + return sip & BIT(id);
>>> +}
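For readers following along, the in-play check above boils down to testing one bit per session in the KCR "sessions in play" register, with the ARB session at ID 0xF. A minimal userspace model of that check (the BIT macro and register value are stand-ins, not driver code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Model of pxp_session_is_in_play(): each active HWDRM session sets its
 * own bit in the KCR "sessions in play" (SIP) register. */
static bool session_is_in_play(uint32_t sip, uint32_t id)
{
	return sip & BIT(id);
}
```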
>>> +
>>> static int pxp_wait_for_session_state(struct xe_pxp *pxp, u32 id, bool in_play)
>>> {
>>> struct xe_gt *gt = pxp->gt;
>>> @@ -56,12 +89,30 @@ static int pxp_wait_for_session_state(struct xe_pxp *pxp, u32 id, bool in_play)
>>> return ret;
>>> }
>>> +static void pxp_invalidate_queues(struct xe_pxp *pxp);
>>> +
>>> static void pxp_terminate(struct xe_pxp *pxp)
>>> {
>>> int ret = 0;
>>> struct xe_device *xe = pxp->xe;
>>> struct xe_gt *gt = pxp->gt;
>> Should add a "lockdep_assert_held(pxp->mutex)" here?
>
> I'll add it in
>
>>
>>> + pxp_invalidate_queues(pxp);
>>> +
>>> + /*
>>> + * If we have a termination already in progress, we need to wait for
>>> + * it to complete before queueing another one. We update the state
>>> + * to signal that another termination is required and leave it to the
>>> + * pxp_start() call to take care of it.
>>> + */
>>> + if (!completion_done(&pxp->termination)) {
>>> + pxp->status = XE_PXP_NEEDS_TERMINATION;
>>> + return;
>>> + }
>>> +
>>> + reinit_completion(&pxp->termination);
>>> + pxp->status = XE_PXP_TERMINATION_IN_PROGRESS;
>>> +
>>> drm_dbg(&xe->drm, "Terminating PXP\n");
>>> /* terminate the hw session */
>>> @@ -82,13 +133,32 @@ static void pxp_terminate(struct xe_pxp *pxp)
>>> ret = xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
>>> out:
>>> - if (ret)
>>> + if (ret) {
>>> + drm_err(&xe->drm, "PXP termination failed: %pe\n", ERR_PTR(ret));
>>> + pxp->status = XE_PXP_ERROR;
>>> + complete_all(&pxp->termination);
>>> + }
>>> }
>>> static void pxp_terminate_complete(struct xe_pxp *pxp)
>>> {
>>> - /* TODO mark the session as ready to start */
>>> + /*
>>> + * We expect PXP to be in one of 2 states when we get here:
>>> + * - XE_PXP_TERMINATION_IN_PROGRESS: a single termination event was
>>> + *   requested and it is now completing, so we're ready to start.
>>> + * - XE_PXP_NEEDS_TERMINATION: a second termination was requested while
>>> + *   the first one was still being processed; we don't update the state
>>> + *   in this case so the pxp_start code will automatically issue that
>>> + *   second termination.
>>> + */
>>> + if (pxp->status == XE_PXP_TERMINATION_IN_PROGRESS)
>>> + pxp->status = XE_PXP_READY_TO_START;
>>> + else if (pxp->status != XE_PXP_NEEDS_TERMINATION)
>>> + drm_err(&pxp->xe->drm,
>>> + "PXP termination complete while status was %u\n",
>>> + pxp->status);
>>> +
>>> + complete_all(&pxp->termination);
>>> }
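The two-state handoff between pxp_terminate() and pxp_terminate_complete() can be sketched in a few lines of plain C. The enum values mirror xe_pxp_status; the completion is modelled as a plain flag and everything else is an illustrative stand-in, not driver code:

```c
#include <assert.h>

/* Mirrors the relevant xe_pxp_status values */
enum pxp_status {
	PXP_ERROR = -1,
	PXP_NEEDS_TERMINATION = 0,
	PXP_TERMINATION_IN_PROGRESS,
	PXP_READY_TO_START,
	PXP_ACTIVE,
};

struct pxp_model {
	enum pxp_status status;
	int termination_pending; /* models !completion_done(&pxp->termination) */
};

/* pxp_terminate(): queue a termination, or just mark that another one is
 * needed if one is already in flight. */
static void model_terminate(struct pxp_model *p)
{
	if (p->termination_pending) {
		p->status = PXP_NEEDS_TERMINATION;
		return;
	}
	p->termination_pending = 1;
	p->status = PXP_TERMINATION_IN_PROGRESS;
}

/* pxp_terminate_complete(): the in-flight termination finished.
 * PXP_NEEDS_TERMINATION is deliberately left as-is so the start path
 * re-issues the second termination. */
static void model_terminate_complete(struct pxp_model *p)
{
	if (p->status == PXP_TERMINATION_IN_PROGRESS)
		p->status = PXP_READY_TO_START;
	p->termination_pending = 0;
}

/* one termination, then completion: ready to start */
static enum pxp_status model_single_terminate(void)
{
	struct pxp_model p = { PXP_ACTIVE, 0 };

	model_terminate(&p);
	model_terminate_complete(&p);
	return p.status;
}

/* second termination requested while the first is in flight */
static enum pxp_status model_double_terminate(void)
{
	struct pxp_model p = { PXP_READY_TO_START, 0 };

	model_terminate(&p);
	model_terminate(&p);
	model_terminate_complete(&p);
	return p.status;
}
```

The second scenario shows why the complete path must not blindly move to READY: the NEEDS_TERMINATION marker has to survive so pxp_start can issue the follow-up termination.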
>>> static void pxp_irq_work(struct work_struct *work)
>>> @@ -112,6 +182,8 @@ static void pxp_irq_work(struct work_struct *work)
>>> if ((events & PXP_TERMINATION_REQUEST) &&
>>> !xe_pm_runtime_get_if_active(xe))
>>> return;
>>> + mutex_lock(&pxp->mutex);
>>> +
>>> if (events & PXP_TERMINATION_REQUEST) {
>>> events &= ~PXP_TERMINATION_COMPLETE;
>>> pxp_terminate(pxp);
>>> @@ -120,6 +192,8 @@ static void pxp_irq_work(struct work_struct *work)
>>> if (events & PXP_TERMINATION_COMPLETE)
>>> pxp_terminate_complete(pxp);
>>> + mutex_unlock(&pxp->mutex);
>>> +
>>> if (events & PXP_TERMINATION_REQUEST)
>>> xe_pm_runtime_put(xe);
>>> }
>>> @@ -133,7 +207,7 @@ void xe_pxp_irq_handler(struct xe_device *xe, u16 iir)
>>> {
>>> struct xe_pxp *pxp = xe->pxp;
>>> - if (!pxp_is_enabled(pxp)) {
>>> + if (!xe_pxp_is_enabled(pxp)) {
>>> drm_err(&xe->drm, "PXP irq 0x%x received with PXP disabled!\n", iir);
>>> return;
>>> }
>>> @@ -230,10 +304,22 @@ int xe_pxp_init(struct xe_device *xe)
>>> if (!pxp)
>>> return -ENOMEM;
>>> + INIT_LIST_HEAD(&pxp->queues.list);
>>> + spin_lock_init(&pxp->queues.lock);
>>> INIT_WORK(&pxp->irq.work, pxp_irq_work);
>>> pxp->xe = xe;
>>> pxp->gt = gt;
>>> + /*
>>> + * we'll use the completion to check if there is a termination pending,
>>> + * so we start it as completed and we reinit it when a termination
>>> + * is triggered.
>>> + */
>>> + init_completion(&pxp->termination);
>>> + complete_all(&pxp->termination);
>>> +
>>> + mutex_init(&pxp->mutex);
>>> +
>>> pxp->irq.wq = alloc_ordered_workqueue("pxp-wq", 0);
>>> if (!pxp->irq.wq)
>>> return -ENOMEM;
>>> @@ -256,3 +342,202 @@ int xe_pxp_init(struct xe_device *xe)
>>> destroy_workqueue(pxp->irq.wq);
>>> return err;
>>> }
>>> +
>>> +static int __pxp_start_arb_session(struct xe_pxp *pxp)
>>> +{
>>> + int ret;
>>> +
>>> + if (pxp_session_is_in_play(pxp, ARB_SESSION))
>>> + return -EEXIST;
>>> +
>>> + ret = xe_pxp_submit_session_init(&pxp->gsc_res, ARB_SESSION);
>>> + if (ret) {
>>> + drm_err(&pxp->xe->drm, "Failed to init PXP arb session\n");
>>> + goto out;
>>> + }
>>> +
>>> + ret = pxp_wait_for_session_state(pxp, ARB_SESSION, true);
>>> + if (ret) {
>>> + drm_err(&pxp->xe->drm, "PXP ARB session failed to go in play\n");
>>> + goto out;
>>> + }
>>> +
>>> + drm_dbg(&pxp->xe->drm, "PXP ARB session is active\n");
>>> +
>>> +out:
>>> + if (!ret)
>>> + pxp->status = XE_PXP_ACTIVE;
>>> + else
>>> + pxp->status = XE_PXP_ERROR;
>>> +
>>> + return ret;
>>> +}
>>> +
>>> +/**
>>> + * xe_pxp_exec_queue_set_type - Mark a queue as using PXP
>>> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
>>> + * @q: the queue to mark as using PXP
>>> + * @type: the type of PXP session this queue will use
>>> + *
>>> + * Returns 0 if the selected PXP type is supported, -ENODEV otherwise.
>>> + */
>>> +int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 type)
>>> +{
>>> + if (!xe_pxp_is_enabled(pxp))
>>> + return -ENODEV;
>>> +
>>> + /* we only support HWDRM sessions right now */
>>> + xe_assert(pxp->xe, type == DRM_XE_PXP_TYPE_HWDRM);
>>> +
>>> + q->pxp.type = type;
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +/**
>>> + * xe_pxp_exec_queue_add - add a queue to the PXP list
>>> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
>>> + * @q: the queue to add to the list
>>> + *
>>> + * If PXP is enabled and the prerequisites are done, start the PXP ARB
>>> + * session (if not already running) and add the queue to the PXP list.
>>> + * Note that the queue must have previously been marked as using PXP with
>>> + * xe_pxp_exec_queue_set_type.
>>> + *
>>> + * Returns 0 if the PXP ARB session is running and the queue is in the list,
>>> + * -ENODEV if PXP is disabled, -EBUSY if the PXP prerequisites are not done,
>>> + * other errno value if something goes wrong during the session start.
>>> + */
>>> +#define PXP_TERMINATION_TIMEOUT_MS 500
>>> +int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q)
>>> +{
>>> + int ret = 0;
>>> +
>>> + if (!xe_pxp_is_enabled(pxp))
>>> + return -ENODEV;
>>> +
>>> + /* we only support HWDRM sessions right now */
>>> + xe_assert(pxp->xe, q->pxp.type == DRM_XE_PXP_TYPE_HWDRM);
>>> +
>>> + /*
>>> + * Runtime suspend kills PXP, so we need to turn it off while we have
>>> + * active queues that use PXP
>>> + */
>>> + xe_pm_runtime_get(pxp->xe);
>>> +
>>> + if (!pxp_prerequisites_done(pxp)) {
>>> + ret = -EBUSY;
>> Wouldn't EAGAIN be more appropriate? The pre-reqs here are the GSC
>> firmware load which is guaranteed to in progress or done (or dead?),
>> in which case it is just a matter or re-trying until the firmware
>> init completes?
>
> Userspace tends to retry immediately when we return -EAGAIN. This
> wait can take several hundred ms and I didn't want userspace to
> just keep retrying in a tight loop for that long, so I used a
> different error code. This was also discussed on the mesa review here:
>
> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30723#note_2622269
>
>
>>
>>> + goto out;
>>> + }
>>> +
>>> +wait_for_termination:
>>> + /*
>>> + * if there is a termination in progress, wait for it.
>>> + * We need to wait outside the lock because the completion is done from
>>> + * within the lock
>>> + */
>>> + if (!wait_for_completion_timeout(&pxp->termination,
>>> + msecs_to_jiffies(PXP_TERMINATION_TIMEOUT_MS)))
>>> + return -ETIMEDOUT;
>>> +
>>> + mutex_lock(&pxp->mutex);
>>> +
>>> + /*
>>> + * check if a new termination was issued between the above check and
>>> + * grabbing the mutex
>>> + */
>>> + if (!completion_done(&pxp->termination)) {
>>> + mutex_unlock(&pxp->mutex);
>>> + goto wait_for_termination;
>>> + }
>>> +
>>> + /* If PXP is not already active, turn it on */
>>> + switch (pxp->status) {
>>> + case XE_PXP_ERROR:
>>> + ret = -EIO;
>>> + break;
>>> + case XE_PXP_ACTIVE:
>>> + break;
>>> + case XE_PXP_READY_TO_START:
>>> + ret = __pxp_start_arb_session(pxp);
>>> + break;
>>> + case XE_PXP_NEEDS_TERMINATION:
>>> + pxp_terminate(pxp);
>>> + mutex_unlock(&pxp->mutex);
>>> + goto wait_for_termination;
>>> + default:
>>> + drm_err(&pxp->xe->drm, "unexpected state during PXP start: %u", pxp->status);
>>> + ret = -EIO;
>>> + break;
>>> + }
>>> +
>>> + /* If everything went ok, add the queue to the list */
>>> + if (!ret) {
>>> + spin_lock_irq(&pxp->queues.lock);
>>> + list_add_tail(&q->pxp.link, &pxp->queues.list);
>>> + spin_unlock_irq(&pxp->queues.lock);
>>> + }
>>> +
>>> + mutex_unlock(&pxp->mutex);
>>> +
>>> +out:
>>> + /*
>>> + * in the successful case the PM ref is released from
>>> + * xe_pxp_exec_queue_remove
>>> + */
>>> + if (ret)
>>> + xe_pm_runtime_put(pxp->xe);
>> Does the runtime PM get/put need to be mutex protected as well? Is it
>> possible for two xe_pxp_exec_queue_add() calls to be running
>> concurrently?
>
> It is possible to have two xe_pxp_exec_queue_add running concurrently,
> but that shouldn't matter with the pm_put. Am I not seeing a race?
>
>>
>>> +
>>> + return ret;
>>> +}
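The wait/lock/recheck loop in xe_pxp_exec_queue_add can be hard to follow from the diff alone. A single-threaded sketch of the control flow, with the completion wait modelled as always succeeding (names and the enum are illustrative stand-ins for the driver's types):

```c
#include <assert.h>
#include <errno.h>

enum status {
	ST_ERROR = -1,
	ST_NEEDS_TERMINATION,
	ST_IN_PROGRESS,
	ST_READY,
	ST_ACTIVE,
};

/* Sketch of the start logic: a pending termination is waited out, then the
 * status switch decides the outcome; NEEDS_TERMINATION loops back to wait
 * for the termination it just issued. */
static int model_queue_add(enum status *st, int termination_pending)
{
	for (;;) {
		if (termination_pending) {
			/* wait_for_completion_timeout(); assume it completes */
			termination_pending = 0;
			if (*st == ST_IN_PROGRESS)
				*st = ST_READY; /* pxp_terminate_complete() ran */
		}

		switch (*st) {
		case ST_ERROR:
			return -EIO;
		case ST_ACTIVE:
			return 0;
		case ST_READY:
			*st = ST_ACTIVE; /* __pxp_start_arb_session() succeeds */
			return 0;
		case ST_NEEDS_TERMINATION:
			*st = ST_IN_PROGRESS; /* pxp_terminate() */
			termination_pending = 1;
			break; /* goto wait_for_termination */
		default:
			return -EIO;
		}
	}
}

static int demo_from_needs_termination(void)
{
	enum status st = ST_NEEDS_TERMINATION;

	return model_queue_add(&st, 0) == 0 && st == ST_ACTIVE;
}

static int demo_from_error(void)
{
	enum status st = ST_ERROR;

	return model_queue_add(&st, 0) == -EIO;
}
```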
>>> +
>>> +/**
>>> + * xe_pxp_exec_queue_remove - remove a queue from the PXP list
>>> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
>>> + * @q: the queue to remove from the list
>>> + *
>>> + * If PXP is enabled and the exec_queue is in the list, the queue will be
>>> + * removed from the list and its PM reference will be released. It is safe to
>>> + * call this function multiple times for the same queue.
>>> + */
>>> +void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q)
>>> +{
>>> + bool need_pm_put = false;
>>> +
>>> + if (!xe_pxp_is_enabled(pxp))
>>> + return;
>>> +
>>> + spin_lock_irq(&pxp->queues.lock);
>>> +
>>> + if (!list_empty(&q->pxp.link)) {
>>> + list_del_init(&q->pxp.link);
>>> + need_pm_put = true;
>>> + }
>>> +
>>> + q->pxp.type = DRM_XE_PXP_TYPE_NONE;
>>> +
>>> + spin_unlock_irq(&pxp->queues.lock);
>>> +
>>> + if (need_pm_put)
>>> + xe_pm_runtime_put(pxp->xe);
>>> +}
>>> +
>>> +static void pxp_invalidate_queues(struct xe_pxp *pxp)
>>> +{
>>> + struct xe_exec_queue *tmp, *q;
>>> +
>>> + spin_lock_irq(&pxp->queues.lock);
>>> +
>>> + list_for_each_entry(tmp, &pxp->queues.list, pxp.link) {
>> Double space.
>>
>>> + q = xe_exec_queue_get_unless_zero(tmp);
>>> +
>>> + if (!q)
>>> + continue;
>>> +
>>> + xe_exec_queue_kill(q);
>>> + xe_exec_queue_put(q);
>>> + }
>> This doesn't need to empty the list out as well?
>
> It's not strictly necessary, because it is ok to kill a queue multiple
> times. Given the PM handling required as part of removing a queue from
> the list and the fact that it needs to happen outside the lock (see
> xe_pxp_exec_queue_remove), my thought was that it'd be easier to just
> not do it here and leave it to when the queue is cleaned up.
Maybe add a comment about that?
>
>>
>>> +
>>> + spin_unlock_irq(&pxp->queues.lock);
>>> +}
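The xe_exec_queue_get_unless_zero() dance above exists so the iterator never operates on a queue whose refcount already hit zero (i.e. one that is mid-teardown). A toy model of the pattern, with the refcount as a plain int instead of a kref (everything here is illustrative, not driver code):

```c
#include <assert.h>

/* A queue whose refcount already dropped to zero is being destroyed and
 * must be skipped rather than revived. */
struct queue {
	int refcount;
	int killed;
};

static int get_unless_zero(struct queue *q)
{
	if (q->refcount == 0)
		return 0;
	q->refcount++;
	return 1;
}

static void put(struct queue *q)
{
	q->refcount--;
}

/* Model of pxp_invalidate_queues(): kill every queue we can still pin */
static void invalidate_all(struct queue *qs, int n)
{
	for (int i = 0; i < n; i++) {
		if (!get_unless_zero(&qs[i]))
			continue; /* already being torn down */
		qs[i].killed = 1; /* xe_exec_queue_kill() */
		put(&qs[i]);
	}
}

static int demo_invalidate(void)
{
	struct queue qs[2] = { { 1, 0 }, { 0, 0 } };

	invalidate_all(qs, 2);
	/* live queue killed and refcount restored; dead queue untouched */
	return qs[0].killed == 1 && qs[0].refcount == 1 && qs[1].killed == 0;
}
```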
>>> +
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
>>> index 81bafe2714ff..2e0ab186072a 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp.h
>>> +++ b/drivers/gpu/drm/xe/xe_pxp.h
>>> @@ -9,10 +9,17 @@
>>> #include <linux/types.h>
>>> struct xe_device;
>>> +struct xe_exec_queue;
>>> +struct xe_pxp;
>>> bool xe_pxp_is_supported(const struct xe_device *xe);
>>> +bool xe_pxp_is_enabled(const struct xe_pxp *pxp);
>>> int xe_pxp_init(struct xe_device *xe);
>>> void xe_pxp_irq_handler(struct xe_device *xe, u16 iir);
>>> +int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 type);
>>> +int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q);
>>> +void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q);
>>> +
>>> #endif /* __XE_PXP_H__ */
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp_submit.c b/drivers/gpu/drm/xe/xe_pxp_submit.c
>>> index c9258c861556..becffa6dfd4c 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp_submit.c
>>> +++ b/drivers/gpu/drm/xe/xe_pxp_submit.c
>>> @@ -26,8 +26,6 @@
>>> #include "instructions/xe_mi_commands.h"
>>> #include "regs/xe_gt_regs.h"
>>> -#define ARB_SESSION 0xF /* TODO: move to UAPI */
>>> -
>>> /*
>>> * The VCS is used for kernel-owned GGTT submissions to issue key termination.
>>> * Terminations are serialized, so we only need a single queue and a single
>>> @@ -495,7 +493,7 @@ int xe_pxp_submit_session_init(struct xe_pxp_gsc_client_resources *gsc_res, u32
>>> FIELD_PREP(PXP43_INIT_SESSION_APPTYPE, 0));
>>> msg_in.header.buffer_len = sizeof(msg_in) - sizeof(msg_in.header);
>>> - if (id == ARB_SESSION)
>>> + if (id == DRM_XE_PXP_HWDRM_DEFAULT_SESSION)
>> Would have been clearer to just use the correct name from the start.
>
> You mean define DRM_XE_PXP_HWDRM_DEFAULT_SESSION locally in the
> earlier patch, and then moving it to the uapi without a rename?
Yes. That would keep things simpler.
>
>>
>>> msg_in.protection_mode = PXP43_INIT_SESSION_PROTECTION_ARB;
>>> ret = gsccs_send_message(gsc_res, &msg_in, sizeof(msg_in),
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
>>> index d5cf8faed7be..eb6a0183320a 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
>>> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
>>> @@ -6,7 +6,10 @@
>>> #ifndef __XE_PXP_TYPES_H__
>>> #define __XE_PXP_TYPES_H__
>>> +#include <linux/completion.h>
>>> #include <linux/iosys-map.h>
>>> +#include <linux/mutex.h>
>>> +#include <linux/spinlock.h>
>>> #include <linux/types.h>
>>> #include <linux/workqueue.h>
>>> @@ -16,6 +19,14 @@ struct xe_device;
>>> struct xe_gt;
>>> struct xe_vm;
>>> +enum xe_pxp_status {
>>> + XE_PXP_ERROR = -1,
>>> + XE_PXP_NEEDS_TERMINATION = 0, /* starting status */
>>> + XE_PXP_TERMINATION_IN_PROGRESS,
>>> + XE_PXP_READY_TO_START,
>>> + XE_PXP_ACTIVE
>>> +};
>>> +
>>> /**
>>> * struct xe_pxp_gsc_client_resources - resources for GSC submission by a PXP
>>> * client. The GSC FW supports multiple GSC clients active at the same time.
>>> @@ -82,6 +93,21 @@ struct xe_pxp {
>>> #define PXP_TERMINATION_REQUEST BIT(0)
>>> #define PXP_TERMINATION_COMPLETE BIT(1)
>>> } irq;
>>> +
>>> + /** @mutex: protects the pxp status and the queue list */
>>> + struct mutex mutex;
>>> + /** @status: the current pxp status */
>>> + enum xe_pxp_status status;
>>> + /** @termination: completion struct that tracks terminations */
>>> + struct completion termination;
>>> +
>>> + /** @queues: management of exec_queues that use PXP */
>>> + struct {
>>> + /** @queues.lock: spinlock protecting the queue management */
>>> + spinlock_t lock;
>>> + /** @queues.list: list of exec_queues that use PXP */
>>> + struct list_head list;
>>> + } queues;
>>> };
>>> #endif /* __XE_PXP_TYPES_H__ */
>>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>>> index b6fbe4988f2e..5f4d08123672 100644
>>> --- a/include/uapi/drm/xe_drm.h
>>> +++ b/include/uapi/drm/xe_drm.h
>>> @@ -1085,6 +1085,24 @@ struct drm_xe_vm_bind {
>>> /**
>>> * struct drm_xe_exec_queue_create - Input of &DRM_IOCTL_XE_EXEC_QUEUE_CREATE
>>> *
>>> + * This ioctl supports setting the following properties via the
>>> + * %DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY extension, which uses the
>>> + * generic @drm_xe_ext_set_property struct:
>>> + *
>>> + * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY - set the queue priority.
>>> + *   CAP_SYS_NICE is required to set a value above normal.
>>> + * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE - set the queue timeslice
>>> + *   duration.
>> Units would be helpful.
>
> I have no idea what they are. I only added this documentation because
> it seemed unclean to only add the part about PXP.
Looks like it gets stored as "q->sched_props.timeslice_us", so
microseconds seems like a plausible guess :).
>
>>
>>> + * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE - set the type of PXP session
>>> + *   this queue will be used with. Valid values are listed in enum
>>> + *   drm_xe_pxp_session_type. %DRM_XE_PXP_TYPE_NONE is the default
>>> + *   behavior, so there is no need to explicitly set that. When a queue of
>>> + *   type %DRM_XE_PXP_TYPE_HWDRM is created, the PXP default HWDRM session
>>> + *   (%XE_PXP_HWDRM_DEFAULT_SESSION) will be started, if it isn't already
>>> + *   running. Given that going into a power-saving state kills PXP HWDRM
>>> + *   sessions, runtime PM will be blocked while queues of this type are
>>> + *   alive. All PXP queues will be killed if a PXP invalidation event occurs.
>> Seems odd to say 'values are listed in ...' and then go on to
>> describe each type and provide extra information about them. Seems
>> like the extra details should be part of the enum documentation
>> instead of here?
>
> This is documentation specific to how this ioctl handles those values,
> so it belongs here. The 'values are listed in ...' sentence was about
> being future proof, in case we update the enum in the future and don't
> need to add any extra explanation here.
>
That is an argument for having a single point of documentation and that
point being the point of definition. Then, if new values are added it is
immediately obvious what documentation needs to be updated.
John.
> Daniele
>
>>
>> John.
>>
>>> + *
>>> * The example below shows how to use @drm_xe_exec_queue_create to create
>>> * a simple exec_queue (no parallel submission) of class
>>> * &DRM_XE_ENGINE_CLASS_RENDER.
>>> @@ -1108,7 +1126,7 @@ struct drm_xe_exec_queue_create {
>>> #define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
>>> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
>>> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
>>> -
>>> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE 2
>>> /** @extensions: Pointer to the first extension struct, if any */
>>> __u64 extensions;
>>> @@ -1694,6 +1712,26 @@ struct drm_xe_oa_stream_info {
>>> __u64 reserved[3];
>>> };
>>> +/**
>>> + * enum drm_xe_pxp_session_type - Supported PXP session types.
>>> + *
>>> + * We currently only support HWDRM sessions, which are used for protected
>>> + * content that ends up being displayed, but the HW supports multiple types, so
>>> + * we might extend support in the future.
>>> + */
>>> +enum drm_xe_pxp_session_type {
>>> + /** @DRM_XE_PXP_TYPE_NONE: PXP not used */
>>> + DRM_XE_PXP_TYPE_NONE = 0,
>>> + /**
>>> + * @DRM_XE_PXP_TYPE_HWDRM: HWDRM sessions are used for content that ends
>>> + * up on the display.
>>> + */
>>> + DRM_XE_PXP_TYPE_HWDRM = 1,
>>> +};
>>> +
>>> +/* ID of the protected content session managed by Xe when PXP is active */
>>> +#define DRM_XE_PXP_HWDRM_DEFAULT_SESSION 0xf
>>> +
>>> #if defined(__cplusplus)
>>> }
>>> #endif
>
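For userspace readers, using the new property amounts to chaining one drm_xe_ext_set_property into the exec-queue create ioctl. Below is a hedged sketch using simplified local mirrors of the uapi structs and values (real code would include <drm/xe_drm.h>, use the DRM_XE_* names directly, and pass the chain head via drm_xe_exec_queue_create.extensions before calling the ioctl):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified local mirrors of the uapi structs, for illustration only */
struct xe_user_extension {
	uint64_t next_extension; /* 0 terminates the chain */
	uint32_t name;
	uint32_t pad;
};

struct xe_ext_set_property {
	struct xe_user_extension base;
	uint32_t property;
	uint32_t pad;
	uint64_t value;
	uint64_t reserved[2];
};

/* Values mirroring the uapi defines from the patch */
#define XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
#define XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE 2
#define XE_PXP_TYPE_HWDRM 1

/* Fill a single-extension chain asking for an HWDRM PXP queue */
static void fill_pxp_ext(struct xe_ext_set_property *ext)
{
	memset(ext, 0, sizeof(*ext));
	ext->base.name = XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY;
	ext->property = XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE;
	ext->value = XE_PXP_TYPE_HWDRM;
}

static int demo_fill(void)
{
	struct xe_ext_set_property ext;

	fill_pxp_ext(&ext);
	return ext.base.name == XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY &&
	       ext.base.next_extension == 0 &&
	       ext.property == XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE &&
	       ext.value == XE_PXP_TYPE_HWDRM;
}
```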
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 07/12] drm/xe/pxp: Add support for PXP-using queues
2024-11-14 21:20 ` John Harrison
@ 2024-11-14 21:39 ` Daniele Ceraolo Spurio
2024-11-15 0:47 ` Daniele Ceraolo Spurio
1 sibling, 0 replies; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-11-14 21:39 UTC (permalink / raw)
To: John Harrison, intel-xe
<snip>
>
>>
>>>
>>>> + * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE - set the type of PXP session
>>>> + *   this queue will be used with. Valid values are listed in enum
>>>> + *   drm_xe_pxp_session_type. %DRM_XE_PXP_TYPE_NONE is the default
>>>> + *   behavior, so there is no need to explicitly set that. When a queue of
>>>> + *   type %DRM_XE_PXP_TYPE_HWDRM is created, the PXP default HWDRM session
>>>> + *   (%XE_PXP_HWDRM_DEFAULT_SESSION) will be started, if it isn't already
>>>> + *   running. Given that going into a power-saving state kills PXP HWDRM
>>>> + *   sessions, runtime PM will be blocked while queues of this type are
>>>> + *   alive. All PXP queues will be killed if a PXP invalidation event occurs.
>>> Seems odd to say 'values are listed in ...' and then go on to
>>> describe each type and provide extra information about them. Seems
>>> like the extra details should be part of the enum documentation
>>> instead of here?
>>
>> This is documentation specific to how this ioctl handles those
>> values, so it belongs here. The 'values are listed in ...' sentence
>> was about being future proof, in case we update the enum in the
>> future and don't need to add any extra explanation here.
>>
> That is an argument for having a single point of documentation and
> that point being the point of definition. Then, if new values are
> added it is immediately obvious what documentation needs to be updated.
Still not convinced. Having the ioctl-specific info in the enum
definition would mean having to list the behavior each enum value has
for each ioctl that uses it; IMO it's cleaner to have them in the ioctl
documentation itself so it's easy to make clear which special behavior
applies to which ioctl. Also, when I said not adding any extra
explanation here for future extensions I was not referring to a mistake;
for example, compute sessions (which we don't currently support) don't
have any of the extra requirements that HWDRM sessions have, so we
wouldn't need to add any extra explanation to this ioctl if we added
support for those.
Daniele
>
> John.
>
>> Daniele
>>
>>>
>>> John.
>>>
>>>> + *
>>>> * The example below shows how to use @drm_xe_exec_queue_create to create
>>>> * a simple exec_queue (no parallel submission) of class
>>>> * &DRM_XE_ENGINE_CLASS_RENDER.
>>>> @@ -1108,7 +1126,7 @@ struct drm_xe_exec_queue_create {
>>>> #define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
>>>> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
>>>> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
>>>> -
>>>> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE 2
>>>> /** @extensions: Pointer to the first extension struct, if any */
>>>> __u64 extensions;
>>>> @@ -1694,6 +1712,26 @@ struct drm_xe_oa_stream_info {
>>>> __u64 reserved[3];
>>>> };
>>>> +/**
>>>> + * enum drm_xe_pxp_session_type - Supported PXP session types.
>>>> + *
>>>> + * We currently only support HWDRM sessions, which are used for protected
>>>> + * content that ends up being displayed, but the HW supports multiple types, so
>>>> + * we might extend support in the future.
>>>> + */
>>>> +enum drm_xe_pxp_session_type {
>>>> + /** @DRM_XE_PXP_TYPE_NONE: PXP not used */
>>>> + DRM_XE_PXP_TYPE_NONE = 0,
>>>> + /**
>>>> + * @DRM_XE_PXP_TYPE_HWDRM: HWDRM sessions are used for content that ends
>>>> + * up on the display.
>>>> + */
>>>> + DRM_XE_PXP_TYPE_HWDRM = 1,
>>>> +};
>>>> +
>>>> +/* ID of the protected content session managed by Xe when PXP is active */
>>>> +#define DRM_XE_PXP_HWDRM_DEFAULT_SESSION 0xf
>>>> +
>>>> #if defined(__cplusplus)
>>>> }
>>>> #endif
>>
>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 07/12] drm/xe/pxp: Add support for PXP-using queues
2024-11-14 21:20 ` John Harrison
2024-11-14 21:39 ` Daniele Ceraolo Spurio
@ 2024-11-15 0:47 ` Daniele Ceraolo Spurio
1 sibling, 0 replies; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-11-15 0:47 UTC (permalink / raw)
To: John Harrison, intel-xe
<snip>
>>>> @@ -343,6 +377,24 @@ static int exec_queue_set_timeslice(struct xe_device *xe, struct xe_exec_queue *
>>>> return 0;
>>>> }
>>>> +static int
>>>> +exec_queue_set_pxp_type(struct xe_device *xe, struct xe_exec_queue *q, u64 value)
>>>> +{
>>>> + BUILD_BUG_ON(DRM_XE_PXP_TYPE_NONE != 0);
>>> Why a build bug for something that is a simple 'enum { X=0 }'? It's
>>> not like there is some complex macro calculation that could be
>>> broken by some seemingly unrelated change.
>>
>> This was more to make sure that the default value for the extension
>> was 0. Given that this is UAPI and therefore can't change anyway,
>> I'll drop the BUG_ON
>>
>>>
>>>> +
>>>> + if (value == DRM_XE_PXP_TYPE_NONE)
>>>> + return 0;
>>> This doesn't need to shut any existing PXP down? Is it not possible
>>> to dynamically change the type?
>>
>> No, this can only be set at queue creation time
> Would be good to add a comment about that? Maybe even an assert or
> something to ensure this is not called post creation?
Missed this comment on my first read through. All extension functions
are guaranteed to only be called at creation time, so there is no risk
of this being called later.
Daniele
>
>
>>
>>>
>>>> +
>>>> + if (!xe_pxp_is_enabled(xe->pxp))
>>>> + return -ENODEV;
>>>> +
>>>> + /* we only support HWDRM sessions right now */
>>>> + if (XE_IOCTL_DBG(xe, value != DRM_XE_PXP_TYPE_HWDRM))
>>>> + return -EINVAL;
>>>> +
>>>> + return xe_pxp_exec_queue_set_type(xe->pxp, q, DRM_XE_PXP_TYPE_HWDRM);
>>>> +}
>>>> +
>>>> typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
>>>> struct xe_exec_queue *q,
>>>> u64 value);
>>>> @@ -350,6 +402,7 @@ typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
>>>> static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
>>>> [DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY] = exec_queue_set_priority,
>>>> [DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE] = exec_queue_set_timeslice,
>>>> + [DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE] = exec_queue_set_pxp_type,
>>>> };
>>>> static int exec_queue_user_ext_set_property(struct xe_device *xe,
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 09/12] drm/xe/pxp: Add API to mark a BO as using PXP
2024-11-12 22:23 ` Daniele Ceraolo Spurio
@ 2024-11-15 17:49 ` John Harrison
2024-11-15 18:03 ` Daniele Ceraolo Spurio
0 siblings, 1 reply; 54+ messages in thread
From: John Harrison @ 2024-11-15 17:49 UTC (permalink / raw)
To: Daniele Ceraolo Spurio, intel-xe; +Cc: Matthew Brost, Thomas Hellström
On 11/12/2024 14:23, Daniele Ceraolo Spurio wrote:
> On 10/8/24 17:42, John Harrison wrote:
>> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>>> The driver needs to know if a BO is encrypted with PXP to enable the
>>> display decryption at flip time.
>>> Furthermore, we want to keep track of the status of the encryption and
>>> reject any operation that involves a BO that is encrypted using an old
>>> key. There are two points in time where such checks can kick in:
>>>
>>> 1 - at VM bind time, all operations except for unmapping will be
>>> rejected if the key used to encrypt the BO is no longer valid.
>>> This
>>> check is opt-in via a new VM_BIND flag, to avoid a scenario
>>> where a
>>> malicious app purposely shares an invalid BO with the
>>> compositor (or
>>> other app) and cause an error there.
>> Not following the last statement here.
>
>
> If we always reject VM_BIND on invalid BOs, a malicious app can
> intentionally pass an invalid BO to the compositor to cause its
> VM_BIND call to fail. The compositor might not have any knowledge of
> PXP and therefore not be able to handle such an error, so we
> definitely need to avoid this scenario; therefore, the check on the
> object validity is opt-in. Any suggestion on how to reword it?
>
> Note that the worst that can happen if the check is skipped is that we
> display garbage, there is no risk of leaking the protected data.
Got it. Maybe something like this?
'...shares an invalid BO with a non-PXP aware app (such as a
compositor). If the VM_BIND was failed, the compositor would be unable
to display anything at all. Allowing the bind to go through means that
output still works, it just displays garbage data within the bounds of
the illegal BO'.
>
>
>>
>>>
>>> 2 - at job submission time, if the queue is marked as using PXP, all
>>> objects bound to the VM will be checked and the submission will be
>>> rejected if any of them was encrypted with a key that is no longer
>>> valid.
>>>
>>> Note that there is no risk of leaking the encrypted data if a user does
>>> not opt-in to those checks; the only consequence is that the user will
>>> not realize that the encryption key is changed and that the data is no
>>> longer valid.
>>>
>>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>> ---
>>> .../xe/compat-i915-headers/pxp/intel_pxp.h | 10 +-
>>> drivers/gpu/drm/xe/xe_bo.c | 100 +++++++++++++++++-
>>> drivers/gpu/drm/xe/xe_bo.h | 5 +
>>> drivers/gpu/drm/xe/xe_bo_types.h | 3 +
>>> drivers/gpu/drm/xe/xe_exec.c | 6 ++
>>> drivers/gpu/drm/xe/xe_pxp.h | 4 +
>>> drivers/gpu/drm/xe/xe_pxp_types.h | 3 +
>>> drivers/gpu/drm/xe/xe_vm.c | 46 +++++++-
>>> drivers/gpu/drm/xe/xe_vm.h | 2 +
>>> include/uapi/drm/xe_drm.h | 19 ++++
>>> 11 files changed, 265 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
>>> index 881680727452..d8682f781619 100644
>>> --- a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
>>> +++ b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
>>> @@ -9,6 +9,9 @@
>>> #include <linux/errno.h>
>>> #include <linux/types.h>
>>> +#include "xe_bo.h"
>>> +#include "xe_pxp.h"
>>> +
>>> struct drm_i915_gem_object;
>>> struct xe_pxp;
>>> static inline int intel_pxp_key_check(struct xe_pxp *pxp,
>>> struct drm_i915_gem_object *obj,
>>> bool assign)
>>> {
>>> - return -ENODEV;
>>> + if (assign)
>>> + return -EINVAL;
>> What does 'assign' mean and why is it always invalid?
>
>
> In i915 we used the same function to both assign the key at first
> submission (assign=true) and to check it later on (assign=false). This
> header is for compatibility with the display code and the expectation
> is that the display code will never assign a key and only check it.
Okay. Add a comment about that?
>
>
>>
>>> +
>>> + return xe_pxp_key_check(pxp, obj);
>>> }
>>> static inline bool
>>> i915_gem_object_is_protected(const struct drm_i915_gem_object *obj)
>>> {
>>> - return false;
>>> + return xe_bo_is_protected(obj);
>>> }
>>> #endif
>>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>>> index 56a089aa3916..0f591b7d93b1 100644
>>> --- a/drivers/gpu/drm/xe/xe_bo.c
>>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>>> @@ -6,6 +6,7 @@
>>> #include "xe_bo.h"
>>> #include <linux/dma-buf.h>
>>> +#include <linux/nospec.h>
>>> #include <drm/drm_drv.h>
>>> #include <drm/drm_gem_ttm_helper.h>
>>> @@ -24,6 +25,7 @@
>>> #include "xe_migrate.h"
>>> #include "xe_pm.h"
>>> #include "xe_preempt_fence.h"
>>> +#include "xe_pxp.h"
>>> #include "xe_res_cursor.h"
>>> #include "xe_trace_bo.h"
>>> #include "xe_ttm_stolen_mgr.h"
>>> @@ -1949,6 +1951,95 @@ void xe_bo_vunmap(struct xe_bo *bo)
>>> __xe_bo_vunmap(bo);
>>> }
>>> +static int gem_create_set_pxp_type(struct xe_device *xe, struct xe_bo *bo, u64 value)
>>> +{
>>> + if (value == DRM_XE_PXP_TYPE_NONE)
>>> + return 0;
>>> +
>>> + /* we only support DRM_XE_PXP_TYPE_HWDRM for now */
>>> + if (XE_IOCTL_DBG(xe, value != DRM_XE_PXP_TYPE_HWDRM))
>>> + return -EINVAL;
>>> +
>>> + xe_pxp_key_assign(xe->pxp, bo);
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +typedef int (*xe_gem_create_set_property_fn)(struct xe_device *xe,
>>> + struct xe_bo *bo,
>>> + u64 value);
>>> +
>>> +static const xe_gem_create_set_property_fn gem_create_set_property_funcs[] = {
>>> + [DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY] = gem_create_set_pxp_type,
>>> +};
>>> +
>>> +static int gem_create_user_ext_set_property(struct xe_device *xe,
>>> + struct xe_bo *bo,
>>> + u64 extension)
>>> +{
>>> + u64 __user *address = u64_to_user_ptr(extension);
>>> + struct drm_xe_ext_set_property ext;
>>> + int err;
>>> + u32 idx;
>>> +
>>> + err = __copy_from_user(&ext, address, sizeof(ext));
>>> + if (XE_IOCTL_DBG(xe, err))
>>> + return -EFAULT;
>>> +
>>> + if (XE_IOCTL_DBG(xe, ext.property >=
>>> + ARRAY_SIZE(gem_create_set_property_funcs)) ||
>>> + XE_IOCTL_DBG(xe, ext.pad) ||
>>> + XE_IOCTL_DBG(xe, ext.property !=
>>> DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY))
>> Two overlapping checks on the same field in the same if statement
>> seems unnecessary.
>
>
> I've followed the same approach as the existing
> exec_queue_user_ext_set_property for consistency.
Hmm. Still seems bizarre and redundant.
>
>
>>
>>> + return -EINVAL;
>>> +
>>> + idx = array_index_nospec(ext.property,
>>> ARRAY_SIZE(gem_create_set_property_funcs));
>>> + if (!gem_create_set_property_funcs[idx])
>>> + return -EINVAL;
>>> +
>>> + return gem_create_set_property_funcs[idx](xe, bo, ext.value);
>>> +}
>>> +
>>> +typedef int (*xe_gem_create_user_extension_fn)(struct xe_device *xe,
>>> + struct xe_bo *bo,
>>> + u64 extension);
>>> +
>>> +static const xe_gem_create_user_extension_fn
>>> gem_create_user_extension_funcs[] = {
>>> + [DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY] =
>>> gem_create_user_ext_set_property,
>>> +};
>>> +
>>> +#define MAX_USER_EXTENSIONS 16
>>> +static int gem_create_user_extensions(struct xe_device *xe, struct
>>> xe_bo *bo,
>>> + u64 extensions, int ext_number)
>>> +{
>>> + u64 __user *address = u64_to_user_ptr(extensions);
>>> + struct drm_xe_user_extension ext;
>>> + int err;
>>> + u32 idx;
>>> +
>>> + if (XE_IOCTL_DBG(xe, ext_number >= MAX_USER_EXTENSIONS))
>>> + return -E2BIG;
>>> +
>>> + err = __copy_from_user(&ext, address, sizeof(ext));
>>> + if (XE_IOCTL_DBG(xe, err))
>>> + return -EFAULT;
>>> +
>>> + if (XE_IOCTL_DBG(xe, ext.pad) ||
>>> + XE_IOCTL_DBG(xe, ext.name >=
>>> ARRAY_SIZE(gem_create_user_extension_funcs)))
>>> + return -EINVAL;
>>> +
>>> + idx = array_index_nospec(ext.name,
>>> + ARRAY_SIZE(gem_create_user_extension_funcs));
>>> + err = gem_create_user_extension_funcs[idx](xe, bo, extensions);
>>> + if (XE_IOCTL_DBG(xe, err))
>>> + return err;
>>> +
>>> + if (ext.next_extension)
>>> + return gem_create_user_extensions(xe, bo, ext.next_extension,
>>> + ++ext_number);
>>> +
>>> + return 0;
>>> +}
>>> +
>>> int xe_gem_create_ioctl(struct drm_device *dev, void *data,
>>> struct drm_file *file)
>>> {
>>> @@ -1961,8 +2052,7 @@ int xe_gem_create_ioctl(struct drm_device
>>> *dev, void *data,
>>> u32 handle;
>>> int err;
>>> - if (XE_IOCTL_DBG(xe, args->extensions) ||
>>> - XE_IOCTL_DBG(xe, args->pad[0] || args->pad[1] ||
>>> args->pad[2]) ||
>>> + if (XE_IOCTL_DBG(xe, args->pad[0] || args->pad[1] ||
>>> args->pad[2]) ||
>>> XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
>>> return -EINVAL;
>>> @@ -2037,6 +2127,12 @@ int xe_gem_create_ioctl(struct drm_device
>>> *dev, void *data,
>>> goto out_vm;
>>> }
>>> + if (args->extensions) {
>>> + err = gem_create_user_extensions(xe, bo, args->extensions, 0);
>>> + if (err)
>>> + goto out_bulk;
>>> + }
>>> +
>>> err = drm_gem_handle_create(file, &bo->ttm.base, &handle);
>>> if (err)
>>> goto out_bulk;
>>> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
>>> index 1c9dc8adaaa3..721f7dc35aac 100644
>>> --- a/drivers/gpu/drm/xe/xe_bo.h
>>> +++ b/drivers/gpu/drm/xe/xe_bo.h
>>> @@ -171,6 +171,11 @@ static inline bool xe_bo_is_pinned(struct xe_bo
>>> *bo)
>>> return bo->ttm.pin_count;
>>> }
>>> +static inline bool xe_bo_is_protected(const struct xe_bo *bo)
>>> +{
>>> + return bo->pxp_key_instance;
>>> +}
>>> +
>>> static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
>>> {
>>> if (likely(bo)) {
>>> diff --git a/drivers/gpu/drm/xe/xe_bo_types.h
>>> b/drivers/gpu/drm/xe/xe_bo_types.h
>>> index ebc8abf7930a..8668e0374b18 100644
>>> --- a/drivers/gpu/drm/xe/xe_bo_types.h
>>> +++ b/drivers/gpu/drm/xe/xe_bo_types.h
>>> @@ -56,6 +56,9 @@ struct xe_bo {
>>> */
>>> struct list_head client_link;
>>> #endif
>>> + /** @pxp_key_instance: key instance this bo was created against
>>> (if any) */
>>> + u32 pxp_key_instance;
>>> +
>>> /** @freed: List node for delayed put. */
>>> struct llist_node freed;
>>> /** @update_index: Update index if PT BO */
>>> diff --git a/drivers/gpu/drm/xe/xe_exec.c
>>> b/drivers/gpu/drm/xe/xe_exec.c
>>> index f36980aa26e6..aa4f2fe2e131 100644
>>> --- a/drivers/gpu/drm/xe/xe_exec.c
>>> +++ b/drivers/gpu/drm/xe/xe_exec.c
>>> @@ -250,6 +250,12 @@ int xe_exec_ioctl(struct drm_device *dev, void
>>> *data, struct drm_file *file)
>>> goto err_exec;
>>> }
>>> + if (xe_exec_queue_uses_pxp(q)) {
>>> + err = xe_vm_validate_protected(q->vm);
>>> + if (err)
>>> + goto err_exec;
>>> + }
>>> +
>>> job = xe_sched_job_create(q, xe_exec_queue_is_parallel(q) ?
>>> addresses : &args->address);
>>> if (IS_ERR(job)) {
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
>>> index ca4302af4ced..640e62d1d5d7 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp.c
>>> +++ b/drivers/gpu/drm/xe/xe_pxp.c
>>> @@ -8,6 +8,8 @@
>>> #include <drm/drm_managed.h>
>>> #include <drm/xe_drm.h>
>>> +#include "xe_bo.h"
>>> +#include "xe_bo_types.h"
>>> #include "xe_device_types.h"
>>> #include "xe_exec_queue.h"
>>> #include "xe_exec_queue_types.h"
>>> @@ -132,6 +134,9 @@ static void pxp_terminate(struct xe_pxp *pxp)
>>> pxp_invalidate_queues(pxp);
>>> + if (pxp->status == XE_PXP_ACTIVE)
>>> + pxp->key_instance++;
>>> +
>>> /*
>>> * If we have a termination already in progress, we need to
>>> wait for
>>> * it to complete before queueing another one. We update the
>>> state
>>> @@ -343,6 +348,8 @@ int xe_pxp_init(struct xe_device *xe)
>>> pxp->xe = xe;
>>> pxp->gt = gt;
>>> + pxp->key_instance = 1;
>>> +
>>> /*
>>> * we'll use the completion to check if there is a termination
>>> pending,
>>> * so we start it as completed and we reinit it when a
>>> termination
>>> @@ -574,3 +581,70 @@ static void pxp_invalidate_queues(struct xe_pxp
>>> *pxp)
>>> spin_unlock_irq(&pxp->queues.lock);
>>> }
>>> +/**
>>> + * xe_pxp_key_assign - mark a BO as using the current PXP key
>>> iteration
>>> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
>>> + * @bo: the BO to mark
>>> + *
>>> + * Returns: -ENODEV if PXP is disabled, 0 otherwise.
>>> + */
>>> +int xe_pxp_key_assign(struct xe_pxp *pxp, struct xe_bo *bo)
>>> +{
>>> + if (!xe_pxp_is_enabled(pxp))
>>> + return -ENODEV;
>>> +
>>> + xe_assert(pxp->xe, !bo->pxp_key_instance);
>>> +
>>> + /*
>>> + * Note that the PXP key handling is inherently racy, because
>>> the key
>>> + * can theoretically change at any time (although it's unlikely
>>> to do
>>> + * so without triggers), even right after we copy it. Taking a
>>> lock
>>> + * wouldn't help because the value might still change as soon
>>> as we
>>> + * release the lock.
>>> + * Userspace needs to handle the fact that their BOs can go
>>> invalid at
>>> + * any point.
>>> + */
>>> + bo->pxp_key_instance = pxp->key_instance;
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +/**
>>> + * xe_pxp_key_check - check if the key used by a BO is valid
>>> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
>>> + * @bo: the BO we want to check
>>> + *
>>> + * Checks whether a BO was encrypted with the current key or an
>>> obsolete one.
>>> + *
>>> + * Returns: 0 if the key is valid, -ENODEV if PXP is disabled,
>>> -EINVAL if the
>>> + * BO is not using PXP, -ENOEXEC if the key is not valid.
>>> + */
>>> +int xe_pxp_key_check(struct xe_pxp *pxp, struct xe_bo *bo)
>>> +{
>>> + if (!xe_pxp_is_enabled(pxp))
>>> + return -ENODEV;
>>> +
>>> + if (!xe_bo_is_protected(bo))
>>> + return -EINVAL;
>>> +
>>> + xe_assert(pxp->xe, bo->pxp_key_instance);
>>> +
>>> + /*
>>> + * Note that the PXP key handling is inherently racy, because
>>> the key
>>> + * can theoretically change at any time (although it's unlikely
>>> to do
>>> + * so without triggers), even right after we check it. Taking a
>>> lock
>>> + * wouldn't help because the value might still change as soon
>>> as we
>>> + * release the lock.
>>> + * We mitigate the risk by checking the key at multiple points
>>> (on each
>>> + * submission involving the BO and right before flipping it on the
>>> + * display), but there is still a very small chance that we could
>>> + * operate on an invalid BO for a single submission or a single
>>> frame
>>> + * flip. This is a compromise made to protect the encrypted
>>> data (which
>>> + * is what the key termination is for).
>>> + */
>>> + if (bo->pxp_key_instance != pxp->key_instance)
>> And the possibility that the key_instance value has wrapped around
>> and is valid again is considered not a problem? Using a bo with a bad
>> key potentially results in garbage being displayed but nothing worse
>> than that?
>
>
> Considering that the instance variable is a u32, even if we had an
> invalidation a second (which is extremely unlikely unless someone is
> actively attacking the system in a loop) it'd take way too long for
> the value to actually wrap. And yes on the second question.
If it is a malicious app, it can be much faster than 1Hz. It only needs
to attack at 1kHz or so to bring the wrap time down to something
realistic. But if the malicious app can't actually get anywhere even if
it did successfully spoof the key by forcing a wrap, then it's still not
something we need to worry about. Because it is not actually spoofing
the key, just pretending to?
John.
>
>
>>
>>> + return -ENOEXEC;
>>> +
>>> + return 0;
>>> +}
>>> +
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
>>> index 868813cc84b9..2d22a6e6ab27 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp.h
>>> +++ b/drivers/gpu/drm/xe/xe_pxp.h
>>> @@ -8,6 +8,7 @@
>>> #include <linux/types.h>
>>> +struct xe_bo;
>>> struct xe_device;
>>> struct xe_exec_queue;
>>> struct xe_pxp;
>>> @@ -23,4 +24,7 @@ int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp,
>>> struct xe_exec_queue *q, u8 t
>>> int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue
>>> *q);
>>> void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct
>>> xe_exec_queue *q);
>>> +int xe_pxp_key_assign(struct xe_pxp *pxp, struct xe_bo *bo);
>>> +int xe_pxp_key_check(struct xe_pxp *pxp, struct xe_bo *bo);
>>> +
>>> #endif /* __XE_PXP_H__ */
>>> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h
>>> b/drivers/gpu/drm/xe/xe_pxp_types.h
>>> index eb6a0183320a..1bb747837f86 100644
>>> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
>>> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
>>> @@ -108,6 +108,9 @@ struct xe_pxp {
>>> /** @queues.list: list of exec_queues that use PXP */
>>> struct list_head list;
>>> } queues;
>>> +
>>> + /** @key_instance: keep track of the current iteration of the
>>> PXP key */
>>> + u32 key_instance;
>>> };
>>> #endif /* __XE_PXP_TYPES_H__ */
>>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>>> index 56f105797ae6..1011d643ebb8 100644
>>> --- a/drivers/gpu/drm/xe/xe_vm.c
>>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>>> @@ -34,6 +34,7 @@
>>> #include "xe_pm.h"
>>> #include "xe_preempt_fence.h"
>>> #include "xe_pt.h"
>>> +#include "xe_pxp.h"
>>> #include "xe_res_cursor.h"
>>> #include "xe_sync.h"
>>> #include "xe_trace_bo.h"
>>> @@ -2754,7 +2755,8 @@ static struct dma_fence
>>> *vm_bind_ioctl_ops_execute(struct xe_vm *vm,
>>> (DRM_XE_VM_BIND_FLAG_READONLY | \
>>> DRM_XE_VM_BIND_FLAG_IMMEDIATE | \
>>> DRM_XE_VM_BIND_FLAG_NULL | \
>>> - DRM_XE_VM_BIND_FLAG_DUMPABLE)
>>> + DRM_XE_VM_BIND_FLAG_DUMPABLE | \
>>> + DRM_XE_VM_BIND_FLAG_CHECK_PXP)
>>> #ifdef TEST_VM_OPS_ERROR
>>> #define SUPPORTED_FLAGS (SUPPORTED_FLAGS_STUB | FORCE_OP_ERROR)
>>> @@ -2916,7 +2918,7 @@ static void xe_vma_ops_init(struct xe_vma_ops
>>> *vops, struct xe_vm *vm,
>>> static int xe_vm_bind_ioctl_validate_bo(struct xe_device *xe,
>>> struct xe_bo *bo,
>>> u64 addr, u64 range, u64 obj_offset,
>>> - u16 pat_index)
>>> + u16 pat_index, u32 op, u32 bind_flags)
>>> {
>>> u16 coh_mode;
>>> @@ -2951,6 +2953,12 @@ static int
>>> xe_vm_bind_ioctl_validate_bo(struct xe_device *xe, struct xe_bo *bo,
>>> return -EINVAL;
>>> }
>>> + /* If a BO is protected it must be valid to be mapped */
>> "is protected it can only be mapped if the key is still valid". The
>> above can be read as saying the BO must be mappable, which isn't the
>> same thing.
>
>
> will update.
>
>
>>
>>> + if ((bind_flags & DRM_XE_VM_BIND_FLAG_CHECK_PXP) &&
>>> xe_bo_is_protected(bo) &&
>>> + op != DRM_XE_VM_BIND_OP_UNMAP && op !=
>>> DRM_XE_VM_BIND_OP_UNMAP_ALL)
>>> + if (XE_IOCTL_DBG(xe, xe_pxp_key_check(xe->pxp, bo) != 0))
>>> + return -ENOEXEC;
>>> +
>>> return 0;
>>> }
>>> @@ -3038,6 +3046,8 @@ int xe_vm_bind_ioctl(struct drm_device *dev,
>>> void *data, struct drm_file *file)
>>> u32 obj = bind_ops[i].obj;
>>> u64 obj_offset = bind_ops[i].obj_offset;
>>> u16 pat_index = bind_ops[i].pat_index;
>>> + u32 op = bind_ops[i].op;
>>> + u32 bind_flags = bind_ops[i].flags;
>>> if (!obj)
>>> continue;
>>> @@ -3050,7 +3060,8 @@ int xe_vm_bind_ioctl(struct drm_device *dev,
>>> void *data, struct drm_file *file)
>>> bos[i] = gem_to_xe_bo(gem_obj);
>>> err = xe_vm_bind_ioctl_validate_bo(xe, bos[i], addr, range,
>>> - obj_offset, pat_index);
>>> + obj_offset, pat_index, op,
>>> + bind_flags);
>>> if (err)
>>> goto put_obj;
>>> }
>>> @@ -3343,6 +3354,35 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
>>> return ret;
>>> }
>>> +int xe_vm_validate_protected(struct xe_vm *vm)
>>> +{
>>> + struct drm_gpuva *gpuva;
>>> + int err = 0;
>>> +
>>> + if (!vm)
>>> + return -ENODEV;
>>> +
>>> + mutex_lock(&vm->snap_mutex);
>>> +
>>> + drm_gpuvm_for_each_va(gpuva, &vm->gpuvm) {
>>> + struct xe_vma *vma = gpuva_to_vma(gpuva);
>>> + struct xe_bo *bo = vma->gpuva.gem.obj ?
>>> + gem_to_xe_bo(vma->gpuva.gem.obj) : NULL;
>>> +
>>> + if (!bo)
>>> + continue;
>>> +
>>> + if (xe_bo_is_protected(bo)) {
>>> + err = xe_pxp_key_check(vm->xe->pxp, bo);
>>> + if (err)
>>> + break;
>>> + }
>>> + }
>>> +
>>> + mutex_unlock(&vm->snap_mutex);
>>> + return err;
>>> +}
>>> +
>>> struct xe_vm_snapshot {
>>> unsigned long num_snaps;
>>> struct {
>>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>>> index bfc19e8113c3..dd51c9790dab 100644
>>> --- a/drivers/gpu/drm/xe/xe_vm.h
>>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>>> @@ -216,6 +216,8 @@ struct dma_fence *xe_vma_rebind(struct xe_vm
>>> *vm, struct xe_vma *vma,
>>> int xe_vm_invalidate_vma(struct xe_vma *vma);
>>> +int xe_vm_validate_protected(struct xe_vm *vm);
>>> +
>>> static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
>>> {
>>> xe_assert(vm->xe, xe_vm_in_preempt_fence_mode(vm));
>>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>>> index 9972ceb3fbfb..335febe03e40 100644
>>> --- a/include/uapi/drm/xe_drm.h
>>> +++ b/include/uapi/drm/xe_drm.h
>>> @@ -776,8 +776,23 @@ struct drm_xe_device_query {
>>> * - %DRM_XE_GEM_CPU_CACHING_WC - Allocate the pages as
>>> write-combined. This
>>> * is uncached. Scanout surfaces should likely use this. All
>>> objects
>>> * that can be placed in VRAM must use this.
>>> + *
>>> + * This ioctl supports setting the following properties via the
>>> + * %DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY extension, which uses the
>>> + * generic @drm_xe_ext_set_property struct:
>>> + *
>>> + * - %DRM_XE_GEM_CREATE_SET_PROPERTY_PXP_TYPE - set the type of
>>> PXP session
>>> + * this object will be used with. Valid values are listed in enum
>>> + * drm_xe_pxp_session_type. %DRM_XE_PXP_TYPE_NONE is the default
>>> behavior, so
>>> + * there is no need to explicitly set that. Objects used with
>>> session of type
>>> + * %DRM_XE_PXP_TYPE_HWDRM will be marked as invalid if a PXP
>>> invalidation
>>> + * event occurs after their creation. Attempting to flip an
>>> invalid object
>>> + * will cause a black frame to be displayed instead. Submissions
>>> with invalid
>>> + * objects mapped in the VM will be rejected.
>> Again, seems like the per type descriptions should be collected
>> together in the type enum.
>
>
> This is how this ioctl handles those values, so IMO they belong here.
>
> Daniele
>
>
>>
>> John.
>>
>>> */
>>> struct drm_xe_gem_create {
>>> +#define DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY 0
>>> +#define DRM_XE_GEM_CREATE_SET_PROPERTY_PXP_TYPE 0
>>> /** @extensions: Pointer to the first extension struct, if any */
>>> __u64 extensions;
>>> @@ -939,6 +954,9 @@ struct drm_xe_vm_destroy {
>>> * will only be valid for DRM_XE_VM_BIND_OP_MAP operations, the BO
>>> * handle MBZ, and the BO offset MBZ. This flag is intended to
>>> * implement VK sparse bindings.
>>> + * - %DRM_XE_VM_BIND_FLAG_CHECK_PXP - If the object is encrypted
>>> via PXP,
>>> + * reject the binding if the encryption key is no longer valid.
>>> This
>>> + * flag has no effect on BOs that are not marked as using PXP.
>>> */
>>> struct drm_xe_vm_bind_op {
>>> /** @extensions: Pointer to the first extension struct, if any */
>>> @@ -1029,6 +1047,7 @@ struct drm_xe_vm_bind_op {
>>> #define DRM_XE_VM_BIND_FLAG_IMMEDIATE (1 << 1)
>>> #define DRM_XE_VM_BIND_FLAG_NULL (1 << 2)
>>> #define DRM_XE_VM_BIND_FLAG_DUMPABLE (1 << 3)
>>> +#define DRM_XE_VM_BIND_FLAG_CHECK_PXP (1 << 4)
>>> /** @flags: Bind flags */
>>> __u32 flags;
>>
^ permalink raw reply [flat|nested] 54+ messages in thread
* Re: [PATCH v2 09/12] drm/xe/pxp: Add API to mark a BO as using PXP
2024-11-15 17:49 ` John Harrison
@ 2024-11-15 18:03 ` Daniele Ceraolo Spurio
0 siblings, 0 replies; 54+ messages in thread
From: Daniele Ceraolo Spurio @ 2024-11-15 18:03 UTC (permalink / raw)
To: John Harrison, intel-xe; +Cc: Matthew Brost, Thomas Hellström
On 11/15/2024 9:49 AM, John Harrison wrote:
> On 11/12/2024 14:23, Daniele Ceraolo Spurio wrote:
>> On 10/8/24 17:42, John Harrison wrote:
>>> On 8/16/2024 12:00, Daniele Ceraolo Spurio wrote:
>>>> The driver needs to know if a BO is encrypted with PXP to enable the
>>>> display decryption at flip time.
>>>> Furthermore, we want to keep track of the status of the encryption and
>>>> reject any operation that involves a BO that is encrypted using an old
>>>> key. There are two points in time where such checks can kick in:
>>>>
>>>> 1 - at VM bind time, all operations except for unmapping will be
>>>> rejected if the key used to encrypt the BO is no longer valid.
>>>> This
>>>> check is opt-in via a new VM_BIND flag, to avoid a scenario
>>>> where a
>>>> malicious app purposely shares an invalid BO with the
>>>> compositor (or
>>>> other app) and cause an error there.
>>> Not following the last statement here.
>>
>>
>> If we always reject VM_BIND on invalid BOs, a malicious app can
>> intentionally pass an invalid BO to the compositor to cause its
>> VM_BIND call to fail. The compositor might not have any knowledge of
>> PXP and therefore not be able to handle such an error, so we
>> definitely need to avoid this scenario; therefore, the check on the
>> object validity is opt-in. Any suggestion on how to reword it?
>>
>> Note that the worst that can happen if the check is skipped is that
>> we display garbage, there is no risk of leaking the protected data.
> Got it. Maybe something like this?
> '...shares an invalid BO with a non-PXP aware app (such as a
> compositor). If the VM_BIND failed, the compositor would be unable
> to display anything at all. Allowing the bind to go through means that
> output still works, it just displays garbage data within the bounds of
> the illegal BO'.
Sounds good.
>
>>
>>
>>>
>>>>
>>>> 2 - at job submission time, if the queue is marked as using PXP, all
>>>> objects bound to the VM will be checked and the submission
>>>> will be
>>>> rejected if any of them was encrypted with a key that is no
>>>> longer
>>>> valid.
>>>>
>>>> Note that there is no risk of leaking the encrypted data if a user
>>>> does
>>>> not opt-in to those checks; the only consequence is that the user will
>>>> not realize that the encryption key is changed and that the data is no
>>>> longer valid.
>>>>
>>>> Signed-off-by: Daniele Ceraolo Spurio
>>>> <daniele.ceraolospurio@intel.com>
>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>> ---
>>>> .../xe/compat-i915-headers/pxp/intel_pxp.h | 10 +-
>>>> drivers/gpu/drm/xe/xe_bo.c | 100
>>>> +++++++++++++++++-
>>>> drivers/gpu/drm/xe/xe_bo.h | 5 +
>>>> drivers/gpu/drm/xe/xe_bo_types.h | 3 +
>>>> drivers/gpu/drm/xe/xe_exec.c | 6 ++
>>>> drivers/gpu/drm/xe/xe_pxp.c | 74 +++++++++++++
>>>> drivers/gpu/drm/xe/xe_pxp.h | 4 +
>>>> drivers/gpu/drm/xe/xe_pxp_types.h | 3 +
>>>> drivers/gpu/drm/xe/xe_vm.c | 46 +++++++-
>>>> drivers/gpu/drm/xe/xe_vm.h | 2 +
>>>> include/uapi/drm/xe_drm.h | 19 ++++
>>>> 11 files changed, 265 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
>>>> b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
>>>> index 881680727452..d8682f781619 100644
>>>> --- a/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
>>>> +++ b/drivers/gpu/drm/xe/compat-i915-headers/pxp/intel_pxp.h
>>>> @@ -9,6 +9,9 @@
>>>> #include <linux/errno.h>
>>>> #include <linux/types.h>
>>>> +#include "xe_bo.h"
>>>> +#include "xe_pxp.h"
>>>> +
>>>> struct drm_i915_gem_object;
>>>> struct xe_pxp;
>>>> @@ -16,13 +19,16 @@ static inline int intel_pxp_key_check(struct
>>>> xe_pxp *pxp,
>>>> struct drm_i915_gem_object *obj,
>>>> bool assign)
>>>> {
>>>> - return -ENODEV;
>>>> + if (assign)
>>>> + return -EINVAL;
>>> What does 'assign' mean and why is it always invalid?
>>
>>
>> In i915 we used the same function to both assign the key at first
>> submission (assign=true) and to check it later on (assign=false).
>> This header is for compatibility with the display code and the
>> expectation is that the display code will never assign a key and only
>> check it.
> Okay. Add a comment about that?
Will do.
>
>>
>>
>>>
>>>> +
>>>> + return xe_pxp_key_check(pxp, obj);
>>>> }
>>>> static inline bool
>>>> i915_gem_object_is_protected(const struct drm_i915_gem_object *obj)
>>>> {
>>>> - return false;
>>>> + return xe_bo_is_protected(obj);
>>>> }
>>>> #endif
>>>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>>>> index 56a089aa3916..0f591b7d93b1 100644
>>>> --- a/drivers/gpu/drm/xe/xe_bo.c
>>>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>>>> @@ -6,6 +6,7 @@
>>>> #include "xe_bo.h"
>>>> #include <linux/dma-buf.h>
>>>> +#include <linux/nospec.h>
>>>> #include <drm/drm_drv.h>
>>>> #include <drm/drm_gem_ttm_helper.h>
>>>> @@ -24,6 +25,7 @@
>>>> #include "xe_migrate.h"
>>>> #include "xe_pm.h"
>>>> #include "xe_preempt_fence.h"
>>>> +#include "xe_pxp.h"
>>>> #include "xe_res_cursor.h"
>>>> #include "xe_trace_bo.h"
>>>> #include "xe_ttm_stolen_mgr.h"
>>>> @@ -1949,6 +1951,95 @@ void xe_bo_vunmap(struct xe_bo *bo)
>>>> __xe_bo_vunmap(bo);
>>>> }
>>>> +static int gem_create_set_pxp_type(struct xe_device *xe, struct
>>>> xe_bo *bo, u64 value)
>>>> +{
>>>> + if (value == DRM_XE_PXP_TYPE_NONE)
>>>> + return 0;
>>>> +
>>>> + /* we only support DRM_XE_PXP_TYPE_HWDRM for now */
>>>> + if (XE_IOCTL_DBG(xe, value != DRM_XE_PXP_TYPE_HWDRM))
>>>> + return -EINVAL;
>>>> +
>>>> + xe_pxp_key_assign(xe->pxp, bo);
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +typedef int (*xe_gem_create_set_property_fn)(struct xe_device *xe,
>>>> + struct xe_bo *bo,
>>>> + u64 value);
>>>> +
>>>> +static const xe_gem_create_set_property_fn
>>>> gem_create_set_property_funcs[] = {
>>>> + [DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY] =
>>>> gem_create_set_pxp_type,
>>>> +};
>>>> +
>>>> +static int gem_create_user_ext_set_property(struct xe_device *xe,
>>>> + struct xe_bo *bo,
>>>> + u64 extension)
>>>> +{
>>>> + u64 __user *address = u64_to_user_ptr(extension);
>>>> + struct drm_xe_ext_set_property ext;
>>>> + int err;
>>>> + u32 idx;
>>>> +
>>>> + err = __copy_from_user(&ext, address, sizeof(ext));
>>>> + if (XE_IOCTL_DBG(xe, err))
>>>> + return -EFAULT;
>>>> +
>>>> + if (XE_IOCTL_DBG(xe, ext.property >=
>>>> + ARRAY_SIZE(gem_create_set_property_funcs)) ||
>>>> + XE_IOCTL_DBG(xe, ext.pad) ||
>>>> + XE_IOCTL_DBG(xe, ext.property !=
>>>> DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY))
>>> Two overlapping checks on the same field in the same if statement
>>> seems unnecessary.
>>
>>
>> I've followed the same approach as the existing
>> exec_queue_user_ext_set_property for consistency.
> Hmm. Still seems bizarre and redundant.
>
>>
>>
>>>
>>>> + return -EINVAL;
>>>> +
>>>> + idx = array_index_nospec(ext.property,
>>>> ARRAY_SIZE(gem_create_set_property_funcs));
>>>> + if (!gem_create_set_property_funcs[idx])
>>>> + return -EINVAL;
>>>> +
>>>> + return gem_create_set_property_funcs[idx](xe, bo, ext.value);
>>>> +}
>>>> +
>>>> +typedef int (*xe_gem_create_user_extension_fn)(struct xe_device *xe,
>>>> + struct xe_bo *bo,
>>>> + u64 extension);
>>>> +
>>>> +static const xe_gem_create_user_extension_fn
>>>> gem_create_user_extension_funcs[] = {
>>>> + [DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY] =
>>>> gem_create_user_ext_set_property,
>>>> +};
>>>> +
>>>> +#define MAX_USER_EXTENSIONS 16
>>>> +static int gem_create_user_extensions(struct xe_device *xe, struct
>>>> xe_bo *bo,
>>>> + u64 extensions, int ext_number)
>>>> +{
>>>> + u64 __user *address = u64_to_user_ptr(extensions);
>>>> + struct drm_xe_user_extension ext;
>>>> + int err;
>>>> + u32 idx;
>>>> +
>>>> + if (XE_IOCTL_DBG(xe, ext_number >= MAX_USER_EXTENSIONS))
>>>> + return -E2BIG;
>>>> +
>>>> + err = __copy_from_user(&ext, address, sizeof(ext));
>>>> + if (XE_IOCTL_DBG(xe, err))
>>>> + return -EFAULT;
>>>> +
>>>> + if (XE_IOCTL_DBG(xe, ext.pad) ||
>>>> + XE_IOCTL_DBG(xe, ext.name >=
>>>> ARRAY_SIZE(gem_create_user_extension_funcs)))
>>>> + return -EINVAL;
>>>> +
>>>> + idx = array_index_nospec(ext.name,
>>>> + ARRAY_SIZE(gem_create_user_extension_funcs));
>>>> + err = gem_create_user_extension_funcs[idx](xe, bo, extensions);
>>>> + if (XE_IOCTL_DBG(xe, err))
>>>> + return err;
>>>> +
>>>> + if (ext.next_extension)
>>>> + return gem_create_user_extensions(xe, bo, ext.next_extension,
>>>> + ++ext_number);
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> int xe_gem_create_ioctl(struct drm_device *dev, void *data,
>>>> struct drm_file *file)
>>>> {
>>>> @@ -1961,8 +2052,7 @@ int xe_gem_create_ioctl(struct drm_device
>>>> *dev, void *data,
>>>> u32 handle;
>>>> int err;
>>>> - if (XE_IOCTL_DBG(xe, args->extensions) ||
>>>> - XE_IOCTL_DBG(xe, args->pad[0] || args->pad[1] ||
>>>> args->pad[2]) ||
>>>> + if (XE_IOCTL_DBG(xe, args->pad[0] || args->pad[1] ||
>>>> args->pad[2]) ||
>>>> XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
>>>> return -EINVAL;
>>>> @@ -2037,6 +2127,12 @@ int xe_gem_create_ioctl(struct drm_device
>>>> *dev, void *data,
>>>> goto out_vm;
>>>> }
>>>> + if (args->extensions) {
>>>> + err = gem_create_user_extensions(xe, bo, args->extensions,
>>>> 0);
>>>> + if (err)
>>>> + goto out_bulk;
>>>> + }
>>>> +
>>>> err = drm_gem_handle_create(file, &bo->ttm.base, &handle);
>>>> if (err)
>>>> goto out_bulk;
>>>> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
>>>> index 1c9dc8adaaa3..721f7dc35aac 100644
>>>> --- a/drivers/gpu/drm/xe/xe_bo.h
>>>> +++ b/drivers/gpu/drm/xe/xe_bo.h
>>>> @@ -171,6 +171,11 @@ static inline bool xe_bo_is_pinned(struct
>>>> xe_bo *bo)
>>>> return bo->ttm.pin_count;
>>>> }
>>>> +static inline bool xe_bo_is_protected(const struct xe_bo *bo)
>>>> +{
>>>> + return bo->pxp_key_instance;
>>>> +}
>>>> +
>>>> static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
>>>> {
>>>> if (likely(bo)) {
>>>> diff --git a/drivers/gpu/drm/xe/xe_bo_types.h
>>>> b/drivers/gpu/drm/xe/xe_bo_types.h
>>>> index ebc8abf7930a..8668e0374b18 100644
>>>> --- a/drivers/gpu/drm/xe/xe_bo_types.h
>>>> +++ b/drivers/gpu/drm/xe/xe_bo_types.h
>>>> @@ -56,6 +56,9 @@ struct xe_bo {
>>>> */
>>>> struct list_head client_link;
>>>> #endif
>>>> + /** @pxp_key_instance: key instance this bo was created
>>>> against (if any) */
>>>> + u32 pxp_key_instance;
>>>> +
>>>> /** @freed: List node for delayed put. */
>>>> struct llist_node freed;
>>>> /** @update_index: Update index if PT BO */
>>>> diff --git a/drivers/gpu/drm/xe/xe_exec.c
>>>> b/drivers/gpu/drm/xe/xe_exec.c
>>>> index f36980aa26e6..aa4f2fe2e131 100644
>>>> --- a/drivers/gpu/drm/xe/xe_exec.c
>>>> +++ b/drivers/gpu/drm/xe/xe_exec.c
>>>> @@ -250,6 +250,12 @@ int xe_exec_ioctl(struct drm_device *dev, void
>>>> *data, struct drm_file *file)
>>>> goto err_exec;
>>>> }
>>>> + if (xe_exec_queue_uses_pxp(q)) {
>>>> + err = xe_vm_validate_protected(q->vm);
>>>> + if (err)
>>>> + goto err_exec;
>>>> + }
>>>> +
>>>> job = xe_sched_job_create(q, xe_exec_queue_is_parallel(q) ?
>>>> addresses : &args->address);
>>>> if (IS_ERR(job)) {
>>>> diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
>>>> index ca4302af4ced..640e62d1d5d7 100644
>>>> --- a/drivers/gpu/drm/xe/xe_pxp.c
>>>> +++ b/drivers/gpu/drm/xe/xe_pxp.c
>>>> @@ -8,6 +8,8 @@
>>>> #include <drm/drm_managed.h>
>>>> #include <drm/xe_drm.h>
>>>> +#include "xe_bo.h"
>>>> +#include "xe_bo_types.h"
>>>> #include "xe_device_types.h"
>>>> #include "xe_exec_queue.h"
>>>> #include "xe_exec_queue_types.h"
>>>> @@ -132,6 +134,9 @@ static void pxp_terminate(struct xe_pxp *pxp)
>>>> pxp_invalidate_queues(pxp);
>>>> + if (pxp->status == XE_PXP_ACTIVE)
>>>> + pxp->key_instance++;
>>>> +
>>>> /*
>>>> * If we have a termination already in progress, we need to
>>>> wait for
>>>> * it to complete before queueing another one. We update the
>>>> state
>>>> @@ -343,6 +348,8 @@ int xe_pxp_init(struct xe_device *xe)
>>>> pxp->xe = xe;
>>>> pxp->gt = gt;
>>>> + pxp->key_instance = 1;
>>>> +
>>>> /*
>>>> * we'll use the completion to check if there is a
>>>> termination pending,
>>>> * so we start it as completed and we reinit it when a
>>>> termination
>>>> @@ -574,3 +581,70 @@ static void pxp_invalidate_queues(struct
>>>> xe_pxp *pxp)
>>>> spin_unlock_irq(&pxp->queues.lock);
>>>> }
>>>> +/**
>>>> + * xe_pxp_key_assign - mark a BO as using the current PXP key
>>>> iteration
>>>> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
>>>> + * @bo: the BO to mark
>>>> + *
>>>> + * Returns: -ENODEV if PXP is disabled, 0 otherwise.
>>>> + */
>>>> +int xe_pxp_key_assign(struct xe_pxp *pxp, struct xe_bo *bo)
>>>> +{
>>>> + if (!xe_pxp_is_enabled(pxp))
>>>> + return -ENODEV;
>>>> +
>>>> + xe_assert(pxp->xe, !bo->pxp_key_instance);
>>>> +
>>>> + /*
>>>> + * Note that the PXP key handling is inherently racy, because
>>>> the key
>>>> + * can theoretically change at any time (although it's
>>>> unlikely to do
>>>> + * so without triggers), even right after we copy it. Taking a
>>>> lock
>>>> + * wouldn't help because the value might still change as soon
>>>> as we
>>>> + * release the lock.
>>>> + * Userspace needs to handle the fact that their BOs can go
>>>> invalid at
>>>> + * any point.
>>>> + */
>>>> + bo->pxp_key_instance = pxp->key_instance;
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_pxp_key_check - check if the key used by a BO is valid
>>>> + * @pxp: the xe->pxp pointer (it will be NULL if PXP is disabled)
>>>> + * @bo: the BO we want to check
>>>> + *
>>>> + * Checks whether a BO was encrypted with the current key or an
>>>> obsolete one.
>>>> + *
>>>> + * Returns: 0 if the key is valid, -ENODEV if PXP is disabled,
>>>> -EINVAL if the
>>>> + * BO is not using PXP, -ENOEXEC if the key is not valid.
>>>> + */
>>>> +int xe_pxp_key_check(struct xe_pxp *pxp, struct xe_bo *bo)
>>>> +{
>>>> + if (!xe_pxp_is_enabled(pxp))
>>>> + return -ENODEV;
>>>> +
>>>> + if (!xe_bo_is_protected(bo))
>>>> + return -EINVAL;
>>>> +
>>>> + xe_assert(pxp->xe, bo->pxp_key_instance);
>>>> +
>>>> + /*
>>>> + * Note that the PXP key handling is inherently racy, because
>>>> the key
>>>> + * can theoretically change at any time (although it's
>>>> unlikely to do
>>>> + * so without triggers), even right after we check it. Taking
>>>> a lock
>>>> + * wouldn't help because the value might still change as soon
>>>> as we
>>>> + * release the lock.
>>>> + * We mitigate the risk by checking the key at multiple points
>>>> (on each
>>>> + * submission involving the BO and right before flipping it on
>>>> the
>>>> + * display), but there is still a very small chance that we could
>>>> + * operate on an invalid BO for a single submission or a
>>>> single frame
>>>> + * flip. This is a compromise made to protect the encrypted
>>>> data (which
>>>> + * is what the key termination is for).
>>>> + */
>>>> + if (bo->pxp_key_instance != pxp->key_instance)
>>> And the possibility that the key_instance value has wrapped around
>>> and is valid again is considered not a problem? Using a bo with a
>>> bad key potentially results in garbage being displayed but nothing
>>> worse than that?
>>
>>
>> Considering that the instance variable is a u32, even if we had an
>> invalidation a second (which is extremely unlikely unless someone is
>> actively attacking the system in a loop) it'd take way too long for
>> the value to actually wrap. And yes on the second question.
> If it is a malicious app, it can be much faster than 1Hz. It only
> needs to attack at 1kHz or so to bring the wrap time down to something
> realistic. But if the malicious app can't actually get anywhere even
> if it did successfully spoof the key by forcing a wrap, then it's
> still not something we need to worry about. Because it is not actually
> spoofing the key, just pretending to?
The PXP termination and restart flow takes about 100ms, so ~10Hz is the
absolute max speed of the key increase, which still means it'd take
several years before a wrap. Also note that, AFAIU, an app can't perform
an attack on the key itself, because it doesn't have access to it; an
attack is more along the lines of someone trying to intercept the
encrypted stream (e.g. disconnecting an HDCP-enabled monitor and
replacing it with a non-HDCP connection counts as an attack). And as you
said, even if the wrap happened and a malicious app still had an invalid
BO that now matches the key value again, the only consequence would
still just be garbage on screen, because even if the key instance count
matched, the actual key would be different.
Daniele
>
> John.
>
>>
>>
>>>
>>>> + return -ENOEXEC;
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> diff --git a/drivers/gpu/drm/xe/xe_pxp.h b/drivers/gpu/drm/xe/xe_pxp.h
>>>> index 868813cc84b9..2d22a6e6ab27 100644
>>>> --- a/drivers/gpu/drm/xe/xe_pxp.h
>>>> +++ b/drivers/gpu/drm/xe/xe_pxp.h
>>>> @@ -8,6 +8,7 @@
>>>> #include <linux/types.h>
>>>> +struct xe_bo;
>>>> struct xe_device;
>>>> struct xe_exec_queue;
>>>> struct xe_pxp;
>>>> @@ -23,4 +24,7 @@ int xe_pxp_exec_queue_set_type(struct xe_pxp *pxp, struct xe_exec_queue *q, u8 t
>>>> int xe_pxp_exec_queue_add(struct xe_pxp *pxp, struct xe_exec_queue *q);
>>>> void xe_pxp_exec_queue_remove(struct xe_pxp *pxp, struct xe_exec_queue *q);
>>>> +int xe_pxp_key_assign(struct xe_pxp *pxp, struct xe_bo *bo);
>>>> +int xe_pxp_key_check(struct xe_pxp *pxp, struct xe_bo *bo);
>>>> +
>>>> #endif /* __XE_PXP_H__ */
>>>> diff --git a/drivers/gpu/drm/xe/xe_pxp_types.h b/drivers/gpu/drm/xe/xe_pxp_types.h
>>>> index eb6a0183320a..1bb747837f86 100644
>>>> --- a/drivers/gpu/drm/xe/xe_pxp_types.h
>>>> +++ b/drivers/gpu/drm/xe/xe_pxp_types.h
>>>> @@ -108,6 +108,9 @@ struct xe_pxp {
>>>> 		/** @queues.list: list of exec_queues that use PXP */
>>>> 		struct list_head list;
>>>> 	} queues;
>>>> +
>>>> +	/** @key_instance: keep track of the current iteration of the PXP key */
>>>> +	u32 key_instance;
>>>> };
>>>> #endif /* __XE_PXP_TYPES_H__ */
>>>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>>>> index 56f105797ae6..1011d643ebb8 100644
>>>> --- a/drivers/gpu/drm/xe/xe_vm.c
>>>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>>>> @@ -34,6 +34,7 @@
>>>> #include "xe_pm.h"
>>>> #include "xe_preempt_fence.h"
>>>> #include "xe_pt.h"
>>>> +#include "xe_pxp.h"
>>>> #include "xe_res_cursor.h"
>>>> #include "xe_sync.h"
>>>> #include "xe_trace_bo.h"
>>>> @@ -2754,7 +2755,8 @@ static struct dma_fence *vm_bind_ioctl_ops_execute(struct xe_vm *vm,
>>>> 	(DRM_XE_VM_BIND_FLAG_READONLY | \
>>>> 	 DRM_XE_VM_BIND_FLAG_IMMEDIATE | \
>>>> 	 DRM_XE_VM_BIND_FLAG_NULL | \
>>>> -	 DRM_XE_VM_BIND_FLAG_DUMPABLE)
>>>> +	 DRM_XE_VM_BIND_FLAG_DUMPABLE | \
>>>> +	 DRM_XE_VM_BIND_FLAG_CHECK_PXP)
>>>> #ifdef TEST_VM_OPS_ERROR
>>>> #define SUPPORTED_FLAGS (SUPPORTED_FLAGS_STUB | FORCE_OP_ERROR)
>>>> @@ -2916,7 +2918,7 @@ static void xe_vma_ops_init(struct xe_vma_ops *vops, struct xe_vm *vm,
>>>> static int xe_vm_bind_ioctl_validate_bo(struct xe_device *xe, struct xe_bo *bo,
>>>> 					u64 addr, u64 range, u64 obj_offset,
>>>> -					u16 pat_index)
>>>> +					u16 pat_index, u32 op, u32 bind_flags)
>>>> {
>>>> 	u16 coh_mode;
>>>> @@ -2951,6 +2953,12 @@ static int xe_vm_bind_ioctl_validate_bo(struct xe_device *xe, struct xe_bo *bo,
>>>> 		return -EINVAL;
>>>> 	}
>>>> +	/* If a BO is protected it must be valid to be mapped */
>>> "is protected it can only be mapped if the key is still valid". The
>>> above can be read as saying the BO must be mappable, which isn't the
>>> same thing.
>>
>>
>> will update.
>>
>>
>>>
>>>> +	if ((bind_flags & DRM_XE_VM_BIND_FLAG_CHECK_PXP) && xe_bo_is_protected(bo) &&
>>>> +	    op != DRM_XE_VM_BIND_OP_UNMAP && op != DRM_XE_VM_BIND_OP_UNMAP_ALL)
>>>> +		if (XE_IOCTL_DBG(xe, xe_pxp_key_check(xe->pxp, bo) != 0))
>>>> +			return -ENOEXEC;
>>>> +
>>>> return 0;
>>>> }
>>>> @@ -3038,6 +3046,8 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>>>> 		u32 obj = bind_ops[i].obj;
>>>> 		u64 obj_offset = bind_ops[i].obj_offset;
>>>> 		u16 pat_index = bind_ops[i].pat_index;
>>>> +		u32 op = bind_ops[i].op;
>>>> +		u32 bind_flags = bind_ops[i].flags;
>>>> 		if (!obj)
>>>> 			continue;
>>>> @@ -3050,7 +3060,8 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>>>> 		bos[i] = gem_to_xe_bo(gem_obj);
>>>> 		err = xe_vm_bind_ioctl_validate_bo(xe, bos[i], addr, range,
>>>> -						   obj_offset, pat_index);
>>>> +						   obj_offset, pat_index, op,
>>>> +						   bind_flags);
>>>> 		if (err)
>>>> 			goto put_obj;
>>>> 	}
>>>> @@ -3343,6 +3354,35 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
>>>> 	return ret;
>>>> }
>>>> +int xe_vm_validate_protected(struct xe_vm *vm)
>>>> +{
>>>> + struct drm_gpuva *gpuva;
>>>> + int err = 0;
>>>> +
>>>> + if (!vm)
>>>> + return -ENODEV;
>>>> +
>>>> + mutex_lock(&vm->snap_mutex);
>>>> +
>>>> + drm_gpuvm_for_each_va(gpuva, &vm->gpuvm) {
>>>> + struct xe_vma *vma = gpuva_to_vma(gpuva);
>>>> + struct xe_bo *bo = vma->gpuva.gem.obj ?
>>>> + gem_to_xe_bo(vma->gpuva.gem.obj) : NULL;
>>>> +
>>>> + if (!bo)
>>>> + continue;
>>>> +
>>>> + if (xe_bo_is_protected(bo)) {
>>>> + err = xe_pxp_key_check(vm->xe->pxp, bo);
>>>> + if (err)
>>>> + break;
>>>> + }
>>>> + }
>>>> +
>>>> + mutex_unlock(&vm->snap_mutex);
>>>> + return err;
>>>> +}
>>>> +
>>>> struct xe_vm_snapshot {
>>>> unsigned long num_snaps;
>>>> struct {
>>>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>>>> index bfc19e8113c3..dd51c9790dab 100644
>>>> --- a/drivers/gpu/drm/xe/xe_vm.h
>>>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>>>> @@ -216,6 +216,8 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma,
>>>> int xe_vm_invalidate_vma(struct xe_vma *vma);
>>>> +int xe_vm_validate_protected(struct xe_vm *vm);
>>>> +
>>>> static inline void xe_vm_queue_rebind_worker(struct xe_vm *vm)
>>>> {
>>>> xe_assert(vm->xe, xe_vm_in_preempt_fence_mode(vm));
>>>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>>>> index 9972ceb3fbfb..335febe03e40 100644
>>>> --- a/include/uapi/drm/xe_drm.h
>>>> +++ b/include/uapi/drm/xe_drm.h
>>>> @@ -776,8 +776,23 @@ struct drm_xe_device_query {
>>>>  * - %DRM_XE_GEM_CPU_CACHING_WC - Allocate the pages as write-combined. This
>>>>  *   is uncached. Scanout surfaces should likely use this. All objects
>>>>  *   that can be placed in VRAM must use this.
>>>> + *
>>>> + * This ioctl supports setting the following properties via the
>>>> + * %DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY extension, which uses the
>>>> + * generic @drm_xe_ext_set_property struct:
>>>> + *
>>>> + * - %DRM_XE_GEM_CREATE_SET_PROPERTY_PXP_TYPE - set the type of PXP session
>>>> + *   this object will be used with. Valid values are listed in enum
>>>> + *   drm_xe_pxp_session_type. %DRM_XE_PXP_TYPE_NONE is the default behavior, so
>>>> + *   there is no need to explicitly set that. Objects used with a session of type
>>>> + *   %DRM_XE_PXP_TYPE_HWDRM will be marked as invalid if a PXP invalidation
>>>> + *   event occurs after their creation. Attempting to flip an invalid object
>>>> + *   will cause a black frame to be displayed instead. Submissions with invalid
>>>> + *   objects mapped in the VM will be rejected.
>>> Again, seems like the per type descriptions should be collected
>>> together in the type enum.
>>
>>
>> This is how this ioctl handles those values, so IMO they belong here.
>>
>> Daniele
>>
>>
>>>
>>> John.
>>>
>>>>  */
>>>> struct drm_xe_gem_create {
>>>> +#define DRM_XE_GEM_CREATE_EXTENSION_SET_PROPERTY	0
>>>> +#define DRM_XE_GEM_CREATE_SET_PROPERTY_PXP_TYPE		0
>>>> 	/** @extensions: Pointer to the first extension struct, if any */
>>>> 	__u64 extensions;
>>>> @@ -939,6 +954,9 @@ struct drm_xe_vm_destroy {
>>>>  *   will only be valid for DRM_XE_VM_BIND_OP_MAP operations, the BO
>>>>  *   handle MBZ, and the BO offset MBZ. This flag is intended to
>>>>  *   implement VK sparse bindings.
>>>> + * - %DRM_XE_VM_BIND_FLAG_CHECK_PXP - If the object is encrypted via PXP,
>>>> + *   reject the binding if the encryption key is no longer valid. This
>>>> + *   flag has no effect on BOs that are not marked as using PXP.
>>>>  */
>>>> struct drm_xe_vm_bind_op {
>>>> 	/** @extensions: Pointer to the first extension struct, if any */
>>>> @@ -1029,6 +1047,7 @@ struct drm_xe_vm_bind_op {
>>>> #define DRM_XE_VM_BIND_FLAG_IMMEDIATE	(1 << 1)
>>>> #define DRM_XE_VM_BIND_FLAG_NULL	(1 << 2)
>>>> #define DRM_XE_VM_BIND_FLAG_DUMPABLE	(1 << 3)
>>>> +#define DRM_XE_VM_BIND_FLAG_CHECK_PXP	(1 << 4)
>>>> 	/** @flags: Bind flags */
>>>> 	__u32 flags;
>>>
>
end of thread, other threads: [~2024-11-15 18:04 UTC | newest]
Thread overview: 54+ messages
2024-08-16 19:00 [PATCH v2 00/12] Add PXP HWDRM support Daniele Ceraolo Spurio
2024-08-16 19:00 ` [PATCH v2 01/12] drm/xe/pxp: Initialize PXP structure and KCR reg Daniele Ceraolo Spurio
2024-10-04 20:29 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 02/12] drm/xe/pxp: Allocate PXP execution resources Daniele Ceraolo Spurio
2024-08-19 9:19 ` Jani Nikula
2024-10-04 20:30 ` John Harrison
2024-11-06 22:25 ` Daniele Ceraolo Spurio
2024-08-16 19:00 ` [PATCH v2 03/12] drm/xe/pxp: Add VCS inline termination support Daniele Ceraolo Spurio
2024-10-04 22:25 ` John Harrison
2024-11-06 23:49 ` Daniele Ceraolo Spurio
2024-11-14 18:46 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 04/12] drm/xe/pxp: Add GSC session invalidation support Daniele Ceraolo Spurio
2024-10-07 20:05 ` John Harrison
2024-11-07 0:15 ` Daniele Ceraolo Spurio
2024-08-16 19:00 ` [PATCH v2 05/12] drm/xe/pxp: Handle the PXP termination interrupt Daniele Ceraolo Spurio
2024-10-08 0:34 ` John Harrison
2024-11-07 0:33 ` Daniele Ceraolo Spurio
2024-11-14 19:46 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 06/12] drm/xe/pxp: Add GSC session initialization support Daniele Ceraolo Spurio
2024-10-08 18:43 ` John Harrison
2024-11-07 22:37 ` Daniele Ceraolo Spurio
2024-11-14 20:36 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 07/12] drm/xe/pxp: Add spport for PXP-using queues Daniele Ceraolo Spurio
2024-10-08 23:55 ` John Harrison
2024-11-07 23:57 ` Daniele Ceraolo Spurio
2024-11-14 21:20 ` John Harrison
2024-11-14 21:39 ` Daniele Ceraolo Spurio
2024-11-15 0:47 ` Daniele Ceraolo Spurio
2024-10-09 10:07 ` Jani Nikula
2024-08-16 19:00 ` [PATCH v2 08/12] drm/xe/pxp: add a query for PXP status Daniele Ceraolo Spurio
2024-10-09 0:09 ` John Harrison
2024-11-12 21:29 ` Daniele Ceraolo Spurio
2024-08-16 19:00 ` [PATCH v2 09/12] drm/xe/pxp: Add API to mark a BO as using PXP Daniele Ceraolo Spurio
2024-10-09 0:42 ` John Harrison
2024-11-12 22:23 ` Daniele Ceraolo Spurio
2024-11-15 17:49 ` John Harrison
2024-11-15 18:03 ` Daniele Ceraolo Spurio
2024-08-16 19:00 ` [PATCH v2 10/12] drm/xe/pxp: add PXP PM support Daniele Ceraolo Spurio
2024-08-26 21:55 ` Daniele Ceraolo Spurio
2024-10-09 1:12 ` John Harrison
2024-11-12 22:27 ` Daniele Ceraolo Spurio
2024-08-16 19:00 ` [PATCH v2 11/12] drm/xe/pxp: Add PXP debugfs support Daniele Ceraolo Spurio
2024-10-09 1:26 ` John Harrison
2024-08-16 19:00 ` [PATCH v2 12/12] drm/xe/pxp: Enable PXP for MTL and LNL Daniele Ceraolo Spurio
2024-10-09 1:27 ` John Harrison
2024-08-16 19:06 ` ✓ CI.Patch_applied: success for Add PXP HWDRM support (rev2) Patchwork
2024-08-16 19:07 ` ✗ CI.checkpatch: warning " Patchwork
2024-08-16 19:08 ` ✓ CI.KUnit: success " Patchwork
2024-08-16 19:23 ` ✓ CI.Build: " Patchwork
2024-08-16 19:25 ` ✗ CI.Hooks: failure " Patchwork
2024-08-16 19:27 ` ✓ CI.checksparse: success " Patchwork
2024-08-16 20:11 ` ✗ CI.BAT: failure " Patchwork
2024-08-17 4:53 ` ✗ CI.FULL: " Patchwork
2024-08-19 14:33 ` [PATCH v2 00/12] Add PXP HWDRM support Souza, Jose