Intel-XE Archive on lore.kernel.org
* [PATCH v2 00/17] PF: Add sriov_admin sysfs tree
@ 2025-10-28 17:58 Michal Wajdeczko
  2025-10-28 17:58 ` [PATCH v2 01/17] drm/xe/pf: Prepare sysfs for SR-IOV admin attributes Michal Wajdeczko
                   ` (20 more replies)
  0 siblings, 21 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe
  Cc: Michal Wajdeczko, Lucas De Marchi, Rodrigo Vivi, Tvrtko Ursulin,
	Matthew Brost

To allow the admin to provision VFs using production-quality vGPU
profiles, we need to expose some of the SR-IOV knobs at the sysfs
level, as the existing debugfs entries may not be available.

Start with some basic scheduling parameters and apply them to all
tiles/GTs where a VF can run. Provisioning of the other hard
resources (like GGTT, VRAM) is still fully done by the PF, which
uses fair auto-provisioning, but we will try to expose that to the
admin too (after this series).

Below is an example of the new sysfs attributes tree on the PF
with 2 of 3 VFs enabled (VFs were limited by max_vfs config):

  /sys/bus/pci/drivers/xe/0000:00:02.0/sriov_admin
  ├── .bulk_profile
  │   ├── exec_quantum_ms
  │   ├── preempt_timeout_us
  │   └── sched_priority
  ├── pf
  │   ├── device -> ../../../0000:00:02.0
  │   └── profile
  │       ├── exec_quantum_ms
  │       ├── preempt_timeout_us
  │       └── sched_priority
  ├── vf1
  │   ├── device -> ../../../0000:00:02.1
  │   ├── profile
  │   │   ├── exec_quantum_ms
  │   │   ├── preempt_timeout_us
  │   │   └── sched_priority
  │   └── stop
  ├── vf2
  │   ├── device -> ../../../0000:00:02.2
  │   ├── profile
  │   │   ├── exec_quantum_ms
  │   │   ├── preempt_timeout_us
  │   │   └── sched_priority
  │   └── stop
  └── vf3
      ├── profile
      │   ├── exec_quantum_ms
      │   ├── preempt_timeout_us
      │   └── sched_priority
      └── stop
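For illustration, an admin tool could drive the tree above with plain file
I/O. Below is a minimal userspace sketch; the helper names and the
error-handling policy are ours for this example, not part of the series:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>

/*
 * Write a decimal u32 to a sysfs-style attribute file, e.g.
 * .../sriov_admin/vf1/profile/exec_quantum_ms (path is up to the caller).
 * Returns 0 on success or a negative errno-style code on failure.
 */
static int sysfs_write_u32(const char *path, unsigned int value)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -errno;
	if (fprintf(f, "%u\n", value) < 0) {
		fclose(f);
		return -EIO;
	}
	return fclose(f) ? -errno : 0;
}

/* Read a decimal u32 back from an attribute file. */
static int sysfs_read_u32(const char *path, unsigned int *value)
{
	FILE *f = fopen(path, "r");
	int ret;

	if (!f)
		return -errno;
	ret = fscanf(f, "%u", value) == 1 ? 0 : -EINVAL;
	fclose(f);
	return ret;
}
```

With these helpers, setting a 25 ms execution quantum for vf1 would be a
single `sysfs_write_u32(".../sriov_admin/vf1/profile/exec_quantum_ms", 25)`.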

Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Cc: Matthew Brost <matthew.brost@intel.com>

v1: https://patchwork.freedesktop.org/series/156220/#rev1
v2:
    drop dbg messages about unexpected attr usage (Lucas)
    use single lock while configuring all GTs (Lucas)
    drop redundant comment (Piotr)
    drop reset file (Rodrigo, Lucas)
    update kernel version (Rodrigo)

Michal Wajdeczko (17):
  drm/xe/pf: Prepare sysfs for SR-IOV admin attributes
  drm/xe/pf: Take RPM during calls to SR-IOV attr.store()
  drm/xe/pf: Add _locked variants of the VF EQ config functions
  drm/xe/pf: Add _locked variants of the VF PT config functions
  drm/xe/pf: Allow change PF and VFs EQ/PT using sysfs
  drm/xe/pf: Relax report helper to accept PF in bulk configs
  drm/xe/pf: Fix signature of internal config helpers
  drm/xe/pf: Add functions to bulk configure EQ/PT on GT
  drm/xe/pf: Add functions to bulk provision EQ/PT
  drm/xe/pf: Allow bulk change all VFs EQ/PT using sysfs
  drm/xe/pf: Add functions to provision scheduling priority
  drm/xe/pf: Allow bulk change all VFs priority using sysfs
  drm/xe/pf: Allow change PF scheduling priority using sysfs
  drm/xe/pf: Promote xe_pci_sriov_get_vf_pdev
  drm/xe/pf: Add sysfs device symlinks to enabled VFs
  drm/xe/pf: Allow to stop and reset VF using sysfs
  drm/xe/pf: Add documentation for sriov_admin attributes

 .../ABI/testing/sysfs-driver-intel-xe-sriov   | 160 +++++
 drivers/gpu/drm/xe/Makefile                   |   1 +
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c    | 186 ++++-
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h    |  10 +
 drivers/gpu/drm/xe/xe_pci_sriov.c             |  41 +-
 drivers/gpu/drm/xe/xe_pci_sriov.h             |   1 +
 drivers/gpu/drm/xe/xe_sriov_pf.c              |   5 +
 drivers/gpu/drm/xe/xe_sriov_pf_provision.c    | 273 ++++++++
 drivers/gpu/drm/xe/xe_sriov_pf_provision.h    |  14 +
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c        | 645 ++++++++++++++++++
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.h        |  16 +
 drivers/gpu/drm/xe/xe_sriov_pf_types.h        |  11 +
 12 files changed, 1318 insertions(+), 45 deletions(-)
 create mode 100644 Documentation/ABI/testing/sysfs-driver-intel-xe-sriov
 create mode 100644 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
 create mode 100644 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.h

-- 
2.47.1


^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH v2 01/17] drm/xe/pf: Prepare sysfs for SR-IOV admin attributes
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-28 17:58 ` [PATCH v2 02/17] drm/xe/pf: Take RPM during calls to SR-IOV attr.store() Michal Wajdeczko
                   ` (19 subsequent siblings)
  20 siblings, 0 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Lucas De Marchi, Rodrigo Vivi

We already have some SR-IOV specific knobs exposed as debugfs
files to allow low-level tuning of the SR-IOV configurations,
but those files are mainly intended for developers, and debugfs
might not be available on production builds.

Start building a dedicated sysfs sub-tree under the xe device,
where upcoming patches will add selected attributes that will
help provision and manage the PF and all VFs:

  /sys/bus/pci/drivers/xe/BDF/
  ├── sriov_admin/
      ├── pf/
      ├── vf1/
      ├── vf2/
      :
      └── vfN/

Add all required data types and helper macros that will be used
by upcoming patches to define actual attributes.
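The wrapper types let the generic sysfs callbacks recover the typed
attribute (and the backing device) via container_of(). A standalone
userspace sketch of that dispatch pattern, with simplified stand-in types
rather than the driver's actual structures:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel types; illustrative only. */
struct attribute {
	const char *name;
};

struct xe_vf_attr {
	struct attribute attr;		/* embedded generic attribute */
	int (*show)(unsigned int vfid);	/* typed handler */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define to_xe_vf_attr(p) container_of(p, struct xe_vf_attr, attr)

static int demo_show(unsigned int vfid)
{
	return (int)vfid * 10;	/* stand-in for formatting a value */
}

static struct xe_vf_attr demo = {
	.attr = { .name = "demo" },
	.show = demo_show,
};

/* What a generic .show dispatcher does: recover the typed attribute. */
static int dispatch_show(struct attribute *attr, unsigned int vfid)
{
	struct xe_vf_attr *vattr = to_xe_vf_attr(attr);

	if (!vattr->show)
		return -1;	/* the kernel code returns -EPERM here */
	return vattr->show(vfid);
}
```

The sysfs core only ever sees `struct attribute *`; the macros in the
patch generate exactly this kind of embed-and-recover pair per attribute.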

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
---
v2: drop dbg messages about unexpected attr usage (Lucas)
    rename new_xe_sriov_kobj (Lucas)
---
 drivers/gpu/drm/xe/Makefile            |   1 +
 drivers/gpu/drm/xe/xe_sriov_pf.c       |   5 +
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 287 +++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.h |  13 ++
 drivers/gpu/drm/xe/xe_sriov_pf_types.h |  11 +
 5 files changed, 317 insertions(+)
 create mode 100644 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
 create mode 100644 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.h

diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 82c6b3d29676..e84811783115 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -178,6 +178,7 @@ xe-$(CONFIG_PCI_IOV) += \
 	xe_sriov_pf_debugfs.o \
 	xe_sriov_pf_provision.o \
 	xe_sriov_pf_service.o \
+	xe_sriov_pf_sysfs.o \
 	xe_tile_sriov_pf_debugfs.o
 
 # include helpers for tests even when XE is built-in
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf.c b/drivers/gpu/drm/xe/xe_sriov_pf.c
index bc1ab9ee31d9..b8af93eb5b5f 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf.c
@@ -16,6 +16,7 @@
 #include "xe_sriov_pf.h"
 #include "xe_sriov_pf_helpers.h"
 #include "xe_sriov_pf_service.h"
+#include "xe_sriov_pf_sysfs.h"
 #include "xe_sriov_printk.h"
 
 static unsigned int wanted_max_vfs(struct xe_device *xe)
@@ -128,6 +129,10 @@ int xe_sriov_pf_init_late(struct xe_device *xe)
 			return err;
 	}
 
+	err = xe_sriov_pf_sysfs_init(xe);
+	if (err)
+		return err;
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
new file mode 100644
index 000000000000..0f2b19ca873e
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
@@ -0,0 +1,287 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#include <linux/kobject.h>
+#include <linux/sysfs.h>
+
+#include <drm/drm_managed.h>
+
+#include "xe_assert.h"
+#include "xe_sriov.h"
+#include "xe_sriov_pf_helpers.h"
+#include "xe_sriov_pf_sysfs.h"
+#include "xe_sriov_printk.h"
+
+/*
+ * /sys/bus/pci/drivers/xe/BDF/
+ * :
+ * ├── sriov_admin/
+ *     ├── ...
+ *     ├── pf/
+ *     │   ├── ...
+ *     │   └── ...
+ *     ├── vf1/
+ *     │   ├── ...
+ *     │   └── ...
+ *     ├── vf2/
+ *     :
+ *     └── vfN/
+ */
+
+struct xe_sriov_kobj {
+	struct kobject base;
+	struct xe_device *xe;
+	unsigned int vfid;
+};
+#define to_xe_sriov_kobj(p) container_of_const((p), struct xe_sriov_kobj, base)
+
+struct xe_sriov_dev_attr {
+	struct attribute attr;
+	ssize_t (*show)(struct xe_device *xe, char *buf);
+	ssize_t (*store)(struct xe_device *xe, const char *buf, size_t count);
+};
+#define to_xe_sriov_dev_attr(p) container_of_const((p), struct xe_sriov_dev_attr, attr)
+
+#define XE_SRIOV_DEV_ATTR(NAME) \
+struct xe_sriov_dev_attr xe_sriov_dev_attr_##NAME = \
+	__ATTR(NAME, 0644, xe_sriov_dev_attr_##NAME##_show, xe_sriov_dev_attr_##NAME##_store)
+
+#define XE_SRIOV_DEV_ATTR_RO(NAME) \
+struct xe_sriov_dev_attr xe_sriov_dev_attr_##NAME = \
+	__ATTR(NAME, 0444, xe_sriov_dev_attr_##NAME##_show, NULL)
+
+#define XE_SRIOV_DEV_ATTR_WO(NAME) \
+struct xe_sriov_dev_attr xe_sriov_dev_attr_##NAME = \
+	__ATTR(NAME, 0200, NULL, xe_sriov_dev_attr_##NAME##_store)
+
+struct xe_sriov_vf_attr {
+	struct attribute attr;
+	ssize_t (*show)(struct xe_device *xe, unsigned int vfid, char *buf);
+	ssize_t (*store)(struct xe_device *xe, unsigned int vfid, const char *buf, size_t count);
+};
+#define to_xe_sriov_vf_attr(p) container_of_const((p), struct xe_sriov_vf_attr, attr)
+
+#define XE_SRIOV_VF_ATTR(NAME) \
+struct xe_sriov_vf_attr xe_sriov_vf_attr_##NAME = \
+	__ATTR(NAME, 0644, xe_sriov_vf_attr_##NAME##_show, xe_sriov_vf_attr_##NAME##_store)
+
+#define XE_SRIOV_VF_ATTR_RO(NAME) \
+struct xe_sriov_vf_attr xe_sriov_vf_attr_##NAME = \
+	__ATTR(NAME, 0444, xe_sriov_vf_attr_##NAME##_show, NULL)
+
+#define XE_SRIOV_VF_ATTR_WO(NAME) \
+struct xe_sriov_vf_attr xe_sriov_vf_attr_##NAME = \
+	__ATTR(NAME, 0200, NULL, xe_sriov_vf_attr_##NAME##_store)
+
+/* device level attributes go here */
+
+static const struct attribute_group *xe_sriov_dev_attr_groups[] = {
+	NULL
+};
+
+/* and VF-level attributes go here */
+
+static const struct attribute_group *xe_sriov_vf_attr_groups[] = {
+	NULL
+};
+
+/* no user serviceable parts below */
+
+static struct kobject *create_xe_sriov_kobj(struct xe_device *xe, unsigned int vfid)
+{
+	struct xe_sriov_kobj *vkobj;
+
+	xe_sriov_pf_assert_vfid(xe, vfid);
+
+	vkobj = kzalloc(sizeof(*vkobj), GFP_KERNEL);
+	if (!vkobj)
+		return NULL;
+
+	vkobj->xe = xe;
+	vkobj->vfid = vfid;
+	return &vkobj->base;
+}
+
+static void release_xe_sriov_kobj(struct kobject *kobj)
+{
+	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
+
+	kfree(vkobj);
+}
+
+static ssize_t xe_sriov_dev_attr_show(struct kobject *kobj, struct attribute *attr, char *buf)
+{
+	struct xe_sriov_dev_attr *vattr = to_xe_sriov_dev_attr(attr);
+	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
+	struct xe_device *xe = vkobj->xe;
+
+	if (!vattr->show)
+		return -EPERM;
+
+	return vattr->show(xe, buf);
+}
+
+static ssize_t xe_sriov_dev_attr_store(struct kobject *kobj, struct attribute *attr,
+				       const char *buf, size_t count)
+{
+	struct xe_sriov_dev_attr *vattr = to_xe_sriov_dev_attr(attr);
+	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
+	struct xe_device *xe = vkobj->xe;
+
+	if (!vattr->store)
+		return -EPERM;
+
+	return vattr->store(xe, buf, count);
+}
+
+static ssize_t xe_sriov_vf_attr_show(struct kobject *kobj, struct attribute *attr, char *buf)
+{
+	struct xe_sriov_vf_attr *vattr = to_xe_sriov_vf_attr(attr);
+	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
+	struct xe_device *xe = vkobj->xe;
+	unsigned int vfid = vkobj->vfid;
+
+	xe_sriov_pf_assert_vfid(xe, vfid);
+
+	if (!vattr->show)
+		return -EPERM;
+
+	return vattr->show(xe, vfid, buf);
+}
+
+static ssize_t xe_sriov_vf_attr_store(struct kobject *kobj, struct attribute *attr,
+				      const char *buf, size_t count)
+{
+	struct xe_sriov_vf_attr *vattr = to_xe_sriov_vf_attr(attr);
+	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
+	struct xe_device *xe = vkobj->xe;
+	unsigned int vfid = vkobj->vfid;
+
+	xe_sriov_pf_assert_vfid(xe, vfid);
+
+	if (!vattr->store)
+		return -EPERM;
+
+	return vattr->store(xe, vfid, buf, count);
+}
+
+static const struct sysfs_ops xe_sriov_dev_sysfs_ops = {
+	.show = xe_sriov_dev_attr_show,
+	.store = xe_sriov_dev_attr_store,
+};
+
+static const struct sysfs_ops xe_sriov_vf_sysfs_ops = {
+	.show = xe_sriov_vf_attr_show,
+	.store = xe_sriov_vf_attr_store,
+};
+
+static const struct kobj_type xe_sriov_dev_ktype = {
+	.release = release_xe_sriov_kobj,
+	.sysfs_ops = &xe_sriov_dev_sysfs_ops,
+	.default_groups = xe_sriov_dev_attr_groups,
+};
+
+static const struct kobj_type xe_sriov_vf_ktype = {
+	.release = release_xe_sriov_kobj,
+	.sysfs_ops = &xe_sriov_vf_sysfs_ops,
+	.default_groups = xe_sriov_vf_attr_groups,
+};
+
+static int pf_sysfs_error(struct xe_device *xe, int err, const char *what)
+{
+	if (IS_ENABLED(CONFIG_DRM_XE_DEBUG))
+		xe_sriov_dbg(xe, "Failed to setup sysfs %s (%pe)\n", what, ERR_PTR(err));
+	return err;
+}
+
+static void action_put_kobject(void *arg)
+{
+	struct kobject *kobj = arg;
+
+	kobject_put(kobj);
+}
+
+static int pf_setup_root(struct xe_device *xe)
+{
+	struct kobject *parent = &xe->drm.dev->kobj;
+	struct kobject *root;
+	int err;
+
+	root = create_xe_sriov_kobj(xe, PFID);
+	if (!root)
+		return pf_sysfs_error(xe, -ENOMEM, "root obj");
+
+	err = devm_add_action_or_reset(xe->drm.dev, action_put_kobject, root);
+	if (err)
+		return pf_sysfs_error(xe, err, "root action");
+
+	err = kobject_init_and_add(root, &xe_sriov_dev_ktype, parent, "sriov_admin");
+	if (err)
+		return pf_sysfs_error(xe, err, "root init");
+
+	xe_assert(xe, IS_SRIOV_PF(xe));
+	xe_assert(xe, !xe->sriov.pf.sysfs.root);
+	xe->sriov.pf.sysfs.root = root;
+	return 0;
+}
+
+static int pf_setup_tree(struct xe_device *xe)
+{
+	unsigned int totalvfs = xe_sriov_pf_get_totalvfs(xe);
+	struct kobject *root, *kobj;
+	unsigned int n;
+	int err;
+
+	xe_assert(xe, IS_SRIOV_PF(xe));
+	root = xe->sriov.pf.sysfs.root;
+
+	for (n = 0; n <= totalvfs; n++) {
+		kobj = create_xe_sriov_kobj(xe, VFID(n));
+		if (!kobj)
+			return pf_sysfs_error(xe, -ENOMEM, "tree obj");
+
+		err = devm_add_action_or_reset(xe->drm.dev, action_put_kobject, kobj);
+		if (err)
+			return pf_sysfs_error(xe, err, "tree action");
+
+		if (n)
+			err = kobject_init_and_add(kobj, &xe_sriov_vf_ktype,
+						   root, "vf%u", n);
+		else
+			err = kobject_init_and_add(kobj, &xe_sriov_vf_ktype,
+						   root, "pf");
+		if (err)
+			return pf_sysfs_error(xe, err, "tree init");
+
+		xe_assert(xe, !xe->sriov.pf.vfs[n].kobj);
+		xe->sriov.pf.vfs[n].kobj = kobj;
+	}
+
+	return 0;
+}
+
+/**
+ * xe_sriov_pf_sysfs_init() - Setup PF's SR-IOV sysfs tree.
+ * @xe: the PF &xe_device to setup sysfs
+ *
+ * This function will create additional nodes representing the PF and VF
+ * devices, each populated with Xe-specific SR-IOV attributes.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_sriov_pf_sysfs_init(struct xe_device *xe)
+{
+	int err;
+
+	err = pf_setup_root(xe);
+	if (err)
+		return err;
+
+	err = pf_setup_tree(xe);
+	if (err)
+		return err;
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.h b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.h
new file mode 100644
index 000000000000..1e6698cc29d3
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_SRIOV_PF_SYSFS_H_
+#define _XE_SRIOV_PF_SYSFS_H_
+
+struct xe_device;
+
+int xe_sriov_pf_sysfs_init(struct xe_device *xe);
+
+#endif
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_types.h b/drivers/gpu/drm/xe/xe_sriov_pf_types.h
index c753cd59aed2..b3cd9797194b 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_types.h
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_types.h
@@ -12,10 +12,15 @@
 #include "xe_sriov_pf_provision_types.h"
 #include "xe_sriov_pf_service_types.h"
 
+struct kobject;
+
 /**
  * struct xe_sriov_metadata - per-VF device level metadata
  */
 struct xe_sriov_metadata {
+	/** @kobj: kobject representing VF in PF's SR-IOV sysfs tree. */
+	struct kobject *kobj;
+
 	/** @version: negotiated VF/PF ABI version */
 	struct xe_sriov_pf_service_version version;
 };
@@ -42,6 +47,12 @@ struct xe_device_pf {
 	/** @service: device level service data. */
 	struct xe_sriov_pf_service service;
 
+	/** @sysfs: device level sysfs data. */
+	struct {
+		/** @sysfs.root: the root kobject for all SR-IOV entries in sysfs. */
+		struct kobject *root;
+	} sysfs;
+
 	/** @vfs: metadata for all VFs. */
 	struct xe_sriov_metadata *vfs;
 };
-- 
2.47.1



* [PATCH v2 02/17] drm/xe/pf: Take RPM during calls to SR-IOV attr.store()
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
  2025-10-28 17:58 ` [PATCH v2 01/17] drm/xe/pf: Prepare sysfs for SR-IOV admin attributes Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-28 17:58 ` [PATCH v2 03/17] drm/xe/pf: Add _locked variants of the VF EQ config functions Michal Wajdeczko
                   ` (18 subsequent siblings)
  20 siblings, 0 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Lucas De Marchi

We expect that all SR-IOV attr.store() handlers will require an
active runtime PM reference. To simplify the implementation of
those handlers, take an implicit RPM reference on their behalf.
Also wait until the PF completes its restart.
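The wrapper chains the readiness check and the handler with GNU C's binary
conditional `a ?: b`, which evaluates `a` once and yields it when non-zero,
otherwise yields `b`. A minimal standalone sketch of that error-chaining
idiom (the helpers here are stand-ins for the real calls):

```c
#include <assert.h>

/* Stand-ins: 0 means ready/success, negative means an error code. */
static int wait_ready(int ready)
{
	return ready ? 0 : -11;	/* an -EAGAIN-like error when not ready */
}

static int do_store(void)
{
	return 7;	/* pretend "7 bytes consumed" */
}

/*
 * A failed readiness check (non-zero) short-circuits and becomes the
 * return value; only on success (0) does the store handler run.
 */
static int guarded_store(int ready)
{
	return wait_ready(ready) ?: do_store();
}
```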

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
---
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
index 0f2b19ca873e..439a0cd02a86 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
@@ -9,7 +9,9 @@
 #include <drm/drm_managed.h>
 
 #include "xe_assert.h"
+#include "xe_pm.h"
 #include "xe_sriov.h"
+#include "xe_sriov_pf.h"
 #include "xe_sriov_pf_helpers.h"
 #include "xe_sriov_pf_sysfs.h"
 #include "xe_sriov_printk.h"
@@ -129,11 +131,16 @@ static ssize_t xe_sriov_dev_attr_store(struct kobject *kobj, struct attribute *a
 	struct xe_sriov_dev_attr *vattr = to_xe_sriov_dev_attr(attr);
 	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
 	struct xe_device *xe = vkobj->xe;
+	ssize_t ret;
 
 	if (!vattr->store)
 		return -EPERM;
 
-	return vattr->store(xe, buf, count);
+	xe_pm_runtime_get(xe);
+	ret = xe_sriov_pf_wait_ready(xe) ?: vattr->store(xe, buf, count);
+	xe_pm_runtime_put(xe);
+
+	return ret;
 }
 
 static ssize_t xe_sriov_vf_attr_show(struct kobject *kobj, struct attribute *attr, char *buf)
@@ -158,13 +165,18 @@ static ssize_t xe_sriov_vf_attr_store(struct kobject *kobj, struct attribute *at
 	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
 	struct xe_device *xe = vkobj->xe;
 	unsigned int vfid = vkobj->vfid;
+	ssize_t ret;
 
 	xe_sriov_pf_assert_vfid(xe, vfid);
 
 	if (!vattr->store)
 		return -EPERM;
 
-	return vattr->store(xe, vfid, buf, count);
+	xe_pm_runtime_get(xe);
+	ret = xe_sriov_pf_wait_ready(xe) ?: vattr->store(xe, vfid, buf, count);
+	xe_pm_runtime_put(xe);
+
+	return ret;
 }
 
 static const struct sysfs_ops xe_sriov_dev_sysfs_ops = {
-- 
2.47.1



* [PATCH v2 03/17] drm/xe/pf: Add _locked variants of the VF EQ config functions
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
  2025-10-28 17:58 ` [PATCH v2 01/17] drm/xe/pf: Prepare sysfs for SR-IOV admin attributes Michal Wajdeczko
  2025-10-28 17:58 ` [PATCH v2 02/17] drm/xe/pf: Take RPM during calls to SR-IOV attr.store() Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-29  8:47   ` Piotr Piórkowski
  2025-10-28 17:58 ` [PATCH v2 04/17] drm/xe/pf: Add _locked variants of the VF PT " Michal Wajdeczko
                   ` (17 subsequent siblings)
  20 siblings, 1 reply; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Lucas De Marchi

In upcoming patches we will want to configure the VF's execution
quantum (EQ) on all GTs under a single lock, to avoid potential
races due to parallel GT configuration attempts.
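The `guard(mutex)(...)` used in the diff comes from linux/cleanup.h and
unlocks automatically when the guard goes out of scope. A rough userspace
approximation built on the same underlying `__attribute__((cleanup))`
mechanism, using pthreads (names here are illustrative):

```c
#include <assert.h>
#include <pthread.h>

/* Run when the guard variable leaves scope: drop the lock. */
static void unlock_cleanup(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
}

/* Lock now; the cleanup attribute guarantees the matching unlock. */
#define GUARD_MUTEX(m) \
	pthread_mutex_t *guard_ __attribute__((cleanup(unlock_cleanup))) = \
		(pthread_mutex_lock(m), (m))

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

static int set_value_locked(int *slot, int v)
{
	/* the kernel's _locked variants do lockdep_assert_held() here */
	*slot = v;
	return 0;
}

static int set_value(int *slot, int v)
{
	GUARD_MUTEX(&demo_lock);

	return set_value_locked(slot, v); /* unlock happens on return */
}
```

This mirrors the split in the patch: the `_locked` variant assumes the lock
is held, while the plain variant takes the guard and delegates.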

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 58 +++++++++++++++++-----
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h |  4 ++
 2 files changed, 49 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
index c0c0215c0703..717f81e76b8c 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
@@ -1732,29 +1732,65 @@ static int pf_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
 }
 
 /**
- * xe_gt_sriov_pf_config_set_exec_quantum - Configure execution quantum for the VF.
+ * xe_gt_sriov_pf_config_set_exec_quantum_locked() - Configure execution quantum of the VF.
  * @gt: the &xe_gt
  * @vfid: the VF identifier
  * @exec_quantum: requested execution quantum in milliseconds (0 is infinity)
  *
- * This function can only be called on PF.
+ * This function can only be called on PF with the master mutex held.
+ * It will log the provisioned value or an error in case of failure.
  *
  * Return: 0 on success or a negative error code on failure.
  */
-int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
-					   u32 exec_quantum)
+int xe_gt_sriov_pf_config_set_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid,
+						  u32 exec_quantum)
 {
 	int err;
 
-	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
+	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
+
 	err = pf_provision_exec_quantum(gt, vfid, exec_quantum);
-	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
 
 	return pf_config_set_u32_done(gt, vfid, exec_quantum,
-				      xe_gt_sriov_pf_config_get_exec_quantum(gt, vfid),
+				      pf_get_exec_quantum(gt, vfid),
 				      "execution quantum", exec_quantum_unit, err);
 }
 
+/**
+ * xe_gt_sriov_pf_config_set_exec_quantum - Configure execution quantum for the VF.
+ * @gt: the &xe_gt
+ * @vfid: the VF identifier
+ * @exec_quantum: requested execution quantum in milliseconds (0 is infinity)
+ *
+ * This function can only be called on PF.
+ * It will log the provisioned value or an error in case of failure.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
+					   u32 exec_quantum)
+{
+	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
+
+	return xe_gt_sriov_pf_config_set_exec_quantum_locked(gt, vfid, exec_quantum);
+}
+
+/**
+ * xe_gt_sriov_pf_config_get_exec_quantum_locked() - Get VF's execution quantum.
+ * @gt: the &xe_gt
+ * @vfid: the VF identifier
+ *
+ * This function can only be called on PF with the master mutex held.
+ *
+ * Return: VF's (or PF's) execution quantum in milliseconds.
+ */
+u32 xe_gt_sriov_pf_config_get_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid)
+{
+	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
+
+	return pf_get_exec_quantum(gt, vfid);
+}
+
 /**
  * xe_gt_sriov_pf_config_get_exec_quantum - Get VF's execution quantum.
  * @gt: the &xe_gt
@@ -1766,13 +1802,9 @@ int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
  */
 u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
 {
-	u32 exec_quantum;
+	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
 
-	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
-	exec_quantum = pf_get_exec_quantum(gt, vfid);
-	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
-
-	return exec_quantum;
+	return pf_get_exec_quantum(gt, vfid);
 }
 
 static const char *preempt_timeout_unit(u32 preempt_timeout)
diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
index 513e6512a575..b4beb5a97031 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
@@ -40,6 +40,10 @@ int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid, uns
 u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid);
 int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid, u32 exec_quantum);
 
+u32 xe_gt_sriov_pf_config_get_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid);
+int xe_gt_sriov_pf_config_set_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid,
+						  u32 exec_quantum);
+
 u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid);
 int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
 					      u32 preempt_timeout);
-- 
2.47.1



* [PATCH v2 04/17] drm/xe/pf: Add _locked variants of the VF PT config functions
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (2 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 03/17] drm/xe/pf: Add _locked variants of the VF EQ config functions Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-29 11:00   ` Piotr Piórkowski
  2025-10-29 20:27   ` Lucas De Marchi
  2025-10-28 17:58 ` [PATCH v2 05/17] drm/xe/pf: Allow change PF and VFs EQ/PT using sysfs Michal Wajdeczko
                   ` (16 subsequent siblings)
  20 siblings, 2 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Lucas De Marchi

In upcoming patches we will want to configure the VF's preemption
timeout (PT) on all GTs under a single lock, to avoid potential
races due to parallel GT configuration attempts.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 59 +++++++++++++++++-----
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h |  4 ++
 2 files changed, 49 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
index 717f81e76b8c..e48457bd7d12 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
@@ -1835,31 +1835,66 @@ static int pf_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid)
 }
 
 /**
- * xe_gt_sriov_pf_config_set_preempt_timeout - Configure preemption timeout for the VF.
+ * xe_gt_sriov_pf_config_set_preempt_timeout_locked() - Configure preemption timeout of the VF.
  * @gt: the &xe_gt
  * @vfid: the VF identifier
  * @preempt_timeout: requested preemption timeout in microseconds (0 is infinity)
  *
- * This function can only be called on PF.
+ * This function can only be called on PF with the master mutex held.
+ * It will log the provisioned value or an error in case of failure.
  *
  * Return: 0 on success or a negative error code on failure.
  */
-int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
-					      u32 preempt_timeout)
+int xe_gt_sriov_pf_config_set_preempt_timeout_locked(struct xe_gt *gt, unsigned int vfid,
+						     u32 preempt_timeout)
 {
 	int err;
 
-	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
+	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
+
 	err = pf_provision_preempt_timeout(gt, vfid, preempt_timeout);
-	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
 
 	return pf_config_set_u32_done(gt, vfid, preempt_timeout,
-				      xe_gt_sriov_pf_config_get_preempt_timeout(gt, vfid),
+				      pf_get_preempt_timeout(gt, vfid),
 				      "preemption timeout", preempt_timeout_unit, err);
 }
 
 /**
- * xe_gt_sriov_pf_config_get_preempt_timeout - Get VF's preemption timeout.
+ * xe_gt_sriov_pf_config_set_preempt_timeout() - Configure preemption timeout of the VF.
+ * @gt: the &xe_gt
+ * @vfid: the VF identifier
+ * @preempt_timeout: requested preemption timeout in microseconds (0 is infinity)
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
+					      u32 preempt_timeout)
+{
+	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
+
+	return xe_gt_sriov_pf_config_set_preempt_timeout_locked(gt, vfid, preempt_timeout);
+}
+
+/**
+ * xe_gt_sriov_pf_config_get_preempt_timeout_locked() - Get VF's preemption timeout.
+ * @gt: the &xe_gt
+ * @vfid: the VF identifier
+ *
+ * This function can only be called on PF with the master mutex held.
+ *
+ * Return: VF's (or PF's) preemption timeout in microseconds.
+ */
+u32 xe_gt_sriov_pf_config_get_preempt_timeout_locked(struct xe_gt *gt, unsigned int vfid)
+{
+	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
+
+	return pf_get_preempt_timeout(gt, vfid);
+}
+
+/**
+ * xe_gt_sriov_pf_config_get_preempt_timeout() - Get VF's preemption timeout.
  * @gt: the &xe_gt
  * @vfid: the VF identifier
  *
@@ -1869,13 +1904,9 @@ int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfi
  */
 u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid)
 {
-	u32 preempt_timeout;
+	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
 
-	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
-	preempt_timeout = pf_get_preempt_timeout(gt, vfid);
-	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
-
-	return preempt_timeout;
+	return pf_get_preempt_timeout(gt, vfid);
 }
 
 static const char *sched_priority_unit(u32 priority)
diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
index b4beb5a97031..6bab5ad6c849 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
@@ -48,6 +48,10 @@ u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfi
 int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
 					      u32 preempt_timeout);
 
+u32 xe_gt_sriov_pf_config_get_preempt_timeout_locked(struct xe_gt *gt, unsigned int vfid);
+int xe_gt_sriov_pf_config_set_preempt_timeout_locked(struct xe_gt *gt, unsigned int vfid,
+						     u32 preempt_timeout);
+
 u32 xe_gt_sriov_pf_config_get_sched_priority(struct xe_gt *gt, unsigned int vfid);
 int xe_gt_sriov_pf_config_set_sched_priority(struct xe_gt *gt, unsigned int vfid, u32 priority);
 
-- 
2.47.1



* [PATCH v2 05/17] drm/xe/pf: Allow change PF and VFs EQ/PT using sysfs
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (3 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 04/17] drm/xe/pf: Add _locked variants of the VF PT " Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-29 11:17   ` Piotr Piórkowski
  2025-10-29 20:26   ` Lucas De Marchi
  2025-10-28 17:58 ` [PATCH v2 06/17] drm/xe/pf: Relax report helper to accept PF in bulk configs Michal Wajdeczko
                   ` (15 subsequent siblings)
  20 siblings, 2 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Lucas De Marchi, Rodrigo Vivi

On current platforms, in SR-IOV virtualization, the GPU is shared
between VFs on a time-slice basis. The 'execution quantum' (EQ)
and 'preemption timeout' (PT) are the two main scheduling
parameters that can be set individually for each VF.

Add EQ/PT read-write attributes for the PF and all VFs.

By exposing those two parameters over sysfs, the admin can change
their default values (infinity) and let the GuC scheduler enforce
those settings.

 /sys/bus/pci/drivers/xe/BDF/
 ├── sriov_admin/
     ├── pf/
     │   └── profile
     │       ├── exec_quantum_ms	[RW] unsigned integer
     │       └── preempt_timeout_us	[RW] unsigned integer
     ├── vf1/
     │   └── profile
     │       ├── exec_quantum_ms	[RW] unsigned integer
     │       └── preempt_timeout_us	[RW] unsigned integer

Writing 0 to these files will set an infinite EQ/PT for the VF on
all tiles/GTs; this is the default value. Writing a non-zero integer
to these files will change the EQ/PT to the new value (in their
respective units: msec or usec).

Reading from these files will return the EQ/PT as previously set on
all tiles/GTs. If inconsistent values are detected, due to errors
or low-level configuration done using debugfs, an -EUCLEAN error
will be returned.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
v2: apply EQ/PT under single lock (Lucas)
---
 drivers/gpu/drm/xe/xe_sriov_pf_provision.c | 124 +++++++++++++++++++++
 drivers/gpu/drm/xe/xe_sriov_pf_provision.h |   8 ++
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c     |  54 ++++++++-
 3 files changed, 184 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
index 663fb0c045e9..c5b3a6aa67f4 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
@@ -152,3 +152,127 @@ int xe_sriov_pf_provision_set_mode(struct xe_device *xe, enum xe_sriov_provision
 	xe->sriov.pf.provision.mode = mode;
 	return 0;
 }
+
+/**
+ * xe_sriov_pf_provision_apply_vf_eq() - Change VF's execution quantum.
+ * @xe: the PF &xe_device
+ * @vfid: the VF identifier
+ * @eq: execution quantum in [ms] to set
+ *
+ * Change VF's execution quantum (EQ) provisioning on all tiles/GTs.
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_sriov_pf_provision_apply_vf_eq(struct xe_device *xe, unsigned int vfid, u32 eq)
+{
+	struct xe_gt *gt;
+	unsigned int id;
+	int result = 0;
+	int err;
+
+	guard(mutex)(xe_sriov_pf_master_mutex(xe));
+
+	for_each_gt(gt, xe, id) {
+		err = xe_gt_sriov_pf_config_set_exec_quantum_locked(gt, vfid, eq);
+		result = result ?: err;
+	}
+
+	return result;
+}
+
+/**
+ * xe_sriov_pf_provision_query_vf_eq() - Query VF's execution quantum.
+ * @xe: the PF &xe_device
+ * @vfid: the VF identifier
+ * @eq: placeholder for the returned execution quantum in [ms]
+ *
+ * Query VF's execution quantum (EQ) provisioning from all tiles/GTs.
+ * If values across tiles/GTs are inconsistent then -EUCLEAN error will be returned.
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_sriov_pf_provision_query_vf_eq(struct xe_device *xe, unsigned int vfid, u32 *eq)
+{
+	struct xe_gt *gt;
+	unsigned int id;
+	int count = 0;
+	u32 value;
+
+	guard(mutex)(xe_sriov_pf_master_mutex(xe));
+
+	for_each_gt(gt, xe, id) {
+		value = xe_gt_sriov_pf_config_get_exec_quantum_locked(gt, vfid);
+		if (!count++)
+			*eq = value;
+		else if (value != *eq)
+			return -EUCLEAN;
+	}
+
+	return !count ? -ENODATA : 0;
+}
+
+/**
+ * xe_sriov_pf_provision_apply_vf_pt() - Change VF's preemption timeout.
+ * @xe: the PF &xe_device
+ * @vfid: the VF identifier
+ * @pt: preemption timeout in [us] to set
+ *
+ * Change VF's preemption timeout (PT) provisioning on all tiles/GTs.
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_sriov_pf_provision_apply_vf_pt(struct xe_device *xe, unsigned int vfid, u32 pt)
+{
+	struct xe_gt *gt;
+	unsigned int id;
+	int result = 0;
+	int err;
+
+	guard(mutex)(xe_sriov_pf_master_mutex(xe));
+
+	for_each_gt(gt, xe, id) {
+		err = xe_gt_sriov_pf_config_set_preempt_timeout_locked(gt, vfid, pt);
+		result = result ?: err;
+	}
+
+	return result;
+}
+
+/**
+ * xe_sriov_pf_provision_query_vf_pt() - Query VF's preemption timeout.
+ * @xe: the PF &xe_device
+ * @vfid: the VF identifier
+ * @pt: placeholder for the returned preemption timeout in [us]
+ *
+ * Query VF's preemption timeout (PT) provisioning from all tiles/GTs.
+ * If values across tiles/GTs are inconsistent then -EUCLEAN error will be returned.
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_sriov_pf_provision_query_vf_pt(struct xe_device *xe, unsigned int vfid, u32 *pt)
+{
+	struct xe_gt *gt;
+	unsigned int id;
+	int count = 0;
+	u32 value;
+
+	guard(mutex)(xe_sriov_pf_master_mutex(xe));
+
+	for_each_gt(gt, xe, id) {
+		value = xe_gt_sriov_pf_config_get_preempt_timeout_locked(gt, vfid);
+		if (!count++)
+			*pt = value;
+		else if (value != *pt)
+			return -EUCLEAN;
+	}
+
+	return !count ? -ENODATA : 0;
+}
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
index cf3657a32e90..cb81b5880930 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
@@ -6,10 +6,18 @@
 #ifndef _XE_SRIOV_PF_PROVISION_H_
 #define _XE_SRIOV_PF_PROVISION_H_
 
+#include <linux/types.h>
+
 #include "xe_sriov_pf_provision_types.h"
 
 struct xe_device;
 
+int xe_sriov_pf_provision_apply_vf_eq(struct xe_device *xe, unsigned int vfid, u32 eq);
+int xe_sriov_pf_provision_query_vf_eq(struct xe_device *xe, unsigned int vfid, u32 *eq);
+
+int xe_sriov_pf_provision_apply_vf_pt(struct xe_device *xe, unsigned int vfid, u32 pt);
+int xe_sriov_pf_provision_query_vf_pt(struct xe_device *xe, unsigned int vfid, u32 *pt);
+
 int xe_sriov_pf_provision_vfs(struct xe_device *xe, unsigned int num_vfs);
 int xe_sriov_pf_unprovision_vfs(struct xe_device *xe, unsigned int num_vfs);
 
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
index 439a0cd02a86..f12d6752e9f1 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
@@ -13,6 +13,7 @@
 #include "xe_sriov.h"
 #include "xe_sriov_pf.h"
 #include "xe_sriov_pf_helpers.h"
+#include "xe_sriov_pf_provision.h"
 #include "xe_sriov_pf_sysfs.h"
 #include "xe_sriov_printk.h"
 
@@ -23,10 +24,14 @@
  *     ├── ...
  *     ├── pf/
  *     │   ├── ...
- *     │   └── ...
+ *     │   └── profile
+ *     │       ├── exec_quantum_ms
+ *     │       └── preempt_timeout_us
  *     ├── vf1/
  *     │   ├── ...
- *     │   └── ...
+ *     │   └── profile
+ *     │       ├── exec_quantum_ms
+ *     │       └── preempt_timeout_us
  *     ├── vf2/
  *     :
  *     └── vfN/
@@ -85,7 +90,52 @@ static const struct attribute_group *xe_sriov_dev_attr_groups[] = {
 
 /* and VF-level attributes go here */
 
+#define DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(NAME, ITEM, TYPE, FORMAT)		\
+static ssize_t xe_sriov_vf_attr_##NAME##_show(struct xe_device *xe, unsigned int vfid,	\
+					      char *buf)				\
+{											\
+	TYPE value = 0;									\
+	int err;									\
+											\
+	err = xe_sriov_pf_provision_query_vf_##ITEM(xe, vfid, &value);			\
+	if (err)									\
+		return err;								\
+											\
+	return sysfs_emit(buf, FORMAT, value);						\
+}											\
+											\
+static ssize_t xe_sriov_vf_attr_##NAME##_store(struct xe_device *xe, unsigned int vfid,	\
+					       const char *buf, size_t count)		\
+{											\
+	TYPE value;									\
+	int err;									\
+											\
+	err = kstrto##TYPE(buf, 0, &value);						\
+	if (err)									\
+		return err;								\
+											\
+	err = xe_sriov_pf_provision_apply_vf_##ITEM(xe, vfid, value);			\
+	return err ?: count;								\
+}											\
+											\
+static XE_SRIOV_VF_ATTR(NAME)
+
+DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(exec_quantum_ms, eq, u32, "%u\n");
+DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(preempt_timeout_us, pt, u32, "%u\n");
+
+static struct attribute *profile_vf_attrs[] = {
+	&xe_sriov_vf_attr_exec_quantum_ms.attr,
+	&xe_sriov_vf_attr_preempt_timeout_us.attr,
+	NULL
+};
+
+static const struct attribute_group profile_vf_attr_group = {
+	.name = "profile",
+	.attrs = profile_vf_attrs,
+};
+
 static const struct attribute_group *xe_sriov_vf_attr_groups[] = {
+	&profile_vf_attr_group,
 	NULL
 };
 
-- 
2.47.1



* [PATCH v2 06/17] drm/xe/pf: Relax report helper to accept PF in bulk configs
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (4 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 05/17] drm/xe/pf: Allow change PF and VFs EQ/PT using sysfs Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-28 17:58 ` [PATCH v2 07/17] drm/xe/pf: Fix signature of internal config helpers Michal Wajdeczko
                   ` (14 subsequent siblings)
  20 siblings, 0 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Lucas De Marchi

Our current bulk configuration requests cover only VFs, but we want
to add new functions that will also include PF configs. Update our
bulk report helper to also accept the PFID as the first VFID.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
index e48457bd7d12..343ab4a32cb1 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
@@ -924,7 +924,8 @@ static int pf_config_bulk_set_u32_done(struct xe_gt *gt, unsigned int first, uns
 				       const char *what, const char *(*unit)(u32),
 				       unsigned int last, int err)
 {
-	xe_gt_assert(gt, first);
+	char name[8];
+
 	xe_gt_assert(gt, num_vfs);
 	xe_gt_assert(gt, first <= last);
 
@@ -932,8 +933,9 @@ static int pf_config_bulk_set_u32_done(struct xe_gt *gt, unsigned int first, uns
 		return pf_config_set_u32_done(gt, first, value, get(gt, first), what, unit, err);
 
 	if (unlikely(err)) {
-		xe_gt_sriov_notice(gt, "Failed to bulk provision VF%u..VF%u with %s\n",
-				   first, first + num_vfs - 1, what);
+		xe_gt_sriov_notice(gt, "Failed to bulk provision %s..VF%u with %s\n",
+				   xe_sriov_function_name(first, name, sizeof(name)),
+				   first + num_vfs - 1, what);
 		if (last > first)
 			pf_config_bulk_set_u32_done(gt, first, last - first, value,
 						    get, what, unit, last, 0);
@@ -942,8 +944,9 @@ static int pf_config_bulk_set_u32_done(struct xe_gt *gt, unsigned int first, uns
 
 	/* pick actual value from first VF - bulk provisioning shall be equal across all VFs */
 	value = get(gt, first);
-	xe_gt_sriov_info(gt, "VF%u..VF%u provisioned with %u%s %s\n",
-			 first, first + num_vfs - 1, value, unit(value), what);
+	xe_gt_sriov_info(gt, "%s..VF%u provisioned with %u%s %s\n",
+			 xe_sriov_function_name(first, name, sizeof(name)),
+			 first + num_vfs - 1, value, unit(value), what);
 	return 0;
 }
 
-- 
2.47.1



* [PATCH v2 07/17] drm/xe/pf: Fix signature of internal config helpers
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (5 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 06/17] drm/xe/pf: Relax report helper to accept PF in bulk configs Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-29  8:02   ` Piotr Piórkowski
  2025-10-28 17:58 ` [PATCH v2 08/17] drm/xe/pf: Add functions to bulk configure EQ/PT on GT Michal Wajdeczko
                   ` (13 subsequent siblings)
  20 siblings, 1 reply; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko

Both pf_get_exec_quantum() and pf_get_preempt_timeout() should
return u32 as this is the type of the underlying data.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
index 343ab4a32cb1..6365d5f2ae98 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
@@ -1727,7 +1727,7 @@ static int pf_provision_exec_quantum(struct xe_gt *gt, unsigned int vfid,
 	return 0;
 }
 
-static int pf_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
+static u32 pf_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
 {
 	struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
 
@@ -1830,7 +1830,7 @@ static int pf_provision_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
 	return 0;
 }
 
-static int pf_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid)
+static u32 pf_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid)
 {
 	struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
 
-- 
2.47.1



* [PATCH v2 08/17] drm/xe/pf: Add functions to bulk configure EQ/PT on GT
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (6 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 07/17] drm/xe/pf: Fix signature of internal config helpers Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-29 13:59   ` Piotr Piórkowski
  2025-10-29 20:32   ` Lucas De Marchi
  2025-10-28 17:58 ` [PATCH v2 09/17] drm/xe/pf: Add functions to bulk provision EQ/PT Michal Wajdeczko
                   ` (12 subsequent siblings)
  20 siblings, 2 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko

We already have functions to bulk configure 'hard' resources like
GGTT, LMEM or GuC context/doorbell IDs. Now add functions for the
'soft' scheduling parameters, as we will need them in upcoming
patches.
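The bulk-set loop these functions introduce walks function 0 (the PF)
through the last VF and stops at the first failure; a simplified
stand-alone model (bulk_set() and the callbacks are hypothetical names,
not the driver code) is:

```c
#include <errno.h>

/*
 * Simplified model of the bulk-set loop described above: apply the
 * same value to the PF (function 0) and every VF, stop at the first
 * failure, and report how many functions were attempted.
 */
static int bulk_set(unsigned int totalvfs,
		    int (*provision)(unsigned int vfid, unsigned int value),
		    unsigned int value, unsigned int *done)
{
	unsigned int n;
	int err = 0;

	for (n = 0; n <= totalvfs; n++) {	/* n == 0 is the PF */
		err = provision(n, value);
		if (err)
			break;
	}

	*done = n;
	return err;
}

/* sample callbacks to exercise the early break */
static int always_ok(unsigned int vfid, unsigned int value)
{
	(void)vfid; (void)value;
	return 0;
}

static int fail_from_vf2(unsigned int vfid, unsigned int value)
{
	(void)value;
	return vfid >= 2 ? -EIO : 0;
}
```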

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
---
v2: add _locked variants instead
---
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 56 ++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h |  2 +
 2 files changed, 58 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
index 6365d5f2ae98..56048cd79d15 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
@@ -1810,6 +1810,34 @@ u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
 	return pf_get_exec_quantum(gt, vfid);
 }
 
+/**
+ * xe_gt_sriov_pf_config_bulk_set_exec_quantum_locked() - Configure EQ for PF and VFs.
+ * @gt: the &xe_gt to configure
+ * @exec_quantum: requested execution quantum in milliseconds (0 is infinity)
+ *
+ * This function can only be called on PF with the master mutex held.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_gt_sriov_pf_config_bulk_set_exec_quantum_locked(struct xe_gt *gt, u32 exec_quantum)
+{
+	unsigned int totalvfs = xe_gt_sriov_pf_get_totalvfs(gt);
+	unsigned int n;
+	int err = 0;
+
+	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
+
+	for (n = 0; n <= totalvfs; n++) {
+		err = pf_provision_exec_quantum(gt, VFID(n), exec_quantum);
+		if (err)
+			break;
+	}
+
+	return pf_config_bulk_set_u32_done(gt, 0, 1 + totalvfs, exec_quantum,
+					   pf_get_exec_quantum, "execution quantum",
+					   exec_quantum_unit, n, err);
+}
+
 static const char *preempt_timeout_unit(u32 preempt_timeout)
 {
 	return preempt_timeout ? "us" : "(infinity)";
@@ -1912,6 +1940,34 @@ u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfi
 	return pf_get_preempt_timeout(gt, vfid);
 }
 
+/**
+ * xe_gt_sriov_pf_config_bulk_set_preempt_timeout_locked() - Configure PT for PF and VFs.
+ * @gt: the &xe_gt to configure
+ * @preempt_timeout: requested preemption timeout in microseconds (0 is infinity)
+ *
+ * This function can only be called on PF with the master mutex held.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_gt_sriov_pf_config_bulk_set_preempt_timeout_locked(struct xe_gt *gt, u32 preempt_timeout)
+{
+	unsigned int totalvfs = xe_gt_sriov_pf_get_totalvfs(gt);
+	unsigned int n;
+	int err = 0;
+
+	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
+
+	for (n = 0; n <= totalvfs; n++) {
+		err = pf_provision_preempt_timeout(gt, VFID(n), preempt_timeout);
+		if (err)
+			break;
+	}
+
+	return pf_config_bulk_set_u32_done(gt, 0, 1 + totalvfs, preempt_timeout,
+					   pf_get_preempt_timeout, "preemption timeout",
+					   preempt_timeout_unit, n, err);
+}
+
 static const char *sched_priority_unit(u32 priority)
 {
 	return priority == GUC_SCHED_PRIORITY_LOW ? "(low)" :
diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
index 6bab5ad6c849..14d036790695 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
@@ -43,6 +43,7 @@ int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
 u32 xe_gt_sriov_pf_config_get_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid);
 int xe_gt_sriov_pf_config_set_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid,
 						  u32 exec_quantum);
+int xe_gt_sriov_pf_config_bulk_set_exec_quantum_locked(struct xe_gt *gt, u32 exec_quantum);
 
 u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid);
 int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
@@ -51,6 +52,7 @@ int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfi
 u32 xe_gt_sriov_pf_config_get_preempt_timeout_locked(struct xe_gt *gt, unsigned int vfid);
 int xe_gt_sriov_pf_config_set_preempt_timeout_locked(struct xe_gt *gt, unsigned int vfid,
 						     u32 preempt_timeout);
+int xe_gt_sriov_pf_config_bulk_set_preempt_timeout_locked(struct xe_gt *gt, u32 preempt_timeout);
 
 u32 xe_gt_sriov_pf_config_get_sched_priority(struct xe_gt *gt, unsigned int vfid);
 int xe_gt_sriov_pf_config_set_sched_priority(struct xe_gt *gt, unsigned int vfid, u32 priority);
-- 
2.47.1



* [PATCH v2 09/17] drm/xe/pf: Add functions to bulk provision EQ/PT
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (7 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 08/17] drm/xe/pf: Add functions to bulk configure EQ/PT on GT Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-29 20:33   ` Lucas De Marchi
  2025-10-28 17:58 ` [PATCH v2 10/17] drm/xe/pf: Allow bulk change all VFs EQ/PT using sysfs Michal Wajdeczko
                   ` (11 subsequent siblings)
  20 siblings, 1 reply; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Lucas De Marchi

We already have functions to configure EQ/PT for a single VF across
all tiles/GTs. Now add helper functions that will do that for all
VFs (and the PF) at once.
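The per-GT apply loop in these helpers keeps going after a failure and
reports the first error via the `result = result ?: err` idiom (a GNU C
extension commonly used in kernel code). A minimal model of that
aggregation, with an error array standing in for the per-GT calls and
apply_all() as a hypothetical name:

```c
#include <errno.h>

/*
 * Simplified model of the cross-GT apply loop used above: every GT is
 * attempted even after a failure, and the first error encountered is
 * the one reported.
 */
static int apply_all(const int *gt_errs, int num_gts)
{
	int result = 0;

	for (int i = 0; i < num_gts; i++) {
		int err = gt_errs[i];	/* stands in for the per-GT call */

		result = result ?: err;	/* keep the first error only */
	}

	return result;
}
```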

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> #v1
---
v2: use single lock to avoid races (Lucas)
---
 drivers/gpu/drm/xe/xe_sriov_pf_provision.c | 56 ++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_sriov_pf_provision.h |  2 +
 2 files changed, 58 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
index c5b3a6aa67f4..6a4cb3d5494a 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
@@ -153,6 +153,34 @@ int xe_sriov_pf_provision_set_mode(struct xe_device *xe, enum xe_sriov_provision
 	return 0;
 }
 
+/**
+ * xe_sriov_pf_provision_bulk_apply_eq() - Change execution quantum for all VFs and PF.
+ * @xe: the PF &xe_device
+ * @eq: execution quantum in [ms] to set
+ *
+ * Change execution quantum (EQ) provisioning on all tiles/GTs.
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_sriov_pf_provision_bulk_apply_eq(struct xe_device *xe, u32 eq)
+{
+	struct xe_gt *gt;
+	unsigned int id;
+	int result = 0;
+	int err;
+
+	guard(mutex)(xe_sriov_pf_master_mutex(xe));
+
+	for_each_gt(gt, xe, id) {
+		err = xe_gt_sriov_pf_config_bulk_set_exec_quantum_locked(gt, eq);
+		result = result ?: err;
+	}
+
+	return result;
+}
+
 /**
  * xe_sriov_pf_provision_apply_vf_eq() - Change VF's execution quantum.
  * @xe: the PF &xe_device
@@ -215,6 +243,34 @@ int xe_sriov_pf_provision_query_vf_eq(struct xe_device *xe, unsigned int vfid, u
 	return !count ? -ENODATA : 0;
 }
 
+/**
+ * xe_sriov_pf_provision_bulk_apply_pt() - Change preemption timeout for all VFs and PF.
+ * @xe: the PF &xe_device
+ * @pt: preemption timeout in [us] to set
+ *
+ * Change preemption timeout (PT) provisioning on all tiles/GTs.
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_sriov_pf_provision_bulk_apply_pt(struct xe_device *xe, u32 pt)
+{
+	struct xe_gt *gt;
+	unsigned int id;
+	int result = 0;
+	int err;
+
+	guard(mutex)(xe_sriov_pf_master_mutex(xe));
+
+	for_each_gt(gt, xe, id) {
+		err = xe_gt_sriov_pf_config_bulk_set_preempt_timeout_locked(gt, pt);
+		result = result ?: err;
+	}
+
+	return result;
+}
+
 /**
  * xe_sriov_pf_provision_apply_vf_pt() - Change VF's preemption timeout.
  * @xe: the PF &xe_device
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
index cb81b5880930..aa8a95b1c0be 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
@@ -12,9 +12,11 @@
 
 struct xe_device;
 
+int xe_sriov_pf_provision_bulk_apply_eq(struct xe_device *xe, u32 eq);
 int xe_sriov_pf_provision_apply_vf_eq(struct xe_device *xe, unsigned int vfid, u32 eq);
 int xe_sriov_pf_provision_query_vf_eq(struct xe_device *xe, unsigned int vfid, u32 *eq);
 
+int xe_sriov_pf_provision_bulk_apply_pt(struct xe_device *xe, u32 pt);
 int xe_sriov_pf_provision_apply_vf_pt(struct xe_device *xe, unsigned int vfid, u32 pt);
 int xe_sriov_pf_provision_query_vf_pt(struct xe_device *xe, unsigned int vfid, u32 *pt);
 
-- 
2.47.1



* [PATCH v2 10/17] drm/xe/pf: Allow bulk change all VFs EQ/PT using sysfs
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (8 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 09/17] drm/xe/pf: Add functions to bulk provision EQ/PT Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-28 17:58 ` [PATCH v2 11/17] drm/xe/pf: Add functions to provision scheduling priority Michal Wajdeczko
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Lucas De Marchi, Rodrigo Vivi

It is expected to be common practice to configure the same execution
quantum and preemption timeout values across all VFs.

Add write-only sysfs attributes that apply the required EQ/PT values
globally, without forcing the admin to update the PF and each VF
separately.

  /sys/bus/pci/drivers/xe/BDF/
  ├── sriov_admin/
      ├── .bulk_profile
      │   ├── exec_quantum_ms		[WO] unsigned integer
      │   └── preempt_timeout_us	[WO] unsigned integer
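The store path of such an attribute parses the written string as an
unsigned 32-bit value before applying it globally. A rough userspace
equivalent of what kstrtou32() does in the kernel (simplified;
parse_u32() is a hypothetical name):

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/*
 * Simplified userspace sketch of the store-path parsing: accept a
 * decimal/hex/octal unsigned 32-bit value, optionally followed by a
 * trailing newline, and reject anything else.
 */
static int parse_u32(const char *buf, unsigned int *out)
{
	char *end;
	unsigned long val;

	errno = 0;
	val = strtoul(buf, &end, 0);	/* base 0: auto-detect 0x/0 prefix */
	if (errno || end == buf || (*end && *end != '\n'))
		return -EINVAL;
	if (val > UINT_MAX)
		return -ERANGE;
	*out = (unsigned int)val;
	return 0;
}
```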

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
---
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 36 ++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
index f12d6752e9f1..0430ffaa746a 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
@@ -22,6 +22,9 @@
  * :
  * ├── sriov_admin/
  *     ├── ...
+ *     ├── .bulk_profile
+ *     │   ├── exec_quantum_ms
+ *     │   └── preempt_timeout_us
  *     ├── pf/
  *     │   ├── ...
  *     │   └── profile
@@ -84,7 +87,40 @@ struct xe_sriov_vf_attr xe_sriov_vf_attr_##NAME = \
 
 /* device level attributes go here */
 
+#define DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(NAME, ITEM, TYPE)		\
+											\
+static ssize_t xe_sriov_dev_attr_##NAME##_store(struct xe_device *xe,			\
+						const char *buf, size_t count)		\
+{											\
+	TYPE value;									\
+	int err;									\
+											\
+	err = kstrto##TYPE(buf, 0, &value);						\
+	if (err)									\
+		return err;								\
+											\
+	err = xe_sriov_pf_provision_bulk_apply_##ITEM(xe, value);			\
+	return err ?: count;								\
+}											\
+											\
+static XE_SRIOV_DEV_ATTR_WO(NAME)
+
+DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(exec_quantum_ms, eq, u32);
+DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(preempt_timeout_us, pt, u32);
+
+static struct attribute *bulk_profile_dev_attrs[] = {
+	&xe_sriov_dev_attr_exec_quantum_ms.attr,
+	&xe_sriov_dev_attr_preempt_timeout_us.attr,
+	NULL
+};
+
+static const struct attribute_group bulk_profile_dev_attr_group = {
+	.name = ".bulk_profile",
+	.attrs = bulk_profile_dev_attrs,
+};
+
 static const struct attribute_group *xe_sriov_dev_attr_groups[] = {
+	&bulk_profile_dev_attr_group,
 	NULL
 };
 
-- 
2.47.1



* [PATCH v2 11/17] drm/xe/pf: Add functions to provision scheduling priority
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (9 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 10/17] drm/xe/pf: Allow bulk change all VFs EQ/PT using sysfs Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-28 17:58 ` [PATCH v2 12/17] drm/xe/pf: Allow bulk change all VFs priority using sysfs Michal Wajdeczko
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Piotr Piórkowski

We already have a function to configure the PF (or VF) scheduling
priority on a single GT, but we also need a function that will cover
all tiles and GTs.

However, due to a current GuC FW limitation, we can't always rely
on the per-GT function, as it actually only works for the PF case.
The only way to change the VFs' scheduling priority is to use the
'sched_if_idle' policy KLV, which changes priorities for all VFs
(and the PF).

We will use these new functions in the upcoming patches.
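The mapping these functions implement can only express two levels:
NORMAL corresponds to the 'sched_if_idle' policy being enabled, LOW to
it being disabled, and HIGH cannot be requested this way at all. A
minimal model (the enum values and prio_to_sched_if_idle() name are
illustrative, not the real GuC ABI):

```c
#include <errno.h>
#include <stdbool.h>

/* illustrative priority levels, ordered like the GuC ones above */
enum { PRIO_LOW, PRIO_NORMAL, PRIO_HIGH };

/*
 * Simplified model of the mapping described above: bulk priority
 * changes go through the 'sched_if_idle' policy, so only LOW and
 * NORMAL can be expressed; NORMAL means "schedule even when idle".
 */
static int prio_to_sched_if_idle(int prio, bool *sched_if_idle)
{
	if (prio >= PRIO_HIGH)
		return -EINVAL;	/* HIGH not expressible via this policy */
	*sched_if_idle = (prio == PRIO_NORMAL);
	return 0;
}
```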

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>
---
v2: fix comments/commit message (Piotr)
    add missing include
---
 drivers/gpu/drm/xe/xe_sriov_pf_provision.c | 93 ++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_sriov_pf_provision.h |  4 +
 2 files changed, 97 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
index 6a4cb3d5494a..38915ea4f7c9 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
@@ -6,6 +6,7 @@
 #include "xe_assert.h"
 #include "xe_device.h"
 #include "xe_gt_sriov_pf_config.h"
+#include "xe_gt_sriov_pf_policy.h"
 #include "xe_sriov.h"
 #include "xe_sriov_pf_helpers.h"
 #include "xe_sriov_pf_provision.h"
@@ -332,3 +333,95 @@ int xe_sriov_pf_provision_query_vf_pt(struct xe_device *xe, unsigned int vfid, u
 
 	return !count ? -ENODATA : 0;
 }
+
+/**
+ * xe_sriov_pf_provision_bulk_apply_priority() - Change scheduling priority of all VFs and PF.
+ * @xe: the PF &xe_device
+ * @prio: scheduling priority to set
+ *
+ * Change the scheduling priority provisioning on all tiles/GTs.
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_sriov_pf_provision_bulk_apply_priority(struct xe_device *xe, u32 prio)
+{
+	bool sched_if_idle;
+	struct xe_gt *gt;
+	unsigned int id;
+	int result = 0;
+	int err;
+
+	/*
+	 * Currently, priority changes that involve VFs are only allowed using
+	 * the 'sched_if_idle' policy KLV, so only LOW and NORMAL are supported.
+	 */
+	xe_assert(xe, prio < GUC_SCHED_PRIORITY_HIGH);
+	sched_if_idle = prio == GUC_SCHED_PRIORITY_NORMAL;
+
+	for_each_gt(gt, xe, id) {
+		err = xe_gt_sriov_pf_policy_set_sched_if_idle(gt, sched_if_idle);
+		result = result ?: err;
+	}
+
+	return result;
+}
+
+/**
+ * xe_sriov_pf_provision_apply_vf_priority() - Change VF's scheduling priority.
+ * @xe: the PF &xe_device
+ * @vfid: the VF identifier
+ * @prio: scheduling priority to set
+ *
+ * Change VF's scheduling priority provisioning on all tiles/GTs.
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_sriov_pf_provision_apply_vf_priority(struct xe_device *xe, unsigned int vfid, u32 prio)
+{
+	struct xe_gt *gt;
+	unsigned int id;
+	int result = 0;
+	int err;
+
+	for_each_gt(gt, xe, id) {
+		err = xe_gt_sriov_pf_config_set_sched_priority(gt, vfid, prio);
+		result = result ?: err;
+	}
+
+	return result;
+}
+
+/**
+ * xe_sriov_pf_provision_query_vf_priority() - Query VF's scheduling priority.
+ * @xe: the PF &xe_device
+ * @vfid: the VF identifier
+ * @prio: placeholder for the returned scheduling priority
+ *
+ * Query VF's scheduling priority provisioning from all tiles/GTs.
+ * If values across tiles/GTs are inconsistent then -EUCLEAN error will be returned.
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_sriov_pf_provision_query_vf_priority(struct xe_device *xe, unsigned int vfid, u32 *prio)
+{
+	struct xe_gt *gt;
+	unsigned int id;
+	int count = 0;
+	u32 value;
+
+	for_each_gt(gt, xe, id) {
+		value = xe_gt_sriov_pf_config_get_sched_priority(gt, vfid);
+		if (!count++)
+			*prio = value;
+		else if (value != *prio)
+			return -EUCLEAN;
+	}
+
+	return !count ? -ENODATA : 0;
+}
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
index aa8a95b1c0be..bccf23d51396 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
@@ -20,6 +20,10 @@ int xe_sriov_pf_provision_bulk_apply_pt(struct xe_device *xe, u32 pt);
 int xe_sriov_pf_provision_apply_vf_pt(struct xe_device *xe, unsigned int vfid, u32 pt);
 int xe_sriov_pf_provision_query_vf_pt(struct xe_device *xe, unsigned int vfid, u32 *pt);
 
+int xe_sriov_pf_provision_bulk_apply_priority(struct xe_device *xe, u32 prio);
+int xe_sriov_pf_provision_apply_vf_priority(struct xe_device *xe, unsigned int vfid, u32 prio);
+int xe_sriov_pf_provision_query_vf_priority(struct xe_device *xe, unsigned int vfid, u32 *prio);
+
 int xe_sriov_pf_provision_vfs(struct xe_device *xe, unsigned int num_vfs);
 int xe_sriov_pf_unprovision_vfs(struct xe_device *xe, unsigned int num_vfs);
 
-- 
2.47.1



* [PATCH v2 12/17] drm/xe/pf: Allow bulk change all VFs priority using sysfs
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (10 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 11/17] drm/xe/pf: Add functions to provision scheduling priority Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-30 12:43   ` Lucas De Marchi
  2025-10-28 17:58 ` [PATCH v2 13/17] drm/xe/pf: Allow change PF scheduling " Michal Wajdeczko
                   ` (8 subsequent siblings)
  20 siblings, 1 reply; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Lucas De Marchi, Rodrigo Vivi

It is expected to be a common practice to configure the same level
of scheduling priority across all VFs and PF (at least as starting
point). Due to current GuC FW limitations it is also the only way
to change VFs priority.

Add write-only sysfs attribute that will apply required priority
level to all VFs and PF at once.

  /sys/bus/pci/drivers/xe/BDF/
  ├── sriov_admin/
      ├── .bulk_profile
      │   └── sched_priority		[WO] low, normal

Writing "low" to this write-only attribute will change the PF and
VF scheduling priority on all tiles/GTs to LOW (a function will be
scheduled only if it has work submitted). Similarly, writing
"normal" will change the functions' priority to NORMAL (functions
will be scheduled regardless of whether they have work or not).

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 42 +++++++++++++++++++++++++-
 1 file changed, 41 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
index 0430ffaa746a..19724a28fb33 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
@@ -24,7 +24,8 @@
  *     ├── ...
  *     ├── .bulk_profile
  *     │   ├── exec_quantum_ms
- *     │   └── preempt_timeout_us
+ *     │   ├── preempt_timeout_us
+ *     │   └── sched_priority
  *     ├── pf/
  *     │   ├── ...
  *     │   └── profile
@@ -108,9 +109,48 @@ static XE_SRIOV_DEV_ATTR_WO(NAME)
 DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(exec_quantum_ms, eq, u32);
 DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(preempt_timeout_us, pt, u32);
 
+static const char * const sched_priority_names[] = {
+	[GUC_SCHED_PRIORITY_LOW] = "low",
+	[GUC_SCHED_PRIORITY_NORMAL] = "normal",
+	[GUC_SCHED_PRIORITY_HIGH] = "high",
+};
+
+static bool sched_priority_high_allowed(unsigned int vfid)
+{
+	/* As of today GuC FW allows to select 'high' priority only for the PF. */
+	return vfid == PFID;
+}
+
+static bool sched_priority_bulk_high_allowed(struct xe_device *xe)
+{
+	/* all VFs are equal - it's sufficient to check VF1 only */
+	return sched_priority_high_allowed(VFID(1));
+}
+
+static ssize_t xe_sriov_dev_attr_sched_priority_store(struct xe_device *xe,
+						      const char *buf, size_t count)
+{
+	size_t num_priorities = ARRAY_SIZE(sched_priority_names);
+	int match;
+	int err;
+
+	if (!sched_priority_bulk_high_allowed(xe))
+		num_priorities--;
+
+	match = __sysfs_match_string(sched_priority_names, num_priorities, buf);
+	if (match < 0)
+		return -EINVAL;
+
+	err = xe_sriov_pf_provision_bulk_apply_priority(xe, match);
+	return err ?: count;
+}
+
+static XE_SRIOV_DEV_ATTR_WO(sched_priority);
+
 static struct attribute *bulk_profile_dev_attrs[] = {
 	&xe_sriov_dev_attr_exec_quantum_ms.attr,
 	&xe_sriov_dev_attr_preempt_timeout_us.attr,
+	&xe_sriov_dev_attr_sched_priority.attr,
 	NULL
 };
 
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH v2 13/17] drm/xe/pf: Allow change PF scheduling priority using sysfs
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (11 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 12/17] drm/xe/pf: Allow bulk change all VFs priority using sysfs Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-30 13:35   ` Lucas De Marchi
  2025-10-28 17:58 ` [PATCH v2 14/17] drm/xe/pf: Promote xe_pci_sriov_get_vf_pdev Michal Wajdeczko
                   ` (7 subsequent siblings)
  20 siblings, 1 reply; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Lucas De Marchi, Rodrigo Vivi

We have just added a bulk change of the scheduling priority for all
VFs and the PF, but that only allows selecting the LOW and NORMAL
priorities.

Add a read-write attribute under the PF to allow changing its priority
without impacting the VFs' priority settings.

For completeness, also add read-only attributes under the VFs to show
the priority levels currently selected for them.

  /sys/bus/pci/drivers/xe/BDF/
  ├── sriov_admin/
      ├── pf/
      │   └── profile
      │       └── sched_priority	[RW] low, normal, high
      ├── vf1/
      │   └── profile
      │       └── sched_priority	[RO] low, normal

Writing "high" to the PF read-write attribute will change the PF
priority on all tiles/GTs to HIGH (the function is scheduled in the
next time-slice after the current one completes, provided it has
work). Writing "low" or "normal" to change the priority to
LOW/NORMAL is also supported.

When read, those files will display the current and available
scheduling priorities. The currently active priority level will
be enclosed in square brackets; the default output looks like:

 $ grep . -h sriov_admin/{pf,vf1,vf2}/profile/sched_priority
 [low] normal high
 [low] normal
 [low] normal

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 82 +++++++++++++++++++++++++-
 1 file changed, 80 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
index 19724a28fb33..2e5dbf1bff76 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
@@ -17,6 +17,21 @@
 #include "xe_sriov_pf_sysfs.h"
 #include "xe_sriov_printk.h"
 
+static int emit_choice(char *buf, int choice, const char * const *array, size_t size)
+{
+	int pos = 0;
+	int n;
+
+	for (n = 0; n < size; n++) {
+		pos += sysfs_emit_at(buf, pos, "%s%s%s%s",
+				    n ? " " : "",
+				    n == choice ? "[" : "",
+				    array[n],
+				    n == choice ? "]" : "");
+	}
+	return pos + sysfs_emit_at(buf, pos, "\n");
+}
+
 /*
  * /sys/bus/pci/drivers/xe/BDF/
  * :
@@ -30,12 +45,14 @@
  *     │   ├── ...
  *     │   └── profile
  *     │       ├── exec_quantum_ms
- *     │       └── preempt_timeout_us
+ *     │       ├── preempt_timeout_us
+ *     │       └── sched_priority
  *     ├── vf1/
  *     │   ├── ...
  *     │   └── profile
  *     │       ├── exec_quantum_ms
- *     │       └── preempt_timeout_us
+ *     │       ├── preempt_timeout_us
+ *     │       └── sched_priority
  *     ├── vf2/
  *     :
  *     └── vfN/
@@ -115,6 +132,12 @@ static const char * const sched_priority_names[] = {
 	[GUC_SCHED_PRIORITY_HIGH] = "high",
 };
 
+static bool sched_priority_change_allowed(unsigned int vfid)
+{
+	/* As of today GuC FW allows to selectively change only the PF priority. */
+	return vfid == PFID;
+}
+
 static bool sched_priority_high_allowed(unsigned int vfid)
 {
 	/* As of today GuC FW allows to select 'high' priority only for the PF. */
@@ -199,15 +222,70 @@ static XE_SRIOV_VF_ATTR(NAME)
 DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(exec_quantum_ms, eq, u32, "%u\n");
 DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(preempt_timeout_us, pt, u32, "%u\n");
 
+static ssize_t xe_sriov_vf_attr_sched_priority_show(struct xe_device *xe, unsigned int vfid,
+						    char *buf)
+{
+	size_t num_priorities = ARRAY_SIZE(sched_priority_names);
+	u32 priority;
+	int err;
+
+	err = xe_sriov_pf_provision_query_vf_priority(xe, vfid, &priority);
+	if (err)
+		return err;
+
+	if (!sched_priority_high_allowed(vfid))
+		num_priorities--;
+
+	xe_assert(xe, priority < num_priorities);
+	return emit_choice(buf, priority, sched_priority_names, num_priorities);
+}
+
+static ssize_t xe_sriov_vf_attr_sched_priority_store(struct xe_device *xe, unsigned int vfid,
+						     const char *buf, size_t count)
+{
+	size_t num_priorities = ARRAY_SIZE(sched_priority_names);
+	int match;
+	int err;
+
+	if (!sched_priority_change_allowed(vfid))
+		return -EOPNOTSUPP;
+
+	if (!sched_priority_high_allowed(vfid))
+		num_priorities--;
+
+	match = __sysfs_match_string(sched_priority_names, num_priorities, buf);
+	if (match < 0)
+		return -EINVAL;
+
+	err = xe_sriov_pf_provision_apply_vf_priority(xe, vfid, match);
+	return err ?: count;
+}
+
+static XE_SRIOV_VF_ATTR(sched_priority);
+
 static struct attribute *profile_vf_attrs[] = {
 	&xe_sriov_vf_attr_exec_quantum_ms.attr,
 	&xe_sriov_vf_attr_preempt_timeout_us.attr,
+	&xe_sriov_vf_attr_sched_priority.attr,
 	NULL
 };
 
+static umode_t profile_vf_attr_is_visible(struct kobject *kobj,
+					  struct attribute *attr, int index)
+{
+	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
+
+	if (attr == &xe_sriov_vf_attr_sched_priority.attr &&
+	    !sched_priority_change_allowed(vkobj->vfid))
+		return attr->mode & 0444;
+
+	return attr->mode;
+}
+
 static const struct attribute_group profile_vf_attr_group = {
 	.name = "profile",
 	.attrs = profile_vf_attrs,
+	.is_visible = profile_vf_attr_is_visible,
 };
 
 static const struct attribute_group *xe_sriov_vf_attr_groups[] = {
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH v2 14/17] drm/xe/pf: Promote xe_pci_sriov_get_vf_pdev
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (12 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 13/17] drm/xe/pf: Allow change PF scheduling " Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-28 17:58 ` [PATCH v2 15/17] drm/xe/pf: Add sysfs device symlinks to enabled VFs Michal Wajdeczko
                   ` (6 subsequent siblings)
  20 siblings, 0 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Piotr Piórkowski

In an upcoming patch we would like to use this private helper
while preparing the sysfs links. Promote it.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>
---
v2: drop redundant comment (Piotr)
---
 drivers/gpu/drm/xe/xe_pci_sriov.c | 36 ++++++++++++++++++++-----------
 drivers/gpu/drm/xe/xe_pci_sriov.h |  1 +
 2 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_pci_sriov.c b/drivers/gpu/drm/xe/xe_pci_sriov.c
index 735f51effc7a..4d5da6686e92 100644
--- a/drivers/gpu/drm/xe/xe_pci_sriov.c
+++ b/drivers/gpu/drm/xe/xe_pci_sriov.c
@@ -30,18 +30,6 @@ static void pf_reset_vfs(struct xe_device *xe, unsigned int num_vfs)
 		xe_sriov_pf_control_reset_vf(xe, n);
 }
 
-static struct pci_dev *xe_pci_pf_get_vf_dev(struct xe_device *xe, unsigned int vf_id)
-{
-	struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
-
-	xe_assert(xe, IS_SRIOV_PF(xe));
-
-	/* caller must use pci_dev_put() */
-	return pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus),
-			pdev->bus->number,
-			pci_iov_virtfn_devfn(pdev, vf_id));
-}
-
 static void pf_link_vfs(struct xe_device *xe, int num_vfs)
 {
 	struct pci_dev *pdev_pf = to_pci_dev(xe->drm.dev);
@@ -60,7 +48,7 @@ static void pf_link_vfs(struct xe_device *xe, int num_vfs)
 	 * enforce correct resume order.
 	 */
 	for (n = 1; n <= num_vfs; n++) {
-		pdev_vf = xe_pci_pf_get_vf_dev(xe, n - 1);
+		pdev_vf = xe_pci_sriov_get_vf_pdev(pdev_pf, n);
 
 		/* unlikely, something weird is happening, abort */
 		if (!pdev_vf) {
@@ -228,3 +216,25 @@ int xe_pci_sriov_configure(struct pci_dev *pdev, int num_vfs)
 
 	return ret;
 }
+
+/**
+ * xe_pci_sriov_get_vf_pdev() - Lookup the VF's PCI device using the VF identifier.
+ * @pdev: the PF's &pci_dev
+ * @vfid: VF identifier (1-based)
+ *
+ * The caller must decrement the reference count by calling pci_dev_put().
+ *
+ * Return: the VF's &pci_dev or NULL if the VF device was not found.
+ */
+struct pci_dev *xe_pci_sriov_get_vf_pdev(struct pci_dev *pdev, unsigned int vfid)
+{
+	struct xe_device *xe = pdev_to_xe_device(pdev);
+
+	xe_assert(xe, dev_is_pf(&pdev->dev));
+	xe_assert(xe, vfid);
+	xe_assert(xe, vfid <= pci_sriov_get_totalvfs(pdev));
+
+	return pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus),
+					   pdev->bus->number,
+					   pci_iov_virtfn_devfn(pdev, vfid - 1));
+}
diff --git a/drivers/gpu/drm/xe/xe_pci_sriov.h b/drivers/gpu/drm/xe/xe_pci_sriov.h
index c76dd0d90495..b9105d71dbb1 100644
--- a/drivers/gpu/drm/xe/xe_pci_sriov.h
+++ b/drivers/gpu/drm/xe/xe_pci_sriov.h
@@ -10,6 +10,7 @@ struct pci_dev;
 
 #ifdef CONFIG_PCI_IOV
 int xe_pci_sriov_configure(struct pci_dev *pdev, int num_vfs);
+struct pci_dev *xe_pci_sriov_get_vf_pdev(struct pci_dev *pdev, unsigned int vfid);
 #else
 static inline int xe_pci_sriov_configure(struct pci_dev *pdev, int num_vfs)
 {
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH v2 15/17] drm/xe/pf: Add sysfs device symlinks to enabled VFs
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (13 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 14/17] drm/xe/pf: Promote xe_pci_sriov_get_vf_pdev Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-30 13:40   ` Lucas De Marchi
  2025-10-28 17:58 ` [PATCH v2 16/17] drm/xe/pf: Allow to stop and reset VF using sysfs Michal Wajdeczko
                   ` (5 subsequent siblings)
  20 siblings, 1 reply; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Lucas De Marchi, Rodrigo Vivi

For convenience, for every enabled VF add a 'device' symlink from
our SR-IOV admin VF directory to the sysfs entry of the enabled PCI
VF device. Remove all those links when disabling the PCI VFs.

For completeness, add a static 'device' symlink for the PF itself.

  /sys/bus/pci/drivers/xe/BDF/sriov_admin/
  ├── pf
  │   └── device -> ../../../BDF	# PF BDF
  ├── vf1
  │   └── device -> ../../../BDF'	# VF1 BDF
  ├── vf2
  │   └── device -> ../../../BDF"	# VF2 BDF

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/xe/xe_pci_sriov.c      |  5 ++
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 93 ++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.h |  3 +
 3 files changed, 101 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_pci_sriov.c b/drivers/gpu/drm/xe/xe_pci_sriov.c
index 4d5da6686e92..d0fcde66a774 100644
--- a/drivers/gpu/drm/xe/xe_pci_sriov.c
+++ b/drivers/gpu/drm/xe/xe_pci_sriov.c
@@ -20,6 +20,7 @@
 #include "xe_sriov_pf_control.h"
 #include "xe_sriov_pf_helpers.h"
 #include "xe_sriov_pf_provision.h"
+#include "xe_sriov_pf_sysfs.h"
 #include "xe_sriov_printk.h"
 
 static void pf_reset_vfs(struct xe_device *xe, unsigned int num_vfs)
@@ -138,6 +139,8 @@ static int pf_enable_vfs(struct xe_device *xe, int num_vfs)
 	xe_sriov_info(xe, "Enabled %u of %u VF%s\n",
 		      num_vfs, total_vfs, str_plural(total_vfs));
 
+	xe_sriov_pf_sysfs_link_vfs(xe, num_vfs);
+
 	pf_engine_activity_stats(xe, num_vfs, true);
 
 	return num_vfs;
@@ -165,6 +168,8 @@ static int pf_disable_vfs(struct xe_device *xe)
 
 	pf_engine_activity_stats(xe, num_vfs, false);
 
+	xe_sriov_pf_sysfs_unlink_vfs(xe, num_vfs);
+
 	pci_disable_sriov(pdev);
 
 	pf_reset_vfs(xe, num_vfs);
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
index 2e5dbf1bff76..360b0ffd9cb4 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
@@ -9,6 +9,7 @@
 #include <drm/drm_managed.h>
 
 #include "xe_assert.h"
+#include "xe_pci_sriov.h"
 #include "xe_pm.h"
 #include "xe_sriov.h"
 #include "xe_sriov_pf.h"
@@ -43,12 +44,14 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
  *     │   └── sched_priority
  *     ├── pf/
  *     │   ├── ...
+ *     │   ├── device -> ../../../BDF
  *     │   └── profile
  *     │       ├── exec_quantum_ms
  *     │       ├── preempt_timeout_us
  *     │       └── sched_priority
  *     ├── vf1/
  *     │   ├── ...
+ *     │   ├── device -> ../../../BDF.1
  *     │   └── profile
  *     │       ├── exec_quantum_ms
  *     │       ├── preempt_timeout_us
@@ -412,6 +415,11 @@ static int pf_sysfs_error(struct xe_device *xe, int err, const char *what)
 	return err;
 }
 
+static void pf_sysfs_note(struct xe_device *xe, int err, const char *what)
+{
+	xe_sriov_dbg(xe, "Failed to setup sysfs %s (%pe)\n", what, ERR_PTR(err));
+}
+
 static void action_put_kobject(void *arg)
 {
 	struct kobject *kobj = arg;
@@ -478,6 +486,29 @@ static int pf_setup_tree(struct xe_device *xe)
 	return 0;
 }
 
+static void action_rm_device_link(void *arg)
+{
+	struct kobject *kobj = arg;
+
+	sysfs_remove_link(kobj, "device");
+}
+
+static int pf_link_pf_device(struct xe_device *xe)
+{
+	struct kobject *kobj = xe->sriov.pf.vfs[PFID].kobj;
+	int err;
+
+	err = sysfs_create_link(kobj, &xe->drm.dev->kobj, "device");
+	if (err)
+		return pf_sysfs_error(xe, err, "PF device link");
+
+	err = devm_add_action_or_reset(xe->drm.dev, action_rm_device_link, kobj);
+	if (err)
+		return pf_sysfs_error(xe, err, "PF unlink action");
+
+	return 0;
+}
+
 /**
  * xe_sriov_pf_sysfs_init() - Setup PF's SR-IOV sysfs tree.
  * @xe: the PF &xe_device to setup sysfs
@@ -499,5 +530,67 @@ int xe_sriov_pf_sysfs_init(struct xe_device *xe)
 	if (err)
 		return err;
 
+	err = pf_link_pf_device(xe);
+	if (err)
+		return err;
+
 	return 0;
 }
+
+/**
+ * xe_sriov_pf_sysfs_link_vfs() - Add VF's links in SR-IOV sysfs tree.
+ * @xe: the &xe_device where to update sysfs
+ * @num_vfs: number of enabled VFs to link
+ *
+ * This function is specific to the PF driver.
+ *
+ * This function will add symbolic links between the VFs represented in the
+ * SR-IOV sysfs tree maintained by the PF and the enabled VF PCI devices.
+ *
+ * xe_sriov_pf_sysfs_unlink_vfs() shall be used to remove those links.
+ */
+void xe_sriov_pf_sysfs_link_vfs(struct xe_device *xe, unsigned int num_vfs)
+{
+	unsigned int totalvfs = xe_sriov_pf_get_totalvfs(xe);
+	struct pci_dev *pf_pdev = to_pci_dev(xe->drm.dev);
+	struct pci_dev *vf_pdev = NULL;
+	unsigned int n;
+	int err;
+
+	xe_assert(xe, IS_SRIOV_PF(xe));
+	xe_assert(xe, num_vfs <= totalvfs);
+
+	for (n = 1; n <= num_vfs; n++) {
+		vf_pdev = xe_pci_sriov_get_vf_pdev(pf_pdev, VFID(n));
+		if (!vf_pdev)
+			return pf_sysfs_note(xe, -ENOENT, "VF link");
+
+		err = sysfs_create_link(xe->sriov.pf.vfs[VFID(n)].kobj,
+					&vf_pdev->dev.kobj, "device");
+
+		/* must balance xe_pci_sriov_get_vf_pdev() */
+		pci_dev_put(vf_pdev);
+
+		if (err)
+			return pf_sysfs_note(xe, err, "VF link");
+	}
+}
+
+/**
+ * xe_sriov_pf_sysfs_unlink_vfs() - Remove VF's links from SR-IOV sysfs tree.
+ * @xe: the &xe_device where to update sysfs
+ * @num_vfs: number of VFs to unlink
+ *
+ * This function shall be called only on the PF.
+ * This function will remove "device" links added by xe_sriov_pf_sysfs_link_vfs().
+ */
+void xe_sriov_pf_sysfs_unlink_vfs(struct xe_device *xe, unsigned int num_vfs)
+{
+	unsigned int n;
+
+	xe_assert(xe, IS_SRIOV_PF(xe));
+	xe_assert(xe, num_vfs <= xe_sriov_pf_get_totalvfs(xe));
+
+	for (n = 1; n <= num_vfs; n++)
+		sysfs_remove_link(xe->sriov.pf.vfs[VFID(n)].kobj, "device");
+}
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.h b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.h
index 1e6698cc29d3..ae92ed1766e7 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.h
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.h
@@ -10,4 +10,7 @@ struct xe_device;
 
 int xe_sriov_pf_sysfs_init(struct xe_device *xe);
 
+void xe_sriov_pf_sysfs_link_vfs(struct xe_device *xe, unsigned int num_vfs);
+void xe_sriov_pf_sysfs_unlink_vfs(struct xe_device *xe, unsigned int num_vfs);
+
 #endif
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH v2 16/17] drm/xe/pf: Allow to stop and reset VF using sysfs
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (14 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 15/17] drm/xe/pf: Add sysfs device symlinks to enabled VFs Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-30  8:45   ` Piotr Piórkowski
  2025-10-30 13:43   ` Lucas De Marchi
  2025-10-28 17:58 ` [PATCH v2 17/17] drm/xe/pf: Add documentation for sriov_admin attributes Michal Wajdeczko
                   ` (4 subsequent siblings)
  20 siblings, 2 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Lucas De Marchi, Rodrigo Vivi

It is expected that VF activity will be monitored and in some
cases the admin might want to silence a specific VF without killing
the VM to which it was attached.

Add a write-only attribute to stop GuC scheduling at the VF level.

  /sys/bus/pci/drivers/xe/BDF/
  ├── sriov_admin/
      ├── vf1/
      │   └── stop		[WO] bool
      ├── vf2/
      │   └── stop		[WO] bool

Writing "1" or "y" (or whatever is recognized by the kstrtobool()
function) to this file will trigger a change of the VF state
to STOP (GuC will stop servicing the VF). To go back to the READY
state (to allow GuC to service this VF again) a VF FLR must be
triggered (which can be done by writing 1 to the device/reset file).

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
v2: drop reset file (Rodrigo, Lucas)
---
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 49 ++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
index 360b0ffd9cb4..3a8c488d183c 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
@@ -13,6 +13,7 @@
 #include "xe_pm.h"
 #include "xe_sriov.h"
 #include "xe_sriov_pf.h"
+#include "xe_sriov_pf_control.h"
 #include "xe_sriov_pf_helpers.h"
 #include "xe_sriov_pf_provision.h"
 #include "xe_sriov_pf_sysfs.h"
@@ -52,6 +53,7 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
  *     ├── vf1/
  *     │   ├── ...
  *     │   ├── device -> ../../../BDF.1
+ *     │   ├── stop
  *     │   └── profile
  *     │       ├── exec_quantum_ms
  *     │       ├── preempt_timeout_us
@@ -291,8 +293,55 @@ static const struct attribute_group profile_vf_attr_group = {
 	.is_visible = profile_vf_attr_is_visible,
 };
 
+#define DEFINE_SIMPLE_CONTROL_SRIOV_VF_ATTR(NAME)					\
+											\
+static ssize_t xe_sriov_vf_attr_##NAME##_store(struct xe_device *xe, unsigned int vfid,	\
+					       const char *buf, size_t count)		\
+{											\
+	bool yes;									\
+	int err;									\
+											\
+	if (!vfid)									\
+		return -EPERM;								\
+											\
+	err = kstrtobool(buf, &yes);							\
+	if (err)									\
+		return err;								\
+	if (!yes)									\
+		return count;								\
+											\
+	err = xe_sriov_pf_control_##NAME##_vf(xe, vfid);				\
+	return err ?: count;								\
+}											\
+											\
+static XE_SRIOV_VF_ATTR_WO(NAME)
+
+DEFINE_SIMPLE_CONTROL_SRIOV_VF_ATTR(stop);
+
+static struct attribute *control_vf_attrs[] = {
+	&xe_sriov_vf_attr_stop.attr,
+	NULL
+};
+
+static umode_t control_vf_attr_is_visible(struct kobject *kobj,
+					  struct attribute *attr, int index)
+{
+	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
+
+	if (vkobj->vfid == PFID)
+		return 0;
+
+	return attr->mode;
+}
+
+static const struct attribute_group control_vf_attr_group = {
+	.attrs = control_vf_attrs,
+	.is_visible = control_vf_attr_is_visible,
+};
+
 static const struct attribute_group *xe_sriov_vf_attr_groups[] = {
 	&profile_vf_attr_group,
+	&control_vf_attr_group,
 	NULL
 };
 
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH v2 17/17] drm/xe/pf: Add documentation for sriov_admin attributes
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (15 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 16/17] drm/xe/pf: Allow to stop and reset VF using sysfs Michal Wajdeczko
@ 2025-10-28 17:58 ` Michal Wajdeczko
  2025-10-30 17:25   ` Lucas De Marchi
  2025-10-28 20:04 ` ✗ CI.checkpatch: warning for PF: Add sriov_admin sysfs tree (rev2) Patchwork
                   ` (3 subsequent siblings)
  20 siblings, 1 reply; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-28 17:58 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Lucas De Marchi, Rodrigo Vivi

Add initial documentation for all recently added Xe driver
specific SR-IOV sysfs files located under device/sriov_admin.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
v2: update version (Rodrigo)
---
 .../ABI/testing/sysfs-driver-intel-xe-sriov   | 160 ++++++++++++++++++
 1 file changed, 160 insertions(+)
 create mode 100644 Documentation/ABI/testing/sysfs-driver-intel-xe-sriov

diff --git a/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov b/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov
new file mode 100644
index 000000000000..ac688a66bf36
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov
@@ -0,0 +1,160 @@
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/
+Date:		October 2025
+KernelVersion:	6.19
+Contact:	intel-xe@lists.freedesktop.org
+Description:
+		This directory appears for the particular Intel Xe device when:
+
+		 - device supports SR-IOV, and
+		 - device is a Physical Function (PF), and
+		 - driver support for the SR-IOV PF is enabled on given device.
+
+		This directory is used as a root for all attributes required to
+		manage both Physical Function (PF) and Virtual Functions (VFs).
+
+
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/pf/
+Date:		October 2025
+KernelVersion:	6.19
+Contact:	intel-xe@lists.freedesktop.org
+Description:
+		This directory holds attributes related to the SR-IOV Physical
+		Function (PF).
+
+
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf1/
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf2/
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<N>/
+Date:		October 2025
+KernelVersion:	6.19
+Contact:	intel-xe@lists.freedesktop.org
+Description:
+		These directories hold attributes related to the SR-IOV Virtual
+		Functions (VFs).
+
+		Note that the VF number <N> is 1-based, as described in the PCI
+		SR-IOV specification; the Xe driver follows that naming scheme.
+
+		There can be "vf1", "vf2" and so on, up to "vf<N>", where <N>
+		matches the value of the "sriov_totalvfs" attribute.
+
+
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/pf/profile/exec_quantum_ms
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/pf/profile/preempt_timeout_us
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/pf/profile/sched_priority
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/profile/exec_quantum_ms
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/profile/preempt_timeout_us
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/profile/sched_priority
+Date:		October 2025
+KernelVersion:	6.19
+Contact:	intel-xe@lists.freedesktop.org
+Description:
+		These files represent scheduling parameters of the PF or VFs and
+		are available only for Intel Xe platforms with time-slice based
+		GPU sharing. These scheduling parameters can be changed even if
+		VFs are enabled and running. The parameters reflect the settings
+		of all tiles/GTs assigned to the given function.
+
+		exec_quantum_ms: (RW) unsigned integer
+			The GT execution quantum (EQ) in [ms] of the given function.
+			Actual quantum value might be aligned per HW/FW requirements.
+
+			Default is 0 (unlimited).
+
+		preempt_timeout_us: (RW) unsigned integer
+			The GT preemption timeout in [us] of the given function.
+			Actual timeout value might be aligned per HW/FW requirements.
+
+			Default is 0 (unlimited).
+
+		sched_priority: (RW/RO) string
+			The GT scheduling priority of the given function.
+
+			"low" - the function will be scheduled on the GPU for its EQ/PT
+				only if it has any work already submitted.
+
+			"normal" - the function will be scheduled on the GPU for its EQ/PT
+				irrespective of whether it has submitted work or not.
+
+			"high" - the function will be scheduled on the GPU for its EQ/PT
+				in the next time-slice after the current one completes,
+				provided it has work submitted.
+
+			Default is "low".
+
+			When read, this file will display the current and available
+			scheduling priorities. The currently active priority level will
+			be enclosed in square brackets, like:
+
+				[low] normal high
+
+			This file can be read-only if changing the priority is currently
+			not supported for the given function due to known HW/FW limitations.
+
+		Writes to these attributes may fail with errors like:
+			-EINVAL if provided input is malformed or not recognized,
+			-EPERM if change is not applicable on given HW/FW,
+			-EIO if FW refuses to change the provisioning.
+
+		Reads from these attributes may fail with:
+			-EUCLEAN if value is not consistent across all tiles/GTs.
+
+
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/.bulk_profile/exec_quantum_ms
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/.bulk_profile/preempt_timeout_us
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/.bulk_profile/sched_priority
+Date:		October 2025
+KernelVersion:	6.19
+Contact:	intel-xe@lists.freedesktop.org
+Description:
+		These files allow bulk reconfiguration of the scheduling parameters
+		of the PF and VFs and are available only for Intel Xe platforms with
+		time-slice based GPU sharing. These scheduling parameters can be
+		changed even if VFs are enabled and running.
+
+		exec_quantum_ms: (WO) unsigned integer
+			The GT execution quantum (EQ) in [ms] to be applied to all functions.
+			See sriov_admin/{pf,vf<N>}/profile/exec_quantum_ms for more details.
+
+		preempt_timeout_us: (WO) unsigned integer
+			The GT preemption timeout (PT) in [us] to be applied to all functions.
+			See sriov_admin/{pf,vf<N>}/profile/preempt_timeout_us for more details.
+
+		sched_priority: (WO) string
+			The GT scheduling priority to be applied to all functions.
+			See sriov_admin/{pf,vf<N>}/profile/sched_priority for more details.
+
+		Writes to these attributes may fail with errors like:
+			-EINVAL if provided input is malformed or not recognized,
+			-EPERM if change is not applicable on given HW/FW,
+			-EIO if FW refuses to change the provisioning.
+
+
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/stop
+Date:		October 2025
+KernelVersion:	6.19
+Contact:	intel-xe@lists.freedesktop.org
+Description:
+		This file allows controlling the scheduling of a VF on Intel Xe GPU
+		platforms. It allows implementing a custom policy mechanism in case
+		VFs are misbehaving or triggering adverse events above defined thresholds.
+
+		stop: (WO) bool
+			All GT execution of the given function is immediately stopped.
+			To allow scheduling this VF again, a VF FLR must be triggered.
+
+		Writes to this attribute may fail with errors like:
+			-EINVAL if provided input is malformed or not recognized,
+			-EPERM if change is not applicable on given HW/FW,
+			-EIO if FW refuses to change the scheduling.
+
+
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/pf/device
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/device
+Date:		October 2025
+KernelVersion:	6.19
+Contact:	intel-xe@lists.freedesktop.org
+Description:
+		These are symlinks to the underlying PCI device entry representing
+		the given Xe SR-IOV function. For the PF, this link is always present.
+		For VFs, this link is present only for currently enabled VFs.
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 45+ messages in thread

* ✗ CI.checkpatch: warning for PF: Add sriov_admin sysfs tree (rev2)
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (16 preceding siblings ...)
  2025-10-28 17:58 ` [PATCH v2 17/17] drm/xe/pf: Add documentation for sriov_admin attributes Michal Wajdeczko
@ 2025-10-28 20:04 ` Patchwork
  2025-10-28 20:05 ` ✓ CI.KUnit: success " Patchwork
                   ` (2 subsequent siblings)
  20 siblings, 0 replies; 45+ messages in thread
From: Patchwork @ 2025-10-28 20:04 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

== Series Details ==

Series: PF: Add sriov_admin sysfs tree (rev2)
URL   : https://patchwork.freedesktop.org/series/156220/
State : warning

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
f867e605613af1770f90c4b0afd4a8f06424d1f0
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit f6141c36cae75be0b1c36528d9fb8468061c86a3
Author: Michal Wajdeczko <michal.wajdeczko@intel.com>
Date:   Tue Oct 28 18:58:31 2025 +0100

    drm/xe/pf: Add documentation for sriov_admin attributes
    
    Add initial documentation for all recently added Xe driver
    specific SR-IOV sysfs files located under device/sriov_admin.
    
    Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
    Cc: Lucas De Marchi <lucas.demarchi@intel.com>
    Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
+ /mt/dim checkpatch 5742fc7aea99a1326637a7106eeaeac383a1c76d drm-intel
0667d84d2e7e drm/xe/pf: Prepare sysfs for SR-IOV admin attributes
-:71: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#71: 
new file mode 100644

-:113: CHECK:LINE_SPACING: Please use a blank line after function/struct/union/enum declarations
#113: FILE: drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c:38:
+};
+#define to_xe_sriov_kobj(p) container_of_const((p), struct xe_sriov_kobj, base)

-:120: CHECK:LINE_SPACING: Please use a blank line after function/struct/union/enum declarations
#120: FILE: drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c:45:
+};
+#define to_xe_sriov_dev_attr(p) container_of_const((p), struct xe_sriov_dev_attr, attr)

-:139: CHECK:LINE_SPACING: Please use a blank line after function/struct/union/enum declarations
#139: FILE: drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c:64:
+};
+#define to_xe_sriov_vf_attr(p) container_of_const((p), struct xe_sriov_vf_attr, attr)

total: 0 errors, 1 warnings, 3 checks, 351 lines checked
b135a846f302 drm/xe/pf: Take RPM during calls to SR-IOV attr.store()
393647c83584 drm/xe/pf: Add _locked variants of the VF EQ config functions
58b46519d40d drm/xe/pf: Add _locked variants of the VF PT config functions
1e6f55064bea drm/xe/pf: Allow change PF and VFs EQ/PT using sysfs
fb6496ce1a70 drm/xe/pf: Relax report helper to accept PF in bulk configs
efd1a6f75dea drm/xe/pf: Fix signature of internal config helpers
4dfa7187b7e5 drm/xe/pf: Add functions to bulk configure EQ/PT on GT
6b77fe60867b drm/xe/pf: Add functions to bulk provision EQ/PT
fa60cb7d7de6 drm/xe/pf: Allow bulk change all VFs EQ/PT using sysfs
55f66cdfe909 drm/xe/pf: Add functions to provision scheduling priority
e9f8292db5e0 drm/xe/pf: Allow bulk change all VFs priority using sysfs
fd064967c84c drm/xe/pf: Allow change PF scheduling priority using sysfs
475c07951324 drm/xe/pf: Promote xe_pci_sriov_get_vf_pdev
abc373ff2aef drm/xe/pf: Add sysfs device symlinks to enabled VFs
d4a9f39b9933 drm/xe/pf: Allow to stop and reset VF using sysfs
f6141c36cae7 drm/xe/pf: Add documentation for sriov_admin attributes
-:14: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#14: 
new file mode 100644

total: 0 errors, 1 warnings, 0 checks, 160 lines checked



^ permalink raw reply	[flat|nested] 45+ messages in thread

* ✓ CI.KUnit: success for PF: Add sriov_admin sysfs tree (rev2)
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (17 preceding siblings ...)
  2025-10-28 20:04 ` ✗ CI.checkpatch: warning for PF: Add sriov_admin sysfs tree (rev2) Patchwork
@ 2025-10-28 20:05 ` Patchwork
  2025-10-28 20:43 ` ✓ Xe.CI.BAT: " Patchwork
  2025-10-29  7:15 ` ✗ Xe.CI.Full: failure " Patchwork
  20 siblings, 0 replies; 45+ messages in thread
From: Patchwork @ 2025-10-28 20:05 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

== Series Details ==

Series: PF: Add sriov_admin sysfs tree (rev2)
URL   : https://patchwork.freedesktop.org/series/156220/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[20:04:15] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[20:04:19] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[20:04:50] Starting KUnit Kernel (1/1)...
[20:04:50] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[20:04:50] ================== guc_buf (11 subtests) ===================
[20:04:50] [PASSED] test_smallest
[20:04:50] [PASSED] test_largest
[20:04:50] [PASSED] test_granular
[20:04:50] [PASSED] test_unique
[20:04:50] [PASSED] test_overlap
[20:04:50] [PASSED] test_reusable
[20:04:50] [PASSED] test_too_big
[20:04:50] [PASSED] test_flush
[20:04:50] [PASSED] test_lookup
[20:04:50] [PASSED] test_data
[20:04:50] [PASSED] test_class
[20:04:50] ===================== [PASSED] guc_buf =====================
[20:04:50] =================== guc_dbm (7 subtests) ===================
[20:04:50] [PASSED] test_empty
[20:04:50] [PASSED] test_default
[20:04:50] ======================== test_size  ========================
[20:04:50] [PASSED] 4
[20:04:50] [PASSED] 8
[20:04:50] [PASSED] 32
[20:04:50] [PASSED] 256
[20:04:50] ==================== [PASSED] test_size ====================
[20:04:50] ======================= test_reuse  ========================
[20:04:50] [PASSED] 4
[20:04:50] [PASSED] 8
[20:04:50] [PASSED] 32
[20:04:50] [PASSED] 256
[20:04:50] =================== [PASSED] test_reuse ====================
[20:04:50] =================== test_range_overlap  ====================
[20:04:50] [PASSED] 4
[20:04:50] [PASSED] 8
[20:04:50] [PASSED] 32
[20:04:50] [PASSED] 256
[20:04:50] =============== [PASSED] test_range_overlap ================
[20:04:50] =================== test_range_compact  ====================
[20:04:50] [PASSED] 4
[20:04:50] [PASSED] 8
[20:04:50] [PASSED] 32
[20:04:50] [PASSED] 256
[20:04:50] =============== [PASSED] test_range_compact ================
[20:04:50] ==================== test_range_spare  =====================
[20:04:50] [PASSED] 4
[20:04:50] [PASSED] 8
[20:04:50] [PASSED] 32
[20:04:50] [PASSED] 256
[20:04:50] ================ [PASSED] test_range_spare =================
[20:04:50] ===================== [PASSED] guc_dbm =====================
[20:04:50] =================== guc_idm (6 subtests) ===================
[20:04:50] [PASSED] bad_init
[20:04:50] [PASSED] no_init
[20:04:50] [PASSED] init_fini
[20:04:50] [PASSED] check_used
[20:04:50] [PASSED] check_quota
[20:04:50] [PASSED] check_all
[20:04:50] ===================== [PASSED] guc_idm =====================
[20:04:50] ================== no_relay (3 subtests) ===================
[20:04:50] [PASSED] xe_drops_guc2pf_if_not_ready
[20:04:50] [PASSED] xe_drops_guc2vf_if_not_ready
[20:04:50] [PASSED] xe_rejects_send_if_not_ready
[20:04:50] ==================== [PASSED] no_relay =====================
[20:04:50] ================== pf_relay (14 subtests) ==================
[20:04:50] [PASSED] pf_rejects_guc2pf_too_short
[20:04:50] [PASSED] pf_rejects_guc2pf_too_long
[20:04:50] [PASSED] pf_rejects_guc2pf_no_payload
[20:04:50] [PASSED] pf_fails_no_payload
[20:04:50] [PASSED] pf_fails_bad_origin
[20:04:50] [PASSED] pf_fails_bad_type
[20:04:50] [PASSED] pf_txn_reports_error
[20:04:50] [PASSED] pf_txn_sends_pf2guc
[20:04:50] [PASSED] pf_sends_pf2guc
[20:04:50] [SKIPPED] pf_loopback_nop
[20:04:50] [SKIPPED] pf_loopback_echo
[20:04:50] [SKIPPED] pf_loopback_fail
[20:04:50] [SKIPPED] pf_loopback_busy
[20:04:50] [SKIPPED] pf_loopback_retry
[20:04:50] ==================== [PASSED] pf_relay =====================
[20:04:50] ================== vf_relay (3 subtests) ===================
[20:04:50] [PASSED] vf_rejects_guc2vf_too_short
[20:04:50] [PASSED] vf_rejects_guc2vf_too_long
[20:04:50] [PASSED] vf_rejects_guc2vf_no_payload
[20:04:50] ==================== [PASSED] vf_relay =====================
[20:04:50] ===================== lmtt (1 subtest) =====================
[20:04:50] ======================== test_ops  =========================
[20:04:50] [PASSED] 2-level
[20:04:50] [PASSED] multi-level
[20:04:50] ==================== [PASSED] test_ops =====================
[20:04:50] ====================== [PASSED] lmtt =======================
[20:04:50] ================= pf_service (11 subtests) =================
[20:04:50] [PASSED] pf_negotiate_any
[20:04:50] [PASSED] pf_negotiate_base_match
[20:04:50] [PASSED] pf_negotiate_base_newer
[20:04:50] [PASSED] pf_negotiate_base_next
[20:04:50] [SKIPPED] pf_negotiate_base_older
[20:04:50] [PASSED] pf_negotiate_base_prev
[20:04:50] [PASSED] pf_negotiate_latest_match
[20:04:50] [PASSED] pf_negotiate_latest_newer
[20:04:50] [PASSED] pf_negotiate_latest_next
[20:04:50] [SKIPPED] pf_negotiate_latest_older
[20:04:50] [SKIPPED] pf_negotiate_latest_prev
[20:04:50] =================== [PASSED] pf_service ====================
[20:04:50] ================= xe_guc_g2g (2 subtests) ==================
[20:04:50] ============== xe_live_guc_g2g_kunit_default  ==============
[20:04:50] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[20:04:50] ============== xe_live_guc_g2g_kunit_allmem  ===============
[20:04:50] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[20:04:50] =================== [SKIPPED] xe_guc_g2g ===================
[20:04:50] =================== xe_mocs (2 subtests) ===================
[20:04:50] ================ xe_live_mocs_kernel_kunit  ================
[20:04:50] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[20:04:50] ================ xe_live_mocs_reset_kunit  =================
[20:04:50] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[20:04:50] ==================== [SKIPPED] xe_mocs =====================
[20:04:50] ================= xe_migrate (2 subtests) ==================
[20:04:50] ================= xe_migrate_sanity_kunit  =================
[20:04:50] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[20:04:50] ================== xe_validate_ccs_kunit  ==================
[20:04:50] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[20:04:50] =================== [SKIPPED] xe_migrate ===================
[20:04:50] ================== xe_dma_buf (1 subtest) ==================
[20:04:50] ==================== xe_dma_buf_kunit  =====================
[20:04:50] ================ [SKIPPED] xe_dma_buf_kunit ================
[20:04:50] =================== [SKIPPED] xe_dma_buf ===================
[20:04:50] ================= xe_bo_shrink (1 subtest) =================
[20:04:50] =================== xe_bo_shrink_kunit  ====================
[20:04:50] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[20:04:50] ================== [SKIPPED] xe_bo_shrink ==================
[20:04:50] ==================== xe_bo (2 subtests) ====================
[20:04:50] ================== xe_ccs_migrate_kunit  ===================
[20:04:50] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[20:04:50] ==================== xe_bo_evict_kunit  ====================
[20:04:50] =============== [SKIPPED] xe_bo_evict_kunit ================
[20:04:50] ===================== [SKIPPED] xe_bo ======================
[20:04:50] ==================== args (11 subtests) ====================
[20:04:50] [PASSED] count_args_test
[20:04:50] [PASSED] call_args_example
[20:04:50] [PASSED] call_args_test
[20:04:50] [PASSED] drop_first_arg_example
[20:04:50] [PASSED] drop_first_arg_test
[20:04:50] [PASSED] first_arg_example
[20:04:50] [PASSED] first_arg_test
[20:04:50] [PASSED] last_arg_example
[20:04:50] [PASSED] last_arg_test
[20:04:50] [PASSED] pick_arg_example
[20:04:50] [PASSED] sep_comma_example
[20:04:50] ====================== [PASSED] args =======================
[20:04:50] =================== xe_pci (3 subtests) ====================
[20:04:50] ==================== check_graphics_ip  ====================
[20:04:50] [PASSED] 12.00 Xe_LP
[20:04:50] [PASSED] 12.10 Xe_LP+
[20:04:50] [PASSED] 12.55 Xe_HPG
[20:04:50] [PASSED] 12.60 Xe_HPC
[20:04:50] [PASSED] 12.70 Xe_LPG
[20:04:50] [PASSED] 12.71 Xe_LPG
[20:04:50] [PASSED] 12.74 Xe_LPG+
[20:04:50] [PASSED] 20.01 Xe2_HPG
[20:04:50] [PASSED] 20.02 Xe2_HPG
[20:04:50] [PASSED] 20.04 Xe2_LPG
[20:04:50] [PASSED] 30.00 Xe3_LPG
[20:04:50] [PASSED] 30.01 Xe3_LPG
[20:04:50] [PASSED] 30.03 Xe3_LPG
[20:04:50] [PASSED] 30.04 Xe3_LPG
[20:04:50] [PASSED] 30.05 Xe3_LPG
[20:04:50] [PASSED] 35.11 Xe3p_XPC
[20:04:50] ================ [PASSED] check_graphics_ip ================
[20:04:50] ===================== check_media_ip  ======================
[20:04:50] [PASSED] 12.00 Xe_M
[20:04:50] [PASSED] 12.55 Xe_HPM
[20:04:50] [PASSED] 13.00 Xe_LPM+
[20:04:50] [PASSED] 13.01 Xe2_HPM
[20:04:50] [PASSED] 20.00 Xe2_LPM
[20:04:50] [PASSED] 30.00 Xe3_LPM
[20:04:50] [PASSED] 30.02 Xe3_LPM
[20:04:50] [PASSED] 35.00 Xe3p_LPM
[20:04:50] [PASSED] 35.03 Xe3p_HPM
[20:04:50] ================= [PASSED] check_media_ip ==================
[20:04:50] =================== check_platform_desc  ===================
[20:04:50] [PASSED] 0x9A60 (TIGERLAKE)
[20:04:50] [PASSED] 0x9A68 (TIGERLAKE)
[20:04:50] [PASSED] 0x9A70 (TIGERLAKE)
[20:04:50] [PASSED] 0x9A40 (TIGERLAKE)
[20:04:50] [PASSED] 0x9A49 (TIGERLAKE)
[20:04:50] [PASSED] 0x9A59 (TIGERLAKE)
[20:04:50] [PASSED] 0x9A78 (TIGERLAKE)
[20:04:50] [PASSED] 0x9AC0 (TIGERLAKE)
[20:04:50] [PASSED] 0x9AC9 (TIGERLAKE)
[20:04:50] [PASSED] 0x9AD9 (TIGERLAKE)
[20:04:50] [PASSED] 0x9AF8 (TIGERLAKE)
[20:04:50] [PASSED] 0x4C80 (ROCKETLAKE)
[20:04:50] [PASSED] 0x4C8A (ROCKETLAKE)
[20:04:50] [PASSED] 0x4C8B (ROCKETLAKE)
[20:04:50] [PASSED] 0x4C8C (ROCKETLAKE)
[20:04:50] [PASSED] 0x4C90 (ROCKETLAKE)
[20:04:50] [PASSED] 0x4C9A (ROCKETLAKE)
[20:04:50] [PASSED] 0x4680 (ALDERLAKE_S)
[20:04:50] [PASSED] 0x4682 (ALDERLAKE_S)
[20:04:50] [PASSED] 0x4688 (ALDERLAKE_S)
[20:04:50] [PASSED] 0x468A (ALDERLAKE_S)
[20:04:50] [PASSED] 0x468B (ALDERLAKE_S)
[20:04:50] [PASSED] 0x4690 (ALDERLAKE_S)
[20:04:50] [PASSED] 0x4692 (ALDERLAKE_S)
[20:04:50] [PASSED] 0x4693 (ALDERLAKE_S)
[20:04:50] [PASSED] 0x46A0 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46A1 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46A2 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46A3 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46A6 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46A8 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46AA (ALDERLAKE_P)
[20:04:50] [PASSED] 0x462A (ALDERLAKE_P)
[20:04:50] [PASSED] 0x4626 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x4628 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46B0 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46B1 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46B2 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46B3 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46C0 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46C1 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46C2 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46C3 (ALDERLAKE_P)
[20:04:50] [PASSED] 0x46D0 (ALDERLAKE_N)
[20:04:50] [PASSED] 0x46D1 (ALDERLAKE_N)
[20:04:50] [PASSED] 0x46D2 (ALDERLAKE_N)
[20:04:50] [PASSED] 0x46D3 (ALDERLAKE_N)
[20:04:50] [PASSED] 0x46D4 (ALDERLAKE_N)
[20:04:50] [PASSED] 0xA721 (ALDERLAKE_P)
[20:04:50] [PASSED] 0xA7A1 (ALDERLAKE_P)
[20:04:50] [PASSED] 0xA7A9 (ALDERLAKE_P)
[20:04:50] [PASSED] 0xA7AC (ALDERLAKE_P)
[20:04:50] [PASSED] 0xA7AD (ALDERLAKE_P)
[20:04:50] [PASSED] 0xA720 (ALDERLAKE_P)
[20:04:50] [PASSED] 0xA7A0 (ALDERLAKE_P)
[20:04:50] [PASSED] 0xA7A8 (ALDERLAKE_P)
[20:04:50] [PASSED] 0xA7AA (ALDERLAKE_P)
[20:04:50] [PASSED] 0xA7AB (ALDERLAKE_P)
[20:04:50] [PASSED] 0xA780 (ALDERLAKE_S)
[20:04:50] [PASSED] 0xA781 (ALDERLAKE_S)
[20:04:50] [PASSED] 0xA782 (ALDERLAKE_S)
[20:04:50] [PASSED] 0xA783 (ALDERLAKE_S)
[20:04:50] [PASSED] 0xA788 (ALDERLAKE_S)
[20:04:50] [PASSED] 0xA789 (ALDERLAKE_S)
[20:04:50] [PASSED] 0xA78A (ALDERLAKE_S)
[20:04:50] [PASSED] 0xA78B (ALDERLAKE_S)
[20:04:50] [PASSED] 0x4905 (DG1)
[20:04:50] [PASSED] 0x4906 (DG1)
[20:04:50] [PASSED] 0x4907 (DG1)
[20:04:50] [PASSED] 0x4908 (DG1)
[20:04:50] [PASSED] 0x4909 (DG1)
[20:04:50] [PASSED] 0x56C0 (DG2)
[20:04:50] [PASSED] 0x56C2 (DG2)
[20:04:50] [PASSED] 0x56C1 (DG2)
[20:04:50] [PASSED] 0x7D51 (METEORLAKE)
[20:04:50] [PASSED] 0x7DD1 (METEORLAKE)
[20:04:50] [PASSED] 0x7D41 (METEORLAKE)
[20:04:50] [PASSED] 0x7D67 (METEORLAKE)
[20:04:50] [PASSED] 0xB640 (METEORLAKE)
[20:04:50] [PASSED] 0x56A0 (DG2)
[20:04:50] [PASSED] 0x56A1 (DG2)
[20:04:50] [PASSED] 0x56A2 (DG2)
[20:04:50] [PASSED] 0x56BE (DG2)
[20:04:50] [PASSED] 0x56BF (DG2)
[20:04:50] [PASSED] 0x5690 (DG2)
[20:04:50] [PASSED] 0x5691 (DG2)
[20:04:50] [PASSED] 0x5692 (DG2)
[20:04:50] [PASSED] 0x56A5 (DG2)
[20:04:50] [PASSED] 0x56A6 (DG2)
[20:04:50] [PASSED] 0x56B0 (DG2)
[20:04:50] [PASSED] 0x56B1 (DG2)
[20:04:50] [PASSED] 0x56BA (DG2)
[20:04:50] [PASSED] 0x56BB (DG2)
[20:04:50] [PASSED] 0x56BC (DG2)
[20:04:50] [PASSED] 0x56BD (DG2)
[20:04:50] [PASSED] 0x5693 (DG2)
[20:04:50] [PASSED] 0x5694 (DG2)
[20:04:50] [PASSED] 0x5695 (DG2)
[20:04:50] [PASSED] 0x56A3 (DG2)
[20:04:50] [PASSED] 0x56A4 (DG2)
[20:04:50] [PASSED] 0x56B2 (DG2)
[20:04:50] [PASSED] 0x56B3 (DG2)
[20:04:50] [PASSED] 0x5696 (DG2)
[20:04:50] [PASSED] 0x5697 (DG2)
[20:04:50] [PASSED] 0xB69 (PVC)
[20:04:50] [PASSED] 0xB6E (PVC)
[20:04:50] [PASSED] 0xBD4 (PVC)
[20:04:50] [PASSED] 0xBD5 (PVC)
[20:04:50] [PASSED] 0xBD6 (PVC)
[20:04:50] [PASSED] 0xBD7 (PVC)
[20:04:50] [PASSED] 0xBD8 (PVC)
[20:04:50] [PASSED] 0xBD9 (PVC)
[20:04:50] [PASSED] 0xBDA (PVC)
[20:04:50] [PASSED] 0xBDB (PVC)
[20:04:50] [PASSED] 0xBE0 (PVC)
[20:04:50] [PASSED] 0xBE1 (PVC)
[20:04:50] [PASSED] 0xBE5 (PVC)
[20:04:50] [PASSED] 0x7D40 (METEORLAKE)
[20:04:50] [PASSED] 0x7D45 (METEORLAKE)
[20:04:50] [PASSED] 0x7D55 (METEORLAKE)
[20:04:50] [PASSED] 0x7D60 (METEORLAKE)
[20:04:50] [PASSED] 0x7DD5 (METEORLAKE)
[20:04:50] [PASSED] 0x6420 (LUNARLAKE)
[20:04:50] [PASSED] 0x64A0 (LUNARLAKE)
[20:04:50] [PASSED] 0x64B0 (LUNARLAKE)
[20:04:50] [PASSED] 0xE202 (BATTLEMAGE)
[20:04:50] [PASSED] 0xE209 (BATTLEMAGE)
[20:04:50] [PASSED] 0xE20B (BATTLEMAGE)
[20:04:50] [PASSED] 0xE20C (BATTLEMAGE)
[20:04:50] [PASSED] 0xE20D (BATTLEMAGE)
[20:04:50] [PASSED] 0xE210 (BATTLEMAGE)
[20:04:50] [PASSED] 0xE211 (BATTLEMAGE)
[20:04:50] [PASSED] 0xE212 (BATTLEMAGE)
[20:04:50] [PASSED] 0xE216 (BATTLEMAGE)
[20:04:50] [PASSED] 0xE220 (BATTLEMAGE)
[20:04:50] [PASSED] 0xE221 (BATTLEMAGE)
[20:04:50] [PASSED] 0xE222 (BATTLEMAGE)
[20:04:50] [PASSED] 0xE223 (BATTLEMAGE)
[20:04:50] [PASSED] 0xB080 (PANTHERLAKE)
[20:04:50] [PASSED] 0xB081 (PANTHERLAKE)
[20:04:50] [PASSED] 0xB082 (PANTHERLAKE)
[20:04:50] [PASSED] 0xB083 (PANTHERLAKE)
[20:04:50] [PASSED] 0xB084 (PANTHERLAKE)
[20:04:50] [PASSED] 0xB085 (PANTHERLAKE)
[20:04:50] [PASSED] 0xB086 (PANTHERLAKE)
[20:04:50] [PASSED] 0xB087 (PANTHERLAKE)
[20:04:50] [PASSED] 0xB08F (PANTHERLAKE)
[20:04:50] [PASSED] 0xB090 (PANTHERLAKE)
[20:04:50] [PASSED] 0xB0A0 (PANTHERLAKE)
[20:04:50] [PASSED] 0xB0B0 (PANTHERLAKE)
[20:04:50] [PASSED] 0xFD80 (PANTHERLAKE)
[20:04:50] [PASSED] 0xFD81 (PANTHERLAKE)
[20:04:50] [PASSED] 0xD740 (NOVALAKE_S)
[20:04:50] [PASSED] 0xD741 (NOVALAKE_S)
[20:04:50] [PASSED] 0xD742 (NOVALAKE_S)
[20:04:50] [PASSED] 0xD743 (NOVALAKE_S)
[20:04:50] [PASSED] 0xD744 (NOVALAKE_S)
[20:04:50] [PASSED] 0xD745 (NOVALAKE_S)
[20:04:50] [PASSED] 0x674C (CRESCENTISLAND)
[20:04:50] =============== [PASSED] check_platform_desc ===============
[20:04:50] ===================== [PASSED] xe_pci ======================
[20:04:50] =================== xe_rtp (2 subtests) ====================
[20:04:50] =============== xe_rtp_process_to_sr_tests  ================
[20:04:50] [PASSED] coalesce-same-reg
[20:04:50] [PASSED] no-match-no-add
[20:04:50] [PASSED] match-or
[20:04:50] [PASSED] match-or-xfail
[20:04:50] [PASSED] no-match-no-add-multiple-rules
[20:04:50] [PASSED] two-regs-two-entries
[20:04:50] [PASSED] clr-one-set-other
[20:04:50] [PASSED] set-field
[20:04:50] [PASSED] conflict-duplicate
[20:04:50] [PASSED] conflict-not-disjoint
[20:04:50] [PASSED] conflict-reg-type
[20:04:50] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[20:04:50] ================== xe_rtp_process_tests  ===================
[20:04:50] [PASSED] active1
[20:04:50] [PASSED] active2
[20:04:50] [PASSED] active-inactive
[20:04:50] [PASSED] inactive-active
[20:04:50] [PASSED] inactive-1st_or_active-inactive
[20:04:50] [PASSED] inactive-2nd_or_active-inactive
[20:04:50] [PASSED] inactive-last_or_active-inactive
stty: 'standard input': Inappropriate ioctl for device
[20:04:50] [PASSED] inactive-no_or_active-inactive
[20:04:50] ============== [PASSED] xe_rtp_process_tests ===============
[20:04:50] ===================== [PASSED] xe_rtp ======================
[20:04:50] ==================== xe_wa (1 subtest) =====================
[20:04:50] ======================== xe_wa_gt  =========================
[20:04:50] [PASSED] TIGERLAKE B0
[20:04:50] [PASSED] DG1 A0
[20:04:50] [PASSED] DG1 B0
[20:04:50] [PASSED] ALDERLAKE_S A0
[20:04:50] [PASSED] ALDERLAKE_S B0
[20:04:50] [PASSED] ALDERLAKE_S C0
[20:04:50] [PASSED] ALDERLAKE_S D0
[20:04:50] [PASSED] ALDERLAKE_P A0
[20:04:50] [PASSED] ALDERLAKE_P B0
[20:04:50] [PASSED] ALDERLAKE_P C0
[20:04:50] [PASSED] ALDERLAKE_S RPLS D0
[20:04:50] [PASSED] ALDERLAKE_P RPLU E0
[20:04:50] [PASSED] DG2 G10 C0
[20:04:50] [PASSED] DG2 G11 B1
[20:04:50] [PASSED] DG2 G12 A1
[20:04:50] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[20:04:50] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[20:04:50] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[20:04:50] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[20:04:50] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[20:04:50] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[20:04:50] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[20:04:50] ==================== [PASSED] xe_wa_gt =====================
[20:04:50] ====================== [PASSED] xe_wa ======================
[20:04:50] ============================================================
[20:04:50] Testing complete. Ran 318 tests: passed: 300, skipped: 18
[20:04:50] Elapsed time: 35.211s total, 4.223s configuring, 30.622s building, 0.329s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[20:04:50] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[20:04:52] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[20:05:17] Starting KUnit Kernel (1/1)...
[20:05:17] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[20:05:17] ============ drm_test_pick_cmdline (2 subtests) ============
[20:05:17] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[20:05:17] =============== drm_test_pick_cmdline_named  ===============
[20:05:17] [PASSED] NTSC
[20:05:17] [PASSED] NTSC-J
[20:05:17] [PASSED] PAL
[20:05:17] [PASSED] PAL-M
[20:05:17] =========== [PASSED] drm_test_pick_cmdline_named ===========
[20:05:17] ============== [PASSED] drm_test_pick_cmdline ==============
[20:05:17] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[20:05:17] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[20:05:17] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[20:05:17] =========== drm_validate_clone_mode (2 subtests) ===========
[20:05:17] ============== drm_test_check_in_clone_mode  ===============
[20:05:17] [PASSED] in_clone_mode
[20:05:17] [PASSED] not_in_clone_mode
[20:05:17] ========== [PASSED] drm_test_check_in_clone_mode ===========
[20:05:17] =============== drm_test_check_valid_clones  ===============
[20:05:17] [PASSED] not_in_clone_mode
[20:05:17] [PASSED] valid_clone
[20:05:17] [PASSED] invalid_clone
[20:05:17] =========== [PASSED] drm_test_check_valid_clones ===========
[20:05:17] ============= [PASSED] drm_validate_clone_mode =============
[20:05:17] ============= drm_validate_modeset (1 subtest) =============
[20:05:17] [PASSED] drm_test_check_connector_changed_modeset
[20:05:17] ============== [PASSED] drm_validate_modeset ===============
[20:05:17] ====== drm_test_bridge_get_current_state (2 subtests) ======
[20:05:17] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[20:05:17] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[20:05:17] ======== [PASSED] drm_test_bridge_get_current_state ========
[20:05:17] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[20:05:17] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[20:05:17] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[20:05:17] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[20:05:17] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[20:05:17] ============== drm_bridge_alloc (2 subtests) ===============
[20:05:17] [PASSED] drm_test_drm_bridge_alloc_basic
[20:05:17] [PASSED] drm_test_drm_bridge_alloc_get_put
[20:05:17] ================ [PASSED] drm_bridge_alloc =================
[20:05:17] ================== drm_buddy (8 subtests) ==================
[20:05:17] [PASSED] drm_test_buddy_alloc_limit
[20:05:17] [PASSED] drm_test_buddy_alloc_optimistic
[20:05:17] [PASSED] drm_test_buddy_alloc_pessimistic
[20:05:17] [PASSED] drm_test_buddy_alloc_pathological
[20:05:17] [PASSED] drm_test_buddy_alloc_contiguous
[20:05:17] [PASSED] drm_test_buddy_alloc_clear
[20:05:17] [PASSED] drm_test_buddy_alloc_range_bias
[20:05:17] [PASSED] drm_test_buddy_fragmentation_performance
[20:05:17] ==================== [PASSED] drm_buddy ====================
[20:05:17] ============= drm_cmdline_parser (40 subtests) =============
[20:05:17] [PASSED] drm_test_cmdline_force_d_only
[20:05:17] [PASSED] drm_test_cmdline_force_D_only_dvi
[20:05:17] [PASSED] drm_test_cmdline_force_D_only_hdmi
[20:05:17] [PASSED] drm_test_cmdline_force_D_only_not_digital
[20:05:17] [PASSED] drm_test_cmdline_force_e_only
[20:05:17] [PASSED] drm_test_cmdline_res
[20:05:17] [PASSED] drm_test_cmdline_res_vesa
[20:05:17] [PASSED] drm_test_cmdline_res_vesa_rblank
[20:05:17] [PASSED] drm_test_cmdline_res_rblank
[20:05:17] [PASSED] drm_test_cmdline_res_bpp
[20:05:17] [PASSED] drm_test_cmdline_res_refresh
[20:05:17] [PASSED] drm_test_cmdline_res_bpp_refresh
[20:05:17] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[20:05:17] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[20:05:17] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[20:05:17] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[20:05:17] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[20:05:17] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[20:05:17] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[20:05:17] [PASSED] drm_test_cmdline_res_margins_force_on
[20:05:17] [PASSED] drm_test_cmdline_res_vesa_margins
[20:05:17] [PASSED] drm_test_cmdline_name
[20:05:17] [PASSED] drm_test_cmdline_name_bpp
[20:05:17] [PASSED] drm_test_cmdline_name_option
[20:05:17] [PASSED] drm_test_cmdline_name_bpp_option
[20:05:17] [PASSED] drm_test_cmdline_rotate_0
[20:05:17] [PASSED] drm_test_cmdline_rotate_90
[20:05:17] [PASSED] drm_test_cmdline_rotate_180
[20:05:17] [PASSED] drm_test_cmdline_rotate_270
[20:05:17] [PASSED] drm_test_cmdline_hmirror
[20:05:17] [PASSED] drm_test_cmdline_vmirror
[20:05:17] [PASSED] drm_test_cmdline_margin_options
[20:05:17] [PASSED] drm_test_cmdline_multiple_options
[20:05:17] [PASSED] drm_test_cmdline_bpp_extra_and_option
[20:05:17] [PASSED] drm_test_cmdline_extra_and_option
[20:05:17] [PASSED] drm_test_cmdline_freestanding_options
[20:05:17] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[20:05:17] [PASSED] drm_test_cmdline_panel_orientation
[20:05:17] ================ drm_test_cmdline_invalid  =================
[20:05:17] [PASSED] margin_only
[20:05:17] [PASSED] interlace_only
[20:05:17] [PASSED] res_missing_x
[20:05:17] [PASSED] res_missing_y
[20:05:17] [PASSED] res_bad_y
[20:05:17] [PASSED] res_missing_y_bpp
[20:05:17] [PASSED] res_bad_bpp
[20:05:17] [PASSED] res_bad_refresh
[20:05:17] [PASSED] res_bpp_refresh_force_on_off
[20:05:17] [PASSED] res_invalid_mode
[20:05:17] [PASSED] res_bpp_wrong_place_mode
[20:05:17] [PASSED] name_bpp_refresh
[20:05:17] [PASSED] name_refresh
[20:05:17] [PASSED] name_refresh_wrong_mode
[20:05:17] [PASSED] name_refresh_invalid_mode
[20:05:17] [PASSED] rotate_multiple
[20:05:17] [PASSED] rotate_invalid_val
[20:05:17] [PASSED] rotate_truncated
[20:05:17] [PASSED] invalid_option
[20:05:17] [PASSED] invalid_tv_option
[20:05:17] [PASSED] truncated_tv_option
[20:05:17] ============ [PASSED] drm_test_cmdline_invalid =============
[20:05:17] =============== drm_test_cmdline_tv_options  ===============
[20:05:17] [PASSED] NTSC
[20:05:17] [PASSED] NTSC_443
[20:05:17] [PASSED] NTSC_J
[20:05:17] [PASSED] PAL
[20:05:17] [PASSED] PAL_M
[20:05:17] [PASSED] PAL_N
[20:05:17] [PASSED] SECAM
[20:05:17] [PASSED] MONO_525
[20:05:17] [PASSED] MONO_625
[20:05:17] =========== [PASSED] drm_test_cmdline_tv_options ===========
[20:05:17] =============== [PASSED] drm_cmdline_parser ================
[20:05:17] ========== drmm_connector_hdmi_init (20 subtests) ==========
[20:05:17] [PASSED] drm_test_connector_hdmi_init_valid
[20:05:17] [PASSED] drm_test_connector_hdmi_init_bpc_8
[20:05:17] [PASSED] drm_test_connector_hdmi_init_bpc_10
[20:05:17] [PASSED] drm_test_connector_hdmi_init_bpc_12
[20:05:17] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[20:05:17] [PASSED] drm_test_connector_hdmi_init_bpc_null
[20:05:17] [PASSED] drm_test_connector_hdmi_init_formats_empty
[20:05:17] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[20:05:17] === drm_test_connector_hdmi_init_formats_yuv420_allowed  ===
[20:05:17] [PASSED] supported_formats=0x9 yuv420_allowed=1
[20:05:17] [PASSED] supported_formats=0x9 yuv420_allowed=0
[20:05:17] [PASSED] supported_formats=0x3 yuv420_allowed=1
[20:05:17] [PASSED] supported_formats=0x3 yuv420_allowed=0
[20:05:17] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[20:05:17] [PASSED] drm_test_connector_hdmi_init_null_ddc
[20:05:17] [PASSED] drm_test_connector_hdmi_init_null_product
[20:05:17] [PASSED] drm_test_connector_hdmi_init_null_vendor
[20:05:17] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[20:05:17] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[20:05:17] [PASSED] drm_test_connector_hdmi_init_product_valid
[20:05:17] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[20:05:17] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[20:05:17] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[20:05:17] ========= drm_test_connector_hdmi_init_type_valid  =========
[20:05:17] [PASSED] HDMI-A
[20:05:17] [PASSED] HDMI-B
[20:05:17] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[20:05:17] ======== drm_test_connector_hdmi_init_type_invalid  ========
[20:05:17] [PASSED] Unknown
[20:05:17] [PASSED] VGA
[20:05:17] [PASSED] DVI-I
[20:05:17] [PASSED] DVI-D
[20:05:17] [PASSED] DVI-A
[20:05:17] [PASSED] Composite
[20:05:17] [PASSED] SVIDEO
[20:05:17] [PASSED] LVDS
[20:05:17] [PASSED] Component
[20:05:17] [PASSED] DIN
[20:05:17] [PASSED] DP
[20:05:17] [PASSED] TV
[20:05:17] [PASSED] eDP
[20:05:17] [PASSED] Virtual
[20:05:17] [PASSED] DSI
[20:05:17] [PASSED] DPI
[20:05:17] [PASSED] Writeback
[20:05:17] [PASSED] SPI
[20:05:17] [PASSED] USB
[20:05:17] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[20:05:17] ============ [PASSED] drmm_connector_hdmi_init =============
[20:05:17] ============= drmm_connector_init (3 subtests) =============
[20:05:17] [PASSED] drm_test_drmm_connector_init
[20:05:17] [PASSED] drm_test_drmm_connector_init_null_ddc
[20:05:17] ========= drm_test_drmm_connector_init_type_valid  =========
[20:05:17] [PASSED] Unknown
[20:05:17] [PASSED] VGA
[20:05:17] [PASSED] DVI-I
[20:05:17] [PASSED] DVI-D
[20:05:17] [PASSED] DVI-A
[20:05:17] [PASSED] Composite
[20:05:17] [PASSED] SVIDEO
[20:05:17] [PASSED] LVDS
[20:05:17] [PASSED] Component
[20:05:17] [PASSED] DIN
[20:05:17] [PASSED] DP
[20:05:17] [PASSED] HDMI-A
[20:05:17] [PASSED] HDMI-B
[20:05:17] [PASSED] TV
[20:05:17] [PASSED] eDP
[20:05:17] [PASSED] Virtual
[20:05:17] [PASSED] DSI
[20:05:17] [PASSED] DPI
[20:05:17] [PASSED] Writeback
[20:05:17] [PASSED] SPI
[20:05:17] [PASSED] USB
[20:05:17] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[20:05:17] =============== [PASSED] drmm_connector_init ===============
[20:05:17] ========= drm_connector_dynamic_init (6 subtests) ==========
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_init
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_init_properties
[20:05:17] ===== drm_test_drm_connector_dynamic_init_type_valid  ======
[20:05:17] [PASSED] Unknown
[20:05:17] [PASSED] VGA
[20:05:17] [PASSED] DVI-I
[20:05:17] [PASSED] DVI-D
[20:05:17] [PASSED] DVI-A
[20:05:17] [PASSED] Composite
[20:05:17] [PASSED] SVIDEO
[20:05:17] [PASSED] LVDS
[20:05:17] [PASSED] Component
[20:05:17] [PASSED] DIN
[20:05:17] [PASSED] DP
[20:05:17] [PASSED] HDMI-A
[20:05:17] [PASSED] HDMI-B
[20:05:17] [PASSED] TV
[20:05:17] [PASSED] eDP
[20:05:17] [PASSED] Virtual
[20:05:17] [PASSED] DSI
[20:05:17] [PASSED] DPI
[20:05:17] [PASSED] Writeback
[20:05:17] [PASSED] SPI
[20:05:17] [PASSED] USB
[20:05:17] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[20:05:17] ======== drm_test_drm_connector_dynamic_init_name  =========
[20:05:17] [PASSED] Unknown
[20:05:17] [PASSED] VGA
[20:05:17] [PASSED] DVI-I
[20:05:17] [PASSED] DVI-D
[20:05:17] [PASSED] DVI-A
[20:05:17] [PASSED] Composite
[20:05:17] [PASSED] SVIDEO
[20:05:17] [PASSED] LVDS
[20:05:17] [PASSED] Component
[20:05:17] [PASSED] DIN
[20:05:17] [PASSED] DP
[20:05:17] [PASSED] HDMI-A
[20:05:17] [PASSED] HDMI-B
[20:05:17] [PASSED] TV
[20:05:17] [PASSED] eDP
[20:05:17] [PASSED] Virtual
[20:05:17] [PASSED] DSI
[20:05:17] [PASSED] DPI
[20:05:17] [PASSED] Writeback
[20:05:17] [PASSED] SPI
[20:05:17] [PASSED] USB
[20:05:17] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[20:05:17] =========== [PASSED] drm_connector_dynamic_init ============
[20:05:17] ==== drm_connector_dynamic_register_early (4 subtests) =====
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[20:05:17] ====== [PASSED] drm_connector_dynamic_register_early =======
[20:05:17] ======= drm_connector_dynamic_register (7 subtests) ========
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[20:05:17] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[20:05:17] ========= [PASSED] drm_connector_dynamic_register ==========
[20:05:17] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[20:05:17] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[20:05:17] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[20:05:17] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[20:05:17] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[20:05:17] ========== drm_test_get_tv_mode_from_name_valid  ===========
[20:05:17] [PASSED] NTSC
[20:05:17] [PASSED] NTSC-443
[20:05:17] [PASSED] NTSC-J
[20:05:17] [PASSED] PAL
[20:05:17] [PASSED] PAL-M
[20:05:17] [PASSED] PAL-N
[20:05:17] [PASSED] SECAM
[20:05:17] [PASSED] Mono
[20:05:17] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[20:05:17] [PASSED] drm_test_get_tv_mode_from_name_truncated
[20:05:17] ============ [PASSED] drm_get_tv_mode_from_name ============
[20:05:17] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[20:05:17] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[20:05:17] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[20:05:17] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[20:05:17] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[20:05:17] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[20:05:17] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[20:05:17] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid  =
[20:05:17] [PASSED] VIC 96
[20:05:17] [PASSED] VIC 97
[20:05:17] [PASSED] VIC 101
[20:05:17] [PASSED] VIC 102
[20:05:17] [PASSED] VIC 106
[20:05:17] [PASSED] VIC 107
[20:05:17] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[20:05:17] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[20:05:17] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[20:05:17] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[20:05:17] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[20:05:17] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[20:05:17] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[20:05:17] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[20:05:17] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name  ====
[20:05:17] [PASSED] Automatic
[20:05:17] [PASSED] Full
[20:05:17] [PASSED] Limited 16:235
[20:05:17] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[20:05:17] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[20:05:17] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[20:05:17] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[20:05:17] === drm_test_drm_hdmi_connector_get_output_format_name  ====
[20:05:17] [PASSED] RGB
[20:05:17] [PASSED] YUV 4:2:0
[20:05:17] [PASSED] YUV 4:2:2
[20:05:17] [PASSED] YUV 4:4:4
[20:05:17] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[20:05:17] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[20:05:17] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[20:05:17] ============= drm_damage_helper (21 subtests) ==============
[20:05:17] [PASSED] drm_test_damage_iter_no_damage
[20:05:17] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[20:05:17] [PASSED] drm_test_damage_iter_no_damage_src_moved
[20:05:17] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[20:05:17] [PASSED] drm_test_damage_iter_no_damage_not_visible
[20:05:17] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[20:05:17] [PASSED] drm_test_damage_iter_no_damage_no_fb
[20:05:17] [PASSED] drm_test_damage_iter_simple_damage
[20:05:17] [PASSED] drm_test_damage_iter_single_damage
[20:05:17] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[20:05:17] [PASSED] drm_test_damage_iter_single_damage_outside_src
[20:05:17] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[20:05:17] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[20:05:17] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[20:05:17] [PASSED] drm_test_damage_iter_single_damage_src_moved
[20:05:17] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[20:05:17] [PASSED] drm_test_damage_iter_damage
[20:05:17] [PASSED] drm_test_damage_iter_damage_one_intersect
[20:05:17] [PASSED] drm_test_damage_iter_damage_one_outside
[20:05:17] [PASSED] drm_test_damage_iter_damage_src_moved
[20:05:17] [PASSED] drm_test_damage_iter_damage_not_visible
[20:05:17] ================ [PASSED] drm_damage_helper ================
[20:05:17] ============== drm_dp_mst_helper (3 subtests) ==============
[20:05:17] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[20:05:17] [PASSED] Clock 154000 BPP 30 DSC disabled
[20:05:17] [PASSED] Clock 234000 BPP 30 DSC disabled
[20:05:17] [PASSED] Clock 297000 BPP 24 DSC disabled
[20:05:17] [PASSED] Clock 332880 BPP 24 DSC enabled
[20:05:17] [PASSED] Clock 324540 BPP 24 DSC enabled
[20:05:17] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[20:05:17] ============== drm_test_dp_mst_calc_pbn_div  ===============
[20:05:17] [PASSED] Link rate 2000000 lane count 4
[20:05:17] [PASSED] Link rate 2000000 lane count 2
[20:05:17] [PASSED] Link rate 2000000 lane count 1
[20:05:17] [PASSED] Link rate 1350000 lane count 4
[20:05:17] [PASSED] Link rate 1350000 lane count 2
[20:05:17] [PASSED] Link rate 1350000 lane count 1
[20:05:17] [PASSED] Link rate 1000000 lane count 4
[20:05:17] [PASSED] Link rate 1000000 lane count 2
[20:05:17] [PASSED] Link rate 1000000 lane count 1
[20:05:17] [PASSED] Link rate 810000 lane count 4
[20:05:17] [PASSED] Link rate 810000 lane count 2
[20:05:17] [PASSED] Link rate 810000 lane count 1
[20:05:17] [PASSED] Link rate 540000 lane count 4
[20:05:17] [PASSED] Link rate 540000 lane count 2
[20:05:17] [PASSED] Link rate 540000 lane count 1
[20:05:17] [PASSED] Link rate 270000 lane count 4
[20:05:17] [PASSED] Link rate 270000 lane count 2
[20:05:17] [PASSED] Link rate 270000 lane count 1
[20:05:17] [PASSED] Link rate 162000 lane count 4
[20:05:17] [PASSED] Link rate 162000 lane count 2
[20:05:17] [PASSED] Link rate 162000 lane count 1
[20:05:17] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[20:05:17] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[20:05:17] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[20:05:17] [PASSED] DP_POWER_UP_PHY with port number
[20:05:17] [PASSED] DP_POWER_DOWN_PHY with port number
[20:05:17] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[20:05:17] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[20:05:17] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[20:05:17] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[20:05:17] [PASSED] DP_QUERY_PAYLOAD with port number
[20:05:17] [PASSED] DP_QUERY_PAYLOAD with VCPI
[20:05:17] [PASSED] DP_REMOTE_DPCD_READ with port number
[20:05:17] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[20:05:17] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[20:05:17] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[20:05:17] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[20:05:17] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[20:05:17] [PASSED] DP_REMOTE_I2C_READ with port number
[20:05:17] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[20:05:17] [PASSED] DP_REMOTE_I2C_READ with transactions array
[20:05:17] [PASSED] DP_REMOTE_I2C_WRITE with port number
[20:05:17] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[20:05:17] [PASSED] DP_REMOTE_I2C_WRITE with data array
[20:05:17] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[20:05:17] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[20:05:17] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[20:05:17] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[20:05:17] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[20:05:17] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[20:05:17] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[20:05:17] ================ [PASSED] drm_dp_mst_helper ================
[20:05:17] ================== drm_exec (7 subtests) ===================
[20:05:17] [PASSED] sanitycheck
[20:05:17] [PASSED] test_lock
[20:05:17] [PASSED] test_lock_unlock
[20:05:17] [PASSED] test_duplicates
[20:05:17] [PASSED] test_prepare
[20:05:17] [PASSED] test_prepare_array
[20:05:17] [PASSED] test_multiple_loops
[20:05:17] ==================== [PASSED] drm_exec =====================
[20:05:17] =========== drm_format_helper_test (17 subtests) ===========
[20:05:17] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[20:05:17] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[20:05:17] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[20:05:17] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[20:05:17] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[20:05:17] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[20:05:17] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[20:05:17] ============= drm_test_fb_xrgb8888_to_bgr888  ==============
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[20:05:17] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[20:05:17] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[20:05:17] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[20:05:17] ============== drm_test_fb_xrgb8888_to_mono  ===============
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[20:05:17] ==================== drm_test_fb_swab  =====================
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ================ [PASSED] drm_test_fb_swab =================
[20:05:17] ============ drm_test_fb_xrgb8888_to_xbgr8888  =============
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[20:05:17] ============ drm_test_fb_xrgb8888_to_abgr8888  =============
[20:05:17] [PASSED] single_pixel_source_buffer
[20:05:17] [PASSED] single_pixel_clip_rectangle
[20:05:17] [PASSED] well_known_colors
[20:05:17] [PASSED] destination_pitch
[20:05:17] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[20:05:17] ================= drm_test_fb_clip_offset  =================
[20:05:17] [PASSED] pass through
[20:05:17] [PASSED] horizontal offset
[20:05:17] [PASSED] vertical offset
[20:05:17] [PASSED] horizontal and vertical offset
[20:05:17] [PASSED] horizontal offset (custom pitch)
[20:05:17] [PASSED] vertical offset (custom pitch)
[20:05:17] [PASSED] horizontal and vertical offset (custom pitch)
[20:05:17] ============= [PASSED] drm_test_fb_clip_offset =============
[20:05:17] =================== drm_test_fb_memcpy  ====================
[20:05:17] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[20:05:17] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[20:05:17] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[20:05:17] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[20:05:17] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[20:05:17] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[20:05:17] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[20:05:17] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[20:05:17] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[20:05:17] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[20:05:17] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[20:05:17] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[20:05:17] =============== [PASSED] drm_test_fb_memcpy ================
[20:05:17] ============= [PASSED] drm_format_helper_test ==============
[20:05:17] ================= drm_format (18 subtests) =================
[20:05:17] [PASSED] drm_test_format_block_width_invalid
[20:05:17] [PASSED] drm_test_format_block_width_one_plane
[20:05:17] [PASSED] drm_test_format_block_width_two_plane
[20:05:17] [PASSED] drm_test_format_block_width_three_plane
[20:05:17] [PASSED] drm_test_format_block_width_tiled
[20:05:17] [PASSED] drm_test_format_block_height_invalid
[20:05:17] [PASSED] drm_test_format_block_height_one_plane
[20:05:17] [PASSED] drm_test_format_block_height_two_plane
[20:05:17] [PASSED] drm_test_format_block_height_three_plane
[20:05:17] [PASSED] drm_test_format_block_height_tiled
[20:05:17] [PASSED] drm_test_format_min_pitch_invalid
[20:05:17] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[20:05:17] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[20:05:17] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[20:05:17] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[20:05:17] [PASSED] drm_test_format_min_pitch_two_plane
[20:05:17] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[20:05:17] [PASSED] drm_test_format_min_pitch_tiled
[20:05:17] =================== [PASSED] drm_format ====================
[20:05:17] ============== drm_framebuffer (10 subtests) ===============
[20:05:17] ========== drm_test_framebuffer_check_src_coords  ==========
[20:05:17] [PASSED] Success: source fits into fb
[20:05:17] [PASSED] Fail: overflowing fb with x-axis coordinate
[20:05:17] [PASSED] Fail: overflowing fb with y-axis coordinate
[20:05:17] [PASSED] Fail: overflowing fb with source width
[20:05:17] [PASSED] Fail: overflowing fb with source height
[20:05:17] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[20:05:17] [PASSED] drm_test_framebuffer_cleanup
[20:05:17] =============== drm_test_framebuffer_create  ===============
[20:05:17] [PASSED] ABGR8888 normal sizes
[20:05:17] [PASSED] ABGR8888 max sizes
[20:05:17] [PASSED] ABGR8888 pitch greater than min required
[20:05:17] [PASSED] ABGR8888 pitch less than min required
[20:05:17] [PASSED] ABGR8888 Invalid width
[20:05:17] [PASSED] ABGR8888 Invalid buffer handle
[20:05:17] [PASSED] No pixel format
[20:05:17] [PASSED] ABGR8888 Width 0
[20:05:17] [PASSED] ABGR8888 Height 0
[20:05:17] [PASSED] ABGR8888 Out of bound height * pitch combination
[20:05:17] [PASSED] ABGR8888 Large buffer offset
[20:05:17] [PASSED] ABGR8888 Buffer offset for inexistent plane
[20:05:17] [PASSED] ABGR8888 Invalid flag
[20:05:17] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[20:05:17] [PASSED] ABGR8888 Valid buffer modifier
[20:05:17] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[20:05:17] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[20:05:17] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[20:05:17] [PASSED] NV12 Normal sizes
[20:05:17] [PASSED] NV12 Max sizes
[20:05:17] [PASSED] NV12 Invalid pitch
[20:05:17] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[20:05:17] [PASSED] NV12 different  modifier per-plane
[20:05:17] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[20:05:17] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[20:05:17] [PASSED] NV12 Modifier for inexistent plane
[20:05:17] [PASSED] NV12 Handle for inexistent plane
[20:05:17] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[20:05:17] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[20:05:17] [PASSED] YVU420 Normal sizes
[20:05:17] [PASSED] YVU420 Max sizes
[20:05:17] [PASSED] YVU420 Invalid pitch
[20:05:17] [PASSED] YVU420 Different pitches
[20:05:17] [PASSED] YVU420 Different buffer offsets/pitches
[20:05:17] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[20:05:17] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[20:05:17] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[20:05:17] [PASSED] YVU420 Valid modifier
[20:05:17] [PASSED] YVU420 Different modifiers per plane
[20:05:17] [PASSED] YVU420 Modifier for inexistent plane
[20:05:17] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[20:05:17] [PASSED] X0L2 Normal sizes
[20:05:17] [PASSED] X0L2 Max sizes
[20:05:17] [PASSED] X0L2 Invalid pitch
[20:05:17] [PASSED] X0L2 Pitch greater than minimum required
[20:05:17] [PASSED] X0L2 Handle for inexistent plane
[20:05:17] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[20:05:17] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[20:05:17] [PASSED] X0L2 Valid modifier
[20:05:17] [PASSED] X0L2 Modifier for inexistent plane
[20:05:17] =========== [PASSED] drm_test_framebuffer_create ===========
[20:05:17] [PASSED] drm_test_framebuffer_free
[20:05:17] [PASSED] drm_test_framebuffer_init
[20:05:17] [PASSED] drm_test_framebuffer_init_bad_format
[20:05:17] [PASSED] drm_test_framebuffer_init_dev_mismatch
[20:05:17] [PASSED] drm_test_framebuffer_lookup
[20:05:17] [PASSED] drm_test_framebuffer_lookup_inexistent
[20:05:17] [PASSED] drm_test_framebuffer_modifiers_not_supported
[20:05:17] ================= [PASSED] drm_framebuffer =================
[20:05:17] ================ drm_gem_shmem (8 subtests) ================
[20:05:17] [PASSED] drm_gem_shmem_test_obj_create
[20:05:17] [PASSED] drm_gem_shmem_test_obj_create_private
[20:05:17] [PASSED] drm_gem_shmem_test_pin_pages
[20:05:17] [PASSED] drm_gem_shmem_test_vmap
[20:05:17] [PASSED] drm_gem_shmem_test_get_pages_sgt
[20:05:17] [PASSED] drm_gem_shmem_test_get_sg_table
[20:05:17] [PASSED] drm_gem_shmem_test_madvise
[20:05:17] [PASSED] drm_gem_shmem_test_purge
[20:05:17] ================== [PASSED] drm_gem_shmem ==================
[20:05:17] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[20:05:17] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[20:05:17] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[20:05:17] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[20:05:17] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[20:05:17] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[20:05:17] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[20:05:17] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420  =======
[20:05:17] [PASSED] Automatic
[20:05:17] [PASSED] Full
[20:05:17] [PASSED] Limited 16:235
[20:05:17] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[20:05:17] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[20:05:17] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[20:05:17] [PASSED] drm_test_check_disable_connector
[20:05:17] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[20:05:17] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[20:05:17] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[20:05:17] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[20:05:17] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[20:05:17] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[20:05:17] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[20:05:17] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[20:05:17] [PASSED] drm_test_check_output_bpc_dvi
[20:05:17] [PASSED] drm_test_check_output_bpc_format_vic_1
[20:05:17] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[20:05:17] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[20:05:17] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[20:05:17] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[20:05:17] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[20:05:17] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[20:05:17] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[20:05:17] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[20:05:17] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[20:05:17] [PASSED] drm_test_check_broadcast_rgb_value
[20:05:17] [PASSED] drm_test_check_bpc_8_value
[20:05:17] [PASSED] drm_test_check_bpc_10_value
[20:05:17] [PASSED] drm_test_check_bpc_12_value
[20:05:17] [PASSED] drm_test_check_format_value
[20:05:17] [PASSED] drm_test_check_tmds_char_value
[20:05:17] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[20:05:17] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[20:05:17] [PASSED] drm_test_check_mode_valid
[20:05:17] [PASSED] drm_test_check_mode_valid_reject
[20:05:17] [PASSED] drm_test_check_mode_valid_reject_rate
[20:05:17] [PASSED] drm_test_check_mode_valid_reject_max_clock
[20:05:17] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[20:05:17] ================= drm_managed (2 subtests) =================
[20:05:17] [PASSED] drm_test_managed_release_action
[20:05:17] [PASSED] drm_test_managed_run_action
[20:05:17] =================== [PASSED] drm_managed ===================
[20:05:17] =================== drm_mm (6 subtests) ====================
[20:05:17] [PASSED] drm_test_mm_init
[20:05:17] [PASSED] drm_test_mm_debug
[20:05:17] [PASSED] drm_test_mm_align32
[20:05:17] [PASSED] drm_test_mm_align64
[20:05:17] [PASSED] drm_test_mm_lowest
[20:05:17] [PASSED] drm_test_mm_highest
[20:05:17] ===================== [PASSED] drm_mm ======================
[20:05:17] ============= drm_modes_analog_tv (5 subtests) =============
[20:05:17] [PASSED] drm_test_modes_analog_tv_mono_576i
[20:05:17] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[20:05:17] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[20:05:17] [PASSED] drm_test_modes_analog_tv_pal_576i
[20:05:17] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[20:05:17] =============== [PASSED] drm_modes_analog_tv ===============
[20:05:17] ============== drm_plane_helper (2 subtests) ===============
[20:05:17] =============== drm_test_check_plane_state  ================
[20:05:17] [PASSED] clipping_simple
[20:05:17] [PASSED] clipping_rotate_reflect
[20:05:17] [PASSED] positioning_simple
[20:05:17] [PASSED] upscaling
[20:05:17] [PASSED] downscaling
[20:05:17] [PASSED] rounding1
[20:05:17] [PASSED] rounding2
[20:05:17] [PASSED] rounding3
[20:05:17] [PASSED] rounding4
[20:05:17] =========== [PASSED] drm_test_check_plane_state ============
[20:05:17] =========== drm_test_check_invalid_plane_state  ============
[20:05:17] [PASSED] positioning_invalid
[20:05:17] [PASSED] upscaling_invalid
[20:05:17] [PASSED] downscaling_invalid
[20:05:17] ======= [PASSED] drm_test_check_invalid_plane_state ========
[20:05:17] ================ [PASSED] drm_plane_helper =================
[20:05:17] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[20:05:17] ====== drm_test_connector_helper_tv_get_modes_check  =======
[20:05:17] [PASSED] None
[20:05:17] [PASSED] PAL
[20:05:17] [PASSED] NTSC
[20:05:17] [PASSED] Both, NTSC Default
[20:05:17] [PASSED] Both, PAL Default
[20:05:17] [PASSED] Both, NTSC Default, with PAL on command-line
[20:05:17] [PASSED] Both, PAL Default, with NTSC on command-line
[20:05:17] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[20:05:17] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[20:05:17] ================== drm_rect (9 subtests) ===================
[20:05:17] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[20:05:17] [PASSED] drm_test_rect_clip_scaled_not_clipped
[20:05:17] [PASSED] drm_test_rect_clip_scaled_clipped
[20:05:17] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[20:05:17] ================= drm_test_rect_intersect  =================
[20:05:17] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[20:05:17] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[20:05:17] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[20:05:17] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[20:05:17] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[20:05:17] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[20:05:17] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[20:05:17] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[20:05:17] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[20:05:17] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[20:05:17] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[20:05:17] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[20:05:17] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[20:05:17] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[20:05:17] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[20:05:17] ============= [PASSED] drm_test_rect_intersect =============
[20:05:17] ================ drm_test_rect_calc_hscale  ================
[20:05:17] [PASSED] normal use
[20:05:17] [PASSED] out of max range
[20:05:17] [PASSED] out of min range
[20:05:17] [PASSED] zero dst
[20:05:17] [PASSED] negative src
[20:05:17] [PASSED] negative dst
[20:05:17] ============ [PASSED] drm_test_rect_calc_hscale ============
[20:05:17] ================ drm_test_rect_calc_vscale  ================
[20:05:17] [PASSED] normal use
[20:05:17] [PASSED] out of max range
[20:05:17] [PASSED] out of min range
[20:05:17] [PASSED] zero dst
[20:05:17] [PASSED] negative src
[20:05:17] [PASSED] negative dst
[20:05:17] ============ [PASSED] drm_test_rect_calc_vscale ============
[20:05:17] ================== drm_test_rect_rotate  ===================
[20:05:17] [PASSED] reflect-x
[20:05:17] [PASSED] reflect-y
[20:05:17] [PASSED] rotate-0
[20:05:17] [PASSED] rotate-90
[20:05:17] [PASSED] rotate-180
[20:05:17] [PASSED] rotate-270
[20:05:17] ============== [PASSED] drm_test_rect_rotate ===============
[20:05:17] ================ drm_test_rect_rotate_inv  =================
[20:05:17] [PASSED] reflect-x
[20:05:17] [PASSED] reflect-y
[20:05:17] [PASSED] rotate-0
[20:05:17] [PASSED] rotate-90
[20:05:17] [PASSED] rotate-180
[20:05:17] [PASSED] rotate-270
[20:05:17] ============ [PASSED] drm_test_rect_rotate_inv =============
[20:05:17] ==================== [PASSED] drm_rect =====================
[20:05:17] ============ drm_sysfb_modeset_test (1 subtest) ============
[20:05:17] ============ drm_test_sysfb_build_fourcc_list  =============
[20:05:17] [PASSED] no native formats
[20:05:17] [PASSED] XRGB8888 as native format
[20:05:17] [PASSED] remove duplicates
[20:05:17] [PASSED] convert alpha formats
[20:05:17] [PASSED] random formats
[20:05:17] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[20:05:17] ============= [PASSED] drm_sysfb_modeset_test ==============
[20:05:17] ============================================================
[20:05:17] Testing complete. Ran 622 tests: passed: 622
[20:05:17] Elapsed time: 27.049s total, 1.584s configuring, 25.045s building, 0.390s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[20:05:17] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[20:05:19] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[20:05:28] Starting KUnit Kernel (1/1)...
[20:05:28] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[20:05:28] ================= ttm_device (5 subtests) ==================
[20:05:28] [PASSED] ttm_device_init_basic
[20:05:28] [PASSED] ttm_device_init_multiple
[20:05:28] [PASSED] ttm_device_fini_basic
[20:05:28] [PASSED] ttm_device_init_no_vma_man
[20:05:28] ================== ttm_device_init_pools  ==================
[20:05:28] [PASSED] No DMA allocations, no DMA32 required
[20:05:28] [PASSED] DMA allocations, DMA32 required
[20:05:28] [PASSED] No DMA allocations, DMA32 required
[20:05:28] [PASSED] DMA allocations, no DMA32 required
[20:05:28] ============== [PASSED] ttm_device_init_pools ==============
[20:05:28] =================== [PASSED] ttm_device ====================
[20:05:28] ================== ttm_pool (8 subtests) ===================
[20:05:28] ================== ttm_pool_alloc_basic  ===================
[20:05:28] [PASSED] One page
[20:05:28] [PASSED] More than one page
[20:05:28] [PASSED] Above the allocation limit
[20:05:28] [PASSED] One page, with coherent DMA mappings enabled
[20:05:28] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[20:05:28] ============== [PASSED] ttm_pool_alloc_basic ===============
[20:05:28] ============== ttm_pool_alloc_basic_dma_addr  ==============
[20:05:28] [PASSED] One page
[20:05:28] [PASSED] More than one page
[20:05:28] [PASSED] Above the allocation limit
[20:05:28] [PASSED] One page, with coherent DMA mappings enabled
[20:05:28] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[20:05:28] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[20:05:28] [PASSED] ttm_pool_alloc_order_caching_match
[20:05:28] [PASSED] ttm_pool_alloc_caching_mismatch
[20:05:28] [PASSED] ttm_pool_alloc_order_mismatch
[20:05:28] [PASSED] ttm_pool_free_dma_alloc
[20:05:28] [PASSED] ttm_pool_free_no_dma_alloc
[20:05:28] [PASSED] ttm_pool_fini_basic
[20:05:28] ==================== [PASSED] ttm_pool =====================
[20:05:28] ================ ttm_resource (8 subtests) =================
[20:05:28] ================= ttm_resource_init_basic  =================
[20:05:28] [PASSED] Init resource in TTM_PL_SYSTEM
[20:05:28] [PASSED] Init resource in TTM_PL_VRAM
[20:05:28] [PASSED] Init resource in a private placement
[20:05:28] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[20:05:28] ============= [PASSED] ttm_resource_init_basic =============
[20:05:28] [PASSED] ttm_resource_init_pinned
[20:05:28] [PASSED] ttm_resource_fini_basic
[20:05:28] [PASSED] ttm_resource_manager_init_basic
[20:05:28] [PASSED] ttm_resource_manager_usage_basic
[20:05:28] [PASSED] ttm_resource_manager_set_used_basic
[20:05:28] [PASSED] ttm_sys_man_alloc_basic
[20:05:28] [PASSED] ttm_sys_man_free_basic
[20:05:28] ================== [PASSED] ttm_resource ===================
[20:05:28] =================== ttm_tt (15 subtests) ===================
[20:05:28] ==================== ttm_tt_init_basic  ====================
[20:05:28] [PASSED] Page-aligned size
[20:05:28] [PASSED] Extra pages requested
[20:05:28] ================ [PASSED] ttm_tt_init_basic ================
[20:05:28] [PASSED] ttm_tt_init_misaligned
[20:05:28] [PASSED] ttm_tt_fini_basic
[20:05:28] [PASSED] ttm_tt_fini_sg
[20:05:28] [PASSED] ttm_tt_fini_shmem
[20:05:28] [PASSED] ttm_tt_create_basic
[20:05:28] [PASSED] ttm_tt_create_invalid_bo_type
[20:05:28] [PASSED] ttm_tt_create_ttm_exists
[20:05:28] [PASSED] ttm_tt_create_failed
[20:05:28] [PASSED] ttm_tt_destroy_basic
[20:05:28] [PASSED] ttm_tt_populate_null_ttm
[20:05:28] [PASSED] ttm_tt_populate_populated_ttm
[20:05:28] [PASSED] ttm_tt_unpopulate_basic
[20:05:28] [PASSED] ttm_tt_unpopulate_empty_ttm
[20:05:28] [PASSED] ttm_tt_swapin_basic
[20:05:28] ===================== [PASSED] ttm_tt ======================
[20:05:28] =================== ttm_bo (14 subtests) ===================
[20:05:28] =========== ttm_bo_reserve_optimistic_no_ticket  ===========
[20:05:28] [PASSED] Cannot be interrupted and sleeps
[20:05:28] [PASSED] Cannot be interrupted, locks straight away
[20:05:28] [PASSED] Can be interrupted, sleeps
[20:05:28] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[20:05:28] [PASSED] ttm_bo_reserve_locked_no_sleep
[20:05:28] [PASSED] ttm_bo_reserve_no_wait_ticket
[20:05:28] [PASSED] ttm_bo_reserve_double_resv
[20:05:28] [PASSED] ttm_bo_reserve_interrupted
[20:05:28] [PASSED] ttm_bo_reserve_deadlock
[20:05:28] [PASSED] ttm_bo_unreserve_basic
[20:05:28] [PASSED] ttm_bo_unreserve_pinned
[20:05:28] [PASSED] ttm_bo_unreserve_bulk
[20:05:28] [PASSED] ttm_bo_fini_basic
[20:05:28] [PASSED] ttm_bo_fini_shared_resv
[20:05:28] [PASSED] ttm_bo_pin_basic
[20:05:28] [PASSED] ttm_bo_pin_unpin_resource
[20:05:28] [PASSED] ttm_bo_multiple_pin_one_unpin
[20:05:28] ===================== [PASSED] ttm_bo ======================
[20:05:28] ============== ttm_bo_validate (21 subtests) ===============
[20:05:28] ============== ttm_bo_init_reserved_sys_man  ===============
[20:05:28] [PASSED] Buffer object for userspace
[20:05:28] [PASSED] Kernel buffer object
[20:05:28] [PASSED] Shared buffer object
[20:05:28] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[20:05:28] ============== ttm_bo_init_reserved_mock_man  ==============
[20:05:28] [PASSED] Buffer object for userspace
[20:05:28] [PASSED] Kernel buffer object
[20:05:28] [PASSED] Shared buffer object
[20:05:28] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[20:05:28] [PASSED] ttm_bo_init_reserved_resv
[20:05:28] ================== ttm_bo_validate_basic  ==================
[20:05:28] [PASSED] Buffer object for userspace
[20:05:28] [PASSED] Kernel buffer object
[20:05:28] [PASSED] Shared buffer object
[20:05:28] ============== [PASSED] ttm_bo_validate_basic ==============
[20:05:28] [PASSED] ttm_bo_validate_invalid_placement
[20:05:28] ============= ttm_bo_validate_same_placement  ==============
[20:05:28] [PASSED] System manager
[20:05:28] [PASSED] VRAM manager
[20:05:28] ========= [PASSED] ttm_bo_validate_same_placement ==========
[20:05:28] [PASSED] ttm_bo_validate_failed_alloc
[20:05:28] [PASSED] ttm_bo_validate_pinned
[20:05:28] [PASSED] ttm_bo_validate_busy_placement
[20:05:28] ================ ttm_bo_validate_multihop  =================
[20:05:28] [PASSED] Buffer object for userspace
[20:05:28] [PASSED] Kernel buffer object
[20:05:28] [PASSED] Shared buffer object
[20:05:28] ============ [PASSED] ttm_bo_validate_multihop =============
[20:05:28] ========== ttm_bo_validate_no_placement_signaled  ==========
[20:05:28] [PASSED] Buffer object in system domain, no page vector
[20:05:28] [PASSED] Buffer object in system domain with an existing page vector
[20:05:28] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[20:05:28] ======== ttm_bo_validate_no_placement_not_signaled  ========
[20:05:28] [PASSED] Buffer object for userspace
[20:05:28] [PASSED] Kernel buffer object
[20:05:28] [PASSED] Shared buffer object
[20:05:28] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[20:05:28] [PASSED] ttm_bo_validate_move_fence_signaled
[20:05:28] ========= ttm_bo_validate_move_fence_not_signaled  =========
[20:05:28] [PASSED] Waits for GPU
[20:05:28] [PASSED] Tries to lock straight away
[20:05:28] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[20:05:28] [PASSED] ttm_bo_validate_happy_evict
[20:05:28] [PASSED] ttm_bo_validate_all_pinned_evict
[20:05:28] [PASSED] ttm_bo_validate_allowed_only_evict
[20:05:28] [PASSED] ttm_bo_validate_deleted_evict
[20:05:28] [PASSED] ttm_bo_validate_busy_domain_evict
[20:05:28] [PASSED] ttm_bo_validate_evict_gutting
[20:05:28] [PASSED] ttm_bo_validate_recrusive_evict
[20:05:28] ================= [PASSED] ttm_bo_validate =================
[20:05:28] ============================================================
[20:05:28] Testing complete. Ran 101 tests: passed: 101
[20:05:28] Elapsed time: 11.037s total, 1.659s configuring, 9.161s building, 0.188s running

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 45+ messages in thread

* ✓ Xe.CI.BAT: success for PF: Add sriov_admin sysfs tree (rev2)
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (18 preceding siblings ...)
  2025-10-28 20:05 ` ✓ CI.KUnit: success " Patchwork
@ 2025-10-28 20:43 ` Patchwork
  2025-10-29  7:15 ` ✗ Xe.CI.Full: failure " Patchwork
  20 siblings, 0 replies; 45+ messages in thread
From: Patchwork @ 2025-10-28 20:43 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 1553 bytes --]

== Series Details ==

Series: PF: Add sriov_admin sysfs tree (rev2)
URL   : https://patchwork.freedesktop.org/series/156220/
State : success

== Summary ==

CI Bug Log - changes from xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d_BAT -> xe-pw-156220v2_BAT
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (13 -> 13)
------------------------------

  No changes in participating hosts

Known issues
------------

  Here are the changes found in xe-pw-156220v2_BAT that come from known issues:

### IGT changes ###

#### Possible fixes ####

  * igt@kms_flip@basic-plain-flip@a-edp1:
    - bat-adlp-7:         [DMESG-WARN][1] ([Intel XE#4543]) -> [PASS][2] +1 other test pass
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/bat-adlp-7/igt@kms_flip@basic-plain-flip@a-edp1.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/bat-adlp-7/igt@kms_flip@basic-plain-flip@a-edp1.html

  
  [Intel XE#4543]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4543


Build changes
-------------

  * Linux: xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d -> xe-pw-156220v2

  IGT_8599: b22b9ca357de868f3848269e5eb7c4cc53b3f2d1 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d: 5742fc7aea99a1326637a7106eeaeac383a1c76d
  xe-pw-156220v2: 156220v2

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/index.html


^ permalink raw reply	[flat|nested] 45+ messages in thread

* ✗ Xe.CI.Full: failure for PF: Add sriov_admin sysfs tree (rev2)
  2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
                   ` (19 preceding siblings ...)
  2025-10-28 20:43 ` ✓ Xe.CI.BAT: " Patchwork
@ 2025-10-29  7:15 ` Patchwork
  2025-10-29 10:11   ` Michal Wajdeczko
  20 siblings, 1 reply; 45+ messages in thread
From: Patchwork @ 2025-10-29  7:15 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 59563 bytes --]

== Series Details ==

Series: PF: Add sriov_admin sysfs tree (rev2)
URL   : https://patchwork.freedesktop.org/series/156220/
State : failure

== Summary ==

CI Bug Log - changes from xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d_FULL -> xe-pw-156220v2_FULL
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with xe-pw-156220v2_FULL absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in xe-pw-156220v2_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (4 -> 4)
----------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in xe-pw-156220v2_FULL:

### IGT changes ###

#### Possible regressions ####

  * igt@kms_flip@2x-flip-vs-suspend@ad-dp2-hdmi-a3:
    - shard-bmg:          [PASS][1] -> [DMESG-WARN][2]
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-4/igt@kms_flip@2x-flip-vs-suspend@ad-dp2-hdmi-a3.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-6/igt@kms_flip@2x-flip-vs-suspend@ad-dp2-hdmi-a3.html

  
Known issues
------------

  Here are the changes found in xe-pw-156220v2_FULL that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
    - shard-dg2-set2:     NOTRUN -> [SKIP][3] ([Intel XE#623])
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html

  * igt@kms_big_fb@linear-16bpp-rotate-270:
    - shard-dg2-set2:     NOTRUN -> [SKIP][4] ([Intel XE#316])
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@kms_big_fb@linear-16bpp-rotate-270.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip:
    - shard-adlp:         [PASS][5] -> [DMESG-FAIL][6] ([Intel XE#4543]) +1 other test dmesg-fail
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-adlp-8/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip.html
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-9/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
    - shard-adlp:         NOTRUN -> [DMESG-FAIL][7] ([Intel XE#4543])
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html

  * igt@kms_big_fb@yf-tiled-8bpp-rotate-90:
    - shard-dg2-set2:     NOTRUN -> [SKIP][8] ([Intel XE#1124]) +1 other test skip
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-435/igt@kms_big_fb@yf-tiled-8bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180:
    - shard-adlp:         NOTRUN -> [SKIP][9] ([Intel XE#1124]) +3 other tests skip
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180.html

  * igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p:
    - shard-adlp:         NOTRUN -> [SKIP][10] ([Intel XE#2191])
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p.html

  * igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p:
    - shard-dg2-set2:     NOTRUN -> [SKIP][11] ([Intel XE#2191])
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p.html

  * igt@kms_bw@linear-tiling-2-displays-3840x2160p:
    - shard-dg2-set2:     NOTRUN -> [SKIP][12] ([Intel XE#367])
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-435/igt@kms_bw@linear-tiling-2-displays-3840x2160p.html

  * igt@kms_ccs@bad-rotation-90-4-tiled-mtl-mc-ccs:
    - shard-dg2-set2:     NOTRUN -> [SKIP][13] ([Intel XE#455] / [Intel XE#787]) +3 other tests skip
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@kms_ccs@bad-rotation-90-4-tiled-mtl-mc-ccs.html

  * igt@kms_ccs@bad-rotation-90-4-tiled-mtl-mc-ccs@pipe-c-hdmi-a-6:
    - shard-dg2-set2:     NOTRUN -> [SKIP][14] ([Intel XE#787]) +13 other tests skip
   [14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@kms_ccs@bad-rotation-90-4-tiled-mtl-mc-ccs@pipe-c-hdmi-a-6.html

  * igt@kms_ccs@bad-rotation-90-y-tiled-gen12-mc-ccs@pipe-d-hdmi-a-1:
    - shard-adlp:         NOTRUN -> [SKIP][15] ([Intel XE#455] / [Intel XE#787]) +7 other tests skip
   [15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_ccs@bad-rotation-90-y-tiled-gen12-mc-ccs@pipe-d-hdmi-a-1.html

  * igt@kms_ccs@crc-primary-rotation-180-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-1:
    - shard-adlp:         NOTRUN -> [SKIP][16] ([Intel XE#787]) +11 other tests skip
   [16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_ccs@crc-primary-rotation-180-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-1.html

  * igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs@pipe-a-dp-2:
    - shard-bmg:          NOTRUN -> [SKIP][17] ([Intel XE#2652] / [Intel XE#787]) +3 other tests skip
   [17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-8/igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs@pipe-a-dp-2.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-d-dp-4:
    - shard-dg2-set2:     NOTRUN -> [INCOMPLETE][18] ([Intel XE#6168] / [i915#14968])
   [18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-d-dp-4.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-d-hdmi-a-6:
    - shard-dg2-set2:     NOTRUN -> [DMESG-WARN][19] ([Intel XE#1727] / [Intel XE#3113])
   [19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-d-hdmi-a-6.html

  * igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs:
    - shard-adlp:         NOTRUN -> [SKIP][20] ([Intel XE#2907])
   [20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs.html

  * igt@kms_chamelium_edid@dp-edid-change-during-suspend:
    - shard-dg2-set2:     NOTRUN -> [SKIP][21] ([Intel XE#373]) +3 other tests skip
   [21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@kms_chamelium_edid@dp-edid-change-during-suspend.html

  * igt@kms_chamelium_frames@dp-crc-multiple:
    - shard-adlp:         NOTRUN -> [SKIP][22] ([Intel XE#373]) +2 other tests skip
   [22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_chamelium_frames@dp-crc-multiple.html

  * igt@kms_color@ctm-0-25:
    - shard-bmg:          [PASS][23] -> [DMESG-WARN][24] ([Intel XE#3372] / [Intel XE#3428])
   [23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-4/igt@kms_color@ctm-0-25.html
   [24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-6/igt@kms_color@ctm-0-25.html

  * igt@kms_cursor_crc@cursor-dpms:
    - shard-bmg:          [PASS][25] -> [SKIP][26] ([Intel XE#2320])
   [25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-4/igt@kms_cursor_crc@cursor-dpms.html
   [26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-6/igt@kms_cursor_crc@cursor-dpms.html

  * igt@kms_cursor_crc@cursor-random-512x170:
    - shard-dg2-set2:     NOTRUN -> [SKIP][27] ([Intel XE#308])
   [27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@kms_cursor_crc@cursor-random-512x170.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic:
    - shard-bmg:          [PASS][28] -> [FAIL][29] ([Intel XE#4633])
   [28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-8/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
   [29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-4/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html

  * igt@kms_cursor_legacy@flip-vs-cursor-legacy:
    - shard-bmg:          [PASS][30] -> [FAIL][31] ([Intel XE#5299])
   [30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-5/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
   [31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-3/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html

  * igt@kms_dsc@dsc-with-bpc-formats:
    - shard-dg2-set2:     NOTRUN -> [SKIP][32] ([Intel XE#455]) +7 other tests skip
   [32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-433/igt@kms_dsc@dsc-with-bpc-formats.html

  * igt@kms_feature_discovery@display-4x:
    - shard-dg2-set2:     NOTRUN -> [SKIP][33] ([Intel XE#1138])
   [33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-435/igt@kms_feature_discovery@display-4x.html

  * igt@kms_flip@2x-flip-vs-expired-vblank:
    - shard-bmg:          [PASS][34] -> [SKIP][35] ([Intel XE#2316])
   [34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-4/igt@kms_flip@2x-flip-vs-expired-vblank.html
   [35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-6/igt@kms_flip@2x-flip-vs-expired-vblank.html

  * igt@kms_flip@2x-flip-vs-suspend:
    - shard-bmg:          [PASS][36] -> [DMESG-WARN][37] ([Intel XE#5208] / [Intel XE#6381])
   [36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-4/igt@kms_flip@2x-flip-vs-suspend.html
   [37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-6/igt@kms_flip@2x-flip-vs-suspend.html

  * igt@kms_flip@2x-nonexisting-fb-interruptible:
    - shard-adlp:         NOTRUN -> [SKIP][38] ([Intel XE#310]) +1 other test skip
   [38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_flip@2x-nonexisting-fb-interruptible.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1:
    - shard-lnl:          [PASS][39] -> [FAIL][40] ([Intel XE#301]) +1 other test fail
   [39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-lnl-1/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html
   [40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-lnl-5/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@c-hdmi-a6:
    - shard-dg2-set2:     [PASS][41] -> [FAIL][42] ([Intel XE#301])
   [41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-435/igt@kms_flip@flip-vs-expired-vblank-interruptible@c-hdmi-a6.html
   [42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-464/igt@kms_flip@flip-vs-expired-vblank-interruptible@c-hdmi-a6.html

  * igt@kms_flip@flip-vs-expired-vblank@c-edp1:
    - shard-lnl:          [PASS][43] -> [FAIL][44] ([Intel XE#301] / [Intel XE#3149]) +1 other test fail
   [43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-lnl-5/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html
   [44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-lnl-3/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html

  * igt@kms_flip@flip-vs-suspend-interruptible:
    - shard-bmg:          [PASS][45] -> [INCOMPLETE][46] ([Intel XE#2049] / [Intel XE#2597]) +1 other test incomplete
   [45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-2/igt@kms_flip@flip-vs-suspend-interruptible.html
   [46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-7/igt@kms_flip@flip-vs-suspend-interruptible.html
    - shard-dg2-set2:     [PASS][47] -> [INCOMPLETE][48] ([Intel XE#2049] / [Intel XE#2597]) +3 other tests incomplete
   [47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-464/igt@kms_flip@flip-vs-suspend-interruptible.html
   [48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-436/igt@kms_flip@flip-vs-suspend-interruptible.html

  * igt@kms_flip@flip-vs-suspend@d-hdmi-a1:
    - shard-adlp:         [PASS][49] -> [DMESG-WARN][50] ([Intel XE#2953] / [Intel XE#4173]) +6 other tests dmesg-warn
   [49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-adlp-8/igt@kms_flip@flip-vs-suspend@d-hdmi-a1.html
   [50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-6/igt@kms_flip@flip-vs-suspend@d-hdmi-a1.html

  * igt@kms_flip@plain-flip-interruptible@c-hdmi-a1:
    - shard-adlp:         [PASS][51] -> [DMESG-WARN][52] ([Intel XE#4543])
   [51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-adlp-1/igt@kms_flip@plain-flip-interruptible@c-hdmi-a1.html
   [52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-3/igt@kms_flip@plain-flip-interruptible@c-hdmi-a1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-downscaling@pipe-a-valid-mode:
    - shard-adlp:         NOTRUN -> [SKIP][53] ([Intel XE#455]) +1 other test skip
   [53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-downscaling@pipe-a-valid-mode.html

  * igt@kms_force_connector_basic@prune-stale-modes:
    - shard-adlp:         [PASS][54] -> [ABORT][55] ([Intel XE#2953])
   [54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-adlp-2/igt@kms_force_connector_basic@prune-stale-modes.html
   [55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-2/igt@kms_force_connector_basic@prune-stale-modes.html

  * igt@kms_frontbuffer_tracking@drrs-1p-offscreen-pri-indfb-draw-blt:
    - shard-adlp:         NOTRUN -> [SKIP][56] ([Intel XE#6312])
   [56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_frontbuffer_tracking@drrs-1p-offscreen-pri-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-msflip-blt:
    - shard-adlp:         NOTRUN -> [SKIP][57] ([Intel XE#656]) +7 other tests skip
   [57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-mmap-wc:
    - shard-adlp:         NOTRUN -> [SKIP][58] ([Intel XE#651]) +1 other test skip
   [58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-indfb-scaledprimary:
    - shard-dg2-set2:     NOTRUN -> [SKIP][59] ([Intel XE#651]) +5 other tests skip
   [59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcdrrs-indfb-scaledprimary.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-shrfb-draw-mmap-wc:
    - shard-dg2-set2:     NOTRUN -> [SKIP][60] ([Intel XE#6312])
   [60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-435/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-shrfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-pri-indfb-multidraw:
    - shard-adlp:         NOTRUN -> [SKIP][61] ([Intel XE#653]) +5 other tests skip
   [61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_frontbuffer_tracking@fbcpsr-1p-pri-indfb-multidraw.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-shrfb-msflip-blt:
    - shard-dg2-set2:     NOTRUN -> [SKIP][62] ([Intel XE#653]) +9 other tests skip
   [62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-shrfb-msflip-blt.html

  * igt@kms_hdr@invalid-hdr:
    - shard-bmg:          [PASS][63] -> [SKIP][64] ([Intel XE#1503])
   [63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-3/igt@kms_hdr@invalid-hdr.html
   [64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-1/igt@kms_hdr@invalid-hdr.html

  * igt@kms_hdr@static-toggle-suspend@pipe-a-hdmi-a-6:
    - shard-dg2-set2:     [PASS][65] -> [TIMEOUT][66] ([Intel XE#6431]) +1 other test timeout
   [65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-436/igt@kms_hdr@static-toggle-suspend@pipe-a-hdmi-a-6.html
   [66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-432/igt@kms_hdr@static-toggle-suspend@pipe-a-hdmi-a-6.html

  * igt@kms_joiner@basic-big-joiner:
    - shard-dg2-set2:     NOTRUN -> [SKIP][67] ([Intel XE#346])
   [67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@kms_joiner@basic-big-joiner.html

  * igt@kms_joiner@basic-max-non-joiner:
    - shard-adlp:         NOTRUN -> [SKIP][68] ([Intel XE#4298])
   [68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_joiner@basic-max-non-joiner.html

  * igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
    - shard-adlp:         NOTRUN -> [SKIP][69] ([Intel XE#356])
   [69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html

  * igt@kms_pm_rpm@dpms-non-lpsp:
    - shard-adlp:         NOTRUN -> [SKIP][70] ([Intel XE#836])
   [70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_pm_rpm@dpms-non-lpsp.html

  * igt@kms_psr2_sf@fbc-pr-overlay-plane-update-continuous-sf:
    - shard-dg2-set2:     NOTRUN -> [SKIP][71] ([Intel XE#1406] / [Intel XE#1489]) +2 other tests skip
   [71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-433/igt@kms_psr2_sf@fbc-pr-overlay-plane-update-continuous-sf.html

  * igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-fully-sf:
    - shard-adlp:         NOTRUN -> [SKIP][72] ([Intel XE#1406] / [Intel XE#1489])
   [72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-fully-sf.html

  * igt@kms_psr@fbc-psr-sprite-blt:
    - shard-adlp:         NOTRUN -> [SKIP][73] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +3 other tests skip
   [73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@kms_psr@fbc-psr-sprite-blt.html

  * igt@kms_psr@fbc-psr2-sprite-plane-move:
    - shard-dg2-set2:     NOTRUN -> [SKIP][74] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +2 other tests skip
   [74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-433/igt@kms_psr@fbc-psr2-sprite-plane-move.html

  * igt@sriov_basic@enable-vfs-autoprobe-off:
    - shard-dg2-set2:     NOTRUN -> [SKIP][75] ([Intel XE#1091] / [Intel XE#2849])
   [75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@sriov_basic@enable-vfs-autoprobe-off.html

  * igt@xe_ccs@block-multicopy-compressed:
    - shard-adlp:         NOTRUN -> [SKIP][76] ([Intel XE#455] / [Intel XE#488] / [Intel XE#5607])
   [76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@xe_ccs@block-multicopy-compressed.html

  * igt@xe_eudebug_online@reset-with-attention:
    - shard-adlp:         NOTRUN -> [SKIP][77] ([Intel XE#4837] / [Intel XE#5565]) +3 other tests skip
   [77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@xe_eudebug_online@reset-with-attention.html

  * igt@xe_eudebug_online@writes-caching-sram-bb-vram-target-sram:
    - shard-dg2-set2:     NOTRUN -> [SKIP][78] ([Intel XE#4837]) +2 other tests skip
   [78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@xe_eudebug_online@writes-caching-sram-bb-vram-target-sram.html

  * igt@xe_evict@evict-beng-mixed-threads-large-multi-vm:
    - shard-adlp:         NOTRUN -> [SKIP][79] ([Intel XE#261])
   [79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@xe_evict@evict-beng-mixed-threads-large-multi-vm.html

  * igt@xe_evict@evict-large-external-cm:
    - shard-adlp:         NOTRUN -> [SKIP][80] ([Intel XE#261] / [Intel XE#5564])
   [80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@xe_evict@evict-large-external-cm.html

  * igt@xe_evict@evict-mixed-many-threads-small:
    - shard-bmg:          [PASS][81] -> [INCOMPLETE][82] ([Intel XE#6321]) +1 other test incomplete
   [81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-4/igt@xe_evict@evict-mixed-many-threads-small.html
   [82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-6/igt@xe_evict@evict-mixed-many-threads-small.html

  * igt@xe_exec_basic@multigpu-once-userptr-invalidate-race:
    - shard-adlp:         NOTRUN -> [SKIP][83] ([Intel XE#1392] / [Intel XE#5575]) +2 other tests skip
   [83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@xe_exec_basic@multigpu-once-userptr-invalidate-race.html

  * igt@xe_exec_fault_mode@invalid-va-scratch-nopagefault:
    - shard-dg2-set2:     NOTRUN -> [SKIP][84] ([Intel XE#288]) +5 other tests skip
   [84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@xe_exec_fault_mode@invalid-va-scratch-nopagefault.html

  * igt@xe_exec_fault_mode@many-bindexecqueue-userptr-imm:
    - shard-adlp:         NOTRUN -> [SKIP][85] ([Intel XE#288] / [Intel XE#5561]) +5 other tests skip
   [85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@xe_exec_fault_mode@many-bindexecqueue-userptr-imm.html

  * igt@xe_exec_fault_mode@many-bindexecqueue-userptr-rebind:
    - shard-bmg:          [PASS][86] -> [DMESG-WARN][87] ([Intel XE#3428]) +7 other tests dmesg-warn
   [86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-4/igt@xe_exec_fault_mode@many-bindexecqueue-userptr-rebind.html
   [87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-6/igt@xe_exec_fault_mode@many-bindexecqueue-userptr-rebind.html

  * igt@xe_exec_mix_modes@exec-simple-batch-store-dma-fence:
    - shard-dg2-set2:     NOTRUN -> [SKIP][88] ([Intel XE#2360])
   [88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-433/igt@xe_exec_mix_modes@exec-simple-batch-store-dma-fence.html

  * igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-multi-vma:
    - shard-lnl:          [PASS][89] -> [FAIL][90] ([Intel XE#5625])
   [89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-lnl-3/igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-multi-vma.html
   [90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-lnl-1/igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-multi-vma.html

  * igt@xe_exec_system_allocator@threads-many-large-execqueues-mmap-new-huge:
    - shard-adlp:         NOTRUN -> [SKIP][91] ([Intel XE#4915]) +77 other tests skip
   [91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@xe_exec_system_allocator@threads-many-large-execqueues-mmap-new-huge.html

  * igt@xe_exec_system_allocator@threads-shared-vm-many-mmap-shared-nomemset:
    - shard-dg2-set2:     NOTRUN -> [SKIP][92] ([Intel XE#4915]) +105 other tests skip
   [92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-435/igt@xe_exec_system_allocator@threads-shared-vm-many-mmap-shared-nomemset.html

  * igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv:
    - shard-dg2-set2:     [PASS][93] -> [DMESG-WARN][94] ([Intel XE#5893])
   [93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-436/igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv.html
   [94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-432/igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv.html

  * igt@xe_oa@invalid-oa-metric-set-id:
    - shard-adlp:         NOTRUN -> [SKIP][95] ([Intel XE#3573])
   [95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@xe_oa@invalid-oa-metric-set-id.html

  * igt@xe_oa@oa-unit-exclusive-stream-sample-oa:
    - shard-dg2-set2:     NOTRUN -> [SKIP][96] ([Intel XE#3573]) +1 other test skip
   [96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@xe_oa@oa-unit-exclusive-stream-sample-oa.html

  * igt@xe_pmu@gt-c6-idle:
    - shard-dg2-set2:     NOTRUN -> [FAIL][97] ([Intel XE#6366])
   [97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@xe_pmu@gt-c6-idle.html

  * igt@xe_pxp@pxp-src-to-pxp-dest-rendercopy:
    - shard-dg2-set2:     NOTRUN -> [SKIP][98] ([Intel XE#4733])
   [98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-435/igt@xe_pxp@pxp-src-to-pxp-dest-rendercopy.html

  * igt@xe_pxp@pxp-stale-queue-post-suspend:
    - shard-adlp:         NOTRUN -> [SKIP][99] ([Intel XE#4733] / [Intel XE#5594])
   [99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@xe_pxp@pxp-stale-queue-post-suspend.html

  * igt@xe_query@multigpu-query-invalid-extension:
    - shard-adlp:         NOTRUN -> [SKIP][100] ([Intel XE#944])
   [100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-4/igt@xe_query@multigpu-query-invalid-extension.html

  * igt@xe_query@multigpu-query-topology:
    - shard-dg2-set2:     NOTRUN -> [SKIP][101] ([Intel XE#944])
   [101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-433/igt@xe_query@multigpu-query-topology.html

  * igt@xe_sriov_auto_provisioning@selfconfig-reprovision-increase-numvfs:
    - shard-dg2-set2:     NOTRUN -> [SKIP][102] ([Intel XE#4130])
   [102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-433/igt@xe_sriov_auto_provisioning@selfconfig-reprovision-increase-numvfs.html

  * igt@xe_sriov_flr@flr-twice:
    - shard-dg2-set2:     NOTRUN -> [SKIP][103] ([Intel XE#4273])
   [103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@xe_sriov_flr@flr-twice.html

  
#### Possible fixes ####

  * igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
    - shard-adlp:         [DMESG-FAIL][104] ([Intel XE#4543]) -> [PASS][105]
   [104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-adlp-1/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html
   [105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-3/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html

  * igt@kms_bw@connected-linear-tiling-2-displays-1920x1080p:
    - shard-bmg:          [SKIP][106] ([Intel XE#2314] / [Intel XE#2894]) -> [PASS][107] +1 other test pass
   [106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-6/igt@kms_bw@connected-linear-tiling-2-displays-1920x1080p.html
   [107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-3/igt@kms_bw@connected-linear-tiling-2-displays-1920x1080p.html

  * igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4:
    - shard-dg2-set2:     [INCOMPLETE][108] ([Intel XE#3862]) -> [PASS][109] +1 other test pass
   [108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-433/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4.html
   [109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-433/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs:
    - shard-dg2-set2:     [INCOMPLETE][110] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#4345] / [Intel XE#6168]) -> [PASS][111]
   [110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-435/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html
   [111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-hdmi-a-6:
    - shard-dg2-set2:     [INCOMPLETE][112] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#6168]) -> [PASS][113]
   [112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-435/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-hdmi-a-6.html
   [113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-hdmi-a-6.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-b-dp-4:
    - shard-dg2-set2:     [INCOMPLETE][114] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#4212] / [Intel XE#4522]) -> [PASS][115]
   [114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-b-dp-4.html
   [115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-b-dp-4.html

  * igt@kms_cursor_legacy@cursorb-vs-flipb-toggle:
    - shard-bmg:          [SKIP][116] ([Intel XE#2291]) -> [PASS][117] +1 other test pass
   [116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipb-toggle.html
   [117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-3/igt@kms_cursor_legacy@cursorb-vs-flipb-toggle.html

  * igt@kms_dp_aux_dev:
    - shard-bmg:          [SKIP][118] ([Intel XE#3009]) -> [PASS][119]
   [118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-6/igt@kms_dp_aux_dev.html
   [119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-3/igt@kms_dp_aux_dev.html

  * igt@kms_dp_linktrain_fallback@dp-fallback:
    - shard-bmg:          [SKIP][120] ([Intel XE#4294]) -> [PASS][121]
   [120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-6/igt@kms_dp_linktrain_fallback@dp-fallback.html
   [121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-5/igt@kms_dp_linktrain_fallback@dp-fallback.html

  * igt@kms_flip@2x-plain-flip-fb-recreate:
    - shard-bmg:          [SKIP][122] ([Intel XE#2316]) -> [PASS][123] +5 other tests pass
   [122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-6/igt@kms_flip@2x-plain-flip-fb-recreate.html
   [123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-8/igt@kms_flip@2x-plain-flip-fb-recreate.html

  * igt@kms_flip@dpms-off-confusion@c-hdmi-a1:
    - shard-adlp:         [DMESG-WARN][124] ([Intel XE#4543]) -> [PASS][125] +3 other tests pass
   [124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-adlp-1/igt@kms_flip@dpms-off-confusion@c-hdmi-a1.html
   [125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-3/igt@kms_flip@dpms-off-confusion@c-hdmi-a1.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@b-dp4:
    - shard-dg2-set2:     [FAIL][126] ([Intel XE#301]) -> [PASS][127]
   [126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-435/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-dp4.html
   [127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-464/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-dp4.html

  * igt@kms_flip@flip-vs-rmfb:
    - shard-adlp:         [DMESG-WARN][128] ([Intel XE#5208]) -> [PASS][129]
   [128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-adlp-3/igt@kms_flip@flip-vs-rmfb.html
   [129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-2/igt@kms_flip@flip-vs-rmfb.html

  * igt@kms_hdr@invalid-metadata-sizes:
    - shard-bmg:          [SKIP][130] ([Intel XE#1503]) -> [PASS][131]
   [130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-6/igt@kms_hdr@invalid-metadata-sizes.html
   [131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-5/igt@kms_hdr@invalid-metadata-sizes.html

  * igt@kms_vblank@ts-continuation-dpms-suspend:
    - shard-adlp:         [DMESG-WARN][132] ([Intel XE#2953] / [Intel XE#4173]) -> [PASS][133] +1 other test pass
   [132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-adlp-2/igt@kms_vblank@ts-continuation-dpms-suspend.html
   [133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-2/igt@kms_vblank@ts-continuation-dpms-suspend.html

  * igt@xe_module_load@load:
    - shard-dg2-set2:     ([PASS][134], [PASS][135], [PASS][136], [PASS][137], [PASS][138], [PASS][139], [PASS][140], [PASS][141], [PASS][142], [PASS][143], [PASS][144], [PASS][145], [PASS][146], [SKIP][147], [PASS][148], [PASS][149], [PASS][150], [PASS][151], [PASS][152], [PASS][153], [PASS][154], [PASS][155], [PASS][156], [PASS][157], [PASS][158], [PASS][159]) ([Intel XE#378]) -> ([PASS][160], [PASS][161], [PASS][162], [PASS][163], [PASS][164], [PASS][165], [PASS][166], [PASS][167], [PASS][168], [PASS][169], [PASS][170], [PASS][171], [PASS][172], [PASS][173], [PASS][174], [PASS][175], [PASS][176], [PASS][177], [PASS][178], [PASS][179], [PASS][180], [PASS][181], [PASS][182], [PASS][183], [PASS][184])
   [134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-432/igt@xe_module_load@load.html
   [135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-432/igt@xe_module_load@load.html
   [136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-432/igt@xe_module_load@load.html
   [137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-435/igt@xe_module_load@load.html
   [138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-466/igt@xe_module_load@load.html
   [139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-434/igt@xe_module_load@load.html
   [140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-434/igt@xe_module_load@load.html
   [141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-466/igt@xe_module_load@load.html
   [142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-466/igt@xe_module_load@load.html
   [143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-463/igt@xe_module_load@load.html
   [144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-466/igt@xe_module_load@load.html
   [145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-433/igt@xe_module_load@load.html
   [146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-436/igt@xe_module_load@load.html
   [147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-464/igt@xe_module_load@load.html
   [148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-435/igt@xe_module_load@load.html
   [149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-463/igt@xe_module_load@load.html
   [150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-463/igt@xe_module_load@load.html
   [151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-435/igt@xe_module_load@load.html
   [152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-435/igt@xe_module_load@load.html
   [153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-464/igt@xe_module_load@load.html
   [154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-464/igt@xe_module_load@load.html
   [155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-433/igt@xe_module_load@load.html
   [156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-433/igt@xe_module_load@load.html
   [157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-464/igt@xe_module_load@load.html
   [158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-436/igt@xe_module_load@load.html
   [159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-436/igt@xe_module_load@load.html
   [160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@xe_module_load@load.html
   [161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-433/igt@xe_module_load@load.html
   [162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-436/igt@xe_module_load@load.html
   [163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-436/igt@xe_module_load@load.html
   [164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-433/igt@xe_module_load@load.html
   [165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@xe_module_load@load.html
   [166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-463/igt@xe_module_load@load.html
   [167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-463/igt@xe_module_load@load.html
   [168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-432/igt@xe_module_load@load.html
   [169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-432/igt@xe_module_load@load.html
   [170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-432/igt@xe_module_load@load.html
   [171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-434/igt@xe_module_load@load.html
   [172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@xe_module_load@load.html
   [173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-466/igt@xe_module_load@load.html
   [174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-434/igt@xe_module_load@load.html
   [175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-435/igt@xe_module_load@load.html
   [176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-464/igt@xe_module_load@load.html
   [177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-436/igt@xe_module_load@load.html
   [178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-464/igt@xe_module_load@load.html
   [179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-435/igt@xe_module_load@load.html
   [180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-435/igt@xe_module_load@load.html
   [181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-464/igt@xe_module_load@load.html
   [182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-434/igt@xe_module_load@load.html
   [183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-436/igt@xe_module_load@load.html
   [184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-463/igt@xe_module_load@load.html

  
#### Warnings ####

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs:
    - shard-dg2-set2:     [INCOMPLETE][185] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#4212] / [Intel XE#4345] / [Intel XE#4522]) -> [INCOMPLETE][186] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#4345] / [Intel XE#6168])
   [185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
   [186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html

  * igt@kms_frontbuffer_tracking@drrs-2p-primscrn-spr-indfb-draw-render:
    - shard-bmg:          [SKIP][187] ([Intel XE#2312]) -> [SKIP][188] ([Intel XE#2311]) +11 other tests skip
   [187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-spr-indfb-draw-render.html
   [188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-8/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-spr-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff:
    - shard-bmg:          [SKIP][189] ([Intel XE#2312]) -> [SKIP][190] ([Intel XE#5390]) +8 other tests skip
   [189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html
   [190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-8/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc:
    - shard-bmg:          [SKIP][191] ([Intel XE#2311]) -> [SKIP][192] ([Intel XE#2312])
   [191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-4/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html
   [192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-cur-indfb-onoff:
    - shard-bmg:          [SKIP][193] ([Intel XE#2313]) -> [SKIP][194] ([Intel XE#2312])
   [193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-4/igt@kms_frontbuffer_tracking@psr-2p-primscrn-cur-indfb-onoff.html
   [194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-6/igt@kms_frontbuffer_tracking@psr-2p-primscrn-cur-indfb-onoff.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-msflip-blt:
    - shard-bmg:          [SKIP][195] ([Intel XE#2312]) -> [SKIP][196] ([Intel XE#2313]) +10 other tests skip
   [195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-6/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-msflip-blt.html
   [196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-3/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-msflip-blt.html

  * igt@kms_tiled_display@basic-test-pattern:
    - shard-bmg:          [FAIL][197] ([Intel XE#1729]) -> [SKIP][198] ([Intel XE#2426])
   [197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-1/igt@kms_tiled_display@basic-test-pattern.html
   [198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-2/igt@kms_tiled_display@basic-test-pattern.html

  * igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv:
    - shard-adlp:         [ABORT][199] ([Intel XE#4917] / [Intel XE#5466] / [Intel XE#5530]) -> [ABORT][200] ([Intel XE#5466] / [Intel XE#5530])
   [199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-adlp-3/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
   [200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-adlp-1/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
    - shard-bmg:          [ABORT][201] ([Intel XE#5466] / [Intel XE#5530]) -> [ABORT][202] ([Intel XE#4917] / [Intel XE#5466] / [Intel XE#5530])
   [201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-7/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
   [202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-7/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html

  * igt@xe_module_load@load:
    - shard-bmg:          ([PASS][203], [PASS][204], [PASS][205], [PASS][206], [PASS][207], [PASS][208], [PASS][209], [PASS][210], [PASS][211], [PASS][212], [PASS][213], [PASS][214], [PASS][215], [PASS][216], [PASS][217], [PASS][218], [PASS][219], [PASS][220], [PASS][221], [PASS][222], [PASS][223], [PASS][224], [PASS][225], [PASS][226], [PASS][227], [SKIP][228]) ([Intel XE#2457]) -> ([PASS][229], [DMESG-WARN][230], [PASS][231], [PASS][232], [PASS][233], [PASS][234], [PASS][235], [PASS][236], [PASS][237], [PASS][238], [PASS][239], [PASS][240], [PASS][241], [PASS][242], [PASS][243], [PASS][244], [PASS][245], [PASS][246], [PASS][247], [PASS][248], [PASS][249], [PASS][250], [PASS][251], [PASS][252], [PASS][253]) ([Intel XE#3428])
   [203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-5/igt@xe_module_load@load.html
   [204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-1/igt@xe_module_load@load.html
   [205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-1/igt@xe_module_load@load.html
   [206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-1/igt@xe_module_load@load.html
   [207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-8/igt@xe_module_load@load.html
   [208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-8/igt@xe_module_load@load.html
   [209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-8/igt@xe_module_load@load.html
   [210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-4/igt@xe_module_load@load.html
   [211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-8/igt@xe_module_load@load.html
   [212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-7/igt@xe_module_load@load.html
   [213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-7/igt@xe_module_load@load.html
   [214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-7/igt@xe_module_load@load.html
   [215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-3/igt@xe_module_load@load.html
   [216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-3/igt@xe_module_load@load.html
   [217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-3/igt@xe_module_load@load.html
   [218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-6/igt@xe_module_load@load.html
   [219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-2/igt@xe_module_load@load.html
   [220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-2/igt@xe_module_load@load.html
   [221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-5/igt@xe_module_load@load.html
   [222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-5/igt@xe_module_load@load.html
   [223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-2/igt@xe_module_load@load.html
   [224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-6/igt@xe_module_load@load.html
   [225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-6/igt@xe_module_load@load.html
   [226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-4/igt@xe_module_load@load.html
   [227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-7/igt@xe_module_load@load.html
   [228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-2/igt@xe_module_load@load.html
   [229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-1/igt@xe_module_load@load.html
   [230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-6/igt@xe_module_load@load.html
   [231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-6/igt@xe_module_load@load.html
   [232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-3/igt@xe_module_load@load.html
   [233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-5/igt@xe_module_load@load.html
   [234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-7/igt@xe_module_load@load.html
   [235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-1/igt@xe_module_load@load.html
   [236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-1/igt@xe_module_load@load.html
   [237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-4/igt@xe_module_load@load.html
   [238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-4/igt@xe_module_load@load.html
   [239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-4/igt@xe_module_load@load.html
   [240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-3/igt@xe_module_load@load.html
   [241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-8/igt@xe_module_load@load.html
   [242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-8/igt@xe_module_load@load.html
   [243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-7/igt@xe_module_load@load.html
   [244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-6/igt@xe_module_load@load.html
   [245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-5/igt@xe_module_load@load.html
   [246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-5/igt@xe_module_load@load.html
   [247]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-2/igt@xe_module_load@load.html
   [248]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-3/igt@xe_module_load@load.html
   [249]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-2/igt@xe_module_load@load.html
   [250]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-2/igt@xe_module_load@load.html
   [251]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-7/igt@xe_module_load@load.html
   [252]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-7/igt@xe_module_load@load.html
   [253]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-8/igt@xe_module_load@load.html

  
  [Intel XE#1091]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1091
  [Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
  [Intel XE#1138]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1138
  [Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
  [Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
  [Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
  [Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
  [Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
  [Intel XE#1729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1729
  [Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
  [Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
  [Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
  [Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
  [Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
  [Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
  [Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
  [Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
  [Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
  [Intel XE#2360]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2360
  [Intel XE#2426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2426
  [Intel XE#2457]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2457
  [Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
  [Intel XE#261]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/261
  [Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
  [Intel XE#2849]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2849
  [Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
  [Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
  [Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
  [Intel XE#2907]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2907
  [Intel XE#2953]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2953
  [Intel XE#3009]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3009
  [Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
  [Intel XE#308]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/308
  [Intel XE#310]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/310
  [Intel XE#3113]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3113
  [Intel XE#3149]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3149
  [Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
  [Intel XE#3372]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3372
  [Intel XE#3428]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3428
  [Intel XE#346]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/346
  [Intel XE#356]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/356
  [Intel XE#3573]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3573
  [Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
  [Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
  [Intel XE#378]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/378
  [Intel XE#3862]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3862
  [Intel XE#4130]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4130
  [Intel XE#4173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4173
  [Intel XE#4212]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4212
  [Intel XE#4273]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4273
  [Intel XE#4294]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4294
  [Intel XE#4298]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4298
  [Intel XE#4345]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4345
  [Intel XE#4522]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4522
  [Intel XE#4543]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4543
  [Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
  [Intel XE#4633]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4633
  [Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
  [Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
  [Intel XE#488]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/488
  [Intel XE#4915]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4915
  [Intel XE#4917]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4917
  [Intel XE#5208]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5208
  [Intel XE#5299]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5299
  [Intel XE#5390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5390
  [Intel XE#5466]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5466
  [Intel XE#5530]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5530
  [Intel XE#5561]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5561
  [Intel XE#5564]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5564
  [Intel XE#5565]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5565
  [Intel XE#5575]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5575
  [Intel XE#5594]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5594
  [Intel XE#5607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5607
  [Intel XE#5625]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5625
  [Intel XE#5893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5893
  [Intel XE#6168]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6168
  [Intel XE#623]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/623
  [Intel XE#6312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6312
  [Intel XE#6321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6321
  [Intel XE#6366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6366
  [Intel XE#6381]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6381
  [Intel XE#6431]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6431
  [Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
  [Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
  [Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
  [Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
  [Intel XE#836]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/836
  [Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
  [Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
  [i915#14968]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14968


Build changes
-------------

  * Linux: xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d -> xe-pw-156220v2

  IGT_8599: b22b9ca357de868f3848269e5eb7c4cc53b3f2d1 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d: 5742fc7aea99a1326637a7106eeaeac383a1c76d
  xe-pw-156220v2: 156220v2

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/index.html

[-- Attachment #2: Type: text/html, Size: 67926 bytes --]

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v2 07/17] drm/xe/pf: Fix signature of internal config helpers
  2025-10-28 17:58 ` [PATCH v2 07/17] drm/xe/pf: Fix signature of internal config helpers Michal Wajdeczko
@ 2025-10-29  8:02   ` Piotr Piórkowski
  0 siblings, 0 replies; 45+ messages in thread
From: Piotr Piórkowski @ 2025-10-29  8:02 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Tue [2025-Oct-28 18:58:21 +0100]:
> Both pf_get_exec_quantum() and pf_get_preempt_timeout() should
> return u32 as this is a type of the underlying data.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> index 343ab4a32cb1..6365d5f2ae98 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> @@ -1727,7 +1727,7 @@ static int pf_provision_exec_quantum(struct xe_gt *gt, unsigned int vfid,
>  	return 0;
>  }
>  
> -static int pf_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
> +static u32 pf_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
>  {
>  	struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
>  
> @@ -1830,7 +1830,7 @@ static int pf_provision_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
>  	return 0;
>  }
>  
> -static int pf_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid)
> +static u32 pf_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid)
>  {
>  	struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
>  
> -- 
> 2.47.1
> 

-- 

^ permalink raw reply	[flat|nested] 45+ messages in thread
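[Editorial note] The type fix reviewed above matters because the underlying config field is u32; an exec quantum larger than INT_MAX, returned through the old int signature, reads back as a negative number on common two's-complement targets. A minimal userspace sketch of the difference (hypothetical helper names, not the actual driver code):

```c
#include <stdint.h>

/* Hypothetical stand-in for the per-VF config field, which is u32. */
static uint32_t stored_exec_quantum = 3000000000u; /* a valid u32, > INT_MAX */

/* Old helper signature: the u32 value is squeezed through a signed int,
 * so large-but-valid values come back negative on two's-complement targets. */
static int get_exec_quantum_old(void)
{
	return stored_exec_quantum;
}

/* Fixed helper signature: matches the type of the underlying data,
 * preserving the full u32 range. */
static uint32_t get_exec_quantum_new(void)
{
	return stored_exec_quantum;
}
```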

* Re: [PATCH v2 03/17] drm/xe/pf: Add _locked variants of the VF EQ config functions
  2025-10-28 17:58 ` [PATCH v2 03/17] drm/xe/pf: Add _locked variants of the VF EQ config functions Michal Wajdeczko
@ 2025-10-29  8:47   ` Piotr Piórkowski
  2025-10-29  9:00     ` Piotr Piórkowski
  2025-10-30 19:47     ` Michal Wajdeczko
  0 siblings, 2 replies; 45+ messages in thread
From: Piotr Piórkowski @ 2025-10-29  8:47 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Lucas De Marchi

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Tue [2025-Oct-28 18:58:17 +0100]:
> In upcoming patches we will want to configure VF's execution
> quantum (EQ) on all GTs under single lock to avoid potential
> races in parallel GT configuration attempts.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 58 +++++++++++++++++-----
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h |  4 ++
>  2 files changed, 49 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> index c0c0215c0703..717f81e76b8c 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> @@ -1732,29 +1732,65 @@ static int pf_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
>  }
>  
>  /**
> - * xe_gt_sriov_pf_config_set_exec_quantum - Configure execution quantum for the VF.
> + * xe_gt_sriov_pf_config_set_exec_quantum_locked() - Configure execution quantum of the VF.
>   * @gt: the &xe_gt
>   * @vfid: the VF identifier
>   * @exec_quantum: requested execution quantum in milliseconds (0 is infinity)
>   *
> - * This function can only be called on PF.
> + * This function can only be called on PF with the master mutex hold.
> + * It will log the provisioned value or an error in case of the failure.
>   *
>   * Return: 0 on success or a negative error code on failure.
>   */
> -int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
> -					   u32 exec_quantum)
> +int xe_gt_sriov_pf_config_set_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid,
> +						  u32 exec_quantum)
>  {
>  	int err;
>  
> -	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> +
>  	err = pf_provision_exec_quantum(gt, vfid, exec_quantum);
> -	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
>  
>  	return pf_config_set_u32_done(gt, vfid, exec_quantum,
> -				      xe_gt_sriov_pf_config_get_exec_quantum(gt, vfid),
> +				      pf_get_exec_quantum(gt, vfid),
>  				      "execution quantum", exec_quantum_unit, err);
>  }
>  
> +/**
> + * xe_gt_sriov_pf_config_set_exec_quantum - Configure execution quantum for the VF.
> + * @gt: the &xe_gt
> + * @vfid: the VF identifier
> + * @exec_quantum: requested execution quantum in milliseconds (0 is infinity)
> + *
> + * This function can only be called on PF.
> + * It will log the provisioned value or na error in case of the failure.

typo: na -> an
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
> +					   u32 exec_quantum)
> +{
> +	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
> +
> +	return xe_gt_sriov_pf_config_set_exec_quantum_locked(gt, vfid, exec_quantum);
> +}
> +
> +/**
> + * xe_gt_sriov_pf_config_get_exec_quantum_locked() - Get VF's execution quantum.
> + * @gt: the &xe_gt
> + * @vfid: the VF identifier
> + *
> + * This function can only be called on PF with the master mutex hold.
> + *
> + * Return: VF's (or PF's) execution quantum in milliseconds.
> + */
> +u32 xe_gt_sriov_pf_config_get_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid)
> +{
> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> +
> +	return pf_get_exec_quantum(gt, vfid);

NIT: Perhaps, for consistency, it would have been better to call the _locked variant here.

> +}
> +
>  /**
>   * xe_gt_sriov_pf_config_get_exec_quantum - Get VF's execution quantum.
>   * @gt: the &xe_gt
> @@ -1766,13 +1802,9 @@ int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
>   */
>  u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
>  {
> -	u32 exec_quantum;
> +	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
>  
> -	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
> -	exec_quantum = pf_get_exec_quantum(gt, vfid);
> -	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
> -
> -	return exec_quantum;
> +	return pf_get_exec_quantum(gt, vfid);
>  }
>  
>  static const char *preempt_timeout_unit(u32 preempt_timeout)
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> index 513e6512a575..b4beb5a97031 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> @@ -40,6 +40,10 @@ int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid, uns
>  u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid);
>  int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid, u32 exec_quantum);
>  
> +u32 xe_gt_sriov_pf_config_get_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid);
> +int xe_gt_sriov_pf_config_set_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid,
> +						  u32 exec_quantum);
> +
>  u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid);
>  int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
>  					      u32 preempt_timeout);

but anyway it looks good:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>

> -- 
> 2.47.1
> 

-- 

^ permalink raw reply	[flat|nested] 45+ messages in thread
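[Editorial note] The split discussed in this patch (a public wrapper that takes the master mutex, plus a _locked variant that only asserts it is held) is a common kernel locking pattern; its payoff is that a multi-GT update can hold the lock once across several _locked calls. A minimal userspace analogue using pthreads in place of guard(mutex)/lockdep (all names illustrative, not the driver's API):

```c
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t master_mutex = PTHREAD_MUTEX_INITIALIZER;
static uint32_t exec_quantum_ms;
static uint32_t preempt_timeout_us;

/* _locked variants: the caller must already hold master_mutex
 * (the kernel versions check this with lockdep_assert_held()). */
static int set_exec_quantum_locked(uint32_t eq)
{
	exec_quantum_ms = eq;
	return 0;
}

static int set_preempt_timeout_locked(uint32_t pt)
{
	preempt_timeout_us = pt;
	return 0;
}

/* Public variant: acquires the lock itself, as guard(mutex)(...) does
 * in the kernel code. */
static int set_exec_quantum(uint32_t eq)
{
	pthread_mutex_lock(&master_mutex);
	int err = set_exec_quantum_locked(eq);

	pthread_mutex_unlock(&master_mutex);
	return err;
}

/* The point of the split: several settings applied under one critical
 * section, avoiding races between parallel configuration attempts. */
static int set_profile(uint32_t eq, uint32_t pt)
{
	pthread_mutex_lock(&master_mutex);
	int err = set_exec_quantum_locked(eq);

	if (!err)
		err = set_preempt_timeout_locked(pt);
	pthread_mutex_unlock(&master_mutex);
	return err;
}
```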

* Re: [PATCH v2 03/17] drm/xe/pf: Add _locked variants of the VF EQ config functions
  2025-10-29  8:47   ` Piotr Piórkowski
@ 2025-10-29  9:00     ` Piotr Piórkowski
  2025-10-30 19:47     ` Michal Wajdeczko
  1 sibling, 0 replies; 45+ messages in thread
From: Piotr Piórkowski @ 2025-10-29  9:00 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Lucas De Marchi

Piotr Piórkowski <piotr.piorkowski@intel.com> wrote on Wed [2025-Oct-29 09:47:07 +0100]:
> Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Tue [2025-Oct-28 18:58:17 +0100]:
> > In upcoming patches we will want to configure VF's execution
> > quantum (EQ) on all GTs under single lock to avoid potential
> > races in parallel GT configuration attempts.
> > 
> > Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> > Cc: Lucas De Marchi <lucas.demarchi@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 58 +++++++++++++++++-----
> >  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h |  4 ++
> >  2 files changed, 49 insertions(+), 13 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> > index c0c0215c0703..717f81e76b8c 100644
> > --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> > +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> > @@ -1732,29 +1732,65 @@ static int pf_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
> >  }
> >  
> >  /**
> > - * xe_gt_sriov_pf_config_set_exec_quantum - Configure execution quantum for the VF.
> > + * xe_gt_sriov_pf_config_set_exec_quantum_locked() - Configure execution quantum of the VF.
> >   * @gt: the &xe_gt
> >   * @vfid: the VF identifier
> >   * @exec_quantum: requested execution quantum in milliseconds (0 is infinity)
> >   *
> > - * This function can only be called on PF.
> > + * This function can only be called on PF with the master mutex hold.
> > + * It will log the provisioned value or an error in case of the failure.
> >   *
> >   * Return: 0 on success or a negative error code on failure.
> >   */
> > -int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
> > -					   u32 exec_quantum)
> > +int xe_gt_sriov_pf_config_set_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid,
> > +						  u32 exec_quantum)
> >  {
> >  	int err;
> >  
> > -	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
> > +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> > +
> >  	err = pf_provision_exec_quantum(gt, vfid, exec_quantum);
> > -	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
> >  
> >  	return pf_config_set_u32_done(gt, vfid, exec_quantum,
> > -				      xe_gt_sriov_pf_config_get_exec_quantum(gt, vfid),
> > +				      pf_get_exec_quantum(gt, vfid),
> >  				      "execution quantum", exec_quantum_unit, err);
> >  }
> >  
> > +/**
> > + * xe_gt_sriov_pf_config_set_exec_quantum - Configure execution quantum for the VF.
> > + * @gt: the &xe_gt
> > + * @vfid: the VF identifier
> > + * @exec_quantum: requested execution quantum in milliseconds (0 is infinity)
> > + *
> > + * This function can only be called on PF.
> > + * It will log the provisioned value or na error in case of the failure.
> 
> typo: na -> an
> > + *
> > + * Return: 0 on success or a negative error code on failure.
> > + */
> > +int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
> > +					   u32 exec_quantum)
> > +{
> > +	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
> > +
> > +	return xe_gt_sriov_pf_config_set_exec_quantum_locked(gt, vfid, exec_quantum);
> > +}
> > +
> > +/**
> > + * xe_gt_sriov_pf_config_get_exec_quantum_locked() - Get VF's execution quantum.
> > + * @gt: the &xe_gt
> > + * @vfid: the VF identifier
> > + *
> > + * This function can only be called on PF with the master mutex hold.
> > + *
> > + * Return: VF's (or PF's) execution quantum in milliseconds.
> > + */
> > +u32 xe_gt_sriov_pf_config_get_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid)
> > +{
> > +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> > +
> > +	return pf_get_exec_quantum(gt, vfid);
> 
> NIT: Perhaps, for consistency, it would have been better to call the _locked variant here.
> 
> > +}
> > +
> >  /**
> >   * xe_gt_sriov_pf_config_get_exec_quantum - Get VF's execution quantum.
> >   * @gt: the &xe_gt
> > @@ -1766,13 +1802,9 @@ int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
> >   */
> >  u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
> >  {
> > -	u32 exec_quantum;
> > +	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
> >  
> > -	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
> > -	exec_quantum = pf_get_exec_quantum(gt, vfid);
> > -	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
> > -
> > -	return exec_quantum;
> > +	return pf_get_exec_quantum(gt, vfid);
> >  }
> >  
> >  static const char *preempt_timeout_unit(u32 preempt_timeout)
> > diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> > index 513e6512a575..b4beb5a97031 100644
> > --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> > +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> > @@ -40,6 +40,10 @@ int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid, uns
> >  u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid);
> >  int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid, u32 exec_quantum);
> >  
> > +u32 xe_gt_sriov_pf_config_get_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid);
> > +int xe_gt_sriov_pf_config_set_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid,
> > +						  u32 exec_quantum);
> > +
> >  u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid);
> >  int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
> >  					      u32 preempt_timeout);
> 
> but anyway it looks good:
> Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>

one more comment about kernel-doc:
You are inconsistent when writing function names in kernel-doc. Sometimes you use
parentheses and sometimes you don't:

+ * xe_gt_sriov_pf_config_set_exec_quantum - Configure execution quantum for the VF.
vs
+ * xe_gt_sriov_pf_config_get_exec_quantum_locked() - Get VF's execution quantum.

Based on https://docs.kernel.org/doc-guide/kernel-doc.html#function-documentation
I assume that the version with parentheses is correct.

Thanks,
Piotr

> 
> > -- 
> > 2.47.1
> > 
> 
> -- 

-- 

^ permalink raw reply	[flat|nested] 45+ messages in thread
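[Editorial note] For reference, the form the kernel-doc guide describes, and which the review above suggests converging on, puts parentheses after the function name. A sketch of one of the series' comments in that canonical shape:

```c
/**
 * xe_gt_sriov_pf_config_set_exec_quantum() - Configure execution quantum for the VF.
 * @gt: the &xe_gt
 * @vfid: the VF identifier
 * @exec_quantum: requested execution quantum in milliseconds (0 is infinity)
 *
 * This function can only be called on PF.
 *
 * Return: 0 on success or a negative error code on failure.
 */
```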

* Re: ✗ Xe.CI.Full: failure for PF: Add sriov_admin sysfs tree (rev2)
  2025-10-29  7:15 ` ✗ Xe.CI.Full: failure " Patchwork
@ 2025-10-29 10:11   ` Michal Wajdeczko
  0 siblings, 0 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-29 10:11 UTC (permalink / raw)
  To: intel-xe



On 10/29/2025 8:15 AM, Patchwork wrote:
> *Patch Details*
> *Series:*	PF: Add sriov_admin sysfs tree (rev2)
> *URL:*	https://patchwork.freedesktop.org/series/156220/ <https://patchwork.freedesktop.org/series/156220/>
> *State:*	failure
> *Details:*	https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/index.html <https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/index.html>
> 
> 
>   CI Bug Log - changes from xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d_FULL -> xe-pw-156220v2_FULL
> 
> 
>     Summary
> 
> *FAILURE*
> 
> Serious unknown changes coming with xe-pw-156220v2_FULL absolutely need to be
> verified manually.
> 
> If you think the reported changes have nothing to do with the changes
> introduced in xe-pw-156220v2_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
> to document this new failure mode, which will reduce false positives in CI.
> 
> 
>     Participating hosts (4 -> 4)
> 
> No changes in participating hosts
> 
> 
>     Possible new issues
> 
> Here are the unknown changes that may have been introduced in xe-pw-156220v2_FULL:
> 
> 
>       IGT changes
> 
> 
>         Possible regressions
> 
>   * igt@kms_flip@2x-flip-vs-suspend@ad-dp2-hdmi-a3:
>       o shard-bmg: PASS <https://intel-gfx-ci.01.org/tree/intel-xe/xe-3996-5742fc7aea99a1326637a7106eeaeac383a1c76d/shard-bmg-4/igt@kms_flip@2x-flip-vs-suspend@ad-dp2-hdmi-a3.html> -> DMESG-WARN <https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156220v2/shard-bmg-6/igt@kms_flip@2x-flip-vs-suspend@ad-dp2-hdmi-a3.html>

unrelated

<3> [435.370057] xe 0000:03:00.0: [drm] *ERROR* [CONNECTOR:281:DP-2][ENCODER:280:DDI TC2/PHY G][DPRX] Failed to enable link training



^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v2 04/17] drm/xe/pf: Add _locked variants of the VF PT config functions
  2025-10-28 17:58 ` [PATCH v2 04/17] drm/xe/pf: Add _locked variants of the VF PT " Michal Wajdeczko
@ 2025-10-29 11:00   ` Piotr Piórkowski
  2025-10-29 20:27   ` Lucas De Marchi
  1 sibling, 0 replies; 45+ messages in thread
From: Piotr Piórkowski @ 2025-10-29 11:00 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Lucas De Marchi

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Tue [2025-Oct-28 18:58:18 +0100]:
> In upcoming patches we will want to configure VF's preemption
> timeout (PT) on all GTs under single lock to avoid potential
> races due to parallel GT configuration attempts.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 59 +++++++++++++++++-----
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h |  4 ++
>  2 files changed, 49 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> index 717f81e76b8c..e48457bd7d12 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> @@ -1835,31 +1835,66 @@ static int pf_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid)
>  }
>  
>  /**
> - * xe_gt_sriov_pf_config_set_preempt_timeout - Configure preemption timeout for the VF.
> + * xe_gt_sriov_pf_config_set_preempt_timeout_locked() - Configure preemption timeout of the VF.
>   * @gt: the &xe_gt
>   * @vfid: the VF identifier
>   * @preempt_timeout: requested preemption timeout in microseconds (0 is infinity)
>   *
> - * This function can only be called on PF.
> + * This function can only be called on PF with the master mutex hold.
> + * It will log the provisioned value or an error in case of the failure.
>   *
>   * Return: 0 on success or a negative error code on failure.
>   */
> -int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
> -					      u32 preempt_timeout)
> +int xe_gt_sriov_pf_config_set_preempt_timeout_locked(struct xe_gt *gt, unsigned int vfid,
> +						     u32 preempt_timeout)
>  {
>  	int err;
>  
> -	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> +
>  	err = pf_provision_preempt_timeout(gt, vfid, preempt_timeout);
> -	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
>  
>  	return pf_config_set_u32_done(gt, vfid, preempt_timeout,
> -				      xe_gt_sriov_pf_config_get_preempt_timeout(gt, vfid),
> +				      pf_get_preempt_timeout(gt, vfid),
>  				      "preemption timeout", preempt_timeout_unit, err);
>  }
>  
>  /**
> - * xe_gt_sriov_pf_config_get_preempt_timeout - Get VF's preemption timeout.
> + * xe_gt_sriov_pf_config_set_preempt_timeout() - Configure preemption timeout of the VF.
> + * @gt: the &xe_gt
> + * @vfid: the VF identifier
> + * @preempt_timeout: requested preemption timeout in microseconds (0 is infinity)
> + *
> + * This function can only be called on PF.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
> +					      u32 preempt_timeout)
> +{
> +	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
> +
> +	return xe_gt_sriov_pf_config_set_preempt_timeout_locked(gt, vfid, preempt_timeout);
> +}
> +
> +/**
> + * xe_gt_sriov_pf_config_get_preempt_timeout_locked() - Get VF's preemption timeout.

NIT: I'm wondering, since the Return section says this also applies to the PF
while the description says "VF's preemption...", whether it would be worth using
some more universal phrase. At first I thought of "function", but without
spelling out that this means "PCI function" it might confuse things even more.
I don't know, I'm just mentioning it so that you might consider changing
"VF's" to something else, but you can just as well ignore it.

> + * @gt: the &xe_gt
> + * @vfid: the VF identifier
> + *
> + * This function can only be called on PF with the master mutex hold.
> + *
> + * Return: VF's (or PF's) preemption timeout in microseconds.
> + */
> +u32 xe_gt_sriov_pf_config_get_preempt_timeout_locked(struct xe_gt *gt, unsigned int vfid)
> +{
> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> +
> +	return pf_get_preempt_timeout(gt, vfid);
> +}
> +
> +/**
> + * xe_gt_sriov_pf_config_get_preempt_timeout() - Get VF's preemption timeout.
>   * @gt: the &xe_gt
>   * @vfid: the VF identifier
>   *
> @@ -1869,13 +1904,9 @@ int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfi
>   */
>  u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid)
>  {
> -	u32 preempt_timeout;
> +	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
>  
> -	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
> -	preempt_timeout = pf_get_preempt_timeout(gt, vfid);
> -	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
> -
> -	return preempt_timeout;
> +	return pf_get_preempt_timeout(gt, vfid);

NIT: I mentioned this in the review of the previous patch:
     Perhaps, for consistency, it would have been better to call the _locked variant here.

>  }
>  
>  static const char *sched_priority_unit(u32 priority)
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> index b4beb5a97031..6bab5ad6c849 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> @@ -48,6 +48,10 @@ u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfi
>  int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
>  					      u32 preempt_timeout);
>  
> +u32 xe_gt_sriov_pf_config_get_preempt_timeout_locked(struct xe_gt *gt, unsigned int vfid);
> +int xe_gt_sriov_pf_config_set_preempt_timeout_locked(struct xe_gt *gt, unsigned int vfid,
> +						     u32 preempt_timeout);
> +
>  u32 xe_gt_sriov_pf_config_get_sched_priority(struct xe_gt *gt, unsigned int vfid);
>  int xe_gt_sriov_pf_config_set_sched_priority(struct xe_gt *gt, unsigned int vfid, u32 priority);
>  

LGTM:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>

> -- 
> 2.47.1
> 

-- 

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v2 05/17] drm/xe/pf: Allow change PF and VFs EQ/PT using sysfs
  2025-10-28 17:58 ` [PATCH v2 05/17] drm/xe/pf: Allow change PF and VFs EQ/PT using sysfs Michal Wajdeczko
@ 2025-10-29 11:17   ` Piotr Piórkowski
  2025-10-29 20:26   ` Lucas De Marchi
  1 sibling, 0 replies; 45+ messages in thread
From: Piotr Piórkowski @ 2025-10-29 11:17 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Lucas De Marchi, Rodrigo Vivi

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Tue [2025-Oct-28 18:58:19 +0100]:
> On current platforms, in SR-IOV virtualization, the GPU is shared
> between VFs on the time-slice basis. The 'execution quantum' (EQ)
> and 'preemption timeout' (PT) are two main scheduling parameters
> that could be set individually per each VF.
> 
> Add EQ/PT read-write attributes for the PF and all VFs.
> 
> By exposing those two parameters over sysfs, the admin can change
> their default values (infinity) and let the GuC scheduler enforce
> those settings.
> 
>  /sys/bus/pci/drivers/xe/BDF/
>  ├── sriov_admin/
>      ├── pf/
>      │   └── profile
>      │       ├── exec_quantum_ms	[RW] unsigned integer
>      │       └── preempt_timeout_us	[RW] unsigned integer
>      ├── vf1/
>      │   └── profile
>      │       ├── exec_quantum_ms	[RW] unsigned integer
>      │       └── preempt_timeout_us	[RW] unsigned integer
> 
> Writing 0 to these files will set infinite EQ/PT for the VF on all
> tiles/GTs. This is the default value. Writing a non-zero integer to
> these files will change EQ/PT to the new value (in their respective
> units: msec or usec).
> 
> Reading from these files will return the EQ/PT as previously set on
> all tiles/GTs. If inconsistent values are detected, due to errors or
> low-level configuration done using debugfs, -EUCLEAN will be
> returned.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
> v2: apply EQ/PT under single lock (Lucas)
> ---
>  drivers/gpu/drm/xe/xe_sriov_pf_provision.c | 124 +++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_sriov_pf_provision.h |   8 ++
>  drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c     |  54 ++++++++-
>  3 files changed, 184 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> index 663fb0c045e9..c5b3a6aa67f4 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> @@ -152,3 +152,127 @@ int xe_sriov_pf_provision_set_mode(struct xe_device *xe, enum xe_sriov_provision
>  	xe->sriov.pf.provision.mode = mode;
>  	return 0;
>  }
> +
> +/**
> + * xe_sriov_pf_provision_apply_vf_eq() - Change VF's execution quantum.
> + * @xe: the PF &xe_device
> + * @vfid: the VF identifier
> + * @eq: execution quantum in [ms] to set
> + *
> + * Change VF's execution quantum (EQ) provisioning on all tiles/GTs.
> + *
> + * This function can only be called on PF.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_sriov_pf_provision_apply_vf_eq(struct xe_device *xe, unsigned int vfid, u32 eq)
> +{
> +	struct xe_gt *gt;
> +	unsigned int id;
> +	int result = 0;
> +	int err;
> +
> +	guard(mutex)(xe_sriov_pf_master_mutex(xe));
> +
> +	for_each_gt(gt, xe, id) {
> +		err = xe_gt_sriov_pf_config_set_exec_quantum_locked(gt, vfid, eq);
> +		result = result ?: err;
> +	}
> +
> +	return result;
> +}
> +
> +/**
> + * xe_sriov_pf_provision_query_vf_eq() - Query VF's execution quantum.
> + * @xe: the PF &xe_device
> + * @vfid: the VF identifier
> + * @eq: placeholder for the returned execution quantum in [ms]
> + *
> + * Query VF's execution quantum (EQ) provisioning from all tiles/GTs.
> + * If values across tiles/GTs are inconsistent then -EUCLEAN error will be returned.
> + *
> + * This function can only be called on PF.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_sriov_pf_provision_query_vf_eq(struct xe_device *xe, unsigned int vfid, u32 *eq)
> +{
> +	struct xe_gt *gt;
> +	unsigned int id;
> +	int count = 0;
> +	u32 value;
> +
> +	guard(mutex)(xe_sriov_pf_master_mutex(xe));
> +
> +	for_each_gt(gt, xe, id) {
> +		value = xe_gt_sriov_pf_config_get_exec_quantum_locked(gt, vfid);
> +		if (!count++)
> +			*eq = value;
> +		else if (value != *eq)
> +			return -EUCLEAN;

NIT: Perhaps it would be worth adding a dbg log with information about which GT is inconsistent.

> +	}
> +
> +	return !count ? -ENODATA : 0;
> +}
> +
> +/**
> + * xe_sriov_pf_provision_apply_vf_pt() - Change VF's preemption timeout.
> + * @xe: the PF &xe_device
> + * @vfid: the VF identifier
> + * @pt: preemption timeout in [us] to set
> + *
> + * Change VF's preemption timeout (PT) provisioning on all tiles/GTs.
> + *
> + * This function can only be called on PF.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_sriov_pf_provision_apply_vf_pt(struct xe_device *xe, unsigned int vfid, u32 pt)
> +{
> +	struct xe_gt *gt;
> +	unsigned int id;
> +	int result = 0;
> +	int err;
> +
> +	guard(mutex)(xe_sriov_pf_master_mutex(xe));
> +
> +	for_each_gt(gt, xe, id) {
> +		err = xe_gt_sriov_pf_config_set_preempt_timeout_locked(gt, vfid, pt);
> +		result = result ?: err;
> +	}
> +
> +	return result;
> +}
> +
> +/**
> + * xe_sriov_pf_provision_query_vf_pt() - Query VF's preemption timeout.
> + * @xe: the PF &xe_device
> + * @vfid: the VF identifier
> + * @pt: placeholder for the returned preemption timeout in [us]
> + *
> + * Query VF's preemption timeout (PT) provisioning from all tiles/GTs.
> + * If values across tiles/GTs are inconsistent then -EUCLEAN error will be returned.
> + *
> + * This function can only be called on PF.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_sriov_pf_provision_query_vf_pt(struct xe_device *xe, unsigned int vfid, u32 *pt)
> +{
> +	struct xe_gt *gt;
> +	unsigned int id;
> +	int count = 0;
> +	u32 value;
> +
> +	guard(mutex)(xe_sriov_pf_master_mutex(xe));
> +
> +	for_each_gt(gt, xe, id) {
> +		value = xe_gt_sriov_pf_config_get_preempt_timeout_locked(gt, vfid);
> +		if (!count++)
> +			*pt = value;
> +		else if (value != *pt)
> +			return -EUCLEAN;
> +	}
> +
> +	return !count ? -ENODATA : 0;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
> index cf3657a32e90..cb81b5880930 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
> @@ -6,10 +6,18 @@
>  #ifndef _XE_SRIOV_PF_PROVISION_H_
>  #define _XE_SRIOV_PF_PROVISION_H_
>  
> +#include <linux/types.h>
> +
>  #include "xe_sriov_pf_provision_types.h"
>  
>  struct xe_device;
>  
> +int xe_sriov_pf_provision_apply_vf_eq(struct xe_device *xe, unsigned int vfid, u32 eq);
> +int xe_sriov_pf_provision_query_vf_eq(struct xe_device *xe, unsigned int vfid, u32 *eq);
> +
> +int xe_sriov_pf_provision_apply_vf_pt(struct xe_device *xe, unsigned int vfid, u32 pt);
> +int xe_sriov_pf_provision_query_vf_pt(struct xe_device *xe, unsigned int vfid, u32 *pt);
> +
>  int xe_sriov_pf_provision_vfs(struct xe_device *xe, unsigned int num_vfs);
>  int xe_sriov_pf_unprovision_vfs(struct xe_device *xe, unsigned int num_vfs);
>  
> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> index 439a0cd02a86..f12d6752e9f1 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> @@ -13,6 +13,7 @@
>  #include "xe_sriov.h"
>  #include "xe_sriov_pf.h"
>  #include "xe_sriov_pf_helpers.h"
> +#include "xe_sriov_pf_provision.h"
>  #include "xe_sriov_pf_sysfs.h"
>  #include "xe_sriov_printk.h"
>  
> @@ -23,10 +24,14 @@
>   *     ├── ...
>   *     ├── pf/
>   *     │   ├── ...
> - *     │   └── ...
> + *     │   └── profile
> + *     │       ├── exec_quantum_ms
> + *     │       └── preempt_timeout_us
>   *     ├── vf1/
>   *     │   ├── ...
> - *     │   └── ...
> + *     │   └── profile
> + *     │       ├── exec_quantum_ms
> + *     │       └── preempt_timeout_us
>   *     ├── vf2/
>   *     :
>   *     └── vfN/
> @@ -85,7 +90,52 @@ static const struct attribute_group *xe_sriov_dev_attr_groups[] = {
>  
>  /* and VF-level attributes go here */
>  
> +#define DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(NAME, ITEM, TYPE, FORMAT)		\
> +static ssize_t xe_sriov_vf_attr_##NAME##_show(struct xe_device *xe, unsigned int vfid,	\
> +					      char *buf)				\
> +{											\
> +	TYPE value = 0;									\
> +	int err;									\
> +											\
> +	err = xe_sriov_pf_provision_query_vf_##ITEM(xe, vfid, &value);			\
> +	if (err)									\
> +		return err;								\
> +											\
> +	return sysfs_emit(buf, FORMAT, value);						\
> +}											\
> +											\
> +static ssize_t xe_sriov_vf_attr_##NAME##_store(struct xe_device *xe, unsigned int vfid,	\
> +					       const char *buf, size_t count)		\
> +{											\
> +	TYPE value;									\
> +	int err;									\
> +											\
> +	err = kstrto##TYPE(buf, 0, &value);						\
> +	if (err)									\
> +		return err;								\
> +											\
> +	err = xe_sriov_pf_provision_apply_vf_##ITEM(xe, vfid, value);			\
> +	return err ?: count;								\
> +}											\
> +											\
> +static XE_SRIOV_VF_ATTR(NAME)
> +
> +DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(exec_quantum_ms, eq, u32, "%u\n");
> +DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(preempt_timeout_us, pt, u32, "%u\n");
> +
> +static struct attribute *profile_vf_attrs[] = {
> +	&xe_sriov_vf_attr_exec_quantum_ms.attr,
> +	&xe_sriov_vf_attr_preempt_timeout_us.attr,
> +	NULL
> +};
> +
> +static const struct attribute_group profile_vf_attr_group = {
> +	.name = "profile",
> +	.attrs = profile_vf_attrs,
> +};
> +
>  static const struct attribute_group *xe_sriov_vf_attr_groups[] = {
> +	&profile_vf_attr_group,
>  	NULL
>  };

LGTM:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>

>  
> -- 
> 2.47.1
> 

-- 


* Re: [PATCH v2 08/17] drm/xe/pf: Add functions to bulk configure EQ/PT on GT
  2025-10-28 17:58 ` [PATCH v2 08/17] drm/xe/pf: Add functions to bulk configure EQ/PT on GT Michal Wajdeczko
@ 2025-10-29 13:59   ` Piotr Piórkowski
  2025-10-29 20:32   ` Lucas De Marchi
  1 sibling, 0 replies; 45+ messages in thread
From: Piotr Piórkowski @ 2025-10-29 13:59 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Tue [2025-Oct-28 18:58:22 +0100]:
> We already have functions to bulk configure 'hard' resources like
> GGTT, LMEM or GuC context/doorbell IDs. Now add functions for the
> 'soft' scheduling parameters, as we will need them in upcoming
> patches.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
> v2: add _locked variants instead
> ---
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 56 ++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h |  2 +
>  2 files changed, 58 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> index 6365d5f2ae98..56048cd79d15 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> @@ -1810,6 +1810,34 @@ u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
>  	return pf_get_exec_quantum(gt, vfid);
>  }
>  
> +/**
> + * xe_gt_sriov_pf_config_bulk_set_exec_quantum_locked() - Configure EQ for PF and VFs.
> + * @gt: the &xe_gt to configure
> + * @exec_quantum: requested execution quantum in milliseconds (0 is infinity)
> + *
> + * This function can only be called on PF with the master mutex held.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_gt_sriov_pf_config_bulk_set_exec_quantum_locked(struct xe_gt *gt, u32 exec_quantum)
> +{
> +	unsigned int totalvfs = xe_gt_sriov_pf_get_totalvfs(gt);
> +	unsigned int n;
> +	int err = 0;
> +
> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> +
> +	for (n = 0; n <= totalvfs; n++) {
> +		err = pf_provision_exec_quantum(gt, VFID(n), exec_quantum);
> +		if (err)
> +			break;
> +	}
> +
> +	return pf_config_bulk_set_u32_done(gt, 0, 1 + totalvfs, exec_quantum,
> +					   pf_get_exec_quantum, "execution quantum",
> +					   exec_quantum_unit, n, err);
> +}
> +
>  static const char *preempt_timeout_unit(u32 preempt_timeout)
>  {
>  	return preempt_timeout ? "us" : "(infinity)";
> @@ -1912,6 +1940,34 @@ u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfi
>  	return pf_get_preempt_timeout(gt, vfid);
>  }
>  
> +/**
> + * xe_gt_sriov_pf_config_bulk_set_preempt_timeout_locked() - Configure PT for PF and VFs.
> + * @gt: the &xe_gt to configure
> + * @preempt_timeout: requested preemption timeout in microseconds (0 is infinity)
> + *
> + * This function can only be called on PF with the master mutex held.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_gt_sriov_pf_config_bulk_set_preempt_timeout_locked(struct xe_gt *gt, u32 preempt_timeout)
> +{
> +	unsigned int totalvfs = xe_gt_sriov_pf_get_totalvfs(gt);
> +	unsigned int n;
> +	int err = 0;
> +
> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> +
> +	for (n = 0; n <= totalvfs; n++) {
> +		err = pf_provision_preempt_timeout(gt, VFID(n), preempt_timeout);
> +		if (err)
> +			break;
> +	}
> +
> +	return pf_config_bulk_set_u32_done(gt, 0, 1 + totalvfs, preempt_timeout,
> +					   pf_get_preempt_timeout, "preemption timeout",
> +					   preempt_timeout_unit, n, err);
> +}
> +
>  static const char *sched_priority_unit(u32 priority)
>  {
>  	return priority == GUC_SCHED_PRIORITY_LOW ? "(low)" :
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> index 6bab5ad6c849..14d036790695 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> @@ -43,6 +43,7 @@ int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
>  u32 xe_gt_sriov_pf_config_get_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid);
>  int xe_gt_sriov_pf_config_set_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid,
>  						  u32 exec_quantum);
> +int xe_gt_sriov_pf_config_bulk_set_exec_quantum_locked(struct xe_gt *gt, u32 exec_quantum);
>  
>  u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid);
>  int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
> @@ -51,6 +52,7 @@ int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfi
>  u32 xe_gt_sriov_pf_config_get_preempt_timeout_locked(struct xe_gt *gt, unsigned int vfid);
>  int xe_gt_sriov_pf_config_set_preempt_timeout_locked(struct xe_gt *gt, unsigned int vfid,
>  						     u32 preempt_timeout);
> +int xe_gt_sriov_pf_config_bulk_set_preempt_timeout_locked(struct xe_gt *gt, u32 preempt_timeout);
>  
>  u32 xe_gt_sriov_pf_config_get_sched_priority(struct xe_gt *gt, unsigned int vfid);
>  int xe_gt_sriov_pf_config_set_sched_priority(struct xe_gt *gt, unsigned int vfid, u32 priority);

LGTM:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>

> -- 
> 2.47.1
> 

-- 


* Re: [PATCH v2 05/17] drm/xe/pf: Allow change PF and VFs EQ/PT using sysfs
  2025-10-28 17:58 ` [PATCH v2 05/17] drm/xe/pf: Allow change PF and VFs EQ/PT using sysfs Michal Wajdeczko
  2025-10-29 11:17   ` Piotr Piórkowski
@ 2025-10-29 20:26   ` Lucas De Marchi
  1 sibling, 0 replies; 45+ messages in thread
From: Lucas De Marchi @ 2025-10-29 20:26 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Rodrigo Vivi

On Tue, Oct 28, 2025 at 06:58:19PM +0100, Michal Wajdeczko wrote:
>On current platforms, in SR-IOV virtualization, the GPU is shared
>between VFs on a time-slice basis. The 'execution quantum' (EQ)
>and 'preemption timeout' (PT) are the two main scheduling parameters
>that can be set individually for each VF.
>
>Add EQ/PT read-write attributes for the PF and all VFs.
>
>By exposing those two parameters over sysfs, the admin can change
>their default values (infinity) and let the GuC scheduler enforce
>those settings.
>
> /sys/bus/pci/drivers/xe/BDF/
> ├── sriov_admin/
>     ├── pf/
>     │   └── profile
>     │       ├── exec_quantum_ms	[RW] unsigned integer
>     │       └── preempt_timeout_us	[RW] unsigned integer
>     ├── vf1/
>     │   └── profile
>     │       ├── exec_quantum_ms	[RW] unsigned integer
>     │       └── preempt_timeout_us	[RW] unsigned integer
>
>Writing 0 to these files will set infinite EQ/PT for the VF on all
>tiles/GTs. This is the default value. Writing a non-zero integer to
>these files will change EQ/PT to the new value (in their respective
>units: msec or usec).
>
>Reading from these files will return the EQ/PT as previously set on
>all tiles/GTs. If inconsistent values are detected, due to errors or
>low-level configuration done using debugfs, -EUCLEAN will be
>returned.
>
>Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>


Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>

thanks
Lucas De Marchi


* Re: [PATCH v2 04/17] drm/xe/pf: Add _locked variants of the VF PT config functions
  2025-10-28 17:58 ` [PATCH v2 04/17] drm/xe/pf: Add _locked variants of the VF PT " Michal Wajdeczko
  2025-10-29 11:00   ` Piotr Piórkowski
@ 2025-10-29 20:27   ` Lucas De Marchi
  1 sibling, 0 replies; 45+ messages in thread
From: Lucas De Marchi @ 2025-10-29 20:27 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

On Tue, Oct 28, 2025 at 06:58:18PM +0100, Michal Wajdeczko wrote:
>In upcoming patches we will want to configure the VF's preemption
>timeout (PT) on all GTs under a single lock to avoid potential
>races due to parallel GT configuration attempts.
>
>Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>Cc: Lucas De Marchi <lucas.demarchi@intel.com>

Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>

Lucas De Marchi


* Re: [PATCH v2 08/17] drm/xe/pf: Add functions to bulk configure EQ/PT on GT
  2025-10-28 17:58 ` [PATCH v2 08/17] drm/xe/pf: Add functions to bulk configure EQ/PT on GT Michal Wajdeczko
  2025-10-29 13:59   ` Piotr Piórkowski
@ 2025-10-29 20:32   ` Lucas De Marchi
  1 sibling, 0 replies; 45+ messages in thread
From: Lucas De Marchi @ 2025-10-29 20:32 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

On Tue, Oct 28, 2025 at 06:58:22PM +0100, Michal Wajdeczko wrote:
>We already have functions to bulk configure 'hard' resources like
>GGTT, LMEM or GuC context/doorbell IDs. Now add functions for the
>'soft' scheduling parameters, as we will need them in upcoming
>patches.
>
>Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>

Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>

Lucas De Marchi


* Re: [PATCH v2 09/17] drm/xe/pf: Add functions to bulk provision EQ/PT
  2025-10-28 17:58 ` [PATCH v2 09/17] drm/xe/pf: Add functions to bulk provision EQ/PT Michal Wajdeczko
@ 2025-10-29 20:33   ` Lucas De Marchi
  0 siblings, 0 replies; 45+ messages in thread
From: Lucas De Marchi @ 2025-10-29 20:33 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

On Tue, Oct 28, 2025 at 06:58:23PM +0100, Michal Wajdeczko wrote:
>We already have functions to configure EQ/PT for a single VF across
>all tiles/GTs. Now add helper functions that will do that for all
>VFs (and the PF) at once.
>
>Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> #v1

Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>

thanks
Lucas De Marchi


* Re: [PATCH v2 16/17] drm/xe/pf: Allow to stop and reset VF using sysfs
  2025-10-28 17:58 ` [PATCH v2 16/17] drm/xe/pf: Allow to stop and reset VF using sysfs Michal Wajdeczko
@ 2025-10-30  8:45   ` Piotr Piórkowski
  2025-10-30 13:43   ` Lucas De Marchi
  1 sibling, 0 replies; 45+ messages in thread
From: Piotr Piórkowski @ 2025-10-30  8:45 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Lucas De Marchi, Rodrigo Vivi

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Tue [2025-Oct-28 18:58:30 +0100]:

drm/xe/pf: Allow to stop and reset VF using sysfs

Since you removed the reset in v2, is the commit subject still valid?

> It is expected that VF activity will be monitored, and in some
> cases the admin might want to silence a specific VF without killing
> the VM to which it is attached.
> 
> Add a write-only attribute to stop GuC scheduling at the VF level.
> 
>   /sys/bus/pci/drivers/xe/BDF/
>   ├── sriov_admin/
>       ├── vf1/
>       │   └── stop		[WO] bool
>       ├── vf2/
>       │   └── stop		[WO] bool
> 
> Writing "1" or "y" (or whatever is recognized by the kstrtobool()
> function) to this file will trigger the change of the VF state
> to STOP (GuC will stop servicing the VF). To go back to the READY
> state (to allow GuC to service this VF again), a VF FLR must be
> triggered (which can be done by writing 1 to the device/reset file).
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
> v2: drop reset file (Rodrigo, Lucas)
> ---
>  drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 49 ++++++++++++++++++++++++++
>  1 file changed, 49 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> index 360b0ffd9cb4..3a8c488d183c 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> @@ -13,6 +13,7 @@
>  #include "xe_pm.h"
>  #include "xe_sriov.h"
>  #include "xe_sriov_pf.h"
> +#include "xe_sriov_pf_control.h"
>  #include "xe_sriov_pf_helpers.h"
>  #include "xe_sriov_pf_provision.h"
>  #include "xe_sriov_pf_sysfs.h"
> @@ -52,6 +53,7 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
>   *     ├── vf1/
>   *     │   ├── ...
>   *     │   ├── device -> ../../../BDF.1
> + *     │   ├── stop
>   *     │   └── profile
>   *     │       ├── exec_quantum_ms
>   *     │       ├── preempt_timeout_us
> @@ -291,8 +293,55 @@ static const struct attribute_group profile_vf_attr_group = {
>  	.is_visible = profile_vf_attr_is_visible,
>  };
>  
> +#define DEFINE_SIMPLE_CONTROL_SRIOV_VF_ATTR(NAME)					\
> +											\
> +static ssize_t xe_sriov_vf_attr_##NAME##_store(struct xe_device *xe, unsigned int vfid,	\
> +					       const char *buf, size_t count)		\
> +{											\
> +	bool yes;									\
> +	int err;									\
> +											\
> +	if (!vfid)									\
> +		return -EPERM;								\
> +											\
> +	err = kstrtobool(buf, &yes);							\
> +	if (err)									\
> +		return err;								\
> +	if (!yes)									\
> +		return count;								\
> +											\
> +	err = xe_sriov_pf_control_##NAME##_vf(xe, vfid);				\
> +	return err ?: count;								\
> +}											\
> +											\
> +static XE_SRIOV_VF_ATTR_WO(NAME)
> +
> +DEFINE_SIMPLE_CONTROL_SRIOV_VF_ATTR(stop);
> +
> +static struct attribute *control_vf_attrs[] = {
> +	&xe_sriov_vf_attr_stop.attr,
> +	NULL
> +};
> +
> +static umode_t control_vf_attr_is_visible(struct kobject *kobj,
> +					  struct attribute *attr, int index)
> +{
> +	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
> +
> +	if (vkobj->vfid == PFID)
> +		return 0;
> +
> +	return attr->mode;
> +}
> +
> +static const struct attribute_group control_vf_attr_group = {
> +	.attrs = control_vf_attrs,
> +	.is_visible = control_vf_attr_is_visible,
> +};
> +
>  static const struct attribute_group *xe_sriov_vf_attr_groups[] = {
>  	&profile_vf_attr_group,
> +	&control_vf_attr_group,
>  	NULL
>  };
>  
One comment about the patch name
besides, it looks good:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>

> -- 
> 2.47.1
> 

-- 


* Re: [PATCH v2 12/17] drm/xe/pf: Allow bulk change all VFs priority using sysfs
  2025-10-28 17:58 ` [PATCH v2 12/17] drm/xe/pf: Allow bulk change all VFs priority using sysfs Michal Wajdeczko
@ 2025-10-30 12:43   ` Lucas De Marchi
  2025-10-30 13:47     ` Lucas De Marchi
  0 siblings, 1 reply; 45+ messages in thread
From: Lucas De Marchi @ 2025-10-30 12:43 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Rodrigo Vivi

On Tue, Oct 28, 2025 at 06:58:26PM +0100, Michal Wajdeczko wrote:
>It is expected to be a common practice to configure the same level
>of scheduling priority across all VFs and the PF (at least as a
>starting point). Due to current GuC FW limitations it is also the
>only way to change the VFs' priority.
>
>Add a write-only sysfs attribute that will apply the required
>priority level to all VFs and the PF at once.
>
>  /sys/bus/pci/drivers/xe/BDF/
>  ├── sriov_admin/
>      ├── .bulk_profile
>      │   └── sched_priority		[WO] low, normal
>
>Writing "low" to this write-only attribute will change the PF's and
>VFs' scheduling priority on all tiles/GTs to LOW (a function will
>be scheduled only if it has work submitted). Similarly, writing
>"normal" will change the functions' priority to NORMAL (functions
>will be scheduled irrespective of whether they have work or not).

the only place documenting what low and normal mean seems to be this
commit message. We need it documented somewhere that is user visible.

>
>Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
>---
> drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 42 +++++++++++++++++++++++++-
> 1 file changed, 41 insertions(+), 1 deletion(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>index 0430ffaa746a..19724a28fb33 100644
>--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>@@ -24,7 +24,8 @@
>  *     ├── ...
>  *     ├── .bulk_profile
>  *     │   ├── exec_quantum_ms
>- *     │   └── preempt_timeout_us
>+ *     │   ├── preempt_timeout_us
>+ *     │   └── sched_priority

below this dir list I think it would be a good place to document them
and then make this show up in the rendered documentation.

Lucas De Marchi

>  *     ├── pf/
>  *     │   ├── ...
>  *     │   └── profile
>@@ -108,9 +109,48 @@ static XE_SRIOV_DEV_ATTR_WO(NAME)
> DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(exec_quantum_ms, eq, u32);
> DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(preempt_timeout_us, pt, u32);
>
>+static const char * const sched_priority_names[] = {
>+	[GUC_SCHED_PRIORITY_LOW] = "low",
>+	[GUC_SCHED_PRIORITY_NORMAL] = "normal",
>+	[GUC_SCHED_PRIORITY_HIGH] = "high",
>+};
>+
>+static bool sched_priority_high_allowed(unsigned int vfid)
>+{
>+	/* As of today, GuC FW allows selecting 'high' priority only for the PF. */
>+	return vfid == PFID;
>+}
>+
>+static bool sched_priority_bulk_high_allowed(struct xe_device *xe)
>+{
>+	/* all VFs are equal - it's sufficient to check VF1 only */
>+	return sched_priority_high_allowed(VFID(1));
>+}
>+
>+static ssize_t xe_sriov_dev_attr_sched_priority_store(struct xe_device *xe,
>+						      const char *buf, size_t count)
>+{
>+	size_t num_priorities = ARRAY_SIZE(sched_priority_names);
>+	int match;
>+	int err;
>+
>+	if (!sched_priority_bulk_high_allowed(xe))
>+		num_priorities--;
>+
>+	match = __sysfs_match_string(sched_priority_names, num_priorities, buf);
>+	if (match < 0)
>+		return -EINVAL;
>+
>+	err = xe_sriov_pf_provision_bulk_apply_priority(xe, match);
>+	return err ?: count;
>+}
>+
>+static XE_SRIOV_DEV_ATTR_WO(sched_priority);
>+
> static struct attribute *bulk_profile_dev_attrs[] = {
> 	&xe_sriov_dev_attr_exec_quantum_ms.attr,
> 	&xe_sriov_dev_attr_preempt_timeout_us.attr,
>+	&xe_sriov_dev_attr_sched_priority.attr,
> 	NULL
> };
>
>-- 
>2.47.1
>


* Re: [PATCH v2 13/17] drm/xe/pf: Allow change PF scheduling priority using sysfs
  2025-10-28 17:58 ` [PATCH v2 13/17] drm/xe/pf: Allow change PF scheduling " Michal Wajdeczko
@ 2025-10-30 13:35   ` Lucas De Marchi
  2025-10-30 13:49     ` Lucas De Marchi
  0 siblings, 1 reply; 45+ messages in thread
From: Lucas De Marchi @ 2025-10-30 13:35 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Rodrigo Vivi

On Tue, Oct 28, 2025 at 06:58:27PM +0100, Michal Wajdeczko wrote:
>We have just added a bulk change of the scheduling priority for all
>VFs and the PF, but that only allows selecting the LOW and NORMAL
>priorities.
>
>Add a read-write attribute under the PF to allow changing its
>priority without impacting the VFs' priority settings.
>
>For completeness, also add read-only attributes under the VFs to show
>the currently selected priority levels used by the VFs.
>
>  /sys/bus/pci/drivers/xe/BDF/
>  ├── sriov_admin/
>      ├── pf/
>      │   └── profile
>      │       └── sched_priority	[RW] low, normal, high
>      ├── vf1/
>      │   └── profile
>      │       └── sched_priority	[RO] low, normal
>
>Writing "high" to the PF read-write attribute will change the PF
>priority on all tiles/GTs to HIGH (schedule the function in the next
>time-slice after the current one completes, if it has work). Writing
>"low" or "normal" to change the priority to LOW/NORMAL is also
>supported.

same as the patch before this one: we need to document the allowed values
and their meaning somewhere that is user visible. I like the doc in this
commit message and think you could re-use it where it's visible for the
end user.

>
>When read, those files will display the current and available
>scheduling priorities. The currently active priority level will
>be enclosed in square brackets, default output will be like:
>
> $ grep . -h sriov_admin/{pf,vf1,vf2}/profile/sched_priority
> [low] normal high
> [low] normal
> [low] normal
>
>Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
>---
> drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 82 +++++++++++++++++++++++++-
> 1 file changed, 80 insertions(+), 2 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>index 19724a28fb33..2e5dbf1bff76 100644
>--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>@@ -17,6 +17,21 @@
> #include "xe_sriov_pf_sysfs.h"
> #include "xe_sriov_printk.h"
>
>+static int emit_choice(char *buf, int choice, const char * const *array, size_t size)
>+{
>+	int pos = 0;
>+	int n;
>+
>+	for (n = 0; n < size; n++) {
>+		pos += sysfs_emit_at(buf, pos, "%s%s%s%s",
>+				    n ? " " : "",
>+				    n == choice ? "[" : "",
>+				    array[n],
>+				    n == choice ? "]" : "");
>+	}
>+	return pos + sysfs_emit_at(buf, pos, "\n");


I'd just do:

	pos += sysfs_emit_at(buf, pos, "\n");

	return pos;

to follow the same pattern.

>+}
>+
> /*
>  * /sys/bus/pci/drivers/xe/BDF/
>  * :
>@@ -30,12 +45,14 @@
>  *     │   ├── ...
>  *     │   └── profile
>  *     │       ├── exec_quantum_ms
>- *     │       └── preempt_timeout_us
>+ *     │       ├── preempt_timeout_us
>+ *     │       └── sched_priority
>  *     ├── vf1/
>  *     │   ├── ...
>  *     │   └── profile
>  *     │       ├── exec_quantum_ms
>- *     │       └── preempt_timeout_us
>+ *     │       ├── preempt_timeout_us
>+ *     │       └── sched_priority
>  *     ├── vf2/
>  *     :
>  *     └── vfN/
>@@ -115,6 +132,12 @@ static const char * const sched_priority_names[] = {
> 	[GUC_SCHED_PRIORITY_HIGH] = "high",
> };
>
>+static bool sched_priority_change_allowed(unsigned int vfid)
>+{
>+	/* As of today, GuC FW allows selectively changing only the PF priority. */
>+	return vfid == PFID;
>+}
>+
> static bool sched_priority_high_allowed(unsigned int vfid)
> {
> 	/* As of today, GuC FW allows selecting 'high' priority only for the PF. */
>@@ -199,15 +222,70 @@ static XE_SRIOV_VF_ATTR(NAME)
> DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(exec_quantum_ms, eq, u32, "%u\n");
> DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(preempt_timeout_us, pt, u32, "%u\n");
>
>+static ssize_t xe_sriov_vf_attr_sched_priority_show(struct xe_device *xe, unsigned int vfid,
>+						    char *buf)
>+{
>+	size_t num_priorities = ARRAY_SIZE(sched_priority_names);
>+	u32 priority;
>+	int err;
>+
>+	err = xe_sriov_pf_provision_query_vf_priority(xe, vfid, &priority);
>+	if (err)
>+		return err;
>+
>+	if (!sched_priority_high_allowed(vfid))
>+		num_priorities--;
>+
>+	xe_assert(xe, priority < num_priorities);
>+	return emit_choice(buf, priority, sched_priority_names, num_priorities);
>+}
>+
>+static ssize_t xe_sriov_vf_attr_sched_priority_store(struct xe_device *xe, unsigned int vfid,
>+						     const char *buf, size_t count)
>+{
>+	size_t num_priorities = ARRAY_SIZE(sched_priority_names);
>+	int match;
>+	int err;
>+
>+	if (!sched_priority_change_allowed(vfid))
>+		return -EOPNOTSUPP;
>+
>+	if (!sched_priority_high_allowed(vfid))
>+		num_priorities--;
>+
>+	match = __sysfs_match_string(sched_priority_names, num_priorities, buf);
>+	if (match < 0)
>+		return -EINVAL;
>+
>+	err = xe_sriov_pf_provision_apply_vf_priority(xe, vfid, match);
>+	return err ?: count;
>+}
>+
>+static XE_SRIOV_VF_ATTR(sched_priority);
>+
> static struct attribute *profile_vf_attrs[] = {
> 	&xe_sriov_vf_attr_exec_quantum_ms.attr,
> 	&xe_sriov_vf_attr_preempt_timeout_us.attr,
>+	&xe_sriov_vf_attr_sched_priority.attr,
> 	NULL
> };
>
>+static umode_t profile_vf_attr_is_visible(struct kobject *kobj,
>+					  struct attribute *attr, int index)
>+{
>+	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
>+
>+	if (attr == &xe_sriov_vf_attr_sched_priority.attr &&
>+	    !sched_priority_change_allowed(vkobj->vfid))
>+		return attr->mode & 0444;

first time I see an attr->mode & 0444 return from is_visible. It's
usually the mode, or 0 to make it invisible, but this looks correct... it
would make the attr read-only if read was allowed, or invisible if the
attr didn't have read permission.

with the doc from commit message added to the kernel-doc:


	Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>

Lucas De Marchi

>+
>+	return attr->mode;
>+}
>+
> static const struct attribute_group profile_vf_attr_group = {
> 	.name = "profile",
> 	.attrs = profile_vf_attrs,
>+	.is_visible = profile_vf_attr_is_visible,
> };
>
> static const struct attribute_group *xe_sriov_vf_attr_groups[] = {
>-- 
>2.47.1
>

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v2 15/17] drm/xe/pf: Add sysfs device symlinks to enabled VFs
  2025-10-28 17:58 ` [PATCH v2 15/17] drm/xe/pf: Add sysfs device symlinks to enabled VFs Michal Wajdeczko
@ 2025-10-30 13:40   ` Lucas De Marchi
  0 siblings, 0 replies; 45+ messages in thread
From: Lucas De Marchi @ 2025-10-30 13:40 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Rodrigo Vivi

On Tue, Oct 28, 2025 at 06:58:29PM +0100, Michal Wajdeczko wrote:
>For convenience, for every enabled VF add 'device' symlink from
>our SR-IOV admin VF folder to enabled sysfs PCI VF device entry.
>Remove all those links when disabling PCI VFs.
>
>For completeness, add static 'device' symlink for the PF itself.
>
>  /sys/bus/pci/drivers/xe/BDF/sriov_admin/
>  ├── pf
>  │   └── device -> ../../../BDF	# PF BDF
>  ├── vf1
>  │   └── device -> ../../../BDF'	# VF1 BDF
>  ├── vf2
>  │   └── device -> ../../../BDF"	# VF2 BDF
>
>Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>


Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>

Lucas De Marchi


* Re: [PATCH v2 16/17] drm/xe/pf: Allow to stop and reset VF using sysfs
  2025-10-28 17:58 ` [PATCH v2 16/17] drm/xe/pf: Allow to stop and reset VF using sysfs Michal Wajdeczko
  2025-10-30  8:45   ` Piotr Piórkowski
@ 2025-10-30 13:43   ` Lucas De Marchi
  2025-10-30 13:50     ` Michal Wajdeczko
  1 sibling, 1 reply; 45+ messages in thread
From: Lucas De Marchi @ 2025-10-30 13:43 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Rodrigo Vivi

On Tue, Oct 28, 2025 at 06:58:30PM +0100, Michal Wajdeczko wrote:
>It is expected that VFs activity will be monitored and in some
>cases admin might want to silence specific VF without killing
>the VM where it was attached.
>
>Add write-only attribute to stop GuC scheduling at VFs level.
>
>  /sys/bus/pci/drivers/xe/BDF/
>  ├── sriov_admin/
>      ├── vf1/
>      │   └── stop		[WO] bool
>      ├── vf2/
>      │   └── stop		[WO] bool
>
>Writing "1" or "y" (or whatever is recognized by the kstrtobool()
>function) to this file will trigger the change of the VF state
>to STOP (GuC will stop servicing the VF). To go back to a READY
>state (to allow GuC to service this VF again) the VF FLR must be
>triggered (which can be done by writing 1 to device/reset file).
>
>Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>---
>v2: drop reset file (Rodrigo, Lucas)
>---
> drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 49 ++++++++++++++++++++++++++
> 1 file changed, 49 insertions(+)
>
>diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>index 360b0ffd9cb4..3a8c488d183c 100644
>--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>@@ -13,6 +13,7 @@
> #include "xe_pm.h"
> #include "xe_sriov.h"
> #include "xe_sriov_pf.h"
>+#include "xe_sriov_pf_control.h"
> #include "xe_sriov_pf_helpers.h"
> #include "xe_sriov_pf_provision.h"
> #include "xe_sriov_pf_sysfs.h"
>@@ -52,6 +53,7 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
>  *     ├── vf1/
>  *     │   ├── ...
>  *     │   ├── device -> ../../../BDF.1
>+ *     │   ├── stop
>  *     │   └── profile
>  *     │       ├── exec_quantum_ms
>  *     │       ├── preempt_timeout_us
>@@ -291,8 +293,55 @@ static const struct attribute_group profile_vf_attr_group = {
> 	.is_visible = profile_vf_attr_is_visible,
> };
>
>+#define DEFINE_SIMPLE_CONTROL_SRIOV_VF_ATTR(NAME)					\
>+											\
>+static ssize_t xe_sriov_vf_attr_##NAME##_store(struct xe_device *xe, unsigned int vfid,	\
>+					       const char *buf, size_t count)		\
>+{											\
>+	bool yes;									\
>+	int err;									\
>+											\
>+	if (!vfid)									\
>+		return -EPERM;								\
>+											\
>+	err = kstrtobool(buf, &yes);							\
>+	if (err)									\
>+		return err;								\
>+	if (!yes)									\
>+		return count;								\
>+											\
>+	err = xe_sriov_pf_control_##NAME##_vf(xe, vfid);				\
>+	return err ?: count;								\
>+}											\
>+											\
>+static XE_SRIOV_VF_ATTR_WO(NAME)
>+
>+DEFINE_SIMPLE_CONTROL_SRIOV_VF_ATTR(stop);
>+
>+static struct attribute *control_vf_attrs[] = {
>+	&xe_sriov_vf_attr_stop.attr,
>+	NULL
>+};
>+
>+static umode_t control_vf_attr_is_visible(struct kobject *kobj,
>+					  struct attribute *attr, int index)
>+{
>+	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
>+
>+	if (vkobj->vfid == PFID)
>+		return 0;

I think this will be the case for the entire group, isn't it?
Should we go ahead and just return SYSFS_GROUP_INVISIBLE so even the dir
is invisible instead of having an empty dir?

Lucas De Marchi

>+
>+	return attr->mode;
>+}
>+
>+static const struct attribute_group control_vf_attr_group = {
>+	.attrs = control_vf_attrs,
>+	.is_visible = control_vf_attr_is_visible,
>+};
>+
> static const struct attribute_group *xe_sriov_vf_attr_groups[] = {
> 	&profile_vf_attr_group,
>+	&control_vf_attr_group,
> 	NULL
> };
>
>-- 
>2.47.1
>


* Re: [PATCH v2 12/17] drm/xe/pf: Allow bulk change all VFs priority using sysfs
  2025-10-30 12:43   ` Lucas De Marchi
@ 2025-10-30 13:47     ` Lucas De Marchi
  0 siblings, 0 replies; 45+ messages in thread
From: Lucas De Marchi @ 2025-10-30 13:47 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Rodrigo Vivi

On Thu, Oct 30, 2025 at 07:43:55AM -0500, Lucas De Marchi wrote:
>On Tue, Oct 28, 2025 at 06:58:26PM +0100, Michal Wajdeczko wrote:
>>It is expected to be a common practice to configure the same level
>>of scheduling priority across all VFs and PF (at least as starting
>>point). Due to current GuC FW limitations it is also the only way
>>to change VFs priority.
>>
>>Add write-only sysfs attribute that will apply required priority
>>level to all VFs and PF at once.
>>
>> /sys/bus/pci/drivers/xe/BDF/
>> ├── sriov_admin/
>>     ├── .bulk_profile
>>     │   └── sched_priority		[WO] low, normal
>>
>>Writing "low" to this write-only attribute will change PF and
>>VFs scheduling priority on all tiles/GTs to LOW (function will
>>be scheduled only if it has work submitted). Similarly, writing
>>"normal" will change functions priority to NORMAL (functions will
>>be scheduled irrespective of whether there is a work or not).
>
>the only place documenting what low and normal mean seems to be this
>commit message. We need it documented somewhere that is user visible.

oops, doc is in the last patch.


>
>>
>>Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>>Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>>Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>>Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>


Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>

Lucas De Marchi


* Re: [PATCH v2 13/17] drm/xe/pf: Allow change PF scheduling priority using sysfs
  2025-10-30 13:35   ` Lucas De Marchi
@ 2025-10-30 13:49     ` Lucas De Marchi
  0 siblings, 0 replies; 45+ messages in thread
From: Lucas De Marchi @ 2025-10-30 13:49 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Rodrigo Vivi

On Thu, Oct 30, 2025 at 08:35:41AM -0500, Lucas De Marchi wrote:
>On Tue, Oct 28, 2025 at 06:58:27PM +0100, Michal Wajdeczko wrote:
>>+static umode_t profile_vf_attr_is_visible(struct kobject *kobj,
>>+					  struct attribute *attr, int index)
>>+{
>>+	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
>>+
>>+	if (attr == &xe_sriov_vf_attr_sched_priority.attr &&
>>+	    !sched_priority_change_allowed(vkobj->vfid))
>>+		return attr->mode & 0444;
>
>first time I see a attr->mode & 0444 return from is_visible. It's
>usually the mode or 0 to make it invisible, but this looks correct... it
>would make it read-only if read was allowed or invisible if the attr
>didn't have read perm.
>
>with the doc from commit message added to the kernel-doc:
>
>
>	Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>

doc is in the last patch and the other comment is just a nit.


Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>

Lucas De Marchi


* Re: [PATCH v2 16/17] drm/xe/pf: Allow to stop and reset VF using sysfs
  2025-10-30 13:43   ` Lucas De Marchi
@ 2025-10-30 13:50     ` Michal Wajdeczko
  2025-10-30 14:14       ` Lucas De Marchi
  0 siblings, 1 reply; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-30 13:50 UTC (permalink / raw)
  To: Lucas De Marchi; +Cc: intel-xe, Rodrigo Vivi



On 10/30/2025 2:43 PM, Lucas De Marchi wrote:
> On Tue, Oct 28, 2025 at 06:58:30PM +0100, Michal Wajdeczko wrote:
>> It is expected that VFs activity will be monitored and in some
>> cases admin might want to silence specific VF without killing
>> the VM where it was attached.
>>
>> Add write-only attribute to stop GuC scheduling at VFs level.
>>
>>  /sys/bus/pci/drivers/xe/BDF/
>>  ├── sriov_admin/
>>      ├── vf1/
>>      │   └── stop        [WO] bool
>>      ├── vf2/
>>      │   └── stop        [WO] bool
>>
>> Writing "1" or "y" (or whatever is recognized by the kstrtobool()
>> function) to this file will trigger the change of the VF state
>> to STOP (GuC will stop servicing the VF). To go back to a READY
>> state (to allow GuC to service this VF again) the VF FLR must be
>> triggered (which can be done by writing 1 to device/reset file).
>>
>> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>> ---
>> v2: drop reset file (Rodrigo, Lucas)
>> ---
>> drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 49 ++++++++++++++++++++++++++
>> 1 file changed, 49 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>> index 360b0ffd9cb4..3a8c488d183c 100644
>> --- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>> @@ -13,6 +13,7 @@
>> #include "xe_pm.h"
>> #include "xe_sriov.h"
>> #include "xe_sriov_pf.h"
>> +#include "xe_sriov_pf_control.h"
>> #include "xe_sriov_pf_helpers.h"
>> #include "xe_sriov_pf_provision.h"
>> #include "xe_sriov_pf_sysfs.h"
>> @@ -52,6 +53,7 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
>>  *     ├── vf1/
>>  *     │   ├── ...
>>  *     │   ├── device -> ../../../BDF.1
>> + *     │   ├── stop
>>  *     │   └── profile
>>  *     │       ├── exec_quantum_ms
>>  *     │       ├── preempt_timeout_us
>> @@ -291,8 +293,55 @@ static const struct attribute_group profile_vf_attr_group = {
>>     .is_visible = profile_vf_attr_is_visible,
>> };
>>
>> +#define DEFINE_SIMPLE_CONTROL_SRIOV_VF_ATTR(NAME)                    \
>> +                                            \
>> +static ssize_t xe_sriov_vf_attr_##NAME##_store(struct xe_device *xe, unsigned int vfid,    \
>> +                           const char *buf, size_t count)        \
>> +{                                            \
>> +    bool yes;                                    \
>> +    int err;                                    \
>> +                                            \
>> +    if (!vfid)                                    \
>> +        return -EPERM;                                \
>> +                                            \
>> +    err = kstrtobool(buf, &yes);                            \
>> +    if (err)                                    \
>> +        return err;                                \
>> +    if (!yes)                                    \
>> +        return count;                                \
>> +                                            \
>> +    err = xe_sriov_pf_control_##NAME##_vf(xe, vfid);                \
>> +    return err ?: count;                                \
>> +}                                            \
>> +                                            \
>> +static XE_SRIOV_VF_ATTR_WO(NAME)
>> +
>> +DEFINE_SIMPLE_CONTROL_SRIOV_VF_ATTR(stop);
>> +
>> +static struct attribute *control_vf_attrs[] = {
>> +    &xe_sriov_vf_attr_stop.attr,
>> +    NULL
>> +};
>> +
>> +static umode_t control_vf_attr_is_visible(struct kobject *kobj,
>> +                      struct attribute *attr, int index)
>> +{
>> +    struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
>> +
>> +    if (vkobj->vfid == PFID)
>> +        return 0;
> 
> I think this will be the case for the entire group, isn't it?
> Should we go ahead and just return SYSFS_GROUP_INVISIBLE so even the dir
> is invisible instead of having an empty dir?

today it can't be empty as at this level there is already a "device"
link and a "profile" dir - both for PF and all VFs.

> 
> Lucas De Marchi
> 
>> +
>> +    return attr->mode;
>> +}
>> +
>> +static const struct attribute_group control_vf_attr_group = {
>> +    .attrs = control_vf_attrs,
>> +    .is_visible = control_vf_attr_is_visible,
>> +};
>> +
>> static const struct attribute_group *xe_sriov_vf_attr_groups[] = {
>>     &profile_vf_attr_group,
>> +    &control_vf_attr_group,
>>     NULL
>> };
>>
>> -- 
>> 2.47.1
>>



* Re: [PATCH v2 16/17] drm/xe/pf: Allow to stop and reset VF using sysfs
  2025-10-30 13:50     ` Michal Wajdeczko
@ 2025-10-30 14:14       ` Lucas De Marchi
  0 siblings, 0 replies; 45+ messages in thread
From: Lucas De Marchi @ 2025-10-30 14:14 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Rodrigo Vivi

On Thu, Oct 30, 2025 at 02:50:09PM +0100, Michal Wajdeczko wrote:
>
>
>On 10/30/2025 2:43 PM, Lucas De Marchi wrote:
>> On Tue, Oct 28, 2025 at 06:58:30PM +0100, Michal Wajdeczko wrote:
>>> It is expected that VFs activity will be monitored and in some
>>> cases admin might want to silence specific VF without killing
>>> the VM where it was attached.
>>>
>>> Add write-only attribute to stop GuC scheduling at VFs level.
>>>
>>>  /sys/bus/pci/drivers/xe/BDF/
>>>  ├── sriov_admin/
>>>      ├── vf1/
>>>      │   └── stop        [WO] bool
>>>      ├── vf2/
>>>      │   └── stop        [WO] bool
>>>
>>> Writing "1" or "y" (or whatever is recognized by the kstrtobool()
>>> function) to this file will trigger the change of the VF state
>>> to STOP (GuC will stop servicing the VF). To go back to a READY
>>> state (to allow GuC to service this VF again) the VF FLR must be
>>> triggered (which can be done by writing 1 to device/reset file).
>>>
>>> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>>> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>>> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>>> ---
>>> v2: drop reset file (Rodrigo, Lucas)
>>> ---
>>> drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 49 ++++++++++++++++++++++++++
>>> 1 file changed, 49 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>>> index 360b0ffd9cb4..3a8c488d183c 100644
>>> --- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>>> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>>> @@ -13,6 +13,7 @@
>>> #include "xe_pm.h"
>>> #include "xe_sriov.h"
>>> #include "xe_sriov_pf.h"
>>> +#include "xe_sriov_pf_control.h"
>>> #include "xe_sriov_pf_helpers.h"
>>> #include "xe_sriov_pf_provision.h"
>>> #include "xe_sriov_pf_sysfs.h"
>>> @@ -52,6 +53,7 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
>>>  *     ├── vf1/
>>>  *     │   ├── ...
>>>  *     │   ├── device -> ../../../BDF.1
>>> + *     │   ├── stop
>>>  *     │   └── profile
>>>  *     │       ├── exec_quantum_ms
>>>  *     │       ├── preempt_timeout_us
>>> @@ -291,8 +293,55 @@ static const struct attribute_group profile_vf_attr_group = {
>>>     .is_visible = profile_vf_attr_is_visible,
>>> };
>>>
>>> +#define DEFINE_SIMPLE_CONTROL_SRIOV_VF_ATTR(NAME)                    \
>>> +                                            \
>>> +static ssize_t xe_sriov_vf_attr_##NAME##_store(struct xe_device *xe, unsigned int vfid,    \
>>> +                           const char *buf, size_t count)        \
>>> +{                                            \
>>> +    bool yes;                                    \
>>> +    int err;                                    \
>>> +                                            \
>>> +    if (!vfid)                                    \
>>> +        return -EPERM;                                \
>>> +                                            \
>>> +    err = kstrtobool(buf, &yes);                            \
>>> +    if (err)                                    \
>>> +        return err;                                \
>>> +    if (!yes)                                    \
>>> +        return count;                                \
>>> +                                            \
>>> +    err = xe_sriov_pf_control_##NAME##_vf(xe, vfid);                \
>>> +    return err ?: count;                                \
>>> +}                                            \
>>> +                                            \
>>> +static XE_SRIOV_VF_ATTR_WO(NAME)
>>> +
>>> +DEFINE_SIMPLE_CONTROL_SRIOV_VF_ATTR(stop);
>>> +
>>> +static struct attribute *control_vf_attrs[] = {
>>> +    &xe_sriov_vf_attr_stop.attr,
>>> +    NULL
>>> +};
>>> +
>>> +static umode_t control_vf_attr_is_visible(struct kobject *kobj,
>>> +                      struct attribute *attr, int index)
>>> +{
>>> +    struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
>>> +
>>> +    if (vkobj->vfid == PFID)
>>> +        return 0;
>>
>> I think this will be the case for the entire group, isn't it?
>> Should we go ahead and just return SYSFS_GROUP_INVISIBLE so even the dir
>> is invisible instead of having an empty dir?
>
>today it can't be empty as at this level there is already a "device"
>link and a "profile" dir - both for PF and all VFs.

I missed that this group didn't have a name, so it's not a separate dir.
Yep, this looks good.

Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>

thanks
Lucas De Marchi


* Re: [PATCH v2 17/17] drm/xe/pf: Add documentation for sriov_admin attributes
  2025-10-28 17:58 ` [PATCH v2 17/17] drm/xe/pf: Add documentation for sriov_admin attributes Michal Wajdeczko
@ 2025-10-30 17:25   ` Lucas De Marchi
  2025-10-31 12:35     ` Michal Wajdeczko
  0 siblings, 1 reply; 45+ messages in thread
From: Lucas De Marchi @ 2025-10-30 17:25 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Rodrigo Vivi

On Tue, Oct 28, 2025 at 06:58:31PM +0100, Michal Wajdeczko wrote:
>Add initial documentation for all recently added Xe driver
>specific SR-IOV sysfs files located under device/sriov_admin.
>
>Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>---
>v2: update version (Rodrigo)
>---
> .../ABI/testing/sysfs-driver-intel-xe-sriov   | 160 ++++++++++++++++++
> 1 file changed, 160 insertions(+)
> create mode 100644 Documentation/ABI/testing/sysfs-driver-intel-xe-sriov
>
>diff --git a/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov b/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov
>new file mode 100644
>index 000000000000..ac688a66bf36
>--- /dev/null
>+++ b/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov
>@@ -0,0 +1,160 @@
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/
>+Date:		October 2025
>+KernelVersion:	6.19
>+Contact:	intel-xe@lists.freedesktop.org
>+Description:
>+		This directory appears for the particular Intel Xe device when:
>+
>+		 - device supports SR-IOV, and
>+		 - device is a Physical Function (PF), and
>+		 - driver support for the SR-IOV PF is enabled on given device.
>+
>+		This directory is used as a root for all attributes required to
>+		manage both Physical Function (PF) and Virtual Functions (VFs).
>+
>+
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/pf/
>+Date:		October 2025
>+KernelVersion:	6.19
>+Contact:	intel-xe@lists.freedesktop.org
>+Description:
>+		This directory holds attributes related to the SR-IOV Physical
>+		Function (PF).
>+
>+
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf1/
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf2/
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<N>/
>+Date:		October 2025
>+KernelVersion:	6.19
>+Contact:	intel-xe@lists.freedesktop.org
>+Description:
>+		These directories hold attributes related to the SR-IOV Virtual
>+		Functions (VFs).
>+
>+		Note that the VF number <N> is 1-based as described in PCI SR-IOV
>+		specification as the Xe driver follows that naming schema.
>+
>+		There could be "vf1", "vf2" and so on, up to "vf<N>", where <N>
>+		matches value of the "sriov_totalvfs" attribute.

matches the

>+
>+
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/pf/profile/exec_quantum_ms
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/pf/profile/preempt_timeout_us
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/pf/profile/sched_priority
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/profile/exec_quantum_ms
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/profile/preempt_timeout_us
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/profile/sched_priority
>+Date:		October 2025
>+KernelVersion:	6.19
>+Contact:	intel-xe@lists.freedesktop.org
>+Description:
>+		These files represent scheduling parameters of the PF or VFs and
>+		are available only for Intel Xe platforms with GPU sharing based
>+		on the time-slice basis. These scheduling parameters can be changed

I didn't like how the "based ... basis" was, so tried to improve it.
Suggestion:

These files expose scheduling parameters for the PF and its VFs, and are
visible only on Intel Xe platforms that use time-sliced GPU sharing.
They can be changed even if VFs are enabled and running and reflect the
settings of all tiles/GTs assigned to the given function.

>+		even if VFs are enabled and running. Those parameters reflects
>+		settings of all tiles/GTs assigned to the given function.
>+
>+		exec_quantum_ms: (RW) unsigned integer
>+			The GT execution quantum (EQ) in [ms] of the given function.

s/of/for/

>+			Actual quantum value might be aligned per HW/FW requirements.

Is the aligned value returned on read so user can know the actual value?

Does this need to be expanded to mention what is a GT execution quantum?
I don't think we have it documented elsewhere. How does a 0 in one of
the functions interact with the others not being 0?

In the scenario below, I was expecting the functions to get 1/3, 1/3, 1/6, and 1/6
of the GPU, but I'm not entirely sure:

	cat sriov_admin/*/profile/exec_quantum_ms
	100
	100
	50
	50

>+
>+			Default is 0 (unlimited).
>+
>+		preempt_timeout_us: (RW) unsigned integer
>+			The GT preemption timeout in [us] of the given function.
>+			Actual timeout value might be aligned per HW/FW requirements.
>+
>+			Default is 0 (unlimited).
>+
>+		sched_priority: (RW/RO) string
>+			The GT scheduling priority of the given function.
>+
>+			"low" - function will be scheduled on the GPU for its EQ/PT
>+				only if function has any work already submitted.
>+
>+			"normal" - function will be scheduled on the GPU for its EQ/PT
>+				irrespective of whether it has submitted work or not.
>+
>+			"high" - function will be scheduled on the GPU for its EQ/PT
>+				in the next time-slice after the current one completes
>+				and function has a work submitted.
>+
>+			Default is "low".
>+
>+			When read, this file will display the current and available
>+			scheduling priorities. The currently active priority level will
>+			be enclosed in square brackets, like:
>+
>+				[low] normal high
>+
>+			This file can be read-only if changing is currently not supported

This file can be read-only if changing the priority is not supported


rest looks good.

thanks
Lucas De Marchi


>+			for given function due to any known HW/FW limitations.
>+
>+		Writes to these attributes may fail with errors like:
>+			-EINVAL if provided input is malformed or not recognized,
>+			-EPERM if change is not applicable on given HW/FW,
>+			-EIO if FW refuses to change the provisioning.
>+
>+		Reads from these attributes may fail with:
>+			-EUCLEAN if value is not consistent across all tiles/GTs.
>+
>+
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/.bulk_profile/exec_quantum_ms
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/.bulk_profile/preempt_timeout_us
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/.bulk_profile/sched_priority
>+Date:		October 2025
>+KernelVersion:	6.19
>+Contact:	intel-xe@lists.freedesktop.org
>+Description:
>+		These files allow bulk reconfiguration of the scheduling parameters
>+		of the PF or VFs and are available only for Intel Xe platforms with
>+		GPU sharing based on the time-slice basis. These scheduling parameters
>+		can be changed even if VFs are enabled and running.
>+
>+		exec_quantum_ms: (WO) unsigned integer
>+			The GT execution quantum (EQ) in [ms] to be applied to all functions.
>+			See sriov_admin/{pf,vf<N>}/profile/exec_quantum_ms for more details.
>+
>+		preempt_timeout_us: (WO) unsigned integer
>+			The GT preemption timeout (PT) in [us] to be applied to all functions.
>+			See sriov_admin/{pf,vf<N>}/profile/preempt_timeout_us for more details.
>+
>+		sched_priority: (RW/RO) string
>+			The GT scheduling priority to be applied for all functions.
>+			See sriov_admin/{pf,vf<N>}/profile/sched_priority for more details.
>+
>+		Writes to these attributes may fail with errors like:
>+			-EINVAL if provided input is malformed or not recognized,
>+			-EPERM if change is not applicable on given HW/FW,
>+			-EIO if FW refuses to change the provisioning.
>+
>+
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/stop
>+Date:		October 2025
>+KernelVersion:	6.19
>+Contact:	intel-xe@lists.freedesktop.org
>+Description:
>+		This file allows controlling scheduling of the VF on Intel Xe GPU
>+		platforms. It allows implementing a custom policy mechanism in case VFs
>+		are misbehaving or triggering adverse events above defined thresholds.
>+
>+		stop: (WO) bool
>+			All GT executions of given function shall be immediately stopped.
>+			To allow scheduling this VF again, the VF FLR must be triggered.
>+
>+		Writes to this attribute may fail with errors like:
>+			-EINVAL if provided input is malformed or not recognized,
>+			-EPERM if change is not applicable on given HW/FW,
>+			-EIO if FW refuses to change the scheduling.
>+
>+
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/pf/device
>+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/device
>+Date:		October 2025
>+KernelVersion:	6.19
>+Contact:	intel-xe@lists.freedesktop.org
>+Description:
>+		These are symlinks to the underlying PCI device entry representing
>+		given Xe SR-IOV function. For the PF, this link is always present.
>+		For VFs, this link is present only for currently enabled VFs.
>-- 
>2.47.1
>


* Re: [PATCH v2 03/17] drm/xe/pf: Add _locked variants of the VF EQ config functions
  2025-10-29  8:47   ` Piotr Piórkowski
  2025-10-29  9:00     ` Piotr Piórkowski
@ 2025-10-30 19:47     ` Michal Wajdeczko
  1 sibling, 0 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-30 19:47 UTC (permalink / raw)
  To: Piotr Piórkowski; +Cc: intel-xe, Lucas De Marchi



On 10/29/2025 9:47 AM, Piotr Piórkowski wrote:
> Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on wto [2025-paź-28 18:58:17 +0100]:
>> In upcoming patches we will want to configure VF's execution
>> quantum (EQ) on all GTs under single lock to avoid potential
>> races in parallel GT configuration attempts.
>>
>> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>> ---
>>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 58 +++++++++++++++++-----
>>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h |  4 ++
>>  2 files changed, 49 insertions(+), 13 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
>> index c0c0215c0703..717f81e76b8c 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
>> @@ -1732,29 +1732,65 @@ static int pf_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
>>  }
>>  
>>  /**
>> - * xe_gt_sriov_pf_config_set_exec_quantum - Configure execution quantum for the VF.
>> + * xe_gt_sriov_pf_config_set_exec_quantum_locked() - Configure execution quantum of the VF.
>>   * @gt: the &xe_gt
>>   * @vfid: the VF identifier
>>   * @exec_quantum: requested execution quantum in milliseconds (0 is infinity)
>>   *
>> - * This function can only be called on PF.
>> + * This function can only be called on PF with the master mutex held.
>> + * It will log the provisioned value or an error in case of the failure.
>>   *
>>   * Return: 0 on success or a negative error code on failure.
>>   */
>> -int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
>> -					   u32 exec_quantum)
>> +int xe_gt_sriov_pf_config_set_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid,
>> +						  u32 exec_quantum)
>>  {
>>  	int err;
>>  
>> -	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
>> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
>> +
>>  	err = pf_provision_exec_quantum(gt, vfid, exec_quantum);
>> -	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
>>  
>>  	return pf_config_set_u32_done(gt, vfid, exec_quantum,
>> -				      xe_gt_sriov_pf_config_get_exec_quantum(gt, vfid),
>> +				      pf_get_exec_quantum(gt, vfid),
>>  				      "execution quantum", exec_quantum_unit, err);
>>  }
>>  
>> +/**
>> + * xe_gt_sriov_pf_config_set_exec_quantum - Configure execution quantum for the VF.
>> + * @gt: the &xe_gt
>> + * @vfid: the VF identifier
>> + * @exec_quantum: requested execution quantum in milliseconds (0 is infinity)
>> + *
>> + * This function can only be called on PF.
>> + * It will log the provisioned value or na error in case of the failure.
> 
> typo: na -> an
>> + *
>> + * Return: 0 on success or a negative error code on failure.
>> + */
>> +int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
>> +					   u32 exec_quantum)
>> +{
>> +	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
>> +
>> +	return xe_gt_sriov_pf_config_set_exec_quantum_locked(gt, vfid, exec_quantum);
>> +}
>> +
>> +/**
>> + * xe_gt_sriov_pf_config_get_exec_quantum_locked() - Get VF's execution quantum.
>> + * @gt: the &xe_gt
>> + * @vfid: the VF identifier
>> + *
>> + * This function can only be called on PF with the master mutex held.
>> + *
>> + * Return: VF's (or PF's) execution quantum in milliseconds.
>> + */
>> +u32 xe_gt_sriov_pf_config_get_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid)
>> +{
>> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
>> +
>> +	return pf_get_exec_quantum(gt, vfid);
> 
> NIT: Perhaps it would have been better to be consistent and call it the locked version here.

almost all static functions in this file expect the lock to be
already taken, which is also enforced with lockdep_assert_held, and
we use the _locked suffix only for public functions (as public
functions usually do _not_ expect the lock to be held)

> 
>> +}
>> +
>>  /**
>>   * xe_gt_sriov_pf_config_get_exec_quantum - Get VF's execution quantum.
>>   * @gt: the &xe_gt
>> @@ -1766,13 +1802,9 @@ int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid,
>>   */
>>  u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
>>  {
>> -	u32 exec_quantum;
>> +	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
>>  
>> -	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
>> -	exec_quantum = pf_get_exec_quantum(gt, vfid);
>> -	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
>> -
>> -	return exec_quantum;
>> +	return pf_get_exec_quantum(gt, vfid);
>>  }
>>  
>>  static const char *preempt_timeout_unit(u32 preempt_timeout)
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
>> index 513e6512a575..b4beb5a97031 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
>> @@ -40,6 +40,10 @@ int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid, uns
>>  u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid);
>>  int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid, u32 exec_quantum);
>>  
>> +u32 xe_gt_sriov_pf_config_get_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid);
>> +int xe_gt_sriov_pf_config_set_exec_quantum_locked(struct xe_gt *gt, unsigned int vfid,
>> +						  u32 exec_quantum);
>> +
>>  u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid);
>>  int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
>>  					      u32 preempt_timeout);
> 
> but anyway it looks good:
> Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>

thanks!

> 
>> -- 
>> 2.47.1
>>
> 


^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH v2 17/17] drm/xe/pf: Add documentation for sriov_admin attributes
  2025-10-30 17:25   ` Lucas De Marchi
@ 2025-10-31 12:35     ` Michal Wajdeczko
  0 siblings, 0 replies; 45+ messages in thread
From: Michal Wajdeczko @ 2025-10-31 12:35 UTC (permalink / raw)
  To: Lucas De Marchi; +Cc: intel-xe, Rodrigo Vivi



On 10/30/2025 6:25 PM, Lucas De Marchi wrote:
> On Tue, Oct 28, 2025 at 06:58:31PM +0100, Michal Wajdeczko wrote:
>> Add initial documentation for all recently added Xe driver
>> specific SR-IOV sysfs files located under device/sriov_admin.
>>
>> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>> ---
>> v2: update version (Rodrigo)
>> ---
>> .../ABI/testing/sysfs-driver-intel-xe-sriov   | 160 ++++++++++++++++++

...

>> +            Actual quantum value might be aligned per HW/FW requirements.
> 
> Is the aligned value returned on read so user can know the actual value?

yes, while the GuC does not tell us about any specific low-level alignment,
the PF driver clamps the requested value based on the GuC ABI prior to
sending the new EQ settings to the GuC, see [1]

[1] https://elixir.bootlin.com/linux/v6.18-rc3/source/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c#L197

and you may also try it:

$ echo 150000 > /sys/bus/pci/drivers/xe/0000:00:02.0/sriov_admin/vf1/profile/exec_quantum_ms

[ ] xe 0000:00:02.0: [drm] PF: Tile0: GT0: VF1 provisioned with 100000ms execution quantum
[ ] xe 0000:00:02.0: [drm] PF: Tile0: GT1: VF1 provisioned with 100000ms execution quantum

$ cat /sys/bus/pci/drivers/xe/0000:00:02.0/sriov_admin/vf1/profile/exec_quantum_ms
100000

> 
> Does this need to be expanded to mention what is a GT execution quantum?
> I don't think we have it documented elsewhere.

we have a small kernel-doc for the EQ KLV [2], but it looks like it could
be a good candidate for a section in xe_sriov.rst ;)

[2] https://elixir.bootlin.com/linux/v6.18-rc3/source/drivers/gpu/drm/xe/abi/guc_klvs_abi.h#L257

> How does a 0 in one of
> the functions interact with the others not being 0?

that's definitely not a recommended configuration, but it may still work

please note that currently there is no synchronization between the GuCs,
so engines on those GTs are utilized for the VF workloads independently

btw, there is one problematic configuration when EQ/PT is infinity/infinity
together with the VF priority set to "normal" - I'm wondering if we should
somehow try to reject such a setting over sysfs ...

> 
> In the scenario below, I was expecting the functions to get 1/3, 1/3, 1/6, and 1/6
> of the GPU, but I'm not entirely sure:
> 
>     cat sriov_admin/*/profile/exec_quantum_ms
>     100
>     100
>     50
>     50

with the VFs' priority set to "normal" this should work

but if the VFs' priority is still at the default "low", then when the
other VFs are idle, the one VF with an active WL will get 100% of the
GPU (its time will just be sliced at its EQ)



end of thread, other threads:[~2025-10-31 12:35 UTC | newest]

Thread overview: 45+ messages
2025-10-28 17:58 [PATCH v2 00/17] PF: Add sriov_admin sysfs tree Michal Wajdeczko
2025-10-28 17:58 ` [PATCH v2 01/17] drm/xe/pf: Prepare sysfs for SR-IOV admin attributes Michal Wajdeczko
2025-10-28 17:58 ` [PATCH v2 02/17] drm/xe/pf: Take RPM during calls to SR-IOV attr.store() Michal Wajdeczko
2025-10-28 17:58 ` [PATCH v2 03/17] drm/xe/pf: Add _locked variants of the VF EQ config functions Michal Wajdeczko
2025-10-29  8:47   ` Piotr Piórkowski
2025-10-29  9:00     ` Piotr Piórkowski
2025-10-30 19:47     ` Michal Wajdeczko
2025-10-28 17:58 ` [PATCH v2 04/17] drm/xe/pf: Add _locked variants of the VF PT " Michal Wajdeczko
2025-10-29 11:00   ` Piotr Piórkowski
2025-10-29 20:27   ` Lucas De Marchi
2025-10-28 17:58 ` [PATCH v2 05/17] drm/xe/pf: Allow change PF and VFs EQ/PT using sysfs Michal Wajdeczko
2025-10-29 11:17   ` Piotr Piórkowski
2025-10-29 20:26   ` Lucas De Marchi
2025-10-28 17:58 ` [PATCH v2 06/17] drm/xe/pf: Relax report helper to accept PF in bulk configs Michal Wajdeczko
2025-10-28 17:58 ` [PATCH v2 07/17] drm/xe/pf: Fix signature of internal config helpers Michal Wajdeczko
2025-10-29  8:02   ` Piotr Piórkowski
2025-10-28 17:58 ` [PATCH v2 08/17] drm/xe/pf: Add functions to bulk configure EQ/PT on GT Michal Wajdeczko
2025-10-29 13:59   ` Piotr Piórkowski
2025-10-29 20:32   ` Lucas De Marchi
2025-10-28 17:58 ` [PATCH v2 09/17] drm/xe/pf: Add functions to bulk provision EQ/PT Michal Wajdeczko
2025-10-29 20:33   ` Lucas De Marchi
2025-10-28 17:58 ` [PATCH v2 10/17] drm/xe/pf: Allow bulk change all VFs EQ/PT using sysfs Michal Wajdeczko
2025-10-28 17:58 ` [PATCH v2 11/17] drm/xe/pf: Add functions to provision scheduling priority Michal Wajdeczko
2025-10-28 17:58 ` [PATCH v2 12/17] drm/xe/pf: Allow bulk change all VFs priority using sysfs Michal Wajdeczko
2025-10-30 12:43   ` Lucas De Marchi
2025-10-30 13:47     ` Lucas De Marchi
2025-10-28 17:58 ` [PATCH v2 13/17] drm/xe/pf: Allow change PF scheduling " Michal Wajdeczko
2025-10-30 13:35   ` Lucas De Marchi
2025-10-30 13:49     ` Lucas De Marchi
2025-10-28 17:58 ` [PATCH v2 14/17] drm/xe/pf: Promote xe_pci_sriov_get_vf_pdev Michal Wajdeczko
2025-10-28 17:58 ` [PATCH v2 15/17] drm/xe/pf: Add sysfs device symlinks to enabled VFs Michal Wajdeczko
2025-10-30 13:40   ` Lucas De Marchi
2025-10-28 17:58 ` [PATCH v2 16/17] drm/xe/pf: Allow to stop and reset VF using sysfs Michal Wajdeczko
2025-10-30  8:45   ` Piotr Piórkowski
2025-10-30 13:43   ` Lucas De Marchi
2025-10-30 13:50     ` Michal Wajdeczko
2025-10-30 14:14       ` Lucas De Marchi
2025-10-28 17:58 ` [PATCH v2 17/17] drm/xe/pf: Add documentation for sriov_admin attributes Michal Wajdeczko
2025-10-30 17:25   ` Lucas De Marchi
2025-10-31 12:35     ` Michal Wajdeczko
2025-10-28 20:04 ` ✗ CI.checkpatch: warning for PF: Add sriov_admin sysfs tree (rev2) Patchwork
2025-10-28 20:05 ` ✓ CI.KUnit: success " Patchwork
2025-10-28 20:43 ` ✓ Xe.CI.BAT: " Patchwork
2025-10-29  7:15 ` ✗ Xe.CI.Full: failure " Patchwork
2025-10-29 10:11   ` Michal Wajdeczko
