From: "Summers, Stuart" <stuart.summers@intel.com>
To: "jani.nikula@linux.intel.com" <jani.nikula@linux.intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
"Lin, Shuicheng" <shuicheng.lin@intel.com>,
"Vivi, Rodrigo" <rodrigo.vivi@intel.com>,
"Roper, Matthew D" <matthew.d.roper@intel.com>,
"Brost, Matthew" <matthew.brost@intel.com>,
"Wajdeczko, Michal" <michal.wajdeczko@intel.com>,
"Nerlige Ramappa, Umesh" <umesh.nerlige.ramappa@intel.com>,
"Ceraolo Spurio, Daniele" <daniele.ceraolospurio@intel.com>
Subject: Re: [PATCH 3/9] drm/xe: Split out configfs data structures
Date: Mon, 4 May 2026 14:24:14 +0000 [thread overview]
Message-ID: <67d2074cd720da46c40a9f5b38089a7642d1e0fd.camel@intel.com> (raw)
In-Reply-To: <be85da366bbb91d83d66d4e2fccf1679c6c8c96b@intel.com>
On Mon, 2026-05-04 at 11:47 +0300, Jani Nikula wrote:
> On Mon, 04 May 2026, Stuart Summers <stuart.summers@intel.com> wrote:
> > Split the configfs data structures into their own _types.h file.
>
> Why? The commit message must always answer the question, "why".
>
> The obvious downside here is that you expose the types that were
> previously hidden in xe_configfs.c to anyone who includes
> xe_configfs_types.h. And that header depends on a ton of other headers,
> making the header interdependencies worse overall.
>
> Maybe you need that header later for something, but please spell that
> out here.
Yeah, this is exactly right. I'll do a better job with the commit
messages going forward. Really appreciate the feedback, Jani!
-Stuart
>
>
> BR,
> Jani.
>
> >
> > Signed-off-by: Stuart Summers <stuart.summers@intel.com>
> > Assisted-by: Copilot:claude-opus-4.7
> > ---
> > drivers/gpu/drm/xe/xe_configfs.c       | 85 +++++++-------------------
> > drivers/gpu/drm/xe/xe_configfs_types.h | 59 ++++++++++++++++++
> > 2 files changed, 80 insertions(+), 64 deletions(-)
> > create mode 100644 drivers/gpu/drm/xe/xe_configfs_types.h
> >
> > diff --git a/drivers/gpu/drm/xe/xe_configfs.c b/drivers/gpu/drm/xe/xe_configfs.c
> > index 1e134057fae8..12b7fe65446d 100644
> > --- a/drivers/gpu/drm/xe/xe_configfs.c
> > +++ b/drivers/gpu/drm/xe/xe_configfs.c
> > @@ -4,7 +4,6 @@
> > */
> >
> > #include <linux/bitops.h>
> > -#include <linux/ctype.h>
> > #include <linux/configfs.h>
> > #include <linux/cleanup.h>
> > #include <linux/find.h>
> > @@ -15,12 +14,10 @@
> >
> > #include "instructions/xe_mi_commands.h"
> > #include "xe_configfs.h"
> > +#include "xe_configfs_types.h"
> > #include "xe_defaults.h"
> > #include "xe_gt_types.h"
> > -#include "xe_hw_engine_types.h"
> > #include "xe_module.h"
> > -#include "xe_pci_types.h"
> > -#include "xe_sriov_types.h"
> >
> > /**
> > * DOC: Xe Configfs
> > @@ -245,36 +242,6 @@
> > * # rmdir /sys/kernel/config/xe/0000:03:00.0/
> > */
> >
> > -/* Similar to struct xe_bb, but not tied to HW (yet) */
> > -struct wa_bb {
> > -	u32 *cs;
> > -	u32 len; /* in dwords */
> > -};
> > -
> > -struct xe_config_group_device {
> > -	struct config_group group;
> > -	struct config_group sriov;
> > -
> > -	struct xe_config_device {
> > -		struct wa_bb ctx_restore_mid_bb[XE_ENGINE_CLASS_MAX];
> > -		struct wa_bb ctx_restore_post_bb[XE_ENGINE_CLASS_MAX];
> > -		bool enable_psmi;
> > -		bool enable_survivability_mode;
> > -		u64 engines_allowed;
> > -		u64 gt_types_allowed;
> > -		struct {
> > -			bool admin_only_pf;
> > -			unsigned int max_vfs;
> > -		} sriov;
> > -	} config;
> > -
> > -	/* protects attributes */
> > -	struct mutex lock;
> > -	/* matching descriptor */
> > -	const struct xe_device_desc *desc;
> > -	/* tentative SR-IOV mode */
> > -	enum xe_sriov_mode mode;
> > -};
> >
> > static const struct xe_config_device device_defaults = {
> > 	.enable_psmi = false,
> > @@ -322,16 +289,6 @@ static const struct {
> > 	{ .name = "media", .type = XE_GT_TYPE_MEDIA },
> > };
> >
> > -static struct xe_config_group_device *to_xe_config_group_device(struct config_item *item)
> > -{
> > -	return container_of(to_config_group(item), struct xe_config_group_device, group);
> > -}
> > -
> > -static struct xe_config_device *to_xe_config_device(struct config_item *item)
> > -{
> > -	return &to_xe_config_group_device(item)->config;
> > -}
> > -
> > static bool is_bound(struct xe_config_group_device *dev)
> > {
> > 	unsigned int domain, bus, slot, function;
> > @@ -359,7 +316,7 @@ static bool is_bound(struct xe_config_group_device *dev)
> >
> > static ssize_t enable_survivability_mode_show(struct config_item *item, char *page)
> > {
> > -	struct xe_config_device *dev = to_xe_config_device(item);
> > +	struct xe_config_device *dev = xe_configfs_to_device(item);
> >
> > 	return sprintf(page, "%d\n", dev->enable_survivability_mode);
> > }
> > @@ -367,7 +324,7 @@ static ssize_t enable_survivability_mode_show(struct config_item *item, char *pa
> > static ssize_t enable_survivability_mode_store(struct config_item *item, const char *page,
> > 					       size_t len)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item);
> > 	bool enable_survivability_mode;
> > 	int ret;
> >
> > @@ -386,7 +343,7 @@ static ssize_t enable_survivability_mode_store(struct config_item *item, const c
> >
> > static ssize_t gt_types_allowed_show(struct config_item *item, char *page)
> > {
> > -	struct xe_config_device *dev = to_xe_config_device(item);
> > +	struct xe_config_device *dev = xe_configfs_to_device(item);
> > 	char *p = page;
> >
> > 	for (size_t i = 0; i < ARRAY_SIZE(gt_types); i++)
> > @@ -399,7 +356,7 @@ static ssize_t gt_types_allowed_show(struct config_item *item, char *page)
> > static ssize_t gt_types_allowed_store(struct config_item *item, const char *page,
> > 				      size_t len)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item);
> > 	char *buf __free(kfree) = kstrdup(page, GFP_KERNEL);
> > 	char *p = buf;
> > 	u64 typemask = 0;
> > @@ -437,7 +394,7 @@ static ssize_t gt_types_allowed_store(struct config_item *item, const char *page
> >
> > static ssize_t engines_allowed_show(struct config_item *item, char *page)
> > {
> > -	struct xe_config_device *dev = to_xe_config_device(item);
> > +	struct xe_config_device *dev = xe_configfs_to_device(item);
> > 	char *p = page;
> >
> > 	for (size_t i = 0; i < ARRAY_SIZE(engine_info); i++) {
> > @@ -529,7 +486,7 @@ static int parse_engine(const char *s, const char *end_chars, u64 *mask,
> > static ssize_t engines_allowed_store(struct config_item *item, const char *page,
> > 				     size_t len)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item);
> > 	ssize_t patternlen, p;
> > 	u64 mask, val = 0;
> >
> > @@ -552,14 +509,14 @@ static ssize_t engines_allowed_store(struct config_item *item, const char *page,
> >
> > static ssize_t enable_psmi_show(struct config_item *item, char *page)
> > {
> > -	struct xe_config_device *dev = to_xe_config_device(item);
> > +	struct xe_config_device *dev = xe_configfs_to_device(item);
> >
> > 	return sprintf(page, "%d\n", dev->enable_psmi);
> > }
> >
> > static ssize_t enable_psmi_store(struct config_item *item, const char *page, size_t len)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item);
> > 	bool val;
> > 	int ret;
> >
> > @@ -634,14 +591,14 @@ static ssize_t wa_bb_show(struct xe_config_group_device *dev,
> >
> > static ssize_t ctx_restore_mid_bb_show(struct config_item *item, char *page)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item);
> >
> > 	return wa_bb_show(dev, dev->config.ctx_restore_mid_bb, page, SZ_4K);
> > }
> >
> > static ssize_t ctx_restore_post_bb_show(struct config_item *item, char *page)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item);
> >
> > 	return wa_bb_show(dev, dev->config.ctx_restore_post_bb, page, SZ_4K);
> > }
> > @@ -798,7 +755,7 @@ static ssize_t wa_bb_store(struct wa_bb wa_bb[static XE_ENGINE_CLASS_MAX],
> > static ssize_t ctx_restore_mid_bb_store(struct config_item *item,
> > 					const char *data, size_t sz)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item);
> >
> > 	return wa_bb_store(dev->config.ctx_restore_mid_bb, dev, data, sz);
> > }
> > @@ -806,7 +763,7 @@ static ssize_t ctx_restore_mid_bb_store(struct config_item *item,
> > static ssize_t ctx_restore_post_bb_store(struct config_item *item,
> > 					 const char *data, size_t sz)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item);
> >
> > 	return wa_bb_store(dev->config.ctx_restore_post_bb, dev, data, sz);
> > }
> > @@ -830,7 +787,7 @@ static struct configfs_attribute *xe_config_device_attrs[] = {
> >
> > static void xe_config_device_release(struct config_item *item)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item);
> >
> > 	mutex_destroy(&dev->lock);
> >
> > @@ -846,7 +803,7 @@ static struct configfs_item_operations xe_config_device_ops = {
> > static bool xe_config_device_is_visible(struct config_item *item,
> > 					struct configfs_attribute *attr, int n)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item);
> >
> > 	if (attr == &attr_enable_survivability_mode) {
> > 		if (!dev->desc->is_dgfx || dev->desc->platform < XE_BATTLEMAGE)
> > @@ -869,7 +826,7 @@ static const struct config_item_type xe_config_device_type = {
> >
> > static ssize_t sriov_max_vfs_show(struct config_item *item, char *page)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item->ci_parent);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item->ci_parent);
> >
> > 	guard(mutex)(&dev->lock);
> >
> > @@ -881,7 +838,7 @@ static ssize_t sriov_max_vfs_show(struct config_item *item, char *page)
> >
> > static ssize_t sriov_max_vfs_store(struct config_item *item, const char *page, size_t len)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item->ci_parent);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item->ci_parent);
> > 	unsigned int max_vfs;
> > 	int ret;
> >
> > @@ -903,7 +860,7 @@ static ssize_t sriov_max_vfs_store(struct config_item *item, const char *page, s
> >
> > static ssize_t sriov_admin_only_pf_show(struct config_item *item, char *page)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item->ci_parent);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item->ci_parent);
> >
> > 	guard(mutex)(&dev->lock);
> >
> > @@ -912,7 +869,7 @@ static ssize_t sriov_admin_only_pf_show(struct config_item *item, char *page)
> >
> > static ssize_t sriov_admin_only_pf_store(struct config_item *item, const char *page, size_t len)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item->ci_parent);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item->ci_parent);
> > 	bool admin_only_pf;
> > 	int ret;
> >
> > @@ -941,7 +898,7 @@ static struct configfs_attribute *xe_config_sriov_attrs[] = {
> > static bool xe_config_sriov_is_visible(struct config_item *item,
> > 				       struct configfs_attribute *attr, int n)
> > {
> > -	struct xe_config_group_device *dev = to_xe_config_group_device(item->ci_parent);
> > +	struct xe_config_group_device *dev = xe_configfs_to_group_device(item->ci_parent);
> >
> > 	if (attr == &sriov_attr_max_vfs && dev->mode != XE_SRIOV_MODE_PF)
> > 		return false;
> > @@ -1084,7 +1041,7 @@ static struct xe_config_group_device *find_xe_config_group_device(struct pci_dev
> > 	if (!item)
> > 		return NULL;
> >
> > -	return to_xe_config_group_device(item);
> > +	return xe_configfs_to_group_device(item);
> > }
> >
> > static void dump_custom_dev_config(struct pci_dev *pdev,
> > diff --git a/drivers/gpu/drm/xe/xe_configfs_types.h b/drivers/gpu/drm/xe/xe_configfs_types.h
> > new file mode 100644
> > index 000000000000..935097aafa96
> > --- /dev/null
> > +++ b/drivers/gpu/drm/xe/xe_configfs_types.h
> > @@ -0,0 +1,59 @@
> > +/* SPDX-License-Identifier: MIT */
> > +/*
> > + * Copyright © 2026 Intel Corporation
> > + */
> > +#ifndef _XE_CONFIGFS_TYPES_H_
> > +#define _XE_CONFIGFS_TYPES_H_
> > +
> > +#include <linux/configfs.h>
> > +#include <linux/mutex.h>
> > +#include <linux/types.h>
> > +
> > +#include "xe_hw_engine_types.h"
> > +#include "xe_pci_types.h"
> > +#include "xe_sriov_types.h"
> > +
> > +struct config_item;
> > +
> > +/* Similar to struct xe_bb, but not tied to HW (yet) */
> > +struct wa_bb {
> > +	u32 *cs;
> > +	u32 len; /* in dwords */
> > +};
> > +
> > +struct xe_config_group_device {
> > +	struct config_group group;
> > +	struct config_group sriov;
> > +
> > +	struct xe_config_device {
> > +		struct wa_bb ctx_restore_mid_bb[XE_ENGINE_CLASS_MAX];
> > +		struct wa_bb ctx_restore_post_bb[XE_ENGINE_CLASS_MAX];
> > +		bool enable_psmi;
> > +		bool enable_survivability_mode;
> > +		u64 engines_allowed;
> > +		u64 gt_types_allowed;
> > +		struct {
> > +			bool admin_only_pf;
> > +			unsigned int max_vfs;
> > +		} sriov;
> > +	} config;
> > +
> > +	/* protects attributes */
> > +	struct mutex lock;
> > +	/* matching descriptor */
> > +	const struct xe_device_desc *desc;
> > +	/* tentative SR-IOV mode */
> > +	enum xe_sriov_mode mode;
> > +};
> > +
> > +static inline struct xe_config_group_device *xe_configfs_to_group_device(struct config_item *item)
> > +{
> > +	return container_of(to_config_group(item), struct xe_config_group_device, group);
> > +}
> > +
> > +static inline struct xe_config_device *xe_configfs_to_device(struct config_item *item)
> > +{
> > +	return &xe_configfs_to_group_device(item)->config;
> > +}
> > +
> > +#endif /* _XE_CONFIGFS_TYPES_H_ */
>
Thread overview: 37+ messages
2026-05-04 4:43 [PATCH 0/9] Add new debug infrastructure for configfs Stuart Summers
2026-05-04 4:43 ` [PATCH 1/9] drm/xe: Rename survivability_mode to enable_survivability_mode Stuart Summers
2026-05-04 13:29 ` Gustavo Sousa
2026-05-04 14:32 ` Summers, Stuart
2026-05-04 14:38 ` Summers, Stuart
2026-05-04 14:40 ` Summers, Stuart
2026-05-04 18:31 ` Rodrigo Vivi
2026-05-04 18:38 ` Summers, Stuart
2026-05-04 4:43 ` [PATCH 2/9] drm/xe: Sort xe_config_device fields and defaults alphabetically Stuart Summers
2026-05-04 13:58 ` Gustavo Sousa
2026-05-04 14:38 ` Summers, Stuart
2026-05-04 15:47 ` Lin, Shuicheng
2026-05-04 15:54 ` Summers, Stuart
2026-05-04 4:43 ` [PATCH 3/9] drm/xe: Split out configfs data structures Stuart Summers
2026-05-04 4:52 ` Summers, Stuart
2026-05-04 8:47 ` Jani Nikula
2026-05-04 14:24 ` Summers, Stuart [this message]
2026-05-04 21:48 ` Matthew Brost
2026-05-04 21:51 ` Summers, Stuart
2026-05-04 4:43 ` [PATCH 4/9] drm/xe: Add a new debug focused configfs group Stuart Summers
2026-05-04 15:42 ` Gustavo Sousa
2026-05-04 15:50 ` Summers, Stuart
2026-05-04 17:28 ` Gustavo Sousa
2026-05-04 17:44 ` Summers, Stuart
2026-05-04 19:04 ` Gustavo Sousa
2026-05-05 21:45 ` Summers, Stuart
2026-05-04 4:43 ` [PATCH 5/9] drm/xe: Move debug configfs entries to xe_configfs_debug.c Stuart Summers
2026-05-04 4:43 ` [PATCH 6/9] drm/xe/guc: Add configfs support for guc_log_level Stuart Summers
2026-05-05 23:54 ` Daniele Ceraolo Spurio
2026-05-04 4:43 ` [PATCH 7/9] drm/xe/guc: Add support for NPK as a GuC log target Stuart Summers
2026-05-04 4:43 ` [PATCH 8/9] drm/xe: Add infrastructure for debug configfs parameters Stuart Summers
2026-05-04 4:43 ` [PATCH 9/9] drm/xe: Migrate existing debug configfs entries to params infrastructure Stuart Summers
2026-05-04 4:54 ` [PATCH 0/9] Add new debug infrastructure for configfs Summers, Stuart
2026-05-04 5:30 ` ✗ CI.checkpatch: warning for " Patchwork
2026-05-04 5:32 ` ✓ CI.KUnit: success " Patchwork
2026-05-04 6:44 ` ✓ Xe.CI.BAT: " Patchwork
2026-05-04 8:42 ` ✗ Xe.CI.FULL: failure " Patchwork