From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: "Satyanarayana K V P" <satyanarayana.k.v.p@intel.com>,
intel-xe@lists.freedesktop.org,
"Piotr Piórkowski" <piotr.piorkowski@intel.com>,
dri-devel@lists.freedesktop.org
Subject: Re: [RFC v2 1/1] drm/xe/pf: Skip creating DRM device entries in PF admin-only mode
Date: Mon, 23 Feb 2026 16:22:51 -0500 [thread overview]
Message-ID: <aZzFK-wdWZpcXVz4@intel.com> (raw)
In-Reply-To: <5d56b0ab-ca4c-4a5d-adf6-245040b4888e@intel.com>
On Mon, Feb 23, 2026 at 09:31:56PM +0100, Michal Wajdeczko wrote:
>
>
> On 2/23/2026 4:09 PM, Satyanarayana K V P wrote:
> > When the PF is configured for admin‑only mode, it is restricted to
> > management functions and should not expose a device node that would
> > allow users to run workloads.
>
> maybe instead of doing such massive changes, better option would be to
> define a separate drm_driver structure with different set of ioctls?
Well, I did consider that option, but it wouldn't be backward
compatible for UMDs. They use the availability of the cardN node to
enumerate the GPU, regardless of the subset of IOCTLs you might have available.
>
> as maybe in pf-admin-mode we may still want to use XE_DEVICE_QUERY
> and/or XE_OBSERVATION or some other future ioctls for VFs monitoring
As of now, there's no requirement for any of these IOCTLs to be present
on the PF while a VF is leased. So this option here is the most
straightforward.
Except for our usage of drm-debugfs. Hence this patch ends up
aligning with accel in that sense.
>
> xe_device.c:
>
> static struct drm_driver driver = {
> ... .driver_features =
> DRIVER_GEM |
> DRIVER_RENDER | DRIVER_SYNCOBJ |
> DRIVER_SYNCOBJ_TIMELINE | DRIVER_GEM_GPUVA,
> ...
> .ioctls = xe_ioctls,
> .num_ioctls = ARRAY_SIZE(xe_ioctls),
> };
>
>
> static struct drm_driver driver_admin_only_pf = {
> ... .driver_features = 0,
> ...
> .ioctls = xe_ioctls_admin_only_pf,
> .num_ioctls = ARRAY_SIZE(xe_ioctls_admin_only_pf),
> };
>
> the only problem seems to be that we have to make this choice sooner
> than today we detect PF/VF mode, but OTOH the admin-only-pf flag is
> only available as configfs attribute so we can just trust that flag
Yes, ideally sooner. Ideally we could extract that from some early
SR-IOV provisioning info and pass total num VFs = 0 to avoid this mode.
But this mode is the default when in PF.
>
> >
> > In this mode, no DRM device entry is created; however, sysfs and debugfs
> > interfaces for the PF remain available at:
> >
> > sysfs: /sys/devices/pci0000:00/<B:D:F>
> > debugfs: /sys/kernel/debug/dri/<B:D:F>
> >
> > Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> > Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> > Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > Cc: Piotr Piórkowski <piotr.piorkowski@intel.com>
> > Cc: dri-devel@lists.freedesktop.org
> >
> > ---
> > V2 -> V3:
> > - Introduced new helper function xe_debugfs_create_files() to create
> > debugfs entries based on admin_only_pf mode or normal mode.
Although this patch is bigger around the debugfs, it is less intrusive
than the previous one.
I glanced through the whole patch and everything looks good to me,
though I haven't carefully reviewed it line by line yet.
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> >
> > V1 -> V2:
> > - Rebased to latest drm-tip.
> > - Update update_minor_dev() to debugfs_minor_dev().
> > ---
> > drivers/gpu/drm/xe/Makefile | 1 +
> > drivers/gpu/drm/xe/xe_debugfs.c | 18 +++--
> > drivers/gpu/drm/xe/xe_debugfs_helpers.c | 78 +++++++++++++++++++
> > drivers/gpu/drm/xe/xe_debugfs_helpers.h | 27 +++++++
> > drivers/gpu/drm/xe/xe_device.c | 20 +++--
> > drivers/gpu/drm/xe/xe_gsc_debugfs.c | 8 +-
> > drivers/gpu/drm/xe/xe_gt_debugfs.c | 20 +++--
> > drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c | 5 +-
> > drivers/gpu/drm/xe/xe_gt_sriov_vf_debugfs.c | 5 +-
> > drivers/gpu/drm/xe/xe_guc_debugfs.c | 20 ++---
> > drivers/gpu/drm/xe/xe_huc_debugfs.c | 8 +-
> > drivers/gpu/drm/xe/xe_pxp_debugfs.c | 23 ++++--
> > drivers/gpu/drm/xe/xe_sriov.h | 8 ++
> > drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c | 5 +-
> > drivers/gpu/drm/xe/xe_sriov_vf.c | 5 +-
> > drivers/gpu/drm/xe/xe_tile_debugfs.c | 10 +--
> > drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c | 14 ++--
> > 17 files changed, 202 insertions(+), 73 deletions(-)
> > create mode 100644 drivers/gpu/drm/xe/xe_debugfs_helpers.c
> > create mode 100644 drivers/gpu/drm/xe/xe_debugfs_helpers.h
> >
Thread overview:
2026-02-23 15:09 [RFC v2 0/1] Do not create drm device for PF only admin mode Satyanarayana K V P
2026-02-23 15:09 ` [RFC v2 1/1] drm/xe/pf: Skip creating DRM device entries in PF admin-only mode Satyanarayana K V P
2026-02-23 20:31 ` Michal Wajdeczko
2026-02-23 21:22 ` Rodrigo Vivi [this message]
2026-02-23 15:57 ` ✗ CI.checkpatch: warning for Do not create drm device for PF only admin mode (rev2) Patchwork
2026-02-23 15:58 ` ✓ CI.KUnit: success " Patchwork