From: Raag Jadav <raag.jadav@intel.com>
To: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>,
lucas.demarchi@intel.com, rodrigo.vivi@intel.com,
intel-xe@lists.freedesktop.org, riana.tauro@intel.com,
michal.wajdeczko@intel.com
Subject: Re: [PATCH v5 2/2] drm/xe/gt: Introduce runtime suspend/resume
Date: Mon, 20 Oct 2025 13:48:26 +0200 [thread overview]
Message-ID: <aPYhinOaJJmMpv3a@black.igk.intel.com> (raw)
In-Reply-To: <4bc0b037-ae4a-49fe-a379-a9aec25ef42d@intel.com>

On Thu, Oct 16, 2025 at 11:47:40AM -0700, Daniele Ceraolo Spurio wrote:
> On 10/16/2025 11:38 AM, Matthew Brost wrote:
> > On Thu, Oct 16, 2025 at 08:32:23AM -0700, Daniele Ceraolo Spurio wrote:
> > > On 10/14/2025 12:30 AM, Raag Jadav wrote:
> > > > If power state is retained between suspend/resume cycle, we don't need
> > > > to perform full GT re-initialization. Introduce runtime helpers for GT
> > > > which greatly reduce suspend/resume delay.
> > > >
> > > > v2: Drop redundant xe_gt_sanitize() and xe_guc_ct_stop() (Daniele)
> > > > Use runtime naming for guc helpers (Daniele)
> > > > v3: Drop redundant logging, add kernel doc (Michal)
> > > > Use runtime naming for ct helpers (Michal)
> > > > v4: Fix tags (Rodrigo)
> > > > v5: Include host_l2_vram workaround (Daniele)
> > > > Reuse xe_guc_submit_enable/disable() helpers (Daniele)
> > > Based on the reply about VF behavior in the previous rev, I am thinking this
> > > is not the correct approach.
> > > If on a VF the runtime_suspend/resume functions are not called at all like
> > > you said, it means that the VF driver needs to be able to cope with the fact
> > > that the HW can lose power without it being directly notified if its rpm
> > > refcount is 0. This in turn means that on a VF the driver can't rely on what
> > > you do in xe_uc_runtime_suspend/resume() to idle the state and must instead
> > > guarantee that the state is already idled when the last rpm ref is released
> > > and that a new rpm ref is taken before restarting anything (which might
> > > already be true). AFAIK there is no difference in the SW state management
> > > of queues and CTBs between PF and VF, so if we achieve that on a VF we'll
> > > also have it on PF/Native, which means that there will be no need to
> > > pause/unpause CTBs and exec_queues. The only thing that the PF would need to
> > > do in the rpm flow is program the HW (e.g. the host_l2_vram stuff and the
> > > irq re-enabling).
> > >
> > > tl;dr, if we guarantee that:
> > > 1 - if the rpm refcount is 0 then there is no activity on HW, so nothing
> > > that needs to be paused (which might already be true)
> > > 2 - an rpm ref is taken before any activity is started (which might also
> > > already be true)
> > >
> > > Then we're guaranteeing that there is nothing to pause/unpause at runtime
> > > suspend/resume time, so we're safe skipping those calls entirely on VF while
> > > on native/PF we can just focus on the HW re-programming.
> > >
> > > BTW, any idea how this is working with the current code? Given that we're
> > > re-loading the GuC on runtime resume, are the VFs getting disconnected and
> > VFs hold an RPM ref. See pf_enable_vfs in xe_pci_sriov.c.
>
> Ok, so we just can never do runtime pm if there are VFs active and my
> concerns about VF rpm behavior are void. Thanks for clarifying. It might be
> worth adding an assert to the runtime suspend/resume functions to make sure
> this is documented there as well and not just in the SRIOV code.
Would you prefer xe_assert() or drm_WARN()?
> I think that we should still investigate if we actually do need to call
> pause/unpause on runtime suspend or if we're already covered by the rpm
> refs, because we might end up being able to actually turn on rpm for VFs,
> but we can do that as a follow up step and go ahead with the approach in
> this patch first.
Raag
Thread overview: 18+ messages
2025-10-14 7:30 [PATCH v5 0/2] Introduce GT runtime suspend/resume Raag Jadav
2025-10-14 7:30 ` [PATCH v5 1/2] drm/xe/guc: Make xe_guc_submit_pause() available for non-VF cases Raag Jadav
2025-10-14 18:09 ` Matthew Brost
2025-10-15 7:04 ` Raag Jadav
2025-10-16 12:21 ` Raag Jadav
2025-10-16 19:01 ` Matthew Brost
2025-10-14 7:30 ` [PATCH v5 2/2] drm/xe/gt: Introduce runtime suspend/resume Raag Jadav
2025-10-16 15:32 ` Daniele Ceraolo Spurio
2025-10-16 18:38 ` Matthew Brost
2025-10-16 18:47 ` Daniele Ceraolo Spurio
2025-10-20 11:48 ` Raag Jadav [this message]
2025-10-14 7:42 ` ✓ CI.KUnit: success for Introduce GT runtime suspend/resume (rev2) Patchwork
2025-10-14 8:19 ` ✓ Xe.CI.BAT: " Patchwork
2025-10-14 15:40 ` ✗ Xe.CI.Full: failure " Patchwork
2025-10-16 13:03 ` ✗ CI.checkpatch: warning for Introduce GT runtime suspend/resume (rev3) Patchwork
2025-10-16 13:04 ` ✓ CI.KUnit: success " Patchwork
2025-10-16 14:08 ` ✓ Xe.CI.BAT: " Patchwork
2025-10-17 9:43 ` ✓ Xe.CI.Full: " Patchwork