From: Raag Jadav <raag.jadav@intel.com>
To: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>,
intel-xe@lists.freedesktop.org, matthew.brost@intel.com,
thomas.hellstrom@linux.intel.com, riana.tauro@intel.com,
michal.wajdeczko@intel.com, matthew.d.roper@intel.com,
michal.winiarski@intel.com, matthew.auld@intel.com,
maarten@lankhorst.se, jani.nikula@intel.com,
lukasz.laguna@intel.com, zhanjun.dong@intel.com, lukas@wunner.de,
badal.nilawar@intel.com
Subject: Re: [PATCH v6 8/8] drm/xe/pci: Introduce PCIe FLR
Date: Sat, 2 May 2026 09:41:07 +0200 [thread overview]
Message-ID: <afWqkw063tUekzLQ@black.igk.intel.com> (raw)
In-Reply-To: <afPCSglJC3E0e_Zh@intel.com>
On Thu, Apr 30, 2026 at 04:57:46PM -0400, Rodrigo Vivi wrote:
> On Wed, Apr 29, 2026 at 10:57:53AM -0700, Daniele Ceraolo Spurio wrote:
> > On 4/29/2026 9:22 AM, Rodrigo Vivi wrote:
> > > On Wed, Apr 29, 2026 at 06:33:55AM +0200, Raag Jadav wrote:
> > > > On Tue, Apr 28, 2026 at 04:28:15PM -0700, Daniele Ceraolo Spurio wrote:
> > > > > <snip>
> > > > >
> > > > > I haven't gone through the code yet, but I wanted to ask some questions
> > > > > regarding the approach first.
> > > > Sure.
> > > >
> > > > > > +
> > > > > > +/**
> > > > > > + * DOC: PCI Error Handling
> > > > > > + *
> > > > > > + * The Xe driver registers PCI callbacks which are called by the PCI core
> > > > > > + * in case of bus errors or resets.
> > > > > > + *
> > > > > > + * Currently only PCI Function Level Reset (FLR) callbacks are supported. Since
> > > > > > + * most of the Endpoint Function state is lost on PCIe FLR, the flow is
> > > > > > + * similar to the system suspend/resume flow, with a few notable exceptions.
> > > > > IMO we need a couple of lines to describe what the impact of FLR is on the
> > > > > HW. Something like:
> > > > >
> > > > > "PCI FLR clears VRAM and resets the state of all the HW units. Therefore,
> > > > > the contents of all exec queues and BOs in VRAM are lost and the HW needs a
> > > > > full re-init".
> > > > Makes sense.
> > > >
> > > > > > + *
> > > > > > + * Prepare phase:
> > > > > > + * - Temporarily wedge the device to prevent userspace access
> > > > > I'm not convinced that wedging is the correct approach here, because the
> > > > > expectation from the apps POV is that wedging is permanent, so they won't
> > > > > try again later. Maybe we can have a separate flr_in_progress flag and
> > > > > return something like -EBUSY or -EAGAIN when the FLR is in progress?
> > > > This was my initial plan, but during implementation I realized that many
> > > > of the code paths that would need handling based on a new flag are already
> > > > handled by the wedged flag: IOCTLs, dummy page faulting, the GT reset
> > > > worker, GuC submission, GuC PC and TLB invalidation corner cases, SRIOV
> > > > races, and so on. So I decided to reuse it here.
> > > >
> > > > In my understanding, wedging is permanent only when we choose to send the
> > > > uevent and expect device recovery from userspace, which IIUC we're not
> > > > doing here. So I hope that's okay?
> > > Right, it should be okay.
> > >
> > > But we have 2 different users on top.
> > >
> > > Runtime (NEO/Level0-core and Apps):
> > >
> > > UMDs will send DEVICE_LOST to application in the case of any kind of reset.
> > > Nothing prevents an app from trying again; it will just receive an error.
> > >
> > > Admin (Level0-sysman and XPUManager):
> > >
> > > As Raag said, for them it is only permanent if we ask for help through the
> > > wedge uevent hints. Otherwise they should still be able to re-enumerate
> > > the devices whenever needed.
> >
> > Those are very specific to server use-cases. While that's what we're
> > currently implementing FLR for, there might be other use-cases in the future
> > that require us to implement this on the client side (there is already at
> > least one case where we wedge but we could instead recover via
> > driver-triggered FLR), where the apps can be less curated.
> >
> > I'm a bit lost on how a random app is supposed to tell the difference
> > between temporary and permanent wedges if they get a DEVICE_LOST error in
> > both cases. Are we expecting all apps to register to the uevent? Or are the
> > UMD drivers expected to return a different code if the wedge is permanent?
> > Because I don't think that an app should just keep trying again non-stop.
>
> Thomas had a proposal with watch queue where we could pass UMD some different
> error codes in different situations so UMD could perhaps handle different cases
> in different ways.
+1, or we can simply hook this up to WEDGED=none (which IIRC AMD is already
doing).
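
For illustration, the temporary-vs-permanent distinction discussed here could
be modeled roughly like this (every name below is hypothetical shorthand, not
the actual drm/xe uevent plumbing): a permanent wedge emits a uevent hint
asking userspace to recover, while the temporary FLR wedge emits none, so
admin tooling can still re-enumerate the device afterwards.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical wedge recovery methods. A permanent wedge asks userspace
 * for help via a uevent hint; the temporary FLR wedge does not.
 */
enum wedge_recovery {
	WEDGE_RECOVERY_NONE,	/* temporary, e.g. FLR in progress */
	WEDGE_RECOVERY_REBIND,	/* permanent, userspace must rebind */
};

/* Return the uevent hint to emit, or NULL when no userspace help is needed. */
static const char *wedge_uevent_hint(enum wedge_recovery method)
{
	return method == WEDGE_RECOVERY_NONE ? NULL : "WEDGED=rebind";
}
```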
Raag
> But as of now they have no different ways of handling things. They send
> DEVICE_LOST to the application. The application may or may not be
> reinitialized. Nothing there states that the device is lost forever, but at
> the same time nothing is done to restart the application automatically. It
> is up to the user to restart things when they need/want to.
>
> >
> > >
> > > > > > + * - Stop accepting new submissions
> > > > > This is done as part of the above step and it isn't a separate one, right?
> > > > We explicitly call xe_guc_submit_disable() inside flr_prepare(), so I
> > > > thought it was worth spelling out. Will drop.
> >
> > Maybe instead of dropping it, reword it as "stop all submissions to the
> > GuC".
> >
> > > >
> > > > > > + * - Kill exec queues which signals all fences and frees in-flight jobs
> > > > > > + * - Skip memory eviction due to untrustworthy VRAM contents
> > > > > Note that the VRAM contents are not necessarily untrustworthy at this point
> > > > > since the FLR hasn't happened yet. However, if the admin is triggering an
> > > > > FLR it is likely that something is broken (whether memory, GuC, GT or
> > > > > something else), so we shouldn't try to touch the HW anyway.
> > > > Yes, that's what I meant here but your phrasing is better. Will update.
> > > >
> > > > > > + * - Remove all memory mappings since VRAM contents will be lost
> > > > > Dumb question, but what happens if a userspace app has an object mapped and
> > > > > they try to access it from the CPU after this step?
> > > > I'm not very familiar with the MM parts, but from what I understand it'll
> > > > cause a fault which should be redirected to the dummy page. I've tried to
> > > > handle it in commit c020fff70d75, but I'm not sure if that's sufficient.
> > > > This is why I've marked the MM corner cases as TODO.
> >
> > AFAICS that patch only redirects to dummy page while the wedged flag is set.
> > What happens after the FLR is completed and we've removed the wedged flag?
> > If we've dropped the mapping to the memory, where is that access going to
> > go?
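
To sketch the idea under discussion (a simplified model, not the real TTM/GEM
fault path, and all names here are illustrative): while the device is wedged,
CPU faults on a device BO resolve to a shared zero-filled dummy page, so reads
return 0s instead of touching lost VRAM. The open question above, namely what
backs the mapping once the FLR completes and the wedged flag is cleared, is
deliberately not answered by this sketch.

```c
#include <assert.h>
#include <stdbool.h>

/* Shared dummy page; static storage is zero-filled. */
static unsigned char dummy_page[4096];

/*
 * Simplified model of the fault redirect: while wedged, every BO CPU
 * fault resolves to the dummy page instead of the real backing store.
 */
static const unsigned char *bo_fault_backing(bool wedged,
					     const unsigned char *real_backing)
{
	return wedged ? dummy_page : real_backing;
}
```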
> >
> > > >
> > > > > > + *
> > > > > > + * Re-initialization phase:
> > > > > > + * - Recreate kernel BOs due to skipped eviction in the prepare phase
> > > > > > + * - Restore kernel queues which were killed in the prepare phase
> > > > > > + * - Reload all uC firmwares
> > > > > > + * - Bring up GT and unwedge to allow userspace access
> > > > > > + *
> > > > > > + * Since VRAM contents are lost, the user is expected to recreate user memory
> > > > > > + * and reload context.
> > > > > How is the user expected to realize that they need to re-create their BOs? A
> > > > > queue can be killed for different reasons and normally that doesn't imply
> > > > > that any associated BO is now invalid.
> > > > We return -ECANCELED if the wedged flag is set, and the dummy page data
> > > > will read all 0s. This would be the indication to the application that it
> > > > needs to recreate its user memory and reload its context.
> >
> > Applications don't usually check their memory to see if it is still good.
> > Are we expecting them to start doing this? Or are we expecting all memory to
> > get thrown out every time an application gets an -ECANCELED error?
> > In either case I'd like an ack from the UMD teams on this.
> >
> > Daniele
> >
> > > >
> > > > Raag
> > > >
> > > > > > + *
> > > > > > + * TODO: Add PCIe error handling callbacks using similar flow.
> > > > > > + *
> > > > > > + * The current implementation is limited to re-initializing the GT.
> > > > > > + * It needs to be extended to cover the components listed below.
> > > > > > + *
> > > > > > + * - Proper re-initialization of GSC and PXP for integrated platforms
> > > > > > + * - SRIOV cases which need synchronization between PF and VF
> > > > > > + * - Re-initialization of all child devices of Xe
> > > > > > + * - User memory handling and MM corner cases
> > > > > > + * - Display
> > > > > > + */
> > > > > > +
> > > > > >
> >
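
Putting the two phases from the DOC comment together as a sketch (every helper
below is hypothetical shorthand for the steps named in the doc, not the real
Xe API):

```c
#include <assert.h>
#include <stddef.h>

/* Steps from the prepare and re-initialization phases in the DOC comment. */
enum flr_step {
	FLR_WEDGE, FLR_SUBMIT_DISABLE, FLR_KILL_QUEUES, FLR_UNMAP,
	FLR_RESTORE_BOS, FLR_RESTORE_QUEUES, FLR_RELOAD_UC, FLR_UNWEDGE,
	FLR_NR_STEPS,
};

/* Record the order in which the steps run; a stand-in for the real helpers. */
static void flr_run(enum flr_step *log)
{
	size_t i = 0;

	/* Prepare phase: quiesce before the reset clears VRAM and HW state. */
	log[i++] = FLR_WEDGE;		/* block userspace access */
	log[i++] = FLR_SUBMIT_DISABLE;	/* stop submissions to the GuC */
	log[i++] = FLR_KILL_QUEUES;	/* signal fences, free in-flight jobs */
	log[i++] = FLR_UNMAP;		/* drop mappings of soon-lost VRAM */

	/* ... PCIe FLR happens here; endpoint function state is lost ... */

	/* Re-initialization phase: rebuild what the reset destroyed. */
	log[i++] = FLR_RESTORE_BOS;	/* recreate kernel BOs */
	log[i++] = FLR_RESTORE_QUEUES;	/* restore kernel queues */
	log[i++] = FLR_RELOAD_UC;	/* reload all uC firmwares */
	log[i++] = FLR_UNWEDGE;		/* bring up GT, allow userspace again */
}
```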
Thread overview: 19+ messages
2026-04-23 10:00 [PATCH v6 0/8] Introduce Xe PCIe FLR Raag Jadav
2026-04-23 10:00 ` [PATCH v6 1/8] drm/xe/uc_fw: Allow re-initializing firmware Raag Jadav
2026-04-23 10:00 ` [PATCH v6 2/8] drm/xe/guc_submit: Introduce guc_exec_queue_reinit() Raag Jadav
2026-04-23 10:00 ` [PATCH v6 3/8] drm/xe/gt: Introduce FLR helpers Raag Jadav
2026-04-23 10:00 ` [PATCH v6 4/8] drm/xe/bo_evict: Introduce xe_bo_restore_map() Raag Jadav
2026-04-23 10:00 ` [PATCH v6 5/8] drm/xe/exec_queue: Introduce xe_exec_queue_reinit() Raag Jadav
2026-04-23 10:00 ` [PATCH v6 6/8] drm/xe/migrate: Introduce xe_migrate_reinit() Raag Jadav
2026-04-23 10:00 ` [PATCH v6 7/8] drm/xe/pm: Introduce xe_device_suspend/resume() Raag Jadav
2026-04-23 10:00 ` [PATCH v6 8/8] drm/xe/pci: Introduce PCIe FLR Raag Jadav
2026-04-28 23:28 ` Daniele Ceraolo Spurio
2026-04-29 4:33 ` Raag Jadav
2026-04-29 16:22 ` Rodrigo Vivi
2026-04-29 17:57 ` Daniele Ceraolo Spurio
2026-04-30 20:57 ` Rodrigo Vivi
2026-05-02 7:41 ` Raag Jadav [this message]
2026-04-23 10:09 ` ✗ CI.checkpatch: warning for Introduce Xe PCIe FLR (rev6) Patchwork
2026-04-23 10:10 ` ✓ CI.KUnit: success " Patchwork
2026-04-23 11:05 ` ✓ Xe.CI.BAT: " Patchwork
2026-04-23 20:58 ` ✗ Xe.CI.FULL: failure " Patchwork