Intel-XE Archive on lore.kernel.org
From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: intel-xe@lists.freedesktop.org, stuart.summers@intel.com,
	 francois.dugast@intel.com, daniele.ceraolospurio@intel.com,
	 michal.wajdeczko@intel.com
Subject: Re: [PATCH v3 3/3] drm/xe: Move LRC seqno to system memory to avoid slow dGPU reads
Date: Thu, 26 Feb 2026 18:56:29 +0100	[thread overview]
Message-ID: <980e782fb633f53db3a30594c3c8a893a19b664c.camel@linux.intel.com> (raw)
In-Reply-To: <aaCCXVt2RTt2uuSq@lstrano-desk.jf.intel.com>

On Thu, 2026-02-26 at 09:26 -0800, Matthew Brost wrote:
> On Thu, Feb 26, 2026 at 09:11:20AM -0800, Matthew Brost wrote:
> 
> Missed a comment.
> 
> > 

8<------------------------

> > > > struct xe_hw_engine *hwe,
> > > >  	if (IS_ERR(lrc->bo))
> > > >  		return PTR_ERR(lrc->bo);
> > > >  
> > > > +	seqno_bo = xe_bo_create_pin_map_novm(xe, tile, PAGE_SIZE,
> > > > +					     ttm_bo_type_kernel,
> > > > +					     XE_BO_FLAG_GGTT |
> > > > +					     XE_BO_FLAG_GGTT_INVALIDATE |
> > > > +					     XE_BO_FLAG_SYSTEM, false);
> > > 
> > > XE_BO_FLAG_PINNED_NORESTORE?
> > > 
> 
> Maybe (?), but this seems dangerous... Can't fences be pending during
> hibernate? We also check whether a job has started (by looking at the
> start seqno) in the TDR, and if the seqno is in VRAM, nonsensical reads
> could confuse those checks. Also consider the case where the fence seqno
> is clobbered; we could end up with values in memory that indicate the
> next job we run is already signaled.
> 
> So after typing this out, I actually think the answer is no to this
> flag.
> 
> Matt

OK, I wasn't sure exactly what happens on suspend / resume with LR
jobs. The !LR jobs are idled AFAIR.
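
To spell out the hazard for the archive: the TDR infers whether a job ever
started by comparing seqnos, so undefined post-resume contents could make a
never-run job look started or even signaled. A rough sketch of that failure
mode (hypothetical names, not the actual xe TDR code):

	/*
	 * Hypothetical illustration, not the real xe TDR check: decide
	 * whether a job has started by comparing the last seqno the
	 * engine wrote against the job's start seqno.
	 */
	static bool job_started(u32 hw_seqno, u32 start_seqno)
	{
		/* If hw_seqno is read from memory that was pinned with
		 * NORESTORE, its post-resume value is undefined, and a
		 * never-executed job can spuriously pass this test.
		 */
		return (s32)(hw_seqno - start_seqno) >= 0;
	}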

We have an issue registered somewhere with LR jobs w.r.t. suspend/resume:
if we change the spinner to preemptible in
xe_exec_compute_mode@lr-mode-workload and suspend while it's running, it
doesn't complete on resume, even though the VM gets properly preempted
during the VRAM eviction.

But that's of course beyond the scope of this patch.

With the IS_ERR() fix,
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
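
(For the archive: the IS_ERR() fix referred to is taking PTR_ERR() from the
BO that was just created, not from lrc->bo; a sketch of the corrected hunk,
assuming the rest of the error path stays as posted:)

	seqno_bo = xe_bo_create_pin_map_novm(xe, tile, PAGE_SIZE,
					     ttm_bo_type_kernel,
					     XE_BO_FLAG_GGTT |
					     XE_BO_FLAG_GGTT_INVALIDATE |
					     XE_BO_FLAG_SYSTEM, false);
	if (IS_ERR(seqno_bo)) {
		err = PTR_ERR(seqno_bo);	/* was: PTR_ERR(lrc->bo) */
		goto err_lrc_finish;
	}
	lrc->seqno_bo = seqno_bo;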


> 
> > > Thanks,
> > > Thomas
> > > 
> > > 
> > > > +	if (IS_ERR(seqno_bo)) {
> > > > +		err = PTR_ERR(lrc->bo);
> > > > +		goto err_lrc_finish;
> > > > +	}
> > > > +	lrc->seqno_bo = seqno_bo;
> > > > +
> > > >  	xe_hw_fence_ctx_init(&lrc->fence_ctx, hwe->gt,
> > > >  			     hwe->fence_irq, hwe->name);
> > > >  
> > > > diff --git a/drivers/gpu/drm/xe/xe_lrc_types.h b/drivers/gpu/drm/xe/xe_lrc_types.h
> > > > index a4373d280c39..5a718f759ed6 100644
> > > > --- a/drivers/gpu/drm/xe/xe_lrc_types.h
> > > > +++ b/drivers/gpu/drm/xe/xe_lrc_types.h
> > > > @@ -22,6 +22,12 @@ struct xe_lrc {
> > > >  	 */
> > > >  	struct xe_bo *bo;
> > > >  
> > > > +	/**
> > > > +	 * @seqno_bo: Buffer object (memory) for seqno numbers. Always in
> > > > +	 * system memory as this is a CPU read, GPU write path object.
> > > > +	 */
> > > > +	struct xe_bo *seqno_bo;
> > > > +
> > > >  	/** @size: size of the lrc and optional indirect ring state */
> > > >  	u32 size;
> > > >  
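
For context on the "CPU read, GPU write" comment above: the point of keeping
the seqno BO in system memory is that CPU-side polling then hits cached
system RAM rather than an uncached VRAM mapping over PCIe. Conceptually, a
sketch using the xe_map helpers (the offset and exact call site are
assumptions, not the patch's code):

	/* CPU-side read of the current seqno through the BO's kernel
	 * mapping; cheap for a system-memory BO, slow if the backing
	 * store were VRAM.
	 */
	u32 seqno = xe_map_rd(xe, &lrc->seqno_bo->vmap, 0, u32);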
