From: Matthew Brost <matthew.brost@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <stuart.summers@intel.com>,
	<arvind.yadav@intel.com>, <himal.prasad.ghimiray@intel.com>,
	<francois.dugast@intel.com>
Subject: Re: [PATCH v4 00/12] Fine grained fault locking, threaded prefetch, storm cache
Date: Thu, 26 Feb 2026 11:36:21 -0800	[thread overview]
Message-ID: <aaCgtYQNFYLnvZoR@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <e967bf81cd917fa3086568458cd9100a4528517a.camel@linux.intel.com>

On Thu, Feb 26, 2026 at 02:43:29PM +0100, Thomas Hellström wrote:
> Hi, Matt.
> 
> On Wed, 2026-02-25 at 20:28 -0800, Matthew Brost wrote:
> > Fine-grained fault locking provides immediate benefits: it allows
> > page
> > faults from the same VM to be processed in parallel (unless they
> > target
> > the same range) and enables a sane multi-threaded prefetch
> > implementation. UMD prefetch benchmarks see 10% to 50% improvement in
> > prefetch performance on BMG depending on PCIe bus speed.
> > 
> > Once parallel fault processing is available, the pagefault queue can
> > be
> > unified into a single queue with multiple workers pulling faults to
> > process. A single queue then allows a sensible pagefault cache to be
> > implemented, so that multiple faults targeting the same region can be
> > batched together and acknowledged in, ideally, a single pass. This
> > saves
> > CPU cycles during pagefault handling and improves overall throughput
> > of
> > the fault handler.
> > 
> > Significant improvements in UMD pagefault benchmarks can be seen when
> > utilizing this caching.
> > 
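[The single-queue scheme described above can be illustrated with a small, self-contained model. This is not the xe driver code; the queue layout, the 4 KiB region granularity, and names like pop_batch() are hypothetical. The idea: one shared fault queue, several workers, and each pop drains any queued faults hitting the same region so the whole batch is serviced in one pass.]

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

#define QUEUE_DEPTH 64
#define NUM_WORKERS 4
#define REGION_SHIFT 12			/* assume 4 KiB fault granularity */

struct fault { unsigned long addr; };

static struct fault queue[QUEUE_DEPTH];
static int q_head, q_tail;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static int regions_serviced;		/* batched "acks", for the demo */

static void push_fault(unsigned long addr)
{
	pthread_mutex_lock(&q_lock);
	queue[q_tail++ % QUEUE_DEPTH] = (struct fault){ .addr = addr };
	pthread_mutex_unlock(&q_lock);
}

/*
 * Pop one fault, then drain any immediately following faults that hit
 * the same region, so the batch is acknowledged in a single pass.
 */
static bool pop_batch(unsigned long *region)
{
	bool found = false;

	pthread_mutex_lock(&q_lock);
	if (q_head != q_tail) {
		*region = queue[q_head++ % QUEUE_DEPTH].addr >> REGION_SHIFT;
		while (q_head != q_tail &&
		       (queue[q_head % QUEUE_DEPTH].addr >> REGION_SHIFT) == *region)
			q_head++;
		found = true;
	}
	pthread_mutex_unlock(&q_lock);
	return found;
}

/* Worker thread: pull batches until the queue is empty. */
static void *worker(void *arg)
{
	unsigned long region;

	while (pop_batch(&region))
		__sync_fetch_and_add(&regions_serviced, 1);
	return NULL;
}
```

[Five faults in two regions collapse into two serviced batches, regardless of how many workers race on the queue, since the drain happens under the queue lock.]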
> > v3:
> >  - Fix kunit build (CI)
> > v4:
> >  - Actually fix kunit build (CI)
> > 
> > Matt
> > 
> > Matthew Brost (12):
> >   drm/xe: Fine grained page fault locking
> >   drm/xe: Allow prefetch-only VM bind IOCTLs to use VM read lock
> >   drm/xe: Thread prefetch of SVM ranges
> >   drm/xe: Use a single page-fault queue with multiple workers
> >   drm/xe: Add num_pf_work modparam
> >   drm/xe: Engine class and instance into a u8
> >   drm/xe: Track pagefault worker runtime
> >   drm/xe: Chain page faults via queue-resident cache to avoid fault
> >     storms
> >   drm/xe: Add pagefault chaining stats
> >   drm/xe: Add debugfs pagefault_info
> >   drm/xe: batch CT pagefault acks with periodic flush
> >   drm/xe: Track parallel page fault activity in GT stats
> > 
> >  drivers/gpu/drm/drm_gpusvm.c            |   2 +-
> >  drivers/gpu/drm/xe/xe_debugfs.c         |  11 +
> >  drivers/gpu/drm/xe/xe_defaults.h        |   1 +
> >  drivers/gpu/drm/xe/xe_device.c          |  17 +-
> >  drivers/gpu/drm/xe/xe_device_types.h    |  17 +-
> >  drivers/gpu/drm/xe/xe_gt_stats.c        |   7 +
> >  drivers/gpu/drm/xe/xe_gt_stats_types.h  |   7 +
> >  drivers/gpu/drm/xe/xe_guc_ct.c          |  94 +++-
> >  drivers/gpu/drm/xe/xe_guc_ct.h          |  35 +-
> >  drivers/gpu/drm/xe/xe_guc_pagefault.c   |  35 +-
> >  drivers/gpu/drm/xe/xe_guc_types.h       |   6 +
> >  drivers/gpu/drm/xe/xe_module.c          |   4 +
> >  drivers/gpu/drm/xe/xe_module.h          |   1 +
> >  drivers/gpu/drm/xe/xe_pagefault.c       | 675 ++++++++++++++++++++----
> >  drivers/gpu/drm/xe/xe_pagefault.h       |  74 +++
> >  drivers/gpu/drm/xe/xe_pagefault_types.h | 109 +++-
> >  drivers/gpu/drm/xe/xe_svm.c             | 129 +++--
> >  drivers/gpu/drm/xe/xe_svm.h             |  59 ++-
> >  drivers/gpu/drm/xe/xe_userptr.c         |  20 +-
> >  drivers/gpu/drm/xe/xe_vm.c              | 215 ++++++--
> >  drivers/gpu/drm/xe/xe_vm_types.h        |  37 +-
> >  21 files changed, 1309 insertions(+), 246 deletions(-)
> 
> Before I get to reviewing this, some suggestions from Claude:
> 
>   Confirmed regressions (3 commits with issues):
> 
>   c664c1b91090 — Fine grained page fault locking
> 
>    - Reference leak in vm_bind_ioctl_ops_create() (xe_vm.c).
>      xe_svm_range_find_or_insert() was changed to take a reference, but
>      two paths don't put it: (1) when xe_svm_range_validate() returns
>      true → goto check_next_range, and (2) when xa_alloc() fails →
>      goto unwind_prefetch_ops. The validate path is hit on every
>      prefetch of an already-populated range, so refcounts grow
>      unbounded.
>

Indeed. Will fix.
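[The pattern is easy to demonstrate with a toy refcount model (hypothetical names, not the actual xe_vm.c code): any function that takes a referenced object must put that reference on every exit path, including the early "already validated" exit.]

```c
#include <assert.h>

/* Minimal stand-in for a refcounted SVM range (hypothetical). */
struct range {
	int refcount;
};

/* Like the patched xe_svm_range_find_or_insert(): returns with a ref. */
static struct range *range_find_or_insert(struct range *r)
{
	r->refcount++;
	return r;
}

static void range_put(struct range *r)
{
	r->refcount--;
}

/* Leaky version: the early "already validated" exit forgets to put. */
static void prefetch_leaky(struct range *r, int already_valid)
{
	struct range *range = range_find_or_insert(r);

	if (already_valid)
		return;			/* BUG: reference leaked */
	range_put(range);
}

/* Fixed version: every exit path drops the reference it took. */
static void prefetch_fixed(struct range *r, int already_valid)
{
	struct range *range = range_find_or_insert(r);

	if (already_valid) {
		range_put(range);	/* put before the early exit */
		return;
	}
	range_put(range);
}
```

[Each call down the leaky path bumps the refcount by one and never releases it, which is exactly the unbounded growth described above.]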

>   80012f80c75f — Chain page faults
> 
>    - Commit message typos only: "samr ASID" → "same ASID", "IRQ pathd"
>      → "IRQ paths". No code issues.
>

Yes.

>   569104fb76ed — batch CT pagefault acks with periodic flush
> 
>    - Off-by-one in flush period: guc_ack_fault_begin initialises
>      pagefault_ack_counter to PERIOD - 2 = 14, but the comment says the
>      first flush should be at ack #2. With counter=14 the first flush
>      fires at ack #3 (counter hits 16, 16 & 15 == 0). Fix: initialise
>      to XE_GUC_PAGEFAULT_FLUSH_PERIOD - 1.

This is pretty good, caught this one myself after posting.

I'm convinced everyone should use Claude as a spot check before posting.

>    - Commit message typo: "Assistent-by" → "Assisted-by".

Yes.
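[The arithmetic is easy to verify with a stand-alone model (a sketch consistent with the numbers quoted above, not the actual guc_ack_fault_begin code), assuming the counter is tested against the period mask on each ack and then incremented:]

```c
#include <assert.h>

#define XE_GUC_PAGEFAULT_FLUSH_PERIOD 16	/* assumed power-of-two */

/*
 * Model of a check-then-increment ack counter: a flush fires on the
 * ack where the tested value is a multiple of the period. Returns the
 * 1-based ack number of the first flush for a given initial value.
 */
static int first_flush_ack(int init)
{
	int counter = init;
	int ack;

	for (ack = 1; ; ack++) {
		if ((counter & (XE_GUC_PAGEFAULT_FLUSH_PERIOD - 1)) == 0)
			return ack;
		counter++;
	}
}
```

[With init = PERIOD - 2 the first flush lands on ack #3; with init = PERIOD - 1 it lands on ack #2 as the comment intends.]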

Matt

> 
> /Thomas
> 
> 
> 

Thread overview: 33+ messages
2026-02-26  4:28 [PATCH v4 00/12] Fine grained fault locking, threaded prefetch, storm cache Matthew Brost
2026-02-26  4:28 ` [PATCH v4 01/12] drm/xe: Fine grained page fault locking Matthew Brost
2026-02-26  4:28 ` [PATCH v4 02/12] drm/xe: Allow prefetch-only VM bind IOCTLs to use VM read lock Matthew Brost
2026-02-26  4:28 ` [PATCH v4 03/12] drm/xe: Thread prefetch of SVM ranges Matthew Brost
2026-02-26  4:28 ` [PATCH v4 04/12] drm/xe: Use a single page-fault queue with multiple workers Matthew Brost
2026-05-06 15:46   ` Maciej Patelczyk
2026-05-06 19:42     ` Matthew Brost
2026-05-07 12:41       ` Maciej Patelczyk
2026-02-26  4:28 ` [PATCH v4 05/12] drm/xe: Add num_pf_work modparam Matthew Brost
2026-05-06 15:59   ` Maciej Patelczyk
2026-02-26  4:28 ` [PATCH v4 06/12] drm/xe: Engine class and instance into a u8 Matthew Brost
2026-05-06 16:04   ` Maciej Patelczyk
2026-05-07 16:20     ` Maciej Patelczyk
2026-02-26  4:28 ` [PATCH v4 07/12] drm/xe: Track pagefault worker runtime Matthew Brost
2026-05-07 12:51   ` Maciej Patelczyk
2026-02-26  4:28 ` [PATCH v4 08/12] drm/xe: Chain page faults via queue-resident cache to avoid fault storms Matthew Brost
2026-05-08 12:03   ` Maciej Patelczyk
2026-02-26  4:28 ` [PATCH v4 09/12] drm/xe: Add pagefault chaining stats Matthew Brost
2026-05-07 13:15   ` Maciej Patelczyk
2026-05-07 13:52     ` Francois Dugast
2026-02-26  4:28 ` [PATCH v4 10/12] drm/xe: Add debugfs pagefault_info Matthew Brost
2026-05-07 10:07   ` Maciej Patelczyk
2026-02-26  4:28 ` [PATCH v4 11/12] drm/xe: batch CT pagefault acks with periodic flush Matthew Brost
2026-05-08  9:24   ` Maciej Patelczyk
2026-02-26  4:28 ` [PATCH v4 12/12] drm/xe: Track parallel page fault activity in GT stats Matthew Brost
2026-05-07 13:56   ` Maciej Patelczyk
2026-05-07 14:23     ` Francois Dugast
2026-02-26  4:35 ` ✗ CI.checkpatch: warning for Fine grained fault locking, threaded prefetch, storm cache (rev4) Patchwork
2026-02-26  4:36 ` ✓ CI.KUnit: success " Patchwork
2026-02-26  5:26 ` ✗ Xe.CI.BAT: failure " Patchwork
2026-02-26  8:59 ` ✗ Xe.CI.FULL: " Patchwork
2026-02-26 13:43 ` [PATCH v4 00/12] Fine grained fault locking, threaded prefetch, storm cache Thomas Hellström
2026-02-26 19:36   ` Matthew Brost [this message]
