Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: stuart.summers@intel.com, arvind.yadav@intel.com,
	himal.prasad.ghimiray@intel.com,
	thomas.hellstrom@linux.intel.com, francois.dugast@intel.com
Subject: [PATCH v2 09/12] drm/xe: Add pagefault chaining stats
Date: Wed, 25 Feb 2026 10:47:10 -0800	[thread overview]
Message-ID: <20260225184713.2606772-10-matthew.brost@intel.com> (raw)
In-Reply-To: <20260225184713.2606772-1-matthew.brost@intel.com>

Add GT stats to quantify pagefault chaining behavior during fault storms.

Track the total number of chained faults, faults chained directly from the
IRQ handler, cases where IRQ chaining also drained the queue, chained
faults that had to be requeued due to a range mismatch, and cases where
the last serviced range allowed an immediate ack.
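
For reference, per-GT stats are exposed as one "name: value" line per
counter. A hedged sketch (Python, illustrative only: the sample dump,
counter values, and the exact dump format are assumptions, not output
from real hardware) of tallying the chain-related counters from such a
dump:

```python
# Illustrative only: parse a GT stats dump, assumed to be one
# "name: value" pair per line, and sum the chain-related counters.
SAMPLE = """\
chain_pagefault_count: 128
chain_irq_pagefault_count: 64
chain_drain_irq_pagefault_count: 8
chain_mismatch_pagefault_count: 2
last_pagefault_count: 31
svm_pagefault_count: 400
"""

def parse_stats(dump: str) -> dict:
    """Return a {counter_name: value} dict from a stats dump string."""
    stats = {}
    for line in dump.splitlines():
        name, _, value = line.partition(":")
        if value.strip().isdigit():
            stats[name.strip()] = int(value.strip())
    return stats

stats = parse_stats(SAMPLE)
# Total of all chain_* events seen in the sample dump.
chained_total = sum(v for k, v in stats.items() if k.startswith("chain_"))
print(chained_total)  # -> 202
```

This is only meant to show how the new counters relate; the authoritative
list of names is the DEF_STAT_STR() table below.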

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_stats.c       |  5 +++++
 drivers/gpu/drm/xe/xe_gt_stats_types.h |  5 +++++
 drivers/gpu/drm/xe/xe_pagefault.c      | 18 ++++++++++++++++--
 3 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_stats.c b/drivers/gpu/drm/xe/xe_gt_stats.c
index c1af3ecb429b..cdd467dfb46d 100644
--- a/drivers/gpu/drm/xe/xe_gt_stats.c
+++ b/drivers/gpu/drm/xe/xe_gt_stats.c
@@ -54,6 +54,11 @@ void xe_gt_stats_incr(struct xe_gt *gt, const enum xe_gt_stats_id id, int incr)
 #define DEF_STAT_STR(ID, name) [XE_GT_STATS_ID_##ID] = name
 
 static const char *const stat_description[__XE_GT_STATS_NUM_IDS] = {
+	DEF_STAT_STR(CHAIN_PAGEFAULT_COUNT, "chain_pagefault_count"),
+	DEF_STAT_STR(CHAIN_IRQ_PAGEFAULT_COUNT, "chain_irq_pagefault_count"),
+	DEF_STAT_STR(CHAIN_DRAIN_IRQ_PAGEFAULT_COUNT, "chain_drain_irq_pagefault_count"),
+	DEF_STAT_STR(CHAIN_MISMATCH_PAGEFAULT_COUNT, "chain_mismatch_pagefault_count"),
+	DEF_STAT_STR(LAST_PAGEFAULT_COUNT, "last_pagefault_count"),
 	DEF_STAT_STR(SVM_PAGEFAULT_COUNT, "svm_pagefault_count"),
 	DEF_STAT_STR(TLB_INVAL, "tlb_inval_count"),
 	DEF_STAT_STR(SVM_TLB_INVAL_COUNT, "svm_tlb_inval_count"),
diff --git a/drivers/gpu/drm/xe/xe_gt_stats_types.h b/drivers/gpu/drm/xe/xe_gt_stats_types.h
index 129260bfdfe6..591e614e1cfc 100644
--- a/drivers/gpu/drm/xe/xe_gt_stats_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_stats_types.h
@@ -9,6 +9,11 @@
 #include <linux/types.h>
 
 enum xe_gt_stats_id {
+	XE_GT_STATS_ID_CHAIN_PAGEFAULT_COUNT,
+	XE_GT_STATS_ID_CHAIN_IRQ_PAGEFAULT_COUNT,
+	XE_GT_STATS_ID_CHAIN_DRAIN_IRQ_PAGEFAULT_COUNT,
+	XE_GT_STATS_ID_CHAIN_MISMATCH_PAGEFAULT_COUNT,
+	XE_GT_STATS_ID_LAST_PAGEFAULT_COUNT,
 	XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT,
 	XE_GT_STATS_ID_TLB_INVAL,
 	XE_GT_STATS_ID_SVM_TLB_INVAL_COUNT,
diff --git a/drivers/gpu/drm/xe/xe_pagefault.c b/drivers/gpu/drm/xe/xe_pagefault.c
index 9c14f9505faf..c497dd8d9724 100644
--- a/drivers/gpu/drm/xe/xe_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_pagefault.c
@@ -364,6 +364,7 @@ xe_pagefault_queue_requeue(struct xe_pagefault_queue *pf_queue,
 					    usm.pf_queue);
 	struct xe_pagefault *next = pf->consumer.next, *lpf;
 
+	xe_gt_stats_incr(gt, XE_GT_STATS_ID_CHAIN_MISMATCH_PAGEFAULT_COUNT, 1);
 	xe_assert(xe, pf->consumer.alloc_state ==
 		  XE_PAGEFAULT_ALLOC_STATE_CHAINED);
 
@@ -423,6 +424,10 @@ static bool xe_pagefault_cache_hit(struct xe_pagefault_queue *pf_queue,
 			xe_assert(xe, pf_work->cache.pf->consumer.alloc_state ==
 				  XE_PAGEFAULT_ALLOC_STATE_ACTIVE);
 
+			xe_gt_stats_incr(pf->gt,
+					 XE_GT_STATS_ID_CHAIN_PAGEFAULT_COUNT,
+					 1);
+
 			pf->consumer.alloc_state =
 				XE_PAGEFAULT_ALLOC_STATE_CHAINED;
 			pf->consumer.next = pf_work->cache.pf->consumer.next;
@@ -559,8 +564,10 @@ static void xe_pagefault_queue_work(struct work_struct *w)
 
 		/* Last fault same address, ack immediately */
 		if (xe_pagefault_cache_match(pf, cache_start, cache_end,
-					     cache_asid))
+					     cache_asid)) {
+			xe_gt_stats_incr(gt, XE_GT_STATS_ID_LAST_PAGEFAULT_COUNT, 1);
 			goto ack_fault;
+		}
 
 		err = xe_pagefault_service(pf);
 
@@ -816,8 +823,15 @@ int xe_pagefault_handler(struct xe_device *xe, struct xe_pagefault *pf)
 		lpf->consumer.next = NULL;
 
 		if (xe_pagefault_cache_hit(pf_queue, lpf)) {
-			if (empty)
+			xe_gt_stats_incr(pf->gt,
+					 XE_GT_STATS_ID_CHAIN_IRQ_PAGEFAULT_COUNT,
+					 1);
+			if (empty) {
 				xe_pagefault_queue_advance(pf_queue);
+				xe_gt_stats_incr(pf->gt,
+						 XE_GT_STATS_ID_CHAIN_DRAIN_IRQ_PAGEFAULT_COUNT,
+						 1);
+			}
 		} else {
 			int work_index = xe_pagefault_work_index(xe);
 
-- 
2.34.1



Thread overview: 15+ messages
2026-02-25 18:47 [PATCH v2 00/12] Fine grained fault locking, threaded prefetch, storm cache Matthew Brost
2026-02-25 18:47 ` [PATCH v2 01/12] drm/xe: Fine grained page fault locking Matthew Brost
2026-02-25 18:47 ` [PATCH v2 02/12] drm/xe: Allow prefetch-only VM bind IOCTLs to use VM read lock Matthew Brost
2026-02-25 18:47 ` [PATCH v2 03/12] drm/xe: Thread prefetch of SVM ranges Matthew Brost
2026-02-25 18:47 ` [PATCH v2 04/12] drm/xe: Use a single page-fault queue with multiple workers Matthew Brost
2026-02-25 18:47 ` [PATCH v2 05/12] drm/xe: Add num_pf_work modparam Matthew Brost
2026-02-25 18:47 ` [PATCH v2 06/12] drm/xe: Engine class and instance into a u8 Matthew Brost
2026-02-25 18:47 ` [PATCH v2 07/12] drm/xe: Track pagefault worker runtime Matthew Brost
2026-02-25 18:47 ` [PATCH v2 08/12] drm/xe: Chain page faults via queue-resident cache to avoid fault storms Matthew Brost
2026-02-25 18:47 ` Matthew Brost [this message]
2026-02-25 18:47 ` [PATCH v2 10/12] drm/xe: Add debugfs pagefault_info Matthew Brost
2026-02-25 18:47 ` [PATCH v2 11/12] drm/xe: batch CT pagefault acks with periodic flush Matthew Brost
2026-02-25 18:47 ` [PATCH v2 12/12] drm/xe: Track parallel page fault activity in GT stats Matthew Brost
2026-02-25 19:45 ` ✗ CI.checkpatch: warning for Fine grained fault locking, threaded prefetch, storm cache Patchwork
2026-02-25 19:45 ` ✗ CI.KUnit: failure " Patchwork
