From: Brian Nguyen <brian3.nguyen@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: matthew.brost@intel.com
Subject: [PATCH 2/4] drm/xe: Add explicit abort page reclaim list
Date: Wed, 7 Jan 2026 09:04:50 +0800
Message-ID: <20260107010447.4125005-8-brian3.nguyen@intel.com>
In-Reply-To: <20260107010447.4125005-6-brian3.nguyen@intel.com>

PRLs can be invalidated to indicate they are being dropped from the
current scope while remaining otherwise valid. Standardize the call
sites and add an explicit abort helper to make clear when an
invalidation is a real abort after which the PRL must fall back.

v3:
- Update abort function to macro. (Matthew B)

Signed-off-by: Brian Nguyen <brian3.nguyen@intel.com>
---
drivers/gpu/drm/xe/xe_page_reclaim.h | 19 +++++++++++++++++++
drivers/gpu/drm/xe/xe_pt.c | 21 +++++++++------------
2 files changed, 28 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_page_reclaim.h b/drivers/gpu/drm/xe/xe_page_reclaim.h
index a4f58e0ce9b4..12f861f357d8 100644
--- a/drivers/gpu/drm/xe/xe_page_reclaim.h
+++ b/drivers/gpu/drm/xe/xe_page_reclaim.h
@@ -19,6 +19,7 @@
struct xe_tlb_inval;
struct xe_tlb_inval_fence;
struct xe_tile;
+struct xe_gt;
struct xe_vma;
struct xe_guc_page_reclaim_entry {
@@ -75,6 +76,24 @@ struct drm_suballoc *xe_page_reclaim_create_prl_bo(struct xe_tlb_inval *tlb_inva
struct xe_page_reclaim_list *prl,
struct xe_tlb_inval_fence *fence);
void xe_page_reclaim_list_invalidate(struct xe_page_reclaim_list *prl);
+
+/**
+ * xe_page_reclaim_list_abort() - Invalidate a PRL and log an abort reason
+ * @gt: GT owning the page reclaim request
+ * @prl: Page reclaim list to invalidate
+ * @fmt: format string for the log message with args
+ *
+ * Abort page reclaim process by invalidating PRL and doing any relevant logging.
+ */
+#define xe_page_reclaim_list_abort(gt, prl, fmt, ...) \
+ do { \
+ struct xe_gt *__gt = (gt); \
+ struct xe_page_reclaim_list *__prl = (prl); \
+ \
+ xe_page_reclaim_list_invalidate(__prl); \
+ vm_dbg(&gt_to_xe(__gt)->drm, "PRL aborted: " fmt, ##__VA_ARGS__); \
+ } while (0)
+
void xe_page_reclaim_list_init(struct xe_page_reclaim_list *prl);
int xe_page_reclaim_list_alloc_entries(struct xe_page_reclaim_list *prl);
/**
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 6cd78bb2b652..2752a5a48a97 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -1618,10 +1618,9 @@ static int generate_reclaim_entry(struct xe_tile *tile,
} else if (is_2m_pte(xe_child)) {
reclamation_size = COMPUTE_RECLAIM_ADDRESS_MASK(SZ_2M); /* reclamation_size = 9 */
} else {
- xe_page_reclaim_list_invalidate(prl);
- vm_dbg(&tile_to_xe(tile)->drm,
- "PRL invalidate: unsupported PTE level=%u pte=%#llx\n",
- xe_child->level, pte);
+ xe_page_reclaim_list_abort(tile->primary_gt, prl,
+ "unsupported PTE level=%u pte=%#llx",
+ xe_child->level, pte);
return -EINVAL;
}
@@ -1670,10 +1669,9 @@ static int xe_pt_stage_unbind_entry(struct xe_ptw *parent, pgoff_t offset,
break;
} else {
/* overflow, mark as invalid */
- xe_page_reclaim_list_invalidate(xe_walk->prl);
- vm_dbg(&xe->drm,
- "PRL invalidate: overflow while adding pte=%#llx",
- pte);
+ xe_page_reclaim_list_abort(xe_walk->tile->primary_gt, xe_walk->prl,
+ "overflow while adding pte=%#llx",
+ pte);
break;
}
}
@@ -1682,10 +1680,9 @@ static int xe_pt_stage_unbind_entry(struct xe_ptw *parent, pgoff_t offset,
/* If aborting page walk early, invalidate PRL since PTE may be dropped from this abort */
if (xe_pt_check_kill(addr, next, level - 1, xe_child, action, walk) &&
xe_walk->prl && level > 1 && xe_child->base.children && xe_child->num_live != 0) {
- xe_page_reclaim_list_invalidate(xe_walk->prl);
- vm_dbg(&xe->drm,
- "PRL invalidate: kill at level=%u addr=%#llx next=%#llx num_live=%u\n",
- level, addr, next, xe_child->num_live);
+ xe_page_reclaim_list_abort(xe_walk->tile->primary_gt, xe_walk->prl,
+ "kill at level=%u addr=%#llx next=%#llx num_live=%u",
+ level, addr, next, xe_child->num_live);
}
return 0;
--
2.52.0
Thread overview: 14+ messages
2026-01-07 1:04 [PATCH 0/4] Page-reclaim fixes and PRL stats addition Brian Nguyen
2026-01-07 1:04 ` [PATCH 1/4] drm/xe: Remove debug comment in page reclaim Brian Nguyen
2026-01-07 1:04 ` Brian Nguyen [this message]
2026-01-07 19:57 ` [PATCH 2/4] drm/xe: Add explicit abort page reclaim list Matthew Brost
2026-01-07 1:04 ` [PATCH 3/4] drm/xe: Fix page reclaim entry handling for large pages Brian Nguyen
2026-01-08 16:22 ` Matthew Brost
2026-01-07 1:04 ` [PATCH 4/4] drm/xe: Add page reclamation related stats Brian Nguyen
2026-01-08 16:24 ` Matthew Brost
2026-01-07 1:19 ` ✓ CI.KUnit: success for Page-reclaim fixes and PRL stats addition (rev3) Patchwork
2026-01-07 2:02 ` ✓ Xe.CI.BAT: " Patchwork
2026-01-07 4:45 ` ✗ Xe.CI.Full: failure " Patchwork