From: Arvind Yadav <arvind.yadav@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: matthew.brost@intel.com, himal.prasad.ghimiray@intel.com,
thomas.hellstrom@linux.intel.com, pallavi.mishra@intel.com
Subject: [RFC v3 8/8] drm/xe/bo: Add purgeable shrinker state helpers
Date: Wed, 10 Dec 2025 10:00:52 +0530
Message-ID: <20251210043112.3267620-9-arvind.yadav@intel.com>
In-Reply-To: <20251210043112.3267620-1-arvind.yadav@intel.com>
Encapsulate TTM purgeable flag updates and shrinker page accounting
into helper functions. This prevents desynchronization between the
TTM tt->purgeable flag and the shrinker's page bucket counters.
Without these helpers, direct manipulation of xe_ttm_tt->purgeable
risks forgetting to update the corresponding shrinker counters,
leading to incorrect memory pressure calculations.
Add xe_bo_set_purgeable_shrinker() and xe_bo_clear_purgeable_shrinker(),
which update the TTM flag and transfer pages between the shrinkable and
purgeable buckets in a single step under the BO lock.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 60 ++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_bo.h | 3 ++
drivers/gpu/drm/xe/xe_vm_madvise.c | 13 +++++--
3 files changed, 73 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 31b2fa490440..fa29e4951af5 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -843,6 +843,66 @@ static void xe_bo_set_purged(struct xe_bo *bo)
atomic_set(&bo->madv_purgeable, XE_MADV_PURGEABLE_PURGED);
}
+/**
+ * xe_bo_set_purgeable_shrinker() - Mark BO purgeable and update shrinker
+ * @bo: Buffer object
+ *
+ * Transfers pages from the shrinkable to the purgeable bucket, letting the
+ * shrinker discard them outright instead of swapping. Caller must hold BO lock.
+ */
+void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
+{
+ struct ttm_buffer_object *ttm_bo = &bo->ttm;
+ struct ttm_tt *tt = ttm_bo->ttm;
+ struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
+ struct xe_ttm_tt *xe_tt;
+
+ xe_bo_assert_held(bo);
+
+ if (!tt || !ttm_tt_is_populated(tt))
+ return;
+
+ xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+
+ if (!xe_tt->purgeable) {
+ xe_tt->purgeable = true;
+ /* Transfer pages from shrinkable to purgeable count */
+ xe_shrinker_mod_pages(xe->mem.shrinker,
+ -(long)tt->num_pages,
+ tt->num_pages);
+ }
+}
+
+/**
+ * xe_bo_clear_purgeable_shrinker() - Mark BO non-purgeable and update shrinker
+ * @bo: Buffer object
+ *
+ * Transfers pages from the purgeable back to the shrinkable bucket, so the
+ * shrinker must swap pages out rather than discard them. Caller must hold BO lock.
+ */
+void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo)
+{
+ struct ttm_buffer_object *ttm_bo = &bo->ttm;
+ struct ttm_tt *tt = ttm_bo->ttm;
+ struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
+ struct xe_ttm_tt *xe_tt;
+
+ xe_bo_assert_held(bo);
+
+ if (!tt || !ttm_tt_is_populated(tt))
+ return;
+
+ xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+
+ if (xe_tt->purgeable) {
+ xe_tt->purgeable = false;
+ /* Transfer pages from purgeable to shrinkable count */
+ xe_shrinker_mod_pages(xe->mem.shrinker,
+ tt->num_pages,
+ -(long)tt->num_pages);
+ }
+}
+
/**
* xe_ttm_bo_purge() - Purge buffer object backing store
* @ttm_bo: The TTM buffer object to purge
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index 1090a60f6ef6..9c5ccd03af7b 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -270,6 +270,9 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
return atomic_read(&bo->madv_purgeable) == XE_MADV_PURGEABLE_DONTNEED;
}
+void xe_bo_set_purgeable_shrinker(struct xe_bo *bo);
+void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo);
+
static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
{
if (likely(bo)) {
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 868be570664d..cdbbdfda5928 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -257,12 +257,16 @@ void xe_bo_recheck_purgeable_on_vma_unbind(struct xe_bo *bo)
if (xe_bo_all_vmas_dontneed(bo)) {
/* All VMAs are DONTNEED - mark BO purgeable */
- if (current_state != XE_MADV_PURGEABLE_DONTNEED)
+ if (current_state != XE_MADV_PURGEABLE_DONTNEED) {
atomic_set(&bo->madv_purgeable, XE_MADV_PURGEABLE_DONTNEED);
+ xe_bo_set_purgeable_shrinker(bo);
+ }
} else {
/* At least one VMA is WILLNEED - BO must not be purgeable */
- if (current_state != XE_MADV_PURGEABLE_WILLNEED)
+ if (current_state != XE_MADV_PURGEABLE_WILLNEED) {
atomic_set(&bo->madv_purgeable, XE_MADV_PURGEABLE_WILLNEED);
+ xe_bo_clear_purgeable_shrinker(bo);
+ }
}
}
@@ -307,13 +311,16 @@ static bool xe_vm_madvise_purgeable_bo(struct xe_device *xe, struct xe_vm *vm,
vmas[i]->purgeable_state = XE_MADV_PURGEABLE_WILLNEED;
/* Mark VMA WILLNEED - BO becomes non-purgeable immediately */
atomic_set(&bo->madv_purgeable, XE_MADV_PURGEABLE_WILLNEED);
+ xe_bo_clear_purgeable_shrinker(bo);
break;
case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
vmas[i]->purgeable_state = XE_MADV_PURGEABLE_DONTNEED;
/* Mark BO purgeable only if all VMAs are DONTNEED */
- if (xe_bo_all_vmas_dontneed(bo))
+ if (xe_bo_all_vmas_dontneed(bo)) {
atomic_set(&bo->madv_purgeable, XE_MADV_PURGEABLE_DONTNEED);
+ xe_bo_set_purgeable_shrinker(bo);
+ }
break;
default:
drm_warn(&vm->xe->drm, "Invalid madvice value = %d\n",
--
2.43.0