From: Matthew Brost <matthew.brost@intel.com>
To: Arvind Yadav <arvind.yadav@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
<himal.prasad.ghimiray@intel.com>,
<thomas.hellstrom@linux.intel.com>, <pallavi.mishra@intel.com>
Subject: Re: [PATCH v4 8/8] drm/xe/bo: Add purgeable shrinker state helpers
Date: Tue, 20 Jan 2026 09:58:14 -0800 [thread overview]
Message-ID: <aW/CNu35NYz/2758@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20260120060900.3137984-9-arvind.yadav@intel.com>
On Tue, Jan 20, 2026 at 11:38:54AM +0530, Arvind Yadav wrote:
> Encapsulate TTM purgeable flag updates and shrinker page accounting
> into helper functions. This prevents desynchronization between the
> TTM tt->purgeable flag and the shrinker's page bucket counters.
>
> Without these helpers, direct manipulation of xe_ttm_tt->purgeable
> risks forgetting to update the corresponding shrinker counters,
> leading to incorrect memory pressure calculations.
>
> Add xe_bo_set_purgeable_shrinker() and xe_bo_clear_purgeable_shrinker(),
> which update the TTM flag and transfer pages between the shrinkable and
> purgeable buckets in a single step.
>
> v4:
> - @madv_purgeable atomic_t → u32 change across all relevant patches. (Matt)
>
> Cc: Matthew Brost <matthew.brost@intel.com>
I think this patch is right, but best to double-check with Thomas as he
wrote the shrinker logic and is the expert here.
Matt
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_bo.c | 60 ++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_bo.h | 3 ++
> drivers/gpu/drm/xe/xe_vm_madvise.c | 13 +++++--
> 3 files changed, 73 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index cc547915161d..2b1448ea3aed 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -836,6 +836,66 @@ static int xe_bo_move_notify(struct xe_bo *bo,
> return 0;
> }
>
> +/**
> + * xe_bo_set_purgeable_shrinker() - Mark BO purgeable and update shrinker
> + * @bo: Buffer object
> + *
> + * Transfers pages from the shrinkable to the purgeable bucket. The shrinker
> + * can then discard pages immediately without swapping. Caller must hold the BO lock.
> + */
> +void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
> +{
> + struct ttm_buffer_object *ttm_bo = &bo->ttm;
> + struct ttm_tt *tt = ttm_bo->ttm;
> + struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
> + struct xe_ttm_tt *xe_tt;
> +
> + xe_bo_assert_held(bo);
> +
> + if (!tt || !ttm_tt_is_populated(tt))
> + return;
> +
> + xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
> +
> + if (!xe_tt->purgeable) {
> + xe_tt->purgeable = true;
> + /* Transfer pages from shrinkable to purgeable count */
> + xe_shrinker_mod_pages(xe->mem.shrinker,
> + -(long)tt->num_pages,
> + tt->num_pages);
> + }
> +}
> +
> +/**
> + * xe_bo_clear_purgeable_shrinker() - Mark BO non-purgeable and update shrinker
> + * @bo: Buffer object
> + *
> + * Transfers pages from the purgeable back to the shrinkable bucket. The
> + * shrinker must then swap pages out instead of discarding them. Caller must hold the BO lock.
> + */
> +void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo)
> +{
> + struct ttm_buffer_object *ttm_bo = &bo->ttm;
> + struct ttm_tt *tt = ttm_bo->ttm;
> + struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
> + struct xe_ttm_tt *xe_tt;
> +
> + xe_bo_assert_held(bo);
> +
> + if (!tt || !ttm_tt_is_populated(tt))
> + return;
> +
> + xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
> +
> + if (xe_tt->purgeable) {
> + xe_tt->purgeable = false;
> + /* Transfer pages from purgeable to shrinkable count */
> + xe_shrinker_mod_pages(xe->mem.shrinker,
> + tt->num_pages,
> + -(long)tt->num_pages);
> + }
> +}
> +
> /**
> * xe_ttm_bo_purge() - Purge buffer object backing store
> * @ttm_bo: The TTM buffer object to purge
> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> index 00e93b3065c9..681495e905af 100644
> --- a/drivers/gpu/drm/xe/xe_bo.h
> +++ b/drivers/gpu/drm/xe/xe_bo.h
> @@ -270,6 +270,9 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
> return bo->madv_purgeable == XE_MADV_PURGEABLE_DONTNEED;
> }
>
> +void xe_bo_set_purgeable_shrinker(struct xe_bo *bo);
> +void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo);
> +
> static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
> {
> if (likely(bo)) {
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index 5808fef89777..0fb07a1ed3ae 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -274,12 +274,16 @@ void xe_bo_recheck_purgeable_on_vma_unbind(struct xe_bo *bo)
>
> if (xe_bo_all_vmas_dontneed(bo)) {
> /* All VMAs are DONTNEED - mark BO purgeable */
> - if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
> + if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED) {
> bo->madv_purgeable = XE_MADV_PURGEABLE_DONTNEED;
> + xe_bo_set_purgeable_shrinker(bo);
> + }
> } else {
> /* At least one VMA is WILLNEED - BO must not be purgeable */
> - if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
> + if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED) {
> bo->madv_purgeable = XE_MADV_PURGEABLE_WILLNEED;
> + xe_bo_clear_purgeable_shrinker(bo);
> + }
> }
> }
>
> @@ -325,13 +329,16 @@ static bool xe_vm_madvise_purgeable_bo(struct xe_device *xe, struct xe_vm *vm,
>
> /* Mark VMA WILLNEED - BO becomes non-purgeable immediately */
> bo->madv_purgeable = XE_MADV_PURGEABLE_WILLNEED;
> + xe_bo_clear_purgeable_shrinker(bo);
> break;
> case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
> vmas[i]->purgeable_state = XE_MADV_PURGEABLE_DONTNEED;
>
> /* Mark BO purgeable only if all VMAs are DONTNEED */
> - if (xe_bo_all_vmas_dontneed(bo))
> + if (xe_bo_all_vmas_dontneed(bo)) {
> bo->madv_purgeable = XE_MADV_PURGEABLE_DONTNEED;
> + xe_bo_set_purgeable_shrinker(bo);
> + }
> break;
> default:
> drm_warn(&vm->xe->drm, "Invalid madvise value = %d\n",
> --
> 2.43.0
>