public inbox for intel-xe@lists.freedesktop.org
From: "Yadav, Arvind" <arvind.yadav@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	intel-xe@lists.freedesktop.org
Cc: <matthew.brost@intel.com>, <himal.prasad.ghimiray@intel.com>,
	<pallavi.mishra@intel.com>
Subject: Re: [PATCH v6 10/12] drm/xe/bo: Add purgeable shrinker state helpers
Date: Wed, 18 Mar 2026 17:45:40 +0530	[thread overview]
Message-ID: <f6743382-3132-4904-b876-b695f8bf3ede@intel.com> (raw)
In-Reply-To: <fd01cee5878be605002e2a4f349be9f5b76010a6.camel@linux.intel.com>


On 10-03-2026 15:31, Thomas Hellström wrote:
> On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
>> Encapsulate TTM purgeable flag updates and shrinker page accounting
>> into helper functions. This prevents desynchronization between the
>> TTM tt->purgeable flag and the shrinker's page bucket counters.
>>
>> Without these helpers, direct manipulation of xe_ttm_tt->purgeable
>> risks forgetting to update the corresponding shrinker counters,
>> leading to incorrect memory pressure calculations.
>>
>> Add xe_bo_set_purgeable_shrinker() and
>> xe_bo_clear_purgeable_shrinker()
>> which atomically update both the TTM flag and transfer pages between
>> the shrinkable and purgeable buckets.
>>
>> Update purgeable BO state to PURGED after successful shrinker purge
>> for DONTNEED BOs.
>>
>> v4:
>>    - @madv_purgeable atomic_t → u32 change across all relevant
>>      patches (Matt)
>>
>> v5:
>>    - Update purgeable BO state to PURGED after a successful shrinker
>>      purge for DONTNEED BOs.
>>    - Split ghost BO and zero-refcount handling in xe_bo_shrink()
>> (Thomas)
>>
>> v6:
>>    - Create separate patch for 'Split ghost BO and zero-refcount
>>      handling'. (Thomas)
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_bo.c         | 63 ++++++++++++++++++++++++++++++
>>   drivers/gpu/drm/xe/xe_bo.h         |  2 +
>>   drivers/gpu/drm/xe/xe_vm_madvise.c |  8 +++-
>>   3 files changed, 71 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index 3a4965bdadf2..598d4463baf3 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -863,6 +863,66 @@ void xe_bo_set_purgeable_state(struct xe_bo *bo,
>>   	bo->madv_purgeable = new_state;
>>   }
>>   
>> +/**
>> + * xe_bo_set_purgeable_shrinker() - Mark BO purgeable and update shrinker
>> + * @bo: Buffer object
>> + *
>> + * Transfers pages from shrinkable to purgeable bucket. Shrinker can now
>> + * discard pages immediately without swapping. Caller holds BO lock.
>> + */
>> +void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
>> +{
>> +	struct ttm_buffer_object *ttm_bo = &bo->ttm;
>> +	struct ttm_tt *tt = ttm_bo->ttm;
>> +	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
>> +	struct xe_ttm_tt *xe_tt;
>> +
>> +	xe_bo_assert_held(bo);
>> +
>> +	if (!tt || !ttm_tt_is_populated(tt))
>> +		return;
>> +
>> +	xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
>> +
>> +	if (!xe_tt->purgeable) {
>> +		xe_tt->purgeable = true;
>> +		/* Transfer pages from shrinkable to purgeable count */
>> +		xe_shrinker_mod_pages(xe->mem.shrinker,
>> +				      -(long)tt->num_pages,
>> +				      tt->num_pages);
>> +	}
>> +}
>> +
>> +/**
>> + * xe_bo_clear_purgeable_shrinker() - Mark BO non-purgeable and update shrinker
>> + * @bo: Buffer object
>> + *
>> + * Transfers pages from purgeable to shrinkable bucket. Shrinker must now
>> + * swap pages instead of discarding. Caller holds BO lock.
>> + */
>> +void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo)
>> +{
>> +	struct ttm_buffer_object *ttm_bo = &bo->ttm;
>> +	struct ttm_tt *tt = ttm_bo->ttm;
>> +	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
>> +	struct xe_ttm_tt *xe_tt;
>> +
>> +	xe_bo_assert_held(bo);
>> +
>> +	if (!tt || !ttm_tt_is_populated(tt))
>> +		return;
>> +
>> +	xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
>> +
>> +	if (xe_tt->purgeable) {
>> +		xe_tt->purgeable = false;
>> +		/* Transfer pages from purgeable to shrinkable count */
>> +		xe_shrinker_mod_pages(xe->mem.shrinker,
>> +				      tt->num_pages,
>> +				      -(long)tt->num_pages);
>> +	}
>> +}
>> +
>>   /**
>>    * xe_ttm_bo_purge() - Purge buffer object backing store
>>    * @ttm_bo: The TTM buffer object to purge
>> @@ -1243,6 +1303,9 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
>>   			lret = xe_bo_move_notify(xe_bo, ctx);
>>   		if (!lret)
>>   			lret = xe_bo_shrink_purge(ctx, bo, scanned);
>> +		if (lret > 0 && xe_bo_madv_is_dontneed(xe_bo))
>> +			xe_bo_set_purgeable_state(xe_bo,
>> +						  XE_MADV_PURGEABLE_PURGED);
>>   		goto out_unref;
>>   	}
>>   
>> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
>> index 0d9f25b51eb2..46d1fff10e4f 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.h
>> +++ b/drivers/gpu/drm/xe/xe_bo.h
>> @@ -272,6 +272,8 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
>>   }
>>   
>>   void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
>> +void xe_bo_set_purgeable_shrinker(struct xe_bo *bo);
>> +void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo);
>>   
>>   static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
>>   {
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index 8acc19e25aa5..ab83e94980e4 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -312,12 +312,16 @@ void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
>>   
>>   	if (vma_state == XE_BO_VMAS_STATE_DONTNEED) {
>>   		/* All VMAs are DONTNEED - mark BO purgeable */
>> -		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
>> +		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED) {
>>   			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
>> +			xe_bo_set_purgeable_shrinker(bo);
>> +		}
>>   	} else if (vma_state == XE_BO_VMAS_STATE_WILLNEED) {
>>   		/* At least one VMA is WILLNEED - BO must not be purgeable */
>> -		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
>> +		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED) {
>>   			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
>> +			xe_bo_clear_purgeable_shrinker(bo);
>> +		}
>>   	}
>>   	/* XE_BO_VMAS_STATE_NO_VMAS: Preserve existing BO state */
>>   }
> I think this can be simplified a bit using something like the below
> applied after the above patch: (untested).
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 07acce383cb1..9f0885cd3cfd 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -835,47 +835,14 @@ static int xe_bo_move_notify(struct xe_bo *bo,
>   	return 0;
>   }
>   
> -/**
> - * xe_bo_set_purgeable_state() - Set BO purgeable state with validation
> - * @bo: Buffer object
> - * @new_state: New purgeable state
> - *
> - * Sets the purgeable state with lockdep assertions and validates state
> - * transitions. Once a BO is PURGED, it cannot transition to any other state.
> - * Invalid transitions are caught with xe_assert().
> - */
> -void xe_bo_set_purgeable_state(struct xe_bo *bo,
> -			       enum xe_madv_purgeable_state new_state)
> -{
> -	struct xe_device *xe = xe_bo_device(bo);
> -
> -	xe_bo_assert_held(bo);
> -
> -	/* Validate state is one of the known values */
> -	xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
> -		      new_state == XE_MADV_PURGEABLE_DONTNEED ||
> -		      new_state == XE_MADV_PURGEABLE_PURGED);
> -
> -	/* Once purged, always purged - cannot transition out */
> -	xe_assert(xe, !(bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED &&
> -			new_state != XE_MADV_PURGEABLE_PURGED));
> -
> -	bo->madv_purgeable = new_state;
> -}
> -
> -/**
> - * xe_bo_set_purgeable_shrinker() - Mark BO purgeable and update shrinker
> - * @bo: Buffer object
> - *
> - * Transfers pages from shrinkable to purgeable bucket. Shrinker can now
> - * discard pages immediately without swapping. Caller holds BO lock.
> - */
> -void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
> +static void xe_bo_set_purgeable_shrinker(struct xe_bo *bo, enum xe_madv_purgeable_state new_state)
>   {
>   	struct ttm_buffer_object *ttm_bo = &bo->ttm;
>   	struct ttm_tt *tt = ttm_bo->ttm;
>   	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
>   	struct xe_ttm_tt *xe_tt;
> +	long int tt_pages;
>   
>   	xe_bo_assert_held(bo);
>   
> @@ -883,44 +850,44 @@ void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
>   		return;
>   
>   	xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
> -
> -	if (!xe_tt->purgeable) {
> +	tt_pages = tt->num_pages;
> +	
> +	if (!xe_tt->purgeable && new_state == XE_MADV_PURGEABLE_DONTNEED) {
>   		xe_tt->purgeable = true;
> -		/* Transfer pages from shrinkable to purgeable count */
> -		xe_shrinker_mod_pages(xe->mem.shrinker,
> -				      -(long)tt->num_pages,
> -				      tt->num_pages);
> +		xe_shrinker_mod_pages(xe->mem.shrinker, -tt_pages, tt_pages);
> +	} else if (xe_tt->purgeable && new_state == XE_MADV_PURGEABLE_WILLNEED) {
>   		xe_tt->purgeable = false;
> +		xe_shrinker_mod_pages(xe->mem.shrinker, tt_pages, -tt_pages);
>   	}
>   }
>   
>   /**
> - * xe_bo_clear_purgeable_shrinker() - Mark BO non-purgeable and update shrinker
> + * xe_bo_set_purgeable_state() - Set BO purgeable state with validation
>    * @bo: Buffer object
> + * @new_state: New purgeable state
>    *
> - * Transfers pages from purgeable to shrinkable bucket. Shrinker must now
> - * swap pages instead of discarding. Caller holds BO lock.
> + * Sets the purgeable state with lockdep assertions and validates state
> + * transitions. Once a BO is PURGED, it cannot transition to any other state.
> + * Invalid transitions are caught with xe_assert().
>    */
> -void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo)
> +void xe_bo_set_purgeable_state(struct xe_bo *bo,
> +			       enum xe_madv_purgeable_state new_state)
>   {
> -	struct ttm_buffer_object *ttm_bo = &bo->ttm;
> -	struct ttm_tt *tt = ttm_bo->ttm;
> -	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
> -	struct xe_ttm_tt *xe_tt;
> +	struct xe_device *xe = xe_bo_device(bo);
>   
>   	xe_bo_assert_held(bo);
>   
> -	if (!tt || !ttm_tt_is_populated(tt))
> -		return;
> +	/* Validate state is one of the known values */
> +	xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
> +		      new_state == XE_MADV_PURGEABLE_DONTNEED ||
> +		      new_state == XE_MADV_PURGEABLE_PURGED);
>   
> -	xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
> +	/* Once purged, always purged - cannot transition out */
> +	xe_assert(xe, !(bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED &&
> +			new_state != XE_MADV_PURGEABLE_PURGED));
>   
> -	if (xe_tt->purgeable) {
> -		xe_tt->purgeable = false;
> -		/* Transfer pages from purgeable to shrinkable count */
> -		xe_shrinker_mod_pages(xe->mem.shrinker,
> -				      tt->num_pages,
> -				      -(long)tt->num_pages);
> -	}
> +	bo->madv_purgeable = new_state;
> +	xe_bo_set_purgeable_shrinker(bo, new_state);
>   }
>   
>   /**
> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> index 46d1fff10e4f..0d9f25b51eb2 100644
> --- a/drivers/gpu/drm/xe/xe_bo.h
> +++ b/drivers/gpu/drm/xe/xe_bo.h
> @@ -272,8 +272,6 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
>   }
>   
>   void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
> -void xe_bo_set_purgeable_shrinker(struct xe_bo *bo);
> -void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo);
>   
>   static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
>   {
>

Thanks Thomas, that makes sense. I will combine both shrinker helpers
into one and call it directly from xe_bo_set_purgeable_state(). This
also removes the dual-call pattern from xe_bo_recompute_purgeable_state().

Thanks,
Arvind

