Intel-XE Archive on lore.kernel.org
From: "Summers, Stuart" <stuart.summers@intel.com>
To: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
	"Brost,  Matthew" <matthew.brost@intel.com>
Cc: "stable@vger.kernel.org" <stable@vger.kernel.org>,
	"Alvi, Arselan" <arselan.alvi@intel.com>
Subject: Re: [PATCH v2] drm/xe: Adjust page count tracepoints in shrinker
Date: Wed, 7 Jan 2026 21:14:39 +0000	[thread overview]
Message-ID: <ee81fdabe081a413b10cce2297ea360c46502db0.camel@intel.com> (raw)
In-Reply-To: <20260107205732.2267541-1-matthew.brost@intel.com>

On Wed, 2026-01-07 at 12:57 -0800, Matthew Brost wrote:
> Page accounting can change via the shrinker without calling
> xe_ttm_tt_unpopulate(), which normally updates page count tracepoints
> through update_global_total_pages. Add a call to
> update_global_total_pages when the shrinker successfully shrinks a
> BO.
> 
> v2:
>  - Don't adjust global accounting when pinning (Stuart)
> 
> Cc: stable@vger.kernel.org
> Fixes: ce3d39fae3d3 ("drm/xe/bo: add GPU memory trace points")
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>

Reviewed-by: Stuart Summers <stuart.summers@intel.com>

> ---
>  drivers/gpu/drm/xe/xe_bo.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 8b6474cd3eaf..6ab52fa397e3 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -1054,6 +1054,7 @@ static long xe_bo_shrink_purge(struct ttm_operation_ctx *ctx,
>                                unsigned long *scanned)
>  {
>         struct xe_device *xe = ttm_to_xe_device(bo->bdev);
> +       struct ttm_tt *tt = bo->ttm;
>         long lret;
>  
>         /* Fake move to system, without copying data. */
> @@ -1078,8 +1079,10 @@ static long xe_bo_shrink_purge(struct ttm_operation_ctx *ctx,
>                               .writeback = false,
>                               .allow_move = false});
>  
> -       if (lret > 0)
> +       if (lret > 0) {
>                 xe_ttm_tt_account_subtract(xe, bo->ttm);
> +               update_global_total_pages(bo->bdev, -(long)tt->num_pages);
> +       }
>  
>         return lret;
>  }
> @@ -1165,8 +1168,10 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
>         if (needs_rpm)
>                 xe_pm_runtime_put(xe);
>  
> -       if (lret > 0)
> +       if (lret > 0) {
>                 xe_ttm_tt_account_subtract(xe, tt);
> +               update_global_total_pages(bo->bdev, -(long)tt->num_pages);
> +       }
>  
>  out_unref:
>         xe_bo_put(xe_bo);


Thread overview: 5+ messages
2026-01-07 20:57 [PATCH v2] drm/xe: Adjust page count tracepoints in shrinker Matthew Brost
2026-01-07 21:04 ` ✓ CI.KUnit: success for " Patchwork
2026-01-07 21:14 ` Summers, Stuart [this message]
2026-01-07 21:38 ` ✓ Xe.CI.BAT: " Patchwork
2026-01-08  0:42 ` ✗ Xe.CI.Full: failure " Patchwork
