From: Matthew Brost <matthew.brost@intel.com>
To: Tejas Upadhyay <tejas.upadhyay@intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <matthew.auld@intel.com>,
<thomas.hellstrom@linux.intel.com>,
<himal.prasad.ghimiray@intel.com>
Subject: Re: [RFC PATCH V7 1/9] drm/xe: Link VRAM object with gpu buddy
Date: Wed, 29 Apr 2026 20:50:22 -0700 [thread overview]
Message-ID: <afLRfnHix3GiqD+Y@gsse-cloud1.jf.intel.com> (raw)
In-Reply-To: <20260413131623.2891528-12-tejas.upadhyay@intel.com>
On Mon, Apr 13, 2026 at 06:46:22PM +0530, Tejas Upadhyay wrote:
> Set up linking of the TTM buffer object from the gpu buddy blocks that
> back it. This functionality is critical for supporting the memory page
> offline feature on CRI, where identified faulty pages must be traced
> back to their originating buffer object for safe removal.
>
> V2(MattB): Clear block->private in xe_ttm_vram_mgr_del as well
>
> Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_ttm_vram_mgr.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c b/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c
> index 5fd0d5506a7e..01a9b92772f8 100644
> --- a/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c
> +++ b/drivers/gpu/drm/xe/xe_ttm_vram_mgr.c
> @@ -54,6 +54,7 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man,
> struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man);
> struct xe_ttm_vram_mgr_resource *vres;
> struct gpu_buddy *mm = &mgr->mm;
> + struct gpu_buddy_block *block;
> u64 size, min_page_size;
> unsigned long lpfn;
> int err;
> @@ -138,6 +139,8 @@ static int xe_ttm_vram_mgr_new(struct ttm_resource_manager *man,
> }
>
> mgr->visible_avail -= vres->used_visible_size;
> + list_for_each_entry(block, &vres->blocks, link)
> + block->private = tbo;
> mutex_unlock(&mgr->lock);
>
> if (!(vres->base.placement & TTM_PL_FLAG_CONTIGUOUS) &&
> @@ -176,8 +179,11 @@ static void xe_ttm_vram_mgr_del(struct ttm_resource_manager *man,
> to_xe_ttm_vram_mgr_resource(res);
> struct xe_ttm_vram_mgr *mgr = to_xe_ttm_vram_mgr(man);
> struct gpu_buddy *mm = &mgr->mm;
> + struct gpu_buddy_block *block;
>
> mutex_lock(&mgr->lock);
> + list_for_each_entry(block, &vres->blocks, link)
> + block->private = NULL;
> gpu_buddy_free_list(mm, &vres->blocks, 0);
> mgr->visible_avail += vres->used_visible_size;
> mutex_unlock(&mgr->lock);
> --
> 2.52.0
>
Thread overview: 21+ messages
2026-04-13 13:16 [RFC PATCH V7 0/9] Add memory page offlining support Tejas Upadhyay
2026-04-13 13:16 ` [RFC PATCH V7 1/9] drm/xe: Link VRAM object with gpu buddy Tejas Upadhyay
2026-04-30 3:50 ` Matthew Brost [this message]
2026-04-13 13:16 ` [RFC PATCH V7 2/9] drm/gpu: Add gpu_buddy_addr_to_block helper Tejas Upadhyay
2026-04-13 13:28 ` Matthew Auld
2026-04-13 17:30 ` Matthew Auld
2026-04-14 5:36 ` Upadhyay, Tejas
2026-04-13 13:16 ` [RFC PATCH V7 3/9] drm/xe: Link LRC BO and its execution Queue Tejas Upadhyay
2026-04-13 13:16 ` [RFC PATCH V7 4/9] drm/xe: Extend BO purge to handle vram pages as well Tejas Upadhyay
2026-04-13 13:16 ` [RFC PATCH V7 5/9] drm/xe: Handle physical memory address error Tejas Upadhyay
2026-04-30 11:28 ` Matthew Auld
2026-05-04 10:52 ` Upadhyay, Tejas
2026-04-13 13:16 ` [RFC PATCH V7 6/9] drm/xe/cri: Add debugfs to inject faulty vram address Tejas Upadhyay
2026-04-13 13:16 ` [RFC PATCH V7 7/9] gpu/buddy: Add routine to dump allocated buddy blocks Tejas Upadhyay
2026-04-13 13:16 ` [RFC PATCH V7 8/9] drm/xe/configfs: Add vram bad page reservation policy Tejas Upadhyay
2026-04-13 13:16 ` [RFC PATCH V7 9/9] drm/xe/cri: Add sysfs interface for bad gpu vram pages Tejas Upadhyay
2026-04-13 16:36 ` ✗ CI.checkpatch: warning for Add memory page offlining support (rev7) Patchwork
2026-04-13 16:37 ` ✓ CI.KUnit: success " Patchwork
2026-04-13 17:43 ` ✓ Xe.CI.BAT: " Patchwork
2026-04-13 20:12 ` ✗ Xe.CI.FULL: failure " Patchwork
2026-04-15 15:10 ` [RFC PATCH V7 0/9] Add memory page offlining support Upadhyay, Tejas