From: Matthew Auld <matthew.auld@intel.com>
To: Tejas Upadhyay <tejas.upadhyay@intel.com>,
intel-xe@lists.freedesktop.org
Cc: matthew.brost@intel.com, himal.prasad.ghimiray@intel.com
Subject: Re: [RFC PATCH 2/7] drm/xe/svm: Use xe_vram_addr_to_region
Date: Thu, 12 Feb 2026 17:53:17 +0000 [thread overview]
Message-ID: <1e1f6110-e3b7-42c0-963c-d6ba033f883a@intel.com> (raw)
In-Reply-To: <20260212163439.1514363-11-tejas.upadhyay@intel.com>
On 12/02/2026 16:34, Tejas Upadhyay wrote:
> Replace the direct use of block->private with the helper function
> xe_vram_addr_to_region to get vram region.
>
> Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 9 ++-------
> 1 file changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 213f0334518a..e773456af040 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -762,7 +762,8 @@ static int xe_svm_populate_devmem_pfn(struct drm_pagemap_devmem *devmem_allocati
> int j = 0;
>
> list_for_each_entry(block, blocks, link) {
> - struct xe_vram_region *vr = block->private;
> + u64 block_start = drm_buddy_block_offset(block);
> + struct xe_vram_region *vr = xe_vram_addr_to_region(bo->tile->xe, block_start);
Oh, I think drm_buddy_block_offset() gives the relative offset within the
buddy allocator, which always starts from zero, but here you want the real
address, and that information only comes from the region itself...
I was going to say: since you already have bo->tile above, why not just
pick the VRAM from that tile, i.e. bo->tile->mem.vram? But I think
bo->tile is NULL here too, so the above looks like it will crash?
What about looking at the bo flags or the current ttm placement to figure
out which VRAM this belongs to? There already seems to be
res_to_mem_region(), and you have res above.
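i.e. something like the below, completely untested and from memory, and
assuming res (the ttm resource) is still valid at this point. The region
is per-resource rather than per-block, so the lookup can also move out of
the loop:

```
	struct xe_vram_region *vr = res_to_mem_region(res);

	list_for_each_entry(block, blocks, link) {
		struct drm_buddy *buddy = vram_to_buddy(vr);
		...
	}
```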
> struct drm_buddy *buddy = vram_to_buddy(vr);
> u64 block_pfn = block_offset_to_pfn(devmem_allocation->dpagemap,
> drm_buddy_block_offset(block));
> @@ -1033,9 +1034,7 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
> struct dma_fence *pre_migrate_fence = NULL;
> struct xe_device *xe = vr->xe;
> struct device *dev = xe->drm.dev;
> - struct drm_buddy_block *block;
> struct xe_validation_ctx vctx;
> - struct list_head *blocks;
> struct drm_exec exec;
> struct xe_bo *bo;
> int err = 0, idx;
> @@ -1072,10 +1071,6 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
> &dpagemap_devmem_ops, dpagemap, end - start,
> pre_migrate_fence);
>
> - blocks = &to_xe_ttm_vram_mgr_resource(bo->ttm.resource)->blocks;
> - list_for_each_entry(block, blocks, link)
> - block->private = vr;
> -
> xe_bo_get(bo);
>
> /* Ensure the device has a pm ref while there are device pages active. */
Thread overview: 10+ messages
2026-02-12 16:34 [RFC PATCH 0/7] Add memory page offlining support Tejas Upadhyay
2026-02-12 16:34 ` [RFC PATCH 1/7] drm/xe: Add a helper to get vram region from physical address Tejas Upadhyay
2026-02-12 16:34 ` [RFC PATCH 2/7] drm/xe/svm: Use xe_vram_addr_to_region Tejas Upadhyay
2026-02-12 17:53 ` Matthew Auld [this message]
2026-02-13 6:00 ` Upadhyay, Tejas
2026-02-12 16:34 ` [RFC PATCH 3/7] drm/xe: Implement VRAM object tracking ability using physical address Tejas Upadhyay
2026-02-12 16:34 ` [RFC PATCH 4/7] drm/xe: Handle physical memory address error Tejas Upadhyay
2026-02-12 16:34 ` [RFC PATCH 5/7] drm/xe/cri: Add debugfs to inject faulty vram address Tejas Upadhyay
2026-02-12 16:34 ` [RFC PATCH 6/7] drm/xe: Add routine to dump allocated VRAM blocks Tejas Upadhyay
2026-02-12 16:34 ` [RFC PATCH 7/7] [DO NOT REVIEW]drm/xe/cri: Add sysfs interface for bad gpu vram pages Tejas Upadhyay