Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Tejas Upadhyay <tejas.upadhyay@intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <matthew.auld@intel.com>,
	<himal.prasad.ghimiray@intel.com>
Subject: Re: [RFC PATCH 1/6] drm/xe/svm: Use res_to_mem_region
Date: Mon, 23 Feb 2026 18:16:56 -0800	[thread overview]
Message-ID: <aZ0KGKDe+z86vSML@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20260213092552.1527799-9-tejas.upadhyay@intel.com>

On Fri, Feb 13, 2026 at 02:55:54PM +0530, Tejas Upadhyay wrote:
> Replace the direct use of block->private with the helper function
> res_to_mem_region to get vram region.
> 
> V2(MattA): Use res_to_mem_region
> 
> Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_bo.c  | 2 +-
>  drivers/gpu/drm/xe/xe_bo.h  | 1 +
>  drivers/gpu/drm/xe/xe_svm.c | 8 +-------
>  3 files changed, 3 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index cb8a177ec02b..70aca621c1a1 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -173,7 +173,7 @@ mem_type_to_migrate(struct xe_device *xe, u32 mem_type)
>  	return tile->migrate;
>  }
>  
> -static struct xe_vram_region *res_to_mem_region(struct ttm_resource *res)
> +struct xe_vram_region *res_to_mem_region(struct ttm_resource *res)

We need kernel doc now, and perhaps a better name for a public function too.
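
Something along these lines as a starting point (untested sketch; the
xe_ prefix and the wording are only suggestions, not settled names):

```c
/**
 * xe_res_to_mem_region() - Look up the VRAM region backing a TTM resource
 * @res: the struct ttm_resource, must be placed in a VRAM memory type
 *
 * Return: Pointer to the struct xe_vram_region backing @res.
 */
struct xe_vram_region *xe_res_to_mem_region(struct ttm_resource *res);
```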

>  {
>  	struct xe_device *xe = ttm_to_xe_device(res->bo->bdev);
>  	struct ttm_resource_manager *mgr;
> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> index c914ab719f20..393f1b4faf99 100644
> --- a/drivers/gpu/drm/xe/xe_bo.h
> +++ b/drivers/gpu/drm/xe/xe_bo.h
> @@ -311,6 +311,7 @@ int xe_bo_dumb_create(struct drm_file *file_priv,
>  		      struct drm_mode_create_dumb *args);
>  
>  bool xe_bo_needs_ccs_pages(struct xe_bo *bo);
> +struct xe_vram_region *res_to_mem_region(struct ttm_resource *res);
>  
>  static inline size_t xe_bo_ccs_pages_start(struct xe_bo *bo)
>  {
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 213f0334518a..8015eb6fcbc9 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -762,7 +762,7 @@ static int xe_svm_populate_devmem_pfn(struct drm_pagemap_devmem *devmem_allocati
>  	int j = 0;
>  
>  	list_for_each_entry(block, blocks, link) {
> -		struct xe_vram_region *vr = block->private;
> +		struct xe_vram_region *vr = res_to_mem_region(res);
>  		struct drm_buddy *buddy = vram_to_buddy(vr);

vr and buddy can now be declared outside of the loop.
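
I.e. something like (sketch only, untested — assumes res is already in
scope here as in your hunk above):

```c
	struct xe_vram_region *vr = res_to_mem_region(res);
	struct drm_buddy *buddy = vram_to_buddy(vr);
	struct drm_buddy_block *block;
	int j = 0;

	list_for_each_entry(block, blocks, link) {
		u64 block_pfn = block_offset_to_pfn(devmem_allocation->dpagemap,
						    drm_buddy_block_offset(block));
		...
	}
```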

Matt

>  		u64 block_pfn = block_offset_to_pfn(devmem_allocation->dpagemap,
>  						    drm_buddy_block_offset(block));
> @@ -1033,9 +1033,7 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
>  	struct dma_fence *pre_migrate_fence = NULL;
>  	struct xe_device *xe = vr->xe;
>  	struct device *dev = xe->drm.dev;
> -	struct drm_buddy_block *block;
>  	struct xe_validation_ctx vctx;
> -	struct list_head *blocks;
>  	struct drm_exec exec;
>  	struct xe_bo *bo;
>  	int err = 0, idx;
> @@ -1072,10 +1070,6 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
>  					&dpagemap_devmem_ops, dpagemap, end - start,
>  					pre_migrate_fence);
>  
> -		blocks = &to_xe_ttm_vram_mgr_resource(bo->ttm.resource)->blocks;
> -		list_for_each_entry(block, blocks, link)
> -			block->private = vr;
> -
>  		xe_bo_get(bo);
>  
>  		/* Ensure the device has a pm ref while there are device pages active. */
> -- 
> 2.52.0
> 

Thread overview: 13+ messages
2026-02-13  9:25 [RFC PATCH 0/6] Add memory page offlining support Tejas Upadhyay
2026-02-13  9:25 ` [RFC PATCH 1/6] drm/xe/svm: Use res_to_mem_region Tejas Upadhyay
2026-02-24  2:16   ` Matthew Brost [this message]
2026-02-13  9:25 ` [RFC PATCH 2/6] drm/xe: Implement VRAM object tracking ability using physical address Tejas Upadhyay
2026-02-13  9:25 ` [RFC PATCH 3/6] drm/xe: Handle physical memory address error Tejas Upadhyay
2026-02-13  9:25 ` [RFC PATCH 4/6] [DO NOT REVIEW]drm/xe/cri: Add debugfs to inject faulty vram address Tejas Upadhyay
2026-02-13  9:25 ` [RFC PATCH 5/6] drm/xe: Add routine to dump allocated VRAM blocks Tejas Upadhyay
2026-02-13  9:25 ` [RFC PATCH 6/6] [DO NOT REVIEW]]drm/xe/cri: Add sysfs interface for bad gpu vram pages Tejas Upadhyay
2026-02-18  0:37   ` Rodrigo Vivi
2026-02-20 11:18     ` Aravind Iddamsetty
2026-02-20 14:52       ` Vivi, Rodrigo
2026-02-22  5:32         ` Aravind Iddamsetty
2026-02-23 21:26           ` Rodrigo Vivi
