Intel-XE Archive on lore.kernel.org
From: "Jahagirdar, Akshata" <akshata.jahagirdar@intel.com>
To: Matt Roper <matthew.d.roper@intel.com>, <intel-xe@lists.freedesktop.org>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Subject: Re: [PATCH] drm/xe/migrate: Future-proof compressed PAT check
Date: Fri, 26 Jul 2024 10:26:00 -0700	[thread overview]
Message-ID: <871e6e40-37b0-4ae3-822d-b6aaf6587def@intel.com> (raw)
In-Reply-To: <20240726171757.2728819-2-matthew.d.roper@intel.com>



On 7/26/2024 10:17 AM, Matt Roper wrote:
> Although all current Xe2 platforms support FlatCCS, we probably
> shouldn't assume that will be universally true forever.  In the past
> we've had platforms like PVC that didn't support compression, and the
> same could show up again at some point in the future.  Future-proof the
> migration code by adding an explicit check for FlatCCS support to the
> condition that decides whether to use a compressed PAT index for
> migration.
>
> While we're at it, we can drop the IS_DGFX check since it's redundant
> with the src_is_vram check (only dGPUs have VRAM).
>
> Cc: Akshata Jahagirdar <akshata.jahagirdar@intel.com>
> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
> Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
> ---
>   drivers/gpu/drm/xe/xe_migrate.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index c007f68503d4..6f24aaf58252 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -781,7 +781,8 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
>   	bool copy_ccs = xe_device_has_flat_ccs(xe) &&
>   		xe_bo_needs_ccs_pages(src_bo) && xe_bo_needs_ccs_pages(dst_bo);
>   	bool copy_system_ccs = copy_ccs && (!src_is_vram || !dst_is_vram);
> -	bool use_comp_pat = GRAPHICS_VER(xe) >= 20 && IS_DGFX(xe) && src_is_vram && !dst_is_vram;
> +	bool use_comp_pat = xe_device_has_flat_ccs(xe) &&
> +		GRAPHICS_VER(xe) >= 20 && src_is_vram && !dst_is_vram;
>   
LGTM.

Reviewed-by: Akshata Jahagirdar <akshata.jahagirdar@intel.com>

>   	/* Copying CCS between two different BOs is not supported yet. */
>   	if (XE_WARN_ON(copy_ccs && src_bo != dst_bo))



Thread overview: 12+ messages
2024-07-26 17:17 [PATCH] drm/xe/migrate: Future-proof compressed PAT check Matt Roper
2024-07-26 17:23 ` ✓ CI.Patch_applied: success for " Patchwork
2024-07-26 17:23 ` ✓ CI.checkpatch: " Patchwork
2024-07-26 17:25 ` ✓ CI.KUnit: " Patchwork
2024-07-26 17:26 ` Jahagirdar, Akshata [this message]
2024-07-26 17:40   ` [PATCH] " Lucas De Marchi
2024-07-26 17:39 ` ✓ CI.Build: success for " Patchwork
2024-07-26 17:42 ` ✓ CI.Hooks: " Patchwork
2024-07-26 17:44 ` ✓ CI.checksparse: " Patchwork
2024-07-26 18:04 ` ✓ CI.BAT: " Patchwork
2024-07-27  5:13 ` ✗ CI.FULL: failure " Patchwork
2024-07-29 15:21   ` Matt Roper
