Intel-XE Archive on lore.kernel.org
From: "Summers, Stuart" <stuart.summers@intel.com>
To: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
	"Brost, Matthew" <matthew.brost@intel.com>
Cc: "simon.richter@hogyros.de" <simon.richter@hogyros.de>,
	"Auld, Matthew" <matthew.auld@intel.com>
Subject: Re: [PATCH v5 1/2] drm/xe: Fix build_pt_update_batch_sram for non-4K PAGE_SIZE
Date: Mon, 13 Oct 2025 16:53:56 +0000	[thread overview]
Message-ID: <2e20ff5a11f48968e4affba9c1576d9dc82e227f.camel@intel.com> (raw)
In-Reply-To: <20251013034555.4121168-2-matthew.brost@intel.com>

On Sun, 2025-10-12 at 20:45 -0700, Matthew Brost wrote:
> The build_pt_update_batch_sram function in the Xe migrate layer
> assumes PAGE_SIZE == XE_PAGE_SIZE (4K), which is not a valid
> assumption on non-x86 platforms. This patch updates
> build_pt_update_batch_sram to correctly handle PAGE_SIZE > 4K by
> programming multiple 4K GPU pages per CPU page.
> 
> v5:
>  - Mask off non-address bits during compare
> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>

Reviewed-by: Stuart Summers <stuart.summers@intel.com>

> ---
>  drivers/gpu/drm/xe/xe_migrate.c | 30 ++++++++++++++++++++++--------
>  1 file changed, 22 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index 7345a5b65169..216fc0ec2bb7 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -1781,13 +1781,15 @@ static void build_pt_update_batch_sram(struct xe_migrate *m,
>                                        u32 size)
>  {
>         u16 pat_index = tile_to_xe(m->tile)->pat.idx[XE_CACHE_WB];
> +       u64 gpu_page_size = 0x1ull << xe_pt_shift(0);
>         u32 ptes;
>         int i = 0;
>  
> -       ptes = DIV_ROUND_UP(size, XE_PAGE_SIZE);
> +       ptes = DIV_ROUND_UP(size, gpu_page_size);
>         while (ptes) {
>                 u32 chunk = min(MAX_PTE_PER_SDI, ptes);
>  
> +               chunk = ALIGN_DOWN(chunk, PAGE_SIZE / XE_PAGE_SIZE);
>                 bb->cs[bb->len++] = MI_STORE_DATA_IMM | MI_SDI_NUM_QW(chunk);
>                 bb->cs[bb->len++] = pt_offset;
>                 bb->cs[bb->len++] = 0;
> @@ -1796,18 +1798,30 @@ static void build_pt_update_batch_sram(struct xe_migrate *m,
>                 ptes -= chunk;
>  
>                 while (chunk--) {
> -                       u64 addr = sram_addr[i].addr & PAGE_MASK;
> +                       u64 addr = sram_addr[i].addr & ~(gpu_page_size - 1);
> +                       u64 pte, orig_addr = addr;
>  
>                         xe_tile_assert(m->tile, sram_addr[i].proto ==
>                                        DRM_INTERCONNECT_SYSTEM);
>                         xe_tile_assert(m->tile, addr);
> -                       addr = m->q->vm->pt_ops->pte_encode_addr(m->tile->xe,
> -                                                                addr, pat_index,
> -                                                                0, false, 0);
> -                       bb->cs[bb->len++] = lower_32_bits(addr);
> -                       bb->cs[bb->len++] = upper_32_bits(addr);
>  
> -                       i++;
> +again:
> +                       pte = m->q->vm->pt_ops->pte_encode_addr(m->tile->xe,
> +                                                               addr, pat_index,
> +                                                               0, false, 0);
> +                       bb->cs[bb->len++] = lower_32_bits(pte);
> +                       bb->cs[bb->len++] = upper_32_bits(pte);
> +
> +                       if (gpu_page_size < PAGE_SIZE) {
> +                               addr += XE_PAGE_SIZE;
> +                               if (orig_addr + PAGE_SIZE != addr) {
> +                                       chunk--;
> +                                       goto again;
> +                               }
> +                               i++;
> +                       } else {
> +                               i += gpu_page_size / PAGE_SIZE;
> +                       }
>                 }
>         }
>  }



Thread overview: 17+ messages
2025-10-13  3:45 [PATCH v5 0/2] Different page size handle in migrate layer Matthew Brost
2025-10-13  3:45 ` [PATCH v5 1/2] drm/xe: Fix build_pt_update_batch_sram for non-4K PAGE_SIZE Matthew Brost
2025-10-13 16:38   ` [v5,1/2] " Simon Richter
2025-10-13 16:53   ` Summers, Stuart [this message]
2025-10-13  3:45 ` [PATCH v5 2/2] drm/xe: Enable 2M pages in xe_migrate_vram Matthew Brost
2025-10-13 17:08   ` Summers, Stuart
2025-10-13 17:14     ` Matthew Brost
2025-10-13 17:22       ` Summers, Stuart
2025-10-13 17:34         ` Matthew Brost
2025-10-13 18:01           ` Summers, Stuart
2025-10-14  2:17         ` Simon Richter
2025-10-14  3:08           ` Summers, Stuart
2025-10-13 17:29       ` Simon.Richter
2025-10-13 17:32         ` Matthew Brost
2025-10-13  5:38 ` ✓ CI.KUnit: success for Different page size handle in migrate layer (rev2) Patchwork
2025-10-13  6:23 ` ✓ Xe.CI.BAT: " Patchwork
2025-10-13  6:51 ` ✓ Xe.CI.Full: " Patchwork
