From: Matthew Brost <matthew.brost@intel.com>
To: Matthew Auld <matthew.auld@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH 1/6] drm/xe/migrate: rework size restrictions for sram pte emit
Date: Wed, 15 Oct 2025 17:36:09 -0700 [thread overview]
Message-ID: <aPA9+XZLfGIYyZ+v@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20251015141929.123637-9-matthew.auld@intel.com>
On Wed, Oct 15, 2025 at 03:19:31PM +0100, Matthew Auld wrote:
> We allow the input size to not be aligned to PAGE_SIZE, which leads to
> various bugs in build_pt_update_batch_sram() for PAGE_SIZE > 4K systems.
> For example, if the size is exactly one gpu_page_size then the chunk
> size is rounded down to zero. The simplest fix looks to be forcing
> PAGE_SIZE-aligned inputs.
>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_migrate.c | 13 ++++++++-----
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index 4ca48dd1cfd8..ff8e442bf519 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -1798,6 +1798,8 @@ static void build_pt_update_batch_sram(struct xe_migrate *m,
> u32 ptes;
> int i = 0;
>
> + xe_tile_assert(m->tile, PAGE_ALIGNED(size));
> +
> ptes = DIV_ROUND_UP(size, gpu_page_size);
> while (ptes) {
> u32 chunk = min(MAX_PTE_PER_SDI, ptes);
> @@ -1811,12 +1813,13 @@ static void build_pt_update_batch_sram(struct xe_migrate *m,
> ptes -= chunk;
>
> while (chunk--) {
> - u64 addr = sram_addr[i].addr & ~(gpu_page_size - 1);
> - u64 pte, orig_addr = addr;
> + u64 addr = sram_addr[i].addr;
> + u64 pte;
>
> xe_tile_assert(m->tile, sram_addr[i].proto ==
> DRM_INTERCONNECT_SYSTEM);
> xe_tile_assert(m->tile, addr);
> + xe_tile_assert(m->tile, PAGE_ALIGNED(addr));
>
> again:
> pte = m->q->vm->pt_ops->pte_encode_addr(m->tile->xe,
> @@ -1827,7 +1830,7 @@ static void build_pt_update_batch_sram(struct xe_migrate *m,
>
> if (gpu_page_size < PAGE_SIZE) {
> addr += XE_PAGE_SIZE;
> - if (orig_addr + PAGE_SIZE != addr) {
> + if (!PAGE_ALIGNED(addr)) {
> chunk--;
> goto again;
> }
> @@ -1918,10 +1921,10 @@ static struct dma_fence *xe_migrate_vram(struct xe_migrate *m,
>
> if (use_pde)
> build_pt_update_batch_sram(m, bb, m->large_page_copy_pdes,
> - sram_addr, len + sram_offset, 1);
> + sram_addr, npages << PAGE_SHIFT, 1);
> else
> build_pt_update_batch_sram(m, bb, pt_slot * XE_PAGE_SIZE,
> - sram_addr, len + sram_offset, 0);
> + sram_addr, npages << PAGE_SHIFT, 0);
>
> if (dir == XE_MIGRATE_COPY_TO_VRAM) {
> if (use_pde)
> --
> 2.51.0
>
Thread overview: 20+ messages
2025-10-15 14:19 [PATCH 0/6] Some migration fixes/improvements Matthew Auld
2025-10-15 14:19 ` [PATCH 1/6] drm/xe/migrate: rework size restrictions for sram pte emit Matthew Auld
2025-10-16 0:36 ` Matthew Brost [this message]
2025-10-15 14:19 ` [PATCH 2/6] drm/xe/migrate: fix chunk handling for 2M page emit Matthew Auld
2025-10-16 0:34 ` Matthew Brost
2025-10-15 14:19 ` [PATCH 3/6] drm/xe/migrate: fix batch buffer sizing Matthew Auld
2025-10-16 0:36 ` Matthew Brost
2025-10-15 14:19 ` [PATCH 4/6] drm/xe/migrate: trim " Matthew Auld
2025-10-16 0:38 ` Matthew Brost
2025-10-15 14:19 ` [PATCH 5/6] drm/xe/migrate: support MEM_COPY instruction Matthew Auld
2025-10-16 0:58 ` Matthew Brost
2025-10-16 9:41 ` Matthew Auld
2025-10-16 18:46 ` Matthew Brost
2025-10-16 21:26 ` Matthew Brost
2025-10-17 11:23 ` Matthew Auld
2025-10-15 14:19 ` [PATCH 6/6] drm/xe/migrate: skip bounce buffer path on xe2 Matthew Auld
2025-10-16 21:28 ` Matthew Brost
2025-10-15 22:58 ` ✓ CI.KUnit: success for Some migration fixes/improvements Patchwork
2025-10-15 23:52 ` ✓ Xe.CI.BAT: " Patchwork
2025-10-16 16:08 ` ✓ Xe.CI.Full: " Patchwork