From: Matthew Brost <matthew.brost@intel.com>
To: Akshata Jahagirdar <akshata.jahagirdar@intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <akshatajahagirdar6@gmail.com>
Subject: Re: [PATCH v5 3/8] drm/xe/migrate: Add helper function to program identity map
Date: Tue, 16 Jul 2024 23:03:15 +0000 [thread overview]
Message-ID: <Zpb8MzhcjAC9iXbB@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <810bcf155874734488222a4be7414054d6445b8d.1721170212.git.akshata.jahagirdar@intel.com>
On Tue, Jul 16, 2024 at 10:54:04PM +0000, Akshata Jahagirdar wrote:
> Add a helper function to program the identity map.
>
> v2: Formatting nits
>
> Signed-off-by: Akshata Jahagirdar <akshata.jahagirdar@intel.com>
One nit below, but it can be fixed at merge or in the next rev.
With that:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_migrate.c | 88 ++++++++++++++++++---------------
> 1 file changed, 48 insertions(+), 40 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index 85eec95c9bc2..6b952ed98a51 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -130,6 +130,51 @@ static u64 xe_migrate_vram_ofs(struct xe_device *xe, u64 addr)
> return addr + (256ULL << xe_pt_shift(2));
> }
>
> +static void xe_migrate_program_identity(struct xe_device *xe, struct xe_vm *vm, struct xe_bo *bo,
> + u64 map_ofs, u64 vram_offset, u16 pat_index, u64 pt_ofs)
Maybe a better name for pt_ofs indicating it is used to program the 2M PTEs?
s/pt_ofs/pt_2m_ofs?
Matt
> +{
> + u64 pos, ofs, flags;
> + u64 entry;
> + /* XXX: Unclear if this should be usable_size? */
> + u64 vram_limit = xe->mem.vram.actual_physical_size +
> + xe->mem.vram.dpa_base;
> + u32 level = 2;
> +
> + ofs = map_ofs + XE_PAGE_SIZE * level + vram_offset * 8;
> + flags = vm->pt_ops->pte_encode_addr(xe, 0, pat_index, level,
> + true, 0);
> +
> + xe_assert(xe, IS_ALIGNED(xe->mem.vram.usable_size, SZ_2M));
> +
> + /*
> + * Use 1GB pages when possible, last chunk always use 2M
> + * pages as mixing reserved memory (stolen, WOCPM) with a single
> + * mapping is not allowed on certain platforms.
> + */
> + for (pos = xe->mem.vram.dpa_base; pos < vram_limit;
> + pos += SZ_1G, ofs += 8) {
> + if (pos + SZ_1G >= vram_limit) {
> + entry = vm->pt_ops->pde_encode_bo(bo, pt_ofs,
> + pat_index);
> + xe_map_wr(xe, &bo->vmap, ofs, u64, entry);
> +
> + flags = vm->pt_ops->pte_encode_addr(xe, 0,
> + pat_index,
> + level - 1,
> + true, 0);
> +
> + for (ofs = pt_ofs; pos < vram_limit;
> + pos += SZ_2M, ofs += 8)
> + xe_map_wr(xe, &bo->vmap, ofs, u64, pos | flags);
> + break; /* Ensure pos == vram_limit assert correct */
> + }
> +
> + xe_map_wr(xe, &bo->vmap, ofs, u64, pos | flags);
> + }
> +
> + xe_assert(xe, pos == vram_limit);
> +}
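For anyone following along, the covering logic above can be illustrated with a small userspace sketch (this is not the kernel code; SZ_1G/SZ_2M are redefined locally and count_identity_entries is a made-up name). It walks [base, limit) in 1 GiB strides and, mirroring the `pos + SZ_1G >= vram_limit` test, maps the final chunk — even one that is exactly 1 GiB — with 2 MiB entries:

```c
#include <assert.h>
#include <stdint.h>

#define SZ_2M (2ULL << 20)
#define SZ_1G (1ULL << 30)

/* Count the 1 GiB and 2 MiB entries the identity-map loop would write. */
static void count_identity_entries(uint64_t base, uint64_t limit,
				   unsigned int *n_1g, unsigned int *n_2m)
{
	uint64_t pos;

	*n_1g = 0;
	*n_2m = 0;

	for (pos = base; pos < limit; pos += SZ_1G) {
		if (pos + SZ_1G >= limit) {
			/* Last chunk: 2 MiB entries only. */
			for (; pos < limit; pos += SZ_2M)
				(*n_2m)++;
			break;
		}
		(*n_1g)++;
	}
	assert(pos == limit); /* same invariant as the xe_assert() above */
}
```

E.g. 2.5 GiB of VRAM at base 0 yields two 1 GiB entries plus 256 two-MiB entries for the tail, while an exactly 1 GiB region is covered entirely by 512 two-MiB entries.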
> +
> static int xe_migrate_prepare_vm(struct xe_tile *tile, struct xe_migrate *m,
> struct xe_vm *vm)
> {
> @@ -253,47 +298,10 @@ static int xe_migrate_prepare_vm(struct xe_tile *tile, struct xe_migrate *m,
>
> /* Identity map the entire vram at 256GiB offset */
> if (IS_DGFX(xe)) {
> - u64 pos, ofs, flags;
> - /* XXX: Unclear if this should be usable_size? */
> - u64 vram_limit = xe->mem.vram.actual_physical_size +
> - xe->mem.vram.dpa_base;
> -
> - level = 2;
> - ofs = map_ofs + XE_PAGE_SIZE * level + 256 * 8;
> - flags = vm->pt_ops->pte_encode_addr(xe, 0, pat_index, level,
> - true, 0);
> -
> - xe_assert(xe, IS_ALIGNED(xe->mem.vram.usable_size, SZ_2M));
> -
> - /*
> - * Use 1GB pages when possible, last chunk always use 2M
> - * pages as mixing reserved memory (stolen, WOCPM) with a single
> - * mapping is not allowed on certain platforms.
> - */
> - for (pos = xe->mem.vram.dpa_base; pos < vram_limit;
> - pos += SZ_1G, ofs += 8) {
> - if (pos + SZ_1G >= vram_limit) {
> - u64 pt31_ofs = bo->size - XE_PAGE_SIZE;
> -
> - entry = vm->pt_ops->pde_encode_bo(bo, pt31_ofs,
> - pat_index);
> - xe_map_wr(xe, &bo->vmap, ofs, u64, entry);
> -
> - flags = vm->pt_ops->pte_encode_addr(xe, 0,
> - pat_index,
> - level - 1,
> - true, 0);
> -
> - for (ofs = pt31_ofs; pos < vram_limit;
> - pos += SZ_2M, ofs += 8)
> - xe_map_wr(xe, &bo->vmap, ofs, u64, pos | flags);
> - break; /* Ensure pos == vram_limit assert correct */
> - }
> -
> - xe_map_wr(xe, &bo->vmap, ofs, u64, pos | flags);
> - }
> + u64 pt31_ofs = bo->size - XE_PAGE_SIZE;
>
> - xe_assert(xe, pos == vram_limit);
> + xe_migrate_program_identity(xe, vm, bo, map_ofs, 256, pat_index, pt31_ofs);
> + xe_assert(xe, (xe->mem.vram.actual_physical_size <= SZ_256G));
> }
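The SZ_256G assert above follows from the page-table geometry: the identity map starts at level-2 entry 256 (the `256` vram_offset, matching the `256ULL << xe_pt_shift(2)` base earlier in the file), and a 4 KiB table holds 512 eight-byte entries, each mapping 1 GiB at level 2, leaving room for at most 256 GiB of VRAM. A userspace sketch of that arithmetic (constants mirror the patch, function names are illustrative only):

```c
#include <stdint.h>

#define XE_PAGE_SIZE 4096ULL
#define L2_SHIFT     30 /* a level-2 entry maps 1 GiB */

/* GPU virtual address where the identity map begins. */
static uint64_t identity_base(void)
{
	return 256ULL << L2_SHIFT; /* 256 GiB */
}

/* Largest VRAM size the remaining level-2 slots can identity-map. */
static uint64_t identity_max_vram(void)
{
	uint64_t entries = XE_PAGE_SIZE / sizeof(uint64_t); /* 512 */

	return (entries - 256) << L2_SHIFT; /* 256 GiB */
}
```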
>
> /*
> --
> 2.34.1
>