From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>,
intel-xe@lists.freedesktop.org
Cc: Matt Roper <matthew.d.roper@intel.com>
Subject: Re: [PATCH v7 06/10] drm/xe/xe2: Update chunk size for each iteration of ccs copy
Date: Tue, 12 Dec 2023 13:27:31 +0100 [thread overview]
Message-ID: <b8197de8-3b6a-053c-2a5e-629ef518c056@linux.intel.com> (raw)
In-Reply-To: <20231211134356.1645973-7-himal.prasad.ghimiray@intel.com>
On 12/11/23 14:43, Himal Prasad Ghimiray wrote:
> On the xe2 platform, XY_CTRL_SURF_COPY_BLT can handle a ccs copy for
> at most 1024 main surface pages.
>
> v2:
> - Use better logic to determine chunk size (Matt/Thomas)
>
> Cc: Matt Roper <matthew.d.roper@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_migrate.c | 33 ++++++++++++++++++++++-----------
> 1 file changed, 22 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index 1016e2591737..9698986eab06 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -65,9 +65,15 @@ struct xe_migrate {
> };
>
> #define MAX_PREEMPTDISABLE_TRANSFER SZ_8M /* Around 1ms. */
> +#define MAX_CCS_LIMITED_TRANSFER SZ_4M /* XE_PAGE_SIZE * (FIELD_MAX(XE2_CCS_SIZE_MASK) + 1) */
> +
> +#define MAX_MEM_TRANSFER_PER_PASS(_xe) ((!IS_DGFX(_xe) && GRAPHICS_VER(_xe) >= 20 && \
> + xe_device_has_flat_ccs(_xe)) ? \
> + MAX_CCS_LIMITED_TRANSFER : MAX_PREEMPTDISABLE_TRANSFER)
Nit: perhaps open-code this as a local variable instead of a macro:

max_mem_transfer_per_pass = ...

Either way,

Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
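For reference, the open-coded variant suggested above might look roughly like the sketch below. This is only an illustration of the macro's logic, not driver code: `struct xe_device_stub` and its fields are stand-ins for the real `IS_DGFX()`, `GRAPHICS_VER()` and `xe_device_has_flat_ccs()` queries.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SZ_4M (4ull << 20)
#define SZ_8M (8ull << 20)

/* Stand-in for the driver's device queries (IS_DGFX(),
 * GRAPHICS_VER(), xe_device_has_flat_ccs()). */
struct xe_device_stub {
	bool is_dgfx;
	unsigned int graphics_ver;
	bool has_flat_ccs;
};

/*
 * Open-coded equivalent of MAX_MEM_TRANSFER_PER_PASS(): integrated
 * Xe2+ parts with flat CCS are limited to 4 MiB per pass, since
 * XY_CTRL_SURF_COPY_BLT covers at most 1024 main-surface pages
 * (1024 * 4 KiB = 4 MiB); everything else keeps the 8 MiB
 * preempt-disable limit.
 */
static uint64_t max_mem_transfer_per_pass(const struct xe_device_stub *xe)
{
	if (!xe->is_dgfx && xe->graphics_ver >= 20 && xe->has_flat_ccs)
		return SZ_4M;

	return SZ_8M;
}
```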
> #define NUM_KERNEL_PDE 17
> #define NUM_PT_SLOTS 32
> -#define NUM_PT_PER_BLIT (MAX_PREEMPTDISABLE_TRANSFER / SZ_2M)
> +#define LEVEL0_PAGE_TABLE_ENCODE_SIZE SZ_2M
> +#define NUM_PT_PER_BLIT(_xe) (MAX_MEM_TRANSFER_PER_PASS(_xe) / LEVEL0_PAGE_TABLE_ENCODE_SIZE)
>
> /**
> * xe_tile_migrate_engine() - Get this tile's migrate engine.
> @@ -366,14 +372,14 @@ struct xe_migrate *xe_migrate_init(struct xe_tile *tile)
> return m;
> }
>
> -static u64 xe_migrate_res_sizes(struct xe_res_cursor *cur)
> +static u64 xe_migrate_res_sizes(struct xe_device *xe, struct xe_res_cursor *cur)
> {
> /*
> * For VRAM we use identity mapped pages so we are limited to current
> * cursor size. For system we program the pages ourselves so we have no
> * such limitation.
> */
> - return min_t(u64, MAX_PREEMPTDISABLE_TRANSFER,
> + return min_t(u64, MAX_MEM_TRANSFER_PER_PASS(xe),
> mem_type_is_vram(cur->mem_type) ? cur->size :
> cur->remaining);
> }
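The clamping logic of xe_migrate_res_sizes() can be sketched standalone as below; the helper mirrors the min_t() expression with the device lookup replaced by a plain `max_per_pass` parameter, so the values are illustrative rather than taken from the driver.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SZ_2M (2ull << 20)
#define SZ_4M (4ull << 20)
#define SZ_8M (8ull << 20)

/*
 * Simplified model of xe_migrate_res_sizes(): each pass is clamped to
 * the per-pass transfer limit, and for VRAM additionally to the
 * current identity-mapped cursor chunk; for system memory the pages
 * are programmed by the migrate code itself, so only the total
 * remaining size matters.
 */
static uint64_t res_size(uint64_t max_per_pass, bool is_vram,
			 uint64_t cur_size, uint64_t remaining)
{
	uint64_t limit = is_vram ? cur_size : remaining;

	return max_per_pass < limit ? max_per_pass : limit;
}
```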
> @@ -672,10 +678,12 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
> u32 update_idx;
> u64 ccs_ofs, ccs_size;
> u32 ccs_pt;
> +
> bool usm = xe->info.has_usm;
> + u32 avail_pts = NUM_PT_PER_BLIT(xe);
>
> - src_L0 = xe_migrate_res_sizes(&src_it);
> - dst_L0 = xe_migrate_res_sizes(&dst_it);
> + src_L0 = xe_migrate_res_sizes(xe, &src_it);
> + dst_L0 = xe_migrate_res_sizes(xe, &dst_it);
>
> drm_dbg(&xe->drm, "Pass %u, sizes: %llu & %llu\n",
> pass++, src_L0, dst_L0);
> @@ -684,18 +692,18 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
>
> batch_size += pte_update_size(m, src_is_vram, src, &src_it, &src_L0,
> &src_L0_ofs, &src_L0_pt, 0, 0,
> - NUM_PT_PER_BLIT);
> + avail_pts);
>
> batch_size += pte_update_size(m, dst_is_vram, dst, &dst_it, &src_L0,
> &dst_L0_ofs, &dst_L0_pt, 0,
> - NUM_PT_PER_BLIT, NUM_PT_PER_BLIT);
> + avail_pts, avail_pts);
>
> if (copy_system_ccs) {
> ccs_size = xe_device_ccs_bytes(xe, src_L0);
> batch_size += pte_update_size(m, false, NULL, &ccs_it, &ccs_size,
> &ccs_ofs, &ccs_pt, 0,
> - 2 * NUM_PT_PER_BLIT,
> - NUM_PT_PER_BLIT);
> + 2 * avail_pts,
> + avail_pts);
> }
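The page-table slot bookkeeping implied by the call sites above can be checked with a small sketch. Note the interpretation of the pte_update_size() arguments as slot offsets (src at 0, dst at avail_pts, CCS at 2 * avail_pts, each region spanning avail_pts slots) is an assumption read from the call sites, not stated in the patch.

```c
#include <assert.h>
#include <stdbool.h>

#define SZ_2M (2u << 20)
#define SZ_4M (4u << 20)
#define SZ_8M (8u << 20)
#define NUM_PT_SLOTS 32

/*
 * Assumed slot layout for one copy pass: src PTEs at slot 0, dst PTEs
 * at avail_pts, and an optional CCS region at 2 * avail_pts, with
 * each region spanning avail_pts slots.
 */
static unsigned int pt_slots_needed(unsigned int avail_pts, bool with_ccs)
{
	return (with_ccs ? 3u : 2u) * avail_pts;
}
```

With the new per-pass limits, avail_pts is SZ_4M / SZ_2M = 2 on Xe2 igfx with flat CCS, and SZ_8M / SZ_2M = 4 otherwise, so both layouts fit comfortably within NUM_PT_SLOTS.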
>
> /* Add copy commands size here */
> @@ -922,9 +930,12 @@ struct dma_fence *xe_migrate_clear(struct xe_migrate *m,
> struct xe_sched_job *job;
> struct xe_bb *bb;
> u32 batch_size, update_idx;
> +
> bool usm = xe->info.has_usm;
> + u32 avail_pts = NUM_PT_PER_BLIT(xe);
> +
> + clear_L0 = xe_migrate_res_sizes(xe, &src_it);
>
> - clear_L0 = xe_migrate_res_sizes(&src_it);
> drm_dbg(&xe->drm, "Pass %u, size: %llu\n", pass++, clear_L0);
>
> /* Calculate final sizes and batch size.. */
> @@ -932,7 +943,7 @@ struct dma_fence *xe_migrate_clear(struct xe_migrate *m,
> pte_update_size(m, clear_vram, src, &src_it,
> &clear_L0, &clear_L0_ofs, &clear_L0_pt,
> emit_clear_cmd_len(gt), 0,
> - NUM_PT_PER_BLIT);
> + avail_pts);
> if (xe_device_has_flat_ccs(xe) && clear_vram)
> batch_size += EMIT_COPY_CCS_DW;
>