Intel-XE Archive on lore.kernel.org
From: "Ghimiray, Himal Prasad" <himal.prasad.ghimiray@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	intel-xe@lists.freedesktop.org
Subject: Re: [PATCH v6 5/9] drm/xe/xe2: Update chunk size for each iteration of ccs copy
Date: Fri, 8 Dec 2023 10:35:14 +0530	[thread overview]
Message-ID: <f175d2c8-f671-47b0-8cbe-a6fd6a117c4f@intel.com> (raw)
In-Reply-To: <64600d5a-09e8-e73f-40e7-8ed4486971d2@linux.intel.com>


On 07-12-2023 18:14, Thomas Hellström wrote:
> Hi, Himal,
>
> On 12/7/23 10:19, Himal Prasad Ghimiray wrote:
>> On the xe2 platform, XY_CTRL_SURF_COPY_BLT can handle a ccs copy for a
>> maximum of 1024 main surface pages.
>>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_migrate.c | 34 ++++++++++++++++++++++++++++-----
>>   1 file changed, 29 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
>> index b4dd1b6d78f0..98dca906a023 100644
>> --- a/drivers/gpu/drm/xe/xe_migrate.c
>> +++ b/drivers/gpu/drm/xe/xe_migrate.c
>> @@ -672,11 +672,24 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
>>           u32 update_idx;
>>           u64 ccs_ofs, ccs_size;
>>           u32 ccs_pt;
>> +
>>           bool usm = xe->info.supports_usm;
>> +        u32 avail_pts = NUM_PT_PER_BLIT;
>>             src_L0 = xe_migrate_res_sizes(&src_it);
>>           dst_L0 = xe_migrate_res_sizes(&dst_it);
>>   +        /* In IGFX the XY_CTRL_SURF_COPY_BLT can handle max of 1024
>> +         * pages. Hence limit the processing size to SZ_4M per
>> +         * iteration.
>> +         */
>> +        if (!IS_DGFX(xe) && GRAPHICS_VER(xe) >= 20) {
>> +            src_L0 = min_t(u64, src_L0, SZ_4M);
>> +            dst_L0 = min_t(u64, dst_L0, SZ_4M);
>> +
>> +            avail_pts = SZ_4M / SZ_2M;
>> +        }
>
> Can we limit the size inside xe_migrate_res_sizes() instead?
>
> if (!is_vram)
>     if (assume_compressed)
>             size = min(size, NUM_COMPRESSED_PAGES_PER_CHUNK);
>
Sure, will try that approach.
>
>> +
>>           drm_dbg(&xe->drm, "Pass %u, sizes: %llu & %llu\n",
>>               pass++, src_L0, dst_L0);
>>   @@ -684,18 +697,18 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
>>             batch_size += pte_update_size(m, src_is_vram, src, &src_it, &src_L0,
>>                             &src_L0_ofs, &src_L0_pt, 0, 0,
>> -                          NUM_PT_PER_BLIT);
>> +                          avail_pts);
>>             batch_size += pte_update_size(m, dst_is_vram, dst, &dst_it, &src_L0,
>>                             &dst_L0_ofs, &dst_L0_pt, 0,
>> -                          NUM_PT_PER_BLIT, NUM_PT_PER_BLIT);
>> +                          avail_pts, avail_pts);
>>             if (copy_system_ccs) {
>>               ccs_size = xe_device_ccs_bytes(xe, src_L0);
>>               batch_size += pte_update_size(m, false, NULL, &ccs_it, &ccs_size,
>>                                 &ccs_ofs, &ccs_pt, 0,
>> -                              2 * NUM_PT_PER_BLIT,
>> -                              NUM_PT_PER_BLIT);
>> +                              2 * avail_pts,
>> +                              avail_pts);
>>           }
>>             /* Add copy commands size here */
>> @@ -923,8 +936,19 @@ struct dma_fence *xe_migrate_clear(struct xe_migrate *m,
>>           struct xe_bb *bb;
>>           u32 batch_size, update_idx;
>>           bool usm = xe->info.supports_usm;
>> +        u32 avail_pts = NUM_PT_PER_BLIT;
>>             clear_L0 = xe_migrate_res_sizes(&src_it);
>> +
>> +        /* In IGFX the XY_CTRL_SURF_COPY_BLT can handle max of 1024
>> +         * pages. Hence limit the processing size to SZ_4M per
>> +         * iteration.
>> +         */
>> +        if (!IS_DGFX(xe) && GRAPHICS_VER(xe) >= 20) {
>> +            clear_L0 = min_t(u64, clear_L0, SZ_4M);
>> +            avail_pts = SZ_4M / SZ_2M;
>> +        }
>> +
>>           drm_dbg(&xe->drm, "Pass %u, size: %llu\n", pass++, clear_L0);
>>             /* Calculate final sizes and batch size.. */
>> @@ -932,7 +956,7 @@ struct dma_fence *xe_migrate_clear(struct xe_migrate *m,
>>               pte_update_size(m, clear_vram, src, &src_it,
>>                       &clear_L0, &clear_L0_ofs, &clear_L0_pt,
>>                       emit_clear_cmd_len(gt), 0,
>> -                    NUM_PT_PER_BLIT);
>> +                    avail_pts);
>>           if (xe_device_has_flat_ccs(xe) && clear_vram)
>>               batch_size += EMIT_COPY_CCS_DW;


Thread overview: 26+ messages
2023-12-07  9:19 [Intel-xe] [PATCH v6 0/9] Enable compression handling on LNL Himal Prasad Ghimiray
2023-12-07  9:19 ` [Intel-xe] [PATCH v6 1/9] drm/xe/xe2: Determine bios enablement for flat ccs on igfx Himal Prasad Ghimiray
2023-12-07  9:19 ` [Intel-xe] [PATCH v6 2/9] drm/xe/xe2: Allocate extra pages for ccs during bo create Himal Prasad Ghimiray
2023-12-07  9:19 ` [Intel-xe] [PATCH v6 3/9] drm/xe/xe2: Updates on XY_CTRL_SURF_COPY_BLT Himal Prasad Ghimiray
2023-12-07  9:19 ` [Intel-xe] [PATCH v6 4/9] drm/xe/xe_migrate: Use NULL 1G PTE mapped at 255GiB VA for ccs clear Himal Prasad Ghimiray
2023-12-07  9:19 ` [Intel-xe] [PATCH v6 5/9] drm/xe/xe2: Update chunk size for each iteration of ccs copy Himal Prasad Ghimiray
2023-12-07 12:44   ` Thomas Hellström
2023-12-08  5:05     ` Ghimiray, Himal Prasad [this message]
2023-12-07  9:19 ` [Intel-xe] [PATCH v6 6/9] drm/xe/xe2: Update emit_pte to use compression enabled PAT index Himal Prasad Ghimiray
2023-12-07 12:47   ` Thomas Hellström
2023-12-08  5:06     ` Ghimiray, Himal Prasad
2023-12-07  9:19 ` [Intel-xe] [PATCH v6 7/9] drm/xe/xe2: Handle flat ccs move for igfx Himal Prasad Ghimiray
2023-12-07 12:58   ` Thomas Hellström
2023-12-11  5:15     ` Ghimiray, Himal Prasad
2023-12-07  9:19 ` [Intel-xe] [PATCH v6 8/9] drm/xe/xe2: Modify xe_bo_test for system memory Himal Prasad Ghimiray
2023-12-07 13:00   ` Thomas Hellström
2023-12-07  9:19 ` [Intel-xe] [PATCH v6 9/9] drm/xe/xe2: Support flat ccs Himal Prasad Ghimiray
2023-12-07 13:01   ` Thomas Hellström
2023-12-08  5:08     ` Ghimiray, Himal Prasad
2023-12-07 10:20 ` ✓ CI.Patch_applied: success for Enable compression handling on LNL. (rev7) Patchwork
2023-12-07 10:20 ` ✓ CI.checkpatch: " Patchwork
2023-12-07 10:21 ` ✓ CI.KUnit: " Patchwork
2023-12-07 10:28 ` ✓ CI.Build: " Patchwork
2023-12-07 10:29 ` ✓ CI.Hooks: " Patchwork
2023-12-07 10:30 ` ✓ CI.checksparse: " Patchwork
2023-12-07 11:07 ` ✗ CI.BAT: failure " Patchwork
