Intel-XE Archive on lore.kernel.org
From: "Ghimiray, Himal Prasad" <himal.prasad.ghimiray@intel.com>
To: Matt Roper <matthew.d.roper@intel.com>
Cc: intel-xe@lists.freedesktop.org
Subject: Re: [Intel-xe] [PATCH v4 5/9] drm/xe/xe2: Update chunk size for each iteration of ccs copy
Date: Fri, 8 Dec 2023 09:52:24 +0530	[thread overview]
Message-ID: <988eb0a4-0808-4e3b-812f-9ebd4b413258@intel.com> (raw)
In-Reply-To: <20231207000146.GT1327160@mdroper-desk1.amr.corp.intel.com>


On 07-12-2023 05:31, Matt Roper wrote:
> On Wed, Dec 06, 2023 at 10:01:22AM +0530, Himal Prasad Ghimiray wrote:
>> In xe2 platform XY_CTRL_SURF_COPY_BLT can handle ccs copy for
>> max of 1024 main surface pages.
>>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_migrate.c | 34 ++++++++++++++++++++++++++++-----
>>   1 file changed, 29 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
>> index b4dd1b6d78f0..98dca906a023 100644
>> --- a/drivers/gpu/drm/xe/xe_migrate.c
>> +++ b/drivers/gpu/drm/xe/xe_migrate.c
>> @@ -672,11 +672,24 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
>>   		u32 update_idx;
>>   		u64 ccs_ofs, ccs_size;
>>   		u32 ccs_pt;
>> +
>>   		bool usm = xe->info.supports_usm;
>> +		u32 avail_pts = NUM_PT_PER_BLIT;
>>   
>>   		src_L0 = xe_migrate_res_sizes(&src_it);
>>   		dst_L0 = xe_migrate_res_sizes(&dst_it);
>>   
>> +		/* In IGFX the XY_CTRL_SURF_COPY_BLT can handle max of 1024
>> +		 * pages. Hence limit the processing size to SZ_4M per
>> +		 * iteration.
>> +		 */
>> +		if (!IS_DGFX(xe) && GRAPHICS_VER(xe) >= 20) {
> Where is the igpu limitation coming from?  The change to expressing copy
> size in terms of pages seems to be a general Xe2 IP change that we'd
> expect all future platforms, both igpu and dgpu to follow.

I added the limitation because dgfx can have 64K pages, in which case the
limiting size should be decided by xe_migrate_res_sizes. Will try to make
this change more generic in the next version.

>
>
>> +			src_L0 = min_t(u64, src_L0, SZ_4M);
>> +			dst_L0 = min_t(u64, dst_L0, SZ_4M);
>> +
>> +			avail_pts = SZ_4M / SZ_2M;
> What does the SZ_2M here represent?  It's not super obvious.
>
> Also, even though you have a comment above, it still might be nicer to
> "show the work" for SZ_4M instead of using a magic number.  E.g.,
>
>          XE_PAGE_SIZE * (FIELD_MAX(XE2_CCS_SIZE_MASK) + 1) / ...
>
>
> Matt
>
>> +		}
>> +
>>   		drm_dbg(&xe->drm, "Pass %u, sizes: %llu & %llu\n",
>>   			pass++, src_L0, dst_L0);
>>   
>> @@ -684,18 +697,18 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
>>   
>>   		batch_size += pte_update_size(m, src_is_vram, src, &src_it, &src_L0,
>>   					      &src_L0_ofs, &src_L0_pt, 0, 0,
>> -					      NUM_PT_PER_BLIT);
>> +					      avail_pts);
>>   
>>   		batch_size += pte_update_size(m, dst_is_vram, dst, &dst_it, &src_L0,
>>   					      &dst_L0_ofs, &dst_L0_pt, 0,
>> -					      NUM_PT_PER_BLIT, NUM_PT_PER_BLIT);
>> +					      avail_pts, avail_pts);
>>   
>>   		if (copy_system_ccs) {
>>   			ccs_size = xe_device_ccs_bytes(xe, src_L0);
>>   			batch_size += pte_update_size(m, false, NULL, &ccs_it, &ccs_size,
>>   						      &ccs_ofs, &ccs_pt, 0,
>> -						      2 * NUM_PT_PER_BLIT,
>> -						      NUM_PT_PER_BLIT);
>> +						      2 * avail_pts,
>> +						      avail_pts);
>>   		}
>>   
>>   		/* Add copy commands size here */
>> @@ -923,8 +936,19 @@ struct dma_fence *xe_migrate_clear(struct xe_migrate *m,
>>   		struct xe_bb *bb;
>>   		u32 batch_size, update_idx;
>>   		bool usm = xe->info.supports_usm;
>> +		u32 avail_pts = NUM_PT_PER_BLIT;
>>   
>>   		clear_L0 = xe_migrate_res_sizes(&src_it);
>> +
>> +		/* In IGFX the XY_CTRL_SURF_COPY_BLT can handle max of 1024
>> +		 * pages. Hence limit the processing size to SZ_4M per
>> +		 * iteration.
>> +		 */
>> +		if (!IS_DGFX(xe) && GRAPHICS_VER(xe) >= 20) {
>> +			clear_L0 = min_t(u64, clear_L0, SZ_4M);
>> +			avail_pts = SZ_4M / SZ_2M;
>> +		}
>> +
>>   		drm_dbg(&xe->drm, "Pass %u, size: %llu\n", pass++, clear_L0);
>>   
>>   		/* Calculate final sizes and batch size.. */
>> @@ -932,7 +956,7 @@ struct dma_fence *xe_migrate_clear(struct xe_migrate *m,
>>   			pte_update_size(m, clear_vram, src, &src_it,
>>   					&clear_L0, &clear_L0_ofs, &clear_L0_pt,
>>   					emit_clear_cmd_len(gt), 0,
>> -					NUM_PT_PER_BLIT);
>> +					avail_pts);
>>   		if (xe_device_has_flat_ccs(xe) && clear_vram)
>>   			batch_size += EMIT_COPY_CCS_DW;
>>   
>> -- 
>> 2.25.1
>>


Thread overview: 32+ messages
2023-12-06  4:31 [Intel-xe] [PATCH v4 0/9] Enable compression handling on LNL Himal Prasad Ghimiray
2023-12-06  4:31 ` [Intel-xe] [PATCH v4 1/9] drm/xe/xe2: Determine bios enablement for flat ccs on igfx Himal Prasad Ghimiray
2023-12-06 22:14   ` Matt Roper
2023-12-06  4:31 ` [Intel-xe] [PATCH v4 2/9] drm/xe/xe2: Allocate extra pages for ccs during bo create Himal Prasad Ghimiray
2023-12-06 22:27   ` Matt Roper
2023-12-08  3:59     ` Ghimiray, Himal Prasad
2023-12-06  4:31 ` [Intel-xe] [PATCH v4 3/9] drm/xe/xe2: Updates on XY_CTRL_SURF_COPY_BLT Himal Prasad Ghimiray
2023-12-06 23:22   ` Matt Roper
2023-12-08  4:01     ` Ghimiray, Himal Prasad
2023-12-06  4:31 ` [Intel-xe] [PATCH v4 4/9] drm/xe/xe_migrate: Use NULL 1G PTE mapped at 255GiB VA for ccs clear Himal Prasad Ghimiray
2023-12-06 23:37   ` Matt Roper
2023-12-08  4:10     ` Ghimiray, Himal Prasad
2023-12-06  4:31 ` [Intel-xe] [PATCH v4 5/9] drm/xe/xe2: Update chunk size for each iteration of ccs copy Himal Prasad Ghimiray
2023-12-07  0:01   ` Matt Roper
2023-12-08  4:22     ` Ghimiray, Himal Prasad [this message]
2023-12-06  4:31 ` [Intel-xe] [PATCH v4 6/9] drm/xe/xe2: Update emit_pte to use compression enabled PAT index Himal Prasad Ghimiray
2023-12-07  0:14   ` Matt Roper
2023-12-08  5:01     ` Ghimiray, Himal Prasad
2023-12-06  4:31 ` [Intel-xe] [PATCH v4 7/9] drm/xe/xe2: Handle flat ccs move for igfx Himal Prasad Ghimiray
2023-12-07  0:17   ` Matt Roper
2023-12-08  4:32     ` Ghimiray, Himal Prasad
2023-12-06  4:31 ` [Intel-xe] [PATCH v4 8/9] drm/xe/xe2: Modify xe_bo_test for system memory Himal Prasad Ghimiray
2023-12-07  0:23   ` Matt Roper
2023-12-08  4:35     ` Ghimiray, Himal Prasad
2023-12-06  4:31 ` [Intel-xe] [PATCH v4 9/9] drm/xe/xe2: Support flat ccs Himal Prasad Ghimiray
2023-12-06  8:23 ` [Intel-xe] ✓ CI.Patch_applied: success for Enable compression handling on LNL. (rev5) Patchwork
2023-12-06  8:23 ` [Intel-xe] ✓ CI.checkpatch: " Patchwork
2023-12-06  8:24 ` [Intel-xe] ✓ CI.KUnit: " Patchwork
2023-12-06  8:32 ` [Intel-xe] ✓ CI.Build: " Patchwork
2023-12-06  8:32 ` [Intel-xe] ✓ CI.Hooks: " Patchwork
2023-12-06  8:33 ` [Intel-xe] ✓ CI.checksparse: " Patchwork
2023-12-06  9:07 ` [Intel-xe] ✗ CI.BAT: failure " Patchwork
