Intel-XE Archive on lore.kernel.org
From: Matthew Auld <matthew.auld@intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: intel-xe@lists.freedesktop.org
Subject: Re: [PATCH v2 6/7] drm/xe/migrate: skip bounce buffer path on xe2
Date: Tue, 21 Oct 2025 10:23:22 +0100	[thread overview]
Message-ID: <36b2ca0b-a576-4990-8971-7c43aa4a5296@intel.com> (raw)
In-Reply-To: <aPaE5T23Z1ZG7N/I@lstrano-desk.jf.intel.com>

On 20/10/2025 19:52, Matthew Brost wrote:
> On Mon, Oct 20, 2025 at 01:54:38PM +0100, Matthew Auld wrote:
>> Now that we support MEM_COPY we should be able to use PAGE_COPY
>> mode, falling back to BYTE_COPY mode when we have odd
>> sizing/alignment.
>>
>> v2:
>>   - Use info.has_mem_copy_instr
>>   - Rebase on latest changes.
>>
>> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_migrate.c | 19 ++++++++++++-------
>>   1 file changed, 12 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
>> index 14ade32b8b69..7819a168ed17 100644
>> --- a/drivers/gpu/drm/xe/xe_migrate.c
>> +++ b/drivers/gpu/drm/xe/xe_migrate.c
>> @@ -1938,8 +1938,9 @@ static struct dma_fence *xe_migrate_vram(struct xe_migrate *m,
>>   	unsigned long i, j;
>>   	bool use_pde = xe_migrate_vram_use_pde(sram_addr, len + sram_offset);
>>   
>> -	if (drm_WARN_ON(&xe->drm, (len & XE_CACHELINE_MASK) ||
>> -			(sram_offset | vram_addr) & XE_CACHELINE_MASK))
>> +	if (!xe->info.has_mem_copy_instr &&
>> +	    drm_WARN_ON(&xe->drm,
>> +			(len & XE_CACHELINE_MASK) || (sram_offset | vram_addr) & XE_CACHELINE_MASK))
>>   		return ERR_PTR(-EOPNOTSUPP);
>>   
>>   	xe_assert(xe, npages * PAGE_SIZE <= MAX_PREEMPTDISABLE_TRANSFER);
>> @@ -2158,8 +2159,9 @@ int xe_migrate_access_memory(struct xe_migrate *m, struct xe_bo *bo,
>>   	xe_bo_assert_held(bo);
>>   
>>   	/* Use bounce buffer for small access and unaligned access */
>> -	if (!IS_ALIGNED(len, XE_CACHELINE_BYTES) ||
>> -	    !IS_ALIGNED((unsigned long)buf + offset, XE_CACHELINE_BYTES)) {
>> +	if (!xe->info.has_mem_copy_instr &&
>> +	    (!IS_ALIGNED(len, XE_CACHELINE_BYTES) ||
>> +	     !IS_ALIGNED((unsigned long)buf + offset, XE_CACHELINE_BYTES))) {
>>   		int buf_offset = 0;
>>   		void *bounce;
>>   		int err;
>> @@ -2231,9 +2233,12 @@ int xe_migrate_access_memory(struct xe_migrate *m, struct xe_bo *bo,
>>   		if (current_bytes & ~PAGE_MASK) {
>>   			int pitch = 4;
> 
> Shouldn't the pitch be 1 for info.has_mem_copy_instr, and/or shouldn't
> we use linear copy mode for non-256-byte-aligned copies?

Ah yes, this is indeed wrong. Thanks for catching that.
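
On the pitch point, I guess the clamp would end up looking roughly like
the below, i.e. drop to pitch 1 (linear copy) when the chunk isn't
256-byte aligned, and keep pitch 4 otherwise. Untested userspace sketch
only, with assumed constant values and a made-up helper name, just to
illustrate the intent rather than the final patch:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the kernel constants (assumed values). */
#define XE_CACHELINE_BYTES 64u
#define S16_MAX 0x7fff
#define U16_MAX 0xffff

/*
 * Hypothetical helper: clamp the byte count for one blit chunk.
 *
 * With MEM_COPY available, a chunk that isn't 256-byte aligned would
 * use linear copy mode with pitch 1, so the clamp is U16_MAX * 1;
 * an aligned chunk keeps pitch 4. Without MEM_COPY we keep the
 * existing behaviour: pitch 4, clamped to a cacheline-aligned
 * S16_MAX * pitch.
 */
static int clamp_copy_bytes(int current_bytes, bool has_mem_copy_instr)
{
	if (has_mem_copy_instr) {
		int pitch = (current_bytes % 256) ? 1 : 4;

		if (current_bytes > U16_MAX * pitch)
			current_bytes = U16_MAX * pitch;
	} else {
		int pitch = 4;
		/* round_down(S16_MAX * pitch, XE_CACHELINE_BYTES) */
		int max = (S16_MAX * pitch) & ~(int)(XE_CACHELINE_BYTES - 1);

		if (current_bytes > max)
			current_bytes = max;
	}

	return current_bytes;
}
```

So e.g. an unaligned 70000-byte chunk would be clamped to 65535 with
pitch 1, while the legacy path still clamps to 131008.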

> 
> Matt
> 
>>   
>> -			current_bytes = min_t(int, current_bytes,
>> -					      round_down(S16_MAX * pitch,
>> -							 XE_CACHELINE_BYTES));
>> +			if (xe->info.has_mem_copy_instr)
>> +				current_bytes = min_t(int, current_bytes, U16_MAX * pitch);
>> +			else
>> +				current_bytes =
>> +					min_t(int, current_bytes,
>> +					      round_down(S16_MAX * pitch, XE_CACHELINE_BYTES));
>>   		}
>>   
>>   		__fence = xe_migrate_vram(m, current_bytes,
>> -- 
>> 2.51.0
>>



Thread overview: 15+ messages
2025-10-20 12:54 [PATCH v2 0/7] Some migration fixes/improvements Matthew Auld
2025-10-20 12:54 ` [PATCH v2 1/7] drm/xe/migrate: rework size restrictions for sram pte emit Matthew Auld
2025-10-20 12:54 ` [PATCH v2 2/7] drm/xe/migrate: fix chunk handling for 2M page emit Matthew Auld
2025-10-20 12:54 ` [PATCH v2 3/7] drm/xe/migrate: fix batch buffer sizing Matthew Auld
2025-10-20 12:54 ` [PATCH v2 4/7] drm/xe/migrate: trim " Matthew Auld
2025-10-20 12:54 ` [PATCH v2 5/7] drm/xe/migrate: support MEM_COPY instruction Matthew Auld
2025-10-20 18:41   ` Matthew Brost
2025-10-20 12:54 ` [PATCH v2 6/7] drm/xe/migrate: skip bounce buffer path on xe2 Matthew Auld
2025-10-20 18:52   ` Matthew Brost
2025-10-21  9:23     ` Matthew Auld [this message]
2025-10-20 12:54 ` [PATCH v2 7/7] drm/xe/configfs: add disable_mem_copy knob Matthew Auld
2025-10-20 22:49   ` Matthew Brost
2025-10-21  2:18   ` Lucas De Marchi
2025-10-21  9:06     ` Matthew Auld
2025-10-20 13:04 ` ✗ CI.KUnit: failure for Some migration fixes/improvements (rev2) Patchwork
