From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Francois Dugast <francois.dugast@intel.com>,
igt-dev@lists.freedesktop.org
Subject: Re: [PATCH i-g-t 1/5] lib/intel_blt: Promote blt_bo_copy()
Date: Wed, 19 Mar 2025 13:41:37 +0100 [thread overview]
Message-ID: <b7619b7a4a16255b8261155a2c7efdd89b96f4d8.camel@linux.intel.com> (raw)
In-Reply-To: <20250305090743.16894-2-francois.dugast@intel.com>
On Wed, 2025-03-05 at 10:06 +0100, Francois Dugast wrote:
> This function abstracts a copy with the mem blt. Move it to the library so
> that it can also be used elsewhere without code duplication.
>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
LGTM.
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
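
One small aside, since the point of the promotion is reuse: a future caller
only needs the BO handles and the shared ctx. An untested sketch (assuming
fd, ctx, size, width, height and region are already set up, as in
xe_copy_basic):

```c
/* Hypothetical caller, for illustration only: fd, ctx, size, width,
 * height and region are assumed to come from the surrounding test. */
uint32_t src = xe_bo_create(fd, 0, size, region, 0);
uint32_t dst = xe_bo_create(fd, 0, size, region, 0);

/* One call now replaces the whole open-coded mem-copy sequence. */
blt_bo_copy(fd, src, dst, ctx, size, width, height, region);

gem_close(fd, src);
gem_close(fd, dst);
```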
> ---
>  lib/intel_blt.c             | 53 +++++++++++++++++++++++++++++++++++++
>  lib/intel_blt.h             |  3 +++
>  tests/intel/xe_copy_basic.c | 36 +------------------------
>  3 files changed, 57 insertions(+), 35 deletions(-)
>
> diff --git a/lib/intel_blt.c b/lib/intel_blt.c
> index b2fb3151e..84318a557 100644
> --- a/lib/intel_blt.c
> +++ b/lib/intel_blt.c
> @@ -1903,6 +1903,59 @@ int blt_mem_copy(int fd, const intel_ctx_t *ctx,
>  	return ret;
>  }
>
> +/**
> + * blt_bo_copy:
> + * @fd: drm fd
> + * @src_handle: handle of the source BO
> + * @dst_handle: handle of the destination BO
> + * @ctx: intel_ctx_t context
> + * @size: BO size
> + * @width: width
> + * @height: height
> + * @region: memory region
> + *
> + * Copy BO with mem blit from @src_handle into @dst_handle.
> + */
> +void blt_bo_copy(int fd, uint32_t src_handle, uint32_t dst_handle, const intel_ctx_t *ctx,
> +		 uint32_t size, uint32_t width, uint32_t height, uint32_t region)
> +{
> +	struct blt_mem_data mem = {};
> +	uint64_t bb_size = xe_bb_size(fd, SZ_4K);
> +	uint64_t ahnd = intel_allocator_open_full(fd, ctx->vm, 0, 0,
> +						  INTEL_ALLOCATOR_SIMPLE,
> +						  ALLOC_STRATEGY_LOW_TO_HIGH, 0);
> +	uint8_t src_mocs = intel_get_uc_mocs_index(fd);
> +	uint8_t dst_mocs = src_mocs;
> +	uint32_t bb;
> +	int result;
> +
> +	bb = xe_bo_create(fd, 0, bb_size, region, 0);
> +
> +	blt_mem_init(fd, &mem);
> +	blt_set_mem_object(&mem.src, src_handle, size, 0, width, height,
> +			   region, src_mocs, DEFAULT_PAT_INDEX, M_LINEAR,
> +			   COMPRESSION_DISABLED);
> +	blt_set_mem_object(&mem.dst, dst_handle, size, 0, width, height,
> +			   region, dst_mocs, DEFAULT_PAT_INDEX, M_LINEAR,
> +			   COMPRESSION_DISABLED);
> +	mem.src.ptr = xe_bo_map(fd, src_handle, size);
> +	mem.dst.ptr = xe_bo_map(fd, dst_handle, size);
> +
> +	blt_set_batch(&mem.bb, bb, bb_size, region);
> +	igt_assert(mem.src.width == mem.dst.width);
> +
> +	blt_mem_copy(fd, ctx, NULL, ahnd, &mem);
> +	result = memcmp(mem.src.ptr, mem.dst.ptr, mem.src.size);
> +
> +	intel_allocator_bind(ahnd, 0, 0);
> +	munmap(mem.src.ptr, size);
> +	munmap(mem.dst.ptr, size);
> +	gem_close(fd, bb);
> +	put_ahnd(ahnd);
> +
> +	igt_assert_f(!result, "source and destination differ\n");
> +}
> +
>  static void emit_blt_mem_set(int fd, uint64_t ahnd, const struct blt_mem_data *mem,
>  			     uint8_t fill_data)
>  {
> diff --git a/lib/intel_blt.h b/lib/intel_blt.h
> index 5d6191ac9..4357d70eb 100644
> --- a/lib/intel_blt.h
> +++ b/lib/intel_blt.h
> @@ -271,6 +271,9 @@ int blt_mem_copy(int fd, const intel_ctx_t *ctx,
>  		 uint64_t ahnd,
>  		 const struct blt_mem_data *mem);
>
> +void blt_bo_copy(int fd, uint32_t src_handle, uint32_t dst_handle, const intel_ctx_t *ctx,
> +		 uint32_t size, uint32_t width, uint32_t height, uint32_t region);
> +
>  int blt_mem_set(int fd, const intel_ctx_t *ctx,
>  		const struct intel_execution_engine2 *e, uint64_t ahnd,
>  		const struct blt_mem_data *mem, uint8_t fill_data);
> diff --git a/tests/intel/xe_copy_basic.c b/tests/intel/xe_copy_basic.c
> index a43842e39..458106b0b 100644
> --- a/tests/intel/xe_copy_basic.c
> +++ b/tests/intel/xe_copy_basic.c
> @@ -44,41 +44,7 @@ static void
>  mem_copy(int fd, uint32_t src_handle, uint32_t dst_handle, const intel_ctx_t *ctx,
>  	 uint32_t size, uint32_t width, uint32_t height, uint32_t region)
>  {
> -	struct blt_mem_data mem = {};
> -	uint64_t bb_size = xe_bb_size(fd, SZ_4K);
> -	uint64_t ahnd = intel_allocator_open_full(fd, ctx->vm, 0, 0,
> -						  INTEL_ALLOCATOR_SIMPLE,
> -						  ALLOC_STRATEGY_LOW_TO_HIGH, 0);
> -	uint8_t src_mocs = intel_get_uc_mocs_index(fd);
> -	uint8_t dst_mocs = src_mocs;
> -	uint32_t bb;
> -	int result;
> -
> -	bb = xe_bo_create(fd, 0, bb_size, region, 0);
> -
> -	blt_mem_init(fd, &mem);
> -	blt_set_mem_object(&mem.src, src_handle, size, 0, width, height,
> -			   region, src_mocs, DEFAULT_PAT_INDEX, M_LINEAR,
> -			   COMPRESSION_DISABLED);
> -	blt_set_mem_object(&mem.dst, dst_handle, size, 0, width, height,
> -			   region, dst_mocs, DEFAULT_PAT_INDEX, M_LINEAR,
> -			   COMPRESSION_DISABLED);
> -	mem.src.ptr = xe_bo_map(fd, src_handle, size);
> -	mem.dst.ptr = xe_bo_map(fd, dst_handle, size);
> -
> -	blt_set_batch(&mem.bb, bb, bb_size, region);
> -	igt_assert(mem.src.width == mem.dst.width);
> -
> -	blt_mem_copy(fd, ctx, NULL, ahnd, &mem);
> -	result = memcmp(mem.src.ptr, mem.dst.ptr, mem.src.size);
> -
> -	intel_allocator_bind(ahnd, 0, 0);
> -	munmap(mem.src.ptr, size);
> -	munmap(mem.dst.ptr, size);
> -	gem_close(fd, bb);
> -	put_ahnd(ahnd);
> -
> -	igt_assert_f(!result, "source and destination differ\n");
> +	blt_bo_copy(fd, src_handle, dst_handle, ctx, size, width, height, region);
>  }
>
>  /**