From: "Adrián Larumbe" <adrian.larumbe@collabora.com>
To: igt-dev@lists.freedesktop.org,
Petri Latvala <adrinael@adrinael.net>,
Arkadiusz Hiler <arek@hiler.eu>,
Kamil Konieczny <kamil.konieczny@linux.intel.com>,
Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com>,
Bhanuprakash Modem <bhanuprakash.modem@gmail.com>
Cc: "Boris Brezillon" <boris.brezillon@collabora.com>,
"Steven Price" <steven.price@arm.com>,
"Liviu Dudau" <liviu.dudau@arm.com>,
"Adrián Larumbe" <adrian.larumbe@collabora.com>,
"Daniel Almeida" <daniel.almeida@collabora.com>,
"Janne Grunau" <j@jannau.net>,
"Danilo Krummrich" <dakr@kernel.org>,
kernel@collabora.com
Subject: [PATCH v1 4/4] tests/panthor: Add VM_BIND repeat tests
Date: Fri, 13 Mar 2026 17:58:31 +0000
Message-ID: <20260313175908.1752151-5-adrian.larumbe@collabora.com>
In-Reply-To: <20260313175908.1752151-1-adrian.larumbe@collabora.com>
Add tests exercising the new DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT flag:
basic repeat binds, GPU stores through the aliased pages, partial unmaps
with aligned and unaligned start/size, remaps that intersect an existing
repeat mapping, and repeat binds at high VAs.

The tests live in a separate file for the time being, but should
eventually be incorporated into panthor_vm.c.
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
tests/panthor/meson.build | 1 +
tests/panthor/panthor_vm_repeat.c | 555 ++++++++++++++++++++++++++++++
2 files changed, 556 insertions(+)
create mode 100644 tests/panthor/panthor_vm_repeat.c
diff --git a/tests/panthor/meson.build b/tests/panthor/meson.build
index 42a46e9934a9..fe5220b88430 100644
--- a/tests/panthor/meson.build
+++ b/tests/panthor/meson.build
@@ -3,6 +3,7 @@ panthor_progs = [
'panthor_group',
'panthor_query',
'panthor_vm',
+ 'panthor_vm_repeat',
]
foreach prog : panthor_progs
diff --git a/tests/panthor/panthor_vm_repeat.c b/tests/panthor/panthor_vm_repeat.c
new file mode 100644
index 000000000000..13a0e4f4b356
--- /dev/null
+++ b/tests/panthor/panthor_vm_repeat.c
@@ -0,0 +1,555 @@
+// SPDX-License-Identifier: MIT
+// SPDX-FileCopyrightText: Copyright (C) 2025 Collabora Ltd.
+
+#include "igt.h"
+#include "igt_core.h"
+#include "igt_panthor.h"
+#include "igt_syncobj.h"
+#include "panthor_drm.h"
+
+igt_main
+{
+ int fd;
+
+	igt_fixture {
+ fd = drm_open_driver(DRIVER_PANTHOR);
+ }
+
+ igt_describe("Create and destroy a VM");
+ igt_subtest("vm_create_destroy") {
+ uint32_t vm_id;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert_neq(vm_id, 0);
+
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ uint64_t bo_size = SZ_4K;
+ uint64_t map_size = SZ_4K * 4;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, SZ_2M,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ bo_size);
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_write") {
+ uint32_t vm_id;
+ uint32_t group_handle;
+ struct panthor_bo cmd_buf_bo = { };
+ struct panthor_bo result_bo = { };
+ uint64_t command_stream_gpu_addr;
+ uint32_t command_stream_size;
+ uint64_t result_gpu_addr;
+ uint32_t syncobj_handle;
+ const int INITIAL_VA_CS = 0x1000000;
+ const int INITIAL_VA = 0x2000000;
+ const uint64_t map_size = SZ_4K * 4;
+ const uint64_t repeat_bo_size = SZ_4K;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+
+		igt_panthor_bo_create_mapped(fd, &cmd_buf_bo, SZ_4K, 0, 0);
+ igt_panthor_vm_bind(fd, vm_id, cmd_buf_bo.handle, INITIAL_VA_CS,
+ cmd_buf_bo.size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP, 0);
+ command_stream_gpu_addr = INITIAL_VA_CS;
+
+ /* Create the BO to receive the result of the store. */
+		igt_panthor_bo_create_mapped(fd, &result_bo, repeat_bo_size,
+					     0, 0);
+ /* Also bind the result BO. */
+ igt_panthor_vm_bind_repeat(fd, vm_id, result_bo.handle,
+ INITIAL_VA, map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+ result_gpu_addr = INITIAL_VA;
+
+ command_stream_size =
+ igt_panthor_issue_store_multiple(cmd_buf_bo.map,
+ result_gpu_addr,
+ 0xdeadbeef);
+
+ group_handle = igt_panthor_group_create_simple(fd, vm_id, 0);
+ igt_assert_neq(group_handle, 0);
+ syncobj_handle = syncobj_create(fd, 0);
+
+ igt_panthor_group_submit_simple(fd, group_handle, 0,
+ command_stream_gpu_addr,
+ command_stream_size,
+ syncobj_handle, 0);
+
+		igt_assert(syncobj_wait(fd, &syncobj_handle, 1,
+					INT64_MAX, 0, NULL));
+
+ igt_assert_eq(*(uint32_t *)result_bo.map, 0xdeadbeef);
+
+ syncobj_destroy(fd, syncobj_handle);
+
+ result_gpu_addr = INITIAL_VA + 2 * SZ_4K;
+ command_stream_size =
+ igt_panthor_issue_store_multiple(cmd_buf_bo.map,
+ result_gpu_addr,
+ 0xdeadbaaf);
+ syncobj_handle = syncobj_create(fd, 0);
+ igt_panthor_group_submit_simple(fd, group_handle, 0,
+ command_stream_gpu_addr,
+ command_stream_size,
+ syncobj_handle, 0);
+		igt_assert(syncobj_wait(fd, &syncobj_handle, 1,
+					INT64_MAX, 0, NULL));
+ igt_assert_eq(*(uint32_t *)result_bo.map, 0xdeadbaaf);
+ syncobj_destroy(fd, syncobj_handle);
+
+ igt_panthor_group_destroy(fd, group_handle, 0);
+
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+
+ igt_panthor_free_bo(fd, &cmd_buf_bo);
+ igt_panthor_free_bo(fd, &result_bo);
+ }
+
+ igt_subtest("vm_bind_repeat_partial_unmap_start_size_unaligned") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+ igt_panthor_vm_bind(fd, vm_id, 0, INITIAL_VA, SZ_4K * 2,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP, EINVAL);
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_partial_unmap_start_size_aligned") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+ igt_panthor_vm_bind(fd, vm_id, 0, INITIAL_VA, repeat_bo_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP, 0);
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+	igt_subtest("vm_bind_repeat_partial_unmap_start_size_unaligned"
+		    "_no_gpupage_multiple") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+ igt_panthor_vm_bind(fd, vm_id, 0, INITIAL_VA, 4 * SZ_64K,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP, EINVAL);
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_partial_unmap_below_start") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+ igt_panthor_vm_bind(fd, vm_id, 0, INITIAL_VA - repeat_bo_size,
+ repeat_bo_size * 3,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP, 0);
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_partial_unmap_above_start") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+ igt_panthor_vm_bind(fd, vm_id, 0, INITIAL_VA + repeat_bo_size,
+ repeat_bo_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP, 0);
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_remap") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ struct panthor_bo bo2;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+ igt_panthor_bo_create(fd, &bo2, SZ_8M, 0, 0);
+
+		/* Now attempt normal VM_BINDs that intersect with the previous chunk */
+
+ igt_panthor_vm_bind(fd, vm_id, bo2.handle,
+ INITIAL_VA + repeat_bo_size,
+ repeat_bo_size * 2,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP, 0);
+
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_remap_start_unaligned") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ struct panthor_bo bo2;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+ igt_panthor_bo_create(fd, &bo2, SZ_8M, 0, 0);
+
+		/* Now attempt normal VM_BINDs that intersect with the previous chunk */
+
+ igt_panthor_vm_bind(fd, vm_id, bo2.handle,
+ INITIAL_VA + map_size - SZ_1M, SZ_4M,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP, EINVAL);
+
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_remap_start_size_unaligned") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ struct panthor_bo bo2;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+ igt_panthor_bo_create(fd, &bo2, SZ_8M, 0, 0);
+
+		/* Now attempt normal VM_BINDs that intersect with the previous chunk */
+
+ igt_panthor_vm_bind(fd, vm_id, bo2.handle,
+ INITIAL_VA + map_size - SZ_1M, SZ_1M,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP, EINVAL);
+
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_remap_start_size_aligned") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ struct panthor_bo bo2;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+
+ igt_panthor_bo_create(fd, &bo2, SZ_8M, 0, 0);
+
+		/* Now attempt normal VM_BINDs that intersect with the previous chunk */
+
+ igt_panthor_vm_bind(fd, vm_id, bo2.handle,
+ INITIAL_VA + map_size - SZ_2M, SZ_4M,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP, 0);
+
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_remap_aligned_split_original_va") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ struct panthor_bo bo2;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+
+ igt_panthor_bo_create(fd, &bo2, SZ_8M, 0, 0);
+
+		/* Now attempt normal VM_BINDs that intersect with the previous chunk */
+
+ igt_panthor_vm_bind(fd, vm_id, bo2.handle,
+ INITIAL_VA + repeat_bo_size, repeat_bo_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP, 0);
+
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_remap_start_aligned_size_unaligned") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ struct panthor_bo bo2;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+
+ igt_panthor_bo_create(fd, &bo2, SZ_8M, 0, 0);
+
+		/* Now attempt normal VM_BINDs that intersect with the previous chunk */
+
+ igt_panthor_vm_bind(fd, vm_id, bo2.handle,
+ INITIAL_VA + repeat_bo_size, SZ_1M,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP, EINVAL);
+
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_remap_aligned_intersect_left") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ struct panthor_bo bo2;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+
+ igt_panthor_bo_create(fd, &bo2, SZ_8M, 0, 0);
+
+		/* Now attempt normal VM_BINDs that intersect with the previous chunk */
+
+ igt_panthor_vm_bind(fd, vm_id, bo2.handle,
+ INITIAL_VA - repeat_bo_size,
+ repeat_bo_size * 2,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP, 0);
+
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_remap_size_unaligned_intersect_left") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ struct panthor_bo bo2;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+
+ igt_panthor_bo_create(fd, &bo2, SZ_8M, 0, 0);
+
+		/* Now attempt normal VM_BINDs that intersect with the previous chunk */
+
+ igt_panthor_vm_bind(fd, vm_id, bo2.handle,
+ INITIAL_VA - repeat_bo_size,
+				    repeat_bo_size + SZ_1M,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP, EINVAL);
+
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_remap_start_aligned_intersect_right") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ struct panthor_bo bo2;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+
+ igt_panthor_bo_create(fd, &bo2, SZ_8M, 0, 0);
+
+		/* Now attempt normal VM_BINDs that intersect with the previous chunk */
+
+ igt_panthor_vm_bind(fd, vm_id, bo2.handle,
+ INITIAL_VA + map_size - repeat_bo_size,
+ repeat_bo_size + SZ_4K * 6,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP, 0);
+
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_remap_wrap_around_va") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ struct panthor_bo bo2;
+ const uint64_t repeat_bo_size = SZ_2M;
+ uint64_t map_size = repeat_bo_size * 3;
+ const int INITIAL_VA = SZ_4M;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+
+ igt_panthor_bo_create(fd, &bo2, SZ_8M, 0, 0);
+
+		/* Now attempt normal VM_BINDs that intersect with the previous chunk */
+
+ igt_panthor_vm_bind(fd, vm_id, bo2.handle,
+ INITIAL_VA - repeat_bo_size, SZ_8M,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP, 0);
+
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+ igt_subtest("vm_bind_repeat_high_vas") {
+ uint32_t vm_id;
+ struct panthor_bo bo;
+ struct panthor_bo bo2;
+ const uint64_t repeat_bo_size = SZ_4K;
+ uint64_t map_size = 16 * repeat_bo_size;
+ const uint64_t INITIAL_VA = 0x7fffffff0000;
+
+ igt_panthor_vm_create(fd, &vm_id, 0);
+ igt_assert(vm_id != 0);
+
+ igt_panthor_bo_create(fd, &bo, repeat_bo_size, 0, 0);
+ igt_panthor_vm_bind_repeat(fd, vm_id, bo.handle, INITIAL_VA,
+ map_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
+ DRM_PANTHOR_VM_BIND_OP_MAP_REPEAT, 0,
+ repeat_bo_size);
+
+ igt_panthor_bo_create(fd, &bo2, map_size, 0, 0);
+
+		/* Now attempt normal VM_BINDs that intersect with the previous chunk */
+
+ igt_panthor_vm_bind(fd, vm_id, bo2.handle,
+ INITIAL_VA, repeat_bo_size,
+ DRM_PANTHOR_VM_BIND_OP_TYPE_MAP, 0);
+
+ igt_panthor_vm_destroy(fd, vm_id, 0);
+ }
+
+	igt_fixture {
+ drm_close_driver(fd);
+ }
+}
--
2.53.0