* [igt-dev] [PATCH 0/3] Test GPUVA and NULL VM binds
@ 2023-03-15 20:59 Matthew Brost
2023-03-15 20:59 ` [igt-dev] [PATCH 1/3] xe: Update to latest uAPI Matthew Brost
` (3 more replies)
0 siblings, 4 replies; 5+ messages in thread
From: Matthew Brost @ 2023-03-15 20:59 UTC (permalink / raw)
To: igt-dev
Tests: https://patchwork.freedesktop.org/series/115217/
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Matthew Brost (3):
xe: Update to latest uAPI
xe_exec_basic: Add NULL VM bind section
xe_vm: MMAP style VM binds section
include/drm-uapi/xe_drm.h | 8 +
tests/xe/xe_exec_basic.c | 29 +++-
tests/xe/xe_vm.c | 326 +++++++++++++++++++++++++++++++++-----
3 files changed, 322 insertions(+), 41 deletions(-)
--
2.34.1
* [igt-dev] [PATCH 1/3] xe: Update to latest uAPI
2023-03-15 20:59 [igt-dev] [PATCH 0/3] Test GPUVA and NULL VM binds Matthew Brost
@ 2023-03-15 20:59 ` Matthew Brost
2023-03-15 20:59 ` [igt-dev] [PATCH 2/3] xe_exec_basic: Add NULL VM bind section Matthew Brost
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: Matthew Brost @ 2023-03-15 20:59 UTC (permalink / raw)
To: igt-dev
The uAPI header update is needed to test NULL VM binds.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
include/drm-uapi/xe_drm.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index 593b01ba..4bde1087 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -446,6 +446,14 @@ struct drm_xe_vm_bind_op {
* than deferring the MAP to the page fault handler.
*/
#define XE_VM_BIND_FLAG_IMMEDIATE (0x1 << 18)
+ /*
+ * When the NULL flag is set, the page tables are set up with a special
+ * bit which indicates that writes are dropped and all reads return zero.
+ * The NULL flag is only valid for XE_VM_BIND_OP_MAP operations, the BO
+ * handle MBZ, and the BO offset MBZ. This flag is intended to implement
+ * VK sparse bindings.
+ */
+#define XE_VM_BIND_FLAG_NULL (0x1 << 19)
/** @reserved: Reserved */
__u64 reserved[2];
--
2.34.1
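For orientation, a minimal sketch of how a NULL bind is issued with the new flag, using the IGT helper exercised in patch 2 (fd, vm, bind_engine, addr, bo_size and sync are assumed to be set up as in that test):

	/* NULL bind: the BO handle and BO offset must be zero (MBZ) */
	__xe_vm_bind_assert(fd, vm, bind_engine, 0, 0, addr, bo_size,
			    XE_VM_BIND_OP_MAP | XE_VM_BIND_FLAG_ASYNC |
			    XE_VM_BIND_FLAG_NULL, sync, 1, 0, 0);
	/* GPU writes to [addr, addr + bo_size) are now dropped; reads return zero */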
* [igt-dev] [PATCH 2/3] xe_exec_basic: Add NULL VM bind section
2023-03-15 20:59 [igt-dev] [PATCH 0/3] Test GPUVA and NULL VM binds Matthew Brost
2023-03-15 20:59 ` [igt-dev] [PATCH 1/3] xe: Update to latest uAPI Matthew Brost
@ 2023-03-15 20:59 ` Matthew Brost
2023-03-15 20:59 ` [igt-dev] [PATCH 3/3] xe_vm: MMAP style VM binds section Matthew Brost
2023-03-15 21:33 ` [igt-dev] ✗ Fi.CI.BUILD: failure for Test GPUVA and NULL VM binds Patchwork
3 siblings, 0 replies; 5+ messages in thread
From: Matthew Brost @ 2023-03-15 20:59 UTC (permalink / raw)
To: igt-dev
A NULL VM bind results in writes being dropped and reads returning zero.
Verify that the uAPI for NULL VM binds works as designed.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
tests/xe/xe_exec_basic.c | 29 ++++++++++++++++++++++++-----
1 file changed, 24 insertions(+), 5 deletions(-)
diff --git a/tests/xe/xe_exec_basic.c b/tests/xe/xe_exec_basic.c
index d9af97d8..c5796e45 100644
--- a/tests/xe/xe_exec_basic.c
+++ b/tests/xe/xe_exec_basic.c
@@ -27,6 +27,7 @@
#define BIND_ENGINE (0x1 << 4)
#define DEFER_ALLOC (0x1 << 5)
#define DEFER_BIND (0x1 << 6)
+#define SPARSE (0x1 << 7)
/**
* SUBTEST: once-%s
@@ -89,6 +90,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
.syncs = to_user_pointer(&sync),
};
uint64_t addr[MAX_N_ENGINES];
+ uint64_t sparse_addr[MAX_N_ENGINES];
uint32_t vm[MAX_N_ENGINES];
uint32_t engines[MAX_N_ENGINES];
uint32_t bind_engines[MAX_N_ENGINES];
@@ -112,8 +114,11 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
xe_get_default_alignment(fd));
addr[0] = 0x1a0000;
- for (i = 1; i < MAX_N_ENGINES; ++i)
+ sparse_addr[0] = 0x301a0000;
+ for (i = 1; i < MAX_N_ENGINES; ++i) {
addr[i] = addr[i - 1] + (0x1ull << 32);
+ sparse_addr[i] = sparse_addr[i - 1] + (0x1ull << 32);
+ }
if (flags & USERPTR) {
#define MAP_ADDRESS 0x00007fadeadbe000
@@ -161,6 +166,13 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
xe_vm_bind_userptr_async(fd, vm[i], bind_engines[i],
to_user_pointer(data), addr[i],
bo_size, sync, 1);
+ if (flags & SPARSE)
+ __xe_vm_bind_assert(fd, vm[i], bind_engines[i],
+ 0, 0, sparse_addr[i], bo_size,
+ XE_VM_BIND_OP_MAP |
+ XE_VM_BIND_FLAG_ASYNC |
+ XE_VM_BIND_FLAG_NULL, sync,
+ 1, 0, 0);
}
if (flags & DEFER_BIND)
@@ -171,7 +183,8 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
uint64_t batch_offset = (char *)&data[i].batch - (char *)data;
uint64_t batch_addr = __addr + batch_offset;
uint64_t sdi_offset = (char *)&data[i].data - (char *)data;
- uint64_t sdi_addr = __addr + sdi_offset;
+ uint64_t sdi_addr = (flags & SPARSE ? sparse_addr[i % n_vm] :
+ __addr) + sdi_offset;
int e = i % n_engines;
b = 0;
@@ -254,9 +267,11 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
INT64_MAX, 0, NULL));
}
- for (i = (flags & INVALIDATE && n_execs) ? n_execs - 1 : 0;
- i < n_execs; i++)
- igt_assert_eq(data[i].data, 0xc0ffee);
+ if (!(flags & SPARSE)) {
+ for (i = (flags & INVALIDATE && n_execs) ? n_execs - 1 : 0;
+ i < n_execs; i++)
+ igt_assert_eq(data[i].data, 0xc0ffee);
+ }
syncobj_destroy(fd, sync[0].handle);
for (i = 0; i < n_engines; i++) {
@@ -288,6 +303,10 @@ igt_main
{ "basic-defer-bind", DEFER_ALLOC | DEFER_BIND },
{ "userptr", USERPTR },
{ "rebind", REBIND },
+ { "null", SPARSE },
+ { "null-defer-mmap", SPARSE | DEFER_ALLOC },
+ { "null-defer-bind", SPARSE | DEFER_ALLOC | DEFER_BIND },
+ { "null-rebind", SPARSE | REBIND },
{ "userptr-rebind", USERPTR | REBIND },
{ "userptr-invalidate", USERPTR | INVALIDATE },
{ "userptr-invalidate-race", USERPTR | INVALIDATE | RACE },
--
2.34.1
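The GPU-side effect the new section relies on: the MI_STORE_DWORD_IMM in the batch targets an address inside the NULL mapping, so the write is silently dropped and never lands in backing memory, which is why the final data check is gated on !(flags & SPARSE). A condensed sketch of that batch, with names as used in the test:

	uint64_t sdi_addr = sparse_addr[i % n_vm] + sdi_offset;

	b = 0;
	data[i].batch[b++] = MI_STORE_DWORD_IMM;
	data[i].batch[b++] = sdi_addr;
	data[i].batch[b++] = sdi_addr >> 32;
	data[i].batch[b++] = 0xc0ffee;	/* dropped by the NULL mapping */
	data[i].batch[b++] = MI_BATCH_BUFFER_END;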
* [igt-dev] [PATCH 3/3] xe_vm: MMAP style VM binds section
2023-03-15 20:59 [igt-dev] [PATCH 0/3] Test GPUVA and NULL VM binds Matthew Brost
2023-03-15 20:59 ` [igt-dev] [PATCH 1/3] xe: Update to latest uAPI Matthew Brost
2023-03-15 20:59 ` [igt-dev] [PATCH 2/3] xe_exec_basic: Add NULL VM bind section Matthew Brost
@ 2023-03-15 20:59 ` Matthew Brost
2023-03-15 21:33 ` [igt-dev] ✗ Fi.CI.BUILD: failure for Test GPUVA and NULL VM binds Patchwork
3 siblings, 0 replies; 5+ messages in thread
From: Matthew Brost @ 2023-03-15 20:59 UTC (permalink / raw)
To: igt-dev
GPUVA added support for MMAP style VM binds; let's test it.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
tests/xe/xe_vm.c | 326 +++++++++++++++++++++++++++++++++++++++++------
1 file changed, 290 insertions(+), 36 deletions(-)
diff --git a/tests/xe/xe_vm.c b/tests/xe/xe_vm.c
index c8c3a804..44b9cdd4 100644
--- a/tests/xe/xe_vm.c
+++ b/tests/xe/xe_vm.c
@@ -1203,9 +1203,9 @@ static void *hammer_thread(void *tdata)
return NULL;
}
-#define MUNMAP_FLAG_USERPTR (0x1 << 0)
-#define MUNMAP_FLAG_INVALIDATE (0x1 << 1)
-#define MUNMAP_FLAG_HAMMER_FIRST_PAGE (0x1 << 2)
+#define MAP_FLAG_USERPTR (0x1 << 0)
+#define MAP_FLAG_INVALIDATE (0x1 << 1)
+#define MAP_FLAG_HAMMER_FIRST_PAGE (0x1 << 2)
/**
@@ -1297,7 +1297,7 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
bo_size = page_size * bo_n_pages;
- if (flags & MUNMAP_FLAG_USERPTR) {
+ if (flags & MAP_FLAG_USERPTR) {
map = mmap(from_user_pointer(addr), bo_size, PROT_READ |
PROT_WRITE, MAP_SHARED | MAP_FIXED |
MAP_ANONYMOUS, -1, 0);
@@ -1316,7 +1316,7 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
/* Do initial binds */
bind_size = (page_size * bo_n_pages) / n_binds;
for (i = 0; i < n_binds; ++i) {
- if (flags & MUNMAP_FLAG_USERPTR)
+ if (flags & MAP_FLAG_USERPTR)
xe_vm_bind_userptr_async(fd, vm, 0, addr, addr,
bind_size, sync, 1);
else
@@ -1331,7 +1331,7 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
* cause a fault if a rebind occurs during munmap style VM unbind
* (partial VMAs unbound).
*/
- if (flags & MUNMAP_FLAG_HAMMER_FIRST_PAGE) {
+ if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
t.fd = fd;
t.vm = vm;
#define PAGE_SIZE 4096
@@ -1390,7 +1390,7 @@ test_munmap_style_unbind(int fd, struct drm_xe_engine_class_instance *eci,
data = map + i * page_size;
igt_assert_eq(data->data, 0xc0ffee);
}
- if (flags & MUNMAP_FLAG_HAMMER_FIRST_PAGE) {
+ if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
memset(map, 0, PAGE_SIZE / 2);
memset(map + PAGE_SIZE, 0, bo_size - PAGE_SIZE);
} else {
@@ -1440,7 +1440,7 @@ try_again_after_invalidate:
igt_assert_eq(data->data, 0xc0ffee);
}
}
- if (flags & MUNMAP_FLAG_HAMMER_FIRST_PAGE) {
+ if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
memset(map, 0, PAGE_SIZE / 2);
memset(map + PAGE_SIZE, 0, bo_size - PAGE_SIZE);
} else {
@@ -1451,7 +1451,7 @@ try_again_after_invalidate:
* The munmap style VM unbind can create new VMAs, make sure those are
* in the bookkeeping for another rebind after a userptr invalidate.
*/
- if (flags & MUNMAP_FLAG_INVALIDATE && !invalidate++) {
+ if (flags & MAP_FLAG_INVALIDATE && !invalidate++) {
map = mmap(from_user_pointer(addr), bo_size, PROT_READ |
PROT_WRITE, MAP_SHARED | MAP_FIXED |
MAP_ANONYMOUS, -1, 0);
@@ -1462,7 +1462,7 @@ try_again_after_invalidate:
/* Confirm unbound region can be rebound */
syncobj_reset(fd, &sync[0].handle, 1);
sync[0].flags |= DRM_XE_SYNC_SIGNAL;
- if (flags & MUNMAP_FLAG_USERPTR)
+ if (flags & MAP_FLAG_USERPTR)
xe_vm_bind_userptr_async(fd, vm, 0,
addr + unbind_n_page_offfset * page_size,
addr + unbind_n_page_offfset * page_size,
@@ -1510,7 +1510,7 @@ try_again_after_invalidate:
igt_assert_eq(data->data, 0xc0ffee);
}
- if (flags & MUNMAP_FLAG_HAMMER_FIRST_PAGE) {
+ if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
exit = 1;
pthread_join(t.thread, NULL);
pthread_barrier_destroy(&barrier);
@@ -1525,6 +1525,227 @@ try_again_after_invalidate:
xe_vm_destroy(fd, vm);
}
+static void
+test_mmap_style_bind(int fd, struct drm_xe_engine_class_instance *eci,
+ int bo_n_pages, int n_binds, int unbind_n_page_offfset,
+ int unbind_n_pages, unsigned int flags)
+{
+ struct drm_xe_sync sync[2] = {
+ { .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+ { .flags = DRM_XE_SYNC_SYNCOBJ | DRM_XE_SYNC_SIGNAL, },
+ };
+ struct drm_xe_exec exec = {
+ .num_batch_buffer = 1,
+ .num_syncs = 2,
+ .syncs = to_user_pointer(&sync),
+ };
+ uint64_t addr = 0x1a0000, base_addr = 0x1a0000;
+ uint32_t vm;
+ uint32_t engine;
+ size_t bo_size;
+ uint32_t bo0 = 0, bo1 = 0;
+ uint64_t bind_size;
+ uint64_t page_size = xe_get_default_alignment(fd);
+ struct {
+ uint32_t batch[16];
+ uint64_t pad;
+ uint32_t data;
+ } *data;
+ void *map0, *map1;
+ int i, b;
+ struct thread_data t;
+ pthread_barrier_t barrier;
+ int exit = 0;
+
+ vm = xe_vm_create(fd, DRM_XE_VM_CREATE_ASYNC_BIND_OPS, 0);
+ bo_size = page_size * bo_n_pages;
+
+ if (flags & MAP_FLAG_USERPTR) {
+ map0 = mmap((void *)addr, bo_size, PROT_READ |
+ PROT_WRITE, MAP_SHARED | MAP_FIXED |
+ MAP_ANONYMOUS, -1, 0);
+ map1 = mmap((void *)(addr + bo_size), bo_size, PROT_READ |
+ PROT_WRITE, MAP_SHARED | MAP_FIXED |
+ MAP_ANONYMOUS, -1, 0);
+ igt_assert(map0 != MAP_FAILED);
+ igt_assert(map1 != MAP_FAILED);
+ } else {
+ bo0 = xe_bo_create(fd, 0, vm, bo_size);
+ map0 = xe_bo_map(fd, bo0, bo_size);
+ bo1 = xe_bo_create(fd, 0, vm, bo_size);
+ map1 = xe_bo_map(fd, bo1, bo_size);
+ }
+ memset(map0, 0, bo_size);
+ memset(map1, 0, bo_size);
+
+ engine = xe_engine_create(fd, vm, eci, 0);
+
+ sync[0].handle = syncobj_create(fd, 0);
+ sync[1].handle = syncobj_create(fd, 0);
+
+ /* Do initial binds */
+ bind_size = (page_size * bo_n_pages) / n_binds;
+ for (i = 0; i < n_binds; ++i) {
+ if (flags & MAP_FLAG_USERPTR)
+ xe_vm_bind_userptr_async(fd, vm, 0, addr, addr,
+ bind_size, sync, 1);
+ else
+ xe_vm_bind_async(fd, vm, 0, bo0, i * bind_size,
+ addr, bind_size, sync, 1);
+ addr += bind_size;
+ }
+ addr = base_addr;
+
+ /*
+ * Kick a thread to write the first page continuously to ensure we can't
+ * cause a fault if a rebind occurs during an mmap style VM bind
+ * (partial VMAs replaced).
+ */
+ if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
+ t.fd = fd;
+ t.vm = vm;
+#define PAGE_SIZE 4096
+ t.addr = addr + PAGE_SIZE / 2;
+ t.eci = eci;
+ t.exit = &exit;
+ t.map = map0 + PAGE_SIZE / 2;
+ t.barrier = &barrier;
+ pthread_barrier_init(&barrier, NULL, 2);
+ pthread_create(&t.thread, 0, hammer_thread, &t);
+ pthread_barrier_wait(&barrier);
+ }
+
+ /* Verify we can use every page */
+ for (i = 0; i < n_binds; ++i) {
+ uint64_t batch_offset = (char *)&data->batch - (char *)data;
+ uint64_t batch_addr = addr + batch_offset;
+ uint64_t sdi_offset = (char *)&data->data - (char *)data;
+ uint64_t sdi_addr = addr + sdi_offset;
+ data = map0 + i * page_size;
+
+ b = 0;
+ data->batch[b++] = MI_STORE_DWORD_IMM;
+ data->batch[b++] = sdi_addr;
+ data->batch[b++] = sdi_addr >> 32;
+ data->batch[b++] = 0xc0ffee;
+ data->batch[b++] = MI_BATCH_BUFFER_END;
+ igt_assert(b <= ARRAY_SIZE(data[i].batch));
+
+ sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+ if (i)
+ syncobj_reset(fd, &sync[1].handle, 1);
+ sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+
+ exec.engine_id = engine;
+ exec.address = batch_addr;
+ __xe_exec_assert(fd, &exec);
+
+ addr += page_size;
+ }
+ addr = base_addr;
+
+ /* Bind some of the pages to different BO / userptr */
+ syncobj_reset(fd, &sync[0].handle, 1);
+ sync[0].flags |= DRM_XE_SYNC_SIGNAL;
+ sync[1].flags &= ~DRM_XE_SYNC_SIGNAL;
+ if (flags & MAP_FLAG_USERPTR)
+ xe_vm_bind_userptr_async(fd, vm, 0, addr + bo_size +
+ unbind_n_page_offfset * page_size,
+ addr + unbind_n_page_offfset * page_size,
+ unbind_n_pages * page_size, sync, 2);
+ else
+ xe_vm_bind_async(fd, vm, 0, bo1,
+ unbind_n_page_offfset * page_size,
+ addr + unbind_n_page_offfset * page_size,
+ unbind_n_pages * page_size, sync, 2);
+ igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
+ igt_assert(syncobj_wait(fd, &sync[1].handle, 1, INT64_MAX, 0, NULL));
+
+ /* Verify all pages written */
+ for (i = 0; i < n_binds; ++i) {
+ data = map0 + i * page_size;
+ igt_assert_eq(data->data, 0xc0ffee);
+ }
+ if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
+ memset(map0, 0, PAGE_SIZE / 2);
+ memset(map0 + PAGE_SIZE, 0, bo_size - PAGE_SIZE);
+ } else {
+ memset(map0, 0, bo_size);
+ memset(map1, 0, bo_size);
+ }
+
+ /* Verify we can use every page */
+ for (i = 0; i < n_binds; ++i) {
+ uint64_t batch_offset = (char *)&data->batch - (char *)data;
+ uint64_t batch_addr = addr + batch_offset;
+ uint64_t sdi_offset = (char *)&data->data - (char *)data;
+ uint64_t sdi_addr = addr + sdi_offset;
+
+ data = map0 + i * page_size;
+ b = 0;
+ data->batch[b++] = MI_STORE_DWORD_IMM;
+ data->batch[b++] = sdi_addr;
+ data->batch[b++] = sdi_addr >> 32;
+ data->batch[b++] = 0xc0ffee;
+ data->batch[b++] = MI_BATCH_BUFFER_END;
+ igt_assert(b <= ARRAY_SIZE(data[i].batch));
+
+ data = map1 + i * page_size;
+ b = 0;
+ data->batch[b++] = MI_STORE_DWORD_IMM;
+ data->batch[b++] = sdi_addr;
+ data->batch[b++] = sdi_addr >> 32;
+ data->batch[b++] = 0xc0ffee;
+ data->batch[b++] = MI_BATCH_BUFFER_END;
+ igt_assert(b <= ARRAY_SIZE(data[i].batch));
+
+ sync[0].flags &= ~DRM_XE_SYNC_SIGNAL;
+ if (i)
+ syncobj_reset(fd, &sync[1].handle, 1);
+ sync[1].flags |= DRM_XE_SYNC_SIGNAL;
+
+ exec.engine_id = engine;
+ exec.address = batch_addr;
+ __xe_exec_assert(fd, &exec);
+
+ addr += page_size;
+ }
+ addr = base_addr;
+
+ igt_assert(syncobj_wait(fd, &sync[0].handle, 1, INT64_MAX, 0, NULL));
+ igt_assert(syncobj_wait(fd, &sync[1].handle, 1, INT64_MAX, 0, NULL));
+
+ /* Verify all pages written */
+ for (i = 0; i < n_binds; ++i) {
+ uint32_t result = 0;
+
+ data = map0 + i * page_size;
+ result |= data->data;
+
+ data = map1 + i * page_size;
+ result |= data->data;
+
+ igt_assert_eq(result, 0xc0ffee);
+ }
+
+ if (flags & MAP_FLAG_HAMMER_FIRST_PAGE) {
+ exit = 1;
+ pthread_join(t.thread, NULL);
+ pthread_barrier_destroy(&barrier);
+ }
+
+ syncobj_destroy(fd, sync[0].handle);
+ syncobj_destroy(fd, sync[1].handle);
+ xe_engine_destroy(fd, engine);
+ munmap(map0, bo_size);
+ munmap(map1, bo_size);
+ if (bo0)
+ gem_close(fd, bo0);
+ if (bo1)
+ gem_close(fd, bo1);
+ xe_vm_destroy(fd, vm);
+}
+
igt_main
{
struct drm_xe_engine_class_instance *hwe, *hwe_non_copy = NULL;
@@ -1537,55 +1758,74 @@ igt_main
int unbind_n_page_offfset;
int unbind_n_pages;
unsigned int flags;
- } sections[] = {
+ } munmap_sections[] = {
{ "all", 4, 2, 0, 4, 0 },
{ "one-partial", 4, 1, 1, 2, 0 },
{ "either-side-partial", 4, 2, 1, 2, 0 },
{ "either-side-partial-hammer", 4, 2, 1, 2,
- MUNMAP_FLAG_HAMMER_FIRST_PAGE },
+ MAP_FLAG_HAMMER_FIRST_PAGE },
{ "either-side-full", 4, 4, 1, 2, 0 },
{ "end", 4, 2, 0, 3, 0 },
{ "front", 4, 2, 1, 3, 0 },
{ "many-all", 4 * 8, 2 * 8, 0 * 8, 4 * 8, 0 },
{ "many-either-side-partial", 4 * 8, 2 * 8, 1, 4 * 8 - 2, 0 },
{ "many-either-side-partial-hammer", 4 * 8, 2 * 8, 1, 4 * 8 - 2,
- MUNMAP_FLAG_HAMMER_FIRST_PAGE },
+ MAP_FLAG_HAMMER_FIRST_PAGE },
{ "many-either-side-full", 4 * 8, 4 * 8, 1 * 8, 2 * 8, 0 },
{ "many-end", 4 * 8, 4, 0 * 8, 3 * 8 + 2, 0 },
{ "many-front", 4 * 8, 4, 1 * 8 - 2, 3 * 8 + 2, 0 },
- { "userptr-all", 4, 2, 0, 4, MUNMAP_FLAG_USERPTR },
- { "userptr-one-partial", 4, 1, 1, 2, MUNMAP_FLAG_USERPTR },
+ { "userptr-all", 4, 2, 0, 4, MAP_FLAG_USERPTR },
+ { "userptr-one-partial", 4, 1, 1, 2, MAP_FLAG_USERPTR },
{ "userptr-either-side-partial", 4, 2, 1, 2,
- MUNMAP_FLAG_USERPTR },
+ MAP_FLAG_USERPTR },
{ "userptr-either-side-full", 4, 4, 1, 2,
- MUNMAP_FLAG_USERPTR },
- { "userptr-end", 4, 2, 0, 3, MUNMAP_FLAG_USERPTR },
- { "userptr-front", 4, 2, 1, 3, MUNMAP_FLAG_USERPTR },
+ MAP_FLAG_USERPTR },
+ { "userptr-end", 4, 2, 0, 3, MAP_FLAG_USERPTR },
+ { "userptr-front", 4, 2, 1, 3, MAP_FLAG_USERPTR },
{ "userptr-many-all", 4 * 8, 2 * 8, 0 * 8, 4 * 8,
- MUNMAP_FLAG_USERPTR },
+ MAP_FLAG_USERPTR },
{ "userptr-many-either-side-full", 4 * 8, 4 * 8, 1 * 8, 2 * 8,
- MUNMAP_FLAG_USERPTR },
+ MAP_FLAG_USERPTR },
{ "userptr-many-end", 4 * 8, 4, 0 * 8, 3 * 8 + 2,
- MUNMAP_FLAG_USERPTR },
+ MAP_FLAG_USERPTR },
{ "userptr-many-front", 4 * 8, 4, 1 * 8 - 2, 3 * 8 + 2,
- MUNMAP_FLAG_USERPTR },
+ MAP_FLAG_USERPTR },
{ "userptr-inval-either-side-full", 4, 4, 1, 2,
- MUNMAP_FLAG_USERPTR | MUNMAP_FLAG_INVALIDATE },
- { "userptr-inval-end", 4, 2, 0, 3, MUNMAP_FLAG_USERPTR |
- MUNMAP_FLAG_INVALIDATE },
- { "userptr-inval-front", 4, 2, 1, 3, MUNMAP_FLAG_USERPTR |
- MUNMAP_FLAG_INVALIDATE },
+ MAP_FLAG_USERPTR | MAP_FLAG_INVALIDATE },
+ { "userptr-inval-end", 4, 2, 0, 3, MAP_FLAG_USERPTR |
+ MAP_FLAG_INVALIDATE },
+ { "userptr-inval-front", 4, 2, 1, 3, MAP_FLAG_USERPTR |
+ MAP_FLAG_INVALIDATE },
{ "userptr-inval-many-all", 4 * 8, 2 * 8, 0 * 8, 4 * 8,
- MUNMAP_FLAG_USERPTR | MUNMAP_FLAG_INVALIDATE },
+ MAP_FLAG_USERPTR | MAP_FLAG_INVALIDATE },
{ "userptr-inval-many-either-side-partial", 4 * 8, 2 * 8, 1,
- 4 * 8 - 2, MUNMAP_FLAG_USERPTR |
- MUNMAP_FLAG_INVALIDATE },
+ 4 * 8 - 2, MAP_FLAG_USERPTR |
+ MAP_FLAG_INVALIDATE },
{ "userptr-inval-many-either-side-full", 4 * 8, 4 * 8, 1 * 8,
- 2 * 8, MUNMAP_FLAG_USERPTR | MUNMAP_FLAG_INVALIDATE },
+ 2 * 8, MAP_FLAG_USERPTR | MAP_FLAG_INVALIDATE },
{ "userptr-inval-many-end", 4 * 8, 4, 0 * 8, 3 * 8 + 2,
- MUNMAP_FLAG_USERPTR | MUNMAP_FLAG_INVALIDATE },
+ MAP_FLAG_USERPTR | MAP_FLAG_INVALIDATE },
{ "userptr-inval-many-front", 4 * 8, 4, 1 * 8 - 2, 3 * 8 + 2,
- MUNMAP_FLAG_USERPTR | MUNMAP_FLAG_INVALIDATE },
+ MAP_FLAG_USERPTR | MAP_FLAG_INVALIDATE },
+ { NULL },
+ };
+ const struct section mmap_sections[] = {
+ { "all", 4, 2, 0, 4, 0 },
+ { "one-partial", 4, 1, 1, 2, 0 },
+ { "either-side-partial", 4, 2, 1, 2, 0 },
+ { "either-side-full", 4, 4, 1, 2, 0 },
+ { "either-side-partial-hammer", 4, 2, 1, 2,
+ MAP_FLAG_HAMMER_FIRST_PAGE },
+ { "end", 4, 2, 0, 3, 0 },
+ { "front", 4, 2, 1, 3, 0 },
+ { "many-all", 4 * 8, 2 * 8, 0 * 8, 4 * 8, 0 },
+ { "many-either-side-partial", 4 * 8, 2 * 8, 1, 4 * 8 - 2, 0 },
+ { "many-either-side-partial-hammer", 4 * 8, 2 * 8, 1, 4 * 8 - 2,
+ MAP_FLAG_HAMMER_FIRST_PAGE },
+ { "userptr-all", 4, 2, 0, 4, MAP_FLAG_USERPTR },
+ { "userptr-one-partial", 4, 1, 1, 2, MAP_FLAG_USERPTR },
+ { "userptr-either-side-partial", 4, 2, 1, 2, MAP_FLAG_USERPTR },
+ { "userptr-either-side-full", 4, 4, 1, 2, MAP_FLAG_USERPTR },
{ NULL },
};
@@ -1790,7 +2030,7 @@ igt_main
break;
}
- for (const struct section *s = sections; s->name; s++) {
+ for (const struct section *s = munmap_sections; s->name; s++) {
igt_subtest_f("munmap-style-unbind-%s", s->name) {
igt_require_f(hwe_non_copy,
"Requires non-copy engine to run\n");
@@ -1804,6 +2044,20 @@ igt_main
}
}
+ for (const struct section *s = mmap_sections; s->name; s++) {
+ igt_subtest_f("mmap-style-bind-%s", s->name) {
+ igt_require_f(hwe_non_copy,
+ "Requires non-copy engine to run\n");
+
+ test_mmap_style_bind(fd, hwe_non_copy,
+ s->bo_n_pages,
+ s->n_binds,
+ s->unbind_n_page_offfset,
+ s->unbind_n_pages,
+ s->flags);
+ }
+ }
+
igt_fixture {
xe_device_put(fd);
close(fd);
--
2.34.1
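The core idea of the new test_mmap_style_bind(): after bo0 is bound across the whole range, a second bind maps bo1 (or a second userptr) over part of it, the GPU-VA equivalent of mmap(MAP_FIXED) over an existing mapping, forcing GPUVA to split and replace the overlapping VMAs. Condensed from the function above, with the initial per-chunk bind loop collapsed into one call:

	/* initial bind: bo0 backs [addr, addr + bo_size) */
	xe_vm_bind_async(fd, vm, 0, bo0, 0, addr, bo_size, sync, 1);

	/* map bo1 over part of the range, splitting the original mapping */
	xe_vm_bind_async(fd, vm, 0, bo1,
			 unbind_n_page_offfset * page_size,
			 addr + unbind_n_page_offfset * page_size,
			 unbind_n_pages * page_size, sync, 2);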
* [igt-dev] ✗ Fi.CI.BUILD: failure for Test GPUVA and NULL VM binds
2023-03-15 20:59 [igt-dev] [PATCH 0/3] Test GPUVA and NULL VM binds Matthew Brost
` (2 preceding siblings ...)
2023-03-15 20:59 ` [igt-dev] [PATCH 3/3] xe_vm: MMAP style VM binds section Matthew Brost
@ 2023-03-15 21:33 ` Patchwork
3 siblings, 0 replies; 5+ messages in thread
From: Patchwork @ 2023-03-15 21:33 UTC (permalink / raw)
To: Matthew Brost; +Cc: igt-dev
== Series Details ==
Series: Test GPUVA and NULL VM binds
URL : https://patchwork.freedesktop.org/series/115219/
State : failure
== Summary ==
IGT patchset build failed on latest successful build
9b8c5dbe8cd82163ee198c43b81222d2b9b75fd4 tests/kms: Add missing igt_put_cairo_ctx()
[301/481] Linking target tests/i915_pm_sseu.
[302/481] Linking target tests/i915_power.
[303/481] Linking target tests/i915_query.
[304/481] Linking target tests/i915_selftest.
[305/481] Linking target tests/i915_suspend.
[306/481] Linking target tests/kms_big_fb.
[307/481] Linking target tests/kms_big_joiner.
[308/481] Linking target tests/kms_busy.
[309/481] Linking target tests/kms_ccs.
[310/481] Linking target tests/kms_cdclk.
[311/481] Linking target tests/kms_draw_crc.
[312/481] Linking target tests/kms_fbcon_fbt.
[313/481] Linking target tests/kms_fence_pin_leak.
[314/481] Linking target tests/kms_flip_scaled_crc.
[315/481] Linking target tests/kms_flip_tiling.
[316/481] Linking target tests/kms_frontbuffer_tracking.
[317/481] Linking target tests/kms_legacy_colorkey.
[318/481] Linking target tests/kms_psr2_su.
[319/481] Linking target tests/kms_mmap_write_crc.
[320/481] Linking target tests/kms_psr_stress_test.
[321/481] Linking target tests/kms_pwrite_crc.
[322/481] Linking target tests/sysfs_defaults.
[323/481] Linking target tests/sysfs_heartbeat_interval.
[324/481] Linking target tests/sysfs_preempt_timeout.
[325/481] Linking target tests/sysfs_timeslice_duration.
[326/481] Linking target tests/xe_compute.
[327/481] Linking target tests/xe_dma_buf_sync.
[328/481] Linking target tests/xe_debugfs.
[329/481] Linking target tests/xe_evict.
[330/481] Linking target tests/xe_exec_basic.
[331/481] Linking target tests/xe_exec_balancer.
[332/481] Linking target tests/xe_exec_compute_mode.
[333/481] Linking target tests/xe_exec_fault_mode.
[334/481] Linking target tests/xe_exec_reset.
[335/481] Linking target tests/xe_exec_threads.
[336/481] Linking target tests/xe_guc_pc.
[337/481] Linking target tests/xe_huc_copy.
[338/481] Linking target tests/xe_mmap.
[339/481] Linking target tests/xe_mmio.
[340/481] Linking target tests/xe_pm.
[341/481] Compiling C object 'tests/59830eb@@xe_vm@exe/xe_xe_vm.c.o'.
FAILED: tests/59830eb@@xe_vm@exe/xe_xe_vm.c.o
cc -Itests/59830eb@@xe_vm@exe -Itests -I../../../usr/src/igt-gpu-tools/tests -I../../../usr/src/igt-gpu-tools/include -I../../../usr/src/igt-gpu-tools/include/drm-uapi -I../../../usr/src/igt-gpu-tools/include/linux-uapi -Ilib -I../../../usr/src/igt-gpu-tools/lib -I../../../usr/src/igt-gpu-tools/lib/stubs/syscalls -I. -I../../../usr/src/igt-gpu-tools/ -I/usr/include/cairo -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/uuid -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/libdrm -I/usr/include/x86_64-linux-gnu -I/usr/include/valgrind -I/usr/include -fdiagnostics-color=always -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -std=gnu11 -O2 -g -D_GNU_SOURCE -include config.h -D_FORTIFY_SOURCE=2 -Wbad-function-cast -Wdeclaration-after-statement -Wformat=2 -Wimplicit-fallthrough=0 -Wlogical-op -Wmissing-declarations -Wmissing-format-attribute -Wmissing-noreturn -Wmissing-prototypes -Wnested-externs -Wold-style-definition -Wpointer-arith -Wredundant-decls -Wshadow -Wstrict-prototypes -Wuninitialized -Wunused -Wno-clobbered -Wno-maybe-uninitialized -Wno-missing-field-initializers -Wno-pointer-arith -Wno-address-of-packed-member -Wno-sign-compare -Wno-type-limits -Wno-unused-parameter -Wno-unused-result -Werror=address -Werror=array-bounds -Werror=implicit -Werror=init-self -Werror=int-to-pointer-cast -Werror=main -Werror=missing-braces -Werror=nonnull -Werror=pointer-to-int-cast -Werror=return-type -Werror=sequence-point -Werror=trigraphs -Werror=write-strings -fno-builtin-malloc -fno-builtin-calloc -pthread -MD -MQ 'tests/59830eb@@xe_vm@exe/xe_xe_vm.c.o' -MF 'tests/59830eb@@xe_vm@exe/xe_xe_vm.c.o.d' -o 'tests/59830eb@@xe_vm@exe/xe_xe_vm.c.o' -c ../../../usr/src/igt-gpu-tools/tests/xe/xe_vm.c
../../../usr/src/igt-gpu-tools/tests/xe/xe_vm.c: In function ‘test_mmap_style_bind’:
../../../usr/src/igt-gpu-tools/tests/xe/xe_vm.c:1641:3: error: implicit declaration of function ‘__xe_exec_assert’ [-Werror=implicit-function-declaration]
1641 | __xe_exec_assert(fd, &exec);
| ^~~~~~~~~~~~~~~~
../../../usr/src/igt-gpu-tools/tests/xe/xe_vm.c:1641:3: warning: nested extern declaration of ‘__xe_exec_assert’ [-Wnested-externs]
cc1: some warnings being treated as errors
ninja: build stopped: subcommand failed.