* [PATCH i-g-t] tests/intel/xe_exec_store: fix sync usage
@ 2024-04-08 17:41 Matthew Auld
2024-04-08 17:55 ` Matthew Brost
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Matthew Auld @ 2024-04-08 17:41 UTC (permalink / raw)
To: igt-dev; +Cc: Matthew Brost
When using async binds, it looks like the exec needs an in-fence to
ensure it happens only after the out-fences from the binds have
completed. We therefore need to unset DRM_XE_SYNC_FLAG_SIGNAL after
doing the binds, but before the exec; otherwise the sync is instead
treated as an out-fence and the binds can then happen after the exec,
leading to various failures. In addition, it looks like an async unbind
should be waited on before tearing down the queue/vm that has the bind
engine attached, since the scheduler timeout is immediately set to zero
on destroy, which might then trigger job timeouts. However, it also
looks fine to instead just destroy the object and leave the KMD to
unbind everything itself. Update the various subtests here to conform
to this.

In the case of the persistent subtest it looks simpler to use a sync
vm_bind, since we don't have another sync at hand for the in-fence,
plus we don't seem to need a dedicated bind engine.
Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1270
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
---
tests/intel/xe_exec_store.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/tests/intel/xe_exec_store.c b/tests/intel/xe_exec_store.c
index c57bcb852..728ce826b 100644
--- a/tests/intel/xe_exec_store.c
+++ b/tests/intel/xe_exec_store.c
@@ -102,13 +102,13 @@ static void persistance_batch(struct data *data, uint64_t addr)
*/
static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instance *eci)
{
- struct drm_xe_sync sync = {
- .type = DRM_XE_SYNC_TYPE_SYNCOBJ,
- .flags = DRM_XE_SYNC_FLAG_SIGNAL,
+ struct drm_xe_sync sync[2] = {
+ { .type = DRM_XE_SYNC_TYPE_SYNCOBJ, .flags = DRM_XE_SYNC_FLAG_SIGNAL, },
+ { .type = DRM_XE_SYNC_TYPE_SYNCOBJ, .flags = DRM_XE_SYNC_FLAG_SIGNAL, }
};
struct drm_xe_exec exec = {
.num_batch_buffer = 1,
- .num_syncs = 1,
+ .num_syncs = 2,
.syncs = to_user_pointer(&sync),
};
struct data *data;
@@ -122,7 +122,8 @@ static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instanc
uint32_t bo = 0;
syncobj = syncobj_create(fd, 0);
- sync.handle = syncobj;
+ sync[0].handle = syncobj_create(fd, 0);
+ sync[1].handle = syncobj;
vm = xe_vm_create(fd, 0, 0);
bo_size = sizeof(*data);
@@ -134,7 +135,7 @@ static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instanc
exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
bind_engine = xe_bind_exec_queue_create(fd, vm, 0);
- xe_vm_bind_async(fd, vm, bind_engine, bo, 0, addr, bo_size, &sync, 1);
+ xe_vm_bind_async(fd, vm, bind_engine, bo, 0, addr, bo_size, sync, 1);
data = xe_bo_map(fd, bo, bo_size);
if (inst_type == STORE)
@@ -149,12 +150,14 @@ static void basic_inst(int fd, int inst_type, struct drm_xe_engine_class_instanc
exec.exec_queue_id = exec_queue;
exec.address = data->addr;
- sync.flags &= DRM_XE_SYNC_FLAG_SIGNAL;
+ sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
+ sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
xe_exec(fd, &exec);
igt_assert(syncobj_wait(fd, &syncobj, 1, INT64_MAX, 0, NULL));
igt_assert_eq(data->data, value);
+ syncobj_destroy(fd, sync[0].handle);
syncobj_destroy(fd, syncobj);
munmap(data, bo_size);
gem_close(fd, bo);
@@ -232,7 +235,7 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
batch_map[b++] = value[n];
}
batch_map[b++] = MI_BATCH_BUFFER_END;
- sync[0].flags &= DRM_XE_SYNC_FLAG_SIGNAL;
+ sync[0].flags &= ~DRM_XE_SYNC_FLAG_SIGNAL;
sync[1].flags |= DRM_XE_SYNC_FLAG_SIGNAL;
sync[1].handle = syncobjs;
exec.exec_queue_id = exec_queues;
@@ -250,7 +253,6 @@ static void store_cachelines(int fd, struct drm_xe_engine_class_instance *eci,
for (i = 0; i < count; i++) {
munmap(bo_map[i], bo_size);
- xe_vm_unbind_async(fd, vm, 0, 0, dst_offset[i], bo_size, sync, 1);
gem_close(fd, bo[i]);
}
@@ -300,7 +302,7 @@ static void persistent(int fd)
vram_if_possible(fd, engine->instance.gt_id),
DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
- xe_vm_bind_async(fd, vm, 0, sd_batch, 0, addr, batch_size, &sync, 1);
+ xe_vm_bind_sync(fd, vm, sd_batch, 0, addr, batch_size);
sd_data = xe_bo_map(fd, sd_batch, batch_size);
prt_data = xe_bo_map(fd, prt_batch, batch_size);
--
2.44.0
^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [PATCH i-g-t] tests/intel/xe_exec_store: fix sync usage
  2024-04-08 17:41 [PATCH i-g-t] tests/intel/xe_exec_store: fix sync usage Matthew Auld
@ 2024-04-08 17:55 ` Matthew Brost
  2024-04-08 18:09   ` Matthew Auld
  2024-04-09  2:11 ` ✓ CI.xeBAT: success for " Patchwork
  2024-04-09  2:31 ` ✗ Fi.CI.BAT: failure " Patchwork
  2 siblings, 1 reply; 6+ messages in thread
From: Matthew Brost @ 2024-04-08 17:55 UTC (permalink / raw)
To: Matthew Auld; +Cc: igt-dev

On Mon, Apr 08, 2024 at 06:41:13PM +0100, Matthew Auld wrote:
> If using async binds it looks like an in-fence for the exec is needed to
> ensure the exec happens after the out-fence from the binds are complete.
> [snip]
>
> Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1270
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>

Changes LGTM, but IMO we just delete this test as I'm unsure what
coverage this test is providing.
Anyways:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> [snip patch]

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: [PATCH i-g-t] tests/intel/xe_exec_store: fix sync usage
  2024-04-08 17:55 ` Matthew Brost
@ 2024-04-08 18:09   ` Matthew Auld
  2024-04-10  0:31     ` Matthew Brost
  0 siblings, 1 reply; 6+ messages in thread
From: Matthew Auld @ 2024-04-08 18:09 UTC (permalink / raw)
To: Matthew Brost; +Cc: igt-dev

On 08/04/2024 18:55, Matthew Brost wrote:
> On Mon, Apr 08, 2024 at 06:41:13PM +0100, Matthew Auld wrote:
>> [snip]
>
> Changes LGTM, but IMO we just delete this test as I'm unsure what
> coverage this test is providing.

Do you mean just delete the entire xe_exec_store?
> Anyways:
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
>
>> [snip patch]

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: [PATCH i-g-t] tests/intel/xe_exec_store: fix sync usage
  2024-04-08 18:09   ` Matthew Auld
@ 2024-04-10  0:31     ` Matthew Brost
  0 siblings, 0 replies; 6+ messages in thread
From: Matthew Brost @ 2024-04-10  0:31 UTC (permalink / raw)
To: Matthew Auld; +Cc: igt-dev

On Mon, Apr 08, 2024 at 07:09:05PM +0100, Matthew Auld wrote:
> On 08/04/2024 18:55, Matthew Brost wrote:
> > On Mon, Apr 08, 2024 at 06:41:13PM +0100, Matthew Auld wrote:
> > > [snip]
> >
> > Changes LGTM, but IMO we just delete this test as I'm unsure what
> > coverage this test is providing.
>
> Do you mean just delete the entire xe_exec_store?
>

Yea, not sure how this test ever got added, as basically every test in
the suite does a dword store. Probably along the lines of "let's port
i915 tests to Xe!"
Not seeing a ton of value in this one... Probably above my pay grade to
make those types of decisions though.

Matt

> > Anyways:
> > Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> >
> > > [snip patch]

^ permalink raw reply	[flat|nested] 6+ messages in thread
* ✓ CI.xeBAT: success for tests/intel/xe_exec_store: fix sync usage
  2024-04-08 17:41 [PATCH i-g-t] tests/intel/xe_exec_store: fix sync usage Matthew Auld
  2024-04-08 17:55 ` Matthew Brost
@ 2024-04-09  2:11 ` Patchwork
  2024-04-09  2:31 ` ✗ Fi.CI.BAT: failure " Patchwork
  2 siblings, 0 replies; 6+ messages in thread
From: Patchwork @ 2024-04-09  2:11 UTC (permalink / raw)
To: Matthew Auld; +Cc: igt-dev

[-- Attachment #1: Type: text/plain, Size: 769 bytes --]

== Series Details ==

Series: tests/intel/xe_exec_store: fix sync usage
URL   : https://patchwork.freedesktop.org/series/132170/
State : success

== Summary ==

CI Bug Log - changes from XEIGT_7802_BAT -> XEIGTPW_10990_BAT
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

Participating hosts (5 -> 5)
------------------------------

  No changes in participating hosts

Changes
-------

  No changes found

Build changes
-------------

  * IGT: IGT_7802 -> IGTPW_10990

  IGTPW_10990: 10990
  IGT_7802: 7802
  xe-1057-9c78ecd17c19a10cdb73b12362d6b9bf914105b2: 9c78ecd17c19a10cdb73b12362d6b9bf914105b2

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_10990/index.html

[-- Attachment #2: Type: text/html, Size: 1314 bytes --]

^ permalink raw reply	[flat|nested] 6+ messages in thread
* ✗ Fi.CI.BAT: failure for tests/intel/xe_exec_store: fix sync usage
  2024-04-08 17:41 [PATCH i-g-t] tests/intel/xe_exec_store: fix sync usage Matthew Auld
  2024-04-08 17:55 ` Matthew Brost
  2024-04-09  2:11 ` ✓ CI.xeBAT: success for " Patchwork
@ 2024-04-09  2:31 ` Patchwork
  2 siblings, 0 replies; 6+ messages in thread
From: Patchwork @ 2024-04-09  2:31 UTC (permalink / raw)
To: Matthew Auld; +Cc: igt-dev

[-- Attachment #1: Type: text/plain, Size: 10340 bytes --]

== Series Details ==

Series: tests/intel/xe_exec_store: fix sync usage
URL   : https://patchwork.freedesktop.org/series/132170/
State : failure

== Summary ==

CI Bug Log - changes from IGT_7802 -> IGTPW_10990
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with IGTPW_10990 absolutely need to be
  verified manually.

  If you think the reported changes have nothing to do with the changes
  introduced in IGTPW_10990, please notify your bug team
  (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/index.html

Participating hosts (36 -> 38)
------------------------------

  Additional (5): bat-kbl-2 fi-glk-j4005 fi-elk-e7500 fi-kbl-8809g bat-arls-3
  Missing    (3): bat-dg2-11 bat-mtlp-8 fi-bsw-n3050

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in IGTPW_10990:

### IGT changes ###

#### Possible regressions ####

  * igt@i915_pm_rpm@module-reload:
    - bat-jsl-3:          [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7802/bat-jsl-3/igt@i915_pm_rpm@module-reload.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-jsl-3/igt@i915_pm_rpm@module-reload.html

Known issues
------------

  Here are the changes found in IGTPW_10990 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@debugfs_test@basic-hwmon:
    - bat-arls-3:         NOTRUN -> [SKIP][3] ([i915#9318])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@debugfs_test@basic-hwmon.html

  * igt@fbdev@info:
    - bat-kbl-2:          NOTRUN -> [SKIP][4] ([i915#1849])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-kbl-2/igt@fbdev@info.html

  * igt@gem_huc_copy@huc-copy:
    - fi-glk-j4005:       NOTRUN -> [SKIP][5] ([i915#2190])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/fi-glk-j4005/igt@gem_huc_copy@huc-copy.html

  * igt@gem_lmem_swapping@basic:
    - fi-glk-j4005:       NOTRUN -> [SKIP][6] ([i915#4613]) +3 other tests skip
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/fi-glk-j4005/igt@gem_lmem_swapping@basic.html

  * igt@gem_lmem_swapping@parallel-random-engines:
    - bat-kbl-2:          NOTRUN -> [SKIP][7] +39 other tests skip
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-kbl-2/igt@gem_lmem_swapping@parallel-random-engines.html
    - bat-arls-3:         NOTRUN -> [SKIP][8] ([i915#10213]) +3 other tests skip
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@gem_lmem_swapping@parallel-random-engines.html

  * igt@gem_mmap@basic:
    - bat-arls-3:         NOTRUN -> [SKIP][9] ([i915#4083])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@gem_mmap@basic.html

  * igt@gem_render_tiled_blits@basic:
    - bat-arls-3:         NOTRUN -> [SKIP][10] ([i915#10197] / [i915#10211] / [i915#4079])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@gem_render_tiled_blits@basic.html

  * igt@gem_tiled_blits@basic:
    - bat-arls-3:         NOTRUN -> [SKIP][11] ([i915#10196] / [i915#4077]) +2 other tests skip
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@gem_tiled_blits@basic.html

  * igt@gem_tiled_pread_basic:
    - bat-arls-3:         NOTRUN -> [SKIP][12] ([i915#10206] / [i915#4079])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@gem_tiled_pread_basic.html

  * igt@i915_pm_rps@basic-api:
    - bat-arls-3:         NOTRUN -> [SKIP][13] ([i915#10209])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@i915_pm_rps@basic-api.html

  * igt@kms_addfb_basic@addfb25-x-tiled-legacy:
    - bat-arls-3:         NOTRUN -> [SKIP][14] ([i915#10200]) +9 other tests skip
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@kms_addfb_basic@addfb25-x-tiled-legacy.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic:
    - fi-glk-j4005:       NOTRUN -> [SKIP][15] +10 other tests skip
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/fi-glk-j4005/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
    - bat-arls-3:         NOTRUN -> [SKIP][16] ([i915#10202]) +1 other test skip
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html

  * igt@kms_dsc@dsc-basic:
    - bat-arls-3:         NOTRUN -> [SKIP][17] ([i915#9886])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@kms_dsc@dsc-basic.html

  * igt@kms_force_connector_basic@force-load-detect:
    - bat-arls-3:         NOTRUN -> [SKIP][18] ([i915#10207])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@kms_force_connector_basic@force-load-detect.html

  * igt@kms_pm_backlight@basic-brightness:
    - bat-arls-3:         NOTRUN -> [SKIP][19] ([i915#9812])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@kms_pm_backlight@basic-brightness.html

  * igt@kms_pm_rpm@basic-pci-d3-state:
    - fi-elk-e7500:       NOTRUN -> [SKIP][20] +24 other tests skip
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/fi-elk-e7500/igt@kms_pm_rpm@basic-pci-d3-state.html

  * igt@kms_psr@psr-primary-mmap-gtt:
    - bat-arls-3:         NOTRUN -> [SKIP][21] ([i915#9732]) +3 other tests skip
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@kms_psr@psr-primary-mmap-gtt.html

  * igt@kms_setmode@basic-clone-single-crtc:
    - bat-arls-3:         NOTRUN -> [SKIP][22] ([i915#10208] / [i915#8809])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@kms_setmode@basic-clone-single-crtc.html

  * igt@prime_vgem@basic-fence-mmap:
    - bat-arls-3:         NOTRUN -> [SKIP][23] ([i915#10196] / [i915#3708] / [i915#4077]) +1 other test skip
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@prime_vgem@basic-fence-mmap.html

  * igt@prime_vgem@basic-fence-read:
    - bat-arls-3:         NOTRUN -> [SKIP][24] ([i915#10212] / [i915#3708])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@prime_vgem@basic-fence-read.html

  * igt@prime_vgem@basic-read:
    - bat-arls-3:         NOTRUN -> [SKIP][25] ([i915#10214] / [i915#3708])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@prime_vgem@basic-read.html

  * igt@prime_vgem@basic-write:
    - bat-arls-3:         NOTRUN -> [SKIP][26] ([i915#10216] / [i915#3708])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-arls-3/igt@prime_vgem@basic-write.html

  * igt@runner@aborted:
    - fi-kbl-8809g:       NOTRUN -> [FAIL][27] ([i915#10689])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/fi-kbl-8809g/igt@runner@aborted.html

#### Possible fixes ####

  * igt@i915_selftest@live@active:
    - fi-bsw-nick:        [DMESG-FAIL][28] -> [PASS][29]
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7802/fi-bsw-nick/igt@i915_selftest@live@active.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/fi-bsw-nick/igt@i915_selftest@live@active.html

  * igt@i915_selftest@live@guc_multi_lrc:
    - bat-dg2-8:          [ABORT][30] ([i915#10366]) -> [PASS][31]
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7802/bat-dg2-8/igt@i915_selftest@live@guc_multi_lrc.html
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-dg2-8/igt@i915_selftest@live@guc_multi_lrc.html

  * igt@i915_selftest@live@late_gt_pm:
    - bat-dg2-14:         [ABORT][32] ([i915#10366] / [i915#10461]) -> [PASS][33]
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_7802/bat-dg2-14/igt@i915_selftest@live@late_gt_pm.html
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/bat-dg2-14/igt@i915_selftest@live@late_gt_pm.html

  [i915#10196]: https://gitlab.freedesktop.org/drm/intel/issues/10196
  [i915#10197]: https://gitlab.freedesktop.org/drm/intel/issues/10197
  [i915#10200]: https://gitlab.freedesktop.org/drm/intel/issues/10200
  [i915#10202]: https://gitlab.freedesktop.org/drm/intel/issues/10202
  [i915#10206]: https://gitlab.freedesktop.org/drm/intel/issues/10206
  [i915#10207]: https://gitlab.freedesktop.org/drm/intel/issues/10207
  [i915#10208]: https://gitlab.freedesktop.org/drm/intel/issues/10208
  [i915#10209]: https://gitlab.freedesktop.org/drm/intel/issues/10209
  [i915#10211]: https://gitlab.freedesktop.org/drm/intel/issues/10211
  [i915#10212]: https://gitlab.freedesktop.org/drm/intel/issues/10212
  [i915#10213]: https://gitlab.freedesktop.org/drm/intel/issues/10213
  [i915#10214]: https://gitlab.freedesktop.org/drm/intel/issues/10214
  [i915#10216]: https://gitlab.freedesktop.org/drm/intel/issues/10216
  [i915#10366]: https://gitlab.freedesktop.org/drm/intel/issues/10366
  [i915#10461]: https://gitlab.freedesktop.org/drm/intel/issues/10461
  [i915#10689]: https://gitlab.freedesktop.org/drm/intel/issues/10689
  [i915#1849]: https://gitlab.freedesktop.org/drm/intel/issues/1849
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4079]: https://gitlab.freedesktop.org/drm/intel/issues/4079
  [i915#4083]: https://gitlab.freedesktop.org/drm/intel/issues/4083
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#8809]: https://gitlab.freedesktop.org/drm/intel/issues/8809
  [i915#9318]: https://gitlab.freedesktop.org/drm/intel/issues/9318
  [i915#9732]: https://gitlab.freedesktop.org/drm/intel/issues/9732
  [i915#9812]: https://gitlab.freedesktop.org/drm/intel/issues/9812
  [i915#9886]: https://gitlab.freedesktop.org/drm/intel/issues/9886

Build changes
-------------

  * CI: CI-20190529 -> None
  * IGT: IGT_7802 -> IGTPW_10990

  CI-20190529: 20190529
  CI_DRM_14545: 9c78ecd17c19a10cdb73b12362d6b9bf914105b2 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGTPW_10990: 10990
  IGT_7802: 7802

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_10990/index.html

[-- Attachment #2: Type: text/html, Size: 12092 bytes --]

^ permalink raw reply	[flat|nested] 6+ messages in thread
end of thread, other threads:[~2024-04-10  0:33 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2024-04-08 17:41 [PATCH i-g-t] tests/intel/xe_exec_store: fix sync usage Matthew Auld
2024-04-08 17:55 ` Matthew Brost
2024-04-08 18:09   ` Matthew Auld
2024-04-10  0:31     ` Matthew Brost
2024-04-09  2:11 ` ✓ CI.xeBAT: success for " Patchwork
2024-04-09  2:31 ` ✗ Fi.CI.BAT: failure " Patchwork